Showing that for real $0<x_i$, $0<q$ with $x_1\cdots x_n=q^n$ it holds that $(1+x_1)\cdots(1+x_n)\geq(1+q)^n$
Using the method of Lagrange multipliers, I am looking to minimise the function
$f(x_1,\dots,x_n)=\prod_{i=1}^n(1+x_i)$ with the side condition $g(x_1,\dots,x_n)=\prod_{i=1}^n x_i-q^n=0$.
My goal is to show that $f$ is minimal when $x_i=q$ for all $i$.
I have
$$\nabla g=\left(\prod_{i=1,\, i\neq j}^n x_i\right)_{1\leq j\leq n},$$
$$\nabla f=\left(\prod_{i=1,\, i\neq j}^n (1+x_i)\right)_{1\leq j\leq n}.$$
Now I know that $\nabla f = \lambda\,\nabla g$, so for all $j$:
$$\prod_{i=1,\, i\neq j}^n(1+x_i)=\lambda\prod_{i=1,\, i\neq j}^n x_i,$$
and since all $x_i>0$,
$$\prod_{i=1,\, i\neq j}^n\left(1+\frac{1}{x_i}\right)=\lambda.$$
Since these products are equal for all $j$, comparing the equations for two indices $j\neq k$ gives $1+\frac{1}{x_j}=1+\frac{1}{x_k}$, so $x_1=x_2=\dots=x_n$, and thus $x_1\cdots x_n=x_1^n=q^n \iff x_i=q$ for all $i$. So far so good.
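For completeness, here is a minimal numerical cross-check of this stationarity conclusion (a sketch, assuming NumPy and SciPy; the values of $n$, $q$ and the SLSQP solver are illustrative choices, not part of the question):

```python
# Numerically minimise f(x) = prod(1 + x_i) subject to prod(x_i) = q^n and
# check that the solver lands at x_i = q for all i (a sketch, not a proof).
import numpy as np
from scipy.optimize import minimize

n, q = 4, 1.7

f = lambda x: np.prod(1.0 + x)
g = lambda x: np.prod(x) - q**n          # equality constraint g(x) = 0

res = minimize(f, x0=np.full(n, 1.0), method="SLSQP",
               bounds=[(1e-6, None)] * n,
               constraints=[{"type": "eq", "fun": g}])
print(res.x)                              # numerically close to [q, q, q, q]
print(res.fun, (1.0 + q) ** n)            # both close to (1 + q)^n
```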
My problem is showing that this is a minimum: the entries of the Hessian are
$$\partial_{x_i}^2 f=0,$$
$$\partial_{x_j}\partial_{x_k} f=\prod_{i=1,\, i\neq j,\, i\neq k}^n(1+x_i)>0 \qquad (j\neq k).$$
So I have a matrix with zero diagonal and positive entries everywhere else. This matrix is not positive definite, as is easily seen in the $2\times 2$ case.
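To see this observation concretely, here is a small numerical sketch (assuming NumPy; the values of $n$ and $q$ are arbitrary) of the unconstrained Hessian of $f$ at $x_i=q$, which has one positive and $n-1$ negative eigenvalues and is therefore indefinite:

```python
# Build the unconstrained Hessian of f(x) = prod(1 + x_i) at x_i = q:
# zero diagonal, (1 + q)^(n-2) everywhere else, then inspect its eigenvalues.
import numpy as np

n, q = 5, 2.0
H = np.full((n, n), (1.0 + q) ** (n - 2))
np.fill_diagonal(H, 0.0)

print(np.linalg.eigvalsh(H))  # one positive eigenvalue, n - 1 negative ones
```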
Did I make any mistakes?
How do I argue that it is a minimum?
analysis optimization lagrange-multiplier
You could also try Huygens' inequality, stating that for $x_i\geq 0$ $$(1+x_1)(1+x_2)\cdots(1+x_n)\geq\left(1+\left(x_1x_2\cdots x_n\right)^{\frac1n}\right)^n$$
– rtybase, Jul 23 at 21:49
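As a quick empirical sanity check of the inequality in the comment above (a sketch, assuming NumPy; random sampling, not a proof):

```python
# Sample random positive x_i and verify Huygens' inequality numerically.
import numpy as np

rng = np.random.default_rng(0)
for _ in range(10_000):
    n = rng.integers(2, 8)
    x = rng.uniform(0.01, 10.0, size=n)
    lhs = np.prod(1.0 + x)
    rhs = (1.0 + np.prod(x) ** (1.0 / n)) ** n
    assert lhs >= rhs * (1.0 - 1e-12)   # small relative slack for rounding
print("Huygens' inequality held in all sampled cases.")
```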
Even if the Hessian signalled a local minimum at $x_i=q$ $(i\in[n])$, you could not be sure that it is actually the global minimum.
– Christian Blatter, Jul 24 at 9:09
3 Answers
Considering the Lagrangian
$$
L(x,\lambda) = \prod_{k=1}^n(x_k+1)-(q+1)^n+\lambda\left(\prod_{k=1}^n x_k - q^n\right)
$$
we have the stationarity conditions
$$
L_{x_k} = \prod_{j\ne k}(x_j+1)+\lambda\prod_{j\ne k} x_j = 0
$$
or
$$
\lambda = -\frac{\prod_{j\ne k}(x_j+1)}{\prod_{j\ne k} x_j},
$$
hence, comparing the expressions for two indices $i\ne k$,
$$
-\frac{\prod_{j\ne k}(x_j+1)}{\prod_{j\ne k} x_j} = -\frac{\prod_{j\ne i}(x_j+1)}{\prod_{j\ne i} x_j}
\ \Rightarrow\ \frac{x_k}{x_k+1} = \frac{x_i}{x_i+1}
\ \Rightarrow\ x_1=x_2=\cdots=x_n = q.
$$
This stationary point is a saddle point for $\prod_{k=1}^n(x_k+1)-(q+1)^n$, as can be checked easily by analysing the behaviour of
$$
(q+\epsilon+1)^n-(q+1)^n
$$
for $\epsilon \in [-1,1]$.
This is not a problem, because the qualification should be done with the Hessian of
$$
F(x)=\left(\prod_{k=1}^n(x_k+1)-(q+1)^n\right)\circ \left(\prod_{k=1}^n x_k - q^n\right).
$$
I leave it here in the hope of finding a suitable expression for such a Hessian.
NOTE
For the case $n = 3$ we have
$$
\left((x+1)(y+1)(z+1)-(q+1)^3\right)\circ\left(z=\frac{q^3}{x y}\right) = (x+1)(y+1)\left(\frac{q^3}{x y}+1\right)-(q+1)^3
$$
with Hessian $H$ evaluated at $x=y=z=q$
$$
H = \left(
\begin{array}{cc}
2+\frac{2}{q} & 1+\frac{1}{q} \\
1+\frac{1}{q} & 2+\frac{2}{q}
\end{array}
\right)
$$
with eigenvalues
$$
\left\{3\left(\frac{1}{q}+1\right),\ \frac{1}{q}+1\right\}
$$
characterizing a minimum.
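The $n=3$ computation above can also be checked numerically; this is a sketch assuming NumPy, with the Hessian approximated by central finite differences at an arbitrary $q$:

```python
# Finite-difference Hessian of F(x, y) = (x+1)(y+1)(q^3/(x y) + 1) - (q+1)^3
# at x = y = q, compared with the closed-form eigenvalues 1 + 1/q and 3(1 + 1/q).
import numpy as np

q, h = 2.0, 1e-4

def F(x, y):
    return (x + 1) * (y + 1) * (q**3 / (x * y) + 1) - (q + 1) ** 3

fxx = (F(q + h, q) - 2 * F(q, q) + F(q - h, q)) / h**2
fyy = (F(q, q + h) - 2 * F(q, q) + F(q, q - h)) / h**2
fxy = (F(q + h, q + h) - F(q + h, q - h) - F(q - h, q + h) + F(q - h, q - h)) / (4 * h**2)

H = np.array([[fxx, fxy], [fxy, fyy]])
print(np.linalg.eigvalsh(H))            # approximately [1 + 1/q, 3 (1 + 1/q)]
print([1 + 1 / q, 3 * (1 + 1 / q)])
```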
Why is the Hessian of the composite function the important one for this problem?
– B.Swan, Jul 24 at 14:52
I did not know that one has to consider the bordered Hessian!
– B.Swan, Jul 24 at 15:01
Proof
Apply Carlson's inequality, which can also be viewed as a generalised form of the Cauchy–Schwarz inequality:
$$(x_1+y_1+\cdots)(x_2+y_2+\cdots)\cdots(x_n+y_n+\cdots)\geq
\left[\left(\prod_{i=1}^n x_i\right)^{\frac1n}+\left(\prod_{i=1}^n y_i\right)^{\frac1n}+\cdots\right]^n,$$
where $x_i,y_i,\dots\geq 0$ for $i=1,2,\dots,n$.
Thus, $$(1+x_1)\cdots(1+x_n)\geq \left[\left(\prod_{i=1}^n 1\right)^{\frac1n}+\left(\prod_{i=1}^n x_i\right)^{\frac1n}\right]^n=\left[1+(q^n)^{\frac1n}\right]^n=(1+q)^n.$$
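As a numerical sanity check of Carlson's inequality as stated above (a sketch, assuming NumPy; three random nonnegative sequences are used purely for illustration):

```python
# Verify prod_i (a_i + b_i + c_i) >= ((prod a)^(1/n) + (prod b)^(1/n) + (prod c)^(1/n))^n
# on randomly sampled nonnegative sequences.
import numpy as np

rng = np.random.default_rng(1)
for _ in range(10_000):
    n = rng.integers(2, 7)
    a, b, c = rng.uniform(0.0, 5.0, size=(3, n))
    lhs = np.prod(a + b + c)
    rhs = (np.prod(a) ** (1 / n) + np.prod(b) ** (1 / n) + np.prod(c) ** (1 / n)) ** n
    assert lhs >= rhs * (1.0 - 1e-12)   # small relative slack for rounding
print("Carlson's inequality held in all sampled cases.")
```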
The associated bordered Hessian matrix is
$$
\left[
\begin{array}{c|cccc}
0 & (1+q)^{n-1} & (1+q)^{n-1} & (1+q)^{n-1} & \dots \\ \hline
(1+q)^{n-1} & 0 & (1+q)^{n-2} & (1+q)^{n-2} & \dots \\
(1+q)^{n-1} & (1+q)^{n-2} & 0 & (1+q)^{n-2} & \dots \\
(1+q)^{n-1} & (1+q)^{n-2} & (1+q)^{n-2} & 0 & \dots \\
\vdots & \vdots & \vdots & \vdots & \ddots
\end{array}
\right]
$$
Up to a multiplicative constant, the above has the shape
$$
A =
\left[
\begin{array}{c|ccccc}
0 & a & a & a & \dots & a \\ \hline
a & 0 & 1 & 1 & \dots & 1 \\
a & 1 & 0 & 1 & \dots & 1 \\
a & 1 & 1 & 0 & \dots & 1 \\
\vdots & \vdots & \vdots & \vdots & \ddots & \vdots \\
a & 1 & 1 & 1 & \dots & 0
\end{array}
\right].
$$
It is an $(n+1)\times(n+1)$ matrix, and its characteristic polynomial is
$$
P_A(x) = (x+1)^{n-1}\bigl(x^2-(n-1)x-na^2\bigr).
$$
So exactly one eigenvalue is positive.
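This factorisation and the sign claim can be confirmed numerically (a sketch, assuming NumPy; the pairs $(n,a)$ below are arbitrary):

```python
# Build A for a few (n, a), and check that its spectrum is {-1 (n-1 times)}
# together with the two roots of x^2 - (n-1) x - n a^2, exactly one of them positive.
import numpy as np

for n, a in [(3, 1.5), (5, 3.0), (8, 0.7)]:
    A = np.ones((n + 1, n + 1))
    A[1:, 1:] -= np.eye(n)            # inner block: ones off-diagonal, zeros on diagonal
    A[0, :] = A[:, 0] = a             # border row and column
    A[0, 0] = 0.0

    disc = (n - 1) ** 2 + 4 * n * a**2
    quad_roots = [((n - 1) - np.sqrt(disc)) / 2, ((n - 1) + np.sqrt(disc)) / 2]
    expected = np.sort(np.concatenate([np.full(n - 1, -1.0), quad_roots]))

    eig = np.sort(np.linalg.eigvalsh(A))
    assert np.allclose(eig, expected)
    assert np.sum(eig > 0) == 1       # exactly one positive eigenvalue
print("Characteristic polynomial factorisation confirmed numerically.")
```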
Let $C$ be the matrix
$$
C=
\left[\begin{array}{c|ccccc}
1 & 0 & 0 & 0 & \dots & 0 \\ \hline
0 & 1 & 0 & 0 & \ddots & 0 \\
0 & -1 & 1 & 0 & \ddots & 0 \\
0 & 0 & -1 & 1 & \ddots & 0 \\
\vdots & \ddots & \ddots & \ddots & \ddots & \vdots \\
0 & 0 & 0 & 0 & -1 & 1
\end{array}\right].
$$
($C$ has ones on the diagonal, "minus ones" immediately below the diagonal in the $n\times n$ block, and zeros elsewhere.) Then conjugation by $C$ gives
$$
CAC^{-1}
=
\left[\begin{array}{c|ccccc}
0 & na & (n-1)a & (n-2)a & \dots & a \\ \hline
a & n-1 & (n-1) & (n-2) & \dots & 1 \\ \hline
0 & 0 & -1 & 0 & \dots & 0 \\
0 & 0 & 0 & -1 & \dots & 0 \\
\vdots & \vdots & \vdots & \vdots & \ddots & \vdots \\
0 & 0 & 0 & 0 & \dots & -1
\end{array}\right].
$$
As a quadratic form, $A$ changes under the base change by $C$ into
$$
CAC^{t} =
\left[
\begin{array}{cc|ccccc}
0 & a & 0 & 0 & 0 & \dots & 0 \\
a & 0 & 1 & 0 & 0 & \ddots & 0 \\ \hline
0 & 1 & -2 & 1 & 0 & \ddots & 0 \\
0 & 0 & 1 & -2 & 1 & \ddots & 0 \\
0 & 0 & 0 & 1 & -2 & \ddots & 0 \\
\vdots & \ddots & \ddots & \ddots & \ddots & \ddots & \vdots \\
0 & 0 & 0 & 0 & 0 & \dots & -2
\end{array}
\right].
$$
If Lagrange multipliers are used, the optimisation problem does not concern the "full function $f+\lambda g$", and the Hessian matrix of $f+\lambda g$ need only fulfil sufficient positivity/negativity conditions on a "tangential null space". The idea is roughly to have a "conditional Taylor polynomial of order two in the $x$-variables" that still allows one to deduce a local extremal value.
In our case the condition is $x_1x_2\cdots x_n=q^n$. Formally, writing $x_1=q+h_1+O(h_1^2)$ and analogously for the other variables, we get the relation
$$
\prod_{1\le k\le n}\bigl(q+h_k+O(h_k^2)\bigr)=q^n,
$$
so formally $q^{n-1}(h_1+h_2+\dots+h_n)+O(|h|^2)=0$.
In the given situation, the matrix $C$ provides an $(n-1)\times(n-1)$ block which is built from vectors in the null space. Eventually, one can work to convert this beginning into a proof.
The simplest solution to the minimum problem is to observe that if two component values in $x=(x_1,x_2,x_3,\dots)$ are not equal, say $x_1\ne x_2$ without loss of generality, then we can redistribute $x_1x_2=c^2$ in a new point $x_c:=(c,c,x_3,\dots)$ and we get a smaller value, because
$$
(1+x_1)(1+x_2)-(1+c)^2=x_1+x_2-2c=(\sqrt{x_1}-\sqrt{x_2})^2>0.
$$
(Because of this simpler argument I did not insist on completing the proof using Lagrange multipliers. Please search the net for the bordered matrix to see many explicit examples.)
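The redistribution step above is also easy to watch numerically; the following sketch (assuming NumPy; the starting point is random) repeatedly replaces the smallest and largest coordinates by their geometric mean, which preserves $x_1\cdots x_n=q^n$ and drives $\prod_i(1+x_i)$ down towards $(1+q)^n$:

```python
# Repeated "redistribution" of two coordinates: the constraint prod(x) = q^n is
# preserved while prod(1 + x) decreases monotonically towards (1 + q)^n.
import numpy as np

rng = np.random.default_rng(2)
n = 6
x = rng.uniform(0.1, 5.0, size=n)
q = np.prod(x) ** (1.0 / n)               # so that x_1 ... x_n = q^n by construction

values = [np.prod(1.0 + x)]
for _ in range(200):
    i, j = np.argmin(x), np.argmax(x)
    c = np.sqrt(x[i] * x[j])               # redistribute x_i x_j = c^2
    x[i] = x[j] = c
    values.append(np.prod(1.0 + x))

assert all(u >= v - 1e-9 for u, v in zip(values, values[1:]))  # non-increasing
print(values[0], "->", values[-1], "vs (1 + q)^n =", (1.0 + q) ** n)
```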