Are these critical points of a parameter-dependent minimization problem global minimizers?
Problem setting: Suppose a smooth function $f\colon \mathbb{R}^n \times Y \rightarrow \mathbb{R}$, $(x,y) \mapsto f(x,y)$, where $Y$ is a Banach space.
Assume that $0 \in Y$ is the unique minimizer of $f(0,\cdot)$ over $Y$, i.e. $f(0,0) = \min_{y \in Y} f(0,y)$.
Also assume that a smooth function $g\colon \mathbb{R}^n \rightarrow Y$ is known such that $g(0) = 0$ and such that $g(x)$ is always a critical point of $f(x,\cdot)$, i.e., $\frac{\partial}{\partial y} f(x,g(x)) = 0$.
In words: we have a parameter-dependent family of minimization problems $\min_{y \in Y} f(x,y)$, we know the unique global minimizer of one member of this family, and we have a trajectory $g$ of critical points that passes through this known global minimizer.
My question: May I conclude that there exists a neighborhood of $0 \in \mathbb{R}^n$ on which $g$ again gives a trajectory of global minimizers, i.e. such that $\min_{y \in Y} f(x,y) = f(x,g(x))$ for $x$ sufficiently small? If not, why does it fail, and under what conditions could we obtain such a property?
Any kind of feedback is much appreciated. Thanks in advance!
(Maybe useful) My thoughts so far: I wanted to try a proof by contradiction. If no such neighborhood exists, then for every $x$ in a neighborhood of $0 \in \mathbb{R}^n$ (except for $x = 0$ itself) I get an element $h(x)$ with $f(x,h(x)) < f(x,g(x))$. As a consequence, we have $\limsup_{x\to 0} f(x,h(x)) \leq f(0,0)$. However, $h$ need not be continuous and in general not even bounded. This is where I am stuck, and I do not know how to turn this into a contradiction.
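To convince myself that the unboundedness of $h$ is a real obstruction, I tried the following toy family (with $n = 1$, $Y = \mathbb{R}$, $g \equiv 0$); this is only a sketch of the phenomenon, not my actual problem:
$$f(x,y) = y^2 - x y^3, \qquad \frac{\partial}{\partial y} f(x,0) = 0 \text{ for all } x, \qquad f(0,y) = y^2.$$
Here $y = 0$ is the unique global minimizer at $x = 0$, but for every $x \neq 0$ one has $f(x, 2/x) = 4/x^2 - 8/x^2 = -4/x^2 < 0 = f(x,0)$, so $g$ fails to stay globally minimizing, and the better points $h(x) = 2/x$ escape to infinity as $x \to 0$.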
If $h$ were continuous with a continuous extension to $0$, then passing to the limit would give $f(0,h(0)) \leq f(0,0)$; if moreover $h(0) \neq 0$, this would contradict the uniqueness of the minimizer $(0,0)$. (Though I do not see how to rule out $h(0) = 0$.)
If $h$ were at least bounded for $x \to 0$ and $Y$ were finite-dimensional, then by the Bolzano–Weierstrass theorem I could extract a convergent subsequence $h(x_k) \to h^\ast$ with $x_k \to 0$ for $k\to \infty$. Continuity of $f$ would then yield the estimate $f(0,h^\ast) \leq \limsup_{k\to\infty} f(x_k,h(x_k)) \leq f(0,0)$, which gives a contradiction provided $h^\ast \neq 0$. (Right? And can I actually guarantee $h^\ast \neq 0$?)
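On second thought, even boundedness may not rescue the argument, because I do not see how to rule out $h^\ast = 0$, in which case no contradiction arises. A toy family (again $n = 1$, $Y = \mathbb{R}$, $g \equiv 0$; not my actual problem) where exactly this happens:
$$f(x,y) = y^4 - x^2 y^2 = y^2\,(y^2 - x^2),$$
so $f(0,y) = y^4$ has the unique global minimizer $y = 0$ and $\frac{\partial}{\partial y} f(x,0) = 0$ for all $x$, yet $f(x, x/\sqrt{2}) = -x^4/4 < 0 = f(x,0)$ for every $x \neq 0$. Here the better points $h(x) = x/\sqrt{2}$ stay bounded and converge to $0$, so the subsequence limit is $h^\ast = 0$ and there is no contradiction with uniqueness.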
If $h$ were bounded for $x\to 0$ but $Y$ were infinite-dimensional, then I could at least extract a weakly convergent subsequence $h(x_k) \rightharpoonup h^\ast$ (provided $Y$ is, say, reflexive). But I am not sure whether I could conclude $h^\ast \neq 0$ and whether the $\limsup$ property from the finite-dimensional case survives, since $f$ is only assumed continuous with respect to the norm topology. Maybe somebody knows more about this?
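If I remember correctly, the property that usually replaces continuity in the weak setting is weak sequential lower semicontinuity of $f(0,\cdot)$, which holds for instance when $f(0,\cdot)$ is convex and norm-continuous. A sketch of how the estimate might then survive, assuming additionally that $f(x,\cdot) \to f(0,\cdot)$ uniformly on bounded sets as $x \to 0$ (an assumption not contained in my problem setting):
$$f(0,h^\ast) \;\leq\; \liminf_{k\to\infty} f(0,h(x_k)) \;=\; \liminf_{k\to\infty}\Big[\, f(x_k,h(x_k)) + \big(f(0,h(x_k)) - f(x_k,h(x_k))\big)\Big] \;\leq\; f(0,0).$$
Even so, the issue of guaranteeing $h^\ast \neq 0$ from the finite-dimensional case remains.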
So, if the given requirements on $f$ are not enough to prove the desired property, I could imagine that it might become possible if a coercivity assumption is added. But even then I am still not sure about the infinite-dimensional case.
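A sketch of why I believe coercivity would at least give boundedness of $h$, assuming the coercivity is uniform for small $x$ (an assumption beyond my problem setting): since $x \mapsto f(x,g(x))$ is continuous,
$$f(x,h(x)) \;<\; f(x,g(x)) \;\leq\; \sup_{\|x\| \leq \delta} f(x,g(x)) \;=:\; C \;<\; \infty \qquad \text{for } \|x\| \leq \delta,$$
so $h(x)$ lies in the sublevel set $\{ y \in Y : f(x,y) \leq C \}$; if $f(x,y) \geq \alpha(\|y\|_Y)$ for all $\|x\| \leq \delta$ with some $\alpha(t) \to \infty$ as $t \to \infty$, these sublevel sets are bounded uniformly in $x$. In view of the bounded example above, however, boundedness of $h$ alone would still not finish the argument.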
functional-analysis optimization
asked Jul 31 at 18:12
Murp