Are these critical points of a parameter-dependent minimization problem global minimizers?

Problem setting: Let $f\colon \mathbb{R}^n \times Y \rightarrow \mathbb{R}$, $(x,y) \mapsto f(x,y)$, be a smooth function, where $Y$ is a Banach space.



Assume that $0 \in Y$ is the unique minimizer of $f(0,\cdot)$ over $Y$, i.e. $f(0,0) = \min_{y \in Y} f(0,y)$.
Also assume that a smooth function $g\colon \mathbb{R}^n \rightarrow Y$ is known such that $g(0) = 0$ and such that $g(x)$ is always a critical point of $f(x,\cdot)$, i.e., $\frac{\partial}{\partial y} f(x,g(x)) = 0$.



In words: we have a parameter-dependent family of minimization problems $\min_{y \in Y} f(x,y)$, we know the unique global minimizer of one member of this family, and we also have a trajectory $g$ of critical points that passes through this known global minimizer.



My question: May I conclude that there exists a neighborhood of $0 \in \mathbb{R}^n$ on which $g$ again gives a trajectory of global minimizers, i.e. such that $\min_{y \in Y} f(x,y) = f(x,g(x))$ for all sufficiently small $x$? If not, why does this fail, and under what additional conditions could such a property be obtained?
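To make the definitions concrete, here is a minimal convex toy instance of my own choosing ($n = 1$, $Y = \mathbb{R}$, $f(x,y) = (y-x)^2$, $g(x) = x$) in which the desired property does hold. This only illustrates the setting; it says nothing about the general case.

```python
import numpy as np

# Toy instance (my choice): f(x, y) = (y - x)^2 with trajectory g(x) = x.
def f(x, y):
    return (y - x) ** 2

def g(x):
    return x

def dfdy(x, y):  # partial derivative of f with respect to y
    return 2.0 * (y - x)

# g(0) = 0 and (0, 0) minimizes f(0, .):
assert g(0.0) == 0.0 and f(0.0, 0.0) == 0.0

# g(x) is a critical point of f(x, .) for every x, and for this convex toy
# it is even the global minimizer, checked on a grid of y-values:
for x in np.linspace(-1.0, 1.0, 11):
    assert dfdy(x, g(x)) == 0.0
    ys = np.linspace(-2.0, 2.0, 4001)
    assert f(x, g(x)) <= f(x, ys).min() + 1e-12
```

Here convexity of $f(x,\cdot)$ does all the work; the interesting case in the question is precisely when no such convexity is available.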



Any kind of feedback is much appreciated. Thanks in advance!







(Maybe useful) My thoughts so far: I wanted to try a proof by contradiction. If no such neighborhood exists, then I can find $x$ arbitrarily close to $0$ (with $x \neq 0$) and elements $h(x)$ such that $f(x,h(x)) < f(x,g(x))$. As a consequence, along such a sequence $x \to 0$ we have $\limsup_{x\to 0} f(x,h(x)) \leq f(0,0)$. However, $h$ need not be continuous and in general not even bounded. This is where I am stuck: I do not know how to derive a contradiction from this.



If $h$ were continuous with a continuous extension to $0$, then I could easily conclude $h(0) \neq 0$ and $f(0,h(0)) \leq f(0,0)$, in contradiction to the uniqueness of the minimizer $(0,0)$.



If $h$ were at least bounded as $x \to 0$ and $Y$ were finite-dimensional, then by the Bolzano–Weierstrass theorem I could extract a convergent subsequence $h(x_k) \to h^\ast \neq 0$ with $x_k \to 0$ as $k \to \infty$. Consequently, continuity of $f$ would yield the estimate $f(0,h^\ast) \leq \limsup_{k\to\infty} f(x_k,h(x_k)) \leq f(0,0)$, which again gives a contradiction. (Right?)
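The subsequence mechanism in this step can be illustrated numerically on a toy family (again $f(x,y) = (y-x)^2$, my own choice): a bounded but non-convergent competitor sequence has a convergent subsequence, and along it continuity of $f$ carries the function values to the limit point.

```python
import numpy as np

# Toy smooth f (my choice, for illustration only):
def f(x, y):
    return (y - x) ** 2

# Parameter sequence x_k = 1/k -> 0 and a bounded competitor sequence
# h(x_k) = (-1)^k that oscillates between two cluster points, -1 and +1.
x = np.array([1.0 / k for k in range(1, 201)])
h = np.array([(-1.0) ** k for k in range(1, 201)])

# Bolzano-Weierstrass in practice: the subsequence with even k is
# constant, hence converges to h_star = 1.
sub = np.arange(1, 200, 2)  # array indices i with k = i + 1 even
h_star = 1.0
assert np.allclose(h[sub], h_star)

# Continuity of f transfers the values along the subsequence:
# f(x_k, h(x_k)) -> f(0, h_star) as k -> infinity.
vals = f(x[sub], h[sub])
assert abs(vals[-1] - f(0.0, h_star)) < 2e-2
```

Note that the extracted limit $h^\ast$ depends on which subsequence is chosen (the odd-$k$ subsequence here converges to $-1$ instead), which is exactly why the argument only yields *some* cluster point.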



If $h$ were bounded as $x \to 0$ but $Y$ were infinite-dimensional, then I could at least extract a weakly convergent subsequence $h(x_k) \rightharpoonup h^\ast$ (assuming $Y$ is reflexive, so that bounded sequences have weakly convergent subsequences). But I am not sure whether I could still conclude $h^\ast \neq 0$, or whether the $\limsup$ property from the finite-dimensional case survives. Maybe somebody knows more about this?



So if the given requirements on $f$ are not enough to prove the desired property, I could imagine that it becomes possible once a coercivity assumption is added. But even then I am unsure about the infinite-dimensional case.







asked Jul 31 at 18:12 by Murp
























