Canonical equations from action integral

I'm going through Lanczos's "The Variational Principles of Mechanics", and on page 169 it says that we can form the action integral



$$A=\int_{t_1}^{t_2} \left[\sum_i p_i\dot q_i-H(q_1,\dots,q_n;p_1,\dots,p_n;t)\right]dt$$



from which we can take the variation to get



$$\delta A=0=\frac{dp_i}{dt}+\frac{\partial H}{\partial q_i}$$



$$\delta A=0=-\dot q_i + \frac{\partial H}{\partial p_i}$$



Now this is straightforward for the $\delta p$ variation,
$$p_i\,\delta\dot q_i + \dot q_i\,\delta p_i -\frac{\partial H}{\partial q_i}\delta q_i - \frac{\partial H}{\partial p_i}\delta p_i - \frac{\partial H}{\partial t}\,\delta t = 0$$



$$\Rightarrow\ \dot q_i\,\delta p_i - \frac{\partial H}{\partial p_i}\delta p_i = 0$$



but for the $\delta q$ variation I'm finding I need to do something tricky-looking and potentially incorrect:
$$p_i\,\delta\dot q_i + \dot q_i\,\delta p_i -\frac{\partial H}{\partial q_i}\delta q_i - \frac{\partial H}{\partial p_i}\delta p_i - \frac{\partial H}{\partial t}\,\delta t = 0$$
$$\Rightarrow\ \delta\!\left(\frac{d}{dt} q_i\right) p_i -\frac{\partial H}{\partial q_i}\delta q_i = 0$$
and using the fact that $d$ and $\delta$ commute,
$$\Rightarrow\ \frac{d}{dt}(\delta q_i)\, p_i -\frac{\partial H}{\partial q_i}\delta q_i = 0$$
$$\Rightarrow\ \frac{dp_i}{dt}\,\delta q_i -\frac{\partial H}{\partial q_i}\delta q_i = 0$$



I can recover the equation as given, except that I'm off by a minus sign! So I'm wondering whether the trick I used is valid, and why my sign comes out wrong. Any help would be greatly appreciated!







asked Jul 15 at 23:36
– DS08
2 Answers
Your error is in going from $\delta\!\left(\frac{d}{dt} q_i\right) p_i$ to $\frac{dp_i}{dt}\,\delta q_i$. In doing so you must change sign:
$$
\delta \left( \frac{d}{dt} q_i \right) p_i
= \frac{d}{dt}(\delta q_i)\, p_i
= \frac{d}{dt}(\delta q_i \, p_i) - \delta q_i \, \frac{dp_i}{dt}
$$
Now, $\frac{d}{dt}(\delta q_i \, p_i)$ vanishes on integration if $\delta q_i$ is taken to vanish at the end points, so we end up with $-\,\delta q_i \, \frac{dp_i}{dt}.$
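Spelled out inside the action integral (a sketch of the same computation, using $\delta q_i(t_1)=\delta q_i(t_2)=0$):
$$
\int_{t_1}^{t_2} \left[ p_i\,\frac{d}{dt}(\delta q_i) - \frac{\partial H}{\partial q_i}\,\delta q_i \right] dt
= \Big[\, p_i\,\delta q_i \,\Big]_{t_1}^{t_2} - \int_{t_1}^{t_2} \left[ \frac{dp_i}{dt} + \frac{\partial H}{\partial q_i} \right] \delta q_i \, dt,
$$
and since the boundary term vanishes, $\delta A = 0$ for arbitrary $\delta q_i$ forces $\frac{dp_i}{dt} = -\frac{\partial H}{\partial q_i}$, with exactly the sign quoted from Lanczos.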






answered Jul 16 at 4:46
– md2perpe (accepted)
• But what's the reasoning for changing the sign?
  – DS08
  Jul 17 at 16:04

• It's a partial integration: $$\int_{t_1}^{t_2} \frac{d(\delta q_i)}{dt}\, p_i \, dt = \Big[\, \delta q_i \, p_i \,\Big]_{t_1}^{t_2} - \int_{t_1}^{t_2} \delta q_i \, \frac{d p_i}{dt} \, dt$$
  – md2perpe
  Jul 17 at 16:09
You are treating $\delta$ like a differential. $\delta$ is a variation, not a differential. I feel the only way I can tell you what went wrong in your manipulations is to show you the correct way of dealing with a variation.

Consider a functional, which takes a smooth vector function $\mathbf{f}$ and returns an integral (which is called the action),
$$
S[\mathbf{f}] = \int_a^b F(\mathbf{f}(t),\dot{\mathbf{f}}(t),t)\, dt
$$
Here $\mathbf{f}$ furthermore needs to satisfy $\mathbf{f}(a)=\mathbf{x}$, $\mathbf{f}(b)=\mathbf{y}$ with $\mathbf{x},\mathbf{y}$ fixed and given. The function $F$ is, say, another smooth function.

What your book is telling you is that the function $\mathbf{f}$ making the action an extremum satisfies those equations. Here is what one does. Let $\mathbf{g}$ be the (unknown) extremum solution, $\epsilon > 0$ a small number, and $\mathbf{h}$ a function with $\mathbf{h}(a)=\mathbf{h}(b)=0$. This gives a perturbation of the extremum solution. To put things in perspective, by $\delta \mathbf{f}$ we mean $\mathbf{f}-\mathbf{g}=\epsilon \mathbf{h}$. This is what a variation means.

Now
$$
F(\mathbf{f}(t),\dot{\mathbf{f}}(t),t)-
F(\mathbf{g}(t),\dot{\mathbf{g}}(t),t)
=
F(\mathbf{g}+\epsilon\mathbf{h},\,\dot{\mathbf{g}}+\epsilon \dot{\mathbf{h}},\,t)
-
F(\mathbf{g},\dot{\mathbf{g}},t)
=
\epsilon\left[
\frac{\partial F}{\partial \mathbf{f}}\cdot \mathbf{h}+
\frac{\partial F}{\partial \dot{\mathbf{f}}}\cdot \dot{\mathbf{h}}
\right]
$$
the equality holding up to order $\epsilon^2$. Therefore
$$
\delta S := S[\mathbf{f}]-
S[\mathbf{g}]=\epsilon \int_a^b
\left[
\frac{\partial F}{\partial \mathbf{f}}\cdot \mathbf{h}+
\frac{\partial F}{\partial \dot{\mathbf{f}}}\cdot \dot{\mathbf{h}}
\right]dt
$$
Integrating the second summand by parts (and using the fact that $\mathbf{h}(a)=\mathbf{h}(b)=0$),
$$
\delta S =\epsilon \int_a^b
\left[
\frac{\partial F}{\partial \mathbf{f}}{\color{red}{-}}\frac{d}{dt}
\frac{\partial F}{\partial \dot{\mathbf{f}}}
\right]\cdot \mathbf{h}\, dt
$$
Now if $S$ is an extremum at $\mathbf{g}$, then $\delta S =0$ for all choices of $\mathbf{h}$. This forces
$$
\boxed{\ \frac{\partial F}{\partial \mathbf{f}}-\frac{d}{dt}
\frac{\partial F}{\partial \dot{\mathbf{f}}}=0\ }
$$
which is the Euler-Lagrange equation. You can either use the ideas in this derivation to find Hamilton's equations of motion from scratch, or you can simply apply the Euler-Lagrange equation to your action. I hope it is now clear how unsafe your original manipulation was.
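As a sketch of that last suggestion: take $\mathbf{f}=(q_1,\dots,q_n,p_1,\dots,p_n)$ and $F=\sum_i p_i \dot q_i - H(q,p,t)$. Component by component, the boxed equation then gives
$$
\frac{\partial F}{\partial q_i} - \frac{d}{dt}\frac{\partial F}{\partial \dot q_i}
= -\frac{\partial H}{\partial q_i} - \frac{dp_i}{dt} = 0,
\qquad
\frac{\partial F}{\partial p_i} - \frac{d}{dt}\frac{\partial F}{\partial \dot p_i}
= \dot q_i - \frac{\partial H}{\partial p_i} = 0,
$$
which are exactly the two canonical equations quoted in the question.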






answered Jul 16 at 2:37
– Hamed
          • Variations can be treated as differentials in the function space.
            – md2perpe
            Jul 17 at 16:13
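
Both answers come down to a single integration by parts, so a quick symbolic check may help. The sketch below is editorial, not from either answer: it runs sympy's euler_equations on the integrand $p\dot q - H$ for a one-dimensional harmonic oscillator (the Hamiltonian and the names m, k are illustrative choices, not from the question).

```python
import sympy as sp
from sympy.calculus.euler import euler_equations

t = sp.symbols('t')
m, k = sp.symbols('m k', positive=True)  # illustrative oscillator parameters
q, p = sp.Function('q'), sp.Function('p')

# Integrand of the action A = integral of (p*qdot - H) dt,
# with H chosen as an example Hamiltonian
H = p(t)**2 / (2*m) + k*q(t)**2 / 2
F = p(t)*sp.diff(q(t), t) - H

# Varying q and p independently reproduces Hamilton's canonical equations:
#   dq/dt = dH/dp = p/m   and   dp/dt = -dH/dq = -k*q
for eq in euler_equations(F, [q(t), p(t)], t):
    print(eq)
```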









