ODE problem reading do Carmo's book of Riemannian geometry
























I'm reading do Carmo's book, Riemannian Geometry, and I have a problem with Jacobi fields: in the case of constant curvature he arrives at the ODE quoted below, and my question is how he solves it. Can someone fill in the details? Thanks a lot!




As a result, the Jacobi equation can be written as



$$\frac{D^2J}{dt^2}+KJ~=~0$$



Let $\omega(t)$ be a parallel field along $\gamma$ with $\langle\gamma'(t),\omega(t)\rangle=0$ and $|\omega(t)|=1$. It is easy to verify that



$$J(t)~=~\begin{cases}\frac{\sin(t\sqrt{K})}{\sqrt{K}}\,\omega(t),~~~&\text{if } K>0\\ t\,\omega(t),~~~&\text{if } K=0\\ \frac{\sinh(t\sqrt{-K})}{\sqrt{-K}}\,\omega(t),~~~&\text{if } K<0\end{cases}$$



is a solution of $(2)$ with initial conditions $J(0)=0$, $J'(0)=\omega(0)$.
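The "easy to verify" step can also be checked numerically. Below is a small Python sketch (my own, not from the book) confirming via central finite differences that each scalar factor $f(t)$ in $J(t)=f(t)\,\omega(t)$ satisfies $f''+Kf=0$ with $f(0)=0$ and $f'(0)=1$; the function names and sample curvatures are my own choices.

```python
# Numerical sanity check (illustrative sketch, not from do Carmo): the scalar
# factors f(t) of the three Jacobi-field cases should satisfy
# f'' + K f = 0 with f(0) = 0 and f'(0) = 1.
import math

def d1(f, t, h=1e-6):
    """Central-difference approximation of f'(t)."""
    return (f(t + h) - f(t - h)) / (2 * h)

def d2(f, t, h=1e-4):
    """Central-difference approximation of f''(t)."""
    return (f(t + h) - 2 * f(t) + f(t - h)) / (h * h)

def f_factory(K):
    """Scalar factor f(t) for curvature K, following the three cases above."""
    if K > 0:
        return lambda t: math.sin(t * math.sqrt(K)) / math.sqrt(K)
    if K == 0:
        return lambda t: t
    return lambda t: math.sinh(t * math.sqrt(-K)) / math.sqrt(-K)

for K in (2.0, 0.0, -2.0):
    f = f_factory(K)
    assert abs(f(0.0)) < 1e-12                    # f(0) = 0
    assert abs(d1(f, 0.0) - 1.0) < 1e-6           # f'(0) = 1
    assert abs(d2(f, 0.8) + K * f(0.8)) < 1e-4    # f'' + K f = 0 at a sample point
```

Since $\omega(t)$ is parallel and unit-length, the vector equation reduces exactly to this scalar check, as both answers below explain.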

























  • It is a standard second-order ordinary differential equation with constant coefficients. There is some theory you need to pick up: the kind they teach in any ODE 101 course.
    – uniquesolution
    Jul 31 at 19:01










  • This kind of ODE is strongly connected to the trigonometric functions, since they are essentially the only functions that satisfy the relation $f''(x)=-f(x)$. The $K$ only rescales the argument of the functions you are using, hence $\frac{d^2}{dx^2}f(\sqrt{K}\,x)=-Kf(\sqrt{K}\,x)$. But as was already mentioned, this is fairly basic second-order ODE material.
    – mrtaurho
    Jul 31 at 19:04







  • I think there may be an error in your $K < 0$ case; I think you want $\sinh$ instead of $\sin$: $(\sinh(t\sqrt{-K})/\sqrt{-K})\,\omega(t)$.
    – Robert Lewis
    Jul 31 at 19:40











  • @mrtaurho When solving this equation using the usual ODE method, you get a similar solution but with only $\omega(0)$ as a constant, rather than the varying $\omega(t)$. I believe the OP may be wondering how to resolve this difference.
    – AlexanderJ93
    Jul 31 at 19:43






  • @RobertLewis is right! I'll edit this right away!
    – Hurjui Ionut
    Jul 31 at 20:45














edited Jul 31 at 20:45
























asked Jul 31 at 18:48









Hurjui Ionut

2 Answers

















accepted










Let me clarify the relation between the covariant-derivative formalism and the standard ODE formalism. Given a parallel vector field $\omega(t)$ along $\gamma(t)$ which satisfies the conditions written, let us try to find a solution of the Jacobi equation of the form $J(t) = f(t)\,\omega(t)$, where $f \colon I \rightarrow \mathbb{R}$ is a scalar function. By the product rule and the fact that $\omega$ is parallel, we have



$$ \frac{DJ}{dt}(t) = f'(t)\,\omega(t) + f(t)\,\frac{D\omega}{dt}(t) = f'(t)\,\omega(t),\\
\frac{D^2J}{dt^2}(t) = f''(t)\,\omega(t)
$$



so the equation becomes



$$ f''(t)\,\omega(t) + K f(t)\,\omega(t) = \bigl(f''(t) + Kf(t)\bigr)\,\omega(t) = 0. $$



Since $\omega(t) \neq 0$ for all $t \in I$, we must have $f''(t) + Kf(t) = 0$ for all $t \in I$; in addition, by the initial conditions, we must also have



$$ J(0) = 0 \iff f(0) = 0, \qquad J'(0) = f'(0)\,\omega(0) = \omega(0) \iff f'(0) = 1. $$



Hence, to find a solution of the Jacobi equation of the form above we must solve the second-order, constant-coefficient scalar ODE



$$ f''(t) + K f(t) = 0 $$



with initial conditions



$$ f(0) = 0, \qquad f'(0) = 1. $$
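To see that this scalar initial-value problem really does reproduce the closed-form factors quoted in the question, one can integrate it numerically and compare. A minimal sketch (the RK4 integrator, function names, and step counts are my own choices, not part of this answer):

```python
# Integrate f'' + K f = 0, f(0) = 0, f'(0) = 1 with classical RK4 and compare
# against the closed-form solutions (an illustrative sketch).
import math

def solve_scalar_jacobi(K, t_end=1.0, n=1000):
    """Return f(t_end) for f'' + K f = 0 with f(0) = 0, f'(0) = 1."""
    h = t_end / n
    f, fp = 0.0, 1.0                      # initial conditions
    rhs = lambda f, fp: (fp, -K * f)      # first-order system (f, f')' = (f', -K f)
    for _ in range(n):
        k1 = rhs(f, fp)
        k2 = rhs(f + 0.5 * h * k1[0], fp + 0.5 * h * k1[1])
        k3 = rhs(f + 0.5 * h * k2[0], fp + 0.5 * h * k2[1])
        k4 = rhs(f + h * k3[0], fp + h * k3[1])
        f += h / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0])
        fp += h / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1])
    return f

K = 4.0
assert abs(solve_scalar_jacobi(K) - math.sin(math.sqrt(K)) / math.sqrt(K)) < 1e-9
assert abs(solve_scalar_jacobi(0.0) - 1.0) < 1e-10                    # f(t) = t at t = 1
assert abs(solve_scalar_jacobi(-K) - math.sinh(math.sqrt(K)) / math.sqrt(K)) < 1e-9
```

The three branches of the closed-form solution fall out of the one initial-value problem simply by varying the sign of $K$.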





























  • There is a small problem: the initial problem is $\frac{D^2J}{dt^2}+KJ=0$, thus the scalar problem is $f''(t) + Kf(t) = 0$. But I get the idea!
    – Hurjui Ionut
    Jul 31 at 21:01







  • @HurjuiIonut: Corrected, thanks!
    – levap
    Jul 31 at 21:04










  • Thank you for the answer!
    – Hurjui Ionut
    Jul 31 at 21:08






























OK, here are some details:



First of all, I assume $D/dt$ is covariant differentiation along $gamma(t)$, i.e.,



$\dfrac{D}{dt} \equiv \nabla_{\gamma'(t)}, \tag 1$



where $\nabla$ is the Levi-Civita connection associated with the metric $\langle \cdot, \cdot \rangle$ on our manifold. Now if $\omega(t)$ is parallel along the curve $\gamma(t)$, then



$\dfrac{D\omega}{dt} = \nabla_{\gamma'(t)}\,\omega = 0, \tag 2$



and if $f(t)$ is any twice-differentiable function defined along $\gamma(t)$, we have, by the Leibniz rule,



$\dfrac{D(f\omega)}{dt} = \dfrac{df}{dt}\,\omega + f\,\dfrac{D\omega}{dt} = \dfrac{df}{dt}\,\omega, \tag 3$



which of course follows from (2):



$\dfrac{D(f\omega)}{dt} = \nabla_{\gamma'(t)}(f\omega) = \gamma'(t)[f]\,\omega + f\,\nabla_{\gamma'(t)}\omega = \dfrac{df}{dt}\,\omega; \tag 4$



thus



$\dfrac{D^2(f\omega)}{dt^2} = \dfrac{D}{dt}\left(\dfrac{D(f\omega)}{dt}\right) = \dfrac{D}{dt}\left(\dfrac{df}{dt}\,\omega\right) = \dfrac{d}{dt}\left(\dfrac{df}{dt}\right)\omega = \dfrac{d^2f}{dt^2}\,\omega; \tag 5$



now if



$J = f\omega \tag 6$



satisfies



$\dfrac{D^2J}{dt^2} + KJ = 0, \tag 7$



we may, via (5), write



$\left(\dfrac{d^2f}{dt^2} + Kf\right)\omega = \dfrac{d^2f}{dt^2}\,\omega + Kf\,\omega = \dfrac{D^2(f\omega)}{dt^2} + Kf\omega = \dfrac{D^2J}{dt^2} + KJ = 0; \tag 8$



since $\vert\omega(t)\vert = 1$ along $\gamma(t)$, we have $\omega(t) \ne 0$ on $\gamma(t)$, whence



$\dfrac{d^2f}{dt^2} + Kf = 0; \tag 9$



furthermore,



$J(0) = 0 \Longrightarrow f(0)\,\omega(0) = 0 \Longrightarrow f(0) = 0, \tag{10}$


$J'(0) = \omega(0) \Longrightarrow f'(0)\,\omega(0) = \omega(0) \Longrightarrow f'(0) = 1; \tag{11}$



so now we see that solving the covariant vector equation (7) with $J = f\omega$ is equivalent to solving the plain, ordinary scalar differential equation (9) with $f(0) = 0$, $f'(0) = 1$. So how do we do that?



Of course there are a variety of well-known methods for solving a constant-coefficient, linear ordinary differential equation such as (9); one can simply start grinding out a power series, a completely deterministic process which involves no guessing; but if one is willing to invoke a little intuition, one can make an "informed guess" that a solution to (9) might be of the form



$f(t) = e^{\mu t}; \tag{12}$



then plugging this into (9) yields



$\mu^2 e^{\mu t} + Ke^{\mu t} = 0 \Longrightarrow \mu^2 + K = 0, \tag{13}$



whence in the usual manner,



$K > 0 \Longrightarrow \mu = \pm i\sqrt{K}, \tag{14}$


$K = 0 \Longrightarrow \mu = 0; \tag{15}$


$K < 0 \Longrightarrow \mu = \pm\sqrt{-K}; \tag{16}$
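As a quick sanity check on the three root cases of $\mu^2 + K = 0$ (my own illustrative sketch, not part of this answer), complex arithmetic confirms each proposed $\mu$ really is a root:

```python
# Verify the root cases of mu^2 + K = 0 for K > 0, K = 0, K < 0 (a sketch).
import cmath

def char_roots(K):
    """Roots mu of mu^2 + K = 0, split by the sign of K."""
    if K > 0:
        r = 1j * cmath.sqrt(K)   # K > 0: mu = +/- i sqrt(K)
    else:
        r = cmath.sqrt(-K)       # K <= 0: mu = +/- sqrt(-K) (double root 0 when K = 0)
    return r, -r

for K in (3.0, 0.0, -3.0):
    for mu in char_roots(K):
        assert abs(mu * mu + K) < 1e-12
        # e^{mu t} then satisfies f'' + K f = 0, since f'' = mu^2 e^{mu t} = -K e^{mu t}
```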



it is well-known that cases (14) and (16) yield general solutions of the form



$f(t) = c_+ e^{i\sqrt{K}\,t} + c_- e^{-i\sqrt{K}\,t}$
$= c_+\bigl(\cos(\sqrt{K}\,t) + i\sin(\sqrt{K}\,t)\bigr) + c_-\bigl(\cos(\sqrt{K}\,t) - i\sin(\sqrt{K}\,t)\bigr)$
$= (c_+ + c_-)\cos(\sqrt{K}\,t) + i(c_+ - c_-)\sin(\sqrt{K}\,t), \tag{17}$



and



$f(t) = c_+ e^{\sqrt{-K}\,t} + c_- e^{-\sqrt{-K}\,t} = (c_+ + c_-)\cosh(\sqrt{-K}\,t) + (c_+ - c_-)\sinh(\sqrt{-K}\,t); \tag{18}$



in both cases (17), (18) the condition $f(0) = 0$ implies



$f(0) = c_+ + c_- = 0 \Longrightarrow c_- = -c_+, \tag{19}$



yielding



$f(t) = 2ic_+\sin(\sqrt{K}\,t), \; f'(t) = 2ic_+\sqrt{K}\cos(\sqrt{K}\,t), \tag{20}$



and



$f(t) = 2c_+\sinh(\sqrt{-K}\,t), \; f'(t) = 2c_+\sqrt{-K}\cosh(\sqrt{-K}\,t); \tag{21}$



with $f'(0) = 1$, (20) and (21) determine that



$2ic_+\sqrt{K} = 1 \Longrightarrow c_+ = \dfrac{-i}{2\sqrt{K}}, \tag{22}$



and



$2c_+\sqrt{-K} = 1 \Longrightarrow c_+ = \dfrac{1}{2\sqrt{-K}}, \tag{23}$



respectively; thus we see at last that



$K > 0 \Longrightarrow f(t) = \dfrac{\sin(\sqrt{K}\,t)}{\sqrt{K}}, \tag{24}$


$K < 0 \Longrightarrow f(t) = \dfrac{\sinh(\sqrt{-K}\,t)}{\sqrt{-K}}. \tag{25}$



As for (15), in the event that $K = 0$, we have found that $\mu = 0$ is a double root of (13), which reduces to $\mu^2 = 0$, corresponding to the $K = 0$ case of (9),



$f''(t) = 0. \tag{26}$



When the characteristic polynomial associated to a second-order ordinary differential equation has a double root, it must be of the form



$x^2 - 2\mu x + \mu^2 = (x - \mu)^2 = 0; \tag{27}$



in this case the ODE from which (27) arises by means of the substitution $y = e^{\mu t}$ is



$\ddot y - 2\mu\dot y + \mu^2 y = 0, \tag{28}$



which may also of course be written



$\left(\dfrac{d}{dt} - \mu\right)^2 y = \left(\dfrac{d}{dt} - \mu\right)\left(\dfrac{d}{dt} - \mu\right) y = \ddot y - 2\mu\dot y + \mu^2 y = 0; \tag{29}$



setting $z = \dot y - \mu y$ we see that this implies



$\dot z - \mu z = 0, \tag{30}$



whence



$z(t) = c_1 e^{\mu t}, \tag{31}$



which is clearly one solution to (29); then with



$\dot y - \mu y = z(t) = c_1 e^{\mu t}, \tag{32}$



we find



$y(t) = (c_0 + c_1 t)\,e^{\mu t}; \tag{33}$
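One can check this general double-root solution directly for generic sample constants (the values of $\mu$, $c_0$, $c_1$ below are my own, purely illustrative) by evaluating the residual of (28) with finite differences:

```python
# Check that y(t) = (c0 + c1 t) e^{mu t} satisfies y'' - 2 mu y' + mu^2 y = 0,
# eq. (28), using central differences (sample constants are arbitrary choices).
import math

mu, c0, c1 = 0.5, 2.0, -3.0

def y(t):
    return (c0 + c1 * t) * math.exp(mu * t)

def d1(f, t, h=1e-6):
    """Central-difference approximation of f'(t)."""
    return (f(t + h) - f(t - h)) / (2 * h)

def d2(f, t, h=1e-4):
    """Central-difference approximation of f''(t)."""
    return (f(t + h) - 2 * f(t) + f(t - h)) / (h * h)

for t0 in (0.0, 0.3, 1.2):
    residual = d2(y, t0) - 2 * mu * d1(y, t0) + mu * mu * y(t0)
    assert abs(residual) < 1e-5
```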



now when $mu = 0$ this reduces to



$y(t) = c_0 + c_1 t, \tag{34}$



clearly consistent with (26); in fact, from these last considerations we conclude that, (26) being the $\mu = 0$ case of (9),



$f(t) = c_0 + c_1 t; \tag{35}$



with the initial conditions (10)-(11) we see that



$c_0 = f(0) = 0, \; c_1 = f'(0) = 1, \tag{36}$



whence



$K = 0 \Longrightarrow f(t) = t; \tag{37}$



we may now combine (24), (25), (37) together with (6) to see that



$K > 0 \Longrightarrow J(t) = \dfrac{\sin(\sqrt{K}\,t)}{\sqrt{K}}\,\omega(t), \tag{38}$


$K = 0 \Longrightarrow J(t) = t\,\omega(t), \tag{39}$


$K < 0 \Longrightarrow J(t) = \dfrac{\sinh(\sqrt{-K}\,t)}{\sqrt{-K}}\,\omega(t). \tag{40}$



In closing, a few final words on the process of "solving" equation (9), which seems to be our OP Hurjui Ionut's prime concern. What we have done here, as is often done, is to make a well-motivated, well-informed guess that the solution may be expressed in exponential form as in (12), and then use the given equation (9) to resolve the values the parameter $\mu$ may take under various circumstances; in this sense, we are not so much deriving solutions as verifying them. A key theoretical fact which plays an essential role in this endeavor is that a linear equation of order $n$ has precisely $n$ linearly independent solutions; this allows us to affirm that we have indeed found all solutions to a given linear ordinary differential equation. Of course, such solutions may be built up from scratch via power series or other methods without resorting to "guessing", but we save ourselves a great many calculations by our ability to postulate and then verify functions which hypothetically satisfy our equations. And in the study of differential equations, especially non-linear differential equations, guessing is often the only means at our disposal.



One good guess is worth a thousand exploratory computations.






























    One good guess is worth a thousand exploratory computations.






    share|cite|improve this answer

























      up vote
      1
      down vote













      OK, here are some details:



      First of all, I assume $D/dt$ is covariant differentiation along $\gamma(t)$, i.e.,

      $\dfrac{D}{dt} \equiv \nabla_{\gamma'(t)}, \tag{1}$



      where $\nabla$ is the Levi-Civita connection associated with the metric $\langle \cdot, \cdot \rangle$ on our manifold. Now if $\omega(t)$ is parallel along the curve $\gamma(t)$, then

      $\dfrac{D\omega}{dt} = \nabla_{\gamma'(t)} \omega = 0, \tag{2}$



      and if $f(t)$ is any twice-differentiable function defined along $\gamma(t)$, we have, by the Leibniz rule,

      $\dfrac{D(f\omega)}{dt} = \dfrac{df}{dt}\omega + f \dfrac{D\omega}{dt} = \dfrac{df}{dt}\omega, \tag{3}$



      which of course follows from (2):



      $\dfrac{D(f\omega)}{dt} = \nabla_{\gamma'(t)} (f \omega) = \gamma'(t)[f] \, \omega + f \nabla_{\gamma'(t)}\omega = \dfrac{df}{dt} \omega; \tag{4}$



      thus



      $\dfrac{D^2(f\omega)}{dt^2} = \dfrac{D}{dt} \left( \dfrac{D(f\omega)}{dt} \right) = \dfrac{D}{dt} \left( \dfrac{df}{dt} \omega \right) = \dfrac{d}{dt} \left( \dfrac{df}{dt} \right) \omega = \dfrac{d^2 f}{dt^2} \omega; \tag{5}$



      now if



      $J = f\omega \tag{6}$



      satisfies



      $\dfrac{D^2 J}{dt^2} + KJ = 0, \tag{7}$



      we may, via (5), write



      $\left( \dfrac{d^2 f}{dt^2} + K f \right) \omega = \dfrac{d^2 f}{dt^2} \omega + K f \omega = \dfrac{D^2(f\omega)}{dt^2} + Kf\omega = \dfrac{D^2 J}{dt^2} + KJ = 0; \tag{8}$



      since $\vert \omega(t) \vert = 1$ along $\gamma(t)$, we have $\omega(t) \ne 0$ on $\gamma(t)$, whence

      $\dfrac{d^2 f}{dt^2} + K f = 0; \tag{9}$



      furthermore,



      $J(0) = 0 \Longrightarrow f(0)\omega(0) = 0 \Longrightarrow f(0) = 0, \tag{10}$

      $J'(0) = \omega(0) \Longrightarrow f'(0) \omega(0) = \omega(0) \Longrightarrow f'(0) = 1; \tag{11}$



      so now we see that solving the covariant vector equation (7) with $J = f\omega$ is equivalent to solving the plain and ordinary scalar differential equation (9) with $f(0) = 0$, $f'(0) = 1$. So how do we do that?
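As a sanity check (my own illustration, not part of do Carmo's text or of this derivation), one can integrate the scalar initial value problem (9) with $f(0)=0$, $f'(0)=1$ numerically and compare against the closed-form solutions quoted in the question. This sketch uses a hand-rolled classical RK4 step and only the standard library:

```python
import math

def solve_scalar_jacobi(K, t_max=1.0, n=1000):
    """Integrate f'' + K f = 0 with f(0) = 0, f'(0) = 1 (classical RK4),
    returning the approximation to f(t_max)."""
    h = t_max / n
    f, fp = 0.0, 1.0                      # initial conditions (10) and (11)
    rhs = lambda f, fp: (fp, -K * f)      # equation (9) as a first-order system
    for _ in range(n):
        k1 = rhs(f, fp)
        k2 = rhs(f + h / 2 * k1[0], fp + h / 2 * k1[1])
        k3 = rhs(f + h / 2 * k2[0], fp + h / 2 * k2[1])
        k4 = rhs(f + h * k3[0], fp + h * k3[1])
        f += h / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0])
        fp += h / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1])
    return f
```

With $K = 1, 0, -1$ the numerical result agrees with $\sin(1)$, $1$, and $\sinh(1)$ respectively, to high accuracy.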



      Of course there are a variety of well-known methods for solving a constant-coefficient, linear ordinary differential equation such as (9): one can simply grind out a power series, a completely deterministic process that involves no guessing; but if one is willing to invoke a little intuition, one can make an "informed guess" that a solution to (9) might be of the form

      $f(t) = e^{\mu t}; \tag{12}$



      then plugging this into (9) yields



      $\mu^2 e^{\mu t} + Ke^{\mu t} = 0 \Longrightarrow \mu^2 + K = 0, \tag{13}$



      whence in the usual manner,



      $K > 0 \Longrightarrow \mu = \pm i \sqrt{K}, \tag{14}$

      $K = 0 \Longrightarrow \mu = 0; \tag{15}$

      $K < 0 \Longrightarrow \mu = \pm \sqrt{-K}; \tag{16}$
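The three cases (14)-(16) can be reproduced mechanically; here is a small hypothetical helper (my own, not from the text) that returns both roots of (13) via the principal complex square root:

```python
import cmath

def char_roots(K):
    """Both roots of the characteristic polynomial mu^2 + K = 0 from (13)."""
    r = cmath.sqrt(-K)  # principal complex square root of -K
    return r, -r
```

For `K = 4` this returns the pure imaginary pair $\pm 2i$ (case (14)), for `K = 0` the double root $0$ (case (15)), and for `K = -4` the real pair $\pm 2$ (case (16)).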



      it is well-known that cases (14) and (16) yield general solutions of the form



      $f(t) = c_+ e^{i \sqrt{K} t} + c_- e^{-i\sqrt{K} t}$
      $= c_+(\cos(\sqrt{K} t) + i\sin(\sqrt{K} t)) + c_-(\cos(\sqrt{K} t) - i\sin(\sqrt{K} t))$
      $= (c_+ + c_-)\cos(\sqrt{K} t) + i(c_+ - c_-) \sin(\sqrt{K} t), \tag{17}$



      and



      $f(t) = c_+ e^{\sqrt{-K} t} + c_- e^{-\sqrt{-K} t} = (c_+ + c_-)\cosh(\sqrt{-K} t) + (c_+ - c_-)\sinh(\sqrt{-K} t); \tag{18}$



      in both cases (17), (18) the condition $f(0) = 0$ implies



      $f(0) = c_+ + c_- = 0 \Longrightarrow c_- = -c_+, \tag{19}$



      yielding



      $f(t) = 2ic_+ \sin(\sqrt{K} t), \; f'(t) = 2ic_+ \sqrt{K} \cos(\sqrt{K} t), \tag{20}$



      and



      $f(t) = 2c_+ \sinh(\sqrt{-K} t), \; f'(t) = 2c_+ \sqrt{-K} \cosh(\sqrt{-K} t); \tag{21}$



      with $f'(0) = 1$, (20) and (21) determine that



      $2ic_+ \sqrt{K} = 1 \Longrightarrow c_+ = \dfrac{-i}{2\sqrt{K}}, \tag{22}$



      and



      $2c_+ \sqrt{-K} = 1 \Longrightarrow c_+ = \dfrac{1}{2\sqrt{-K}}, \tag{23}$



      respectively; thus we see at last that



      $K > 0 \Longrightarrow f(t) = \dfrac{\sin(\sqrt{K} t)}{\sqrt{K}}, \tag{24}$

      $K < 0 \Longrightarrow f(t) = \dfrac{\sinh(\sqrt{-K} t)}{\sqrt{-K}}. \tag{25}$
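One can also verify (24) and (25) directly. The sketch below (my own illustration, not from the text) checks by central differences that each closed form annihilates the left-hand side of (9):

```python
import math

def f_pos(t, K):
    """Closed form (24), valid for K > 0."""
    return math.sin(math.sqrt(K) * t) / math.sqrt(K)

def f_neg(t, K):
    """Closed form (25), valid for K < 0."""
    return math.sinh(math.sqrt(-K) * t) / math.sqrt(-K)

def ode9_residual(f, t, K, h=1e-4):
    """Central-difference estimate of f''(t) + K f(t); ~0 for a solution of (9)."""
    fpp = (f(t + h, K) - 2.0 * f(t, K) + f(t - h, K)) / h ** 2
    return fpp + K * f(t, K)
```

The residual is at the level of finite-difference error, and both forms satisfy the initial conditions $f(0) = 0$, $f'(0) = 1$.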



      As for (15), in the event that $K = 0$ we have found that $\mu = 0$ is a double root of (13), which reduces to $\mu^2 = 0$, corresponding to the $K = 0$ case of (9),

      $f''(t) = 0. \tag{26}$



      When the characteristic polynomial associated to a second-order linear ordinary differential equation has a double root $\mu$, it must be of the form

      $x^2 - 2\mu x + \mu^2 = (x - \mu)^2 = 0; \tag{27}$



      in this case the ODE from which (27) arises by means of the substitution $y = e^{\mu t}$ is

      $\ddot y - 2\mu \dot y + \mu^2 y = 0, \tag{28}$



      which may also of course be written



      $\left( \dfrac{d}{dt} - \mu \right)^2 y = \left( \dfrac{d}{dt} - \mu \right)\left( \dfrac{d}{dt} - \mu \right) y = \ddot y - 2\mu \dot y + \mu^2 y = 0; \tag{29}$



      setting $z = \dot y - \mu y$, we see that this implies

      $\dot z - \mu z = 0, \tag{30}$



      whence



      $z(t) = c_1 e^{\mu t}, \tag{31}$



      which is clearly one solution to (29); then with



      $\dot y - \mu y = z(t) = c_1 e^{\mu t}, \tag{32}$



      we find



      $y(t) = (c_0 + c_1 t)e^{\mu t}; \tag{33}$
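Again as an independent check (my own, with hypothetical constants `c0`, `c1`), a central-difference computation confirms that (33) satisfies (28) for any $\mu$:

```python
import math

def y_double_root(t, mu, c0=0.5, c1=2.0):
    """General solution (33) for a double characteristic root mu."""
    return (c0 + c1 * t) * math.exp(mu * t)

def ode28_residual(t, mu, h=1e-4):
    """Central-difference estimate of y'' - 2*mu*y' + mu^2*y from (28); ~0."""
    y = lambda s: y_double_root(s, mu)
    yp = (y(t + h) - y(t - h)) / (2.0 * h)
    ypp = (y(t + h) - 2.0 * y(t) + y(t - h)) / h ** 2
    return ypp - 2.0 * mu * yp + mu ** 2 * y(t)
```

For $\mu = 0$ the solution degenerates to the linear function of (34) and the residual vanishes up to rounding.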



      now when $\mu = 0$ this reduces to

      $y(t) = c_0 + c_1 t, \tag{34}$



      clearly consistent with (26); in fact, from these last considerations we conclude that, (26) being the $\mu = 0$ case of (9),

      $f(t) = c_0 + c_1 t; \tag{35}$



      with the initial conditions (10)-(11) we see that



      $c_0 = f(0) = 0, \; c_1 = f'(0) = 1, \tag{36}$



      whence



      $K = 0 \Longrightarrow f(t) = t; \tag{37}$



      we may now combine (24), (25), (37) together with (6) to see that



      $K > 0 \Longrightarrow J(t) = \dfrac{\sin(\sqrt{K} t)}{\sqrt{K}} \, \omega(t), \tag{38}$

      $K = 0 \Longrightarrow J(t) = t \, \omega(t), \tag{39}$

      $K < 0 \Longrightarrow J(t) = \dfrac{\sinh(\sqrt{-K} t)}{\sqrt{-K}} \, \omega(t). \tag{40}$
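It is worth noting that the scalar amplitudes in (38)-(40) fit together continuously in $K$: as $K \to 0$ from either side, both $\sin(\sqrt{K} t)/\sqrt{K}$ and $\sinh(\sqrt{-K} t)/\sqrt{-K}$ tend to $t$. A small sketch (my own, not from the text) makes this concrete:

```python
import math

def jacobi_amplitude(t, K):
    """Scalar factor multiplying omega(t) in (38)-(40)."""
    if K > 0:
        return math.sin(math.sqrt(K) * t) / math.sqrt(K)
    if K < 0:
        return math.sinh(math.sqrt(-K) * t) / math.sqrt(-K)
    return t  # the flat case (39)

# For |K| small, both curved branches are within O(K) of the flat value t.
```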



      In closing, a few final words on the process of "solving" equation (9), which seems to be our OP's prime concern: what we have done here, as is often done, is to make a well-motivated, well-informed guess that the solution may be expressed in exponential form as in (12), and then use the given equation (9) to resolve the values that the parameter $\mu$ may take under various circumstances; in this sense, we are not so much deriving solutions as verifying them. A key theoretical fact playing an essential role in this endeavor is that a homogeneous linear equation of order $n$ has precisely $n$ linearly independent solutions; this allows us to affirm that we have indeed found all solutions to a given linear ordinary differential equation. Of course, such solutions may be built up from scratch via power series or other methods without resorting to "guessing", but we save ourselves a great many calculations by our ability to postulate and then verify functions which hypothetically satisfy our equations. And in the study of differential equations, especially non-linear differential equations, guessing is often the only means at our disposal.



      One good guess is worth a thousand exploratory computations.






        answered Aug 2 at 18:54









        Robert Lewis
