Let $f\in C^2(\mathbb R)$. I have to prove that there is $c\in [a,b]$ s.t. $f(b)=f(a)+f'(a)(b-a)+\frac{f''(c)}{2}(b-a)^2.$

Let $f\in C^2(\mathbb R)$. I have to prove that there is $c\in [a,b]$ s.t. $$f(b)=f(a)+f'(a)(b-a)+\frac{f''(c)}{2}(b-a)^2.$$



I know that we can apply Rolle's theorem twice with $$g(x):=f(x)-f(a)-f'(a)(x-a)+\frac{f(a)+f'(a)(b-a)-f(b)}{(b-a)^2}(x-a)^2,$$
but setting up such a function looks so unnatural to me (I would never think of defining it myself), so I was wondering whether there is a more intuitive way to do it.
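Spelled out, the two applications of Rolle's theorem go as follows: one checks directly that
$$g(a)=g(b)=0 \qquad\text{and}\qquad g'(a)=0,$$
so Rolle's theorem on $[a,b]$ gives some $c_1\in(a,b)$ with $g'(c_1)=0$, and Rolle's theorem on $[a,c_1]$ then gives $c\in(a,c_1)$ with $g''(c)=0$, which rearranges to the stated identity.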




The aim of the exercise is to prove Taylor's theorem, so I can't use the Taylor polynomial.







asked Jul 23 at 21:05 by user386627, edited Jul 23 at 21:13




















3 Answers

































First, we have:

\begin{align*}
f(b) &= f(a) + \int_a^b f'(t)\,dt \\
&= f(a) + \int_a^b \left( f'(a) + \int_a^t f''(u)\,du \right) dt \\
&= f(a) + (b-a) f'(a) + \int_a^b \int_a^t f''(u)\,du\,dt \\
&= f(a) + (b-a)f'(a) + \int_a^b \int_u^b f''(u)\,dt\,du \\
&= f(a) + (b-a)f'(a) + \int_a^b (b-u) f''(u)\,du.
\end{align*}
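(In the fourth line the order of integration is swapped over the triangle $\{(t,u):a\le u\le t\le b\}$; this is where Fubini's theorem enters.)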



Now, since $f''$ is continuous on $[a,b]$, it has a minimum value $m$ and a maximum value $M$ on this interval. Then
$$\frac12(b-a)^2 m \le \int_a^b (b-u) f''(u)\,du \le \frac12(b-a)^2 M.$$
Therefore, $\frac{2}{(b-a)^2} \int_a^b (b-u) f''(u)\,du$ lies between $m$ and $M$, so by the intermediate value theorem, there is some $c\in [a,b]$ such that $f''(c) = \frac{2}{(b-a)^2} \int_a^b (b-u) f''(u)\,du$. It then follows that for this $c$,
$$f(b) = f(a) + (b-a) f'(a) + \frac12 (b-a)^2 f''(c)$$
as desired.
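As a quick numerical sanity check of the identity $f(b)=f(a)+(b-a)f'(a)+\int_a^b (b-u)f''(u)\,du$, here is a short Python sketch for a concrete choice, say $f=\sin$ on $[0,1]$, with a midpoint Riemann sum standing in for the exact integral:

```python
import math

# Check  f(b) = f(a) + (b - a)*f'(a) + integral_a^b (b - u) f''(u) du
# for the concrete choice f = sin on [a, b] = [0, 1].
def f(x):   return math.sin(x)
def df(x):  return math.cos(x)    # f'
def d2f(x): return -math.sin(x)   # f''

a, b, n = 0.0, 1.0, 100_000
h = (b - a) / n

# Midpoint Riemann sum for the remainder integral.
integral = h * sum((b - (a + (i + 0.5) * h)) * d2f(a + (i + 0.5) * h) for i in range(n))

lhs = f(b)
rhs = f(a) + (b - a) * df(a) + integral
print(lhs, rhs)   # both approximately sin(1) = 0.84147...
assert abs(lhs - rhs) < 1e-9
```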






answered Jul 23 at 21:30 by Daniel Schepler (accepted answer)





















• Wow, wonderful! :) Moreover, I now understand where the integral form of the remainder in Taylor's theorem comes from. Your answer is perfect and helped me a lot!
– user386627
Jul 23 at 21:32











• Nice; after the first application of the FTC I always used integration by parts to get to the last equality. I never thought about iterating the FTC and Fubini: it also seems much more natural.
– Bob
Jul 23 at 21:55






























Perhaps I can explain what's going on in that auxiliary function.

The way one usually goes about proving the MVT is that you "slant" your original function by a linear function so that the difference is zero at the endpoints. This sets you up to use Rolle's Theorem.
It's also useful to notice here that we use a linear polynomial because a linear polynomial is the bare minimum needed to get the two desiderata: we want the difference to be $0$ at $a$ and $0$ at $b$.



What's going on in that function is that it's like a "second-order slant". The first tier of this slant is similar to the first application: we want the difference to be zero at the endpoints. This will give us a point in the middle where the first derivative will be zero. But now we want another point where the derivative will be zero so that we can invoke Rolle's Theorem again. Why don't we be easy on ourselves and just stipulate where that other zero will be? Let's construct the slanting function so that the difference will be $0$ at $a$, $0$ at $b$, and so that the derivative will be $0$ at $a$. That is three criteria. A quadratic has enough freedom to meet all three, and the quadratic you have is the one that does.



In general, if you want a polynomial and its successive derivatives to satisfy $n+1$ criteria, you can find a degree-$n$ polynomial that will do the trick.



This could extend to prove Taylor's Theorem in general. The slanting function would still make the difference $0$ at the endpoints. But you would demand a much higher degree of vanishing for the derivatives at $a$.

If you really want to use induction here, you might try proving this lemma first:




Lemma: Suppose you have $n+1$ real numbers $a_0, a_1, \ldots, a_{n-1}$ and $b_0$. Then there exists a polynomial $p$ of degree $n$ on $[a,b]$ such that
$$p(a)=a_0,\quad p'(a)=a_1,\quad \ldots,\quad p^{(n-1)}(a)=a_{n-1}, \quad\text{and}\quad p(b)=b_0.$$




From there you would consider $f-p$ and apply Rolle's theorem to your heart's desire.
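Concretely, for the case at hand ($n=2$, with the data $p(a)=f(a)$, $p'(a)=f'(a)$, $p(b)=f(b)$) the lemma's polynomial is
$$p(x)=f(a)+f'(a)(x-a)+\frac{f(b)-f(a)-f'(a)(b-a)}{(b-a)^2}(x-a)^2,$$
and $f-p$ is exactly the auxiliary function $g$ from the question.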






answered Jul 23 at 21:55 by Robert Wolfe, edited Jul 23 at 22:15























• Thank you for the explanation. It's very useful :)
– user386627
Jul 23 at 22:05






























Consider the Taylor expansion of $f(x)$ at $x_0=a$:

$$f(x)=f(a)+\frac{f'(a)}{1!}(x-a)+R_1(x).$$
Applying Lagrange's form for the remainder we get
$$R_1(x)=\frac{f''(c_x)(x-a)^2}{2!},$$
and looking at $x=b$ we get the form needed (with $c_x\in(a,b)$).



Edit: Since you can't use the Taylor expansion, you can define a maybe more intuitive function:
$$g(x)=f(x)-\bigl(f(a)+f'(a)(x-a)\bigr).$$
This $g(x)$ is actually the remainder of the first-order Taylor expansion above.



Define $\gamma(t)=(t-a)^2$ and apply Cauchy's mean value theorem to get
$$\exists\, r\in(a,x):\ \frac{g(x)-g(a)}{\gamma(x)-\gamma(a)}=\frac{g'(r)}{\gamma'(r)}.$$
It is easy to see that $g(a)=\gamma(a)=0$, so rearranging the terms we get
$$g(x)=\frac{\gamma(x)\, g'(r)}{\gamma'(r)} = \frac{(x-a)^2\bigl(f'(r)-f'(a)\bigr)}{2(r-a)},$$
and applying the mean value theorem again gives $\exists\, c\in(a,r):\ g(x)=\frac{f''(c)(x-a)^2}{2}$.
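(Here $g'(r)=f'(r)-f'(a)$ and $\gamma'(r)=2(r-a)$, which gives the middle expression; the last step is the ordinary mean value theorem applied to $f'$ on $[a,r]$.)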



$$\implies \frac{f''(c)(x-a)^2}{2}=f(x)-\bigl(f(a)+f'(a)(x-a)\bigr);$$ set $x=b$ and rearrange the terms to get the form needed.



I still had to use the mean value theorem twice, but I hope this answer is of some help.






answered Jul 23 at 21:12 by Sar, edited Jul 23 at 21:43























• The aim of the exercise is to prove Taylor's theorem, so I can't use it.
– user386627
Jul 23 at 21:13










• I've added a proof with a (maybe) more intuitive function.
– Sar
Jul 23 at 21:47









