Constructing a numerical proof that integration is equivalent to antidifferentiation

I've been working for a while on a proof that integration and antidifferentiation are equivalent except for a constant difference; below is an outline of that proof so far. There are a couple of parts that I feel are on some shaky ground and not so rigorous, such as whether I actually proved my claim, using tangents as approximators for continuous functions, arguing about the meaning of $c$ in terms of constants of integration, and finally how to argue for changing the bounds of the sum in the second-to-last paragraph. Sorry if the spacing changes make this hard to read; I can remove them if it's too bad.



Given a continuous real function $f$ we can define another continuous real function $g$ which goes through the point $(0,c)$ and satisfies $\forall d\in\Bbb D$, $g'(d)=f(d)$, where $\Bbb D$ is the domain of $f$.



From here on out I'm going to take a little bit of a strange path, but in the end the purpose should be clear. Given our definition of $g$, the tangent line at any point $(a,g(a))$ along $g$ will have the equation $y=f(a)(x-a)+g(a)$, but we are limited in using this formula to find tangents because the value of $g(a)$ is unknown, except in one special case: when $a=0$.



When $a=0$ we know $g(a)$ is equal to $c$ by the definition of $g$. That gives us the line tangent to $g$ at $(0,g(0))$ as $y=f(0)x+c$. Since both $g$ and $f$ are continuous, this tangent is a reasonable approximation for $g$ around $0$ for some small distance $h$. That allows us to approximate $g(h)$ as $f(0)h+c$ using the equation of our tangent line. Now, with this approximation for the value of $g(h)$, we can approximate the tangent line through $(h,g(h))$ as $f(h)(x-h)+c+f(0)h$. If we repeat this process, in general the tangent line for $g$ at some point $a$ can be approximated as $\displaystyle f(a)(x-a)+c+\sum_{n=0}^{a/h-1} f(nh)h$.
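This stepping procedure can be sanity-checked numerically. Here is a minimal Python sketch of it; the example $f(x)=\cos x$ with $c=1$, and all names below, are my own assumptions for illustration (the exact $g$ is then $\sin x + 1$):

```python
import math

def tangent_step_approx(f, c, a, h):
    """Follow tangent lines of g (where g' = f and g(0) = c) in steps of h.

    Returns the approximation c + sum_{n=0}^{a/h - 1} f(n*h)*h for g(a).
    """
    g = c
    for n in range(round(a / h)):
        g += f(n * h) * h  # slope f(nh), followed for a horizontal distance h
    return g

f, c, a = math.cos, 1.0, 2.0
for h in (0.1, 0.01, 0.001):
    print(h, tangent_step_approx(f, c, a, h), math.sin(a) + c)
```

As $h$ shrinks, the printed approximations approach $\sin 2 + 1 \approx 1.9093$.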



In general the tangent at $(a,g(a))$ can be used as an approximation for $g(a)$ if we plug in $a$ for $x$ in the equation of the tangent line. Doing some quick algebra, that gives us $\displaystyle c+\sum_{n=0}^{a/h-1} f(nh)h$. Now you may begin to recognize this sum, but there are a couple more steps before the proof is finished.
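Explicitly, the quick algebra is just that substituting $x=a$ makes the linear term vanish:

$$f(a)(a-a)+c+\sum_{n=0}^{a/h-1} f(nh)h \;=\; c+\sum_{n=0}^{a/h-1} f(nh)h.$$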



The part of this sum that restricts how accurately it approximates $g(a)$ is the size of $h$, so in order to get the best approximation we should consider the limit as $h \to 0$, so that our new sum is $\displaystyle c+ \lim_{h \to 0} \sum_{n=0}^{a/h-1} f(nh)h$. We can actually simplify this sum by changing the upper bound from $\frac{a}{h}-1$ to $\frac{a}{h}$, since the only piece of the sum that this affects is the term $\displaystyle\lim_{h \to 0}f(a-h)h$, and the limit of a product can be rewritten as the product of the limits, namely $\displaystyle\lim_{h \to 0} f(a-h) \times \lim_{h \to 0}h$. Since $f$ is continuous, the first limit evaluates to $f(a)$, and the second limit evaluates to $0$, meaning that the value of the product is just $f(a) \times 0$, or just $0$, so this term causes no change to the value of the sum in the limiting case. That gives us our new sum-limit as $\displaystyle c+\lim_{h \to 0} \sum_{n=0}^{a/h} f(nh)h$.
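As a small numeric illustration of the vanishing term (again with the assumed example $f(x)=\cos x$ and $a=2$, not part of the original argument):

```python
import math

f, a = math.cos, 2.0
for h in (0.1, 0.01, 0.001):
    # the single term that distinguishes the two upper bounds of the sum
    print(h, f(a - h) * h)
```

The printed values shrink linearly with $h$, matching the product-of-limits argument.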



The last bit to consider is that as $h$ becomes smaller, more and more values of $f$ are being added over the interval $[0,a]$, so we can rewrite the sum as $\displaystyle\lim_{h \to 0} \sum_{n=0}^{1/h} f(anh)h + c$, since the sum covers the same interval in the limit.



If you recall the definition of the integral of any function $f$ from $0$ to $x$, specifically $\displaystyle\lim_{h \to 0} \sum_{n=0}^{1/h} f(xnh)h$, you'll notice that the two sums are equal except for the constant $c$, which can easily be recognized as the constant of integration that appears whenever one is doing antidifferentiation.
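To make the role of $c$ concrete, here is a short sketch (my own, assuming $f(x)=\cos x$, whose antiderivatives are $\sin x + c$) showing that $g(a)-c$ matches the same Riemann sum for every choice of $c$:

```python
import math

def left_riemann(f, a, h):
    """Left Riemann sum for f over [0, a] with step h."""
    return sum(f(n * h) * h for n in range(round(a / h)))

f, a, h = math.cos, 2.0, 1e-4
integral = left_riemann(f, a, h)
for c in (-3.0, 0.0, 5.0):
    g_a = math.sin(a) + c        # the antiderivative with g(0) = c
    print(c, g_a - c, integral)  # g(a) - c agrees with the sum for every c
```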

asked Jul 29 at 14:20 by Aaron Quitta

  • "Given a continuous real function $f$ we can define another continuous real function $g$ which goes through the point $(0,c)$ and $\forall d\in\mathbb D\quad g'(d)=f(d)$ where $\mathbb D$ is the domain of $f$." In general this is not true; for example, take the Gaussian function $e^{-x^2/2}$, which does not have an antiderivative. If anyone has an idea how to do quotes in comments, please let me know!
    – Pink Panther
    Jul 29 at 14:27

  • How would I avoid pitfalls like that, what condition would be relevant/necessary, or would I just say "given a function $f$ that has an antiderivative"? Edit: Wait, isn't it just some version of the error function?
    – Aaron Quitta
    Jul 29 at 14:35

  • Well, it seems like I cannot edit my post anymore. I was mistaken to say that it does not exist; I meant to say that there does not exist an antiderivative composed of elementary functions. An antiderivative may still be given by $g(x)=\int_a^x f(t)\,dt$, if $f$ is a function $f:(a,b)\rightarrow\mathbb R$, so this is surely still continuous. So I am sorry, I guess I made a mistake there.
    – Pink Panther
    Jul 29 at 14:43

  • It's totally okay, the error function is kind of a cop-out anyway.
    – Aaron Quitta
    Jul 29 at 15:04

  • @PinkPanther The function $e^{-x^2/2}$ certainly does have an antiderivative! There's no closed-form formula for the antiderivative.
    – David C. Ullrich
    Jul 29 at 15:51

1 Answer (accepted)

I believe that when we do the first linear approximation, we have

$$g(h)=f(0)h+c+\epsilon_1(f,h)h$$

where $\epsilon_1(f,h)$ is the error term, of which we know $\lim_{h \to 0} \epsilon_1(f,h) = 0$.

Similarly, as we do the second approximation, we have

$$g(2h)=g(h)+f(h)h+\epsilon_2(f,h)h$$

where $\lim_{h \to 0}\epsilon_2(f,h)=0$, and so on.

I think you still have to show that

$$\lim_{h \to 0}\sum_{i=1}^{a/h} h\epsilon_i(f,h)=0.$$

One way to see this:

$$\lim_{h \to 0}\sum_{i=1}^{a/h} |h\epsilon_i(f,h)| \le \lim_{h \to 0} \frac{a}{h}\cdot h \sup_i |\epsilon_i(f,h)|=0.$$
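To illustrate this bound numerically (my own sketch, not part of the answer, using the same assumed example $f(x)=\cos x$ with $c=0$, so the exact $g$ is $\sin$): the total accumulated error $\bigl|\sum_i h\epsilon_i(f,h)\bigr|$ shrinks roughly linearly in $h$, consistent with the $\frac{a}{h}\cdot h\sup_i|\epsilon_i(f,h)|$ bound:

```python
import math

def accumulated_error(f, g_exact, c, a, h):
    """|g(a) - (c + sum_n f(n*h)*h)|, i.e. the size of sum_i h*eps_i."""
    approx = c
    for n in range(round(a / h)):
        approx += f(n * h) * h
    return abs(g_exact(a) - approx)

f, g, a = math.cos, math.sin, 2.0  # assumed example, with c = 0
for h in (0.1, 0.01, 0.001):
    print(h, accumulated_error(f, g, 0.0, a, h))
```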

answered 2 days ago by Siong Thye Goh (edited 2 days ago)

  • Thanks for the response! Most of it makes sense to me, but there is one thing I do not understand: the second term of the final inequality. Where did you get it from, and can you help me understand the meaning of $\sup$ and the subscript $i$?
    – Aaron Quitta
    2 days ago

  • It is something like $a+b+c \le 3\max(a,b,c)$. I upper-bound each term by their maximum.
    – Siong Thye Goh
    2 days ago

  • That is an interesting way of approaching the problem. I'll keep that in mind and come back to you when I get stuck or have finished.
    – Aaron Quitta
    2 days ago

  • Would the definition $\epsilon_n(f,g)=\frac1h\left(g(nh)-c-\sum_{i=0}^{a/n}f(nh)h\right)$ be correct?
    – Aaron Quitta
    2 days ago

  • Sorry, I meant $\epsilon_n(f,h)$. Also, wouldn't the last limit be equal to $\lim_{h \to 0} a\cdot\sup_i|\epsilon_i(f,h)|$? And since $a$ is a constant, that only leaves $\lim_{h \to 0} \sup_i|\epsilon_i(f,h)| = 0$ to be proven, which we are given, since we are saying that $\lim_{h \to 0} \epsilon_n(f,h) = 0$ $\forall n$?
    – Aaron Quitta
    2 days ago