What is the intuitive meaning of 'order of accuracy' and 'order of approximation' with respect to a numerical method?

I have been studying numerical methods in order to get a better understanding of CFD and the algorithms used in CFD codes. Where I am stuck is that I don't understand what is meant by 'order of accuracy', 'order of approximation', and 'order of error'.



I have come to understand that 'order of accuracy' is a way to quantify the accuracy of the method, i.e. how close the output of the method is to the actual value. Since the value we get from the numerical method is an approximation to the actual value, there is an error (i.e. (actual_value - approximate_value) > 0, usually) which is greater than zero, as the approximate value is not exactly the same as the actual value but close to it. The error is found to depend on the 'step size' used: the error decreases if we decrease the step size, and we get a more "accurate" result as the approximate value inches closer to the actual value. I got this equation for the same:



$$E(h)=Ch^n$$



Where $E$ is the error, which depends on the step size, $h$ is the step size, $n$ is called the order of accuracy, and $C$ is a constant. What I don't understand is why all the literature says "higher order of accuracy is better, as it means we get a more accurate result, i.e. the approximate value is closer to the actual value". But if the step size $h$ is greater than 1, i.e. if $h>1$, then the error value seems to increase, i.e. the accuracy of the approximate value decreases. Since there is no condition mentioned anywhere that the step size can't be greater than one, does the accuracy actually increase as the order of accuracy increases, when the step size is greater than one?
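To make the $E(h)=Ch^n$ behaviour concrete, here is a small sketch (my own example, not from the question) comparing a first-order forward difference with a second-order central difference for $f=\sin$ at $x=1$: halving $h$ roughly halves the first-order error and quarters the second-order one.

```python
import math

def fwd_diff(f, x, h):
    # first-order approximation of f'(x): error behaves like C1 * h
    return (f(x + h) - f(x)) / h

def ctr_diff(f, x, h):
    # second-order approximation of f'(x): error behaves like C2 * h^2
    return (f(x + h) - f(x - h)) / (2 * h)

x, exact = 1.0, math.cos(1.0)
hs = [0.1, 0.05, 0.025]
err1 = [abs(fwd_diff(math.sin, x, h) - exact) for h in hs]
err2 = [abs(ctr_diff(math.sin, x, h) - exact) for h in hs]

for h, e1, e2 in zip(hs, err1, err2):
    print(f"h={h:<6} first-order error={e1:.2e}  second-order error={e2:.2e}")
```

Note that the higher-order method only wins once $h$ is small enough that $h^2 \ll h$; the constants $C_1, C_2$ also matter.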



Also, I have come to understand that order of approximation is a measure of (a way to quantify) the 'precision' of the approximate value. I understand precision in the context of floating-point numbers, where it arises from the restriction on the number of digits that can be held in memory, and so we have single and double precision. I am failing to see what 'precision' means in this context, as there is no restriction placed on the number of digits that can be present. Does precision here also have a connection to the number of digits?
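One common informal link between error and "number of correct digits" (my own illustration, not a definition taken from any of the texts mentioned) is that the number of correct significant digits is roughly $-\log_{10}$ of the relative error:

```python
import math

def correct_digits(approx, exact):
    """Rough count of correct significant digits: -log10(relative error)."""
    rel = abs(approx - exact) / abs(exact)
    return float("inf") if rel == 0 else -math.log10(rel)

# 3.1416 agrees with pi to roughly 5-6 significant digits
d = correct_digits(3.1416, math.pi)
print(f"{d:.1f} correct digits")
```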



And the last question that is hounding my head: is big-O notation used to represent the order of accuracy or the order of precision? And when using big-O notation with a Taylor polynomial, is it representing the order of accuracy, the order of precision, or the order of error, as given in:



$$f(x)=f(x)+hf'(x)+h^2f''(x)/2!+h^3f'''(x)/3!+O(h^4)$$
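In this formula the big O describes the truncation error: the terms left out are bounded by a constant times $h^4$ for small $h$. A quick numerical check (my own sketch, using the expansion about a point $x_0=0$ with $f=\exp$): halving $h$ shrinks the remainder by roughly $2^4=16$.

```python
import math

def taylor3(h):
    # cubic Taylor polynomial of exp about x0 = 0; the dropped terms are O(h^4)
    return 1.0 + h + h**2 / 2 + h**3 / 6

hs = [0.2, 0.1, 0.05]
errs = [abs(math.exp(h) - taylor3(h)) for h in hs]
ratios = [errs[i] / errs[i + 1] for i in range(len(errs) - 1)]
print(ratios)   # each ratio is close to 16, i.e. fourth-order decay
```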







  • Usually the step size is small, like $h = .1$ or $h=.001$ or something. Convergence results usually guarantee an accurate solution only when $h$ is sufficiently small, or perhaps only in the limit as $h$ approaches $0$. – littleO, 2 hours ago

  • @littleO Yes, but what does 'small' mean? 1 or 2 is small when compared to 1000. Also, in numerical algorithms I have not come across the idea of a limit yet. – GRANZER, 2 hours ago

  • @littleO Also, if the step size is taken as a spatial distance and the distance is given in mm, then 1 mm, while being small, is still a numeral greater than 1. And if the distance is taken in km then it would be 0.0001, which would be much less than 1. This seems very ambiguous. – GRANZER, 2 hours ago














edited 2 hours ago · asked 2 hours ago by GRANZER
2 Answers
Your Taylor series should be
$$f(x_0+h)=f(x_0)+hf'(x_0)+h^2f''(x_0)/2!+h^3f'''(x_0)/3!+O(h^4)$$
because the derivatives are taken at the point $x_0$. It is useful to note that if $x$ has units of length, the derivatives have units of inverse length to a power that matches the order of the derivative. This gives the approximate length scale of changes in $f$; you want $h$ to be smaller than this scale so that the terms in the Taylor series are decreasing.

Order means different things in different contexts. In the context of methods for numerically solving differential equations, it reflects how many terms of the Taylor series are accounted for. If you use the simple Euler method you get $$f(x_0+h)=f(x_0)+hf'(x_0)$$
This accounts for the first term in the Taylor series. It will be exact when $f$ is linear, so we call it a first-order method. The popular fourth-order Runge-Kutta method accounts for the first four terms of the Taylor series. It will be exact any time $f$ is a polynomial of fourth degree or less, and its error term will be $O(h^5)$. This is not at all the same as the big O used in the analysis of the running time of programs: it reflects the truncation error of the method, not the running time.
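The orders described above can also be measured empirically. A small sketch (my own, not part of the answer): integrate $y'=y$, $y(0)=1$ up to $t=1$ with Euler and classical RK4, and estimate the order from how the error shrinks when the step is halved.

```python
import math

def euler(f, y0, t1, n):
    # explicit Euler: matches one derivative term per step, first-order accurate
    h, t, y = t1 / n, 0.0, y0
    for _ in range(n):
        y += h * f(t, y)
        t += h
    return y

def rk4(f, y0, t1, n):
    # classical Runge-Kutta: matches four Taylor terms, fourth-order accurate
    h, t, y = t1 / n, 0.0, y0
    for _ in range(n):
        k1 = f(t, y)
        k2 = f(t + h / 2, y + h / 2 * k1)
        k3 = f(t + h / 2, y + h / 2 * k2)
        k4 = f(t + h, y + h * k3)
        y += h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        t += h
    return y

f = lambda t, y: y                      # y' = y, exact solution y(t) = e^t
exact = math.e
err_euler = [abs(euler(f, 1.0, 1.0, n) - exact) for n in (50, 100)]
err_rk4 = [abs(rk4(f, 1.0, 1.0, n) - exact) for n in (50, 100)]
p_euler = math.log2(err_euler[0] / err_euler[1])   # observed order, about 1
p_rk4 = math.log2(err_rk4[0] / err_rk4[1])         # observed order, about 4
print(p_euler, p_rk4)
```

Estimating the order as $\log_2$ of the error ratio under step halving is a standard sanity check when implementing a new scheme.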






– Ross Millikan, answered 32 mins ago
Your conclusion, that $E(h) = ch^p$ implies larger step sizes like $h>10$ might be better than a step size like $h=1$, is based on a common misinterpretation of the $\mathcal O(h^p)$ notation.

Most error estimates are only true for small $h$!

By definition,

$$ \mathcal O(h^p) = \left\{\, f:\mathbb R \to \mathbb R \;\middle|\; \text{there exist } c>0 \text{ and } \delta_f>0 \text{ such that } |f(h)| \le c|h|^p \text{ for all } |h|<\delta_f \,\right\}. $$

The point here is that the statement $E(h) \in \mathcal O(h^p)$ implies that in some small region $[-\delta_E,\delta_E]$ the estimate

$$|E(h)| \le |ch^p| \quad \text{for all } -\delta_E < h < \delta_E$$

holds. But for larger step sizes $h$ the estimate might not be true! And in many cases we just hope that $h$ is simply small enough. (Sometimes we can compute $\delta_E$, which is better.)



Relation to the number of accurate digits.

You are right that there is a difference between the order of accuracy and the correct number of digits of an approximation.

Let us assume that rounding errors during the computation of the approximation of $f$ are of magnitude $10^{-8}$ and we use an approximation of order $p=2$.

Let me lie a little bit for a moment: the total error will be a combination of both errors,

$$ |f_{\text{approx,float}}(h) - f(h)| \le |f_{\text{approx,float}}(h) - f_{\text{approx}}(h)| + |f_{\text{approx}}(h) - f(h)| \le 10^{-8} + |ch^p|. $$

Here you see that the round-off errors are responsible for only a small fraction of the total error. If $c=10^3$ and $h=10^{-5}$, the order of the approximation contributes the most ($ch^p = 10^{-7}$, versus $10^{-8}$ from rounding).

This is quite common, and therefore we mostly focus on finding good approximations which are easy to compute.

(The statements above are a bit simplified: if $h$ is taken really small, the round-off errors usually become really large, since we often do things like divide by $h$, which suddenly causes large errors in the approximation.)
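The trade-off in that parenthetical can be seen directly with a forward difference, which divides by $h$ (my own sketch, not part of the answer): the truncation error shrinks like $h$, but below some threshold the round-off term, roughly machine epsilon over $h$, takes over and the total error grows again.

```python
import math

def fd_error(h):
    # total error of the forward-difference derivative of exp at x = 1:
    # roughly (e/2)*h truncation plus (eps*e)/h round-off
    return abs((math.exp(1.0 + h) - math.exp(1.0)) / h - math.exp(1.0))

e_big = fd_error(1e-2)    # truncation-dominated: about (e/2) * 1e-2
e_mid = fd_error(1e-8)    # near the sweet spot, around sqrt(machine eps)
e_tiny = fd_error(1e-13)  # round-off typically dominates at this scale
print(e_big, e_mid, e_tiny)
```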






    share|cite





















      Your Answer




      StackExchange.ifUsing("editor", function ()
      return StackExchange.using("mathjaxEditing", function ()
      StackExchange.MarkdownEditor.creationCallbacks.add(function (editor, postfix)
      StackExchange.mathjaxEditing.prepareWmdForMathJax(editor, postfix, [["$", "$"], ["\\(","\\)"]]);
      );
      );
      , "mathjax-editing");

      StackExchange.ready(function()
      var channelOptions =
      tags: "".split(" "),
      id: "69"
      ;
      initTagRenderer("".split(" "), "".split(" "), channelOptions);

      StackExchange.using("externalEditor", function()
      // Have to fire editor after snippets, if snippets enabled
      if (StackExchange.settings.snippets.snippetsEnabled)
      StackExchange.using("snippets", function()
      createEditor();
      );

      else
      createEditor();

      );

      function createEditor()
      StackExchange.prepareEditor(
      heartbeatType: 'answer',
      convertImagesToLinks: true,
      noModals: false,
      showLowRepImageUploadWarning: true,
      reputationToPostImages: 10,
      bindNavPrevention: true,
      postfix: "",
      noCode: true, onDemand: true,
      discardSelector: ".discard-answer"
      ,immediatelyShowMarkdownHelp:true
      );



      );








       

      draft saved


      draft discarded


















      StackExchange.ready(
      function ()
      StackExchange.openid.initPostLogin('.new-post-login', 'https%3a%2f%2fmath.stackexchange.com%2fquestions%2f2873291%2fwhat-is-the-intuitive-meaning-of-order-of-accuracy-and-order-of-approximation%23new-answer', 'question_page');

      );

      Post as a guest






























      2 Answers
      2






      active

      oldest

      votes








      2 Answers
      2






      active

      oldest

      votes









      active

      oldest

      votes






      active

      oldest

      votes








      up vote
      0
      down vote













      Your Taylor series should be
      $$f(x)=f(x_0)+hf'(x_0)+h^2f''(x_0)/2!+h^3f'''(x_0)/3!+O(h^4)$$
      because the derivatives are taken at the point $x_0$. It is useful to note that if $x$ has units of length the derivatives have units of inverse length to a power that matches the order of the derivative. This is the approximate linear scale of changes in $f$. You want $h$ to be smaller than this scale so the terms in the Taylor series are decreasing.



      Order means different things in different contexts. In the context of methods for numerically solving differential equations it reflects how many terms of the Taylor series are accounted for. If you use the simple Euler method you get $$f(x_0+h)=f(x_0)+hf'(x_0)$$
      This accounts for the first term in the Taylor series. It will be exact when $f$ is linear, so we call it a first order method. The popular fourth order Runge-Kutta method accounts for the first four terms of the Taylor series. It will be exact any time $f$ is a polynomial of fourth degree or less. Its error term will be $O(h^5)$. This is not at all the same as the big O used in analysis of running time of programs. It reflects the truncation error of the method, not the running time.






      share|cite|improve this answer

























        up vote
        0
        down vote













        Your Taylor series should be
        $$f(x)=f(x_0)+hf'(x_0)+h^2f''(x_0)/2!+h^3f'''(x_0)/3!+O(h^4)$$
        because the derivatives are taken at the point $x_0$. It is useful to note that if $x$ has units of length the derivatives have units of inverse length to a power that matches the order of the derivative. This is the approximate linear scale of changes in $f$. You want $h$ to be smaller than this scale so the terms in the Taylor series are decreasing.



        Order means different things in different contexts. In the context of methods for numerically solving differential equations it reflects how many terms of the Taylor series are accounted for. If you use the simple Euler method you get $$f(x_0+h)=f(x_0)+hf'(x_0)$$
        This accounts for the first term in the Taylor series. It will be exact when $f$ is linear, so we call it a first order method. The popular fourth order Runge-Kutta method accounts for the first four terms of the Taylor series. It will be exact any time $f$ is a polynomial of fourth degree or less. Its error term will be $O(h^5)$. This is not at all the same as the big O used in analysis of running time of programs. It reflects the truncation error of the method, not the running time.






        share|cite|improve this answer























          up vote
          0
          down vote










          up vote
          0
          down vote









          Your Taylor series should be
          $$f(x)=f(x_0)+hf'(x_0)+h^2f''(x_0)/2!+h^3f'''(x_0)/3!+O(h^4)$$
          because the derivatives are taken at the point $x_0$. It is useful to note that if $x$ has units of length the derivatives have units of inverse length to a power that matches the order of the derivative. This is the approximate linear scale of changes in $f$. You want $h$ to be smaller than this scale so the terms in the Taylor series are decreasing.



          Order means different things in different contexts. In the context of methods for numerically solving differential equations it reflects how many terms of the Taylor series are accounted for. If you use the simple Euler method you get $$f(x_0+h)=f(x_0)+hf'(x_0)$$
          This accounts for the first term in the Taylor series. It will be exact when $f$ is linear, so we call it a first order method. The popular fourth order Runge-Kutta method accounts for the first four terms of the Taylor series. It will be exact any time $f$ is a polynomial of fourth degree or less. Its error term will be $O(h^5)$. This is not at all the same as the big O used in analysis of running time of programs. It reflects the truncation error of the method, not the running time.






          share|cite|improve this answer













          Your Taylor series should be
          $$f(x)=f(x_0)+hf'(x_0)+h^2f''(x_0)/2!+h^3f'''(x_0)/3!+O(h^4)$$
          because the derivatives are taken at the point $x_0$. It is useful to note that if $x$ has units of length the derivatives have units of inverse length to a power that matches the order of the derivative. This is the approximate linear scale of changes in $f$. You want $h$ to be smaller than this scale so the terms in the Taylor series are decreasing.



          Order means different things in different contexts. In the context of methods for numerically solving differential equations it reflects how many terms of the Taylor series are accounted for. If you use the simple Euler method you get $$f(x_0+h)=f(x_0)+hf'(x_0)$$
          This accounts for the first term in the Taylor series. It will be exact when $f$ is linear, so we call it a first order method. The popular fourth order Runge-Kutta method accounts for the first four terms of the Taylor series. It will be exact any time $f$ is a polynomial of fourth degree or less. Its error term will be $O(h^5)$. This is not at all the same as the big O used in analysis of running time of programs. It reflects the truncation error of the method, not the running time.







          share|cite|improve this answer













          share|cite|improve this answer



          share|cite|improve this answer











          answered 32 mins ago









          Ross Millikan

          275k21183348




          275k21183348




















              up vote
              0
              down vote













              Your conclusion, that $E(h) = ch^p$ implies larger stepsizes like $h>10$ might be better than a stepsize like $h=1$ is based on a common misinterpretation of the $mathcal O(h^p)$ notation.



              Most error estimates are only true for small $h$!



              By definition



              $$ mathcal O(h^p) = f: mathbb R to mathbb R mid text there exists some $delta_f$ such that . $$



              The point here is, that the statement $E(h) in mathcal O(h^p)$ implies
              that in some small region $[-delta_E,delta_E]$ this estimate holds



              $$|E(h)| < |ch^p|, quad text for all -delta_E < h < delta_E.$$



              But for larger stepsizes $h$ the estimate might not be true! And in many cases we just hope that $h$ is simply small enough! (Sometime we can compute $delta_E$, which is better.)



              Relation to the number of accurate digits.



              You are right, that there is a difference between the order of accuracy and the correct number of digits of an approximation.



              Let us assume that rounding of errors during the computation of the approximation of $f$ are of the magnitude $10^-8$ and we use an approximation of order $p=2$.



              Let me lie a little bit for a moment:
              The total error will be a combination of both errors



              $$ |f_approx,float(h) - f(h)| < |f_approx,float(h) - f_approx(h)| + |f_approx(h) - f(h)| \
              < 10^-8 + |ch^p|.$$



              Here you see, that the round off errors are only responible for a small fraction of the total error. If $c=10^3$ and $h=10^-5$ we see that the order of the approximation contributes the most.



              This is quite common and therefore we mostly focus on finding good approximations which are easy to compute.



              (The statement above are a bit simplified, since if $h$ is taken really small, the round off errors usually become really large since we often do stuff like divide by $h$ etc. Which causes suddenly really large errors in the approximation.)






              share|cite

























                up vote
                0
                down vote













                Your conclusion, that $E(h) = ch^p$ implies larger stepsizes like $h>10$ might be better than a stepsize like $h=1$ is based on a common misinterpretation of the $mathcal O(h^p)$ notation.



                Most error estimates are only true for small $h$!



                By definition



                $$ mathcal O(h^p) = f: mathbb R to mathbb R mid text there exists some $delta_f$ such that . $$



                The point here is, that the statement $E(h) in mathcal O(h^p)$ implies
                that in some small region $[-delta_E,delta_E]$ this estimate holds



                $$|E(h)| < |ch^p|, quad text for all -delta_E < h < delta_E.$$



                But for larger stepsizes $h$ the estimate might not be true! And in many cases we just hope that $h$ is simply small enough! (Sometime we can compute $delta_E$, which is better.)



                Relation to the number of accurate digits.



                You are right, that there is a difference between the order of accuracy and the correct number of digits of an approximation.



                Let us assume that rounding of errors during the computation of the approximation of $f$ are of the magnitude $10^-8$ and we use an approximation of order $p=2$.



                Let me lie a little bit for a moment:
                The total error will be a combination of both errors



                $$ |f_approx,float(h) - f(h)| < |f_approx,float(h) - f_approx(h)| + |f_approx(h) - f(h)| \
                < 10^-8 + |ch^p|.$$



                Here you see, that the round off errors are only responible for a small fraction of the total error. If $c=10^3$ and $h=10^-5$ we see that the order of the approximation contributes the most.



                This is quite common and therefore we mostly focus on finding good approximations which are easy to compute.



                (The statement above are a bit simplified, since if $h$ is taken really small, the round off errors usually become really large since we often do stuff like divide by $h$ etc. Which causes suddenly really large errors in the approximation.)






                share|cite























                  up vote
                  0
                  down vote










                  up vote
                  0
                  down vote









                  Your conclusion, that $E(h) = ch^p$ implies larger stepsizes like $h>10$ might be better than a stepsize like $h=1$ is based on a common misinterpretation of the $mathcal O(h^p)$ notation.



                  Most error estimates are only true for small $h$!



                  By definition



                  $$ mathcal O(h^p) = f: mathbb R to mathbb R mid text there exists some $delta_f$ such that . $$



                  The point here is, that the statement $E(h) in mathcal O(h^p)$ implies
                  that in some small region $[-delta_E,delta_E]$ this estimate holds



                  $$|E(h)| < |ch^p|, quad text for all -delta_E < h < delta_E.$$



                  But for larger step sizes $h$ the estimate might not be true! In many cases we simply hope that $h$ is small enough. (Sometimes we can compute $\delta_E$ explicitly, which is better.)
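                  As a quick numerical check (a minimal sketch; the forward-difference formula, the test function $\sin$, and the point $x=1$ are my illustrative choices, not from the text), a first-order approximation shows $|E(h)| < |ch^p|$ at work for small $h$, and its failure for a large step:

```python
import math

# Forward-difference approximation of f'(x).
# Its truncation error is O(h), i.e. order p = 1 -- but only for small h.
def forward_diff(f, x, h):
    return (f(x + h) - f(x)) / h

x = 1.0
exact = math.cos(x)  # derivative of sin at x

# For small h, halving the step roughly halves the error (p = 1) ...
e1 = abs(forward_diff(math.sin, x, 1e-3) - exact)
e2 = abs(forward_diff(math.sin, x, 5e-4) - exact)
print(e1 / e2)  # ~ 2, consistent with first order

# ... but for a large step like h = 10 the bound |E(h)| < |c h^p|
# says nothing useful: the result is nowhere near f'(x).
e_large = abs(forward_diff(math.sin, x, 10.0) - exact)
print(e_large)  # orders of magnitude larger than e1
```

                  Halving $h$ once more would again roughly halve the error, which is exactly how the order $p$ is measured in practice.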



                  Relation to the number of accurate digits.



                  You are right that there is a difference between the order of accuracy and the number of correct digits of an approximation.



                  Let us assume that rounding errors during the computation of the approximation of $f$ are of magnitude $10^{-8}$ and that we use an approximation of order $p=2$.



                  Let me lie a little bit for a moment:
                  the total error is a combination of both errors,

                  $$ |f_{\text{approx,float}}(h) - f(h)| \le |f_{\text{approx,float}}(h) - f_{\text{approx}}(h)| + |f_{\text{approx}}(h) - f(h)| < 10^{-8} + |ch^p|. $$



                  Here you see that the round-off errors are responsible for only a small fraction of the total error: with $c=10^3$, $h=10^{-5}$ and $p=2$ we get $|ch^p| = 10^{-7}$, so the order of the approximation contributes the most.
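                  To verify the arithmetic (a one-line sketch; the variable names are mine):

```python
# Plugging in the numbers from the text: c = 1e3, h = 1e-5, p = 2.
c, h, p = 1e3, 1e-5, 2
truncation = c * h**p  # |c h^p| = 1e-7
roundoff = 1e-8        # assumed floating-point error level
print(truncation > roundoff)  # the approximation-order term dominates
```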



                  This situation is quite common, and therefore we mostly focus on finding good approximations that are easy to compute.



                  (The statements above are a bit simplified: if $h$ is taken really small, the round-off errors usually become really large, since we often divide by $h$ etc., which suddenly causes very large errors in the approximation.)
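                  This trade-off between truncation and round-off error is easy to observe (an illustrative sketch; the central-difference formula and the particular step sizes are my choices, not from the text):

```python
import math

# Central difference: truncation error ~ c h^2, but round-off error ~ eps / h,
# because we divide a tiny, cancellation-prone difference by 2h.
def central_diff(f, x, h):
    return (f(x + h) - f(x - h)) / (2.0 * h)

x = 1.0
exact = math.cos(x)  # derivative of sin at x

# A moderate step gives an error near machine-precision level ...
err_moderate = abs(central_diff(math.sin, x, 1e-5) - exact)

# ... while an extremely small step makes the error grow again,
# because round-off in f(x+h) - f(x-h) is amplified by the 1/(2h) factor.
err_tiny = abs(central_diff(math.sin, x, 1e-13) - exact)

print(err_moderate, err_tiny)
```

                  Shrinking $h$ helps only until the round-off term $\sim \varepsilon/h$ overtakes the truncation term $\sim ch^2$; past that point the total error increases again.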






                  answered 1 min ago









                  Steffen Plunder























                       
