Limit of arithmetic mean of series of measurements

Assume we have a series of measurements $x_i$ of, say, some physical quantity with (unknown) real value $x_r$. In a physics book I found the statement that, if we could exclude systematic errors in our measurement, we have
\begin{equation}
(1)\quad x_r=\lim_{n\to\infty}\frac 1n\sum_{i=1}^n x_i.
\end{equation}
There is also some kind of "proof", starting with definitions
\begin{equation}
\bar x_n:=\frac 1n\sum_{i=1}^n x_i,\quad
e_i:=x_r-x_i,\quad
\varepsilon:=x_r-\bar x_n,
\end{equation}
that is,
\begin{equation}
\varepsilon=\frac 1n\sum_{i=1}^n(x_r-x_i)=\frac 1n\sum_{i=1}^n e_i.
\end{equation}



The next step is the calculation
\begin{equation}
\varepsilon^2=\frac{1}{n^2}\left(\sum_i e_i\right)^2=\frac{1}{n^2}\sum_i e_i^2+\frac{1}{n^2}\sum_i\sum_{j\neq i}e_ie_j\approx\frac{1}{n^2}\sum_i e_i^2
\end{equation}
for $n\to\infty$, using that $e_i$ and $e_j$ are statistically independent. I cannot make sense of this conclusion, and in any case, how does this yield the statement (1) above?
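As a quick numerical sanity check (just an illustrative sketch: the zero-mean Gaussian noise, the value $x_r=3.7$ and the spread are arbitrary choices for the illustration, not anything stated in the book), the running mean does appear to settle near $x_r$:

    import numpy as np

    rng = np.random.default_rng(0)
    x_r = 3.7        # hypothetical "true" value, chosen only for this illustration
    sigma = 0.5      # spread of the measurement noise (also an arbitrary choice)

    # simulated measurements x_i = x_r + noise, with zero-mean noise
    x = x_r + sigma * rng.standard_normal(100_000)

    # running arithmetic mean bar{x}_n for n = 1, ..., 100000
    running_mean = np.cumsum(x) / np.arange(1, x.size + 1)

    for n in (10, 100, 1_000, 10_000, 100_000):
        print(f"n = {n:6d}   mean = {running_mean[n - 1]:.4f}")
    # the printed means drift towards x_r = 3.7 as n grows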







asked Jul 30 at 9:30









Don Fuchs

  • This is basically data reconciliation. Have a look at stats.stackexchange.com/questions/304612/…, which is a question of mine. Maybe it could help.
    – Claude Leibovici
    Jul 30 at 10:24










  • This is essentially an attempt at the Law of Large Numbers.
    – Robert Israel
    Jul 30 at 11:02

















1 Answer






This looks an awful lot like the Law of Large Numbers to me.



I'll assume that the measurements are independent and identically distributed and (since you say that we can "exclude systematic errors") that each has mean $x_r$. So $E(x_i)=x_r$ for each $i$, $E(e_i)=x_r-E(x_i)=0$ for each $i$, and $E(\bar x_n)=E\left(\frac1n\sum_i x_i\right)=\frac1n\sum_i E(x_i)=x_r$ for all $n$.



I'll also edit your book's notation a little: I'll use $\varepsilon_n:=x_r-\bar x_n$ instead of just $\varepsilon$, as it seems that the book's $\varepsilon$ still depends on $n$.



We have $E(\varepsilon_n)=x_r-E(\bar x_n)=0$ for all $n$. Because of this,
$$\mathrm{Var}(\varepsilon_n)=E(\varepsilon_n^2)-E(\varepsilon_n)^2=E(\varepsilon_n^2)-0=E(\varepsilon_n^2),$$
so your final line of working suggests that the book is really computing the variance of $\varepsilon_n$.



Furthermore, if the $x_i$ are independent, then the $e_i$ are also independent, which means that their covariances are zero. So, for all $i\neq j$:
$$0=\mathrm{Cov}(e_i,e_j)=E(e_ie_j)-E(e_i)E(e_j)=E(e_ie_j)-0=E(e_ie_j),$$
which suggests why the book ignores the $\frac{1}{n^2}\sum_i\sum_{j\neq i}e_ie_j$ term.



Putting all of this together (and taking the expectation of your final line of working), we get:



$$
\begin{align}
\mathrm{Var}(\varepsilon_n) &= E(\varepsilon_n^2)
\\ &= \ldots
\\ &= \frac{1}{n^2}\sum_i E(e_i^2)
\\ &= \frac{1}{n^2}\sum_i \mathrm{Var}(e_i)
\\ &= \frac{1}{n^2}\times n \times \mathrm{Var}(e_1)
\\ &= \frac{1}{n}\,\mathrm{Var}(e_1),
\end{align}$$
where we've used the fact that if the $e_i$ are identically distributed then they all have the same variance.
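As a quick numerical check of this $\frac1n$ scaling (a minimal sketch; the zero-mean Gaussian errors and the particular value of $\mathrm{Var}(e_1)$ are assumptions made only for the illustration, and any error distribution with finite variance behaves the same way):

    import numpy as np

    rng = np.random.default_rng(1)
    var_e = 0.25              # Var(e_1); arbitrary value for the illustration
    trials = 20_000           # number of repeated experiments for each n

    for n in (10, 100, 1000):
        # each row of e is one experiment consisting of n measurement errors
        e = np.sqrt(var_e) * rng.standard_normal((trials, n))
        eps_n = e.mean(axis=1)        # epsilon_n = (1/n) sum_i e_i per experiment
        print(f"n = {n:4d}   empirical Var(eps_n) = {eps_n.var():.5f}"
              f"   Var(e_1)/n = {var_e / n:.5f}")
    # the two columns agree closely, illustrating Var(eps_n) = Var(e_1)/n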



Intuitively, you can now see that as $n$ gets large, the variance of your error term $\varepsilon_n$ becomes small. More rigorously, we could use Chebyshev's inequality to show that $\varepsilon_n$ converges in probability to zero and hence that $\bar x_n$ converges in probability to $x_r$ (your statement (1)).
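To spell out that Chebyshev step: for any fixed $\delta>0$, using $E(\varepsilon_n)=0$ and the variance computed above,
$$P(|\varepsilon_n|\geq\delta)\leq\frac{\mathrm{Var}(\varepsilon_n)}{\delta^2}=\frac{\mathrm{Var}(e_1)}{n\delta^2}\to 0\quad\text{as } n\to\infty,$$
so $P(|\bar x_n-x_r|\geq\delta)\to 0$ for every $\delta>0$, which is the precise sense in which $\bar x_n\to x_r$ here.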






        answered Jul 30 at 11:26









        Malkin
