Cesaro Mean of Sequences - Convergence

Show that if $(x_n)$ is a convergent sequence then the sequence given by the averages $$y_n = \frac{x_1 + x_2 + \cdots + x_n}{n}$$ also converges to the same limit.



Attempt at Proof.



Since $(x_n)$ converges, we can say that for every $\epsilon > 0$ there is an $N$ such that $m \ge N \Rightarrow |x_m - L| < \epsilon$.



Base Case. Let $n=1$. Then for all $m$ such that $m \ge N_0$ we have $|x_1 - L| < \epsilon$, and also for $m \ge N_1$ we have $|x_{n+1} - L| < \epsilon$.



Induction Hypothesis. Assume that for an appropriate choice of $N_2$ we have, for all $m$ with $m \ge N_2$, $|y_n - L| < \epsilon$.



Choose $\max\{N_1,N_2\}$ such that for all $m \ge \max\{N_1,N_2\}$ we have $$\left|\frac{x_1 + x_2 + \cdots + x_n}{n} - L\right| + |x_{n+1} - L| < 2\epsilon$$



$$= \left|\frac{x_1 + x_2 + \cdots + x_n + n x_{n+1}}{n} - 2L\right| < 2\epsilon$$



$$= \left|\frac{x_1 + x_2 + \cdots + x_n + n x_{n+1}}{n+1} - L\right| \le \left|\frac{x_1 + x_2 + \cdots + x_n + n x_{n+1}}{2n} - L\right| < \epsilon$$
and
$$\left|\frac{x_1 + x_2 + \cdots + x_n + x_{n+1}}{n+1} - L\right| < \left|\frac{x_1 + x_2 + \cdots + x_n + n x_{n+1}}{n+1} - L\right| < \epsilon.$$



Is this approach correct? If not, could you please provide a correct proof? I feel as though I made a mistake in the calculations. Thanks in advance.







asked Jul 21 at 23:13 by Red

  • see also this related post
    – G Cab
    Jul 22 at 0:31


2 Answers

















Accepted answer (3 votes) – answered Jul 21 at 23:27 by Kavi Rama Murthy

Your approach is wrong. Induction cannot be used here unless you can get $N$ depending only on $\epsilon$ and not on $n$. Here is a correct proof:
$$|y_n - L| = \left|\frac{(x_1-L)+(x_2-L)+\cdots+(x_n-L)}{n}\right| \leq \frac{|x_1-L|+|x_2-L|+\cdots+|x_n-L|}{n}.$$
Split this into two sums:
$$\frac{|x_1-L|+\cdots+|x_k-L|}{n} + \frac{|x_{k+1}-L|+\cdots+|x_n-L|}{n}.$$
Choose $k$ such that $|x_i-L|<\epsilon$ for $i>k$. Then the second term is less than $\frac{\epsilon+\epsilon+\cdots+\epsilon}{n}=\frac{n-k}{n}\epsilon<\epsilon$. The first term tends to $0$ as $n \to \infty$ (because the numerator does not depend on $n$). We are done.
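As a quick numerical illustration of this splitting argument, here is a minimal Python sketch (my own addition, not part of the answer); the example sequence $x_n = 1 + (-1)^n/n$ is an arbitrary choice of a convergent sequence with limit $L = 1$.

# Minimal numerical sanity check (illustrative only; the example sequence is an arbitrary choice).
# It tabulates the Cesaro means y_n = (x_1 + ... + x_n)/n of a convergent sequence
# and prints |x_n - L| and |y_n - L|, both of which shrink as n grows, as the theorem predicts.

def cesaro_means(x):
    """Return the running averages y_n = (x_1 + ... + x_n)/n of the sequence x."""
    means = []
    total = 0.0
    for n, term in enumerate(x, start=1):
        total += term
        means.append(total / n)
    return means

if __name__ == "__main__":
    L = 1.0
    # Example convergent sequence x_n = 1 + (-1)^n / n, which tends to L = 1.
    x = [1.0 + (-1) ** n / n for n in range(1, 100001)]
    y = cesaro_means(x)
    for n in (10, 100, 1000, 10000, 100000):
        print(f"n = {n:6d}   |x_n - L| = {abs(x[n - 1] - L):.2e}   |y_n - L| = {abs(y[n - 1] - L):.2e}")

Of course this only illustrates the statement; it is not a substitute for the $\epsilon$–$N$ estimate above.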






Answer (1 vote) – answered Jul 22 at 0:05 by Nick Peterson

The intuition for a correct proof of this fact is as follows:

For any $\epsilon>0$, there exists $N$ so that $\lvert x_n-L\rvert<\epsilon$ for all $n\geq N$. Equivalently, we can break the sequence into two parts:

1. Some initial segment $x_1,x_2,\ldots,x_{N-1}$ of terms that can be anything (but there is only a fixed number of them), and

2. A tail $x_{N+1},x_{N+2},\ldots$ of terms that are all close to (read: within $\epsilon$ of) $L$.

If you pick some giant $n$, you get
$$
\frac{x_1+x_2+\cdots+x_n}{n}=\frac{x_1+\cdots+x_N}{n}+\frac{x_{N+1}+x_{N+2}+\cdots+x_n}{n}.
$$
The first term has a fixed numerator, so it tends to $0$ as $n\to\infty$. (The average makes those few initial terms meaningless in the long run.) The second term can easily be seen to satisfy
$$
(L-\epsilon)\frac{n-N}{n}\leq\frac{x_{N+1}+x_{N+2}+\cdots+x_n}{n}\leq(L+\epsilon)\frac{n-N}{n},
$$
and those bounds tend to $L-\epsilon$ and $L+\epsilon$ as $n\to\infty$. (Taking the average of a bunch of terms that are close to $L$ should yield a result close to $L$.)

Can you use these ingredients to complete a proof of the result? The intuition is all there.
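For reference, here is one way these ingredients might be assembled into a full $\epsilon$–$N$ write-up. This is a standard completion sketch of my own, not part of the original answer; the threshold names $N$ and $N'$ are chosen only for illustration.

% A self-contained LaTeX sketch of one possible completion (not from the original answer).
\documentclass{article}
\usepackage{amsmath}
\begin{document}
Let $\epsilon > 0$. Pick $N$ such that $|x_n - L| < \epsilon/2$ for all $n \ge N$.
For $n > N$ split the average as in the answer above:
\[
  y_n - L = \underbrace{\frac{(x_1 - L) + \cdots + (x_N - L)}{n}}_{\text{initial segment}}
          + \underbrace{\frac{(x_{N+1} - L) + \cdots + (x_n - L)}{n}}_{\text{tail}} .
\]
With $C := |x_1 - L| + \cdots + |x_N - L|$, a constant independent of $n$,
\[
  |y_n - L| \le \frac{C}{n} + \frac{n - N}{n}\cdot\frac{\epsilon}{2}
            \le \frac{C}{n} + \frac{\epsilon}{2}.
\]
Finally choose $N' \ge N$ large enough that $C/n < \epsilon/2$ for all $n \ge N'$;
then $|y_n - L| < \epsilon$ for all $n \ge N'$, i.e.\ $y_n \to L$.
\end{document}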





