Convergence in probability implies convergence in expectation.

I would like to see a full solution to the following problem. I have tried approaching it in a number of different ways, but I don't seem to reach the desired result. There is no point in posting my attempts, as they all led to dead ends.




Show that if $X_n\to 0$ in probability, then
$$
E\left[\frac{|X_n|}{1+|X_n|}\right]\to 0 \quad \mbox{as } n\to\infty.
$$




The converse implication holds as well, but that one is easy to prove using Chebyshev's Inequality.
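
For reference, one way to see the converse: since $t\mapsto\frac{t}{1+t}$ is increasing on $[0,\infty)$, Markov's inequality gives, for any $\epsilon>0$,
$$
P(|X_n|\geq\epsilon)=P\left(\frac{|X_n|}{1+|X_n|}\geq\frac{\epsilon}{1+\epsilon}\right)\leq\frac{1+\epsilon}{\epsilon}\,E\left[\frac{|X_n|}{1+|X_n|}\right],
$$
which tends to $0$ whenever the expectation does.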







asked Jul 20 at 10:30 by Andrei Crisan
1 Answer

















Use the Dominated Convergence Theorem: $\frac{|X_n|}{1+|X_n|}\leq 1$. To use the usual version of the DCT you have to pass to a.s. convergent subsequences, but the DCT is valid for convergence in probability too. Alternatively, let $\epsilon \in (0,1)$ and $\delta =\frac{\epsilon}{1-\epsilon}$. Then $E\frac{|X_n|}{1+|X_n|} = E\frac{|X_n|}{1+|X_n|}I_{\{|X_n|\geq\delta\}} + E\frac{|X_n|}{1+|X_n|}I_{\{|X_n|<\delta\}} \leq EI_{\{|X_n|\geq\delta\}} + \frac{\delta}{1+\delta} = P\{|X_n|\geq\delta\} + \epsilon$. The result follows from this.
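
In more detail, for the second argument: on the event $\{|X_n|<\delta\}$ the integrand satisfies $\frac{|X_n|}{1+|X_n|}<\frac{\delta}{1+\delta}=\epsilon$ (the map $t\mapsto\frac{t}{1+t}$ is increasing, and $\frac{\delta}{1+\delta}=\epsilon$ by the choice of $\delta$), while on $\{|X_n|\geq\delta\}$ it is bounded by $1$, so
$$
E\left[\frac{|X_n|}{1+|X_n|}\right]\leq P(|X_n|\geq\delta)+\epsilon.
$$
Since $X_n\to 0$ in probability, $P(|X_n|\geq\delta)\to 0$, hence
$$
\limsup_{n\to\infty}E\left[\frac{|X_n|}{1+|X_n|}\right]\leq\epsilon
$$
for every $\epsilon\in(0,1)$, which gives the claim.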






answered Jul 20 at 10:34 by Kavi Rama Murthy (edited Jul 20 at 10:41)






















                     
