$E[E[X\mid Y]] = \sum_{y \,:\, P(Y=y)>0} E[X\mid Y=y] \cdot P(Y=y)$












I'm looking at the proof of

$$E[X] = E[E[X\mid Y]]$$

but I'm having trouble seeing why, if we take $X, Y$ to be discrete random variables, we have

$$E[E[X\mid Y]] = \sum_{y \,:\, P(Y=y)>0} E[X\mid Y=y] \cdot P(Y=y)$$

I know that $E[X\mid Y]$ can be defined as a random variable $E[X\mid Y=y](\omega)$ if $Y(\omega) = y$, but from that I lose a bit of intuition.

Does it mean that, if we want the expected value of $E[X\mid Y]$, knowing that it is a random variable whose variable is $y \in \operatorname{Im}(Y)$, we must simply sum, over the whole probability space, the expected values of $X$ knowing $Y = y$?

Basically, if someone has a good intuitive explanation I'd be very happy!

Also, it is written that

$$\sum_{y \,:\, P(Y=y)>0} E[X\mid Y=y]\cdot P(Y=y) = \sum_{y \,:\, P(Y=y)>0} \frac{E[X\cdot\mathbb{1}_{(Y=y)}]}{P(Y=y)}\, P(Y=y)$$

(where $\mathbb{1}$ is the indicator function)

$$= \sum_{y \,:\, P(Y=y)>0} E[X\cdot\mathbb{1}_{(Y=y)}] = E\Big[X\cdot \sum_{y \,:\, P(Y=y)>0} \mathbb{1}_{(Y=y)}\Big] = E[X]$$

and I'm also having trouble following that part.

Thank you all for everything!







asked Jul 18 at 21:23 by Ian Leclaire; edited Jul 18 at 23:51 by Michael Hardy




















2 Answers

















Accepted answer (2 votes)










For the first equation:

$$E[E[X\mid Y]] = \sum_{y \,:\, P(Y=y)>0} E[X\mid Y=y] \cdot P(Y=y)$$

it helps to think of $E[X\mid Y]$ as a function $f(Y)$, which is a random variable (see here). Then, as with any expected value, you iterate over all the possible values of the random variable, multiplying each by its probability:

$$E[f(Y)] = \sum_{y \,:\, P(Y=y)>0} f(y) \cdot P(Y=y)$$

For your doubt about the last equations, I will write the proof in another way; maybe it helps.

$$E[E[X\mid Y]] = \sum_{y \,:\, P(Y=y)>0} E[X\mid Y=y] \cdot P(Y=y) = \sum_{y \,:\, P(Y=y)>0} \Big( \sum_{x \in \operatorname{Im}(X)} x\, P(X=x \mid Y=y) \Big) \cdot P(Y=y)$$

$$= \sum_{y \,:\, P(Y=y)>0} \Big( \sum_{x \in \operatorname{Im}(X)} x\, \frac{P(X=x,\, Y=y)}{P(Y=y)} \Big) \cdot P(Y=y) = \sum_{x \in \operatorname{Im}(X)} x \sum_{y \,:\, P(Y=y)>0} P(X=x,\, Y=y)$$

$$= \sum_{x \in \operatorname{Im}(X)} x\, P(X=x) = E[X]$$
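The double-sum rearrangement above can be sanity-checked numerically. Here is a minimal sketch; the joint pmf is made up purely for illustration:

```python
# Sanity check of E[E[X|Y]] = E[X] for a small, made-up discrete joint pmf.
# joint[(x, y)] = P(X = x, Y = y); the values sum to 1.
joint = {
    (0, 0): 0.10, (0, 1): 0.20,
    (1, 0): 0.25, (1, 1): 0.15,
    (2, 0): 0.05, (2, 1): 0.25,
}

ys = sorted({y for _, y in joint})

def p_y(y):
    """Marginal P(Y = y) obtained by summing the joint pmf over x."""
    return sum(p for (xv, yv), p in joint.items() if yv == y)

def e_x_given_y(y):
    """E[X | Y = y] = sum_x x * P(X = x, Y = y) / P(Y = y)."""
    return sum(xv * p for (xv, yv), p in joint.items() if yv == y) / p_y(y)

# Left-hand side: sum over y with P(Y=y) > 0 of E[X | Y = y] * P(Y = y)
lhs = sum(e_x_given_y(y) * p_y(y) for y in ys if p_y(y) > 0)

# Right-hand side: E[X] computed directly from the joint pmf
rhs = sum(xv * p for (xv, _), p in joint.items())

assert abs(lhs - rhs) < 1e-12
```

Swapping in any other valid joint pmf leaves the assertion true, which is exactly the content of the identity.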






answered Jul 18 at 23:11 by Enzo Nakamura; edited Jul 19 at 2:38 by Did












































"I know that $E[X\mid Y]$ can be defined as a random variable $E[X\mid Y=y](\omega)$ if $Y(\omega) = y$"

Not quite; it is the other way around. $\mathsf E[X\mid Y=y]$ is defined as the value of the random variable $\mathsf E[X\mid Y]$ at every outcome $\omega$ where $Y(\omega) = y$:

$$\forall\, \omega \in Y^{-1}(y): \quad \mathsf E(X\mid Y)(\omega) = \mathsf E(X\mid Y=y)$$

So

$$\begin{align}
\mathsf E(\mathsf E(X\mid Y)) &= \sum_{\omega\in\Omega} \mathsf E(X\mid Y)(\omega)\cdot\mathsf P\{\omega\} && \text{by definition}
\\[1ex] &= \sum_{y\in Y(\Omega)} \sum_{\omega\in Y^{-1}(y)} \mathsf E(X\mid Y)(\omega)\cdot\mathsf P\{\omega\} && \text{partitioning the series}
\\[1ex] &= \sum_{y\in Y(\Omega)} \mathsf E(X\mid Y=y) \sum_{\omega\in Y^{-1}(y)} \mathsf P\{\omega\} && \text{by definition of } \mathsf E(X\mid Y=y)
\\[1ex] &= \sum_{y\in Y(\Omega)} \mathsf E(X\mid Y=y)\cdot\mathsf P\{\omega\in\Omega : Y(\omega)=y\} && \text{by countable additivity}
\\[1ex] &= \sum_{y\in Y(\Omega)} \mathsf E(X\mid Y=y)\cdot\mathsf P(Y=y) && \text{abbreviation}
\end{align}$$

Now, what kind of value is $\mathsf E(X\mid Y=y)$? Well, for any event $E$ with nonzero probability measure, we define $\mathsf E(X\mid E) = \mathsf E(X\,\mathbf 1_E) \div \mathsf P(E)$.

$$\begin{align}
\mathsf E(\mathsf E(X\mid Y)) &= \sum_{y \,:\, \mathsf P(Y=y)>0} \frac{\mathsf E(X\,\mathbf 1_{Y=y})}{\mathsf P(Y=y)}\,\mathsf P(Y=y) + \sum_{y \,:\, \mathsf P(Y=y)=0} 0
\\[1ex] &= \sum_{y \,:\, \mathsf P(Y=y)>0} \mathsf E(X\,\mathbf 1_{Y=y})
\\[1ex] &= \sum_{y \,:\, \mathsf P(Y=y)>0} \sum_{\omega\in\Omega : Y(\omega)=y} X(\omega)\,\mathsf P\{\omega\}
\\[1ex] &= \sum_{\omega\in\Omega} X(\omega)\,\mathsf P\{\omega\}
\\[1ex] &= \mathsf E(X)
\end{align}$$
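The outcome-by-outcome derivation above can be mirrored directly in code. This is a small sketch in which the sample space, the probabilities, and the values of $X$ and $Y$ are all invented for illustration:

```python
# A six-outcome sample space; P[w], X[w], Y[w] are made up for illustration.
omegas = ["a", "b", "c", "d", "e", "f"]
P = {"a": 0.1, "b": 0.2, "c": 0.1, "d": 0.3, "e": 0.2, "f": 0.1}
X = {"a": 1, "b": 4, "c": 2, "d": 0, "e": 3, "f": 5}
Y = {"a": 0, "b": 0, "c": 1, "d": 1, "e": 2, "f": 2}

def e_x():
    # E(X) = sum over omega of X(omega) * P{omega}
    return sum(X[w] * P[w] for w in omegas)

def e_x_given(y):
    # E(X | Y=y) = E(X * 1_{Y=y}) / P(Y=y), defined when P(Y=y) > 0
    p_event = sum(P[w] for w in omegas if Y[w] == y)
    return sum(X[w] * P[w] for w in omegas if Y[w] == y) / p_event

# E(E(X|Y)) computed as sum over y in Y(Omega) of E(X|Y=y) * P(Y=y);
# this partitions Omega by the value of Y, exactly as in the derivation.
tower = sum(
    e_x_given(y) * sum(P[w] for w in omegas if Y[w] == y)
    for y in set(Y.values())
)

assert abs(tower - e_x()) < 1e-12
```

Each step of the algebra corresponds to a regrouping of the same finite sum over outcomes, which is why the assertion holds for any choice of `P`, `X`, and `Y` on a finite sample space.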






answered Jul 19 at 2:15 by Graham Kemp
























                               
