Proof of Total Probability Theorem for Conditional Probability

The law of total probability states:




Let $\left(\Omega, \Sigma, \Pr\right)$ be a probability space.



Let $\left\{B_1, B_2, \ldots\right\}$ be a partition of $\Omega$ such that $\forall i: \Pr\left(B_i\right) > 0$.



Then:
$\displaystyle \forall A \in \Sigma: \Pr\left(A\right) = \sum_i \Pr\left(A \mid B_i\right) \Pr\left(B_i\right)$
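(For reference, a quick sketch of why this holds, using countable additivity and the definition of conditional probability:
$$\Pr(A) = \Pr\left(\bigcup_i \left(A \cap B_i\right)\right) = \sum_i \Pr\left(A \cap B_i\right) = \sum_i \Pr\left(A \mid B_i\right)\Pr\left(B_i\right),$$
since the events $A \cap B_i$ are pairwise disjoint.)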




I want to prove that this also holds for conditional probabilities. So, basically, I want to prove the following:




Let $\left(\Omega, \Sigma, \Pr\right)$ be a probability space.



Let $\left\{B_1, B_2, \ldots\right\}$ be a partition of $\Omega$ such that $\forall i: \Pr\left(B_i\right) > 0$.



Then:
$\displaystyle \forall A, C \in \Sigma: \Pr\left(A \mid C\right) = \sum_i \Pr\left(A \mid C \cap B_i\right) \Pr\left(B_i\right)$





This is how I attempted it:
$$\Pr(A\mid C) = \Pr(A\mid C\cap \Omega) = \Pr\left(A\mid C\cap\left(\bigcup_i B_i\right)\right)$$ because it is a partition. Then, using the fact that intersection distributes over union, I got: $$\Pr\left(A\mid C\cap\left(\bigcup_i B_i\right)\right) = \Pr\left(A \mid \bigcup_i\left(C\cap B_i\right)\right)$$



I can't go any further. I know that in a probability space the probability measure $\Pr$ is countably additive. I also know that if $(\Omega, \Sigma, \Pr)$ is a probability space, then the triple $(\Omega, \Sigma, \mathrm{Qr})$ with $$\mathrm{Qr}(A) := \Pr(A \mid C)$$ is a probability space as well (provided $\Pr(C) > 0$). But I have no idea how to use these two facts to finish the proof.







asked Jul 29 at 20:48 by Euler_Salter, edited Jul 29 at 22:11 by Andrés E. Caicedo

1 Answer

          You didn't state the result for conditional probabilities correctly. Here's an easy way to see how the correct result is derived.



The conditional probability $P_C := P(\cdot \mid C)$ is a probability measure, so apply the law of total probability to it to get
$$P_C(A) = \sum_i P_C(A \mid B_i)\,P_C(B_i).$$
Now show that $P_C(A \mid B_i) = P(A \mid B_i \cap C)$. Thus,
$$P(A \mid C) = \sum_i P(A \mid B_i \cap C)\,P(B_i \mid C).$$
That's the general form of the law of total probability for conditional probabilities. If, in addition, we assume that $B_i$ and $C$ are independent, so that $P(B_i \mid C) = P(B_i)$, then the general law reduces to what you wrote, namely
$$P(A \mid C) = \sum_i P(A \mid B_i \cap C)\,P(B_i).$$
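
One way to check the intermediate identity, sketched directly from the definition of conditional probability (this assumes $P(B_i \cap C) > 0$, so that every conditional probability involved is defined):
$$P_C(A \mid B_i) = \frac{P_C(A \cap B_i)}{P_C(B_i)} = \frac{P(A \cap B_i \cap C)/P(C)}{P(B_i \cap C)/P(C)} = \frac{P(A \cap B_i \cap C)}{P(B_i \cap C)} = P(A \mid B_i \cap C).$$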






answered Jul 29 at 21:01 by aduh, edited Jul 29 at 21:11























• According to Wikipedia here (en.wikipedia.org/wiki/Law_of_total_probability#Statement) we have that the latter should be $P(B_i)$
  – Euler_Salter, Jul 29 at 21:05

• @Euler_Salter I think you're missing the claim about independence.
  – aduh, Jul 29 at 21:06

• I'd rephrase that a bit: if there is independence, then your statement looks great. My answer is general and holds with or without independence assumptions.
  – aduh, Jul 29 at 21:09

• The expression $P((A \mid B_i) \mid C)$ has no meaning, but if you simply delete it, then what you wrote is okay.
  – aduh, Jul 29 at 21:39

• $\mid$ is not a set operation; it is the divider between the event being measured and the condition under which it is measured. There can be at most one such divider in a probability measure. It would be somewhat more correct to say $$P_C(A\mid B_i)=P_{B_i\cap C}(A),$$ because it is the probability measure of event $A$ under the joint conditions of $B_i$ and $C$.
  – Graham Kemp, Jul 30 at 3:23
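
As a numerical sanity check of the point raised in these comments, here is a small sketch in Python (not from the original thread; the sample space, the partition $B_i$, and the events $A$, $C$ are arbitrary choices for illustration, and terms with $P(B_i \cap C) = 0$ are skipped since they contribute nothing). It compares $P(A \mid C)$ with both $\sum_i P(A \mid B_i \cap C)\,P(B_i \mid C)$ and $\sum_i P(A \mid B_i \cap C)\,P(B_i)$; the general form agrees with $P(A \mid C)$, while the version using $P(B_i)$ does not here, because these $B_i$ are not independent of $C$.

    from fractions import Fraction

    # Tiny finite probability space: outcomes 0..5, all equally likely.
    pr_point = Fraction(1, 6)

    def pr(event):
        # Probability of a set of outcomes.
        return pr_point * len(event)

    def pr_cond(a, c):
        # Conditional probability P(a | c); assumes P(c) > 0.
        return pr(a & c) / pr(c)

    # A partition {B_1, B_2, B_3} of the sample space and two events A, C
    # (arbitrary choices; the B_i are deliberately not independent of C).
    B = [{0, 1}, {2, 3}, {4, 5}]
    A = {0, 2, 4}
    C = {0, 1, 2}

    lhs = pr_cond(A, C)                                  # P(A | C)
    general = sum(pr_cond(A, Bi & C) * pr_cond(Bi, C)    # sum_i P(A | B_i n C) P(B_i | C)
                  for Bi in B if pr(Bi & C) > 0)
    naive = sum(pr_cond(A, Bi & C) * pr(Bi)              # sum_i P(A | B_i n C) P(B_i)
                for Bi in B if pr(Bi & C) > 0)

    print(lhs, general, naive)   # 2/3 2/3 1/2: the general form agrees, the other does not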










