Proof of Total Probability Theorem for Conditional Probability
The law of total probability states:
Let $\left(\Omega, \Sigma, \Pr\right)$ be a probability space.
Let $\left\{B_1, B_2, \ldots\right\}$ be a partition of $\Omega$ such that $\forall i: \Pr\left(B_i\right) > 0$.
Then:
$\displaystyle \forall A \in \Sigma: \Pr\left(A\right) = \sum_i \Pr\left(A \mid B_i\right) \Pr\left(B_i\right)$
I want to prove that this also holds for conditional probabilities. So basically I want to prove the following:
Let $\left(\Omega, \Sigma, \Pr\right)$ be a probability space.
Let $\left\{B_1, B_2, \ldots\right\}$ be a partition of $\Omega$ such that $\forall i: \Pr\left(B_i\right) > 0$.
Then:
$\displaystyle \forall A, C \in \Sigma: \Pr\left(A \mid C\right) = \sum_i \Pr\left(A \mid C \cap B_i\right) \Pr\left(B_i\right)$
This is how I attempted it:
$$\Pr(A \mid C) = \Pr(A \mid C \cap \Omega) = \Pr\left(A \;\middle|\; C \cap \left(\bigcup_i B_i\right)\right)$$ because $\left\{B_1, B_2, \ldots\right\}$ is a partition. Then, using the fact that intersection distributes over union, I got: $$\Pr\left(A \;\middle|\; C \cap \left(\bigcup_i B_i\right)\right) = \Pr\left(A \;\middle|\; \bigcup_i \left(C \cap B_i\right)\right)$$
I can't go any further. I know that in a probability space the probability measure $\Pr$ is countably additive. I also know that if $(\Omega, \Sigma, \Pr)$ is a probability space, then the triple $(\Omega, \Sigma, Q)$ with $$Q(A) := \Pr(A \mid C)$$ is a probability space as well. But I have no idea how to use these two facts to finish the proof.
probability probability-theory
edited Jul 29 at 22:11
Andrés E. Caicedo
asked Jul 29 at 20:48
Euler_Salter
1 Answer
You didn't state the result for conditional probabilities correctly. Here's an easy way to see how the correct result is derived.
The conditional probability $P_C := P(\cdot \mid C)$ is a probability measure, so apply the law of total probability to it to get
$$P_C(A) = \sum_i P_C(A \mid B_i)\, P_C(B_i).$$
Now show that $P_C(A \mid B_i) = P(A \mid B_i \cap C)$. Thus,
$$P(A \mid C) = \sum_i P(A \mid B_i \cap C)\, P(B_i \mid C).$$
That's the general form of the law of total probability for conditional probabilities. If, in addition, we assume that $B_i$ and $C$ are independent, so that $P(B_i \mid C) = P(B_i)$, then the general law reduces to what you wrote, namely
$$P(A \mid C) = \sum_i P(A \mid B_i \cap C)\, P(B_i).$$
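For reference, the identity $P_C(A \mid B_i) = P(A \mid B_i \cap C)$ that the answer leaves as an exercise follows in one line from the definition of conditional probability (a sketch, assuming $P(B_i \cap C) > 0$ so every ratio below is defined):

$$P_C(A \mid B_i) = \frac{P_C(A \cap B_i)}{P_C(B_i)} = \frac{P(A \cap B_i \cap C)/P(C)}{P(B_i \cap C)/P(C)} = \frac{P(A \cap B_i \cap C)}{P(B_i \cap C)} = P(A \mid B_i \cap C).$$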
According to Wikipedia here (en.wikipedia.org/wiki/Law_of_total_probability#Statement) we have that the latter should be $P(B_i)$
– Euler_Salter
Jul 29 at 21:05
@Euler_Salter I think you're missing the claim about independence.
– aduh
Jul 29 at 21:06
I'd rephrase that a bit: if there is independence, then your statement looks great. My answer is general and holds with or without independence assumptions.
– aduh
Jul 29 at 21:09
The expression $P((A \mid B_i) \mid C)$ has no meaning, but if you simply delete it, then what you wrote is okay.
– aduh
Jul 29 at 21:39
$\mid$ is not a set operation; it is the divider between the event being measured and the condition under which it is measured. There can be at most one such divider in a probability measure expression. It would be somewhat more correct to say $$P_C(A \mid B_i) = P_{B_i \cap C}(A)$$ because it is the probability measure of event $A$ under the joint conditions of $B_i$ and $C$.
– Graham Kemp
Jul 30 at 3:23
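As a numerical sanity check of the accepted answer's general law $P(A \mid C) = \sum_i P(A \mid B_i \cap C)\, P(B_i \mid C)$, it can be verified exactly on a small discrete space. The weights and event choices below are purely illustrative, not taken from the thread:

```python
from fractions import Fraction

# A tiny discrete probability space on outcomes 0..5 with unequal weights.
# Using Fraction keeps every probability exact, so equality is exact too.
weights = {0: Fraction(1, 12), 1: Fraction(2, 12), 2: Fraction(3, 12),
           3: Fraction(1, 12), 4: Fraction(2, 12), 5: Fraction(3, 12)}

def P(event):
    """Probability of an event, i.e. a set of outcomes."""
    return sum(weights[w] for w in event)

def P_cond(event, given):
    """Conditional probability P(event | given); assumes P(given) > 0."""
    return P(event & given) / P(given)

A = {0, 2, 4}
C = {1, 2, 3, 4}
partition = [{0, 1, 2}, {3, 4, 5}]  # B_1, B_2: a partition of Omega

# General law of total probability under the conditioning event C:
lhs = P_cond(A, C)
rhs = sum(P_cond(A, C & B) * P_cond(B, C) for B in partition)
print(lhs, rhs, lhs == rhs)
```

Note that here $B_1$ and $C$ are not independent ($P(B_1 \mid C) = 5/8 \neq 1/2 = P(B_1)$), so the variant with $P(B_i)$ in place of $P(B_i \mid C)$ would fail on this example, while the general law holds exactly.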
edited Jul 29 at 21:11
answered Jul 29 at 21:01
aduh