A contradiction hidden in the definition of the probability of the intersection of events?

Consider a set $C$ composed of three distinguishable kinds of elements $A,B,G$, and let $\alpha,\beta,\gamma>0$ be the numbers of elements of each kind, with $|C|=c=\alpha+\beta+\gamma$.



We define the three events $L_n^A, L_n^B, L_n^G$ as obtaining, in $n$ trials with replacement, at least one element of kind $A$, at least one element of kind $B$, and at least one element of kind $G$, respectively.



The probabilities of these events are
$$P(L_n^A)=1-\left(\frac{c-\alpha}{c}\right)^n=1-\left(\frac{\beta+\gamma}{c}\right)^n,$$
$$P(L_n^B)=1-\left(\frac{c-\beta}{c}\right)^n=1-\left(\frac{\alpha+\gamma}{c}\right)^n,$$
$$P(L_n^G)=1-\left(\frac{c-\gamma}{c}\right)^n=1-\left(\frac{\alpha+\beta}{c}\right)^n.$$
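As a quick sanity check (my own sketch, not part of the original question), here is a short Python snippet comparing these closed forms with a Monte Carlo estimate; the counts $\alpha=2,\beta=3,\gamma=5$ and $n=4$ are purely illustrative.

    import random

    # Illustrative counts (assumed for this sketch, not taken from the question)
    alpha, beta, gamma = 2, 3, 5
    c = alpha + beta + gamma
    pool = ["A"] * alpha + ["B"] * beta + ["G"] * gamma
    n = 4

    def estimate_at_least_one(kind, trials=200_000):
        """Monte Carlo estimate of P(L_n^kind): at least one draw of `kind`
        in n draws with replacement."""
        hits = 0
        for _ in range(trials):
            draws = [random.choice(pool) for _ in range(n)]
            if kind in draws:
                hits += 1
        return hits / trials

    # Closed forms: P(L_n^A) = 1 - ((c - alpha)/c)^n, and similarly for B and G
    print(estimate_at_least_one("A"), 1 - ((c - alpha) / c) ** n)
    print(estimate_at_least_one("B"), 1 - ((c - beta) / c) ** n)
    print(estimate_at_least_one("G"), 1 - ((c - gamma) / c) ** n)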



We evaluate $P(L_n^A\cap L_n^B)$. By the definition of conditional probability and the complement rule, we have
$$
P(L_n^A\cap L_n^B)=P(L_n^A|L_n^B)P(L_n^B)=\left[1-P(\overline{L_n^A}|L_n^B)\right]P(L_n^B)=P(L_n^B)-P(\overline{L_n^A}|L_n^B)P(L_n^B).
$$



By means of Bayes' theorem, $P(\overline{L_n^A}|L_n^B)P(L_n^B)=P(L_n^B|\overline{L_n^A})P(\overline{L_n^A})$. If we know that the event $L_n^A$ did not take place, all the $n$ extractions are either of kind $B$ or of kind $G$. Therefore, $P(L_n^B|\overline{L_n^A})=1-\left(\frac{\beta+\gamma-\beta}{\beta+\gamma}\right)^n=1-\left(\frac{\gamma}{\beta+\gamma}\right)^n$.



In conclusion, since $P(\overline{L_n^A})=\left(\frac{\beta+\gamma}{c}\right)^n$, we obtain
$$
P(L_n^A\cap L_n^B)=P(L_n^B)-P(\overline{L_n^A}|L_n^B)P(L_n^B)=P(L_n^B)-P(L_n^B|\overline{L_n^A})P(\overline{L_n^A})
$$
$$
=1-\left(\frac{\alpha+\gamma}{c}\right)^n-\left[1-\left(\frac{\gamma}{\beta+\gamma}\right)^n\right]\left(\frac{\beta+\gamma}{c}\right)^n
$$
$$
=1-\left(\frac{\alpha+\gamma}{c}\right)^n-\left(\frac{\beta+\gamma}{c}\right)^n+\left(\frac{\gamma}{c}\right)^n.
$$



We notice that if $n=0$ (or $n=1$), then we correctly have $P(L_n^A\cap L_n^B)=0$.
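The following short check (again my own sketch, with the same illustrative counts as above) compares this pairwise formula with a Monte Carlo estimate for several values of $n$, including $n=0$ and $n=1$.

    import random

    alpha, beta, gamma = 2, 3, 5            # illustrative counts (assumed)
    c = alpha + beta + gamma
    pool = ["A"] * alpha + ["B"] * beta + ["G"] * gamma

    def estimate_pair(n, trials=200_000):
        """Monte Carlo estimate of P(L_n^A and L_n^B)."""
        hits = 0
        for _ in range(trials):
            draws = [random.choice(pool) for _ in range(n)]
            if "A" in draws and "B" in draws:
                hits += 1
        return hits / trials

    def pair_formula(n):
        """1 - ((alpha+gamma)/c)^n - ((beta+gamma)/c)^n + (gamma/c)^n."""
        return (1 - ((alpha + gamma) / c) ** n
                  - ((beta + gamma) / c) ** n
                  + (gamma / c) ** n)

    for n in (0, 1, 2, 5):
        print(n, round(estimate_pair(n), 4), round(pair_formula(n), 4))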



We now evaluate $P(L_n^A\cap L_n^B\cap L_n^G)$. As we have done before, we find
$$
P(L_n^A\cap L_n^B\cap L_n^G)=P(L_n^G|L_n^A\cap L_n^B)P(L_n^A\cap L_n^B),
$$
which implies that, for $n=0$, it must also be $P(L_n^A\cap L_n^B\cap L_n^G)=0$.




We go on with the calculation of $P(L_n^A\cap L_n^B\cap L_n^G)$, applying again first the complement rule,
$$
P(L_n^A\cap L_n^B\cap L_n^G)=P(L_n^G|L_n^A\cap L_n^B)P(L_n^A\cap L_n^B)=\left[1-P(\overline{L_n^G}|L_n^A\cap L_n^B)\right]P(L_n^A\cap L_n^B)
$$
$$
=P(L_n^A\cap L_n^B)-P(\overline{L_n^G}|L_n^A\cap L_n^B)P(L_n^A\cap L_n^B),
$$
and then Bayes' theorem on the second term,
$$
P(\overline{L_n^G}|L_n^A\cap L_n^B)P(L_n^A\cap L_n^B)=P(L_n^A\cap L_n^B|\overline{L_n^G})P(\overline{L_n^G}).
$$



If we know that the event $L_n^G$ does not take place, the probability of getting at least one element of kind $A$ and at least one element of kind $B$ is
$$
P(L_n^A\cap L_n^B|\overline{L_n^G})=1-\left(\frac{\alpha}{\alpha+\beta}\right)^n-\left(\frac{\beta}{\alpha+\beta}\right)^n.
$$



Therefore, since $P(\overline{L_n^G})=\left(\frac{\alpha+\beta}{c}\right)^n$,
$$
P(L_n^A\cap L_n^B\cap L_n^G)=P(L_n^A\cap L_n^B)-P(\overline{L_n^G}|L_n^A\cap L_n^B)P(L_n^A\cap L_n^B)=P(L_n^A\cap L_n^B)-P(L_n^A\cap L_n^B|\overline{L_n^G})P(\overline{L_n^G})
$$
$$
=1-\left(\frac{\alpha+\gamma}{c}\right)^n-\left(\frac{\beta+\gamma}{c}\right)^n+\left(\frac{\gamma}{c}\right)^n-\left(\frac{\alpha+\beta}{c}\right)^n+\left(\frac{\alpha}{c}\right)^n+\left(\frac{\beta}{c}\right)^n.
$$




If we now substitute $n=0$ in this expression, every power equals $1$ and we obtain $P(L_n^A\cap L_n^B\cap L_n^G)=1-1-1+1-1+1+1=1$, which contradicts what we observed before, namely that $P(L_n^A\cap L_n^B\cap L_n^G)=0$ for $n=0$.
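To make the discrepancy concrete, here is a small sketch (my own, with the same illustrative counts as above) comparing the final expression with an exact computation of $P(L_n^A\cap L_n^B\cap L_n^G)$ obtained by enumerating all kind-sequences: the two agree for $n\geq 1$, but at $n=0$ the expression returns $1$ while the exact value is $0$.

    from itertools import product

    alpha, beta, gamma = 2, 3, 5            # illustrative counts (assumed)
    c = alpha + beta + gamma
    p = {"A": alpha / c, "B": beta / c, "G": gamma / c}

    def exact_triple(n):
        """Exact P(L_n^A and L_n^B and L_n^G), summing over all kind-sequences.
        For n = 0 the only sequence is the empty one, which contains no kinds."""
        total = 0.0
        for seq in product("ABG", repeat=n):
            if {"A", "B", "G"} <= set(seq):
                prob = 1.0
                for k in seq:
                    prob *= p[k]
                total += prob
        return total

    def derived_expression(n):
        """The final expression obtained in the question."""
        return (1 - ((alpha + gamma) / c) ** n
                  - ((beta + gamma) / c) ** n
                  + (gamma / c) ** n
                  - ((alpha + beta) / c) ** n
                  + (alpha / c) ** n
                  + (beta / c) ** n)

    for n in (0, 1, 2, 5):
        print(n, round(exact_triple(n), 4), round(derived_expression(n), 4))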



There is likely a mistake in this reasoning, but I am not able to spot it.



Thanks for your help!







  • What’s the Cliffs notes version? – Randall, Jul 28 at 18:01

  • @Randall Hi Randall, sorry, I don't know what you are referring to! – Andrea Prunotto, Jul 28 at 18:03














1 Answer
There are a couple of problems that I can see.



  1. You're using Bayes' theorem when the event conditioned on has probability $0$, but Bayes' theorem does not necessarily hold in that case.

  2. In some cases your formulae are only valid for $n\geq 1$. For example, you calculate the probability that you get at least one each of A and B conditional on having no G as $1$ minus the probability of getting $n$ As, minus the probability of getting $n$ Bs. This assumes that these events are disjoint - that you can't get $n$ As and $n$ Bs - which is only true for $n\geq 1$. (Indeed, your formula gives a negative probability when $n=0$; see the sketch below.)
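To illustrate point 2 concretely, here is a small sketch (my own, with illustrative counts $\alpha=2$, $\beta=3$; we condition on drawing no G, so only these two counts matter) comparing the question's conditional formula with an exact enumeration. For $n\geq 1$ they agree; at $n=0$ the formula returns $-1$, the negative probability mentioned above, while the exact value is $0$.

    from itertools import product

    alpha, beta = 2, 3                      # illustrative counts (assumed); conditioning on "no G"
    pA = alpha / (alpha + beta)
    pB = beta / (alpha + beta)

    def exact_conditional(n):
        """Exact P(at least one A and at least one B | no G), by enumerating sequences."""
        total = 0.0
        for seq in product("AB", repeat=n):
            if "A" in seq and "B" in seq:
                prob = 1.0
                for k in seq:
                    prob *= pA if k == "A" else pB
                total += prob
        return total

    def disjointness_formula(n):
        """The question's formula 1 - pA^n - pB^n, which treats 'all A' and 'all B' as disjoint."""
        return 1 - pA ** n - pB ** n

    for n in (0, 1, 2, 5):
        print(n, round(exact_conditional(n), 4), round(disjointness_formula(n), 4))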





  • Thanks! I see it very well now! – Andrea Prunotto, Jul 28 at 18:28

  • But I still don't get why, with $n=0$, we have a reasonable result for $P(L_n^A\cap L_n^B)$ but not for $P(L_n^A\cap L_n^B\cap L_n^G)$, although we used Bayes' theorem in both cases... – Andrea Prunotto, Jul 28 at 18:36

  • I think the Bayes' theorem issue may not actually be a problem - when you use it, it looks like both sides of the equation are zero anyway. So the main issue is assuming two events are disjoint, which I think only happens in the triple intersection calculation. – Especially Lime, Jul 28 at 18:56

  • But then it still holds that $P(L_0^A\cap L_0^B\cap L_0^G)=P(L_0^A\cap L_0^B)=0$, doesn't it? How can I prove it? – Andrea Prunotto, Jul 28 at 19:25
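Regarding the last comment, a short note of my own (not from the thread): for $n=0$ nothing is drawn, so none of the "at least one" events can occur, and
$$
L_0^A=L_0^B=L_0^G=\varnothing \quad\Longrightarrow\quad P(L_0^A\cap L_0^B)=P(L_0^A\cap L_0^B\cap L_0^G)=0 .
$$
This is the value that the exact enumerations in the sketches above return at $n=0$.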










