Dineen's Probability in Finance: A question on limiting behaviour of expectations of sequences of positive integrable random variables
In the above book (first edition), the author proves the following theorem:
If $(X_n)_{n=1}^{\infty}$ and $(Y_n)_{n=1}^{\infty}$ are increasing sequences of positive integrable random variables on the probability space $(\Omega, \mathcal{F}, P)$ and if $$\lim_{n \to \infty} X_n = \lim_{n \to \infty} Y_n$$ almost surely, then $$\lim_{n\to\infty}\mathbb{E}[X_n] = \lim_{n\to\infty}\mathbb{E}[Y_n].$$
In the course of the proof, the author needs to show, among other things, that $\lim_{n\to\infty} X_n(\omega) = \lim_{n\to\infty} X_n^{[j_n]}(\omega)$ for every $\omega \in \Omega,$ where $X^{[m]}$ denotes the $m$-th truncation of $X$ and $(j_n)$ is a certain previously defined strictly increasing ($j_{n+1} > j_n$) sequence of natural numbers. Here he distinguishes two cases. 1) If $\lim_{n\to\infty} X_n(\omega) < \infty,$ then $X_n(\omega) < j_n$ for every $n$ larger than some $n_0,$ so that, by the definition of $X^{[m]}(\omega),$ we plainly have $X_n^{[j_n]}(\omega) = X_n(\omega)$ for all $n \geq n_0.$ 2) If $\lim_{n\to\infty} X_n(\omega) = \infty,$ he says:
for all $n,$ we have either $X_n(\omega) - X_n^{[j_n]}(\omega) \leq 1/2^{j_n}$ or $X_n^{[j_n]}(\omega) = j_n.$ Since $j_n \geq n,$ we have, in either case, $\lim_{n \to \infty} X_n^{[j_n]}(\omega) = \infty.$
This would clearly establish the claim that $\lim_{n\to\infty} X_n(\omega) = \lim_{n\to\infty} X_n^{[j_n]}(\omega),$ but I have a couple of questions here:
- First, where does the inequality $$X_n(\omega) - X_n^{[j_n]}(\omega) \leq 1/2^{j_n}$$ come from, and for which $n$ does it hold? My guess is that it has something to do with the pointwise convergence of the truncations $X^{[m]}$ to $X$ as $m \to \infty$ (this seems obvious; see also the definition I sketch below).
- Second, why is it not enough to say: if $\lim_{n \to \infty} X_n(\omega) = \infty,$ then $X_n^{[j_n]}(\omega) = j_n$ for $n$ big enough, and since $j_n \geq n,$ we have $\lim_{n\to \infty} X_n^{[j_n]}(\omega) = \infty$ and we are done? (Does it have something to do with the fact that $j_n$ depends on $n$?)
probability-theory measure-theory random-variables
asked Jul 25 at 7:52
Jorge.Squared
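For reference, here is the truncation I have in mind. I am reconstructing it from memory of p. 123, so this is an assumption on my part and Dineen's exact definition may differ in detail, but the answer to my first question presumably hinges on it:
$$X^{[m]}(\omega) = \begin{cases} \dfrac{\lfloor 2^{m} X(\omega)\rfloor}{2^{m}}, & X(\omega) < m,\\[1ex] m, & X(\omega) \geq m. \end{cases}$$
With this dyadic definition one would get, for every $\omega,$ either $0 \leq X(\omega) - X^{[m]}(\omega) < 1/2^{m}$ (when $X(\omega) < m$) or $X^{[m]}(\omega) = m$ (when $X(\omega) \geq m$), which is exactly the shape of the dichotomy quoted above, applied with $X = X_n$ and $m = j_n.$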
@Math1000: I am not saying the theorem is not rigorous; I just do not understand one tiny detail of its proof. That detail is what my questions above try to capture. And yes, it uses truncations, if only implicitly: they are defined on p. 123 in the first edition.
– Jorge.Squared
Jul 25 at 11:05
@Math1000: maybe you are referring to a similar theorem dealing with positive bounded random variables only, whereas the theorem I am referring to deals with positive integrable RVs?
– Jorge.Squared
Jul 25 at 11:26
Ah, indeed I had the wrong theorem; the one I was looking at states "increasing sequences of simple positive random variables." I will delete my previous comments.
– Math1000
Jul 25 at 11:37
In the proof of the (correct) theorem, the author still is not using the aforementioned truncations, but instead the "canonical sequence" of simple random variables $X_n$ defined explicitly such that $|X_n(\omega)-X(\omega)|\leqslant 2^{-n}$ for all $n$ and $\omega$. I suggest you study the construction of the $X_n$ carefully.
– Math1000
Jul 25 at 11:44
@Math1000: Thank you; your response suggests that the second edition has a different proof. This raises the question of whether the present "proof" is correct at all. I still wonder.
– Jorge.Squared
Jul 25 at 11:52
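As a quick numerical sanity check of the dichotomy discussed above (either $0 \leq X(\omega) - X^{[m]}(\omega) < 2^{-m}$ or $X^{[m]}(\omega) = m$), here is a minimal sketch in Python. It assumes the dyadic truncation I wrote out after the question, which may not match Dineen's p. 123 construction exactly; the helper name `dyadic_truncation` is mine, not the book's:
```python
import math
import random

def dyadic_truncation(x, m):
    """Assumed m-th dyadic truncation of a nonnegative number x:
    round x down to the nearest multiple of 2**-m, capped at m."""
    if x >= m:
        return float(m)
    return math.floor(x * 2**m) / 2**m

# Check the two cases of the dichotomy on random nonnegative samples.
random.seed(0)
for _ in range(10_000):
    x = random.expovariate(0.2)   # arbitrary nonnegative sample value X(omega)
    m = random.randint(1, 20)     # truncation level
    t = dyadic_truncation(x, m)
    if x < m:
        assert 0 <= x - t < 2 ** -m   # case 1: truncation lies within 2^-m below x
    else:
        assert t == m                 # case 2: truncation equals the cap m
print("dichotomy verified on 10,000 random samples")
```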