Dineen's Probability in Finance: A question on limiting behaviour of expectations of sequences of positive integrable random variables

In the above book (first edition), the author proves the following theorem:




If $(X_n)_{n=1}^{\infty}$ and $(Y_n)_{n=1}^{\infty}$ are increasing sequences of positive integrable random variables on the probability space $(\Omega, \mathcal{F}, P)$ and if $$\lim_{n \to \infty} X_n = \lim_{n \to \infty} Y_n$$ almost surely, then $$\lim_{n\to\infty}\mathbb{E}\lbrack X_n\rbrack = \lim_{n\to\infty}\mathbb{E}\lbrack Y_n\rbrack.$$
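As a sanity check on what the theorem asserts (my own illustration, not taken from the book): for a positive integrable $X$, the sequences $$X_n = \min(X, n) \qquad \text{and} \qquad Y_n = \left(1 - 2^{-n}\right) X$$ are both increasing and converge to $X$ everywhere, and indeed $\lim_{n\to\infty}\mathbb{E}\lbrack X_n\rbrack = \mathbb{E}\lbrack X\rbrack = \lim_{n\to\infty}\mathbb{E}\lbrack Y_n\rbrack$, since $\mathbb{E}\lbrack Y_n\rbrack = (1-2^{-n})\,\mathbb{E}\lbrack X\rbrack$.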




In the course of the proof, the author needs to show, among other things, that $\lim_{n\to\infty} X_n(\omega) = \lim_{n\to\infty} X_n^{[j_n]}(\omega)$ for every $\omega \in \Omega$, where $X^{[m]}$ denotes the $m$-th truncation of $X$ and $(j_n)$ is a certain previously defined, strictly increasing ($j_{n+1} > j_n$) sequence of natural numbers. Here he distinguishes two cases: 1) if $\lim_{n\to\infty} X_n(\omega) < \infty$, then $X_n(\omega) < j_n$ for every $n$ larger than some $n_0$, so that, by the definition of $X^{[m]}(\omega)$, we plainly have $X_n^{[j_n]}(\omega) = X_n(\omega)$ for all $n \geq n_0$. 2) If $\lim_{n\to\infty} X_n(\omega) = \infty$, he says:




for all $n$, we have either $X_n(\omega) - X_n^{[j_n]}(\omega) \leq 1/2^{j_n}$ or $X_n^{[j_n]} = j_n$. Since $j_n \geq n$, we have, in either case, $\lim_{n \to \infty} X_n^{[j_n]} = \infty$.




This would clearly establish the claim that $\lim_{n\to\infty} X_n(\omega) = \lim_{n\to\infty} X_n^{[j_n]}(\omega)$, but I have a couple of questions here:



  • First, from where does it follow that $$X_n(\omega) - X_n^{[j_n]}(\omega) \leq 1/2^{j_n},$$ and for which $n$ is this true? My guess is that it has something to do with the pointwise convergence of the truncations $X^{[m]}$ to $X$ as $m \to \infty$ (this seems obvious; see the sketch of the truncation after this list).

  • Second, why is it not enough to say: if $\lim_{n \to \infty} X_n(\omega) = \infty$, then $X_n^{[j_n]} = j_n$ for $n$ big enough, and since $j_n \geq n$, we have $\lim_{n\to \infty} X_n^{[j_n]} = \infty$, and we are done? (Does it have something to do with the fact that $j_n$ depends on $n$?)
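
For concreteness, the kind of truncation I have in mind (a sketch of my reading of the definition on p. 123, not a verbatim quote from the book) is $$X^{[m]}(\omega) = \begin{cases} k/2^{m}, & \text{if } k/2^{m} \leq X(\omega) < (k+1)/2^{m} \text{ for some } 0 \leq k < m\,2^{m},\\ m, & \text{if } X(\omega) \geq m, \end{cases}$$ so that $0 \leq X(\omega) - X^{[m]}(\omega) \leq 1/2^{m}$ whenever $X(\omega) < m$, and $X^{[m]}(\omega) = m$ whenever $X(\omega) \geq m$; this at least reproduces the dichotomy quoted above.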






asked Jul 25 at 7:52 by Jorge.Squared











  • @Math1000: I am not saying the theorem is not rigorous; I just do not understand one tiny detail of its proof. This detail is somehow captured in my questions above. And yes, it uses truncations, if only implicitly: they are defined on p. 123 of the first edition.
    – Jorge.Squared
    Jul 25 at 11:05











  • @Math1000: maybe you are referring to a similar theorem dealing with positive bounded random variables only, whereas the theorem I am referring to deals with positive integrable RVs?
    – Jorge.Squared
    Jul 25 at 11:26










  • Ah, indeed I had the wrong theorem; the one I was looking at states "increasing sequences of simple positive random variables." I will delete my previous comments.
    – Math1000
    Jul 25 at 11:37










  • In the proof of the (correct) theorem, the author is still not using the aforementioned truncations, but instead the "canonical sequence" of simple random variables $X_n$, defined explicitly so that $|X_n(\omega)-X(\omega)|\leqslant 2^{-n}$ for all $n$ and $\omega$. I suggest you study the construction of the $X_n$ carefully.
    – Math1000
    Jul 25 at 11:44










  • @Math1000: Thank you; your response suggests that the second edition has a different proof. This raises the question of whether the present "proof" is correct or not. I still wonder.
    – Jorge.Squared
    Jul 25 at 11:52















