If $X$ is a non-negative continuous random variable, show that $E[X]=\int_0^\infty (1-F(x))\,dx$
There is a hint to solve this by using integration by parts. So I have
$$u = x \quad\text{then}\quad u' = 1$$
$$v' = F(x) - 1 \quad\text{then}\quad v = \;?$$
QUESTION:
What is $v'$?
What is the theoretical idea behind this? I would not have thought of this approach without the hint.
integration probability-distributions definite-integrals distribution-theory
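As a quick numerical sanity check of the identity itself, one can compare a Monte Carlo estimate of $E[X]$ with a numerical evaluation of $\int_0^\infty (1-F(x))\,dx$. The sketch below assumes, purely for illustration, $X \sim \text{Exponential}(1)$, so $F(x) = 1 - e^{-x}$ and $E[X] = 1$.

```python
import numpy as np
from scipy import integrate

# Assumed example distribution (not part of the question): X ~ Exponential(1),
# so F(x) = 1 - exp(-x) and the true mean is E[X] = 1.
rng = np.random.default_rng(0)
samples = rng.exponential(scale=1.0, size=1_000_000)

survival = lambda x: np.exp(-x)                  # 1 - F(x) for Exp(1)
integral, _ = integrate.quad(survival, 0, np.inf)

print(samples.mean())   # Monte Carlo estimate of E[X], ~ 1.0
print(integral)         # integral of (1 - F(x)) over [0, inf), ~ 1.0
```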
asked Aug 1 at 14:59 by user1607, edited Aug 1 at 15:16 by Bernard
$v'$ means the derivative of $v$. He is doing integration by parts... if you do not remember the $u,v$ variables from when you studied integration by parts, go back and find them in your calculus text. – GEdgar, Aug 1 at 15:28
3 Answers
Observe that for a continuous random variable (well, absolutely continuous to be rigorous):
$$\mathsf P(X> x) = \int_x^\infty f_X(y)\operatorname d y$$
Then taking the definite integral (if we can):
$$\int_0^\infty \mathsf P(X> x)\operatorname d x = \int_0^\infty \int_x^\infty f_X(y)\operatorname d y\operatorname d x$$
Observe that we are integrating over the domain where $0< x< \infty$ and $x< y< \infty$, which is to say $0<y<\infty$ and $0< x < y$.
$$\begin{align}\int_0^\infty \mathsf P(X> x)\operatorname d x = & ~ \iint_{0< x< y< \infty} f_X(y)\operatorname d (x,y)
\\[1ex] = & ~ \int_0^\infty \int_0^y f_X(y)\operatorname d x\operatorname d y\end{align}$$
Then since $\int_0^y f_X(y)\operatorname d x = f_X(y) \int_0^y 1\operatorname d x = y~f_X(y)$ we have:
$$\begin{align}\int_0^\infty \mathsf P(X> x)\operatorname d x = & ~ \int_0^\infty y ~ f_X(y)\operatorname d y \\[1ex] = & ~ \mathsf E(X \mid X\geq 0)~\mathsf P(X\geq 0) \\[1ex] = & ~ \mathsf E(X) & \textsf{when $X$ is non-negative}\end{align}$$
answered Aug 1 at 15:07 by James
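A minimal numerical check of the order-of-integration swap above, assuming an example density of my own choosing, $f_X(y) = e^{-y}$ for $y \ge 0$ (so $E[X] = 1$); both iterated integrals should agree:

```python
import numpy as np
from scipy import integrate

# Assumed density for the check (mine, not the answer's): f_X(y) = exp(-y), y >= 0,
# i.e. X ~ Exponential(1) with E[X] = 1.
f = lambda y: np.exp(-y)

# Left-hand side: integral over x in [0, inf) of P(X > x) = integral_x^inf f(y) dy
lhs = integrate.quad(lambda x: integrate.quad(f, x, np.inf)[0], 0, np.inf)[0]

# Right-hand side after swapping the order: integral over y in [0, inf) of y * f(y)
rhs = integrate.quad(lambda y: y * f(y), 0, np.inf)[0]

print(lhs, rhs)   # both ~ 1.0 = E[X]
```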
You can accomplish this with an interchange of integral signs as follows.
$$\int_0^\infty (1 - F(x))\, dx = \int_0^\infty\int_x^\infty dF(t)\, dx
= \int_0^\infty \int_0^x dt\, dF(x) = \int_0^\infty x\, dF(x)$$
answered Aug 1 at 15:08 by ncmathsadist, edited Aug 1 at 15:25 by Batominovski
Very nice answer. This also proves the same result when $X$ is not an absolutely continuous random variable. – Batominovski, Aug 1 at 15:23
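To illustrate the comment's point that no density is needed, here is a small check with a purely discrete example of my own choosing, $X \sim \text{Poisson}(3)$ with $E[X]=3$: for non-negative integer-valued $X$, the survival function $1-F$ is a step function, so $\int_0^\infty (1-F(x))\,dx$ collapses to the sum $\sum_{k\ge 0} P(X>k)$.

```python
from scipy import stats

# Assumed discrete example (mine): X ~ Poisson(3), which has no density, E[X] = 3.
# For integer-valued X >= 0, the integral of (1 - F(x)) over [0, inf) equals
# the sum of P(X > k) over k = 0, 1, 2, ...
X = stats.poisson(mu=3.0)
tail_sum = sum(X.sf(k) for k in range(200))   # sf(k) = P(X > k); truncated far in the tail
print(tail_sum, X.mean())                     # both ~ 3.0
```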
In the continuous case you want to go the other way around: differentiate $1-F$ into $-f$, and then the integration of $1$ creates the factor of $x$ that you want to see. But this works only in the continuous case.
The standard way to do this in the general case is to use Fubini's theorem: $1-F(x)=P(X>x)=\int_x^\infty dF(y)$. So now you have $\int_0^\infty \int_x^\infty dF(y)\, dx$, and interchanging the order of integration achieves the desired result.
answered Aug 1 at 15:08 by Ian
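To spell out the by-parts route described here (a sketch, assuming $E[X]<\infty$ so the boundary term vanishes): take $u = 1-F(x)$ and $v' = 1$, so that $u' = -f(x)$ and $v = x$. Then
$$\begin{align}\int_0^\infty \bigl(1-F(x)\bigr)\,dx = & ~ \Bigl[\,x\bigl(1-F(x)\bigr)\Bigr]_0^\infty + \int_0^\infty x\,f(x)\,dx \\[1ex] = & ~ \int_0^\infty x\,f(x)\,dx = E[X],\end{align}$$
where the boundary term is $0$ at $x=0$ and tends to $0$ as $x\to\infty$, because $x\bigl(1-F(x)\bigr) = x\int_x^\infty f(y)\,dy \le \int_x^\infty y\,f(y)\,dy \to 0$ whenever $E[X]$ is finite.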