What does it mean that "the central limit theorem does not hold far away from the peak"?
So I know nothing about large deviations theory, and I'm reading some notes. They claim that:
The CLT does not hold far away from the peak
I am not sure how to parse this statement. There are many statements of the CLT, but here is the one I know:
Let $(X_n)$ be a sequence of i.i.d. random variables with mean $0$ and variance $\sigma^2<\infty$. Then the normalised sum
$$\frac{1}{\sqrt{n}}\sum_{k=1}^n X_k$$
converges in distribution as $n\to\infty$ to $N(0,\sigma^2)$.
Why do the notes say the central limit theorem doesn't hold away from $0$? There's nothing in the central limit theorem that says "only for some interval around $0$". Does it just mean that the convergence rate is very slow and impractical?
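For concreteness, here is a quick simulation of the statement above; the centred Uniform$[-1/2,1/2]$ summands, with $\sigma^2=1/12$, are my own choice for illustration, not something from the notes. Near the peak the agreement is already good for moderate $n$:

```python
# Sketch: empirical check of the CLT near the peak, using centred
# Uniform[-1/2, 1/2] summands (so sigma^2 = 1/12).  These choices are
# illustrative assumptions, not taken from the notes.
import math
import random
from statistics import NormalDist

random.seed(0)
n, trials = 100, 20_000
sigma = math.sqrt(1 / 12)

# Draw the normalised sum (1/sqrt(n)) * sum_{k=1}^n X_k many times
samples = [
    sum(random.uniform(-0.5, 0.5) for _ in range(n)) / math.sqrt(n)
    for _ in range(trials)
]

# Near the peak: empirical probability of a central interval vs the
# N(0, sigma^2) limit
emp = sum(abs(y) <= 0.5 for y in samples) / trials
lim = 2 * NormalDist(0, sigma).cdf(0.5) - 1
print(f"P(|Y| <= 0.5): empirical {emp:.4f} vs normal limit {lim:.4f}")
```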
probability-theory large-deviation-theory
asked Jul 25 at 15:37 by user223391
It seems the point that they're trying to make is that while the unconditional distribution (or perhaps even conditional on events with probability close to $1$) of the weighted sum is "close" to normal, the distribution conditional on low-probability events is not.
– Theoretical Economist, Jul 25 at 15:58
1 Answer
Essentially, it means that if you are far away from the mean (which in this case is $0$), the approximation of the sum random variable (call it $Y$) by the normal distribution becomes very bad.
To see this, take a simple example. Let $n=10$ and suppose the variables are uniform on $[0,1]$, so that $Y\le 10$ always. If you compute something like $P(10000<Y<1000000)$ twice, once with the true distribution of $Y$ and once with the normal approximation, you will see that the relative difference is huge: the true probability is exactly $0$, while the normal approximation is positive. Does that make sense?
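To put numbers on this, here is a minimal sketch of mine (not part of the original answer): it uses the reachable threshold $Y>9$ instead of $10000$, so that both probabilities are nonzero and comparable, together with the Irwin-Hall identity $P(Y>n-1)=1/n!$.

```python
# Sketch: exact vs normal tail for the sum Y of n = 10 Uniform[0,1]
# variables.  The threshold 9 (instead of 10000, which gives exactly 0)
# and the Irwin-Hall identity P(Y > n-1) = 1/n! are my assumptions,
# chosen so both numbers are nonzero and comparable.
import math
from statistics import NormalDist

n = 10
mean = n * 0.5                  # E[Y] = n/2
sd = math.sqrt(n / 12)          # Var[Y] = n/12

exact = 1 / math.factorial(n)   # P(Y > 9) = P(Y < 1) = 1/10!
approx = 1 - NormalDist(mean, sd).cdf(n - 1)

print(f"exact  P(Y > 9) = {exact:.3e}")          # ~2.76e-07
print(f"normal P(Y > 9) = {approx:.3e}")         # ~5.9e-06
print(f"normal / exact  = {approx / exact:.0f}")  # off by a factor of ~20
```

Even at a merely "moderately far" point, the normal approximation overestimates the tail by an order of magnitude; further out, the relative error only gets worse.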
answered Jul 25 at 16:06 by Vinayak Suresh (accepted)
Sure, in the first case you get $0$ and in the second you get "not $0$". But if you let $n$ get big enough, the approximation gets a lot better, right? It's just a bad approximation for lower $n$. Is that right?
– user223391, Jul 25 at 16:11
Yes, that is exactly right. As $n$ gets larger, your approximation gets better and better.
– Vinayak Suresh, Jul 25 at 16:20
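To see both halves of this exchange in numbers, here is a sketch of my own (it uses Binomial$(n,1/2)$ sums instead of uniforms so the exact tail is easy to compute): at a fixed number of standard deviations the normal approximation does improve with $n$, but at a fixed fraction of $n$, which is the large-deviations regime, the relative error keeps growing.

```python
# Sketch illustrating the comments above: for Binomial(n, 1/2) sums
# (my choice, so exact tails are computable) the normal approximation
# improves with n at a fixed number of standard deviations, but at a
# fixed fraction of n (a large deviation) the relative error grows.
import math
from statistics import NormalDist

def exact_tail(n, k):
    """Exact P(S >= k) for S ~ Binomial(n, 1/2)."""
    return sum(math.comb(n, j) for j in range(k, n + 1)) / 2**n

for n in (20, 100, 500):
    mean, sd = n / 2, math.sqrt(n) / 2
    cases = {
        "mean + 2 sd": math.ceil(mean + 2 * sd),  # moderate deviation
        "0.8 * n    ": math.ceil(0.8 * n),        # large deviation
    }
    for label, k in cases.items():
        exact = exact_tail(n, k)
        approx = 1 - NormalDist(mean, sd).cdf(k - 0.5)  # continuity-corrected
        print(f"n={n:3d}  S >= {label}: exact={exact:.3e}  "
              f"normal={approx:.3e}  ratio={approx / exact:.2f}")
```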