Gärtner-Ellis theorem on Markov chains
Let $Z_n \in \mathcal{X}$ be a sequence of independent random variables, where $\mathcal{X}$ is a topological vector space, and let $\mu_n$ be the probability measure associated with $Z_n$. Suppose that $(Z_n)$ satisfies the conditions of the abstract Gärtner–Ellis theorem. Then the rate function $I(\cdot)$ associated with $(Z_n)$ is the Fenchel–Legendre transform of
$$
M(\lambda) = \lim_{n \to \infty} \frac{1}{n} \log E\bigl[\exp \langle n\lambda, Z_n \rangle\bigr],
$$
that is, $I(x) = \sup_{\lambda}\bigl(\langle \lambda, x \rangle - M(\lambda)\bigr)$.
Now consider a second sequence $(Y_n)$ that is a Markov chain ($Y_n$ is sampled conditionally on $Y_{n-1}$), such that the probability measure associated with $Y_n$ is also $\mu_n$.
My question is:
Is $I(cdot)$ the rate function associated with $Y_n$?
If the answer to the above question is yes, let me offer an example that seems to contradict the equivalence of the rate functions of $(Z_n)$ and $(Y_n)$.
Consider two sequences of random variables taking values in $\{0,1\}$:
let $g_n$ be $1$ with probability $1/2$ and $0$ otherwise, independently for each $n$;
let $h_1 = g_1$, and for $n > 1$ let $h_n = 1 - h_{n-1}$ with probability $(1/2)^n$ and $h_n = h_{n-1}$ otherwise.
Note that for every $n$, $h_n$ and $g_n$ have the same distribution: $P(g_n = 1) = 1/2$ by definition, and by induction on $n$,
$$
P(h_n = 1) = P(h_{n-1}=1)\cdot\bigl(1-(1/2)^n\bigr) + P(h_{n-1}=0)\cdot(1/2)^n = \tfrac{1}{2}\bigl(1-(1/2)^n\bigr) + \tfrac{1}{2}\,(1/2)^n = \tfrac{1}{2}.
$$
Hence $P(g_n=1) = P(h_n=1)$ for every $n$.
In the limit both sequences have the same marginal distribution, but their pathwise behavior is quite different: $h_n$ converges almost surely (since $\sum_n (1/2)^n < \infty$, by the Borel–Cantelli lemma $h_n$ flips only finitely many times), whereas $g_n$ does not converge.
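The construction above is easy to check by simulation. The sketch below (plain Python; the function name and parameters are chosen here for illustration) estimates $P(g_n = 1)$ and $P(h_n = 1)$ over many independent runs, and counts how often the $h$-chain flips. The empirical marginals both sit near $1/2$, while the expected number of flips per path is only $\sum_{n\ge 2} (1/2)^n < 1$.

```python
import random

def sample_paths(n_steps, n_trials, seed=0):
    """Simulate the two constructions:
    g_n : i.i.d. Bernoulli(1/2) at every step,
    h_n : Markov chain with h_1 = g_1 that flips its previous
          value with probability (1/2)**n at step n."""
    rng = random.Random(seed)
    g_ones = [0] * n_steps  # counts of g_n == 1 at each step
    h_ones = [0] * n_steps  # counts of h_n == 1 at each step
    h_flips = 0             # total flips of the h-chain across all runs
    for _ in range(n_trials):
        h = 0
        for n in range(1, n_steps + 1):
            g = 1 if rng.random() < 0.5 else 0
            if n == 1:
                h = g  # h_1 = g_1
            elif rng.random() < 0.5 ** n:
                h = 1 - h  # flip with probability (1/2)^n
                h_flips += 1
            g_ones[n - 1] += g
            h_ones[n - 1] += h
    return g_ones, h_ones, h_flips
```

With, say, 20000 trials of 10 steps, the empirical frequencies of $g_n = 1$ and $h_n = 1$ at every step agree to within sampling error, even though almost every $h$-path settles down after a few flips.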
My other question is: does this example invalidate a positive answer to the first question?
probability-theory large-deviation-theory
You really need to rephrase the paragraph "Now consider a second random variable $Y_n$ sampled from $Y_{n-1}$ such that the probability measure associated with $Y_n$ is $\mu_n$." At the moment, one cannot know what you mean by this.
– Did
Jul 27 at 6:28
Dear @Did, is the question clearer now?
– jaogye
Jul 27 at 6:39
It seems you are assuming that the distribution of $Z_n$ is $\mu_n$ for every $n$, that $(Z_n)$ satisfies an LDP with rate $I$, and that $(Y_n)$ is a Markov chain with marginal distributions $\mu_n$, and you are asking whether $(Y_n)$ then also satisfies an LDP with rate $I$. But this is trivially so, since the LDP for $(Z_n)$ describes the probabilities $P(Z_n \in B)$, an LDP for $(Y_n)$ would likewise describe the probabilities $P(Y_n \in B)$, and you are assuming that $P(Z_n \in B) = P(Y_n \in B)$ for every $n$ and $B$...
– Did
Jul 27 at 7:56
Dear @Did, yes, the answer may be trivial for an expert. However, this equivalence produces an example that, at least to me, is counter-intuitive. I will add it to the question.
– jaogye
Jul 27 at 9:19
In the example you added, $(h_n)$ and $(g_n)$ both satisfy the same LDPs and fail the same LDPs. No counterexample at all.
– Did
Jul 27 at 9:49
edited Jul 27 at 9:33
asked Jul 27 at 6:01
jaogye