Known Markov-type inequalities on entropy?
Let $P$ be a discrete distribution in which the $i$-th value has probability $p_i$.
Define $H$ as the random variable that takes the value $-\log(p_i)$ with probability $p_i$.
By definition, $E(H) = \mathrm{H}(P)$, the Shannon entropy of $P$.
Is any non-trivial upper bound on $\Pr(H \geq aE(H)+b)$ known that is independent of the distribution $P$?
probability entropy
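For concreteness, here is a minimal numerical sketch of the setup (the example distribution and the base-2 logarithm are arbitrary choices, not part of the question):

```python
import math

# H takes the value -log2(p_i) with probability p_i, so E[H] is the
# Shannon entropy H(P).  The distribution below is just an example.
p = [0.5, 0.25, 0.125, 0.125]
h_values = [-math.log2(pi) for pi in p]               # surprisal of each outcome
entropy = sum(pi * hi for pi, hi in zip(p, h_values))

print(h_values)   # [1.0, 2.0, 3.0, 3.0]
print(entropy)    # 1.75, i.e. E(H) = H(P)
```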
edited Jul 21 at 9:38 by Sangchul Lee
asked Jul 20 at 11:16 by ORBOT Inc.
I'm somewhat confused by what is meant by a "Markov-type" inequality here. Markov's inequality says that $\Pr(X \geq a) \leq \frac{E(X)}{a}$, whereas you are asking about $\Pr(H \leq a E(H) + b)$.
– stats_model
Jul 21 at 3:52
It also doesn't seem that your stated probability can be non-trivially bounded independently of the distribution for plain expectations either. Consider, for example, $X_p \in \left\{-\frac{1}{1-p}, \frac{1}{p}\right\}$ with $P\left(X_p = -\frac{1}{1-p}\right) = 1-p$ and $P\left(X_p = \frac{1}{p}\right) = p$. Then $E[X_p] = 0$ for all $p \in (0,1)$, but $P(X_p \leq E[X_p]) = 1-p$, which can be made arbitrarily close to $1$.
– stats_model
Jul 21 at 3:57
The side of the inequality was wrong, sorry; I edited it now.
– ORBOT Inc.
Jul 21 at 9:33
Reversing the sign of the inequality does not change my example above beyond some trivial modifications. Just let $X_p \in \left\{-\frac{1}{p}, \frac{1}{1-p}\right\}$ with $P\left(X_p = -\frac{1}{p}\right) = p$ and $P\left(X_p = \frac{1}{1-p}\right) = 1-p$. Now the argument is the same as before, but with the inequality flipped.
– stats_model
Jul 21 at 18:42
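A quick numerical check of the two-point construction from these comments (a sketch; the particular values of $p$ are arbitrary):

```python
# X_p takes -1/p with probability p and 1/(1-p) with probability 1-p,
# so E[X_p] = 0 while Pr(X_p >= E[X_p]) = 1-p, which sweeps over (0,1).
for p in [0.1, 0.5, 0.9, 0.99]:
    values = [-1.0 / p, 1.0 / (1.0 - p)]
    probs = [p, 1.0 - p]
    mean = sum(v * q for v, q in zip(values, probs))
    tail = sum(q for v, q in zip(values, probs) if v >= mean)
    print(f"p={p}: E[X_p]={mean:+.3f}, Pr(X_p >= E[X_p])={tail:.2f}")
```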
1 Answer
The following counterexample shows that the probability you ask about, as stated, cannot be non-trivially bounded (i.e., restrictions on $a$ and $b$ must be made). If $X \sim \mathrm{Ber}(p)$, then for $p > 0.5$ we have $H_1 < H_0$, where $H_i$ is the value of $H$ when $X = i$. Thus $\Pr(H \leq E[H]) = \Pr(X = 1) = p$. Since $p$ can be made arbitrarily close to $1$, the only possible bound is the trivial one.
Another example shows that a lower bound is not possible either. Fix $n$ and consider $X \in \{1,\ldots,n\}$ with $p_n > \frac{1}{n}$ and $p_i = \frac{1-p_n}{n-1} < p_n$ for $i \neq n$. Then $H_i > H_n$ for all $i \neq n$ by the monotonicity of $-\log$. It follows that $\Pr(H \geq E[H]) = \Pr(X \neq n) = 1 - p_n$. For each $n$ we can make $1 - p_n$ arbitrarily close to $1 - \frac{1}{n}$, so sending $n$ to $\infty$, we can make this probability arbitrarily close to $1$.
Edit: For $a > 1$, however, we can get a non-trivial bound on $P(H \geq a E[H])$ by simply applying Markov's inequality:
$$P(H \geq a E[H]) \leq \frac{E[H]}{a E[H]} = \frac{1}{a}.$$
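As a sanity check, this sketch evaluates both the Bernoulli counterexample and the Markov bound numerically (the values of $p$ and $a$ are arbitrary; natural logarithms assumed):

```python
import math

def check(p, a):
    # For X ~ Ber(p): H_0 = -ln(1-p), H_1 = -ln(p), and E[H] is the
    # binary entropy of p.
    h = {0: -math.log(1.0 - p), 1: -math.log(p)}
    probs = {0: 1.0 - p, 1: p}
    eh = sum(probs[i] * h[i] for i in (0, 1))
    below = sum(probs[i] for i in (0, 1) if h[i] <= eh)     # Pr(H <= E[H])
    tail = sum(probs[i] for i in (0, 1) if h[i] >= a * eh)  # Pr(H >= a E[H])
    print(f"p={p}: Pr(H <= E[H]) = {below:.3f}, "
          f"Pr(H >= {a}*E[H]) = {tail:.3f} <= 1/a = {1.0 / a:.3f}")

# Pr(H <= E[H]) approaches 1, so no non-trivial distribution-free bound,
# while Pr(H >= 2 E[H]) stays below 1/2 as Markov's inequality promises.
for p in [0.9, 0.99, 0.999]:
    check(p, a=2.0)
```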
edited Jul 21 at 18:50
answered Jul 21 at 4:56 by stats_model
The side of the inequality was indeed wrong, and I was looking for an upper bound. In the upper-bound case, though, even in your first example it holds that $\Pr(H \geq k\,E[H])$ is exponentially decreasing in $k$.
– ORBOT Inc.
Jul 21 at 9:36
The property of being exponentially decreasing in $k$ is independent of the question as you asked it. You asked about bounds on $P(H \geq a E[H] + b)$, not about rates of change as you vary $a$.
– stats_model
Jul 21 at 18:40