Known Markov-type inequalities on entropy?

Let $P$ be a discrete distribution where the $i$-th value has probability $p_i$.
Define $H$ as the random variable that takes the value $-\log(p_i)$ with probability $p_i$.



By definition, $E(H) = \mathrm{H}(P)$, the Shannon entropy of $P$.
Is any non-trivial upper bound on $\Pr(H \geq aE(H)+b)$ known that is independent of the distribution $P$?

asked Jul 20 at 11:16 by ORBOT Inc. · edited Jul 21 at 9:38 by Sangchul Lee

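As a quick illustration of the setup (a minimal sketch in Python; the distribution $P$ below is my own example, not from the question):

    import math

    # An example distribution P (my choice, for illustration only).
    P = [0.5, 0.25, 0.125, 0.125]
    # H takes the value -log(p_i) with probability p_i (natural log, so nats).
    H_values = [-math.log(p) for p in P]

    # E[H] is exactly the Shannon entropy H(P).
    entropy = sum(p * h for p, h in zip(P, H_values))

    # The quantity the question asks to bound: Pr(H >= a*E[H] + b), i.e. the
    # total mass of outcomes whose self-information exceeds a*H(P) + b.
    a, b = 2.0, 0.0
    tail = sum(p for p, h in zip(P, H_values) if h >= a * entropy + b)
    print(entropy, tail)
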
  • I'm somewhat confused by what is meant by a "Markov-type" inequality here. Markov's inequality says that $\Pr(X \geq a) \leq \frac{E(X)}{a}$, whereas you are asking about $\Pr(H \leq a E(H) + b)$.
    – stats_model
    Jul 21 at 3:52

  • It also doesn't seem your stated probability can be non-trivially bounded independently of the distribution for plain old expectations either. Consider, for example, $X_p \in \left\{-\frac{1}{1-p}, \frac{1}{p}\right\}$ with $P\left(X_p = -\frac{1}{1-p}\right) = 1-p$ and $P\left(X_p = \frac{1}{p}\right) = p$. Then $E[X_p] = 0$ for all $p \in (0,1)$, but $P(X_p \leq E[X_p]) = 1-p$, which can be made arbitrarily close to $1$. (A numeric check of this example appears after these comments.)
    – stats_model
    Jul 21 at 3:57

  • The side of the inequality was wrong, sorry, I edited it now...
    – ORBOT Inc.
    Jul 21 at 9:33

  • Reversing the sign of the inequality does not change my example above beyond some trivial modifications. Just let $X_p \in \left\{-\frac{1}{p}, \frac{1}{1-p}\right\}$ with $P\left(X_p = -\frac{1}{p}\right) = p$ and $P\left(X_p = \frac{1}{1-p}\right) = 1-p$. Now the argument is the same as before, but with the inequality flipped.
    – stats_model
    Jul 21 at 18:42
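
A numeric sanity check of the two-point example from the comments (a sketch; the value of $p$ and the sample size are arbitrary):

    import random

    # The two-point variable from the comment: X_p = 1/p with probability p
    # and -1/(1-p) with probability 1-p, so E[X_p] = 0 exactly while
    # P(X_p <= E[X_p]) = 1-p, which approaches 1 as p -> 0.
    def sample(p):
        return 1 / p if random.random() < p else -1 / (1 - p)

    p, n = 0.05, 200_000
    xs = [sample(p) for _ in range(n)]
    print(sum(xs) / n)                  # close to E[X_p] = 0
    print(sum(x <= 0 for x in xs) / n)  # close to 1 - p = 0.95
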
1 Answer

The following counterexample shows that the probability you ask about cannot, as stated, be non-trivially bounded (i.e. restrictions on $a$ and $b$ must be made). If $X \sim \mathrm{Ber}(p)$, then for $p > 0.5$ we have $H_1 < H_0$, where $H_i$ is the value of $H$ when $X = i$. Since $E[H]$ is a convex combination of $H_1$ and $H_0$, it lies strictly between them, so $\Pr(H \leq E[H]) = \Pr(X = 1) = p$. Since $p$ can be made arbitrarily close to $1$, the only possible bound is the trivial one.
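
A quick numeric check of this Bernoulli example (a sketch; the chosen values of $p$ are arbitrary):

    import math

    # For X ~ Ber(p) with p > 0.5: H is -log(p) on {X = 1} and -log(1-p) on
    # {X = 0}; E[H] is a convex combination of the two, so Pr(H <= E[H]) = p.
    for p in (0.6, 0.9, 0.99):
        h1, h0 = -math.log(p), -math.log(1 - p)
        eh = p * h1 + (1 - p) * h0              # binary entropy in nats
        mass = p * (h1 <= eh) + (1 - p) * (h0 <= eh)
        print(p, round(eh, 4), mass)            # mass equals p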



Another example shows that a lower bound is not possible either. Fix $n$ and consider $X \in \{1,\ldots,n\}$ with $p_n > \frac{1}{n}$ and $p_i = \frac{1-p_n}{n-1} < p_n$ for $i \neq n$. Then $H_i > H_n$ for all $i \neq n$ by the monotonicity of $-\log$, so $\Pr(H \geq E[H]) = \Pr(X \neq n) = 1 - p_n$. For each $n$ we can make $p_n$ arbitrarily close to $\frac{1}{n}$, and sending $n \to \infty$ we can make this probability arbitrarily close to $1$.
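
The same kind of check for this second example (a sketch; $n$ and the offset above $\frac{1}{n}$ are arbitrary):

    import math

    n = 1000
    p_n = 1 / n + 1e-6             # just above 1/n (arbitrary offset)
    p_i = (1 - p_n) / (n - 1)      # shared equally by the other n-1 outcomes

    h_n, h_i = -math.log(p_n), -math.log(p_i)
    eh = p_n * h_n + (1 - p_n) * h_i
    print(h_n < eh < h_i)          # True: E[H] lies strictly between the values
    print(1 - p_n)                 # Pr(H >= E[H]), roughly 0.999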



Edit: For $a > 1$, however, we can get a non-trivial bound for $\Pr(H \geq a E[H])$ by simply applying Markov's inequality:
$$\Pr(H \geq a E[H]) \leq \frac{E[H]}{a E[H]} = \frac{1}{a}.$$
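
And a check that this Markov bound holds on a concrete distribution (a sketch; the truncated geometric $P$ is my choice, not from the answer):

    import math

    # A truncated geometric distribution, with the last mass doubled so the
    # probabilities sum exactly to 1.
    P = [2.0 ** -(i + 1) for i in range(20)]
    P[-1] *= 2
    H_values = [-math.log(p) for p in P]
    eh = sum(p * h for p, h in zip(P, H_values))

    for a in (1.5, 2.0, 3.0):
        tail = sum(p for p, h in zip(P, H_values) if h >= a * eh)
        assert tail <= 1 / a + 1e-12    # Markov: Pr(H >= a*E[H]) <= 1/a
        print(a, round(tail, 4), "<=", round(1 / a, 4))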

answered Jul 21 at 4:56 by stats_model · edited Jul 21 at 18:50

  • The side of the inequality was indeed wrong and I was looking for an upper bound. In the upper-bound case, though, even in your first example it holds that $\Pr(H \leq k\,E[H])$ is exponentially decreasing in $k$.
    – ORBOT Inc.
    Jul 21 at 9:36

  • The property of being exponentially decreasing in $k$ is independent of the question as you asked it. You asked about bounds on $P(H \leq a E[H] + b)$, not about rates of change as you vary $a$.
    – stats_model
    Jul 21 at 18:40