Entropy of a unimodal continuous probability distribution
























Among unimodal continuous probability distributions supported on the positive reals and whose mean and mode coincide, which one has the maximal entropy?






















  • +1, nice question. Do you mean the differential entropy? (As there are different ways to define entropy for continuous distributions; see here.) – joriki, Jul 21 at 13:13















asked Jul 21 at 12:44 by Sylvain Julien







1 Answer

















accepted










I'll assume that you're referring to the differential entropy of the distribution.



There is no such extremal unimodal distribution.



There are three constraints on the density $f(x)$. Two can be expressed in integral form:



$$\int_0^\infty f(x)\,\mathrm dx=1$$



and



$$\int_0^\infty xf(x)\,\mathrm dx=\mu\;.$$



Ignoring for now the constraint that $f$ is unimodal with mode $x_0$, we get the following Lagrangian:



$$
L[f]=\int_0^\infty\left(f(x)\log f(x)+\alpha f(x)+\beta xf(x)\right)\mathrm dx\;.
$$



Varying with respect to $f$ yields



$$
\log f(x)+1+\alpha+\beta x=0\;.
$$



Thus, $f$ for this simplified problem would be an exponential function. I'm not sure how to prove this formally, but it seems clear to me that if we add the constraint that $f$ is unimodal with mode $x_0$, the result must be two exponentials decaying away from $x_0$ on either side. But then there's no extremal value of the parameters, because for any given solution you can make the decay slightly slower on both sides while maintaining the mode and the mean, thus slightly increasing the entropy. The limit of this construction is a distribution that's no longer unimodal, but is constant up to the “mode” and then decays exponentially.
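Solving the stationarity condition gives $f(x)=e^{-1-\alpha-\beta x}$, an exponential density whose parameters are fixed by the two integral constraints. As a numerical sanity check (my own sketch, not part of the original argument), one can compare the differential entropy of the exponential with that of another positive-support density with the same mean, here a Gamma density chosen for illustration:

```python
import numpy as np

# Differential entropy -∫ f log f via a Riemann sum on a dense grid.
x = np.linspace(0.0, 60.0, 600_000)
dx = x[1] - x[0]

def entropy(fx):
    fx = np.maximum(fx, 1e-300)  # avoid log(0); these points contribute ~0
    return -np.sum(fx * np.log(fx)) * dx

mu = 1.0
h_exp = entropy(np.exp(-x / mu) / mu)          # Exp with mean mu; analytically 1 + log(mu)
h_gamma = entropy(4.0 * x * np.exp(-2.0 * x))  # Gamma(shape 2) with the same mean 1

print(h_exp, h_gamma)  # h_exp ≈ 1.0 > h_gamma ≈ 0.88
```

The exponential comes out on top, consistent with it being the max-entropy density on $(0,\infty)$ under a mean constraint alone.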


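The last step can also be illustrated numerically (again my own sketch; the formulas for the rates and normalization below are solved from the constraints, not taken from the answer). Fix the common mode and mean at $x_0=1$, build two-sided exponential densities $c\,e^{a(x-x_0)}$ on $[0,x_0]$ and $c\,e^{-b(x-x_0)}$ on $[x_0,\infty)$, and check that the entropy keeps increasing as the left-side rate $a$ shrinks, approaching the constant-then-exponential limit:

```python
import math
import numpy as np

x0 = 1.0  # common mode and mean
x = np.linspace(0.0, 60.0, 600_000)
dx = x[1] - x[0]

def entropy(fx):
    fx = np.maximum(fx, 1e-300)
    return -np.sum(fx * np.log(fx)) * dx

def two_sided(a):
    """Density rising like e^{a(x-x0)} below x0, decaying like e^{-b(x-x0)} above,
    with b and c solved from the mean-equals-x0 and normalization constraints."""
    t = a * x0
    b = a / math.sqrt(1.0 - (1.0 + t) * math.exp(-t))  # forces mean = x0
    c = 1.0 / ((1.0 - math.exp(-t)) / a + 1.0 / b)     # forces total mass 1
    return np.where(x < x0, c * np.exp(a * (x - x0)), c * np.exp(-b * (x - x0)))

# Slower variation on the left (smaller a) gives strictly larger entropy ...
hs = [entropy(two_sided(a)) for a in (2.0, 1.0, 0.5, 0.1)]

# ... bounded by the no-longer-unimodal limit: constant up to x0, then exponential.
b_lim = math.sqrt(2.0) / x0
c_lim = 1.0 / (x0 + 1.0 / b_lim)
h_lim = entropy(np.where(x < x0, c_lim, c_lim * np.exp(-b_lim * (x - x0))))

print(hs, h_lim)  # entropies increase toward h_lim ≈ 0.949
```

The entropies form a strictly increasing sequence below the limiting value, matching the claim that the supremum is approached but never attained within the unimodal family.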





answered Jul 21 at 15:35 by joriki (edited Jul 21 at 23:48)






















             
