Why are $\max(x_i)$ and $\min(x_i)$ sufficient statistics for $\operatorname{Unif}(a,b)$?

Suppose I have $X_i \sim \operatorname{Unif}(a,b)$. I have that the joint distribution is given by $$\frac{1}{\left(b-a\right)^n}\prod_{i=1}^n I(x_i \in (a,b)) = \frac{1}{\left(b-a\right)^n}\,I(\min(x_i) \in (a,b))\,I(\max(x_i)\in (a,b)).$$



Now, my question is: why does this satisfy the factorization theorem? Don't $I(\min(x_i) \in (a,b))$ and $I(\max(x_i)\in (a,b))$ still depend on $a$ and $b$? And if they don't, then doesn't $\prod_{i=1}^n I(x_i \in (a,b))$ also not depend on $a$ or $b$, so that we could factor the original joint distribution as required, without any sufficient statistic?



I think I am misunderstanding something about sufficiency here.
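
For reference, the identity in the display above just says that every $x_i$ lies in $(a,b)$ exactly when the smallest observation exceeds $a$ and the largest is below $b$:
$$\prod_{i=1}^n I(x_i \in (a,b)) \;=\; I\big(\min_i x_i > a\big)\, I\big(\max_i x_i < b\big) \;=\; I\big(\min(x_i) \in (a,b)\big)\, I\big(\max(x_i)\in (a,b)\big).$$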







  • The vector of original samples $(x_1, \ldots, x_n)$ will always be a sufficient statistic, by definition. It is more interesting to see if something simpler [than the full sample] contains all the "information" that the sample has regarding the parameter.
    – angryavian
    Jul 14 at 19:11






  • Review the factorisation theorem. You would find that the joint density $f(\mathbf x;\theta)$ factors as $f(\mathbf x;\theta)=g(\theta, t(\mathbf x))\,h(\mathbf x)$ for some $g$ and $h$, where $g$ depends on $\theta=(a,b)$ and on $x_1,\cdots,x_n$ through $t(\mathbf x)=(\min x_i,\max x_i)$, and $h$ is independent of $\theta$.
    – StubbornAtom
    Jul 14 at 19:13











  • Right, that makes sense. Thanks.
    – jackson5
    Jul 14 at 19:15














up vote
1
down vote

favorite












Suppose I have $X_i sim operatornameUnif(a,b)$. I have that the joint distribution is given by $$frac1left(b-aright)^nprod_i=1^n I(x_i in (a,b)) = frac1left(b-aright)^nI(min(x_i) in (a,b))I(max(x_i)in (a,b)).$$



Now, my question is why does this satisfy the factorization theorem? Don't $I(min(x_i) in (a,b))$ and $I(max(x_i)in (a,b))$ still depend on $a$ and $b$? If they don't, then don't we also have that $prod_i=1^n I(x_i in (a,b))$ doesn't depend on $a$ or $b$, and so, we can factor the original joint distribution as required, without any sufficient statistic.



I think I am misunderstanding something about sufficiency here.







share|cite|improve this question

















  • 2




    The vector of original samples $(x_1, ldots, x_n)$ will always be a sufficient statistic, by definition. It is more interesting to see if something simpler [than the full sample] contains all the "information" that the sample has regarding the parameter.
    – angryavian
    Jul 14 at 19:11






  • 1




    Review the factorisation theorem. You would find that the joint density $f(mathbf x;theta)$ factors as $f(mathbf x;theta)=g(theta, t(mathbf x))h(mathbf x)$ for some $g$ and $h$ where $g$ depends on $theta=(a,b)$ and on $x_1,cdots,x_n$ through $t(mathbf x)=(min x_i,max x_i)$ and $h$ is independent of $theta$.
    – StubbornAtom
    Jul 14 at 19:13











  • Right, that makes sense. Thanks.
    – jackson5
    Jul 14 at 19:15












up vote
1
down vote

favorite









up vote
1
down vote

favorite











Suppose I have $X_i sim operatornameUnif(a,b)$. I have that the joint distribution is given by $$frac1left(b-aright)^nprod_i=1^n I(x_i in (a,b)) = frac1left(b-aright)^nI(min(x_i) in (a,b))I(max(x_i)in (a,b)).$$



Now, my question is why does this satisfy the factorization theorem? Don't $I(min(x_i) in (a,b))$ and $I(max(x_i)in (a,b))$ still depend on $a$ and $b$? If they don't, then don't we also have that $prod_i=1^n I(x_i in (a,b))$ doesn't depend on $a$ or $b$, and so, we can factor the original joint distribution as required, without any sufficient statistic.



I think I am misunderstanding something about sufficiency here.







share|cite|improve this question













Suppose I have $X_i sim operatornameUnif(a,b)$. I have that the joint distribution is given by $$frac1left(b-aright)^nprod_i=1^n I(x_i in (a,b)) = frac1left(b-aright)^nI(min(x_i) in (a,b))I(max(x_i)in (a,b)).$$



Now, my question is why does this satisfy the factorization theorem? Don't $I(min(x_i) in (a,b))$ and $I(max(x_i)in (a,b))$ still depend on $a$ and $b$? If they don't, then don't we also have that $prod_i=1^n I(x_i in (a,b))$ doesn't depend on $a$ or $b$, and so, we can factor the original joint distribution as required, without any sufficient statistic.



I think I am misunderstanding something about sufficiency here.









share|cite|improve this question












share|cite|improve this question




share|cite|improve this question








asked Jul 14 at 19:02 by jackson5; edited Jul 15 at 3:09 by Michael Hardy







1 Answer

















I think you may be confused about the factorization theorem: if you can factor the joint density as



$$f(x_1,\ldots,x_n ; \theta) = \phi(T;\theta)\cdot h(x_1,\ldots,x_n),$$



then $T$ is sufficient for $\theta$. The idea is that you can factor it into two pieces:



  • one that depends only on the statistic and the parameter(s)


  • one that depends only on the data and not the parameter


For your example, $h = 1$, which is independent of $\theta$, and $\phi$ depends only on $\max x_i$ and $\min x_i$.
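
Concretely, with $T = (\min_i x_i,\ \max_i x_i)$, the two pieces for this example can be written out as
$$f(x_1,\ldots,x_n; a,b) \;=\; \underbrace{\frac{1}{(b-a)^n}\, I\big(\min_i x_i > a\big)\, I\big(\max_i x_i < b\big)}_{\phi(T;\,a,b)} \;\cdot\; \underbrace{1}_{h(x_1,\ldots,x_n)}.$$
The indicators do depend on $(a,b)$, and that is fine; what matters is that they depend on the data only through $T$.

As a quick numerical sanity check (a minimal sketch using NumPy; the helper loglik, the sample values, and the candidate $(a,b)$ pairs are made up for illustration), two samples that share the same minimum and maximum receive identical $\operatorname{Unif}(a,b)$ likelihoods for every $(a,b)$, which is exactly what sufficiency of $(\min x_i, \max x_i)$ promises:

    import numpy as np

    def loglik(x, a, b):
        # Log-likelihood of an i.i.d. Unif(a, b) sample x, using the open interval (a, b).
        x = np.asarray(x, dtype=float)
        if x.min() <= a or x.max() >= b:  # some observation falls outside (a, b)
            return -np.inf
        return -len(x) * np.log(b - a)

    # Two different samples with the same minimum (0.2) and maximum (0.9) ...
    x1 = [0.2, 0.5, 0.9]
    x2 = [0.2, 0.71, 0.9]

    # ... get identical log-likelihoods for every candidate (a, b):
    for a, b in [(0.0, 1.0), (0.1, 0.95), (0.3, 1.0)]:
        print((a, b), loglik(x1, a, b), loglik(x2, a, b))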






answered Jul 14 at 19:14 by Marcus M






















             
