Why are $\max(x_i)$ and $\min(x_i)$ sufficient statistics for $\operatorname{Unif}(a,b)$?
Suppose I have $X_i \sim \operatorname{Unif}(a,b)$. The joint density is given by $$\frac{1}{(b-a)^n}\prod_{i=1}^n I(x_i \in (a,b)) = \frac{1}{(b-a)^n}\,I(\min(x_i) \in (a,b))\,I(\max(x_i) \in (a,b)).$$
Now, my question is: why does this satisfy the factorization theorem? Don't $I(\min(x_i) \in (a,b))$ and $I(\max(x_i) \in (a,b))$ still depend on $a$ and $b$? If they don't, then $\prod_{i=1}^n I(x_i \in (a,b))$ doesn't depend on $a$ or $b$ either, and we could factor the original joint distribution as required without any sufficient statistic.
I think I am misunderstanding something about sufficiency here.
probability probability-theory statistics probability-distributions statistical-inference
asked Jul 14 at 19:02 by jackson5 · edited Jul 15 at 3:09 by Michael Hardy
The vector of original samples $(x_1, \ldots, x_n)$ will always be a sufficient statistic, by definition. It is more interesting to see if something simpler [than the full sample] contains all the "information" that the sample has regarding the parameter.
– angryavian
Jul 14 at 19:11
Review the factorisation theorem. You would find that the joint density $f(\mathbf{x};\theta)$ factors as $f(\mathbf{x};\theta)=g(\theta, t(\mathbf{x}))\,h(\mathbf{x})$ for some $g$ and $h$, where $g$ depends on $\theta=(a,b)$ and on $x_1,\cdots,x_n$ only through $t(\mathbf{x})=(\min x_i,\max x_i)$, and $h$ is independent of $\theta$. (This factorization is written out explicitly after this thread.)
– StubbornAtom
Jul 14 at 19:13
Right, that makes sense. Thanks.
– jackson5
Jul 14 at 19:15
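To make the factorization described in the comment above concrete for this model, one can regroup the question's own joint density (nothing new is introduced here; the underbraces just label the two pieces):
$$f(x_1,\ldots,x_n;a,b)=\underbrace{\frac{1}{(b-a)^n}\,I\bigl(a<\min_i x_i\bigr)\,I\bigl(\max_i x_i<b\bigr)}_{g(\theta,\,t(\mathbf{x}))}\cdot\underbrace{1}_{h(\mathbf{x})},$$
with $\theta=(a,b)$ and $t(\mathbf{x})=(\min_i x_i,\max_i x_i)$. The $g$ factor does depend on $a$ and $b$ — that is allowed — but it touches the data only through $t(\mathbf{x})$, while $h\equiv 1$ involves no $\theta$ at all.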
1 Answer
I think you may be confused about the factorization theorem: if you can factor the joint density as
$$f(x_1,\ldots,x_n;\theta) = \phi(T;\theta)\cdot h(x_1,\ldots,x_n),$$
then $T$ is sufficient for $\theta$. The idea is that you can factor it into two pieces:
one that depends only on the statistic and the parameter(s), and
one that depends only on the data and not the parameter.
For your example, $h = 1$, which is independent of $\theta$, and $\phi$ depends on the data only through $\max x_i$ and $\min x_i$.
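A quick numerical illustration of this point (a minimal sketch, assuming NumPy is available; the helper name `uniform_likelihood` and the sample values are mine, not from the answer): two samples sharing the same minimum and maximum yield identical likelihoods for every candidate $(a,b)$, which is exactly what sufficiency of $(\min x_i, \max x_i)$ means here.

```python
import numpy as np

def uniform_likelihood(x, a, b):
    """Joint Unif(a, b) density of the sample x:
    (b - a)^(-n) if every observation lies in (a, b), else 0."""
    x = np.asarray(x, dtype=float)
    inside = (a < x.min()) and (x.max() < b)  # equivalent to checking every x_i
    return (b - a) ** (-x.size) if inside else 0.0

# Two samples with the same min and max but different interior points.
x1 = [0.2, 0.5, 0.9]
x2 = [0.2, 0.7, 0.9]

# The likelihoods agree for every (a, b): the pair (min, max)
# carries all the information the sample has about the parameters.
for a, b in [(0.0, 1.0), (0.1, 0.95), (0.3, 1.0)]:
    print((a, b), uniform_likelihood(x1, a, b), uniform_likelihood(x2, a, b))
```

Both likelihood columns print the same numbers (including the zero for $(a,b)=(0.3,1.0)$, where the common minimum falls outside the interval), matching the factorization argument above.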
answered Jul 14 at 19:14 by Marcus M (accepted)