Type 1 error condition in one tailed statistical hypothesis test

Consider the following classical statistical test setup:



One assumes a coin to be unfair in the sense that heads, say, occurs more frequently than tails. Thus we set $H_0: p\leq\frac12$ as the null hypothesis and $H_1: p>\frac12$ as the alternative, where $p$ is the probability of heads.



Also let $X$ count the occurrences of heads when tossing the coin $n$ times. Given $n$ and a significance level $\alpha$ we get the one-tail condition
\begin{equation}
(1)\quad P(X\geq k)\leq\alpha
\end{equation}
where $X$ has an $(n,p)$-binomial distribution with $p\leq\frac12$ (thus yielding the probability of rejecting $H_0$ when it is actually true).



To solve $(1)$ for $k$ it is common (schoolbook) practice to set $p=\frac12$ and solve $(1)$ by inversion. But this doesn't seem correct, as we only know $p\leq\frac12$.
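For concreteness, the schoolbook inversion can be sketched in a few lines of Python (the helper names are mine, not from the question; standard library only):

```python
from math import comb

def tail_prob(n, p, k):
    """P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

def smallest_k(n, alpha, p0=0.5):
    """Smallest k with P(X >= k) <= alpha when p is fixed at p0."""
    # tail_prob(n, p0, n + 1) = 0, so the loop always terminates
    for k in range(n + 2):
        if tail_prob(n, p0, k) <= alpha:
            return k

print(smallest_k(20, 0.05))  # 15: at n = 20, reject H_0 only from 15 heads on
```

So for $n=20$ and $\alpha=0.05$ one gets $k=15$, since $P(X\geq 15)\approx 0.0207\leq 0.05 < P(X\geq 14)\approx 0.0577$.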



So wouldn't it be better to use a distribution for "$k$ wins out of $n$ with a probability of success $\leq\frac12$", and which would that appropriate distribution be?




I want to be more precise: in a more general context the maximum $\alpha$ error could be defined as
\begin{equation}
\alpha_{\max} := \max_{\theta\in\Theta_0} P_\theta\bigl(T(X_1,\dotsc,X_n)\in K\bigr)
\end{equation}
where $T$ is some kind of test statistic, in our case counting the number of heads in a sample $X_1,\dotsc,X_n$; $\Theta$ is the parameter space in question (our parameter is $p\sim\theta$), and $\Theta_0$ the subspace corresponding to the null hypothesis, i.e.
\begin{equation}
H_0: \theta\in\Theta_0,\quad H_1: \theta\in\Theta\setminus\Theta_0;
\end{equation}
and finally $K$ is the rejection region of $H_0$, i.e.
\begin{equation}
H_0 \text{ is rejected iff } T(X_1,\dotsc,X_n)\in K.
\end{equation}



So in particular we have $\Theta=[0,1]$, $\Theta_0=[0,\frac12]$, yielding
\begin{equation}
\alpha_{\max} = \max_{p\leq\frac12}\sum_{i=k}^n B_{n,p}(X=i),
\end{equation}
which should now be $\leq$ a given significance level.
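Numerically, this maximization is easy to probe: the tail sum is increasing in $p$, so the maximum over $p\leq\frac12$ sits at the boundary $p=\frac12$. A minimal check (the values $n=20$, $k=15$ and the grid of $p$-values are illustrative choices of mine, not from the question):

```python
from math import comb

def tail(n, p, k):
    # P(X >= k) for X ~ Binomial(n, p)
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

n, k = 20, 15
probs = [tail(n, j / 100, k) for j in range(1, 51)]  # p = 0.01, ..., 0.50

# The tail probability grows with p, so alpha_max is attained at p = 1/2.
assert all(a <= b for a, b in zip(probs, probs[1:]))
assert abs(max(probs) - tail(n, 0.5, k)) < 1e-12
```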



[Definitions from http://www.wiwi.uni-muenster.de/05/download/studium/advancedstatistics/ss09/kapitel_6.pdf - couldn't find equivalent in English]




























  • Now you just need to ask a concrete question about the addition. Have you actually already looked at the binomial test on Wikipedia?
    – callculus
    Aug 4 at 16:00











  • Why you are looking for a maximal $\alpha$ here is unclear. In a hypothesis test it is fixed from the outset, as you can also see from the inequality in my answer.
    – callculus
    Aug 4 at 16:14











  • I now understand it like this: the significance level (usually denoted $\alpha$) gives an upper bound for the probability of committing a type 1 error, say $\alpha_1$. But since, under the assumption of $H_0$, one does not know $p$ exactly (one only has an inequality), one bounds $\alpha_1$ by $\alpha_{\max}$.
    – Don Fuchs
    Aug 4 at 17:46










  • And by monotonicity the sum $\sum B_{n,p}(X=i)$ is maximal for $p=p_0=\frac12$ (for fixed $k$): the probability of at least $k$ heads increases with the success probability.
    – Don Fuchs
    Aug 4 at 17:54














edited Aug 4 at 15:09
























asked Aug 2 at 15:07









Don Fuchs






















1 Answer






























Both null hypotheses are possible. The crucial point is the definition of the alternative hypothesis $H_1$. This definition is unique, as you can see in the table below. $$\begin{array}{|l|c|c|} \hline & H_0 & H_1 \\ \hline \text{two-tailed} & p=p_0 & p\neq p_0 \\ \hline \text{right-tailed} & p=p_0 \text{ or } p\leq p_0 & p>p_0 \\ \hline \text{left-tailed} & p=p_0 \text{ or } p\geq p_0 & p<p_0 \\ \hline \end{array}$$



For the right-tailed case you evaluate the smallest value of $c$ for which



$$\sum_{i=c}^n B(i\mid p_0,n)\leq \alpha$$



Then the critical range is $\{c, c+1, \ldots, n\}$.
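The rule above translates directly into code; a minimal sketch (function names and the example values are mine, not from the answer):

```python
from math import comb

def tail(n, p0, c):
    """sum_{i=c}^{n} B(i | p0, n), i.e. P(X >= c) under p = p0."""
    return sum(comb(n, i) * p0**i * (1 - p0)**(n - i) for i in range(c, n + 1))

def critical_range(n, p0, alpha):
    """Smallest c with tail(n, p0, c) <= alpha, and the resulting rejection set."""
    # tail(n, p0, n + 1) = 0 <= alpha, so next() always succeeds
    c = next(c for c in range(n + 2) if tail(n, p0, c) <= alpha)
    return list(range(c, n + 1))

print(critical_range(20, 0.5, 0.05))  # [15, 16, 17, 18, 19, 20]
```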



























  • Alright, but still the question remains why we use $p_0$ in your sum in the cases where $H_0: p\leq p_0$ (or $\geq$): in these cases we do not know the underlying probability distribution needed to calculate $P(H_0\text{ is true}\land H_0\text{ is rejected})$.
    – Don Fuchs
    Aug 2 at 18:03











  • In your case $p_0=\frac12$.
    – callculus
    Aug 2 at 18:06










  • Of course, but what if $p=\frac14<\frac12=p_0$? Assuming $H_0$, for all we know this could be the case.
    – Don Fuchs
    Aug 2 at 18:13










  • @DonFuchs We don't know the real value of $p$, before or after the test. The only statement we can make is the following: if the estimated value of $p\cdot n$ is in the interval $\{c, c+1, \ldots, n\}$, we do not accept the null hypothesis at a statistical significance of $\alpha$. Or: we do not reject the alternative hypothesis $p\geq \frac12$ at a statistical significance of $\alpha$.
    – callculus
    Aug 2 at 18:26











  • Sorry, but that doesn't convince me. The point is: what we do in your sum above is calculate $P(H_0\text{ is true}\land H_0\text{ is rejected})$ (which is the type 1 or $\alpha$ error) without really taking the condition "$H_0$ is true" into account (by additionally assuming $p$ not to be less than $p_0$).
    – Don Fuchs
    Aug 4 at 14:08











answered Aug 2 at 16:21









callculus
