Expected value and distributions

I'm reading a book at the moment, and it has a theorem that states:

If $X \sim N(\mu,\sigma^2)$, then $E(X) = \mu$.

My question is: when would it not equal $\mu$?

I thought the expected value is the mean of any distribution, not just the normal. Is this rule only the case for the normal distribution, for all symmetric distributions, or for all distributions?

Thanks







asked Jul 26 at 23:10 by Bucephalus

          2 Answers

















Accepted answer (score 3)










The expected value $E(X)$ is often connected to the parameters of the distribution, for instance:

$$ X \sim \mathrm{Bin}(n,p), \quad E(X) = np $$
$$ X \sim \mathrm{Geom}(p), \quad E(X) = \tfrac{1}{p} $$
$$ X \sim \mathrm{NegBin}(r,p), \quad E(X) = \tfrac{r}{p} $$
$$ X \sim \mathrm{Pois}(\lambda), \quad E(X) = \lambda $$
$$ X \sim N(\mu, \sigma^2), \quad E(X) = \mu $$
$$ X \sim \mathrm{Gamma}(\alpha,\beta), \quad E(X) = \tfrac{\alpha}{\beta} $$
$$ X \sim U(a,b), \quad E(X) = \tfrac{a+b}{2} $$
$$ X \sim \mathrm{Beta}(\alpha,\beta), \quad E(X) = \tfrac{\alpha}{\alpha+\beta} $$
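As an illustrative aside (not part of the original answer), these closed-form expectations can be sanity-checked numerically. A minimal sketch, assuming SciPy is installed; the parameter values are arbitrary:

```python
from scipy import stats

# Arbitrary illustrative parameters (not from the answer above).
n, p = 10, 0.3
lam = 4.0
a, b = 2.0, 10.0
alpha, beta = 2.0, 5.0

# Each frozen distribution's mean() should match the closed-form expectation.
print(stats.binom(n, p).mean(), n * p)                          # Binomial: np
print(stats.geom(p).mean(), 1 / p)                              # Geometric (trials to first success): 1/p
print(stats.poisson(lam).mean(), lam)                           # Poisson: lambda
print(stats.uniform(loc=a, scale=b - a).mean(), (a + b) / 2)    # Uniform on [a, b]: (a+b)/2
print(stats.beta(alpha, beta).mean(), alpha / (alpha + beta))   # Beta: alpha/(alpha+beta)
```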



So $E(X)$ comes out as a function of the parameters simply because of how the expected value is defined:

$$ E(X) = \int_{-\infty}^{\infty} x f(x)\, dx $$

where $f(x)$ is our density function, for continuous random variables, and

$$ E(X) = \sum_i x_i\, p(x_i) $$

for discrete random variables.
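To make the discrete definition concrete (a small added example, not in the original answer), here is that sum computed directly for a binomial variable; the result matches the $np$ formula from the list above:

```python
from math import comb  # assumes Python 3.8+

# Arbitrary illustrative parameters.
n, p = 10, 0.3

# E(X) = sum_k k * P(X = k) for X ~ Bin(n, p), with P(X = k) = C(n, k) p^k (1-p)^(n-k)
expectation = sum(k * comb(n, k) * p**k * (1 - p) ** (n - k) for k in range(n + 1))

print(expectation)  # 3.0 (up to floating-point error), i.e. n * p
```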



Given these definitions, it isn't surprising that the parameters of the distribution show up in the expectation. For instance, suppose $X \sim U(a,b)$. Its pdf is given by

$$ f(x) = \begin{cases} \frac{1}{b-a} & \text{for } a \leq x \leq b \\[4pt] 0 & \text{for } x < a \text{ or } x > b \end{cases} $$

so

$$ E(X) = \int_{-\infty}^{\infty} x f(x)\, dx = \frac{1}{b-a}\int_a^b x\, dx = \frac{1}{b-a}\,\frac{x^2}{2}\Big|_a^b = \frac{b^2-a^2}{2(b-a)} = \frac{(b-a)(b+a)}{2(b-a)} = \frac{b+a}{2} $$
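As a quick numerical check (an aside, not part of the original answer), a Monte Carlo sketch assuming NumPy is available shows the sample mean of uniform draws approaching $\frac{a+b}{2}$:

```python
import numpy as np

# Arbitrary illustrative endpoints.
a, b = 2.0, 10.0

rng = np.random.default_rng(0)
samples = rng.uniform(a, b, size=1_000_000)  # i.i.d. draws from U(a, b)

print(samples.mean())   # close to 6.0
print((a + b) / 2)      # the exact expectation (a + b) / 2
```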






answered Jul 26 at 23:51 (edited Jul 27 at 0:06) by RHowe
• Yes, I'm understanding more now thanks to both of you. Thank you. – Bucephalus, Jul 27 at 0:09

• You're welcome, no problem. – RHowe, Jul 27 at 0:19

















Answer (score 1)













The normal distribution $N(\mu, \sigma^2)$ is a probability distribution based on the pdf:

$$ f(x; \mu, \sigma^2) = \frac{1}{\sqrt{2\pi\sigma^2}}\, e^{-\frac{(x - \mu)^2}{2\sigma^2}} $$

Nowhere in that definition is it guaranteed that $\mu$ is the mean of the distribution, or that $\sigma$ is its standard deviation. Both need to be proven.
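For the mean, here is a sketch of the standard argument (not spelled out in the original answer): substitute $u = x - \mu$, then use the symmetry of the Gaussian integrand and the fact that the pdf integrates to 1.

$$
\begin{aligned}
E(X) &= \int_{-\infty}^{\infty} x\,\frac{1}{\sqrt{2\pi\sigma^2}}\,e^{-\frac{(x-\mu)^2}{2\sigma^2}}\,dx \\
     &= \int_{-\infty}^{\infty} (u+\mu)\,\frac{1}{\sqrt{2\pi\sigma^2}}\,e^{-\frac{u^2}{2\sigma^2}}\,du \qquad (u = x-\mu) \\
     &= \underbrace{\int_{-\infty}^{\infty} u\,\frac{1}{\sqrt{2\pi\sigma^2}}\,e^{-\frac{u^2}{2\sigma^2}}\,du}_{=\,0\ \text{(odd integrand)}} \;+\; \mu\,\underbrace{\int_{-\infty}^{\infty} \frac{1}{\sqrt{2\pi\sigma^2}}\,e^{-\frac{u^2}{2\sigma^2}}\,du}_{=\,1\ \text{(total probability)}} \;=\; \mu.
\end{aligned}
$$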



With other probability distributions, there's no guarantee that they will be directly parameterised by their expected value. For example, the log-normal distribution $\mathrm{Lognormal}(\mu, \sigma^2)$ has an expected value of $e^{\mu+\frac{\sigma^2}{2}}$.
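To make the log-normal example concrete (an illustrative aside, not from the original answer; assumes NumPy, and the parameter values are arbitrary), the sample mean of log-normal draws tracks $e^{\mu+\sigma^2/2}$ rather than $\mu$:

```python
import numpy as np

# Arbitrary illustrative parameters.
mu, sigma = 1.0, 0.5

rng = np.random.default_rng(0)
samples = rng.lognormal(mean=mu, sigma=sigma, size=1_000_000)

print(samples.mean())              # roughly 3.08
print(np.exp(mu + sigma**2 / 2))   # e^{mu + sigma^2/2} ≈ 3.08, the true expected value
print(mu)                          # the parameter mu = 1.0, which is NOT the expected value
```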






answered Jul 26 at 23:18 by ConMan
• I think I'm understanding what you're saying @ConMan. You're saying that we will use $E(X)$ to be our estimate of the population parameter $\mu$, and we will do the same for $\operatorname{var}(X)$ and $\sigma^2$. – Bucephalus, Jul 26 at 23:26

• No, I'm saying that the probability distribution is defined by a function which takes two parameters. The first parameter happens to be the expected value of the distribution, and the second one happens to be its variance. Similarly, the uniform distribution is parameterised by the start and end points, and its expected value is the average of the two. – ConMan, Jul 26 at 23:30

• And so I'm clear, neither of these are estimates. They are provably equal to the required values. – ConMan, Jul 26 at 23:31

• Oh yeah, I get it now: the parameters of a distribution aren't necessarily the average and the variance. They just happen to be the parameters for the normal distribution. So that would probably be the case for any symmetric distribution too? For example, $x^2$, $-1 < x < 1$? – Bucephalus, Jul 26 at 23:34

• Nope, not necessarily. Like I said, the uniform distribution $U(a, b)$ is a flat line between the points $a$ and $b$. Its mean value is at $\frac{a+b}{2}$, and it's symmetric about that point. Parameters are just "the values necessary to define the shape of the distribution", and they do different things in different distributions. – ConMan, Jul 27 at 3:13










