What is the second moment for a symmetric set of vectors?

I am new to vector statistics and just wanted to check whether my deduction here is correct.



I have a set of vectors from an $N$-dimensional space
$$
v_k=\begin{bmatrix}
v_{k,1} \\
v_{k,2} \\
\vdots \\
v_{k,N}
\end{bmatrix}
$$
whose elements are either $-1$ or $1$. Suppose the set is the complete collection of all possible such vectors, so it contains $2^N$ of them. I know the first moment of these vectors is $0$ because of symmetry; can I then say that the second moment, the variance-covariance matrix, is equal to the $N\times N$ identity matrix?



$$
\operatorname{cov}_{ij} = \frac{1}{2^N} \sum_{k=1}^{2^N} \left[(v_{k,i}-\mu_k)(v_{k,j}-\mu_k)\right]
$$



where $\mu_k$ is the $k$th element of the first moment vector.



Is there an algebraic proof of this?
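A quick numerical sanity check of the claim, as a minimal sketch that enumerates all $2^N$ sign vectors and normalizes the covariance by their count, $2^N$:

```python
# Minimal sketch: enumerate every vector in {-1, +1}^N, then compute the
# sample mean (first moment) and the covariance normalized by 2^N.
import itertools
import numpy as np

N = 4
V = np.array(list(itertools.product([-1, 1], repeat=N)), dtype=float)  # shape (2^N, N)

mu = V.mean(axis=0)                    # first moment: the zero vector
cov = (V - mu).T @ (V - mu) / len(V)   # variance-covariance matrix

print(np.allclose(mu, np.zeros(N)))    # True
print(np.allclose(cov, np.eye(N)))     # True: the N x N identity matrix
```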







asked Jul 18 at 17:12 by Alireza; edited Jul 18 at 19:38 by Michael Hardy




















1 Answer

































One problem here is that you have not specified a distribution for the vectors: you have said what the possible values of the elements are, but you have not specified their probabilities. Without giving a distributional form for the vector, it is not correct for you to say that the first moment (mean) is zero, and it is not possible to derive the variance-covariance matrix. (Also, the mean of a random vector is itself a vector, not a scalar, so your notation is confused.)



Perhaps what you mean when you mention the 'symmetry' here is that you intend for every possible outcome to have equal probability (in which case, you should really specify this explicitly). In this case the elements of the vector would be independent with equal probabilities of values $-1$ and $1$. This gives the distributional form $v_{k,i} \sim \text{IID } 2 \cdot \text{Bern}(\tfrac{1}{2}) - 1$, which gives you the moments:



$$\boldsymbol{\mu} \equiv \mathbb{E}(\boldsymbol{v}_k) =
\begin{bmatrix}
0 \\
0 \\
\vdots \\
0 \\
0 \\
\end{bmatrix} = \boldsymbol{0} \quad \quad \quad
\boldsymbol{\Sigma}_k \equiv \mathbb{V}(\boldsymbol{v}_k)
= \mathbb{E}(\boldsymbol{v}_k \boldsymbol{v}_k^\text{T}) =
\begin{bmatrix}
1 & 0 & \cdots & 0 & 0 \\
0 & 1 & \cdots & 0 & 0 \\
\vdots & \vdots & \ddots & \vdots & \vdots \\
0 & 0 & \cdots & 1 & 0 \\
0 & 0 & \cdots & 0 & 1 \\
\end{bmatrix} = \boldsymbol{I}.$$
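An element-wise check makes the algebraic proof explicit; this is a short sketch under the equal-probability model above, with $v_{k,i}$ denoting the $i$th element of $\boldsymbol{v}_k$:

$$\mathbb{E}(v_{k,i}) = \tfrac{1}{2}(-1) + \tfrac{1}{2}(+1) = 0, \qquad
\mathbb{E}(v_{k,i}^2) = \tfrac{1}{2}(-1)^2 + \tfrac{1}{2}(+1)^2 = 1,$$

and for $i \neq j$, independence gives $\mathbb{E}(v_{k,i} v_{k,j}) = \mathbb{E}(v_{k,i})\,\mathbb{E}(v_{k,j}) = 0$, so the diagonal entries of $\boldsymbol{\Sigma}_k$ are $1$ and the off-diagonal entries are $0$.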






answered Jul 19 at 2:35 by Ben






















                     
