What is the second moment for a symmetric set of vectors?
I am new to vector statistics and just wanted to check whether my deduction here is correct.

I have a set of vectors from an $N$-dimensional space
$$
v_k=\begin{bmatrix}
v_{k,1} \\
v_{k,2} \\
\vdots \\
v_{k,N}
\end{bmatrix}
$$
whose elements are either $-1$ or $1$. Suppose this set contains every possible such vector, so there are $2^N$ of them. I know the first moment of these vectors is $\mathbf{0}$ because of symmetry; can I then say that the second moment, the variance-covariance matrix, is equal to the $N \times N$ identity matrix?
$$
\operatorname{cov}_{ij} = \frac{1}{N} \sum_{k=1}^{2^N} \left[ (v_{k,i}-\mu_k)(v_{k,j}-\mu_k) \right]
$$
where $\mu_k$ is the $k$th element of the first-moment vector.

Is there an algebraic proof of this?

linear-algebra statistics proof-verification
edited Jul 18 at 19:38
Michael Hardy
asked Jul 18 at 17:12
Alireza
1 Answer
One problem here is that you have not specified a distribution for the vectors --- you have said what the possible values of the elements are, but you have not specified their probabilities. Without giving a distributional form for the vector, it is not correct for you to say that the first moment (mean) is zero, and it is not possible to derive the variance-covariance matrix. (Also, the mean of a random vector is itself a vector, not a scalar, so your notation is confused.)

Perhaps what you mean when you mention the 'symmetry' here is that you intend for every possible outcome to have equal probability (in which case, you should really specify this explicitly). In this case the elements of the vector would be independent with equal probabilities of the values $-1$ and $1$. This gives the distributional form $v_{k,i} \sim \text{IID } 2 \cdot \text{Bern}(\tfrac{1}{2}) - 1$, which gives you the moments:
$$\boldsymbol{\mu} \equiv \mathbb{E}(\boldsymbol{v}_k) =
\begin{bmatrix}
0 \\
0 \\
\vdots \\
0 \\
0
\end{bmatrix} = \boldsymbol{0}
\quad \quad \quad
\boldsymbol{\Sigma}_k \equiv \mathbb{V}(\boldsymbol{v}_k)
= \mathbb{E}(\boldsymbol{v}_k \boldsymbol{v}_k^\text{T}) =
\begin{bmatrix}
1 & 0 & \cdots & 0 & 0 \\
0 & 1 & \cdots & 0 & 0 \\
\vdots & \vdots & \ddots & \vdots & \vdots \\
0 & 0 & \cdots & 1 & 0 \\
0 & 0 & \cdots & 0 & 1
\end{bmatrix} = \boldsymbol{I}.$$
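As a quick numerical sanity check (not a proof), one can enumerate all $2^N$ sign vectors for a small $N$ and compute the empirical mean and second-moment matrix directly; the choice $N = 4$ below is arbitrary:

```python
import itertools
import numpy as np

N = 4  # small dimension, since enumeration costs 2^N vectors

# All 2^N vectors with entries in {-1, +1}, one per row
V = np.array(list(itertools.product([-1, 1], repeat=N)), dtype=float)

mu = V.mean(axis=0)          # first moment: should be the zero vector
Sigma = (V.T @ V) / len(V)   # average of v_k v_k^T (mu = 0): should be I

assert np.allclose(mu, np.zeros(N))
assert np.allclose(Sigma, np.eye(N))
```

The diagonal entries are $1$ because each element squared is $1$, and the off-diagonal entries vanish because the products $v_{k,i} v_{k,j}$ take the values $-1$ and $+1$ equally often over the full enumeration.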
answered Jul 19 at 2:35
Ben