Confused between Singular Value Decomposition (SVD) and diagonalization of a matrix
I'm studying Principal Component Analysis (PCA) and came across this post, in which it is written that the diagonalization of the covariance matrix $C$ can be given by $$C = VLV^T$$
But as per the difference between SVD and diagonalization and this post, it is clear that the diagonalization of any (diagonalizable) matrix can be given by
$$C = VLV^{-1}$$
So why are the definitions of SVD and diagonalization the same here?
linear-algebra matrices machine-learning matrix-decomposition svd
asked Jul 21 at 5:07 by Kaushal28, edited Jul 21 at 5:55 by Parcly Taxel
Is the matrix that you're diagonalizing / decomposing symmetric?
– Jalapeno Nachos
Jul 21 at 5:13
Yes, the covariance matrix is always symmetric.
– Kaushal28
Jul 21 at 5:15
Then your singular values are exactly the eigenvalues - try to verify this on your own.
– Jalapeno Nachos
Jul 21 at 5:17
I can tell you that a diagonalization of any matrix can be given by the formula you give with $V$ and $V^{-1}$. As a corollary of the spectral theorem for self-adjoint linear transformations, it is proved that any symmetric matrix is diagonalizable and, moreover, that $V$ can be obtained such that $V^{-1} = V^T$. What I cannot find is where the definitions of diagonalization and SVD are in the link you posted (the last one here).
– Ale.B
Jul 21 at 5:22
Your singular values are the square roots of the eigenvalues.
– RHowe
Jul 21 at 5:25
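Following up on the verification suggested in the comments, here is a minimal NumPy sketch (the data and matrix below are purely illustrative, not from the original post) checking that for a symmetric positive semi-definite covariance matrix the singular values coincide with the eigenvalues, and that the eigenvector matrix satisfies $V^{-1} = V^T$:

```python
import numpy as np

# Illustrative data: build a small symmetric positive semi-definite
# sample covariance matrix from random, centered observations.
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 3))
X -= X.mean(axis=0)                   # center the data
C = X.T @ X / (X.shape[0] - 1)        # 3 x 3 symmetric PSD covariance matrix

# Eigendecomposition C = V L V^T (eigh is for symmetric matrices).
evals, V = np.linalg.eigh(C)

# SVD of the same symmetric matrix.
U, S, Wt = np.linalg.svd(C)

# For a symmetric PSD matrix the singular values are exactly the eigenvalues.
print(np.allclose(np.sort(S), np.sort(evals)))       # True
# The eigenvector matrix is orthogonal, so V^{-1} = V^T and C = V L V^T.
print(np.allclose(C, V @ np.diag(evals) @ V.T))      # True
print(np.allclose(V.T @ V, np.eye(3)))               # True
```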
1 Answer
The SVD is a generalization of the eigendecomposition. The SVD is the following.
Suppose $A \in \mathbb{R}^{m \times n}$. Then
$$A = U \Sigma V^T$$
where $U$ and $V$ are orthogonal matrices and $\Sigma$ is a diagonal matrix of singular values. The connection appears when you form the covariance matrix:
$$AA^T = (U \Sigma V^T)(U \Sigma V^T)^T$$
$$AA^T = (U \Sigma V^T)(V \Sigma^T U^T)$$
$$AA^T = U \Sigma V^T V \Sigma^T U^T$$
Now $VV^T = V^T V = I$, so
$$AA^T = U \Sigma \Sigma^T U^T$$
Also, $\Sigma \Sigma^T$ is diagonal with the squared singular values on its diagonal, so writing $\Sigma \Sigma^T = \Sigma^2$ gives
$$AA^T = U \Sigma^2 U^T$$
Setting $\Sigma^2 = \Lambda$, we have
$$AA^T = U \Lambda U^T$$
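As a quick numerical check of the identity above (a sketch with an arbitrary random matrix, not part of the original answer), NumPy confirms that the eigenvalues of $AA^T$ are the squared singular values of $A$ and that $U$ diagonalizes $AA^T$:

```python
import numpy as np

# Sketch: for any A, the eigenvalues of A A^T are the squared singular
# values of A, and the left singular vectors U diagonalize A A^T.
rng = np.random.default_rng(1)
A = rng.standard_normal((4, 6))       # arbitrary rectangular matrix

U, s, Vt = np.linalg.svd(A)           # A = U diag(s) V^T
evals, W = np.linalg.eigh(A @ A.T)    # A A^T = W diag(evals) W^T

print(np.allclose(np.sort(evals), np.sort(s**2)))       # True: Lambda = Sigma^2
print(np.allclose(A @ A.T, U @ np.diag(s**2) @ U.T))    # True: A A^T = U Sigma^2 U^T
```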
The actual way you compute the SVD is pretty similar to the eigendecomposition.
With respect to PCA, the linked answer tells you to center the data before forming the covariance matrix, and then, I believe, to keep only the left singular vectors and singular values when truncating.
A truncated SVD looks like this:
$$A_k = U_k \Sigma_k V_k^T$$
which means the following:
$$A_k = \sum_{i=1}^{k} \sigma_i u_i v_i^T$$
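Here is a small illustrative sketch (random matrix and hypothetical names) showing that the rank-$k$ truncation equals the sum of the first $k$ rank-one terms:

```python
import numpy as np

# Sketch: A_k = U_k Sigma_k V_k^T equals the sum of the first k rank-one
# terms sigma_i * u_i * v_i^T.
rng = np.random.default_rng(2)
A = rng.standard_normal((8, 5))
k = 2

U, s, Vt = np.linalg.svd(A, full_matrices=False)
A_k = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

A_k_sum = sum(s[i] * np.outer(U[:, i], Vt[i, :]) for i in range(k))
print(np.allclose(A_k, A_k_sum))      # True
```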
So, as you read, they aren't the same; the post uses the SVD because it is simpler to work with. The last part states that the product
$$U_k \Sigma_k$$
gives us a reduction in dimensionality containing the first $k$ principal components; we then multiply by the principal axes:
$$X_k = U_k \Sigma_k V_k^T$$
This is commonly referred to as a truncated SVD.
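And a short sketch of the PCA connection, assuming a hypothetical data matrix $X$ whose rows are observations: after centering, $U_k \Sigma_k$ from the SVD of $X$ gives the first $k$ principal component scores, the same as projecting the centered data onto the principal axes $V_k$:

```python
import numpy as np

# Illustrative PCA-via-SVD sketch (rows of X are observations).
rng = np.random.default_rng(3)
X = rng.standard_normal((200, 5))
Xc = X - X.mean(axis=0)               # center each column

U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
k = 2
scores = U[:, :k] * s[:k]             # U_k Sigma_k: data reduced to k dimensions

# Projecting the centered data onto the principal axes gives the same scores.
print(np.allclose(scores, Xc @ Vt[:k, :].T))                             # True

# The eigenvalues of the covariance matrix are s^2 / (n - 1).
C = Xc.T @ Xc / (Xc.shape[0] - 1)
evals, V = np.linalg.eigh(C)
print(np.allclose(np.sort(evals), np.sort(s**2 / (Xc.shape[0] - 1))))    # True
```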
How $V^TV = VV^T = I$?
– Kaushal28
Jul 26 at 16:15
@Kaushal28 the matrices are orthogonal en.wikipedia.org/wiki/Orthogonal_matrix
– RHowe
Jul 26 at 16:16
answered Jul 21 at 5:19 by RHowe, edited Jul 21 at 6:21 by Chandler Watson