Confused between Singular Value Decomposition (SVD) and diagonalization of a matrix

I'm studying Principal Component Analysis (PCA) and came across this post, in which it's written that the diagonalization of the covariance matrix ($C$) can be given by $$C = VLV^T$$

But as per "difference between SVD and Diagonalization" and this post, it's clear that the diagonalization of any matrix can be given by:
$$C = VLV^{-1}$$

So why are the definitions of SVD and diagonalization the same here?







  • Is the matrix that you're diagonalizing / decomposing symmetric?
    – Jalapeno Nachos
    Jul 21 at 5:13











  • Yes, the covariance matrix is always symmetric.
    – Kaushal28
    Jul 21 at 5:15











  • Then your singular values are exactly the eigenvalues - try to verify this on your own.
    – Jalapeno Nachos
    Jul 21 at 5:17










  • I can tell you that a diagonalization of any (diagonalizable) matrix can be given by the formula you wrote, with $V$ and $V^{-1}$. As a corollary of the spectral theorem for self-adjoint linear transformations, any symmetric matrix is diagonalizable and, what is more, $V$ can be chosen such that $V^{-1} = V^T$. What I cannot find is where the definition of diagonalization and SVD is in the link you posted (the last one here).
    – Ale.B
    Jul 21 at 5:22










  • Your singular values are the square roots of the eigenvalues.
    – RHowe
    Jul 21 at 5:25
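
The verification suggested in the comments above is quick to do in NumPy. A minimal sketch, assuming a small random data matrix $A$ (all names below are illustrative):

    import numpy as np

    # Random data matrix; C = A A^T plays the role of a covariance matrix.
    rng = np.random.default_rng(0)
    A = rng.standard_normal((5, 3))
    C = A @ A.T                                   # symmetric positive semidefinite

    # Singular values of the symmetric PSD matrix C equal its eigenvalues.
    sv_C = np.linalg.svd(C, compute_uv=False)
    ev_C = np.sort(np.linalg.eigvalsh(C))[::-1]   # descending order
    print(np.allclose(sv_C, ev_C))                # True

    # Singular values of A are the square roots of the nonzero eigenvalues of A A^T.
    sv_A = np.linalg.svd(A, compute_uv=False)
    print(np.allclose(sv_A, np.sqrt(ev_C[:3])))   # True (remaining eigenvalues are ~0)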














1 Answer
The SVD is a generalization of the eigendecomposition. The SVD is the following.

Suppose $A \in \mathbb{C}^{m \times n}$. Then

$$A = U \Sigma V^T$$

where $U$ and $V$ are orthogonal matrices and $\Sigma$ is a diagonal matrix of singular values. The connection comes when forming the covariance matrix:

$$AA^T = (U \Sigma V^T)(U \Sigma V^T)^T$$
$$AA^T = (U \Sigma V^T)(V \Sigma^T U^T)$$
$$AA^T = U \Sigma V^T V \Sigma^T U^T$$

Now $VV^T = V^TV = I$, so
$$AA^T = U \Sigma \Sigma^T U^T.$$
Also $\Sigma^T = \Sigma$, so
$$AA^T = U \Sigma^2 U^T.$$
Writing $\Sigma^2 = \Lambda$,
$$AA^T = U \Lambda U^T.$$
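
This identity is easy to check numerically. A minimal NumPy sketch, assuming a random $A$ (the variable names are illustrative, not taken from the linked post):

    import numpy as np

    # Random A; any real matrix works for this check.
    rng = np.random.default_rng(1)
    A = rng.standard_normal((4, 6))
    U, s, Vt = np.linalg.svd(A, full_matrices=False)

    # A A^T rebuilt from the SVD factors: U diag(s^2) U^T
    print(np.allclose(A @ A.T, U @ np.diag(s**2) @ U.T))   # True

    # Columns of U are eigenvectors of A A^T with eigenvalues s^2 (i.e. Lambda = Sigma^2)
    print(np.allclose((A @ A.T) @ U, U * s**2))            # True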



The actual way you compute the SVD is pretty similar to the eigendecomposition.



With respect to PCA, the answer you linked tells you specifically to take the covariance matrix and normalize it (centering). Then, I believe, it takes only the left singular vectors and singular values while truncating.



A truncated SVD is like this.



$$A_k = U_k \Sigma_k V_k^T$$



This means the following:
$$A_k = \sum_{i=1}^{k} \sigma_i u_i v_i^T$$
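
As a quick sketch, the matrix form and the rank-one sum above agree; assuming a random $A$ and an arbitrary $k$ (both illustrative):

    import numpy as np

    rng = np.random.default_rng(2)
    A = rng.standard_normal((6, 4))
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    k = 2

    # Matrix form: U_k Sigma_k V_k^T
    A_k = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

    # Sum of rank-one terms: sum_i sigma_i u_i v_i^T
    A_k_sum = sum(s[i] * np.outer(U[:, i], Vt[i, :]) for i in range(k))
    print(np.allclose(A_k, A_k_sum))   # True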



So, as you read there, they aren't the same; the post uses the SVD because it is simpler. The last part states that the product

$$U_k \Sigma_k$$

gives us a reduction in dimensionality which contains the first $k$ principal components; we then multiply by the principal axes:

$$X_k = U_k \Sigma_k V_k^T$$



This is commonly referred to as a truncated SVD.
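
A minimal sketch of this PCA connection, assuming random data $X$ (names and sizes are illustrative): the scores $U_k \Sigma_k$ from the SVD of the centered data coincide, up to sign, with projecting the centered data onto the top-$k$ eigenvectors of the covariance matrix (the principal axes):

    import numpy as np

    rng = np.random.default_rng(3)
    X = rng.standard_normal((100, 5))
    Xc = X - X.mean(axis=0)                      # centering

    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    k = 2
    scores_svd = U[:, :k] * s[:k]                # U_k Sigma_k: first k principal components

    # Same scores from the covariance eigenvectors (principal axes), up to sign.
    C = Xc.T @ Xc / (X.shape[0] - 1)
    eigvals, eigvecs = np.linalg.eigh(C)         # ascending eigenvalues
    axes = eigvecs[:, ::-1][:, :k]               # top-k eigenvectors
    scores_cov = Xc @ axes

    print(np.allclose(np.abs(scores_svd), np.abs(scores_cov)))   # True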






  • How $V^TV = VV^T = I$?
    – Kaushal28
    Jul 26 at 16:15










  • @Kaushal28 the matrices are orthogonal en.wikipedia.org/wiki/Orthogonal_matrix
    – RHowe
    Jul 26 at 16:16









