Least squares solution to $A^TA ≃ X$ for non-square $A$ given $X$

I am sure there is a fairly simple solution based on the Moore–Penrose pseudoinverse or the singular value decomposition, but I cannot seem to make any headway on this problem with these methods. My problem is as follows:



How does one determine the least-squares solution for an unknown non-square matrix $A$ given



$$A^TA \simeq X$$



for a known $X \in \mathbb{R}^{n \times n}$, knowing that $A \in \mathbb{R}^{m \times n}$ where $m \neq n$.



The same question has been asked and answered for square $A$, but I need an approximate solution for non-square $A$:



Solve for A from A x transpose(A)



Hand-holding solutions and/or examples are greatly appreciated, but any references or relevant theorems/keywords are welcome as well. Please let me know if I need to elaborate or make my question more precise.



Thanks in advance for any help!



Edit:
Following the comment by user7530 below, what I mean by a least-squares solution is this:
$$\min_{A \in \mathbb{R}^{m \times n}} \|A^TA - X\|_F^2$$



where $\|M\|_F$ is the Frobenius norm.
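For concreteness, here is a minimal NumPy sketch of the objective I have in mind; the matrices are just made-up test data for illustration:

```python
import numpy as np

def objective(A, X):
    """Squared Frobenius norm of the residual A^T A - X."""
    R = A.T @ A - X
    return np.sum(R * R)  # equals np.linalg.norm(R, 'fro')**2

# Hypothetical test data: X is a noisy Gram matrix of some m x n matrix.
rng = np.random.default_rng(0)
m, n = 3, 5
A_true = rng.standard_normal((m, n))
N = rng.standard_normal((n, n))
X = A_true.T @ A_true + 0.01 * (N + N.T) / 2  # keep X symmetric

print(objective(A_true, X))  # small but nonzero because of the noise
```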







asked Jul 18 at 23:32 by Michael New, edited Jul 18 at 23:47

  • What do you mean by "least-squares solution"? You are trying to solve $$\min_{A \in \mathbb{R}^{m \times n}} \|A^TA - X\|_F^2$$ where $\|M\|_F$ is the Frobenius norm?
    – user7530
    Jul 18 at 23:35

  • Yes, exactly. Adding this to the question. Thanks for the response!
    – Michael New
    Jul 18 at 23:36

  • What keeps you from just zero padding the square solution when $m > n$?
    – John Polcari
    Jul 19 at 0:32

1 Answer
Suppose $A \in \mathbb{R}^{m \times n}$. There is a theorem called the Eckart–Young–Mirsky theorem. It is stated in terms of the SVD



$$A = U \Sigma V^T$$



with $U$ and $V$ orthogonal matrices and $\Sigma$ a diagonal matrix of singular values. The following is true:



$$A_k = \sum_{i=1}^{k} \sigma_i u_i v_i^T$$
$$\|A - A_k\|_2 = \Big\|\sum_{i=k+1}^{r} \sigma_i u_i v_i^T\Big\|_2 = \sigma_{k+1},$$

where $r = \operatorname{rank}(A)$.
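As a rough numerical check of that statement (NumPy, arbitrary sizes; not part of the theorem itself):

```python
import numpy as np

# Eckart-Young-Mirsky check: the truncated SVD is the best rank-k
# approximation in the 2-norm, with error sigma_{k+1}.
rng = np.random.default_rng(1)
A = rng.standard_normal((7, 5))
U, s, Vt = np.linalg.svd(A, full_matrices=False)

k = 3
A_k = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]  # sum_{i<=k} sigma_i u_i v_i^T

print(np.linalg.norm(A - A_k, 2))  # spectral norm of the tail
print(s[k])                        # sigma_{k+1}; the two agree
```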



Now consider



$$\|A^TA - X\|_2.$$



The nearest matrix under this norm is the rank-$k$ approximation as above, so



$$\|(U\Sigma V^T)^T(U \Sigma V^T) - (U_k\Sigma_k V_k^T)^T (U_k\Sigma_k V_k^T)\|_2$$
$$\|V\Sigma^T U^T U \Sigma V^T - V_k\Sigma_k^T U_k^T U_k\Sigma_k V_k^T\|_2$$



$$\|V \Sigma^2 V^T - V_k\Sigma_k^2 V_k^T\|_2$$



where



$$V_k \Sigma_k^2 V_k^T = \sum_{i=1}^{k} \sigma_i^2 v_i v_i^T$$



Now the expression above showed



$$\|A - A_k\|_2 = \Big\|\sum_{i=k+1}^{r} \sigma_i u_i v_i^T\Big\|_2 = \sigma_{k+1}$$

$$\|A^TA - X\|_2 = \Big\|\sum_{i=k+1}^{r} \sigma_i^2 v_i v_i^T\Big\|_2 = \sigma_{k+1}^2$$



I believe this is more or less saying that the solution to the least-squares problem is built iteratively from some eigenspace; that is typically how it works. It doesn't explicitly build the normal equations.



More specifically



$$V\Sigma^2 V^T = V \Lambda V^T$$
$$\|A^TA - X\|_2 = \Big\|\sum_{i=k+1}^{r} \lambda_i v_i v_i^T\Big\|_2 = \lambda_{k+1}$$



If you follow this, I matched the eigendecomposition of $X$ with $A^TA$: $A^TA$ is the nearest matrix built from the eigendecomposition of $X$, and each matrix $A$ is built from a singular value decomposition. The way the SVD works, if you read about it, is the following.



$$A^TA = (U \Sigma V^T)^T (U \Sigma V^T)$$
$$A^TA = V \Sigma^T U^T U \Sigma V^T$$
by the properties of transposes and orthogonal matrices,
$$A^TA = V \Sigma^T \Sigma V^T$$
$$A^TA = V \Sigma^2 V^T$$
$$A^TA = V \Lambda V^T$$
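A quick numerical sanity check of this identity (again just a sketch in NumPy):

```python
import numpy as np

# The right singular vectors of A are eigenvectors of A^T A,
# and the squared singular values are its nonzero eigenvalues.
rng = np.random.default_rng(2)
A = rng.standard_normal((4, 6))
_, s, _ = np.linalg.svd(A, full_matrices=False)
lam = np.linalg.eigvalsh(A.T @ A)  # ascending, includes (near-)zeros

print(np.allclose(np.sort(s**2), lam[-len(s):]))  # True
```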



Meaning the matrix $A$ is $U\Sigma V^T$; however, the nearest matrix under the 2-norm is a rank-$k$ approximation, so we truncate it.



So, let's say: technically, I think, this matrix can then be measured against the matrix $X$:



$$A = U_k\Sigma_k V_k^T$$
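For what it's worth, here is a rough NumPy sketch (my own reading of the construction above, not a definitive recipe) that produces an actual $m \times n$ candidate: eigendecompose $X$, keep the top $k = \min(m, n)$ eigenpairs, clip any negative eigenvalues to zero since $X$ need not be an exact Gram matrix, and zero-pad the remaining rows when $m > n$, as John Polcari's comment suggests:

```python
import numpy as np

def factor_gram(X, m):
    """Sketch: an m x n candidate A with A^T A close to symmetric X."""
    n = X.shape[0]
    lam, V = np.linalg.eigh(X)              # eigenvalues in ascending order
    lam, V = lam[::-1], V[:, ::-1]          # reorder to descending
    k = min(m, n)
    root = np.sqrt(np.clip(lam[:k], 0.0, None))  # clip negatives to zero
    A = np.zeros((m, n))                    # zero rows pad the m > n case
    A[:k, :] = root[:, None] * V[:, :k].T   # row i is sqrt(lam_i) * v_i^T
    return A                                # A^T A = V_k Lambda_k V_k^T

rng = np.random.default_rng(3)
m, n = 3, 5
A_true = rng.standard_normal((m, n))
X = A_true.T @ A_true                       # rank <= m, so exactly factorable
A = factor_gram(X, m)
print(np.linalg.norm(A.T @ A - X, 'fro'))   # ~ 0 here
```

Note that any $QA$ with $Q$ an $m \times m$ orthogonal matrix gives the same $A^TA$, so a minimizer is only determined up to an orthogonal factor on the left.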






answered Jul 18 at 23:47 by RHowe, edited Jul 20 at 1:27 (accepted)

  • I don't understand this answer.
    – amsmath
    Jul 18 at 23:50

  • one moment, editing
    – RHowe
    Jul 18 at 23:54

  • So what I am hearing is $\|V\Sigma^2V^T - U_k\Sigma_k V_k^T\|_2$ is the minimum possible value of the norm, where $X$ is decomposed in line with the Eckart–Young–Mirsky theorem such that $X = U_k\Sigma_k V_k^T$, and $A = V\Sigma^2V^T$. Is that correct?
    – Michael New
    Jul 19 at 0:11

  • I didn't edit one part above there. I'm trying to figure out how to express that. It would be better to write it as an eigendecomposition, I think. There is a line where it is still an SVD.
    – RHowe
    Jul 19 at 0:14

  • Geronimo, thanks for the answer. I think I get it after a bit more digging. I do think you missed a few transposes in the lines after "The nearest matrix is rank k approximation...", so if you wanted to edit for the next person to find this, that might be helpful. Also, your final answer gives an $n \times n$ matrix, not an $m \times n$ matrix, but I think I get the idea enough to figure that out myself. Plus 1 for helpfulness (although I need more reputation for it to count), and once I get it I will credit you with the solution.
    – Michael New
    Jul 19 at 15:57