A set of $n$ vectors $A_1,\dots,A_n$ in $n$-space is independent iff $d(A_1,\dots,A_n) \ne 0$
























I found a proof of this theorem in the book Multivariate Calculus, Vol. 2, by T. M. Apostol, but there is one assertion in the proof that I can't understand. I paste the proof here; the assertion I can't follow is the one about the matrix $B$:




Theorem 3.6 (p. 83). A set of $n$ vectors $A_1,\dots,A_n$ in $n$-space is independent if and only if $d(A_1,\dots,A_n) \ne 0$.




Proof. (Only one direction.) Assume that $A_1,\dots,A_n$ are independent. Let $V_n$ denote the linear space of $n$-tuples of scalars. Since $A_1,\dots,A_n$ are $n$ independent elements in an $n$-dimensional space, they form a basis for $V_n$. Therefore there is a linear transformation $T:V_n \to V_n$ which maps these $n$ vectors onto the unit coordinate vectors, $$T(A_k)=I_k \quad \text{for } k=1,\dots,n.$$ Therefore there is an $n\times n$ matrix $B$ such that $$A_k B=I_k \quad \text{for } k=1,\dots,n.$$ ... Now the proof continues ...



But I cannot see why such a matrix $B$ exists satisfying that condition.



Please help me clarify the existence of $B$. Thank you.







asked Jul 24 at 4:17 – Indrajit Ghosh



















  • en.wikipedia.org/wiki/Linear_map#Matrices
    – Lorenzo
    Jul 24 at 4:23










  • The existence of $B$ is the inverse $AA^{-1} = A^{-1}A = I_k$
    – RHowe
    Jul 24 at 4:28










  • I know this... Is $B$ the matrix of $T$ here? But I'm actually confused about why $A_k$ is pre-multiplied by $B$ to get $I_k$...
    – Indrajit Ghosh
    Jul 24 at 4:29










  • @Geronimo I don't follow you...
    – Indrajit Ghosh
    Jul 24 at 4:33










  • @Geronimo The notation $I_k$ does not represent the identity matrix here.
    – Dave
    Jul 24 at 5:27














2 Answers



























I've had a look at this chapter of Apostol's book, and I have to agree that, unless I'm missing something, this assertion is poorly justified on the basis of the theory developed to that point.



Apostol states the correspondence between linear mappings and matrices but, as far as I can tell, fails to spell out that the effect of the linear mapping on coordinates is given by matrix multiplication. This fact should have been noted explicitly in the later section that defines matrix multiplication.



There are two essential points to understand to justify this part of the proof of Theorem 3.6.



First, in the statement of Theorem 2.13, if we let $X = (x_1, \dots, x_n)$ and $Y = (y_1, \dots, y_m)$ be the coordinate representations, as column vectors, of $x$ and $y$ in the given bases, then equation $(2.13)$ can be written in the matrix form $Y = CX$, where $C = (t_{ik})$ is the matrix of $T$ relative to the given bases.



If we instead consider $X$ and $Y$ as row vectors, then we must write $Y = X C^t$. Here, $C^t$ denotes the transpose of $C$, a concept defined later on page 91. Since Apostol doesn't develop the properties of transposes, it's best to check the equivalence of this matrix equality with equation $(2.13)$ directly.
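A minimal sketch of that check, assuming equation $(2.13)$ is just the component form $y_i = \sum_{k=1}^n t_{ik}x_k$ of $Y = CX$: treating $X$ and $Y$ as row vectors,
$$(XC^t)_i = \sum_{k=1}^n x_k\,(C^t)_{ki} = \sum_{k=1}^n t_{ik}x_k = y_i,$$
so $Y = XC^t$ encodes exactly the same system of equations.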



Second, according to Theorem 2.13, there is a matrix $C$ associated with the linear mapping $T$, if the standard basis $(I_1,\dots,I_n)$ is selected in each of the two copies of $V_n$.



Then by Theorem 2.13, and remembering that $A_k$ and $I_k$ are considered row vectors in this chapter, we have $I_k = A_k C^t$ for each index $k$. This assertion relies on the fact that in the basis $(I_1,\dots,I_n)$, the coordinates of the vector $A_k$ are in fact the elements of $A_k$.



So we can take $B = C^t$.
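For concreteness, a small worked sketch with made-up vectors (not taken from the book): let $n = 2$, $A_1 = (1,1)$, $A_2 = (0,1)$. Since $I_1 = A_1 - A_2$ and $I_2 = A_2$, linearity gives $T(I_1) = I_1 - I_2 = (1,-1)$ and $T(I_2) = I_2 = (0,1)$, so relative to the standard basis
$$C = \begin{pmatrix} 1 & 0 \\ -1 & 1 \end{pmatrix}, \qquad B = C^t = \begin{pmatrix} 1 & -1 \\ 0 & 1 \end{pmatrix},$$
and indeed $A_1 B = (1,0) = I_1$ and $A_2 B = (0,1) = I_2$. Notice that this $B$ is exactly the inverse of the matrix whose rows are $A_1, A_2$, in line with the comment above about inverses.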






answered Jul 24 at 5:12 – Dave (accepted)























  • Oh... I see... Thank you so much....!!
    – Indrajit Ghosh
    Jul 24 at 5:16










  • I don't have access to the first edition of the book, but I suspect this is the kind of error that might have been introduced in the process of producing a new edition.
    – Dave
    Jul 24 at 5:19






























If you look in your book at Section 2.19:



[screenshot from the book showing the theorem on inverses of square matrices]



There is a theorem there for inverses of square matrices. Now, when you read the proof, the reasoning is somewhat circular; however, the idea actually follows from this.



If
$$ A = U \Lambda U^T, $$
then
$$ \det(A) = \det(U \Lambda U^T) = \det(U)\det(\Lambda)\det(U^T). $$
Now the matrices $U, U^T$ are orthogonal, so $\det(U)\det(U^T) = 1$, and hence
$$ \det(A) = \det(\Lambda) = \prod_{i=1}^{n} \lambda_i. $$



If the rows of the matrix are not linearly independent, then one of the eigenvalues is zero, and so the product of the eigenvalues, hence the determinant, is zero.
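A minimal numerical sketch of this last point (an illustrative matrix, assuming NumPy is available): a matrix with linearly dependent rows has a zero eigenvalue, so both the product of the eigenvalues and the determinant vanish.

```python
import numpy as np

# Rows are linearly dependent: the second row is twice the first.
A = np.array([[1.0, 2.0],
              [2.0, 4.0]])

eigs = np.linalg.eigvals(A)   # eigenvalues are 0 and 5 (in some order)
print(eigs)
print(np.prod(eigs))          # product of eigenvalues: 0
print(np.linalg.det(A))       # determinant: 0 (up to floating-point error)
```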









answered Jul 24 at 5:45 – RHowe























  • How does this relate to the OP's question?
    – copper.hat
    Jul 24 at 14:22











  • It is the proof in the book he is referring to
    – RHowe
    Jul 24 at 14:23










  • He didn't even read the page beforehand.
    – RHowe
    Jul 24 at 14:24










  • How do you know? And how does it relate to the OP's question?
    – copper.hat
    Jul 24 at 14:56










  • Does this help? The proof was in Chapter 2.
    – RHowe
    Jul 24 at 15:06










Your Answer




StackExchange.ifUsing("editor", function ()
return StackExchange.using("mathjaxEditing", function ()
StackExchange.MarkdownEditor.creationCallbacks.add(function (editor, postfix)
StackExchange.mathjaxEditing.prepareWmdForMathJax(editor, postfix, [["$", "$"], ["\\(","\\)"]]);
);
);
, "mathjax-editing");

StackExchange.ready(function()
var channelOptions =
tags: "".split(" "),
id: "69"
;
initTagRenderer("".split(" "), "".split(" "), channelOptions);

StackExchange.using("externalEditor", function()
// Have to fire editor after snippets, if snippets enabled
if (StackExchange.settings.snippets.snippetsEnabled)
StackExchange.using("snippets", function()
createEditor();
);

else
createEditor();

);

function createEditor()
StackExchange.prepareEditor(
heartbeatType: 'answer',
convertImagesToLinks: true,
noModals: false,
showLowRepImageUploadWarning: true,
reputationToPostImages: 10,
bindNavPrevention: true,
postfix: "",
noCode: true, onDemand: true,
discardSelector: ".discard-answer"
,immediatelyShowMarkdownHelp:true
);



);








 

draft saved


draft discarded


















StackExchange.ready(
function ()
StackExchange.openid.initPostLogin('.new-post-login', 'https%3a%2f%2fmath.stackexchange.com%2fquestions%2f2861004%2fa-set-of-n-vectors-a-1-dots-a-n-in-n-space-is-independent-iff-da-1-do%23new-answer', 'question_page');

);

Post as a guest






























2 Answers
2






active

oldest

votes








2 Answers
2






active

oldest

votes









active

oldest

votes






active

oldest

votes








up vote
1
down vote



accepted










I've had a look at this chapter of Apostol's book, and I have to agree that, unless I'm missing something, this assertion is poorly justified on the basis of the theory developed to that point.



Apostol states the correspondence between linear mappings and matrices but, as far as I can tell, fails to spell out that the effect of the linear mapping on coordinates is given by matrix multiplication. This fact should have been noted explicitly in the later section that defines matrix multiplication.



There are two essential points to understand to justify this part of the proof of Theorem 3.6.



First, in the statement of Theorem 2.13, if we let $X = (x_1, dots, x_n)$ and $Y = (y_1, dots, y_m)$ be the coordinate representations, as column vectors, of $x$ and $y$ in the given bases, then equation $(2.13)$ can be written in the matrix form $Y = CX$, where $C = (t_ik)$ is the matrix of $T$ relative to the given bases.



If we instead consider $X$ and $Y$ as row vectors, then we must write $Y = X C^t$. Here, $C^t$ denotes the transpose of $C$, a concept defined later on page 91. Since Apostol doesn't develop the properties of transposes, it's best to check the equivalence of this matrix equality with equation $(2.13)$ directly.



Second, according to Theorem 2.13, there is a matrix C associated with the linear mapping $T$, if the standard basis $(I_1,dots,I_n)$ is selected in each of the two copies of $V_n$.



Then by Theorem 2.13, and remembering that $A_k$ and $I_k$ are considered row vectors in this chapter, we have $I_k = A_k C^t$ for each index $k$. This assertion relies on the fact that in the basis $(I_1,dots,I_n)$, the coordinates of the vector $A_k$ are in fact the elements of $A_k$.



So we can take $B = C^t$.






share|cite|improve this answer























  • Oh... I see... Thank you so much....!!
    – Indrajit Ghosh
    Jul 24 at 5:16










  • I don't have access to the first edition of the book, but I suspect this is the kind of error that might have been introduced in the process of producing a new edition.
    – Dave
    Jul 24 at 5:19














up vote
1
down vote



accepted










I've had a look at this chapter of Apostol's book, and I have to agree that, unless I'm missing something, this assertion is poorly justified on the basis of the theory developed to that point.



Apostol states the correspondence between linear mappings and matrices but, as far as I can tell, fails to spell out that the effect of the linear mapping on coordinates is given by matrix multiplication. This fact should have been noted explicitly in the later section that defines matrix multiplication.



There are two essential points to understand to justify this part of the proof of Theorem 3.6.



First, in the statement of Theorem 2.13, if we let $X = (x_1, dots, x_n)$ and $Y = (y_1, dots, y_m)$ be the coordinate representations, as column vectors, of $x$ and $y$ in the given bases, then equation $(2.13)$ can be written in the matrix form $Y = CX$, where $C = (t_ik)$ is the matrix of $T$ relative to the given bases.



If we instead consider $X$ and $Y$ as row vectors, then we must write $Y = X C^t$. Here, $C^t$ denotes the transpose of $C$, a concept defined later on page 91. Since Apostol doesn't develop the properties of transposes, it's best to check the equivalence of this matrix equality with equation $(2.13)$ directly.



Second, according to Theorem 2.13, there is a matrix C associated with the linear mapping $T$, if the standard basis $(I_1,dots,I_n)$ is selected in each of the two copies of $V_n$.



Then by Theorem 2.13, and remembering that $A_k$ and $I_k$ are considered row vectors in this chapter, we have $I_k = A_k C^t$ for each index $k$. This assertion relies on the fact that in the basis $(I_1,dots,I_n)$, the coordinates of the vector $A_k$ are in fact the elements of $A_k$.



So we can take $B = C^t$.






share|cite|improve this answer























  • Oh... I see... Thank you so much....!!
    – Indrajit Ghosh
    Jul 24 at 5:16










  • I don't have access to the first edition of the book, but I suspect this is the kind of error that might have been introduced in the process of producing a new edition.
    – Dave
    Jul 24 at 5:19












up vote
1
down vote



accepted







up vote
1
down vote



accepted






I've had a look at this chapter of Apostol's book, and I have to agree that, unless I'm missing something, this assertion is poorly justified on the basis of the theory developed to that point.



Apostol states the correspondence between linear mappings and matrices but, as far as I can tell, fails to spell out that the effect of the linear mapping on coordinates is given by matrix multiplication. This fact should have been noted explicitly in the later section that defines matrix multiplication.



There are two essential points to understand to justify this part of the proof of Theorem 3.6.



First, in the statement of Theorem 2.13, if we let $X = (x_1, dots, x_n)$ and $Y = (y_1, dots, y_m)$ be the coordinate representations, as column vectors, of $x$ and $y$ in the given bases, then equation $(2.13)$ can be written in the matrix form $Y = CX$, where $C = (t_ik)$ is the matrix of $T$ relative to the given bases.



If we instead consider $X$ and $Y$ as row vectors, then we must write $Y = X C^t$. Here, $C^t$ denotes the transpose of $C$, a concept defined later on page 91. Since Apostol doesn't develop the properties of transposes, it's best to check the equivalence of this matrix equality with equation $(2.13)$ directly.



Second, according to Theorem 2.13, there is a matrix C associated with the linear mapping $T$, if the standard basis $(I_1,dots,I_n)$ is selected in each of the two copies of $V_n$.



Then by Theorem 2.13, and remembering that $A_k$ and $I_k$ are considered row vectors in this chapter, we have $I_k = A_k C^t$ for each index $k$. This assertion relies on the fact that in the basis $(I_1,dots,I_n)$, the coordinates of the vector $A_k$ are in fact the elements of $A_k$.



So we can take $B = C^t$.






share|cite|improve this answer















I've had a look at this chapter of Apostol's book, and I have to agree that, unless I'm missing something, this assertion is poorly justified on the basis of the theory developed to that point.



Apostol states the correspondence between linear mappings and matrices but, as far as I can tell, fails to spell out that the effect of the linear mapping on coordinates is given by matrix multiplication. This fact should have been noted explicitly in the later section that defines matrix multiplication.



There are two essential points to understand to justify this part of the proof of Theorem 3.6.



First, in the statement of Theorem 2.13, if we let $X = (x_1, dots, x_n)$ and $Y = (y_1, dots, y_m)$ be the coordinate representations, as column vectors, of $x$ and $y$ in the given bases, then equation $(2.13)$ can be written in the matrix form $Y = CX$, where $C = (t_ik)$ is the matrix of $T$ relative to the given bases.



If we instead consider $X$ and $Y$ as row vectors, then we must write $Y = X C^t$. Here, $C^t$ denotes the transpose of $C$, a concept defined later on page 91. Since Apostol doesn't develop the properties of transposes, it's best to check the equivalence of this matrix equality with equation $(2.13)$ directly.



Second, according to Theorem 2.13, there is a matrix C associated with the linear mapping $T$, if the standard basis $(I_1,dots,I_n)$ is selected in each of the two copies of $V_n$.



Then by Theorem 2.13, and remembering that $A_k$ and $I_k$ are considered row vectors in this chapter, we have $I_k = A_k C^t$ for each index $k$. This assertion relies on the fact that in the basis $(I_1,dots,I_n)$, the coordinates of the vector $A_k$ are in fact the elements of $A_k$.



So we can take $B = C^t$.







share|cite|improve this answer















share|cite|improve this answer



share|cite|improve this answer








edited Jul 24 at 5:17


























answered Jul 24 at 5:12









Dave

912




912











  • Oh... I see... Thank you so much....!!
    – Indrajit Ghosh
    Jul 24 at 5:16










  • I don't have access to the first edition of the book, but I suspect this is the kind of error that might have been introduced in the process of producing a new edition.
    – Dave
    Jul 24 at 5:19
















  • Oh... I see... Thank you so much....!!
    – Indrajit Ghosh
    Jul 24 at 5:16










  • I don't have access to the first edition of the book, but I suspect this is the kind of error that might have been introduced in the process of producing a new edition.
    – Dave
    Jul 24 at 5:19















Oh... I see... Thank you so much....!!
– Indrajit Ghosh
Jul 24 at 5:16




Oh... I see... Thank you so much....!!
– Indrajit Ghosh
Jul 24 at 5:16












I don't have access to the first edition of the book, but I suspect this is the kind of error that might have been introduced in the process of producing a new edition.
– Dave
Jul 24 at 5:19




I don't have access to the first edition of the book, but I suspect this is the kind of error that might have been introduced in the process of producing a new edition.
– Dave
Jul 24 at 5:19










up vote
0
down vote













If you look in your book in chapter 2.19



enter image description here



There is a theorem for inverses of square matrices.Now when you read the proof.
The reasoning is somewhat circular however the idea actually follows from this.



If
$$ A = U Lambda U^T $$
then
$$ det(A) = det(U Lambda U^T) = det(U)det(Lambda)det(U^T) $$
now the matrices $U,U^T$ are orthogonal have determinant one
$$det(A) = det(Lambda) = prod_i=1^n lambda_i $$



if the matrix is not linearly independent then one of the eigenvalues is zero. Then the product of the eigenvalues is zero.



enter image description here






share|cite|improve this answer























  • How does this relate to the OP's question?
    – copper.hat
    Jul 24 at 14:22











  • It is the proof in the book he is referring to
    – RHowe
    Jul 24 at 14:23










  • He didnt even read the page before hand.
    – RHowe
    Jul 24 at 14:24










  • How do you know? And how does it relate to the OP's question?
    – copper.hat
    Jul 24 at 14:56










  • does this help...the proof was in chapter 2
    – RHowe
    Jul 24 at 15:06














up vote
0
down vote













If you look in your book in chapter 2.19



enter image description here



There is a theorem for inverses of square matrices.Now when you read the proof.
The reasoning is somewhat circular however the idea actually follows from this.



If
$$ A = U Lambda U^T $$
then
$$ det(A) = det(U Lambda U^T) = det(U)det(Lambda)det(U^T) $$
now the matrices $U,U^T$ are orthogonal have determinant one
$$det(A) = det(Lambda) = prod_i=1^n lambda_i $$



if the matrix is not linearly independent then one of the eigenvalues is zero. Then the product of the eigenvalues is zero.



enter image description here






share|cite|improve this answer























  • How does this relate to the OP's question?
    – copper.hat
    Jul 24 at 14:22











  • It is the proof in the book he is referring to
    – RHowe
    Jul 24 at 14:23










  • He didnt even read the page before hand.
    – RHowe
    Jul 24 at 14:24










  • How do you know? And how does it relate to the OP's question?
    – copper.hat
    Jul 24 at 14:56










  • does this help...the proof was in chapter 2
    – RHowe
    Jul 24 at 15:06












up vote
0
down vote










up vote
0
down vote









If you look in your book in chapter 2.19



enter image description here



There is a theorem for inverses of square matrices.Now when you read the proof.
The reasoning is somewhat circular however the idea actually follows from this.



If
$$ A = U Lambda U^T $$
then
$$ det(A) = det(U Lambda U^T) = det(U)det(Lambda)det(U^T) $$
now the matrices $U,U^T$ are orthogonal have determinant one
$$det(A) = det(Lambda) = prod_i=1^n lambda_i $$



if the matrix is not linearly independent then one of the eigenvalues is zero. Then the product of the eigenvalues is zero.



enter image description here






share|cite|improve this answer















If you look in your book in chapter 2.19



enter image description here



There is a theorem for inverses of square matrices.Now when you read the proof.
The reasoning is somewhat circular however the idea actually follows from this.



If
$$ A = U Lambda U^T $$
then
$$ det(A) = det(U Lambda U^T) = det(U)det(Lambda)det(U^T) $$
now the matrices $U,U^T$ are orthogonal have determinant one
$$det(A) = det(Lambda) = prod_i=1^n lambda_i $$



if the matrix is not linearly independent then one of the eigenvalues is zero. Then the product of the eigenvalues is zero.



enter image description here







share|cite|improve this answer















share|cite|improve this answer



share|cite|improve this answer








edited Jul 24 at 15:11


























answered Jul 24 at 5:45









RHowe

1,010815




1,010815











  • How does this relate to the OP's question?
    – copper.hat
    Jul 24 at 14:22











  • It is the proof in the book he is referring to
    – RHowe
    Jul 24 at 14:23










  • He didnt even read the page before hand.
    – RHowe
    Jul 24 at 14:24










  • How do you know? And how does it relate to the OP's question?
    – copper.hat
    Jul 24 at 14:56










  • does this help...the proof was in chapter 2
    – RHowe
    Jul 24 at 15:06
















  • How does this relate to the OP's question?
    – copper.hat
    Jul 24 at 14:22











  • It is the proof in the book he is referring to
    – RHowe
    Jul 24 at 14:23










  • He didnt even read the page before hand.
    – RHowe
    Jul 24 at 14:24










  • How do you know? And how does it relate to the OP's question?
    – copper.hat
    Jul 24 at 14:56










  • does this help...the proof was in chapter 2
    – RHowe
    Jul 24 at 15:06















How does this relate to the OP's question?
– copper.hat
Jul 24 at 14:22





How does this relate to the OP's question?
– copper.hat
Jul 24 at 14:22













It is the proof in the book he is referring to
– RHowe
Jul 24 at 14:23




It is the proof in the book he is referring to
– RHowe
Jul 24 at 14:23












He didnt even read the page before hand.
– RHowe
Jul 24 at 14:24




He didnt even read the page before hand.
– RHowe
Jul 24 at 14:24












How do you know? And how does it relate to the OP's question?
– copper.hat
Jul 24 at 14:56




How do you know? And how does it relate to the OP's question?
– copper.hat
Jul 24 at 14:56












does this help...the proof was in chapter 2
– RHowe
Jul 24 at 15:06




does this help...the proof was in chapter 2
– RHowe
Jul 24 at 15:06












 

draft saved


draft discarded


























 


draft saved


draft discarded














StackExchange.ready(
function ()
StackExchange.openid.initPostLogin('.new-post-login', 'https%3a%2f%2fmath.stackexchange.com%2fquestions%2f2861004%2fa-set-of-n-vectors-a-1-dots-a-n-in-n-space-is-independent-iff-da-1-do%23new-answer', 'question_page');

);

Post as a guest













































































Comments

Popular posts from this blog

What is the equation of a 3D cone with generalised tilt?

Color the edges and diagonals of a regular polygon

Relationship between determinant of matrix and determinant of adjoint?