Eigenspaces are in direct sum
Let $V$ be a vector space of finite dimension, and let $E_1,\dots,E_n$ be the eigenspaces associated with the eigenvalues $\lambda_1,\dots,\lambda_n$. I want to prove that $$E_1+\cdots+E_n=E_1\oplus\cdots\oplus E_n.$$
Let $v_1\in E_1,\dots,v_n\in E_n$ be such that $v_1+\cdots+v_n=0$. I have to prove that $v_i=0$ for all $i$. I know that if $v_1\in E_1,\dots,v_n\in E_n$ are nonzero vectors, then they are free. So if I suppose that there is some $v_i\neq 0$ (WLOG $v_1\neq 0$), then $$v_1=-v_2-\cdots-v_n,$$
and thus there is at least one other vector (say $v_2$) that is nonzero. Therefore $v_1=-v_2$, which is a contradiction.
Question 1: Is my proof correct? If not, what's wrong?
Question 2: I find my proof not elegant at all. Is there a more elegant proof?
linear-algebra
asked Aug 6 at 14:08 – Henri
Where did you use that the vectors $v_i$ are eigenvectors for different eigenvalues? What do you mean by "the vectors are free"? – xarles, Aug 6 at 14:11
@xarles: "Free" means linearly independent. I used the fact that the $v_i$ are eigenvectors to get that $(v_1,\dots,v_n)$ is free. – Henri, Aug 6 at 14:13
But that is exactly what you have to prove: that the $v_i$ are linearly independent. The direct sum property is automatic from this fact. – xarles, Aug 6 at 14:16
There is also some linear transformation in the background, because you can't have eigenvalues and eigenspaces sitting around when there is no linear operator to which they belong. Please clarify this. – (username garbled in source), Aug 6 at 14:34
Where is your linear mapping? – xbh, Aug 6 at 14:39
2 Answers
Accepted answer:
Q1: If you know that $(v_j)_{j=1}^{n}$ are independent, then the direct sum decomposition holds automatically, because the expression of $0$ as a sum of vectors from the $E_j$ is then unique, and by definition the sum is a direct sum. If you want to prove the decomposition from scratch, you might use my answer to Q2 below as a reference.
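In symbols, the criterion being used (the standard characterization of an internal direct sum, stated here for reference):
$$
E_1 + \cdots + E_n = E_1 \oplus \cdots \oplus E_n
\iff
\bigl(\, v_1 + \cdots + v_n = 0,\ v_j \in E_j \implies v_1 = \cdots = v_n = 0 \,\bigr).
$$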
Q2: Here is a proof.
We assume that $\lambda_1,\dots,\lambda_n$ are distinct eigenvalues of a linear operator $\mathcal T \in \mathcal L(V)$.
Proof. $\blacktriangleleft$ Suppose $v_j \in E_j$ satisfy
$$
v_1 + v_2 + \cdots + v_n = 0.
$$
By definition, $\mathcal T - \lambda_j \mathcal I$ is the zero mapping on $E_j$. Therefore applying $\mathcal T - \lambda_1 \mathcal I$ to $\sum_{j=1}^{n} v_j = 0$ yields
$$
(\lambda_2 - \lambda_1) v_2 + (\lambda_3 - \lambda_1) v_3 + \cdots + (\lambda_n - \lambda_1) v_n = 0.
$$
Now apply $\mathcal T - \lambda_2 \mathcal I$ to this and obtain
$$
\sum_{j=3}^{n} (\lambda_j - \lambda_2)(\lambda_j - \lambda_1) v_j = 0.
$$
Repeating this, if we apply
$$
(\mathcal T - \lambda_{n-1} \mathcal I)(\mathcal T - \lambda_{n-2} \mathcal I) \cdots (\mathcal T - \lambda_1 \mathcal I)
$$
to
$$
v_1 + v_2 + \cdots + v_n = 0,
$$
then we obtain
$$
\prod_{j=1}^{n-1} (\lambda_n - \lambda_j)\, v_n = 0.
$$
Since all the $\lambda_j$ are distinct, $v_n = 0$.
Similarly, applying
$$
\prod_{j \neq k} (\mathcal T - \lambda_j \mathcal I) \quad [k = 1, 2, \ldots, n-1]
$$
to $v_1 + \cdots + v_n = 0$ yields the analogous expression
$$
\prod_{j \neq k} (\lambda_k - \lambda_j)\, v_k = 0,
$$
hence $v_k = 0$.
In conclusion, $v_j = 0$ for all $j$, as desired. $\blacktriangleright$
answered Aug 6 at 15:04, edited Aug 6 at 15:25 – xbh
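As a numerical sanity check of the annihilation computation above, the following sketch (illustrative only: the diagonal matrix and the eigenvectors are arbitrary choices, not part of the original answer) verifies that applying $\prod_{j \neq k}(\mathcal T - \lambda_j \mathcal I)$ to $v_1 + \cdots + v_n$ isolates a nonzero multiple of $v_k$:

    import numpy as np

    # Arbitrary operator with distinct eigenvalues 1, 2, 5; its eigenspaces
    # are the coordinate axes, so the vectors v_k below are eigenvectors.
    T = np.diag([1.0, 2.0, 5.0])
    lams = [1.0, 2.0, 5.0]
    vs = [np.array([3.0, 0.0, 0.0]),   # v_1 in E_1
          np.array([0.0, -1.0, 0.0]),  # v_2 in E_2
          np.array([0.0, 0.0, 4.0])]   # v_3 in E_3
    s = vs[0] + vs[1] + vs[2]          # v_1 + v_2 + v_3

    for k in range(3):
        # Build P = prod over j != k of (T - lambda_j I).
        P = np.eye(3)
        for j in range(3):
            if j != k:
                P = P @ (T - lams[j] * np.eye(3))
        # P kills every E_j with j != k, so P s = prod_{j != k} (lambda_k - lambda_j) v_k.
        scale = np.prod([lams[k] - lams[j] for j in range(3) if j != k])
        assert np.allclose(P @ s, scale * vs[k])

    print("P s = prod_{j != k} (lambda_k - lambda_j) v_k checked for k = 1, 2, 3")

In particular, the coefficient $\prod_{j \neq k} (\lambda_k - \lambda_j)$ is nonzero precisely because the eigenvalues are distinct, which is what allows the conclusion $v_k = 0$.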
The flaw in your proof is that you assumed for the sake of contradiction that the vectors $v_1, \dots, v_n$ are linearly dependent ($v_1 + \cdots + v_n = 0$), and used it to derive the consequence that the vectors are linearly dependent ($v_1 = -v_2$). So you didn't really prove anything.
When reviewing your own proofs, you can ask yourself where you used each of the hypotheses given. For instance, as other commenters have pointed out, you didn't use at all the fact that the vectors $v_i$ are eigenvectors for some linear transformation. That's a red flag that you're skipping something important.
As xarles says, the proof comes down to the essential fact that eigenvectors of a linear transformation corresponding to distinct eigenvalues are linearly independent. Can you show that?
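For reference, here is a sketch of one standard induction argument for that fact (assuming, as above, an operator $T$ with $T v_i = \lambda_i v_i$, $v_i \neq 0$, and the $\lambda_i$ distinct). Suppose $c_1 v_1 + \cdots + c_n v_n = 0$. Applying $T - \lambda_n I$ kills the last term and gives
$$
\sum_{i=1}^{n-1} c_i (\lambda_i - \lambda_n)\, v_i = 0,
$$
so by the inductive hypothesis $c_i (\lambda_i - \lambda_n) = 0$ for $i < n$; since the eigenvalues are distinct, $c_i = 0$, and then $c_n v_n = 0$ forces $c_n = 0$ because $v_n \neq 0$.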
answered Aug 6 at 14:46 – Matthew Leingang

I don't understand what you mean by "The flaw in your proof...". I give a proof by contradiction, i.e. I suppose the list is linearly independent with $v_1+\cdots+v_n=0$ and conclude that they are dependent... it's a proof by contradiction... – Henri, Aug 6 at 16:02
@Henri: But linear independence is what you want to prove. If you assume it while aiming to prove it, the deduction is not valid. – xbh, Aug 6 at 16:16
@Henri: Vectors $v_1, \dots, v_n$ satisfying $v_1 + \cdots + v_n = 0$ are not linearly independent. – Matthew Leingang, Aug 6 at 17:19