Eigenspaces are in direct sum

Let $V$ be a vector space of finite dimension. Let $E_1,\dots,E_n$ be the eigenspaces associated to the eigenvalues $\lambda_1,\dots,\lambda_n$. I want to prove that $$E_1+\cdots+E_n=E_1\oplus\cdots\oplus E_n.$$



Let $v_1\in E_1,\dots,v_n\in E_n$ be such that $v_1+\cdots+v_n=0$. I have to prove that $v_i=0$ for all $i$. I know that if $v_1\in E_1,\dots,v_n\in E_n$ are nonzero vectors, then they are free. So if I suppose that there is some $v_i\neq 0$ (WLOG $v_1\neq 0$), then $$v_1=-v_2-\cdots-v_n,$$
and thus there is at least one other vector (say $v_2$) that is nonzero. Therefore $v_1=-v_2$, which is a contradiction.



Question 1: Is my proof correct? If not, what's wrong?



Question 2: I find my proof not elegant at all. Is there a more elegant proof?







asked Aug 6 at 14:08 by Henri
  • Where did you use that the vectors $v_i$ are eigenvectors of different eigenvalues? What do you mean by "the vectors are free"?
    – xarles
    Aug 6 at 14:11










  • @xarles: "free" means linearly independent. I used the fact that the $v_i$ are eigenvectors to get that $(v_1,\dots,v_n)$ is free.
    – Henri
    Aug 6 at 14:13






  • But this is exactly what you have to prove: that the $v_i$ are linearly independent. The direct sum property is automatic from this fact.
    – xarles
    Aug 6 at 14:16










  • There is also some linear transformation in the background, because you can't have eigenvalues and eigenspaces sitting around when there is no linear operator to which they belong. Please clarify this.
    – Ð°ÑÑ‚он вілла олоф мэллбэрг
    Aug 6 at 14:34











  • Where is your linear mapping?
    – xbh
    Aug 6 at 14:39














2 Answers






Q1: If you know that $(v_j)_{1}^{n}$ are independent, then the direct sum decomposition holds automatically: the expression of $0$ as a sum of vectors from the $E_j$ is unique, so by definition the sum is direct. If you want to prove the decomposition from square one, you might use the proof below as a reference.

Q2: Here is a proof.

We assume that $\{\lambda_j\}_{1}^{n}$ are distinct eigenvalues of a linear operator $\mathcal T \in \mathcal L(V)$.



Proof. $\blacktriangleleft$ Suppose $v_j \in E_j$ satisfy
$v_1 + v_2 + \cdots + v_n = 0$. By definition, $\mathcal T - \lambda_j \mathcal I$ is the zero mapping on $E_j$. Therefore applying $\mathcal T - \lambda_1 \mathcal I$ to $\sum_{1}^{n} v_j = 0$ yields
$$
(\lambda_2 - \lambda_1) v_2 + (\lambda_3 - \lambda_1) v_3 + \cdots + (\lambda_n - \lambda_1) v_n = 0.
$$
Now apply $\mathcal T - \lambda_2 \mathcal I$ to this and obtain
$$
\sum_{3}^{n} (\lambda_j - \lambda_2)(\lambda_j - \lambda_1) v_j = 0.
$$
Repeating this, if we apply
$$
(\mathcal T - \lambda_{n-1} \mathcal I)(\mathcal T - \lambda_{n-2} \mathcal I) \cdots (\mathcal T - \lambda_1 \mathcal I)
$$
to
$$
v_1 + v_2 + \cdots + v_n = 0,
$$
then we obtain
$$
\prod_{j=1}^{n-1} (\lambda_n - \lambda_j)\, v_n = 0.
$$
Since all the $\lambda_j$ are distinct, $v_n = 0$.



Similarly, applying
$$
\prod_{j \neq k} (\mathcal T - \lambda_j \mathcal I) \quad [k = 1, 2, \ldots, n-1]
$$
to $v_1 + \cdots + v_n = 0$ yields the analogous expression
$$
\prod_{j \neq k} (\lambda_k - \lambda_j)\, v_k = 0,
$$
hence $v_k = 0$.



In conclusion, $v_j = 0$ for all $j$, as desired. $\blacktriangleright$
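(Aside, not part of the original answer: the annihilating-product trick above is easy to sanity-check numerically. A minimal Python/NumPy sketch with a made-up diagonalizable operator and eigenvalues $1, 2, 3$; applying $\prod_{j \neq k}(\mathcal T - \lambda_j \mathcal I)$ to $v_1 + v_2 + v_3$ should isolate $\prod_{j \neq k}(\lambda_k - \lambda_j)\, v_k$.)

    import numpy as np

    # Made-up example: T is diagonalizable with distinct eigenvalues 1, 2, 3.
    lambdas = np.array([1.0, 2.0, 3.0])
    P = np.array([[1.0, 1.0, 0.0],
                  [0.0, 1.0, 1.0],
                  [1.0, 0.0, 1.0]])   # columns are eigenvectors v_1, v_2, v_3
    T = P @ np.diag(lambdas) @ np.linalg.inv(P)
    I = np.eye(3)

    v = P.sum(axis=1)                 # v = v_1 + v_2 + v_3

    # Apply (T - lambda_1 I)(T - lambda_2 I): this kills v_1 and v_2 and leaves
    # (lambda_3 - lambda_1)(lambda_3 - lambda_2) v_3, matching the formula above.
    w = (T - lambdas[0] * I) @ (T - lambdas[1] * I) @ v
    expected = (lambdas[2] - lambdas[0]) * (lambdas[2] - lambdas[1]) * P[:, 2]
    assert np.allclose(w, expected)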






answered Aug 6 at 15:04 by xbh, edited Aug 6 at 15:25 (accepted)













The flaw in your proof is that you assumed for the sake of contradiction that the vectors $v_1, \dots, v_n$ are linearly dependent ($v_1 + \dots + v_n = 0$), and used it to derive the consequence that the vectors are linearly dependent ($v_1 = -v_2$). So you didn't really prove anything.

When reviewing your own proofs, you can ask yourself where you used each of the hypotheses given. For instance, as other commenters have pointed out, you didn't use at all the fact that the vectors $v_i$ are eigenvectors for some linear transformation. That's a red flag that you're skipping something important.

Like xarles says, the proof comes down to the essential fact that eigenvectors of a linear transformation corresponding to distinct eigenvalues are linearly independent. Can you show that?
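(Aside, not part of the original answer: this key fact can also be observed numerically, though of course a numerical check is an illustration rather than a proof. A minimal NumPy sketch with a made-up matrix: the eigenvectors returned for three distinct eigenvalues form a full-rank, hence linearly independent, set.)

    import numpy as np

    # Made-up matrix with distinct eigenvalues 2, 3, 5 (upper triangular).
    A = np.array([[2.0, 1.0, 0.0],
                  [0.0, 3.0, 1.0],
                  [0.0, 0.0, 5.0]])

    eigvals, eigvecs = np.linalg.eig(A)   # columns of eigvecs are eigenvectors

    # Distinct eigenvalues ...
    assert len(np.unique(np.round(eigvals, 8))) == 3
    # ... give independent eigenvectors: the eigenvector matrix has full rank.
    assert np.linalg.matrix_rank(eigvecs) == 3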






answered Aug 6 at 14:46 by Matthew Leingang
    • I don't understand what you mean by "The flaw in your proof...". I make a proof by contradiction, i.e. I suppose the list is linearly independent with $v_1+\cdots+v_n=0$ and conclude that they are dependent... it's a proof by contradiction...
      – Henri
      Aug 6 at 16:02











    • @Henri However, you want to prove the linear independence. If you assume it while aiming to prove it, then this is not a valid deduction.
      – xbh
      Aug 6 at 16:16










    • @Henri vectors $v_1, \dots, v_n$ satisfying $v_1 + \dots + v_n = 0$ are not linearly independent.
      – Matthew Leingang
      Aug 6 at 17:19










