How to construct a 3 by 3 matrix with a given eigenspace in the standard basis

I am having great difficulty wrapping my head around this question; I need help, and hopefully this will help others too.



The question is as follows:
Assume that $F: \Bbb R^3 \rightarrow \Bbb R^3$ is a linear transformation with eigenvalue $1$, whose eigenspace is spanned by $(1,0,0)$ and $(0,1,0)$, and eigenvalue $3$, whose eigenspace is spanned by $(1,0,1)$. Determine the matrix of $F$ in the standard basis.



So I have attempted multiple things. I have watched 3Blue1Brown on YouTube, and he explains eigenvalues, eigenvectors, eigenspaces and change of basis quite well; I thought I could apply this knowledge to this question.



This is how I read the question: we have a 3 by 3 matrix that, let's say, belongs to Bob. Bob's basis vectors (his $(1,0,0)$, $(0,1,0)$, $(0,0,1)$) are $(1,0,0)$, $(0,1,0)$, $(3,0,3)$. This is now our change-of-basis matrix.



To translate between the bases, the formula is $F \vec v = \vec u$: this 'translates' the input vector $\vec v$ into what Bob really meant, written in our language. So as I see it, $F(1,0,0)$ should be $(1,0,0)$, $F(0,1,0) = (0,1,0)$, and finally $F(0,0,1) = (3,0,3)$.



But this is wrong. According to the answer (and their solution, which is not intuitive to me), it should be $F(1,0,0) = (1,0,0)$, $F(0,1,0) = (0,1,0)$ and $F(0,0,1) = F(1,0,1) - F(1,0,0)$. I understand the calculation, but it just isn't intuitive to me.



Why is my solution incorrect? I mean, $F$ is our change-of-basis matrix; why shouldn't I just be able to plug in the vectors and get the vector written in our language?



Also, please explain the textbook answer intuitively, to help me and others.



Thank you very much.



(On a side note, I also played with the inverse. The inverse, as I understand it, plays the transformation in reverse, i.e. it should give us the basis vectors pre-transformation, but it didn't get me anywhere.)







asked Jul 27 at 11:58 by Hoaz
3 Answers






Accepted answer (score 2):
The definition of eigenvectors lets you know how your map $F$ acts on some specific vectors: the eigenvectors. In this manner you know that if you take from $\mathbb{R}^3$ the vectors $(1,0,0)$, $(0,1,0)$ and $(1,0,1)$, they will be mapped to the same vectors (times some scalar) in $\mathbb{R}^3$. Eigenvectors are indeed very good vectors, because here they form a basis for $\mathbb{R}^3$: not the standard basis, but a basis nonetheless. With these notions we can now tackle your problem.

The definition of a $\lambda$-eigenvector is as follows:

$\mathbf{v}\in\mathbb{R}^3$ is a $\lambda$-eigenvector for the linear map $$F:\mathbb{R}^3\rightarrow\mathbb{R}^3$$ iff $$F(\mathbf{v})=\lambda\mathbf{v}$$

If we take the eigenvectors as a basis for both the domain and the codomain of this linear map, the matrix associated with $F$ in that basis is $$D=\begin{pmatrix}1&0&0\\0&1&0\\0&0&3\end{pmatrix}$$ As you can see, it is a diagonal matrix whose entries are the eigenvalues, each repeated according to its geometric multiplicity. You have found a basis in which the matrix associated with $F$ is diagonal; this is called diagonalization. From this matrix you can go back to the matrix in the standard basis by applying a change-of-basis matrix, which in this case is the matrix whose columns are the eigenvectors, ordered to match their respective eigenvalues. So, by definition, $$A=PDP^{-1}$$ where $A$ is the matrix of $F$ in the standard basis and $$P=\begin{pmatrix}1&0&1\\0&1&0\\0&0&1\end{pmatrix}$$ is the matrix of eigenvectors.
So then $$A=\begin{pmatrix}1&0&2\\0&1&0\\0&0&3\end{pmatrix}$$
Here are some links with useful explanations on this topic: Diagonalizable matrix and the spectral theorem.

Edit

Obviously there is a lot to say about this subject; I have tried to be as concise as possible so as not to go off on a tangent. If you have any other questions, any linear algebra textbook will cover them. The topics of diagonalization, change of basis and the spectral theorem are of central importance in many fields, even applied fields such as physics, and it is very important to have a firm idea of what is going on! If I had to recommend one book, it would be "Linear Algebra" by Serge Lang, but there are many other valuable books out there; a simple Google search will turn them up.

answered Jul 27 at 12:16, edited Jul 27 at 12:49 · Davide Morgante
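As a quick sanity check, the product $PDP^{-1}$ can be multiplied out numerically. The sketch below (plain Python, added for illustration; it is not part of the original answer) builds $P$ and $D$ as above and confirms the product reproduces the matrix $A$:

```python
# Verify A = P D P^{-1} for the matrices in the answer.
# P has the eigenvectors as columns; D holds the eigenvalues 1, 1, 3.

def matmul(X, Y):
    """Multiply two 3x3 matrices given as nested lists."""
    return [[sum(X[i][k] * Y[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

P     = [[1, 0, 1], [0, 1, 0], [0, 0, 1]]
P_inv = [[1, 0, -1], [0, 1, 0], [0, 0, 1]]  # P is unit upper triangular, so its inverse is easy
D     = [[1, 0, 0], [0, 1, 0], [0, 0, 3]]

A = matmul(matmul(P, D), P_inv)
print(A)  # [[1, 0, 2], [0, 1, 0], [0, 0, 3]]
```

This matches the matrix $A$ computed in the answer.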






• Ah yes, that makes sense. It's like we convert any vector into Bob's language, apply our transformation, and then convert it back into our language ($A = PDP^{-1}$). Cool, thanks! :)
  – Hoaz, Jul 27 at 16:22

• That's it! I too loved the 3Blue1Brown series on linear algebra. I watched it when I was about to follow my first course in algebra and linear algebra; it was very insightful.
  – Davide Morgante, Jul 27 at 16:24

• Yes, I thought I understood linear algebra until I watched his videos; then I really started to understand. Really intuitive. It's a must-watch if you're doing linear algebra. And thanks again :)
  – Hoaz, Jul 27 at 16:26

• You're welcome!
  – Davide Morgante, Jul 27 at 16:26






























HINT

Recall that

$$D=P^{-1}FP \implies F=PDP^{-1}$$

with

$$P=\begin{bmatrix}1&0&1\\0&1&0\\0&0&1\end{bmatrix}\quad D=\begin{bmatrix}1&0&0\\0&1&0\\0&0&3\end{bmatrix}$$

answered Jul 27 at 12:13 · gimusi



















Here is a very simple solution:

Denoting by $e_1, e_2, e_3$ the canonical basis, you need to express $F(e_1)$, $F(e_2)$ and $F(e_3)$ in the canonical basis.

For $F(e_1)$ and $F(e_2)$, it's in the hypotheses, so the first two columns are
$$\begin{bmatrix}1&0\\0&1\\0&0\end{bmatrix}$$
Now, setting $u=e_1+e_3$, we know that $F(u)=F(e_1)+F(e_3)=3e_1+3e_3$, so $F(e_3)=2e_1+3e_3$. This gives the third column, and the matrix is
$$\begin{bmatrix}1&0&2\\0&1&0\\0&0&3\end{bmatrix}$$
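The column-by-column construction can be checked directly: the sketch below (plain Python, added for illustration) applies the resulting matrix to each given eigenvector and recovers the eigenvalue equations $F(v)=\lambda v$.

```python
# Check the construction: A fixes e1 and e2, and scales (1,0,1) by 3.

def apply(M, v):
    """Apply a 3x3 matrix (nested lists) to a length-3 vector."""
    return [sum(M[i][j] * v[j] for j in range(3)) for i in range(3)]

A = [[1, 0, 2], [0, 1, 0], [0, 0, 3]]

print(apply(A, [1, 0, 0]))  # [1, 0, 0]  -> eigenvalue 1
print(apply(A, [0, 1, 0]))  # [0, 1, 0]  -> eigenvalue 1
print(apply(A, [1, 0, 1]))  # [3, 0, 3]  -> eigenvalue 3
```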




































              3 Answers
              3






              active

              oldest

              votes








              3 Answers
              3






              active

              oldest

              votes









              active

              oldest

              votes






              active

              oldest

              votes








              up vote
              2
              down vote



              accepted










              The definition of eigenvectors let's you know how your map $F$ acts on some specific vectors, the eigenvectors. In this manner you know that if you take from $mathbbR^3$ the vectors $(1,0,0),(0,1,0)$ and $(1,0,1)$ they will be mapped to the same vectors (times some scalar) onto $mathbbR^3$. Eigenvectors, indeed, are very good vectors, because they form a base for $mathbbR^3$, not the standard base but a base nonetheless. With this notions we can now tackle your problem.



              The definition of a $lambda$-eigenvector is as follows:




              $mathbfvinmathbbR^3$ is a $lambda$-eigenvector for the linear map $$F:mathbbR^3rightarrowmathbbR^3$$ iif $$F(mathbfv)=lambdamathbfv$$




              If we take as a base for either domain and image of this linear map the eigenvectors, the associated matrix for $F$ in this base is the following $$D=left(beginmatrix1&0&0\0&1&0\0&0&3endmatrixright)$$ as you can see is a diagonal matrix with elements the eigenvectors taken with their respective geometric multiplicity. You clearly have found a base in that the matrix associated with $F$ is diagonal. This is called diagonalization. From this matrix you can go back to the matrix in standard base applying a matrix of basis transformation which in this case is the matrix that has as columns the eigenvectors associated with their respective eigenvalue. So, by definition, $$A=PDP^-1$$ where $A$ is the matrix of $F$ in standard base and $$P=left(beginmatrix1&0&1\0&1&0\0&0&1endmatrixright)$$ the matrix of the eigenvectors.
              So, then $$A=left(beginmatrix1&0&2\0&1&0\0&0&3endmatrixright)$$
              I give you some links where you can find useful explanations on this topic: Diagonalizable matrix and the spectral theorem



              Edit



              Obviously there is a lot to talk about this subject, I've tried to be more concise as possible auto not go on a tangent. If you have any other questions you can read any linear algebra. The topic of diagonalization, change of basis and the spectral theorem and of central importance in lots of fields, even applied fields as physics, and it's very important to have a firm idea of what's going on! If I should give an advice on what book to read I will say "Linear Algebra" by Serge Lang. But there are so many valuable books out there, just do a simple google search






              share|cite|improve this answer























              • Ah yes. Makes sense. It's like we convert any vector into bob's language, apply our transformation and then convert it back into our language ($A = PDP^-1$) cool thanks! :)
                – Hoaz
                Jul 27 at 16:22











              • That's it! I too loved 3Blue1Brown series on linear algebra, I watched it when I was about too follow my fist course in algebra an linear algebra, it was very insightful
                – Davide Morgante
                Jul 27 at 16:24










              • yes. I thought I understood linear algebra until I watched his videos, then I really started to understand. really intuitive. It's really a must watch if you're doing linear algebra. And thanks again :)
                – Hoaz
                Jul 27 at 16:26










              • You're welcome!
                – Davide Morgante
                Jul 27 at 16:26














              up vote
              2
              down vote



              accepted










              The definition of eigenvectors let's you know how your map $F$ acts on some specific vectors, the eigenvectors. In this manner you know that if you take from $mathbbR^3$ the vectors $(1,0,0),(0,1,0)$ and $(1,0,1)$ they will be mapped to the same vectors (times some scalar) onto $mathbbR^3$. Eigenvectors, indeed, are very good vectors, because they form a base for $mathbbR^3$, not the standard base but a base nonetheless. With this notions we can now tackle your problem.



              The definition of a $lambda$-eigenvector is as follows:




              $mathbfvinmathbbR^3$ is a $lambda$-eigenvector for the linear map $$F:mathbbR^3rightarrowmathbbR^3$$ iif $$F(mathbfv)=lambdamathbfv$$




              If we take as a base for either domain and image of this linear map the eigenvectors, the associated matrix for $F$ in this base is the following $$D=left(beginmatrix1&0&0\0&1&0\0&0&3endmatrixright)$$ as you can see is a diagonal matrix with elements the eigenvectors taken with their respective geometric multiplicity. You clearly have found a base in that the matrix associated with $F$ is diagonal. This is called diagonalization. From this matrix you can go back to the matrix in standard base applying a matrix of basis transformation which in this case is the matrix that has as columns the eigenvectors associated with their respective eigenvalue. So, by definition, $$A=PDP^-1$$ where $A$ is the matrix of $F$ in standard base and $$P=left(beginmatrix1&0&1\0&1&0\0&0&1endmatrixright)$$ the matrix of the eigenvectors.
              So, then $$A=left(beginmatrix1&0&2\0&1&0\0&0&3endmatrixright)$$
              I give you some links where you can find useful explanations on this topic: Diagonalizable matrix and the spectral theorem



              Edit



              Obviously there is a lot to talk about this subject, I've tried to be more concise as possible auto not go on a tangent. If you have any other questions you can read any linear algebra. The topic of diagonalization, change of basis and the spectral theorem and of central importance in lots of fields, even applied fields as physics, and it's very important to have a firm idea of what's going on! If I should give an advice on what book to read I will say "Linear Algebra" by Serge Lang. But there are so many valuable books out there, just do a simple google search






              share|cite|improve this answer























              • Ah yes. Makes sense. It's like we convert any vector into bob's language, apply our transformation and then convert it back into our language ($A = PDP^-1$) cool thanks! :)
                – Hoaz
                Jul 27 at 16:22











              • That's it! I too loved 3Blue1Brown series on linear algebra, I watched it when I was about too follow my fist course in algebra an linear algebra, it was very insightful
                – Davide Morgante
                Jul 27 at 16:24










              • yes. I thought I understood linear algebra until I watched his videos, then I really started to understand. really intuitive. It's really a must watch if you're doing linear algebra. And thanks again :)
                – Hoaz
                Jul 27 at 16:26










              • You're welcome!
                – Davide Morgante
                Jul 27 at 16:26












              up vote
              2
              down vote



              accepted







              up vote
              2
              down vote



              accepted






              The definition of eigenvectors let's you know how your map $F$ acts on some specific vectors, the eigenvectors. In this manner you know that if you take from $mathbbR^3$ the vectors $(1,0,0),(0,1,0)$ and $(1,0,1)$ they will be mapped to the same vectors (times some scalar) onto $mathbbR^3$. Eigenvectors, indeed, are very good vectors, because they form a base for $mathbbR^3$, not the standard base but a base nonetheless. With this notions we can now tackle your problem.



              The definition of a $lambda$-eigenvector is as follows:




              $mathbfvinmathbbR^3$ is a $lambda$-eigenvector for the linear map $$F:mathbbR^3rightarrowmathbbR^3$$ iif $$F(mathbfv)=lambdamathbfv$$




              If we take as a base for either domain and image of this linear map the eigenvectors, the associated matrix for $F$ in this base is the following $$D=left(beginmatrix1&0&0\0&1&0\0&0&3endmatrixright)$$ as you can see is a diagonal matrix with elements the eigenvectors taken with their respective geometric multiplicity. You clearly have found a base in that the matrix associated with $F$ is diagonal. This is called diagonalization. From this matrix you can go back to the matrix in standard base applying a matrix of basis transformation which in this case is the matrix that has as columns the eigenvectors associated with their respective eigenvalue. So, by definition, $$A=PDP^-1$$ where $A$ is the matrix of $F$ in standard base and $$P=left(beginmatrix1&0&1\0&1&0\0&0&1endmatrixright)$$ the matrix of the eigenvectors.
              So, then $$A=left(beginmatrix1&0&2\0&1&0\0&0&3endmatrixright)$$
              I give you some links where you can find useful explanations on this topic: Diagonalizable matrix and the spectral theorem



              Edit



              Obviously there is a lot to talk about this subject, I've tried to be more concise as possible auto not go on a tangent. If you have any other questions you can read any linear algebra. The topic of diagonalization, change of basis and the spectral theorem and of central importance in lots of fields, even applied fields as physics, and it's very important to have a firm idea of what's going on! If I should give an advice on what book to read I will say "Linear Algebra" by Serge Lang. But there are so many valuable books out there, just do a simple google search






              share|cite|improve this answer















              The definition of eigenvectors let's you know how your map $F$ acts on some specific vectors, the eigenvectors. In this manner you know that if you take from $mathbbR^3$ the vectors $(1,0,0),(0,1,0)$ and $(1,0,1)$ they will be mapped to the same vectors (times some scalar) onto $mathbbR^3$. Eigenvectors, indeed, are very good vectors, because they form a base for $mathbbR^3$, not the standard base but a base nonetheless. With this notions we can now tackle your problem.



              The definition of a $lambda$-eigenvector is as follows:




              $mathbfvinmathbbR^3$ is a $lambda$-eigenvector for the linear map $$F:mathbbR^3rightarrowmathbbR^3$$ iif $$F(mathbfv)=lambdamathbfv$$




              If we take as a base for either domain and image of this linear map the eigenvectors, the associated matrix for $F$ in this base is the following $$D=left(beginmatrix1&0&0\0&1&0\0&0&3endmatrixright)$$ as you can see is a diagonal matrix with elements the eigenvectors taken with their respective geometric multiplicity. You clearly have found a base in that the matrix associated with $F$ is diagonal. This is called diagonalization. From this matrix you can go back to the matrix in standard base applying a matrix of basis transformation which in this case is the matrix that has as columns the eigenvectors associated with their respective eigenvalue. So, by definition, $$A=PDP^-1$$ where $A$ is the matrix of $F$ in standard base and $$P=left(beginmatrix1&0&1\0&1&0\0&0&1endmatrixright)$$ the matrix of the eigenvectors.
              So, then $$A=left(beginmatrix1&0&2\0&1&0\0&0&3endmatrixright)$$
              I give you some links where you can find useful explanations on this topic: Diagonalizable matrix and the spectral theorem



              Edit



              Obviously there is a lot to talk about this subject, I've tried to be more concise as possible auto not go on a tangent. If you have any other questions you can read any linear algebra. The topic of diagonalization, change of basis and the spectral theorem and of central importance in lots of fields, even applied fields as physics, and it's very important to have a firm idea of what's going on! If I should give an advice on what book to read I will say "Linear Algebra" by Serge Lang. But there are so many valuable books out there, just do a simple google search







              share|cite|improve this answer















              share|cite|improve this answer



              share|cite|improve this answer








              edited Jul 27 at 12:49


























              answered Jul 27 at 12:16









              Davide Morgante

              1,751220




              1,751220











              • Ah yes. Makes sense. It's like we convert any vector into bob's language, apply our transformation and then convert it back into our language ($A = PDP^-1$) cool thanks! :)
                – Hoaz
                Jul 27 at 16:22











              • That's it! I too loved 3Blue1Brown series on linear algebra, I watched it when I was about too follow my fist course in algebra an linear algebra, it was very insightful
                – Davide Morgante
                Jul 27 at 16:24










              • yes. I thought I understood linear algebra until I watched his videos, then I really started to understand. really intuitive. It's really a must watch if you're doing linear algebra. And thanks again :)
                – Hoaz
                Jul 27 at 16:26










              • You're welcome!
                – Davide Morgante
                Jul 27 at 16:26
















              • Ah yes. Makes sense. It's like we convert any vector into bob's language, apply our transformation and then convert it back into our language ($A = PDP^-1$) cool thanks! :)
                – Hoaz
                Jul 27 at 16:22











              • That's it! I too loved 3Blue1Brown series on linear algebra, I watched it when I was about too follow my fist course in algebra an linear algebra, it was very insightful
                – Davide Morgante
                Jul 27 at 16:24










              • yes. I thought I understood linear algebra until I watched his videos, then I really started to understand. really intuitive. It's really a must watch if you're doing linear algebra. And thanks again :)
                – Hoaz
                Jul 27 at 16:26










              • You're welcome!
                – Davide Morgante
                Jul 27 at 16:26















              Ah yes. Makes sense. It's like we convert any vector into bob's language, apply our transformation and then convert it back into our language ($A = PDP^-1$) cool thanks! :)
              – Hoaz
              Jul 27 at 16:22





              Ah yes. Makes sense. It's like we convert any vector into bob's language, apply our transformation and then convert it back into our language ($A = PDP^-1$) cool thanks! :)
              – Hoaz
              Jul 27 at 16:22













              That's it! I too loved 3Blue1Brown series on linear algebra, I watched it when I was about too follow my fist course in algebra an linear algebra, it was very insightful
              – Davide Morgante
              Jul 27 at 16:24




              That's it! I too loved 3Blue1Brown series on linear algebra, I watched it when I was about too follow my fist course in algebra an linear algebra, it was very insightful
              – Davide Morgante
              Jul 27 at 16:24












              yes. I thought I understood linear algebra until I watched his videos, then I really started to understand. really intuitive. It's really a must watch if you're doing linear algebra. And thanks again :)
              – Hoaz
              Jul 27 at 16:26




              yes. I thought I understood linear algebra until I watched his videos, then I really started to understand. really intuitive. It's really a must watch if you're doing linear algebra. And thanks again :)
              – Hoaz
              Jul 27 at 16:26












              You're welcome!
              – Davide Morgante
              Jul 27 at 16:26




              You're welcome!
              – Davide Morgante
              Jul 27 at 16:26










              up vote
              0
              down vote













              HINT



              Recall that



              $$D=P^-1F P implies F=PFP^-1$$



              with



              $$P=beginbmatrix1&0&1\0&1&0\0&0&1endbmatrixquad D=beginbmatrix1&0&0\0&1&0\0&0&3endbmatrix$$






              share|cite|improve this answer

























                  answered Jul 27 at 12:13









                  gimusi

                  64.9k73583
























                      up vote
                      0
                      down vote













                      Here is a very simple solution:



                      Denoting $e_1, e_2, e_3$ the canonical basis, you need to express $F(e_1), F(e_2)$ and $F(e_3)$ in the canonical basis.



                      For $F(e_1)$ and $F(e_2)$, it's given in the hypotheses (both are eigenvectors with eigenvalue $1$), so the first two columns are
                      $$\begin{bmatrix}1&0\\0&1\\0&0\end{bmatrix}$$
                      Now, setting $u=e_1+e_3$, we know that $F(u)=3u=3e_1+3e_3$. By linearity $F(u)=F(e_1)+F(e_3)=e_1+F(e_3)$, so $F(e_3)=2e_1+3e_3$, and we have the third column of the matrix, which is therefore
                      $$\begin{bmatrix}1&0&2\\0&1&0\\0&0&3\end{bmatrix}$$
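                      To double-check the result, one can verify directly that this matrix fixes $e_1$ and $e_2$ and scales $e_1+e_3$ by $3$; a minimal plain-Python sketch (the `apply` helper is for illustration only):

```python
F = [[1, 0, 2],
     [0, 1, 0],
     [0, 0, 3]]

def apply(M, v):
    """Matrix-vector product for a 3x3 matrix and a length-3 vector."""
    return [sum(M[i][j] * v[j] for j in range(3)) for i in range(3)]

e1, e2 = [1, 0, 0], [0, 1, 0]
print(apply(F, e1))         # [1, 0, 0]  -> eigenvalue 1
print(apply(F, e2))         # [0, 1, 0]  -> eigenvalue 1
print(apply(F, [1, 0, 1]))  # [3, 0, 3]  -> eigenvalue 3, i.e. 3*(e1 + e3)
```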































                          answered Jul 27 at 12:22









                          Bernard

                          110k635102


























                               













































































