Understanding repeated eigenvalues

I am trying to understand the method of finding eigenvectors in the case of repeated eigenvalues. My question is based on section 3.5.2 of this link.
In the first example (Example 3.5.4),



$A = \begin{bmatrix} 2 & 0 \\ 0 & 2 \end{bmatrix}$



Here $\lambda = 2$ (a repeated eigenvalue). If $(A - \lambda I)$ is calculated, it is the $2 \times 2$ zero matrix. So the geometric multiplicity is 2, which means there must be two linearly independent eigenvectors.



1) Am I correct in understanding that these can be any two linearly independent vectors, since $(A - \lambda I)$ is a zero matrix? Or is there a reason for picking $v_1 = (1, 0)$ and $v_2 = (0, 1)$, as shown in the link?
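
(For instance, here is a quick numerical check of what I mean. This is only a sketch of my own using NumPy, not something from the linked text:)

```python
import numpy as np

A = np.array([[2.0, 0.0],
              [0.0, 2.0]])
lam = 2.0

# (A - lam*I) is the zero matrix, so A v = lam v holds for EVERY vector v;
# (1,0) and (0,1) are just the most convenient linearly independent pair.
for v in (np.array([1.0, 0.0]),     # the choice made in the link
          np.array([0.0, 1.0]),
          np.array([3.0, -7.0])):   # an arbitrary vector works equally well
    assert np.allclose(A @ v, lam * v)
```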



Now, consider the second example (Example 3.5.5):



$A = \begin{bmatrix} 5 & 1 \\ -4 & 1 \end{bmatrix}$



In this case, $\lambda = 3$ (a repeated eigenvalue) and



$A - \lambda I = \begin{bmatrix} 2 & 1 \\ -4 & -2 \end{bmatrix}$



Here the geometric multiplicity of $\lambda$ is 1, so there is only one linearly independent eigenvector.



2) What is the idea behind using $(A - \lambda I)v_2 = v_1$ to find the second eigenvector?



3) Is this technique used only when the geometric multiplicity is less than the algebraic multiplicity? Otherwise, do we just use logic to find all the independent eigenvectors, as we did in the first example?



4) Since the matrix is not diagonalizable, is the idea to minimize the error rather than to solve the system of equations? I am trying to understand the need to find a second eigenvector in practical situations.
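
To make questions 2) and 4) concrete, this is the computation I have in mind for the second example, again only a sketch of my own in NumPy (the vectors are only determined up to scaling, and $v_2$ only up to adding multiples of $v_1$):

```python
import numpy as np

A = np.array([[5.0, 1.0],
              [-4.0, 1.0]])
lam = 3.0
B = A - lam * np.eye(2)          # B = [[2, 1], [-4, -2]], a rank-1 matrix

# Ordinary eigenvector: any nonzero solution of B v1 = 0, e.g. v1 = (1, -2).
v1 = np.array([1.0, -2.0])
assert np.allclose(B @ v1, 0.0)

# "Second" (generalized) vector: solve B v2 = v1.  B is singular, but the
# system is consistent, so least squares returns an exact solution.
v2, *_ = np.linalg.lstsq(B, v1, rcond=None)
assert np.allclose(B @ v2, v1)
assert np.allclose(B @ (B @ v2), 0.0)   # i.e. (A - lam*I)^2 v2 = 0
```

So $v_2$ itself is not an eigenvector of $A$; it only becomes one after applying $(A - \lambda I)$ once, which, as I understand it, is the point of the construction.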







asked Jul 30 at 19:55
skr_robo











  • What they are doing is finding the Jordan Normal Form. The point of that is writing some matrix $J = D + N$; in this case $D$ would be $3I$ and $N$ would satisfy $N^2 = 0$, and it is crucial that $DN = ND$. So $e^J$ and $e^{Jt}$ are fairly easy and concrete.
    – Will Jagy
    Jul 30 at 20:01










  • Anyway, as the minimal polynomial is the same as the characteristic polynomial, you take any vector $w$ such that $(A - \lambda I)^2 w = 0$ BUT $(A - \lambda I) w \neq 0$. Then $v = (A - \lambda I) w$ satisfies $(A - \lambda I) v = 0$, so it is a genuine eigenvector.
    – Will Jagy
    Jul 30 at 20:04










  • en.wikipedia.org/wiki/Jordan_normal_form
    – Will Jagy
    Jul 30 at 20:06










  • I believe you are addressing question 2 here. I haven't quite understood the explanation. But I will get back after some more reading.
    – skr_robo
    Jul 30 at 20:11










  • Alright. Given your interest, I recommend getting a fairly applied linear algebra book, one that emphasizes the real and complex numbers. I have answered a dozen Jordan form questions on this site; when I get back from grocery shopping I will figure out some that you can read profitably. You can also search for questions on Jordan form yourself.
    – Will Jagy
    Jul 30 at 20:18
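
Following up on the comments above, here is my attempt to spell the Jordan-form remark out for this particular matrix (my own expansion, so corrections are welcome). Since the characteristic polynomial is $(\lambda - 3)^2$, Cayley–Hamilton gives $(A - 3I)^2 = 0$, so $A = 3I + N$ with $N = A - 3I$ and $N^2 = 0$. Because $3I$ commutes with $N$,
$$
e^{At} = e^{3It}\,e^{Nt} = e^{3t}\,(I + Nt) = e^{3t}\begin{bmatrix} 1 + 2t & t \\ -4t & 1 - 2t \end{bmatrix},
$$
and the recipe in the second comment works with any $w$ that $N$ does not annihilate: for example $w = (1, 0)^T$ gives the genuine eigenvector $v = Nw = (2, -4)^T$.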















