Determine all real $x$ that satisfy $\det A = 0$ [duplicate]
This question already has an answer here:
Determinant of a specially structured matrix ($a$'s on the diagonal, all other entries equal to $b$)
8 answers
I want to find all real $x$ that satisfy
$$
\det X=
\begin{vmatrix}
x & 2 & 2 & 2\\
2 & x & 2 & 2\\
2 & 2 & x & 2\\
2 & 2 & 2 & x
\end{vmatrix}
= 0.
$$
My teacher does this by adding the three bottom rows to the top row
$$
\det X=
\begin{vmatrix}
x+6 & x+6 & x+6 & x+6\\
2 & x & 2 & 2\\
2 & 2 & x & 2\\
2 & 2 & 2 & x
\end{vmatrix}
$$
and then subtracting a row of $2$'s from the bottom three rows
$$
\det X=
(x+6)
\begin{vmatrix}
1 & 1 & 1 & 1\\
0 & x-2 & 0 & 0\\
0 & 0 & x-2 & 0\\
0 & 0 & 0 & x-2
\end{vmatrix}.
$$
The answer is
$$
x\in\{-6,2\}.
$$
I think I understand the operations themselves (subtracting an arbitrary row of numbers from a matrix/determinant row is something I've never seen before, but I don't see why it wouldn't be allowed; just like you can subtract arbitrary terms from both sides of an equation, right?). My main issue is why they are performed.
- Why can't I, in the same way, just subtract a row of $2$'s from the three bottom rows of the first determinant? If I do that, I get a different answer.
- I know I want a column of all zeroes except for one element, but why do I need to perform the first operation beforehand? Is it somehow necessary that all the top-row elements be the same, $(x+6)$?
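As a sanity check, the intended result $\det X = (x+6)(x-2)^3$ can be compared against a brute-force determinant. A minimal Python sketch (the helper names `det` and `X` below are my own, not from the post):

```python
from itertools import permutations

def det(m):
    """Determinant via the Leibniz permutation formula (fine for a 4x4)."""
    n = len(m)
    total = 0
    for perm in permutations(range(n)):
        # sign of the permutation = (-1)^(number of inversions)
        inv = sum(perm[i] > perm[j] for i in range(n) for j in range(i + 1, n))
        prod = 1
        for row, col in enumerate(perm):
            prod *= m[row][col]
        total += (-1) ** inv * prod
    return total

def X(x):
    """The matrix from the question: x on the diagonal, 2 elsewhere."""
    return [[x if i == j else 2 for j in range(4)] for i in range(4)]

# det X should equal (x + 6)(x - 2)^3 for every x
for x in range(-10, 11):
    assert det(X(x)) == (x + 6) * (x - 2) ** 3
```

The assertion holding on many sample values (a degree-4 polynomial identity needs only 5 points) is consistent with the zeros being exactly $-6$ and $2$.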
linear-algebra matrices determinant
marked as duplicate by Rodrigo de Azevedo, Mostafa Ayaz, José Carlos Santos
Jul 25 at 18:17
This question has been asked before and already has an answer. If those answers do not fully address your question, please ask a new question.
"then subtracting a row of $2$'s from the bottom three rows"

That's misstated. What actually happens is that you pull out the factor $(x+6)$ first, then subtract the first row (which is now all $1$s) multiplied by $2$ from the other rows. – dxiv, Jul 24 at 17:36

Huh, then maybe that's it. But shouldn't that give me $2-2(x+6)$ as the elements that are supposed to become $0$? Edit: Saw your edit about the factorization. Ok, I'll have to think about that for a bit. – Chisq, Jul 24 at 17:38
asked Jul 24 at 17:33 by Chisq
3 Answers
I think you should take a look at Gaussian elimination.
You can only perform certain manipulations on a matrix without changing its determinant (or changing it only by a sign or a scalar factor). These manipulations are as follows:
- Swapping two rows multiplies the determinant by $-1$
- Multiplying a row by a nonzero scalar $\lambda$ multiplies the determinant by the same scalar
- Adding or subtracting a multiple of one row to another leaves the determinant unchanged
The algorithm for Gaussian elimination goes something like this:
- If the first element of the first row is zero, swap the first row with some row whose first element is nonzero; if there is none, go to step 3
- For every row $A_i$ other than the first ($i > 1$), multiply the first row by a coefficient chosen so that the sum of that multiple and $A_i$ has first element zero, and substitute the $i$-th row with the sum just calculated
- Now every element of the first column, except for the first one, is zero. Move on to the submatrix obtained by deleting the first row and column, and repeat these steps on every submatrix.
At the end of this you should have a triangular or diagonal matrix, whose determinant is the product of its diagonal entries (adjusted by the sign flips and scalar factors tracked along the way).
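The row-operation rules above can be turned into a small determinant routine. A sketch assuming exact rational arithmetic via Python's `fractions.Fraction` (the function name `det_gauss` is mine):

```python
from fractions import Fraction

def det_gauss(rows):
    """Determinant by Gaussian elimination: track the sign on row swaps;
    adding multiples of one row to another leaves the determinant unchanged."""
    a = [[Fraction(v) for v in row] for row in rows]
    n = len(a)
    sign = 1
    for col in range(n):
        # find a pivot row with a nonzero entry in this column
        pivot = next((r for r in range(col, n) if a[r][col] != 0), None)
        if pivot is None:
            return Fraction(0)  # an all-zero column means determinant 0
        if pivot != col:
            a[col], a[pivot] = a[pivot], a[col]
            sign = -sign  # each swap flips the sign
        for r in range(col + 1, n):
            factor = a[r][col] / a[col][col]
            a[r] = [a[r][j] - factor * a[col][j] for j in range(n)]
    result = Fraction(sign)
    for i in range(n):
        result *= a[i][i]  # triangular matrix: product of the diagonal
    return result

# The question's matrix at x = 3: (3 + 6) * (3 - 2)**3 == 9
assert det_gauss([[3, 2, 2, 2], [2, 3, 2, 2], [2, 2, 3, 2], [2, 2, 2, 3]]) == 9
```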
Another way: the determinant is a fourth-degree polynomial $p(x)$ in variable $x$.
It is easy to see that $p(2) = p(-6) = 0$, like you did.
Then take the derivative:
\begin{align}
p'(x) &= \begin{vmatrix}
1 & 2 & 2 & 2\\
0 & x & 2 & 2\\
0 & 2 & x & 2\\
0 & 2 & 2 & x
\end{vmatrix} + \begin{vmatrix}
x & 0 & 2 & 2\\
2 & 1 & 2 & 2\\
2 & 0 & x & 2\\
2 & 0 & 2 & x
\end{vmatrix} + \begin{vmatrix}
x & 2 & 0 & 2\\
2 & x & 0 & 2\\
2 & 2 & 1 & 2\\
2 & 2 & 0 & x
\end{vmatrix} + \begin{vmatrix}
x & 2 & 2 & 0\\
2 & x & 2 & 0\\
2 & 2 & x & 0\\
2 & 2 & 2 & 1
\end{vmatrix}\\
&= 4\begin{vmatrix}
x & 2 & 2\\
2 & x & 2\\
2 & 2 & x
\end{vmatrix}
\end{align}
Taking derivatives again we get $p''(x) = 4\cdot 3\begin{vmatrix} x & 2\\ 2 & x \end{vmatrix}$, so we can conclude that $p'(2) = p''(2) = 0$.
Therefore $2$ is a root of $p$ with multiplicity $3$, so $$p(x) = (x-2)^3(x+6).$$
Therefore the zeros are indeed only $-6$ and $2$.
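The multiplicity claim can be double-checked by expanding $p(x) = (x-2)^3(x+6) = x^4 - 24x^2 + 64x - 48$ and evaluating derivatives numerically. A small sketch (the helper names `polyval` and `polyder` are mine):

```python
def polyval(coeffs, x):
    """Evaluate a polynomial given coefficients [a_n, ..., a_0] (Horner's rule)."""
    acc = 0
    for c in coeffs:
        acc = acc * x + c
    return acc

def polyder(coeffs):
    """Coefficients of the derivative polynomial."""
    n = len(coeffs) - 1
    return [c * (n - i) for i, c in enumerate(coeffs[:-1])]

# p(x) = (x - 2)^3 (x + 6) expanded: x^4 - 24x^2 + 64x - 48
p = [1, 0, -24, 64, -48]
dp = polyder(p)      # p'
d2p = polyder(dp)    # p''

assert polyval(p, -6) == 0                                   # x = -6 is a root
assert polyval(p, 2) == polyval(dp, 2) == polyval(d2p, 2) == 0
assert polyval(polyder(d2p), 2) != 0   # p'''(2) != 0: multiplicity exactly 3
```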
"Subtracting a row of 2s" is NOT a valid matrix operation. But subtracting one row from another is. Perhaps what your teacher did (or meant to do) is, first subtract the third row from the second to get
$\left|\begin{array}{cccc} x & 2 & 2 & 2 \\ 0 & x-2 & 2-x & 0 \\ 2 & 2 & x & 2 \\ 2 & 2 & 2 & x \end{array}\right|$.
Now, subtract the fourth row from the third to get
$\left|\begin{array}{cccc} x & 2 & 2 & 2 \\ 0 & x-2 & 2-x & 0 \\ 0 & 0 & x-2 & 2-x \\ 2 & 2 & 2 & x \end{array}\right|$.
Finally, you can subtract the first row from the fourth to get
$\left|\begin{array}{cccc} x & 2 & 2 & 2 \\ 0 & x-2 & 2-x & 0 \\ 0 & 0 & x-2 & 2-x \\ 2-x & 0 & 0 & x-2 \end{array}\right|$.
That's almost an "upper triangular" matrix. We can calculate the determinant reasonably easily by "expanding on the first column":
$x\left|\begin{array}{ccc} x-2 & 2-x & 0 \\ 0 & x-2 & 2-x \\ 0 & 0 & x-2 \end{array}\right| - 2\left|\begin{array}{ccc} 2 & 2 & 2 \\ x-2 & 2-x & 0 \\ 0 & x-2 & 2-x \end{array}\right|$.
That first determinant is $x(x-2)^3$. For the second we can further expand the three-by-three determinant on its first column to get $-2\left(2\left|\begin{array}{cc} 2-x & 0 \\ x-2 & 2-x \end{array}\right| - (x-2)\left|\begin{array}{cc} 2 & 2 \\ x-2 & 2-x \end{array}\right|\right) = -4(2-x)^2 - (x-2)(4-2x-2x+4) = -4(2-x)^2 - (x-2)(8-4x) = -4(2-x)^2 + 4(2-x)^2 = 0$.
So the determinant of the given matrix is $x(x-2)^3$.
Why is it not a valid matrix operation? If we ignore the determinant and just focus on the matrix itself, I subtract one row of coefficients of independent variables on the LHS and implicitly do the same on the RHS by subtracting the $y$. So if I subtract $2$ from all the independent variable coefficients and do the same on the right-hand side (explicitly subtracting the independent variable coefficients instead of implicitly, since I wouldn't know what coefficient to put in front of the $y$), why wouldn't that be allowed? – Chisq, Jul 24 at 18:30

I don't know what you mean by "subtract 2 of all the independent variable coefficients". My comment was related to subtracting the number $2$ from a row or column. For example, if the original matrix is $\begin{bmatrix} 2 & 2 \\ 1 & 3 \end{bmatrix}$ then the determinant is $2\cdot 3 - 2\cdot 1 = 4$. But subtracting $2$ from the first row gives $\det\begin{bmatrix} 0 & 0 \\ 1 & 3 \end{bmatrix} = 0$. – user247327, Jul 24 at 18:36

@Chisq I wouldn't necessarily call it invalid, but in this context it's useless. What relationship is there between the determinant of an arbitrary $2\times 2$ matrix $A$ and that of $A-\begin{bmatrix} 0 & 0 \\ 2 & 2 \end{bmatrix}$? You need to limit yourself to operations that have known effects on the determinant. – amd, Jul 24 at 18:49

@user247327 I mean, if we write a 3x3 matrix as a system of equations instead, then the numbers in the matrix become the coefficients of the LHS variables (say $x_1$, $x_2$, $x_3$, the "independent" ones). Then I could subtract, for example, $2x_2$ from both the LHS and RHS, without concerning myself with whether there is a row that only contains $x_2$. But I think I figured out why the operation on a matrix wouldn't be allowed: if we write out the whole matrix equation $Ax=Iy$, there are only $y$'s on the RHS. Dunno if I'm confusing you, but it's unimportant anyway. Thanks though. – Chisq, Jul 24 at 18:49

@amd Yea, I see your point. Thanks. – Chisq, Jul 24 at 18:50
answered Jul 24 at 17:53 by Davide Morgante
answered Jul 24 at 18:09 by mechanodroid
up vote
-1
down vote
"Subtracting a row of 2s" is NOT a valid matrix operation. But subtracting one row from another is. Perhaps what your teacher did (or meant to do) is, first subtract the third row from the second to get
$left|beginarrayccc x & 2 & 2 & 2 \ 0 & x- 2 & 2- x & 0 \ 2 & 2 & x & 2 \ 2 & 2 & 2 & x endarrayright|$.
Now, subtract the fourth row from the third to get
$left|beginarrayccc x & 2 & 2 & 2 \ 0 & x- 2 & 2- x & 0 \ 0 & 0 & x- 2 & 2-x \ 2 & 2 & 2 & x endarrayright|$.
Finally, you can subtract the first row from the fourth to get
$left|beginarrayccc x & 2 & 2 & 2 \ 0 & x- 2 & 2- x & 0 \ 0 & 0 & x- 2 & 2-x \ 2- x & 0 & 0 & x- 2 endarrayright|$.
That's almost an "upper triangular" matrix. We can calculate the determinant reasonably easily by "expanding on the first column:
$xleft|beginarraycc x- 2 & 2- x & 0 \ 0 & x- 2 & 2- x \ 0 & 0 & x- 2 endarrayright|$$- 2left|beginarraycc 2 & 2 & 2 \ x- 2 & 2- x & 0 \ 0 & x- 2 & 2- x endarrayright|$.
That first determinant is $x(x- 2)^3$. For the second we can further expand that three by three determinant on the first column to get $-2left(2left|beginarraycc2- x & 0 \ x- 2 & 2- xendarrayright|- (x- 2)left|beginarraycc2 & 2 \ x- 2 & 2- xendarrayright|right)= -4(2- x)^2- (x- 2)(4- 2x- 2x+ 4)= -4(2- x)^2- (x- 2)(8- 4x)= -4(2- x)^2+ 4(2- x)^2= 0$.
So the determinant of the given matrix is $x(x- 2)^3$.
Why is it not a valid matrix operation? If we ignore the determinant and just focus on the matrix itself, I subtract one row of coefficients of indpendent variables on the LHS and implicitly do the same on the RHS by subtracting the y. So if I subtract 2 of all the independent variable coefficients and do the same on the right hand side (explicitly subtracting the independent variable coefficients instead of implicitly, since I wouldn't know what coefficient to put in front of the y), why wouldn't that be allowed?
– Chisq
Jul 24 at 18:30
I don't know what you mean by "subtract 2 of all the independent variable coefficients". My comment was related to subtracting the number 2 from a row or column. For example, if the original matrix is $beginbmatrix 2 & 2 \ 1 & 3endbmatrix$ then the determinant is 2*3- 2*1= 4. But subtracting 2 from the first row gives $beginbmatrix0 & 0 \ 1 & 2 endbmatrix= 0$.
– user247327
Jul 24 at 18:36
@Chisq I wouldn’t necessarily call it invalid, but in this context it’s useless. What relationship is there between the determinant of an arbitrary $2times2$ matrix $A$ and $A-beginbmatrix0&0\2&2endbmatrix$? You need to limit yourself to operations that have known effects on the determinant.
– amd
Jul 24 at 18:49
@user247327 I mean, if we write a 3x3 matrix as a system of equations instead, then the numbers in the matrix will become the coefficients of the LHS variables (say $x_1$, $x_2$ $x_3$, the "independent" ones) Then I could subtract for example $2x_2$ from both LHS and RHS, without concerning myself about whether there is such a row that only contains $x_2$. But I think I figured out why the operation on a matrix wouldn't be allowed. It's because if we right out the whole matrix equation Ax=Iy, there are only y's on the RHS. Dunno if I'm confusing you but it's unimportant anyway. Thanks though.
– Chisq
Jul 24 at 18:49
@amd Yea I see your point. Thanks.
– Chisq
Jul 24 at 18:50
 |Â
show 1 more comment
up vote
-1
down vote
"Subtracting a row of 2s" is NOT a valid matrix operation. But subtracting one row from another is. Perhaps what your teacher did (or meant to do) is, first subtract the third row from the second to get
$left|beginarrayccc x & 2 & 2 & 2 \ 0 & x- 2 & 2- x & 0 \ 2 & 2 & x & 2 \ 2 & 2 & 2 & x endarrayright|$.
Now, subtract the fourth row from the third to get
$left|beginarrayccc x & 2 & 2 & 2 \ 0 & x- 2 & 2- x & 0 \ 0 & 0 & x- 2 & 2-x \ 2 & 2 & 2 & x endarrayright|$.
Finally, you can subtract the first row from the fourth to get
$left|beginarrayccc x & 2 & 2 & 2 \ 0 & x- 2 & 2- x & 0 \ 0 & 0 & x- 2 & 2-x \ 2- x & 0 & 0 & x- 2 endarrayright|$.
That's almost an "upper triangular" matrix. We can calculate the determinant reasonably easily by "expanding on the first column:
$xleft|beginarraycc x- 2 & 2- x & 0 \ 0 & x- 2 & 2- x \ 0 & 0 & x- 2 endarrayright|$$- 2left|beginarraycc 2 & 2 & 2 \ x- 2 & 2- x & 0 \ 0 & x- 2 & 2- x endarrayright|$.
That first determinant is $x(x- 2)^3$. For the second we can further expand that three by three determinant on the first column to get $-2left(2left|beginarraycc2- x & 0 \ x- 2 & 2- xendarrayright|- (x- 2)left|beginarraycc2 & 2 \ x- 2 & 2- xendarrayright|right)= -4(2- x)^2- (x- 2)(4- 2x- 2x+ 4)= -4(2- x)^2- (x- 2)(8- 4x)= -4(2- x)^2+ 4(2- x)^2= 0$.
So the determinant of the given matrix is $x(x- 2)^3$.
Why is it not a valid matrix operation? If we ignore the determinant and just focus on the matrix itself, I subtract one row of coefficients of indpendent variables on the LHS and implicitly do the same on the RHS by subtracting the y. So if I subtract 2 of all the independent variable coefficients and do the same on the right hand side (explicitly subtracting the independent variable coefficients instead of implicitly, since I wouldn't know what coefficient to put in front of the y), why wouldn't that be allowed?
– Chisq
Jul 24 at 18:30
I don't know what you mean by "subtract 2 of all the independent variable coefficients". My comment was related to subtracting the number 2 from a row or column. For example, if the original matrix is $beginbmatrix 2 & 2 \ 1 & 3endbmatrix$ then the determinant is 2*3- 2*1= 4. But subtracting 2 from the first row gives $beginbmatrix0 & 0 \ 1 & 2 endbmatrix= 0$.
– user247327
Jul 24 at 18:36
@Chisq I wouldn’t necessarily call it invalid, but in this context it’s useless. What relationship is there between the determinant of an arbitrary $2times2$ matrix $A$ and $A-beginbmatrix0&0\2&2endbmatrix$? You need to limit yourself to operations that have known effects on the determinant.
– amd
Jul 24 at 18:49
@user247327 I mean, if we write a 3x3 matrix as a system of equations instead, then the numbers in the matrix will become the coefficients of the LHS variables (say $x_1$, $x_2$ $x_3$, the "independent" ones) Then I could subtract for example $2x_2$ from both LHS and RHS, without concerning myself about whether there is such a row that only contains $x_2$. But I think I figured out why the operation on a matrix wouldn't be allowed. It's because if we right out the whole matrix equation Ax=Iy, there are only y's on the RHS. Dunno if I'm confusing you but it's unimportant anyway. Thanks though.
– Chisq
Jul 24 at 18:49
@amd Yea I see your point. Thanks.
– Chisq
Jul 24 at 18:50
 |Â
show 1 more comment
up vote
-1
down vote
up vote
-1
down vote
"Subtracting a row of 2s" is NOT a valid matrix operation. But subtracting one row from another is. Perhaps what your teacher did (or meant to do) is, first subtract the third row from the second to get
$\left|\begin{array}{cccc} x & 2 & 2 & 2 \\ 0 & x-2 & 2-x & 0 \\ 2 & 2 & x & 2 \\ 2 & 2 & 2 & x \end{array}\right|$.
Now, subtract the fourth row from the third to get
$\left|\begin{array}{cccc} x & 2 & 2 & 2 \\ 0 & x-2 & 2-x & 0 \\ 0 & 0 & x-2 & 2-x \\ 2 & 2 & 2 & x \end{array}\right|$.
Finally, you can subtract the first row from the fourth to get
$\left|\begin{array}{cccc} x & 2 & 2 & 2 \\ 0 & x-2 & 2-x & 0 \\ 0 & 0 & x-2 & 2-x \\ 2-x & 0 & 0 & x-2 \end{array}\right|$.
That's almost an "upper triangular" matrix. We can calculate the determinant reasonably easily by "expanding along the first column", whose only nonzero entries are $x$ in row 1 and $2-x$ in row 4 (the latter with cofactor sign $(-1)^{4+1}=-1$, so its term carries the factor $-(2-x)=x-2$):
$x\left|\begin{array}{ccc} x-2 & 2-x & 0 \\ 0 & x-2 & 2-x \\ 0 & 0 & x-2 \end{array}\right| + (x-2)\left|\begin{array}{ccc} 2 & 2 & 2 \\ x-2 & 2-x & 0 \\ 0 & x-2 & 2-x \end{array}\right|$.
That first determinant is upper triangular, so the first term is $x(x-2)^3$. For the second, expand the three-by-three determinant along its first column: $2\left|\begin{array}{cc} 2-x & 0 \\ x-2 & 2-x \end{array}\right| - (x-2)\left|\begin{array}{cc} 2 & 2 \\ x-2 & 2-x \end{array}\right| = 2(2-x)^2 - (x-2)\bigl(2(2-x) - 2(x-2)\bigr) = 2(2-x)^2 - (x-2)(8-4x) = 2(2-x)^2 + 4(2-x)^2 = 6(x-2)^2$. The second term is therefore $(x-2)\cdot 6(x-2)^2 = 6(x-2)^3$.
So the determinant of the given matrix is $x(x-2)^3 + 6(x-2)^3 = (x+6)(x-2)^3$, which agrees with your teacher's computation and vanishes exactly for $x\in\{-6, 2\}$.
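Whatever chain of row operations one prefers, the result can be sanity-checked numerically. A minimal pure-Python cofactor expansion (editor's sketch; the helper names `det` and `M` are mine, not from the thread), compared against the factorization $(x+6)(x-2)^3$ from the question:

```python
# Sketch (assumes only the matrix from the question): check that
# det equals (x+6)(x-2)^3 at several sample values of x.
def det(m):
    # Recursive Laplace (cofactor) expansion along the first row.
    n = len(m)
    if n == 1:
        return m[0][0]
    total = 0
    for j in range(n):
        minor = [row[:j] + row[j + 1:] for row in m[1:]]
        total += (-1) ** j * m[0][j] * det(minor)
    return total

def M(x):
    # The matrix from the question: x on the diagonal, 2 everywhere else.
    return [[x if i == j else 2 for j in range(4)] for i in range(4)]

for x in (-6, 2, 0, 1, 5, 10):
    assert det(M(x)) == (x + 6) * (x - 2) ** 3
```

The loop passes for every sample value, including the roots $x=-6$ and $x=2$, where the determinant is $0$.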
"Subtracting a row of 2s" is NOT a valid matrix operation. But subtracting one row from another is. Perhaps what your teacher did (or meant to do) is, first subtract the third row from the second to get
$left|beginarrayccc x & 2 & 2 & 2 \ 0 & x- 2 & 2- x & 0 \ 2 & 2 & x & 2 \ 2 & 2 & 2 & x endarrayright|$.
Now, subtract the fourth row from the third to get
$left|beginarrayccc x & 2 & 2 & 2 \ 0 & x- 2 & 2- x & 0 \ 0 & 0 & x- 2 & 2-x \ 2 & 2 & 2 & x endarrayright|$.
Finally, you can subtract the first row from the fourth to get
$left|beginarrayccc x & 2 & 2 & 2 \ 0 & x- 2 & 2- x & 0 \ 0 & 0 & x- 2 & 2-x \ 2- x & 0 & 0 & x- 2 endarrayright|$.
That's almost an "upper triangular" matrix. We can calculate the determinant reasonably easily by "expanding on the first column:
$xleft|beginarraycc x- 2 & 2- x & 0 \ 0 & x- 2 & 2- x \ 0 & 0 & x- 2 endarrayright|$$- 2left|beginarraycc 2 & 2 & 2 \ x- 2 & 2- x & 0 \ 0 & x- 2 & 2- x endarrayright|$.
That first determinant is $x(x- 2)^3$. For the second we can further expand that three by three determinant on the first column to get $-2left(2left|beginarraycc2- x & 0 \ x- 2 & 2- xendarrayright|- (x- 2)left|beginarraycc2 & 2 \ x- 2 & 2- xendarrayright|right)= -4(2- x)^2- (x- 2)(4- 2x- 2x+ 4)= -4(2- x)^2- (x- 2)(8- 4x)= -4(2- x)^2+ 4(2- x)^2= 0$.
So the determinant of the given matrix is $x(x- 2)^3$.
answered Jul 24 at 18:26
user247327
9,650
"then subtracting a row of 2's from the bottom three rows"
That's misstated. What actually happens is that you pull out the factor $(x+6)$ first, then subtract the first row (which is all $1$s now) multiplied by $2$ from the other rows.
– dxiv
Jul 24 at 17:36
Huh, then maybe that's it. But shouldn't that give me $2-2(x+6)$ as the elements that are supposed to become $0$? Edit: saw your edit about the factorization. OK, I'll have to think about that for a bit.
– Chisq
Jul 24 at 17:38
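dxiv's two-step reading of the teacher's computation can be checked at a sample value. A sketch (editor's illustration; the choice $x=5$ and the helper lists are mine):

```python
# Sketch (illustrative, x = 5 chosen arbitrarily): pull out (x+6) from the
# summed first row, then subtract 2 * (row of ones) from the bottom rows.
x = 5
top = [x + 6] * 4                    # row 1 after adding the three bottom rows to it
ones = [t // (x + 6) for t in top]   # factor (x+6) pulled out: row 1 is all 1s
bottom = [[2, x, 2, 2], [2, 2, x, 2], [2, 2, 2, x]]

# Subtract 2 * (row of ones) from each of the bottom three rows:
reduced = [[a - 2 * b for a, b in zip(row, ones)] for row in bottom]

# Each bottom row is now zero except for x - 2 in its diagonal position,
# so the determinant is (x + 6) * 1 * (x - 2)^3, matching the question.
assert reduced == [[0, x - 2, 0, 0],
                   [0, 0, x - 2, 0],
                   [0, 0, 0, x - 2]]
```

After the two steps, the determinant of the triangular-below-row-1 result is the product $(x+6)(x-2)^3$, which is exactly the teacher's factorization with roots $-6$ and $2$.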