Issue with Matrix Multiplication using the Formal Definition
I am writing a formal proof to show that if $B$ is the matrix obtained by interchanging the rows of a $2\times2$ matrix $A$, then $\det(B)=-\det(A)$. My reasoning and proof are coming along nicely, but I hit a bump in the road that highlighted a gap in my knowledge: I guess I do not completely understand the definition of matrix multiplication. Note that I went the rigorous route here only because I wanted to prove to myself that I fully understood matrix multiplication... and I don't. My proof thus far is:
Let $E$ be the elementary matrix obtained by performing a type 1 elementary row operation on $I_2$. By Theorem 3.1 (Friedberg), $B=EA$. Note
$$\det(A) = \det\begin{pmatrix}
a & b \\
c & d
\end{pmatrix} = ad - bc$$
By the definition of matrix multiplication,
\begin{align}
B_{ij} & = (EA)_{ij} \\[10pt]
& = \sum_{k=1}^{2} E_{ik}A_{kj} \quad \text{for } 1\le i\le 2,\ 1\le j\le 2 \\[10pt]
& = E_{i1}A_{1j}+E_{i2}A_{2j} \quad \text{for } 1\le i\le 2,\ 1\le j\le 2 \\[10pt]
& \;\;\vdots \\[10pt]
& = \begin{pmatrix}
c & d \\
a & b
\end{pmatrix}_{ij}
\end{align}
If $B=EA=\begin{pmatrix}
c & d \\
a & b
\end{pmatrix}$, then by the definition of the determinant of a $2\times2$ matrix,
\begin{align}
\det(B) & = \det(EA) = bc-ad \\[10pt]
& = -(ad-bc) \\[10pt]
& = -\det(A)
\end{align}
My issue is: how do I formally express the steps where I put my "$\vdots$"? That is, the row and column vector multiplications and their sum? Maybe I'm wrong, but I feel most sources don't fully explain all the steps of matrix multiplication and just resort to hand-waving. The way I think about it, the column and row vectors I will be multiplying in my proof are actually just $2\times1$ and $1\times2$ matrices, respectively. I know they result in a $2\times2$ matrix, but how? And why?
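For a quick numerical sanity check of the entrywise definition, here is a rough sketch (the numeric values of $a,b,c,d$ are arbitrary placeholders of my own) that compares the explicit sum with NumPy's built-in product:

```python
import numpy as np

# Placeholder numeric values for a, b, c, d -- any numbers would do.
a, b, c, d = 1.0, 2.0, 3.0, 4.0

A = np.array([[a, b],
              [c, d]])
E = np.array([[0.0, 1.0],   # type 1 elementary matrix: I_2 with its rows interchanged
              [1.0, 0.0]])

# Entrywise definition: (EA)_{ij} = sum_k E_{ik} * A_{kj}
# (indices run 0..1 here, versus 1..2 in the written proof)
B = np.zeros((2, 2))
for i in range(2):
    for j in range(2):
        B[i, j] = sum(E[i, k] * A[k, j] for k in range(2))

assert np.allclose(B, E @ A)             # agrees with NumPy's matrix product
assert np.allclose(B, [[c, d], [a, b]])  # the rows of A come out swapped
```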
linear-algebra matrices proof-writing vectors
If you ONLY want to show that interchanging the two rows of a $2\times2$ matrix has the effect of multiplying the determinant by $-1$, then why not just compute $\det\left[\begin{array}{cc} a & b \\ c & d \end{array}\right]$ and $\det\left[\begin{array}{cc} c & d \\ a & b \end{array}\right]$?
– Michael Hardy
Jul 16 at 19:13
Well, first you need to define what the entries of $E$ are; otherwise you cannot simplify $E_{i1}A_{1j}+E_{i2}A_{2j}$ any further.
– Rahul
Jul 16 at 19:21
@Michael: Yes that is a good idea, and that is certainly the most basic route to correctly writing the proof. I just wanted to go a bit more in depth here for my own benefit.
– greycatbird
Jul 16 at 19:29
@Rahul: Thank you for pointing that out - I did forget to define the matrix $E$. I spend so much time with the Friedberg book that I sometimes take that notation for granted.
– greycatbird
Jul 16 at 19:29
asked Jul 16 at 18:55 by greycatbird; edited Jul 16 at 19:14 by Michael Hardy
3 Answers
Accepted answer (score 2) – mechanodroid, Jul 16 at 19:25
If I understood correctly, the matrix $E = \begin{bmatrix}E_{11} & E_{12} \\ E_{21} & E_{22}\end{bmatrix}$ is given by
$$E = \begin{bmatrix}0 & 1 \\ 1 & 0\end{bmatrix}$$
so for $1\le i,j\le 2$ we have
$$(EA)_{ij} = \sum_{k=1}^{2}E_{ik}A_{kj} = E_{i1}A_{1j}+E_{i2}A_{2j}$$
If $i = 1$ then $$(EA)_{1j} = E_{11}A_{1j}+E_{12}A_{2j} = 0 \cdot A_{1j}+1\cdot A_{2j} = A_{2j}$$
If $i = 2$ then $$(EA)_{2j} = E_{21}A_{1j}+E_{22}A_{2j} = 1 \cdot A_{1j}+0\cdot A_{2j} = A_{1j}$$
So $$(EA)_{ij} = \begin{bmatrix}A_{21} & A_{22} \\ A_{11} & A_{12}\end{bmatrix}_{ij} = \begin{bmatrix}c & d \\ a & b\end{bmatrix}_{ij}$$
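If it helps, the same case analysis can be spot-checked symbolically; this is only a sketch using SymPy (my own choice of tool, not part of the answer's argument), with $a,b,c,d$ kept as symbols:

```python
from sympy import Matrix, symbols

a, b, c, d = symbols('a b c d')

A = Matrix([[a, b], [c, d]])
E = Matrix([[0, 1], [1, 0]])             # I_2 with its rows interchanged

B = E * A
assert B == Matrix([[c, d], [a, b]])     # rows of A are swapped
assert (B.det() + A.det()).expand() == 0 # i.e. det(B) = -det(A)
```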
Thank you, and I think this response helped the most. I was having severe troubles with the nuts and bolts of the summation.
– greycatbird
Jul 16 at 19:36
Answer (score 1) – gd1035, Jul 16 at 19:21
Notice that since $E$ is obtained by switching the rows of the identity matrix, the entries of $E$ are all either $0$ or $1$. So for your sum you will have something like
$$B_{ij} = E_{i1} A_{1j} + E_{i2} A_{2j} = 0\,(A_{1j}) + 1\,(A_{2j}) = A_{2j}$$
Since you are only considering $2 \times 2$ matrices, you only need to check a few options. Using this you can determine the elements of the matrix $B$.
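For instance, a tiny loop can spell out those few options; this is just an illustrative sketch of mine, with the entries of $A$ stood in for by placeholder strings:

```python
# Entries of A as strings, purely to show which entry each B_ij picks up.
A = [["a", "b"],
     ["c", "d"]]
E = [[0, 1],
     [1, 0]]   # identity matrix with its rows switched

for i in range(2):
    for j in range(2):
        # B_ij = E_i1*A_1j + E_i2*A_2j; with 0/1 entries only one term survives
        surviving = [A[k][j] for k in range(2) if E[i][k] == 1]
        print(f"B_{i+1}{j+1} =", " + ".join(surviving))
# prints B_11 = c, B_12 = d, B_21 = a, B_22 = b
```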
Answer (score 1) – John Polcari, Jul 16 at 19:23
I don't see a real issue here. A simple way to show your desired result is $B = EA$ with
$$E = \left[ \begin{array}{cc}
0 & 1 \\
1 & 0
\end{array} \right] \quad \Rightarrow \quad \det(E) = -1$$
so that $\det(B) = \det(E)\det(A) = -\det(A)$.
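If one grants $\det(EA)=\det(E)\det(A)$, this special case can be spot-checked symbolically along the following lines (a rough sketch; SymPy and the symbol names are my own choices):

```python
from sympy import Matrix, symbols, simplify

a, b, c, d = symbols('a b c d')

A = Matrix([[a, b], [c, d]])
E = Matrix([[0, 1], [1, 0]])

assert E.det() == -1
# Multiplicativity of det, specialised to this case: det(EA) = det(E) * det(A)
assert simplify((E * A).det() - E.det() * A.det()) == 0
assert simplify((E * A).det() + A.det()) == 0    # det(B) = -det(A)
```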
@John: Thank you for the feedback. This is actually the approach I wanted to take, but in Friedberg the result you used comes well after the basic definitions I used. I wrote this proof as if my professor were grading it, and I know he would not allow theorems from later in the book to be used in earlier sections.
– greycatbird
Jul 16 at 19:25
Then all you really want is the proof that the determinant of the product is the product of the determinants (this is a special case). Why don't you just follow that in general and then specialize it to this particular case?
– John Polcari
Jul 16 at 19:30