Square matrices and elementary matrices theorem. Linear algebra.
I'm reading this text:
and the theorem it's referencing is here:
I don't understand this part:
From Theorem 2.11 you know that the system of linear equations represented by $Ax = O$ has only the trivial solution. But this implies that the augmented matrix $[A \mid O]$ can be rewritten in the form $[I \mid O]$ (using elementary row operations corresponding to $E_1$, $E_2$, $\dots$, and $E_k$). So $E_k \cdots E_3 E_2 E_1 A = I$ and it follows that $A = E_1^{-1} E_2^{-1} E_3^{-1} \cdots E_k^{-1}$. $A$ can be written as the product of elementary matrices.
I don't understand any of that. How can the augmented matrix (this just means it's a matrix that includes the constants and the coefficients, right?) $[A \mid 0]$ be rewritten in the form $[I \mid 0]$ using those row operations?
linear-algebra
You have to look back at the way you use row operations on the augmented matrix to implement Theorem 2.11 and understand how those row operations correspond to elementary matrices. So the key to "understanding all of that" is really to understand the proof of Theorem 2.11.
– Ethan Bolker
Jul 27 at 17:41
Lol, I thought I did understand it. If $A$ is invertible, then the system of linear equations $Ax = b$ has a unique solution that can be determined by that formula. When $b = 0$, the solution is trivial.
– Jwan622
Jul 27 at 17:55
1 Answer
augmented matrix (this just means it's a matrix that includes the constants and the coefficients right?)
A shorthand notation for $A x = b$ is the augmented matrix $[A \mid b]$.
In the above case $A x = 0$ corresponds to $[A \mid 0]$, and $[I \mid 0]$ corresponds to $I x = 0$, i.e. $x = 0$.
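For example (a small illustration, not part of the original answer), the homogeneous system $x + 2y = 0$, $3x + 4y = 0$ has coefficient matrix $A = \begin{pmatrix} 1 & 2 \\ 3 & 4 \end{pmatrix}$ and right-hand side $b = 0$, so its augmented matrix is
$$
[A \mid 0] = \left[\begin{array}{cc|c} 1 & 2 & 0 \\ 3 & 4 & 0 \end{array}\right].
$$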
How can the augmented matrix (..) $[A|0]$ be rewritten in the form $[I|0]$ using those row operations?
For invertible $A$ you can use Gauss elimination to turn $A$ into $I$.
Each elimination operation (multiply a row by a nonzero scalar, exchange two rows, add a multiple of one row to another row) can be expressed as multiplication by a matrix $E_i$ from the left.
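For concreteness (this illustration is not from the original answer), the three types of elementary matrices look like this in the $2 \times 2$ case:
$$
\underbrace{\begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}}_{\text{swap } R_1 \leftrightarrow R_2}, \qquad
\underbrace{\begin{pmatrix} 1 & 0 \\ 0 & c \end{pmatrix}}_{R_2 \to c R_2,\ c \neq 0}, \qquad
\underbrace{\begin{pmatrix} 1 & 0 \\ m & 1 \end{pmatrix}}_{R_2 \to R_2 + m R_1}.
$$
Multiplying $A$ from the left by one of these matrices performs exactly the corresponding row operation on $A$.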
So
$$
E_k \dotsb E_1 A = I
$$
is a way to write down a successful Gauss elimination in $k$ steps that turns $A$ into $I$.
Further each operation and thus each $E_i$ is invertible. So we have
$$
\begin{align}
E_k E_{k-1} \dotsb E_1 A &= I \iff \\
E_k^{-1} E_k E_{k-1} \dotsb E_1 A = E_{k-1} \dotsb E_1 A &= E_k^{-1} \iff \\
E_{k-1}^{-1} E_{k-1} \dotsb E_1 A = E_{k-2} \dotsb E_1 A &= E_{k-1}^{-1} E_k^{-1} \iff \\
&\;\;\vdots \\
A &= E_1^{-1} \dotsb E_k^{-1}
\end{align}
$$
We multiplied both sides of the initial equation by $E_k^{-1}$ from the left, then by $E_{k-1}^{-1}$ from the left, and so on until we multiplied both sides by $E_1^{-1}$ from the left. This peels $A$ free.
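As a concrete illustration (an example added here, not taken from the text), take $A = \begin{pmatrix} 1 & 2 \\ 0 & 3 \end{pmatrix}$. Scaling the second row by $\tfrac13$ and then subtracting twice the second row from the first gives
$$
\underbrace{\begin{pmatrix} 1 & -2 \\ 0 & 1 \end{pmatrix}}_{E_2}
\underbrace{\begin{pmatrix} 1 & 0 \\ 0 & \tfrac13 \end{pmatrix}}_{E_1}
\begin{pmatrix} 1 & 2 \\ 0 & 3 \end{pmatrix} = I,
\qquad\text{so}\qquad
A = E_1^{-1} E_2^{-1} =
\begin{pmatrix} 1 & 0 \\ 0 & 3 \end{pmatrix}
\begin{pmatrix} 1 & 2 \\ 0 & 1 \end{pmatrix}
= \begin{pmatrix} 1 & 2 \\ 0 & 3 \end{pmatrix},
$$
a product of elementary matrices.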
Just to clarify the notation a bit... the $b$ in $[A \mid b]$ is what the equation is equal to, right? Why is $[A \mid 0]$ equivalent to $[I \mid 0]$?
– Jwan622
Jul 27 at 18:30
How do you know that A can be converted to I by a series of elementary matrices?
– Jwan622
Jul 27 at 18:31
If $A x = 0$ has only $x = 0$ as a solution, then the Gauss elimination procedure will succeed in turning $A x = 0$ into $I x = 0$. There should be a part of your text which explains the procedure and shows the operations as elementary matrices.
– mvw
Jul 27 at 18:37
It does not, mind showing it?
– Jwan622
Jul 27 at 20:23
The matrices are shown here: Elementary matrix
– mvw
Jul 27 at 20:52