Relating 2 proofs of: If there are $m$ linearly independent vectors in $\mathbb{R}^n$, then $m \leq n$
I know of a proof using the exchange lemma, but I am trying to relate this approach to the approach using row reduction. The proof from my text (Linear Algebra Done Wrong) goes something like: since the vectors are linearly independent, the echelon form of the matrix with the $m$ vectors as columns has $m$ pivots. But there are only $n$ rows, so the number of pivots cannot exceed $n$. Hence $m \leq n$. However, I feel uneasy about this step, because it seems so much easier than the proof using the exchange lemma. Where is the difficulty hidden in the proof using row reduction?
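The pivot-counting argument can be checked concretely. The sketch below is my own illustration, not from the text: `row_echelon_pivots` is a hypothetical helper that performs plain Gaussian elimination and counts pivots, applied to $m = 3$ independent vectors in $\mathbb{R}^n$ with $n = 5$, placed as the columns of an $n \times m$ matrix.

```python
def row_echelon_pivots(rows):
    """Row-reduce a matrix (list of rows) and return the number of pivots."""
    rows = [r[:] for r in rows]  # work on a copy
    n, m = len(rows), len(rows[0])
    pivots, row, col = 0, 0, 0
    while row < n and col < m:
        # find a row at or below `row` with a nonzero entry in this column
        pivot = next((i for i in range(row, n) if abs(rows[i][col]) > 1e-12), None)
        if pivot is None:
            col += 1
            continue
        rows[row], rows[pivot] = rows[pivot], rows[row]
        # eliminate the entries below the pivot
        for i in range(row + 1, n):
            factor = rows[i][col] / rows[row][col]
            rows[i] = [a - factor * b for a, b in zip(rows[i], rows[row])]
        pivots += 1
        row += 1
        col += 1
    return pivots

# 5 x 3 matrix whose three columns are linearly independent
A = [[1, 0, 0],
     [0, 1, 0],
     [0, 0, 1],
     [1, 1, 0],
     [0, 1, 1]]
n, m = len(A), len(A[0])
p = row_echelon_pivots(A)
print(p)  # 3: independence gives m pivots, and there is at most one pivot per row
```

The example only illustrates the two facts the proof combines (independent columns give $m$ pivots; each pivot occupies its own row, so pivots $\leq n$); it does not, of course, replace the proof that elimination reaches echelon form.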
linear-algebra
asked Jul 15 at 9:02
Aubree Walters
1 Answer
The proof uses the fact that every matrix can be transformed into echelon form. This transformation is carried out via row operations, and proving that the procedure actually works takes real work. You may regard this as the "hidden" part. In fact, the procedure of row reduction is closely related to the exchange process in the Steinitz exchange lemma.
Could you elaborate on what you mean by 'closely related'?
– Aubree Walters
Jul 15 at 10:44
The rows $r_1, \dots, r_n$ of an $n \times m$ matrix can be regarded as vectors in $\mathbb{R}^m$. They generate a subspace $V \subset \mathbb{R}^m$. In a row operation, a row $r_k$ is replaced by a linear combination $\sum_{i=1}^n a_i r_i$ with $a_k \neq 0$ (a row exchange of $r_k$ and $r_l$ is the combination of three such operations: $r'_l = r_l + r_k$, $r'_k = -r_k + r'_l = r_l$, $r''_l = r'_l - r'_k = r_k$). Via row operations, $r_1, \dots, r_n$ is transformed into a certain basis $b_1, \dots, b_k$ of $V$.
– Paul Frost
Jul 15 at 12:47
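The three-step decomposition of a row swap in the comment above is easy to verify numerically. This is my own sketch, not part of the thread; each step replaces one row by a combination in which that row carries a nonzero coefficient.

```python
# Start with two rows r_k and r_l; after three replacement operations
# of the allowed form, they end up swapped.
r_k, r_l = [1.0, 2.0], [3.0, 4.0]

# Step 1: r_l' = r_l + r_k      (coefficient of r_l is 1, nonzero)
r_l = [a + b for a, b in zip(r_l, r_k)]
# Step 2: r_k' = -r_k + r_l'    (coefficient of r_k is -1, nonzero)
r_k = [-a + b for a, b in zip(r_k, r_l)]
# Step 3: r_l'' = r_l' - r_k'   (coefficient of r_l' is 1, nonzero)
r_l = [a - b for a, b in zip(r_l, r_k)]

print(r_k, r_l)  # [3.0, 4.0] [1.0, 2.0]: the original rows, swapped
```

This matters for the analogy because the exchange lemma only ever performs replacements of this "nonzero coefficient" form, so a row swap introduces nothing new.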
The same idea (replacement of a vector $v_k$ by a linear combination $\sum_{i=1}^n a_i v_i$ with $a_k \neq 0$) is used in the exchange lemma.
– Paul Frost
Jul 15 at 12:47
edited Jul 15 at 10:03
answered Jul 15 at 9:58
Paul Frost