Relating two proofs of: If there are $m$ linearly independent vectors in $\mathbb{R}^n$, then $m \leq n$












I know of a proof using the exchange lemma, but I am trying to relate that approach to the one using row reduction. The proof from my text (Linear Algebra Done Wrong) goes something like this: since the vectors are linearly independent, the echelon form of the matrix with the $m$ vectors as columns has $m$ pivots. But there are only $n$ rows, so the number of pivots cannot exceed $n$. Hence $m \leq n$. However, I feel uneasy about this step, because it seems so much easier than the proof using the exchange lemma. Where is the difficulty hidden in the proof using row reduction?
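The pivot-counting step can be checked numerically. A minimal sketch, assuming SymPy is available; the specific vectors are a made-up example, not from the text:

```python
from sympy import Matrix

# Hypothetical example: m = 2 linearly independent vectors in R^3,
# placed as the columns of a 3 x 2 matrix (n = 3 rows, m = 2 columns).
A = Matrix([[1, 0],
            [0, 1],
            [1, 1]])

rref, pivot_cols = A.rref()

# Linear independence of the columns forces one pivot per column ...
assert len(pivot_cols) == A.cols   # m pivots
# ... and each pivot sits in a distinct row, so the pivot count,
# hence m, is at most the number of rows n.
assert len(pivot_cols) <= A.rows
```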







asked Jul 15 at 9:02 by Aubree Walters
          1 Answer
          The proof uses the fact that every matrix can be transformed to echelon form. The transformation to echelon form is done via row operations, and it requires a proof that it actually works. You may regard this as the "hidden" part. In fact, the procedure of row reduction is closely related to the exchange process in the Steinitz exchange lemma.
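The existence of the echelon form can at least be made concrete: it is produced by a short loop of row operations. The following is my own minimal sketch of reduction to echelon form (the function name, partial pivoting, and tolerance are illustrative choices, not the book's algorithm):

```python
import numpy as np

def echelon_form(A, tol=1e-12):
    """Reduce A to row echelon form via elementary row operations.
    A minimal sketch with partial pivoting."""
    A = np.asarray(A, dtype=float).copy()
    rows, cols = A.shape
    r = 0
    for c in range(cols):
        if r >= rows:
            break
        # pick the largest entry in column c at or below row r as the pivot
        p = r + int(np.argmax(np.abs(A[r:, c])))
        if abs(A[p, c]) < tol:
            continue            # no pivot in this column
        A[[r, p]] = A[[p, r]]   # row exchange
        # eliminate every entry below the pivot
        A[r + 1:] -= np.outer(A[r + 1:, c] / A[r, c], A[r])
        r += 1
    return A

# m = 2 independent vectors in R^3 as columns: the echelon form has one
# pivot (one nonzero row) per column, and at most n = 3 of them.
B = echelon_form([[1, 0], [0, 1], [1, 1]])
num_pivots = int(np.sum(np.any(np.abs(B) > 1e-12, axis=1)))
assert num_pivots == 2
```

Proving that this loop always terminates in echelon form, for every matrix, is the part the textbook proof leans on.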






          • Could you elaborate on what you mean by 'closely related'?
            – Aubree Walters
            Jul 15 at 10:44










          • The rows $r_1, \dots, r_n$ of an $n \times m$ matrix can be regarded as vectors in $\mathbb{R}^m$. They generate a subspace $V \subset \mathbb{R}^m$. In a row operation, a row $r_k$ is replaced by a linear combination $\sum_{i=1}^n a_i r_i$ with $a_k \ne 0$ (a row exchange of $r_k$ and $r_l$ is the combination of three such operations: $r'_l = r_l + r_k$, $r'_k = -r_k + r'_l = r_l$, $r''_l = r'_l - r'_k = r_k$). Via row operations, $r_1, \dots, r_n$ is transformed into a certain basis $b_1, \dots, b_k$ of $V$.
            – Paul Frost
            Jul 15 at 12:47











          • The same idea (replacement of a vector $v_k$ by a linear combination $\sum_{i=1}^n a_i v_i$ with $a_k \ne 0$) is used in the exchange lemma.
            – Paul Frost
            Jul 15 at 12:47
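The comments' claim that a row exchange is a composite of three replacement operations, each with a nonzero coefficient on the replaced row, can be verified directly; the matrix below is an arbitrary example:

```python
import numpy as np

# An arbitrary 2 x 3 example; its rows play the roles of r_k and r_l.
M = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0]])
k, l = 0, 1

A = M.copy()
A[l] = A[l] + A[k]    # r'_l  = r_l + r_k       (coefficient of r_l is 1 != 0)
A[k] = -A[k] + A[l]   # r'_k  = -r_k + r'_l = r_l
A[l] = A[l] - A[k]    # r''_l = r'_l - r'_k = r_k

# Rows k and l have been exchanged using only replacement operations.
assert np.allclose(A, M[[l, k]])
```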










answered Jul 15 at 9:58 by Paul Frost, edited Jul 15 at 10:03
