Farkas' lemma proof explanation

I have been studying the proof of the following variant of Farkas' Lemma:



A system of linear equations $A\mathbf{x} = \mathbf{b}$ in $d$ variables has a solution iff, for all $\lambda \in \mathbb{R}^n$, $\lambda^T A = \mathbf{0}^T$ implies $\lambda^T \mathbf{b} = 0$.



For the direction $\Rightarrow$ the proof is easy: suppose that $A\mathbf{x} = \mathbf{b}$ has a solution $\bar{\mathbf{x}}$. Then $\lambda^T A = \mathbf{0}^T \Rightarrow \lambda^T A\bar{\mathbf{x}} = \lambda^T\mathbf{b} = 0$.
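
For example, this direction can be checked numerically: take an $A$ with a nontrivial left null space, set $\mathbf{b} = A\bar{\mathbf{x}}$, and verify that a left-null vector of $A$ annihilates $\mathbf{b}$. A minimal NumPy sketch (the specific matrices are made up purely for illustration):

    import numpy as np

    # A 3x2 matrix whose rows are dependent: row2 = row0 + row1
    A = np.array([[1.0, 0.0],
                  [0.0, 1.0],
                  [1.0, 1.0]])
    x_bar = np.array([3.0, -2.0])
    b = A @ x_bar                      # b is consistent by construction

    lam = np.array([1.0, 1.0, -1.0])   # lambda^T A = 0^T
    print(lam @ A)                     # [0. 0.]
    print(lam @ b)                     # 0.0, as the => direction predicts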



For the other direction, the proof given in the notes proceeds as follows:



The implication $\lambda^T A = \mathbf{0}^T \Rightarrow \lambda^T \mathbf{b} = 0$ means that both matrices $A \in \mathbb{R}^{n \times d}$ and $(A \mid \mathbf{b}) \in \mathbb{R}^{n \times (d+1)}$ have the same linear dependencies among their rows, therefore the same row rank, which means that they have the same column rank. That means that $\mathbf{b}$ is a linear combination of the columns of $A$, which implies that $A\mathbf{x} = \mathbf{b}$ has a solution.
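
The rank chain in this argument can also be checked on a small example; a minimal NumPy sketch (again with made-up data) showing that when $\mathbf{b}$ lies in the column space of $A$, the matrix and its augmentation $(A \mid \mathbf{b})$ have equal rank and the system is solvable:

    import numpy as np

    A = np.array([[1.0, 2.0],
                  [2.0, 4.0],
                  [0.0, 1.0]])         # rank 2: row1 = 2 * row0
    b = A @ np.array([1.0, 1.0])       # b lies in the column space of A
    Ab = np.column_stack([A, b])       # the augmented matrix (A | b)

    # equal row rank <=> equal column rank, so b is a combination of A's columns
    print(np.linalg.matrix_rank(A), np.linalg.matrix_rank(Ab))   # 2 2

    # hence A x = b is solvable; recover a solution with least squares
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    print(np.allclose(A @ x, b))       # True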



What I am missing in the second part of the proof is how the hypothesis implies that both matrices have the same linear dependencies among their rows. Can anyone give an intuitive explanation?







asked Jul 25 at 13:19 by dimkou

1 Answer






The equation $\lambda^T A = \mathbf{0}^T$ is a statement that a certain linear combination of the rows of $A$ is $\mathbf{0}$. Let the rows of $A$ be $A_1, A_2, \dots, A_n$, and suppose that $\lambda_1 A_1 + \dots + \lambda_n A_n = \mathbf{0}^T$. Then if $\lambda^T = (\lambda_1, \dots, \lambda_n)$ we have $\lambda^T A = \mathbf{0}^T$.



Under the hypothesis that $\lambda^T A = \mathbf{0}^T \Rightarrow \lambda^T \mathbf{b} = 0$, $(A \mid \mathbf{b})$ will have the same linear dependencies as $A$: a vector $\lambda$ encodes a dependency among the rows of $(A \mid \mathbf{b})$ exactly when $\lambda^T A = \mathbf{0}^T$ and $\lambda^T \mathbf{b} = 0$, and the hypothesis supplies the second condition whenever the first holds.
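
A small numerical illustration of this last point (the matrices are only an example): a row dependency $\lambda$ of $A$ is automatically a row dependency of $(A \mid \mathbf{b})$ once $\lambda^T \mathbf{b} = 0$, e.g. whenever $\mathbf{b} = A\mathbf{x}$ for some $\mathbf{x}$.

    import numpy as np

    # rows of A satisfy row0 + row1 - row2 = 0
    A = np.array([[1.0, 2.0],
                  [3.0, 4.0],
                  [4.0, 6.0]])
    lam = np.array([1.0, 1.0, -1.0])
    print(lam @ A)                 # [0. 0.]     -- a row dependency of A

    # any b with lambda^T b = 0 (e.g. b = A x) gives the same dependency in (A | b)
    b = A @ np.array([2.0, -1.0])
    Ab = np.column_stack([A, b])
    print(lam @ Ab)                # [0. 0. 0.]  -- same dependency among the augmented rows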






answered Jul 25 at 13:42 by saulspatz