Is there a way to know if a row reduction of a matrix has been done correctly?
























I'm an undergrad taking the course "Linear algebra 1", and I came across a problem:
we often need to apply Gaussian elimination to matrices. Very quickly this stops being a thinking skill and becomes pure technique.



Yet, often in exams there's a question that requires you to apply row reduction
to a matrix.



I am looking for a way to check whether a Gaussian elimination has been carried out correctly, meaning with no calculation mistakes, other than going over all the steps and verifying each calculation. Re-checking every step doubles the time I spend on a given question, and given the lack of time in a big course exam (which is also very stressful), a faster method would be very helpful for me.



Note: we're allowed to use a simple scientific calculator (not a graphing calculator).
























  • If, say, the purpose of the row reduction was to find the solution of a system of linear equations, then you could check that your solution does solve the original system. – Lord Shark the Unknown, Jul 15 at 8:18










  • Correct, but my problem is with questions like: find the echelon matrix, or find a basis for the row/column space. – Jneven, Jul 15 at 8:18







  • @Jneven You could "invent" a system of linear equations (if you get something singular, set each row equal to $0$, say, and otherwise just pick something at random) and see whether the solutions you get from the reduced form also solve the original system. – Arthur, Jul 15 at 9:39











  • @Arthur I think this method would take even more time than checking the steps, as I would have to carry out an entirely new process. My point is: this is definitely a possibility, but for my particular problem it doesn't seem very efficient. – Jneven, Jul 15 at 9:50







  • @Jneven The problem with checking your steps is that your own reasoning is too fresh in your mind. That makes it a bit difficult to catch mistakes. Checking the answer through a wholly different avenue (even an incomplete check like this, since it wouldn't pick up a mistake that removes solutions) makes it unlikely that the same mistake appears twice. At any rate, it wasn't meant as a full answer, just to complete the previous comment and address the concerns you had with it. – Arthur, Jul 15 at 9:57















asked Jul 15 at 8:05 by Jneven
edited Jul 16 at 8:02 by Martin Sleziak







2 Answers

















Accepted answer (score 1)









The actual algorithm for Gaussian elimination looks like this. [The pseudocode was given as an image in the original answer.]



I answered a similar question showing how to perform the LU decomposition, which is Gaussian elimination without pivoting. The purpose is to zero out the rows beneath the pivot using the multipliers $\ell_{jk}$. That is why you have the operation $u_{j,k:m} = u_{j,k:m} - \ell_{jk} u_{k,k:m}$: it subtracts the multiplier you just computed times the pivot row.
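The loop structure described above can be sketched in plain Python. This is a minimal sketch under my own conventions (the function name and the list-of-lists matrix representation are mine, not from the answer), and it assumes no zero pivot is ever encountered, since there is no pivoting:

```python
def lu_no_pivot(A):
    """LU decomposition without pivoting (plain Gaussian elimination).

    Returns (L, U) with A = L * U. Assumes every pivot U[k][k] is nonzero.
    """
    m = len(A)
    U = [row[:] for row in A]  # U starts as a copy of A
    L = [[float(i == j) for j in range(m)] for i in range(m)]  # L starts as I
    for k in range(m - 1):            # for each pivot column k
        for j in range(k + 1, m):     # for each row j below the pivot
            ell = U[j][k] / U[k][k]   # multiplier ell_{jk}
            L[j][k] = ell
            for c in range(k, m):     # u_{j,k:m} = u_{j,k:m} - ell * u_{k,k:m}
                U[j][c] -= ell * U[k][c]
    return L, U

L, U = lu_no_pivot([[1, 1, 1], [3, 5, 6], [-2, 2, 7]])
print(L)  # [[1.0, 0.0, 0.0], [3.0, 1.0, 0.0], [-2.0, 2.0, 1.0]]
```

Running it on the matrix worked out below reproduces the same $L$ and $U$, step for step.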



Which yielded this.



Suppose that

$$ A = \begin{bmatrix} 1 & 1 & 1 \\ 3 & 5 & 6 \\ -2 & 2 & 7 \end{bmatrix} $$
$$ A = LU $$

$$ U = A, \quad L = I $$
$$ k=1,\ m=3,\ j=2 $$
$$ \ell_{21} = \frac{u_{21}}{u_{11}} = \frac{a_{21}}{a_{11}} = 3 $$
$$ u_{2,1:3} = u_{2,1:3} - 3 \cdot u_{1,1:3} $$
Then we subtract 3 times the 1st row from the 2nd row:
$$ \begin{bmatrix} 3 & 5 & 6 \end{bmatrix} - 3 \cdot \begin{bmatrix} 1 & 1 & 1 \end{bmatrix} = \begin{bmatrix} 0 & 2 & 3 \end{bmatrix} $$
Updating each of them:
$$ U = \begin{bmatrix} 1 & 1 & 1 \\ 0 & 2 & 3 \\ -2 & 2 & 7 \end{bmatrix} $$
$$ L = \begin{bmatrix} 1 & 0 & 0 \\ 3 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix} $$
$$ k=1,\ j=3,\ m=3 $$
$$ \ell_{31} = \frac{u_{31}}{u_{11}} = \frac{-2}{1} = -2 $$
$$ u_{3,1:3} = u_{3,1:3} + 2 \cdot u_{1,1:3} $$
Then we add 2 times the first row to the third row:
$$ \begin{bmatrix} -2 & 2 & 7 \end{bmatrix} + 2 \cdot \begin{bmatrix} 1 & 1 & 1 \end{bmatrix} = \begin{bmatrix} 0 & 4 & 9 \end{bmatrix} $$
Updating:
$$ U = \begin{bmatrix} 1 & 1 & 1 \\ 0 & 2 & 3 \\ 0 & 4 & 9 \end{bmatrix} $$
$$ L = \begin{bmatrix} 1 & 0 & 0 \\ 3 & 1 & 0 \\ -2 & 0 & 1 \end{bmatrix} $$
$$ k=2,\ j=3,\ m=3 $$
$$ \ell_{32} = \frac{u_{32}}{u_{22}} = \frac{4}{2} = 2 $$
We're subtracting out little blocks:
$$ u_{3,2:3} = u_{3,2:3} - 2 \cdot u_{2,2:3} $$
$$ \begin{bmatrix} 4 & 9 \end{bmatrix} - 2 \cdot \begin{bmatrix} 2 & 3 \end{bmatrix} = \begin{bmatrix} 0 & 3 \end{bmatrix} $$
Updating:
$$ U = \begin{bmatrix} 1 & 1 & 1 \\ 0 & 2 & 3 \\ 0 & 0 & 3 \end{bmatrix} $$
$$ L = \begin{bmatrix} 1 & 0 & 0 \\ 3 & 1 & 0 \\ -2 & 2 & 1 \end{bmatrix} $$
The algorithm now terminates:
$$ A = LU $$
$$ \underbrace{\begin{bmatrix} 1 & 1 & 1 \\ 3 & 5 & 6 \\ -2 & 2 & 7 \end{bmatrix}}_{A} = \underbrace{\begin{bmatrix} 1 & 0 & 0 \\ 3 & 1 & 0 \\ -2 & 2 & 1 \end{bmatrix}}_{L} \underbrace{\begin{bmatrix} 1 & 1 & 1 \\ 0 & 2 & 3 \\ 0 & 0 & 3 \end{bmatrix}}_{U} $$
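A cheap check at the end is to multiply $L$ and $U$ back together and compare with $A$: if any arithmetic slipped during elimination, the product will not reproduce the original matrix. A minimal sketch (the helper name `matmul` is mine), using the matrices from the worked example:

```python
def matmul(L, U):
    """Multiply two square matrices given as lists of lists."""
    m = len(L)
    return [[sum(L[i][k] * U[k][j] for k in range(m)) for j in range(m)]
            for i in range(m)]

A = [[1, 1, 1], [3, 5, 6], [-2, 2, 7]]
L = [[1, 0, 0], [3, 1, 0], [-2, 2, 1]]
U = [[1, 1, 1], [0, 2, 3], [0, 0, 3]]

# If the elimination was done correctly, L*U recovers A exactly.
assert matmul(L, U) == A
```

By hand this is just a few dot products per row, which is much faster than redoing the elimination.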




































Answer (score 5)













    We know that elementary row operations
    do not change the row space of the matrix.
    And if a matrix is in reduced row echelon form (rref), then it is relatively easy to check whether a vector belongs to the row space.



    So suppose you have a matrix $A$ and a reduced row echelon matrix $B$. If $R_A$ and $R_B$ are the row spaces, you can easily check whether $R_A\subseteq R_B$. Of course, this is only "half"¹ of the verification that $R_A=R_B$, which is equivalent to $A\sim B$.




    Example. Suppose that I have a matrix $$A=
    \begin{pmatrix}
    1 & 1 & 1 & 2 \\
    1 & 1 & 0 & 0 \\
    0 & 1 & 1 & 1 \\
    1 & 2 & 1 & 1 \\
    \end{pmatrix}.$$



    And that after Gaussian elimination I get: $$B=
    \begin{pmatrix}
    1 & 0 & 0 & 1 \\
    0 & 1 & 0 & -1 \\
    0 & 0 & 1 & 2 \\
    0 & 0 & 0 & 0 \\
    \end{pmatrix}
    $$



    To check whether $R_A\subseteq R_B$ it suffices to check whether each row of $A$ is a linear combination of $(1,0,0,1)$, $(0,1,0,-1)$ and $(0,0,1,2)$, i.e., whether it is of the form $c_1(1,0,0,1)+c_2(0,1,0,-1)+c_3(0,0,1,2)$. But since these vectors are very simple, we can see that the coordinates where the pivots sit directly give $c_1$, $c_2$ and $c_3$. So it is easy to find the coefficients.



    Let us try with the fourth row: $(1,2,1,1)$.
    We look at the first three coordinates. (Those are the coordinates with the pivots.) And we check whether
    $$(\boxed{1},\boxed{2},\boxed{1},1)=
    1\cdot(1,0,0,1)+2\cdot(0,1,0,-1)+1\cdot(0,0,1,2)
    $$
    We see that this is true. If the same thing works for each row of $A$, this shows that $R_A\subseteq R_B$.
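This pivot-reading check is mechanical enough to sketch in a few lines of Python. A minimal sketch (the helper name is mine); it assumes the candidate matrix is in rref with one pivot per nonzero row, so the coefficients can be read directly off the pivot positions:

```python
def row_in_rref_span(row, rref_rows, pivots):
    """Check whether `row` is a linear combination of the rref rows.

    The coefficient of each rref row is read off `row` at that rref row's
    pivot column, then the resulting combination is compared with `row`.
    """
    coeffs = [row[p] for p in pivots]            # c_1, c_2, c_3, ...
    combo = [sum(c * r[j] for c, r in zip(coeffs, rref_rows))
             for j in range(len(row))]
    return combo == row

A = [[1, 1, 1, 2], [1, 1, 0, 0], [0, 1, 1, 1], [1, 2, 1, 1]]
B_rows = [[1, 0, 0, 1], [0, 1, 0, -1], [0, 0, 1, 2]]  # nonzero rows of B
pivots = [0, 1, 2]                                     # pivot columns of B

print(all(row_in_rref_span(r, B_rows, pivots) for r in A))  # True
```

If any row of $A$ fails this test, the reduction must contain a mistake.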



    Let me now try another example, where I make a mistake on purpose, to see how to find it.
    $$\begin{pmatrix}
    1 & 1 & 1 & 2 \\
    1 & 1 & 0 & 0 \\
    0 & 1 & 1 & 1 \\
    1 & 2 & 1 & 1 \\
    \end{pmatrix}\overset{(1)}{\sim}
    \begin{pmatrix}
    0 & 0 & 1 & 1 \\
    1 & 1 & 0 & 0 \\
    0 & 1 & 1 & 1 \\
    1 & 2 & 1 & 1 \\
    \end{pmatrix}\overset{(2)}{\sim}
    \begin{pmatrix}
    0 & 0 & 1 & 1 \\
    1 & 1 & 0 & 0 \\
    0 & 1 & 0 & 0 \\
    1 & 2 & 0 & 0 \\
    \end{pmatrix}\overset{(3)}{\sim}
    \begin{pmatrix}
    0 & 0 & 1 & 1 \\
    1 & 1 & 0 & 0 \\
    0 & 1 & 0 & 0 \\
    0 & 1 & 0 & 0 \\
    \end{pmatrix}\overset{(4)}{\sim}
    \begin{pmatrix}
    0 & 0 & 1 & 1 \\
    1 & 0 & 0 & 0 \\
    0 & 1 & 0 & 0 \\
    0 & 0 & 0 & 0 \\
    \end{pmatrix}\overset{(5)}{\sim}
    \begin{pmatrix}
    1 & 0 & 0 & 0 \\
    0 & 1 & 0 & 0 \\
    0 & 0 & 1 & 1 \\
    0 & 0 & 0 & 0 \\
    \end{pmatrix}
    $$



    We can check that
    $$(1,1,1,2)\ne 1\cdot(1,0,0,0)+1\cdot(0,1,0,0)+1\cdot(0,0,1,1).$$



    I can even make the same verification for the matrix after each step. For example, for the matrix after step $(2)$, i.e., $\begin{pmatrix}
    0 & 0 & 1 & 1 \\
    1 & 1 & 0 & 0 \\
    0 & 1 & 0 & 0 \\
    1 & 2 & 0 & 0 \\
    \end{pmatrix}$, everything works. So the error must be at this step or earlier.




    I will stress once again that this is only a halfway verification. I have only checked $R_A\subseteq R_B$ but not $R_B\subseteq R_A$.



    So it is possible that I made a mistake which I do not notice in this way. Here is a (rather naive) example:



    $$\begin{pmatrix}
    1 & 1 & 1 & 2 \\
    1 & 1 & 0 & 0 \\
    0 & 1 & 1 & 1 \\
    1 & 2 & 1 & 1 \\
    \end{pmatrix}\sim
    \begin{pmatrix}
    1 & 1 & 1 & 2 \\
    1 & 1 & 0 & 0 \\
    0 & 1 & 1 & 1 \\
    0 & 0 & 0 & -1 \\
    \end{pmatrix}\sim
    \begin{pmatrix}
    1 & 1 & 1 & 2 \\
    1 & 1 & 0 & 0 \\
    0 & 1 & 1 & 1 \\
    0 & 0 & 0 & 1 \\
    \end{pmatrix}\sim
    \begin{pmatrix}
    1 & 1 & 1 & 0 \\
    1 & 1 & 0 & 0 \\
    0 & 1 & 1 & 0 \\
    0 & 0 & 0 & 1 \\
    \end{pmatrix}\sim
    \begin{pmatrix}
    1 & 0 & 0 & 0 \\
    1 & 1 & 0 & 0 \\
    0 & 1 & 1 & 0 \\
    0 & 0 & 0 & 1 \\
    \end{pmatrix}\sim
    \begin{pmatrix}
    1 & 0 & 0 & 0 \\
    0 & 1 & 0 & 0 \\
    0 & 0 & 1 & 0 \\
    0 & 0 & 0 & 1 \\
    \end{pmatrix}
    $$



    The sanity check described above works. (We check that $R_A\subseteq R_B$, which is true.) But the result is incorrect.




    If I want to be able to check both inclusions, and additionally to make a check after each step, I can use an extended matrix. (But this is much more work.)



    In our example I would do the following:
    $$
    \left(\begin{array}{cccc|cccc}
    1 & 1 & 1 & 2 & 1 & 0 & 0 & 0 \\
    1 & 1 & 0 & 0 & 0 & 1 & 0 & 0 \\
    0 & 1 & 1 & 1 & 0 & 0 & 1 & 0 \\
    1 & 2 & 1 & 1 & 0 & 0 & 0 & 1 \\
    \end{array}\right)\sim
    \left(\begin{array}{cccc|cccc}
    0 & 0 & 1 & 2 & 1 & -1 & 0 & 0 \\
    1 & 1 & 0 & 0 & 0 & 1 & 0 & 0 \\
    0 & 1 & 1 & 1 & 0 & 0 & 1 & 0 \\
    1 & 1 & 0 & 0 & 0 & 0 & -1 & 1 \\
    \end{array}\right)\sim
    \left(\begin{array}{cccc|cccc}
    0 & 0 & 1 & 2 & 1 & -1 & 0 & 0 \\
    1 & 1 & 0 & 0 & 0 & 1 & 0 & 0 \\
    0 & 1 & 1 & 1 & 0 & 0 & 1 & 0 \\
    0 & 0 & 0 & 0 & 0 & -1 & -1 & 1 \\
    \end{array}\right)\sim
    \left(\begin{array}{cccc|cccc}
    1 & 1 & 0 & 0 & 0 & 1 & 0 & 0 \\
    0 & 1 & 1 & 1 & 0 & 0 & 1 & 0 \\
    0 & 0 & 1 & 2 & 1 & -1 & 0 & 0 \\
    0 & 0 & 0 & 0 & 0 & -1 & -1 & 1 \\
    \end{array}\right)\sim
    \left(\begin{array}{cccc|cccc}
    1 & 0 & -1 & -1 & 0 & 1 & -1 & 0 \\
    0 & 1 & 1 & 1 & 0 & 0 & 1 & 0 \\
    0 & 0 & 1 & 2 & 1 & -1 & 0 & 0 \\
    0 & 0 & 0 & 0 & 0 & -1 & -1 & 1 \\
    \end{array}\right)\sim
    \left(\begin{array}{cccc|cccc}
    1 & 0 & 0 & 1 & 1 & 0 & -1 & 0 \\
    0 & 1 & 0 & -1 & -1 & 1 & 1 & 0 \\
    0 & 0 & 1 & 2 & 1 & -1 & 0 & 0 \\
    0 & 0 & 0 & 0 & 0 & -1 & -1 & 1 \\
    \end{array}\right)
    $$
    Now the four numbers on the right are the coefficients which tell me how to get each row as a linear combination of the rows of the original matrix. For example, if I look at the first row, I can check that
    $$1\cdot(1,1,1,2)-1\cdot(0,1,1,1)=(1,0,0,1).$$
    By making a similar verification for each row I can test that $R_B\subseteq R_A$.



    Notice that I can do this also halfway through the computation. For example, if I look at the last row of the third matrix, I have there
    $$\left(\begin{array}{cccc|cccc}
    0 & 0 & 0 & 0 & 0 & -1 & -1 & 1 \\
    \end{array}\right)$$
    And I can check that
    $$-1\cdot(1,1,0,0)-1\cdot(0,1,1,1)+1\cdot(1,2,1,1)=(0,0,0,0).$$
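Each such coefficient check is a small dot-product computation, so it too can be sketched in Python. A minimal sketch (the helper name is mine): given the coefficient block of a row and the rows of the original matrix, verify that the recorded combination reproduces the left-hand part of that row.

```python
def check_coeff_row(coeffs, original_rows, expected):
    """Verify that sum_i coeffs[i] * original_rows[i] equals `expected`."""
    n = len(expected)
    combo = [sum(c * r[j] for c, r in zip(coeffs, original_rows))
             for j in range(n)]
    return combo == expected

A = [[1, 1, 1, 2], [1, 1, 0, 0], [0, 1, 1, 1], [1, 2, 1, 1]]

# Last row of the third extended matrix above:
# left part (0,0,0,0), coefficient part (0,-1,-1,1).
print(check_coeff_row([0, -1, -1, 1], A, [0, 0, 0, 0]))  # True
```

A `False` anywhere pins the mistake down to the step where that row was last modified.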




    ¹ This is similar to the advice given in the comments. If you are using Gaussian elimination to solve a linear system, you can check whether the solution you got is indeed a solution. But it is still possible that you do not have all the solutions. So this is just a "half-check".





























    • As using the extended matrix will take much more time: is it possible to check whether $R_B\subseteq R_A$ the same way we checked that $R_A\subseteq R_B$? – Jneven, Jul 17 at 8:28











    • @Jneven To check that $R_A\subseteq R_B$ we used the fact that $B$ has a very simple form (it is in reduced row echelon form). Since $A$ can be arbitrary, we cannot use something similar there. So I'd say that most other methods used to check the opposite inclusion will require approximately the same amount of computation as the extended matrix. – Martin Sleziak, Jul 17 at 9:03



















        edited Jul 18 at 21:15


























        answered Jul 18 at 21:10









        RHowe


































            We know that elementary row operations
            do not change the row space of the matrix.
            And if a matrix is in reduced row echelon form (rref), then it is relatively easy to check whether a vector belongs to its row space.

            So suppose you have a matrix $A$ and a reduced row echelon matrix $B$. If $R_A$ and $R_B$ are their row spaces, you can easily check whether $R_A \subseteq R_B$. Of course, this is only "half"¹ of the verification that $R_A = R_B$, which is equivalent to $A \sim B$.




            Example. Suppose that I have a matrix $$A=
            \begin{pmatrix}
            1 & 1 & 1 & 2 \\
            1 & 1 & 0 & 0 \\
            0 & 1 & 1 & 1 \\
            1 & 2 & 1 & 1 \\
            \end{pmatrix}.$$

            And that after Gaussian elimination I get: $$B=
            \begin{pmatrix}
            1 & 0 & 0 & 1 \\
            0 & 1 & 0 & -1 \\
            0 & 0 & 1 & 2 \\
            0 & 0 & 0 & 0 \\
            \end{pmatrix}
            $$

            To check whether $R_A \subseteq R_B$ it suffices to check whether each row of $A$ is a linear combination of $(1,0,0,1)$, $(0,1,0,-1)$ and $(0,0,1,2)$, i.e., whether it is of the form $c_1(1,0,0,1)+c_2(0,1,0,-1)+c_3(0,0,1,2)$. But since these vectors are very simple, we can read $c_1$, $c_2$ and $c_3$ straight off the coordinates where the pivots are. So it is easy to find the coefficients.



            Let us try it with the fourth row, $(1,2,1,1)$.
            We look at the first three coordinates. (Those are the coordinates with the pivots.) And we check whether
            $$(\boxed{1},\boxed{2},\boxed{1},1)=
            1\cdot(1,0,0,1)+2\cdot(0,1,0,-1)+1\cdot(0,0,1,2).
            $$
            We see that this is true. If the same thing works for each row of $A$, this shows that $R_A \subseteq R_B$.
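This row-by-row membership test is mechanical enough to script. Below is a small Python sketch of it (my own illustration; `row_space_halfcheck` is a hypothetical helper name, and it assumes $B$ is in reduced row echelon form and uses exact integer arithmetic):

```python
# Half-check sketch: verify that every row of A lies in the row space of a
# reduced row echelon matrix B, by reading the candidate coefficients off
# B's pivot columns (plain Python; function name is my own, not a library's).
def row_space_halfcheck(A, B):
    rows_B = [r for r in B if any(r)]                       # nonzero rows of B
    pivots = [next(j for j, x in enumerate(r) if x) for r in rows_B]
    for a in A:
        coeffs = [a[p] for p in pivots]                     # read off c_1, c_2, ...
        combo = [sum(c * r[j] for c, r in zip(coeffs, rows_B))
                 for j in range(len(a))]
        if combo != a:                                      # this row is not in R_B
            return False
    return True

A = [[1, 1, 1, 2], [1, 1, 0, 0], [0, 1, 1, 1], [1, 2, 1, 1]]
B = [[1, 0, 0, 1], [0, 1, 0, -1], [0, 0, 1, 2], [0, 0, 0, 0]]
print(row_space_halfcheck(A, B))   # → True: the correct reduction passes
```

The same function returns `False` for the deliberately wrong reduction discussed next, which is exactly how the mistake would be caught.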



            Let me now try another example where I make a mistake on purpose, to see how to find the mistake.
            $$\begin{pmatrix}
            1 & 1 & 1 & 2 \\
            1 & 1 & 0 & 0 \\
            0 & 1 & 1 & 1 \\
            1 & 2 & 1 & 1 \\
            \end{pmatrix}\overset{(1)}{\sim}
            \begin{pmatrix}
            0 & 0 & 1 & 1 \\
            1 & 1 & 0 & 0 \\
            0 & 1 & 1 & 1 \\
            1 & 2 & 1 & 1 \\
            \end{pmatrix}\overset{(2)}{\sim}
            \begin{pmatrix}
            0 & 0 & 1 & 1 \\
            1 & 1 & 0 & 0 \\
            0 & 1 & 0 & 0 \\
            1 & 2 & 0 & 0 \\
            \end{pmatrix}\overset{(3)}{\sim}
            \begin{pmatrix}
            0 & 0 & 1 & 1 \\
            1 & 1 & 0 & 0 \\
            0 & 1 & 0 & 0 \\
            0 & 1 & 0 & 0 \\
            \end{pmatrix}\overset{(4)}{\sim}
            \begin{pmatrix}
            0 & 0 & 1 & 1 \\
            1 & 0 & 0 & 0 \\
            0 & 1 & 0 & 0 \\
            0 & 0 & 0 & 0 \\
            \end{pmatrix}\overset{(5)}{\sim}
            \begin{pmatrix}
            1 & 0 & 0 & 0 \\
            0 & 1 & 0 & 0 \\
            0 & 0 & 1 & 1 \\
            0 & 0 & 0 & 0 \\
            \end{pmatrix}
            $$



            We can check that
            $$(1,1,1,2) \ne 1\cdot(1,0,0,0)+1\cdot(0,1,0,0)+1\cdot(0,0,1,1).$$



            I can even make the same verification for the matrix after each step. For example, for the matrix after step $(2)$, i.e., $\begin{pmatrix}
            0 & 0 & 1 & 1 \\
            1 & 1 & 0 & 0 \\
            0 & 1 & 0 & 0 \\
            1 & 2 & 0 & 0 \\
            \end{pmatrix}$, everything works. So the error must have occurred before this step.




            I will stress once again that this is only a halfway verification. I have only checked $R_A \subseteq R_B$ but not $R_B \subseteq R_A$.

            So it is possible that I make a mistake which I do not notice in this way. Here is a (rather naive) example:



            $$\begin{pmatrix}
            1 & 1 & 1 & 2 \\
            1 & 1 & 0 & 0 \\
            0 & 1 & 1 & 1 \\
            1 & 2 & 1 & 1 \\
            \end{pmatrix}\sim
            \begin{pmatrix}
            1 & 1 & 1 & 2 \\
            1 & 1 & 0 & 0 \\
            0 & 1 & 1 & 1 \\
            0 & 0 & 0 & -1 \\
            \end{pmatrix}\sim
            \begin{pmatrix}
            1 & 1 & 1 & 2 \\
            1 & 1 & 0 & 0 \\
            0 & 1 & 1 & 1 \\
            0 & 0 & 0 & 1 \\
            \end{pmatrix}\sim
            \begin{pmatrix}
            1 & 1 & 1 & 0 \\
            1 & 1 & 0 & 0 \\
            0 & 1 & 1 & 0 \\
            0 & 0 & 0 & 1 \\
            \end{pmatrix}\sim
            \begin{pmatrix}
            1 & 0 & 0 & 0 \\
            1 & 1 & 0 & 0 \\
            0 & 1 & 1 & 0 \\
            0 & 0 & 0 & 1 \\
            \end{pmatrix}\sim
            \begin{pmatrix}
            1 & 0 & 0 & 0 \\
            0 & 1 & 0 & 0 \\
            0 & 0 & 1 & 0 \\
            0 & 0 & 0 & 1 \\
            \end{pmatrix}
            $$



            The sanity check described above works. (We check that $R_A \subseteq R_B$, which is true.) But the result is incorrect.




            If I want to be able to check both inclusions, and additionally to be able to make a check after each step, I can use an extended matrix. (But this is much more work.)



            In our example I would do the following:
            $$
            \left(\begin{array}{cccc|cccc}
            1 & 1 & 1 & 2 & 1 & 0 & 0 & 0 \\
            1 & 1 & 0 & 0 & 0 & 1 & 0 & 0 \\
            0 & 1 & 1 & 1 & 0 & 0 & 1 & 0 \\
            1 & 2 & 1 & 1 & 0 & 0 & 0 & 1 \\
            \end{array}\right)\sim
            \left(\begin{array}{cccc|cccc}
            0 & 0 & 1 & 2 & 1 & -1 & 0 & 0 \\
            1 & 1 & 0 & 0 & 0 & 1 & 0 & 0 \\
            0 & 1 & 1 & 1 & 0 & 0 & 1 & 0 \\
            1 & 1 & 0 & 0 & 0 & 0 & -1 & 1 \\
            \end{array}\right)\sim
            \left(\begin{array}{cccc|cccc}
            0 & 0 & 1 & 2 & 1 & -1 & 0 & 0 \\
            1 & 1 & 0 & 0 & 0 & 1 & 0 & 0 \\
            0 & 1 & 1 & 1 & 0 & 0 & 1 & 0 \\
            0 & 0 & 0 & 0 & 0 & -1 & -1 & 1 \\
            \end{array}\right)\sim
            \left(\begin{array}{cccc|cccc}
            1 & 1 & 0 & 0 & 0 & 1 & 0 & 0 \\
            0 & 1 & 1 & 1 & 0 & 0 & 1 & 0 \\
            0 & 0 & 1 & 2 & 1 & -1 & 0 & 0 \\
            0 & 0 & 0 & 0 & 0 & -1 & -1 & 1 \\
            \end{array}\right)\sim
            \left(\begin{array}{cccc|cccc}
            1 & 0 & -1 & -1 & 0 & 1 & -1 & 0 \\
            0 & 1 & 1 & 1 & 0 & 0 & 1 & 0 \\
            0 & 0 & 1 & 2 & 1 & -1 & 0 & 0 \\
            0 & 0 & 0 & 0 & 0 & -1 & -1 & 1 \\
            \end{array}\right)\sim
            \left(\begin{array}{cccc|cccc}
            1 & 0 & 0 & 1 & 1 & 0 & -1 & 0 \\
            0 & 1 & 0 & -1 & -1 & 1 & 1 & 0 \\
            0 & 0 & 1 & 2 & 1 & -1 & 0 & 0 \\
            0 & 0 & 0 & 0 & 0 & -1 & -1 & 1 \\
            \end{array}\right)
            $$
            Now the four numbers on the right of each row are the coefficients which tell me how to get that row as a linear combination of the rows of the original matrix. For example, if I look at the first row, I can check that
            $$1\cdot(1,1,1,2)-1\cdot(0,1,1,1)=(1,0,0,1).$$
            By making a similar verification for each row I can test that $R_B \subseteq R_A$.



            Notice that I can do this also halfway through the computation. For example, if I look at the last row of the third matrix, I have there
            $$\left(\begin{array}{cccc|cccc}
            0 & 0 & 0 & 0 & 0 & -1 & -1 & 1 \\
            \end{array}\right)$$
            And I can check that
            $$-1\cdot(1,1,0,0)-1\cdot(0,1,1,1)+1\cdot(1,2,1,1)=(0,0,0,0).$$
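The bookkeeping check on any single extended row can be sketched in a few lines of Python (my own illustration; `check_extended_row` is a hypothetical name): the trailing coefficients of a reduced row should reproduce its leading part as a combination of the original rows of $A$.

```python
# Sketch of the extended-matrix bookkeeping check: split a reduced row into
# its "real" part and its trailing coefficients, then verify the
# coefficients really combine the ORIGINAL rows of A into the real part.
def check_extended_row(extended_row, A):
    n = len(A[0])                                   # number of "real" columns
    row, coeffs = extended_row[:n], extended_row[n:]
    combo = [sum(c * a[j] for c, a in zip(coeffs, A)) for j in range(n)]
    return combo == row

A = [[1, 1, 1, 2], [1, 1, 0, 0], [0, 1, 1, 1], [1, 2, 1, 1]]
# last row of the third extended matrix above:
print(check_extended_row([0, 0, 0, 0, 0, -1, -1, 1], A))   # → True
```

Running this on every row after every step localizes an arithmetic slip to the first step whose rows fail the check.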




            ¹ This is similar to the advice given in the comments. If you are using Gaussian elimination to solve a linear system, you can check whether the solution you got is indeed a solution. But it is still possible that you do not have all solutions. So this is just a "half-check".





























            • As using an extended matrix will take much more time: is it possible to check $R_B \subseteq R_A$ the same way we checked $R_A \subseteq R_B$?
              – Jneven
              Jul 17 at 8:28











            • @Jneven To check that $R_A \subseteq R_B$ we used the fact that $B$ has a very simple form (it is in reduced row echelon form). Since $A$ can be arbitrary, we cannot use something similar there. So I'd say that most other methods used to check the opposite inclusion will require approximately the same amount of computation as using the extended matrix.
              – Martin Sleziak
              Jul 17 at 9:03














            up vote
            5
            down vote













            We know that elementary row operations
            do not change the row space of the matrix.
            And if a matrix is in rref, then it is relatively easy to check whether a vector belongs to the row space.



            So suppose you have matrix $A$ and a reduced row echelon matrix $B$. If $R_A$ and $R_B$ are row spaces, you can easily check whether $R_Asubseteq R_B$. Of cause, this is only "half"1 of the verification whether $R_A=R_B$, which is equivalent to $Asim B$.




            Example. Suppose that I have a matrix $$A=
            beginpmatrix
            1 & 1 & 1 & 2 \
            1 & 1 & 0 & 0 \
            0 & 1 & 1 & 1 \
            1 & 2 & 1 & 1 \
            endpmatrix.$$



            And that after Gaussian elimination I get: $$B=
            beginpmatrix
            1 & 0 & 0 & 1 \
            0 & 1 & 0 &-1 \
            0 & 0 & 1 & 2 \
            0 & 0 & 0 & 0 \
            endpmatrix
            $$



            To check whether $R_Asubseteq R_B$ it suffices to check whether each row of $A$ is a linear combination of $(1,0,0,1)$, $(0,1,0,-1)$ and $(0,0,1,2)$ , i.e., whether it is of the form $c_1(1,0,0,1)+c_2(0,1,0,-1)+c_3(0,0,1,2)$. But since these vectors are very simple, we can see that on coordinates where there are pivots we get $c_1$, $c_2$ and $c_3$. So it is easy to find coefficients.



            Let us try with the fourth row: $(1,2,1,1)$.
            We look at the first three coordinates. (Those are the coordinates with the pivots.) And we check whether
            $$(boxed1,boxed2,boxed1,1)=
            1cdot(1,0,0,1)+2cdot(0,1,0,-1)+1cdot(0,0,1,2)
            $$
            We see that this is true. If the same thing works for each row of $A$, this shows that $R_Asubseteq R_B$.



            Let me try now another example where I make a mistake on purpose to see how to find the mistake.
            $$beginpmatrix
            1 & 1 & 1 & 2 \
            1 & 1 & 0 & 0 \
            0 & 1 & 1 & 1 \
            1 & 2 & 1 & 1 \
            endpmatrixoverset(1)sim
            beginpmatrix
            0 & 0 & 1 & 1 \
            1 & 1 & 0 & 0 \
            0 & 1 & 1 & 1 \
            1 & 2 & 1 & 1 \
            endpmatrixoverset(2)sim
            beginpmatrix
            0 & 0 & 1 & 1 \
            1 & 1 & 0 & 0 \
            0 & 1 & 0 & 0 \
            1 & 2 & 0 & 0 \
            endpmatrixoverset(3)sim
            beginpmatrix
            0 & 0 & 1 & 1 \
            1 & 1 & 0 & 0 \
            0 & 1 & 0 & 0 \
            0 & 1 & 0 & 0 \
            endpmatrixoverset(4)sim
            beginpmatrix
            0 & 0 & 1 & 1 \
            1 & 0 & 0 & 0 \
            0 & 1 & 0 & 0 \
            0 & 0 & 0 & 0 \
            endpmatrixoverset(5)sim
            beginpmatrix
            1 & 0 & 0 & 0 \
            0 & 1 & 0 & 0 \
            0 & 0 & 1 & 1 \
            0 & 0 & 0 & 0 \
            endpmatrix
            $$



            We can check that
            $$(1,1,1,2)ne 1cdot(1,0,0,0)+1cdot(0,1,0,0)+1cdot(0,0,1,1).$$



            I can even make the same verification for the matrix after each step. For example, for the matrix after step $(2)$, i.e., $beginpmatrix
            0 & 0 & 1 & 1 \
            1 & 1 & 0 & 0 \
            0 & 1 & 0 & 0 \
            1 & 2 & 0 & 0 \
            endpmatrix$, everything works. So some error must be before this step.




            I will stress once again that this is only halfway verification. I have only checked $R_Asubseteq R_B$ but not $R_Bsubseteq R_A$.



            So it is possible that I make a mistake which I do not notice in this way. Here is a (rather naive) example



            $$beginpmatrix
            1 & 1 & 1 & 2 \
            1 & 1 & 0 & 0 \
            0 & 1 & 1 & 1 \
            1 & 2 & 1 & 1 \
            endpmatrixsim
            beginpmatrix
            1 & 1 & 1 & 2 \
            1 & 1 & 0 & 0 \
            0 & 1 & 1 & 1 \
            0 & 0 & 0 &-1 \
            endpmatrixsim
            beginpmatrix
            1 & 1 & 1 & 2 \
            1 & 1 & 0 & 0 \
            0 & 1 & 1 & 1 \
            0 & 0 & 0 & 1 \
            endpmatrixsim
            beginpmatrix
            1 & 1 & 1 & 0 \
            1 & 1 & 0 & 0 \
            0 & 1 & 1 & 0 \
            0 & 0 & 0 & 1 \
            endpmatrixsim
            beginpmatrix
            1 & 0 & 0 & 0 \
            1 & 1 & 0 & 0 \
            0 & 1 & 1 & 0 \
            0 & 0 & 0 & 1 \
            endpmatrixsim
            beginpmatrix
            1 & 0 & 0 & 0 \
            0 & 1 & 0 & 0 \
            0 & 0 & 1 & 0 \
            0 & 0 & 0 & 1 \
            endpmatrix
            $$



            The sanity check described above works. (We check that $R_Asubseteq R_B$, which is true.) But the result is incorrect.




            If I want to be able to check both inclusions and additionally to be able to make a check after each step, I can use extended matrix. (But this is much more work.)



            In our example I would do the following
            $$
            left(beginarraycccc
            1 & 1 & 1 & 2 & 1 & 0 & 0 & 0 \
            1 & 1 & 0 & 0 & 0 & 1 & 0 & 0 \
            0 & 1 & 1 & 1 & 0 & 0 & 1 & 0 \
            1 & 2 & 1 & 1 & 0 & 0 & 0 & 1 \
            endarrayright)sim
            left(beginarraycccc
            0 & 0 & 1 & 2 & 1 &-1 & 0 & 0 \
            1 & 1 & 0 & 0 & 0 & 1 & 0 & 0 \
            0 & 1 & 1 & 1 & 0 & 0 & 1 & 0 \
            1 & 1 & 0 & 0 & 0 & 0 &-1 & 1 \
            endarrayright)sim
            left(beginarraycccc
            0 & 0 & 1 & 2 & 1 &-1 & 0 & 0 \
            1 & 1 & 0 & 0 & 0 & 1 & 0 & 0 \
            0 & 1 & 1 & 1 & 0 & 0 & 1 & 0 \
            0 & 0 & 0 & 0 & 0 &-1 &-1 & 1 \
            endarrayright)sim
            left(beginarraycccc
            1 & 1 & 0 & 0 & 0 & 1 & 0 & 0 \
            0 & 1 & 1 & 1 & 0 & 0 & 1 & 0 \
            0 & 0 & 1 & 2 & 1 &-1 & 0 & 0 \
            0 & 0 & 0 & 0 & 0 &-1 &-1 & 1 \
            endarrayright)sim
            left(beginarraycccc
            1 & 0 &-1 &-1 & 0 & 1 &-1 & 0 \
            0 & 1 & 1 & 1 & 0 & 0 & 1 & 0 \
            0 & 0 & 1 & 2 & 1 &-1 & 0 & 0 \
            0 & 0 & 0 & 0 & 0 &-1 &-1 & 1 \
            endarrayright)sim
            left(beginarraycccc
            1 & 0 & 0 & 1 & 1 & 0 &-1 & 0 \
            0 & 1 & 0 &-1 &-1 & 1 & 1 & 0 \
            0 & 0 & 1 & 2 & 1 &-1 & 0 & 0 \
            0 & 0 & 0 & 0 & 0 &-1 &-1 & 1 \
            endarrayright)
            $$
            Now the four numbers on the right are coefficients which tell me how to get this row as a linear combination of the linear matrix. For example, if I look at the first row, I can check that
            $$1cdot(1,1,1,2)-1cdot(0,1,1,1)=(1,0,0,1).$$
            By making a similar verification for each I can test that $R_Asubseteq R_B$.



            Notice that I can do this also halfway through the computation. For example, if I look at the last row of the third matrix, I have there
            $$left(beginarraycccc
            0 & 0 & 0 & 0 & 0 &-1 &-1 & 1 \
            endarrayright)$$
            And I can check that
            $$-1cdot(1,1,0,0)-1cdot(0,1,1,1)+1cdot(1,2,1,1)=(0,0,0,0).$$




            1 This is similar to the advice given in comment. If you are using Gaussian elimination to solve a linear system, you can check whether the solution you got is indeed a solution. But it is still possible that you do not have all solutions. So this is just a "half-check".






            share|cite|improve this answer























            • as using extended matrix will take much more time - is it possible to check if $R_Bsubseteq R_A$ the same way we checked if $R_Asubseteq R_B$?
              – Jneven
              Jul 17 at 8:28











            • @Jneven To check that $R_Asubseteq R_B$ we have used the fact that $B$ has a very simple form (it is in reduced row echelon form). Since $A$ can be arbitrary, we cannot use something similar there. So I'd say that most other methods use to check the opposite inclusion will require approximately the same amount of computation as using the extended matrix.
              – Martin Sleziak
              Jul 17 at 9:03












            up vote
            5
            down vote










            up vote
            5
            down vote









            We know that elementary row operations
            do not change the row space of the matrix.
            And if a matrix is in rref, then it is relatively easy to check whether a vector belongs to the row space.



            So suppose you have matrix $A$ and a reduced row echelon matrix $B$. If $R_A$ and $R_B$ are row spaces, you can easily check whether $R_Asubseteq R_B$. Of cause, this is only "half"1 of the verification whether $R_A=R_B$, which is equivalent to $Asim B$.




            Example. Suppose that I have a matrix $$A=
            beginpmatrix
            1 & 1 & 1 & 2 \
            1 & 1 & 0 & 0 \
            0 & 1 & 1 & 1 \
            1 & 2 & 1 & 1 \
            endpmatrix.$$



            And that after Gaussian elimination I get: $$B=
            beginpmatrix
            1 & 0 & 0 & 1 \
            0 & 1 & 0 &-1 \
            0 & 0 & 1 & 2 \
            0 & 0 & 0 & 0 \
            endpmatrix
            $$



            To check whether $R_Asubseteq R_B$ it suffices to check whether each row of $A$ is a linear combination of $(1,0,0,1)$, $(0,1,0,-1)$ and $(0,0,1,2)$ , i.e., whether it is of the form $c_1(1,0,0,1)+c_2(0,1,0,-1)+c_3(0,0,1,2)$. But since these vectors are very simple, we can see that on coordinates where there are pivots we get $c_1$, $c_2$ and $c_3$. So it is easy to find coefficients.



            Let us try with the fourth row: $(1,2,1,1)$.
            We look at the first three coordinates. (Those are the coordinates with the pivots.) And we check whether
            $$(boxed1,boxed2,boxed1,1)=
            1cdot(1,0,0,1)+2cdot(0,1,0,-1)+1cdot(0,0,1,2)
            $$
            We see that this is true. If the same thing works for each row of $A$, this shows that $R_Asubseteq R_B$.



            Let me try now another example where I make a mistake on purpose to see how to find the mistake.
            $$beginpmatrix
            1 & 1 & 1 & 2 \
            1 & 1 & 0 & 0 \
            0 & 1 & 1 & 1 \
            1 & 2 & 1 & 1 \
            endpmatrixoverset(1)sim
            beginpmatrix
            0 & 0 & 1 & 1 \
            1 & 1 & 0 & 0 \
            0 & 1 & 1 & 1 \
            1 & 2 & 1 & 1 \
            endpmatrixoverset(2)sim
            beginpmatrix
            0 & 0 & 1 & 1 \
            1 & 1 & 0 & 0 \
            0 & 1 & 0 & 0 \
            1 & 2 & 0 & 0 \
            endpmatrixoverset(3)sim
            beginpmatrix
            0 & 0 & 1 & 1 \
            1 & 1 & 0 & 0 \
            0 & 1 & 0 & 0 \
            0 & 1 & 0 & 0 \
            endpmatrixoverset(4)sim
            beginpmatrix
            0 & 0 & 1 & 1 \
            1 & 0 & 0 & 0 \
            0 & 1 & 0 & 0 \
            0 & 0 & 0 & 0 \
            endpmatrixoverset(5)sim
            beginpmatrix
            1 & 0 & 0 & 0 \
            0 & 1 & 0 & 0 \
            0 & 0 & 1 & 1 \
            0 & 0 & 0 & 0 \
            endpmatrix
            $$



            We can check that
            $$(1,1,1,2)ne 1cdot(1,0,0,0)+1cdot(0,1,0,0)+1cdot(0,0,1,1).$$



            I can even make the same verification for the matrix after each step. For example, for the matrix after step $(2)$, i.e., $beginpmatrix
            0 & 0 & 1 & 1 \
            1 & 1 & 0 & 0 \
            0 & 1 & 0 & 0 \
            1 & 2 & 0 & 0 \
            endpmatrix$, everything works. So some error must be before this step.




            I will stress once again that this is only halfway verification. I have only checked $R_Asubseteq R_B$ but not $R_Bsubseteq R_A$.



            So it is possible that I make a mistake which I do not notice in this way. Here is a (rather naive) example



            $$beginpmatrix
            1 & 1 & 1 & 2 \
            1 & 1 & 0 & 0 \
            0 & 1 & 1 & 1 \
            1 & 2 & 1 & 1 \
            endpmatrixsim
            beginpmatrix
            1 & 1 & 1 & 2 \
            1 & 1 & 0 & 0 \
            0 & 1 & 1 & 1 \
            0 & 0 & 0 &-1 \
            endpmatrixsim
            beginpmatrix
            1 & 1 & 1 & 2 \
            1 & 1 & 0 & 0 \
            0 & 1 & 1 & 1 \
            0 & 0 & 0 & 1 \
            endpmatrixsim
            beginpmatrix
            1 & 1 & 1 & 0 \
            1 & 1 & 0 & 0 \
            0 & 1 & 1 & 0 \
            0 & 0 & 0 & 1 \
            endpmatrixsim
            beginpmatrix
            1 & 0 & 0 & 0 \
            1 & 1 & 0 & 0 \
            0 & 1 & 1 & 0 \
            0 & 0 & 0 & 1 \
            endpmatrixsim
            beginpmatrix
            1 & 0 & 0 & 0 \
            0 & 1 & 0 & 0 \
            0 & 0 & 1 & 0 \
            0 & 0 & 0 & 1 \
            endpmatrix
            $$



            The sanity check described above works. (We check that $R_Asubseteq R_B$, which is true.) But the result is incorrect.




            If I want to be able to check both inclusions and additionally to be able to make a check after each step, I can use extended matrix. (But this is much more work.)



            In our example I would do the following
            $$
            left(beginarraycccc
            1 & 1 & 1 & 2 & 1 & 0 & 0 & 0 \
            1 & 1 & 0 & 0 & 0 & 1 & 0 & 0 \
            0 & 1 & 1 & 1 & 0 & 0 & 1 & 0 \
            1 & 2 & 1 & 1 & 0 & 0 & 0 & 1 \
            endarrayright)sim
            left(beginarraycccc
            0 & 0 & 1 & 2 & 1 &-1 & 0 & 0 \
            1 & 1 & 0 & 0 & 0 & 1 & 0 & 0 \
            0 & 1 & 1 & 1 & 0 & 0 & 1 & 0 \
            1 & 1 & 0 & 0 & 0 & 0 &-1 & 1 \
            endarrayright)sim
            left(beginarraycccc
            0 & 0 & 1 & 2 & 1 &-1 & 0 & 0 \
            1 & 1 & 0 & 0 & 0 & 1 & 0 & 0 \
            0 & 1 & 1 & 1 & 0 & 0 & 1 & 0 \
            0 & 0 & 0 & 0 & 0 &-1 &-1 & 1 \
            endarrayright)sim
            left(beginarraycccc
            1 & 1 & 0 & 0 & 0 & 1 & 0 & 0 \
            0 & 1 & 1 & 1 & 0 & 0 & 1 & 0 \
            0 & 0 & 1 & 2 & 1 &-1 & 0 & 0 \
            0 & 0 & 0 & 0 & 0 &-1 &-1 & 1 \
            endarrayright)sim
            left(beginarraycccc
            1 & 0 &-1 &-1 & 0 & 1 &-1 & 0 \
            0 & 1 & 1 & 1 & 0 & 0 & 1 & 0 \
            0 & 0 & 1 & 2 & 1 &-1 & 0 & 0 \
            0 & 0 & 0 & 0 & 0 &-1 &-1 & 1 \
            endarrayright)sim
            left(beginarraycccc
            1 & 0 & 0 & 1 & 1 & 0 &-1 & 0 \
            0 & 1 & 0 &-1 &-1 & 1 & 1 & 0 \
            0 & 0 & 1 & 2 & 1 &-1 & 0 & 0 \
            0 & 0 & 0 & 0 & 0 &-1 &-1 & 1 \
            endarrayright)
            $$
            Now the four numbers on the right are coefficients which tell me how to get this row as a linear combination of the linear matrix. For example, if I look at the first row, I can check that
            $$1cdot(1,1,1,2)-1cdot(0,1,1,1)=(1,0,0,1).$$
            By making a similar verification for each I can test that $R_Asubseteq R_B$.



            Notice that I can do this also halfway through the computation. For example, if I look at the last row of the third matrix, I have there
            $$left(beginarraycccc
            0 & 0 & 0 & 0 & 0 &-1 &-1 & 1 \
            endarrayright)$$
            And I can check that
            $$-1cdot(1,1,0,0)-1cdot(0,1,1,1)+1cdot(1,2,1,1)=(0,0,0,0).$$




            1 This is similar to the advice given in comment. If you are using Gaussian elimination to solve a linear system, you can check whether the solution you got is indeed a solution. But it is still possible that you do not have all solutions. So this is just a "half-check".






            share|cite|improve this answer















            We know that elementary row operations
            do not change the row space of the matrix.
            And if a matrix is in rref, then it is relatively easy to check whether a vector belongs to the row space.



            So suppose you have matrix $A$ and a reduced row echelon matrix $B$. If $R_A$ and $R_B$ are row spaces, you can easily check whether $R_Asubseteq R_B$. Of cause, this is only "half"1 of the verification whether $R_A=R_B$, which is equivalent to $Asim B$.




            Example. Suppose that I have a matrix $$A=
            beginpmatrix
            1 & 1 & 1 & 2 \
            1 & 1 & 0 & 0 \
            0 & 1 & 1 & 1 \
            1 & 2 & 1 & 1 \
            endpmatrix.$$



            And that after Gaussian elimination I get: $$B=
            beginpmatrix
            1 & 0 & 0 & 1 \
            0 & 1 & 0 &-1 \
            0 & 0 & 1 & 2 \
            0 & 0 & 0 & 0 \
            endpmatrix
            $$



            To check whether $R_Asubseteq R_B$ it suffices to check whether each row of $A$ is a linear combination of $(1,0,0,1)$, $(0,1,0,-1)$ and $(0,0,1,2)$ , i.e., whether it is of the form $c_1(1,0,0,1)+c_2(0,1,0,-1)+c_3(0,0,1,2)$. But since these vectors are very simple, we can see that on coordinates where there are pivots we get $c_1$, $c_2$ and $c_3$. So it is easy to find coefficients.



            Let us try with the fourth row: $(1,2,1,1)$.
            We look at the first three coordinates. (Those are the coordinates with the pivots.) And we check whether
            $$(boxed1,boxed2,boxed1,1)=
            1cdot(1,0,0,1)+2cdot(0,1,0,-1)+1cdot(0,0,1,2)
            $$
            We see that this is true. If the same thing works for each row of $A$, this shows that $R_Asubseteq R_B$.



            Let me try now another example where I make a mistake on purpose to see how to find the mistake.
            $$beginpmatrix
            1 & 1 & 1 & 2 \
            1 & 1 & 0 & 0 \
            0 & 1 & 1 & 1 \
            1 & 2 & 1 & 1 \
            endpmatrixoverset(1)sim
            beginpmatrix
            0 & 0 & 1 & 1 \
            1 & 1 & 0 & 0 \
            0 & 1 & 1 & 1 \
            1 & 2 & 1 & 1 \
            endpmatrixoverset(2)sim
            beginpmatrix
            0 & 0 & 1 & 1 \
            1 & 1 & 0 & 0 \
            0 & 1 & 0 & 0 \
            1 & 2 & 0 & 0 \
            endpmatrixoverset(3)sim
            beginpmatrix
            0 & 0 & 1 & 1 \
            1 & 1 & 0 & 0 \
            0 & 1 & 0 & 0 \
            0 & 1 & 0 & 0 \
            endpmatrixoverset(4)sim
            beginpmatrix
            0 & 0 & 1 & 1 \
            1 & 0 & 0 & 0 \
            0 & 1 & 0 & 0 \
            0 & 0 & 0 & 0 \
            endpmatrixoverset(5)sim
            beginpmatrix
            1 & 0 & 0 & 0 \
            0 & 1 & 0 & 0 \
            0 & 0 & 1 & 1 \
            0 & 0 & 0 & 0 \
            endpmatrix
            $$



            We can check that
            $$(1,1,1,2)ne 1cdot(1,0,0,0)+1cdot(0,1,0,0)+1cdot(0,0,1,1).$$



            I can even make the same verification for the matrix after each step. For example, for the matrix after step $(2)$, i.e., $beginpmatrix
            0 & 0 & 1 & 1 \
            1 & 1 & 0 & 0 \
            0 & 1 & 0 & 0 \
            1 & 2 & 0 & 0 \
            endpmatrix$, everything works. So some error must be before this step.




            I will stress once again that this is only halfway verification. I have only checked $R_Asubseteq R_B$ but not $R_Bsubseteq R_A$.



            So it is possible that I make a mistake which I do not notice in this way. Here is a (rather naive) example



            $$beginpmatrix
            1 & 1 & 1 & 2 \
            1 & 1 & 0 & 0 \
            0 & 1 & 1 & 1 \
            1 & 2 & 1 & 1 \
            endpmatrixsim
            beginpmatrix
            1 & 1 & 1 & 2 \
            1 & 1 & 0 & 0 \
            0 & 1 & 1 & 1 \
            0 & 0 & 0 &-1 \
            endpmatrixsim
            beginpmatrix
            1 & 1 & 1 & 2 \
            1 & 1 & 0 & 0 \
            0 & 1 & 1 & 1 \
            0 & 0 & 0 & 1 \
            endpmatrixsim
            beginpmatrix
            1 & 1 & 1 & 0 \
            1 & 1 & 0 & 0 \
            0 & 1 & 1 & 0 \
            0 & 0 & 0 & 1 \
            endpmatrixsim
            beginpmatrix
            1 & 0 & 0 & 0 \
            1 & 1 & 0 & 0 \
            0 & 1 & 1 & 0 \
            0 & 0 & 0 & 1 \
            endpmatrixsim
            beginpmatrix
            1 & 0 & 0 & 0 \
            0 & 1 & 0 & 0 \
            0 & 0 & 1 & 0 \
            0 & 0 & 0 & 1 \
            endpmatrix
            $$



            The sanity check described above works. (We check that $R_Asubseteq R_B$, which is true.) But the result is incorrect.




            If I want to be able to check both inclusions and additionally to be able to make a check after each step, I can use extended matrix. (But this is much more work.)



In our example I would do the following:
$$
\left(\begin{array}{cccc|cccc}
1 & 1 & 1 & 2 & 1 & 0 & 0 & 0 \\
1 & 1 & 0 & 0 & 0 & 1 & 0 & 0 \\
0 & 1 & 1 & 1 & 0 & 0 & 1 & 0 \\
1 & 2 & 1 & 1 & 0 & 0 & 0 & 1
\end{array}\right)\sim
\left(\begin{array}{cccc|cccc}
0 & 0 & 1 & 2 & 1 &-1 & 0 & 0 \\
1 & 1 & 0 & 0 & 0 & 1 & 0 & 0 \\
0 & 1 & 1 & 1 & 0 & 0 & 1 & 0 \\
1 & 1 & 0 & 0 & 0 & 0 &-1 & 1
\end{array}\right)\sim
\left(\begin{array}{cccc|cccc}
0 & 0 & 1 & 2 & 1 &-1 & 0 & 0 \\
1 & 1 & 0 & 0 & 0 & 1 & 0 & 0 \\
0 & 1 & 1 & 1 & 0 & 0 & 1 & 0 \\
0 & 0 & 0 & 0 & 0 &-1 &-1 & 1
\end{array}\right)\sim
\left(\begin{array}{cccc|cccc}
1 & 1 & 0 & 0 & 0 & 1 & 0 & 0 \\
0 & 1 & 1 & 1 & 0 & 0 & 1 & 0 \\
0 & 0 & 1 & 2 & 1 &-1 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 &-1 &-1 & 1
\end{array}\right)\sim
\left(\begin{array}{cccc|cccc}
1 & 0 &-1 &-1 & 0 & 1 &-1 & 0 \\
0 & 1 & 1 & 1 & 0 & 0 & 1 & 0 \\
0 & 0 & 1 & 2 & 1 &-1 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 &-1 &-1 & 1
\end{array}\right)\sim
\left(\begin{array}{cccc|cccc}
1 & 0 & 0 & 1 & 1 & 0 &-1 & 0 \\
0 & 1 & 0 &-1 &-1 & 1 & 1 & 0 \\
0 & 0 & 1 & 2 & 1 &-1 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 &-1 &-1 & 1
\end{array}\right)
$$
Now the four numbers on the right are coefficients which tell me how to get this row as a linear combination of the rows of the original matrix. For example, if I look at the first row, I can check that
$$1\cdot(1,1,1,2)-1\cdot(0,1,1,1)=(1,0,0,1).$$
By making a similar verification for each row I can test that $R_B\subseteq R_A$.
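The row-by-row verification can be done all at once: the right-hand block of the final extended matrix is a transformation matrix $T$ recording the row operations, and the check amounts to $TA=B$. A minimal NumPy sketch, with the values copied from the computation above:

```python
import numpy as np

# A: the original matrix; B: the reduced left block; T: the right block
# of the final extended matrix (the recorded row-operation coefficients).
A = np.array([[1, 1, 1, 2],
              [1, 1, 0, 0],
              [0, 1, 1, 1],
              [1, 2, 1, 1]], dtype=float)
B = np.array([[1, 0, 0,  1],
              [0, 1, 0, -1],
              [0, 0, 1,  2],
              [0, 0, 0,  0]], dtype=float)
T = np.array([[ 1,  0, -1, 0],
              [-1,  1,  1, 0],
              [ 1, -1,  0, 0],
              [ 0, -1, -1, 1]], dtype=float)

# Each row of B must be the recorded combination of rows of A,
# i.e. T @ A == B; this certifies the inclusion in one shot.
print(np.allclose(T @ A, B))  # True
```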



Notice that I can do this also halfway through the computation. For example, the last row of the third matrix is
$$\left(\begin{array}{cccc|cccc}
0 & 0 & 0 & 0 & 0 &-1 &-1 & 1
\end{array}\right)$$
and I can check that
$$-1\cdot(1,1,0,0)-1\cdot(0,1,1,1)+1\cdot(1,2,1,1)=(0,0,0,0).$$




$^1$ This is similar to the advice given in the comments. If you are using Gaussian elimination to solve a linear system, you can check whether the solution you got is indeed a solution. But it is still possible that you do not have all the solutions, so this is just a "half-check".
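The "half-check" is cheap to do: substitute the candidate solution back into the original system. A small sketch with a hypothetical $2\times 2$ system chosen only for illustration (`np.linalg.solve` stands in for the hand computation):

```python
import numpy as np

# Hypothetical system A x = b; after elimination you believe x solves it.
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
b = np.array([5.0, 10.0])
x = np.linalg.solve(A, b)      # stand-in for your hand-computed solution

# Substituting back confirms x is *a* solution...
print(np.allclose(A @ x, b))   # True
# ...but when A is singular this alone does not confirm
# that you have found *all* solutions.
```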








            edited Jul 16 at 9:49


























            answered Jul 16 at 9:43









            Martin Sleziak

• as using the extended matrix will take much more time - is it possible to check if $R_B\subseteq R_A$ the same way we checked if $R_A\subseteq R_B$?
  – Jneven
  Jul 17 at 8:28











• @Jneven To check that $R_A\subseteq R_B$ we have used the fact that $B$ has a very simple form (it is in reduced row echelon form). Since $A$ can be arbitrary, we cannot use something similar there. So I'd say that most other methods used to check the opposite inclusion will require approximately the same amount of computation as using the extended matrix.
  – Martin Sleziak
  Jul 17 at 9:03















