Geometric or matrix intuition on $A(A+B)^{-1}B = B(A+B)^{-1}A$

I am curious about a seemingly simple identity in matrix algebra. Though matrix multiplication is not commutative (the classic example of noncommutativity), it does allow a commutativity of sorts around a very specific third matrix:



$$
\color{blue}{A (A + B)^{-1} B} = \color{blue}{B (A + B)^{-1} A}
$$



There is a very simple algebraic proof:



$$
\begin{eqnarray}
\color{blue}{A (A + B)^{-1} B} + \color{red}{B (A + B)^{-1} B} &=& (A + B)(A + B)^{-1} B \\
&=& B \\
&=& B (A + B)^{-1} (A + B) \\
&=& \color{blue}{B (A + B)^{-1} A} + \color{red}{B (A + B)^{-1} B}
\end{eqnarray}
$$



because matrix addition is commutative. The identity follows by cancelling $B (A + B)^{-1} B$. (If there is a simpler linear proof, say, one without needing cancellation, please say so. That extra weird matrix, while obviously looking right and obviously doing the job, just pops out of nowhere. Or does it?)
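
A quick numerical sanity check, as a sketch (not a proof): draw random real matrices, for which $A+B$ is almost surely invertible, and compare the two sides in NumPy.

    # Sanity check of A(A+B)^{-1}B = B(A+B)^{-1}A on random matrices.
    import numpy as np

    rng = np.random.default_rng(0)
    for _ in range(1000):
        n = int(rng.integers(2, 6))
        A = rng.standard_normal((n, n))
        B = rng.standard_normal((n, n))
        S_inv = np.linalg.inv(A + B)   # A + B is invertible almost surely
        assert np.allclose(A @ S_inv @ B, B @ S_inv @ A)
    print("identity verified on 1000 random samples")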



Algebra is blind manipulation of symbols. The identity holds in any abstract ring in which $a+b$ has a multiplicative inverse. But for a given model, the identity says something about that model. I've only seen the identity in the context of matrices, but I don't see what's so special about it there.



What does this identity do for matrices?



Is there something, in matrix theory, for which this is special? Does it ever really come up in proofs? Matrices of the form $(A+B)^{-1}$ can't be the only ones that allow such quasi-commutativity, can they? Is there a visualization, a geometric interpretation, or a meaningful interpretation of any kind?







asked Jul 14 at 22:11 by Mitch, edited Jul 17 at 12:37

  • the title is not same as the question? – mathreadler Jul 14 at 22:18

  • Yes, it looks kind of curious indeed. – mathreadler Jul 14 at 22:23

  • It seems to be the same pseudo-commutativity (or whatever to call it) with $(A-B)^{-1}$ in the middle instead. But then, doing the same thing as in the proof, both sides reduce to the zero matrix instead of $B$. – mathreadler Jul 14 at 22:35

  • Any linear combination $(rA+sB)^{-1}$ will work for non-zero numbers $r,s$. Just replace $A$ with $rA$ and $B$ with $sB$ in the proof and divide by $rs$ at the final step. – John Wayland Bales Jul 14 at 22:44

  • So this implies that it is true for any ring when $a+b$ is invertible. – Thomas Andrews Jul 14 at 22:44
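
As a sketch, the $(rA+sB)^{-1}$ generalization from the comments above can be spot-checked numerically; dividing the substituted identity by $rs$ gives $A(rA+sB)^{-1}B = B(rA+sB)^{-1}A$:

    # Spot check: quasi-commutativity around (rA+sB)^{-1} for nonzero scalars r, s.
    import numpy as np

    rng = np.random.default_rng(1)
    A = rng.standard_normal((4, 4))
    B = rng.standard_normal((4, 4))
    for r, s in [(2.0, 3.0), (-1.5, 0.25), (7.0, -2.0)]:
        M_inv = np.linalg.inv(r * A + s * B)
        assert np.allclose(A @ M_inv @ B, B @ M_inv @ A)
    print("holds for all tested (r, s)")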














3 Answers






Answer (7 votes) – Thomas Andrews, answered Jul 14 at 23:05, edited Jul 15 at 16:31

In the case when $A$, $B$, and $A+B$ are all invertible, inverting both sides of the equality gives:



$$B^{-1}(A+B)A^{-1}=A^{-1}(A+B)B^{-1}$$



But a simple calculation shows the left side is $B^{-1}+A^{-1}$, and the right side is $A^{-1}+B^{-1}$.




The general case can be shown by noticing that the set of pairs $(A,B)$ such that $A$, $B$, and $A+B$ are all invertible is dense in the set of pairs $(A,B)$ such that $A+B$ is invertible. Since the difference of the two sides is continuous, that finishes it.



We can do this by taking arbitrary $A,B$ and replacing them with $A+\lambda I,\ B-\lambda I$, where $\lambda$ is a positive value smaller in magnitude than any of the non-zero eigenvalues of $A$ and $B$.
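
A concrete instance of this perturbation, as a sketch: take $A$ and $B$ both singular with $A+B$ invertible; the identity still holds, and the shift $A+\lambda I$, $B-\lambda I$ makes both factors invertible without changing the sum.

    # Density argument in miniature: singular A, B with invertible sum.
    import numpy as np

    A = np.diag([1.0, 0.0])          # singular
    B = np.diag([0.0, 1.0])          # singular, but A + B = I is invertible
    S_inv = np.linalg.inv(A + B)
    print(np.allclose(A @ S_inv @ B, B @ S_inv @ A))   # True even here

    lam = 0.1                        # smaller than any nonzero eigenvalue of A, B
    Ap, Bp = A + lam * np.eye(2), B - lam * np.eye(2)
    print(np.linalg.det(Ap) != 0.0,  # both perturbed matrices are invertible
          np.linalg.det(Bp) != 0.0,
          np.allclose(Ap + Bp, A + B))                 # and the sum is unchanged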




That continuity argument, of course, doesn't extend to matrices over discrete fields, and this equality is true in any ring. If $R$ is a ring (with identity $I$) and $a,c,d\in R$ are such that $dc=cd=I$ $(*)$, then:



$$ad(c-a)=(c-a)da,\tag{1}$$



because the left side is $adc-ada=a-ada$ and the right side is $cda-ada=a-ada$, so they are equal.



Now given $a,b\in R$ so that $a+b$ is invertible in $R$, let $c=a+b$ and $d=(a+b)^{-1}$. Then $b=c-a$, so (1) becomes $$a(a+b)^{-1}b=b(a+b)^{-1}a.$$




(*) There are rings where $cd=I$ does not imply $dc=I$, but for square matrices, $DC=I$ implies $CD=I$. So for a general ring, we need both conditions $cd=I$ and $dc=I$.
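
To see the ring argument at work where continuity is unavailable, here is a sketch over a discrete field: $2\times 2$ matrices over $\mathbb{Z}/7\mathbb{Z}$, with the inverse computed via the adjugate and a modular inverse of the determinant (the helper functions are illustrative, not from the answer).

    # The identity in the ring of 2x2 matrices over Z/7Z (exact arithmetic).
    p = 7

    def mat_mul(X, Y):
        return [[sum(X[i][k] * Y[k][j] for k in range(2)) % p
                 for j in range(2)] for i in range(2)]

    def mat_inv(X):
        det = (X[0][0] * X[1][1] - X[0][1] * X[1][0]) % p
        d = pow(det, -1, p)          # modular inverse; requires det != 0 mod p
        return [[( X[1][1] * d) % p, (-X[0][1] * d) % p],
                [(-X[1][0] * d) % p, ( X[0][0] * d) % p]]

    A = [[1, 2], [3, 4]]
    B = [[2, 0], [1, 5]]
    S = [[(A[i][j] + B[i][j]) % p for j in range(2)] for i in range(2)]
    S_inv = mat_inv(S)               # det(A+B) = 5 mod 7, so A+B is invertible
    print(mat_mul(mat_mul(A, S_inv), B) == mat_mul(mat_mul(B, S_inv), A))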






  • I'd be interested in seeing the details of the density result you mention! – Ryan Gibara Jul 15 at 16:59

  • Fix $C$ invertible, say $CD=I$. Consider the set $X$ of all pairs of matrices $(A,B)$ such that $A+B=C$. Then $f_C(A,B)=ADB-BDA$ is a continuous function $X\to M_n$. On the subset $Y\subseteq X$ of pairs $(A,B)$ with $A,B$ invertible, $f(A,B)=0$. Now, given any $(A,B)\in X$ and $\epsilon>0$, there is a positive $\lambda<\epsilon$ such that $A'=A-\lambda I$ and $B'=B+\lambda I$ are invertible. So we can find a pair $(A',B')\in Y$ arbitrarily close to $(A,B)\in X$, and since $f(A',B')=0$ for points arbitrarily close to $(A,B)$, we must have $f(A,B)=0$ too. @RyanGibara – Thomas Andrews Jul 15 at 17:27

  • @RyanGibara the details actually weren't important for the "intuition" part. The theorem is just much more obvious if $A$, $B$, and $A+B$ are all invertible, and the intuition is that invertible matrices are not just dense in the set of all matrices but, in some sense, "very dense": in a neighborhood of a non-invertible matrix $A$, the non-invertible matrices have measure zero. Picking a random $Q$ with $\|Q\|<\epsilon$, the probability that $A+Q$ and $B-Q$ are both invertible is $1$. – Thomas Andrews Jul 15 at 17:46


















Answer (2 votes) – mathreadler, answered Jul 14 at 23:04, edited Jul 14 at 23:34

The only thing it reminds me of right away is $$\frac{ab}{a+b} = \left(\frac{a+b}{ab}\right)^{-1} = \left(\frac{1}{a} + \frac{1}{b}\right)^{-1}$$



This is the "harmonic sum", related to the harmonic mean of scalars $a,b$.



In other words, harmonic means are well defined without respect to order for matrices(?)




With three terms: $$\left(\frac{1}{a} + \frac{1}{b} + \frac{1}{c}\right)^{-1} = \left(\frac{bc+ac+ab}{abc}\right)^{-1} = \frac{abc}{bc+ac+ab} = \frac{a(bc)}{bc+a(c+b)}$$



I wonder if something can be proven for $$A(BC+A(B+C))^{-1}(BC)=(BC)(BC+A(B+C))^{-1}(A)$$
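
As a sketch, both observations can be probed numerically: the two-term harmonic sum indeed equals $A(A+B)^{-1}B$ (since $A^{-1}+B^{-1}=A^{-1}(A+B)B^{-1}$), while a random noncommuting triple tests the three-term guess.

    # Harmonic sum vs. the identity, and a random test of the three-term guess.
    import numpy as np

    rng = np.random.default_rng(2)
    A, B, C = (rng.standard_normal((3, 3)) for _ in range(3))

    # (A^{-1} + B^{-1})^{-1} equals A(A+B)^{-1}B: the "harmonic sum" in disguise
    H = np.linalg.inv(np.linalg.inv(A) + np.linalg.inv(B))
    print(np.allclose(H, A @ np.linalg.inv(A + B) @ B))        # True

    # Probe the conjectured three-term identity; False here would refute it
    M_inv = np.linalg.inv(B @ C + A @ (B + C))
    print(np.allclose(A @ M_inv @ (B @ C), (B @ C) @ M_inv @ A))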






Answer (0 votes) – G Cab, answered Jul 14 at 23:42, edited Jul 14 at 23:57

Just put
$$
A = \left( A + B \right) - B
$$
to get
$$
\begin{aligned}
& \left( \left( A + B \right) - B \right)\left( A + B \right)^{-1} B = B\left( A + B \right)^{-1} \left( \left( A + B \right) - B \right) \\
& \quad \Downarrow \\
& B - B\left( A + B \right)^{-1} B = B - B\left( A + B \right)^{-1} B
\end{aligned}
$$

Maybe a better insight on what is going on can be obtained by putting
$$
\left\{
\begin{aligned}
A &= \tfrac{1}{2}\left( A + B \right) - \tfrac{1}{2}\left( B - A \right) \\
B &= \tfrac{1}{2}\left( A + B \right) + \tfrac{1}{2}\left( B - A \right)
\end{aligned}
\right.
$$
which then gives
$$
\begin{aligned}
& \tfrac{1}{4}\left( \left( A + B \right) - \left( B - A \right) \right)\left( A + B \right)^{-1} \left( \left( A + B \right) + \left( B - A \right) \right) = \\
& = \tfrac{1}{4}\left( \left( A + B \right) + \left( B - A \right) \right)\left( A + B \right)^{-1} \left( \left( A + B \right) - \left( B - A \right) \right) \\
& \quad \Downarrow \\
& \left( I - \left( B - A \right)\left( A + B \right)^{-1} \right)\left( \left( A + B \right) + \left( B - A \right) \right) = \\
& = \left( I + \left( B - A \right)\left( A + B \right)^{-1} \right)\left( \left( A + B \right) - \left( B - A \right) \right) \\
& \quad \Downarrow \\
& \left( A + B \right) - \left( B - A \right)\left( A + B \right)^{-1} \left( B - A \right) = \\
& = \left( A + B \right) - \left( B - A \right)\left( A + B \right)^{-1} \left( B - A \right)
\end{aligned}
$$
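
As a sketch of the payoff: after this substitution, both sides of the identity equal the same expression, $\tfrac{1}{4}\left[(A+B) - (B-A)(A+B)^{-1}(B-A)\right]$, which is manifestly symmetric under swapping $A$ and $B$ (the sign of $B-A$ flips twice). A quick numerical confirmation:

    # Both sides of the identity equal the symmetric expression
    # ((A+B) - (B-A)(A+B)^{-1}(B-A)) / 4, which is invariant under A <-> B.
    import numpy as np

    rng = np.random.default_rng(3)
    A = rng.standard_normal((4, 4))
    B = rng.standard_normal((4, 4))
    S_inv = np.linalg.inv(A + B)
    D = B - A
    sym = ((A + B) - D @ S_inv @ D) / 4
    print(np.allclose(A @ S_inv @ B, sym),
          np.allclose(B @ S_inv @ A, sym))   # True True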





