Self-adjoint linear map has determinant $< 0$

I would like to know whether the following is correct and, if so, how to generalize it:

Claim: Let $V$ be an $\mathbb{R}$-vector space of dimension $2$, and let $\langle \cdot, \cdot \rangle : V \times V \to \mathbb{R}$ be a scalar product on $V$. Let $F: V \to V$ be a self-adjoint linear map such that $v, F(v)$ is an orthogonal basis of $V$. Then $\det F < 0$.

Proof: We can compute the transformation matrix $A$ of $F$ with respect to the basis $v, F(v)$. Since
$$ v \overset{F}{\mapsto} F(v), \qquad F(v) \overset{F}{\mapsto} F^2(v) = av + bF(v) $$
for some $a, b \in \mathbb{R}$, we have $A = \begin{pmatrix} 0 & a \\ 1 & b \end{pmatrix}$, hence $\det F = \det A = -a$. It now suffices to show that $a > 0$. For $x \in V$ we write $\lVert x \rVert := \langle x, x \rangle$ (note that this is the squared norm). We have
$$\begin{align} a &= \frac{a}{\lVert v \rVert}\langle v, v \rangle = \frac{1}{\lVert v \rVert}\left( a \langle v, v \rangle + b \underbrace{\langle F(v), v \rangle}_{=0} \right) \\
&= \frac{1}{\lVert v \rVert}\langle av + bF(v), v \rangle = \frac{1}{\lVert v \rVert} \langle F^2(v), v \rangle \\
&= \frac{1}{\lVert v \rVert} \langle F(v), F(v) \rangle = \frac{\lVert F(v) \rVert}{\lVert v \rVert} > 0 \end{align}$$

Question: Is there a similar result for $n$-dimensional $\mathbb{R}$-vector spaces, where $n \in \mathbb{N}$? The naive approach, namely trying it with a basis $v, F(v), F^2(v), \ldots, F^{n-1}(v)$, is destined to fail, since $\langle F^2(v), v \rangle = 0 \iff \lVert F(v) \rVert = 0 \iff F(v) = 0$.
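A quick numerical sanity check of the $2$-dimensional claim (my own worked example, not from the original post): $F = \operatorname{diag}(2, -3)$ is self-adjoint, and $v = (\sqrt{3}, \sqrt{2})$ satisfies $\langle v, F(v) \rangle = 2 \cdot 3 - 3 \cdot 2 = 0$, so $v, F(v)$ is an orthogonal basis. The proof's quantity $a = \langle F^2(v), v \rangle / \langle v, v \rangle$ is then positive, and $\det F = -a$.

```python
import numpy as np

# Worked instance of the claim: F = diag(2, -3) is self-adjoint;
# v = (sqrt(3), sqrt(2)) gives <v, F v> = 2*3 - 3*2 = 0, so
# {v, F(v)} is an orthogonal basis of R^2.
F = np.diag([2.0, -3.0])
v = np.array([np.sqrt(3.0), np.sqrt(2.0)])
Fv = F @ v

assert abs(v @ Fv) < 1e-12                                  # v orthogonal to F(v)
assert abs(np.linalg.det(np.column_stack([v, Fv]))) > 1e-9  # {v, F(v)} is a basis

# The proof computes a = <F^2 v, v> / <v, v> and concludes det F = -a < 0.
a = (F @ F @ v) @ v / (v @ v)
assert a > 0
assert np.isclose(np.linalg.det(F), -a)
```

Here $a = 30/5 = 6$ and $\det F = -6$, matching the proof's conclusion.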







  • Clearly not true if $n$ is odd, because $\det(-F) = -\det(F)$. – user357980, Jul 19 at 10:12
  • Also not true for $n \geq 4$ even, because we can simply define $F$ to act on a three-dimensional subspace and then apply the $n$ odd case as above. – user357980, Jul 19 at 10:14
  • $\begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix}$ has determinant $1$. – Lord Shark the Unknown, Jul 19 at 10:16
  • Shark's matrix is not symmetric. – user357980, Jul 19 at 10:20















asked Jul 19 at 10:06 by zinR; edited Jul 19 at 10:13 by Davide Morgante

3 Answers
Accepted answer (score 1), answered Jul 19 at 10:30 by user357980, edited Jul 19 at 10:40:

The generalization is false if $n \geq 3$ is odd, because $\det(-F_n) = -\det(F_n)$. It is also false if $n \geq 4$ is even, because we can define $F_n = I_{n-3} \oplus F_3$, where $F_3$ is a three-dimensional counterexample from the odd case; that is, $F_n$ acts on the first $n-3$ basis vectors by doing nothing and on the last three by treating them as a three-dimensional vector space. Then $F_n$ is self-adjoint, and its matrix is a block matrix with $F_3$ in the bottom-right corner and $1$'s on the remaining $n-3$ diagonal entries, so $\det F_n = \det F_3 > 0$.



I believe that your proof works. Here is another:
For $n = 2$, recall that being self-adjoint means that in some basis $F$ looks like $\begin{pmatrix} \lambda_1 & 0 \\ 0 & \lambda_2 \end{pmatrix}$. If the determinant is positive, then the $\lambda_i$ both have the same sign (and are nonzero). So we can multiply by $-1$ if necessary to make both positive, and then scale so that the matrix is $\begin{pmatrix} a & 0 \\ 0 & 1 \end{pmatrix}$. Now, suppose that $v = (r, s)$ is as in the hypothesis. Then $\langle F(v), v \rangle = ar^2 + s^2$, which can only be zero if $a < 0$, or if $a = 0$ and $s = 0$. However, the latter case gives $F = \begin{pmatrix} 0 & 0 \\ 0 & 1 \end{pmatrix}$ and $v = (r, 0)$, so $F(v) = 0$, contradicting that $v, F(v)$ is a basis.
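A concrete odd-dimensional counterexample along the lines of this answer (my own illustration, not from the post): on $\mathbb{R}^3$, take $F_3 = \operatorname{diag}(-1, 1, -1)$ and $v = (1, 1, 0)$. Then $F_3$ is self-adjoint, $v$ and $F_3(v)$ are nonzero and orthogonal, yet $\det F_3 = 1 > 0$; padding with an identity block gives the even-dimensional counterexample $F_n = I_{n-3} \oplus F_3$.

```python
import numpy as np

# Hypothetical 3-D counterexample: F3 self-adjoint, v and F3(v) nonzero
# and orthogonal, yet det F3 > 0 -- so the 2-D claim does not generalize.
F3 = np.diag([-1.0, 1.0, -1.0])
v = np.array([1.0, 1.0, 0.0])

assert np.allclose(F3, F3.T)          # self-adjoint (symmetric)
assert abs(v @ (F3 @ v)) < 1e-12      # v is orthogonal to F3(v)
assert np.linalg.det(F3) > 0          # determinant is positive

# Even-dimensional case n >= 4: F_n = I_{n-3} (+) F_3 as a block matrix.
n = 4
Fn = np.block([[np.eye(n - 3), np.zeros((n - 3, 3))],
               [np.zeros((3, n - 3)), F3]])
assert np.allclose(Fn, Fn.T) and np.linalg.det(Fn) > 0
```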






Answer (score 1), answered Jul 19 at 10:45 by mheldman, edited Jul 19 at 12:13:

If $v \perp Fv$, then $v^* F v = 0$. Since $v$ and $Fv$ are nonzero, this implies that $\det F < 0$ (else $F$ is positive or negative semidefinite, and $Fv \neq 0$ implies that $\det F \neq 0$).






  • Incidentally, there are only a couple matrices that accomplish this. So you could just find those and compute their eigenvalues. – mheldman, Jul 19 at 11:05
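Following that comment (my own sketch, with the standard inner product and, w.l.o.g. after a rotation, $v = (1, 0)$): every symmetric $F$ with $F(v) \perp v$ has the form $\begin{pmatrix} 0 & c \\ c & d \end{pmatrix}$, whose eigenvalues are $\frac{d \pm \sqrt{d^2 + 4c^2}}{2}$, one negative and one positive whenever $c \neq 0$, with product $\det F = -c^2 < 0$.

```python
import numpy as np

# With v = (1, 0), a symmetric F satisfying F(v) orthogonal to v must be
# [[0, c], [c, d]].  Check over random (c, d) that one eigenvalue is
# negative, one is positive, and that det F = -c^2.
rng = np.random.default_rng(1)
for _ in range(100):
    d = rng.normal()
    c = rng.normal() + 2.0          # offset keeps c away from zero
    F = np.array([[0.0, c], [c, d]])
    lo, hi = np.linalg.eigvalsh(F)  # eigenvalues in ascending order
    assert lo < 0 < hi                        # one negative, one positive
    assert np.isclose(lo * hi, -c ** 2)       # det F = -c^2 < 0
```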

















Answer (score 0), answered Jul 19 at 10:52 by user357980:
Here is another proof (inspired by mheldman's use of the inner product). If $F$ is self-adjoint and positive, it has a positive self-adjoint square root $F^{1/2}$ (just diagonalize and take the square roots of the eigenvalues). We then see that
$$0 = \langle Fv, v \rangle = \langle F^{1/2}F^{1/2}v, v \rangle = \langle F^{1/2}v, F^{1/2}v \rangle = \|F^{1/2}v\|^2,$$
so $F^{1/2}v = 0$ and $F^{1/2}$ would not be invertible, which is a contradiction.
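A numerical illustration of this argument (my own sketch): for a positive-definite symmetric $F$, build $F^{1/2}$ from the spectral decomposition $F = Q \operatorname{diag}(w) Q^T$; then $\langle Fv, v \rangle = \|F^{1/2}v\|^2 > 0$ for every $v \neq 0$, so no nonzero $v$ can be orthogonal to $Fv$.

```python
import numpy as np

# Random positive-definite symmetric F, and its square root F^{1/2}
# obtained by diagonalizing and taking square roots of the eigenvalues.
rng = np.random.default_rng(2)
A = rng.normal(size=(3, 3))
F = A @ A.T + 3.0 * np.eye(3)        # symmetric, eigenvalues >= 3 > 0
w, Q = np.linalg.eigh(F)
F_half = Q @ np.diag(np.sqrt(w)) @ Q.T

assert np.allclose(F_half @ F_half, F)    # (F^{1/2})^2 = F

# <F v, v> = ||F^{1/2} v||^2, hence strictly positive for v != 0:
for _ in range(100):
    v = rng.normal(size=3)
    lhs = (F @ v) @ v
    assert np.isclose(lhs, np.linalg.norm(F_half @ v) ** 2)
    assert lhs > 0
```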





