Self-adjoint linear map has determinant $< 0$
I would like to know whether the following is correct and, if so, how to generalize it:

Claim: Let $V$ be an $\mathbb R$-vector space of dimension $2$, and let $\langle \cdot , \cdot \rangle : V \times V \to \mathbb R$ be a scalar product on $V$. Let $F: V \to V$ be a self-adjoint linear map such that $v, F(v)$ is an orthogonal basis of $V$. Then $\det F < 0$.

Proof: We can calculate the transformation matrix $A$ of $F$ with respect to the basis $v, F(v)$: since
$$ v \overset{F}{\mapsto} F(v), \qquad F(v) \overset{F}{\mapsto} F^2(v) = av + b F(v) $$
for some $a, b \in \mathbb R$, we know that $A = \begin{pmatrix} 0 & a \\ 1 & b \end{pmatrix}$, hence $\det F = \det A = -a$. It now suffices to show that $a > 0$. For $x \in V$ we write $\lVert x \rVert := \langle x , x \rangle$.

We have
$$\begin{align} a &= \frac{a}{\lVert v \rVert}\langle v, v \rangle = \frac{1}{\lVert v \rVert}\left( a \langle v , v \rangle + b \underbrace{\langle F(v), v \rangle}_{=0} \right) \\
&= \frac{1}{\lVert v \rVert}\langle av + b F(v) , v \rangle = \frac{1}{\lVert v \rVert} \langle F^2 (v), v \rangle \\
&= \frac{1}{\lVert v \rVert} \langle F(v), F(v) \rangle = \frac{\lVert F(v) \rVert}{\lVert v \rVert} > 0. \end{align}$$

Question: Is there a similar result for $n$-dimensional $\mathbb R$-vector spaces, where $n \in \mathbb N$? The naive approach, namely trying it with a basis $v, F(v), F^2(v), \ldots, F^{n-1}(v)$, is destined to fail, since $\langle F^2(v) , v \rangle = 0 \iff \lVert F(v) \rVert = 0 \iff F(v) = 0$.

Tags: linear-algebra, linear-transformations, adjoint-operators
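As a quick numerical sanity check of the $n = 2$ claim (the matrix and vector below are my own illustrative choices, not from the question), one can pick a symmetric $F$, solve $\langle F(v), v \rangle = 0$ for $v$, and confirm the determinant is negative:

```python
import numpy as np

# Hypothetical example: a symmetric 2x2 map F (self-adjoint w.r.t. the
# standard inner product) and a vector v with v orthogonal to F(v) such
# that {v, F(v)} is a basis. The claim predicts det F < 0.
F = np.array([[1.0, 3.0],
              [3.0, -2.0]])

# <F(v), v> = 0 with v = (x, 1) means x^2 + 6x - 2 = 0.
x = -3.0 + np.sqrt(11.0)
v = np.array([x, 1.0])
Fv = F @ v

assert abs(v @ Fv) < 1e-12                                   # orthogonal
assert abs(np.linalg.det(np.column_stack([v, Fv]))) > 1e-9   # a basis
print(np.linalg.det(F))  # negative, consistent with the claim
```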
Comments:

- Clearly not true if $n$ is odd, because $\det(-F) = -\det(F)$. – user357980, Jul 19 at 10:12
- Also not true for $n \geq 4$ even, because we can simply define $F$ to act on a three-dimensional subspace and then apply the $n$ odd case as above. – user357980, Jul 19 at 10:14
- $\begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix}$ has determinant $1$. – Lord Shark the Unknown, Jul 19 at 10:16
- Shark's matrix is not symmetric. – user357980, Jul 19 at 10:20
asked Jul 19 at 10:06 by zinR; edited Jul 19 at 10:13 by Davide Morgante
3 Answers
Accepted answer – user357980 (answered Jul 19 at 10:30, edited Jul 19 at 10:40):

The generalization is false if $n \geq 3$ is odd, because $\det(-F_n) = -\det(F_n)$. It is also false if $n \geq 4$ is even, because we can define $F_n = I_{n-3} \oplus F_3$, where $F_3$ is from the previous example; that is, $F_n$ acts on the first $n-3$ basis vectors by doing nothing and on the last three by treating them like a three-dimensional vector space. Then $F_n$ is self-adjoint: its matrix is a block matrix with $F_3$ in the bottom-right corner and $1$'s on the remaining $n-3$ diagonal entries.

I believe that your proof works. Here is another:

For $n = 2$, recall that being self-adjoint means that in some basis $F$ looks like $\begin{pmatrix} \lambda_1 & 0 \\ 0 & \lambda_2 \end{pmatrix}$. If the determinant is positive, then the $\lambda_i$ both have the same sign (and are non-zero). So we can multiply by $-1$ if necessary to make both positive, and then scale so that the matrix is $\begin{pmatrix} a & 0 \\ 0 & 1 \end{pmatrix}$. Now suppose that $v = (r, s)$ is as in the hypothesis. Then $\langle F(v), v \rangle = ar^2 + s^2$, which can only be zero if $a < 0$, or if $a = 0$ and $s = 0$. However, the latter case gives $F = \begin{pmatrix} 0 & 0 \\ 0 & 1 \end{pmatrix}$ and $v = (r, 0)$, so $Fv = 0$.
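The two reductions in this answer can be checked numerically. Below, a random symmetric matrix stands in for $F_3$ (constructing an actual 3-dimensional counterexample with an orthogonal basis $v, F(v), F^2(v)$ is not attempted here):

```python
import numpy as np

rng = np.random.default_rng(0)

# Odd n: negating F flips the sign of the determinant, so symmetry
# alone cannot force a sign on det F.
F3 = rng.standard_normal((3, 3))
F3 = (F3 + F3.T) / 2                  # a symmetric 3x3 stand-in for F_3
assert np.isclose(np.linalg.det(-F3), -np.linalg.det(F3))

# Even n >= 4: F_n = I_{n-3} (+) F_3 (here n = 4) is still symmetric
# and has the same determinant as F_3, so the odd case carries over.
F4 = np.block([[np.eye(1),        np.zeros((1, 3))],
               [np.zeros((3, 1)), F3]])
assert np.allclose(F4, F4.T)
assert np.isclose(np.linalg.det(F4), np.linalg.det(F3))
```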
Answer – mheldman (answered Jul 19 at 10:45, edited Jul 19 at 12:13):

If $v \perp Fv$, then $v^* F v = 0$. Since $v$ and $Fv$ are nonzero, this implies that $\det F < 0$ (else $F$ is positive or negative semidefinite, and $Fv \neq 0$ implies that $\det F \neq 0$).

- Incidentally, there are only a couple of matrices that accomplish this. So you could just find those and compute their eigenvalues. – mheldman, Jul 19 at 11:05
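The semidefiniteness argument can be illustrated numerically: for a symmetric $2 \times 2$ matrix with positive determinant (my own example below), the quadratic form $\langle Fv, v \rangle$ stays bounded away from zero on the unit circle, so no nonzero $v$ satisfies $v \perp Fv$:

```python
import numpy as np

# Illustrative symmetric matrix with det > 0, hence definite: both
# eigenvalues share a sign and <F v, v> never vanishes for v != 0.
F = np.array([[2.0, 1.0],
              [1.0, 3.0]])            # det = 5 > 0
assert np.linalg.det(F) > 0

theta = np.linspace(0.0, 2.0 * np.pi, 1000)
V = np.stack([np.cos(theta), np.sin(theta)])   # unit vectors v
quad = np.einsum('it,ij,jt->t', V, F, V)       # <F v, v> per vector
assert quad.min() > 0                          # no v with v ⊥ F(v)
```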
Answer – user357980 (answered Jul 19 at 10:52):

Here is another proof (inspired by mheldman's idea of using the inner product). If $F$ is self-adjoint and positive, it has a positive self-adjoint square root $F^{1/2}$ (just diagonalize and take the square root of its eigenvalues). We then see that
$$0 = \langle Fv, v\rangle = \langle F^{1/2}F^{1/2}v, v\rangle = \langle F^{1/2}v, F^{1/2}v\rangle = \|F^{1/2}v\|^2,$$
so $F^{1/2}$ would not be invertible, which is a contradiction.
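The square-root construction in this answer can be sketched concretely (the positive-definite matrix below is an illustrative choice of mine):

```python
import numpy as np

# Square root of a positive-definite symmetric F: diagonalize
# F = Q diag(w) Q^T and take square roots of the eigenvalues.
F = np.array([[2.0, 1.0],
              [1.0, 3.0]])
w, Q = np.linalg.eigh(F)              # symmetric eigendecomposition
assert (w > 0).all()                  # positive definite
F_half = Q @ np.diag(np.sqrt(w)) @ Q.T
assert np.allclose(F_half @ F_half, F)
assert np.allclose(F_half, F_half.T)  # the root is again self-adjoint

# Hence <F v, v> = ||F^{1/2} v||^2, which vanishes only for v = 0.
v = np.array([1.0, -2.0])
assert np.isclose(v @ (F @ v), np.linalg.norm(F_half @ v) ** 2)
```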