If matrix $A$ has entries $A_{ij}=\sin(\theta_i - \theta_j)$, why does $\|A\|_* = n$ always hold?
If we let $\theta\in\mathbb{R}^n$ be a vector that contains $n$ arbitrary phases $\theta_i\in[0,2\pi)$ for $i\in[n]$, then we can define a matrix $X\in\mathbb{R}^{n\times n}$, where
\begin{align*}
X_{ij} = \theta_i - \theta_j.
\end{align*}
The matrices that I then consider are the antisymmetric matrix $A=\sin(X)$ and the symmetric matrix $B=\cos(X)$, where sine and cosine are applied entrywise. Through numerical experiments (randomly sampling the phase vector $\theta$) I find that the nuclear norms of $A$ and $B$ are always $n$, i.e.
\begin{align*}
\|A\|_* = \|B\|_* = n.
\end{align*}
Moreover, performing an SVD on $A$ yields the two largest singular values $\sigma_1 = \sigma_2 = n/2$, with all the others $\sigma_3 = \ldots = \sigma_n = 0$. Further, if we look at the Hadamard product $A\circ B$, where
\begin{align*}
(A\circ B)_{ij} = \sin(\theta_i - \theta_j)\cos(\theta_i - \theta_j) = \frac{\sin(2(\theta_i - \theta_j))}{2},
\end{align*}
then
\begin{align*}
\|A\circ B\|_* = \frac{n}{2},
\end{align*}
with $\sigma_1 = \sigma_2 = n/4$ and $\sigma_3 = \ldots = \sigma_n = 0$.
Is there any way to see why $A$ and $B$ have these properties?
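For concreteness, the experiment can be reproduced with a minimal numpy sketch along the following lines (the uniform sampling, the seed, and $n$ are arbitrary choices):

import numpy as np

n = 8
rng = np.random.default_rng(0)
theta = rng.uniform(0, 2 * np.pi, n)    # one arbitrary draw of the phases

X = theta[:, None] - theta[None, :]     # X[i, j] = theta_i - theta_j
A, B = np.sin(X), np.cos(X)             # entrywise sine and cosine

for name, M in [("A", A), ("B", B), ("A o B", A * B)]:
    s = np.linalg.svd(M, compute_uv=False)   # singular values, descending
    print(name, "nuclear norm:", round(s.sum(), 4), "leading sigma:", s[:3].round(4))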
linear-algebra trigonometry eigenvalues-eigenvectors spectral-theory nuclear-norm
asked Jul 24 at 18:28 by ChristophorusX · edited Jul 28 at 9:19 by Rodrigo de Azevedo
What does arbitrary mean? Does it mean uniformly distributed?
– RHowe
Jul 24 at 19:02
By arbitrary I mean that the statement should hold for any $\theta$, i.e. $\|A\|_* = n$ is a deterministic statement that is independent of the choice of phases $\theta$.
– ChristophorusX
Jul 24 at 19:07
Interesting problem
– RHowe
Jul 24 at 19:26
2 Answers
I will stick to your notation, in which $f(X)$ refers to the matrix whose entries are $f(X_{ij})$.
Note that by Euler's formula, we have
$$
\sin(X) = \frac{1}{2i}\left[\exp(iX) - \exp(-iX)\right].
$$
To see that $\exp(iX)$ has rank $1$, note that it can be written as the matrix product
$$
\exp(iX) = \begin{pmatrix}\exp(i\theta_1) \\ \vdots \\ \exp(i\theta_n)\end{pmatrix}\begin{pmatrix}\exp(-i\theta_1) & \cdots & \exp(-i\theta_n)\end{pmatrix}.
$$
Verify also that $\exp(iX)$ is Hermitian (and positive semidefinite), as is $\exp(-iX)$.
So far, we can conclude that $\sin(X)$ has rank at most $2$.
Since $\exp(iX)$ is Hermitian with rank $1$, we can quickly state that
$$
\|\exp(iX)\|_* = |\operatorname{tr}(\exp(iX))| = n.
$$
So, your numerical evidence seems to confirm that
$$
\left\|\frac{1}{2i}[\exp(iX) - \exp(-iX)]\right\|_* =
\left\|\frac{1}{2i}\exp(iX)\right\|_* +
\left\|\frac{1}{2i}\exp(-iX)\right\|_*.
$$
From there, we note that $A = \sin(X)$ satisfies
$$
4 A^*A = [\exp(iX) - \exp(-iX)]^2
= n [\exp(iX) + \exp(-iX)] - \exp(iX)\exp(-iX) - \exp(-iX)\exp(iX)
= n [\exp(iX) + \exp(-iX)] - 2 \operatorname{Re}[\exp(iX)\exp(-iX)],
$$
where the square is taken in the sense of matrix multiplication. Our goal is to compute $\|A\|_* = \operatorname{tr}(\sqrt{A^*A})$.
Potentially useful observations:
We note that
$$
\exp(iX)\exp(-iX) = \left[\sum_{k=1}^n \exp(-2i\theta_k)\right]\begin{pmatrix}\exp(i\theta_1) \\ \vdots \\ \exp(i\theta_n)\end{pmatrix}\begin{pmatrix}\exp(i\theta_1) & \cdots & \exp(i\theta_n)\end{pmatrix},
$$
and $\operatorname{tr}[\exp(iX)\exp(-iX)] = \left|\sum_{k=1}^n \exp(2i\theta_k)\right|^2$. This product is complex-symmetric but not Hermitian.
The matrices $\exp(iX)$ and $\exp(-iX)$ will commute if and only if $\exp(iX)\exp(-iX)$ is purely real (i.e. has imaginary part $0$).
I think that these matrices will commute if and only if $\sum_{k=1}^n \exp(2i\theta_k) = 0$ (which is not generally the case).
answered Jul 24 at 21:52, edited Jul 24 at 22:19 – Omnomnomnom
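These observations are straightforward to sanity-check numerically; a minimal numpy sketch (the vector u below plays the role of the column $(\exp(i\theta_1),\ldots,\exp(i\theta_n))^T$; seed and $n$ are arbitrary):

import numpy as np

n = 5
rng = np.random.default_rng(3)
theta = rng.uniform(0, 2 * np.pi, n)

u = np.exp(1j * theta)                 # column (exp(i theta_1), ..., exp(i theta_n))
P = np.outer(u, u.conj())              # exp(iX), with entries exp(i(theta_i - theta_j))
Q = P.conj()                           # exp(-iX)

print(np.linalg.matrix_rank(P))        # 1: the rank-one factorization above
print(np.trace(P).real)                # n: hence ||exp(iX)||_* = n
S = np.exp(2j * theta).sum()
print(np.allclose(np.trace(P @ Q), abs(S) ** 2))   # the trace identity above
X = theta[:, None] - theta[None, :]
print(np.allclose((P - Q) / 2j, np.sin(X)))        # Euler's formula, entrywise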
Unfortunately, your hypotheses are a little too good to be true in general. But they do hold exactly in the special case that $\sum_i e^{2i\theta_i}=0$. This will be the case when, for example, the angles are evenly spaced on the circle, i.e. $\theta_i = 2\pi i/n$ (which forces $\sum_i e^{2i\theta_i}=0$ whenever $n\ge 3$). Moreover, they hold approximately in the limit of a large number of points sampled uniformly and independently.
Any $\theta$ with $n=2$ will provide a counterexample to the claim about the singular values of $\cos(X)$, provided that $\cos(\theta_1-\theta_2)\ne 0$. Indeed, the matrix is symmetric and positive semidefinite, so its singular values and eigenvalues coincide, and the only way that both eigenvalues of a $2\times 2$ symmetric matrix can be equal is if the matrix is a multiple of the identity, which fails here since the off-diagonal entry is $\cos(\theta_1-\theta_2)\ne 0$.
Now let's see why the statement about singular values is approximately true.
Using standard angle-addition formulas, we see that
$$
\sum_i \cos(\theta_i-\theta_j)\cos(\theta_i) = \frac12\sum_i\left[\cos(2\theta_i-\theta_j)+\cos(\theta_j)\right] = \frac{n}{2}\cos(\theta_j)+\frac12\sum_i\cos(2\theta_i-\theta_j).
$$
By the law of large numbers, we have $\sum_i\cos(2\theta_i-\theta_j)\approx \frac{n}{2\pi}\int_0^{2\pi}\cos(2\theta-\theta_j)\,d\theta=0$ for large $n$. Therefore, in the limit of many independently uniformly sampled angles, the vector $\cos(\theta)$ is an eigenvector of $\cos(X)$ with eigenvalue $n/2$. Similarly, one may check that $\sin(\theta)$ is likewise an eigenvector in the limit, with the same eigenvalue. I suspect this argument carries over to the matrices $\sin(X)$ and $\cos(X)\circ\sin(X)$, although I haven't worked out the details. Furthermore, if we assume that $\sum_i e^{2i\theta_i}=0$, then the above computation shows that $\cos(\theta)$ and $\sin(\theta)$ are exact eigenvectors of $\cos(X)$ with eigenvalue $n/2$.
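As a quick numerical illustration of this limit (a minimal sketch; $n$ and the seed are arbitrary), $\cos(\theta)$ is indeed an approximate eigenvector of $\cos(X)$ with Rayleigh quotient close to $n/2$:

import numpy as np

n = 2000
rng = np.random.default_rng(2)
theta = rng.uniform(0, 2 * np.pi, n)

B = np.cos(theta[:, None] - theta[None, :])
v = np.cos(theta)
lam = v @ B @ v / (v @ v)                     # Rayleigh quotient, close to n/2
resid = np.linalg.norm(B @ v - lam * v) / (lam * np.linalg.norm(v))
print(f"lam/(n/2) = {lam / (n / 2):.4f}, relative residual = {resid:.4f}")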
Edit: The statement about the nuclear norm holds for $\cos(X)$, but not for $\sin(X)$. Indeed, the matrix $\cos(X)$ is symmetric and non-negative definite, so its nuclear norm is equal to its trace, which is $\sum_i \cos(\theta_i-\theta_i)=n$.
As for $\sin(X)$, the statement about the nuclear norm does not hold exactly, but it does hold when $\sum_i e^{2i\theta_i}=0$, as well as approximately in the limit of many uniformly and independently sampled phases, as before. Indeed, the matrix $\sin(X)$ is antisymmetric, so it can be unitarily diagonalized (over the complex numbers), with purely imaginary eigenvalues coming in conjugate pairs. The magnitudes of these eigenvalues are in turn the singular values. As Omnomnomnom has already pointed out, we may write $\sin(X)$ as the sum of two complex rank-$1$ matrices, namely $e^{i\theta}\otimes e^{-i\theta}/2i$ and its complex conjugate (here $\otimes$ denotes the outer product of two vectors). The vectors $e^{i\theta}$ and $e^{-i\theta}$ are not orthogonal in general (with respect to the Hermitian inner product), so this is not a unitary decomposition.
However, it is nearly unitary given the previous assumptions on $\theta$. Indeed, we see that $\|e^{i\theta}\|^2=\sum_i |e^{i\theta_i}|^2=n$. Furthermore, one may verify that for large $n$, $\langle e^{i\theta},e^{-i\theta}\rangle/n\to 0$, using the law of large numbers as before.
Setting $v=e^{i\theta}/\sqrt{n}$ and $w=e^{-i\theta}/\sqrt{n}$, we have $\sin(X)=-\frac{in}{2}\,v\otimes w+\frac{in}{2}\,w\otimes v$. Since $v$ and $w$ both have unit norm and $\langle v,w\rangle=\langle e^{i\theta},e^{-i\theta}\rangle/n\approx 0$, this is approximately a unitary decomposition with eigenvalues $\pm in/2$. As per the earlier discussion, this implies that the two nonzero singular values of $\sin(X)$ are approximately $n/2$, and the nuclear norm is correspondingly approximately $n$. I leave consideration of the matrix $A\circ B$ as an exercise to you.
answered Jul 31 at 20:14, edited Aug 5 at 14:51 – Mike Hawk
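The dichotomy is easy to see numerically (a minimal sketch; the seed and $n$ are arbitrary): evenly spaced phases satisfy $\sum_k e^{2i\theta_k}=0$ and give $\|\sin(X)\|_* = n$ exactly, while a generic draw does not, although $\|\cos(X)\|_* = n$ either way:

import numpy as np

def nuclear(M):
    return np.linalg.svd(M, compute_uv=False).sum()

n = 6
rng = np.random.default_rng(1)
cases = {
    "evenly spaced": 2 * np.pi * np.arange(1, n + 1) / n,  # sum e^{2i theta} = 0 for n >= 3
    "generic":       rng.uniform(0, 2 * np.pi, n),
}
for name, theta in cases.items():
    X = theta[:, None] - theta[None, :]
    S = abs(np.exp(2j * theta).sum())
    print(f"{name:>13}: |sum exp(2i theta)| = {S:.3f}, "
          f"||sin(X)||_* = {nuclear(np.sin(X)):.3f}, ||cos(X)||_* = {nuclear(np.cos(X)):.3f}")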