If matrix $A$ has entries $A_{ij}=\sin(\theta_i - \theta_j)$, why does $\|A\|_* = n$ always hold?

If we let $\theta\in\mathbb{R}^n$ be a vector that contains $n$ arbitrary phases $\theta_i\in[0,2\pi)$ for $i\in[n]$, then we can define a matrix $X\in\mathbb{R}^{n\times n}$, where
\begin{align*}
X_{ij} = \theta_i - \theta_j.
\end{align*}
The matrices I consider are the antisymmetric matrix $A=\sin(X)$ and the symmetric matrix $B=\cos(X)$, with $\sin$ and $\cos$ applied entrywise. Through numerical experiments (randomly sampling the phase vector $\theta$) I find that the nuclear norms of $A$ and $B$ are always $n$, i.e.
\begin{align*}
\|A\|_* = \|B\|_* = n.
\end{align*}

Moreover, performing an SVD on $A$ yields the two largest singular values $\sigma_1 = \sigma_2 = n/2$ and all the others $\sigma_3 = \ldots = \sigma_n = 0$. Further, if we look at the Hadamard product $A\circ B$, where
\begin{align*}
(A\circ B)_{ij} = \sin(\theta_i - \theta_j)\cos(\theta_i - \theta_j) = \frac{1}{2}\sin(2(\theta_i - \theta_j)),
\end{align*}
then
\begin{align*}
\|A\circ B\|_* = \frac{n}{2}
\end{align*}
with $\sigma_1 = \sigma_2 = n/4$ and $\sigma_3 = \ldots = \sigma_n = 0$.



Is there any way to see why $A$ and $B$ have these properties?
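For concreteness, the experiment can be reproduced with a short NumPy sketch (the variable names are mine, not from the original post; as the answers below explain, the identities are exact for evenly spaced phases and only approximate for $\sin$ with generic phases):

```python
import numpy as np

n = 8
# Evenly spaced phases satisfy sum_k exp(2i*theta_k) = 0, for which the
# observed identities hold exactly.
theta = 2 * np.pi * np.arange(n) / n
X = theta[:, None] - theta[None, :]      # X_ij = theta_i - theta_j
A, B = np.sin(X), np.cos(X)              # entrywise sin / cos

def singvals(M):
    return np.linalg.svd(M, compute_uv=False)

print(singvals(A)[:3])        # ~ [n/2, n/2, 0]
print(singvals(B).sum())      # ~ n   (nuclear norm)
print(singvals(A * B).sum())  # ~ n/2 (A * B is the Hadamard product)

# For generic random phases, ||B||_* = n still holds exactly, while
# ||A||_* is typically slightly below n.
rng = np.random.default_rng(0)
t = rng.uniform(0, 2 * np.pi, n)
Y = t[:, None] - t[None, :]
print(singvals(np.cos(Y)).sum())  # = n up to roundoff
print(singvals(np.sin(Y)).sum())  # <= n, close but generally not equal
```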







  • What does "arbitrary" mean? Does it mean uniformly distributed?
    – RHowe
    Jul 24 at 19:02










  • By "arbitrary" I mean the statement should hold for any $\theta$; i.e., $\|A\|_* = n$ is a deterministic statement, independent of the choice of the phase vector $\theta$.
    – ChristophorusX
    Jul 24 at 19:07










  • Interesting problem
    – RHowe
    Jul 24 at 19:26














asked Jul 24 at 18:28 by ChristophorusX, edited Jul 28 at 9:19 by Rodrigo de Azevedo


2 Answers
















I will stick to your notation, in which $f(X)$ refers to the matrix whose entries are $f(X_{ij})$.

Note that by Euler's formula, we have
$$
\sin(X) = \frac{1}{2i}[\exp(iX) - \exp(-iX)]
$$
To see that $\exp(iX)$ has rank $1$, we note that it can be written as the matrix product
$$
\exp(iX) = \begin{pmatrix}\exp(i\theta_1) \\ \vdots \\ \exp(i\theta_n)\end{pmatrix} \begin{pmatrix}\exp(-i\theta_1) & \cdots & \exp(-i\theta_n)\end{pmatrix}
$$
Verify also that $\exp(iX)$ is Hermitian (and positive semidefinite), as is $\exp(-iX)$.

So far, we can conclude that $\sin(X)$ has rank at most $2$.
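The rank-one factorization is easy to check numerically (a sketch, assuming NumPy; the variable names are mine):

```python
import numpy as np

n = 6
theta = np.random.default_rng(1).uniform(0, 2 * np.pi, n)
X = theta[:, None] - theta[None, :]
v = np.exp(1j * theta)

# exp(iX) equals the outer product v v^*, hence has rank 1
E = np.exp(1j * X)
assert np.allclose(E, np.outer(v, v.conj()))
print(np.linalg.matrix_rank(E))          # 1
print(np.linalg.matrix_rank(np.sin(X)))  # 2 (generically)
```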



Since $\exp(iX)$ is Hermitian with rank $1$, we can quickly state that
$$
\|\exp(iX)\|_* = |\operatorname{tr}(\exp(iX))| = n
$$
So, your numerical evidence seems to confirm that
$$
\left\|\frac{1}{2i}[\exp(iX) - \exp(-iX)]\right\|_* =
\left\|\frac{1}{2i}\exp(iX)\right\|_* +
\left\|\frac{1}{2i}\exp(-iX)\right\|_*
$$

From there, we note that $A = \sin(X)$ satisfies
$$
4 A^*A = [\exp(iX) - \exp(-iX)]^2 \\
= n [\exp(iX) + \exp(-iX)] - \exp(iX)\exp(-iX) - \exp(-iX)\exp(iX) \\
= n [\exp(iX) + \exp(-iX)] - 2 \operatorname{Re}[\exp(iX)\exp(-iX)]
$$
where the square is taken in the sense of matrix multiplication, and the second line uses $\exp(\pm iX)^2 = n\exp(\pm iX)$, which follows from the rank-one factorization above. Our goal is to compute $\|A\|_* = \operatorname{tr}(\sqrt{A^*A})$.




Potentially useful observations:

We note that
$$
\exp(iX)\exp(-iX) = \begin{pmatrix}\exp(i\theta_1) \\ \vdots \\ \exp(i\theta_n)\end{pmatrix}\begin{pmatrix}\exp(i\theta_1) & \cdots & \exp(i\theta_n)\end{pmatrix} \sum_{k=1}^n \exp(-2i\theta_k)
$$
and $\operatorname{tr}[\exp(iX)\exp(-iX)] = \left| \sum_{k=1}^n \exp(2i\theta_k) \right|^2$. This product is complex-symmetric but not Hermitian.

The matrices $\exp(iX)$ and $\exp(-iX)$ will commute if and only if $\exp(iX)\exp(-iX)$ is purely real (i.e. has imaginary part $0$).

I think that these matrices will commute if and only if $\sum_{k=1}^n \exp(2i\theta_k) = 0$ (which is not generally the case).
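The trace identity and the commutation claim can likewise be probed numerically (a sketch, assuming NumPy):

```python
import numpy as np

n = 7
theta = np.random.default_rng(3).uniform(0, 2 * np.pi, n)
X = theta[:, None] - theta[None, :]
E, F = np.exp(1j * X), np.exp(-1j * X)

s = np.exp(2j * theta).sum()
# tr[exp(iX) exp(-iX)] = |sum_k exp(2i theta_k)|^2
assert np.isclose(np.trace(E @ F), abs(s) ** 2)

# For evenly spaced phases, sum_k exp(2i theta_k) = 0 and the two
# exponential matrices commute
phi = 2 * np.pi * np.arange(n) / n
Xe = phi[:, None] - phi[None, :]
Ee, Fe = np.exp(1j * Xe), np.exp(-1j * Xe)
assert np.allclose(Ee @ Fe, Fe @ Ee)
```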






Unfortunately, your hypotheses are a little too good to be true in general. But they do hold exactly in the special case that $\sum_i e^{2i\theta_i}=0$. This will be the case when, for example, the angles are evenly spaced on the circle, i.e. $\theta_i = 2\pi i/n$. Moreover, they hold approximately in the limit of a large number of points sampled uniformly and independently.

Any $\theta$ with $n=2$ provides a counterexample to the claim about the singular values of $\cos(X)$, provided that $\cos(\theta_1-\theta_2)\ne 0$. Indeed, since the matrix is symmetric, the singular values and eigenvalues coincide up to sign, and the only way that both eigenvalues of a symmetric $2\times 2$ matrix can be equal is if the matrix is diagonal.

Now let's see why the statement about singular values is approximately true.

Indeed, using standard angle-addition formulas, we see that
$$
\sum_i \cos(\theta_i-\theta_j)\cos(\theta_i) = \frac{1}{2}\sum_i\left[\cos(2\theta_i-\theta_j)+\cos(\theta_j)\right] = \frac{n}{2}\cos(\theta_j)+\frac{1}{2}\sum_i\cos(2\theta_i-\theta_j)
$$

By the law of large numbers, we have $\sum_i\cos(2\theta_i-\theta_j)\approx \frac{n}{2\pi} \int_0^{2\pi}\cos(2\theta-\theta_j)\,d\theta=0$ for large $n$. Therefore, in the limit of many independently, uniformly sampled angles, the vector $\cos(\theta)$ is an eigenvector of $\cos(X)$ with eigenvalue $n/2$. Similarly, one may check that $\sin(\theta)$ is likewise an eigenvector in the limit with the same eigenvalue. I suspect this argument carries over to the matrices $\sin(X)$ and $\cos(X)\circ\sin(X)$, although I haven't worked out the details. Furthermore, if we assume that $\sum_i e^{2i\theta_i}=0$, then the above computation shows that $\cos(\theta)$ and $\sin(\theta)$ are exact eigenvectors of $\cos(X)$ with eigenvalue $n/2$.
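The eigenvector claim is easy to verify numerically in the exact case (a sketch, assuming NumPy; for evenly spaced phases the relation holds without approximation):

```python
import numpy as np

n = 12
theta = 2 * np.pi * np.arange(n) / n     # sum_k exp(2i theta_k) = 0
X = theta[:, None] - theta[None, :]
C = np.cos(X)

c, s = np.cos(theta), np.sin(theta)
# cos(theta) and sin(theta) are exact eigenvectors with eigenvalue n/2
assert np.allclose(C @ c, (n / 2) * c)
assert np.allclose(C @ s, (n / 2) * s)
```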



Edit: The statement about the nuclear norm holds for $\cos(X)$, but not for $\sin(X)$. Indeed, the matrix $\cos(X)$ is symmetric and positive semidefinite (by the angle-addition formula it equals $cc^T + ss^T$ with $c = \cos(\theta)$ and $s = \sin(\theta)$), so its nuclear norm is equal to its trace, which is $\sum_i \cos(\theta_i-\theta_i)=n$.
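The positive-semidefiniteness and the trace argument can be confirmed directly for arbitrary phases (a sketch, assuming NumPy):

```python
import numpy as np

n = 9
theta = np.random.default_rng(4).uniform(0, 2 * np.pi, n)
X = theta[:, None] - theta[None, :]
C = np.cos(X)

w = np.linalg.eigvalsh(C)
assert w.min() > -1e-10                 # positive semidefinite
# for a PSD matrix, nuclear norm = sum |eigenvalues| = trace = n here
assert np.isclose(np.abs(w).sum(), np.trace(C))
assert np.isclose(np.trace(C), n)
```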



As for $\sin(X)$, the statement about the nuclear norm does not hold exactly, but it does hold when $\sum_i e^{2i\theta_i}=0$, as well as approximately in the limit of many uniformly and independently sampled phases, as before. Indeed, the matrix $\sin(X)$ is antisymmetric, so it can be unitarily diagonalized (over the complex numbers), with purely imaginary eigenvalues coming in conjugate pairs. The magnitudes of these eigenvalues are in turn the singular values. As Omnomnomnom has already pointed out, we may write $\sin(X)$ as the sum of two complex rank-$1$ matrices, namely $e^{i\theta}\otimes e^{-i\theta}/2i$ and its complex conjugate (here $\otimes$ denotes the outer product of two vectors). The vectors $e^{i\theta}$ and $e^{-i\theta}$ are not orthogonal in general (with respect to the Hermitian inner product), so this is not a unitary decomposition.

However, it is nearly unitary given the previous assumptions on $\theta$. Indeed, we see that
$\|e^{i\theta}\|^2=\sum_i |e^{i\theta_i}|^2=n$. Furthermore, one may verify that for large $n$, $\langle e^{i\theta},e^{-i\theta}\rangle/n\to 0$, using the law of large numbers as before.

Setting $v=e^{i\theta}/\sqrt{n}$ and $w=e^{-i\theta}/\sqrt{n}$, we have $\sin(X)=-\frac{in}{2}\,v\otimes w+\frac{in}{2}\,w\otimes v$. Since $v$ and $w$ both have unit norm and $\langle v,w\rangle=\langle e^{i\theta},e^{-i\theta}\rangle/n\approx 0$, this is approximately a unitary decomposition with eigenvalues $\pm in/2$. As per the earlier discussion, this implies that the singular values of $\sin(X)$ are approximately $n/2$, and the nuclear norm is correspondingly approximately $n$. I leave consideration of the matrix $A\circ B$ as an exercise to you.
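A quick numerical check of this near-unitary decomposition (a sketch, assuming NumPy; `v` and `w` as in the text, with $\otimes$ as the plain-transpose outer product):

```python
import numpy as np

n = 1000
theta = np.random.default_rng(5).uniform(0, 2 * np.pi, n)
X = theta[:, None] - theta[None, :]

v = np.exp(1j * theta) / np.sqrt(n)
w = np.exp(-1j * theta) / np.sqrt(n)
# sin(X) = -(i n / 2) v w^T + (i n / 2) w v^T
S = -0.5j * n * np.outer(v, w) + 0.5j * n * np.outer(w, v)
assert np.allclose(S, np.sin(X))

# <v, w> -> 0 for large n, so the decomposition is nearly unitary and
# ||sin(X)||_* is close to n
print(abs(np.vdot(v, w)))                                    # small
print(np.linalg.svd(np.sin(X), compute_uv=False).sum() / n)  # close to 1
```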






First answer: answered Jul 24 at 21:52 by Omnomnomnom, edited Jul 24 at 22:19
              up vote
              2
              down vote













              Unfortunately, your hypotheses are a little too good to be true in general. But they do hold exactly in the special case that $sum_i e^2theta_i=0$ This will be the case when, for example, the angles are evenly spaced on the circle, i.e. $theta_i=i*2pi/n$. Moreover, they hold approximately in the limit of a large number of points sampled uniformly and independently.



              Any $theta$ with $n=2$ will provide a counterexample to the claim about the singular values of $cos(X)$, provided that $cos(theta_1-theta_2)ne 0$. Indeed, since the matrix is symmetric, the singular values and eigenvalues coincide, and the only way that both eigenvalues of a 2x2 matrix can be equal is if the matrix is diagonal.



              Now let's see why the statement about singular values is approximately true.



              Indeed, using standard angle addition formulas, we see that $sum_i cos(theta_i-theta_j)cos(theta_i)=.5sum_icos(2theta_i-theta_j)+cos(theta_j)=.5ncos(theta_j)+.5sum_icos(2theta_i-theta_j)$



              By the law of large numbers, we have $sum_icos(2theta_i-theta_j)approx frac n2pi int_0^2picos(2theta-theta_j)dtheta=0$ for large $n$. Therefore, in the limit of many independently uniformly sampled angles, the vector $cos(theta)$ is an eigenvector of $cos(X)$ with eigenvalue $.5n$. Similarly, one may check that $sin(theta)$ is likewise an eigenvector in the limit with the same eigenvalue. I suspect this argument carries over to the matrices $sin(X)$ and $cos(X)circsin(X)$, although I haven't worked out the details. Furthermore, if we assume that $sum_i e^2theta_i=0$, then the above computations shows that $cos(theta)$ and $sin(theta)$ are exact eigenvectors with eigenvalues $pm n/2$.



              Edit: The statements about the nuclear norm holds for cos(X), but not sin(X). Indeed, the matrix cos(X) is symmetric and non-negative definite, so its nuclear norm is equal to its trace, which is $sum_i cos(theta_i-theta_i)=n$.



              As for sin(X), the statement about the nuclear norm does not hold exactly, but it does hold when $sum_i e^2theta_i=0$, as well as approximately in the limit of many uniformly and indepdnently sampled phases, as before. Indeed, the matrix sin(X) is antisymmetric, so it can be unitarily diagonalized (over the complex numbers), with purely imaginary eigenvalues coming in conjugate pairs. The magnitudes of these eigenvalues are in turn the real singular values (up to a sign, which is immaterial for computing the norm). As Omnomnomnom has already pointed out, we may write $sin(X)$ as the sum of two complex rank-1 matrices, namely $e^ithetaotimes e^-itheta/2i$ and its complex conjugate (here $otimes$ denotes the outer product of two vectors). The vectors $e^itheta$ and $e^-itheta$ are not orthogonal in general (with respect to the hermitian innner product), so this is not a unitary decomposition.



              However, it is nearly a unitary given the previous assumptions on $theta$. Indeed, we see that
              $mid e^ithetamid=sum_i mid e^itheta_imid^2=n$. Furthermore, one may verify that for large $n$, $<e^itheta,e^-itheta>to 0$, using the law of large numbers as before.



              Setting $v=e^itheta/sqrtn$ and $w=e^-itheta/sqrtn$ we have $sin(X)=-invotimes w/2+inwotimes v/2$. Since $v$ and $w$ both have unit norm and $<v,w>=<e^itheta,e^-itheta>/napprox 0$, this is approximately a unitary decomposition with eigenvalues $pm in/2$. As per the earlier discussion, this implies that the singular values of $sin(X)$ are approximately $pm n/2$, and the nuclear norm is correspondingly approximately $n$. I leave consideration of the matrix $Acirc B$ as an exercise to you.






              share|cite|improve this answer



























                up vote
                2
                down vote













                Unfortunately, your hypotheses are a little too good to be true in general. But they do hold exactly in the special case that $sum_i e^2theta_i=0$ This will be the case when, for example, the angles are evenly spaced on the circle, i.e. $theta_i=i*2pi/n$. Moreover, they hold approximately in the limit of a large number of points sampled uniformly and independently.



                Any $theta$ with $n=2$ will provide a counterexample to the claim about the singular values of $cos(X)$, provided that $cos(theta_1-theta_2)ne 0$. Indeed, since the matrix is symmetric, the singular values and eigenvalues coincide, and the only way that both eigenvalues of a 2x2 matrix can be equal is if the matrix is diagonal.



                Now let's see why the statement about singular values is approximately true.



                Indeed, using standard angle addition formulas, we see that $sum_i cos(theta_i-theta_j)cos(theta_i)=.5sum_icos(2theta_i-theta_j)+cos(theta_j)=.5ncos(theta_j)+.5sum_icos(2theta_i-theta_j)$



                By the law of large numbers, we have $sum_icos(2theta_i-theta_j)approx frac n2pi int_0^2picos(2theta-theta_j)dtheta=0$ for large $n$. Therefore, in the limit of many independently uniformly sampled angles, the vector $cos(theta)$ is an eigenvector of $cos(X)$ with eigenvalue $.5n$. Similarly, one may check that $sin(theta)$ is likewise an eigenvector in the limit with the same eigenvalue. I suspect this argument carries over to the matrices $sin(X)$ and $cos(X)circsin(X)$, although I haven't worked out the details. Furthermore, if we assume that $sum_i e^2theta_i=0$, then the above computations shows that $cos(theta)$ and $sin(theta)$ are exact eigenvectors with eigenvalues $pm n/2$.



                Edit: The statements about the nuclear norm holds for cos(X), but not sin(X). Indeed, the matrix cos(X) is symmetric and non-negative definite, so its nuclear norm is equal to its trace, which is $sum_i cos(theta_i-theta_i)=n$.



                As for sin(X), the statement about the nuclear norm does not hold exactly, but it does hold when $sum_i e^2theta_i=0$, as well as approximately in the limit of many uniformly and indepdnently sampled phases, as before. Indeed, the matrix sin(X) is antisymmetric, so it can be unitarily diagonalized (over the complex numbers), with purely imaginary eigenvalues coming in conjugate pairs. The magnitudes of these eigenvalues are in turn the real singular values (up to a sign, which is immaterial for computing the norm). As Omnomnomnom has already pointed out, we may write $sin(X)$ as the sum of two complex rank-1 matrices, namely $e^ithetaotimes e^-itheta/2i$ and its complex conjugate (here $otimes$ denotes the outer product of two vectors). The vectors $e^itheta$ and $e^-itheta$ are not orthogonal in general (with respect to the hermitian innner product), so this is not a unitary decomposition.



                However, it is nearly a unitary given the previous assumptions on $theta$. Indeed, we see that
                $mid e^ithetamid=sum_i mid e^itheta_imid^2=n$. Furthermore, one may verify that for large $n$, $<e^itheta,e^-itheta>to 0$, using the law of large numbers as before.



                Setting $v=e^itheta/sqrtn$ and $w=e^-itheta/sqrtn$ we have $sin(X)=-invotimes w/2+inwotimes v/2$. Since $v$ and $w$ both have unit norm and $<v,w>=<e^itheta,e^-itheta>/napprox 0$, this is approximately a unitary decomposition with eigenvalues $pm in/2$. As per the earlier discussion, this implies that the singular values of $sin(X)$ are approximately $pm n/2$, and the nuclear norm is correspondingly approximately $n$. I leave consideration of the matrix $Acirc B$ as an exercise to you.






                share|cite|improve this answer

























                  up vote
                  2
                  down vote










                  up vote
                  2
                  down vote









                  Unfortunately, your hypotheses are a little too good to be true in general. But they do hold exactly in the special case that $sum_i e^2theta_i=0$ This will be the case when, for example, the angles are evenly spaced on the circle, i.e. $theta_i=i*2pi/n$. Moreover, they hold approximately in the limit of a large number of points sampled uniformly and independently.



                  Any $theta$ with $n=2$ will provide a counterexample to the claim about the singular values of $cos(X)$, provided that $cos(theta_1-theta_2)ne 0$. Indeed, since the matrix is symmetric, the singular values and eigenvalues coincide, and the only way that both eigenvalues of a 2x2 matrix can be equal is if the matrix is diagonal.



                  Now let's see why the statement about singular values is approximately true.



Indeed, using standard product-to-sum formulas, we see that
$$\sum_i \cos(\theta_i-\theta_j)\cos(\theta_i)=\tfrac{1}{2}\sum_i\big[\cos(2\theta_i-\theta_j)+\cos(\theta_j)\big]=\tfrac{n}{2}\cos(\theta_j)+\tfrac{1}{2}\sum_i\cos(2\theta_i-\theta_j).$$
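(If the identity behind this step looks unfamiliar, it is just $\cos(a-b)\cos(a)=\tfrac12[\cos(2a-b)+\cos(b)]$; a quick numerical sanity check, with randomly chosen angles:)

```python
import numpy as np

# Product-to-sum identity: cos(a - b) cos(a) = (cos(2a - b) + cos(b)) / 2
rng = np.random.default_rng(1)
a, b = rng.uniform(0, 2 * np.pi, 2)
lhs = np.cos(a - b) * np.cos(a)
rhs = (np.cos(2 * a - b) + np.cos(b)) / 2
print(np.isclose(lhs, rhs))  # True
```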



By the law of large numbers, we have $\sum_i\cos(2\theta_i-\theta_j)\approx \frac{n}{2\pi}\int_0^{2\pi}\cos(2\theta-\theta_j)\,d\theta=0$ for large $n$. Therefore, in the limit of many independently and uniformly sampled angles, the vector $\cos(\theta)$ is an eigenvector of $\cos(X)$ with eigenvalue $n/2$. Similarly, one may check that $\sin(\theta)$ is likewise an eigenvector in the limit, with the same eigenvalue. I suspect this argument carries over to the matrices $\sin(X)$ and $\cos(X)\circ\sin(X)$, although I haven't worked out the details. Furthermore, if we assume that $\sum_i e^{2i\theta_i}=0$, then the above computation shows that $\cos(\theta)$ and $\sin(\theta)$ are exact eigenvectors with eigenvalue $n/2$.
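One can verify the exact special case numerically (my own check, using evenly spaced phases so that $\sum_i e^{2i\theta_i}=0$):

```python
import numpy as np

# Evenly spaced phases: sum_i e^{2 i theta_i} = 0 for n > 2, so cos(theta)
# and sin(theta) should be exact eigenvectors of cos(X) with eigenvalue n/2.
n = 7
theta = 2 * np.pi * np.arange(n) / n
X = theta[:, None] - theta[None, :]
C = np.cos(X)
c, s = np.cos(theta), np.sin(theta)
print(np.allclose(C @ c, n / 2 * c))  # True
print(np.allclose(C @ s, n / 2 * s))  # True
```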



Edit: The statement about the nuclear norm holds exactly for $\cos(X)$, but not for $\sin(X)$. Indeed, the matrix $\cos(X)$ is symmetric and positive semidefinite, so its nuclear norm equals its trace, which is $\sum_i \cos(\theta_i-\theta_i)=n$.
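Since this part holds for any phases whatsoever, it is easy to check with random $\theta$ (again my own verification, not part of the argument):

```python
import numpy as np

# cos(X) = cc^T + ss^T is positive semidefinite, so its nuclear norm
# equals its trace = n for ANY choice of phases.
rng = np.random.default_rng(0)
n = 50
theta = rng.uniform(0, 2 * np.pi, n)
X = theta[:, None] - theta[None, :]
nuc = np.linalg.norm(np.cos(X), ord='nuc')
print(abs(nuc - n) < 1e-8)  # True
```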



As for $\sin(X)$, the statement about the nuclear norm does not hold exactly, but it does hold when $\sum_i e^{2i\theta_i}=0$, as well as approximately in the limit of many uniformly and independently sampled phases, as before. Indeed, the matrix $\sin(X)$ is antisymmetric, so it can be unitarily diagonalized (over the complex numbers), with purely imaginary eigenvalues coming in conjugate pairs. The magnitudes of these eigenvalues are in turn the singular values, so summing them gives the nuclear norm. As Omnomnomnom has already pointed out, we may write $\sin(X)$ as the sum of two complex rank-1 matrices, namely $e^{i\theta}\otimes e^{-i\theta}/2i$ and its complex conjugate (here $\otimes$ denotes the outer product of two vectors). The vectors $e^{i\theta}$ and $e^{-i\theta}$ are not orthogonal in general (with respect to the Hermitian inner product), so this is not a unitary decomposition.



However, it is nearly unitary given the previous assumptions on $\theta$. Indeed, we see that
$\|e^{i\theta}\|^2=\sum_i |e^{i\theta_i}|^2=n$. Furthermore, one may verify that $\frac{1}{n}\langle e^{i\theta},e^{-i\theta}\rangle\to 0$ for large $n$, using the law of large numbers as before.



Setting $v=e^{i\theta}/\sqrt{n}$ and $w=e^{-i\theta}/\sqrt{n}$, we have $\sin(X)=-\frac{in}{2}\,v\otimes w+\frac{in}{2}\,w\otimes v$. Since $v$ and $w$ both have unit norm and $\langle v,w\rangle=\langle e^{i\theta},e^{-i\theta}\rangle/n\approx 0$, this is approximately a unitary decomposition with eigenvalues $\pm in/2$. As per the earlier discussion, this implies that the two nonzero singular values of $\sin(X)$ are approximately $n/2$, and the nuclear norm is correspondingly approximately $n$. I leave consideration of the matrix $A\circ B$ as an exercise to you.
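The rank-2 decomposition and the exact special case can both be checked numerically (my own sketch; with evenly spaced phases, $\sum_i e^{2i\theta_i}=0$, so the two nonzero singular values should be exactly $n/2$):

```python
import numpy as np

# sin(X) = (u w^T - w u^T) / (2i) with u = e^{i theta}, w = conj(u).
n = 8
theta = 2 * np.pi * np.arange(n) / n     # evenly spaced phases
X = theta[:, None] - theta[None, :]
u = np.exp(1j * theta)
S = (np.outer(u, u.conj()) - np.outer(u.conj(), u)) / 2j
print(np.allclose(S.real, np.sin(X)))    # the rank-2 decomposition holds
sv = np.linalg.svd(np.sin(X), compute_uv=False)
print(np.allclose(sv[:2], n / 2))        # two singular values equal to n/2
print(np.allclose(sv[2:], 0))            # the rest vanish
```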






edited Aug 5 at 14:51
answered Jul 31 at 20:14
Mike Hawk