Derivatives in a Hilbert space.

We need help with the proof of Lemma IX.11.4 on pages 249-250 of the book "Representations of Finite and Compact Groups" by Barry Simon.



The problem is mostly with the notation used. We do not understand what is meant by the derivative that is being taken there.



In particular, the theorem says the following:



Let $X$ be a Hilbert space. Let $S_n$ (the symmetric group) act on $X^{\otimes n}$ in the natural way. Let $S^n(X)$ be the set of vectors invariant under all $V_\pi$. Then $S^n(X)$ is the smallest space containing $\{\, x \otimes \dots \otimes x \mid x \in X \,\}$.



Here, the 'natural way' means, for $\pi \in S_n$ and $x_1 \otimes \dots \otimes x_n \in X^{\otimes n}$, that $V_\pi (x_1 \otimes \dots \otimes x_n) = x_{\pi^{-1}(1)} \otimes \dots \otimes x_{\pi^{-1}(n)}.$



The proof starts by defining $P(x) = x \otimes \dots \otimes x$ and then taking the derivative
$$ \left. \frac{\partial^{n-1}}{\partial \lambda_2 \cdots \partial \lambda_n} P(e_1 + \lambda_2 e_2 + \dots + \lambda_n e_n) \right|_{\lambda_2 = \dots = \lambda_n = 0} = \sum_{\pi \in S_n} V_\pi(e_1 \otimes \dots \otimes e_n).$$



This derivative is what we don't understand about the proof. We don't know how to actually compute it.







    Please include the specific notation that you're talking about. Questions should be self-contained - not everyone has a copy of the book handy.
    – T. Bongers
    Jul 16 at 16:27














edited Jul 16 at 16:42
























asked Jul 16 at 16:26









user353840

2 Answers
up vote
2
down vote



accepted










You can expand
$$\begin{align}
(e_1 + \lambda_2 e_2 + \dots + \lambda_n e_n)^{\otimes n} ={}& e_1^{\otimes n} \\
&+ e_1^{\otimes(n-1)} \otimes (\lambda_2 e_2 + \dots + \lambda_n e_n) \\
&+ e_1^{\otimes(n-2)} \otimes \sum_{i=2}^n \sum_{j=2}^n \lambda_i \lambda_j\, e_i \otimes e_j \\
&+ \dots \\
&+ e_1 \otimes \sum_{i_2=2}^n \cdots \sum_{i_n=2}^n \lambda_{i_2} \cdots \lambda_{i_n}\, e_{i_2} \otimes \dots \otimes e_{i_n} \\
&+ (\text{terms without } e_1)
\end{align}$$
The first line vanishes under any derivative in any $\lambda_i$; the second line vanishes under any second derivative, and so on, up to (but not including) the second-to-last line.



The last line consists of terms that have a factor of $\lambda_i^2$ for some $i$ (by pigeonhole: there are $n$ factors but only $n-1$ parameters $\lambda_2,\dots,\lambda_n$). Therefore their derivative still has $\lambda_i$ as a factor, which vanishes as $\lambda_i \to 0$.



For the second-to-last line, the only terms that do not vanish are those containing exactly one of each of $\lambda_2, \dots, \lambda_n$. These terms make up exactly a sum over the permutations of $\{2, \dots, n\}$.
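As a concrete sanity check (my own encoding, not from the book), the expansion can be done symbolically for a small $n$. The sketch below represents a basis tensor $e_{i_1} \otimes \dots \otimes e_{i_n}$ as the index tuple $(i_1, \dots, i_n)$ and picks out the terms that survive the mixed derivative:

```python
from itertools import product, permutations
from math import factorial

n = 3

# Expand (e_1 + λ2 e_2 + λ3 e_3)^{⊗3}: each tensor factor independently
# contributes some e_i, so a term is an index tuple (i1, i2, i3) carrying
# the coefficient λ_{i1} λ_{i2} λ_{i3}, with the convention λ_1 := 1.
terms = list(product(range(1, n + 1), repeat=n))

# The mixed partial ∂^{n-1}/∂λ2...∂λn evaluated at λ2 = ... = λn = 0 keeps
# exactly the terms whose coefficient is λ2 λ3 ... λn, each to the first
# power, i.e. whose index tuple is a permutation of (1, 2, ..., n).
surviving = [t for t in terms if sorted(t) == list(range(1, n + 1))]

assert len(surviving) == factorial(n)                        # |S_n| terms
assert set(surviving) == set(permutations(range(1, n + 1)))
```

The surviving terms are exactly the $\sum_{\pi \in S_n} V_\pi(e_1 \otimes \dots \otimes e_n)$ appearing in the lemma.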






edited Jul 16 at 22:49

answered Jul 16 at 18:44 by Calvin Khor
up vote
1
down vote













For $n=2$, we have only one variable, $\lambda_2 =: t$, and a function $p \colon \Bbb{R} \to X,\ t \mapsto P(e_1 + t e_2)$.



Since the norm on $X$ induces a metric (and topology), we can easily carry over the definition of the derivative to functions $\Bbb{R} \to X$:
$$f'(t_0) := \lim_{t \to t_0} \frac{f(t) - f(t_0)}{t - t_0}$$
Now we have $p(t) = P(e_1 + t e_2) = (e_1 + t e_2) \otimes (e_1 + t e_2) = (e_1 \otimes e_1) + t(e_1 \otimes e_2 + e_2 \otimes e_1) + t^2(e_2 \otimes e_2)$,

and when differentiating at $t = 0$, the first term vanishes because it is constant, and so does the last term, because we evaluate its derivative $2t(e_2 \otimes e_2)$ at $t = 0$.



The multivariate case is analogous: exactly the terms of the form $V_\pi(e_1 \otimes \dots \otimes e_n)$ do not vanish.

I suggest working out the case $n = 3$ in detail.
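The limit definition above can also be checked numerically. Identifying $X^{\otimes 2}$ with $\Bbb{R}^4$ in the basis $(e_1 \otimes e_1,\ e_1 \otimes e_2,\ e_2 \otimes e_1,\ e_2 \otimes e_2)$ (an encoding of my own, not from the answer), a minimal sketch:

```python
# p(t) = (e1 + t e2) ⊗ (e1 + t e2), written out as a vector in R^4
# in the basis (e1⊗e1, e1⊗e2, e2⊗e1, e2⊗e2):
def p(t):
    # e1⊗e1 + t(e1⊗e2 + e2⊗e1) + t^2 e2⊗e2
    return (1.0, t, t, t * t)

# Difference quotient (p(t) - p(0)) / t from the limit definition of p'(0).
def quotient(t):
    p0 = p(0.0)
    return tuple((a - b) / t for a, b in zip(p(t), p0))

# As t -> 0 the quotient converges (in any norm on R^4) to (0, 1, 1, 0),
# i.e. to e1⊗e2 + e2⊗e1: the constant term drops out and the t^2 term
# contributes only t, which vanishes in the limit.
q = quotient(1e-8)
assert max(abs(a - b) for a, b in zip(q, (0.0, 1.0, 1.0, 0.0))) < 1e-6
```

The surviving vector $e_1 \otimes e_2 + e_2 \otimes e_1$ is the $n=2$ instance of $\sum_\pi V_\pi(e_1 \otimes \dots \otimes e_n)$.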






answered Jul 16 at 18:09 by Berci
