Derivatives in a Hilbert space.
We need help with the proof of Lemma IX.11.4 on pages 249-250 of the book "Representations of Finite and Compact Groups" by Barry Simon.
The problem is mostly one of notation: we do not understand what is meant by the derivative that is taken there.
In particular, the lemma says the following:
Let $X$ be a Hilbert space. Let $S_n$ (the symmetric group) act on $X^{\otimes n}$ in the natural way. Let $S^n(X)$ be the set of vectors invariant under all $V_\pi$. Then $S^n(X)$ is the smallest space containing $\{\, x \otimes \dots \otimes x \mid x \in X \,\}$.
Here, the 'natural way' means, for $\pi \in S_n$ and $x_1 \otimes \dots \otimes x_n \in X^{\otimes n}$, that $V_\pi(x_1 \otimes \dots \otimes x_n) = x_{\pi^{-1}(1)} \otimes \dots \otimes x_{\pi^{-1}(n)}$.
The proof starts by defining $P(x) = x \otimes \dots \otimes x$ ($n$ factors) and then taking the derivative
$$ \left. \frac{\partial^{\,n-1}}{\partial \lambda_2 \cdots \partial \lambda_n} P(e_1 + \lambda_2 e_2 + \dots + \lambda_n e_n) \right|_{\lambda_2 = \dots = \lambda_n = 0} = \sum_{\pi \in S_n} V_\pi(e_1 \otimes \dots \otimes e_n).$$
This derivative is what we don't understand about the proof. We don't know how to actually compute it.
representation-theory hilbert-spaces
Please include the specific notation that you're talking about. Questions should be self-contained - not everyone has a copy of the book handy. – T. Bongers, Jul 16 at 16:27
edited Jul 16 at 16:42 · asked Jul 16 at 16:26 · user353840
2 Answers
accepted
You can expand
\begin{align}
(e_1 + \lambda_2 e_2 + \dots + \lambda_n e_n)^{\otimes n} &= e_1^{\otimes n}\\
&+ e_1^{\otimes(n-1)} \otimes (\lambda_2 e_2 + \dots + \lambda_n e_n)\\
&+ e_1^{\otimes(n-2)} \otimes \sum_{i=2}^{n} \sum_{j=2}^{n} \lambda_i \lambda_j\, e_i \otimes e_j\\
&+ \dots\\
&+ e_1 \otimes \sum_{i_2=2}^{n} \cdots \sum_{i_n=2}^{n} \lambda_{i_2} \cdots \lambda_{i_n}\, e_{i_2} \otimes \dots \otimes e_{i_n}\\
&+ (\text{terms without } e_1).
\end{align}
The first line vanishes under any derivative in any $\lambda_i$; the second line vanishes under any second derivative, and so on, up to the second-to-last line.
The last line consists of terms that have a factor of $\lambda_i^2$ for some $i$; therefore their derivative still has $\lambda_i$ as a factor, which vanishes as $\lambda_i \to 0$.
In the second-to-last line, the only terms that do not vanish are those containing exactly one of each of $\lambda_2, \dots, \lambda_n$. These terms make up precisely a sum over the permutations of $\{2, \dots, n\}$.
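Not part of the original answer, but the identity can be checked numerically for $n=3$. The sketch below (assuming NumPy is available) extracts the mixed partial $\partial^2/\partial\lambda_2\partial\lambda_3$ at $0$ exactly via a sign trick: since the expansion has total degree $3$, the only monomial odd in both $\lambda_2$ and $\lambda_3$ is $\lambda_2\lambda_3$, so averaging with signs isolates its coefficient. The result is compared with the sum over $S_3$.

```python
import itertools
from functools import reduce
import numpy as np

n = 3
e = np.eye(n)  # rows e[0], e[1], e[2] play the roles of e_1, e_2, e_3

def P(x):
    """P(x) = x (x) x (x) x as an n x n x n array (threefold tensor power)."""
    return reduce(np.multiply.outer, [x] * n)

# Mixed partial d^2/(dl2 dl3) of P(e1 + l2*e2 + l3*e3) at l2 = l3 = 0:
# averaging with signs s2*s3 over s2, s3 in {+1, -1} kills every monomial
# except l2*l3, the only one of total degree <= 3 that is odd in both.
lhs = sum(s2 * s3 * P(e[0] + s2 * e[1] + s3 * e[2])
          for s2 in (1, -1) for s3 in (1, -1)) / 4

# Sum over all pi in S_3 of V_pi(e1 (x) e2 (x) e3): every arrangement of
# the three basis vectors over the three tensor slots appears exactly once.
rhs = sum(reduce(np.multiply.outer, [e[i] for i in perm])
          for perm in itertools.permutations(range(n)))

assert np.array_equal(lhs, rhs)  # the two sides agree entry by entry
```

The six surviving terms are exactly the six permutation tensors, matching the sum over $S_3$ in the lemma.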
For $n=2$, we have only one variable, $\lambda_2 =: t$, and a function $p \colon \Bbb R \to X,\ t \mapsto P(e_1 + t e_2)$.
Since the norm on $X$ induces a metric (and a topology), the usual definition of the derivative carries over directly to functions $\Bbb R \to X$:
$$f'(t_0) := \lim_{t \to t_0} \frac{f(t) - f(t_0)}{t - t_0}.$$
Now we have $p(t) = P(e_1 + t e_2) = (e_1 + t e_2) \otimes (e_1 + t e_2) = (e_1 \otimes e_1) + t\,(e_1 \otimes e_2 + e_2 \otimes e_1) + t^2\,(e_2 \otimes e_2)$,
and when differentiating at $t = 0$, the first term vanishes because it is constant, and so does the last term, because we evaluate $2t\,(e_2 \otimes e_2)$ at $t = 0$.
The multivariate case is analogous: exactly the terms of the form $V_\pi(e_1 \otimes \dots \otimes e_n)$ survive.
I suggest working out the case $n=3$ in detail.
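As a sanity check (mine, not part of the original answer; assuming NumPy), the $n=2$ derivative can be computed with a symmetric difference quotient, which is exact here for any step size because $p(t)$ is quadratic and the $t^2$ term cancels:

```python
import numpy as np

e1, e2 = np.eye(2)  # standard basis of R^2

def P(x):
    """P(x) = x (x) x for the n = 2 case, as a 2 x 2 array."""
    return np.multiply.outer(x, x)

# p'(0) via the symmetric difference quotient (p(h) - p(-h)) / (2h);
# exact for quadratic p, since the constant and t^2 terms both cancel.
h = 0.5
deriv = (P(e1 + h * e2) - P(e1 - h * e2)) / (2 * h)

# Expected value from the expansion: e1 (x) e2 + e2 (x) e1.
expected = np.multiply.outer(e1, e2) + np.multiply.outer(e2, e1)

assert np.allclose(deriv, expected)
```

Changing `h` does not change the result, which illustrates why the limit in the definition is unproblematic here.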
answered Jul 16 at 18:44, edited Jul 16 at 22:49 · Calvin Khor
answered Jul 16 at 18:09 · Berci