Fourier transform derivation
I'm reading Hassani's Mathematical Methods, specifically the chapter on Integral Transforms. He derives the Fourier transform starting from the idea that its kernel has the form $e^{itx}$. Everything goes well until the bottom of page 694, where he states:
"In other words, as $n$ changes by one unit, $k_n$ changes only slightly. This suggests that the terms in the sum in Equation (29.2) can be lumped together in $j$ intervals of width $\Delta n_j$."
I understand that as $\Lambda \rightarrow \infty$ the spacing between consecutive $k_n$ tends to $0$, so the $k_n$ become almost continuous. What I don't understand is the sentence "This suggests that the terms in the sum in Equation (29.2) can be lumped together in $j$ intervals of width $\Delta n_j$." Can anyone clarify what he meant?
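For concreteness, here is a minimal numerical sketch of that shrinking spacing, assuming the wavenumbers have the form $k_n = 2\pi n/(L+\Lambda)$ that appears in the exponent of the summand in (29.2):

```python
import numpy as np

# Assumed form of the wavenumbers in Eq. (29.2): k_n = 2*pi*n / (L + Lambda).
L = 1.0
for Lam in (10.0, 100.0, 1000.0):
    n = np.arange(-3, 4)
    k_n = 2 * np.pi * n / (L + Lam)
    spacing = np.diff(k_n)[0]        # equals 2*pi/(L + Lambda) for every n
    print(f"Lambda = {Lam:8.1f}   spacing of k_n = {spacing:.6f}")
# The spacing tends to 0 as Lambda grows, so the k_n become nearly continuous.
```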
Tags: fourier-analysis, fourier-transform
asked Jul 27 at 4:12 by mathemania
I have always hated arguments like this and have never found a book that presents this claim rigorously. You can find the argument in nearly all introductory books on Fourier series and transforms, and the "justification" is always a hand-waving argument full of "we can guess that it is a good idea to take..." and "we can expect that it is a good approximation...". There is probably some value behind these crappy arguments, at least at the level of intuition, but I have always thought that this is not the way to present the result, and that there should be a way to prove a theorem that states the claim rigorously.
– Bob
Jul 29 at 10:25
To do Fourier analysis rigorously from the start, you would basically need a course in measure theory and functional analysis. But at the time when most (engineering) people learn practical Fourier methods, they are nowhere near the mathematical maturity needed to digest the theory required for that. Just try to remember what annoyed you when you took the course and revisit it 1, 2, 5, or maybe 10 years later.
– mathreadler
Jul 29 at 11:15
Brown and Churchill have a good intuitive derivation of this type. The original argument of this type was given by Fourier, and no other intuitive or direct way to obtain the Fourier transform and its inverse was given by anyone for many decades after him, so arguments of this type remain part of the folklore of the subject. They're great for thinking about how one might obtain the Fourier integral expansion as a limit of the discrete case on an interval, but I'm not aware of anyone making such arguments completely rigorous.
– DisintegratingByParts
Jul 29 at 16:02
@DisintegratingByParts Do Brown and Churchill discuss Fourier transforms in detail? From the scan I did, it seems they just leave them to the exercises. Also, it would be better if someone could decipher what Hassani stated.
– mathemania
Jul 29 at 16:50
@mathemania here you can find my attempt to make the argument sensible: math.stackexchange.com/questions/2866510/…
– Bob
Jul 30 at 0:04
3 Answers
Accepted answer
"The terms in the sum can be lumped together" means that it can be possible to sum every $Delta n_j$ consecutive terms (the resulting sum can be called lumped term) and then consider the series of such lumped terms as equivalent to the original series.
In building such a lumped term some approximation can be made of the kind here illustrated.
In the first diagram I plotted as an example the sequence
$$frac1sqrtL+Lambdaf_Lambda, ne^2ipi nx/(L+Lambda)$$
on $36$ points only, at $x=1$, with the hypotheses that $L+Lambda=1$ and $f_Lambda, nequiv 1$.
Supposing to lump together terms every $Delta n_j = 3$, the corresponding piece of sequence that results is displayed in the second diagram. You see that for instance the three consecutive red terms (all with unit modulus) sum up to a term whose modulus is a bit less than $3$.
This lumped term can be substituted for by an approximated lumped term given by $Delta n_j$ times one of the consecutive terms to be lumped as if all such consecutive terms had the same phase. In the third diagram, such an approximation is displayed, where the approximated lumped term is taken as three times the central term of the consecutive terms to be lumped.
Note that such an approximation can be affected heavely by $f_Lambda, n$ that I've been considering constant in the diagrams. If it takes values too different the lumped terms could no longer be approximated by one of the original terms multiplied by the number of consecutive terms, because such terms would have very different moduli and phases. But this is not a problem because whatever is the level of approximation you want there will always be a real value such that, whatever is $Lambda$ greater than this value, that level of approximation is attained (as you can see by $(29.3)$).
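Since the diagrams are not reproduced here, the following is a minimal numerical re-creation of the lumping experiment (my own parameters: $f_{\Lambda,n}\equiv 1$, $x=1$, $36$ terms, groups of $\Delta n_j=3$, and the period $L+\Lambda$ taken large so that consecutive phases vary slowly). It compares each exact lumped term with $\Delta n_j$ times the central term of its group:

```python
import numpy as np

def max_lumping_error(period, x=1.0, n_terms=36, group=3):
    """Largest difference between an exact lumped term (the sum of `group`
    consecutive terms exp(2*pi*i*n*x/period), with f == 1) and its
    approximation by `group` times the central term of the group."""
    n = np.arange(n_terms)
    terms = np.exp(2j * np.pi * n * x / period)
    blocks = terms.reshape(-1, group)
    exact = blocks.sum(axis=1)               # true lumped terms
    approx = group * blocks[:, group // 2]   # group * central term
    return np.max(np.abs(exact - approx))

# The larger the period (i.e. the larger Lambda), the more slowly the phase
# varies from one term to the next, and the better the lumped approximation.
for period in (36, 360, 3600):
    print(f"L + Lambda = {period:5d}   max lumping error = {max_lumping_error(period):.5f}")
```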
You certainly make a point, but isn't the idea rather this: as $n$ changes by one unit, $k_n$ changes only slightly, which in turn makes consecutive terms vary only slightly from one another; this transitions to the concept of limits, where if $\Delta n$ is infinitesimal then the change in $k_n$ is even smaller, so the sum can be replaced by an integral, with $\Delta n$ becoming $dn$. Correct me if I'm wrong.
– mathemania
Aug 6 at 5:19
The bigger $\Lambda$ is, the less consecutive terms vary with respect to each other (where by "term" is meant $f_{\Lambda, n}\, e^{2i\pi n x/(L+\Lambda)}$ in $(29.2)$, and "consecutive terms" means terms whose $n$'s lie in a small integer interval). This is because each of the two factors of these terms behaves this way on its own: $e^{2i\pi n x/(L+\Lambda)}$ acts that way, as I have shown with the diagrams, while $f_{\Lambda, n}$ acts that way because of $(29.3)$. This behaviour justifies the use of approximate lumped terms and what follows from their use.
– trying
Aug 6 at 7:51
Yes, so that means what I'm thinking is correct.
– mathemania
Aug 6 at 13:10
It should be something like:
"This suggests that the terms in the sum in Equation (29.2) can be divided into pieces, where within each piece the quantity $k_n$ is approximately constant. Denote the length of the $j$-th piece by $\Delta n_j$, and put $n=n_j$ throughout the $j$-th piece."
In other words, he is writing
$$\sum_{n} f_{\Lambda,n}\, e^{i k_n x} \;=\; \sum_{j} \sum_{n\in I_j} f_{\Lambda,n}\, e^{i k_n x} \;\approx\; \sum_{j} f_{\Lambda,n_j}\, e^{i k_{n_j} x}\, \Delta n_j,$$
where $\{I_j\}$ is a decomposition of $\mathbb{Z}$ into disjoint intervals, $\Delta n_j=|I_j|$ is the number of elements of $I_j$, and $n_j\in I_j$ is an element of $I_j$ chosen by some rule.
answered Aug 2 at 0:48 by timur
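As a concrete sanity check of the displayed approximation (my own example, not from the answer), take $k_n = 2\pi n/P$, a smooth decaying $f_{\Lambda,n}$, equal-width intervals $I_j$, and $n_j$ the midpoint of each $I_j$:

```python
import numpy as np

# Hypothetical choices for a numerical check of the approximation above:
# k_n = 2*pi*n/P, f_{Lambda,n} = exp(-(n/50)^2), equal-width intervals I_j,
# and n_j taken as the midpoint of each interval.
P, x = 200.0, 1.0
n = np.arange(-300, 301)
f = np.exp(-(n / 50.0) ** 2)
k = 2 * np.pi * n / P

exact = np.sum(f * np.exp(1j * k * x))       # the full sum over n

width = 7                                    # Delta n_j, the same for every j
lumped = 0j
for start in range(0, len(n), width):
    idx = np.arange(start, min(start + width, len(n)))
    mid = idx[len(idx) // 2]                 # the representative n_j in I_j
    lumped += f[mid] * np.exp(1j * k[mid] * x) * len(idx)

print(abs(exact - lumped) / abs(exact))      # relative error; should be small
```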
I have a slightly different book that motivates this a bit differently. The motivation for the Fourier transform can be seen from the following, taken from Richard Haberman, Applied Partial Differential Equations with Fourier Series and Boundary Value Problems.
In solving boundary value problems on a finite interval $-L < x < L$ with periodic boundary conditions, we can use the complex form of the Fourier series,
$$ \frac{f(x^+)+f(x^-)}{2} = \sum_{n=-\infty}^{\infty} c_n\, e^{-i n \pi x/L}, $$
where $f(x)$ is represented as a linear combination of all the sinusoids that are periodic with period $2L$. The Fourier coefficients are
$$ c_n = \frac{1}{2L}\int_{-L}^{L} f(x)\, e^{i n \pi x/L}\, dx. $$
The region of integration is $-L < x < L$; eventually we want to extend it to $-\infty < x < \infty$. Substituting the coefficients into the series gives
$$ \frac{f(x^+)+f(x^-)}{2} = \sum_{n=-\infty}^{\infty} \left[ \frac{1}{2L}\int_{-L}^{L} f(\bar x)\, e^{i n \pi \bar x/L}\, d\bar x \right] e^{-i n \pi x/L}. $$
For functions periodic on $-L < x < L$, the number of waves $\omega$ in a distance of $2\pi$ is
$$ \omega = \frac{n \pi}{L} = 2\pi\, \frac{n}{2L}, $$
giving the spacing between successive wave numbers,
$$ \Delta \omega = \frac{(n+1)\pi}{L} - \frac{n \pi}{L} = \frac{\pi}{L}, $$
so that
$$ \frac{f(x^+)+f(x^-)}{2} = \sum_{n=-\infty}^{\infty} \left[ \frac{\Delta \omega}{2\pi}\int_{-L}^{L} f(\bar x)\, e^{i \omega \bar x}\, d\bar x \right] e^{-i \omega x}. $$
The Fourier transform arises as $L \to \infty$. The values of $\omega$ are the square roots of the eigenvalues, so they get closer and closer together, $\Delta \omega \to 0$, and the sum over $n$ becomes a Riemann sum in $\omega$:
$$ \frac{f(x^+)+f(x^-)}{2} = \frac{1}{2\pi}\int_{-\infty}^{\infty} \left[ \int_{-\infty}^{\infty} f(\bar x)\, e^{i \omega \bar x}\, d\bar x \right] e^{-i \omega x}\, d\omega. $$
The Fourier transform is then
$$ F(\omega) = \frac{1}{2\pi}\int_{-\infty}^{\infty} f(\bar x)\, e^{i \omega \bar x}\, d\bar x. $$
Notice that the interval $-L < x < L$ has been extended to infinity. This is what commonly happens when you have a Riemann sum and let the discrete intervals become infinitely small.
answered Aug 2 at 1:32 (edited Aug 2 at 1:39) by RHowe
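Here is a small numerical check of the limit just described (my own sketch, using $f(x)=e^{-x^2}$, whose transform in the convention above is $F(\omega)=\sqrt{\pi}\,e^{-\omega^2/4}/(2\pi)$, the standard Gaussian integral): on a large but finite interval the Fourier coefficients satisfy $c_n \approx \Delta\omega\, F(\omega_n)$, which is exactly the Riemann-sum relationship that turns the series into the integral.

```python
import numpy as np

# Check that c_n ~ (pi/L) * F(w_n) for f(x) = exp(-x^2); in the convention above
# its transform is F(w) = sqrt(pi) * exp(-w^2/4) / (2*pi) (a standard Gaussian
# integral, stated here as an assumption of this sketch).
L = 20.0
x = np.linspace(-L, L, 40001)
dx = x[1] - x[0]
f = np.exp(-x ** 2)

for n in (0, 5, 10, 20):
    w = n * np.pi / L
    c_n = np.sum(f * np.exp(1j * w * x)) * dx / (2 * L)         # coefficient on [-L, L]
    F_w = np.sqrt(np.pi) * np.exp(-w ** 2 / 4) / (2 * np.pi)    # exact transform
    print(f"n = {n:2d}   c_n = {c_n.real:.6f}   dw * F(w_n) = {np.pi / L * F_w:.6f}")
```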