Integral transforms
The input is a function f(x) and the output is another function Tf(s), obtained by integrating f(x) against a kernel, $Tf(s) = \int K(x,s)f(x)\,dx$. There are different integral transforms, depending on the kernel function K(x, s). The transforms we consider in this chapter are the Laplace transform and the Fourier transform.
Here s can also be a complex variable; that is, the Laplace transform maps a real function to a complex one. For our purposes it is enough, for the moment, to consider s real. We can easily verify that $\mathcal{L}$ is a linear operator. In fact:
\[ \mathcal{L}\{af + bg\} = \int_0^\infty [af(x) + bg(x)]\,e^{-sx}\,dx = a\int_0^\infty f(x)\,e^{-sx}\,dx + b\int_0^\infty g(x)\,e^{-sx}\,dx = a\,\mathcal{L}\{f\} + b\,\mathcal{L}\{g\}. \]
124 CHAPTER 4. INTEGRAL TRANSFORMS
It is
\[ \mathcal{L}\{1\} = \int_0^\infty e^{-sx}\,dx = \left[-\frac{1}{s}e^{-sx}\right]_0^\infty = \frac{1}{s}, \]
provided that s > 0 (this ensures that $\lim_{x\to\infty} e^{-sx} = 0$, namely that the integral $\int_0^\infty e^{-sx}\,dx$ converges).
Example 4.1.2 Find the Laplace transform of f(x) = xⁿ, with n a positive integer.
\[ \mathcal{L}\{x^n\} = \int_0^\infty x^n e^{-sx}\,dx = \left[-\frac{1}{s}x^n e^{-sx}\right]_0^\infty + \frac{n}{s}\int_0^\infty x^{n-1}e^{-sx}\,dx = \frac{n}{s}\,\mathcal{L}\{x^{n-1}\}. \]
We have assumed also in this case that s > 0 (otherwise the integral $\int_0^\infty x^n e^{-sx}\,dx$ does not converge). To obtain $\mathcal{L}\{x^{n-1}\}$ we proceed in the same way and obtain $\mathcal{L}\{x^{n-1}\} = \frac{n-1}{s}\mathcal{L}\{x^{n-2}\}$. We iterate n times and obtain:
\[ \mathcal{L}\{x^n\} = \frac{n(n-1)(n-2)\cdots 1}{s^n}\,\mathcal{L}\{1\} = \frac{n!}{s^{n+1}}. \]
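These first two results are easy to check symbolically; a quick sketch using sympy (assuming sympy is available; n = 4 is an arbitrary sample exponent, not from the text):

```python
# Symbolic check of L{1} = 1/s and L{x^n} = n!/s^(n+1) with sympy.
import sympy as sp

x, s = sp.symbols("x s", positive=True)

F1 = sp.laplace_transform(sp.Integer(1), x, s, noconds=True)
assert sp.simplify(F1 - 1/s) == 0                            # L{1} = 1/s

n = 4                                                        # arbitrary sample positive integer
Fn = sp.laplace_transform(x**n, x, s, noconds=True)
assert sp.simplify(Fn - sp.factorial(n)/s**(n + 1)) == 0     # L{x^4} = 24/s^5
```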
It is $\mathcal{L}\{f(x)\} = \int_0^\infty e^{-sx}\sin(mx)\,dx$. By using the relation $\sin(mx) = \frac{e^{imx}-e^{-imx}}{2i}$ we obtain:
\[ \mathcal{L}\{f(x)\} = \frac{1}{2i}\left[\int_0^\infty e^{(im-s)x}\,dx - \int_0^\infty e^{-(im+s)x}\,dx\right] = \frac{1}{2i}\left[\frac{e^{(im-s)x}}{im-s}\bigg|_0^\infty - \frac{e^{-(im+s)x}}{-im-s}\bigg|_0^\infty\right] = \frac{1}{2i}\left[\frac{1}{s-im} - \frac{1}{s+im}\right] = \frac{m}{s^2+m^2}, \]
for s > 0. In fact, the terms $e^{(im-s)x}$ and $e^{-(im+s)x}$ can be written as $e^{-sx}[\cos(mx) \pm i\sin(mx)]$. In the limit for x → ∞, only the term $e^{-sx}$ matters (the term $[\cos(mx) \pm i\sin(mx)]$ merely oscillates) and it tends to zero for any s > 0.
In these three simple cases we have seen that the integral in Eq. 4.2 converges for any possible value of s > 0. This is not always the case, as the following two examples show.
4.1. LAPLACE TRANSFORM 125
It is clear that this limit exists and is finite only if a < s (a < Re(s) if s ∈ ℂ), namely we can define the Laplace transform of the function f(x) = e^{ax} only if Re(s) > a. In this case it is:
\[ \mathcal{L}\{e^{ax}\} = \frac{1}{s-a}. \]
Example 4.1.5 Find the Laplace transform of the function f(x) = cosh(mx).
It is $\mathcal{L}\{f(x)\} = \int_0^\infty e^{-sx}\cosh(mx)\,dx$. By using the relation $\cosh(mx) = \frac{e^{mx}+e^{-mx}}{2}$ we obtain:
\[ \mathcal{L}\{f(x)\} = \frac{1}{2}\left[\int_0^\infty e^{(m-s)x}\,dx + \int_0^\infty e^{-(m+s)x}\,dx\right] = \frac{1}{2}\left[\frac{e^{(m-s)x}}{m-s}\bigg|_0^\infty - \frac{e^{-(m+s)x}}{m+s}\bigg|_0^\infty\right] = \frac{1}{2}\left[\frac{1}{s-m} + \frac{1}{s+m}\right] = \frac{s}{s^2-m^2}. \]
This result holds as long as $e^{(m-s)x}$ and $e^{-(m+s)x}$ tend to zero for x → ∞, namely it must be s > |m|.
There are a few properties of the Laplace transform that help us find the transform of more complex functions. If we know that F(s) is the Laplace transform of f(x), namely that $\mathcal{L}\{f(x)\} = F(s)$, then:
•
\[ \mathcal{L}\{e^{cx}f(x)\} = F(s-c) \quad (4.4) \]
This property comes directly from the definition of the Laplace transform; in fact:
\[ \mathcal{L}\{e^{cx}f(x)\} = \int_0^\infty f(x)\,e^{cx}e^{-sx}\,dx = \int_0^\infty f(x)\,e^{-(s-c)x}\,dx = F(s-c). \]
•
\[ \mathcal{L}\{f(cx)\} = \frac{1}{c}F\!\left(\frac{s}{c}\right), \quad (c > 0) \quad (4.5) \]
To show this it is enough to substitute cx with t. In this way $x = \frac{t}{c}$, $dx = \frac{dt}{c}$ and therefore:
\[ \mathcal{L}\{f(cx)\} = \int_0^\infty e^{-sx}f(cx)\,dx = \frac{1}{c}\int_0^\infty e^{-\frac{s}{c}t}f(t)\,dt = \frac{1}{c}F\!\left(\frac{s}{c}\right). \]
•
\[ \mathcal{L}\{u_c(x)f(x-c)\} = e^{-sc}F(s) \quad (4.6) \]
We have thus:
\[ \mathcal{L}\{u_c(x)f(x-c)\} = \int_c^\infty e^{-sx}f(x-c)\,dx. \]
With the substitution t = x − c we obtain:
\[ \mathcal{L}\{u_c(x)f(x-c)\} = \int_0^\infty e^{-s(c+t)}f(t)\,dt = e^{-sc}F(s). \]
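Eq. 4.6 can be sanity-checked numerically; a sketch with scipy, taking f(x) = sin x, c = 1.5 and the sample value s = 2 (all three choices are arbitrary illustrations, not from the text):

```python
# Numerical check of L{u_c(x) f(x-c)} = e^{-sc} F(s) for f(x) = sin(x).
import numpy as np
from scipy.integrate import quad

c, s = 1.5, 2.0
F = 1.0 / (s**2 + 1.0)                 # F(s) = L{sin x} = 1/(s^2 + 1)

# The integral starts at c because u_c(x) vanishes for x < c.
lhs, _ = quad(lambda x: np.exp(-s * x) * np.sin(x - c), c, np.inf)
rhs = np.exp(-s * c) * F

assert abs(lhs - rhs) < 1e-7
```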
•
\[ \mathcal{L}\{x^n f(x)\} = (-1)^n F^{(n)}(s) \quad (4.8) \]
In fact, differentiating F(s) once:
\[ F'(s) = \frac{d}{ds}\int_0^\infty e^{-sx}f(x)\,dx = -\int_0^\infty x\,e^{-sx}f(x)\,dx = -\mathcal{L}\{xf(x)\}. \]
If we now differentiate F(s) n times with respect to s we obtain Eq. 4.8.
•
\[ \mathcal{L}\{f'(x)\} = -f(0) + sF(s) \quad (4.9) \]
In fact, integrating by parts:
\[ \mathcal{L}\{f'(x)\} = \int_0^\infty e^{-sx}f'(x)\,dx = \left[f(x)\,e^{-sx}\right]_0^\infty + s\int_0^\infty e^{-sx}f(x)\,dx = -f(0) + sF(s), \]
provided that $\lim_{x\to\infty} f(x)\,e^{-sx} = 0$.
We could calculate this transform directly, but it is easier to use the Laplace transform of sin(mx) that we have calculated in Example 4.1.3 ($\mathcal{L}\{\sin(mx)\} = \frac{m}{s^2+m^2}$). From Eq. 4.9:
\[ \mathcal{L}\left\{\frac{d}{dx}\sin(mx)\right\} = m\,\mathcal{L}\{\cos(mx)\} = -\sin(0) + s\cdot\frac{m}{s^2+m^2} \]
\[ \Rightarrow\ \mathcal{L}\{\cos(mx)\} = \frac{s}{s^2+m^2}. \]
s
We remind from Example 4.1.5 that L{cosh(mx)} = F (s) = s2 −m 2 (s > |m|). Eq.
4.8 tells us that F (s) is the Laplace transform of −x cosh(mx). We have therefore:
′
s2 − m2 − 2s2 s2 + m2
L{x cosh(mx)} = −F ′ (s) = − = .
(s2 − m2 )2 (s2 − m2 )2
Example 4.1.8 Find the Laplace transform of the function f(x) defined in this way:
\[ f(x) = \begin{cases} x & x < \pi \\ x - \cos(x-\pi) & x \ge \pi \end{cases} \]
By means of the step function (Eq. 4.7) we can rewrite f(x) as f(x) = x − u_π(x) cos(x−π). The Laplace transform of this function can be found by means of Eq. 4.6 and of the known results $\mathcal{L}\{x^n\} = \frac{n!}{s^{n+1}}$ (Example 4.1.2) and $\mathcal{L}\{\cos(mx)\} = \frac{s}{s^2+m^2}$ (Example 4.1.6):
\[ \mathcal{L}\{f(x)\} = \mathcal{L}\{x\} - \mathcal{L}\{u_\pi(x)\cos(x-\pi)\} = \frac{1}{s^2} - e^{-\pi s}\,\mathcal{L}\{\cos x\} = \frac{1}{s^2} - \frac{s\,e^{-\pi s}}{s^2+1}. \]
\[ \mathcal{L}\{f''(x)\} = \int_0^\infty e^{-sx}f''(x)\,dx = \left[e^{-sx}f'(x)\right]_0^\infty + s\int_0^\infty e^{-sx}f'(x)\,dx = -f'(0) - sf(0) + s^2 F(s) \]
\[ \mathcal{L}\{f'''(x)\} = \int_0^\infty e^{-sx}f'''(x)\,dx = \left[e^{-sx}f''(x)\right]_0^\infty + s\int_0^\infty e^{-sx}f''(x)\,dx = -f''(0) - sf'(0) - s^2 f(0) + s^3 F(s) \]
\[ \vdots \]
\[ \mathcal{L}\{f^{(n)}(x)\} = s^n F(s) - s^{n-1}f(0) - s^{n-2}f'(0) - \dots - s f^{(n-2)}(0) - f^{(n-1)}(0). \quad (4.10) \]
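Eq. 4.10 is easy to check symbolically for a specific n; a sketch for n = 3 with the arbitrarily chosen function f(x) = e^{−x} sin(2x):

```python
# Symbolic check of Eq. 4.10 for n = 3 on a sample function.
import sympy as sp

x, s = sp.symbols("x s", positive=True)
f = sp.exp(-x) * sp.sin(2 * x)        # arbitrary sample function

F = sp.laplace_transform(f, x, s, noconds=True)
lhs = sp.laplace_transform(sp.diff(f, x, 3), x, s, noconds=True)
rhs = (s**3 * F - s**2 * f.subs(x, 0)
       - s * sp.diff(f, x).subs(x, 0) - sp.diff(f, x, 2).subs(x, 0))

assert sp.simplify(lhs - rhs) == 0
```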
If we now take the Laplace transform of both sides of this equation (calling Y(s) the Laplace transform of y(x) and F(s) the Laplace transform of f(x)), we obtain an algebraic equation for Y(s), Eq. 4.11. One of the disadvantages is that Eq. 4.11 is not yet the solution of the given ODE; we must invert this relation and find the function y(x) whose Laplace transform is given by Y(s). This function is called the inverse Laplace transform of Y(s) and it is indicated with $\mathcal{L}^{-1}\{Y(s)\}$.
Since the operator $\mathcal{L}$ is linear, it is easy to show that the inverse operator $\mathcal{L}^{-1}$ is also linear. In fact, given two functions f₁(x) and f₂(x) whose Laplace transforms are F₁(s) and F₂(s), respectively, the linearity of the operator $\mathcal{L}$ ensures that:
\[ \mathcal{L}\{c_1 f_1(x) + c_2 f_2(x)\} = c_1 F_1(s) + c_2 F_2(s). \]
If we now apply the operator $\mathcal{L}^{-1}$ to both sides of this equation we obtain:
\[ c_1 f_1(x) + c_2 f_2(x) = \mathcal{L}^{-1}\{c_1 F_1(s) + c_2 F_2(s)\} = c_1\,\mathcal{L}^{-1}\{F_1(s)\} + c_2\,\mathcal{L}^{-1}\{F_2(s)\}. \]
To invert the function F (s) it is therefore enough to split it into many (possibly
simple) addends and find for each of them the inverse Laplace transform. Based
on the examples in Sect. 4.1.1 (and others that we do not have time to calculate,
but that can be found in the mathematical literature) it is possible to construct a
“dictionary” of basic functions/expressions and corresponding Laplace transforms,
as in Table 4.1. Any time we face a particular F (s), we can look at the dictionary and
check whether it is possible to recover the function f (x) whose Laplace transform is
F (s).
Since the Laplace transform of the solution y(x) is always in the form of a fraction (see Eq. 4.11), the method we will always use to split a function F(s) into simple factors is the method of partial fractions. It is worth recalling it briefly.
We assume that $F(s) = \frac{P_n(s)}{Q_m(s)}$ is the quotient of two polynomials P_n(s) and Q_m(s), with degrees n and m respectively. We will also assume m > n. It is always possible to factorize the polynomial Q_m(s) at the denominator into factors of the type as + b or factors of the type cs² + ds + e. Sometimes, when we factorize Q_m(s), we obtain factors of the type (as + b)^k (meaning that s = −b/a is a root with multiplicity k of the polynomial Q_m(s)) or of the type (cs² + ds + e)^k. The method of partial fractions consists in writing the fraction P_n(s)/Q_m(s) as a sum of simpler fractions of the type $\frac{A}{(as+b)^k}$ or $\frac{As+B}{(cs^2+ds+e)^k}$. The partial fractions we seek depend on the factors of Q_m(s).
Table 4.1 (excerpt): functions f(x) and their Laplace transforms F(s).

    f(x)                                    F(s)
    $e^{cx} f(x)$                           $F(s-c)$
    $f(cx)$, $c > 0$                        $\frac{1}{c} F\!\left(\frac{s}{c}\right)$
    $\int_0^x f(\tilde{x})\,d\tilde{x}$     $\frac{F(s)}{s}$
    $\int_0^x f(x-\xi)\,g(\xi)\,d\xi$       $F(s)\,G(s)$
    $(-1)^n x^n f(x)$                       $F^{(n)}(s)$
    $f^{(n)}(x)$                            $s^n F(s) - s^{n-1}f(0) - \dots - f^{(n-1)}(0)$
Example 4.1.9 Use the method of partial fractions to split the function
\[ F(s) = \frac{s^3+s^2+1}{s^2(s^2+s+1)}. \]
We write
\[ \frac{s^3+s^2+1}{s^2(s^2+s+1)} = \frac{A}{s} + \frac{B}{s^2} + \frac{Cs+D}{s^2+s+1} = \frac{(A+C)s^3 + (A+B+D)s^2 + (A+B)s + B}{s^2(s^2+s+1)}. \]
We must now compare the coefficients with like powers of s, obtaining the system of equations:
\[ \begin{cases} A + C = 1 \\ A + B + D = 1 \\ A + B = 0 \\ B = 1 \end{cases} \quad\Rightarrow\quad \begin{cases} A = -1 \\ B = 1 \\ C = 2 \\ D = 1 \end{cases} \]
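sympy's `apart` performs exactly this decomposition; a quick sketch confirming A = −1, B = 1, C = 2, D = 1 (assuming sympy is available):

```python
# Partial-fraction decomposition of Example 4.1.9 with sympy.
import sympy as sp

s = sp.symbols("s")
F = (s**3 + s**2 + 1) / (s**2 * (s**2 + s + 1))

expected = -1/s + 1/s**2 + (2*s + 1)/(s**2 + s + 1)
assert sp.simplify(sp.apart(F) - expected) == 0
```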
Consider now the function
\[ F(s) = \frac{s^2+5}{s^3-9s} = \frac{s^2+5}{s(s^2-9)} = \frac{s^2+5}{s(s-3)(s+3)}. \]
To invert this function we have to apply the method of partial fractions, namely:
\[ \frac{s^2+5}{s(s-3)(s+3)} = \frac{A}{s} + \frac{B}{s-3} + \frac{C}{s+3} \ \Rightarrow\ A(s^2-9) + Bs(s+3) + Cs(s-3) = s^2+5. \]
Now we can compare terms with like powers of s, obtaining the following system of equations:
\[ \begin{cases} A + B + C = 1 \\ 3B - 3C = 0 \\ -9A = 5 \end{cases} \]
From the second equation we obtain B = C, from the last A = −5/9. From the first equation:
\[ 2B = \frac{14}{9} \ \Rightarrow\ B = C = \frac{7}{9}. \]
Now we can invert all the terms of the given function and obtain:
\[ f(x) = \mathcal{L}^{-1}\left\{\frac{s^2+5}{s^3-9s}\right\} = \mathcal{L}^{-1}\left\{-\frac{5}{9}\frac{1}{s} + \frac{7}{9}\left(\frac{1}{s-3} + \frac{1}{s+3}\right)\right\} = -\frac{5}{9} + \frac{14}{9}\,\mathcal{L}^{-1}\left\{\frac{s}{s^2-9}\right\} = -\frac{5}{9} + \frac{14}{9}\cosh(3x). \]
We have to apply the operator $\mathcal{L}$ to both sides of the given ODE. Since this is a second-order ODE with constant coefficients, we can apply directly Eq. 4.11 to obtain:
\[ Y(s) = \frac{\frac{1}{s-1} - 1}{s^2+4} = \frac{2-s}{(s-1)(s^2+4)}. \]
We apply now the method of partial fractions to decompose this function:
\[ \frac{A}{s-1} + \frac{Bs+C}{s^2+4} = \frac{2-s}{(s-1)(s^2+4)} \ \Rightarrow\ As^2 + 4A + Bs^2 + Cs - Bs - C = 2 - s. \]
By equating the terms with like powers of s we obtain the system of equations:
\[ \begin{cases} A + B = 0 \\ C - B = -1 \\ 4A - C = 2 \end{cases} \ \Rightarrow\ \begin{cases} A = -B \\ C = B - 1 \\ -4B - B = 1 \end{cases} \ \Rightarrow\ \begin{cases} B = -\frac{1}{5} \\ A = \frac{1}{5} \\ C = -\frac{6}{5} \end{cases} \]
\[ Y(s) = \frac{1}{5}\frac{1}{s-1} - \frac{1}{5}\frac{s}{s^2+4} - \frac{6}{5}\frac{1}{s^2+4}. \]
With the help of Table 4.1 we can easily identify the inverse Laplace transforms of these addends, obtaining therefore:
\[ y(x) = \frac{e^x}{5} - \frac{\cos(2x)}{5} - \frac{3\sin(2x)}{5}. \]
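Again we can cross-check symbolically: the transform of this y(x) reproduces Y(s), and y(x) satisfies the initial-value problem y″ + 4y = eˣ, y(0) = 0, y′(0) = −1 (an IVP inferred from the form of Y(s) above; the original problem statement is not reproduced here):

```python
# Cross-check of the solution y(x) = e^x/5 - cos(2x)/5 - 3 sin(2x)/5.
import sympy as sp

x, s = sp.symbols("x s", positive=True)
y = sp.exp(x)/5 - sp.cos(2*x)/5 - 3*sp.sin(2*x)/5

# Its Laplace transform equals Y(s) = (2 - s)/((s - 1)(s^2 + 4)).
Y = sp.laplace_transform(y, x, s, noconds=True)
assert sp.simplify(Y - (2 - s) / ((s - 1) * (s**2 + 4))) == 0

# It satisfies y'' + 4y = e^x with y(0) = 0, y'(0) = -1 (inferred IVP).
assert sp.simplify(sp.diff(y, x, 2) + 4*y - sp.exp(x)) == 0
assert y.subs(x, 0) == 0 and sp.diff(y, x).subs(x, 0) == -1
```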
The method of the Laplace transform is sometimes more convenient, sometimes less convenient than the traditional methods for solving ODEs. It proves, however, always more convenient in the case in which the inhomogeneous term is a step function. In fact, in this case the only available traditional method is the laborious variation of constants, whereas the Laplace transform of the step function can be readily found.
where u_c(x) is the Heaviside function (Eq. 4.7). In fact, for x < 1 both u₁ and u₂ are zero. For x between 1 and 2, u₁ = 1 but u₂ is still zero. For x ≥ 2 both functions are 1 and therefore u₁(x)(x−1) − u₂(x)(x−2) = x − 1 − x + 2 = 1. If we take the Laplace transform of both sides of the given ODE we obtain:
\[ s^2 Y(s) + Y(s) = \frac{e^{-s}}{s^2} - \frac{e^{-2s}}{s^2} \]
\[ \Rightarrow\ Y(s) = \left(e^{-s} - e^{-2s}\right)\frac{1}{s^2(s^2+1)} = \left(e^{-s} - e^{-2s}\right)\frac{1+s^2-s^2}{s^2(s^2+1)} = \frac{e^{-s} - e^{-2s}}{s^2} - \frac{e^{-s} - e^{-2s}}{s^2+1}. \]
To invert this function Y(s) we use again the relation $\mathcal{L}\{u_c(x)f(x-c)\} = e^{-cs}F(s)$ (and therefore $\mathcal{L}^{-1}\{e^{-cs}F(s)\} = u_c(x)f(x-c)$) to obtain:
\[ y(x) = u_1(x)\left[(x-1) - \sin(x-1)\right] - u_2(x)\left[(x-2) - \sin(x-2)\right]. \]
Among the results presented in Table 4.1, very significant is the one concerning the Dirac delta function δ(x−c). We briefly recall here what the Dirac delta function is and what its properties are. Given a function g(x) defined in the following way:
\[ g(x) = d_\xi(x) = \begin{cases} \frac{1}{2\xi} & -\xi < x < \xi \\ 0 & x \le -\xi \ \text{or}\ x \ge \xi \end{cases} \quad (4.12) \]
it is clear that the integral of this function is 1 for any possible choice of ξ; in fact:
\[ \int_{-\infty}^{\infty} g(x)\,dx = \int_{-\xi}^{\xi} \frac{1}{2\xi}\,dx = 1. \]
It is also clear that if ξ tends to zero, the interval of values of x in which g(x) is different from zero becomes narrower and narrower. Analogously, the function g(x−c) = d_ξ(x−c) is non-null only in a narrow interval of x centered on c, which shrinks as ξ tends to zero. The limit of the function g(x) = d_ξ(x) for ξ → 0 is called the Dirac delta function and is indicated with δ(x). It is therefore characterized by the properties:
\[ \delta(x-c) = 0 \quad \forall\, x \ne c \quad (4.13) \]
\[ \int_{-\infty}^{\infty} \delta(x)\,dx = 1. \quad (4.14) \]
Consider now the integral of the product of a generic function f(x) with δ(x−c):
\[ \int_{-\infty}^{\infty} f(x)\,\delta(x-c)\,dx = \lim_{\xi\to 0} \frac{1}{2\xi}\int_{c-\xi}^{c+\xi} f(x)\,dx = \lim_{\xi\to 0} \frac{1}{2\xi}\left[2\xi f(\tilde x)\right], \quad \tilde x \in [c-\xi, c+\xi]. \]
The last step is justified by the mean value theorem for integrals. But the interval
of values in which x̃ must be taken collapses to the point c for ξ → 0, therefore we
obtain the important property of the Dirac delta function:
\[ \int_{-\infty}^{\infty} f(x)\,\delta(x-c)\,dx = f(c). \quad (4.15) \]
We can now calculate the Laplace transform of δ(x−c):
\[ \mathcal{L}\{\delta(x-c)\} = \lim_{\xi\to 0}\int_0^\infty e^{-sx}\,d_\xi(x-c)\,dx = \lim_{\xi\to 0}\int_{c-\xi}^{c+\xi} \frac{e^{-sx}}{2\xi}\,dx \]
\[ = \lim_{\xi\to 0}\frac{e^{-s(c+\xi)} - e^{-s(c-\xi)}}{-2s\xi} = e^{-sc}\lim_{\xi\to 0}\frac{e^{s\xi} - e^{-s\xi}}{2s\xi} = e^{-sc}\lim_{\xi\to 0}\frac{s\left(e^{s\xi} + e^{-s\xi}\right)}{2s} = e^{-sc}. \]
The last step is justified by l'Hôpital's rule for limits. In this way we have found the result reported in Table 4.1 for the Laplace transform of δ(x−c). In the case c = 0 we have $\mathcal{L}\{\delta(x)\} = 1$.
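The limit defining L{δ(x−c)} can be watched numerically; a sketch with scipy at the arbitrary sample values c = 1, s = 0.7:

```python
# L{d_xi(x - c)} -> e^{-sc} as xi -> 0.
import numpy as np
from scipy.integrate import quad

c, s = 1.0, 0.7  # arbitrary sample values

for xi in (0.1, 0.01, 0.001):
    # Transform of the box function d_xi(x - c): nonzero only on (c-xi, c+xi).
    val, _ = quad(lambda x: np.exp(-s * x) / (2 * xi), c - xi, c + xi)

# After the smallest xi, the value is very close to e^{-sc}.
assert abs(val - np.exp(-s * c)) < 1e-6
```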
Figure 4.1: The infinite line L along which the Bromwich integral must be performed.
the inversion of F(s) can be found by treating F(s) as a complex function and is given by the so-called Bromwich integral:
\[ f(x) = \mathcal{L}^{-1}\{F(s)\} = \frac{1}{2\pi i}\int_{\lambda-i\infty}^{\lambda+i\infty} e^{sx}F(s)\,ds, \quad (4.16) \]
where λ is a real positive number larger than the real parts of all the singularities of $e^{sx}F(s)$. Since F(s) has been defined as the integral of $e^{-sx}f(x)$ between x = 0 and x = ∞, we will consider in this formula only positive values of x as well.
In practice, the integral must be performed along the infinite line L, parallel to the
imaginary axis, indicated in Fig. 4.1. At this point, a curve must be chosen in order
to close the contour C. Possible completion paths are for instance the curves Γ1 or
Γ2 indicated in Fig. 4.2, namely the half-circles with radius R on the left and on
the right of L, respectively. For R → ∞ these curves make with L a closed contour.
The Bromwich integral can be evaluated by means of the residue theorem provided
that the integral of the function esx F (s) tends to zero for R (radius of the chosen
half-circle) tending to infinity. If we choose the completion path Γ1 , then the residue
theorem ensures us that:
\[ f(x) = \frac{1}{2\pi i}\cdot 2\pi i \sum_j R_j = \sum_j R_j, \quad (4.17) \]
Figure 4.2: Possible contour completions for the integration path L to use in the
Bromwich integral.
where the sum extends to all the residues of the function $e^{sx}F(s)$ in the complex plane. In fact, by construction L lies on the right of each singularity of $e^{sx}F(s)$, and in the limit R → ∞ the closed curve C = L + Γ₁ will enclose them all (including for instance the singularity z₁ that in Fig. 4.2 is not yet enclosed in C). If instead we choose the completion path Γ₂, then the closed curve L + Γ₂ will enclose no singularities and therefore f(x) will be zero.
From the relation $\mathcal{L}\{u_c(x)f(x-c)\} = F(s)\,e^{-cs}$ we can already derive the inverse Laplace transform of the given function, namely u₂(x) sin[2(x−2)]. We check whether we can obtain the same result by means of the Bromwich integral. We have to evaluate the integral
\[ \frac{1}{2\pi i}\int_{\lambda-i\infty}^{\lambda+i\infty} \frac{2e^{s(x-2)}}{s^2+4}\,ds. \]
We notice first that the given function has two simple poles at s = 2i and s = −2i (in fact s² + 4 = (s + 2i)(s − 2i)), both of which have Re(s) = 0. We can therefore take an arbitrarily small (but positive) value of λ. We can distinguish two cases: i) x < 2 and ii) x > 2. For x < 2 the exponent s(x−2) has negative real part if Re(s) > 0. We notice here that $e^{s(x-2)} = e^{(x-2)\,\mathrm{Re}(s)}\,e^{i(x-2)\,\mathrm{Im}(s)}$, therefore what determines the behavior of this function at infinity is $e^{(x-2)\,\mathrm{Re}(s)}$ (the factor $e^{i(x-2)\,\mathrm{Im}(s)}$ has modulus 1 and does not create problems). That means that, for Re(s) → +∞, the function $e^{s(x-2)}$ tends to zero. At the same time the denominator s² + 4 diverges as Re(s) → +∞, and this means that also the term $\frac{1}{s^2+4}$ that multiplies $e^{s(x-2)}$ tends to zero along the curve Γ₂ for R → ∞. Therefore the integral of the function $F(s)\,e^{sx}$ tends to zero along the curve Γ₂ of Fig. 4.2 (for R → ∞) and we can calculate the Bromwich integral by means of the contour C = L + Γ₂. For what we have learned, since this closed contour does not enclose the poles, the function f(x) is zero.
For x > 2, the function $e^{s(x-2)}$ tends to zero for Re(s) → −∞. That means that the integral of the function $\frac{e^{s(x-2)}}{s^2+4}$ tends to zero (for R → ∞) along the curve Γ₁ of Fig. 4.2, and we therefore take Γ₁ as a completion of L to calculate the Bromwich integral. By the residue theorem, this integral is given by the sum of the residues of the function $e^{sx}F(s)$ at all the poles, namely:
We have:
\[ \mathrm{Res}(2i) = \lim_{s\to 2i}\,(s-2i)\,\frac{2e^{s(x-2)}}{s^2+4} = \lim_{s\to 2i}\frac{2e^{s(x-2)}}{s+2i} = \frac{e^{2i(x-2)}}{2i} \]
\[ \mathrm{Res}(-2i) = \lim_{s\to -2i}\,(s+2i)\,\frac{2e^{s(x-2)}}{s^2+4} = \lim_{s\to -2i}\frac{2e^{s(x-2)}}{s-2i} = \frac{e^{-2i(x-2)}}{-2i} \]
By summing up these two residues we obtain:
\[ f(x) = \frac{1}{2i}\left[e^{2i(x-2)} - e^{-2i(x-2)}\right] = \sin[2(x-2)]. \]
This is what we obtain if x > 2 whereas, as we have seen, if x is smaller than 2 the function is zero. Recalling the definition of the Heaviside function u_c(x), we can conclude that the inverse Laplace transform of the given function is:
\[ f(x) = \mathcal{L}^{-1}\left\{\frac{2e^{-2s}}{s^2+4}\right\} = u_2(x)\sin[2(x-2)]. \]
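The two residues (and their sum) can be reproduced with sympy's `residue` (assuming sympy is available):

```python
# Residues of 2 e^{s(x-2)}/(s^2 + 4) at s = ±2i sum to sin[2(x-2)].
import sympy as sp

s = sp.symbols("s")
x = sp.symbols("x", positive=True)
G = 2 * sp.exp(s * (x - 2)) / (s**2 + 4)

total = sp.residue(G, s, 2*sp.I) + sp.residue(G, s, -2*sp.I)
assert sp.simplify((total - sp.sin(2*(x - 2))).rewrite(sp.exp)) == 0
```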
with a ∈ R.
The function $e^{sx}\sqrt{s-a}$ has no poles, but the function $\sqrt{z}$ is multiple-valued in the complex plane; therefore, as we have seen, a branch point is present at the point z = 0, namely at s = a. This is the only singularity of our $F(s)\,e^{sx}$ and therefore, in order to evaluate the Bromwich integral, we have to take λ larger than a. The integral to calculate will be:
\[ \mathcal{L}^{-1}\{\sqrt{s-a}\} = \frac{1}{2\pi i}\int_{\lambda-i\infty}^{\lambda+i\infty} \sqrt{s-a}\,e^{sx}\,ds. \]
In this case it is convenient to substitute z = s − a, which extracts a factor $e^{ax}$ and moves the branch point to z = 0; λ can then be taken arbitrarily small (but always larger than zero). Since z = 0 is a branch point of the function to integrate, we have to introduce a branch cut to evaluate the integral. Although we have so far taken the positive real axis as a branch cut, we have also said that this choice is arbitrary; to make the function $\sqrt{z}$ single-valued it is enough that closed curves are not allowed to enclose the origin. We can therefore take the negative real axis as the branch cut. In Fig. 4.3 we indicate the contour we must use to integrate the given function. Since the closed contour C = L + Γ₁ + r₁ + γ + r₂ + Γ₂ does not enclose singularities, its integral is zero. To evaluate the Bromwich integral (namely the integral along L) we have to calculate the integrals along the arcs Γ₁ and Γ₂, along the straight lines r₁ and r₂, and along the circumference γ.
Since the function $\sqrt{z}\,e^{zx}$ tends to zero for Re(z) → −∞ (the term $\sqrt{z}$ cannot counteract the exponential decay of $e^{zx}$; recall that x must be positive), the integrals along the arcs Γ₁ and Γ₂ vanish.
To evaluate the integral along γ we take as usual $z = \varepsilon e^{i\theta}$ and we take the limit for ε → 0. The interval of values of θ is [π, −π]; in fact, as we arrive at γ the argument will at first be π. Then we rotate clockwise around the origin, and after a whole circuit the argument will be −π. Since $dz = i\varepsilon e^{i\theta}\,d\theta$ we have:
\[ \oint_\gamma \sqrt{z}\,e^{zx}\,dz = \int_\pi^{-\pi} \sqrt{\varepsilon}\,e^{i\theta/2}\,e^{x\varepsilon e^{i\theta}}\cdot i\varepsilon e^{i\theta}\,d\theta. \]
The integrand clearly tends to zero for ε → 0, therefore there is no contribution from the integral over γ.
Along the straight lines r₁ and r₂ we can assume that the arguments of the complex numbers lying on them are π (along r₁) and −π (along r₂) and that their imaginary parts tend to zero; therefore we have $z = re^{i\pi}$ (r₁) and $z = re^{-i\pi}$ (r₂). Consequently, $dz = e^{i\pi}dr$ (r₁) and $dz = e^{-i\pi}dr$ (r₂). Notice here that, although we are on the negative real axis, r is positive; in fact, $e^{i\pi} = e^{-i\pi} = -1$. The parameter r runs between +∞ and 0 (r₁) and between 0 and +∞ (r₂). The integral of the given function along r₁ turns out to be:
\[ \int_{r_1} \sqrt{z}\,e^{zx}\,dz = \int_\infty^0 \sqrt{r}\,e^{i\pi/2}\,e^{xre^{i\pi}}\cdot e^{i\pi}\,dr = \int_\infty^0 \sqrt{r}\cdot i\cdot e^{-xr}\cdot(-1)\,dr = i\int_0^\infty \sqrt{r}\,e^{-xr}\,dr. \]
Along r₂ we have:
\[ \int_{r_2} \sqrt{z}\,e^{zx}\,dz = \int_0^\infty \sqrt{r}\,e^{-i\pi/2}\,e^{xre^{-i\pi}}\cdot e^{-i\pi}\,dr = \int_0^\infty \sqrt{r}\cdot(-i)\cdot e^{-xr}\cdot(-1)\,dr = i\int_0^\infty \sqrt{r}\,e^{-xr}\,dr. \]
The minus sign is due to the fact that, as we have said, the integral along the whole closed curve C is zero, therefore $\int_L F(s)\,e^{sx}\,ds = -\int_{r_1+r_2} F(s)\,e^{sx}\,ds$. To evaluate the integral $\int_0^\infty \sqrt{r}\,e^{-xr}\,dr$ we make the substitution xr = t², therefore $r = \frac{t^2}{x}$ and $dr = \frac{2t\,dt}{x}$. We obtain:
\[ \int_0^\infty \sqrt{r}\,e^{-xr}\,dr = \frac{1}{x^{3/2}}\int_0^\infty t\,e^{-t^2}\cdot 2t\,dt. \]
Since $-2te^{-t^2}$ is the differential of $e^{-t^2}$ we can integrate the given function by parts and obtain:
\[ \int_0^\infty \sqrt{r}\,e^{-xr}\,dr = -\frac{1}{x^{3/2}}\left[\left[t\,e^{-t^2}\right]_0^\infty - \int_0^\infty e^{-t^2}\,dt\right]. \]
4.2. FOURIER TRANSFORMS 141
The term in square brackets is zero. By using the known result $\int_0^\infty e^{-t^2}\,dt = \frac{\sqrt{\pi}}{2}$ we obtain:
\[ \int_0^\infty \sqrt{r}\,e^{-xr}\,dr = \frac{\sqrt{\pi}}{2x^{3/2}}. \]
This result completes our inversion of the function $F(s) = \sqrt{s-a}$, namely we have:
\[ f(x) = \mathcal{L}^{-1}\{\sqrt{s-a}\} = -\frac{e^{ax}}{2\sqrt{\pi x^3}}. \]
A periodic function f(t) with period T can be expanded in the Fourier series:
\[ f(t) = \frac{a_0}{2} + \sum_{n=1}^{\infty}\left[a_n\cos\!\left(\frac{2\pi nt}{T}\right) + b_n\sin\!\left(\frac{2\pi nt}{T}\right)\right], \]
where the constant coefficients a_n, b_n are called Fourier coefficients. Defining the angular frequency $\omega = \frac{2\pi}{T}$ we simplify this expression into:
\[ f(t) = \frac{a_0}{2} + \sum_{n=1}^{\infty}\left[a_n\cos(\omega nt) + b_n\sin(\omega nt)\right], \quad (4.18) \]
with
\[ a_n = \frac{2}{T}\int_{-T/2}^{T/2} f(t)\cos(\omega nt)\,dt \quad (4.19) \]
\[ b_n = \frac{2}{T}\int_{-T/2}^{T/2} f(t)\sin(\omega nt)\,dt \quad (4.20) \]
For the square wave equal to −1 on (−T/2, 0) and +1 on (0, T/2), the coefficients b_n are:
\[ b_n = \frac{2}{T}\int_{-T/2}^{T/2} f(t)\sin(\omega nt)\,dt = \frac{2}{T}\left[-\int_{-T/2}^{0}\sin(\omega nt)\,dt + \int_{0}^{T/2}\sin(\omega nt)\,dt\right] \]
\[ = \frac{4}{T}\int_0^{T/2}\sin(\omega nt)\,dt = -\frac{2}{n\pi}\left[\cos(\omega nt)\right]_0^{T/2} = \frac{2}{n\pi}\left[1 - \cos(n\pi)\right]. \]
Here we have used the relation ωT = 2π. We can notice here that cos(nπ) is 1 if n is even and −1 if n is odd, namely cos(nπ) = (−1)ⁿ. We could find the same result by means of de Moivre's theorem applied to the complex number $z = e^{i\pi}$. The coefficients b_n are therefore equal to zero if n is even and to $\frac{4}{n\pi}$ if n is odd. The Fourier expansion we looked for is therefore:
\[ f(t) = \frac{4}{\pi}\left[\sin(\omega t) + \frac{\sin(3\omega t)}{3} + \frac{\sin(5\omega t)}{5} + \dots\right]. \]
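A numerical look at this series; a sketch assuming the ±1 square wave with T = 2π (so ω = 1), evaluated at the arbitrary interior point t = 0.7, where f(t) = 1:

```python
# Partial sum of (4/pi) * sum sin((2k+1) t)/(2k+1) approaches 1 for 0 < t < pi.
import numpy as np

t = 0.7                 # arbitrary sample point in (0, pi)
N = 20000               # number of odd harmonics kept
partial = (4 / np.pi) * sum(np.sin((2*k + 1) * t) / (2*k + 1) for k in range(N))

assert abs(partial - 1.0) < 1e-3
```

Note that near t = 0 or t = π convergence is much slower (the Gibbs phenomenon appears at the jumps), which is why an interior point is chosen here.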
By using the identities cos z = (eiz + e−iz )/2 and sin z = (eiz − e−iz )/2i the Fourier
expansion of a function f (t) can also be written as:
\[ f(t) = \frac{a_0}{2} + \sum_{n=1}^{\infty}\left[a_n\frac{e^{i\omega nt} + e^{-i\omega nt}}{2} + b_n\frac{e^{i\omega nt} - e^{-i\omega nt}}{2i}\right] \]
\[ = \frac{a_0 e^{i\omega 0t}}{2} + \frac{1}{2}\sum_{n=1}^{\infty}\left[(a_n - ib_n)e^{i\omega nt} + (a_n + ib_n)e^{-i\omega nt}\right]. \]
In this way we can see that the function f(t) can be expressed as a sum, extending from −∞ to +∞, of terms of the form $e^{i\omega_n t}$, where ω_n = ω·n, namely we have:
\[ f(t) = \sum_{n=-\infty}^{\infty} c_n e^{i\omega_n t}; \qquad c_n = \begin{cases} \frac{1}{2}(a_n - ib_n) & n \ge 0 \\ \frac{1}{2}(a_n + ib_n) & n < 0 \end{cases} \quad (4.21) \]
This compact representation of the periodic function f (t) is called complex Fourier
series. If we combine the coefficients an and bn as indicated in Eq. 4.21 we find that,
irrespective of the sign of n, we have:
\[ c_n = \frac{1}{T}\int_{-T/2}^{T/2} f(t)\,e^{-i\omega_n t}\,dt. \quad (4.22) \]
\[ f(t) = \sum_{n=-\infty}^{\infty}\left[\frac{1}{T}\int_{-T/2}^{T/2} f(u)\,e^{-i\omega_n u}\,du\right] e^{i\omega_n t} = \sum_{n=-\infty}^{\infty}\frac{\Delta\omega}{2\pi}\left[\int_{-T/2}^{T/2} f(u)\,e^{-i\omega_n u}\,du\right] e^{i\omega_n t}. \]
In the limit for T → ∞ and ∆ω → 0 the limits of the integration extend to infinity,
the sum becomes an integral and the discrete values ωn become a continuous variable
ω (with ∆ω → dω). We have thus:
\[ f(t) = \frac{1}{2\pi}\int_{-\infty}^{\infty} d\omega\, e^{i\omega t}\int_{-\infty}^{\infty} du\, f(u)\,e^{-i\omega u}. \quad (4.23) \]
From this relation we can define the Fourier transform of a function f (t) as:
\[ \tilde{f}(\omega) = \mathcal{F}\{f(t)\} = \frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty} f(t)\,e^{-i\omega t}\,dt. \quad (4.24) \]
Here we require, in order for this integration to be possible, that $\int_{-\infty}^{\infty}|f(t)|\,dt$ is finite. Unlike the Laplace transform, the Fourier transform is very easy to invert. In fact, we can see directly from Eq. 4.23 that:
\[ f(t) = \frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty} \tilde{f}(\omega)\,e^{i\omega t}\,d\omega. \quad (4.25) \]
Example 4.2.2 Find the Fourier transform of the normalized Gaussian distribution
\[ f(t) = \frac{1}{\tau\sqrt{2\pi}}\,e^{-\frac{t^2}{2\tau^2}}. \]
The exponent of the integrand in Eq. 4.24 is:
\[ -i\omega t - \frac{t^2}{2\tau^2} = -\frac{1}{2\tau^2}\left[t^2 + 2i\omega t\tau^2 + (i\omega\tau^2)^2 - (i\omega\tau^2)^2\right]. \]
The first three addends inside the square brackets are the square of $t + i\omega\tau^2$, namely we obtain:
\[ -i\omega t - \frac{t^2}{2\tau^2} = -\frac{(t + i\omega\tau^2)^2}{2\tau^2} + \frac{(i\omega\tau^2)^2}{2\tau^2} = -\left(\frac{t + i\omega\tau^2}{\sqrt{2}\,\tau}\right)^2 - \frac{1}{2}\omega^2\tau^2. \]
Since the term $e^{-\frac{1}{2}\omega^2\tau^2}$ does not depend on t we obtain:
\[ \tilde{f}(\omega) = \frac{1}{2\pi\tau}\,e^{-\frac{1}{2}\omega^2\tau^2}\int_{-\infty}^{\infty} e^{-\left(\frac{t + i\omega\tau^2}{\sqrt{2}\,\tau}\right)^2}\,dt. \]
This is the integral of a complex function, therefore we should use the methods of
complex integration we have learned so far. However, we can see that the integration
simplifies significantly by means of the substitution:
\[ \frac{t + i\omega\tau^2}{\sqrt{2}\,\tau} = s, \qquad dt = \sqrt{2}\,\tau\,ds. \]
In this way we obtain:
\[ \tilde{f}(\omega) = \frac{1}{2\pi\tau}\,e^{-\frac{1}{2}\omega^2\tau^2}\,\sqrt{2}\,\tau\int_{-\infty}^{\infty} e^{-s^2}\,ds = \frac{1}{\sqrt{2\pi}}\,e^{-\frac{1}{2}\omega^2\tau^2}, \]
where we have made use of the known result $\int_{-\infty}^{\infty} e^{-s^2}\,ds = \sqrt{\pi}$. It is important to note that the Fourier transform of a Gaussian function is another Gaussian function.
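A numerical check of this transform with scipy, using the convention of Eq. 4.24 and the arbitrary sample values τ = 0.8, ω = 1.7:

```python
# F{Gaussian}: numerical integration vs exp(-omega^2 tau^2 / 2)/sqrt(2 pi).
import numpy as np
from scipy.integrate import quad

tau, w = 0.8, 1.7  # arbitrary sample values
f = lambda t: np.exp(-t**2 / (2 * tau**2)) / (tau * np.sqrt(2 * np.pi))

re, _ = quad(lambda t: f(t) * np.cos(w * t), -np.inf, np.inf)
im, _ = quad(lambda t: f(t) * np.sin(w * t), -np.inf, np.inf)  # odd integrand: ~0
ft = (re - 1j * im) / np.sqrt(2 * np.pi)

assert abs(ft - np.exp(-0.5 * w**2 * tau**2) / np.sqrt(2 * np.pi)) < 1e-7
```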
The Fourier transform allows us to express the Dirac delta function in an elegant
and useful way. We recall Eq. 4.23
\[ f(t) = \frac{1}{2\pi}\int_{-\infty}^{\infty} d\omega\, e^{i\omega t}\int_{-\infty}^{\infty} du\, f(u)\,e^{-i\omega u}. \]
By rearranging and exchanging the order of integration we obtain:
\[ f(t) = \frac{1}{2\pi}\int_{-\infty}^{\infty} d\omega\int_{-\infty}^{\infty} du\, f(u)\,e^{i\omega(t-u)} = \frac{1}{2\pi}\int_{-\infty}^{\infty} du\int_{-\infty}^{\infty} d\omega\, f(u)\,e^{i\omega(t-u)} = \int_{-\infty}^{\infty} du\, f(u)\left[\frac{1}{2\pi}\int_{-\infty}^{\infty} e^{i\omega(t-u)}\,d\omega\right], \]
where the exchange of the order of integration has been made possible by Fubini's theorem. Recalling Eq. 4.15 we can immediately recognize that:
\[ \delta(t-u) = \frac{1}{2\pi}\int_{-\infty}^{\infty} e^{i\omega(t-u)}\,d\omega. \quad (4.26) \]
Analogously to the Laplace transform, it is easy to calculate the Fourier transform of the derivative of a function. It is:
\[ \mathcal{F}\{f'(t)\} = \frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty} f'(t)\,e^{-i\omega t}\,dt = \frac{1}{\sqrt{2\pi}}\left[f(t)\,e^{-i\omega t}\right]_{-\infty}^{\infty} - \frac{(-i\omega)}{\sqrt{2\pi}}\int_{-\infty}^{\infty} f(t)\,e^{-i\omega t}\,dt = i\omega\,\mathcal{F}\{f(t)\}. \quad (4.27) \]
Here we have assumed that the function f(t) tends to zero for t → ±∞ (as it should, since $\int_{-\infty}^{\infty}|f(t)|\,dt$ is finite). It is easy to iterate this procedure and show that $\mathcal{F}\{f^{(n)}(t)\} = (i\omega)^n\,\mathcal{F}\{f(t)\}$.
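Eq. 4.27 can also be sanity-checked numerically; a sketch for the arbitrary choice f(t) = e^{−t²} at the sample frequency ω = 1.2:

```python
# Check F{f'}(omega) = i omega F{f}(omega) for f(t) = exp(-t^2).
import numpy as np
from scipy.integrate import quad

def ft(g, w):
    """Fourier transform in the convention of Eq. 4.24, via real/imag parts."""
    re, _ = quad(lambda t: g(t) * np.cos(w * t), -np.inf, np.inf)
    im, _ = quad(lambda t: g(t) * np.sin(w * t), -np.inf, np.inf)
    return (re - 1j * im) / np.sqrt(2 * np.pi)

w = 1.2
f = lambda t: np.exp(-t**2)
df = lambda t: -2 * t * np.exp(-t**2)   # f'(t)

assert abs(ft(df, w) - 1j * w * ft(f, w)) < 1e-7
```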