APPLIED

MATHEMATICS
DIFFERENTIAL EQUATIONS
AND INTEGRAL TRANSFORMS

ATH. KEHAGIAS
THESSALONIKI 2019

Copyright 2019 Ath. Kehagias

Sofistis Publications

Free Usage

1st Edition, October 2019


Contents

Preface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . i

Notation and Preliminaries . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . iii

I Ordinary Differential Equations

1 First Order ODEs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3



2 Linear ODEs: Solution Properties . . . . . . . . . . . . . . . . . . . . . . . . . 3

3 Linear ODEs: Solution Methods . . . . . . . . . . . . . . . . . . . . . . . . . . 11

4 Systems of Linear ODEs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25

5 Series Solutions of ODEs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35



II Laplace

6 Laplace Transform . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49

7 Differential Equations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 59

8 Convolution and Integral Equations . . . . . . . . . . . . . . . . . . . . . 69

9 Dirac Delta and Generalized Functions . . . . . . . . . . . . . . . . . . 75

10 Difference Equations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 87
III Fourier

11 Fourier Series . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 101
12 Fourier Transform . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 115

IV Partial Differential Equations

13 PDEs for Equilibrium . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 135

14 PDEs for Diffusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 149

15 PDEs for Waves . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 173

16 Bessel Functions and Applications . . . . . . . . . . . . . . . . . . . . . . 189


17 Vector Spaces of Functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 201
18 Sturm-Liouville Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 215

V Appendices

A Definitions of the Integral . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 231

B Distribution Theory . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 239

C Gamma Function . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 241

D Numerical Solution of PDEs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 243


Preface

I hear it, I forget it.
I see it, I remember it.
I do it, I learn it.

In these notes we mainly study ordinary differential equations (ODEs), i.e., equations involving derivatives with respect to a single variable, such as

dy/dx + y = e^x;

and partial differential equations (PDEs) i.e., equations involving derivatives with
respect to more than one variable, such as

∂²u/∂x² + ∂²u/∂y² = 0.
In addition we study integral transforms, such as the Laplace and Fourier trans-
forms, from a general point of view as well as in connection to the solution of ODEs
and PDEs.

The notes are in an unfinished state and are known to contain some errors.

Ath. Kehagias
Thessaloniki, September 2019
Notation and Preliminaries

1. We use the symbol := to denote a definition (of a function, set etc.).


2. N is the set of positive integers {1, 2, 3, ...} .
3. N0 is the set of nonnegative integers {0, 1, 2, 3, ...} .
4. Z is the set of integers {..., −2, −1, 0, 1, 2, ...} .
5. R is the set of real numbers (−∞, +∞).
6. R∗ is the extended set of real numbers [−∞, +∞].
7. R₀⁺ is the set of nonnegative real numbers [0, +∞).
8. R₀⁻ is the set of nonpositive real numbers (−∞, 0].
9. C is the set of complex numbers {(x + iy) : x, y ∈ R}.
10. Q[0,1] the set of rational numbers in [0, 1].
11. SL is the solution set {y : L (y) = 0}, where L is a linear differential operator.
12. FL is the set of functions which satisfy the Dirichlet conditions in [−L, L].
13. The notation (a_n)_{n∈A} is used for a sequence (where A ⊆ N) but also for a set. So we write (with an abuse of notation) (a_n)_{n∈A} ⊆ R. Similarly, when (f_n)_{n∈A} is a sequence of functions, we will write (for example) (f_n)_{n∈A} ⊆ CN.
14. 0 (x) is the zero function, i.e.,

∀x : 0 (x) := 0.

15. 1A (x) is the characteristic function of set A, i.e.,



1_A(x) := 1 if x ∈ A, and 1_A(x) := 0 if x ∉ A.
I Ordinary Differential Equations

1 First Order ODEs . . . . . . . . . . . . . . . . . . 3

2 Linear ODEs: Solution Properties 3

3 Linear ODEs: Solution Methods . 11

4 Systems of Linear ODEs . . . . . . . . . 25

5 Series Solutions of ODEs . . . . . . . . 35


1. First Order ODEs

Here we study differential equations of the form


dy/dx = F(x, y).
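Equations of this form can be solved or checked symbolically. As a quick illustration (a sympy sketch, not part of the original notes), here is the hypothetical instance F(x, y) = −y + e^x:

```python
import sympy as sp

x = sp.symbols("x")
y = sp.Function("y")

# dy/dx = F(x, y) with the hypothetical choice F(x, y) = -y + exp(x)
ode = sp.Eq(y(x).diff(x), -y(x) + sp.exp(x))
sol = sp.dsolve(ode, y(x))

# checkodesol substitutes the solution back into the ODE
print(sol, sp.checkodesol(ode, sol))
```

The `checkodesol` call returns `(True, 0)` when the residual of the substituted solution simplifies to zero.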

1.1 Theory and Examples

1.2 Solved Problems
1.3 Unsolved Problems
1.4 Advanced Problems

2. Linear ODEs: Solution Properties

Here we study differential equations of the form

d^n y/dx^n + a_{n−1}(x) d^{n−1}y/dx^{n−1} + ... + a_1(x) dy/dx + a_0(x) y = g(x).
We focus on theoretical properties of the solutions y (x).

2.1 Theory and Examples


2.1.1 Definition. An n-th order linear differential equation has the form

d^n y/dx^n + a_{n−1}(x) d^{n−1}y/dx^{n−1} + ... + a_1(x) dy/dx + a_0(x) y = g(x).   (2.1)
Sometimes we attach to (2.1) initial conditions

y(x_0) = c_0, y′(x_0) = c_1, ..., y^{(n−1)}(x_0) = c_{n−1}.   (2.2)

If g(x) is identically zero, the DE is called homogeneous; otherwise it is called nonhomogeneous.
2.1.2 Example. This is a 2-nd order homogeneous linear differential equation:

d²y/dx² + (1 − x) dy/dx + x²y = 0.

This is a 3-rd order nonhomogeneous linear differential equation:

d³y/dx³ + 3 d²y/dx² + 3 dy/dx + y = x.
2.1.3 Example. This is a 2-nd order homogeneous linear differential equation with initial conditions:

d²y/dx² + 3 dy/dx + 2y = 0
y(0) = 1
y′(0) = 1

2.1.4 Theorem. If a_0(x), ..., a_{n−1}(x) are continuous in some interval (A, B), then (2.1)-(2.2) has a unique solution in (A, B), i.e., there exists a unique function y(x) which satisfies (2.1)-(2.2) for every x ∈ (A, B).
Proof. We will only give a part of the proof. In particular, we will show that: when n = 2, if (2.1)-(2.2) has a solution, it is unique¹. Suppose u(x) and v(x) are two solutions of (2.1)-(2.2) and let w(x) = u(x) − v(x). Then

w″(x) + a_1(x) w′(x) + a_0(x) w(x) = 0,   w(x_0) = w′(x_0) = 0.

Also define z(x) = (w′(x))² + (w(x))²; then z(x_0) = 0. Now choose any x_1, x_2 such that

x_0 ∈ [x_1, x_2] ⊂ (A, B).

Then, for any x ∈ [x_1, x_2]

z′(x) = 2w′(x) w″(x) + 2w(x) w′(x)
      = −2w′(x) (a_1(x) w′(x) + a_0(x) w(x)) + 2w(x) w′(x)
      = −2a_1(x) (w′(x))² + 2w(x) w′(x) (1 − a_0(x)).
By continuity, there exists some M such that

∀x ∈ [x1 , x2 ] : |a0 (x)| < M, |a1 (x)| < M.

Hence

|z′(x)| ≤ 2|a_1(x)| (w′(x))² + 2|w(x)| |w′(x)| |1 − a_0(x)|
       ≤ 2M (w′(x))² + 2(1 + M) |w(x)| |w′(x)|
       ≤ 2M ((w′(x))² + (w(x))²) + (1 + M) ((w′(x))² + (w(x))²)
       = (1 + 3M) ((w′(x))² + (w(x))²) = (1 + 3M) z(x).

Hence
− (1 + 3M) z(x) ≤ z′(x) ≤ (1 + 3M) z(x).
Letting K = 1 + 3M we have:

z′(x) − Kz(x) ≤ 0 ⇒ (z′(x) − Kz(x)) e^{−Kx} ≤ 0 ⇒ (z(x) e^{−Kx})′ ≤ 0.

Hence z(x) e^{−Kx} is nonincreasing, which means that

∀x ∈ [x_0, x_2] : 0 ≤ z(x) e^{−Kx} ≤ z(x_0) e^{−Kx_0} = 0 ⇒ (∀x ∈ [x_0, x_2] : z(x) = 0).

Similarly we have

z′(x) + Kz(x) ≥ 0 ⇒ (z′(x) + Kz(x)) e^{Kx} ≥ 0 ⇒ (z(x) e^{Kx})′ ≥ 0.


¹ This proof was given by B. Travis in “Uniqueness of Initial Value Problems”, Divulgaciones Matematicas vol. 5, No. 1/2 (1997), pp. 39–41. The full proof of the theorem (existence and uniqueness for any n ∈ N) is omitted because it requires more advanced concepts.

Hence z(x) e^{Kx} is nondecreasing, which means that

∀x ∈ [x_1, x_0] : 0 ≤ z(x) e^{Kx} ≤ z(x_0) e^{Kx_0} = 0 ⇒ (∀x ∈ [x_1, x_0] : z(x) = 0).

In short:

∀x ∈ [x_1, x_2] ⊂ (A, B) : (w′(x))² + (w(x))² = 0 ⇒
∀x ∈ (A, B) : (u(x) − v(x))² = 0 ⇒
∀x ∈ (A, B) : u(x) = v(x).
2.1.5 Definition. For any a_{n−1}(x), ..., a_1(x), a_0(x) we can define the n-th order differential operator L() as follows:

L() = d^n/dx^n + a_{n−1}(x) d^{n−1}/dx^{n−1} + ... + a_1(x) d/dx + a_0(x).
2.1.6 What this means is that L() is a function which has as domain and range function sets; it takes as input a function y(x) and produces as output another function

L(y) = d^n y/dx^n + a_{n−1}(x) d^{n−1}y/dx^{n−1} + ... + a_1(x) dy/dx + a_0(x) y.

Hence (2.1) can be rewritten as

L(y) = g(x).
2.1.7 Example. Defining

L() = d²/dx² + 3 d/dx + 2

we can write the 2-nd order homogeneous linear differential equation

d²y/dx² + 3 dy/dx + 2y = 0

as

L(y) = 0.
2.1.8 Theorem. The n-th order differential operator L() is linear, i.e.,

∀κ, λ ∈ C, ∀u, v : L(κu + λv) = κL(u) + λL(v).

Proof. This is obvious:

L(κu + λv) = d^n/dx^n (κu + λv) + ... + a_1(x) d/dx (κu + λv) + a_0(x) (κu + λv)
           = κ (d^n u/dx^n + ... + a_1(x) du/dx + a_0(x) u) + λ (d^n v/dx^n + ... + a_1(x) dv/dx + a_0(x) v)
           = κL(u) + λL(v).
2.1.9 Definition. The set of functions {y1 (x) , ..., yK (x)} is called linearly indepen-
dent on X (where X ⊆ R) iff

(∀x ∈ X : c1 y1 (x) + ... + cK yK (x) = 0) ⇒ (∀k : ck = 0) . (2.3)

The set is called linearly dependent on X iff (2.3) does not hold.

2.1.10 Example. The set {1, x, x²} is lin. ind. on R because

(∀x ∈ X : c_0·1 + c_1 x + c_2 x² = 0) ⇒
(c_0 + c_1·0 + c_2·0² = 0, c_0 + c_1·1 + c_2·1² = 0, c_0 + c_1·2 + c_2·2² = 0) ⇒ c_0 = c_1 = c_2 = 0

since

| 1 0 0 |
| 1 1 1 | = 2 ≠ 0.
| 1 2 4 |

2.1.11 Example. The set {1, x, 1 + x} is lin. dep. on R because

∀x ∈ X : c_1·1 + c_2 x + c_3 (1 + x) = 0

can be satisfied with c_1 = 1, c_2 = 1, c_3 = −1.


2.1.12 Definition. The Wronskian W(x|y_1, ..., y_K) of a function set {y_1(x), ..., y_K(x)} (we also write simply W(x)) is defined to be the determinant

       | y_1          y_2          ...  y_K          |
       | y_1′         y_2′         ...  y_K′         |
W(x) = | ...          ...          ...  ...          |.
       | y_1^{(K−1)}  y_2^{(K−1)}  ...  y_K^{(K−1)}  |

2.1.13 Example. The Wronskian of {1, x, 1 + x} is

       | 1  x  1 + x |
W(x) = | 0  1  1     | = 0.
       | 0  0  0     |

2.1.14 Example. The Wronskian of {1, x, x²} is

       | 1  x  x²  |
W(x) = | 0  1  2x  | = 2.
       | 0  0  2   |

2.1.15 Example. The Wronskian of {cos x, sin x, e^{ix}} is

       | cos x   sin x   e^{ix}   |
W(x) = | −sin x  cos x   ie^{ix}  | = 0.
       | −cos x  −sin x  −e^{ix}  |
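The three Wronskians above are easy to check with a computer algebra system; a minimal sympy sketch (not part of the original notes):

```python
import sympy as sp

x = sp.symbols("x")

# Wronskians of the three example sets
w1 = sp.wronskian([1, x, 1 + x], x)                              # lin. dep. set
w2 = sp.wronskian([1, x, x**2], x)                               # lin. ind. set
w3 = sp.wronskian([sp.cos(x), sp.sin(x), sp.exp(sp.I * x)], x)   # lin. dep. set

print(sp.simplify(w1), sp.simplify(w2), sp.simplify(w3))  # 0 2 0
```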

2.1.16 Theorem. Given [a, b] and L (y) = 0.


1. If y1 , y2 , ..., yK are solutions of L (y) = 0 on [a, b] and

∀x ∈ [a, b] : W (x|y1 , ..., yK ) = 0

then the set {y1 , y2 , ..., yK } is lin. dep. on [a, b].



2. If

∃x ∈ [a, b] : W(x|y_1, ..., y_K) ≠ 0

then the set {y_1, y_2, ..., y_K} is lin. ind. on [a, b].
2.1.17 The terms “linear operator”, “linearly independent” etc. remind us of Linear Algebra. This is not accidental; there is a connection between linear DEs and Linear Algebra, as will now become obvious.
2.1.18 Definition. Given the linear differential operator

L() = d^n/dx^n + a_{n−1}(x) d^{n−1}/dx^{n−1} + ... + a_1(x) d/dx + a_0(x)

we define the solution set of L by

S_L := {y : L(y) = 0}.

In other words, y ∈ S_L is equivalent to

d^n y/dx^n + a_{n−1}(x) d^{n−1}y/dx^{n−1} + ... + a_1(x) dy/dx + a_0(x) y = 0.   (2.4)

2.1.19 Theorem. The set of solutions S_L is a vector space.

Proof. We need to prove:

∀κ, λ ∈ C, ∀u, v ∈ S_L : κu + λv ∈ S_L.

Indeed:

u ∈ S_L ⇒ L(u) = 0 ⇒ κL(u) = 0
v ∈ S_L ⇒ L(v) = 0 ⇒ λL(v) = 0

and hence κL(u) + λL(v) = L(κu + λv) = 0 ⇒ κu + λv ∈ S_L.

2.1.20 We will next show that: if L is an n-th order differential operator, then the solution space S_L has dimension n. To this end we need the following two standard theorems of Linear Algebra².
2.1.21 Theorem. If T is a linear transformation from V to W, then the following statements are equivalent.
1. T is one-to-one on V.
2. T is invertible and T⁻¹ is linear.
3. ∀x ∈ V : T(x) = 0 ⇒ x = 0.

2.1.22 Theorem. If T is a linear transformation from V to W and dim (V ) = n < ∞,


then the following statements are equivalent.
1. T is one-to-one on V .
2. If {e1 , ..., en } is a linearly independent set, then {T (e1 ) , ..., T (en )} is a linearly
independent set.
3. dim (T (V )) = n.
4. If {e1 , ..., en } is a basis of V , then {T (e1 ) , ..., T (en )} is a basis of T (V ).
² For the proofs see, e.g., T. Apostol, Calculus, vol. 2, Wiley, 1969.

2.1.23 Theorem. Let L be an n-th order linear differential operator. The vector space S_L = {y : L(y) = 0} has dimension n.
Proof. Let T be a linear transformation which maps S_L to C^n by (for some fixed x_0)

T(y) = [y(x_0), y′(x_0), ..., y^{(n−1)}(x_0)].

(Why is it linear?) Then:
1. by the uniqueness Theorem, T(y) = 0 ⇒ y(x) = 0(x);
2. by Theorem 2.1.21, T is one-to-one on S_L;
3. by Theorem 2.1.22, dim(S_L) = dim(C^n) = n.

2.1.24 Corollary. Let L be an n-th order linear differential operator. If {y1 , ..., yn }
is a linearly independent set of solutions of L (y) = 0, then it is a basis of SL =
{y : L (y) = 0}. In other words, every solution y of L (y) = 0 can be written as

y = c1 y1 + ... + cn yn .

2.1.25 Hence, if we find n linearly independent solutions {y_1, ..., y_n} of

d^n y/dx^n + a_{n−1}(x) d^{n−1}y/dx^{n−1} + ... + a_1(x) dy/dx + a_0(x) y = 0   (2.5)

then every solution y of L(y) = 0 can be written as

y = c_1 y_1 + ... + c_n y_n.   (2.6)

We call (2.6) the general solution of (2.5).


2.1.26 Example. The differential equation

d²y/dx² + y = 0

has two solutions y_1(x) = cos x, y_2(x) = sin x (check it!) which form a lin. ind. set on R, since

(c_1 cos 0 + c_2 sin 0 = 0 and c_1 cos(π/2) + c_2 sin(π/2) = 0) ⇒ (c_1·1 + c_2·0 = 0 and c_1·0 + c_2·1 = 0) ⇒ (c_1 = 0 and c_2 = 0).

Hence {cos x, sin x} is a basis of S_L (where L = d²/dx² + 1). Any other solution can be written as a linear combination of cos x and sin x and, conversely, any linear combination of cos x and sin x is a solution (why?).
The theorem tells us that the DE does not have a bigger lin. ind. set of solutions. It does have more solutions, e.g., e^{ix}, e^{−ix} are also solutions (check it!). In fact {e^{ix}, e^{−ix}} is another basis of S_L (and so are {cos x, e^{ix}}, {cos x, e^{−ix}} etc.). But the set {cos x, sin x, cos x + sin x} is a lin. dep. set (obviously) and the same is true of {cos x, sin x, e^{ix}} since

∀x ∈ R : cos x + i sin x − e^{ix} = 0.



2.1.27 Example. Let S_L be the set of solutions of

d²y/dx² + 5 dy/dx + 6y = 0.

A basis of S_L is {e^{−2x}, e^{−3x}}. The general solution is

y(x) = c_1 e^{−2x} + c_2 e^{−3x}.
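As a sanity check (a sympy sketch, not part of the original notes), both basis functions can be substituted back into the DE:

```python
import sympy as sp

x = sp.symbols("x")
y = sp.Function("y")

lhs = y(x).diff(x, 2) + 5 * y(x).diff(x) + 6 * y(x)

# substitute each claimed basis function and evaluate the derivatives
r1 = lhs.subs(y(x), sp.exp(-2 * x)).doit()
r2 = lhs.subs(y(x), sp.exp(-3 * x)).doit()
print(sp.simplify(r1), sp.simplify(r2))  # 0 0
```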

2.1.28 Definition. We say that y(x) is the general solution of L(y) = g(x) iff every solution can be written in the form of y(x).
2.1.29 Theorem. Consider the nonhomogeneous DE

d^n y/dx^n + a_{n−1}(x) d^{n−1}y/dx^{n−1} + ... + a_1(x) dy/dx + a_0(x) y = g(x)   (2.7)

which can also be written as

L(y) = g(x).

It has general solution

y(x) = y_h(x) + y_p(x)

where
1. y_h(x) is the general solution of L(y) = 0,
2. y_p(x) is some solution of L(y) = g(x).

Proof. By assumption we have

L(y_h) = 0,   L(y_p) = g(x).

Adding these two together we get

L(y_h) + L(y_p) = g(x) ⇒ L(y_h + y_p) = g(x).

This shows that y_h + y_p is a solution of L(y) = g(x). Why is it the general solution? In other words, how do you prove that every solution of L(y) = g(x) can be written in the form y_h + y_p?
2.1.30 There is an analogous theorem of Linear Algebra about the solutions of Ax = b. What does it say?
2.1.31 Example. The general solution of d²y/dx² + y = 1 is

y(x) = c_1 cos x + c_2 sin x + 1.

Here y_h(x) = c_1 cos x + c_2 sin x is the general solution of d²y/dx² + y = 0 and y_p(x) = 1 is a solution of d²y/dx² + y = 1.

2.2 Solved Problems

2.3 Unsolved Problems
1. Find the Wronskian of {3x, 2x}.
   Ans. |3x 2x; 3 2| = 0.
2. Find the Wronskian of {1, 2x}.
   Ans. |1 2x; 0 2| = 2.
3. Find the Wronskian of {1 + x + x², 1 − 2x, x²}.
   Ans. |1 + x + x²  1 − 2x  x²; 1 + 2x  −2  2x; 2  0  2| = −6.
4. Find the Wronskian of {e^x, e^{−x}, xe^x}.
   Ans. |e^x  e^{−x}  xe^x; e^x  −e^{−x}  e^x(x + 1); e^x  e^{−x}  e^x(x + 2)| = −4e^x.
5. Is {3x, 2x} lin. ind.?
   Ans. No.
6. Is {1, 2x} lin. ind.?
   Ans. Yes.
7. Is {1 + x + x², 1 − 2x, x²} lin. ind.?
   Ans. Yes.
8. Is {e^x, e^{−x}, xe^x} lin. ind.?
   Ans. Yes.
2.4 Advanced Problems
1. Show that dx/dt = F(ax + bt) can be converted to a separable DE by the change of variable v = ax + bt.
2. Show that yF(xy) dx + xG(xy) dy = 0 has the solution ln x = ∫ G(z)/(z(G(z) − F(z))) dz + c.
3. Show that dy/dx = F(y/x) has the solution ln x = ∫ dz/(F(z) − z) + c.
4. Show that every solution of dx/dt = f(t, x), x(0) = c is also a solution of the integral equation x(t) = c + ∫_0^t f(s, x(s)) ds.
5. The problem dx/dt = f(t, x), x(0) = c has solution x(t). Show that the function sequence obtained from the iteration

x_0(t) = c,   ∀n : x_{n+1}(t) = c + ∫_0^t f(s, x_n(s)) ds

satisfies

∀t : lim_{n→∞} x_n(t) = x(t).
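Problem 5 is the classical Picard iteration. A numeric sketch (not part of the original notes) for the hypothetical IVP dx/dt = x, x(0) = 1, whose exact solution is x(t) = e^t; the integral is approximated with the trapezoidal rule:

```python
import math

def picard(f, c, t_max=1.0, n_grid=201, n_iter=25):
    """Iterate x_{n+1}(t) = c + integral_0^t f(s, x_n(s)) ds on a grid."""
    ts = [t_max * i / (n_grid - 1) for i in range(n_grid)]
    x = [c] * n_grid                      # x_0(t) = c
    for _ in range(n_iter):
        fx = [f(t, xv) for t, xv in zip(ts, x)]
        new, acc = [c], 0.0
        for i in range(1, n_grid):        # trapezoidal rule for the integral
            acc += 0.5 * (fx[i - 1] + fx[i]) * (ts[i] - ts[i - 1])
            new.append(c + acc)
        x = new
    return ts, x

ts, xs = picard(lambda t, x: x, 1.0)
print(abs(xs[-1] - math.e))  # small: the iterates converge to e^t at t = 1
```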
3. Linear ODEs: Solution Methods

Here we present methods which compute solutions of differential equations such as

d^n y/dx^n + a_{n−1}(x) d^{n−1}y/dx^{n−1} + ... + a_1(x) dy/dx + a_0(x) y = g(x),
d^n y/dx^n + a_{n−1} d^{n−1}y/dx^{n−1} + ... + a_1 dy/dx + a_0 y = g(x).

We focus on methods for computing the solutions y(x).

3.1 Theory and Examples for Constant Coefficient DEs


3.1.1 Theorem. The 2nd order homogeneous linear differential equation with constant coefficients

d²y/dx² + a_1 dy/dx + a_0 y = 0   (3.1)

has characteristic equation

r² + a_1 r + a_0 = 0

with roots r_1, r_2. The general solution of (3.1) has the form
1. y(x) = c_1 e^{r_1 x} + c_2 e^{r_2 x} when r_1 ≠ r_2.
2. y(x) = c_1 e^{r_1 x} + c_2 x e^{r_1 x} when r_1 = r_2.
Proof. First we check that e^{r_n x} is a solution (for n ∈ {1, 2}):

d²/dx²(e^{r_n x}) + a_1 d/dx(e^{r_n x}) + a_0 e^{r_n x} = r_n² e^{r_n x} + a_1 r_n e^{r_n x} + a_0 e^{r_n x} = (r_n² + a_1 r_n + a_0) e^{r_n x} = 0.

Then the linear combination c_1 e^{r_1 x} + c_2 e^{r_2 x} is also a solution:

d²/dx²(c_1 e^{r_1 x} + c_2 e^{r_2 x}) + a_1 d/dx(c_1 e^{r_1 x} + c_2 e^{r_2 x}) + a_0 (c_1 e^{r_1 x} + c_2 e^{r_2 x})
= c_1 (d²/dx²(e^{r_1 x}) + a_1 d/dx(e^{r_1 x}) + a_0 e^{r_1 x}) + c_2 (d²/dx²(e^{r_2 x}) + a_1 d/dx(e^{r_2 x}) + a_0 e^{r_2 x})
= c_1·0 + c_2·0 = 0.

Now, when r_1 ≠ r_2, it is easy to check that {e^{r_1 x}, e^{r_2 x}} is a lin. ind. set. But this is not the case when r_1 = r_2. However, in that case x e^{r_1 x} is also a solution:

d²/dx²(x e^{r_1 x}) + a_1 d/dx(x e^{r_1 x}) + a_0 x e^{r_1 x}
= (r_1² x + 2r_1) e^{r_1 x} + a_1 (r_1 x + 1) e^{r_1 x} + a_0 x e^{r_1 x}
= (r_1² + a_1 r_1 + a_0) x e^{r_1 x} + (a_1 + 2r_1) e^{r_1 x} = 0

since r_1 = r_2 implies that (a) a_1² − 4a_0 = 0 and (b) r_1 = r_2 = −a_1/2, so a_1 + 2r_1 = 0. Hence {e^{r_1 x}, x e^{r_1 x}} is a lin. ind. set of solutions of (3.1).
3.1.2 Example. The DE

d²y/dx² + 3 dy/dx + 2y = 0

has characteristic equation

r² + 3r + 2 = 0

with roots r_1 = −1, r_2 = −2. Hence two solutions are e^{−x}, e^{−2x}; {e^{−x}, e^{−2x}} is a basis of the solution set; and every solution can be written in the form y(x) = c_1 e^{−x} + c_2 e^{−2x}.
3.1.3 Example. The DE

d²y/dx² + 2 dy/dx + y = 0

has characteristic equation

r² + 2r + 1 = 0

with roots r_1 = r_2 = −1. Hence two solutions are e^{−x}, xe^{−x}; {e^{−x}, xe^{−x}} is a basis of the solution set; and every solution can be written in the form y(x) = c_1 e^{−x} + c_2 xe^{−x}.
3.1.4 Example. The DE

d²y/dx² + dy/dx + y = 0

has characteristic equation

r² + r + 1 = 0

with roots r_1 = (−1 + i√3)/2, r_2 = (−1 − i√3)/2. Hence two solutions are e^{x(−1+i√3)/2}, e^{x(−1−i√3)/2} and every solution can be written in the form y(x) = c_1 e^{x(−1+i√3)/2} + c_2 e^{x(−1−i√3)/2}. However, note that

e^{x(−1+i√3)/2} = e^{−x/2} e^{i(√3/2)x} = e^{−x/2} (cos(√3x/2) + i sin(√3x/2)) = e^{−x/2} cos(√3x/2) + i e^{−x/2} sin(√3x/2).

Similarly

e^{x(−1−i√3)/2} = e^{−x/2} e^{−i(√3/2)x} = e^{−x/2} (cos(√3x/2) − i sin(√3x/2)) = e^{−x/2} cos(√3x/2) − i e^{−x/2} sin(√3x/2).

Hence the general solution can be written as

y(x) = c_1 e^{x(−1+i√3)/2} + c_2 e^{x(−1−i√3)/2}
     = c_1 (e^{−x/2} cos(√3x/2) + i e^{−x/2} sin(√3x/2)) + c_2 (e^{−x/2} cos(√3x/2) − i e^{−x/2} sin(√3x/2))
     = (c_1 + c_2) e^{−x/2} cos(√3x/2) + i (c_1 − c_2) e^{−x/2} sin(√3x/2)
     = p_1 e^{−x/2} cos(√3x/2) + p_2 e^{−x/2} sin(√3x/2).

This (cosine/sine) form is the one we will prefer to use.
3.1.5 The above theorem generalizes to n-th order linear homogeneous DEs with constant coefficients.
3.1.6 Theorem. The homogeneous linear differential equation with constant coefficients

d^n y/dx^n + a_{n−1} d^{n−1}y/dx^{n−1} + ... + a_1 dy/dx + a_0 y = 0   (3.2)

has characteristic equation

r^n + a_{n−1} r^{n−1} + ... + a_1 r + a_0 = 0.   (3.3)

Suppose that (3.3) has roots r_1 with multiplicity m_1, ..., r_K with multiplicity m_K (we have m_1 + m_2 + ... + m_K = n). The general solution of (3.2) is

y(x) = Σ_{k=1}^{K} Σ_{m=1}^{m_k} c_{km} x^{m−1} e^{r_k x}.

Proof. Left to the reader.


3.1.7 Example. The DE

d³y/dx³ − 6 d²y/dx² + 11 dy/dx − 6y = 0

has characteristic equation

r³ − 6r² + 11r − 6 = 0

with roots r_1 = 1, r_2 = 2, r_3 = 3. Hence its general solution is

y(x) = c_1 e^x + c_2 e^{2x} + c_3 e^{3x}

(check it!).

3.1.8 Example. The DE

d³y/dx³ + 3 d²y/dx² + 3 dy/dx + y = 0

has characteristic equation

r³ + 3r² + 3r + 1 = 0

with roots r_1 = r_2 = r_3 = −1. Hence its general solution is

y(x) = c_1 e^{−x} + c_2 xe^{−x} + c_3 x²e^{−x}

(check it!).
3.1.9 We now turn to nonhomogeneous equations. There are two methods to solve these:
1. The method of undetermined coefficients. This is simpler; we will explain it with examples.
2. The method of variation of parameters. This is less simple but more general; we will present it a little later.
3.1.10 Example. To solve

d²y/dx² − dy/dx − 2y = 1   (3.4)

we will exploit Theorem 2.1.29. From this we know that the solution is y(x) = y_h(x) + y_p(x), where y_h(x) is the general solution of

d²y/dx² − dy/dx − 2y = 0   (3.5)

and y_p(x) is some solution of (3.4).
The char. eq. of (3.5) is r² − r − 2 = 0 with roots r_1 = −1, r_2 = 2. Hence the general solution is

y_h(x) = c_1 e^{−x} + c_2 e^{2x}.

To find some solution of (3.4) we take a guess. A reasonable guess is y_p(x) = A (why?). Then we must have

d²A/dx² − dA/dx − 2A = 1 ⇒ −2A = 1 ⇒ A = −1/2 ⇒ y_p(x) = −1/2.

Hence the general solution of (3.4) is

y(x) = c_1 e^{−x} + c_2 e^{2x} − 1/2.
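A quick verification of this general solution (a sympy sketch, not part of the original notes):

```python
import sympy as sp

x, c1, c2 = sp.symbols("x c1 c2")

# general solution found above for y'' - y' - 2y = 1
y = c1 * sp.exp(-x) + c2 * sp.exp(2 * x) - sp.Rational(1, 2)
residual = y.diff(x, 2) - y.diff(x) - 2 * y - 1
print(sp.simplify(residual))  # 0 for all c1, c2
```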
3.1.11 Example. To solve

d²y/dx² − dy/dx − 2y = x²   (3.6)

we take y(x) = y_h(x) + y_p(x), where y_h(x) is the general solution of

d²y/dx² − dy/dx − 2y = 0;   (3.7)

this was already found to be

y_h(x) = c_1 e^{−x} + c_2 e^{2x}.

For the particular solution we guess y_p(x) = Ax² + Bx + C (why?). Then we must have

d²(Ax² + Bx + C)/dx² − d(Ax² + Bx + C)/dx − 2(Ax² + Bx + C) = x² ⇒
2A − (2Ax + B) − 2(Ax² + Bx + C) = x² ⇒
−2Ax² + (−2A − 2B)x + (2A − B − 2C) = x².

Then we must have

−2A = 1
−2A − 2B = 0
2A − B − 2C = 0.

Hence

y_p(x) = −x²/2 + x/2 − 3/4

and

y(x) = c_1 e^{−x} + c_2 e^{2x} − x²/2 + x/2 − 3/4.
3.1.12 Example. To solve

d²y/dx² − dy/dx − 2y = e^x   (3.8)

we take y(x) = y_h(x) + y_p(x), where y_h(x) is the general solution of

d²y/dx² − dy/dx − 2y = 0;   (3.9)

this was already found to be

y_h(x) = c_1 e^{−x} + c_2 e^{2x}.

For the particular solution we guess y_p(x) = Ae^x (why?). Then we must have

d²(Ae^x)/dx² − d(Ae^x)/dx − 2(Ae^x) = e^x ⇒ Ae^x − Ae^x − 2Ae^x = e^x ⇒ A = −1/2 ⇒ y_p(x) = −e^x/2.

Hence the general solution of (3.8) is

y(x) = c_1 e^{−x} + c_2 e^{2x} − e^x/2.
3.1.13 Example. To solve

d²y/dx² − dy/dx − 2y = e^{−x}   (3.10)

we take y(x) = y_h(x) + y_p(x), where y_h(x) is the general solution of

d²y/dx² − dy/dx − 2y = 0;   (3.11)

this was already found to be

y_h(x) = c_1 e^{−x} + c_2 e^{2x}.

For the particular solution we should not guess y_p(x) = Ae^{−x}, because e^{−x} already appears in y_h(x). So we try y_p(x) = Axe^{−x} and we have

d²(Axe^{−x})/dx² − d(Axe^{−x})/dx − 2(Axe^{−x}) = e^{−x} ⇒
Ae^{−x}(x − 2) − Ae^{−x}(1 − x) − 2Axe^{−x} = e^{−x} ⇒
(A + A − 2A)xe^{−x} + (−2A − A)e^{−x} = e^{−x} ⇒
A = −1/3 ⇒ y_p(x) = −(1/3)xe^{−x}.

Hence the general solution of (3.10) is

y(x) = c_1 e^{−x} + c_2 e^{2x} − (1/3)xe^{−x}.
3.1.14 We can summarize various “reasonable guesses” for particular solutions of L(y) = g(x) in the following table.

For g(x)                                            guess y_p(x)
A_0 + A_1 x + ... + A_m x^m                         B_0 + B_1 x + ... + B_m x^m
Ae^{Px}                                             Be^{Px}
C_1 cos(wx) + C_2 sin(wx)                           D_1 cos(wx) + D_2 sin(wx)
e^{Px}(A_0 + A_1 x + ... + A_m x^m)                 e^{Px}(B_0 + B_1 x + ... + B_m x^m)
(C_1 cos(wx) + C_2 sin(wx))(A_0 + ... + A_m x^m)    (D_1 cos(wx) + D_2 sin(wx))(B_0 + ... + B_m x^m)
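The table's recipe amounts to substituting the guess and matching coefficients, which can be automated. A sympy sketch (not part of the original notes) for the example y'' − y' − 2y = x² solved earlier:

```python
import sympy as sp

x, A, B, C = sp.symbols("x A B C")

yp = A * x**2 + B * x + C                 # guess from the table for g(x) = x^2
residual = sp.expand(yp.diff(x, 2) - yp.diff(x) - 2 * yp - x**2)

# match the coefficients of 1, x, x^2 to zero and solve for A, B, C
coeffs = sp.solve([residual.coeff(x, k) for k in range(3)], [A, B, C])
print(coeffs)  # {A: -1/2, B: 1/2, C: -3/4}
```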

3.1.15 We now turn to the method of variation of parameters, which is also presented by examples.
3.1.16 Example. Let us first solve

d²y/dx² − 2 dy/dx + y = e^x

(which we could also solve by undetermined coefficients) just to illustrate the method of variation of parameters. We note that the homogeneous equation

d²y/dx² − 2 dy/dx + y = 0

has general solution c_1 e^x + c_2 xe^x and we now consider a particular solution of the form

v_1(x) e^x + v_2(x) xe^x.

Now we solve the system

v_1′(x)(e^x) + v_2′(x)(xe^x) = 0
v_1′(x)(e^x)′ + v_2′(x)(xe^x)′ = e^x

which becomes

v_1′(x)(e^x) + v_2′(x)(xe^x) = 0
v_1′(x)(e^x) + v_2′(x)(e^x + xe^x) = e^x

with solution

v_1′(x) = |0  xe^x; e^x  e^x + xe^x| / |e^x  xe^x; e^x  e^x + xe^x| = −x ⇒ v_1(x) = −x²/2,
v_2′(x) = |e^x  0; e^x  e^x| / |e^x  xe^x; e^x  e^x + xe^x| = 1 ⇒ v_2(x) = x.

Finally then

y_p(x) = −(x²/2)e^x + x·xe^x = (x²/2)e^x

and

y(x) = y_h(x) + y_p(x) = c_1 e^x + c_2 xe^x + (x²/2)e^x.
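The particular solution found above can be verified directly (a sympy sketch, not part of the original notes):

```python
import sympy as sp

x = sp.symbols("x")

yp = x**2 / 2 * sp.exp(x)   # particular solution from variation of parameters
residual = yp.diff(x, 2) - 2 * yp.diff(x) + yp - sp.exp(x)
print(sp.simplify(residual))  # 0
```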
3.1.17 Example. To solve

d²y/dx² − 2 dy/dx + y = e^x/x

we note that the homogeneous equation

d²y/dx² − 2 dy/dx + y = 0

has general solution c_1 e^x + c_2 xe^x and we now consider a particular solution of the form

v_1(x) e^x + v_2(x) xe^x.

Now we solve the system

v_1′(x)(e^x) + v_2′(x)(xe^x) = 0
v_1′(x)(e^x)′ + v_2′(x)(xe^x)′ = e^x/x

which becomes

v_1′(x)(e^x) + v_2′(x)(xe^x) = 0
v_1′(x)(e^x) + v_2′(x)(e^x + xe^x) = e^x/x

with solution

v_1′(x) = |0  xe^x; e^x/x  e^x + xe^x| / |e^x  xe^x; e^x  e^x + xe^x| = −1 ⇒ v_1(x) = −x,
v_2′(x) = |e^x  0; e^x  e^x/x| / |e^x  xe^x; e^x  e^x + xe^x| = 1/x ⇒ v_2(x) = ln|x|.

Finally then

y_p(x) = −xe^x + ln|x| xe^x

and

y(x) = y_h(x) + y_p(x) = c_1 e^x + c_2 xe^x − xe^x + ln|x| xe^x = c_1 e^x + c_3 xe^x + ln|x| xe^x.
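Again the result can be checked by substitution (a sympy sketch, not part of the original notes); the homogeneous piece −xe^x is absorbed into the constants:

```python
import sympy as sp

x = sp.symbols("x", positive=True)   # positive=True lets log(x) stand for ln|x|

yp = -x * sp.exp(x) + sp.log(x) * x * sp.exp(x)
residual = yp.diff(x, 2) - 2 * yp.diff(x) + yp - sp.exp(x) / x
print(sp.simplify(residual))  # 0
```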
3.1.18 Example. To solve

d²y/dx² + 9y = 3 tan 3x

we note that the homogeneous equation

d²y/dx² + 9y = 0

has general solution c_1 cos 3x + c_2 sin 3x and we now consider a particular solution of the form

v_1(x) cos 3x + v_2(x) sin 3x.

Now we solve the system

v_1′(x) cos 3x + v_2′(x) sin 3x = 0
−v_1′(x) 3 sin 3x + v_2′(x) 3 cos 3x = 3 tan 3x

with solution

v_1′(x) = |0  sin 3x; 3 tan 3x  3 cos 3x| / |cos 3x  sin 3x; −3 sin 3x  3 cos 3x| = −(sin 3x) tan 3x = −sin²3x/cos 3x,
v_2′(x) = |cos 3x  0; −3 sin 3x  3 tan 3x| / |cos 3x  sin 3x; −3 sin 3x  3 cos 3x| = cos 3x tan 3x = sin 3x.

Then

v_1 = −∫ (sin²3x/cos 3x) dx = (1/6) ln |(2 − 2 sin 3x)/(2 + 2 sin 3x)| + (1/3) sin 3x
v_2 = ∫ sin 3x dx = −(1/3) cos 3x

and finally

y(x) = c_1 cos 3x + c_2 sin 3x + ((1/6) ln |(2 − 2 sin 3x)/(2 + 2 sin 3x)| + (1/3) sin 3x) cos 3x − (1/3) cos 3x sin 3x
     = c_1 cos 3x + c_2 sin 3x + (1/6) ln |(2 − 2 sin 3x)/(2 + 2 sin 3x)| cos 3x.

3.2 Theory and Examples for Variable Coefficient DEs


3.2.1 Theorem. The general solution of the 1st order linear differential equation

dy/dx + a_0(x) y = f(x)

is

y(x) = e^{−∫a_0(x)dx} (c + ∫ f(x) e^{∫a_0(x)dx} dx).
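The formula can be exercised on a concrete case; a sympy sketch (not part of the original notes) with the hypothetical choice a_0(x) = 1, f(x) = x:

```python
import sympy as sp

x, c = sp.symbols("x c")

a0, f = sp.Integer(1), x
mu = sp.exp(sp.integrate(a0, x))          # integrating factor e^{∫ a0 dx}
y = (c + sp.integrate(f * mu, x)) / mu    # the formula of the theorem

residual = sp.simplify(y.diff(x) + a0 * y - f)
print(sp.expand(y), residual)
```

Here the formula yields y(x) = c e^{−x} + x − 1 and the residual simplifies to zero.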

3.2.2 Since we have studied 1st order linear differential equations in a previous course, we will not deal with them in these notes.
3.2.3 For the n-th order linear differential equation

d^n y/dx^n + a_{n−1}(x) d^{n−1}y/dx^{n−1} + ... + a_1(x) dy/dx + a_0(x) y = f(x)

we can again use variation of parameters, as shown in the following examples. Note that this depends on having found the general solution of

d^n y/dx^n + a_{n−1}(x) d^{n−1}y/dx^{n−1} + ... + a_1(x) dy/dx + a_0(x) y = 0

which is not always easy.
3.2.4 Example. We solve

x² d²y/dx² − 2x dy/dx + 2y = x ln x

given that two lin. ind. solutions of

x² d²y/dx² − 2x dy/dx + 2y = 0

are x and x². In this case, we first divide by x² to get

d²y/dx² − (2/x) dy/dx + (2/x²) y = (ln x)/x.

Then we solve the system

v_1′ x + v_2′ x² = 0
v_1′ + v_2′ 2x = (ln x)/x

and, solving, we get

v_1′ = −(ln x)/x ⇒ v_1 = −(1/2)(ln x)²
v_2′ = (ln x)/x² ⇒ v_2 = −(1 + ln x)/x.

Then the general solution is

y(x) = c_1 x + c_2 x² − (x/2)(ln x)² − x(1 + ln x) = c_3 x + c_2 x² − x ln x − (x/2)(ln x)².

3.2.5 Example. We solve

(x² − 1) d²y/dx² − 2x dy/dx + 2y = (x² − 1)²

given that two lin. ind. solutions of

(x² − 1) d²y/dx² − 2x dy/dx + 2y = 0

are x and x² + 1. In this case, we first divide by x² − 1 to get

d²y/dx² − (2x/(x² − 1)) dy/dx + (2/(x² − 1)) y = x² − 1.

Then we solve the system

v_1′ x + v_2′ (x² + 1) = 0
v_1′ + v_2′ 2x = x² − 1

and, solving, we get

v_1′ = −x² − 1 ⇒ v_1 = −x³/3 − x
v_2′ = x ⇒ v_2 = x²/2.

Then the general solution is

y(x) = c_1 x + c_2 (x² + 1) + x(−x³/3 − x) + (x² + 1)(x²/2) = c_1 x + c_2 (x² + 1) + x²(x² − 3)/6.
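The particular part x²(x² − 3)/6 can be substituted back into the original equation (a sympy sketch, not part of the original notes):

```python
import sympy as sp

x = sp.symbols("x")

yp = x**2 * (x**2 - 3) / 6
residual = (x**2 - 1) * yp.diff(x, 2) - 2 * x * yp.diff(x) + 2 * yp - (x**2 - 1) ** 2
print(sp.expand(residual))  # 0
```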

3.3 Solved Problems


3.4 Unsolved Problems
d2y dy
1. Solve $\frac{d^2y}{dx^2}-3\frac{dy}{dx}+2y=e^x$.
   Ans. $C_1e^{2x}-e^x+C_2e^x-xe^x$.
2. Solve $\frac{d^2y}{dx^2}-4\frac{dy}{dx}+2y=e^x$.
   Ans. $C_1e^{(2+\sqrt{2})x}-e^x+C_2e^{(2-\sqrt{2})x}$.
3. Solve $\frac{d^2y}{dx^2}-4\frac{dy}{dx}+4y=e^x$.
   Ans. $e^x+C_1e^{2x}+C_2xe^{2x}$.
4. Solve $\frac{d^2y}{dx^2}+\frac{dy}{dx}+y=e^x$.
   Ans. $\frac{1}{3}e^x+C_1\cos\left(\frac{\sqrt{3}}{2}x\right)e^{-x/2}-C_2\sin\left(\frac{\sqrt{3}}{2}x\right)e^{-x/2}$.
5. Solve $\frac{d^2y}{dx^2}-3\frac{dy}{dx}+2y=x^2-1$.
   Ans. $\frac{3}{2}x+C_1e^{2x}+C_2e^x+\frac{1}{2}x^2+\frac{5}{4}$.
6. Solve $\frac{d^2y}{dx^2}-4\frac{dy}{dx}+2y=x^2-1$.
   Ans. $2x+C_1e^{(2+\sqrt{2})x}+C_2e^{(2-\sqrt{2})x}+\frac{1}{2}x^2+3$.
7. Solve $\frac{d^2y}{dx^2}-4\frac{dy}{dx}+4y=x^2-1$.
   Ans. $\frac{1}{2}x+C_1e^{2x}+\frac{1}{4}x^2+C_2xe^{2x}+\frac{1}{8}$.
8. Solve $\frac{d^2y}{dx^2}+\frac{dy}{dx}+y=x$.
   Ans. $x+C_1\cos\left(\frac{\sqrt{3}}{2}x\right)e^{-x/2}-C_2\sin\left(\frac{\sqrt{3}}{2}x\right)e^{-x/2}-1$.
9. Solve $\frac{d^2y}{dx^2}-3\frac{dy}{dx}+2y=\sin x$.
   Ans. $\frac{3}{10}\cos x+\frac{1}{10}\sin x+C_1e^{2x}+C_2e^x$.
10. Solve $\frac{d^2y}{dx^2}+5\frac{dy}{dx}+6y=\cos x$.
    Ans. $\frac{1}{10}\cos x+\frac{1}{10}\sin x+C_1e^{-2x}+C_2e^{-3x}$.
11. Solve $\frac{d^2y}{dx^2}-4\frac{dy}{dx}+4y=\sin x+\cos x$.
    Ans. $\frac{7}{25}\cos x-\frac{1}{25}\sin x+C_1e^{2x}+C_2xe^{2x}$.
12. Solve $\frac{d^2y}{dx^2}+\frac{dy}{dx}+y=\sin x$.
    Ans. $C_1\cos\left(\frac{\sqrt{3}}{2}x\right)e^{-x/2}-\cos x-C_2\sin\left(\frac{\sqrt{3}}{2}x\right)e^{-x/2}$.
13. Solve $\frac{d^2y}{dx^2}-3\frac{dy}{dx}+2y=e^x$.
    Ans. $C_1e^{2x}-e^x+C_2e^x-xe^x$.
14. Solve $\frac{d^2y}{dx^2}+5\frac{dy}{dx}+6y=e^{-2x}$.
    Ans. $C_1e^{-2x}+C_2e^{-3x}+(x-1)e^{-2x}$.
15. Solve $\frac{d^2y}{dx^2}-4\frac{dy}{dx}+4y=e^{2x}$.
    Ans. $C_1e^{2x}+\frac{1}{2}x^2e^{2x}+C_2xe^{2x}$.
16. Solve $\frac{d^2y}{dx^2}+\frac{dy}{dx}+y=e^{-x/2}\sin\frac{\sqrt{3}}{2}x$.
    Ans. $\frac{\sin\frac{\sqrt{3}}{2}x-\sqrt{3}\,x\cos\frac{\sqrt{3}}{2}x}{3e^{x/2}}+C_1\cos\left(\frac{\sqrt{3}}{2}x\right)e^{-x/2}+C_2\sin\left(\frac{\sqrt{3}}{2}x\right)e^{-x/2}$.
17. Solve $\frac{d^2y}{dx^2}-2\frac{dy}{dx}+y=\frac{xe^x}{2}$.
    Ans. $C_1e^x+C_2xe^x+\frac{1}{12}x^3e^x$.
18. Solve $\frac{d^2y}{dx^2}+y=\frac{1}{\cos x}$.
    Ans. $C_1\cos x-C_2\sin x+x\sin x+\cos x\ln(\cos x)$.
19. Solve $\frac{d^2y}{dx^2}-4\frac{dy}{dx}+3y=\frac{e^x}{1+e^x}$.
    Ans. $c_1e^x+c_2e^{3x}+\frac{e^x}{2}\ln(1+e^x)$.
20. Solve $\frac{d^2y}{dx^2}+y=\frac{1}{\cos x}$.
    Ans. $C_1\cos x-C_2\sin x+x\sin x+\cos x\ln(\cos x)$.
21. Solve $x^2\frac{d^2y}{dx^2}+x\frac{dy}{dx}-y=2x^2+2$, given that two solutions of the associated homogeneous DE are $x$ and $\frac{1}{x}$.
    Ans. $C_1x+\frac{C_2}{x}+\frac{2}{3}x^2-2$.
22. Solve $x\frac{d^2y}{dx^2}+(2-2x)\frac{dy}{dx}+(x-2)y=e^{2x}$, given that two solutions of the associated homogeneous DE are $e^x$ and $\frac{e^x}{x}$.
    Ans. $\frac{e^{2x}}{x}+C_1e^x+C_2\frac{e^x}{x}$.
23. Solve $x^2\frac{d^2y}{dx^2}-4x\frac{dy}{dx}+6y=x^{5/2}$, given that two solutions of the associated homogeneous DE are $x^2$ and $x^3$.
    Ans. $c_1x^2+c_2x^3-4x^{5/2}$.
24. Solve $\left(x^2+x\right)\frac{d^2y}{dx^2}+\left(2-x^2\right)\frac{dy}{dx}-(2+x)y=x(x+1)^2$, given that two solutions of the associated homogeneous DE are $e^x$ and $\frac{1}{x}$.
    Ans. $\frac{C_1}{x}+C_2e^x-x-\frac{1}{3}x^2-1$.
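The answers above can be spot-checked symbolically. A minimal sketch for Problem 1, assuming the third-party sympy library is available (the variable names are illustrative, not part of the text):

```python
import sympy as sp

x, C1, C2 = sp.symbols('x C1 C2')

# Problem 1: y'' - 3y' + 2y = e^x; candidate general solution from the answer key
y = C1*sp.exp(2*x) + C2*sp.exp(x) - x*sp.exp(x)

# substituting into the ODE should leave exactly the forcing term e^x
residual = y.diff(x, 2) - 3*y.diff(x) + 2*y
assert sp.simplify(residual - sp.exp(x)) == 0
```

The same three lines, with the ODE and answer swapped in, check any of the other problems.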

3.5 Advanced Problems

1. Suppose $x_1(t),x_2(t)$ are solutions of $p(t)\frac{d^2x}{dt^2}+q(t)\frac{dx}{dt}+r(t)x=0$ with Wronskian $W(x_1(t),x_2(t))$. Show that $p(t)\frac{dW}{dt}+q(t)W=0$.
2. Suppose $x_1(t),x_2(t)$ are solutions of $\frac{d^2x}{dt^2}+p(t)\frac{dx}{dt}+r(t)x=0$, with $p(t),r(t)$ continuous in $[t_1,t_2]$. Show that, for all $t,t_0\in(t_1,t_2)$ we have
\[ W(x_1,x_2)=W(x_1,x_2)\big|_{t=t_0}\,e^{-\int_{t_0}^{t}p(s)\,ds}. \]
3. Show that the set of solutions of $\frac{dx}{dt}=\sqrt{|x|}$ is not a vector space.
4. Suppose $x_1(t),\dots,x_N(t)$ are continuous in $[t_1,t_2]$ and define
\[ a_{ij}=\int_{t_1}^{t_2}x_i(t)x_j(t)\,dt. \]
Prove that $\{x_1(t),\dots,x_N(t)\}$ is linearly independent in $[t_1,t_2]$ iff
\[ \begin{vmatrix} a_{11} & a_{12} & \dots & a_{1N}\\ a_{21} & a_{22} & \dots & a_{2N}\\ \dots & \dots & \dots & \dots\\ a_{N1} & a_{N2} & \dots & a_{NN} \end{vmatrix}\neq0. \]
5. Show that $\frac{d^2}{dt^2}x(t)+a\frac{d}{dt}x(t)+bx(t)=F(t)$, with characteristic equation roots $z_1\neq z_2$, has the following solution:
\[ x(t)=c_1e^{z_1t}+c_2e^{z_2t}+\frac{e^{z_1t}}{z_1-z_2}\int e^{-z_1t}F(t)\,dt+\frac{e^{z_2t}}{z_2-z_1}\int e^{-z_2t}F(t)\,dt. \]
6. An Euler equation is a DE of the form
\[ t^n\frac{d^nx}{dt^n}+a_{n-1}t^{n-1}\frac{d^{n-1}x}{dt^{n-1}}+\dots+a_1t\frac{dx}{dt}+a_0x=0. \]
Show that every Euler equation can be converted to a constant coefficient linear DE by the substitution $t=e^u$.
7. Find a transformation which converts
\[ (pt+q)^2\frac{d^2x}{dt^2}+a(pt+q)\frac{dx}{dt}+bx=0 \]
to a constant coefficient DE and solve it. What conditions must $a,b$ satisfy?
8. Form constant coefficient homogeneous linear DEs with the smallest possible order and having as solutions: (i) $xe^{-3x}$, (ii) $x^2\sin x$, (iii) $\cos x+e^{-2x}$.
9. Show that a particular solution of
\[ \frac{d^nx}{dt^n}+a_{n-1}\frac{d^{n-1}x}{dt^{n-1}}+\dots+a_1\frac{dx}{dt}+a_0x=Ae^{ct} \]
is
\[ \widetilde{x}(t)=\frac{Ae^{ct}}{c^n+a_{n-1}c^{n-1}+\dots+a_1c+a_0} \]
under the condition $c^n+a_{n-1}c^{n-1}+\dots+a_1c+a_0\neq0$.
10. Suppose $\lim_{t\to\infty}f(t)=0$. Prove that all solutions of $\frac{d^2x}{dt^2}+f(t)x(t)=0$ are bounded.
11. Suppose $f(t)$ is continuously differentiable, $\lim_{t\to\infty}f(t)=0$ and $\int_0^\infty|f'(t)|\,dt<\infty$. Prove that all solutions of $\frac{d^2x}{dt^2}+(1+f(t))x(t)=0$ are bounded.
12. Suppose $x(t)$ is a solution of $t\frac{d^2x}{dt^2}+\frac{dx}{dt}+tx(t)=0$ in $(0,\infty)$. Show that $g(t)=t^{1/2}x(t)$ is bounded in $(0,\infty)$.
4. Systems of Linear ODEs

We study properties and solution methods for systems of ordinary differential equations, such as
\[ \frac{dx_1}{dt}=a_{11}x_1(t)+a_{12}x_2(t)+\dots+a_{1N}x_N(t)+f_1(t), \]
\[ \dots \]
\[ \frac{dx_N}{dt}=a_{N1}x_1(t)+a_{N2}x_2(t)+\dots+a_{NN}x_N(t)+f_N(t). \]

4.1 Theory and Examples

4.1.1 Notation. We will limit ourselves to the study of ODE systems with constant coefficients, such as
\[ \frac{dx}{dt}=a_{11}x(t)+a_{12}y(t)+f(t),\quad x(0)=x_0, \]
\[ \frac{dy}{dt}=a_{21}x(t)+a_{22}y(t)+g(t),\quad y(0)=y_0. \]
This can be rewritten, using the notation of Linear Algebra, as
\[ \frac{dz}{dt}=Az+u(t),\quad z(0)=z_0 \]
where
\[ z(t)=\begin{bmatrix}x(t)\\y(t)\end{bmatrix},\quad z_0=\begin{bmatrix}x_0\\y_0\end{bmatrix},\quad u(t)=\begin{bmatrix}f(t)\\g(t)\end{bmatrix},\quad A=\begin{bmatrix}a_{11}&a_{12}\\a_{21}&a_{22}\end{bmatrix}. \]
This notation can be extended to any $N\times N$ matrix $A$ and $N\times1$ vector $z(t)$.

4.1.2 Theorem. Given any $N\times N$ matrix $A$ and $N\times1$ vectors $z(t)$, $u(t)$, $z_0$, the system
\[ \frac{dz}{dt}=Az+u(t),\quad z(0)=z_0 \]
has solution
\[ z(t)=e^{At}z_0+\int_0^t e^{A(t-\tau)}u(\tau)\,d\tau. \]
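This formula can be checked symbolically on a concrete matrix. A minimal sketch for the homogeneous case $u=0$, assuming the third-party sympy library (the sample matrix is the one used in Example 4.1.5 below):

```python
import sympy as sp

t = sp.symbols('t')
A = sp.Matrix([[2, 1], [3, 4]])   # sample matrix, same A as in Example 4.1.5
z0 = sp.Matrix([1, 1])

# homogeneous case u = 0: the theorem gives z(t) = e^{At} z0
z = (A * t).exp() * z0

# z satisfies dz/dt = A z and the initial condition z(0) = z0
assert sp.simplify(z.diff(t) - A * z) == sp.zeros(2, 1)
assert z.subs(t, 0) == z0
```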

4.1.3 The above theorem is esthetically pleasing, as it is a generalization of the fact that the scalar system
\[ \frac{dx}{dt}=ax+u,\quad x(0)=x_0 \]
has solution
\[ x(t)=e^{at}x_0+\int_0^t e^{a(t-\tau)}u(\tau)\,d\tau. \]
However, application of the theorem requires the computation of
\[ e^{At}=I+At+\frac{1}{2!}A^2t^2+\dots \tag{4.1} \]

4.1.4 We can compute $e^{At}$ by rewriting (4.1) as a finite sum, using the Cayley-Hamilton Theorem. However, we will omit this approach and present an alternative solution method, which uses matrix diagonalization to decouple the ODEs of the system. We present the method by examples.
4.1.5 Example. Let us solve
\[ \frac{dx}{dt}=2x+y,\quad x(0)=1, \]
\[ \frac{dy}{dt}=3x+4y,\quad y(0)=1. \]
This can be rewritten as
\[ \frac{d}{dt}\begin{bmatrix}x(t)\\y(t)\end{bmatrix}=\begin{bmatrix}2&1\\3&4\end{bmatrix}\begin{bmatrix}x(t)\\y(t)\end{bmatrix},\qquad \begin{bmatrix}x(0)\\y(0)\end{bmatrix}=\begin{bmatrix}1\\1\end{bmatrix}. \]
Now, defining
\[ z(t)=\begin{bmatrix}x(t)\\y(t)\end{bmatrix},\quad z_0=\begin{bmatrix}1\\1\end{bmatrix},\quad A=\begin{bmatrix}2&1\\3&4\end{bmatrix}, \]
we have
\[ \frac{dz}{dt}=Az,\quad z(0)=z_0. \tag{4.2} \]
We will now diagonalize $A$ using eigenvalues and eigenvectors. Note that $A$ has eigenvalues and eigenvectors:
\[ 1\leftrightarrow\begin{bmatrix}-1\\1\end{bmatrix},\qquad 5\leftrightarrow\begin{bmatrix}\frac13\\1\end{bmatrix}. \]
Defining
\[ U=\begin{bmatrix}-1&\frac13\\1&1\end{bmatrix},\quad U^{-1}=\begin{bmatrix}-\frac34&\frac14\\\frac34&\frac34\end{bmatrix},\quad \Lambda=\begin{bmatrix}1&0\\0&5\end{bmatrix} \]
we have
\[ AU=U\Lambda\ \Rightarrow\ U^{-1}AU=\Lambda, \]
or, more specifically,
\[ \begin{bmatrix}-1&\frac13\\1&1\end{bmatrix}^{-1}\begin{bmatrix}2&1\\3&4\end{bmatrix}\begin{bmatrix}-1&\frac13\\1&1\end{bmatrix}=\begin{bmatrix}1&0\\0&5\end{bmatrix}. \]
Now, defining $\overline{z}=U^{-1}z$ and multiplying (4.2) from the left by $U^{-1}$ we have
\[ U^{-1}\frac{dz}{dt}=U^{-1}AU\,U^{-1}z\ \Rightarrow\ \frac{d\overline{z}}{dt}=\Lambda\overline{z}\ \Rightarrow\ \begin{bmatrix}\frac{d\overline{x}}{dt}\\[2pt]\frac{d\overline{y}}{dt}\end{bmatrix}=\begin{bmatrix}1&0\\0&5\end{bmatrix}\begin{bmatrix}\overline{x}\\\overline{y}\end{bmatrix}. \]
Similarly
\[ \overline{z}_0=U^{-1}z_0=\begin{bmatrix}-1&\frac13\\1&1\end{bmatrix}^{-1}\begin{bmatrix}1\\1\end{bmatrix}=\begin{bmatrix}-\frac12\\[2pt]\frac32\end{bmatrix}. \]
This breaks down very nicely to
\[ \frac{d\overline{x}}{dt}=\overline{x},\quad \overline{x}(0)=-\frac12, \]
\[ \frac{d\overline{y}}{dt}=5\overline{y},\quad \overline{y}(0)=\frac32, \]
which can be solved separately to get
\[ \overline{x}(t)=-\frac12e^t,\qquad \overline{y}(t)=\frac32e^{5t}. \]
To find $x(t),y(t)$ we use
\[ \begin{bmatrix}x(t)\\y(t)\end{bmatrix}=U\overline{z}=\begin{bmatrix}-1&\frac13\\1&1\end{bmatrix}\begin{bmatrix}-\frac12e^t\\[2pt]\frac32e^{5t}\end{bmatrix}=\begin{bmatrix}\frac12e^t+\frac12e^{5t}\\[2pt]\frac32e^{5t}-\frac12e^t\end{bmatrix}. \]
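The diagonalization steps and the final answer can be reproduced symbolically. A minimal sketch, assuming the third-party sympy library:

```python
import sympy as sp

t = sp.symbols('t')
A = sp.Matrix([[2, 1], [3, 4]])
U, Lam = A.diagonalize()              # A = U Lam U^{-1}; columns of U are eigenvectors
assert sp.simplify(U.inv() * A * U - Lam) == sp.zeros(2, 2)

# decoupled equations: each component of zbar = U^{-1} z evolves independently
zbar0 = U.inv() * sp.Matrix([1, 1])
zbar = sp.Matrix([sp.exp(Lam[i, i] * t) * zbar0[i] for i in range(2)])
z = U * zbar                          # back to the original coordinates

expected = sp.Matrix([(sp.exp(t) + sp.exp(5*t))/2, (3*sp.exp(5*t) - sp.exp(t))/2])
assert sp.simplify(z - expected) == sp.zeros(2, 1)
```

Note that sympy may order the eigenvalues differently from the text; the recombined solution $z=U\overline{z}$ is the same either way.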

4.1.6 Example. We now solve
\[ \frac{dx}{dt}=-x+y,\quad x(0)=1, \]
\[ \frac{dy}{dt}=x-y+1,\quad y(0)=2. \]
In matrix form we have
\[ \frac{d}{dt}\begin{bmatrix}x\\y\end{bmatrix}=\begin{bmatrix}-1&1\\1&-1\end{bmatrix}\begin{bmatrix}x\\y\end{bmatrix}+\begin{bmatrix}0\\1\end{bmatrix},\qquad \begin{bmatrix}x(0)\\y(0)\end{bmatrix}=\begin{bmatrix}1\\2\end{bmatrix}. \]
We find eigenvectors of the matrix
\[ A=\begin{bmatrix}-1&1\\1&-1\end{bmatrix}. \]
They are
\[ \begin{bmatrix}1\\1\end{bmatrix}\leftrightarrow0,\qquad \begin{bmatrix}-1\\1\end{bmatrix}\leftrightarrow-2. \]
Hence we have
\[ U=\begin{bmatrix}1&-1\\1&1\end{bmatrix},\qquad U^{-1}=\begin{bmatrix}\frac12&\frac12\\-\frac12&\frac12\end{bmatrix} \]
and
\[ U^{-1}AU=\begin{bmatrix}\frac12&\frac12\\-\frac12&\frac12\end{bmatrix}\begin{bmatrix}-1&1\\1&-1\end{bmatrix}\begin{bmatrix}1&-1\\1&1\end{bmatrix}=\begin{bmatrix}0&0\\0&-2\end{bmatrix}=\Lambda. \]
Furthermore,
\[ \overline{u}=U^{-1}u=\begin{bmatrix}\frac12&\frac12\\-\frac12&\frac12\end{bmatrix}\begin{bmatrix}0\\1\end{bmatrix}=\begin{bmatrix}\frac12\\[2pt]\frac12\end{bmatrix},\qquad \overline{z}_0=U^{-1}z_0=\begin{bmatrix}\frac12&\frac12\\-\frac12&\frac12\end{bmatrix}\begin{bmatrix}1\\2\end{bmatrix}=\begin{bmatrix}\frac32\\[2pt]\frac12\end{bmatrix}. \]
Hence we have the system
\[ \begin{bmatrix}\frac{d\overline{x}}{dt}\\[2pt]\frac{d\overline{y}}{dt}\end{bmatrix}=\begin{bmatrix}0&0\\0&-2\end{bmatrix}\begin{bmatrix}\overline{x}\\\overline{y}\end{bmatrix}+\begin{bmatrix}\frac12\\[2pt]\frac12\end{bmatrix},\qquad \begin{bmatrix}\overline{x}(0)\\\overline{y}(0)\end{bmatrix}=\begin{bmatrix}\frac32\\[2pt]\frac12\end{bmatrix} \]
or
\[ \frac{d\overline{x}}{dt}=\frac12,\quad \overline{x}(0)=\frac32\ \Rightarrow\ \overline{x}(t)=\frac t2+\frac32, \]
\[ \frac{d\overline{y}}{dt}=-2\overline{y}+\frac12,\quad \overline{y}(0)=\frac12\ \Rightarrow\ \overline{y}(t)=\frac14e^{-2t}+\frac14. \]
Then
\[ \begin{bmatrix}x(t)\\y(t)\end{bmatrix}=U\begin{bmatrix}\overline{x}(t)\\\overline{y}(t)\end{bmatrix}=\begin{bmatrix}1&-1\\1&1\end{bmatrix}\begin{bmatrix}\frac t2+\frac32\\[2pt]\frac14e^{-2t}+\frac14\end{bmatrix}=\begin{bmatrix}\frac t2-\frac14e^{-2t}+\frac54\\[2pt]\frac t2+\frac14e^{-2t}+\frac74\end{bmatrix}. \]
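The final answer of this nonhomogeneous example can be verified directly. A minimal sketch, assuming the third-party sympy library:

```python
import sympy as sp

t = sp.symbols('t')
x = t/2 - sp.exp(-2*t)/4 + sp.Rational(5, 4)
y = t/2 + sp.exp(-2*t)/4 + sp.Rational(7, 4)

# the pair (x, y) satisfies the system of Example 4.1.6
assert sp.simplify(x.diff(t) - (-x + y)) == 0
assert sp.simplify(y.diff(t) - (x - y + 1)) == 0
assert x.subs(t, 0) == 1 and y.subs(t, 0) == 2
```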
4.1.7 The method can be generalized and applied to the system
\[ \frac{dz}{dt}=Az+u(t),\quad z(0)=z_0 \tag{4.3} \]
for any $N\times N$ matrix $A$, provided that $A$ is diagonalizable, i.e., it can be transformed to a diagonal matrix $\Lambda$; this is true iff $A$ has $N$ linearly independent eigenvectors.

4.1.8 Not every matrix is diagonalizable. If $A$ is not diagonalizable, the method cannot be applied. However, every matrix can be transformed to Jordan canonical form, and a variation of the method can then be applied (but the resulting ODEs will not be fully decoupled).
4.1.9 Example. Let us solve
\[ \frac{dx}{dt}=x+2y,\quad x(0)=1, \]
\[ \frac{dy}{dt}=y,\quad y(0)=2. \]
In matrix form we have
\[ \frac{d}{dt}\begin{bmatrix}x\\y\end{bmatrix}=\begin{bmatrix}1&2\\0&1\end{bmatrix}\begin{bmatrix}x\\y\end{bmatrix},\qquad \begin{bmatrix}x(0)\\y(0)\end{bmatrix}=\begin{bmatrix}1\\2\end{bmatrix}. \]
The matrix is not diagonalizable, but is already in Jordan canonical form. We first solve
\[ \frac{dy}{dt}=y,\quad y(0)=2 \]
and get
\[ y(t)=2e^t. \]
Then we have
\[ \frac{dx}{dt}=x+4e^t,\quad x(0)=1, \]
which has solution
\[ x(t)=e^t+4te^t. \]
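The back-substitution solution of this Jordan-form example checks out symbolically. A minimal sketch, assuming the third-party sympy library:

```python
import sympy as sp

t = sp.symbols('t')
x = sp.exp(t) + 4*t*sp.exp(t)
y = 2*sp.exp(t)

assert sp.simplify(x.diff(t) - (x + 2*y)) == 0   # dx/dt = x + 2y
assert sp.simplify(y.diff(t) - y) == 0           # dy/dt = y
assert (x.subs(t, 0), y.subs(t, 0)) == (1, 2)
```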
4.1.10 Now we will present another solution method, which reduces the system of first order DEs to a single $N$-th order DE. Again we present the method by examples.

4.1.11 Example. Let us solve
\[ \frac{dx}{dt}=2x+y,\quad x(0)=1, \]
\[ \frac{dy}{dt}=3x+4y,\quad y(0)=1 \]
by transforming the system to companion form. We define
\[ \mathbf{1}=\begin{bmatrix}1&0\end{bmatrix},\quad S=\begin{bmatrix}\mathbf{1}\\\mathbf{1}A\end{bmatrix},\quad \widetilde{A}=SAS^{-1},\quad \widetilde{z}=Sz \]
and, multiplying from the left, we get
\[ \frac{dz}{dt}=Az\ \Rightarrow\ \frac{dSz}{dt}=SAS^{-1}Sz\ \Rightarrow\ \frac{d\widetilde{z}}{dt}=\widetilde{A}\widetilde{z}. \]
So we have
\[ \mathbf{1}A=\begin{bmatrix}1&0\end{bmatrix}\begin{bmatrix}2&1\\3&4\end{bmatrix}=\begin{bmatrix}2&1\end{bmatrix},\quad S=\begin{bmatrix}1&0\\2&1\end{bmatrix},\quad S^{-1}=\begin{bmatrix}1&0\\-2&1\end{bmatrix} \]
and
\[ \widetilde{A}=\begin{bmatrix}1&0\\2&1\end{bmatrix}\begin{bmatrix}2&1\\3&4\end{bmatrix}\begin{bmatrix}1&0\\-2&1\end{bmatrix}=\begin{bmatrix}0&1\\-5&6\end{bmatrix},\qquad \begin{bmatrix}\widetilde{x}_0\\\widetilde{y}_0\end{bmatrix}=\begin{bmatrix}1&0\\2&1\end{bmatrix}\begin{bmatrix}1\\1\end{bmatrix}=\begin{bmatrix}1\\3\end{bmatrix}. \]
Hence we have
\[ \frac{d}{dt}\begin{bmatrix}\widetilde{x}\\\widetilde{y}\end{bmatrix}=\begin{bmatrix}0&1\\-5&6\end{bmatrix}\begin{bmatrix}\widetilde{x}\\\widetilde{y}\end{bmatrix},\qquad \begin{bmatrix}\widetilde{x}(0)\\\widetilde{y}(0)\end{bmatrix}=\begin{bmatrix}1\\3\end{bmatrix}. \]
This implies that
\[ \frac{d\widetilde{x}}{dt}=\widetilde{y},\ \widetilde{x}(0)=1,\qquad \frac{d\widetilde{y}}{dt}=-5\widetilde{x}+6\widetilde{y},\ \widetilde{y}(0)=3,\qquad \frac{d^2\widetilde{x}}{dt^2}=\frac{d\widetilde{y}}{dt}. \]
Combining these we get
\[ \frac{d^2\widetilde{x}}{dt^2}=-5\widetilde{x}+6\frac{d\widetilde{x}}{dt},\quad \widetilde{x}(0)=1,\ \widetilde{x}'(0)=3, \]
which has solution
\[ \widetilde{x}(t)=\frac12e^t+\frac12e^{5t},\qquad \widetilde{y}(t)=\frac{d\widetilde{x}}{dt}=\frac12e^t+\frac52e^{5t}. \]
Then finally
\[ \begin{bmatrix}x(t)\\y(t)\end{bmatrix}=S^{-1}\begin{bmatrix}\widetilde{x}(t)\\\widetilde{y}(t)\end{bmatrix}=\begin{bmatrix}1&0\\-2&1\end{bmatrix}\begin{bmatrix}\frac12e^t+\frac12e^{5t}\\[2pt]\frac12e^t+\frac52e^{5t}\end{bmatrix}=\begin{bmatrix}\frac12e^t+\frac12e^{5t}\\[2pt]\frac32e^{5t}-\frac12e^t\end{bmatrix}. \]
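The companion-form computation above can be reproduced in a few lines. A minimal sketch, assuming the third-party sympy library:

```python
import sympy as sp

one = sp.Matrix([[1, 0]])                  # the row vector called "1" in the text
A = sp.Matrix([[2, 1], [3, 4]])
S = sp.Matrix.vstack(one, one * A)         # S = [1; 1A]
Atilde = S * A * S.inv()

# Atilde is in companion form, as computed in the example
assert Atilde == sp.Matrix([[0, 1], [-5, 6]])
```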
4.1.12 Note how nicely we got a single second order ODE from the system of two
first order ODEs; also note that one variable is the derivative of the other.
4.1.13 Example. Let us now solve
\[ \frac{dx}{dt}=2x+y+1,\quad x(0)=1, \]
\[ \frac{dy}{dt}=3x+4y,\quad y(0)=1. \]
We have
\[ \mathbf{1}A=\begin{bmatrix}1&0\end{bmatrix}\begin{bmatrix}2&1\\3&4\end{bmatrix}=\begin{bmatrix}2&1\end{bmatrix},\quad S=\begin{bmatrix}1&0\\2&1\end{bmatrix},\quad S^{-1}=\begin{bmatrix}1&0\\-2&1\end{bmatrix} \]
and
\[ \widetilde{A}=\begin{bmatrix}1&0\\2&1\end{bmatrix}\begin{bmatrix}2&1\\3&4\end{bmatrix}\begin{bmatrix}1&0\\-2&1\end{bmatrix}=\begin{bmatrix}0&1\\-5&6\end{bmatrix}, \]
\[ \begin{bmatrix}\widetilde{x}_0\\\widetilde{y}_0\end{bmatrix}=\begin{bmatrix}1&0\\2&1\end{bmatrix}\begin{bmatrix}1\\1\end{bmatrix}=\begin{bmatrix}1\\3\end{bmatrix},\qquad \widetilde{u}=\begin{bmatrix}1&0\\2&1\end{bmatrix}\begin{bmatrix}1\\0\end{bmatrix}=\begin{bmatrix}1\\2\end{bmatrix}. \]
Hence we have
\[ \frac{d}{dt}\begin{bmatrix}\widetilde{x}\\\widetilde{y}\end{bmatrix}=\begin{bmatrix}0&1\\-5&6\end{bmatrix}\begin{bmatrix}\widetilde{x}\\\widetilde{y}\end{bmatrix}+\begin{bmatrix}1\\2\end{bmatrix},\qquad \begin{bmatrix}\widetilde{x}(0)\\\widetilde{y}(0)\end{bmatrix}=\begin{bmatrix}1\\3\end{bmatrix}. \]
This implies that
\[ \frac{d\widetilde{x}}{dt}=\widetilde{y}+1,\ \widetilde{x}(0)=1,\qquad \frac{d\widetilde{y}}{dt}=-5\widetilde{x}+6\widetilde{y}+2,\ \widetilde{y}(0)=3,\qquad \frac{d^2\widetilde{x}}{dt^2}=\frac{d\widetilde{y}}{dt}. \]
Combining these we get
\[ \frac{d^2\widetilde{x}}{dt^2}=-5\widetilde{x}+6\frac{d\widetilde{x}}{dt}-6+2,\quad \widetilde{x}(0)=1,\quad \widetilde{x}'(0)=\widetilde{y}(0)+1=4, \]
which has solution
\[ \widetilde{x}(t)=\frac54e^t+\frac{11}{20}e^{5t}-\frac45,\qquad \widetilde{y}(t)=\frac{d\widetilde{x}}{dt}-1=\frac54e^t+\frac{11}{4}e^{5t}-1. \]
Then finally
\[ \begin{bmatrix}x(t)\\y(t)\end{bmatrix}=S^{-1}\begin{bmatrix}\widetilde{x}(t)\\\widetilde{y}(t)\end{bmatrix}=\begin{bmatrix}1&0\\-2&1\end{bmatrix}\begin{bmatrix}\frac54e^t+\frac{11}{20}e^{5t}-\frac45\\[2pt]\frac54e^t+\frac{11}{4}e^{5t}-1\end{bmatrix}=\begin{bmatrix}\frac54e^t+\frac{11}{20}e^{5t}-\frac45\\[2pt]\frac{33}{20}e^{5t}-\frac54e^t+\frac35\end{bmatrix}. \]

4.1.14 This method can also be generalized to any system
\[ \frac{dz}{dt}=Az+u(t),\quad z(0)=z_0 \tag{4.4} \]
with $N\times N$ matrix $A$. Again there is a condition: the so-called observability matrix
\[ S=\begin{bmatrix}\mathbf{1}\\\mathbf{1}A\\\dots\\\mathbf{1}A^{N-1}\end{bmatrix} \]
must be invertible. When $S$ is not invertible (we say the system is unobservable) the method cannot be applied.
4.1.15 There is a "reverse" of the above method, by which we can write any $N$-th order linear ODE as a single first order vector ODE. We illustrate this by an example. Consider
\[ \frac{d^3x}{dt^3}-3\frac{d^2x}{dt^2}+3\frac{dx}{dt}-x=u(t) \tag{4.5} \]
and define
\[ z_1=x,\quad z_2=\frac{dx}{dt},\quad z_3=\frac{d^2x}{dt^2}. \]
Then (4.5) is equivalent to
\[ \frac{d}{dt}\begin{bmatrix}z_1\\z_2\\z_3\end{bmatrix}=\begin{bmatrix}0&1&0\\0&0&1\\1&-3&3\end{bmatrix}\begin{bmatrix}z_1\\z_2\\z_3\end{bmatrix}+\begin{bmatrix}0\\0\\u(t)\end{bmatrix}. \]
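The companion matrix built this way always carries the scalar equation's characteristic polynomial. A minimal sketch for (4.5), assuming the third-party sympy library:

```python
import sympy as sp

r = sp.symbols('r')
C = sp.Matrix([[0, 1, 0],
               [0, 0, 1],
               [1, -3, 3]])   # companion matrix of x''' - 3x'' + 3x' - x = u(t)

# det(rI - C) recovers the characteristic equation r^3 - 3r^2 + 3r - 1 = (r - 1)^3
assert sp.expand(C.charpoly(r).as_expr()) == sp.expand(r**3 - 3*r**2 + 3*r - 1)
```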
4.1.16 There is a dual method of reducing the system (4.2) to a single ODE, which makes use of the controllability matrix
\[ T=\begin{bmatrix}\mathbf{1}^\top & A\mathbf{1}^\top & \dots & A^{N-1}\mathbf{1}^\top\end{bmatrix}. \]
How do you think it works?
h

4.2 Solved Problems

4.3 Unsolved Problems

1. Solve the system of DEs: $\frac{dx}{dt}=-x+y$, $\frac{dy}{dt}=x-y$, $x(0)=4$, $y(0)=1$.
   Ans. $x(t)=\frac32e^{-2t}+\frac52$, $y(t)=\frac52-\frac32e^{-2t}$.
2. Solve the system of DEs: $\frac{dx}{dt}=-x+y+e^t$, $\frac{dy}{dt}=x-y-e^t$, $x(0)=0$, $y(0)=1$.
   Ans. $y(t)=\frac56e^{-2t}-\frac13e^t+\frac12$, $x(t)=\frac13e^t-\frac56e^{-2t}+\frac12$.
3. Solve the system of DEs: $\frac{dx}{dt}=-x+y+e^t$, $\frac{dy}{dt}=x-y-e^t$, $x(0)=4$, $y(0)=1$.
   Ans. $y(t)=\frac52-\frac13e^t-\frac76e^{-2t}$, $x(t)=\frac13e^t+\frac76e^{-2t}+\frac52$.
4. Solve the system of DEs: $\frac{dx}{dt}=-x+y+\cos t$, $\frac{dy}{dt}=x-y$, $x(0)=0$, $y(0)=0$.
   Ans. $y(t)=\frac25\sin t-\frac15\cos t+\frac15e^{-2t}$, $x(t)=\frac15\cos t+\frac35\sin t-\frac15e^{-2t}$.
5. Solve the system of DEs: $\frac{dx}{dt}=-4x+6y$, $\frac{dy}{dt}=-3x+5y$, $x(0)=1$, $y(0)=2$.
   Ans. $x(t)=3e^{2t}-2e^{-t}$, $y(t)=3e^{2t}-e^{-t}$.
6. Solve the system of DEs: $\frac{dx}{dt}=-4x+6y+1$, $\frac{dy}{dt}=-3x+5y+2$, $x(0)=0$, $y(0)=1$.
   Ans. $y(t)=\frac72e^{2t}-\frac52$, $x(t)=\frac72e^{2t}-\frac72$.
7. Solve the system of DEs: $\frac{dx}{dt}=-4x+6y+e^t$, $\frac{dy}{dt}=-3x+5y$, $x(0)=1$, $y(0)=0$.
   Ans. $y(t)=\frac32e^t+\frac12e^{-t}-2e^{2t}$, $x(t)=2e^t+e^{-t}-2e^{2t}$.
8. Solve the system of DEs: $\frac{dx}{dt}=-4x+6y$, $\frac{dy}{dt}=-3x+5y+e^{-t}$, $x(0)=1$, $y(0)=2$.
   Ans. $y(t)=\frac{11}{3}e^{2t}-\frac53e^{-t}-te^{-t}$, $x(t)=\frac{11}{3}e^{2t}-\frac83e^{-t}-2te^{-t}$.
9. Solve the system of DEs: $\frac{dx}{dt}=3x+4y$, $\frac{dy}{dt}=-x+7y$, $x(0)=1$, $y(0)=0$.
   Ans. $x(t)=e^{5t}-2te^{5t}$, $y(t)=-te^{5t}$.
10. Solve the system of DEs: $\frac{dx}{dt}=3x+4y$, $\frac{dy}{dt}=-x+7y$, $x(0)=1$, $y(0)=1$.
    Ans. $x(t)=e^{5t}+2te^{5t}$, $y(t)=e^{5t}+te^{5t}$.
11. Solve the system of DEs: $\frac{dx}{dt}=3x+4y$, $\frac{dy}{dt}=-x+7y+1$, $x(0)=0$, $y(0)=1$.
    Ans. $y(t)=\frac{28}{25}e^{5t}+\frac{12}{5}te^{5t}-\frac{3}{25}$, $x(t)=\frac{24}{5}te^{5t}-\frac{4}{25}e^{5t}+\frac{4}{25}$.
12. Solve the system of DEs: $\frac{dx}{dt}=3x+4y$, $\frac{dy}{dt}=-x+7y+e^{-t}$, $x(0)=1$, $y(0)=2$.
    Ans. $y(t)=\frac{19}{9}e^{5t}-\frac19e^{-t}+\frac{10}{3}te^{5t}$, $x(t)=\frac19e^{-t}+\frac89e^{5t}+\frac{20}{3}te^{5t}$.
13. Solve the system of DEs: $\frac{dx}{dt}=-y$, $\frac{dy}{dt}=x-2y$, $x(0)=1$, $y(0)=2$.
    Ans. $x(t)=e^{-t}-te^{-t}$, $y(t)=2e^{-t}-te^{-t}$.
14. Solve the system of DEs: $\frac{dx}{dt}=-y+1$, $\frac{dy}{dt}=x-2y$, $x(0)=1$, $y(0)=2$.
    Ans. $y(t)=e^{-t}-2te^{-t}+1$, $x(t)=2-2te^{-t}-e^{-t}$.
15. Solve the system of DEs: $\frac{dx}{dt}=-y+\sin t$, $\frac{dy}{dt}=x-2y$, $x(0)=1$, $y(0)=2$.
    Ans. $y(t)=\frac52e^{-t}-\frac12te^{-t}-\frac12\cos t$, $x(t)=\frac12\sin t-\cos t-\frac12te^{-t}+2e^{-t}$.
16. Solve the system of DEs: $\frac{dx}{dt}=-y+1$, $\frac{dy}{dt}=x-2y+1$, $x(0)=1$, $y(0)=0$.
    Ans. $y(t)=te^{-t}-e^{-t}+1$, $x(t)=te^{-t}+1$.
17. Solve the system of DEs: $\frac{dx}{dt}=x+2y$, $\frac{dy}{dt}=2x+y+1$, $x(0)=1$, $y(0)=0$.
    Ans. $y(t)=\frac23e^{3t}-e^{-t}+\frac13$, $x(t)=e^{-t}+\frac23e^{3t}-\frac23$.
18. Solve the system of DEs: $\frac{dx}{dt}=x+2y$, $\frac{dy}{dt}=2x+y$, $x(0)=1$, $y(0)=1$.
    Ans. $x(t)=e^{3t}$, $y(t)=e^{3t}$.
19. Solve the system of DEs: $\frac{dx}{dt}=x+2y+e^t$, $\frac{dy}{dt}=2x+y+1$, $x(0)=1$, $y(0)=0$.
    Ans. $y(t)=\frac{11}{12}e^{3t}-\frac34e^{-t}-\frac12e^t+\frac13$, $x(t)=\frac34e^{-t}+\frac{11}{12}e^{3t}-\frac23$.
20. Solve the system of DEs: $\frac{dx}{dt}=x+2y+e^{3t}$, $\frac{dy}{dt}=2x+y+1$, $x(0)=1$, $y(0)=0$.
    Ans. $y(t)=\frac{13}{24}e^{3t}-\frac78e^{-t}+\frac12te^{3t}+\frac13$, $x(t)=\frac78e^{-t}+\frac{19}{24}e^{3t}+\frac12te^{3t}-\frac23$.
21. Solve the system of DEs: $\frac{dx}{dt}=x+4y$, $\frac{dy}{dt}=x+y$, $x(0)=1$, $y(0)=0$.
    Ans. $x(t)=\frac12e^{-t}+\frac12e^{3t}$, $y(t)=\frac14e^{3t}-\frac14e^{-t}$.
22. Solve the system of DEs: $\frac{dx}{dt}=x+4y-e^t$, $\frac{dy}{dt}=x+y$, $x(0)=1$, $y(0)=1$.
    Ans. $y(t)=\frac14e^t+\frac18e^{-t}+\frac58e^{3t}$, $x(t)=\frac54e^{3t}-\frac14e^{-t}$.
23. Solve the system of DEs: $\frac{dx}{dt}=x+4y-e^{-t}$, $\frac{dy}{dt}=x+y$, $x(0)=1$, $y(0)=0$.
    Ans. $y(t)=\frac{3}{16}e^{3t}-\frac{3}{16}e^{-t}+\frac14te^{-t}$, $x(t)=\frac58e^{-t}+\frac38e^{3t}-\frac12te^{-t}$.
24. Solve the system of DEs: $\frac{dx}{dt}=x+4y+1$, $\frac{dy}{dt}=x+y-1$, $x(0)=1$, $y(0)=2$.
    Ans. $y(t)=\frac32e^{-t}+\frac76e^{3t}-\frac23$, $x(t)=\frac73e^{3t}-3e^{-t}+\frac53$.
25. Solve the system of DEs: $\frac{dx}{dt}=-x-4y+1$, $\frac{dy}{dt}=-x-y-1$, $x(0)=1$, $y(0)=2$.
    Ans. $y(t)=\frac43e^{-3t}+\frac23$, $x(t)=\frac83e^{-3t}-\frac53$.
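The answers above can be spot-checked symbolically. A minimal sketch for Problem 1, assuming the third-party sympy library:

```python
import sympy as sp

t = sp.symbols('t')
# Problem 1: dx/dt = -x + y, dy/dt = x - y, x(0) = 4, y(0) = 1
x = sp.Rational(3, 2)*sp.exp(-2*t) + sp.Rational(5, 2)
y = sp.Rational(5, 2) - sp.Rational(3, 2)*sp.exp(-2*t)

assert sp.simplify(x.diff(t) - (-x + y)) == 0
assert sp.simplify(y.diff(t) - (x - y)) == 0
assert (x.subs(t, 0), y.subs(t, 0)) == (4, 1)
```

Replacing the right-hand sides, initial values, and candidate solutions checks any of the other problems the same way.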

4.4 Advanced Problems


5. Series Solutions of ODEs

This chapter is devoted to the solution of ODEs by expansion into infinite series.

5.1 Theory and Examples

5.1.1 In this chapter we present methods to solve the linear ODEs
\[ P_0(x)\frac{d^2y}{dx^2}+P_1(x)\frac{dy}{dx}+P_2(x)y=0 \tag{5.1} \]
\[ P_0(x)\frac{d^2y}{dx^2}+P_1(x)\frac{dy}{dx}+P_2(x)y=f(x) \tag{5.2} \]
using infinite series expansions. In what follows we always assume $P_0(x)$, $P_1(x)$, $P_2(x)$ have no common factors. Note that we can always rewrite the above as
\[ \frac{d^2y}{dx^2}+\frac{P_1(x)}{P_0(x)}\frac{dy}{dx}+\frac{P_2(x)}{P_0(x)}y=0 \tag{5.3} \]
\[ \frac{d^2y}{dx^2}+\frac{P_1(x)}{P_0(x)}\frac{dy}{dx}+\frac{P_2(x)}{P_0(x)}y=\frac{f(x)}{P_0(x)} \tag{5.4} \]

5.1.2 Definition. We call $x_0$ an ordinary point of (5.1) iff $P_0(x_0)\neq0$ and a singular point iff $P_0(x_0)=0$.

5.1.3 Theorem. Suppose that
1. $P_0(x),P_1(x),P_2(x)$ are polynomials with no common factor,
2. $x_0$ is an ordinary point of (5.1) and
3. $\rho$ is the distance between $x_0$ and the closest zero of $P_0(x)$ (with $\rho=\infty$ if $P_0(x)$ has no zeros).
Then every solution of (5.1) can be written as a power series
\[ \sum_{n=0}^\infty a_n(x-x_0)^n \]
which converges for $|x-x_0|<\rho$.



5.1.4 Example. We start with a simple example. Let us solve
\[ \frac{d^2y}{dx^2}+y=0. \]
Of course we know that the general solution is
\[ c_1\cos x+c_2\sin x, \]
but let us see what the power series approach yields. We assume that
\[ y=a_0+a_1x+a_2x^2+a_3x^3+a_4x^4+\dots \]
Then
\[ \frac{dy}{dx}=a_1+2a_2x+3a_3x^2+4a_4x^3+5a_5x^4+\dots \]
\[ \frac{d^2y}{dx^2}=2a_2+6a_3x+12a_4x^2+20a_5x^3+\dots \]
So we have
\[ 0=\frac{d^2y}{dx^2}+y=(a_0+2a_2)+(a_1+6a_3)x+(a_2+12a_4)x^2+(a_3+20a_5)x^3+\dots \]
and we get
\[ a_2=-\frac12a_0,\quad a_3=-\frac16a_1,\quad a_4=-\frac1{12}a_2=\frac1{24}a_0,\quad a_5=-\frac1{20}a_3=\frac1{120}a_1,\quad\dots \]
and so
\[ y(x)=a_0\left(1-\frac12x^2+\frac1{24}x^4-\dots\right)+a_1\left(x-\frac16x^3+\frac1{120}x^5-\dots\right). \]
More generally:
\[ (n+2)(n+1)a_{n+2}=-a_n\ \Rightarrow\ a_{n+2}=-\frac1{(n+1)(n+2)}a_n \]
and so
\[ y(x)=a_0\left(\sum_{n\in\{0,2,4,\dots\}}\frac{(-1)^{n/2}}{n!}x^n\right)+a_1\left(\sum_{n\in\{1,3,5,\dots\}}\frac{(-1)^{(n-1)/2}}{n!}x^n\right) \]
which, as expected, equals
\[ a_0\cos x+a_1\sin x. \]
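The recursion can be run mechanically and compared against the known answer. A minimal sketch, assuming the third-party sympy library:

```python
import sympy as sp

x = sp.symbols('x')
a0, a1 = sp.symbols('a0 a1')

# build coefficients from a_{n+2} = -a_n / ((n+1)(n+2))
a = [a0, a1]
for n in range(8):
    a.append(-a[n] / ((n + 1) * (n + 2)))

y = sum(a[n] * x**n for n in range(10))

# the truncated series agrees with a0*cos(x) + a1*sin(x) through order x^9
target = sp.series(a0*sp.cos(x) + a1*sp.sin(x), x, 0, 10).removeO()
assert sp.expand(y - target) == 0
```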

5.1.5 Example. Now let us solve
\[ \frac{d^2y}{dx^2}+y=x,\quad y(0)=0,\quad y'(0)=1. \]
Assuming
\[ y=a_0+a_1x+a_2x^2+a_3x^3+a_4x^4+\dots \]
we get, as before, that
\[ x=\frac{d^2y}{dx^2}+y=(a_0+2a_2)+(a_1+6a_3)x+(a_2+12a_4)x^2+(a_3+20a_5)x^3+\dots \]
From $y(0)=0$ we get
\[ a_0=0,\quad a_2=-\frac12a_0=0,\quad a_4=-\frac1{12}a_2=0,\quad\dots \]
From $y'(0)=1$ we get
\[ a_1=1,\quad a_1+6a_3=1\ \Rightarrow\ a_3=0,\quad a_3+20a_5=0\ \Rightarrow\ a_5=0,\quad\dots \]
Hence we have
\[ y=x. \]

5.1.6 Example. Now we will solve
\[ \frac{d^2y}{dx^2}-x\frac{dy}{dx}+2y=0. \]
Assuming
\[ y=a_0+a_1x+a_2x^2+a_3x^3+\dots \]
we get
\[ 2y=2a_0+2a_1x+2a_2x^2+2a_3x^3+2a_4x^4+\dots \]
\[ -x\frac{dy}{dx}=-a_1x-2a_2x^2-3a_3x^3-4a_4x^4-\dots \]
\[ \frac{d^2y}{dx^2}=2a_2+6a_3x+12a_4x^2+\dots \]
So we have
\[ 0=\frac{d^2y}{dx^2}-x\frac{dy}{dx}+2y=(2a_0+2a_2)+(a_1+6a_3)x+12a_4x^2+\dots \]
and we get
\[ a_2=-a_0,\quad a_3=-\frac1{2\cdot3}a_1,\quad a_4=0,\quad a_5=\frac1{4\cdot5}a_3=-\frac1{2\cdot3\cdot4\cdot5}a_1,\quad a_6=0, \]
\[ a_7=\frac{3}{6\cdot7}a_5=-\frac{1\cdot3}{2\cdot3\cdot4\cdot5\cdot6\cdot7}a_1,\quad a_8=0,\quad\dots \]
Finally:
\[ y(x)=a_0\left(1-x^2\right)+a_1\left(x-\frac16x^3-\frac1{120}x^5-\dots\right). \]
5.1.7 Example. To solve
\[ \frac{d^2y}{dx^2}-(x-2)\frac{dy}{dx}+2y=0 \]
in powers of $(x-2)$ we first perform the change of variable $u=x-2$. Since $\frac{dy}{du}=\frac{dy}{dx}$ and $\frac{d^2y}{du^2}=\frac{d^2y}{dx^2}$ (why?) the DE becomes
\[ \frac{d^2y}{du^2}-u\frac{dy}{du}+2y=0. \]
This is the DE we solved in the previous example, where we got
\[ y(u)=a_0\left(1-u^2\right)+a_1\left(u-\frac16u^3-\frac1{120}u^5-\dots\right) \]
or
\[ y(x)=a_0\left(1-(x-2)^2\right)+a_1\left((x-2)-\frac16(x-2)^3-\frac1{120}(x-2)^5-\dots\right) \]
\[ =a_0\left(-x^2+4x-3\right)+a_1\left((x-2)-\frac16(x-2)^3-\frac1{120}(x-2)^5-\dots\right). \]
5.1.8 Definition. We call $x_0$ a regular singular point of (5.1) iff
1. it can be written in the form
\[ (x-x_0)^2A(x)\frac{d^2y}{dx^2}+(x-x_0)B(x)\frac{dy}{dx}+C(x)y=0 \]
where $A(x)$, $B(x)$, $C(x)$ are polynomials and
2. $A(x_0)\neq0$.
Otherwise we call $x_0$ an irregular singular point of (5.1).

5.1.9 Theorem (Frobenius). Suppose that $x_0=0$ is a regular singular point of
\[ x^2A(x)\frac{d^2y}{dx^2}+xB(x)\frac{dy}{dx}+C(x)y=0, \tag{5.5} \]
where $A(x),B(x),C(x)$ are polynomials with no common factor. Then there exist constants $\rho>0$ and $\lambda$ such that (5.5) has at least one solution of the form
\[ y(x)=x^\lambda\sum_{n=0}^\infty a_nx^n \]
valid in the interval $(0,\rho)$.
5.1.10 Example. Let us solve
\[ 8x^2y''+10xy'+(x-1)y=0 \tag{5.6} \]
which can be rewritten as
\[ y''+\frac{5}{4x}y'+\frac{x-1}{8x^2}y=0. \tag{5.7} \]
Clearly $x_0=0$ is a singular point of (5.7) but, since
\[ xP(x)=x\,\frac{5}{4x}=\frac54\quad\text{analytic},\qquad x^2Q(x)=x^2\,\frac{x-1}{8x^2}=\frac{x-1}{8}\quad\text{analytic}, \]
it is a regular singular point. So we will try to find a solution of the form
\[ y(x)=x^\lambda\sum_{n=0}^\infty a_nx^n=\sum_{n=0}^\infty a_nx^{n+\lambda}. \tag{5.8} \]
Taking derivatives of (5.8) and substituting in (5.6) we get
\[ 0=x^\lambda\,[8\lambda(\lambda-1)+10\lambda-1]\,a_0 +x^{\lambda+1}\bigl([8(\lambda+1)\lambda+10(\lambda+1)-1]\,a_1+a_0\bigr) +x^{\lambda+2}\bigl([8(\lambda+2)(\lambda+1)+10(\lambda+2)-1]\,a_2+a_1\bigr)+\dots \]
This can be satisfied if the indicial equation
\[ 8\lambda(\lambda-1)+10\lambda-1=0 \]
holds (and then we specify the $a_n$'s by a recursion, as previously). Now, the indicial equation can be rewritten as
\[ 8\lambda^2+2\lambda-1=0, \]
which has solutions $\lambda_1=\frac14$, $\lambda_2=-\frac12$. Furthermore, we get the recurrence relation
\[ a_n=\frac{-1}{[4(\lambda+n)-1][2(\lambda+n)+1]}\,a_{n-1}. \]
So from the two $\lambda$ values we get
\[ \text{for }\lambda_1=\tfrac14:\quad a_n=-\frac{1}{2n(4n+3)}a_{n-1},\qquad \text{for }\lambda_2=-\tfrac12:\quad a_n=-\frac{1}{2n(4n-3)}a_{n-1}. \]
Consequently we get two linearly independent solutions
\[ y_1(x)=c_1x^{1/4}\left(1-\frac{1}{14}x+\frac{1}{616}x^2-\dots\right),\qquad y_2(x)=c_2x^{-1/2}\left(1-\frac12x+\frac1{40}x^2-\dots\right) \]
and the general solution is
\[ y(x)=c_1y_1(x)+c_2y_2(x). \]
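The indicial roots and the factored form of the recurrence bracket can be confirmed symbolically. A minimal sketch, assuming the third-party sympy library:

```python
import sympy as sp

lam, n = sp.symbols('lam n')

# indicial polynomial of 8x^2 y'' + 10x y' + (x - 1) y = 0
indicial = 8*lam*(lam - 1) + 10*lam - 1
assert set(sp.solve(indicial, lam)) == {sp.Rational(1, 4), -sp.Rational(1, 2)}

# the recurrence denominator [4(lam+n)-1][2(lam+n)+1] is just 8m^2 + 2m - 1, m = lam + n
m = lam + n
assert sp.expand((4*m - 1)*(2*m + 1) - (8*m**2 + 2*m - 1)) == 0
```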

5.1.11 In the previous example the indicial equation had two distinct roots which did not differ by an integer. Consider the next examples.

5.1.12 Example. Let us solve the ODE
\[ x^2\frac{d^2y}{dx^2}+2x\frac{dy}{dx}+x^2y=0. \tag{5.9} \]
Assuming, as usual,
\[ y=x^\lambda\sum_{n=0}^\infty a_nx^n \tag{5.10} \]
after some algebra we get
\[ \sum_{n=0}^\infty(n+\lambda)(n+\lambda+1)a_nx^n+\sum_{n=2}^\infty a_{n-2}x^n=0\ \Rightarrow \]
\[ \lambda(\lambda+1)a_0+(\lambda+1)(\lambda+2)a_1x+\sum_{n=2}^\infty\bigl((n+\lambda)(n+\lambda+1)a_n+a_{n-2}\bigr)x^n=0. \]
Then the indicial equation is
\[ \lambda(\lambda+1)=0 \]
with roots $\lambda_1=0$, $\lambda_2=-1$. With $\lambda_1=0$ we get
\[ a_1=0,\qquad a_n=-\frac{a_{n-2}}{n(n+1)} \]
and so
\[ a_1=0,\quad a_2=-\frac1{3!}a_0,\quad a_3=0,\quad a_4=\frac1{5!}a_0,\quad\dots \]
and (taking for simplicity $a_0=1$):
\[ y_1(x)=1-\frac{x^2}{3!}+\frac{x^4}{5!}-\dots=\frac{x-\frac{x^3}{3!}+\frac{x^5}{5!}-\dots}{x}=\frac{\sin x}{x}. \]
Taking now $\lambda_2=-1$ we similarly get
\[ 0\cdot a_1=0,\qquad a_n=-\frac{1}{n(n-1)}a_{n-2}. \]
Since we are looking for some solution, let us take $a_1=0$. Then we get
\[ a_2=-\frac1{2!}a_0,\quad a_4=\frac1{4!}a_0,\quad\dots \]
and, taking for simplicity $a_0=1$,
\[ y_2(x)=x^{-1}\left(1-\frac{x^2}{2!}+\frac{x^4}{4!}-\dots\right)=\frac{\cos x}{x}. \]
So the general solution is
\[ y(x)=c_1\frac{\sin x}{x}+c_2\frac{\cos x}{x}. \]
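Both closed forms obtained from the series can be checked against the ODE directly. A minimal sketch, assuming the third-party sympy library:

```python
import sympy as sp

x = sp.symbols('x')
for y in (sp.sin(x)/x, sp.cos(x)/x):
    # residual of x^2 y'' + 2x y' + x^2 y = 0
    res = x**2 * y.diff(x, 2) + 2*x*y.diff(x) + x**2 * y
    assert sp.simplify(res) == 0
```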
5.1.13 Example. Let us solve the ODE (it is the Bessel equation, which we will study in greater detail in Chapter 17):
\[ x^2\frac{d^2y}{dx^2}+x\frac{dy}{dx}+\left(x^2-n^2\right)y=0 \tag{5.11} \]
assuming $n\in\{1,2,3,\dots\}$. Again we assume
\[ y(x)=x^\lambda\sum_{k=0}^\infty c_kx^k=\sum_{k=0}^\infty c_kx^{k+\lambda}. \]
So
\[ \left(x^2-n^2\right)y=x^\lambda\sum_{k=0}^\infty c_kx^{k+2}-n^2x^\lambda\sum_{k=0}^\infty c_kx^k=x^\lambda\sum_{k=2}^\infty c_{k-2}x^k-n^2x^\lambda\sum_{k=0}^\infty c_kx^k, \]
\[ x\frac{dy}{dx}=x^\lambda\sum_{k=0}^\infty(k+\lambda)c_kx^k,\qquad x^2\frac{d^2y}{dx^2}=x^\lambda\sum_{k=0}^\infty(k+\lambda)(k+\lambda-1)c_kx^k. \]
Adding the above we get
\[ x^\lambda\sum_{k=0}^\infty\bigl[(k+\lambda)(k+\lambda-1)c_k+(k+\lambda)c_k+c_{k-2}-n^2c_k\bigr]x^k=0 \]
and so
\[ (k+\lambda)(k+\lambda-1)c_k+(k+\lambda)c_k+c_{k-2}-n^2c_k=0\ \Rightarrow\ \bigl[(k+\lambda)^2-n^2\bigr]c_k=-c_{k-2}. \]
Letting $k=0$ (and since $c_{-2}=0$ and we do not want $c_0=0$) we get the indicial equation
\[ \lambda^2=n^2. \]
When $\lambda=n$, then
\[ \bigl[(k+n)^2-n^2\bigr]c_k=-c_{k-2}\ \Rightarrow\ \left(k^2+2kn\right)c_k=-c_{k-2}\ \Rightarrow\ c_k=-\frac{1}{k(k+2n)}c_{k-2}. \]
So one solution is
\[ y_1(x)=x^n\left(1-\frac{x^2}{2(2+2n)}+\frac{x^4}{2\cdot4\cdot(2+2n)(4+2n)}-\dots\right). \tag{5.12} \]
When $\lambda=-n$, we get a second solution
\[ y_2(x)=x^{-n}\left(1-\frac{x^2}{2(2-2n)}+\frac{x^4}{2\cdot4\cdot(2-2n)(4-2n)}-\dots\right) \tag{5.13} \]
but, since we assumed $n\in\{1,2,3,\dots\}$, the second series does not exist: the recursion for $\lambda=-n$ divides by $k(k-2n)$, which vanishes at $k=2n$. Still (for $n\in\{1,2,3,\dots\}$) we have found one solution of (5.11), namely $y_1(x)$. This is as stated by Frobenius' Theorem.
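The truncated Frobenius series (5.12) can be substituted back into the Bessel equation: with the recursion satisfied, the residual should start only at the first omitted order $x^{\,n+6}$. A minimal sketch for the fixed value $n=2$, assuming the third-party sympy library:

```python
import sympy as sp

x = sp.symbols('x')
n = 2  # a fixed positive integer, chosen for illustration

# first three terms of (5.12)
y1 = x**n * (1 - x**2/(2*(2 + 2*n)) + x**4/(2*4*(2 + 2*n)*(4 + 2*n)))

res = x**2*y1.diff(x, 2) + x*y1.diff(x) + (x**2 - n**2)*y1

# only the x^{n+6} term survives (it comes from x^2 times the last kept term)
assert sp.expand(res) == x**2 * x**4/(2*4*(2 + 2*n)*(4 + 2*n)) * x**n
```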
5.1.14 Example. Finally, let us consider a case in which the indicial equation has a double root. We solve
\[ x^2\frac{d^2y}{dx^2}-2x^2\frac{dy}{dx}+\left(x^2+\frac14\right)y=0. \]
By the usual methods we get the indicial equation
\[ \lambda^2-\lambda+\frac14=0 \]
with roots $\lambda_1=\lambda_2=\frac12$. Assuming
\[ y(x)=x^\lambda\sum_{k=0}^\infty a_kx^k=\sum_{k=0}^\infty a_kx^{k+\lambda} \]
we get, after the usual manipulations,
\[ 0=(2\lambda-1)^2a_0+\bigl[(2\lambda+1)^2a_1-8\lambda a_0\bigr]x+\sum_{n=2}^\infty\bigl[(2n+2\lambda-1)^2a_n-8(n+\lambda-1)a_{n-1}+4a_{n-2}\bigr]x^n. \]
Setting $\lambda=\frac12$ we get
\[ a_1=a_0,\quad a_2=\frac1{2!}a_0,\quad a_3=\frac1{3!}a_0,\quad\dots \]
Hence we get a solution
\[ y_1=x^{1/2}\left(1+x+\frac{x^2}{2!}+\dots\right)=x^{1/2}e^x. \]
The Frobenius method cannot get another solution! (We can get another linearly independent solution by other methods. It turns out to be $y_2(x)=x^{1/2}e^x\ln x$.)
eh
5.2 Solved Problems

5.3 Unsolved Problems

1. Find as a power series around $x_0=0$ the general solution of $\left(1+x^2\right)\frac{d^2y}{dx^2}+6x\frac{dy}{dx}+6y=0$.
   Ans. $y(0)+xy'(0)-3x^2y(0)-2x^3y'(0)+5x^4y(0)+O(x^5)$.
2. Find as a power series around $x_0=0$ the general solution of $\left(1+x^2\right)\frac{d^2y}{dx^2}+5x\frac{dy}{dx}+3y=0$.
   Ans. $y(0)+xy'(0)-\frac32x^2y(0)-\frac43x^3y'(0)+\frac{15}{8}x^4y(0)+O(x^5)$.
3. Find as a power series around $x_0=0$ the general solution of $\left(1+3x^2\right)\frac{d^2y}{dx^2}-6x\frac{dy}{dx}+10y=0$.
   Ans. $y(0)+xy'(0)-5x^2y(0)-\frac23x^3y'(0)+\frac53x^4y(0)+O(x^5)$.
4. Find as a power series around $x_0=0$ the general solution of $\left(1-x^2\right)\frac{d^2y}{dx^2}+6x\frac{dy}{dx}+6y=0$.
   Ans. $y(0)+xy'(0)-3x^2y(0)-2x^3y'(0)+4x^4y(0)+O(x^5)$.
5. Find as a power series around $x_0=2$ the general solution of $\left(1-x^2\right)\frac{d^2y}{dx^2}-5x\frac{dy}{dx}+6y=0$.
   Ans.
6. Find as a power series around $x_0=0$ the general solution of $\frac{d^2y}{dx^2}-xy=0$.
   Ans. $y(0)+xy'(0)+\frac16x^3y(0)+\frac1{12}x^4y'(0)+O(x^5)$.
7. Find as a power series around $x_0=0$ the general solution of $\frac{d^2y}{dx^2}+xy=0$.
   Ans. $y(0)+xy'(0)-\frac16x^3y(0)-\frac1{12}x^4y'(0)+O(x^5)$.
8. Find as a power series around $x_0=0$ the general solution of $\left(1+x^3\right)\frac{d^2y}{dx^2}+6x^2\frac{dy}{dx}+6xy=0$.
   Ans. $y(0)+xy'(0)-x^3y(0)-x^4y'(0)+O(x^5)$.
9. Find as a power series around $x_0=0$ the general solution of $\left(1-x^3\right)\frac{d^2y}{dx^2}+6x\frac{dy}{dx}+6y=0$.
   Ans. $y(0)+xy'(0)-3x^2y(0)-2x^3y'(0)+\frac92x^4y(0)+O(x^5)$.
10. Find as a power series around $x_0=0$ the general solution of $\left(1+x^2\right)\frac{d^2y}{dx^2}+6x\frac{dy}{dx}+6y=0$.
    Ans.
11. Find as a power series around $x_0=0$ the general solution of $\left(1-x^3\right)\frac{d^2y}{dx^2}+x^3\frac{dy}{dx}+x^2y=0$.
    Ans. $y(0)+xy'(0)-\frac1{12}x^4y(0)+O(x^5)$.
12. Find as a power series around $x_0=0$ the solution of $\left(1-x^2\right)\frac{d^2y}{dx^2}+x\frac{dy}{dx}+y=0$, $y(0)=1$, $y'(0)=1$.
    Ans. $1+x-\frac12x^2-\frac13x^3+\frac1{24}x^4+O(x^5)$.
13. Find as a power series around $x_0=0$ the solution of $\left(1-x^2\right)\frac{d^2y}{dx^2}+x\frac{dy}{dx}+y=0$, $y(0)=0$, $y'(0)=2$.
    Ans. $2x-\frac23x^3+O(x^5)$.
14. Find as a power series around $x_0=0$ the solution of $\left(1+x^2\right)\frac{d^2y}{dx^2}+x\frac{dy}{dx}+y=0$, $y(0)=1$, $y'(0)=1$.
    Ans. $1+x-\frac12x^2-\frac13x^3+\frac5{24}x^4+O(x^5)$.
15. Find as a power series around $x_0=0$ the solution of $(1-x)^2\frac{d^2y}{dx^2}+(1-x)^2\frac{dy}{dx}+y=0$, $y(0)=1$, $y'(0)=1$.
    Ans. $1+x-x^2+\frac16x^3-\frac7{24}x^4+O(x^5)$.
16. Find as a power series around $x_0=0$ the solution of $\left(1-x^4\right)\frac{d^2y}{dx^2}+2x\frac{dy}{dx}+x^2y=0$, $y(0)=1$, $y'(0)=0$.
    Ans. $1-\frac1{12}x^4+O(x^5)$.
17. Find power series solutions (around $x_0=0$) of $2x^2\frac{d^2y}{dx^2}+x(3+2x)\frac{dy}{dx}-(1-x)y=0$.
    Ans. $\frac1x-1+\frac12x-\frac16x^2+\frac1{24}x^3+O(x^4)$, $x^{1/2}-\frac25x^{3/2}+\frac4{35}x^{5/2}-\frac8{315}x^{7/2}+\frac{16}{3465}x^{9/2}+O(x^{11/2})$.
18. Find power series solutions (around $x_0=0$) of $2x^2\frac{d^2y}{dx^2}+x(5+x)\frac{dy}{dx}-(2-3x)y=0$.
    Ans. $\frac1{x^2}+\frac1{3x}+\frac13-\frac13x+\frac19x^2+O(x^3)$, $x^{1/2}-\frac12x^{3/2}+\frac18x^{5/2}-\frac1{48}x^{7/2}+\frac1{384}x^{9/2}+O(x^{11/2})$.
19. Find power series solutions (around $x_0=0$) of $3x^2\frac{d^2y}{dx^2}+x(1+x)\frac{dy}{dx}-y=0$.
    Ans. $x^{-1/3}-\frac13x^{2/3}+\frac1{18}x^{5/3}-\frac1{162}x^{8/3}+\frac1{1944}x^{11/3}+O(x^{14/3})$, $x-\frac17x^2+\frac1{70}x^3-\frac1{910}x^4+\frac1{14560}x^5+O(x^6)$.
20. Find power series solutions (around $x_0=0$) of $x^2(8+x)\frac{d^2y}{dx^2}+x(2+3x)\frac{dy}{dx}+(1+x)y=0$.
    Ans. $x^{1/2}-\frac9{40}x^{3/2}+\frac5{128}x^{5/2}-\frac{245}{39936}x^{7/2}+\frac{6615}{7241728}x^{9/2}+O(x^{11/2})$, $x^{1/4}-\frac{25}{96}x^{5/4}+\frac{675}{14336}x^{9/4}-\frac{38025}{5046272}x^{13/4}+\frac{732615}{645922816}x^{17/4}+O(x^{21/4})$.
21. Find power series solutions (around $x_0=0$) of $8x^2\frac{d^2y}{dx^2}+x\left(2+x^2\right)\frac{dy}{dx}+y=0$.
    Ans. $x^{1/4}-\frac1{112}x^{9/4}+\frac3{17920}x^{17/4}+O(x^{21/4})$, $x^{1/2}-\frac1{72}x^{5/2}+\frac5{19584}x^{9/2}+O(x^{11/2})$.
22. Find power series solutions (around $x_0=0$) of $x(1+x)\frac{d^2y}{dx^2}+(1-x)\frac{dy}{dx}+y=0$.
    Ans. $\ln x-x(\ln x-4)+O(x^5)$, $1-x+O(x^5)$.
23. Find power series solutions (around $x_0=0$) of $x^2\frac{d^2y}{dx^2}-x(1-x)\frac{dy}{dx}+\left(1-x^2\right)y=0$.
    Ans. $x\ln x-x^2(\ln x-1)+x^3\left(\frac34\ln x-1\right)-\dots$, $x-x^2+\frac34x^3-\frac{13}{36}x^4+\frac{79}{576}x^5+O(x^6)$.
24. Find power series solutions (around $x_0=0$) of $4x^2\frac{d^2y}{dx^2}+(1+4x)y=0$.
    Ans. $\sqrt{x}\ln x-x^{3/2}(\ln x-2)+x^{5/2}\left(\frac14\ln x-\frac34\right)-x^{7/2}\left(\frac1{36}\ln x-\frac{11}{108}\right)+\dots$, $x^{1/2}-x^{3/2}+\frac14x^{5/2}-\frac1{36}x^{7/2}+\frac1{576}x^{9/2}+O(x^{11/2})$.
25. Find power series solutions (around $x_0=0$) of $x\frac{d^2y}{dx^2}-5\frac{dy}{dx}+xy=0$.
    Ans. $-86400-10800x^2-1350x^4+O(x^5)$, $x^6-\frac1{16}x^8+\frac1{640}x^{10}+O(x^{11})$.
26. Find power series solutions (around $x_0=0$) of $x(1+x)\frac{d^2y}{dx^2}-4\frac{dy}{dx}-2y=0$.
    Ans. $x^5-3x^6+6x^7-10x^8+15x^9+O(x^{10})$, $2880-1440x+480x^2+O(x^5)$.
27. Find power series solutions (around $x_0=0$) of $x^2\frac{d^2y}{dx^2}-3x\frac{dy}{dx}+(3+4x)y=0$.
    Ans. $x^3-\frac43x^4+\frac23x^5-\frac8{45}x^6+\frac4{135}x^7+O(x^8)$, $-2x-8x^2+16x^3\ln x-x^4\left(\frac{64}{3}\ln x-\frac{256}{9}\right)+x^5\left(\frac{32}{3}\ln x-\frac{200}{9}\right)+O(x^6)$.
28. Find power series solutions (around $x_0=0$) of $x\frac{d^2y}{dx^2}+y=0$.
    Ans. $x-\frac12x^2+\frac1{12}x^3-\frac1{144}x^4+\dots$, $1-x\ln x+x^2\left(\frac12\ln x-\frac34\right)-x^3\left(\frac1{12}\ln x-\frac7{36}\right)+\dots$
29. Find power series solutions (around $x_0=0$) of $x\frac{d^2y}{dx^2}+x\frac{dy}{dx}+y=0$.
    Ans. $1-x(\ln x+1)+x^2\ln x-x^3\left(\frac12\ln x-\frac14\right)+\dots$, $x-x^2+\frac12x^3-\frac16x^4+\frac1{24}x^5+O(x^6)$.
30. Find power series solutions (around $x_0=0$) of $6x^2\left(1+2x^2\right)\frac{d^2y}{dx^2}+x\left(1+50x^2\right)\frac{dy}{dx}+\left(1+30x^2\right)y=0$.
    Ans. $x^{1/2}-2x^{5/2}+4x^{9/2}+O(x^{11/2})$, $x^{1/3}-2x^{7/3}+4x^{13/3}+O(x^{16/3})$.
31. Find power series solutions (around $x_0=0$) of $2x^2(1+x)\frac{d^2y}{dx^2}-x(1-3x)\frac{dy}{dx}+y=0$.
    Ans. $x-x^2+x^3-x^4+x^5+O(x^6)$, $x^{1/2}-x^{3/2}+x^{5/2}-x^{7/2}+x^{9/2}+O(x^{11/2})$.
32. Find power series solutions (around $x_0=0$) of $x^2(8+x)\frac{d^2y}{dx^2}+x(2+3x)\frac{dy}{dx}+(1+x)y=0$.
    Ans. $x^{1/2}-\frac9{40}x^{3/2}+\frac5{128}x^{5/2}-\frac{245}{39936}x^{7/2}+\dots$, $x^{1/4}-\frac{25}{96}x^{5/4}+\frac{675}{14336}x^{9/4}-\frac{38025}{5046272}x^{13/4}+\dots$

5.4 Advanced Problems

II Laplace

6 Laplace Transform
7 Differential Equations
8 Convolution and Integral Equations
9 Dirac Delta and Generalized Functions
10 Difference Equations
6. Laplace Transform

The Laplace transform can be seen as a generalization of the Taylor series expansion to continuous powers $s\in\mathbb{R}_0^+$ rather than discrete powers $n\in\mathbb{N}_0$.

6.1 Theory and Examples

6.1.1 Definition. Let $f(t)$ be defined on $[0,\infty)$. The Laplace transform of $f(t)$ is defined to be
\[ F(s)=\mathcal{L}(f(t)):=\int_{0^-}^\infty e^{-st}f(t)\,dt \tag{6.1} \]
provided the integral is well defined.

6.1.2 For the time being we assume $s\in\mathbb{R}$. Furthermore, for the time being we can assume the lower limit of integration is $0$, i.e., we can use the definition
\[ F(s)=\mathcal{L}(f(t)):=\int_0^\infty e^{-st}f(t)\,dt. \tag{6.2} \]
This will change in Chapter 9.

6.1.3 Example. To find the Laplace transform of $f(t)=1$, we have
\[ \mathcal{L}(f(t))=\int_0^\infty e^{-st}f(t)\,dt=\left[-\frac1s e^{-st}\right]_{t=0}^\infty=-\frac1s e^{-\infty}+\frac1s e^{-0}=\frac1s. \]

6.1.4 Example. To find the Laplace transform of $f(t)=t$, we have
\[ \mathcal{L}(f(t))=\int_0^\infty e^{-st}f(t)\,dt=\int_0^\infty te^{-st}\,dt=\left[-\frac1{s^2}e^{-st}(st+1)\right]_{t=0}^\infty \]
\[ =-\frac1{s^2}e^{-s\infty}(s\infty+1)+\frac1{s^2}e^{-s0}(s0+1)=\frac1{s^2}. \]

6.1.5 Example. Similarly we find
\[ \mathcal{L}(t^n)=\int_0^\infty e^{-st}t^n\,dt=\frac{n!}{s^{n+1}}. \]
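The transforms computed so far can be confirmed with a computer algebra system. A minimal sketch, assuming the third-party sympy library:

```python
import sympy as sp

t, s = sp.symbols('t s', positive=True)

# L(1) = 1/s, L(t) = 1/s^2, L(t^n) = n!/s^{n+1} (checked here for n = 4)
assert sp.simplify(sp.laplace_transform(sp.S(1), t, s, noconds=True) - 1/s) == 0
assert sp.simplify(sp.laplace_transform(t, t, s, noconds=True) - 1/s**2) == 0
assert sp.simplify(sp.laplace_transform(t**4, t, s, noconds=True) - sp.factorial(4)/s**5) == 0
```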

6.1.6 Definition. We say that $f(t)$ is piecewise continuous in $[a,b]$ iff $[a,b]$ can be partitioned as
\[ [a,b]=[a_0,a_1]\cup[a_1,a_2]\cup\dots\cup[a_{K-1},a_K] \]
so that
1. for each $k\in\{0,1,\dots,K-1\}$: $f(t)$ is continuous in $(a_k,a_{k+1})$,
2. $\lim_{t\to a_0^+}f(t)$ and $\lim_{t\to a_K^-}f(t)$ exist,
3. for each $k\in\{1,\dots,K-1\}$: $\lim_{t\to a_k^-}f(t)$ and $\lim_{t\to a_k^+}f(t)$ exist.

6.1.7 Definition. We say that $f(t)$ is of exponential order $\gamma$ after $N$ iff there exists $M>0$ such that
\[ \forall t>N:\ |f(t)|<Me^{\gamma t}. \]
Often we use the simpler: "$f(t)$ is of exponential order $\gamma$", or "$f(t)$ is of exponential order".

6.1.8 Theorem. If for every $N$: $f(t)$ is of exponential order $\gamma$ after $N$ and piecewise continuous in $[0,N]$, then $\mathcal{L}(f(t))$ exists for all $s>\gamma$.
6.1.9 Example. To find the Laplace transform of $f(t) = e^{at}$ we have
$$\mathcal{L}(f(t)) = \int_0^\infty e^{-st}e^{at}\,dt = \left[\frac{e^{(a-s)t}}{a-s}\right]_{t=0}^{\infty} = 0 - \frac{1}{a-s} = \frac{1}{s-a},\quad\text{defined for all } s > a.$$
6.1.10 Example. To find the Laplace transforms of $f(t) = \cos t$ and $g(t) = \sin t$ we use
$$\mathcal{L}(\cos t) + i\mathcal{L}(\sin t) = \mathcal{L}(\cos t + i\sin t) = \int_0^\infty e^{-st}(\cos t + i\sin t)\,dt = \int_0^\infty e^{-st}e^{it}\,dt = \frac{1}{s-i} = \frac{s+i}{s^2+1} = \frac{s}{s^2+1} + i\frac{1}{s^2+1}.$$
Hence
$$\mathcal{L}(\cos t) = \frac{s}{s^2+1},\qquad \mathcal{L}(\sin t) = \frac{1}{s^2+1}.$$
2
6.1.11 Example. Trying to find the Laplace transform of f (t) = et we note that
for all s ∈ R we have Z ∞
2
L ( f (t)) = e−st et dt = ∞.
0

So the Laplace transform of e t2 does not exist.


6.1.12 Theorem (Linearity). $\mathcal{L}(c_1f_1(t) + c_2f_2(t)) = c_1\mathcal{L}(f_1(t)) + c_2\mathcal{L}(f_2(t))$.
Proof. Obvious, from the linearity of the integral operator.
6.1.13 Example. $\mathcal{L}\left(5t + 3e^{-2t}\right) = \frac{5}{s^2} + \frac{3}{s+2}$.
6.1.14 Theorem (Shift). If $\mathcal{L}(f(t)) = F(s)$, then $\mathcal{L}(e^{at}f(t)) = F(s-a)$.
Proof.
$$\mathcal{L}\left(e^{at}f(t)\right) = \int_0^\infty e^{at}f(t)e^{-st}\,dt = \int_0^\infty f(t)e^{-(s-a)t}\,dt = F(s-a).$$
6.1.15 Example. $\mathcal{L}\left(e^{-2t}\cos t\right) = \frac{s+2}{(s+2)^2+1}$.
6.1.16 Theorem. If $\mathcal{L}(f(t)) = F(s)$ and
$$g(t) = \begin{cases} f(t-t_0) & t > t_0 \\ 0 & t < t_0 \end{cases}$$
then $\mathcal{L}(g(t)) = e^{-t_0s}F(s)$.
Proof.
$$\mathcal{L}(g(t)) = \int_0^\infty g(t)e^{-st}\,dt = \int_{t_0}^\infty f(t-t_0)e^{-st}\,dt = e^{-st_0}\int_{t_0}^\infty f(t-t_0)e^{-s(t-t_0)}\,dt$$
and, substituting $u = t - t_0$,
$$= e^{-st_0}\int_0^\infty f(u)e^{-su}\,du = e^{-st_0}F(s).$$
6.1.17 Theorem (Scaling). If $a > 0$ and $\mathcal{L}(f(t)) = F(s)$, then $\mathcal{L}(f(at)) = \frac{1}{a}F\left(\frac{s}{a}\right)$.
Proof. Obvious.
6.1.18 Example. $\mathcal{L}(\cos 3t) = \frac{1}{3}\cdot\frac{s/3}{(s/3)^2+1} = \frac{s}{s^2+9}$.
6.1.19 Theorem (Transform of Derivative). If for every $N$: (i) $f(t)$ is continuous and of exponential order $\gamma$ after $N$, (ii) $f'(t)$ is piecewise continuous in $[0,N]$ and (iii) $\mathcal{L}(f(t)) = F(s)$, then
$$\mathcal{L}\left(f'(t)\right) = sF(s) - f(0).$$
Proof. Integrating by parts, for $s > \gamma$:
$$\mathcal{L}\left(f'(t)\right) = \int_0^\infty e^{-st}\frac{df}{dt}\,dt = \int_0^\infty e^{-st}\,df = \left[e^{-st}f(t)\right]_{t=0}^{\infty} + s\int_0^\infty f(t)e^{-st}\,dt = 0 - f(0) + sF(s) = sF(s) - f(0).$$
6.1.20 Theorem. If $\mathcal{L}(f(t)) = F(s)$ then, under conditions similar to those of the previous theorem, we have
$$\mathcal{L}\left(f''(t)\right) = s^2F(s) - sf(0) - f'(0),$$
$$\mathcal{L}\left(f^{(n)}(t)\right) = s^nF(s) - s^{n-1}f(0) - s^{n-2}f'(0) - \dots - f^{(n-1)}(0).$$
Proof. Similar to the previous one.
6.1.21 Example. Since $\mathcal{L}(t) = \frac{1}{s^2}$, we have
$$\mathcal{L}(1) = \mathcal{L}\left((t)'\right) = s\mathcal{L}(t) - (t)_{t=0} = s\frac{1}{s^2} - 0 = \frac{1}{s}.$$
6.1.22 Theorem (Transform of Integral). If $\mathcal{L}(f(t)) = F(s)$, then
$$\mathcal{L}\left(\int_0^t f(u)\,du\right) = \frac{1}{s}F(s).$$
Proof. Easy.
6.1.23 Example. Since $\mathcal{L}(t) = \frac{1}{s^2}$, we have
$$\mathcal{L}\left(t^2\right) = 2\,\mathcal{L}\left(\int_0^t u\,du\right) = 2\cdot\frac{1}{s}\cdot\frac{1}{s^2} = \frac{2}{s^3}.$$
6.1.24 Example. Using mathematical induction: $\mathcal{L}\left(t^0\right) = \mathcal{L}(1) = \frac{1}{s}$; assume $\mathcal{L}(t^n) = \frac{n!}{s^{n+1}}$; then
$$s\mathcal{L}\left(t^{n+1}\right) - \left(t^{n+1}\right)_{t=0} = \mathcal{L}\left((n+1)t^n\right) \Rightarrow s\mathcal{L}\left(t^{n+1}\right) = (n+1)\frac{n!}{s^{n+1}} \Rightarrow \mathcal{L}\left(t^{n+1}\right) = \frac{(n+1)!}{s^{n+2}}.$$
6.1.25 Apparently, multiplying the transform by $s$ corresponds to differentiation in the time domain, and multiplying by $\frac{1}{s}$ corresponds to integration in the time domain.
6.1.26 Example. Suppose we want to solve
$$\frac{df}{dt} + f = 0,\qquad f(0) = 3.$$
Then we have
$$sF(s) - f(0) + F(s) = 0 \Rightarrow (s+1)F(s) = f(0) \Rightarrow F(s) = \frac{3}{s+1} \Rightarrow f(t) = 3e^{-t}.$$
Note we have converted the DE to an algebraic equation, solved that, and found $f(t)$ by inverting $F(s)$.
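The convert-solve-invert procedure of this example can be mirrored step by step in sympy. The sketch below (an illustration, not part of the text) solves the same problem; the symbol `F` stands for the unknown transform $F(s)$:

```python
import sympy as sp

t, s = sp.symbols("t s", positive=True)
F = sp.Symbol("F")  # stands for the unknown transform F(s)

# transformed equation: s*F - f(0) + F = 0, with f(0) = 3
Fsol = sp.solve(sp.Eq(s*F - 3 + F, 0), F)[0]
print(Fsol)  # 3/(s + 1)

# invert to recover f(t)
f = sp.inverse_laplace_transform(Fsol, s, t)
print(f)
```

Declaring $t$ positive lets sympy drop the `Heaviside(t)` factor it would otherwise attach to the inverse transform.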
6.1.27 Example. Suppose we want to solve
$$\frac{d^2f}{dt^2} + 3\frac{df}{dt} + 2f = 1,\qquad f(0) = 0,\ f'(0) = 0.$$
Then we have
$$s^2F - sf(0) - f'(0) + 3\left(sF - f(0)\right) + 2F = \frac{1}{s} \Rightarrow \left(s^2+3s+2\right)F = \frac{1}{s} \Rightarrow F = \frac{1}{s\left(s^2+3s+2\right)}.$$
To continue the solution we must decompose $\frac{1}{s(s^2+3s+2)}$ into partial fractions; examples of this technique appear later in this chapter.
6.1.28 Theorem (Derivative of Transform). If $\mathcal{L}(f(t)) = F(s)$, then $\mathcal{L}(t^nf(t)) = (-1)^n\frac{d^nF}{ds^n}$.
Proof. Easy by mathematical induction.
6.1.29 Example. $\mathcal{L}\left(te^{2t}\right) = (-1)^1\left(\mathcal{L}\left(e^{2t}\right)\right)' = -\left(\frac{1}{s-2}\right)' = \frac{1}{(s-2)^2}$.
6.1.30 Theorem (Integral of Transform). If $\mathcal{L}(f(t)) = F(s)$, then $\mathcal{L}\left(\frac{f(t)}{t}\right) = \int_s^\infty F(u)\,du$, provided $\lim_{t\to 0}\frac{f(t)}{t}$ exists.
Proof. Easy.
6.1.31 Example. $\mathcal{L}\left(\frac{\sin t}{t}\right) = \int_s^\infty \frac{du}{u^2+1} = \arctan\frac{1}{s}$.
6.1.32 Apparently multiplying (dividing) by t in the time domain yields differentia-
tion (integration) in the Laplace domain.
6.1.33 Theorem. If $f(t)$ is periodic with period $T$ (i.e., $f(t+T) = f(t)$) then
$$\mathcal{L}(f(t)) = \frac{\int_0^T e^{-st}f(t)\,dt}{1 - e^{-sT}}.$$
Proof. Easy.
6.1.34 Theorem (First Limit Theorem). If $\mathcal{L}(f(t)) = F(s)$ then $\lim_{s\to\infty}F(s) = 0$.
Proof. $\lim_{s\to\infty}F(s) = \lim_{s\to\infty}\int_0^\infty e^{-st}f(t)\,dt = \int_0^\infty \lim_{s\to\infty}e^{-st}f(t)\,dt = 0$ (the exchange of limit and integral can be justified when $f(t)$ is of exponential order).
6.1.35 Theorem (Second Limit Theorem). If $\mathcal{L}(f(t)) = F(s)$ and the limits exist, then
$$\lim_{t\to 0} f(t) = \lim_{s\to\infty} sF(s),\qquad \lim_{t\to\infty} f(t) = \lim_{s\to 0} sF(s).$$
Proof. Since
$$sF(s) - f(0) = \mathcal{L}\left(f'(t)\right) = \int_0^\infty e^{-st}f'(t)\,dt,$$
we have
$$\lim_{s\to\infty}\left(sF(s) - f(0)\right) = \lim_{s\to\infty}\int_0^\infty e^{-st}f'(t)\,dt = \int_0^\infty \lim_{s\to\infty}e^{-st}f'(t)\,dt = 0.$$
Hence
$$\lim_{s\to\infty} sF(s) = f(0) = \lim_{t\to 0} f(t).$$
Similarly
$$\lim_{s\to 0}\left(sF(s) - f(0)\right) = \lim_{s\to 0}\int_0^\infty e^{-st}f'(t)\,dt = \int_0^\infty f'(t)\,dt = \lim_{t\to\infty}f(t) - f(0).$$
Hence
$$\lim_{s\to 0} sF(s) - f(0) = \lim_{t\to\infty} f(t) - f(0) \Rightarrow \lim_{s\to 0} sF(s) = \lim_{t\to\infty} f(t).$$
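Both limit theorems are easy to check on a concrete function. The sketch below (an illustration, not part of the text) uses $f(t) = 5 + e^{-2t}$, for which $f(0) = 6$ and $\lim_{t\to\infty}f(t) = 5$:

```python
import sympy as sp

t, s = sp.symbols("t s", positive=True)

f = 5 + sp.exp(-2*t)                             # f(0) = 6, f(t) -> 5 as t -> oo
F = sp.laplace_transform(f, t, s, noconds=True)  # 5/s + 1/(s + 2)

print(sp.limit(s*F, s, sp.oo))  # 6, equals f(0)
print(sp.limit(s*F, s, 0))      # 5, equals lim f(t) as t -> oo
```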
6.1.36 Definition. If $F(s) = \mathcal{L}(f(t))$, then we also write $f(t) = \mathcal{L}^{-1}(F(s))$ and we say that $f(t)$ is the inverse Laplace transform of $F(s)$.
6.1.37 Theorem (Lerch). If, for every N , f (t) is sectionally continuous in [0, N]
and of exponential order γ after N and F (s) = L ( f (t)), then L −1 (F (s)) is unique
and equal to f (t).
6.1.38 Example. $\mathcal{L}^{-1}\left(\frac{1}{s+3}\right) = e^{-3t}$ (since $\mathcal{L}\left(e^{-3t}\right) = \frac{1}{s+3}$).
6.1.39 Example. $\mathcal{L}^{-1}\left(\frac{1}{s^2}\right) = t$.
6.1.40 Theorem (Linearity). $\mathcal{L}^{-1}(c_1F_1 + c_2F_2) = c_1\mathcal{L}^{-1}(F_1) + c_2\mathcal{L}^{-1}(F_2)$.
Proof. Easy.
6.1.41 Example.
$$\mathcal{L}^{-1}\left(\frac{1}{s^2-a^2}\right) = \mathcal{L}^{-1}\left(\frac{1}{2a(s-a)} - \frac{1}{2a(s+a)}\right) = \frac{1}{2a}\left(e^{at} - e^{-at}\right) = \frac{\sinh(at)}{a}.$$
ag
6.1.42 Theorem (Shift). L −1 (F (s − a)) = eat f (t) .
Proof. Easy.
6.1.43 Example.
  !
1 1 1
L −1 = L −1 = e−t sin (2t) .
eh s2 + 2s + 5 (s + 1)2 + 22 2

6.1.44 Theorem (Shift).
$$\mathcal{L}^{-1}\left(e^{-t_0s}F(s)\right) = \begin{cases} f(t-t_0) & \text{when } t \ge t_0 \\ 0 & \text{when } t < t_0 \end{cases}.$$
Proof. Easy.
6.1.45 Theorem (Scaling). $\mathcal{L}^{-1}(F(ks)) = \frac{1}{k}f\left(\frac{t}{k}\right)$ (with $k > 0$).
Proof. Easy.
6.1.46 Example.
$$\mathcal{L}^{-1}\left(\frac{s}{4s^2+64}\right) = \mathcal{L}^{-1}\left(\frac{1}{2}\cdot\frac{2s}{(2s)^2+8^2}\right) = \frac{1}{2}\cdot\frac{1}{2}\cos\left(8\cdot\frac{t}{2}\right) = \frac{1}{4}\cos(4t).$$
6.1.47 Theorem (Inverse Transform of Derivative). $\mathcal{L}^{-1}(F'(s)) = -tf(t)$.
Proof. Easy.
6.1.48 Example. To find $\mathcal{L}^{-1}\left(\frac{s}{(s^2+1)^2}\right)$ we note that $\left(\frac{1}{s^2+1}\right)' = -\frac{2s}{(s^2+1)^2}$. Then
$$\mathcal{L}^{-1}\left(\frac{s}{(s^2+1)^2}\right) = -\frac{1}{2}\mathcal{L}^{-1}\left(\left(\frac{1}{s^2+1}\right)'\right) = -\frac{1}{2}(-1)\,t\,\mathcal{L}^{-1}\left(\frac{1}{s^2+1}\right) = \frac{1}{2}t\sin t.$$
6.1.49 Theorem (Inverse Transform of Integral). $\mathcal{L}^{-1}\left(\int_s^\infty F(u)\,du\right) = \frac{f(t)}{t}$.
Proof. Easy.
6.1.50 Example. To find $\mathcal{L}^{-1}\left(\ln\frac{s+1}{s}\right)$ we note that
$$\int_s^\infty \left(\frac{1}{u} - \frac{1}{u+1}\right)du = \left[\ln\frac{u}{u+1}\right]_{u=s}^{\infty} = \ln\frac{s+1}{s}.$$
Hence, with $f(t) = \mathcal{L}^{-1}\left(\frac{1}{s} - \frac{1}{s+1}\right) = 1 - e^{-t}$,
$$\mathcal{L}^{-1}\left(\ln\frac{s+1}{s}\right) = \frac{1-e^{-t}}{t}.$$
6.1.51 Theorem. If $f(0) = 0$, then $\mathcal{L}^{-1}(sF(s)) = f'(t)$.
Proof. Easy.
6.1.52 Example. $\mathcal{L}(\sin t) = \frac{1}{s^2+1}$ and $\sin 0 = 0$. Then $\mathcal{L}^{-1}\left(\frac{s}{s^2+1}\right) = (\sin t)' = \cos t$.
 2 
6.1.53 Example. How about L (cost) = s
s2 +1
, what is then L −1 s
s2 +1
? It turns
 2

0
out to be L −1 s2s+1 = − sint + δ (t). Now − sint = (cost) , but where did the δ (t)
2
come from? Note that in this case we have s2s+1 , what is different in this fraction
from what we have seen previously?

ag
6.1.54 Question. What would $\mathcal{L}^{-1}(sF(s))$ be if $f(0) \neq 0$? Note that then we would have an abrupt change of $f(t)$ at $t = 0$. Could it be
$$\mathcal{L}^{-1}(sF(s)) = f'(t) + \mathcal{L}^{-1}(f(0))?$$
What would $\mathcal{L}^{-1}(f(0))$ be? $\mathcal{L}^{-1}(1)$?
 
6.1.55 Theorem (Inverse Transform of Integral). $\mathcal{L}^{-1}\left(\frac{F(s)}{s}\right) = \int_0^t f(u)\,du$.
Proof. Easy.
6.1.56 Example. $\mathcal{L}(\cos t) = \frac{s}{s^2+1}$. Then
$$\mathcal{L}^{-1}\left(\frac{1}{s}\cdot\frac{s}{s^2+1}\right) = \mathcal{L}^{-1}\left(\frac{1}{s^2+1}\right) = \int_0^t \cos u\,du = \sin t.$$
 
6.1.57 Example. To find $\mathcal{L}^{-1}\left(\frac{1}{s^2+5s+6}\right)$: since
$$\frac{1}{s^2+5s+6} = \frac{1}{s+2} - \frac{1}{s+3},$$
we have
$$\mathcal{L}^{-1}\left(\frac{1}{s^2+5s+6}\right) = \mathcal{L}^{-1}\left(\frac{1}{s+2} - \frac{1}{s+3}\right) = e^{-2t} - e^{-3t}.$$
 
6.1.58 Example. To find $\mathcal{L}^{-1}\left(\frac{3s+7}{s^2-2s-3}\right)$: since
$$\frac{3s+7}{s^2-2s-3} = \frac{4}{s-3} - \frac{1}{s+1},$$
we have
$$\mathcal{L}^{-1}\left(\frac{3s+7}{s^2-2s-3}\right) = \mathcal{L}^{-1}\left(\frac{4}{s-3} - \frac{1}{s+1}\right) = 4e^{3t} - e^{-t}.$$
6.1.59 Example. To find $\mathcal{L}^{-1}\left(\frac{2s^2-4}{s^3+6s^2+11s+6}\right)$: since
$$\frac{2s^2-4}{s^3+6s^2+11s+6} = \frac{7}{s+3} - \frac{4}{s+2} - \frac{1}{s+1},$$
we have
$$\mathcal{L}^{-1}\left(\frac{2s^2-4}{s^3+6s^2+11s+6}\right) = \mathcal{L}^{-1}\left(\frac{7}{s+3} - \frac{4}{s+2} - \frac{1}{s+1}\right) = 7e^{-3t} - 4e^{-2t} - e^{-t}.$$
6.1.60 Example. To find $\mathcal{L}^{-1}\left(\frac{3s+1}{s^3-s^2+s-1}\right)$: since
$$\frac{3s+1}{s^3-s^2+s-1} = \frac{2}{s-1} - \frac{2s-1}{s^2+1},$$
we have
$$\mathcal{L}^{-1}\left(\frac{3s+1}{s^3-s^2+s-1}\right) = \mathcal{L}^{-1}\left(\frac{2}{s-1} - \frac{2s-1}{s^2+1}\right) = 2e^t - 2\cos t + \sin t.$$
6.1.61 We summarize basic pairs in the following table.
$$\begin{array}{ll}
f(t) & F(s)\\
1 & \frac{1}{s}\\
t & \frac{1}{s^2}\\
t^n & \frac{n!}{s^{n+1}}\\
e^{at} & \frac{1}{s-a}\\
\cos at & \frac{s}{s^2+a^2}\\
\sin at & \frac{a}{s^2+a^2}
\end{array}$$
6.1.62 We summarize basic properties in the following table.
$$\begin{array}{ll}
f(t) & F(s)\\
c_1f_1 + c_2f_2 & c_1F_1 + c_2F_2\\
f(at)\ (a>0) & \frac{1}{a}F\left(\frac{s}{a}\right)\\
e^{at}f(t) & F(s-a)\\
\begin{cases} f(t-t_1) & t > t_1\\ 0 & t < t_1\end{cases} & e^{-st_1}F(s)\\
f'(t) & sF(s) - f(0)\\
f''(t) & s^2F(s) - sf(0) - f'(0)\\
-tf(t) & F'(s)\\
t^2f(t) & F''(s)\\
\int_0^t f(u)\,du & \frac{1}{s}F(s)\\
\frac{f(t)}{t} & \int_s^\infty F(u)\,du
\end{array}$$
6.2 Solved Problems


6.3 Unsolved Problems
1. Find the Laplace transform of $f(t) = \sinh t$.
Ans. $\frac{1}{s^2-1}$.
2. Find the Laplace transform of $f(t) = \begin{cases} 2 & \text{when } t < 5\\ 0 & \text{when } t > 5\end{cases}$.
Ans. $\frac{2\left(1-e^{-5s}\right)}{s}$.
3. Find the Laplace transform of $f(t) = t^2e^{4t}$.
Ans. $\frac{2}{(s-4)^3}$.
4. Find the Laplace transform of $f(t) = e^{-4t}\sin 2t$.
Ans. $\frac{2}{(s+4)^2+4}$.
5. Find the Laplace transform of $f(t) = \cosh 5t$.
Ans. $\frac{s}{s^2-25}$.
6. Find the Laplace transform of $f(t) = 2\cos 5t - 3\sin 5t$.
Ans. $\frac{2s}{s^2+25} - \frac{15}{s^2+25}$.
7. Find the Laplace transform of $f(t) = t\sin at$.
Ans. $\frac{2as}{(a^2+s^2)^2}$.
8. Find the Laplace transform of $f(t) = t^2\cos t$.
Ans. $\frac{8s^3}{(s^2+1)^3} - \frac{6s}{(s^2+1)^2}$.
9. Evaluate $\int_0^\infty e^{-2t}\cos t\,dt$.
Ans. $\frac{2}{5}$.
10. Evaluate $\int_0^\infty te^{-2t}\cos t\,dt$.
Ans. $\frac{3}{25}$.
11. Find the inverse Laplace transform of $\frac{3}{4s^2+16}$.
Ans. $\frac{3}{8}\sin 2t$.
12. Find the inverse Laplace transform of $\frac{4s}{s^2-25}$.
Ans. $4\cosh 5t$.
13. Find the inverse Laplace transform of $\frac{3s-5}{s^2+5s+6}$.
Ans. $3e^{-\frac{5}{2}t}\left(\cosh\frac{1}{2}t - \frac{25}{3}\sinh\frac{1}{2}t\right)$.
14. Find the inverse Laplace transform of $\frac{3s-5}{s^2-5s+6}$.
Ans. $3e^{\frac{5}{2}t}\left(\cosh\frac{1}{2}t + \frac{5}{3}\sinh\frac{1}{2}t\right)$.
15. Find the inverse Laplace transform of $\frac{4s+12}{s^2+8s+12}$.
Ans. $4e^{-4t}\left(\cosh 2t - \frac{1}{2}\sinh 2t\right)$.
16. Find the inverse Laplace transform of $\frac{3s+7}{s^2+2s+13}$.
Ans. $3e^{-t}\left(\cos 2\sqrt{3}t + \frac{2\sqrt{3}}{9}\sin 2\sqrt{3}t\right)$.
17. Find the inverse Laplace transform of $\frac{e^{-\pi s}(s+1)}{s^2+s+\frac{1}{4}}$.
Ans. $e^{\frac{1}{2}\pi - \frac{1}{2}t}\,\mathrm{Heaviside}(t-\pi)\left(\frac{1}{2}t - \frac{1}{2}\pi + 1\right)$.


18. Find the inverse Laplace transform of $\frac{s}{(s^2+4)^2}$.
Ans. $\frac{1}{4}t\sin 2t$.
19. Find the inverse Laplace transform of $\frac{s^2}{(s^2+4)^2}$.
Ans. $\frac{1}{4}\sin 2t + \frac{1}{2}t\cos 2t$.
20. Find the inverse Laplace transform of $\frac{1}{s^2(s^2+1)}$.
Ans. $t - \sin t$.
21. Find the inverse Laplace transform of $\frac{3s+7}{s^2-2s-3}$.
Ans. $3e^t\left(\cosh 2t + \frac{5}{3}\sinh 2t\right)$.
22. Find the inverse Laplace transform of $\frac{2s^2-1}{(s+1)(s+2)(s+3)}$.
Ans. $\frac{1}{2}e^{-t} - 7e^{-2t} + \frac{17}{2}e^{-3t}$.
23. Find the inverse Laplace transform of $\frac{2s^2-1}{s^3+6s^2+5s-12}$.
Ans. $\frac{1}{20}e^t - \frac{17}{4}e^{-3t} + \frac{31}{5}e^{-4t}$.
24. Find the inverse Laplace transform of $\frac{3s+1}{s^3-s^2+s-1}$.
Ans. $2e^t - 2\cos t + \sin t$.
25. Find the inverse Laplace transform of $\frac{5-2s}{6s^2+7s+2}$.
Ans. $-\frac{1}{3}e^{-\frac{7}{12}t}\left(\cosh\frac{1}{12}t - 37\sinh\frac{1}{12}t\right)$.
6.4 Advanced Problems


1. Prove that $\mathcal{L}\left(h(t) + h(t-T) + h(t-2T) + \dots\right) = \frac{1}{s\left(1-e^{-Ts}\right)}$.
2. Suppose $f(t)$ has period $T$. Prove that
$$\mathcal{L}(f(t)) = \frac{1}{1-e^{-sT}}\int_0^T e^{-st}f(t)\,dt.$$
3. Find a function which is not of exponential order.
7. Differential Equations

We present the application of the Laplace transform to the solution of ODEs. The
eh
basic idea is that the Laplace transform can convert an ODE to an algebraic
equation, which is easier to solve.

7.1 Theory and Examples


7.1.1 To solve a differential equation with unknown $y(t)$ using the Laplace transform, we first convert to an algebraic equation with unknown $Y(s)$, then we solve the algebraic equation to obtain $Y(s)$ and finally we invert to obtain the desired $y(t)$.
7.1.2 Example. To solve
$$\frac{d^2y}{dt^2} + y = t,\qquad y(0) = 1,\ y'(0) = -2,$$
we take the Laplace transform and we have
$$s^2Y - sy(0) - y'(0) + Y = \frac{1}{s^2} \Rightarrow s^2Y - s + 2 + Y = \frac{1}{s^2} \Rightarrow$$
$$\left(s^2+1\right)Y = s - 2 + \frac{1}{s^2} \Rightarrow Y = \frac{s-2}{s^2+1} + \frac{1}{s^2\left(s^2+1\right)} \Rightarrow$$
$$Y(s) = \frac{s}{s^2+1} - \frac{2}{s^2+1} + \frac{1}{s^2} - \frac{1}{s^2+1} \Rightarrow y(t) = \cos t - 3\sin t + t.$$
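The whole pipeline of this example (transform, solve algebraically, decompose, invert) can be reproduced in a few lines of sympy; the sketch below is an illustration, not part of the text, and `Y` stands for the unknown transform $Y(s)$:

```python
import sympy as sp

t = sp.Symbol("t", positive=True)
s, Y = sp.symbols("s Y")

# transform of y'' + y = t with y(0) = 1, y'(0) = -2:
#   s^2 Y - s*y(0) - y'(0) + Y = 1/s^2
Ys = sp.solve(sp.Eq(s**2*Y - s + 2 + Y, 1/s**2), Y)[0]

# decompose into partial fractions and invert term by term
y = sp.inverse_laplace_transform(sp.apart(Ys), s, t)
print(sp.simplify(y))
```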
7.1.3 Example. To solve
$$\frac{d^2y}{dt^2} + 2\frac{dy}{dt} + 5y = e^{-t}\sin t,\qquad y(0) = 0,\ y'(0) = 1,$$
we take the Laplace transform and we have
$$s^2Y - sy(0) - y'(0) + 2\left(sY - y(0)\right) + 5Y = \frac{1}{(s+1)^2+1} \Rightarrow \left(s^2+2s+5\right)Y - 1 = \frac{1}{(s+1)^2+1} \Rightarrow$$
$$Y(s) = \frac{1}{s^2+2s+5} + \frac{1}{\left(s^2+2s+2\right)\left(s^2+2s+5\right)} = \frac{1}{s^2+2s+5} + \frac{1}{3\left(s^2+2s+2\right)} - \frac{1}{3\left(s^2+2s+5\right)} \Rightarrow$$
$$y(t) = \frac{1}{3}e^{-t}\sin t + \frac{1}{3}e^{-t}\sin 2t.$$
7.1.4 Example. To solve
$$\frac{d^2y}{dt^2} + 9y = \cos 2t,\qquad y(0) = 1,\ y\left(\frac{\pi}{2}\right) = -1,$$
we let $y'(0) = a$ and, taking the Laplace transform, we have
$$s^2Y - sy(0) - y'(0) + 9Y = \frac{s}{s^2+4} \Rightarrow \left(s^2+9\right)Y - s - a = \frac{s}{s^2+4} \Rightarrow$$
$$Y(s) = \frac{s+a}{s^2+9} + \frac{s}{\left(s^2+4\right)\left(s^2+9\right)} = \frac{s}{s^2+9} + \frac{a}{s^2+9} + \frac{s}{5\left(s^2+4\right)} - \frac{s}{5\left(s^2+9\right)} \Rightarrow$$
$$y(t) = \frac{4}{5}\cos 3t + \frac{a}{3}\sin 3t + \frac{1}{5}\cos 2t.$$
Now
$$-1 = y\left(\frac{\pi}{2}\right) = \frac{4}{5}\cos\frac{3\pi}{2} + \frac{a}{3}\sin\frac{3\pi}{2} + \frac{1}{5}\cos\pi \Rightarrow -1 = -\frac{a}{3} - \frac{1}{5} \Rightarrow a = \frac{12}{5}.$$
And so
$$y(t) = \frac{4}{5}\cos 3t + \frac{4}{5}\sin 3t + \frac{1}{5}\cos 2t.$$
7.1.5 Example. We define
$$h(t) = \begin{cases} 0 & \text{when } t < 0\\ 1 & \text{when } t > 0\end{cases}.$$
Clearly $\mathcal{L}(h(t)) = \frac{1}{s}$.
Now, to solve
$$\frac{dy}{dt} + 2y = h(t-4),\qquad y(0) = 3,$$
we take the Laplace transform and we have
$$sY - y(0) + 2Y = \frac{e^{-4s}}{s} \Rightarrow (s+2)Y = 3 + \frac{e^{-4s}}{s} \Rightarrow$$
$$Y(s) = \frac{3}{s+2} + e^{-4s}\frac{1}{s(s+2)} = \frac{3}{s+2} + \frac{1}{2}e^{-4s}\left(\frac{1}{s} - \frac{1}{s+2}\right) \Rightarrow$$
$$y(t) = 3e^{-2t} + \frac{1}{2}h(t-4)\left(1 - e^{-2(t-4)}\right).$$
7.1.6 The Laplace transform method can also be applied to the solution of systems of ODEs, as seen in the following examples.
7.1.7 Example. To solve
$$\frac{dx}{dt} = 2x + y + 1,\quad x(0) = 0;\qquad \frac{dy}{dt} = 3x + 4y,\quad y(0) = 0,$$
we have
$$sX - x(0) = 2X + Y + \frac{1}{s},\qquad sY - y(0) = 3X + 4Y,$$
i.e.,
$$(s-2)X - Y = \frac{1}{s},\qquad -3X + (s-4)Y = 0.$$
The solution is
$$X = \frac{s-4}{s^3 - 6s^2 + 5s} = \frac{s-4}{s(s-1)(s-5)} = \frac{3}{4(s-1)} + \frac{1}{20(s-5)} - \frac{4}{5s},$$
$$Y = \frac{3}{s^3 - 6s^2 + 5s} = \frac{3}{s(s-1)(s-5)} = \frac{3}{20(s-5)} - \frac{3}{4(s-1)} + \frac{3}{5s},$$
and so
$$x = \frac{1}{20}e^{5t} + \frac{3}{4}e^t - \frac{4}{5},\qquad y = \frac{3}{20}e^{5t} - \frac{3}{4}e^t + \frac{3}{5}.$$
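In the transformed domain the system above is just a pair of linear algebraic equations in $X$ and $Y$, so a computer algebra system can carry out the whole computation. The sketch below (using sympy, not part of the text) reproduces Example 7.1.7:

```python
import sympy as sp

t = sp.Symbol("t", positive=True)
s = sp.Symbol("s")
X, Y = sp.symbols("X Y")

# transformed system: (s-2)X - Y = 1/s, -3X + (s-4)Y = 0
sol = sp.solve([sp.Eq((s - 2)*X - Y, 1/s),
                sp.Eq(-3*X + (s - 4)*Y, 0)], [X, Y])

x = sp.inverse_laplace_transform(sp.apart(sol[X]), s, t)
y = sp.inverse_laplace_transform(sp.apart(sol[Y]), s, t)
print(x)
print(y)
```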
7.2 Solved Problems


7.2.1 Problem. Solve
$$\frac{d^2y}{dt^2} - 3\frac{dy}{dt} + 2y = 4e^{2t},\qquad y(0) = -3,\ y'(0) = 5.$$
Solution. Taking the Laplace transform we get
$$s^2Y - sy(0) - y'(0) - 3\left(sY - y(0)\right) + 2Y = \frac{4}{s-2} \Rightarrow \left(s^2-3s+2\right)Y + 3s - 5 - 9 = \frac{4}{s-2} \Rightarrow$$
$$Y = -\frac{3s-14}{(s-1)(s-2)} + \frac{4}{(s-2)^2(s-1)} = \frac{4}{s-2} - \frac{7}{s-1} + \frac{4}{(s-2)^2} \Rightarrow$$
$$y(t) = 4e^{2t} - 7e^t + 4te^{2t}.$$
7.2.2 Problem. Solve
$$\frac{d^3y}{dt^3} - 3\frac{d^2y}{dt^2} + 3\frac{dy}{dt} - y = t^2e^t,\qquad y(0) = 1,\ y'(0) = 0,\ y''(0) = -2.$$
Solution. Taking the Laplace transform we get
$$\left(s^3 - 3s^2 + 3s - 1\right)Y - s^2 + 3s - 1 = \frac{2}{(s-1)^3} \Rightarrow (s-1)^3Y = \frac{2}{(s-1)^3} + s^2 - 3s + 1 \Rightarrow$$
$$Y = \frac{2}{(s-1)^6} + \frac{s^2-3s+1}{(s-1)^3} = \frac{2}{(s-1)^6} + \frac{(s-1)^2 - (s-1) - 1}{(s-1)^3} = \frac{2}{(s-1)^6} + \frac{1}{s-1} - \frac{1}{(s-1)^2} - \frac{1}{(s-1)^3} \Rightarrow$$
$$y(t) = \frac{2t^5e^t}{5!} + e^t - te^t - \frac{t^2e^t}{2} = \frac{t^5e^t}{60} + e^t - te^t - \frac{t^2e^t}{2}.$$
7.2.3 Problem. Solve
$$\frac{d^3y}{dt^3} - 3\frac{d^2y}{dt^2} + 3\frac{dy}{dt} - y = t^2e^t.$$
Solution. We set $y(0) = a$, $y'(0) = b$, $y''(0) = c$. Then
$$s^3Y - as^2 - bs - c - 3\left(s^2Y - as - b\right) + 3\left(sY - a\right) - Y = \frac{2}{(s-1)^3} \Rightarrow$$
$$\left(s^3 - 3s^2 + 3s - 1\right)Y = \frac{2}{(s-1)^3} + as^2 + (b-3a)s + (3a-3b+c) \Rightarrow$$
$$Y = \frac{2}{(s-1)^6} + \frac{c_3}{(s-1)^3} + \frac{c_2}{(s-1)^2} + \frac{c_1}{s-1} \Rightarrow$$
$$y(t) = \frac{e^tt^5}{60} + c_1e^t + c_2te^t + c_3t^2e^t,$$
where $c_1, c_2, c_3$ are appropriate combinations of $a, b, c$.
7.2.4 Problem. Solve
$$\frac{d^2y}{dt^2} + y = h(t-2) - h(t-4),\qquad y(0) = 1,\ y'(0) = 0.$$
Solution. Taking the Laplace transform we get
$$s^2Y - s + Y = \frac{e^{-2s}}{s} - \frac{e^{-4s}}{s} \Rightarrow \left(s^2+1\right)Y = s + \frac{e^{-2s}}{s} - \frac{e^{-4s}}{s} \Rightarrow$$
$$Y = \frac{s}{s^2+1} + \frac{e^{-2s}}{s\left(s^2+1\right)} - \frac{e^{-4s}}{s\left(s^2+1\right)}.$$
We have
$$Y_1 = \frac{s}{s^2+1},\qquad Y_2 = \frac{1}{s} - \frac{s}{s^2+1},$$
and hence
$$y_1(t) = \cos t,\qquad y_2(t) = 1 - \cos t,$$
and
$$y(t) = y_1(t) + h(t-2)y_2(t-2) - h(t-4)y_2(t-4) = \cos t + h(t-2)\left(1-\cos(t-2)\right) - h(t-4)\left(1-\cos(t-4)\right).$$
7.2.5 Problem. Solve
$$\frac{d^2y}{dt^2} + \frac{dy}{dt} + \frac{5}{4}y = h(t) - h(t-5),\qquad y(0) = 0,\ y'(0) = 0.$$
Solution. Taking the Laplace transform we get
$$s^2Y + sY + \frac{5}{4}Y = \frac{1-e^{-5s}}{s} \Rightarrow Y = \left(1-e^{-5s}\right)\frac{1}{s\left(s^2+s+\frac{5}{4}\right)}.$$
We will first compute $z(t)$ when
$$Z(s) = \frac{1}{s\left(s^2+s+\frac{5}{4}\right)}.$$
Since $s^2 + s + \frac{5}{4} = \left(s+\frac{1}{2}\right)^2 + 1$ we have
$$\frac{1}{s\left(s^2+s+\frac{5}{4}\right)} = \frac{4}{5}\left(\frac{1}{s} - \frac{s+1}{s^2+s+\frac{5}{4}}\right) = \frac{4}{5}\left(\frac{1}{s} - \frac{s+\frac{1}{2}}{\left(s+\frac{1}{2}\right)^2+1} - \frac{\frac{1}{2}}{\left(s+\frac{1}{2}\right)^2+1}\right)$$
and
$$z(t) = \frac{4}{5}\left(1 - e^{-t/2}\cos t - \frac{e^{-t/2}}{2}\sin t\right).$$
Then
$$y(t) = z(t) - z(t-5)\,h(t-5).$$
7.2.6 Problem. Solve
$$\frac{d^2y}{dt^2} + y = f(t),\qquad y(0) = 0,\ y'(0) = 0,$$
where
$$f(t) = (-1)^m \text{ when } t \in [m\pi, (m+1)\pi).$$
Solution. We can write
$$f(t) = 1 + 2\sum_{n=1}^{\infty}(-1)^nh(t-n\pi),$$
so taking the Laplace transform we get
$$F(s) = \frac{1}{s}\left(1 + 2\sum_{n=1}^{\infty}(-1)^ne^{-n\pi s}\right).$$
Then
$$Y(s) = \frac{1}{s\left(s^2+1\right)}\left(1 + 2\sum_{n=1}^{\infty}(-1)^ne^{-n\pi s}\right).$$
Now
$$\mathcal{L}^{-1}\left(\frac{1}{s\left(s^2+1\right)}\right) = \mathcal{L}^{-1}\left(\frac{1}{s} - \frac{s}{s^2+1}\right) = 1 - \cos t,$$
$$\mathcal{L}^{-1}\left(\frac{e^{-n\pi s}}{s\left(s^2+1\right)}\right) = h(t-n\pi)\left(1 - \cos(t-n\pi)\right) = h(t-n\pi)\left(1 - (-1)^n\cos t\right).$$
Hence
$$y(t) = 1 - \cos t + 2\sum_{n=1}^{\infty}(-1)^nh(t-n\pi)\left(1 - (-1)^n\cos t\right).$$
Note that when $t \in [m\pi, (m+1)\pi)$ we get
$$y(t) = 1 - \cos t + 2\sum_{n=1}^{m}(-1)^n\left(1 - (-1)^n\cos t\right) = (-1)^m - (2m+1)\cos t.$$
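The closed form obtained at the end of this problem can be checked numerically against the truncated series. The following sketch is an illustration (not part of the text), using only the Python standard library:

```python
import math

def y_series(t, terms=200):
    # y(t) = 1 - cos t + 2 * sum_{n>=1} (-1)^n h(t - n*pi) (1 - (-1)^n cos t)
    total = 1 - math.cos(t)
    for n in range(1, terms + 1):
        if t - n*math.pi > 0:  # h(t - n*pi) = 1 only for t > n*pi
            total += 2 * (-1)**n * (1 - (-1)**n * math.cos(t))
    return total

def y_closed(t):
    m = int(t // math.pi)  # t lies in [m*pi, (m+1)*pi)
    return (-1)**m - (2*m + 1)*math.cos(t)

for tt in [0.5, 2.0, 4.0, 7.5, 12.3]:
    print(tt, y_series(tt), y_closed(tt))
```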
7.2.7 Problem. Solve
$$\frac{dx}{dt} = 2x - 3y,\quad x(0) = 8;\qquad \frac{dy}{dt} = -2x + y,\quad y(0) = 3.$$
Solution. We have
$$\begin{aligned} sX - x(0) &= 2X - 3Y\\ sY - y(0) &= -2X + Y \end{aligned} \quad\Rightarrow\quad \begin{aligned} (s-2)X + 3Y &= 8\\ 2X + (s-1)Y &= 3. \end{aligned}$$
By Cramer's rule
$$X = \frac{\begin{vmatrix} 8 & 3\\ 3 & s-1 \end{vmatrix}}{\begin{vmatrix} s-2 & 3\\ 2 & s-1 \end{vmatrix}} = \frac{8s-17}{s^2-3s-4} = \frac{5}{s+1} + \frac{3}{s-4} \Rightarrow x = 5e^{-t} + 3e^{4t},$$
$$Y = \frac{\begin{vmatrix} s-2 & 8\\ 2 & 3 \end{vmatrix}}{\begin{vmatrix} s-2 & 3\\ 2 & s-1 \end{vmatrix}} = \frac{3s-22}{s^2-3s-4} = \frac{5}{s+1} - \frac{2}{s-4} \Rightarrow y = 5e^{-t} - 2e^{4t}.$$

7.2.8 Problem. Solve
$$\frac{dx}{dt} = y + 1,\quad x(0) = 1;\qquad \frac{dy}{dt} = -x + t,\quad y(0) = 0.$$
Solution. We have
$$\begin{aligned} sX - x(0) &= Y + \frac{1}{s}\\ sY - y(0) &= -X + \frac{1}{s^2} \end{aligned} \quad\Rightarrow\quad \begin{aligned} sX - Y &= 1 + \frac{1}{s}\\ X + sY &= \frac{1}{s^2} \end{aligned} \quad\Rightarrow\quad X = \frac{s^3+s^2+1}{s^2+s^4},\quad Y = -\frac{1}{s^2+1}.$$
Then
$$X = \frac{s^3+s^2+1}{s^2\left(s^2+1\right)} = \frac{s}{1+s^2} + \frac{1}{s^2}$$
and
$$x = \mathcal{L}^{-1}\left(\frac{s}{1+s^2} + \frac{1}{s^2}\right) = \cos t + t,\qquad y = \mathcal{L}^{-1}\left(-\frac{1}{s^2+1}\right) = -\sin t.$$
7.3 Unsolved Problems
1. Given $\frac{dx}{dt} + 2x = 1$, $x(0) = 3$, find $X(s) = \mathcal{L}(x(t))$ and $x(t)$.
Ans. $x(t) = \frac{1}{2} + \frac{5}{2}e^{-2t}$, $X(s) = \frac{5}{2(s+2)} + \frac{1}{2s}$.
2. Given $\frac{d^2x}{dt^2} + 3\frac{dx}{dt} + 2x = 0$, $x(0) = 3$, $x'(0) = 1$, find $X(s) = \mathcal{L}(x(t))$ and $x(t)$.
Ans. $x(t) = 7e^{-t} - 4e^{-2t}$, $X(s) = \frac{7}{s+1} - \frac{4}{s+2}$.
3. Given $\frac{d^2x}{dt^2} + 4\frac{dx}{dt} + 4x = 0$, $x(0) = 3$, $x'(0) = 1$, find $X(s) = \mathcal{L}(x(t))$ and $x(t)$.
Ans. $x(t) = 3e^{-2t} + 7te^{-2t}$, $X(s) = \frac{3}{s+2} + \frac{7}{(s+2)^2}$.
4. Given $\frac{d^2x}{dt^2} + 4\frac{dx}{dt} + 4x = e^{-t}$, $x(0) = 3$, $x'(0) = 1$, find $X(s) = \mathcal{L}(x(t))$ and $x(t)$.
Ans. $x(t) = 2e^{-2t} + 6te^{-2t} + e^{-t}$, $X(s) = \frac{2}{s+2} + \frac{6}{(s+2)^2} + \frac{1}{s+1}$.
5. Given $\frac{d^2x}{dt^2} + 4\frac{dx}{dt} + 4x = e^{-2t}$, $x(0) = 1$, $x'(0) = 1$, find $X(s) = \mathcal{L}(x(t))$ and $x(t)$.
Ans. $x(t) = e^{-2t} + 3te^{-2t} + \frac{1}{2}t^2e^{-2t}$, $X(s) = \frac{1}{s+2} + \frac{3}{(s+2)^2} + \frac{1}{(s+2)^3}$.
6. Given $\frac{d^2x}{dt^2} + 7\frac{dx}{dt} + 10x = 0$, $x(0) = 1$, $x'(0) = -1$, find $X(s) = \mathcal{L}(x(t))$ and $x(t)$.
Ans. $x(t) = \frac{4}{3}e^{-2t} - \frac{1}{3}e^{-5t}$, $X(s) = \frac{4}{3(s+2)} - \frac{1}{3(s+5)}$.
7. Given $\frac{d^2x}{dt^2} + 10\frac{dx}{dt} + 25x = 1$, $x(0) = 1$, $x'(0) = 1$, find $X(s) = \mathcal{L}(x(t))$ and $x(t)$.
Ans. $x(t) = \frac{24}{25}e^{-5t} + \frac{29}{5}te^{-5t} + \frac{1}{25}$, $X(s) = \frac{24}{25(s+5)} + \frac{29}{5(s+5)^2} + \frac{1}{25s}$.
8. Given $\frac{d^2x}{dt^2} + \frac{dx}{dt} + \frac{1}{4}x = e^{-t/2}$, $x(0) = 1$, $x'(0) = 0$, find $X(s) = \mathcal{L}(x(t))$ and $x(t)$.
Ans. $x(t) = e^{-\frac{1}{2}t} + \frac{1}{2}te^{-\frac{1}{2}t} + \frac{1}{2}t^2e^{-t/2}$, $X(s) = \frac{1}{s+\frac{1}{2}} + \frac{1}{2\left(s+\frac{1}{2}\right)^2} + \frac{1}{\left(s+\frac{1}{2}\right)^3}$.
9. Given $\frac{d^2x}{dt^2} + 4\frac{dx}{dt} + 4x = \sin 2t$, $x(0) = 1$, $x'(0) = 0$, find $X(s) = \mathcal{L}(x(t))$ and $x(t)$.
Ans. $x(t) = \frac{9}{4}te^{-2t} - \frac{1}{8}\cos 2t + \frac{9}{8}e^{-2t}$, $X(s) = \frac{9}{8(s+2)} - \frac{1}{8}\frac{s}{s^2+4} + \frac{9}{4}\frac{1}{(s+2)^2}$.
10. Given $\frac{d^2x}{dt^2} + 3\frac{dx}{dt} + 2x = t\sin t$, $x(0) = 0$, $x'(0) = 0$, find $X(s) = \mathcal{L}(x(t))$ and $x(t)$.
Ans. $x(t) = \frac{17}{50}\cos t + \frac{3}{25}\sin t - \frac{3}{10}t\cos t + \frac{1}{10}t\sin t - \frac{1}{2}e^{-t} + \frac{4}{25}e^{-2t}$, $X(s) = \frac{17}{50}\frac{s}{s^2+1} + \frac{3}{25}\frac{1}{s^2+1} - \frac{3}{10}\frac{s^2-1}{\left(s^2+1\right)^2} + \frac{1}{5}\frac{s}{\left(s^2+1\right)^2} - \frac{1}{2(s+1)} + \frac{4}{25(s+2)}$.
11. Given $\frac{dx}{dt} = 2x - 4y + 1$, $\frac{dy}{dt} = -x - y$, $x(0) = 1$, $x'(0) = 0$, find $X(s) = \mathcal{L}(x(t))$, $Y(s) = \mathcal{L}(y(t))$ and $x(t)$, $y(t)$.
Ans. $y(t) = \frac{7}{10}e^{-2t} - \frac{7}{60}e^{3t} + \frac{1}{6}$, $x(t) = \frac{7}{10}e^{-2t} + \frac{7}{15}e^{3t} - \frac{1}{6}$, $X(s) = \frac{7}{10(s+2)} + \frac{7}{15(s-3)} - \frac{1}{6s}$, $Y(s) = \frac{7}{10(s+2)} - \frac{7}{60(s-3)} + \frac{1}{6s}$.
12. Given $\frac{dx}{dt} = 2x - 4y$, $\frac{dy}{dt} = -x - y + e^{-2t}$, $x(0) = 1$, $x'(0) = 0$, find $X(s) = \mathcal{L}(x(t))$, $Y(s) = \mathcal{L}(y(t))$ and $x(t)$, $y(t)$.
Ans. $y(t) = \frac{14}{25}e^{-2t} - \frac{3}{50}e^{3t} + \frac{4}{5}te^{-2t}$, $x(t) = \frac{19}{25}e^{-2t} + \frac{6}{25}e^{3t} + \frac{4}{5}te^{-2t}$, $X(s) = \frac{19}{25(s+2)} + \frac{4}{5(s+2)^2} + \frac{6}{25(s-3)}$, $Y(s) = \frac{14}{25(s+2)} + \frac{4}{5(s+2)^2} - \frac{3}{50(s-3)}$.
13. Given $\frac{dx}{dt} = 2x - 4y + 1$, $\frac{dy}{dt} = -x - y + 1$, $x(0) = 1$, $x'(0) = 0$, find $X(s) = \mathcal{L}(x(t))$, $Y(s) = \mathcal{L}(y(t))$ and $x(t)$, $y(t)$.
Ans. $y(t) = \frac{3}{10}e^{-2t} - \frac{1}{20}e^{3t} + \frac{1}{2}$, $x(t) = \frac{3}{10}e^{-2t} + \frac{1}{5}e^{3t} + \frac{1}{2}$, $X(s) = \frac{3}{10(s+2)} + \frac{1}{5(s-3)} + \frac{1}{2s}$, $Y(s) = \frac{3}{10(s+2)} - \frac{1}{20(s-3)} + \frac{1}{2s}$.
14. Given $\frac{dx}{dt} = 2x - 4y + \sin t$, $\frac{dy}{dt} = -x - y$, $x(0) = 0$, $x'(0) = 0$, find $X(s) = \mathcal{L}(x(t))$, $Y(s) = \mathcal{L}(y(t))$ and $x(t)$, $y(t)$.
Ans. $y(t) = \frac{7}{50}\sin t - \frac{1}{50}\cos t + \frac{1}{25}e^{-2t} - \frac{1}{50}e^{3t}$, $x(t) = \frac{1}{25}e^{-2t} - \frac{4}{25}\sin t - \frac{3}{25}\cos t + \frac{2}{25}e^{3t}$, $X(s) = \frac{2}{25(s-3)} - \frac{3}{25}\frac{s}{s^2+1} - \frac{4}{25}\frac{1}{s^2+1} + \frac{1}{25(s+2)}$, $Y(s) = \frac{7}{50}\frac{1}{s^2+1} - \frac{1}{50(s-3)} - \frac{1}{50}\frac{s}{s^2+1} + \frac{1}{25(s+2)}$.
15. Given $\frac{dx}{dt} = x + 2y$, $\frac{dy}{dt} = 4x + 3y$, $x(0) = 2$, $x'(0) = -1$, find $X(s) = \mathcal{L}(x(t))$, $Y(s) = \mathcal{L}(y(t))$ and $x(t)$, $y(t)$.
Ans. $x(t) = \frac{11}{6}e^{-t} + \frac{1}{6}e^{5t}$, $y(t) = \frac{1}{3}e^{5t} - \frac{11}{6}e^{-t}$, $X(s) = \frac{11}{6(s+1)} + \frac{1}{6(s-5)}$, $Y(s) = \frac{1}{3(s-5)} - \frac{11}{6(s+1)}$.
16. Given $\frac{dx}{dt} = x + 2y + e^t$, $\frac{dy}{dt} = 4x + 3y + 1$, $x(0) = 1$, $x'(0) = 0$, find $X(s) = \mathcal{L}(x(t))$, $Y(s) = \mathcal{L}(y(t))$ and $x(t)$, $y(t)$.
Ans. $y(t) = \frac{3}{10}e^{5t} - e^{-t} - \frac{1}{2}e^t + \frac{1}{5}$, $x(t) = \frac{1}{4}e^t + e^{-t} + \frac{3}{20}e^{5t} - \frac{2}{5}$, $X(s) = \frac{1}{4(s-1)} + \frac{1}{s+1} + \frac{3}{20(s-5)} - \frac{2}{5s}$, $Y(s) = \frac{3}{10(s-5)} - \frac{1}{s+1} - \frac{1}{2(s-1)} + \frac{1}{5s}$.
17. Given $\frac{dx}{dt} = x + 2y$, $\frac{dy}{dt} = 4x + 3y + e^{-t}$, $x(0) = 1$, $x'(0) = 0$, find $X(s) = \mathcal{L}(x(t))$, $Y(s) = \mathcal{L}(y(t))$ and $x(t)$, $y(t)$.
Ans. $y(t) = \frac{4}{9}e^{5t} - \frac{17}{18}e^{-t} + \frac{1}{3}te^{-t}$, $x(t) = \frac{7}{9}e^{-t} + \frac{2}{9}e^{5t} - \frac{1}{3}te^{-t}$, $X(s) = \frac{7}{9(s+1)} - \frac{1}{3(s+1)^2} + \frac{2}{9(s-5)}$, $Y(s) = \frac{1}{3(s+1)^2} - \frac{17}{18(s+1)} + \frac{4}{9(s-5)}$.
18. Given $\frac{dx}{dt} = x + 2y + e^{-t}$, $\frac{dy}{dt} = 4x + 3y + 1$, $x(0) = 1$, $x'(0) = 0$, find $X(s) = \mathcal{L}(x(t))$, $Y(s) = \mathcal{L}(y(t))$ and $x(t)$, $y(t)$.
Ans. $y(t) = \frac{11}{45}e^{5t} - \frac{13}{9}e^{-t} - \frac{2}{3}te^{-t} + \frac{1}{5}$, $x(t) = \frac{23}{18}e^{-t} + \frac{11}{90}e^{5t} + \frac{2}{3}te^{-t} - \frac{2}{5}$, $X(s) = \frac{23}{18(s+1)} + \frac{2}{3(s+1)^2} + \frac{11}{90(s-5)} - \frac{2}{5s}$, $Y(s) = \frac{11}{45(s-5)} - \frac{2}{3(s+1)^2} - \frac{13}{9(s+1)} + \frac{1}{5s}$.

7.4 Advanced Problems
1. Suppose $f(t)$ is continuous in $[0, \infty)$ and of exponential order. Prove that every solution of $\frac{dx}{dt} + ax(t) = f(t)$ is of exponential order.
8. Convolution and Integral Equations

We present the convolution of two functions, a concept basic to understanding the nature of solutions of differential equations. Convolution also generates integral equations which can be converted, by the Laplace transform, to algebraic equations.

8.1 Theory and Examples


8.1.1 Definition. The convolution of $f(t)$, $g(t)$ is
$$(f * g)(t) = \int_0^t f(\tau)g(t-\tau)\,d\tau.$$
8.1.2 Example.
$$e^{-3t} * e^t = \int_0^t e^{\tau}e^{-3(t-\tau)}\,d\tau = \int_0^t e^{-3t}e^{4\tau}\,d\tau = e^{-3t}\left[\frac{1}{4}e^{4\tau}\right]_{\tau=0}^t = \frac{1}{4}e^{-3t}\left(e^{4t}-1\right) = \frac{e^t - e^{-3t}}{4}.$$
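The closed form just obtained can be checked by evaluating the convolution integral numerically. The sketch below (an illustration, not part of the text) approximates $(e^{-3t} * e^t)(t)$ with the trapezoid rule and compares it with $\frac{e^t - e^{-3t}}{4}$:

```python
import math

def conv(f, g, t, n=2000):
    # trapezoid-rule approximation of (f*g)(t) = integral_0^t f(tau) g(t - tau) dtau
    h = t / n
    total = 0.5*(f(0)*g(t) + f(t)*g(0))
    for k in range(1, n):
        tau = k*h
        total += f(tau)*g(t - tau)
    return total*h

f = lambda tau: math.exp(-3*tau)
g = lambda tau: math.exp(tau)

for tt in [0.5, 1.0, 2.0]:
    closed = (math.exp(tt) - math.exp(-3*tt))/4
    print(tt, conv(f, g, tt), closed)
```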

8.1.3 Theorem (Convolution). Let $F(s) = \mathcal{L}(f(t))$, $G(s) = \mathcal{L}(g(t))$. Then
$$F(s)G(s) = \mathcal{L}\left(\int_0^t f(\tau)g(t-\tau)\,d\tau\right) = \mathcal{L}(f * g).$$
Proof. Changing the order of integration we have
$$\mathcal{L}\left(\int_{\tau=0}^{t} f(\tau)g(t-\tau)\,d\tau\right) = \int_{t=0}^{\infty} e^{-st}\left(\int_{\tau=0}^{t} f(\tau)g(t-\tau)\,d\tau\right)dt = \int_{\tau=0}^{\infty}\int_{t=\tau}^{\infty} e^{-st}f(\tau)g(t-\tau)\,dt\,d\tau.$$
Now letting $v = t - \tau$, $dv = dt$ (for $\tau$ constant), this becomes
$$\int_{\tau=0}^{\infty}\int_{v=0}^{\infty} e^{-s(v+\tau)}f(\tau)g(v)\,dv\,d\tau = \left(\int_{\tau=0}^{\infty} e^{-s\tau}f(\tau)\,d\tau\right)\left(\int_{v=0}^{\infty} e^{-sv}g(v)\,dv\right) = F(s)G(s).$$
8.1.4 Example. From the previous example we know
$$\mathcal{L}\left(e^{-3t} * e^t\right) = \mathcal{L}\left(\frac{e^t - e^{-3t}}{4}\right) = \frac{1}{4(s-1)} - \frac{1}{4(s+3)}.$$
From the Convolution Theorem we get
$$\mathcal{L}\left(e^t * e^{-3t}\right) = \mathcal{L}\left(e^t\right)\mathcal{L}\left(e^{-3t}\right) = \frac{1}{(s-1)(s+3)} = \frac{1}{4(s-1)} - \frac{1}{4(s+3)}.$$

8.1.5 Example. Similarly to the previous:
$$(t) * \left(te^{-t}\right) = \int_0^t \tau e^{-\tau}(t-\tau)\,d\tau = \int_0^t e^{-\tau}\left(t\tau - \tau^2\right)d\tau$$
and, integrating by parts twice,
$$= \left[-e^{-\tau}\left(t\tau - \tau^2\right) - (t-2\tau)e^{-\tau} + 2e^{-\tau}\right]_{\tau=0}^{t} = te^{-t} + 2e^{-t} + t - 2.$$
But also
$$\mathcal{L}(t) = \frac{1}{s^2},\qquad \mathcal{L}\left(te^{-t}\right) = \frac{1}{(s+1)^2},$$
and
$$\mathcal{L}(t)\,\mathcal{L}\left(te^{-t}\right) = \frac{1}{(s+1)^2s^2} = \frac{2}{s+1} + \frac{1}{(s+1)^2} - \frac{2}{s} + \frac{1}{s^2}.$$
Hence
$$\mathcal{L}^{-1}\left(\frac{1}{(s+1)^2s^2}\right) = \mathcal{L}^{-1}\left(\frac{2}{s+1} + \frac{1}{(s+1)^2} - \frac{2}{s} + \frac{1}{s^2}\right) = t + 2e^{-t} + te^{-t} - 2.$$

8.1.6 Theorem. The following hold when the respective convolutions are well
defined.
h

1. f ∗ g = g ∗ f ,
2. ( f ∗ g) ∗ p = f ∗ (g ∗ p),
3. ( f + g) ∗ p = f ∗ p + g ∗ p.
4. With o (t) being the zero function (∀t : o (t) = 0): f ∗ o = o (so o is the zero
At

element).
Proof. Immediate.
8.1.7 Question. What is the unit element of convolution? I.e., what is u :
f ∗ u = f ? It must be such that L (u) = 1. Have you seen a function u (t) such that
L (u) is a constant?
8.1.8 To obtain an intuitive interpretation of the convolution, consider the solution of the ODE
$$\frac{df}{dt} + a_0f = g(t)$$
which, as you recall, is
$$f(t) = e^{-a_0t}\left(\int e^{a_0t}g(t)\,dt + c\right) = ce^{-a_0t} + e^{-a_0t}\int e^{a_0t}g(t)\,dt.$$
To also satisfy $f(0) = f_0$ we take $c = f_0$ and so we have
$$f(t) = f_0e^{-a_0t} + e^{-a_0t}\int_0^t e^{a_0\tau}g(\tau)\,d\tau = f_0e^{-a_0t} + \int_0^t e^{-a_0(t-\tau)}g(\tau)\,d\tau = f_0e^{-a_0t} + g * e^{-a_0t}.$$
So $f(t)$ is the superposition of two components:
1. $f(0)e^{-a_0t}$ is the initial condition transmitted in time with a modulation (attenuation) $e^{-a_0t}$;
2. $g * e^{-a_0t} = \int_0^t e^{-a_0(t-\tau)}g(\tau)\,d\tau$ can be interpreted as the sum of all inputs $g(\tau)$ (i.e., at each time $\tau \in (0,t]$) modulated by $e^{-a_0(t-\tau)}$, i.e., attenuated for a time $t-\tau$, which is the length of time from when $g(\tau)$ was applied until the current time.
8.1.9 With the Laplace transform, we can generalize the above to the problem
$$\frac{d^nf}{dt^n} + a_{n-1}\frac{d^{n-1}f}{dt^{n-1}} + \dots + a_0f = g(t),\qquad f(0) = f_0,\ f'(0) = f_1,\ \dots,\ f^{(n-1)}(0) = f_{n-1}.$$
For simplicity we take $f_0 = f_1 = \dots = f_{n-1} = 0$ (but similar results can be obtained for nonzero initial conditions). In this case we have
$$\left(s^n + a_{n-1}s^{n-1} + \dots + a_1s + a_0\right)F = G$$
or
$$F = \frac{1}{s^n + a_{n-1}s^{n-1} + \dots + a_1s + a_0}\,G = PG$$
and, with
$$p(t) = \mathcal{L}^{-1}(P) = \mathcal{L}^{-1}\left(\frac{1}{s^n + a_{n-1}s^{n-1} + \dots + a_1s + a_0}\right),$$
we get
$$f(t) = g * p = \int_0^t g(\tau)p(t-\tau)\,d\tau.$$
In other words: $f(t)$ (at time $t$) is the sum of the inputs $g(\tau)$ (at all times $\tau \in (0,t]$) modulated by $p$ for a length of time $t-\tau$ (from $\tau$ until the current time $t$). If
$$\frac{d^nf}{dt^n} + a_{n-1}\frac{d^{n-1}f}{dt^{n-1}} + \dots + a_0f = g$$
describes the output of a (physical or other) system when the input is $g(t)$, then we can picture the operation of the system as a box with input $g(t)$ and output $f(t)$ (and $g(t)$ can be changeable). Then we have $F = PG$ and we call $P$ the transfer function of the system. It gives the output $F(s)$ (in Laplace space) for every input $G(s)$ (we can also say that $P = \frac{F}{G}$, where the function $P(s)$ is fixed for every $(G,F)$ pair and describes the system behavior). We call $p(t)$ the impulse response function of the system (why?).
8.1.10 The situation is similar when the system has equation
$$\frac{d^nf}{dt^n} + a_{n-1}\frac{d^{n-1}f}{dt^{n-1}} + \dots + a_0f = b_m\frac{d^mg}{dt^m} + b_{m-1}\frac{d^{m-1}g}{dt^{m-1}} + \dots + b_0g$$
(so the input is also processed by the system), in which case the transfer function is
$$P(s) = \frac{b_ms^m + \dots + b_1s + b_0}{s^n + a_{n-1}s^{n-1} + \dots + a_1s + a_0}.$$
8.1.11 Example. To solve
$$\frac{d^2y}{dt^2} + w^2y = f(t),\qquad y(0) = 1,\ y'(0) = -2,$$
we take the Laplace transform and get
$$s^2Y - sy(0) - y'(0) + w^2Y = F(s) \Rightarrow \left(s^2+w^2\right)Y - s + 2 = F(s) \Rightarrow Y = \frac{s-2}{s^2+w^2} + \frac{F(s)}{s^2+w^2} \Rightarrow$$
$$y(t) = \cos wt - \frac{2}{w}\sin wt + f(t) * \frac{\sin wt}{w} = \cos wt - \frac{2}{w}\sin wt + \frac{1}{w}\int_0^t f(\tau)\sin\left(w(t-\tau)\right)d\tau.$$
8.1.12 The Laplace transform can also be used to solve integral equations (usually of the convolution type, but other types can also be handled).
8.1.13 Example. To solve
$$x(t) = t^2 + \int_0^t x(u)\sin(t-u)\,du$$
we have
$$X = \frac{2}{s^3} + X\frac{1}{s^2+1} \Rightarrow X\left(1 - \frac{1}{s^2+1}\right) = \frac{2}{s^3} \Rightarrow X\frac{s^2}{s^2+1} = \frac{2}{s^3} \Rightarrow X = \frac{2\left(s^2+1\right)}{s^5} = \frac{2}{s^3} + \frac{2}{s^5} \Rightarrow$$
$$x = t^2 + \frac{1}{12}t^4.$$
8.1.14 Example. To solve
$$x(t) = \frac{t^3}{6} - \int_0^t x(u)(t-u)\,du$$
we have
$$X = \frac{1}{s^4} - X\frac{1}{s^2} \Rightarrow X\left(1 + \frac{1}{s^2}\right) = \frac{1}{s^4} \Rightarrow X\frac{s^2+1}{s^2} = \frac{1}{s^4} \Rightarrow X = \frac{1}{s^2\left(s^2+1\right)} = \frac{1}{s^2} - \frac{1}{s^2+1} \Rightarrow$$
$$x = t - \sin t.$$
8.1.15 Example. To solve
$$\int_0^t x(u)\,x(t-u)\,du = 2x(t) + t - 2$$
we have
$$\left(X(s)\right)^2 = 2X(s) + \frac{1}{s^2} - \frac{2}{s} \Rightarrow X^2 - 2X = \frac{1}{s^2} - \frac{2}{s},$$
and we note immediately that $X = \frac{1}{s}$ satisfies the equation. Hence $x(t) = 1$.
8.1.16 Example. To solve
$$x(t) = t - \int_0^t x(u)(t-u)\,du$$
we have
$$X = \frac{1}{s^2} - X\frac{1}{s^2} \Rightarrow X\left(1 + \frac{1}{s^2}\right) = \frac{1}{s^2} \Rightarrow X\frac{s^2+1}{s^2} = \frac{1}{s^2} \Rightarrow X = \frac{1}{s^2+1} \Rightarrow x = \sin t.$$
s +1

8.2 Solved Problems


8.3 Unsolved Problems
1. Compute the convolution $(1) * (1)$.
Ans. $t$.
2. Compute the convolution $(t) * (\cos t)$.
Ans. $1 - \cos t$.
3. Compute the convolution $(\sin t) * (\cos t)$.
Ans. $\frac{1}{2}t\sin t$.
4. Compute the convolution $(e^t) * (\cos t)$.
Ans. $\frac{1}{2}e^t - \frac{1}{2}\cos t + \frac{1}{2}\sin t$.
5. Compute the convolution $\underbrace{1 * 1 * \dots * 1}_{n\ \text{times}}$.
Ans. $\frac{t^{n-1}}{(n-1)!}$.
6. Solve the integral equation $\int_0^t x(\tau)e^{-(t-\tau)}\,d\tau = t^2 - 2e^{-t} - 2t + 2$.
Ans. $t^2$.
7. Solve the integral equation $\int_0^t x(\tau)e^{-(t-\tau)}\,d\tau = \frac{1}{2}\cos t + \frac{1}{2}\sin t - \frac{1}{2}e^{-t}$.
Ans. $\cos t$.
8. Solve the integral equation $\int_0^t x(\tau)e^{-(t-\tau)}\,d\tau = \frac{1}{2}t\cos t - \frac{1}{2}\sin t + \frac{1}{2}t\sin t$.
Ans. $t\cos t$.
9. Solve the integral equation $\int_0^t x(\tau)e^{-(t-\tau)}\,d\tau = \frac{1}{2e^t}\left(e^{2t} - 1\right)$.
Ans. $e^t$.
10. Solve the integral equation $\int_0^t x(\tau)e^{-(t-\tau)}\,d\tau = \frac{1}{4e^t}\left(e^{2t}(2t-1) + 1\right)$.
Ans. $te^t$.
11. Solve the integral equation $\int_0^t x(\tau)e^{-(t-\tau)}\,d\tau = t^2 - 3e^{-t} - 2t + 3$.
Ans. $t^2 + 1$.
12. Solve the integral equation $\int_0^t x(\tau)e^{-(t-\tau)}\,d\tau = t^2 - e^{-t} - t + 1$.
Ans. $t + t^2$.
13. Solve the integral equation $\int_0^t x(\tau)e^{-(t-\tau)}\,d\tau = \frac{2}{5}e^{-t} - \frac{2}{5}\cos 2t + \frac{1}{5}\sin 2t$.
Ans. $\sin(2t)$.
14. Solve the integral equation $\int_0^t x(\tau)e^{-(t-\tau)}\,d\tau = \frac{1}{10}\cos 3t - \frac{1}{10}e^{-t} + \frac{3}{10}\sin 3t$.
Ans. $\cos(3t)$.
15. Solve the integral equation $\int_0^t x(\tau)(t-\tau)\,d\tau = \frac{t^3}{6}$.
Ans. $t$.
16. Solve the integral equation $\int_0^t x(\tau)(t-\tau)\,d\tau = 1 - \cos t$.
Ans. $\cos t$.
17. Solve the integral equation $\int_0^t x(\tau)(t-\tau)\,d\tau = e^t - t - 1$.
Ans. $e^t$.
18. Solve the integral equation $\int_0^t x(\tau)\cos(t-\tau)\,d\tau = \frac{1}{2}e^t - \frac{1}{2}\cos t + \frac{1}{2}\sin t$.
Ans. $e^t$.
19. Solve the integral equation $\int_0^t x(\tau)\cos(t-\tau)\,d\tau = \frac{1}{2}\sin t + \frac{1}{2}t\cos t$.
Ans. $\cos t$.
20. Solve the integral equation $\int_0^t x(\tau)\cos(t-\tau)\,d\tau = \frac{1}{2}t\sin t$.
Ans. $\sin t$.
eh Rt
21. Solve the integral equation 0 x (τ) e−(t−τ) dτ = 2t .
Ans. 2 + 2t .
Rt
22. Solve the integral equation 0 x (τ) (2 + t − τ) dτ = t .
Ans. 21 e−t/2 .
Rt
23. Solve the integral equation 0 x (τ) e−2(t−τ) dτ = sint .
Ans. cost + 2 sint .
.K
Rt
24. Solve the integral equation x (t) + 0 x (τ) sin (t − τ) dτ = cost + 21 t sint .
Ans. cos (t).
Rt
25. Solve the integral equation x (t) + 0 x (τ) e−2(t−τ) dτ = e−2t + (ln (et )) e−2t .
Ans. e−2t .
Rt
26. Solve the integral equation x (t) + 0 x (τ) e−(t−τ) dτ = 2t + e−t − 1.
Ans. t .
Rt
27. Solve the integral equation x (t) + 0 x (τ) et−τ dτ = t .
2
Ans. 2t−t
2 . Rt
28. Solve the integral equation x (t) + 0 x (τ) (t − τ) dτ = t .
h

Ans. sint.
Rt
29. Solve the integral equation x (t) + 0 x (τ) (t − τ) dτ = sint .
Ans. 21 sint + 21 t cost .
Rt t
30. Solve the integral equation x (t) + 0 x (τ) et−τ dτ = e +sint
2 .
At

sint+cost
Ans. 2 .

8.4 Advanced Problems


1. Suppose f (t) , g (t) are piecewise continuous in [0, ∞) and of exponential order.
Show that f (t) ∗ g (t) is of exponential order.
ias
ag
9. Dirac Delta and Generalized Functions

The Dirac delta function $\delta(t)$ is very useful in the solution of ODEs, especially ones related to engineering problems. However, despite its name, it is not a function in the usual sense. Rather, it is a generalized function. Our main task in this chapter is to clarify this concept.

9.1 Theory and Examples


9.1.1 We return to the full definition of the Laplace transform:
$$\mathcal{L}(x(t)) := \int_{0^-}^{\infty} x(t)\,e^{-st}\,dt. \qquad (9.1)$$
As mentioned, when applied to functions continuous at $0$ the use of $0^-$ is not important. But now, dealing with discontinuous functions, its significance will become clear.
9.1.2 All Laplace transform properties hold under (9.1), but we must modify the derivative formulas as follows:
$$\mathcal{L}\left(x'(t)\right) = sX - x(0^-),$$
$$\mathcal{L}\left(x''(t)\right) = s^2 X - s\,x(0^-) - x'(0^-),$$
etc. Here $x(0^-)$ means $\lim_{t\to 0^-} x(t)$ and similarly for $x'(0^-)$ etc.

9.1.3 In many applications we study systems with discontinuous input. For example, in a linear circuit we turn a switch on and the voltage jumps from zero to one. We model this by the Heaviside step function, previously denoted by $h(t)$; from now on we will also use the notation $\mathrm{Heaviside}(t)$:
$$\mathrm{Heaviside}(t) := \begin{cases} 0 & t < 0 \\ 1 & t > 0 \end{cases}, \qquad \mathrm{Heaviside}(t - t_0) := \begin{cases} 0 & t < t_0 \\ 1 & t > t_0 \end{cases}.$$
As will be seen, the actual value at the point of discontinuity ($0$ or $t_0$) is not important.

9.1.4 The Laplace transform of $\mathrm{Heaviside}(t)$ is
$$\mathcal{L}(\mathrm{Heaviside}(t)) = \int_{0^-}^{\infty} \mathrm{Heaviside}(t)\,e^{-st}\,dt = \lim_{\varepsilon\to 0^+}\lim_{M\to\infty} \int_{-\varepsilon}^{M} \mathrm{Heaviside}(t)\,e^{-st}\,dt$$
$$= \lim_{\varepsilon\to 0^+}\lim_{M\to\infty} \left( \int_{-\varepsilon}^{0} 0\cdot e^{-st}\,dt + \int_{0}^{M} 1\cdot e^{-st}\,dt \right) = \lim_{M\to\infty} \int_{0}^{M} e^{-st}\,dt = \lim_{M\to\infty}\left( -\frac{1}{s}e^{-sM} + \frac{1}{s}e^{-s0} \right) = -0 + \frac{1}{s}.$$
(Note that it is the same as the Laplace transform of $x(t) = 1$; $\mathrm{Heaviside}(t)$ and $1$ are the same on $(0,\infty)$.) The Laplace transform of $\mathrm{Heaviside}(t - t_0)$ is, as usual,
$$\mathcal{L}(\mathrm{Heaviside}(t - t_0)) = e^{-st_0}\,\mathcal{L}(\mathrm{Heaviside}(t)) = \frac{e^{-st_0}}{s}.$$
9.1.5 In other applications the input can be a switch which is turned on at $t_0$ and off at $t_1$. For such situations we define the functions
$$\mathrm{Heaviside}(t) - \mathrm{Heaviside}(t - t_1), \qquad \mathrm{Heaviside}(t - t_0) - \mathrm{Heaviside}(t - t_1 - t_0), \quad \text{etc.}$$
So
$$\mathrm{Heaviside}(t - t_0) - \mathrm{Heaviside}(t - t_1 - t_0) = \begin{cases} 1 & t \in (t_0, t_0 + t_1) \\ 0 & t \notin (t_0, t_0 + t_1) \end{cases}.$$
Their Laplace transforms are
$$\mathcal{L}(\mathrm{Heaviside}(t) - \mathrm{Heaviside}(t - t_1)) = \frac{1 - e^{-t_1 s}}{s},$$
$$\mathcal{L}(\mathrm{Heaviside}(t - t_0) - \mathrm{Heaviside}(t - t_1 - t_0)) = \frac{1 - e^{-t_1 s}}{s}\,e^{-t_0 s}.$$
9.1.6 Finally, the input can be an impulse, i.e., it has short duration and high value. This can be modeled by
$$\delta_\varepsilon(t) = \frac{\mathrm{Heaviside}(t) - \mathrm{Heaviside}(t - \varepsilon)}{\varepsilon}$$
(and similarly for $\delta_\varepsilon(t - t_0)$). Note that
$$\delta_\varepsilon(t) := \begin{cases} \frac{1}{\varepsilon} & t \in (0,\varepsilon) \\ 0 & t \notin (0,\varepsilon) \end{cases}.$$
So
$$\forall \varepsilon > 0: \int_0^\infty \delta_\varepsilon(t)\,dt = 1;$$
a constant amount of 'energy' or 'momentum' is delivered. Now we can take the limit as $\varepsilon \to 0^+$ and define
$$\delta(t) := \lim_{\varepsilon\to 0^+} \delta_\varepsilon(t) = \begin{cases} \infty & t = 0 \\ 0 & t \neq 0 \end{cases}.$$

The Laplace transform of $\delta_\varepsilon(t)$ is
$$\mathcal{L}(\delta_\varepsilon(t)) = \mathcal{L}\left( \frac{\mathrm{Heaviside}(t) - \mathrm{Heaviside}(t - \varepsilon)}{\varepsilon} \right) = \frac{1 - e^{-\varepsilon s}}{\varepsilon s}.$$
We can compute $\mathcal{L}(\delta(t))$ in two different ways:
$$\mathcal{L}(\delta(t)) = \int_{0^-}^{\infty} \delta(t)\,e^{-st}\,dt = 0,$$
$$\mathcal{L}(\delta(t)) = \lim_{\varepsilon\to 0^+} \mathcal{L}(\delta_\varepsilon(t)) = \lim_{\varepsilon\to 0^+} \frac{1 - e^{-\varepsilon s}}{\varepsilon s} = 1.$$
The first answer is correct but we do not like it, because it suggests $\delta(t)$ is the same as the zero function. We like the second answer better, but it is possibly wrong, because we exchanged limit with integration (and this, generally, is not acceptable).
 
9.1.7 We also like that $\mathcal{L}(\delta(t)) = 1$ because it gives us
$$\mathcal{L}\left(\mathrm{Heaviside}'(t)\right) = s H(s) - \mathrm{Heaviside}(0^-) = s\,\frac{1}{s} - 0 = 1 = \mathcal{L}(\delta(t)). \qquad (9.2)$$
But the above is also strictly wrong, because the derivative $\mathrm{Heaviside}'(t)$ does not exist. However, (9.2) suggests that $\mathrm{Heaviside}'(t)$, the derivative of the step function, is $\delta(t)$, which looks "kind of right": it is zero everywhere except at $t = 0$, where it is infinite (corresponding to an infinite rate of increase).
9.1.8 We will try to fix these issues by defining the generalized derivative $\mathrm{Heaviside}'(t)$ as a new entity: $\mathrm{Dirac}(t)$.
9.1.9 We will define the generalized derivatives
$$\mathrm{Dirac}(t) := \mathrm{Heaviside}'(t), \qquad \mathrm{Dirac}(t - t_0) := \mathrm{Heaviside}'(t - t_0)$$
or, for simplicity of notation,
$$\delta(t) := \mathrm{Heaviside}'(t), \qquad \delta(t - t_0) := \mathrm{Heaviside}'(t - t_0).$$
Here $\delta(t)$, $\delta(t - t_0)$ are generalized functions.
9.1.10 The above "definitions" are not rigorous, but can be made so. This requires more advanced mathematics; one can use either
1. the Stieltjes integral (which generalizes the standard Riemann integral) or
2. the theory of distributions.
In both of these approaches one can establish that $\mathrm{Heaviside}'(t)$ is a certain "entity" which has the derivative interpretation and many useful derivative properties. (Actually it is even better to take $\mathrm{Heaviside}'(t)\,dt$ "together" and write
$$\mathrm{Heaviside}'(t)\,dt = d\,\mathrm{Heaviside};$$
then
$$\int_a^b x(t)\,\mathrm{Heaviside}'(t)\,dt = \int_a^b x(t)\,d\,\mathrm{Heaviside},$$
the Stieltjes integral of $x(t)$ (with respect to $\mathrm{Heaviside}(t)$). For more details, see Appendix A.)

9.1.11 In particular (as we will take for granted) the following hold:
$$\int \mathrm{Dirac}(t)\,dt = \int \mathrm{Heaviside}'(t)\,dt = \mathrm{Heaviside}(t) + c, \qquad (9.3)$$
$$\int \mathrm{Dirac}(t - t_0)\,dt = \int \mathrm{Heaviside}'(t - t_0)\,dt = \mathrm{Heaviside}(t - t_0) + c, \qquad (9.4)$$
$$\int_a^b \mathrm{Dirac}(t)\,dt = \int_a^b \mathrm{Heaviside}'(t)\,dt = \mathrm{Heaviside}(b) - \mathrm{Heaviside}(a), \qquad (9.5)$$
$$\int_a^b \mathrm{Dirac}(t - t_0)\,dt = \int_a^b \mathrm{Heaviside}'(t - t_0)\,dt = \mathrm{Heaviside}(b - t_0) - \mathrm{Heaviside}(a - t_0). \qquad (9.6)$$
The most useful ones are (9.5)-(9.6), which are written more precisely as follows:
$$\forall a, b \in \mathbb{R}^*: \int_a^b \mathrm{Dirac}(t)\,dt = \int_a^b \mathrm{Heaviside}'(t)\,dt = \begin{cases} 1 & 0 \in (a,b) \\ 0 & 0 \notin [a,b] \end{cases} \qquad (9.7)$$
$$\forall a, b \in \mathbb{R}^*: \int_a^b \mathrm{Dirac}(t - t_0)\,dt = \int_a^b \mathrm{Heaviside}'(t - t_0)\,dt = \begin{cases} 1 & t_0 \in (a,b) \\ 0 & t_0 \notin [a,b] \end{cases} \qquad (9.8)$$
Note that we do not deal with the case where $0$ or $t_0 \in \{a, b\}$. In other words, it is acceptable to have a discontinuity inside the integration interval, but not on its boundaries.
9.1.12 It can also be proved that (under the Stieltjes integral) integration by parts holds for generalized derivatives, i.e., for all $a, b \in \mathbb{R}^*$ and all differentiable $x(t)$ we have
$$\int_a^b x(t)\,\mathrm{Heaviside}'(t)\,dt = \left( x(t)\,\mathrm{Heaviside}(t) \right)_{t=a}^{t=b} - \int_a^b x'(t)\,\mathrm{Heaviside}(t)\,dt.$$
Then we also have the following results.
9.1.13 Theorem. The following hold:
$$\forall a, b \in \mathbb{R}^*: \int_a^b x(t)\,\mathrm{Heaviside}'(t)\,dt = \begin{cases} x(0) & \text{if } 0 \in (a,b) \\ 0 & \text{if } 0 \notin [a,b] \end{cases} \qquad (9.9)$$
$$\forall a, b \in \mathbb{R}^*: \int_a^b x(t)\,\mathrm{Heaviside}'(t - t_0)\,dt = \begin{cases} x(t_0) & \text{if } t_0 \in (a,b) \\ 0 & \text{if } t_0 \notin [a,b] \end{cases} \qquad (9.10)$$
Proof. Using (9.5) and integration by parts we have
$$\int_a^b x(t)\,\mathrm{Heaviside}'(t)\,dt = \left( x(t)\,\mathrm{Heaviside}(t) \right)_{t=a}^{t=b} - \int_a^b x'(t)\,\mathrm{Heaviside}(t)\,dt$$
and distinguish three cases:
$$a < b < 0: \quad \int_a^b x(t)\,\mathrm{Heaviside}'(t)\,dt = x(b)\cdot 0 - x(a)\cdot 0 - 0 = 0,$$
$$a < 0 < b: \quad \int_a^b x(t)\,\mathrm{Heaviside}'(t)\,dt = x(b)\cdot 1 - x(a)\cdot 0 - x(b) + x(0) = x(0),$$
$$0 < a < b: \quad \int_a^b x(t)\,\mathrm{Heaviside}'(t)\,dt = x(b)\cdot 1 - x(a)\cdot 1 - x(b) + x(a) = 0.$$
This completes the proof.
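The sifting property (9.9)-(9.10) can be illustrated with the pulse approximation $\delta_\varepsilon$ of 9.1.6: for small $\varepsilon$, $\int x(t)\,\delta_\varepsilon(t - t_0)\,dt \approx x(t_0)$. A minimal numeric sketch (the helper name `sift` is mine):

```python
import math

def sift(x, t0, eps, n=4000):
    """Trapezoid approximation of integral x(t) * delta_eps(t - t0) dt,
    i.e. the average of x over the pulse interval (t0, t0 + eps)."""
    h = eps / n
    total = 0.0
    for i in range(n + 1):
        w = 0.5 if i in (0, n) else 1.0
        total += w * x(t0 + i * h)
    return total * h / eps

approx = sift(math.sin, t0=1.0, eps=1e-4)
exact = math.sin(1.0)
```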

9.1.14 Using the above we can, for example, write the following:
$$\int_{-\infty}^{0^-} \mathrm{Dirac}(t)\,dt = 0, \qquad \int_{-\infty}^{0^+} \mathrm{Dirac}(t)\,dt = 1, \qquad \int_{0^-}^{0^+} \mathrm{Dirac}(t)\,dt = 1. \qquad (9.11)$$
Note that these integrals involve double limits which can, in this case, be interchanged. E.g., splitting the integral at any $-a < 0$,
$$\int_{-\infty}^{0^-} \delta(t)\,dt = \lim_{\delta\to 0^+}\left( \lim_{M\to\infty} \int_{-M}^{-\delta} \mathrm{Dirac}(t)\,dt \right) = \lim_{M\to\infty}\left( \lim_{\delta\to 0^+} \int_{-M}^{-\delta} \mathrm{Dirac}(t)\,dt \right)$$
$$= \lim_{M\to\infty} \int_{-M}^{-a} \mathrm{Dirac}(t)\,dt + \lim_{\delta\to 0^+} \int_{-a}^{-\delta} \mathrm{Dirac}(t)\,dt = 0 + 0 = 0.$$

9.1.15 Now we can resolve the previous contradiction on the relationship between $\mathrm{Dirac}(t)$ and $\mathrm{Heaviside}(t)$:
$$\mathcal{L}(\mathrm{Dirac}(t)) = \int_{0^-}^{\infty} \delta(t)\,e^{-st}\,dt = e^{-s0} = 1,$$
$$\mathcal{L}(\mathrm{Dirac}(t)) = s\,\mathcal{L}(\mathrm{Heaviside}(t)) - \mathrm{Heaviside}(0^-) = s\,\frac{1}{s} - 0 = 1.$$
So, in this sense too, $\mathrm{Dirac}(t)$ is the (generalized) derivative of $\mathrm{Heaviside}(t)$.
9.1.16 Theorem. For every $x(t)$ which has a Laplace transform we have
$$\forall t > 0: \quad x(t) * \mathrm{Dirac}(t) = \int_{0^-}^t x(\tau)\,\mathrm{Dirac}(t - \tau)\,d\tau = x(t),$$
$$\forall t > t_0: \quad x(t) * \mathrm{Dirac}(t - t_0) = \int_{0^-}^t x(\tau)\,\mathrm{Dirac}(t - t_0 - \tau)\,d\tau = \mathrm{Heaviside}(t - t_0)\,x(t - t_0).$$
Proof. We only prove the first one, by the previous theorem. First define $f(t) = x(-t)$. Now
$$x(t) * \mathrm{Dirac}(t) = \mathrm{Dirac}(t) * x(t) = \int_{0^-}^t \mathrm{Dirac}(\tau)\,x(t - \tau)\,d\tau = \int_{0^-}^t \mathrm{Dirac}(\tau)\,f(\tau - t)\,d\tau = f(0 - t) = f(-t) = x(t),$$
since $0 \in (0^-, t)$.
We can also prove it by Laplace transformation:
$$\mathcal{L}(x(t) * \mathrm{Dirac}(t)) = X(s)\,\mathcal{L}(\mathrm{Dirac}(t)) = X(s)\cdot 1 = X(s) = \mathcal{L}(x(t))$$
and, similarly,
$$\mathcal{L}(x(t) * \mathrm{Dirac}(t - t_0)) = X(s)\,\mathcal{L}(\mathrm{Dirac}(t))\,e^{-t_0 s} = X(s)\,e^{-t_0 s} = \mathcal{L}(\mathrm{Heaviside}(t - t_0)\,x(t - t_0)).$$

9.1.17 Summarizing, the basic properties of $\mathrm{Dirac}(t)$ are:
$$\text{when } t_0 < t: \quad \int_{0^-}^t \mathrm{Dirac}(\tau - t_0)\,x(\tau)\,d\tau = x(t_0),$$
$$\text{when } t_0 < t: \quad x(t) * \mathrm{Dirac}(t - t_0) = x(t - t_0).$$

9.1.18 Now we are ready to solve differential equations which involve generalized functions.
9.1.19 Example. To solve
$$\frac{dy}{dt} + 2y = \mathrm{Dirac}(t - 1), \qquad y(0^-) = 1,$$
we have
$$sY - 1 + 2Y = e^{-s} \Rightarrow Y = \frac{1}{s+2} + \frac{e^{-s}}{s+2} \Rightarrow y(t) = e^{-2t} + \mathrm{Heaviside}(t-1)\,e^{-2(t-1)}.$$
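A solution of an impulse-driven ODE like this one can be verified numerically: away from the impulse location the homogeneous equation must hold, and at the impulse the solution must jump by the impulse strength. A sketch for Example 9.1.19:

```python
import math

def y(t):
    """Claimed solution y(t) = e^{-2t} + Heaviside(t-1) e^{-2(t-1)}."""
    return math.exp(-2 * t) + (math.exp(-2 * (t - 1)) if t > 1 else 0.0)

def residual(t, h=1e-6):
    """Residual of y' + 2y = 0 via a central difference (valid for t != 1)."""
    dy = (y(t + h) - y(t - h)) / (2 * h)
    return dy + 2 * y(t)

res_smooth = max(abs(residual(t)) for t in (0.3, 0.7, 1.5, 2.5))
jump = y(1 + 1e-9) - y(1 - 1e-9)   # should equal the impulse strength, 1
```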

9.1.20 Example. To solve
$$\frac{dy}{dt} + 2y = \mathrm{Dirac}(t), \qquad y(0^-) = 1,$$
we have
$$sY - 1 + 2Y = 1 \Rightarrow Y = \frac{2}{s+2} \Rightarrow y(t) = 2e^{-2t}.$$
But note that $y(0) = 2$! Is this a contradiction to the given initial condition? No, it is not! The initial condition requires $y(0^-) = 1$. The solution should more accurately be written as
$$y(t) = e^{-2t} + \mathrm{Heaviside}(t)\,e^{-2t}$$
(why?) and then it is clear that
$$y(0^-) = 1, \qquad y(0^+) = 2.$$
 

9.1.21 Example. To solve
$$\frac{d^2y}{dt^2} + 2\frac{dy}{dt} + 2y = \mathrm{Dirac}(t - 1), \qquad y(0^-) = 0, \quad y'(0^-) = 0,$$
we have
$$s^2 Y + 2sY + 2Y = e^{-s} \Rightarrow Y = \frac{e^{-s}}{s^2 + 2s + 2} = \frac{e^{-s}}{(s+1)^2 + 1} \Rightarrow y(t) = \mathrm{Heaviside}(t-1)\,e^{-(t-1)}\sin(t-1)$$
(since $\frac{1}{(s+1)^2 + 1}$ is the Laplace transform of $e^{-t}\sin t$).
9.1.22 Example. To solve
$$\frac{d^2y}{dt^2} + 2\frac{dy}{dt} - 15y = 6\,\mathrm{Dirac}(t - 9), \qquad y(0^-) = -5, \quad y'(0^-) = 7,$$
we have, taking Laplace transformation,
$$s^2 Y + 5s - 7 + 2sY + 10 - 15Y = 6e^{-9s} \Rightarrow \left(s^2 + 2s - 15\right) Y = -(5s + 3) + 6e^{-9s} \Rightarrow Y = -\frac{5s+3}{s^2+2s-15} + \frac{6e^{-9s}}{s^2+2s-15}.$$
Now
$$Y_1 = \frac{5s+3}{s^2+2s-15} = \frac{9}{4(s-3)} + \frac{11}{4(s+5)}, \qquad Y_2 = \frac{6}{s^2+2s-15} = \frac{6}{8(s-3)} - \frac{6}{8(s+5)},$$
so
$$y_1(t) = \frac{9}{4}e^{3t} + \frac{11}{4}e^{-5t}, \qquad y_2(t) = \frac{6}{8}e^{3t} - \frac{6}{8}e^{-5t}$$
and
$$y(t) = -y_1(t) + \mathrm{Heaviside}(t-9)\,y_2(t-9).$$


9.1.23 Example. To solve
$$\frac{d^2y}{dt^2} + 5\frac{dy}{dt} + 6y = e^{-t}\,\mathrm{Dirac}(t - 2), \qquad y(0^-) = 2, \quad y'(0^-) = -5,$$
we have
$$s^2 Y - 2s + 5 + 5sY - 10 + 6Y = e^{-2(s+1)} \Rightarrow \left(s^2 + 5s + 6\right) Y = 2s + 5 + e^{-2(s+1)} \Rightarrow Y = \frac{2s+5}{s^2+5s+6} + e^{-2}\,\frac{e^{-2s}}{s^2+5s+6}.$$
Setting
$$Y_1 = \frac{2s+5}{s^2+5s+6} = \frac{1}{s+2} + \frac{1}{s+3}, \qquad Y_2 = \frac{1}{s^2+5s+6} = \frac{1}{s+2} - \frac{1}{s+3},$$
we get
$$y_1(t) = e^{-2t} + e^{-3t}, \qquad y_2(t) = e^{-2t} - e^{-3t}$$
and finally
$$y(t) = e^{-2t} + e^{-3t} + e^{-2}\,\mathrm{Heaviside}(t-2)\left( e^{-2(t-2)} - e^{-3(t-2)} \right) = e^{-2t} + e^{-3t} + \mathrm{Heaviside}(t-2)\left( e^2 e^{-2t} - e^4 e^{-3t} \right).$$

9.1.24 Example. To solve
$$\frac{d^2y}{dt^2} + y = \sum_{k=1}^\infty \mathrm{Dirac}(t - k\pi), \qquad y(0^-) = 0, \quad y'(0^-) = 0,$$
we have
$$\left(s^2 + 1\right) Y = \sum_{k=1}^\infty e^{-k\pi s} \Rightarrow Y = \sum_{k=1}^\infty \frac{1}{s^2+1}\,e^{-k\pi s} \Rightarrow$$
$$y(t) = \sum_{k=1}^\infty \mathrm{Heaviside}(t - k\pi)\sin(t - k\pi) = \sum_{k=1}^\infty \mathrm{Heaviside}(t - k\pi)\,(-1)^k \sin t$$
or
$$y(t) = \begin{cases} 0 & t \in [2m\pi, (2m+1)\pi) \\ -\sin t & t \in [(2m+1)\pi, (2m+2)\pi) \end{cases}.$$

9.2 Solved Problems


9.2.1 Problem. Solve
$$\frac{d^2y}{dt^2} - y = \delta(t - 3), \qquad y(0^-) = 1, \quad y'(0^-) = 0.$$
Solution. We have
$$\left(s^2 - 1\right) Y - s = e^{-3s} \Rightarrow Y = \frac{s}{s^2-1} + \frac{e^{-3s}}{s^2-1}.$$
Setting
$$Y_1 = \frac{s}{s^2-1} = \frac{1}{2(s-1)} + \frac{1}{2(s+1)}, \qquad Y_2 = \frac{1}{s^2-1} = \frac{1}{2(s-1)} - \frac{1}{2(s+1)},$$
we get
$$y_1(t) = \frac{1}{2}e^t + \frac{1}{2}e^{-t} = \cosh t, \qquad y_2(t) = \frac{1}{2}e^t - \frac{1}{2}e^{-t} = \sinh t.$$
We then get
$$y(t) = \cosh t + h(t-3)\sinh(t-3).$$
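Problem 9.2.1 can be checked the same way as Example 9.1.19: $y'' - y = 0$ must hold away from $t = 3$, $y$ must be continuous at $3$, and $y'$ must jump there by $1$. A numeric sketch:

```python
import math

def y(t):
    """Claimed solution y(t) = cosh t + h(t-3) sinh(t-3)."""
    return math.cosh(t) + (math.sinh(t - 3) if t > 3 else 0.0)

def ode_residual(t, h=1e-4):
    """Residual of y'' - y = 0 via a second central difference (t != 3)."""
    d2 = (y(t + h) - 2 * y(t) + y(t - h)) / h ** 2
    return d2 - y(t)

def yprime(t, h=1e-6):
    return (y(t + h) - y(t - h)) / (2 * h)

res = max(abs(ode_residual(t)) for t in (1.0, 2.0, 4.0, 5.0))
jump = yprime(3 + 1e-5) - yprime(3 - 1e-5)   # should be about 1
```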
9.2.2 Problem. Solve
$$\frac{d^2y}{dt^2} + 2\frac{dy}{dt} + y = \delta(t - 1), \qquad y(0^-) = 0, \quad y'(0^-) = 0.$$
Solution. We have
$$s^2 Y + 2sY + Y = e^{-s} \Rightarrow Y = \frac{e^{-s}}{s^2+2s+1} = \frac{e^{-s}}{(s+1)^2} \Rightarrow y(t) = h(t-1)\,e^{-(t-1)}(t-1).$$

9.2.3 Problem. Solve
$$\frac{d^2y}{dt^2} + 4y = \delta(t - \pi) - \delta(t - 2\pi), \qquad y(0^-) = 0, \quad y'(0^-) = 0.$$
Solution. We have
$$\left(s^2 + 4\right) Y = e^{-\pi s} - e^{-2\pi s} \Rightarrow Y = \frac{e^{-\pi s}}{s^2+4} - \frac{e^{-2\pi s}}{s^2+4}.$$
Setting
$$Y_1 = \frac{1}{s^2+4}, \qquad y_1(t) = \frac{1}{2}\sin 2t,$$
we get
$$y(t) = \frac{1}{2}\,h(t-\pi)\sin(2t - 2\pi) - \frac{1}{2}\,h(t-2\pi)\sin(2t - 4\pi) = \frac{1}{2}\left( h(t-\pi) - h(t-2\pi) \right)\sin(2t).$$
9.2.4 Problem. Solve
$$\frac{d^2y}{dt^2} + 2\frac{dy}{dt} + 2y = \delta(t - \pi), \qquad y(0^-) = 1, \quad y'(0^-) = 1.$$
Solution. We have
$$s^2 Y - s - 1 + 2sY - 2 + 2Y = e^{-\pi s} \Rightarrow Y = \frac{e^{-\pi s}}{(s+1)^2+1} + \frac{s+3}{(s+1)^2+1} = \frac{e^{-\pi s}}{(s+1)^2+1} + \frac{s+1}{(s+1)^2+1} + \frac{2}{(s+1)^2+1}.$$
Setting
$$Y_1 = \frac{1}{(s+1)^2+1}, \qquad Y_2 = \frac{s+1}{(s+1)^2+1}, \qquad Y_3 = \frac{2}{(s+1)^2+1},$$
we get
$$y_1(t) = e^{-t}\sin t, \qquad y_2(t) = e^{-t}\cos t, \qquad y_3(t) = 2e^{-t}\sin t$$
and so
$$y(t) = h(t-\pi)\,y_1(t-\pi) + y_2(t) + y_3(t) = h(t-\pi)\,e^{-(t-\pi)}\sin(t-\pi) + e^{-t}\cos t + 2e^{-t}\sin t$$
$$= -h(t-\pi)\,e^{-(t-\pi)}\sin t + e^{-t}\cos t + 2e^{-t}\sin t.$$

9.2.5 Problem. Solve
$$\frac{d^2y}{dt^2} - 2\frac{dy}{dt} - 3y = 2\delta(t-1) - \delta(t-3), \qquad y(0^-) = 2, \quad y'(0^-) = 2.$$
Solution. We have
$$s^2 Y - 2s - 2 - 2sY + 4 - 3Y = 2e^{-s} - e^{-3s} \Rightarrow Y = \frac{2s-2}{s^2-2s-3} + \frac{2e^{-s}}{s^2-2s-3} - \frac{e^{-3s}}{s^2-2s-3}.$$
Setting
$$Y_1 = \frac{2s-2}{s^2-2s-3} = \frac{1}{s+1} + \frac{1}{s-3}, \qquad Y_2 = \frac{1}{s^2-2s-3} = \frac{1}{4(s-3)} - \frac{1}{4(s+1)},$$
we get
$$y_1(t) = e^{-t} + e^{3t}, \qquad y_2(t) = \frac{1}{4}e^{3t} - \frac{1}{4}e^{-t}$$
and so
$$y(t) = y_1(t) + 2h(t-1)\,y_2(t-1) - h(t-3)\,y_2(t-3)$$
$$= e^{-t} + e^{3t} + \frac{1}{2}h(t-1)\left( e^{3(t-1)} - e^{-(t-1)} \right) - \frac{1}{4}h(t-3)\left( e^{3(t-3)} - e^{-(t-3)} \right).$$

9.3 Unsolved Problems

1. Solve $\frac{dx}{dt} + x = \mathrm{Dirac}(t-1)$, $x(0^-) = 0$. Ans. $\mathrm{Heaviside}(t-1)\,e^{-(t-1)}$.
2. Solve $\frac{dx}{dt} + x = \mathrm{Dirac}(t-5)$, $x(0^-) = 1$. Ans. $e^{-t} + \mathrm{Heaviside}(t-5)\,e^{-(t-5)}$.
3. Solve $\frac{d^2x}{dt^2} + 3\frac{dx}{dt} + 2x = \mathrm{Dirac}(t-1)$, $x(0^-) = 0$, $x'(0^-) = 0$. Ans. $-\mathrm{Heaviside}(t-1)\,e^{-2(t-1)} + \mathrm{Heaviside}(t-1)\,e^{-(t-1)}$.
4. Solve $\frac{d^2x}{dt^2} + 2\frac{dx}{dt} + x = \mathrm{Dirac}(t-1)$, $x(0^-) = 1$, $x'(0^-) = 1$. Ans. $e^{-t} + 2te^{-t} - \mathrm{Heaviside}(t-1)\,e^{-(t-1)} + t\,\mathrm{Heaviside}(t-1)\,e^{-(t-1)}$.
5. Solve $\frac{d^2x}{dt^2} + 3\frac{dx}{dt} + 2x = \mathrm{Dirac}(t-1)$, $x(0^-) = 1$, $x'(0^-) = 2$. Ans. $4e^{-t} - 3e^{-2t} - \mathrm{Heaviside}(t-1)\,e^{-2(t-1)} + \mathrm{Heaviside}(t-1)\,e^{-(t-1)}$.
6. Solve $\frac{d^2x}{dt^2} + 4\frac{dx}{dt} + 4x = \mathrm{Dirac}(t-1)$, $x(0^-) = 0$, $x'(0^-) = 0$. Ans. $-\mathrm{Heaviside}(t-1)\,e^{-2(t-1)} + t\,\mathrm{Heaviside}(t-1)\,e^{-2(t-1)}$.
7. Solve $\frac{d^2x}{dt^2} + 4\frac{dx}{dt} + 4x = \mathrm{Dirac}(t-1)$, $x(0^-) = 1$, $x'(0^-) = 2$. Ans. $e^{-2t} + 4te^{-2t} - \mathrm{Heaviside}(t-1)\,e^{-2(t-1)} + t\,\mathrm{Heaviside}(t-1)\,e^{-2(t-1)}$.
8. Solve $\frac{dx}{dt} = x + 2y + \mathrm{Dirac}(t-1)$, $x(0^-) = 0$; $\frac{dy}{dt} = 4x + 3y$, $y(0^-) = 0$. Ans. $y(t) = \frac{2}{3}e^{5t-5}\,\mathrm{Heaviside}(t-1) - \frac{2}{3}e^{1-t}\,\mathrm{Heaviside}(t-1)$, $x(t) = \frac{2}{3}e^{1-t}\,\mathrm{Heaviside}(t-1) + \frac{1}{3}e^{5t-5}\,\mathrm{Heaviside}(t-1)$.
9. Solve $\frac{dx}{dt} = x + 2y + \mathrm{Dirac}(t-1)$, $x(0^-) = 1$; $\frac{dy}{dt} = 4x + 3y$, $y(0^-) = 1$. Ans. $y(t) = \frac{4}{3}e^{5t} - \frac{1}{3}e^{-t} - \frac{2}{3}e^{1-t}\,\mathrm{Heaviside}(t-1) + \frac{2}{3}e^{5t-5}\,\mathrm{Heaviside}(t-1)$, $x(t) = \frac{1}{3}e^{-t} + \frac{2}{3}e^{5t} + \frac{2}{3}e^{1-t}\,\mathrm{Heaviside}(t-1) + \frac{1}{3}e^{5t-5}\,\mathrm{Heaviside}(t-1)$.
10. Solve $\frac{dx}{dt} = x + 2y + \mathrm{Dirac}(t-1)$, $x(0^-) = 1$; $\frac{dy}{dt} = 4x + 3y + \mathrm{Dirac}(t-1)$, $y(0^-) = 0$. Ans. $y(t) = \frac{2}{3}e^{5t} - \frac{2}{3}e^{-t} - \frac{1}{3}e^{1-t}\,\mathrm{Heaviside}(t-1) + \frac{4}{3}e^{5t-5}\,\mathrm{Heaviside}(t-1)$, $x(t) = \frac{2}{3}e^{-t} + \frac{1}{3}e^{5t} + \frac{1}{3}e^{1-t}\,\mathrm{Heaviside}(t-1) + \frac{2}{3}e^{5t-5}\,\mathrm{Heaviside}(t-1)$.
11. Solve $\frac{dx}{dt} + x = \mathrm{Dirac}(t)$, $x(0^-) = 0$. Ans. $\mathrm{Heaviside}(t)\,e^{-t}$.
12. Solve $\frac{dx}{dt} + x = \mathrm{Dirac}(t-5)$, $x(0^-) = 1$. Ans. $e^{-t} + \mathrm{Heaviside}(t-5)\,e^{-(t-5)}$.
13. Solve $\frac{d^2x}{dt^2} + 2\frac{dx}{dt} + x = \mathrm{Dirac}(t) + \mathrm{Heaviside}(t-1)$, $x(0^-) = 1$, $x'(0^-) = 1$. Ans. $e^{-t} + 2te^{-t} + te^{-t}\,\mathrm{Heaviside}(t) - t\,\mathrm{Heaviside}(t-1)\,e^{-(t-1)} + \mathrm{Heaviside}(t-1)$.
14. Solve $\frac{d^2x}{dt^2} + 3\frac{dx}{dt} + 2x = \mathrm{Dirac}(t)$, $x(0^-) = 0$, $x'(0^-) = 0$. Ans. $\mathrm{Heaviside}(t)\left( e^{-t} - e^{-2t} \right)$.
15. Solve $\frac{d^2x}{dt^2} + 3\frac{dx}{dt} + 2x = \mathrm{Dirac}(t)$, $x(0^-) = 1$, $x'(0^-) = 2$. Ans. $4e^{-t} - 3e^{-2t} + \mathrm{Heaviside}(t)\left( e^{-t} - e^{-2t} \right)$.
16. Solve $\frac{d^2x}{dt^2} + 4\frac{dx}{dt} + 4x = \mathrm{Dirac}(t)$, $x(0^-) = 0$, $x'(0^-) = 0$. Ans. $t\,e^{-2t}\,\mathrm{Heaviside}(t)$.
17. Solve $\frac{d^2x}{dt^2} + 4\frac{dx}{dt} + 4x = \mathrm{Dirac}(t)$, $x(0^-) = 1$, $x'(0^-) = 2$. Ans. $e^{-2t} + 4te^{-2t} + t\,e^{-2t}\,\mathrm{Heaviside}(t)$.
18. Solve $\frac{d^2x}{dt^2} + 3\frac{dx}{dt} + 2x = 1 + \mathrm{Dirac}(t-1)$, $x(0^-) = 0$, $x'(0^-) = 0$. Ans. $\mathrm{Heaviside}(t-1)\,e^{-(t-1)} - \mathrm{Heaviside}(t-1)\,e^{-2(t-1)} + \frac{1}{2} + \frac{1}{2}e^{-2t} - e^{-t}$.
19. Solve $\frac{dx}{dt} = x + 2y + \mathrm{Dirac}(t)$, $x(0^-) = 0$; $\frac{dy}{dt} = 4x + 3y$, $y(0^-) = 0$. Ans. $x(t) = \mathrm{Heaviside}(t)\left( \frac{1}{3}e^{5t} + \frac{2}{3}e^{-t} \right)$, $y(t) = \mathrm{Heaviside}(t)\left( \frac{2}{3}e^{5t} - \frac{2}{3}e^{-t} \right)$.
20. Solve $\frac{dx}{dt} = x + 2y + \mathrm{Dirac}(t)$, $x(0^-) = 1$; $\frac{dy}{dt} = 4x + 3y$, $y(0^-) = 1$. Ans. $x(t) = \frac{2}{3}e^{5t} + \frac{1}{3}e^{-t} + \mathrm{Heaviside}(t)\left( \frac{1}{3}e^{5t} + \frac{2}{3}e^{-t} \right)$, $y(t) = \frac{4}{3}e^{5t} - \frac{1}{3}e^{-t} + \mathrm{Heaviside}(t)\left( \frac{2}{3}e^{5t} - \frac{2}{3}e^{-t} \right)$.

9.4 Advanced Problems


10. Difference Equations

In this chapter we present difference equations, i.e., sequences which are defined by a recursive equation.

10.1 Theory and Examples

10.1.1 We are interested in discrete time functions (i.e., with domain $\mathbb{N}_0$). To emphasize this, we will write $x_n$ instead of $x(t)$. In particular, we are interested in solving difference equations, e.g.,
$$x_{n+2} + 3x_{n+1} + 2x_n = 0, \qquad x_0 = 1, \quad x_1 = 1.$$

10.1.2 We will start our analysis by studying corresponding continuous time functions and their Laplace transforms; later we will turn to a completely discrete-time point of view.

10.1.3 Notation. We will use the functions
$$u(t) = h(t) - h(t-1), \qquad u(t-n) = h(t-n) - h(t-n-1).$$
(Here and in what follows, $h(t)$ is the $\mathrm{Heaviside}(t)$ function.)



10.1.4 Given a discrete time function $x_n$, we define a function
$$x(t) = x_n \quad \text{when } n \le t < n+1;$$
this can also be written as
$$x(t) = \sum_{n=0}^\infty x_n\,u(t-n) = \sum_{n=0}^\infty x_n\left[ h(t-n) - h(t-n-1) \right].$$

To find $\mathcal{L}(x(t))$ we have
$$\mathcal{L}(x(t)) = \int_0^\infty x(t)\,e^{-st}\,dt = x_0 \int_0^1 e^{-st}\,dt + x_1 \int_1^2 e^{-st}\,dt + x_2 \int_2^3 e^{-st}\,dt + \dots$$
$$= x_0\,\frac{1-e^{-s}}{s} + x_1\,\frac{e^{-s}-e^{-2s}}{s} + x_2\,\frac{e^{-2s}-e^{-3s}}{s} + \dots = \frac{1-e^{-s}}{s}\left( x_0 + x_1 e^{-s} + x_2 e^{-2s} + \dots \right) = \frac{1-e^{-s}}{s} \sum_{n=0}^\infty x_n e^{-ns}.$$
In short,
$$X(s) = \mathcal{L}(x(t)) = \frac{1-e^{-s}}{s} \sum_{n=0}^\infty x_n e^{-ns}.$$
10.1.5 Example. If, with $|r| < 1$, we have
$$\forall n \in \mathbb{N}_0: x_n = r^n, \qquad x(t) = \sum_{n=0}^\infty r^n u(t-n),$$
then
$$X(s) = \mathcal{L}(x(t)) = \frac{1-e^{-s}}{s}\left( 1 + re^{-s} + r^2 e^{-2s} + \dots \right) = \frac{1-e^{-s}}{s}\,\frac{1}{1-re^{-s}} = \frac{1-e^{-s}}{s\left(1-re^{-s}\right)}.$$
In the last step we have used, with $z^{-1} = re^{-s}$ (and assuming $\left|z^{-1}\right| < 1$), the fact that
$$1 + re^{-s} + r^2 e^{-2s} + \dots = 1 + z^{-1} + z^{-2} + \dots = \frac{1}{1-z^{-1}} = \frac{1}{1-re^{-s}}.$$
10.1.6 Given $x(t) = \sum_{n=0}^\infty x_n u(t-n)$, let us also compute
$$\mathcal{L}(x(t+1)) = \int_0^\infty e^{-st} x(t+1)\,dt = \int_1^\infty e^{-s(\tau-1)} x(\tau)\,d\tau = e^s \int_1^\infty e^{-s\tau} x(\tau)\,d\tau$$
$$= e^s \int_0^\infty e^{-s\tau} x(\tau)\,d\tau - e^s \int_0^1 e^{-s\tau} x(\tau)\,d\tau = e^s X(s) - e^s \int_0^1 e^{-s\tau} x_0\,d\tau = e^s X(s) - e^s x_0\,\frac{1-e^{-s}}{s}.$$
In this manner we get
$$\mathcal{L}(x(t+1)) = e^s X(s) - e^s x_0\,\frac{1-e^{-s}}{s},$$
$$\mathcal{L}(x(t+2)) = e^{2s} X(s) - e^s\left( x_0 e^s + x_1 \right)\frac{1-e^{-s}}{s},$$
$$\dots$$
Do you see the similarity with the Laplace transform of derivatives?

10.1.7 Example. To solve the difference equation
$$x_0 = 0, \quad x_1 = 1, \qquad \forall n \in \mathbb{N}_0: x_{n+2} - 5x_{n+1} + 6x_n = 0,$$
we let
$$x(t) = \sum_{n=0}^\infty x_n u(t-n)$$
and we get the equivalent equation
$$x(0) = 0, \quad x(1) = 1, \qquad \forall t \ge 0: x(t+2) - 5x(t+1) + 6x(t) = 0.$$
Transforming we get
$$e^{2s} X(s) - e^s\,\frac{1-e^{-s}}{s} - 5e^s X(s) + 6X(s) = 0 \Rightarrow \left( e^{2s} - 5e^s + 6 \right) X(s) = \frac{e^s\left(1-e^{-s}\right)}{s} \Rightarrow$$
$$X(s) = \frac{e^s\left(1-e^{-s}\right)}{s\left(e^s-3\right)\left(e^s-2\right)} = \frac{1-e^{-s}}{s}\left( \frac{e^s}{e^s-3} - \frac{e^s}{e^s-2} \right) = \frac{1-e^{-s}}{s}\left( \frac{1}{1-3e^{-s}} - \frac{1}{1-2e^{-s}} \right)$$
$$\Rightarrow \mathcal{L}(x(t)) = \frac{1-e^{-s}}{s\left(1-3e^{-s}\right)} - \frac{1-e^{-s}}{s\left(1-2e^{-s}\right)}.$$
Since
$$\frac{1-e^{-s}}{s\left(1-re^{-s}\right)} = \mathcal{L}\left( \sum_{n=0}^\infty r^n u(t-n) \right),$$
we get
$$x(t) = \sum_{n=0}^\infty 3^n u(t-n) - \sum_{n=0}^\infty 2^n u(t-n)$$
and
$$\forall n \in \mathbb{N}_0: x_n = 3^n - 2^n.$$
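A closed-form solution like this one is easy to check by plugging it back into the recurrence with exact integer arithmetic:

```python
# Example 10.1.7: x_n = 3^n - 2^n should satisfy the initial conditions
# and the recurrence x_{n+2} - 5 x_{n+1} + 6 x_n = 0.
x = [3 ** n - 2 ** n for n in range(20)]
ok_ic = (x[0], x[1]) == (0, 1)
ok_rec = all(x[n + 2] - 5 * x[n + 1] + 6 * x[n] == 0 for n in range(18))
```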
10.1.8 Obviously the above method can be used for any nonhomogeneous linear difference equation
$$\forall n \in \mathbb{N}_0: x_{n+K} + p_1 x_{n+K-1} + \dots + p_{K-1} x_{n+1} + p_K x_n = u_{n+K}.$$

10.1.9 We should not have to go through continuous time; the problem is essentially one of discrete time and should be solvable entirely in the discrete time domain. To achieve this we introduce a new transform.
10.1.10 Definition. The $\mathcal{Z}$ transform of $x_n$ is:
$$X(z) = \mathcal{Z}(x) := \sum_{n=0}^\infty x_n z^{-n}.$$
To ensure the sum is well defined we assume
$$\forall n: \left| \frac{x_n}{z^n} \right| < 1,$$
i.e., we assume that $|z|$ is sufficiently large.
10.1.11 Since $X(z)$ is a MacLaurin series in $z^{-1}$, if two functions $x_n$ and $y_n$ have $X(z) = Y(z)$, then also $x_n = y_n$. In other words, the $\mathcal{Z}$ transform is invertible.
10.1.12 Example. The $\mathcal{Z}$ transform of $x_n = r^n$ is
$$\mathcal{Z}(x_n) = 1 + rz^{-1} + r^2 z^{-2} + \dots = \lim_{n\to\infty} \frac{1 - \left(\frac{r}{z}\right)^n}{1 - \frac{r}{z}} = \frac{1}{1 - rz^{-1}} = \frac{z}{z-r}.$$
Recalling that the continuous-time function $x(t) = \sum_{n=0}^\infty r^n u(t-n)$ has Laplace transform
$$\mathcal{L}(x(t)) = \frac{1-e^{-s}}{s}\left( 1 + re^{-s} + r^2 e^{-2s} + \dots \right) = \frac{1-e^{-s}}{s}\,\frac{1}{1-re^{-s}},$$
we recognize that $z$ plays the role of $e^s$ but, by turning to discrete time, we have gotten rid of the cumbersome $\frac{1-e^{-s}}{s}$ factor.
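The geometric-series identity behind Example 10.1.12 can be seen numerically: truncated sums $\sum_n r^n z^{-n}$ approach $\frac{z}{z-r}$ whenever $|z| > |r|$.

```python
# Truncated Z transform of x_n = r^n, evaluated at a point z with |z| > |r|.
r, z = 0.8, 2.0
partial = sum(r ** n * z ** -n for n in range(200))
closed = z / (z - r)
err = abs(partial - closed)
```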
10.1.13 So we can think of the $\mathcal{Z}$ transform as a special case of the Laplace transform, but we prefer to think of it as a completely independent transform. It is reasonable then to expect that, if we obtain $\mathcal{Z}$ properties analogous to the Laplace properties, we can also perform similar operations. In particular, we should be able to solve difference equations with the $\mathcal{Z}$ transform, similarly to solving differential equations with the Laplace transform.
10.1.14 Example. The $\mathcal{Z}$ transform of $h_n = 1$ is
$$\mathcal{Z}(h_n) = \sum_{n=0}^\infty 1\cdot z^{-n} = \frac{1}{1-z^{-1}} = \frac{z}{z-1}.$$

10.1.15 Example. The $\mathcal{Z}$ transform of
$$\delta_n := \begin{cases} 1 & n = 0 \\ 0 & n \neq 0 \end{cases}$$
is
$$\Delta(z) = \mathcal{Z}(\delta_n) = \sum_{n=0}^\infty \delta_n z^{-n} = 1 + 0z^{-1} + 0z^{-2} + \dots = 1.$$

10.1.16 Example. The $\mathcal{Z}$ transform of $x_n = n$ is (setting $u = z^{-1}$)
$$X(z) = \mathcal{Z}(x_n) = 0 + 1z^{-1} + 2z^{-2} + 3z^{-3} + \dots = u\left( 1 + 2u + 3u^2 + \dots \right) = u\,\frac{d}{du}\left( u + u^2 + u^3 + \dots \right)$$
$$= u\,\frac{d}{du}\left( \frac{u}{1-u} \right) = \frac{u}{(1-u)^2} = \frac{z^{-1}}{\left(1-z^{-1}\right)^2} = \frac{z^{-1} z^2}{(z-1)^2} = \frac{z}{(z-1)^2}.$$

10.1.17 Theorem. $\mathcal{Z}(\kappa x_n + \lambda y_n) = \kappa \mathcal{Z}(x_n) + \lambda \mathcal{Z}(y_n)$.
Proof. Easy.
10.1.18 Theorem. For $m \ge 0$: $\mathcal{Z}(x_{n-m} h_{n-m}) = z^{-m}\,\mathcal{Z}(x_n)$.
Proof. Because
$$\mathcal{Z}(x_{n-m} h_{n-m}) = \sum_{n=0}^\infty x_{n-m} h_{n-m} z^{-n} = \sum_{n=m}^\infty x_{n-m} z^{-n} = z^{-m} \sum_{n=m}^\infty x_{n-m} z^{-(n-m)} = z^{-m} \sum_{k=0}^\infty x_k z^{-k} = z^{-m}\,\mathcal{Z}(x_n).$$
10.1.19 Theorem. For $m \ge 0$: $\mathcal{Z}(x_{n+m}) = z^m \left( \mathcal{Z}(x_n) - \sum_{k=0}^{m-1} x_k z^{-k} \right)$.
Proof. Because
$$\mathcal{Z}(x_{n+m}) = \sum_{n=0}^\infty x_{n+m} z^{-n} = z^m \sum_{n=0}^\infty x_{n+m} z^{-(n+m)} = z^m \left( \sum_{n=0}^\infty x_n z^{-n} - \sum_{n=0}^{m-1} x_n z^{-n} \right).$$

10.1.20 Example. So, for example,
$$\mathcal{Z}(x_{n+1}) = zX(z) - zx_0, \qquad \mathcal{Z}(x_{n+2}) = z^2 X(z) - z^2 x_0 - zx_1.$$
10.1.21 Theorem. The following final and initial value properties hold:
$$x_0 = \lim_{z\to\infty} X(z), \qquad \lim_{n\to\infty} x_n = \lim_{z\to 1} \frac{z-1}{z}\,X(z).$$
Proof. Easy.
10.1.22 Theorem. If $X(z) = \mathcal{Z}(x_n)$, then $X\left(\frac{z}{a}\right) = \mathcal{Z}(a^n x_n)$.
Proof. Easy.

10.1.23 Theorem. If $\mathcal{Z}(x_n) = X(z)$, then $\mathcal{Z}(n x_n) = -z\,\frac{dX}{dz}$.
Proof. Because
$$\mathcal{Z}(n x_n) = \sum_{n=0}^\infty n x_n z^{-n} = -z \sum_{n=0}^\infty x_n \left( z^{-n} \right)' = -z \left( \sum_{n=0}^\infty x_n z^{-n} \right)' = -z\,\frac{dX}{dz}.$$

10.1.24 Theorem (Convolution). If $\mathcal{Z}(x_n) = X(z)$, $\mathcal{Z}(y_n) = Y(z)$, then
$$\mathcal{Z}\left( \sum_{m=0}^n x_m y_{n-m} \right) = X(z)\,Y(z).$$
Proof. Because
$$\mathcal{Z}\left( \sum_{m=0}^n x_m y_{n-m} \right) = \sum_{n=0}^\infty \left( \sum_{m=0}^n x_m y_{n-m} \right) z^{-n} = \sum_{n=0}^\infty \sum_{m=0}^n x_m y_{n-m} z^{-(n-m)} z^{-m}$$
$$= \sum_{m=0}^\infty \sum_{n=m}^\infty x_m y_{n-m} z^{-(n-m)} z^{-m} = \sum_{m=0}^\infty \sum_{k=0}^\infty x_m y_k z^{-k} z^{-m} = \left( \sum_{m=0}^\infty x_m z^{-m} \right)\left( \sum_{k=0}^\infty y_k z^{-k} \right) = X(z)\,Y(z).$$
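The convolution theorem can be checked numerically on truncated series (truncation error is negligible here because both sequences decay geometrically):

```python
# With x_n = 2^{-n} and y_n = 3^{-n}, the Z transform of the convolution
# should match the product X(z) Y(z) at any |z| > 1.
z = 2.0
N = 60
x = [2.0 ** -n for n in range(N)]
y = [3.0 ** -n for n in range(N)]
conv = [sum(x[m] * y[n - m] for m in range(n + 1)) for n in range(N)]

def Ztrans(seq):
    return sum(seq[n] * z ** -n for n in range(N))

err = abs(Ztrans(conv) - Ztrans(x) * Ztrans(y))
```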

10.1.25 Theorem. The following pairs hold:
$$a^n \leftrightarrow \frac{z}{z-a}, \qquad n \leftrightarrow \frac{z}{(z-1)^2}, \qquad n^2 \leftrightarrow \frac{z(z+1)}{(z-1)^3},$$
$$\cos(wn) \leftrightarrow \frac{z(z-\cos w)}{z^2 - 2z\cos w + 1}, \qquad \sin(wn) \leftrightarrow \frac{z\sin w}{z^2 - 2z\cos w + 1}.$$
Proof. Left to the reader.



10.1.26 Inversion methods are similar to those of the Laplace transform, as will be seen in the following examples of solving difference equations.
10.1.27 Example. To solve
$$3x_{n+1} - 2x_n = (-1)^n, \qquad x_0 = 2,$$

we take the $\mathcal{Z}$ transforms:
$$3(zX - zx_0) - 2X = \frac{z}{z+1} \Rightarrow 3zX - 6z - 2X = \frac{z}{z+1} \Rightarrow (3z-2)X = 6z + \frac{z}{z+1}$$
$$\Rightarrow X = \frac{6z}{3z-2} + \frac{z}{(z+1)(3z-2)} \Rightarrow \frac{X}{z} = \frac{6}{3z-2} + \frac{1}{(z+1)(3z-2)}$$
$$\Rightarrow X(z) = \frac{11z}{5\left(z - \frac{2}{3}\right)} - \frac{z}{5(z+1)} \Rightarrow x_n = \frac{11}{5}\left(\frac{2}{3}\right)^n - \frac{1}{5}(-1)^n.$$
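The result of Example 10.1.27 can be verified with exact rational arithmetic:

```python
from fractions import Fraction as F

def x(n):
    """Claimed solution x_n = (11/5)(2/3)^n - (1/5)(-1)^n."""
    return F(11, 5) * F(2, 3) ** n - F(1, 5) * F(-1) ** n

ok_ic = x(0) == 2
ok_rec = all(3 * x(n + 1) - 2 * x(n) == F(-1) ** n for n in range(15))
```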
10.1.28 Example. To solve
$$x_{n+2} - \frac{3}{2}x_{n+1} + \frac{1}{2}x_n = \left(\frac{1}{3}\right)^n, \qquad x_0 = 4, \quad x_1 = 0,$$
we take the $\mathcal{Z}$ transforms:
$$z^2 X - z^2 x_0 - zx_1 - \frac{3}{2}(zX - zx_0) + \frac{1}{2}X = \frac{z}{z-\frac{1}{3}} \Rightarrow z^2 X - 4z^2 - \frac{3}{2}zX + 6z + \frac{1}{2}X = \frac{z}{z-\frac{1}{3}}$$
$$\Rightarrow \left( z^2 - \frac{3}{2}z + \frac{1}{2} \right) X = 4z^2 - 6z + \frac{z}{z-\frac{1}{3}} \Rightarrow \frac{X}{z} = \frac{4z-6}{\left(z-\frac{1}{2}\right)(z-1)} + \frac{1}{\left(z-\frac{1}{3}\right)\left(z-\frac{1}{2}\right)(z-1)}$$
$$\Rightarrow X(z) = \frac{9z}{z-\frac{1}{3}} - \frac{4z}{z-\frac{1}{2}} - \frac{z}{z-1} \Rightarrow x_n = 9\left(\frac{1}{3}\right)^n - 4\left(\frac{1}{2}\right)^n - 1.$$

10.1.29 Example. To solve
$$x_{n+2} - 7x_{n+1} + 10x_n = 16n, \qquad x_0 = 6, \quad x_1 = 2,$$
we take the $\mathcal{Z}$ transforms:
$$z^2 X - z^2 x_0 - zx_1 - 7(zX - zx_0) + 10X = \frac{16z}{(z-1)^2} \Rightarrow z^2 X - 6z^2 - 2z - 7(zX - 6z) + 10X = \frac{16z}{(z-1)^2}$$
$$\Rightarrow \left( z^2 - 7z + 10 \right) X = 6z^2 - 40z + \frac{16z}{(z-1)^2} \Rightarrow \frac{X}{z} = \frac{5}{z-1} + \frac{4}{(z-1)^2} + \frac{4}{z-2} - \frac{3}{z-5}$$
$$\Rightarrow X(z) = \frac{5z}{z-1} + \frac{4z}{(z-1)^2} + \frac{4z}{z-2} - \frac{3z}{z-5} \Rightarrow x_n = 5 + 4n + 4\cdot 2^n - 3\cdot 5^n.$$

10.1.30 Example. To solve
$$x_{n+2} - 2x_{n+1} + x_n = 3n, \qquad x_0 = 0, \quad x_1 = 2,$$
we take the $\mathcal{Z}$ transforms:
$$z^2 X - z^2 x_0 - zx_1 - 2(zX - zx_0) + X = \frac{3z}{(z-1)^2} \Rightarrow z^2 X - 2z - 2zX + X = \frac{3z}{(z-1)^2}$$
$$\Rightarrow \left( z^2 - 2z + 1 \right) X = 2z + \frac{3z}{(z-1)^2} \Rightarrow X(z) = \frac{3z}{(z-1)^4} + \frac{2z}{(z-1)^2} \Rightarrow x_n = \frac{1}{2}n(n-1)(n-2) + 2n.$$
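As before, both of the last two examples can be verified by direct substitution into their recurrences with integer arithmetic:

```python
# Example 10.1.29: x_n = 5 + 4n + 4*2^n - 3*5^n
def x29(n):
    return 5 + 4 * n + 4 * 2 ** n - 3 * 5 ** n

# Example 10.1.30: x_n = n(n-1)(n-2)/2 + 2n (the product of three
# consecutive integers is always even, so integer division is exact)
def x30(n):
    return n * (n - 1) * (n - 2) // 2 + 2 * n

ok29 = (x29(0), x29(1)) == (6, 2) and all(
    x29(n + 2) - 7 * x29(n + 1) + 10 * x29(n) == 16 * n for n in range(15))
ok30 = (x30(0), x30(1)) == (0, 2) and all(
    x30(n + 2) - 2 * x30(n + 1) + x30(n) == 3 * n for n in range(15))
```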

10.1.31 To introduce the $\mathcal{Z}$ transform we did not really need the Laplace transform; we could have started from scratch. (In fact many people have done this, in separate domains; for instance, generating functions in Probability Theory exploit the same idea.) But it is useful to have seen the connection of $\mathcal{L}$ to $\mathcal{Z}$.
10.1.32 Recall the analogy
$$\mathcal{Z}(x) = \sum_{n=0}^\infty x_n \left( z^{-1} \right)^n = \sum_{n=0}^\infty x_n u^n,$$
$$\mathcal{L}(x) = \int_0^\infty x(t)\,e^{-st}\,dt = \int_0^\infty x(t)\,v^t\,dt \qquad \left( v = e^{-s} \right).$$
So the Laplace transform is like a "continuous" Taylor series.

10.2 Solved Problems

10.3 Unsolved Problems
1. Find $x_n$ and $X(z) = \mathcal{Z}(x_n)$ given that $x_{n+1} + \frac{1}{2}x_n = 0$, $x_0 = 1$. Ans. $x_n = \left(-\frac{1}{2}\right)^n$.
2. Find $x_n$ and $X(z) = \mathcal{Z}(x_n)$ given that $x_{n+1} + \frac{1}{2}x_n = 1$, $x_0 = 0$. Ans. $x_n = \frac{2}{3} - \frac{2}{3}\left(-\frac{1}{2}\right)^n$.
3. Find $x_n$ and $X(z) = \mathcal{Z}(x_n)$ given that $x_{n+1} + \frac{1}{2}x_n = 1$, $x_0 = 2$. Ans. $x_n = \frac{4}{3}\left(-\frac{1}{2}\right)^n + \frac{2}{3}$.
4. Find $x_n$ and $X(z) = \mathcal{Z}(x_n)$ given that $x_{n+1} + \frac{1}{2}x_n = \left(\frac{1}{3}\right)^n$, $x_0 = 1$. Ans. $-\frac{1}{5}\left(-\frac{1}{2}\right)^n + \frac{6}{5}\left(\frac{1}{3}\right)^n$.
5. Find $x_n$ and $X(z) = \mathcal{Z}(x_n)$ given that $x_{n+1} + \frac{1}{2}x_n = n$, $x_0 = 0$. Ans. $\frac{4}{9}\left(-\frac{1}{2}\right)^n - \frac{4}{9} + \frac{2}{3}n$.
6. Find $x_n$ and $X(z) = \mathcal{Z}(x_n)$ given that $x_{n+1} + \frac{1}{2}x_n = \left(\frac{1}{2}\right)^n$, $x_0 = 1$. Ans. $\left(\frac{1}{2}\right)^n$.
7. Find $x_n$ and $X(z) = \mathcal{Z}(x_n)$ given that $x_{n+1} + \frac{1}{2}x_n = \left(\frac{1}{2}\right)^n$, $x_0 = 3$. Ans. $2\left(-\frac{1}{2}\right)^n + \left(\frac{1}{2}\right)^n$.
8. Find $x_n$ and $X(z) = \mathcal{Z}(x_n)$ given that $x_{n+1} + \frac{1}{2}x_n = \left(-\frac{1}{2}\right)^n$, $x_0 = 3$. Ans. $5\left(-\frac{1}{2}\right)^n - (2n+2)\left(-\frac{1}{2}\right)^n$.
9. Find $x_n$ and $X(z) = \mathcal{Z}(x_n)$ given that $x_{n+1} + x_n = 1$, $x_0 = 2$. Ans. $\frac{3}{2}(-1)^n + \frac{1}{2}$.
10. Find $x_n$ and $X(z) = \mathcal{Z}(x_n)$ given that $x_{n+1} - x_n = 1$, $x_0 = 2$. Ans. $2 + n$.
11. Find $x_n$ and $X(z) = \mathcal{Z}(x_n)$ given that $x_{n+2} + \frac{5}{6}x_{n+1} + \frac{1}{6}x_n = 0$, $x_0 = 1$, $x_1 = 1$. Ans. $9\left(-\frac{1}{3}\right)^n - 8\left(-\frac{1}{2}\right)^n$.
12. Find $x_n$ and $X(z) = \mathcal{Z}(x_n)$ given that $x_{n+2} + \frac{5}{6}x_{n+1} + \frac{1}{6}x_n = 0$, $x_0 = 1$, $x_1 = 1$. Ans. $9\left(-\frac{1}{3}\right)^n - 8\left(-\frac{1}{2}\right)^n$.
13. Find $x_n$ and $X(z) = \mathcal{Z}(x_n)$ given that $x_{n+2} + x_{n+1} + \frac{1}{4}x_n = 0$, $x_0 = 1$, $x_1 = 1$. Ans. $4\left(-\frac{1}{2}\right)^n - (3n+3)\left(-\frac{1}{2}\right)^n$.
14. Find $x_n$ and $X(z) = \mathcal{Z}(x_n)$ given that $x_{n+2} + \frac{5}{6}x_{n+1} + \frac{1}{6}x_n = (-1)^n$, $x_0 = 0$, $x_1 = 0$. Ans. $9\left(-\frac{1}{3}\right)^n - 12\left(-\frac{1}{2}\right)^n + 3(-1)^n$.
15. Find $x_n$ and $X(z) = \mathcal{Z}(x_n)$ given that $x_{n+2} + x_{n+1} + \frac{1}{4}x_n = (-1)^n$, $x_0 = 1$, $x_1 = 1$. Ans. $4\left(-\frac{1}{2}\right)^n - (7n+7)\left(-\frac{1}{2}\right)^n + 4(-1)^n$.
16. Find $x_n$ and $X(z) = \mathcal{Z}(x_n)$ given that $x_{n+2} + x_{n+1} + \frac{1}{4}x_n = \left(\frac{1}{2}\right)^n$, $x_0 = 0$, $x_1 = 1$. Ans. $-\frac{1}{2^n}\left( (-1)^n - 1 \right)$.
17. Find $x_n$ and $X(z) = \mathcal{Z}(x_n)$ given that $x_{n+2} + x_{n+1} + \frac{1}{4}x_n = \left(\frac{1}{2}\right)^n$, $x_0 = 1$, $x_1 = 1$. Ans. $-\frac{1}{2^n}\left( n(-1)^n - 1 \right)$.
18. Find $x_n$ and $X(z) = \mathcal{Z}(x_n)$ given that $x_{n+2} + 3x_{n+1} + 2x_n = 1$, $x_0 = 1$, $x_1 = 1$. Ans. $\frac{5}{2}(-1)^n - \frac{5}{3}(-2)^n + \frac{1}{6}$.
19. Find $x_n$ and $X(z) = \mathcal{Z}(x_n)$ given that $x_{n+2} + 3x_{n+1} + 2x_n = n$, $x_0 = 0$, $x_1 = 1$. Ans. $\frac{5}{4}(-1)^n - \frac{10}{9}(-2)^n + \frac{n}{6} - \frac{5}{36}$.
20. Find $x_n$ and $X(z) = \mathcal{Z}(x_n)$ given that $x_{n+2} + 3x_{n+1} + 2x_n = 5^n$, $x_0 = 0$, $x_1 = 1$. Ans. $\frac{5}{6}(-1)^n - \frac{6}{7}(-2)^n + \frac{1}{42}5^n$.
21. Find $x_n$, $y_n$ and $X(z) = \mathcal{Z}(x_n)$, $Y(z) = \mathcal{Z}(y_n)$ given that
$$x_{n+1} = \frac{1}{2}x_n + y_n, \quad x_0 = 0, \qquad y_{n+1} = \frac{1}{3}y_n, \quad y_0 = 1.$$
Ans. $x_n = 6\left(\frac{1}{2}\right)^n - 6\left(\frac{1}{3}\right)^n$, $y_n = \left(\frac{1}{3}\right)^n$.
22. Find $x_n$, $y_n$ and $X(z) = \mathcal{Z}(x_n)$, $Y(z) = \mathcal{Z}(y_n)$ given that
$$x_{n+1} = \frac{1}{2}x_n + y_n, \quad x_0 = 0, \qquad y_{n+1} = \frac{1}{3}y_n + 1, \quad y_0 = 0.$$
Ans. $x_n = 3 + 9\left(\frac{1}{3}\right)^n - 12\left(\frac{1}{2}\right)^n$, $y_n = \frac{3}{2} - \frac{3}{2}\left(\frac{1}{3}\right)^n$.
23. Find $x_n$, $y_n$ and $X(z) = \mathcal{Z}(x_n)$, $Y(z) = \mathcal{Z}(y_n)$ given that
$$x_{n+1} = x_n + 2y_n, \quad x_0 = 1, \qquad y_{n+1} = 4x_n + 3y_n, \quad y_0 = 0.$$
Ans. $x_n = \frac{1}{3}5^n + \frac{2}{3}(-1)^n$, $y_n = \frac{2}{3}5^n - \frac{2}{3}(-1)^n$.
24. Find $x_n$, $y_n$ and $X(z) = \mathcal{Z}(x_n)$, $Y(z) = \mathcal{Z}(y_n)$ given that
$$x_{n+1} = x_n + 2y_n + 1, \quad x_0 = 0, \qquad y_{n+1} = 4x_n + 3y_n, \quad y_0 = 0.$$
Ans. $x_n = \frac{1}{12}5^n - \frac{1}{3}(-1)^n + \frac{1}{4}$, $y_n = \frac{1}{6}5^n + \frac{1}{3}(-1)^n - \frac{1}{2}$.
25. Find $x_n$, $y_n$ and $X(z) = \mathcal{Z}(x_n)$, $Y(z) = \mathcal{Z}(y_n)$ given that
$$x_{n+1} = x_n + 2y_n + 2^n, \quad x_0 = 0, \qquad y_{n+1} = 4x_n + 3y_n, \quad y_0 = 0.$$
Ans. $x_n = \frac{1}{9}2^n + \frac{1}{9}5^n - \frac{2}{9}(-1)^n$, $y_n = \frac{2}{9}5^n + \frac{2}{9}(-1)^n - \frac{4}{9}2^n$.

10.4 Advanced Problems


III
Fourier


11 Fourier Series . . . . . . . . . . . . . . . . . . . . 101

12 Fourier Transform . . . . . . . . . . . . . . . 115


11. Fourier Series

A Fourier series expansion of a periodic function f(t) is a representation of f(t) as a sum of sines and cosines (or, as a sum of complex exponentials).

11.1 Theory and Examples

11.1.1 Example. Consider the following problem:
$$\frac{dy}{dt} + ay = \sum_{n=0}^\infty (-1)^n\,\mathrm{Heaviside}(t - n\pi), \qquad y(0) = 0. \qquad (11.1)$$
In the plot we see the (periodic) input and the output of (11.1). The problem can be solved in the standard manner by Laplace transform:
$$sY + aY = \frac{1}{s} \sum_{n=0}^\infty (-1)^n e^{-n\pi s} \Rightarrow Y = \frac{1}{s(s+a)} \sum_{n=0}^\infty (-1)^n e^{-n\pi s} \Rightarrow Y = \frac{1}{a}\left( \frac{1}{s} - \frac{1}{s+a} \right) \sum_{n=0}^\infty (-1)^n e^{-n\pi s}.$$

Letting
$$Y_n(s) = \frac{(-1)^n}{a}\left( \frac{1}{s} - \frac{1}{s+a} \right) e^{-n\pi s}, \qquad y_n(t) = \frac{(-1)^n}{a}\,\mathrm{Heaviside}(t - n\pi)\left( 1 - e^{-a(t-n\pi)} \right),$$
we get
$$Y(s) = \sum_{n=0}^\infty Y_n(s), \qquad y(t) = \sum_{n=0}^\infty \frac{(-1)^n}{a}\,\mathrm{Heaviside}(t - n\pi)\left( 1 - e^{-a(t-n\pi)} \right).$$

11.1.2 In the plot we see clearly the almost-periodicity of y (t). This is reasonable:
we apply a periodic input and we get a periodic output. However, periodicity is
eh
not obvious from the form of y (t). We want another solution method, which will
show the periodicity clearly (actually we want to separate the transient part of the
solution from the steady-state, periodic one).
11.1.3 Example. Before proceeding to this end, we will also solve the following easy problems. First we consider
$$\frac{dy}{dt} + ay = 1, \qquad y(0) = 0, \qquad (11.2)$$
which has solution:
$$(s+a)Y = \frac{1}{s} \Rightarrow Y = \frac{1}{a}\left( \frac{1}{s} - \frac{1}{s+a} \right) \Rightarrow y(t) = \frac{1}{a}\left( 1 - e^{-at} \right).$$
Second we consider
$$\frac{dy}{dt} + ay = \sin(nt), \qquad y(0) = 0, \qquad (11.3)$$
which has solution:
$$sY + aY = \frac{n}{s^2+n^2} \Rightarrow Y = \frac{n}{\left(s^2+n^2\right)(s+a)} \Rightarrow Y = \frac{1}{a^2+n^2}\left( \frac{n}{s+a} + \frac{na}{n^2+s^2} - \frac{ns}{n^2+s^2} \right)$$
$$\Rightarrow y(t) = \frac{1}{a^2+n^2}\left( n\,e^{-at} + a\sin(nt) - n\cos(nt) \right).$$
In the plot we see again the almost-periodicity of $y(t)$; this is also reflected in the steady-state component of the solution:
$$\frac{a\sin(nt) - n\cos(nt)}{a^2+n^2}.$$
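The solution of (11.3) can be checked by direct substitution; here numerically, writing it as $y(t) = \left( n e^{-at} + a\sin nt - n\cos nt \right)/(a^2+n^2)$ (note that the factor $n$ multiplies only the exponential and cosine terms):

```python
import math

a, n = 0.7, 3.0

def y(t):
    return (n * math.exp(-a * t) + a * math.sin(n * t) - n * math.cos(n * t)) / (a ** 2 + n ** 2)

def residual(t, h=1e-6):
    """Residual of y' + a y = sin(nt) via a central difference."""
    dy = (y(t + h) - y(t - h)) / (2 * h)
    return dy + a * y(t) - math.sin(n * t)

res = max(abs(residual(t)) for t in (0.5, 1.0, 2.0, 5.0))
ic = abs(y(0.0))   # the initial condition y(0) = 0
```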
11.1.4 Note that, by linearity, if
$$\frac{dy}{dt} + ay = u(t), \qquad y(0) = 0, \qquad (11.4)$$
with input $u(t) = u_n(t)$ has solution $y_n(t)$, then with input
$$u(t) = \sum_n \kappa_n u_n(t)$$
it has solution
$$y(t) = \sum_n \kappa_n y_n(t).$$

11.1.5 Now we will do two things:
1. present a method by which periodic functions are written as sums of sines and cosines;
2. use this method to write the solution of our Example 11.1.1 as the sum of a transient part and a periodic (sum of sines and cosines) part.
11.1.6 Definition: We call $f(t)$ periodic iff there exists some $T$ such that
$$\forall t: f(t+T) = f(t).$$
The smallest such $T$ is called the period of $f(t)$.


11.1.7 Definition: We call $f(t)$ piecewise continuous in $[t_1, t_2]$ iff
1. there exist $t_1 = \tau_0 < \tau_1 < \tau_2 < \dots < \tau_{n-1} < \tau_n = t_2$ such that
$$[t_1, t_2] = [\tau_0, \tau_1] \cup [\tau_1, \tau_2] \cup \dots \cup [\tau_{n-1}, \tau_n],$$
2. $f(t)$ is continuous inside each $[\tau_{i-1}, \tau_i]$ and
3. for each $\tau_i$ ($i \in \{1, \dots, n\}$) the side limits $\lim_{t\to\tau_i^-} f(t)$, $\lim_{t\to\tau_i^+} f(t)$ exist (except for $\lim_{t\to t_1^-} f(t)$ and $\lim_{t\to t_2^+} f(t)$).

11.1.8 Theorem: Let $f(t)$ satisfy the following (Dirichlet) conditions:
1. $f(t)$ is defined in $(0,2L)$,
2. $f(t)$ and $f'(t)$ are piecewise continuous in $(0,2L)$,
3. $f(t)$ is periodic with period $2L$.
Then, using the constants
$$a_{n}=\frac{1}{L}\int_{0}^{2L}f(t)\cos\frac{n\pi t}{L}\,dt,\qquad b_{n}=\frac{1}{L}\int_{0}^{2L}f(t)\sin\frac{n\pi t}{L}\,dt, \tag{11.5}$$
we have:
1. at every point of continuity $t$:
$$f(t)=\frac{a_{0}}{2}+\sum_{n=1}^{\infty}a_{n}\cos\frac{n\pi t}{L}+\sum_{n=1}^{\infty}b_{n}\sin\frac{n\pi t}{L}, \tag{11.6}$$
2. at every point of discontinuity $t$:
$$\frac{1}{2}\left(\lim_{\tau\to t^{-}}f(\tau)+\lim_{\tau\to t^{+}}f(\tau)\right)=\frac{a_{0}}{2}+\sum_{n=1}^{\infty}a_{n}\cos\frac{n\pi t}{L}+\sum_{n=1}^{\infty}b_{n}\sin\frac{n\pi t}{L}. \tag{11.7}$$
The right side of (11.6), when $a_{n}$, $b_{n}$ are given by (11.5), is called the trigonometric Fourier series of $f(t)$.
Proof. This is a partial proof. Assuming that $f(t)$ can be written as
$$f(t)=\frac{a_{0}}{2}+\sum_{n=1}^{\infty}a_{n}\cos\frac{n\pi t}{L}+\sum_{n=1}^{\infty}b_{n}\sin\frac{n\pi t}{L}, \tag{11.8}$$
we will show that $a_{n}$, $b_{n}$ are given by (11.5).
Indeed, since
$$\forall n\in\mathbb{N}:\ \int_{0}^{2L}\cos\frac{n\pi t}{L}\,dt=\int_{0}^{2L}\sin\frac{n\pi t}{L}\,dt=0,$$
we have:
$$\int_{0}^{2L}f(t)\,dt=\int_{0}^{2L}\left(\frac{a_{0}}{2}+\sum_{n=1}^{\infty}a_{n}\cos\frac{n\pi t}{L}+\sum_{n=1}^{\infty}b_{n}\sin\frac{n\pi t}{L}\right)dt$$
$$=\int_{0}^{2L}\frac{a_{0}}{2}\,dt+\sum_{n=1}^{\infty}a_{n}\left(\int_{0}^{2L}\cos\frac{n\pi t}{L}\,dt\right)+\sum_{n=1}^{\infty}b_{n}\left(\int_{0}^{2L}\sin\frac{n\pi t}{L}\,dt\right)=\int_{0}^{2L}\frac{a_{0}}{2}\,dt+0.$$
In short
$$\int_{0}^{2L}f(t)\,dt=\frac{a_{0}}{2}\,2L\ \Rightarrow\ a_{0}=\frac{1}{L}\int_{0}^{2L}f(t)\,dt=\frac{1}{L}\int_{0}^{2L}f(t)\cos\frac{0\pi t}{L}\,dt.$$
This proves the $a_{0}$ formula.
At

Taking any $m\in\mathbb{N}$, we have
$$\int_{0}^{2L}f(t)\cos\frac{m\pi t}{L}\,dt=\int_{0}^{2L}\left(\frac{a_{0}}{2}+\sum_{n=1}^{\infty}a_{n}\cos\frac{n\pi t}{L}+\sum_{n=1}^{\infty}b_{n}\sin\frac{n\pi t}{L}\right)\cos\frac{m\pi t}{L}\,dt$$
$$=\int_{0}^{2L}\frac{a_{0}}{2}\cos\frac{m\pi t}{L}\,dt+\sum_{n=1}^{\infty}a_{n}\left(\int_{0}^{2L}\cos\frac{n\pi t}{L}\cos\frac{m\pi t}{L}\,dt\right)+\sum_{n=1}^{\infty}b_{n}\left(\int_{0}^{2L}\sin\frac{n\pi t}{L}\cos\frac{m\pi t}{L}\,dt\right). \tag{11.9}$$
Since
$$\forall n\in\mathbb{N}_{0}:\ \int_{0}^{2L}\sin\frac{n\pi t}{L}\cos\frac{m\pi t}{L}\,dt=0,\qquad \forall n\in\mathbb{N}_{0},\ n\neq m:\ \int_{0}^{2L}\cos\frac{n\pi t}{L}\cos\frac{m\pi t}{L}\,dt=0$$
and
$$\int_{0}^{2L}\cos^{2}\frac{m\pi t}{L}\,dt=L,$$
replacing the above in (11.9) we get
$$\int_{0}^{2L}f(t)\cos\frac{m\pi t}{L}\,dt=a_{m}L\ \Rightarrow\ a_{m}=\frac{1}{L}\int_{0}^{2L}f(t)\cos\frac{m\pi t}{L}\,dt$$
and we have proved the $a_{n}$ formula for $n\in\mathbb{N}$. The $b_{n}$ formula (for $n\in\mathbb{N}$) is proved similarly.
11.1.9 For a full proof we would need to define the finite sum
$$f_{N}(t)=\frac{a_{0}}{2}+\sum_{n=1}^{N}a_{n}\cos\frac{n\pi t}{L}+\sum_{n=1}^{N}b_{n}\sin\frac{n\pi t}{L}$$
and show that (at every continuity point $t$) we have
$$\lim_{N\to\infty}f_{N}(t)=f(t).$$

11.1.10 Remark: The Dirichlet conditions are sufficient but not necessary for the
existence of the trigonometric Fourier series of f (t).
11.1.11 Example: Let $f(t)$ be
$$f(t)=\begin{cases}1 & \text{when }t\in(2k\pi,(2k+1)\pi),\ k\in\mathbb{N}_{0}\\ 0 & \text{when }t\in((2k+1)\pi,(2k+2)\pi),\ k\in\mathbb{N}_{0}\end{cases}$$
Obviously $f(t)$ is periodic with $T=2\pi$ (so $L=\pi$). We compute its Fourier series as follows.
$$a_{0}=\frac{1}{\pi}\int_{0}^{2\pi}f(t)\,dt=\frac{1}{\pi}\int_{0}^{\pi}1\,dt+\frac{1}{\pi}\int_{\pi}^{2\pi}0\,dt=1.$$
For any $n\geq 1$ we have
$$a_{n}=\frac{1}{\pi}\int_{0}^{2\pi}f(t)\cos(nt)\,dt=\frac{1}{\pi}\int_{0}^{\pi}1\cdot\cos(nt)\,dt=0.$$
Also
$$b_{n}=\frac{1}{\pi}\int_{0}^{2\pi}f(t)\sin(nt)\,dt=\frac{1}{\pi}\int_{0}^{\pi}\sin(nt)\,dt=-\frac{1}{\pi}\left[\frac{1}{n}\cos(nt)\right]_{0}^{\pi}$$
$$=-\frac{1}{n\pi}\left(\cos(n\pi)-\cos(0)\right)=\begin{cases}\frac{2}{n\pi} & \text{when }n=2k+1\\ 0 & \text{when }n=2k.\end{cases}$$
Hence
$$f(t)=\frac{1}{2}+\frac{2}{\pi}\left(\frac{\sin(t)}{1}+\frac{\sin(3t)}{3}+\frac{\sin(5t)}{5}+\dots\right)=\frac{1}{2}+\frac{2}{\pi}\sum_{n\in\{1,3,5,\dots\}}\frac{\sin(nt)}{n}.$$
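The coefficients computed above can be checked numerically. The sketch below approximates the integrals (11.5) with midpoint Riemann sums; the grid size is an arbitrary choice and NumPy is assumed available.

```python
import numpy as np

# Midpoint Riemann sums for the coefficients (11.5), with L = pi,
# applied to the square wave of this example.
N = 100_000
dt = 2 * np.pi / N
t = (np.arange(N) + 0.5) * dt                # midpoints of (0, 2*pi)
f = (t < np.pi).astype(float)                # 1 on (0, pi), 0 on (pi, 2*pi)

a0 = np.sum(f) * dt / np.pi                  # expect 1
a1 = np.sum(f * np.cos(t)) * dt / np.pi      # expect 0
b1 = np.sum(f * np.sin(t)) * dt / np.pi      # expect 2/pi
b2 = np.sum(f * np.sin(2 * t)) * dt / np.pi  # expect 0
print(a0, a1, b1, b2)
```

Any other harmonic can be checked the same way by changing the factor inside the sum.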

11.1.12 Example. To solve Example 11.1.1, we first take $u_{0}(t)=\frac{1}{2}$ and get that
$$y_{0}(t)=\frac{1}{2a}\left(1-e^{-at}\right).$$
Then, with $u_{n}(t)=\frac{2}{n\pi}\sin(nt)$ we get (from the second problem) that
$$y_{n}(t)=\frac{2}{\left(a^{2}+n^{2}\right)\pi}e^{-at}-\frac{2}{\left(a^{2}+n^{2}\right)\pi n}\left(n\cos(nt)-a\sin(nt)\right).$$
And finally, with
$$u(t)=\frac{1}{2}+\frac{2}{\pi}\sum_{n\in\{1,3,5,\dots\}}\frac{\sin(nt)}{n}$$
the solution is the sum of transient and steady-state periodic parts:
$$y(t)=\widetilde{y}(t)+\overline{y}(t)$$
with
$$\widetilde{y}(t)=-\frac{e^{-at}}{2a}+\sum_{n\in\{1,3,5,\dots\}}\frac{2}{\left(a^{2}+n^{2}\right)\pi}e^{-at},$$
$$\overline{y}(t)=\frac{1}{2a}-\sum_{n\in\{1,3,5,\dots\}}\frac{2\left(n\cos(nt)-a\sin(nt)\right)}{\left(a^{2}+n^{2}\right)\pi n}.$$
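The transient/steady-state split above can be sanity-checked by integrating the ODE directly and comparing with a truncated version of the series. This is only a sketch: the value a = 1, the forward-Euler step, and the truncation at n = 99 are arbitrary choices.

```python
import numpy as np

# Forward-Euler solution of dy/dt + a*y = u(t) (u = square wave of
# Example 11.1.11), compared with the truncated series ytilde + ybar.
a, dt, T = 1.0, 1e-3, 20.0
t = np.arange(0.0, T, dt)
u = (np.mod(t, 2 * np.pi) < np.pi).astype(float)

y = np.zeros_like(t)
for k in range(len(t) - 1):                  # y(0) = 0 matches the series
    y[k + 1] = y[k] + dt * (u[k] - a * y[k])

n = np.arange(1, 100, 2)[:, None]            # odd harmonics 1, 3, ..., 99
ytilde = -np.exp(-a * t) / (2 * a) \
         + np.sum(2 * np.exp(-a * t) / ((a**2 + n**2) * np.pi), axis=0)
ybar = 1 / (2 * a) \
       - np.sum(2 * (n * np.cos(n * t) - a * np.sin(n * t))
                / ((a**2 + n**2) * np.pi * n), axis=0)
err = np.max(np.abs(y - (ytilde + ybar)))
print(err)
```

The residual combines the Euler discretization error and the series truncation error, both small for these settings.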

11.1.13 Theorem: Let $f(t)$ satisfy the Dirichlet conditions. Then:
1. $f(t)$ is even iff its trigonometric Fourier series has only cosine terms;
2. $f(t)$ is odd iff its trigonometric Fourier series has only sine terms.
Proof. Assume that the $f(t)$ Fourier series has only cosine terms:
$$f(t)=\frac{a_{0}}{2}+\sum_{n=1}^{\infty}a_{n}\cos\frac{n\pi t}{L}.$$
Then $f(t)$ is even, since it is a sum of even functions.
Conversely, since
$$b_{n}=\frac{1}{L}\int_{-L}^{L}f(t)\sin\frac{n\pi t}{L}\,dt,$$
if $f(t)$ is even then $f(t)\sin\frac{n\pi t}{L}$ is odd and $b_{n}=0$, i.e., $f(t)$ has only cosine terms.
We have proved the first part of the theorem; the second part is proved similarly.
11.1.14 If $f(t)$ is defined only on a finite interval $[t_{1},t_{2}]$, we can extend it to $\mathbb{R}$ by defining
$$\forall t\in[t_{1},t_{2}],\ n\in\mathbb{Z}:\ f(t+n\cdot(t_{2}-t_{1}))=f(t).$$
The extended $f(t)$ is periodic with $T=t_{2}-t_{1}$ and the formulas (11.6), (11.7) hold.

11.1.15 Example: Define $f:[-5,5]\to\mathbb{R}$ as follows:
$$f(t)=\begin{cases}0 & \text{when }-5<t<0\\ 3 & \text{when }0\leq t<5.\end{cases}$$
To find the Fourier series of $f(t)$, we extend $f(t)$ to all of $\mathbb{R}$. Then $f(t)$ satisfies the Dirichlet conditions and has half period $L=5$, hence
$$a_{n}=\frac{1}{5}\int_{-5}^{5}f(t)\cos\frac{n\pi t}{5}\,dt=\frac{1}{5}\int_{-5}^{0}0\cdot\cos\frac{n\pi t}{5}\,dt+\frac{1}{5}\int_{0}^{5}3\cos\frac{n\pi t}{5}\,dt$$
$$=\frac{3}{5}\left[\frac{5}{n\pi}\sin\frac{n\pi t}{5}\right]_{t=0}^{t=5}=\begin{cases}3 & \text{when }n=0\\ 0 & \text{when }n\neq 0\end{cases}$$
(the $n=0$ case is computed directly: $a_{0}=\frac{1}{5}\int_{0}^{5}3\,dt=3$). Similarly
$$b_{n}=\frac{1}{5}\int_{-5}^{5}f(t)\sin\frac{n\pi t}{5}\,dt=\frac{1}{5}\int_{-5}^{0}0\cdot\sin\frac{n\pi t}{5}\,dt+\frac{1}{5}\int_{0}^{5}3\sin\frac{n\pi t}{5}\,dt$$
$$=\frac{3}{5}\left[-\frac{5}{n\pi}\cos\frac{n\pi t}{5}\right]_{t=0}^{t=5}=\frac{3(1-\cos n\pi)}{n\pi}.$$
Finally
$$f(t)=\frac{3}{2}+\frac{6}{\pi}\left(\sin\frac{\pi t}{5}+\frac{1}{3}\sin\frac{3\pi t}{5}+\frac{1}{5}\sin\frac{5\pi t}{5}+\dots\right).$$
Nota Bene:
$$\forall n\in\mathbb{Z}:\ f(5n)=\frac{1}{2}\left(\lim_{t\to 5n^{-}}f(t)+\lim_{t\to 5n^{+}}f(t)\right)=\frac{3}{2}.$$
11.1.16 Theorem. Let $f(t)$ satisfy the Dirichlet conditions. Then at every continuity point $t$ we have
$$f(t)=\sum_{n=-\infty}^{\infty}c_{n}e^{in\pi t/L}, \tag{11.10}$$
where
$$c_{n}=\frac{1}{2L}\int_{-L}^{L}f(t)e^{-in\pi t/L}\,dt. \tag{11.11}$$
At every discontinuity point $t$, the left side of (11.10) is replaced by
$$\frac{1}{2}\left(\lim_{\tau\to t^{-}}f(\tau)+\lim_{\tau\to t^{+}}f(\tau)\right).$$
The right side of (11.10) is called the exponential Fourier series of $f(t)$.
Proof. By Theorem 11.1.8, since $f(t)$ satisfies the Dirichlet conditions, at every continuity point $t$ we have
$$f(t)=\frac{a_{0}}{2}+\sum_{n=1}^{\infty}a_{n}\cos\frac{n\pi t}{L}+\sum_{n=1}^{\infty}b_{n}\sin\frac{n\pi t}{L}.$$
Replacing $\cos\frac{n\pi t}{L}$, $\sin\frac{n\pi t}{L}$ with their exponential forms we get
$$f(t)=\frac{a_{0}}{2}+\sum_{n=1}^{\infty}a_{n}\frac{e^{\frac{in\pi t}{L}}+e^{-\frac{in\pi t}{L}}}{2}-i\sum_{n=1}^{\infty}b_{n}\frac{e^{\frac{in\pi t}{L}}-e^{-\frac{in\pi t}{L}}}{2}$$
$$=\frac{a_{0}}{2}+\sum_{n=1}^{\infty}\frac{a_{n}-ib_{n}}{2}e^{\frac{in\pi t}{L}}+\sum_{n=1}^{\infty}\frac{a_{n}+ib_{n}}{2}e^{-\frac{in\pi t}{L}}=\sum_{n=-\infty}^{\infty}c_{n}e^{\frac{in\pi t}{L}}$$
where, for $n=0$,
$$c_{0}=\frac{a_{0}}{2}=\frac{1}{2L}\int_{-L}^{L}f(t)\,dt=\frac{1}{2L}\int_{-L}^{L}f(t)e^{i\frac{0\pi t}{L}}\,dt,$$
for $n\in\mathbb{Z}^{+}$,
$$c_{n}=\frac{a_{n}-ib_{n}}{2}=\frac{1}{2L}\left(\int_{-L}^{L}f(t)\cos\frac{n\pi t}{L}\,dt-i\int_{-L}^{L}f(t)\sin\frac{n\pi t}{L}\,dt\right)$$
$$=\frac{1}{2L}\int_{-L}^{L}f(t)\left(\cos\frac{n\pi t}{L}-i\sin\frac{n\pi t}{L}\right)dt=\frac{1}{2L}\int_{-L}^{L}f(t)e^{-i\frac{n\pi t}{L}}\,dt,$$
and for $n\in\mathbb{Z}^{-}$,
$$c_{n}=\frac{a_{-n}+ib_{-n}}{2}=\frac{1}{2L}\left(\int_{-L}^{L}f(t)\cos\frac{-n\pi t}{L}\,dt+i\int_{-L}^{L}f(t)\sin\frac{-n\pi t}{L}\,dt\right)$$
$$=\frac{1}{2L}\int_{-L}^{L}f(t)\left(\cos\frac{n\pi t}{L}-i\sin\frac{n\pi t}{L}\right)dt=\frac{1}{2L}\int_{-L}^{L}f(t)e^{-i\frac{n\pi t}{L}}\,dt.$$
In short, for each $n\in\mathbb{Z}$:
$$c_{n}=\frac{1}{2L}\int_{-L}^{L}f(t)e^{-i\frac{n\pi t}{L}}\,dt.$$

11.1.17 Example. Continuing Example 11.1.11, let us get the exponential Fourier series of $f(t)$. We have
$$c_{n}=\frac{1}{2\pi}\int_{0}^{2\pi}f(t)e^{-int}\,dt=\frac{1}{2\pi}\left(\int_{0}^{\pi}1\cdot e^{-int}\,dt+\int_{\pi}^{2\pi}0\cdot e^{-int}\,dt\right).$$
Since the second integral is zero, we get: for $n=0$
$$c_{0}=\frac{1}{2\pi}\int_{0}^{\pi}dt=\frac{1}{2}$$
and for $n\neq 0$:
$$c_{n}=\frac{1}{2\pi}\int_{0}^{\pi}e^{-int}\,dt=-\frac{1}{2in\pi}e^{-int}\Big|_{0}^{\pi}=-\frac{1}{2in\pi}\left(e^{-in\pi}-1\right)=\begin{cases}\frac{1}{in\pi} & \text{when }n=2k+1\\ 0 & \text{when }n=2k\end{cases}$$
and
$$f(t)=\frac{1}{2}+\sum_{n\in\{\dots,-3,-1,1,3,\dots\}}\frac{1}{in\pi}e^{int}.$$
The exponential series is equivalent to the trigonometric:
$$f(t)=\frac{1}{2}+\sum_{n\in\{\dots,-3,-1,1,3,\dots\}}\frac{1}{in\pi}e^{int}=\frac{1}{2}+\frac{2}{\pi}\frac{e^{it}-e^{-it}}{2i}+\frac{2}{3\pi}\frac{e^{i3t}-e^{-i3t}}{2i}+\dots$$
$$=\frac{1}{2}+\frac{2}{\pi}\sin(t)+\frac{2}{3\pi}\sin(3t)+\dots=\frac{1}{2}+\frac{2}{\pi}\sum_{n\in\{1,3,5,\dots\}}\frac{\sin(nt)}{n}.$$
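The coefficients $c_n$ can again be checked numerically with midpoint sums; the grid size is an arbitrary choice and NumPy is assumed available.

```python
import numpy as np

# Midpoint-rule evaluation of (11.11) for the square wave with L = pi:
# expect c_0 = 1/2, c_1 = 1/(i*pi), c_2 = 0.
N = 100_000
dt = 2 * np.pi / N
t = (np.arange(N) + 0.5) * dt
f = (t < np.pi).astype(float)

def c(n):
    return np.sum(f * np.exp(-1j * n * t)) * dt / (2 * np.pi)

print(c(0), c(1), c(2))
```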

11.1.18 Theorem: Let $f(t)$ be periodic with period $T$ and $f(t)$, $\frac{df}{dt}$ be piecewise continuous. Then, at the continuity points of $f(t)$ and $\frac{df}{dt}$, the derivative (resp. integral) of $f(t)$ equals the term-by-term derivative (resp. integral) of the corresponding (trigonometric / exponential) Fourier series.
11.1.19 Theorem (Parseval): Let $f(t)$, $g(t)$ satisfy the Dirichlet conditions with half-period $L$, and Fourier series
$$f(t)=\frac{a_{0}}{2}+\sum_{n=1}^{\infty}a_{n}\cos\frac{n\pi t}{L}+\sum_{n=1}^{\infty}b_{n}\sin\frac{n\pi t}{L},$$
$$g(t)=\frac{p_{0}}{2}+\sum_{n=1}^{\infty}p_{n}\cos\frac{n\pi t}{L}+\sum_{n=1}^{\infty}q_{n}\sin\frac{n\pi t}{L}.$$
Then we have
$$\frac{1}{L}\int_{-L}^{L}f(t)g(t)\,dt=\frac{a_{0}p_{0}}{2}+\sum_{n=1}^{\infty}\left(a_{n}p_{n}+b_{n}q_{n}\right), \tag{11.12}$$
$$\frac{1}{L}\int_{-L}^{L}\left(f(t)\right)^{2}dt=\frac{a_{0}^{2}}{2}+\sum_{n=1}^{\infty}\left(a_{n}^{2}+b_{n}^{2}\right). \tag{11.13}$$
Also, if $f(t)$, $g(t)$ have exponential Fourier series
$$f(t)=\sum_{n=-\infty}^{\infty}c_{n}e^{in\pi t/L},\qquad g(t)=\sum_{n=-\infty}^{\infty}r_{n}e^{in\pi t/L},$$
then
$$\frac{1}{2L}\int_{-L}^{L}f(t)\overline{g(t)}\,dt=\sum_{n=-\infty}^{\infty}c_{n}\overline{r_{n}}, \tag{11.14}$$
$$\frac{1}{2L}\int_{-L}^{L}|f(t)|^{2}\,dt=\sum_{n=-\infty}^{\infty}|c_{n}|^{2}. \tag{11.15}$$
Proof. We only prove the exponential forms (11.14)-(11.15). We have
$$\int_{-L}^{L}f(t)\overline{g(t)}\,dt=\int_{-L}^{L}\left(\sum_{n=-\infty}^{\infty}c_{n}e^{i\frac{n\pi t}{L}}\right)\overline{\left(\sum_{m=-\infty}^{\infty}r_{m}e^{i\frac{m\pi t}{L}}\right)}\,dt$$
$$=\int_{-L}^{L}\left(\sum_{n=-\infty}^{\infty}c_{n}e^{i\frac{n\pi t}{L}}\right)\left(\sum_{m=-\infty}^{\infty}\overline{r_{m}}e^{-i\frac{m\pi t}{L}}\right)dt$$
$$=\sum_{n=-\infty}^{\infty}\sum_{m=-\infty}^{\infty}c_{n}\overline{r_{m}}\left(\int_{-L}^{L}e^{i\frac{n\pi t}{L}}e^{-i\frac{m\pi t}{L}}\,dt\right). \tag{11.16}$$
We see that
$$\int_{-L}^{L}e^{i\frac{n\pi t}{L}}e^{-i\frac{m\pi t}{L}}\,dt=\begin{cases}2L & \text{when }n=m\\ 0 & \text{when }n\neq m.\end{cases} \tag{11.17}$$
Hence (11.16) yields
$$\int_{-L}^{L}f(t)\overline{g(t)}\,dt=2L\sum_{n=-\infty}^{\infty}c_{n}\overline{r_{n}}$$
which is equivalent to (11.14). For (11.15) let $f(t)=g(t)$; then $f(t)\overline{g(t)}=|f(t)|^{2}$ and $c_{n}\overline{r_{n}}=|c_{n}|^{2}$.
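For the square wave of Example 11.1.11, identity (11.15) can be confirmed by summing the coefficient series directly; the truncation point below is an arbitrary choice.

```python
import numpy as np

# Parseval check (11.15) for the square wave: (1/(2L)) * int |f|^2 = 1/2,
# while sum |c_n|^2 = |c_0|^2 + 2 * (sum over odd n > 0 of 1/(n*pi)^2).
lhs = 0.5
n = np.arange(1, 200_001, 2)                 # odd n; factor 2 covers n < 0
rhs = 0.25 + 2 * np.sum(1.0 / (n * np.pi) ** 2)
print(lhs, rhs)
```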
11.1.20 The proof of Theorem 11.1.19 reminds us of inner products and vector spaces. Let us show the connection between these and Fourier analysis.
11.1.21 Notation. We denote by $\mathcal{F}_{L}$ the set of functions which satisfy the Dirichlet conditions (for a fixed $L$).
11.1.22 Theorem: $\mathcal{F}_{L}$ is a vector space.
Proof. Let $f(t),g(t)\in\mathcal{F}_{L}$ and $\kappa,\lambda\in\mathbb{C}$. Define $h(t)=\kappa f(t)+\lambda g(t)$.
1. Since $f(t)$, $g(t)$ are defined in $(-L,L)$, the same is true of $h(t)$.
2. Since $f(t)$, $g(t)$ and $f'(t)$, $g'(t)$ are piecewise continuous in $(-L,L)$, the same is true for $h(t)$, $h'(t)$.
3. Since $f(t)$, $g(t)$ are periodic with period $2L$, the same is true of $h(t)$.
Hence $h(t)=\kappa f(t)+\lambda g(t)\in\mathcal{F}_{L}$, i.e., $\mathcal{F}_{L}$ is a vector space.


11.1.23 Definition. We say $\{\dots,f_{-1}(t),f_{0}(t),f_{1}(t),\dots\}\subseteq\mathcal{F}_{L}$ is linearly independent iff
$$\left(\forall t:\ \sum_{n=-\infty}^{\infty}\kappa_{n}f_{n}(t)=0\right)\Rightarrow\left(\forall n\in\mathbb{Z}:\ \kappa_{n}=0\right).$$
11.1.24 Definition. We say $\{\dots,f_{-1}(t),f_{0}(t),f_{1}(t),\dots\}\subseteq\mathcal{F}_{L}$ is a basis of $\mathcal{F}_{L}$ iff (i) $\{\dots,f_{-1}(t),f_{0}(t),f_{1}(t),\dots\}$ is linearly independent and (ii) every $f(t)\in\mathcal{F}_{L}$ can be written as a linear combination of the $f_{n}(t)$'s:
$$f(t)=\sum_{n=-\infty}^{\infty}\kappa_{n}f_{n}(t).$$
11.1.25 Definition. We say that $\{\dots,f_{-1}(t),f_{0}(t),f_{1}(t),\dots\}\subseteq\mathcal{F}_{L}$ is orthogonal iff
$$\forall m\neq n:\ \int_{-L}^{L}f_{m}(t)\overline{f_{n}(t)}\,dt=0.$$
11.1.26 Theorem. The set $\left\{e^{i\frac{n\pi t}{L}}\right\}_{n=-\infty}^{\infty}$ is an orthogonal basis of $\mathcal{F}_{L}$.
Proof. With the above definitions we see the following.
1. $\left\{e^{i\frac{n\pi t}{L}}\right\}_{n=-\infty}^{\infty}$ is a subset of $\mathcal{F}_{L}$ and is orthogonal; we showed that in (11.17) of Theorem 11.1.19.
2. Since $\left\{e^{i\frac{n\pi t}{L}}\right\}_{n=-\infty}^{\infty}$ is orthogonal it is also linearly independent.
3. From Theorem 11.1.16 every $f(t)\in\mathcal{F}_{L}$ can be written as
$$f(t)=\sum_{n=-\infty}^{\infty}c_{n}e^{i\frac{n\pi t}{L}},$$
a linear combination of $\left\{e^{i\frac{n\pi t}{L}}\right\}_{n=-\infty}^{\infty}$ elements; hence $\left\{e^{i\frac{n\pi t}{L}}\right\}_{n=-\infty}^{\infty}$ is a basis of $\mathcal{F}_{L}$.
11.1.27 Theorem. The set $\{\cos nt\}_{n=0}^{\infty}\cup\{\sin nt\}_{n=1}^{\infty}$ is an orthogonal basis of $\mathcal{F}_{L}$.
Proof. Similar to that of Theorem 11.1.26.

11.2 Solved Problems


11.3 Unsolved Problems
1. Compute the trigonometric Fourier series of
$$f(x)=\begin{cases}2 & \text{for }-\pi<x<0\\ 1 & \text{for }0<x<\pi.\end{cases}$$
Ans. $f(x)=\frac{3}{2}-\frac{2}{\pi}\sum_{n=0}^{\infty}\frac{1}{2n+1}\sin[(2n+1)x]$.
2. Compute the trigonometric Fourier series of
$$f(x)=\begin{cases}0 & \text{for }-5<x<0\\ 3 & \text{for }0<x<5.\end{cases}$$
Ans. $f(x)=\frac{3}{2}+\sum_{n=0}^{\infty}\frac{6}{(2n+1)\pi}\sin\frac{(2n+1)\pi x}{5}$.
3. Compute the trigonometric Fourier series of $f(x)=\cos x$ with period $2\pi$.
Ans. $f(x)=\cos x$.
4. Compute the trigonometric Fourier series of $f(x)=|x|$ ($-1<x<1$).
Ans. $f(x)=\frac{1}{2}-\frac{4}{\pi^{2}}\sum_{n=0}^{\infty}\frac{\cos[(2n+1)\pi x]}{(2n+1)^{2}}$.
5. Compute the trigonometric Fourier series of
$$f(x)=\begin{cases}1 & \text{for }0<x<2\\ -1 & \text{for }2<x<4.\end{cases}$$
Ans. $\frac{4}{\pi}\sum_{n=0}^{\infty}\frac{\sin\frac{(2n+1)\pi x}{2}}{2n+1}$.
6. Compute the trigonometric Fourier series of $f(x)=4x$ ($0<x<10$).
Ans. $20-\frac{40}{\pi}\sum_{n=1}^{\infty}\frac{\sin\left(\frac{n\pi x}{5}\right)}{n}$.
7. Compute the trigonometric Fourier series of $f(x)=\cos^{2}x$.
Ans. $\frac{1}{2}+\frac{1}{2}\cos 2x$.
8. Compute the exponential Fourier series of $f(x)=1$ ($0<x<2\pi$).
Ans. $c_{0}=1$, $c_{n}=0$ for $n\neq 0$.
9. Compute the exponential Fourier series of $f(x)=x^{2}$ ($0<x<2\pi$).
Ans. $c_{0}=\frac{4\pi^{2}}{3}$ and, for $n\neq 0$, $c_{n}=\frac{2}{n^{2}}+i\frac{2\pi}{n}$.
10. Compute the exponential Fourier series of
$$f(x)=\begin{cases}1 & \text{for }0<x<2\\ -1 & \text{for }2<x<4.\end{cases}$$
Ans. $c_{n}=\frac{2}{in\pi}$ for odd $n$; $c_{n}=0$ for even $n$.
11. Compute the exponential Fourier series of $f(x)=\cos x$ ($0<x<2\pi$).
Ans. $c_{1}=c_{-1}=\frac{1}{2}$, all the other coefficients are 0.
12. Compute the trigonometric and exponential Fourier series of $f(t)=t$ (on $[-\pi,\pi]$).
Ans.
$$f(t)=2\sin(t)-\sin(2t)+\tfrac{2}{3}\sin(3t)-\tfrac{1}{2}\sin(4t)+\tfrac{2}{5}\sin(5t)+\dots,$$
$$f(t)=-ie^{it}+ie^{-it}+\tfrac{i}{2}e^{2it}-\tfrac{i}{2}e^{-2it}-\tfrac{i}{3}e^{3it}+\tfrac{i}{3}e^{-3it}+\tfrac{i}{4}e^{4it}-\tfrac{i}{4}e^{-4it}+\dots$$

13. Compute the trigonometric and exponential Fourier series of $f(t)=t^{2}$ (on $[-\pi,\pi]$).
Ans.
$$f(t)=\tfrac{1}{3}\pi^{2}-4\cos(t)+\cos(2t)-\tfrac{4}{9}\cos(3t)+\tfrac{1}{4}\cos(4t)-\dots$$
$$f(t)=\tfrac{1}{3}\pi^{2}-2e^{it}-2e^{-it}+\tfrac{1}{2}e^{2it}+\tfrac{1}{2}e^{-2it}-\tfrac{2}{9}e^{3it}-\tfrac{2}{9}e^{-3it}+\dots$$
14. Compute the trigonometric and exponential Fourier series of $f(t)=t^{2}+t$ (on $[-\pi,\pi]$).
Ans.
$$f(t)=\tfrac{1}{3}\pi^{2}-4\cos(t)+2\sin(t)+\cos(2t)-\sin(2t)-\tfrac{4}{9}\cos(3t)+\tfrac{2}{3}\sin(3t)+\dots$$
$$f(t)=\tfrac{1}{3}\pi^{2}-(2+i)e^{it}-(2-i)e^{-it}+\left(\tfrac{1}{2}+\tfrac{i}{2}\right)e^{2it}+\left(\tfrac{1}{2}-\tfrac{i}{2}\right)e^{-2it}-\dots$$
15. Compute the trigonometric and exponential Fourier series of $f(t)=\begin{cases}0 & -\pi<t<0\\ 3 & 0<t<\pi.\end{cases}$
Ans.
$$f(t)=\frac{3}{2}+6\frac{\sin(t)}{\pi}+2\frac{\sin(3t)}{\pi}+\frac{6}{5}\frac{\sin(5t)}{\pi}+\dots$$
$$f(t)=\frac{3}{2}-\frac{3ie^{it}}{\pi}+\frac{3ie^{-it}}{\pi}-\frac{ie^{3it}}{\pi}+\frac{ie^{-3it}}{\pi}-\frac{3ie^{5it}}{5\pi}+\frac{3ie^{-5it}}{5\pi}+\dots$$
16. Compute the trigonometric and exponential Fourier series of $f(t)=\begin{cases}0 & -1<t<0\\ 1 & 0<t<1.\end{cases}$
Ans.
$$f(t)=\frac{1}{2}+2\frac{\sin(\pi t)}{\pi}+\frac{2}{3}\frac{\sin(3\pi t)}{\pi}+\frac{2}{5}\frac{\sin(5\pi t)}{\pi}+\dots$$
$$f(t)=\frac{1}{2}-\frac{ie^{i\pi t}}{\pi}+\frac{ie^{-i\pi t}}{\pi}-\frac{ie^{3i\pi t}}{3\pi}+\frac{ie^{-3i\pi t}}{3\pi}-\frac{ie^{5i\pi t}}{5\pi}+\frac{ie^{-5i\pi t}}{5\pi}+\dots$$

17. Compute the trigonometric and exponential Fourier series of $f(t)=\begin{cases}0 & -5<t<0\\ 3 & 0<t<5.\end{cases}$
Ans.
$$f(t)=\frac{3}{2}+6\frac{\sin(\pi t/5)}{\pi}+2\frac{\sin(3\pi t/5)}{\pi}+\frac{6}{5}\frac{\sin(\pi t)}{\pi}+\dots$$
$$f(t)=\frac{3}{2}-\frac{3ie^{i\pi t/5}}{\pi}+\frac{3ie^{-i\pi t/5}}{\pi}-\frac{ie^{3i\pi t/5}}{\pi}+\frac{ie^{-3i\pi t/5}}{\pi}-\frac{3ie^{i\pi t}}{5\pi}+\frac{3ie^{-i\pi t}}{5\pi}+\dots$$
18. Compute the trigonometric and exponential Fourier series of $f(t)=(t-1)^{2}$ ($-10<t<10$).
Ans.
$$f(t)=\frac{103}{3}-400\frac{\cos(\pi t/10)}{\pi^{2}}-40\frac{\sin(\pi t/10)}{\pi}+100\frac{\cos(\pi t/5)}{\pi^{2}}+20\frac{\sin(\pi t/5)}{\pi}-\dots$$
$$f(t)=\frac{103}{3}+\left(-\frac{200}{\pi^{2}}+\frac{20i}{\pi}\right)e^{i\pi t/10}+\left(-\frac{200}{\pi^{2}}-\frac{20i}{\pi}\right)e^{-i\pi t/10}+\dots$$
19. Compute the trigonometric and exponential Fourier series of $f(t)=|\sin t|$ ($-\pi<t<\pi$).
Ans.
$$f(t)=\frac{2}{\pi}-\frac{4}{3}\frac{\cos(2t)}{\pi}-\frac{4}{15}\frac{\cos(4t)}{\pi}-\dots$$
$$f(t)=\frac{2}{\pi}-\frac{2}{3}\frac{e^{2it}}{\pi}-\frac{2}{3}\frac{e^{-2it}}{\pi}-\frac{2}{15}\frac{e^{4it}}{\pi}-\frac{2}{15}\frac{e^{-4it}}{\pi}+\dots$$
20. Compute the trigonometric and exponential Fourier series of $f(t)=e^{|t|}$ ($-\pi<t<\pi$).
Ans.
$$f(t)=\frac{e^{\pi}-1}{\pi}+\frac{(-e^{\pi}-1)\cos(t)}{\pi}+\frac{2(e^{\pi}-1)\cos(2t)}{5\pi}+\frac{(-e^{\pi}-1)\cos(3t)}{5\pi}+\dots$$
$$f(t)=\frac{e^{\pi}-1}{\pi}+\frac{(-e^{\pi}-1)e^{it}}{2\pi}+\frac{(-e^{\pi}-1)e^{-it}}{2\pi}+\frac{(e^{\pi}-1)e^{2it}}{5\pi}+\frac{(e^{\pi}-1)e^{-2it}}{5\pi}+\dots$$

11.4 Advanced Problems

1. Find necessary and sufficient conditions so that the exponential Fourier series of $f(t)$ has only real (only imaginary) terms.
2. Show that
$$f(t)=\frac{a_{0}}{2}+\sum_{n=1}^{\infty}a_{n}\cos\frac{n\pi t}{T}+\sum_{n=1}^{\infty}b_{n}\sin\frac{n\pi t}{T}$$
can also be written as
$$f(t)=\sum_{n=0}^{\infty}p_{n}\cos\left(\frac{n\pi t}{T}+\varphi_{n}\right).$$
3. Prove that $\sum_{n=1}^{\infty}\frac{\sin(na)\sin(nb)}{n^{2}}=\frac{a(\pi-b)}{2}$.
4. Prove that $\sum_{n=0}^{\infty}\frac{1}{(2n+1)^{4}}=\frac{\pi^{4}}{96}$.
5. Prove that $\sum_{n=1}^{\infty}\frac{1}{n^{4}}=\frac{\pi^{4}}{90}$.
6. Compute $\sum_{n=1}^{\infty}\frac{1}{n^{2}}$.
7. Compute $\sum_{n=1}^{\infty}\frac{1}{4n^{2}-1}$.
8. Suppose $f(t)=\sum_{n=-\infty}^{\infty}a_{n}e^{in2\pi t}$, $g(t)=\sum_{n=-\infty}^{\infty}b_{n}e^{in2\pi t}$ and
$$p(t)=\int_{0}^{2\pi}f(t-\tau)g(\tau)\,d\tau=\sum_{n=-\infty}^{\infty}c_{n}e^{in2\pi t}.$$
What is the relationship between $(a_{n})_{n=-\infty}^{\infty}$, $(b_{n})_{n=-\infty}^{\infty}$, $(c_{n})_{n=-\infty}^{\infty}$?
9. If $g(t)=\sum_{n=-\infty}^{\infty}a_{n}e^{in2\pi t}$, what is the exponential Fourier series of $f(t)=tg(t)$?
10. Prove that $f(t)=\sum_{n=1}^{\infty}\frac{\sin(n^{2}t)}{n^{2}}$ is, at every $t\in\mathbb{R}$, continuous but not differentiable.
12. Fourier Transform

The Fourier transform is the extension of the Fourier series when the finite interval $[-L,L]$ is replaced by $(-\infty,\infty)$.

12.1 Theory and Examples


12.1.1 For periodic functions, with period $T$, we have
$$f(t)=\sum_{n=-\infty}^{\infty}c_{n}e^{i\frac{n2\pi}{T}t},\qquad c_{n}=\frac{1}{T}\int_{-T/2}^{T/2}f(t)e^{-i\frac{n2\pi}{T}t}\,dt. \tag{12.1}$$
What happens when $T\to\infty$? Intuitively we see the following.
1. $f(t)$ is not periodic.
2. $\frac{2\pi}{T}\to 0$.
3. The sum of (12.1) tends to an integral.
4. For (12.1) to be meaningful, the integral must be well defined; this is not guaranteed when the integration limits tend to $\pm\infty$.
12.1.2 Theorem. Suppose that $f(t)$
1. is absolutely integrable, i.e., $\int_{-\infty}^{\infty}|f(t)|\,dt<\infty$;
2. satisfies, for every $T$, the Dirichlet conditions in every finite interval $\left[-\frac{T}{2},\frac{T}{2}\right]\subseteq(-\infty,\infty)$.
Then
1. at every continuity point $t$ of $f(t)$ we have
$$f(t)=\frac{1}{2\pi}\int_{-\infty}^{\infty}F(\omega)e^{i\omega t}\,d\omega, \tag{12.2}$$
where
$$F(\omega)=\int_{-\infty}^{\infty}f(t)e^{-i\omega t}\,dt; \tag{12.3}$$
2. at every discontinuity point $t$ of $f(t)$ we have:
$$\frac{1}{2}\left(\lim_{\tau\to t^{-}}f(\tau)+\lim_{\tau\to t^{+}}f(\tau)\right)=\frac{1}{2\pi}\int_{-\infty}^{\infty}F(\omega)e^{i\omega t}\,d\omega. \tag{12.4}$$
Proof. The following "proof" is not rigorous but captures the basic ideas. For a given $f(t)$ choose some $T$ and for each $t\in\left[-\frac{T}{2},\frac{T}{2}\right]$ define $f_{T}(t)=f(t)$. Then
$$\forall t\in\left[-\frac{T}{2},\frac{T}{2}\right]:\ f_{T}(t)=\sum_{n=-\infty}^{\infty}c(n)e^{i\frac{2\pi n}{T}t}, \tag{12.5}$$
where
$$c(n)=\frac{1}{T}\int_{-T/2}^{T/2}f_{T}(t)e^{-i\frac{2\pi n}{T}t}\,dt. \tag{12.6}$$
Letting $\delta\omega=\frac{2\pi}{T}$ and $\omega=n\,\delta\omega=\frac{2\pi n}{T}$ we get from (12.5) that
$$f_{T}(t)=\sum_{n=-\infty}^{\infty}c(n)e^{in\delta\omega t}=\sum_{n=-\infty}^{\infty}\left(\frac{1}{T}\int_{-T/2}^{T/2}f_{T}(t)e^{-in\delta\omega t}\,dt\right)e^{in\delta\omega t}$$
$$=\frac{1}{2\pi}\sum_{\omega\in\left\{\dots,-\frac{2\pi}{T},0,\frac{2\pi}{T},\dots\right\}}\left(\int_{-T/2}^{T/2}f_{T}(t)e^{-i\omega t}\,dt\right)e^{i\omega t}\,\delta\omega=\frac{1}{2\pi}\sum_{\omega\in\left\{\dots,-\frac{2\pi}{T},0,\frac{2\pi}{T},\dots\right\}}F_{T}(\omega)e^{i\omega t}\,\delta\omega,$$
where
$$F_{T}(\omega)=\int_{-T/2}^{T/2}f_{T}(t)e^{-i\omega t}\,dt.$$
Taking limits (and assuming they exist)
$$f(t)=\lim_{T\to\infty}f_{T}(t), \tag{12.7}$$
$$F(\omega)=\lim_{T\to\infty}F_{T}(\omega)=\int_{-\infty}^{\infty}f(t)e^{-i\omega t}\,dt. \tag{12.8}$$
Substituting (12.7)-(12.8) in (12.5) and assuming that, as $T\to\infty$, the sum of (12.5) tends to an integral, we get
$$f(t)=\frac{1}{2\pi}\int_{-\infty}^{\infty}F(\omega)e^{i\omega t}\,d\omega.$$

12.1.3 Definition. We define $F(\omega)$, the Fourier transform of $f(t)$, by
$$F(\omega):=\int_{-\infty}^{\infty}f(t)e^{-i\omega t}\,dt.$$
12.1.4 Roughly speaking (and ignoring discontinuity points) we have
$$F(\omega)=\int_{-\infty}^{\infty}f(t)e^{-i\omega t}\,dt\ \Leftrightarrow\ f(t)=\frac{1}{2\pi}\int_{-\infty}^{\infty}F(\omega)e^{i\omega t}\,d\omega. \tag{12.9}$$
From (12.9) we see that the Fourier transform is invertible.

12.1.5 Notation. We write
$$F(f(t))=F(\omega)\quad\text{and}\quad F^{-1}(F(\omega))=f(t).$$
12.1.6 Theorem. Suppose that $f(t)$
1. is absolutely integrable, i.e., $\int_{-\infty}^{\infty}|f(t)|\,dt<\infty$;
2. satisfies the Dirichlet conditions in every finite interval $\left[-\frac{T}{2},\frac{T}{2}\right]\subseteq(-\infty,\infty)$.
Then
1. at every continuity point $t$ of $f(t)$ we have
$$f(t)=\int_{0}^{\infty}\left(a(\omega)\cos\omega t+b(\omega)\sin\omega t\right)d\omega, \tag{12.10}$$
where
$$a(\omega)=\frac{1}{\pi}\int_{-\infty}^{\infty}f(t)\cos\omega t\,dt,\qquad b(\omega)=\frac{1}{\pi}\int_{-\infty}^{\infty}f(t)\sin\omega t\,dt; \tag{12.11}$$
2. at every discontinuity point $t$ of $f(t)$ we have:
$$\frac{1}{2}\left(\lim_{\tau\to t^{-}}f(\tau)+\lim_{\tau\to t^{+}}f(\tau)\right)=\int_{0}^{\infty}\left(a(\omega)\cos\omega t+b(\omega)\sin\omega t\right)d\omega. \tag{12.12}$$
Proof. It follows immediately from Theorem 12.1.2.

12.1.7 Definition. We define $F_{C}(\omega)$, the Fourier cosine transform of $f(t)$, by
$$F_{C}(\omega):=\int_{-\infty}^{\infty}f(t)\cos(\omega t)\,dt$$
and $F_{S}(\omega)$, the Fourier sine transform of $f(t)$, by
$$F_{S}(\omega):=\int_{-\infty}^{\infty}f(t)\sin(\omega t)\,dt.$$
We write
$$F_{C}(f(t))=F_{C}(\omega),\qquad F_{C}^{-1}(F_{C}(\omega))=f(t)$$
and
$$F_{S}(f(t))=F_{S}(\omega),\qquad F_{S}^{-1}(F_{S}(\omega))=f(t).$$
12.1.8 Intuitively, (12.10) is the "limit" of the Fourier series and (12.11) is the "limit" of the formula for the Fourier coefficients (in both cases, when $T\to\infty$).
12.1.9 Note that (12.10)-(12.11) can also be written as
$$f(t)=\frac{1}{2\pi}\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}f(\tau)\cos\left(\omega(t-\tau)\right)d\tau\,d\omega.$$

12.1.10 Example: To compute the Fourier transform of
$$f(t)=\begin{cases}1 & \text{when }|t|<a\\ 0 & \text{when }|t|>a\end{cases}$$
we have
$$F(\omega)=\int_{-\infty}^{\infty}f(t)e^{-i\omega t}\,dt=\int_{-a}^{a}e^{-i\omega t}\,dt=\left.\frac{e^{-i\omega t}}{-i\omega}\right|_{t=-a}^{t=a}=\frac{e^{-i\omega a}-e^{i\omega a}}{-i\omega}=2\frac{\sin(\omega a)}{\omega}=2a\,\mathrm{sinc}(\omega a).$$
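This transform is easy to confirm with direct numerical quadrature; the value a = 1.5 and the test frequencies below are arbitrary choices.

```python
import numpy as np

# Midpoint quadrature of the integral of e^{-i w t} over (-a, a),
# compared with 2*sin(w*a)/w.
a = 1.5
N = 200_000
dt = 2 * a / N
t = -a + (np.arange(N) + 0.5) * dt

for w in (0.5, 1.0, 3.0):
    Fw = np.sum(np.exp(-1j * w * t)) * dt
    print(w, Fw.real, 2 * np.sin(w * a) / w)
```

The imaginary part of the quadrature is (numerically) zero, as expected for an even real pulse.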

12.1.11 Example: To compute the Fourier transform of $\mathrm{Dirac}(t-t_{0})$ we have
$$F(\mathrm{Dirac}(t-t_{0}))=\int_{-\infty}^{\infty}\mathrm{Dirac}(t-t_{0})e^{-i\omega t}\,dt=e^{-i\omega t_{0}}.$$

12.1.12 Example. To compute the Fourier transform of $f(t)=e^{-|t|}$:
$$F\left(e^{-|t|}\right)=\int_{-\infty}^{\infty}e^{-|t|}e^{-i\omega t}\,dt=\int_{0}^{\infty}e^{-t}e^{-i\omega t}\,dt+\int_{-\infty}^{0}e^{t}e^{-i\omega t}\,dt.$$
Next (with $a=1+i\omega$)
$$\int_{0}^{\infty}e^{-t}e^{-i\omega t}\,dt=\int_{0}^{\infty}e^{-at}\,dt=-\frac{1}{a}e^{-at}\Big|_{t=0}^{t=\infty}=\frac{1}{a}=\frac{1}{1+i\omega}$$
(since $\mathrm{Re}(a)=1>0$, so $\lim_{t\to\infty}e^{-at}=0$). Similarly
$$\int_{-\infty}^{0}e^{t}e^{-i\omega t}\,dt=\frac{1}{1-i\omega},$$
hence
$$F\left(e^{-|t|}\right)=\frac{1}{1-i\omega}+\frac{1}{1+i\omega}=\frac{2}{\omega^{2}+1}.$$
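Again this can be confirmed numerically; truncating the infinite integration range at |t| = 40 (where e^{−|t|} is negligible) and the sample frequencies are arbitrary choices.

```python
import numpy as np

# Midpoint quadrature of the integral of e^{-|t|} e^{-i w t},
# compared with 2/(1 + w^2).
T = 40.0
N = 400_000
dt = 2 * T / N
t = -T + (np.arange(N) + 0.5) * dt

for w in (0.0, 1.0, 2.5):
    Fw = np.sum(np.exp(-np.abs(t)) * np.exp(-1j * w * t)) * dt
    print(w, Fw.real, 2 / (1 + w**2))
```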
12.1.13 Theorem (Linearity). Suppose $f(t)$, $g(t)$ satisfy the conditions of 12.1.2. Then
$$F(\kappa f+\lambda g)=\kappa F(f)+\lambda F(g).$$
Proof. Obvious.
12.1.14 Theorem (Parseval). Suppose $f(t)$, $g(t)$ satisfy the conditions of 12.1.2 and have Fourier transforms $F(\omega)$, $G(\omega)$. Then
$$\int_{-\infty}^{\infty}f(t)\overline{g(t)}\,dt=\frac{1}{2\pi}\int_{-\infty}^{\infty}F(\omega)\overline{G(\omega)}\,d\omega, \tag{12.13}$$
$$\int_{-\infty}^{\infty}|f(t)|^{2}\,dt=\frac{1}{2\pi}\int_{-\infty}^{\infty}|F(\omega)|^{2}\,d\omega. \tag{12.14}$$
Proof. We have
$$\int_{-\infty}^{\infty}f(t)\overline{g(t)}\,dt=\int_{-\infty}^{\infty}f(t)\,\overline{\left(\frac{1}{2\pi}\int_{-\infty}^{\infty}G(\omega)e^{i\omega t}\,d\omega\right)}\,dt$$
$$=\frac{1}{2\pi}\int_{-\infty}^{\infty}f(t)\left(\int_{-\infty}^{\infty}\overline{G(\omega)}e^{-i\omega t}\,d\omega\right)dt$$
$$=\frac{1}{2\pi}\int_{-\infty}^{\infty}\overline{G(\omega)}\left(\int_{-\infty}^{\infty}f(t)e^{-i\omega t}\,dt\right)d\omega=\frac{1}{2\pi}\int_{-\infty}^{\infty}\overline{G(\omega)}F(\omega)\,d\omega.$$
This proves (12.13); then (12.14) follows, taking $g(t)=f(t)$.
12.1.15 Example. To compute the integral $\int_{-\infty}^{\infty}\frac{dt}{\left(1+t^{2}\right)^{2}}$ we have
$$F\left(\frac{1}{1+t^{2}}\right)=\pi e^{-|\omega|}$$
and, letting $f(t)=\frac{1}{1+t^{2}}$, we get
$$\int_{-\infty}^{\infty}\frac{dt}{\left(1+t^{2}\right)^{2}}=\int_{-\infty}^{\infty}|f(t)|^{2}\,dt=\frac{1}{2\pi}\int_{-\infty}^{\infty}\pi^{2}e^{-2|\omega|}\,d\omega=\pi\int_{0}^{\infty}e^{-2\omega}\,d\omega=\frac{\pi}{2}.$$

12.1.16 Theorem (Duality). Suppose $f(t)$ satisfies the conditions of 12.1.2 and has Fourier transform $F(\omega)$. Then
$$F(F(t))=2\pi f(-\omega).$$
Proof. We have
$$F(f(t))=F(\omega)=\int_{-\infty}^{\infty}f(t)e^{-i\omega t}\,dt, \tag{12.15}$$
$$F^{-1}(F(\omega))=f(t)=\frac{1}{2\pi}\int_{-\infty}^{\infty}F(\omega)e^{i\omega t}\,d\omega. \tag{12.16}$$
Substituting in (12.16) $t$ with $-t$ we get
$$2\pi f(-t)=\int_{-\infty}^{\infty}F(\omega)e^{-i\omega t}\,d\omega.$$
Exchanging $t$ and $\omega$ we get
$$2\pi f(-\omega)=\int_{-\infty}^{\infty}F(t)e^{-i\omega t}\,dt=F(F(t)).$$

12.1.17 Example: To compute the Fourier transform of $e^{i\omega_{0}t}$: we know that $F(\mathrm{Dirac}(t-t_{0}))=e^{-i\omega t_{0}}$ and so $F(\mathrm{Dirac}(t+t_{0}))=e^{i\omega t_{0}}$. By the duality theorem
$$F\left(e^{i\omega_{0}t}\right)=2\pi\,\mathrm{Dirac}(-\omega+\omega_{0})=2\pi\,\mathrm{Dirac}(\omega-\omega_{0}).$$
Or, we can take
$$F^{-1}\left(2\pi\,\mathrm{Dirac}(\omega-\omega_{0})\right)=\frac{1}{2\pi}\int_{-\infty}^{\infty}2\pi\,\mathrm{Dirac}(\omega-\omega_{0})e^{i\omega t}\,d\omega=e^{i\omega_{0}t}.$$
12.1.18 Nota Bene! From this (with $\omega_{0}=0$) follows that
$$F(1)=2\pi\,\mathrm{Dirac}(\omega).$$
What is $F(\mathrm{Heaviside}(t))$? We will answer this a little later.


12.1.19 Example: To compute the Fourier transforms of $\cos(\omega_{0}t)$, $\sin(\omega_{0}t)$, we use
$$F(\cos(\omega_{0}t))=F\left(\frac{e^{i\omega_{0}t}+e^{-i\omega_{0}t}}{2}\right)=\pi\left(\mathrm{Dirac}(\omega+\omega_{0})+\mathrm{Dirac}(\omega-\omega_{0})\right).$$
Similarly we get
$$F(\sin(\omega_{0}t))=F\left(\frac{e^{i\omega_{0}t}-e^{-i\omega_{0}t}}{2i}\right)=\pi i\left(\mathrm{Dirac}(\omega+\omega_{0})-\mathrm{Dirac}(\omega-\omega_{0})\right).$$
12.1.20 These results actually make intuitive sense, because they show the presence of pure frequencies.
12.1.21 Example. To compute the Fourier transform of $f(t)=\frac{1}{1+t^{2}}$, we use
$$F\left(\frac{1}{2}e^{-|t|}\right)=\frac{1}{1+\omega^{2}}.$$
Then, by Theorem 12.1.16, we have
$$F\left(\frac{1}{1+t^{2}}\right)=2\pi\cdot\frac{1}{2}e^{-|-\omega|}=\pi e^{-|\omega|}.$$

12.1.22 Theorem (Scaling). Suppose $f(t)$ satisfies the conditions of 12.1.2 and has Fourier transform $F(\omega)$. Then, for each $a\neq 0$:
$$F(f(at))=\frac{1}{|a|}F\left(\frac{\omega}{a}\right).$$
Proof. Let $a>0$. Then
$$F(f(at))=\frac{1}{a}\int_{-\infty}^{\infty}f(at)e^{-i\frac{\omega}{a}at}\,d(at)=\frac{1}{a}\int_{-\infty}^{\infty}f(s)e^{-i\frac{\omega}{a}s}\,ds=\frac{1}{a}F\left(\frac{\omega}{a}\right).$$
The proof is similar when $a<0$.
12.1.23 Example. To compute the Fourier transform of $f(t)=\frac{1}{4+9t^{2}}$, we have
$$F\left(\frac{1}{4+9t^{2}}\right)=\frac{1}{4}F\left(\frac{1}{1+\frac{9}{4}t^{2}}\right)=\frac{1}{4}F\left(\frac{1}{1+\left(\frac{3}{2}t\right)^{2}}\right).$$
Hence taking $a=\frac{3}{2}$ in Theorem 12.1.22 we get
$$F\left(\frac{1}{4+9t^{2}}\right)=\frac{1}{4}\,\frac{1}{3/2}\,\pi e^{-|2\omega/3|}=\frac{\pi}{6}e^{-|2\omega/3|}.$$

12.1.24 Theorem (Shifting). Suppose $f(t)$ satisfies the conditions of 12.1.2 and has Fourier transform $F(\omega)$. Then
$$F(f(t-t_{0}))=F(\omega)e^{-i\omega t_{0}},\qquad F\left(f(t)e^{i\omega_{0}t}\right)=F(\omega-\omega_{0}).$$
Proof. We have
$$F(f(t-t_{0}))=\int_{-\infty}^{\infty}f(t-t_{0})e^{-i\omega t}\,dt=\int_{-\infty}^{\infty}f(t-t_{0})e^{-i\omega(t-t_{0})}e^{-i\omega t_{0}}\,d(t-t_{0})=F(\omega)e^{-i\omega t_{0}}.$$
Also
$$F\left(f(t)e^{i\omega_{0}t}\right)=\int_{-\infty}^{\infty}f(t)e^{-i\omega t}e^{i\omega_{0}t}\,dt=\int_{-\infty}^{\infty}f(t)e^{-i(\omega-\omega_{0})t}\,dt=F(\omega-\omega_{0}).$$
12.1.25 Example. To compute the Fourier transform of $f(t)=\frac{1}{t^{2}-4t+5}$ we note that
$$t^{2}-4t+5=1+(t-2)^{2}.$$
Hence
$$F\left(\frac{1}{1+t^{2}}\right)=\pi e^{-|\omega|},\qquad F\left(\frac{1}{t^{2}-4t+5}\right)=F\left(\frac{1}{1+(t-2)^{2}}\right)=\pi e^{-|\omega|}e^{-i2\omega}.$$
12.1.26 Theorem (Differentiation). Suppose $f(t)$ satisfies the conditions of 12.1.2 and has Fourier transform $F(\omega)$. Then
$$F\left(\frac{df}{dt}\right)=i\omega F(\omega),\qquad F\left(\int f\,dt\right)=\frac{1}{i\omega}F(\omega),\qquad F(tf(t))=i\frac{dF}{d\omega}.$$
Proof. We have
$$F\left(\frac{df}{dt}\right)=\int_{-\infty}^{\infty}\frac{df}{dt}e^{-i\omega t}\,dt=\int_{-\infty}^{\infty}e^{-i\omega t}\,df$$
$$=f(t)e^{-i\omega t}\Big|_{t=-\infty}^{t=\infty}-\int_{-\infty}^{\infty}f(t)\,d\left(e^{-i\omega t}\right)=f(t)e^{-i\omega t}\Big|_{t=-\infty}^{t=\infty}-\int_{-\infty}^{\infty}f(t)e^{-i\omega t}(-i\omega)\,dt.$$
From $\int_{-\infty}^{\infty}|f(t)|\,dt<\infty$ we conclude that $\lim_{t\to-\infty}f(t)=\lim_{t\to\infty}f(t)=0$; hence
$$F\left(\frac{df}{dt}\right)=i\omega\int_{-\infty}^{\infty}f(t)e^{-i\omega t}\,dt=i\omega F(f)=i\omega F(\omega).$$
For the second part of the theorem let $g(t)=\int f(t)\,dt$; then $\frac{dg}{dt}=f(t)$ and $F\left(\int f\,dt\right)=F(g(t))=G(\omega)$. Now
$$F(\omega)=F(f)=F\left(\frac{dg}{dt}\right)=i\omega G(\omega)\ \Rightarrow\ G(\omega)=\frac{1}{i\omega}F(\omega).$$
The third part is proved similarly to the first.


12.1.27 Example. To compute the Fourier transform of $f(t)=\frac{t}{\left(t^{2}+1\right)^{2}}$, we use
$$\frac{d}{dt}\left(\frac{1}{1+t^{2}}\right)=-\frac{2t}{\left(t^{2}+1\right)^{2}}.$$
Hence
$$F\left(-\frac{2t}{\left(t^{2}+1\right)^{2}}\right)=i\omega F\left(\frac{1}{1+t^{2}}\right)=i\omega\pi e^{-|\omega|}.$$
And so
$$F\left(\frac{t}{\left(t^{2}+1\right)^{2}}\right)=-\frac{1}{2}i\omega F\left(\frac{1}{1+t^{2}}\right)=-\frac{i\omega\pi}{2}e^{-|\omega|}.$$
12.1.28 Example. To compute the Fourier transform of $f(t)=\frac{t}{t^{2}+1}$ we use
$$F\left(\frac{1}{1+t^{2}}\right)=\pi e^{-|\omega|}.$$
Hence
$$F\left(\frac{t}{1+t^{2}}\right)=i\frac{d}{d\omega}\left(\pi e^{-|\omega|}\right).$$
For $\omega>0$ we have $F(\omega)=F\left(\frac{t}{1+t^{2}}\right)=-\pi ie^{-\omega}$; for $\omega<0$ we have $F(\omega)=\pi ie^{\omega}$. What happens at $\omega=0$?
12.1.29 Example: To compute the Fourier transforms of $\mathrm{Heaviside}(t)$ and $\mathrm{Heaviside}(t-t_{0})$, we might try
$$\frac{d\,\mathrm{Heaviside}}{dt}=\mathrm{Dirac}(t)\ \Rightarrow\ i\omega H(\omega)=1\ \Rightarrow\ H(\omega)=\frac{1}{i\omega}.$$
This cannot be right, because $H(\omega)$ is purely imaginary and then $\mathrm{Heaviside}(t)$ should be odd. What we are missing is a constant of integration, which should be applied to $\mathrm{Heaviside}(t)$ to get an odd function. Indeed, we can define
$$s(t)=2\,\mathrm{Heaviside}(t)-1.$$
Now we also have
$$\frac{ds}{dt}=2\,\mathrm{Dirac}(t)\ \Rightarrow\ i\omega S(\omega)=2\ \Rightarrow\ S(\omega)=\frac{2}{i\omega}.$$
This respects oddness. Now
$$\mathrm{Heaviside}(t)=\frac{s(t)+1}{2}\ \Rightarrow\ H(\omega)=\frac{S(\omega)+2\pi\,\mathrm{Dirac}(\omega)}{2}=\frac{1}{i\omega}+\pi\,\mathrm{Dirac}(\omega).$$
Of course this is not a rigorous proof. To verify we must compute
$$F^{-1}\left(\frac{1}{i\omega}+\pi\,\mathrm{Dirac}(\omega)\right)=\frac{1}{2\pi}\int_{-\infty}^{\infty}\left(\frac{1}{i\omega}+\pi\,\mathrm{Dirac}(\omega)\right)e^{i\omega t}\,d\omega$$
$$=\frac{1}{2\pi}\int_{-\infty}^{\infty}\frac{1}{i\omega}e^{i\omega t}\,d\omega+\frac{1}{2\pi}\int_{-\infty}^{\infty}\pi\,\mathrm{Dirac}(\omega)e^{i\omega t}\,d\omega=\frac{1}{2}s(t)+\frac{1}{2}=\mathrm{Heaviside}(t)$$
(where
$$\frac{1}{2\pi}\int_{-\infty}^{\infty}\frac{1}{i\omega}e^{i\omega t}\,d\omega=\frac{1}{2}s(t)$$
requires complex integration). Finally
$$F(\mathrm{Heaviside}(t-t_{0}))=\left(\frac{1}{i\omega}+\pi\,\mathrm{Dirac}(\omega)\right)e^{-i\omega t_{0}}.$$
12.1.30 Definition. The convolution of $f(t)$ and $g(t)$ is defined by
$$f*g=\int_{-\infty}^{\infty}f(\tau)g(t-\tau)\,d\tau.$$
Note the difference from the definition of convolution in the context of Laplace transforms.
12.1.31 Example. Given
$$g(t)=\begin{cases}1 & \text{when }|t|<1\\ 0 & \text{when }|t|>1\end{cases}$$
we compute
$$(g*g)(t)=\int_{-\infty}^{\infty}g(\tau)g(t-\tau)\,d\tau$$
by taking cases.
1. $t<-2$. When $|\tau|<1$ we have $t-\tau<-2+1=-1$, hence $g(\tau)g(t-\tau)=0$; when $|\tau|>1$ we have $g(\tau)=0$, hence again $g(\tau)g(t-\tau)=0$. Hence
$$\forall t\in(-\infty,-2):\ (g*g)(t)=0.$$
2. $t\in(-2,0)$. When $\tau\in(-1,t+1)$ we have $g(\tau)g(t-\tau)=1$; for all other $\tau$ we have $g(\tau)g(t-\tau)=0$. Hence
$$\forall t\in(-2,0):\ (g*g)(t)=\int_{-\infty}^{\infty}g(\tau)g(t-\tau)\,d\tau=\int_{-1}^{t+1}d\tau=t+2.$$
3. $t\in(0,2)$ and $t\in(2,\infty)$ are treated similarly.
4. Finally
$$(g*g)(t)=\begin{cases}0 & \text{when }|t|>2\\ 2+t & \text{when }t\in(-2,0)\\ 2-t & \text{when }t\in(0,2).\end{cases}$$
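The case analysis above can be reproduced with a discrete convolution; this is a sketch (the grid size is arbitrary) in which np.convolve, scaled by the step dt, approximates the convolution integral.

```python
import numpy as np

# Discrete approximation of (g*g)(t) for the box g = 1 on (-1, 1),
# compared with the triangle 2 - |t| on (-2, 2) computed above.
dt = 1e-3
N = 2000
t = (np.arange(N) + 0.5) * dt - 1.0        # midpoints covering (-1, 1)
g = np.ones_like(t)

gg = np.convolve(g, g) * dt                # samples of the convolution
tt = (np.arange(len(gg)) + 1) * dt - 2.0   # abscissas t_j + t_k

err = np.max(np.abs(gg - (2 - np.abs(tt))))
print(err)
```

With this midpoint grid the discrete and exact triangles agree essentially to floating-point precision.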

12.1.32 Theorem (Convolution): Let f (t), g (t) satisfy the conditions of Theorem
12.1.2. Then
F ( f ∗ g) = F ( f ) · F (g) .
Proof. We have
$$F(f*g)=\int_{-\infty}^{\infty}\left(\int_{-\infty}^{\infty}f(\tau)g(t-\tau)\,d\tau\right)e^{-i\omega t}\,dt$$
$$=\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}f(\tau)e^{-i\omega\tau}g(t-\tau)e^{-i\omega(t-\tau)}\,d\tau\,dt$$
$$=\int_{-\infty}^{\infty}f(\tau)e^{-i\omega\tau}\left(\int_{-\infty}^{\infty}g(t-\tau)e^{-i\omega(t-\tau)}\,d(t-\tau)\right)d\tau$$
$$=\int_{-\infty}^{\infty}f(\tau)e^{-i\omega\tau}G(\omega)\,d\tau=G(\omega)\int_{-\infty}^{\infty}f(\tau)e^{-i\omega\tau}\,d\tau=F(\omega)G(\omega).$$

12.1.33 Example. Let us verify Theorem 12.1.32 for $g*g$, where
$$g(t)=\begin{cases}1 & \text{when }|t|<1\\ 0 & \text{when }|t|>1.\end{cases}$$
We have computed $(g*g)(t)$ in Example 12.1.31. Its Fourier transform is
$$F(g*g)=\int_{-\infty}^{\infty}(g*g)(t)e^{-i\omega t}\,dt=\int_{-2}^{0}(2+t)e^{-i\omega t}\,dt+\int_{0}^{2}(2-t)e^{-i\omega t}\,dt$$
$$=\ \dots\text{ after many calculations }\dots\ =\frac{2(1-\cos(2\omega))}{\omega^{2}}=\frac{4\sin^{2}(\omega)}{\omega^{2}}.$$
But, since $F(g)=\frac{2\sin(\omega)}{\omega}$, we get immediately
$$F(g*g)=G(\omega)^{2}=\frac{4\sin^{2}(\omega)}{\omega^{2}}$$
which verifies Theorem 12.1.32. The second way to compute $F(g*g)$ is much easier.
12.1.34 Theorem. When the indicated convolutions exist, the following hold:
$$f*g=g*f,\qquad f*(g*h)=(f*g)*h,\qquad f*(g+h)=f*g+f*h.$$
Proof. All the properties follow from Theorem 12.1.32.


12.1.35 Consider an arbitrary function $f(t)$ and define
$$\varphi(t):=\mathrm{Heaviside}(t)\,e^{-\sigma t}f(t)=\begin{cases}e^{-\sigma t}f(t) & t>0\\ 0 & t<0.\end{cases}$$
Then the Fourier transform of $\varphi(t)$ is
$$F(\varphi(t))=\int_{-\infty}^{\infty}\varphi(t)e^{-i\omega t}\,dt=\int_{0}^{\infty}f(t)e^{-\sigma t}e^{-i\omega t}\,dt=\int_{0}^{\infty}f(t)e^{-st}\,dt=L(f(t))$$
with $s=\sigma+i\omega$. So the Laplace transform of $f(t)$ is the Fourier transform of a function $\varphi(t)$ which is the restriction of $f(t)$ to $(0,\infty)$ with an $e^{-\sigma t}$ attenuation.
1. Adding the attenuation $e^{-\sigma t}$ increases the class of functions $f(t)e^{-\sigma t}$ which are integrable (e.g., $f(t)=\mathrm{Heaviside}(t)$ is not integrable but $e^{-\sigma t}\,\mathrm{Heaviside}(t)$ is).
2. We want to restrict $f(t)$ to $t>0$ because we have a beginning of time (when the system started working).
3. This is also why the Laplace transform of a derivative needs the initial conditions, while in the Fourier transform the beginning of time is at $t=-\infty$ and it is assumed that by $t=0$ the effect of the "initial" conditions at $t=-\infty$ has worn away.
12.1.36 Let us now examine the application of the Fourier transform to the solution of differential equations. We will work with examples.
12.1.37 Example. Let us solve
$$\frac{dx}{dt}+x=\mathrm{Heaviside}(t)-\mathrm{Heaviside}(t-1),\qquad x\left(0^{-}\right)=1,$$
first with the Laplace, then with the Fourier transform.
With Laplace we have
$$(s+1)X=1+\frac{1-e^{-s}}{s}\ \Rightarrow\ X=\frac{1}{s+1}+\left(1-e^{-s}\right)\frac{1}{s(s+1)}$$
$$\Rightarrow\ X=\frac{1}{s+1}+\left(1-e^{-s}\right)\left(\frac{1}{s}-\frac{1}{s+1}\right)$$
$$\Rightarrow\ x_{L}(t)=e^{-t}+\mathrm{Heaviside}(t)\left(1-e^{-t}\right)-\mathrm{Heaviside}(t-1)\left(1-e^{-(t-1)}\right).$$
With Fourier things are quite (but not totally) similar; we have
$$F(\mathrm{Heaviside}(t)-\mathrm{Heaviside}(t-1))=\int_{0}^{1}e^{-i\omega t}\,dt=\frac{1-e^{-i\omega}}{i\omega}$$
and then
$$(i\omega+1)X=\frac{1-e^{-i\omega}}{i\omega}\ \Rightarrow\ X=\left(1-e^{-i\omega}\right)\frac{1}{i\omega(i\omega+1)}\ \Rightarrow\ X=\left(1-e^{-i\omega}\right)\left(\frac{1}{i\omega}-\frac{1}{i\omega+1}\right)$$
$$\Rightarrow\ x_{F}(t)=\mathrm{Heaviside}(t)\left(1-e^{-t}\right)-\mathrm{Heaviside}(t-1)\left(1-e^{-(t-1)}\right).$$
We see that
$$x_{L}(t)=x_{F}(t)+e^{-t}.$$
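The difference x_L − x_F = e^{−t} can be checked by simulating the ODE directly from the initial condition x(0^−) = 1; forward Euler with an arbitrary step is used below as a sketch.

```python
import numpy as np

# Forward-Euler solution of dx/dt + x = Heaviside(t) - Heaviside(t-1),
# x(0) = 1, compared with the Laplace solution x_L above.
dt = 1e-4
t = np.arange(0.0, 5.0, dt)
u = (t < 1.0).astype(float)     # Heaviside(t) - Heaviside(t-1) for t >= 0

x = np.zeros_like(t)
x[0] = 1.0
for k in range(len(t) - 1):
    x[k + 1] = x[k] + dt * (u[k] - x[k])

xF = (1 - np.exp(-t)) - np.where(t >= 1, 1 - np.exp(-(t - 1)), 0.0)
xL = np.exp(-t) + xF
err = np.max(np.abs(x - xL))
print(err)
```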
So the two solutions are not identical. But this is not surprising. The difference can be traced back to the $\frac{dx}{dt}$ formulas:
$$L\left(\frac{dx}{dt}\right)=sX(s)-x\left(0^{-}\right),\qquad F\left(\frac{dx}{dt}\right)=i\omega X(\omega).$$
And indeed the difference between $x_{L}(t)$ and $x_{F}(t)$ is the term $x\left(0^{-}\right)e^{-t}$, which is the initial condition propagated in time; the Fourier transform assumes no initial conditions or, better, that the "initial" conditions were given at $t=-\infty$ and their influence has worn out.
12.1.38 Example. Now we solve
$$\frac{dx}{dt}+x=\mathrm{Dirac}(t-1),\qquad x\left(0^{-}\right)=0,$$
first with the Laplace, then with the Fourier transform.
With Laplace we have
$$(s+1)X=e^{-s}\ \Rightarrow\ X=\frac{e^{-s}}{s+1}\ \Rightarrow\ x_{L}(t)=\mathrm{Heaviside}(t-1)e^{-(t-1)}.$$
With Fourier we have (with exactly analogous calculations and with $F(\mathrm{Dirac}(t-1))=e^{-i\omega}$):
$$(i\omega+1)X=e^{-i\omega}\ \Rightarrow\ X=\frac{e^{-i\omega}}{i\omega+1}\ \Rightarrow\ x_{F}(t)=\mathrm{Heaviside}(t-1)e^{-(t-1)}.$$
We see that
$$x_{L}(t)=x_{F}(t).$$
12.1.39 Example. Now we solve
$$\frac{d^{2}x}{dt^{2}}+2\frac{dx}{dt}+x=\mathrm{Heaviside}(t)-\mathrm{Heaviside}(t-1),\qquad x\left(0^{-}\right)=0,\ x'\left(0^{-}\right)=0,$$
first with the Laplace, then with the Fourier transform.
With Laplace we have
$$\left(s^{2}+2s+1\right)X=\frac{1-e^{-s}}{s}\ \Rightarrow\ X=\left(1-e^{-s}\right)\frac{1}{s\left(s^{2}+2s+1\right)}$$
$$\Rightarrow\ x_{L}(t)=\mathrm{Heaviside}(t)\widetilde{x}(t)-\mathrm{Heaviside}(t-1)\widetilde{x}(t-1)$$
where
$$\widetilde{x}(t)=1-te^{-t}-e^{-t},$$
and with Fourier
$$\left(-\omega^{2}+2i\omega+1\right)X=\frac{1-e^{-i\omega}}{i\omega}\ \Rightarrow\ X=\left(1-e^{-i\omega}\right)\frac{1}{i\omega\left(-\omega^{2}+2i\omega+1\right)}$$
$$\Rightarrow\ X=\left(1-e^{-i\omega}\right)\left(\frac{1}{i\omega}-\frac{1}{(i\omega+1)^{2}}-\frac{1}{i\omega+1}\right)$$
$$\Rightarrow\ x_{F}(t)=\mathrm{Heaviside}(t)\widetilde{x}(t)-\mathrm{Heaviside}(t-1)\widetilde{x}(t-1).$$
We see that
$$x_{L}(t)=x_{F}(t).$$
12.1.40 Example. Here is one final example. We solve
$$\frac{d^{2}x}{dt^{2}}+2\frac{dx}{dt}+x=\sin t,\qquad x\left(0^{-}\right)=0,\ x'\left(0^{-}\right)=0,$$
first with the Laplace, then with the Fourier transform.
With Laplace we have
$$\left(s^{2}+2s+1\right)X=\frac{1}{s^{2}+1}\ \Rightarrow\ X=\frac{1}{\left(s^{2}+1\right)\left(s^{2}+2s+1\right)}$$
$$\Rightarrow\ X=\frac{1}{2(s+1)}-\frac{1}{2}\frac{s}{s^{2}+1}+\frac{1}{2(s+1)^{2}}$$
$$\Rightarrow\ x_{L}(t)=\frac{1}{2}e^{-t}+\frac{1}{2}te^{-t}-\frac{1}{2}\cos t.$$
With Fourier we note that
$$F^{-1}\left(e^{i\omega}\right)=\mathrm{Dirac}(t+1)\quad\text{and}\quad F^{-1}\left(e^{-i\omega}\right)=\mathrm{Dirac}(t-1),$$
hence
$$F^{-1}(\sin\omega)=F^{-1}\left(\frac{e^{i\omega}-e^{-i\omega}}{2i}\right)=\frac{1}{2i}\left(\mathrm{Dirac}(t+1)-\mathrm{Dirac}(t-1)\right).$$
Then, by duality,
$$F(\sin t)=\pi i\left(\mathrm{Dirac}(\omega+1)-\mathrm{Dirac}(\omega-1)\right)$$
and
$$\left(-\omega^{2}+2i\omega+1\right)X=\pi i\left(\mathrm{Dirac}(\omega+1)-\mathrm{Dirac}(\omega-1)\right)\ \Rightarrow\ X=\pi i\,\frac{\mathrm{Dirac}(\omega+1)-\mathrm{Dirac}(\omega-1)}{(i\omega+1)^{2}}.$$
Since
$$\frac{1}{2\pi}\int_{-\infty}^{\infty}\pi i\,\frac{\mathrm{Dirac}(\omega+1)}{(i\omega+1)^{2}}e^{i\omega t}\,d\omega=\frac{i}{2}\,\frac{e^{-it}}{(1-i)^{2}}=-\frac{1}{4}e^{-it},$$
$$\frac{1}{2\pi}\int_{-\infty}^{\infty}\pi i\,\frac{\mathrm{Dirac}(\omega-1)}{(i\omega+1)^{2}}e^{i\omega t}\,d\omega=\frac{i}{2}\,\frac{e^{it}}{(1+i)^{2}}=\frac{1}{4}e^{it},$$
we get
$$x_{F}(t)=-\frac{1}{4}e^{-it}-\frac{1}{4}e^{it}=-\frac{e^{-it}+e^{it}}{4}=-\frac{1}{2}\cos t.$$
We see that
$$x_{L}(t)=x_{F}(t)+\frac{1}{2}e^{-t}+\frac{1}{2}te^{-t}.$$
Here the cause of the discrepancy is different. Namely, the part $\frac{1}{2}e^{-t}+\frac{1}{2}te^{-t}$ is not square integrable on $(-\infty,\infty)$, and so the Fourier transform cannot capture it.

12.1.41 Notation.

    Heaviside(t) = { 1 for t ≥ 0, 0 for t < 0 }
    Dirac(t) = d Heaviside(t)/dt
    sgn(t) = { 1 for t > 0, −1 for t < 0 }
    Π(t) = { 1 for |t| ≤ 1, 0 for |t| > 1 }
    Λ(t) = { 1 − |t| for |t| ≤ 1, 0 for |t| > 1 }
    sinc(t) = sin(t)/t
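For experimenting numerically, these functions translate directly into numpy; the names rect and tri for Π and Λ are our own choice, and the conventions (Heaviside(0) = 1, sinc(0) = 1) follow the definitions above:

```python
import numpy as np

def heaviside(t):
    return np.where(np.asarray(t) >= 0, 1.0, 0.0)      # Heaviside(0) = 1, as above

def sgn(t):
    return np.where(np.asarray(t) > 0, 1.0, -1.0)

def rect(t):                                           # Π(t): 1 on |t| <= 1
    return np.where(np.abs(t) <= 1, 1.0, 0.0)

def tri(t):                                            # Λ(t): 1 - |t| on |t| <= 1
    return np.where(np.abs(t) <= 1, 1.0 - np.abs(t), 0.0)

def sinc(t):                                           # sin(t)/t with the singularity at 0 filled in
    return np.sinc(np.asarray(t) / np.pi)              # np.sinc(x) = sin(pi x)/(pi x)
```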
eh
12.1.42 The basic properties of the Fourier transform are as follows.

    f(t)                              F(ω) = F(f(t))
    κ f1(t) + λ f2(t)                 κ F1(ω) + λ F2(ω)
    f(at)                             (1/|a|) F(ω/a)
    f(−t)                             F(−ω)
    f(t − t0)                         F(ω) e^{−iωt0}
    f(t) e^{iω0 t}                    F(ω − ω0)
    f(t) cos(ω0 t)                    (1/2)F(ω − ω0) + (1/2)F(ω + ω0)
    f(t) sin(ω0 t)                    (1/2i)F(ω − ω0) − (1/2i)F(ω + ω0)
    F(t)                              2π f(−ω)
    df/dt                             iω F(ω)
    ∫_{−∞}^t f(τ) dτ                  (1/(iω)) F(ω)
    −it f(t)                          dF(ω)/dω
    ∫_{−∞}^∞ |f(t)|² dt               = (1/2π) ∫_{−∞}^∞ |F(ω)|² dω
    ∫_{−∞}^∞ f(t) g*(t) dt            = (1/2π) ∫_{−∞}^∞ F(ω) G*(ω) dω
    f(t) ∗ g(t)                       F(ω) G(ω)

(Here g* denotes the complex conjugate of g.)
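Any of these properties can be spot-checked numerically. A minimal sketch (assuming numpy; the grid and the shift t0 are arbitrary choices) approximates F(ω) = ∫ f(t)e^{−iωt} dt by a Riemann sum for the Gaussian f(t) = e^{−t²}, whose transform is √π e^{−ω²/4}, and verifies the time-shift rule f(t − t0) ↔ F(ω)e^{−iωt0}:

```python
import numpy as np

t = np.linspace(-20.0, 20.0, 40001)
dt = t[1] - t[0]

def ft(f, w):
    # Riemann-sum approximation of F(w) = ∫ f(t) e^{-iwt} dt
    return np.sum(f(t) * np.exp(-1j * w * t)) * dt

f = lambda s: np.exp(-s ** 2)                # Gaussian: F(w) = sqrt(pi) exp(-w^2/4)
t0 = 1.5                                     # arbitrary shift for testing f(t - t0)
```

Because the integrand decays to zero at the ends of the grid, the Riemann sum here is accurate essentially to machine precision.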

12.1.43 Some basic Fourier transform pairs are the following.

    f(t)                          F(ω) = F(f(t))
    Heaviside(t − t0)             ((1/(iw)) + π Dirac(w)) e^{−iwt0}
    Dirac(t − t0)                 e^{−iwt0}
    e^{iw0 t}                     2π Dirac(w − w0)
    cos(w0 t)                     π (Dirac(w − w0) + Dirac(w + w0))
    sin(w0 t)                     πi (−Dirac(w − w0) + Dirac(w + w0))
    e^{−at} Heaviside(t)          1/(a + iω)
    e^{−a|t|}                     2a/(a² + ω²)
    e^{−at²}                      √(π/a) e^{−ω²/4a}
    Π(t)                          2 sinc(ω)
    (1/π) sinc(t)                 Π(ω)
    t e^{−at} Heaviside(t)        1/(a + iω)²
    1/(a² + t²)                   (π/a) e^{−a|ω|}
    1/t                           πi (1 − 2 Heaviside(ω))
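The absolutely integrable pairs can be verified the same way. Here is a sketch (assuming numpy; the value a = 2 and the truncation range are arbitrary choices) checking e^{−a|t|} ↔ 2a/(a² + ω²):

```python
import numpy as np

t = np.linspace(-60.0, 60.0, 120001)
dt = t[1] - t[0]
a = 2.0
f = np.exp(-a * np.abs(t))                   # two-sided exponential

def F(w):
    # Riemann-sum approximation of ∫ f(t) e^{-iwt} dt; should match 2a/(a^2 + w^2)
    return np.sum(f * np.exp(-1j * w * t)) * dt
```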

12.2 Solved Problems


12.3 Unsolved Problems
1. Compute the Fourier transform of f(t) = { 1 − t² for |t| ≤ 1, 0 for |t| > 1 }.
   Ans. 4(sin ω − ω cos ω)/ω³.
2. Compute the Fourier transform of f(t) = 3/(t² + 4).
   Ans. (3/2)π (e^{−2w} Heaviside(w) + e^{2w} Heaviside(−w)).
3. Compute the Fourier transform of f(t) = 1/(2 + t).
   Ans. i e^{2iw} π (1 − 2 Heaviside(w)).
4. Compute the Fourier transform of f(t) = e^{−3t} Heaviside(t).
   Ans. (3 + iw)^{−1}.
5. Compute the Fourier transform of f(t) = te^{−3t} Heaviside(t).
   Ans. (3 + iw)^{−2}.
6. Compute the Fourier transform of f(t) = e^{−4t²}.
   Ans. (√π/2) e^{−w²/16}.
7. Compute the inverse Fourier transform of F(ω) = { 1 − ω² for |ω| ≤ 1, 0 for |ω| > 1 }.
   Ans. f(t) = (2/(πt³))(sin t − t cos t).
8. Compute the inverse Fourier transform of F(ω) = sin(ωt0)/(ωt0).
   Ans. f(t) = { 1/(2t0) for |t| < |t0|, 0 for |t| > |t0| }.
9. Compute the inverse Fourier transform of F(ω) = (1/√(2π)) e^{−ω²/2}.
   Ans. f(t) = (1/2π) e^{−t²/2}.
10. Compute the inverse Fourier transform of F(ω) = { 1 − |ω| for |ω| ≤ 1, 0 for |ω| > 1 }.
    Ans. f(t) = (1/π)(1 − cos t)/t².

11. Compute the inverse Fourier transform of F(ω) = 1/√(1 − iw).
    Ans. f(t) = e^{t} Heaviside(−t)/√(−πt).
12. Compute the inverse Fourier transform of F(ω) = 3/(w² + 1).
    Ans. (3/2) e^{t} Heaviside(−t) + (3/2) e^{−t} Heaviside(t).
13. Compute the inverse Fourier transform of F(ω) = 3w/(w² + 1).
    Ans. (3/2) i (−e^{t} Heaviside(−t) + e^{−t} Heaviside(t)).
14. Compute the inverse Fourier transform of F(ω) = sin(w)/(w² + 1).
    Ans. (i/4)(e^{−t+1} Heaviside(t − 1) + e^{t−1} Heaviside(−t + 1) − e^{t+1} Heaviside(−t − 1) − e^{−t−1} Heaviside(t + 1)).
15. Compute the inverse Fourier transform of F(ω) = sin(w)/(w² + 9).
    Ans. (i/12)(e^{−3t+3} Heaviside(t − 1) + e^{3t−3} Heaviside(−t + 1) − e^{−3t−3} Heaviside(t + 1) − e^{3t+3} Heaviside(−t − 1)).
16. Compute the Fourier transform of f(t) = e^{−t²}.
    Ans. F(ω) = √π e^{−ω²/4}.
17. Compute the sine Fourier transform of f(t) = e^{−|t|}.
    Ans. F_S(ω) = √(2/π) · w/(w² + 1).
18. Compute the cosine Fourier transform of f(t) = { 1 − |t| for |t| ≤ 1, 0 for |t| > 1 }.
    Ans. F_c(ω) = √(2/π) (1 − cos ω)/ω².
19. Compute the cosine Fourier transform of f(t) = sin(ω0 t)/(ω0 t).
    Ans. F_c(ω) = { (1/ω0)√(π/2) for |ω| < |ω0|, 0 for |ω| > |ω0| }.
20. Compute the cosine Fourier transform of f(t) = { t² for 0 ≤ |t| ≤ 1, 0 for |t| > 1 }.
    Ans. F_c(ω) = √(2/π) (ω² sin ω − 2 sin ω + 2ω cos ω)/ω³.

12.4 Advanced Problems


1. Prove:

       Heaviside(at + b) = Heaviside(t + b/a) Heaviside(a) + Heaviside(−t − b/a) Heaviside(−a).

2. Prove: (d/dt) Λ(t) = −Π(t) sgn(t).
3. Prove: (d/dt) sinc(t) = cos(t)/t − sin(t)/t².
4. Compute ∫_{−∞}^∞ dt/(1 + t²)².
   Ans. π/2.
5. Compute ∫_{−∞}^∞ t² dt/(1 + t²)².
   Ans. π/2.
6. Compute ∫_{−∞}^∞ t⁴ dt/(1 + t²)⁴.
   Ans. π/16.
7. Solve the integral equation ∫_{−∞}^∞ y(τ)/((t − τ)² + a²) dτ = 1/(t² + b²).
   Ans. y(t) = a(b − a)/(bπ(t² + (b − a)²)).
8. Prove: Λ(t) = Π(t) ∗ Π(t).
9. Prove:

       (d/dt)[f(t) ∗ g(t)] = (df/dt) ∗ g(t),
       (d/dt)[f(t) ∗ Heaviside(t)] = f(t).

10. Prove:

        Heaviside(t) ∗ [f(t) Heaviside(t)] = ∫₀^t f(τ) dτ.

11. Prove: (1/(πt)) ∗ (−1/(πt)) = δ(t).
12. Compute

        Heaviside(t) ∗ Heaviside(t),
        Heaviside(t) ∗ Heaviside(t) ∗ Heaviside(t),
        ...
        lim_{n→∞} Heaviside(t) ∗ Heaviside(t) ∗ ... ∗ Heaviside(t)  (n times).

13. Prove: if f(t) is real valued, then |F(ω)|² is even.
14. Prove:

        ∫_{−∞}^∞ t f(t) dt = −F′(0)/(2iπ),
        ∫_{−∞}^∞ t² f(t) dt = −F″(0)/(4π²).
15. What is the relationship between f(t) and F(F(f(t)))? Between f(t) and F(F(...(F(f(t)))...))?
16. Prove:

        ∫_{−∞}^∞ (df/dt)(df/dt) dt = 4π² ∫_{−∞}^∞ (dF/dω)(dF/dω) dω.

17. Prove:

        |f(t)| ≤ ∫_{−∞}^∞ |F(ω)| dω.

18. Prove:

        ∫_{−∞}^∞ e^{−πt²} cos(2πωt) dt = e^{−πω²}.

19. Find f(t) such that f(t) = F(t) (where F(w) = F(f(t))).
20. Prove:

        lim_{N→∞} (1 − ω²)(1 − ω²/4) ··· (1 − ω²/N²) = A · e^{−ω²/B}.

    What are the A, B values?
IV  Partial Differential Equations


13 PDEs for Equilibrium . . . . 135
14 PDEs for Diffusion . . . . 149
15 PDEs for Waves . . . . 173
16 Bessel Functions and Applications . . . . 189
17 Vector Spaces of Functions . . . . 201
18 Sturm-Liouville Problems . . . . 215


13. PDEs for Equilibrium

Equilibrium PDEs describe phenomena which, after having evolved in time, have reached a steady state. The classic equilibrium PDE is the Laplace equation

    ∇²u = 0,

i.e., uxx + uyy = 0, uxx + uyy + uzz = 0 etc.

13.1 Theory and Examples



13.1.1 In this chapter we study the Laplace equation

    uxx + uyy = 0    (13.1)

and its variants. Note that this differential equation involves a function u(x, y) of two variables and its partial derivatives. Such equations are called partial differential equations (PDE).
13.1.2 The function u (x, y) can be an electric potential, a temperature, a probability
distribution etc. Note that u (x, y) does not involve time; the Laplace equation is
used to describe phenomena in equilibrium.

13.1.3 Why does (13.1) describe equilibrium? The full answer will come in later chapters, where we will see that

    u(x, y) = lim_{t→∞} ũ(x, y, t),

i.e., u(x, y) is the steady state limit of another function ũ(x, y, t) which involves time. However, some preliminary arguments can be given here.
13.1.4 We first present an informal argument, based on the approximation of derivatives by finite differences. Recall that, for a (sufficiently smooth) function of one variable, we have

    f′(x) ≈ (f(x + Δx) − f(x))/Δx,    f″(x) ≈ (f(x + Δx) − 2f(x) + f(x − Δx))/Δx²

and the approximation becomes better as Δx → 0. Applying the corresponding discretization to (13.1) we get the discrete version of the Laplace equation:

    (u(x + Δx, y) − 2u(x, y) + u(x − Δx, y))/Δx² + (u(x, y + Δy) − 2u(x, y) + u(x, y − Δy))/Δy² = 0.

Assuming Δx = Δy, this yields

    u(x, y) = (u(x + Δx, y) + u(x − Δx, y) + u(x, y + Δy) + u(x, y − Δy))/4.

In other words, if u satisfies the Laplace equation in the neighborhood of some point (x, y), then at that point u(x, y) equals the average of the values of u at the four nearest neighbors of (x, y). This is a very rough argument, but it captures the essence of the matter.
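The averaging property is exactly what classical iterative solvers exploit. A small sketch (assuming numpy; the grid size and boundary data are arbitrary choices) solves the discrete Laplace equation by repeatedly replacing every interior value with the average of its four neighbors (Jacobi iteration):

```python
import numpy as np

n = 21
u = np.zeros((n, n))
u[0, :] = 1.0                                # one boundary side held at 1, the rest at 0

for _ in range(5000):
    # Jacobi sweep: each interior value becomes the average of its 4 neighbors
    u[1:-1, 1:-1] = 0.25 * (u[2:, 1:-1] + u[:-2, 1:-1] + u[1:-1, 2:] + u[1:-1, :-2])

# the discrete averaging identity at (near-)convergence
avg = 0.25 * (u[2:, 1:-1] + u[:-2, 1:-1] + u[1:-1, 2:] + u[1:-1, :-2])
```

At convergence every interior value lies strictly between the boundary extremes, a discrete version of the maximum principle.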
13.1.5 A more rigorous argument depends on the theory of complex functions. Recall the following facts.
1. A function u which satisfies uxx + uyy = 0 is called harmonic.
2. Given a holomorphic function f(z) = u(x, y) + iv(x, y), its real part u(x, y) and its imaginary part v(x, y) are harmonic functions.
3. Given a function f(z) = u(x, y) + iv(x, y), which is holomorphic inside and on a circle centered at z0 and with radius R, we have

    f(z0) = (1/2π) ∫₀^{2π} f(z0 + R·e^{iθ}) dθ;    (13.2)

in other words the value at z0 is the average of the values on the circle.
It is easily seen that (13.2) implies that the average property also holds for the real and imaginary parts of f(z):

    u(x0, y0) = (1/2π) ∫₀^{2π} u(x0 + R cos θ, y0 + R sin θ) dθ;    (13.3)
    v(x0, y0) = (1/2π) ∫₀^{2π} v(x0 + R cos θ, y0 + R sin θ) dθ.    (13.4)

In short: if a function satisfies the Laplace equation inside and on a circle, its value at the center of the circle equals its average value on the circle. We will soon see more connections of the Laplace equation to complex functions.
13.1.6 To find a specific solution of the Laplace equation (13.1) two additional factors must be specified.
1. The region on which the equation holds.
2. The conditions on the boundary of the region (boundary conditions).
The situation is analogous to that of the "ordinary" differential equations studied in previous chapters. Whereas in those cases a particular solution was fully determined by the differential equation and initial conditions, the situation is more complex now. Since the boundary will be a curve on the plane, the boundary conditions consist in specifying functions (rather than numerical values) on the boundary.

13.1.7 A similar situation appears, once again, in the theory of complex functions. Namely, Cauchy's Integral Formula tells us that if a function f(z) is analytic inside and on the boundary of a region (the boundary being a simple closed curve) then the value of f(z) at some point z0 inside the region is given by

    f(z0) = (1/2πi) ∮ f(z)/(z − z0) dz.

Do you see the connection to solutions of the Laplace equation? Recall that, if f(z) = u(x, y) + iv(x, y) is analytic then uxx + uyy = 0, vxx + vyy = 0.
13.1.8 In general we distinguish the following kinds of Laplace equation problems, according to the nature of the boundary conditions.
1. Dirichlet problems: boundary conditions on u(x, y).
2. Neumann problems: boundary conditions on the partial derivatives (e.g., ux(x, y), uy(x, y)).
3. Mixed Dirichlet / Neumann problems.
13.1.9 Let us now solve what is perhaps the simplest Dirichlet problem.

    0 < x < L and 0 < y < M : uxx + uyy = 0    (13.5)
    0 < x < L : u(x, 0) = f(x)    (13.6)
    0 < x < L : u(x, M) = 0    (13.7)
    0 < y < M : u(0, y) = 0    (13.8)
    0 < y < M : u(L, y) = 0.    (13.9)

We will solve the above using the method of separation of variables. Assume that u(x, y) = X(x)Y(y). Then (13.5) becomes

    X″Y + XY″ = 0 ⇒ X″/X = −Y″/Y = −b²    (13.10)

(the choice −b² will be explained a little later). From (13.10) we get

    X″ + b²X = 0    (13.11)
    Y″ − b²Y = 0.    (13.12)

From (13.8), (13.9) it follows that the solutions of (13.11) have the form

    Xn(x) = sin(bn x)    (13.13)

with bn = nπ/L (n ∈ {0, ±1, ±2, ...}). The solutions of (13.12) have the form

    Yn(y) = Cn e^{bn y} + Dn e^{−bn y},

but can be written equivalently as

    Yn(y) = En sinh(bn(y + Fn)).    (13.14)

From (13.7) we get Fn = −M, hence

    Yn(y) = En sinh(bn(y − M))    (13.15)



and

    u(x, y) = Σ_{n=1}^∞ En sin(nπx/L) sinh(nπ(y − M)/L).    (13.16)

We must also satisfy (13.6). Here is why we chose the constant −b² (i.e., negative): now we can expand f(x) in a Fourier series:

    f(x) = u(x, 0) = Σ_{n=1}^∞ En sin(nπx/L) sinh(−nπM/L).    (13.17)

Hence

    En = −(2/(L sinh(nπM/L))) ∫₀^L f(x) sin(nπx/L) dx.    (13.18)

In short, (13.16) and (13.18) give the solution of (13.5)–(13.9).
13.1.10 Example. Let us solve (13.5)–(13.9) with L = M = π and f(x) = sin²(x). Then

    En = −(2/(π sinh(nπ))) ∫₀^π sin²(x) sin(nx) dx = { −(4/(π sinh(nπ))) (cos(nπ) − 1)/(n(n² − 4)) for n ≠ 2, 0 for n = 2 }    (13.19)

Hence

    u(x, y) = (8/π) Σ_{n=1,3,5,...} (1/sinh(nπ)) · (1/(n(n² − 4))) · sin(nx) sinh(n(y − π)).    (13.20)
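As a numerical sanity check of (13.20), the following sketch (assuming numpy; the number of terms is an arbitrary choice) sums the series, with the ratio sinh(n(y − π))/sinh(nπ) rewritten in terms of decaying exponentials to avoid overflow, and confirms the boundary behavior u(x, 0) ≈ sin²x, u(x, π) = 0, u(0, y) = 0:

```python
import numpy as np

def u_series(x, y, terms=400):
    # partial sum of (13.20): L = M = pi, f(x) = sin^2(x)
    total = 0.0
    for n in range(1, terms, 2):             # only odd n contribute
        # sinh(n(y - pi))/sinh(n pi), rewritten with decaying exponentials
        ratio = (np.exp(n * (y - 2 * np.pi)) - np.exp(-n * y)) / (1.0 - np.exp(-2 * np.pi * n))
        total += ratio * np.sin(n * x) / (n * (n ** 2 - 4))
    return (8.0 / np.pi) * total
```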

13.1.11 Here is a somewhat more complex version of (13.5)–(13.9):

    0 < x < L and 0 < y < M : uxx + uyy = 0    (13.21)
    0 < x < L : u(x, 0) = f(x)    (13.22)
    0 < x < L : u(x, M) = g(x)    (13.23)
    0 < y < M : u(0, y) = h(y)    (13.24)
    0 < y < M : u(L, y) = k(y)    (13.25)

To find u (x, y) solving (13.21)–(13.25) we use the principle of superposition: we


assume u (x, y)= u1 (x, y)+ u2 (x, y)+ u3 (x, y)+ u4 (x, y), where:
1. u1 (x, y) satisfies uxx +uyy = 0 and u(x, 0) = f (x), u (x, M) = u (0, y) = u (L, y) = 0.
h

2. u2 (x, y) satisfies uxx + uyy = 0 and u(x, L) = g (x), u (x, 0) = u (0, y) = u (L, y) = 0.
3. u3 (x, y) satisfies uxx + uyy = 0 and u(0, y) = h (x), u (x, 0) = u (x, M) = u (L, y) = 0.
4. u4 (x, y) satisfies uxx + uyy = 0 and u(L, y) = k (x), u (x, 0) = u (x, M) = u (0, y) = 0.
We solve each of these subproblems in the same way that we solved (13.5)–(13.9);
At

hence we find u1 , u2 , u3 , u4 and then u (x, y).


13.1.12 Here is a simple Neumann problem:

    0 < x < L and 0 < y < M : uxx + uyy = 0    (13.26)
    0 < x < L : uy(x, 0) = f(x)    (13.27)
    0 < x < L : uy(x, M) = 0    (13.28)
    0 < y < M : ux(0, y) = 0    (13.29)
    0 < y < M : ux(L, y) = 0.    (13.30)

Once again we separate variables and obtain

    X″ + b²X = 0    (13.31)
    Y″ − b²Y = 0.    (13.32)

For X we have the boundary conditions X′(0) = X′(L) = 0. Hence the only acceptable solutions are

    Xn(x) = cos(nπx/L).

For n ∈ {1, 2, ...} we can write the Y solutions in the form

    Yn(y) = En cosh(nπy/L) + Fn sinh(nπy/L).

Hence the general solution is

    u(x, y) = Σ_{n=0}^∞ (En cosh(nπy/L) + Fn sinh(nπy/L)) cos(nπx/L)
            = E0 + Σ_{n=1}^∞ (En cosh(nπy/L) + Fn sinh(nπy/L)) cos(nπx/L).    (13.33)

Now (13.27), (13.28) yield

    f(x) = uy(x, 0) = Σ_{n=1}^∞ (nπ/L) Fn cos(nπx/L)    (13.34)
    0 = uy(x, M) = Σ_{n=1}^∞ (nπ/L)(En sinh(nπM/L) + Fn cosh(nπM/L)) cos(nπx/L).    (13.35)

From (13.34) we conclude that the (nπ/L)Fn are the coefficients of the cosine series of f(x), which must, however, have zero constant (n = 0) coefficient. In other words, the problem only has a solution if the following compatibility condition holds:

    ∫₀^L f(x) dx = 0.    (13.36)

If (13.36) holds, then F1, F2, ... are determined by

    Fn = (2/(nπ)) ∫₀^L f(x) cos(nπx/L) dx    (13.37)

and E1, E2, ... are determined by solving (for n ∈ {1, 2, ...}) equation (13.35), which becomes

    En sinh(nπM/L) + Fn cosh(nπM/L) = 0.    (13.38)

E0 remains undetermined (this is reasonable: the boundary conditions only determine the derivatives).

13.1.13 Here is a mixed Dirichlet-Neumann problem:

    0 < x < L and 0 < y < M : uxx + uyy = 0    (13.39)
    0 < x < L : u(x, 0) = f(x)    (13.40)
    0 < x < L : u(x, M) = 0    (13.41)
    0 < y < M : ux(0, y) = 0    (13.42)
    0 < y < M : ux(L, y) = 0.    (13.43)

We separate the variables and get

    X″ + b²X = 0
    Y″ − b²Y = 0.

For X we also have X′(0) = X′(L) = 0; hence the only acceptable solutions are

    Xn(x) = cos(nπx/L).

For n ∈ {1, 2, ...} we can write the Y solutions as

    Yn(y) = En sinh(nπ(y + Fn)/L);

since we must have Yn(M) = 0 we finally get

    Yn(y) = En sinh(nπ(y − M)/L).

Especially for n = 0 we have

    Y0(y) = E0 (M − y)/M.

So, finally, the general solution is

    u(x, y) = E0 (M − y)/M + Σ_{n=1}^∞ En sinh(nπ(y − M)/L) cos(nπx/L).    (13.44)

To satisfy (13.40) we must have

    f(x) = u(x, 0) = E0 − Σ_{n=1}^∞ En sinh(nπM/L) cos(nπx/L)    (13.45)

hence

    E0 = (1/L) ∫₀^L f(x) dx    (13.46)
    ∀n ∈ {1, 2, ...} : En = −(2/(L sinh(nπM/L))) ∫₀^L f(x) cos(nπx/L) dx.    (13.47)

The solution of (13.39)–(13.43) can be simplified to

    u(x, y) = (A0/2)(M − y)/M + Σ_{n=1}^∞ (An/sinh(nπM/L)) sinh(nπ(M − y)/L) cos(nπx/L)
    ∀n ∈ {0, 1, 2, ...} : An = (2/L) ∫₀^L f(x) cos(nπx/L) dx.

13.1.14 Let us now solve the Laplace equation on an infinite region, namely on the half-plane:

    −∞ < x < ∞, 0 < y < ∞ : uxx + uyy = 0    (13.48)
    −∞ < x < ∞ : u(x, 0) = f(x).    (13.49)

Separating variables we get again

    X″ + b²X = 0    (13.50)
    Y″ − b²Y = 0.    (13.51)

The solutions of (13.50) again have the form cos(bx) and sin(bx), but now we have no constraint on the b values. The solutions of (13.51) have the form e^{by}, e^{−by}, but (assuming b > 0) e^{by} is rejected because it gives an unbounded u(x, y). Finally (by superposition) a solution of (13.48) is

    u(x, y) = ∫₀^∞ e^{−by}(A(b) cos(bx) + B(b) sin(bx)) db.    (13.52)

Letting y = 0 in (13.52) we get

    f(x) = u(x, 0) = ∫₀^∞ (A(b) cos(bx) + B(b) sin(bx)) db.    (13.53)

Hence

    A(b) = (1/π) ∫_{−∞}^∞ f(x) cos(bx) dx,    B(b) = (1/π) ∫_{−∞}^∞ f(x) sin(bx) dx.    (13.54)

Hence (13.52) and (13.54) solve (13.48)–(13.49).


We can write the solution in another form, which shows clearly how the
boundary conditions determine the solution. Replacing (13.54) in (13.52) we get
Z Z ∞ 
1 ∞ −by
u (x, y) = e f (z) cos (b (z − x)) dz db (13.55)
π 0 −∞
Z ∞ 
1 ∞
Z
−by
= f (z) e cos (b (z − x)) db dz. (13.56)
π −∞ 0

Then
h

1
Z ∞  ∞
e−by cos (b (z − x)) db = −ye−by
cos b (z − x) − (x − z) e−by
sin b (z − x)
0 y2 + (z − x)2 b=0
y
= .
y + (z − x)2
2
At

Hence
1 y f (z)
Z ∞
u (x, y) = 2
dz. (13.57)
π −∞ y2 + (z − x)
This is the Poisson formula for the half-plane (known to us from the theory of
complex functions). The interpretation of (13.57) is this: the value of u (x, y) at
(x, y) is the average of the u (z, 0) = f (z) (i.e., the values on the x axis boundary),
weighted by 2 1 2 , i.e., the inverse square of the distance between (z, 0) and
y +(z−x)
(x, y).

13.1.15 Example. To solve

    −∞ < x < ∞, 0 < y < ∞ : uxx + uyy = 0
    −∞ < x < ∞ : u(x, 0) = f(x) = { 0 for x < 0, 1 for x > 0 }

we have:

    u(x, y) = (1/π) ∫_{−∞}^∞ y f(z)/(y² + (x − z)²) dz = (1/π) ∫₀^∞ y/(y² + (x − z)²) dz
            = (1/π) [−tan⁻¹((x − z)/y)]_{z=0}^∞ = (1/π)(π/2 + tan⁻¹(x/y)) = 1/2 + (1/π) tan⁻¹(x/y).
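The closed form can be confirmed against (13.57) directly. A sketch (assuming numpy; the truncation of the integration range is an arbitrary choice) evaluates the Poisson integral for this step boundary function with the trapezoid rule:

```python
import numpy as np

def poisson_halfplane(x, y):
    # (1/pi) * integral of y/(y^2 + (x - z)^2) over z > 0  (f(z) = 1 for z > 0, else 0),
    # truncated at z = 4000 and integrated with the trapezoid rule
    z = np.linspace(0.0, 4000.0, 800001)
    w = y / (y ** 2 + (x - z) ** 2) / np.pi
    return float(np.sum(0.5 * (w[1:] + w[:-1])) * (z[1] - z[0]))
```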
13.1.16 Let us also solve the Laplace equation on a half-strip:

    0 < x < 1, 0 < y < ∞ : uxx + uyy = 0
    0 < x < 1 : u(x, 0) = f(x)
    0 < y < ∞ : u(0, y) = u(1, y) = 0.

By separation of variables we get

    u(x, y) = X(x)Y(y),    Xn(x) = sin(nπx),    Yn(y) = e^{−nπy}

(Y(y) = e^{nπy} is unbounded and hence rejected). It follows that

    u(x, y) = Σ_{n=1}^∞ An e^{−nπy} sin(nπx)

and

    An = 2 ∫₀^1 f(x) sin(nπx) dx.

If, e.g., f(x) = 1, then An = (2/π)(1 − cos(nπ))/n and

    u(x, y) = (4/π)(e^{−πy} sin(πx) + (1/3)e^{−3πy} sin(3πx) + (1/5)e^{−5πy} sin(5πx) + ...).

Try to show that the above is equivalent to

    u(x, y) = (2/π) tan⁻¹(sin(πx)/sinh(πy)).

13.1.17 In many cases we must solve the Laplace equation on a region with a curved boundary. We will consider the simplest case: the region is a disk. It is natural to use polar coordinates. Recall that

    uxx + uyy = urr + (1/r)ur + (1/r²)uθθ,    (13.58)

hence the Laplace equation in polar coordinates becomes

    urr + (1/r)ur + (1/r²)uθθ = 0.    (13.59)

13.1.18 For example let us solve:

    0 ≤ r < 1, 0 ≤ θ ≤ 2π : urr + (1/r)ur + (1/r²)uθθ = 0    (13.60)
    0 ≤ θ ≤ 2π : u(1, θ) = f(θ).    (13.61)

Letting

    u(r, θ) = R(r)Θ(θ)

we get

    urr + (1/r)ur + (1/r²)uθθ = 0 ⇒
    R″Θ + (1/r)R′Θ + (1/r²)RΘ″ = 0 ⇒
    R″/R + (1/r)(R′/R) + (1/r²)(Θ″/Θ) = 0 ⇒
    r²(R″/R) + r(R′/R) = −Θ″/Θ = a.    (13.62)

From (13.62) we see that

    Θ″ + aΘ = 0    (13.63)
    r²R″ + rR′ − aR = 0.    (13.64)

Note that Θ must be periodic with period 2π. Let us consider the possible values of a.
1. If a = −b² < 0 then Θ(θ) = Ce^{bθ} + De^{−bθ}, which is not periodic.
2. If a = 0 then Θ(θ) = Cθ + D and periodicity is satisfied only for C = 0. Then we get R(r) = A + B log r and R(0) is not well defined unless B = 0. In short, for a = 0 the only acceptable solution is the constant u(r, θ) = AD.
3. If a > 0 we have

    Θ(θ) = C cos(√a θ) + D sin(√a θ)    (13.65)

and periodicity is satisfied iff √a = n, i.e., a = n² with n ∈ {1, 2, ...}. Also

    R(r) = A r^{√a} + B r^{−√a} = A rⁿ + B r^{−n}    (13.66)

and, for R(0) to be well defined, we must have B = 0.
From the above we see that a solution of (13.60) must have the form

    u(r, θ) = C0/2 + Σ_{n=1}^∞ rⁿ (Cn cos(nθ) + Dn sin(nθ)).    (13.67)

To also satisfy (13.61) we must set

    f(θ) = u(1, θ) = C0/2 + Σ_{n=1}^∞ (Cn cos(nθ) + Dn sin(nθ))    (13.68)

and let Cn, Dn be the Fourier coefficients of f(θ):

    Cn = (1/π)∫₀^{2π} f(θ) cos(nθ) dθ,    Dn = (1/π)∫₀^{2π} f(θ) sin(nθ) dθ.    (13.69)

13.1.19 We can also express the solution u(r, θ) with the Poisson formula for the unit circle. Replacing (13.69) in (13.67) we get

    u(r, θ) = (1/2π)∫₀^{2π} f(φ) dφ + (1/π) Σ_{n=1}^∞ rⁿ ∫₀^{2π} f(φ)(cos(nφ)cos(nθ) + sin(nφ)sin(nθ)) dφ
            = (1/2π)∫₀^{2π} f(φ) · [1 + 2 Σ_{n=1}^∞ rⁿ cos(n(θ − φ))] dφ
            = (1/2π)∫₀^{2π} f(φ) · [1 + Σ_{n=1}^∞ (rⁿ e^{in(θ−φ)} + rⁿ e^{−in(θ−φ)})] dφ
            = (1/2π)∫₀^{2π} f(φ) · [1 + re^{i(θ−φ)}/(1 − re^{i(θ−φ)}) + re^{−i(θ−φ)}/(1 − re^{−i(θ−φ)})] dφ
            = (1/2π)∫₀^{2π} f(φ) · (1 − r²)/(1 − 2r cos(θ − φ) + r²) dφ.    (13.70)

The interpretation of (13.70) is this: the value of u(r, θ) at (r, θ) is the average of the values u(1, φ) = f(φ) (the values on the unit circle) weighted by 1/(1 − 2r cos(θ − φ) + r²) (i.e., by the inverse squared distance between (1, φ) and (r, θ)).
13.1.20 Example. To solve:

    0 ≤ r < 1, 0 ≤ θ ≤ 2π : urr + (1/r)ur + (1/r²)uθθ = 0    (13.71)
    0 ≤ θ ≤ 2π : u(1, θ) = f(θ) = { 1 for 0 < θ < π, 0 for π < θ < 2π }    (13.72)

Using separation of variables, a solution of (13.71) is:

    u(r, θ) = C0/2 + Σ_{n=1}^∞ rⁿ (Cn cos(nθ) + Dn sin(nθ))    (13.73)

and

    Cn = (1/π)∫₀^{2π} f(θ) cos(nθ) dθ = (1/π)∫₀^π cos(nθ) dθ = { 1 for n = 0, 0 for n > 0 },    (13.74)
    Dn = (1/π)∫₀^{2π} f(θ) sin(nθ) dθ = (1/π)∫₀^π sin(nθ) dθ = (1/(nπ))(1 − cos(nπ)).    (13.75)

Hence

    u(r, θ) = 1/2 + (2/π)(r sin(θ) + (1/3)r³ sin(3θ) + (1/5)r⁵ sin(5θ) + ...).    (13.76)

Using the Poisson formula:

    u(r, θ) = (1/2π)∫₀^{2π} f(φ) · (1 − r²)/(1 − 2r cos(θ − φ) + r²) dφ
            = (1/2π)∫₀^π (1 − r²)/(1 − 2r cos(θ − φ) + r²) dφ
            = 1/2 + (1/π) tan⁻¹(2r sin(θ)/(1 − r²)).    (13.77)

Prove that the two forms of the solution are equivalent. It suffices to show that

    r sin(θ) + (1/3)r³ sin(3θ) + (1/5)r⁵ sin(5θ) + ... = (1/2) tan⁻¹(2r sin(θ)/(1 − r²)).    (13.78)
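Before attempting a proof, identity (13.78) is easy to test numerically. A sketch (assuming numpy; the number of terms is an arbitrary choice):

```python
import numpy as np

def lhs(r, theta, terms=4001):
    # partial sum of the series on the left of (13.78)
    n = np.arange(1, terms, 2)               # odd powers only
    return float(np.sum(r ** n * np.sin(n * theta) / n))

def rhs(r, theta):
    # closed form on the right of (13.78)
    return 0.5 * np.arctan(2 * r * np.sin(theta) / (1 - r ** 2))
```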
13.1.21 Here is the Dirichlet problem for the exterior of the unit circle:

    1 < r, 0 ≤ θ ≤ 2π : urr + (1/r)ur + (1/r²)uθθ = 0    (13.79)
    0 ≤ θ ≤ 2π : u(1, θ) = f(θ).    (13.80)

Omitting the details, we finally get

    u(r, θ) = C0/2 + Σ_{n=1}^∞ r^{−n}(Cn cos(nθ) + Dn sin(nθ)),    (13.81)
    Cn = (1/π)∫₀^{2π} f(θ) cos(nθ) dθ,    Dn = (1/π)∫₀^{2π} f(θ) sin(nθ) dθ    (13.82)

or:

    u(r, θ) = (1/2π)∫₀^{2π} f(φ) · (r² − 1)/(1 − 2r cos(θ − φ) + r²) dφ.    (13.83)
13.1.22 Another variant is the Dirichlet problem on a ring:

    r1 < r < r2, 0 ≤ θ ≤ 2π : urr + (1/r)ur + (1/r²)uθθ = 0    (13.84)
    0 ≤ θ ≤ 2π : u(r1, θ) = f1(θ)    (13.85)
    0 ≤ θ ≤ 2π : u(r2, θ) = f2(θ).    (13.86)

Separating variables, we get the same families of solutions as in 13.1.18. But, because r = 0 is not included in the region, we can now accept solutions of the forms A + B ln(r), rⁿ, r^{−n}. So we can finally assume

    u(r, θ) = (1/2)(A0 + B0 ln(r)) + Σ_{n=1}^∞ [(An rⁿ + Bn r^{−n}) cos(nθ) + (Cn rⁿ + Dn r^{−n}) sin(nθ)].    (13.87)

To satisfy (13.85), (13.86) we determine A0, B0 from

    A0 + B0 ln(r1) = (1/π)∫₀^{2π} f1(θ) dθ    (13.88)
    A0 + B0 ln(r2) = (1/π)∫₀^{2π} f2(θ) dθ    (13.89)

and An, Bn, Cn, Dn (for n ∈ {1, 2, ...}) from

    An r1ⁿ + Bn r1^{−n} = (1/π)∫₀^{2π} f1(θ) cos(nθ) dθ    (13.90)
    Cn r1ⁿ + Dn r1^{−n} = (1/π)∫₀^{2π} f1(θ) sin(nθ) dθ    (13.91)
    An r2ⁿ + Bn r2^{−n} = (1/π)∫₀^{2π} f2(θ) cos(nθ) dθ    (13.92)
    Cn r2ⁿ + Dn r2^{−n} = (1/π)∫₀^{2π} f2(θ) sin(nθ) dθ.    (13.93)
13.1.23 Here is a Neumann problem on the unit circle:

    0 ≤ r < 1, 0 ≤ θ ≤ 2π : urr + (1/r)ur + (1/r²)uθθ = 0    (13.94)
    0 ≤ θ ≤ 2π : ur(1, θ) = f(θ).    (13.95)

By separation of variables we get

    u(r, θ) = C0/2 + Σ_{n=1}^∞ rⁿ (Cn cos(nθ) + Dn sin(nθ));    (13.96)

differentiating with respect to r we get

    ur(r, θ) = Σ_{n=1}^∞ n r^{n−1}(Cn cos(nθ) + Dn sin(nθ))    (13.97)

hence

    f(θ) = ur(1, θ) = Σ_{n=1}^∞ n (Cn cos(nθ) + Dn sin(nθ))    (13.98)

and so, for n ∈ {1, 2, ...}, we have:

    Cn = (1/(nπ))∫₀^{2π} f(θ) cos(nθ) dθ,    Dn = (1/(nπ))∫₀^{2π} f(θ) sin(nθ) dθ.    (13.99)

Once again, for f(θ) to be expandable in a Fourier series of this form (one with zero constant term), the compatibility condition

    ∫₀^{2π} f(θ) dθ = 0    (13.100)

must hold. Also note that C0 is an arbitrary constant (this is reasonable, since the boundary conditions only determine derivatives of the solution). Finally, omitting details, let us give a Poisson-like formula for the Neumann problem on the unit circle:

    u(r, θ) = C0/2 − (1/2π)∫₀^{2π} f(φ) ln(1 − 2r cos(θ − φ) + r²) dφ    (13.101)

with C0 an arbitrary constant.



13.2 Unsolved Problems

1. Solve uxx + uyy = 0 (0 < x < π and 0 < y < π), u(x, 0) = 0 (0 < x < π), u(x, π) = 0 (0 < x < π), u(0, y) = g(y) (0 < y < π), u(π, y) = 0 (0 < y < π).
   Ans. u(x, y) = Σ_{n=1}^∞ (An/sinh(nπ)) sinh(n(π − x)) sin(ny) with An = (2/π)∫₀^π g(y) sin(ny) dy.
2. Solve uxx + uyy = 0 (0 < x < π and 0 < y < π), u(x, 0) = x²(π − x) (0 < x < π), u(x, π) = 0 (0 < x < π), u(0, y) = 0 (0 < y < π), u(π, y) = 0 (0 < y < π).
   Ans. u(x, y) = −4 Σ_{n=1}^∞ ((1 + 2(−1)ⁿ)/(n³ sinh(nπ))) sinh(n(π − y)) sin(nx).
3. Solve uxx + uyy = 0 (0 < x < π and 0 < y < π), u(x, 0) = x² (0 < x < π), u(x, π) = x² (0 < x < π), u(0, y) = 0 (0 < y < π), u(π, y) = 0 (0 < y < π).
   Ans. u(x, y) = πx − (8/π) Σ_{n=1}^∞ (1/((2n − 1)³ cosh((2n − 1)π/2))) cosh((2n − 1)(π/2 − y)) sin((2n − 1)x).
4. Solve uxx + uyy = 0 (0 < x < π and 0 < y < π), u(x, 0) = x² (0 < x < π), u(x, π) = 0 (0 < x < π), ux(0, y) = 0 (0 < y < π), ux(π, y) = 0 (0 < y < π).
   Ans. u(x, y) = (π/3)(π − y) + 4 Σ_{n=1}^∞ ((−1)ⁿ/(n² sinh(nπ))) sinh(n(π − y)) cos(nx).
5. Solve uxx + uyy = 0 (0 < x < 1 and 0 < y < 1), u(x, 0) = (1 − x)² (0 < x < 1), u(x, 1) = 0 (0 < x < 1), ux(0, y) = 0 (0 < y < 1), u(1, y) = 0 (0 < y < 1).
   Ans. u(x, y) = 4 Σ_{n=1}^∞ (An/sinh((n − ½)π)) sinh((n − ½)π(1 − y)) cos((n − ½)πx) with An = 1/(π²(n − ½)²) + (−1)ⁿ/(π³(n − ½)³).
6. Solve uxx + uyy = 0 (0 < x < π and 0 < y < π), uy(x, 0) = f(x) (0 < x < π), u(x, π) = 0 (0 < x < π), u(0, y) = 0 (0 < y < π), ux(π, y) = 0 (0 < y < π).
   Ans. u(x, y) = −Σ_{n=1}^∞ (An/((n − ½) cosh((n − ½)π))) sinh((n − ½)(π − y)) sin((n − ½)x) with An = (2/π)∫₀^π f(x) sin((n − ½)x) dx.
7. Solve uxx + uyy = 0 (−∞ < x < ∞ and 0 < y < ∞), u(x, 0) = f(x) = { −1 for x < 0, 1 for x > 0 }.
   Ans. u(x, y) = (2/π) tan⁻¹(x/y).
8. Solve uxx + uyy = 0 (−∞ < x < ∞ and 0 < y < ∞), u(x, 0) = f(x) = { 0 for x < −1, 1 for −1 < x < 1, 0 for 1 < x }.
   Ans. u(x, y) = (1/π) tan⁻¹((1 + x)/y) + (1/π) tan⁻¹((1 − x)/y).
9. Solve:

       uxx + uyy = 0 (−∞ < x < ∞, 0 < y < 1)
       u(x, 0) = f(x) (−∞ < x < ∞)
       u(x, 1) = 0 (−∞ < x < ∞)

   and show that

       u(x, y) = (1/π) ∫_{b=0}^∞ ∫_{z=−∞}^∞ (sinh(b(1 − y))/sinh(b)) f(z) cos(bz − bx) dz db.

10. Solve urr + (1/r)ur + (1/r²)uθθ = 0 (0 < r < 1, 0 ≤ θ ≤ 2π), u(1, θ) = 120 + 60 cos(2θ) (0 ≤ θ ≤ 2π).
    Ans. u(r, θ) = 120 + 60r² cos(2θ).

11. Solve urr + (1/r)ur + (1/r²)uθθ = 0 (0 < r < 1, 0 ≤ θ ≤ 2π), u(1, θ) = sin³θ (0 ≤ θ ≤ 2π).
    Ans. u(r, θ) = (1/4)(3r sin(θ) − r³ sin(3θ)).
12. Solve urr + (1/r)ur + (1/r²)uθθ = 0 (0 < r < 4, 0 ≤ θ ≤ 2π), u(4, θ) = 256 cos⁴θ (0 ≤ θ ≤ 2π).
    Ans. u(r, θ) = (1/8)(768 + 64r² cos(2θ) + r⁴ cos(4θ)).
13. Solve urr + (1/r)ur + (1/r²)uθθ = 0 (0 < r < 1, 0 ≤ θ ≤ π), u(1, θ) = θ(π − θ) (0 ≤ θ ≤ π), u(r, 0) = u(r, π) = 0 (0 ≤ r < 1).
    Ans. u(r, θ) = (8/π) Σ_{k=1}^∞ (r^{2k−1}/(2k − 1)³) sin((2k − 1)θ).
14. A plate has the shape of a circular sector with unit radius and angle θ0: {(r, θ): 0 ≤ r ≤ 1, 0 ≤ θ ≤ θ0}. If the side {(1, θ): 0 ≤ θ ≤ θ0} is kept at temperature f(θ) and the sides {(r, θ): 0 ≤ r ≤ 1, θ = 0}, {(r, θ): θ = θ0} are kept at zero temperature, find the temperature u(r, θ) in steady state.
    Ans. u(r, θ) = (2/θ0) Σ_{n=1}^∞ r^{nπ/θ0} (∫₀^{θ0} f(φ) sin(nπφ/θ0) dφ) sin(nπθ/θ0).

13.3 Advanced Problems


14. PDEs for Diffusion

Diffusion PDEs describe phenomena which evolve in time but tend to reach (in the limit of infinite time) a steady state. The classic diffusion PDE is the heat equation

    ∂u/∂t = c²∇²u,

e.g., ut = c²uxx, ut = c²(uxx + uyy) etc.




14.1 Theory and Examples


14.1.1 In this chapter we will study PDEs which describe diffusion phenomena.
Our typical example will be the transmission of heat, but the same equations
describe the diffusion of a fluid in a porous medium, the change of the probability
function of a random process etc.
14.1.2 In the context of heat transmission, consider a thin rod. The temperature of the rod is denoted by u(x,t); this implies that different parts of the rod can be at different temperatures at different times. Without justification (it can be found in a physics textbook) we postulate that u(x,t) is governed by the following diffusion equation:

    ut = a²uxx.    (14.1)

We will later introduce variations of (14.1) to describe variations of the basic heat transmission problem.
14.1.3 Similarly to the Laplace equation, to obtain a particular solution of the
diffusion equation we must specify boundary conditions (at the ends of the rod);
now we must also specify initial conditions (what is the temperature of each part of
the rod at initial time t = 0).
14.1.4 Let us now solve what is probably the simplest version of the diffusion equation.

    0 < x < 1 and 0 < t < ∞ : ut = a²uxx    (14.2)
    0 < t < ∞ : u(0,t) = u(1,t) = 0    (14.3)
    0 < x < 1 : u(x, 0) = f(x).    (14.4)

As in Chapter 11, we assume that u(x,t) can be written in the form

u(x,t) = X(x)T (t) (14.5)

Then (14.2) yields

ag
T 0 (t) X 00 (x)
X(x)T 0 (t) = a2 X 00 (x)T (t) ⇒ = . (14.6)
a2 T (t) X(x)
The left (resp. right) part is a function of t only (resp. x only). For these two parts
to be equal, for every x and t , they must both be equal to to a constant, which we
denote by −b2 . Then we have
eh T 0 (t) X 00 (x) T 0 (t) = −a2 b2 T (t)
 
= = −b2 ⇒ (14.7)
2
a T (t) X(x) X 00 (x) = −b2 X(x).

We have used −b2 because we want −a2 b2 to be nonpositive (if it was positive, then
T (t) and u(x,t) would be unbounded as t → ∞).
We now can solve the two DEs of (14.7). We get
2 2
T (t) = Ce−a b t (14.8)
.K

X(t) = A sin (bx) + B cos (bx) . (14.9)

Hence every function


2 b2 t
u(x,t) = T (t)X(x) = e−a · (A sin (bx) + B cos (bx))

solves (14.2). But we also must satisfy the boundary conditions (14.3) which now
become:

0 = u(0,t) = A sin (0) + B cos (0) = B (14.10)


h

0 = u(1,t) = A sin (b) + B cos (b) = A sin (b) . (14.11)

Hence b must satisfy


b = 0, ±π, ±2π, ... (14.12)
At

Consequently, for every integer n, we have un (x,t) = e −a2 n2 π 2 t An sin (nπx), which
satisfies (14.2) and (14.3); then this is also true of
∞ ∞
2 2 2
u(x,t) = ∑ un (x,t) = ∑ e−a n π t An sin (nπx) . (14.13)
n=1 n=1

Finally, we must the initial condition (14.4):



0 < x < 1 : f (x) = u(x, 0) = ∑ An sin (nπx) . (14.14)
n=1

This is a typical problem of Fourier series expansion. We define g(x) to be the odd extension of f(x) in [−1, 1]; then we can find An from

    An = ∫_{−1}^1 g(x) sin(nπx) dx = 2∫₀^1 f(x) sin(nπx) dx.    (14.15)

In conclusion, the function

    u(x,t) = Σ_{n=1}^∞ e^{−a²n²π²t} An sin(nπx)    (14.16)
    ∀n ∈ {1, 2, ...} : An = 2∫₀^1 f(x) sin(nπx) dx    (14.17)

solves (14.2)–(14.4).
14.1.5 From the physics point of view, the problem (14.2)–(14.4) corresponds to
the situation in which the ends of the rod are kept at zero temperature. Note that
limt→∞ u(x,t) = 0, i.e., eventually the rod reaches a steady-state of zero temperature.
Does this make physical sense?
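The recipe (14.16)–(14.17) is easy to check numerically. The following minimal Python sketch (not from the text; the sample initial temperature f(x) = x(1 − x) and all numerical parameters are assumptions) computes the coefficients An by the midpoint rule and evaluates a truncated series: at t = 0 it reproduces f(x), and for t > 0 it decays toward zero, as 14.1.5 notes.

```python
import math

def sine_coeffs(f, n_max, n_quad=2000):
    # A_n = 2 * integral_0^1 f(x) sin(n pi x) dx, cf. (14.17), via the midpoint rule
    h = 1.0 / n_quad
    xs = [(i + 0.5) * h for i in range(n_quad)]
    return [2 * h * sum(f(x) * math.sin(n * math.pi * x) for x in xs)
            for n in range(1, n_max + 1)]

def u(x, t, A, a=1.0):
    # truncated series (14.16)
    return sum(An * math.exp(-(a * n * math.pi) ** 2 * t) * math.sin(n * math.pi * x)
               for n, An in enumerate(A, start=1))

f = lambda x: x * (1 - x)        # sample initial temperature (an assumption)
A = sine_coeffs(f, 25)

print(u(0.3, 0.0, A), f(0.3))    # series reproduces f at t = 0
print(u(0.3, 1.0, A))            # decays toward the zero steady state
```

With 25 terms the agreement at t = 0 is already around 10⁻⁴, because the odd extension of x(1 − x) is continuous, so An decays like 1/n³.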
14.1.6 Example. Let us solve

0 < x < 3 and 0 < t < ∞ : ut = 2uxx (14.18)


0 < t < ∞ : u(0,t) = u(3,t) = 0 (14.19)
0 < x < 3 : u(x, 0) = 5 sin (4πx) − 3 sin (8πx) + 2 sin (10πx) (14.20)

Unlike the problem of (14.2)–(14.4), now the ends of the rod are at x = 0, x = 3
(not at x = 0, x = 1). This is easy to handle with the change of variable x′ = x/3; then
bn = nπ/3. Hence the solution has the form

u(x,t) = ∑_{n=1}^∞ e^{−2n²π²t/9} An sin(nπx/3) (14.21)

To satisfy the initial condition we must have


5 sin(4πx) − 3 sin(8πx) + 2 sin(10πx) = u(x,0) = ∑_{n=1}^∞ An sin(nπx/3). (14.22)

We see that (14.22) is satisfied with A12 = 5, A24 = −3, A30 = 2 and the remaining
An = 0. Then the solution is
u(x,t) = 5e^{−288π²t/9} sin(4πx) − 3e^{−1152π²t/9} sin(8πx) + 2e^{−1800π²t/9} sin(10πx) (14.23)
       = 5e^{−32π²t} sin(4πx) − 3e^{−128π²t} sin(8πx) + 2e^{−200π²t} sin(10πx). (14.24)
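A quick sanity check of (14.24), assuming a standard Python environment: at t = 0 the closed form reproduces the initial condition (14.20) exactly, and it vanishes at the ends x = 0 and x = 3 for every t.

```python
import math

def u(x, t):
    # closed form (14.24)
    return (5 * math.exp(-32 * math.pi**2 * t) * math.sin(4 * math.pi * x)
            - 3 * math.exp(-128 * math.pi**2 * t) * math.sin(8 * math.pi * x)
            + 2 * math.exp(-200 * math.pi**2 * t) * math.sin(10 * math.pi * x))

def f(x):
    # initial condition (14.20)
    return (5 * math.sin(4 * math.pi * x) - 3 * math.sin(8 * math.pi * x)
            + 2 * math.sin(10 * math.pi * x))

print(max(abs(u(x, 0.0) - f(x)) for x in (0.1, 0.77, 1.5, 2.9)))  # 0 up to rounding
print(u(0.0, 0.2))  # boundary condition (14.19) holds for all t
```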

14.1.7 Example. Let us solve

0 < x < 3 and 0 < t < ∞ : ut = 2uxx (14.25)


0 < t < ∞ : u(0,t) = u(3,t) = 0 (14.26)
0 < x < 3 : u(x, 0) = 25 (14.27)
152 Chapter 14. PDEs for Diffusion

The solution has the form

u(x,t) = ∑_{n=1}^∞ e^{−2n²π²t/9} An sin(nπx/3) (14.28)

For the initial condition we need


25 = u(x,0) = ∑_{n=1}^∞ An sin(nπx/3). (14.29)
Hence

An = (2/3) ∫₀³ 25 sin(nπx/3) dx = 50 (1 − cos nπ)/(nπ)

and the solution is

u(x,t) = 50 ∑_{n=1}^∞ ((1 − cos nπ)/(nπ)) e^{−2n²π²t/9} sin(nπx/3). (14.30)
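A numeric look at (14.30): the partial sums converge to 25 in the interior of (0, 3), though slowly, since the odd extension of the constant 25 has jumps at the ends (the Gibbs phenomenon). A minimal sketch (the truncation level is an arbitrary choice):

```python
import math

def u(x, t, n_max=20000):
    # partial sum of (14.30); only odd n contribute
    return sum((50 * (1 - math.cos(n * math.pi)) / (n * math.pi))
               * math.exp(-2 * n**2 * math.pi**2 * t / 9)
               * math.sin(n * math.pi * x / 3)
               for n in range(1, n_max + 1))

print(u(1.5, 0.0))  # ≈ 25 at the midpoint (convergence is slower near the ends)
```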
14.1.8 We will now solve a problem with nonhomogeneous boundary conditions:

0 < x < 1 and 0 < t < ∞ : ut = a²uxx (14.31)
0 < t < ∞ : u(0,t) = k1 (14.32)
0 < t < ∞ : u(1,t) = k2 (14.33)
0 < x < 1 : u(x,0) = f(x) (14.34)

This problem corresponds to the situation in which the ends of the rod are kept at
constant temperatures k1 and k2. If we try to solve by separation of variables we
will find that there exists no solution of the form u(x,t) = T(t)X(x).
However we can obtain the solution by applying a preliminary transform (which
is suggested by the nature of the physical problem). Namely, it is reasonable to
assume that at steady state the rod temperature is given by the function

w(x) = k1 + (k2 − k1 ) x, (14.35)

i.e., a uniform temperature variation along the rod. Note that w(x) satisfies (14.32)
and (14.33) “by construction”. Also w(x) satisfies (14.31) because

wt(x) = wxx(x) = 0. (14.36)

But w(x) does not satisfy (14.34).


Let us now check whether there exists some v(x,t) such that

u(x,t) = w(x) + v(x,t) (14.37)

solves the original problem. Replacing u(x,t) by w(x) + v(x,t) in (14.31), since
wt(x) = wxx(x) = 0, we get

vt = a²vxx. (14.38)
We also get

v(0,t) = v(1,t) = 0 (14.39)


v(x, 0) = f (x) − k1 − (k2 − k1 ) x = h(x) (14.40)

But (14.38)–(14.40) has the same form as the problem of 14.1.4, which we know
how to solve. Its solution is

v(x,t) = ∑_{n=1}^∞ e^{−a²n²π²t} An sin(nπx) (14.41)

∀n ∈ {1, 2, ...} : An = 2∫₀¹ h(x) sin(nπx) dx. (14.42)

Hence (14.31)–(14.34) has solution

u(x,t) = w(x) + v(x,t)

where

u(x,t) = k1 + (k2 − k1)x + ∑_{n=1}^∞ An e^{−a²n²π²t} sin(nπx) (14.43)

∀n ∈ {1, 2, ...} : An = 2∫₀¹ (f(x) − k1 − (k2 − k1)x) sin(nπx) dx. (14.44)
14.1.9 Example. Let us solve

0 < x < 3 and 0 < t < ∞ : ut = 2uxx (14.45)
0 < t < ∞ : u(0,t) = 10 (14.46)
0 < t < ∞ : u(3,t) = 40 (14.47)
0 < x < 3 : u(x,0) = 25 (14.48)

We set u(x,t) = w(x) + v(x,t) and

w(x) = 10 + 10x. (14.49)

Now we solve

0 < x < 3 and 0 < t < ∞ : vt = 2vxx (14.50)
0 < t < ∞ : v(0,t) = 0 (14.51)
0 < t < ∞ : v(3,t) = 0 (14.52)
0 < x < 3 : v(x,0) = 15 − 10x (14.53)

which gives

v(x,t) = ∑_{n=1}^∞ An e^{−2n²π²t/9} sin(nπx/3) (14.54)

with

An = (2/3) ∫₀³ (15 − 10x) sin(nπx/3) dx = (30/(nπ)) (1 + cos(nπ)). (14.55)
Finally the full solution is

u(x,t) = 10 + 10x + ∑_{n=1}^∞ (30/(nπ)) (1 + cos(nπ)) e^{−2n²π²t/9} sin(nπx/3) (14.56)

14.1.10 Let us now solve

0 < x < 1 and 0 < t < ∞ : ut = a²uxx − cu (14.57)
0 < t < ∞ : u(0,t) = 0 (14.58)
0 < t < ∞ : u(1,t) = 0 (14.59)
0 < x < 1 : u(x,0) = f(x). (14.60)
The term −cu (where we assume c ≥ 0) corresponds to heat radiation to the
environment, i.e., the rod is not fully thermally insulated.
Mathematically, we expect −cu to generate in the solution u(x,t) a term of the
form e^{−ct} (this is the attenuation effect). Let us check whether this assumption
helps in simplifying the problem. Assume that the solution has the form

u(x,t) = e^{−ct} v(x,t). (14.61)
Indeed, we then get

ut = −ce^{−ct}v + e^{−ct}vt, uxx = e^{−ct}vxx (14.62)

and hence

ut = a²uxx − cu ⇒ (14.63)
−ce^{−ct}v + e^{−ct}vt = a²e^{−ct}vxx − ce^{−ct}v ⇒ (14.64)
e^{−ct}vt = e^{−ct}a²vxx. (14.65)
We also see that

0 = u(0,t) = e^{−ct}v(0,t) ⇒ v(0,t) = 0 (14.66)
0 = u(1,t) = e^{−ct}v(1,t) ⇒ v(1,t) = 0 (14.67)

and

f(x) = u(x,0) = e^{−c·0}v(x,0) = v(x,0). (14.68)
In short, v(x,t) is a solution of the problem

0 < x < 1 and 0 < t < ∞ : vt = a²vxx (14.69)
0 < t < ∞ : v(0,t) = v(1,t) = 0 (14.70)
0 < x < 1 : v(x,0) = f(x) (14.71)
which we have already solved; hence u(x,t) is given by

u(x,t) = e^{−ct} ∑_{n=1}^∞ e^{−a²n²π²t} An sin(nπx) (14.72)

∀n ∈ {1, 2, ...} : An = 2∫₀¹ f(x) sin(nπx) dx. (14.73)

14.1.11 A harder problem is:

0 < x < 1, 0 < t < ∞ : ut = uxx − cu (14.74)
0 < t < ∞ : u(0,t) = k0 (14.75)
0 < t < ∞ : u(1,t) = k1 (14.76)
0 < x < 1 : u(x,0) = 0. (14.77)

We assume that

u(x,t) = w(x) + v(x,t) e^{−ct}

where w(x) satisfies

0 < x < 1 : wxx − cw = 0, w(0) = k0, w(1) = k1.

Then we have

ut = 0 + vt e^{−ct} − cve^{−ct}
uxx = wxx + vxx e^{−ct}
−cu = −cw − cve^{−ct}.

Hence for 0 < x < 1, 0 < t < ∞ we have

ut = uxx − cu ⇒
vt e^{−ct} − cve^{−ct} = wxx + vxx e^{−ct} − cw − cve^{−ct} ⇒
vt e^{−ct} = vxx e^{−ct} + wxx − cw ⇒
vt = vxx.

And for 0 < x < 1 we have

0 = u(x,0) = w(x) + v(x,0) e^{−c·0} ⇒ v(x,0) = −w(x).

Finally, for 0 < t < ∞ we have

k0 = u(0,t) = w(0) + v(0,t) e^{−ct} ⇒
k0 = k0 + v(0,t) e^{−ct} ⇒
0 = v(0,t)

and we can similarly show that for 0 < t < ∞ we have

0 = v(1,t).
h

Summarizing, we will solve the subproblems

0 < x < 1 : wxx − cw = 0, w(0) = k0, w(1) = k1 (14.78)

and

0 < x < 1, 0 < t < ∞ : vt = vxx (14.79)
0 < t < ∞ : v(0,t) = 0 (14.80)
0 < t < ∞ : v(1,t) = 0 (14.81)
0 < x < 1 : v(x,0) = −w(x). (14.82)

Let us first solve (14.78). We have

w(x) = a sinh(√c x) + b cosh(√c x), w(0) = k0, w(1) = k1

hence

a sinh(0) + b cosh(0) = k0
a sinh(√c) + b cosh(√c) = k1

The solution of the system is

b = k0, a = −(k0 cosh(√c) − k1)/sinh(√c)

hence

w(x) = k0 cosh(√c x) − ((k0 cosh(√c) − k1)/sinh(√c)) sinh(√c x).
Next we solve (14.79)–(14.82). With separation of variables we get v(x,t) = X(x)T(t)
and

T′/T = X″/X = −b².

Hence

T(t) = e^{−b²t}
X(x) = A sin(bx) + B cos(bx).

To satisfy X(0) = X(1) = 0 we take bn = nπ (n = 1, 2, ...) and Bn = 0 for all n. In
conclusion, a general solution is

v(x,t) = ∑_{n=1}^∞ An e^{−n²π²t} sin(nπx)
and we also have

−k0 cosh(√c x) + ((k0 cosh(√c) − k1)/sinh(√c)) sinh(√c x) = −w(x) = v(x,0) = ∑_{n=1}^∞ An sin(nπx)

Hence, we have

∀n ∈ {1, 2, ...} : An = 2∫₀¹ (−k0 cosh(√c x) + ((k0 cosh(√c) − k1)/sinh(√c)) sinh(√c x)) sin(nπx) dx.
or

∀n ∈ {1, 2, ...} : An = −2k0 ∫₀¹ cosh(√c x) sin(nπx) dx + 2 ((k0 cosh(√c) − k1)/sinh(√c)) ∫₀¹ sinh(√c x) sin(nπx) dx.

Finally, (14.74)-(14.77) has solution

u(x,t) = k0 cosh(√c x) − ((k0 cosh(√c) − k1)/sinh(√c)) sinh(√c x) + (∑_{n=1}^∞ An e^{−n²π²t} sin(nπx)) · e^{−ct}.

As t → ∞ we get

lim_{t→∞} u(x,t) = k0 cosh(√c x) − ((k0 cosh(√c) − k1)/sinh(√c)) sinh(√c x) + lim_{t→∞} (∑_{n=1}^∞ An e^{−n²π²t} sin(nπx)) · e^{−ct}
                 = k0 cosh(√c x) − ((k0 cosh(√c) − k1)/sinh(√c)) sinh(√c x) = w(x).

If the attenuation factor is small, c ≈ 0, then

cosh(√c x) = (e^{√c x} + e^{−√c x})/2 ≈ ((1 + √c x) + (1 − √c x))/2 = 1
sinh(√c x) = (e^{√c x} − e^{−√c x})/2 ≈ ((1 + √c x) − (1 − √c x))/2 = √c x

and cosh(√c) ≈ 1, sinh(√c) ≈ √c. Hence

lim_{t→∞} u(x,t) = k0 cosh(√c x) − ((k0 cosh(√c) − k1)/sinh(√c)) sinh(√c x)
                 ≈ k0 · 1 − ((k0 · 1 − k1)/√c) √c x
                 = k0 − (k0 − k1) x

which approximates the steady state of the already solved problem:

0 < x < 1, 0 < t < ∞ : zt = zxx (14.83)


0 < t < ∞ : z(0,t) = k0 (14.84)
0 < t < ∞ : z(1,t) = k1 (14.85)
0 < x < 1 : z (x, 0) = 0. (14.86)
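A quick numeric check of the steady state w(x) found above (the values of c, k0, k1 below are arbitrary assumptions): it matches the boundary values and satisfies wxx − cw = 0 up to finite-difference error.

```python
import math

c, k0, k1 = 2.0, 10.0, 40.0   # arbitrary sample values
rc = math.sqrt(c)

def w(x):
    # steady state of the radiating rod: w'' - c w = 0, w(0) = k0, w(1) = k1
    return (k0 * math.cosh(rc * x)
            - (k0 * math.cosh(rc) - k1) / math.sinh(rc) * math.sinh(rc * x))

print(w(0.0), w(1.0))   # boundary values k0 and k1

h, x = 1e-4, 0.37
wxx = (w(x + h) - 2 * w(x) + w(x - h)) / h**2
print(wxx - c * w(x))   # ≈ 0
```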

14.1.12 So far we have examined problems in which the boundary conditions are
on u(x,t). This is not always the case; in some physical problems we may have the
condition that one or both ends of the rod are insulated. This means that ux (x,t)

is zero at the respective end (there is no heat flow across this end). For example,
consider the following problem.

0 < x < 1 and 0 < t < ∞ : ut = a2 uxx (14.87)


0 < t < ∞ : ux (0,t) = ux (1,t) = 0 (14.88)
0 < x < 1 : u(x, 0) = f (x) (14.89)

Using the separation of variables assumption, after some algebra we conclude that

u(x,t) = T(t)X(x) = e^{−a²b²t} (A sin(bx) + B cos(bx))

solves (14.87). To satisfy the boundary conditions (14.88) we must have

0 = ux(0,t) = b(A cos(0) − B sin(0)) = bA (14.90)
0 = ux(1,t) = b(A cos(b) − B sin(b)) = −bB sin(b) (14.91)

which implies

b ∈ {0, ±π, ±2π, ...}. (14.92)
Hence for every integer n, the function un(x,t) = e^{−a²n²π²t} Bn cos(nπx) satisfies both
(14.87) and (14.88) and the same is true for

u(x,t) = B0/2 + ∑_{n=1}^∞ e^{−a²n²π²t} Bn cos(nπx). (14.93)
158 Chapter 14. PDEs for Diffusion

To also satisfy the initial condition (14.89) we must have

0 < x < 1 : f(x) = u(x,0) = B0/2 + ∑_{n=1}^∞ Bn cos(nπx). (14.94)

We can achieve this by defining g(x) to be the even extension of f(x) on [−1, 1];
then we find Bn from

Bn = ∫_{−1}^1 g(x) cos(nπx) dx = 2∫₀¹ f(x) cos(nπx) dx. (14.95)

In conclusion

u(x,t) = B0/2 + ∑_{n=1}^∞ Bn e^{−a²n²π²t} cos(nπx) (14.96)

∀n ∈ {0, 1, 2, ...} : Bn = 2∫₀¹ f(x) cos(nπx) dx (14.97)

solves (14.87)–(14.89).
14.1.13 Example. The solution of

0 < x < 1 and 0 < t < ∞ : ut = uxx (14.98)


0 < t < ∞ : ux (0,t) = ux (1,t) = 0 (14.99)
0 < x < 1 : u(x, 0) = x (14.100)
is

u(x,t) = B0/2 + ∑_{n=1}^∞ e^{−n²π²t} Bn cos(nπx) (14.101)

B0 = 1 (14.102)

Bn = 2∫₀¹ x cos(nπx) dx = 2 (cos nπ − 1)/(n²π²) (n = 1, 2, ...) (14.103)
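The coefficients (14.102)–(14.103) can be confirmed by quadrature; a minimal sketch using the midpoint rule (the quadrature resolution is an arbitrary choice):

```python
import math

def quad(g, a, b, n=4000):
    # midpoint rule
    h = (b - a) / n
    return h * sum(g(a + (i + 0.5) * h) for i in range(n))

# eq. (14.103): B_n = 2 * integral_0^1 x cos(n pi x) dx
for n in range(1, 6):
    numeric = 2 * quad(lambda x: x * math.cos(n * math.pi * x), 0, 1)
    closed = 2 * (math.cos(n * math.pi) - 1) / (n * math.pi) ** 2
    print(n, numeric, closed)

print(2 * quad(lambda x: x, 0, 1))   # B_0 = 1, eq. (14.102)
```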
14.1.14 Example. The solution of

0 < x < 1 and 0 < t < ∞ : ut = uxx (14.104)
0 < t < ∞ : ux(0,t) = ux(1,t) = 0 (14.105)
0 < x < 1 : u(x,0) = 1 + x² (14.106)
is

u(x,t) = B0/2 + ∑_{n=1}^∞ e^{−n²π²t} Bn cos(nπx) (14.107)

B0 = 8/3, (14.108)

n ∈ {1, 2, ...} : Bn = 2∫₀¹ (1 + x²) cos(nπx) dx = 4 cos(nπ)/(n²π²). (14.109)
0 n π

14.1.15 We now consider a problem of heat transmission in an infinite rod:

−∞ < x < ∞, 0 < t < ∞ : ut = a²uxx, (14.110)
−∞ < x < ∞ : u(x,0) = f(x). (14.111)

Because x ∈ (−∞, ∞) we will apply the Fourier transform with respect to x. In other
words, we define

U(w,t) = F(u(x,t)) = ∫_{−∞}^∞ e^{−iwx} u(x,t) dx. (14.112)

Then (14.110)-(14.111) becomes

Ut = −w²a²U, (14.113)
U(w,0) = F(f(x)) = F(w). (14.114)

Now, (14.113) has the solution

U(w,t) = C(w) e^{−w²a²t} (14.115)

and, since U(w,0) = F(w), we get

U(w,t) = F(w) e^{−w²a²t}. (14.116)

Note that (14.116) is a product of transforms, i.e., the transform of a convolution. Since

e^{−w²a²t} = F( (1/(2a√(πt))) e^{−x²/(4a²t)} ), (14.117)

the inverse of (14.116) gives

u(x,t) = f(x) ∗ (1/(2a√(πt))) e^{−x²/(4a²t)}, (14.118)

i.e.,

u(x,t) = (1/(2a√(πt))) ∫_{−∞}^∞ f(z) e^{−(x−z)²/(4a²t)} dz. (14.119)

This is the solution of (14.110)-(14.111).
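One basic property of the kernel in (14.118) worth checking numerically: for every t > 0 it integrates to 1 over the whole rod, so diffusion redistributes the initial heat without creating or destroying it. A sketch (the grid and cutoff are arbitrary choices):

```python
import math

def kernel(x, t, a=1.0):
    # the heat kernel from (14.118)
    return math.exp(-x**2 / (4 * a**2 * t)) / (2 * a * math.sqrt(math.pi * t))

for t in (0.01, 0.5, 3.0):
    h, L = 0.01, 30.0
    mass = h * sum(kernel(-L + (i + 0.5) * h, t) for i in range(int(2 * L / h)))
    print(t, mass)   # ≈ 1 for every t
```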


14.1.16 We can also solve (14.110), (14.111) using the Laplace transform. Before
solving the general case of (14.110), (14.111), let us consider a special case:

−∞ < x < ∞, 0 < t < ∞ : vt = a²vxx (14.120)
−∞ < x < ∞ : v(x,0) = δ(x) (14.121)

(i.e., we assume f(x) = δ(x)). Defining

V(x,s) = L(v(x,t)) (14.122)

we get the transform of (14.120):

sV − v(x,0) = a²Vxx (14.123)

Using (14.121) we get

a²Vxx − sV = −δ(x). (14.124)

A bounded solution of (14.124) is

V(x,s) = e^{−√s |x|/a} / (2a√s); (14.125)

inverting we get

v(x,t) = L⁻¹( e^{−√s |x|/a} / (2a√s) ) = e^{−x²/(4a²t)} / (2a√(πt)). (14.126)

14.1.17 Before solving the general problem, consider the physical interpretation
of (14.126). It gives the temperature u(x,t) if we place a “point heat source” at
x = 0. Note that e^{−x²/(4a²t)}/(2a√(πt)) is a Gaussian bell curve, centered at 0 and with
“spread” 2a√t increasing in time. This tells us that the initial “heat spike” is diffusing
in time; this is very reasonable in the context of heat transmission. The function
e^{−x²/(4a²t)}/(2a√(πt)) is called the heat kernel, with basic characteristic the smoothing
of the initial conditions.
14.1.18 Let us return to the general problem. We can represent the initial
condition of (14.110)-(14.111) as a convolution u(x,0) = f(x) = f(x) ∗ δ(x). Since the
equation is linear and a convolution is a limit of linear combinations of shifted point
sources, the solution u(x,t) will also be the convolution of f(x) with v(x,t):

u(x,t) = f(x) ∗ v(x,t) = (1/(2a√(πt))) ∫_{−∞}^∞ f(z) e^{−(x−z)²/(4a²t)} dz. (14.127)

This gives the solution of (14.110), (14.111) by Laplace transform; it is the same as
(14.119), the solution by Fourier transform.
14.1.19 Example. If the initial condition is f(x) = e^{−x²}, then (14.110)-(14.111)
has solution

u(x,t) = (1/(2a√(πt))) ∫_{−∞}^∞ e^{−z²} e^{−(x−z)²/(4a²t)} dz = e^{−x²/(1+4a²t)} / √(1 + 4a²t). (14.128)
For every x we have

lim_{t→∞} u(x,t) = lim_{t→∞} e^{−x²/(1+4a²t)} / √(1 + 4a²t) = 0 (14.129)

which means that in steady state the rod reaches zero temperature. Does this
make physical sense?
14.1.20 Example. If the initial condition is

f(x) = 1 for |x| ≤ 1, 0 for |x| > 1, (14.130)

then the solution of (14.110)-(14.111) is

u(x,t) = (1/(2a√(πt))) ∫_{−∞}^∞ f(z) e^{−(x−z)²/(4a²t)} dz = (1/2) (erf((x+1)/(2a√t)) − erf((x−1)/(2a√t))). (14.131)

where the error function erf(x) is

erf(x) := (2/√π) ∫₀ˣ e^{−z²} dz. (14.132)

Once again we see that

lim_{t→∞} u(x,t) = lim_{t→∞} (1/2) (erf((x+1)/(2a√t)) − erf((x−1)/(2a√t))) = 0. (14.133)
14.1.21 Returning to the general solution

u(x,t) = (1/(2a√(πt))) ∫_{−∞}^∞ f(z) e^{−(x−z)²/(4a²t)} dz (14.134)

we see that the temperature at (x,t) is a weighted sum of the temperatures at
all points of the (infinite) rod; the weight on the influence of point z to point x is
the value of the kernel e^{−(x−z)²/(4a²t)}; hence the influence of z on x decreases with
distance, but increases with time. What physical intuition do you see in this?
14.1.22 Here is yet another approach to the infinite rod heat diffusion problem:

−∞ < x < ∞, 0 < t < ∞ : ut = a²uxx (14.135)
−∞ < x < ∞ : u(x,0) = f(x). (14.136)

We will now use separation of variables. Assuming

u(x,t) = X(x)T(t) (14.137)

and applying the usual steps, we get

T′(t) = −a²b²T(t)
X″(x) = −b²X(x) (14.138)

and conclude that any function of the form

(A(b) sin(bx) + B(b) cos(bx)) e^{−a²b²t} (14.139)

is a solution of (14.135) and so is the linear combination

u(x,t) = ∫₀^∞ (A(b) sin(bx) + B(b) cos(bx)) e^{−a²b²t} db. (14.140)
But, in the absence of boundary conditions, b can take any value in R. Hence, to
have

f(x) = u(x,0) = ∫₀^∞ (A(b) sin(bx) + B(b) cos(bx)) db

we choose A(b), B(b) to be the trigonometric Fourier transforms:

A(b) = (1/π) ∫_{−∞}^∞ f(z) sin(bz) dz,
B(b) = (1/π) ∫_{−∞}^∞ f(z) cos(bz) dz.
Finally, the solution is

u(x,t) = (1/π) ∫₀^∞ [ (∫_{−∞}^∞ f(z) sin(bz) dz) sin(bx) + (∫_{−∞}^∞ f(z) cos(bz) dz) cos(bx) ] e^{−a²b²t} db. (14.141)

14.1.23 To see the connection between (14.141) and the solution

u(x,t) = (1/(2a√(πt))) ∫_{−∞}^∞ f(z) e^{−(x−z)²/(4a²t)} dz (14.142)

obtained by the Laplace and Fourier transforms, let us change the order of
integration in (14.141), to get

u(x,t) = (1/π) ∫_{−∞}^∞ f(z) ( ∫₀^∞ [sin(bz) sin(bx) + cos(bz) cos(bx)] e^{−a²b²t} db ) dz
       = (1/π) ∫_{−∞}^∞ f(z) ( ∫₀^∞ cos(b[z − x]) e^{−a²b²t} db ) dz. (14.143)

Note that

∫₀^∞ cos(b[z − x]) e^{−a²b²t} db = Re( ∫₀^∞ e^{ib[z−x]} e^{−a²b²t} db ).

This integral can be computed by complex integration and turns out to be

∫₀^∞ cos(b[z − x]) e^{−a²b²t} db = (√π/(2a√t)) e^{−(x−z)²/(4a²t)}.

Replacing in (14.143) we once again get the solution in terms of the heat kernel

u(x,t) = (1/(2a√(πt))) ∫_{−∞}^∞ f(z) e^{−(x−z)²/(4a²t)} dz. (14.144)

Hence the application of integral transforms can be seen as a limiting case of
separation of variables.
14.1.24 Let us now return to our claim of Chapter 11: the Laplace equation
describes phenomena in equilibrium. Essentially, we have already seen why this
is the case. Namely, in all the cases we have studied in this chapter the solution
u(x,t) tends to a limit:

lim_{t→∞} u(x,t) = ũ(x)

and the limit function satisfies

ũxx = 0

(why?). This means that ũ(x) is the solution of the one-dimensional Laplace equation
(satisfying the appropriate boundary conditions).
14.1.25 In short, the steady-state limit of the solution of the diffusion equation is
the solution of the Laplace equation. In this chapter we have only seen examples
of this involving one spatial variable x but, as we will see later, it is also true for
more than one variable (x, y, ...).
14.1.26 Finally let us solve the heat equation on a rectangle (SchFourier 41, 2.29):

ut = κ(uxx + uyy)
u(0,y,t) = u(1,y,t) = u(x,0,t) = u(x,1,t) = 0
u(x,y,0) = F(x,y)
|u(x,y,t)| < M

We assume

u(x,y,t) = X(x)Y(y)T(t)

and then we get

XY T′ = κ(X″Y T + XY″T)

or, dividing by κXY T,

T′/(κT) = X″/X + Y″/Y
So we can set

T′/(κT) = −λ² = X″/X + Y″/Y

to get

T′ = −κλ²T
−λ² = X″/X + Y″/Y

The first equation has solution T = e^{−κλ²t}. The second equation can be written as

X″/X = −Y″/Y − λ²

or

X″/X = −µ² = −Y″/Y − λ²

which gives

X″ + µ²X = 0
Y″ + (λ² − µ²)Y = 0


These equations have solutions

X(x) = aµ cos(µx) + bµ sin(µx)
Y(y) = c_{µ,λ} cos(√(λ² − µ²) y) + d_{µ,λ} sin(√(λ² − µ²) y)
From u(0,y,t) = 0 we get aµ = 0. From u(1,y,t) = 0 we get µ = mπ. From u(x,0,t) = 0 we
get c_{µ,λ} = 0. From u(x,1,t) = 0 we get √(λ² − µ²) = nπ. So finally the solution is
u(x,y,t) = ∑_{m=1}^∞ ∑_{n=1}^∞ B_{m,n} sin(mπx) sin(nπy) e^{−κ(m²+n²)π²t}

Letting t = 0 we get

F(x,y) = u(x,y,0) = ∑_{m=1}^∞ ∑_{n=1}^∞ B_{m,n} sin(mπx) sin(nπy)

and so

B_{m,n} = 4 ∫₀¹∫₀¹ F(x,y) sin(mπx) sin(nπy) dx dy
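The formula for B_{m,n} is easy to test numerically: if F(x,y) is itself one of the product modes, the double integral must pick out exactly that mode. A sketch with the sample data F(x,y) = sin(πx) sin(2πy) (an assumption, chosen so that B_{1,2} = 1):

```python
import math

def B(m, n, F, nq=400):
    # B_{m,n} = 4 * double integral over [0,1]^2 of F(x,y) sin(m pi x) sin(n pi y)
    h = 1.0 / nq
    total = 0.0
    for i in range(nq):
        x = (i + 0.5) * h
        sx = math.sin(m * math.pi * x)
        for j in range(nq):
            y = (j + 0.5) * h
            total += F(x, y) * sx * math.sin(n * math.pi * y)
    return 4 * total * h * h

F = lambda x, y: math.sin(math.pi * x) * math.sin(2 * math.pi * y)  # sample data
print(B(1, 2, F))   # ≈ 1: F is exactly the (1, 2) mode
print(B(2, 2, F))   # ≈ 0: orthogonality
```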

14.2 Solved Problems

14.2.1 Problem. Solve

∀t > 0, x ∈ (0, L) : ∂u/∂t = k ∂²u/∂x² (14.145)
∀t > 0 : u(0,t) = 0, u(L,t) = 0 (14.146)
∀x ∈ (0, L) : u(x,0) = 4 sin(3πx/L) (14.147)
Solution. Here we can simply take

B3 = 4, Bn = 0 when n ≠ 3

and get the solution

u(x,t) = 4 sin(3πx/L) e^{−k(3π/L)²t}
14.2.2 Problem. Solve

∀t > 0, x ∈ (0, 3) : ∂u/∂t = 2 ∂²u/∂x² (14.148)
∀x ∈ (0, 3) : u(x,0) = 25 (14.149)
∀t > 0 : u(0,t) = 10, u(3,t) = 40 (14.150)

Solution. To solve this we assume u(x,t) = v(x,t) + w(x). Then the original
problem becomes

∀t > 0, x ∈ (0, 3) : ∂v/∂t = 2 ∂²v/∂x² + 2 d²w/dx² (14.151)
∀x ∈ (0, 3) : v(x,0) + w(x) = 25 (14.152)
∀t > 0 : v(0,t) + w(0) = 10, v(3,t) + w(3) = 40 (14.153)

We find w(x) to be the solution of the problem

0 = d²w/dx², w(0) = 10, w(3) = 40; (14.154)

obviously

w(x) = 10 + 10x.
Then v(x,t) solves the problem

∀t > 0, x ∈ (0, 3) : ∂v/∂t = 2 ∂²v/∂x² (14.155)
∀x ∈ (0, 3) : v(x,0) = 25 − (10 + 10x) = 15 − 10x (14.156)
∀t > 0 : v(0,t) = 0, v(3,t) = 0 (14.157)

Now we get

v(x,t) = ∑_{n=1}^∞ Bn sin(nπx/3) e^{−2n²π²t/9}

and we must have

15 − 10x = v(x,0) = ∑_{n=1}^∞ Bn sin(nπx/3).

From which we get

Bn = (2/3) ∫₀³ (15 − 10x) sin(nπx/3) dx = (30/(nπ)) (1 + cos(nπ)).
So the required solution is

u(x,t) = w(x) + v(x,t)
       = 10 + 10x + ∑_{n=1}^∞ (30/(nπ)) (1 + cos(nπ)) sin(nπx/3) e^{−2n²π²t/9}

Note that

lim_{t→∞} u(x,t) = 10 + 10x.

This is the steady state temperature.
14.2.3 Problem. Solve

∀t > 0, x ∈ (0, L) : ∂u/∂t = k ∂²u/∂x² (14.158)
∀t > 0 : u(0,t) = 0, u(L,t) = 0 (14.159)
∀x ∈ (0, L) : u(x,0) = 4 sin(3πx/L) + 7 sin(8πx/L) (14.160)

Solution. Here we can take

B3 = 4, B8 = 7, Bn = 0 when n ∉ {3, 8}

and get the solution

u(x,t) = 4 sin(3πx/L) e^{−k(3π/L)²t} + 7 sin(8πx/L) e^{−k(8π/L)²t}
14.2.4 Problem. Solve

∀t > 0, x ∈ (0, 1) : ∂u/∂t = ∂²u/∂x² (14.161)
∀t > 0 : u(0,t) = 0, u(1,t) = 0 (14.162)
∀x ∈ (0, 1) : u(x,0) = 100 (14.163)

Solution. We must have

100 = u(x,0) = ∑_{n=1}^∞ Bn sin(nπx)

Then we will also have

∫₀¹ 100 sin(mπx) dx = ∫₀¹ (∑_{n=1}^∞ Bn sin(nπx)) sin(mπx) dx = ∑_{n=1}^∞ Bn ∫₀¹ sin(nπx) sin(mπx) dx

And since

∫₀¹ 100 sin(mπx) dx = 100 [−cos(mπx)/(mπ)]₀¹ = (100/(mπ)) (1 − cos(mπ))

∫₀¹ sin(nπx) sin(mπx) dx = 0 for m ≠ n, 1/2 for m = n

we get

(100/(mπ)) (1 − cos(mπ)) = (1/2) Bm ⇒ Bm = (200/(mπ)) (1 − cos(mπ)).

Finally the solution is

u(x,t) = ∑_{n=1}^∞ (200/(nπ)) (1 − cos(nπ)) sin(nπx) e^{−(nπ)²t}
       = (400/π) sin(πx) e^{−π²t} + (400/(3π)) sin(3πx) e^{−(3π)²t} + (400/(5π)) sin(5πx) e^{−(5π)²t} + ...

Clearly we have expanded f(x) = 100 (actually its odd extension) in a Fourier sine
series.
14.2.5 Problem. Solve

∀t > 0, x ∈ (0, 25) : ∂u/∂t = 100 ∂²u/∂x² (14.164)
∀t > 0 : ux(0,t) = 0, ux(25,t) = 0 (14.165)
∀x ∈ (0, 25) : u(x,0) = x (14.166)

Solution. Here

A0 = (1/25) ∫₀²⁵ x dx = 25/2

Am = (2/25) ∫₀²⁵ x cos(mπx/25) dx = (50/(m²π²)) (−1 + (−1)^m)

and

u(x,t) = 25/2 + ∑_{m=1}^∞ (50/(m²π²)) (−1 + (−1)^m) cos(mπx/25) e^{−100(mπ/25)²t}
       = 25/2 − (100/π²) cos(πx/25) e^{−100π²t/625} − (100/(9π²)) cos(3πx/25) e^{−900π²t/625} − (100/(25π²)) cos(5πx/25) e^{−2500π²t/625} − ...

14.2.6 Problem. Solve the heat diffusion problem on a ring:

∀t > 0, x ∈ (−L, L) : ∂u/∂t = k ∂²u/∂x² (14.167)
∀x ∈ (−L, L) : u(x,0) = f(x) (14.168)
∀t > 0 : u(−L,t) = u(L,t) (14.169)
∀t > 0 : ux(−L,t) = ux(L,t) (14.170)

Solution. Assume

u(x,t) = φ(x) g(t).

Then

(1/(kg)) dg/dt = −λ = (1/φ) d²φ/dx²

As usual

g(t) = g(0) e^{−λkt}

and, from

d²φ/dx² + λφ = 0,

we get

φ(x) = A cos(√λ x) + B sin(√λ x)

with boundary conditions

A cos(−√λ L) + B sin(−√λ L) = A cos(√λ L) + B sin(√λ L)
−A sin(−√λ L) + B cos(−√λ L) = −A sin(√λ L) + B cos(√λ L).

From the first one we get

B sin(√λ L) = 0

so √λ = nπ/L, which also satisfies the second.


So finally we get solutions of the form

u(x,t) = A0 + ∑_{n=1}^∞ An cos(nπx/L) e^{−k(nπ/L)²t} + ∑_{n=1}^∞ Bn sin(nπx/L) e^{−k(nπ/L)²t}

We still have to satisfy (14.168):

∀x ∈ (−L, L) : u(x,0) = f(x)

We must have

f(x) = A0 + ∑_{n=1}^∞ An cos(nπx/L) + ∑_{n=1}^∞ Bn sin(nπx/L)

So we get

A0 = (1/(2L)) ∫_{−L}^L f(x) dx
Am = (1/L) ∫_{−L}^L f(x) cos(mπx/L) dx
Bm = (1/L) ∫_{−L}^L f(x) sin(mπx/L) dx

14.2.7 Problem. Solve the problem

∀t > 0, x ∈ (−π, π) : ∂u/∂t = k ∂²u/∂x² (14.171)
∀x ∈ (−π, π) : u(x,0) = x² − π² (14.172)
∀t > 0 : u(−π,t) = u(π,t) (14.173)
∀t > 0 : ux(−π,t) = ux(π,t) (14.174)

Solution. Applying the previous results, we have

A0 = (1/(2π)) ∫_{−π}^π (x² − π²) dx = −(2/3)π²

Am = (1/π) ∫_{−π}^π (x² − π²) cos(mx) dx = (−1)^m 4/m²

Bm = 0

and

u(x,t) = −(2/3)π² − 4 cos(x) e^{−kt} + cos(2x) e^{−4kt} − (4/9) cos(3x) e^{−9kt} + ...

14.2.8 Problem. Solve

0 < x, 0 < t : ut = kuxx
0 < x : u(x,0) = 0
0 < t : u(0,t) = g(t)
0 < x, 0 < t : |u(x,t)| < M
Solution. Taking the Laplace transform with respect to t we get

sU(x,s) − u(x,0) = kUxx(x,s)
U(0,s) = G(s)

Then

U(x,s) = c1 e^{√(s/k) x} + c2 e^{−√(s/k) x}

and from boundedness we get c1 = 0 and from the boundary condition

U(x,s) = G(s) e^{−√(s/k) x}

Then we get

h(t) = L⁻¹( e^{−√(s/k) x} ) = (x/(2√(πk))) t^{−3/2} e^{−x²/(4kt)}

and

u(x,t) = g(t) ∗ h(t)
       = ∫₀ᵗ (x/(2√(πk))) u^{−3/2} e^{−x²/(4ku)} g(t − u) du = ∫₀ᵗ (x/(2√(πk))) (t − u)^{−3/2} e^{−x²/(4k(t−u))} g(u) du

14.2.9 Problem. Solve

0 < x < L, 0 < t : ut = kuxx
0 < x < L : u(x,0) = U0
0 < t : ux(0,t) = 0
0 < t : u(L,t) = U1

Solution. Taking the Laplace transform with respect to t we get

sU(x,s) − u(x,0) = kUxx(x,s), u(x,0) = U0

which becomes

Uxx(x,s) − (s/k) U(x,s) = −U0/k
Ux(0,s) = 0, U(L,s) = U1/s

Then

U(x,s) = c1 cosh(√(s/k) x) + c2 sinh(√(s/k) x) + U0/s

From Ux(0,s) = 0 we get c2 = 0 and from U(L,s) = U1/s we get

U(x,s) = U0/s + (U1 − U0) cosh(√(s/k) x) / (s cosh(√(s/k) L))

To invert this we use tables and get

u(x,t) = U0 + (U1 − U0) (1 + (4/π) ∑_{n=1}^∞ ((−1)^n/(2n−1)) cos((2n−1)πx/(2L)) e^{−(2n−1)²π²kt/(4L²)})

14.2.10 Problem. Solve

0 < x, 0 < t : ut = kuxx
0 < x : u(x,0) = 0
0 < t : u(0,t) = U0
0 < x, 0 < t : |u(x,t)| < M

Solution. Taking the Laplace transform with respect to t (and using u(x,0) = 0) we get

Uxx(x,s) − (s/k) U(x,s) = 0
U(0,s) = U0/s

Then

U(x,s) = c1 e^{√(s/k) x} + c2 e^{−√(s/k) x}

From boundedness we get c1 = 0 and from U(0,s) = U0/s we get

U(x,s) = (U0/s) e^{−√(s/k) x}

To invert this we use tables and get

u(x,t) = U0 erfc(x/(2√(kt))) = U0 (1 − (2/√π) ∫₀^{x/(2√(kt))} e^{−u²} du)
14.2.11 Problem. Solve

0 < x, 0 < t : ut = kuxx
0 < x : u(x,0) = 0
0 < t : u(0,t) = U0
0 < x, 0 < t : |u(x,t)| < M

using Fourier transforms.


Solution. Here it is important to remember the following facts:

Fs(du/dx) = −b Fc(u)
Fc(du/dx) = b Fs(u) − u(0)
Fs(d²u/dx²) = −b² Fs(u) + b u(0)
Fc(d²u/dx²) = −b² Fc(u) − u′(0)

Because the boundary value u(0,t) is known, we take the sine transform and get

Ut(b,t) = −kb² U(b,t) + kb u(0,t)
Ut(b,t) + kb² U(b,t) = kU0 b

Then

U(b,t) = c1 e^{−kb²t} + U0/b

Since we must have

U(b,0) = ∫₀^∞ u(x,0) sin(bx) dx = 0

we get

0 = U(b,0) = c1 + U0/b ⇒ c1 = −U0/b.

So finally

U(b,t) = (U0/b) (1 − e^{−kb²t})

and

u(x,t) = (2U0/π) ∫₀^∞ (1/b) (1 − e^{−kb²t}) sin(bx) db

Compare

Fourier Sol: u(x,t) = (2U0/π) ∫₀^∞ (1/b) (1 − e^{−kb²t}) sin(bx) db

Laplace Sol: u(x,t) = U0 (1 − (2/√π) ∫₀^{x/(2√(kt))} e^{−u²} du)

Can you show that these two are equivalent?
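The equivalence can be checked numerically. Using the Dirichlet integral ∫₀^∞ sin(bx)/b db = π/2 (for x > 0), the Fourier solution equals U0 minus a remainder whose integrand decays like e^{−kb²t} and is easy to integrate; it then matches the Laplace (erfc) form. The constants below are arbitrary sample values:

```python
import math

k, U0 = 1.0, 3.0     # sample values
x, t = 0.7, 0.25     # sample evaluation point

# Laplace form
u_laplace = U0 * math.erfc(x / (2 * math.sqrt(k * t)))

# Fourier form, rewritten as U0 minus a rapidly decaying remainder
B, n = 40.0, 200000
h = B / n
rest = h * sum(math.exp(-k * b * b * t) * math.sin(b * x) / b
               for b in ((i + 0.5) * h for i in range(n)))
u_fourier = U0 - (2 * U0 / math.pi) * rest

print(u_laplace, u_fourier)
```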

14.3 Unsolved Problems

1. Solve ut = uxx (0 < x < 1, 0 < t < ∞), u(0,t) = u(1,t) = 0 (0 < t < ∞), u(x,0) = 1 (0 < x < 1).
Ans. u(x,t) = (4/π) (e^{−π²t} sin(πx) + (1/3) e^{−9π²t} sin(3πx) + (1/5) e^{−25π²t} sin(5πx) + ...).
2. Solve ut = uxx (0 < x < 1, 0 < t < ∞), u(0,t) = u(1,t) = 0 (0 < t < ∞), u(x,0) = x − x² (0 < x < 1).
Ans. u(x,t) = (8/π³) (e^{−π²t} sin(πx) + (1/27) e^{−9π²t} sin(3πx) + (1/125) e^{−25π²t} sin(5πx) + ...).

3. Solve ut = 2uxx (0 < x < 4, 0 < t < ∞), u(0,t) = u(4,t) = 0 (0 < t < ∞), u(x,0) = 25x (0 < x < 4).
Ans. u(x,t) = −(200/π) ∑_{n=1}^∞ (cos(nπ)/n) e^{−n²π²t/8} sin(nπx/4).
4. Solve ut = 4uxx (0 < x < 1, 0 < t < ∞), u(0,t) = u(1,t) = 0 (0 < t < ∞), u(x,0) = x² − x³ (0 < x < 1).
Ans. u(x,t) = 4 ∑_{n=1}^∞ ((2(−1)^{n+1} − 1)/(n³π³)) e^{−4n²π²t} sin(nπx).
5. Solve ut = uxx (0 < x < π, 0 < t < ∞), u(0,t) = u(π,t) = 0 (0 < t < ∞), u(x,0) = sin³(x) (0 < x < π).
Ans. u(x,t) = (3/4) e^{−t} sin(x) − (1/4) e^{−9t} sin(3x).


6. Solve ut = uxx (0 < x < 1, 0 < t < ∞), u(0,t) = 0, u(1,t) = 1 (0 < t < ∞), u(x,0) = x² (0 < x < 1).
Ans. u(x,t) = x − (8/π³) (e^{−π²t} sin(πx) + (1/27) e^{−9π²t} sin(3πx) + (1/125) e^{−25π²t} sin(5πx) + ...).
7. Solve ut = uxx (0 < x < 1, 0 < t < ∞), u(0,t) = 1, u(1,t) = 2 (0 < t < ∞), u(x,0) = x + 1 + sin(πx) (0 < x < 1).
Ans. u(x,t) = 1 + x + e^{−π²t} sin(πx).
8. Solve ut = uxx (0 < x < 10, 0 < t < ∞), u(0,t) = 150, u(10,t) = 100 (0 < t < ∞), u(x,0) = 150 − 5x (0 < x < 10).
9. Solve ut = uxx (0 < x < 1, 0 < t < ∞), u(0,t) = 0, u(1,t) = 2 (0 < t < ∞), u(x,0) = x² · (1 − x) (0 < x < 1).
10. Solve ut = uxx (0 < x < L, 0 < t < ∞), u(0,t) = a, u(L,t) = b (0 < t < ∞), u(x,0) = sin(πx/L) (0 < x < L).


11. Solve ut = uxx − cu (0 < x < 1, 0 < t < ∞), u(0,t) = 0, u(1,t) = 0 (0 < t < ∞), u(x,0) = 1 (0 < x < 1).
Ans. u(x,t) = 2e^{−ct} ∑_{n=1}^∞ ((1 − cos nπ)/(nπ)) e^{−n²π²t} sin(nπx).
12. Solve ut = uxx − cu (0 < x < 1, 0 < t < ∞), u(0,t) = 0, u(1,t) = 0 (0 < t < ∞), u(x,0) = x · (1 − x) (0 < x < 1).
Ans. u(x,t) = 2e^{−ct} ∑_{n=1}^∞ ((2 − nπ sin nπ − 2 cos nπ)/(n³π³)) e^{−n²π²t} sin(nπx).
13. Solve ut = uxx − cu (0 < x < 1, 0 < t < ∞), u(0,t) = 0, u(1,t) = 0 (0 < t < ∞), u(x,0) = x² · (1 − x) (0 < x < 1).
Ans. u(x,t) = 2e^{−ct} ∑_{n=1}^∞ ((6 sin nπ − 4nπ cos nπ − n²π² sin nπ − 2nπ)/(n⁴π⁴)) e^{−n²π²t} sin(nπx).

14. Solve ut = uxx − u (0 < x < 1, 0 < t < ∞), u(0,t) = 1, u(1,t) = 2 (0 < t < ∞), u(x,0) = 0 (0 < x < 1).
15. Solve ut = uxx − u (0 < x < 1, 0 < t < ∞), u(0,t) = 1, u(1,t) = 2 (0 < t < ∞), u(x,0) = x + 1 (0 < x < 1).
16. Solve ut = uxx (0 < x < π, 0 < t < ∞), ux(0,t) = ux(π,t) = 0 (0 < t < ∞), u(x,0) = cos(x) (0 < x < π).
Ans. u(x,t) = e^{−t} cos(x).
17. Solve ut = uxx (0 < x < π, 0 < t < ∞), ux(0,t) = ux(π,t) = 0 (0 < t < ∞), u(x,0) = sin(x) (0 < x < π).
Ans. u(x,t) = 2/π − (2/π) ∑_{n=2}^∞ ((cos nπ + 1)/(n² − 1)) e^{−n²t} cos(nx).
18. Solve ut = uxx (0 < x < 1, 0 < t < ∞), ux(0,t) = ux(1,t) = 0 (0 < t < ∞), u(x,0) = x² (0 < x < 1).
Ans. u(x,t) = 1/3 + 4 ∑_{n=1}^∞ (cos(nπ)/(n²π²)) e^{−n²π²t} cos(nπx).
19. Solve ut = uxx (0 < x < 1, 0 < t < ∞), ux(0,t) = ux(1,t) = 0 (0 < t < ∞), u(x,0) = x · (1 − x) (0 < x < 1).
Ans. u(x,t) = 1/6 − 2 ∑_{n=1}^∞ ((cos(nπ) + 1)/(n²π²)) e^{−n²π²t} cos(nπx).

14.4 Advanced Problems


15. PDEs for Waves

Wave PDEs describe oscillatory phenomena (in both space and time). The classic
wave PDE is

∂²u/∂t² = c²∇²u,

e.g., utt = c²uxx, utt = c²(uxx + uyy) etc.

15.1 Theory and Examples


15.1.1 We start the study of wave PDEs with a simple example: a thin string is
placed along the x-axis with its two ends fastened at x = 0 and x = L (the length
of the string is L). Call u(x,t) the (transverse to the x axis) displacement of the
point x at time t. It can be shown (see a physics textbook) that u(x,t) satisfies (for
0 ≤ x ≤ L and 0 ≤ t < ∞) the wave equation:

utt = c²uxx. (15.1)

15.1.2 Why is (15.1) a wave equation? To understand this we will employ a
rough argument, using discretization of the spatial derivative uxx. Let us define for
n ∈ {0, 1, 2, ..., N} and with δx = L/N the functions

vn(t) = u(nδx, t). (15.2)

Then (15.1) becomes

∀n ∈ {1, 2, ..., N − 1} : (vn)tt = (c/δx)² (vn+1 − 2vn + vn−1) (15.3)

or, with k² = 2(c/δx)²,

∀n ∈ {1, 2, ..., N − 1} : (vn)tt + k²vn = (k²/2)(vn+1 + vn−1) (15.4)

We also assume that boundary conditions are given for v0 and vN. Now, (15.4)
is a system of ordinary DEs. Setting w = (k²/2)(vn+1 + vn−1) we get

(vn)tt + k²vn = w (15.5)

which is the equation of a harmonic oscillator driven by w; in fact w = (k²/2)(vn+1 + vn−1)
describes the coupling of the displacement vn to vn+1, vn−1, the displacements of the
neighboring elements.
15.1.3 To solve the system (15.4) we also need initial conditions on v1, ..., vN−1
and their derivatives. Similarly, to solve (15.1) we need:
1. boundary conditions u(0,t), u(L,t) for t ≥ 0;
2. initial conditions u(x,0) = f(x), ut(x,0) = g(x) for x ∈ [0, L].
15.1.4 Let us now solve the basic problem

0 < x < L, 0 < t : utt = c²uxx (15.6)
0 < t : u(0,t) = u(L,t) = 0 (15.7)
0 < x < L : u(x,0) = f(x) (15.8)
0 < x < L : ut(x,0) = g(x) (15.9)

with separation of variables. Assuming u(x,t) = X(x)T(t) we get from (15.6) that

(1/c²) (T″/T) = X″/X = −b² (15.10)

(we will explain the −b² choice a little later). Now (15.10) yields

T″ + b²c²T = 0, (15.11)
X″ + b²X = 0; (15.12)

where (15.11) has solutions sin(bct), cos(bct) and (15.12) has solutions sin(bx),
cos(bx) which must equal zero at x = 0 and x = L. Consequently: (i) we reject the
cosine solutions; (ii) we only accept sine solutions of the form sin(bn x), bn = nπ/L,
n ∈ {0, ±1, ±2, ...}. Hence a general form of u(x,t) is

u(x,t) = ∑_{n=1}^∞ (An cos(nπct/L) + Bn sin(nπct/L)) sin(nπx/L) (15.13)

which satisfies (15.6), (15.7). To satisfy (15.8)-(15.9) we must have


f(x) = u(x, 0) = Σ_{n=1}^∞ (An cos(0) + Bn sin(0)) sin(nπx/L) = Σ_{n=1}^∞ An sin(nπx/L)    (15.14)

g(x) = ut(x, 0) = Σ_{n=1}^∞ (nπc/L)(−An sin(0) + Bn cos(0)) sin(nπx/L) = Σ_{n=1}^∞ (nπc/L) Bn sin(nπx/L).    (15.15)

We see from (15.14) that the An are the Fourier coefficients¹ of the odd extension of f(x) on [−L, L] and are given by

n ∈ {1, 2, ...} : An = (2/L) ∫₀^L f(x) sin(nπx/L) dx    (15.16)

¹ Here is why we chose the negative constant −b². With a positive constant we would not be able to obtain f(x) as a Fourier series.
15.1 Theory and Examples 175

Similarly, from (15.15) we see that the Bn are the Fourier coefficients of the odd extension of g(x) on [−L, L] and are given by

n ∈ {1, 2, ...} : Bn = (2/(nπc)) ∫₀^L g(x) sin(nπx/L) dx    (15.17)
15.1.5 From (15.13) we see that every point of the string performs a linear combination of oscillations of the form

An cos(nπct/L) + Bn sin(nπct/L).

For each n we get a different harmonic frequency, all of which are multiples of the fundamental frequency πc/L. Every such oscillation

[An cos(nπct/L) + Bn sin(nπct/L)] sin(nπx/L)

is a stationary wave.
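As a quick numerical sketch of (15.13)-(15.16) (our own illustration; L = 1 and the initial shape f(x) = x(1 − x) are arbitrary choices), we can compute the An by trapezoid quadrature and check that the partial sum of (15.13) at t = 0 reproduces f:

```python
import numpy as np

# Sketch: coefficients A_n of (15.16) by trapezoid quadrature for the
# illustrative shape f(x) = x (1 - x), L = 1; then check (15.13) at t = 0.
def trap(y, x):
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

L = 1.0
x = np.linspace(0.0, L, 2001)
f = x * (1.0 - x)
A = [2.0 / L * trap(f * np.sin(n * np.pi * x / L), x) for n in range(1, 40)]
u0 = sum(A[n - 1] * np.sin(n * np.pi * x / L) for n in range(1, 40))
err = float(np.max(np.abs(u0 - f)))
```

The even-n coefficients vanish by symmetry and the odd ones match the exact values 4(1 − (−1)ⁿ)/(n³π³).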
15.1.6 Example. Let us solve

0 < x < 1, 0 < t : utt = c² uxx
0 < t : u(0,t) = u(1,t) = 0
0 < x < 1 : u(x, 0) = 1
0 < x < 1 : ut(x, 0) = 0.

According to the previous, u(x,t) is given by

u(x,t) = Σ_{n=1}^∞ An cos(nπct) sin(nπx)

where

An = 2 ∫₀¹ 1 · sin(nπx) dx = 2 (1 − cos nπ)/(nπ).

In other words

u(x,t) = (4/π) [cos(πct) sin(πx) + (1/3) cos(3πct) sin(3πx) + (1/5) cos(5πct) sin(5πx) + ...]

15.1.7 Example. Let us solve

0 < x < 1, 0 < t : utt = c² uxx
0 < t : u(0,t) = u(1,t) = 0
0 < x < 1 : u(x, 0) = w(x)
0 < x < 1 : ut(x, 0) = 0

where

w(x) = 2x for 0 ≤ x ≤ 1/2,  2 − 2x for 1/2 < x ≤ 1.

Hence u(x,t) is given by

u(x,t) = Σ_{n=1}^∞ An cos(nπct) sin(nπx)

where

An = 2 ∫₀^{1/2} 2x sin(nπx) dx + 2 ∫_{1/2}^1 (2 − 2x) sin(nπx) dx = 8 sin(nπ/2)/(n²π²).

Finally

u(x,t) = Σ_{n=1}^∞ (8 sin(nπ/2)/(n²π²)) cos(nπct) sin(nπx)
15.1.8 Example. Let us solve

0 < x < 1, 0 < t : utt = c² uxx
0 < t : u(0,t) = u(1,t) = 0
0 < x < 1 : u(x, 0) = x²
0 < x < 1 : ut(x, 0) = 0.

In this case we have Bn = 0 for all n and u(x,t) is given by

u(x,t) = Σ_{n=1}^∞ An cos(nπct) sin(nπx)

where

An = 2 ∫₀¹ x² sin(nπx) dx = 2 (2 cos nπ − 2 − n²π² cos nπ)/(n³π³)
15.1.9 Now let us consider the infinite string with zero initial velocity:

−∞ < x < ∞, 0 < t : utt = c2 uxx (15.18)


−∞ < x < ∞ : u(x, 0) = f (x) (15.19)

−∞ < x < ∞ : ut (x, 0) = 0. (15.20)

Setting u(x,t) = X(x)T(t) we get, in the usual manner, solutions of the form

(A (b) cos (bx) + B (b) sin (bx)) cos (bct) . (15.21)



Since the string is infinite, there are no boundary conditions and hence no restrictions on the value of b. Hence, by superposition, we get the solution
u(x,t) = ∫₀^∞ (A(b) cos(bx) + B(b) sin(bx)) cos(bct) db.    (15.22)

Setting t = 0 in (15.22), we get


f(x) = u(x, 0) = ∫₀^∞ (A(b) cos(bx) + B(b) sin(bx)) db.    (15.23)

For (15.23) to hold we must have

A(b) = (1/π) ∫_{−∞}^∞ f(z) cos(bz) dz,    (15.24)
B(b) = (1/π) ∫_{−∞}^∞ f(z) sin(bz) dz.    (15.25)
Hence (15.22)-(15.25) give the solution of (15.18) - (15.20).
15.1.10 We can similarly solve problems with other initial and boundary conditions. However, we will now see an alternative way to solve certain wave PDEs.

15.1.11 Let us for the time being ignore initial and boundary conditions and try to find solutions of utt = c²uxx only. The equation can be rewritten using the differential operators Dx and Dt:

Dtt u − c2 Dxx u = 0 ⇒
Dt (Dt u) − c2 Dx (Dx u) = 0 ⇒
(Dt − cDx ) (Dt + cDx ) u = 0.
The final equation is satisfied when either Dt u − cDx u = 0 or Dt u + cDx u = 0, i.e.,

ut − cux = 0 or ut + cux = 0. (15.26)

It is easy to check² that the general solution of ut − cux = 0 is φ(x + ct), where φ is


an arbitrary (differentiable) function. Indeed:

Dt (φ (x + ct)) − cDx (φ (x + ct)) = cφ 0 (x + ct) − cφ 0 (x + ct) = 0.



Similarly, the general solution of ut + cux = 0 is ψ(x − ct), where ψ is an arbitrary function. Hence a solution of utt − c²uxx = 0 is

u (x,t) = φ (x + ct) + ψ (x − ct) (15.27)

which can be verified by computing the second derivatives. The solution (15.27) is
called the d’Alembert solution of the wave PDE.
15.1.12 Consider the infinite string with zero initial velocity. It is described by

−∞ < x < ∞, 0 < t < ∞ : utt = c2 uxx (15.28)


−∞ < x < ∞ : u(x, 0) = f (x) (15.29)
−∞ < x < ∞ : ut (x, 0) = 0. (15.30)

So we must have
f (x) = u (x, 0) = φ (x) + ψ (x) (15.31)
and
0 = ut (x, 0) = φ 0 (x) c − ψ 0 (x) c ⇒ K = φ (x) − ψ (x) . (15.32)
Hence

φ(x) = (f(x) + K)/2,  ψ(x) = (f(x) − K)/2    (15.33)
² These first order PDEs can be solved by the method of characteristics (see Chapter ??).

and therefore

u(x,t) = (1/2) f(x − ct) + (1/2) f(x + ct).    (15.34)
From the physical point of view, the solution consists of two functions f (x + ct)
and f (x − ct) which are travelling waves: they move in space maintaining their
original shape; f (x − ct) moves towards +∞ and f (x + ct) moves towards −∞; both
have propagation speed equal to c. Note that there is no interaction between the
two waves.
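The travelling-wave form is easy to check numerically. The following sketch (our own; the smooth pulse f and the value c = 2 are arbitrary choices) verifies by centered finite differences that (15.34) satisfies utt = c²uxx:

```python
import numpy as np

# Sketch: verify that u(x,t) = (f(x - ct) + f(x + ct)) / 2 satisfies the wave
# equation, using centered second differences at an arbitrary point.
c = 2.0
f = lambda s: np.exp(-s ** 2)
u = lambda x, t: 0.5 * (f(x - c * t) + f(x + c * t))
x0, t0, h = 0.3, 0.7, 1e-4
utt = (u(x0, t0 + h) - 2.0 * u(x0, t0) + u(x0, t0 - h)) / h ** 2
uxx = (u(x0 + h, t0) - 2.0 * u(x0, t0) + u(x0 - h, t0)) / h ** 2
residual = float(abs(utt - c ** 2 * uxx))
```

The residual is of the size of the finite-difference truncation error, as expected.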
15.1.13 We have previously solved the same problem with separation of variables and obtained

u(x,t) = ∫₀^∞ (A(b) cos(bx) + B(b) sin(bx)) cos(bct) db    (15.35)

where

A(b) = (1/π) ∫_{−∞}^∞ f(z) cos(bz) dz,  B(b) = (1/π) ∫_{−∞}^∞ f(z) sin(bz) dz.    (15.36)
Let us show that (15.35)-(15.36) are equivalent to (15.34). Replacing (15.36) in (15.35), we get

u(x,t) = (1/π) ∫₀^∞ [(∫_{−∞}^∞ f(z) cos(bz) dz) cos(bx) + (∫_{−∞}^∞ f(z) sin(bz) dz) sin(bx)] cos(bct) db
  = (1/π) ∫₀^∞ (∫_{−∞}^∞ f(z) [cos(bz) cos(bx) + sin(bz) sin(bx)] cos(bct) dz) db
  = (1/π) ∫₀^∞ (∫_{−∞}^∞ f(z) cos[b(z − x)] cos(bct) dz) db
  = (1/(2π)) ∫₀^∞ (∫_{−∞}^∞ f(z) (cos[b(z + ct − x)] + cos[b(z − ct − x)]) dz) db
  = (1/(2π)) ∫₀^∞ (∫_{−∞}^∞ f(z) cos[b(z + ct − x)] dz) db + (1/(2π)) ∫₀^∞ (∫_{−∞}^∞ f(z) cos[b(z − ct − x)] dz) db    (15.37)

Using the representation

f(x) = (1/π) ∫₀^∞ (∫_{−∞}^∞ f(z) cos[b(z − x)] dz) db

we see that (15.37) is exactly equal to (1/2) f(x + ct) + (1/2) f(x − ct).



15.1.14 Example. The problem

−∞ < x < ∞, 0 < t < ∞ : utt = c² uxx    (15.38)
−∞ < x < ∞ : u(x, 0) = e^{−x} sin(x)    (15.39)
−∞ < x < ∞ : ut(x, 0) = 0    (15.40)

has the solution

u(x,t) = (1/2) e^{−(x−ct)} sin(x − ct) + (1/2) e^{−(x+ct)} sin(x + ct)

15.1.15 A somewhat harder problem is

−∞ < x < ∞, 0 < t < ∞ : utt = c² uxx    (15.41)
−∞ < x < ∞ : u(x, 0) = f(x)    (15.42)
−∞ < x < ∞ : ut(x, 0) = g(x).    (15.43)

Now (15.32) becomes

g(x) = ut(x, 0) = φ′(x)·c − ψ′(x)·c ⇒ (1/c) ∫_{x₀}^x g(z) dz + K = φ(x) − ψ(x)    (15.44)

where x₀ and K are arbitrary. Solving (15.31) and (15.44) with respect to φ and ψ we get

φ(x) = (1/2) f(x) + (1/(2c)) ∫_{x₀}^x g(z) dz + K/2    (15.45)
ψ(x) = (1/2) f(x) − (1/(2c)) ∫_{x₀}^x g(z) dz − K/2.    (15.46)

Hence

u(x,t) = (1/2) f(x − ct) + (1/2) f(x + ct) + (1/(2c)) ∫_{x₀}^{x+ct} g(z) dz − (1/(2c)) ∫_{x₀}^{x−ct} g(z) dz    (15.47)

and we finally get the general solution of the wave equation for an infinite string:

u(x,t) = (1/2) f(x − ct) + (1/2) f(x + ct) + (1/(2c)) ∫_{x−ct}^{x+ct} g(z) dz.    (15.48)

15.1.16 Example. The problem

−∞ < x < ∞, 0 < t < ∞ : utt = c² uxx    (15.49)
−∞ < x < ∞ : u(x, 0) = 0    (15.50)
−∞ < x < ∞ : ut(x, 0) = sin(x)    (15.51)

has the solution

u(x,t) = (1/(2c)) ∫_{x−ct}^{x+ct} sin(z) dz = (1/(2c)) (cos(x − ct) − cos(x + ct)).
15.1.17 In the remainder of this section we present some examples of solving the wave PDE using integral transforms.


15.1.18 To solve

x > 0,t > 0 : utt = c2 uxx


x > 0 : u (x, 0) = 0
x > 0 : ut (x, 0) = 0
t > 0 : u (0,t) = A sin (wt)
x > 0,t > 0 : |u (x,t)| < M

we apply the Laplace transform (with respect to t) and get

s²U(x, s) − s u(x, 0) − ut(x, 0) = c² Uxx(x, s) ⇒
Uxx(x, s) = (s²/c²) U(x, s) ⇒ U(x, s) = c₁ e^{sx/c} + c₂ e^{−sx/c}

and

u(0,t) = A sin(wt) ⇒ U(0, s) = Aw/(s² + w²).

From boundedness we reject the e^{sx/c} part of the solution and get

U(x, s) = c₂ e^{−sx/c}.

Then

Aw/(s² + w²) = U(0, s) = c₂ e^{−s·0/c} = c₂ ⇒ U(x, s) = (Aw/(s² + w²)) e^{−sx/c}.

So finally (since L⁻¹[Aw/(s² + w²)] = A sin(wt)) we get

u(x,t) = A sin(w(t − x/c)) Heaviside(t − x/c)
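A quick numerical sanity check of this solution (our own sketch, with arbitrary values for A, w, c): it matches the boundary condition at x = 0 and vanishes before the wavefront arrives at a point x.

```python
import numpy as np

# Sketch: evaluate u(x,t) = A sin(w (t - x/c)) H(t - x/c) from 15.1.18.
A, w, c = 2.0, 3.0, 1.5
def u(x, t):
    return A * np.sin(w * (t - x / c)) * np.heaviside(t - x / c, 0.0)

ts = np.linspace(0.01, 5.0, 50)
bc_err = float(np.max(np.abs(u(0.0, ts) - A * np.sin(w * ts))))  # u(0,t) = A sin(wt)
front = float(u(3.0, 1.0))   # t = 1 < x/c = 2: the wave has not arrived, so u = 0
```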
15.1.19 To solve

0 < x < 1, t > 0 : utt = uxx + sin(πx)
0 < x < 1 : u(x, 0) = ut(x, 0) = 0
t > 0 : u(0,t) = u(1,t) = 0

we apply the Laplace transform (with respect to t) and get

s²U(x, s) − s u(x, 0) − ut(x, 0) = Uxx(x, s) + sin(πx)/s ⇒
Uxx(x, s) − s² U(x, s) = −sin(πx)/s
U(0, s) = U(1, s) = 0.

We find a particular solution of the inhomogeneous equation:

Ũ = A cos(πx) + B sin(πx)
Ũx = −πA sin(πx) + Bπ cos(πx)
Ũxx = −π²A cos(πx) − Bπ² sin(πx).

So we must have

−(π²A + s²A) cos(πx) − (π²B + s²B) sin(πx) = −sin(πx)/s ⇒ A = 0, B = 1/(s(s² + π²))

and

Ũ(x, s) = (1/(s(s² + π²))) sin(πx).

Hence the general solution is

U(x, s) = c₁ e^{sx} + c₂ e^{−sx} + (1/(s(s² + π²))) sin(πx).

From the boundary conditions we get

0 = U(0, s) = c₁ + c₂
0 = U(1, s) = c₁ e^{s} + c₂ e^{−s}

and since the determinant of this system is e^{−s} − e^{s} ≠ 0, we get c₁ = 0, c₂ = 0. Hence the transformed solution is

U(x, s) = (1/(s(s² + π²))) sin(πx).

Finally

L⁻¹[1/(s(s² + π²))] = 1/π² − (1/π²) cos(πt)

and

u(x,t) = (1 − cos(πt)) sin(πx) / π²
15.1.20 To solve (Churchill OM p.130, Sec 42)

0 < x,t > 0 : utt = uxx − g


0 < x : u (x, 0) = ut (x, 0) = 0 (15.52)
t > 0 : u (0,t) = lim ux (x,t) = 0
x→∞

we apply the Laplace transform (with respect to t) and get

s²U(x, s) − s u(x, 0) − ut(x, 0) = Uxx − g/s ⇒
Uxx(x, s) − s² U(x, s) = g/s
U(0, s) = lim_{x→∞} Ux(x, s) = 0.

A particular solution is

Ũ(x, s) = −g s^k

so

0 + g s^{k+2} = g/s ⇒ k = −3 ⇒ Ũ(x, s) = −g/s³.

Then the general solution is

U(x, s) = c₁ e^{sx} + c₂ e^{−sx} − g/s³.

From the boundary conditions we get

0 = U(0, s) = c₁ + c₂ − g/s³
0 = lim_{x→∞} Ux(x, s) = lim_{x→∞} (c₁ s e^{sx} − c₂ s e^{−sx}).

Then c₁ = 0 and c₂ = g/s³, and the transformed solution is

U(x, s) = (g/s³)(e^{−sx} − 1).

Finally

u(x,t) = L⁻¹[(g/s³)(e^{−sx} − 1)] = (g/2)(Heaviside(t − x)(t − x)² − t²)
15.1.21 To solve (Schaum Laplace, p.225, Ex. 8.6)

0 < x < L, t > 0 : utt = c² uxx
0 < x < L : u(x, 0) = 0
0 < x < L : ut(x, 0) = 0
t > 0 : u(0,t) = 0
t > 0 : ux(L,t) = F

we apply the Laplace transform (with respect to t) and get

s²U(x, s) − s u(x, 0) − ut(x, 0) = c² Uxx ⇒
Uxx − (s²/c²) U = 0
U(0, s) = 0,  Ux(L, s) = F/s.

The general solution is

U(x, s) = c₁ cosh(sx/c) + c₂ sinh(sx/c).

From the boundary conditions we get

0 = U(0, s) = c₁ ⇒ c₁ = 0

and

F/s = Ux(L, s) = c₂ (s/c) cosh(sL/c) ⇒ c₂ = Fc/(s² cosh(sL/c)).

Hence

U(x, s) = Fc sinh(sx/c) / (s² cosh(sL/c))

To invert we use tables. We get

u(x,t) = F [x + (8L/π²) Σ_{n=1}^∞ ((−1)ⁿ/(2n − 1)²) sin((2n − 1)πx/(2L)) cos((2n − 1)πct/(2L))]
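As a sanity check of the inverted series (our own sketch, with L = 1): at t = 0 the solution must reduce to the initial condition u(x, 0) = 0, i.e. the bracket must vanish, which is the classical expansion x = (8L/π²) Σ_{n≥1} ((−1)^{n+1}/(2n−1)²) sin((2n−1)πx/(2L)).

```python
import numpy as np

# Sketch: check that x + (8L/pi^2) sum_n (-1)^n/(2n-1)^2 sin((2n-1) pi x/(2L))
# vanishes on [0, L], i.e. that the series solution satisfies u(x,0) = 0.
L = 1.0
x = np.linspace(0.0, L, 101)
s = sum((-1) ** n / (2 * n - 1) ** 2 * np.sin((2 * n - 1) * np.pi * x / (2 * L))
        for n in range(1, 2001))
err = float(np.max(np.abs(x + (8 * L / np.pi ** 2) * s)))
```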

15.1.22 Finally, let us solve the wave equation on a rectangle (SchFourier 45, 2.33):
utt = c2 (uxx + uyy )
u (0, y,t) = u (1, y,t) = u (x, 0,t) = u (x, 1,t) = 0
u (x, y, 0) = F (x, y)
ut (x, y, 0) = 0
|u (x, y)| < M

We assume

u(x, y, t) = X(x) Y(y) T(t)

and then we get

XY T″ = c² (X″YT + XY″T)

or, dividing by c²XYT,

T″/(c²T) = X″/X + Y″/Y.

So we can set

T″/(c²T) = −λ² = X″/X + Y″/Y

to get

T″ = −c²λ²T
−λ² = X″/X + Y″/Y

The first equation has solution T = p_λ cos(λct) + q_λ sin(λct). The second equation can be written as

X″/X = −Y″/Y − λ²

or

X″/X = −µ² = −Y″/Y − λ²

which gives

X″ + µ²X = 0
Y″ + (λ² − µ²)Y = 0.

These equations have solutions

X(x) = a_µ cos(µx) + b_µ sin(µx)
Y(y) = c_{µ,λ} cos(√(λ² − µ²) y) + d_{µ,λ} sin(√(λ² − µ²) y)

Using the initial and boundary conditions we get:
1. From ut(x, y, 0) = 0 we get q_λ = 0.
2. From u(0, y, t) = 0 we get a_µ = 0. From u(1, y, t) = 0 we get µ = mπ.
3. From u(x, 0, t) = 0 we get c_{µ,λ} = 0. From u(x, 1, t) = 0 we get √(λ² − µ²) = nπ.

So finally the solution is

u(x, y, t) = Σ_{m=1}^∞ Σ_{n=1}^∞ B_{m,n} sin(mπx) sin(nπy) cos(c√(m² + n²) πt)

Letting t = 0 we get

F(x, y) = u(x, y, 0) = Σ_{m=1}^∞ Σ_{n=1}^∞ B_{m,n} sin(mπx) sin(nπy)

and so

B_{m,n} = 4 ∫₀¹ ∫₀¹ F(x, y) sin(mπx) sin(nπy) dx dy

15.2 Solved Problems


15.2.1 Problem. Solve

For 0 < x < 1, 0 < t : utt = c2 uxx ,


eh For 0 < t : u(0,t) = u(1,t) = 0,
For 0 < x < 1 : u(x, 0) = 1,
For 0 < x < 1 : ut (x, 0) = 0.

Solution. We have

u(x,t) = Σ_{n=1}^∞ An cos(nπct) · sin(nπx)

where

An = 2 ∫₀¹ 1 · sin(nπx) dx = 2 · (1 − cos nπ)/(nπ).

In other words

u(x,t) = Σ_{n=1}^∞ 2 ((1 − cos nπ)/(nπ)) cos(nπct) · sin(nπx)
  = (4/π) [cos(πct) · sin(πx) + (1/3) cos(3πct) · sin(3πx) + (1/5) cos(5πct) · sin(5πx) + ...]

15.2.2 Problem. Solve

∀x ∈ (0, π), t > 0 : ∂²u/∂t² = 5 ∂²u/∂x²
∀t > 0 : u(0,t) = u(π,t) = 0
∀x ∈ (0, π) : u(x, 0) = x sin x
∀x ∈ (0, π) : ut(x, 0) = 0.

Solution. Applying the previous results we have

A₁ = (2/π) ∫₀^π x sin²x dx = π/2

and for m > 1

Am = (2/π) ∫₀^π x sin x sin(mx) dx = (4m/((m² − 1)²π)) ((−1)^{m+1} − 1).

Hence the solution is

u(x,t) = (π/2) sin x cos(√5 t) + Σ_{m=2}^∞ (4m/((m² − 1)²π)) ((−1)^{m+1} − 1) sin(mx) cos(m√5 t)
  = (π/2) sin x cos(√5 t) − (16/(9π)) sin(2x) cos(2√5 t) − (32/(225π)) sin(4x) cos(4√5 t) − ...

15.2.3 Problem. Solve

For 0 < x < 1, 0 < t : utt = c² uxx,
For 0 < t : u(0,t) = u(1,t) = 0,
For 0 < x < 1 : u(x, 0) = h(x) = 2x for 0 ≤ x ≤ 1/2, 2 − 2x for 1/2 < x ≤ 1,
For 0 < x < 1 : ut(x, 0) = 0.

Solution. We have

u(x,t) = Σ_{n=1}^∞ An cos(nπct) · sin(nπx)

where

An = 2 ∫₀^{1/2} 2x sin(nπx) dx + 2 ∫_{1/2}^1 (2 − 2x) sin(nπx) dx = 8 sin(nπ/2)/(n²π²).

So finally

u(x,t) = Σ_{n=1}^∞ (8 sin(nπ/2)/(n²π²)) cos(nπct) · sin(nπx)
n=1
15.2.4 Problem. Solve

For −∞ < x < ∞, 0 < t : utt = c² uxx,    (15.53)
For −∞ < x < ∞ : u(x, 0) = Π(x),    (15.54)
For −∞ < x < ∞ : ut(x, 0) = 0.    (15.55)

Solution. We have

u(x,t) = ∫₀^∞ (A(b) cos(bx) + B(b) sin(bx)) cos(bct) db    (15.56)

where

A(b) = (1/π) ∫_{−∞}^∞ Π(z) cos(bz) dz = (1/π) ∫_{−1}^1 cos(bz) dz = 2 sin(b)/(πb)    (15.57)
B(b) = (1/π) ∫_{−∞}^∞ Π(z) sin(bz) dz = 0.    (15.58)
π −∞

Since

A(b) = (1/π) ∫_{−∞}^∞ Π(z) cos(bz) dz = 2 sin(b)/(πb),  B(b) = 0 ⇒
Π(x) = ∫₀^∞ A(b) cos(bx) db = ∫₀^∞ (2 sin(b)/(πb)) cos(bx) db

we have

u(x,t) = ∫₀^∞ (2 sin(b)/(πb)) cos(bx) cos(bct) db    (15.59)
  = ∫₀^∞ (2 sin(b)/(πb)) (1/2)(cos[b(x − ct)] + cos[b(x + ct)]) db    (15.60)
  = (1/2) ∫₀^∞ (2 sin(b)/(πb)) cos[b(x − ct)] db + (1/2) ∫₀^∞ (2 sin(b)/(πb)) cos[b(x + ct)] db    (15.61)
  = (1/2) Π(x − ct) + (1/2) Π(x + ct)    (15.62)
15.2.5 Problem. Solve

For −∞ < x < ∞, 0 < t < ∞ : utt = c² uxx    (15.63)
For −∞ < x < ∞ : u(x, 0) = e^{−x²} sin(x),    (15.64)
For −∞ < x < ∞ : ut(x, 0) = 0.    (15.65)

Solution. The problem has the solution

u(x,t) = (1/2) e^{−(x−ct)²} sin(x − ct) + (1/2) e^{−(x+ct)²} sin(x + ct)
15.2.6 Problem. Solve

For −∞ < x < ∞ : utt = c² uxx,    (15.66)
For −∞ < x < ∞ : u(x, 0) = 0,    (15.67)
For −∞ < x < ∞ : ut(x, 0) = sin x.    (15.68)

Solution. The problem has the solution

u(x,t) = (1/(2c)) ∫_{x−ct}^{x+ct} sin(z) dz = (1/(2c)) (cos(x − ct) − cos(x + ct)).

15.2.7 Problem. Solve (Schaum Laplace, p.226, Ex. 8.5)

0 < x < L, t > 0 : utt = a² uxx
0 < x < L : u(x, 0) = mx(L − x)
0 < x < L : ut(x, 0) = 0
t > 0 : u(0,t) = 0
t > 0 : u(L,t) = 0

Solution. We apply the Laplace transform (with respect to t) and get

s²U − s u(x, 0) − ut(x, 0) = a² Uxx ⇒
Uxx − (s²/a²) U = −msx(L − x)/a²
U(0, s) = U(L, s) = 0.

The general solution is

U = A₁ cosh(sx/a) + A₂ sinh(sx/a) + mx(L − x)/s − 2a²m/s³.

From the boundary conditions we get

U(x, s) = (2a²m/s³) cosh(s(2x − L)/(2a)) / cosh(sL/(2a)) + mx(L − x)/s − 2a²m/s³.

To invert (the cosh part) we use tables; the terms mx(L − x) − a²mt² cancel against matching terms in the inversion, and we finally get

u(x,t) = (8mL²/π³) Σ_{n=1}^∞ (1/(2n − 1)³) sin((2n − 1)πx/L) cos((2n − 1)πat/L)

15.3 Unsolved Problems


1. Solve utt = c²uxx (0 < x < 1, 0 < t), u(0,t) = u(1,t) = 0 (0 < t), u(x, 0) = 1 for 0 ≤ x ≤ 1/2 and −1 for 1/2 < x ≤ 1, ut(x, 0) = 0.
Ans. u(x,t) = (4/π)(sin(2πx) cos(2πct) + (1/3) sin(6πx) cos(6πct) + (1/5) sin(10πx) cos(10πct) + ...).
2. Solve utt = c2 uxx (0 < x < 1, 0 < t ), u(0,t) = u(1,t) = 0 (0 < t ), u(x, 0) = x,
ut (x, 0) = 0.
Ans. u(x,t) = (2/π) Σ_{n=1}^∞ ((−1)^{n+1}/n) sin(nπx) cos(nπct).
3. Solve utt = c²uxx (0 < x < 1, 0 < t), u(0,t) = u(1,t) = 0 (0 < t), u(x, 0) = 0, ut(x, 0) = 1.
Ans. u(x,t) = Σ_{n=1}^∞ ((2 − 2 cos(nπ))/(n²π²c)) sin(nπx) sin(nπct).
4. Solve utt = c²uxx (0 < x < 1, 0 < t), u(0,t) = u(1,t) = 0 (0 < t), u(x, 0) = 1, ut(x, 0) = 1.
Ans. u(x,t) = Σ_{n=1,3,5,...} (4/(nπ)) sin(nπx) cos(nπct) + Σ_{n=1,3,5,...} (4/(n²π²c)) sin(nπx) sin(nπct).
5. Solve utt = c²uxx (0 < x < 1, 0 < t), u(0,t) = u(1,t) = 0 (0 < t), u(x, 0) = x(1 − x), ut(x, 0) = 0.
Ans. u(x,t) = Σ_{n=1}^∞ (4/(n³π³))(1 − (−1)ⁿ) sin(nπx) cos(nπct).
6. Solve utt = c²uxx (0 < x < π, 0 < t), u(0,t) = u(π,t) = 0 (0 < t), u(x, 0) = 0, ut(x, 0) = 8 sin²x.
Ans. u(x,t) = Σ_{n=1,3,5,...} (32/(πcn²(n² − 4))) ((−1)ⁿ − 1) sin(nx) sin(nct).

7. Solve utt = c²uxx (0 < x < π, 0 < t), u(0,t) = u(π,t) = 0 (0 < t), u(x, 0) = 0, ut(x, 0) = x².
Ans. u(x,t) = Σ_{n=1}^∞ (2/(πc)) (π²(−1)^{n+1}/n² + 2((−1)ⁿ − 1)/n⁴) sin(nx) sin(nct).
8. Solve utt = uxx (−∞ < x < ∞, 0 < t < ∞) and u(x, 0) = e^{−x²}, ut(x, 0) = 0 (−∞ < x < ∞).
Ans. u(x,t) = (1/2)[e^{−(x−t)²} + e^{−(x+t)²}].
9. Solve utt = uxx (−∞ < x < ∞, 0 < t < ∞) and u(x, 0) = 0, ut(x, 0) = xe^{−x²} (−∞ < x < ∞).
Ans. u(x,t) = (1/4)[e^{−(x−t)²} − e^{−(x+t)²}].
10. Solve utt = uxx (−∞ < x < ∞, 0 < t < ∞), u(x, 0) = 0, ut (x, 0) = 1 (−∞ < x < ∞).
Ans. u(x,t) = t .
11. Solve utt = uxx (−∞ < x < ∞, 0 < t < ∞), u(x, 0) = sin x, ut(x, 0) = x² (−∞ < x < ∞).
Ans. u(x,t) = sin(x) cos(ct) + x²t + (1/3)c²t³.
12. Solve utt = uxx (−∞ < x < ∞, 0 < t < ∞), u(x, 0) = x3 , ut (x, 0) = x (−∞ < x < ∞).
Ans. u(x,t) = x3 + 3c2 xt 2 + xt .
13. Solve utt = uxx (−∞ < x < ∞, 0 < t < ∞), u(x, 0) = cos x, ut(x, 0) = 1/e (−∞ < x < ∞).
Ans. u(x,t) = cos(x) cos(ct) + t/e.
14. Solve utt = uxx (−∞ < x < ∞, 0 < t < ∞), u(x, 0) = log(1/(1 + x²)), ut(x, 0) = 2 (−∞ < x < ∞).
Ans. u(x,t) = 2t − log√(1 + x² + 2cxt + c²t²) − log√(1 + x² − 2cxt + c²t²).


15. Solve utt = uxx (0 < x < ∞, 0 < t < ∞), u(x, 0) = xe^{−x²}, ut(x, 0) = 0 (0 < x < ∞), u(0,t) = 0 (0 < t < ∞).

15.4 Advanced Problems


16. Bessel Functions and Applications

Bessel functions can be seen as the analog of sines and cosines for problems in
cylindrical coordinates. Much like sines and cosines, they emerge as solutions of
an ODE, in this case Bessel’s equation.

16.1 Theory and Examples


16.1.1 In Chapter 5 we have studied the ODE

x² d²y/dx² + x dy/dx + (x² − ν²) y = 0,  ν ≥ 0.    (16.1)

This is Bessel’s equation and its solutions, the Bessel functions, have many
applications in PDE problems. In this chapter we will study Bessel functions in
greater detail.
16.1.2 As we will see, the general solution of (16.1) has the form

y = c1 Jν (x) + c2Yν (x) (16.2)

where Jν(x) (resp. Yν(x)) is called the Bessel function of the first (resp. second) kind and ν-th order.
16.1.3 We will determine Jν (x) and Yν (x) in several steps. Let us first continue
the solution of (16.1) from where we stopped in Example 5.1.13. We have already
seen that, assuming solutions of the form
At

y (x) = xr ∑ ck xk

we get the indicial equation


r2 = ν 2
with solutions r₁ = ν, r₂ = −ν. We have also seen that, with r = ν, we get the solution

y₁(x) = x^ν (1 − x²/(2(2 + 2ν)) + x⁴/(2·4·(2 + 2ν)(4 + 2ν)) − ...).    (16.3)

Similarly, when r = −ν, we get the solution

y₂(x) = x^{−ν} (1 − x²/(2(2 − 2ν)) + x⁴/(2·4·(2 − 2ν)(4 − 2ν)) − ...).    (16.4)

When ν = 0 the two series are identical; when ν ∈ {1, 2, 3, ...} the second series does not exist; when ν ∉ {0, 1, 2, 3, ...} the two series are linearly independent.
16.1.4 Definition. The Bessel function of the first kind and ν-th order is defined to be

Jν(x) := (x^ν/(2^ν Γ(ν + 1))) (1 − x²/(2(2 + 2ν)) + x⁴/(2·4·(2 + 2ν)(4 + 2ν)) − ...)    (16.5)

which can be rewritten as

Jν(x) = Σ_{n=0}^∞ ((−1)ⁿ/(n! Γ(ν + n + 1))) (x/2)^{ν+2n}.    (16.6)
eh
In the following plot we see J0 (x), J1 (x), J2 (x).

[Plot of J₀(x), J₁(x), J₂(x) omitted.]

16.1.5 Clearly, (16.5) is a rescaled version of the solution y₁(x) of (16.3). The function Γ(x) appearing in (16.5)-(16.6) is the Gamma function, the generalization of the factorial function to the real numbers (see Appendix C); in particular: ∀n ∈ N : Γ(n + 1) = n!. The generalization is required because ν can be non-integer.
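As a numerical sketch (our own; x = 2 and 30 terms are arbitrary choices), the series (16.6) can be evaluated directly and compared with the closed forms (16.9)-(16.10) proved below for ν = ±1/2:

```python
import math

# Sketch: truncated series (16.6) vs the closed forms for nu = 1/2 and -1/2.
def J_series(nu, x, terms=30):
    return sum((-1) ** n * (x / 2.0) ** (nu + 2 * n)
               / (math.factorial(n) * math.gamma(nu + n + 1))
               for n in range(terms))

x = 2.0
err_sin = abs(J_series(0.5, x) - math.sqrt(2 / (math.pi * x)) * math.sin(x))
err_cos = abs(J_series(-0.5, x) - math.sqrt(2 / (math.pi * x)) * math.cos(x))
```

The truncated series agrees with the closed forms to machine precision, since the terms decay factorially.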


16.1.6 Definition. We extend the definition of Jν(x) to values ν ∈ {−1, −2, −3, ...} by replacing ν with −ν in (16.5).
16.1.7 Theorem. The following hold

∀ν ∈ Z : J−ν (x) = (−1)ν Jν (x) , (16.7)


∀ν ∉ Z : J_{−ν}(x) and Jν(x) are linearly independent.    (16.8)

In the case ν ∉ Z, the general solution of (16.1) is

c₁ Jν(x) + c₂ J_{−ν}(x).

Proof. For (16.7) we have

Jν(x) = Σ_{n=0}^∞ ((−1)ⁿ/(n! Γ(ν + n + 1))) (x/2)^{ν+2n}
J_{−ν}(x) = Σ_{n=0}^∞ ((−1)ⁿ/(n! Γ(−ν + n + 1))) (x/2)^{−ν+2n}
  = Σ_{n=0}^{ν−1} ((−1)ⁿ/(n! Γ(−ν + n + 1))) (x/2)^{−ν+2n} + Σ_{n=ν}^∞ ((−1)ⁿ/(n! Γ(−ν + n + 1))) (x/2)^{−ν+2n}

It is a known property of the Gamma function that Γ(−ν + n + 1) = ∞ for n = 0, 1, ..., ν − 1. Hence the first sum is 0. In the second sum we set n = ν + k and it becomes

J_{−ν}(x) = Σ_{k=0}^∞ ((−1)^{ν+k}/((ν + k)! Γ(k + 1))) (x/2)^{ν+2k} = (−1)^ν Σ_{k=0}^∞ ((−1)^k/(k! Γ(ν + k + 1))) (x/2)^{ν+2k} = (−1)^ν Jν(x)

So for ν ∈ {1, 2, 3, ...}, Jν(x) and J_{−ν}(x) are linearly dependent.
We omit the proof of (16.8); it is straightforward but tedious.
For the last part of the theorem, when ν ∉ Z we have that J_{−ν}(x) and Jν(x) are linearly independent. So we have two linearly independent solutions of (16.1), which is a second order linear equation. By the results of Chapter 1, {J_{−ν}(x), Jν(x)} is a basis of the space of solutions of (16.1).
16.1.8 Definition. The Bessel function of the second kind and ν-th order is defined to be

∀ν ∉ Z : Yν(x) = (Jν(x) cos(νπ) − J_{−ν}(x)) / sin(νπ),
∀ν ∈ Z : Yν(x) = lim_{µ→ν} (Jµ(x) cos(µπ) − J_{−µ}(x)) / sin(µπ).

16.1.9 Theorem. For every ν ∈ R : limx→0 Yν (x) = −∞.


16.1.10 Theorem. The ODE

x² d²y/dx² + x dy/dx + (x² − ν²) y = 0,  ν ≥ 0

has general solution

y = c₁ Jν(x) + c₂ Yν(x).

Proof. Omitted.
16.1.11 In what follows we will concentrate on the Bessel functions of the first kind, which are the ones most often appearing in applications. These functions have a large number of properties (many of them useful in applications). We will prove only a few useful identities; the reader can find more in the Problems sections.
16.1.12 Theorem. We have

∀ν ∈ R : d/dx (x^ν Jν(x)) = x^ν J_{ν−1}(x)

Proof. We have

d/dx (x^ν Jν(x)) = d/dx (Σ_{n=0}^∞ (−1)ⁿ x^{2ν+2n}/(2^{ν+2n} n! Γ(ν + n + 1))) = Σ_{n=0}^∞ (−1)ⁿ x^{2ν+2n−1}/(2^{ν+2n−1} n! Γ(ν + n))
  = x^ν Σ_{n=0}^∞ (−1)ⁿ x^{(ν−1)+2n}/(2^{(ν−1)+2n} n! Γ((ν − 1) + n + 1)) = x^ν J_{ν−1}(x)

16.1.13 Theorem. We have

J_{1/2}(x) = √(2/(πx)) sin x,    (16.9)
J_{−1/2}(x) = √(2/(πx)) cos x.    (16.10)

Proof. We only prove (16.9). We have

J_{1/2}(x) = Σ_{n=0}^∞ ((−1)ⁿ/(n! Γ(n + 3/2))) (x/2)^{2n+1/2}
  = (x/2)^{1/2}/Γ(3/2) − (x/2)^{5/2}/(1! Γ(5/2)) + (x/2)^{9/2}/(2! Γ(9/2)) − ...

Using Γ(3/2) = √π/2, Γ(5/2) = 3√π/4, Γ(9/2) = 105√π/16, ..., this can be rearranged as

J_{1/2}(x) = √(2/(πx)) (x − x³/3! + x⁵/5! − ...) = √(2/(πx)) sin x.

16.1.14 Theorem. For large x we have the asymptotic formulas

Jν(x) ∼ √(2/(πx)) cos(x − (ν + 1/2)π/2),
Yν(x) ∼ √(2/(πx)) sin(x − (ν + 1/2)π/2).

Proof. Omitted.
16.1.15 Theorem. When λ ≠ µ, we have

∫₀¹ x Jν(λx) Jν(µx) dx = (µ Jν(λ) Jν′(µ) − λ Jν(µ) Jν′(λ)) / (λ² − µ²)
Proof. It is easy to check that, by a change of variables, y₁(x) = Jν(λx) is a solution of

x² d²y₁/dx² + x dy₁/dx + (λ²x² − ν²) y₁ = 0    (16.11)

and y₂(x) = Jν(µx) is a solution of

x² d²y₂/dx² + x dy₂/dx + (µ²x² − ν²) y₂ = 0.    (16.12)

Multiplying (16.11) by y₂ and (16.12) by y₁ and subtracting we get

x² (y₂ y₁″ − y₁ y₂″) + x (y₂ y₁′ − y₁ y₂′) = (µ² − λ²) x² y₁ y₂ ⇒
x (d/dx)(y₂ y₁′ − y₁ y₂′) + (y₂ y₁′ − y₁ y₂′) = (µ² − λ²) x y₁ y₂ ⇒
(d/dx)[x (y₂ y₁′ − y₁ y₂′)] = (µ² − λ²) x y₁ y₂ ⇒
x (y₂ y₁′ − y₁ y₂′) / (µ² − λ²) = ∫ x y₁ y₂ dx ⇒
x (λ Jν(µx) Jν′(λx) − µ Jν(λx) Jν′(µx)) / (µ² − λ²) = ∫ x Jν(λx) Jν(µx) dx ⇒
(λ Jν(µ) Jν′(λ) − µ Jν(λ) Jν′(µ)) / (µ² − λ²) = ∫₀¹ x Jν(λx) Jν(µx) dx

16.1.16 Theorem (Orthogonality). Let c₁, c₂ be constants such that c₁c₂ ≠ 0, and let λ, µ be two different roots of

c₁ Jν(x) + c₂ x Jν′(x) = 0.    (16.13)

Then we have

∫₀¹ x Jν(λx) Jν(µx) dx = 0.

Hence, for any λ, µ as above, the set {√x Jν(λx), √x Jν(µx)} is orthogonal in (0, 1).
.K

Proof. This theorem follows from more general results of Chapter 17. However we can present an independent proof here. Since λ, µ are roots of (16.13), we have

c₁ Jν(λ) + c₂ λ Jν′(λ) = 0    (16.14)
c₁ Jν(µ) + c₂ µ Jν′(µ) = 0    (16.15)

For this system in c₁, c₂ to have a nonzero solution, we must have

det [ Jν(λ)  λJν′(λ) ; Jν(µ)  µJν′(µ) ] = Jν(λ) µJν′(µ) − Jν(µ) λJν′(λ) = 0.

Since we also know that

∫₀¹ x Jν(λx) Jν(µx) dx = (λ Jν(µ) Jν′(λ) − µ Jν(λ) Jν′(µ)) / (µ² − λ²),

we immediately get the required result.
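The orthogonality relation can be checked numerically. The sketch below (our own, using SciPy's `jv` and `jn_zeros`) takes the special case c₂ = 0, so that λ and µ are distinct roots of Jν itself, with ν = 1 as an arbitrary choice:

```python
import numpy as np
from scipy.special import jv, jn_zeros

# Sketch: trapezoid quadrature of int_0^1 x J_1(lam x) J_1(mu x) dx for the
# first two roots lam, mu of J_1; it should vanish by Theorem 16.1.16.
nu = 1
lam, mu = jn_zeros(nu, 2)
x = np.linspace(0.0, 1.0, 100001)
h = x[1] - x[0]
y = x * jv(nu, lam * x) * jv(nu, mu * x)
integral = float(np.sum(y[1:] + y[:-1]) * h / 2.0)
y2 = x * jv(nu, lam * x) ** 2
norm = float(np.sum(y2[1:] + y2[:-1]) * h / 2.0)   # equals J_{nu+1}(lam)^2 / 2
```

The cross integral vanishes, while the normalization integral matches the standard value Jν+1(λ)²/2.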


16.1.17 Theorem. For any ν ∈ R, Jν(x) has an infinite number of distinct positive roots

0 < λ_{ν,1} < λ_{ν,2} < λ_{ν,3} < ....

We also have

∀ν : λ_{ν−1,1} < λ_{ν,1} < λ_{ν+1,1}.

Proof. It follows from more general results which will be presented in Chapter 17.

16.1.18 Theorem (Bessel-Fourier Series). For some ν ∈ R let 0 < λ_{ν,1} < λ_{ν,2} < λ_{ν,3} < ... be the roots of Jν(x). Furthermore, let f(x) be a function which is piecewise differentiable on [0, 1].
1. If f(x) is continuous at x ∈ [0, 1] then

f(x) = lim_{N→∞} Σ_{n=1}^N cn Jν(λ_{ν,n} x).    (16.16)

2. If f(x) is discontinuous at x ∈ [0, 1] then

(1/2) (lim_{ξ→x−} f(ξ) + lim_{ξ→x+} f(ξ)) = lim_{N→∞} Σ_{n=1}^N cn Jν(λ_{ν,n} x)    (16.17)

(with the obvious modification if x ∈ {0, 1}).
In both (16.16) and (16.17), we have

∀n ∈ N : cn = (2 / J_{ν+1}(λ_{ν,n})²) ∫₀¹ x f(x) Jν(λ_{ν,n} x) dx.    (16.18)

Proof. This also follows from results presented in Chapter 17.


16.1.19 The above theorems remind us of Fourier series expansions, with the cn's of (16.18) playing the role of Fourier coefficients. In fact, trigonometric series and series of Bessel functions are just two special cases of a more general theory of orthogonal functions. As already hinted, the related results will be presented in Chapter 17.
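Here is a numerical sketch of the Bessel-Fourier expansion (our own illustration, using SciPy; the function f(x) = x(1 − x), the order ν = 1 and N = 80 terms are arbitrary choices):

```python
import numpy as np
from scipy.special import jv, jn_zeros

# Sketch: coefficients (16.18) by trapezoid quadrature and the partial sum
# (16.16) for f(x) = x (1 - x) with nu = 1.
nu, N = 1, 80
roots = jn_zeros(nu, N)                    # lambda_{nu,1}, ..., lambda_{nu,N}
x = np.linspace(0.0, 1.0, 20001)
h = x[1] - x[0]
f = x * (1.0 - x)
def trap(y):
    return float(np.sum(y[1:] + y[:-1]) * h / 2.0)
c = [2.0 / jv(nu + 1, r) ** 2 * trap(x * f * jv(nu, r * x)) for r in roots]
approx = sum(c[n] * jv(nu, roots[n] * x) for n in range(N))
err = float(np.max(np.abs(approx - f)))
```

The partial sum reproduces f to within a small uniform error, just as a trigonometric Fourier series would.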
16.1.20 We now present some examples of the application of Bessel functions to

PDE problems.
16.1.21 Example. Let us solve the heat equation on a disk:
 
1
0 < ρ < 1, 0 < t : ut = κ uρρ + uρ
ρ
0 < t : u (1,t) = 0,
0 < ρ < 1 : u (ρ, 0) = F (ρ) ,
0 < ρ < 1, 0 < t : |u (ρ,t)| < M
h

Letting

u(ρ,t) = P(ρ) T(t)

we get

P(ρ) T′(t) = κ (P″(ρ) T + (1/ρ) P′(ρ) T(t)) ⇒
T′/(κT) = P″/P + (1/ρ)(P′/P) = −λ²

So we have

T′ = −κλ²T
P″ + (1/ρ)P′ + λ²P = 0    (16.19)
We know the first ODE has solution T = e^{−κλ²t}. Now we want solutions of the

ias
second ODE, which can be rewritten as

ρ²P″ + ρP′ + λ²ρ²P = 0.    (16.20)

We recognize the ODE (16.20) as Bessel’s equation of order ν = 0 ; hence it has


solutions
P = Aλ J0 (λ ρ) + Bλ Y0 (λ ρ)
where J0 and Y0 are the Bessel functions (of zero order) of the first and second
kind respectively. Since we want u (0,t) = P (0) T (t) bounded, we see that Bλ = 0

ag
(because limρ→0 Y0 (λ ρ) = ∞). Since u (1,t) = 0 we get

0 = P (1) = Aλ J0 (λ )

so λ ∈ {λ₁, λ₂, ...}, the set of the roots of the Bessel function J₀(λ). Thus a solution of (16.19) is

Am J₀(λm ρ) e^{−κλm²t}

and the general solution is

u(ρ,t) = Σ_{m=1}^∞ Am J₀(λm ρ) e^{−κλm²t}.

Now we choose the Am so that



F(ρ) = u(ρ, 0) = Σ_{m=1}^∞ Am J₀(λm ρ)

In other words, we want a Bessel series expansion of F (ρ). That such an expansion
is possible follows from Theorem 16.1.18.
16.1.22 Let us solve the heat equation on a cylinder:

0 < ρ < 1, 0 < z < 1, 0 < t : ut = κ (uρρ + (1/ρ)uρ + uzz)
0 < ρ < 1, 0 < z < 1 : u (ρ, z, 0) = F (ρ, z) ,
0 < ρ < 1, 0 < t : u (ρ, 0,t) = 0,
h

0 < ρ < 1, 0 < t : u (ρ, 1,t) = 0,


0 < z < 1, 0 < t : u (1, z,t) = 0,
0 < ρ < 1, 0 < z < 1, 0 < t : |u (ρ, z,t)| < M.
At

Letting

u(ρ, z, t) = P(ρ) Z(z) T(t)

we get

P Z T′ = κ (P″ Z T + (1/ρ) P′ Z T + P Z″ T) ⇒
T′/(κT) = P″/P + (1/ρ)(P′/P) + Z″/Z = −λ²

So we have

T′ = −κλ²T ⇒ T(t) = c₁ e^{−κλ²t}.

Next we write the second equation as

P″/P + (1/ρ)(P′/P) = −λ² − Z″/Z = −µ²

Now we have

P″/P + (1/ρ)(P′/P) = −µ² ⇒ P(ρ) = c₂ J₀(µρ) + c₃ Y₀(µρ)

and

Z″/Z = µ² − λ² = ν² ⇒ Z(z) = c₄ e^{νz} + c₅ e^{−νz}
Z
From |u(ρ, z, t)| < M we get c₃ = 0. So a solution is

u(ρ, z, t) = J₀(µρ) (Aν e^{νz} + Bν e^{−νz}) e^{−κλ²t}.

Now from

0 = u(ρ, 0, t) = J₀(µρ) (Aν + Bν) e^{−κλ²t}

we get Bν = −Aν and

u(ρ, z, t) = Aν J₀(µρ) (e^{νz} − e^{−νz}) e^{−κλ²t}.

Next from

0 = u(ρ, 1, t) = Aν J₀(µρ) (e^{ν} − e^{−ν}) e^{−κλ²t}

we get e^{ν} − e^{−ν} = 0, which means e^{2ν} = 1 and ν = kπi. So the solution becomes

u(ρ, z, t) = Aν J₀(µρ) sin(kπz) e^{−κλ²t}.

From u(1, z, t) = 0 we get

0 = Aν J₀(µ) sin(kπz) e^{−κλ²t}

so µ ∈ {µ₁, µ₂, ...}, the set of the roots of J₀(µ) = 0. Furthermore

λ² = µ² − ν² = µ² + k²π²

So the solution becomes

u(ρ, z, t) = Σ_k Σ_m A_{m,k} J₀(µm ρ) sin(kπz) e^{−κ(µm² + k²π²)t}

Now

F(ρ, z) = u(ρ, z, 0) = Σ_k Σ_m A_{m,k} J₀(µm ρ) sin(kπz) = Σ_k B_k(ρ) sin(kπz).

So, taking ρ as constant, we get

B_k(ρ) = 2 ∫₀¹ F(ρ, z) sin(kπz) dz.

And then

A_{m,k} = (2/J₁(µm)²) ∫₀¹ ρ B_k(ρ) J₀(µm ρ) dρ

or

A_{m,k} = (4/J₁(µm)²) ∫₀¹ ∫₀¹ ρ F(ρ, z) J₀(µm ρ) sin(kπz) dρ dz.

16.1.23 Let us solve the wave equation on a circular drum:

0 < ρ < 1, 0 < φ < 2π, 0 < t : utt = c² (uρρ + (1/ρ)uρ + (1/ρ²)uφφ)
0 < φ < 2π, 0 < t : u(1, φ, t) = 0,
0 < ρ < 1, 0 < φ < 2π : ut(ρ, φ, 0) = 0,
0 < ρ < 1, 0 < φ < 2π : u(ρ, φ, 0) = F(ρ, φ)

Letting

u(ρ, φ, t) = P(ρ) Φ(φ) T(t)

we get

P Φ T″ = c² (P″ Φ T + (1/ρ) P′ Φ T + (1/ρ²) P Φ″ T) ⇒
T″/(c²T) = P″/P + (1/ρ)(P′/P) + (1/ρ²)(Φ″/Φ) = −λ²
With the usual methods we get

T″ + λ²c²T = 0
Φ″ + µ²Φ = 0
ρ²P″ + ρP′ + (λ²ρ² − µ²)P = 0.

The general solutions are

T(t) = A₁ cos(λct) + B₁ sin(λct)
Φ(φ) = A₂ cos(µφ) + B₂ sin(µφ)
P(ρ) = A₃ Jµ(λρ) + B₃ Yµ(λρ)

Using the initial and boundary conditions we get

B₁ = 0,  µ = m ∈ {0, 1, 2, ...},  B₃ = 0.

So the solution becomes

u(ρ, φ, t) = Jm(λρ) (A cos(mφ) + B sin(mφ)) cos(λct).

From

0 = u(1, φ, t) = Jm(λ) (A cos(mφ) + B sin(mφ))

we get λ ∈ {λmk}, where for every m, λmk is the k-th root of Jm(λ). Then the general solution is

u(ρ, φ, t) = Σ_{m=0}^∞ Σ_{k=1}^∞ Jm(λmk ρ) (Amk cos(mφ) + Bmk sin(mφ)) cos(λmk ct).

Using the same approach as before we get

Cm(ρ) = (1/π) ∫₀^{2π} F(ρ, φ) cos(mφ) dφ
Dm(ρ) = (1/π) ∫₀^{2π} F(ρ, φ) sin(mφ) dφ

and

Amk = (2/J_{m+1}(λmk)²) ∫₀¹ ρ Jm(λmk ρ) Cm(ρ) dρ
Bmk = (2/J_{m+1}(λmk)²) ∫₀¹ ρ Jm(λmk ρ) Dm(ρ) dρ

16.2 Solved Problems


16.3 Unsolved Problems
1. Prove that J₀′(x) = −J₁(x).
2. Prove that Jν′(x) = (1/2)(J_{ν−1}(x) − J_{ν+1}(x)).
3. Prove that Jν″(x) = (1/4)(J_{ν−2}(x) − 2Jν(x) + J_{ν+2}(x)).
4. Evaluate ∫ x³J₂(x) dx.
Ans. x³J₃(x) + c.
5. Evaluate ∫₀¹ x³J₀(x) dx.
Ans. 2J₀(1) − 3J₁(1).
6. Evaluate ∫ x²J₀(x) dx.
Ans. x²J₁(x) + xJ₀(x) − ∫ J₀(x) dx.
7. Evaluate ∫ J₁(∛x) dx.
Ans. 3x^{2/3} J₂(∛x) + c.
8. Evaluate ∫ (J₂(x)/x²) dx.
Ans. −J₂(x)/(3x) − J₁(x)/3 + (1/3) ∫ J₀(x) dx.
9. Evaluate ∫ J₀(x) sin x dx.
Ans. xJ₀(x) sin x − xJ₁(x) cos x + c.

10. Solve ut = k(uρρ + (1/ρ)uρ) on an axially symmetric cylinder of infinite length and radius R, subject to

0 < t : u(R,t) = 0,
0 < ρ < R : u(ρ, 0) = f(ρ).

Ans. u(ρ,t) = Σ_{n=1}^∞ An J₀(λn ρ) e^{−kλn²t}, with An = (2/(R² J₁(λn R)²)) ∫₀^R ρ f(ρ) J₀(λn ρ) dρ, the λn being the roots of J₀(λR) = 0.
11. Prove that J02 (x) + ∑∞ 2
k=1 Jk (x) = 1.

12. Solve: u_tt = u_ρρ + (1/ρ) u_ρ + (1/ρ²) u_φφ on the unit disk with
0 < φ < 2π, 0 < t : u(1, φ, t) = 0,
0 < φ < 2π, 0 < ρ < 1 : u(ρ, φ, 0) = ρ cos(3φ),
0 < φ < 2π, 0 < ρ < 1 : u_t(ρ, φ, 0) = 0.
Ans. u(ρ, φ, t) = ∑_{n=1}^∞ A_n J₃(λ_n ρ) cos(3φ) cos(λ_n t), where λ_n is the n-th positive root of J₃ and A_n = 2((λ_n² − 8) J₀(λ_n) − 6 λ_n J₁(λ_n) + 8) / (λ_n³ J₄(λ_n)²).

16.4 Advanced Problems
1. Prove that J₀(x) + 2J₂(x) + 2J₄(x) + ... = 1.
2. Prove that J₁(x) − J₃(x) + J₅(x) − J₇(x) + ... = (1/2) sin x.
3. Prove that x Jν′(x) = ν Jν(x) − x J_{ν+1}(x).
4. Prove that x Jν′(x) = x J_{ν−1}(x) − ν Jν(x).
5. Prove that J_{ν+1}(x) = (2ν/x) Jν(x) − J_{ν−1}(x).
6. Prove that (x^ν Jν(x))′ = x^ν J_{ν−1}(x).
7. Prove that (x^{−ν} Jν(x))′ = −x^{−ν} J_{ν+1}(x).
8. Prove that J_{3/2}(x) = √(2/(πx)) (sin x / x − cos x).
9. Prove that J₀(x) + 2 ∑_{k=1}^∞ (−1)^k J_{2k}(x) = cos x.
10. Prove that ∑_{k=1}^∞ (2k − 1) J_{2k−1}(x) = x/2.
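Identities such as problems 1, 9 and 10 above can be sanity-checked numerically before attempting a proof. The sketch below is an editorial illustration (the series evaluation of J_n and the truncation point are our choices); each truncated sum is compared against the claimed closed form at a sample point.

```python
import math

def J(n, x, terms=40):
    """Integer-order Bessel function J_n(x) via its power series."""
    return sum((-1)**k / (math.factorial(k) * math.factorial(k + n)) * (x/2)**(2*k + n)
               for k in range(terms))

x = 1.3
even_sum = J(0, x) + 2 * sum(J(2*k, x) for k in range(1, 13))            # problem 1
cos_sum  = J(0, x) + 2 * sum((-1)**k * J(2*k, x) for k in range(1, 13))  # problem 9
half_x   = sum((2*k - 1) * J(2*k - 1, x) for k in range(1, 13))          # problem 10
```

For x = 1.3 the truncation error of each sum is far below machine-visible size, so all three agree with 1, cos x and x/2 respectively to high precision.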
17. Vector Spaces of Functions

We have already mentioned that many concepts of Linear Algebra (linear combination, vector space, independence, orthogonality, ...) can be applied to sets of functions which, under appropriate conditions, can be viewed as vector spaces. In this chapter we will revisit this idea and explore it in more depth.

17.1 Theory and Examples


17.1.1 Hint. Whenever a new definition or theorem is introduced, try to find the corresponding definition or theorem concerning the finite dimensional vector space R^N (which you have probably seen in Linear Algebra).
17.1.2 Definition. Given a set of functions F, we say that F is a vector space iff

∀ f , g ∈ F and ∀κ, λ ∈ R : κ f + λ g ∈ F.

17.1.3 Definition. Given a vector space F and a vector space G ⊆ F, we say that
G is a vector subspace of F.
17.1.4 Example. Let P_N be the set of polynomials of degree at most N. P_N is a vector space. Indeed, if f(t), g(t) are polynomials of degree at most N, the same is true of h(t) = κ f(t) + λ g(t). Furthermore, for any M, N ∈ N₀ with M ≤ N, P_M is a vector subspace of P_N.
17.1.5 Example. S_L, the solution set of L(y) = 0 (where L is a linear differential operator), is a vector space, as we have seen in Chapter 2.
17.1.6 Example. The set F_L of functions which satisfy the Dirichlet conditions on [−L, L] is a vector space, as we have seen in Chapter 9.
17.1.7 Definition. Let C^N(a, b) (with a, b ∈ R) be the set of functions which have continuous derivatives of orders 0, 1, ..., N on [a, b]. C^∞(a, b) is the set of functions which have continuous derivatives of all orders 0, 1, ... on [a, b].
17.1.8 Definition. Let C̃^N(a, b) (with a, b ∈ R) be the set of functions which have piecewise continuous derivatives of orders 0, 1, ..., N on [a, b]. C̃^∞(a, b) is the set of functions which have piecewise continuous derivatives of all orders 0, 1, ... on [a, b].
17.1.9 Example. It is easily checked that, for any N ∈ N*, C̃^N(a, b) is a vector space and C^N(a, b) is a vector subspace of C̃^N(a, b). Also, if M ≥ N, then C̃^M(a, b) is a vector subspace of C̃^N(a, b).
17.1.10 Definition. Let F be a vector space of functions. We say that (f_n)_{n∈A} ⊆ F is linearly independent in F iff

(∀x : ∑_{n∈A} κ_n f_n(x) = 0) ⇒ (∀n ∈ A : κ_n = 0).

17.1.11 Example. Take P₂ and its elements f₀(x) = 1, f₁(x) = x, f₂(x) = x². Then the set
(f_n)_{n∈{0,1,2}} = (f₀, f₁, f₂) = (1, x, x²)
is a linearly independent set in P₂.


17.1.12 Definition. Let F be a vector space of functions and take (f_n)_{n∈A} ⊆ F, where A ⊆ N. The span of (f_n)_{n∈A} is

Span((f_n)_{n∈A}) := { g : g = ∑_{n∈A} κ_n f_n, ∀n : κ_n ∈ R }.

The definition covers the cases of both finite and countably infinite sequences of functions.
17.1.13 Example. It is easily checked that, for the space of second order polynomials P₂, we have

P₂ = Span(1, x, x²) = Span(1 + x, 1 − x, x²) = Span(1, 4, 1 − x, 3x²).

17.1.14 Theorem. Let F be a vector space of functions and take (f_n)_{n∈A} ⊆ F. Then Span((f_n)_{n∈A}) is a vector space.
Proof. Suppose g, h ∈ Span((f_n)_{n∈A}) and κ, λ ∈ R. Then

g = ∑_{n∈A} a_n f_n,  h = ∑_{n∈A} b_n f_n

and therefore

κg + λh = κ ∑_{n∈A} a_n f_n + λ ∑_{n∈A} b_n f_n = ∑_{n∈A} (κ a_n + λ b_n) f_n ∈ Span((f_n)_{n∈A}).

17.1.15 Definition. Given a vector space F and some strictly positive function w(x), we define for all f, g ∈ F the inner product of f, g with respect to w by

(f, g)_w := ∫_a^b f(x) g(x) w(x) dx.

In case w(x) = 1, we simply write (f, g).
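The definition translates directly into a small numerical routine. The sketch below is an editorial illustration (the helper names and the composite Simpson rule are our choices, not the text's):

```python
import math

def inner(f, g, a, b, w=lambda x: 1.0, n=2000):
    """(f, g)_w = integral over [a, b] of f(x) g(x) w(x), via the Simpson rule."""
    h = (b - a) / n
    total = 0.0
    for i in range(n + 1):
        x = a + i * h
        coef = 1 if i in (0, n) else (4 if i % 2 == 1 else 2)
        total += coef * f(x) * g(x) * w(x)
    return total * h / 3

ip_sc = inner(math.sin, math.cos, 0, 2*math.pi)   # sin, cos are orthogonal here
ip_ss = inner(math.sin, math.sin, 0, 2*math.pi)   # equals the squared norm, π
```

For example, on [0, 2π] the routine returns (sin, cos) ≈ 0 and (sin, sin) ≈ π.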


17.1.16 Theorem. For all f, g, h ∈ F and for all κ, λ ∈ R the following hold.
1. (f, g) = (g, f).
2. (κf + λg, h) = κ(f, h) + λ(g, h).
3. (f, f) ≥ 0.
4. (f, f) = 0 iff f(x) = 0 for almost all x ∈ [a, b].
Proof. The proofs of 1-3 are easy. For example we have

(f, g) = ∫_a^b f(x) g(x) dx = ∫_a^b g(x) f(x) dx = (g, f)

and this proves 1; we can similarly prove 2 and 3.
Regarding 4, if we have

0 = (f, f) = ∫_a^b |f(x)|² dx,

we cannot conclude that f(x) = 0 for all x ∈ [a, b]. For example, consider the function

f(x) = 0 for x ≠ (a + b)/2,  f(x) = 1 for x = (a + b)/2.

Then ∫_a^b |f(x)|² dx = 0. This happens because the value of the integral does not depend on the value of the function at a single point. In fact, when the integral is understood in the Lebesgue sense, the value of the integral does not change if we change the function values at a countable set of points or even at any point set of measure zero¹. However, using the theory of Lebesgue integration, we can prove that ∫_a^b |f(x)|² dx = 0 iff f(x) ≠ 0 only on a set of zero measure, which is exactly the statement of 4.
17.1.17 We can conclude that

(f, f) = 0 ⇔ (∀x ∈ [a, b] : f(x) = 0)

if, for example, we assume that f(x) is a continuous function. For this and related reasons, in what follows we will deal with the space C̃²(a, b) equipped with the inner product (·, ·). The analysis also holds if we use the weighted inner product (·, ·)_w; we have omitted this aspect in the interest of brevity. Also, much of the following analysis holds for many other vector spaces of functions.
17.1.18 Definition. The norm of f ∈ C̃²(a, b) is defined by

‖f‖ := (f, f)^{1/2}.

17.1.19 Recall that a, b are finite (a, b ∈ R). Hence: ∀f ∈ C̃²(a, b) : ‖f‖ < ∞ (why?).
17.1.20 Theorem. For all f ∈ C̃²(a, b) and κ ∈ R, the following hold.
1. ‖f‖ ≥ 0.
2. ‖f‖ = 0 iff f(x) = 0(x).
3. ‖κf‖ = |κ| ‖f‖.
¹For the exact meanings of these terms see Chapter B.

Proof. For 1, we note that

‖f‖ = (f, f)^{1/2} = (∫_a^b |f(x)|² dx)^{1/2} ≥ 0.

For 2, we have already remarked that, when f is continuous,

‖f‖² = (f, f) = 0 ⇒ f(x) = 0(x).

For 3, we have

‖κf‖ = (κf, κf)^{1/2} = (∫_a^b |κ f(x)|² dx)^{1/2} = (κ² ∫_a^b |f(x)|² dx)^{1/2} = |κ| (f, f)^{1/2} = |κ| ‖f‖.

17.1.21 Theorem (Schwarz inequality). For all f, g ∈ C̃²(a, b) we have

|(f, g)| ≤ ‖f‖ · ‖g‖.

Proof. Let

∀λ ∈ R : a(λ) := (f + λg, f + λg) = ‖f + λg‖² ≥ 0.

Now

(f + λg, f + λg) = (f, f) + λ(f, g) + λ(g, f) + λ²(g, g) = λ² ‖g‖² + 2(f, g) λ + ‖f‖².

Since a(λ) is a quadratic in λ which is nonnegative for every λ ∈ R, its discriminant must be nonpositive. Then

(2(f, g))² − 4 ‖g‖² ‖f‖² ≤ 0 ⇒ |(f, g)|² ≤ ‖g‖² ‖f‖² ⇒ |(f, g)| ≤ ‖g‖ ‖f‖.

17.1.22 Theorem (Triangle inequality). For all f, g ∈ C̃²(a, b) we have

‖f + g‖ ≤ ‖f‖ + ‖g‖.

Proof. Using the a(λ) of the previous proof, we have

‖f + g‖² = a(1) = ‖g‖² + 2(f, g) + ‖f‖² ≤ ‖g‖² + 2 ‖g‖ ‖f‖ + ‖f‖²

hence

‖f + g‖² ≤ (‖f‖ + ‖g‖)² ⇒ ‖f + g‖ ≤ ‖f‖ + ‖g‖.
17.1.23 Definition. Given a function g ∈ C̃²(a, b) and a sequence of functions (g_n)_{n=0}^∞ ⊆ C̃²(a, b), we say that g_n converges to g in the mean iff

lim_{n→∞} ‖g − g_n‖ = 0

or, more explicitly,

lim_{n→∞} ∫_a^b |g(t) − g_n(t)|² dt = 0.

We also use the notation
l.i.m._{n→∞} g_n = g.

17.1.24 Example. Define

g_n(x) := 1 for x ∈ [0, 1/n],  g_n(x) := 0 for x ∈ (1/n, 1].

Then lim_{n→∞} ‖0 − g_n‖ = 0, in the sense

lim_{n→∞} ∫₀¹ |0(t) − g_n(t)|² dt = 0.

17.1.25 Definition. We say that f, g ∈ C̃²(a, b) are orthogonal (to each other) in [a, b] iff (f, g) = 0.
17.1.26 Definition. We say that the set (f_n)_{n∈A} ⊆ C̃²(a, b) is orthogonal in [a, b] iff

∀m ≠ n : (f_m, f_n) = 0.

We say that (f_n)_{n∈A} is orthonormal in [a, b] iff (a) it is orthogonal in [a, b] and (b) ∀n ∈ A : ‖f_n‖ = 1.
17.1.27 Example. Since

∀m, n ∈ N : ∫₀^{2π} sin(mx) sin(nx) dx = 0 for m ≠ n, and = π for m = n,

it follows that (in the interval [0, 2π]):
1. sin x and sin 2x are orthogonal to each other;
2. the set (sin(nx))_{n=1}^∞ is orthogonal;
3. the set ((1/√π) sin(nx))_{n=1}^∞ is orthonormal.
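The orthonormality claimed in item 3 can be verified numerically by computing the Gram matrix of the first few elements, which should be the identity. The following sketch is an editorial illustration (the quadrature helper is our own, not from the text):

```python
import math

def simpson(f, a, b, n=2000):
    """Composite Simpson rule for the integral of f over [a, b]."""
    h = (b - a) / n
    return h/3 * sum((1 if i in (0, n) else 4 if i % 2 else 2) * f(a + i*h)
                     for i in range(n + 1))

def f(n):
    """n-th element of the claimed orthonormal set on [0, 2π]."""
    return lambda x: math.sin(n * x) / math.sqrt(math.pi)

# Gram matrix of the first three elements; expected to be (close to) identity
gram = [[simpson(lambda x: f(m)(x) * f(n)(x), 0, 2*math.pi) for n in (1, 2, 3)]
        for m in (1, 2, 3)]
```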
17.1.28 Theorem. If the set (f_n)_{n∈A} is orthogonal (and contains no zero function) then it is linearly independent.
Proof. We have

∑_{n∈A} κ_n f_n = 0 ⇒ ∀m ∈ A : (f_m, ∑_{n∈A} κ_n f_n) = 0
⇒ ∀m ∈ A : ∑_{n∈A} κ_n (f_m, f_n) = 0
⇒ ∀m ∈ A : κ_m ‖f_m‖² = 0 (by orthogonality, only the n = m term survives)
⇒ ∀m ∈ A : κ_m = 0.

17.1.29 Definition. Given functions g, f ∈ C̃²(a, b), the projection of g to f is defined by

Proj(g|f) := arg min_{h = κf} ‖g − h‖.

17.1.30 Theorem. Given functions g, f ∈ C̃²(a, b), the projection of g to f is

Proj(g|f) = ((g, f) / ‖f‖²) f.

Furthermore g − Proj(g|f) is orthogonal to Proj(g|f):

(g − Proj(g|f), Proj(g|f)) = 0

and

‖g − Proj(g|f)‖² + ‖Proj(g|f)‖² = ‖g‖².

Proof. Define the function

J(κ) := ‖g − κf‖² = (g − κf, g − κf) = κ² ‖f‖² − 2κ (g, f) + ‖g‖².

This is a quadratic function and achieves its global minimum value at

κ₀ = (g, f) / ‖f‖².

Hence both ‖g − κf‖² and ‖g − κf‖ are minimized when

h = κ₀ f = ((g, f) / ‖f‖²) f

which, by definition, is the projection Proj(g|f).
Furthermore

(g − Proj(g|f), Proj(g|f)) = (g, Proj(g|f)) − (Proj(g|f), Proj(g|f))
= (g, ((g, f)/‖f‖²) f) − ‖Proj(g|f)‖²
= |(g, f)|²/‖f‖² − (|(g, f)|²/‖f‖⁴) ‖f‖² = 0.

Finally

‖g‖² = ‖g − Proj(g|f) + Proj(g|f)‖²
= (g − Proj(g|f) + Proj(g|f), g − Proj(g|f) + Proj(g|f))
= ‖g − Proj(g|f)‖² + ‖Proj(g|f)‖² + 2 (g − Proj(g|f), Proj(g|f))
= ‖g − Proj(g|f)‖² + ‖Proj(g|f)‖².

17.1.31 Example. In [0, 1], the projection of g(x) = x to f(x) = x² is

((g, f)/‖f‖²) f = (∫₀¹ x · x² dx / ∫₀¹ (x²)² dx) x².

Since

∫₀¹ x · x² dx = 1/4,  ∫₀¹ (x²)² dx = 1/5

we have

Proj(x|x²) = ((1/4)/(1/5)) x² = (5/4) x².

Furthermore we have

(x − Proj(x|x²), Proj(x|x²)) = ∫₀¹ (x − (5/4)x²) (5/4) x² dx = 0

and

‖x‖² = ∫₀¹ x² dx = 1/3,
‖x − Proj(x|x²)‖² = ∫₀¹ (x − (5/4)x²)² dx = 1/48,
‖Proj(x|x²)‖² = ∫₀¹ ((5/4)x²)² dx = 5/16,

hence

‖x‖² = ‖x − Proj(x|x²)‖² + ‖Proj(x|x²)‖².

17.1.32 Example. In [0, 2π], the projection of g(x) = x to f(x) = sin x is

((g, f)/‖f‖²) f = (∫₀^{2π} x sin x dx / ∫₀^{2π} sin² x dx) sin x.

Since

∫₀^{2π} x sin x dx = −2π,  ∫₀^{2π} sin² x dx = π

we have

Proj(x|sin x) = (−2π/π) sin x = −2 sin x.

Furthermore we have

(x − Proj(x|sin x), Proj(x|sin x)) = ∫₀^{2π} (x + 2 sin x)(−2 sin x) dx = 0

and

‖x‖² = ∫₀^{2π} x² dx = (8/3) π³,
‖x − Proj(x|sin x)‖² = ∫₀^{2π} (x + 2 sin x)² dx = (8/3) π³ − 4π,
‖Proj(x|sin x)‖² = ∫₀^{2π} (−2 sin x)² dx = 4π,

hence

‖x‖² = ‖x − Proj(x|sin x)‖² + ‖Proj(x|sin x)‖².
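The computations of this example can be reproduced numerically. The sketch below is an editorial illustration (Simpson quadrature is our choice of tool): it recomputes the projection coefficient, checks orthogonality of the residual, and checks the Pythagorean norm identity.

```python
import math

def simpson(f, a, b, n=4000):
    """Composite Simpson rule for the integral of f over [a, b]."""
    h = (b - a) / n
    return h/3 * sum((1 if i in (0, n) else 4 if i % 2 else 2) * f(a + i*h)
                     for i in range(n + 1))

a, b = 0.0, 2*math.pi
g, f = (lambda x: x), math.sin

kappa = simpson(lambda x: g(x)*f(x), a, b) / simpson(lambda x: f(x)**2, a, b)
proj = lambda x: kappa * f(x)                                 # expected: −2·sin x
orth = simpson(lambda x: (g(x) - proj(x)) * proj(x), a, b)    # should vanish
pyth = (simpson(lambda x: (g(x) - proj(x))**2, a, b)
        + simpson(lambda x: proj(x)**2, a, b)
        - simpson(lambda x: g(x)**2, a, b))                   # should vanish
```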

17.1.33 Definition. Given a function g ∈ C̃²(a, b) and a vector subspace V ⊆ C̃²(a, b), the projection of g to V is

Proj(g|V) := arg min_{h ∈ V} ‖g − h‖.

17.1.34 Theorem. Given a function g ∈ C̃²(a, b) and an orthonormal set (f_n)_{n=1}^N ⊆ C̃²(a, b), let V = Span((f_n)_{n=1}^N). Then we have

Proj(g|V) = ∑_{n=1}^N Proj(g|f_n) = ∑_{n=1}^N (g, f_n) f_n.  (17.1)

Furthermore g − Proj(g|V) is orthogonal to Proj(g|V):

(g − Proj(g|V), Proj(g|V)) = 0;  (17.2)

and, in fact, to every h ∈ V:

∀h ∈ V : (g − Proj(g|V), h) = 0;  (17.3)

we also have

‖g − Proj(g|V)‖² + ‖Proj(g|V)‖² = ‖g‖².  (17.4)

Finally

∑_{n=1}^N |(g, f_n)|² ≤ ‖g‖².  (17.5)

Proof. Since V = Span((f_n)_{n=1}^N), we want to minimize

J(κ₁, ..., κ_N) = ‖g − ∑_{n=1}^N κ_n f_n‖² = (g − ∑_{n=1}^N κ_n f_n, g − ∑_{n=1}^N κ_n f_n)
= ‖g‖² + ‖∑_{n=1}^N κ_n f_n‖² − 2 (g, ∑_{n=1}^N κ_n f_n)
= ‖g‖² + ∑_{n=1}^N κ_n² ‖f_n‖² − 2 ∑_{n=1}^N κ_n (g, f_n)
= ‖g‖² + ∑_{n=1}^N κ_n² − 2 ∑_{n=1}^N κ_n (g, f_n)

(since ‖f_n‖ = 1). Setting the partial derivatives of J(κ₁, ..., κ_N) equal to zero we have

∀n : 2κ_n − 2(g, f_n) = 0,

which implies that the only critical point of J(κ₁, ..., κ_N) is

∀n : κ_n* = (g, f_n),

and it is clearly a global minimum. Hence, by definition,

Proj(g|V) = ∑_{n=1}^N (g, f_n) f_n.

The results (17.2)-(17.4) are proved as in the proof of Theorem 17.1.30. Then, from (17.4), we have

‖Proj(g|V)‖² ≤ ‖g‖².

But, by orthonormality of (f_n)_{n=1}^N, we have

‖Proj(g|V)‖² = ‖∑_{n=1}^N (g, f_n) f_n‖² = (∑_{n=1}^N (g, f_n) f_n, ∑_{n=1}^N (g, f_n) f_n) = ∑_{n=1}^N |(g, f_n)|²,

which yields (17.5).


17.1.35 Example. With f₁ = sin x / √π, f₂ = sin 2x / √π, the set (f_n)_{n=1}² is orthonormal in [−π, π] (check it). Then with g = x and V = Span(sin x / √π, sin 2x / √π) we have

(g, f₁) = ∫_{−π}^π x (sin x / √π) dx = 2√π,
Proj(g|f₁) = (g, f₁) f₁ = 2 sin x;
(g, f₂) = ∫_{−π}^π x (sin 2x / √π) dx = −√π,
Proj(g|f₂) = (g, f₂) f₂ = −sin 2x.

Hence

Proj(g|V) = Proj(g|f₁) + Proj(g|f₂) = 2 sin x − sin 2x,
g − Proj(g|V) = x − 2 sin x + sin 2x.

Also

‖g‖² = ∫_{−π}^π x² dx = (2/3) π³,
‖Proj(g|V)‖² = ∫_{−π}^π (2 sin x − sin 2x)² dx = 5π,
‖g − Proj(g|V)‖² = ∫_{−π}^π (x − 2 sin x + sin 2x)² dx = (2/3) π³ − 5π

and so

‖g − Proj(g|V)‖² + ‖Proj(g|V)‖² = ‖g‖².
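The same kind of numerical check works for projections onto a span. The sketch below (an editorial illustration; the helper names are ours) recomputes Proj(g|V) from the coefficients (g, f_n), compares it against 2 sin x − sin 2x, and exhibits the slack in the inequality (17.5).

```python
import math

def simpson(f, a, b, n=4000):
    """Composite Simpson rule for the integral of f over [a, b]."""
    h = (b - a) / n
    return h/3 * sum((1 if i in (0, n) else 4 if i % 2 else 2) * f(a + i*h)
                     for i in range(n + 1))

a, b = -math.pi, math.pi
g = lambda x: x
basis = [lambda x: math.sin(x) / math.sqrt(math.pi),
         lambda x: math.sin(2*x) / math.sqrt(math.pi)]

coeffs = [simpson(lambda x: g(x) * fn(x), a, b) for fn in basis]   # (g, f_n)
proj = lambda x: sum(c * fn(x) for c, fn in zip(coeffs, basis))
dev = simpson(lambda x: (proj(x) - (2*math.sin(x) - math.sin(2*x)))**2, a, b)
slack = simpson(lambda x: g(x)**2, a, b) - sum(c*c for c in coeffs)  # ≥ 0
```

Here slack = ‖g‖² − ∑|(g, f_n)|² = (2/3)π³ − 5π ≈ 4.96, strictly positive since V does not contain g.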
17.1.36 Example. With f₁ = 1, f₂ = √3 (1 − 2x) the set (f_n)_{n=1}² is orthonormal in [0, 1] (check it). Then with g = x² and V = Span(f₁, f₂) we have

(g, f₁) = ∫₀¹ x² · 1 dx = 1/3,
Proj(g|f₁) = (g, f₁) f₁ = 1/3;
(g, f₂) = ∫₀¹ x² √3 (1 − 2x) dx = −√3/6,
Proj(g|f₂) = (g, f₂) f₂ = −(1/2)(1 − 2x) = x − 1/2.

And so

Proj(g|V) = Proj(g|f₁) + Proj(g|f₂) = 1/3 + x − 1/2 = x − 1/6,
g − Proj(g|V) = x² − x + 1/6

and

‖g‖² = ∫₀¹ x⁴ dx = 1/5,
‖Proj(g|V)‖² = ∫₀¹ (x − 1/6)² dx = 7/36,
‖g − Proj(g|V)‖² = ∫₀¹ (x² − x + 1/6)² dx = 1/180

and

‖g − Proj(g|V)‖² + ‖Proj(g|V)‖² = 7/36 + 1/180 = 1/5 = ‖g‖².
17.1.37 The following two theorems are easy corollaries of Theorem 17.1.34.

17.1.38 Theorem (Pythagorean). Given a function g ∈ C̃²(a, b) and a vector subspace V ⊆ C̃²(a, b), we can decompose g as

g = ĝ + g̃

where

‖g‖² = ‖ĝ‖² + ‖g̃‖² and (ĝ, g̃) = 0.

17.1.39 Theorem. Let g ∈ C̃²(a, b) and let (f_n)_{n=1}^∞ ⊆ C̃²(a, b) be an orthonormal set. Then

lim_{n→∞} (g, f_n) = 0.

17.1.40 Definition. Given an orthonormal set (f_n)_{n=1}^∞ ⊆ C̃²(a, b), we say that (f_n)_{n=1}^∞ is complete iff

∀g ∈ C̃²(a, b) : lim_{N→∞} ‖g − ∑_{n=1}^N (g, f_n) f_n‖ = 0.

17.1.41 Theorem. Given an orthonormal set (f_n)_{n=1}^∞ ⊆ C̃²(a, b), (f_n)_{n=1}^∞ is complete iff

∀g ∈ C̃²(a, b) : ∑_{n=1}^∞ |(g, f_n)|² = ‖g‖².  (17.6)

Proof. Let V_N = Span((f_n)_{n=1}^N). Then we have

‖g‖² = ‖g − Proj(g|V_N)‖² + ‖Proj(g|V_N)‖² = ‖g − ∑_{n=1}^N (g, f_n) f_n‖² + ‖∑_{n=1}^N (g, f_n) f_n‖².

If (f_n)_{n=1}^∞ is complete, then

∀g ∈ C̃²(a, b) : lim_{N→∞} ‖g − ∑_{n=1}^N (g, f_n) f_n‖ = 0

and so

∀g ∈ C̃²(a, b) : ‖g‖² = lim_{N→∞} ‖∑_{n=1}^N (g, f_n) f_n‖² = ∑_{n=1}^∞ |(g, f_n)|².

Conversely, if

∀g ∈ C̃²(a, b) : ∑_{n=1}^∞ |(g, f_n)|² = ‖g‖²  (17.7)

then

∀g ∈ C̃²(a, b) : lim_{N→∞} ‖g − ∑_{n=1}^N (g, f_n) f_n‖² = 0  (17.8)

which means (f_n)_{n=1}^∞ is complete.
17.1.42 Definition. Given a complete orthonormal set (f_n)_{n=1}^∞ ⊆ C̃²(a, b), the generalized Fourier series (with respect to (f_n)_{n=1}^∞) of g ∈ C̃²(a, b) is

∑_{n=1}^∞ (g, f_n) f_n.  (17.9)
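Equality (17.6) can be watched numerically. In the sketch below (an editorial illustration; the sine family ((1/√π) sin(nx))_{n=1}^∞ is complete for the odd functions on [−π, π], which is an assumption we rely on here) we take the odd function g(x) = x and observe the truncated sum of |(g, f_n)|² climbing toward ‖g‖² from below.

```python
import math

def simpson(f, a, b, n=3000):
    """Composite Simpson rule for the integral of f over [a, b]."""
    h = (b - a) / n
    return h/3 * sum((1 if i in (0, n) else 4 if i % 2 else 2) * f(a + i*h)
                     for i in range(n + 1))

g = lambda x: x
norm_sq = simpson(lambda x: g(x)**2, -math.pi, math.pi)        # ‖g‖² = 2π³/3
coeffs = [simpson(lambda x: g(x) * math.sin(n*x) / math.sqrt(math.pi),
                  -math.pi, math.pi) for n in range(1, 61)]
partial = sum(c*c for c in coeffs)   # ∑ |(g, f_n)|² truncated at n = 60
```

The truncated sum stays strictly below ‖g‖² (Bessel's inequality) and is already within about 1% of it at N = 60.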

17.1.43 We give two final theorems without proof.

17.1.44 Theorem. Let F be a vector space of functions and (f_n)_{n∈A}, (g_n)_{n∈B} two complete (in F) sequences. Then they have the same cardinality, i.e., |A| = |B|.
17.1.45 Definition. Let F be a vector space of functions and (f_n)_{n∈A} a complete (in F) sequence. The dimension of F is the number of elements in (f_n)_{n∈A}, i.e.,

dim(F) = |A|.


h

17.2 Solved Problems

17.3 Unsolved Problems
1. Prove that 0(x) belongs to every vector space of functions.
2. Prove that ‖f − g‖² + ‖f + g‖² = 2 ‖f‖² + 2 ‖g‖².
3. Prove that (f, g) = (1/4) ‖f + g‖² − (1/4) ‖f − g‖².
4. Prove that (f_n)_{n∈A} is linearly dependent iff there is some n* ∈ A and (κ_n)_{n≠n*} such that
f_{n*} = ∑_{n≠n*} κ_n f_n.
5. Prove that: lim_{n→∞} ‖f_n − f‖ = 0 ⇒ lim_{n→∞} ‖f_n‖ = ‖f‖.
The following problems are with respect to the vector space C̃²(a, b).
6. Let f₁ = 1, f₂ = x, f₃ = x². Prove that {f₁, f₂, f₃} is linearly independent.

7. Let f₁ = 1, f₂ = |x|, f₃ = x². Prove that {f₁, f₂, f₃} is linearly independent.
8. Let f₁ = 1, f₂ = x, f₃ = x². With the usual inner product defined on [a, b] = [−1, 1], compute (f₁, f₂), (f₂, f₃), (f₁, 2f₁ + 4f₃), ‖f₁‖², ‖f₃‖², ‖−f₁ + 2f₃‖².
9. Find a, b such that the set {x, a + bx} is orthogonal on [−1, 1].
10. Find a, b such that the set {x, a + bx} is orthogonal on [0, 1].
11. Let f₁ = 1, f₂ = x, f₃ = x². Find functions g₁, g₂, g₃ such that: (a) Span(f₁, f₂, f₃) = Span(g₁, g₂, g₃) and (b) {g₁, g₂, g₃} is orthogonal on [a, b] = [−1, 1].
12. Let f₁ = 1, f₂ = x, f₃ = x². Find functions g₁, g₂, g₃ such that: (a) Span(f₁, f₂, f₃) = Span(g₁, g₂, g₃) and (b) {g₁, g₂, g₃} is orthogonal on [a, b] = [0, 1].
13. Let f₁ = 1, f₂ = x, f₃ = x². Find functions g₁, g₂, g₃ such that: (a) Span(f₁, f₂, f₃) = Span(g₁, g₂, g₃) and (b) {g₁, g₂, g₃} is orthogonal on [a, b] = [0, 2].
14. Find necessary and sufficient conditions (on f, g) so that ‖f + g‖ = ‖f‖ + ‖g‖.
15. Find necessary and sufficient conditions (on f, g) so that |(f, g)| = ‖f‖ ‖g‖.
16. With [a, b] = [−1, 1] find the projections of x and x² to Span(cos(πx), cos(2πx)).
17. With [a, b] = [−1, 1] find the projections of x and x² to Span(sin(πx), sin(2πx)).
18. With [a, b] = [−1, 1] find the projections of x and x² to Span(cos(πx), cos(2πx), sin(πx), sin(2πx)).
19. With [a, b] = [−1, 1] find the projection of x² to Span(cos(πx), cos(2πx), cos(3πx)).
20. With [a, b] = [0, 1] find the projections of x and x² to Span(cos(πx), cos(2πx)).
21. With [a, b] = [0, 1] find the projections of x and x² to Span(sin(πx), sin(2πx)).
22. With [a, b] = [0, 1] find the projections of x and x² to Span(cos(πx), cos(2πx), sin(πx), sin(2πx)).
23. With [a, b] = [0, 1] find the projection of x² to Span(cos(πx), cos(2πx), cos(3πx)).
24. With [a, b] = [−1, 1] find the projection of cos²(πx) to Span(cos(πx), cos(2πx), cos(3πx)).
25. With [a, b] = [−1, 1] find the projection of sin(πx) to Span(cos(πx), cos(2πx), cos(3πx)).
26. With [a, b] = [−1, 1] find the projection of cos(x) to Span(cos(πx), cos(2πx), cos(3πx)).

27. Find conditions on a, b so that the set {cos x, sin x} is orthogonal on [a, b].
28. Find conditions on a, b so that the set (cos(nx))_{n=0}^∞ is orthogonal on [a, b].
29. Given the sequence (f_n)_{n=0}^∞ defined by
f₀(x) := 1, ∀n ∈ N : f_n(x) := (1/(2ⁿ n!)) (dⁿ/dxⁿ)(x² − 1)ⁿ,
prove that (f_n)_{n=0}^∞ is orthogonal in [−1, 1].
30. Given the sequence (f_n)_{n=0}^∞ defined by
∀n ∈ N₀ : f_n(x) := eˣ (dⁿ/dxⁿ)(xⁿ e^{−x}),
prove that ((e^{−x/2}/n!) f_n)_{n=0}^∞ is orthonormal in [0, ∞).

17.4 Advanced Problems

The following problems use the set of functions

L₂(a, b) := { f : ∫_a^b |f(x)|² dx < ∞ }.

1. Prove that L₂(a, b) is a vector space of functions.
2. Prove that sin(x) ∈ L₂(0, 2π).
3. Prove that sin(x) ∉ L₂(−∞, ∞).
4. Prove that e^{−|x|} ∈ L₂(−∞, ∞).
5. Let S be a vector subspace of L₂(a, b) and define the orthogonal complement of S by
S⊥ := { f : ∀g ∈ S : (f, g) = 0 }.
Is S⊥ a vector space?
6. Repeat the above problem with S ⊆ L₂(a, b) but not a vector subspace.
7. Is (S⊥)⊥ = S?
8. Show that S⊥ = ∩_{x∈S} {x}⊥.
9. Show that S⊥ ∩ S ⊆ {0}.
10. Let S be a vector subspace of L₂(a, b) and define the projection operator P_S by
P_S(f) = Proj(f|S).
Prove the following and provide a geometric interpretation.
(a) P_S is a linear operator.
(b) P_S is idempotent: P_S(P_S(f)) = P_S(f).
(c) P_S is distance reducing: ‖P_S(f) − P_S(g)‖ ≤ ‖f − g‖.
11. Prove that: if Q is a linear, idempotent and distance reducing operator, then it is a projection operator into some vector subspace S of L₂(a, b) (i.e., Q = P_S).
12. Prove that dim(L₂(a, b)) = ℵ₀.
18. Sturm-Liouville Problems

We now present the family of so-called Sturm-Liouville problems. This is a class of boundary value ODE problems which arise often in PDE applications and generalize the Fourier and Bessel problems seen in previous chapters¹.

18.1 Theory and Examples

18.1.1 All functions appearing in this chapter belong to C²(a, b), the space of all functions which are continuous and have continuous first and second derivatives on [a, b].
18.1.2 Definition. A regular Sturm-Liouville problem is one of the form

∀x ∈ (a, b) : (d/dx)(p(x) dy/dx) + q(x) y + λ w(x) y = 0,  (18.1)
A y(a) + B y′(a) = 0,  (18.2)
C y(b) + D y′(b) = 0;  (18.3)

with the following additional assumptions:
1. a, b ∈ R;
2. p(x), q(x), w(x) are real-valued functions defined on [a, b] and such that, for all x ∈ [a, b]:
(a) p(x), p′(x), q(x), q′(x) are continuous,
(b) p(x) and w(x) are strictly positive;
3. A, B, C, D ∈ R with (A, B) ≠ (0, 0) and (C, D) ≠ (0, 0).
The Sturm-Liouville operator corresponding to (18.1) is defined by

L(y) := (d/dx)(p(x) dy/dx) + q(x) y.

18.1.3 Note that (18.1)-(18.3) always has the trivial solution y(x) = 0.
¹While the results presented in this chapter are quite interesting and useful, most of the required proofs are beyond the scope of these notes. Hence in this chapter we present theorems without their proofs.

18.1.4 Theorem. Consider the regular Sturm-Liouville problem

∀x ∈ (a, b) : (d/dx)(p(x) dy/dx) + q(x) y + λ w(x) y = 0,  (18.4)
A y(a) + B y′(a) = 0,  (18.5)
C y(b) + D y′(b) = 0.  (18.6)

For a fixed λ, the solution space of (18.4)-(18.6) is a vector space and the corresponding Sturm-Liouville operator is linear.
Proof. Immediate.

18.1.5 Definition. Suppose that for some λ ∈ R there exists some nontrivial solution y_λ(x) of (18.1)-(18.3). Then we call λ an eigenvalue and y_λ(x) a corresponding eigenfunction of (18.1)-(18.3).
18.1.6 Example. Consider the problem

∀x ∈ (0, π) : d²y/dx² + λ y = 0,
y(0) = 0,
y(π) = 0.

This is a regular Sturm-Liouville problem with: a = 0, b = π, p(x) = 1, q(x) = 0, w(x) = 1, A = C = 1, B = D = 0. To solve the problem we first get the general solution in the form

y(x) = c₁ cos(√λ x) + c₂ sin(√λ x)

(note that we can also have λ < 0, in which case √λ is complex). Now we apply the boundary conditions and get

c₁ cos(0) + c₂ sin(0) = 0,
c₁ cos(√λ π) + c₂ sin(√λ π) = 0

which has the following solutions:
1. (c₁, c₂) = (0, 0) for any λ value (but this gives the trivial solution), or
2. (c₁, c₂) = (0, c₂) for any c₂ ∈ R, provided λ = n², n ∈ Z − {0}.
So the SL problem has an infinity of solutions but they all have the form

y(x) = ∑_{n=1}^∞ b_n sin(nx);

in other words, the solution space is

Span(sin(x), sin(2x), sin(3x), ...).
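That y(x) = sin(nx) with λ = n² indeed solves the problem can be confirmed numerically by a finite-difference residual check. The sketch below is an editorial illustration (the step size and sample points are arbitrary choices):

```python
import math

def residual(n, x, h=1e-4):
    """|y'' + λ y| at x for y(x) = sin(nx), λ = n², via central differences."""
    y = lambda t: math.sin(n * t)
    ypp = (y(x + h) - 2*y(x) + y(x - h)) / h**2   # approximates y''(x)
    return abs(ypp + n**2 * y(x))

# interior residuals for the first three eigenfunctions at a few sample points
worst = max(residual(n, x) for n in (1, 2, 3) for x in (0.3, 1.1, 2.6))
# boundary conditions y(0) = y(π) = 0
bc = max(abs(math.sin(n * 0.0)) + abs(math.sin(n * math.pi)) for n in (1, 2, 3))
```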


18.1.7 Example. Next consider the problem

∀x ∈ (0, π) : d²y/dx² + λ y = 0,
y′(0) = 0,
y′(π) = 0.

Again, it is a regular Sturm-Liouville problem with: a = 0, b = π, p(x) = 1, q(x) = 0, w(x) = 1, A = C = 0, B = D = 1. To solve, we get the general solution in the form

y(x) = c₁ cos(√λ x) + c₂ sin(√λ x)

and from the boundary conditions we get

−c₁ √λ sin(0) + c₂ √λ cos(0) = 0,
−c₁ √λ sin(√λ π) + c₂ √λ cos(√λ π) = 0

which, apart from the trivial solution, has solutions (c₁, 0) for any c₁ ∈ R, provided λ = n², n ∈ Z.
So we get an infinity of solutions, all having the form

y(x) = ∑_{n=0}^∞ a_n cos(nx);

the solution space is

Span(1, cos(x), cos(2x), ...).
18.1.8 Example. Now consider the problem

∀x ∈ (0, π) : d²y/dx² + λ y = 0,
y(0) = 0,
y′(π) = 0.

Again, it is a regular Sturm-Liouville problem with: a = 0, b = π, p(x) = 1, q(x) = 0, w(x) = 1, A = D = 1, B = C = 0. To solve, we get the general solution in the form

y(x) = c₁ cos(√λ x) + c₂ sin(√λ x)

and from the boundary conditions we get

c₁ cos(0) + c₂ sin(0) = 0,
−c₁ √λ sin(√λ π) + c₂ √λ cos(√λ π) = 0

which, apart from the trivial solution, has solutions (0, c₂) for any c₂ ∈ R, provided λ = (n + 1/2)², n ∈ N₀.
So we get an infinity of solutions, all having the form

y(x) = ∑_{n=0}^∞ b_n sin((n + 1/2) x);

the solution space is

Span(sin(x/2), sin(3x/2), sin(5x/2), ...).

18.1.9 Example. Now consider the problem

∀x ∈ (0, 1) : x² d²y/dx² + x dy/dx + (λ x² − ν²) y = 0,
y bounded as x → 0⁺,
y(1) = 0.

The differential equation is a parametrized form of the Bessel equation. The problem is equivalent to a Sturm-Liouville one, with: a = 0, b = 1, p(x) = x, q(x) = −ν²/x, w(x) = x (note that p(0) = 0, so this problem is singular rather than regular, which is why the condition at x = 0 is a boundedness condition). To see this, consider

(d/dx)(p(x) dy/dx) + q(x) y + λ w(x) y = 0

and take p(x) = x, q(x) = −ν²/x, w(x) = x to get

(d/dx)(x dy/dx) − (ν²/x) y + λ x y = 0 ⇒
x d²y/dx² + dy/dx + (λx − ν²/x) y = 0 ⇒
x² d²y/dx² + x dy/dx + (λ x² − ν²) y = 0.

So the general solution has the form

y(x) = c₁ J_ν(√λ x) + c₂ Y_ν(√λ x)

and the boundedness condition at 0 gives c₂ = 0 (since Y_ν is unbounded there), while y(1) = 0 gives J_ν(√λ) = 0. So, apart from the trivial solution, we get solutions (c₁, 0) for any c₁ ∈ R, provided √λ ∈ {λ_νk}, the positive roots of J_ν.
So we get an infinity of solutions, all having the form

y(x) = ∑_{k=1}^∞ b_k J_ν(λ_νk x);

the solution space is

Span(J_ν(λ_ν1 x), J_ν(λ_ν2 x), ...).
18.1.10 Definition. A periodic Sturm-Liouville problem is one of the form

∀x ∈ (a, b) : (d/dx)(p(x) dy/dx) + q(x) y + λ w(x) y = 0,
y(a) = y(b),
y′(a) = y′(b).

The assumptions on a, b, p(x), q(x), w(x) are the same as for the regular problem and, in addition, we assume p(x), q(x), w(x) are periodic with period b − a (note there exist no A, B, C, D constants in the boundary conditions).
18.1.11 Example. Consider

∀x ∈ (0, π) : d²y/dx² + λ y = 0,
y(0) = y(π),
y′(0) = y′(π).

It is a periodic Sturm-Liouville problem with: a = 0, b = π, p(x) = 1, q(x) = 0, w(x) = 1. The general solution has the form

y(x) = c₁ cos(√λ x) + c₂ sin(√λ x)

and from the boundary conditions we get

c₁ = c₁ cos(√λ π) + c₂ sin(√λ π),
c₂ = −c₁ sin(√λ π) + c₂ cos(√λ π).

This system has nontrivial solutions iff cos(√λ π) = 1, i.e., provided λ = 4n², n ∈ N₀; and then it holds for every pair (c₁, c₂). So we get solutions of the form

y(x) = ∑_{n=0}^∞ (a_n cos(2nx) + b_n sin(2nx)).
18.1.12 In what follows we will always be referring to a regular Sturm-Liouville problem; hence the appearing a, b, p(x), q(x), w(x) and A, B, C, D are always assumed to satisfy the corresponding Sturm-Liouville regularity conditions.
18.1.13 Theorem. Consider the regular Sturm-Liouville problem

∀x ∈ (a, b) : (d/dx)(p(x) dy/dx) + q(x) y + λ w(x) y = 0,  (18.7)
A y(a) + B y′(a) = 0,  (18.8)
C y(b) + D y′(b) = 0.  (18.9)

Suppose that, for a fixed λ, the general solution of (18.7) can be written as c₁ u_λ + c₂ v_λ. Then λ is an eigenvalue of (18.7)-(18.9) iff

| A u_λ(a) + B u_λ′(a)   A v_λ(a) + B v_λ′(a) |
| C u_λ(b) + D u_λ′(b)   C v_λ(b) + D v_λ′(b) | = 0.  (18.10)

18.1.14 Example. Recalling Example 18.1.6,

∀x ∈ (0, π) : d²y/dx² + λ y = 0,
y(0) = 0,
y(π) = 0,

we see that (18.10) becomes (with a = 0, b = π, A = C = 1, B = D = 0, u_λ(x) = cos(√λ x), v_λ(x) = sin(√λ x))

| cos(√λ · 0)   sin(√λ · 0) |
| cos(√λ π)    sin(√λ π)   | = sin(√λ π) = 0,

which is exactly the condition we used in the solution of Example 18.1.6.

18.1.15 Theorem (Symmetry). e2 (a, b) ×
The Sturm-Liouville operator L () : C
h

e2 (a, b) → C
C e2 (a, b) satisfies

(u, L (v)) = (L (u) , v)


At

for all solutions u, v of the corresponding SL problem.


18.1.16 Example. The Sturm-Liouville operator corresponding to Example 18.1.6
d2
is L () = dx 2 . Take u, v solutions of the problem, then

d2v
 
d dv
Z π Z π
(u, L (v)) = u 2 dx = u dx =
0 dx 0 dx dx
dv π
 
dv du dv du
Z π Z π
= u − dx = − dx,
dx 0 0 dx dx 0 dx dx
18.1 Theory and Examples 221

since u (0) = u (π) = 0. Similarly

ias
Z π 2  
d u d du
Z π
(L (u) , v) = vdx = vdx =
0dx2 0 dx dx
du π
 
dv du dv du
Z π Z π
= v − dx = − dx.
dx 0 0 dx dx 0 dx dx

So (u, L (v)) = (L (u)), as required by the theorem. Note that we have not really
used the special form of L for this particular problem. We could use the special
Rπ d2
form of the solutions, to check that 0 sin (5x) dx 2 (sin (3x)) dx = 0 =

ag
d2
Z π
(u, L (v)) = sin (mx) (sin (nx)) dx = 0,
0 dx2
Z π 2
d
(L (u) , v) = (sin (mx)) sin (nx) dx = 0.
0 dx2
18.1.17 Theorem (Orthogonality). If λ , µ are different eigenvalues of a regular
eh
or periodic Sturm-Liouville problem, and uλ , uµ are corresponding
 eigenfunctions,
then uλ , uµ are orthogonal with respect to w (x), i.e. uλ , uµ w = 0 or, explicitly:
Z b
uλ (x) uµ (x) w (x) dx = 0.
a

18.1.18 Example. Continuing with Example 18.1.6, taking eigenvalues λ = m2 6=


n2 = µ , we have corresponding eigenvectors uλ (x) = sin (mx), uµ (x) = sin (nx) and
.K

clearly Z π Z π
uλ (x) uµ (x) dx = sin (mx) sin (nx) = 0.
0 0
18.1.19 Theorem (Real Eigenvalues). If λ is an eigenvalue of a regular or
periodic Sturm-Liouville problem, then λ ∈ R. Let Vλ be the span of the set of all
eigenfunctions corresponding to λ . Then Vλ is a vector space and there exists an
orthonormal set of real-valued eigenfunctions which is complete in Vλ .
18.1.20 Theorem (Eigenvalue Multiplicity). If λ is an eigenvalue of a regular
Sturm-Liouville problem and uλ , vλ two eigenfunctions corresponding to λ , then
h

uλ , vλ are linearly dependent.


18.1.21 Example. Continuing with Example 18.1.6, we have seen that the
eigenvalues have the form λ = n2 ∈ R and for each such λ the corresponding
eigenfunction is uλ (x) = sin (nx). Also Vλ = Span (sin (nx)), a vector space; any two
At

eigenfunctions belonging to Vλ will have the form c1 sin (nx), c2 sin (nx) and hence
will be linearly dependent.
18.1.22 Theorem (Eigenvalue Monotonicity). The set of all eigenvalues of a
regular Sturm-Liouville problem forms a strictly increasing sequence

λ1 < λ2 < ... < λn < λn+1 < ...

which is unbounded:
lim λn = ∞.
n→
222 Chapter 18. Sturm-Liouville Problems

18.1.23 Example. Staying with Example 18.1.6, we can enumerate the eigneval-

ias
ues as λn = n2 ; then we clearly have

λ1 < λ2 < ... and lim λn = ∞.


n→∞

18.1.24 Example.

18.1.25 Theorem (Eigenvalue Monotonicity). The set of all eigenvalues of


a periodic Sturm-Liouville problem (taking multilicities into account) forms an
increasing sequence
λ1 ≤ λ2 ≤ .. ≤ λn ≤ λn+1 ≤ ...

ag
which is unbounded:
lim λn = ∞.
n→

18.1.26 Example. For this theorem the appropriate example is Example 18.1.11,
where we have
1. For λ1 = 0 the eigenfunction f1 (x) = 1;
eh
2. with n∈ N, for λ2n = λ2n+1 = n the eigenfunctions f2n (x) = cos (nx) and
f2n+1 (x) = sin (nx) .
So we have
λ1 < λ2 = λ3 < λ4 = λ5 < ... .
18.1.27 The next two theorems are the most important ones of this Chapter.

18.1.28 Theorem. Let (un), n = 1, 2, ..., be the orthonormal set of all eigenfunctions of a regular or periodic Sturm-Liouville problem and let g ∈ C̃²(a, b). Then

if g is continuous at x:   g(x) = ∑_{n=1}^∞ (g, un) un(x),   (18.11)

if g is discontinuous at x:   (1/2) (lim_{ξ→x−} g(ξ) + lim_{ξ→x+} g(ξ)) = ∑_{n=1}^∞ (g, un) un(x)   (18.12)

(with the obvious modifications for x ∈ {a, b}).
18.1.29 Theorem. The orthonormal set (un), n = 1, 2, ..., of all eigenfunctions of a regular or periodic Sturm-Liouville problem is complete in C²(a, b).
18.1.30 Example. Continuing with 18.1.11, we see that the orthonormal set of all eigenfunctions is

U = {1} ∪ {(1/√π) cos(nx)}_{n=1}^∞ ∪ {(1/√π) sin(nx)}_{n=1}^∞
  = {1, (1/√π) cos(x), (1/√π) sin(x), (1/√π) cos(2x), (1/√π) sin(2x), ...}.

Now, we already know that (18.11)-(18.12) hold; this is just the fact that any function continuous in [0, 1] has a Fourier series representation in terms of U; so Theorem 18.1.28 does not tell us anything new. On the other hand, from Theorem 18.1.29 we see that any function g ∈ C²(0, 1) also satisfies

lim_{N→∞} ‖ g − ∑_{n=0}^N ((an/√π) cos(nx) + (bn/√π) sin(nx)) ‖² = 0,

i.e., the Fourier approximation holds in the mean square sense as well.


18.1.31 Example. Looking again at Example 18.1.6, we see that we can approximate any g ∈ C²(0, 1) (in both the pointwise and the mean square sense) by sines only. This may seem a little surprising at first, but can actually be justified in two different ways:
1. by taking g̃(x), the odd extension of g in [−1, 1]: since g̃ will be odd, it will have a Fourier series in sines only;
2. or, as a consequence of Theorems 18.1.28 and 18.1.29.
18.1.32 Example. Looking at Example 18.1.13, we see why any function g ∈ C²(0, 1) can be approximated (in both the pointwise and the mean square sense) by a series of Bessel functions; it is a direct consequence of Theorems 18.1.28 and 18.1.29.
18.1.33 Example. Approximation with Legendre ....

18.1.34 Example. Let us solve the problem

y″ + λy = 0
y(0) = 0
Cy(1) + y′(1) = 0

with C > 0. It is clearly a regular SL problem. Let us consider possible values for λ.
1. If λ < 0 then from the first boundary condition the general solution is

y(x) = c1 sinh(√(−λ) x)

and then the second boundary condition gives

0 < tanh(√(−λ)) = −√(−λ)/C < 0.

Hence negative λ values are rejected.
2. If λ = 0, then the general solution is y(x) = c0 + c1 x but then the boundary conditions require

c0 = 0
Cc0 + c1 = 0

which only gives the trivial solution.

3. Finally, if λ > 0 then the general solution satisfying the first boundary condition must have the form

uλ(x) = cλ sin(√λ x);

but by the second boundary condition λ must satisfy

tan(√λ) = −√λ/C.   (18.13)

Now, (18.13) cannot be solved exactly. However, solving graphically, we see that we will have an infinity of solutions (eigenvalues), which we can enumerate in increasing order:

0 < λ1 < λ2 < ... .

Hence the general solution will have the form

y(x) = ∑_{n=1}^∞ cn sin(√λn x).
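The eigenvalues in (18.13) are easy to approximate on a computer. The following Python sketch (an illustration; C = 1 is a hypothetical choice) sets s = √λ, brackets each root of tan(s) = −s/C between (2k−1)π/2 and kπ, and bisects:

```python
import math

C = 1.0  # hypothetical value of the constant in Cy(1) + y'(1) = 0

def f(s):
    """Residual of the eigenvalue condition tan(s) = -s/C, s = sqrt(lambda)."""
    return math.tan(s) + s / C

def bisect(lo, hi, steps=200):
    """Plain bisection; assumes a sign change of f on [lo, hi]."""
    for _ in range(steps):
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

# The k-th positive root of tan(s) = -s/C lies in ((2k-1)*pi/2, k*pi).
roots = [bisect((2 * k - 1) * math.pi / 2 + 1e-9, k * math.pi) for k in (1, 2, 3)]
eigenvalues = [s * s for s in roots]  # lambda_k = s_k^2
```

The computed λn grow roughly like ((2n−1)π/2)² for large n, in agreement with the graphical picture of the intersections of tan(s) with the line −s/C.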

18.1.35 Example. Let us solve the problem

y″ + λy = 0
y(0) + y′(0) = 0
y(1) + 3y′(1) = 0

It is easily seen that this is a regular SL problem. For the same reasons as in Example 18.1.34, we get the following cases.
1. We cannot have λ = 0.
2. If we have λ = −k² < 0 then the solution has the form

y = c1 cosh kx + c2 sinh kx

and the boundary conditions give

c1 + kc2 = 0
(cosh k + 3k sinh k) c1 + (sinh k + 3k cosh k) c2 = 0

To get a nontrivial solution we must have

0 = det [ 1 , k ; cosh k + 3k sinh k , sinh k + 3k cosh k ] = (1 − 3k²) sinh k + 2k cosh k.

So k must satisfy

tanh k = −2k/(1 − 3k²).

We see graphically that this has a single solution, computed numerically to be k0 = 1.122...; then λ0 = −k0² = −1.2587... . Now we solve the first equation of the system:

c1 + 1.122c2 = 0 ⇒ c1 = −1.122c2

hence we get an eigenfunction:

y0(x) = 1.122 cosh(1.122x) − sinh(1.122x)

3. If λ = k² > 0 then the solution is

y = c1 cos kx + c2 sin kx

and the BCs give

c1 + kc2 = 0
(cos k − 3k sin k) c1 + (sin k + 3k cos k) c2 = 0

To get a nontrivial solution we must have

0 = det [ 1 , k ; cos k − 3k sin k , sin k + 3k cos k ] = (1 + 3k²) sin k + 2k cos k.

So k must satisfy

tan k = −2k/(1 + 3k²).

Again we note graphically that this has an infinity of solutions, which we can compute numerically:

k1 = 2.9257
k2 = 6.1766
k3 = 9.3539
...

The corresponding eigenvalues are

λ1 = 8.5596
λ2 = 38.1502
λ3 = 87.4953
...

So we get a family of solutions:

yn(x) = kn cos(kn x) − sin(kn x).

The general solution has the form

y(x) = ∑_{n=0}^∞ cn yn(x).
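The quoted positive eigenvalues can be reproduced with a few lines of Python (a sketch; each root of tan k = −2k/(1 + 3k²) lies slightly below nπ, so that interval is a safe bisection bracket):

```python
import math

def g(k):
    """Residual of the eigenvalue condition tan(k) = -2k / (1 + 3k^2)."""
    return math.tan(k) + 2 * k / (1 + 3 * k * k)

def bisect(lo, hi, steps=200):
    """Plain bisection; assumes a sign change of g on [lo, hi]."""
    for _ in range(steps):
        mid = 0.5 * (lo + hi)
        if g(lo) * g(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

# The n-th positive root lies in (n*pi - 0.5, n*pi).
ks = [bisect(n * math.pi - 0.5, n * math.pi - 1e-9) for n in (1, 2, 3)]
lams = [k * k for k in ks]  # eigenvalues lambda_n = k_n^2
```

The resulting λn agree with the values 8.5596, 38.1502, 87.4953 listed in the example.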

18.2 Solved Problems

18.3 Unsolved Problems
1. Put the ODE

d²y/dx² + b dy/dx + cy = 0

in Sturm-Liouville form.
Ans. (d/dx)(e^{bx} dy/dx) + c e^{bx} y. // 13.2.1
2. Put the ODE

(1 − x²) d²y/dx² − x dy/dx + α²y = 0

in Sturm-Liouville form.
Ans. (d/dx)(√(1 − x²) dy/dx) + (α²/√(1 − x²)) y. // 13.2.3
3. Put the ODE

x² d²y/dx² + bx dy/dx + cy = 0

in Sturm-Liouville form.
Ans. (d/dx)(x^b dy/dx) + c x^{b−2} y. // 13.2.4
4. Put the ODE

d²y/dx² − 2x dy/dx + 2αy = 0

in Sturm-Liouville form.
Ans. (d/dx)(e^{−x²} dy/dx) + 2α e^{−x²} y. // 13.2.5
5. Put the ODE

x d²y/dx² + (1 − x) dy/dx + αy = 0

in Sturm-Liouville form.
Ans. (d/dx)(x e^{−x} dy/dx) + α e^{−x} y. // 13.2.6
6. Put the ODE

(1 − x²) d²y/dx² − 2x dy/dx + α(α + 1)y = 0

in Sturm-Liouville form.
Ans. (d/dx)((1 − x²) dy/dx) + α(α + 1)y. // 13.2.7
7. Find the eigenvalues of

d²y/dx² + λy = 0, y(0) + 2y′(0) = 0, y(2) = 0.

Ans. λ0 = 0, λ1 = 5.047, λ2 = 14.919, ... . // 13.2.11
8. Solve the problem

d²y/dx² + 2 dy/dx + y + λy = 0, y(0) = 0, y(1) = 0.

Ans. λn = n²π², yn(x) = e^{−x} sin(nπx). // 13.2.9

9. Solve the problem

d²y/dx² + 2 dy/dx + y + λy = 0, y′(0) = 0, y′(1) = 0.

Ans. λ0 = −1, y0(x) = 1; λn = n²π², yn(x) = e^{−x}(nπ cos(nπx) + sin(nπx)). // 13.2.10
10. Solve the problem

d²y/dx² + 2 dy/dx + y + λy = 0, y(0) = 0, y′(1) = 0.

Ans. λ0 = 0, y0(x) = x e^{−x}; λn = kn², yn(x) = e^{−x} sin(kn x), where kn are the positive roots of tan k = k. // 13.2.21
11. Solve the problem

x² d²y/dx² − 2x dy/dx + 2y + λx²y = 0, y(1) = 0, y(2) = 0.

Ans. λn = n²π², yn(x) = x sin(nπ(x − 2)). // 13.2.22
12. Solve the problem

d²y/dx² + λy = 0, y(0) + αy′(0) = 0, y(π) + αy′(π) = 0.

Ans. λ0 = −1/α², y0(x) = e^{−x/α}; λn = n², yn(x) = nα cos(nx) − sin(nx). // 13.2.26
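For problem 7, substituting y = c1 cos(√λ x) + c2 sin(√λ x) into the boundary conditions gives c1 + 2√λ c2 = 0 and c1 cos(2√λ) + c2 sin(2√λ) = 0, so nontrivial solutions require tan(2√λ) = 2√λ (with λ0 = 0 giving the extra eigenfunction x − 2). Setting x = 2√λ, so λ = x²/4, the quoted eigenvalues can be checked with a short Python sketch:

```python
import math

def h(x):
    """Residual of the eigenvalue condition tan(x) = x."""
    return math.tan(x) - x

def bisect(lo, hi, steps=200):
    """Plain bisection; assumes a sign change of h on [lo, hi]."""
    for _ in range(steps):
        mid = 0.5 * (lo + hi)
        if h(lo) * h(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

# Positive roots of tan(x) = x lie in (n*pi, n*pi + pi/2); lambda = (x/2)^2.
xs = [bisect(n * math.pi + 1e-9, n * math.pi + math.pi / 2 - 1e-9) for n in (1, 2)]
lams = [(x / 2) ** 2 for x in xs]
```

The two computed values match the answer λ1 = 5.047, λ2 = 14.919 to the precision printed there.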

18.4 Advanced Problems



1. Prove that all the eigenvalues of the problem

d²y/dx² + (λ − x²)y = 0
y′(0) = y′(1) = 0

are positive.
2. Prove that all the eigenvalues of the problem

d²y/dx² + (λ − x²)y = 0
y′(0) = lim_{x→∞} y′(x) = 0

are positive.

3. Let u be a nontrivial solution of

∀x ∈ (a, b): d²y/dx² + w(x)y = 0.

Suppose that

∀x ∈ [a, b]: w(x) ≤ 0.

Show that u(x) = 0 has at most one root (in [a, b]).
4. Let u1 and u2 be nontrivial solutions of

∀x ∈ (a, b): d²y/dx² + w1(x)y = 0,
∀x ∈ (a, b): d²y/dx² + w2(x)y = 0,

respectively. Suppose that

∀x ∈ [a, b]: w1(x) > w2(x).

Show that between every two roots of u2(x) = 0 there exists one root of u1(x) = 0.
5. Does d²y/dx² + (1 + sin²x)y = 0 have oscillating solutions in (0, ∞)?
6. Does d²y/dx² − x²y = 0 have oscillating solutions in (0, ∞)?
7. Does d²y/dx² + (1/x)y = 0 have oscillating solutions in (0, ∞)?
8. If L1(·) and L2(·) are two Sturm-Liouville operators, let us define L12(·) = L1(L2(·)) and L21(·) = L2(L1(·)). Find necessary and/or sufficient conditions so that L12(·) = L21(·).
V
Appendices


A Definitions of the Integral . . . . . . 231

B Distribution Theory . . . . . . . . . . . . . 239

C Gamma Function . . . . . . . . . . . . . . . . 241

D Numerical Solution of PDEs . . . . 243


A. Definitions of the Integral

A.1 Riemann Integral

A.1.1 In previous parts of these notes we have used definite integrals of the form ∫_a^b f(x) dx without much discussion. In most cases what we used was the Riemann integral which, as you recall, is defined as the limit of finite sums.
A.1.2 We will now give a rigorous definition of ∫_a^b f(x) dx in the Riemann sense. To this end we need some preliminaries.
A.1.3 Definition. A partition of a closed interval [a, b] is a finite sequence of points P = (x0, x1, ..., xN) such that

a = x0 < x1 < ... < xN = b.

The intervals of the partition are [x0, x1], ..., [xN−1, xN]. The norm w(P) of the partition is the largest interval width:

w(P) = max_{1≤n≤N} (xn − xn−1).
A.1.4 Definition. A tagged partition of a closed interval [a, b] is a pair of finite sequences (P, Q) where P = (x0, x1, ..., xN) is a partition of [a, b] and Q = (ξ1, ..., ξN) is a finite sequence of points such that

∀n: ξn ∈ [xn−1, xn].

A.1.5 Definition. Let f be a function defined on [a, b]. The Riemann integral of f on [a, b] is a number I such that

∀ε > 0 ∃δ > 0: ((P, Q) is a tagged partition with w(P) < δ) ⇒ |I − ∑_{n=1}^N f(ξn)(xn − xn−1)| < ε.   (A.1)

The number I is usually denoted by ∫_a^b f(x) dx.
A.1.6 What the above definition says is: ∫_a^b f(x) dx = I iff the difference between I and any sum over a finite partition can be made as small as we want, provided the norm of the partition is sufficiently small. A concise way to write (A.1) is

∫_a^b f(x) dx := lim_{w(P)→0} ∑_{n=1}^N f(ξn)(xn − xn−1).
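The sum in (A.1) is easy to form in a few lines of Python; the sketch below (an illustration only) evaluates it for f(x) = x² on [0, 1] with a uniform partition and midpoint tags, and the result approaches the exact value 1/3 as the norm of the partition shrinks:

```python
def riemann_sum(f, partition, tags):
    """The tagged-partition sum from (A.1): sum of f(tag) * interval width."""
    return sum(f(t) * (x1 - x0)
               for x0, x1, t in zip(partition, partition[1:], tags))

# Integrate f(x) = x^2 on [0, 1]; the exact value is 1/3.
N = 10000
P = [n / N for n in range(N + 1)]            # uniform partition, norm 1/N
Q = [(P[n] + P[n + 1]) / 2 for n in range(N)]  # midpoint tags
approx = riemann_sum(lambda x: x * x, P, Q)
```

Any other admissible choice of tags gives a sum within ε of the same limit once 1/N is small enough, which is exactly what (A.1) asserts.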

A.1.7 If we fix f and a we can define a function F(x) by

F(x) := ∫_a^x f(z) dz

(when (a, x) and f are such that ∫_a^x f(z) dz exists) and, as is well known, F′(x) = f(x).
A.1.8 Now consider the converse question: given some F(x), can we represent it as a Riemann integral ∫_a^x f(z) dz? The answer is: not always. For example, if F(x) is discontinuous at some point, it cannot be so represented since, as we know, ∫_a^x f(z) dz is a continuous function of x.
A.1.9 In the next section we will show how to represent a discontinuous F(x) as a Stieltjes integral (which is a generalization of the Riemann integral). The connection of this problem to the representation of Heaviside(x) as the integral of its derivative (i.e., as ∫_a^b Dirac(x) dx) is obvious.

A.2 Stieltjes Integral, Heaviside and Dirac Functions



A.2.1 Definition. Let f be a function defined on [a, b] ⊂ R and φ a monotone function on [a, b]. The Stieltjes integral of f on [a, b] with respect to φ is a number I such that

∀ε > 0 ∃δ > 0: ((P, Q) is a tagged partition with w(P) < δ) ⇒ |I − ∑_{n=1}^N f(ξn)(φ(xn) − φ(xn−1))| < ε.   (A.2)

The number I is usually denoted by ∫_a^b f(x) dφ and (A.2) can be written in short form as

∫_a^b f(x) dφ := lim_{w(P)→0} ∑_{n=1}^N f(ξn)(φ(xn) − φ(xn−1)).
A.2.2 Note that, when φ(x) = x, the Stieltjes integral reduces to the Riemann integral. Also, it can be proved that, when φ(x) is differentiable:

∫_a^b f(x) dφ = ∫_a^b f(x) φ′(x) dx.

A.2.3 Now, taking φ(x) = Heaviside(x), let us calculate the Stieltjes integral

∫_a^b f(x) d Heaviside.

For some tagged partition (P, Q), consider the sum

J = ∑_{n=1}^N f(ξn)(Heaviside(xn) − Heaviside(xn−1)).

1. If 0 ∉ [a, b], clearly J = 0.
2. If 0 ∈ (xn−1, xn) (for some n), then J = f(ξn). As w(P) → 0, there will always exist some [xn−1, xn] ∋ 0. Hence we distinguish two subcases.
(a) If f(x) is continuous at x = 0, then J → f(0) as w(P) → 0. This can be rewritten as

∫_a^b f(x) d Heaviside = f(0)  or  ∫_a^b f(x) Dirac(x) dx = f(0).

This is the relationship (9.9).
(b) If f(x) is discontinuous at x = 0, small changes of some ξn can result in different values of J, and J cannot tend to a limit; hence the Stieltjes integral does not exist.
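The computation above can be mimicked directly in Python. In the sketch below (an illustration) only the subinterval that straddles 0 contributes to the Stieltjes sum, so the sum lands on f evaluated at that subinterval's tag, close to f(0):

```python
import math

def heaviside(x):
    return 1.0 if x >= 0 else 0.0

def stieltjes_sum(f, phi, partition, tags):
    """The Riemann-Stieltjes sum of f with respect to phi from (A.2)."""
    return sum(f(t) * (phi(x1) - phi(x0))
               for x0, x1, t in zip(partition, partition[1:], tags))

# A fine partition of [-1, 1]; N odd, so no partition point falls exactly on 0.
N = 100001
P = [-1 + 2 * n / N for n in range(N + 1)]
Q = [(P[n] + P[n + 1]) / 2 for n in range(N)]  # midpoint tags
approx = stieltjes_sum(math.cos, heaviside, P, Q)  # should approach cos(0) = 1
```

Refining the partition pins the surviving tag ever closer to 0, which is the discrete version of ∫ f d Heaviside = f(0).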

A.3 About Sets


A.3.1 For a really rigorous treatment of Fourier analysis neither the Riemann nor
the Stieltjes integral suffices. What is required is the Lebesgue integral. Before
this can be defined, we need some preliminaries regarding sets.
A.3.2 Definition. Given sets A and B, we define:

A ∩ B := {x : x ∈ A and x ∈ B},  A ∪ B := {x : x ∈ A or x ∈ B or both}.

These operations can be expanded to an infinite collection of sets A1, A2, ... as follows:

∩_{n=1}^∞ An = {x : x ∈ An for every n ∈ N},  ∪_{n=1}^∞ An = {x : x ∈ An for at least one n ∈ N}.

A.3.3 Definition. When A ⊆ R, we define the translation of A by x0 ∈ R by

A + x0 = {y : y = x + x0, x ∈ A}.

A.3.4 Definition. Given some set A, we define its cardinality by

|A| := “the number of elements of A”.

A.3.5 If a set contains a finite number n of elements, then we say that it is a finite countable set and its cardinality is, obviously, n. In other words:

|{a1, a2, ..., an}| = n.

Things are more interesting when A contains an infinite number of elements. Then
we distinguish two cases.

1. If the elements of A can be placed into a 1-to-1 correspondence with N, then

we say A is infinitely countable or enumerable and we write |A| = ℵ0 . In other
words
|{a1 , a2 , ...}| = ℵ0 .
Intuitively, when we say that {a1 , a2 , ...} is (finitely or infinitely) countable
we mean that we can enumerate its elements: there is a first element (a1 ), a
second element (a2 ) and so on.
2. If the elements of A cannot be placed into a 1-to-1 correspondence with N,
then we say A is uncountable. It may not be obvious but such sets do exist.
For example, it can be proved that we cannot put the elements of the interval

(0, 1) into a 1-to-1 correspondence with N. Intuitively, this means that there
cannot be a first, second etc. element of (0, 1). To put this in another way,
while both N and (0, 1) have an infinite number of elements, (0, 1) has more
elements than N. This suggests that there are different kinds of infinity.
A.3.6 Definition. Given a set A, we define its powerset
℘(A) := “the set of all subsets of A”.
A.3.7 Example. We have

℘({1, 2}) = {∅, {1}, {2}, {1, 2}}

and

|℘({1, 2})| = 4 = 2² = 2^|{1,2}|.

More generally, for any finite set A with cardinality |A| ∈ N0 we have |℘(A)| = 2^|A|. And we also have

∀A: |A| ∈ N ⇒ |℘(A)| = 2^|A| > |A|.

It is interesting to note that we also have

∀A: |A| = ℵ0 ⇒ |℘(A)| = 2^ℵ0 > ℵ0.

However, to prove the above we need to define the arithmetic of cardinalities (so that expressions such as 2^ℵ0 make rigorous sense). We will not need these concepts in our treatment of the Lebesgue integral.
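For finite sets the counting identity |℘(A)| = 2^|A| can be demonstrated directly; a small Python sketch using the standard library:

```python
from itertools import combinations

def powerset(s):
    """All subsets of a finite set s, as a list of frozensets."""
    items = list(s)
    return [frozenset(c) for r in range(len(items) + 1)
            for c in combinations(items, r)]

P12 = powerset({1, 2})                       # the 4 subsets of {1, 2}
sizes = [len(powerset(range(n))) for n in range(6)]  # 2^0, ..., 2^5
```

The list `sizes` doubles at every step, i.e. |℘(A)| = 2^|A| for |A| = 0, ..., 5; the infinite-cardinality statement, of course, needs the arithmetic of cardinalities mentioned above.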

A.4 Lebesgue Integral


A.4.1 We will now define the Lebesgue integral in several steps.
A.4.2 We first need to define the Lebesgue measure of a set; intuitively, it is a
generalization of the length of an interval.


A.4.3 In other words: the length of (a, b) is
l (a, b) := b − a;
what would be the analog for a general set A (not an interval)? Let us call this
analog µ ; note that it is a set function, i.e., it assigns real numbers to sets:
µ : ℘(R) → R.
Ideally, µ should have the following properties for all sets of real numbers.

P1 The measure of an interval (a, b) is its length:

∀(a, b): µ((a, b)) = b − a.

P2 The measure is an increasing function:

∀A, B: A ⊆ B ⇒ µ(A) ≤ µ(B).

P3 The measure is translation invariant:

∀A: µ(A + x0) = µ(A).

P4 Measures of nonintersecting sets add up:

∀A, B: if A ∩ B = ∅ then µ(A ∪ B) = µ(A) + µ(B)

and more generally

∀{A1, A2, ...}: if j ≠ k ⇒ Aj ∩ Ak = ∅, then µ(∪_{i=1}^∞ Ai) = ∑_{i=1}^∞ µ(Ai).

A.4.4 Unfortunately, it can be proved that no function µ: ℘(R) → R satisfies all of P1-P4. So we will try to find such a µ which works on a restricted range of sets.
A.4.5 Definition. We define the outer Lebesgue measure µ* as follows: for any A ⊆ R

µ*(A) := inf { ∑_{n=1}^∞ l(In) : A ⊆ ∪_{n=1}^∞ In }

where the inf is taken over sequences of open intervals (In)_{n=1}^∞ (i.e., for all n: In = (an, bn)).
A.4.6 Theorem. The outer Lebesgue measure has the following properties for all sets A, B, A1, A2, ...:
1. ∀A: µ*(A) ≥ 0 (nonnegativity of measure).
2. ∀(a, b): µ*((a, b)) = b − a (outer measure of an interval is its length).
3. A ⊆ B ⇒ µ*(A) ≤ µ*(B) (monotonicity).
4. |A| ≤ ℵ0 ⇒ µ*(A) = 0 (countable sets have zero measure).
5. ∀x0 ∈ R: µ*(A + x0) = µ*(A) (translation invariance).
6. µ*(∪_{n=1}^∞ An) ≤ ∑_{n=1}^∞ µ*(An) (subadditivity).

A.4.7 Example. Consider Q(0,1), the set of rational numbers in (0, 1). Because |Q(0,1)| = ℵ0 (the rationals are countable), it follows that µ*(Q(0,1)) = 0. Now consider J(0,1), the set of irrational numbers in (0, 1). We have

J(0,1) ⊆ (0, 1) ⇒ µ*(J(0,1)) ≤ µ*((0, 1)) = 1.

Also, because

Q(0,1) ∪ J(0,1) = (0, 1) and Q(0,1) ∩ J(0,1) = ∅,

we have

1 = µ*((0, 1)) = µ*(Q(0,1) ∪ J(0,1)) ≤ µ*(Q(0,1)) + µ*(J(0,1)) = 0 + µ*(J(0,1)).

In short

1 ≤ µ*(J(0,1)) ≤ 1 ⇒ µ*(J(0,1)) = 1.

A.4.8 Definition. A set A ⊆ R is called Lebesgue measurable iff

∀B ⊆ R: µ*(B) = µ*(B ∩ A) + µ*(B ∩ Ac).

If A is measurable, we define its Lebesgue measure by

µ(A) := µ*(A).
A.4.9 Theorem. The family of measurable sets satisfies the following.
1. Both ∅ and R are measurable.
2. If A, B are measurable then A ∩ B, A ∪ B, Ac, A + x0 (for any x0 ∈ R) are measurable.
3. If for some sequence (An)_{n=1}^∞ we have: ∀n: An is measurable, then ∪_{n=1}^∞ An and ∩_{n=1}^∞ An are measurable.
4. If µ*(A) = 0 then A is measurable.
5. Every open and every closed set is measurable (so, in particular, all intervals are measurable).
6. If A, B are measurable and A ⊆ B then µ(A) ≤ µ(B).
7. For any sequence (An)_{n=1}^∞ of measurable sets such that m ≠ n ⇒ Am ∩ An = ∅, we have

µ(∪_{n=1}^∞ An) = ∑_{n=1}^∞ µ(An).
A.4.10 It can be proved that there exist sets which are not measurable.
A.4.11 We are finally ready to define the Lebesgue integral.
A.4.12 Definition. The characteristic function of a set A ⊆ R is defined by

1A(x) = 1 iff x ∈ A,  1A(x) = 0 iff x ∉ A.
A.4.13 Definition. Every function of the form

φ(x) = ∑_{n=1}^N κn 1An(x)

is called a simple function, provided that:
1. all the sets An are measurable,
2. m ≠ n ⇒ Am ∩ An = ∅,
3. m ≠ n ⇒ κm ≠ κn, and κn ≠ 0.
A.4.14 Definition. For every simple function φ(x) = ∑_{n=1}^N κn 1An(x), the Lebesgue integral over a measurable set A is defined by

∫_A φ(x) dx := ∑_{n=1}^N κn µ(An ∩ A).
A.4.15 Definition. For every bounded function f which is zero outside a (measurable) set A with µ(A) < ∞ define the upper and lower integrals

Ī_A(f) = inf { ∫_A φ(x) dx : φ is simple and f ≤ φ },
I_A(f) = sup { ∫_A φ(x) dx : φ is simple and φ ≤ f }.

If Ī_A(f) = I_A(f) we say that f is (Lebesgue) integrable on A and define the Lebesgue integral of f on A by

∫_A f(x) dx := Ī_A(f).

The integral of f over some B different from the domain A of f is defined by

∫_B f(x) dx := ∫_A f(x) 1B(x) dx.

A.4.16 The meaning of the above definitions is actually simple. To compute the Lebesgue integral of a function f we do the following.
1. If f is simple, we partition its range into (a finite number of) sets, where f has constant value in each set; the integral is the sum of the measures of the sets, each measure weighted by the respective value of f.
2. If f is not simple, we approximate it from above and below by simple functions; the integral of f is the limiting common value of the integrals of the upper and lower bounding functions.
A.4.17 Compare the above with the Riemann integral, where we partition the domain of the function.
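As a small illustration of A.4.14, here is a Python sketch that computes the Lebesgue integral of a simple function whose sets An are finite unions of disjoint open intervals (so their measure is just total length); the names and data layout are for illustration only:

```python
def interval_measure(intervals):
    """Lebesgue measure of a finite union of disjoint open intervals."""
    return sum(b - a for a, b in intervals)

def intersect(i, j):
    """Intersection of two open intervals, or None if empty."""
    a, b = max(i[0], j[0]), min(i[1], j[1])
    return (a, b) if a < b else None

def lebesgue_integral_simple(simple, region):
    """Integral over `region` of sum_n kappa_n * 1_{A_n}: each term
    contributes kappa_n * measure(A_n intersected with region)."""
    total = 0.0
    for kappa, intervals in simple:
        pieces = [p for p in (intersect(i, region) for i in intervals) if p]
        total += kappa * interval_measure(pieces)
    return total

# phi = 2 on (0, 1) and 5 on (2, 4); its integral over (-10, 10) is 2*1 + 5*2 = 12.
phi = [(2.0, [(0.0, 1.0)]), (5.0, [(2.0, 4.0)])]
val = lebesgue_integral_simple(phi, (-10.0, 10.0))
```

Notice that the computation partitions the *range* of φ (one term per value κn), exactly the contrast with the Riemann integral drawn in A.4.17.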
A.4.18 Example. ∫_R 1A(x) dx = µ(A), ∫_A dx = µ(A), ...
A.4.22 Theorem. If f is Riemann integrable, it is also Lebesgue integrable and the Riemann and Lebesgue integrals are equal.
B. Distribution Theory

Here is another way to justify (d/dx) Heaviside(x) = Dirac(x).

B.1 Theory and Examples
B.1.1 We can think of the definite integral ∫_a^b as a functional, i.e., a function which has as domain a set of functions and as range a set of numbers.
B.1.2 More rigorously, fix a, b (such that 0 ∈ (a, b)) and define C0∞ to be the set of functions which have continuous derivatives of all orders and vanish in some neighborhood of a and in some neighborhood of b. Now choose a real valued function f(x) and define

∀φ ∈ C0∞: Af(φ) := ∫_a^b φ(x) f(x) dx.   (B.1)

Then clearly

Af: C0∞ → R.
B.1.3 It is in fact easy to see that Af is a linear functional, i.e.,

∀κ1, κ2 ∈ R and ∀φ1, φ2 ∈ C0∞: Af(κ1φ1 + κ2φ2) = κ1Af(φ1) + κ2Af(φ2).
B.1.4 We can also construct linear functionals which are not obtained from an integral. As an example, we define the functional

ADirac: C0∞ → R

by

∀φ ∈ C0∞: ADirac(φ) := φ(0).

It is easy to show that ADirac is a linear functional. It is not so easy but can be shown (do it!) that ADirac cannot be written as an integral of the form (B.1) for any function f.
B.1.5 Now, keeping a, b fixed, denote the set of all functionals from C0∞ to R by Q. Furthermore denote

1. by QR the subset of regular functionals, i.e., the ones which can be obtained

(for some f ) by an integral of the form (B.1) and
2. by QS the subset of singular functionals, i.e., the ones which cannot be
obtained (for some f ) by an integral of the form (B.1).
B.1.6 We can think of the elements of Q (i.e., the functionals) as (representatives
of) generalized functions. The elements of QR (i.e., the regular functionals) are
representatives of actual functions: each A f is a representative of some function
f . This is not true of the elements of QS (i.e., the singular functionals): they
cannot be obtained from (i.e., represent) some function f . Hence Q generalizes
(i.e., enlarges) the set of functions.

B.1.7 Now we can ask: given some generalized function A ∈ Q , which generalized
function B ∈ Q should represent the derivative of A?
B.1.8 If A = Af ∈ QR it is natural to choose B = Af′, i.e., the regular generalized function obtained (by an integral) from f′. More explicitly, with

Af(φ) = ∫_a^b φ(x) f(x) dx,
Af′(φ) = ∫_a^b φ(x) f′(x) dx,

we call Af′ the derivative of Af.
B.1.9 A consequence of the above is that (integrating by parts and using the fact that φ(a) = φ(b) = 0):

Af′(φ) = ∫_a^b φ(x) f′(x) dx = (φ(x) f(x))|_{x=a}^{x=b} − ∫_a^b φ′(x) f(x) dx = −Af(φ′).   (B.2)

B.1.10 Now we can use (B.2) to define the derivative of a generalized function (“generalized derivative”). In other words, for any A ∈ Q we define

A′(φ) := −A(φ′).   (B.3)

The advantage of (B.3) is that it applies to all generalized functions, not only the regular ones! (Recall that φ′ is always defined, since φ ∈ C0∞.)
B.1.11 Now let us take a = −∞ and b = ∞ and compute the generalized derivative of Heaviside(x). First note that

AHeaviside(x)(φ) = ∫_{−∞}^∞ φ(x) Heaviside(x) dx = ∫_0^∞ φ(x) dx.

Then

AHeaviside′(x)(φ) = −AHeaviside(x)(φ′) = −∫_0^∞ φ′(x) dx = −φ(∞) + φ(0) = φ(0) = ADirac(φ).

In short

AHeaviside′(x)(φ) = ADirac(φ).
C. Gamma Function

The Gamma function Γ (n) is the generalization of the factorial function n!.

C.1 Theory and Examples


C.1.1 Definition. The Gamma function is defined by

∀n ∈ (0, ∞): Γ(n) := ∫_0^∞ x^{n−1} e^{−x} dx.

C.1.2 Example. We have

Γ(1) = ∫_0^∞ x^0 e^{−x} dx = ∫_0^∞ e^{−x} dx = 1.

Also, with x = u² we have dx = 2u du and so

Γ(1/2) = ∫_0^∞ x^{−1/2} e^{−x} dx = ∫_0^∞ (1/u) e^{−u²} 2u du = 2 ∫_0^∞ e^{−u²} du = √π.

C.1.3 Theorem. The Gamma function satisfies

∀n ∈ (0, ∞): Γ(n + 1) = nΓ(n).

Proof. We have

∫ x^n e^{−x} dx = −x^n e^{−x} − ∫ (−e^{−x}) n x^{n−1} dx.

Hence

Γ(n + 1) = ∫_0^∞ x^n e^{−x} dx = (−x^n e^{−x})|_{x=0}^{x=∞} + n ∫_0^∞ x^{n−1} e^{−x} dx = nΓ(n).

C.1.4 Theorem. The Gamma function satisfies

∀n ∈ N0 : Γ (n + 1) = n!

Proof. We have

Γ(1) = ∫_0^∞ x^{1−1} e^{−x} dx = 1 = 0!,
Γ(2) = Γ(1 + 1) = 1·Γ(1) = 1 = 1!,
Γ(3) = Γ(2 + 1) = 2·Γ(2) = 2 = 2!,
Γ(4) = Γ(3 + 1) = 3·Γ(3) = 3·2 = 3!,
...

and, by induction, we easily obtain the required result.
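These identities are easy to confirm against the standard library's implementation of Γ; a small Python check (math.gamma is the stdlib Gamma function):

```python
import math

# The recurrence Gamma(n+1) = n * Gamma(n), at a few sample points.
recurrence_gap = max(abs(math.gamma(n + 1) - n * math.gamma(n))
                     for n in (0.5, 1.7, 3.2, 9.0))

# Gamma(n+1) = n! on the nonnegative integers.
factorials = [round(math.gamma(n + 1)) for n in range(6)]

# Gamma(1/2) = sqrt(pi), from Example C.1.2.
half = math.gamma(0.5)
```

The recurrence gap is at machine-precision level, the integer values come out as 0!, 1!, ..., 5!, and Γ(1/2) matches √π.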

C.1.5 Definition. We extend the definition of Γ(n) as follows. Define

Z0− := {0, −1, −2, ...}

and define

∀n ∈ (−∞, 0) \ Z0−: Γ(n) := Γ(n + 1)/n.

C.1.6 Example. Γ(−1/2) = Γ(1/2)/(−1/2) = −2√π.
C.1.7 Theorem. For every n ∈ Z0− we have

lim_{x→(2n)+} Γ(x) = +∞,  lim_{x→(2n)−} Γ(x) = −∞,
lim_{x→(2n−1)+} Γ(x) = −∞,  lim_{x→(2n−1)−} Γ(x) = +∞.

C.1.8 Example. Γ(0+) = lim_{x→0+} Γ(x) = +∞, Γ(0−) = lim_{x→0−} Γ(x) = −∞.
C.1.9 Theorem (Stirling). For large n:

n! ≈ √(2πn) (n/e)^n.

C.1.10 Example. For n = 10 we have n! = 3628800, while √(2πn)(n/e)^n = 3.5987 × 10^6. For n = 112.3 we have n! = Γ(113.3) = 112.3 · Γ(112.3) = 112.3 × (7.2544 × 10^180) ≈ 8.1467 × 10^182, while √(2πn)(n/e)^n = 8.1406 × 10^182.
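Stirling's formula is easy to test in Python; the ratio n!/(√(2πn)(n/e)^n) should approach 1 from above as n grows (in fact it behaves like 1 + 1/(12n)):

```python
import math

def stirling(n):
    """Stirling's approximation sqrt(2*pi*n) * (n/e)^n to n!."""
    return math.sqrt(2 * math.pi * n) * (n / math.e) ** n

# n! = Gamma(n+1); the ratio to Stirling's approximation decreases toward 1.
ratios = [math.gamma(n + 1) / stirling(n) for n in (5, 10, 50, 100)]
```

Even at n = 5 the relative error is below 2%, which is why the approximation is so useful for large factorials.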
D. Numerical Solution of PDEs

In this chapter we will discuss the numerical solution of PDEs by computer.


D.1 Basic Methods
D.1.1 We start with some very simple (almost primitive) methods. While these will not yield accurate solutions, they are useful in highlighting the basic ideas behind numerical solution of PDEs. For a better understanding, we also present the MATLAB code which implements the methods.

D.1.2 Example. Our first example involves a first order PDE:

0 < t: ut + 3ux = 0,   (D.1)
0 < x: u(x, 0) = f(x).   (D.2)

It is easily checked that the analytical solution is

u(x, t) = f(x − 3t).   (D.3)

Hence (D.1) describes the transmission of the initial condition to the right with velocity 3. For example, if

f(x) = { 0 for x ≤ 10;  (x − 10)/10 for 10 ≤ x ≤ 20;  1 − (x − 20)/10 for 20 ≤ x ≤ 30;  0 for 30 ≤ x }   (D.4)

then u(x, t) looks like this:



Let us now solve the same PDE numerically. Letting

vn(t) = u(n, t)

we get

ut = dvn/dt,  ux ≈ vn − vn−1.   (D.5)

Let us take n = 0, 1, ..., 40 and assume v0(t) = 0 for every t. Then we must solve the system of ODEs

dv1/dt = −3·(v1 − v0)
...
dvn/dt = −3·(vn − vn−1)   (D.6)
...
dv40/dt = −3·(v40 − v39)
with initial conditions vn (0) = f (n) (using f (x) of (D.4)). This can be achieved by
the following MATLAB commands.

clear
T=[0:0.1:10];
u0=[zeros(1,10) [0.1:0.1:1] [0.9:-0.1:0.1] zeros(1,10)];
[t,u]=ode23('flux11',T,u0);
which use the function file flux11.m:
function ut=flux11(t,u)
N=length(u);
ut(1,1)=-3*u(1);
for n=2:N
  ut(n,1)=-3*(u(n)-u(n-1));
end
The results appear in the following figures; we can see that they are in good
agreement with the exact solution.

D.1.3 Example. Let us solve one more first order PDE

ut + 3u·ux = 0, u(x, 0) = f(x)   (D.7)

where f(x) is again given by (D.4). The discretization of this is obtained by changing flux11.m so that it corresponds to ut = −3u·ux. Then we get the following results. Where is the nonlinear behavior?
D.1.4 Let us now solve some second order PDEs. The basic idea is to replace the time and space derivatives with finite differences:

ut ≈ (u(m·δx, (n+1)·δt) − u(m·δx, n·δt)) / δt   (D.8)
ux ≈ (u(m·δx, n·δt) − u((m−1)·δx, n·δt)) / δx   (D.9)
   ≈ (u((m+1)·δx, n·δt) − u(m·δx, n·δt)) / δx   (D.10)
uxx ≈ (u((m+1)·δx, n·δt) + u((m−1)·δx, n·δt) − 2u(m·δx, n·δt)) / δx².   (D.11)

Letting

um,n = u(m·δx, n·δt)   (D.12)

the above equations become

ut ≈ (um,n+1 − um,n) / δt   (D.13)
ux ≈ (um,n − um−1,n) / δx   (D.14)
   ≈ (um+1,n − um,n) / δx   (D.15)
uxx ≈ (um+1,n + um−1,n − 2um,n) / δx²   (D.16)

and the diffusion equation ut = a²uxx becomes

(um,n+1 − um,n) / δt = a² (um+1,n + um−1,n − 2um,n) / δx² ⇒   (D.17)
um,n+1 = um,n + (δt·a²/δx²) · (um+1,n + um−1,n − 2um,n).   (D.18)
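For readers without MATLAB, the update (D.18) can be sketched in a few lines of Python (plain lists, no toolboxes); note the usual stability requirement of this explicit scheme, δt·a²/δx² ≤ 1/2, which the parameter choice below respects:

```python
def diffusion_step(u, r):
    """One explicit step of (D.18) with r = dt * a^2 / dx^2.
    Boundary values are kept fixed, as in (D.20)-(D.21)."""
    new = u[:]
    for m in range(1, len(u) - 1):
        new[m] = u[m] + r * (u[m + 1] + u[m - 1] - 2 * u[m])
    return new

# Triangular initial condition on 41 grid points with apex at m = 20.
u = [max(0.0, 1.0 - abs(m - 20) / 10) for m in range(41)]
r = 0.4  # e.g. dt = 0.1, a^2 = 4, dx = 1; stable since r <= 1/2
for _ in range(100):
    u = diffusion_step(u, r)
```

After a hundred steps the peak has visibly flattened while the profile stays nonnegative and symmetric, the discrete counterpart of the smoothing discussed in the diffusion examples.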
D.1.5 Example. We solve the diffusion PDE

0 < x < 40: ut = a²uxx   (D.19)
0 < t < 100: u(0, t) = 0,   (D.20)
0 < t < 100: u(40, t) = 0,   (D.21)
0 < x < 40: u(x, 0) = f(x).   (D.22)

We use for f(x) a triangular function with its apex at x = 20 and we use δx = 1, δt = 0.1. The MATLAB code is
clear
u(:,1)=[zeros(1,10) [0.1:0.1:1] [0.9:-0.1:0.1] zeros(1,10)]';
M=length(u);
a2=4;
dt=0.1;
for n=1:999
  u(1,n+1)=u(1,n);
  for m=2:M-1
    u(m,n+1)=u(m,n)+dt*a2*(u(m+1,n)+u(m-1,n)-2*u(m,n));
  end
  u(M,n+1)=u(M,n);
end
figure(1); surf(u); shading flat

The results are plotted below, for two values: a = 2 and a = 3. We can clearly see the smoothing effect of the heat kernel e^{−x²/(4a²t)}, which becomes faster with larger diffusion coefficient a.

D.1.6 Example. Next we solve the same problem but with a negative diffusion coefficient:

0 < x < 40: ut = −a²uxx   (D.23)
0 < t < 100: u(0, t) = 0,   (D.24)
0 < t < 100: u(40, t) = 0,   (D.25)
0 < x < 40: u(x, 0) = f(x).   (D.26)

The results show a behavior opposite of smoothing: local differences are accentuated. There is also evidence of numerical errors.



D.1.7 Example. Our final diffusion example involves the nonlinear reaction-diffusion equation¹:

0 < x < 40: ut = a²uxx + bu(1 − u)   (D.27)
0 < t < 100: u(0, t) = 0,   (D.28)
0 < t < 100: u(40, t) = 0,   (D.29)
0 < x < 40: u(x, 0) = { 0 for x ≤ 30;  1 for 30 < x ≤ 40 }.   (D.30)

In (D.27) the time evolution of u depends both on the diffusion term a²uxx and a nonlinear “reaction” term bu(1 − u). For u ∈ (0, 1) the reaction term is positive, and this favors the increase of u; note also that the increase rate becomes 0 at u = 0 and u = 1. The results are seen in the following figures, for two sets of values: with (a, b) = (2, 1) we have reaction-diffusion and with (a, b) = (2, 0) we have pure diffusion.

¹ It is used to model chemical reactions.

D.1.8 Example. Now consider the Laplace PDE

0 < x < 30, 0 < y < 40: uxx + uyy = 0,   (D.31)
0 < y < 40: u(0, y) = 1,   (D.32)
0 < y < 40: u(30, y) = 0,   (D.33)
0 < x < 30: u(x, 0) = (1 − x/30)²,   (D.34)
0 < x < 30: u(x, 40) = (1 − x/30)^{1/4}.   (D.35)

Here the discretizations are as follows:

vm,n = u(m·δx, n·δy)
uxx ≈ (vm+1,n + vm−1,n − 2vm,n) / δx²
uyy ≈ (vm,n+1 + vm,n−1 − 2vm,n) / δy².

These, in conjunction with the boundary conditions, yield the system (with δx = δy = 1 and m = 0, 1, ..., 30, n = 0, 1, ..., 40):

0 < m < 30, 0 < n < 40: vm+1,n + vm−1,n + vm,n+1 + vm,n−1 − 4vm,n = 0   (D.36)
0 ≤ n ≤ 40: v0,n = 1   (D.37)
0 ≤ n ≤ 40: v30,n = 0   (D.38)
0 < m < 30: vm,0 = (1 − m/30)²   (D.39)
0 < m < 30: vm,40 = (1 − m/30)^{1/4}.   (D.40)
Equations (D.36)-(D.40) are a system of linear algebraic equations with unknowns v0,0, v0,1, ..., v0,40, v1,0, v1,1, ..., v1,40, ..., v30,40. It can be shown that the system matrix is invertible and the system can be solved to obtain u(m·δx, n·δy) ≈ vm,n (for m = 0, 1, ..., 30 and n = 0, 1, ..., 40). However, we will follow a different approach. Consider the system of difference equations (for m = 0, 1, ..., 30, n = 0, 1, ..., 40, t = 0, 1, 2, ...):

0 < m < 30, 0 < n < 40: v^t_{m,n} = (v^{t−1}_{m+1,n} + v^{t−1}_{m−1,n} + v^{t−1}_{m,n+1} + v^{t−1}_{m,n−1} + 4v^{t−1}_{m,n}) / 8   (D.41)
0 ≤ n ≤ 40: v^t_{0,n} = 1   (D.42)
0 ≤ n ≤ 40: v^t_{30,n} = 0   (D.43)
0 < m < 30: v^t_{m,0} = (1 − m/30)²   (D.44)
0 < m < 30: v^t_{m,40} = (1 − m/30)^{1/4}   (D.45)

with random initial conditions v^0_{m,n} (m = 0, 1, ..., 30, n = 0, 1, ..., 40). While not obvious, it can be proved that, as t → ∞, the system (D.41)-(D.45) converges

to the solution of the system (D.36)-(D.40). In other words, for m = 0, 1, ..., 30, n = 0, 1, ..., 40, we have

lim_{t→∞} v^t_{m,n} = vm,n.   (D.46)

Then, the numerical solution of (D.41)-(D.45) gives an approximate solution of (D.31)-(D.35). The algorithm is implemented by the following MATLAB code.

M=30;
N=40;
T=100;
u=rand(M,N);
u(1,:)=ones(1,N);
u(M,:)=zeros(1,N);
u(:,1)=(([1:-1/M:1/M]).^2)';
u(:,N)=(([1:-1/M:1/M]).^(1/4))';
for t=1:T
  uold=u;
  for m=2:M-1
    for n=2:N-1
      u(m,n)=(4*uold(m,n)+uold(m-1,n)+uold(m+1,n)+uold(m,n-1)+uold(m,n+1))/8;
    end
  end
  disp([t max(max(abs(u-uold)))])
end

The results are plotted below.


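For readers without MATLAB, the same iteration can be sketched in Python with NumPy (the sweep count T and the random seed are our own choices; the update and boundary values follow (D.41)–(D.45)):

```python
import numpy as np

M, N, T = 30, 40, 200                     # grid size; T sweeps is our choice
rng = np.random.default_rng(0)
v = rng.random((M + 1, N + 1))            # v[m, n] approximates u(m, n); random start

def apply_bc(v):
    m = np.arange(M + 1)
    v[0, :] = 1.0                         # (D.42): u(0, y)  = 1
    v[M, :] = 0.0                         # (D.43): u(30, y) = 0
    v[:, 0] = (1 - m / M) ** 2            # (D.44): u(x, 0)  = (1 - x/30)^2
    v[:, N] = (1 - m / M) ** 0.25         # (D.45): u(x, 40) = (1 - x/30)^(1/4)
    return v

v = apply_bc(v)
for t in range(T):
    old = v.copy()
    # (D.41): weighted average of the four neighbours and the old centre value
    v[1:M, 1:N] = (old[2:, 1:N] + old[:-2, 1:N] +
                   old[1:M, 2:] + old[1:M, :-2] + 4 * old[1:M, 1:N]) / 8
    v = apply_bc(v)
```

Note that the centre value enters with weight 4/8, as in the MATLAB loop; this damping affects only the convergence path, not the limit.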

D.1.9 Example. The above method can be applied to more general problems. The
following code solves a Laplace problem on a more complicated region.

clear
M=30;
N=40;
T=100;
u=rand(M,N);
u(1,:)=ones(1,N);
u(M,:)=zeros(1,N);
u(:,1)=(([1:-1/M:1/M]).^2)';
u(:,N)=(([1:-1/M:1/M]).^(1/4))';
for m=11:M-10
  for n=11:N-25
    u(m,n)=1;
  end
end
for t=1:T
  uold=u;
  for m=2:M-1
    for n=2:N-1
      u(m,n)=(4*uold(m,n)+uold(m-1,n)+uold(m+1,n)+uold(m,n-1)+uold(m,n+1))/8;
    end
  end
  for m=11:M-10
    for n=11:N-25
      u(m,n)=1;
    end
  end
  disp([t max(max(abs(u-uold)))])
end

The results are as follows.


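In the Python sketch, the more complicated region is handled the same way: after every sweep, the interior block is reset to 1 along with the boundary (the block indices below are the 0-based analogue of the m=11:M-10, n=11:N-25 loops and are illustrative):

```python
import numpy as np

M, N, T = 30, 40, 200
rng = np.random.default_rng(1)
v = rng.random((M + 1, N + 1))

def constrain(v):
    m = np.arange(M + 1)
    v[0, :] = 1.0                          # left edge
    v[M, :] = 0.0                          # right edge
    v[:, 0] = (1 - m / M) ** 2             # bottom edge
    v[:, N] = (1 - m / M) ** 0.25          # top edge
    v[10:20, 10:15] = 1.0                  # interior block held at u = 1
    return v

v = constrain(v)
for t in range(T):
    old = v.copy()
    v[1:M, 1:N] = (old[2:, 1:N] + old[:-2, 1:N] +
                   old[1:M, 2:] + old[1:M, :-2] + 4 * old[1:M, 1:N]) / 8
    v = constrain(v)
```

Re-imposing the constraint after each sweep is what turns the interior block into an additional "boundary" of the problem.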

D.2 Advanced Methods


D.2.1 Let us now turn to more sophisticated methods for the numerical solution
of PDEs. There is an extensive literature on this subject, and a vast collection of
solution methods has been developed. We are not going to explain these methods
(a full course could be devoted to this).

D.2.2 Instead, we will look at the problem from the computer user's perspective.

A large number of computer PDE solvers are available; some of them must be
purchased and others are freely available. We will mention one example from each
category.

D.2.1 MATLAB Partial Differential Equation Toolbox


D.2.3 A good proprietary PDE solver is the MATLAB Partial Differential Equation
Toolbox. It provides functions for solving PDEs using finite element analysis. It
also provides a GUI which facilitates the definition of the problem (especially the
specification of complex regions).

D.2.4 Example. As an example of the use of the PDE toolbox, we present the
commands required for the solution of the problem

0 ≤ x² + y² < 1 : uxx + uyy = δ(x, y),

x² + y² = 1 : u(x, y) = 0.

This is a Poisson equation, i.e., a Laplace equation with input. In this case, the
input is δ(x, y), which is the two-dimensional Dirac delta function:

∫∫_Ω f(x, y) δ(x, y) dx dy = f(0, 0) if (0, 0) ∈ Ω,  and 0 if (0, 0) ∉ Ω.
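The sifting property can be illustrated numerically by replacing δ(x, y) with a narrow normalized Gaussian (a standard approximation; the width, grid, and test function below are our own choices, not from the text):

```python
import numpy as np

def delta_eps(x, y, eps):
    # Normalized 2-D Gaussian: integrates to 1 and concentrates at the
    # origin as eps -> 0, so it approximates delta(x, y).
    return np.exp(-(x**2 + y**2) / (2 * eps**2)) / (2 * np.pi * eps**2)

f = lambda x, y: np.cos(x) * np.exp(y)   # smooth test function, f(0, 0) = 1

h = 0.002                                # grid spacing for the Riemann sum
x = np.arange(-1.0, 1.0, h)
X, Y = np.meshgrid(x, x)
integral = float(np.sum(f(X, Y) * delta_eps(X, Y, 0.02)) * h * h)
print(round(integral, 3))                # close to f(0, 0) = 1
```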

The equation must hold on the unit disk, and we have zero boundary conditions
on the unit circle. The required MATLAB commands are:

% Problem Definition
c = 1;
a = 0;
f = @circlef;
% Create the model
numberOfPDE = 1;
pdem = createpde(numberOfPDE);
% Create geometry and append to the PDE Model
g = @circleg;
geometryFromEdges(pdem,g);
% Define the boundary conditions
figure;
pdegplot(pdem, 'edgeLabels', 'on');
axis equal
title 'Geometry With Edge Labels Displayed';
% Solution is zero at all four outer edges of the circle
applyBoundaryCondition(pdem,'Edge',(1:4), 'u', 0);
% Solve the equation and plot the solution
[u,p,e,t] = adaptmesh(g,pdem,c,a,f,'tripick','circlepick','maxt',2000);
figure;
pdeplot(p,e,t,'xydata',u,'zdata',u,'mesh','off');

The plot is as follows.


D.2.2 FlexPDE
D.2.5 The "Lite" version of the FlexPDE solver is available for free. There is also a
"Pro", paid version, but the free version is extremely capable and comes with great
documentation and many examples.
D.2.6 You can download it from https://www.pdesolutions.com/sdmenu7.html.
Make sure that you also download the free book Fields of Physics by Finite Element
Analysis - An Introduction, by G. Backstrom, from https://www.pdesolutions.com/bookstor;
it explains a large number of the examples included in the FlexPDE distribution.

D.2.7 Example. Here is an example from the above-mentioned book. It concerns
finding the electric potential u (x, y) in a trapezoidal plate under given boundary
conditions. In the interior of the region, the potential satisfies the Laplace equation

uxx + uyy = 0.

The boundary conditions are rather complicated to list in mathematical notation,


but they can be read easily from the respective code:

TITLE { cond3.pde }
'Conduction in a Trapezoidal Plate'
{ Find the potential U in a plate under an impressed voltage. }
{ From "Fields of Physics" by Gunnar Backstrom }
SELECT
errlim= 1e-4
VARIABLES
U
DEFINITIONS
L1= 0.5
L2= 0.25
Ly= 1
cond= 5.99e7
Ex= -dx(U)
Ey= -dy(U)
E= -grad(U)
Em= magnitude(E)
Jx= cond*Ex
Jy= cond*Ey
J= cond*E
Jm= magnitude(J)
eqn= div( -cond*grad(U))
EQUATIONS
div( -cond*grad(U))= 0
BOUNDARIES
region 1
start(-L1,0) value(U)= 0
line to (L1,0) natural(U)= 0
line to (L2,Ly) value(U)= 1.0
line to (-L2,Ly) natural(U)= 0
line to close
END

Here is the plot of the results.


D.2.8 Example. Here is one more example. It concerns finding the steady-state
temperature u(x, y) in a cylindrical insulator which contains two hot-water
tubes. In the interior of the region, u(x, y) satisfies the Laplace equation

uxx + uyy = 0.

The boundary conditions are rather complicated to list in mathematical notation,


but they can be read easily from the respective code:

TITLE { heat1.pde }
'Two Insulated Tubes'
{ Temperature field in an insulator containing two hot-water tubes. }
{ From "Fields of Physics" by Gunnar Backstrom }
SELECT
errlim= 1e-4
VARIABLES
temp
DEFINITIONS
r0= 0.1
d= 0.15
r1= 0.5
Lx= 0.3
Ly= 0.2
k= 0.03 { Thermal conductivity of insulation }
fluxd_x= -k*dx(temp)
fluxd_y= -k*dy(temp)
fluxd= -k*grad( temp)
fluxdm= magnitude( fluxd)
f_angle= sign(fluxd_y)* arccos(fluxd_x/fluxdm)/pi*180
EQUATIONS
div(-k*grad(temp))= 0
BOUNDARIES
region 1
start 'outer' (0,-r1) value(temp)= 273 { Frozen soil }
arc( center= 0,0) angle= 360 close
start 'left' (-d-r0,0) value(temp)= 323 { Cutout for hot-water tube }
arc( center= -d,0) angle= -360 close
start 'right' (d-r0,0) value(temp)= 353 { Cutout for hot-water tube }
arc( center= d,0) angle= -360 close
END
The results are plotted below.
D.3 Projects
