Differential Equations
Lecture notes for MATH 2351/2352
Jeffrey R. Chasnov
Preface
What follows are my lecture notes for a first course in differential equations,
taught at the Hong Kong University of Science and Technology. Included in
these notes are links to short tutorial videos posted on YouTube.
Much of the material of Chapters 2-6 and 8 has been adapted from the
widely used textbook Elementary differential equations and boundary value
problems by Boyce & DiPrima (John Wiley & Sons, Inc., Seventh Edition,
© 2001). Many of the examples presented in these notes may be found in this
book. The material of Chapter 7 is adapted from the textbook Nonlinear
dynamics and chaos by Steven H. Strogatz (Perseus Publishing, © 1994).
All web surfers are welcome to download these notes, watch the YouTube
videos, and to use the notes and videos freely for teaching and learning. An
associated free review book with links to YouTube videos is also available from
the ebook publisher bookboon.com. I welcome any comments, suggestions or
corrections sent by email to jeffrey.chasnov@ust.hk. Links to my website, these
lecture notes, my YouTube page, and the free ebook from bookboon.com are
given below.
Homepage:
http://www.math.ust.hk/~machas
YouTube:
https://www.youtube.com/user/jchasnov
Lecture notes:
http://www.math.ust.hk/~machas/differential-equations.pdf
Bookboon:
http://bookboon.com/en/differential-equations-with-youtube-examples-ebook
Contents

0 A short mathematical review
   0.1 The trigonometric functions
   0.2 The exponential function and the natural logarithm
   0.3 Definition of the derivative
   0.4 Differentiating a combination of functions
       0.4.1 The sum or difference rule
       0.4.2 The product rule
       0.4.3 The quotient rule
       0.4.4 The chain rule
   0.5 Differentiating elementary functions
       0.5.1 The power rule
       0.5.2 Trigonometric functions
       0.5.3 Exponential and natural logarithm functions
   0.6 Definition of the integral
   0.7 The fundamental theorem of calculus
   0.8 Definite and indefinite integrals
   0.9 Indefinite integrals of elementary functions
   0.10 Substitution
   0.11 Integration by parts
   0.12 Taylor series
   0.13 Complex numbers

1 Introduction to odes
   1.1 The simplest type of differential equation

2 First-order odes
   2.1 The Euler method
   2.2 Separable equations
   2.3 Linear equations
   2.4 Applications
       2.4.1 Compound interest
       2.4.2 Chemical reactions
       2.4.3 Terminal velocity
       2.4.4 Escape velocity
       2.4.5 RC circuit
       2.4.6 The logistic equation

3 Second-order linear differential equations with constant coefficients
   3.1 The Euler method
   3.2 The principle of superposition
   3.3 The Wronskian
   3.4 Homogeneous odes
       3.4.1 Real, distinct roots

4 The Laplace transform
   4.1 Definition and properties
   4.2 Solution of initial value problems
   4.3 Heaviside and Dirac delta functions
       4.3.1 Heaviside function
       4.3.2 Dirac delta function
   4.4 Discontinuous or impulsive terms

5 Series solutions
   5.1 Ordinary points
   5.2 Regular singular points: Cauchy-Euler equations
       5.2.1 Real, distinct roots
       5.2.2 Complex conjugate roots
       5.2.3 Repeated roots

6 Systems of equations
   6.1 Determinants and the eigenvalue problem
   6.2 Coupled first-order equations
       6.2.1 Two distinct real eigenvalues
       6.2.2 Complex conjugate eigenvalues
       6.2.3 Repeated eigenvalues with one eigenvector
   6.3 Normal modes

8 Partial differential equations
   8.1 Derivation of the diffusion equation
   8.2 Derivation of the wave equation
   8.3 Fourier series
   8.4 Fourier cosine and sine series
   8.5 Solution of the diffusion equation
       8.5.1 Homogeneous boundary conditions
       8.5.2 Inhomogeneous boundary conditions
       8.5.3 Pipe with closed ends
   8.6 Solution of the wave equation
       8.6.1 Plucked string
       8.6.2 Hammered string
       8.6.3 General initial conditions
   8.7 The Laplace equation
       8.7.1 Dirichlet problem for a rectangle
       8.7.2 Dirichlet problem for a circle
Chapter 0
A short mathematical review
A basic understanding of calculus is required to undertake a study of differential
equations. This zero chapter presents a short review.
0.1 The trigonometric functions

The sine and cosine of the most commonly occurring angles can be remembered from the pattern

    sin 0 = √(0/4),  sin 30° = √(1/4),  sin 45° = √(2/4),  sin 60° = √(3/4),  sin 90° = √(4/4).

The following symmetry properties are also useful:

    sin (π/2 − x) = cos x,  cos (π/2 − x) = sin x;

and

    sin (−x) = −sin x,  cos (−x) = cos x.

0.2 The exponential function and the natural logarithm

The exponential function satisfies

    e^x / e^y = e^{x−y},  (e^x)^p = e^{px};

and the natural logarithm, its inverse, correspondingly satisfies

    ln (x/y) = ln x − ln y,  ln x^p = p ln x.
0.3 Definition of the derivative

The derivative of the function y = f(x), denoted f′(x) or dy/dx, is the slope of the tangent line to the curve y = f(x) at the point x, obtained by the limit

    f′(x) = lim_{h→0} ( f(x + h) − f(x) ) / h.  (1)

0.4 Differentiating a combination of functions

0.4.1 The sum or difference rule

The derivative of the sum or difference of f(x) and g(x) is the sum or difference of the derivatives:

    (f + g)′ = f′ + g′,  (f − g)′ = f′ − g′.

0.4.2 The product rule

The derivative of the product of f(x) and g(x) is

    (fg)′ = f′g + fg′.

0.4.3 The quotient rule

The derivative of the quotient of f(x) and g(x) is

    (f/g)′ = ( f′g − fg′ ) / g²,

and should be memorized as the derivative of the top times the bottom minus
the top times the derivative of the bottom over the bottom squared.

0.4.4 The chain rule

The derivative of the composition of f(x) and g(x) is

    ( f(g(x)) )′ = f′(g(x)) · g′(x).

0.5 Differentiating elementary functions

0.5.1 The power rule

The derivative of a power of x is given by

    (x^p)′ = p x^{p−1}.

0.5.2 Trigonometric functions

The derivatives of the sine and cosine functions are

    (sin x)′ = cos x,  (cos x)′ = −sin x.

We thus say that the derivative of sine is cosine, and the derivative of cosine
is minus sine. Notice that the second derivatives satisfy

    (sin x)″ = −sin x,  (cos x)″ = −cos x.

0.5.3 Exponential and natural logarithm functions

The exponential function is its own derivative, and the derivative of the natural logarithm is

    (e^x)′ = e^x,  (ln x)′ = 1/x.

0.6 Definition of the integral
The definite integral of a function f(x) from x = a to b may be defined as the limit of a Riemann Sum: the area under the curve is approximated by rectangles of width h whose heights are sampled at left endpoints, and the integral is the limit of the sum of the rectangle areas as h → 0. We define

    ∫_a^b f(x) dx = lim_{h→0} Σ_{n=1}^{N} h f( a + (n − 1)h ),  (2)

where N = (b − a)/h is the number of terms in the sum. This
Riemann Sum definition is extended to all values of a and b and for all values
of f(x) (positive and negative). Accordingly,

    ∫_b^a f(x) dx = −∫_a^b f(x) dx,  and  ∫_a^b ( −f(x) ) dx = −∫_a^b f(x) dx.

Also, if a < b < c, then

    ∫_a^c f(x) dx = ∫_a^b f(x) dx + ∫_b^c f(x) dx,

which states (when f(x) > 0) that the total area equals the sum of its parts.
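The Riemann-sum definition (2) is easy to check numerically. The following Python sketch (an illustration added here, not part of the original notes) approximates ∫₀¹ x² dx = 1/3 with a left-endpoint sum:

```python
def riemann_sum(f, a, b, N):
    """Left-endpoint Riemann sum: sum of h*f(a + (n-1)h) for n = 1..N."""
    h = (b - a) / N
    return sum(h * f(a + (n - 1) * h) for n in range(1, N + 1))

# As h -> 0 (i.e., N -> infinity) the sum approaches the definite integral.
approx = riemann_sum(lambda x: x**2, 0.0, 1.0, 100000)
print(approx)  # close to 1/3
```

Increasing N shrinks the error roughly in proportion to h, consistent with the rectangles approximating the area ever more finely.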
0.7 The fundamental theorem of calculus

view tutorial

Using the definition of the derivative, we differentiate the following integral:

    d/dx ∫_a^x f(s) ds = lim_{h→0} ( ∫_a^{x+h} f(s) ds − ∫_a^x f(s) ds ) / h
      = lim_{h→0} (1/h) ∫_x^{x+h} f(s) ds
      = lim_{h→0} h f(x)/h
      = f(x).

This result is called the fundamental theorem of calculus, and provides a connection between differentiation and integration.

The fundamental theorem teaches us how to integrate functions. Let F(x)
be a function such that F′(x) = f(x). We say that F(x) is an antiderivative of
f(x). Then from the fundamental theorem and the fact that the derivative of a
constant equals zero, evaluating ∫_a^x f(s) ds = F(x) + c at the endpoints eliminates the constant and yields

    ∫_a^b f(s) ds = F(b) − F(a).  (3)

We can also derive (3) directly from the definition of the integral, (2). Approximating h F′(x) ≈ F(x + h) − F(x) for small h, we have

    ∫_a^b f(s) ds = lim_{h→0} Σ_{n=1}^{N} h f( a + (n − 1)h )
      = lim_{h→0} Σ_{n=1}^{N} ( F(a + nh) − F(a + (n − 1)h) ).

The last expression has an interesting structure. All the values of F evaluated at the points lying between the endpoints a and b cancel each other in
consecutive terms. Only the value −F(a) survives when n = 1, and the value
+F(b) when n = N, yielding again (3).
0.8 Definite and indefinite integrals

The Riemann sum definition of (2) is called a definite integral. It is convenient also to define an indefinite integral by

    ∫ f(x) dx = F(x),

where F(x) is the antiderivative of f(x).

0.9 Indefinite integrals of elementary functions

From our known derivatives of elementary functions, we can determine some simple indefinite integrals. The power rule gives us

    ∫ x^p dx = x^{p+1}/(p + 1) + c,  p ≠ −1.

When p = −1, and x is positive, we have

    ∫ dx/x = ln x + c.

If x is negative, using the chain rule we have

    d/dx ln (−x) = 1/x.

Therefore, since

    |x| = { −x if x < 0;  x if x > 0, }

we can generalize our indefinite integral to strictly positive or strictly negative x:

    ∫ dx/x = ln |x| + c.

Trigonometric functions can also be integrated:

    ∫ cos x dx = sin x + c,  ∫ sin x dx = −cos x + c.

Easily proved identities are an addition rule:

    ∫ ( f(x) + g(x) ) dx = ∫ f(x) dx + ∫ g(x) dx;

and multiplication by a constant:

    ∫ A f(x) dx = A ∫ f(x) dx.

This permits integration of functions such as

    ∫ (x² + 7x + 2) dx = x³/3 + 7x²/2 + 2x + c,

and

    ∫ (5 cos x + sin x) dx = 5 sin x − cos x + c.
0.10 Substitution

More complicated functions can be integrated using the chain rule. Since

    d/dx F( g(x) ) = F′( g(x) ) · g′(x),

we have

    ∫ F′( g(x) ) g′(x) dx = F( g(x) ) + c.

This integration formula is usually implemented by letting y = g(x). Then one writes dy = g′(x) dx to obtain

    ∫ F′( g(x) ) g′(x) dx = ∫ F′(y) dy
      = F(y) + c
      = F( g(x) ) + c.
0.11 Integration by parts

Another integration technique makes use of the product rule for differentiation.
Since

    ( fg )′ = f′g + fg′,

we have

    f′g = ( fg )′ − fg′.

Therefore,

    ∫ f′(x) g(x) dx = f(x) g(x) − ∫ f(x) g′(x) dx.

Commonly, the above integral is done by writing

    u = g(x),  dv = f′(x) dx,
    du = g′(x) dx,  v = f(x),

so that the integration-by-parts formula becomes

    ∫ u dv = uv − ∫ v du.
0.12 Taylor series

A Taylor series of a function f(x) about a point x = a is a power series representation of f(x) developed so that all the derivatives of f(x) at a match all
the derivatives of the power series. Without worrying about convergence here,
we have

    f(x) = f(a) + f′(a)(x − a) + (f″(a)/2!)(x − a)² + (f‴(a)/3!)(x − a)³ + ⋯.

Notice that the first term in the power series matches f(a), all other terms
vanishing, the second term matches f′(a), all other terms vanishing, etc. Commonly, the Taylor series is developed with a = 0. We will also make use of the
Taylor series in a slightly different form, with x = x* + ε and a = x*:

    f(x* + ε) = f(x*) + f′(x*) ε + (f″(x*)/2!) ε² + (f‴(x*)/3!) ε³ + ⋯.

Taylor series of some commonly used functions are

    e^x = 1 + x + x²/2! + x³/3! + ⋯,
    sin x = x − x³/3! + x⁵/5! − ⋯,
    cos x = 1 − x²/2! + x⁴/4! − ⋯,
    1/(1 + x) = 1 − x + x² − ⋯,  for |x| < 1,
    ln (1 + x) = x − x²/2 + x³/3 − ⋯,  for |x| < 1.

A Taylor series of a function of several variables can also be developed. Here,
all partial derivatives of f(x, y) at (a, b) match all the partial derivatives of the
power series. With the notation

    f_x = ∂f/∂x,  f_y = ∂f/∂y,  f_xx = ∂²f/∂x²,  f_xy = ∂²f/∂x∂y,  f_yy = ∂²f/∂y²,  etc.,

we have

    f(x, y) = f(a, b) + f_x(a, b)(x − a) + f_y(a, b)(y − b)
      + (1/2!)( f_xx(a, b)(x − a)² + 2 f_xy(a, b)(x − a)(y − b) + f_yy(a, b)(y − b)² ) + ⋯.
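These series can be checked numerically. The short Python sketch below (an illustration added here, not part of the original notes) compares partial sums of the sine and logarithm series against the library functions:

```python
import math

def sin_taylor(x, terms):
    """Partial sum of sin x = x - x^3/3! + x^5/5! - ..."""
    return sum((-1)**k * x**(2*k + 1) / math.factorial(2*k + 1)
               for k in range(terms))

def ln1p_taylor(x, terms):
    """Partial sum of ln(1 + x) = x - x^2/2 + x^3/3 - ..., for |x| < 1."""
    return sum((-1)**(k + 1) * x**k / k for k in range(1, terms + 1))

print(sin_taylor(0.5, 6), math.sin(0.5))      # the two values agree closely
print(ln1p_taylor(0.5, 50), math.log(1.5))    # likewise
```

Note that the logarithm series needs many more terms than the sine series at the same x, reflecting its much slower convergence near the edge of |x| < 1.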
0.13 Complex numbers

We define the imaginary number i by i² = −1, and a complex number z as

    z = x + iy,

with x and y real numbers, called the real and imaginary parts of z. The complex conjugate of z is z̄ = x − iy, so that the real and imaginary parts can be written as

    Re z = ½ ( z + z̄ ),  Im z = (1/2i) ( z − z̄ ).

Furthermore,

    z z̄ = (x + iy)(x − iy)
      = x² − i²y²
      = x² + y²;

and we define the absolute value of z, also called the modulus of z, by

    |z| = ( z z̄ )^{1/2}
      = √(x² + y²).

We can add, subtract, multiply and divide complex numbers to get new
complex numbers. With z = x + iy and w = s + it, and x, y, s, t real numbers,
we have

    z + w = (x + s) + i(y + t);
    z − w = (x − s) + i(y − t);
    zw = (x + iy)(s + it)
      = (xs − yt) + i(xt + ys);
    z/w = z w̄ / (w w̄)
      = (x + iy)(s − it) / (s² + t²)
      = (xs + yt)/(s² + t²) + i (ys − xt)/(s² + t²).

Furthermore,

    |zw| = √( (xs − yt)² + (xt + ys)² )
      = √( (x² + y²)(s² + t²) )
      = |z||w|;

and

    conj(zw) = (xs − yt) − i(xt + ys)
      = (x − iy)(s − it)
      = z̄ w̄.

Similarly,

    |z/w| = |z|/|w|,  conj(z/w) = z̄/w̄.

Using the Taylor series of section 0.12, we can derive Euler's formula. With θ real,

    e^{iθ} = 1 + iθ + (iθ)²/2! + (iθ)³/3! + (iθ)⁴/4! + (iθ)⁵/5! + ⋯
      = ( 1 − θ²/2! + θ⁴/4! − ⋯ ) + i ( θ − θ³/3! + θ⁵/5! − ⋯ )
      = cos θ + i sin θ.

Therefore, we have

    cos θ = Re e^{iθ},  sin θ = Im e^{iθ},

and also

    cos θ = ( e^{iθ} + e^{−iθ} )/2,  sin θ = ( e^{iθ} − e^{−iθ} )/2i.

Useful trigonometric relations that follow are

    cos²θ = (1 + cos 2θ)/2,  sin²θ = (1 − cos 2θ)/2,

with the integration formulas

    ∫ cos²θ dθ = ¼ ( 2θ + sin 2θ ) + c,  ∫ sin²θ dθ = ¼ ( 2θ − sin 2θ ) + c,

from which follows

    ∫₀^{2π} sin²θ dθ = ∫₀^{2π} cos²θ dθ = π.
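Euler's formula and the modulus and conjugate identities are easy to verify with Python's built-in complex arithmetic (an illustrative check, not part of the original notes):

```python
import cmath
import math

theta = 0.7
z = cmath.exp(1j * theta)

# Euler's formula: e^{i*theta} = cos(theta) + i*sin(theta)
print(z.real, math.cos(theta))  # the two values agree
print(z.imag, math.sin(theta))  # likewise

# |zw| = |z||w| and conj(zw) = conj(z)*conj(w)
w = 2.0 - 3.0j
v = 1.5 + 0.5j
print(abs(w * v), abs(w) * abs(v))
```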
Chapter 1
Introduction to odes
A differential equation is an equation for a function that relates the values of
the function to the values of its derivatives. An ordinary differential equation
(ode) is a differential equation for a function of a single variable, e.g., x(t), while
a partial differential equation (pde) is a differential equation for a function of
several variables, e.g., v(x, y, z, t). An ode contains ordinary derivatives and a
pde contains partial derivatives. Typically, pdes are much harder to solve than
odes.
1.1 The simplest type of differential equation

view tutorial

The simplest ordinary differential equations can be integrated directly by finding
antiderivatives. These simplest odes have a right-hand side that depends only on the independent variable. As an example, consider a mass falling under the influence of constant gravity. Newton's law, F = ma, results in the equation

    m d²x/dt² = −mg,

where x is the height of the object above the ground, m is the mass of the
object, and g = 9.8 meter/sec² is the constant gravitational acceleration. As
Galileo suggested, the mass cancels from the equation, and

    d²x/dt² = −g.

Here, the right-hand-side of the ode is a constant. The first integration, obtained
by antidifferentiation, yields

    dx/dt = A − gt,

with A the first constant of integration; and the second integration yields

    x = B + At − ½gt²,

with B the second constant of integration. The two constants of integration A
and B can then be determined from the initial conditions. If we know that the
initial height of the mass is x₀, and the initial velocity is v₀, then the initial
conditions are

    x(0) = x₀,  dx/dt(0) = v₀.

Substitution of these initial conditions into the equations for dx/dt and x allows
us to solve for A and B. The unique solution that satisfies both the ode and
the initial conditions is given by

    x(t) = x₀ + v₀t − ½gt².  (1.1)

For example, suppose we drop a ball off the top of a 50 meter building. How
long will it take the ball to hit the ground? This question requires solution of
(1.1) for the time T it takes for x(T) = 0, given x₀ = 50 meter and v₀ = 0.
Solving for T,

    T = √(2x₀/g)
      = √(2 · 50/9.8) sec
      ≈ 3.2 sec.
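The drop-time calculation above can be reproduced in a few lines of Python (an illustrative check, not part of the original notes):

```python
import math

g = 9.8    # m/s^2, gravitational acceleration
x0 = 50.0  # m, initial height
v0 = 0.0   # m/s, initial velocity

def height(t):
    """x(t) = x0 + v0*t - (1/2)*g*t**2, equation (1.1)."""
    return x0 + v0 * t - 0.5 * g * t**2

T = math.sqrt(2 * x0 / g)  # time at which x(T) = 0 when v0 = 0
print(round(T, 1))  # 3.2
```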
Chapter 2
First-order differential equations
Reference: Boyce and DiPrima, Chapter 2
The general first-order differential equation for the function y = y(x) is written
as

    dy/dx = f(x, y),  (2.1)

where f(x, y) can be any function of the independent variable x and the dependent variable y. We first show how to determine a numerical solution of this
equation, and then learn techniques for solving analytically some special forms
of (2.1), namely, separable and linear first-order equations.
2.1 The Euler method

view tutorial

Although it is not always possible to find an analytical solution of (2.1) for
y = y(x), it is always possible to determine a unique numerical solution given
an initial value y(x₀) = y₀, and provided f(x, y) is a well-behaved function.
The differential equation (2.1) gives us the slope f(x₀, y₀) of the tangent line
to the solution curve y = y(x) at the point (x₀, y₀). With a small step size
Δx, the initial condition (x₀, y₀) can be marched forward in the x-coordinate
to x = x₀ + Δx, and along the tangent line using Euler's method to obtain the
y-coordinate

    y(x₀ + Δx) = y(x₀) + Δx f(x₀, y₀).

This solution (x₀ + Δx, y₀ + Δy) then becomes the new initial condition and is
marched forward in the x-coordinate another Δx, and along the newly determined tangent line. For small enough Δx, the numerical solution converges to
the exact solution.
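The marching procedure just described can be sketched in Python (an illustrative implementation, not part of the original notes); here it is tested on dy/dx = y with y(0) = 1, whose exact solution is e^x:

```python
import math

def euler(f, x0, y0, x_end, n_steps):
    """March dy/dx = f(x, y) from (x0, y0) to x_end in n_steps Euler steps."""
    dx = (x_end - x0) / n_steps
    x, y = x0, y0
    for _ in range(n_steps):
        y += dx * f(x, y)  # move along the tangent line of slope f(x, y)
        x += dx
    return y

approx = euler(lambda x, y: y, 0.0, 1.0, 1.0, 10000)
print(approx)  # close to e = 2.71828...
```

Halving Δx roughly halves the error, the expected first-order convergence of Euler's method.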
2.2 Separable equations
view tutorial
A first-order ode is separable if it can be written in the form

    g(y) dy/dx = f(x),  y(x₀) = y₀,  (2.2)

where the function g(y) is independent of x and f(x) is independent of y. Integration of (2.2) from x₀ to x results in

    ∫_{x₀}^x g(y(x)) y′(x) dx = ∫_{x₀}^x f(x) dx.

The integral on the left can be transformed by the substitution u = y(x), du = y′(x) dx, with new limits of integration y(x₀) = y₀ and y(x). Therefore,

    ∫_{y₀}^y g(u) du = ∫_{x₀}^x f(x) dx,

and since u is a dummy variable of integration, we can write this in the equivalent
form

    ∫_{y₀}^y g(y) dy = ∫_{x₀}^x f(x) dx.  (2.3)

A simpler procedure that also yields (2.3) is to treat dy/dx in (2.2) like a fraction.
Multiplying (2.2) by dx results in

    g(y) dy = f(x) dx,

which is a separated equation with all the dependent variables on the left-side,
and all the independent variables on the right-side. Equation (2.3) then results
directly upon integration.
Example: Solve dy/dx + ½y = 3/2, with y(0) = 2.

We first manipulate the differential equation to the form

    dy/dx = ½(3 − y),  (2.4)

and then treat dy/dx as if it was a fraction to separate variables:

    dy/(3 − y) = ½ dx.

We integrate the right-side from the initial condition x = 0 to x and the left-side
from the initial condition y(0) = 2 to y. Accordingly,

    ∫₂^y dy/(3 − y) = ½ ∫₀^x dx.  (2.5)
[Figure 2.1: Solution curves of the ode dy/dx + ½y = 3/2.]
The integrals in (2.5) need to be done. Note that y(x) < 3 for finite x or the
integral on the left-side diverges. Therefore, 3 − y > 0 and integration yields

    −ln (3 − y) ]₂^y = ½ x ]₀^x,
    ln (3 − y) = −½ x,
    3 − y = e^{−x/2},
    y = 3 − e^{−x/2}.

Since this is our first nontrivial analytical solution, it is prudent to check our
result. We do this by differentiating our solution:

    dy/dx = ½ e^{−x/2}
      = ½ (3 − y);

and checking the initial conditions, y(0) = 3 − e⁰ = 2. Therefore, our solution
satisfies both the original ode and the initial condition.
Example: Solve dy/dx + ½y = 3/2, with y(0) = 4.

This is the identical differential equation as before, but with different initial
conditions. We will jump directly to the integration step:

    ∫₄^y dy/(3 − y) = ½ ∫₀^x dx.

Now y > 3, so that 3 − y < 0, and integration yields

    ln (y − 3) = −½ x,
    y = 3 + e^{−x/2}.

The solution curves for a range of initial conditions are presented in Fig. 2.1.
All solutions have a horizontal asymptote at y = 3 at which dy/dx = 0. For
y(0) = y₀, the general solution can be shown to be y(x) = 3 + (y₀ − 3) exp(−x/2).
Example: Solve dy/dx = (2 cos 2x)/(3 + 2y), with y(0) = −1. (i) For what values of
x > 0 does the solution exist? (ii) For what value of x > 0 is y(x)
maximum?

Notice that the solution of the ode may not exist when y = −3/2, since dy/dx
→ ∞. We separate variables and integrate from initial conditions:

    (3 + 2y) dy = 2 cos 2x dx,
    ∫_{−1}^y (3 + 2y) dy = 2 ∫₀^x cos 2x dx,
    3y + y² ]_{−1}^y = sin 2x ]₀^x,
    y² + 3y + 2 − sin 2x = 0,
    y = ½ [ −3 ± √(1 + 4 sin 2x) ].

Solving the quadratic equation for y has introduced a spurious solution that
does not satisfy the initial conditions. We test:

    y(0) = ½ [−3 ± 1] = { −1;  −2. }

Only the + root satisfies the initial condition, so that the unique solution to the
ode and initial condition is

    y = ½ [ −3 + √(1 + 4 sin 2x) ].  (2.6)

To determine (i) the values of x > 0 for which the solution exists, we require

    1 + 4 sin 2x ≥ 0,

or

    sin 2x ≥ −¼.  (2.7)

Notice that at x = 0, we have sin 2x = 0; at x = π/4, we have sin 2x = 1;
at x = π/2, we have sin 2x = 0; and at x = 3π/4, we have sin 2x = −1. We
therefore need to determine the value of x such that sin 2x = −1/4, with x in
the range π/2 < x < 3π/4. The solution to the ode will then exist for all x
between zero and this value.
[Figure 2.2: Solution of the ode dy/dx = (2 cos 2x)/(3 + 2y), with y(0) = −1.]
The inverse of the sine function, defined on the principal branch −π/2 ≤ x ≤ π/2,
is denoted by arcsin. The first solution with x > 0 of the equation sin 2x = −1/4
places 2x in the interval (π, 3π/2), so to invert this equation using the arcsine
we need to apply the identity sin (π − x) = sin x, and rewrite sin 2x = −1/4 as
sin (π − 2x) = −1/4. The solution of this equation may then be found by taking
the arcsine, and is

    π − 2x = arcsin (−1/4),

or

    x = ½ ( π + arcsin ¼ ).

To determine (ii) the value of x > 0 at which y(x) is maximum, observe from (2.6) that y is largest when sin 2x attains its maximum value of one; the first such x > 0 is x = π/4.
2.3 Linear equations
view tutorial
The first-order linear differential equation (linear in y and its derivative) can be
written in the form

    dy/dx + p(x) y = g(x),  (2.8)

with the initial condition y(x₀) = y₀. Linear first-order equations can be integrated using an integrating factor μ(x). We multiply (2.8) by μ(x),

    μ(x) [ dy/dx + p(x) y ] = μ(x) g(x),  (2.9)

and try to determine μ(x) so that

    μ(x) [ dy/dx + p(x) y ] = d/dx [ μ(x) y ].  (2.10)

Equation (2.9) then becomes

    d/dx [ μ(x) y ] = μ(x) g(x).  (2.11)

Equation (2.11) is easily integrated using μ(x₀) = μ₀ and y(x₀) = y₀:

    μ(x) y − μ₀ y₀ = ∫_{x₀}^x μ(x) g(x) dx,

or

    y = (1/μ(x)) ( μ₀ y₀ + ∫_{x₀}^x μ(x) g(x) dx ).  (2.12)

It remains to determine μ(x) from (2.10). Differentiating and expanding (2.10) yields

    μ dy/dx + p μ y = (dμ/dx) y + μ dy/dx;

and upon simplifying,

    dμ/dx = p μ.  (2.13)

Equation (2.13) is separable and can be integrated:

    ∫_{μ₀}^μ dμ/μ = ∫_{x₀}^x p(x) dx,
    ln (μ/μ₀) = ∫_{x₀}^x p(x) dx,
    μ(x) = μ₀ exp ( ∫_{x₀}^x p(x) dx ).

Notice that since μ₀ cancels out of (2.12), it is customary to assign μ₀ = 1, so that the solution of (2.8) satisfying y(x₀) = y₀ is

    y = (1/μ(x)) ( y₀ + ∫_{x₀}^x μ(x) g(x) dx ),

with

    μ(x) = exp ( ∫_{x₀}^x p(x) dx )

the integrating factor. This important result finds frequent use in applied mathematics.
Example: Solve dy/dx + 2y = e^{−x}, with y(0) = 3/4.

With p(x) = 2 and g(x) = e^{−x}, we have

    μ(x) = exp ( ∫₀^x 2 dx )
      = e^{2x},

and

    y = e^{−2x} ( 3/4 + ∫₀^x e^{2x} e^{−x} dx )
      = e^{−2x} ( 3/4 + ∫₀^x e^{x} dx )
      = e^{−2x} ( 3/4 + (e^x − 1) )
      = e^{−2x} ( e^x − ¼ )
      = e^{−x} ( 1 − ¼ e^{−x} ).
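As a numerical cross-check (an illustration added here, not part of the original notes), we can march this ode forward with the Euler method of section 2.1 and compare with the integrating-factor solution:

```python
import math

def euler(f, x0, y0, x_end, n_steps):
    """Forward Euler march of dy/dx = f(x, y)."""
    dx = (x_end - x0) / n_steps
    x, y = x0, y0
    for _ in range(n_steps):
        y += dx * f(x, y)
        x += dx
    return y

def y_exact(x):
    # Integrating-factor solution: y = e^{-x} (1 - e^{-x}/4)
    return math.exp(-x) * (1 - math.exp(-x) / 4)

# The same ode written as dy/dx = e^{-x} - 2y, marched from y(0) = 3/4:
approx = euler(lambda x, y: math.exp(-x) - 2 * y, 0.0, 0.75, 2.0, 20000)
print(approx, y_exact(2.0))  # the two values agree closely
```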
Example: Solve dy/dx − 2xy = x, with y(0) = 0.

This equation is separable, and we solve it in two ways. First, using an integrating factor with p(x) = −2x and g(x) = x:

    μ(x) = exp ( −2 ∫₀^x x dx )
      = e^{−x²},

and

    y = e^{x²} ∫₀^x x e^{−x²} dx.

The integral can be done by substitution with u = x², du = 2x dx:

    ∫₀^x x e^{−x²} dx = ½ ∫₀^{x²} e^{−u} du
      = −½ e^{−u} ]₀^{x²}
      = ½ ( 1 − e^{−x²} ).

Therefore,

    y = ½ e^{x²} ( 1 − e^{−x²} )
      = ½ ( e^{x²} − 1 ).
Second, we integrate by separating variables:

    dy/dx − 2xy = x,
    dy/dx = x (1 + 2y),
    ∫₀^y dy/(1 + 2y) = ∫₀^x x dx,
    ½ ln (1 + 2y) = ½ x²,
    1 + 2y = e^{x²},
    y = ½ ( e^{x²} − 1 ).

The results from the two different solution methods are the same, and the choice
of method is a personal preference.
2.4 Applications

2.4.1 Compound interest
view tutorial
The equation for the growth of an investment with continuous compounding
of interest is a first-order differential equation. Let S(t) be the value of the
investment at time t, and let r be the annual interest rate compounded after
every time interval Δt. We can also include deposits (or withdrawals). Let k be
the annual deposit amount, and suppose that an installment is deposited after
every time interval Δt. The value of the investment at the time t + Δt is then
given by

    S(t + Δt) = S(t) + (rΔt)S(t) + kΔt,  (2.14)

where at the end of the time interval Δt, rΔtS(t) is the amount of interest
credited and kΔt is the amount of money deposited (k > 0) or withdrawn
(k < 0). As a numerical example, if the account held $10,000 at time t, and
r = 6% per year and k = $12,000 per year, say, and the compounding and
deposit period is Δt = 1 month = 1/12 year, then the interest awarded after
one month is rΔtS = (0.06/12) × $10,000 = $50, and the amount deposited is
kΔt = $1000.

Rearranging the terms of (2.14) to exhibit what will soon become a derivative, we have

    ( S(t + Δt) − S(t) )/Δt = rS(t) + k.

Taking the limit Δt → 0 yields the ode for continuous compounding and continuous deposits,

    dS/dt = rS + k,  (2.15)

which can be solved with the initial condition S(0) = S₀, where S₀ is the initial
capital. We can solve either by separating variables or by using an integrating
factor; here we separate variables. Integrating from t = 0 to a final time t,

    ∫_{S₀}^S dS/(rS + k) = ∫₀^t dt,
    (1/r) ln ( (rS + k)/(rS₀ + k) ) = t,
    rS + k = (rS₀ + k) e^{rt},

so that

    S(t) = S₀ e^{rt} + (k/r)( e^{rt} − 1 ),  (2.16)

where the first term on the right-hand side of (2.16) comes from the initial
invested capital, and the second term comes from the deposits (or withdrawals).
Evidently, compounding results in the exponential growth of an investment.
As a practical example, we can analyze a simple retirement plan. It is
easiest to assume that all amounts and returns are in real dollars (adjusted for
inflation). Suppose a 25 year-old plans to set aside a fixed amount every year of
his/her working life, invests at a real return of 6%, and retires at age 65. How
much must he/she invest each year to have $8,000,000 at retirement? We need
to solve (2.16) for k using t = 40 years, S(t) = $8,000,000, S₀ = 0, and r = 0.06
per year. We have

    k = r S(t) / ( e^{rt} − 1 ),
    k = 0.06 × 8,000,000 / ( e^{0.06 × 40} − 1 )
      = $47,889 year⁻¹.

To have saved approximately one million US$ at retirement, the worker would
need to save about HK$50,000 per year over his/her working life. Note that the
amount saved over the worker's life is approximately 40 × $50,000 = $2,000,000,
while the amount earned on the investment (at the assumed 6% real return) is
approximately $8,000,000 − $2,000,000 = $6,000,000. The amount earned from
the investment is about 3× the amount saved, even with the modest real return
of 6%. Sound investment planning is well worth the effort.
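The retirement calculation can be reproduced in a few lines of Python (an illustrative check, not part of the original notes):

```python
import math

r = 0.06               # annual real return
t = 40.0               # years of saving
S_target = 8_000_000.0 # desired capital at retirement

# Solving (2.16) with S0 = 0 for the annual deposit k:
# S(t) = (k/r) (e^{rt} - 1)  =>  k = r S(t) / (e^{rt} - 1)
k = r * S_target / (math.exp(r * t) - 1)
print(round(k))  # about 47,889 per year
```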
2.4.2 Chemical reactions
Suppose that two chemicals A and B react to form a product C, which we write
as

    A + B → C,

with k called the rate constant of the reaction. For simplicity, we will use
the same symbol C, say, to refer to both the chemical C and its concentration.
The law of mass action says that dC/dt is proportional to the product of the
concentrations A and B, with proportionality constant k; that is,

    dC/dt = kAB.  (2.17)
Similarly, the law of mass action enables us to write equations for the time-derivatives of the reactant concentrations A and B:

    dA/dt = −kAB,  dB/dt = −kAB.  (2.18)

The ode given by (2.17) can be solved analytically using conservation laws. We
assume that A₀ and B₀ are the initial concentrations of the reactants, and that
no product is initially present. From (2.17) and (2.18),

    d/dt (A + C) = 0  ⟹  A + C = A₀,
    d/dt (B + C) = 0  ⟹  B + C = B₀.

Using these conservation laws, (2.17) becomes

    dC/dt = k (A₀ − C)(B₀ − C),  C(0) = 0,

which may be integrated by separating variables:

    ∫₀^C dC / ( (A₀ − C)(B₀ − C) ) = k ∫₀^t dt = kt.  (2.19)
The remaining integral can be done using the method of partial fractions. We
write

    1 / ( (A₀ − C)(B₀ − C) ) = a/(A₀ − C) + b/(B₀ − C).  (2.20)

The cover-up method is the simplest method to determine the unknown coefficients a and b. To determine a, we multiply both sides of (2.20) by A₀ − C and
set C = A₀ to find

    a = 1/(B₀ − A₀).

Similarly, to determine b, we multiply both sides of (2.20) by B₀ − C and set
C = B₀ to find

    b = 1/(A₀ − B₀).

Therefore,

    1 / ( (A₀ − C)(B₀ − C) ) = (1/(B₀ − A₀)) ( 1/(A₀ − C) − 1/(B₀ − C) ),

and the remaining integral of (2.19) becomes (using C < A₀, B₀)

    ∫₀^C dC / ( (A₀ − C)(B₀ − C) )
      = (1/(B₀ − A₀)) ( ∫₀^C dC/(A₀ − C) − ∫₀^C dC/(B₀ − C) )
      = (1/(B₀ − A₀)) ( −ln ( (A₀ − C)/A₀ ) + ln ( (B₀ − C)/B₀ ) )
      = (1/(B₀ − A₀)) ln ( A₀(B₀ − C) / ( B₀(A₀ − C) ) ).

Using this integral in (2.19), multiplying by (B₀ − A₀) and exponentiating, we obtain

    A₀(B₀ − C) / ( B₀(A₀ − C) ) = e^{(B₀ − A₀)kt},

and solving for C yields

    C(t) = A₀B₀ ( e^{(B₀ − A₀)kt} − 1 ) / ( B₀ e^{(B₀ − A₀)kt} − A₀ ),

which has the simple limits

    lim_{t→∞} C(t) = { A₀, if A₀ < B₀;  B₀, if B₀ < A₀ }
      = min(A₀, B₀).

As one would expect, the reaction stops after one of the reactants is depleted;
and the final concentration of product is equal to the initial concentration of
the depleted reactant.
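A quick numerical check of these limits (an illustration added here, with arbitrary sample values A₀ = 1, B₀ = 2, k = 0.5 that are not from the notes):

```python
import math

def C(t, A0, B0, k):
    """Analytic product concentration for A + B -> C (requires A0 != B0)."""
    e = math.exp((B0 - A0) * k * t)
    return A0 * B0 * (e - 1) / (B0 * e - A0)

A0, B0, k = 1.0, 2.0, 0.5
print(C(0.0, A0, B0, k))    # 0.0 — no product initially
print(C(100.0, A0, B0, k))  # approaches min(A0, B0) = 1
```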
2.4.3 Terminal velocity
view tutorial
Using Newton's law, we model a mass m free falling under gravity but with
air resistance. We assume that the force of air resistance is proportional to the
speed of the mass and opposes the direction of motion. We define the x-axis to
point in the upward direction, opposite the force of gravity. Near the surface
of the Earth, the force of gravity is approximately constant and is given by
−mg, with g = 9.8 m/s² the usual gravitational acceleration. The force of air
resistance is modeled by −kv, where v is the vertical velocity of the mass and
k is a positive constant. When the mass is falling, v < 0 and the force of air
resistance is positive, pointing upward and opposing the motion. The total force
on the mass is therefore given by F = −mg − kv. With F = ma and a = dv/dt,
we obtain the differential equation

    m dv/dt = −mg − kv.  (2.21)

The terminal velocity v∞ of the mass is defined as the asymptotic velocity after
air resistance balances the gravitational force. When the mass is at terminal
velocity, dv/dt = 0 so that

    v∞ = −mg/k.  (2.22)
The approach to terminal velocity is obtained by solving (2.21) with initial condition v(0) = 0. The equation is both linear and separable; separating variables and integrating,

    m ∫₀^v dv/(mg + kv) = −∫₀^t dt,
    (m/k) ln ( (mg + kv)/mg ) = −t,
    1 + kv/mg = e^{−kt/m},
    v = −(mg/k) ( 1 − e^{−kt/m} ).

Therefore, v = v∞ ( 1 − e^{−kt/m} ), and v approaches v∞ as the exponential term
decays to zero.

As an example, a skydiver of mass m = 100 kg with his parachute closed
may have a terminal velocity of 200 km/hr. With

    g = (9.8 m/s²)(10⁻³ km/m)(60 s/min)²(60 min/hr)² = 127,008 km/hr²,

one obtains from (2.22), k = 63,504 kg/hr. One-half of the terminal velocity
for free-fall (100 km/hr) is therefore attained when (1 − e^{−kt/m}) = 1/2, or t =
m ln 2/k ≈ 4 sec. Approximately 95% of the terminal velocity (190 km/hr) is
attained after 17 sec.
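The skydiver's time scales can be reproduced in Python (an illustrative check, not part of the original notes):

```python
import math

m = 100.0       # kg
g = 127_008.0   # km/hr^2
v_inf = 200.0   # terminal speed, km/hr
k = m * g / v_inf  # from (2.22): |v_inf| = m g / k

# v(t) = v_inf (1 - e^{-k t / m}); solve (1 - e^{-kt/m}) = fraction for t.
t_half = m * math.log(2) / k   # hours to reach half of terminal speed
t_95 = m * math.log(20) / k    # hours to reach 95% of terminal speed
print(round(t_half * 3600))    # 4 (seconds)
print(round(t_95 * 3600))      # 17 (seconds)
```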
2.4.4 Escape velocity
view tutorial
An interesting physical problem is to find the smallest initial velocity for a
mass on the Earth's surface to escape from the Earth's gravitational field, the
so-called escape velocity. Newton's law of universal gravitation asserts that the
gravitational force between two massive bodies is proportional to the product of
the two masses and inversely proportional to the square of the distance between
them. For a mass m a position x above the surface of the Earth, the force on
the mass is given by

    F = −GMm/(R + x)²,

where M and R are the mass and radius of the Earth and G is the gravitational
constant. The minus sign means the force on the mass points in the direction
of decreasing x. The approximately constant acceleration g on the Earth's
surface corresponds to the absolute value of F/m when x = 0:

    g = GM/R²,

and g ≈ 9.8 m/s². Newton's law F = ma for the mass m is thus given by

    m d²x/dt² = −GMm/(R + x)²,

or, using GM = gR²,

    d²x/dt² = −g/(1 + x/R)²,  (2.23)

where the radius of the Earth is known to be R ≈ 6350 km.
A useful trick allows us to solve this second-order differential equation as a first-order equation for the velocity v = dx/dt. By the chain rule,

    d²x/dt² = dv/dt = (dv/dx)(dx/dt) = v dv/dx,

so that (2.23) becomes the first-order ode

    v dv/dx = −g/(1 + x/R)²,

which may be solved assuming an initial velocity v(x = 0) = v₀ when the mass is shot vertically from the Earth's surface. Separating variables and integrating,

    ∫_{v₀}^v v dv = −g ∫₀^x dx/(1 + x/R)².

The left integral is ½( v² − v₀² ), and the right integral can be performed using
the substitution u = 1 + x/R, du = dx/R:

    ∫₀^x dx/(1 + x/R)² = R ∫₁^{1 + x/R} du/u²
      = −R/u ]₁^{1 + x/R}
      = R − R²/(x + R)
      = Rx/(x + R).

Therefore,

    ½( v² − v₀² ) = −gRx/(x + R),

which when multiplied by m is an expression of the conservation of energy (the
change of the kinetic energy of the mass is equal to the change in the potential
energy). Solving for v²,

    v² = v₀² − 2gRx/(x + R).

The escape velocity is defined as the minimum initial velocity v₀ such that
the mass can escape to infinity. Therefore, v₀ = v_escape when v → 0 as x → ∞.
Taking this limit, we have

    v²_escape = lim_{x→∞} 2gRx/(x + R)
      = 2gR.

With R ≈ 6350 km and g = 127,008 km/hr², we determine v_escape = √(2gR) ≈
40,000 km/hr. In comparison, the muzzle velocity of a modern high-performance
rifle is 4300 km/hr, almost an order of magnitude too slow for a bullet, shot
into the sky, to escape the Earth's gravity.
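A quick Python check of the escape-velocity estimate (an illustration added here, not part of the original notes):

```python
import math

g = 127_008.0  # km/hr^2
R = 6350.0     # km, radius of the Earth

v_escape = math.sqrt(2 * g * R)
print(round(v_escape))  # about 40,000 km/hr

def v_squared(x, v0):
    """Kinetic-energy relation: v^2 = v0^2 - 2 g R x / (x + R)."""
    return v0**2 - 2 * g * R * x / (x + R)
```

Launching at exactly v_escape, the remaining v² at very large x is a vanishing fraction of the initial value, confirming that the mass barely escapes to infinity.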
[Figure 2.3: An RC circuit: a battery with emf E connects through a two-position switch (positions a and b) to a resistor R and capacitor C in series.]

2.4.5 RC circuit
view tutorial
Consider a resistor R and a capacitor C connected in series as shown in Fig. 2.3.
A battery providing an electromotive force, or emf E, connects to this circuit
by a switch. Initially, there is no charge on the capacitor. When the switch
is thrown to a, the battery connects and the capacitor charges. When the
switch is thrown to b, the battery disconnects and the capacitor discharges,
with energy dissipated in the resistor. Here, we determine the voltage drop
across the capacitor during charging and discharging.

The equations for the voltage drops across a capacitor and a resistor are
given by

    V_C = q/C,  V_R = iR,  (2.24)

where C is the capacitance and R is the resistance. The charge q and the current
i are related by

    i = dq/dt.  (2.25)

Kirchhoff's voltage law states that the emf E in any closed loop is equal to
the sum of the voltage drops in that loop. Applying Kirchhoff's voltage law
when the switch is thrown to a results in

    V_R + V_C = E.  (2.26)

Using (2.24) and (2.25), the voltage drop across the resistor can be written in
terms of the voltage drop across the capacitor as

    V_R = RC dV_C/dt,

and (2.26) can be rewritten to yield the first-order linear differential equation
for V_C given by

    dV_C/dt + V_C/RC = E/RC,  (2.27)

with initial condition V_C(0) = 0. The integrating factor is

    μ(t) = e^{t/RC},

so that

    V_C(t) = e^{−t/RC} ∫₀^t (E/RC) e^{t/RC} dt,

with solution

    V_C(t) = E ( 1 − e^{−t/RC} ).

The voltage starts at zero and rises exponentially to E, with characteristic time
scale given by RC.

When the switch is thrown to b, application of Kirchhoff's voltage law results
in

    V_R + V_C = 0,

with corresponding differential equation

    dV_C/dt + V_C/RC = 0.

Here, we assume that the capacitor is initially fully charged so that V_C(0) = E.
The solution, then, during the discharge phase is given by

    V_C(t) = E e^{−t/RC}.

The voltage starts at E and decays exponentially to zero, again with characteristic time scale given by RC.
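The charging and discharging curves can be sketched in Python. The component values below are illustrative assumptions, not taken from the notes:

```python
import math

E = 12.0     # battery emf, volts (illustrative value)
R = 1000.0   # resistance, ohms (illustrative value)
C = 1e-6     # capacitance, farads (illustrative value)
tau = R * C  # characteristic time scale RC

def Vc_charge(t):
    """Charging phase: Vc(t) = E (1 - e^{-t/RC})."""
    return E * (1 - math.exp(-t / tau))

def Vc_discharge(t):
    """Discharging phase: Vc(t) = E e^{-t/RC}."""
    return E * math.exp(-t / tau)

# After one time constant the capacitor has reached about 63% of E:
print(Vc_charge(tau) / E)  # about 0.632
```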
2.4.6 The logistic equation

view tutorial

Let N = N(t) be the size of a population at time t and let r be the growth
rate. The Malthusian growth model (Thomas Malthus, 1766-1834), similar to
a compound interest model, is given by

    dN/dt = rN,

and results in unbounded exponential growth. Since a population's growth must eventually slow as resources are consumed, we modify the growth rate by the factor (1 − N/K), where K is called the carrying capacity of the environment. The result is the logistic equation

    dN/dt = rN ( 1 − N/K ),  (2.28)

where we may assume the initial condition N(0) = N₀ > 0. Separating variables
and integrating,

    ∫_{N₀}^N dN / ( N(1 − N/K) ) = r ∫₀^t dt.

The integral on the left-hand-side can be done using the method of partial
fractions:

    1 / ( N(1 − N/K) ) = 1/N + (1/K)/(1 − N/K);

so that

    ∫_{N₀}^N dN / ( N(1 − N/K) ) = ln (N/N₀) − ln ( (1 − N/K)/(1 − N₀/K) )
      = ln ( N(1 − N₀/K) / ( N₀(1 − N/K) ) )
      = rt.

Solving for N, we first exponentiate both sides and then isolate N:

    N(1 − N₀/K) / ( N₀(1 − N/K) ) = e^{rt},
    N(K − N₀) = N₀(K − N) e^{rt},
    N( K − N₀ + N₀e^{rt} ) = N₀K e^{rt},
    N(t) = N₀K / ( N₀ + (K − N₀) e^{−rt} ).  (2.29)

The population, therefore, grows in size until it reaches the carrying capacity K of
its environment.
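The logistic solution (2.29) can be checked numerically (an illustrative sketch with arbitrary sample values, not part of the original notes):

```python
import math

def N(t, N0, K, r):
    """Logistic solution (2.29): N(t) = K N0 / (N0 + (K - N0) e^{-rt})."""
    return K * N0 / (N0 + (K - N0) * math.exp(-r * t))

N0, K, r = 10.0, 1000.0, 0.5
print(N(0.0, N0, K, r))   # 10.0 — the initial condition
print(N(50.0, N0, K, r))  # approaches the carrying capacity K = 1000
```

A finite-difference check confirms that this expression also satisfies the ode dN/dt = rN(1 − N/K) at intermediate times.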
Chapter 3
Second-order linear differential equations with constant coefficients
Reference: Boyce and DiPrima, Chapter 3
The general second-order linear differential equation with independent variable
t and dependent variable x = x(t) is given by

    ẍ + p(t)ẋ + q(t)x = g(t),  (3.1)

where we use the standard physics notation ẋ = dx/dt and ẍ = d²x/dt². A unique solution of (3.1) requires initial values x(t₀) = x₀ and ẋ(t₀) = u₀.
3.1 The Euler method

view tutorial

In general, (3.1) cannot be solved analytically, and we begin by deriving an
algorithm for numerical solution. Consider the general second-order ode given
by

    ẍ = f(t, x, ẋ).

We can write this second-order ode as a pair of first-order odes by defining u = ẋ; the pair is

    ẋ = u,  (3.2)
    u̇ = f(t, x, u).  (3.3)

The first ode, (3.2), gives the slope of the tangent line to the curve x = x(t);
the second ode, (3.3), gives the slope of the tangent line to the curve u = u(t).
Beginning at the initial values (x, u) = (x₀, u₀) at the time t = t₀, we move
along the tangent lines to determine x₁ = x(t₀ + Δt) and u₁ = u(t₀ + Δt):

    x₁ = x₀ + Δt u₀,
    u₁ = u₀ + Δt f(t₀, x₀, u₀).
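The two-equation marching scheme above can be sketched in Python (an illustrative implementation, not part of the original notes); here it is tested on ẍ = −x with x(0) = 1, ẋ(0) = 0, whose exact solution is cos t:

```python
import math

def euler2(f, t0, x0, u0, t_end, n_steps):
    """March x'' = f(t, x, x') as the pair x' = u, u' = f(t, x, u)."""
    dt = (t_end - t0) / n_steps
    t, x, u = t0, x0, u0
    for _ in range(n_steps):
        # Update x and u simultaneously, using the old values of both.
        x, u = x + dt * u, u + dt * f(t, x, u)
        t += dt
    return x, u

x, u = euler2(lambda t, x, u: -x, 0.0, 1.0, 0.0, 1.0, 100000)
print(x, math.cos(1.0))  # both close to cos(1) = 0.5403...
```

The simultaneous tuple assignment matters: updating x before computing u would silently use the new x on the right-hand side.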
3.2 The principle of superposition

view tutorial

Consider the second-order linear homogeneous ode:

    ẍ + p(t)ẋ + q(t)x = 0;  (3.4)

and suppose that x = X₁(t) and x = X₂(t) are solutions to (3.4). We form the linear combination

    X(t) = c₁X₁(t) + c₂X₂(t),  (3.5)

with c₁ and c₂ constants, and ask whether X(t) is also a solution. Substituting into (3.4), we compute

    Ẍ + pẊ + qX = c₁Ẍ₁ + c₂Ẍ₂ + p( c₁Ẋ₁ + c₂Ẋ₂ ) + q( c₁X₁ + c₂X₂ )
      = c₁( Ẍ₁ + pẊ₁ + qX₁ ) + c₂( Ẍ₂ + pẊ₂ + qX₂ )
      = c₁ × 0 + c₂ × 0
      = 0,

since X₁ and X₂ were assumed to be solutions of (3.4). We have therefore shown
that any linear combination of solutions to the second-order linear homogeneous
ode is also a solution.
3.3 The Wronskian

view tutorial

Suppose that having determined that two solutions of (3.4) are x = X₁(t) and
x = X₂(t), we attempt to write the general solution to (3.4) as (3.5). We must
then ask whether this general solution will be able to satisfy the two initial
conditions given by

    x(t₀) = x₀,  ẋ(t₀) = u₀.  (3.6)

Applying these initial conditions to (3.5), we obtain

    c₁X₁(t₀) + c₂X₂(t₀) = x₀,
    c₁Ẋ₁(t₀) + c₂Ẋ₂(t₀) = u₀,  (3.7)

which is observed to be a system of two linear equations for the two unknowns
c₁ and c₂. Solution of (3.7) by standard methods results in

    c₁ = ( x₀Ẋ₂(t₀) − u₀X₂(t₀) )/W,  c₂ = ( u₀X₁(t₀) − x₀Ẋ₁(t₀) )/W,

where W is called the Wronskian and is given by

    W = X₁(t₀)Ẋ₂(t₀) − Ẋ₁(t₀)X₂(t₀).  (3.8)

When W ≠ 0, the two initial conditions can always be satisfied, and X₁ and X₂ are said to be linearly independent. For example, the solutions X₁(t) = cos ωt and X₂(t) = sin ωt of the ode ẍ + ω²x = 0 have Wronskian

    W = ω cos²ωt + ω sin²ωt = ω,

which is nonzero provided ω ≠ 0.
3.4 Homogeneous odes

view tutorial

We now study solutions of the homogeneous, constant coefficient ode, written
as

    aẍ + bẋ + cx = 0,  (3.9)

with a, b, and c constants. Such an equation arises for the charge on a capacitor
in an unpowered RLC electrical circuit, or for the position of a freely-oscillating
frictional mass on a spring, or for a damped pendulum. Our solution method
finds two linearly independent solutions to (3.9), multiplies each of these solutions by a constant, and adds them. The two free constants can then be used
to satisfy two given initial conditions.

Because of the differential properties of the exponential function, a natural
ansatz, or educated guess, for the form of the solution to (3.9) is x = e^{rt}, where
r is a constant to be determined. Successive differentiation results in ẋ = re^{rt}
and ẍ = r²e^{rt}, and substitution into (3.9) yields

    ar²e^{rt} + bre^{rt} + ce^{rt} = 0.  (3.10)

Canceling the common factor e^{rt} in (3.10) leaves a quadratic equation for the unknown constant r, called the characteristic equation of (3.9):

    ar² + br + c = 0.  (3.11)

Using the quadratic formula, the two solutions of (3.11) are

    r± = (1/2a) ( −b ± √(b² − 4ac) ).

There are three cases to consider: (1) if b² − 4ac > 0, then the two roots are
distinct and real; (2) if b² − 4ac < 0, then the two roots are distinct and complex
conjugates of each other; (3) if b² − 4ac = 0, then the two roots are degenerate
and there is only one real root. We will consider these three cases in turn.
3.4.1 Real, distinct roots
When r+ ≠ r− are real roots, then the general solution to (3.9) can be written as a linear superposition of the two solutions e^{r+ t} and e^{r− t}; that is,
x(t) = c1 e^{r+ t} + c2 e^{r− t}.
The unknown constants c1 and c2 can then be determined by the given initial conditions x(t0) = x0 and ẋ(t0) = u0. We now present two examples.
Example 1: Solve ẍ + 5ẋ + 6x = 0 with x(0) = 2 and ẋ(0) = 3.
The characteristic equation is
r² + 5r + 6 = (r + 2)(r + 3) = 0,
with roots r = −2 and r = −3. The general solution is therefore
x(t) = c1 e^{−2t} + c2 e^{−3t},
with derivative
ẋ(t) = −2c1 e^{−2t} − 3c2 e^{−3t}.
Use of the initial conditions then results in two equations for the two unknown constants c1 and c2:
c1 + c2 = 2,
−2c1 − 3c2 = 3.
Adding three times the first equation to the second equation yields c1 = 9; and the first equation then yields c2 = 2 − c1 = −7. Therefore, the unique solution that satisfies both the ode and the initial conditions is
x(t) = 9e^{−2t} − 7e^{−3t}
= 9e^{−2t}( 1 − (7/9)e^{−t} ).
Note that although both exponential terms decay in time, their sum increases initially since ẋ(0) = 3 > 0.
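A few lines of Python (not in the original notes) confirm that the solution of Example 1 satisfies the ode and the initial conditions, and that it does increase initially:

```python
import math

# x(t) = 9 e^{-2t} - 7 e^{-3t} should satisfy x'' + 5x' + 6x = 0
# with x(0) = 2 and x'(0) = 3.
def x(t):   return 9 * math.exp(-2 * t) - 7 * math.exp(-3 * t)
def dx(t):  return -18 * math.exp(-2 * t) + 21 * math.exp(-3 * t)
def ddx(t): return 36 * math.exp(-2 * t) - 63 * math.exp(-3 * t)

assert abs(x(0) - 2) < 1e-12 and abs(dx(0) - 3) < 1e-12
for t in (0.0, 0.5, 1.0, 2.0):
    assert abs(ddx(t) + 5 * dx(t) + 6 * x(t)) < 1e-9
# Both terms decay, yet the sum initially grows:
assert x(0.1) > x(0)
```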
Example 2: Solve ẍ − x = 0 with x(0) = x0 and ẋ(0) = u0.
Again our ansatz is x = e^{rt}, and we obtain the characteristic equation
r² − 1 = 0,
with solutions r = ±1. Therefore, the general solution for x is
x(t) = c1 e^{t} + c2 e^{−t},
and the derivative satisfies
ẋ(t) = c1 e^{t} − c2 e^{−t}.
Initial conditions are satisfied when
c1 + c2 = x0,
c1 − c2 = u0.
Adding and subtracting these equations, we determine
c1 = (1/2)(x0 + u0), c2 = (1/2)(x0 − u0),
so that after rearranging terms
x(t) = x0 ( (e^t + e^{−t})/2 ) + u0 ( (e^t − e^{−t})/2 ).
The terms in parentheses are the usual definitions of the hyperbolic cosine and sine functions; that is,
cosh t = (e^t + e^{−t})/2, sinh t = (e^t − e^{−t})/2.
Our solution can therefore be rewritten as
x(t) = x0 cosh t + u0 sinh t.
Note that the relationships between the trigonometric functions and the complex exponentials were given by
cos t = (e^{it} + e^{−it})/2, sin t = (e^{it} − e^{−it})/(2i),
so that
cosh t = cos it, sinh t = −i sin it.
Also note that the hyperbolic trigonometric functions satisfy the differential equations
(d/dt) sinh t = cosh t, (d/dt) cosh t = sinh t,
which, though similar to the differential equations satisfied by the more commonly used trigonometric functions, are absent a minus sign.
3.4.2 Complex conjugate, distinct roots
view tutorial
We now consider a characteristic equation (3.11) with b² − 4ac < 0, so the roots occur as complex conjugate pairs. With
λ = −b/(2a), μ = (1/2a)√(4ac − b²),
the two roots of the characteristic equation are r = λ + iμ and r = λ − iμ, and the two complex exponential solutions are
Z1(t) = e^{λt} e^{iμt}, Z2(t) = e^{λt} e^{−iμt}.
Applying the principle of superposition, any linear combination of Z1 and Z2 is also a solution, and we can form the two real solutions
X1(t) = (1/2)(Z1 + Z2) = e^{λt} cos μt,
and
X2(t) = (1/2i)(Z1 − Z2) = e^{λt} sin μt.
Having found the two real solutions X1(t) and X2(t), we can then apply the principle of superposition a second time to determine the general solution x(t):
x(t) = e^{λt}( A cos μt + B sin μt ). (3.12)
It is best to memorize this result. The real part of the roots of the characteristic equation goes into the exponential function; the imaginary part goes into the argument of cosine and sine.
Example 1: Solve ẍ + x = 0 with x(0) = x0 and ẋ(0) = u0.
view tutorial
The characteristic equation is
r² + 1 = 0,
with roots
r = ±i.
The general solution of the ode is therefore
x(t) = A cos t + B sin t.
The derivative is
ẋ(t) = −A sin t + B cos t.
Applying the initial conditions:
x(0) = A = x0, ẋ(0) = B = u0;
so that the solution is
x(t) = x0 cos t + u0 sin t.
Example 2: Solve ẍ + ẋ + x = 0 with x(0) = 1 and ẋ(0) = 0.
The characteristic equation is
r² + r + 1 = 0,
with roots
r = −1/2 ± i√3/2.
The general solution of the ode is therefore
x(t) = e^{−t/2}( A cos(√3 t/2) + B sin(√3 t/2) ).
The derivative is
ẋ(t) = −(1/2) e^{−t/2}( A cos(√3 t/2) + B sin(√3 t/2) )
+ (√3/2) e^{−t/2}( −A sin(√3 t/2) + B cos(√3 t/2) ).
Applying the initial conditions x(0) = 1 and ẋ(0) = 0:
A = 1,
−(1/2)A + (√3/2)B = 0;
or
A = 1, B = √3/3.
Therefore,
x(t) = e^{−t/2}( cos(√3 t/2) + (√3/3) sin(√3 t/2) ).
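As a quick check of Example 2 (Python, not part of the original notes), we can differentiate the claimed solution numerically with central differences and verify that the ode and initial conditions are satisfied:

```python
import math

# x(t) = e^{-t/2}(cos(√3 t/2) + (√3/3) sin(√3 t/2)) should satisfy
# x'' + x' + x = 0 with x(0) = 1 and x'(0) = 0.
s = math.sqrt(3)
def x(t):
    return math.exp(-t / 2) * (math.cos(s * t / 2) + (s / 3) * math.sin(s * t / 2))

h = 1e-5  # central differences for the derivatives
def dx(t):  return (x(t + h) - x(t - h)) / (2 * h)
def ddx(t): return (x(t + h) - 2 * x(t) + x(t - h)) / h**2

assert abs(x(0) - 1) < 1e-9
assert abs(dx(0)) < 1e-6
for t in (0.3, 1.0, 2.5):
    assert abs(ddx(t) + dx(t) + x(t)) < 1e-4
```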
3.4.3 Repeated roots
view tutorial
Finally, we consider the characteristic equation,
ar² + br + c = 0,
with b² − 4ac = 0. The degenerate root is then given by
r = −b/(2a),
yielding only a single solution of the ode:
x1(t) = exp( −bt/(2a) ). (3.13)
To satisfy two initial conditions, a second independent solution must be found. One way to determine this solution is to try the ansatz
x(t) = v(t) exp( −bt/(2a) ), (3.14)
where v(t) is an unknown function that satisfies the differential equation obtained by substituting (3.14) into (3.9). This standard technique is called the reduction of order method and enables one to find a second solution of a homogeneous linear differential equation if one solution is known. If the original differential equation is of order n, the differential equation for v = v(t) reduces to an order one lower, that is, n − 1.
Here, however, we will determine this missing second solution through a limiting process. We start with the solution obtained for complex roots of the characteristic equation, and then arrive at the solution obtained for degenerate roots by taking the limit μ → 0.
Now, the general solution for complex roots was given by (3.12), and to properly limit this solution as μ → 0 requires first satisfying the specific initial conditions x(0) = x0 and ẋ(0) = u0. Solving for A and B, the general solution given by (3.12) that satisfies these initial conditions is
x(t) = e^{λt}( x0 cos μt + ((u0 − λx0)/μ) sin μt ).
Taking the limit μ → 0, and using lim_{μ→0} (sin μt)/μ = t, we have
x(t) = e^{λt}( x0 + (u0 − λx0) t ).
The second solution is observed to be a constant, u0 − λx0, times t times the first solution, e^{λt}. Our general solution to the ode (3.9) when b² − 4ac = 0 can therefore be written in the form
x(t) = (c1 + c2 t) e^{rt},
where r is the repeated root of the characteristic equation.
Example: Solve ẍ + 2ẋ + x = 0 with x(0) = 1 and ẋ(0) = 0.
The characteristic equation is
r² + 2r + 1 = (r + 1)²
= 0,
which has a repeated root given by r = −1. Therefore, the general solution to the ode is
x(t) = c1 e^{−t} + c2 t e^{−t},
with derivative
ẋ(t) = −c1 e^{−t} + c2 e^{−t} − c2 t e^{−t}.
Applying the initial conditions, we have
c1 = 1,
−c1 + c2 = 0,
so that c1 = c2 = 1. Therefore, the solution is
x(t) = (1 + t) e^{−t}.
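The repeated-root solution can be verified by hand or with a few lines of Python (not part of the original notes):

```python
import math

# x(t) = (1 + t) e^{-t} should satisfy x'' + 2x' + x = 0
# with x(0) = 1 and x'(0) = 0.
def x(t):   return (1 + t) * math.exp(-t)
def dx(t):  return -t * math.exp(-t)        # d/dt[(1+t)e^{-t}]
def ddx(t): return (t - 1) * math.exp(-t)

assert x(0) == 1 and dx(0) == 0
for t in (0.0, 0.7, 3.0):
    assert abs(ddx(t) + 2 * dx(t) + x(t)) < 1e-12
```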
3.5 Inhomogeneous odes
We now consider the general second-order linear inhomogeneous ode
ẍ + p(t)ẋ + q(t)x = g(t), (3.15)
with initial conditions x(t0) = x0 and ẋ(t0) = u0. There is a three-step solution method when the inhomogeneous term g(t) ≠ 0. (i) Find the general solution of the homogeneous equation
ẍ + p(t)ẋ + q(t)x = 0. (3.16)
Let us denote the homogeneous solution by
x_h(t) = c1 X1(t) + c2 X2(t),
where X1 and X2 are linearly independent solutions of (3.16), and c1 and c2 are as yet undetermined constants. (ii) Find any particular solution x_p of the inhomogeneous equation (3.15). (iii) Write the general solution of (3.15) as the sum of the homogeneous and particular solutions,
x(t) = x_h(t) + x_p(t), (3.17)
and apply the initial conditions to determine the constants c1 and c2. Note that because of the linearity of (3.15),
(d²/dt²)(x_h + x_p) + p (d/dt)(x_h + x_p) + q (x_h + x_p)
= (ẍ_h + pẋ_h + qx_h) + (ẍ_p + pẋ_p + qx_p)
= 0 + g
= g,
so that (3.17) indeed solves (3.15), and the two free constants in x_h can be used to satisfy the initial conditions.
Example: Solve ẍ − 3ẋ − 4x = 3e^{2t} with x(0) = 1 and ẋ(0) = 0.
view tutorial
First, we solve the homogeneous equation. The characteristic equation is
r² − 3r − 4 = (r − 4)(r + 1)
= 0,
so that
x_h(t) = c1 e^{4t} + c2 e^{−t}.
Second, we find a particular solution of the inhomogeneous equation. The form of the particular solution is chosen such that the exponential will cancel out of both sides of the ode. The ansatz we choose is
x(t) = A e^{2t}, (3.18)
where A is a yet undetermined coefficient. Upon substitution into the ode and canceling the exponential, we obtain
4A − 6A − 4A = 3,
so that A = −1/2. Third, we write the general solution to the ode as the sum of the homogeneous and particular solutions,
x(t) = c1 e^{4t} + c2 e^{−t} − (1/2)e^{2t},
with derivative
ẋ(t) = 4c1 e^{4t} − c2 e^{−t} − e^{2t}.
Applying the initial conditions,
c1 + c2 − 1/2 = 1,
4c1 − c2 − 1 = 0;
or
c1 + c2 = 3/2,
4c1 − c2 = 1.
This system of linear equations can be solved for c1 by adding the equations to obtain c1 = 1/2, after which c2 = 1 can be determined from the first equation. Therefore, the solution for x(t) that satisfies both the ode and the initial conditions is given by
x(t) = (1/2)e^{4t} − (1/2)e^{2t} + e^{−t}
= (1/2)e^{4t}( 1 − e^{−2t} + 2e^{−5t} ),
where we have grouped the terms in the solution to better display the asymptotic behavior for large t.
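The three-step result can again be spot-checked directly (Python, not in the original notes):

```python
import math

# x(t) = (1/2)e^{4t} - (1/2)e^{2t} + e^{-t} should satisfy
# x'' - 3x' - 4x = 3 e^{2t} with x(0) = 1 and x'(0) = 0.
def x(t):
    return 0.5 * math.exp(4 * t) + math.exp(-t) - 0.5 * math.exp(2 * t)
def dx(t):
    return 2 * math.exp(4 * t) - math.exp(-t) - math.exp(2 * t)
def ddx(t):
    return 8 * math.exp(4 * t) + math.exp(-t) - 2 * math.exp(2 * t)

assert abs(x(0) - 1) < 1e-12 and abs(dx(0)) < 1e-12
for t in (0.0, 0.5, 1.0):
    assert abs(ddx(t) - 3 * dx(t) - 4 * x(t) - 3 * math.exp(2 * t)) < 1e-9
```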
We now find particular solutions for some relatively simple inhomogeneous
terms using this method of undetermined coefficients.
Example: Find a particular solution of ẍ − 3ẋ − 4x = 2 sin t.
view tutorial
We show two methods for finding a particular solution. The first, more direct, method tries the ansatz
x(t) = A cos t + B sin t,
where the argument of cosine and sine must agree with the argument of sine in the inhomogeneous term. The cosine term is required because the derivative of sine is cosine. Upon substitution into the differential equation, we obtain
(−A cos t − B sin t) − 3(−A sin t + B cos t) − 4(A cos t + B sin t) = 2 sin t,
or regrouping terms,
−(5A + 3B) cos t + (3A − 5B) sin t = 2 sin t.
This equation is valid for all t, and in particular for t = 0 and π/2, for which the sine and cosine functions vanish. For these two values of t, we find
5A + 3B = 0, 3A − 5B = 2;
and solving for A and B, we obtain
A = 3/17, B = −5/17.
The particular solution is therefore
x_p = (1/17)( 3 cos t − 5 sin t ).
The second solution method makes use of the relation e^{it} = cos t + i sin t to convert the sine inhomogeneous term to an exponential function. We introduce the complex function z(t) by letting
z(t) = x(t) + i y(t),
and rewrite the differential equation in complex form. We can rewrite the equation in one of two ways. On the one hand, if we use sin t = Re{−i e^{it}}, then the differential equation is written as
z̈ − 3ż − 4z = −2i e^{it}; (3.19)
and by equating the real and imaginary parts, this complex equation becomes the two real equations
ẍ − 3ẋ − 4x = 2 sin t, ÿ − 3ẏ − 4y = −2 cos t,
with x_p = Re{z_p}. On the other hand, if we use sin t = Im{e^{it}}, then the differential equation is written as
z̈ − 3ż − 4z = 2 e^{it}, (3.20)
which becomes the two real equations
ẍ − 3ẋ − 4x = 2 cos t, ÿ − 3ẏ − 4y = 2 sin t,
with x_p = Im{z_p}. We will proceed here by solving (3.20). With the ansatz z = C e^{it}, where C is a complex constant, substitution into (3.20) using i² = −1 yields
(−1 − 3i − 4)C = 2,
or solving for C:
C = −2/(5 + 3i)
= −2(5 − 3i)/( (5 + 3i)(5 − 3i) )
= −(5 − 3i)/17.
Therefore,
x_p = Im{z_p}
= Im{ −(1/17)(5 − 3i)(cos t + i sin t) }
= (1/17)( 3 cos t − 5 sin t ).
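The complex-exponential route is mechanical enough to check by machine (Python sketch, not in the original notes), using Python's built-in complex arithmetic:

```python
import cmath, math

# Solve z'' - 3z' - 4z = 2 e^{it} with the ansatz z = C e^{it},
# then take the imaginary part for the 2 sin t forcing.
C = 2 / (1j * 1j - 3 * 1j - 4)       # (i^2 - 3i - 4) C = 2
assert abs(C - (-(5 - 3j) / 17)) < 1e-12

def xp(t):  # particular solution = Im{C e^{it}}
    return (C * cmath.exp(1j * t)).imag

# Agrees with the real-ansatz result (3 cos t - 5 sin t)/17:
for t in (0.0, 0.9, 2.0):
    assert abs(xp(t) - (3 * math.cos(t) - 5 * math.sin(t)) / 17) < 1e-12
```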
Example: Find a particular solution of ẍ + ẋ − 2x = t².
view tutorial
The correct ansatz here is the polynomial
x(t) = At² + Bt + C.
Upon substitution into the ode, we have
2A + 2At + B − 2At² − 2Bt − 2C = t²,
or
−2At² + 2(A − B)t + (2A + B − 2C)t⁰ = t².
Equating powers of t,
−2A = 1,
2(A − B) = 0,
2A + B − 2C = 0;
and solving,
A = −1/2, B = −1/2, C = −3/4.
The particular solution is therefore
x_p(t) = −t²/2 − t/2 − 3/4.
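A direct check of the polynomial particular solution (Python, not in the original notes):

```python
# x_p = -t²/2 - t/2 - 3/4 should satisfy x'' + x' - 2x = t²
# for every t, since the coefficients were matched power by power.
def xp(t):   return -t * t / 2 - t / 2 - 0.75
def dxp(t):  return -t - 0.5
ddxp = -1.0  # constant second derivative

for t in (0.0, 1.0, -2.0, 5.0):
    assert abs(ddxp + dxp(t) - 2 * xp(t) - t * t) < 1e-12
```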
3.6 First-order linear inhomogeneous odes revisited
The first-order linear ode can be solved by use of an integrating factor. Here I show that odes having constant coefficients can be solved by our newly learned solution method.
Example: Solve ẋ + 2x = e^{−t} with x(0) = 3/4.
Rather than using an integrating factor, we follow the three-step approach: (i) find the general homogeneous solution; (ii) find a particular solution; (iii) add them and satisfy initial conditions. Accordingly, we try the ansatz x_h(t) = e^{rt} for the homogeneous ode ẋ + 2x = 0 and find
r + 2 = 0, or r = −2.
To find a particular solution, we try the ansatz x_p = Ae^{−t}, and upon substitution
−A + 2A = 1, or A = 1.
Therefore, the general solution to the ode is
x(t) = c e^{−2t} + e^{−t}.
The single initial condition determines the unknown constant c:
x(0) = 3/4 = c + 1,
so that c = −1/4. Hence,
x(t) = e^{−t} − (1/4)e^{−2t}
= e^{−t}( 1 − (1/4)e^{−t} ).
3.7 Resonance
view tutorial
Resonance occurs when the frequency of the inhomogeneous term matches the frequency of the homogeneous solution. To illustrate resonance in its simplest embodiment, we consider the second-order linear inhomogeneous ode
ẍ + ω0²x = f cos ωt, x(0) = x0, ẋ(0) = u0. (3.21)
Our main goal is to determine what happens to the solution in the limit ω → ω0.
The homogeneous equation has characteristic equation
r² + ω0² = 0,
so that r± = ±iω0. Therefore,
x_h(t) = c1 cos ω0t + c2 sin ω0t. (3.22)
To find a particular solution, we note the absence of a first-derivative term, and simply try
x(t) = A cos ωt.
Upon substitution into the ode, we obtain
−ω²A + ω0²A = f,
or
A = f/(ω0² − ω²).
Therefore,
x_p(t) = ( f/(ω0² − ω²) ) cos ωt.
Our general solution is thus
x(t) = c1 cos ω0t + c2 sin ω0t + ( f/(ω0² − ω²) ) cos ωt,
with derivative
ẋ(t) = ω0( c2 cos ω0t − c1 sin ω0t ) − ( fω/(ω0² − ω²) ) sin ωt.
Initial conditions are satisfied when
x0 = c1 + f/(ω0² − ω²),
u0 = c2 ω0,
so that
c1 = x0 − f/(ω0² − ω²), c2 = u0/ω0.
Therefore, the solution to the ode that satisfies the initial conditions is
x(t) = ( x0 − f/(ω0² − ω²) ) cos ω0t + (u0/ω0) sin ω0t + ( f/(ω0² − ω²) ) cos ωt
= x0 cos ω0t + (u0/ω0) sin ω0t + f( cos ωt − cos ω0t )/(ω0² − ω²),
where we have grouped together terms proportional to the forcing amplitude f.
Resonance occurs in the limit ω → ω0; that is, the frequency of the inhomogeneous term (the external force) matches the frequency of the homogeneous solution (the free oscillation). By L'Hospital's rule, the limit of the term proportional to f is found by differentiating with respect to ω:
lim_{ω→ω0} f( cos ωt − cos ω0t )/(ω0² − ω²) = lim_{ω→ω0} ( −f t sin ωt )/( −2ω )
= f t sin ω0t/(2ω0). (3.23)
At resonance, the term proportional to the amplitude f of the inhomogeneous term increases linearly with t, resulting in larger-and-larger amplitudes of oscillation for x(t).
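The L'Hospital limit can be checked numerically (Python, not in the original notes) by evaluating the forced term at frequencies approaching ω0:

```python
import math

# Near resonance, f (cos wt - cos w0 t)/(w0^2 - w^2) approaches
# f t sin(w0 t)/(2 w0). Compare the two as w -> w0.
f, w0, t = 1.0, 2.0, 3.0
limit = f * t * math.sin(w0 * t) / (2 * w0)
for dw in (1e-3, 1e-4, 1e-5):
    w = w0 + dw
    term = f * (math.cos(w * t) - math.cos(w0 * t)) / (w0**2 - w**2)
    assert abs(term - limit) < 10 * dw  # converges linearly in dw
```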
In general, if the inhomogeneous term in the differential equation is a solution of the corresponding homogeneous differential equation, then the correct ansatz for the particular solution is a constant times the inhomogeneous term times t. To illustrate, we solve directly
ẍ + ω0²x = f cos ω0t,
and find a particular solution. The particular solution is the real part of the particular solution of the complex ode
z̈ + ω0²z = f e^{iω0t},
and because the inhomogeneous term is a solution of the corresponding homogeneous equation, we take as our ansatz
z_p = A t e^{iω0t}.
We have
ż_p = A e^{iω0t}( 1 + iω0 t ),
z̈_p = A e^{iω0t}( 2iω0 − ω0² t );
and upon substitution into the complex ode
z̈_p + ω0² z_p = A e^{iω0t}( 2iω0 − ω0² t ) + ω0² A t e^{iω0t}
= 2iω0 A e^{iω0t}
= f e^{iω0t}.
Therefore,
A = f/(2iω0),
and
x_p = Re{ ( f/(2iω0) ) t e^{iω0t} }
= ( f t sin ω0t )/(2ω0),
the same result as the resonant term of (3.23).
3.8 Damped resonance
view tutorial
A more realistic study of resonance assumes an additional damping term. The forced, damped harmonic oscillator equation may be written as
mẍ + γẋ + kx = F cos ωt, (3.24)
where m > 0 is the oscillator's mass, γ > 0 is the damping coefficient, k > 0 is the spring constant, and F is the amplitude of the external force. The homogeneous equation has characteristic equation
mr² + γr + k = 0,
so that
r± = −γ/(2m) ± (1/2m)√(γ² − 4mk).
When γ² − 4mk < 0, the motion of the unforced oscillator is said to be underdamped; when γ² − 4mk > 0, overdamped; and when γ² − 4mk = 0, critically damped. For all three types of damping, the roots of the characteristic equation satisfy Re(r±) < 0. Therefore, both linearly independent homogeneous solutions decay exponentially to zero, and the long-time asymptotic solution of (3.24) reduces to the (non-decaying) particular solution. Since the initial conditions are satisfied by the free constants multiplying the (decaying) homogeneous solutions, the long-time asymptotic solution is independent of the initial conditions.
If we are only interested in the long-time asymptotic solution of (3.24), we can proceed directly to the determination of a particular solution. As before, we consider the complex ode
mz̈ + γż + kz = F e^{iωt},
with x_p = Re(z_p). With the ansatz z_p = A e^{iωt}, we have
−mω²A + iγωA + kA = F,
or
A = F/( (k − mω²) + iγω ).
Defining ω0² = k/m (the natural frequency of the undamped oscillator) and dividing the numerator and denominator by m,
A = (F/m)/( (ω0² − ω²) + iγω/m ). (3.25)
Both the amplitude and the phase of the oscillation follow from (3.25) if the complex denominator is written in polar form. A complex number z = x + iy can be written in polar form as z = re^{iφ}, where x = r cos φ, y = r sin φ, and r = √(x² + y²), tan φ = y/x. We therefore write
(ω0² − ω²) + iγω/m = r e^{iφ},
with
r = √( (ω0² − ω²)² + γ²ω²/m² ), tan φ = γω/( m(ω0² − ω²) ).
Using the polar form, A in (3.25) becomes
A = ( (F/m)/√( (ω0² − ω²)² + γ²ω²/m² ) ) e^{−iφ},
and x_p = Re( A e^{iωt} ) becomes
x_p = ( (F/m)/√( (ω0² − ω²)² + γ²ω²/m² ) ) Re( e^{i(ωt − φ)} ) (3.26)
= ( (F/m)/√( (ω0² − ω²)² + γ²ω²/m² ) ) cos(ωt − φ).
We note that the oscillation amplitude is largest when the denominator of (3.26) is smallest. Taking the derivative of (ω0² − ω²)² + γ²ω²/m² with respect to ω² and setting this to zero to determine the maximum yields
2( ω² − ω0² ) + γ²/m² = 0,
or
ω² = ω0² − γ²/(2m²).
We can interpret this result by saying that damping lowers the resonance frequency of the undamped oscillator.
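The shifted resonance frequency can be confirmed with a brute-force scan of the amplitude (Python, not in the original notes; the sample parameter values are ours):

```python
import math

# The response amplitude is proportional to
# 1/sqrt((w0^2 - w^2)^2 + g^2 w^2 / m^2);
# its peak should sit at w^2 = w0^2 - g^2/(2 m^2).
m, g, k = 1.0, 0.4, 4.0          # sample underdamped oscillator
w0sq = k / m
def amp(w):
    return 1.0 / math.sqrt((w0sq - w * w) ** 2 + (g * w / m) ** 2)

ws = [i * 1e-4 for i in range(1, 40000)]   # scan 0 < w < 4
w_star = max(ws, key=amp)                  # numerically located peak
predicted = math.sqrt(w0sq - g * g / (2 * m * m))
assert abs(w_star - predicted) < 1e-3
```

Note that the peak lies slightly below the undamped natural frequency ω0 = 2, as the formula predicts.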
Chapter 4
The Laplace transform
4.1 Definition and properties
view tutorial
The main idea is to Laplace transform the constant-coefficient differential equation for x(t) into a simpler algebraic equation for the Laplace-transformed function X(s), solve this algebraic equation, and then transform X(s) back into x(t). The correct definition of the Laplace transform and the properties that this transform satisfies makes this solution method possible.
An exponential ansatz is used in solving homogeneous constant-coefficient odes, and the exponential function correspondingly plays a key role in defining the Laplace transform. The Laplace transform of x(t), denoted by X(s) = L{x(t)}, is defined by the integral transform
X(s) = ∫₀^∞ e^{−st} x(t) dt. (4.1)
The improper integral given by (4.1) diverges if x(t) grows faster than e^{st} for large t. Accordingly, some restriction on the range of s may be required to guarantee convergence of (4.1), and we will assume without further elaboration that these restrictions are always satisfied.
The Laplace transform is a linear transformation. We have
L{ c1 x1(t) + c2 x2(t) } = ∫₀^∞ e^{−st}( c1 x1(t) + c2 x2(t) ) dt
= c1 ∫₀^∞ e^{−st} x1(t) dt + c2 ∫₀^∞ e^{−st} x2(t) dt
= c1 L{x1(t)} + c2 L{x2(t)}.
Table 4.1: Table of Laplace transforms.

     x(t) = L^{−1}{X(s)}          X(s) = L{x(t)}

 1.  e^{at} f(t)                  F(s − a)
 2.  1                            1/s
 3.  e^{at}                       1/(s − a)
 4.  t^n                          n!/s^{n+1}
 5.  t^n e^{at}                   n!/(s − a)^{n+1}
 6.  sin bt                       b/(s² + b²)
 7.  cos bt                       s/(s² + b²)
 8.  e^{at} sin bt                b/((s − a)² + b²)
 9.  e^{at} cos bt                (s − a)/((s − a)² + b²)
10.  t sin bt                     2bs/(s² + b²)²
11.  t cos bt                     (s² − b²)/(s² + b²)²
12.  u_c(t)                       e^{−cs}/s
13.  u_c(t) f(t − c)              e^{−cs} F(s)
14.  δ(t − c)                     e^{−cs}
15.  ẋ(t)                         sX(s) − x(0)
16.  ẍ(t)                         s²X(s) − sx(0) − ẋ(0)
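Rows of the table can be spot-checked by numerically integrating the defining integral (4.1). The sketch below (Python, not in the original notes) uses a simple truncated trapezoid rule:

```python
import math

# Approximate X(s) = \int_0^\infty e^{-st} x(t) dt by a trapezoid rule
# on [0, T]; the tail is negligible for decaying integrands.
def laplace(x, s, T=40.0, n=200000):
    h = T / n
    total = 0.5 * (x(0.0) + math.exp(-s * T) * x(T))
    for i in range(1, n):
        t = i * h
        total += math.exp(-s * t) * x(t)
    return total * h

s, a, b = 2.0, -1.0, 3.0
# line 3: L{e^{at}} = 1/(s - a)
assert abs(laplace(lambda t: math.exp(a * t), s) - 1 / (s - a)) < 1e-4
# line 6: L{sin bt} = b/(s² + b²)
assert abs(laplace(lambda t: math.sin(b * t), s) - b / (s * s + b * b)) < 1e-4
```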
The rows of Table 4.1 can be determined by direct computation. For line 1,
L{ e^{at} f(t) } = ∫₀^∞ e^{−st} e^{at} f(t) dt
= ∫₀^∞ e^{−(s−a)t} f(t) dt
= F(s − a).
We also compute directly the Laplace transform of 1 (line 2):
L{1} = ∫₀^∞ e^{−st} dt
= [ −(1/s) e^{−st} ]₀^∞
= 1/s.
Now, the Laplace transform of e^{at} (line 3) may be found using these two results:
L{e^{at}} = L{ e^{at} · 1 }
= 1/(s − a). (4.2)
The transform of t^n (line 4) can be found from the Taylor series of e^{at}. On the one hand,
L{e^{at}} = L{ Σ_{n=0}^∞ (at)^n/n! }
= Σ_{n=0}^∞ (a^n/n!) L{t^n}, (4.3)
where we have interchanged the Laplace transform with the infinite sum. Also, with s > a,
1/(s − a) = (1/s) · 1/(1 − a/s)
= (1/s) Σ_{n=0}^∞ (a/s)^n
= Σ_{n=0}^∞ a^n/s^{n+1}. (4.4)
Using (4.2), and equating the coefficients of powers of a in (4.3) and (4.4), results in line 4:
L{t^n} = n!/s^{n+1}.
The Laplace transform of t^n e^{at} (line 5) can be found from line 1 and line 4:
L{ t^n e^{at} } = n!/(s − a)^{n+1}.
The Laplace transform of sin bt (line 6) may be found from the Laplace transform of e^{at} (line 3) using a = ib:
L{sin bt} = Im{ L{e^{ibt}} }
= Im{ 1/(s − ib) }
= Im{ (s + ib)/(s² + b²) }
= b/(s² + b²).
Similarly, the Laplace transform of cos bt (line 7) is
L{cos bt} = Re{ L{e^{ibt}} }
= s/(s² + b²).
The transform of e^{at} sin bt (line 8) can be found from the transform of sin bt (line 6) and line 1:
L{ e^{at} sin bt } = b/( (s − a)² + b² );
and similarly for the transform of e^{at} cos bt (line 9):
L{ e^{at} cos bt } = (s − a)/( (s − a)² + b² ).
The Laplace transform of t sin bt (line 10) can be found from the Laplace transform of t e^{at} (line 5 with n = 1) using a = ib:
L{ t sin bt } = Im{ L{t e^{ibt}} }
= Im{ 1/(s − ib)² }
= Im{ (s + ib)²/(s² + b²)² }
= 2bs/(s² + b²)².
Similarly, the Laplace transform of t cos bt (line 11) is
L{ t cos bt } = Re{ L{t e^{ibt}} }
= Re{ (s + ib)²/(s² + b²)² }
= (s² − b²)/(s² + b²)².
We now transform the constant-coefficient, second-order, linear inhomogeneous ode for x = x(t),
aẍ + bẋ + cx = g(t),
making use of the linearity of the Laplace transform:
a L{ẍ} + b L{ẋ} + c L{x} = L{g}.
To determine the Laplace transform of ẋ(t) (line 15) in terms of the Laplace transform of x(t) and the initial conditions x(0) and ẋ(0), we define X(s) = L{x(t)}, and integrate
∫₀^∞ e^{−st} ẋ dt
by parts. We let
u = e^{−st}, dv = ẋ dt,
du = −s e^{−st} dt, v = x.
Therefore,
∫₀^∞ e^{−st} ẋ dt = [ x e^{−st} ]₀^∞ + s ∫₀^∞ e^{−st} x dt
= sX(s) − x(0),
where assumed convergence of the Laplace transform requires
lim_{t→∞} x(t) e^{−st} = 0.
Similarly, the Laplace transform of ẍ(t) (line 16) is determined by integrating
∫₀^∞ e^{−st} ẍ dt
by parts and using the just-derived result for the first derivative. We let
u = e^{−st}, dv = ẍ dt,
du = −s e^{−st} dt, v = ẋ,
so that
∫₀^∞ e^{−st} ẍ dt = [ ẋ e^{−st} ]₀^∞ + s ∫₀^∞ e^{−st} ẋ dt
= −ẋ(0) + s( sX(s) − x(0) )
= s²X(s) − sx(0) − ẋ(0),
where here we also assume lim_{t→∞} ẋ(t) e^{−st} = 0.
4.2 Solution of initial value problems
We begin with a simple homogeneous ode and show that the Laplace transform method yields an identical result to our previously learned method. We then apply the Laplace transform method to solve an inhomogeneous equation.
Example: Solve ẍ − ẋ − 2x = 0 with x(0) = 1 and ẋ(0) = 0 by two different methods.
view tutorial
The characteristic equation of the ode is determined from the ansatz x = e^{rt} and is
r² − r − 2 = (r − 2)(r + 1) = 0.
The general solution of the ode is therefore
x(t) = c1 e^{2t} + c2 e^{−t}.
To satisfy the initial conditions, we must have 1 = c1 + c2 and 0 = 2c1 − c2, requiring c1 = 1/3 and c2 = 2/3. Therefore, the solution to the ode that satisfies the initial conditions is given by
x(t) = (1/3)e^{2t} + (2/3)e^{−t}. (4.5)
We now solve this example using the Laplace transform. Taking the Laplace transform of both sides of the ode, using the linearity of the transform, and applying our result for the transform of the first and second derivatives, we find
[ s²X(s) − sx(0) − ẋ(0) ] − [ sX(s) − x(0) ] − 2X(s) = 0,
or
X(s) = ( (s − 1)x(0) + ẋ(0) )/( s² − s − 2 ).
Note that the denominator of the right-hand side is just the quadratic from the characteristic equation of the homogeneous ode, and that this factor arises from the derivatives of the exponential term in the Laplace transform integral.
Applying the initial conditions, we find
X(s) = (s − 1)/( (s − 2)(s + 1) ). (4.6)
To invert (4.6) using Table 4.1, we perform a partial fraction expansion:
(s − 1)/( (s − 2)(s + 1) ) = a/(s − 2) + b/(s + 1). (4.7)
The cover-up method can be used to solve for a and b. We multiply both sides of (4.7) by s − 2 and put s = 2 to isolate a:
a = [ (s − 1)/(s + 1) ]_{s=2}
= 1/3.
Similarly, we multiply both sides of (4.7) by s + 1 and put s = −1 to isolate b:
b = [ (s − 1)/(s − 2) ]_{s=−1}
= 2/3.
Therefore,
X(s) = (1/3) · 1/(s − 2) + (2/3) · 1/(s + 1),
and line 3 of Table 4.1 gives us the inverse transforms of each term separately to yield
x(t) = (1/3)e^{2t} + (2/3)e^{−t},
identical to (4.5).
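The cover-up computation can be reproduced exactly with rational arithmetic (Python sketch, not in the original notes):

```python
from fractions import Fraction

# Cover-up method for X(s) = (s-1)/((s-2)(s+1)) = a/(s-2) + b/(s+1):
# evaluate (s-1)/(s+1) at s = 2 and (s-1)/(s-2) at s = -1.
a = Fraction(2 - 1, 2 + 1)      # = 1/3
b = Fraction(-1 - 1, -1 - 2)    # = 2/3
assert (a, b) == (Fraction(1, 3), Fraction(2, 3))

# Confirm the expansion is an identity at a few rational test points.
for s in (Fraction(5), Fraction(-3), Fraction(1, 2)):
    lhs = Fraction(s - 1, (s - 2) * (s + 1))
    rhs = a / (s - 2) + b / (s + 1)
    assert lhs == rhs
```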
Example: Solve ẍ + x = sin 2t with x(0) = 2 and ẋ(0) = 1 by Laplace transform methods.
Taking the Laplace transform of both sides of the ode, we find
s²X(s) − sx(0) − ẋ(0) + X(s) = L{sin 2t}
= 2/(s² + 4),
where the Laplace transform of sin 2t made use of line 6 of Table 4.1. Substituting for x(0) and ẋ(0) and solving for X(s), we obtain
X(s) = (2s + 1)/(s² + 1) + 2/( (s² + 1)(s² + 4) ).
To determine the inverse Laplace transform from Table 4.1, we perform a partial fraction expansion of the second term:
2/( (s² + 1)(s² + 4) ) = (As + B)/(s² + 1) + (Cs + D)/(s² + 4). (4.8)
Multiplying out and equating powers of s results in A = C = 0, B = 2/3 and D = −2/3. Therefore,
X(s) = (2s + 1)/(s² + 1) + (2/3)/(s² + 1) − (2/3)/(s² + 4)
= 2s/(s² + 1) + (5/3) · 1/(s² + 1) − (1/3) · 2/(s² + 4).
From lines 6 and 7 of Table 4.1, we obtain the solution by taking inverse Laplace transforms of the three terms separately, where b = 1 in the first two terms, and b = 2 in the third term:
x(t) = 2 cos t + (5/3) sin t − (1/3) sin 2t.
4.3 Heaviside and Dirac delta functions
The Laplace transform technique becomes truly useful when solving odes with discontinuous or impulsive inhomogeneous terms, these terms commonly modeled using Heaviside or Dirac delta functions. We will discuss these functions in turn, as well as their Laplace transforms.
4.3.1 Heaviside function
view tutorial
The Heaviside or unit step function, denoted here by u_c(t), is zero for t < c and is one for t ≥ c; that is,
u_c(t) = { 0, t < c;
           1, t ≥ c. (4.9)
The precise value of u_c(t) at the single point t = c shouldn't matter.
The Heaviside function can be viewed as the step-up function. The step-down function (one for t < c and zero for t ≥ c) is defined as
1 − u_c(t) = { 1, t < c;
               0, t ≥ c. (4.10)
The step-up, step-down function (zero for t < b, one for b ≤ t < c, and zero for t ≥ c) is defined as
u_b(t) − u_c(t) = { 0, t < b;
                    1, b ≤ t < c;
                    0, t ≥ c. (4.11)
The Laplace transform of the Heaviside function (line 12) is determined by integration:
L{u_c(t)} = ∫₀^∞ e^{−st} u_c(t) dt
= ∫_c^∞ e^{−st} dt
= e^{−cs}/s.
The Heaviside function can also be used to represent the translation of a function f(t) a distance c in the positive t direction. We have
u_c(t) f(t − c) = { 0, t < c;
                    f(t − c), t ≥ c.
Figure 4.1: A linearly increasing function which turns into a sinusoidal function.
The Laplace transform of the translated function is
L{ u_c(t) f(t − c) } = ∫₀^∞ e^{−st} u_c(t) f(t − c) dt
= ∫_c^∞ e^{−st} f(t − c) dt
= ∫₀^∞ e^{−s(t′ + c)} f(t′) dt′
= e^{−cs} ∫₀^∞ e^{−st′} f(t′) dt′
= e^{−cs} F(s),
where we have changed variables to t′ = t − c. The translation of f(t) a distance c in the positive t direction corresponds to the multiplication of F(s) by the exponential e^{−cs}. This result is shown in line 13 of Table 4.1.
Piecewise-defined inhomogeneous terms can be modeled using Heaviside functions. For example, consider the general case of a piecewise function defined on two intervals:
f(t) = { f1(t), if t < t*;
         f2(t), if t ≥ t*.
Using the Heaviside function u_{t*}, the function f(t) can be written in a single line as
f(t) = f1(t) + ( f2(t) − f1(t) ) u_{t*}(t).
This example can be generalized to piecewise functions defined on multiple intervals.
As a concrete example, suppose the inhomogeneous term is represented by a linearly increasing function, which then turns into a sinusoidal function for t ≥ t*, as shown in Fig. 4.1. Such a term can be written in a single line as, say,
f(t) = t + ( sin ω(t − t*) − (t − t*) ) u_{t*}(t),
and since the term multiplying u_{t*}(t) is a function of t − t*, lines 4, 6 and 13 of Table 4.1 give its Laplace transform directly:
L{f(t)} = 1/s² + e^{−t*s}( ω/(s² + ω²) − 1/s² ).

4.3.2 Dirac delta function
view tutorial
The Dirac delta function, denoted as δ(t), is defined by requiring that for any function f(t),
∫_{−∞}^{∞} f(t) δ(t) dt = f(0).
The usual view of the shifted Dirac delta function δ(t − c) is that it is zero everywhere except at t = c, where it is infinite, and the integral over the Dirac delta function is one. The Dirac delta function is technically not a function, but is what mathematicians call a distribution. Nevertheless, in most cases of practical interest, it can be treated like a function, where physical results are obtained following a final integration.
There are many ways to represent the Dirac delta function as a limit of a well-defined function. For our purposes, the most useful representation makes use of the step-up, step-down function of (4.11):
δ(t − c) = lim_{ε→0} (1/2ε)( u_{c−ε}(t) − u_{c+ε}(t) ).
Before taking the limit, the well-defined step-up, step-down function is zero except over a small interval of width 2ε centered at t = c, over which it takes the large value 1/2ε. The integral of this function is one, independent of the value of ε.
The Laplace transform of the Dirac delta function is easily found by integration using the definition of the delta function:
L{δ(t − c)} = ∫₀^∞ e^{−st} δ(t − c) dt
= e^{−cs}.
This result is shown in line 14 of Table 4.1.
4.4 Discontinuous or impulsive inhomogeneous terms
We now solve some more challenging odes with discontinuous or impulsive inhomogeneous terms.
Example: Solve 2ẍ + ẋ + 2x = u5(t) − u20(t) with x(0) = ẋ(0) = 0.
The inhomogeneous term here is a step-up, step-down function that is unity over the interval (5, 20) and zero elsewhere. Taking the Laplace transform of the ode using Table 4.1,
2( s²X(s) − sx(0) − ẋ(0) ) + sX(s) − x(0) + 2X(s) = ( e^{−5s} − e^{−20s} )/s.
Using the initial values and solving for X(s), we find
X(s) = ( e^{−5s} − e^{−20s} )/( s(2s² + s + 2) ).
To determine the solution for x(t) we now need to find the inverse Laplace transform. The exponential functions can be dealt with using line 13 of Table 4.1. We write
X(s) = ( e^{−5s} − e^{−20s} ) H(s),
where
H(s) = 1/( s(2s² + s + 2) ).
Then line 13 of Table 4.1 gives us
x(t) = u5(t) h(t − 5) − u20(t) h(t − 20), (4.12)
where h(t) = L^{−1}{H(s)}.
To determine h(t) we need the partial fraction expansion of H(s). Since the discriminant of 2s² + s + 2 is negative, we have
1/( s(2s² + s + 2) ) = a/s + (bs + c)/(2s² + s + 2),
yielding the equation
1 = a(2s² + s + 2) + (bs + c)s;
or after equating powers of s,
2a + b = 0, a + c = 0, 2a = 1,
yielding a = 1/2, b = −1, and c = −1/2. Therefore,
H(s) = 1/(2s) − (s + 1/2)/(2s² + s + 2)
= (1/2)( 1/s − (s + 1/2)/(s² + s/2 + 1) ).
Inspecting Table 4.1, the first term can be transformed using line 2, and the second term can be transformed using lines 8 and 9, provided we complete the square of the denominator and then massage the numerator. That is, first we complete the square:
s² + s/2 + 1 = ( s + 1/4 )² + 15/16;
and next we write
(s + 1/2)/(s² + s/2 + 1) = ( (s + 1/4) + (1/√15)(√15/4) )/( ( s + 1/4 )² + 15/16 ).
Therefore, the function H(s) can be written as
H(s) = (1/2)( 1/s − (s + 1/4)/( (s + 1/4)² + 15/16 ) − (1/√15)·(√15/4)/( (s + 1/4)² + 15/16 ) ).
The first term is transformed using line 2, the second term using line 9 and the third term using line 8. We finally obtain
h(t) = (1/2)( 1 − e^{−t/4}( cos(√15 t/4) + (1/√15) sin(√15 t/4) ) ), (4.13)
which when combined with (4.12) yields the rather complicated solution for x(t).
We briefly comment that it is also possible to solve this example without using the Laplace transform. The key idea is that both x and ẋ are continuous functions of t. Clearly from the form of the inhomogeneous term and the initial conditions, x(t) = 0 for 0 ≤ t ≤ 5. We then solve the ode between 5 ≤ t ≤ 20 with the inhomogeneous term equal to unity and initial conditions x(5) = ẋ(5) = 0. This requires first finding the general homogeneous solution, next finding a particular solution, and then adding the homogeneous and particular solutions and solving for the two unknown constants. To simplify the algebra, note that the best ansatz to use to find the homogeneous solution is x(t) = e^{r(t−5)}, and not x(t) = e^{rt}. Finally, we solve the homogeneous ode for t ≥ 20 using as boundary conditions the previously determined values x(20) and ẋ(20), where we have made use of the continuity of x and ẋ. Here, the best ansatz to use is x(t) = e^{r(t−20)}. The student may benefit by trying this as an exercise and attempting to obtain a final solution that agrees with the form given by (4.12) and (4.13).
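Rather than working the exercise by hand, one can also confirm (4.12) and (4.13) numerically. The sketch below (Python, not in the original notes) integrates the ode with a hand-rolled fourth-order Runge-Kutta scheme and compares against the Laplace-transform solution at t = 30:

```python
import math

# Integrate 2x'' + x' + 2x = u5(t) - u20(t), x(0) = x'(0) = 0, with RK4,
# and compare to x(t) = u5(t) h(t-5) - u20(t) h(t-20), h from (4.13).
r15 = math.sqrt(15)
def h(t):
    return 0.5 * (1 - math.exp(-t / 4) * (math.cos(r15 * t / 4)
                                          + math.sin(r15 * t / 4) / r15))
def exact(t):
    out = 0.0
    if t >= 5:  out += h(t - 5)
    if t >= 20: out -= h(t - 20)
    return out

def rhs(t, x, v):   # x'' = (u5 - u20 - x' - 2x)/2
    force = (1.0 if t >= 5 else 0.0) - (1.0 if t >= 20 else 0.0)
    return (force - v - 2 * x) / 2

x = v = t = 0.0
dt = 0.001
while t < 30.0 - 1e-9:
    k1x, k1v = v, rhs(t, x, v)
    k2x, k2v = v + dt*k1v/2, rhs(t + dt/2, x + dt*k1x/2, v + dt*k1v/2)
    k3x, k3v = v + dt*k2v/2, rhs(t + dt/2, x + dt*k2x/2, v + dt*k2v/2)
    k4x, k4v = v + dt*k3v, rhs(t + dt, x + dt*k3x, v + dt*k3v)
    x += dt * (k1x + 2*k2x + 2*k3x + k4x) / 6
    v += dt * (k1v + 2*k2v + 2*k3v + k4v) / 6
    t += dt
assert abs(x - exact(30.0)) < 1e-3
```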
Example: Solve 2ẍ + ẋ + 2x = δ(t − 5) with x(0) = ẋ(0) = 0.
Here the inhomogeneous term is an impulse at time t = 5. Taking the Laplace transform of the ode using Table 4.1, and applying the initial conditions,
(2s² + s + 2) X(s) = e^{−5s},
so that
X(s) = (1/2) · e^{−5s}/( s² + s/2 + 1 )
= (1/2) · e^{−5s}/( (s + 1/4)² + 15/16 )
= (1/2) √(16/15) · ( √(15/16) e^{−5s} )/( (s + 1/4)² + 15/16 ).
The inverse Laplace transform may now be computed using lines 8 and 13 of Table 4.1:
x(t) = (2/√15) u5(t) e^{−(t−5)/4} sin( √15 (t − 5)/4 ). (4.14)
It is interesting to solve this example without using a Laplace transform. Clearly, x(t) = 0 up to the time of impulse at t = 5. Furthermore, after the impulse the ode is homogeneous and can be solved with standard methods. The only difficulty is determining the initial conditions of the homogeneous ode at t = 5⁺.
When the inhomogeneous term is proportional to a delta-function, the solution x(t) is continuous across the delta-function singularity, but the derivative of the solution ẋ(t) is discontinuous. If we integrate the second-order ode across the singularity at t = 5 and consider the limit ε → 0, only the second derivative term of the left-hand side survives, and
2 ∫_{5−ε}^{5+ε} ẍ dt = ∫_{5−ε}^{5+ε} δ(t − 5) dt
= 1.
And as ε → 0, we have ẋ(5⁺) − ẋ(5⁻) = 1/2. Since ẋ(5⁻) = 0, the appropriate initial conditions immediately after the impulse force are x(5⁺) = 0 and ẋ(5⁺) = 1/2. This result can be confirmed using (4.14).
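A direct check of the jump condition against (4.14) (Python, not in the original notes):

```python
import math

# From (4.14), x(t) = (2/√15) e^{-(t-5)/4} sin(√15 (t-5)/4) for t > 5.
# Check the post-impulse conditions x(5+) = 0 and x'(5+) = 1/2.
r15 = math.sqrt(15)
def x(t):
    return (2 / r15) * math.exp(-(t - 5) / 4) * math.sin(r15 * (t - 5) / 4)

eps = 1e-7
assert abs(x(5 + eps)) < 1e-6          # x is continuous at t = 5
slope = (x(5 + eps) - x(5)) / eps      # one-sided derivative at 5+
assert abs(slope - 0.5) < 1e-6
```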
Chapter 5
Series solutions of second-order linear homogeneous differential equations
Reference: Boyce and DiPrima, Chapter 5
We consider the second-order linear homogeneous differential equation for y = y(x):
P(x)y″ + Q(x)y′ + R(x)y = 0, (5.1)
where P(x), Q(x) and R(x) are polynomials or convergent power series (around x = x0), with no common polynomial factors (that could be divided out). The value x = x0 is called an ordinary point of (5.1) if P(x0) ≠ 0, and is called a singular point if P(x0) = 0. Singular points will later be further classified as regular singular points and irregular singular points. Our goal is to find two independent solutions of (5.1), valid in a neighborhood about x = x0.
5.1 Ordinary points
If x0 is an ordinary point of (5.1), then it is possible to determine two power series solutions for y = y(x) centered at x = x0. We illustrate the method of solution by solving two examples, both with x0 = 0.
Example: Find the general solution of y″ + y = 0.
view tutorial
To find a power series solution about the point x0 = 0, we write
y(x) = Σ_{n=0}^∞ a_n x^n;
and upon differentiating term-by-term,
y′(x) = Σ_{n=1}^∞ n a_n x^{n−1},
and
y″(x) = Σ_{n=2}^∞ n(n−1) a_n x^{n−2}.
Substituting the power series for y and its derivatives into the differential equation to be solved, we obtain
Σ_{n=2}^∞ n(n−1) a_n x^{n−2} + Σ_{n=0}^∞ a_n x^n = 0. (5.2)
The power-series solution method requires combining the two sums on the left-hand side of (5.2) into a single power series in x. To shift the exponent of x^{n−2} in the first sum upward by two to obtain x^n, we need to shift the summation index downward by two; that is,
Σ_{n=2}^∞ n(n−1) a_n x^{n−2} = Σ_{n=0}^∞ (n+2)(n+1) a_{n+2} x^n.
We can then combine the two sums in (5.2) to obtain
Σ_{n=0}^∞ ( (n+2)(n+1) a_{n+2} + a_n ) x^n = 0. (5.3)
For (5.3) to be satisfied, the coefficient of each power of x must vanish separately. (This can be proved by setting x = 0 after successive differentiation.) We therefore obtain the recurrence relation
a_{n+2} = −a_n/( (n+2)(n+1) ), n = 0, 1, 2, . . . .
We observe that even and odd coefficients decouple. We thus obtain two independent sequences starting with first term a0 or a1. Developing these sequences, we have for the sequence beginning with a0:
a2 = −(1/2) a0,
a4 = −(1/(4·3)) a2 = (1/(4·3·2)) a0,
a6 = −(1/(6·5)) a4 = −(1/6!) a0;
and the general coefficient in this sequence for n = 0, 1, 2, . . . is
a_{2n} = ( (−1)^n/(2n)! ) a0.
Also, for the sequence beginning with a1:
a3 = −(1/(3·2)) a1,
a5 = −(1/(5·4)) a3 = (1/(5·4·3·2)) a1,
a7 = −(1/(7·6)) a5 = −(1/7!) a1;
and the general coefficient in this sequence for n = 0, 1, 2, . . . is
a_{2n+1} = ( (−1)^n/(2n+1)! ) a1.
Using the principle of superposition, the general solution is therefore
y(x) = a0 Σ_{n=0}^∞ ( (−1)^n x^{2n}/(2n)! ) + a1 Σ_{n=0}^∞ ( (−1)^n x^{2n+1}/(2n+1)! )
= a0 ( 1 − x²/2! + x⁴/4! − · · · ) + a1 ( x − x³/3! + x⁵/5! − · · · )
= a0 cos x + a1 sin x,
as expected.
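The recurrence can be iterated by machine and the resulting partial sums compared with cos x + sin x (Python sketch, not in the original notes):

```python
import math

# Generate coefficients from a_{n+2} = -a_n/((n+2)(n+1)) and compare
# the truncated power series with cos x + sin x.
N = 30
a = [0.0] * (N + 2)
a[0], a[1] = 1.0, 1.0          # a0 = a1 = 1 picks y = cos x + sin x
for n in range(N):
    a[n + 2] = -a[n] / ((n + 2) * (n + 1))

x = 0.8
y = sum(a[n] * x**n for n in range(N + 2))
assert abs(y - (math.cos(x) + math.sin(x))) < 1e-12
```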
In our next example, we will solve Airy's equation. This differential equation arises in the study of optics, fluid mechanics, and quantum mechanics.
Example: Find the general solution of y″ − xy = 0.
view tutorial
With
y(x) = Σ_{n=0}^∞ a_n x^n,
the differential equation becomes
Σ_{n=2}^∞ n(n−1) a_n x^{n−2} − Σ_{n=0}^∞ a_n x^{n+1} = 0. (5.4)
We shift the first sum to x^{n+1} by shifting the exponent up by three, i.e.,
Σ_{n=2}^∞ n(n−1) a_n x^{n−2} = Σ_{n=−1}^∞ (n+3)(n+2) a_{n+3} x^{n+1}.
When combining the two sums in (5.4), we separate out the extra n = −1 term in the first sum given by 2a2. Therefore, (5.4) becomes
2a2 + Σ_{n=0}^∞ ( (n+3)(n+2) a_{n+3} − a_n ) x^{n+1} = 0. (5.5)
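From (5.5), the coefficient of each power of x must vanish, so a2 = 0 and a_{n+3} = a_n/((n+3)(n+2)). This is quick to check numerically (Python sketch, not in the original notes; the starting values a0, a1 are arbitrary):

```python
# Build the series from a2 = 0 and a_{n+3} = a_n/((n+3)(n+2)), then
# check that the truncation satisfies y'' - x y = 0 at a test point.
N = 40
a = [0.0] * (N + 3)
a[0], a[1] = 1.0, 0.5           # arbitrary free coefficients; a2 = 0
for n in range(N):
    a[n + 3] = a[n] / ((n + 3) * (n + 2))

x = 0.9
y   = sum(a[n] * x**n for n in range(N + 3))
ypp = sum(n * (n - 1) * a[n] * x**(n - 2) for n in range(2, N + 3))
assert abs(ypp - x * y) < 1e-12
```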
From (5.5), a2 = 0 and the remaining coefficients satisfy the recursion a_{n+3} = a_n/((n+3)(n+2)), which couples every third coefficient; the two independent solutions y0(x) and y1(x) develop from the free coefficients a0 and a1.
Figure 5.1: Airy's functions y0(x) and y1(x).
5.2 Regular singular points: Cauchy-Euler equations
view tutorial
The value x = x0 is called a regular singular point of the ode
(x − x0)² y″ + p(x)(x − x0) y′ + q(x) y = 0, (5.7)
if p(x) and q(x) have convergent Taylor series about x = x0. Here, we consider the simplest case of constant coefficients p(x) = α and q(x) = β, with x0 = 0, so that (5.7) becomes the Cauchy-Euler equation
x² y″ + α x y′ + β y = 0. (5.8)
With the ansatz y = x^r, (5.8) yields the characteristic equation
r(r − 1) + αr + β = 0, (5.9)
which can be solved using the quadratic formula. Three cases immediately appear: (i) real distinct roots, (ii) complex conjugate roots, (iii) repeated roots. Students may recall being in a similar situation when solving the second-order linear homogeneous ode with constant coefficients. Indeed, it is possible to directly transform the Cauchy-Euler equation into an equation with constant coefficients so that our previous results can be used.
The idea is to change variables so that the power law ansatz y = x^r becomes an exponential ansatz. For x > 0, if we let x = e^ξ and y(x) = η(ξ), then the ansatz y(x) = x^r becomes the ansatz η(ξ) = e^{rξ}, appropriate if η(ξ) satisfies a constant coefficient ode. If x < 0, then the appropriate transformation is x = −e^ξ, since e^ξ > 0. We need only consider x > 0 here and subsequently generalize our result by replacing x everywhere by its absolute value.
We thus transform the differential equation (5.8) for y = y(x) into a differential equation for η = η(ξ), using x = e^ξ, or equivalently, ξ = ln x. By the chain rule,
dy/dx = (dη/dξ)(dξ/dx) = (1/x)(dη/dξ),
so that symbolically,
x d/dx = d/dξ.
The second derivative transforms as
x² d²/dx² = d²/dξ² − d/dξ.
Substitution into (5.8) therefore results in the constant-coefficient ode
η″ + (α − 1)η′ + βη = 0,
whose characteristic equation, r² + (α − 1)r + β = 0, is the same as (5.9).
5.2.1 Real, distinct roots
If the characteristic equation (5.9) has two real roots r1 ≠ r2, the general solution is the superposition
y(x) = c1 |x|^{r1} + c2 |x|^{r2}.

5.2.2 Complex conjugate roots
If the roots of (5.9) occur as the complex conjugate pair r = λ ± iμ, then with ξ = ln |x| the constant-coefficient theory gives η(ξ) = e^{λξ}( A cos μξ + B sin μξ ), yielding
y(x) = |x|^λ ( A cos(μ ln |x|) + B sin(μ ln |x|) ).

5.2.3 Repeated roots
If (α − 1)² − 4β = 0, there is one real root r of (5.9). The general solution for η is
η(ξ) = (c1 + c2 ξ) e^{rξ},
yielding
y(x) = |x|^r ( c1 + c2 ln |x| ).
We now give examples illustrating these three cases.
Example: Solve 2x²y″ + 3xy′ − y = 0 for 0 ≤ x ≤ 1 with two-point boundary condition y(0) = 0 and y(1) = 1.
Since x > 0, we try y = x^r and obtain the characteristic equation
0 = 2r(r − 1) + 3r − 1
= 2r² + r − 1
= (2r − 1)(r + 1).
Since the characteristic equation has two real roots, r = 1/2 and r = −1, the general solution is given by
y(x) = c1 x^{1/2} + c2 x^{−1}.
We now encounter for the first time two-point boundary conditions, which can be used to determine the coefficients c1 and c2. Since y(0) = 0, we must have c2 = 0. Applying the remaining condition y(1) = 1, we obtain the unique solution
y(x) = √x.
Note that x = 0 is called a singular point of the ode since the general solution is singular at x = 0 when c2 ≠ 0. Our boundary condition imposes that y(x) is finite at x = 0, removing the singular solution. Nevertheless, y′ remains singular at x = 0. Indeed, this is why we imposed a two-point boundary condition rather than specifying the value of y′(0) (which is infinite).
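A quick check that y = √x solves the ode (Python, not in the original notes):

```python
import math

# y(x) = sqrt(x) should satisfy 2x² y'' + 3x y' - y = 0 for x > 0.
def y(x):   return math.sqrt(x)
def yp(x):  return 0.5 * x ** -0.5
def ypp(x): return -0.25 * x ** -1.5

for x in (0.1, 0.5, 1.0):
    assert abs(2 * x * x * ypp(x) + 3 * x * yp(x) - y(x)) < 1e-12
```

Note also that yp(x) blows up as x → 0, as remarked above.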
Example: Find the general solution of x²y″ + xy′ + π²y = 0 with two-point boundary condition y(1) = 1 and y(√e) = 1.
With the ansatz y = x^r, we obtain
0 = r(r − 1) + r + π²
= r² + π²,
so that r = ±iπ. Therefore, with ξ = ln x, we have η(ξ) = A cos πξ + B sin πξ, and the general solution for y(x) is
y(x) = A cos(π ln x) + B sin(π ln x).
The first boundary condition y(1) = 1 yields A = 1. The second boundary condition y(√e) = 1 yields B = 1.
Example: Find the general solution of x²y″ + 5xy′ + 4y = 0 with two-point boundary condition y(1) = 0 and y(e) = 1.
With the ansatz y = x^r, we obtain
0 = r(r − 1) + 5r + 4
= r² + 4r + 4
= (r + 2)²,
so that there is a repeated root r = −2. With ξ = ln x, we have η(ξ) = (c1 + c2 ξ)e^{−2ξ}, so that the general solution is
y(x) = ( c1 + c2 ln x )/x².
The first boundary condition y(1) = 0 yields c1 = 0, and the second boundary condition y(e) = 1 yields c2 = e². Therefore,
y(x) = e² ln x / x².
Chapter 6
Systems of equations
6.1 Determinants and the eigenvalue problem
We begin by reviewing some basic linear algebra. For the simplest 2 × 2 case, let
A = [ a b ; c d ], x = [ x1 ; x2 ], (6.1)
and consider the homogeneous equation
Ax = 0. (6.2)
When does there exist a nontrivial (not identically zero) solution for x? To answer this question, we solve directly the system
a x1 + b x2 = 0,
c x1 + d x2 = 0.
Multiplying the first equation by d and the second by b, and subtracting the second equation from the first, results in
(ad − bc) x1 = 0.
Similarly, multiplying the first equation by c and the second by a, and subtracting the first equation from the second, results in
(ad − bc) x2 = 0.
Therefore, a nontrivial solution of (6.2) exists only if the combination ad − bc, called the determinant of A and written det A, is zero. For the 3 × 3 case, with
A = [ a b c ; d e f ; g h i ], x = [ x1 ; x2 ; x3 ],
there exists a nontrivial solution to (6.2) provided det A = 0, where det A = a(ei − fh) − b(di − fg) + c(dh − eg). The definition of the determinant can be further generalized to any n × n matrix, and is typically taught in a first course on linear algebra.
We now consider the eigenvalue problem. For A an n × n matrix and v an n × 1 column vector, the eigenvalue problem solves the equation
Av = λv (6.3)
for eigenvalues λ and associated eigenvectors v. Rewriting (6.3) as
(A − λI)v = 0, (6.4)
where I is the identity matrix, we see from our discussion of determinants that nontrivial eigenvectors exist only for those values of λ satisfying
det(A − λI) = 0. (6.5)
For the 2 × 2 matrix A of (6.1), the characteristic equation (6.5) is
det(A − λI) = (a − λ)(d − λ) − bc
= λ² − (a + d)λ + (ad − bc).
This characteristic equation can be more generally written as
λ² − Tr A λ + det A = 0, (6.6)
where Tr A is the trace, or sum of the diagonal elements, of the matrix A. If λ is an eigenvalue of A, then the corresponding eigenvector v may be found by solving
[ a − λ  b ; c  d − λ ][ v1 ; v2 ] = 0,
where the equation of the second row will always be a multiple of the equation of the first row. The eigenvector v has arbitrary normalization, and we may always choose for convenience v1 = 1.
In the next section, we will see several examples of an eigenvector analysis.
6.2 Coupled first-order equations
We now consider the system of two coupled first-order linear homogeneous differential equations written in matrix form as
ẋ = Ax, (6.7)
with A a 2 × 2 constant matrix. We will consider by example three cases separately: (i) eigenvalues of A are real and there are two linearly independent eigenvectors; (ii) eigenvalues of A are complex conjugates, and; (iii) A has only one linearly independent eigenvector. These three cases are analogous to the cases considered previously when solving the second-order, linear, constant-coefficient, homogeneous equation.
6.2.1 Two distinct real eigenvalues
Example: Find the general solution of ẋ1 = x1 + x2, ẋ2 = 4x1 + x2.
The equations in matrix form are
d/dt [ x1 ; x2 ] = [ 1 1 ; 4 1 ][ x1 ; x2 ]. (6.8)
With the ansatz x = v e^{λt}, (6.8) becomes the eigenvalue problem Av = λv, whose characteristic equation (6.6) is
λ² − 2λ − 3 = (λ − 3)(λ + 1) = 0,
so that the eigenvalues are λ1 = 3 and λ2 = −1. For eigenvalue λ and eigenvector v = (v1, v2)^T, the rows of (A − λI)v = 0 are
(1 − λ)v1 + v2 = 0,
4v1 + (1 − λ)v2 = 0. (6.9)
For λ1 = 3, (6.9) gives
−2v11 + v21 = 0,
4v11 − 2v21 = 0.
Clearly, the second equation is just the first equation multiplied by −2, so only one equation is linearly independent. This will always be true, so for the 2 × 2 case we need only consider the first row of the matrix. The first eigenvector therefore satisfies v21 = 2v11. Recall that an eigenvector is only unique up to multiplication by a constant: we may therefore take v11 = 1 for convenience.
For λ2 = −1, and eigenvector v2 = (v12, v22)^T, we have from (6.9)
2v12 + v22 = 0,
so that v22 = −2v12. Here, we take v12 = 1.
Therefore, our eigenvalues and eigenvectors are given by
λ1 = 3, v1 = [ 1 ; 2 ];  λ2 = −1, v2 = [ 1 ; −2 ].
Using the principle of superposition, the general solution to the ode is therefore
x(t) = c1 v1 e^{λ1 t} + c2 v2 e^{λ2 t},
or explicitly writing out the components,
x1(t) = c1 e^{3t} + c2 e^{−t},
x2(t) = 2c1 e^{3t} − 2c2 e^{−t}.
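The eigenvector solution can be verified component by component (Python, not in the original notes; the constants c1, c2 are arbitrary samples):

```python
import math

# x1 = c1 e^{3t} + c2 e^{-t}, x2 = 2c1 e^{3t} - 2c2 e^{-t} should satisfy
# x1' = x1 + x2 and x2' = 4 x1 + x2 for any c1, c2.
c1, c2 = 0.7, -1.3
def x1(t):  return c1 * math.exp(3 * t) + c2 * math.exp(-t)
def x2(t):  return 2 * c1 * math.exp(3 * t) - 2 * c2 * math.exp(-t)
def dx1(t): return 3 * c1 * math.exp(3 * t) - c2 * math.exp(-t)
def dx2(t): return 6 * c1 * math.exp(3 * t) + 2 * c2 * math.exp(-t)

for t in (0.0, 0.4, 1.0):
    assert abs(dx1(t) - (x1(t) + x2(t))) < 1e-9
    assert abs(dx2(t) - (4 * x1(t) + x2(t))) < 1e-9
```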
We can obtain a new perspective on the solution by drawing a phase-space diagram, shown in Fig. 6.1, with x-axis x1 and y-axis x2. Each curve corresponds to a different initial condition, and represents the trajectory of a particle for both positive and negative t with velocity given by the differential equation. The dark lines represent trajectories along the direction of the eigenvectors. If c2 = 0, the motion is along the eigenvector v1 with x2 = 2x1 and the motion is away from the origin (arrows pointing out) since the eigenvalue λ1 = 3 > 0. If c1 = 0, the motion is along the eigenvector v2 with x2 = −2x1 and the motion is towards the origin (arrows pointing in) since the eigenvalue λ2 = −1 < 0. When the eigenvalues are real and of opposite signs, the origin is called a saddle point. Almost all trajectories (with the exception of those with initial conditions exactly satisfying x2(0) = −2x1(0)) eventually move away from the origin as t increases.
The current example can also be solved by converting the system of two first-order equations into a single second-order equation. Consider again the system of equations
$\dot{x}_1 = x_1 + x_2, \quad \dot{x}_2 = 4x_1 + x_2.$
We differentiate the first equation and proceed to eliminate $x_2$ as follows:
$\ddot{x}_1 = \dot{x}_1 + \dot{x}_2$
$= \dot{x}_1 + 4x_1 + x_2$
$= \dot{x}_1 + 4x_1 + (\dot{x}_1 - x_1)$
$= 2\dot{x}_1 + 3x_1.$
Figure 6.1: Phase space diagram for example with two real eigenvalues of opposite sign.
Therefore, the equivalent second-order linear homogeneous equation is given by
$\ddot{x}_1 - 2\dot{x}_1 - 3x_1 = 0.$
If we had eliminated $x_1$ instead, we would have found an identical equation for $x_2$:
$\ddot{x}_2 - 2\dot{x}_2 - 3x_2 = 0.$
The corresponding characteristic equation is $\lambda^2 - 2\lambda - 3 = 0$, which is identical to the characteristic equation of the matrix A. In general, a system of $n$ first-order linear homogeneous equations can be converted into an equivalent $n$-th order linear homogeneous equation. Numerical methods usually require the conversion in reverse; that is, a conversion of an $n$-th order equation into a system of $n$ first-order equations.
Example: Find the general solution of $\dot{x}_1 = -3x_1 + \sqrt{2}\,x_2$, $\dot{x}_2 = \sqrt{2}\,x_1 - 2x_2$.

The equations in matrix form are
$\frac{d}{dt}\begin{pmatrix} x_1 \\ x_2 \end{pmatrix} = \begin{pmatrix} -3 & \sqrt{2} \\ \sqrt{2} & -2 \end{pmatrix}\begin{pmatrix} x_1 \\ x_2 \end{pmatrix}.$
Figure 6.2: Phase space diagram for example with two real eigenvalues of same
sign.
The characteristic equation of A is $\lambda^2 + 5\lambda + 4 = (\lambda + 4)(\lambda + 1) = 0$. Therefore, the eigenvalues of A are $\lambda_1 = -4$, $\lambda_2 = -1$. Proceeding to determine the associated eigenvectors, for $\lambda_1 = -4$ the first row of $(A - \lambda_1 I)\mathbf{v}_1 = 0$ yields
$v_{11} + \sqrt{2}\,v_{21} = 0;$
and for $\lambda_2 = -1$,
$-2v_{12} + \sqrt{2}\,v_{22} = 0.$
Taking the normalization $v_{11} = v_{12} = 1$, the eigenvalues and eigenvectors are
$\lambda_1 = -4, \quad \mathbf{v}_1 = \begin{pmatrix} 1 \\ -\sqrt{2}/2 \end{pmatrix}; \qquad \lambda_2 = -1, \quad \mathbf{v}_2 = \begin{pmatrix} 1 \\ \sqrt{2} \end{pmatrix};$
and the general solution is
$\mathbf{x}(t) = c_1 \begin{pmatrix} 1 \\ -\sqrt{2}/2 \end{pmatrix} e^{-4t} + c_2 \begin{pmatrix} 1 \\ \sqrt{2} \end{pmatrix} e^{-t}.$
We show the phase space plot in Fig. 6.2. If $c_2 = 0$, the motion is along the eigenvector $\mathbf{v}_1$ with $x_2 = -(\sqrt{2}/2)x_1$, with eigenvalue $\lambda_1 = -4 < 0$. If $c_1 = 0$, the motion is along the eigenvector $\mathbf{v}_2$ with $x_2 = \sqrt{2}\,x_1$, with eigenvalue $\lambda_2 = -1 < 0$. When the eigenvalues are real and have the same sign, the origin is called a node. A node may be attracting or repelling depending on whether the eigenvalues are both negative (as is the case here) or positive. Observe that the trajectories collapse onto the $\mathbf{v}_2$ eigenvector, since $\lambda_1 < \lambda_2 < 0$ and decay is more rapid along the $\mathbf{v}_1$ direction.
6.2.2 Complex conjugate eigenvalues

Example: Find the general solution of $\dot{x}_1 = -\frac{1}{2}x_1 + x_2$, $\dot{x}_2 = -x_1 - \frac{1}{2}x_2$.

The equations in matrix form are
$\frac{d}{dt}\begin{pmatrix} x_1 \\ x_2 \end{pmatrix} = \begin{pmatrix} -1/2 & 1 \\ -1 & -1/2 \end{pmatrix}\begin{pmatrix} x_1 \\ x_2 \end{pmatrix}.$
The eigenvalues of A satisfy $\left(\lambda + \frac{1}{2}\right)^2 + 1 = 0$, so that $\lambda = -\frac{1}{2} \pm i$. For $\lambda = -\frac{1}{2} + i$, the first row of the eigenvector equation yields $v_2 = iv_1$, so we may take $\mathbf{v} = (1, i)^T$; the eigenvector of the complex conjugate eigenvalue $\bar{\lambda}$ is the complex conjugate $\bar{\mathbf{v}}$. The two independent complex solutions are therefore
$\mathbf{v}e^{\lambda t}$ and $\bar{\mathbf{v}}e^{\bar{\lambda} t},$
and we can form a linear combination of these two complex solutions to construct
two independent real solutions. Namely, if the complex functions $z(t)$ and $\bar{z}(t)$ are written as
$z(t) = \mathrm{Re}\{z(t)\} + i\,\mathrm{Im}\{z(t)\},$
$\bar{z}(t) = \mathrm{Re}\{z(t)\} - i\,\mathrm{Im}\{z(t)\},$
then two real functions can be constructed from the following linear combinations of $z$ and $\bar{z}$:
$\frac{z + \bar{z}}{2} = \mathrm{Re}\{z(t)\}$ and $\frac{z - \bar{z}}{2i} = \mathrm{Im}\{z(t)\}.$
Thus the two real vector functions that can be constructed from our two complex vector functions are
$\mathrm{Re}\{\mathbf{v}e^{\lambda t}\} = \mathrm{Re}\left\{\begin{pmatrix} 1 \\ i \end{pmatrix} e^{(-\frac{1}{2}+i)t}\right\}$
$= e^{-t/2}\,\mathrm{Re}\left\{\begin{pmatrix} 1 \\ i \end{pmatrix}(\cos t + i\sin t)\right\}$
$= e^{-t/2}\begin{pmatrix} \cos t \\ -\sin t \end{pmatrix};$
Figure 6.3: Phase space diagram for example with complex conjugate eigenvalues.
and
$\mathrm{Im}\{\mathbf{v}e^{\lambda t}\} = e^{-t/2}\,\mathrm{Im}\left\{\begin{pmatrix} 1 \\ i \end{pmatrix}(\cos t + i\sin t)\right\} = e^{-t/2}\begin{pmatrix} \sin t \\ \cos t \end{pmatrix}.$
Taking a linear superposition of these two real solutions yields the general solution to the ode, given by
$\mathbf{x} = e^{-t/2}\left(A\begin{pmatrix} \cos t \\ -\sin t \end{pmatrix} + B\begin{pmatrix} \sin t \\ \cos t \end{pmatrix}\right).$
The corresponding phase space diagram is shown in Fig. 6.3. We say the
origin is a spiral point. If the real part of the complex eigenvalue is negative,
as is the case here, then solutions spiral into the origin. If the real part of the
eigenvalue is positive, then solutions spiral out of the origin.
The direction of the spiral (here, it is clockwise) can be determined using a concept from physics. If a particle of unit mass moves along a phase space trajectory, then the angular momentum of the particle about the origin is equal to the cross product of the position and velocity vectors: $\mathbf{L} = \mathbf{x} \times \dot{\mathbf{x}}$. With both the position and velocity vectors lying in the two-dimensional phase space plane, the angular momentum vector is perpendicular to this plane. With
$\mathbf{x} = (x_1, x_2, 0), \quad \dot{\mathbf{x}} = (\dot{x}_1, \dot{x}_2, 0),$
then
$\mathbf{L} = (0, 0, \ell)$, with $\ell = x_1\dot{x}_2 - x_2\dot{x}_1$.
By the right-hand rule, a clockwise rotation corresponds to $\ell < 0$, and a counterclockwise rotation to $\ell > 0$.
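The sign of $\ell$ is easy to check numerically. The sketch below (assuming the example system above, with numpy as an outside tool) evaluates $\ell$ at randomly sampled phase-space points; analytically $\ell = -(x_1^2 + x_2^2) < 0$, confirming the clockwise rotation:

```python
import numpy as np

# ell = x1*x2dot - x2*x1dot for the system x1' = -x1/2 + x2, x2' = -x1 - x2/2
def ell(x1, x2):
    x1dot = -0.5 * x1 + x2
    x2dot = -x1 - 0.5 * x2
    return x1 * x2dot - x2 * x1dot   # analytically -(x1**2 + x2**2)

rng = np.random.default_rng(0)       # arbitrary sampling of the plane
pts = rng.normal(size=(100, 2))
signs = np.sign([ell(x1, x2) for x1, x2 in pts])
# every sample gives ell < 0, so the rotation is clockwise
```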
6.2.3 Repeated eigenvalues with one eigenvector

Example: Find the general solution of $\dot{x}_1 = x_1 - x_2$, $\dot{x}_2 = x_1 + 3x_2$.

The equations in matrix form are
$\frac{d}{dt}\begin{pmatrix} x_1 \\ x_2 \end{pmatrix} = \begin{pmatrix} 1 & -1 \\ 1 & 3 \end{pmatrix}\begin{pmatrix} x_1 \\ x_2 \end{pmatrix}.$ (6.10)
The characteristic equation is $\lambda^2 - 4\lambda + 4 = (\lambda - 2)^2 = 0$, so that $\lambda = 2$ is a repeated eigenvalue with only one linearly independent eigenvector, $\mathbf{v} = (1, -1)^T$. To find the general solution, we may first convert the system to a second-order equation for $x_1$, whose characteristic equation also has the degenerate root $\lambda = 2$, so that
$x_1 = (c_1 + c_2 t)e^{2t},$
Figure 6.4: Phase space diagram for example with only one eigenvector.
so that
$x_2 = x_1 - \dot{x}_1$
$= (c_1 + c_2 t)e^{2t} - (2c_1 + c_2 + 2c_2 t)e^{2t}$
$= -(c_1 + c_2)e^{2t} - c_2 te^{2t}.$
Combining our results for $x_1$ and $x_2$, we have therefore found
$\begin{pmatrix} x_1 \\ x_2 \end{pmatrix} = c_1\begin{pmatrix} 1 \\ -1 \end{pmatrix}e^{2t} + c_2\left[\begin{pmatrix} 0 \\ -1 \end{pmatrix} + \begin{pmatrix} 1 \\ -1 \end{pmatrix}t\right]e^{2t}.$
Our missing linearly independent solution is thus determined to be
$\mathbf{x}(t) = c_2\left[\begin{pmatrix} 0 \\ -1 \end{pmatrix} + \begin{pmatrix} 1 \\ -1 \end{pmatrix}t\right]e^{2t}.$ (6.11)
The second term of (6.11) is just $t$ times the first solution; however, this by itself is not sufficient. Indeed, the correct ansatz to find the second solution directly is given by
$\mathbf{x} = (\mathbf{w} + t\mathbf{v})e^{\lambda t},$ (6.12)
where $\lambda$ and $\mathbf{v}$ are the eigenvalue and eigenvector of the first solution, and $\mathbf{w}$ is an unknown vector to be determined. To illustrate this direct method, we substitute (6.12) into $\dot{\mathbf{x}} = A\mathbf{x}$, assuming $A\mathbf{v} = \lambda\mathbf{v}$. Canceling the exponential, we obtain
$\mathbf{v} + \lambda(\mathbf{w} + t\mathbf{v}) = A\mathbf{w} + t\lambda\mathbf{v}.$
Further canceling the common term $\lambda t\mathbf{v}$ and rewriting yields
$(A - \lambda I)\mathbf{w} = \mathbf{v}.$ (6.13)
6.3 Normal modes

We consider two identical masses $m$, coupled to each other by a center spring with spring constant $K$, and to two fixed walls by outer springs with spring constant $k$ (see Fig. 6.5). With $x_1$ and $x_2$ the displacements of the masses from their equilibrium positions, Newton's law gives
$m\ddot{x}_1 = -kx_1 - K(x_1 - x_2),$
$m\ddot{x}_2 = -kx_2 - K(x_2 - x_1).$
Further rewriting by collecting terms proportional to $x_1$ and $x_2$ yields
$m\ddot{x}_1 = -(k + K)x_1 + Kx_2,$
$m\ddot{x}_2 = Kx_1 - (k + K)x_2.$
The equations in matrix form are
$m\frac{d^2}{dt^2}\begin{pmatrix} x_1 \\ x_2 \end{pmatrix} = \begin{pmatrix} -(k+K) & K \\ K & -(k+K) \end{pmatrix}\begin{pmatrix} x_1 \\ x_2 \end{pmatrix}.$ (6.14)
Trying the ansatz $\mathbf{x} = \mathbf{v}e^{i\omega t}$ results in the eigenvalue problem $A\mathbf{v} = -m\omega^2\mathbf{v}$, with A the matrix above, and the eigenvalues are determined from
$\det\begin{pmatrix} m\omega^2 - (k+K) & K \\ K & m\omega^2 - (k+K) \end{pmatrix} = \left(m\omega^2 - (k+K)\right)^2 - K^2 = 0.$
The solution for $m\omega^2$ is
$m\omega^2 = k + K \pm K,$
and the two eigenvalues are
$\omega_1^2 = k/m, \qquad \omega_2^2 = (k + 2K)/m,$
with corresponding angular frequencies $\omega_1 = \sqrt{k/m}$ and $\omega_2 = \sqrt{(k + 2K)/m}$.
The positions of the oscillating masses in general contain time dependencies of the form $\sin\omega_1 t$, $\cos\omega_1 t$, and $\sin\omega_2 t$, $\cos\omega_2 t$.
It is of further interest to determine the eigenvectors, or so-called normal modes of oscillation, associated with the two distinct angular frequencies. With specific initial conditions proportional to an eigenvector, the masses will oscillate with a single frequency. The eigenvector associated with $\omega_1^2$ satisfies
$-v_{11} + v_{12} = 0,$
so that $v_{11} = v_{12}$. The normal mode associated with the frequency $\omega_1 = \sqrt{k/m}$ thus follows a motion where $x_1 = x_2$. Referring to Fig. 6.5, during this motion the center spring length does not change, and the two masses oscillate as if the center spring was absent (which is why the frequency of oscillation is independent of $K$).
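The normal-mode frequencies can be verified numerically by computing the eigenvalues of the matrix in (6.14). In the sketch below (numpy is an assumed tool; the values of $m$, $k$, $K$ are arbitrary illustrative choices):

```python
import numpy as np

# Sample parameter values (illustrative only)
m, k, K = 2.0, 3.0, 5.0

# m x'' = A x, so the eigenvalues of A/m are -omega^2
A = np.array([[-(k + K), K],
              [K, -(k + K)]]) / m

lam = np.linalg.eigvalsh(A)            # symmetric matrix: real eigenvalues
omega = np.sort(np.sqrt(-lam))         # normal-mode angular frequencies

expected = np.sort([np.sqrt(k / m), np.sqrt((k + 2 * K) / m)])
```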
The eigenvector associated with $\omega_2^2$ satisfies $v_{21} + v_{22} = 0$, so that $v_{21} = -v_{22}$. The normal mode associated with the frequency $\omega_2 = \sqrt{(k + 2K)/m}$ thus follows a motion where $x_1 = -x_2$: the two masses oscillate out of phase, with the center spring alternately stretched and compressed.
Chapter 7
Nonlinear differential
equations and bifurcation
theory
Reference: Strogatz, Sections 2.2, 2.4, 3.1, 3.2, 3.4, 6.3, 6.4, 8.2
We now turn our attention to nonlinear differential equations. In particular, we
study how small changes in the parameters of a system can result in qualitative changes in the dynamics. These qualitative changes in the dynamics are
called bifurcations. To understand bifurcations, we first need to understand the
concepts of fixed points and stability.
7.1 Fixed points and stability

7.1.1 One dimension

view tutorial

Consider the one-dimensional differential equation for $x = x(t)$ given by
$\dot{x} = f(x).$ (7.1)
We say that $x_*$ is a fixed point, or equilibrium point, of (7.1) if $f(x_*) = 0$; at a fixed point, $\dot{x} = 0$, so that a solution starting at $x_*$ remains there for all time. A fixed point, however, can be stable or unstable: it is stable if a small perturbation of the solution from the fixed point decays in time, and unstable if a small perturbation grows in time. Stability can be determined by a linear analysis. Let $x = x_* + \epsilon(t)$, where $\epsilon$ represents a small perturbation of the solution from the fixed point. Since $\dot{x}_* = 0$, Taylor-series expanding $f$ about $x_*$ and using $f(x_*) = 0$ gives
$\dot{\epsilon} = f(x_* + \epsilon) = \epsilon f'(x_*) + \dots.$
The omitted terms in the Taylor series expansion are proportional to $\epsilon^2$, and can be made negligible over a short time interval with respect to the kept term, proportional to $\epsilon$, by taking $\epsilon(0)$ sufficiently small. Therefore, at least over short times, the differential equation to be considered, $\dot{\epsilon} = f'(x_*)\epsilon$, is linear and has by now the familiar solution
$\epsilon(t) = \epsilon(0)\,e^{f'(x_*)t}.$
The perturbation therefore decays exponentially if $f'(x_*) < 0$ (stable fixed point) and grows exponentially if $f'(x_*) > 0$ (unstable fixed point).
7.1.2
Two dimensions
view tutorial
The idea of fixed points and stability can be extended to higher-order systems of odes. Here, we consider a two-dimensional system and will need to make use of the two-dimensional Taylor series expansion of a function $F(x, y)$ about the origin. In general, the Taylor series of $F(x, y)$ is given by
$F(x, y) = F + x\frac{\partial F}{\partial x} + y\frac{\partial F}{\partial y} + \frac{1}{2}\left(x^2\frac{\partial^2 F}{\partial x^2} + 2xy\frac{\partial^2 F}{\partial x\,\partial y} + y^2\frac{\partial^2 F}{\partial y^2}\right) + \dots,$
where the function $F$ and all of its partial derivatives on the right-hand-side are evaluated at the origin. Note that the Taylor series is constructed so that all partial derivatives of the left-hand-side match those of the right-hand-side at the origin.
We now consider the two-dimensional system given by
$\dot{x} = f(x, y), \quad \dot{y} = g(x, y).$ (7.2)
The point $(x_*, y_*)$ is said to be a fixed point of (7.2) if $f(x_*, y_*) = 0$ and $g(x_*, y_*) = 0$. Again, the local stability of a fixed point can be determined by a linear analysis. We let $x = x_* + \epsilon(t)$ and $y = y_* + \delta(t)$, where $\epsilon$ and $\delta$ are small perturbations. Making use of the two-dimensional Taylor series about the fixed point, and using $f(x_*, y_*) = g(x_*, y_*) = 0$,
$\dot{\epsilon} = f(x_* + \epsilon, y_* + \delta) = \epsilon\frac{\partial f}{\partial x} + \delta\frac{\partial f}{\partial y} + \dots,$
$\dot{\delta} = g(x_* + \epsilon, y_* + \delta) = \epsilon\frac{\partial g}{\partial x} + \delta\frac{\partial g}{\partial y} + \dots,$
where in the Taylor series $f$, $g$ and all their partial derivatives are evaluated at the fixed point $(x_*, y_*)$. Neglecting higher-order terms in the Taylor series, we thus have a system of odes for the perturbation, given in matrix form as
$\frac{d}{dt}\begin{pmatrix} \epsilon \\ \delta \end{pmatrix} = \begin{pmatrix} \partial f/\partial x & \partial f/\partial y \\ \partial g/\partial x & \partial g/\partial y \end{pmatrix}\begin{pmatrix} \epsilon \\ \delta \end{pmatrix}.$ (7.3)
The two-by-two matrix in (7.3) is called the Jacobian matrix at the fixed point. An eigenvalue analysis of the Jacobian matrix will typically yield two eigenvalues $\lambda_1$ and $\lambda_2$. These eigenvalues may be real and distinct, complex conjugate pairs, or repeated. The fixed point is stable (all perturbations decay exponentially) if both eigenvalues have negative real parts. The fixed point is unstable (some perturbations grow exponentially) if at least one of the eigenvalues has a positive real part. Fixed points can be further classified as stable or unstable nodes, unstable saddle points, stable or unstable spiral points, or stable or unstable improper nodes.
Example: Find all the fixed points of the nonlinear system $\dot{x} = x(3 - x - 2y)$, $\dot{y} = y(2 - x - y)$, and determine their stability.
view tutorial
The fixed points are determined by solving
$f(x, y) = x(3 - x - 2y) = 0, \quad g(x, y) = y(2 - x - y) = 0.$
There are four fixed points $(x_*, y_*)$: $(0, 0)$, $(0, 2)$, $(3, 0)$ and $(1, 1)$. The Jacobian matrix is given by
$\begin{pmatrix} \partial f/\partial x & \partial f/\partial y \\ \partial g/\partial x & \partial g/\partial y \end{pmatrix} = \begin{pmatrix} 3 - 2x - 2y & -2x \\ -y & 2 - x - 2y \end{pmatrix}.$
Stability of the fixed points may be considered in turn. With $J_*$ the Jacobian matrix evaluated at the fixed point, we have
$(x_*, y_*) = (0, 0): \quad J_* = \begin{pmatrix} 3 & 0 \\ 0 & 2 \end{pmatrix}.$
The eigenvalues are $\lambda = 3, 2$, so that the origin is an unstable node. Next,
$(x_*, y_*) = (0, 2): \quad J_* = \begin{pmatrix} -1 & 0 \\ -2 & -2 \end{pmatrix}.$
The eigenvalues are $\lambda = -1, -2$, so that the fixed point $(0, 2)$ is a stable node. Next,
$(x_*, y_*) = (3, 0): \quad J_* = \begin{pmatrix} -3 & -6 \\ 0 & -1 \end{pmatrix}.$
The eigenvalues are $\lambda = -3, -1$, so that the fixed point $(3, 0)$ is also a stable node. Finally,
$(x_*, y_*) = (1, 1): \quad J_* = \begin{pmatrix} -1 & -2 \\ -1 & -1 \end{pmatrix}.$
The eigenvalues satisfy $\lambda^2 + 2\lambda - 1 = 0$, so that $\lambda = -1 \pm \sqrt{2}$. Since one eigenvalue is negative and the other positive, the fixed point
(1, 1) is an unstable saddle point. From our analysis of the fixed points, one can
expect that all solutions will asymptote to one of the stable fixed points (0, 2)
or (3, 0), depending on the initial conditions.
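The eigenvalue computations at the four fixed points are easily confirmed numerically; the sketch below (numpy is an assumed tool outside the notes) evaluates the Jacobian at each fixed point:

```python
import numpy as np

# Jacobian of f = x(3-x-2y), g = y(2-x-y)
def jacobian(x, y):
    return np.array([[3 - 2 * x - 2 * y, -2 * x],
                     [-y, 2 - x - 2 * y]])

# Sorted eigenvalues at each of the four fixed points
eigs = {p: np.sort(np.linalg.eigvals(jacobian(*p)).real)
        for p in [(0, 0), (0, 2), (3, 0), (1, 1)]}

# (0,0): both positive (unstable node); (0,2) and (3,0): both negative
# (stable nodes); (1,1): opposite signs (saddle point)
```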
It is of interest to sketch the phase space diagram for this nonlinear system. The eigenvectors associated with the unstable saddle point (1, 1) determine the directions of the flow into and away from this fixed point. The eigenvector associated with the positive eigenvalue $\lambda_1 = -1 + \sqrt{2}$ can be determined from the first equation of $(J_* - \lambda_1 I)\mathbf{v}_1 = 0$, or
$-\sqrt{2}\,v_{11} - 2v_{12} = 0,$
so that $v_{12} = -(\sqrt{2}/2)v_{11}$; similarly, the eigenvector associated with the negative eigenvalue $\lambda_2 = -1 - \sqrt{2}$ satisfies $v_{22} = (\sqrt{2}/2)v_{21}$. These eigenvectors determine the slopes of the lines with origin at the fixed point for incoming (negative eigenvalue) and outgoing (positive eigenvalue) trajectories. The outgoing trajectories have negative slope $-\sqrt{2}/2$ and the incoming trajectories have positive slope $\sqrt{2}/2$. A
rough sketch of the phase space diagram can be made by hand (as demonstrated
in class). Here, a computer generated plot obtained from numerical solution of
the nonlinear coupled odes is presented in Fig. 7.1. The curve starting from the origin and the curve starting from infinity, both terminating at the unstable saddle point, together form the separatrix. The separatrix divides the phase space into two regions: initial
conditions for which the solution asymptotes to the fixed point (0, 2), and initial
conditions for which the solution asymptotes to the fixed point (3, 0).
7.2
One-dimensional bifurcations
7.2.1
Saddle-node bifurcation
view tutorial
The saddle-node bifurcation results in fixed points being created or destroyed. The normal form for a saddle-node bifurcation is given by
$\dot{x} = r + x^2.$
The fixed points are $x_* = \pm\sqrt{-r}$. Clearly, two real fixed points exist when $r < 0$ and no real fixed points exist when $r > 0$. The stability of the fixed points when $r < 0$ is determined by the derivative of $f(x) = r + x^2$, given by $f'(x) = 2x$. Therefore, the negative fixed point is stable and the positive fixed point is unstable.
Graphically, we can illustrate this bifurcation in two ways. First, in Fig. 7.2(a), we plot $\dot{x}$ versus $x$ for the three parameter values corresponding to $r < 0$, $r = 0$ and $r > 0$. The values $x_*$ at which $\dot{x} = 0$ correspond to the fixed points, and arrows are drawn indicating how the solution $x(t)$ evolves (to the right if $\dot{x} > 0$ and to the left if $\dot{x} < 0$). The stable fixed point is indicated by a filled circle and the unstable fixed point by an open circle. Note that when $r = 0$, solutions converge to the origin from the left, but diverge from the origin on the right. Second, in Fig. 7.2(b), we plot the bifurcation diagram: the fixed points $x_*$ versus the parameter $r$, with the stable branch drawn solid and the unstable branch dashed.
Figure 7.2: Saddle-node bifurcation: (a) $\dot{x}$ versus $x$ for $r < 0$, $r = 0$ and $r > 0$; (b) bifurcation diagram.
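The fixed points and their stability for $r < 0$ can be checked with a few lines of code (an illustrative sketch assuming numpy; the value $r = -1$ is an arbitrary choice):

```python
import numpy as np

# Fixed points of xdot = r + x^2 for a sample r < 0
r = -1.0
fixed_points = np.sort(np.roots([1.0, 0.0, r]))   # roots of x^2 + r

# Stability from f'(x) = 2x: True means stable
stability = [2 * x < 0 for x in fixed_points]
# fixed_points = [-1, 1]: negative root stable, positive root unstable
```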
7.2.2
Transcritical bifurcation
view tutorial
A transcritical bifurcation occurs when there is an exchange of stabilities between two fixed points. The normal form for a transcritical bifurcation is given by
$\dot{x} = rx - x^2.$
The fixed points are $x_* = 0$ and $x_* = r$. The derivative of the right-hand-side is $f'(x) = r - 2x$, so that $f'(0) = r$ and $f'(r) = -r$. Therefore, for $r < 0$, $x_* = 0$ is stable and $x_* = r$ is unstable, while for $r > 0$, $x_* = r$ is stable and $x_* = 0$ is unstable. The two fixed points thus exchange stability as $r$ passes through zero. The transcritical bifurcation is illustrated in Fig. 7.3.
7.2.3 Supercritical pitchfork bifurcation

view tutorial

The pitchfork bifurcations occur in physical models where fixed points appear and disappear in pairs due to some intrinsic symmetry of the problem. Pitchfork bifurcations can come in one of two types. In the supercritical bifurcation, a pair of stable fixed points is created at the bifurcation (or critical) point and exists only after the bifurcation; in the subcritical bifurcation, a pair of unstable fixed points is created and exists only before the bifurcation. The normal form for the supercritical pitchfork bifurcation is given by
$\dot{x} = rx - x^3.$
The fixed points are $x_* = 0$ and $x_* = \pm\sqrt{r}$, the latter existing only when $r > 0$. The derivative of the right-hand-side is $f'(x) = r - 3x^2$, so that $f'(0) = r$ and $f'(\pm\sqrt{r}) = -2r$. Therefore, $x_* = 0$ is stable for $r < 0$ and unstable for $r > 0$, while the pair $x_* = \pm\sqrt{r}$ is stable where it exists.
7.2.4 Subcritical pitchfork bifurcation

view tutorial

In the subcritical case, the cubic term is destabilizing. The normal form (to order $x^3$) is
$\dot{x} = rx + x^3.$
The fixed points are $x_* = 0$ and $x_* = \pm\sqrt{-r}$, the latter fixed points existing only when $r \le 0$. The derivative of the right-hand-side is $f'(x) = r + 3x^2$, so that $x_* = 0$ is stable for $r < 0$ and unstable for $r > 0$, while the fixed points $x_* = \pm\sqrt{-r}$ exist and are unstable for $r < 0$. There are no stable fixed points when $r > 0$.
The absence of stable fixed points for $r > 0$ indicates that the neglect of terms of higher order in $x$ than $x^3$ in the normal form may be unwarranted. Keeping to the intrinsic symmetry of the equations (only odd powers of $x$), we can add a stabilizing nonlinear term proportional to $x^5$. The extended normal form (to order $x^5$) is
$\dot{x} = rx + x^3 - x^5,$
and is somewhat more difficult to analyze. The fixed points are solutions of
$x\left(r + x^2 - x^4\right) = 0.$
The fixed point $x_* = 0$ exists for all $r$, and four additional fixed points can be found from the solutions of the quadratic equation in $x^2$:
$x_*^2 = \frac{1}{2}\left(1 \pm \sqrt{1 + 4r}\right).$
These solutions are real only if $1 + 4r \ge 0$, and yield real $x_*$ only if $x_*^2 > 0$. The intervals of $r$ and the corresponding fixed points are
$r < -\frac{1}{4}$: $x_* = 0$;
$-\frac{1}{4} < r < 0$: $x_* = 0$, $\quad x_*^2 = \frac{1}{2}\left(1 \pm \sqrt{1 + 4r}\right)$;
$r > 0$: $x_* = 0$, $\quad x_*^2 = \frac{1}{2}\left(1 + \sqrt{1 + 4r}\right)$.
To determine the stability of the nonzero fixed points, we note that at a nonzero fixed point $r = x_*^4 - x_*^2$, so that
$f'(x_*) = r + 3x_*^2 - 5x_*^4 = 2x_*^2\left(1 - 2x_*^2\right).$
With $x_*^2 = \frac{1}{2}\left(1 \pm \sqrt{1 + 4r}\right)$, we have
$f'(x_*) = \mp\left(1 \pm \sqrt{1 + 4r}\right)\sqrt{1 + 4r}.$
Clearly, the plus root is always stable since $f'(x_*) < 0$. The minus root exists only for $-\frac{1}{4} < r < 0$ and is unstable since $f'(x_*) > 0$. We summarize the stability of the fixed points:
$r < -\frac{1}{4}$: $x_* = 0$ (stable);
$-\frac{1}{4} < r < 0$: $x_* = 0$ (stable); $\quad x_* = \pm\sqrt{\frac{1}{2}\left(1 + \sqrt{1 + 4r}\right)}$ (stable); $\quad x_* = \pm\sqrt{\frac{1}{2}\left(1 - \sqrt{1 + 4r}\right)}$ (unstable);
$r > 0$: $x_* = 0$ (unstable); $\quad x_* = \pm\sqrt{\frac{1}{2}\left(1 + \sqrt{1 + 4r}\right)}$ (stable).
7.2.5 Application: a mathematical model of a fishery

view tutorial

We illustrate the utility of bifurcation theory by analyzing a simple model of a fishery. We utilize the logistic equation (see §2.4.6) to model a fish population in the absence of fishing. To model fishing, we assume that the government has established fishing quotas so that at most a total of $C$ fish per year may be caught. We assume that when the fish population is at the carrying capacity of the environment, fisherman can catch nearly their full quota. When the fish population drops to lower values, fish may be harder to find and the catch rate may fall below $C$, eventually going to zero as the fish population diminishes.
Combining the logistic equation together with a simple model of fishing, we propose the mathematical model
$\frac{dN}{dt} = rN\left(1 - \frac{N}{K}\right) - \frac{CN}{A + N},$ (7.4)
where $N$ is the fish population size, $t$ is time, $r$ is the maximum potential growth rate of the fish population, $K$ is the carrying capacity of the environment, $C$ is the annual fishing quota, and $A$ is a constant modeling the decline of the catch rate at low population sizes.
Nondimensionalizing (7.4) using $x = N/K$, $\tau = rt$, $c = C/(rK)$ and $a = A/K$ results in
$\frac{dx}{d\tau} = x(1 - x) - \frac{cx}{a + x}.$ (7.5)
The fixed points of (7.5) are $x_* = 0$, valid for all $c$, and the solutions of $(1 - x)(a + x) = c$, or
$x_* = \frac{1}{2}\left[(1 - a) \pm \sqrt{(1 + a)^2 - 4c}\right].$ (7.6)
The fixed points given by (7.6) are real only if $c < \frac{1}{4}(1 + a)^2$. Furthermore, the minus root is greater than zero only if $c > a$. We therefore need to consider three intervals over which the following fixed points exist:
$0 < c < a$: $\quad x_* = 0, \quad x_* = \frac{1}{2}\left[(1 - a) + \sqrt{(1 + a)^2 - 4c}\right]$;
$a < c < \frac{1}{4}(1 + a)^2$: $\quad x_* = 0, \quad x_* = \frac{1}{2}\left[(1 - a) \pm \sqrt{(1 + a)^2 - 4c}\right]$;
$c > \frac{1}{4}(1 + a)^2$: $\quad x_* = 0$.
The stability of the fixed points can be determined with rigor analytically or graphically. Here, we simply apply biological intuition together with knowledge of the types of one-dimensional bifurcations. An intuitive argument is made simpler if we consider $c$ decreasing from large values. When $c$ is large, that is $c > \frac{1}{4}(1 + a)^2$, too many fish are being caught and our intuition suggests that the fish population goes extinct. Therefore, in this interval, the single fixed point $x_* = 0$ must be stable. As $c$ decreases, a bifurcation occurs at $c = \frac{1}{4}(1 + a)^2$ introducing two additional fixed points at $x_* = (1 - a)/2$. The type of one-dimensional bifurcation in which two fixed points are created as a square root becomes real is a saddle-node bifurcation, and one of the fixed points will be stable and the other unstable. Following these fixed points as $c \to 0$, we observe that the plus root goes to one, which is the appropriate stable fixed point when there is no fishing. We therefore conclude that the plus root is stable and the minus root is unstable. As $c$ decreases further from this bifurcation, the minus root collides with the fixed point $x_* = 0$ at $c = a$. This appears to be a transcritical bifurcation and, assuming an exchange of stability occurs, we must have the fixed point $x_* = 0$ becoming unstable for $c < a$. The resulting bifurcation diagram is shown in Fig. 7.6.
The purpose of simple mathematical models applied to complex ecological problems is to offer some insight. Here, we have learned that overfishing (in the model, $c > \frac{1}{4}(1 + a)^2$) during one year can potentially result in a sudden collapse of the fish catch in subsequent years, so that governments need to be particularly cautious when contemplating increases in fishing quotas.
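The three intervals of the fishery analysis can be checked by counting the positive nonzero fixed points of (7.5) for sample quotas $c$ (a sketch assuming numpy; the value $a = 0.2$ is an arbitrary illustrative choice):

```python
import numpy as np

a = 0.2                                  # illustrative parameter value
c_crit = 0.25 * (1 + a) ** 2             # saddle-node bifurcation point

def positive_fixed_points(c):
    # Nonzero fixed points x* = (1/2)[(1-a) +/- sqrt((1+a)^2 - 4c)]
    disc = (1 + a) ** 2 - 4 * c
    if disc < 0:
        return []                        # only x* = 0 survives
    roots = [0.5 * ((1 - a) + s * np.sqrt(disc)) for s in (+1, -1)]
    return [x for x in roots if x > 0]   # population must be positive

n_low = len(positive_fixed_points(0.5 * a))             # 0 < c < a
n_mid = len(positive_fixed_points(0.5 * (a + c_crit)))  # a < c < c_crit
n_high = len(positive_fixed_points(2 * c_crit))         # c > c_crit
```

The counts (1, 2, 0) match the three intervals listed above.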
7.3 Two-dimensional bifurcations

7.3.1 Supercritical Hopf bifurcation

A Hopf bifurcation occurs when a fixed point of a two-dimensional system changes stability as a pair of complex-conjugate eigenvalues of the Jacobian crosses the imaginary axis. The normal form of the supercritical Hopf bifurcation is given in polar coordinates by
$\dot{r} = \mu r - r^3,$
$\dot{\theta} = \omega + br^2,$
where $x = r\cos\theta$ and $y = r\sin\theta$. The parameter $\mu$ controls the stability of the fixed point at the origin, the parameter $\omega$ is the frequency of oscillation near the origin, and the parameter $b$ determines the dependence of the oscillation frequency at larger amplitude oscillations. Although we include $b$ for generality, our qualitative analysis of these equations will be independent of $b$.
The equation for the radius is of the form of the supercritical pitchfork bifurcation. The fixed points are $r_* = 0$ and $r_* = \sqrt{\mu}$ (note that $r > 0$), and the former fixed point is stable for $\mu < 0$ and the latter is stable for $\mu > 0$. The transition of the eigenvalues of the Jacobian from negative real part to positive real part can be seen if we transform these equations to cartesian coordinates.
We have, using $r^2 = x^2 + y^2$,
$\dot{x} = \dot{r}\cos\theta - r\dot{\theta}\sin\theta$
$= (\mu r - r^3)\cos\theta - r(\omega + br^2)\sin\theta$
$= \mu x - (x^2 + y^2)x - \omega y - b(x^2 + y^2)y$
$= \mu x - \omega y - (x^2 + y^2)(x + by);$
$\dot{y} = \dot{r}\sin\theta + r\dot{\theta}\cos\theta$
$= (\mu r - r^3)\sin\theta + r(\omega + br^2)\cos\theta$
$= \mu y - (x^2 + y^2)y + \omega x + b(x^2 + y^2)x$
$= \omega x + \mu y - (x^2 + y^2)(y - bx).$
The stability of the origin is determined by the Jacobian matrix evaluated at the origin. The nonlinear terms in the equations vanish there, and the Jacobian matrix at the origin is given by
$J = \begin{pmatrix} \mu & -\omega \\ \omega & \mu \end{pmatrix},$
with eigenvalues $\lambda = \mu \pm i\omega$. The real part of the eigenvalues therefore crosses zero, and the origin changes stability, as $\mu$ passes through zero.
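A quick numerical confirmation that the eigenvalues of this Jacobian are $\mu \pm i\omega$ (numpy is an assumed tool; the parameter values are arbitrary):

```python
import numpy as np

mu, omega = -0.3, 2.0                # sample values for illustration
J = np.array([[mu, -omega],
              [omega, mu]])

lam = np.linalg.eigvals(J)           # complex-conjugate pair mu +/- i*omega
# The real part mu sets the stability of the origin
```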
7.3.2 Subcritical Hopf bifurcation
Chapter 8
Partial differential
equations
Reference: Boyce and DiPrima, Chapter 10
Differential equations containing partial derivatives with two or more independent variables are called partial differential equations (pdes). These equations are of fundamental scientific interest but are substantially more difficult
to solve, both analytically and computationally, than odes. In this chapter, we
will derive two fundamental pdes and show how to solve them.
8.1 Derivation of the diffusion equation

view lecture
To derive the diffusion equation in one spatial dimension, we imagine a still liquid in a long pipe of constant cross sectional area. A small quantity of dye is placed in a cross section of the pipe and allowed to diffuse up and down the pipe. The dye diffuses from regions of higher concentration to regions of lower concentration.
We define $u(x, t)$ to be the concentration of the dye at position $x$ along the pipe, and we wish to find the pde satisfied by $u$. It is useful to keep track of the units of the various quantities involved in the derivation, and we introduce the bracket notation $[X]$ to mean the units of $X$. Relevant dimensional units used in the derivation of the diffusion equation are mass $M$, length $L$, and time $T$. Assuming that the dye concentration is uniform in every cross section of the pipe, the dimensions of concentration used here are $[u] = M/L$.
The mass of dye in the infinitesimal pipe volume located between position $x_1$ and position $x_2$ at time $t$, with $x_1 < x < x_2$, is given to order $\Delta x = x_2 - x_1$ by
$M = u(x, t)\,\Delta x.$
The mass of dye in this infinitesimal pipe volume changes by diffusion into and out of the cross sectional ends situated at position $x_1$ and $x_2$ (Figure 8.1). We assume the rate of diffusion is proportional to the concentration gradient, a relationship known as Fick's law of diffusion. Fick's law of diffusion assumes the mass flux $J$, with units $[J] = M/T$, across a cross section of the pipe is given by

Figure 8.1: Mass fluxes $J(x_1)$ and $J(x_2)$ at the ends of the infinitesimal pipe volume between $x_1$ and $x_2$.

$J = -Du_x,$ (8.1)
where the diffusion constant $D > 0$ has units $[D] = L^2/T$, and we have used the notation $u_x = \partial u/\partial x$. The mass flux is opposite in sign to the gradient of
concentration. The time rate of change in the mass of dye between $x_1$ and $x_2$ is given by the difference between the mass flux into and the mass flux out of the infinitesimal cross sectional volume. When $u_x < 0$, $J > 0$ and the mass flows into the volume at position $x_1$ and out of the volume at position $x_2$. On the other hand, when $u_x > 0$, $J < 0$ and the mass flows out of the volume at position $x_1$ and into the volume at position $x_2$. In both cases, the time rate of change of the dye mass is given by
$\frac{dM}{dt} = J(x_1, t) - J(x_2, t),$
or rewriting in terms of $u(x, t)$:
$u_t(x, t)\,\Delta x = D\left(u_x(x_2, t) - u_x(x_1, t)\right).$
Dividing by $\Delta x$ and taking the limit $\Delta x \to 0$ results in the diffusion equation:
$u_t = Du_{xx}.$
We note that the diffusion equation is identical to the heat conduction equation, where $u$ is temperature, and the constant $D$ (commonly written as $\kappa$) is the thermal diffusivity.
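The diffusion equation is also easy to solve numerically, which is useful for checking the series solutions derived later in this chapter. A minimal explicit finite-difference sketch (this scheme is not part of the notes; grid sizes and $D$ are illustrative choices, and the time step is kept below the explicit stability bound $\Delta t \le \Delta x^2/2D$):

```python
import numpy as np

# Solve u_t = D u_xx on 0 <= x <= L with u = 0 at both ends
L, D, nx = 1.0, 1.0, 51
dx = L / (nx - 1)
dt = 0.4 * dx**2 / D              # within the explicit stability bound

x = np.linspace(0.0, L, nx)
u = np.sin(np.pi * x / L)         # initial condition: slowest-decaying mode

for _ in range(200):
    # centered second difference in space, forward Euler in time
    u[1:-1] += dt * D * (u[2:] - 2 * u[1:-1] + u[:-2]) / dx**2
    u[0] = u[-1] = 0.0            # boundary conditions

# For this initial condition the exact solution is known in closed form
t = 200 * dt
exact = np.sin(np.pi * x / L) * np.exp(-np.pi**2 * D * t / L**2)
```

The numerical solution agrees with the exact decaying mode to a few parts in ten thousand on this grid.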
8.2 Derivation of the wave equation

view lecture
To derive the wave equation in one spatial dimension, we imagine an elastic string that undergoes small amplitude transverse vibrations. We define $u(x, t)$ to be the vertical displacement of the string from the $x$-axis at position $x$ and time $t$, and we wish to find the pde satisfied by $u$. We define $\rho$ to be the constant mass density of the string, $T$ the tension of the string, and $\theta$ the angle the string makes with the horizontal (Figure 8.2).

Figure 8.2: Tensions $T_1$, $T_2$ and angles $\theta_1$, $\theta_2$ at the ends of an infinitesimal string element between $x_1$ and $x_2$.

Newton's equation for the vertical acceleration of the infinitesimal string element located between $x_1$ and $x_2$ is
$\rho\,\Delta x\,u_{tt} = T_2\sin\theta_2 - T_1\sin\theta_1,$
where $T_1$ and $T_2$ are the string tensions at the two ends of the element. We now make the assumption of small vibrations, that is $\Delta u \ll \Delta x$, or equivalently $\theta \ll 1$. Note that $[\theta] = 1$ so that $\theta$ is dimensionless. With this approximation, to leading-order in $\theta$ we have
$\cos\theta_2 = \cos\theta_1 = 1,$
$\sin\theta_2 = u_x(x_2, t),$
$\sin\theta_1 = u_x(x_1, t),$
and from the horizontal force balance $T_1\cos\theta_1 = T_2\cos\theta_2$ we have $T_1 = T_2 = T$. Newton's equation then becomes
$\rho\,\Delta x\,u_{tt} = T\left(u_x(x_2, t) - u_x(x_1, t)\right),$
and dividing by $\Delta x$ and taking the limit $\Delta x \to 0$ results in the wave equation
$u_{tt} = c^2 u_{xx}, \quad \text{with}\ c^2 = T/\rho.$
8.3 Fourier series

view tutorial
Our solution of the diffusion and wave equations will require use of a Fourier series. A periodic function $f(x)$ with period $2L$ can be represented as a Fourier series in the form
$f(x) = \frac{a_0}{2} + \sum_{n=1}^{\infty}\left(a_n\cos\frac{n\pi x}{L} + b_n\sin\frac{n\pi x}{L}\right).$ (8.2)
The coefficients can be determined from the following orthogonality relations, valid for positive integers $m$ and $n$:
$\int_{-L}^{L}\cos\frac{m\pi x}{L}\cos\frac{n\pi x}{L}\,dx = L\,\delta_{nm},$ (8.3)
$\int_{-L}^{L}\sin\frac{m\pi x}{L}\sin\frac{n\pi x}{L}\,dx = L\,\delta_{nm},$ (8.4)
$\int_{-L}^{L}\cos\frac{m\pi x}{L}\sin\frac{n\pi x}{L}\,dx = 0,$ (8.5)
where $\delta_{nm}$ is the Kronecker delta, equal to one when $n = m$ and zero otherwise. To verify (8.4), for instance, we change variables to $\xi = \pi x/L$ and compute, for $n \neq m$,
$\int_{-L}^{L}\sin\frac{m\pi x}{L}\sin\frac{n\pi x}{L}\,dx = \frac{L}{\pi}\int_{-\pi}^{\pi}\sin(m\xi)\sin(n\xi)\,d\xi$
$= \frac{L}{2\pi}\int_{-\pi}^{\pi}\left[\cos\left((m - n)\xi\right) - \cos\left((m + n)\xi\right)\right]d\xi$
$= \frac{L}{2\pi}\left[\frac{1}{m - n}\sin\left((m - n)\xi\right) - \frac{1}{m + n}\sin\left((m + n)\xi\right)\right]_{-\pi}^{\pi}$
$= 0.$
For $n = m$, however,
$\int_{-L}^{L}\sin^2\frac{m\pi x}{L}\,dx = \frac{L}{\pi}\int_{-\pi}^{\pi}\sin^2(m\xi)\,d\xi$
$= \frac{L}{\pi}\int_{-\pi}^{\pi}\frac{1 - \cos(2m\xi)}{2}\,d\xi$
$= \frac{L}{2\pi}\left[\xi - \frac{1}{2m}\sin(2m\xi)\right]_{-\pi}^{\pi}$
$= L.$
To determine the coefficients $a_n$, we multiply both sides of (8.2) by $\cos(m\pi x/L)$ and integrate over $x$ from $-L$ to $L$:
$\int_{-L}^{L}f(x)\cos\frac{m\pi x}{L}\,dx = \frac{a_0}{2}\int_{-L}^{L}\cos\frac{m\pi x}{L}\,dx + \sum_{n=1}^{\infty}\left\{a_n\int_{-L}^{L}\cos\frac{m\pi x}{L}\cos\frac{n\pi x}{L}\,dx + b_n\int_{-L}^{L}\cos\frac{m\pi x}{L}\sin\frac{n\pi x}{L}\,dx\right\}.$
If $m = 0$, then the second and third integrals on the right-hand-side are zero and the first integral is $2L$, so that the right-hand-side becomes $La_0$. If $m$ is a positive integer, then the first and third integrals on the right-hand-side are zero, and the second integral is $L\,\delta_{nm}$. For this case, we have
$\int_{-L}^{L}f(x)\cos\frac{m\pi x}{L}\,dx = \sum_{n=1}^{\infty}a_n L\,\delta_{nm} = La_m,$
where all the terms in the summation except $n = m$ are zero by virtue of the Kronecker delta. We therefore obtain, for $m = 0, 1, 2, \dots$,
$a_m = \frac{1}{L}\int_{-L}^{L}f(x)\cos\frac{m\pi x}{L}\,dx.$ (8.6)
To determine the coefficients $b_n$, we instead multiply both sides of (8.2) by $\sin(m\pi x/L)$, with $m$ a positive integer, and integrate:
$\int_{-L}^{L}f(x)\sin\frac{m\pi x}{L}\,dx = \frac{a_0}{2}\int_{-L}^{L}\sin\frac{m\pi x}{L}\,dx + \sum_{n=1}^{\infty}\left\{a_n\int_{-L}^{L}\sin\frac{m\pi x}{L}\cos\frac{n\pi x}{L}\,dx + b_n\int_{-L}^{L}\sin\frac{m\pi x}{L}\sin\frac{n\pi x}{L}\,dx\right\}.$
Here, the first and second integrals on the right-hand-side are zero, and the third integral is $L\,\delta_{nm}$, so that
$\int_{-L}^{L}f(x)\sin\frac{m\pi x}{L}\,dx = \sum_{n=1}^{\infty}b_n L\,\delta_{nm} = Lb_m.$
Hence, for $m = 1, 2, 3, \dots$,
$b_m = \frac{1}{L}\int_{-L}^{L}f(x)\sin\frac{m\pi x}{L}\,dx.$ (8.7)
Our results for the Fourier series of a function $f(x)$ with period $2L$ are thus given by (8.2), (8.6) and (8.7).
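The coefficient formulas (8.6) and (8.7) can be checked numerically by quadrature. In the sketch below (numpy is an assumed tool; $f(x) = x$ and $L = 2$ are illustrative choices, with exact sine coefficients $b_n = 2L(-1)^{n+1}/(n\pi)$):

```python
import numpy as np

L = 2.0
x = np.linspace(-L, L, 20001)
f = x.copy()                              # the odd function f(x) = x

def integrate(y):
    # composite trapezoid rule on the uniform grid x
    return np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x))

# Coefficients via (8.6) and (8.7)
a = [integrate(f * np.cos(m * np.pi * x / L)) / L for m in range(4)]
b = [integrate(f * np.sin(m * np.pi * x / L)) / L for m in range(1, 4)]
b_exact = [2 * L * (-1) ** (m + 1) / (m * np.pi) for m in range(1, 4)]
# a vanishes because f is odd; b matches the closed form
```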
8.4 Fourier cosine and sine series

view tutorial
The Fourier series simplifies if $f(x)$ is an even function such that $f(-x) = f(x)$, or an odd function such that $f(-x) = -f(x)$. Use will be made of the following facts. The function $\cos(n\pi x/L)$ is an even function and $\sin(n\pi x/L)$ is an odd function. The product of two even functions is an even function. The product of two odd functions is an even function. The product of an even and an odd function is an odd function. And if $f(x)$ is an even function, then
$\int_{-L}^{L}f(x)\,dx = 2\int_{0}^{L}f(x)\,dx;$
while if $f(x)$ is an odd function, then
$\int_{-L}^{L}f(x)\,dx = 0.$
We examine in turn the Fourier series for an even or an odd function. First, if $f(x)$ is even, then from (8.6) and (8.7) and our facts about even and odd functions,
$a_n = \frac{2}{L}\int_{0}^{L}f(x)\cos\frac{n\pi x}{L}\,dx,$
$b_n = 0.$ (8.8)
The Fourier series for an even function with period $2L$ is thus given by the Fourier cosine series
$f(x) = \frac{a_0}{2} + \sum_{n=1}^{\infty}a_n\cos\frac{n\pi x}{L}, \quad f(x)\ \text{even}.$ (8.9)
Second, if $f(x)$ is odd, then
$a_n = 0,$
$b_n = \frac{2}{L}\int_{0}^{L}f(x)\sin\frac{n\pi x}{L}\,dx;$ (8.10)
and the Fourier series for an odd function with period $2L$ is given by the Fourier sine series
$f(x) = \sum_{n=1}^{\infty}b_n\sin\frac{n\pi x}{L}, \quad f(x)\ \text{odd}.$ (8.11)
Example: Determine the Fourier cosine series of the even triangle function represented by
$f(x) = 1 - \frac{2|x|}{\pi}, \quad -\pi \le x \le \pi,$
with period $2\pi$, so that $L = \pi$.
The coefficient $a_0$ is
$a_0 = \frac{2}{\pi}\int_{0}^{\pi}\left(1 - \frac{2x}{\pi}\right)dx = \frac{2}{\pi}\left[x - \frac{x^2}{\pi}\right]_0^{\pi} = 0.$
The coefficients for $n > 0$ are
$a_n = \frac{2}{\pi}\int_{0}^{\pi}f(x)\cos(nx)\,dx$
$= \frac{2}{\pi}\int_{0}^{\pi}\left(1 - \frac{2x}{\pi}\right)\cos(nx)\,dx$
$= \frac{2}{\pi}\int_{0}^{\pi}\cos(nx)\,dx - \frac{4}{\pi^2}\int_{0}^{\pi}x\cos(nx)\,dx$
$= -\frac{4}{\pi^2}\left\{\left[\frac{x}{n}\sin(nx)\right]_0^{\pi} - \frac{1}{n}\int_{0}^{\pi}\sin(nx)\,dx\right\}$
$= \frac{4}{\pi^2 n}\int_{0}^{\pi}\sin(nx)\,dx$
$= -\frac{4}{\pi^2 n^2}\left[\cos(nx)\right]_0^{\pi}$
$= \frac{4}{\pi^2 n^2}\left(1 - \cos(n\pi)\right).$
Since
$\cos(n\pi) = \begin{cases} -1, & \text{if}\ n\ \text{odd}; \\ 1, & \text{if}\ n\ \text{even}; \end{cases}$
we have
$a_n = \begin{cases} 8/(\pi^2 n^2), & \text{if}\ n\ \text{odd}; \\ 0, & \text{if}\ n\ \text{even}. \end{cases}$
The Fourier cosine series for the triangle function is therefore given by
$f(x) = \frac{8}{\pi^2}\left(\cos x + \frac{\cos 3x}{3^2} + \frac{\cos 5x}{5^2} + \dots\right).$
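The convergence of this cosine series can be checked numerically by comparing a partial sum with the triangle function itself (a sketch assuming numpy; the term cutoff is an arbitrary choice):

```python
import numpy as np

def triangle(x):
    return 1.0 - 2.0 * np.abs(x) / np.pi

def partial_sum(x, nterms=1000):
    # Sum over odd n only, since even-n coefficients vanish
    n = np.arange(1, 2 * nterms, 2)
    terms = np.cos(np.outer(n, x)) / n[:, None] ** 2
    return (8.0 / np.pi ** 2) * terms.sum(axis=0)

x = np.linspace(-np.pi, np.pi, 101)
err = np.max(np.abs(partial_sum(x) - triangle(x)))
# the partial sum agrees with f(x) uniformly to well under 1e-3
```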
8.5 Solution of the diffusion equation

8.5.1 Homogeneous boundary conditions

view lecture
view lecture

We consider one-dimensional diffusion in a pipe of length $L$, and solve the diffusion equation for the concentration $u(x, t)$,
$u_t = Du_{xx}, \quad 0 \le x \le L, \quad t > 0.$ (8.12)
Both initial and boundary conditions are required for a unique solution. That is, we assume the initial concentration distribution in the pipe is given by
$u(x, 0) = f(x), \quad 0 \le x \le L.$ (8.13)
Furthermore, we assume homogeneous boundary conditions in which the pipe ends are held at zero concentration:
$u(0, t) = 0, \quad u(L, t) = 0, \quad t > 0.$ (8.14)
We solve the diffusion equation by the method of separation of variables, trying the ansatz
$u(x, t) = X(x)T(t).$ (8.15)
Substituting (8.15) into (8.12) and dividing both sides by $DXT$ yields
$\frac{T'}{DT} = \frac{X''}{X}.$
The left hand side of this equation is independent of $x$ and the right hand side is independent of $t$. Both sides of this equation are therefore independent of both $x$ and $t$ and equal to a constant. Introducing $-\lambda$ as the separation constant, we have
$\frac{T'}{DT} = \frac{X''}{X} = -\lambda,$
so that two ordinary differential equations are obtained:
$X'' + \lambda X = 0, \quad T' + \lambda DT = 0.$ (8.16)
Because of the boundary conditions, we must first consider the equation for $X(x)$. To solve, we need to determine the boundary conditions at $x = 0$ and $x = L$. Now, from (8.14) and (8.15),
$u(0, t) = X(0)T(t) = 0, \quad t > 0.$
Since $T(t)$ is not identically zero for all $t$ (which would result in the trivial solution for $u$), we must have $X(0) = 0$. Similarly, the boundary condition at $x = L$ requires $X(L) = 0$. We therefore consider the two-point boundary value problem
$X'' + \lambda X = 0, \quad X(0) = X(L) = 0.$ (8.17)
The equation given by (8.17) is called an ode eigenvalue problem. The allowed values of $\lambda$ and the corresponding functions $X(x)$ are called the eigenvalues and eigenfunctions of the differential equation. Since the form of the general solution of the ode depends on the sign of $\lambda$, we consider in turn the cases $\lambda > 0$, $\lambda < 0$ and $\lambda = 0$. For $\lambda > 0$, we write $\lambda = \mu^2$ and determine the general solution of
$X'' + \mu^2 X = 0$
to be
$X(x) = A\cos\mu x + B\sin\mu x.$
Applying the boundary condition at $x = 0$, we find $A = 0$. The boundary condition at $x = L$ then yields
$B\sin\mu L = 0.$
The solution $B = 0$ results in the trivial solution for $u$ and can be ruled out. Therefore, we must have
$\sin\mu L = 0,$
which is an equation for the eigenvalue $\mu$. The solutions are
$\mu = n\pi/L,$
where $n$ is an integer. We have thus determined the eigenvalues $\lambda = \mu^2 > 0$ to be
$\lambda_n = (n\pi/L)^2, \quad n = 1, 2, 3, \dots,$ (8.18)
with corresponding eigenfunctions
$X_n = \sin(n\pi x/L).$
For $\lambda < 0$, we write $\lambda = -\mu^2$ and determine the general solution of
$X'' - \mu^2 X = 0$
to be
$X(x) = A\cosh\mu x + B\sinh\mu x,$ (8.19)
where we have previously introduced the hyperbolic sine and cosine functions in §3.4.1. Applying the boundary condition at $x = 0$, we find $A = 0$. The boundary condition at $x = L$ then yields
$B\sinh\mu L = 0,$
which for $\mu \neq 0$ has only the solution $B = 0$. Therefore, there is no nontrivial solution for $u$ with $\lambda < 0$. Finally, for $\lambda = 0$, we have
$X'' = 0,$
with general solution
$X(x) = A + Bx.$
The boundary conditions at $x = 0$ and $x = L$ yield $A = B = 0$, so again there is no nontrivial solution for $u$ with $\lambda = 0$.
We now turn to the equation for $T(t)$. The equation corresponding to the eigenvalue $\lambda_n$, using (8.18), is given by
$T' + \left(n^2\pi^2 D/L^2\right)T = 0,$
which has solution proportional to
$T_n = b_n e^{-n^2\pi^2 Dt/L^2}.$ (8.20)
Therefore, the functions
$u_n(x, t) = \sin(n\pi x/L)\,e^{-n^2\pi^2 Dt/L^2}$ (8.21)
satisfy the pde given by (8.12) and the boundary conditions given by (8.14) for every positive integer $n$.
The principle of linear superposition for homogeneous linear differential equations then states that the general solution to (8.12) and (8.14) is given by
$u(x, t) = \sum_{n=1}^{\infty}b_n u_n(x, t) = \sum_{n=1}^{\infty}b_n\sin(n\pi x/L)\,e^{-n^2\pi^2 Dt/L^2}.$ (8.22)
The final solution step is to satisfy the initial conditions given by (8.13). At $t = 0$, we have
$f(x) = \sum_{n=1}^{\infty}b_n\sin(n\pi x/L).$ (8.23)
We immediately recognize (8.23) as a Fourier sine series (8.11) for an odd function $f(x)$ with period $2L$. Equation (8.23) is a periodic extension of our original $f(x)$ defined on $0 \le x \le L$, and is an odd function because of the boundary condition $f(0) = 0$. From our solution for the coefficients of a Fourier sine series (8.10), we determine
$b_n = \frac{2}{L}\int_{0}^{L}f(x)\sin\frac{n\pi x}{L}\,dx.$ (8.24)
Thus the solution to the diffusion equation with homogeneous Dirichlet boundary conditions defined by (8.12), (8.13) and (8.14) is given by (8.22) with the coefficients computed from (8.24).
Example: Determine the concentration of dye with initial condition $u(x, 0) = \delta(x - L/2)$, where $\delta$ is the Dirac delta function, corresponding to dye initially concentrated at the center of the pipe.
From (8.24), the Fourier coefficients are
$b_n = \frac{2}{L}\int_{0}^{L}\delta(x - L/2)\sin\frac{n\pi x}{L}\,dx = \frac{2}{L}\sin\frac{n\pi}{2} = \begin{cases} 2/L & \text{if}\ n = 1, 5, 9, \dots; \\ -2/L & \text{if}\ n = 3, 7, 11, \dots; \\ 0 & \text{if}\ n = 2, 4, 6, \dots. \end{cases}$
With $\sin(n\pi/2) = (-1)^m$ for odd $n = 2m + 1$, the solution (8.22) becomes
$u(x, t) = \frac{2}{L}\sum_{m=0}^{\infty}(-1)^m\sin\frac{(2m+1)\pi x}{L}\,e^{-(2m+1)^2\pi^2 Dt/L^2}.$
8.5.2 Inhomogeneous boundary conditions

view lecture

Consider a diffusion problem where one end of the pipe has dye of concentration held constant at $C_1$ and the other held constant at $C_2$, which could occur if the ends of the pipe had large reservoirs of fluid with different concentrations of dye. With $u(x, t)$ the concentration of dye, the boundary conditions are given by
$u(0, t) = C_1, \quad u(L, t) = C_2, \quad t > 0.$
The concentration $u(x, t)$ satisfies the diffusion equation with diffusivity $D$:
$u_t = Du_{xx}.$
If we try to solve this problem directly using separation of variables, we will run into trouble. Applying the inhomogeneous boundary condition at $x = 0$ directly to the ansatz $u(x, t) = X(x)T(t)$ results in
$u(0, t) = X(0)T(t) = C_1,$
so that
$X(0) = C_1/T(t).$
However, our separation of variables ansatz assumes $X(x)$ to be independent of $t$! We therefore say that inhomogeneous boundary conditions are not separable.
The proper way to solve a problem with inhomogeneous boundary conditions is to transform it into another problem with homogeneous boundary conditions.
As $t \to \infty$, we expect the concentration to approach a steady state $v(x)$ that satisfies $v'' = 0$ with boundary conditions $v(0) = C_1$ and $v(L) = C_2$, so that
$v(x) = C_1 + (C_2 - C_1)\frac{x}{L}, \quad 0 \le x \le L.$
We then write $u(x, t) = v(x) + w(x, t)$, and substitution into the diffusion equation yields
$\frac{\partial}{\partial t}\left(v(x) + w(x, t)\right) = D\frac{\partial^2}{\partial x^2}\left(v(x) + w(x, t)\right),$
or
$w_t = Dw_{xx},$
since $v_t = 0$ and $v_{xx} = 0$.
The boundary conditions satisfied by $w$ are
$w(0, t) = u(0, t) - v(0) = 0,$
$w(L, t) = u(L, t) - v(L) = 0,$
so that $w$ is observed to satisfy homogeneous boundary conditions. If the initial conditions are given by $u(x, 0) = f(x)$, then the initial conditions for $w$ are
$w(x, 0) = u(x, 0) - v(x) = f(x) - v(x).$
The resulting equations may then be solved for $w(x, t)$ using the technique for homogeneous boundary conditions, and $u(x, t)$ subsequently determined.
8.5.3 Pipe with closed ends

view lecture
view lecture

There is no diffusion of dye through the ends of a sealed pipe. Accordingly, the mass flux of dye through the pipe ends, given by (8.1), is zero, so that the boundary conditions on the dye concentration $u(x, t)$ become
$u_x(0, t) = 0, \quad u_x(L, t) = 0, \quad t > 0,$ (8.25)
which are called homogeneous Neumann boundary conditions. Again applying separation of variables, the ode eigenvalue problem for $X(x)$ is now
$X'' + \lambda X = 0, \quad X'(0) = X'(L) = 0.$ (8.26)
Again, we consider in turn the cases $\lambda > 0$, $\lambda < 0$ and $\lambda = 0$. For $\lambda > 0$, we write $\lambda = \mu^2$ and determine the general solution of (8.26) to be
$X(x) = A\cos\mu x + B\sin\mu x,$
so that taking the derivative
$X'(x) = -A\mu\sin\mu x + B\mu\cos\mu x.$
Applying the boundary condition $X'(0) = 0$, we find $B = 0$. The boundary condition at $x = L$ then yields
$-A\mu\sin\mu L = 0.$
The solution $A = 0$ results in the trivial solution for $u$ and can be ruled out. Therefore, we must have
$\sin\mu L = 0,$
with solutions
$\mu = n\pi/L,$
where $n$ is an integer. We have thus determined the eigenvalues $\lambda = \mu^2 > 0$ to be
$\lambda_n = (n\pi/L)^2, \quad n = 1, 2, 3, \dots,$ (8.27)
with corresponding eigenfunctions
$X_n = \cos(n\pi x/L).$ (8.28)
For $\lambda < 0$, there is again no nontrivial solution for $u$. For $\lambda = 0$, however, the general solution is $X(x) = A + Bx$, and the boundary conditions $X'(0) = X'(L) = 0$ require $B = 0$ with $A$ arbitrary. Therefore, $\lambda = 0$ is also an eigenvalue, with constant eigenfunction $X_0 = 1$, which can be seen as extending the formula obtained for the eigenvalues and eigenfunctions for positive $\lambda$ given by (8.27) and (8.28) to $n = 0$.
We now turn to the equation for $T(t)$. The equation corresponding to eigenvalue $\lambda_n$, using (8.27), is given by
$T' + \left(n^2\pi^2 D/L^2\right)T = 0,$
which has solution proportional to
$T_n = c_n e^{-n^2\pi^2 Dt/L^2}.$ (8.29)
Therefore, the functions
$u_n(x, t) = \cos(n\pi x/L)\,e^{-n^2\pi^2 Dt/L^2}$ (8.30)
satisfy the pde given by (8.12) and the boundary conditions given by (8.25) for every nonnegative integer $n$.
The principle of linear superposition then yields the general solution as
$u(x, t) = \sum_{n=0}^{\infty}c_n u_n(x, t) = \frac{c_0}{2} + \sum_{n=1}^{\infty}c_n\cos(n\pi x/L)\,e^{-n^2\pi^2 Dt/L^2},$ (8.31)
where we have redefined the $n = 0$ constant so that $c_0$ takes the conventional form of a Fourier coefficient. At $t = 0$, the initial condition (8.13) requires
$f(x) = \frac{c_0}{2} + \sum_{n=1}^{\infty}c_n\cos(n\pi x/L),$ (8.32)
which we recognize as a Fourier cosine series (8.9) for an even function $f(x)$ with period $2L$. From (8.8), the Fourier coefficients are given by
$c_n = \frac{2}{L}\int_{0}^{L}f(x)\cos\frac{n\pi x}{L}\,dx.$ (8.33)
Thus the solution to the diffusion equation with homogeneous Neumann boundary conditions defined by (8.12), (8.13) and (8.25) is given by (8.31) with the coefficients computed from (8.33).
112
As an example, suppose that the dye is initially concentrated at the center of
the pipe, f(x) = u_0 δ(x - L/2). Then the coefficients (8.33) become
a_n = (2/L) ∫_0^L u_0 δ(x - L/2) cos (nπx/L) dx
    = (2u_0/L) cos (nπ/2)
    = {  2u_0/L   if n = 0, 4, 8, . . . ;
        -2u_0/L   if n = 2, 6, 10, . . . ;
         0        if n = 1, 3, 5, . . . .
The first two terms in the series for u(x, t) are given by
u(x, t) = (2u_0/L) [ 1/2 - cos (2πx/L) e^(-4π²Dt/L²) + . . . ].
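The example series can also be summed numerically. The sketch below (assuming numpy; u_0, L and D are set to one purely for illustration) confirms that for t much larger than L²/D the concentration relaxes to the uniform value a_0/2 = u_0/L, as conservation of the total amount of dye requires.

```python
import numpy as np

u0, L, D = 1.0, 1.0, 1.0               # illustrative parameter values
x = np.linspace(0.0, L, 201)

def u(x, t, nmax=200):
    """Partial sum of the closed-pipe series with a_n = (2*u0/L)*cos(n*pi/2)."""
    total = (u0 / L) * np.ones_like(x)          # the n = 0 term, a_0/2
    for n in range(1, nmax + 1):
        a_n = (2.0 * u0 / L) * np.cos(n * np.pi / 2)
        total += (a_n * np.cos(n * np.pi * x / L)
                  * np.exp(-n**2 * np.pi**2 * D * t / L**2))
    return total

# For t >> L^2/D the dye is spread uniformly along the pipe.
late = u(x, t=1.0)
assert np.max(np.abs(late - u0 / L)) < 1e-12
```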
8.6   Solution of the wave equation
8.6.1   Plucked string
view lecture
We assume an elastic string with fixed ends is plucked like a guitar string. The
governing equation for u(x, t), the position of the string from its equilibrium
position, is the wave equation
u_tt = c² u_xx,   (8.34)
with the boundary conditions at the string ends given by
u(0, t) = 0,   u(L, t) = 0.   (8.35)
Since the plucked string is initially at rest, the initial conditions are
u(x, 0) = f(x),   u_t(x, 0) = 0,   0 ≤ x ≤ L.   (8.36)
Again we use the method of separation of variables and try the ansatz
u(x, t) = X(x)T(t).   (8.37)
Substitution of our ansatz (8.37) into the wave equation (8.34) and separating
variables results in
X″/X = (1/c²) T″/T = -λ,
yielding the two ordinary differential equations
X″ + λX = 0,   T″ + λc²T = 0.   (8.38)
We solve first the equation for X(x). The appropriate boundary conditions for
X are given by
X(0) = 0,   X(L) = 0,   (8.39)
and we have solved this equation for X(x) previously in §8.5.1 (see (8.17)).
A nontrivial solution exists only when λ > 0, and our previously determined
solution was
λ_n = (nπ/L)²,   n = 1, 2, 3, . . . ,   (8.40)
with corresponding eigenfunctions
X_n = sin (nπx/L).   (8.41)
With λ = λ_n, the equation for T(t) becomes
T″ + (n²π²c²/L²) T = 0,
with general solution
T(t) = A cos (nπct/L) + B sin (nπct/L).   (8.42)
The zero initial string velocity requires T′(0) = 0, so that B = 0, and
u_n(x, t) = sin (nπx/L) cos (nπct/L),   n = 1, 2, 3, . . .
satisfies the wave equation, the boundary conditions at the string ends, and the
assumption of zero initial string velocity. Linear superposition of these solutions
results in the general solution for u(x, t) of the form
u(x, t) = Σ_{n=1}^∞ b_n sin (nπx/L) cos (nπct/L).   (8.43)
The remaining condition to satisfy is the initial displacement of the string, the
first equation of (8.36). We have
f(x) = Σ_{n=1}^∞ b_n sin (nπx/L),
which is observed to be a Fourier sine series (8.11) for an odd function with
period 2L. Therefore, the coefficients are given by (8.10),
b_n = (2/L) ∫_0^L f(x) sin (nπx/L) dx,   n = 1, 2, 3, . . . .   (8.44)
Our solution to the wave equation with plucked string is thus given by (8.43)
and (8.44). Notice that the solution is time periodic with period 2L/c. The
corresponding fundamental frequency is the reciprocal of the period and is given
by c/2L. From our derivation of the wave equation in §8.2, the velocity c is
related to the density ρ and tension T of the string by c² = T/ρ.
Therefore, the fundamental frequency (pitch) of our guitar string increases (is
raised) with increasing tension, decreasing string density, and decreasing string
length. Indeed, these are exactly the parameters used to construct, tune and
play a guitar.
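The dependence of the pitch on the string parameters is easy to confirm numerically. The sketch below uses only the standard library; the string values are made-up illustrative numbers, not data from these notes.

```python
import math

def fundamental_frequency(tension, density, length):
    """Fundamental frequency c/(2L) with wave speed c = sqrt(T/rho).

    Here rho is the mass per unit length of the string.
    """
    c = math.sqrt(tension / density)
    return c / (2.0 * length)

# Made-up guitar-string values: 70 N tension, 0.4 g/m, 65 cm length.
f0 = fundamental_frequency(70.0, 4.0e-4, 0.65)

# The pitch is raised by tightening, lightening, or shortening the string.
assert fundamental_frequency(80.0, 4.0e-4, 0.65) > f0   # more tension
assert fundamental_frequency(70.0, 3.0e-4, 0.65) > f0   # lighter string
assert fundamental_frequency(70.0, 4.0e-4, 0.50) > f0   # shorter string
```

Note that quadrupling the tension doubles the frequency, since the pitch scales as the square root of T/ρ.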
The wave nature of our solution and the physical significance of the velocity c
can be made more transparent if we make use of the trigonometric identity
sin a cos b = (1/2) ( sin (a + b) + sin (a - b) ).
With this identity, our solution (8.43) can be rewritten as
u(x, t) = (1/2) Σ_{n=1}^∞ b_n ( sin (nπ(x + ct)/L) + sin (nπ(x - ct)/L) ).   (8.45)
The first and second sine functions can be interpreted as traveling waves moving
to the left or the right with velocity c. This can be seen by incrementing time,
t → t + δ, and observing that the value of the first sine function is unchanged
provided the position is shifted by x → x - cδ, and the second sine function is
unchanged provided x → x + cδ. Two waves traveling in opposite directions
with equal amplitude result in a standing wave.
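The identity, and hence the decomposition (8.45) of each standing-wave mode into counter-propagating traveling waves, can be verified pointwise. A minimal sketch assuming numpy:

```python
import numpy as np

rng = np.random.default_rng(0)
aa = rng.uniform(-10.0, 10.0, size=1000)
bb = rng.uniform(-10.0, 10.0, size=1000)

# sin a cos b = (sin(a + b) + sin(a - b)) / 2 at random arguments.
assert np.allclose(np.sin(aa) * np.cos(bb),
                   0.5 * (np.sin(aa + bb) + np.sin(aa - bb)))

# Each standing-wave mode equals two traveling waves of half the amplitude.
L, c, n = 1.0, 1.0, 3                  # illustrative mode and parameters
x = np.linspace(0.0, L, 101)
for t in (0.0, 0.17, 0.5):
    standing = np.sin(n * np.pi * x / L) * np.cos(n * np.pi * c * t / L)
    traveling = 0.5 * (np.sin(n * np.pi * (x + c * t) / L)
                       + np.sin(n * np.pi * (x - c * t) / L))
    assert np.allclose(standing, traveling)
```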
8.6.2
Hammered string
view lecture
In contrast to a guitar string that is plucked, a piano string is hammered. The
appropriate initial conditions for a piano string would be
u(x, 0) = 0,   u_t(x, 0) = g(x),   0 ≤ x ≤ L.   (8.46)
Our solution proceeds as previously, except that now the homogeneous initial
condition on T(t) is T(0) = 0, so that A = 0 in (8.42). The wave equation
solution is therefore
u(x, t) = Σ_{n=1}^∞ b_n sin (nπx/L) sin (nπct/L).   (8.47)
Imposition of the initial condition on the string velocity then yields
g(x) = (πc/L) Σ_{n=1}^∞ n b_n sin (nπx/L).
The coefficient of the Fourier sine series for g(x) is seen to be nπc b_n/L, and
we have
nπc b_n/L = (2/L) ∫_0^L g(x) sin (nπx/L) dx,
or
b_n = (2/(nπc)) ∫_0^L g(x) sin (nπx/L) dx.
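For a concrete check of these coefficients, take the illustrative hammer profile g(x) = v_0 sin(πx/L), for which only b_1 = v_0 L/(πc) is nonzero. The sketch below (assuming numpy; the parameter values are arbitrary) computes b_n by quadrature and verifies that u_t(x, 0) reproduces g(x).

```python
import numpy as np

try:                                   # NumPy 2.x renamed trapz to trapezoid
    from numpy import trapezoid as trapz
except ImportError:
    from numpy import trapz

L, c, v0 = 1.0, 1.0, 2.0               # illustrative parameter values
x = np.linspace(0.0, L, 100001)
g = v0 * np.sin(np.pi * x / L)         # illustrative initial velocity profile

def b(n):
    """b_n = 2/(n*pi*c) times the integral of g(x) sin(n*pi*x/L)."""
    return 2.0 / (n * np.pi * c) * trapz(g * np.sin(n * np.pi * x / L), x)

assert abs(b(1) - v0 * L / (np.pi * c)) < 1e-8   # the only excited mode
assert abs(b(2)) < 1e-8 and abs(b(3)) < 1e-8

# u_t(x, 0) = sum over n of b_n * (n*pi*c/L) * sin(n*pi*x/L) recovers g(x).
ut0 = sum(b(n) * (n * np.pi * c / L) * np.sin(n * np.pi * x / L)
          for n in range(1, 6))
assert np.allclose(ut0, g, atol=1e-7)
```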
8.6.3   General initial conditions
view lecture
If the initial conditions on u(x, t) are generalized to
u(x, 0) = f(x),   u_t(x, 0) = g(x),   0 ≤ x ≤ L,   (8.48)
then the solution to the wave equation can be determined using the principle of
linear superposition. Suppose v(x, t) is the solution to the wave equation with
initial conditions (8.36) and w(x, t) is the solution to the wave equation with
initial conditions (8.46). Then we have
u(x, t) = v(x, t) + w(x, t),
since u(x, t) satisfies the wave equation, the boundary conditions, and the initial
conditions given by (8.48).
8.7   The Laplace equation
The equilibrium (time-independent) solutions of the two-dimensional diffusion
equation satisfy
u_xx + u_yy = 0,   (8.49)
which is called the Laplace equation.
8.7.1   Dirichlet problem for a rectangle
view lecture
We consider the Laplace equation (8.49) for the interior of a rectangle 0 < x < a,
0 < y < b, (see Fig. 8.4), with boundary conditions
u(x, 0) = 0,   u(x, b) = 0,   0 < x < a;
u(0, y) = 0,   u(a, y) = f(y),   0 ≤ y ≤ b.
[Figure 8.4: The Dirichlet problem for the Laplace equation in a rectangle:
u = 0 on the sides x = 0, y = 0 and y = b, and u = f(y) on the side x = a.]
With the separable ansatz u(x, y) = X(x)Y(y), separation of variables yields
the two ordinary differential equations
X″ - λX = 0,   Y″ + λY = 0.
The homogeneous boundary conditions require X(0) = 0 and Y(0) = Y(b) = 0.
The problem for Y(y) is our standard eigenvalue problem, with eigenvalues
λ_n = (nπ/b)² and eigenfunctions Y_n = sin (nπy/b), and the corresponding
solutions for X(x) vanishing at x = 0 are X_n = sinh (nπx/b). Linear
superposition then yields
u(x, y) = Σ_{n=1}^∞ c_n sinh (nπx/b) sin (nπy/b).
The inhomogeneous boundary condition u(a, y) = f(y) then requires
f(y) = Σ_{n=1}^∞ c_n sinh (nπa/b) sin (nπy/b),
which we recognize as a Fourier sine series for an odd function with period 2b,
with Fourier coefficient c_n sinh (nπa/b). The solution for the coefficients is
therefore given by
c_n sinh (nπa/b) = (2/b) ∫_0^b f(y) sin (nπy/b) dy.
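As an illustrative check (assuming numpy, with a = b = 1 and the boundary data f(y) = sin(πy/b), for which only c_1 = 1/sinh(πa/b) is nonzero), the sketch below evaluates the single-mode solution, verifies the boundary condition at x = a, and confirms that the result is harmonic.

```python
import numpy as np

a, b = 1.0, 1.0                        # illustrative rectangle dimensions
c1 = 1.0 / np.sinh(np.pi * a / b)      # from c_1 sinh(pi*a/b) = 1

def u(x, y):
    """Single-mode solution for boundary data f(y) = sin(pi*y/b)."""
    return c1 * np.sinh(np.pi * x / b) * np.sin(np.pi * y / b)

# Boundary values: zero on the side x = 0, f(y) on the side x = a.
y = np.linspace(0.0, b, 101)
assert np.allclose(u(0.0, y), 0.0)
assert np.allclose(u(a, y), np.sin(np.pi * y / b))

# Five-point finite-difference Laplacian is small in the interior (u is harmonic).
h, x0, y0 = 1e-3, 0.6, 0.4
lap = (u(x0 + h, y0) + u(x0 - h, y0) + u(x0, y0 + h) + u(x0, y0 - h)
       - 4.0 * u(x0, y0)) / h**2
assert abs(lap) < 1e-4
```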
8.7.2   Dirichlet problem for a circle
view lecture
view lecture
The Laplace equation is commonly written symbolically as
∇²u = 0,   (8.50)
where ∇² is called the Laplacian.
Here we consider boundary conditions specified on a circle, and write the
Laplacian in polar coordinates by changing variables from Cartesian coordinates.
Polar coordinates are defined by the transformation (r, θ) → (x, y):
x = r cos θ,   y = r sin θ;   (8.51)
and the chain rule gives for the partial derivatives
∂u/∂r = (∂u/∂x)(∂x/∂r) + (∂u/∂y)(∂y/∂r),
∂u/∂θ = (∂u/∂x)(∂x/∂θ) + (∂u/∂y)(∂y/∂θ).   (8.52)
After taking the partial derivatives of x and y using (8.51), we can write the
transformation (8.52) in matrix form as
[ ∂u/∂r ]   [   cos θ      sin θ ] [ ∂u/∂x ]
[ ∂u/∂θ ] = [ -r sin θ   r cos θ ] [ ∂u/∂y ].   (8.53)
Inversion of (8.53) can be determined from the following result, commonly
proved in a linear algebra class. If
A = [ a  b ]
    [ c  d ],   det A ≠ 0,
then
A⁻¹ = (1/det A) [  d  -b ]
                [ -c   a ].
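The inversion formula, applied to the matrix in (8.53) whose determinant is r, can be checked numerically. The sketch below (assuming numpy, at an arbitrarily chosen point (r, θ)) compares it against numpy's matrix inverse.

```python
import numpy as np

def inv2(A):
    """Inverse of a 2x2 matrix via A^{-1} = (1/det A) [[d, -b], [-c, a]]."""
    (a, b), (c, d) = A
    det = a * d - b * c
    return np.array([[d, -b], [-c, a]]) / det

r, th = 1.7, 0.9                       # arbitrary illustrative point
A = np.array([[np.cos(th), np.sin(th)],
              [-r * np.sin(th), r * np.cos(th)]])   # matrix in (8.53)

assert np.isclose(np.linalg.det(A), r)              # det A = r
assert np.allclose(inv2(A), np.linalg.inv(A))

# The first row of the inverse gives d/dx in polar form: cos(th), -sin(th)/r.
assert np.allclose(inv2(A)[0], [np.cos(th), -np.sin(th) / r])
```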
Therefore, inverting (8.53) (with determinant r), we obtain
∂u/∂x = cos θ ∂u/∂r - (sin θ / r) ∂u/∂θ,   (8.54)
∂u/∂y = sin θ ∂u/∂r + (cos θ / r) ∂u/∂θ.   (8.55)
These two relations can be combined into the single complex relation
∂u/∂x + i ∂u/∂y = e^(iθ) ( ∂u/∂r + (i/r) ∂u/∂θ ),   (8.56)
so that the Laplacian may be found by multiplying both sides of (8.56) by its
complex conjugate, taking care with the computation of the derivatives on the
right-hand side:
( ∂/∂x + i ∂/∂y ) ( ∂/∂x - i ∂/∂y ) u
    = e^(iθ) ( ∂/∂r + (i/r) ∂/∂θ ) e^(-iθ) ( ∂/∂r - (i/r) ∂/∂θ ) u
    = ∂²u/∂r² + (1/r) ∂u/∂r + (1/r²) ∂²u/∂θ².
We have therefore determined that the Laplacian in polar coordinates is given
by
∇² = ∂²/∂r² + (1/r) ∂/∂r + (1/r²) ∂²/∂θ².   (8.57)
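The polar form (8.57) can be spot-checked against the Cartesian Laplacian with finite differences. In the sketch below (assuming numpy), the test function u = x²y, whose exact Laplacian is 2y, and the sample point are arbitrary illustrative choices.

```python
import numpy as np

def u_cart(x, y):
    return x**2 * y                    # test function with exact Laplacian 2y

def u_polar(r, th):
    return u_cart(r * np.cos(th), r * np.sin(th))

x0, y0 = 1.0, 0.7                      # arbitrary interior sample point
r0, th0 = np.hypot(x0, y0), np.arctan2(y0, x0)
h = 1e-4

def d2(f, s, t, wrt):
    """Central second difference of f in its first (wrt=0) or second slot."""
    if wrt == 0:
        return (f(s + h, t) - 2.0 * f(s, t) + f(s - h, t)) / h**2
    return (f(s, t + h) - 2.0 * f(s, t) + f(s, t - h)) / h**2

def d1(f, s, t):
    """Central first difference of f in its first slot."""
    return (f(s + h, t) - f(s - h, t)) / (2.0 * h)

lap_cart = d2(u_cart, x0, y0, 0) + d2(u_cart, x0, y0, 1)
lap_polar = (d2(u_polar, r0, th0, 0) + d1(u_polar, r0, th0) / r0
             + d2(u_polar, r0, th0, 1) / r0**2)

assert abs(lap_cart - 2.0 * y0) < 1e-4
assert abs(lap_polar - 2.0 * y0) < 1e-4   # (8.57) agrees with the Cartesian form
```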
We now consider the solution of the Laplace equation inside a circle of radius
a, subject to the boundary condition
u(a, θ) = f(θ),   0 ≤ θ ≤ 2π.   (8.58)
Substituting the separable ansatz u(r, θ) = R(r)Θ(θ) into ∇²u = 0 written
with (8.57), and multiplying through by r²/(RΘ), we obtain
(r²R″ + rR′)/R = -Θ″/Θ = λ,
where λ is the separation constant. We thus obtain the two ordinary differential
equations
r²R″ + rR′ - λR = 0,   Θ″ + λΘ = 0.   (8.59)
Periodicity of Θ(θ) with period 2π requires λ = n² with Θ_n = A cos nθ + B sin nθ,
n = 0, 1, 2, . . . ; the equation for R(r) is then an Euler equation with solutions
r^n and r^(-n), and finiteness of u(r, θ) at r = 0 eliminates r^(-n), so that R_n
is proportional to r^n. Linear superposition over all n then gives
u(r, θ) = a_0/2 + Σ_{n=1}^∞ (r/a)^n ( a_n cos nθ + b_n sin nθ ),   (8.60)
where we have separated out the n = 0 solution to write our solution in a form
similar to the standard Fourier series given by (8.2). The remaining boundary
condition (8.58) specifies the values of u on the circle of radius a, and imposition
of this boundary condition results in
f(θ) = a_0/2 + Σ_{n=1}^∞ ( a_n cos nθ + b_n sin nθ ).   (8.61)
Equation (8.61) is a Fourier series for the periodic function f(θ) with period 2π,
i.e., L = π in (8.2). The Fourier coefficients a_n and b_n are therefore given by
(8.6) and (8.7) to be
a_n = (1/π) ∫_0^{2π} f(φ) cos nφ dφ,   n = 0, 1, 2, . . . ;
b_n = (1/π) ∫_0^{2π} f(φ) sin nφ dφ,   n = 1, 2, 3, . . . .   (8.62)
A remarkable fact is that the infinite series solution for (, ) can be summed
explicitly. Substituting (8.62) into (8.60), we obtain
u(r, θ) = (1/(2π)) ∫_0^{2π} dφ f(φ) [ 1 + 2 Σ_{n=1}^∞ (r/a)^n ( cos nθ cos nφ + sin nθ sin nφ ) ]
        = (1/(2π)) ∫_0^{2π} dφ f(φ) [ 1 + 2 Σ_{n=1}^∞ (r/a)^n cos n(θ - φ) ].
The summation can be performed by writing 2 cos n(θ - φ) = e^(in(θ-φ)) + e^(-in(θ-φ))
and summing the resulting geometric series:
1 + 2 Σ_{n=1}^∞ (r/a)^n cos n(θ - φ)
    = 1 + Σ_{n=1}^∞ ( (r/a) e^(i(θ-φ)) )^n + c.c.
    = 1 + ( r e^(i(θ-φ)) / ( a - r e^(i(θ-φ)) ) ) + c.c.
    = (a² - r²) / ( a² - 2ar cos (θ - φ) + r² ).
Therefore,
u(r, θ) = ( (a² - r²) / (2π) ) ∫_0^{2π} f(φ) dφ / ( a² - 2ar cos (θ - φ) + r² ),
an integral result known as Poisson's formula.