
Part B: Asymptotic Expansions and Integrals

W.R. Young¹

April 2017

¹ Scripps Institution of Oceanography, University of California at San Diego, La Jolla, CA 92093–0230, USA. wryoung@ucsd.edu
Contents

1 Why integrals? 3
1.1 The Airy function . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.2 Recursion relations: the example n! . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
1.3 Special functions defined by integrals . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
1.4 Elementary methods for evaluating integrals . . . . . . . . . . . . . . . . . . . . . . . 6
1.5 Complexification . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
1.6 Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10

2 What is asymptotic? 14
2.1 An example: the erf function . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
2.2 The Landau symbols . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
2.3 The definition of asymptoticity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
2.4 Manipulation of asymptotic series . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
2.5 Stokes lines . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
2.6 Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26

3 Integration by parts (IP) 29


3.1 The Taylor series, with remainder . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
3.2 Large-s behaviour of Laplace transforms . . . . . . . . . . . . . . . . . . . . . . . . 31
3.3 Watson’s Lemma . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
3.4 Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34

4 Laplace’s method 36
4.1 An example — the Gaussian approximation . . . . . . . . . . . . . . . . . . . . . . . 37
4.2 Another Laplacian example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39
4.3 Laplace’s method with moving maximum . . . . . . . . . . . . . . . . . . . . . . . . 41
4.4 Uniform approximations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
4.5 Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44

5 Fourier Integrals and Stationary phase 49


5.1 Fourier Series . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49
5.2 Generalized Fourier Integrals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54
5.3 The Airy function . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 58
5.4 Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 59

6 Dispersive wave equations 62


6.1 Group velocity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 62
6.2 The 1D KG equation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65
6.3 Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 70

7 Constant-phase (a.k.a. steepest-descent) contours 71
7.1 Asymptotic evaluation of an integral using a constant-phase contour . . . . . . . . . 72
7.2 Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 73

8 The saddle-point method 74


8.1 The Airy function as x → ∞ . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 74
8.2 The Laplace transform of a rapidly oscillatory function . . . . . . . . . . . . . . . . . 76
8.3 Inversion of a Laplace transform . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 80

9 Evaluating integrals by matching 81


9.1 Singularity subtraction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 81
9.2 Local and global contributions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 82
9.3 An electrostatic problem — H section 3.5 . . . . . . . . . . . . . . . . . . . . . . . . 85
9.4 Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 88

Lecture 1

Why integrals?

Integrals occur frequently as the solution of partial and ordinary differential equations, and as the
definition of many “special functions”. The coefficients of a Fourier series are given as integrals
involving the target function etc. Green’s function technology expresses the solution of a differ-
ential equation as a convolution integral etc. Integrals are also important because they provide
the simplest and most accessible examples of concepts like asymptoticity and techniques such as
asymptotic matching.

1.1 The Airy function


Airy’s equation,
y ′′ − xy = 0 , (1.1)
is an important second-order differential equation. The two linearly independent solutions, Ai(x)
and Bi(x), are shown in figure 1.1. The Airy function, Ai(x), is defined as the solution that decays
as x → ∞, with the normalization

∫_{−∞}^{∞} Ai(x) dx = 1 .   (1.2)

We obtain an integral representation of Ai(x) by attacking (1.1) with the Fourier transform. Denote the Fourier transform of Ai by

Ãi(k) = ∫_{−∞}^{∞} Ai(x) e^{−ikx} dx .   (1.3)

Fourier transforming (1.1), we eventually find

Ãi(k) = e^{ik³/3} .   (1.4)

Using the Fourier integral theorem,

Ai(x) = ∫_{−∞}^{∞} e^{ikx + ik³/3} dk/2π ,   (1.5)
      = (1/π) ∫_0^∞ cos(kx + k³/3) dk .   (1.6)

Notice that the integral converges at k = ∞ because of destructive interference or catastrophic cancellation.


Figure 1.1: The functions Ai(x) and Bi(x). The Airy function decays rapidly as x → ∞ and rather
slowly as x → −∞ .

We’ll develop several techniques for extracting information from integral representations such as (1.6). We’ll show that as x → −∞:

Ai(x) ∼ (1/(√π |x|^{1/4})) cos( (2/3)|x|^{3/2} − π/4 ) ,   (1.7)

and as x → +∞:

Ai(x) ∼ e^{−(2/3)x^{3/2}} / (2√π x^{1/4}) .   (1.8)
Exercise: Fill in the details between (1.3) and (1.4).
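The asymptotic formula (1.8) is easy to test numerically. A minimal sketch, assuming scipy is available; scipy.special.airy returns the tuple (Ai, Ai′, Bi, Bi′):

```python
# Numerical check of the x -> +infinity behaviour (1.8) of Ai(x).
import numpy as np
from scipy.special import airy  # airy(x) returns (Ai, Ai', Bi, Bi')

def ai_leading_order(x):
    # (1.8): Ai(x) ~ exp(-2 x^{3/2}/3) / (2 sqrt(pi) x^{1/4})
    return np.exp(-2.0 * x**1.5 / 3.0) / (2.0 * np.sqrt(np.pi) * x**0.25)

for x in (2.0, 5.0, 10.0):
    exact = airy(x)[0]
    print(x, exact / ai_leading_order(x))  # ratio tends to 1 as x grows
```

The ratio creeps toward 1 as x increases, which is exactly what an asymptotic (as opposed to convergent) approximation promises.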

1.2 Recursion relations: the example n!


The factorial function
an = n! (1.9)
satisfies the recursion relation
an+1 = (n + 1)an , a0 = 1 . (1.10)
The integral representation

a_n = ∫_0^∞ t^n e^{−t} dt   (1.11)
is equivalent to both the initial condition and the recursion relation. The proof is integration by
parts:
∫_0^∞ t^{n+1} e^{−t} dt = −∫_0^∞ t^{n+1} (d/dt)e^{−t} dt ,   (1.12)
                        = −[t^{n+1} e^{−t}]_0^∞ + (n + 1) ∫_0^∞ t^n e^{−t} dt ,   (1.13)

where the boundary term vanishes.
Exercise: Memorize

n! = ∫_0^∞ t^n e^{−t} dt .   (1.14)

Later we will use the integral representation (1.14) to obtain Stirling’s approximation:

n! ∼ √(2πn) (n/e)^n ,   as n → ∞.   (1.15)
Exercise: Compare Stirling’s approximation to n! with n = 1, 2 and 3.
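That comparison is quick to do by machine as well. A sketch using only the Python standard library:

```python
# Stirling's approximation (1.15) versus the exact factorial.
import math

def stirling(n):
    return math.sqrt(2.0 * math.pi * n) * (n / math.e) ** n

for n in (1, 2, 3, 10, 50):
    print(n, math.factorial(n) / stirling(n))  # ratio -> 1; error is O(1/n)
```

Even at n = 1 the ratio is already about 1.08, and it improves roughly like 1/(12n).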


Figure 1.2: The Γ-function and its reciprocal.

1.3 Special functions defined by integrals


We’ll soon encounter the error function and its complement:

erf(z) ≝ (2/√π) ∫_0^z e^{−t²} dt   and   erfc(z) ≝ 1 − erf(z) .   (1.16)
Another special function defined by an integral is the exponential integral of order n:
E_n(z) ≝ ∫_z^∞ (e^{−t}/t^n) dt .   (1.17)
We refer to the case n = 1 simply as the “exponential integral”.
Example: Singularity subtraction — small z behavior of E_n(z).

The Gamma function: Γ(z) ≝ ∫_0^∞ t^{z−1} e^{−t} dt , for ℜz > 0.
There are many other examples of special functions defined by integrals. Probably the most impor-
tant is the Γ-function, which is defined in the heading of this section — see Figure 1.2. If ℜz > 0
we can use integration by parts to show that Γ(z) satisfies the functional equation

zΓ(z) = Γ(z + 1) . (1.18)

Using analytic continuation¹ this result is valid for all z ≠ 0, −1, −2, · · · . Thus the functional
equation (1.18) is used to extend the definition of the Γ-function throughout the complex plane. Notice
that if z is an integer, n, then
Γ(n + 1) = n! (1.19)
¹ If f(z) and g(z) are analytic in a domain D, and if f = g in a smaller domain E ⊂ D, then f = g throughout D.

The special value

Γ(1/2) = ∫_0^∞ (e^{−t}/√t) dt = ∫_{−∞}^{∞} e^{−u²} du = √π   (1.20)

is important.
Exercise: Use the functional equation (1.18) to obtain Γ(3/2) and Γ(−1/2).
Exercise: Use the functional equation (1.18) to find the leading-order behaviour of Γ(z) near z = 0, z = −1, and the other negative integers. Work backwards and show that

Γ(x) ∼ ((−1)^n/n!) × 1/(x + n) ,   as x → −n.

Thus Γ(z) has poles at z = 0, −1, · · · with residues (−1)^n/n!.
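These residues can be checked numerically. A sketch, assuming scipy is available: near z = −n we should find (z + n)Γ(z) ≈ (−1)^n/n!.

```python
# Residues of Gamma at its poles z = 0, -1, -2, ...: near z = -n,
# Gamma(z) ~ (-1)^n / (n! (z + n)), so eps * Gamma(-n + eps) -> (-1)^n / n!.
import math
from scipy.special import gamma

eps = 1e-7
for n in range(4):
    print(n, eps * gamma(-n + eps), (-1) ** n / math.factorial(n))
```

The two columns agree to several digits; the small mismatch is the O(eps) regular part of Γ near the pole.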

1.4 Elementary methods for evaluating integrals


Change of variables
How can we evaluate the integral

∫_0^∞ e^{−t³} dt ?   (1.21)
Try a change of variable:

v = t³ and therefore dv = 3t² dt = 3v^{2/3} dt .   (1.22)

The integral is then

(1/3) ∫_0^∞ e^{−v} v^{−2/3} dv = (1/3) Γ(1/3) = Γ(4/3) .   (1.23)
Exercise: Evaluate in terms of the Γ-function

U(α, p, q) ≝ ∫_0^∞ t^q e^{−αt^p} dt .

Exercise: Show that

L[t^p] = ∫_0^∞ t^p e^{−st} dt = Γ(1 + p)/s^{1+p} .   (1.24)

Differentiation with respect to a parameter


Given that

√π/2 = ∫_0^∞ e^{−x²} dx ,   (1.25)

we can make the change of variables x = √t x′ and find that

(1/2)√(π/t) = ∫_0^∞ e^{−tx²} dx .   (1.26)

We now have an integral containing the parameter t.


To evaluate

∫_0^∞ x² e^{−tx²} dx ,   (1.27)

we differentiate (1.26) with respect to t to obtain

(1/4)√(π/t³) = ∫_0^∞ x² e^{−tx²} dx ,   and again   (3/8)√(π/t⁵) = ∫_0^∞ x⁴ e^{−tx²} dx .   (1.28)

Differentiation with respect to a parameter is a very effective trick. For some reason it is not taught
to undergraduates.
How would you calculate L[t^p ln t]? No problem — just notice that

∂_p t^p = ∂_p e^{p ln t} = t^p ln t ,   (1.29)
and then take the derivative of (1.24) with respect to p:

L[t^p ln t] = Γ′(1 + p)/s^{1+p} − Γ(1 + p) ln s/s^{1+p} ,   (1.30)
            = (Γ(1 + p)/s^{1+p}) [ψ(1 + p) − ln s] ,   (1.31)
where the digamma function

ψ(z) ≝ Γ′(z)/Γ(z)   (1.32)

is the derivative of ln Γ.
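The identity (1.31) can be verified by quadrature. A sketch assuming scipy (scipy.special.digamma is ψ); p and s are illustrative choices:

```python
# Check (1.31): L[t^p ln t] = Gamma(1+p) s^{-(1+p)} (psi(1+p) - ln s).
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma, digamma

p, s = 0.5, 3.0
lhs, _ = quad(lambda t: t**p * np.log(t) * np.exp(-s * t), 0, np.inf)
rhs = gamma(1 + p) / s ** (1 + p) * (digamma(1 + p) - np.log(s))
print(lhs, rhs)  # the two numbers agree
```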

1.5 Complexification
Consider

F(a, b) = ∫_0^∞ e^{−at} cos bt dt ,   (1.33)
where a > 0. Then

F = ℜ ∫_0^∞ e^{−(a+ib)t} dt ,   (1.34)
  = ℜ 1/(a + ib) = ℜ (a − ib)/(a² + b²) ,   (1.35)
  = a/(a² + b²) .   (1.36)
As a bonus, the imaginary part gives us

b/(a² + b²) = ∫_0^∞ e^{−at} sin bt dt .   (1.37)

Derivatives with respect to the parameters a and b generate further integrals.
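The pair (1.36) and (1.37) can be verified directly. A sketch assuming scipy; a and b are arbitrary illustrative values:

```python
# Verify (1.36) and (1.37) by numerical integration.
import numpy as np
from scipy.integrate import quad

a, b = 1.3, 0.7
cos_int, _ = quad(lambda t: np.exp(-a * t) * np.cos(b * t), 0, np.inf)
sin_int, _ = quad(lambda t: np.exp(-a * t) * np.sin(b * t), 0, np.inf)
print(cos_int, a / (a**2 + b**2))  # these agree
print(sin_int, b / (a**2 + b**2))  # and so do these
```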

Contour integration
The theory of contour integration, covered in part B, is an example of complexification. As revision
we’ll consider examples that illustrate important techniques.
Example: Consider the Fourier transform

f(k) = ∫_{−∞}^{∞} e^{−ikx}/(1 + x²) dx .   (1.38)

We evaluate this Fourier transform using contour integration to obtain

f(k) = πe^{−|k|} .   (1.39)

Note particularly the |k|: if k > 0 we must close in the lower half of the z = x + iy plane, and if k < 0 we close in the upper half plane.


Figure 1.3: The pie contour ABC in the complex plane (z = x + iy = re^{iθ}) used to evaluate J(1) in (1.42). The ray AC is a contour of constant phase: z³ = ir³ and exp(iz³) = exp(−r³).

Example: Let’s evaluate

Ai(0) = (1/π) ∫_0^∞ cos(k³/3) dk   (1.40)

via contour integration. We consider a slightly more general integral

J(α) = ∫_0^∞ e^{iαv³} dv ,   (1.41)
     = |α|^{−1/3} ∫_0^∞ e^{i sgn(α) x³} dx .   (1.42)

Thus if we can evaluate J(1) we also have J(α), and in particular ℜJ(1/3), which is just what we need for Ai(0). But at the moment it may not even be clear that these integrals converge — we’re relying on the destructive cancellation of increasingly wild oscillations as x → ∞, rather than decay of the integrand, to ensure convergence.
To evaluate J(1) we consider the entire analytic function

f(z) = e^{iz³} = e^{ir³e^{3iθ}} = exp[ y³ − 3x²y + i(x³ − 3xy²) ] ,   (1.43)

where x³ − 3xy² is the phase of z³.

Notice from Cauchy’s theorem that the integral of f(z) over any closed path in the z-plane is zero. In particular, using the pie-shaped path ABC in the figure,

0 = ∮_{ABC} e^{iz³} dz .   (1.44)

The pie-path ABC is cunningly chosen so that the segment CA (where z = re^{iπ/6}) is a contour of constant phase, so called because

f(z) = e^{−r³} on CA.   (1.45)

On CA the phase of f(z) is constant, namely zero.
Now write out (1.44) as

0 = ∫_0^R e^{ix³} dx + ∫_0^{π/6} e^{iR³e^{3iθ}} iRe^{iθ} dθ + e^{iπ/6} ∫_R^0 e^{−r³} dr ,   (1.46)

where the first term tends to J(1) as R → ∞ and we denote the middle term by M(R). Note that on the arc BC, z = Re^{iθ} and dz = iRe^{iθ} dθ — we’ve used this in M(R) above; on CA, z = re^{iπ/6} and dz = e^{iπ/6} dr.
We consider the limit R → ∞. If we can show that the term in the middle, M(R), vanishes as R → ∞ then we will have

J(1) = e^{iπ/6} ∫_0^∞ e^{−r³} dr .   (1.47)

The integral on the right of (1.47) is splendidly convergent and is readily evaluated in terms of our friend the Γ-function.

So we now focus on the troublesome M(R):

|M(R)| = | R ∫_0^{π/6} e^{iR³cos 3θ} e^{−R³sin 3θ} e^{iθ} dθ | ,
       ≤ R ∫_0^{π/6} | e^{iR³cos 3θ} e^{−R³sin 3θ} e^{iθ} | dθ ,
       ≤ R ∫_0^{π/6} e^{−R³sin 3θ} dθ ,
       < R ∫_0^{π/6} e^{−R³(6θ/π)} dθ ,   (1.48)
       = (π/6R²) (1 − e^{−R³}) ,
       → 0 ,   as R → ∞ .   (1.49)

At (1.48) we’ve obtained a simple upper bound² using the inequality

sin 3θ > 6θ/π ,   for 0 < θ < π/6 .   (1.50)
An alternative is to change variables with v = sin 3θ so that

∫_0^{π/6} e^{−R³sin 3θ} dθ = (1/3) ∫_0^1 e^{−R³v} dv/√(1 − v²) ,   (1.51)

and then use Watson’s lemma (from a later lecture). This gives a sharper bound on the arc integral.
The final answer is

Ai(0) = (1/π) ℜ J(1/3) = (3^{1/3}/π) cos(π/6) ∫_0^∞ e^{−r³} dr = Γ(1/3)/(2π 3^{1/6}) .   (1.52)
In the example above we used a constant-phase contour to evaluate an integral exactly. A
constant-phase contour is also a contour of steepest descent. The function in the exponential is

iz 3 = y 3 − 3x2 y +i (x3 − 3xy 2 ) . (1.53)


| {z } | {z }
=φ =ψ

On CA the phase is constant: ψ = 0. But from the Cauchy–Riemann equations

∇φ · ∇ψ = 0 , (1.54)

and therefore as one moves along CA one is moving parallel to ∇φ. One is therefore always
ascending or descending along the steepest direction of the surface formed by φ(x, y) above the
(x, y)-plane. Thus the main advantage to integrating along the constant-phase contour CA is that
the integrand is decreasing as fast as possible without any oscillatory behavior.
Example: Let’s prove the important functional equation

Γ(z)Γ(1 − z) = ∫_0^∞ v^{z−1}/(1 + v) dv = π/sin πz .   (1.55)

Example: Later, in our discussion of the method of averaging, we’ll need the integral

A(κ) = (1/2π) ∫_{−π}^{π} dt/(1 + κ cos t) .   (1.56)

We introduce a complex variable

z = e^{it} ,   so that   dz = iz dt ,   and   cos t = (z + z^{−1})/2 .   (1.57)


² This trick is a variant of Jordan’s lemma.

Thus

A(κ) = −(i/π) ∮_C dz/(κz² + 2z + κ) ,   (1.58)
     = −(i/πκ) ∮_C dz/((z − z₊)(z − z₋)) ,   (1.59)

where the path of integration, C, is the unit circle centered on the origin. The integrand has simple poles at

z± = −κ^{−1} ± √(κ^{−2} − 1) .   (1.60)

The pole at z₊ is inside C, and the other is outside. Therefore

A(κ) = 2πi × (−i/πκ) × 1/(z₊ − z₋) ,   (1.61)
     = 1/√(1 − κ²) .   (1.62)
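A direct quadrature check of (1.62), for an illustrative κ inside (−1, 1); a sketch assuming scipy:

```python
# Check A(kappa) = 1/sqrt(1 - kappa^2) by direct numerical integration.
import numpy as np
from scipy.integrate import quad

kappa = 0.5
val, _ = quad(lambda t: 1.0 / (1.0 + kappa * np.cos(t)), -np.pi, np.pi)
print(val / (2 * np.pi), 1.0 / np.sqrt(1.0 - kappa**2))  # these agree
```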

Mathematica, Maple and Gradshteyn & Ryzhik


Table of Integrals, Series, and Products by I.S. Gradshteyn & I.M. Ryzhik is a good source for look-up evaluation of integrals. Get the seventh edition — it has fewer typos.

1.6 Problems
Problem 1.1. Use the elementary integral

1/(n + 1) = ∫_0^1 x^n dx ,   (1.63)

to evaluate

S(n) ≝ ∫_0^1 x^n ln(1/x) dx   and   R(n) ≝ ∫_0^1 x^n ln²(1/x) dx .   (1.64)
Do this problem two ways (i) integration by parts and (ii) differentiation with respect to the
parameter n.
Problem 1.2. Starting from

a/(a² + λ²) = ∫_0^∞ e^{−ax} cos λx dx ,   (1.65)

evaluate

I(a, λ) = ∫_0^∞ x e^{−ax} cos λx dx ,   (1.66)

and for dessert

J(a) = ∫_0^∞ e^{−ax} (sin x/x) dx .   (1.67)
Notice that J(a) is an interesting Laplace transform.
Problem 1.3. Consider

F(a, b) = ∫_0^∞ e^{−a²u² − b²u^{−2}} du .   (1.68)

(i) Using a change of variables show that F(a, b) = a^{−1}F(1, ab). (ii) Show that

∂F(a, b)/∂b = −2F(1, ab) .   (1.69)

(iii) Use the results above to show that F satisfies a simple first-order differential equation; solve the equation and show that

F(a, b) = (√π/2a) e^{−2ab} .   (1.70)

Problem 1.4. The harmonic sum is defined by

H_N ≡ Σ_{n=1}^{N} 1/n .   (1.71)

In this problem you’re asked to show that

lim_{N→∞} (H_N − ln N) = γ_E ,   (1.72)

where the Euler constant γ_E is defined in (1.80). (i) Prove that H_N diverges by showing that

ln(1 + N) ≤ H_N ≤ 1 + ln N .   (1.73)

Hint: compare H_N with the area beneath the curve f(x) = x^{−1} — you’ll need to carefully select the limits of integration. Your answer should include a careful sketch. (ii) Prove that

H_N = ∫_0^1 (1 − x^N)/(1 − x) dx .   (1.74)

Hint: n^{−1} = ∫_0^1 x^{n−1} dx. (iii) Use matlab to graph

F_N(x) ≡ (1 − x^N)/(1 − x) ,   for 0 ≤ x ≤ 1,   (1.75)

with N = 100. This indicates that F_N(x) achieves its maximum value at x = 1. Prove that F_N(1) = N. These considerations should convince you that the integral in (1.74) is dominated by the peak at x = 1. (iv) With a change of variables, rewrite (1.74) as

H_N = ∫_0^N [ 1 − (1 − y/N)^N ] dy/y .   (1.76)

(v) Deduce (1.72) by asymptotic evaluation, N → ∞, of the integral in (1.76).

Problem 1.5. Consider a harmonic oscillator that is kicked at t = 0 by singular forcing:

ẍ + x = 1/t .   (1.77)

(i) Show that a particular solution of (1.77) is provided by the Stieltjes integral

x(t) = ∫_0^∞ e^{−st}/(1 + s²) ds .   (1.78)

(ii) Find the leading-order behaviour of x(t) as t → ∞ from the integral representation (1.78). (iii) Show that this asymptotic result corresponds to a two-term balance in (1.77). (iv) Evaluate x(0). (v) Can you find ẋ(0)? (vi) If your answer to (v) was “no”, what can you say about the form of x(t) as t → 0? Do you get more information from the differential equation, or from the integral representation?

Problem 1.6. Evaluate the Fresnel integral

F(α) = ∫_0^∞ e^{iαx²} dx .   (1.79)

Problem 1.7. Euler’s constant is defined by

γ_E ≝ −Γ′(1) .   (1.80)

(i) Show by direct differentiation of the definition of the Γ-function that

γ_E = −∫_0^∞ e^{−t} ln t dt .   (1.81)

(ii) Judiciously applying IP to the RHS, deduce that

γ_E = ∫_0^1 (1 − e^{−t} − e^{−t^{−1}})/t dt .   (1.82)
Problem 1.8. This problem uses many³ of the elementary tricks you’ll need for real integrals. (i) Show that

ln t = ∫_0^∞ (e^{−x} − e^{−xt})/x dx .   (1.83)
(ii) From the definition of the Γ-function,

Γ(z) ≝ ∫_0^∞ e^{−t} t^{z−1} dt ,   ℜz > 0 ,   (1.84)

show that the digamma function is

ψ(z) ≝ d ln Γ/dz = Γ′(z)/Γ(z) = ∫_0^∞ [ e^{−x} − 1/(x + 1)^z ] dx/x ,   ℜz > 0 .   (1.85)
Hint: Differentiate the definition of Γ(z) in (1.84), and use the result from part (i). (iii) Noting that (1.85) implies

ψ(z) = lim_{δ→0} [ ∫_δ^∞ e^{−x} dx/x − ∫_δ^∞ (x + 1)^{−z} dx/x ] ,   ℜz > 0 ,   (1.86)

change variables with x + 1 = e^u in the second integral and deduce that:

ψ(z) = ∫_0^∞ [ e^{−u}/u − e^{−zu}/(1 − e^{−u}) ] du ,   ℜz > 0 .   (1.87)
Explain in ten or twenty words why it is necessary to introduce δ in order to split the integral on the RHS of (1.85) into the two integrals on the RHS of (1.86). (iv) We define Euler’s constant as

γ_E ≡ −ψ(1) = −Γ′(1) = 0.57721 · · ·   (1.88)

Show that

ψ(z) = −γ_E + ∫_0^∞ (e^{−u} − e^{−zu})/(1 − e^{−u}) du ,
     = −γ_E + ∫_0^1 (1 − v^{z−1})/(1 − v) dv .
(v) From the last integral representation, show that

ψ(z) = −γ_E + Σ_{n=0}^{∞} [ 1/(n + 1) − 1/(n + z) ] .

Notice we can now drop the restriction ℜz > 0 — the beautiful formula above provides an analytic extension of ψ(z) into the whole complex plane.
³ But not all — there is no integration by parts.

Problem 1.9. Use pie-shaped contours to evaluate the integrals

A = ∫_0^∞ dx/(1 + x³) ,   and   B = ∫_0^∞ cos x² dx .   (1.89)

Problem 1.10. Use the Fourier transform to solve the dispersive wave equation

u_t = ν u_{xxx} ,   with IC u(x, 0) = δ(x).   (1.90)

Express the answer in terms of Ai.

Problem 1.11. Solve the half-plane (y > 0) boundary value problem

y u_{xx} + u_{yy} = 0   (1.91)

with u(x, 0) = cos qx and lim_{y→∞} u(x, y) = 0.

Lecture 2

What is asymptotic?

2.1 An example: the erf function


We consider the error function

erf(z) ≝ (2/√π) ∫_0^z e^{−t²} dt .   (2.1)

The upper panel of Figure 2.1 shows erf, and the complementary error function

erfc(z) ≝ 1 − erf(z) = (2/√π) ∫_z^∞ e^{−t²} dt ,   (2.2)

on the real line.


The series on the right of

e^{−t²} = Σ_{n=0}^{∞} (−t²)^n/n!   (2.3)

has infinite radius of convergence, i.e., e^{−t²} is an entire function in the complex t-plane. Thus we can simply integrate term-by-term in (2.1) to obtain a series for erf(z) that converges in the entire complex plane:

erf(z) = (2/√π) Σ_{n=0}^{∞} (−1)^n z^{2n+1}/((2n + 1) n!) ,   (2.4)
       = (2/√π) ( z − z³/3 + z⁵/10 − z⁷/42 + z⁹/216 − z¹¹/1320 ) + R₆ ,   (2.5)

where erf₆(z), the sum of the first six terms, is everything on the right of (2.5) except R₆, and R₆(z) is the remainder after six terms.
The lower panel of Figure 2.1 shows that erf_n (the sum of the first n nonzero terms) provides an excellent approximation to erf if |x| < 1. With matlab we find that

(erf(1) − erf₁₀(1))/erf(1) = 1.6217 × 10⁻⁸ ,   and   (erf(2) − erf₁₀(2))/erf(2) = 0.0233 .   (2.6)

The Taylor series is useful if |z| < 1, but as |z| increases past 1 convergence is slow. Moreover
some of the intermediate terms are very large and there is a lot of destructive cancellation between
terms of different signs. Figure 2.2 shows that this cancellation is bad at z = 3, and it gets a lot
worse as |z| increases. Thus, because of round-off error, a computer with limited precision cannot

Figure 2.1: Upper panel: the blue curve is erf(x) and the red curve is erfc(x). Lower panel shows
erf(x) and truncated Taylor series erfn (x), with n = 1, 2 · · · , 20.


Figure 2.2: The terms in the Taylor series (2.5) with x = 3. The sum of the series — that is erf(3)
— is very close to 1. But there are cancellations between terms of order ±200 before convergence
takes hold. The problem quickly gets worse: at x = 4 the largest terms in the series exceed 105 .

accurately sum the convergent Taylor series if |z| is too large. Convergence is not as useful as one
might think.
Now let’s consider an approximation to erf(x) that’s good for large¹ x. We work with the complementary error function in (2.2) and use integration by parts:

erfc(x) = (2/√π) ∫_x^∞ (−1/2t) (d/dt) e^{−t²} dt ,   (2.7)
        = (2/√π) e^{−x²}/2x − (2/√π) ∫_x^∞ e^{−t²}/2t² dt .   (2.8)

If we discard the final term in (2.8) we get a useful approximation²:

erfc(x) ∼ e^{−x²}/(√π x) ,   as x → ∞.   (2.9)

The upper panel of Figure 2.3 shows that this leading-order asymptotic approximation is reliable once x is greater than about 2, e.g., at x = 2 the error is 10.5%, and at x = 4 the error is less than 3%.
Exercise: If we try integration by parts on erf (as opposed to erfc) something bad happens: try it and see.
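The quality of (2.9) is easy to see numerically. A sketch assuming scipy.special.erfc is available:

```python
# Ratio of the leading-order approximation (2.9) to erfc(x).
import numpy as np
from scipy.special import erfc

def erfc_leading_order(x):
    return np.exp(-x**2) / (np.sqrt(np.pi) * x)

for x in (1.0, 2.0, 4.0):
    print(x, erfc_leading_order(x) / erfc(x))  # ratio -> 1 as x -> infinity
```

At x = 2 the ratio is about 1.10 and at x = 4 about 1.03, matching the error figures quoted above.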

Why does the approximation in (2.9) work? Notice that the final term in (2.8) can be bounded like this:

(2/√π) ∫_x^∞ e^{−t²}/2t² dt = (2/√π) ∫_x^∞ (1/4t³) × 2t e^{−t²} dt ,   (2.10)
                            ≤ (2/√π) (1/4x³) ∫_x^∞ 2t e^{−t²} dt ,   (2.11)
                            = (2/√π) e^{−x²}/4x³ .   (2.12)

The little trick we’ve used above in going from (2.10) to (2.11) is that

t ≥ x   ⇒   1/4t³ ≤ 1/4x³ .   (2.13)
Pulling the 1/4x³ outside, we’re left with an elementary integral. Variants of this maneuver appear frequently in the asymptotics of integrals (try the exercise below).
Using the bound (2.12) in (2.8) we have

erfc(x) = (2/√π) e^{−x²}/2x + [ something which is much less than (2/√π) e^{−x²}/2x ]   as x → ∞.   (2.14)

Thus as x → ∞ there is a dominant balance in (2.14) between the left hand side and the first term on the right. The final term is smaller than the other two terms by a factor of at least x^{−2}.
Exercise: Prove that

∫_x^∞ (e^{−t}/t^N) dt < e^{−x}/x^N .   (2.15)
¹ We restrict attention to the real line: z = x + iy with y = 0. The situation in the complex plane is tricky — we’ll return to this later. We also defer the definition of asymptotic approximation.
² The ∼ in (2.9) denotes “asymptotic equivalence” and is defined in section 2.2. In (2.9) it means that

lim_{x→∞} √π x e^{x²} erfc(x) = 1 .


Figure 2.3: Upper panel shows erfc(x) divided by the leading order asymptotic approximation on
the right of (2.9); as x → ∞ the ratio approaches 1. The lower panel shows erfc(x) divided by an
n-term truncation of (2.28) with n = 1, 2, 3 and 4.

One more term


We can develop an asymptotic series if we integrate by parts successively, starting with (2.8):

erfc(x) = e^{−x²}/(√π x) − (1/√π) ∫_x^∞ (1/t²)(−1/2t)(d/dt) e^{−t²} dt ,   (2.16)
        = (e^{−x²}/√π x) (1 − 1/2x²) + (3/2√π) ∫_x^∞ e^{−t²}/t⁴ dt ,   (2.17)

where the last term is the remainder R₂.

We use the same trick to bound the remainder:

R₂ = (3/4√π) ∫_x^∞ 2t e^{−t²}/t⁵ dt < (3/4√π x⁵) ∫_x^∞ 2t e^{−t²} dt = (3/4√π x⁵) e^{−x²} .   (2.18)

As x → ∞ the remainder R₂(x) is much less than the second term in the series, so we can suppress some information and write

erfc(x) = (e^{−x²}/√π x) [ 1 − 1/2x² + O(x⁻⁴) ] .   (2.19)

The big O notation used above is explained in section 2.2 — it means that x⁴ times the term O(x⁻⁴) is bounded by some constant as x → ∞. You can see that the constant identified by the inequality (2.18) is in fact 3/4.

Yet more terms: the asymptotic series
Exercise: show that

J_q ≝ ∫_z^∞ t^{−q} e^{−t²} dt = (1/2) z^{−(q+1)} e^{−z²} − (1/2)(q + 1) J_{q+2} .   (2.20)

Using the result in the exercise above we integrate by parts N times and obtain an exact expression for erfc(x):

erfc(x) = (e^{−x²}/√π x) Σ_{n=0}^{N−1} (2n − 1)!! (−1/2x²)^n + (−1)^N (2N − 1)!! (2/√π) ∫_x^∞ e^{−t²}/(2t²)^N dt ,   (2.21)

where the sum supplies the N terms and the final integral is the remainder R_N.

Above, R_N(x) is the remainder after N terms and the “double factorial” is 7!! = 7 · 5 · 3 · 1 etc. To bound the remainder we use our trick again:

|R_N| = (2 (2N − 1)!!/√π) ∫_x^∞ 2t e^{−t²}/(2t × (2t²)^N) dt ,   (2.22)
      ≤ ((2N − 1)!!/(√π 2^N x^{2N+1})) e^{−x²} .   (2.23)

We have shown that

|R_N| / (N-th term of the series) ≤ (2N − 1)/2x² ,   (2.24)

or equivalently

|R_N| ≤ magnitude of term N + 1 in the asymptotic series .   (2.25)
Thus the first term we neglect in the expansion is an upper bound on the error as x → ∞. And
if we fix N and increase x then the approximation to erfc(x) obtained by dropping the remainder
gets better and better. But the limits

x→∞ and N →∞ (2.26)

don’t “commute”. In other words, if we fix x at some large value, such as x = 3, and increase N
then the approximation gets better for a while, but then goes horribly wrong. This behaviour is
illustrated in figure 2.4 which shows how

relative error ≝ (N-term approximation to erfc(x))/erfc(x) − 1   (2.27)

depends on both N and x in our erf example.

Numerical use of asymptotic series — the optimal stopping rule


Suppose an unreasonable person insists on ignoring the simple limit x → ∞ and instead demands the best answer at a fixed value of x, such as x = 2. How many terms in the series

erfc(x) ∼ (e^{−x²}/x√π) [ 1 − 1/2x² + (1·3)/(2x²)² − (1·3·5)/(2x²)³ + (1·3·5·7)/(2x²)⁴ + O(x⁻¹⁰) ]   (2.28)

should one use to appease this tyrant? The numerators above are growing very quickly so at a
fixed value of x this series for erfc(x) diverges as we add more terms. But Figure 2.4 shows that at


Figure 2.4: The absolute value of the relative error as a function of the number of terms used in
the asymptotic series (2.28).

fixed x there is an optimal value of N at which the relative error is smallest. How do we find this
best asymptotic estimate?
We showed above in (2.24) and (2.25) that as x → ∞ the remainder RN (x) is less than the
(N + 1)st term in the series. Thus a good place to stop summing is just before the smallest term in
the series: we know the remainder is less than this smallest term. In practice we get good accuracy
if we use the optimal stopping rule: locate the smallest term in the series and add all the previous
terms. Do not include the smallest term in this sum.
The optimal stopping rule is a rule of thumb not a precise result — the remainder RN is less
than the (N + 1)st term only when x is sufficiently large i.e., in the limit x → ∞. We have no
assurance that this inequality applies at a particular value of x.
We illustrate the optimal stopping rule by estimating erfc(2). With x = 2 the sum (2.28) is

erfc(2) ∼ (e⁻⁴/2√π) ( 1 − 1/8 + 3/64 − 15/512 + 105/4096 − 945/32768 + 10395/262144 + · · · ) .   (2.29)

Here erfc(2) = 4.67773 × 10⁻³, the prefactor is e⁻⁴/2√π = 5.16675 × 10⁻³, and the successive terms in brackets have magnitudes 1, 0.125, 0.046875, 0.0292969, 0.0256348, 0.0288391 and 0.0396538. The smallest term is 105/4096. The optimal approximation is obtained by stopping before the smallest term:

0.0051667 × (1 − 1/8 + 3/64 − 15/512) = 0.00461172 .   (2.30)
The relative error is 0.0141116, or about 1.4%.
We get a much better answer by including half of the smallest term in the asymptotic series:

0.0051667 × (1 − 1/8 + 3/64 − 15/512 + (1/2)(105/4096)) = 0.00467795 .   (2.31)
With this mysterious improvement the relative error is now −0.000046. We should explain why
adding half of the smallest term works so well. (Bender & Orszag and Hinch don’t mention this.....)


Exercise: erfc(1) = 0.157299 and the leading-order approximation is e⁻¹/√π = 0.207554. The relative error is therefore 0.31948, which seems unfortunately large. Show that according to the optimal stopping rule the leading-order approximation is optimal. Does adding half of the smallest term significantly reduce the error?
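The stopping rule is mechanical enough to code. A sketch, assuming scipy for the accuracy check: generate terms of (2.28) until they stop shrinking, then sum everything before the smallest term.

```python
# Optimal truncation of the divergent series (2.28) for erfc(x).
import math
from scipy.special import erfc  # used only for the accuracy check

def erfc_optimal(x):
    prefactor = math.exp(-x * x) / (math.sqrt(math.pi) * x)
    terms = [1.0]
    while True:
        nxt = terms[-1] * -(2 * len(terms) - 1) / (2.0 * x * x)
        if abs(nxt) >= abs(terms[-1]):
            break                    # terms have started to grow
        terms.append(nxt)
    # terms[-1] is the smallest term: sum everything before it
    return prefactor * sum(terms[:-1])

est = erfc_optimal(2.0)
print(est, erfc(2.0), est / erfc(2.0) - 1)  # relative error about -1.4%
```

For x = 2 this reproduces (2.30): the sum stops just before the 105/4096 term and lands within about 1.4% of erfc(2).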

2.2 The Landau symbols


Let’s explain the frequently used “Landau symbols”. In asymptotic calculations the Landau nota-
tion is used to suppress information while still maintaining some precision.

Big Oh
We frequently use “big Oh” — in fact I may have accidentally done this without defining O. One
says f (ǫ) = O(φ(ǫ)) as ǫ → 0 if we can find an ǫ0 and a number A such that

|f (ǫ)| < A|φ(ǫ)| , whenever ǫ < ǫ0 .

Both ǫ0 and A have to be independent of ǫ. Application of the big Oh notation is a lot easier than
this definition suggests. Here are some ǫ → 0 examples

sin 32ǫ = O(ǫ) ,   sin 32ǫ = O(ǫ^{1/2}) ,   ǫ⁵ = O(ǫ²) ,
cos ǫ − 1 = O(ǫ^{1/2}) ,   ǫ + ǫ² sin(1/ǫ) = O(ǫ) ,
sin(1/ǫ) = O(1) ,   e^{−1/ǫ} = O(ǫⁿ) for all n.
The expression

cos ǫ = 1 − ǫ²/2 + O(ǫ³)   (2.32)

means

cos ǫ − 1 + ǫ²/2 = O(ǫ³) .   (2.33)
In some of the cases above the limit

lim_{ǫ→0} f(ǫ)/φ(ǫ)   (2.34)

is zero, and that’s good enough for O. Also, according to our definition of O, the limit in (2.34) may not exist — all that’s required is that the ratio f(ǫ)/φ(ǫ) is bounded by a constant independent of ǫ as ǫ → 0. One of the examples above illustrates this case.
The big Oh notation can be applied to other limits in obvious ways. For example, as x → ∞,

sin x = O(1) ,   √(1 + x²) = O(x²) ,   ln cosh x = O(x) .   (2.35)

As x → 0,

ln(1 + x + x²) − x = O(x²) .   (2.36)

Hinch’s ord
H uses the more precise notation ord(φ(ǫ)). We say

f(ǫ) = ord(φ(ǫ))   ⇔   lim_{ǫ→0} f(ǫ)/φ(ǫ) exists and is nonzero.   (2.37)

For example, as ǫ → 0:

sinh(37ǫ + ǫ³) = ord(ǫ) ,   and   ǫ/ln(1 + ǫ + ǫ²) = ord(1) .   (2.38)

Notice that sinh(37ǫ + ǫ³) is not ord(ǫ^{1/2}), but

sinh(37ǫ + ǫ³) = O(ǫ^{1/2}) ,   and   sin(1/ǫ) sinh(37ǫ + ǫ³) = O(ǫ^{1/2}) .   (2.39)

Big Oh tells one a lot less than ord.

Little Oh
Very occasionally — almost never — we need “little Oh”. We say f (ǫ) = o(φ(ǫ)) if for every positive
δ there is an ǫ0 such that
|f (ǫ)| < δ|φ(ǫ)| , whenever ǫ < ǫ0 .
Another way of saying this is that

f(ǫ) = o(φ(ǫ))   ⇔   lim_{ǫ→0} f(ǫ)/φ(ǫ) = 0 .   (2.40)

Obviously f(ǫ) = o(φ(ǫ)) implies f(ǫ) = O(φ(ǫ)), but not the reverse. Here are some examples:

ln(1 + ǫ) = o(ǫ^{1/2}) ,   cos ǫ − 1 + ǫ²/2 = o(ǫ³) ,   e^{o(ǫ)} = 1 + o(ǫ) .   (2.41)
The trouble with little Oh is that it hides too much information: if something tends to zero we
usually want to know how it tends to zero. For example
 
ln(1 + 2e−x + 3e−2x ) = o e−x/2 , as x → ∞, (2.42)

is not as informative as

ln(1 + 2e−x + 3e−2x ) = ord e−x , as x → ∞. (2.43)

Asymptotic equivalence
Finally “asymptotic equivalence” ∼ is useful. We say f(ǫ) ∼ φ(ǫ) as ǫ → 0 if

lim_{ǫ→0} f(ǫ)/φ(ǫ) = 1 .   (2.44)

Notice that

f(ǫ) ∼ φ(ǫ)   ⇔   f(ǫ) = φ(ǫ) [1 + o(1)] .   (2.45)

Some ǫ → 0 examples are

ǫ + sin ǫ/ln(1/ǫ) ∼ ǫ ,   and   √(1 + ǫ) − 1 ∼ ǫ/2 .   (2.46)
Some x → ∞ examples are

sinh x ∼ e^x/2 ,   and   x³/(1 + x²) + sin x ∼ x ,   and   x + ln(1 + e^{2x}) ∼ 3x .   (2.47)
Exercise: Show by counterexample that f(x) ∼ g(x) as x → ∞ does not imply df/dx ∼ dg/dx, and that f(x) ∼ g(x) as x → ∞ does not imply e^f ∼ e^g.

Gauge functions
The φ(ǫ)’s referred to above are gauge functions — simple functions that we use to compare a
complicated f (ǫ) with. A sequence of gauge functions {φ0 , φ1 , · · · } is asymptotically ordered if

φn+1 (ǫ) = o[φn (ǫ)] , as ǫ → 0. (2.48)

In practice the φ’s are combinations of powers and logarithms:

ǫ^n ,   ln ǫ ,   ǫ^m (ln ǫ)^p ,   ln ln ǫ   etc. (2.49)


Exercise Suppose ǫ → 0. Arrange the following gauge functions in order, from the largest to the smallest:

ǫ ,   ln ln(1/ǫ) ,   e^{−ln² ǫ} ,   e^{1/√ǫ} ,   ǫ^0 ,   ln(1/ǫ) , (2.50)

e^{−1/ǫ} ,   ǫ^{1/3} ,   ǫ^{1/π} ,   ǫ ln²(1/ǫ) ,   1/ln(1/ǫ) ,   ǫ^{ln ǫ} . (2.51)

2.3 The definition of asymptoticity


Asymptotic power series
Consider a sum based on the simplest gauge functions ǫ^n:

∑_{n=0}^∞ a_n ǫ^n . (2.52)

This sum is an ǫ → 0 asymptotic approximation to a function f (ǫ) if


lim_{ǫ→0} [ f(ǫ) − ∑_{n=0}^N a_n ǫ^n ] / ǫ^N = 0 . (2.53)
The numerator in the fraction above is the remainder after summing N + 1 terms, also known
as RN +1 (ǫ). So the series in (2.52) is asymptotic to the function f (ǫ) if the remainder RN +1 (ǫ)
goes to zero faster than the last retained gauge function ǫ^N. We use the notation ∼ to denote an
asymptotic approximation:

f(ǫ) ∼ ∑_{n=0}^∞ a_n ǫ^n ,   as ǫ → 0. (2.54)

The right hand side of (2.54) is called an asymptotic power series or a Poincaré series, or an
asymptotic representation of f (ǫ).
Our erf-example satisfies this definition with ǫ = x−1 . If we retain only one term in the series
(2.28) then the remainder is
R₁ = (2/√π) ∫_x^∞ (e^{−t²}/2t²) dt . (2.55)
In (2.11) we showed that

R₁ / ( e^{−x²}/√π x ) ≤ 1/(4x²) . (2.56)
Thus as x → ∞ the remainder is much less than the last retained term. According to the definition
above, this is the first step in justifying the asymptoticness of the series.
Exercise: Show from the definition of asymptoticity that

e−1/ǫ ∼ 0 + 0 ǫ + 0 ǫ2 + 0 ǫ3 + · · · as ǫ ↓ 0. (2.57)

A problem with applying the definition is that one has to be able to say something about the
remainder in order to determine if a series is asymptotic. This is not the case with convergence.
For example, one can establish the convergence of

∑_{n=0}^∞ ln(n + 2) x^n , (2.58)

without knowing the function to which this mysterious series converges. Convergence is an intrinsic
property of the coefficients ln(n + 2). The ratio test shows that the series in (2.58) converges if
|x| < 1 and we don’t have to know what (2.58) is converging to. On the other hand, asympoticity
depends on both the function and the terms in the asymptotic series.
Example The famous Stieltjes series

S(x) def= ∑_{n=0}^∞ (−1)^n n! x^n (2.59)
does not converge unless x = 0. In fact, as it stands, S(x) does not define a function of x. S(x) is a formal
power series. And we can’t say that S(x) is an asymptotic series because we have to ask asymptotic to what?
But now observe that

n! = ∫_0^∞ t^n e^{−t} dt , (2.60)
and substitute this integral representation of n! into the sum (2.59). There is a moment of pleasure when we
realize that if we exchange the order of integration and summation then we can evaluate the sum to obtain
F(x) def= ∫_0^∞ (e^{−t}/(1 + xt)) dt . (2.61)
Because of the dubious steps between (2.59) and (2.61), I’ve simply defined F (x) by the integral above. But
now that we have a well defined function F (x), we’re entitled to ask is the sum S(x) asymptotic to F (x) as
x → 0? The answer is yes.
The proof is integration by parts, which yields the identity

F(x) = 1 − x + 2! x² − 3! x³ + · · · + (−1)^{N−1} (N − 1)! x^{N−1} + (−1)^N N! x^N ∫_0^∞ (e^{−t}/(1 + xt)^{N+1}) dt , (2.62)

in which the final term is the remainder R_N.

It is easy to show that

|R_N(x)| ≤ N! x^N , (2.63)

and therefore

lim_{x→0} R_N(x) / ( (N − 1)! x^{N−1} ) = 0 . (2.64)

Above we’re comparing the remainder to the last retained term in the truncated series. Because the ratio goes
to zero in the limit the series is asymptotic.
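The optimal-truncation behaviour of the Stieltjes series can be seen numerically. The following is a minimal Python sketch (the notes use matlab/mathematica elsewhere; the helper name `F` and the grid sizes are my own choices): it evaluates F(x) by Simpson quadrature and compares it with the partial sums of the divergent series S(x).

```python
import math

def F(x, T=60.0, n=200_000):
    """Approximate F(x) = ∫_0^∞ e^{-t}/(1+xt) dt by Simpson's rule on [0, T].
    The truncated tail is O(e^{-T}) and negligible here."""
    h = T / n
    s = 0.0
    for i in range(n + 1):
        t = i * h
        w = 1 if i in (0, n) else (4 if i % 2 else 2)
        s += w * math.exp(-t) / (1 + x * t)
    return s * h / 3

x = 0.1
exact = F(x)

# Partial sums of the divergent Stieltjes series S(x) = sum (-1)^n n! x^n.
partial = []
s = 0.0
for n in range(16):
    s += (-1) ** n * math.factorial(n) * x ** n
    partial.append(s)

# The error is smallest when we truncate near the smallest term (n near 1/x = 10),
# after which the divergence takes over and the partial sums get worse.
errors = [abs(p - exact) for p in partial]
best = errors.index(min(errors))
print(exact, best, errors[best])
```

With x = 0.1 the best partial sum agrees with F(0.1) to a few parts in 10⁴, even though the series diverges.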
Exercise: Find another function with the same x → 0 asymptotic expansion as F (x) in (2.61).
Example: Dawson's integral is

D(x) def= e^{−x²} ∫_0^x e^{t²} dt . (2.65)
The integrand is strongly peaked near t = x, where the integrand is equal to e^{x²}. The width of this peak is order x^{−1}. Thus we expect that the answer is something like

D(x) ∼ ? e^{−x²} e^{x²}/x = ?/x , (2.66)
where ? is an unidentified number.
To more precisely estimate D(x) for x ≫ 1 we try IP:

∫_0^x e^{t²} dt = ∫_0^x (1/2t) (d e^{t²}/dt) dt , (2.67)

= [ e^{t²}/2t ]_0^x + ∫_0^x (e^{t²}/2t²) dt . (2.68)

The expression above is meaningless — we’ve taken a perfectly sensible integral and written it as the difference
of two infinities.
A correct approach is to split the integral like this

∫_0^x e^{t²} dt = ∫_0^1 e^{t²} dt + ∫_1^x (1/2t) (d e^{t²}/dt) dt , (2.69)

= ∫_0^1 e^{t²} dt + [ e^{t²}/2t ]_1^x + ∫_1^x (e^{t²}/2t²) dt , (2.70)

= ∫_0^1 e^{t²} dt − e/2 + e^{x²}/2x + ∫_1^x (e^{t²}/2t²) dt , (2.71)

∼ e^{x²}/2x ,   as x → ∞. (2.72)

In (2.71) the first two terms on the right are just a number, and the final integral is called R(x).
Thus

D(x) ∼ 1/2x ,   as x → ∞. (2.73)
Back in (2.69) we split the range at t = 1 — this was an arbitrary choice. We could split at another arbitrary value such as t = 32.2345465. The point is that as x → ∞ all the terms on the right of (2.71) are much less than the single dominant term e^{x²}/2x. If we want the next term in (2.73), then that comes from performing another IP on the next biggest term on the right of (2.71), namely

R(x) = ∫_1^x (e^{t²}/2t²) dt . (2.74)

To show that (2.72) is a valid asymptotic approximation according to the definition of Poincaré — with ǫ = x^{−1} and N = 1 in definition (2.53) — we should show that R(x) in (2.74) is very much less than the leading term, or in other words that

lim_{x→∞} [ ∫_1^x (e^{t²}/2t²) dt ] / [ e^{x²}/2x ] = 0 . (2.75)
Exercise: Use l’Hôpital’s rule to verify the result above.

2.4 Manipulation of asymptotic series


Many — but not all — of the expansions you'll encounter have the form of an asymptotic
power series. But in the previous lectures we saw examples with fractional powers of ǫ and ln ǫ and
ln[ln(1/ǫ)]. These expansions have the form

f(ǫ) ∼ ∑_{n=0}^∞ a_n φ_n(ǫ) , (2.76)

where {φn } is an asymptotically ordered set of gauge functions. The Poincaré definition of asymp-
toticity is generalized to say that the sum on the right of (2.76) is an asymptotic approximation to
f (ǫ) as ǫ → 0 if
lim_{ǫ→0} [ f(ǫ) − ∑_{n=0}^N a_n φ_n(ǫ) ] / φ_N(ǫ) = 0 . (2.77)
In other words, once ǫ is sufficiently small the remainder is less than the last term.
Example: Using the x → ∞ gauge functions {x^{n/12}}, where n is an integer, we have the generalized asymptotic expansion

(x^{1/2} + x^{1/3}) / (x^{1/12} + 1) = x^{5/12} − x^{1/3} + 2x^{1/4} − 2x^{1/6} + 2x^{1/12} − 2 + ord(x^{−1/12}) . (2.78)
Example: Another example produced by
Series[x^(x - x^2), {x, 0, 3}]

in mathematica is

x^{x−x²} ∼ 1 + x ln x + x² ( (1/2) ln² x − ln x ) + x³ ( (1/6) ln³ x − ln² x ) + O(x⁴) . (2.79)

Evidently in this example the x → 0 gauge functions are x^p ln^q x where p and q are non-negative integers.

Uniqueness
If a function has an asymptotic expansion in terms of a particular set of gauge functions then that expansion is unique. For example, using the θ → 0 gauge functions θ^n, the function sin 2θ can be expanded as

sin 2θ = 2θ − (4θ³/3) + ord(θ⁵) , (2.80)

and that's the only asymptotic expansion of sin 2θ using θ^n. In this sense asymptotic expansions are unique.
The converse is not true: if you’ve done the exercise in (2.76) then you know that two dif-
ferent functions might share an asymptotic expansion because they differ by a quantity that is
asymptotically smaller than every gauge function. For example, as θ ↓ 0

sin 2θ + e^{−1/θ} ∼ ∑_{n=0}^∞ (−1)^n (2θ)^{2n+1}/(2n + 1)! . (2.81)

The right of (2.81) is also the asymptotic expansion of sin 2θ in terms of the gauge functions θ n .
A given function can also have multiple asymptotic expansions in terms of different gauge functions. For example, consider the θ → 0 gauge functions sin^n θ, for which

sin 2θ = 2 sin θ − sin³ θ + ord(sin⁵ θ) . (2.82)

Or gauge functions tan^n θ, for which

sin 2θ = 2 tan θ − 2 tan³ θ + ord(tan⁵ θ) . (2.83)

Manipulation of asymptotic expansions


If we have two ǫ → 0 asymptotic power series

f ∼ ∑_{n=0}^∞ a_n ǫ^n ,   and   g ∼ ∑_{n=0}^∞ b_n ǫ^n , (2.84)

then we can do what comes naturally as far as adding, multiplying and dividing these expansions.
If f and g are represented by the generalized asymptotic series in (2.76) then we have a minor
problem with multiplication: φm φn may not be a member of our set of gauge functions. In this
case we can simply enlarge the set of gauge functions — provided that the expanded set can be
ordered as ǫ → 0. (I can’t think of an example in which this is not possible.)
Exercise: Noting that

1/(ǫ(1 + ǫ)) ∼ 1/ǫ   as ǫ → 0, (2.85)

is

exp( 1/(ǫ(1 + ǫ)) ) ∼ e^{1/ǫ} ? (2.86)

Asymptotic series can be integrated: if

f(x) ∼ ∑_{n=0}^∞ a_n (x − x₀)^n ,   as x → x₀, (2.87)

then

∫_{x₀}^x f(t) dt ∼ ∑_{n=0}^∞ (a_n/(n + 1)) (x − x₀)^{n+1} ,   as x → x₀. (2.88)
Asymptotic series cannot in general be differentiated. Thus

x + sin x ∼ x , as x → ∞, (2.89)

but the derivative 1 + cos x is not asymptotic to 1. Note however that BO section 3.8 discusses
some useful special cases in which differentiation is permitted.

2.5 Stokes lines


2.6 Problems
Problem 2.1. (i) Find a leading-order x → ∞ asymptotic approximation to

A(x; p, q) def= ∫_x^∞ e^{−pt^q} dt . (2.90)

Show that the remainder is asymptotically negligible as x → ∞. Above, p and q are both positive real numbers.
Problem 2.2. Find two terms in the x → ∞ behaviour of

F(x) = ∫_0^x (e^{−v}/v^{1/3}) dv . (2.91)
Problem 2.3. (i) Use integration by parts to find the leading-order term in the x → ∞ asymptotic expansion of the exponential integral:

E₁(x) def= ∫_x^∞ (e^{−v}/v) dv . (2.92)

Show that this approximation is asymptotic i.e., prove that the remainder is asymptotically less than the leading term as x → ∞. (ii) With further integration by parts, find an expression for the n'th term, and the remainder after n terms. (iii) Show that the remainder after N terms is asymptotically less than the N'th term as x → ∞.
Problem 2.4. Consider the first-order differential equation:

y′ − y = −1/x ,   with the condition lim_{x→∞} y(x) = 0 . (2.93)

(i) Find a valid two-term dominant balance in the differential equation and thus deduce the leading-
order asymptotic approximation to y(x) for large positive x. (ii) Use an iterative procedure to
deduce the full asymptotic expansion of y(x). (iii) Is the expansion convergent? (iv) Use the integrating factor method to solve the differential equation exactly in terms of the exponential integral in (2.92). Use matlab (help expint) to compare the exact solution of (2.93) with
asymptotic expansions of different order. Summarize your study as in Figure 2.5.

[Figure 2.5 appears here. Upper panel: y(x) for 0 ≤ x ≤ 10, comparing the exact solution with the one-, two- and three-term approximations. Lower panel: y_n(5) and y(5) against the truncation order n.]

Figure 2.5: Solution of problem 2.4. Upper panel compares the exact solution with truncated
asymptotic series. Lower panel shows the asymptotic approximation at x = 5 as a function of the
truncation order n i.e., n = 1 is the one-term approximation. The solid line is the exact answer.

Problem 2.5. The exponential integral of order n is

E_n(x) def= ∫_x^∞ (e^{−t}/t^n) dt . (2.94)

Show that

E_{n+1}(x) = e^{−x}/(n x^n) − (1/n) E_n(x) . (2.95)

Find the leading-order asymptotic approximation to E_n(x) as x → ∞.

Problem 2.6. (i) Solve the differential equation

y′ − xy = −1 ,   with lim_{x→∞} y(x) = 0 , (2.96)

in terms of erf and use the results from this lecture to find the full asymptotic expansion of the solution as x → ∞. (ii) Find this expansion without using the explicit solution: identify a two-term x → ∞ balance in the differential equation, and then proceed to higher order via iteration or some other scheme.
Problem 2.7. Find an example of an infinitely differentiable function satisfying the inequalities

max_{0<x<1} |f(x)| < 10^{−10} ,   and   max_{0<x<1} |df/dx| > 10^{10} . (2.97)

This is why the differential operator d/dx is "unbounded": d/dx can take a small function and turn it into a big function.
Problem 2.8. Prove that

∫_0^∞ (e^{−t}/(1 + xt²)) dt ∼ ∑_{n=0}^∞ (−1)^n (2n)! x^n ,   as x → 0. (2.98)

Problem 2.9. True or false as x → ∞:

(i) x + 1/x ∼ x ,   (ii) x + √x ∼ x ,   (iii) exp(x + 1/x) ∼ exp(x) , (2.99)

(iv) exp(x + √x) ∼ exp(x) ,   (v) cos(x + 1/x) ∼ cos x ,   (vi) 1/x ∼ 0 ? (2.100)

Problem 2.10. Let's investigate the Stieltjes series S(x) in (2.59) and the function F(x) in (2.61).
(i) Compute the integral F(0.1) numerically. (ii) With x = 0.1, compute partial sums of the
divergent series (2.59) with N = 2, 3, 4, · · · 20. Which N gives the best approximation to F (0.1)?
(iii) I think the best answer is obtained by truncating the series S(0.1) just before the smallest
term. Is that correct?

Lecture 3

Integration by parts (IP)

Our earlier example

erfc(z) ∼ (e^{−z²}/(z√π)) [ 1 − 1/(2z²) + (1×3)/(2z²)² − (1×3×5)/(2z²)³ + O(z^{−8}) ] ,   as z → ∞, (3.1)

illustrated the use of integration by parts (IP) to obtain an asymptotic series. In this lecture we discuss other integrals that also yield to IP.

3.1 The Taylor series, with remainder


We can very quickly use integration by parts to prove that a function f(x) with n derivatives can be represented exactly by n terms of a Taylor series, plus a remainder. The fundamental theorem of calculus is

f(x) = f(a) + ∫_a^x f′(ξ) dξ , (3.2)

with the integral playing the role of R₁(x).

If we drop the final term, R₁(x), we have a one-term Taylor series for f(x) centered on x = a. To generate one more term we integrate by parts like this

f(x) = f(a) + ∫_a^x f′(ξ) (d/dξ)(ξ − x) dξ , (3.3)

= f(a) + (x − a) f′(a) − ∫_a^x f″(ξ)(ξ − x) dξ . (3.4)

And again

f(x) = f(a) + (x − a) f′(a) − ∫_a^x f″(ξ) (d/dξ)[ (ξ − x)²/2 ] dξ , (3.5)

= f(a) + (x − a) f′(a) + (1/2) f″(a)(x − a)² + (1/2) ∫_a^x f‴(ξ)(ξ − x)² dξ , (3.6)

with the final integral being R₃(x).

If f(x) has n derivatives we can keep going till we get

f(x) = f(a) + f′(a)(x − a) + (f″(a)/2)(x − a)² + · · · + (f^{(n−1)}(a)/(n − 1)!)(x − a)^{n−1} + R_n(x) , (3.7)

where the first n terms on the right make up the truncated series — let's call it f_n(x) — and the remainder after n terms is

R_n(x) = (1/(n − 1)!) ∫_a^x f^{(n)}(ξ)(x − ξ)^{n−1} dξ . (3.8)

Using the first mean value theorem, the remainder can be represented as

R_n(x) = (f^{(n)}(x̄)/n!) (x − a)^n , (3.9)

where x̄ is some unknown point in the interval [a, x]. This is the form given in section 4.6 of RHB.
Some remarks about the result in (3.7) through (3.9) are:
(1) f(x) need not have derivatives of all orders at the point x: the representation in (3.7) and (3.9) makes reference only to derivatives of order n, and that is all that is required.

(2) Using (3.9), we see that the ratio of Rn (x) to the last retained term in the series is proportional
to x − a and therefore vanishes as x → a. Thus, according to our definition in (2.53), fn (x)
is an asymptotic expansion of f (x).

(3) The convergence of the truncated series fn (x) as n → ∞ is not assumed: (3.7) is exact. The
remainder Rn (x) may decrease up to a certain point and then start increasing again.

(4) Even if fn (x) diverges with increasing n, we may obtain a close approximation to f (x) —
with a small remainder Rn — if we stop summing at a judicious value of n.

(5) The difference between the convergent case and the divergent case is that in the former
instance the remainder can be made arbitrarily small by increasing n, while in the latter case
the remainder cannot be reduced below a certain minimum.
Above we are recapitulating many remarks we made previously regarding the asymptotic expansion
of erf in (3.1).
Example: Taylor series, even when they diverge, are still asymptotic series. Let’s investigate this by revisiting
problem 1.1:
x(ǫ)2 = 9 + ǫ . (3.10)
Notice that even before taking this class you could have solved this problem by arguing that

x(ǫ) = 3 (1 + ǫ/9)^{1/2} , (3.11)

and then recollecting the standard Taylor series

(1 + z)^α = 1 + αz + (α(α − 1)/2!) z² + (α(α − 1)(α − 2)/3!) z³ + · · · (3.12)
The perturbation expansion you worked out in problem 1.1 is laboriously reproducing the special case α = 1/2
and z = ǫ/9.
You should recall from part B that the radius of convergence of (3.12) is limited by the nearest singularity to the origin in the complex z-plane. With α = 1/2 the nearest singularity is the branch point at z = −1. So the series in problem 1.1 converges provided that ǫ < 9. Let us ignore this red flag and use the Taylor series with ǫ = 16 to estimate x(16) = √25 = 5. We calculate a lot of terms with the mathematica command:
Series[Sqrt[9 + u], {u, 0, 8}].
This produces the series

x(ǫ) = 3 + ǫ/6 − ǫ²/216 + ǫ³/3 888 − 5ǫ⁴/279 936 + 7ǫ⁵/5 038 848 − 7ǫ⁶/60 466 176 + 11ǫ⁷/1 088 391 168 − 143ǫ⁸/156 728 328 192 + ord(ǫ⁹) .

Thus

x(16) ∼ 3 + 8/3 − 32/27 + 256/243 − 2560/2187 + 28672/19683 − 114688/59049 + · · · (3.13)

where the terms after 8/3 have magnitudes 1.18519, 1.05350, 1.17055, 1.45669 and 1.94225.

The fourth term is the smallest term. Stopping short of the smallest term, the sum of the first three terms is

x(16) ≈ 121/27 = 4.48148 , (3.14)

which is a relative error of about 10%. If we include half of the smallest term then

x(16) ≈ 1217/243 = 5.00823 , (3.15)

with relative error 0.00165. This is a good result when working with the "small" parameter 16/9.
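The optimal-truncation numbers above are easy to reproduce. The following Python sketch (the helper `sqrt_series_terms` is my own name; the recursion generates the generalized binomial coefficients C(1/2, k)) rebuilds the series for 3(1 + ǫ/9)^{1/2} and checks (3.14) and (3.15):

```python
import math

def sqrt_series_terms(eps, N):
    """Terms of 3*(1 + eps/9)^(1/2) = sum_k 3*C(1/2, k)*(eps/9)^k,
    with the generalized binomial coefficient built recursively."""
    terms, c, z = [], 1.0, eps / 9.0
    for k in range(N):
        terms.append(3.0 * c * z ** k)
        c *= (0.5 - k) / (k + 1)      # C(1/2, k+1) from C(1/2, k)
    return terms

terms = sqrt_series_terms(16.0, 10)
mags = [abs(t) for t in terms]
smallest = mags.index(min(mags[1:]))          # index of the smallest term

three_terms = sum(terms[:3])                  # stop short of the smallest term
plus_half = three_terms + 0.5 * terms[smallest]   # add half the smallest term
print(three_terms, plus_half, math.sqrt(25))
```

The smallest term is indeed the fourth one, and adding half of it brings the estimate of √25 within about 0.2% of 5.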

3.2 Large-s behaviour of Laplace transforms


The s → ∞ behaviour of the Laplace transform

f̄(s) def= ∫_0^∞ e^{−st} f(t) dt (3.16)

provides a typical and important example of IP. But before turning to IP, we argue that as ℜs → ∞,
the maximum of the integrand in (3.16) is determined by the rapidly decaying e−st and is therefore
at t = 0. In fact, e−st is appreciably different from zero only in a peak at t = 0, and the width
of this peak is s−1 ≪ 1. Within this peak t = O(s−1 ) the function f (t) is almost equal to f (0)
(assuming that f (0) is non-zero) and thus
f̄(s) ≈ f(0) ∫_0^∞ e^{−st} dt = f(0)/s . (3.17)

This argument suggests that the large s-behaviour of the Laplace transform of any function f (t)
with a Taylor series around t = 0 is given by
∫_0^∞ e^{−st} f(t) dt = ∫_0^∞ e^{−st} [ f(0) + t f′(0) + (t²/2!) f″(0) + · · · ] dt , (3.18)

∼ f(0)/s + f′(0)/s² + f″(0)/s³ + · · · (3.19)
This heuristic1 answer is in fact a valid asymptotic series.
We obtain an improved version of (3.19) using successive integration by parts starting with (3.16):

f̄(s) = f(0)/s + f′(0)/s² + f″(0)/s³ + · · · + f^{(n−1)}(0)/s^n + (1/s^n) ∫_0^∞ e^{−st} f^{(n)}(t) dt , (3.20)

with the final term being the remainder R_n.

The improvement over (3.19) is that on the right of (3.20), IP has provided an explicit expression
for the remainder Rn (s).
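The pattern (3.19)/(3.20) can be checked on a transform known in closed form. A minimal sketch (my own choice of example, not from the notes): take f(t) = cos t, whose Laplace transform is s/(1 + s²) exactly, so the derivatives at 0 cycle through 1, 0, −1, 0 and the series is 1/s − 1/s³ + 1/s⁵ − · · ·

```python
# f(t) = cos t has Laplace transform s/(1+s^2); compare with the
# large-s series f(0)/s + f'(0)/s^2 + f''(0)/s^3 + ... = 1/s - 1/s^3 + 1/s^5 - ...
def exact(s):
    return s / (1 + s * s)

def series(s, N):
    return sum((-1) ** k / s ** (2 * k + 1) for k in range(N))

s = 5.0
errs = [abs(exact(s) - series(s, N)) for N in (1, 2, 3)]
print(errs)
```

Each extra term knocks the error down by roughly a factor s², as the expansion predicts.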
Example: A Laplace transform. Find the large-s behaviour of the Laplace transform

L[ 1/√(1 + t²) ] = ∫_0^∞ (e^{−st}/√(1 + t²)) dt . (3.21)

When s is large the function e−st is non-zero only in a peak located at t = 0. The width of this peak is
s−1 ≪ 1. In this region the function (1 + t2 )−1/2 is almost equal to one. Hence heuristically
∫_0^∞ (e^{−st}/√(1 + t²)) dt ≈ ∫_0^∞ e^{−st} dt = 1/s . (3.22)
1
See the next section, Watson’s lemma, for justification.

This is the correct leading-order behaviour.
To make a more careful estimate we can use integration by parts:

L[ 1/√(1 + t²) ] = −(1/s) ∫_0^∞ (1/√(1 + t²)) (d e^{−st}/dt) dt , (3.23)

= −(1/s) [ e^{−st}/√(1 + t²) ]_0^∞ − (1/s) ∫_0^∞ (t e^{−st}/(1 + t²)^{3/2}) dt , (3.24)

= 1/s − R₁(s) . (3.25)

As s → ∞ the remainder R₁(s) is negligible with respect to s^{−1} and the heuristic (3.22) is confirmed. Why is R₁(s) much smaller than s^{−1} in the limit? Notice that in the integrand of R₁

t e^{−st}/(1 + t²)^{3/2} ≤ t e^{−st} ,   and therefore   R₁(s) < (1/s) ∫_0^∞ t e^{−st} dt = 1/s³ . (3.26)

The estimates between (3.23) and (3.26) are a recap of arguments we’ve been making in the previous lectures.
The proof of Watson’s lemma below is just a slightly more general version of these same estimates.
To get more terms in the asymptotic expansion we invoke Watson's lemma, so as s → ∞:

L[ 1/√(1 + t²) ] = ∫_0^∞ e^{−st} [ 1 − t²/2 + 3t⁴/8 − 5t⁶/16 + O(t⁸) ] dt , (3.27)

∼ 1/s − 1/s³ + 9/s⁵ − 225/s⁷ + O(s^{−9}) . (3.28)
Because of the rapid growth of the numerators this is clearly an asymptotic series. The Taylor series of
(1 + t2 )−1/2 does not converge beyond t = 1. The limited radius of convergence doesn’t matter: Watson’s
lemma assures us that we get the right asymptotic expansion even if we integrate into the region where the
Taylor series diverges. In fact, the expansion of the integral is asymptotic, rather than convergent, because
we’ve integrated a Taylor series beyond its radius of convergence.
We obtain the entire asymptotic series by noting that

1/√(1 − 4x) = 1 + 2x + 6x² + 20x³ + 70x⁴ + · · · (3.29)

where the coefficient of x^n above is the "central binomial coefficient" (2n)!/(n!)². Thus, with x = −t²/4, we have

L[ 1/√(1 + t²) ] ∼ ∑_{n=0}^∞ (−1)^n ((2n)!/(n!)²) (1/2^{2n}) ∫_0^∞ t^{2n} e^{−st} dt , (3.30)

= (1/s) ∑_{n=0}^∞ (−1)^n ( (2n)!/n! )² (1/(2s))^{2n} . (3.31)
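The first few terms of (3.28) can be checked against direct quadrature. A minimal sketch (the helper `lt` and the truncation T are my own choices; the tail beyond T contributes O(e^{−sT}) and is negligible at s = 10):

```python
import math

def lt(s, T=6.0, n=120_000):
    """∫_0^∞ e^{-st}/sqrt(1+t^2) dt by Simpson's rule on [0, T]."""
    h = T / n
    tot = 0.0
    for i in range(n + 1):
        t = i * h
        w = 1 if i in (0, n) else (4 if i % 2 else 2)
        tot += w * math.exp(-s * t) / math.sqrt(1 + t * t)
    return tot * h / 3

s = 10.0
numeric = lt(s)
four_terms = 1/s - 1/s**3 + 9/s**5 - 225/s**7    # the series (3.28)
print(numeric, four_terms)
```

At s = 10 the four-term sum agrees with the quadrature to about one part in 10⁴ of the leading term, consistent with the O(s^{−9}) error estimate.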

Example: Another Laplace transform. Consider

L[ H(1 − t)/√(1 − t²) ] = ∫_0^1 (e^{−st}/√(1 − t²)) dt , (3.32)

∼ 1/s + 1/s³ + 9/s⁵ + 225/s⁷ + O(s^{−9}) . (3.33)

This is the same as (3.28), except that all the signs are positive. The integrable singularity at t = 1 makes only an exponentially small contribution as s → ∞.
Example: Yet another Laplace transform. Find the large-s behaviour of the Laplace transform

L[ √(1 + e^t) ] = ∫_0^∞ e^{−st} √(1 + e^t) dt , (3.34)

where the factor √(1 + e^t) in the integrand is f(t). In this case f(0) = √2 and we expect that the leading order is

f̄ ∼ √2/s . (3.35)

Let’s confirm this using IP: √ Z ∞
2 1 et
f¯(s) = − e−st √ dt . (3.36)
s s 0 2 1 + et
| {z }
f ′ (t)

Notice that in this example f ′ (t) ∼ et/2 as t → ∞, and thus we cannot bound the remainder using maxt>0 f ′ (t).
Instead, we bound the reminder like this
Z
1 ∞ −(s− 1 )t 1 1 1
R1 = e 2 √ dt < . (3.37)
s 0 2 1+e −t/2 s 2s − 1
| {z }
1

2

This maneuver works in examples with f (t) ∼ eαt as t → ∞.
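The bound (3.37) is easy to verify numerically. A minimal sketch (the helper `lt_sqrt`, the truncation T and the grid are my own choices; at s = 10 the integrand decays like e^{−(s−1/2)t}, so the tail beyond T is negligible):

```python
import math

def lt_sqrt(s, T=8.0, n=160_000):
    """∫_0^∞ e^{-st} sqrt(1+e^t) dt by Simpson's rule on [0, T]."""
    h = T / n
    tot = 0.0
    for i in range(n + 1):
        t = i * h
        w = 1 if i in (0, n) else (4 if i % 2 else 2)
        tot += w * math.exp(-s * t) * math.sqrt(1 + math.exp(t))
    return tot * h / 3

s = 10.0
numeric = lt_sqrt(s)
leading = math.sqrt(2) / s
bound = 1 / (s * (2 * s - 1))     # the bound on R1 from (3.37)
print(numeric - leading, bound)
```

The computed remainder is positive and sits safely below the bound 1/(s(2s − 1)).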


Example: A thinly disguised Laplace transform. Consider

S(x) def= ∫_0^1 e^{xt⁷} dt (3.38)

as x → ∞. The integrand is strongly peaked near t = 1. Changing variable to v = 1 − t⁷ we obtain

S(x) = (e^x/7) ∫_0^1 (e^{−xv}/(1 − v)^{6/7}) dv , (3.39)

∼ (e^x/7) ∫_0^∞ e^{−xv} ( 1 + (6/7)v + · · · ) dv , (3.40)

etc.
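From (3.40) the leading order is S(x) ∼ e^x/(7x), with a relative correction of order 1/x. A minimal numerical check (helper name and grid size are my own choices):

```python
import math

def S(x, n=200_000):
    """S(x) = ∫_0^1 e^{x t^7} dt by Simpson's rule; the integrand is
    concentrated in a layer of width ~1/(7x) at t = 1."""
    h = 1.0 / n
    tot = 0.0
    for i in range(n + 1):
        t = i * h
        w = 1 if i in (0, n) else (4 if i % 2 else 2)
        tot += w * math.exp(x * t ** 7)
    return tot * h / 3

x = 40.0
leading = math.exp(x) / (7 * x)   # from (3.40): (e^x/7) ∫_0^∞ e^{-xv} dv
ratio = S(x) / leading
print(ratio)    # approaches 1 from above as x grows
```

The ratio exceeds 1 by roughly 6/(7x), which is the next term supplied by the (6/7)v correction in (3.40).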
Example: An integral with a parameter. Consider

I(x, ν) def= ∫_0^∞ t^ν e^{−x sinh t} dt . (3.41)

The minimum of φ(t) = sinh t is at t = 0, so

I(x) ∼ ∫_0^∞ t^ν e^{−xt} dt = Γ(ν + 1)/x^{ν+1} ,   as x → ∞. (3.42)

To get the next term in the asymptotic series, keep one more term in the expansion of sinh t:

e^{−x sinh t} = e^{−xt} e^{−xt³/6 − xt⁵/120 + ···} ≈ e^{−xt} ( 1 − xt³/6 + O(xt⁵) ) . (3.43)

Thus

I(x) ∼ ∫_0^∞ t^ν e^{−xt} ( 1 − xt³/6 + O(xt⁵) ) dt , (3.44)

∼ Γ(ν + 1)/x^{ν+1} − Γ(ν + 4)/(6x^{ν+3}) + O(x^{−ν−5}) . (3.45)
Notice we have to keep the dominant term xt up in the exponential.
If we desire more terms, and are obliged to justify the heuristic above, we should change variables with u = sinh t in (3.41), and use Watson's lemma. The transformed integral is a formidable Laplace transform:

I(x, ν) = ∫_0^∞ e^{−xu} ln^ν( √(1 + u²) + u ) (du/√(1 + u²)) . (3.46)

With mathematica

ln^ν( √(1 + u²) + u ) / √(1 + u²) = u^ν [ 1 − ((3 + ν)/6) u² + ((135 + 52ν + 5ν²)/360) u⁴ + O(u⁶) ] . (3.47)

The coefficient of u^{2n} in this expansion is a polynomial — let's call it (−1)^n P_n(ν) — of order n. Substituting (3.47) into (3.46) and integrating term-by-term

I(x, ν) ∼ (1/x^{ν+1}) [ Γ(ν + 1) − (P₂(ν)/x²) Γ(ν + 3) + (P₄(ν)/x⁴) Γ(ν + 5) + O(x^{−6}) ] . (3.48)

3.3 Watson’s Lemma
All the examples in the previous section are a special cases of Watson’s lemma. So let’s prove the
lemma by considering a Laplace transform
Z ∞
¯
f (s) = e−st tξ g(t) dt , (3.49)
0

where the factor tξ includes whatever singularity exists at t = 0; the singularity must be integrable
i.e., ξ > −1. We assume that the function g(t) has a Taylor series with remainder

g(t) = g0 + g1 t + · · · gn tn +Rn+1 (t) . (3.50)


| {z }
n + 1 terms

This is a t → 0 asymptotic expansion in the sense that there is some constant K such that

|R_{n+1}| < K t^{n+1} . (3.51)

Notice we are not assuming that the Taylor series converges.


Of course, we do assume convergence of the Laplace transform (3.49) as t → ∞, which most
simply requires that f (t) = tξ g(t) eventually grows no faster than eγt for some γ. Notice that the
possibility of a finite upper limit in (3.49) is encompassed if f (t) is zero once t > T .
With these modest constraints on t^ξ g(t):

f̄(s) = ∫_0^∞ e^{−st} t^ξ ( g₀ + g₁ t + · · · + g_n t^n ) dt + ∫_0^∞ e^{−st} t^ξ R_{n+1}(t) dt , (3.52)

where the first integral is I₁ and the second is I₂.

The second integral in (3.52) is

I₂ < K ∫_0^∞ e^{−st} t^{n+1+ξ} dt = O( 1/s^{ξ+n+2} ) . (3.53)

Using

∫_0^∞ e^{−st} t^{ξ+n} dt = Γ(n + ξ + 1)/s^{n+ξ+1} , (3.54)

we integrate I₁ term-by-term and obtain Watson's lemma:

f̄(s) ∼ g₀ Γ(ξ + 1)/s^{ξ+1} + g₁ Γ(ξ + 2)/s^{ξ+2} + · · · + g_n Γ(ξ + n + 1)/s^{ξ+n+1} + O( 1/s^{ξ+n+2} ) . (3.55)

Watson’s lemma justifies doing what comes naturally.

3.4 Problems
Problem 3.1. (i) Obtain the leading-order asymptotic approximation for the integral

∫_{−1}^1 e^{xt³} dt ,   as x → ∞. (3.56)

(ii) Justify the asymptoticness of the expansion. (iii) Find the leading-order asymptotic approximation for x → −∞.

Problem 3.2. In our evaluation of Ai(0) we encountered a special case, namely n = 3, of the integral

Z(n, x) def= ∫_0^{π/(2n)} e^{−x sin nθ} dθ . (3.57)
Convert Z(n, x) to a Laplace transform and use Watson’s lemma to obtain the first few terms of
the x → ∞ asymptotic expansion.

Problem 3.3. In lecture 3 we obtained the full asymptotic series for erfc(x) via IP:

erfc(x) ∼ (e^{−x²}/(√π x)) ∑_{n=0}^∞ (2n − 1)!! ( −1/(2x²) )^n . (3.58)

Obtain this result by making a change of variables that converts erfc(x) into a Laplace transform, and then use Watson's lemma.

Problem 3.4. Use integration by parts to find x → ∞ asymptotic approximations of the integrals

A(x) = ∫_0^x e^{−t⁴} dt , (3.59)

B(x) = ∫_0^x e^{+t⁴} dt , (3.60)

C(x) = ∫_0^∞ e^{−xt} ln(1 + t²) dt , (3.61)

D(x) = ∫_0^∞ (e^{−xt}/(t^a (1 + t))) dt ,   with a < 1; (3.62)

E(x) = ∫_1^∞ e^{−xt^p} dt ,   with p > 0. (3.63)

In each case obtain a two-term asymptotic approximation and exhibit the remainder as an integral. Explain why the remainder is smaller than the second term as x → ∞.

Problem 3.5. Using repeated IP, find the full x → ∞ asymptotic expansion of Dawson’s integral
(2.65). Is this series convergent?

Problem 3.6. Consider f(x) = (1 + x)^{5/2}, and the corresponding Taylor series f_n(x) centered on x = 0. (i) Show that for n ≥ 3 and x > 0:

R_n < (f^{(n)}(0)/n!) x^n ,

i.e., the remainder is smaller than the first neglected term for all positive x. (ii) The Taylor series converges only up to x = 1. But suppose we desire f(2) = 3^{5/2}. How many terms of the series should be summed for best accuracy? Sum this optimally truncated series and compare with the
exact answer. (iii) Argue from the remainder in (3.9) that the error can be reduced by adding half
the first neglected term. Compare this corrected series with the exact answer.

Lecture 4

Laplace’s method

Laplace’s method applies to integrals in which the integrand is concentrated in the neighbourhood of
a few (or one) isolated points. The value of the integral is determined by the dominant contribution
from those points. This happens most often for integrals of the form

L(x) = ∫_a^b f(t) e^{−xφ(t)} dt ,   as x → ∞. (4.1)

If φ(t) ≥ 0 for all t in the interval a ≤ t ≤ b then as x → +∞ the integrand will be maximal where
φ(t) is smallest. This largest contribution becomes more and more dominant as x increases.
Look what happens if we apply IP to (4.1):

L(x) = −∫_a^b (f/(xφ′)) (d/dt) e^{−xφ} dt , (4.2)

= −[ (f/(xφ′)) e^{−xφ} ]_a^b + ∫_a^b e^{−xφ} (d/dt)(f/(xφ′)) dt . (4.3)

There is a problem if φ′ has a zero anywhere in the closed interval [a, b]. However if φ′ is non-zero throughout [a, b] then IP delivers the goods. For example, suppose

φ′ > 0   for a ≤ t ≤ b, (4.4)

then from (4.3)

L(x) ∼ (f(a)/(xφ′(a))) e^{−xφ(a)} ,   as x → ∞. (4.5)
All of the examples in section 3.2 have this form.
In the case of (4.5), the integrand is concentrated near t = a and the asymptotic approximation in (4.3) depends only on f(a), φ(a) and φ′(a). We can quickly obtain (4.5) with the following approximations in (4.1):

L(x) ∼ ∫_a^∞ f(a) e^{−xφ(a) − xφ′(a)(t − a)} dt . (4.6)
Exercise: Find the leading order asymptotic approximation to L(x) if φ′ < 0 for a ≤ t ≤ b. Show that

A(x) def= ∫_0^π e^{x cosh t} dt ∼ e^{x cosh π}/(x sinh π) ,   as x → ∞. (4.7)
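The endpoint estimate (4.7) can be checked numerically. A minimal sketch (the helper `A` and the grid size are my own choices; the peak of the integrand sits at the endpoint t = π in a layer of width roughly 1/(x sinh π)):

```python
import math

def A(x, n=400_000):
    """A(x) = ∫_0^π e^{x cosh t} dt by Simpson's rule."""
    h = math.pi / n
    tot = 0.0
    for i in range(n + 1):
        t = i * h
        w = 1 if i in (0, n) else (4 if i % 2 else 2)
        tot += w * math.exp(x * math.cosh(t))
    return tot * h / 3

x = 10.0
approx = math.exp(x * math.cosh(math.pi)) / (x * math.sinh(math.pi))
ratio = A(x) / approx
print(ratio)    # should be close to 1, with an O(1/x) excess
```

At x = 10 the ratio exceeds 1 by about 1%, which is the size of the next term in the endpoint expansion.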

To summarize, if φ′ is non-zero throughout [a, b] then the integrand is concentrated at one of


the end points, and IP quickly delivers the leading-order term. And, if necessary, one can write the
integral as a Laplace transform by changing variables to
v = φ(t) (4.8)

in (4.1). Then Watson’s lemma delivers the full asymptotic expansion. We turn now to discussion
of the case in which φ′ (t) has a zero somewhere in [a, b].

4.1 An example — the Gaussian approximation


As an example of Laplace’s method with a zero of φ′ we study the function defined by
Z y
def
U (x, y) = e−x cosh t dt , (4.9)
0

and ask for an asymptotic approximations as x → +∞ with y fixed. In this example φ′ = sinh t is
zero at t = 0 and IP fails.
Exercise: Find a two-term approximation of U (x, 1) when |x| ≪ 1.

With x → ∞, the main contribution to U(x, y) in (4.9) is from t ≈ 0. Thus, according to Laplace, the leading-order behaviour is

U(x, y) ∼ ∫_0^∞ e^{−x(1 + t²/2)} dt , (4.10)

= √(π/2x) e^{−x} ,   as x → +∞. (4.11)

The peak of the integrand is centered on t = 0 and has width x−1/2 ≪ 1. All the approximations
we’ve made above are good in the peak region. They’re lousy approximations outside the peak
e.g., near t = 1/2. But both the integrand and our approximation to the integrand are tiny near
t = 1/2 and thus those errors do not seriously disturb our estimate of the integral.
Notice that in (4.10) the range of integration is extended to t = ∞ — we can then do the integral
without getting tangled up in error functions. The point is that the leading-order behaviour of
U (x, y) as x → ∞ is independent of the fixed upper limit y. If you’ve understood the argument
above regarding the peak width, then you’ll appreciate that if y = 1/10 then x will have to be
roughly as big as 100 in order for (4.11) to be accurate.
Let’s bash out the second term in the x → ∞ asymptotic expansion. According to mathemat-
ica, the integrand is
 
2 4 6 2 xt4 xt6 
e−x cosh t = e−x−xt /2 e−xt /4!−xt /6!+··· ≈ e−x−xt /2 1 − − + O x2 t 8 . (4.12)
24 720

Notice the x² in the big Oh error estimate above — this x² will bite us below. We now substitute the expansion (4.12) into the integral (4.9) and integrate term-by-term using

∫_0^∞ t^p e^{−at²} dt = (1/2) a^{−(p+1)/2} Γ( (p + 1)/2 ) . (4.13)

Thus we have

U(x, y) = e^{−x} ∫_0^∞ e^{−xt²/2} ( 1 − xt⁴/24 − xt⁶/720 + O(x²t⁸) ) dt . (4.14)

After using (4.13) to evaluate the integrals, the four terms in the parentheses contribute at orders x^{−1/2}, x^{−3/2}, x^{−5/2} and x^{−5/2} respectively. Notice that a term of order x²t⁸ is of order x^{−5/2} after integration. Thus, if we desire a systematic expansion, we should not keep the term xt⁶ and drop x²t⁸. After integration both these terms are order x^{−5/2}, and we should keep them both, or drop them both.

[Figure 4.1 appears here. Both panels plot e^x U(x, 5) from quadrature against the asymptotic approximations √(π/2x), √(π/2x)(1 − 1/8x) and √(π/2x)(1 − 1/8x + 9/128x²).]

Figure 4.1: Upper panel compares the two-term asymptotic expansion in (4.17) with evaluation of
the integral by numerical quadrature using the matlab routine quad. The lower panel compares
the three term expansion in (4.21) with quadrature.

Proceeding with the integration

U(x, y) ∼ e^{−x} √(2/x) ∫_0^∞ e^{−v²} ( 1 − v⁴/6x − 8v⁶/720x² + O(v⁸x^{−2}) ) dv , (4.15)

= e^{−x} √(π/2x) ( 1 − (1/6x)(3/4) − (8/720x²)(15/8) + O(x^{−2}) ) , (4.16)

∼ e^{−x} √(π/2x) ( 1 − 1/8x + O(x^{−2}) ) . (4.17)

Discretion is the better part of valor, so I've dropped the inconsistent term and written O(x^{−2}) above.
Another way to generate more terms in the expansion is to convert U(x, y) into a Laplace transform via u = cosh t − 1:

U(x, y) ∼ e^{−x} ∫_0^∞ (e^{−xu}/√(2u + u²)) du , (4.18)

∼ e^{−x} ∫_0^∞ e^{−xu} (1/√(2u)) ( 1 − u/4 + 3u²/32 − 5u³/128 + O(u⁴) ) du , (4.19)

= e^{−x} (1/√(2x)) ( Γ(1/2) − (1/4x) Γ(3/2) + (3/32x²) Γ(5/2) + O(x^{−3}) ) , (4.20)

= e^{−x} √(π/2x) ( 1 − 1/8x + 9/128x² + O(x^{−3}) ) . (4.21)

The Laplace-transform approach is more systematic because the coefficients in the series expansion
(4.19) are not functions of x, and the expansion is justified using Watson’s lemma. However the
argument about the dominance of the peak provides insight and is all one needs to quickly obtain
the leading-order asymptotic expansion.
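These expansions are easy to sanity-check by quadrature. Here is a Python sketch (the notes themselves use matlab's quad for figure 4.1); it assumes, per (4.9), that U(x,y) = ∫₀^y e^{−x cosh t} dt, and compares e^x U(x,5) with the three-term series (4.21) at x = 30. The cutoffs and tolerances are ad hoc choices.

```python
import math

def simpson(f, a, b, n=4000):
    # composite Simpson rule; n must be even
    h = (b - a) / n
    s = f(a) + f(b) + sum((4 if i % 2 else 2) * f(a + i * h) for i in range(1, n))
    return s * h / 3

x, y = 30.0, 5.0
# e^x U(x, y): work with the scaled integrand to avoid tiny numbers
num = simpson(lambda t: math.exp(-x * (math.cosh(t) - 1.0)), 0.0, y)
# three-term expansion (4.21): sqrt(pi/2x) (1 - 1/8x + 9/128x^2)
asym = math.sqrt(math.pi / (2 * x)) * (1 - 1 / (8 * x) + 9 / (128 * x**2))
print(num, asym)   # the two agree closely at x = 30
```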

4.2 Another Laplacian example


Consider

Iₙ ≡ (1/π) ∫_{−π/2}^{π/2} (cos t)ⁿ dt.      (4.22)

With a little integration by parts one can show that

Iₙ = (1 − 1/n) I_{n−2}.      (4.23)

Then, since I₀ = 1 and I₁ = 2/π, it is easy to compute the exact integral at integer n recursively.
Let's use Laplace's method to find an n → ∞ asymptotic approximation. We write the integral
as

Iₙ = (1/π) ∫_{−π/2}^{π/2} e^{n ln cos t} dt,      (4.24)

and then make the small-t approximation

ln cos t = ln(1 − t²/2 + ···) ≈ −t²/2.      (4.25)

Figure 4.2: The exact evaluation of (4.22) is solid lines connecting the ∗'s and the leading-order
asymptotic estimate (4.27) is the dashed curve. The improved result in (4.33) is the dotted curve.

Thus the leading order is obtained by evaluating a gaussian integral

Iₙ ∼ (1/π) ∫_{−∞}^{∞} e^{−nt²/2} dt,      (4.26)
   = √(2/πn).      (4.27)

Figure 4.2 compares this approximation to the exact integral. Suppose we're disappointed with
the performance of this approximation at n = 5, and want just one more term. The easiest way to
bash out an extra term is

ln cos t = ln[1 − t²/2 + t⁴/24 + ord(t⁶)],      (4.28)
         = −[t²/2 − t⁴/24 + ord(t⁶)] − ½[t²/2 + ord(t⁴)]² + ord(t⁶),      (4.29)
         = −t²/2 − t⁴/12 + ord(t⁶),      (4.30)

and then

Iₙ ∼ (1/π) ∫_{−∞}^{∞} e^{−nt²/2} e^{−nt⁴/12} dt,      (4.31)
   = (1/π) ∫_{−∞}^{∞} e^{−nt²/2} (1 − nt⁴/12) dt,      (4.32)
   = √(2/πn) (1 − 1/4n).      (4.33)

This works very well at n = 5. In the unlikely event that more terms are required, then it is
probably best to be systematic: change variables with v = − ln cos t and use Watson’s lemma.
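The recursion (4.23) and the improved approximation (4.33) can be compared directly; a short Python sketch:

```python
import math

def I(n):
    # exact I_n from the recursion (4.23), seeded with I_0 = 1 and I_1 = 2/pi
    if n == 0:
        return 1.0
    if n == 1:
        return 2.0 / math.pi
    return (1.0 - 1.0 / n) * I(n - 2)

def asym(n):
    # two-term Laplace approximation (4.33)
    return math.sqrt(2.0 / (math.pi * n)) * (1.0 - 1.0 / (4.0 * n))

print(I(5), asym(5))   # 0.3395... vs 0.3389...
```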

4.3 Laplace's method with moving maximum

Large-s asymptotic expansion of a Laplace transform

Not all applications of Laplace's method fall into the form (4.1). For example, consider the Laplace
transform

L[e^{−1/t}] = ∫₀^∞ e^{−1/t − st} dt, as s → ∞.      (4.34)

Watson's lemma is defeated by this example.
In the exponential in (4.34) we have χ ≡ t^{−1} + st, and

dχ/dt = 0, ⇒ −1/t² + s = 0.      (4.35)

Thus the integrand is biggest at t∗ = s^{−1/2} — the peak is approaching t = 0 as s increases. Close
to the peak

χ = χ(t∗) + ½ χ''(t∗)(t − t∗)² + O(t − t∗)³,      (4.36)
  = 2s^{1/2} + s^{3/2}(t − s^{−1/2})² + O(t − t∗)³.      (4.37)

The width of the peak is s^{−3/4} ≪ s^{−1/2}, so it helps to introduce a change of variables

v ≡ s^{3/4}(t − s^{−1/2}).      (4.38)

In terms of the original variable t the peak of the integrand is moving as s increases. We make the
change of variable in (4.38) so that the peak is stationary at v = 0. The factor s^{3/4} on the right of
(4.38) ensures that the width of the v-peak is not changing as s → ∞.
Notice that t = 0 corresponds to v = −s^{1/4} → −∞. But the integrand has decayed practically
to zero once |v| ≫ 1. Thus the lower limit can be taken to v = −∞. The Laplace transform is
therefore

L[e^{−1/t}] ∼ s^{−3/4} e^{−2s^{1/2}} ∫_{−∞}^{∞} e^{−v²} dv = √π s^{−3/4} e^{−2s^{1/2}}, as s → ∞.      (4.39)

This Laplace transform is exponentially small as s → ∞, and of course the original function was
also exponentially small as t → 0. I trust you're starting to appreciate that there is an intimate
connection between the small-t behaviour of f(t) and the large-s behaviour of f̄(s).
Remark: the Laplace transform of any function must vanish as s → ∞. So, if you're asked to
find the inverse Laplace transform of s, the answer is that there is no function with this transform.
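A numerical check of (4.39) in Python; the factor e^{2√s} is divided out so the numbers stay O(1). The relative corrections to the Gaussian approximation here are O(s^{−1/2}), so the tolerance below is loose by design.

```python
import math

def simpson(f, a, b, n=40000):
    h = (b - a) / n
    s = f(a) + f(b) + sum((4 if i % 2 else 2) * f(a + i * h) for i in range(1, n))
    return s * h / 3

s = 400.0
# e^{2 sqrt(s)} L[e^{-1/t}](s); the peak is at t = s^{-1/2} = 0.05
num = simpson(lambda t: math.exp(2 * math.sqrt(s) - 1 / t - s * t), 1e-6, 1.0)
asym = math.sqrt(math.pi) * s ** (-0.75)   # sqrt(pi) s^{-3/4}, from (4.39)
print(num / asym)   # close to 1; corrections are O(s^{-1/2})
```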

Stirling’s approximation
A classic example of a moving maximum is provided by Stirling's approximation to n!. Starting
from

Γ(x+1) = ∫₀^∞ tˣ e^{−t} dt,      (4.40)

let's derive the fabulous result

Γ(x+1) ∼ √(2πx) (x/e)ˣ [1 + 1/12x + O(x^{−2})], as x → ∞.      (4.41)

At x = 1, we have from the leading order 1 ≈ √(2π)/e = 0.9221, which is not bad! And with the
next term √(2π)/e × (13/12) = 0.99898. It only gets better as x increases.
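Checking (4.41) against the exact Γ-function is a one-liner in Python:

```python
import math

def stirling(x):
    # two-term Stirling approximation (4.41)
    return math.sqrt(2 * math.pi * x) * (x / math.e) ** x * (1 + 1 / (12 * x))

for x in (1.0, 5.0, 10.0):
    print(x, stirling(x) / math.gamma(x + 1))
# the ratio is 0.99898 at x = 1 and rapidly approaches 1
```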
We begin by moving everything in (4.40) upstairs into the exponential:

Γ(x+1) = ∫₀^∞ e^{χ} dt,      (4.42)

where

χ ≡ x ln t − t.      (4.43)

The maximum of χ is at t∗ = x — the maximum is moving as x increases. We can expand χ around
this moving maximum as

χ = x ln x − x − (t − x)²/2x + O(t − x)³,      (4.44)
  = x ln x − x − v²,      (4.45)

where v ≡ (t − x)/√(2x) is the new variable of integration. With this Gaussian approximation we
have

Γ(x+1) = e^{x ln x − x} √(2x) ∫_{−∞}^{∞} e^{−v²} dv = √(2πx) (x/e)ˣ.      (4.46)

This is the leading-order term in (4.41).

Exercise: Obtain the next term, 1/12x, in (4.41).
Example: Find the leading order approximation to

Λ(x) ≡ ∫₀^∞ tˣ e^{−t}/(1 + t²) dt.      (4.47)

It is necessary to move all functions upstairs into the exponential, and after some algebra I found

Λ(x) ∼ √(2πx) [(x − 2)/e]^{x−2}, as x → ∞.      (4.48)

I'm about 80% sure that this is correct.
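The claim (4.48) can be tested numerically. In this Python sketch the logarithm of the claimed asymptotic form is subtracted inside the exponential, so the ratio Λ(x)/asymptotic is computed without ever forming huge numbers; the cutoff 4x and the 10% tolerance are ad hoc (the neglected corrections are O(1/x)).

```python
import math

def simpson(f, a, b, n=20000):
    h = (b - a) / n
    s = f(a) + f(b) + sum((4 if i % 2 else 2) * f(a + i * h) for i in range(1, n))
    return s * h / 3

x = 40.0
# log of the claimed asymptotic form sqrt(2 pi x) ((x-2)/e)^(x-2)
la = 0.5 * math.log(2 * math.pi * x) + (x - 2) * (math.log(x - 2) - 1)
# ratio Lambda(x)/asymptotic, with the big common factor divided out
ratio = simpson(lambda t: math.exp(x * math.log(t) - t - math.log(1 + t * t) - la),
                1e-6, 4 * x)
print(ratio)   # within a few percent of 1 at x = 40
```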

4.4 Uniform approximations


Consider a function of two variables¹ defined by:

J(x, α) ≡ ∫₀^∞ e^{−x(sinh t − αt)} dt, with x → ∞, and α fixed.      (4.49)

In this case

φ = sinh t − αt, and dφ/dt = cosh t − α.      (4.50)

The location of the minimum of φ crucially depends on whether α is greater or less than one.

¹This function is related to the Anger function Aν(x) ≡ ∫₀^∞ exp(−νt − x sinh t) dt.
If α < 1 then the minimum of φ is at t = 0 and

J(x, α<1) ∼ ∫₀^∞ e^{−x(1−α)t} dt,      (4.51)
          = 1/(1−α)x, as x → ∞, and α < 1 fixed.      (4.52)

If α > 1, the minimum of φ(t) moves away from t = 0 and enters the interior of the range of
integration. Let's call the location of the minimum t∗(α):

cosh t∗(α) = α, and therefore t∗ = ln(α + √(α² − 1)).      (4.53)

If α > 1 then t∗ is real and positive. Notice that

φ(t∗) = sinh t∗ − αt∗ = √(α² − 1) − αt∗(α),      (4.54)

and

φ''(t∗) = sinh t∗ = √(α² − 1).      (4.55)

Then we expand φ(t) in a Taylor series round t∗:

φ(t) = φ(t∗) + ½(t − t∗)² φ''(t∗) + O(t − t∗)³.      (4.56)
To leading order

J(x, α>1) ∼ e^{−xφ∗} ∫_{−∞}^{∞} e^{−½ x φ''(t∗)(t−t∗)²} dt.      (4.57)

Notice we've extended the range of integration to t = −∞ above. The error is small, and this
enables us to evaluate the integral exactly:

J(x, α>1) ∼ √(2π/xφ''∗(α)) e^{−xφ∗(α)}, as x → ∞.      (4.58)

If we use the expressions for t∗(α) and φ''∗(α) above then we obtain an impressive function of the
parameter α:

J(x, α>1) ∼ √(2π/(x√(α² − 1))) exp(−x√(α² − 1)) (α + √(α² − 1))^{αx}, as x → ∞.      (4.59)

Comparing (4.52) with (4.59), we wonder what happens if α = 1? And how does the asymptotic
expansion change continuously from the simple form in (4.52) to the complicated expression in
(4.59) as α passes continuously through 1?
Notice that as x → ∞:

J(x, 1) = ∫₀^∞ e^{−x(sinh t − t)} dt,      (4.60)
        ∼ ∫₀^∞ e^{−xt³/6} dt,      (4.61)
        = 2^{1/3} 3^{−2/3} Γ(1/3) x^{−1/3}.      (4.62)

So, despite the impression given by (4.52) and (4.59), J(x, 1) is not singular.

We're interested in the transition where α is close to 1, so we write

α = 1 + ε,      (4.63)

where ε is small. Then

J(x, α) ∼ ∫₀^∞ e^{xεt − xt³/3} dt = x^{−1/3} ∫₀^∞ e^{ξτ − τ³/3} dτ,      (4.64)

where τ = x^{1/3} t and ξ is a similarity variable:

ξ ≡ (α − 1) x^{2/3}.      (4.65)

The transition from (4.52) to (4.59) occurs when α − 1 = O(x^{−2/3}), and ξ = O(1). The transition
is described uniformly by a special function

J(ξ) ≡ ∫₀^∞ e^{ξτ − τ³/3} dτ.      (4.66)

Our earlier results in (4.52), (4.59) and (4.62) are obtained as special cases by taking ξ → −∞,
ξ → +∞ and ξ = 0 in J(ξ).
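The limiting forms (4.52) and (4.62) are easy to confirm by quadrature; a Python sketch (the truncation points and tolerances are ad hoc choices):

```python
import math

def simpson(f, a, b, n=20000):
    h = (b - a) / n
    s = f(a) + f(b) + sum((4 if i % 2 else 2) * f(a + i * h) for i in range(1, n))
    return s * h / 3

def J(x, alpha, b):
    # J(x, alpha) = int_0^infty exp(-x (sinh t - alpha t)) dt, truncated at t = b
    return simpson(lambda t: math.exp(-x * (math.sinh(t) - alpha * t)), 0.0, b)

x = 1000.0
j_half, asym_half = J(x, 0.5, 0.05), 1.0 / ((1 - 0.5) * x)                # (4.52)
j_one = J(x, 1.0, 1.0)
asym_one = 2 ** (1/3) * 3 ** (-2/3) * math.gamma(1/3) * x ** (-1/3)       # (4.62)
print(j_half / asym_half, j_one / asym_one)   # both ratios close to 1
```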

4.5 Problems
Problem 4.1. Considering U(x,y) in (4.9), show that

x² U_{xx} + x U_x − x² U = U_{yy}.      (4.67)

Evaluate U(x,∞) in terms of modified Bessel functions.

Problem 4.2. Consider

V(x,k,p) ≡ ∫₀^{kx^{−p}} e^{−x cosh t} dt, as x → ∞.      (4.68)

Find a leading-order approximation to (i) V(x,k,1); (ii) V(x,k,1/2) and (iii) V(x,k,1/4). Hint:
In one of the three cases you'll need to use the error function.

Problem 4.3. Show that

∫₀¹ [t/(1 + t²)]ⁿ eᵗ dt ∼ √(π/2n) e/2ⁿ, as n → ∞.      (4.69)

Problem 4.4. Show that

∫₀^π tⁿ sin t dt ∼ π^{n+2}/n², as n → ∞.      (4.70)
Problem 4.5. The beta function is

B(x,y) ≡ ∫₀¹ t^{x−1} (1 − t)^{y−1} dt.      (4.71)

With a change of variables show that

B(x,y) = ∫₀^∞ e^{−xv} (1 − e^{−v})^{y−1} dv.      (4.72)

Suppose that y is fixed and x → ∞. Use Laplace's method to obtain the leading order approximation

B(x,y) ∼ Γ(y)/x^y.      (4.73)

Go to the Digital Library of Special Functions, chapter 5 and find the relation between the beta
function and the gamma function. (You can probably also find this formula in RHB, or any text
on special functions.) Use this relation to show that

Γ(x)/Γ(x+y) ∼ 1/x^y, as x → ∞.      (4.74)

Remark: this result can also be deduced from Stirling's approximation, but it's a rather messy
calculation.
Problem 4.6. Find an asymptotic approximation of

∫₀^∞ ∫₀^∞ e^{−n(x²+y²)}/(1 + x + y)ⁿ dx dy, as n → ∞.      (4.75)
Problem 4.7. Find the x → ∞ leading-order behaviour of the integrals

A(x) = ∫_{−1}^{1} e^{−xt³} dt,     B(x) = ∫_{−1}^{1} e^{+xt³} dt,      (4.76)
C(x) = ∫_{−1}^{1} e^{−xt⁴} dt,     D(x) = ∫_{−1}^{1} e^{+xt⁴} dt,      (4.77)
E(x) = ∫₀^∞ e^{−xt − t⁴/4} dt,     F(x) = ∫_{−∞}^{−1} e^{+xt − t⁴/4} dt,      (4.78)
G(x) = ∫_{−∞}^{∞} e^{−t²}/(1 + t²)ˣ dt,     H(x) = ∫_{−∞}^{∞} e^{t²}/(1 + t²)ˣ dt,      (4.79)
I(x) = ∫₀^{π/2} e^{−x sec t} dt,     J(x) = ∫₀^{π/2} e^{−x sin²t} dt,      (4.80)
K(x) = ∫_{−1}^{1} (1 − t²) e^{−x cosh t} dt,     L(x) = ∫_{−1}^{1} (1 − t²) e^{x cosh t} dt.      (4.81)

Problem 4.8. Find the leading order asymptotic expansion of

M(x) ≡ ∫₀^∞ e^{xt} t^{−t} dt      (4.82)

as x → ∞ and as x → −∞.
Problem 4.9. Find the first two terms in the asymptotic expansion of

N(x) ≡ ∫₀^∞ tⁿ e^{−t² − x/t} dt      (4.83)

as x → ∞.
Problem 4.10. Show that

∫₀^∞ [1/(1 + e^{−x})]ⁿ e^{−x} dx ∼ √(2π) (n − 1)^{n−3/2}/n^{n−1/2}, as n → ∞.      (4.84)

(I am 80% sure this is correct.)

Problem 4.11. (i) Draw a careful graph of φ(t) = (1 − 2t²)² for −2 ≤ t ≤ 2. (ii) Use Laplace's
method to show that as x → ∞

∫₀^{1/2} √(1 + t) e^{xφ} dt ∼ eˣ [¼ √(π/x) + p/x + q/x^{3/2} + ···],      (4.85)

and determine the constants p and q. Find the asymptotic expansion as x → ∞ of

(ii′) ∫₀¹ √(1 + t) e^{xφ} dt,     (iii′) ∫_{−1}^{1} √(1 + t) e^{xφ} dt.      (4.86)

Calculate the expansion up to and including terms of order x^{−3/2} eˣ.

Figure 4.3: A comparison of F(x) computed from (4.87) using matlab (solid curve) with the
asymptotic approximation (dashed curve).

Problem 4.12. Consider the function

F(x) ≡ ∫₀^∞ exp(−t³/3 + xt) dt.      (4.87)

(i) F(x) satisfies a second-order linear inhomogeneous differential equation. Find the ODE and
give the initial conditions F(0) and F′(0) in terms of the Γ-function. (ii) Perform a local analysis
of this ODE round the irregular singular point at x = ∞ and say what you can about the large-x
behaviour of F(x). (iii) Use Laplace's method on (4.87) to obtain the complete x → ∞ leading-
order approximation to F(x). (iv) Numerically evaluate (4.87) and make a graphical comparison
with Laplace's approximation on the interval 0 ≤ x ≤ 3 (see figure 4.3).

%% MATLAB script for Laplace's method.
%% You'll have to supply the ??'s and code myfun.
clear
xx = [0:0.05:3];
nloop = length(xx);
FF = zeros(1,nloop);     % Store function values in FF
uplim = 10;              % 10 = \infty for the upper limit of quad?
lowlim = realmin;        % avoid a divide-by-zero error
for n = 1:nloop
    F = quad(@(t)myfun(t,xx(n)),lowlim,uplim);
    FF(n) = F;
end
plot(xx,FF)
hold on
approx = sqrt(??)*xx.^(-??).*exp(2*xx.^(??)/3);
plot(xx,approx,'--')
hold off
xlabel('x')
ylabel('F(x)')

Figure 4.4: Upper panel is the exact integrand in (4.88) (the solid curve) and the Gaussian
approximation (dashed). Lower panel compares the F(x) obtained by numerical quadrature (solid)
with the asymptotic approximation. The comparison is not great — problem 4.13 asks you to
calculate the next term in the asymptotic expansion and add that to the figure.

Problem 4.13. Find the first few terms in the x → ∞ asymptotic expansion of

F(x) ≡ ∫₀¹ exp[−xt²/(1 + t)] dt.      (4.88)
Improve figure 4.4 by adding the higher-order approximations to the lower panel.

Problem 4.14. Find the first two terms in the x → ∞ expansion of

Y(x) ≡ ∫₀^{eˣ} e^{−xt²/(1+t²)} dt.      (4.89)

Problem 4.15. Show that as x → ∞

∫ₓ^∞ e^{−t}/tˣ dt ∼ e^{−x} x^{1−x} [1/2x + 1/8x² + ord(x^{−3})].      (4.90)
Lecture 5

Fourier Integrals and Stationary phase

5.1 Fourier Series


Recall that we can represent almost any function f (x) defined on the fundamental interval −π <
x < π as a Fourier series

f (x) = a0 + a1 cos x + a2 cos 2x + · · ·


+ b1 sin x + b2 sin 2x + · · · (5.1)

(see chapter 12 of RHB). Determining the coefficients in the series above devolves to evaluating
the integrals:
a₀ = (1/2π) ∫_{−π}^{π} f(x) dx,     aₖ = (1/π) ∫_{−π}^{π} cos kx f(x) dx,      (5.2)

bₖ = (1/π) ∫_{−π}^{π} sin kx f(x) dx.      (5.3)

(Notice the irritating factors of 2 in a0 versus ak .) We’re interested in how fast these Fourier
coefficients decay as k → ∞: the series is most useful if the coefficients decay rapidly.
A classic example is the discontinuous square wave function

sqr(x) ≡ sgn[sin(x)].      (5.4)

Applying the recipe above to sqr(x), we begin by observing that because sqr(x) is an odd function,
all the aₖ's are zero. To evaluate bₖ notice that the integrand of (5.3) is even so that we need only
integrate from 0 to π:

bₖ = (2/π) ∫₀^π sin kx dx = −(2/πk) [cos kx]₀^π = (2/πk) [1 − (−1)ᵏ].      (5.5)

The even bₖ's are also zero — this is clear from the anti-symmetry of the integrand above about
x = π/2. A sensitive awareness of symmetry is often a great help in evaluating Fourier coefficients.
Thus we have

sqr(x) = (4/π) [sin x + (1/3) sin 3x + (1/5) sin 5x + (1/7) sin 7x + ···].      (5.6)

The wiggly convergence of (5.6) is illustrated in figure 5.1. (Perhaps we'll have time to say more
about the wiggles later.) The point of this square-wave example is that the Fourier series converges
very slowly: the coefficients decrease only as k^{−1}, and the series is certainly not absolutely
convergent.
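The coefficients (5.5) can be confirmed by quadrature (a Python sketch, using the same reduction to the interval [0, π]):

```python
import math

def simpson(f, a, b, n=2000):
    h = (b - a) / n
    s = f(a) + f(b) + sum((4 if i % 2 else 2) * f(a + i * h) for i in range(1, n))
    return s * h / 3

def b_coeff(k):
    # b_k = (2/pi) int_0^pi sin(kx) dx, as in (5.5)
    return (2 / math.pi) * simpson(lambda x: math.sin(k * x), 0.0, math.pi)

print([round(b_coeff(k), 6) for k in range(1, 6)])
# odd k give 4/(pi k); even k vanish
```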

Figure 5.1: Convergence of the Fourier series of sqr(t). The left panel shows the partial sum
with 1, 4 and 16 terms. The right panel is an expanded view of the Gibbs oscillations round the
discontinuity at x = 0. Notice that the overshoot near x = 0 does not get smaller if n is increased
from 16 to 256.

Exercise: Deduce the Gregory-Leibniz series

π/4 = 1 − 1/3 + 1/5 − 1/7 + ···      (5.7)

from (5.6).

Now let's go to the other extreme and consider very rapidly convergent Fourier series, such as

cos²x = ½ + ½ cos 2x.      (5.8)

Another example of a rapidly convergent Fourier series is

(1 − r²)/(1 + r² − 2r cos x) = 1 + 2r cos x + 2r² cos 2x + 2r³ cos 3x + ···      (5.9)

If |r| < 1 then the coefficients decrease as rᵏ = e^{k ln r}, which is faster than any power of k. In
examples like this we get a great approximation with only a few terms.
We can use IP to prove that if f (x) and its first p−1 derivatives are continuous and differentiable
in the closed interval −π ≤ x ≤ π, and if the p’th derivative exists apart from jump discontinuities
at some points, then the Fourier coefficients are O(n−p−1 ) as n → ∞. Functions such as sqr(x),
with jump discontinuities, correspond to p = 0 — the coefficients decay slowly as n−1 . Very smooth
functions such as (5.8) and (5.9) correspond to p = ∞.
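For example, f(x) = |x| is continuous on [−π, π] with equal end values, but f′ jumps at x = 0; this is the p = 1 case, and its cosine coefficients should decay as n^{−2}. A quick Python check:

```python
import math

def simpson(f, a, b, n=4000):
    h = (b - a) / n
    s = f(a) + f(b) + sum((4 if i % 2 else 2) * f(a + i * h) for i in range(1, n))
    return s * h / 3

def a_coeff(k):
    # a_k = (1/pi) int_{-pi}^{pi} |x| cos(kx) dx, per (5.2)
    return (1 / math.pi) * simpson(lambda x: abs(x) * math.cos(k * x),
                                   -math.pi, math.pi)

# odd k: a_k = -4/(pi k^2); even k >= 2: a_k = 0, so a_1/a_3 = 9
print(a_coeff(1), a_coeff(3), a_coeff(1) / a_coeff(3))
```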
The asymptotic estimate in the previous paragraph is obtained by evaluating the Fourier coefficient

fₙ = ∫_{−π}^{π} f(x) e^{inx} dx,      (5.10)
using integration by parts. Suppose we can break the fundamental interval up into sub-intervals so
that f (x) is smooth (i.e., infinitely differentiable) in each subinterval. Non-smooth behavior, such
a jump in some derivative, occurs only at the ends of the interval. Then the contribution of the

sub-interval (a,b) to fₙ is

Iₙ ≡ ∫ₐᵇ f(x) e^{inx} dx,
   = (1/in) ∫ₐᵇ f(x) (d e^{inx}/dx) dx,
   = (1/in) [f(x) e^{inx}]ₐᵇ − (1/in) ∫ₐᵇ f′(x) e^{inx} dx,      (5.11)

where the final integral is ≡ Jₙ. Since f(x) is smooth, we can apply integration by parts to Jₙ to obtain

Iₙ = (1/in) [f(x) e^{inx}]ₐᵇ + (1/n²) [f′(x) e^{inx}]ₐᵇ − (1/n²) ∫ₐᵇ f′′(x) e^{inx} dx,      (5.12)

with the final integral ≡ Kₙ. Obviously we can keep going and develop a series in powers of n^{−1}. Thus we can express Iₙ in
terms of the values of f and its derivatives at the end-points.
It is sporting to show that we actually generate an asymptotic series with this approach. For
instance, looking at (5.12), we should show that the ratio of the remainder, n^{−2}Kₙ, to the previous
term limits to zero as n increases. Assuming that f′ is not zero at both end points, this requires
that

lim_{n→∞} ∫ₐᵇ f′′(x) e^{inx} dx = 0.      (5.13)

We can bound the integral easily

|∫ₐᵇ f′′(x) e^{inx} dx| ≤ ∫ₐᵇ |f′′(x)| |e^{inx}| dx ≤ ∫ₐᵇ |f′′(x)| dx.      (5.14)

But this doesn't do the job.
Instead, we can invoke the Riemann-Lebesgue lemma¹: If ∫ₐᵇ |F(t)| dt exists then

lim_{α→∞} ∫ₐᵇ e^{iαt} F(t) dt = 0.      (5.15)

Riemann-Lebesgue does not tell us how fast the integral vanishes. So, by itself, Riemann-Lebesgue
is not an asymptotic estimate. But RL does assure us that the remainder in (5.12) is vanishing
faster than the previous term as n → ∞, i.e., dropping the remainder we obtain an n → ∞ asymptotic
approximation.
An alternative to Riemann-Lebesgue is to change our perspective and think of (5.12) like this:

Iₙ = (1/in) [f(x) e^{inx}]ₐᵇ + (1/n²) [f′(x) e^{inx}]ₐᵇ − (1/n²) ∫ₐᵇ f′′(x) e^{inx} dx,      (5.16)

where the last two terms together are the new remainder. The bound in (5.14) then shows that the
new remainder is asymptotically less than the first term on the right as n → ∞. We can then
continue to integrate by parts and prove asymptoticity by using the last two terms as the remainder.
¹The statement above is not the most general and useful form of RL — see section 3.4 of Asymptotics and Special
Functions by F.W.J. Olver — particularly for cases with a = −∞ or b = +∞.


Figure 5.2: The first three terms in (5.18) make a rough approximation to the dashed square.

Some examples

Suppose, for example, we have a function such as those in (5.8) and (5.9). These examples are
smooth throughout the fundamental interval. In this case we take a = −π and b = π and use the
result above. Since f(x) and all its derivatives have no jumps, even at x = ±π, all the end-point
terms vanish. Thus in this case fₙ decreases faster than any power of n, e.g., perhaps something
like e^{−n}, or e^{−√n}. In this case integration-by-parts does not provide the asymptotic rate of decay
of the Fourier coefficients — we must deploy a more potent method such as steepest descent.
Example: An interesting example of a Fourier series is provided by a square in the (x,y)-plane defined by the four
vertices (1,0), (0,1), (−1,0) and (0,−1). The square can be represented in polar coordinates as r = R(θ). In
the first quadrant of the (x,y)-plane, the edge of the square is the line x + y = 1, or

R(θ) = 1/(cos θ + sin θ), if 0 ≤ θ ≤ π/2.      (5.17)

With some huffing and puffing we could write down R(θ) in the other three quadrants. But instead we simplify
matters using the obvious symmetries of the square.
Because R(θ) = R(−θ) we only need the cosines in the Fourier series. But we also have R(θ) = R(θ + π/2),
and this symmetry implies that

R(θ) = a₀ + a₄ cos 4θ + a₈ cos 8θ + ···      (5.18)

We can save some work by leaving out cos θ, cos 2θ, cos 3θ etc because these terms reverse sign if θ → θ + π/2.
Thus the aₖ's corresponding to these harmonics will turn out to be zero.
The first term in the Fourier series is therefore

a₀ = (1/2π) ∮ R(θ) dθ,      (5.19)
   = (2/π) ∫₀^{π/2} dθ/(cos θ + sin θ).      (5.20)

We've used symmetry to reduce the integral from −π to π to four times the integral over the side in the first
quadrant. The mathematica command
Integrate[1/(Sin[x] + Cos[x]), {x, 0, Pi/2}]
tells us that

a₀ = (2√2/π) tanh^{−1}(1/√2) ≈ 0.7935.      (5.21)

The higher terms in the series are

a₄ₖ = (1/π) ∮ cos(4kθ) R(θ) dθ = (4/π) ∫₀^{π/2} cos(4kθ) dθ/(cos θ + sin θ).      (5.22)

With mathematica, we find

a₄ = (4/π)(4/3 − τ) = 0.1106,     a₈ = −(4/π)(128/105 − τ) = 0.0349,      (5.23)
a₁₂ = (4/π)(4364/3465 − τ) = 0.0166,     a₁₆ = −(4/π)(55808/45045 − τ) = 0.0096,      (5.24)

where

τ ≡ √2 tanh^{−1}(1/√2) = 1.24645.      (5.25)
Figure 5.2 shows that the first three terms of the Fourier series can be used to draw a pretty good square.
We might have anticipated this because the coefficients above decrease quickly. In fact, we now show that
a4k = O(k−2 ) as k → ∞.
After this preamble, we consider the problem of estimating the Fourier integral

S(N) = ∫₀^{π/2} cos Nθ/(sin θ + cos θ) dθ,      (5.26)

as N → ∞. (I've changed notation: N is a continuously varying quantity i.e., not necessarily integers 4, 8
etc.)
The Riemann-Lebesgue (RL) lemma assures us that

lim_{N→∞} S(N) = 0.      (5.27)

But in asymptotics we're not content with this — we want to know how S(N) approaches zero.
Let us try IP:

S(N) = (1/N) ∫₀^{π/2} (sin Nθ)_θ/(sin θ + cos θ) dθ,      (5.28)
     = (1/N) [sin Nθ/(sin θ + cos θ)]₀^{π/2} + (1/N) ∫₀^{π/2} sin Nθ (cos θ − sin θ)/(sin θ + cos θ)² dθ.

We've invoked the Riemann-Lebesgue lemma above to see that the final integral vanishes as N → ∞. Thus,
provided that sin(Nπ/2) ≠ 0, the leading order term is

S(N) ∼ sin(Nπ/2)/N.      (5.29)
If N is an even integer (and in the problem that originated this example, N = 4k is an even integer) then to
find a non-zero result we have to integrate by parts again:

S(N) = sin(Nπ/2)/N − (1/N²) ∫₀^{π/2} (cos Nθ)_θ (cos θ − sin θ)/(sin θ + cos θ)² dθ,
     = sin(Nπ/2)/N − (1/N²) [cos Nθ (cos θ − sin θ)/(sin θ + cos θ)²]₀^{π/2}
       + (1/N²) ∫₀^{π/2} cos Nθ [(cos θ − sin θ)/(sin θ + cos θ)²]_θ dθ,
     ∼ sin(Nπ/2)/N + [1 + cos(Nπ/2)]/N² + o(N^{−2}).      (5.30)

We've used RL to justify the o(N^{−2}). We can keep integrating by parts and develop an asymptotic series in
powers of N^{−1}.
With N = 4 and 8 we find from (5.30)

a₄ ≈ 1/2π = 0.1592,     a₈ ≈ 1/8π = 0.0398,      (5.31)
a₁₂ ≈ 1/18π = 0.0177,     a₁₆ ≈ 1/32π = 0.0099.      (5.32)

Comparing these asymptotic estimates with (5.24), we see that the errors are 44%, 14%, 6% and 4% respectively.
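The two-term estimate (5.30) can be tested in Python; at N = 16 the relative error is already only a few percent:

```python
import math

def simpson(f, a, b, n=4000):
    h = (b - a) / n
    s = f(a) + f(b) + sum((4 if i % 2 else 2) * f(a + i * h) for i in range(1, n))
    return s * h / 3

def S(N):
    # the Fourier integral (5.26)
    return simpson(lambda t: math.cos(N * t) / (math.sin(t) + math.cos(t)),
                   0.0, math.pi / 2)

N = 16
est = math.sin(N * math.pi / 2) / N + (1 + math.cos(N * math.pi / 2)) / N**2
print(S(N), est)   # relative error a few percent at N = 16
```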

Example: partial failure of IP. Previously we evaluated the Fourier transform

π e^{−|k|} = ∫_{−∞}^{∞} e^{−ikx}/(1 + x²) dx.      (5.33)

Can we find a k → ∞ asymptotic expansion using IP? Let's try:

f(k) = ∫_{−∞}^{∞} 1/(1 + x²) (d/dx)[e^{ikx}/ik] dx,      (5.34)
     = [(1/(1 + x²)) e^{ikx}/ik]_{−∞}^{∞} + (1/ik) ∫_{−∞}^{∞} 2x e^{ikx}/(1 + x²)² dx,
     = O(k^{−1}), (use RL).      (5.35)

The boundary term above is zero. We could IP again, but again the terms that fall outside the integral are
zero. In retrospect, this can't work — after n integrations we'll find

f(k) = O(k^{−n}).      (5.36)

This is true: using the exact answer in (5.33)

lim_{k→∞} kⁿ e^{−k} = 0, for all n.      (5.37)

IP will never recover an exponentially small integral. I call this a partial failure, because at least integration
by parts correctly tells us that the Fourier transform is smaller than any inverse power of k. This is the case
for any infinitely differentiable function: just keep integrating by parts.

5.2 Generalized Fourier Integrals


Generalized Fourier integrals are the imaginary analog of the Laplace integrals in (4.1). The
generalized Fourier integral is

J(x) = ∫ₐᵇ f(t) e^{ixψ(t)} dt,      (5.38)

where ψ(t) is a real phase function. As x → ∞ the integrand is very oscillatory. We previously
considered the special case ψ(t) = t and obtained the asymptotic expansion with IP. Let's try IP
again:

J = −(i/x) ∫ₐᵇ (f/ψ′) (d/dt) e^{ixψ(t)} dt,      (5.39)
  = −(i/x) [(f/ψ′) e^{ixψ}]ₐᵇ + (i/x) ∫ₐᵇ (d/dt)(f/ψ′) e^{ixψ(t)} dt,      (5.40)

and the final integral is o(1) by RL.
The asymptotic expansion of J(x) can be obtained via IP provided that f(t)/ψ′(t) is non-zero at
either a or b and provided that f(t)/ψ′(t) is not singular for a ≤ t ≤ b. If f(t)/ψ′(t) is singular
then IP fails. Instead we need the method of stationary phase.
Example: Let's find the leading term in the asymptotic expansion of

A(x) = ∫₀^{π/2} e^{−t²} sin xt dt, as x → ∞.      (5.41)

This is an ordinary Fourier integral and integration by parts is the method of choice:

A(x) = −(1/x) ∫₀^{π/2} e^{−t²} (d/dt) cos xt dt,      (5.42)
     = (1/x) [1 − e^{−π²/4} cos(xπ/2)] − (2/x) ∫₀^{π/2} t e^{−t²} cos xt dt.      (5.43)

The RL lemma shows that the final term is o(x^{−1}). We can keep integrating by parts to generate the full
asymptotic series in inverse powers of x.

Example: Find the asymptotic expansion of the generalized Fourier integral

B(x) = ∫₀^{π/2} e^{−t²} sin(x cos t) dt, as x → ∞.      (5.44)

We set up the integral for IP:

B(x) = (1/x) ∫₀^{π/2} (e^{−t²}/sin t) (d/dt) cos(x cos t) dt.      (5.45)

But then we see that in this case the integration by parts will fail because e^{−t²}/sin t is singular at t = 0. This
happens because the phase, cos t, has a stationary point at t = 0. We'll use stationary phase on this integral.
Example: Find the asymptotic expansion of

C(x) = ∫₀^{π/2} t sin(x cos t) dt, as x → ∞.      (5.46)

In this case we get away with integration by parts only once:

C(x) = (1/x) ∫₀^{π/2} (t/sin t) (d/dt) cos(x cos t) dt,      (5.47)
     = (1/x) [π/2 − cos x] − (1/x) ∫₀^{π/2} [(sin t − t cos t)/sin²t] cos(x cos t) dt,      (5.48)
     = (1/x) [π/2 − cos x] + o(x^{−1}).      (5.49)

We can't integrate by parts again — we encounter the same t = 0 singularity that stopped us in the example
B(x) above.
Example: Find the asymptotic expansion of

D(x) ≡ ∫₀^a e^{ixt²} dt, as x → ∞.      (5.50)

This example is important for a complete understanding of stationary phase: we'll obtain the full asymptotic
expansion of D(x).
Write the integral as

D(x) = ∫₀^∞ e^{ixt²} dt − ∫ₐ^∞ e^{ixt²} dt,      (5.51)
     = (1/√x) ∫₀^∞ e^{iv²} dv + (i/2x) ∫ₐ^∞ (1/t) (d/dt) e^{ixt²} dt.      (5.52)

The first term on the right is the Fresnel integral, and the second can be integrated by parts to obtain

D(x) = ½ √(π/x) e^{iπ/4} + i e^{ia²x}/2ax + (i/2x) ∫ₐ^∞ e^{ixt²}/t² dt.      (5.53)

The final integral in (5.53) is easily bounded

|∫ₐ^∞ e^{ixt²}/t² dt| ≤ ∫ₐ^∞ dt/t² = 1/a.      (5.54)

Thus the final two terms in (5.53) are O(x^{−1}), showing that

D ∼ ½ √(π/x) e^{iπ/4}, as x → ∞.      (5.55)

Successive integration by parts in (5.53) will generate the full asymptotic expansion of D(x). In fact, we can
reduce this problem to the function

F(z, p) ≡ ∫_z^∞ e^{it}/tᵖ dt      (5.56)

studied in problem 5.4: making the change of variables v = xt², the final integral in (5.51) is

∫ₐ^∞ e^{ixt²} dt = (1/2√x) ∫_{a²x}^∞ e^{iv}/√v dv,      (5.57)
               = F(a²x, ½)/2√x,      (5.58)
               ∼ (i e^{ia²x}/2xa√π) Σ_{n=0}^∞ Γ(n + ½)/(ia²x)ⁿ.      (5.59)

Fresnel Integral Factoids

The Fresnel integrals are

∫₀^∞ cos a²t² dt = ∫₀^∞ sin a²t² dt = (1/2a) √(π/2).

Alternatively, if β is a real number

∫₀^∞ e^{±iβv²} dv = ½ √(π/|β|) exp(±i(π/4) sgn β).

The Fresnel integrals are special cases of

∫₀^∞ e^{itⁿ} dt = Γ(1 + 1/n) e^{iπ/2n}, provided n > 1.

The integral above is used in the case of higher-order stationary points.
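The first factoid can be verified numerically: truncate ∫₀^∞ cos t² dt at t = T and estimate the tail with two integrations by parts, exactly as in example D(x). A Python sketch (T = 20 is an arbitrary cutoff; the tail correction below is accurate to O(T^{−5})):

```python
import math

def simpson(f, a, b, n=40000):
    h = (b - a) / n
    s = f(a) + f(b) + sum((4 if i % 2 else 2) * f(a + i * h) for i in range(1, n))
    return s * h / 3

T = 20.0
# int_0^T cos(t^2) dt plus the IP tail:
#   int_T^inf cos t^2 dt = -sin(T^2)/(2T) + cos(T^2)/(4 T^3) + O(T^-5)
C = (simpson(lambda t: math.cos(t * t), 0.0, T)
     - math.sin(T * T) / (2 * T) + math.cos(T * T) / (4 * T**3))
print(C, 0.5 * math.sqrt(math.pi / 2))   # both ~ 0.62666
```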

Figure 5.3: The upper panel shows the function sin(x cos t) in (5.60) with x = 160. The solid curve
in the lower panel shows the integrand of (5.60). The dotted curve shows the stationary phase
approximation sin[x(1 − t²/2)] used in (5.61).

The stationary phase approximation

Let's reconsider the integral that defeated IP:

B(x) = ∫₀^{π/2} e^{−t²} sin(x cos t) dt, as x → ∞.      (5.60)

The phase ψ = cos t is stationary at t = 0 and thus we suspect that the dominant contribution to
B(x) comes from the neighbourhood of t = 0. Looking at figure 5.3 and proceeding heuristically

B(x) ∼ ∫₀^∞ sin[x(1 − t²/2)] dt,      (5.61)
     = sin x ∫₀^∞ cos(xt²/2) dt − cos x ∫₀^∞ sin(xt²/2) dt,      (5.62)
     ∼ ½ √(π/x) (sin x − cos x), as x → ∞.      (5.63)

We justify the heuristic above by dividing the range of integration into two regions:

B = ∫₀^δ e^{−t²} sin(x cos t) dt + ∫_δ^∞ e^{−t²} sin(x cos t) dt ≡ B₁ + B₂,      (5.64)

where δ ≪ 1. In integral B₁, the variable t is always much less than one and therefore exp(−t²) ≈ 1.
Thus B₁ is asymptotically determined following the argument in example D(x): we obtain (5.61).
In integral B₂ there are no stationary points in the range of integration and therefore with IP
B₂ ∼ x^{−1}, which is negligible relative to B₁ as x → ∞.
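A quadrature check of the leading-order result (5.63) at x = 160, the value used in figure 5.3 (Python sketch; the neglected corrections are O(x^{−1/2}) relative, hence the loose tolerance):

```python
import math

def simpson(f, a, b, n=40000):
    h = (b - a) / n
    s = f(a) + f(b) + sum((4 if i % 2 else 2) * f(a + i * h) for i in range(1, n))
    return s * h / 3

x = 160.0
num = simpson(lambda t: math.exp(-t * t) * math.sin(x * math.cos(t)),
              0.0, math.pi / 2)
asym = 0.5 * math.sqrt(math.pi / x) * (math.sin(x) - math.cos(x))   # (5.63)
print(num, asym)   # close agreement at x = 160
```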

Example: Find the leading term in the x → ∞ asymptotic expansion of

E(x) ≡ ∫₀^∞ exp[ix(t − sinh t)] dt.      (5.65)

The phase is

ψ = t − sinh t = −t³/6 + ord(t⁵).      (5.66)

This is a higher order stationary point because both ψ′ and ψ′′ vanish at the stationary point. The leading
term is

E(x) ∼ ∫₀^∞ exp(−ixt³/6) dt = (6/x)^{1/3} Γ(4/3) e^{−iπ/6}.      (5.67)

Example: Find the leading term in the x → ∞ asymptotic expansion of

F(x, p) ≡ ∫₀^∞ t^{−p} cos[x(sinh t − t)] dt, where 0 ≤ p < 1.      (5.68)

5.3 The Airy function


An important example leading to asymptotic analysis of a Fourier integral is the x → −∞ asymptotic
expansion of the Airy function

Ai(x) = (1/π) ∫₀^∞ cos(kx + k³/3) dk.      (5.69)

We now use the method of stationary phase to determine the leading order asymptotic expansion
of Ai(x) as x → −∞.
We begin by calculating the derivative of the phase function

(d/dk)(kx + k³/3) = x + k².      (5.70)

If x < 0 we see that there are stationary points at k = ±|x|^{1/2} (and only k = +|x|^{1/2} is in the range
of integration). This observation motivates the change of variable

t ≡ k/|x|^{1/2},      (5.71)

so that

Ai(x) = (|x|^{1/2}/π) ∫₀^∞ cos[Xψ(t)] dt,      (5.72)

where

X ≡ |x|^{3/2}, and ψ(t) ≡ t − t³/3.      (5.73)

The change of variables in (5.71) puts (5.69) into the form of a generalized Fourier integral in (5.72).
When X is large the phase of the cosine is changing rapidly and the integrand is violently oscillatory.
Successive positive and negative lobes almost cancel. The main contribution to the integral comes
from the neighbourhood of the point t = t∗ where the oscillations are slowest. This is the point of
"stationary phase", defined by

dψ/dt = 0, ⇒ t∗ = 1.      (5.74)

We get a leading-order approximation to the integral by expanding the phase function round t∗:

ψ = ψ∗ + ½(t − 1)² ψ''∗ + ord(t − 1)³,      (5.75)

with ψ∗ = 2/3 and ψ''∗ = −2.

Figure 5.4: The solid blue curve is Ai(x) and the red dashed curve is the x → −∞ asymptotic
approximation in (5.78).

where ψ∗ ≡ ψ(t∗) and ψ''∗ ≡ ψ''(t∗). Thus as x → −∞:

Ai(x) ∼ (|x|^{1/2}/π) ∫_{−∞}^{∞} cos[X(2/3 − (t − 1)²)] dt,      (5.76)
      ∼ (|x|^{1/2}/πX^{1/2}) ℜ e^{2iX/3} ∫_{−∞}^{∞} e^{−iv²} dv.      (5.77)

To ease the integration we extend the range to −∞, and now invoke the Fresnel integral to obtain

Ai(x) ∼ (1/√π |x|^{1/4}) cos(2|x|^{3/2}/3 − π/4), as x → −∞.      (5.78)

This asymptotic approximation is compared with Ai in Figure 5.4.
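The approximation (5.78) can be checked without any special-function library: the Taylor series of Ai about zero, generated from the recurrence (n+3)(n+2)a_{n+3} = aₙ implied by Airy's equation y′′ = xy, converges for all x (with heavy cancellation at large |x|, but double precision is ample at x = −10). A Python sketch:

```python
import math

def ai_series(x, nmax=240):
    # Taylor coefficients of Ai about 0: a_{n+3} = a_n / ((n+3)(n+2))
    a = [3 ** (-2 / 3) / math.gamma(2 / 3),        # Ai(0)
         -(3 ** (-1 / 3)) / math.gamma(1 / 3),     # Ai'(0)
         0.0]
    for n in range(nmax):
        a.append(a[n] / ((n + 3) * (n + 2)))
    total, xn = 0.0, 1.0
    for c in a:
        total += c * xn
        xn *= x
    return total

x = -10.0
asym = (math.cos(2 * abs(x) ** 1.5 / 3 - math.pi / 4)
        / (math.sqrt(math.pi) * abs(x) ** 0.25))   # (5.78)
print(ai_series(x), asym)   # both ~ 0.04
```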

5.4 Problems

Problem 5.1. (i) Use IP to obtain the leading-order asymptotic approximation for the integral

∫ₓ^∞ e^{it}/t dt, as x → ∞.      (5.79)

(ii) Justify the asymptoticness of the expansion. Hint: see the discussion surrounding (5.16).

Problem 5.2. Use integration by parts to find x → ∞ asymptotic approximations of the integrals

A(x) = ∫₁² (cos xt)/t dt, and B(x) = ∫₀¹ cos t² e^{ixt} dt.      (5.80)

Problem 5.3. Consider

f(x) = ∫₀^{π/4} cos(xt²) (1 − e^{−t²}) dt, as x → ∞.      (5.81)

Show that IP can be used to compute the leading-order term, but not the second term. Compute
the second term using stationary phase.


Figure 5.5: Comparison of J0 (x) (solid blue curve) with the leading order asymptotic term in (5.87)
(red dashed curve).

Problem 5.4. Consider the Fresnel-type function

F(x, p) ≡ ∫ₓ^∞ e^{it}/tᵖ dt,      (5.82)

which converges for all positive x if p > 0. Use integration by parts to show that

F(x, p) = i e^{ix}/xᵖ − ip F(x, p+1).      (5.83)

Prove that the full x → ∞ asymptotic expansion of F(x, p) is

F(x, p) ∼ (i e^{ix}/xᵖ) Σ_{n=0}^∞ Γ(p + n)/[Γ(p)(ix)ⁿ].      (5.84)

Be sure to explain carefully why the remainder after N terms is asymptotically less than the
absolute value of the (N + 1)'st term.
Problem 5.5. Find two terms in the x → 0 and x → ∞ expansion of the Fresnel integrals
Z ∞ Z ∞
2
C(x) = cos t dt , and S(x) = sin t2 dt . (5.85)
x x

Problem 5.6. The Bessel function of order zero is defined by

    J₀(x) ≝ (2/π) ∫_0^{π/2} cos(x cos t) dt .                                  (5.86)

(i) Show that J₀″ + x^{−1} J₀′ + J₀ = 0. (ii) Plot the integrand of (5.86) as a function of t at x = 100.
(iii) Show that

    J₀(x) ∼ √(2/(πx)) cos( x − π/4 ) .                                        (5.87)
(iv) Use matlab (help besselj) to compare the leading order approximation with the exact
Bessel function on the interval 0 < x < 5π. The comparison is splendid: see Figure 5.5.
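A quick numerical check of (5.87) without MATLAB — here a pure-Python Simpson rule on the defining integral (5.86); x = 50 and the 2% tolerance are illustrative (the relative error of the leading term is O(1/x)):

```python
import math

def J0(x, n=20000):
    # J0(x) = (2/pi) * int_0^{pi/2} cos(x cos t) dt, by Simpson's rule
    a, b = 0.0, math.pi / 2
    h = (b - a) / n
    s = math.cos(x * math.cos(a)) + math.cos(x * math.cos(b))
    for j in range(1, n):
        s += math.cos(x * math.cos(a + j * h)) * (4 if j % 2 else 2)
    return (2 / math.pi) * s * h / 3

x = 50.0
exact = J0(x)
approx = math.sqrt(2 / (math.pi * x)) * math.cos(x - math.pi / 4)
print(exact, approx)
```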
Problem 5.7. The Bessel function of integer order n has an integral representation

    Jₙ(x) = (1/π) ∫_0^π cos(x sin t − nt) dt .                                 (5.88)

Show that

    Jₙ(n) ∼ Γ(1/3) / ( π 2^{2/3} 3^{1/6} n^{1/3} ) ,   as n → ∞.              (5.89)

Problem 5.8. (i) Using the integral representation for Jₙ(x) in the previous problem show that

    Jₙ(ny) ∼ √(2/(πn)) (y² − 1)^{−1/4} cos[ n( √(y²−1) − cos^{−1} y^{−1} ) − π/4 ] ,   (5.90)

where y > 1 is fixed as n → ∞. (ii) Now instead of fixing y, write y = 1 + n^{−p} v with p > 0 and
v fixed. Thus as n → ∞, y → 1. Determine the scaling exponent p that results in an interesting
distinguished limit. (iii) Find a uniformly valid approximation to Jₙ(ny) for large n and y close to
one. You should recover (5.89) if y = 1 (equivalently v = 0).

Problem 5.9. Find a leading order x → ∞ asymptotic approximation to

    B(x) = ∫_0^π e^{ix(t + cos t)} dt .                                        (5.91)

Problem 5.10. Find the leading-order x → ∞ asymptotic expansions of

    (i) ∫_0^1 √(1+t) e^{ix(1−2t²)²} dt ,   (ii) ∫_1^∞ √(1+t) e^{ix(1−2t²)²} dt .   (5.92)

Problem 5.11. According to article 238 in Lamb, the surface elevation produced by a two-
dimensional splash is given by

    η(x, t) = (1/π) ∫_0^∞ cos( kx − √(gk) t ) dk .                             (5.93)

Show that as t → ∞

    η(x, t) ∼ ( √g t / √(2πx³) ) [ cos(gt²/4x) + sin(gt²/4x) ] .               (5.94)

Verify that the frequency and wavenumber in (5.94) are connected by the water-wave dispersion
relation ω = √(gk).

Lecture 6

Dispersive wave equations

6.1 Group velocity


An important application of stationary phase is estimating the Fourier integrals that arise when
we solve dispersive wave problems using the Fourier transform. Some first-order dispersive wave
equations, and their Fourier transforms (∂x → ik), are

    i A_t + A_{xx} = 0 ,      Ã_t + i k² Ã = 0 ,                               (6.1)
    A_{xt} − A = 0 ,          Ã_t + i k^{−1} Ã = 0 ,                           (6.2)
    A_t − A_{xxx} = 0 ,       Ã_t + i k³ Ã = 0 .                               (6.3)

In each case the solution is

    A(x, t) = ∫_{−∞}^{∞} e^{i(kx − tω(k))} Ã₀(k) dk/2π ,                       (6.4)

where Ã₀(k) is the Fourier transform of the initial condition and the function ω(k) is the dispersion
relation. In the three cases above

    ω(k) = k² ,    ω(k) = 1/k ,    ω(k) = k³ .                                 (6.5)
We’ll see later that very similar results apply to second-order wave equations.
The inverse Fourier transform can be written in the form of a generalized Fourier integral

    A(x, t) = ∫_{−∞}^{∞} e^{itψ} Ã₀(k) dk/2π ,                                 (6.6)

where the phase function is

    ψ(k, u) ≝ uk − ω(k) ,   with   u ≝ x/t .                                   (6.7)

In the limit t → ∞, with u = x/t fixed, the stationary phase condition ∂_k ψ = 0 leads to

    dω/dk = u ,                                                                (6.8)

where dω/dk is the group velocity.

To apply the method of stationary phase to the generalized Fourier integral in (6.6), we have to
find all solutions of (6.8), call them k₁(u), k₂(u), ⋯, kₙ(u). Usually n is a modest number:
n = 1 in the following example. Then we expand the phase functions around each k_m(u) as

    ψ(k, u) = ψ_m − ½ (k − k_m)² ω″_m + O(k − k_m)³ ,                          (6.9)

Figure 6.1: Upper panel shows the initial condition in (6.13) with ℓ = 12 and p = 1, and the
solution at t = 40 computed with matlab FFT. The lower panel compares the matlab solution
with the stationary phase approximation in (6.19).

where m = 1, 2, ⋯, n. Also in (6.9)

    ψ_m ≝ ψ(k_m) ,   and   ω″_m ≝ ∂_k² ω(k_m) .                                (6.10)

Thus the integral in (6.6) becomes

    A(x, t) ∼ ∑_{m=1}^{n} e^{i[k_m x − ω(k_m)t]} Ã₀(k_m) ∫_{−∞}^{∞} e^{−(it/2)(k−k_m)² ω″_m} dk/2π .   (6.11)

Evaluating the Fresnel integrals we find

    A(x, t) ∼ ∑_{m=1}^{n} [ Ã₀(k_m) / √(2πt|ω″_m|) ] exp i[ k_m x − ω(k_m)t − (π/4) sgn ω″_m ] .   (6.12)

Note the fiddly |ω″_m| and sgn(ω″_m) in (6.12). More important, note that k_m in (6.12) is a function
of x/t, obtained by solving the group velocity equation (6.8).
Remark: if ω″_m = 0 the formula in (6.12) is invalid. This higher-order stationary point requires
separate analysis.
Example: Use stationary phase to approximate the t → ∞ solution of

    A_t − A_{xxx} = 0 ,   with IC   A(x, 0) = ( e^{−x²/2ℓ²} / (ℓ√(2π)) ) cos px .   (6.13)

Using the Fourier transform, the initial condition is

    Ã₀(k) = ½ [ e^{−ℓ²(k−p)²/2} + e^{−ℓ²(k+p)²/2} ] ,                          (6.14)
and the solution is then

    A(x, t) = ∫_{−∞}^{∞} e^{i(kx − k³t)} Ã₀(k) dk/2π ,                          (6.15)

            = 2 ∫_0^∞ cos(kx − k³t) Ã₀(k) dk/2π .                               (6.16)

In (6.15), because Ã₀(k) = Ã₀(−k), we can restrict attention to k > 0. The stationary phase equation is

    3k² = x/t ,   or   k±(x, t) = ±√(x/3t) .                                   (6.17)

There are two stationary phase points, but only k₊(x, t) lies in the range of integration in (6.16). The expansion
of the phase function ψ = ku − k³ around k₊ is:

    ψ = 2k₊³ − 3k₊ (k − k₊)² + ord(k − k₊)³ .                                  (6.18)

The stationary-phase approximation to the Fourier integral in (6.16) is then

    A ∼ (Ã₀(k₊)/π) { cos(2k₊³t) ∫_{−∞}^{∞} cos[3k₊(k−k₊)² t] dk + sin(2k₊³t) ∫_{−∞}^{∞} sin[3k₊(k−k₊)² t] dk } ,

      = ( Ã₀(k₊) / √(6πtk₊) ) [ cos(2k₊³t) + sin(2k₊³t) ] .                     (6.19)
Figure 6.1 compares this approximation to a numerical solution.

%% Third-order dispersive wave equation A_t-A_{xxx}=0

Lx = 4e2; nx = 2e4; dx = Lx/nx;                 %grid
xx = 0:dx:Lx-dx; xx=xx-Lx/2;
k1 = 2*pi/Lx; kk = k1*[0:nx/2 (-nx/2+1):-1];    %wavenumbers

%% IC in physical space, and in Fourier space

ell = 12; p=1;                                  %IC parameters
A0 = (ell*sqrt(2*pi))^(-1) * exp(-xx.^2/(2*ell^2)).*cos(p*xx);
A0h = fft(A0);
subplot(2,1,1)
plot(xx,A0,'b-')                                %plot IC
axis([-40 , 200 , -1.1*max(A0), 1.1*max(A0)])
xlabel('$x$','interpreter','latex','fontsize',14)
ylabel('$A(x,t)$','interpreter','latex','fontsize',14)
text(15,0.025,'$t=0$','interpreter','latex','fontsize',14 )
hold on

%% long-time solution
t=40;
Ah = A0h .* exp(-1i.*kk.^3*t);                  % Fourier evolution
A = real(ifft(Ah));                             % physical space solution
subplot(2,1,1)
plot(xx,A,'r--')                                % plot A(x,t)
text(145,0.02,'$t=40$','interpreter','latex','fontsize',14 )

%% SP approximation
kp = sqrt(abs(xx)/(3*t));
A0 = 0.5*( exp(-0.5*ell^2*(kp - p).^2) + exp(-0.5*ell^2*(kp + p).^2) );
ASP =A0.*( cos(2*t*kp.^3) + sin(2*t*kp.^3))./(sqrt(6*pi*t*kp));
subplot(2,1,2)
plot(xx,A,'r--',xx,ASP,'b')
axis([50 , 200 , -1.1*max(A), 1.1*max(A)])
legend('FFT','SP')
xlabel('$x$','interpreter','latex','fontsize',14)
ylabel('$A(x,t)$','interpreter','latex','fontsize',14)
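Independently of the FFT script above, the stationary-phase formula (6.19) can be cross-checked by brute-force quadrature of (6.16), restricted to the wavenumber band where Ã₀ is non-negligible. The observation point x = 3t (which puts k₊ = 1 at the peak of Ã₀) is an arbitrary choice for this sketch; the relative error should be O(1/t)-small:

```python
import math

t, x, ell, p = 400.0, 1200.0, 12.0, 1.0

def A0h(k):
    # Fourier transform (6.14) of the initial condition
    return 0.5 * (math.exp(-0.5 * (ell * (k - p))**2)
                  + math.exp(-0.5 * (ell * (k + p))**2))

# Simpson quadrature of (6.16); A0h is negligible outside 0.5 < k < 1.5
a, b, n = 0.5, 1.5, 200000
h = (b - a) / n
s = 0.0
for j in range(n + 1):
    k = a + j * h
    w = 1 if j in (0, n) else (4 if j % 2 else 2)
    s += w * math.cos(k * x - k**3 * t) * A0h(k)
A_num = s * h / (3 * math.pi)

# stationary-phase approximation (6.19)
kp = math.sqrt(x / (3 * t))
A_sp = (A0h(kp) * (math.cos(2 * t * kp**3) + math.sin(2 * t * kp**3))
        / math.sqrt(6 * math.pi * t * kp))
print(A_num, A_sp)
```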

6.2 The 1D KG equation
Let’s use stationary phase to analyze the Klein-Gordon equation:

Att − a2 Axx + σ 2 A = 0 , (6.20)

with an initial condition such as

    A(x, 0) = ( e^{−x²/2ℓ²} / (√(2π) ℓ) ) cos px ,   and   A_t(x, 0) = 0 .     (6.21)

We introduce non-dimensional variables

    x̄ ≝ σx/a ,   t̄ ≝ σt ,   and   Ā ≝ (a/σ) A .                               (6.22)
With this change in notation, the non-dimensional problem is

Āt̄t̄ − Āx̄x̄ + Ā = 0 , (6.23)

with the initial condition

    Ā(x̄, 0) = ( e^{−x̄²/2η²} / (√(2π) η) ) cos ν x̄ ,   and   Ā_t̄(x̄, 0) = 0 .   (6.24)

The IC contains the non-dimensional parameters

    ν ≝ pa/σ ,   and   η ≝ ℓσ/a .                                              (6.25)
We proceed dropping all the bars decorating the non-dimensional variables.
Remark : in the limit η → 0, the initial condition is A(x, 0) → δ(x). By taking η → 0 we recover
the Green’s function of the Klein-Gordon equation.
The Fourier transform of A is
Z ∞
def
Ã(k, t) = e−ikx A(x, t) dx , (6.26)
−∞

and with the operational rule ∂x → ik we find the transformed Klein-Gordon equation

Ãtt + (k2 + 1)Ã = 0 . (6.27)

The solution that satisfies the transformed initial condition is

    Ã = cos ωt Ã₀(k) = ½ ( e^{iωt} + e^{−iωt} ) Ã₀(k) .                        (6.28)

Above, the Klein-Gordon dispersion relation is

    ω(k) = √(k² + 1) ,                                                         (6.29)

and Ã₀(k) is the Fourier transform of the initial condition.


We record some consequences of the Klein-Gordon dispersion relation that we'll need below:

    dω/dk = k/√(k²+1) = k/ω ,   and   d²ω/dk² = 1/ω³ .                         (6.30)

These results simplify some of the calculations below.

From (6.28), the Fourier Integral Theorem now delivers the solution in the form

    A(x, t) = ∫_{−∞}^{∞} e^{ikx} cos(ωt) Ã₀(k) dk/2π ,                         (6.31)

            = ½ ∫_{−∞}^{∞} e^{ikx − iωt} Ã₀(k) dk/2π + ½ ∫_{−∞}^{∞} e^{ikx + iωt} Ã₀(k) dk/2π ,   (6.32)

where the first integral defines B and the second is B*. To show that the two integrals on the
right of (6.32) are complex conjugates of each other we use the reality condition Ã₀*(k) = Ã₀(−k).
One could further reduce this to an integral over k > 0 by restricting attention to even functions
of x, so that Ã₀(−k) = Ã₀(k).
The Klein-Gordon equation is a second-order wave equation and has two modes, one with
dispersion relation √(k²+1) and the other with −√(k²+1). One wave goes to the left and the other
to the right. Our initial condition excites both waves. These are the two terms B(x, t) and B*(x, t)
above.
Let's consider B(x, t), written as:

    B(x, t) = ½ ∫_{−∞}^{∞} e^{itψ} Ã₀ dk/2π ,                                  (6.33)

with phase function

    ψ(k, u) ≝ uk − √(k² + 1) .                                                 (6.34)

The stationary-phase condition for ψ is that

    u = k⋆ / √(k⋆² + 1) .                                                      (6.35)

Solving (6.35) for k⋆ as a function of u = x/t we have

    k⋆(u) = u/√(1 − u²) ,   and   1 + k⋆² = 1/(1 − u²) ,                       (6.36)

provided that −1 < u < 1. The restriction on u is because an observer moving with speed |u| > 1
is out-running the waves — we return to this point later.
Close to the stationary point k⋆(u), the phase ψ is therefore

    ψ(k) = −√(1 − u²) − ½ (1 − u²)^{3/2} (k − k⋆)² + O(k − k⋆)³ ,              (6.37)

where the first term is ψ(k⋆) and (1 − u²)^{3/2} = ω″(k⋆). Notice that ω″(k⋆) is positive.

The Fourier integral is therefore

    B ∼ ½ e^{−it√(1−u²)} ∫_{−∞}^{∞} e^{−i(t/2)(1−u²)^{3/2}(k−k⋆)²} Ã₀(k⋆) dk/2π .   (6.38)

Invoking the Fresnel integral, we obtain

    B ∼ ½ [ cos( t√(1−u²) + π/4 ) / √( 2πt(1−u²)^{3/2} ) ] Ã₀( u/√(1−u²) ) ,   (6.39)

provided that 0 < u ≝ x/t < 1. The total solution is A = B + B*, or

    A(x, t) ∼ [ cos( t√(1−u²) + π/4 ) / √( 2πt(1−u²)^{3/2} ) ] ℜ Ã₀( u/√(1−u²) ) .   (6.40)
Figure 6.2 compares the stationary phase approximation (6.40) with a numerical solution of the
Klein-Gordon equation.
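As a language-independent sanity check of (6.40), one can evaluate (6.31) by brute-force quadrature, assuming the Gaussian initial condition with ν = 0 and η = 1, so that Ã₀(k) = e^{−η²k²/2}. The values t = 400 and u = 0.3 are arbitrary (u = 0.3 keeps the cosine in (6.40) safely away from a zero):

```python
import math

t, u, eta = 400.0, 0.3, 1.0
x = u * t

def A0h(k):
    # assumed Gaussian IC with nu = 0: A0h(k) = exp(-eta^2 k^2 / 2)
    return math.exp(-0.5 * (eta * k)**2)

# brute-force Simpson quadrature of (6.31), reduced to k > 0 since A0h is even
a, b, n = 0.0, 6.0, 150000
h = (b - a) / n
s = 0.0
for j in range(n + 1):
    k = a + j * h
    w = 1 if j in (0, n) else (4 if j % 2 else 2)
    s += w * math.cos(k * x) * math.cos(math.sqrt(1 + k * k) * t) * A0h(k)
A_num = s * h / (3 * math.pi)

# stationary-phase approximation (6.40)
sq = math.sqrt(1 - u * u)
A_sp = math.cos(t * sq + math.pi / 4) / math.sqrt(2 * math.pi * t * sq**3) \
       * A0h(u / sq)
print(A_num, A_sp)
```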
Should do ν ≠ 0 first.

Visualization of the stationary phase solution


Let’s visualize this asymptotic solution using the Gaussian initial condition with the Fourier trans-
form Ã0 in (??).
First consider η → 0, so that A(x, 0) → δ(x). In this case the Klein-Gordon equation has an
exact solution

    A(x, t) = − J₁( t√(1−u²) ) / ( 2√(1−u²) ) + ½ [ δ(x − t) + δ(x + t) ] ,    (6.41)

where J₁(z) is the first order Bessel function. Figure ?? compares the stationary phase solution in
(6.39) (with Ã₀ = 1) to the exact solution in (6.41). The approximation is OK provided we don't
get too close to x = t. The stationary phase approximation says nothing about the δ-pulses that
herald the arrival of the signal.
Figure ?? shows the stationary phase approximation (6.39) with η ≠ 0. The initial disturbance
now has finite width and therefore contains no high wavenumbers. Thus the rapid oscillations near
the front at x = t have been removed. This is good: it probably means that the stationary phase
approximation is accurately reproducing the waveform. But on the other hand, because of the
δ(x ± t) in (6.41), there should be terms like

    e^{−(x±t)²/2η²} / (√(2π) η)                                                (6.42)

in the exact solution. The stationary phase approximation fails when u = x/t is close to ±1, and
therefore the approximation gives no hint of these pulses.

Caustics: u ≈ 1
To analyze the solution (6.32) close to the front at x = t, we write x = t + x′ where x′ ≪ 1. The
phase in (6.33) is then

    ψ₂(k) = (t + x′)k − √(1 + k²) t ≈ x′k − t/2k ,   provided k > 0,           (6.43)

using √(1+k²) ≈ |k|(1 + 1/2k²). The approximation above is valid close to the stationary
wavenumber, which recedes to k = ∞ as u → 1. The inverse transform is therefore

    A₂(x, t) = (1/4π) ∫_{−∞}^{∞} e^{i(x′k − t/2k)} Ã₀(k) dk ,

             = (1/4π) √(t/2x′) ∫_{−∞}^{∞} e^{iξ(κ − κ^{−1})} Ã₀( √(t/2x′) κ ) dκ ,   (6.44)

Figure 6.2: Comparison of a numerical solution of the Klein-Gordon equation (solid black curve)
with the asymptotic approximation (red dashed curve) in (6.40) at t = 400, for η = 0.5, 1 and 2
(top to bottom). In this illustration ν = 0. The matlab code is given in the following verbatim box.

where

    ξ ≝ √( tx′/2 ) .                                                           (6.45)

%% Comparison of stat phase approx with FFT solution of KG
Lx = 1e3; nx = 2e5; dx = Lx/nx;                 %grid
xx = 0:dx:Lx-dx; xx=xx-Lx/2;
k1 = 2*pi/Lx; kk = k1*[0:nx/2 (-nx/2+1):-1];    %wavenumbers
t = 400;                                        %final time
nLoop=0;
for eta =[0.5 1 2]
    nLoop= nLoop+1;
    % IC in physical space, and in Fourier space
    A0 = ( eta*sqrt(2*pi) )^(-1) * exp(-xx.^2/(2*eta^2));
    A0h = fft(A0);
    % solution to Klein-Gordon
    Ah = A0h .* cos( sqrt(1+kk.^2)*t );
    A = real(ifft(Ah));                         % physical space solution
    subplot(3,1,nLoop)
    plot(xx,A,'k-')
    axis([-0.2*t , 1.1*t , -1.1*max(A), 1.1*max(A)])
    hold on
    % compare final solution to stationary phase
    uu = xx/t; sq = sqrt(1-uu.^2); ut = uu ./ sq;
    At = exp( -0.5*(eta*ut).^2 );               % A0h evaluated at k = kstar
    Asp = cos( t*sq + pi/4 ) ./ sqrt( 2*pi*t*sq.^3 ) .* At;
    subplot(3,1,nLoop), hold on
    plot(xx,real(Asp),'r--')
    xlabel('$x$','interpreter','latex','fontsize',16)
    ylabel('$A(x,t)$','interpreter','latex','fontsize',16)
    ss = ['$\eta =' ,num2str(eta), '$'];
    text(-50,0.75*max(A),ss,'interpreter','latex','fontsize',12)
end

Figure 6.3: Solution of a mystery equation at t = 0, 40 and 80 (top to bottom).

6.3 Problems
Problem 6.1. Use a Fourier transform to solve

    Q_t = (i/2) Q_{xx} − (1/3) Q_{xxx} ,   with IC   Q(x, 0) = δ(x).           (6.46)

Use the method of stationary phase to estimate

    lim_{t→∞} Q(ut, t)                                                         (6.47)

with (a) u = 0; (b) u = 1/4; (c) all values of u.

Problem 6.2. Here are some one-dimensional dispersive wave equations

    A_t − A_{xxx} = 0 ,   F_{xt} − F = 0 ,   B_t + B_{xxx} = 0 ,   G_{xt} + G = 0 .   (6.48)

Figure 6.3 shows the solution of one of these equations with the initial condition

    ( e^{−x²/200} / (10√(2π)) ) cos x .                                        (6.49)
(i) Which equation has been solved to produce the figure? (ii) Construct the solution of the
equation from part (i) using the Fourier Integral Theorem. (iii) The width of the wave packet
increases like tα , while its amplitude decreases like t−β . Determine the exponents α and β.

Lecture 7

Constant-phase (a.k.a.
steepest-descent) contours

The method of steepest descents, and its big brother the saddle-point method, is concerned with
complex integrals of the form

    I(λ) = ∫_C h(z) e^{λf(z)} dz ,                                             (7.1)

where C is some contour in the complex (z = x + iy) plane. The basic idea is to convert I(λ) to

    I(λ) = ∫_D h(z) e^{λf(z)} dz ,                                             (7.2)

where D is a contour on which ℑf is constant. Thus if

    f = φ + iψ ,                                                               (7.3)

then on the contour D

    I(λ) = e^{iλψ_D} ∫_D h(z) e^{λφ} dz .                                      (7.4)

Above ψ_D is the constant imaginary part of f(z). We refer to ψ as the phase function, and D is
therefore a constant-phase contour.
Contour D is orthogonal to ∇ψ and, because

∇φ · ∇ψ = 0 , (7.5)

D is also tangent to ∇φ. Thus as one moves along D one is always ascending or descending along
the steepest direction of the surface formed by φ(x, y) above the (x, y)-plane. The main advantage
to integrating along D is that the integral will be dominated by the neighbourhood of the point on
D where φ is largest.
Because φ is changing most rapidly as one moves along D, D is also a contour of steepest descent
(or of steepest ascent).
Exercise: Prove (7.5) and the surrounding statements.

The method and its advantages are best explained via well chosen examples. In fact we’ve already
used the method to calculate Ai(0) back in section 1.5 — you should review that example.

7.1 Asymptotic evaluation of an integral using a constant-phase contour

An example of the constant phase method is provided by

    B(λ) ≝ ∫_0^1 e^{iλx²} dx .                                                 (7.6)

If λ ≫ 1 we quickly obtain a leading order approximation to B using stationary phase. We improve
on this leading-order approximation by considering

    f(z) = iz² = −2xy + i(x² − y²) ,                                           (7.7)

with φ = −2xy and ψ = x² − y². The end-points of (7.6) define two constant-phase contours — see
Figure 7.1. The end-point at (x, y) = (0, 0) has ψ = 0, and the other end-point at (x, y) = (1, 0)
has ψ = 1. The constant-phase contours through these endpoints are

    D₀: z = r e^{iπ/4} ,   and   D₁: z = √(1 + y²) + iy .                      (7.8)
(See the figure.) There are no singularities in the region enclosed by C, D₀, D₁ and E. Thus, as E
recedes to infinity, we have

    B(λ) = ∫_{D₀} e^{iλz²} dz + ∫_{D₁} e^{iλz²} dz ,   with   ∫_{D₀} e^{iλz²} dz = ½ √(π/λ) e^{iπ/4} .   (7.9)

The first integral along D₀ in (7.9) is the standard Fresnel integral. In the second integral along
D₁, the exponential function is

    e^{iλz²} = e^{iλ} e^{−2λy√(1+y²)} .                                        (7.10)

This verifies that on D₁ the integrand decreases monotonically away from the maximum at z = 1.
On D₁ we strive towards a Laplace integral in (7.9) by making the change of variable

    iz² = i − v ,   or   z = √(1 + iv) .                                       (7.11)

Thus

    ∫_{D₁} e^{iλz²} dz = −(i/2) e^{iλ} ∫_0^∞ e^{−λv} / √(1 + iv) dv .          (7.12)

The minus sign is because we integrate along D₁ starting at v = ∞.
Assembling the results above

    B(λ) = ½ √(π/λ) e^{iπ/4} − (i/2) e^{iλ} ∫_0^∞ e^{−λv} / √(1 + iv) dv .     (7.13)

Watson's lemma delivers the full asymptotic expansion of the final integral in (7.13).
There is an alternative derivation that uses the contour F in Figure ??. F is tangent to D₁
at z = 1, so that

    ∫_{D₁} e^{iλz²} dz ∼ ∫_F e^{iλz²} dz   as λ → ∞.                           (7.14)

Now in the neighbourhood of z = 1:

    z = 1 + iy .                                                               (7.15)

Thus

    ∫_F e^{iλz²} dz ∼ −i e^{iλ} ∫_0^∞ e^{−2λy} e^{−iλy²} dy .                  (7.16)

72
Figure 7.1: The contours D0 and D1 . On D0 , x = y and the phase is constant.
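Because (7.13) is an exact identity (Cauchy's theorem), it can be tested to quadrature accuracy: compare the oscillatory integral B(λ) computed directly on [0, 1] with the Fresnel term plus the Watson-ready integral from D₁. A pure-Python check, with λ = 20 chosen arbitrarily:

```python
import cmath
import math

lam = 20.0

def simpson(f, a, b, n):
    # composite Simpson rule (n must be even) for a complex-valued integrand
    h = (b - a) / n
    s = f(a) + f(b)
    for j in range(1, n):
        s += f(a + j * h) * (4 if j % 2 else 2)
    return s * h / 3

# left side: B(lam) = int_0^1 e^{i lam x^2} dx, straight down the real axis
B = simpson(lambda x: cmath.exp(1j * lam * x * x), 0.0, 1.0, 20000)

# right side of (7.13): Fresnel piece (from D0) plus Laplace-type piece (from D1)
fresnel = 0.5 * math.sqrt(math.pi / lam) * cmath.exp(1j * math.pi / 4)
watson = simpson(lambda v: cmath.exp(-lam * v) / cmath.sqrt(1 + 1j * v),
                 0.0, 3.0, 20000)   # e^{-60} truncation error is negligible
rhs = fresnel - 0.5j * cmath.exp(1j * lam) * watson
print(abs(B - rhs))
```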

7.2 Problems
Problem 7.1. Show that if p > 1 and 0 < a:

    ∫_0^∞ e^{±iav^p} dv = (1/a)^{1/p} Γ(1 + 1/p) e^{±iπ/2p} .                  (7.17)

The case p = 2 is the Fresnel integral and p = 3 is the example in (1.42).

Problem 7.2. Consider the integral

    P(a, b) ≝ ∫_0^∞ e^{−ax²} cos bx dx = ½ ∫_C e^{−ax² + ibx} dx ,             (7.18)

where C is the real axis, −∞ < x < ∞. Complete the square in the exponential and show that
the line y = b/2a is a contour D on which the integrand is not oscillatory. Evaluate P exactly by
integration along D.

Problem 7.3. Use a constant phase contour to asymptotically evaluate

    Q(λ) ≝ ∫_0^π e^{iλx} ln x dx                                               (7.19)

as λ → ∞.

Lecture 8

The saddle-point method

The saddle-point method is a combination of the method of steepest descents and Laplace's method
in the complex plane.
In the previous lecture we considered integrals along constant-phase contours D. In those
earlier examples the maximum of φ on D was at the endpoints of the contour. But it is possible
that the maximum of φ is in the middle of the contour. If s is the arclength along the contour then

    dφ/ds = ∇φ · τ̂_D ,                                                        (8.1)

where τ̂_D is the unit tangent to D. In fact, from the Cauchy-Riemann equations,

    τ̂_D = ∇φ/|∇φ| ,   so that   dφ/ds = |∇φ| .                                (8.2)

Thus if there is an interior maximum on D at a point z∗ then

    dφ/ds = 0 at z∗ ,   which requires   |∇φ| = 0 at z∗.                       (8.3)

However from the Cauchy-Riemann equations

    |∇φ| = |∇ψ| = |df/dz| .                                                    (8.4)

So at the point z∗, |∇φ|, |∇ψ| and |df/dz| are all zero. In short, we can locate the saddle points
by solving

    df/dz = 0 .                                                                (8.5)

8.1 The Airy function as x → ∞


Let's recall our Fourier transform representation of the Airy function:

    Ai(x) = ∫_{−∞}^{∞} e^{i(kx + k³/3)} dk/2π .                                (8.6)

In an earlier lecture we used stationary phase to investigate the x → −∞ behaviour of Ai. Now
consider what happens if x → ∞. In this case the stationary point is in the complex k-plane, at
Figure 8.1: The contours D+ and D− .


k = ± xi. We’ll use this to illustrate the saddle-point method. But first we make the change of
variable √
k = xκ (8.7)
so that √ Z
x
Ai(x) = eXf (κ) dκ , (8.8)
2π C
where C is the real axis and
 
def 3/2 def κ3
X = x , and f (κ) = i κ + . (8.9)
3

If κ = p + iq then in this problem we can write f(κ) explicitly without too much work:

    f(κ) = −q − p²q + q³/3 + i( p + p³/3 − pq² ) ,                             (8.10)

with φ = −q − p²q + q³/3 and ψ = p + p³/3 − pq². Contours of constant ψ are shown in Figure 8.1.

The saddle points at κ = κ∗ are located by

    df/dκ = 0 ,   or   κ∗ = ±i .                                               (8.11)

Notice that

    f(i) = −2/3 ,                                                              (8.12)

and thus the constant-phase contours passing through κ = i are determined by ℑf = 0, implying
that

    p = 0 ,   and   p²/3 − q² + 1 = 0 .                                        (8.13)

Our attention is drawn to the saddle at +i, and we deform C onto the contour D₊ passing
through κ = +i. We can parameterize the integral on D₊ via

    p = √3 sinh t ,   and   q = cosh t .                                       (8.14)

Notice then that

    φ = (2/3)( 3 cosh t − 4 cosh³ t ) = −(2/3) cosh 3t ,                       (8.15)

and

    dκ/dt = √3 cosh t + i sinh t .                                             (8.16)

Thus we have a new integral representation for the Airy function

    Ai(x) = (√(3x)/2π) ∫_{−∞}^{∞} cosh t e^{−(2X/3) cosh 3t} dt .              (8.17)

Laplace's method applied to this integral quickly gives the leading-order result

    Ai(x) ∼ e^{−2x^{3/2}/3} / ( 2√π x^{1/4} ) .                                (8.18)
Notice that if we had to numerically evaluate Ai(x) for positive x the representation in (8.17) would
be very useful.
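Indeed it is: the integrand in (8.17) decays like exp(−(2X/3)cosh 3t), so a modest Simpson rule on |t| ≤ 4 evaluates Ai(x) essentially exactly for moderate x. A Python sketch comparing it with the leading-order result (8.18) at the arbitrary point x = 5, where the relative error of (8.18) is under 1%:

```python
import math

x = 5.0
X = x**1.5

# Laplace-type representation (8.17); the integrand underflows harmlessly
# to zero well before |t| = 4
a, b, n = -4.0, 4.0, 40000
h = (b - a) / n
s = 0.0
for j in range(n + 1):
    tt = a + j * h
    w = 1 if j in (0, n) else (4 if j % 2 else 2)
    s += w * math.cosh(tt) * math.exp(-(2 * X / 3) * math.cosh(3 * tt))
Ai_quad = math.sqrt(3 * x) / (2 * math.pi) * s * h / 3

# leading-order Laplace-method result (8.18)
Ai_asym = math.exp(-2 * X / 3) / (2 * math.sqrt(math.pi) * x**0.25)
print(Ai_quad, Ai_asym)
```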
We were distracted by the exciting parameterization (8.14). A better way to obtain the asymptotic
approximation of Ai(x) is to use a path that is tangent to the true steepest-descent path at
κ = i, i.e., the path

    κ = i + p ,   on which   f = −2/3 − p² + i p³/3 .                          (8.19)

This is not a path of constant phase (notice the ip³/3) but that doesn't matter. We then have

    Ai(x) ∼ (√x/2π) e^{−2X/3} ∫_{−∞}^{∞} e^{−Xp²} e^{iXp³/3} dp .              (8.20)

Watson's lemma now delivers the full asymptotic expansion of the Airy function:

    Ai(x) ∼ ( e^{−2x^{3/2}/3} / (2π x^{1/4}) ) ∑_{n=0}^{∞} [ Γ(3n + ½)/(2n)! ] ( −1/9X )^n ,   as x → ∞.   (8.21)

8.2 The Laplace transform of a rapidly oscillatory function


As another example, consider the large-s behaviour of the Laplace transform

    L[ sin(1/t) ] = ∫_0^∞ e^{−st} sin(1/t) dt .                                (8.22)

Although this is a real integral, we make an excursion into the complex plane by writing

    L[ sin(1/t) ] = ℑ ∫_0^∞ e^{−st + it^{−1}} dt .                             (8.23)

Our previous experience with Laplace's method suggests considering the function

    φ ≝ i/t − st ,                                                             (8.24)


Figure 8.2: The green curves show contours of ℜ(iz^{−1} − z), and the black curves are
ℑ(iz^{−1} − z) = ±√2. The saddle point S is at z = e^{−iπ/4}, and the curve of steepest descent is OSR.

in the complex t-plane. Our attention focusses on saddle points, which are located by

    dφ/dt = 0 ,   ⇒   t⋆ = ± e^{−iπ/4}/√s .                                    (8.25)

Studying t⋆, we're motivated to introduce a complex variable z = x + iy defined by

    z ≝ √s t ,   and therefore   φ = √s ( i/z − z ) ≝ √s φ̂(z) .               (8.26)

In terms of z, the Laplace transform in (8.23) is therefore

    L[ sin(1/t) ] = ℑ (1/√s) ∫_0^∞ e^{√s φ̂} dz .                              (8.27)

Back in (8.26), the √s in the definition of z ensures that the saddle points are at z = ω and
z = −ω, where

    ω ≝ e^{−iπ/4} = √(−i) = (1 − i)/√2 .                                       (8.28)

That is, as s → ∞ the saddle points don't move in the z-plane. ℜφ̂ is shown in the z-plane as
green contours in Figure 8.2. The black curves in Figure 8.2 are ℑφ̂ = ±√2 and the saddle point
at z = ω is indicated by S. The curve of steepest descent, which goes through S, is OSR.
Invoking Cauchy's theorem on the closed curve OURSO:

    (1/√s) ∫_{OURSO} e^{√s φ̂} dz = 0 ,                                        (8.29)

and therefore, taking the points R and U to infinity,

    L[ sin(1/t) ] = ℑ (1/√s) ∫_{OSR} e^{√s φ̂} dz .                            (8.30)

When s ≫ 1 we can evaluate the integral on the right of (8.30) using

    φ̂(z) = −2ω + ω³(z − ω)² + O(z − ω)³ .                                     (8.31)

To go over the saddle point on OSR set

    z − ω = ω^{1/2} ζ   so that   ω³(z − ω)² = −ζ² .                           (8.32)

Thus

    L[ sin(1/t) ] ∼ ℑ (√π/s^{3/4}) ω^{1/2} e^{−2ω√s} = (√π/s^{3/4}) e^{−√(2s)} sin( √(2s) − π/8 ) .   (8.33)
The saddle point method requires careful scrutiny of the real and imaginary parts of analytic
functions. It is comforting to contour these functions with MATLAB. The script that generates
Figure 8.2 is in the box, and it can be adapted to other examples we encounter.

%% Saddle point plot
%% phi = 1i/t - s t   with   t = z/sqrt(s)
%% phi = sqrt(s)*(1i/z - z)
close all
clc
clear
s = 10; a=sqrt(2*s);
x = linspace(-1,8,200); y = linspace(-2,1,200);
[xx,yy] = meshgrid(x,y); z = xx + 1i*yy;
phi = sqrt(s)*( 1i./z - z);
phImag = imag(phi);
phReal = real(phi);
% contour levels for real(phi):
V=[-6 :0.4: 2]*a;
subplot(2,1,1)
contour(xx,yy,phReal, V,'g' )
hold on
% plot the saddle point contour
contour( xx , yy , phImag, [-a a],'k' )
axis equal   %best for analytic functions
hold on
xlabel('$x$','interpreter','latex')
ylabel('$y$','interpreter','latex')
axis([min(x) max(x) min(y) max(y) ] )
%% plot the x-axis:
plot([min(x) max(x)],[0 0],'k','linewidth',1.0)
%% plot the y-axis:
plot([0 0] , [min(y) max(y)],'k','linewidth',1.0)
title('real($\phi$) and imag($\phi)=\sqrt{2 s}$','interpreter','latex')
%label the steepest descent contour
text(-0.05 , 0 , 'O')
b = 1/sqrt(2);
text(1*b, -1.2*b , 'S')
text(8.1 , -1.3 ,'R')


Note the use of 1i for −1 in MATLAB, and the use of meshgrid to set up the complex matrix
z = xx + 1i*yy. To get a good plot you have to supply some √ analytic information to contour e.g.,
we make MATLAB plot the saddle point contours ℑφ = ± 2s in the second call to contour. And
you can’t trust MATLAB to pick sensible contour levels because the pole at z = 0 means that there
are some very large values
√ in the domain — so the vector V determines the contour levels using
the convenient scale 2s. Since the real and imaginary parts of the analytic function φ intersect
at right angles (recall ∇φr · ∇φi = 0), we preserve angles with the command axis equal.

8.3 Inversion of a Laplace transform
We can further illustrate the saddle-point method by considering the problem of inverting a Laplace
transform using the Bromwich method. Recall again that the Laplace transform of a function f(t)
is

    f̄(s) ≝ ∫_0^∞ f(t) e^{−st} dt .                                            (8.34)

We refuse to Laplace transform functions with nonintegrable singularities, such as f (t) = (t − 1)−2
and 1/ ln t.
The inversion theorem says that

    f(t) = (1/2πi) ∫_B e^{st} f̄(s) ds ,                                       (8.35)

where the "Bromwich contour" B is a straight line parallel to the imaginary s-axis and to the right
of all singularities of f̄(s). The theorem requires that f(t) is absolutely integrable over all finite
intervals, i.e., that

    ∫_0^T |f(t)| dt < ∞ ,   for 0 ≤ T < ∞.                                     (8.36)
Absolute integrability over the infinite interval is not required: e^{−st} takes care of that.
Now consider the Laplace transform

    f̄(x, s) = e^{−x√s} / s .                                                  (8.37)

According to the inversion theorem

    f(x, t) = (1/2πi) ∫_B e^{st − x√s} ds/s .                                  (8.38)

The Bromwich contour must be in the RHP, to the right of the pole at s = 0.
Suppose we don’t have enough initiative to look up this inverse transform in a book, or to
evaluate it exactly by deformation of B to a branch-line integral (see problems). Suppose further
that we don’t notice that f (x, t) is a similarity solution of the highly regarded diffusion problem

ft = fxx , f (0, t) = 1 , f (x, 0) = 0 . (8.39)

We’re dim-witted, and thus we try to obtain the t → ∞ asymptotic expansion of f (x, t).

Lecture 9

Evaluating integrals by matching

9.1 Singularity subtraction


Considering

    F(ǫ) = ∫_0^π cos x / √(x² + ǫ²) dx ,   as ǫ → 0,                           (9.1)

we cannot set ǫ = 0 because the resulting integral is logarithmically divergent at x = 0. An easy
way to make sense of this limit is to write

    F(ǫ) = −∫_0^π (1 − cos x)/√(x² + ǫ²) dx + ∫_0^π dx/√(x² + ǫ²) ,            (9.2)

where the second integral is elementary. Thus

    F(ǫ) ∼ −∫_0^π (1 − cos x)/x dx + ln( π + √(π² + ǫ²) ) − ln ǫ ,             (9.3)

         ∼ ln(1/ǫ) − ∫_0^π (1 − cos x)/x dx + ln 2π ,                          (9.4)
with errors probably ord(ǫ). This worked nicely because we could exactly evaluate the elementary
integral above. This method is called singularity subtraction — to evaluate a complicated nearly-
singular integral one finds an elementary integral with the same nearly-singular structure and
subtracts the elementary integral from the complicated integral. To apply this method one needs
a repertoire of elementary nearly singular integrals.
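The subtraction in (9.4) is easy to check numerically with pure-Python Simpson quadrature; ǫ = 10⁻³ is an arbitrary small value, and the 10⁻³ tolerance is generous compared with the expected ord(ǫ)-ish error:

```python
import math

def simpson(f, a, b, n):
    # composite Simpson rule (n must be even)
    h = (b - a) / n
    s = f(a) + f(b)
    for j in range(1, n):
        s += f(a + j * h) * (4 if j % 2 else 2)
    return s * h / 3

eps = 1e-3
# the nearly singular integral (9.1); the fine grid resolves the 1/eps scale near x = 0
F = simpson(lambda x: math.cos(x) / math.sqrt(x * x + eps * eps),
            0.0, math.pi, 400000)
# the convergent constant in (9.4); the integrand -> 0 as x -> 0
g = lambda x: (1 - math.cos(x)) / x if x > 0 else 0.0
const = simpson(g, 0.0, math.pi, 20000)
approx = math.log(1 / eps) - const + math.log(2 * math.pi)
print(F, approx)
```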
Exercise: generalize the example above to

    F(ǫ) = ∫_0^a f(x) / √(x² + ǫ²) dx .                                        (9.5)
Example: find the small x behaviour of the exponential integral

    E(x) = ∫_x^∞ (e^{−t}/t) dt .                                               (9.6)

Notice that

    dE/dx = −e^{−x}/x = −1/x + 1 − x/2 + ⋯                                     (9.7)

If we integrate this series we have

    E(x) = −ln x + C + x − x²/4 + ord(x³) .                                    (9.8)

The problem has devolved to determining the constant of integration C. We do this by subtracting the
singularity. We use an elementary nearly-singular-as-x→0 integral:

    ln x = −∫_x^1 dt/t .                                                       (9.9)

We use this elementary integral to subtract the logarithmic singularity from (9.6):

    E(x) + ln x = −∫_x^1 (1 − e^{−t})/t dt + ∫_1^∞ (e^{−t}/t) dt .             (9.10)

Now we take the limit x → 0 and encounter only convergent integrals:

    C = lim_{x→0} [ E(x) + ln x ] ,                                            (9.11)

      = −∫_0^1 (1 − e^{−t})/t dt + ∫_1^∞ (e^{−t}/t) dt ,                       (9.12)

      = −γ_E .                                                                 (9.13)

Above, we've used the result from problem 1.7 to recognize Euler's constant γ_E ≈ 0.57721. To summarize, as
x → 0

    E(x) ∼ −ln x − γ_E + x − x²/4 + ord(x³) .                                  (9.14)
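The expansion (9.14) is easy to test numerically; here x = 0.1, where the neglected ord(x³) term is about x³/18 ≈ 6 × 10⁻⁵, and the truncation point t = 40 in the quadrature is an arbitrary safe choice:

```python
import math

def E(x, n=100000, top=40.0):
    # E(x) = int_x^infty e^{-t}/t dt, truncated at t = 40 (e^{-40} ~ 4e-18)
    h = (top - x) / n
    s = math.exp(-x) / x + math.exp(-top) / top
    for j in range(1, n):
        t = x + j * h
        s += math.exp(-t) / t * (4 if j % 2 else 2)
    return s * h / 3

x = 0.1
gammaE = 0.5772156649015329
series = -math.log(x) - gammaE + x - x * x / 4
print(E(x), series)
```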

9.2 Local and global contributions


Consider

    A(ǫ) ≝ ∫_0^1 e^x dx / √(ǫ + x) .                                           (9.15)
The integrand is shown in Figure 9.1. How does the function A(ǫ) behave as ǫ → 0? The leading
order behaviour is perfectly pleasant:

    A(0) = ∫_0^1 e^x dx / √x .                                                 (9.16)

This integral is well behaved and we can just evaluate it, for example as

    A(0) = ∫_0^1 [ 1/√x + x^{1/2} + x^{3/2}/2! + x^{5/2}/3! + ⋯ ] dx
         ≈ 2 + 2/3 + 1/5 + 1/21
         = 2.91429 .                                                           (9.17)

Alternatively, with the mathematica command NIntegrate, we find A(0) = 2.9253.
To get the first dependence of A on ǫ, we try taking the derivative:

    dA/dǫ = −½ ∫_0^1 e^x dx / (ǫ + x)^{3/2} .                                  (9.18)

But now setting ǫ = 0 we encounter a divergent integral. We've just learnt that the function A(ǫ)
is not differentiable at ǫ = 0. Why is this?
Referring to Figure 9.1, we can argue that the peak contribution to the integral in (9.15) is

    ( peak width, ord(ǫ) ) × ( peak height, ord(ǫ^{−1/2}) ) = ord(ǫ^{1/2}) .    (9.19)
Therefore the total integral is

    A(ǫ) = an ord(1) global contribution
         + an ord(ǫ^{1/2}) contribution from the peak
         + higher-order terms — probably a series in ǫ.                        (9.20)

The ord(ǫ^{1/2}) term is not differentiable at ǫ = 0 — this is why the integral on the right of (9.18) is
divergent. This argument suggests that

    A(ǫ) = 2.9253 + c ǫ^{1/2} + higher-order terms.                            (9.21)

How can we obtain the constant c above?

Figure 9.1: The integrand in (9.15) with ǫ = 0.01. There is a peak with height ǫ^{−1/2} ≫ 1 and width
ǫ ≪ 1 at x = 0. The peak area scales as ǫ^{1/2}, while the outer region makes an O(1) contribution to
the integral.

Method 1: subtraction
We have

    A(ǫ) − A(0) = ∫_0^1 e^x [ 1/√(ǫ+x) − 1/√x ] dx ,                           (9.22)

                ∼ ∫_0^∞ [ 1/√(ǫ+x) − 1/√x ] dx ,                               (9.23)

                = √ǫ ∫_0^∞ [ 1/√(1+t) − 1/√t ] dt ,                            (9.24)

                = −2√ǫ .                                                       (9.25)

Exercise: Explain the transition from (9.22) to (9.23).


Although this worked very nicely, it is difficult to get further terms in the series with subtraction.

Method 2: range splitting and asymptotic matching

We split the range at x = δ, where

    ǫ ≪ δ ≪ 1 ,                                                                (9.26)

and write the integral as

    A(ǫ) = ∫_0^δ e^x dx/√(ǫ+x) + ∫_δ^1 e^x dx/√(ǫ+x) ≝ A₁(ǫ, δ) + A₂(ǫ, δ) .   (9.27)

We can simplify A₁(ǫ, δ) and A₂(ǫ, δ) and add the results together to recover A(ǫ). Of course, the
artificial parameter δ must disappear from the final answer. This cancellation provides a good
check on the consistency of our argument and the correctness of algebra.
To simplify A₁ note

    A₁ = ∫_0^δ [ 1 + x + ord(x²) ] dx / √(ǫ + x) .                             (9.28)
This is a splendid approximation because x is small everywhere in the range of integration. The

integrals are elementary, and we obtain

    A₁ = 2√(ǫ+δ) − 2√ǫ + (4/3)ǫ^{3/2} + (2/3)δ√(δ+ǫ) − (4/3)ǫ√(δ+ǫ) + ord(δ^{5/2}) ,   (9.29)

       = 2√δ + ǫ/√δ − 2√ǫ + (4/3)ǫ^{3/2} + (2/3)δ^{3/2} + ord( δ^{5/2}, ǫ²/δ^{3/2}, δ^{1/2}ǫ ) .   (9.30)

To be consistent about which terms are discarded we gear by saying that δ = ord(ǫ^{1/2}). Then all
the terms in the ord garbage heap in (9.30) are of order ǫ^{5/4}.
To simplify A₂ we use the approximation

    A₂ = ∫_δ^1 e^x [ 1/√x − ǫ/2x^{3/2} + ord(ǫ²x^{−5/2}) ] dx .                (9.31)
This approximation is good because x ≥ δ ≫ ǫ everywhere in the range of integration. Now we can evaluate some elementary integrals:
\[
A_2 = \int_0^1 \frac{e^x}{\sqrt{x}}\,\mathrm{d}x - \int_0^{\delta} \frac{e^x}{\sqrt{x}}\,\mathrm{d}x + \epsilon \int_{\delta}^{1} e^x \frac{\mathrm{d}}{\mathrm{d}x}\, x^{-1/2} \,\mathrm{d}x + \mathrm{ord}\!\left(\frac{\epsilon^2}{\delta^{3/2}}\right), \qquad (9.32)
\]
\[
= A(0) - \int_0^{\delta} \left( x^{-1/2} + x^{1/2} \right) \mathrm{d}x + \epsilon \left[ x^{-1/2} e^x \right]_{\delta}^{1} - \epsilon \int_{\delta}^{1} x^{-1/2} e^x \,\mathrm{d}x + \mathrm{ord}\!\left(\frac{\epsilon^2}{\delta^{3/2}},\, \delta^{5/2}\right), \qquad (9.33)
\]
\[
= A(0) - 2\sqrt{\delta} - \tfrac{2}{3}\delta^{3/2} + \epsilon e - \frac{\epsilon}{\sqrt{\delta}} - \epsilon A(0) + \mathrm{ord}\!\left(\frac{\epsilon^2}{\delta^{3/2}},\, \delta^{5/2},\, \epsilon\delta^{1/2}\right). \qquad (9.34)
\]
The proof of the pudding is that when we sum (9.30) and (9.34), the three terms containing the arbitrary parameter δ, namely
\[
2\sqrt{\delta}\,, \qquad \tfrac{2}{3}\delta^{3/2}\,, \qquad \text{and} \qquad \frac{\epsilon}{\sqrt{\delta}}\,, \qquad (9.35)
\]
all cancel. We are left with
\[
A(\epsilon) = A(0) - 2\epsilon^{1/2} + \left[ e - A(0) \right] \epsilon + \tfrac{4}{3}\epsilon^{3/2} + \mathrm{ord}(\epsilon^2)\,. \qquad (9.36)
\]
The terms of order ǫ⁰ and ǫ¹ come from the outer region, while the terms of order ǫ^{1/2} and ǫ^{3/2} come from the inner region (the peak).
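As a numerical sanity check (my addition, not part of the original notes): the substitution u = √(ǫ + x) gives the smooth representation A(ǫ) = 2e^{−ǫ} ∫ e^{u²} du, taken between √ǫ and √(1 + ǫ), which can be compared against the four-term series A(0) − 2ǫ^{1/2} + (e − A(0))ǫ + (4/3)ǫ^{3/2}.

```python
import math

def simpson(f, a, b, n=20000):
    # composite Simpson rule; n must be even
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += f(a + i * h) * (4 if i % 2 else 2)
    return s * h / 3

def A(eps):
    # u = sqrt(eps + x) removes the peak: A = 2*exp(-eps)*int exp(u**2) du
    return 2 * math.exp(-eps) * simpson(lambda u: math.exp(u * u),
                                        math.sqrt(eps), math.sqrt(1 + eps))

A0 = A(0.0)  # about 2.9253, the constant quoted in (9.21)
for eps in (1e-2, 1e-3, 1e-4):
    series = A0 - 2 * math.sqrt(eps) + (math.e - A0) * eps + (4 / 3) * eps ** 1.5
    print(eps, A(eps) - series)  # residual shrinks like eps**2
```

The same substitution also gives the exact identity A(ǫ) = e^{−ǫ}[F(1 + ǫ) − F(ǫ)] with F(a) = ∫₀ᵃ eᵘu^{−1/2} du, from which the series, including the 4/3 coefficient, can be rederived directly.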

Another example of matching


Let us complete problem 1.5 by finding a few terms in the t → 0 asymptotic expansion of
\[
\dot{x}(t) = \int_0^{\infty} \frac{v\,e^{-vt}}{1+v^2}\,\mathrm{d}v\,. \qquad (9.37)
\]
If we simply set t = 0 then the integral diverges logarithmically. We suspect ẋ ∼ ln(1/t). Let's calculate ẋ(t) at small t precisely by splitting the range at v = a, where
\[
1 \ll a \ll \frac{1}{t}\,. \qquad (9.38)
\]
For instance, we could take a = ord(t^{-1/2}). Then we have
\[
\dot{x} = \int_0^{a} \frac{v - v^2 t + \cdots}{1+v^2}\,\mathrm{d}v + \int_a^{\infty} e^{-vt} \left( \frac{1}{v} - \frac{1}{v^3} + \cdots \right) \mathrm{d}v\,. \qquad (9.39)
\]

Now we have a variety of integrals that can be evaluated by elementary means, and by recognizing the exponential integral:
\[
\dot{x} \sim \tfrac{1}{2}\ln(1+a^2) - at + t\tan^{-1}a + E(at) + \cdots \qquad (9.40)
\]
\[
\sim \ln a - at + \frac{\pi t}{2} + \underbrace{\left( -\ln(at) - \gamma_E + at \right)}_{\text{from } E(at)} + \mathrm{ord}\!\left( a^2 t^2,\, \frac{t}{a},\, a^{-2} \right) \qquad (9.41)
\]
\[
= \ln\frac{1}{t} - \gamma_E + \frac{\pi t}{2} + \mathrm{ord}(t^2)\,. \qquad (9.42)
\]
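A brute-force check of (9.42) (my addition): evaluate (9.37) by quadrature, truncating the upper limit where e^{−vt} is negligible, and compare with ln(1/t) − γ_E + πt/2.

```python
import math

def xdot(t, n=200000):
    # Simpson rule for (9.37), truncated at v = 50/t (tail ~ exp(-50))
    b = 50.0 / t
    h = b / n
    s = 0.0  # integrand vanishes at v = 0 and is negligible at v = b
    for i in range(1, n):
        v = i * h
        s += v * math.exp(-v * t) / (1 + v * v) * (4 if i % 2 else 2)
    return s * h / 3

gammaE = 0.5772156649015329  # Euler's constant
for t in (0.1, 0.01):
    approx = math.log(1 / t) - gammaE + math.pi * t / 2
    print(t, xdot(t), approx)  # agreement improves as t decreases
```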

9.3 An electrostatic problem — H section 3.5


Here is a crash course in the electrostatics of conductors:

∇ · e = ρ, and ∇ × e = 0. (9.43)

Above e(x) is the electric field at point x and ρ(x) is the density of charges (electrons per cubic
meter). Both equations can be satisfied at once by introducing the electrostatic potential φ:

e = −∇φ , and therefore ∇2 φ = −ρ . (9.44)

To obtain the electrostatic potential φ we must solve Poisson’s equation above.


This is accomplished using the Green's function:
\[
\nabla^2 g = -\delta(\mathbf{x})\,, \quad \Rightarrow \quad g = \frac{1}{4\pi r}\,, \qquad (9.45)
\]
where r ≝ |x| is the distance from the singularity (the point charge). Hence if there are no boundaries
\[
\phi(\mathbf{x}) = \frac{1}{4\pi} \int \frac{\rho(\mathbf{x}')}{|\mathbf{x}-\mathbf{x}'|}\,\mathrm{d}\mathbf{x}'\,. \qquad (9.46)
\]
So far, so good: in free space, given ρ(x), we must evaluate the three dimensional integral above.
The charged rod at the end of this section is a non-trivial example.
If there are boundaries then we need to worry about boundary conditions: e.g., on the surface of a charged conductor (think of a silver spoon) the potential is constant, else charges would flow along the surface. In terms of the electric field e, the boundary condition on the surface of a conducting body B is that
e · tB = 0 , and e · nB = σ (9.47)
where tB is any tangent to the surface of B, nB is the unit normal, pointing out of B, and σ is the
charge density (electrons per square meter) sitting on the surface of B.
Example: a sphere. The simplest example is a sphere of radius a carrying a total charge q, with surface charge density
\[
\sigma = \frac{q}{4\pi a^2}\,. \qquad (9.48)
\]
Outside the sphere ρ = 0 and the potential is
\[
\phi = \frac{q}{4\pi r}\,, \quad \text{so that} \quad \mathbf{e} = \frac{q\,\hat{\mathbf{r}}}{4\pi r^2}\,, \qquad (9.49)
\]
where r̂ is a unit vector pointing in the radial direction (i.e., our notation is x = r r̂). The solution above is the same as if all the charge were moved to the center of the sphere.

For a non-spherical conducting body B things aren’t so simple. We must solve ∇2 φ = 0 outside
the body with φ = φB on the surface B of the body, where φB is an unknown constant. (We are
considering an isolated body sitting in free space so that φ(x) → 0 as r → ∞.)
We don’t know the surface charge density σ(x), but only the total charge q, which is the surface
integral of σ(x): Z Z
q= σ dS = e · nB dS . (9.50)
B B
This is a linear problem, so the solution φ(x) will be proportional to the total charge q. We define
the capacity CB of the body as
q = CB φB . (9.51)
The capacity is an important electrical property of B.
Example: The electrostatic energy is defined via the volume integral
\[
E \stackrel{\text{def}}{=} \frac{1}{2}\int |\mathbf{e}|^2 \,\mathrm{d}V\,, \qquad (9.52)
\]
where the integral is over the region outside of B. Show that
\[
E = \tfrac{1}{2} C_B \phi_B^2\,. \qquad (9.53)
\]
If you have a sign error, consider that the outward normal to the body, n_B, is the inward normal to free space.
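The energy identity can be checked directly for the sphere (my addition): with φ_B = q/(4πa) from (9.49), the capacity (9.51) is C_B = 4πa, and the radial integral of |e|²/2 over r > a should give q²/(8πa), consistent with E = ½C_Bφ_B².

```python
import math

q, a = 2.0, 0.5
phiB = q / (4 * math.pi * a)   # potential on the sphere, from (9.49)
CB = q / phiB                  # capacity (9.51); equals 4*pi*a here

# electrostatic energy (9.52): integrate |e|^2/2 over shells 4*pi*r^2 dr,
# truncating at rmax (truncation error ~ q^2/(8*pi*rmax))
n, rmax = 200000, 5000.0 * a
h = (rmax - a) / n
E = 0.0
for i in range(n):
    r = a + (i + 0.5) * h      # midpoint rule
    e2 = (q / (4 * math.pi * r ** 2)) ** 2
    E += 0.5 * e2 * 4 * math.pi * r ** 2 * h

print(E, 0.5 * CB * phiB ** 2)  # both close to q**2/(8*pi*a)
```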

Example: a charged rod. Find the potential due to a line distribution of charge with density η (electrons per meter) along −a < z < a.
In this example the charge density is
\[
\rho = \eta\,\frac{\delta(s)\chi(z)}{2\pi s}\,, \qquad (9.54)
\]
where s = √(x² + y²) is the cylindrical radius. The indicator function χ(z) is one if −a < z < +a, and zero otherwise.
We now evaluate the integral in (9.46) using cylindrical coordinates (θ, s, z), i.e., dx = s dθ ds dz. The s and θ integrals are trivial, and the potential is therefore
\[
\phi(s,z) = \frac{\eta}{4\pi} \int_{-a}^{a} \frac{\mathrm{d}\xi}{\sqrt{(z-\xi)^2 + s^2}}\,, \qquad (9.55)
\]
\[
= \frac{\eta}{4\pi} \int_{-(a+z)/s}^{+(a-z)/s} \frac{\mathrm{d}t}{\sqrt{1+t^2}}\,, \qquad (9.56)
\]
\[
= \frac{\eta}{4\pi} \left[ \ln\!\left( t + \sqrt{1+t^2} \right) \right]_{-(a+z)/s}^{+(a-z)/s}\,, \qquad (9.57)
\]
\[
= \frac{\eta}{4\pi} \ln\!\left( \frac{r_+ - z + a}{r_- - z - a} \right), \qquad (9.58)
\]
where
\[
r_\pm \stackrel{\text{def}}{=} \sqrt{s^2 + (a \mp z)^2}\,. \qquad (9.59)
\]
r± is the distance between x and the end of the rod at z = ±a.
Using
\[
z = \frac{r_-^2 - r_+^2}{4a}\,, \qquad (9.60)
\]
the expression in (9.58) can alternatively be written as
\[
\phi = \frac{\eta}{4\pi} \ln\!\left( \frac{r_+ + r_- + 2a}{r_+ + r_- - 2a} \right). \qquad (9.61)
\]
If you dutifully perform this algebra you’ll be rewarded by some remarkable cancellations. The expression in
(9.61) shows that the equipotential surfaces are confocal ellipsoids — the foci are at z = ±a. The solution is
shown in Figure 9.2.
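A quick numerical check of (9.61) (my addition): a midpoint-rule quadrature of the line-charge integral (9.55) should agree with the closed form at any point off the rod.

```python
import math

def phi_quad(s, z, a=1.0, eta=1.0, n=4000):
    # midpoint rule for the line-charge integral (9.55)
    h = 2 * a / n
    total = sum(1 / math.hypot(z - (-a + (i + 0.5) * h), s) for i in range(n))
    return eta * total * h / (4 * math.pi)

def phi_closed(s, z, a=1.0, eta=1.0):
    # equation (9.61): the equipotentials are confocal ellipsoids
    rp = math.hypot(s, a - z)
    rm = math.hypot(s, a + z)
    return eta / (4 * math.pi) * math.log((rp + rm + 2 * a) / (rp + rm - 2 * a))

print(phi_quad(0.5, 0.3), phi_closed(0.5, 0.3))  # should agree closely
```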

[Figure 9.2 appears here.]

Figure 9.2: Equipotential contours, φ(x, z) in (9.61), surrounding a charged rod that lies along the z-axis between z = −a and z = a. The surfaces are confocal ellipsoids.

A slender body
In section 5.3, H considers an axisymmetric body B defined in cylindrical coordinates by
\[
\underbrace{\sqrt{x^2+y^2}}_{\stackrel{\text{def}}{=}\,s} = \epsilon B(z)\,. \qquad (9.62)
\]

(I’m using different notation from H: above s is the cylindrical radius.) The integral equation in
H is then Z 1
1 f (ξ; ǫ) dξ
1= p . (9.63)
4π −1 (z − ξ)2 + ǫ2 B(z)2
I think it is easiest to attack this integral equation by first asymptotically estimating the integral
Z 1
1 f (ξ; ǫ) dξ
φ(s, z) = p , as s → 0. (9.64)
4π −1 (z − ξ)2 + s2

Notice that we can’t simply set s = 0 in (9.64) because the “simplified” integral,
Z 1
1 f (ξ; ǫ) dξ
,
4π −1 |z − ξ|

is divergent. Instead, using the example below, we can show that


√ ! Z 1
f (z) 2 1 − z2 1 f (ξ) − f (z)
φ(s, z) = ln + dξ + O(s) . (9.65)
2π s 4π −1 |ξ − z|

Thus the integral equation (9.63) is approximated by
\[
1 \approx \frac{f(z;\epsilon)}{2\pi} \ln\!\left( \frac{2\sqrt{1-z^2}}{\epsilon B(z)} \right) + \frac{1}{4\pi} \int_{-1}^{1} \frac{f(\xi;\epsilon) - f(z;\epsilon)}{|\xi - z|}\,\mathrm{d}\xi\,. \qquad (9.66)
\]

As ǫ → 0 there is a dominant balance between the left hand side and the first term on the right, leading to
\[
f(z;\epsilon) \approx \frac{2\pi}{\ln\!\left( \dfrac{2\sqrt{1-z^2}}{\epsilon B(z)} \right)}\,, \qquad (9.67)
\]
\[
= \frac{2\pi}{L - \ln\!\left( \dfrac{B(z)}{2\sqrt{1-z^2}} \right)}\,, \qquad (9.68)
\]
where L ≝ ln(1/ǫ) ≫ 1. Thus expanding the denominator in (9.68) we have
\[
f(z;\epsilon) \approx \frac{2\pi}{L} + \frac{2\pi}{L^2} \ln\!\left( \frac{B(z)}{2\sqrt{1-z^2}} \right) + \mathrm{ord}\!\left(L^{-3}\right). \qquad (9.69)
\]
This is the solution given by H. For many purposes we might as well stop at (9.67), which provides the sum to infinite order in L^{−n}. However if we need a nice explicit result for the capacity,
\[
C(\epsilon) = \int_{-1}^{1} f(z;\epsilon)\,\mathrm{d}z\,, \qquad (9.70)
\]
then the series in (9.69) is our best hope.
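A consistency check (my addition, with the prolate spheroid B(z) = √(1 − z²) as a test shape): the leading approximation (9.67) then gives a z-independent f = 2π/ln(2/ǫ); substituting this constant into the right of (9.63) should give a value close to 1 for every z on the body. The ξ-integral can be done in closed form with asinh, as in the charged-rod example.

```python
import math

def lhs_of_963(z, eps):
    # plug f = 2*pi/ln(2/eps), the leading term of (9.67) when
    # B(z) = sqrt(1 - z**2), into the right side of (9.63)
    f = 2 * math.pi / math.log(2 / eps)
    s = eps * math.sqrt(1 - z * z)   # local body radius eps*B(z)
    # closed form of the xi-integral (cf. the charged-rod example)
    integral = math.asinh((1 - z) / s) + math.asinh((1 + z) / s)
    return f * integral / (4 * math.pi)

for z in (0.0, 0.5, 0.9):
    print(z, lhs_of_963(z, 1e-4))  # should be close to 1
```

That the answer is essentially exactly 1 for this particular B reflects the fact that a conducting prolate spheroid carries a uniform line density, as in the charged-rod example above.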


Example: obtain the approximation (9.66). This is a good example of singularity subtraction. We subtract the nearly singular part from (9.64):
\[
\phi(s,z) = \frac{1}{4\pi} \int_{-1}^{1} \frac{f(\xi) - f(z)}{\sqrt{(z-\xi)^2 + s^2}}\,\mathrm{d}\xi + \frac{f(z)}{4\pi} \int_{-1}^{1} \frac{\mathrm{d}\xi}{\sqrt{(z-\xi)^2 + s^2}}\,. \qquad (9.71)
\]
In the first integral on the right of (9.71) we can set s = 0 without creating a divergent integral: this move produces the final term in (9.66), with the denominator |z − ξ|.
The final term in (9.71) is the potential of a uniform line density of charges on the segment −1 < z < 1, i.e., the potential of the charged rod back in (9.58) (but now with a = 1). We don't need (9.58) in its full glory — we're taking s → 0 with −1 < z < 1. In this limit (9.61) simplifies to
\[
\frac{1}{4\pi} \int_{-1}^{1} \frac{\mathrm{d}\xi}{\sqrt{(z-\xi)^2 + s^2}} \approx \frac{1}{2\pi} \ln\!\left( \frac{2\sqrt{1-z^2}}{s} \right). \qquad (9.72)
\]
Thus we have
\[
\phi(s,z) = \frac{1}{4\pi} \int_{-1}^{1} \frac{f(\xi) - f(z)}{|z - \xi|}\,\mathrm{d}\xi + \frac{f(z)}{2\pi} \ln\!\left( \frac{2\sqrt{1-z^2}}{s} \right). \qquad (9.73)
\]
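To see (9.72) in action (my addition), compare a direct midpoint-rule quadrature of the uniform-density integral with the logarithmic approximation at a sample interior point, here z = 0.3.

```python
import math

def lhs(s, z, n=20000):
    # midpoint rule for (1/4*pi) * int dxi / sqrt((z-xi)**2 + s**2), -1 < xi < 1
    h = 2.0 / n
    tot = sum(1 / math.hypot(z - (-1 + (i + 0.5) * h), s) for i in range(n))
    return tot * h / (4 * math.pi)

def rhs(s, z):
    # the s -> 0 approximation (9.72)
    return math.log(2 * math.sqrt(1 - z * z) / s) / (2 * math.pi)

for s in (1e-1, 1e-2, 1e-3):
    print(s, lhs(s, 0.3), rhs(s, 0.3))  # difference vanishes as s -> 0
```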

9.4 Problems
Problem 9.1. Find the leading-order behavior of
\[
H(\epsilon) = \int_0^{\pi} \frac{\cos x}{x^2 + \epsilon^2}\,\mathrm{d}x\,, \quad \text{as } \epsilon \to 0. \qquad (9.74)
\]

Problem 9.2. Consider the integral
\[
I(x) \equiv \int_x^{\infty} \frac{W(z)}{z}\,\mathrm{d}z\,, \qquad (9.75)
\]
where W(x) is a smooth function that decays at x = ∞ and has a Taylor series expansion about x = 0:
\[
W(x) = W_0 + x W_0' + \tfrac{1}{2} x^2 W_0'' + \cdots \qquad (9.76)
\]
Some examples are W(z) = exp(−z), W(z) = sech(z), W(z) = (1 + z²)^{−1}, etc. (i) Show that the integral in (9.75) has an expansion about x = 0 of the form
\[
I(x) = W_0 \ln\frac{1}{x} + C - W_0' x - \tfrac{1}{4} W_0'' x^2 + O(x^3)\,, \qquad (9.77)
\]
where the constant C is
\[
C = \int_0^1 \left[ W(z) + W(1/z) - W_0 \right] \frac{\mathrm{d}z}{z}\,. \qquad (9.78)
\]
(ii) Evaluate C if W(z) = (1 + z²)^{−1}. (iii) Evaluate the integral exactly with W(z) = (1 + z²)^{−1}, and show that the expansion of the exact solution agrees with the formula above.
Problem 9.3. Find useful approximations to
\[
F(x) \stackrel{\text{def}}{=} \int_0^{\infty} \frac{\mathrm{d}u}{\sqrt{x^2 + u^2 + u^4}} \qquad (9.79)
\]
as (i) x → 0; (ii) x → ∞.
Problem 9.4. Find the first two terms in the ǫ → 0 asymptotic expansion of
\[
F(\epsilon) \stackrel{\text{def}}{=} \int_0^{\infty} \frac{\mathrm{d}y}{(1 + y)\,(\epsilon^2 + y)^{1/2}}\,. \qquad (9.80)
\]
Problem 9.5. Consider
\[
H(r) \stackrel{\text{def}}{=} \int_0^{\infty} \frac{x\,\mathrm{d}x}{(r^2 + x)^{3/2}\,(1 + x)}\,. \qquad (9.81)
\]
(i) First, with r → 0, find the first two non-zero terms in the expansion of H. (ii) With r → ∞, find the first two non-zero terms, counting constants and ln r as the same order.
Problem 9.6. Find two terms in the expansion of the elliptic integral
\[
K(m) \stackrel{\text{def}}{=} \int_0^{\pi/2} \frac{\mathrm{d}\theta}{\sqrt{1 - m^2 \sin^2\theta}}\,, \qquad (9.82)
\]
as m ↑ 1.
Problem 9.7. Find three terms (counting ǫⁿ and ǫⁿ ln ǫ as different orders) in the expansion of the elliptic integral
\[
J(m) \stackrel{\text{def}}{=} \int_0^{\pi/2} \sqrt{1 - m\cos^2\theta}\,\mathrm{d}\theta\,, \qquad (9.83)
\]
as m ↑ 1.
Problem 9.8. This is H exercise 3.8. Consider the integral equation
\[
x = \int_{-1}^{1} \frac{f(t;\epsilon)\,\mathrm{d}t}{\epsilon^2 + (t - x)^2}\,, \qquad (9.84)
\]
posed in the interval −1 ≤ x ≤ 1. Assuming that f(x;ǫ) is O(ǫ) in the end regions where 1 − |t| = ord(ǫ), obtain the first two terms in an asymptotic expansion of f(x;ǫ) as ǫ → 0.
Problem 9.9. Show that as ǫ → 0:
\[
\int_0^1 \frac{\ln x}{\epsilon + x}\,\mathrm{d}x = -\frac{1}{2} \ln^2\frac{1}{\epsilon} - \frac{\pi^2}{6} + \epsilon \left( 1 - \frac{\epsilon}{4} + \frac{\epsilon^2}{9} - \frac{\epsilon^3}{16} + \cdots \right). \qquad (9.85)
\]
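A numerical spot-check of this expansion (my addition): the substitution x = e^{−u} tames the logarithmic singularity at x = 0, after which a Simpson rule on a truncated range is accurate.

```python
import math

def lhs(eps, umax=60.0, n=60000):
    # Simpson rule after x = exp(-u); integrand is -u*exp(-u)/(eps + exp(-u))
    h = umax / n
    def g(u):
        return -u * math.exp(-u) / (eps + math.exp(-u))
    s = g(0.0) + g(umax)
    for i in range(1, n):
        s += g(i * h) * (4 if i % 2 else 2)
    return s * h / 3

def rhs(eps):
    # four terms of the bracketed series in (9.85)
    tail = eps * (1 - eps / 4 + eps ** 2 / 9 - eps ** 3 / 16)
    return -0.5 * math.log(1 / eps) ** 2 - math.pi ** 2 / 6 + tail

for eps in (0.1, 0.01):
    print(eps, lhs(eps), rhs(eps))  # should agree closely
```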
