
ISE I Brief Lecture Notes

1 Partial Differentiation
1.1 Definitions
Let f (x, y) be a function of two variables. The partial derivative ∂f /∂x is
the function obtained by differentiating f with respect to x, regarding y as
a constant. Similarly, ∂f /∂y is obtained by differentiating f with respect to
y, regarding x as a constant.
We often use the alternative notation

fx = ∂f /∂x, fy = ∂f /∂y.

Example If f (x, y) = x^2 + xy + y^3 − 1 then fx = 2x + y, fy = x + 3y^2.

As a limit,

∂f /∂x = lim_{h→0} [f (x + h, y) − f (x, y)]/h.

Also, the equation z = f (x, y) defines a surface in 3-dimensional space with
x, y, z-axes, and ∂f /∂x is the gradient of the tangent at a point in the x-direction.

Higher partial derivatives We can differentiate fx , fy partially with respect to
x and y to get four second order derivatives:

fxx = ∂^2 f /∂x^2 ,
fyy = ∂^2 f /∂y^2 ,
fxy = ∂^2 f /∂x∂y,
fyx = ∂^2 f /∂y∂x.

Example 1. f (x, y) = x^2 + xy + y^3 − 1 as above: then

fxx = 2, fyy = 6y, fxy = fyx = 1.

2. If f (x, y) = tan^{−1}(y/x) then we get

fxx = 2xy/(x^2 + y^2)^2 , fyy = −2xy/(x^2 + y^2)^2 ,

and also

fxy = fyx = (y^2 − x^2)/(x^2 + y^2)^2 .
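The equality fxy = fyx in Example 2 can be sanity-checked numerically with finite differences. A minimal Python sketch (not part of the original notes; the sample point and step size are arbitrary choices):

```python
import math

def f(x, y):
    # Example 2: f(x, y) = tan^(-1)(y/x)
    return math.atan(y / x)

h = 1e-5

def fx(x, y):   # ∂f/∂x by a central difference
    return (f(x + h, y) - f(x - h, y)) / (2 * h)

def fy(x, y):   # ∂f/∂y by a central difference
    return (f(x, y + h) - f(x, y - h)) / (2 * h)

def fxy(x, y):  # differentiate fx with respect to y
    return (fx(x, y + h) - fx(x, y - h)) / (2 * h)

def fyx(x, y):  # differentiate fy with respect to x
    return (fy(x + h, y) - fy(x - h, y)) / (2 * h)

x0, y0 = 1.3, 0.7
exact = (y0**2 - x0**2) / (x0**2 + y0**2)**2   # the formula from the notes
print(fxy(x0, y0), fyx(x0, y0), exact)
```

Both difference estimates agree with the closed form, illustrating the theorem below.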

Notice that in both examples, fxy = fyx . This is a general fact:

Theorem If f (x, y) is a function of two variables, and the second order


partial derivatives fxy and fyx both exist and are continuous, then fxy = fyx .

Here, saying that a function g(x, y) is continuous at a point (a, b) means


that lim(x,y)→(a,b) g(x, y) = g(a, b) – this means that as (x, y) gets closer and
closer to (a, b), g(x, y) gets closer and closer to g(a, b).
All the functions we meet in this chapter will satisfy the assumptions of
the theorem, so from now on we always assume that fxy = fyx .

More than 2 variables For a function f (x, y, z, ...) of three or more variables,
we define the partial derivative fx = ∂f /∂x to be the function obtained by
differentiating f with respect to x, regarding y, z, ... as constants. Similarly
for ∂f /∂y and ∂f /∂z and so on.

Example If f (x, y, z) = (x^2 + y^2 + z^2)^{1/2} then

fx = x(x^2 + y^2 + z^2)^{−1/2} , fy = y(x^2 + y^2 + z^2)^{−1/2} , fz = z(x^2 + y^2 + z^2)^{−1/2} .

1.2 Small changes


Let f = f (x, y), and make small changes from x to x + δx and from y
to y + δy. What is the corresponding change in f , i.e. what is δf =
f (x + δx, y + δy) − f (x, y) ? We can estimate this as follows: write this as

δf = [f (x + δx, y + δy) − f (x, y + δy)] + [f (x, y + δy) − f (x, y)].

From the definitions of ∂f /∂x, ∂f /∂y as limits, the first of these brackets is
roughly equal to fx δx and the second to fy δy. So

δf ≈ fx δx + fy δy.

Example What is the approximate change in f (x, y) = xe^{xy} if x changes
from 1 to 1 + h and y from 0 to k?

Ans: δf ≈ (e^{xy} + xye^{xy})δx + x^2 e^{xy} δy. Putting x = 1, y = 0 we get
δf ≈ h + k.
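The estimate δf ≈ fx δx + fy δy can be compared with the true change for this example. A short Python sketch (illustrative only, not part of the notes):

```python
import math

def f(x, y):
    return x * math.exp(x * y)   # the function from the example

def df_estimate(x, y, dx, dy):
    # linear estimate fx·δx + fy·δy, with fx = e^(xy) + xy e^(xy), fy = x^2 e^(xy)
    e = math.exp(x * y)
    return (e + x * y * e) * dx + x**2 * e * dy

h, k = 0.01, 0.02
exact = f(1 + h, 0 + k) - f(1, 0)     # the true change δf
approx = df_estimate(1, 0, h, k)      # reduces to h + k at (1, 0)
print(exact, approx)
```

The two values agree to within terms of second order in h, k.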

1.3 Chain Rules
Recall the chain rule for differentiating functions of one variable: if f = f (u)
and u = u(t) then df /dt = (df /du)(du/dt). Now let f = f (x, y) where x, y are
both functions of one variable t – say

x = x(t), y = y(t).

If a small change δt in t gives corresponding changes δx, δy in x and y, then


δx ≈ (dx/dt)δt, δy ≈ (dy/dt)δt, so by Section 1.2,

δf ≈ fx δx + fy δy ≈ fx (dx/dt)δt + fy (dy/dt)δt.

Dividing through by δt and taking the limit as δt → 0, we get

Chain Rule I If f = f (x, y) where x = x(t), y = y(t), then

df /dt = fx dx/dt + fy dy/dt.

Similarly for functions f (x, y, z) with x = x(t), y = y(t), z = z(t) we get

df /dt = fx dx/dt + fy dy/dt + fz dz/dt.

Example If f (x, y) = x3 y + sin(x + y) where x = t2 , y = sin t, then

df /dt = (3x^2 y + cos(x + y))·2t + (x^3 + cos(x + y))·cos t.

(We could of course work this out by substituting for x, y to get f as a function
of t and then differentiating, but that would be somewhat unpleasant.)
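Chain Rule I can be checked on this example by differentiating f as a function of t directly with a finite difference. A small Python sketch (t = 0.8 is an arbitrary test point):

```python
import math

# f(x, y) = x^3 y + sin(x + y) with x = t^2, y = sin t

def df_dt(t):
    # the chain-rule answer from the example
    x, y = t**2, math.sin(t)
    return (3*x**2*y + math.cos(x + y)) * 2*t + (x**3 + math.cos(x + y)) * math.cos(t)

def F(t):
    # f expressed directly as a function of t
    x, y = t**2, math.sin(t)
    return x**3 * y + math.sin(x + y)

t, h = 0.8, 1e-6
numeric = (F(t + h) - F(t - h)) / (2 * h)   # central-difference dF/dt
print(df_dt(t), numeric)
```
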

Implicit functions An equation of the form

f (x, y) = 0

defines y as an implicit function of x. To find dy/dx, we differentiate the


equation with respect to x using the Chain Rule I (noting that x and y are
both functions of the single variable x). This gives fx + fy dy/dx = 0, hence

dy/dx = −fx /fy .

For example, if the equation is x^2 − y cos x + x^3 y^2 = 0, then differentiating
as above gives 2x + y sin x + 3x^2 y^2 + (− cos x + 2x^3 y) dy/dx = 0, hence

dy/dx = (2x + y sin x + 3x^2 y^2)/(cos x − 2x^3 y).
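The formula dy/dx = −fx/fy can be tested on this example: the equation is quadratic in y, so one branch of y(x) can be solved explicitly and differenced. A Python sketch (the branch choice and test point x = 0.5 are arbitrary):

```python
import math

def dydx(x, y):
    # dy/dx = (2x + y sin x + 3x^2 y^2)/(cos x − 2x^3 y), from the notes
    return (2*x + y*math.sin(x) + 3*x**2*y**2) / (math.cos(x) - 2*x**3*y)

def y_of_x(x):
    # x^2 − y cos x + x^3 y^2 = 0 is quadratic in y; take the smaller root
    a, b, c = x**3, -math.cos(x), x**2
    return (-b - math.sqrt(b*b - 4*a*c)) / (2*a)

x, h = 0.5, 1e-6
numeric = (y_of_x(x + h) - y_of_x(x - h)) / (2 * h)
print(dydx(x, y_of_x(x)), numeric)
```
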

General chain rule Now suppose f = f (x, y) where x = x(s, t), y = y(s, t).
Then f is also a function of s, t and we’d like a formula for ∂f /∂s, ∂f /∂t.
Well, if we regard t as a constant then x = x(s), y = y(s), so we can apply
Chain Rule I to get

Chain Rule II If f = f (x, y) where x = x(s, t), y = y(s, t), then


∂f /∂s = fx ∂x/∂s + fy ∂y/∂s,
and similarly
∂f /∂t = fx ∂x/∂t + fy ∂y/∂t.
For more than 2 variables the rule is entirely similar: if f = f (x, y, z, ...)
where x = x(s, t, ...), y = y(s, t, ...), z = z(s, t, ...),.... then
∂f /∂s = fx ∂x/∂s + fy ∂y/∂s + fz ∂z/∂s + ∙ ∙ ∙
and similarly for ∂f /∂t and so on.

Example Let f = f (x, y) and let r, θ be polar coordinates, so that x =
r cos θ, y = r sin θ. Express the Laplace equation

∂^2 f /∂x^2 + ∂^2 f /∂y^2 = 0

in polar coordinate form.

Step 1 First work out fx , fy in terms of r, θ using the Chain Rule. Well,

r = (x^2 + y^2)^{1/2} , θ = tan^{−1}(y/x),

so by the Chain Rule,

fx = fr ∂r/∂x + fθ ∂θ/∂x = fr · x(x^2 + y^2)^{−1/2} − fθ · y(x^2 + y^2)^{−1} ,

and so we get

fx = fr cos θ − fθ (sin θ)/r.

Similarly

fy = fr sin θ + fθ (cos θ)/r.

Step 2 Now we work out fxx in terms of r, θ. By Step 1,

fxx = cos θ · ∂/∂r(fx ) − ((sin θ)/r) · ∂/∂θ(fx )
    = cos θ · ∂/∂r(fr cos θ − fθ (sin θ)/r) − ((sin θ)/r) · ∂/∂θ(fr cos θ − fθ (sin θ)/r)
    = cos θ[cos θ frr + ((sin θ)/r^2)fθ − ((sin θ)/r)fθr ]
      − ((sin θ)/r)[cos θ frθ − sin θ fr − ((cos θ)/r)fθ − ((sin θ)/r)fθθ ].

Hence we get

fxx = cos^2 θ frr − ((2 sin θ cos θ)/r)fθr + ((sin^2 θ)/r)fr + ((2 sin θ cos θ)/r^2)fθ + ((sin^2 θ)/r^2)fθθ .

Similarly we get

fyy = sin^2 θ frr + ((2 sin θ cos θ)/r)fθr + ((cos^2 θ)/r)fr − ((2 sin θ cos θ)/r^2)fθ + ((cos^2 θ)/r^2)fθθ .

Adding these two expressions, we obtain

fxx + fyy = frr + (1/r)fr + (1/r^2)fθθ .

Therefore Laplace’s equation in polar form is

frr + (1/r)fr + (1/r^2)fθθ = 0.
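The polar form can be sanity-checked on a known harmonic function, e.g. f = x^2 − y^2 = r^2 cos 2θ, using second differences in r and θ. An illustrative Python sketch (not part of the notes; the test point is arbitrary):

```python
import math

def f(r, th):
    # r^2 cos 2θ = x^2 − y^2, a standard harmonic function
    return r**2 * math.cos(2 * th)

h = 1e-4
r, th = 1.7, 0.6   # arbitrary test point
frr   = (f(r + h, th) - 2*f(r, th) + f(r - h, th)) / h**2
fr    = (f(r + h, th) - f(r - h, th)) / (2 * h)
fthth = (f(r, th + h) - 2*f(r, th) + f(r, th - h)) / h**2

laplacian = frr + fr/r + fthth/r**2   # the polar-form Laplacian
print(laplacian)
```

The result is zero up to discretisation error, as the polar formula predicts.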

1.4 Taylor expansions


Recall the Taylor expansion of a function f (x) of one variable at a point
x = a: this is

f (a + h) = f (a) + f'(a)h + (f''(a)/2!)h^2 + ∙ ∙ ∙ + (f^{(n)}(a)/n!)h^n + Rn (h),
where Rn (h) is the “error term”.
Now we’ll get a similar expansion for a function f (x, y) of two variables
at a point (a, b). We study f along the line joining (a, b) to (a + h, b + k)
for small h, k. A point on this line is (a + th, b + tk), with t between 0 and
1. Define a function F (t) of one variable by

F (t) = f (a + th, b + tk) (0 ≤ t ≤ 1).

So F (0) = f (a, b) and F (1) = f (a + h, b + k). We are interested in the


Maclaurin series for F , which is

F (t) = F (0) + F'(0)t + (F''(0)/2!)t^2 + ∙ ∙ ∙ (1)
Let’s work out F'(0) and F''(0) in terms of f and its partial derivatives. By
Chain Rule I, F'(t) = fx dx/dt + fy dy/dt where x = a + th, y = b + tk, and hence

F'(t) = hfx + kfy

(evaluated at (a + th, b + tk)). Applying the Chain Rule again,

F''(t) = h d/dt(fx (a + th, b + tk)) + k d/dt(fy (a + th, b + tk))
       = h(hfxx + kfxy ) + k(hfyx + kfyy )
       = h^2 fxx + 2hkfxy + k^2 fyy .

Hence
F (0) = f (a, b),
F'(0) = hfx (a, b) + kfy (a, b),
F''(0) = h^2 fxx (a, b) + 2hkfxy (a, b) + k^2 fyy (a, b).
Substituting into (1), we get

Taylor expansion of f (x, y) at (a, b): This is

f (a + h, b + k) = f (a, b) + hfx (a, b) + kfy (a, b) + (1/2)(h^2 fxx (a, b) + 2hkfxy (a, b) + k^2 fyy (a, b))
                   + terms of degree 3 or more in h, k.

For small h, k this gives an approximation to f (a + h, b + k).
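For the running example f = x^2 + xy + y^3 − 1, the only nonzero third derivative is fyyy = 6, so the quadratic Taylor approximation should be off by exactly k^3. A quick Python check (not part of the notes):

```python
def f(x, y):
    return x**2 + x*y + y**3 - 1   # the running example from Section 1.1

def taylor2(a, b, h, k):
    # quadratic Taylor approximation built from the partial derivatives of f
    fx, fy = 2*a + b, a + 3*b**2
    fxx, fxy, fyy = 2.0, 1.0, 6*b
    return f(a, b) + h*fx + k*fy + 0.5*(h*h*fxx + 2*h*k*fxy + k*k*fyy)

a, b, h, k = 1.0, 2.0, 0.01, 0.02
exact = f(a + h, b + k)
approx = taylor2(a, b, h, k)
print(exact - approx, k**3)   # the remainder is the single degree-3 term k^3
```
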

1.5 Maxima, minima and stationary points


Now we use the previous section to study maxima and minima of functions
of 2 variables. For a function f (x, y), we say f has a maximum at a point
(a, b) if f (a + h, b + k) < f (a, b) for all small values of h, k (not both 0).
Similarly, f has a minimum at (a, b) if f (a + h, b + k) > f (a, b) for all small
h, k.

Finding maxima and minima Suppose f has a maximum at (a, b). Then if
we fix y = b and vary x, we certainly have f (a + h, b) < f (a, b) for all small
h, which says that the function f (x, b) of 1 variable (x) has a maximum at
x = a. Hence ∂f /∂x = 0 at (a, b), and likewise ∂f /∂y = 0 at (a, b). This is
also true at a minimum. Hence maxima and minima are stationary points
of f , in the following sense:

Definition We say (a, b) is a stationary point of f if fx (a, b) = fy (a, b) = 0.


A stationary point may or may not be a max/min. For example, if
f (x, y) = xy then (0, 0) is a stationary point, but it is not a max or a min,
as we can make f (h, k) positive or negative for suitable choices of small h, k.
We call a stationary point (a, b) a saddle point of f if it is not a max or
a min.

Example Let f (x, y) = x^2/2 − x + xy^2 . Then fx = x − 1 + y^2 , fy = 2xy. At
a stationary point, 2xy = 0 so x = 0 or y = 0. If x = 0 then −1 + y^2 = 0 so
y = ±1. If y = 0 then x − 1 = 0 so x = 1. So f has 3 stationary points:

(0, 1), (0, −1), (1, 0).

Suppose (a, b) is a stationary point of f , so fx (a, b) = fy (a, b) = 0. Write

A = fxx (a, b), B = fxy (a, b), C = fyy (a, b).

Then the Taylor expansion of f at (a, b) is

f (a + h, b + k) = f (a, b) + (1/2)(Ah^2 + 2Bhk + Ck^2) + higher terms.

Write
Δ = Ah^2 + 2Bhk + Ck^2 .
Then we see that (a, b) is a max if Δ < 0 for all small h, k; (a, b) is a min if
Δ > 0 for all small h, k; and otherwise (a, b) is a saddle.
Suppose now that A ≠ 0. Then

Δ = (1/A)(A^2 h^2 + 2ABhk + ACk^2) = (1/A)[(Ah + Bk)^2 + (AC − B^2)k^2 ].

If AC − B^2 > 0 and A > 0 then Δ > 0 for all small h, k, and so (a, b) is a
min. If AC − B^2 > 0 and A < 0 then Δ < 0 for all small h, k, and so (a, b)
is a max. And if AC − B^2 < 0 then Δ can be made positive or negative for
suitably chosen small h, k, so (a, b) is a saddle.
Now suppose A = 0 and B ≠ 0. Then Δ = k(2Bh + Ck), which can be
made positive or negative for small h, k, so again we have a saddle.

Summary Let (a, b) be a stationary point of f (x, y), and let A = fxx (a, b),
B = fxy (a, b), C = fyy (a, b). The nature of the stationary point is as follows:

A     AC − B^2   nature
> 0   > 0        minimum
< 0   > 0        maximum
any   < 0        saddle

Example Let f (x, y) = x^2/2 − x + xy^2 as above. We found that f has 3
stationary points, (0, 1), (0, −1), (1, 0). Also fxx = 1, fxy = 2y, fyy = 2x.
So at (0, 1), A = 1, B = 2, C = 0, so AC − B^2 < 0 and this is a saddle. At
(0, −1), A = 1, B = −2, C = 0, so AC − B^2 < 0 and this is also a saddle.
At (1, 0), A = 1, B = 0, C = 2, so AC − B^2 > 0 and this is a minimum.
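The summary table translates directly into a small classifier; here is a Python sketch applied to the three stationary points just found (illustrative, not part of the notes):

```python
def classify(A, B, C):
    """Classify a stationary point from A = fxx, B = fxy, C = fyy."""
    d = A*C - B*B
    if d > 0:
        return "minimum" if A > 0 else "maximum"
    if d < 0:
        return "saddle"
    return "inconclusive"   # AC − B^2 = 0: the test gives no information

# stationary points of f = x^2/2 − x + xy^2, with fxx = 1, fxy = 2y, fyy = 2x
for (x, y) in [(0, 1), (0, -1), (1, 0)]:
    print((x, y), classify(1, 2*y, 2*x))
```
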

Note If AC − B^2 = 0 at a stationary point, the point can be a max, min
or saddle – nothing definite can be said in general. For example, f (x, y) =
x^4 + y^4 has a stationary point at (0, 0); here AC − B^2 = 0 and (0, 0) is
clearly a minimum of f . But g(x, y) = x^3 + y^3 also has a stationary point
at (0, 0) with AC − B^2 = 0, and this point is a saddle point of g.

Contour sketching A common way to visualise a 3-dimensional surface


z = f (x, y) is to sketch some of its contours in the x, y-plane. By a contour
I just mean a curve of the form f (x, y) = c for a constant c.
In the lectures we saw examples of contours for the simple cases z =
x^2 + y^2 and z = xy. For z = x^2 + y^2 there is a min at (0, 0) and the contours
are concentric circles centred at (0, 0); for z = xy there is a saddle at (0, 0)
and the contours are curves xy = c.

Procedure for making contour sketch Suppose we want to make a contour


sketch of a surface z = f (x, y). Here are the steps:
(1) Find the stationary points of f and determine their nature.
(2) Sketch the contours through the saddle points.
(3) Fill in some more contours, making sure that near the max/min points
the contours look like those of z = x^2 + y^2 , and near the saddles they look
like those of z = xy.
This is of course a pretty rough description, but it is fairly clear what to
do in the simple examples in the lectures and problem sheets.

1.6 Exact differential equations


Consider an equation u(x, y) = c (c a constant). As we saw earlier, this
equation defines y as an implicit function of x, and if we differentiate it with
respect to x we get ux + uy dy/dx = 0. So if we put ux = p(x, y) and uy = q(x, y)
then we see that the differential equation

p(x, y) + q(x, y) dy/dx = 0

has solution u(x, y) = c.

Now let’s think of this the other way round. Suppose we have a differential
eqn

P (x, y) + Q(x, y) dy/dx = 0.

If we can find a function u(x, y) such that ux = P, uy = Q, then we’ll be
able to solve the eqn – the solution will be u(x, y) = c. Notice that if such
a function u exists, then uxy = Py and uyx = Qx , so as uxy = uyx , we need
to make sure that Py = Qx . This leads to

Definition The differential equation P + Q dy/dx = 0 is said to be exact if
Py = Qx .

It turns out that for an exact eqn we can always find a function u such
that ux = P, uy = Q, so we can always solve exact eqns.

Example Solve

2xy + cos x cos y + (x^2 − sin x sin y) dy/dx = 0.

Ans Let P = 2xy + cos x cos y, Q = x^2 − sin x sin y. Then Py = Qx =
2x − cos x sin y, so the eqn is exact. To solve, we need to find u = u(x, y) such
that
(1) ux = 2xy + cos x cos y, (2) uy = x^2 − sin x sin y.
Integrating (1), we get u = x^2 y + sin x cos y + f (y), where f (y) is any function
of y. Differentiating wrt y this gives uy = x^2 − sin x sin y + f'(y). Hence (2)
is satisfied if we take f (y) = 0. So u = x^2 y + sin x cos y satisfies both (1)
and (2), and so the solution of the diff eqn is

x^2 y + sin x cos y = c

where c is an arbitrary constant.
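The potential u found above can be checked numerically: its partial derivatives should reproduce P and Q. A brief Python sketch (the test point is an arbitrary choice, not part of the notes):

```python
import math

def u(x, y):
    return x**2 * y + math.sin(x) * math.cos(y)   # the potential found above

def P(x, y):
    return 2*x*y + math.cos(x) * math.cos(y)

def Q(x, y):
    return x**2 - math.sin(x) * math.sin(y)

h = 1e-6
x, y = 0.7, 0.3
ux = (u(x + h, y) - u(x - h, y)) / (2 * h)   # numerical ∂u/∂x
uy = (u(x, y + h) - u(x, y - h)) / (2 * h)   # numerical ∂u/∂y
print(ux, P(x, y))
print(uy, Q(x, y))
```
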

Integrating factor Sometimes it is possible to take a non-exact eqn P +
Q dy/dx = 0 and to find a clever function λ = λ(x, y) to multiply the eqn
through by and make it exact. In other words, to make the eqn

λP + λQ dy/dx = 0

an exact eqn. We call such a function λ an integrating factor for the eqn.

The only cases where this method is feasible are when we can find an
integrating factor of the form λ = λ(x) (a function of x only) or λ(y).
When λ = λ(x), the exactness condition is that (λP )y = (λQ)x , which is
λPy = λ'Q + λQx . This boils down to the following:

(1/λ) dλ/dx = (Py − Qx )/Q.

If the RHS happens to be a function of x only, then we have a chance of
solving this and finding λ.
Similarly, to find an integrating factor of the form λ = λ(y) we need to
solve

(1/λ) dλ/dy = −(Py − Qx )/P.

Example Solve

xy − 1 + (x^2 − xy) dy/dx = 0.

Ans Let P = xy − 1, Q = x^2 − xy. Then Py = x, Qx = 2x − y so the eqn is not
exact. But note that (Py − Qx )/Q = −1/x, so to find an integrating factor
λ = λ(x) we need to solve (1/λ) dλ/dx = −1/x. A solution is λ = 1/x. Multiplying
the original eqn by this we get the eqn

y − 1/x + (x − y) dy/dx = 0.

This is exact. To solve it we find u = u(x, y) satisfying

ux = y − 1/x, uy = x − y.

Integrating the first one gives u = xy − log x + f (y), hence uy = x + f'(y).
So we need to choose f (y) with f'(y) = −y, so take f (y) = −y^2/2. Hence
the solution is

xy − log x − y^2/2 = c.

2 Laplace Transforms
If f (t) is a function of one variable, we define its Laplace transform to be
the function F (s) defined by
F (s) = ∫_0^∞ e^{−st} f (t) dt.

Often we also write L(f (t)) or just L(f ) for the Laplace transform of f .

Examples 1. The Laplace transform of the constant function f (t) = 1 is

L(1) = ∫_0^∞ e^{−st} dt = [−(1/s)e^{−st}]_0^∞ = 1/s (s > 0).

2. L(e^{at}) = ∫_0^∞ e^{at} e^{−st} dt = 1/(s − a) (s > a).

3. Let L(sin at) = F (s) = ∫_0^∞ e^{−st} sin at dt. Then if we integrate by parts
twice we get

F (s) = a/s^2 − (a^2/s^2)F (s),

and hence

L(sin at) = a/(s^2 + a^2).
Similarly

L(cos at) = s/(s^2 + a^2).

4. To work out L(t^n) = ∫_0^∞ e^{−st} t^n dt, integrate by parts to get L(t^n) =
(n/s)L(t^{n−1}). Repeating, we get

L(t^n) = (n/s)L(t^{n−1}) = (n(n − 1)/s^2)L(t^{n−2}) = ∙ ∙ ∙ = (n(n − 1) . . . 2·1/s^n)L(1) = n!/s^{n+1} .

Note The definition of the Laplace transform easily implies that

L(f + g) = L(f ) + L(g) and L(cf ) = cL(f ) (c a constant).

For example, L(2t^3 − 3 sin t) = 2L(t^3) − 3L(sin t) = 12/s^4 − 3/(s^2 + 1).

Use of Laplace transforms

First we make the crucial observation that if y is a function of t then we
can express L(dy/dt) in terms of L(y) as follows:

L(dy/dt) = ∫_0^∞ e^{−st} (dy/dt) dt,

which by parts is equal to

[e^{−st} y]_0^∞ + s ∫_0^∞ e^{−st} y dt,

so we get the formula

L(dy/dt) = −y(0) + sL(y). (2)
dt
Another piece of notation: if L(f (t)) = F (s) we say that f is the inverse
Laplace transform of F , and write f (t) = L^{−1}(F (s)).
Now here’s an example illustrating how we can use transforms to solve
a differential eqn. More complicated examples will appear later when we’ve
done a bit more theory.

Example Solve the differential eqn

dy/dt + 2y = cos t, with y(0) = 1.

Ans Taking Laplace transforms of both sides gives L(dy/dt) + 2L(y) = L(cos t),
so using (2) we get

−1 + sL(y) + 2L(y) = s/(s^2 + 1).

This gives

L(y) = s/((s + 2)(s^2 + 1)) + 1/(s + 2).

So y is the inverse Laplace transform of the RHS, or

y = L^{−1}(s/((s + 2)(s^2 + 1)) + 1/(s + 2)).

To work this out, express the RHS in partial fractions A/(s + 2) + (Bs + C)/(s^2 + 1).
We find that A = 3/5, B = 2/5, C = 1/5. So

y = L^{−1}(3/(5(s + 2))) + L^{−1}((2s + 1)/(5(s^2 + 1))).

Hence the solution is

y = (3/5)e^{−2t} + (2/5) cos t + (1/5) sin t.

More Laplace Transforms


We can get lots more from

Shift Rule I If L(f (t)) = F (s), then L(e^{−at} f (t)) = F (s + a).


Proof L(e^{−at} f (t)) = ∫_0^∞ e^{−st} e^{−at} f (t) dt = ∫_0^∞ e^{−(s+a)t} f (t) dt = F (s + a).

Hence for example we get

L(e^{−at} t^n) = n!/(s + a)^{n+1} ,

L(e^{−at} sin bt) = b/((s + a)^2 + b^2).

As another example, suppose we want to work out

f (t) = L^{−1}((2s + 3)/(s^2 + 2s + 5)).

Well,

(2s + 3)/(s^2 + 2s + 5) = (2s + 3)/((s + 1)^2 + 4) = 2(s + 1)/((s + 1)^2 + 4) + 1/((s + 1)^2 + 4).

Hence

f (t) = 2e^{−t} cos 2t + (1/2)e^{−t} sin 2t.
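The inverse transform just computed can be cross-checked by transforming f(t) back numerically, reusing the same crude quadrature idea as earlier (T and n are ad hoc; illustrative only):

```python
import math

def laplace(f, s, T=40.0, n=100000):
    # trapezium-rule approximation to the Laplace integral, truncated at T
    h = T / n
    total = 0.5 * (f(0.0) + math.exp(-s*T) * f(T))
    for i in range(1, n):
        t = i * h
        total += math.exp(-s*t) * f(t)
    return total * h

def f(t):
    # claimed inverse transform of (2s + 3)/(s^2 + 2s + 5)
    return 2*math.exp(-t)*math.cos(2*t) + 0.5*math.exp(-t)*math.sin(2*t)

s = 1.5   # arbitrary test value with s > 0
print(laplace(f, s), (2*s + 3)/(s*s + 2*s + 5))
```
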
2
There is another shift rule, based on the Heaviside step function Ha (t),
defined as follows, where a > 0 is a constant:

Ha (t) = 0 if t < a, and Ha (t) = 1 if t ≥ a.

First observe that

L(Ha (t)) = ∫_a^∞ e^{−st} dt = e^{−as}/s.
More generally, we have

Shift Rule II If L(f (t)) = F (s), then

L(Ha (t)f (t − a)) = e^{−as} F (s).

Proof The LHS = ∫_a^∞ e^{−st} f (t − a) dt. Put u = t − a. Then

LHS = ∫_0^∞ e^{−s(u+a)} f (u) du = e^{−as} ∫_0^∞ e^{−su} f (u) du = e^{−as} F (s).

Examples 1. L(Ha (t) sin(t − a)) = e^{−as} L(sin t) = e^{−as}/(s^2 + 1).

2. What is L^{−1}(e^{−2s}/s^3)? Well, we know L(t^2) = 2/s^3 , so by the shift rule,

L^{−1}(e^{−2s}/s^3) = (1/2)H2 (t)(t − 2)^2 .

Higher order differential equations

Recall (2): L(dy/dt) = −y(0) + sL(y). Now we find L(d^2 y/dt^2). Well,

L(d^2 y/dt^2) = ∫_0^∞ e^{−st} (d^2 y/dt^2) dt = [e^{−st} dy/dt]_0^∞ + s ∫_0^∞ e^{−st} (dy/dt) dt = −y'(0) + sL(dy/dt).

Substituting for L(dy/dt) gives

L(d^2 y/dt^2) = −y'(0) − sy(0) + s^2 L(y). (3)

Example Use Laplace transforms to solve

d^2 y/dt^2 − 2 dy/dt + y = e^t

with y(0) = 0, y'(0) = 1.
Answer Take Laplace trans of both sides:

−y'(0) − sy(0) + s^2 L(y) − 2(−y(0) + sL(y)) + L(y) = L(e^t) = 1/(s − 1).

So −1 + (s^2 − 2s + 1)L(y) = 1/(s − 1), which gives (s − 1)^2 L(y) = 1/(s − 1) + 1, so

L(y) = 1/(s − 1)^2 + 1/(s − 1)^3 .

So

y = L^{−1}(1/(s − 1)^2) + L^{−1}(1/(s − 1)^3) = te^t + t^2 e^t/2.
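Substituting y = te^t + t^2 e^t/2 back into the equation is a useful check; numerically, with finite differences at arbitrary test points (not part of the notes):

```python
import math

def y(t):
    # the solution obtained by Laplace transforms
    return t*math.exp(t) + 0.5*t*t*math.exp(t)

h = 1e-4
for t in [0.3, 1.0]:
    yp  = (y(t + h) - y(t - h)) / (2*h)            # y'
    ypp = (y(t + h) - 2*y(t) + y(t - h)) / (h*h)   # y''
    print(ypp - 2*yp + y(t), math.exp(t))          # should agree

print(y(0.0))                      # y(0) = 0
print((y(h) - y(-h)) / (2*h))      # y'(0), should be close to 1
```
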

Simultaneous differential equations

This is where Laplace transforms come into their own.

Example Find functions y, z of t satisfying the simultaneous differential eqns

d^2 y/dt^2 + 2y − z = 0
d^2 z/dt^2 + 2z − y = 0

with z(0) = 2, y(0) = z'(0) = y'(0) = 0.

Ans Taking Laplace trans and using (3), we get

(s^2 + 2)L(y) − L(z) = 0

(s^2 + 2)L(z) − L(y) = 2s.

Eliminate L(y) by taking the first eqn plus (s^2 + 2) times the second, to get

L(z) = 2s(s^2 + 2)/((s^2 + 3)(s^2 + 1)).

By partial fracs the RHS is s/(s^2 + 1) + s/(s^2 + 3). So

z = L^{−1}(s/(s^2 + 1)) + L^{−1}(s/(s^2 + 3)) = cos t + cos(√3 t).

Similarly

y = cos t − cos(√3 t).
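Again the answers can be checked by substitution; a numerical Python check of both equations and the initial values (illustrative only):

```python
import math

r3 = math.sqrt(3.0)

def y(t): return math.cos(t) - math.cos(r3*t)
def z(t): return math.cos(t) + math.cos(r3*t)

h = 1e-4
def d2(f, t):
    # second derivative by a central second difference
    return (f(t + h) - 2*f(t) + f(t - h)) / (h*h)

for t in [0.4, 1.1]:
    print(d2(y, t) + 2*y(t) - z(t))   # first equation, close to 0
    print(d2(z, t) + 2*z(t) - y(t))   # second equation, close to 0

print(z(0.0), y(0.0))   # initial values: 2 and 0
```
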

Integration and Laplace transforms

Rule If g(t) = ∫_0^t f (u) du, then L(g) = (1/s)L(f ).
Proof By the Fundamental Theorem of Calculus (from last term – this
says integration is the reverse of differentiation), g'(t) = f (t). Hence

L(f (t)) = L(g'(t)) = −g(0) + sL(g).

As g(0) = ∫_0^0 f (u) du = 0, L(f ) = sL(g).

Example Solve the differential/integral eqn

dy/dt + 2y + ∫_0^t y(u) du = 0,

with y(0) = 1.
Ans Take Laplace trans:

−1 + sL(y) + 2L(y) + (1/s)L(y) = 0.

This gives L(y) = s/(s + 1)^2 , which by partial fracs is 1/(s + 1) − 1/(s + 1)^2 . Hence

y = L^{−1}(1/(s + 1)) − L^{−1}(1/(s + 1)^2) = e^{−t} − te^{−t} .

3 Fourier Series
A function f (x) is periodic with period P if f (x + P ) = f (x) for all x. Such
a function is determined completely once we know its values on any interval
of length P . Fourier series are defined for periodic functions of period 2π.
Such a function is determined by its values for −π ≤ x < π.

Definition Let f (x) be a function defined for −π ≤ x < π. The Fourier
series of f (x) is defined to be the series

a0/2 + Σ_{n=1}^∞ an cos nx + Σ_{n=1}^∞ bn sin nx,

where
a0 = (1/π) ∫_{−π}^{π} f (x) dx,
an = (1/π) ∫_{−π}^{π} f (x) cos nx dx,
bn = (1/π) ∫_{−π}^{π} f (x) sin nx dx.
The numbers a0 , an , bn are called the Fourier coefficients. In the lectures I
explained why they are defined as above.
A basic question is: under what conditions is a function equal to its
Fourier series? A very good answer is provided by

Dirichlet’s Theorem Let f (x) be defined for −π ≤ x < π. Suppose


f (x) is continuous except at a finite number of points, and also f (x) has a
left-hand and a right-hand derivative at every point. Then
(1) at points x where f is continuous, f (x) is equal to its Fourier series
(2) at points x = a where f is not continuous, the Fourier series is equal
to (1/2)(f (a+) + f (a−)), where f (a+) is the limit of f (x) as x tends to a from
the right, and f (a−) is the limit of f (x) as x tends to a from the left.

In the lectures I explained these conditions in more detail, with examples.

Example of calculation of Fourier series Let

f (x) = x^2 , −π ≤ x < π.

We’ll find the Fourier series of f (x). First, f (x) is an even function, so
f (x) sin nx is odd, and so

bn = (1/π) ∫_{−π}^{π} f (x) sin nx dx = 0.

Next, integrating by parts,

an = (1/π) ∫_{−π}^{π} x^2 cos nx dx
   = (1/nπ)[x^2 sin nx]_{−π}^{π} − (1/nπ) ∫_{−π}^{π} 2x sin nx dx
   = 0 + (2/n^2 π)[x cos nx]_{−π}^{π} − (2/n^2 π) ∫_{−π}^{π} cos nx dx
   = (4 cos nπ)/n^2
   = 4(−1)^n/n^2 .

Also

a0 = (1/π) ∫_{−π}^{π} x^2 dx = 2π^2/3.

Hence the Fourier series is

π^2/3 + 4 Σ_{n=1}^∞ (−1)^n cos nx/n^2 .

Since f (x) is continuous for all x, by Dirichlet’s Theorem f (x) is equal to
its Fourier series for all x. For example, putting x = π we get

π^2 = π^2/3 + 4 Σ_{n=1}^∞ (−1)^n cos nπ/n^2 = π^2/3 + 4 Σ_{n=1}^∞ 1/n^2 ,

and hence

Σ_{n=1}^∞ 1/n^2 = π^2/6.

Or putting x = 0 we get

0 = π^2/3 + 4 Σ_{n=1}^∞ (−1)^n/n^2 ,

so

Σ_{n=1}^∞ (−1)^{n+1}/n^2 = π^2/12.

Even and odd functions


Let f (x) be a function defined for −π ≤ x < π. Recall that f is an
even function if f (−x) = f (x) for all x. Such a function is determined by
its values for 0 ≤ x < π. If f is even then f (x) sin nx is odd so the Fourier
coefficient bn = 0.
Likewise, f is an odd function if f (−x) = −f (x) for all x. If f is odd
then the Fourier coefficient an = 0.

Summary (1) An even function f (x) (0 ≤ x < π) has a Fourier cosine series
a0/2 + Σ_{n=1}^∞ an cos nx, where

a0 = (2/π) ∫_0^π f (x) dx, an = (2/π) ∫_0^π f (x) cos nx dx.

(2) An odd function f (x) (0 ≤ x < π) has a Fourier sine series Σ_{n=1}^∞ bn sin nx,
where

bn = (2/π) ∫_0^π f (x) sin nx dx.

Example Find the Fourier cosine series for the function f (x) = sin x (0 ≤
x < π). (This means the Fourier series of the even function taking these
values.)
Answer The Fourier coefficients are

a0 = (2/π) ∫_0^π sin x dx = 4/π,

an = (2/π) ∫_0^π sin x cos nx dx
   = (2/π) ∫_0^π (1/2)(sin(n + 1)x − sin(n − 1)x) dx,

which works out as 0 if n is odd, and −4/(π(n^2 − 1)) if n is even. Hence the
Fourier cosine series is

2/π − (4/π) Σ_{n=1}^∞ cos 2nx/(4n^2 − 1).

Parseval’s Formula Let f (x) (−π ≤ x < π) be a function and suppose it
is continuous and differentiable, so by Dirichlet is equal to its Fourier series:

f (x) = a0/2 + Σ_{n=1}^∞ an cos nx + Σ_{n=1}^∞ bn sin nx.

Multiplying both sides by f (x) and integrating we get

∫_{−π}^{π} f (x)^2 dx = ∫_{−π}^{π} (a0/2)f (x) dx + Σ an ∫_{−π}^{π} f (x) cos nx dx + Σ bn ∫_{−π}^{π} f (x) sin nx dx
                     = (a0/2)·πa0 + Σ an·πan + Σ bn·πbn

(since ∫_{−π}^{π} f (x) cos nx dx = πan , ∫_{−π}^{π} f (x) sin nx dx = πbn from the definition
of Fourier coeffs an , bn ). Hence we get

Parseval’s Formula If f (x) (−π ≤ x < π) is equal to its Fourier series, then

(1/π) ∫_{−π}^{π} f (x)^2 dx = a0^2/2 + Σ_{n=1}^∞ an^2 + Σ_{n=1}^∞ bn^2 .

Example In our first example of Fourier series for f (x) = x^2 (−π ≤ x < π)
we found

x^2 = π^2/3 + 4 Σ_{n=1}^∞ (−1)^n cos nx/n^2 .

So a0 = 2π^2/3 and an = 4(−1)^n/n^2 , and by Parseval’s formula we get

(1/π) ∫_{−π}^{π} x^4 dx = 2π^4/9 + Σ_{n=1}^∞ 16/n^4 .

The left hand side is 2π^4/5, and hence we find

Σ_{n=1}^∞ 1/n^4 = π^4/90.

Changing the period Sometimes one wants to get a Fourier series for a
periodic function of period different to 2π. Here’s how.
Let f (x) be periodic of period 2r, say. So f is completely determined by
its values for −r ≤ x < r. If we define another function F (x) by

F (x) = f (rx/π),

then F (x + 2π) = f (rx/π + 2r) = f (rx/π) = F (x), so F has period 2π. So F
has a Fourier series a0/2 + Σ an cos nx + Σ bn sin nx, where

an = (1/π) ∫_{−π}^{π} F (x) cos nx dx
   = (1/π) ∫_{−π}^{π} f (rx/π) cos nx dx
   = (1/r) ∫_{−r}^{r} f (y) cos(nπy/r) dy (substituting y = rx/π)

and similarly

bn = (1/r) ∫_{−r}^{r} f (y) sin(nπy/r) dy

and a0 = (1/r) ∫_{−r}^{r} f (y) dy.

Summary If f (x) is defined for −r ≤ x < r then the Fourier series of f is

a0/2 + Σ_{n=1}^∞ an cos(nπx/r) + Σ_{n=1}^∞ bn sin(nπx/r),

where
a0 = (1/r) ∫_{−r}^{r} f (x) dx,
an = (1/r) ∫_{−r}^{r} f (x) cos(nπx/r) dx,
bn = (1/r) ∫_{−r}^{r} f (x) sin(nπx/r) dx.
