1 Partial Differentiation
1.1 Definitions
Let f (x, y) be a function of two variables. The partial derivative ∂f /∂x is
the function obtained by differentiating f with respect to x, regarding y as
a constant. Similarly, ∂f /∂y is obtained by differentiating f with respect to
y, regarding x as a constant.
We often use the alternative notation
fx = ∂f /∂x, fy = ∂f /∂y.
As a limit,
∂f /∂x = lim_{h→0} [f (x + h, y) − f (x, y)]/h.
Also, the equation z = f (x, y) defines a surface in 3-dimensional space with
x, y, z-axes, and ∂f /∂x is the gradient of the tangent at a point in the x-
direction.
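The limit definition can be checked numerically: a difference quotient with small h should be close to the partial derivative. A minimal Python sketch, using a hypothetical test function f(x, y) = x²y + y³ (my choice, not from the notes), whose partial derivatives are fx = 2xy and fy = x² + 3y²:

```python
# Hypothetical test function (not from the notes):
# f(x, y) = x^2*y + y^3, with fx = 2xy and fy = x^2 + 3y^2.
def f(x, y):
    return x**2 * y + y**3

def partial_x(f, x, y, h=1e-6):
    # Central-difference approximation to ∂f/∂x (y held constant).
    return (f(x + h, y) - f(x - h, y)) / (2 * h)

def partial_y(f, x, y, h=1e-6):
    # Central-difference approximation to ∂f/∂y (x held constant).
    return (f(x, y + h) - f(x, y - h)) / (2 * h)

print(partial_x(f, 1.5, 2.0))  # ≈ 2*1.5*2.0 = 6.0
print(partial_y(f, 1.5, 2.0))  # ≈ 1.5**2 + 3*2.0**2 = 14.25
```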
The second partial derivatives are
fxx = ∂²f /∂x², fyy = ∂²f /∂y², fxy = ∂²f /∂x∂y, fyx = ∂²f /∂y∂x,
and, for example, for f (x, y) = tan⁻¹(y/x) one can check that
fxy = fyx = (y² − x²)/(x² + y²)².
More than 2 variables For a function f (x, y, z, ...) of three or more variables,
we define the partial derivative fx = ∂f /∂x to be the function obtained by
differentiating f with respect to x, regarding y, z, ... as constants. Similarly
for ∂f /∂y and ∂f /∂z and so on.
Now suppose x changes by a small amount δx, and y by δy. The resulting change in f is
δf = f (x + δx, y + δy) − f (x, y) = [f (x + δx, y + δy) − f (x, y + δy)] + [f (x, y + δy) − f (x, y)].
From the definitions of ∂f /∂x, ∂f /∂y as limits, the first of these brackets is
roughly equal to fx·δx and the second to fy·δy. So
δf ≈ fx·δx + fy·δy.
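The approximation δf ≈ fx·δx + fy·δy can be illustrated numerically, again with a hypothetical polynomial test function (my choice, not from the notes):

```python
# Hypothetical test function: f(x, y) = x^2*y + y^3,
# so fx = 2xy and fy = x^2 + 3y^2.
def f(x, y):
    return x**2 * y + y**3

x, y = 1.0, 2.0
dx, dy = 1e-3, -2e-3
exact = f(x + dx, y + dy) - f(x, y)          # the true change δf
approx = 2*x*y * dx + (x**2 + 3*y**2) * dy   # fx·δx + fy·δy
print(exact, approx)  # agree up to terms of order δx², δy²
```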
1.3 Chain Rules
Recall the chain rule for differentiating functions of one variable: if f = f (u)
and u = u(t) then df /dt = (df /du)(du/dt). Now let f = f (x, y) where x, y are
both functions of one variable t – say x = x(t), y = y(t). Then f becomes a
function of t alone, and
Chain Rule I df /dt = fx (dx/dt) + fy (dy/dt).
In particular, if y = y(x) is defined implicitly by an equation
f (x, y) = 0,
then taking t = x above gives fx + fy (dy/dx) = 0, so dy/dx = −fx /fy wherever fy ≠ 0.
General chain rule Now suppose f = f (x, y) where x = x(s, t), y = y(s, t).
Then f is also a function of s, t and we’d like a formula for ∂f /∂s, ∂f /∂t.
Well, if we regard t as a constant then x = x(s), y = y(s), so we can apply
Chain Rule I to get
∂f /∂s = fx (∂x/∂s) + fy (∂y/∂s),   ∂f /∂t = fx (∂x/∂t) + fy (∂y/∂t).
Example (polar coordinates) Take x = r cos θ, y = r sin θ, so that f (x, y) becomes a function of r, θ.
Step 1 First work out fx , fy in terms of r, θ using the Chain Rule. Well,
Hence we get
fxx = cos2 θfrr −((2 sin θ cos θ)/r)fθr +((sin2 θ)/r)fr +((2 sin θ cos θ)/r2 )fθ +((sin2 θ)/r2 )fθθ .
Similarly we get
fyy = sin2 θfrr +((2 sin θ cos θ)/r)fθr +((cos2 θ)/r)fr −((2 sin θ cos θ)/r2 )fθ +((cos2 θ)/r2 )fθθ .
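Adding these two formulas, the cross terms cancel and we get the polar form of the Laplacian, fxx + fyy = frr + (1/r)fr + (1/r²)fθθ. A numerical sanity check of that identity, using finite differences and a hypothetical test function (my choice):

```python
import math

# Hypothetical test function: f(x, y) = x^3*y + sin(y).
def f(x, y):
    return x**3 * y + math.sin(y)

def g(r, th):  # the same function written in polar coordinates
    return f(r * math.cos(th), r * math.sin(th))

def d1(fun, a, b, h=1e-6):
    # First partial derivative in the first argument.
    return (fun(a + h, b) - fun(a - h, b)) / (2 * h)

def d2(fun, a, b, which, h=1e-4):
    # Second partial derivative in argument 'which' (0 or 1).
    if which == 0:
        return (fun(a + h, b) - 2 * fun(a, b) + fun(a - h, b)) / h**2
    return (fun(a, b + h) - 2 * fun(a, b) + fun(a, b - h)) / h**2

r, th = 1.3, 0.7
x, y = r * math.cos(th), r * math.sin(th)
cart = d2(f, x, y, 0) + d2(f, x, y, 1)                         # fxx + fyy
polar = d2(g, r, th, 0) + d1(g, r, th) / r + d2(g, r, th, 1) / r**2
print(cart, polar)  # the two values agree to finite-difference accuracy
```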
F (t) = F (0) + F′(0)t + (F′′(0)/2!)t² + ∙∙∙ (1)
Let’s work out F′(0) and F′′(0) in terms of f and its partial derivatives. By
Chain Rule I, F′(t) = fx (dx/dt) + fy (dy/dt) where x = a + th, y = b + tk, and hence
F′(t) = hfx + kfy
(evaluated at (a + th, b + tk)). Applying the Chain Rule again,
F′′(t) = h(d/dt)(fx (a + th, b + tk)) + k(d/dt)(fy (a + th, b + tk))
= h(hfxx + kfxy ) + k(hfyx + kfyy )
= h²fxx + 2hkfxy + k²fyy .
Hence
F (0) = f (a, b),
F′(0) = hfx (a, b) + kfy (a, b),
F′′(0) = h²fxx (a, b) + 2hkfxy (a, b) + k²fyy (a, b).
Substituting into (1), we get
f (a + h, b + k) = f (a, b) + hfx (a, b) + kfy (a, b) + (1/2)(h²fxx (a, b) + 2hkfxy (a, b) + k²fyy (a, b))
+ terms of degree 3 or more in h, k.
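The expansion can be sanity-checked numerically with a hypothetical smooth function (my choice, not from the notes), comparing f(a + h, b + k) against the degree-2 Taylor polynomial:

```python
import math

# Hypothetical test function: f(x, y) = exp(x) * cos(y).
def f(x, y):
    return math.exp(x) * math.cos(y)

a, b = 0.5, 0.3
h, k = 1e-2, -2e-2
# Exact partial derivatives of exp(x)cos(y) at (a, b):
fx = math.exp(a) * math.cos(b)
fy = -math.exp(a) * math.sin(b)
fxx, fxy, fyy = fx, fy, -math.exp(a) * math.cos(b)
taylor2 = (f(a, b) + h * fx + k * fy
           + 0.5 * (h**2 * fxx + 2 * h * k * fxy + k**2 * fyy))
print(f(a + h, b + k) - taylor2)  # of order h³, k³ — tiny here
```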
Finding maxima and minima Suppose f has a maximum at (a, b). Then if
we fix y = b and vary x, we certainly have f (a + h, b) < f (a, b) for all small
h, which says that the function f (x, b) of 1 variable (x) has a maximum at
x = a. Hence ∂f /∂x = 0 at (a, b), and likewise ∂f /∂y = 0 at (a, b). This is
also true at a minimum. Hence maxima and minima are stationary points
of f , in the following sense:
Example Let f (x, y) = x²/2 − x + xy². Then fx = x − 1 + y², fy = 2xy. At
a stationary point, 2xy = 0 so x = 0 or y = 0. If x = 0 then −1 + y² = 0 so
y = ±1. If y = 0 then x − 1 = 0 so x = 1. So f has 3 stationary points:
(0, 1), (0, −1) and (1, 0).
Summary Let (a, b) be a stationary point of f (x, y), and let A = fxx (a, b),
B = fxy (a, b), C = fyy (a, b). The nature of the stationary point is as follows:
A      AC − B²    nature
> 0    > 0        minimum
< 0    > 0        maximum
any    < 0        saddle
So at (0, 1), A = 1, B = 2, C = 0, so AC − B 2 < 0 and this is a saddle. At
(0, −1), A = 1, B = −2, C = 0, so AC − B 2 < 0 and this is also a saddle.
At (1, 0), A = 1, B = 0, C = 2, so AC − B 2 > 0 and this is a minimum.
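The table can be applied mechanically. A small sketch classifying the three stationary points of the example f(x, y) = x²/2 − x + xy², whose second derivatives are fxx = 1, fxy = 2y, fyy = 2x:

```python
# Second derivatives of the worked example f(x, y) = x²/2 − x + xy².
def second_derivs(x, y):
    A = 1.0       # fxx
    B = 2.0 * y   # fxy
    C = 2.0 * x   # fyy
    return A, B, C

def classify(A, B, C):
    disc = A * C - B**2
    if disc < 0:
        return "saddle"
    if disc > 0:
        return "minimum" if A > 0 else "maximum"
    return "inconclusive"  # the AC − B² test gives no answer when disc == 0

for pt in [(0.0, 1.0), (0.0, -1.0), (1.0, 0.0)]:
    print(pt, classify(*second_derivs(*pt)))
# (0, 1) and (0, −1) come out as saddles, (1, 0) as a minimum.
```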
Now let’s think of this the other way round. Suppose we have a differential eqn
P (x, y) + Q(x, y) dy/dx = 0.
If we can find a function u(x, y) such that ux = P, uy = Q, then we’ll be
able to solve the eqn – the solution will be u(x, y) = c. Notice that if such
a function u exists, then uxy = Py and uyx = Qx , so as uxy = uyx , we need
to make sure that Py = Qx . This leads to
Definition The differential equation P + Q dy/dx = 0 is said to be exact if
Py = Qx .
It turns out that for an exact eqn we can always find a function u such
that ux = P, uy = Q, so we can always solve exact eqns.
Example Solve
2xy + cos x cos y + (x² − sin x sin y) dy/dx = 0.
Ans Here P = 2xy + cos x cos y and Q = x² − sin x sin y. Then Py = 2x − cos x sin y = Qx , so the eqn is exact. Integrating ux = P with respect to x gives u = x²y + sin x cos y + f (y); then uy = x² − sin x sin y + f′(y) = Q forces f′(y) = 0, so we can take f (y) = 0. Hence the solution is
x²y + sin x cos y = c.
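A numerical sketch checking this example: Py = Qx (exactness), and u = x²y + sin x cos y really satisfies ux = P, uy = Q:

```python
import math

# The exactness example from the text: P = 2xy + cos x cos y,
# Q = x² − sin x sin y, with claimed potential u = x²y + sin x cos y.
def P(x, y): return 2 * x * y + math.cos(x) * math.cos(y)
def Q(x, y): return x**2 - math.sin(x) * math.sin(y)
def u(x, y): return x**2 * y + math.sin(x) * math.cos(y)

def dpart(fun, x, y, which, h=1e-6):
    # Central-difference partial derivative in x (which=0) or y (which=1).
    if which == 0:
        return (fun(x + h, y) - fun(x - h, y)) / (2 * h)
    return (fun(x, y + h) - fun(x, y - h)) / (2 * h)

x, y = 0.8, -0.4
print(dpart(P, x, y, 1) - dpart(Q, x, y, 0))  # Py − Qx ≈ 0, so exact
print(dpart(u, x, y, 0) - P(x, y))            # ux − P ≈ 0
print(dpart(u, x, y, 1) - Q(x, y))            # uy − Q ≈ 0
```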
The only cases where this method is feasible are when we can find an
integrating factor of the form λ = λ(x) (a function of x only) or λ = λ(y).
When λ = λ(x), the exactness condition is that (λP )y = (λQ)x , which is
λPy = λ′Q + λQx . This boils down to the following:
(1/λ) dλ/dx = (Py − Qx )/Q.
If the RHS happens to be a function of x only, then we have a chance of
solving this and finding λ.
Similarly, to find an integrating factor of the form λ = λ(y) we need to
solve
(1/λ) dλ/dy = −(Py − Qx )/P.
Example Solve
xy − 1 + (x² − xy) dy/dx = 0.
Ans Let P = xy − 1, Q = x2 − xy. Then Py = x, Qx = 2x − y so eqn is not
exact. But note that (Py − Qx )/Q = −1/x, so to find an integrating factor
λ = λ(x) we need to solve (1/λ) dλ/dx = −1/x. A solution is λ = 1/x. Multiplying
the original eqn by this we get the eqn
y − 1/x + (x − y) dy/dx = 0.
This is exact. To solve it we find u = u(x, y) satisfying
ux = y − 1/x, uy = x − y.
Integrating the first one gives u = xy − log x + f (y), hence uy = x + f′(y).
So we need to choose f (y) with f′(y) = −y, so take f (y) = −y²/2. Hence
the solution is
xy − log x − y²/2 = c.
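As a sanity check, implicit differentiation of u(x, y) = c gives the slope y′ = −ux/uy along a solution curve, and this slope should satisfy the original equation at any point, for instance (2, 1):

```python
# Implicit solution u(x, y) = xy − log x − y²/2 = c gives
# y' = −ux/uy; this slope should satisfy xy − 1 + (x² − xy) y' = 0.
x, y = 2.0, 1.0
ux = y - 1 / x        # ∂u/∂x
uy = x - y            # ∂u/∂y
slope = -ux / uy
resid = x * y - 1 + (x**2 - x * y) * slope
print(resid)  # ≈ 0
```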
2 Laplace Transforms
If f (t) is a function of one variable, we define its Laplace transform to be
the function F (s) defined by
F (s) = ∫₀^∞ e^{−st} f (t) dt.
Often we also write L(f (t)) or just L(f ) for the Laplace transform of f .
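The improper integral can be approximated numerically to check a transform. A sketch (truncating the integral at a large T; the truncation point and step count are my choices) against the standard transform L(sin t) = 1/(s² + 1):

```python
import math

# Truncated numerical version of F(s) = ∫₀^∞ e^{−st} f(t) dt
# (midpoint rule on [0, T]; the tail beyond T is negligible for s > 0).
def laplace(f, s, T=60.0, n=200_000):
    h = T / n
    return sum(math.exp(-s * (i + 0.5) * h) * f((i + 0.5) * h)
               for i in range(n)) * h

s = 2.0
print(laplace(math.sin, s), 1 / (s**2 + 1))  # both ≈ 0.2
```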
First we make the crucial observation that if y is a function of t then we
can express L(dy/dt) in terms of L(y) as follows:
L(dy/dt) = ∫₀^∞ e^{−st} (dy/dt) dt,
which by parts is equal to
[e^{−st} y]₀^∞ + s ∫₀^∞ e^{−st} y dt = −y(0) + sL(y),
assuming that e^{−st} y → 0 as t → ∞.
Hence the solution is
y = (3/5)e^{−2t} + (2/5) cos t + (1/5) sin t.
L(e^{−at} sin bt) = b/((s + a)² + b²).
As another example, suppose we want to work out
f (t) = L⁻¹((2s + 3)/(s² + 2s + 5)).
Well,
(2s + 3)/(s² + 2s + 5) = (2s + 3)/((s + 1)² + 4) = 2(s + 1)/((s + 1)² + 4) + 1/((s + 1)² + 4).
Hence
f (t) = 2e^{−t} cos 2t + (1/2)e^{−t} sin 2t.
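The same kind of numerical transform can confirm this inverse-transform example:

```python
import math

# The claimed f(t) = 2 e^{−t} cos 2t + (1/2) e^{−t} sin 2t should
# have Laplace transform (2s + 3)/(s² + 2s + 5).
def f(t):
    return (2 * math.exp(-t) * math.cos(2 * t)
            + 0.5 * math.exp(-t) * math.sin(2 * t))

def laplace(f, s, T=60.0, n=200_000):
    # Midpoint-rule approximation to ∫₀^∞ e^{−st} f(t) dt.
    h = T / n
    return sum(math.exp(-s * (i + 0.5) * h) * f((i + 0.5) * h)
               for i in range(n)) * h

s = 1.5
print(laplace(f, s), (2 * s + 3) / (s**2 + 2 * s + 5))  # agree closely
```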
There is another shift rule, based on the Heaviside step function Ha (t),
defined as follows, where a > 0 is a constant:
Ha (t) = 0 if t < a,   Ha (t) = 1 if t ≥ a.
Shift Rule II If L(f (t)) = F (s), then L(Ha (t)f (t − a)) = e^{−as} F (s).
Proof The LHS = ∫ₐ^∞ e^{−st} f (t − a) dt. Put u = t − a. Then
LHS = ∫₀^∞ e^{−s(u+a)} f (u) du = e^{−as} ∫₀^∞ e^{−su} f (u) du = e^{−as} F (s).
Examples 1. L(Ha (t) sin(t − a)) = e^{−as} L(sin t) = e^{−as}/(s² + 1).
2. What is L⁻¹(e^{−2s}/s³)? Well, we know L(t²) = 2/s³, so by the shift rule,
L⁻¹(e^{−2s}/s³) = (1/2)H₂(t)(t − 2)².
Substituting for L(dy/dt) gives
L(d²y/dt²) = −y′(0) − sy(0) + s²L(y). (3)
Example Solve
d²y/dt² − 2 dy/dt + y = e^t
with y(0) = 0, y′(0) = 1.
Answer Take Laplace trans of both sides:
−y′(0) − sy(0) + s²L(y) − 2(−y(0) + sL(y)) + L(y) = L(e^t) = 1/(s − 1).
So −1 + (s² − 2s + 1)L(y) = 1/(s − 1), which gives (s − 1)²L(y) = 1/(s − 1) + 1, so
L(y) = 1/(s − 1)² + 1/(s − 1)³.
So
y = L⁻¹(1/(s − 1)²) + L⁻¹(1/(s − 1)³) = te^t + t²e^t/2.
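It is worth checking directly that this y satisfies the differential equation and the initial conditions; a finite-difference sketch:

```python
import math

# Verify y = t e^t + t² e^t / 2 solves y'' − 2y' + y = e^t,
# y(0) = 0, y'(0) = 1, using finite differences for y' and y''.
def y(t):
    return t * math.exp(t) + t**2 * math.exp(t) / 2

def resid(t, h=1e-4):
    y1 = (y(t + h) - y(t - h)) / (2 * h)          # y'
    y2 = (y(t + h) - 2 * y(t) + y(t - h)) / h**2  # y''
    return y2 - 2 * y1 + y(t) - math.exp(t)

for t in (0.0, 0.5, 1.7):
    print(resid(t))  # ≈ 0 at each t
print(y(0.0))        # initial condition y(0) = 0
```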
Eliminate L(y) by taking the first eqn plus (s² + 2) times the second, to get
L(z) = 2s(s² + 2)/((s² + 3)(s² + 1)).
By partial fracs the RHS is s/(s² + 1) + s/(s² + 3). So
z = L⁻¹(s/(s² + 1)) + L⁻¹(s/(s² + 3)) = cos t + cos(√3 t).
Similarly
y = cos t − cos(√3 t).
Example Solve the differential/integral eqn
dy/dt + 2y + ∫₀^t y(u) du = 0,
with y(0) = 1.
Ans Take Laplace trans:
−1 + sL(y) + 2L(y) + (1/s)L(y) = 0.
This gives L(y) = s/(s + 1)², which by partial fracs is 1/(s + 1) − 1/(s + 1)². Hence
y = L⁻¹(1/(s + 1)) − L⁻¹(1/(s + 1)²) = e^{−t} − te^{−t}.
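A numerical verification of this solution of the differential/integral equation:

```python
import math

# Verify y = e^{−t} − t e^{−t} solves y' + 2y + ∫₀^t y(u) du = 0 with
# y(0) = 1: derivative by central difference, integral by midpoint rule.
def y(t):
    return math.exp(-t) - t * math.exp(-t)

def integral(t, n=20_000):
    h = t / n
    return sum(y((i + 0.5) * h) for i in range(n)) * h

def resid(t, h=1e-6):
    yprime = (y(t + h) - y(t - h)) / (2 * h)
    return yprime + 2 * y(t) + integral(t)

for t in (0.5, 1.0, 2.0):
    print(resid(t))  # ≈ 0 at each t
print(y(0.0))        # y(0) = 1
```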
3 Fourier Series
A function f (x) is periodic with period P if f (x + P ) = f (x) for all x. Such
a function is determined completely once we know its values on any interval
of length P . Fourier series are defined for periodic functions of period 2π.
Such a function is determined by its values for −π ≤ x < π.
Any such function has a Fourier series
a0/2 + Σ_{n=1}^∞ (an cos nx + bn sin nx),
where
a0 = (1/π) ∫_{−π}^π f (x) dx,
an = (1/π) ∫_{−π}^π f (x) cos nx dx,
bn = (1/π) ∫_{−π}^π f (x) sin nx dx.
The numbers a0 , an , bn are called the Fourier coefficients. In the lectures I
explained why they are defined as above.
A basic question is: under what conditions is a function equal to its
Fourier series? A convergence theorem answering this (for suitably well-behaved f ) was given in the lectures.
Example f (x) = x², −π ≤ x < π.
We’ll find the Fourier series of f (x). First, f (x) is an even function, so
f (x) sin nx is odd, and so
bn = (1/π) ∫_{−π}^π f (x) sin nx dx = 0.
Next, integrating by parts,
an = (1/π) ∫_{−π}^π x² cos nx dx
= (1/nπ)[x² sin nx]_{−π}^π − (2/nπ) ∫_{−π}^π x sin nx dx
= 0 + (2/n²π)[x cos nx]_{−π}^π − (2/n²π) ∫_{−π}^π cos nx dx
= (4 cos nπ)/n²
= 4(−1)^n/n².
Also
a0 = (1/π) ∫_{−π}^π x² dx = 2π²/3.
Hence the Fourier series is
π²/3 + 4 Σ_{n=1}^∞ (−1)^n cos nx/n².
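The coefficient formulas can be checked by numerical integration; a midpoint-rule sketch for this example:

```python
import math

# Midpoint-rule check of the Fourier coefficients of f(x) = x² on
# [−π, π): a0 should be 2π²/3 and an should be 4(−1)^n/n².
def coeff_a(n, m=100_000):
    h = 2 * math.pi / m
    total = 0.0
    for i in range(m):
        x = -math.pi + (i + 0.5) * h
        total += x**2 * math.cos(n * x)
    return total * h / math.pi

print(coeff_a(0), 2 * math.pi**2 / 3)
for n in (1, 2, 3):
    print(coeff_a(n), 4 * (-1)**n / n**2)
```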
Putting x = π (where f = π² and the series converges to f ) we get
π² = π²/3 + 4 Σ_{n=1}^∞ 1/n²,
and hence
Σ_{n=1}^∞ 1/n² = π²/6.
Or putting x = 0 we get
0 = π²/3 + 4 Σ_{n=1}^∞ (−1)^n/n²,
so
Σ_{n=1}^∞ (−1)^{n+1}/n² = π²/12.
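Both sums can be checked numerically by computing partial sums:

```python
import math

# Numerical check of the two series deduced from the x² Fourier series:
# Σ 1/n² = π²/6 and Σ (−1)^{n+1}/n² = π²/12.
N = 1_000_000
s_plus = sum(1 / n**2 for n in range(1, N + 1))
s_alt = sum((-1)**(n + 1) / n**2 for n in range(1, N + 1))
print(s_plus, math.pi**2 / 6)   # agree to about 1/N
print(s_alt, math.pi**2 / 12)
```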
Summary (1) An even function f (x) (0 ≤ x < π) has a Fourier cosine series
a0/2 + Σ_{n=1}^∞ an cos nx, where
a0 = (2/π) ∫₀^π f (x) dx,   an = (2/π) ∫₀^π f (x) cos nx dx.
(2) An odd function f (x) (0 ≤ x < π) has a Fourier sine series Σ_{n=1}^∞ bn sin nx,
where
bn = (2/π) ∫₀^π f (x) sin nx dx.
Example Find the Fourier cosine series for the function f (x) = sin x (0 ≤
x < π). (This means the Fourier series of the even function taking these
values.)
Answer The Fourier coefficients are
a0 = (2/π) ∫₀^π sin x dx = 4/π,
an = (2/π) ∫₀^π sin x cos nx dx
= (2/π) ∫₀^π (1/2)(sin(n + 1)x − sin(n − 1)x) dx,
which works out as 0 if n is odd, and −4/(π(n² − 1)) if n is even. Hence the Fourier
cosine series is
2/π − (4/π) Σ_{n=1}^∞ cos 2nx/(4n² − 1).
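A partial-sum check that the cosine series really approximates sin x on (0, π):

```python
import math

# Partial-sum check of the cosine series of sin x on (0, π):
# sin x ≈ 2/π − (4/π) Σ_{n=1}^{N} cos 2nx/(4n² − 1).
def cosine_series(x, N=5_000):
    s = sum(math.cos(2 * n * x) / (4 * n**2 - 1) for n in range(1, N + 1))
    return 2 / math.pi - (4 / math.pi) * s

for x in (0.3, 1.0, 2.5):
    print(cosine_series(x), math.sin(x))  # close at each x
```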
Parseval’s Formula If f (x) (−π ≤ x < π) is equal to its Fourier series, then
(1/π) ∫_{−π}^π f (x)² dx = a0²/2 + Σ_{n=1}^∞ an² + Σ_{n=1}^∞ bn².
Example In our first example of Fourier series for f (x) = x² (−π ≤ x < π)
we found
x² = π²/3 + 4 Σ_{n=1}^∞ (−1)^n cos nx/n².
So a0 = 2π²/3 and an = 4(−1)^n/n², and by Parseval’s formula we get
(1/π) ∫_{−π}^π x⁴ dx = 2π⁴/9 + Σ_{n=1}^∞ 16/n⁴.
The left hand side is 2π⁴/5, and hence we find
Σ_{n=1}^∞ 1/n⁴ = π⁴/90.
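A quick partial-sum check of this value:

```python
import math

# Numerical check of the Parseval consequence Σ 1/n⁴ = π⁴/90.
s4 = sum(1 / n**4 for n in range(1, 100_001))
print(s4, math.pi**4 / 90)
```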
Changing the period Sometimes one wants to get a Fourier series for a
periodic function of period different to 2π. Here’s how.
Let f (x) be periodic of period 2r, say. So f is completely determined by
its values for −r ≤ x < r. If we define another function F (x) by
F (x) = f (rx/π),
then F (x + 2π) = f (r(x + 2π)/π) = f (rx/π + 2r) = f (rx/π) = F (x), so F
has period 2π. So F has a Fourier series a0/2 + Σ_{n=1}^∞ (an cos nx + bn sin nx), where
an = (1/π) ∫_{−π}^π F (x) cos nx dx
= (1/π) ∫_{−π}^π f (rx/π) cos nx dx
= (1/r) ∫_{−r}^r f (y) cos(nπy/r) dy   (substituting y = rx/π)
and similarly
bn = (1/r) ∫_{−r}^r f (y) sin(nπy/r) dy,
and a0 = (1/r) ∫_{−r}^r f (y) dy.
Summary If f (x) is defined for −r ≤ x < r then the Fourier series of f is
a0/2 + Σ_{n=1}^∞ an cos(nπx/r) + Σ_{n=1}^∞ bn sin(nπx/r),
where
a0 = (1/r) ∫_{−r}^r f (x) dx,
an = (1/r) ∫_{−r}^r f (x) cos(nπx/r) dx,
bn = (1/r) ∫_{−r}^r f (x) sin(nπx/r) dx.
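To illustrate the period-2r formulas, here is a check on a hypothetical example of my choosing, the sawtooth f(x) = x on [−r, r) with r = 2; the closed-form coefficient bn = 2r(−1)^{n+1}/(nπ) is a standard computation, not from the notes:

```python
import math

# f(x) = x on [−r, r) with r = 2: compare the midpoint-rule integral
# bn = (1/r) ∫_{−r}^{r} x sin(nπx/r) dx with 2r(−1)^{n+1}/(nπ).
r = 2.0

def b(n, m=100_000):
    h = 2 * r / m
    total = 0.0
    for i in range(m):
        x = -r + (i + 0.5) * h
        total += x * math.sin(n * math.pi * x / r)
    return total * h / r

for n in (1, 2, 3):
    print(b(n), 2 * r * (-1)**(n + 1) / (n * math.pi))
```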