Bessel Functions - Epelde García
Mathematical Physics
Supervisor:
Judith Rivas Ulloa
Contents

Preface
1 Preliminaries
  1.1 The Gamma function
  1.2 Sturm-Liouville problems
  1.3 Resolution by separation of variables
    1.3.1 The Laplacian in polar coordinates
    1.3.2 The wave equation
    1.3.3 The heat equation
    1.3.4 The Dirichlet problem
Bibliography
Preface
The aim of this dissertation is to introduce Bessel functions to the reader and to study some of their properties. Moreover, the final goal of this document is to present the best-known applications of Bessel functions in physics.
In the first chapter, we present some of the concepts needed in the rest of the dissertation.
We give definitions and properties of the Gamma function, which will be used in the defini-
tion of Bessel functions. We also provide the reader with the basics of Sturm-Liouville problems,
which again are to be used in Chapter 3. Finally, we present three examples of partial differential
equations. These equations are well known in mathematical physics, and are solved in Chapter 4.
In Chapter 3, we prove some essential properties of Bessel functions. First, we establish the
basic properties such as recurrence relations of functions seen in Chapter 2. Bessel functions of
integer order can also be seen as the coefficients of a Laurent series. Moreover, these particular
functions are proved to have some integral expressions, known as Bessel’s integral formulas. Our
next goal will be approximating Bessel functions via asymptotics. Furthermore, we will make use
of these approximations to estimate the zeros of the functions. Finally, we will study orthogonal
sets of Bessel functions.
In the final chapter, we will use the concepts we have developed previously to solve the three
partial differential equations described in Chapter 1. These equations are in fact the heat and
wave equations, and the Dirichlet problem.
Besides the development of the theory of Bessel functions, some problems regarding that
theory are solved at the end of Chapters 2 and 3.
Chapter 1
Preliminaries
Finding solutions of differential equations has been a problem in pure mathematics since the
invention of calculus by Newton and Leibniz in the 17th century. Beyond pure mathematics, these equations are used in other disciplines such as engineering, biology, economics and physics. Bessel
functions are solutions of a particular differential equation, called Bessel’s equation.
In the late 17th century, the Italian mathematician Jacopo Riccati studied what we nowadays
know as Riccati's equations. Given three functions P, Q and R of z ∈ C (it is assumed that neither P nor R is identically zero), Riccati's equations are differential equations of the form
\[ \frac{dw}{dz} = P + Qw + Rw^2. \]
The theory of Bessel functions is connected with Riccati’s equations. In fact, Bessel functions
are defined as solutions of Bessel's equation, which can be derived from a Riccati equation. Riccati and Daniel Bernoulli discussed the particular Riccati equation
\[ \frac{dw}{dz} = az^n + bw^2, \]
and Bernoulli himself published a solution in 1724. Euler also studied this particular equation.
This equation can be reduced to Bessel's equation by elementary transformations, and is therefore solvable via Bessel functions.
In 1738, Bernoulli published a memoir containing theorems on the oscillations of heavy chains.
A function contained in one of these theorems was what is now called the Bessel function of argument $2\sqrt{z/n}$. Thus, Bernoulli is considered the first to define a Bessel function.
In 1764, while Euler was investigating the vibrations of a stretched membrane, he arrived at
the following equation:
\[ \frac{1}{E^2}\frac{d^2 w}{dt^2} = \frac{d^2 w}{dr^2} + \frac{1}{r}\frac{dw}{dr} + \frac{1}{r^2}\frac{d^2 w}{d\theta^2}, \qquad (1.1) \]
where w(r, θ, t) is the transverse displacement of a point expressed in polar coordinates by (r, θ),
at time t, and E is a constant depending on the density and tension of the membrane. He wanted
to find solutions of a separated form, which leads to the equation
\[ \frac{d^2 u}{dr^2} + \frac{1}{r}\frac{du}{dr} + \left( \frac{\alpha^2}{E^2} - \frac{\beta^2}{r^2} \right) u = 0. \]
This equation is known as Bessel’s equation of order β, and its solution was found by Euler
himself in that year.
Some other mathematicians, such as Lagrange, Laplace and Poisson worked with Bessel’s
equation as well. The well-known German astronomer and mathematician Friedrich Wilhelm
Bessel also studied the equation while he was working on dynamical astronomy. In 1824, Bessel
wrote a memoir in which he made a detailed investigation of the solutions of Bessel's equation. Although Bessel functions were originally discovered by Bernoulli, they were generalised by Bessel, and were named after him years after his death.
In this dissertation we will not take a chronological approach to Bessel functions, but rather focus on the theoretical development. In order to study the theory of Bessel functions, we will need some properties of the Gamma function and of Sturm-Liouville problems, which are given in this chapter.
\[ \Gamma(z+1) = z\,\Gamma(z). \]
The Gamma function is the generalisation of the factorial to complex numbers. In fact, for
all n ∈ N ∪ {0},
Γ(n + 1) = n!
Moreover, Euler and Weierstrass described the Gamma function using infinite products. It can be proved that the following expressions are alternative definitions of the Gamma function, for all complex values of z except the non-positive integers.
(i) Euler’s definition
\[ \Gamma(z) = \lim_{n\to\infty} \frac{n!\, n^z}{z(z+1)(z+2)\cdots(z+n)}, \]
\[ \frac{\Gamma'(n+1)}{\Gamma(n+1)} = -\gamma + 1 + \frac{1}{2} + \frac{1}{3} + \cdots + \frac{1}{n}, \qquad (1.3) \]
where γ is Euler’s constant (1.2).
Proposition 1.2 (Duplication formula). Let z ∈ C. Then,
\[ \Gamma(2z) = \frac{2^{2z-1}}{\sqrt{\pi}}\, \Gamma(z)\, \Gamma\!\left(z + \tfrac{1}{2}\right). \qquad (1.4) \]
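The functional equation, Euler's limit definition and the duplication formula (1.4) can all be probed numerically. The following Python sketch is not part of the dissertation's proofs: the truncation point for Euler's limit is an arbitrary choice, and `math.gamma` is used only as a reference implementation.

```python
import math

def euler_gamma(z, n=200000):
    """Euler's limit definition n! n^z / (z(z+1)...(z+n)), truncated at a large n."""
    num = math.lgamma(n + 1) + z * math.log(n)
    den = sum(math.log(z + k) for k in range(n + 1))
    return math.exp(num - den)

# Gamma(n+1) = n! for nonnegative integers
for n in range(6):
    assert abs(math.gamma(n + 1) - math.factorial(n)) < 1e-9

# Euler's limit converges to Gamma(z) (slowly, hence the loose tolerance)
assert abs(euler_gamma(0.7) - math.gamma(0.7)) < 1e-4

# Duplication formula (1.4): Gamma(2z) = 2^(2z-1)/sqrt(pi) Gamma(z) Gamma(z+1/2)
z = 0.8
lhs = math.gamma(2 * z)
rhs = 2 ** (2 * z - 1) / math.sqrt(math.pi) * math.gamma(z) * math.gamma(z + 0.5)
assert abs(lhs - rhs) < 1e-12
print("Gamma identities verified")
```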
From now on, we consider the space $C^2([a,b])$, where a, b ∈ R and a < b, with the inner product ⟨·,·⟩ defined by
\[ \langle f, g\rangle = \int_a^b f(x)\,\overline{g(x)}\,dx. \]
Let us define the formal adjoint of a linear operator.
Definition 1.3. If $L : C^2([a,b]) \to C^2([a,b])$ is the linear transformation
\[ L(f) = rf'' + qf' + pf, \qquad p, q, r \in C^2([a,b]) \]
(we also assume p, q, r take real values), the formal adjoint of L is defined by
\[ L^*(f) = (rf)'' - (qf)' + pf = rf'' + (2r' - q)f' + (r'' - q' + p)f. \]
Moreover, if L = L∗ , then L is said to be formally self-adjoint.
Lagrange's Identity is a well-known property of formally self-adjoint operators.
Lemma 1.1 (Lagrange's Identity). If $L : C^2([a,b]) \to C^2([a,b])$ is a formally self-adjoint operator of the form
\[ L(f) = rf'' + qf' + pf, \qquad p, q, r \in C^2([a,b]), \]
then
\[ \langle L(f), g\rangle = \langle f, L(g)\rangle + \Big[ r(x)\big( f'(x)\overline{g(x)} - f(x)\overline{g'(x)} \big) \Big]_a^b. \qquad (1.6) \]
Proof. Since L is formally self-adjoint and therefore $L = L^*$, we know that
\[ 2r' - q = q \qquad \text{and} \qquad r'' - q' + p = p. \]
Thus, $q(x) = r'(x)$, and
\[ L(f) = rf'' + r'f' + pf = (rf')' + pf. \]
Hence,
\begin{align*}
\langle L(f), g\rangle - \langle f, L(g)\rangle &= \int_a^b \Big( L(f)(x)\,\overline{g(x)} - f(x)\,\overline{L(g)(x)} \Big)\,dx \\
&= \int_a^b \Big[ \overline{g(x)}\big( (r(x)f'(x))' + p(x)f(x) \big) - f(x)\big( (r(x)\overline{g'(x)})' + p(x)\overline{g(x)} \big) \Big]\,dx \\
&= \int_a^b \Big( \overline{g(x)}\,(r(x)f'(x))' - f(x)\,(r(x)\overline{g'(x)})' \Big)\,dx,
\end{align*}
where we have used that r and p are real-valued. Integrating by parts,
\begin{align*}
\langle L(f), g\rangle - \langle f, L(g)\rangle &= \Big[ \overline{g(x)}\,r(x)f'(x) \Big]_a^b - \int_a^b r(x)f'(x)\overline{g'(x)}\,dx \\
&\quad - \Big[ f(x)\,r(x)\overline{g'(x)} \Big]_a^b + \int_a^b r(x)f'(x)\overline{g'(x)}\,dx \\
&= \Big[ r(x)\big( f'(x)\overline{g(x)} - f(x)\overline{g'(x)} \big) \Big]_a^b.
\end{align*}
Let us define the boundary conditions of a problem.

Definition 1.4. Let $[a,b] \subseteq \mathbb{R}$, and $f \in C^2([a,b])$. A boundary condition is a restriction of the type
\[ B(f) = \alpha f(a) + \alpha' f'(a) + \beta f(b) + \beta' f'(b) = 0, \]
where α, α', β and β' are constants.
Moreover, let $L : C^2([a,b]) \to C^2([a,b])$ be a formally self-adjoint operator defined by
\[ L(f) = rf'' + qf' + pf, \]
where r, p and q are real functions in the space $C^2([a,b])$. If $[r(f'\overline{g} - f\,\overline{g'})]_a^b = 0$ is satisfied for all f, g such that $B(f) = B(g) = 0$, the boundary condition B is said to be self-adjoint.
Remark. In our case, we will work with boundary conditions of the type
where r, r0 and p are real and continuous on [a, b] and r > 0 on [a, b].
Moreover, the weighted inner product and norm of the space $L^2_w(a,b)$ are defined as
\[ \langle f, g\rangle_w = \int_a^b f(x)\,\overline{g(x)}\,w(x)\,dx = \langle wf, g\rangle = \langle f, wg\rangle, \qquad (1.8) \]
\[ \|f\|_w = \sqrt{\langle f, f\rangle_w}. \]
Theorem 1.1. Let a regular Sturm-Liouville problem of the form (1.7) be given. Then,

(i) All the eigenvalues of (1.7) are real.

(ii) Eigenfunctions corresponding to distinct eigenvalues are orthogonal with respect to the weight function w; that is, if f and g are eigenfunctions with eigenvalues λ and µ, λ ≠ µ, then
\[ \langle f, g\rangle_w = \int_a^b f(x)\,\overline{g(x)}\,w(x)\,dx = 0. \]

Proof. (i) If λ is an eigenvalue of (1.7), with eigenfunction f, then by Lagrange's Identity (1.6) and (1.8),
\[ \lambda\|f\|_w^2 = \langle \lambda wf, f\rangle = -\langle L(f), f\rangle = -\langle f, L(f)\rangle = \langle f, \lambda wf\rangle = \overline{\lambda}\langle f, wf\rangle = \overline{\lambda}\|f\|_w^2, \]
so $\lambda = \overline{\lambda}$ and λ is real.

(ii) We take f and g eigenfunctions for the eigenvalues λ and µ, respectively. Then, since the eigenvalues are real,
\[ \lambda\langle f, g\rangle_w = \langle \lambda wf, g\rangle = -\langle L(f), g\rangle = -\langle f, L(g)\rangle = \langle f, \mu wg\rangle = \mu\langle f, g\rangle_w, \]
and since λ ≠ µ, it follows that $\langle f, g\rangle_w = 0$.
Theorem 1.2. For every regular Sturm-Liouville problem of the form (1.7) on [a,b], there exists an orthonormal basis $\{\varphi_n\}_{n=1}^{\infty}$ of $L^2_w(a,b)$ consisting of eigenfunctions of the problem. If $\lambda_n$ is the eigenvalue for $\varphi_n$, then $\lim_{n\to\infty}\lambda_n = +\infty$. Moreover, if f is of class $C^2([a,b])$ and satisfies the boundary conditions $B_1(f) = B_2(f) = 0$, then the series $\sum \langle f, \varphi_n\rangle\varphi_n$ converges uniformly to f.
\[ F\big(u(x,y),\, u_x(x,y),\, u_{xx}(x,y),\, u_y(x,y),\, u_{yy}(x,y)\big) = 0. \qquad (1.9) \]
After substituting X(x)Y (y) in equation (1.9), we may be able to write it in the form
If this is the case, since the left-hand side of the equation depends only on x, and the right-hand
side depends only on y, it follows that both sides of (1.10) are equal to a constant ξ. Therefore,
we can reduce problem (1.9) to solve
It should be noticed that this method does not always find a solution, but it can be applied to
problems such as the heat equation or the wave equation.
Before we apply the method to some particular cases, let us consider the two-dimensional
Laplace operator.
\[ \Delta u = u_{xx} + u_{yy}. \]
\begin{align*}
u_x &= v_r r_x + v_\theta \theta_x, \\
u_y &= v_r r_y + v_\theta \theta_y, \\
u_{xx} &= (v_{rr} r_x + v_{r\theta}\theta_x) r_x + v_r r_{xx} + (v_{\theta r} r_x + v_{\theta\theta}\theta_x)\theta_x + v_\theta \theta_{xx}, \\
u_{yy} &= (v_{rr} r_y + v_{r\theta}\theta_y) r_y + v_r r_{yy} + (v_{\theta r} r_y + v_{\theta\theta}\theta_y)\theta_y + v_\theta \theta_{yy}.
\end{align*}
But since $r^2 = x^2 + y^2$, we have that $2r r_x = 2x$, $2r r_y = 2y$, and thus $r_x = \cos\theta$, $r_y = \sin\theta$. Moreover, $r_{xx} = -\sin\theta\,\theta_x$ and $r_{yy} = \cos\theta\,\theta_y$.
On the other hand, differentiating with respect to y,
Differentiating again with respect to x, we get
\[ \theta_{xx} = -\frac{r\cos\theta\,\theta_x - r_x\sin\theta}{r^2} = -\frac{r\cos\theta(-\sin\theta/r) - r_x\sin\theta}{r^2} = \frac{2}{r^2}\cos\theta\sin\theta. \]
Similarly,
\[ \theta_{yy} = -\frac{2}{r^2}\sin\theta\cos\theta. \]
Then, by substitution,
\begin{align*}
u_{xx} &= v_{rr}\cos^2\theta + v_{r\theta}\cos\theta(-\sin\theta/r) + v_r\big({-\sin\theta}(-\sin\theta/r)\big) \\
&\quad + v_{r\theta}\cos\theta(-\sin\theta/r) + v_{\theta\theta}\sin^2\theta/r^2 + v_\theta\big((2/r)\sin\theta\cos\theta\big), \\
u_{yy} &= v_{rr}\sin^2\theta + v_{r\theta}\sin\theta(\cos\theta/r) + v_r\big(\cos\theta(\cos\theta/r)\big) \\
&\quad + v_{r\theta}\sin\theta(\cos\theta/r) + v_{\theta\theta}\cos^2\theta/r^2 + v_\theta\big((-2/r)\sin\theta\cos\theta\big).
\end{align*}
Therefore,
\[ \Delta u = u_{xx} + u_{yy} = v_{rr} + \frac{1}{r} v_r + \frac{1}{r^2} v_{\theta\theta}. \]
\[ u_{tt} - c^2 \Delta u = 0, \qquad (1.11) \]
where the function $u \equiv u(x, y, t)$ indicates the displacement of the point $(x, y) \in \mathbb{R}^2$ at time t. It describes the propagation of a wave in the plane. The constant c is determined by the propagation speed of the wave.
Let us solve this equation by separation of variables. First, we consider the Laplacian in its
polar form, (r, θ) being the polar coordinates of the point (x, y). In order to simplify notation,
we will also use u(r, θ) to express v(r, θ) = u(r cos θ, r sin θ).
\[ u_{tt} - c^2 \Delta u = u_{tt} - c^2\left( u_{rr} + \frac{1}{r}u_r + \frac{1}{r^2}u_{\theta\theta} \right). \qquad (1.12) \]
We can look for solutions of the type
\[ u(r, \theta, t) = T(t)\, v(r, \theta). \]
Substituting in (1.12),
\[ T''(t)\, v(r,\theta) = c^2\, T(t)\, \Delta v(r,\theta). \]
Therefore,
\[ \frac{T''(t)}{c^2\, T(t)} = \frac{\Delta v(r,\theta)}{v(r,\theta)} = -\mu^2, \]
where $-\mu^2$ is a constant. From this equation we get
• Regarding the equation with T,
\[ T''(t) + c^2\mu^2\, T(t) = 0. \qquad (1.13) \]

• Regarding the equation with v, we look for solutions of the form $v(r,\theta) = R(r)\Theta(\theta)$:
\[ R''(r)\Theta(\theta) + \frac{1}{r}R'(r)\Theta(\theta) + \frac{1}{r^2}R(r)\Theta''(\theta) + \mu^2 R(r)\Theta(\theta) = 0. \]
Separating variables,
\[ \frac{1}{R(r)}\big( r^2 R''(r) + r R'(r) + r^2\mu^2 R(r) \big) = -\frac{\Theta''(\theta)}{\Theta(\theta)} = \nu^2, \]
where $\nu^2$ is a constant.
If we want to solve this problem in a bounded region, we need some boundary conditions to
be verified by the general solution of equations (1.13), (1.14) and (1.15).
\[ u_t - K\Delta u = u_t - K\left( u_{rr} + \frac{1}{r}u_r + \frac{1}{r^2}u_{\theta\theta} \right), \]
where K is a constant (known in physics as the thermal diffusivity) determined by the nature of the medium. This equation describes the temperature at a point (x, y) at time t.
We want to find solutions of the type u(r, θ, t) = T (t)R(r)Θ(θ), so applying the same process
as for the wave equation, we get equations (1.14) and (1.15). However, in this case, instead of
(1.13), we get
\[ T'(t) + K\mu^2\, T(t) = 0. \qquad (1.16) \]
Of course, we also need boundary conditions to establish the solution of the problem in a
bounded region.
1.3.4 The Dirichlet problem
Finally, we will study the Dirichlet problem in R3 . The equation to be satisfied is simply Laplace’s
equation in three dimensions
∆u = 0.
Using the polar Laplacian in two dimensions, we write it as
\[ (u_{xx} + u_{yy}) + u_{zz} = u_{rr} + \frac{1}{r}u_r + \frac{1}{r^2}u_{\theta\theta} + u_{zz} = 0. \qquad (1.17) \]
Applying again separation of variables, we try to find solutions of the type
u(r, θ, z) = R(r)Θ(θ)Z(z)
and we get again equations (1.14) and (1.15), but now we have also
Chapter 2
In this chapter, we are going to introduce the Bessel functions of the first, second and third
kind. In order to define these functions, we will solve a differential equation known as Bessel’s
equation.
\[ z^2\frac{d^2 w}{dz^2} + z\frac{dw}{dz} + (z^2 - \nu^2)w = 0. \qquad (2.1) \]
We are looking for functions which solve this equation. These solutions can be found with
the Frobenius method, which consists in finding functions of the form
\[ w(z) = \sum_{r=0}^{\infty} a_r z^{\alpha+r}, \qquad a_0 \neq 0, \]
where we have to determine a constant α and the coefficients aj , for all j ∈ N ∪ {0}. The first
and second derivatives of w with respect to z are
\[ \frac{dw}{dz} = \sum_{r=0}^{\infty} a_r(\alpha+r)z^{\alpha+r-1}, \qquad \frac{d^2 w}{dz^2} = \sum_{r=0}^{\infty} a_r(\alpha+r)(\alpha+r-1)z^{\alpha+r-2}. \]
Substituting these series in equation (2.1), we get
\begin{align*}
0 &= \sum_{r=0}^{\infty} a_r(\alpha+r)(\alpha+r-1)z^{\alpha+r} + \sum_{r=0}^{\infty} a_r\big( (\alpha+r) - \nu^2 \big)z^{\alpha+r} + \sum_{r=0}^{\infty} a_r z^{\alpha+r+2} \\
&= \sum_{r=0}^{\infty} \big( (\alpha+r)(\alpha+r-1) + (\alpha+r) - \nu^2 \big) a_r z^{\alpha+r} + \sum_{r=0}^{\infty} a_r z^{\alpha+r+2} \\
&= \sum_{r=0}^{\infty} a_r\big( (\alpha+r)^2 - \nu^2 \big)z^{\alpha+r} + \sum_{r=2}^{\infty} a_{r-2} z^{\alpha+r}.
\end{align*}
This means every coefficient of the powers of z has to be zero. Thus, we have the following equations:
\begin{align*}
(\alpha^2 - \nu^2)a_0 &= 0, \\
\big( (\alpha+1)^2 - \nu^2 \big)a_1 &= 0, \\
\big( (\alpha+r)^2 - \nu^2 \big)a_r + a_{r-2} &= 0, \quad \forall r \geq 2.
\end{align*}
Since $a_0 \neq 0$, the first equation forces $\alpha = \pm\nu$. Taking $\alpha = \nu$, the second equation gives $a_1 = 0$ whenever $2\nu + 1 \neq 0$. This means that the coefficients of odd index are all zero, and that the ones with even index are determined by the formula
\[ a_{2r} = -\frac{a_{2r-2}}{2^2\, r(\nu + r)}, \qquad r \in \mathbb{N}. \]
Hence,
\begin{align*}
a_2 = -\frac{a_0}{2^2(\nu+1)} &\implies a_4 = -\frac{a_2}{2^2\, 2(\nu+2)} = \frac{a_0}{2^4\, 2(\nu+1)(\nu+2)} \\
&\implies a_6 = -\frac{a_4}{2^2\, 3(\nu+3)} = -\frac{a_0}{2^6\, 3!\,(\nu+1)(\nu+2)(\nu+3)} \\
&\implies \cdots \implies a_{2r} = \frac{(-1)^r a_0}{2^{2r}\, r! \prod_{k=1}^{r}(\nu+k)}, \quad \forall r \in \mathbb{N}.
\end{align*}
Definition 2.2. Let ν be a complex constant such that 2ν is not a negative integer. Then,
\[ J_\nu(z) = \sum_{r=0}^{\infty} \frac{(-1)^r (z/2)^{\nu+2r}}{r!\, \Gamma(\nu+r+1)} \qquad (2.2) \]
is called the Bessel function of the first kind of order ν and argument z.
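Definition 2.2 can be evaluated directly by truncating the series. In the Python sketch below, the truncation at 40 terms is an arbitrary choice, and the reference values of $J_0(1)$ and $J_1(1)$ are standard tabulated ones.

```python
import math

def jv(v, z, terms=40):
    """Partial sum of the series (2.2), for real order v and real z > 0."""
    total = 0.0
    for r in range(terms):
        total += (-1) ** r * (z / 2) ** (v + 2 * r) / (
            math.factorial(r) * math.gamma(v + r + 1))
    return total

assert abs(jv(0, 1.0) - 0.7651976865579666) < 1e-10
assert abs(jv(1, 1.0) - 0.4400505857449335) < 1e-10
# For small z the leading term dominates: J_0(z) ~ 1 and J_1(z) ~ z/2
assert abs(jv(0, 1e-3) - 1.0) < 1e-6
assert abs(jv(1, 1e-3) - 5e-4) < 1e-9
print("series values check out")
```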
The same process can be applied for α = −ν. In this case, we get
\[ J_{-\nu}(z) = \sum_{r=0}^{\infty} \frac{(-1)^r (z/2)^{-\nu+2r}}{r!\, \Gamma(-\nu+r+1)}. \qquad (2.3) \]
In order to work with these expressions, we would like the series (2.2) and (2.3) to be absolutely convergent.
Lemma 2.1. The series defining Bessel functions of the first kind of order ν and −ν are abso-
lutely convergent for all z 6= 0.
Proof. Recall
\[ J_\nu(z) = \sum_{r=0}^{\infty} \frac{(-1)^r (z/2)^{\nu+2r}}{r!\, \Gamma(\nu+r+1)} = \Big( \frac{z}{2} \Big)^{\nu} \sum_{r=0}^{\infty} \frac{(-1)^r z^{2r}}{r!\, 4^r\, \Gamma(\nu+r+1)}. \]
Notice that, since ν is taken to be complex, $(z/2)^\nu$ is not defined at z = 0, since log z is not. We can use the Cauchy-Hadamard theorem to find the radius of convergence of the series on the right. Making $x = z^2$,
\[ R = \lim_{r\to\infty} \frac{|a_r|}{|a_{r+1}|} = \lim_{r\to\infty} 4(r+1)\,|{\pm}\nu+r+1| = +\infty, \quad \text{where } a_r = \frac{(-1)^r}{r!\, 4^r\, \Gamma(\pm\nu+r+1)}, \]
so both series converge absolutely for every z ≠ 0.
Remark. As series (2.2) and (2.3) are both convergent for all z 6= 0, we may differentiate them
term-by-term.
As Jν and J−ν are solutions of a second-order linear differential equation, we want to see that
they are linearly independent, and thus they form a basis for the vector space of the solutions of
(2.1).
Proposition 2.1. If ν ∈ C − {k/2 | k ∈ Z}, the Bessel functions of the first kind Jν and J−ν are
linearly independent. In that case, for any solution w of (2.1), there exist A, B ∈ C such that
w(z) = AJν (z) + BJ−ν (z).
Proof. It is enough to see that the Wronskian of Jν (z) and J−ν (z),
\[ W(J_\nu(z), J_{-\nu}(z)) = \begin{vmatrix} J_\nu(z) & J_{-\nu}(z) \\ J_\nu'(z) & J_{-\nu}'(z) \end{vmatrix}, \]
does not vanish at any point. Since Jν and J−ν satisfy (2.1),
\begin{align*}
z^2 J_\nu''(z) + z J_\nu'(z) + (z^2 - \nu^2)J_\nu(z) &= 0, \\
z^2 J_{-\nu}''(z) + z J_{-\nu}'(z) + (z^2 - \nu^2)J_{-\nu}(z) &= 0.
\end{align*}
We multiply these two equations by $J_{-\nu}(z)$ and $J_\nu(z)$, respectively. Subtracting one from the other, and dividing by z, we get
\[ z\big( J_\nu(z)J_{-\nu}''(z) - J_{-\nu}(z)J_\nu''(z) \big) + J_\nu(z)J_{-\nu}'(z) - J_{-\nu}(z)J_\nu'(z) = 0. \]
This is equivalent to
\[ \frac{d}{dz}\Big[ z\big( J_\nu(z)J_{-\nu}'(z) - J_{-\nu}(z)J_\nu'(z) \big) \Big] = \frac{d}{dz}\big[ z\,W(J_\nu, J_{-\nu}) \big] = 0. \]
This implies W (Jν , J−ν ) = C/z, C being a constant to be determined. Taking into account the
first term in the series (2.2),
\[ J_\nu(z) = \frac{(z/2)^\nu}{\Gamma(\nu+1)}\big( 1 + O(z^2) \big), \qquad J_\nu'(z) = \frac{(z/2)^{\nu-1}}{2\,\Gamma(\nu)}\big( 1 + O(z^2) \big). \]
The same applies to J−ν (z). Then, using the property (1.5),
\[ W(J_\nu(z), J_{-\nu}(z)) = \frac{1}{z}\left( \frac{1}{\Gamma(\nu+1)\Gamma(-\nu)} - \frac{1}{\Gamma(-\nu+1)\Gamma(\nu)} \right) + O(z) = -\frac{2\sin\nu\pi}{\pi z} + O(z). \]
But we had stated $W(J_\nu(z), J_{-\nu}(z)) = C/z$, so the last O(z) must be zero and
\[ W(J_\nu(z), J_{-\nu}(z)) = -\frac{2\sin\nu\pi}{\pi z}, \qquad (2.4) \]
which only vanishes when ν is an integer. By hypothesis, 2ν is not an integer, so neither is ν, and
\[ W(J_\nu(z), J_{-\nu}(z)) \neq 0, \qquad \forall z \in \mathbb{C} - \{0\}. \]
So $J_\nu(z)$ and $J_{-\nu}(z)$ are linearly independent solutions of (2.1), which is a second-order linear differential equation. Since the solutions form a two-dimensional vector space and $J_\nu(z)$ and $J_{-\nu}(z)$ are linearly independent, any solution can be expressed as a linear combination of them.
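Formula (2.4) lends itself to a direct numerical check, summing truncated series for $J_\nu$, $J_{-\nu}$ and their term-by-term derivatives. This Python sketch assumes real order and argument; the values ν = 0.3 and z = 1.7 are arbitrary choices.

```python
import math

def jv(v, z, terms=40):
    # truncated series (2.2)
    return sum((-1) ** r * (z / 2) ** (v + 2 * r)
               / (math.factorial(r) * math.gamma(v + r + 1))
               for r in range(terms))

def jv_prime(v, z, terms=40):
    # term-by-term derivative of the truncated series
    return sum((-1) ** r * (v + 2 * r) / 2 * (z / 2) ** (v + 2 * r - 1)
               / (math.factorial(r) * math.gamma(v + r + 1))
               for r in range(terms))

v, z = 0.3, 1.7
wronskian = jv(v, z) * jv_prime(-v, z) - jv(-v, z) * jv_prime(v, z)
expected = -2 * math.sin(v * math.pi) / (math.pi * z)
assert abs(wronskian - expected) < 1e-10
print("Wronskian matches (2.4)")
```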
We have seen that Jν (z) and J−ν (z) are linearly independent when ν is not an integer or half
an integer. Now we focus on the case when ν is either n or n + 1/2 for an integer n.
Let ν = −n − 1/2 where n is a positive integer or zero. Then, 2ν is not a positive integer,
and we define J−ν as in (2.3). However, when taking α = ν, we get
\[ (2\nu + 1)a_1 = 0, \qquad r(2\nu + r)a_r + a_{r-2} = 0, \quad \forall r \geq 2. \]
Since $2\nu = -2n - 1$, when $r = 2n+1$ we have $2\nu + r = 0$, and the value of $a_{2n+1}$ is no longer established by $a_{2n-1}$, so $a_{2n+1}$ is not forced to be zero and can take arbitrary values. However, if we still take $J_\nu$ and $J_{-\nu}$ defined as in (2.2) and (2.3), respectively, these functions are linearly independent, as their Wronskian (shown in (2.4)) does not vanish. So when ν is half an odd integer, we take $J_\nu$ and $J_{-\nu}$ as defined before, and any solution of (2.1) will be a combination of them.
The same argument applies when ν = n + 1/2, n being a positive integer. In this case, we
can give arbitrary values to a2n+1 when α = −ν, but we will still define J−ν in the same way.
Remark. We can now generalise Definition 2.2 for all complex values of ν.
Now, let ν be a nonnegative integer n. Then, using that Γ(k + 1) is infinite for every negative integer k, and making $r - n = s$,
\begin{align*}
J_{-\nu}(z) = \sum_{r=0}^{\infty} \frac{(-1)^r (z/2)^{-n+2r}}{r!\, \Gamma(-n+r+1)} &= \sum_{r=n}^{\infty} \frac{(-1)^r (z/2)^{-n+2r}}{r!\, \Gamma(-n+r+1)} \\
&= \sum_{s=0}^{\infty} \frac{(-1)^{s+n} (z/2)^{-n+2(s+n)}}{(s+n)!\, \Gamma(-n+s+n+1)} = (-1)^n \sum_{s=0}^{\infty} \frac{(-1)^s (z/2)^{n+2s}}{\Gamma(n+s+1)\, s!} \\
&= (-1)^n J_\nu(z). \qquad (2.5)
\end{align*}
Hence, $J_\nu$ and $J_{-\nu}$ are linearly dependent, and we need to define a new solution of (2.1) that is linearly independent of $J_\nu$.
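Relation (2.5) can be checked numerically. In the sketch below, the terms of (2.3) whose denominator carries a Gamma pole are simply omitted, mirroring the computation above; the sample arguments are arbitrary.

```python
import math

def jn(n, z, terms=40):
    """Truncated series (2.2) for a nonnegative integer order n."""
    return sum((-1) ** r * (z / 2) ** (n + 2 * r)
               / (math.factorial(r) * math.factorial(n + r))
               for r in range(terms))

def j_neg_n(n, z, terms=40):
    """Truncated series (2.3) for order -n; the sum effectively starts at r = n."""
    return sum((-1) ** r * (z / 2) ** (-n + 2 * r)
               / (math.factorial(r) * math.factorial(r - n))
               for r in range(n, terms))

for n in range(5):
    for z in (0.5, 1.0, 2.3):
        assert abs(j_neg_n(n, z) - (-1) ** n * jn(n, z)) < 1e-12
print("J_{-n} = (-1)^n J_n verified")
```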
Example 2.1. The functions J0 (z) and J1 (z) play a major role in physical applications. By
definition, Bessel functions of the first kind of orders 0 and 1 are
\[ J_0(z) = \sum_{r=0}^{\infty} \frac{(-1)^r (z/2)^{2r}}{r!\, \Gamma(r+1)} = \sum_{r=0}^{\infty} \frac{(-1)^r (z/2)^{2r}}{(r!)^2}, \]
\[ J_1(z) = \sum_{r=0}^{\infty} \frac{(-1)^r (z/2)^{2r+1}}{r!\, \Gamma(r+2)} = \sum_{r=0}^{\infty} \frac{(-1)^r (z/2)^{2r+1}}{(r+1)(r!)^2}. \]
Thus, for small values of z, $J_0(z) \sim 1$ and $J_1(z) \sim z/2$. Moreover, we have that, for real values of x,
\[ \lim_{x\to 0} J_0(x) = 1, \qquad \lim_{x\to 0} J_1(x) = 0. \]
Graphs of these functions are shown in Figure 2.1.
Figure 2.1: Graphical representations of Bessel functions of the first kind of orders 0 (blue) and 1 (red).
Definition 2.3. Let ν ∈ C. The function
\[ Y_\nu(z) = \lim_{\alpha\to\nu} \frac{(\cos\alpha\pi)J_\alpha(z) - J_{-\alpha}(z)}{\sin\alpha\pi} \qquad (2.6) \]
is called the Bessel function of the second kind of order ν and argument z.
Proposition 2.2. For all integer n, the Bessel function of the second kind Yn is a solution of
Bessel’s equation, and it is linearly independent of Jn (z).
Proof. As we have stated before, when ν = n ∈ Z, the expression inside the limit in (2.6) takes
the form (0/0) (notice that sin nπ = 0, cos nπ = (−1)n and J−n (z) = (−1)n Jn (z)) , and we can
use L’Hôpital’s rule to obtain the limit. Hence,
\begin{align*}
Y_n(z) &= \lim_{\alpha\to n} \frac{-\pi\sin(\alpha\pi)J_\alpha(z) + \cos(\alpha\pi)\frac{\partial}{\partial\alpha}J_\alpha(z) - \frac{\partial}{\partial\alpha}J_{-\alpha}(z)}{\pi\cos(\alpha\pi)} \\
&= \frac{1}{\pi}\left[ \frac{\partial}{\partial\alpha}J_\alpha(z) + (-1)^{n+1}\frac{\partial}{\partial\alpha}J_{-\alpha}(z) \right]_{\alpha=n}. \qquad (2.7)
\end{align*}
Since $J_\alpha$ is a solution of (2.1), taking derivatives with respect to α on each side of equation (2.1),
\begin{align*}
&\frac{\partial}{\partial\alpha}\left[ \Big( z^2\frac{d^2}{dz^2} + z\frac{d}{dz} + z^2 - \alpha^2 \Big) J_\alpha(z) \right] = 0 \\
\implies\;& -2\alpha J_\alpha(z) + \Big( z^2\frac{d^2}{dz^2} + z\frac{d}{dz} + z^2 - \alpha^2 \Big)\frac{\partial}{\partial\alpha}J_\alpha(z) = 0 \\
\implies\;& \Big( z^2\frac{d^2}{dz^2} + z\frac{d}{dz} + z^2 - \alpha^2 \Big)\frac{\partial}{\partial\alpha}J_\alpha(z) = 2\alpha J_\alpha(z).
\end{align*}
The same argument applied to $J_{-\alpha}$ gives the analogous identity with $J_{-\alpha}$ on the right-hand side. Hence,
\begin{align*}
&\Big( z^2\frac{d^2}{dz^2} + z\frac{d}{dz} + z^2 - \alpha^2 \Big) Y_n(z) \\
&\quad = \frac{1}{\pi}\left[ \Big( z^2\frac{d^2}{dz^2} + z\frac{d}{dz} + z^2 - \alpha^2 \Big)\frac{\partial}{\partial\alpha}J_\alpha(z) - (-1)^n \Big( z^2\frac{d^2}{dz^2} + z\frac{d}{dz} + z^2 - \alpha^2 \Big)\frac{\partial}{\partial\alpha}J_{-\alpha}(z) \right]_{\alpha=n} \\
&\quad = \frac{1}{\pi}\Big[ 2\alpha J_\alpha(z) + (-1)^{n+1}\, 2\alpha J_{-\alpha}(z) \Big]_{\alpha=n} \\
&\quad = \frac{2n}{\pi}\big( J_n(z) - (-1)^n J_{-n}(z) \big) = 0,
\end{align*}
since $J_{-n}(z) = (-1)^n J_n(z)$.
Now, we should show that $Y_n(z)$ is linearly independent of $J_n(z)$. First, we will calculate the Wronskian of $J_\alpha$ and $Y_\alpha$ when α is not an integer, using the result for the Wronskian of $J_\nu(z)$ and $J_{-\nu}(z)$:
\begin{align*}
W(J_\alpha(z), Y_\alpha(z)) = \begin{vmatrix} J_\alpha(z) & Y_\alpha(z) \\ J_\alpha'(z) & Y_\alpha'(z) \end{vmatrix}
&= \begin{vmatrix} J_\alpha(z) & \dfrac{(\cos\alpha\pi)J_\alpha(z) - J_{-\alpha}(z)}{\sin\alpha\pi} \\[1ex] J_\alpha'(z) & \dfrac{(\cos\alpha\pi)J_\alpha'(z) - J_{-\alpha}'(z)}{\sin\alpha\pi} \end{vmatrix} \\
&= \frac{1}{\sin\alpha\pi}\Big[ \cos\alpha\pi\big( J_\alpha(z)J_\alpha'(z) - J_\alpha'(z)J_\alpha(z) \big) + J_\alpha'(z)J_{-\alpha}(z) - J_\alpha(z)J_{-\alpha}'(z) \Big] \\
&= -\frac{1}{\sin\alpha\pi}\, W(J_\alpha(z), J_{-\alpha}(z)) = \frac{2}{\pi z} \neq 0.
\end{align*}
Because of the continuity of $J_\alpha$ and $Y_\alpha$ with respect to the variable α, this result is also true for any $\alpha = n \in \mathbb{Z}$, and thus the Wronskian cannot vanish anywhere. Therefore, $J_n$ and $Y_n$ are also linearly independent, and so are $J_{-n}$ and $Y_n$.
Proof. When ν is not an integer, $J_\nu(z)$ and $Y_\nu(z)$ are trivially solutions of (2.1), because $J_\nu(z)$ and $J_{-\nu}(z)$ are. Since we have shown previously that $J_\nu(z)$ and $J_{-\nu}(z)$ are linearly independent, so are $J_\nu(z)$ and $Y_\nu(z)$. The previous proposition gives the result for integer values of ν.
Example 2.2 (A transformation of Bessel’s equation). Some physical problems can be solved
with Bessel functions. This is either because Bessel’s equation arises during the resolution of the
problem, or because making some modifications to the original equation gives Bessel’s equation
as a result. Let z = βtγ , with β, γ ∈ C. Then,
\[ \frac{dw}{dz} = \frac{dw/dt}{dz/dt} = \frac{1}{\beta\gamma t^{\gamma-1}}\frac{dw}{dt}. \]
Hence,
\[ z\frac{dw}{dz} = \frac{t}{\gamma}\frac{dw}{dt}. \]
On the other hand, writing Bessel's equation (2.1) as
\[ z\frac{d}{dz}\Big( z\frac{dw}{dz} \Big) + (z^2 - \nu^2)w = 0, \]
and substituting the values,
\[ t\frac{d}{dt}\Big( t\frac{dw}{dt} \Big) + (\beta^2\gamma^2 t^{2\gamma} - \nu^2\gamma^2)w = 0. \]
Now, making the substitution $w = t^\alpha u$,
\[ t\frac{dw}{dt} = \alpha t^\alpha u + t^{\alpha+1}\frac{du}{dt}. \]
Thus,
\[ t\frac{d}{dt}\Big( t\frac{dw}{dt} \Big) = t^{\alpha+2}\frac{d^2u}{dt^2} + (2\alpha+1)t^{\alpha+1}\frac{du}{dt} + \alpha^2 t^\alpha u. \]
Now, we can substitute this in Bessel's equation:
\[ t^{\alpha+2}\frac{d^2u}{dt^2} + (2\alpha+1)t^{\alpha+1}\frac{du}{dt} + \alpha^2 t^\alpha u + (\beta^2\gamma^2 t^{2\gamma} - \nu^2\gamma^2)t^\alpha u = 0. \]
Dividing by $t^\alpha$,
\[ t^2\frac{d^2u}{dt^2} + (2\alpha+1)t\frac{du}{dt} + (\alpha^2 + \beta^2\gamma^2 t^{2\gamma} - \nu^2\gamma^2)u = 0. \qquad (2.8) \]
Since the general solution of Bessel's equation is $AJ_\nu(z) + BY_\nu(z)$ (with A, B ∈ C), the solution of equation (2.8) is
\[ u(t) = t^{-\alpha}\big( A J_\nu(\beta t^\gamma) + B Y_\nu(\beta t^\gamma) \big). \]
Example 2.3. We will calculate the series expression of Y0 (z). The same argument can be
applied for the rest of the integers. Applying (2.7) when n = 0, and making β = −α in the
second equality,
\begin{align*}
Y_0(z) &= \frac{1}{\pi}\left[ \frac{\partial}{\partial\alpha}J_\alpha(z) - \frac{\partial}{\partial\alpha}J_{-\alpha}(z) \right]_{\alpha=0} \\
&= \frac{1}{\pi}\frac{\partial}{\partial\alpha}J_\alpha(z)\Big|_{\alpha=0} - \frac{1}{\pi}\Big( {-\frac{\partial}{\partial\beta}J_\beta(z)} \Big)\Big|_{\beta=0} \\
&= \frac{2}{\pi}\frac{\partial}{\partial\alpha}J_\alpha(z)\Big|_{\alpha=0}.
\end{align*}
Thus,
\begin{align*}
Y_0(z) &= \frac{2}{\pi}\sum_{r=0}^{\infty} \frac{(-1)^r (z/2)^{2r}}{r!\,\Gamma(r+1)}\left( \log\frac{z}{2} - \frac{\Gamma'(r+1)}{\Gamma(r+1)} \right) \\
&= \frac{2}{\pi}\sum_{r=0}^{\infty} \frac{(-1)^r (z/2)^{2r}}{r!\, r!}\left( \log\frac{z}{2} - \Big( -\gamma + 1 + \frac{1}{2} + \frac{1}{3} + \cdots + \frac{1}{r} \Big) \right) \\
&= \frac{2}{\pi}\sum_{r=0}^{\infty} \frac{(-1)^r (z/2)^{2r}}{r!\, r!}\left( \log\frac{z}{2} + \gamma - \sum_{k=1}^{r}\frac{1}{k} \right) \\
&= \frac{2}{\pi}\Big( \log\frac{z}{2} + \gamma \Big) J_0(z) - \frac{2}{\pi}\sum_{r=0}^{\infty}\sum_{k=1}^{r} \frac{(-1)^r (z/2)^{2r}}{k\,(r!)^2}.
\end{align*}
The functions
\[ H_\nu^{(1)}(z) = J_\nu(z) + iY_\nu(z), \qquad H_\nu^{(2)}(z) = J_\nu(z) - iY_\nu(z) \]
are called Hankel functions or Bessel functions of the third kind of order ν.

Let us prove that any solution of Bessel's equation can be written as a combination of $H_\nu^{(1)}(z)$ and $H_\nu^{(2)}(z)$.
Proposition 2.3. Given a constant ν ∈ C, the Hankel functions of order ν are linearly independent, and the general solution of (2.1) can be expressed as
\[ w(z) = A H_\nu^{(1)}(z) + B H_\nu^{(2)}(z), \qquad A, B \in \mathbb{C}. \]

Proof. Using Corollary 1.1, Hankel functions are solutions of (2.1). Moreover, their Wronskian is
\begin{align*}
W(H_\nu^{(1)}(z), H_\nu^{(2)}(z)) &= \big( J_\nu(z) + iY_\nu(z) \big)\big( J_\nu'(z) - iY_\nu'(z) \big) - \big( J_\nu'(z) + iY_\nu'(z) \big)\big( J_\nu(z) - iY_\nu(z) \big) \\
&= -2i\big( J_\nu(z)Y_\nu'(z) - Y_\nu(z)J_\nu'(z) \big) = -\frac{4i}{\pi z} \neq 0,
\end{align*}
and therefore Hankel functions are linearly independent and form a basis of the vector space of the solutions of (2.1).
Example 2.4 (Bessel functions with argument $ze^{m\pi i}$). It should be noted that Bessel functions are multivalued functions in C − {0}; that is, a point z ∈ C − {0} may have more than one image. Nevertheless, the functions $J_\nu(z)/z^\nu$ are single-valued (have only one image). Then, for a complex z, and for any integer m,
\[ \frac{J_\nu(e^{m\pi i}z)}{e^{m\pi i\nu}\, z^\nu} = \frac{J_\nu(z)}{z^\nu}. \]
Equivalently,
\[ J_\nu(e^{m\pi i}z) = e^{m\pi\nu i}\, J_\nu(z). \]
We can also get
\[ J_{-\nu}(e^{m\pi i}z) = e^{-m\pi\nu i}\, J_{-\nu}(z). \]
If we now consider Bessel functions of the second kind,
\begin{align*}
Y_\nu(ze^{m\pi i}) &= \frac{(\cos\nu\pi)J_\nu(ze^{m\pi i}) - J_{-\nu}(ze^{m\pi i})}{\sin\nu\pi} \\
&= \frac{(e^{m\nu\pi i}\cos\nu\pi)J_\nu(z) - e^{-m\nu\pi i}J_{-\nu}(z)}{\sin\nu\pi} \\
&= \frac{e^{-m\nu\pi i}\big[ (\cos\nu\pi)J_\nu(z) - J_{-\nu}(z) \big] + (e^{m\nu\pi i} - e^{-m\nu\pi i})\cos\nu\pi\, J_\nu(z)}{\sin\nu\pi} \\
&= e^{-m\nu\pi i}\, Y_\nu(z) + 2i\,\frac{\sin m\nu\pi}{\tan\nu\pi}\, J_\nu(z).
\end{align*}
Repeating a similar argument with Hankel functions, we get
\[ z^2\frac{d^2 w}{dz^2} + z\frac{dw}{dz} - (z^2 + \nu^2)w = 0. \qquad (2.11) \]
Actually, (2.11) is the result of replacing z by iz in Bessel’s equation (2.1). Thus, we deduce that
Jν (iz) and J−ν (iz) are solutions of (2.11). However, we usually want to express these solutions
in a real form. The function e−νπi/2 Jν (iz) is a solution of (2.11). Notice that
\[ e^{-\nu\pi i/2} J_\nu(iz) = (-i)^\nu \sum_{r=0}^{\infty} \frac{(-1)^r i^\nu i^{2r}(z/2)^{\nu+2r}}{r!\,\Gamma(\nu+r+1)} = \frac{1}{i^\nu}\sum_{r=0}^{\infty} \frac{(-1)^r i^\nu (i^2)^r (z/2)^{\nu+2r}}{r!\,\Gamma(\nu+r+1)} = \sum_{r=0}^{\infty} \frac{(z/2)^{\nu+2r}}{r!\,\Gamma(\nu+r+1)}. \qquad (2.12) \]
Definition 2.5. Let ν ∈ C. The function
\[ I_\nu(z) = \sum_{r=0}^{\infty} \frac{(z/2)^{\nu+2r}}{r!\,\Gamma(\nu+r+1)} \qquad (2.13) \]
is called the modified Bessel function of the first kind of order ν.
As $I_\nu$ and $I_{-\nu}$ are multiples of $J_\nu(iz)$ and $J_{-\nu}(iz)$, respectively, they are linearly independent if ν ∉ Z. When ν is an integer n, we have again that
\[ I_{-n}(z) = \sum_{r=n}^{\infty} \frac{(z/2)^{-n+2r}}{\Gamma(r+1)\,\Gamma(-n+r+1)} = \sum_{s=0}^{\infty} \frac{(z/2)^{n+2s}}{\Gamma(n+s+1)\,\Gamma(s+1)} = I_n(z). \]
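The series (2.13) and the identity $I_{-n} = I_n$ can be checked numerically with truncated sums; the reference value of $I_0(1)$ is a standard tabulated one, and the sample arguments are arbitrary.

```python
import math

def iv(v, z, terms=40):
    # truncated series (2.13)
    return sum((z / 2) ** (v + 2 * r)
               / (math.factorial(r) * math.gamma(v + r + 1))
               for r in range(terms))

def i_neg_n(n, z, terms=40):
    # for order -n the first n terms vanish (Gamma pole in the denominator)
    return sum((z / 2) ** (-n + 2 * r)
               / (math.factorial(r) * math.factorial(r - n))
               for r in range(n, terms))

assert abs(iv(0, 1.0) - 1.2660658777520084) < 1e-10
for n in range(4):
    assert abs(i_neg_n(n, 1.5) - iv(n, 1.5)) < 1e-12
print("modified Bessel identities verified")
```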
As we have done with Bessel functions of the first kind, we define a new function which will
be linearly independent of the first one.
The function
\[ K_\nu(z) = \frac{\pi}{2}\,\frac{I_{-\nu}(z) - I_\nu(z)}{\sin\nu\pi} \qquad (2.14) \]
is called the modified Bessel function of the third kind or the modified Hankel function of order ν.
Proposition 2.4. The modified Hankel function of order ν ∈ C is a solution of (2.11) and it is
linearly independent of the modified Bessel function of the first kind of the same order.
Proof. If ν ∉ Z, $K_\nu$ is a linear combination of $I_\nu(z)$ and $I_{-\nu}(z)$ in which $I_{-\nu}(z)$ appears with a nonzero coefficient, and therefore it is a solution of (2.11) linearly independent of $I_\nu(z)$. Thus, the problem reduces to proving the linear independence of $K_\nu$ and $I_\nu$ when $\nu = n \in \mathbb{Z}$.
Notice that the limit gives an indetermination (0/0) when ν is an integer n. By L’Hôpital’s
rule,
\[ K_n(z) = \frac{\pi}{2}\lim_{\alpha\to n} \frac{\frac{\partial}{\partial\alpha}I_{-\alpha}(z) - \frac{\partial}{\partial\alpha}I_{\alpha}(z)}{\pi\cos\alpha\pi} = \frac{(-1)^n}{2}\left[ \frac{\partial}{\partial\alpha}I_{-\alpha}(z) - \frac{\partial}{\partial\alpha}I_{\alpha}(z) \right]_{\alpha=n}. \qquad (2.15) \]
Since $I_\alpha$ is a solution of (2.11) for ν = α,
\begin{align*}
&\Big( z^2\frac{d^2}{dz^2} + z\frac{d}{dz} - z^2 - \alpha^2 \Big) I_\alpha(z) = 0 \\
\implies\;& \frac{\partial}{\partial\alpha}\left[ \Big( z^2\frac{d^2}{dz^2} + z\frac{d}{dz} - z^2 - \alpha^2 \Big) I_\alpha(z) \right] = 0 \\
\implies\;& -2\alpha I_\alpha(z) + \Big( z^2\frac{d^2}{dz^2} + z\frac{d}{dz} - z^2 - \alpha^2 \Big)\frac{\partial}{\partial\alpha}I_\alpha(z) = 0 \\
\implies\;& \Big( z^2\frac{d^2}{dz^2} + z\frac{d}{dz} - z^2 - \alpha^2 \Big)\frac{\partial}{\partial\alpha}I_\alpha(z) = 2\alpha I_\alpha(z).
\end{align*}
On the other hand, we can deduce from (2.14), by substitution, that for any ν ∉ Z,
\[ W(I_\nu(z), K_\nu(z)) = I_\nu(z)K_\nu'(z) - K_\nu(z)I_\nu'(z) = \frac{\pi}{2\sin\nu\pi}\, W(I_\nu(z), I_{-\nu}(z)) = -\frac{1}{z}. \]
Then, by the continuity of this expression with respect to ν, this equality also holds for integer values of ν.
Example 2.5. As we said with the Bessel functions of the second kind, it is hard to find the
general series expression for these functions when ν is an integer, but we can do it for ν = 0.
From (2.15),
1 ∂ ∂ ∂
K0 (z) = I−α (z) − Iα (z) =− Iα (z) .
2 ∂α ∂α α=0 ∂α α=0
By substitution in (2.13), we also have
\[ I_0(z) = \sum_{r=0}^{\infty} \frac{(z/2)^{2r}}{(r!)^2}. \]
Moreover, when z is small, we have that $I_0(z) \sim 1$ and $K_0(z) \sim -\gamma - \log\dfrac{z}{2}$.
2.4 Exercises
1. Prove the following inequality, for all z ∈ C:
\[ |J_n(z)| \leq \frac{1}{n!}\Big( \frac{|z|}{2} \Big)^n e^{(|z|/2)^2}, \qquad \forall n \in \mathbb{Z}. \]
Solution. We only have to bound $|J_n(z)|$ step by step. Using series (2.2),
\begin{align*}
|J_n(z)| = \left| \sum_{r=0}^{\infty} \frac{(-1)^r (z/2)^{n+2r}}{r!\,(n+r)!} \right| &\leq \sum_{r=0}^{\infty} \left| \frac{(-1)^r (z/2)^{n+2r}}{r!\,(n+r)!} \right| = \sum_{r=0}^{\infty} \frac{(|z|/2)^{n+2r}}{r!\,(n+r)!} \\
&= \Big( \frac{|z|}{2} \Big)^n \sum_{r=0}^{\infty} \frac{1}{r!\,(n+r)!}\Big( \frac{|z|}{2} \Big)^{2r} \\
&\leq \Big( \frac{|z|}{2} \Big)^n \sum_{r=0}^{\infty} \frac{1}{r!\,n!}\Big( \frac{|z|}{2} \Big)^{2r} = \frac{1}{n!}\Big( \frac{|z|}{2} \Big)^n e^{(|z|/2)^2}.
\end{align*}
2. Prove that, for any z ∈ C,
\[ \lim_{n\to\infty} J_n(z) = 0. \]
Solution. By Exercise 1,
\[ \lim_{n\to\infty} |J_n(z)| \leq \lim_{n\to\infty} \frac{1}{n!}\Big( \frac{|z|}{2} \Big)^n e^{|z|^2/4} = 0. \]
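Both the bound of Exercise 1 and the decay of Exercise 2 are easy to probe numerically. The sketch below uses the truncated series (2.2) for nonnegative integer orders; the sample points are arbitrary.

```python
import math

def jn(n, z, terms=40):
    # truncated series (2.2) for nonnegative integer order n
    return sum((-1) ** r * (z / 2) ** (n + 2 * r)
               / (math.factorial(r) * math.factorial(n + r))
               for r in range(terms))

for n in range(6):
    for z in (0.3, 1.0, 2.5, 5.0):
        bound = (z / 2) ** n / math.factorial(n) * math.exp((z / 2) ** 2)
        assert abs(jn(n, z)) <= bound + 1e-12

# the bound forces J_n(z) -> 0 as n grows, as in Exercise 2
assert abs(jn(30, 5.0)) < 1e-12
print("bound of Exercise 1 holds on the sample")
```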
3. Prove the following relation:
\[ \frac{d}{dz}\left( \frac{J_{-\nu}(z)}{J_\nu(z)} \right) = -\frac{2\sin\nu\pi}{\pi z\,(J_\nu(z))^2}. \]
As a consequence, $J_\nu$ and $J_{-\nu}$ are linearly dependent if and only if ν ∈ Z:
\begin{align*}
\exists\, C \in \mathbb{C} \text{ s.t. } J_{-\nu}(z) = C J_\nu(z) &\iff \exists\, C \in \mathbb{C} \text{ s.t. } \frac{J_{-\nu}(z)}{J_\nu(z)} = C \\
&\iff \frac{d}{dz}\left( \frac{J_{-\nu}(z)}{J_\nu(z)} \right) = 0 \iff \frac{2\sin\nu\pi}{\pi z\, J_\nu^2(z)} = 0 \iff \nu \in \mathbb{Z}.
\end{align*}
4. Show that if ν ∉ Z,
Similarly,
Moreover,
Similarly,
Chapter 3
In Chapter 2 we presented Bessel functions of several kinds. Now we will focus on some properties of these functions.
First, we will study some recurrence relations between Bessel functions defined in Chapter 2.
We will also see the Bessel functions of the first kind as the coefficients of a generating function.
As a consequence, we will prove some properties of the trigonometric functions. We will also
study an integral formula for the Bessel functions of the first kind.
Finally, we will calculate the solutions to a certain Sturm-Liouville problem, involving Bessel
functions. As a consequence, we will get orthogonal sets of Bessel functions.
Taking derivatives with respect to z on each side,
\[ -\nu z^{-\nu-1} J_\nu(z) + z^{-\nu} J_\nu'(z) = 2^{-\nu} \sum_{r=1}^{\infty} \frac{(-1)^r\, 2r\,(z/2)^{2r-1}}{r!\,\Gamma(\nu+r+1)}\cdot\frac{1}{2}. \]
Thus,
\begin{align*}
\frac{\nu}{z}J_\nu(z) - J_\nu'(z) &= -z^{\nu}\, 2^{-\nu} \sum_{r=1}^{\infty} \frac{(-1)^r\, r\,(z/2)^{2r-1}}{r!\,\Gamma(\nu+r+1)} \\
&= -z^{\nu}\, 2^{-\nu} \sum_{r=0}^{\infty} \frac{(-1)^{r+1}(r+1)(z/2)^{2(r+1)-1}}{(r+1)!\,\Gamma(\nu+r+2)} \\
&= z^{\nu}\, 2^{-\nu} \sum_{r=0}^{\infty} \frac{(-1)^r (z/2)^{2r+1}}{r!\,\Gamma(\nu+r+2)} \\
&= \sum_{r=0}^{\infty} \frac{(-1)^r (z/2)^{\nu+2r+1}}{r!\,\Gamma(\nu+r+2)} = J_{\nu+1}(z).
\end{align*}
Similarly, writing
\[ z^\nu J_\nu(z) = 2^\nu \sum_{r=0}^{\infty} \frac{(-1)^r (z/2)^{2\nu+2r}}{r!\,\Gamma(\nu+r+1)}, \]
an analogous computation gives relation (3.2).
Corollary 3.1. If ν ∈ C,
\begin{align*}
J_{\nu-1}(z) + J_{\nu+1}(z) &= \frac{2\nu}{z}J_\nu(z), \qquad (3.3) \\
J_{\nu-1}(z) - J_{\nu+1}(z) &= 2J_\nu'(z). \qquad\;\;\, (3.4)
\end{align*}
Example 3.1. We take again the example when ν = 0. From (3.1) we get
\[ J_0'(z) = -J_1(z), \]
which gives us another expression for the derivative of $J_0(z)$ with respect to z.
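The recurrences (3.3) and (3.4), and the special case $J_0' = -J_1$, can be checked numerically with truncated series and their term-by-term derivatives; the order and argument below are arbitrary choices.

```python
import math

def jv(v, z, terms=40):
    # truncated series (2.2)
    return sum((-1) ** r * (z / 2) ** (v + 2 * r)
               / (math.factorial(r) * math.gamma(v + r + 1))
               for r in range(terms))

def jv_prime(v, z, terms=40):
    # term-by-term derivative of the truncated series
    return sum((-1) ** r * (v + 2 * r) / 2 * (z / 2) ** (v + 2 * r - 1)
               / (math.factorial(r) * math.gamma(v + r + 1))
               for r in range(terms))

v, z = 2.0, 1.6
assert abs(jv(v - 1, z) + jv(v + 1, z) - 2 * v / z * jv(v, z)) < 1e-12   # (3.3)
assert abs(jv(v - 1, z) - jv(v + 1, z) - 2 * jv_prime(v, z)) < 1e-12     # (3.4)
assert abs(jv_prime(0, z) + jv(1, z)) < 1e-12                            # J_0' = -J_1
print("recurrence relations verified")
```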
Remark. It is worth noting that these formulas can also be applied to Bessel functions of the second and third kinds, as they are nothing but linear combinations of Bessel functions of the first kind (except when ν is an integer; but, because of continuity, the formulas are still valid).
Remark. Notice also that, in the process of getting equation (3.1), we arrived at
\[ \frac{d}{z\,dz}\big( z^{-\nu} J_\nu(z) \big) = -z^{-\nu-1} J_{\nu+1}(z). \]
Taking again the derivative with respect to z and dividing by z,
\[ \Big( \frac{d}{z\,dz} \Big)^2 \big( z^{-\nu} J_\nu(z) \big) = -\frac{d}{z\,dz}\big( z^{-\nu-1} J_{\nu+1}(z) \big) = (-1)^2 z^{-\nu-2} J_{\nu+2}(z). \]
Repeating the process m times,
\[ \Big( \frac{d}{z\,dz} \Big)^m \big( z^{-\nu} J_\nu(z) \big) = (-1)^m z^{-\nu-m} J_{\nu+m}(z). \qquad (3.5) \]
Similarly, from the process of getting (3.2) we also prove
\[ \Big( \frac{d}{z\,dz} \Big)^m \big( z^{\nu} J_\nu(z) \big) = z^{\nu-m} J_{\nu-m}(z). \qquad (3.6) \]
Example 3.2 (Spherical Bessel functions). Bessel functions of the type $J_{n+1/2}$, with n ∈ N ∪ {0}, are often used to solve problems of spherical waves. That is why these functions are called spherical Bessel functions.
In order to calculate the values of these functions, we will first get the value of $J_{1/2}(z)$. Using that, by formula (1.4),
\[ r!\,\Gamma(r+3/2) = \Gamma(r+1)\,\Gamma(r+3/2) = \sqrt{\pi}\, 2^{-2r-1}\, \Gamma(2r+2) = \sqrt{\pi}\, 2^{-2r-1}\,(2r+1)!, \]
we get
\[ J_{1/2}(z) = \sum_{r=0}^{\infty} \frac{(-1)^r (z/2)^{2r+1/2}}{r!\,\Gamma(r+3/2)} = \Big( \frac{2}{\pi z} \Big)^{1/2} \sum_{r=0}^{\infty} \frac{(-1)^r z^{2r+1}}{(2r+1)!} = \Big( \frac{2}{\pi z} \Big)^{1/2} \sin z. \]
Similarly,
\[ J_{-1/2}(z) = \Big( \frac{2}{\pi z} \Big)^{1/2} \cos z. \]
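The closed forms for $J_{1/2}$ and $J_{-1/2}$ can be confirmed numerically against the truncated series (2.2); the sample arguments are arbitrary.

```python
import math

def jv(v, z, terms=40):
    # truncated series (2.2)
    return sum((-1) ** r * (z / 2) ** (v + 2 * r)
               / (math.factorial(r) * math.gamma(v + r + 1))
               for r in range(terms))

for z in (0.5, 1.0, 2.0, 4.0):
    assert abs(jv(0.5, z) - math.sqrt(2 / (math.pi * z)) * math.sin(z)) < 1e-12
    assert abs(jv(-0.5, z) - math.sqrt(2 / (math.pi * z)) * math.cos(z)) < 1e-12
print("closed forms for J_{1/2} and J_{-1/2} verified")
```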
Using (3.5), we get
\begin{align*}
J_{n+1/2}(z) &= (-1)^n z^{n+\frac{1}{2}} \Big( \frac{d}{z\,dz} \Big)^n \big( z^{-1/2} J_{1/2}(z) \big) \\
&= (-1)^n z^{n+\frac{1}{2}} \Big( \frac{d}{z\,dz} \Big)^n \left[ z^{-1/2} \Big( \frac{2}{\pi z} \Big)^{1/2} \sin z \right] \\
&= (-1)^n z^{n+\frac{1}{2}} \Big( \frac{2}{\pi} \Big)^{1/2} \Big( \frac{d}{z\,dz} \Big)^n \frac{\sin z}{z}.
\end{align*}
Similarly, but now using (3.6),
\[ J_{-n-1/2}(z) = z^{n+\frac{1}{2}} \Big( \frac{2}{\pi} \Big)^{1/2} \Big( \frac{d}{z\,dz} \Big)^n \frac{\cos z}{z}. \]
In conclusion, we can express Jn+1/2 and J−n−1/2 with a finite number of sines, cosines and
powers of z.
We can also get Bessel functions of the second and third kinds in this way. For example,
\begin{align*}
Y_{-1/2}(z) &= J_{1/2}(z) = \Big( \frac{2}{\pi z} \Big)^{1/2} \sin z, \\
H^{(1)}_{-1/2}(z) &= J_{-1/2}(z) + iY_{-1/2}(z) = \Big( \frac{2}{\pi z} \Big)^{1/2} \cos z + i\Big( \frac{2}{\pi z} \Big)^{1/2} \sin z = \Big( \frac{2}{\pi z} \Big)^{1/2} e^{iz}, \\
H^{(2)}_{-1/2}(z) &= J_{-1/2}(z) - iY_{-1/2}(z) = \Big( \frac{2}{\pi z} \Big)^{1/2} \cos z - i\Big( \frac{2}{\pi z} \Big)^{1/2} \sin z = \Big( \frac{2}{\pi z} \Big)^{1/2} e^{-iz}.
\end{align*}
We can also give the recurrence formulas for the modified Bessel functions.
Proposition 3.2. Let ν ∈ C. Then,
\[
-I_{\nu+1}(z) = \frac{\nu}{z} I_\nu(z) - I'_\nu(z),
\]
\[
I_{\nu-1}(z) = \frac{\nu}{z} I_\nu(z) + I'_\nu(z),
\]
\[
I_{\nu-1}(z) - I_{\nu+1}(z) = \frac{2\nu}{z} I_\nu(z),
\]
\[
I_{\nu-1}(z) + I_{\nu+1}(z) = 2 I'_\nu(z).
\]
Example 3.3. For ν = 0, we have
\[
I_1(z) = I'_0(z).
\]
Corollary 3.2. If ν ∈ C,
\[
-K_{\nu+1}(z) = -\frac{\nu}{z} K_\nu(z) + K'_\nu(z),
\]
\[
-K_{\nu-1}(z) = \frac{\nu}{z} K_\nu(z) + K'_\nu(z),
\]
\[
K_{\nu-1}(z) - K_{\nu+1}(z) = -\frac{2\nu}{z} K_\nu(z),
\]
\[
K_{\nu-1}(z) + K_{\nu+1}(z) = -2 K'_\nu(z).
\]
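As a quick sanity check (an illustrative sketch, not from the text), the three-term relation of Proposition 3.2, I_{ν−1}(z) − I_{ν+1}(z) = (2ν/z) I_ν(z), can be verified numerically from the series defining I_ν:

```python
import math

def iv(nu, z, terms=40):
    """Partial sum of the series for the modified Bessel function I_nu(z)."""
    return sum((z / 2)**(2*r + nu)
               / (math.factorial(r) * math.gamma(r + nu + 1))
               for r in range(terms))

for nu in (0.3, 1.0, 2.5):
    for z in (0.5, 1.0, 3.0):
        lhs = iv(nu - 1, z) - iv(nu + 1, z)
        rhs = (2 * nu / z) * iv(nu, z)
        assert abs(lhs - rhs) < 1e-10
```

All terms of the I_ν series are positive, so there is no cancellation and the check is well conditioned.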
We can write V^{(1)} as a Fourier series, as it is periodic with respect to φ. So it admits the form
\[
\sum_{n=-\infty}^{\infty} c_n(k\rho)\, e^{in\varphi}.
\]
We can calculate the coefficients c_n(kρ) by using the series form of the exponential function. Making kρ = z and e^{iφ} = t,
\[
V^{(1)}(z,t) = e^{z(t-t^{-1})/2} = e^{zt/2}\, e^{-z/2t}
= \sum_{r=0}^{\infty} \frac{1}{r!}\left(\frac{zt}{2}\right)^{r} \sum_{s=0}^{\infty} \frac{1}{s!}\left(-\frac{z}{2t}\right)^{s}
= \sum_{r=0}^{\infty}\sum_{s=0}^{\infty} \frac{(-1)^s}{r!\,s!}\left(\frac{z}{2}\right)^{r+s} t^{r-s}.
\]
As t = e^{iφ}, we are looking for the coefficients of this power series. If we make n = r − s, then the coefficient of t^n for n ≥ 0 is
\[
\sum_{s=0}^{\infty} \frac{(-1)^s}{(n+s)!\,s!}\left(\frac{z}{2}\right)^{n+2s} = J_n(z).
\]
On the other hand, the coefficient of t^{−n} is, using relation (2.5),
\[
\sum_{s=n}^{\infty} \frac{(-1)^s}{(-n+s)!\,s!}\left(\frac{z}{2}\right)^{-n+2s} = J_{-n}(z).
\]
Hence,
\[
V^{(1)}(z,t) = e^{z(t-t^{-1})/2} = \sum_{n=-\infty}^{\infty} J_n(z)\, t^{n} = V^{(2)}(z,t). \tag{3.9}
\]
Because of this relation, the J_n(z) are called the Bessel coefficients, and V^{(1)} is called the generating function of the Bessel coefficients.
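Relation (3.9) can be checked numerically by truncating both sides (a hedged sketch, not part of the text; `jn` sums the series for integer order, and J_{−n} = (−1)^n J_n is relation (2.5)):

```python
import math

def jn(n, z, terms=40):
    """Series for J_n(z), integer n; uses J_{-n} = (-1)^n J_n for n < 0."""
    if n < 0:
        return (-1)**(-n) * jn(-n, z, terms)
    return sum((-1)**r * (z / 2)**(2*r + n)
               / (math.factorial(r) * math.factorial(r + n))
               for r in range(terms))

z, t = 1.5, 0.7
lhs = math.exp(z * (t - 1 / t) / 2)          # generating function
rhs = sum(jn(n, z) * t**n for n in range(-20, 21))
assert abs(lhs - rhs) < 1e-10
```

The Bessel coefficients decay super-exponentially in |n| for fixed z, so a short symmetric truncation of the Laurent series already matches the exponential to machine precision.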
Relation (3.9) was used by Jacobi to get the following equalities of trigonometric functions.
Proposition 3.3. Let z ∈ C and φ ∈ R. Then,
\[
\text{(i)}\quad \cos(z\sin\varphi) = J_0(z) + 2\sum_{n=1}^{\infty} J_{2n}(z)\cos(2n\varphi).
\]
\[
\text{(ii)}\quad \sin(z\sin\varphi) = 2\sum_{n=0}^{\infty} J_{2n+1}(z)\sin((2n+1)\varphi).
\]
\[
\text{(iii)}\quad \cos(z\cos\varphi) = J_0(z) + 2\sum_{n=1}^{\infty} (-1)^n J_{2n}(z)\cos(2n\varphi).
\]
\[
\text{(iv)}\quad \sin(z\cos\varphi) = 2\sum_{n=0}^{\infty} (-1)^n J_{2n+1}(z)\cos((2n+1)\varphi).
\]
Proof. Taking t = e^{iφ} in (3.9) gives e^{iz\sin\varphi} = \sum_{n=-\infty}^{\infty} J_n(z)\, e^{in\varphi}. Equating real and imaginary parts, we prove (i) and (ii). Moreover, taking φ̃ = −φ + π/2, we obtain (iii) and (iv).
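For instance, identity (i) can be tested numerically with truncated series (an illustrative sketch only; `jn` is the partial sum of the series for integer order):

```python
import math

def jn(n, z, terms=40):
    """Partial sum of the series for J_n(z), integer n >= 0."""
    return sum((-1)**r * (z / 2)**(2*r + n)
               / (math.factorial(r) * math.factorial(r + n))
               for r in range(terms))

z = 2.0
for phi in (0.0, 0.4, 1.1, 2.7):
    lhs = math.cos(z * math.sin(phi))
    rhs = jn(0, z) + 2 * sum(jn(2*n, z) * math.cos(2*n*phi)
                             for n in range(1, 15))
    assert abs(lhs - rhs) < 1e-10
```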
The Bessel coefficients admit an integral representation, known as Bessel's integral formula: for n ∈ N ∪ {0},
\[
J_n(z) = \frac{1}{\pi}\int_0^{\pi} \cos(n\theta - z\sin\theta)\, d\theta.
\]
Proof. By Proposition 3.3, we know that
\[
\cos(z\sin\theta) = J_0(z) + 2\sum_{n=1}^{\infty} J_{2n}(z)\cos(2n\theta),
\qquad
\sin(z\sin\theta) = 2\sum_{n=0}^{\infty} J_{2n+1}(z)\sin((2n+1)\theta).
\]
As the previous functions are even and odd, respectively, with respect to θ, we can see them as Fourier cosine and sine series, respectively. Then,
\[
\cos(z\sin\theta) = \frac{a_0(z)}{2} + \sum_{n=1}^{\infty} a_n(z)\cos n\theta,
\qquad
\sin(z\sin\theta) = \sum_{n=1}^{\infty} b_n(z)\sin n\theta,
\]
where
\[
a_n(z) = \begin{cases} 2J_n(z), & \text{if } n \text{ is even}, \\ 0, & \text{if } n \text{ is odd}, \end{cases}
\qquad
b_n(z) = \begin{cases} 0, & \text{if } n \text{ is even}, \\ 2J_n(z), & \text{if } n \text{ is odd}. \end{cases}
\]
Notice that
\[
\frac{a_n(z) + b_n(z)}{2} = J_n(z), \qquad n \in \mathbb{N}\cup\{0\}.
\]
On the other hand, if we compute the Fourier coefficients for these cosine and sine series,
\[
a_n(z) = \frac{2}{\pi}\int_0^{\pi} \cos(z\sin\theta)\cos n\theta\, d\theta
\qquad\text{and}\qquad
b_n(z) = \frac{2}{\pi}\int_0^{\pi} \sin(z\sin\theta)\sin n\theta\, d\theta.
\]
Hence,
\[
\frac{a_n(z) + b_n(z)}{2}
= \frac{1}{\pi}\int_0^{\pi} \big( \cos(z\sin\theta)\cos n\theta + \sin(z\sin\theta)\sin n\theta \big)\, d\theta
= \frac{1}{\pi}\int_0^{\pi} \cos(n\theta - z\sin\theta)\, d\theta,
\]
and therefore
\[
J_n(z) = \frac{1}{\pi}\int_0^{\pi} \cos(n\theta - z\sin\theta)\, d\theta.
\]
Equating with the coefficients of the Fourier series,
\[
J_n(z) = \begin{cases}
\dfrac{1}{\pi}\displaystyle\int_0^{\pi} \cos(z\sin\theta)\cos n\theta\, d\theta, & \text{if } n \text{ is even}, \\[8pt]
\dfrac{1}{\pi}\displaystyle\int_0^{\pi} \sin(z\sin\theta)\sin n\theta\, d\theta, & \text{if } n \text{ is odd}.
\end{cases}
\]
Since the integrands are symmetric with respect to θ = π/2 in the interval (0, π),
\[
J_n(z) = \begin{cases}
\dfrac{2}{\pi}\displaystyle\int_0^{\pi/2} \cos(z\sin\theta)\cos n\theta\, d\theta, & \text{if } n \text{ is even}, \\[8pt]
\dfrac{2}{\pi}\displaystyle\int_0^{\pi/2} \sin(z\sin\theta)\sin n\theta\, d\theta, & \text{if } n \text{ is odd}.
\end{cases}
\]
These integral expressions for Jn (z) can be used to make numerical approximations of the
series (2.2) via methods such as Simpson’s rule.
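The text mentions Simpson's rule; a minimal illustration of that remark (not from the original) is to approximate (1/π)∫₀^π cos(nθ − z sin θ) dθ with a composite Simpson rule and compare it against the partial sums of the series (2.2):

```python
import math

def jn_series(n, z, terms=40):
    """Partial sum of the series (2.2) for integer order n."""
    return sum((-1)**r * (z / 2)**(2*r + n)
               / (math.factorial(r) * math.factorial(r + n))
               for r in range(terms))

def jn_integral(n, z, panels=200):
    """Composite Simpson approximation of (1/pi) * int_0^pi cos(n t - z sin t) dt."""
    h = math.pi / (2 * panels)
    xs = [i * h for i in range(2 * panels + 1)]
    w = [1 if i in (0, 2 * panels) else (4 if i % 2 else 2)
         for i in range(2 * panels + 1)]
    s = sum(wi * math.cos(n * x - z * math.sin(x)) for wi, x in zip(w, xs))
    return s * h / (3 * math.pi)

for n in (0, 1, 3):
    for z in (0.5, 2.0, 5.0):
        assert abs(jn_integral(n, z) - jn_series(n, z)) < 1e-6
```

The integrand is smooth and periodic, so even a modest number of Simpson panels reproduces the series values to well below the tolerance used here.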
However, because the function is defined by a series, the partial sums only provide a good approximation of Bessel functions of the first kind when z is small. Therefore, we now want to approximate J_ν(z) for large values of z, using other expressions that are easier to handle.
First, take g(x) = x^{1/2} f(x), where f is a solution of Bessel's equation (2.1). Then,
\[
x^{3/2} g''(x) + x^{3/2} g(x) + \left(\tfrac{1}{4} - \nu^2\right) x^{-1/2} g(x) = 0.
\]
Multiplying by 1/x^{3/2},
\[
g''(x) + g(x) + \frac{\frac{1}{4} - \nu^2}{x^2}\, g(x) = 0.
\]
So, when x is large, the last summand tends to zero, and thus
g 00 (x) + g(x) ≈ 0.
But the solutions to the equation g 00 (x)+g(x) = 0 are combinations of trigonometric functions
of the form
A cos x + B sin x, A, B ∈ R.
Those solutions can also be written in the form α sin (x + β), where α and β are again real
constants. So, we can guess that Bessel functions can be approximated using Sine or Cosine
functions when x is large. The following result gives us that approximation.
Theorem 3.2. Let ν be a real constant. Then, there exists a constant C_ν such that, for all x ≥ 1,
\[
J_\nu(x) = \sqrt{\frac{2}{\pi x}}\,\cos\!\left(x - \frac{\nu\pi}{2} - \frac{\pi}{4}\right) + E_\nu(x),
\qquad |E_\nu(x)| \le \frac{C_\nu}{x^{3/2}}. \tag{3.10}
\]
Proof. This proof requires more advanced techniques, and can be found in Chapters VII and
VIII of [4].
This result lets us approximate the Bessel function of the first kind of order ν ∈ R and
argument x ≥ 1, giving us an upper bound for the error. Moreover, this error decreases as x
tends to infinity. With this result we can also give approximate values for other functions seen
in Chapter 2.
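The quality of approximation (3.10) can be inspected numerically. The sketch below is illustrative only: the series partial sums lose accuracy for very large x in floating point, so moderate values of x are used, and the bound is tested with the (hypothetical) choice C_ν = 1.

```python
import math

def jv(nu, z, terms=60):
    """Partial sum of the series (2.2) for J_nu(z)."""
    return sum((-1)**r * (z / 2)**(2*r + nu)
               / (math.factorial(r) * math.gamma(r + nu + 1))
               for r in range(terms))

def asym(nu, x):
    """Leading term of the asymptotic expansion (3.10)."""
    return math.sqrt(2 / (math.pi * x)) * math.cos(x - nu*math.pi/2 - math.pi/4)

for nu in (0.0, 1.0):
    for x in (10.0, 15.0):
        err = abs(jv(nu, x) - asym(nu, x))
        # consistent with |E_nu(x)| <= C_nu / x^(3/2) for a moderate C_nu
        assert err < 1.0 / x**1.5
```

Note that the pointwise error oscillates with x; only the envelope C_ν/x^{3/2} is guaranteed to decrease.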
Corollary 3.3. If ν is a real constant, for any x ≥ 1,
\[
\text{(i)}\quad Y_\nu(x) \approx \sqrt{\frac{2}{\pi x}}\,\sin\!\left(x - \frac{\nu\pi}{2} - \frac{\pi}{4}\right),
\]
\[
\text{(ii)}\quad J'_\nu(x) \approx -\sqrt{\frac{2}{\pi x}}\,\sin\!\left(x - \frac{\nu\pi}{2} - \frac{\pi}{4}\right).
\]
Proof. In order to prove (i) we have to take into account the definition of Y_ν (equation (2.6)) when ν is not an integer. Using the trigonometric formula for the cosine of a sum,
\[
\cos\!\left(x + \frac{\nu\pi}{2} - \frac{\pi}{4}\right)
= \cos(\nu\pi)\cos\!\left(x - \frac{\nu\pi}{2} - \frac{\pi}{4}\right)
- \sin(\nu\pi)\sin\!\left(x - \frac{\nu\pi}{2} - \frac{\pi}{4}\right),
\]
and Theorem 3.2 applied to J_{−ν}, we find that
\[
Y_\nu(x) \approx \frac{1}{\sin(\nu\pi)}\left[ \cos(\nu\pi)\sqrt{\frac{2}{\pi x}}\cos\!\left(x - \frac{\nu\pi}{2} - \frac{\pi}{4}\right) - \sqrt{\frac{2}{\pi x}}\cos\!\left(x + \frac{\nu\pi}{2} - \frac{\pi}{4}\right) \right]
\]
\[
= \frac{\sqrt{2/(\pi x)}}{\sin(\nu\pi)}\left[ \cos(\nu\pi)\cos\!\left(x - \frac{\nu\pi}{2} - \frac{\pi}{4}\right) - \cos\!\left(x + \frac{\nu\pi}{2} - \frac{\pi}{4}\right) \right]
= \frac{\sqrt{2/(\pi x)}}{\sin(\nu\pi)}\,\sin(\nu\pi)\sin\!\left(x - \frac{\nu\pi}{2} - \frac{\pi}{4}\right)
= \sqrt{\frac{2}{\pi x}}\,\sin\!\left(x - \frac{\nu\pi}{2} - \frac{\pi}{4}\right).
\]
By an argument of continuity, for all ν ∈ R, we get
\[
Y_\nu(x) \approx \sqrt{\frac{2}{\pi x}}\,\sin\!\left(x - \frac{\nu\pi}{2} - \frac{\pi}{4}\right).
\]
To prove part (ii), we use the recurrence relation (3.2). On the one hand,
\[
J_{\nu-1}(x) = \sqrt{\frac{2}{\pi x}}\cos\!\left(x - \frac{(\nu-1)\pi}{2} - \frac{\pi}{4}\right) + E_{\nu-1}(x)
= \sqrt{\frac{2}{\pi x}}\cos\!\left(x - \frac{\nu\pi}{2} - \frac{\pi}{4} + \frac{\pi}{2}\right) + E_{\nu-1}(x)
= -\sqrt{\frac{2}{\pi x}}\sin\!\left(x - \frac{\nu\pi}{2} - \frac{\pi}{4}\right) + E_{\nu-1}(x).
\]
On the other hand,
\[
\left| \frac{\nu}{x} J_\nu(x) \right| \le \frac{|\nu|}{x}\sqrt{\frac{2}{\pi x}} + \frac{|\nu|\,C_\nu}{x^{5/2}}
\le \frac{\left(\sqrt{2/\pi} + C_\nu\right)|\nu|}{x^{3/2}}.
\]
Combining the two results with J'_\nu(x) = J_{\nu-1}(x) - (\nu/x)J_\nu(x), we obtain (ii).
Lemma 3.1. Let f be a differentiable function such that
\[
|f(x) - \cos x| \le \varepsilon \qquad\text{and}\qquad |f'(x) + \sin x| \le \varepsilon \qquad\text{for all } x \ge M\pi, \tag{3.12}
\]
where 0 < ε < 1 and M ∈ R. Then, for all integers m ≥ M, f has exactly one zero z_m in each interval [mπ, (m + 1)π]. Moreover, z_m ∼ (m + 1/2)π.
Proof. The hypothesis |f(x) − cos x| ≤ ε implies, by evaluating f at mπ and (m + 1)π (both greater than Mπ),
\[
|f(m\pi) - (-1)^m| \le \varepsilon, \qquad |f((m+1)\pi) - (-1)^{m+1}| \le \varepsilon.
\]
Consequently,
\[
f(m\pi) \in \big((-1)^m - \varepsilon,\ (-1)^m + \varepsilon\big),
\qquad
f((m+1)\pi) \in \big((-1)^{m+1} - \varepsilon,\ (-1)^{m+1} + \varepsilon\big).
\]
Since by hypothesis ε < 1, f((m + 1)π) and f(mπ) have different signs. Because f is continuous (it is differentiable), by Bolzano's theorem it vanishes somewhere in (mπ, (m + 1)π).
So we have proved the existence of a zero of f in the interval (mπ, (m + 1)π), where m ≥ M. Now, let z_m be a zero of f in that interval. From the first equation of (3.12) we get
\[
|\cos z_m| = |f(z_m) - \cos z_m| \le \varepsilon,
\]
so z_m lies close to (m + 1/2)π. Moreover, since (m + 1/2)π ≥ Mπ and sin((m + 1/2)π) = cos(mπ) = (−1)^m,
\[
\left| f'\!\left(\left(m + \tfrac{1}{2}\right)\pi\right) + (-1)^m \right| \le \varepsilon.
\]
This implies
\[
f'\!\left(\left(m + \tfrac{1}{2}\right)\pi\right) \in (-1-\varepsilon, -1+\varepsilon) \cup (1-\varepsilon, 1+\varepsilon),
\]
so f' does not vanish around (m + 1/2)π. Hence, f is strictly increasing or decreasing near (m + 1/2)π, and thus there is exactly one zero in the interval (mπ, (m + 1)π).
If b = 0, we want to get the zeros of J_ν(x). In this case, we can easily get the asymptotic expression for the zeros of (3.11) from the asymptotics of J_ν(x), that is, from identity (3.10). In fact,
\[
J_\nu(x) \approx \sqrt{\frac{2}{\pi x}}\,\cos\!\left(x - \frac{\nu\pi}{2} - \frac{\pi}{4}\right)
\]
implies that a large x can satisfy J_ν(x) = 0 only if cos(x − νπ/2 − π/4) is near zero.
In fact, we can apply Lemma 3.1 to prove what we have stated. Let us consider the function
\[
f(x) = \tilde{x}^{1/2} J_\nu(\tilde{x}), \qquad \tilde{x} = x + \frac{\nu\pi}{2} + \frac{\pi}{4}.
\]
By (3.10),
\[
f(x) \approx \tilde{x}^{1/2}\sqrt{\frac{2}{\pi \tilde{x}}}\,\cos\!\left(\tilde{x} - \frac{\nu\pi}{2} - \frac{\pi}{4}\right)
= \tilde{x}^{1/2}\sqrt{\frac{2}{\pi \tilde{x}}}\,\cos x = \sqrt{\frac{2}{\pi}}\,\cos x.
\]
Since
\[
f'(x) = \tilde{x}^{1/2} J'_\nu(\tilde{x}) + \frac{1}{2}\tilde{x}^{-1/2} J_\nu(\tilde{x}),
\]
and by (3.10) and Corollary 3.3,
\[
\tilde{x}^{1/2} J'_\nu(\tilde{x}) \approx -\tilde{x}^{1/2}\sqrt{\frac{2}{\pi \tilde{x}}}\,\sin\!\left(\tilde{x} - \frac{\nu\pi}{2} - \frac{\pi}{4}\right)
= -\tilde{x}^{1/2}\sqrt{\frac{2}{\pi \tilde{x}}}\,\sin x = -\sqrt{\frac{2}{\pi}}\,\sin x
\]
and
\[
\tilde{x}^{-1/2} J_\nu(\tilde{x}) \approx \tilde{x}^{-1/2}\sqrt{\frac{2}{\pi \tilde{x}}}\,\cos\!\left(\tilde{x} - \frac{\nu\pi}{2} - \frac{\pi}{4}\right)
= \frac{1}{\tilde{x}}\sqrt{\frac{2}{\pi}}\,\cos x \approx 0
\]
for large values of x. Thus,
\[
f'(x) \approx -\sqrt{\frac{2}{\pi}}\,\sin x
\]
for large values of x.
As we have seen before, these errors in approximation tend to zero when x tends to infinity.
Applying Lemma 3.1, for x ≥ M π, we can approximate the zeros of f (x). In fact, for a large
enough m, there exists a unique zero zm in each interval [mπ, (m + 1)π] and zm ∼ (m + 1/2)π.
Thus,
\[
\tilde{z}_m \sim \left(m + \frac{1}{2}\right)\pi + \frac{\nu\pi}{2} + \frac{\pi}{4} = \left(m + \frac{\nu}{2} + \frac{3}{4}\right)\pi.
\]
So we conclude that the zeros of J_ν, which are in fact the zeros of x^{1/2} J_ν(x), are approximated by (m + ν/2 + 3/4)π.
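The approximation of the zeros can be observed numerically (an illustrative sketch, not from the text): bracket each zero of J_0 near (m + 3/4)π and refine it by bisection on the partial sums of the series.

```python
import math

def j0(x, terms=60):
    """Partial sum of the series for J_0(x)."""
    return sum((-1)**r * (x / 2)**(2*r) / math.factorial(r)**2
               for r in range(terms))

def bisect(f, a, b, iters=80):
    """Plain bisection; assumes f(a) and f(b) have opposite signs."""
    fa = f(a)
    for _ in range(iters):
        m = 0.5 * (a + b)
        if fa * f(m) <= 0:
            b = m
        else:
            a, fa = m, f(m)
    return 0.5 * (a + b)

# zeros of J_0 versus the approximation (m + 3/4) * pi  (nu = 0 here)
errors = []
for m in range(5):
    approx = (m + 0.75) * math.pi
    zero = bisect(j0, approx - 0.6, approx + 0.6)
    errors.append(abs(zero - approx))
assert errors[0] < 0.06 and errors[4] < errors[0]
```

The error of the (m + 3/4)π approximation shrinks as m grows, in line with the asymptotic statement above.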
Now, we consider the case in which b ≠ 0. We can apply again Lemma 3.1, but now to the function
\[
f(x) = c\,\tilde{x}^{-1/2} J_\nu(\tilde{x}) + \tilde{x}^{1/2} J'_\nu(\tilde{x}),
\qquad \tilde{x} = x + \frac{\nu\pi}{2} - \frac{\pi}{4}
\quad\text{and}\quad c = \frac{a}{b}.
\]
On the one hand,
\[
\tilde{x}^{1/2} J'_\nu(\tilde{x}) \approx -\tilde{x}^{1/2}\sqrt{\frac{2}{\pi\tilde{x}}}\,\sin\!\left(\tilde{x} - \frac{\nu\pi}{2} - \frac{\pi}{4}\right)
= -\tilde{x}^{1/2}\sqrt{\frac{2}{\pi\tilde{x}}}\,\sin\!\left(x - \frac{\pi}{2}\right)
= -\sqrt{\frac{2}{\pi}}\,(-\cos x)
= \sqrt{\frac{2}{\pi}}\,\cos x
\]
for large values of x. On the other hand, by (3.10),
\[
\tilde{x}^{-1/2} J_\nu(\tilde{x}) \approx \tilde{x}^{-1/2}\sqrt{\frac{2}{\pi\tilde{x}}}\,\cos\!\left(\tilde{x} - \frac{\nu\pi}{2} - \frac{\pi}{4}\right)
= \frac{1}{\tilde{x}}\sqrt{\frac{2}{\pi}}\,\sin x \approx 0.
\]
Thus,
\[
f(x) \approx \sqrt{\frac{2}{\pi}}\,\cos x.
\]
Taking the derivative of f with respect to x,
\[
f'(x) = -\frac{c}{2}\tilde{x}^{-3/2} J_\nu(\tilde{x}) + c\,\tilde{x}^{-1/2} J'_\nu(\tilde{x}) + \frac{1}{2}\tilde{x}^{-1/2} J'_\nu(\tilde{x}) + \tilde{x}^{1/2} J''_\nu(\tilde{x}).
\]
Since J_ν satisfies Bessel's equation (2.1), we know that
\[
\tilde{x}^{1/2} J''_\nu(\tilde{x}) = -\tilde{x}^{-1/2} J'_\nu(\tilde{x}) - \tilde{x}^{1/2} J_\nu(\tilde{x}) + \nu^2 \tilde{x}^{-3/2} J_\nu(\tilde{x}).
\]
Then,
\[
f'(x) = \left(c + \frac{1}{2}\right)\tilde{x}^{-1/2} J'_\nu(\tilde{x}) + \tilde{x}^{1/2} J''_\nu(\tilde{x}) - \frac{c}{2}\tilde{x}^{-3/2} J_\nu(\tilde{x})
\]
\[
= \left(c + \frac{1}{2}\right)\tilde{x}^{-1/2} J'_\nu(\tilde{x}) - \tilde{x}^{-1/2} J'_\nu(\tilde{x}) - \tilde{x}^{1/2} J_\nu(\tilde{x}) + \nu^2 \tilde{x}^{-3/2} J_\nu(\tilde{x}) - \frac{c}{2}\tilde{x}^{-3/2} J_\nu(\tilde{x})
\approx -\tilde{x}^{1/2} J_\nu(\tilde{x}) \approx -\sqrt{\frac{2}{\pi}}\,\sin x.
\]
Finally, applying Lemma 3.1 to √(π/2) f(x), we get that f(x) has zeros (its zeros and the zeros of √(π/2) f(x) coincide) when x ∼ (m + 1/2)π, that is, when
\[
\tilde{x} \sim \left(m + \frac{1}{2}\right)\pi + \frac{\nu\pi}{2} - \frac{\pi}{4} = \left(m + \frac{\nu}{2} + \frac{1}{4}\right)\pi.
\]
Lemma 3.2. Let ν ≥ 0, a, b ≥ 0 with (a, b) 6= (0, 0), and ων the smallest positive zero of
aJν (x) + bxJν0 (x). Then, ων > ν.
Theorem 3.3. If ν ≥ 0, a, b ≥ 0, (a, b) ≠ (0, 0), and {λ_i}_{i∈N} are the positive zeros of aJ_ν(x) + bxJ'_ν(x), arranged in increasing order, then
(i) λ_1 > ν.
(ii) If b = 0, there exists an integer M_ν such that
\[
\lambda_k \sim \left(M_\nu + k + \frac{\nu}{2} + \frac{3}{4}\right)\pi \quad\text{as } k \to \infty.
\]
(iii) If b > 0, there exists an integer M_ν such that
\[
\lambda_k \sim \left(M_\nu + k + \frac{\nu}{2} + \frac{1}{4}\right)\pi \quad\text{as } k \to \infty.
\]
Proof. Part (i) is a direct consequence of Lemma 3.2, while (ii) and (iii) follow from Lemma 3.1, distinguishing the two cases as above.
If g is a solution of Bessel's equation and f(x) = g(µx), then
\[
x^2 f''(x) + x f'(x) + (\mu^2 x^2 - \nu^2) f(x)
= (\mu x)^2 g''(\mu x) + (\mu x)\, g'(\mu x) + \big((\mu x)^2 - \nu^2\big) g(\mu x) = 0.
\]
Dividing by x,
\[
x f''(x) + f'(x) - \frac{\nu^2}{x} f(x) + \mu^2 x f(x)
= \big(x f'(x)\big)' - \frac{\nu^2}{x} f(x) + \mu^2 x f(x) = 0. \tag{3.15}
\]
So we get a Sturm–Liouville equation of the form
\[
(r f')' + p f + \mu^2 w f = 0, \qquad\text{where}\quad r(x) = x, \quad p(x) = -\frac{\nu^2}{x}, \quad w(x) = x.
\]
If we take this equation on a closed interval [a, b] ⊆ (0, ∞) with boundary conditions
\[
\alpha f(a) + \alpha' f'(a) = 0, \qquad \beta f(b) + \beta' f'(b) = 0,
\]
we get a regular Sturm–Liouville problem. Since J_ν and Y_ν are linearly independent solutions, the eigenfunctions of the problem will be of the type
\[
f(x) = c_\mu J_\nu(\mu x) + d_\mu Y_\nu(\mu x). \tag{3.16}
\]
The boundary conditions let us fix the constants c_µ and d_µ in equation (3.16), and we would obtain an orthogonal basis of L²_w(a, b), with w(x) = x. However, it is neither easy nor interesting to get these coefficients.
Our main goal will be to find the solutions of the Sturm–Liouville equation on the interval [0, b] when ν ≥ 0. At x = 0, r vanishes and p is infinite, so it is not a regular Sturm–Liouville problem and we cannot impose boundary conditions at 0. Indeed, since the functions Y_ν and their derivatives take infinite values at x = 0, we must make d_µ = 0 in (3.16) to guarantee that the solution is
continuous at that point. However, at x = b we can impose a condition βf(b) + β'f'(b) = 0. The problem is as follows:
\[
x f''(x) + f'(x) - \frac{\nu^2}{x} f(x) + \mu^2 x f(x) = 0,
\]
\[
f(0^+) \text{ exists and is finite}, \tag{3.17}
\]
\[
\beta f(b) + \beta' f'(b) = 0.
\]
Although this is a singular problem, it preserves some of the properties of a regular Sturm–Liouville problem. First, we want to check that the linear transformation L(f) = (xf')' − (ν²/x)f is self-adjoint. We shall take two eigenfunctions f and g. Writing L(f) = rf'' + qf' + pf with r(x) = x, q(x) = 1 and p(x) = −ν²/x, the formal adjoint is
\[
L^*(f) = r f'' + (2r' - q) f' + (r'' - q' + p) f = x f'' + f' - \frac{\nu^2}{x} f = L(f).
\]
So it is formally self-adjoint. Therefore, we can apply Lagrange's identity:
\[
\langle L(f), g \rangle - \langle f, L(g) \rangle = \Big[\, x f'(x)\, g(x) - x f(x)\, g'(x) \,\Big]_{0}^{b}.
\]
The evaluation at x = b is done taking into account the boundary condition at that point. In fact, for any solution h of the problem,
\[
\beta h(b) + \beta' h'(b) = 0 \implies
\begin{cases}
h(b) = -\dfrac{\beta'}{\beta}\, h'(b), & \text{if } \beta \neq 0, \\[6pt]
h'(b) = 0, & \text{if } \beta = 0,
\end{cases}
\]
and in either case the bracket vanishes at x = b. Since
\[
J_\nu(x) \approx \frac{1}{2^\nu \Gamma(\nu+1)}\, x^{\nu}, \qquad J'_\nu(x) \approx \frac{1}{2^\nu \Gamma(\nu)}\, x^{\nu-1}
\]
when x is small, both J_ν(x) and xJ'_ν(x) behave like multiples of x^ν near the origin. Since ν > 0, when x tends to zero,
\[
x f'(x)\, g(x) - x f(x)\, g'(x) \longrightarrow 0.
\]
If ν = 0, then f(0) = g(0) = 1 and f'(0) = g'(0) = 0, so the evaluation at x = 0 vanishes in this case as well. Hence ⟨L(f), g⟩ = ⟨f, L(g)⟩.
Once we have proved this and although problem (3.17) is singular at x = 0, Theorem 1.1
concerning the reality of the eigenvalues and the orthogonality of the eigenfunctions with respect
to the weight function w(x) = x still holds.
It can also be proved that the space of solutions of (3.17) is one-dimensional. In fact, the
fundamental existence theorem for ordinary differential equations says that for any constants c1
and c2 there exists a unique solution of L(f ) + µ2 wf = 0 such that f (a) = c1 and f 0 (a) = c2 .
That is, a solution is specified by two arbitrary constants, so the space of the solutions is 2-
dimensional. However, by imposing the boundary condition βc1 + β 0 c2 = 0, we create a linear
relation between c1 and c2 and therefore the space of solutions is 1-dimensional.
Since f (x) = Jν (µx), we have f 0 (x) = µJν0 (µx). Hence, the solutions of (3.17) are functions
Jν (µx) such that
βJν (µb) + β 0 µJν0 (µb) = 0. (3.18)
If we write λ = µb, we distinguish two cases:
• If β 0 = 0, we have
Jν (λ) = 0. (3.19)
• If β 0 6= 0, denoting c = bβ/β 0 ,
cJν (λ) + λJν0 (λ) = 0. (3.20)
We solved this problem for λ > 0 in Section 3.5. We got a sequence {λ_i}_{i∈N} of zeros, which gives us the eigenvalues {µ_i²}_{i∈N} = {λ_i²/b²}_{i∈N}. It only remains to see whether the problem has any nonpositive eigenvalues.
Lemma 3.3. Zero is an eigenvalue of (3.17) if and only if β/β 0 = −ν/b, in which case the
eigenfunction is f (x) = xν . If β 0 = 0, or β/β 0 ≥ −ν/b, there are no negative eigenvalues.
Proof. First, let us prove the initial statement. If zero is an eigenvalue, there exists a non-zero function f that solves (3.17) with µ = 0. We distinguish two cases. If ν = 0, the equation reduces to
\[
x f''(x) + f'(x) = 0,
\]
so the general solution is obtained easily using the change of variable g(x) = f'(x). We get g(x) = −x g'(x), so g(x) = C₁/x (where C₁ is a constant) and therefore
\[
f(x) = C_1 \log x + C_2, \qquad C_1, C_2 \in \mathbb{R}.
\]
If the function f is also to satisfy the boundary condition at the origin, the constant C1 must be
zero, and thus f 0 (x) = 0. Since we want f to verify βf (b) + β 0 f 0 (b) = βC2 = 0 (and we are look-
ing for functions that are not null), we get that µ = 0 is an eigenvalue of (3.17) for ν = 0 iff β = 0.
If ν > 0, the equation is an Euler equation. Since ν and −ν are the roots of the polynomial r(r − 1) + r − ν², the solutions given by Euler's method are
\[
f(x) = C_1 x^{\nu} + C_2 x^{-\nu}, \qquad C_1, C_2 \in \mathbb{R}.
\]
Therefore, in order for f (0+ ) to be finite, C2 must be zero. Regarding the second boundary
condition,
βC1 bν + β 0 C1 νbν−1 = C1 bν−1 (βb + β 0 ν) = 0,
and since b > 0, βb + β 0 ν = 0. If β 0 = 0, then β must be zero and we would have no boundary
condition. Thus, β 6= 0 and since b 6= 0, the necessary and sufficient condition we get is
β ν
=− .
β0 b
Since the condition when ν = 0 is included here, we have proved the first part of the lemma.
Moreover, the eigenfunctions we get are x⁰ = 1 when ν = 0 and x^ν when ν > 0.
Let us focus now on the second part. We take now negative values of µ2 . We can write
µ = iκ, with κ > 0. Then, the general solution of the equation of problem (3.17) is
C1 Jν (iκx) + C2 Yν (iκx), C1 , C2 ∈ R.
Again, the boundary condition at the origin fixes C2 = 0, as Yν (z) blows up at z = 0. The
condition at b can be written as (3.19) or (3.20), depending on the value of β 0 , where λ = iκb
and c = bβ/β 0 . If we denote y = κb > 0, we want the solutions to verify one of these equations.
Recall that J_ν(iy) = i^ν I_ν(y) and that, by (3.1), λJ'_ν(λ) = νJ_ν(λ) − λJ_{ν+1}(λ). Hence,
\[
iy\, J'_\nu(iy) = \nu J_\nu(iy) - iy\, J_{\nu+1}(iy) = \nu i^{\nu} I_\nu(y) - i^{\nu+2} y\, I_{\nu+1}(y) = i^{\nu}\big[ \nu I_\nu(y) + y\, I_{\nu+1}(y) \big].
\]
Therefore, the boundary conditions at b can be written as
\[
I_\nu(y) = 0 \qquad\text{or}\qquad (c + \nu)\, I_\nu(y) + y\, I_{\nu+1}(y) = 0,
\]
where y > 0. But since y > 0, by definition we have I_ν(y) > 0, so when β' = 0 there are no solutions of problem (3.17). Furthermore, if β' ≠ 0, since I_{ν+1}(y) > 0 and c + ν = (bβ/β') + ν ≥ 0 by hypothesis, the corresponding boundary condition cannot be satisfied either. As a consequence, there are no negative eigenvalues under the stated conditions.
So we have a family of orthogonal sets of functions on the interval [0, b] with respect to w(x) = x. These functions are
\[
f_k(x) = J_\nu\!\left(\frac{\lambda_k x}{b}\right). \tag{3.21}
\]
The functions in (3.21) are real, so the norm of these functions is
\[
\|f_k\|_w^2 = \int_0^b |f_k(x)|^2\, x\, dx = \int_0^b (f_k(x))^2\, x\, dx.
\]
Now we integrate both sides of the equation from 0 to b, using integration by parts on the right-hand side:
\[
\big(x f'(x)\big)^2 \Big|_{x=0}^{b} = (\nu^2 - \mu^2 x^2)\,(f(x))^2 \Big|_{x=0}^{b} + 2\mu^2 \int_0^b x\, f(x)^2\, dx.
\]
At x = 0, the left-hand side vanishes and the term to evaluate on the right-hand side reduces to ν²f². Since (f(0))² = (J_ν(0))² = 0 when ν > 0, that term also vanishes for all ν ≥ 0. As a consequence,
\[
2\mu^2 \int_0^b f(x)^2\, x\, dx = b^2 f'(b)^2 + (\mu^2 b^2 - \nu^2)\, f(b)^2.
\]
Due to f'(x) = µJ'_ν(µx), we get the identity (3.22).
If we take our solution of (3.17), it satisfies the boundary condition (3.18), and we can simplify the right-hand side of (3.22). If we denote µ = λ/b and our condition is of the type (3.19), we have
\[
\int_0^b J_\nu\!\left(\frac{\lambda x}{b}\right)^2 x\, dx = \frac{b^2}{2}\left(J'_\nu(\lambda)\right)^2, \tag{3.23}
\]
and if we have a condition of the type (3.20),
\[
\int_0^b J_\nu\!\left(\frac{\lambda x}{b}\right)^2 x\, dx = \frac{b^2(\lambda^2 - \nu^2 + c^2)}{2\lambda^2}\left(J_\nu(\lambda)\right)^2. \tag{3.24}
\]
We can also simplify the right-hand side of (3.23) with the recurrence relation (3.1). If we substitute z = λ in (3.1), we get J'_ν(λ) = −J_{ν+1}(λ), and thus, if (3.19) is satisfied,
\[
\int_0^b J_\nu\!\left(\frac{\lambda x}{b}\right)^2 x\, dx = \frac{b^2}{2}\left(J_{\nu+1}(\lambda)\right)^2. \tag{3.25}
\]
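Identity (3.25) can be checked numerically. The sketch below (illustrative only) takes b = 1 and ν = 0, locates the first positive zero λ of J_0 by bisection, and evaluates the integral with a composite Simpson rule:

```python
import math

def jn(n, x, terms=60):
    """Partial sum of the series for J_n(x), integer n >= 0."""
    return sum((-1)**r * (x / 2)**(2*r + n)
               / (math.factorial(r) * math.factorial(r + n))
               for r in range(terms))

def bisect(f, a, b, iters=80):
    fa = f(a)
    for _ in range(iters):
        m = 0.5 * (a + b)
        if fa * f(m) <= 0:
            b = m
        else:
            a, fa = m, f(m)
    return 0.5 * (a + b)

def simpson(f, a, b, panels=200):
    h = (b - a) / (2 * panels)
    s = f(a) + f(b)
    s += 4 * sum(f(a + (2*i + 1) * h) for i in range(panels))
    s += 2 * sum(f(a + 2*i * h) for i in range(1, panels))
    return s * h / 3

lam = bisect(lambda x: jn(0, x), 2.0, 3.0)        # first positive zero of J_0
lhs = simpson(lambda x: jn(0, lam * x)**2 * x, 0.0, 1.0)
rhs = 0.5 * jn(1, lam)**2                          # (b^2/2) J_1(lambda)^2, b = 1
assert abs(lhs - rhs) < 1e-8
```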
Summing up, given the problem (3.17), we get an orthogonal set of Bessel functions, which we want to be an orthogonal basis. However, since (3.17) is not a regular Sturm–Liouville problem, we cannot directly ensure the existence of enough eigenfunctions to form a basis. Nevertheless, it can be proved using more advanced methods that the family of eigenfunctions of the type (3.21) is in fact an orthogonal basis of L²_w(0, b). The proof can be found again in Chapter XVIII of [4].
Theorem 3.4.
(i) Let {λ_k}_{k∈N} be the positive zeros of J_ν(x), and φ_k(x) = J_ν(λ_k x/b), for all k ∈ N. Then {φ_k}_{k∈N} is an orthogonal basis for L²_w(0, b), and
\[
\|\varphi_k\|_w^2 = \frac{b^2}{2}\, J_{\nu+1}(\lambda_k)^2, \qquad k \in \mathbb{N}.
\]
(ii) Let {λ̃_k}_{k∈N} be the positive zeros of cJ_ν(x) + xJ'_ν(x), and let ψ_k(x) = J_ν(λ̃_k x/b) for k ∈ N. Also, we define ψ_0(x) = x^ν. If c > −ν, then {ψ_k}_{k∈N} is an orthogonal basis for L²_w(0, b). If c = −ν, then {ψ_k}_{k∈N∪{0}} is an orthogonal basis for L²_w(0, b). Moreover,
\[
\|\psi_k\|_w^2 = \frac{b^2\big(\tilde{\lambda}_k^2 - \nu^2 + c^2\big)}{2\tilde{\lambda}_k^2}\, J_\nu(\tilde{\lambda}_k)^2 \quad (k \ge 1),
\qquad
\|\psi_0\|_w^2 = \frac{b^{2\nu+2}}{2\nu+2}.
\]
Therefore, from Theorem 3.4 we know that any f ∈ L²_w(0, b) can be expanded in a Fourier–Bessel series of the form
\[
f(x) = \sum_{k=1}^{\infty} c_k\, \varphi_k(x) \qquad\text{or}\qquad f(x) = \sum_{k=1}^{\infty} d_k\, \psi_k(x),
\]
where
\[
c_k = \frac{1}{\|\varphi_k\|_w^2} \int_0^b f(x)\,\varphi_k(x)\, x\, dx
\qquad\text{and}\qquad
d_k = \frac{1}{\|\psi_k\|_w^2} \int_0^b f(x)\,\psi_k(x)\, x\, dx.
\]
The series above are convergent in L²_w(0, b) for all f ∈ L²_w(0, b). One can prove further properties of these series. For example, if f is piecewise smooth, then Σ_{k=1}^∞ c_k φ_k(x) and Σ_{k=1}^∞ d_k ψ_k(x) converge pointwise to (f(x⁻) + f(x⁺))/2.
Example 3.5. If {λ_k}_{k∈N} are the positive zeros of J_0(x), and we define f(x) = 1 for all 0 ≤ x ≤ b, we have
\[
f(x) = \sum_{k=1}^{\infty} c_k\, J_0\!\left(\frac{\lambda_k x}{b}\right),
\]
convergent in the norm of L²_w(0, b), with w(x) = x. Since xJ_0(x) is the derivative of xJ_1(x) by formula (3.6) (taking m = ν = 1), and using the substitution x = bt/λ_k, the coefficients are
\[
c_k = \frac{2}{b^2 J_1(\lambda_k)^2} \int_0^b J_0\!\left(\frac{\lambda_k x}{b}\right) x\, dx
= \frac{2}{b^2 J_1(\lambda_k)^2}\, \frac{b^2}{\lambda_k^2} \int_0^{\lambda_k} J_0(t)\, t\, dt
= \frac{2}{\lambda_k^2 J_1(\lambda_k)^2} \Big[\, t J_1(t) \,\Big]_0^{\lambda_k}
= \frac{2}{\lambda_k J_1(\lambda_k)}, \qquad k \in \mathbb{N}.
\]
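The computation of c_k above rests on d/dt[tJ₁(t)] = tJ₀(t), i.e. ∫₀^b J₀(λ_k x/b) x dx = (b²/λ_k) J₁(λ_k). This can be verified numerically (an illustrative sketch with b = 1, Simpson's rule, and the first few zeros of J₀ found by bisection):

```python
import math

def jn(n, x, terms=60):
    return sum((-1)**r * (x / 2)**(2*r + n)
               / (math.factorial(r) * math.factorial(r + n))
               for r in range(terms))

def bisect(f, a, b, iters=80):
    fa = f(a)
    for _ in range(iters):
        m = 0.5 * (a + b)
        if fa * f(m) <= 0:
            b = m
        else:
            a, fa = m, f(m)
    return 0.5 * (a + b)

def simpson(f, a, b, panels=300):
    h = (b - a) / (2 * panels)
    s = f(a) + f(b)
    s += 4 * sum(f(a + (2*i + 1) * h) for i in range(panels))
    s += 2 * sum(f(a + 2*i * h) for i in range(1, panels))
    return s * h / 3

for m in range(3):
    approx = (m + 0.75) * math.pi          # rough location of the zeros of J_0
    lam = bisect(lambda x: jn(0, x), approx - 0.6, approx + 0.6)
    integral = simpson(lambda x: jn(0, lam * x) * x, 0.0, 1.0)
    # int_0^1 J_0(lam x) x dx = J_1(lam) / lam, hence c_k = 2 / (lam * J_1(lam))
    assert abs(integral - jn(1, lam) / lam) < 1e-8
```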
3.7 Exercises
1. Establish Lommel's formula
\[
J_\nu(z)\, J_{1-\nu}(z) + J_{-\nu}(z)\, J_{\nu-1}(z) = \frac{2\sin \nu\pi}{\pi z}. \tag{3.26}
\]
Solution. By the recurrence formulas (3.1) and (3.2),
\[
J_{-\nu+1}(z) = -\frac{\nu}{z} J_{-\nu}(z) - J'_{-\nu}(z)
\]
and
\[
J_{\nu-1}(z) = \frac{\nu}{z} J_\nu(z) + J'_\nu(z).
\]
So equation (3.26) is equivalent to
\[
J_\nu(z)\left( -\frac{\nu}{z} J_{-\nu}(z) - J'_{-\nu}(z) \right) + J_{-\nu}(z)\left( \frac{\nu}{z} J_\nu(z) + J'_\nu(z) \right) = \frac{2\sin \nu\pi}{\pi z}
\]
\[
\iff -J_\nu(z)\, J'_{-\nu}(z) + J_{-\nu}(z)\, J'_\nu(z) = \frac{2\sin \nu\pi}{\pi z}
\iff -W(J_\nu, J_{-\nu}) = \frac{2\sin \nu\pi}{\pi z}
\iff -\left( -\frac{2\sin \nu\pi}{\pi z} \right) = \frac{2\sin \nu\pi}{\pi z}.
\]
2. Show that if ν ∈ C,
\[
\int z^{-\nu+1} J_\nu(z)\, dz = -z^{-\nu+1} J_{\nu-1}(z) + C, \qquad C \in \mathbb{C}, \tag{3.27}
\]
\[
\int z^{\nu+1} J_\nu(z)\, dz = z^{\nu+1} J_{\nu+1}(z) + D, \qquad D \in \mathbb{C}. \tag{3.28}
\]
Solution. By (3.1),
\[
J_{\nu+1}(z) = \frac{\nu}{z} J_\nu(z) - J'_\nu(z) = -z^{\nu} \frac{d}{dz}\big[ z^{-\nu} J_\nu(z) \big].
\]
Therefore,
\[
z^{-\nu} J_{\nu+1}(z) = -\frac{d}{dz}\big[ z^{-\nu} J_\nu(z) \big].
\]
Replacing ν with ν − 1 and integrating on each side,
\[
\int z^{-\nu+1} J_\nu(z)\, dz = -\int \frac{d}{dz}\big[ z^{-\nu+1} J_{\nu-1}(z) \big]\, dz = -z^{-\nu+1} J_{\nu-1}(z) + C, \qquad C \in \mathbb{C}.
\]
Similarly,
\[
J_{\nu-1}(z) = \frac{\nu}{z} J_\nu(z) + J'_\nu(z) = z^{-\nu}\big( \nu z^{\nu-1} J_\nu(z) + z^{\nu} J'_\nu(z) \big) = z^{-\nu} \frac{d}{dz}\big[ z^{\nu} J_\nu(z) \big],
\]
and consequently
\[
z^{\nu} J_{\nu-1}(z) = \frac{d}{dz}\big[ z^{\nu} J_\nu(z) \big].
\]
Replacing ν with ν + 1 and integrating,
\[
\int z^{\nu+1} J_\nu(z)\, dz = \int \frac{d}{dz}\big[ z^{\nu+1} J_{\nu+1}(z) \big]\, dz = z^{\nu+1} J_{\nu+1}(z) + D, \qquad D \in \mathbb{C}.
\]
3. Use Exercise 2 to show that if ν ∈ C,
\[
\int z^{\nu+1} I_\nu(z)\, dz = z^{\nu+1} I_{\nu+1}(z) + C_1, \qquad C_1 \in \mathbb{C},
\]
\[
\int z^{-\nu+1} I_\nu(z)\, dz = z^{-\nu+1} I_{\nu-1}(z) + C_2, \qquad C_2 \in \mathbb{C}.
\]
Solution. Recall that I_ν(z) = e^{−νπi/2} J_ν(iz). Then, making iz = w (so that dz = −i dw) and using (3.28),
\[
\int z^{\nu+1} I_\nu(z)\, dz = \int z^{\nu+1} e^{-\nu\pi i/2} J_\nu(iz)\, dz
= e^{-\nu\pi i/2}\, (-i)(-i)^{\nu+1} \int w^{\nu+1} J_\nu(w)\, dw
= e^{-\nu\pi i/2}\, (-i)^{\nu+2}\, w^{\nu+1} J_{\nu+1}(w) + C_1.
\]
Since w^{ν+1} J_{ν+1}(w) = (iz)^{ν+1} J_{ν+1}(iz) = e^{(ν+1)πi} z^{ν+1} I_{ν+1}(z) and e^{−νπi/2}(−i)^{ν+2} = e^{−(ν+1)πi}, the right-hand side reduces to z^{ν+1} I_{ν+1}(z) + C₁. The second formula follows in the same way from (3.27).
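For real argument, the first formula of Exercise 3 can be verified numerically (a sketch, not part of the text, with ν = 0, where the claim reduces to ∫₀^Z z I₀(z) dz = Z I₁(Z)):

```python
import math

def iv(nu, z, terms=60):
    """Partial sum of the series for I_nu(z)."""
    return sum((z / 2)**(2*r + nu)
               / (math.factorial(r) * math.gamma(r + nu + 1))
               for r in range(terms))

def simpson(f, a, b, panels=200):
    h = (b - a) / (2 * panels)
    s = f(a) + f(b)
    s += 4 * sum(f(a + (2*i + 1) * h) for i in range(panels))
    s += 2 * sum(f(a + 2*i * h) for i in range(1, panels))
    return s * h / 3

for Z in (0.5, 1.0, 2.0):
    lhs = simpson(lambda z: z * iv(0, z), 0.0, Z)
    rhs = Z * iv(1, Z)
    assert abs(lhs - rhs) < 1e-10
```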
Chapter 4
In this chapter, we want to show some of the applications of Bessel functions in physics. They
are used to solve differential equations where Bessel’s equation arises. For example, if we take
the two-dimensional wave equation in polar coordinates studied in Section 1.3,
\[
u_{tt} - c^2 \Delta u = u_{tt} - c^2\left( u_{rr} + \frac{1}{r} u_r + \frac{1}{r^2} u_{\theta\theta} \right) = 0,
\]
we obtained the following equations:
\[
T''(t) + c^2 \mu^2 T(t) = 0,
\]
• Regarding the equation for R(r), we have the generalization of Bessel’s equation (3.14).
Moreover, we also have a boundary condition for R(r) at r = b of the type
βR(b) + β 0 R0 (b) = 0.
57
Finally, since we want the solution to be defined at the origin, we forbid R(r) to blow up
at r = 0. Summing up, we get the functions studied in Section 3.6.
Actually, we have stated in Section 3.6 that there is a sequence {µ_k} for which the boundary conditions are satisfied; these are in fact the eigenvalues {µ_i}_{i∈N} of problem (3.17). The initial conditions at t = 0 will determine the coefficients c_{nk} and d_{nk}.
Remark. In the previous series, an extra condition over Θ(θ) can force n to be a particular
number, and therefore reduce the initial series to one with only one index.
Thus, we have the wave equation (1.11) in a disc of radius b. Since the drum is attached to a
frame, the boundary of the disc does not vibrate, and therefore the boundary condition at r = b
is R(b) = 0. Besides that, we have, as always, the conditions of R being continuous at r = 0
and Θ being periodic. So, as mentioned in Section 1.3, applying separation of variables we get
equations (1.13), (1.14) and (1.15). So, as mentioned in the introduction, the set
\[
D = \{ (r, \theta) \mid 0 \le r \le b,\ -\pi \le \theta \le \pi \}, \qquad w(r, \theta) = r,
\]
is, actually, the disc of radius b on which we are working, and the measure w dr dθ = r dr dθ is the area element of the disc in polar coordinates.
Theorem 4.1. Let n ∈ N ∪ {0} and let {λ_{k,n}}_{k∈N} be the positive zeros of J_n(x). Then,
\[
\left\{ J_n\!\left(\frac{\lambda_{k,n}\, r}{b}\right) \cos n\theta \ \Big|\ n \ge 0,\ k \ge 1 \right\}
\cup
\left\{ J_n\!\left(\frac{\lambda_{k,n}\, r}{b}\right) \sin n\theta \ \Big|\ n, k \ge 1 \right\}
\]
is an orthogonal basis for L²(D), where D is the disc of radius b about the origin.
Proof. Orthogonality can be checked by evaluating the iterated integrals. The functions in the set are of the type g_i(r)h_j(θ), with the g_i and h_j forming orthogonal sets in L²_w(0, b) and L²(−π, π), respectively. Let us take two different functions from the set, g_i(r)h_j(θ) and g_{i'}(r)h_{j'}(θ). Then i ≠ i' or j ≠ j'. If we compute the integral in L²(D), due to the orthogonality previously mentioned,
\[
\int_0^b \int_{-\pi}^{\pi} r\, g_i(r) h_j(\theta)\, g_{i'}(r) h_{j'}(\theta)\, d\theta\, dr
= \left( \int_0^b g_i(r)\, g_{i'}(r)\, r\, dr \right) \left( \int_{-\pi}^{\pi} h_j(\theta)\, h_{j'}(\theta)\, d\theta \right) = 0.
\]
To prove that the set is complete, let us suppose we have f ∈ L²(D) orthogonal to all the functions J_n(λ_{k,n} r/b) cos nθ and J_n(λ_{k,n} r/b) sin nθ. Then
\[
\int_0^b \int_{-\pi}^{\pi} f(r, \theta)\, r\, J_n\!\left(\frac{\lambda_{k,n}\, r}{b}\right) \cos n\theta\, d\theta\, dr = 0,
\qquad
\int_0^b \int_{-\pi}^{\pi} f(r, \theta)\, r\, J_n\!\left(\frac{\lambda_{k,n}\, r}{b}\right) \sin n\theta\, d\theta\, dr = 0.
\]
As a consequence, the functions of r
\[
\int_{-\pi}^{\pi} f(r, \theta) \cos n\theta\, d\theta, \qquad \int_{-\pi}^{\pi} f(r, \theta) \sin n\theta\, d\theta
\]
are orthogonal to J_n(λ_{k,n} r/b), for all k, n. By completeness of this last function set, the integrals must be zero. Thus, f is orthogonal to sin nθ and cos nθ, for all n. Therefore, by completeness, this time of the set of the functions sin nθ and cos nθ, it follows that f is zero.
Recall the solution (4.1) of the equation for T(t). Since T'(0) = 0, a₂ = 0 and the general solution of the problem is
\[
u(r, \theta, t) = \sum_{n=0}^{\infty} \sum_{k=1}^{\infty} (c_{nk} \cos n\theta + d_{nk} \sin n\theta)\, J_n\!\left(\frac{\lambda_{k,n}\, r}{b}\right) \cos\!\left(\frac{\lambda_{k,n}\, c\, t}{b}\right).
\]
We have to find c_{nk} and d_{nk}. On the one hand,
\[
f(r, \theta) = u(r, \theta, 0) = \sum_{n=0}^{\infty} \sum_{k=1}^{\infty} (c_{nk} \cos n\theta + d_{nk} \sin n\theta)\, J_n\!\left(\frac{\lambda_{k,n}\, r}{b}\right). \tag{4.2}
\]
On the other hand, as we have an orthogonal basis of L²(D), we can expand f in terms of the elements of the basis of Theorem 4.1. We also use the results in Theorem 3.4:
\[
f(r, \theta) = \sum_{n=0}^{\infty} \sum_{k=1}^{\infty} (c'_{nk} \cos n\theta + d'_{nk} \sin n\theta)\, J_n\!\left(\frac{\lambda_{k,n}\, r}{b}\right), \tag{4.3}
\]
where the coefficients are
\[
c'_{nk} = \frac{\langle f,\ J_n(\lambda_{k,n} r/b) \cos n\theta \rangle_w}{\| J_n(\lambda_{k,n} r/b) \cos n\theta \|_w^2}
= \frac{1}{\| J_n(\lambda_{k,n} r/b) \cos n\theta \|_w^2} \int_0^b \int_{-\pi}^{\pi} f(r, \theta)\, r\, J_n\!\left(\frac{\lambda_{k,n}\, r}{b}\right) \cos n\theta\, d\theta\, dr, \qquad n \ge 0,\ k \ge 1,
\]
\[
d'_{nk} = \frac{\langle f,\ J_n(\lambda_{k,n} r/b) \sin n\theta \rangle_w}{\| J_n(\lambda_{k,n} r/b) \sin n\theta \|_w^2}
= \frac{1}{\| J_n(\lambda_{k,n} r/b) \sin n\theta \|_w^2} \int_0^b \int_{-\pi}^{\pi} f(r, \theta)\, r\, J_n\!\left(\frac{\lambda_{k,n}\, r}{b}\right) \sin n\theta\, d\theta\, dr, \qquad n, k \ge 1.
\]
But notice that, using Theorem 3.4, for all n, k ∈ N,
\[
\| J_0(\lambda_{k,0} r/b) \|_w^2 = \int_{-\pi}^{\pi} \int_0^b \left( J_0\!\left(\frac{\lambda_{k,0}\, r}{b}\right) \right)^2 r\, dr\, d\theta
= 2\pi\, \| J_0(\lambda_{k,0} r/b) \|_v^2 = \pi b^2 (J_1(\lambda_{k,0}))^2,
\]
\[
\| J_n(\lambda_{k,n} r/b) \cos n\theta \|_w^2 = \int_0^b \int_{-\pi}^{\pi} r\, J_n\!\left(\frac{\lambda_{k,n}\, r}{b}\right)^2 \cos^2 n\theta\, d\theta\, dr
= \left( \int_0^b r\, J_n\!\left(\frac{\lambda_{k,n}\, r}{b}\right)^2 dr \right) \left( \int_{-\pi}^{\pi} \cos^2 n\theta\, d\theta \right)
= \frac{\pi b^2}{2} (J_{n+1}(\lambda_{k,n}))^2,
\]
and similarly for ‖J_n(λ_{k,n} r/b) sin nθ‖²_w,
where ‖·‖_v denotes the norm in L²_v(0, b) with v(r) = r. Then,
\[
c'_{0k} = \frac{1}{\pi b^2 J_1(\lambda_{k,0})^2} \int_0^b \int_{-\pi}^{\pi} f(r, \theta)\, r\, J_0\!\left(\frac{\lambda_{k,0}\, r}{b}\right) d\theta\, dr,
\]
and for n ≥ 1,
\[
c'_{nk} = \frac{2}{\pi b^2 J_{n+1}(\lambda_{k,n})^2} \int_0^b \int_{-\pi}^{\pi} f(r, \theta)\, r\, J_n\!\left(\frac{\lambda_{k,n}\, r}{b}\right) \cos n\theta\, d\theta\, dr,
\]
\[
d'_{nk} = \frac{2}{\pi b^2 J_{n+1}(\lambda_{k,n})^2} \int_0^b \int_{-\pi}^{\pi} f(r, \theta)\, r\, J_n\!\left(\frac{\lambda_{k,n}\, r}{b}\right) \sin n\theta\, d\theta\, dr.
\]
Equating (4.2) and (4.3), we conclude that c_{nk} = c'_{nk} and d_{nk} = d'_{nk}.
Notice that if we take the problem in the disc of radius b, we have exactly the same problem as
in Section 4.1, but now with
T 0 (t) + Kµ2 T (t) = 0.
The solutions for T , therefore, are changed to exponential functions, but the rest of the problem
remains the same. So we will solve a different problem. This time, we take it in the region
D = {(r, θ) | 0 ≤ r ≤ b, 0 ≤ θ ≤ α} ,
where 0 < α < 2π. We are going to suppose, also, that the boundary is insulated. That means
and thus, ν = nπ/α.
Therefore, we get Bessel functions of order ν = nπ/α for R. Now, let {λ_{k,n}}_{k∈N} denote the positive zeros of J'_{nπ/α}(x). Then, the boundary condition is satisfied when we take the set
\[
\left\{ J_{n\pi/\alpha}\!\left(\frac{\lambda_{k,n}\, r}{b}\right) \right\}_{k \in \mathbb{N}}.
\]
These functions correspond to the eigenvalues µ²_{k,n} = (λ_{k,n}/b)². As we have seen in Theorem 3.4 (case (ii)), the set above is an orthogonal basis, except when n = 0. In that case we have to add the constant function 1 = x⁰ and the eigenvalue µ = 0.
The corresponding solutions for T are
\[
T(t) = C e^{-\mu^2 K t} = C \exp\!\left( -\frac{\lambda_{k,n}^2 K t}{b^2} \right), \qquad C \in \mathbb{R}.
\]
We also want to impose the condition u(r, θ, 0) = f(r, θ). On the one hand, we expand f in the form
\[
f(r, \theta) = a'_{00} + \sum_{n=0}^{\infty} \sum_{k=1}^{\infty} a'_{nk}\, J_{n\pi/\alpha}\!\left(\frac{\lambda_{k,n}\, r}{b}\right) \cos\frac{n\pi\theta}{\alpha}, \tag{4.5}
\]
where a'_{00}, a'_{nk} ∈ R, with n ∈ N ∪ {0} and k ∈ N. Using a similar argument to that in Theorem 4.1, this is possible. On the other hand, we try to find solutions of the type
\[
u(r, \theta, t) = a_{00} + \sum_{n=0}^{\infty} \sum_{k=1}^{\infty} a_{nk}\, J_{n\pi/\alpha}\!\left(\frac{\lambda_{k,n}\, r}{b}\right) \cos\frac{n\pi\theta}{\alpha}\, \exp\!\left( -\frac{\lambda_{k,n}^2 K t}{b^2} \right),
\]
where we have to determine the coefficients a_{00}, a_{nk}, with n ∈ N ∪ {0} and k ∈ N. It follows that
\[
f(r, \theta) = u(r, \theta, 0) = a_{00} + \sum_{n=0}^{\infty} \sum_{k=1}^{\infty} a_{nk}\, J_{n\pi/\alpha}\!\left(\frac{\lambda_{k,n}\, r}{b}\right) \cos\frac{n\pi\theta}{\alpha}. \tag{4.6}
\]
It only remains to calculate the coefficients a'_{00}, a'_{0k} and a'_{nk}, where n, k ∈ N:
\[
a'_{00} = \frac{1}{\|1\|_w^2} \langle f, 1 \rangle_w = \frac{1}{\|1\|_w^2} \int_0^{\alpha} \int_0^b f(r, \theta)\, r\, dr\, d\theta,
\]
\[
a'_{0k} = \frac{\langle f,\ J_0(\lambda_{k,0} r/b) \rangle_w}{\| J_0(\lambda_{k,0} r/b) \|_w^2}
= \frac{1}{\| J_0(\lambda_{k,0} r/b) \|_w^2} \int_0^{\alpha} \int_0^b f(r, \theta)\, J_0\!\left(\frac{\lambda_{k,0}\, r}{b}\right) r\, dr\, d\theta,
\]
\[
a'_{nk} = \frac{\langle f,\ J_{n\pi/\alpha}(\lambda_{k,n} r/b) \cos(n\pi\theta/\alpha) \rangle_w}{\| J_{n\pi/\alpha}(\lambda_{k,n} r/b) \cos(n\pi\theta/\alpha) \|_w^2}
= \frac{1}{\| J_{n\pi/\alpha}(\lambda_{k,n} r/b) \cos(n\pi\theta/\alpha) \|_w^2} \int_0^{\alpha} \int_0^b f(r, \theta)\, J_{n\pi/\alpha}\!\left(\frac{\lambda_{k,n}\, r}{b}\right) \cos\frac{n\pi\theta}{\alpha}\, r\, dr\, d\theta.
\]
Notice that
\[
\|1\|_w^2 = \int_0^{\alpha} \int_0^b r\, dr\, d\theta = \frac{\alpha b^2}{2},
\]
and using Theorem 3.4, we find
\[
\| J_0(\lambda_{k,0} r/b) \|_w^2 = \int_0^{\alpha} \int_0^b J_0\!\left(\frac{\lambda_{k,0}\, r}{b}\right)^2 r\, dr\, d\theta
= \alpha\, \| J_0(\lambda_{k,0} r/b) \|_v^2 = \frac{\alpha b^2}{2} (J_0(\lambda_{k,0}))^2,
\]
\[
\| J_{n\pi/\alpha}(\lambda_{k,n} r/b) \cos(n\pi\theta/\alpha) \|_w^2
= \int_0^{\alpha} \int_0^b J_{n\pi/\alpha}\!\left(\frac{\lambda_{k,n}\, r}{b}\right)^2 \cos^2\frac{n\pi\theta}{\alpha}\, r\, dr\, d\theta
\]
\[
= \left( \int_0^{\alpha} \cos^2\frac{n\pi\theta}{\alpha}\, d\theta \right) \left( \int_0^b J_{n\pi/\alpha}\!\left(\frac{\lambda_{k,n}\, r}{b}\right)^2 r\, dr \right)
= \frac{\alpha}{2}\, \| J_{n\pi/\alpha}(\lambda_{k,n} r/b) \|_v^2
= \frac{\alpha b^2 \big(\lambda_{k,n}^2 - (n\pi/\alpha)^2\big)}{4\lambda_{k,n}^2}\, \big( J_{n\pi/\alpha}(\lambda_{k,n}) \big)^2,
\]
and therefore
\[
a'_{0k} = \frac{2}{\alpha b^2 J_0(\lambda_{k,0})^2} \int_0^{\alpha} \int_0^b f(r, \theta)\, J_0\!\left(\frac{\lambda_{k,0}\, r}{b}\right) r\, dr\, d\theta \qquad (k \ge 1),
\]
and for n, k ≥ 1,
\[
a'_{nk} = \frac{4\lambda_{k,n}^2}{\alpha b^2 \big(\lambda_{k,n}^2 - (n\pi/\alpha)^2\big)\, J_{n\pi/\alpha}(\lambda_{k,n})^2} \int_0^{\alpha} \int_0^b f(r, \theta)\, J_{n\pi/\alpha}\!\left(\frac{\lambda_{k,n}\, r}{b}\right) \cos\frac{n\pi\theta}{\alpha}\, r\, dr\, d\theta.
\]
D = {(r, θ, z) | 0 ≤ r ≤ b, 0 ≤ z ≤ l} .
So we want to solve
\[
u_{rr} + \frac{1}{r} u_r + \frac{1}{r^2} u_{\theta\theta} + u_{zz} = 0,
\]
\[
u(r, \theta, 0) = f(r, \theta), \qquad u(r, \theta, l) = g(r, \theta), \qquad u(b, \theta, z) = h(\theta, z).
\]
First, we will look for solutions when f ≡ h ≡ 0 and g ≡ g(r) is independent of θ. This case
can be easily generalised to g ≡ g(r, θ). The case when g ≡ h ≡ 0 can be done similarly. Finally,
we will analyse the case when f ≡ g ≡ 0. The sum of these solutions will give us the general
solution.
When f ≡ h ≡ 0 and since we take the conditions independent of θ, we can assume the solution is also independent of θ, and we apply separation of variables with u(r, z) = R(r)Z(z). Substituting in (1.17), we get
\[
R''(r)\, Z(z) + \frac{1}{r} R'(r)\, Z(z) = -R(r)\, Z''(z).
\]
Equivalently,
\[
\frac{1}{R(r)}\left( R''(r) + \frac{1}{r} R'(r) \right) = -\frac{Z''(z)}{Z(z)} = -\mu^2 \in \mathbb{R}.
\]
As a consequence, we get the following equations:
and thus the corresponding solutions for Z with Z(0) = 0 are sinh(λ_k z/b). Hence,
\[
u(r, z) = \sum_{k=1}^{\infty} a_k\, J_0\!\left(\frac{\lambda_k r}{b}\right) \sinh\!\left(\frac{\lambda_k z}{b}\right).
\]
Imposing u(r, l) = g(r) and comparing with the Fourier–Bessel expansion g(r) = Σ_{k=1}^∞ c_k J_0(λ_k r/b), we get
\[
a_k = \frac{c_k}{\sinh(\lambda_k l/b)}.
\]
Now, we consider the case when f ≡ g ≡ 0. Again, we will assume h is independent of θ, so the solution u(r, z) will also be independent of θ. We try functions of the type u(r, z) = R(r)Z(z) and obtain the same equations as before, but now we want µ such that sinh(µl) = 0. This implies µl = nπi, where n ∈ Z. Hence, we have eigenvalues µ such that µ² = −(nπ/l)², and therefore
\[
Z(z) = A \sin\frac{n\pi z}{l}, \qquad A \in \mathbb{R},
\]
while the equation for R becomes
\[
R''(r) + \frac{1}{r} R'(r) - \left(\frac{n\pi}{l}\right)^2 R(r) = 0.
\]
If we make the change of variable x = nπr/l, we get the modified Bessel equation of order zero. Since the solution must be finite at r = 0, it follows that the solution of the equation for R is R(r) = C I₀(nπr/l), C ∈ R, and so we look for solutions of the form
\[
u(r, z) = \sum_{n=1}^{\infty} a_n\, I_0\!\left(\frac{n\pi r}{l}\right) \sin\frac{n\pi z}{l}.
\]
If we expand the condition u(b, z) = h(z) in a Fourier sine series on [0, l], we get
\[
h(z) = \sum_{n=1}^{\infty} a'_n \sin\frac{n\pi z}{l},
\qquad\text{where}\qquad
a'_n = \frac{2}{l} \int_0^l h(z) \sin\frac{n\pi z}{l}\, dz.
\]
Hence, equating the expansions of u(b, z) and h(z),
\[
a_n = \frac{a'_n}{I_0(n\pi b/l)} = \frac{1}{I_0(n\pi b/l)}\, \frac{2}{l} \int_0^l h(z) \sin\frac{n\pi z}{l}\, dz.
\]
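The boundary matching can be illustrated numerically. The sketch below is not part of the text and uses the hypothetical choices b = l = 1 and h(z) ≡ 1: the coefficients a_n = a'_n / I₀(nπb/l) reproduce h on the lateral boundary r = b, where the I₀ factors cancel.

```python
import math

def i0(z, terms=1200):
    """I_0(z), summed iteratively so (z/2)**(2r) never overflows on its own."""
    term, s = 1.0, 1.0
    for r in range(1, terms):
        term *= z * z / (4.0 * r * r)
        s += term
        if term < 1e-17 * s:
            break
    return s

b = l = 1.0        # hypothetical geometry for the illustration
z = 0.5 * l        # evaluation point on the lateral boundary r = b
total = 0.0
for n in range(1, 200, 2):            # even-n sine coefficients of h = 1 vanish
    a_prime = 4.0 / (n * math.pi)     # (2/l) * int_0^l sin(n pi z / l) dz
    a_n = a_prime / i0(n * math.pi * b / l)
    total += a_n * i0(n * math.pi * b / l) * math.sin(n * math.pi * z / l)
assert abs(total - 1.0) < 0.02
```

The residual error here comes entirely from truncating the Fourier sine series of the constant function, not from the Bessel factors.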
Bibliography
[1] De Castro, A., & Rey Pastor, J., Funciones de Bessel y Aplicaciones. Editorial Dossat, Madrid, 1958.
[2] Folland, G. B., Fourier Analysis and Its Applications. American Mathematical Society, 1992.
[3] Tranter, C. J., Bessel Functions with some Physical Applications. Hodder Arnold, 1968.