
Series solutions

1 Real analytic solutions


To start with, let us recall the Taylor series of a function $y(x)$ around the point $x_0$:

$$y(x) = \sum_{n=0}^{\infty} \frac{y^{(n)}(x_0)}{n!}\,(x - x_0)^n.$$

So if we know the derivatives of the function at a point, then we can formally write down its Taylor series. Let us take an example of an IVP:

$$y' = y, \qquad y(0) = 1.$$

Then we can see from the equation that $y'(0) = y(0) = 1$. We can differentiate the equation to get the second derivative:

$$y'' = y' = y = 1 \quad \text{at } 0.$$

Similarly we can find all the derivatives:

$$y^{(m)}(0) = 1, \quad \text{for all } m \in \mathbb{N}.$$

Therefore its Taylor series is

$$y(x) = \sum_{n=0}^{\infty} \frac{x^n}{n!} = e^x.$$

This is the solution of the IVP.
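As a quick illustration (a minimal Python sketch, not part of the notes; the helper name and truncation order are arbitrary), the Taylor polynomial built from the derivatives $y^{(m)}(0) = 1$ computed above indeed approximates $e^x$:

```python
import math

# For y' = y, y(0) = 1 every derivative at 0 equals 1, so the degree-N
# Taylor polynomial of the solution is sum_{n=0}^{N} x^n / n!.
def taylor_poly(x, N=15):
    return sum(x**n / math.factorial(n) for n in range(N + 1))

x = 1.0
print(taylor_poly(x), math.exp(x))  # both are approximately 2.718281828...
```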


Question: Can we do this always?
Consider another example:

$$y' + y = h(x) = \begin{cases} e^{-1/x^2}, & x \neq 0, \\ 0, & x = 0, \end{cases} \qquad y(0) = 0.$$

In this case, since every derivative of $h$ vanishes at $x = 0$, we again get

$$y^{(m)}(0) = 0, \qquad \forall\, m \in \mathbb{N}.$$


Therefore the Taylor series of $y(x)$ is identically equal to zero. However, by integrating the equation (with the integrating factor $e^x$) we can write the solution as

$$y(x) = e^{-x} \int_0^x e^t h(t)\,dt,$$

and this is not equal to zero for any $x > 0$. Therefore the formal Taylor series expansion need not be a solution of the IVP.
From our experience in Calculus, we know that $h(x)$ is not real analytic at $0$. So that could be the problem. To make our ideas clear, let us recall some basics on power series.

We recall the following results from calculus about power series. Given a sequence of real numbers $\{a_n\}_{n=0}^{\infty}$, the series $\sum_{n=0}^{\infty} a_n (x - x_0)^n$ is called a power series with center $x_0$. It is easy to see that a power series converges for $x = x_0$. A power series defines a function of $x$ at every point where it converges. The domain of convergence is either a bounded interval or the whole of $\mathbb{R}$, so it is natural to study the largest interval on which the power series converges.
Theorem 1. If $\sum a_n x^n$ converges at $x = r$, then $\sum a_n x^n$ converges for $|x| < |r|$.

Proof. Since $a_n r^n \to 0$, we can find $C > 0$ such that $|a_n r^n| \le C$ for all $n$. Then

$$|a_n x^n| = |a_n r^n| \left| \frac{x}{r} \right|^n \le C \left| \frac{x}{r} \right|^n.$$

The conclusion follows from the comparison theorem.



Theorem 2. Consider the power series $\sum_{n=0}^{\infty} a_n x^n$. Suppose

$$\beta = \limsup_{n \to \infty} \sqrt[n]{|a_n|}$$

and $R = \frac{1}{\beta}$ (we define $R = 0$ if $\beta = \infty$ and $R = \infty$ if $\beta = 0$). Then

1. $\sum a_n x^n$ converges for $|x| < R$;

2. $\sum a_n x^n$ diverges for $|x| > R$;

3. no conclusion can be drawn if $|x| = R$.

In case the limit exists in the definition of $\beta$, then

$$\beta = \lim_{n \to \infty} \left| \frac{a_{n+1}}{a_n} \right|.$$

Within the interval of convergence we can differentiate the series term-by-term and integrate term-by-term. Indeed we have

Theorem 3. Suppose the power series $f(x) = \sum_{k=0}^{\infty} a_k (x - x_0)^k$ has radius of convergence $R > 0$. Then $f$ is differentiable in $|x - x_0| < R$ and

$$f'(x) = \sum_{k=1}^{\infty} k a_k (x - x_0)^{k-1} \qquad \text{for } |x - x_0| < R.$$
An important consequence of this is

Theorem 4. If $\sum a_k x^k \equiv 0$ then $a_k = 0$ for all $k$.

Proof. Taking $x = 0$ we get $a_0 = 0$. Then by the above theorem we can differentiate the series to get

$$\sum k a_k x^{k-1} \equiv 0.$$

Again taking $x = 0$ we get $a_1 = 0$. Proceeding this way we get $a_k = 0$ for all $k$.

Definition 1. A function $g(x)$ defined in an interval $I$ containing the point $x_0$ is called real analytic at $x_0$ if it can be represented as a power series around $x_0$. That is, there exist constants $c_k$, $k = 0, 1, 2, \ldots$, such that the following series converges to $g(x)$ for $|x - x_0| < r$, for some $r > 0$:

$$g(x) = \sum_{k=0}^{\infty} c_k (x - x_0)^k.$$

Remark 1. If $g$ is a real analytic function at a point $x_0$, then $c_k = \frac{g^{(k)}(x_0)}{k!}$. That is, $g$ is equal to its Taylor series.
Let us consider the example $y'' = y$. Let us assume that the function $y(x)$ is equal to its power series $\sum c_k x^k$. Then by Theorem 3 above, we get

$$y'(x) = \sum_{k=1}^{\infty} k c_k x^{k-1}, \qquad y''(x) = \sum_{k=2}^{\infty} k(k-1) c_k x^{k-2}.$$

Substituting these into the equation $y'' - y = 0$ we get

$$0 = \sum_{k=2}^{\infty} k(k-1) c_k x^{k-2} - \sum_{k=0}^{\infty} c_k x^k = \sum_{k=0}^{\infty} \left[ (k+2)(k+1) c_{k+2} - c_k \right] x^k.$$

Therefore we get the recurrence relation

$$c_{k+2} = \frac{c_k}{(k+2)(k+1)}, \qquad k = 0, 1, 2, \ldots$$

Therefore

$$k = 0: \; c_2 = \frac{c_0}{2}, \qquad k = 1: \; c_3 = \frac{c_1}{3 \cdot 2}, \qquad k = 2: \; c_4 = \frac{c_2}{4 \cdot 3} = \frac{c_0}{4!}, \qquad k = 3: \; c_5 = \frac{c_1}{5!}.$$

Iterating this we get

$$c_{2k} = \frac{c_0}{(2k)!}, \qquad c_{2k+1} = \frac{c_1}{(2k+1)!}.$$

Therefore, if $c_0 = c_1$ we get

$$y(x) = c_0 \sum \frac{x^k}{k!} = c_0 e^x,$$

and if $c_1 = -c_0$ then $y(x) = c_0 e^{-x}$. Since a linear second order equation can have only two linearly independent solutions, $e^x$ and $e^{-x}$ are the only L.I. solutions.
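The recurrence $c_{k+2} = \frac{c_k}{(k+2)(k+1)}$ is easy to iterate numerically. The following sketch (illustrative only, not from the notes; the helper name and truncation order $N$ are arbitrary) checks that the partial sums reproduce $e^x$ for $c_0 = c_1 = 1$ and $e^{-x}$ for $c_1 = -c_0$:

```python
import math

def series_solution(x, c0=1.0, c1=1.0, N=30):
    # Iterate c_{k+2} = c_k / ((k+2)(k+1)) and evaluate sum_k c_k x^k.
    c = [0.0] * (N + 1)
    c[0], c[1] = c0, c1
    for k in range(N - 1):
        c[k + 2] = c[k] / ((k + 2) * (k + 1))
    return sum(c[k] * x**k for k in range(N + 1))

print(series_solution(1.0), math.exp(1.0))              # c0 = c1 = 1 gives e^x
print(series_solution(1.0, 1.0, -1.0), math.exp(-1.0))  # c1 = -c0 gives e^{-x}
```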

Let us consider another example with variable coefficients.


Example 2. $y'' - 2xy' + 2y = 0$.

Writing $y(x) = \sum a_k x^k$ and differentiating term by term, we get

$$x y'(x) = \sum_{k=1}^{\infty} k a_k x^k, \qquad y''(x) = \sum_{k=2}^{\infty} k(k-1) a_k x^{k-2} = \sum_{k=0}^{\infty} (k+2)(k+1) a_{k+2} x^k.$$

Substituting in the equation we get

$$y'' - 2xy' + 2y = \sum_{k=0}^{\infty} \left[ (k+2)(k+1) a_{k+2} - 2k a_k + 2 a_k \right] x^k = 0.$$

Then by Theorem 4 above we get

$$(k+2)(k+1) a_{k+2} - 2k a_k + 2 a_k = 0, \qquad \forall k.$$

That is,

$$2a_2 + 2a_0 = 0 \implies a_2 = -a_0,$$

and

$$a_{k+2} = \frac{2(k-1)}{(k+2)(k+1)}\, a_k.$$

Now $k = 1$ implies $a_3 = 0 \cdot a_1 = 0$ and hence

$$a_{2k+1} = 0 \quad \text{for all } k = 1, 2, \ldots$$

Also $k = 2$ implies $a_4 = \frac{2a_2}{12} = -\frac{a_0}{6}$. All other even coefficients can be calculated as multiples of $a_0$. Therefore we get

$$y(x) = a_1 x + a_0 \left( 1 - x^2 - \frac{x^4}{6} - \cdots \right).$$

The two L.I. solutions are $y_1(x) = x$ and $y_2(x) = 1 - x^2 - \frac{x^4}{6} - \cdots$. To see the convergence of $y_2$ we use the recurrence relation. The coefficients of $y_2$ satisfy

$$\frac{a_{k+2}}{a_k} = \frac{2(k-1)}{(k+2)(k+1)} \to 0 \quad \text{as } k \to \infty.$$

Hence by the ratio test, the power series converges with radius of convergence equal to $\infty$.
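As a sanity check (an illustrative sketch only; the helper names and truncation order are ad hoc), one can generate the coefficients of $y_2$ from the recurrence $a_{k+2} = \frac{2(k-1)}{(k+2)(k+1)} a_k$ and verify numerically that the truncated series nearly satisfies $y'' - 2xy' + 2y = 0$:

```python
def coeffs(N=40):
    # a_0 = 1, a_1 = 0 selects the solution y_2; fill in the recurrence.
    a = [0.0] * (N + 1)
    a[0] = 1.0
    for k in range(N - 1):
        a[k + 2] = 2 * (k - 1) * a[k] / ((k + 2) * (k + 1))
    return a

def eval_series(a, x, deriv=0):
    # Evaluate the deriv-th derivative of sum_k a_k x^k term by term.
    total = 0.0
    for k in range(deriv, len(a)):
        factor = 1.0
        for j in range(deriv):
            factor *= (k - j)
        total += a[k] * factor * x**(k - deriv)
    return total

a = coeffs()
x = 0.7
residual = eval_series(a, x, 2) - 2 * x * eval_series(a, x, 1) + 2 * eval_series(a, x)
print(residual)  # very close to 0
```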

Then it is natural to ask the following:

Q1: Can we apply this method for equations like $y'' + e^x y = 0$?

Q2: What is the most general class of coefficients for which a power series solution exists?
To answer the first question, we need the following Cauchy product.

Definition 3. Let $\sum_{k=0}^{\infty} a_k x^k$ and $\sum_{k=0}^{\infty} b_k x^k$ be two power series. The Cauchy product of these two series is $\sum_{n=0}^{\infty} c_n x^n$, where $c_n = \sum_{k=0}^{n} a_k b_{n-k}$.

Theorem 5. If $\sum a_k x^k$ converges for $|x| < R_1$ and $\sum b_k x^k$ converges for $|x| < R_2$, then their Cauchy product $\sum c_n x^n$ converges for $|x| < R$, where $R \ge \min\{R_1, R_2\}$.

Remark 2. It is possible to have series with finite radius of convergence whose product has infinite radius of convergence. For example, take $f(x) = \frac{1+x}{1-x}$ and $g(x) = \frac{1-x}{1+x}$. Both of them have radius of convergence $1$, but their product, the constant function $1$, has radius of convergence $\infty$.
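The Cauchy product is simply a discrete convolution of the coefficient sequences. A small illustrative Python sketch (the function name is ad hoc), using the two series of Remark 2, whose product is the constant function $1$:

```python
def cauchy_product(a, b):
    # c_n = sum_{k=0}^{n} a_k * b_{n-k}: coefficients of the product series.
    n = min(len(a), len(b))
    return [sum(a[k] * b[m - k] for k in range(m + 1)) for m in range(n)]

N = 10
# f(x) = (1+x)/(1-x) = 1 + 2x + 2x^2 + ...,  g(x) = (1-x)/(1+x) = 1 - 2x + 2x^2 - ...
a = [1.0] + [2.0] * (N - 1)
b = [1.0] + [2.0 * (-1)**k for k in range(1, N)]
print(cauchy_product(a, b))  # [1.0, 0.0, 0.0, ...]: the product is the constant 1
```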
Theorem 6. Suppose $a(x)$ and $b(x)$ are real analytic in $|x| < R$. Then all solutions of the equation

$$y'' + a(x) y' + b(x) y = 0$$

are real analytic in $|x| < R$.

Proof. See the textbook.

Legendre Equation: One of the important differential equations that appears in Mathematical Physics is the Legendre equation:

$$(1 - x^2) y'' - 2xy' + p(p+1) y = 0, \qquad p \in \mathbb{R}.$$

If we write this equation as

$$y'' - \frac{2x}{1 - x^2}\, y' + \frac{p(p+1)}{1 - x^2}\, y = 0,$$

then we see that the functions $a_1$ and $a_2$ given by

$$a_1(x) = -\frac{2x}{1 - x^2}, \qquad a_2(x) = \frac{p(p+1)}{1 - x^2},$$

are real analytic at $x = 0$, with series converging in $(-1, 1)$. Therefore, by the above theorem, the problem has two linearly independent real analytic solutions. We can write the series of $a_1$ and $a_2$ as

$$a_1(x) = (-2x) \sum_{k=0}^{\infty} x^{2k}, \qquad a_2(x) = \sum_{k=0}^{\infty} p(p+1)\, x^{2k}, \qquad \text{for } |x| < 1.$$

By assuming $y(x) = \sum c_k x^k$ we see that

$$(-2x)\, y'(x) = \sum_{k=0}^{\infty} (-2k) c_k x^k,$$

$$y''(x) = \sum_{k=2}^{\infty} k(k-1) c_k x^{k-2} = \sum_{k=0}^{\infty} (k+2)(k+1) c_{k+2} x^k,$$

$$-x^2 y''(x) = \sum_{k=0}^{\infty} -k(k-1) c_k x^k.$$

Therefore

$$(1 - x^2) y'' - 2xy' + p(p+1) y = \sum_{k=0}^{\infty} \left[ (k+2)(k+1) c_{k+2} - k(k-1) c_k - 2k c_k + p(p+1) c_k \right] x^k$$

$$= \sum_{k=0}^{\infty} \left[ (k+2)(k+1) c_{k+2} + (p+k+1)(p-k) c_k \right] x^k.$$

Therefore by Theorem 4 we get the recurrence relation

$$(k+2)(k+1) c_{k+2} + (p+k+1)(p-k) c_k = 0, \qquad k = 0, 1, 2, \ldots \qquad (1.1)$$



Hence we get

$$c_2 = -\frac{p(p+1)}{2}\, c_0, \qquad c_3 = -\frac{(p+2)(p-1)}{3 \cdot 2}\, c_1,$$

$$c_4 = \frac{(p+3)(p+1)p(p-2)}{4 \cdot 3 \cdot 2}\, c_0, \qquad c_5 = \frac{(p+4)(p+2)(p-1)(p-3)}{5 \cdot 4 \cdot 3 \cdot 2}\, c_1.$$

Therefore we can write the solution as

$$y(x) = c_0 \left[ 1 - \frac{p(p+1)}{2!} x^2 + \cdots \right] + c_1 \left[ x - \frac{(p+2)(p-1)}{3!} x^3 + \cdots \right] = c_0 \phi_1(x) + c_1 \phi_2(x).$$

Adjacent coefficients of the series in $\phi_1$ and $\phi_2$ are related by (1.1), and hence

$$\left| \frac{c_{k+2}}{c_k} \right| = \frac{|(p+k+1)(p-k)|}{(k+2)(k+1)} \to 1, \quad \text{as } k \to \infty.$$

By the ratio test, both series converge for $|x| < 1$. Also,

$$W(\phi_1, \phi_2)(0) = 1.$$

Therefore $\phi_1$ and $\phi_2$ are two linearly independent solutions.

Legendre Polynomials: We note that when $p$ is a non-negative even integer $p = 2m$, $m = 0, 1, 2, \ldots$, then $\phi_1$ has only a finite number of non-zero terms, i.e., it is a polynomial of degree $2m$. For example,

$$p = 0: \; \phi_1(x) = 1, \qquad p = 2: \; \phi_1(x) = 1 - 3x^2.$$

Similarly, when $p$ is a positive odd integer $p = 2m+1$, then $\phi_2$ is a polynomial of degree $2m+1$. For example,

$$p = 1: \; \phi_2(x) = x, \qquad p = 3: \; \phi_2(x) = x - \frac{5}{3} x^3.$$

Definition 4. The polynomial solution $P_n$ of degree $n$ of the Legendre equation satisfying $P_n(1) = 1$ is called the $n$-th Legendre polynomial. These are explicitly given by the Rodrigues formula

$$P_n(x) = \frac{1}{2^n n!} \frac{d^n}{dx^n} \left[ (x^2 - 1)^n \right].$$
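As an illustration (a sketch only, using sympy for convenience; the function names are ad hoc), one can build $P_n$ from the recurrence (1.1) with $p = n$, normalized so that $P_n(1) = 1$, and compare it with the Rodrigues formula above:

```python
import sympy as sp

x = sp.symbols('x')

def legendre_from_recurrence(n):
    # Terminating series solution with p = n, starting from c_0 = 1 (n even)
    # or c_1 = 1 (n odd), then normalized so that P_n(1) = 1.
    c = [sp.Integer(0)] * (n + 1)
    c[n % 2] = sp.Integer(1)
    for k in range(n - 1):
        c[k + 2] = -sp.Rational((n + k + 1) * (n - k), (k + 2) * (k + 1)) * c[k]
    poly = sum(c[k] * x**k for k in range(n + 1))
    return sp.expand(poly / poly.subs(x, 1))

def legendre_rodrigues(n):
    return sp.expand(sp.diff((x**2 - 1)**n, x, n) / (2**n * sp.factorial(n)))

for n in range(5):
    print(n, sp.simplify(legendre_from_recurrence(n) - legendre_rodrigues(n)))  # all 0
```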

2 Regular singular points: Frobenius solutions


Let $a(x) : (-l, l) \setminus \{0\} \to \mathbb{R}$ be a continuous function. The point $x = 0$ is called a singular point if $|a(x)| \to \infty$ as $x \to 0$. Now we define:

Definition 5. Regular singular point: A point $x = x_0$ is a regular singular point for the second order equation

$$y'' + a(x) y' + b(x) y = 0$$

if $(x - x_0) a(x)$ and $(x - x_0)^2 b(x)$ are real analytic at $x = x_0$.


For simplicity, if we take $x_0 = 0$, then a second order equation with a regular singular point at $x = 0$ has the form

$$x^2 y'' + x a(x) y' + b(x) y = 0,$$

where $a$, $b$ are real analytic at $0$.



Important Idea: Assume y(x) = x and nd c0k s
X
r
ck xk
k=0
Rationale: Coecients are singular. So expect solutions also to be singular

Let us illustrate the method for


Example 6. Consider the equation $L(y) = x^2 y'' + \frac{3}{2} x y' + x y = 0$.

Solution: Let us assume $y(x) = x^r \sum c_k x^k$. Then

$$y'(x) = \sum c_k (k+r) x^{k+r-1}, \qquad y''(x) = \sum c_k (k+r)(k+r-1) x^{k+r-2}.$$

Therefore, substituting in the equation,

$$0 = L(y) = \sum_{k=0}^{\infty} \left[ (k+r)(k+r-1) c_k + \frac{3}{2}(k+r) c_k \right] x^{k+r} + \sum_{k=1}^{\infty} c_{k-1} x^{k+r}$$

$$= \left( r(r-1) + \frac{3}{2} r \right) c_0 x^r + \sum_{k=1}^{\infty} \left[ \left( (k+r)(k+r-1) + \frac{3}{2}(k+r) \right) c_k + c_{k-1} \right] x^{k+r}$$

$$= p(r) c_0 x^r + \sum_{k=1}^{\infty} \left[ p(r+k) c_k + c_{k-1} \right] x^{k+r} = 0.$$

Here $p(r) = r(r-1) + \frac{3}{2} r$ is called the indicial polynomial. Now by Theorem 4, we get

$$p(r) = 0 \quad (c_0 \neq 0, \text{ why?}) \qquad \text{and} \qquad p(r+k) c_k = -c_{k-1}.$$

This implies $r_1 = 0$, $r_2 = -\frac{1}{2}$.

$$p(r+k) c_k = -c_{k-1} \implies c_k = \frac{(-1)^k c_0}{p(r+k)\, p(r+k-1) \cdots p(r+1)}, \qquad k = 1, 2, \ldots$$

1st solution: Take $r = r_1 = 0$. Then $p(r+k) = p(k) \neq 0$ for all $k = 1, 2, \ldots$ (since the only other root is $-\frac{1}{2}$). Hence all the other coefficients $c_k$ can be computed using

$$c_k = \frac{(-1)^k c_0}{p(k)\, p(k-1) \cdots p(1)}, \qquad k = 1, 2, \ldots$$

Hence the first solution is

$$\phi_1(x) = c_0 + c_0 \sum_{k=1}^{\infty} \frac{(-1)^k x^k}{p(k)\, p(k-1) \cdots p(1)}.$$

2nd solution: Taking $r = r_2 = -\frac{1}{2}$, we see that $p(r+k) = p(k - \frac{1}{2}) \neq 0$ for any $k = 1, 2, \ldots$, since the other root is $0$. Hence all the other coefficients $c_k$ can be computed using

$$c_k = \frac{(-1)^k c_0}{p(k - \frac{1}{2})\, p(k - \frac{3}{2}) \cdots p(\frac{1}{2})}, \qquad k = 1, 2, 3, \ldots,$$

and the solution will be

$$\phi_2(x) = c_0 x^{-\frac{1}{2}} + c_0 x^{-\frac{1}{2}} \sum_{k=1}^{\infty} \frac{(-1)^k x^k}{p(k - \frac{1}{2})\, p(k - \frac{3}{2}) \cdots p(\frac{1}{2})}.$$

Convergence: The recurrence relation implies

$$\left| \frac{c_{k+1}}{c_k} \right| = \frac{1}{|p(k+r+1)|} \to 0 \quad \text{as } k \to \infty.$$

Hence by the ratio test the series converges for all $x$. However, the second solution has a singularity at $x = 0$.
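A short numerical sketch (illustrative only; the truncation order is arbitrary) of the two Frobenius series of Example 6, built directly from $p(r) = r(r-1) + \frac{3}{2} r$ and the relation $p(r+k) c_k = -c_{k-1}$:

```python
def p(r):
    return r * (r - 1) + 1.5 * r  # indicial polynomial of Example 6

def frobenius(x, r, N=25, c0=1.0):
    # phi(x) = x^r * sum_k c_k x^k with p(r+k) c_k = -c_{k-1}.
    c = [c0]
    for k in range(1, N + 1):
        c.append(-c[k - 1] / p(r + k))
    return x**r * sum(c[k] * x**k for k in range(N + 1))

x = 0.5
print(frobenius(x, 0.0))   # first solution, root r1 = 0
print(frobenius(x, -0.5))  # second solution, root r2 = -1/2 (singular at x = 0)
```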

Now let us look at the most general case, where $a(x)$ and $b(x)$ are real analytic. So let us assume $a(x) = \sum \alpha_k x^k$, $b(x) = \sum \beta_k x^k$ and $y(x) = \sum c_k x^{k+r}$. Then

$$y'(x) = \sum (k+r) c_k x^{k+r-1}, \qquad y''(x) = \sum (k+r-1)(k+r) c_k x^{k+r-2},$$

$$b(x)\, y = x^r \left( \sum c_k x^k \right) \left( \sum \beta_k x^k \right) = x^r \sum \tilde{\beta}_k x^k, \qquad \tilde{\beta}_k = \sum_{j=0}^{k} c_j \beta_{k-j},$$

$$x a(x)\, y'(x) = x^r \left( \sum (k+r) c_k x^k \right) \left( \sum \alpha_k x^k \right) = x^r \sum \tilde{\alpha}_k x^k, \qquad \tilde{\alpha}_k = \sum_{j=0}^{k} (j+r) c_j \alpha_{k-j}.$$

Substituting this in $L(y) = 0$ we get

$$0 = L(y) = x^2 y'' + x a(x) y' + b(x) y = x^r \sum \left[ (k+r)(k+r-1) c_k + \tilde{\alpha}_k + \tilde{\beta}_k \right] x^k$$

$$= \sum_{k=0}^{\infty} \left[ (k+r)(k+r-1) c_k + \sum_{j=0}^{k} (j+r) c_j \alpha_{k-j} + \sum_{j=0}^{k} c_j \beta_{k-j} \right] x^{k+r}.$$

Therefore by Theorem 4, we get

$$(k+r)(k+r-1) c_k + (k+r) c_k \alpha_0 + c_k \beta_0 + \sum_{j=0}^{k-1} (j+r) c_j \alpha_{k-j} + \sum_{j=0}^{k-1} c_j \beta_{k-j} = 0, \qquad k = 0, 1, 2, \ldots$$

That is, collecting the terms containing $c_k$: for $k = 0$ we get $p(r) c_0 = 0$, and for $k = 1, 2, \ldots$

$$p(r+k) c_k + \sum_{j=0}^{k-1} (j+r) c_j \alpha_{k-j} + \sum_{j=0}^{k-1} c_j \beta_{k-j} = 0,$$

where $p(r) = r(r-1) + r \alpha_0 + \beta_0$ is called the indicial polynomial.


By assuming $c_0 \neq 0$, we get

$$p(r) = 0 \implies r = r_1 \text{ or } r = r_2.$$

For each of the values $r_1$ and $r_2$ we see that $c_k$, $k = 1, 2, 3, \ldots$, satisfies

$$p(r+k) c_k = -\sum_{j=0}^{k-1} \left[ (j+r) \alpha_{k-j} + \beta_{k-j} \right] c_j.$$

So if $p(r+k)$ is not zero for any $k$, then the above equation determines the $c_k$ uniquely, and hence the solutions. But this may not be the case, as $r_2$ can be equal to $r_1 + k$ for some $k$. Therefore we have the following several cases:

Case I: $r_1 \neq r_2$, $r_1 < r_2$, $r_2 \neq r_1 + k$ for any $k$.

In this case all the coefficients $c_k$, $k = 1, 2, \ldots$, can be computed using

$$c_k(r) = \frac{-1}{p(r+k)} \sum_{j=0}^{k-1} \left[ (j+r) \alpha_{k-j} + \beta_{k-j} \right] c_j. \qquad (2.1)$$

The linearly independent solutions are

$$\phi_1(x) = c_0 x^{r_1} + x^{r_1} \sum_{k=1}^{\infty} c_k(r_1) x^k, \qquad c_0 \neq 0,$$

$$\phi_2(x) = c_0 x^{r_2} + x^{r_2} \sum_{k=1}^{\infty} c_k(r_2) x^k, \qquad c_0 \neq 0.$$

An example of this type is computed above.


Case II: $r_1 = r_2$.

In this case $p(r_1) = 0$ and $p'(r_1) = 0$. Writing the solution $y(x)$ as

$$y(x) = c_0 x^r + \sum_{k=1}^{\infty} c_k(r) x^{k+r},$$

we may see $L(y)$ as a function of $x$ and $r$. By assuming that the $c_k(r)$ satisfy (2.1), we get

$$L(y)(x, r) = c_0\, p(r)\, x^r.$$

Therefore

$$L\left( \frac{\partial y}{\partial r} \right) = \frac{\partial}{\partial r} L(y)(x, r) = c_0 \left( p'(r) + (\log x)\, p(r) \right) x^r = 0 \quad \text{at } r = r_1,$$

since $p(r_1) = p'(r_1) = 0$. Therefore, if $\phi_1$ is the solution corresponding to $r = r_1$, then $\frac{\partial y}{\partial r}$ at $r = r_1$ is a second solution. Therefore,

$$\phi_1(x) = \sum_{k=0}^{\infty} c_k(r_1) x^{k+r_1} = c_0 x^{r_1} + \sum_{k=1}^{\infty} c_k(r_1) x^{k+r_1},$$

$$\phi_2(x) = \left. \frac{\partial y}{\partial r} \right|_{r=r_1} = x^{r_1} \sum_{k=1}^{\infty} c_k'(r_1) x^k + (\log x)\, x^{r_1} \sum_{k=0}^{\infty} c_k(r_1) x^k = x^{r_1} \sum_{k=1}^{\infty} c_k'(r_1) x^k + (\log x)\, \phi_1(x).$$

Practically, one assumes the following form for calculating the second solution,

$$\phi_2(x) = x^{r_1} \sum_{k=1}^{\infty} b_k x^k + (\log x)\, \phi_1(x),$$

and obtains the coefficients $b_k$ by substituting this into the given equation.



Example 7. Consider the Bessel equation: $x^2 y'' + x y' + x^2 y = 0$.

In this case $a(x) = 1$, $b(x) = x^2$. Therefore $\alpha_0 = 1$, $\beta_0 = 0$.

Indicial polynomial: $p(r) = r(r-1) + \alpha_0 r + \beta_0 = r^2$. The roots are $r_1 = 0$ and $r_2 = 0$. Then the possible solutions are of the form

$$\phi_1(x) = \sum c_k x^k, \qquad \phi_2(x) = (\log x)\,\phi_1 + \sum_{k=1}^{\infty} b_k x^k.$$

Then

$$0 = x^2 y'' + x y' + x^2 y = \sum_{k=2}^{\infty} k(k-1) c_k x^k + \sum_{k=1}^{\infty} k c_k x^k + \sum_{k=0}^{\infty} c_k x^{k+2} = c_1 x + \sum_{k=2}^{\infty} \left[ (k(k-1) + k) c_k + c_{k-2} \right] x^k.$$

Therefore

$$c_1 = 0, \qquad c_k = -\frac{c_{k-2}}{k^2}, \quad k = 2, 3, \ldots$$

This implies (taking $c_0 = 1$)

$$c_{2k+1} = 0, \qquad c_{2k} = \frac{(-1)^k}{2^2 \cdot 4^2 \cdots (2k)^2} = \frac{(-1)^k}{2^{2k}(k!)^2}, \quad k = 1, 2, 3, \ldots$$

Therefore the first solution is

$$\phi_1(x) = \sum_{k=0}^{\infty} \frac{(-1)^k}{2^{2k}(k!)^2}\, x^{2k}.$$
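To see that the truncated series really does satisfy the equation, the following sketch (illustrative only; helper names are ad hoc and the partial sum is differentiated term by term) evaluates the residual $x^2 \phi_1'' + x \phi_1' + x^2 \phi_1$ at a sample point:

```python
from math import factorial

def phi1_coeffs(N=20):
    # c_{2k} = (-1)^k / (2^{2k} (k!)^2); the odd coefficients vanish.
    c = [0.0] * (2 * N + 1)
    for k in range(N + 1):
        c[2 * k] = (-1)**k / (2**(2 * k) * factorial(k)**2)
    return c

def eval_deriv(c, x, d=0):
    # d-th derivative of sum_k c_k x^k, differentiated term by term.
    s = 0.0
    for k in range(d, len(c)):
        f = 1.0
        for j in range(d):
            f *= (k - j)
        s += c[k] * f * x**(k - d)
    return s

c = phi1_coeffs()
x = 1.3
residual = x**2 * eval_deriv(c, x, 2) + x * eval_deriv(c, x, 1) + x**2 * eval_deriv(c, x)
print(residual)  # close to 0
```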

To get the second solution we substitute $\phi_2 = (\log x)\phi_1 + \sum_{k=1}^{\infty} b_k x^k$ into the equation:

$$\phi_2'(x) = \sum_{k=1}^{\infty} k b_k x^{k-1} + \frac{\phi_1}{x} + (\log x)\,\phi_1',$$

$$\phi_2''(x) = \sum_{k=2}^{\infty} k(k-1) b_k x^{k-2} - \frac{\phi_1}{x^2} + \frac{2}{x}\,\phi_1' + (\log x)\,\phi_1''.$$

Therefore, using $L(\phi_1) = 0$,

$$L(\phi_2) = \sum_{k=2}^{\infty} k(k-1) b_k x^k + \sum_{k=1}^{\infty} k b_k x^k + \sum_{k=1}^{\infty} b_k x^{k+2} + 2x\phi_1' + (\log x)\,L(\phi_1)$$

$$= b_1 x + 2^2 b_2 x^2 + \sum_{k=3}^{\infty} \left[ (k(k-1) + k) b_k + b_{k-2} \right] x^k + 2x\phi_1' = 0.$$
k=3

Again by Theorem 4 we get

$$b_1 x + 2^2 b_2 x^2 + \sum_{k=3}^{\infty} \left[ (k(k-1)+k) b_k + b_{k-2} \right] x^k = -2x\phi_1' = -2 \sum_{k=1}^{\infty} \frac{(-1)^k\, 2k\, x^{2k}}{2^{2k}(k!)^2}.$$

Since the series on the right hand side has only even powers of $x$, all the odd coefficients $b_{2k+1}$ vanish:

$$b_1 = 0, \qquad k^2 b_k + b_{k-2} = 0, \quad k = 3, 5, 7, \ldots \implies b_{2k+1} = 0 \quad \forall k \in \mathbb{N}.$$

Comparing the even powers, we can compute all the even coefficients $b_{2k}$ from

$$4 b_2 = 1, \qquad (2k)^2 b_{2k} + b_{2k-2} = \frac{(-1)^{k+1}\, k}{2^{2k-2}(k!)^2}, \quad k = 2, 3, 4, \ldots$$

Case III: Roots differ by an integer, i.e., $r_2 = r_1 + m$ for some $m \in \mathbb{N}$.

Recall the relation satisfied by the $c_k$:

$$p(r+k) c_k = -\sum_{j=0}^{k-1} \left[ (j+r) \alpha_{k-j} + \beta_{k-j} \right] c_j =: D_k. \qquad (2.2)$$

Since $r_2$ is the larger root and $p$ is a second order polynomial, $p(r_2 + k) \neq 0$ for all $k \in \mathbb{N}$. Therefore all the coefficients can be computed using (2.2), giving the series of the first solution $\phi_1$, which corresponds to $r = r_2$. Now, to find a second solution corresponding to the smaller root $r_1$, we note that

$$p(r_1 + m) = p(r_2) = 0.$$

Therefore it is not clear how to compute $c_m$ from (2.2). However, in some special cases $D_m$ on the right hand side of (2.2) may also become zero. In that case we can take $c_m$ to be arbitrary, and the other coefficients can be calculated using (2.2). For example:
Example 8. $x^2 y'' + 2x^2 y' - 2y = 0$.

In this case, by taking $y(x) = \sum c_k x^{k+r}$ and substituting in the equation, we get

$$0 = x^2 y'' + 2x^2 y' - 2y = \sum_{k=0}^{\infty} (k+r)(k+r-1) c_k x^{k+r} + \sum_{k=0}^{\infty} 2(k+r) c_k x^{k+r+1} - 2 \sum_{k=0}^{\infty} c_k x^{k+r}$$

$$= \left[ r(r-1) - 2 \right] c_0 x^r + \sum_{k=1}^{\infty} \left[ (k+r)(k+r-1) c_k + 2(k+r-1) c_{k-1} - 2 c_k \right] x^{k+r}.$$
k=1

Therefore the indicial polynomial is $p(r) = r^2 - r - 2$. The roots are $r_1 = -1$ and $r_2 = 2$. That is, $r_2 = r_1 + 3$, implying $m = 3$, so $c_3$ needs to be examined carefully.
From the recurrence relation

$$\left[ (k+r)(k+r-1) - 2 \right] c_k = -2(k+r-1) c_{k-1}.$$

Taking $r = 2$ we can find all the coefficients to get the first solution in the form

$$\phi_1(x) = x^2 \sum c_k x^k.$$

Now taking $r = -1$ we get the relation

$$\left[ (k-1)(k-2) - 2 \right] c_k = -2(k-2) c_{k-1}, \qquad k = 1, 2, 3, \ldots$$

Therefore, taking $c_0 = 1$,

$$k = 1: \; -2 c_1 = (-2)(-1) c_0 = 2 c_0 = 2 \implies c_1 = -1,$$

$$k = 2: \; (-2) c_2 = (-2)(0) c_1 = 0 \implies c_2 = 0,$$

$$k = 3: \; (2-2) c_3 = (-2)(1) c_2 = 0,$$

implying $c_3$ is arbitrary. So we may choose $c_3 = 0$. From the recurrence relation we will then get $c_k = 0$ for all $k \ge 4$. Hence the second solution is

$$\phi_2(x) = x^{-1}(1 - x).$$

On the other hand, one may choose $c_3 \neq 0$. If $c_3 = 1$ then $c_4, c_5, \ldots$ can be found from the recurrence relation and the resulting solution will be

$$y(x) = x^{-1}(1 - x) + x^{-1}(x^3 + c_4 x^4 + \cdots) = x^{-1}(1 - x) + x^2(1 + c_4 x + \cdots) = \phi_2(x) + C \phi_1(x).$$

Hence, whatever the choice of $c_3$, the functions $\phi_1$ and $\phi_2$ above are the only linearly independent solutions.
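The closed form of $\phi_2$ can be confirmed directly; a quick sympy check (illustrative only):

```python
import sympy as sp

x = sp.symbols('x', positive=True)
phi2 = (1 - x) / x  # the second solution x^{-1}(1 - x) found above
residual = x**2 * sp.diff(phi2, x, 2) + 2 * x**2 * sp.diff(phi2, x) - 2 * phi2
print(sp.simplify(residual))  # 0
```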

In general, to find the second solution we use $\phi_2 = v\phi_1$, where $\phi_1(x) = x^{r_2}(c_0 + c_1 x + \cdots)$ is the Frobenius solution corresponding to the larger root $r_2$. Reducing the order we get

$$v' = \frac{1}{\phi_1^2}\, e^{-\int \frac{a(x)}{x}\,dx} = \frac{1}{x^{2r_2}(c_0 + c_1 x + \cdots)^2}\, e^{-\int (\alpha_0 x^{-1} + \alpha_1 + \alpha_2 x + \cdots)\,dx}$$

$$= \frac{1}{x^{2r_2}(c_0 + c_1 x + \cdots)^2}\, e^{-\alpha_0 \log x - \alpha_1 x - \cdots} = \frac{x^{-\alpha_0}}{x^{2r_2}(c_0 + c_1 x + \cdots)^2}\, e^{-\alpha_1 x - \cdots} = \frac{1}{x^{2r_2 + \alpha_0}}\, g(x),$$

where $g(x) = \frac{e^{-\alpha_1 x - \cdots}}{(c_0 + c_1 x + \cdots)^2}$. Since $g(0) = \frac{1}{c_0^2} \neq 0$, $g(x)$ is real analytic at $0$. This implies

$$g(x) = \sum g_k x^k.$$

Therefore, taking $n = 2r_2 + \alpha_0$ and writing $b_k = g_k$,

$$v'(x) = b_0 x^{-n} + b_1 x^{-n+1} + \cdots + b_{n-1} x^{-1} + b_n + \cdots$$

This implies

$$v(x) = b_0 \frac{x^{-n+1}}{-n+1} + \cdots + b_{n-1} \log x + b_n x + \cdots$$

Hence the second solution $\phi_2$ is

$$\phi_2(x) = v\phi_1 = b_{n-1}(\log x)\phi_1 + x^{r_2}(c_0 + c_1 x + \cdots)\left( b_0 \frac{x^{-n+1}}{-n+1} + \cdots \right)$$

$$= b_{n-1}(\log x)\phi_1 + x^{r_2 - n + 1}(c_0 + c_1 x + \cdots)\left( \frac{b_0}{-n+1} + \cdots \right) = b_{n-1}(\log x)\phi_1 + x^{r_1} \sum d_k x^k.$$

To show $r_1 = 1 - n + r_2$, we notice that

$$p(r) = r(r-1) + r\alpha_0 + \beta_0, \qquad \text{and also} \qquad p(r) = (r - r_1)(r - r_2).$$

Comparing the coefficients of $r$, the sum of the roots is $r_1 + r_2 = 1 - \alpha_0$, so $r_1 = 1 - \alpha_0 - r_2$. Now substituting $\alpha_0 = n - 2r_2$ we get $r_1 = 1 - n + r_2$. Therefore, in this case we substitute

$$\phi_2(x) = b_{n-1}(\log x)\phi_1 + x^{r_1} \sum_{k=0}^{\infty} d_k x^k$$

into the equation and find the unknowns $b_{n-1}$ and $d_k$, $k = 1, 2, 3, \ldots$ Note that in this case $b_{n-1}$ can be equal to zero, which is the special case discussed above.

The above discussion is summarized in the following theorem.

Theorem 7. Suppose $a(x)$ and $b(x)$ are real analytic at $x = 0$ with radius of convergence $R$. Then the equation

$$x^2 y'' + x a(x) y' + b(x) y = 0$$

has a solution of the form $x^r \sum c_k x^k$ (for some $r \in \mathbb{R}$), the series having radius of convergence $R$. The other linearly independent solution can be obtained by reduction of order.


Proof. Refer to the textbook.
Reference textbook: E. A. Coddington, An Introduction to Ordinary Differential Equations, PHI, 2003.

2.1 Problems

1. Find two linearly independent series solutions of

(a) $y'' - xy' + 2y = 0$
(b) $y'' + 3x^2 y' - 2xy = 0$
(c) $y'' + x^2 y' + x^2 y = 0$
(d) $(1 + x^2) y'' + y = 0$
2. Consider the Chebyshev equation

$$(1 - x^2) y'' - xy' + \alpha^2 y = 0, \qquad \alpha \in \mathbb{R}.$$

(a) Compute two linearly independent series solutions for $|x| < 1$.
(b) Show that for each non-negative integer $\alpha = n$ there is a polynomial solution of degree $n$.

3. Consider the Hermite equation

$$y'' - 2xy' + 2\alpha y = 0, \qquad \alpha \in \mathbb{R}.$$

(a) Compute two linearly independent series solutions.
(b) Show that for each non-negative integer $\alpha = n$ there is a polynomial solution of degree $n$.
4. Show that

$$\int_{-1}^{1} P_n(x) P_m(x)\,dx = \begin{cases} 0, & n \neq m, \\ \dfrac{2}{2n+1}, & n = m. \end{cases}$$

5. Show that there are constants $c_0, c_1, c_2, \ldots, c_n$ such that

$$x^n = c_0 P_0(x) + c_1 P_1(x) + \cdots + c_n P_n(x).$$

6. Find all solutions of the following equations for $x > 0$:

(a) $x^2 y'' + 2xy' - 6y = 0$
(b) $2x^2 y'' + xy' - y = 0$
(c) $x^2 y'' - 5xy' + 9y = 0$

7. Find all solutions of the following equations for $x > 0$:

(a) $3x^2 y'' + 5xy' + 3xy = 0$
(b) $x^2 y'' + 3xy' + (1 + x)y = 0$
(c) $x^2 y'' - 2x(x + 1)y' + 2(x + 1)y = 0$
