Raza Tahir-Kheli - Ordinary Differential Equations - Mathematical Tools For Physicists-Springer International Publishing (2018)
Ordinary Differential Equations
Mathematical Tools for Physicists
Raza Tahir-Kheli
Department of Physics
Temple University
Philadelphia, PA, USA
This Springer imprint is published by the registered company Springer Nature Switzerland AG
The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland
Dedicated to my friends
Sir Roger J. Elliott, Kt., F.R.S.
(in Memoriam)
Sir Anthony J. Leggett, KBE, F.R.S., Nobel
Laureate
Alan J. Heeger, Nobel Laureate
J. Robert Schrieffer, Nobel Laureate
my wife
Ambassador Shirin Raziuddin Tahir-Kheli
and our grandchildren
Taisiya, Alexander, Cyrus, Gladia
Preface
In addition, there are the so-called exact [compare (6.74)–(6.91)] and inexact
equations [compare (6.92)–(6.241)], Riccati equations [compare (6.242)–(6.268)],
Euler equations [compare (6.269)–(6.315)], and the factorable equations [compare
(6.316)–(6.344)]. The singular solution of the Clairaut equation is discussed and
calculated both by an informal procedure and a formal procedure.
Studied in Chap. 7 are equations where ‘Special Situations’ obtain. For instance,
a given differential equation may be integrable. Similarly, there are equations that
have both the independent and the dependent variables missing in any explicit form.
And then, there are those that explicitly contain only the independent variable, or
only the dependent variable. All these equations provide possible routes to successful
treatment. An interesting special situation, called ‘Order Reduction,’ comes
into play if one of the n non-trivial solutions of an nth order homogeneous linear
ordinary differential equation is already known. Then, the given equation can be
reduced to one of the (n − 1)th order. Homogeneous linear ordinary second-order
differential equations have been treated before. Studied in this chapter are
inhomogeneous linear ordinary differential equations.
Chapter 8 deals with oscillatory motion that is central to the description of
acoustics and the effects of inter-particle interaction in many physical systems. In its
most accessible form, oscillatory motion is simple harmonic (s-h). (s-h) motion has
a long and distinguished history of use in modeling physical phenomena. Anharmonic
motion—which somewhat more realistically represents the observed behavior of
physical systems—is described next. Detailed analysis of ‘Transient
State’ motion is presented for a point mass for two different oscillatory systems.
These are:
(1) The point mass, m, is tied to the right end of a long, massless coil spring placed
horizontally in the x-direction on top of a long, level, table. The left end of the
coil is fixed to the left end of the long table. The motion of the mass is slowed
by frictional force that is proportional to its momentum m v(t). In its completely
relaxed state, the spring is in equilibrium and the mass is in its equilibrium
position.
(2) Because the differential equations needed for analyzing damped oscillating
pendula are prototypical of those used in studies of electromagnetism, acoustics,
mechanics, chemical and biological sciences, and engineering, we analyze
next a pendulum consisting of a (point-sized) bob of mass m that is tied to the
end of a massless stiff rod of length l. The rod hangs down, in the negative
z-direction, from a hook that has been nailed to the ceiling. The pendulum is set
to oscillate in two-dimensional motion in the x–z plane. Air resistance is
approximated as a frictional force proportional to the speed with which the bob
is moving. The ensuing friction slows the oscillatory motion.
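The transient behavior just described can be sketched numerically. The fragment below is illustrative only: the values of m, k, and the damping constant gamma are arbitrary stand-ins rather than parameters from the text, and the fixed-step RK4 loop is a generic integrator, not the book's analytical treatment.

```python
# Illustrative sketch (m, k, gamma are arbitrary, not from the text):
# a point mass on a spring, slowed by friction proportional to its
# momentum m*v(t):
#     m dv/dt = -k x - gamma * (m v) ,   dx/dt = v .
# Integrated with a simple fixed-step RK4 loop.

def transient(m=1.0, k=4.0, gamma=0.5, x0=1.0, v0=0.0, dt=0.01, steps=2000):
    def f(x, v):
        # returns (dx/dt, dv/dt)
        return v, (-k * x - gamma * m * v) / m
    x, v = x0, v0
    for _ in range(steps):
        k1x, k1v = f(x, v)
        k2x, k2v = f(x + 0.5*dt*k1x, v + 0.5*dt*k1v)
        k3x, k3v = f(x + 0.5*dt*k2x, v + 0.5*dt*k2v)
        k4x, k4v = f(x + dt*k3x, v + dt*k3v)
        x += dt * (k1x + 2*k2x + 2*k3x + k4x) / 6
        v += dt * (k1v + 2*k2v + 2*k3v + k4v) / 6
    return x, v

x_end, v_end = transient()
print(x_end, v_end)  # amplitude has decayed toward the equilibrium position
```

After t = 20 the exponential envelope exp(−gamma·t/2) has reduced the initial unit displacement to well below one percent, which is the transient decay the chapter analyzes in closed form.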
In Chap. 9, physics relating to, and the applicable mathematics for the use of,
resistors, inductors, and capacitors are studied. The study includes Kirchhoff’s two
rules that state: ‘The incoming current at any given point equals the outgoing
current at that point’ and ‘The algebraic sum of changes in potential encountered by
charges traveling, in whatever manner, through a closed-loop circuit is zero.’
Considered next is Ohm’s law: namely ‘In a closed-loop circuit that contains a
battery operating at V volt, and a resistor of strength R ohms, current flow is
I amperes: I = V/R.’ Problems relating to additions of finite numbers of resistors,
placed in various configurations—some in series and some in parallel formats—are
worked out in detail. Also included are several, more involved, analyses relating to
current flows in a variety of infinite networks.
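The series/parallel bookkeeping described above, together with one classic infinite-ladder network, can be sketched in a few lines. The resistor values are illustrative and are not taken from the book's worked problems.

```python
# Hedged sketch of series/parallel reduction and Ohm's law, I = V/R.
# All resistor values are illustrative, not the book's worked examples.

def series(*rs):
    """Equivalent resistance of resistors in series: R = R1 + R2 + ..."""
    return sum(rs)

def parallel(*rs):
    """Equivalent resistance in parallel: 1/R = 1/R1 + 1/R2 + ..."""
    return 1.0 / sum(1.0 / r for r in rs)

# A small network: 2 ohms in series with (6 || 3) ohms, across a 12 V battery.
R_eq = series(2.0, parallel(6.0, 3.0))   # 2 + (6*3)/(6+3) = 4 ohms
I = 12.0 / R_eq                          # Ohm's law: 3 amperes
print(R_eq, I)

# A classic infinite ladder of 1-ohm resistors (an illustrative example of
# the infinite-network idea, not necessarily one of the book's networks):
# its input resistance satisfies R = 1 + (1 || R), whose fixed point is the
# golden ratio (1 + sqrt(5))/2.
R = 1.0
for _ in range(60):
    R = 1.0 + parallel(1.0, R)
print(R)
```

The fixed-point iteration converges because the update map is a contraction; solving R = 1 + R/(1 + R) by hand gives R² − R − 1 = 0, hence R = (1 + √5)/2 ≈ 1.618 ohms.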
Numerical solutions are analyzed in Chap. 10. Given a first-order linear differential
equation [refer to (10.2)] and its solution, y(x0), at a point x = x0, the Runge–
Kutta procedure is used to calculate Y(x0 + Δ), an estimate of the exact result
y(x0 + Δ). The Runge–Kutta estimate is the least accurate when it uses only one
step for the entire move. And indeed, as shown in (10.11), the single-step process
does yield grossly inaccurate results. The two-step process—see (10.12) and
(10.13)—improves the results only slightly. But the four-step effort—see
(10.14)–(10.17)—does much better. It reduces the error to about 1/50th of that for
the one-step process.
Estimates from a ten-step Runge–Kutta process are recorded in Table (10.1).
These estimates—being in error only by 100 × 0.0025/22.17 = 0.0113%—are highly
accurate.
Coupled first-order differential equations (10.18) are treated next.
Together these equations are equivalent to a single second-order differential
equation. Tables (10.2)–(10.7) display numerical results, Xn and Yn, for the
one-step, two-step, and the five-step processes. Table (10.8) contains numerical
results Xn gathered during a twenty-step Runge–Kutta process. At maximum
extension, Δ = 2, the Runge–Kutta estimate Xn is 26.371190. It differs from the
exact result, 26.371404, by only a tiny amount, 0.000214. The percentage error
involved is 0.000811.
Table (10.9) records numerical results Yn collected during a twenty-step
Runge–Kutta process. At maximum extension, Δ = 2, the Runge–Kutta estimate Yn
is 11.0170347. It differs from the exact result, 11.0171222, by 0.0000875. The
percentage error involved is 0.000794. It is similar to the corresponding error,
0.000811%, for Xn. The accuracy achieved by the twenty-step Runge–Kutta
estimate is quite extraordinary. When very high accuracy is desired, the twenty-step
Runge–Kutta estimate yields results that are worth the effort.
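The step-count trend described above is easy to reproduce. The sketch below uses y′ = y with y(0) = 1 (exact solution eˣ) as a stand-in for the book's equation (10.2); only the qualitative behavior — error shrinking rapidly as the fixed move Δ is split into more steps — is the point.

```python
# Illustrative check of the trend described above: for a fixed total move
# x0 -> x0 + Delta, splitting the move into more RK4 steps sharply reduces
# the error.  Test equation y' = y, y(0) = 1 (exact: y = e^x) is a stand-in
# for the book's (10.2), not its actual example.
import math

def rk4_step(f, x, y, h):
    k1 = f(x, y)
    k2 = f(x + h/2, y + h*k1/2)
    k3 = f(x + h/2, y + h*k2/2)
    k4 = f(x + h, y + h*k3)
    return y + h * (k1 + 2*k2 + 2*k3 + k4) / 6

def estimate(n_steps, delta=2.0):
    f = lambda x, y: y
    x, y, h = 0.0, 1.0, delta / n_steps
    for _ in range(n_steps):
        y = rk4_step(f, x, y, h)
        x += h
    return y

exact = math.exp(2.0)
errors = {n: abs(estimate(n) - exact) for n in (1, 2, 4, 10, 20)}
print(errors)  # error shrinks rapidly as the number of steps grows
```

For this toy equation the one-step error is a few percent of the exact value while the twenty-step error is of order 10⁻⁶ relative, mirroring the book's observation that the multi-step estimates are worth the extra effort.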
Chapter 11 deals with Frobenius’ work. As stated earlier,
linear (ODEs) with variable coefficients are generally hard to treat. Fortunately,
Frobenius’ method may often be helpful in that regard. To that purpose, analytic
functions, ordinary points, and regular and irregular singular points are described in
this chapter. Frobenius assumes a modified Taylor series solution that is valid in the
neighborhood of an ordinary point. The unknown constants there are determined
through actual use of the Taylor series solution.
The Frobenius solution around an ordinary point is worked out for differential
equations of type (a)—see (11.5)–(11.30)—and differential equations of type
(b)—see (11.32)–(11.60).
Equations of type (c), (11.62), around regular singular points are treated next—
see (11.63)–(11.75).
The indicial equation is defined—see (11.76)—and equations of category (1), whose
two roots differ by non-integers, of category (2), whose two roots are equal, and of
category (3), whose two roots differ by an integer, are all analyzed [see (11.78)–
(11.183)].
The last part of Chap. 11 deals with Bessel’s equations. Details of the relevant
analyses are provided in (11.184)–(11.242).
Answers to assigned problems are given in Chap. 12.
Fourier transforms and the Dirac delta function [33] are treated in the appendix,
which forms Chap. 13.
Bibliography is presented last.
Unlike a novel, which is often read continuously—and the reading is completed
within a couple of days—this book is likely to be read piecemeal—a chapter or two
a week. At such a slow rate of reading, it is often hard to recall the precise form of a
relationship that appeared in the previous chapter. To help relieve this difficulty,
when needed, the most helpful explanation of the issue at hand is repeated briefly
and the most relevant expressions are referred to by their equation numbers.
Throughout the book, for efficient reading, most equations are numbered seriatim.
When needed, they can be accessed quickly.
Most of the current knowledge of differential equations is much older than those
of us who study it. The present book owes its motivation to a famous treatise by
Piaggio [10]—first published nearly a century ago by G. Bell and Sons, Ltd., and
last reprinted in 1940. Piaggio is a great book, but in some important places it
misses, and sometimes misprints, relevant detail. Numerous other books [11–21] of
varying usefulness are also available. The current text stands apart from these books
in that it is put together with a view to being accessible to all interested readers: for
use both as a textbook and a book for self-study.
Answers to problems suggested for solution are appended in Chap. 12.
Finally, but for the support of my colleague Robert Intemann, this book could
not have been written.
1 Differential Operator . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.1 D . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.1.1 Laws of Addition . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.1.2 Laws of Multiplication . . . . . . . . . . . . . . . . . . . . . . 2
1.1.3 What is D^(-1) f(x)? . . . . . . . . . . . . . . . . . . . . . . 2
1.1.4 Index Law . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
2 Some Definitions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
2.1 Ordinary Differential Equation . . . . . . . . . . . . . . . . . . . . . . . . 5
2.1.1 Explicit . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
2.1.2 Implicit . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
2.1.3 Linear . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
2.1.4 Homogeneous . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
2.1.5 Inhomogeneous . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
2.1.6 Nonlinear . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
2.1.7 Partial Differential Equation . . . . . . . . . . . . . . . . . . . 8
2.1.8 The Order of an (ODE) . . . . . . . . . . . . . . . . . . . . . . 8
2.1.9 The Degree of an (ODE) . . . . . . . . . . . . . . . . . . . . . 8
2.1.10 Order and Degree: Exercises . . . . . . . . . . . . . . . . . . 8
2.1.11 Characteristic Equation: Ech . . . . . . . . . . . . . . . . . . . 9
2.1.12 Complementary Solution: Scomp . . . . . . . . . . . . . . . . 9
2.1.13 Particular Integral: Ipi . . . . . . . . . . . . . . . . . . . . . . . 10
2.1.14 Indicial Equation . . . . . . . . . . . . . . . . . . . . . . . . . . 10
2.1.15 General Solution . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
2.1.16 Complete Solution . . . . . . . . . . . . . . . . . . . . . . . . . 10
2.1.17 Complete Primitive . . . . . . . . . . . . . . . . . . . . . . . . . 11
2.1.18 Singular Solution . . . . . . . . . . . . . . . . . . . . . . . . . . 11
2.2 How Some (ODE) Arise . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
3 Constant Coefficients . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
3.1 Homogeneous Linear (ODEs) . . . . . . . . . . . . . . . . . . . . . . . . 13
3.1.1 First Order . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
3.1.2 Second Order . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
3.1.3 Characteristic Equation: Ech . . . . . . . . . . . . . . . . . . . 15
3.1.4 Unequal Roots . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
3.1.5 Complementary Solution . . . . . . . . . . . . . . . . . . . . . 15
3.1.6 Examples Group I: Unequal Real Roots . . . . . . . . . . 16
3.1.7 Examples Group II: Complex Roots . . . . . . . . . . . . 17
3.1.8 Equations with Complex Roots . . . . . . . . . . . . . . . . 17
3.1.9 Equation with Double Root . . . . . . . . . . . . . . . . . . . 18
3.1.10 n-Equal Roots . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
3.1.11 Ech with Multiple Roots . . . . . . . . . . . . . . . . . . . . . 20
3.1.12 Problems Group I . . . . . . . . . . . . . . . . . . . . . . . . . . 21
3.2 Linear Dependence and Linear Independence . . . . . . . . . . . . . 21
3.2.1 Wronskian . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
3.2.2 Examples Group III . . . . . . . . . . . . . . . . . . . . . . . . 23
3.2.3 Examples Group IV . . . . . . . . . . . . . . . . . . . . . . . . 25
3.3 Method of Undetermined Coefficients . . . . . . . . . . . . . . . . . . . 27
3.3.1 Particular Integral: Ipi . . . . . . . . . . . . . . . . . . . . . . . 28
3.3.2 Examples Group V . . . . . . . . . . . . . . . . . . . . . . . . . 28
3.3.3 Examples Group VI . . . . . . . . . . . . . . . . . . . . . . . . 30
3.3.4 Problems Group II . . . . . . . . . . . . . . . . . . . . . . . . . 33
3.3.5 Examples Group VII . . . . . . . . . . . . . . . . . . . . . . . . 35
3.3.6 Problems Group III . . . . . . . . . . . . . . . . . . . . . . . . . 36
3.3.7 Ipi for B(x) = cos(a x), sin(a x) . . . . . . . . . . . . . . 37
3.3.8 Examples Group VIII . . . . . . . . . . . . . . . . . . . . . . . 37
3.3.9 Problems Group IV . . . . . . . . . . . . . . . . . . . . . . . . . 40
3.3.10 Ipi for B(x) = exp(a x) W(x) . . . . . . . . . . . . . . . . 40
3.3.11 Examples Group IX . . . . . . . . . . . . . . . . . . . . . . . . 41
3.3.12 Problems Group V . . . . . . . . . . . . . . . . . . . . . . . . . 45
3.4 Simultaneous Linear (ODEs) with Constant Coefficients . . . . . 46
3.4.1 Separable Cases . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
3.4.2 Problems Group VI . . . . . . . . . . . . . . . . . . . . . . . . . 58
4 Variable Coefficients . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 59
4.1 Linear (ODEs) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 59
4.1.1 First-Order and First-Degree . . . . . . . . . . . . . . . . . . 59
4.1.2 Integrating Factor . . . . . . . . . . . . . . . . . . . . . . . . . . 60
4.1.3 Equation (4.2): Solution . . . . . . . . . . . . . . . . . . . . . 61
4.2 Examples Group I . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 62
4.2.1 Solution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 62
4.2.2 Problems Group I . . . . . . . . . . . . . . . . . . . . . . . . . . 64
1.1 D

D^0 = 1 ;  D^1 = D = d/dx ;  D^ν = d^ν/dx^ν ,  ν = 2, 3, 4, . . .  (1.1)
The operator D follows some, but not all, laws of algebra.
Assume ν, ν1 , and ν2 are positive integers. Using the rules of calculus, we can check
that D obeys the commutative law of addition,
of multiplication are obeyed. In other words, for ν1 and ν2 positive integers, the
operators D ν1 and D ν2 commute. So far so good! But problems begin to arise if
inverse powers are involved.
The first thing to note here is that D represents the process of differentiation with
respect to the variable x. Therefore, quite reasonably, the inverse process—meaning
integration with respect to x—should be represented by D^(-1). Simple analysis helps
illustrate this point. For instance, straightforward integration of the differential
equation,
D y(x) = 2 x ,
This suggests two things. First, the operator D^(-1) may be identified with the operation
of integration. Second, that D^(-1) · D = D^(-1+1) may be set equivalent to D^0 = 1.
This looks very promising. But wait a minute !
In view of (1.8), for integral ν that is > μ, the following should hold:

D^(-ν) · D^ν x^μ = D^(-ν) · 0 = 0 .  (1.9)

D^ν · D^(-ν) x^μ = x^μ .  (1.10)

The two results (1.9) and (1.10) are different. Indeed, whenever ν > μ,

D^(-ν) · D^ν x^μ ≠ D^ν · D^(-ν) x^μ .  (1.11)
Clearly, this law holds if both ν1 and ν2 are positive integers. But above we have used
this law even when one of these indices was not positive. Let us try a very simple
exercise to check the adequacy of the index law when that is the case.
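The pitfall can also be checked directly with a computer algebra system. The sketch below uses sympy as an illustration (it is not the book's method): differentiating and then integrating x² recovers x² only up to an arbitrary constant, whereas integrating first and then differentiating recovers it exactly.

```python
# Sketch of the inverse-operator pitfall using sympy.
import sympy as sp

x, sigma0 = sp.symbols('x sigma0')
f = x**2

# D^(-1) . D f : the integration constant sigma0 survives.
d_then_i = sp.integrate(sp.diff(f, x), x) + sigma0

# D . D^(-1) f : exactly f is recovered.
i_then_d = sp.diff(sp.integrate(f, x), x)

print(d_then_i, i_then_d)
```

So D · D⁻¹ behaves like the identity, but D⁻¹ · D does not: the two orderings differ by the arbitrary constant, which is precisely why the index law fails for negative indices.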
D^(3−1) x^2 = D^3 · D^(−1) x^2 = D^3 · ( x^3/3 + σ0 ) = 2 ,  (1.13)
but
2.1.1 Explicit
Here, u^(ν) is the νth differential of the dependent variable u with respect to the
independent variable x, i.e.,

u^(ν) = d^ν u / dx^ν ≡ D^ν u ,  (2.2)
2.1.2 Implicit
Generally, explicit ordinary differential equations are easier to treat than implicit
ordinary differential equations. For simplicity, therefore, we shall work mostly with
the explicit ordinary differential equations.
2.1.3 Linear
A differential equation that is of the first degree in its dependent variable as well as
all its derivatives and is of the form
Σ_{i=0}^{ν} a_i(x) D^i u = B(x) ,  (2.5)
where B(x) and {ai (x)}, i = 0, 1, . . . , ν are all known functions of x, is said to be a
linear ordinary differential equation.
2.1.4 Homogeneous
Whenever the function B(x) is missing from the linear (ODE), the latter is referred
to as homogeneous linear ordinary differential equation. That is
Σ_{i=0}^{ν} a_i(x) D^i u = 0 .  (2.6)
2.1 Ordinary Differential Equation 7
Later in the book, there will be occasion to work with differential equations of the
type where both F1(x, y) and F2(x, y) are homogeneous functions and both are of
the same degree.
2.1.5 Inhomogeneous
A linear (ODE) of the form (2.5), where B(x) is not equal to zero, is called an
inhomogeneous linear ordinary differential equation.
2.1.6 Nonlinear
When the (ODE) cannot be expressed in linear form represented in (2.5) or (2.6), it is
said to be nonlinear (ODE). For instance, the well-known simple pendulum equation,
where l represents the length, m the mass, and g the acceleration due to gravity—all
of which are assumed to be constants—
m d^2{l tan θ}/dt^2 + m g sin θ = 0  (2.9)
is nonlinear when the angle θ is not very small compared to a radian. Here, θ is
the angle that the massless pendulum rod of length l makes with the vertical, m is
the mass hanging at the bottom of the rod, t refers to the time, g is the acceleration
due to gravity, and √(g/l) is the angular velocity so that one complete cycle—which
is equivalent to angular rotation 2π—is completed in time 2π √(l/g). It does,
however, become linear for very small θ when both tan θ and sin θ tend to θ.
Just as implicit (ODE) is harder to solve than explicit (ODE), solving nonlinear
(ODE) requires more effort than solving linear (ODE). Furthermore, simple treat-
ment of nonlinear (ODE) cannot be guaranteed to succeed. Fortunately, many of the
physical problems of interest—at least in the first approximation—can be expressed
in terms of explicit ordinary differential equations that are linear. Therefore, much
of our attention will be focused on linear explicit ordinary differential equations.
An (ODE) has only one independent variable. But physical situations of interest
sometimes involve more than one physical property. As a result, the functions
representing them may sometimes involve two or more independent variables. And
such dependence is often best expressed by partial differential equations that contain
partial derivatives with respect to more than one independent variable.
The order of an (ODE) is ν if the highest derivative in the equation is of order ν. For
instance, this is the case in (2.1), (2.3), and (2.4) because the highest derivative there
is u (ν) .
Similarly, the order of a partial (ODE) is the order of the highest partial derivative
present.
Unlike the order of an (ODE), determining its degree requires some care. It is
necessary first to arrange the (ODE) in a form where all the differential coefficients
in the differential equation appear with rational and integral powers. When that has
been done, the equation can be written as a polynomial in all the derivatives present.
Then, the power of the derivative of the highest order is the degree of the given (ODE).
Determine the n solutions of the nth degree polynomial in k—that is contained in the
characteristic equation mentioned above—and denote them k_j with j = 1, 2, . . . , n.
Next, exponentiate each one of them. Then, the sum
Σ_{j=1}^{n} σ_j exp(k_j x) ,  (2.16)
The term with the lowest power of x—such as (11.76)—in a Frobenius solution—
such as (11.75)—is termed the indicial equation.
A solution that contains arbitrary constants is often called a general solution. General
solution is a bit of a misnomer because the requirement for a solution to be general
is quite relaxed. Of the possible n arbitrary constants in the solution, only one needs
to be nonzero. Occasionally in this text, general solution will refer to a solution given
in terms of the variable ν0 that symbolizes roots of the indicial equation.
Occasionally, there may exist a solution to a given differential equation that cannot be
derived from its complete solution or complete primitive. Such an unusual solution
is often called a singular solution.
u(x) = σ0 exp (k x) ,
v(x) = σ1 sin(a x) + σ2 cos(a x) , (2.18)
D u = k u ;
D^2 v = −a^2 v .  (2.19)
As an aside, each of the surviving known constants k and a in (2.19) can also
be eliminated by further differentiation, which raises the order of each of the two
differential equations by one notch.
u D^2 u = (D u)^2 ,
v D^3 v = (D^2 v)(D v) .  (2.20)
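Both eliminations can be verified symbolically. The following sympy check is illustrative (the symbols match (2.18)–(2.20); sympy itself is not part of the text):

```python
# Verify the constant-elimination identities (2.19)-(2.20) symbolically.
import sympy as sp

x, k, a, s0, s1, s2 = sp.symbols('x k a sigma0 sigma1 sigma2')
u = s0 * sp.exp(k * x)
v = s1 * sp.sin(a * x) + s2 * sp.cos(a * x)

D = lambda f: sp.diff(f, x)

# (2.19): D u = k u  and  D^2 v = -a^2 v
assert sp.simplify(D(u) - k * u) == 0
assert sp.simplify(D(D(v)) + a**2 * v) == 0

# (2.20): one further differentiation also removes k and a
assert sp.simplify(u * D(D(u)) - D(u)**2) == 0
assert sp.simplify(v * D(D(D(v))) - D(D(v)) * D(v)) == 0
print("identities (2.19)-(2.20) verified")
```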
Chapter 3
Constant Coefficients
where ci for i = 0, 1, 2, . . . , etc., are known constants. Exact solution of such equa-
tions can often be found in terms of elementary functions when the equations are
first order or second order, and also—with somewhat greater effort—when they are
third order or fourth order.
The simplest homogeneous linear ordinary differential equation is of the first order.
Σ_{i=0}^{1} c_i D^i u(x) = (c0 + c1 D) u(x) = 0 .
and integrate.

ln u(x) = −(c0/c1) x + σ .  (3.3)
The solution is

u(x) = σ0 exp(−(c0/c1) x) ,  (3.4)
where σ0 = exp(σ) is the single unknown arbitrary constant that can be determined
from one boundary condition. For instance, the value of u(x) at x = 0 is equal to σ0 .
Notice that only one integration is needed to obtain a solution of the first-order
homogeneous linear ordinary differential equation with constant coefficients and the
solution has only a single arbitrary constant. Indeed, the number of independent
constants that a complete solution of an nth order homogeneous linear ordinary
differential equation with constant coefficients contains usually is equal to n and
such a solution requires the equivalent of n integrations.
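The single-constant claim can be confirmed symbolically; a small sympy check of (3.4), offered as an illustration rather than the book's procedure:

```python
# Check of (3.4): u(x) = sigma0 * exp(-(c0/c1) x) solves (c0 + c1 D) u = 0,
# with sigma0 fixed by the single boundary condition u(0) = sigma0.
import sympy as sp

x, c0, c1, s0 = sp.symbols('x c0 c1 sigma0')
u = s0 * sp.exp(-(c0 / c1) * x)

residual = c0 * u + c1 * sp.diff(u, x)   # should vanish identically
assert sp.simplify(residual) == 0
assert u.subs(x, 0) == s0                # one constant, one boundary condition
print("(3.4) satisfies (3.2), and u(0) = sigma0")
```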
This is the characteristic equation. Note, given a differential equation, e.g. (3.5), the
characteristic equation is readily found by replacing D with k.
The characteristic equation for all second-order homogeneous linear ordinary
differential equations is a quadratic. The two roots of (3.8) are k1 and k2 .

k1 = (2 c2)^(−1) [ −c1 + √(c1^2 − 4 c0 c2) ] ,  (3.9)
k2 = (2 c2)^(−1) [ −c1 − √(c1^2 − 4 c0 c2) ] .  (3.10)
Assuming the two roots, whether they be real or imaginary, are unequal, then the
differential equation (3.5) has two solutions of the form
Knowing that a given set of derivatives of the product σ u(x) is equal to σ times the
same set of derivatives of u(x), and the fact that derivatives of a sum are equal to the
sum of those derivatives, (3.15) can be rewritten as
σ1 (c0 + c1 D + c2 D^2) u1(x)
+ σ2 (c0 + c1 D + c2 D^2) u2(x) = 0 .  (3.16)
Complementary solution is worked out for three differential equations for which the
characteristic equation as well as its roots are provided. Note, these are second-order
homogeneous linear ordinary differential equations with constant coefficients, whose
characteristic equations have two real roots, k1 and k2, that are unequal.
(D^2 + 4D − 9/4) u = 0 ;  Ech = k^2 + 4k − 9/4 = 0 ,
k1 = 1/2 ,  k2 = −9/2 .  (3.17)

(2D^2 + 5D + 3) u = 0 ;  Ech = 2k^2 + 5k + 3 = 0 ,
k1 = −1 ,  k2 = −3/2 .  (3.18)

(2D^2 − 5D + 3) u = 0 ;  Ech = 2k^2 − 5k + 3 = 0 ,
k1 = 1 ,  k2 = 3/2 .  (3.19)
As described in (3.5) → (3.13), the relevant complementary solution for the differ-
ential equations in (3.17)–(3.19) is the following.
k1 = r − i m ; k2 = r + i m . (3.21)
where the arbitrary constants σ3 and σ4 represent combinations of the earlier arbitrary
constants in the form
σ3 = −i (σ1 − σ2 ) ,
σ4 = (σ1 + σ2 ) . (3.23)
In deriving the above equation, the following identity was also used
Given below are differential equations along with their characteristic equations and
complex roots in the form of r and m. [Note: See (3.21) for the definition of r and m.]
(D^2 + 2D + 5) u(x) = 0 ;  k^2 + 2k + 5 = 0 ;
r = −1 ,  m = 2 .  (3.25)

(D^2 − 6D + 10) u(x) = 0 ;  k^2 − 6k + 10 = 0 ;
r = 3 ,  m = 1 .  (3.26)

(D^2 + 3D + 5/2) u(x) = 0 ;  k^2 + 3k + 5/2 = 0 ;
r = −3/2 ,  m = 1/2 .  (3.27)
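Each (r, m) pair quoted above can be confirmed by solving the corresponding characteristic equation; a sympy check, offered as an illustration:

```python
# Check the (r, m) pairs of (3.25)-(3.27): the roots of each characteristic
# equation should be the complex-conjugate pair r -/+ i m of (3.21).
import sympy as sp

k = sp.symbols('k')
cases = [
    (k**2 + 2*k + 5,                 -1,                 2),                 # (3.25)
    (k**2 - 6*k + 10,                 3,                 1),                 # (3.26)
    (k**2 + 3*k + sp.Rational(5, 2),  sp.Rational(-3, 2), sp.Rational(1, 2)),  # (3.27)
]
for ech, r, m in cases:
    roots = sp.solve(ech, k)
    assert set(roots) == {r - sp.I * m, r + sp.I * m}
print("all (r, m) pairs in (3.25)-(3.27) confirmed")
```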
(k − c)^2 = 0 ,  (3.29)
is a double solution that duly satisfies the differential equation (3.28). But it is only
a single distinct solution. [Note: Two solutions are distinct if they are linearly
independent. Linear independence is formally defined in (3.43)–(3.45).] Hoping that
another distinct solution would also involve exp(c x), let us try

as a possible second distinct solution and determine f (x) accordingly. To this end,
insert (3.31) into (3.28). One gets
0 = (D − c)^2 u2(x) = (D − c)^2 [exp(c x) f (x)]
= (D − c) {(D − c) exp(c x) f (x)} = (D − c) {exp(c x) D f (x)}
= exp(c x) D^2 f (x) .  (3.32)
D^2 f (x) = (d/dx) {D f (x)} = 0 .  (3.33)
This is readily done by integrating (3.33) twice. The first integration leads to
∫ D^2 f (x) dx ≡ ∫ (d/dx) {D f (x)} dx = ∫ [0] dx ,
= {D f (x)} + const1 = const2 .  (3.34)
f (x) = σ0 + σ1 x . (3.36)
Thus, according to (3.31) and (3.36), the u(x) given below solves the differential
equation (D − c)2 u(x) = 0.
(D − c)^n u(x) = 0 .
has n equal roots: k = c. Following the same procedure as outlined in deriving (3.37),
the complementary solution is
Scomp(x) = exp(c x) [σ0 + σ1 x + σ2 x^2 + · · · + σ_{n−1} x^{n−1}] .  (3.38)
And once again it is a sum of distinct terms. As such, it is a complete solution. If the
values of σi , i = 0, 1, 2, . . . , (n − 1), were all known—say, determined from n
different boundary conditions—the Scomp(x), given above, would also qualify as the
complete primitive.
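The repeated-root form (3.38) can be verified symbolically for a small case; the sympy sketch below takes n = 3 as an illustration:

```python
# Check of (3.38) for a triply repeated root: u(x) = exp(c x)(s0 + s1 x + s2 x^2)
# satisfies (D - c)^3 u = 0.
import sympy as sp

x, c, s0, s1, s2 = sp.symbols('x c sigma0 sigma1 sigma2')
u = sp.exp(c * x) * (s0 + s1 * x + s2 * x**2)

Dc = lambda f: sp.diff(f, x) - c * f        # the operator (D - c)
assert sp.simplify(Dc(Dc(Dc(u)))) == 0
print("(D - c)^3 [exp(c x)(sigma0 + sigma1 x + sigma2 x^2)] = 0 confirmed")
```

Each application of (D − c) strips one power of x from the polynomial factor while leaving exp(c x) alone, which is exactly why the quadratic polynomial is annihilated after three applications.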
Characteristic equations with multiple roots are analyzed below.
Equation (3.38) refers to the case where E ch has n equal roots. That means, there
is only one distinct root that occurs n different times. What if there were two roots,
both of them occurring multiple times? To study this case, consider the following
differential equation with two roots: c0 and c.
Ech = (k − c0)^(n0) (k − c)^n = (k − c)^n (k − c0)^(n0) .
Define
(D − c0)^(n0) U0(x) = 0 ,
(D − c)^n U(x) = 0 .
Then,
(D − c0)^(n0) (D − c)^n [U(x) + U0(x)]
= (D − c0)^(n0) (D − c)^n U(x) + (D − c0)^(n0) (D − c)^n U0(x)
= 0 + (D − c0)^(n0) (D − c)^n U0(x) = 0 + (D − c)^n (D − c0)^(n0) U0(x)
= 0 + 0 .  (3.41)
Thus, the solution of homogeneous linear ordinary differential equation with constant
coefficients (3.39) is
Find complementary solution, Scomp (x), to the following ten homogeneous linear
ordinary differential equations with constant coefficients.
(D^2 + 2D − 3) u(x) = 0 .  (1)
(D^2 − 3D − 4) u(x) = 0 .  (2)
(D + 1/2)^2 u(x) = 0 .  (3)
(D − 1/2)^2 u(x) = 0 .  (4)
(D + 2)^3 u(x) = 0 .  (5)
(D − 2)^3 u(x) = 0 .  (6)
(D^2 + D + 1) u(x) = 0 .  (7)
(D^2 − D + 1) u(x) = 0 .  (8)
(D^2 + 2D + 3) u(x) = 0 .  (9)
(D^2 + 3D + 4) u(x) = 0 .  (10)
Σ_{i=1}^{n} σi fi(x) = 0 .  (3.43)
Clearly, the case when the constants {σi }, i = 1, 2, ..., n, are all vanishing is trivial.
A straightforward use of the relationship (3.43) for determining linear dependence
of the functions fi(x) is quite awkward. It requires knowledge of an appropriate,
non-trivial, set of constants σi . This requirement is circumvented below.
In order to determine whether the given n functions, { f i (x)}, i = 1, 2, ..., n—each
of which is differentiable (n − 1) times—are linearly dependent, one differentiates
(3.43) several times and keeps a record. Every time (3.43) is differentiated, the
process produces a new differential equation. This way, after (n − 1) differentiations,
there are (n − 1) new differential equations. These along
with the original equation, namely (3.43), make a total of n simultaneous homoge-
neous linear ordinary differential equations with constant coefficients that involve
the n constants {σi }, i = 1, 2, ..., n. It is helpful to display these equations.
Σ_{i=1}^{n} σi fi(x) = 0 ;  (1)
Σ_{i=1}^{n} σi D fi(x) = 0 ;  (2)
. . . . .
Σ_{i=1}^{n} σi D^(n−1) fi(x) = 0 .  (n)  (3.44)
3.2.1 Wronskian
          | f1(x)           f2(x)           ...   fn(x)           |
W(x) ≡    | D f1(x)         D f2(x)         ...   D fn(x)         |
          | .............   .............   ...   .............   |
          | D^(n−1) f1(x)   D^(n−1) f2(x)   ...   D^(n−1) fn(x)   |

        = 0 .  (3.45)
In other words:
For all values of x within an interval I , the given n functions—{ f i (x)}, i =
1, 2, ..., n—each of which is differentiable at least (n − 1) times, are linearly depen-
dent if and only if their Wronskian, W (x), is vanishing.
Linear Independence
On the other hand, linear independence, being the opposite of linear dependence,
obtains only if a relationship like (3.43) never holds true—except, of course, for
the trivial case when all the constants σi are zero. Indeed, a given set of functions
{fi(x)}, i = 1, 2, ..., n, is said to be linearly independent if and only if, for all values
of x in an interval I and for any n non-trivial constants {σi}, i = 1, 2, ..., n,
relationship (3.46) obtains.

Σ_{i=1}^{n} σi fi(x) ≠ 0 .  (3.46)
i=1
Employing the same argument that led from (3.43) to (3.45), one concludes that
functions { f i (x)}, i = 1, 2, ..., n, are linearly independent if and only if the following
relationship—namely (3.47)—holds true.
          | f1(x)           f2(x)           ...   fn(x)           |
W(x) =    | D f1(x)         D f2(x)         ...   D fn(x)         |
          | .............   .............   ...   .............   |
          | D^(n−1) f1(x)   D^(n−1) f2(x)   ...   D^(n−1) fn(x)   |

        ≠ 0 .  (3.47)
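The Wronskian test can be sketched in a few lines of sympy. The function pairs below are illustrative: exp(c x) and x exp(c x) (independent, as used later in Example (B)) versus a pair of proportional functions (dependent).

```python
# Wronskian test (3.45)/(3.47), sketched with sympy.
import sympy as sp

x, c = sp.symbols('x c')

def wronskian(fs):
    """Determinant of the matrix whose row i holds the i-th derivatives."""
    n = len(fs)
    return sp.Matrix(n, n, lambda i, j: sp.diff(fs[j], x, i)).det()

# Independent pair: exp(c x) and x exp(c x)  ->  W = exp(2 c x), nonzero.
W_indep = sp.simplify(wronskian([sp.exp(c*x), x*sp.exp(c*x)]))

# Proportional pair: f and 2 f  ->  W = 0, hence linearly dependent.
W_dep = sp.simplify(wronskian([sp.exp(c*x), 2*sp.exp(c*x)]))

print(W_indep, W_dep)  # exp(2*c*x) and 0
```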
(A): Work out the requirement that must be satisfied for a given pair of functions,
f 1 (x) and f 2 (x), each of which is differentiable at least once, to be linearly dependent.
Solution
According to (3.45), two such functions f 1 (x) and f 2 (x) are linearly dependent if
and only if they obey the relationship
W(x) = | f1(x)     f2(x)   |
       | D f1(x)   D f2(x) |  = 0 .  (3.48)
Rewriting (3.48),
D f2(x)/f2(x) − D f1(x)/f1(x) = 0 ,  (3.49)
and integrating,
log[ f 2 (x)] = log[ f 1 (x)] + constant , (3.50)
gives
f 2 (x) = C0 f 1 (x) . (3.51)
Simply expressed, two functions that are proportional are linearly dependent.
(B): Given that u(x) in (3.37) is the complementary solution of the homogeneous
linear ordinary differential equation with constant coefficients (3.28) that has equal
roots, show that u(x) is also its complete solution Scs(x).
In order for (3.37) to be a complete solution of the homogeneous linear ordinary
differential equation with constant coefficients (3.28), the given two functions
f1(x) = exp(c x) and f2(x) = x exp(c x) have to be linearly independent. This will
be the case, according to (3.47), if the following holds true.
D f2(x)/f2(x) − D f1(x)/f1(x) ≠ 0 .  (3.52)
1/x ≠ 0 .  (3.54)
Clearly, as long as x is finite, this inequality holds. Therefore, the given two
functions are linearly independent, and as a result (3.37) is a complete solution of the
homogeneous linear ordinary differential equation with constant coefficients (3.28).
Using the methods described in the foregoing, work out complementary solution to
the set of twelve homogeneous linear ordinary differential equations with constant
coefficients given in (3.55) below.
(D² + 3D + 1) u(x) = 0 .   (1)
(D² + 3D − 1) u(x) = 0 .   (2)
(D² + 3D − 3) u(x) = 0 .   (3)
(D² + 3D + 9/2) u(x) = 0 .   (4)
(D² + 4D + 6) u(x) = 0 .   (5)
(D² + D + 1/2) u(x) = 0 .   (6)
(D + 3/2)² u(x) = 0 .   (7)
(D − 3/2)² u(x) = 0 .   (8)
(D + 2)² u(x) = 0 .   (9)
(D + 1)³ u(x) = 0 .   (10)
D (D² − 4)² u(x) = 0 .   (11)
D² (D² − 4) u(x) = 0 .   (12)   (3.55)
Consider

Σ_{j=0}^{ν} c_j D^j u(x) = B(x) .

A particular integral I_pi of this inhomogeneous equation obeys

Σ_{j=0}^{ν} c_j D^j I_pi = B(x) .   (3.57)
Clearly, by using the procedure outlined in the foregoing subsections, one can also
work out a complementary solution for the homogeneous part of the above inho-
mogeneous linear ordinary differential equation with constant coefficients. In other
words, one can determine Scomp (x) that satisfies the following equation.
Σ_{j=0}^{ν} c_j D^j Scomp(x) = 0 .   (3.58)
The sum of the complementary solution Scomp (x) and the particular integral I pi leads
to complete solution. That is, from (3.57) and (3.58), one has
Σ_{j=0}^{ν} c_j D^j u(x) = B(x)
    = Σ_{j=0}^{ν} c_j D^j Scs(x) = Σ_{j=0}^{ν} c_j D^j [Scomp(x) + I_pi]
    = Σ_{j=0}^{ν} c_j D^j Scomp(x) + Σ_{j=0}^{ν} c_j D^j I_pi = 0 + B(x) = B(x) .   (3.59)
In order to solve and find the particular integral for an inhomogeneous linear ordinary
differential equation with constant coefficients given above, proceed as follows:
Multiply both sides on the left by [Σ_{i=0}^{ν} c_i D^i]⁻¹. The result is

[Σ_{i=0}^{ν} c_i D^i]⁻¹ [Σ_{i=0}^{ν} c_i D^i] I_pi = I_pi = [Σ_{i=0}^{ν} c_i D^i]⁻¹ B(x) .   (3.60)

Expand [Σ_{i=0}^{ν} c_i D^i]⁻¹ B(x) as a series in ascending powers of D and retain only the
terms needed for the given B(x).
I_pi when B(x) = e_n xⁿ + e_0

V(A): Solve

(D² + D + 1) u(x) = e_5 x⁵ + e_0 .
Calculate first the particular integral u(x) = I pi . To this end, as explained above,
invert the left-hand side and expand the resultant in ascending powers of D, retaining
terms only up to the order D 5 . Note that higher powers of D in the expansion would
contribute nothing.
3.3 Method of Undetermined Coefficients 29
V(A):  I_pi = [1/(D² + D + 1)] (e_5 x⁵ + e_0)
            = [1 − D + D³ − D⁴ + O(D⁶)] (e_5 x⁵ + e_0)
            = e_5 (x⁵ − 5x⁴ + 60x² − 120x) + e_0 .   (3.61)
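The inversion in (3.61) can be checked by substituting the result back into the equation. A minimal sketch with polynomials held as coefficient lists (taking e_5 = e_0 = 1 for illustration):

```python
def dpoly(p):
    """Derivative of a polynomial stored as [c0, c1, ...] meaning c0 + c1 x + ..."""
    return [i * p[i] for i in range(1, len(p))]

def addpoly(*ps):
    """Sum of several coefficient lists, padding with zeros."""
    n = max(len(p) for p in ps)
    return [sum(p[i] for p in ps if i < len(p)) for i in range(n)]

# I_pi of (3.61) with e5 = e0 = 1:  1 - 120x + 60x^2 - 5x^4 + x^5
u   = [1, -120, 60, 0, -5, 1]
du  = dpoly(u)
ddu = dpoly(du)

lhs = addpoly(ddu, du, u)          # (D^2 + D + 1) u
assert lhs == [1, 0, 0, 0, 0, 1]   # = x^5 + 1, the required right-hand side
```

The assertion confirms that all the intermediate powers of x cancel, as the operator series guarantees.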
To find the complementary solution, substitute u(x) = exp(k x) into the homogeneous
equation and ignore the common factor exp(k x). The rest is the characteristic equation E_ch:

(k² + k + 1) = 0 .   (3.63)
Its roots are k_{1,2} = (−1 ± i√3)/2. More precisely, the complementary solution is

V(A):  Scomp(x) = exp(−x/2) [σ_3 sin(√3 x/2) + σ_4 cos(√3 x/2)] .   (3.66)

As usual, σ_3 and σ_4 are arbitrary constants. Also, the sine and the cosine functions
are linearly independent. As such, (3.61) and (3.66) together represent the complete
solution, Scs(x) = I_pi + Scomp(x), of differential equation V(A).
V(B): Solve

(D² − D − 1) u(x) = e_3 x³ + e_0 .

Its characteristic equation is

(k² − k − 1) = 0 .   (3.68)

Its roots, k_{1,2} = 1/2 ± √5/2, lead to the complementary solution

V(B):  Scomp(x) = exp(x/2) [σ_1 exp(√5 x/2) + σ_2 exp(−√5 x/2)] .   (3.69)
V(C): Solve

(D² + 2D + 1) u(x) = e_4 x⁴ + e_0 .

The particular integral is calculated the same way as in (3.61) and (3.67):

V(C):  I_pi = [1/(D² + 2D + 1)] (e_4 x⁴ + e_0)
            = [1 − 2D + 3D² − 4D³ + 5D⁴ + O(D⁵)] (e_4 x⁴ + e_0)
            = e_4 (x⁴ − 8x³ + 36x² − 96x + 120) + e_0 .   (3.70)

Next, the Scomp(x). The characteristic equation k² + 2k + 1 = 0 has a double root
k = k_1 = k_2 = −1. Therefore, similar to (3.28)–(3.37), its complementary solution
is
Solution: (3.72)

(1)  I_pi = [D² + 3D + 1]⁻¹ (e_1 x³ + e_0)
          = [1 − 3D + 8D² − 21D³ + ...] (e_1 x³ + e_0)
          = e_1 (x³ − 9x² + 48x − 126) + e_0 .

(2)  I_pi = [D² + 3D − 1]⁻¹ (e_2 x³ + e_0)
          = [−1 − 3D − 10D² − 33D³ + ...] (e_2 x³ + e_0)
          = −e_2 (x³ + 9x² + 60x + 198) − e_0 .
Problems (3)–(10) are handled in the same fashion. In problem (11), use

(1/D) f(x) = ∫ f(x) dx ,

and in (12),

(1/D²) f(x) = ∫ dx ∫ f(x) dx .

Therefore

(1/D) (e_11 x⁶ + e_0) = (e_11/7) x⁷ + e_0 x

and

(1/D²) (e_12 x⁶ + e_0) = (e_12/56) x⁸ + (e_0/2) x² .
In keeping with the tradition, I pi contains only known constants ei , etc. Unknown
constants such as σi , etc., are not included in the result above. That is appropriate
because such constants are already equivalently present in the relevant Scomp (x) in
(3.56) and therefore in the complete solution.
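The series inversion used in (3.72) can be mechanized. A sketch (function names are illustrative) that builds the reciprocal operator series from the recurrence c_0 g_k = −Σ_{j≥1} c_j g_{k−j} and applies it to a polynomial:

```python
from fractions import Fraction

def invert_series(c, order):
    """Coefficients g of 1/f(D), where f(D) = c[0] + c[1] D + ..., truncated at D^order."""
    g = [Fraction(1, 1) / c[0]]
    for k in range(1, order + 1):
        s = sum(c[j] * g[k - j] for j in range(1, min(k, len(c) - 1) + 1))
        g.append(-s / c[0])
    return g

def apply_operator(g, p):
    """Apply sum_k g[k] D^k to a polynomial p = [c0, c1, ...]."""
    out = [Fraction(0)] * len(p)
    d = [Fraction(x) for x in p]
    for k in range(len(g)):
        for i, ci in enumerate(d):
            out[i] += g[k] * ci
        d = [i * d[i] for i in range(1, len(d))]   # differentiate for the next power of D
    return out

# (3.72)(1): 1/(D^2 + 3D + 1) acting on x^3 (e1 = 1, e0 = 0)
g = invert_series([Fraction(1), Fraction(3), Fraction(1)], 3)
assert g == [1, -3, 8, -21]                 # matches the series in (3.72)(1)
ipi = apply_operator(g, [0, 0, 0, 1])
assert ipi == [-126, 48, -9, 1]             # x^3 - 9x^2 + 48x - 126
```

Exact rational arithmetic via `fractions.Fraction` avoids floating-point noise in the coefficients.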
Work out the particular integral I pi for the following ten inhomogeneous linear ordi-
nary differential equations with constant coefficients. The equations given below
were put together by making additions—in the form B(x) = cn x n + c0 —to the
homogeneous linear ordinary differential equation with constant coefficients given
in Problems Group I. Because the relevant complementary solutions are already
available from Problems Group I, only particular integrals, I pi , are being required
here.
(D² + 2D − 3) u(x) = c_1 x³ + c_0 .   (1)
(D² − 3D − 4) u(x) = c_2 x³ + c_0 .   (2)
(D + 1/2)² u(x) = c_3 x³ + c_0 .   (3)
(D − 1/2)² u(x) = c_4 x⁴ + c_0 .   (4)
As such, when acting on exp(α x), a function f(D) that involves only positive powers
of D will lead to

(1/f(α)) f(D) [e_n exp(α x)] = e_n f(α) exp(α x)/f(α) = e_n exp(α x) .   (3.75)

Operate on both sides with 1/f(D):

(1/f(D)) (1/f(α)) f(D) [e_n exp(α x)] = (1/f(D)) [e_n exp(α x)] .   (3.76)

The left-hand sides of (3.76) and (3.77) are equal and hence, also the right-hand
sides:

(1/f(D)) [e_n exp(α x)] = e_n exp(α x)/f(α) .   (3.78)
I_pi = [1/(D² + 3D + 1)] (e_1 exp(α x) + e_0) = e_1 exp(α x)/(α² + 3α + 1) + e_0/1 .   (1)

I_pi = [1/(D² + 3D − 1)] (e_2 exp(α x) + e_0) = e_2 exp(α x)/(α² + 3α − 1) + e_0/(−1) .   (2)

I_pi = [1/(D² + 3D − 3)] (e_3 exp(α x) + e_0) = e_3 exp(α x)/(α² + 3α − 3) + e_0/(−3) .   (3)

I_pi = [1/(D² + 3D + 9/2)] (e_4 exp(α x) + e_0) = e_4 exp(α x)/(α² + 3α + 9/2) + e_0/(9/2) .   (4)

I_pi = [1/(D² + 4D + 6)] (e_5 exp(α x) + e_0) = e_5 exp(α x)/(α² + 4α + 6) + e_0/6 .   (5)

I_pi = [1/(D² + D + 1/2)] (e_6 exp(α x) + e_0) = e_6 exp(α x)/(α² + α + 1/2) + e_0/(1/2) .   (6)

I_pi = [1/(D + 3/2)²] (e_7 exp(α x) + e_0) = e_7 exp(α x)/(α + 3/2)² + e_0/(9/4) .   (7)

I_pi = [1/(D − 3/2)²] (e_8 exp(α x) + e_0) = e_8 exp(α x)/(α − 3/2)² + e_0/(9/4) .   (8)

I_pi = [1/(D + 2)²] (e_9 exp(α x) + e_0) = e_9 exp(α x)/(α + 2)² + e_0/4 .   (9)

I_pi = [1/(D + 1)³] (e_10 exp(α x) + e_0) = e_10 exp(α x)/(α + 1)³ + e_0/1 .   (10)
I_pi = [1/(D(D² − 4)²)] (e_11 exp(α x) + e_0) = [1/(D(D² − 4)²)] [e_11 exp(α x)]
       + (1/(16D)) [1 − D²/4]⁻² e_0
     = e_11 exp(α x)/(α(α² − 4)²) + (x/16) e_0 .   (11)

I_pi = [1/(D²(D² − 1))] (e_12 exp(α x) + e_0) = [1/(D²(D² − 1))] [e_12 exp(α x)]
       + (1/(−D²)) [1 − D²]⁻¹ e_0
     = e_12 exp(α x)/(α²(α² − 1)) − (x²/2) e_0 − e_0 .   (12)   (3.80)
cos(α x) = (1/2) [exp(iα x) + exp(−iα x)] ,
sin(α x) = (1/2i) [exp(iα x) − exp(−iα x)] .   (3.81)

This renders the inhomogeneous terms similar to those treated in detail in (3.79)
and (3.80).
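Rule (3.78) holds just as well for complex α, which is what makes the conversion (3.81) useful. A numerical sketch using crude central differences (the step size and sample point are illustrative choices):

```python
import cmath

def nth_deriv(f, x, k, h=1e-2):
    """k-th derivative via nested central differences (adequate for smooth f, small k)."""
    if k == 0:
        return f(x)
    return (nth_deriv(f, x + h, k - 1, h) - nth_deriv(f, x - h, k - 1, h)) / (2 * h)

alpha = 2 + 1j                      # a complex alpha covers the cos/sin cases of (3.81)
f = lambda x: cmath.exp(alpha * x)
x = 0.3

# f(D) = D^2 + 3D + 1 acting on exp(alpha x) just multiplies it by f(alpha)
lhs = nth_deriv(f, x, 2) + 3 * nth_deriv(f, x, 1) + f(x)
rhs = (alpha**2 + 3 * alpha + 1) * f(x)
assert abs(lhs - rhs) < 1e-2

# hence u = exp(alpha x)/f(alpha) is a particular integral of f(D) u = exp(alpha x)
u = lambda x: cmath.exp(alpha * x) / (alpha**2 + 3 * alpha + 1)
res = nth_deriv(u, x, 2) + 3 * nth_deriv(u, x, 1) + u(x) - cmath.exp(alpha * x)
assert abs(res) < 1e-2
```

The tolerances are loose because the finite-difference derivatives carry O(h²) truncation error.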
Use (3.81) and information provided in (3.78) to solve for the inhomogeneous linear
ordinary differential equations with constant coefficients given below.
(D² + 3D + 1) u(x) = 2 cos(x) .   (1)
(D² + 3D − 1) u(x) = 2i sin(x) .   (2)
(D² + 3D − 3) u(x) = 2 cos(x) .   (3)
(D² + 3D + 9/2) u(x) = 2 cos(x) .   (4)
(D² + 4D + 6) u(x) = 2i sin(x) .   (5)
(D² + D + 1/2) u(x) = 2i sin(x) .   (6)
(D + 3/2)² u(x) = 4 cos²(x) .   (7)
(D − 3/2)² u(x) = 4 sin²(x) .   (8)
(D + 2)² u(x) = 2 cos(2x) .   (9)
(D + 1)³ u(x) = 2i sin(2x) .   (10)
D (D² − 4) u(x) = 4 cos²(2x) .   (11)
D² (D² − 1) u(x) = −4 sin²(2x) .   (12)   (3.82)
I pi for (3.82)
Because the Scomp (x) have already been calculated for the differential equations
(3.82)—see (3.55) and (3.56)—only the I pi are worked out here.
(1)  I_pi :  [1/(D² + 3D + 1)] 2 cos(x) = [1/(D² + 3D + 1)] [exp(i x) + exp(−i x)]
           = exp(i x)/(3i) − exp(−i x)/(3i) = (2/3) sin(x) .

(2)  I_pi :  [1/(D² + 3D − 1)] 2i sin(x) = [1/(D² + 3D − 1)] [exp(i x) − exp(−i x)]
           = −exp(i x)/(2 − 3i) + exp(−i x)/(2 + 3i) = (−i/13) [4 sin(x) + 6 cos(x)] .
(3)  I_pi :  [1/(D² + 3D − 3)] 2 cos(x) = [1/(D² + 3D − 3)] [exp(i x) + exp(−i x)]
           = −exp(i x)/(4 − 3i) − exp(−i x)/(4 + 3i) = (2/25) [−4 cos(x) + 3 sin(x)] .

(4)  I_pi :  [1/(D² + 3D + 9/2)] 2 cos(x) = [1/(D² + 3D + 9/2)] [exp(i x) + exp(−i x)]
           = exp(i x)/(7/2 + 3i) + exp(−i x)/(7/2 − 3i) = (4/85) [6 sin(x) + 7 cos(x)] .
(5)  I_pi :  [1/(D² + 4D + 6)] 2i sin(x) = [1/(D² + 4D + 6)] [exp(i x) − exp(−i x)]
           = exp(i x)/(5 + 4i) − exp(−i x)/(5 − 4i) = (i/41) [10 sin(x) − 8 cos(x)] .

(6)  I_pi :  [1/(D² + D + 1/2)] 2i sin(x) = [1/(D² + D + 1/2)] [exp(i x) − exp(−i x)]
           = exp(i x)/(−1/2 + i) − exp(−i x)/(−1/2 − i) = −(4i/5) [sin(x) + 2 cos(x)] .
(7)  I_pi :  [1/(D + 3/2)²] 4 cos²(x) = [1/(D + 3/2)²] [exp(2i x) + exp(−2i x) + 2]
           = exp(2i x)/(2i + 3/2)² + exp(−2i x)/(−2i + 3/2)² + 2/(3/2)²
           = (192/625) sin(2x) − (56/625) cos(2x) + 8/9 .

(8)  I_pi :  [1/(D − 3/2)²] 4 sin²(x) = [1/(D − 3/2)²] [2 − exp(2i x) − exp(−2i x)]
           = 2/(3/2)² − exp(2i x)/(2i − 3/2)² − exp(−2i x)/(−2i − 3/2)²
           = 8/9 + (192/625) sin(2x) + (56/625) cos(2x) .
(9)  I_pi :  [1/(D + 2)²] 2 cos(2x) = (1/4) [exp(2i x)/(1 + i)² + exp(−2i x)/(1 − i)²]
           = sin(2x)/4 .

(10) I_pi :  [1/(D + 1)³] 2i sin(2x) = [1/(D + 1)³] [exp(2i x) − exp(−2i x)]
           = exp(2i x)/(1 + 2i)³ − exp(−2i x)/(1 − 2i)³
           = −(2i/125) [11 sin(2x) − 2 cos(2x)] .
(11) I_pi :  [1/(D(D² − 4))] 4 cos²(2x) = [1/(D³ − 4D)] [exp(4i x) + exp(−4i x) + 2]
           = (1/4i) exp(4i x)/((4i)² − 4) − (1/4i) exp(−4i x)/((−4i)² − 4)
             + (1/(−4D)) [1 − D²/4]⁻¹ 2
           = −sin(4x)/40 − x/2 .
(12) I_pi :  [1/(D²(D² − 1))] (−4 sin²(2x)) = [1/(D²(D² − 1))] [exp(4i x) + exp(−4i x) − 2]
           = exp(4i x)/((4i)²[(4i)² − 1]) + exp(−4i x)/((−4i)²[(−4i)² − 1]) + (1/D²) 2 − 2
           = cos(4x)/136 + x² − 2 .   (3.83)
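Two of the results above can be spot-checked by direct substitution; a minimal sketch using exact derivatives of sine and cosine:

```python
import math

# (3.83)(1): I_pi = (2/3) sin x should satisfy (D^2 + 3D + 1) u = 2 cos x.
u   = lambda x:  (2.0 / 3.0) * math.sin(x)
du  = lambda x:  (2.0 / 3.0) * math.cos(x)
ddu = lambda x: -(2.0 / 3.0) * math.sin(x)
for x in (0.0, 0.7, 2.4, -1.1):
    assert abs(ddu(x) + 3 * du(x) + u(x) - 2 * math.cos(x)) < 1e-12

# (3.83)(9): I_pi = sin(2x)/4 should satisfy (D + 2)^2 u = u'' + 4u' + 4u = 2 cos(2x).
v   = lambda x:  math.sin(2 * x) / 4
dv  = lambda x:  math.cos(2 * x) / 2
ddv = lambda x: -math.sin(2 * x)
for x in (0.0, 0.7, 2.4, -1.1):
    assert abs(ddv(x) + 4 * dv(x) + 4 * v(x) - 2 * math.cos(2 * x)) < 1e-12
```

In both cases the second-derivative terms cancel the undifferentiated terms exactly, leaving the stated right-hand sides.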
Additions of the form B(x) = sin(n x) or cos(n x) have been made to the right-hand
sides of the homogeneous linear ordinary differential equations with constant
coefficients in Problems Group I. Work out the particular integrals, I_pi, for the
following differential equations.
(D² + 2D − 3) u(x) = 2 cos(x) .   (1)
(D² − 3D − 4) u(x) = 2i sin(x) .   (2)
(D + 1/2)² u(x) = sin(x) sin(2x) .   (3)
(D − 1/2)² u(x) = sin(x) cos(2x) .   (4)
(D + 2)³ u(x) = 4 cos²(x) .   (5)
(D − 2)³ u(x) = −4 sin²(x) .   (6)
(D² + D + 1) u(x) = 2 cos(2x) .   (7)
(D² − D + 1) u(x) = 2i sin(2x) .   (8)
(D² + 2D + 3) u(x) = 4 cos²(2x) .   (9)
(D² + 3D + 4) u(x) = −4 sin²(2x) .   (10)
Consider next the equation

(D² + 2D + 1) u(x) = 2 x³ exp(x) .

Notice that α = 1 here. Calculate first the particular integral:

I_pi = [1/(D² + 2D + 1)] exp(x) 2x³
     = exp(x) [1/((D + 1)² + 2(D + 1) + 1)] 2x³
     = exp(x) [1/(D² + 4D + 4)] 2x³
     = (exp(x)/2) [1/(1 + D + D²/4)] x³
     = (exp(x)/2) [1 − D + (3/4) D² − (1/2) D³ + O(D⁴)] x³ ,

I_pi = (exp(x)/2) [x³ − 3x² + (9/2) x − 3] .   (3.87)
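Result (3.87) can be confirmed by substitution. For u = exp(x) p(x), the product rule gives (D² + 2D + 1) u = exp(x) (4p + 4p' + p''), which reduces the check to polynomial arithmetic; a sketch in exact rationals:

```python
from fractions import Fraction as F

def dpoly(p):
    """Derivative of p = [c0, c1, ...] meaning c0 + c1 x + ..., zero-padded."""
    return [i * p[i] for i in range(1, len(p))] + [F(0)]

def combo(*terms):
    """Linear combination of (coefficient, polynomial) pairs."""
    n = max(len(p) for _, p in terms)
    return [sum(c * (p[i] if i < len(p) else F(0)) for c, p in terms) for i in range(n)]

# (3.87): u = exp(x) p(x) with p = (x^3 - 3x^2 + (9/2)x - 3)/2
p   = [F(-3, 2), F(9, 4), F(-3, 2), F(1, 2)]
dp  = dpoly(p)
ddp = dpoly(dp)

lhs = combo((F(4), p), (F(4), dp), (F(1), ddp))   # 4p + 4p' + p''
assert lhs == [0, 0, 0, 2]                        # so (D^2+2D+1) u = 2 x^3 exp(x)
```

The identity (D² + 2D + 1)[e^x p] = e^x(4p + 4p' + p'') is just the exponential-shift rule with α = 1.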
The characteristic equation, E ch , is found from the differential equation in the usual
manner. The complementary solution, Scomp (x), is found according to (3.39)–(3.42).
Both are given below in (3.88).
E_ch :  k² + 2k + 1 = 0 ;   (D² + 2D + 1) u = 0 .   (3.88)

The characteristic equation has a double root k = −1. Therefore, just as in (3.37),
the complementary solution is

Scomp(x) = (σ_1 + σ_2 x) exp(−x) .   (3.89)
The complete solution, Scs (x), is the sum of Scomp (x), (3.89), and the I pi , (3.87).
Again treat the same twelve homogeneous linear ordinary differential equations
with constant coefficients as in (3.55) and change them into inhomogeneous linear
ordinary differential equations with constant coefficients by adding B(x) =
exp(α x) xⁿ. The above procedure can readily be generalized to deal with cases
where B(x) is either exp(α x){cos(x)}^μ xⁿ or exp(α x){sin(x)}^μ xⁿ. That is so
because such a B(x) can again be expressed in terms of exponentials of the form
exp[(α ± iν) x] xⁿ. In this spirit, solve the following set of twelve equations (3.90).
(D² + 3D + 1) u(x) = exp(3x) x² .   (1)
(D² + 3D − 1) u(x) = exp(3x) x² .   (2)
(D² + 3D − 3) u(x) = exp(3x) x² .   (3)
(D² + 3D + 9/2) u(x) = exp(4x) x² .   (4)
(D² + 4D + 6) u(x) = exp(4x) x² .   (5)
(D² + D + 1/2) u(x) = exp(4x) x² .   (6)
(D + 3/2)² u(x) = 2 exp(−x) cos(x) x .   (7)
(D − 3/2)² u(x) = 2 exp(x) cos(x) x .   (8)
(D + 2)² u(x) = 2i exp(−x) sin(x) x .   (9)
(D + 1)³ u(x) = 2i exp(x) sin(x) x .   (10)
D (D² − 4) u(x) = 4 cos²(2x) x² .   (11)
D² (D² − 1) u(x) = −4 sin²(2x) x² .   (12)   (3.90)
I pi for (3.90)
We use (3.84), (3.85), and (3.86). Again, because Scomp (x) has already been calculated
for the differential equations (3.90)—see (3.55) and (3.56)—only the I pi are being
worked out here.
(1)  I_pi :  [1/(D² + 3D + 1)] exp(3x) x² = exp(3x) [1/((D + 3)² + 3(D + 3) + 1)] x²
           = exp(3x) [1/(D² + 9D + 19)] x² = (exp(3x)/19) [1 − (9/19) D + (62/361) D²] x²
           = (exp(3x)/19) [x² − (18/19) x + 124/361] .   (1)
(2)  I_pi :  [1/(D² + 3D − 1)] exp(3x) x² = exp(3x) [1/((D + 3)² + 3(D + 3) − 1)] x²
           = exp(3x) [1/(D² + 9D + 17)] x² = (exp(3x)/17) [1 − (9/17) D + (64/289) D²] x²
           = (exp(3x)/17) [x² − (18/17) x + 128/289] .   (2)
(3)  I_pi :  [1/(D² + 3D − 3)] exp(3x) x² = exp(3x) [1/((D + 3)² + 3(D + 3) − 3)] x²
           = exp(3x) [1/(D² + 9D + 15)] x² = (exp(3x)/15) [1 − (3/5) D + (22/75) D²] x²
           = (exp(3x)/15) [x² − (6/5) x + 44/75] .   (3)
(4)  I_pi :  [1/(D² + 3D + 9/2)] exp(4x) x² = exp(4x) [1/((D + 4)² + 3(D + 4) + 9/2)] x²
           = exp(4x) [1/(D² + 11D + 65/2)] x² = 2 (exp(4x)/65) [1 − (22/65) D + (354/65²) D²] x²
           = 2 (exp(4x)/65) [x² − (44/65) x + 708/65²] .   (4)
(5)  I_pi :  [1/(D² + 4D + 6)] exp(4x) x² = exp(4x) [1/((D + 4)² + 4(D + 4) + 6)] x²
           = exp(4x) [1/(D² + 12D + 38)] x² = (exp(4x)/38) [1 − (6/19) D + (53/722) D²] x²
           = (exp(4x)/38) [x² − (12/19) x + 53/19²] .   (5)
(6)  I_pi :  [1/(D² + D + 1/2)] exp(4x) x² = exp(4x) [1/((D + 4)² + (D + 4) + 1/2)] x²
           = exp(4x) [1/(D² + 9D + 41/2)] x² = 2 (exp(4x)/41) [1 − (18/41) D + (242/41²) D²] x²
           = 2 (exp(4x)/41) [x² − (36/41) x + 484/41²] .   (6)
(7)  I_pi :  [1/(D + 3/2)²] [2 exp(−x) cos(x) x]
           = exp{−x(1 − i)} [1/(D + 1/2 + i)²] x + exp{−x(1 + i)} [1/(D + 1/2 − i)²] x
           = [exp{−x(1 − i)}/(−3/4 + i)] [1 + (4/5)(−1 + 2i) D] x
             + [exp{−x(1 + i)}/(−3/4 − i)] [1 − (4/5)(1 + 2i) D] x
           = (8x exp(−x)/25) [4 sin(x) − 3 cos(x)] + (32 exp(−x)/125) [2 sin(x) + 11 cos(x)] .   (7)
(8)  I_pi :  [1/(D − 3/2)²] [2 exp(x) cos(x) x]
           = exp{x(1 + i)} [1/(D − 1/2 + i)²] x + exp{x(1 − i)} [1/(D − 1/2 − i)²] x
           = [exp{x(1 + i)}/(−3/4 − i)] [1 + (4/5)(1 + 2i) D] x
             + [exp{x(1 − i)}/(−3/4 + i)] [1 + (4/5)(1 − 2i) D] x
           = −(8x exp(x)/25) [4 sin(x) + 3 cos(x)] + (8 exp(x)/125) [8 sin(x) − 44 cos(x)] .   (8)
(9)  I_pi :  [1/(D + 2)²] [2i exp(−x) sin(x) x]
           = exp{−x(1 − i)} [1/(D + 1 + i)²] x − exp{−x(1 + i)} [1/(D + 1 − i)²] x
           = [exp{−x(1 − i)}/(2i)] [1 + (i − 1) D] x + [exp{−x(1 + i)}/(2i)] [1 − (i + 1) D] x
           = i exp(−x) [sin(x) + cos(x) − x cos(x)] .   (9)
(10) I_pi :  [1/(D + 1)³] [2i exp(x) sin(x) x]
           = exp{x(1 + i)} [1/(D + 2 + i)³] x − exp{x(1 − i)} [1/(D + 2 − i)³] x
           = [exp{x(1 + i)}/(2 + i)³] [1 − 3D/(2 + i)] x − [exp{x(1 − i)}/(2 − i)³] [1 − 3D/(2 − i)] x
           = (2i exp(x)/625) [21 sin(x) + 72 cos(x) + 10 x sin(x) − 55 x cos(x)] .   (10)
(11) I_pi :  [1/(D(D² − 4))] [4 cos²(2x) x²] = [1/(D³ − 4D)] [exp(4i x) + exp(−4i x) + 2] x²
           = exp(4i x) [1/((D + 4i)³ − 4(D + 4i))] x² + exp(−4i x) [1/((D − 4i)³ − 4(D − 4i))] x²
             + [1/(D³ − 4D)] 2x²
           = exp(4i x) [1/(D³ + 12i D² − 52D − 80i)] x² + exp(−4i x) [1/(D³ − 12i D² − 52D + 80i)] x²
             + (1/(−4D)) [1 − D²/4]⁻¹ 2x²
           = exp(4i x) [i/80 − (13/1600) D − (109i/32000) D²] x²
             + exp(−4i x) [−i/80 − (13/1600) D + (109i/32000) D²] x² − x/4 − x³/6
           = −x² sin(4x)/40 − (13/400) x cos(4x) + (109/8000) sin(4x) − x/4 − x³/6 .   (11)
(12) I_pi :  [1/(D²(D² − 1))] [−4 sin²(2x) x²] = [1/(D⁴ − D²)] [exp(4i x) + exp(−4i x) − 2] x²
           = exp(4i x) [1/((D + 4i)⁴ − (D + 4i)²)] x² + exp(−4i x) [1/((D − 4i)⁴ − (D − 4i)²)] x²
             + (1/D²) [1 − D²]⁻¹ 2x²
           = exp(4i x) [1/(D⁴ + 16i D³ − 97D² − 264i D + 272)] x²
             + exp(−4i x) [1/(D⁴ − 16i D³ − 97D² + 264i D + 272)] x²
             + (1/D²) [1 − D²]⁻¹ 2x²
           = (exp(4i x)/272) [1 + (33i/34) D − (2707/4624) D²] x²
             + (exp(−4i x)/272) [1 − (33i/34) D − (2707/4624) D²] x² + x⁴/6 + 2x² + 4
           = (cos(4x)/136) [x² − 2707/2312] − (33/2312) x sin(4x) + x⁴/6 + 2x² + 4 .   (12)   (3.91)
6
In particular, for problems (3) and (4), it is helpful to note the equalities
sin(x) cos(2x) = sin(x) − 2 sin³(x) and sin(x) sin(2x) = 2 cos(x) − 2 cos³(x).
An ordinary differential equation (ODE) has only one independent variable and
usually only a single dependent variable. But occasionally, the needs of a subject
matter require more than one dependent variable. For a complete description, one
then needs either partial differential equations or more than one simultaneous
linear ordinary differential equation with constant coefficients. Here, we treat the
latter option. Generally, the larger the number of these equations, the greater the
effort needed to solve them. Therefore, for convenience, we work with equations
that have only one, two, or three dependent variables. There are cases where the
differential equations can be separated so that each involves only a single
dependent variable. For additional convenience, we work with such cases first.
Choose t as the single independent variable and denote a first differential with
respect to t by the symbol ∂. That is,

∂ ≡ d/dt .
{A} Number of Constants

{A}: Consider first a very simple example with only two dependent variables y ≡
y(t) and z ≡ z(t) that satisfy the following simultaneous equations.

∂y(t) − z(t) = 0 ,
y(t) − ∂z(t) = 0 .   (3.92)
As noted in the passage following (3.4), the minimum number of constants needed for
a complete solution to a single homogeneous linear ordinary differential equation
with constant coefficients is equal to the order of the given differential equation.
But when dealing with a series of coupled homogeneous linear ordinary differential
equations with constant coefficients, the answer to this question is more subtle. Here,
one needs to work with the determinant of the coefficients that multiply each of the
3.4 Simultaneous Linear (ODEs) with Constant Coefficients 47
dependent variables and then look for the highest power of the differentials that occur.
A practical demonstration helps with explanation of this statement.
The determinant of the operational coefficients of the dependent variables y(t)
and z(t) in the two coupled equations (3.92) is

| ∂    −1 |
|         |   = −∂² + 1 .
| 1    −∂ |

Because this determinant is not manifestly equal to zero, look for the highest power
of ∂ that occurs. It is equal to two. Therefore, the complete solution to this pair of
differential equations will have two arbitrary constants.
{A} Solution
The dependent variables y(t) and z(t) in (3.92) can readily be separated. To this end,
operate with ∂ from the left on (3.92).
Now, by using the original equations (3.92), eliminate ∂z(t) and ∂y(t).
The result is two homogeneous linear ordinary differential equations with constant
coefficients. Each of these two differential equations involves only a single dependent
variable, that is, y(t) or z(t). And they both have constant coefficients. Now the well-
worked procedure can be used to find their solution.
Unfortunately, the full complementary solution, Scomp; y (t) + Scomp; z (t), contains
four arbitrary independent constants: σ1 → σ4 . But there should really be a total
of only two undetermined constants. So what to do? Substitute the results, namely
y(t) = Scomp; y (t) and z(t) = Scomp; z (t), into original differential equation (3.92) and
see what happens. One gets
σ3 = − σ1 ; σ4 = σ2 . (3.98)
Therefore, according to (3.95) and (3.98), the solution to the simultaneous linear
ordinary differential equations with constant coefficients (3.92) is the following:
{B}: The simultaneous linear ordinary differential equations with constant coef-
ficients {A} in (3.92) were easy to solve. Let us continue to treat similarly simple
simultaneous equations but increase their number to three.
∂x(t) = y(t) ;
∂y(t) = z(t) ;
∂z(t) = x(t) .   (3.100)
There are three dependent variables x(t), y(t), and z(t) with t as the independent
variable. The determinant of the operational coefficients of the given simultaneous
equations is
| ∂    −1    0  |
| 0     ∂    −1 |  .
| −1    0    ∂  |

Again, because this determinant is not manifestly equal to zero, the relevant
parameter is the highest power of ∂ that occurs. It is equal to three. Therefore,
the complete solution will have three arbitrary constants.
{B}: Solution
Once again the dependent variables can readily be separated. To separate x(t) from
y(t) and z(t), operate with ∂ from the left on the top equation in (3.100). Next,
operate with ∂ on the middle equation in (3.100) and make use of the bottom
equation there.
In fact, the same process can be repeated for the dependent variables y(t) and z(t).
As a result, two remaining equations are obtained. Because of the x(t), y(t), z(t)
symmetry inherent in (3.100), the last two equations are very much like the first
differential equation (3.103). That is
(∂³ − 1) y(t) = 0 ;
(∂³ − 1) z(t) = 0 .   (3.104)
All these three homogeneous linear ordinary differential equations with constant
coefficients are of third order. Therefore, complete solution of each will have three
independent constants, thus making a grand total of nine constants. But as noted
before, only three are needed. Therefore, six unnecessary constants will have to be
eliminated. Fortunately, because the given simultaneous linear ordinary differential
equations are all symmetric in structure, one needs to solve only one of the three.
The Scomp; x (t) for (3.103) is found in the usual manner as follows.
E_ch; x :  k³ − 1 = 0 :   k_1 = 1 ;   k_{2,3} = (−1 ± i√3)/2 .
Therefore,

Scomp; x (t) = σ_1 exp(t) + exp(−t/2) [σ_2 exp(−i√3 t/2) + σ_3 exp(i√3 t/2)]
             = σ_1 exp(t) + exp(−t/2) [(σ_2 + σ_3) cos(√3 t/2) − i(σ_2 − σ_3) sin(√3 t/2)] .   (3.105)
To eliminate the six unnecessary constants, substitute the results provided in (3.105)
and (3.106) in the form: x(t) = Scomp; x (t), y(t) = Scomp; y (t), and z(t) = Scomp; z (t),
into the original differential equations (3.100). One gets
These equations provide the relationships that allow for the cancelation of six of
the nine unknowns. But despite the straightforward nature of the needed algebra,
direct elimination of these constants from (3.107) to (3.109), as they are currently
structured, is a lengthy undertaking. With a view to finding a more convenient format
for these equations that would help relieve this difficulty, rewrite Scomp; x (t) in (3.105)
as follows.
Scomp; x (t) = σ_1 exp(t) + σ_0 exp(−t/2) [cos(φ) cos(√3 t/2) − sin(φ) sin(√3 t/2)] .

Or equivalently,

Scomp; x (t) = σ_1 exp(t) + σ_0 exp(−t/2) cos(√3 t/2 + φ) .   (3.110)

Here,

cos(φ) = (σ_2 + σ_3)/σ_0 ;   sin(φ) = i (σ_2 − σ_3)/σ_0 .   (3.111)

Because cos²(φ) + sin²(φ) = 1, the constant σ_0 is chosen such that

4 σ_2 σ_3 / σ_0² = 1 .   (3.112)
Note that Scomp; x (t) still has three arbitrary constants. In addition to σ1 , there are the
two new constants σ0 and the angle φ. Of course, σ0 and φ are functions of the original
arbitrary constants σ2 and σ3 . Again, because of symmetry already mentioned, the
structure of Scomp; y (t) and Scomp; z (t) can be predicted by inspection.
Scomp; y (t) = σ_1′ exp(t) + σ_0′ exp(−t/2) cos(√3 t/2 + φ′)  :  (a)

Scomp; z (t) = σ_1″ exp(t) + σ_0″ exp(−t/2) cos(√3 t/2 + φ″)  :  (b) .   (3.113)
At this juncture, get back to the original equations and work with them in the
form (3.107)–(3.109). To this end, first use Scomp; x (t) given in (3.110) and
differentiate it to calculate ∂Scomp; x (t):

∂Scomp; x (t) = σ_1 exp(t) + σ_0 exp(−t/2) cos(√3 t/2 + φ + 2π/3) .

(The trigonometric equalities cos(2π/3) = −1/2 and sin(2π/3) = √3/2 were used here.)
Next write the result according to (3.107) as Scomp; y (t) = ∂Scomp; x (t). That is,

σ_1′ exp(t) + σ_0′ exp(−t/2) cos(√3 t/2 + φ′)
    = σ_1 exp(t) + σ_0 exp(−t/2) cos(√3 t/2 + φ + 2π/3) .

Matching the two sides for all t requires

σ_1′ = σ_1 ,
σ_0′ = σ_0 ,
φ′ = φ + 2π/3 .
Accordingly,
Scomp; y (t) = σ_1 exp(t) + σ_0 exp(−t/2) cos(√3 t/2 + φ + 2π/3) .   (3.114)
Finally, because of symmetry, (3.110) and (3.114) lead by induction to the final
result
Scomp; z (t) = σ_1 exp(t) + σ_0 exp(−t/2) cos(√3 t/2 + φ + 4π/3) .   (3.115)
{C}: The simultaneous differential equations in problem {B} were in principle ele-
mentary even though eliminating the six unnecessary independent constants took
effort. In (3.116) given below, the level of complexity is raised a little. But here one
is helped by the fact that there are fewer constants that need eliminating.
Because this determinant is not manifestly equal to zero, look for the highest power
of ∂ that occurs. Clearly, it is equal to two. Therefore, the complete solution to
this pair of simultaneous linear ordinary differential equations will have only two
arbitrary constants.
{C}: Solution
Following the usual protocol, it is necessary to remove one dependent variable from
each of the given two simultaneous linear ordinary differential equations. Only ele-
mentary algebra is needed first to eliminate y and next x. The result is the following
pair of differential equations.
E_ch; x :  k² + 2k = 0 ;   k_1 = 0 ;   k_2 = −2 ;
E_ch; y :  2(k² + 2k) = 0 ;   k_1 = 0 ;   k_2 = −2 ;

Scomp; x (t) = σ_1 + σ_2 exp(−2t) ;   Scomp; y (t) = σ_3 + σ_4 exp(−2t) ;

I_pi; x = [1/((∂ + 1)² + 2(∂ + 1))] 3 exp(t) = (3/3) exp(t) = exp(t) ;

I_pi; y = [1/(2[(∂ + 1)² + 2(∂ + 1)])] 3 exp(t) = (3/6) exp(t) = exp(t)/2 .   (3.118)
That is
Or equivalently
For arbitrary t, this equation can be satisfied only if σ_3 = σ_1/2 and σ_4 = −σ_2/2. Hence,
the complete solution of the simultaneous differential equations (3.116) is
Because this determinant is not manifestly equal to zero, look for the highest power
of ∂ that occurs. It is equal to two. Therefore, the complete solution to this pair of
differential equations will have two arbitrary constants.
{D}: Solution
To eliminate x(t), multiply the bottom line in (3.123) by (∂ + 1) and subtract the
result from the line above.
Equivalently,
Clearly, σ1 is one of the two independent constants. This leaves only one additional
independent constant to find.
Next, in order to eliminate y(t), multiply the bottom line in (3.123) by (∂ + 2)
and subtract the result from the top line.
Equivalently,
{E}: Solve
Because this determinant is not manifestly equal to zero, look for the highest power of
∂ that occurs. It is equal to three. Therefore, the complete solution to the simultaneous
linear ordinary differential equation (3.129) will have three arbitrary constants.
{E}: Solution
To eliminate x(t), multiply the top line in (3.129) by (2∂ + 1) and the bottom by
(∂² + 3∂ + 4). Subtract the resultant equation at the bottom from the corresponding
one at the top.
Equivalently,
Similarly, to eliminate y(t), multiply the top line in (3.129) by (∂ + 1) and the bottom
by (∂² + ∂ + 4). Subtract the resultant equation at the top from the corresponding
one at the bottom.
Equivalently,
x(t) = exp(t) (1 − t) + 2t² + 3t − 3/2
       + σ_4 + exp(t/2) [σ_5 sin(√7 t/2) + σ_6 cos(√7 t/2)] ;

y(t) = (exp(t)/2) (3t − 5/2) − 2t² − 5t + 1/2
       + σ_1 + exp(t/2) [σ_2 sin(√7 t/2) + σ_3 cos(√7 t/2)] ,   (3.135)
into at least one of the original equations. The second of the original equations
(3.129)—namely (2∂ + 1) x(t) + (∂ + 1) y(t) − 2t = 0—is clearly the easier to
handle. After a little algebra, one gets

(σ_1 + σ_4) + exp(t/2) sin(√7 t/2) [(3/2)(σ_2 + σ_5) − (√7/2)(σ_3 + σ_6)]
            + exp(t/2) cos(√7 t/2) [(3/2)(σ_3 + σ_6) + (√7/2)(σ_2 + σ_5)] = 0 .   (3.136)
Insert σ_4 = −σ_1, σ_5 = −σ_2, and σ_6 = −σ_3 into (3.135) describing x(t). One gets

x(t) = exp(t) (1 − t) + 2t² + 3t − 3/2
       − σ_1 − exp(t/2) [σ_2 sin(√7 t/2) + σ_3 cos(√7 t/2)] .   (3.138)
Equations (3.135) for y(t) and (3.138) for x(t) are the complete solution to the
pair of simultaneous linear ordinary differential equations (3.129) and appropriately
have only three arbitrary constants σ_1, σ_2, and σ_3.
{F}: Solve

(∂) x(t) + (4∂) y(t) = σ_0 t ,
(3∂) x(t) + (12∂) y(t) = 2t ,   (3.139)

where t is the independent variable and x(t) and y(t) are dependent variables.

{F}: Number of Independent Constants

The determinant of the operational coefficients of (3.139) is

| ∂     4∂  |
|           |   = 0 .   (3.140)
| 3∂    12∂ |
In other words, the relevant determinant is manifestly equal to zero. Thus, one won-
ders as to whether a solution to the simultaneous linear ordinary differential equa-
tion (3.139) is at all possible? And, if a solution is possible, how many independent
constants would there be?
In an attempt to deal with these issues, let us separate the equations. To this end,
multiply the first of (3.139) from the left by (12∂) and the second by (4∂), and
subtract the second from the first, thereby canceling the terms multiplying y(t) as
well as those multiplying x(t). One gets

0 = (12∂)(σ_0 t) − (4∂)(2t) = 12 σ_0 − 8 .

Therefore, in order that there be a solution to (3.139), the following has to be true:

σ_0 = 2/3 .   (3.142)
And if this relationship holds, then the two equations in (3.139) are identical
and one is left with only a single equation: namely (∂) x(t) + (4∂) y(t) = (2/3) t.
By itself this equation contains insufficient information to solve for both x(t)
and y(t). On the other hand, if one of the dependent variables—say y(t)—were
freely chosen, the differential equation involving x(t) would very likely have a
solution. But because the choice of y(t) is arbitrary, it may in principle carry an
arbitrary number of constants. Thus, there is no fixed limit on how many
independent constants such a solution would have.
Solve the following pairs of simultaneous linear ordinary differential equations with
constant coefficients. Reminder: ∂ = d/dt and D = d/dx.
Both single and simultaneous linear ordinary differential equations with constant
coefficients were discussed in detail in Chap. 3. Here, that analysis is extended to lin-
ear ordinary differential equations with variable coefficients. Differential equations
with non-constant coefficients are in general much harder to solve. For simplicity,
therefore, only first-order and first-degree equations of type (A) are handled.
But with help from the Bernoulli suggestion, these equations can be transformed
into equations similar to (A).
Linear ordinary differential equations that are of the first order and first degree and
have variable coefficients are of the general form

D u(x) + M(x) u(x) = N(x) .   (4.2)
Multiply (on the left) the differential equation (4.2) by an as yet unknown factor
F(x).
This implies

F(x) [D u(x)] + [F(x) M(x)] u(x) = F(x) [D u(x)] + u(x) [D F(x)] ,   (4.5)

and hence

[F(x) M(x)] u(x) = u(x) [D F(x)] .   (4.6)

Notice that the function u(x) is not operated upon on either side of (4.6). Indeed, it
appears simply as a multiplying factor. Therefore, it can be eliminated. The result is
a differential equation obeyed by the as yet unknown factor F(x).
Or equivalently,

log F(x) = log σ_0 + ∫ M(x) dx .   (4.8)
4.1 Linear (ODE)’s 61
where

W(x) = exp[∫ M(x) dx] .   (4.10)
Left-hand sides of (4.3) and (4.4) are the same. Therefore, their right-hand sides must
be equal.
F(x)N (x) = D[F(x) u(x)] . (4.11)
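The construction (4.10)–(4.11) translates directly into a numerical scheme: since d(W u)/dx = W N, one has W(x₁) u(x₁) = u(x₀) + ∫ W N dx when W(x₀) = 1. A sketch using trapezoid quadrature (the function name and sample problem are illustrative):

```python
import math

def solve_linear(M, N, x0, u0, x1, n=20000):
    """March u' + M(x) u = N(x) from (x0, u0) to x1 via the integrating factor
    W = exp(int M dx), normalized so that W(x0) = 1."""
    h = (x1 - x0) / n
    logW, acc, x = 0.0, 0.0, x0
    fprev = N(x0)                                # W(x0) * N(x0)
    for _ in range(n):
        logW += 0.5 * h * (M(x) + M(x + h))      # trapezoid for log W
        x += h
        fcur = math.exp(logW) * N(x)
        acc += 0.5 * h * (fprev + fcur)          # trapezoid for int W N dx
        fprev = fcur
    return (u0 + acc) / math.exp(logW)

# Example: M = 3/x, N = x^2, for which W = x^3 and u = x^3/6 + sigma0/x^3
sigma0 = 2.0
exact = lambda x: x**3 / 6 + sigma0 / x**3
approx = solve_linear(lambda x: 3 / x, lambda x: x**2, 1.0, exact(1.0), 2.0)
assert abs(approx - exact(2.0)) < 1e-6
```

The initial value u(x₀) plays the role of the constant σ_0 in the closed-form solution.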
Use integrating factor W (x) and solve the following twelve differential equations.
The notation is as in (4.2), (4.10), and (4.15).
(1) :  M(x) = 3/x ,            N(x) = x² .
(2) :  M(x) = 2x ,             N(x) = x³ .
(3) :  M(x) = (3 + x)/x ,      N(x) = 2/x .
(4) :  M(x) = 1/x ,            N(x) = x² exp(x) .
(5) :  M(x) = 3 tan(x) ,       N(x) = 2 sec(x) .
(6) :  M(x) = 3 cot(x) ,       N(x) = 2 sin(2x) .
(7) :  M(x) = 2x/(x − 2) ,     N(x) = 5 exp(−x) .
(8) :  M(x) = −2/x ,           N(x) = x³ exp(−x) .
(9) :  M(x) = −2/(x + 1) ,     N(x) = x − 1 .
(10) : M(x) = −2/(x − 1) ,     N(x) = (x − 1)² .
(11) : M(x) = 1/(x log(x)) ,   N(x) = x⁴ .
(12) : M(x) = 1/x + cot(x) ,   N(x) = cot(x) .   (4.16)
4.2.1 Solution

Use (4.16)-(1)–(4.16)-(12): First, work out W(x) according to the procedure
described in (4.10). Next, use (4.15) to determine the solution to the relevant
differential equation. [Note: In problem (2) above, the integral ∫ x³ exp(x²) dx can be
evaluated by substituting y = x² and integrating by parts.]
4.2 Examples Group I 63
(1) :  W(x) = exp[∫ (3/x) dx] = exp[log(x³)] = x³ ;
       u = (1/x³) [∫ x³ · x² dx + σ_0] = (1/x³) [x⁶/6 + σ_0] .

(2) :  W(x) = exp[∫ 2x dx] = exp(x²) ;
       u = (1/exp(x²)) [∫ x³ exp(x²) dx + σ_0] = (x² − 1)/2 + σ_0 exp(−x²) .

(3) :  W(x) = exp[∫ ((3 + x)/x) dx] = exp[3 log(x) + x] = x³ exp(x) ;
       u = (1/(x³ exp(x))) [∫ 2x² exp(x) dx + σ_0] = 2 (x² − 2x + 2)/x³ + σ_0/(x³ exp(x)) .

(4) :  W(x) = exp[∫ (1/x) dx] = exp[log(x)] = x ;
       u = (1/x) [∫ x³ exp(x) dx + σ_0] = (1/x) [exp(x)(x³ − 3x² + 6x − 6) + σ_0] .

(5) :  W(x) = exp[∫ 3 tan(x) dx] = exp[−3 log(cos x)] = 1/(cos x)³ ;
       u = (cos x)³ [∫ (2/(cos x)³) sec(x) dx + σ_0]
         = (cos x)³ [(sec x)³ (3 sin x + sin 3x)/3 + σ_0] = (3 sin x + sin 3x)/3 + σ_0 (cos x)³ .

(6) :  W(x) = exp[∫ 3 cot(x) dx] = exp[3 log(sin x)] = (sin x)³ ;
       u = (1/(sin x)³) [∫ (sin x)³ 2 sin(2x) dx + σ_0]
         = (1/(sin x)³) [(10 sin x − 5 sin(3x) + sin(5x))/20 + σ_0] .

(7) :  W(x) = exp[∫ (2x/(x − 2)) dx] = exp[∫ (2 + 4/(x − 2)) dx]
            = exp[2x + 4 log(x − 2)] = (x − 2)⁴ exp(2x) ;
       u = (1/((x − 2)⁴ exp(2x))) [∫ 5 (x − 2)⁴ exp(x) dx + σ_0]
         = (1/((x − 2)⁴ exp(2x))) [5 (x⁴ − 12x³ + 60x² − 152x + 168) exp(x) + σ_0] .
64 4 Variable Coefficients
−2 1 1
(8) : W (x) = exp dx = exp[−2 log(x)] = = 2 ;
x exp[log(x )]2 x
2 1 3 2
u = x x exp(−x) dx + σ0 = x − exp(−x) (x + 1) + σ0 .
x2
−2 1
(9) : W (x) = exp dx = exp −2 log(x + 1) = ;
x +1 (x + 1)2
(x − 1)
u = (x + 1)2 dx + σ0
(x + 1)2
2 2
= (x + 1) + log(1 + x) + σ0 .
(x + 1)
−2 1
(10) : W (x) = exp dx = exp[−2 log(x − 1)] = ;
x −1 (x − 1)2
(x − 1)2
u = (x − 1)2 dx + σ0 = (x − 1)2 [x + σ0 ] .
(x − 1)2
1 1
(11) : W (x) = exp dx = exp d(log x)
x log(x) log x
= exp log(log x) = log(x) ;
5
1 1 x x5
u = x 4 log(x) dx + σ0 = log(x) − + σ0 .
log(x) log(x) 5 25
(12) : W(x) = exp[ ∫ (1/x + cot x) dx ] = exp[ log(x) + log(sin x) ]

= exp[ log(x sin x) ] = x sin x ;

u = (1/(x sin x)) [ ∫ x sin x cot x dx + σ_0 ] = (1/(x sin x)) [ ∫ x cos x dx + σ_0 ]

= (1/(x sin x)) [ x sin x + cos x + σ_0 ] . (4.17)
For given choices of the duo M(x) and N (x), solve the following differential equa-
tions labeled (1)–(12). [Hint: Compare (4.2), (4.10), and (4.15)–(4.17).]
(1) : M(x) = 1/x , N(x) = x .
(2) : M(x) = 3/x , N(x) = x^2 .
(3) : M(x) = (2 + x)/x , N(x) = 3/x .
(4) : M(x) = (1 + 3x)/x , N(x) = x exp(x) .
(5) : M(x) = cot x , N (x) = sec x .
(6) : M(x) = tan x , N (x) = cos x .
(7) : M(x) = x/(x − 1) , N(x) = x exp(x) .
(8) : M(x) = 3 cot x , N(x) = 2 cos x .
(9) : M(x) = −1/(x + 1) , N(x) = (x + 1)^2 .
(10) : M(x) = −1/(x − 1) , N(x) = (x − 1)^2 .
(11) : M(x) = 2/(x log(x)) , N(x) = x .
(12) : M(x) = 1/x + cot x , N(x) = cot x . (4.18)
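The recipe behind these exercises can be mechanized. The sketch below (an illustrative helper, not from the text) builds W(x) as in (4.10), applies (4.15) to exercise (4.18)-(1), and confirms that the result solves the equation:

```python
import sympy as sp

x, sigma0 = sp.symbols('x sigma0', positive=True)

def solve_linear_first_order(M, N):
    """Integrating-factor recipe of (4.10) and (4.15):
    W = exp(integral of M dx), u = (1/W) [ integral of (W N) dx + sigma0 ]."""
    W = sp.exp(sp.integrate(M, x))
    return sp.simplify((sp.integrate(W*N, x) + sigma0)/W)

# Exercise (4.18)-(1): M(x) = 1/x, N(x) = x
u = solve_linear_first_order(1/x, x)

# check: u' + M u - N vanishes, and u matches x**2/3 + sigma0/x
residual = sp.diff(u, x) + (1/x)*u - x
assert sp.simplify(residual) == 0
assert sp.simplify(u - (x**2/3 + sigma0/x)) == 0
```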
Therefore,

Du(x) = (1/(1 − n)) [u_0(x)]^{(1/(1−n)) − 1} du_0(x)/dx . (4.22)
Express u(x) and Du(x) as in (4.21) and represent (4.19) in terms of u 0 (x).
(1/(1 − n)) [u_0(x)]^{(1/(1−n)) − 1} du_0(x)/dx + M(x) [u_0(x)]^{1/(1−n)} = N(x) [u_0(x)]^{n/(1−n)} . (4.23)
Multiply both sides by (1 − n) [u_0(x)]^{−[(1/(1−n)) − 1]} to arrive at the following linear equation:
has its solution embedded in (4.10) and (4.15). Both of these can immediately be transferred to the newest version of the Bernoulli differential equation, that is (4.26), merely by adding the subscript 0 to the functions u(x), M(x), and N(x). So, according to (4.10) and (4.15), the solution to the Bernoulli differential equation (4.26) is as follows:
u_0(x) = (1/W_0(x)) [ ∫ W_0(x) N_0(x) dx + σ_1 ] , (4.28)
With M_0(x) and N_0(x) as given in (4.25), and u_0(x) as in (4.21), equations (4.28) and (4.29) provide the desired solution of the differential equation (4.19).
Make use of (4.20)–(4.29) and solve several Bernoulli equations of the form (4.19).
[D + 4] u 0 (x) = 6 . (4.31)
and
u_0(x) = (1/W_0(x)) [ ∫ W_0(x) N_0(x) dx + σ_1 ]

= exp(−4x) [ ∫ 6 exp(4x) dx + σ_1 ]

= 6/4 + σ_1 exp(−4x) = [u(x)]^{1−n} = [u(x)]^2 . (4.33)
Therefore, according to (4.33), the solution to the Bernoulli differential equation (4.30) is

u(x) = ± [ 6/4 + σ_1 exp(−4x) ]^{1/2} . : (1) (4.34)
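Equation (4.30) is not repeated above, but (4.31) together with u_0 = u^2 pins it down as u′ + 2u = 3/u. On that reading, the plus branch of (4.34) can be checked directly (the minus branch works the same way):

```python
import sympy as sp

x, sigma1 = sp.symbols('x sigma1')

# (4.34), plus branch: u = sqrt(6/4 + sigma1*exp(-4x))
u = sp.sqrt(sp.Rational(6, 4) + sigma1*sp.exp(-4*x))

# Bernoulli equation implied by (4.31) with u0 = u**2: u' + 2u = 3/u
residual = sp.diff(u, x) + 2*u - 3/u
assert sp.simplify(residual) == 0
```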
[D − x] u 0 (x) = − x (4.36)
and
u_0(x) = (1/W_0(x)) [ ∫ W_0(x) N_0(x) dx + σ_1 ]

= exp(x^2/2) [ ∫ (−x) exp(−x^2/2) dx + σ_1 ]

= 1 + σ_1 exp(x^2/2) = [u(x)]^{1−n} = [u(x)]^{−1} . (4.38)
Next, solve
[D − 2 x] u 0 (x) = − 2 x (4.41)
and
u_0(x) = exp(x^2) [ ∫ (−2x) exp(−x^2) dx + σ_1 ]

= 1 + σ_1 exp(x^2) = [u(x)]^{1−n} = [u(x)]^{−2} . (4.43)
and
u_0(x) = exp(x^2) [ ∫ exp(−x^2) · {−2 exp(x^2)} dx + σ_1 ] = exp(x^2) [ −2x + σ_1 ]

u(x) = ± 1/[ exp(x^2) (−2x + σ_1) ]^{1/2} . : (4) (4.49)
and
u_0(x) = exp(x^3) [ ∫ exp(−x^3) · {−3 exp(x^3)} dx + σ_1 ]
Therefore, the solution to the Bernoulli differential equation (4.50) is u(x) such that

[u(x)]^3 = 1/[ exp(x^3) (−3x + σ_1) ] (4.54)

u(x) = 1/[ exp(x^3) (−3x + σ_1) ]^{1/3} ,

u(x) = (−1)^{2/3}/[ exp(x^3) (−3x + σ_1) ]^{1/3} ,

u(x) = (−1)^{4/3}/[ exp(x^3) (−3x + σ_1) ]^{1/3} . : (5) (4.55)
The next few Bernoulli-type equations that are solved are numbered (6)–(10) and are given below in (4.56).
[D + 4x] u_0(x) = 4x^3 . (4.57)
and
u_0(x) = exp(−2x^2) [ ∫ exp(2x^2) · {4x^3} dx + σ_1 ]

= exp(−2x^2) [ (1/2) exp(2x^2) (2x^2 − 1) + σ_1 ] . (4.59)

Therefore, the solution to the Bernoulli differential equation (4.56)-(6) is u(x) such that

[u(x)]^4 = (1/2)(2x^2 − 1) + σ_1 exp(−2x^2) (4.60)
W_0(x) = exp[ −∫ dx/x ] = exp(−log x) = 1/x , (4.63)

and

u_0(x) = x [ ∫ (−1) dx + σ_1 ] = x [ −x + σ_1 ] = [u(x)]^{−1} . (4.64)

u(x) = 1/[ x (−x + σ_1) ] . (4.65)
W_0(x) = exp[ −2 ∫ dx/x ] = exp(−2 log x) = 1/x^2 , (4.67)

u_0(x) = x^2 [ ∫ dx/x^2 + σ_1 ] = [u(x)]^{−2} . (4.68)
W_0(x) = exp[ 3 ∫ dx/x ] = exp(3 log x) = x^3 , (4.71)

and

u_0(x) = x^{−3} [ ∫ 3x^4 dx + σ_1 ] = x^{−3} [ (3/5) x^5 + σ_1 ] = [u(x)]^3 . (4.72)
Therefore, there are three solutions to the Bernoulli differential equation (4.56)-(9): namely

u(x) = [ ((3/5) x^5 + σ_1)/x^3 ]^{1/3} ,

u(x) = (−1)^{2/3} [ ((3/5) x^5 + σ_1)/x^3 ]^{1/3} ,

u(x) = (−1)^{4/3} [ ((3/5) x^5 + σ_1)/x^3 ]^{1/3} . : (9) (4.73)
W_0(x) = exp[ −∫ x^2 dx ] = exp(−x^3/3) , (4.75)

and

u_0(x) = exp(x^3/3) [ ∫ exp(x − x^3/3) dx + σ_1 ] . (4.76)
Given below is a set of ten Bernoulli differential equations similar to (4.19). The relevant choices for n, M(x), and N(x) are as noted in (4.78)-(1)–(4.78)-(10). [Hint: Compare (4.19)–(4.77).]
The differential operator V (x), the solution Y (x), and quite possibly also the
inhomogeneous term represented by the arbitrary function F(x), involve constants
and derivatives with respect to the variable x. The objective of the present exercise
is to determine the solution, Y (x), of the differential equation (5.1) with specified
boundary conditions.
In the preceding chapters, various straightforward methods for solving homogeneous linear ordinary differential equations were discussed. For inhomogeneous linear ordinary differential equations, those methods required more effort. In particular, the particular integral, I_pi, needs to be worked out ab initio for every different inhomogeneous term F(x). But there exists another powerful methodology, named after George Green, whereby Green's function, once and for all, helps solve the differential equation for arbitrary values of the inhomogeneous term. Therefore, for a particular differential operator, in addition to the homogeneous solution, only Green's function needs to be worked out.
The Green's function procedure is well suited to studying inhomogeneous ordinary differential equations. A given Green's function always refers to a particular differential operator. For the present case, V(x) is the relevant differential operator. While there is no total agreement on this issue, most of the available literature, for instance, Dean J. Duffy, would accept the following definition of Green's function, G(x, x′), that is relevant to the differential operator V(x).
Here, δ(x − x′) is Dirac's delta function. Note, some discussion of the Dirac delta function is provided in the Appendix.
If Green's function, G(x, x′), as defined in (5.2), can be worked out, the desired solution of the differential equation V(x) Y(x) = F(x) can be represented as (5.3) given below.

Y(x) = ∫ F(x′) G(x, x′) dx′ . (5.3)
The correctness of this assertion is assured if it can be shown to be consistent with the fundamental equation (5.1). To check this fact, let us proceed as follows. Multiply (5.3) from the left by V(x).

V(x) Y(x) = V(x) ∫ F(x′) G(x, x′) dx′ = ∫ F(x′) V(x) G(x, x′) dx′ . (5.4)
Then, by the defining equation (5.2), V(x) G(x, x′) = δ(x − x′), and the right-hand side of (5.4) reduces to ∫ F(x′) δ(x − x′) dx′ = F(x), exactly as (5.1) demands.
Most differential equations have two components: One that deals with the homo-
geneous part of the equation and the other that involves the inhomogeneous part.
Fortunately, the various techniques for handling the homogeneous part are similar
to those discussed in detail in Chaps. 3 and 4 . Therefore, it can be assumed that the
solution to the homogeneous part can be worked out.
In this subsection, we show that a Green's function may be calculated by using eigenfunctions, that is, information obtained from solving the homogeneous part of the differential equation. Indeed, a Green's function is often fathered by the homogeneous part of the referencing differential equation. And once Green's function is calculated, it can be used to solve the inhomogeneous part of the referencing differential equation, for arbitrary choices of the inhomogeneous term.
Let us consider the following linear ordinary differential equation.
[D^2 + α^2] y_n(x) = γ_n y_n(x) , n = 0, 1, 2, 3, … (5.9)
are orthogonal and complete. Also, the eigenvalues, γ_n, are real. As shown below, for (5.9) both the eigenvalues γ_n and the eigenfunctions y_n(x) are readily evaluated. The eigenfunctions can also be made orthonormal so that the relevant integral of eigenfunctions y_{n1}(x) and y_{n2}(x), calculated over the prescribed spatial range, equals δ_{n1,n2}.
∫_0^π y_{n1}(x) y_{n2}(x) dx = δ_{n1,n2} . (5.11)
We require that the eigenfunctions yn (x) remain valid in the physical interval
0 ≤ x ≤ π and obey the boundary conditions
Without loss of generality, one can use only the positive n. The normalization require-
ment,
∫_0^π [y_n(x)] [y_n(x)]* dx = 1 = |ρ_n|^2 ∫_0^π sin(nx)^2 dx = |ρ_n|^2 (π/2) , (5.14)
γ_n = (α^2 − n^2) . (5.18)
As stated above, the eigenfunctions yn (x) are orthonormal and complete. [Note : For
observational convenience, (5.1) is reprinted below as (5.19).] Therefore, when Y (x)
is a function in the same domain as the eigenfunctions yn (x) and obeys the same
boundary conditions, it may be represented as a linear combination of the stated
eigenfunctions.
Y(x) = Σ_{n=0}^∞ b_n y_n(x) . (5.20)
V(x) = [D^2 + α^2] ;

V(x) G(x, x′) = δ(x − x′) ;

F(x) = [D^2 + α^2] Σ_{n=0}^∞ b_n y_n(x) = Σ_{n=0}^∞ b_n [D^2 + α^2] y_n(x) . (5.21)
Use the first row of (5.17) and thereby replace [D^2 + α^2] y_n(x) by γ_n y_n(x). As a result, (5.21) becomes
F(x) = Σ_{n=0}^∞ b_n γ_n y_n(x) . (5.22)
The constant bn needs to be determined. To that end, multiply (5.22) by ym (x) and
integrate over the relevant space.
∫_0^π y_m(x) F(x) dx = Σ_{n=0}^∞ b_n γ_n ∫_0^π y_m(x) y_n(x) dx

= Σ_{n=0}^∞ b_n γ_n δ_{n,m} = b_m γ_m . (5.23)
For convenience, change the variables from m to n and x to x′ in (5.24). This gives us the desired b_n.
b_n = (1/γ_n) ∫_0^π y_n(x′) F(x′) dx′ . (5.25)
G(x, x′) = Σ_{n=0}^∞ (1/γ_n) y_n(x) y_n(x′) = Σ_{n=0}^∞ y_n(x) y_n(x′)/(α^2 − n^2) . (5.28)
For any choice of F(x′), Green's function (5.28) should provide the desired solution Y(x). In other words, the function Y(x) in (5.29),
[D^2 + α^2] Y(x) = F(x) , (5.29)
Use (5.16) and (5.30) and work out solutions to the differential equation [D^2 + α^2] Y(x) = F(x) for the following choices of F(x).
5.2.4 Solution
(1)
Insert F(x′) = sin(x′) in (5.30).
Y(x) = (2/π) Σ_{n=0}^∞ (1/(α^2 − n^2)) sin(nx) ∫_0^π sin(nx′) sin(x′) dx′
Because the integral ∫_0^π sin(nx′) sin(x′) dx′ = (π/2) δ(n − 1), only the n = 1 term survives. Thus,
(1) : Y(x) = (2/π) Σ_{n=0}^∞ (1/(α^2 − n^2)) sin(x) (π/2) δ(n − 1) = sin(x)/(α^2 − 1) . (5.32)
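Result (5.32) can be confirmed without the series machinery: substituting it back into (5.29) should reproduce F(x) = sin(x). A short sympy check:

```python
import sympy as sp

x, alpha = sp.symbols('x alpha')

Y = sp.sin(x)/(alpha**2 - 1)   # eigenfunction-expansion result (5.32)

# (5.29): [D**2 + alpha**2] Y should equal sin(x)
lhs = sp.diff(Y, x, 2) + alpha**2*Y
assert sp.simplify(lhs - sp.sin(x)) == 0
```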
(2)
Insert F(x′) = sin(x′) cos(x′) = sin(2x′)/2 in (5.30).
Y(x) = (1/2) (2/π) Σ_{n=0}^∞ (1/(α^2 − n^2)) sin(nx) ∫_0^π sin(nx′) sin(2x′) dx′
Because the integral ∫_0^π sin(nx′) sin(2x′) dx′ = (π/2) δ(n − 2), only the n = 2 term survives. Thus,
(2) : Y(x) = (1/π) Σ_{n=0}^∞ (1/(α^2 − n^2)) sin(nx) (π/2) δ(n − 2) = (1/2) sin(2x)/(α^2 − 4) . (5.33)
(3)
Insert F(x′) = x′ in (5.30).
(3) : Y(x) = (2/π) Σ_{n=0}^∞ (1/(α^2 − n^2)) sin(nx) ∫_0^π sin(nx′) x′ dx′ (5.34)
Do the integral ∫_0^π sin(nx′) x′ dx′ by parts.

∫_0^π sin(nx′) x′ dx′ = [ −x′ cos(nx′)/n ]_0^π + (1/n) ∫_0^π cos(nx′) dx′

= −π cos(nπ)/n + (1/n) [ sin(nx′)/n ]_0^π = π(−1)^{n+1}/n + 0 . (5.35)
In order to calculate Y (x), the result in (5.35) above should be inserted into (5.34).
(3) : Y(x) = 2 Σ_{n=1}^∞ ((−1)^{n+1}/(n(α^2 − n^2))) sin(nx) . (5.36)
Note: The result Y(x) given in (5.36) looks more complicated than one would have anticipated. Therefore, it is important to check its accuracy. To that end, let us plug this Y(x) into (5.29) and check whether it actually leads to F(x) = x. That is, let us work with the equations
[D^2 + α^2] Y(x) = [D^2 + α^2] 2 Σ_{n=1}^∞ ((−1)^{n+1}/(n(α^2 − n^2))) sin(nx)

= 2 Σ_{n=1}^∞ ((−1)^{n+1}(α^2 − n^2)/(n(α^2 − n^2))) sin(nx)

= 2 Σ_{n=1}^∞ ((−1)^{n+1}/n) sin(nx) = F(x) = x . (5.37)
Again one wonders whether the right-hand side of (5.37) is indeed equal to x. Actu-
ally, it looks much like a Fourier series. If so, is it the Fourier expansion of x? Let us
check.
Fourier expansion of x.
x = a_0 + Σ_{n=1}^∞ a_n cos(nx) + Σ_{n=1}^∞ b_n sin(nx) , (5.38)
where

a_0 = (1/2π) ∫_{−π}^π x dx = 0 , a_n = (1/π) ∫_{−π}^π x cos(nx) dx = 0 ,

b_n = (1/π) ∫_{−π}^π x sin(nx) dx = (1/π) [ −x cos(nx)/n ]_{−π}^π + (1/π) [ sin(nx)/n^2 ]_{−π}^π

= (1/π) (2π(−1)^{n+1}/n) = 2(−1)^{n+1}/n . (5.39)
x = 2 Σ_{n=1}^∞ ((−1)^{n+1}/n) sin(nx) . (5.40)
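A quick numerical check of (5.40): at an interior point of (−π, π) the partial sums do converge to x. The tolerance below is generous, since pointwise convergence of this Fourier series is only O(1/N):

```python
import math

# Partial sum of (5.40), x = 2 * sum_{n >= 1} (-1)**(n+1) sin(n x)/n,
# evaluated at the interior point x = 1
x = 1.0
N = 100000
partial = 2.0*sum((-1)**(n + 1)*math.sin(n*x)/n for n in range(1, N + 1))
assert abs(partial - x) < 1e-3
```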
will change. And we will have the following eigenfunction Green's function when α = 3/2.

G(x, x′) = Σ_{n=0}^∞ (4/(9 − 4n^2)) y_n(x) y_n(x′) . (5.42)
As noticed above, the eigenfunction Green's function leads to infinite series with results that are often both difficult to work out and complicated to work with. On many occasions, a better approach is to work directly with the defining equation of Green's function that involves Dirac's delta function, and then approach the delta function singularity from either side, thereby obtaining a closed-form expression for Green's function. To demonstrate this procedure and to compare with the previous results, we work with a differential equation similar to that used for deriving the eigenfunction Green's function.
Consider the differential operator V(x) = D^2 + 9/4 and its Green's function G(x, x_0). [Note: Compare (5.2)]

[D^2 + 9/4] G(x, x_0) = δ(x − x_0) . (5.43)
As before, x and x_0 are chosen to lie within the interval 0 ≤ x ≤ π. The delta function in (5.43) refers to the physical separation (x − x_0). There are two possibilities for reaching the singularity at x = x_0. First: x can approach x_0 from below. Second: x approaches x_0 from above. For either of these options, as long as x does not touch x_0, the delta function δ(x − x_0) itself vanishes and (5.43) reduces to the following:
[D^2 + 9/4] G(x, x_0) = 0 , (5.44)

G(x, x_0) = A sin(3x/2) + B cos(3x/2) . (5.45)
2 2
Let us examine how the constants A and B are affected as the position x moves within
the intervals x0 > x ≥ 0 and π ≥ x > x0 .
One of the relevant boundary conditions (5.8), namely
G(x = 0, x0 ) = 0 , (5.46)
G(x, x_0) = A sin(3x/2) , x_0 > x ≥ 0 . (5.47)

G(x, x_0) = B cos(3x/2) , π ≥ x > x_0 . (5.48)
Our next task is to calculate the constants A and B. To that end, we can impose the
continuity requirement at x0 so the result is the same whether x0 is approached from
below or from above. We get
A sin(3x_0/2) = B cos(3x_0/2) . (5.49)
−(3/2) B sin(3x_0/2) − (3/2) A cos(3x_0/2) = 1 . (5.52)

A = −(2/3) cos(3x_0/2) ; B = −(2/3) sin(3x_0/2) . (5.53)
Inserting A and B given in (5.53) into (5.47) and (5.48) leads to the desired Green’s
function.
G(x, x_0) = −(2/3) cos(3x_0/2) sin(3x/2) , x_0 > x ≥ 0 . (5.54)

G(x, x_0) = −(2/3) sin(3x_0/2) cos(3x/2) , π ≥ x > x_0 . (5.55)
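The defining properties of (5.54)–(5.55) are easy to verify symbolically: the boundary values, continuity at x_0, and the unit jump of dG/dx demanded by (5.52). A minimal sympy check:

```python
import sympy as sp

x, x0 = sp.symbols('x x0')

# (5.54): branch for x0 > x >= 0; (5.55): branch for pi >= x > x0
G_below = -sp.Rational(2, 3)*sp.cos(3*x0/2)*sp.sin(3*x/2)
G_above = -sp.Rational(2, 3)*sp.sin(3*x0/2)*sp.cos(3*x/2)

assert G_below.subs(x, 0) == 0                              # G(0, x0) = 0
assert sp.simplify(G_above.subs(x, sp.pi)) == 0             # cos(3*pi/2) = 0
assert sp.simplify((G_above - G_below).subs(x, x0)) == 0    # continuity at x0
jump = (sp.diff(G_above, x) - sp.diff(G_below, x)).subs(x, x0)
assert sp.simplify(jump) == 1                               # unit jump, as in (5.52)
```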
5.3.2 Solution
Because the current Green's function is produced by a self-adjoint differential operator, [D^2 + α^2], it is symmetric, i.e., G(x_0, x) = G(x, x_0). Therefore, we can write the above as
Y(x_0) = ∫_0^π G(x, x_0) F(x) dx . (5.57)
Insert the G(x, x_0) given in (5.54) and (5.55) into (5.57). For F(x) = sin(x), we get

Y(x_0) = −(2/3) cos(3x_0/2) ∫_0^{x_0} sin(3x/2) sin(x) dx

− (2/3) sin(3x_0/2) ∫_{x_0}^π cos(3x/2) sin(x) dx

= −(2/3) cos(3x_0/2) [ sin(x_0/2) − (1/5) sin(5x_0/2) ]

− (2/3) sin(3x_0/2) [ −cos(x_0/2) + (1/5) cos(5x_0/2) ] . (5.58)
(1) : Y(x) = sin(x)/(α^2 − 1) . (5.60)
The only change necessary in the eigenfunction result given in (5.60) is to change α^2 to 9/4. Then, we would have Y(x) = (4/5) sin(x), which is exactly the same as the current result given above in (5.59). Q.E.D.
F(x) = sin(x) cos(x) : (2)
In order to calculate Y (x0 ) with F(x) = sin(x) cos(x), we need to insert Green’s
function provided in (5.54) and (5.55) into (5.57). We get
Y(x_0) = −(1/3) cos(3x_0/2) ∫_0^{x_0} sin(3x/2) sin(2x) dx

− (1/3) sin(3x_0/2) ∫_{x_0}^π cos(3x/2) sin(2x) dx . (5.61)
(2) : Y(x) = (1/2) sin(2x)/(α^2 − 4) . (5.63)
The only change necessary in the eigenfunction result given in (5.63) is to change α^2 to 9/4. Then, we would have Y(x) = −(2/7) sin(2x_0), which is exactly the current result given above in (5.62). Q.E.D.
5.3.4 Solution
Next, we choose
V(t) = d/dt − α , Y(t) = x(t) , F(t) = t^2 exp(ωt) , (5.68)

work out the relevant Green's function, G(t, t_0), determine the solution x(t) of the differential equation (5.69) within the domain {π ≥ t > 0}, obeying the boundary condition lim_{t→0+} x(t) = 0 .

[d/dt − α] x(t) = t^2 exp(ωt) . (5.69)
For t > t_0, the delta function δ(t − t_0) vanishes. Then, the truncated version of differential equation (5.70) is easy to solve and we get
Set F(t_0) = t_0^2 exp(ωt_0). Insert it in (5.72). Use (5.71) and rewrite the result as (5.73).
x(t) = ∫_0^π t_0^2 exp(ωt_0) G(t, t_0) dt_0 = ∫_0^t t_0^2 exp(ωt_0) exp[α(t − t_0)] dt_0

= exp(αt) ∫_0^t t_0^2 exp[t_0(ω − α)] dt_0

= exp(αt) { [ t_0^2 exp[t_0(ω − α)]/(ω − α) ]_0^t − (2/(ω − α)) ∫_0^t t_0 exp[t_0(ω − α)] dt_0 }

= t^2 exp(ωt)/(ω − α) − (2 exp(αt)/(ω − α)) [ t_0 exp[t_0(ω − α)]/(ω − α) ]_0^t

+ (2 exp(αt)/(ω − α)^2) ∫_0^t exp[t_0(ω − α)] dt_0

= exp(ωt) [ t^2/(ω − α) − 2t/(ω − α)^2 + 2/(ω − α)^3 ] − 2 exp(αt)/(ω − α)^3 . (5.73)
In order to ascertain whether the x(t) given in (5.73) satisfies the differential
equation (5.69), we proceed as follows.
dx(t)/dt − α x = exp(ωt) [ 2t/(ω − α) − 2/(ω − α)^2 ] − 2α exp(αt)/(ω − α)^3

+ ω exp(ωt) [ t^2/(ω − α) − 2t/(ω − α)^2 + 2/(ω − α)^3 ]

− α exp(ωt) [ t^2/(ω − α) − 2t/(ω − α)^2 + 2/(ω − α)^3 ] + 2α exp(αt)/(ω − α)^3 .
(5.74)
Equation (5.74) meanders and is quite long. We write it in a more organized form below.

dx(t)/dt − α x(t) = t^2 exp(ωt) (ω − α)/(ω − α) + exp(αt) (2α − 2α)/(ω − α)^3

+ exp(ωt) [ (2ω − 2α)/(ω − α)^3 − 2/(ω − α)^2 ] = t^2 exp(ωt) . (5.75)
This is indeed the result expected from the differential equation (5.69). Therefore,
the correctness of the x(t) given in (5.73) is verified. Q.E.D.
It is helpful to use a general notation for Laplace transforms. Consider some function j of t, as in j(t), where j is any lowercase character. Denote its Laplace transform by the same uppercase character J. But signify it as a function of s, as in J(s). As such, the inverse transform of J(s) is j(t). These statements are displayed below in (5.76).

L_s{j(t)} = lim_{t_0→∞} ∫_0^{t_0} j(t) exp(−st) dt ≡ J(s) ,

L_s^{−1}{J(s)} ≡ j(t) . (5.76)

For a function j(t) such that j(t) = 0 when t < 0, the Laplace integral L_s{j(t)} specifies the Laplace transform J(s).

J(s) ≡ L_s{j(t)} = ∫_0^∞ j(t) exp(−st) dt . (5.77)
J(s) exists for s > 0 provided j(t) satisfies the following conditions:
(1) j(t) = 0 for t < 0 ;
(2) j(t) is continuous, or at least piecewise continuous, in every interval ;
(3) t^n j(t) < ∞ as t → 0 for some number n, where n < 1.
5.4.1 Table
f(t)                                    F(s) = lim_{t_0→∞} ∫_0^{t_0} f(t) exp(−st) dt

(1). 1                                  1/s
(2). a_1 f_1(t) + a_2 f_2(t)            a_1 F_1(s) + a_2 F_2(s)
(3). H(t − c)                           exp(−cs)/s
(4). H(t − c) f(t − c)                  exp(−cs) F(s)
(5). t                                  1/s^2
(6). t^2                                2!/s^3
(7). t^3                                3!/s^4
(8). t^n , n = 1, 2, 3, …               n!/s^{n+1}
(9). t^p , p > −1                       Γ(p + 1)/s^{p+1}
(10). exp(at) , s > a                   1/(s − a)
(11). t exp(at) , s > a                 1/(s − a)^2
(12). δ(t − a)                          exp(−as)
(13). t^n exp(at) , s > a               n!/(s − a)^{n+1}
(14). cos(bt)                           s/(s^2 + b^2)
(15). exp(at) cos(bt)                   (s − a)/[(s − a)^2 + b^2]
(16). cosh(bt)                          s/(s^2 − b^2)
(17). sin(bt)                           b/(s^2 + b^2)
(18). exp(at) sin(bt)                   b/[(s − a)^2 + b^2]
(19). sinh(bt)                          b/(s^2 − b^2)
(20). t sin(bt)                         (2bs)/(s^2 + b^2)^2
(21). t cos(bt)                         (s^2 − b^2)/(s^2 + b^2)^2
(22). t sinh(at)                        (2as)/(s^2 − a^2)^2
(23). t cosh(at)                        (s^2 + a^2)/(s^2 − a^2)^2
(24). sin(at) − at cos(at)              2a^3/(s^2 + a^2)^2
(25). sin(at) + at cos(at)              2as^2/(s^2 + a^2)^2
(26). cos(at) − at sin(at)              s(s^2 − a^2)/(s^2 + a^2)^2
(27). cos(at) + at sin(at)              s(s^2 + 3a^2)/(s^2 + a^2)^2
(28). sin(at + b)                       [s sin(b) + a cos(b)]/(s^2 + a^2)
(29). cos(at + b)                       [s cos(b) − a sin(b)]/(s^2 + a^2)
(30). f(t)/t                            ∫_s^∞ F(u) du
(31). f(ct)                             (1/c) F(s/c)
(32). ∫_0^t f(v) dv                     F(s)/s
(33). ∫_0^t f(t − τ) g(τ) dτ            F(s) G(s)
(34). cos(at + b)                       [s cos(b) − a sin(b)]/(s^2 + a^2)
(35). f′(t)                             s F(s) − f(0)
(36). f‴(t)                             s^3 F(s) − s^2 f(0) − s f′(0) − f″(0)
(37). [exp(at) − exp(bt)]/(a − b)       (s − a)^{−1} (s − b)^{−1}
(38). erfc(a/(2√t))                     (1/s) exp(−a√s)
(39). (1/√(πt)) exp(−a^2/(4t))          (1/√s) exp(−a√s)
(40). (a/(2√(πt^3))) exp(−a^2/(4t))     exp(−a√s)
(41). [a exp(at) − b exp(bt)]/(a − b)   s (s − a)^{−1} (s − b)^{−1}
(42). √t                                √(π/(4s^3))
(5.78)
5.5 Computation
To get a feel for how the integration process (5.79) actually unfolds, let us work
out a few simple examples.
(1) : For f(t) = t^n, compute L_s{f(t)} = F(s).
To solve (1) write:

L_s{t^n} = lim_{t_0→∞} ∫_0^{t_0} t^n exp(−st) dt ≡ (n/s) ∫_0^∞ t^{n−1} exp(−st) dt

= (n(n − 1)/s^2) ∫_0^∞ t^{n−2} exp(−st) dt

= (n(n − 1)(n − 2)/s^3) ∫_0^∞ t^{n−3} exp(−st) dt

= …

Therefore , L_s{t^n} = n!/s^{n+1} . (5.80)
Therefore,

L_s{sin(bt)} = F(s) = b/(s^2 + b^2) . (5.82)
NOTE: For convenience, instead of the proper form, i.e., lim_{t_0→∞} ∫_0^{t_0}, of the infinite integrals that occur here, hereafter we shall use only their improper form, i.e., ∫_0^∞.
(3) : For f(t) = c_1 f_1(t) + c_2 f_2(t), compute L_s{f(t)}.
To solve (3) write:

L_s{c_1 f_1(t) + c_2 f_2(t)} = ∫_0^∞ f(t) exp(−st) dt

= c_1 ∫_0^∞ f_1(t) exp(−st) dt + c_2 ∫_0^∞ f_2(t) exp(−st) dt

= c_1 L_s{f_1(t)} + c_2 L_s{f_2(t)} = c_1 F_1(s) + c_2 F_2(s) . (5.83)
H (t − a) = 1, for t > a
= 0, for t < a . (5.85)
(8) : Consider f(t) = ∫_0^t f_i(x) dx.
To solve (8) write:

L_s{ ∫_0^t f_i(x) dx } = ∫_0^∞ exp(−st) [ ∫_0^t f_i(x) dx ] dt

= [ (exp(−st)/(−s)) ∫_0^t f_i(x) dx ]_{t=0}^{t=∞} + (1/s) ∫_0^∞ exp(−st) f_i(t) dt

= 0 + L_s{f_i(t)}/s = F_i(s)/s . (5.89)
f(t) = cos(at) = [exp(Iat) + exp(−Iat)]/2 ,

and work with s > a.

L_s[cos(at)] = (1/2) ∫_0^∞ exp[(Ia − s)t] dt + (1/2) ∫_0^∞ exp[(−Ia − s)t] dt

= (1/2) [ exp[(Ia − s)t]/(Ia − s) ]_0^∞ + (1/2) [ exp[(−Ia − s)t]/(−Ia − s) ]_0^∞

= (1/2)(−1/(Ia − s)) + (1/2)(−1/(−Ia − s)) = s/(s^2 + a^2) . (5.90)
sin(at) = [exp(Iat) − exp(−Iat)]/(2I) .

Work with s > a and write:

L_s[sin(at)] = (1/2I) ∫_0^∞ exp[(Ia − s)t] dt − (1/2I) ∫_0^∞ exp[(−Ia − s)t] dt

= (1/2I) [ exp[(Ia − s)t]/(Ia − s) ]_0^∞ − (1/2I) [ exp[(−Ia − s)t]/(−Ia − s) ]_0^∞

= (1/2I)(−1/(Ia − s)) − (1/2I)(−1/(−Ia − s)) = a/(s^2 + a^2) . (5.91)
The (HSF) is useful for representing switches that turn on and off at specific times.
For instance, consider the following two problems.
Using the notation of (5.1) in the form
F(t) = W0 , if t < 5
= 0 , if t ≥ 5 . (5.93)
(HSF)-2 : On the other hand, a more sophisticated switch is one that is:
on for t < 5 with value W1 ;
goes off during 10 > t ≥ 5;
comes back on at t = 10 with value W2
and stays on until t = 15.
Turns off at t = 15
and comes back on at t = 20 with strength W3
and stays on until t = 25.
But at t = 25 the switch instantly adjusts to strength W4
and stays on at that strength.
Thus, one works with a switch that operates with function F(t) such that
F(t) = W1 , if t < 5
= 0 , if 5 ≤ t < 10
= W2 , if 10 ≤ t < 15
= 0 , if 15 ≤ t < 20
= W3 , if 20 ≤ t < 25
= W4 , if t ≥ 25 . (5.95)
The process:
One works out the Laplace transform of both sides of the given differential equation. The initial conditions are inserted into the Laplace-transformed equation. Generally, this simplifies the output variable a little bit. Next, one reorganizes the output variable by partial fraction decomposition. Then, one inverse Laplace transforms the resultant output, if need be, by using Laplace transform inversion tables. This process will become clear as we work out several problems.
Solution: (1)
Let us solve the following first-order differential equation with boundary condition
x(t) = 0 for t < 0.
(1) : 2 dx(t)/dt + 5 x(t) = t exp(−t) . (5.97)
Solution of (1).
The Laplace transform of both sides of differential equation (5.97), namely equation (1), is

2 {s X(s) − x(0)} + 5 X(s) = 1/(s + 1)^2 . (5.98)
X(s) = (1/(2s + 5)) (1/(s + 1)^2) + 2x(0)/(2s + 5) . (5.99)

To satisfy the boundary condition, set x(0) = 0. Then

X(s) = (1/(2s + 5)) (1/(s + 1)^2) . (5.100)
X(s) ≡ A/(2s + 5) + B/(s + 1) + C/(s + 1)^2 (5.101)

1 = A (s + 1)^2 + B (s + 1)(2s + 5) + C (2s + 5)

= s^2 (A + 2B) + s (2A + 2C + 7B) + (A + 5C + 5B) . (5.102)
For (5.102) to hold for arbitrary values of s, terms with any particular power of s
must be equal on both sides of this equation. Accordingly, comparison of the s2 , s,
and s0 terms gives
A+2B = 0 ;
2A + 2C + 7B = 0 ;
A + 5C + 5B = 1 . (5.103)
A = 4/9 ; B = −2/9 ; C = 1/3 . (5.104)
X(s) = (4/9)(1/(2s + 5)) − (2/9)(1/(s + 1)) + (1/3)(1/(s + 1)^2)

= (4/18)(1/(s + 5/2)) − (2/9)(1/(s + 1)) + (1/3)(1/(s + 1)^2) . (5.105)
Finally, by using the inverse Laplace transform tables, one inverts the above from
X (s) to x(t). The result is
x(t) = (4/18) exp(−5t/2) − (2/9) exp(−t) + (t/3) exp(−t) . (5.106)
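The same verification that follows by hand can be done in a line of sympy; (5.106) should satisfy (5.97) with x(0) = 0:

```python
import sympy as sp

t = sp.symbols('t')

# x(t) from (5.106)
x = (sp.Rational(4, 18)*sp.exp(-5*t/2)
     - sp.Rational(2, 9)*sp.exp(-t)
     + t*sp.exp(-t)/3)

assert sp.simplify(2*sp.diff(x, t) + 5*x - t*sp.exp(-t)) == 0   # (5.97)
assert x.subs(t, 0) == 0
```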
dx(t)/dt = −(10/18) exp(−5t/2) + (5/9) exp(−t) − (t/3) exp(−t) , (5.107)

2 dx(t)/dt + 5 x(t) = −(10/9) exp(−5t/2) + (10/9) exp(−t) − (2t/3) exp(−t)

+ (20/18) exp(−5t/2) − (10/9) exp(−t) + (5t/3) exp(−t)

= t exp(−t) . Q.E.D. (5.108)
Solution: (2)
Again, with the same initial boundary condition, namely x(t) = 0 for t < 0, let us
solve another first-order differential equation: namely (5.109).
(2) : 3 dx(t)/dt − 7 x(t) = cosh(at) . (5.109)
Solution of (2).
The Laplace transform of both sides of the differential equation (5.109), namely equation (2), is
3 {s X(s) − x(0)} − 7 X(s) = s/(s^2 − a^2) . (5.110)
Or
X(s) = (s/(s^2 − a^2)) (1/(3s − 7)) + 3x(0)/(3s − 7) . (5.111)

To satisfy the boundary condition, set x(0) = 0. Then

X(s) = (1/((s − a)(s + a))) (s/(3s − 7)) . (5.112)
X(s) ≡ A/(s − a) + B/(s + a) + C/(3s − 7) (5.113)

s = A (s + a)(3s − 7) + B (s − a)(3s − 7) + C (s − a)(s + a)

= s^2 (3A + 3B + C) + s (3aA − 7A − 7B − 3aB) + (−7aA + 7aB − a^2 C) . (5.114)
If (5.114) is to hold for arbitrary values of s, terms with any particular power of s
must be equal on both sides of this equation. Accordingly, comparison of the s2 , s,
and s0 terms gives
3A + 3B + C = 0 ;
(3 a A − 7 A − 7 B − 3 a B) = 1;
(−7 a A + 7 a B − a2 C) = 0. (5.115)
A = (3a + 7)/(18a^2 − 98) ; B = −(3a − 7)/(18a^2 − 98) ; C = −42/(18a^2 − 98) . (5.116)
X(s) = (1/(s − a)) (3a + 7)/(18a^2 − 98) − (1/(s + a)) (3a − 7)/(18a^2 − 98)

− (1/(3s − 7)) 42/(18a^2 − 98) . (5.117)
Finally, by using the inverse Laplace transform tables, one inverts the above from
X (s) to x(t). The answer is
x(t) = ((3a + 7)/(18a^2 − 98)) exp(at) − ((3a − 7)/(18a^2 − 98)) exp(−at)

− (14/(18a^2 − 98)) exp(7t/3) . (5.118)
3 dx(t)/dt − 7 x(t)

= 3a ((3a + 7)/(18a^2 − 98)) exp(at) + 3a ((3a − 7)/(18a^2 − 98)) exp(−at)

− (98/(18a^2 − 98)) exp(7t/3) − 7 ((3a + 7)/(18a^2 − 98)) exp(at)

+ 7 ((3a − 7)/(18a^2 − 98)) exp(−at) + (98/(18a^2 − 98)) exp(7t/3) . (5.119)
Combining the exp(at), exp(−at), and exp(7t/3) terms, (5.119) leads to the expression

3 dx(t)/dt − 7 x(t)

= ((3a + 7)(3a − 7)/(18a^2 − 98)) exp(at) + ((3a − 7)(3a + 7)/(18a^2 − 98)) exp(−at)

− (98/(18a^2 − 98)) exp(7t/3) + (98/(18a^2 − 98)) exp(7t/3)

= (1/2) [exp(at) + exp(−at)] = cosh(at) . (5.120)

Q.E.D.
Solution: (3)
Another first-order differential equation with boundary condition x(t) = 0 for t < 0
is solved below.
(3) : 5 dx(t)/dt + 4 x(t) = sinh(at) . (5.122)
Solution of (3).
The Laplace transform of both sides of differential equation (5.122), namely equation (3), is
5 {s X(s) − x(0)} + 4 X(s) = a/(s^2 − a^2) . (5.123)
Or

X(s) (5s + 4) − 5 x(0) = a/(s^2 − a^2) . (5.124)
Note: We have used the Laplace transform table provided in (5.78). To satisfy the
initial boundary condition, set x(0) = 0. As a result, X (s) becomes
X(s) = a/((5s + 4)(s − a)(s + a)) . (5.125)

X(s) ≡ A/(5s + 4) + B/(s − a) + C/(s + a) (5.126)
a = A (s − a) (s + a) + B (5 s + 4) (s + a) + C (5 s + 4)(s − a) . (5.127)
If (5.127) is to hold for arbitrary values of s, terms with any particular power of s
must be equal on both sides of this equation. Accordingly, comparison of the s2 , s,
and s0 terms gives
A + 5B + 5C = 0 ;
5aB + 4B − 5aC + 4C = 0 ;
−A a2 + 4 a B − 4 a C = a . (5.128)
A = 25a/(16 − 25a^2) ; B = 1/(10a + 8) ; C = −1/(8 − 10a) . (5.130)

Finally, by using the inverse Laplace transform tables, one inverts the above from X(s) to x(t). The result is

x(t) = (5a/(16 − 25a^2)) exp(−4t/5) + (1/(8 + 10a)) exp(at) − (1/(8 − 10a)) exp(−at) .
(5.131)

5 dx(t)/dt = −(20a/(16 − 25a^2)) exp(−4t/5) + (5a/(8 + 10a)) exp(at)

+ (5a/(8 − 10a)) exp(−at) . (5.132)

4 x(t) = (20a/(16 − 25a^2)) exp(−4t/5) + (2/(4 + 5a)) exp(at)

− (2/(4 − 5a)) exp(−at) . (5.133)
5 dx(t)/dt + 4 x(t)

= (5a/(8 + 10a)) exp(at) + (4/(8 + 10a)) exp(at)

+ (5a/(8 − 10a)) exp(−at) − (4/(8 − 10a)) exp(−at) (5.134)

5 dx(t)/dt + 4 x(t) = (1/2) {exp(at) − exp(−at)} = sinh(at) . (5.135)
Q.E.D.
Solution: (4)
Another first-order differential equation with boundary condition x(t) = 0 for t < 0
is solved below.
(4) : 4 dx(t)/dt + 9 x(t) = exp(at) sinh(at) . (5.136)
Solution of (4).
The Laplace transform of both sides of differential equation (5.136), namely equation (4), is
4 {s X(s) − x(0)} + 9 X(s) = a/(s(s − 2a)) . (5.137)
Or

X(s) (4s + 9) − 4 x(0) = a/(s(s − 2a)) . (5.138)
Note: We have used the Laplace transform table provided in (5.78). To satisfy the
initial boundary condition, set x(0) = 0. As a result, X (s) becomes
X(s) = a/(s (s − 2a)(4s + 9)) . (5.139)

a = A (s − 2a)(4s + 9) + B s (4s + 9) + C s (s − 2a)

= A (4s^2 + 9s − 8as − 18a) + B (4s^2 + 9s) + C (s^2 − 2as) . (5.141)
If (5.141) is to hold for arbitrary values of s, terms with any particular power of s
must be equal on both sides of this equation. Accordingly, comparison of the s2 , s,
and s0 terms gives
4A + 4B + C = 0 ;
9A − 8aA + 9B − 2aC = 0 ;
−18 a A = a . (5.142)
Finally, by using the inverse Laplace transform tables, one inverts X(s) to x(t). The result is

x(t) = −1/18 + exp(2at)/(18 + 16a) + [ 4/18 − 2/(9 + 8a) ] (1/4) exp(−9t/4) . (5.145)
4 dx(t)/dt + 9 x(t) = [exp(2at) − 1]/2 . (5.149)
Q.E.D.
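Solution (5.145) can be verified against (5.136) and the boundary condition x(0) = 0 with sympy:

```python
import sympy as sp

t, a = sp.symbols('t a')

# x(t) from (5.145)
x = (-sp.Rational(1, 18)
     + sp.exp(2*a*t)/(18 + 16*a)
     + (sp.Rational(4, 18) - 2/(9 + 8*a))*sp.Rational(1, 4)*sp.exp(-9*t/4))

# (5.136): 4 x' + 9 x = exp(a t) sinh(a t) = (exp(2 a t) - 1)/2
residual = 4*sp.diff(x, t) + 9*x - (sp.exp(2*a*t) - 1)/2
assert sp.simplify(residual) == 0
assert sp.simplify(x.subs(t, 0)) == 0
```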
Solution: (5)
Another first-order differential equation with boundary condition x(t) = − 6 for
t < 0 is solved below.
(5) : dx(t)/dt + 5 x(t) = exp(7t) . (5.150)
Solution of (5).
The Laplace transform of both sides of differential equation (5.150), namely equation (5), is
{s X(s) − x(0)} + 5 X(s) = 1/(s − 7) . (5.151)
Note: We have used the Laplace transform table provided in (5.78) and to satisfy the
initial boundary condition, we have set x(0) = − 6. As a result, X (s) becomes
X(s) (s + 5) + 6 = 1/(s − 7) . (5.152)
Equivalently,
X(s) = (1/(s + 5)) (1/(s − 7)) − 6/(s + 5) = (43 − 6s)/((s + 5)(s − 7)) . (5.153)

X(s) ≡ A/(s + 5) + B/(s − 7) . (5.154)
43 − 6 s = A (s − 7) + B (s + 5) = s (A + B) + (5 B − 7 A) . (5.155)
If (5.155) is to hold for arbitrary values of s, terms with any particular power of s
must be equal on both sides of this equation. Accordingly, comparison of the s and
s0 terms gives
A+B = − 6 ;
−7 A + 5 B = 43 (5.156)
leading to
A = −73/12 ; B = 1/12 . (5.157)
X(s) = (−73/12)(1/(s + 5)) + (1/12)(1/(s − 7)) . (5.158)
Finally, by using the inverse Laplace transform tables, one inverts X (s) to x(t). The
result is
x(t) = −(73/12) exp(−5t) + (1/12) exp(7t) . (5.159)
Q.E.D.
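As with the earlier problems, (5.159) can be confirmed symbolically against (5.150) and the boundary value x(0) = −6:

```python
import sympy as sp

t = sp.symbols('t')

# x(t) from (5.159)
x = -sp.Rational(73, 12)*sp.exp(-5*t) + sp.Rational(1, 12)*sp.exp(7*t)

assert sp.simplify(sp.diff(x, t) + 5*x - sp.exp(7*t)) == 0   # (5.150)
assert x.subs(t, 0) == -6
```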
By the use of Laplace transform, solve the following second-order differential equa-
tion
Solution: (I)
The Laplace transform of both sides of differential equation (5.161), namely equation (I), is

2 [s^2 Y(s) − s y(0) − y′(0)] + 3 {s Y(s) − y(0)} − 2 Y(s) = 1/(s + 2)^2 . (5.163)
Y(s) [2s^2 + 3s − 2] = Y(s) [(2s − 1)(s + 2)] = 1/(s + 2)^2 − 4 . (5.164)

Y(s) = (−4s^2 − 16s − 15)/((2s − 1)(s + 2)^3) . (5.165)
Y(s) ≡ A/(2s − 1) + B/(s + 2) + C/(s + 2)^2 + D/(s + 2)^3 (5.166)

−4s^2 − 16s − 15

= A (s + 2)^3 + B (2s − 1)(s + 2)^2 + C (2s − 1)(s + 2) + D (2s − 1)

= (A + 2B) s^3 + (6A + 7B + 2C) s^2 + (12A + 4B + 3C + 2D) s + (8A − 4B − 2C − D) . (5.167)
For (5.167) to hold for arbitrary values of s, terms with any particular power of s must
be equal on both sides of this equation. Accordingly, comparison of the s3 , s2 , s,
and s0 terms gives
A + 2B = 0 ;
6A + 7B + 2C = −4 ;
12A + 4B + 3C + 2D = −16 ;
8A − 4B − 2C − D = −15 . (5.168)

A = −192/125 ; B = 96/125 ; C = −10/125 ; D = −25/125 . (5.169)
There is a common denominator of 125 for A, B, C, and D in (5.169). Therefore
upon plugging these results for A, B, C, and D into the Laplace transform (5.166)
one gets
Y(s) = (1/125) [ −96/(s − 1/2) + 96/(s + 2) − 10/(s + 2)^2 − 25/(s + 2)^3 ] . (5.170)
Finally, by using the inverse Laplace transform tables, one inverts the above from
Y (s) to y(t). The result is
x(t) = (1/125) [ −96 exp(t/2) + 96 exp(−2t) − 10 t exp(−2t) − (25/2) t^2 exp(−2t) ] (5.171)
Next, we examine the response of the third and the fourth terms in (5.171), meaning the expressions in the following two equations:

−(10/125) [ 2 d^2/dt^2 + 3 d/dt − 2 ] [t exp(−2t)] , (5.173)

−(25/(2 × 125)) [ 2 d^2/dt^2 + 3 d/dt − 2 ] [t^2 exp(−2t)] . (5.174)
Solution: (II)
The Laplace transform of both sides of differential equation (5.175), namely equation (II), is

s^2 Y(s) − s y(0) − y′(0) − 6 {s Y(s) − y(0)} + 9 Y(s) = 6/(s^2 + 9) . (5.177)
Y(s) [s² − 6 s + 9] + s − 2 = 6/(s² + 9) .   (5.178)

After a little bit of algebra, equation (5.178) can be rewritten in a more compact form:

Y(s) = (− s³ + 2 s² − 9 s + 24) / [(s² + 9)(s − 3)²] .   (5.179)

Y(s) ≡ (A s + B)/(s² + 9) + C/(s − 3) + D/(s − 3)² .   (5.180)
For (5.181) to hold for arbitrary values of s, terms with any particular power of s must be equal on both sides of this equation. Accordingly, comparison of the s³, s², s, and s⁰ terms gives

A + C = − 1 ;
− 6 A + B − 3 C + D = 2 ;
9 A − 6 B + 9 C = − 9 ;
9 B − 27 C + 9 D = 24 .   (5.182)

A = 1/9 ; B = 0 ; C = − 10/9 ; D = − 6/9 .   (5.183)
While B = 0, there is a common denominator of 9 for A, C, and D in (5.183). Therefore, upon plugging these results for A, B, C, and D into the Laplace transform (5.180) one gets

Y(s) = (1/9) [ s/(s² + 9) − 10/(s − 3) − 6/(s − 3)² ] .   (5.184)
Finally, by using the inverse Laplace transform tables, one inverts the above from Y(s) to Y(t). The result is

Y(t) = (1/9) [ cos(3 t) − 10 exp(3 t) − 6 t exp(3 t) ] .   (5.185)
Q.E.D.
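The same kind of sympy cross-check works here (my sketch; the equation y″ − 6y′ + 9y = 2 sin(3t) with y(0) = −1, y′(0) = −4 is inferred from (5.177)–(5.178), since 6/(s² + 9) is the transform of 2 sin(3t)):

```python
import sympy as sp

t = sp.symbols('t')

# Solution (5.185); the ODE y'' - 6 y' + 9 y = 2 sin(3 t) with y(0) = -1,
# y'(0) = -4 is read off from (5.177)-(5.178).
Y = (sp.cos(3*t) - 10*sp.exp(3*t) - 6*t*sp.exp(3*t))/9

print(sp.simplify(Y.diff(t, 2) - 6*Y.diff(t) + 9*Y - 2*sp.sin(3*t)))  # 0
print(Y.subs(t, 0), Y.diff(t).subs(t, 0))                             # -1 -4
```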
With the help of Laplace transform, solve the following IVP.
Solution: (III)
The Laplace transform of both sides of differential equation (5.187)—namely equation (III)—is

s² Y(s) − s y(0) − y′(0) − 8 [s Y(s) − y(0)] + 7 Y(s) = 9/s² .   (5.189)

Upon inserting the boundary condition (5.188), (5.189) becomes

Y(s) [s² − 8 s + 7] + s − 10 = 9/s² .   (5.190)
Y(s) = (− s³ + 10 s² + 9) / [s² (s − 7)(s − 1)] .   (5.191)

Y(s) ≡ A/s + B/s² + C/(s − 7) + D/(s − 1) .   (5.192)

− s³ + 10 s² + 9 = A s (s − 7)(s − 1) + B (s − 7)(s − 1) + C s² (s − 1) + D s² (s − 7) .   (5.193)
For (5.193) to hold for arbitrary values of s, terms with any particular power of s must be equal on both sides of this equation. Accordingly, comparison of the s³, s², s, and s⁰ terms gives

A + C + D = − 1 ;
− 8 A + B − C − 7 D = 10 ;
7 A − 8 B = 0 ;
7 B = 9 .   (5.194)

A = 72/49 ; B = 9/7 ; C = 26/49 ; D = − 3 .   (5.195)
Upon plugging the results for A, B, C, and D into the Laplace transform (5.192) one gets

Y(s) = (72/49)/s + (9/7)/s² + (26/49)/(s − 7) − 3/(s − 1) .   (5.196)

Finally, by using the inverse Laplace transform tables, one inverts the above from Y(s) to Y(t). The result is

Y(t) = 72/49 + (9/7) t + (26/49) exp(7 t) − 3 exp(t) .   (5.197)
Q.E.D.
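Again a sympy sketch (mine, not the book's) confirms the result; the IVP y″ − 8y′ + 7y = 9t with y(0) = −1, y′(0) = 2 is inferred from (5.189)–(5.190):

```python
import sympy as sp

t = sp.symbols('t')

# Solution (5.197); the IVP y'' - 8 y' + 7 y = 9 t with y(0) = -1, y'(0) = 2
# is read off from (5.189)-(5.190).
Y = sp.Rational(72, 49) + sp.Rational(9, 7)*t + sp.Rational(26, 49)*sp.exp(7*t) - 3*sp.exp(t)

print(sp.simplify(Y.diff(t, 2) - 8*Y.diff(t) + 7*Y - 9*t))  # 0
print(Y.subs(t, 0), Y.diff(t).subs(t, 0))                   # -1 2
```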
With the help of Laplace transform, solve the following differential equation.
Solution: (IV)
The Laplace transform of both sides of differential equation (5.199)—namely equation (IV)—is

2 [s² Y(s) − s y(0) − y′(0)] − 5 [s Y(s) − y(0)] − 3 Y(s) = 1/(s + 1)² .   (5.201)

Y(s) [2 s² − 5 s − 3] − 2 s − 4 + 5 = 1/(s + 1)² .   (5.202)

Y(s) = [(2 s − 1)(s + 1)² + 1] / [(s + 1)² (2 s + 1)(s − 3)] .   (5.203)
Y(s) ≡ A/(2 s + 1) + B/(s − 3) + C/(s + 1) + D/(s + 1)² .   (5.204)

A = 236/(7 × 16) ; B = 1/(7 × 16) ; C = 9/16 ; D = 1/4 .   (5.205)

Upon plugging the above results for A, B, C, and D into the Laplace transform (5.204) one gets

Y(s) = (59/28) · 1/(2 s + 1) + [1/(7 × 16)] · 1/(s − 3) + (9/16) · 1/(s + 1) + (1/4) · 1/(s + 1)² .   (5.206)

Finally, by using the inverse Laplace transform tables, one inverts the above from Y(s) to Y(t). The result is

Y(t) = (59/56) exp(− t/2) + [1/(7 × 16)] exp(3 t) + (9/16) exp(− t) + (t/4) exp(− t) .   (5.207)

2 d²Y/dt² = (1/4)(59/28) exp(− t/2) + (9/56) exp(3 t) + (9/8) exp(− t) − exp(− t) + (1/2) t exp(− t) .   (5.208)
− 5 dY/dt = (5/4)(59/28) exp(− t/2) − [15/(7 × 16)] exp(3 t) + (45/16) exp(− t) − (5/4) exp(− t) + (5/4) t exp(− t) .   (5.209)

− 3 Y(t) = − 3 (59/56) exp(− t/2) − [3/(7 × 16)] exp(3 t) − (27/16) exp(− t) − (3/4) t exp(− t) .   (5.210)
Q.E.D.
Adding (5.208), (5.209), and (5.210) leads to the desired result t exp(− t). To double-check this statement, examine the details of the addition as given below. We have

exp(− t/2) (59/56) [1/2 + 5/2 − 3] = 0 .

exp(3 t) [9/56 − 15/(7 × 16) − 3/(7 × 16)] = 0 .

exp(− t) (1/16) [18 − 16 + 45 − 20 − 27] = 0 .

t exp(− t) (1/4) [2 + 5 − 3] = t exp(− t) .   (5.211)
where the differential operator O(t), the solution P(t), as well as the inhomogeneous term represented by a function L(t), all involved constants and derivatives with respect to a single variable t. In addition, there were some specified boundary conditions that the solution had to satisfy. The important thing to note was that the inhomogeneous term L(t) was not arbitrary. Rather, it was properly defined.
In preceding chapters, various methods for solving homogeneous linear ordinary differential equations were discussed.
5.10 Need for Convolution 117
Given continuous, or at least piecewise continuous, functions f(t) and g(t) on [0, ∞), the convolution integral of f(t) and g(t) is defined as

f(t) ∗ g(t) = ∫₀ᵗ f(t − τ) g(τ) dτ = ∫₀ᵗ g(t − τ) f(τ) dτ .   (5.213)
y(t = 0) = 1 , y (t = 0) = − 6 , (5.216)
(5.215) leads to

Y(s) = s/(s² + 1) − 6/(s² + 1) + γ(s)/(s² + 1) .   (5.217)
The main idea of this exercise is that the above result applies to arbitrary choices of γ(t). Just to make sure that we have not made an error somewhere, let us make an extremely simple choice for the arbitrary γ(t). That is, γ(t) = t. Then, (5.218) gives

y(t) = cos(t) − 6 sin(t) + ∫₀ᵗ sin(τ) (t − τ) dτ .   (5.219)
One gets

∫₀ᵗ sin(τ) (t − τ) dτ = [− cos(τ)(t − τ)]₀ᵗ − ∫₀ᵗ cos(τ) dτ
= − cos(t)[t − t] + cos(0)[t − 0] − [sin(t) − sin(0)]
= t − sin(t) .   (5.221)
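Both the convolution integral and the resulting solution can be checked with a short sympy sketch (my addition; the IVP y″ + y = γ(t), y(0) = 1, y′(0) = −6 is inferred from (5.216)–(5.217)):

```python
import sympy as sp

t, tau = sp.symbols('t tau')

# Convolution integral in (5.219) with gamma(t) = t, done directly
conv = sp.integrate(sp.sin(tau)*(t - tau), (tau, 0, t))
print(sp.simplify(conv - (t - sp.sin(t))))   # 0, reproducing (5.221)

# The resulting y(t) solves the IVP behind (5.215)-(5.217): y'' + y = t,
# y(0) = 1, y'(0) = -6
y = sp.cos(t) - 6*sp.sin(t) + (t - sp.sin(t))
print(sp.simplify(y.diff(t, 2) + y - t))     # 0
print(y.subs(t, 0), y.diff(t).subs(t, 0))    # 1 -6
```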
One special type of differential equation, namely the Bernoulli equation, was discussed in Chap. 4. [Compare, for instance, (4.19)–(4.77).] Here, that analysis is extended to other special-type equations.
Included in this presentation are the Clairaut equations [compare (6.2)–(6.13)], the Lagrange equation [compare (6.19)–(6.31)], the separable equations [compare (6.32)–(6.35)], and the dy/dx = Φ(y/x) equations [compare (6.36)–(6.73)]. In addition, there are the so-called exact [compare (6.74)–(6.91)] and inexact equations [compare (6.92)–(6.241)], Riccati equations [compare (6.242)–(6.268)], Euler equations [compare (6.269)–(6.315)], and the factorable equations [compare (6.316)–(6.344)].
Notation
Occasionally, for convenience, the following notation will be used:

q = dy/dx ;  q dx = dy .   (6.1)
Consider an equation

y = σ x + f(σ) .   (6.2)

Its differentiation with respect to x gives

dy/dx = σ ,

and use (6.1) in the form

dy/dx = q ,

that leads to

σ = q .

Now substitute the variable q for the parameter σ in (6.2). This process leads to Clairaut equation

y = q x + f(q) .   (6.3)
Its general solution has already been recorded as (6.2). It is a straight line in the (x, y)
plane that is obtained by replacing the slope, q, by its observed value σ. Indeed, the
general solution is a single arbitrary parameter representation of a whole family of
straight lines.
dy = q dx + x dq + d f (q) .
According to (6.1), q dx = dy. Thus, dy and q dx can be eliminated from the above
differential equation. The result is
x dq + d f (q) = 0 . (6.4)
[x + F(q)] · dq = 0 . (6.6)
(1) : dq = 0 ;
(2) : [x + F(q)] = 0 . (6.7)
y = σ x + f (σ) , (6.8)
where f (σ) is an arbitrary function of the arbitrary constant σ. Written in this form,
y is the general solution and is identical to the solution expressed by (6.2).
Clearly, the behavior of the second differential equation, namely (6.7)-(2), would
depend on the details of the function F(q). Relevant issues regarding this matter will
be analyzed in detail later.
Comment
Equation (6.10) describes a series of straight lines that could be assumed to be parallel
to tangents to the parabola
y2 = 4 a x . (6.11)
If this assumption turns out to be correct, these straight lines and tangents to the parabola will have the same slope—meaning, at any given point (x, y) the value of the derivative dy/dx derived from (6.11) will equal that obtained from the general solution (6.10).
In order to check the above assumption, proceed as follows. The square root of the equation of the parabola, (6.11), gives [y]_parabola = √(4 a x). Therefore

[dy/dx]_parabola = d√(4 a x)/dx = √(a/x) .   (6.12)
Now, replace dy/dx in Clairaut equation (A A) by [dy/dx]_parabola — or equivalently replace the existing σ in the solution (6.10) by [dy/dx]_parabola. Either way, one gets

y = x √(a/x) + a/√(a/x) = 2 √(a x) .   (6.13)
And this is (the square root of) the equation of the parabola itself !
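The elimination of σ can also be sketched symbolically (a Python/sympy illustration of my own, assuming the general solution y = σx + a/σ of (6.10)):

```python
import sympy as sp

x, a, s = sp.symbols('x a sigma', positive=True)

general = s*x + a/s                   # general solution (6.10): y = sigma x + a/sigma
s0 = sp.solve(general.diff(s), s)[0]  # dy/dsigma = 0 gives sigma_0 = sqrt(a/x)
singular = sp.simplify(general.subs(s, s0))
print(singular)                       # 2*sqrt(a)*sqrt(x): squaring gives y**2 = 4 a x
```

Squaring the printed result recovers the parabola (6.11), exactly as the informal argument above concluded.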
(2) Differentiate the general solution, y, with respect to σ. Set the result for dy/dσ equal to zero. Solve the resultant equation for σ, and notate the solution σ₀. For instance, for the given general solution (6.10), the differential is dy/dσ = x − a/σ². Now, set it equal to zero

dy/dσ = x − a/σ² = 0   (6.14)

and determine the solution for σ. The result is σ = √(a/x). Next, notate this σ as σ₀. In this manner, one has σ₀ = √(a/x).
Find both general and singular solution to the following Clairaut equations numbered
I(B)–I(F).
I (B) : y = x q + 2q 2 .
I (C) : y = x q + 4q 3 .
I (D) : y = x q − 3 sin(q).
I (E) : y = x q + 5 cos(q).
I (F) : y = x q − sin2 (q). (6.16)
Equation I(B): F(q) = d(2 q²)/dq ; F(q) + x = 4 q + x = 0 .
Therefore 4 ∫ dy = 4 y = − ∫ x dx = − x²/2 + const .
Singular solution is: y = − x²/8 + constant .

Equation I(C): F(q) = d(4 q³)/dq ; F(q) + x = 12 q² + x = 0 .
Therefore q dx = dy = √(−1/12) √x dx .
Singular solution is: y = √(−1/12) (2/3) x^(3/2) + const .

Equation I(D): F(q) = d[−3 sin(q)]/dq ; F(q) + x = − 3 cos(q) + x = 0 .
Therefore q dx = dy = cos⁻¹(x/3) dx .
Singular solution is: y = x cos⁻¹(x/3) − √(9 − x²) + const .
126 6 Special Types of Differential Equations
Equivalently, y = x cos⁻¹(x/3) − 3 sin[cos⁻¹(x/3)] + const .

Equation I(E): F(q) = d[5 cos(q)]/dq ; F(q) + x = − 5 sin(q) + x = 0 .
Therefore dy = sin⁻¹(x/5) dx .
Singular solution is: y = x sin⁻¹(x/5) + √(25 − x²) + const .
Equivalently, y = x sin⁻¹(x/5) + 5 cos[sin⁻¹(x/5)] + const .   (6.18)
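The formal procedure applied to I(B) can be mechanized in a few sympy lines (my sketch, not part of the book's solution):

```python
import sympy as sp

x, q = sp.symbols('x q')

# I(B): y = x q + 2 q**2. Formal procedure: set d y/d q = x + 4 q = 0,
# solve for q, and substitute back into the equation.
f = x*q + 2*q**2
q0 = sp.solve(f.diff(q), q)[0]          # q0 = -x/4
print(sp.simplify(f.subs(q, q0)))       # -x**2/8, the singular solution
```

This matches the singular solution y = −x²/8 obtained informally above (the elimination fixes the additive constant to zero).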
Solve the following Clairaut differential equations problems labeled (1)–(4). Find
both the general solution and the singular solution. [Hint: Read (6.10)–(6.18)]
y = x q + 3 q 2 . (1)
y = x q + 3 q 3 . (2)
y = x q − 2 sin q . (3)
y = x q + 2 cos q . (4)
If the function G(q) should be equal to q, Lagrange equation (6.19) would reduce to Clairaut equation.
Differentiate (6.19) and use

dy = q dx .

The result is

[q − G(q)] dx/dq = x dG(q)/dq + dF(q)/dq .

dx/dq = [1/(q − G(q))] [x dG(q)/dq + dF(q)/dq] ,   (6.20)

This is a first-order differential equation, and its solution, x(q), should be achievable depending on the details of the functions G(q) and F(q). Even its singular solution may be found by setting the singular value q = G(q) in the original differential equation.
II(A): y = x (3 q/2) − q² .
II(B): y = x (q/3) − q²/6 .
II(C): y = 3 q x + 5 log(q) .   (6.21)
Solution: II(A)
Here, G(q) = 3q/2 and F(q) = −q². Therefore, (6.20) and II(A) give

dx/dq = [1/(q − 3q/2)] [x (3/2) − 2 q] = − (3/q) x + 4 .   (6.22)

This is a first-order differential equation. Its solution x(q) can be found by using (4.10) and (4.15). The result is

x(q) = q + σ₁/q³ ,   (6.23)

Equations (6.23) and (6.24) are parametric representation of the general solution y = y(x).
Singular solution is obtained by setting q = G(q). That means using the relationship q = 3q/2, which gives q = 0, in equation II(A). Thus, the singular solution is

y(x) = 0 .   (6.25)
6.2 Lagrange Equation

Solution: II(B)
Here, G(q) = q/3 and F(q) = −q²/6. Therefore, (6.20) and II(B) give

dx/dq = [1/(2q/3)] [x/3 − q/3] = (1/2) [x/q − 1] .   (6.26)

This is a first-order differential equation. Its solution, according to (4.10) and (4.15), is

x(q) = − q − σ₁ √q .   (6.27)

y(q) = − (1/2) q² − (σ₁/3) q^(3/2) .   (6.28)

Equations (6.27) and (6.28) are parametric equations representing the general solution y = y(x). Singular solution is obtained in similar fashion to that for equation II(A). And the result once again is y(x) = 0.
Solution: II(C)
Here, G(q) = 3q and F(q) = 5 log(q). Therefore, (6.20) and II(C) give

dx/dq = − [1/(2q)] [3 x + 5/q] .   (6.29)

This is a first-order differential equation. Its solution, according to (4.10) and (4.15), is

x(q) = − 5/q + σ₁/q^(3/2) .   (6.30)
Equations (6.30) and (6.31) are parametric equations representing the general solu-
tion y = y(x). Singular solution is obtained in similar fashion to that for equation
II(A). And the result once again is y(x) = 0.
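The Lagrange-equation reduction can be spot-checked symbolically; the sketch below (mine, in sympy) verifies that (6.23) indeed solves the first-order equation (6.22) for II(A), and derives its parametric partner y(q) = x(q)G(q) + F(q) (the book's (6.24), not shown in this excerpt):

```python
import sympy as sp

q, s1 = sp.symbols('q sigma1')

# II(A): G(q) = 3q/2, F(q) = -q**2, so (6.22) reads dx/dq = -(3/q) x + 4.
xq = q + s1/q**3                                    # proposed solution (6.23)
print(sp.simplify(xq.diff(q) - (-(3/q)*xq + 4)))    # 0: (6.23) satisfies (6.22)

# Parametric partner y(q) = x(q) G(q) + F(q) from II(A)
yq = sp.expand(xq*sp.Rational(3, 2)*q - q**2)
print(yq)   # q**2/2 + 3*sigma1/(2*q**2)
```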
Solve the following Lagrange differential equations problems labeled (1)–(3). Find both the general solution and the singular solution. [Hint: Read (6.21)–(6.31).]

(1): y = x (q/3) − 3 q² .
(2): y = x (q/5) − q²/5 .
(3): y = 4 q x + 4 log(q) .
Whenever it is possible to separate the dependent and the independent variables and express the differential equation in the form

dy/Q(y) = dx/P(x)   (6.32)
Variables are separated in the following four problems, and then direct integration is used.

(1): dy/(y log y) = exp(x) dx .
(2): dy/dx = exp(2 x) exp(−2 y) .
(3): (sin x) d²y/dx² = (cos x) dy/dx .
(4): d³y/dx³ = (x² + 2 x) exp x .   (6.34)
6.3 Separable Differential Equations 131
6.3.2 Solution

(1): dy/(y log y) = exp(x) dx ; log[log(y)] = exp(x) + const .
Equivalently: y = log⁻¹{log⁻¹[exp(x) + σ₀]} .

(2): dy/exp(−2 y) = exp(2 x) dx ; exp(2 y)/2 = exp(2 x)/2 + const .
Equivalently: y = (1/2) log[σ₀ + exp(2 x)] .

(3): Use dy/dx = q ; (sin x) d²y/dx² = (sin x) dq/dx = (cos x) dy/dx = (cos x) q .
Therefore, dq/q = cot x dx ; log q = log(sin x) + const ;
Equivalently, q ≡ dy/dx = σ₀ sin x .
Or, dy = σ₀ sin x dx ; y = − σ₀ cos x + σ₁ .

(4): ∫ (d³y/dx³) dx = d²y/dx² = ∫ (x² + 2x) exp(x) dx = x² exp x + σ₀ .
Therefore, ∫ (d²y/dx²) dx = dy/dx = ∫ x² exp(x) dx + σ₀ x = (x² − 2x + 2) exp x + σ₀ x + σ₁ .
Thus, dy = [(x² − 2x + 2) exp x + σ₀ x + σ₁] dx .
Or, y = (x² − 4x + 6) exp x + (σ₀/2) x² + σ₁ x + σ₂ .   (6.35)
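A one-line sympy check (my sketch) confirms, for instance, the solution of problem (2):

```python
import sympy as sp

x, s0 = sp.symbols('x sigma0')

# Solution of problem (2) in (6.34): dy/dx = exp(2x) exp(-2y)
y = sp.log(s0 + sp.exp(2*x))/2
print(sp.simplify(y.diff(x) - sp.exp(2*x)*sp.exp(-2*y)))   # 0
```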
2
dy dx
(1) : = .
y x
132 6 Special Types of Differential Equations
dy
(2) : = exp(3y − 2x) .
dx
dy
(3) : = 2 x y exp(x 2 ) .
dx
dy
(4) : = x 2 y2 .
dx
6.4 Separable Equations of Form dy/dx = Φ(y/x)

dy/dx = Φ(y/x) .   (6.36)
6.4.1 Solution of dy/dx = Φ(y/x)

Φ(ρ) = dy/dx = d(x ρ)/dx = x dρ/dx + ρ .   (6.38)

Transferring ρ to the left-hand side,

Φ(ρ) − ρ = x dρ/dx ,

and multiplying both sides by dx/{x [Φ(ρ) − ρ]} finally transforms (6.36) into one of separable form

dx/x = dρ/[Φ(ρ) − ρ] .   (6.39)
6.4.2 Examples Group IV: dy/dx = Φ(y/x) Equations

(1): dy/dx = 1 + (y/x) + (y/x)² .
(2): dy/dx = tan(y/x) + y/x .
(3): dy/dx = exp(−y/x) + y/x .
(4): dy/dx = x/y + y/x .
(5): x (x² + y²) dy/dx = y (x + y)² .
(6): dy/dx = (y + c x)/(x + c y) .   (6.40)
6.4.3 Solution

As noted in (6.38) and (6.37), substitute Φ(ρ) for dy/dx and ρ for y/x. One gets

(2): Φ(ρ) = tan ρ + ρ . Therefore, dρ/tan ρ = dx/x , or
log(sin ρ) = log σ₀ + log x , sin ρ = σ₀ x , or y = x sin⁻¹[σ₀ x] .

(3): Φ(ρ) = exp(−ρ) + ρ . Therefore, dρ/exp(−ρ) = dx/x , or
exp(ρ) = σ₀ + log x ; ρ = log[σ₀ + log x] , or y = x log[σ₀ + log x] .
(4): Φ(ρ) = 1/ρ + ρ . Therefore, dρ/(1/ρ) = dx/x , or
ρ²/2 = σ₀ + log x , or y = ± x √2 [σ₀ + log x]^(1/2) .

(5) Write as: dy/dx = (y/x)(1 + y/x)²/[1 + (y/x)²] . Therefore,
Φ(ρ) = 2ρ²/(1 + ρ²) + ρ . Accordingly, [(1 + ρ²)/(2ρ²)] dρ = dx/x , or
− 1/(2ρ) + ρ/2 = σ₀ + log x . Thus: ρ² − 2ρ(σ₀ + log x) − 1 = 0 , or
ρ = (σ₀ + log x) ± √[(σ₀ + log x)² + 1] , or
y = x (σ₀ + log x) ± x √[(σ₀ + log x)² + 1] .
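These solutions are easy to verify; here is a sympy sketch (my addition) for example (2), y = x sin⁻¹(σ₀x):

```python
import sympy as sp

x, s0 = sp.symbols('x sigma0', positive=True)

# Solution of example (2) in (6.40): dy/dx = tan(y/x) + y/x
y = x*sp.asin(s0*x)
print(sp.simplify(y.diff(x) - sp.tan(y/x) - y/x))   # 0
```

The check works because tan(sin⁻¹ u) = u/√(1 − u²), which is exactly the non-homogeneous part of dy/dx.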
(1): dy/dx = 2 (y/x) + (y/x)² .
(2): dy/dx = 3 tan(y/x) + y/x .
(3): dy/dx = 5 exp(−3 y/x) + y/x .
(4): dy/dx = 4 (x/y) + y/x .
(5): x (x² + 2 y²) dy/dx = y (x² + 3 y x + 2 y²) .
6.5 Equations Reducible to dy/dx = Φ(y/x)
dy/dx = φ[(b₁ y + c₁)/(b₂ x + c₂)] .

Solved below are equations (I) to (IV) that are of the form described above. Despite slightly different appearance, they are readily reducible to the general equation dy/dx = Φ(y/x).

(I): dy/dx = 1 + y/(x + 2) + [y/(x + 2)]² .
(II): dy/dx = 2 (y + 1)/(x + 2) + 4 .
(III): dy/dx = tan[(y + 2)/(x + 3)] + (y + 2)/(x + 3) .
(IV): dy/dx = exp[−(y + 3)/(x + 4)] + (y + 3)/(x + 4) .
Solution: (I)
Set (x + 2) = z. Then dz/dx = 1 and dy/dx = (dy/dz)(dz/dx) = (dy/dz) · 1 .

Therefore, dy/dz = dy/dx = 1 + (y/z) + (y/z)² .   (6.42)

Except for change of variable from x → z, this is identical to (1) in (6.40). Therefore, its solution is

(I): y = z tan[σ₀ + log z] , or, y = (x + 2) tan[σ₀ + log(x + 2)] .   (6.43)
Solutions: (II)–(IV)
Set z₁ = y + 1 , z₂ = x + 2 and z₁/z₂ = ρ . Then

dy/dx = dz₁/dz₂ .

If one sets

2 ρ + 4 = Φ(ρ) ,

then, much like (6.38), one has dy/dx = Φ(ρ). Therefore, according to (6.39), dx/x = dρ/[Φ(ρ) − ρ]. Additionally, because dy/dx = dz₁/dz₂, the earlier variable y corresponds to the current variable z₁. Similarly, the earlier variable x corresponds to the current variable z₂. Therefore, for brevity it is convenient to go directly to (6.39) and rewrite it by transliterating x to z₂. Additionally, we do the same for the corresponding result that would be obtained.

dx/x → dz₂/z₂ = dρ/[Φ(ρ) − ρ] = dρ/(2ρ + 4 − ρ) = dρ/(ρ + 4) .   (6.44)

(II): y = σ₀ (x + 2)² − 4 x − 9 .   (6.46)
(III): Set z₁ = y + 2 , z₂ = x + 3 , z₁/z₂ = ρ .

Then, dy/dx = dz₁/dz₂ = tan(z₁/z₂) + z₁/z₂ = tan ρ + ρ .   (6.47)

As per (6.38) and (6.39), the relationship dx/x = dρ/[Φ(ρ) − ρ] obtains. And again because here dy/dx = dz₁/dz₂, therefore, similar to (6.39), one can write
dz₂/z₂ = dρ/[Φ(ρ) − ρ] = dρ/(tan ρ + ρ − ρ) = dρ/tan ρ .   (6.48)

Following the same procedure as used in the above two equations, namely (II) and (III), one can write

dz₂/z₂ = dρ/[Φ(ρ) − ρ] = dρ/(exp(−ρ) + ρ − ρ) = dρ/exp(−ρ) .   (6.53)
Because z₁ = y + 3, z₂ = x + 4, ρ = z₁/z₂, (6.54) readily leads to the result

(IV): y = (x + 4) log[log(x + 4) + σ₀] − 3 .   (6.55)
6.5.2 Equations dy/dx = (a₁ x + b₁ y + c₁)/(a₂ x + b₂ y + c₂)
(V): dy/dx = (x − y)/(x + y + 2) .
(VI): dy/dx = (x − y + 1)/(x + y + 3) .
(VII): dy/dx = (x + 2 y + 3)/[4 (x + 2 y) + 5] .   (6.56)
Solutions of (V) and (VI)
Unlike (II)–(IV) the situation is different here because either the numerator or the
denominator, or both, contain x as well as y. When such is the case, one proceeds as
follows: First one sets
y = z₁ + a ; x = z₂ + b ; z₁/z₂ = ρ ; dy/dx = dz₁/dz₂ .   (6.57)

Then, one eliminates both a and b as well as the original constants in the numerator and the denominator. To that end, one employs Cramer's rule—see (3.117). This procedure is demonstrated in the treatment of (V) and (VI) below.

dy/dx = dz₁/dz₂ = (z₂ − z₁)/(z₁ + z₂) = (1 − ρ)/(1 + ρ) = Φ(ρ) .   (6.59)
log(z₂) = − (1/2) log(ρ² + 2ρ − 1) + σ₁ .   (6.61)

log(x + 1) = − (1/2) log[y² + 4y + 2yx − x² + 2] + log(x + 1) + σ₁ .

Thus

log[y² + 4y + 2yx − x² + 2] = 2 σ₁ .

Exponentiation leads to

y² + 4y + 2yx − x² − σ₀ = 0 .
(VI): dy/dx = (x − y + 1)/(x + y + 3) = [z₂ − z₁ + (−a + b + 1)]/[z₂ + z₁ + (a + b + 3)] .   (6.63)

dy/dx = dz₁/dz₂ = (z₂ − z₁)/(z₁ + z₂) = (1 − ρ)/(1 + ρ) = Φ(ρ) .   (6.64)

log(z₂) = − (1/2) log(ρ² + 2ρ − 1) + σ₁ .   (6.65)

log(x + 2) = − (1/2) log[y² + 6y + 2yx − x² − 2x + 1] + log(x + 2) + σ₁ .

Thus

log[y² + 6y + 2yx − x² − 2x + 1] = 2 σ₁ .

y² + 6y + 2yx − x² − 2x − σ₀ = 0 .
Remark
Differential equation (V I ) that was solved above had variables x, y as well as con-
stants both in the numerator and the denominator. It was successfully treated by
replacing x and y with two new variables such that the use of Cramer’s rule helped
eliminate the constants. There are, however, situations where the Cramer’s rule is
inapplicable. When that is the case, and the differential equation has features similar
to (V I I ), one proceeds somewhat differently.
Procedure for Solving (VII)
For generality and ease of recognizing the feature of interest, it is convenient to rewrite problem (VII) as

dy = [(x + 2y + 3)/(4(x + 2y) + 5)] dx ≡ [((a x + b y) + c)/(j (a x + b y) + r)] dx ,   (6.67)
where a, b, c, j, and r are constants. Clearly, the Cramer's rule is inapplicable here because the relevant determinant,

∣ a    b ∣
∣ j a  j b ∣ ,   (6.68)

is vanishing.
To deal with this matter, let us introduce a single variable, z₀, for the combination (a x + b y).

z₀ = a x + b y .   (6.69)

dz₀ = a dx + b dy
= a dx + b [(a x + b y) + c]/[j (a x + b y) + r] dx
= a dx + b (z₀ + c)/(j z₀ + r) dx
= dx [z₀ (a j + b) + (a r + b c)]/(j z₀ + r) .   (6.70)

Next, multiply the resultant equation (6.70) by (j z₀ + r)/[z₀ (a j + b) + (a r + b c)].

(j z₀ + r) dz₀/[z₀ (a j + b) + (a r + b c)] = dx .   (6.71)
As a result, the left- and the right-hand sides are functions of only one variable each:
that is z 0 and x. Thus, one can directly integrate both sides to determine one as a
function of the other. Finally, because z 0 is equal to (a x + b y), in this manner one
has determined the desired solution y ≡ y(x).
[(4 z₀ + 5)/(6 z₀ + 11)] dz₀ = dx .   (6.72)

2 z₀/3 − (7/18) log(6 z₀ + 11) + σ₀ = x .

Setting z₀ = (a x + b y) → (x + 2 y), one finds the requisite solution to (VII).

(VII): x = 2(x + 2y)/3 − (7/18) log(6 x + 12 y + 11) + σ₀ .   (6.73)
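The one nontrivial step — the antiderivative behind (6.73) — can be confirmed with sympy (my sketch):

```python
import sympy as sp

z0 = sp.symbols('z0', positive=True)

# Separated form (6.72): (4 z0 + 5)/(6 z0 + 11) dz0 = dx; its antiderivative is
# 2*z0/3 - (7/18)*log(6*z0 + 11) + const, which is what (6.73) uses.
F = sp.integrate((4*z0 + 5)/(6*z0 + 11), z0)
print(sp.simplify(F.diff(z0) - (4*z0 + 5)/(6*z0 + 11)))   # 0

G = 2*z0/3 - sp.Rational(7, 18)*sp.log(6*z0 + 11)
print(sp.simplify(sp.diff(F - G, z0)))                    # 0: F and G differ by a constant
```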
Solve the following problems, labeled (1)–(7), that are in fact reducible to the form q = Φ(y/x). [Hint: Read (6.42)–(6.73).]
(1): dy/dx = 1 + y/(x + 1) + [y/(x + 1)]² .
(2): dy/dx = (y + 2)/(x + 3) + 1 .
(3): dy/dx = tan[(y + 1)/(x + 2)] + (y + 1)/(x + 2) .
(4): dy/dx = exp[−(y + 2)/(x + 3)] + (y + 2)/(x + 3) .
(5): dy/dx = (x − y + 1)/(x + 2 y + 3) .
(6): dy/dx = (x − y + 1)/(x + 3 y + 4) .
(7): dy/dx = [(x + y) + 2]/[3 (x + y) + 4] .
Consider variables X and Y that are both real and finite within an (X, Y) domain, and a function Z(X, Y) that has continuous partial derivatives (∂Z/∂X)_Y and (∂Z/∂Y)_X. Then, the total derivative of Z, namely dZ(X, Y), is exact if U(X, Y) and V(X, Y) are functions of X and Y in the domain and obey the so-called integrability requirement:

U(X, Y) = (∂Z/∂X)_Y ; V(X, Y) = (∂Z/∂Y)_X .   (6.75)

This requirement holds if the second mixed derivatives of Z are equal. That is, if the following is true:

∂²Z/∂Y∂X = ∂²Z/∂X∂Y .   (6.76)

Thus, in two dimensions, the standard representation for an exact differential is
Thus, in two dimensions, the standard representation for an exact differential is
6.6 Exact Differential and Exact Differential Equation 143
dZ(X, Y) = (∂Z/∂X)_Y dX + (∂Z/∂Y)_X dY   (6.77)

dZ(X, Y) = 0 ,   (6.79)

or equivalently

(∂Z/∂X)_Y dX + (∂Z/∂Y)_X dY = 0 ,   (6.80)
6.6.3 Requirement
6.6.4 Solution
dZ (X, Y ) = 0 (6.83)
and assume it is exact. Solving it requires the evaluation of the function Z(X, Y). To that purpose, proceed as follows: One has from (6.81) and (6.82):

U(X, Y) = (∂Z/∂X)_Y .   (6.84)

Eliminate −F(Y). Thus, the solution Z(X, Y) of the exact differential equation (6.81) is
Z(X, Y) = ∫ U(X, Y) dX + ∫ V(X, Y) dY − ∫ { ∂[∫ U(X, Y) dX]/∂Y }_X dY + σ .   (6.88)
Given below are six differential equations that are similar to (6.81).
(1): 0 = dZ = [1 + 2Y] dX + [2 + 2X] dY .
(2): 0 = dZ = [3X² + 4XY] dX + [2X² + 2Y] dY .
(3): 0 = dZ = exp(2Y) dX + [2X exp(2Y) − Y] dY .
(4): 0 = dZ = [3YX² + sin(Y)] dX + [X cos(Y) + X³ + 1] dY .
(5): 0 = dZ = [sin(Y) + XY² + X²Y + X + Y] dX + [X³/3 + X cos(Y) + X²Y + X] dY .
(6): 0 = dZ = [X + 2X cos(Y) + 3X²Y] dX + [X³ − X² sin(Y) + Y] dY .   (6.89)
6.6.6 Solution
Choose U (X, Y ) and V (X, Y ) from one of the six equations (6.89). Check whether
the relevant differential equation is exact by using (6.82). And if so, employ (6.88)
to find its solution.
The results are given below ad seriatim.
(1): U = 1 + 2Y ; V = 2 + 2X .
(∂U/∂Y)_X = 2 = (∂V/∂X)_Y .
∫ U dX = X + 2YX , ∫ V dY = 2Y + 2XY ,
− ∫ { ∂[∫ U dX]/∂Y }_X dY = − ∫ 2X dY = − 2XY ,
σ₁ = (X + 2YX) + (2Y + 2XY) − 2XY .
Therefore, Y = − (X − σ₁)/(2 + 2X) .
(2): U = 3X² + 4XY ; V = 2X² + 2Y .
(∂U/∂Y)_X = 4X = (∂V/∂X)_Y .
σ₁ = (X³ + 2YX²) + (Y² + 2X²Y) − 2X²Y .
Therefore, Y² + 2X²Y + X³ − σ₁ = 0 .
Or equivalently, Y = − X² ± √(X⁴ − X³ + σ₁) .
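The exactness test and the recovered potential Z can be verified mechanically; the sympy sketch below (my addition) does this for equation (2):

```python
import sympy as sp

X, Y = sp.symbols('X Y')

U = 3*X**2 + 4*X*Y
V = 2*X**2 + 2*Y
# Integrability requirement (6.82): the mixed partials agree (both are 4*X)
print(sp.diff(U, Y) - sp.diff(V, X))           # 0

# Candidate solution from (6.88): Z = X**3 + 2*X**2*Y + Y**2 = sigma_1
Z = X**3 + 2*X**2*Y + Y**2
print(sp.diff(Z, X) - U, sp.diff(Z, Y) - V)    # 0 0
```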
(5): U = sin(Y) + XY² + X²Y + X + Y ;
V = X³/3 + X cos(Y) + X²Y + X .
(∂U/∂Y)_X = cos(Y) + 2XY + X² + 1 = (∂V/∂X)_Y .
σ₁ = [X sin(Y) + X²Y²/2 + X³Y/3 + X²/2 + XY]
+ [X³Y/3 + X sin(Y) + X²Y²/2 + XY]
− [X sin(Y) + X²Y²/2 + X³Y/3 + XY] .
Therefore, σ₁ = X³Y/3 + X sin(Y) + X²Y²/2 + XY + X²/2 .
Exercise
Due to symmetry, it is clear that the functions U (X, Y ) and V (X, Y ) may be inter-
changed as long as the same is done for the variables X and Y. Therefore, (6.88) can
also be represented as
Z(X, Y) = ∫ V(X, Y) dY + ∫ U(X, Y) dX − ∫ { ∂[∫ V(X, Y) dY]/∂X }_Y dX + σ .   (6.91)
Show that this is true. Also, by using (6.91)—that is, instead of (6.88)—solve the six
differential equations given as (6.89).
Solve the exact differential equations specified in problems (1) → (3). [Hint: Read
(6.88) and (6.90).]
(1): 0 = dz = [1 + 2xy] dx + [2 + x²] dy .
(2): 0 = dz = [1 + y + y²] dx + [2 + 2xy] dy .
(3): 0 = dz = [exp(y) + x] dx + [x exp(y) + y] dy .
may be made exact if multiplied by an appropriate function I (x, y). When that can
be done, and the new differential equation
Assume the integrating factor for the inexact differential equation (6.92) is dependent only on x. That is

X(x) (∂u/∂y)_x = X(x) (∂v/∂x)_y + v(x, y) dX(x)/dx .   (6.98)

[(∂u/∂y)_x − (∂v/∂x)_y] / v(x, y) = [dX(x)/dx] / X(x) .   (6.99)

In other words, if the integrating factor X that makes the inexact differential equation (6.92) exact is to depend only on x, it must satisfy (6.99).
6.7 Inexact Differential Equation Integrating Factor 149
Assume there is an integrating function, Y(y), dependent only on y that makes the inexact differential equation (6.92) exact. That is, assume the following is an exact differential equation.

Comparing with (6.99) and (6.101), symmetry considerations suggest that if there is an integrating factor Y(y) that makes an inexact differential equation u dx + v dy = 0 exact, it must have the form

[(∂v/∂x)_y − (∂u/∂y)_x] / u(x, y) = [dY(y)/dy] / Y(y) ;

Y(y) = σ₂ exp{ ∫ [(∂v(x, y)/∂x)_y − (∂u(x, y)/∂y)_x] / u(x, y) dy } .   (6.103)
Exercise
Prove that (6.103), guessed on symmetry grounds, is correct.
Exact or Inexact?
Solve the following three differential equations.
Procedure: First choose u(x, y) and v(x, y) from one of the three equations (6.104).
Then by using (6.82), check as to whether the relevant differential equation is exact
or inexact. And if it is inexact, employ (6.99) or (6.103) to find whether an integrating
factor that depends only upon a single variable is possible.
Solution of (6.104)-(1)
u = 1 + 2y ; v = 1 + 3x . (∂u/∂y)_x = 2 and (∂v/∂x)_y = 3 .   (6.105)

Good news! There exists an integrating factor X(x). One can find it by integrating the above equation.

log[X(x)] = log[(1 + 3x)^(−1/3)] + constant .   (6.107)

Thus

X(x) = σ₀ (1 + 3x)^(−1/3) .   (6.108)

Including the integrating factor, and ignoring the unnecessary multiplier σ₀, the differential equation (6.104)-(1) now is
Now, determine whether this new differential equation is indeed exact. One has

(∂u/∂y)_x = 2 (1 + 3x)^(−1/3) ; (∂v/∂x)_y = 3 · (2/3)(1 + 3x)^(−1/3) = 2 (1 + 3x)^(−1/3) .   (6.110)
Good! The new equation is exact. Accordingly, one can find its solution via the
procedure outlined in (6.88). That is
σ₁ − σ₂ = ∫ u(x, y) dx + ∫ v(x, y) dy − ∫ { ∂[∫ u(x, y) dx]/∂y }_x dy
= (1 + 2y)(1 + 3x)^(2/3)/2 + (1 + 3x)^(2/3) y − (1 + 3x)^(2/3) y
= (1 + 2y)(1 + 3x)^(2/3)/2 .   (6.111)

Therefore, the solution to (6.104)-(1) is

y(x) = − 1/2 + (σ₁ − σ₂)/(1 + 3x)^(2/3) .   (6.112)
Solution of (6.104)-(2)
u = y² ; v = 3xy ; (∂u/∂y)_x = 2y ≠ (∂v/∂x)_y = 3y .   (6.113)

Good news! There exists an integrating factor Y(y), and one can find it by integrating the above equation.

Thus

Y(y) = σ₀ y .   (6.116)

y³ dx + 3 x y² dy = 0 .   (6.117)
As before, one checks whether this new differential equation is indeed exact.

u(x, y) = y³ ; v(x, y) = 3 y² x .
And, (∂u/∂y)_x = 3 y² ; (∂v/∂x)_y = 3 y² .   (6.118)

Yes, the new equation is exact. Accordingly, one can find its solution via the procedure outlined in (6.91). That is

σ₁ − σ₂ = ∫ v(x, y) dy + ∫ u(x, y) dx − ∫ { ∂[∫ v(x, y) dy]/∂x }_y dx
= ∫ (3 y² x) dy + ∫ (y³) dx − ∫ y³ dx
= y³ x + y³ x − y³ x = y³ x .   (6.119)
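The whole chain — computing the integrating factor from (6.103) and confirming that the new equation is exact — fits in a few sympy lines (my sketch):

```python
import sympy as sp

x, y = sp.symbols('x y')

u, v = y**2, 3*x*y               # the inexact equation y**2 dx + 3 x y dy = 0
# (6.103): dY/Y = [(dv/dx - du/dy)/u] dy = (1/y) dy, so Y(y) = y up to a constant
ratio = sp.simplify((sp.diff(v, x) - sp.diff(u, y))/u)
Yfac = sp.exp(sp.integrate(ratio, y))
print(Yfac)                                       # y

# With the factor included the equation is exact, cf. (6.117)-(6.118)
print(sp.diff(Yfac*u, y) - sp.diff(Yfac*v, x))    # 0
```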
Solution of (6.104)-(3)
u = x² − y ; v = x ; (∂u/∂y)_x = − 1 ≠ (∂v/∂x)_y = 1 .   (6.121)

This equation is NOT exact. Therefore, try (6.99) to check whether an integrating factor that depends only on one variable is possible. To that end, x would appear to be the more likely candidate.

{[(∂u/∂y)_x − (∂v/∂x)_y] / v} dx = (− 2/x) dx = dX/X ,   (6.122)

leading to

X(x) = σ₀/x² .   (6.124)
With the integrating factor included, the differential equation (6.104)-(3) reads

(1/x²)(x² − y) dx + (1/x²) x dy = 0 ;
u(x, y) = (1/x²)(x² − y) ; v(x, y) = (1/x²) x = 1/x .   (6.125)

For exactness, it must satisfy the equality

(∂u/∂y)_x = − 1/x² = (∂v/∂x)_y ,   (6.126)

which it does. Accordingly, one can find its solution via the procedure outlined in (6.88). That is

σ₁ − σ₂ = ∫ u(x, y) dx + ∫ v(x, y) dy − ∫ { ∂[∫ u(x, y) dx]/∂y }_x dy
= [x + y/x] + [y/x] − [y/x] = x + y/x .   (6.127)
Therefore, the solution to (6.104)-(3) is
dy/dx + y² + f₁(x) y + f₂(x) = 0   (6.130)

is sometimes called a Riccati equation.
6.8.1 Treatment
Set

(1/z) dz/dx = y .   (6.131)

As a result,

dz/dx = z y .   (6.132)

Its differentiation gives

d²z/dx² = z dy/dx + y dz/dx = z dy/dx + y² z .   (6.133)

Now, multiply (6.130) by z

z dy/dx + z y² + f₁(x) z y + f₂(x) z = 0   (6.134)

and replace z dy/dx according to (6.133). This gives

d²z/dx² − y² z + z y² + f₁(x) z y + f₂(x) z = 0 .   (6.135)

d²z/dx² + f₁(x) dz/dx + f₂(x) z = 0 .   (6.136)
Notice that this differential equation, unlike the original Riccati equation, is linear.
But the removal of nonlinearity has resulted in increasing the order. And we now
have a second-order differential equation. But, because the new equation is linear,
there is a greater possibility of making progress. In particular, if f 1 (x) and f 2 (x)
should both turn out to be constants, (6.136) can be solved much like equations of
the type (3.1), (3.5), etc. And once z has been determined, y can be calculated by a
simple differentiation as is implicit in the form of (6.131).
6.8 Riccati Equation 155
First convert the Riccati equations given below into second-order linear equations and then solve them.

(1): dy/dx + y² + x y = 0 .
(2): dy/dx + y² + x² y = 0 .
(3): dy/dx + y² + y + 1 = 0 .
(4): dy/dx + y² + 2 y + 2 = 0 .   (6.137)
Converting the Riccati equations into second-order linODEs is straightforward. All one needs to do is use (6.130) and (6.136). Accordingly, (6.137) get transformed as follows:

(1) → (1′): d²z/dx² + x dz/dx = 0
(2) → (2′): d²z/dx² + x² dz/dx = 0
(3) → (3′): d²z/dx² + dz/dx + z = 0
(4) → (4′): d²z/dx² + 2 dz/dx + 2 z = 0   (6.138)
6.8.3 Solution
dp/dx + x p = 0 .   (6.139)

Equation (6.139) is readily solved. (See, for instance, (4.2), (4.10), and (4.15) that describe how such first-order linODEs are solved.) That is

p = σ₀ exp(− x²/2) .   (6.140)

Because p = dz/dx, its integration leads to z.

∫ (dz/dx) dx = z = ∫ p dx = σ₀ ∫ exp(− x²/2) dx + σ₁ .   (6.141)
y = (1/z) dz/dx = p/z = σ₀ exp(− x²/2) / [σ₀ ∫ exp(− x²/2) dx + σ₁]
= exp(− x²/2) / [∫ exp(− x²/2) dx + σ₂] .   (6.142)
dp/dx + x² p = 0 .   (6.143)

Again with the help of (4.2), (4.10), and (4.15) one gets

p = σ₀ exp(− x³/3) .   (6.144)

Because p = dz/dx, its integration leads to z.

∫ dz = z = σ₀ ∫ exp(− x³/3) dx + σ₁ .   (6.145)

And once z is known, y can be determined by using (6.131). Much like (6.142) one can write y as

y = exp(− x³/3) / [∫ exp(− x³/3) dx + σ₂] .   (6.146)
k² + k + 1 = 0   (6.148)

which leads to

k = k₁ = r + i m = (− 1 + i √3)/2 ,
k = k₂ = r − i m = (− 1 − i √3)/2 .   (6.149)

Therefore, using (6.147)–(6.149), the result for z, according to (3.21) and (3.22), can be written as

Or, alternatively, as

z = σ₀ exp(− x/2) cos[σ₅ − (√3/2) x] .   (6.151)
Note, one went from (6.150) to (6.151) by introducing two new arbitrary constants σ₀ and σ₅ such that σ₃ = σ₀ sin(σ₅) and σ₄ = σ₀ cos(σ₅). This is perfectly alright as long as σ₀² = σ₃² + σ₄². And because all sigmas are arbitrary constants, this equality is trivially achieved.
Given z—see, (6.151)—the solution y is readily found.
$$y = \frac{1}{z}\frac{dz}{dx} = -\frac{1}{2} + \frac{\sqrt{3}}{2}\tan\left(\sigma_5 - \frac{\sqrt{3}}{2}\,x\right). \tag{6.152}$$
$$k^2 + 2\,k + 2 = 0\,, \tag{6.153}$$
which leads to
$$k = k_1 = r + i\,m = -1 + i\,, \qquad k = k_2 = r - i\,m = -1 - i\,. \tag{6.154}$$
Therefore,
$$y = \frac{1}{z}\frac{dz}{dx} = -1 + \tan(\sigma_5 - x)\,. \tag{6.156}$$
$$(1):\quad \frac{dy}{dx} + y^2 + 2\,y + 1 = 0\,.$$
$$(2):\quad \frac{dy}{dx} + y^2 + y + 2 = 0\,.$$
could be transformed into linear ordinary differential equations with constant coeffi-
cients. The latter are much easier to solve as was demonstrated in detail in Chaps. 3
and 4.
6.9 Euler Equation
$$x = \exp(t)\,; \qquad 1 = \exp(t)\,\frac{dt}{dx}\,. \tag{6.158}$$
$$1\cdot\frac{dU(x,t)}{dt} = \exp(t)\,\frac{dt}{dx}\cdot\frac{dU(x,t)}{dt} = \exp(t)\,\frac{dU(x,t)}{dx} = x\,\frac{dU(x,t)}{dx}\,. \tag{6.159}$$
$$x\,D = \theta\,. \tag{6.160}$$
$$\theta^2 = x\,D\,(x\,D) = x\,(D\,x)\,D + x^2 D^2 = x\,D + x^2 D^2 = \theta + x^2 D^2\,. \tag{6.161}$$
Therefore
$$x^2 D^2 \equiv \theta\,(\theta - 1)\,. \tag{6.162}$$
Similarly
$$x\,D\,\big(x^2 D^2\big) = \theta\,[\theta\,(\theta - 1)]\,; \tag{6.163}$$
expanding the left-hand side directly gives
$$x\,D\,\big(x^2 D^2\big) = 2\,x^2 D^2 + x^3 D^3 = 2\,\theta\,(\theta-1) + x^3 D^3\,. \tag{6.164}$$
The left-hand sides of (6.163) and (6.164) are equal. The equality of their right-hand sides yields
$$2\,\theta\,(\theta-1) + x^3 D^3 = \theta\,[\theta\,(\theta-1)]\,, \tag{6.165}$$
or
$$x^3 D^3 = \theta\,(\theta-1)\,(\theta-2)\,. \tag{6.166}$$
In order to proceed to the next order, multiply $x^3 D^3$ from the left by $x\,D$ and use (6.166) to appropriately replace one of the expressions equal to $x^3 D^3$:
$$x\,D\,\big(x^3 D^3\big) = 3\,x^3 D^3 + x^4 D^4 = 3\,\theta\,(\theta-1)\,(\theta-2) + x^4 D^4\,. \tag{6.167}$$
Multiply the left-hand side of (6.166) by $x\,D$ and the right-hand side by the same amount, that is $\theta$. One gets
$$x\,D\,\big(x^3 D^3\big) = \theta\,[\theta\,(\theta-1)\,(\theta-2)]\,. \tag{6.168}$$
Now that the left-hand sides of (6.167) and (6.168) are the same, one can claim equality of their right-hand sides. That is
$$x^4 D^4 = \theta\,(\theta-1)\,(\theta-2)\,(\theta-3)\,. \tag{6.169}$$
Exercise
Clearly, (6.160), (6.162), (6.166), and (6.169) show a pattern. Therefore, by induction, one assumes
$$x^n D^n = \theta\,(\theta-1)\,(\theta-2)\cdots(\theta-n+1)\,. \tag{6.170}$$
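The pattern $x^n D^n = \theta(\theta-1)\cdots(\theta-n+1)$, with $\theta = x\,\frac{d}{dx}$, can be spot-checked on monomials $x^m$, on which $\theta$ acts as multiplication by $m$. A minimal sympy sketch:

```python
import sympy as sp

x = sp.symbols('x', positive=True)
m = sp.symbols('m')
f = x**m  # θ x^m = m x^m, so x^n D^n x^m should equal m(m-1)...(m-n+1) x^m
for n in range(1, 5):
    lhs = x**n * sp.diff(f, x, n)
    rhs = sp.prod([m - k for k in range(n)]) * f
    assert sp.simplify(lhs - rhs) == 0
print("x^n D^n = theta(theta-1)...(theta-n+1) verified on x^m for n = 1..4")
```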
Solution IX-(A)
Use (6.158), (6.160), and (6.162) and transform (6.171) into
$$[\theta\,(\theta-1) + \theta + 1]\,U(t) = \big(\theta^2 + 1\big)\,U(t) = a\,\exp(2\,t) + b\,. \tag{6.172}$$
$$E_{ch} = k^2 + 1 = 0\,; \qquad k_{1,2} = \pm\,i\,;$$
$$S_{comp}(t) = \sigma_0\,\exp(i\,t) + \sigma_1\,\exp(-i\,t) = \sigma_2\,\sin(t) + \sigma_3\,\cos(t)\,;$$
$$I_{pi}(t) = \frac{1}{\theta^2+1}\,\{a\,\exp(2\,t) + b\} = a\,\exp(2\,t)\,\frac{1}{2^2+1} + \frac{b}{1}\,;$$
$$U(t) = S_{comp}(t) + I_{pi}(t) = \sigma_2\,\sin(t) + \sigma_3\,\cos(t) + \frac{a}{5}\,\exp(2\,t) + b\,.$$
Finally, transform U(t) back into u(x). In this fashion, one gets:
$$U(t) \to u(x) = \sigma_2\,\sin[\log(x)] + \sigma_3\,\cos[\log(x)] + \frac{a}{5}\,x^2 + b\,. \tag{6.173}$$
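Equation (6.171) itself is not reproduced in this excerpt; reading it off (6.172), the equation being solved is $[x^2 D^2 + x\,D + 1]\,u = a\,x^2 + b$ — an assumption the sketch below (using sympy) makes explicit when checking (6.173):

```python
import sympy as sp

x, a, b, s2, s3 = sp.symbols('x a b sigma2 sigma3', positive=True)
# the solution (6.173)
u = s2*sp.sin(sp.log(x)) + s3*sp.cos(sp.log(x)) + (a/5)*x**2 + b
# [x²D² + xD + 1]u, the Euler form inferred from theta(theta-1)+theta+1
lhs = x**2*sp.diff(u, x, 2) + x*sp.diff(u, x) + u
assert sp.simplify(lhs - (a*x**2 + b)) == 0
print("(6.173) solves [x^2 D^2 + x D + 1]u = a x^2 + b")
```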
Solution IX-(B)
$$IX\text{-}(B):\quad \big[x^2 D^2 - 2\,x\,D + 4\big]\,u = a\,\log(x)\,. \tag{6.174}$$
As above, use (6.158), (6.160), and (6.162) and transform (6.174) into the following differential equation:
$$\big[\theta^2 - 3\,\theta + 4\big]\,U(t) = a\,t\,. \tag{6.175}$$
$$\cdots + \frac{a}{4}\,\log(x) + \frac{3\,a}{16}\,. \tag{6.176}$$
Solution IX-(C)
$$IX\text{-}(C):\quad \big[x^2 D^2 + 2\,x\,D - 4\big]\,u = a\,x^2\,\log(x) + b\,. \tag{6.177}$$
Again use (6.158), (6.160), and (6.162) and transform (6.177) into
$$\big[\theta^2 + \theta - 4\big]\,U(t) = a\,\exp(2\,t)\,t + b\,, \tag{6.178}$$
$$\cdots + a\,x^2\left[\frac{1}{2}\,\log(x) - \frac{5}{4}\right]. \tag{6.179}$$
Solution IX-(D)
$$IX\text{-}(D):\quad \big[x^2 D^2 - 4\,x\,D + 2\big]\,u = 2\,x\,\cos[\log(x)]\,. \tag{6.180}$$
And finally, as before, use (6.158), (6.160), and (6.162) and transform (6.180) into
$$\big[\theta^2 - 5\,\theta + 2\big]\,U(t) = 2\,\exp(t)\,\cos(t)\,. \tag{6.181}$$
This leads to
$$E_{ch} = k^2 - 5\,k + 2\,; \qquad k_{1,2} = \frac{5}{2} \pm \frac{\sqrt{17}}{2}\,;$$
$$S_{comp}(t) = \sigma_1\,\exp\!\left[\left(\frac{5}{2} + \frac{\sqrt{17}}{2}\right)t\right] + \sigma_2\,\exp\!\left[\left(\frac{5}{2} - \frac{\sqrt{17}}{2}\right)t\right];$$
$$I_{pi}(t) = \frac{1}{\theta^2 - 5\,\theta + 2}\,\{2\,\exp(t)\cos(t)\} = \frac{1}{\theta^2 - 5\,\theta + 2}\,\exp[t\,(1+i)] + \frac{1}{\theta^2 - 5\,\theta + 2}\,\exp[t\,(1-i)]$$
$$= \frac{1}{-3-3\,i}\,\exp[t\,(1+i)] + \frac{1}{-3+3\,i}\,\exp[t\,(1-i)] = -\frac{\exp(t)}{6}\,\big[(1-i)\exp(i\,t) + (1+i)\exp(-i\,t)\big]$$
$$= -\frac{1}{3}\,\exp(t)\,[\sin(t) + \cos(t)]\,.$$
$$U(t) \to u(x) = \sigma_1\,x^{\frac{5}{2}+\frac{\sqrt{17}}{2}} + \sigma_2\,x^{\frac{5}{2}-\frac{\sqrt{17}}{2}} - \frac{x}{3}\,\big[\sin\{\log(x)\} + \cos\{\log(x)\}\big]\,. \tag{6.182}$$
Any such linear ordinary differential equation with variable coefficients of the form
cn (a x + b)n can readily be transformed into a linear ordinary differential equation
with constant coefficients—meaning into equations of the type that we have been
studying previously. The relevant transformation is carried out by a change of the
variable: from (a x + b) to exp(a t) where both a and b are constants. That is
Differentiate (6.184) with respect to x, and divide both sides by the constant a:
$$1 = \exp(a\,t)\,\frac{dt}{dx} = (a\,x + b)\,\frac{dt}{dx}\,. \tag{6.185}$$
$$\frac{dU(x,t)}{dt} = (a\,x+b)\,\frac{dt}{dx}\cdot\frac{dU(x,t)}{dt} = (a\,x+b)\,\frac{dU(x,t)}{dx}\,. \tag{6.186}$$
$$\theta = (a\,x+b)\,D\,. \tag{6.187}$$
$$\theta^2 = (a\,x+b)\,D\,\{(a\,x+b)\,D\} = (a\,x+b)\,\big[a\,D + (a\,x+b)\,D^2\big] = a\,\theta + (a\,x+b)^2 D^2\,. \tag{6.188}$$
$$(a\,x+b)^2 D^2 = \theta\,(\theta - a)\,. \tag{6.189}$$
Similarly
$$(a\,x+b)\,D\,\big[(a\,x+b)^2 D^2\big] = 2\,a\,(a\,x+b)^2 D^2 + (a\,x+b)^3 D^3\,,$$
which leads to
$$(a\,x+b)^3 D^3 = \theta\,(\theta-a)\,(\theta-2\,a)\,. \tag{6.192}$$
One also uses (a x + b) = exp(a t) for transforming the function B(x) into an appropriate function of the variable t. This will become clear in the examples (X) given below.
Exercise
Clearly, (6.187), (6.189), (6.192), and (6.193) show a pattern. Therefore, by induction,
$$(a\,x+b)^n D^n = \theta\,(\theta-a)\,(\theta-2\,a)\cdots\big(\theta-(n-1)\,a\big)\,.$$
$$E_{ch} = k^2 - 4 = 0\,; \qquad k_{1,2} = \pm\,2\,;$$
$$S_{comp}(t) = \sigma_0\,\exp(2\,t) + \sigma_1\,\exp(-2\,t)\,;$$
$$I_{pi}(t) = \frac{1}{\theta^2 - 4}\,\{10\,t\} = -\frac{1}{4}\left(1 + \frac{\theta^2}{4}\right)\{10\,t\} = -\frac{10}{4}\,t\,;$$
$$U(t) = S_{comp}(t) + I_{pi}(t) = \sigma_0\,\exp(2\,t) + \sigma_1\,\exp(-2\,t) - \frac{10}{4}\,t\,;$$
$$U(t) \to u(x) = \sigma_0\,(2x+3) + \frac{\sigma_1}{(2x+3)} - \frac{5}{4}\,\log(2x+3)\,. \tag{6.197}$$
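The equation solved here is stated later, at (6.307), as $\big[(2x+3)^2 D^2 + 2\,(2x+3)\,D - 4\big]u = 5\log(2x+3)$; taking that reading, the result (6.197) can be checked with sympy:

```python
import sympy as sp

x, s0, s1 = sp.symbols('x sigma0 sigma1', positive=True)
u = s0*(2*x + 3) + s1/(2*x + 3) - sp.Rational(5, 4)*sp.log(2*x + 3)  # (6.197)
lhs = (2*x + 3)**2*sp.diff(u, x, 2) + 2*(2*x + 3)*sp.diff(u, x) - 4*u
assert sp.simplify(lhs - 5*sp.log(2*x + 3)) == 0
print("(6.197) solves the X(A) equation")
```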
$$X(B):\quad \big[(x+2)^2 D^2 + 3\,(x+2)\,D + 1\big]\,u(x) = (x+2)^2 \log(x+2)\,. \tag{6.198}$$
$$E_{ch} = k^2 + 2\,k + 1 = 0\,; \qquad k_{1,2} = -1\,;$$
$$S_{comp}(t) = (\sigma_0 + \sigma_1\,t)\,\exp(-t)\,.$$
$$I_{pi}(t) = \frac{1}{\theta^2 + 2\,\theta + 1}\,\{\exp(2\,t)\,t\} = \exp(2\,t)\,\frac{1}{(\theta+2)^2 + 2\,(\theta+2) + 1}\,\{t\} = \exp(2\,t)\,\frac{1}{9 + 6\,\theta + \theta^2}\,\{t\}$$
$$= \frac{\exp(2\,t)}{9}\left(1 - \frac{6}{9}\,\theta\right)\{t\} = \frac{\exp(2\,t)}{9}\left(t - \frac{6}{9}\right);$$
$$U(t) = S_{comp}(t) + I_{pi}(t) = (\sigma_0 + \sigma_1\,t)\,\exp(-t) + \frac{\exp(2\,t)}{9}\left(t - \frac{6}{9}\right).$$
$$U(t) \to u(x) = \frac{\sigma_0 + \sigma_1\,\log(x+2)}{(x+2)} + \frac{(x+2)^2}{9}\left[\log(x+2) - \frac{6}{9}\right]. \tag{6.200}$$
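The double root $k_{1,2} = -1$ and the shifted particular integral in (6.200) can both be confirmed at once by substituting the full solution back into (6.198); a sympy sketch:

```python
import sympy as sp

x, s0, s1 = sp.symbols('x sigma0 sigma1', positive=True)
L = sp.log(x + 2)
u = (s0 + s1*L)/(x + 2) + ((x + 2)**2/9)*(L - sp.Rational(2, 3))  # (6.200); 6/9 = 2/3
lhs = (x + 2)**2*sp.diff(u, x, 2) + 3*(x + 2)*sp.diff(u, x) + u
assert sp.simplify(lhs - (x + 2)**2*L) == 0
print("(6.200) solves (6.198)")
```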
$$X(C):\quad \big[(x-1)^2 D^2 - 1\big]\,u(x) = (x-1)\,\cos[\log(x-1)]\,\log(x-1)\,. \tag{6.201}$$
Solve the following five problems by transforming each of the differential equations
by setting the variable (a x + b) = exp(a t). Then, work out their solution U (t).
Also write the corresponding solution u(x).
$$\big[(x+3)^2 D^2 + 2\,(x+3)\,D\big]\,u = -\log(x+3)\,(x+3)\,. \qquad (1)$$
$$\big[(x+3)^2 D^2 + 2\,(x+3)\,D + 3\big]\,u = -\log(x+3)\,(x+3)\,. \qquad (2)$$
$$\big[(2x-1)^2 D^2 - 2\,(2x-1)\,D\big]\,u = -2\,\log(2x-1)\,\sin[\log(2x-1)]\,(2x-1)\,. \qquad (3)$$
$$\big[(2x-1)^2 D^2 - 2\,(2x-1)\,D + 4\big]\,u = -2\,\log(2x-1)\,\sin[\log(2x-1)]\,(2x-1)\,. \qquad (4)$$
$$\big[(3x+2)^2 D^2 - 3\,(3x+2)\,D + \tfrac{1}{4}\big]\,u = \sin[\log(3x+2)]\,(3x+2)\,. \qquad (5)$$
$$y_1 = [D + M_2]\,y(x)\,, \tag{6.204}$$
and
$$[x^n D + M_1]\,y_1 = N\,, \tag{6.205}$$
where for brevity M1 (x) has been represented as M1 , M2 (x) as M2 , N (x) as N , y1 (x)
as y1 . With the help of (6.204), which allows replacing y1 by [D + M2 ]y(x), (6.205)
can be represented as
The operator multiplication of the term on the left-hand side of (6.206) requires
some care. For instance, (6.206) leads to the following second-order (linODE) with
variable coefficients.
$$\big\{x^n D^2 + [M_1 + x^n M_2]\,D + [x^n (D M_2) + M_1 M_2]\big\}\,y(x) = N\,. \tag{6.207}$$
Check whether the following second-order equations are factorable. And if so, solve
them accordingly.
$$XI(1):\quad \big[D^2 + (x^{-1} + 1)\,D + x^{-1}\big]\,y(x) = x\,.$$
$$XI(2):\quad \big[D^2 + (x^{-1} + 3)\,D + 2\,(x^{-1} + 1)\big]\,y = x^{-1}\,.$$
$$XI(3):\quad \big[x\,D^2 - (2\,x + 1)\,D + 2\big]\,y = x^3 \exp(x)\,.$$
$$XI(4):\quad \big[D^2 + x^{-1} D - x^{-2}\big]\,y = \exp(x)\,.$$
$$XI(5):\quad \big[D^2 + 2\,x\,D + 2\big]\,y = \log(x)\,.$$
$$XI(6):\quad \big[D^2 + \exp(x)\,D + \exp(x)\big]\,y = 2\,x\,\exp(x)\,. \tag{6.208}$$
Solution XI(1)
One-to-one comparison with (6.207) leads to the equalities
$$M_1 + x^n M_2 = x^{-1} + 1\,; \tag{6.209}$$
$$x^n (D M_2) + M_1 M_2 = x^{-1}\,. \tag{6.210}$$
$$M_1 + M_2 = x^{-1} + 1\,. \tag{6.211}$$
$$(D M_2) + M_1 M_2 = x^{-1}\,. \tag{6.212}$$
and solving it would involve a second-order differential equation. Thus, one would have made no headway at all! Unless, of course, one were clever and could guess M1 or M2 by inspection.
$$(x^{-1} + 1)\,c = c^2 + x^{-1}\,. \tag{6.214}$$
This is a quadratic with two solutions c = 1 and c = x −1 . Clearly the only acceptable
solution, where M2 = c is a constant, is c = 1. Knowing M2 = 1, (6.211), or (6.212),
gives M1 = x −1 .
Also because n = 0 and N = x, the central differential equations to be solved—
namely (6.205) ≡ (a) and (6.204)≡ (b)—can now be written as:
$$(a):\quad [D + x^{-1}]\,y_1 = x\,; \qquad (b):\quad [D + 1]\,y(x) = y_1\,. \tag{6.215}$$
Because both these equations are first-order (linODE), the procedures used in (4.2)–
(4.18) apply and lead readily to the relevant solution.
$$(a):\quad y_1 = \frac{\sigma_1}{x} + \frac{x^2}{3}\,;$$
$$(b):\quad y(x) = \sigma_1 \exp(-x)\int \frac{\exp(x)}{x}\,dx + \sigma_2 \exp(-x) + \frac{1}{3}\,\big[x^2 - 2\,x + 2\big]\,. \tag{6.216}$$
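The integral $\int \exp(x)/x\,dx$ in (6.216) is the exponential integral Ei(x), so the whole solution can be substituted back into XI(1) symbolically; a sympy sketch:

```python
import sympy as sp

x, s1, s2 = sp.symbols('x sigma1 sigma2', positive=True)
# (6.216): ∫exp(x)/x dx written as the exponential integral Ei(x)
y = s1*sp.exp(-x)*sp.Ei(x) + s2*sp.exp(-x) + (x**2 - 2*x + 2)/3
lhs = sp.diff(y, x, 2) + (1/x + 1)*sp.diff(y, x) + y/x  # XI(1) operator
assert sp.simplify(lhs - x) == 0
print("(6.216) solves XI(1)")
```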
Solution XI(2)
Again work with the master equation (6.207) and the defining (6.204), (6.205),
and (6.206).
Here, n = 0, N = x −1 and
$$(a):\quad [D + (1 + x^{-1})]\,y_1 = x^{-1}\,; \qquad (b):\quad (D + 2)\,y(x) = y_1\,. \tag{6.218}$$
$$(a):\quad y_1 = \sigma_1\,\frac{\exp(-x)}{x} + \frac{1}{x}\,;$$
$$(b):\quad y(x) = \sigma_1 \exp(-2\,x)\int \frac{\exp(x)}{x}\,dx + \sigma_2 \exp(-2\,x) + \exp(-2\,x)\int \frac{\exp(2\,x)}{x}\,dx\,. \tag{6.219}$$
Solution XI(3)
Next, solve differential equation {3}. Here, n = 1 and
Its two solutions are : c = −x −1 and c = −2. Clearly, the solution which is relevant
is when c is a constant. That is, c = −2. Knowing M2 = c = −2 one readily finds
M1 = −1. Knowing n, M1 , M2 , and the fact that N = x 3 exp(x), one can construct
the pair of first-order (linODE) that need solving. These are:
$$(a):\quad y_1 = \sigma_1\,x + (x^2 - x)\exp(x)\,;$$
$$(b):\quad y(x) = \sigma_2 \exp(2\,x) - \frac{\sigma_1}{4}\,(2\,x + 1) - \big[x^2 + x + 1\big]\exp(x)\,. \tag{6.223}$$
Solution XI(4)
Here, n = 0, N = exp(x) and
Their solution is
$$(a):\quad y_1 = \sigma_1 + \exp(x)\,;$$
$$(b):\quad y(x) = \sigma_1\,\frac{x}{2} + \frac{\sigma_2}{x} + \exp(x) - \frac{1}{x}\,\exp(x)\,. \tag{6.226}$$
Solution XI(5)
Their solution is
$$(a):\quad y_1 = \sigma_1 - x + x\,\log(x)\,;$$
$$(b):\quad y(x) = \sigma_1 \exp(-x^2)\int \exp(x^2)\,dx + \sigma_2 \exp(-x^2) + \frac{1}{2}\,[\log(x) - 1] - \frac{1}{4}\,\exp(-x^2)\int \frac{\exp(x^2)}{x}\,dx\,. \tag{6.229}$$
Solution XI(6)
Their solution is, as also recorded at (6.344) below,
$$(a):\quad y_1 = \sigma_1 + 2\,\exp(x)\,(x-1)\,;$$
$$(b):\quad y(x) = (\sigma_1 - 2)\,\exp\{-\exp(x)\}\int \exp\{\exp(x)\}\,dx + \sigma_2\,\exp\{-\exp(x)\} + 2\,x - 2\,.$$
$$u = 1 + 2\,y\,; \quad v = 1 + 3\,x\,; \qquad \left(\frac{\partial u}{\partial y}\right)_x = 2 \quad\text{and}\quad \left(\frac{\partial v}{\partial x}\right)_y = 3\,. \tag{6.233}$$
Good news! There exists an integrating factor X (x). One can find it by integrating
the above equation.
$$\log[X(x)] = \log\big[(1+3\,x)^{-\frac{1}{3}}\big] + \text{constant}\,. \tag{6.235}$$
Thus
$$X(x) = \frac{\sigma_0}{(1+3\,x)^{\frac{1}{3}}}\,. \tag{6.236}$$
Including the integrating factor, and ignoring the unnecessary multiplier σ0 , the
differential equation (6.104)-(1) now is
Now determine whether this new differential equation is indeed exact. One has
$$\left(\frac{\partial u}{\partial y}\right)_x = 2\,(1+3\,x)^{-\frac{1}{3}}\,; \qquad \left(\frac{\partial v}{\partial x}\right)_u = \frac{2}{3}\cdot 3\,(1+3\,x)^{-\frac{1}{3}} = 2\,(1+3\,x)^{-\frac{1}{3}}\,. \tag{6.238}$$
Good! The new equation is exact. Accordingly, one can find its solution via the
procedure outlined in (6.88). That is
$$\sigma_1 - \sigma_2 = \int u(x,y)\,dx + \int v(x,y)\,dy - \int \frac{\partial\big[\int u(x,y)\,dx\big]}{\partial y}\,dy$$
$$= \int (1+3\,x)^{-\frac{1}{3}}(1+2\,y)\,dx + \int (1+3\,x)^{\frac{2}{3}}\,dy - \int \frac{\partial\big[\int u(x,y)\,dx\big]}{\partial y}\,dy$$
$$= \frac{(1+2\,y)(1+3\,x)^{\frac{2}{3}}}{2} + (1+3\,x)^{\frac{2}{3}}\,y - (1+3\,x)^{\frac{2}{3}}\,y$$
$$= \frac{(1+2\,y)(1+3\,x)^{\frac{2}{3}}}{2}\,. \tag{6.239}$$
Therefore, the solution to (6.104)-(1) is
$$y(x) = -\frac{1}{2} + \frac{\sigma_1 - \sigma_2}{(1+3\,x)^{\frac{2}{3}}}\,. \tag{6.240}$$
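Reading the underlying inexact equation off (6.233) as $(1+2y)\,dx + (1+3x)\,dy = 0$, the solution (6.240) is a one-line check with sympy:

```python
import sympy as sp

x, c = sp.symbols('x c')
# (6.240), with c standing for the lumped constant σ₁ - σ₂
y = -sp.Rational(1, 2) + c/(1 + 3*x)**sp.Rational(2, 3)
# (1+2y)dx + (1+3x)dy = 0  ⇔  dy/dx = -(1+2y)/(1+3x)
residual = sp.diff(y, x) + (1 + 2*y)/(1 + 3*x)
assert sp.simplify(residual) == 0
print("(6.240) solves (1+2y)dx + (1+3x)dy = 0")
```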
$$\frac{dy}{dx} + y^2 + f_1(x)\,y + f_2(x) = 0 \tag{6.242}$$
is sometimes called a Riccati equation.
6.11 Additional Riccati Equations
6.11.1 Solution
Set
$$\frac{1}{z}\frac{dz}{dx} = y\,. \tag{6.243}$$
As a result,
$$\frac{dz}{dx} = z\,y\,. \tag{6.244}$$
Its differentiation gives
$$\frac{d^2 z}{dx^2} = z\,\frac{dy}{dx} + y\,\frac{dz}{dx} = z\,\frac{dy}{dx} + y^2\,z\,. \tag{6.245}$$
Now, multiply (6.242) by z
$$z\left[\frac{dy}{dx} + y^2 + f_1(x)\,y + f_2(x)\right] = 0 \tag{6.246}$$
and replace $z\,\frac{dy}{dx}$ according to (6.245). This gives
$$\frac{d^2 z}{dx^2} - y^2 z + z\,y^2 + f_1(x)\,z\,y + f_2(x)\,z = 0\,, \tag{6.247}$$
that is,
$$\frac{d^2 z}{dx^2} + f_1(x)\,\frac{dz}{dx} + f_2(x)\,z = 0\,. \tag{6.248}$$
Notice that this differential equation, unlike the original Riccati equation, is linear.
But the removal of nonlinearity has resulted in increasing the order. And we now
have a second-order differential equation. But, because the new equation is linear,
there is a greater possibility of making progress. In particular, if f 1 (x) and f 2 (x)
should both turn out to be constants, (6.248) can be solved much like equations of
the type (3.1), (3.5), etc. And once z has been determined, y can be calculated by a
simple differentiation as is implicit in the form of (6.243).
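The whole substitution argument (6.243)–(6.248) can be reproduced mechanically; the sympy sketch below feeds $y = z'/z$ into the general Riccati form (6.242), multiplies by $z$, and confirms that what is left is exactly the linear equation (6.248):

```python
import sympy as sp

x = sp.symbols('x')
z = sp.Function('z')
f1, f2 = sp.Function('f1'), sp.Function('f2')

y = z(x).diff(x)/z(x)                      # the substitution (6.243)
riccati = sp.diff(y, x) + y**2 + f1(x)*y + f2(x)
linear = sp.expand(riccati*z(x))           # multiply through by z, as in (6.246)
expected = z(x).diff(x, 2) + f1(x)*z(x).diff(x) + f2(x)*z(x)  # (6.248)
assert sp.simplify(linear - expected) == 0
print("y = z'/z turns (6.242) into the linear equation (6.248)")
```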
First convert the Riccati equations given below into second-order linear equations
and then solve them.
$$(1):\quad \frac{dy}{dx} + y^2 + x\,y = 0\,.$$
$$(2):\quad \frac{dy}{dx} + y^2 + x^2\,y = 0\,.$$
$$(3):\quad \frac{dy}{dx} + y^2 + y + 1 = 0\,.$$
$$(4):\quad \frac{dy}{dx} + y^2 + 2\,y + 2 = 0\,. \tag{6.249}$$
Converting the Riccati equations into second-order linear ordinary differential equations is straightforward. All one needs to do is use (6.242) and (6.248). Accordingly, the equations in (6.249) get transformed as follows:
$$(1)\to(1'):\quad \frac{d^2 z}{dx^2} + x\,\frac{dz}{dx} = 0$$
$$(2)\to(2'):\quad \frac{d^2 z}{dx^2} + x^2\,\frac{dz}{dx} = 0$$
$$(3)\to(3'):\quad \frac{d^2 z}{dx^2} + \frac{dz}{dx} + z = 0$$
$$(4)\to(4'):\quad \frac{d^2 z}{dx^2} + 2\,\frac{dz}{dx} + 2\,z = 0 \tag{6.250}$$
6.11.3 Solution
$$\frac{dp}{dx} + x\,p = 0\,. \tag{6.251}$$
Equation (6.251) is readily solved. [See, for instance, (4.2), (4.10) and (4.15) that
describe how such first-order linear ordinary differential equations are solved.] That
is
$$p = \sigma_0 \exp\left(-\frac{x^2}{2}\right). \tag{6.252}$$
Because $p = \frac{dz}{dx}$, its integration leads to $z$:
$$\int \frac{dz}{dx}\,dx = z = \int p\,dx = \sigma_0 \int \exp\left(-\frac{x^2}{2}\right)dx + \sigma_1\,. \tag{6.253}$$
$$y = \frac{1}{z}\frac{dz}{dx} = \frac{p}{z} = \frac{\sigma_0 \exp\left(-\frac{x^2}{2}\right)}{\sigma_0 \int \exp\left(-\frac{x^2}{2}\right)dx + \sigma_1} = \frac{\exp\left(-\frac{x^2}{2}\right)}{\int \exp\left(-\frac{x^2}{2}\right)dx + \sigma_2}\,. \tag{6.254}$$
$$\frac{dp}{dx} + x^2\,p = 0\,. \tag{6.255}$$
Again with the help of (4.2), (4.10), and (4.15) one gets
$$p = \sigma_0 \exp\left(-\frac{x^3}{3}\right). \tag{6.256}$$
Because $p = \frac{dz}{dx}$, its integration leads to $z$:
$$\int dz = z = \sigma_0 \int \exp\left(-\frac{x^3}{3}\right)dx + \sigma_1\,. \tag{6.257}$$
And once z is known, y can be determined by using (6.243). Much like (6.254) one
can write y as
$$y = \frac{\exp\left(-\frac{x^3}{3}\right)}{\int \exp\left(-\frac{x^3}{3}\right)dx + \sigma_2}\,. \tag{6.258}$$
$$k^2 + k + 1 = 0\,, \tag{6.260}$$
which leads to
$$k = k_1 = r + i\,m = \frac{-1 + i\sqrt{3}}{2}\,, \qquad k = k_2 = r - i\,m = \frac{-1 - i\sqrt{3}}{2}\,. \tag{6.261}$$
Therefore, using (6.259)–(6.261), the result for z, according to (3.21) and (3.22), can
be written as
Or, alternatively, as
$$z = \sigma_0 \exp\left(-\frac{x}{2}\right)\cos\left(\sigma_5 - \frac{\sqrt{3}}{2}\,x\right). \tag{6.263}$$
Note, one went from (6.262) to (6.263) by introducing two new arbitrary constants σ0 and σ5 such that σ3 = σ0 sin(σ5) and σ4 = σ0 cos(σ5). This is perfectly all right as long as σ0² = σ3² + σ4². And because all the sigmas are arbitrary constants, this equality is trivially achieved.
Given z—see, (6.263)—the solution y is readily found.
$$y = \frac{1}{z}\frac{dz}{dx} = -\frac{1}{2} + \frac{\sqrt{3}}{2}\tan\left(\sigma_5 - \frac{\sqrt{3}}{2}\,x\right). \tag{6.264}$$
$$k^2 + 2\,k + 2 = 0\,, \tag{6.265}$$
which leads to
$$k = k_1 = r + i\,m = -1 + i\,, \qquad k = k_2 = r - i\,m = -1 - i\,. \tag{6.266}$$
Therefore,
$$y = \frac{1}{z}\frac{dz}{dx} = -1 + \tan(\sigma_5 - x)\,. \tag{6.268}$$
$$(1):\quad \frac{dy}{dx} + y^2 + 2\,y + 1 = 0\,.$$
$$(2):\quad \frac{dy}{dx} + y^2 + y + 2 = 0\,.$$
could be transformed into linear ordinary differential equations with constant coeffi-
cients. The latter are much easier to solve as was demonstrated in detail in Chaps. 3
and 4.
The Euler procedure consists in arranging an appropriate change of the independent variable that transforms the operator D into a new operator θ. One proceeds as follows. Set
$$x = \exp(t)\,; \qquad 1 = \exp(t)\,\frac{dt}{dx}\,. \tag{6.270}$$
$$1\cdot\frac{dU(x,t)}{dt} = \exp(t)\,\frac{dt}{dx}\cdot\frac{dU(x,t)}{dt} = \exp(t)\,\frac{dU(x,t)}{dx} = x\,\frac{dU(x,t)}{dx}\,. \tag{6.271}$$
$$x\,D = \theta\,. \tag{6.272}$$
$$\theta^2 = x\,D\,(x\,D) = x\,(D\,x)\,D + x^2 D^2 = x\,D + x^2 D^2 = \theta + x^2 D^2\,. \tag{6.273}$$
Therefore
$$x^2 D^2 \equiv \theta\,(\theta - 1)\,. \tag{6.274}$$
Similarly
$$x\,D\,\big(x^2 D^2\big) = \theta\,[\theta\,(\theta-1)]\,; \tag{6.275}$$
expanding the left-hand side directly gives
$$x\,D\,\big(x^2 D^2\big) = 2\,x^2 D^2 + x^3 D^3 = 2\,\theta\,(\theta-1) + x^3 D^3\,. \tag{6.276}$$
The left-hand sides of (6.275) and (6.276) are equal. The equality of their right-hand sides yields
6.12 Additional Euler Equations
$$2\,\theta\,(\theta-1) + x^3 D^3 = \theta\,[\theta\,(\theta-1)]\,, \tag{6.277}$$
or
$$x^3 D^3 = \theta\,(\theta-1)\,(\theta-2)\,. \tag{6.278}$$
In order to proceed to the next order, multiply $x^3 D^3$ from the left by $x\,D$ and use (6.278) to appropriately replace one of the expressions equal to $x^3 D^3$:
$$x\,D\,\big(x^3 D^3\big) = 3\,x^3 D^3 + x^4 D^4 = 3\,\theta\,(\theta-1)\,(\theta-2) + x^4 D^4\,. \tag{6.279}$$
Multiply the left-hand side of (6.278) by $x\,D$ and the right-hand side by the same amount, that is $\theta$. One gets
$$x\,D\,\big(x^3 D^3\big) = \theta\,[\theta\,(\theta-1)\,(\theta-2)]\,. \tag{6.280}$$
Now that the left-hand sides of (6.279) and (6.280) are the same, one can claim equality of their right-hand sides. That is
$$x^4 D^4 = \theta\,(\theta-1)\,(\theta-2)\,(\theta-3)\,. \tag{6.281}$$
Exercise
Clearly, (6.272), (6.274), (6.278), and (6.281) show a pattern. Therefore, by induction, one assumes
$$x^n D^n = \theta\,(\theta-1)\,(\theta-2)\cdots(\theta-n+1)\,. \tag{6.282}$$
$$E_{ch} = k^2 + 1 = 0\,; \qquad k_{1,2} = \pm\,i\,;$$
$$S_{comp}(t) = \sigma_0\,\exp(i\,t) + \sigma_1\,\exp(-i\,t) = \sigma_2\,\sin(t) + \sigma_3\,\cos(t)\,;$$
$$I_{pi}(t) = \frac{1}{\theta^2+1}\,\{a\,\exp(2\,t) + b\} = a\,\exp(2\,t)\,\frac{1}{2^2+1} + \frac{b}{1}\,;$$
$$U(t) = S_{comp}(t) + I_{pi}(t) = \sigma_2\,\sin(t) + \sigma_3\,\cos(t) + \frac{a}{5}\,\exp(2\,t) + b\,.$$
Finally, transform U(t) back into u(x). In this fashion one gets:
$$U(t) \to u(x) = \sigma_2\,\sin[\log(x)] + \sigma_3\,\cos[\log(x)] + \frac{a}{5}\,x^2 + b\,. \tag{6.285}$$
$$IX\text{-}(B):\quad \big[x^2 D^2 - 2\,x\,D + 4\big]\,u = a\,\log(x)\,. \tag{6.286}$$
As above, use (6.270), (6.272), and (6.274) and transform (6.286) into the following differential equation:
$$\big[\theta^2 - 3\,\theta + 4\big]\,U(t) = a\,t\,. \tag{6.287}$$
$$\cdots + \frac{a}{4}\,\log(x) + \frac{3\,a}{16}\,. \tag{6.288}$$
Again use (6.270), (6.272), and (6.274) and transform (6.289) into
$$\big[\theta^2 + \theta - 4\big]\,U(t) = a\,\exp(2\,t)\,t + b\,, \tag{6.290}$$
$$\cdots + a\,x^2\left[\frac{1}{2}\,\log(x) - \frac{5}{4}\right]. \tag{6.291}$$
Solution Example IX-(D)
$$IX\text{-}(D):\quad \big[x^2 D^2 - 4\,x\,D + 2\big]\,u = 2\,x\,\cos[\log(x)]\,. \tag{6.292}$$
And finally, as before, use (6.270), (6.272), and (6.274) and transform (6.292) into
$$\big[\theta^2 - 5\,\theta + 2\big]\,U(t) = 2\,\exp(t)\,\cos(t)\,. \tag{6.293}$$
$$E_{ch} = k^2 - 5\,k + 2\,; \qquad k_{1,2} = \frac{5}{2} \pm \frac{\sqrt{17}}{2}\,;$$
$$S_{comp}(t) = \sigma_1\,\exp\!\left[\left(\frac{5}{2} + \frac{\sqrt{17}}{2}\right)t\right] + \sigma_2\,\exp\!\left[\left(\frac{5}{2} - \frac{\sqrt{17}}{2}\right)t\right];$$
$$I_{pi}(t) = \frac{1}{\theta^2 - 5\,\theta + 2}\,\{2\,\exp(t)\cos(t)\} = \frac{1}{\theta^2 - 5\,\theta + 2}\,\exp[t\,(1+i)] + \frac{1}{\theta^2 - 5\,\theta + 2}\,\exp[t\,(1-i)]$$
$$= \frac{1}{-3-3\,i}\,\exp[t\,(1+i)] + \frac{1}{-3+3\,i}\,\exp[t\,(1-i)] = -\frac{\exp(t)}{6}\,\big[(1-i)\exp(i\,t) + (1+i)\exp(-i\,t)\big]$$
$$= -\frac{1}{3}\,\exp(t)\,[\sin(t) + \cos(t)]\,.$$
Any such (linODE) with variable coefficients of the form cn (a x + b)n can readily
be transformed into a (linODECC)—meaning into equations of the type that we have
been studying previously. The relevant transformation is carried out by a change of
the variable: from (a x + b) to exp(a t) where both a and b are constants. That is
Differentiate (6.296) with respect to x, and divide both sides by the constant a:
$$1 = \exp(a\,t)\,\frac{dt}{dx} = (a\,x + b)\,\frac{dt}{dx}\,. \tag{6.297}$$
$$\frac{dU(x,t)}{dt} = (a\,x+b)\,\frac{dt}{dx}\cdot\frac{dU(x,t)}{dt} = (a\,x+b)\,\frac{dU(x,t)}{dx}\,. \tag{6.298}$$
$$\theta = (a\,x+b)\,D\,. \tag{6.299}$$
$$\theta^2 = (a\,x+b)\,D\,\{(a\,x+b)\,D\} = (a\,x+b)\,\big[a\,D + (a\,x+b)\,D^2\big] = a\,\theta + (a\,x+b)^2 D^2\,. \tag{6.300}$$
$$(a\,x+b)^2 D^2 = \theta\,(\theta-a)\,. \tag{6.301}$$
Similarly
$$(a\,x+b)\,D\,\big[(a\,x+b)^2 D^2\big] = 2\,a\,(a\,x+b)^2 D^2 + (a\,x+b)^3 D^3\,,$$
which leads to
$$(a\,x+b)^3 D^3 = \theta\,(\theta-a)\,(\theta-2\,a)\,. \tag{6.304}$$
Exercise
Clearly, (6.299), (6.301), (6.304), and (6.305) show a pattern. Therefore, by induction,
$$(a\,x+b)^n D^n = \theta\,(\theta-a)\,(\theta-2\,a)\cdots\big(\theta-(n-1)\,a\big)\,. \tag{6.306}$$
Transform each of the following three differential equations by setting the variable
(a x + b) = exp(a t), and then, find their solution U (t). Also write the correspond-
ing solution u(x).
$$X(A):\quad \big[(2x+3)^2 D^2 + 2\,(2x+3)\,D - 4\big]\,u(x) = 5\,\log[2x+3]\,. \tag{6.307}$$
$$E_{ch} = k^2 - 4 = 0\,; \qquad k_{1,2} = \pm\,2\,;$$
$$S_{comp}(t) = \sigma_0\,\exp(2\,t) + \sigma_1\,\exp(-2\,t)\,;$$
$$I_{pi}(t) = \frac{1}{\theta^2 - 4}\,\{10\,t\} = -\frac{1}{4}\left(1 + \frac{\theta^2}{4}\right)\{10\,t\} = -\frac{10}{4}\,t\,;$$
$$U(t) = S_{comp}(t) + I_{pi}(t) = \sigma_0\,\exp(2\,t) + \sigma_1\,\exp(-2\,t) - \frac{10}{4}\,t\,;$$
$$U(t) \to u(x) = \sigma_0\,(2x+3) + \frac{\sigma_1}{(2x+3)} - \frac{5}{4}\,\log(2x+3)\,. \tag{6.309}$$
$$X(B):\quad \big[(x+2)^2 D^2 + 3\,(x+2)\,D + 1\big]\,u(x) = (x+2)^2 \log(x+2)\,. \tag{6.310}$$
$$E_{ch} = k^2 + 2\,k + 1 = 0\,; \qquad k_{1,2} = -1\,;$$
$$S_{comp}(t) = (\sigma_0 + \sigma_1\,t)\,\exp(-t)\,.$$
$$I_{pi}(t) = \frac{1}{\theta^2 + 2\,\theta + 1}\,\{\exp(2\,t)\,t\} = \exp(2\,t)\,\frac{1}{(\theta+2)^2 + 2\,(\theta+2) + 1}\,\{t\} = \exp(2\,t)\,\frac{1}{9 + 6\,\theta + \theta^2}\,\{t\}$$
$$= \frac{\exp(2\,t)}{9}\left(1 - \frac{6}{9}\,\theta\right)\{t\} = \frac{\exp(2\,t)}{9}\left(t - \frac{6}{9}\right);$$
$$U(t) = S_{comp}(t) + I_{pi}(t) = (\sigma_0 + \sigma_1\,t)\,\exp(-t) + \frac{\exp(2\,t)}{9}\left(t - \frac{6}{9}\right).$$
$$U(t) \to u(x) = \frac{\sigma_0 + \sigma_1\,\log(x+2)}{(x+2)} + \frac{(x+2)^2}{9}\left[\log(x+2) - \frac{6}{9}\right]. \tag{6.312}$$
$$X(C):\quad \big[(x-1)^2 D^2 - 1\big]\,u(x) = (x-1)\,\cos[\log(x-1)]\,\log(x-1)\,. \tag{6.313}$$
$$\cdots + \frac{1}{5}\,(x-1)\,\{2 + \log(x-1)\}\,\sin[\log(x-1)] + \frac{1}{5}\,(x-1)\,\{1 - 2\,\log(x-1)\}\,\cos[\log(x-1)]\,. \tag{6.315}$$
Solve the following five problems by transforming each of the differential equations
by setting the variable (a x + b) = exp(a t). Then, work out their solution U (t).
Also write the corresponding solution u(x).
$$\big[(x+3)^2 D^2 + 2\,(x+3)\,D\big]\,u = -\log(x+3)\,(x+3)\,. \qquad (1)$$
$$\big[(x+3)^2 D^2 + 2\,(x+3)\,D + 3\big]\,u = -\log(x+3)\,(x+3)\,. \qquad (2)$$
$$\big[(2x-1)^2 D^2 - 2\,(2x-1)\,D\big]\,u = -2\,\log(2x-1)\,\sin[\log(2x-1)]\,(2x-1)\,. \qquad (3)$$
$$\big[(2x-1)^2 D^2 - 2\,(2x-1)\,D + 4\big]\,u = -2\,\log(2x-1)\,\sin[\log(2x-1)]\,(2x-1)\,. \qquad (4)$$
$$\big[(3x+2)^2 D^2 - 3\,(3x+2)\,D + \tfrac{1}{4}\big]\,u = \sin[\log(3x+2)]\,(3x+2)\,. \qquad (5)$$
$$y_1 = [D + M_2]\,y(x)\,, \tag{6.316}$$
and
$$[x^n D + M_1]\,y_1 = N\,, \tag{6.317}$$
where for brevity M1 (x) has been represented as M1 , M2 (x) as M2 , N (x) as N , y1 (x)
as y1 . With the help of (6.316), which allows replacing y1 by [D + M2 ]y(x), (6.317)
can be represented as
The operator multiplication of the term on the left-hand side of (6.318) requires
some care. For instance, (6.318) leads to the following second-order (linODE) with
variable coefficients.
$$\big\{x^n D^2 + [M_1 + x^n M_2]\,D + [x^n (D M_2) + M_1 M_2]\big\}\,y(x) = N\,. \tag{6.319}$$
While one can readily find what n and N are, determining M1 and M2 is less straightforward. Here, one knows only the sums [M1 + xⁿM2] and [xⁿ(DM2) + M1M2]. But, alas, that alone does not yield a readily accessible solution for M1 and M2.
Check whether the following second-order equations are factorable. And if so, solve
them accordingly.
$$XI(1):\quad \big[D^2 + (x^{-1} + 1)\,D + x^{-1}\big]\,y(x) = x\,.$$
$$XI(2):\quad \big[D^2 + (x^{-1} + 3)\,D + 2\,(x^{-1} + 1)\big]\,y = x^{-1}\,.$$
$$XI(3):\quad \big[x\,D^2 - (2\,x + 1)\,D + 2\big]\,y = x^3 \exp(x)\,.$$
$$XI(4):\quad \big[D^2 + x^{-1} D - x^{-2}\big]\,y = \exp(x)\,.$$
$$XI(5):\quad \big[D^2 + 2\,x\,D + 2\big]\,y = \log(x)\,.$$
$$XI(6):\quad \big[D^2 + \exp(x)\,D + \exp(x)\big]\,y = 2\,x\,\exp(x)\,. \tag{6.320}$$
$$M_1 + x^n M_2 = x^{-1} + 1\,; \tag{6.321}$$
$$x^n (D M_2) + M_1 M_2 = x^{-1}\,. \tag{6.322}$$
$$M_1 + M_2 = x^{-1} + 1\,. \tag{6.323}$$
$$(D M_2) + M_1 M_2 = x^{-1}\,. \tag{6.324}$$
and solving it would involve a second-order differential equation. Thus, one would have made no headway at all! Unless, of course, one were clever and could guess M1 or M2 by inspection.
Another two possibilities for success are if
(A): M1 is vanishing
(B): or, M2 is a constant.
In order to solve (6.323) and (6.324), try the first possibility: meaning M1 = 0. If
this is true, then according to (6.323) and (6.324), M2 = (x −1 + 1) and D M2 = x −1 .
6.13 Additional Factorable Equations
$$(x^{-1} + 1)\,c = c^2 + x^{-1}\,. \tag{6.326}$$
This is a quadratic with two solutions c = 1 and c = x −1 . Clearly, the only acceptable
solution, where M2 = c is a constant, is c = 1. Knowing M2 = 1, (6.323), or (6.324),
gives M1 = x −1 .
$$(a):\quad [D + x^{-1}]\,y_1 = x\,; \qquad (b):\quad [D + 1]\,y(x) = y_1\,. \tag{6.327}$$
Because both these equations are first-order (linODE), the procedures used in (4.2)–
(4.18) apply and lead readily to the relevant solution.
$$(a):\quad y_1 = \frac{\sigma_1}{x} + \frac{x^2}{3}\,;$$
$$(b):\quad y(x) = \sigma_1 \exp(-x)\int \frac{\exp(x)}{x}\,dx + \sigma_2 \exp(-x) + \frac{1}{3}\,\big[x^2 - 2\,x + 2\big]\,. \tag{6.328}$$
Here, n = 0, N = x −1 and
$$(a):\quad [D + (1 + x^{-1})]\,y_1 = x^{-1}\,; \qquad (b):\quad (D + 2)\,y(x) = y_1\,. \tag{6.330}$$
$$(a):\quad y_1 = \sigma_1\,\frac{\exp(-x)}{x} + \frac{1}{x}\,;$$
$$(b):\quad y(x) = \sigma_1 \exp(-2\,x)\int \frac{\exp(x)}{x}\,dx + \sigma_2 \exp(-2\,x) + \exp(-2\,x)\int \frac{\exp(2\,x)}{x}\,dx\,. \tag{6.331}$$
Its two solutions are : c = −x −1 and c = −2. Clearly, the solution which is relevant
is when c is a constant. That is, c = −2. Knowing M2 = c = −2, one readily finds
M1 = −1. Knowing n, M1 , M2 , and the fact that N = x 3 exp(x), one can construct
the pair of first-order (linODE) that need solving. These are:
$$(a):\quad y_1 = \sigma_1\,x + (x^2 - x)\exp(x)\,;$$
$$(b):\quad y(x) = \sigma_2 \exp(2\,x) - \frac{\sigma_1}{4}\,(2\,x + 1) - \big[x^2 + x + 1\big]\exp(x)\,. \tag{6.335}$$
Their solution is
$$(a):\quad y_1 = \sigma_1 + \exp(x)\,;$$
$$(b):\quad y(x) = \sigma_1\,\frac{x}{2} + \frac{\sigma_2}{x} + \exp(x) - \frac{1}{x}\,\exp(x)\,. \tag{6.338}$$
Their solution is
$$(a):\quad y_1 = \sigma_1 - x + x\,\log(x)\,;$$
$$(b):\quad y(x) = \sigma_1 \exp(-x^2)\int \exp(x^2)\,dx + \sigma_2 \exp(-x^2) + \frac{1}{2}\,[\log(x) - 1] - \frac{1}{4}\,\exp(-x^2)\int \frac{\exp(x^2)}{x}\,dx\,. \tag{6.341}$$
Their solution is
$$(a):\quad y_1 = \sigma_1 + 2\,\exp(x)\,(x-1)\,;$$
$$(b):\quad y(x) = (\sigma_1 - 2)\,\exp\{-\exp(x)\}\int \exp\{\exp(x)\}\,dx + \sigma_2\,\exp\{-\exp(x)\} + 2\,x - 2\,. \tag{6.344}$$
Chapter 7
Special Situations
Unlike equations of the first order and first degree, linear (ODE)'s that are of second and higher order and have variable coefficients are often difficult to solve. Fortunately, there are equations of special types that are easier to handle. Some of these were treated in the preceding chapter. Different from special types, but somewhat in the same vein, are equations that represent 'Special Situations'. With such situations in place, a satisfactory solution is often possible. For instance, a given differential equation may be directly integrable. Similarly, equations that have both the independent and the dependent variables missing in any explicit form, and those that explicitly contain only the independent variable, or only the dependent variable, are easily handled.
An interesting new situation called order reduction comes into play if one of the n non-trivial solutions of an nth order homogeneous linear (ODE) is already known. Then, the given equation can be reduced to one of (n − 1)th order.
[See (4.19)–(4.77)]. These equations, it turns out, can be transformed into standard linear (ODE)'s of first order and first degree.
The Second Order
Unlike equations of the first order and first degree, as a rule linear (ODE)'s that are of second and higher order and have variable coefficients are difficult to solve in terms of known elementary functions. Fortunately, there are exceptions to this gloomy rule. But, much like the Bernoulli equations, mostly the exceptions are equations of special types. Some examples of these are the special equations that were described in the preceding chapter. Included there were the Clairaut equations [compare (6.2)–(6.13)], the Lagrange equation [compare (6.19)–(6.31)], the separable equations [compare (6.32)–(6.35)], and the $\frac{dy}{dx} = \frac{y}{x}$-type equations [compare (6.36)–(6.73)]. In addition, there are the so-called exact [compare (6.74)–(6.91)] and inexact equations [compare (6.92)–(6.241)], Riccati equations [compare (6.242)–(6.268)], Euler equations [compare (6.269)–(6.315)], and the factorable equations [compare (6.316)–(6.344)].
Described below are a few equations that represent special situations. Such situa-
tions are slightly different from the special types treated in the preceding chapter. And
with these situations in place, satisfactory solution of relevant differential equations
is often possible.
Simple Cases—see (7.1)–(7.27)—are studied first. These cases are either readily
integrated, or they have the independent and/or the dependent variables missing in
explicit form.
In (7.29)–(7.47), one works with an interesting new situation that is referred to as order reduction. Generally, it is not very likely that one of the non-trivial solutions to a given differential equation is known. However, if that should happen, the given equation can be reduced to one which is of order one lower than the original. This is particularly helpful when dealing with a first-degree, second-order homogeneous linear (ODE), because with prior knowledge of one non-trivial solution, the equation evolves into one of the first order. And such first-order differential equations are often more manageable than the original second-order equations.
Should a differential equation with variable coefficients also be inhomogeneous, finding its complete solution would require knowledge of both its complementary solution and the particular integral. While the method of undetermined coefficients—see (3.60)–(3.91)—worked well for determining the I_pi for equations with constant coefficients, differential equations with variable coefficients, in contrast, are best treated by a procedure termed variation of parameters. This latter procedure is described and extensively illustrated by relevant applications—see (7.48)–(7.62).
Finally, two different varieties of second-order differential equations are con-
sidered where a known special situation obtains. For known special situations, the
second-order differential equation can often be solved by following particular rou-
tines which are special to that case. These routines are described in detail in (7.63)–
(7.124).
7.2 Simple Cases
Equation (1)
Consider
$$D^2 y = f(x)\,, \tag{7.1}$$
Here, as usual, $D = \frac{d}{dx}$. Clearly, differential equations of the form (7.1) can directly be integrated. First and second integration yields
$$\int [D^2 y]\,dx = D\,y = \int f(x)\,dx + \sigma_1\,,$$
$$\int [D\,y]\,dx = y = \int dx \int f(x)\,dx + \sigma_1\,x + \sigma_2\,. \tag{7.3}$$
$$f(x) = x_0 - x\,. \tag{7.4}$$
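For the choice f(x) = x₀ − x of (7.4), the two integrations in (7.3) can be carried out symbolically; a minimal sketch, assuming sympy is available:

```python
import sympy as sp

x, x0, s1, s2 = sp.symbols('x x0 sigma1 sigma2')
f = x0 - x                       # (7.4)
Dy = sp.integrate(f, x) + s1     # first integration of (7.3)
y = sp.integrate(Dy, x) + s2     # second integration: x0·x²/2 - x³/6 + σ1·x + σ2
# differentiating twice must recover f(x)
assert sp.simplify(sp.diff(y, x, 2) - f) == 0
print(sp.expand(y))
```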
$$\frac{dq(x)}{dx} = f[q(x)]$$
and integrate:
$$\int \frac{dq(x)}{f[q(x)]} = x + \text{constant}\,. \tag{7.7}$$
The left-hand side of (7.7) is some function F[q(x)] that in principle determines q(x). Once $q(x) = \frac{dy(x)}{dx}$ is known, its integration would lead to the desired solution y(x).
Equation (3)
$$\big[D^2 y\big]^2 = 1 + (D\,y)^2\,. \tag{7.8}$$
A more suitable form, convenient for calculation, is the square root of (7.8):
$$(3):\quad D^2 y(x) = D\,q(x) = \frac{dq(x)}{dx} = \pm\sqrt{1 + [q(x)]^2}\,. \tag{7.9}$$
Or, equivalently,
$$\frac{dq(x)}{\sqrt{1 + [q(x)]^2}} = \pm\,dx\,. \tag{7.10}$$
Integration leads to
$$q(x) = \frac{dy(x)}{dx} = \sinh\,[x + \sigma_1]\,. \tag{7.12}$$
And final integration with respect to x leads to
Once solved, q(x) can be integrated with respect to x to find y(x). Below, we solve the following first-degree linear (ODE)'s:
$$(a):\quad D^2 y + (x + x^2)\,D\,y = 0\,; \qquad (b):\quad D^2 y + \exp(x)\,D\,y = 0\,. \tag{7.15}$$
In the following, we define M(x), N (x), and W (x) and work out the solution.
7.2.3 Solution
Clearly, this procedure would work just as well for other choices of the coefficient function. To that end, it is helpful to set z = −exp(x). Then, represent $\int \exp\{-\exp(x)\}\,dx$ as $\int \frac{\exp(z)}{z}\,dz = \sigma_3\,\mathrm{Ei}[z] = \sigma_3\,\mathrm{Ei}[-\exp(x)]$.
$$(c):\quad D^2 y(x) + \frac{1}{y(x)}\,D\,y(x) = 0\,. \qquad (d):\quad D^2 y(x) + 2\,y(x)\cdot D\,y(x) = 0\,. \tag{7.18}$$
Solution (c)
Because
$$D^2 y(x) = D\,q(x) = \frac{dq(x)}{dx} = \frac{dq(x)}{dy(x)}\cdot\frac{dy(x)}{dx} = \frac{dq(x)}{dy(x)}\cdot q(x)\,, \tag{7.19}$$
one has
$$(c):\quad \frac{dq(x)}{dy(x)}\cdot q(x) + \frac{1}{y(x)}\,D\,y(x) = \frac{dq(x)}{dy(x)}\cdot q(x) + \frac{1}{y(x)}\,q(x) = 0\,. \tag{7.20}$$
Therefore,
$$(c):\quad \frac{dq(x)}{dy(x)} = -\frac{1}{y(x)}\,. \tag{7.21}$$
We get
$$q(x) = -\log[y(x)] - \sigma_0\,.$$
The result is
$$(c):\quad x + \sigma_1 = -\exp(-\sigma_0)\,\mathrm{Ei}\big[\log y(x) + \sigma_0\big]\,. \tag{7.22}$$
Solution (d)
$$\frac{dq(x)}{dy(x)} = -2\,y(x)\,, \tag{7.24}$$
whose integration gives
$$q(x) = \frac{dy(x)}{dx} = (\sigma_0)^2 - y(x)^2\,, \tag{7.25}$$
where (σ₀)² is a constant. Invert (7.25) and integrate with respect to y(x):
$$\int \frac{dx}{dy(x)}\,dy(x) = \int dx = \int \frac{dy(x)}{(\sigma_0)^2 - y(x)^2}\,,$$
$$x = \frac{1}{\sigma_0}\,\tanh^{-1}\!\left[\frac{y(x)}{\sigma_0}\right] - \sigma_1\,. \tag{7.26}$$
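Inverting (7.26) gives y(x) = σ₀ tanh[σ₀(x + σ₁)], which can be pushed back through equation (d) of (7.18); a sympy sketch:

```python
import sympy as sp

x, s0, s1 = sp.symbols('x sigma0 sigma1', positive=True)
y = s0*sp.tanh(s0*(x + s1))                       # inverse of (7.26)
residual = sp.diff(y, x, 2) + 2*y*sp.diff(y, x)   # equation (d): D²y + 2y·Dy
assert sp.simplify(residual) == 0
print("(7.26) solves D^2 y + 2 y Dy = 0")
```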
$$(1):\quad D^2 y + y\,D\,y = 0\,; \qquad (2):\quad D^2 y + \exp(y)\,D\,y = 0\,. \tag{7.28}$$
Occasionally, the unlikely is true and one of the ν non-zero solutions to a νth order
homogeneous linear (ODE) is already known. When that happens, the given equation
can be reduced to an equation of the (ν − 1)th order.
Consider, for example, a νth order homogeneous linear (ODE)
$$\sum_{i=0}^{\nu} a_i(x)\,D^i\,u(x) = 0\,, \tag{7.29}$$
where {a_i(x)}, i = 0, 1, . . . , ν, are all known functions of x. The solution u(x) of (7.29) is of course unknown. Suppose g(x) is a known solution of (7.29). Consider a function φ(x) that is a product of g(x) and a new function f(x).
Assume f (x) is so chosen that φ(x) is also a solution of (7.29). This require-
ment results in D f (x) satisfying homogeneous linear (ODE) of the (ν − 1)th order.
Because the size of the relevant algebra decreases rapidly with decrease in order, for
simplicity one works with a low value of ν. Accordingly, in the following, only a
second-order linear (ODE) is treated.
$$\big[a_0(x) + a_1(x)\,D + a_2(x)\,D^2\big]\,u(x) = 0\,. \tag{7.31}$$
The two solutions to this equation are notated u 1 (x) and u 2 (x). Regarding these
solutions:
(a): Assume u₁(x) is a known (or correctly guessed) non-trivial solution:
$$\big[a_0(x) + a_1(x)\,D + a_2(x)\,D^2\big]\,u_1(x) = 0\,. \tag{7.32}$$
(b): The other linearly independent solution, u₂(x), to the same differential equation, that is,
$$\big[a_0(x) + a_1(x)\,D + a_2(x)\,D^2\big]\,u_2(x) = 0\,, \tag{7.33}$$
is currently unknown.
7.3 Order Reduction
It was asserted above that functions u 1 (x) and u 2 (x)—see (7.32) and
(7.33)—are linearly independent. It is helpful to show that this assertion is correct.
According to (3.47), two functions u₁(x) and u₂(x) = f(x) u₁(x) are linearly independent if and only if their Wronskian is non-vanishing. In other words, functions u₁(x) and u₂(x) are linearly independent only if the following is true:
$$\begin{vmatrix} u_1(x) & u_2(x) \\ D u_1(x) & D u_2(x) \end{vmatrix} = \begin{vmatrix} u_1(x) & f(x)\,u_1(x) \\ D u_1(x) & D[f(x)\,u_1(x)] \end{vmatrix} = [u_1(x)]^2 \cdot D f(x) \neq 0\,. \tag{7.34}$$
As such, the Wronskian is non-zero as long as a1 (x) is non-infinite and a2 (x) is non-
zero. Assuming both these requirements are satisfied, u 1 (x) and u 2 (x) are indeed
linearly independent.
Represent the second solution as a product of the first and a new function f(x). That is,
$$u_2(x) = f(x)\,u_1(x)\,. \tag{7.36}$$
Because u₁(x) is already known, in order to determine u₂(x) all one needs to do is calculate f(x). And to that purpose, proceed as follows:
(A): Differentiate (7.36) twice.
As per (7.32), the first term on the right-hand side of (7.39) is equal to zero. By
introducing a convenient notation,
[a1 (x) u 1 (x) + 2 a2 (x)Du 1 (x)] v(x) + a2 (x) u 1 (x) Dv(x) = 0. (7.41)
Dividing both sides by a2 (x)u 1 (x)v(x) and then multiplying by dx yields a neater
version of (7.41). That is
Integration gives
a1 (x)
dx + 2 log[u 1 (x)] + log[v(x)] = log(σ0 )
a2 (x)
As such
a1 (x)
exp − a2 (x)
dx
v(x) = σ0 . (7.44)
[u 1 (x)]2
Because, according to (7.40), v(x) = df(x)/dx, an integration with respect to x gives
∫ v(x) dx = σ_0 ∫ { exp[−∫ (a_1(x)/a_2(x)) dx] / [u_1(x)]^2 } dx = ∫ (df(x)/dx) dx = f(x) .
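As a quick sanity check—a sympy sketch, not part of the text—the order-reduction recipe (7.44) can be applied to the data of example (VI) below: with a_2 = 1, a_1 = x, a_0 = 1 and the known solution u_1 = exp(−x^2/2), the generated u_2 must satisfy the same equation.

```python
import sympy as sp

x = sp.symbols('x')
a0, a1, a2 = 1, x, 1                        # data of example (VI)
u1 = sp.exp(-x**2/2)                        # known first solution
v = sp.exp(-sp.integrate(a1/a2, x))/u1**2   # (7.44) with sigma_0 = 1
f = sp.integrate(v, x)                      # f(x) = integral of v
u2 = f*u1                                   # second solution u2 = f*u1
residual = sp.simplify(a0*u2 + a1*sp.diff(u2, x) + a2*sp.diff(u2, x, 2))
print(residual)  # -> 0
```

Here sympy expresses ∫ exp(x^2/2) dx through the imaginary error function; the residual nonetheless vanishes identically.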
The seven differential equations and their first solutions, u_1(x), are given below. By
using the procedure 'reduce order from second to first,' work out the second solution u_2(x).
7.4.2 Solution
(VI): u_1(x) = σ_1 exp(−x^2/2) ; a_2(x) = 1 , a_1(x) = x , a_0(x) = 1 ;
u_2(x) = σ_1 exp(−x^2/2) · σ_0 ∫ { exp[−∫ x dx] / [exp(−x^2/2)]^2 } dx
       = σ_2 exp(−x^2/2) · ∫ exp(x^2/2) dx = u_2(x) : (6) .

(VII): u_1(x) = σ_1 exp(x − x^2/2) ; a_2(x) = 1 , a_1(x) = x , a_0(x) = x ;
u_2(x) = σ_1 exp(x − x^2/2) · σ_0 ∫ { exp[−∫ x dx] / [exp(x − x^2/2)]^2 } dx
       = σ_2 exp(x − x^2/2) · ∫ exp(x^2/2 − 2x) dx = u_2(x) : (7) . (7.47)
Given one solution, u_1(x), find the second solution, u_2(x), of the following differential equations. [Hint: Read (7.32)–(7.47).]
(6): (D^2 − x D − 1) u(x) = 0 ; u_1(x) = σ_1 exp(x^2/2) .
(7): (D^2 + x D − x) u(x) = 0 ; u_1(x) = σ_1 exp(−x − x^2/2) .
As noted in (3.59)—also compare (2.16)—the complete solution, S_cs(x), is the sum
of the complementary solution, S_comp(x) ≡ u_1(x) + u_2(x), and a particular integral,
I_pi(x). That is,
S_cs(x) = S_comp(x) + I_pi(x) . (7.48)
As per our previous experience determining I_pi(x) for inhomogeneous linear (ODE)'s
with constant coefficients—see (3.60)–(3.91)—it is reasonable to expect that the current
particular integral, I_pi(x), will also bear a relationship to the solutions of the homogeneous
linear (ODE) (7.49): namely u_1(x) and u_2(x). With this expectation in mind,
introduce a trial solution of the form
where f 1 (x) and f 2 (x) are unknown functions that need to be determined. Clearly,
in order to determine f 1 (x) and f 2 (x) , two independent differential equations that
involve these functions are required.
Another important point to note is that the basic differential equation that specifies
I pi (x)—that is (7.50)—is second order. Therefore, one would need to differentiate
the trial solution (7.51) twice.
7.4.6 Procedure
Differentiate (7.51).
Differentiating once again would obviously introduce D^2 f_1(x) and D^2 f_2(x). But,
unless it cannot be avoided, nobody wants to be stuck with having to deal with
second-order differential equations for the yet-to-be-determined functions
f_1(x) and f_2(x). Therefore, if at all possible, second differentials of f_1(x) and f_2(x)
must be avoided. One must therefore set equal to zero that part of (7.52)
which upon differentiation would introduce the unwanted second differentials. This
implies setting
Rewrite the original differential equation, namely (7.50), by making use of (7.51)
and (7.54). That is, write
as
Recall that each of the two functions u_1(x) and u_2(x) is a solution of the differential
equation (7.49). As such, the top two lines on the right-hand side of (7.56) are
zero, and the third can be written as

D f_1(x) = df_1/dx = − { u_2(x) / [u_1(x) Du_2(x) − u_2(x) Du_1(x)] } {B(x)/a_2(x)} ;
D f_2(x) = df_2/dx = + { u_1(x) / [u_1(x) Du_2(x) − u_2(x) Du_1(x)] } {B(x)/a_2(x)} . (7.58)
Integration leads to the functions f_1(x) and f_2(x). Recall that one started off already
knowing the solutions to the homogeneous part of the differential equation—namely
u_1(x) and u_2(x)—therefore the particular integral, I_pi(x) = f_1(x) u_1(x) + f_2(x) u_2(x),
follows immediately.
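The prescription (7.58) can be tried on a simple illustrative equation—chosen here for brevity, not taken from the text—namely u'' + u = x, whose homogeneous solutions are u_1 = sin x and u_2 = cos x. A sympy sketch:

```python
import sympy as sp

x = sp.symbols('x')
u1, u2, B, a2 = sp.sin(x), sp.cos(x), x, 1
# denominator in (7.58): u1*Du2 - u2*Du1
W = sp.simplify(u1*sp.diff(u2, x) - u2*sp.diff(u1, x))   # -> -1
f1 = sp.integrate(-u2/W * B/a2, x)   # integral of Df1
f2 = sp.integrate( u1/W * B/a2, x)   # integral of Df2
Ipi = sp.simplify(f1*u1 + f2*u2)     # particular integral
print(Ipi)  # -> x
```

The particular integral comes out as x, which indeed satisfies u'' + u = x.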
We determine the I_pi(x) for examples numbered (1)–(4). Given alongside these
equations are the relevant u_1(x) and u_2(x): that is, both the first and the second
solutions to the homogeneous part of each of these differential equations. Note that
these equations have constant coefficients. Therefore, they can be solved by using
‘undetermined coefficients’ that were described in detail in (3.60)–(3.91). Indeed, for
inhomogeneous linear (ODE)’s with constant coefficients, undetermined coefficients
are much easier to use than the ‘variation of parameters’ procedure currently under
discussion. Still, for demonstrational purposes, we use the more laborious variation
of parameters procedure described above.
7.4.8 Solution
Employing (7.58), calculate D f_1(x) and D f_2(x), integrate the results, and determine
f_1(x) and f_2(x).
I_pi(x) − (1):
D f_1(x) = [x/(2σ_1)] exp(x) cos(2x) ,
D f_2(x) = −[x/(2σ_2)] exp(x) sin(2x) ,
f_1(x) = [exp(x)/(50σ_1)] [(10x − 4) sin(2x) + (5x + 3) cos(2x)] + σ_3 ,
f_2(x) = −[exp(x)/(50σ_2)] [(5x + 3) sin(2x) + (4 − 10x) cos(2x)] + σ_4 .
I_pi(x) − (2):
D f_1(x) = [1/(3bσ_1)] exp[(1 − b)x] ,
D f_2(x) = −[1/(3bσ_2)] exp[(1 + 2b)x] ,
f_1(x) = [1/(3bσ_1)] [1/(1 − b)] exp[(1 − b)x] + σ_3 ,
f_2(x) = −[1/(3bσ_2)] [1/(1 + 2b)] exp[(1 + 2b)x] + σ_4 .
I_pi(x) − (3):
D f_1(x) = [1/(3σ_1)] exp(x)/x ,
D f_2(x) = −[1/(3σ_2)] exp(2x)/x ,
f_1(x) = [1/(3σ_1)] Ei(x) + σ_3 ,
f_2(x) = −[1/(3σ_2)] Ei(2x) + σ_4 .
I_pi(x) − (4):
D f_1(x) = −(1/σ_1) x^2 exp(−x) ,
D f_2(x) = (1/σ_2) x exp(−x) ,
f_1(x) = (1/σ_1) (x^2 + 2x + 2) exp(−x) + σ_3 ,
f_2(x) = −(1/σ_2) (x + 1) exp(−x) + σ_4 .
Somewhat more complicated equations than those numbered (1)–(4) are treated
below. Note: given alongside these differential equations are u_1(x) and u_2(x), the
two solutions to the homogeneous part of each of these differential equations.
(5): (x^2 D^2 − x D − 3) u(x) = 1/x^2 ; u_1(x) = σ_1 x^3 ; u_2(x) = σ_2/x .
(6): (x^2 D^2 + x D + 1) u(x) = log(x) ; u_1(x) = σ_1 sin[log(x)] ; u_2(x) = σ_2 cos[log(x)] .
(7): (x^2 D^2 + x D + 4) u(x) = x^2 log(x) ; u_1(x) = σ_1 sin[2 log(x)] ; u_2(x) = σ_2 cos[2 log(x)] .
(8): [x(x + 1) D^2 − (x + 1) D + 1] u(x) = 1/x ; u_1(x) = σ_1 (x + 1) ;
     u_2(x) = σ_2 [x log(x + 1) + log(x + 1) + 1] .
(9): [(x^2 − 1) D^2 − (x^2 + 2x − 1) D + 2x] u(x) = x ; u_1(x) = σ_1 (x + 1)^2 ;
     u_2(x) = σ_2 exp(x) .
(10): [(x + 2)^2 D^2 + (x + 2) D + 1] u(x) = (x + 2)^2 ;
     u_1(x) = σ_1 sin[log(x + 2)] ; u_2(x) = σ_2 cos[log(x + 2)] . (7.61)
7.4.10 Solution
As before, use (7.58); calculate D f 1 (x) and D f 2 (x); integrate the result; and deter-
mine f 1 (x) and f 2 (x).
I_pi(x) − (5): f_1(x) = −1/(20σ_1 x^5) + σ_3 ; f_2(x) = 1/(4σ_2 x) + σ_4 .
Using the information u_1(x) = σ_1 x^3 and u_2(x) = σ_2/x, one gets
I_pi(x) − (5): I_pi(x) = −1/(20x^2) + 1/(4x^2) = x^{−2}/5 + σ_3 u_1(x) + σ_4 u_2(x) .
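The result can be verified directly—a sympy sketch, not part of the text—by substituting the particular integral x^{−2}/5 into equation (5):

```python
import sympy as sp

x = sp.symbols('x', positive=True)
u = x**-2/5   # the particular integral found for equation (5)
# equation (5): x^2 u'' - x u' - 3 u should equal 1/x^2
residual = sp.simplify(x**2*sp.diff(u, x, 2) - x*sp.diff(u, x) - 3*u - x**-2)
print(residual)  # -> 0
```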
I_pi(x) − (6): f_1(x) = (1/σ_1) {log(x) sin[log(x)] + cos[log(x)]} + σ_3 ;
f_2(x) = −(1/σ_2) {sin[log(x)] − log(x) cos[log(x)]} + σ_4 .
I_pi(x) − (7): f_1(x) = [x^2/(16σ_1)] {[2 log(x) − 1] sin[2 log(x)] + 2 log(x) cos[2 log(x)]} + σ_3 ;
f_2(x) = [x^2/(16σ_2)] {[2 log(x) − 1] cos[2 log(x)] − 2 log(x) sin[2 log(x)]} + σ_4 .
Therefore, using u_1(x) = σ_1 sin[2 log(x)] and u_2(x) = σ_2 cos[2 log(x)], one is led to
I_pi(x) − (7): I_pi(x) = (x^2/8) log(x) − x^2/16 + σ_3 u_1(x) + σ_4 u_2(x) .
Proceeding as in equation I_pi(x) − (6), one has
(8): f_1(x) = (1/σ_1) { [(x^2 + 1)/(2x^2)] log(x + 1) − log(x)/2 + (1 − x)/(2x^2) } + σ_3 ;
f_2(x) = −(1/σ_2) [1/(2x^2)] + σ_4 .
Therefore, using u_1(x) = σ_1 (x + 1) and u_2(x) = σ_2 [x log(x + 1) + log(x + 1) + 1], one gets
(7.61) − (8): I_pi(x) = (1/2) [(x + 1) log(x + 1) − (x + 1) log(x) − 1] + σ_3 u_1(x) + σ_4 u_2(x) .
(9): f_1(x) = 1/[2σ_1 (x^2 − 1)] + σ_3 ;
f_2(x) = −(1/σ_2) exp(−x)/(x − 1) + σ_4 .
Using u_1(x) = σ_1 (x + 1)^2 and u_2(x) = σ_2 exp(x), one gets
(7.61) − (9): I_pi(x) = 1/2 + σ_3 u_1(x) + σ_4 u_2(x) .
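This constant particular integral is easily checked—a sympy sketch, not part of the text—against equation (9) of the list (7.61):

```python
import sympy as sp

x = sp.symbols('x')
u = sp.Rational(1, 2)   # claimed particular integral for (7.61)-(9)
# [(x^2 - 1) D^2 - (x^2 + 2x - 1) D + 2x] u should equal x
lhs = (x**2 - 1)*sp.diff(u, x, 2) - (x**2 + 2*x - 1)*sp.diff(u, x) + 2*x*u
print(lhs)  # -> x
```

Because u is constant, only the 2x u term survives, which is exactly the right-hand side.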
2
Finally, we treat (7.61)-(10).
(10): f_1(x) = (1/σ_1) [(x + 2)/2] {sin[log(x + 2)] + cos[log(x + 2)]} + σ_3 ;
f_2(x) = −(1/σ_2) [(x + 2)/2] {sin[log(x + 2)] − cos[log(x + 2)]} + σ_4 .
Therefore, using u_1(x) = σ_1 sin[log(x + 2)] and u_2(x) = σ_2 cos[log(x + 2)], one gets
(7.61) − (10): I_pi(x) = (x + 2)/2 + σ_3 u_1(x) + σ_4 u_2(x) .
Determine the I_pi(x) for the following inhomogeneous linear ordinary differential
equations with constant coefficients, numbered (1)–(10). Given alongside are u_1(x)
and u_2(x): that is, both the first and the second solutions to the homogeneous part of
each of these differential equations. [Hint: Read (7.48)–(7.58).]
The complete solution is the sum of the complementary solution and the particular
integral. The complementary solution of a second-order differential equation with
variable coefficients can be very hard to find. Generally, it consists of two non-trivial
solutions, and assuming neither of these can conveniently be found, one looks
for special situations. There are some such situations, but they require appropriate
transformations that use either M_0(x) or M_1(x), functions that occur in (7.63).
In an attempt to solve (7.63), try a transformation that utilizes M1 (x). Represent the
solution of (7.63), namely u(x), as
where
H(x) = exp[α ∫ M_1(x) dx] . (7.65)
Inserting (7.64) into (7.63) and doing a little bit of algebra, one can rewrite the latter
as
{α (D M_1) + (M_1)^2 (α^2 + α) + M_0 + M_1 (2α + 1) D + D^2} y(x) = H^{−1}(x) B(x) . (7.66)
By choosing α such that
(2α + 1) = 0 , (7.67)
the term that multiplies D in (7.66) vanishes. And as a result, except for D 2 , the only
remaining term on the left-hand side of (7.66) is
α (D M_1) + (M_1)^2 (α^2 + α) + M_0 = −(1/2) D M_1 − (1/4) M_1^2 + M_0 ≡ G , (7.68)
where C is a constant.
Examine precisely what all this implies. Recall that (7.64) specifies y(x) =
H^{−1} u(x), and with the use of (7.67), (7.65) gives H^{−1} = exp[(1/2) ∫ M_1(x) dx]. Therefore,
if in addition to (7.67) one also uses (7.69)-(α), the differential equation for
y(x), namely (7.66), becomes
[C + D^2] y(x) = H^{−1}(x) B(x) . (7.70)
On the other hand, when one uses (7.69)-(β) the differential equation (7.66) leads to
[C x^{−2} + D^2] y(x) = H^{−1}(x) B(x) . (7.71)
Note that both these equations for y(x)—namely (7.70) and (7.71)—are much
easier to solve than the original differential equation (7.63).
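The quantities G and H are easily computed symbolically. A sympy sketch—not part of the text—using the data of example (7.72)-(1) below (M_0 = 4x^2 − 2, M_1 = −4x, α = −1/2):

```python
import sympy as sp

x = sp.symbols('x')
M0, M1 = 4*x**2 - 2, -4*x
# (7.68): G = -DM1/2 - M1**2/4 + M0, expected to vanish here
G = sp.simplify(-sp.diff(M1, x)/2 - M1**2/4 + M0)
# (7.65) with alpha = -1/2
H = sp.exp(sp.Rational(-1, 2)*sp.integrate(M1, x))
print(G, H)  # -> 0, exp(x**2)
```

With G = C = 0, equation (7.70) reduces to D^2 y = H^{−1} B, which is the easy case.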
(9): [−1/x^2 + (2/x) D + D^2] u = x^{−1} .
(10): [4/x^2 − (4/x) D + D^2] u = x^3 . (7.72)
7.6.2 Solution
(1): M_0 = 4x^2 − 2 ; M_1 = −4x ; G = C = 0 ; H = exp(x^2) ; B = x exp(x^2) .
Therefore,
[C + D^2] y = 0 + D^2 y = H^{−1} B = x . (7.73)
y(x) = σ_1 x + σ_2 + x^3/6 . (7.74)
Therefore, the solution of the differential equation (7.72)-(1) is
(1): u(x) = y H = [σ_1 x + σ_2 + x^3/6] exp(x^2) . (7.75)
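As a check—a sympy sketch, not part of the text—the solution (7.75) can be substituted back into the differential equation behind (7.72)-(1), namely [(4x^2 − 2) − 4x D + D^2] u = x exp(x^2):

```python
import sympy as sp

x, s1, s2 = sp.symbols('x sigma_1 sigma_2')
u = (s1*x + s2 + x**3/6)*sp.exp(x**2)    # (7.75)
lhs = (4*x**2 - 2)*u - 4*x*sp.diff(u, x) + sp.diff(u, x, 2)
residual = sp.simplify(lhs - x*sp.exp(x**2))
print(residual)  # -> 0 for all sigma_1, sigma_2
```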
(2): M_0 = x^2 − 1 ; M_1 = −2x ; G = C = 0 ; H = exp(x^2/2) ;
B = x exp[(x^2 + 2x)/2] ; D^2 y = H^{−1} B = x exp(x) . (7.76)
y = σ_1 x + σ_2 + (x − 2) exp(x) .
(2): u = y H = {σ_1 x + σ_2 + (x − 2) exp(x)} exp(x^2/2) . (7.77)
(3): M_0 = x^2/4 ; M_1 = −x ; G = C = 1/2 ; H = exp(x^2/4) ;
B = (x/8) exp(x^2/4) ; [1/2 + D^2] y = H^{−1} B = x/8 . (7.78)
Differential equation (7.78) is simple, has constant coefficients, and can readily
be solved by using the technique described earlier in (3.5)–(3.73).
y = σ_1 sin(x/√2) + σ_2 cos(x/√2) + x/4 . (7.79)
(4): M_0 = 2/x^2 ; M_1 = −2/x ; G = C = 0 ; H = x ;
B = x exp(x) ; D^2 y = H^{−1} B = exp(x) . (7.81)
y = σ_1 x + σ_2 + exp(x) . (7.82)
The solution is
(5): u = y H
= [σ_1 sin(x√2) + σ_2 cos(x√2) + (1/6) exp(2x)] exp(x^2 − 2x)
= [σ_1 sin(x√2) + σ_2 cos(x√2)] exp(x^2 − 2x) + (1/6) exp(x^2) . (7.85)
(6): M_0 = 2/(x + 2)^2 ; M_1 = −2/(x + 2) ; G = C = 0 ; H = (x + 2) ;
B = x ; D^2 y = H^{−1} B = x/(x + 2) . (7.86)
y = σ_1 x + σ_2 + x^2/2 − 2(x + 2) log(x + 2) . (7.87)
Therefore,
(6): u = y H = σ_3 x + σ_4 x^2 + x^3/2 − 2(x + 2)^2 log(x + 2) . (7.88)
(7): M_0 = −1/(4x^2) ; M_1 = 1/x ; G = C = 0 ; H = x^{−1/2} ;
B = x^{−2} ; D^2 y = H^{−1} B = x^{−3/2} . (7.89)
(8): M_0 = 4 exp(2x) ; M_1 = −{1 + 4 exp(x)} ; G = C = −1/4 ;
H = exp[x/2 + 2 exp(x)] ; B = exp(2x) ;
[−1/4 + D^2] y = H^{−1} B = exp[3x/2 − 2 exp(x)] . (7.92)
y = σ_1 exp(x/2) + σ_2 exp(−x/2) + (1/4) exp[−x/2 − 2 exp(x)] . (7.93)
And the solution is
(8): u = y H = σ_1 exp[x + 2 exp(x)] + σ_2 exp[2 exp(x)] + 1/4 . (7.94)
(9): M_0 = −1/x^2 ; M_1 = 2/x ; G = C x^{−2} = −x^{−2} ;
H = 1/x ; B = 1/x ; [−x^{−2} + D^2] y = H^{−1} B = 1 . (7.95)
(10): M_0 = 4/x^2 ; M_1 = −4/x ; G = C x^{−2} = −2x^{−2} ;
H = x^2 ; B = x^3 ; [−2x^{−2} + D^2] y = H^{−1} B = x . (7.98)
y = σ_1 x^2 + σ_2/x + x^3/4 . (7.99)
Accordingly, the solution of differential equation (7.72)-(10) is
(10): u = y H = [σ_1 x^2 + σ_2/x + x^3/4] x^2 . (7.100)
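Substituting (7.100) back into equation (7.72)-(10)—a sympy sketch, not part of the text—confirms the result:

```python
import sympy as sp

x, s1, s2 = sp.symbols('x sigma_1 sigma_2', positive=True)
u = (s1*x**2 + s2/x + x**3/4)*x**2    # (7.100)
# (7.72)-(10): [4/x^2 - (4/x) D + D^2] u should equal x^3
lhs = (4/x**2)*u - (4/x)*sp.diff(u, x) + sp.diff(u, x, 2)
residual = sp.simplify(lhs - x**3)
print(residual)  # -> 0 for all sigma_1, sigma_2
```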
Use the procedure outlined in (7.64)–(7.71) and determine the differential equation
obeyed by y(x). Solve it and calculate u(x).
(1): [(4x^2 + 3) − 4x D + D^2] u = exp(x^2) .
(2): [x^2 − 2x D + D^2] u = exp(x^2/2) .
(3): [(9/4) x^2 − 3x D + D^2] u = exp[(3/4) x^2] .
(4): [1/x^2 − (2/x) D + D^2] u = x^{−2} .
u(x) ≡ y(x) , and [M_0(x) + M_1(x) D + D^2] y(x) = B(x) . (7.102)
dz/dx = √[(±)M_0] . (7.103)
In order that √[(±)M_0] is real, the sign (±) is chosen so that (±)M_0 is positive.
Consider the following relationships.
Dy = dy/dx = (dy/dz)(dz/dx) ; (7.104)
D^2 y = (d/dx)[Dy] = (d/dx)[(dy/dz)(dz/dx)]
= (d^2 y/dz^2)(dz/dx)^2 + (dy/dz)(d/dx)(dz/dx)
= (d^2 y/dz^2)(dz/dx)^2 + (dy/dz)(d^2 z/dx^2) . (7.105)
(±) y + K(±) dy/dz + d^2 y/dz^2 = B(x)/[(±)M_0] , (7.106)
where
K(±) = { d√[(±)M_0]/dx + M_1 √[(±)M_0] } / [(±)M_0] . (7.107)
Should K (±) turn out to be equal to a constant then (7.106) would be a second-order
linear ordinary differential equation with constant coefficients and as such would be
easy to solve. And, indeed, if this constant should happen to be zero, the solution
would be even easier to obtain.
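That K(±) does come out constant can be confirmed symbolically. A sympy sketch—not part of the text—using the data of (7.108)-(1) below, M_0 = −cos^2 x and M_1 = tan x, taking cos x > 0:

```python
import sympy as sp

x = sp.symbols('x')
M0_plus = sp.cos(x)**2   # (±)M0 with (±) = -1, so that it is positive
M1 = sp.tan(x)
root = sp.cos(x)         # sqrt((±)M0), assuming cos x > 0
# (7.107): K = (d(root)/dx + M1*root) / ((±)M0)
K_pm = sp.simplify((sp.diff(root, x) + M1*root)/M0_plus)
print(K_pm)  # -> 0
```

Since K(±) = 0 here, the transformed equation (7.106) has no first-derivative term at all.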
7.8.1 Solution
Equation (7.108)-(1)
In (7.108)-(1), we have M_0 = −cos^2(x), but instead of M_0 we need (±)M_0, which is
required to be +cos^2(x). Therefore, in (1) one must set (±) = −1. One follows the
same procedure in equations (2)–(4).
Here, (±) ≡ (−) .
Also: dz/dx = √[(±)M_0] = cos(x) ; z = sin(x) ; M_1 = tan(x) ;
d√[(±)M_0]/dx = −sin(x) ; M_1 √[(±)M_0] = sin(x) ; K(±) = 0 ;
B(x) = −sin(x) cos^2(x) . (7.109)
This equation has constant coefficients and is readily solved by using the techniques
described in a previous chapter. Its solution is
y = σ_1 exp(z) + σ_2 exp(−z) + z .
One can change back to the original variable x by substituting sin(x) for z. Therefore,
the solution of (7.108)-(1) is
u(x) = σ_1 exp[sin(x)] + σ_2 exp[−sin(x)] + sin(x) .
Equation (7.108)-(2)
In (7.108)-(2), M0 is positive. So one can safely ignore the use of the symbol (±).
(2): M_0 = 12 exp(4x) ; dz/dx = √M_0 = √12 exp(2x) ; z = √3 exp(2x) ;
M_1 = −2[1 + 4 exp(2x)] ; d√M_0/dx = √48 exp(2x) ;
M_1 √M_0 = −√48 [exp(2x) + 4 exp(4x)] ; K = −4/√3 ;
B(x)/M_0 = (1/3) exp[2 exp(2x)] = (1/3) exp(2z/√3) . (7.113)
y − (4/√3) dy/dz + d^2 y/dz^2 = B(x)/M_0 = (1/3) exp(2z/√3) . (7.114)
This equation has constant coefficients and is readily solved by using the techniques
described in a previous chapter. Its solution is:
(2): y = σ_1 exp(z/√3) + σ_2 exp(√3 z) − exp(2z/√3) . (7.115)
√
One can change back to the original variable x by substituting 3 exp(2x) for z.
Therefore, the solution of (7.108)-(2) is
Equation (7.108)-(3)
(3): Next consider (7.108)-(3). Here too, M0 is positive. So the symbol (±) is not
needed.
(3): M_0 = 2/x^4 ; dz/dx = √M_0 = √2/x^2 ; z = −√2/x ;
M_1 = 2/x ; d√M_0/dx = −2√2/x^3 ;
M_1 √M_0 = 2√2/x^3 ; K = 0 ; x = −√2/z .
B/M_0 = 2(x^{−4} + x^{−6}) / (2x^{−4}) = 1 + x^{−2} = 1 + z^2/2 . (7.117)
y + d^2 y/dz^2 = B/M_0 = 1 + z^2/2 . (7.118)
(3): y = −σ_1 sin(z) + σ_2 cos(z) + z^2/2 . (7.119)
Again, one can change back to the original variable by substituting −√2/x for z.
And the solution of (7.108)-(3) is
(3): y(x) = σ_1 sin(√2/x) + σ_2 cos(√2/x) + 1/x^2 . (7.120)
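Substituting (7.120) back into equation (7.108)-(3)—with M_0, M_1, and B as read off above—confirms it. A sympy sketch, not part of the text:

```python
import sympy as sp

x, s1, s2 = sp.symbols('x sigma_1 sigma_2', positive=True)
y = s1*sp.sin(sp.sqrt(2)/x) + s2*sp.cos(sp.sqrt(2)/x) + 1/x**2   # (7.120)
# (7.108)-(3): [2/x^4 + (2/x) D + D^2] y should equal 2*(x**-4 + x**-6)
lhs = (2/x**4)*y + (2/x)*sp.diff(y, x) + sp.diff(y, x, 2)
residual = sp.simplify(lhs - 2*(x**-4 + x**-6))
print(residual)  # -> 0 for all sigma_1, sigma_2
```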
Equation (7.108)-(4)
(4): In (7.108)-(4), M_0 is equal to −1/x^2. Therefore, as suggested earlier, we choose (±) ≡ (−).
(4): (±)M_0 = 1/x^2 ; dz/dx = √[(±)M_0] = 1/x ; therefore, z = log x ;
Also, M_1 = 1/x ; d√[(±)M_0]/dx = −1/x^2 ;
M_1 √[(±)M_0] = 1/x^2 ; K(±) = 0 ; B = x . (7.121)
(±) y + d^2 y/dz^2 = B/[(±)M_0] ,
−y + d^2 y/dz^2 = x/(1/x^2) = x^3 = exp(3z) . (7.122)
This equation has constant coefficients and is solved by previously described techniques.
Its solution is
y = σ_1 exp(z) + σ_2 exp(−z) + exp(3z)/8 . (7.123)
Now change back to the original variable x by substituting log(x) for z in (7.123).
The result is the desired solution to (7.108)-(4).
(4): y(x) = σ_1 x + σ_2/x + x^3/8 . (7.124)
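A final check—a sympy sketch, not part of the text—substitutes (7.124) into the equation behind (7.108)-(4), [−1/x^2 + (1/x) D + D^2] u = x, with M_0, M_1, and B as read off in (7.121):

```python
import sympy as sp

x, s1, s2 = sp.symbols('x sigma_1 sigma_2', positive=True)
u = s1*x + s2/x + x**3/8    # (7.124)
lhs = -u/x**2 + sp.diff(u, x)/x + sp.diff(u, x, 2)
residual = sp.simplify(lhs - x)
print(residual)  # -> 0 for all sigma_1, sigma_2
```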
Chapter 8
Oscillatory Motion
Oscillatory motion is central to the description of acoustics and the effects of inter-
particle interaction in many physical systems. In its most accessible form, oscillatory
motion is simple harmonic. Such motion—which in this chapter is described first—
has a long and distinguished history of use in modeling physical phenomena.
Described next is anharmonic motion which somewhat more realistically repre-
sents the observed behavior of oscillatory physical systems. To this end, a detailed
analysis of transient state motion is presented for a point mass for two different
oscillatory systems. These are as follows:
(1) The point mass, m, is tied to the right end of a long, massless coil spring placed
horizontally in the x-direction on top of a long, level table. The left end of the
coil is fixed to the left end of the long table. The motion of the mass is slowed by a
frictional force that is proportional to its momentum m v(t). In its completely
relaxed state, the spring is in equilibrium and the mass is in its equilibrium
position (EP).
(2) Because the differential equations needed for analyzing damped oscillating pen-
dula are prototypical of those used in the studies of electromagnetism, acoustics,
mechanics, chemical and biological sciences, and engineering, we analyze next
a pendulum consisting of a (point-sized) bob of mass m that is tied to the end of
a massless stiff rod of length l. The rod hangs down, in the negative z-direction,
from a hook that has been nailed to the ceiling. The pendulum is set to oscillate
in two-dimensional motion in the x − z plane. Air resistance is approximated as
a frictional force proportional to the speed with which the bob is moving. The
ensuing friction slows the oscillatory motion.
A function F(x) is periodic if for all values of x there exists a positive constant,
CONST, such that
F(x + CONST) = F(x) .
The period of the function is the smallest positive value of CONST, say, equal to 2P,
so that
F(x + 2P) = F(x) .
The beauty and simplicity of oscillatory periodic motion have long attracted scientists
interested in modeling observed physical phenomena.
The simplest possible periodic motion is called simple harmonic motion (s.h).
(s.h) is force-free, undamped periodic motion with a constant period of oscillation and
unchanging size of the maximum and the minimum displacements.
Oscillatory Motion of Mass Tied to Spring on a Table
Consider, for instance, a point mass, m, tied to the right end of a long massless coil
spring placed horizontally in the x-direction on top of a long, level, friction-free
smooth table. The left end of the coil is fixed to the left end of the long table. In its
completely relaxed state, the spring is in equilibrium and the mass, tied to the right
end of the coil, is in its equilibrium position. The equilibrium position is henceforth
to be referred to as the (EP).
Pull the mass away from the (EP) by distance +x. The extension of the spring
beyond the (EP) provides a restoring force. In general, the strength of the restoring
force, F(x), is a complicated function of the extension x. If F(x) is analytic within
−X_0 ≤ x ≤ X_0, the Maclaurin–Taylor series expansion obtains:
F(x) = F(0) + Σ_{i=1}^{∞} (1/i!) (d^i F/dx^i)_{x=0} x^i . (8.3)
Because there is no restoring force at zero extension, F(0) must be vanishing. Also,
for small extension |x| < |X_0|—where |X_0| is the maximum extension for which
Hooke's law accurately holds—Hooke's law applies. That is,
F(x) = −m K x . (8.4)
In practice, the restoring force may not be linear. As such, there would be anhar-
monicity related to nonlinear contribution to F(x). Also, many oscillatory processes
encounter damping due to a variety of frictional effects. Yet, the restoring force F(x),
for moderate to small displacement, is often well approximated by Hooke’s law.
F(x) = m d^2 x/dt^2 = m D^2 x . (8.5)
Using (8.4) and (8.5), the resultant second-order differential equation (relevant only
for small |x|) is
m [D^2 + ω_0^2] x = 0 , (8.6)
where
D = d/dt ; D^n = d^n/dt^n ; ω_0^2 = K . (8.7)
(Note: ω_0 is real because K is positive.)
The second-order differential equation (8.6) has constant coefficients. Therefore,
it is readily solved by using the standard rules described in (3.5) to (3.42). The
solution is sinusoidal:
x(t) = σ_0 cos(ω_0 t + φ_0) . (8.8)
The motion repeats after a period τ,
x(t + τ) = x(t) ,
so that the frequency is
ν = 1/τ . (8.11)
The angular frequency of the oscillations is defined to be 2πν:
2πν = 2π/τ = ω_0 = √K . (8.12)
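Numerically—a sketch with a hypothetical value, e.g. K = 4 in suitable units—the relations between ω_0, τ, and ν read:

```python
import math

K = 4.0                  # hypothetical spring constant (omega0**2 = K)
omega0 = math.sqrt(K)    # angular frequency, (8.12)
tau = 2*math.pi/omega0   # period
nu = 1/tau               # frequency, (8.11)
print(omega0, tau, nu)
```

Doubling K multiplies ω_0 by √2 and shortens the period by the same factor.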
The one remaining parameter, namely the phase angle φ_0, can be determined from
one additional piece of information, e.g., either the position or the velocity at some
specific time. For instance, assume the mass is at its maximum displacement—that
is, σ_0—at time t = 0. Then, according to (8.8), x(0) = σ_0 cos(φ_0) = σ_0,
meaning φ_0 = 0. One could also have obtained the same result by requiring that at maximum
displacement, σ_0, the velocity vanishes. That is, at t = 0,
(dx/dt)_{t=0} = −σ_0 ω_0 sin[ω_0 t + φ_0]_{t=0} = −σ_0 ω_0 sin[φ_0] = 0 . (8.14)
Thus, φ0 is indeed zero, and the solution of the differential equation (8.6) is
8.1.2 Energy
The (s.h) oscillatory motion described above is infinitely long-lived. How does the
system energy fare? The total energy is the sum of the kinetic energy,
E_kinetic ≡ (m/2)(dx/dt)^2 = (m/2)[−x_0 ω_0 sin(ω_0 t)]^2 = (1/2) m K (x_0)^2 [sin(ω_0 t)]^2 ,
and the potential energy, which is equal to the work done in extending the spring a
distance x:
E_potential ≡ −∫_0^x F(x) dx = (1/2) m K x^2 = (1/2) m K (x_0)^2 [cos(ω_0 t)]^2 .
That is,
E_total = E_kinetic + E_potential = (1/2) m K (x_0)^2 {[sin(ω_0 t)]^2 + [cos(ω_0 t)]^2}
= (1/2) m K (x_0)^2 . (8.16)
Clearly, the total energy is constant, independent of time. Notice how at maximum
displacement in either direction, when [cos(ω_0 t)]^2 = 1 and [sin(ω_0 t)]^2 = 0, the energy
is all potential. This happens when ω_0 t = n π, n = 0, 1, 2, ... At the midpoint, that
is, at the (EP), when [cos(ω_0 t)]^2 = 0 and [sin(ω_0 t)]^2 = 1, the energy is all kinetic.
This happens whenever ω_0 t = (2n + 1)(π/2). In between these two extremes, the
energy is partially kinetic and partially potential.
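Energy conservation can be confirmed symbolically—a sympy sketch, not part of the text—using the s.h. solution with φ_0 = 0:

```python
import sympy as sp

t, m, K0, x0 = sp.symbols('t m K x_0', positive=True)
w0 = sp.sqrt(K0)                  # omega_0 = sqrt(K), per (8.12)
xt = x0*sp.cos(w0*t)              # s.h. solution with phi_0 = 0
E = m*sp.diff(xt, t)**2/2 + m*K0*xt**2/2   # kinetic + potential
residual = sp.simplify(E - m*K0*x0**2/2)
print(residual)  # -> 0: total energy is the constant m*K*x_0**2/2
```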
Conservation of total energy provides a convenient tool for determining the differ-
ential equation obeyed by the harmonically oscillating mass. Both the velocity, v,
and the position, x, in the expression for total energy are time dependent.
E_total(t) = E_kinetic(t) + E_potential(t) = (m/2) v(t)^2 + (1/2) m K x(t)^2 . (8.17)
Yet, given absence of lossy effects and zero exchange of energy with the outside, the
total energy must remain unchanged. That is
dE_total(t)/dt = 0 . (8.18)
The relationship dv(t)/dt = d^2 x(t)/dt^2 leads to the relevant differential equation:
d^2 x(t)/dt^2 + K x(t) = 0 . (8.19)
Assume the point mass being described is actually quite rough and its motion on the
table is subject to frictional force that is proportional to its momentum m v(t). No
external force is applied, and no other forces affect the motion of the long massless
coil. Then, the Newtonian equation of motion of the mass is
m d^2 x(t)/dt^2 = −Friction − Hooke's Restoring Force
= −(2α) m v(t) − (m K) x(t) , (8.20)
d^2 x(t)/dt^2 + 2α dx(t)/dt + K x(t) ≡ [D^2 + 2α D + K] x(t) = 0 . (8.21)
Solution
As is the case for all homogeneous linear (ODE)'s, the particular integral, I_pi, for (8.21)
is vanishing. Given that (8.21) has constant coefficients, its characteristic equation,
E_ch, can be written simply by substituting k for D:
k2 + 2 α k + K = 0 . (8.22)
When
α^2 > K , (8.25)
the motion is over-damped. The two roots of (8.22) are then real, k = −k_01 and k = −k_02,
with k_01 = α − √(α^2 − K) and k_02 = α + √(α^2 − K), so that, according to (8.24),
x(t) = σ_1 exp(−k_01 t) + σ_2 exp(−k_02 t).
Meaning, for over-damped motion both k_01 and k_02 are necessarily positive, and
(k_02 − k_01) is greater than zero. Therefore, irrespective of whether the constants σ_1
and/or σ_2 are positive or negative in (8.24), the position x(t → ∞) must tend to zero,
which is the (EP). But on the other hand, at time t = 0, we have
x(0) = σ1 + σ2 . (8.27)
Should it happen that x(0) is a positive distance away from (EP), then there is a
possibility that the point mass would start moving leftward, come to a momentary
stop, and reverse direction to head back. But whether it oscillates further, or indeed
stays put on the original side never crossing over to the other, depends—see below—
on the sign and the relative size of the two constants σ1 and σ2 . As usual, two boundary
conditions are needed to fix the two constants σ1 and σ2 .
For over-damped motion, according to (8.26), if both σ1 and σ2 are positive the mass
will stay on the positive side of the (EP) and never cross over to the other side. Also,
with the passage of time the mass will continue monotonically approaching the (EP).
What about its velocity v_t?
v_t = dx/dt = −σ_1 k_01 exp(−k_01 t) − σ_2 k_02 exp(−k_02 t) . (8.28)
As shown in Fig. 8.1, for positive values of the constants σ1 and σ2 , at t = 0 the mass
is moving leftward toward the (EP). Its velocity approaches zero as t → ∞, and in
this process, the velocity undergoes no extrema.
Confirmation of above statement is given below. Rewrite (8.24).
Fig. 8.1 Mass tied to an over-damped spring, with positive values of σ’s, stays on the original side
x(t) = σ_1 exp(−k_01 t) {1 + (σ_2/σ_1) exp[−(k_02 − k_01) t]} . (8.29)
At t = 0, the term in braces equals 1 + σ_2/σ_1. As t → ∞, because (k_02 − k_01) > 0,
it tends to +1. These two numbers have the same sign only if
1 + σ_2/σ_1 > 0 . (8.30)
As long as
σ_2/σ_1 > −1 , (8.31)
equation (8.30) is satisfied, and the mass stays on the original side of the (EP).
But what about its velocity? And additionally, does the velocity vt and possibly
also the location x(t) undergo any extrema?
Extrema in position x(t)—if any—would occur at time t = t_x:extreme such that
(dx/dt)_{t = t_x:extreme} = 0. On the other hand, the velocity extremum—if any—would occur
at t = t_v:extreme when (dv/dt)_{t = t_v:extreme} = (d^2 x/dt^2)_{t = t_v:extreme} = 0. In other words,
t_x:extreme and t_v:extreme would obey the following relationships:
(dx/dt)_{t_x:extreme} = −σ_1 k_01 exp(−k_01 t_x:extreme) − σ_2 k_02 exp(−k_02 t_x:extreme)
= −σ_1 k_01 exp(−k_01 t_x:extreme) {1 + [σ_2 k_02/(σ_1 k_01)] exp[−(k_02 − k_01) t_x:extreme]} = 0 . (8.32)
(d^2 x/dt^2)_{t = t_v:extreme} = σ_1 (k_01)^2 exp(−k_01 t_v:extreme) + σ_2 (k_02)^2 exp(−k_02 t_v:extreme)
= σ_1 (k_01)^2 exp(−k_01 t_v:extreme) {1 + [σ_2 k_02^2/(σ_1 k_01^2)] exp[−(k_02 − k_01) t_v:extreme]} = 0 . (8.33)
Since k_01 and k_02 are > 0, if σ_1 and σ_2 have the same sign, t_x:extreme and t_v:extreme
will not exist, because the logarithm of a negative quantity is a complex number while
the time must necessarily be real. Therefore, under these circumstances, x(t)
and v(t) will not have extrema.
8.1.7 Over-Damped Anharmonic Motion: σ_2/σ_1 Negative But > −1; Mass Stays on Original Side
The mass stays on the original side of the origin if (8.30) is satisfied. And this can
happen—as demonstrated in Fig. 8.2—even when σ_2/σ_1 is negative, as long as it
is > −1. Also, because the ratio −(σ_2 k_02)/(σ_1 k_01) is then positive, and the logarithm of a
positive quantity is real, according to (8.34) extrema can occur both for the position
x and the velocity v.
Fig. 8.2 An over-damped spring with positive σ1 and not too negative σ2 . Mass stays on the original
side. At time t, position xt and (ten times) velocity vt are displayed as function of t. Dots locate
extrema
Next, consider systems where (8.30) is not satisfied, meaning the requirement for
staying on the original side is violated. One class of such systems is over-damped
springs with negative σ_1 and larger positive σ_2. For such systems, as t increases from
zero, the mass moves leftward, stops momentarily, and restarts moving in the positive
direction toward its final position of rest. According to (8.34), extrema in both the position
and the velocity are possible—positions indicated by dots. This behavior is portrayed
in Fig. 8.3.
Unlike the case where friction overpowers Hooke's attraction—for instance,
(8.25)—if the force due to Hooke's attraction should happen to be of similar strength
to friction, the system would be critically damped. More precisely, this is the case
when α^2 = K, so that the characteristic equation,
k^2 + 2α k + α^2 = (k + α)^2 = 0 ,
k = −α , (8.35)
has two roots that are equal. According to well-established procedure, the solution to this
differential equation can then be expressed as—[see (3.37)]—
x_t = (σ_3 + σ_4 t) exp(−α t) . (8.36)
As such,
v_t = dx_t/dt = (σ_4 − α σ_3 − α σ_4 t) exp(−α t) . (8.37)
Note a dot indicates the location of the extremum (Fig. 8.4).
Fig. 8.4 x_t and v_t of a mass being pulled by a critically damped spring, with σ_3 = 3 and α = 2, are
displayed as functions of t. Because α > 0, x_t remains positive; the mass stays on the original positive
side. Dots indicate extrema
8.1.10 An Exercise
Assume, much like Fig. 8.2, that at time t = 0 the spring is extended, the mass is
at position x_0, and it has just begun moving leftward: meaning v_0 = −0. According to
(8.36) and (8.37), one has
v_0 = −0 = (σ_4 − α σ_3) , i.e., σ_4 = α σ_3 ;
x_t = σ_3 (1 + α t) exp(−α t) = x_0 (1 + α t) exp(−α t) ;
v_t = dx_t/dt = −x_0 α^2 t exp(−α t) ;
dv_t/dt = d^2 x_t/dt^2 = −x_0 α^2 exp(−α t) [1 − α t] . (8.38)
According to (8.38), xt remains positive because α > 0. Meaning, mass stays on the
original positive side. And as t increases, mass approaches exponentially the (EP).
At t = 0, the first derivative of xt is vanishing while its second derivative is negative.
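The algebra of (8.38) is easily machine-checked—a sympy sketch, not part of the text—which also verifies that x_t satisfies the critically damped equation x'' + 2α x' + α^2 x = 0:

```python
import sympy as sp

t, a, x0 = sp.symbols('t alpha x_0', positive=True)
xt = x0*(1 + a*t)*sp.exp(-a*t)    # (8.38)
vt = sp.simplify(sp.diff(xt, t))  # expected: -x_0*alpha**2*t*exp(-alpha*t)
ode_residual = sp.simplify(sp.diff(xt, t, 2) + 2*a*sp.diff(xt, t) + a**2*xt)
print(vt, ode_residual)           # residual -> 0
```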
Sinusoidal Motion
When Hooke's attractive force is stronger than friction, i.e., K > α^2, the spring is
under-damped. The two roots of the characteristic equation are complex, namely
−k_01 = −α − i√(K − α^2) ; −k_02 = −α + i√(K − α^2) . (8.39)
[Figure: pendulum free-body diagram; the weight mg resolves into components mg sin θ and mg cos θ.]
g is the acceleration due to gravity. For an arbitrary size of the angle θ_t, (8.41) is
nonlinear. However, when θ_t is much less than a radian, both tan θ_t and sin θ_t tend to
θ_t and the equation of motion becomes linear. [Note: π radians equal 180°. Thus,
one radian = 57.2958°.] We can then write (8.41) as
d2 θt dθt
2
+ 2μ + ω02 θt = 0 (8.42)
dt dt
where
α g
2μ = ; ω0 = . (8.43)
m l
The equilibrium position (EP) refers to the case where the angle θ is zero: Meaning,
the massless rod is exactly vertical.
Solution of (8.42)
Equation (8.42) is a homogeneous linear ordinary differential equation (hlinODE).
Therefore, its particular integral, I_pi, is vanishing. In order to determine its complementary
solution, S_comp → θ_t, one needs first to solve its characteristic equation
E_ch. To that end, d/dt is replaced by k and d^2/dt^2 by k^2:
k^2 + 2μ k + ω_0^2 = 0 . (8.44)
dθ_t/dt ≡ θ̇_t = σ_1 k_1 exp(k_1 t) + σ_2 k_2 exp(k_2 t) ;
d^2 θ_t/dt^2 ≡ θ̈_t = σ_1 (k_1)^2 exp(k_1 t) + σ_2 (k_2)^2 exp(k_2 t) . (8.47)
For convenience, and because it does not affect any of the substance
of this work, in what follows we shall work with t ≥ 0.
8.2.2 Angle θ_t
When μ^2 > ω_0^2, friction is overpowering. The pendulum is then said to be over-damped—see
Figs. 8.6 and 8.7. Because μ is greater than zero, for an over-damped
pendulum the following inequalities hold:
μ > √(μ^2 − ω_0^2) ; k_1 < 0 ; k_2 < 0 ; k_1 > k_2 . (8.48)
The following parameters have been set to be the same for all four curves, A, B, C,
and D, displayed in Fig. 8.6:
k_1 = −1/3 ; k_2 = −5/3 ; k_2/k_1 = 5 ; σ_1 + σ_2 = 3 ;
−(k_1 + k_2)/2 = μ = 1 ; (k_1 − k_2)/2 = √(μ^2 − ω_0^2) = 2/3 . (8.50)
However, while the sum σ_1 + σ_2 = 3 is the same for all four curves in Fig. 8.6,
the individual values of σ_1, and therefore σ_2, may differ from one curve to the other.
To understand how the angle θt changes with t, one needs to study its description
in (8.46). Time moves forward. Given both k1 and k2 are negative, if σ1 and σ2 have
the same sign, (8.46) keeps its sign and the bob never crosses over to the other side.
Accordingly, for the curve marked A in Fig. 8.6, where σ1 = σ2 = 23 , the bob stays
on the positive side.
Regarding curve B, it is convenient to rewrite (8.46) as
θ_t = σ_1 exp(k_1 t) {1 + (σ_2/σ_1) exp[−(k_1 − k_2) t]} . (8.51)
Fig. 8.6 Pendulum motion experiences friction strong enough that its motion is over-damped. The
rod is turning past angle θ = 3◦ at time t = 0
Clearly, the term σ_1 exp(k_1 t) does not change sign with the passage of time. If
θ_t were to change sign, the change would have to come from the second term, namely
{1 + (σ_2/σ_1) exp[−(k_1 − k_2) t]}. At t = 0, this term equals 1 + σ_2/σ_1. And at t → ∞,
because (k_1 − k_2) > 0, it is equal to 1. These two values can have the same sign
only if
σ_2/σ_1 > −1 . (8.52)
At time zero, according to (8.46) and (8.49), the angle is 3°. Its second derivative,

d²θt/dt² |_{t=0} = σ1 (k1)² + σ2 (k2)² = (15/4)(−1/3)² + (−3/4)(−5/3)²
= −4 σ1 (k1)² = −5/3 , (8.55)

is negative. This means the bob starts its journey at θ_{t=0} = 3°, which is a maximum, and its distance from the (EP) decreases monotonically with the passage of time.
For curve C in Fig. 8.6, σ1 = 10, σ2 = −7, and the ratio σ2/σ1 = −0.7 is again > −1. Thus the bob stays on the positive side. Initially the angle increases and, according to (8.53), reaches a maximum θt:extremum at time tθ:extremum, where it halts momentarily and starts decreasing.
tθ:extremum = (3/4) log[0.7 × 5] = 0.939572 ,

and, from (8.53) and (8.46),

θt:extremum = σ1 exp(k1 tθ:extremum) + σ2 exp(k2 tθ:extremum)
= 10 exp(−0.9396/3) − 7 exp(−5 × 0.9396/3) = 5.850° .
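The curve-C numbers above can be verified with a short Python sketch (assuming the curve-C parameters σ1 = 10, σ2 = −7 and the roots of (8.50)):

```python
import math

# Curve C of Fig. 8.6: sigma1 = 10, sigma2 = -7; k1 = -1/3, k2 = -5/3
s1, s2 = 10.0, -7.0
k1, k2 = -1.0 / 3.0, -5.0 / 3.0

# theta-dot = 0  =>  exp((k1 - k2) t) = -s2*k2/(s1*k1)
t_ext = math.log(-(s2 * k2) / (s1 * k1)) / (k1 - k2)

# Angle at the extremum, from (8.46)
theta_ext = s1 * math.exp(k1 * t_ext) + s2 * math.exp(k2 * t_ext)

print(t_ext, theta_ext)  # about 0.9396 and about 5.85 degrees
```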
8.2.5 Extremum in θ̇t

The x axis in Fig. 8.7 is the time t, and the y axis is the angular velocity dθt/dt ≡ θ̇t [see (8.47)].
Curves A, B, C, and D in Fig. 8.7 refer to the same parameters as those in Fig. 8.6. Because θt in curve A, Fig. 8.6, undergoes no extremum, curve A in Fig. 8.7 does not pass through θ̇t = 0. On the other hand, because of the presence of an extremum
Fig. 8.7 Over-damped pendulum: the angular velocity θ̇t is plotted against time t for curves A, B, C, and D
in θt in each of the other three curves in Fig. 8.6, the corresponding curves do pass through θ̇t = 0. For curve B, this occurs at the very beginning, at t = 0. For curves C and D, θ̇t = 0 occurs at t = 0.9396 and t = 1.511, respectively.
In addition to θ̇t itself, a graphical treatment of the extrema of θ̇t requires knowledge also of θ̈t (both are available in (8.47)) as well as of the third derivative d³θt/dt³, which is given below.
d³θt/dt³ = dθ̈t/dt = σ1 (k1)³ exp(k1 t) + σ2 (k2)³ exp(k2 t)
= σ1 (k1)³ exp(k1 t) [1 + (σ2 (k2)³ / σ1 (k1)³) exp(−(k1 − k2) t)] . (8.58)
Is there an extremum present in curve A, Fig. 8.7? To answer this query, use (8.53) and calculate the time t = tθ̇:extremum at which such an extremum would occur. To that end, recall that (k1 − k2) = 4/3 and k2/k1 = 5. Next set θ̈t = 0 in (8.47) and rewrite the result as

tθ̇:extremum = [1/(k1 − k2)] log[−(σ2/σ1)(k2/k1)²] = (3/4) log(−25 σ2/σ1) . (8.59)
For curve A, σ2/σ1 = 1.5/1.5 = 1. Therefore,

tθ̇:extremum = (3/4) log(−25) . (8.60)
This is a complex number. Because the time tθ̇:extremum has to be real, this result is unphysical. Thus, curve A does not undergo any extremum.
For curve B, σ2/σ1 = (−3/4)/(15/4) = −1/5. Therefore, an extremum occurs at time

tθ̇:extremum = (3/4) log[−(−1/5) × 25] = (3/4) log(5) = 1.20708 . (8.61)
Using this value of the time in (8.47), one can determine the relevant value of θ̇t at this extremum. The result is −0.668739. Because the third derivative at this time, namely at t = 1.20708,

d³θt/dt³ = σ1 (k1)³ exp(k1 t) + σ2 (k2)³ exp(k2 t) = 0.371521 , (8.62)

is positive, this extremum is a minimum.
Proceeding as before for curve C, the θ̇t at its extremum is −1.303795, and again the extremum is a minimum.
For curve D, the extremum is a maximum; it occurs at t = 2.18256, where θ̇t = 0.64565.
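The curve-B extremum of θ̇t can be checked numerically (a sketch assuming the curve-B amplitudes σ1 = 15/4, σ2 = −3/4 used above):

```python
import math

# Curve B: sigma1 = 15/4, sigma2 = -3/4; k1 = -1/3, k2 = -5/3
s1, s2 = 15.0 / 4.0, -3.0 / 4.0
k1, k2 = -1.0 / 3.0, -5.0 / 3.0

# theta-double-dot = 0  =>  t = (3/4) log(-25 sigma2/sigma1), cf. (8.59)
t_ext = (3.0 / 4.0) * math.log(-25.0 * s2 / s1)

# Angular velocity (8.47) and third derivative (8.58) at that time
theta_dot = s1 * k1 * math.exp(k1 * t_ext) + s2 * k2 * math.exp(k2 * t_ext)
theta_dddot = s1 * k1**3 * math.exp(k1 * t_ext) + s2 * k2**3 * math.exp(k2 * t_ext)

print(t_ext, theta_dot, theta_dddot)
```

A positive third derivative confirms that the extremum of θ̇t is a minimum.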
When the strength of friction becomes comparable to the ordering tendency of the natural vibrations, a pendulum is said to be critically damped; see Figs. 8.8 and 8.9. More precisely, this is the case when μ² = ω₀². Here the characteristic equation (8.44) has two equal roots: k1 = k2 = −μ [see (8.45)]. Therefore, according to the well-established procedure, and (3.37), the angle at time t can be expressed as
Fig. 8.8 The angle the rod of a critically damped pendulum makes is displayed as a function of time. Curves A, B, and C are defined in (8.69)–(8.74)
Fig. 8.9 Plotted along the vertical axis is the angular velocity, dθt/dt, for a critically damped pendulum. Curves A, B, and C are the same as in Fig. 8.8
The prediction from this equation is displayed in Fig. 8.8. At time t = 0, the bob is at a point where the rod, according to (8.64), makes an angle θ0 = σ3 (which was set equal to +3°). Because μ > 0, as time t → ∞ the angle θt tends to the (EP). In between, there are two possibilities. First, the angle stays positive, eventually reaching the (EP). Second, the angle switches over to the negative side, where at some time, say tcritical, it reaches a value, say θtcritical, at which it comes to a momentary stop. Instantly, the bob starts the reverse journey, eventually reaching the end of its travel at the midpoint: that is, when the angle θt reaches the (EP).
Let us examine this behavior and see exactly what transpires. To this end, in
Fig. 8.9, look at the derivative of θt .
dθt/dt = (σ4 − μ σ3 − μ σ4 t) exp(−μ t) . (8.65)
Because dθt/dt must tend to zero at t = tcritical, (σ4 − μ σ3 − μ σ4 tcritical) = 0. As such,

tcritical = (σ4 − μ σ3)/(μ σ4) = 1/μ − σ3/σ4 . (8.66)
In order to determine the type of extremum that θtcritical represents, one needs to look at the second derivative d²θt/dt² at time t = tcritical. One knows that friction is positive, therefore the parameter μ > 0. And if the parameter σ4 should also be > 0, the second derivative, obtained by differentiating (8.65), would clearly be negative:

d²θt/dt² |_{t=tcritical} = exp[−μ (1/μ − σ3/σ4)] (−μ σ4) < 0 , (8.68)
and

θmax = (σ4/μ) exp(−μ tmax) . (8.70)

and

θmax = (σ4/μ) exp(−μ tmax) = (3° × 3) exp(−2/3) = 4.621° . (8.72)
Next, in Fig. 8.8, examine curve B, where σ3 = 3°, 1/μ = 3, and σ4 = 0.5°. But now, because the predicted tmax is equal to −3, which is a time before the experiment began, the requirement for an observed maximum is not satisfied. As a result, while the angle stays positive, it decreases monotonically toward the (EP) as t increases toward ∞.
The third curve, C in Fig. 8.8, refers to the case where 1/μ = 3 and σ4 = −σ3 = −3. Thus μ σ4 is < 0. With reference to (8.68), (−μ σ4), and therefore d²θt/dt² |_{t=tcritical}, are positive. As a result, the relevant θtcritical is a minimum. Again, it is helpful to call this value of θtcritical by a new name, θmin, and the time at which it is reached t = tmin. Thus

tmin = 1/μ − σ3/σ4 = 3 − 3/(−3) = 4 , (8.73)
and

θmin = −(|σ4|/μ) exp(−μ tmin) = −(3° × 3) exp(−4/3) = −2.372° . (8.74)
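The curve-C minimum can be verified numerically. The sketch below assumes the standard critically damped form θt = (σ3 + σ4 t) exp(−μ t); its derivative reproduces (8.65), so this assumption is consistent with the text:

```python
import math

# Critically damped curve C: 1/mu = 3, sigma4 = -sigma3 = -3 (degrees)
mu = 1.0 / 3.0
s3, s4 = 3.0, -3.0

# Assumed form theta_t = (s3 + s4*t) exp(-mu t); differentiating it
# gives (s4 - mu*s3 - mu*s4*t) exp(-mu t), i.e. (8.65).
def theta(t):
    return (s3 + s4 * t) * math.exp(-mu * t)

t_min = 1.0 / mu - s3 / s4   # (8.73): 3 - (-1) = 4
theta_min = theta(t_min)     # (8.74): about -2.372 degrees

print(t_min, theta_min)
```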
With the progress of time, owing to the negative exponent in (8.64), θt → (EP).
When friction is weaker than the critical amount, i.e., ω₀² > μ², the pendulum is under-damped; see Fig. 8.10. The two roots, k1 and k2, of the characteristic equation (8.44) (compare (8.45)) are complex, namely

k1 = −μ − i√(ω₀² − μ²) ; k2 = −μ + i√(ω₀² − μ²) . (8.75)
where σ3 = −i(σ1 − σ2), σ4 = (σ1 + σ2), and σ5 = √(σ3² + σ4²). Also, sin φ = σ3/σ5 and cos φ = σ4/σ5. Because μ > 0, as t → ∞ the angle θunder-damped tends to the (EP). However, before the bob comes to its final, absolute stop, it repeats the following performance many times: in principle, an infinite number of times.
At time t = 0 and angle 3°, consider the bob heading left. As it moves toward decreasing angle, it crosses the (EP), proceeds further left, slows down to a momentary halt, reverses its direction, and starts moving forward to the right: it crosses the (EP) again before coming to a momentary stop on the right side and starting another journey leftward. This behavior contrasts with that of the over-damped and critically damped pendulums, which either never move across the midpoint or cross it at most one time.
Another point to note is that, according to (8.76), the angle θunder-damped reaches a maximum θn at a time tn if tn √(ω₀² − μ²) − φ = 2nπ, and the next maximum θn+1 at time tn+1 when tn+1 √(ω₀² − μ²) − φ = 2(n + 1)π. Because the cosines at both these times are equal to unity, the ratio of the angles at these two successive maxima is
θn/θn+1 = exp(−μ tn)/exp(−μ tn+1) = exp[2πμ/√(ω₀² − μ²)] . (8.77)
[Note: the same is also true of the ratio of the angles at two successive minima.] Remarkably, this ratio does not depend on n. Rather, it is the same for any two successive maxima (or, indeed, any two successive minima). Also, it is readily measurable. Another quantity that is easy to measure is the cycle time, δcycle, of the damped oscillation. That is,
δcycle = tn+1 − tn = 2π/√(ω₀² − μ²) . (8.78)
From δcycle and the ratio log(θn/θn+1), one can determine the strength of the friction,

μ = log(θn/θn+1) / δcycle . (8.79)

The curve plotted in Fig. 8.10 is

θt = 3 exp(−t/1.5) cos(10 t) . (8.80)
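Relations (8.77)–(8.79) can be exercised on the example (8.80), for which μ = 1/1.5 and √(ω₀² − μ²) = 10 (a sketch; the "measured" quantities are generated from the formulas themselves):

```python
import math

# Under-damped example (8.80): theta_t = 3 exp(-t/1.5) cos(10 t),
# i.e. mu = 1/1.5 and sqrt(omega0^2 - mu^2) = 10.
mu_true = 1.0 / 1.5
omega_d = 10.0

delta_cycle = 2.0 * math.pi / omega_d                # (8.78)
ratio = math.exp(2.0 * math.pi * mu_true / omega_d)  # (8.77)

# Recover the friction strength from the two measurables, cf. (8.79)
mu_measured = math.log(ratio) / delta_cycle

print(mu_measured)  # recovers mu = 2/3
```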
Fig. 8.10 With a sinusoidal external force, the angle θ the pendulum makes with the vertical is displayed as a function of time. Plotted here is (8.80)
The transient-state motion of damped pendulums was analyzed in the foregoing. The steady-state motion, caused by the application of an applied force, is considered below.
In the presence of an applied force, say m l f (t), the original equation of motion,
namely (8.41), changes to the following:
m d²{l tan θ}/dt² + α d{l tan θ}/dt + m g sin θ = m l f(t) . (8.81)
d²θ/dt² + 2μ dθ/dt + ω₀² θ = f(t) , (8.82)

where ω₀ = √(g/l) and μ = α/(2m).
Assume the externally applied force is sinusoidal in time with angular velocity ω. With the application of the external force (8.83), the equation of motion (8.82) becomes an inhomogeneous linear (ODE) with constant coefficients. The external force, however, does not affect the complementary solution, which relates to the transient states discussed in detail in the foregoing. Therefore, only the particular integral, I_pi, that leads to the steady-state motion needs to be evaluated here. Denoting D ≡ d/dt, the particular integral of equation (8.82) is calculated in the usual fashion as follows.
θ_pi → I_pi = [1/(D² + 2μ D + ω₀²)] A cos(ω t)
= [1/(D² + 2μ D + ω₀²)] (A/2) [exp(iω t) + exp(−iω t)]
= A { [(1/2)/(−ω² + 2μ i ω + ω₀²)] exp(iω t) + [(1/2)/(−ω² − 2μ i ω + ω₀²)] exp(−iω t) }
= A [(ω₀² − ω²) cos(ωt) + 2μω sin(ωt)] / [(ω₀² − ω²)² + 4μ²ω²]
= A cos(ωt − Φ) / √[(ω₀² − ω²)² + 4μ²ω²]
= (A/ω₀²) cos(ωt − Φ) / √{ [1 − (ω/ω₀)²]² + [(2μ/ω₀)(ω/ω₀)]² } . (8.84)
Mratio = 1 / √{ [1 − (ω/ω₀)²]² + [(2μ/ω₀)(ω/ω₀)]² }
= 1 / √{ 1 + (ω/ω₀)⁴ − (ω/ω₀)² [2 − (2μ/ω₀)²] } . (8.85)
The magnification ratio is unity when the applied force is constant in time: that is, Mratio = 1 when ω = 0. And when the frequency of the applied force reaches the undamped natural frequency of the pendulum, that is when ω → ω₀, Mratio → ω₀/(2μ).
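Both limits can be checked directly from (8.85) (a sketch with an assumed, lightly damped value of μ/ω₀):

```python
import math

def m_ratio(x, mu_over_omega0):
    """Magnification ratio (8.85) as a function of x = omega/omega0."""
    return 1.0 / math.sqrt((1.0 - x**2) ** 2 + (2.0 * mu_over_omega0 * x) ** 2)

mu_over_omega0 = 0.1  # assumed example value

print(m_ratio(0.0, mu_over_omega0))  # constant force: M = 1
print(m_ratio(1.0, mu_over_omega0))  # omega = omega0: M = omega0/(2 mu) = 5
```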
Next, in (8.84), there is the quantity A/ω₀², which is the static deflection, θstatic deflection. This nomenclature is owed to the fact that, in the absence of time-dependent motion, the applied force would deflect the equilibrium position of the bob through an angle ≈ m l A/(m g) = A/ω₀². Also appearing in (8.84) is the so-called lag angle Φ, defined by the relations
cos(Φ) = (ω₀² − ω²) / √[(ω₀² − ω²)² + 4μ²ω²] ,
sin(Φ) = 2μω / √[(ω₀² − ω²)² + 4μ²ω²] . (8.86)
(Mratio)max = 1/√z = 1/√(1 − σc²) = 1/√(1 − xc⁴) . (8.90)
Note that when friction decreases, xc increases toward unity. As a result, 1 − xc² decreases and the magnification ratio increases. Indeed, when there is no friction at all present, meaning when μ = 0, (Mratio)max → ∞.
Chapter 9
Resistors, Inductors, Capacitors
Introduced first are Kirchhoff’s two rules that state: ‘The incoming current at any
given point equals the outgoing current at that point,’ and ‘The algebraic sum of
changes in potential encountered by charges traveling, in whatever manner, through
a closed-loop circuit is zero.' Included next is Ohm's law: 'In a closed-loop circuit that contains a battery operating at V volts and a resistor of strength R ohms, the current flow is I amperes: I = V/R.' Several problems relating to additions of finite
numbers of resistors, placed in various configurations, some in series and some in
parallel formats, are worked out—see (9.2)–(9.30) and Figs. 9.1, 9.2, 9.3, 9.4, 9.5
and 9.6. More involved problems relating to total resistance and current flows in
infinite networks of resistors are treated next—see (9.31)–(9.60) and Figs. 9.7, 9.8,
9.9, 9.10, 9.11, 9.12, 9.13, 9.14, 9.15, 9.16, 9.17. Electric circuit elements are listed
in (9.61). Inductors are introduced in (9.62). Series and parallel circuits constituted
of finite number of inductors, and infinite series–parallel circuits of inductors are
treated in (9.63)–(9.70) and displayed in Figs. 9.18 and 9.19. The same is done
also for capacitors—see (9.71)–(9.82) and Figs. 9.20 and 9.21. Resistor–capacitor
circuits are treated in (9.83) to (9.86), and some results are displayed in Fig. 9.22.
Resistor–inductor circuits are studied in (9.91)–(9.96), and results are plotted in
Fig. 9.23. Inductor–capacitor circuits are analyzed in (9.97)–(9.106), and results are
displayed in Fig. 9.23.
Central to the understanding of current flow are a few rules that are obeyed by the
current and the circuitry through which it flows.
of holding charge. Lacking the capacitor, electric charge does not accumulate at a
point. As such, at any point, the inflow of charge during a given time interval equals
the outflow of charge during the same time interval.
The rate of flow of charge, (q + δq − q), during a time interval (t + δt − t), that is, δq/δt, is called the electric current at the given time t. Thus, except for the extraordinary circumstance referred to above, the incoming current at any point equals the outgoing current at that point. This is called Kirchhoff's first rule.
9.1 Electric Current 263
Kirchhoff’s second rule asserts that much like this person, if a given charge travels
in a closed-loop circuit in whatever manner, the algebraic sum of changes in potential
encountered must be zero.
According to Ohm’s law, current I amperes flows through a closed-loop circuit that
consists of a battery operating at voltage V volts and a resistor of strength R ohms.
V volts
I amper es = . (9.1)
R ohms
R1 ohms placed one after the other, that is, as shown in Fig. 9.1, in series, is Rseries−3(effective) = (R3 + R2 + R1) ohms.
n Resistors in Series

By an argument similar to that given above, the effective resistance of n resistors R1, R2, . . ., Rn connected in series should be R1 + R2 + · · · + Rn. However, more formally, we can argue as follows. According to Ohm's law, when a current I amperes flows through a resistor of strength R1 ohms, the potential across it decreases by I R1 volts. When current I flows through n resistors in series, the potential decrease across R1 is I R1, that across R2 is I R2, and so on. Therefore, the total decrease in potential across the n resistors is I (R1 + R2 + · · · + Rn): exactly the same as it would be across a single resistor Rseries−n(effective).

For a large number of resistors in series, the effective total resistance is large.
2 Resistors in Parallel
Consider two resistors R1 and R2 joined together in parallel and connected to a
battery supplying direct current at constant voltage V volts. The parallel connection
implies that while the positive outlet of the battery is connected to one end of each
of the two resistors, the negative outlet is connected to the other end of the same
two resistors. As a result, the potential difference across each of the two resistors is
identical: equal to V volts. Assume this process results in current I1 through resistor R1 and I2 through R2. Then, according to Ohm's law,

I1 = V/R1 ; I2 = V/R2 . (9.3)
The total current, Iparallel−2(effective), that flows through the two resistors R1 and R2 connected in parallel, and their total resistance, Rparallel−2(effective), are given by

Iparallel−2(effective) = I1 + I2 = V/R1 + V/R2 ≡ V/Rparallel−2(effective) . (9.4)

Thus

1/Rparallel−2(effective) = 1/R1 + 1/R2 . (9.5)
Rparallel−2(effective) = R1 ∥ R2 ≡ (R1 · R2)/(R1 + R2) (9.6)

has some interesting features. [Note: the notation a ∥ b stands for (a · b)/(a + b).] To examine these features, keep R1 constant and make changes in R2.
Process (1): Consider the case R2/R1 ≪ 1. To this end, rewrite (9.6) as

Rparallel−2(effective) = R2/(1 + R2/R1) (9.7)

and expand it in powers of the small parameter R2/R1:

Rparallel−2(effective) = R2 [1 − R2/R1 + O((R2/R1)²)] . (9.8)
This result is interesting. When two resistors are connected in parallel and one of the resistors is much smaller than the other, the effective resistance is even smaller than the smaller resistor!
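A quick numerical illustration (with assumed example values for R1 and R2) confirms both the exact formula (9.6) and the expansion (9.8):

```python
def parallel(r1, r2):
    """Effective resistance (9.6) of two resistors in parallel."""
    return r1 * r2 / (r1 + r2)

R1, R2 = 100.0, 1.0  # assumed example values, R2/R1 << 1
R_eff = parallel(R1, R2)

# Exact value versus the first-order expansion (9.8): R2 (1 - R2/R1)
approx = R2 * (1.0 - R2 / R1)

print(R_eff, approx)  # both just below the smaller resistor, 1 ohm
```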
Process (2): Increase R2 until R2 = R1. Even though there are now two resistors of strength R1, their total resistance is only half of R1:

Rparallel−2(effective) |_{R2→R1} = R1/2 . (9.9)
Process (4): With further increase in R2, the effective resistance rises further, albeit slowly. When R2 ≫ R1, (9.6) gives

Rparallel−2(effective) = R1/(1 + R1/R2) = R1 [1 − R1/R2 + O((R1/R2)²)] . (9.11)
n Resistors in Parallel
Current flowing through n resistors R1 , R2 , . . . , Rn that are joined together in par-
allel is not unlike very large number of people wanting to tread across a mountain
through a group of n tunnels that begin and end in very close proximity. And if
the tunnels are of different widths, surely most people would choose to go through the tunnel that is the widest, and similar choices would be made for the other tunnels. The same must be the case for current flowing through n resistors in parallel: most current flows through the resistor that has the smallest resistance, and
so on. Assuming the potential difference is V volts across the parallel assembly—
which means the potential difference is V volts across each of the n resistors in the
assembly—and currents I1 , I2 , . . . , In flow through the n resistors that are linked in
parallel, then according to Ohm’s law
V = I1 R1 = I2 R2 = · · · = In Rn . (9.12)
Ii = V/Ri , i = 1, 2, . . . , n .
As a result, the total current, Iparallel−n(effective), the effective total resistance, Rparallel−n(effective), and the potential V are related as follows:

I1 + I2 + · · · + In = V/R1 + V/R2 + · · · + V/Rn ≡ Iparallel−n(effective) ≡ V/Rparallel−n(effective) . (9.13)

Thus

1/Rparallel−n(effective) = 1/R1 + 1/R2 + · · · + 1/Rn . (9.14)
When all n resistors have the same strength R, (9.14) gives

1/Rparallel−n(effective) = n/R ,

or equivalently

Rparallel−n(effective) = R/n . (9.15)
In other words, for a very large number of resistors, all in parallel, the effective total resistance is very small.
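The R/n rule of (9.15) can be checked by folding the two-resistor formula (9.6) over a list of n equal resistors (assumed example values):

```python
from functools import reduce

def parallel(r1, r2):
    """Two-resistor parallel combination, cf. (9.6)."""
    return r1 * r2 / (r1 + r2)

# n identical resistors R in parallel, cf. (9.15): expect R/n
R, n = 50.0, 8
R_eff = reduce(parallel, [R] * n)

print(R_eff)  # 50/8 = 6.25 ohms
```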
A Few Resistors Together
Process (1) : A group of three resistors R1 , R2 , R3 ohms—to be called the ‘origi-
nal trio’—is connected in series across an on–off switch and a battery operating at
constant voltage V volts—see Fig. 9.3. Turning the switch on makes it a closed-loop
circuit.
Process (2): Three additional resistors of strength R1 ohms each are connected in parallel to the resistor R1 of the original trio, thus forming a sextet.
Process (3) : The sextet is enhanced further to form a nonet by connecting in
parallel, to the resistor R2 of the original trio, three new resistors of strength R2 each.
Process (4) : In a fashion similar to the above, another three resistors of strength
R3 ohms each are added next. Each of these new resistors R3 is connected in parallel
to the resistor R3 of the original trio. This process results in forming a duodectet to
be called dd, composed of a total of twelve resistors, a switch, and a battery.
Problem (A)
Calculate the total effective resistance, Reffect−dd , of the duodectet dd demonstrated
in Fig. 9.3.
Reffect−dd1 = R1/4 . (9.16)
Similarly, the four resistors R2 in parallel are replaced by a single resistor Reffect−dd2,

Reffect−dd2 = R2/4 , (9.17)

and finally, the four resistors R3 in parallel can be replaced by a single resistor Reffect−dd3,

Reffect−dd3 = R3/4 . (9.18)
In the manner described above, the dd may be replaced by a new triplet. The effective resistance of this triplet is Reffect−dd.
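Since the three reduced resistors of (9.16)–(9.18) sit in series, the duodectet collapses to (R1 + R2 + R3)/4; a sketch with assumed example values:

```python
# Each resistor of the original trio ends up in parallel with three equal
# partners, so each group reduces to R_i/4, cf. (9.16)-(9.18); the three
# groups remain in series.
def effect_dd(r1, r2, r3):
    return r1 / 4.0 + r2 / 4.0 + r3 / 4.0

print(effect_dd(4.0, 8.0, 12.0))  # (4 + 8 + 12)/4 = 6.0 ohms
```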
In the circuit shown in Fig. 9.5, turn the switch on and work out the currents I1 and I2, the potential difference VX − VY, and the effective resistance, Reffect−XY, between points X and Y. The relevant data are as follows:
V = 10 volts ;
R1 = 15 × 103 ohms ; R2 = 70 × 103 ohms ;
R3 = 60 × 103 ohms ; R4 = 20 × 103 ohms . (9.20)
V is the battery voltage, and R1 → R4 are the resistors described above. Their
strength is as specified in (9.20).
I = I1 + I2 , (9.21)
I1 = V/(R1 + R2) , I2 = V/(R3 + R4) . (9.22)
Veffect−XY = VX − VY = V R2/(R1 + R2) − V R3/(R3 + R4) . (9.24)
Following (9.24) and (9.22), and using the numbers provided in (9.20), leads to the effective potential difference, Veffect−XY. Knowing Veffect−XY and the currents I1 and I2, the effective resistance, Reffect−XY, between points X and Y can now be calculated. The effective current flow from X to Y is Ieffect−XY. Therefore,

Reffect−XY = Veffect−XY / Ieffect−XY = 5 × 10³ ohms . (9.27)
Figure 9.6 shows a circuit that represents a slight change from the circuit in Fig. 9.5. Here the battery is missing, the on–off switch is absent, and the labels i1 and i2 have been removed, thereby, in effect, connecting the points W and Z directly. [This effective double point is called the point WZ.] Calculate the effective resistance between X and WZ and between WZ and Y.
and

Rtotal X−WZ = (21/2) × 10³ ohms ; Rtotal WZ−Y = 4 × 10³ ohms . (9.30)
Reffective² = 2R · Reffective + 2R² , (9.32)
With the switch on, the battery operating provides current I which distributes itself
into I1 , I2 . . . I∞ as it flows through an infinite array of resistors each of which is of
strength R ohms.
Physical behavior of this circuit effectively replicates that of Fig. 9.7. The battery
experiences an effective total resistance Reffective and drives current I .
Ampere’s law can now be used to determine the effective total current driven
through the single effective resistance Reffective .
Ieffective = V/Reffective . (9.34)
I1 R = [I − I1 ] Reffective . (9.35)
Next consider an infinite network of resistors shown in Fig. 9.9. The battery oper-
ates at constant voltage V across an on–off switch and a trio of resistors R, R0 , R.
This is the first-trio of this network. The middle resistor of the first-trio, namely resis-
tor R0 , is connected in parallel to an identical second-trio R, R0 , R, and the process
is repeated ad infinitum. The current flow and the effective resistance of this network are treated next.
In the network shown in Fig. 9.10, current I amps distributes itself into I1 , I2 . . . I∞
as it flows through the infinite array of resistors. In order to work out the current I1, the total effective current Ieffect0, and the total effective resistance Reffect0, one can proceed as follows. Connect the midpoint resistor R0 in parallel to a new resistor
of strength Reffect0 . And, as shown in Fig. 9.10, stop there. The upshot again is that the
resistance of this assembly of four resistors should exactly be equal to the effective
resistance, Reffect0 , of the infinite network. That is,
9.2 Infinite Networks of Resistors 273
The above is a quadratic in Reffect0 with two solutions: one positive and another, un-physical, solution that is negative. The physical solution is

Reffect0 = R [1 + √(1 + 2 R0/R)] . (9.38)
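The closed form (9.38) can be checked against the defining self-similarity of the ladder: adding one more stage (two series resistors R, with R0 in parallel with the rest) must leave the effective resistance unchanged. A sketch with assumed example values:

```python
import math

# Infinite ladder of Figs. 9.9/9.10: each stage adds two series resistors R,
# with the middle resistor R0 in parallel with the remaining network.
R, R0 = 1.0, 3.0  # assumed example values, in ohms

# Iterate the one-stage recursion until it converges to R_effect0
x = R
for _ in range(200):
    x = 2.0 * R + (R0 * x) / (R0 + x)

closed_form = R * (1.0 + math.sqrt(1.0 + 2.0 * R0 / R))  # (9.38)

print(x, closed_form)  # both converge to the same value
```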
When the switch is on, current I flows out of the positive terminal of the battery
and, after completing its journey, returns to the negative terminal. The total effective
resistance of the circuit is Reffect0 given in (9.38). Therefore, the total effective current
is
I = V/Reffect0 = V / { R [1 + √(1 + 2 R0/R)] } . (9.39)
9.2.2 Current I1
When current I reaches the point A in Fig. 9.10, it splits between two resistors R0
and Reffect0 . From point A to point B, current I1 flows through resistor R0 , while the
remainder, I − I1 , flows through resistor Reffect0 . Both these currents flow under the
same potential difference. Therefore
I1 R0 = [I − I1 ] Reffect0 (9.40)
Next consider an infinite network of resistors shown in Fig. 9.11. The battery operates
at constant voltage V volts across an on–off switch and a trio of resistors R1 , R2 , R3 .
This is the first-trio. The middle resistor of the first-trio, namely resistor R2 , is
connected in parallel to an identical second-trio R1 , R2 , R3 . And this process is
repeated similarly ad infinitum. The effective resistance and the effective current
flow are treated below.
The requirement that the effective resistance of the infinite circuit be the same as
RNORC leads to the equality
As a quadratic, it has two solutions: one positive and the other negative and therefore un-physical.
RNORC = (R1 + R3)/2 + √[ ((R1 + R3)/2)² + R2 (R1 + R3) ] . (9.43)
RNORC = (R1/2)(1 + √5) . (9.46)
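The positive root (9.43) can be checked against the ladder's fixed point, RNORC = R1 + R3 + (R2 in parallel with RNORC); the golden-ratio value (9.46) then appears to correspond to the special case R1 = R2 with R3 = 0 (an assumption, since that setup is not reproduced in this excerpt):

```python
import math

def r_norc(r1, r2, r3):
    """Positive root of the quadratic, cf. (9.43)."""
    half = (r1 + r3) / 2.0
    return half + math.sqrt(half**2 + r2 * (r1 + r3))

# Consistency with the defining fixed point of the infinite network
r1, r2, r3 = 2.0, 5.0, 1.0  # assumed example values
x = r_norc(r1, r2, r3)
assert abs(x - (r1 + r3 + r2 * x / (r2 + x))) < 1e-9

# Assumed special case r1 = r2, r3 = 0: the golden-ratio form of (9.46)
print(r_norc(1.0, 1.0, 0.0), (1.0 + math.sqrt(5.0)) / 2.0)
```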
When the switch is on, current I amps flows out of the positive terminal of the battery
and, after completing its journey, returns to the negative terminal. The total effective
resistance of the circuit is RNORC —see (9.43). Therefore, the total effective current
is
I = V/RNORC = V / { (R1 + R3)/2 + √[ ((R1 + R3)/2)² + R2 (R1 + R3) ] } . (9.47)
[After leaving the positive terminal of the battery, the current I arrives at point A.
There it splits into two parts. Current I1 amps travels across the resistor R2 to point
B. The remainder, (I − I1 ) amps, travels through RNORC . At point B, currents I1
and (I − I1 ) join up so that their sum, namely the current I , returns to the battery.
Because the currents flowing through the resistors, R2 and RNORC , are driven by the
same voltage difference, the following relationship must obtain.
I1 R2 = [I − I1 ] RNORC (9.48)
Calculate the effective total resistance of the infinite network drawn in Fig. 9.13. Here
it is helpful to proceed in four successive stages.
To begin with, notice the equivalence of Figs. 9.13 and 9.14.
First Stage: Treat the first-sextet as distinct from the rest. Focus on its second-
triplet that is composed of three resistors of strength R2 ohms each. Next, connect
the middle resistor R2 of this second-triplet in parallel to a new resistor of strength
Reff . This process has now formed a septet as shown in Fig. 9.14.
Second Stage: However, as shown in Fig. 9.15, this septet of resistors can again
be reduced further to an equivalent-sextet by representing the last two resistors—
that is, R2 connected in parallel to Reff —by a single resistor R0 . And because of the
parallel nature of this connection, the relevant relationship is
R0 = R2 ∥ Reff . (9.50)
Notice this relationship has two unknowns: R0 and Reff . To determine these two
unknowns, one needs also a second relationship. For that purpose, proceed as follows.
Third Stage: The so-named equivalent-sextet achieved in the second stage
described above can be reduced now to an equivalent-triplet. This is done by focussing
on the resistor in the middle of the first-trio of resistors of strength R1 . Because this
resistor is aligned in parallel to three other resistors, R2 , R0 , and R2 , all four can be
replaced by a single equivalent resistance—see Fig. 9.16—Rq . The relevant relation-
ship is
Rq = R1 ∥ (R0 + 2 R2) . (9.51)
As shown in Fig. 9.16, we are now left with a simple circuit: consisting of the power
source and three resistors—namely, R1 , Rq , and R1 —that are connected in series.
Fourth Stage: Finally, replace these three resistors by an equivalent-singlet: a single effective resistor, Reffect−Cc, that is connected to the power source; see Fig. 9.17. This means

Reffect−Cc = R1 + Rq + R1 . (9.52)
There are a total of three equations, namely (9.50), (9.51), and (9.52), and three unknowns: Reffect−Cc, R0, and Rq. After combining (9.50), (9.51), and (9.52), straightforward algebra yields Reffect−Cc as a function of R1 and R2. However, even without examining the result in detail, one can predict its outcome in three particular limits: (1), (2), (3).
(1) Consider the resistance notated Rq. In the limit R2 → ∞, (9.51) gives Rq → R1. As such, (9.52) would lead to

(2) Next consider R0. In the opposite limit, namely R2 → 0, (9.50) leads to

R0 → 0 . (9.54)
It is helpful to have a few simple tests that check the accuracy of (9.57).
(a): To that end, first try R2 → ∞. In that limit, according to (9.56), x → ∞. Therefore, (9.57) transforms as follows:

Reffect−Cc |_{x→∞} = R1 [4x − x² + x²(1 + 5/x)] / (3x)
= R1 (4x − x² + x² + 5x) / (3x) = 3R1 . (9.58)
Whenever convenient, some of the following notation may be used. Also, to indicate time dependence, symbols may be written in lower case.

Notation Used
Capacitance, C  Measured in farads: F
Charge, Q       Measured in coulombs: C
Current, I      Measured in amperes: A
Inductance, L   Measured in henries: H
Resistance, R   Measured in ohms: Ω
Voltage, V      Measured in volts: V
(9.61)
9.3 Inductors
A coil made of several turns of very good conducting wire is the physical equivalent of an inductor (L). If a constant current i(0) flows through (L), the coil acts merely as a very weak resistor. According to Ampere's law, a constant current produces a time-independent magnetic field whose magnitude is proportional to the current. On the other hand, if the current i(t) is changing in time, it produces a changing magnetic field. And the resulting change in the magnetic flux induces an electro-motive force (emf). The (emf) produced is proportional to the rate of change of the current.

Because of the weakness of the internal resistance, current flows that are constant in time produce mostly insubstantial changes in voltage across an inductor.
On the other hand, when the current flow is changing in strength, say at a rate di(t)/dt amperes per second, an inductor of strength L henries effects a nonzero potential drop v(t) across it:

v(t) = L di(t)/dt . (9.62)
9.3 Inductors 279
Imagine a time-dependent current i(t) amps flowing through two inductors of strength L1 and L2 henries connected together in series. The drop in potential across the first inductor is L1 di(t)/dt and that across the second is L2 di(t)/dt. And because these drops in potential occur in series, the total drop in potential is their sum. The same result would obtain if the given current passed through a single inductor of strength (L1 + L2) henries:

L1 di(t)/dt + L2 di(t)/dt = (L1 + L2) di(t)/dt . (9.63)
In other words, when placed in series the inductances add much the same way as do
resistances.
Imagine a time-dependent current i(t) amps flowing through two inductors of strength L1 and L2 henries connected together in parallel. The drop in potential across the first inductor is L1 di1(t)/dt and that across the second is L2 di2(t)/dt. And because these drops in potential occur in parallel, they are the same:

L1 di1(t)/dt = L2 di2(t)/dt ≡ Const . (9.64)
With slight manipulation of (9.64), one can write

di1(t)/dt = Const/L1 ; di2(t)/dt = Const/L2 . (9.65)
Denote the total current i_parll and the total inductance L_parll, and use (9.65) to write

di_parll/dt ≡ di1(t)/dt + di2(t)/dt = Const/L1 + Const/L2 ≡ Const/L_parll . (9.66)
Equation (9.66) leads to the following relationship obeyed by the total inductance,
L parll , of two inductors, L 1 and L 2 , added together in parallel.
1/L_parll = 1/L1 + 1/L2 . (9.67)
Consider the infinite network of inductors L 1 , L 2 , L 3 shown in Fig. 9.18. Its effective
inductance may equivalently be represented by the inductance of a single inductor
of strength L eff . As a result, inductance L eff may be equated to the inductance of the
four inductors shown in Fig. 9.19. That is,
Equivalently
This is a quadratic with a positive and a negative solution. The physically acceptable
one is the positive solution
L_eff = [(L1 + L2)/2] [1 + √(1 + 4 L3/(L1 + L2))] . (9.70)
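As with the resistor ladders, (9.70) can be checked against the network's self-similarity: L_eff must equal L1 + L2 plus L3 in parallel with L_eff. A sketch with assumed example values:

```python
import math

def l_eff(l1, l2, l3):
    """Positive solution (9.70) for the infinite inductor ladder."""
    s = l1 + l2
    return (s / 2.0) * (1.0 + math.sqrt(1.0 + 4.0 * l3 / s))

# Fixed-point check: L_eff = L1 + L2 + (L3 in parallel with L_eff)
l1, l2, l3 = 1.0, 2.0, 4.0  # assumed example values, in henries
x = l_eff(l1, l2, l3)
print(x, l1 + l2 + l3 * x / (l3 + x))  # the two values agree
```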
9.3.4 Capacitors
C farads = (Q coulombs)/(V volts) . (9.71)

As such, a farad is equal to coulombs per volt or, equivalently, (amperes · seconds) per volt. Also, a coulomb is equivalent to farads · volts.
Assume a large battery that can supply current at fixed voltage V is connected to a
capacitor (C), a perfect resistor (R), and a switch. [Note : A perfect resistor obeys
Ohm’s law exactly and contains no stray inductance or capacitance.] These items are
placed in series forming a closed-loop circuit. The charging of (C) requires an inflow
of current into, say, the left plate and an equal outflow from the right plate. Generally
this process is not instantaneous. Rather, the charging occurs exponentially with a
relaxation time that is representative of the nature of the capacitor as well as details
of the resistance in the circuitry. And both the current i as well as the charge q are
functions of time. Note, current and charge are related because one represents the
rate of flow of the other. That is,
$$i(t) = \frac{dq(t)}{dt}\,. \qquad (9.72)$$
Therefore at some specified time t, the charge q(t) that has accumulated on the left
plate is
$$q(t) = \int_{-\infty}^{t} \frac{dq(t)}{dt}\,dt = \int_{-\infty}^{t} i(t)\,dt\,. \qquad (9.73)$$
Imagine two capacitors, C1 and C2 , connected in parallel. This implies the left plates
of the two capacitors are connected together as are the right plates. As a result,
both capacitors experience the same voltage difference, say V volts, leading to the
relationships
Q 1 = C1 · V ; Q 2 = C2 · V ; Q 1 + Q 2 = (C1 + C2 ) · V , (9.74)
where Q 1 and Q 2 are the charges held on the positive plates of capacitors C1 and
C2, respectively. The total charge held is $Q_{\rm total-parallel} = Q_1 + Q_2$, and the total capacitance is
$C_{\rm total-parallel} = C_1 + C_2$.
Imagine two capacitors, C1 and C2 , joined together in series. This implies the negative
plate of capacitor C1 is connected to the positive plate of capacitor C2 . As a result, the
charges will be distributed as follows. If the left plate—meaning the positive plate—
of capacitor C1 has charge +Q, its right plate will have charge −Q. This will result
in the left plate of capacitor C2 with charge +Q and its right plate with charge −Q.
And if the potential drop across capacitor C1 is V1 , and that across C2 is V2 , the total
potential drop across the two capacitors in series will be Vtotal−series−2 = V1 + V2 and
the following relationships will hold.
$$V_{\rm total-series-2} = V_1 + V_2 = \frac{Q}{C_1} + \frac{Q}{C_2} \equiv \frac{Q}{C_{\rm total-series-2}}\,, \qquad (9.77)$$
$$\frac{1}{C_{\rm total-series-2}} = \frac{1}{C_1} + \frac{1}{C_2}\,. \qquad (9.78)$$
If the two capacitors are equal—that is, C1 = C2 ≡ C—their sum in series is equal
to a half of each: meaning $C_{\rm total-series-2} = C/2$.
The above results are similar to those of resistors being added in parallel. An
interesting consequence of this is that when a very large number of capacitors are
added together in series, their sum is very small. For instance, the result of adding n
capacitors of strength C in series is $C/n$, and if n → ∞ the sum is zero.
Consider the infinite network of capacitors C1 , C2 , C3 shown in Fig. 9.20. Its effec-
tive capacitance may equivalently be represented by the capacitance of a single
capacitor of strength Ceff . As a result, capacitance Ceff may be equated to the capac-
itance of the four capacitors shown in Fig. 9.21.
That is,
$$\frac{1}{C_{\rm eff}} = \frac{1}{C_1} + \frac{1}{C_2} + \frac{1}{C_3 + C_{\rm eff}}\,. \qquad (9.79)$$
Equivalently
$$C_{\rm eff}^2\,(C_1 + C_2) = -\,C_{\rm eff}\,C_3\,(C_1 + C_2) + C_1\,C_2\,C_3\,. \qquad (9.80)$$
This is a quadratic with a positive and a negative solution. The physically acceptable
solution is the positive one, namely
$$C_{\rm eff} = -\,\frac{C_3}{2} + \sqrt{\left(\frac{C_3}{2}\right)^2 + \frac{C_1\,C_2\,C_3}{C_1 + C_2}}\,. \qquad (9.81)$$
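As a quick numerical sanity check (a sketch; the helper names are ours), one can confirm that the root (9.81) indeed satisfies the fixed-point relation (9.79):

```python
import math

def C_eff_closed_form(C1, C2, C3):
    # Positive root of the quadratic (9.80), i.e. Eq. (9.81).
    return -C3 / 2.0 + math.sqrt((C3 / 2.0) ** 2 + C1 * C2 * C3 / (C1 + C2))

def residual_9_79(C1, C2, C3, Ceff):
    # Left minus right side of (9.79); zero when Ceff is consistent.
    return 1.0 / Ceff - (1.0 / C1 + 1.0 / C2 + 1.0 / (C3 + Ceff))
```

With C1 = C2 = C3 ≡ C the closed form reduces to (C/2)(√3 − 1), the value quoted in (9.82).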
9.3.9 Comment
For the special case of equal capacitors, C1 = C2 = C3 ≡ C, (9.81) gives
$$C_{\rm eff} = \frac{C}{2}\left(\sqrt{3} - 1\right). \qquad (9.82)$$
Figure 9.22 displays a series circuit comprised of an on–off switch, a large battery,
a resistor R, and a capacitor C. At time t the switch is on, the battery supplies time-
dependent current i(t) at constant voltage V, and the charges on the left and the
right plates of the capacitor are q(t) and −q(t). The following differential equation
describes this closed-loop circuit:
$$R\,\frac{dq(t)}{dt} + \frac{q(t)}{C} = V\,, \qquad (9.83)$$
or, equivalently,
$$\frac{dq(t)}{dt} + \frac{q(t)}{\tau_{R\text{-}C}} = \frac{V}{R}\,, \qquad (9.84)$$
where
$$\tau_{R\text{-}C} \equiv R\,C\,. \qquad (9.85)$$
Its general solution is
$$q(t) = \sigma_0 \exp\left(-\frac{t}{\tau_{R\text{-}C}}\right) + C\,V\,. \qquad (9.86)$$
Problem
Assume the capacitor in the system described above and shown in Fig. 9.22 is in its
uncharged state. Turn the switch on at time t = t0 and work out the time dependence
of the charge and the current.
Solution
Charge on the capacitor at time t0 is zero. Equation (9.86) at t = t0 ,
$$q(t_0) = 0 = \sigma_0 \exp\left(-\frac{t_0}{\tau_{R\text{-}C}}\right) + C\,V\,, \qquad (9.87)$$
gives
$$\sigma_0 = -\,C\,V \exp\left(\frac{t_0}{\tau_{R\text{-}C}}\right). \qquad (9.88)$$
Accordingly, we have
$$q(t) = C\,V\left[1 - \exp\left(-\frac{t - t_0}{\tau_{R\text{-}C}}\right)\right] \qquad (9.89)$$
and
$$i(t) = \frac{dq(t)}{dt} = \frac{V}{R}\,\exp\left(-\frac{t - t_0}{\tau_{R\text{-}C}}\right). \qquad (9.90)$$
Equation (9.90) predicts that within a time interval equal to $\tau_{R\text{-}C}$ the current drops to
about one-third—actually, 36.7879441%—of its original value $V/R$, while, according
to (9.89), the magnitude of the charge on either plate rises to about two-thirds of
its maximum value, which is $C\,V$.
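These one-time-constant fractions are easy to confirm numerically; a minimal sketch (the function names are ours, not the book's):

```python
import math

def rc_charge(t, t0, R, C, V):
    # Charge on the capacitor, Eq. (9.89).
    return C * V * (1.0 - math.exp(-(t - t0) / (R * C)))

def rc_current(t, t0, R, C, V):
    # Charging current, Eq. (9.90).
    return (V / R) * math.exp(-(t - t0) / (R * C))
```

At t − t0 = RC the current has fallen to exp(−1) ≈ 36.79% of V/R and the charge has risen to 1 − exp(−1) ≈ 63.21% of CV.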
At time t, the switch is on and the battery supplies time-dependent current i(t) at
constant voltage V to a series connection constituted of a resistor R and an inductor
L. The following differential equation describes this closed-loop circuit.
$$R\,i(t) + L\,\frac{di(t)}{dt} = V\,. \qquad (9.91)$$
Divide both sides by L and solve the above. The result is
$$i(t) = \sigma_0 \exp\left(-\frac{R}{L}\,t\right) + \frac{V}{R}\,. \qquad (9.92)$$
Consider a closed-loop circuit containing a large battery that can supply current at
constant voltage V. Other items in the circuit are a resistor R, an inductor L, and a
switch. All these items are connected in series. Turn the switch on at time t = t0 and
work out the time dependence of the charge and the current.
9.4.4 Solution
Or equivalently
$$\sigma_0 = -\,\frac{V}{R}\,\exp\left(\frac{R}{L}\,t_0\right). \qquad (9.94)$$
Accordingly,
$$i(t) = \frac{V}{R}\left[1 - \exp\left(-\frac{t - t_0}{\tau_{R\text{-}L}}\right)\right], \qquad (9.95)$$
where
$$\tau_{R\text{-}L} = \frac{L}{R}\,. \qquad (9.96)$$
When R is measured in Ω—ohms—and L in H—henries—$\tau_{R\text{-}L}$ is measured in
seconds. Note that within a time interval equal to $\tau_{R\text{-}L}$ the current rises to 63.212056%
of its maximum value $V/R$.
$$\frac{q(t)}{C} + L\,\frac{di(t)}{dt} = V\,. \qquad (9.97)$$
$$\frac{d^2 q(t)}{dt^2} + \frac{q(t)}{(\tau_{L\text{-}C})^2} = \frac{V}{L}\,. \qquad (9.98)$$
In (9.98)
$$\tau_{L\text{-}C} \equiv \sqrt{L\,C}\,. \qquad (9.99)$$
The charge q(t) and the current i(t) are periodic in time with angular frequency ω
and frequency ν.
$$\omega = \frac{1}{\tau_{L\text{-}C}} = 2\pi\nu\,. \qquad (9.101)$$
Problem
Imagine Fig. 9.23 without the battery. And consider an L-C series circuit that despite
the absence of the battery has an on/off switch. The switch is kept off as long as the
time t is < t0 . The charge ±Q 0 on the plates stays put as is while the switch is off.
When at t = t0 the switch is turned on, it allows the charge to start flowing. Describe
the resultant discharge of the capacitor.
Battery Absent
Solution
In the absence of the battery, the voltage V can be excluded from the L − C series-
circuit differential equation (9.98). The two unknown constants, σ1 and σ2 , in the
result of the remaining equation are determined from the known boundary condition:
q(t0 ) = Q 0 and i(t0 ) = 0.
Thus
σ1 = Q 0 sin(ω t0 ) ;
σ2 = Q 0 cos (ω t0 ) . (9.103)
q = Q 0 cos [ω (t − t0 )] ;
i = −ω Q 0 sin [ω (t − t0 )] . (9.104)
Because there is no impressed voltage, the sum of these two voltage drops is zero.
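That statement can be checked directly: with the battery removed, (9.104) must satisfy $L\,q'' + q/C = 0$, i.e. the battery-free form of (9.98). A numerical sketch with illustrative component values (our own choices, not from the text):

```python
import math

L, C, Q0, t0 = 2.0, 0.5, 1.0e-3, 0.0
omega = 1.0 / math.sqrt(L * C)               # Eq. (9.101)

def q(t):                                    # charge, Eq. (9.104)
    return Q0 * math.cos(omega * (t - t0))

def i(t):                                    # current, Eq. (9.104)
    return -omega * Q0 * math.sin(omega * (t - t0))

def residual(t, h=1e-5):
    # L q'' + q/C with q'' from a central difference; ~0 for the solution.
    qpp = (q(t + h) - 2.0 * q(t) + q(t - h)) / (h * h)
    return L * qpp + q(t) / C
```

The boundary conditions q(t0) = Q0 and i(t0) = 0 hold by construction, and the residual vanishes to within the accuracy of the finite-difference derivative.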
Problem
Consider an L-R-C circuit with a battery that operates at constant potential V. The
switch is kept off as long as the time t is < 0. While the switch is off, the capacitor
plates have no charge. When at t = +0 the switch is turned on, it allows charge to
start flowing. Describe the charge q(t) and its rate of accumulation.
9.5.2 Solution
The L-R-C series-circuit shown in Fig. 9.24 contains resistance R ohms, inductance
L henries, and capacitance C farads. Shown also is the battery that provides constant
voltage V volts. Also there is an on–off switch. Before the switch is turned on at
t=0, the capacitor is completely uncharged. At t = +0, charge +dq streams out
from the positive terminal of the battery and starts getting deposited on the left-
hand plate of the capacitor. As a result, an equal amount of negative charge—that is,
−dq—is induced on the right-hand plate of the capacitor. Because of the physical
requirement that charge can neither be created nor destroyed, this process causes an
equal amount of positive charge—that is, +dq—to move away from the right-hand
plate of the capacitor, continue its travel around the circuit, and in time dt return
to the negative terminal of the battery. This whole process is equivalent to a current,
dq/dt, flowing around the entire circuit, from the positive terminal of the battery back
through to the negative terminal.
Streaming of the charge and changes in potential across the circuit are intercon-
nected. According to Kirchhoff’s second law, over a complete cycle, the total change
in potential is zero. Assuming the battery provides current at V volts, a flowchart of
the voltage changes across the whole circuit, at a given time t ≥ 0, would be as fol-
lows. Voltage change across capacitor C + voltage change across inductor L + volt-
age change across resistor R + voltage change across the battery = 0. That is,
$$-\,\frac{q(t)}{C} - L\,\frac{di(t)}{dt} - R\,i(t) + V = 0\,. \qquad (9.107)$$
The solution of (9.107) is the sum of a complementary solution and a particular
integral, $q(t) = S_{\rm comp}(t) + I_{\rm pi}(t)$, where $S_{\rm comp}(t)$ and $I_{\rm pi}(t)$ are defined as follows.
$$\left[\frac{d^2}{dt^2} + \frac{R}{L}\,\frac{d}{dt} + \frac{1}{LC}\right] S_{\rm comp}(t) = 0\,. \qquad (9.110)$$
$$\left[\frac{d^2}{dt^2} + \frac{R}{L}\,\frac{d}{dt} + \frac{1}{LC}\right] I_{\rm pi}(t) = \frac{V}{L}\,. \qquad (9.111)$$
According to the description given in detail in Chap. 3—see (3.55) and (3.56)—
Scomp (t) is readily calculated. Similarly, I pi (t) may be found by using (3.61) and
(3.67). We get
$$S_{\rm comp}(t) = \sigma_1 \exp\left(-\frac{Rt}{2L}\right)\exp\left(t\sqrt{\left(\frac{R}{2L}\right)^2 - \frac{1}{LC}}\right) + \sigma_2 \exp\left(-\frac{Rt}{2L}\right)\exp\left(-\,t\sqrt{\left(\frac{R}{2L}\right)^2 - \frac{1}{LC}}\right). \qquad (9.112)$$
$$I_{\rm pi}(t) = \frac{V}{L}\,\frac{1}{\left(\frac{1}{LC}\right)} = C\,V\,. \qquad (9.113)$$
It is interesting to note that in (9.110) and (9.111), the expression $\frac{1}{LC}$ plays the
role of the angular frequency squared, while $\frac{R}{L}$ acts as a friction coefficient. [Note: In
(9.112) the constants σ1 and σ2 are arbitrary and, as usual, can be determined by two
boundary conditions.] For convenience, introduce the notation
$$\alpha = \sqrt{\left(\frac{R}{2L}\right)^2 - \frac{1}{LC}}\,. \qquad (9.114)$$
In terms of α, the general solution of (9.107) may be written as
$$q(t) = \sigma_1 \exp(\lambda_1 t) + \sigma_2 \exp(\lambda_2 t) + C\,V\,, \qquad (9.115)$$
where
$$\lambda_1 = -\frac{R}{2L} + \alpha\,; \qquad \lambda_2 = -\frac{R}{2L} - \alpha\,. \qquad (9.116)$$
No Impressed Voltage
When the battery is absent, the impressed voltage, V, is zero. As a result, $I_{\rm pi}(t)$
vanishes and, according to (9.109) and (9.115),
$$\lambda_1 < 0\,; \qquad \lambda_2 < 0\,; \qquad \lambda_1 - \lambda_2 > 0\,. \qquad (9.121)$$
Shown in Fig. 9.25 is a closed-loop series circuit that contains a resistor, an inductor,
a capacitor, and an on–off switch. The switch is off while t ≤ 0. At t = 0, the
charge on the left plate of the capacitor is +q(0) and the current i(0) = 0. However,
immediately as t > 0 the switch is turned on and the capacitor begins discharging:
thereby resulting in current i(t) flowing from the positively charged left plate all the
way around the circuit to the negatively charged right plate of the capacitor.
Problem
Work out the time dependence of the capacitor charge q and the electric current i in
the circuit. For numerical calculation use:
Inductance L = 1 H ; Resistance R = 10³ Ω ;
Capacitance C = (9 × 10⁴)⁻¹ F ; Capacitor charge q(0) = 10⁻³ C.
Solution
Clearly I pi (t) = 0 for t ≥ 0 because there is no impressed voltage.
Regarding the charge q(t) on the capacitor, and the current i(t) flowing in the
circuit, at t = 0 we are told i(0) = 0 and, according to (9.115),
$$i(t) = \frac{dq(t)}{dt} = \sigma_1 \lambda_1 \exp(\lambda_1 t) + \sigma_2 \lambda_2 \exp(\lambda_2 t)\,. \qquad (9.123)$$
$$i(0) = \sigma_1 \lambda_1 + \sigma_2 \lambda_2 = 0\ \text{A}\,. \qquad (9.124)$$
$$\sigma_1 = -\,\frac{\lambda_2}{\lambda_1 - \lambda_2}\,q(0) = -\,\frac{\lambda_2}{\lambda_1 - \lambda_2} \times 10^{-3}\ \text{C}\,; \qquad \sigma_2 = \frac{\lambda_1}{\lambda_1 - \lambda_2}\,q(0) = \frac{\lambda_1}{\lambda_1 - \lambda_2} \times 10^{-3}\ \text{C}\,. \qquad (9.125)$$
According to (9.114) and (9.116), one needs $\frac{R}{2L}$ and $\frac{1}{LC}$ for calculating α,
$\lambda_1$, and $\lambda_2$. Using the numbers provided, one gets
$$\frac{R}{2L} = 500\,; \qquad \frac{1}{LC} = 90{,}000\,; \qquad \alpha = 400\,;$$
$$\lambda_1 = -100\,; \qquad \lambda_2 = -900\,;$$
$$\sigma_1 = \frac{9 \times 10^{-3}}{8}\,; \qquad \sigma_2 = -\,\frac{10^{-3}}{8}\,. \qquad (9.126)$$
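These numbers follow directly from the given component values; a short sketch (variable names are ours) that reproduces (9.126) and checks the boundary conditions:

```python
import math

L, R, C = 1.0, 1.0e3, 1.0 / 9.0e4        # H, ohm, F: C = (9 x 10^4)^-1 F
q0 = 1.0e-3                              # coulomb

a = R / (2.0 * L)                        # R/2L = 500
w2 = 1.0 / (L * C)                       # 1/LC = 90 000
alpha = math.sqrt(a * a - w2)            # Eq. (9.114): 400
lam1 = -a + alpha                        # Eq. (9.116): -100
lam2 = -a - alpha                        # Eq. (9.116): -900

sigma1 = -lam2 * q0 / (lam1 - lam2)      # Eq. (9.125): 9e-3/8
sigma2 = lam1 * q0 / (lam1 - lam2)       # Eq. (9.125): -1e-3/8

def q(t):
    # Overdamped discharge, battery absent.
    return sigma1 * math.exp(lam1 * t) + sigma2 * math.exp(lam2 * t)

def i(t):                                # Eq. (9.123)
    return sigma1 * lam1 * math.exp(lam1 * t) + sigma2 * lam2 * math.exp(lam2 * t)
```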
the two roots of the characteristic equation are equal. That is,
$$\lambda_1 = \lambda_2 \equiv \omega_0 = -\,\frac{R}{2L}\,. \qquad (9.129)$$
Problem
Assume the capacitor charge is q(0) as the impressed voltage is turned off at t = 0.
Immediately thereafter, the capacitor begins discharging . Work out the time depen-
dence of the electric current and the (magnitude) of the charge on a capacitor plate.
For numerical calculation use :
These numbers obey (9.128) which is a requirement for a critically damped circuit.
Solution
At t = 0, the charge is q(0) and the current i(0). Inserting this information into
(9.130) and (9.131), as well as noting the fact that i(0) = 0, we are led to
q(0) = σ3 ,
i(0) = 0 = ω0 σ3 + σ4 (9.132)
which gives
− ω0 q(0) = σ4 . (9.133)
Using the results obtained in (9.132) and (9.133) in (9.130) and (9.131), one gets
$$q(t) = q(0)\left[1 + \frac{R}{2L}\,t\right]\exp\left(-\frac{Rt}{2L}\right) \qquad (9.134)$$
and
$$i(t) = -\,q(0)\left(\frac{R}{2L}\right)^2 t\,\exp\left(-\frac{R}{2L}\,t\right). \qquad (9.135)$$
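For a critically damped choice of components, (9.134) and (9.135) must satisfy the homogeneous circuit equation $L\,q'' + R\,q' + q/C = 0$. A numerical sketch with illustrative values (our own choices, tuned so that R² = 4L/C):

```python
import math

L, C = 1.0, 1.0
R = 2.0 * math.sqrt(L / C)               # critical damping: (R/2L)^2 = 1/LC
q0 = 1.0
w0 = R / (2.0 * L)

def q(t):                                # Eq. (9.134)
    return q0 * (1.0 + w0 * t) * math.exp(-w0 * t)

def i(t):                                # Eq. (9.135)
    return -q0 * w0 * w0 * t * math.exp(-w0 * t)

def residual(t, h=1e-5):
    # L q'' + R q' + q/C via central differences; ~0 for the true solution.
    qp = (q(t + h) - q(t - h)) / (2.0 * h)
    qpp = (q(t + h) - 2.0 * q(t) + q(t - h)) / (h * h)
    return L * qpp + R * qp + q(t) / C
```

The boundary conditions q(0) = q(0) and i(0) = 0 hold by inspection, and the residual vanishes to within the finite-difference accuracy.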
Turn the switch on in the L − R − C circuit shown in Fig. 9.28. The large battery
supplies current at constant voltage V volts. At time t, current i 1 (t) is flowing out
of the positive terminal of the battery. At the first circuit point A, current i(t) sepa-
rates from i 1 (t) and moves up the inductor L to the circuit point B. The remaining
current— that is, i 1 (t) − i(t)—continues onward and moves up the capacitor C even-
tually joining at the second circuit point B with the current i(t). Together these two
currents make up current i 1 (t) which, after passing through the resistor R, returns
to the negative terminal of the battery: thereby completing the closed-loop circuit.
Assume at time t, the charge on the lower plate of the capacitor is +q(t) and that on
the higher plate is −q(t).
Problem
Refer to Fig. 9.28 and work out currents i(t), i 1 (t) and charge q(t).
Solution
Figure 9.28 actually describes three separate circuits. Each follows its own differential
equation and makes its own contribution to the process.
For instance, consider the first circuit on the left, which contains the battery, the
inductor L, and the resistor R. Its differential equation is
$$V - L\,\frac{di(t)}{dt} - R\,i_1(t) = 0\,. \qquad (9.136)$$
Next, there is the L-C circuit that is contained within points A and B. As shown
in Fig. 9.24, both the inductor L and the capacitor C experience identical potential
difference that obtains between points A and B: hence the equality
$$L\,\frac{di(t)}{dt} = \frac{q(t)}{C}\,, \qquad (9.137)$$
where
$$\frac{dq(t)}{dt} = i_1(t) - i(t)\,. \qquad (9.138)$$
In addition to the above three equations, one can also write a fourth equation that
refers to the circuit heading from the battery through the capacitor back to the battery.
That is,
$$V = \frac{q(t)}{C} + R\,i_1\,. \qquad (9.139)$$
It turns out that (9.139) does not provide any new information beyond what is already
included in the first set of three equations (9.136), (9.137), and (9.138). But that is
not a problem because the first set of equations is sufficient for determining the three
unknowns: i(t), i 1 (t) and q(t). In order to carry out this determination, proceed as
follows.
Differentiate (9.137) and combine the result with (9.138):
$$L\,\frac{d^2 i(t)}{dt^2} = \frac{1}{C}\,\frac{dq(t)}{dt} = \frac{i_1(t) - i(t)}{C}\,. \qquad (9.140)$$
Next eliminate $i_1(t)$ by using (9.136), and divide the result, (9.141), by L. This
leads to the desired, single, differential equation whereby i(t) can be determined:
$$\frac{d^2 i(t)}{dt^2} + \frac{1}{RC}\,\frac{di(t)}{dt} + \frac{1}{LC}\,i(t) = \frac{V}{LRC}\,. \qquad (9.142)$$
The solution of (9.142) is the sum of a complementary solution and a particular
integral, $i(t) = S_{\rm comp}(t) + I_{\rm pi}(t)$, where $S_{\rm comp}(t)$ and $I_{\rm pi}(t)$ are defined as follows.
$$\left[\frac{d^2}{dt^2} + \frac{1}{RC}\,\frac{d}{dt} + \frac{1}{LC}\right] S_{\rm comp}(t) = 0\,. \qquad (9.144)$$
$$\left[\frac{d^2}{dt^2} + \frac{1}{RC}\,\frac{d}{dt} + \frac{1}{LC}\right] I_{\rm pi}(t) = \frac{V}{LRC}\,. \qquad (9.145)$$
According to the description given in detail in Chap. 3—see (3.55) and (3.56)—
Scomp (t) is readily calculated. Similarly, I pi (t) may be calculated by using (3.61)
and (3.67).
$$S_{\rm comp}(t) = \sigma_1 \exp\left(-\frac{t}{2RC}\right)\exp\left(t\sqrt{\left(\frac{1}{2RC}\right)^2 - \frac{1}{LC}}\right) + \sigma_2 \exp\left(-\frac{t}{2RC}\right)\exp\left(-\,t\sqrt{\left(\frac{1}{2RC}\right)^2 - \frac{1}{LC}}\right). \qquad (9.146)$$
$$I_{\rm pi}(t) = \frac{V}{LRC}\,\frac{1}{\left(\frac{1}{LC}\right)} = \frac{V}{R}\,. \qquad (9.147)$$
Following (9.137) → (9.147), both the charge q(t) and the current i 1 (t) can now
be calculated. For instance, (9.137) gives
$$q(t) = L\,C\,\frac{di(t)}{dt}\,, \qquad (9.148)$$
which leads to
$$\begin{aligned} q(t) = &-\frac{L}{2R}\,\sigma_1 \exp\left(-\frac{t}{2RC}\right)\exp\left(t\sqrt{\left(\frac{1}{2RC}\right)^2 - \frac{1}{LC}}\right) \\ &-\frac{L}{2R}\,\sigma_2 \exp\left(-\frac{t}{2RC}\right)\exp\left(-\,t\sqrt{\left(\frac{1}{2RC}\right)^2 - \frac{1}{LC}}\right) \\ &+\sqrt{\left(\frac{L}{2R}\right)^2 - LC}\,\left[\sigma_1 \exp\left(-\frac{t}{2RC}\right)\exp\left(t\sqrt{\left(\frac{1}{2RC}\right)^2 - \frac{1}{LC}}\right)\right] \\ &-\sqrt{\left(\frac{L}{2R}\right)^2 - LC}\,\left[\sigma_2 \exp\left(-\frac{t}{2RC}\right)\exp\left(-\,t\sqrt{\left(\frac{1}{2RC}\right)^2 - \frac{1}{LC}}\right)\right]. \end{aligned} \qquad (9.149)$$
And knowing q(t) and i(t)—see (9.157), (9.143), (9.146), and (9.147)—(9.138)
provides a simple route to the evaluation of the current i 1 (t).
Problem
At time t = 0, the two plates of the capacitor hold charges q(0) ≡ Q 0 and −Q 0 and
current i 2 (t = 0) is equal to I20 . Work out q(t), i 1 (t), and i 2 (t). The relevant plot is
in Fig. 9.29.
Solution
The potential difference across points A and E is the same as that across points B
and D.
$$R\,i_1(t) = L\,\frac{di_2(t)}{dt}\,. \qquad (9.150)$$
The potential difference across points B and D is also equal to that across the capac-
itor C.
$$L\,\frac{di_2(t)}{dt} = \frac{q(t)}{C}\,. \qquad (9.151)$$
Additionally, the current flow out of the positive plate—and back into the negative
plate—of the capacitor leads to the relationship
$$\frac{dq(t)}{dt} = -\,i_1(t) - i_2(t)\,. \qquad (9.152)$$
Differentiate (9.151) and combine the result with (9.152):
$$L\,\frac{d^2 i_2(t)}{dt^2} = \frac{1}{C}\,\frac{dq(t)}{dt} = \frac{-\,i_1(t) - i_2(t)}{C}\,. \qquad (9.153)$$
Now eliminate $i_1(t)$ from (9.153) and (9.150) and divide the result by L to get
$$\frac{d^2 i_2(t)}{dt^2} + \frac{1}{RC}\,\frac{di_2(t)}{dt} + \frac{1}{LC}\,i_2(t) = 0\,. \qquad (9.154)$$
Its solution is
$$i_2(t) = \sigma_1 \exp\left(-\frac{t}{2RC}\right)\exp\left(t\sqrt{\left(\frac{1}{2RC}\right)^2 - \frac{1}{LC}}\right) + \sigma_2 \exp\left(-\frac{t}{2RC}\right)\exp\left(-\,t\sqrt{\left(\frac{1}{2RC}\right)^2 - \frac{1}{LC}}\right). \qquad (9.155)$$
Equations (9.155) and (9.157) provide the solution to Examples Group 7 except that
there still are two unknowns σ1 and σ2 . The given two boundary conditions, namely
i 2 (t = 0) = I20 and q(t = 0) = Q 0 , determine these unknowns. That is,
I20 = σ1 + σ2 (9.158)
and
$$Q_0 = -\,\frac{L}{2R}\,(\sigma_1 + \sigma_2) + (\sigma_1 - \sigma_2)\sqrt{\left(\frac{L}{2R}\right)^2 - LC}\,. \qquad (9.159)$$
Chapter 10
Numerical Solution
$$\frac{dy(x)}{dx} = F(x, y)\,,$$
$$A(x, y) = \frac{dx}{dt} = x + y\,, \qquad B(x, y) = \frac{dy}{dt} = x - y\,. \qquad (10.1)$$
Together these equations are equivalent to a single second-order differential equation.
Tables 10.2, 10.3, 10.4, 10.5, 10.6, and 10.7 display numerical results for the one-step,
two-step, and the five-step processes. Table 10.8 contains numerical results gathered
during a twenty-step Runge–Kutta process. At maximum extension, Δ = 2, the
Runge–Kutta estimate is 26.371190. It differs from the exact result, 26.371404, by
only a tiny amount, 0.000214. The percentage error involved is 0.000811. Table 10.9
records numerical results collected during a twenty-step Runge–Kutta process.
At maximum extension, Δ = 2, the Runge–Kutta estimate for Yn is 11.0170347.
It differs from the exact result, 11.0171222, by 0.0000875. The percentage error
involved is 0.000794. It is similar to the corresponding error, 0.000811%, for X n .
$$\frac{dy(x)}{dx} = F(x, y)\,. \qquad (10.2)$$
Its solution, y(x), contains one arbitrary constant that can be determined by speci-
fying one boundary condition. Let that boundary condition be the value of y(x) at
x = x0 . That is
y(x0 ) = Y0 . (10.3)
Assume the given differential equation cannot be solved by methods that have been
described so far in this book. The objective of the current exercise is to work out an
approximation procedure whereby one can proceed beyond the starting point in a
step-by-step process. Each step is chosen to be of length ε. In this manner, n steps
are needed to move from the initial location x0 to the desired final location xn:
$$x_n = x_0 + n\,\varepsilon = x_0 + \Delta\,. \qquad (10.4)$$
$$\varepsilon = \frac{\Delta}{n}\,. \qquad (10.5)$$
The protocol for carrying out this objective was suggested by Runge and Kutta. It
involves successive use of small ‘steps,’ each of which carries the process through a
small extension in position, say from x to (x + ε). To the above purpose, one proceeds
as follows. The first Runge–Kutta step of length ε takes us from the starting point
x0 to a neighboring point x1 = x0 + ε and requires calculating the parameters
$R_{1;1}$, $R_{2;1}$, $R_{3;1}$, $R_{4;1}$, and $K_1$. Here Y1 represents an estimate of the exact result $y_1 =
y(x_1) = y(x_0 + \varepsilon)$. Of course, the hope is that the extension, ε, is short enough that
the estimate Y1 is a good approximation to the unknown exact result y1.
Choose the value of x0, Y0 = y(x0), n, and Δ. Set ε = Δ/n.
It is important to note that Y0 has been chosen to be exactly equal to a given value.
Therefore, it is exact. On the other hand, Y1 is the Runge–Kutta estimated value of
y1 . Therefore, it is approximate. The objective of the current exercise is to make this
approximation as accurate as needed. And an important option for achieving this is
to make ε short.
Because each ε is short, reaching the endpoint requires extending through several
ε's. Usually, the beginning and the endpoints, say x0 and x0 + Δ, are determined by
the physical needs of the subject matter that is being studied. And, of course, the
greater the number n, the greater the effort needed to arrive at the desired estimate.
Therefore, the choice for n is made by the amount of effort one is willing to expend.
And because of the relationship (10.5), that decision fixes the value of ε.
We could now proceed to the second step. But it is more convenient directly to
go to the nth step.
Runge–Kutta : nth-Step
$$\frac{dy(x)}{dx} = x\,y\,. \qquad (10.8)$$
Assume we do not know how to solve it. Because it is first order, its solution must
contain one arbitrary constant that can be determined by a single boundary condition.
Let such boundary condition be its solution at x = x0 = 0 and assume that this
solution is equal to Y0 = 3. That is
y(x = x0 = 0) = Y0 = 3 . (10.9)
The Objective
Knowing x0 and y(x0 ), we wish to find the solution y(x) at a point x = x0 + .
Need for Exact Solution
As mentioned earlier, in principle we do not know the exact solution of (10.8).
For educational purposes, however, it is important to know how well the Runge–
Kutta procedure actually is faring. To that end, it is best to have available, hidden
somewhere in the background, the exact solution. Then, as we proceed through
the various Runge–Kutta steps, the accuracy of the numerical approximation can
properly be evaluated.
The exact boundary-value solution of (10.8) is
$$y(x) = a_0 \exp\left(\frac{x^2}{2}\right) = 3 \exp\left(\frac{x^2}{2}\right). \qquad (10.10)$$
$$\varepsilon = 2\,; \quad x_0 = 0\,; \quad Y_0 = 3\,;$$
$$R_{1;1} = \varepsilon\,x_0\,Y_0 = 0\,; \qquad R_{2;1} = \varepsilon\left(x_0 + \frac{\varepsilon}{2}\right)\left(Y_0 + \frac{R_{1;1}}{2}\right) = 6\,;$$
$$R_{3;1} = \varepsilon\left(x_0 + \frac{\varepsilon}{2}\right)\left(Y_0 + \frac{R_{2;1}}{2}\right) = 12\,;$$
$$R_{4;1} = \varepsilon\,(x_0 + \varepsilon)\left(Y_0 + R_{3;1}\right) = 60\,;$$
$$K_1 = \frac{1}{6}\left[R_{1;1} + 2\,R_{2;1} + 2\,R_{3;1} + R_{4;1}\right] = 16\,;$$
$$Y_1 = Y_0 + K_1 = 3 + 16 = 19\,. \qquad (10.11)$$
The Y1 calculated above is the estimated value of y(x = Δ). Its exact value is
$y(x = 2) = 3 \exp(2^2/2) = 22.167168$. Thus, for the longest-possible choice for ε, the
estimate is in error by 22.167168 − 19 ≈ 3.167 points, or about 14.29%.
$$\varepsilon = 1\,, \quad x_0 = 0\,, \quad Y_0 = 3\,;$$
$$R_{1;1} = 0\,; \qquad R_{2;1} = 1.5\,; \qquad R_{3;1} = 1.875\,; \qquad R_{4;1} = 4.875\,;$$
$$K_1 = \frac{1}{6}\left[R_{1;1} + 2\,R_{2;1} + 2\,R_{3;1} + R_{4;1}\right] = 1.9375\,;$$
$$Y_1 = Y_0 + K_1 = 3 + 1.9375 = 4.9375\,. \qquad (10.12)$$
$$\varepsilon = 1\,; \quad x_1 = \varepsilon = 1\,; \quad Y_1 = 4.9375\,;$$
$$R_{1;2} = \varepsilon\,x_1\,Y_1 = 4.9375\,; \qquad R_{2;2} = 11.1094\,; \qquad R_{3;2} = 15.7383\,; \qquad R_{4;2} = 41.3516\,;$$
$$K_2 = \frac{1}{6}\left[R_{1;2} + 2\,R_{2;2} + 2\,R_{3;2} + R_{4;2}\right] = 16.6641\,;$$
$$Y_2 = Y_1 + K_2 = 4.9375 + 16.6641 = 21.6016\,. \qquad (10.13)$$
Y2 = 21.6016, calculated above, is the estimated value of y(2). Its exact value is
3 exp(2) = 22.167168. Thus, for half-the-longest-possible choice for ε, the Runge–
Kutta estimate is already much improved over that for the longest-possible choice.
The result is now only 0.57 points in error, which is less than a fifth of the previous
error.
Solution with One-Fourth the Longest-Possible ε
Next, let us consider an even shorter value of ε equal to a quarter of the longest-
possible choice: meaning ε = 0.5. This requires trudging through four Runge–Kutta
steps, each of length ε. The first step yields
$$\varepsilon = 0.5\,; \quad x_0 = 0\,; \quad Y_0 = 3\,;$$
$$R_{1;1} = 0\,; \qquad R_{2;1} = 0.375\,; \qquad R_{3;1} = 0.398438\,; \qquad R_{4;1} = 0.849608\,;$$
$$K_1 = \frac{1}{6}\left[R_{1;1} + 2\,R_{2;1} + 2\,R_{3;1} + R_{4;1}\right] = 0.399414\,;$$
$$Y_1 = Y_0 + K_1 = 3 + 0.399414 = 3.399414\,. \qquad (10.14)$$
$$\varepsilon = 0.5\,; \quad x_1 = \varepsilon = 0.5\,; \quad Y_1 = 3.399414\,;$$
$$R_{1;2} = 0.849853\,; \qquad R_{2;2} = 1.43413\,; \qquad R_{3;2} = 1.54368\,; \qquad R_{4;2} = 2.47154\,;$$
$$K_2 = \frac{1}{6}\left[R_{1;2} + 2\,R_{2;2} + 2\,R_{3;2} + R_{4;2}\right] = 1.54617\,;$$
$$Y_2 = Y_1 + K_2 = 3.399414 + 1.54617 = 4.945584\,. \qquad (10.15)$$
$$\varepsilon = 0.5\,; \quad x_2 = 2\,\varepsilon = 1\,; \quad Y_2 = 4.945584\,;$$
$$R_{1;3} = 2.47279\,; \qquad R_{2;3} = 3.86373\,; \qquad R_{3;3} = 4.2984\,; \qquad R_{4;3} = 6.93299\,;$$
$$K_3 = \frac{1}{6}\left[R_{1;3} + 2\,R_{2;3} + 2\,R_{3;3} + R_{4;3}\right] = 4.28834\,;$$
$$Y_3 = Y_2 + K_3 = 4.945584 + 4.28834 = 9.233924\,. \qquad (10.16)$$
$$\varepsilon = 0.5\,; \quad x_3 = 3\,\varepsilon = 1.5\,; \quad Y_3 = 9.233924\,;$$
$$R_{1;4} = 6.92544\,; \qquad R_{2;4} = 11.1096\,; \qquad R_{3;4} = 12.9401\,; \qquad R_{4;4} = 22.174\,;$$
$$K_4 = \frac{1}{6}\left[R_{1;4} + 2\,R_{2;4} + 2\,R_{3;4} + R_{4;4}\right] = 12.8665\,;$$
$$Y_4 = Y_3 + K_4 = 9.233924 + 12.8665 = 22.100424\,. \qquad (10.17)$$
Equation (10.17) indicates that the use of one-quarter the longest-possible ε leads
to the Runge–Kutta approximation 22.100424. This result is in error by a mere
22.167 − 22.100 = 0.067 points. Being only 100 × (0.067/22.167) = 0.302%, this is a greatly
reduced error. Indeed, it is almost one-fiftieth of 100 × (22.167 − 19)/22.167 = 14.29%—
which was the error when the longest-possible ε was used.
Solution with One-Tenth the Longest-Possible ε
To get a good feel as to whether the improvement in the estimate is worth the addi-
tional effort, let us try an even shorter length: say, ε = Δ/10 = 0.2. In order to save
space, these results are tabulated in Table 10.1.
Table 10.1 indicates that when the travel to the maximum extension, x10 = 2 = Δ,
is carried out by using ε = Δ/10—meaning, when ten steps are used to get to the
maximum extension—the Runge–Kutta estimate is in error by ≈ 0.0025. The percentage
error now—being about 0.0113%—is absolutely minuscule. Indeed, it is much less
than one-thousandth the error, 14.3%, found by using the longest possible value of ε.
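The recipe trudged through above condenses into a few lines of code. The sketch below (ours, not the book's) mirrors the R-parameters—each R is ε times the right-hand side of (10.8) evaluated at the appropriate intermediate point—and reproduces the one-step, two-step, and ten-step estimates:

```python
import math

def rk4(f, x0, y0, x_end, n):
    # Classical fourth-order Runge-Kutta with n equal steps of
    # length eps = (x_end - x0)/n.
    eps = (x_end - x0) / n
    x, y = x0, y0
    for _ in range(n):
        r1 = eps * f(x, y)
        r2 = eps * f(x + eps / 2.0, y + r1 / 2.0)
        r3 = eps * f(x + eps / 2.0, y + r2 / 2.0)
        r4 = eps * f(x + eps, y + r3)
        y += (r1 + 2.0 * r2 + 2.0 * r3 + r4) / 6.0
        x += eps
    return y

f = lambda x, y: x * y                   # Eq. (10.8)
exact = 3.0 * math.exp(2.0)              # y(2) = 22.167168...
```

One step gives 19, two steps give 21.6016, and ten steps land within about 0.0025 of the exact value, as reported in Table 10.1.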
$$\frac{d^2x}{dt^2} = \frac{dx}{dt} + \frac{dy}{dt} = \frac{dx}{dt} + x - y = \frac{dx}{dt} + x + x - \frac{dx}{dt} = 2\,x\,. \qquad (10.19)$$
$$\frac{d^2y}{dt^2} = \frac{dx}{dt} - \frac{dy}{dt} = x + y - \frac{dy}{dt} = \frac{dy}{dt} + y + y - \frac{dy}{dt} = 2\,y\,. \qquad (10.20)$$
And the two constants a1 and b1 can be determined by the two boundary conditions—
namely
$$x(t = t_0 = 0) = 1 = a_1 + b_1 = X_0\,;$$
$$y(t = t_0 = 0) = 2 = \left(\sqrt{2} - 1\right) a_1 - \left(\sqrt{2} + 1\right) b_1 = Y_0\,. \qquad (10.23)$$
Inserting the a1 and b1 —given in (10.24)—into (10.22) provides the desired exact
solution of (10.18).
$$x(t) = \left(\frac{3 + \sqrt{2}}{2\sqrt{2}}\right)\exp\left(\sqrt{2}\,t\right) + \left(\frac{\sqrt{2} - 3}{2\sqrt{2}}\right)\exp\left(-\sqrt{2}\,t\right);$$
$$y(t) = \left(\sqrt{2} - 1\right)\left(\frac{3 + \sqrt{2}}{2\sqrt{2}}\right)\exp\left(\sqrt{2}\,t\right) - \left(\sqrt{2} + 1\right)\left(\frac{\sqrt{2} - 3}{2\sqrt{2}}\right)\exp\left(-\sqrt{2}\,t\right). \qquad (10.25)$$
Equations (10.25) duly satisfy the required two boundary conditions (10.23). And they
provide ready access to the exact result for comparison with the Runge–Kutta estimate.
One hastens to add that the Runge–Kutta numerical approximation—to be dis-
cussed in the following—for a coupled pair of first-order differential equations can
be employed without knowing the exact results in (10.25). The latter are provided
merely for educational purposes. In other words, exact results help evaluate the accu-
racy of the Runge–Kutta estimate.
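Applied component-wise, the same Runge–Kutta scheme handles the coupled pair (10.18). The sketch below (our own code, not the book's) runs twenty steps to t = 2 and compares with the exact solution (10.25):

```python
import math

def rk4_system(f, t0, u0, t_end, n):
    # Classical RK4 for a first-order system u' = f(t, u).
    eps = (t_end - t0) / n
    t, u = t0, list(u0)
    for _ in range(n):
        def r(dt, du):
            v = [ui + dui for ui, dui in zip(u, du)]
            return [eps * fi for fi in f(t + dt, v)]
        k1 = r(0.0, [0.0] * len(u))
        k2 = r(eps / 2.0, [k / 2.0 for k in k1])
        k3 = r(eps / 2.0, [k / 2.0 for k in k2])
        k4 = r(eps, k3)
        u = [ui + (a + 2.0 * b + 2.0 * c + d) / 6.0
             for ui, a, b, c, d in zip(u, k1, k2, k3, k4)]
        t += eps
    return u

f = lambda t, u: (u[0] + u[1], u[0] - u[1])   # x' = x + y, y' = x - y
x20, y20 = rk4_system(f, 0.0, (1.0, 2.0), 2.0, 20)

# Exact x(2) from (10.25) for comparison.
r2 = math.sqrt(2.0)
x_exact = (3.0 + r2) / (2.0 * r2) * math.exp(2.0 * r2) \
          + (r2 - 3.0) / (2.0 * r2) * math.exp(-2.0 * r2)
```

Twenty steps reproduce the accuracy quoted in the text: the estimates agree with the exact values 26.371404 and 11.0171222 to within a few parts in 10⁴.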
in error by 0.7881. The exact value is 26.37. In percentage terms, for the two-step
Runge–Kutta process, the error in the final value of Xn is 100 × (0.7881/26.37) = 2.9884%. That
is about one-sixth the error, 17.8%, for the single-step process.
Regarding the final Yn, the relevant tables are Tables 10.3 and 10.5. The single-step estimate
for Yn is in error by 0.3505 and the two-step process is in error by 0.2949. In
other words, while the error involved in the single-step process is 3.18%, the two-step
process is in error by 2.68%. The difference between the two sets of errors is not
significant. It is only 0.5%. Perhaps things will improve if we go to a five-step process.
Five-Step Numerical Approximation
Solution with One-Fifth the Longest-Possible ε
Error Comparison : One-Step Versus Five-Step Process
Examine Table 10.6. At full extension the Runge–Kutta estimate for Xn is
26.331396. Compared with the exact result, 26.371404, the estimate is in error
by 0.040008. This error is about (1/120)th the error, 4.705, found by using the one-step
Runge–Kutta process. In percentage terms, the error here is approximately 0.152%.
At maximum extension, the twenty-step Runge–Kutta estimate for Yn is 11.0170347.
It differs from the exact result, 11.0171222, by 0.0000875. The percentage
error involved is 0.000794. It is similar to the corresponding error, 0.000811%,
for X n . The accuracy achieved by the twenty-step Runge–Kutta estimate is quite
extraordinary. When high accuracy is essential, the twenty-step process is well worth
the additional effort.
Chapter 11
Frobenius Solution
is
$$\left[D^2 + M(x)\,D + N(x)\right] u(x) = 0\,, \qquad (11.2)$$
where
$$M(x) = \frac{B(x)}{A(x)}\,; \qquad N(x) = \frac{C(x)}{A(x)}\,. \qquad (11.3)$$
A function F(x) is ‘analytic’ at x = x0 if its Taylor series about x0
converges to F(x) for all x at, and in the neighborhood of, x0. Note that trigonometric
functions, polynomials, and exponentials of analytic functions are analytic, as are
rational functions except at the points where their denominator is vanishing.
If both coefficients, M (x) and N (x), are analytic at x = x0 the point x0 is called an
‘Ordinary Point’ of the differential equation (11.2).
On the other hand, if either or both of these coefficients is/are not analytic at x = x0 ,
but both (x − x0 ) M (x) and (x − x0 )2 N (x) are analytic at x = x0 , then x0 is a ‘Regular
Singular Point’ of the differential equation (11.2).
But if one and/or the other of the products (x − x0) M(x) and (x − x0)² N(x) is/are not
analytic at x = x0, then x0 is an ‘Irregular Singular Point’ of the given differential
equation.
11.1.7 Solution
In (11.5), the parameters α, β, γ are real constants. We classify these equations,
$$\left[D^2 + \alpha\,x\,D + \left(\beta\,x^2 + \gamma\right)\right] u(x) = 0\,, \qquad (11.5)$$
as being of type (a). Clearly x = 0 is an ordinary point of the differential
equation (11.5) [Note: The symbols M(x) and N(x) are defined in (11.2)], because
both the relevant coefficients M(x) = α x and N(x) = β x² + γ are analytic at
x = 0. The Frobenius solution of a differential equation around its ordinary point, x = 0,
is found in simple form. Simple form refers to the case where the unknown constant
ν0 is set equal to 0, and the Frobenius solution is expressed as a Maclaurin series.
$$u(x) = \sum_{n=0}^{\infty} a_n\,x^n\,. \qquad (11.6)$$
In order to use (11.6), the differentials that would be needed in (11.5) are the
following:
$$D^2 u(x) = \sum_{n=0}^{\infty} n(n-1)\,a_n\,x^{n-2} = \sum_{n=2}^{\infty} n(n-1)\,a_n\,x^{n-2}\,;$$
$$D u(x) = \sum_{n=0}^{\infty} n\,a_n\,x^{n-1} = \sum_{n=1}^{\infty} n\,a_n\,x^{n-1}\,. \qquad (11.7)$$
Unfortunately (11.8), in its current form, is not convenient for working out the details
of the unknown parameters an . To relieve this inconvenience, one would want to
achieve two things. First: All infinite sums should have the same range. Second: All
infinite sums should post the same power of x, say xn .
To begin this process consider, the first term on the left-hand side in (11.8) and
manipulate it as follows:
$$\sum_{n=2}^{\infty} n(n-1)\,a_n\,x^{n-2} = 2\,a_2 + 6\,a_3\,x + \sum_{n=4}^{\infty} n(n-1)\,a_n\,x^{n-2} = 2\,a_2 + 6\,a_3\,x + \sum_{n=2}^{\infty} (n+2)(n+1)\,a_{n+2}\,x^{n}\,. \qquad (11.9)$$
[Note: In order to render the term $\sum_{n=4}^{\infty} n(n-1)\,a_n\,x^{n-2}$ into
$\sum_{n=2}^{\infty} (n+2)(n+1)\,a_{n+2}\,x^{n}$, set $n - 2 = n'$ so that $n = n' + 2$. Then write
$\sum_{n=4}^{\infty} n(n-1)\,a_n\,x^{n-2}$ as $\sum_{n'+2=4}^{\infty} (n'+2)(n'+1)\,a_{n'+2}\,x^{n'}$ and finally change the
notation $n' \to n$ to get the desired result.]
Next look at the second term on the left-hand side in (11.8) and manipulate it so
that the infinite sum, like (11.9), is summed from n = 2 and has xn in it. Fortunately
that is easy to achieve.
$$\alpha\,x \sum_{n=1}^{\infty} n\,a_n\,x^{n-1} = \alpha\,a_1\,x + \alpha \sum_{n=2}^{\infty} n\,a_n\,x^{n}\,. \qquad (11.10)$$
Finally consider the last term on the left-hand side in (11.8). One gets
$$\beta \sum_{n=0}^{\infty} a_n\,x^{n+2} = \beta \sum_{n=2}^{\infty} a_{n-2}\,x^{n}\,. \qquad (11.11)$$
[Note: To see how $\beta \sum_{n=0}^{\infty} a_n x^{n+2}$ becomes $\beta \sum_{n=2}^{\infty} a_{n-2} x^{n}$, set $n' = n + 2$ and
change the variable from $n' \to n$.]
The original differential equation (11.8) is written as: [(11.9)] + [(11.10)] +
[(11.12)] = 0. That is
$$\left[2\,a_2 + 6\,a_3\,x + \sum_{n=2}^{\infty} (n+2)(n+1)\,a_{n+2}\,x^{n}\right] + \left[\alpha\,a_1\,x + \alpha \sum_{n=2}^{\infty} n\,a_n\,x^{n}\right] + \left[\gamma\,a_0 + \gamma\,a_1\,x + \gamma \sum_{n=2}^{\infty} a_n\,x^{n} + \beta \sum_{n=2}^{\infty} a_{n-2}\,x^{n}\right] = 0\,. \qquad (11.13)$$
Or more conveniently as
$$(\gamma\,a_0 + 2\,a_2) + x\,(\alpha\,a_1 + \gamma\,a_1 + 6\,a_3) + \sum_{n=2}^{\infty} x^{n} \left\{(n+2)(n+1)\,a_{n+2} + (\alpha\,n + \gamma)\,a_n + \beta\,a_{n-2}\right\} = 0\,. \qquad (11.14)$$
Equation (11.14) should be true for all values of x. And that can happen only if the
coefficient multiplying any given power of x is zero: meaning, only if the following
relationships hold:
(γ a0 + 2 a2 ) = 0 , (α a1 + γ a1 + 6 a3 ) = 0 , (11.15)
and for n ≥ 2,
$$a_{n+2} = -\,\frac{(\alpha\,n + \gamma)\,a_n + \beta\,a_{n-2}}{(n+2)(n+1)}\,. \qquad (11.18)$$
$$u(x = 0) = \sigma_1 = a_0\,; \qquad \left.\frac{du}{dx}\right|_{x=0} = \sigma_2 = a_1\,. \qquad (11.23)$$
Inserting (11.17) and (11.19)–(11.23) into (11.6) leads to the result—see (11.24)
given below—for the simple form of the Frobenius series solution of (11.5).
$$\begin{aligned} u(x) = &\left[1 - \frac{\gamma}{2}\,x^2 + \frac{(2\alpha + \gamma)\frac{\gamma}{2} - \beta}{12}\,x^4 - \left\{\frac{(4\alpha + \gamma)\,\frac{(2\alpha+\gamma)\frac{\gamma}{2} - \beta}{12} - \beta\,\frac{\gamma}{2}}{30}\right\} x^6 + O(x^8)\right]\sigma_1 \\ + &\left[x - \frac{\alpha + \gamma}{6}\,x^3 + \frac{(3\alpha + \gamma)\frac{\alpha+\gamma}{6} - \beta}{20}\,x^5 - \left\{\frac{(5\alpha + \gamma)\,\frac{(3\alpha+\gamma)\frac{\alpha+\gamma}{6} - \beta}{20} - \beta\,\frac{\alpha+\gamma}{6}}{42}\right\} x^7 + O(x^9)\right]\sigma_2\,. \end{aligned} \qquad (11.24)$$
One good thing about having arbitrary constants, α, β, γ, in (11.5) is that different
choices of these constants lead to different differential equations. And the solution in
the form of (11.24) makes the first several terms of their Frobenius series solutions
instantly available.
To see how the above process works, let us solve the following five differential
equations which are similar in form to (11.5) which is of type (a).
All we have to do to solve the (11.25) is to use the values of α, β, and γ that are
given. First few terms of the relevant Frobenius power series solution of equations I-
(A)–I-(E) are the following.
For differential equation I-(A), use α = 2, β = 3, γ = 4. Therefore, its solution is

$$\mathrm{I\text{-}(A)}:\quad u(x) = \sigma_1\left[1 - 2x^2 + \tfrac{13}{12}x^4 - \tfrac{7}{30}x^6 + O(x^8)\right] + \sigma_2\left[x - x^3 + \tfrac{7}{20}x^5 - \tfrac{19}{420}x^7 + O(x^9)\right]. \tag{11.26}$$
$$\mathrm{I\text{-}(B)}:\quad u(x) = \sigma_1\left[1 - \tfrac{1}{2}x^2 - \tfrac{1}{24}x^4 + \tfrac{29}{720}x^6 + O(x^8)\right] + \sigma_2\left[x - \tfrac{1}{3}x^3 - \tfrac{1}{30}x^5 + \tfrac{13}{630}x^7 + O(x^9)\right]. \tag{11.27}$$

$$\mathrm{I\text{-}(C)}:\quad u(x) = \sigma_1\left[1 + \tfrac{1}{2}x^2 - \tfrac{5}{24}x^4 + \tfrac{23}{720}x^6 + O(x^8)\right] + \sigma_2\left[x - \tfrac{1}{6}x^3 - \tfrac{1}{120}x^5 + \tfrac{29}{5040}x^7 + O(x^9)\right]. \tag{11.28}$$

$$\mathrm{I\text{-}(D)}:\quad u(x) = \sigma_1\left[1 - \tfrac{1}{2}x^2 + \tfrac{1}{8}x^4 - \tfrac{1}{48}x^6 + O(x^8)\right] + \sigma_2\left[x - \tfrac{2}{3}x^3 + \tfrac{7}{30}x^5 - \tfrac{2}{35}x^7 + O(x^9)\right]. \tag{11.29}$$

$$\mathrm{I\text{-}(E)}:\quad u(x) = \sigma_1\left[1 + \tfrac{1}{2}x^2 + \tfrac{5}{24}x^4 + \tfrac{37}{720}x^6 + O(x^8)\right] + \sigma_2\left[x + \tfrac{1}{3}x^3 + \tfrac{7}{60}x^5 + \tfrac{31}{1260}x^7 + O(x^9)\right]. \tag{11.30}$$
Having solved Frobenius equations of Type (a) around their ordinary point, somewhat
more complicated such equations—to be referred to as equations of Type (b)—are
tackled below.
$$\mathrm{(b)}:\quad \left[D^2 + (\alpha x^2 + \beta x + \gamma)\,D + (\mu x^2 + \nu x + \rho)\right] u(x) = 0\,. \tag{11.32}$$
11.1.11 Solution
Again the initial conditions are $u(x=0) = \sigma_1$ and $\left.\frac{du}{dx}\right|_{x=0} = \sigma_2$. All six constants α, β, γ, μ, ν, ρ are real. Clearly, x = 0 is an ordinary point of (11.32) because both of the coefficients $M(x) = \alpha x^2 + \beta x + \gamma$ and $N(x) = \mu x^2 + \nu x + \rho$ are analytic at x = 0. The Frobenius solution of a differential equation around its ordinary point x = 0 is best found in simple form. Simple form refers to the case where the unknown constant ν0 is set equal to 0, and the Frobenius solution is expressed as a Maclaurin series,
$$u(x) = \sum_{n=0}^{\infty} a_n\,x^{n}\,. \tag{11.33}$$
$$D^2 u(x) = \sum_{n=0}^{\infty} n(n-1)\,a_n\,x^{n-2} = \sum_{n=2}^{\infty} n(n-1)\,a_n\,x^{n-2}\,; \qquad D u(x) = \sum_{n=0}^{\infty} n\,a_n\,x^{n-1} = \sum_{n=1}^{\infty} n\,a_n\,x^{n-1}\,. \tag{11.34}$$
$$\left[D^2 + (\alpha x^2+\beta x+\gamma)\,D + (\mu x^2+\nu x+\rho)\right]u(x) = \sum_{n=2}^{\infty} n(n-1)\,a_n\,x^{n-2} + (\alpha x^2+\beta x+\gamma)\sum_{n=1}^{\infty} n\,a_n\,x^{n-1} + (\mu x^2+\nu x+\rho)\sum_{n=0}^{\infty} a_n\,x^{n} = 0\,. \tag{11.35}$$
Unfortunately (11.35), in its current form, is not convenient for working out the details of the unknown parameters a_n. To relieve this inconvenience, one wants to achieve two things. First: all infinite sums should have the same range. Second: all infinite sums should contain the same power of x, say x^n.
To begin this process, consider the first term on the right-hand side in (11.35) and manipulate it as follows:

$$\sum_{n=2}^{\infty} n(n-1)\,a_n\,x^{n-2} = 2a_2 + 6a_3 x + \sum_{n=4}^{\infty} n(n-1)\,a_n\,x^{n-2} = 2a_2 + 6a_3 x + \sum_{n=2}^{\infty} (n+2)(n+1)\,a_{n+2}\,x^{n}\,.$$

[Note: In order to render the term $\sum_{n=4}^{\infty} n(n-1)\,a_n\,x^{n-2}$ into $\sum_{n=2}^{\infty} (n+2)(n+1)\,a_{n+2}\,x^{n}$, set $n-2 = n'$ so that $n = n'+2$. Then write $\sum_{n=4}^{\infty} n(n-1)\,a_n\,x^{n-2}$ as $\sum_{n'+2=4}^{\infty} (n'+2)(n'+1)\,a_{n'+2}\,x^{n'}$ and finally change the notation $n' \to n$ to achieve the given result $\sum_{n=2}^{\infty}(n+2)(n+1)\,a_{n+2}\,x^{n}$.]

Therefore, the first term on the right-hand side in (11.35) is

$$\sum_{n=2}^{\infty} n(n-1)\,a_n\,x^{n-2} = 2a_2 + 6a_3 x + \sum_{n=2}^{\infty}(n+2)(n+1)\,a_{n+2}\,x^{n}\,. \tag{11.36}$$
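The index shift used above can be checked numerically. The snippet below (a sketch, not from the book; the sample coefficients are hypothetical) evaluates both sides of (11.36) at a rational point for a finite set of coefficients and confirms they agree exactly:

```python
from fractions import Fraction as F

# Sample coefficients a_0 .. a_11 (hypothetical values, just for the check)
a = [F(k * k - 3, 2) for k in range(12)]
x = F(1, 3)

# Left-hand side of (11.36): sum over n = 2..11 of n(n-1) a_n x^(n-2)
lhs = sum(n * (n - 1) * a[n] * x**(n - 2) for n in range(2, 12))
# Right-hand side: 2 a_2 + 6 a_3 x + sum over n = 2..9 of (n+2)(n+1) a_{n+2} x^n
rhs = (2 * a[2] + 6 * a[3] * x
       + sum((n + 2) * (n + 1) * a[n + 2] * x**n for n in range(2, 10)))
assert lhs == rhs
print("index shift preserves the sum exactly")
```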
Next manipulate the second term on the right-hand side in (11.35) and arrange it so that the sum includes x^n and ranges from n = 2 to ∞:

$$(\alpha x^2 + \beta x + \gamma) \sum_{n=1}^{\infty} n\,a_n\,x^{n-1} = \alpha \sum_{n=2}^{\infty} (n-1)\,a_{n-1}\,x^{n} + \beta a_1 x + \beta \sum_{n=2}^{\infty} n\,a_n\,x^{n} + \gamma \sum_{n=1}^{\infty} n\,a_n\,x^{n-1}\,. \tag{11.37}$$
The following transformations were used in (11.37). The first term on the left-hand side of (11.37) is

$$\alpha x^2 \sum_{n=1}^{\infty} n\,a_n\,x^{n-1} = \alpha \sum_{n=1}^{\infty} n\,a_n\,x^{n+1}\,. \tag{11.38}$$

Set $n+1 = n'$. As a result, the first term on the left-hand side of (11.37) becomes

$$\alpha x^2 \sum_{n=1}^{\infty} n\,a_n\,x^{n-1} = \alpha \sum_{n=1}^{\infty} n\,a_n\,x^{n+1} = \alpha \sum_{n'=2}^{\infty} (n'-1)\,a_{n'-1}\,x^{n'} = \alpha \sum_{n=2}^{\infty} (n-1)\,a_{n-1}\,x^{n}\,. \tag{11.39}$$
The second term is $\beta x \sum_{n=1}^{\infty} n\,a_n\,x^{n-1}$. Write it as

$$\beta x \sum_{n=1}^{\infty} n\,a_n\,x^{n-1} = \beta \sum_{n=1}^{\infty} n\,a_n\,x^{n} = \beta a_1 x + \beta \sum_{n=2}^{\infty} n\,a_n\,x^{n}\,. \tag{11.40}$$
The third term is $\gamma \sum_{n=1}^{\infty} n\,a_n\,x^{n-1}$. Set $n-1 = n'$. As a result,

$$\gamma \sum_{n=1}^{\infty} n\,a_n\,x^{n-1} = \gamma \sum_{n'=0}^{\infty} (n'+1)\,a_{n'+1}\,x^{n'} = \gamma \sum_{n=0}^{\infty} (n+1)\,a_{n+1}\,x^{n} = \gamma a_1 + 2\gamma a_2 x + \gamma \sum_{n=2}^{\infty} (n+1)\,a_{n+1}\,x^{n}\,. \tag{11.41}$$
Altogether, therefore,

$$(\alpha x^2+\beta x+\gamma)\sum_{n=1}^{\infty} n\,a_n\,x^{n-1} = \alpha\sum_{n=2}^{\infty}(n-1)\,a_{n-1}\,x^{n} + \beta a_1 x + \beta\sum_{n=2}^{\infty} n\,a_n\,x^{n} + \gamma a_1 + 2\gamma a_2 x + \gamma\sum_{n=2}^{\infty}(n+1)\,a_{n+1}\,x^{n}\,. \tag{11.42}$$
By setting $n' = n+2$ and changing variable from $n' \to n$, the sum $\sum_{n=0}^{\infty} a_n x^{n+2}$ becomes $\sum_{n'=2}^{\infty} a_{n'-2}\,x^{n'}$. Therefore,

$$\mu \sum_{n=0}^{\infty} a_n\,x^{n+2} = \mu \sum_{n'=2}^{\infty} a_{n'-2}\,x^{n'} = \mu \sum_{n=2}^{\infty} a_{n-2}\,x^{n}\,. \tag{11.44}$$

Similarly, by setting $n' = n+1$, the sum $\sum_{n=0}^{\infty} a_n x^{n+1}$ becomes $\sum_{n'=1}^{\infty} a_{n'-1}\,x^{n'}$. Therefore,

$$\nu \sum_{n=0}^{\infty} a_n\,x^{n+1} = \nu \sum_{n'=1}^{\infty} a_{n'-1}\,x^{n'} = \nu a_0 x + \nu \sum_{n=2}^{\infty} a_{n-1}\,x^{n}\,, \tag{11.45}$$

and

$$\rho \sum_{n=0}^{\infty} a_n\,x^{n} = \rho a_0 + \rho a_1 x + \rho \sum_{n=2}^{\infty} a_n\,x^{n}\,. \tag{11.46}$$
By adding (11.44), (11.45), and (11.46), the last term on the right-hand side in (11.35), that is, the current (11.43), becomes

$$(\mu x^2+\nu x+\rho)\sum_{n=0}^{\infty} a_n\,x^{n} = \mu \sum_{n=2}^{\infty} a_{n-2}\,x^{n} + \nu a_0 x + \nu \sum_{n=2}^{\infty} a_{n-1}\,x^{n} + \rho a_0 + \rho a_1 x + \rho \sum_{n=2}^{\infty} a_n\,x^{n}\,. \tag{11.47}$$
Equation (11.48) should hold for all values of x. And that can happen only if the coefficient multiplying any given power of x is zero: meaning, only if the following three equations, numbered (11.49), (11.50), (11.51), hold true. [Note: (11.49) refers to x^0, (11.50) to x^1, and (11.51) to x^n for all n ≥ 2.]

$$(\rho a_0 + 2 a_2 + \gamma a_1) = 0\,, \tag{11.49}$$
$$(\beta a_1 + \rho a_1 + 6 a_3 + 2\gamma a_2 + \nu a_0) = 0\,. \tag{11.50}$$

Equation (11.49) gives

$$a_2 = -\,\frac{\gamma a_1 + \rho a_0}{2}\,, \tag{11.52}$$

and

$$-\,\frac{(\beta+\rho)\,a_1 + \nu a_0 + 2\gamma a_2}{6} = -\,\frac{(\beta+\rho)\,a_1 + \nu a_0 - \gamma(\rho a_0 + \gamma a_1)}{6} = a_3\,.$$

Therefore, (11.50) gives

$$a_3 = -\,\frac{(\beta + \rho - \gamma^2)\,a_1 + (\nu - \gamma\rho)\,a_0}{6}\,. \tag{11.53}$$
Now that the variables a_2 and a_3 are available in terms of a_0 and a_1, the higher-order terms—namely a_4, a_5, etc.—may also be represented in terms of a_0 and a_1. To that purpose, setting n = 2 in (11.51) will lead to a_4, n = 3 will yield a_5, and so on.

$$a_4 = -\,\frac{\mu a_0 + (\alpha+\nu)\,a_1 + (2\beta+\rho)\,a_2 + 3\gamma a_3}{12}\,. \tag{11.54}$$
$$a_5 = -\,\frac{\mu a_1 + (2\alpha+\nu)\,a_2 + (3\beta+\rho)\,a_3 + 4\gamma a_4}{20}\,. \tag{11.55}$$
$$a_6 = -\,\frac{\mu a_2 + (3\alpha+\nu)\,a_3 + (4\beta+\rho)\,a_4 + 5\gamma a_5}{30}\,. \tag{11.56}$$
$$a_7 = -\,\frac{\mu a_3 + (4\alpha+\nu)\,a_4 + (5\beta+\rho)\,a_5 + 6\gamma a_6}{42}\,. \tag{11.57}$$
$$u(x=0) = \sigma_1 = a_0\,; \qquad \left.\frac{du}{dx}\right|_{x=0} = \sigma_2 = a_1\,. \tag{11.58}$$
Combining (11.52)–(11.58) with (11.33) provides the first few terms of the simple form of the Frobenius series solution of (11.32). Use the information given in (11.52)–(11.58) above and solve the following five differential equations, similar in form to equation (b), (11.32). For simplicity, only the parameters α through ρ are given below. [See (11.59).]
II − (F) : [α = 1 ; β = 1 ; γ = 1 ; μ = 1 ; ν = 1 ; ρ = 1 ] .
II − (G) : [α = 0 ; β = 1 ; γ = 2 ; μ = 3 ; ν = 4 ; ρ = 5 ] .
II − (H ) : [α = 1 ; β = 2 ; γ = 3 ; μ = 4 ; ν = 5 ; ρ = 6 ] .
II − (I ) : [α = 2 ; β = 1 ; γ = 2 ; μ = 3 ; ν = 2 ; ρ = 2 ] .
II − (J ) : [α = 1 ; β = 3 ; γ = 3 ; μ = 1 ; ν = 1 ; ρ = 2 ] . (11.59)
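The pattern of (11.52)–(11.57) extends to all higher coefficients, so the whole series can be generated mechanically. The sketch below (not from the book; the function name is illustrative, and the generic step for m ≥ 4 is my reading of the pattern in (11.54)–(11.57)) checks equation II-(F):

```python
from fractions import Fraction as F

def type_b_coeffs(al, be, ga, mu, nu, rho, a0, a1, n_terms):
    """Series coefficients for a type-(b) equation; the m >= 4 step
    generalizes the pattern visible in (11.54)-(11.57)."""
    a = [F(a0), F(a1),
         -(ga * F(a1) + rho * F(a0)) / 2,                                # (11.52)
         -((be + rho - ga * ga) * F(a1) + (nu - ga * rho) * F(a0)) / 6]  # (11.53)
    for m in range(4, n_terms):
        a.append(-(mu * a[m - 4] + ((m - 3) * al + nu) * a[m - 3]
                   + ((m - 2) * be + rho) * a[m - 2]
                   + (m - 1) * ga * a[m - 1]) / (m * (m - 1)))
    return a

# II-(F): every parameter equal to 1; sigma_1 branch (a0=1, a1=0)
print(type_b_coeffs(1, 1, 1, 1, 1, 1, 1, 0, 8))
```

The printed list reproduces the σ1 bracket of the II-(F) solution below; swapping a0 and a1 gives the σ2 bracket.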
$$\mathrm{II\text{-}(F)}:\quad u(x) = \sigma_1\left[1 - \frac{x^2}{2} + \frac{x^4}{24} + \frac{x^5}{15} - \frac{x^6}{720} - \frac{x^7}{70}\right] + \sigma_2\left[x - \frac{x^2}{2} - \frac{x^3}{6} + \frac{7}{120}x^5 + \frac{21}{720}x^6 - \frac{43}{5040}x^7\right] + O(x^8)\,.$$

$$\mathrm{II\text{-}(G)}:\quad u(x) = \sigma_1\left[1 - \frac{5}{2}x^2 + x^3 + \frac{17}{24}x^4 - \frac{11}{60}x^5 - \frac{5}{144}x^6 - \frac{43}{504}x^7\right] + \sigma_2\left[x - x^2 - \frac{1}{3}x^3 + \frac{5}{12}x^4 + \frac{x^5}{60} + \frac{x^6}{72} - \frac{x^7}{42}\right] + O(x^8)\,.$$
$$\mathrm{II\text{-}(H)}:\quad u(x) = \sigma_1\left[1 - 3x^2 + \frac{13}{6}x^3 + \frac{13}{24}x^4 - \frac{23}{40}x^5 - \frac{103}{720}x^6 - \frac{212}{5040}x^7\right] + \sigma_2\left[x - \frac{3}{2}x^2 + \frac{1}{6}x^3 + \frac{5}{8}x^4 - \frac{3}{20}x^5 - \frac{44}{720}x^6 - \frac{335}{5040}x^7\right] + O(x^8)\,.$$

$$\mathrm{II\text{-}(I)}:\quad u(x) = \sigma_1\left[1 - x^2 + \frac{1}{3}x^3 - \frac{x^4}{12} + \frac{x^5}{4} - \frac{x^6}{18} - \frac{30}{1008}x^7\right] + \sigma_2\left[x - x^2 + \frac{1}{6}x^3 - \frac{x^4}{12} + \frac{17}{120}x^5 + \frac{x^6}{40} - \frac{23}{1008}x^7\right] + O(x^8)\,.$$

$$\mathrm{II\text{-}(J)}:\quad u(x) = \sigma_1\left[1 - x^2 + \frac{5}{6}x^3 - \frac{x^4}{24} - \frac{17}{60}x^5 + \frac{x^6}{12} + \frac{323}{5040}x^7\right] + \sigma_2\left[x - \frac{3}{2}x^2 + \frac{2}{3}x^3 + \frac{x^4}{3} - \frac{47}{120}x^5 + \frac{x^6}{720} + \frac{516}{5040}x^7\right] + O(x^8)\,.$$
Use the information given in (11.52)–(11.58) and solve the following five differential equations, similar in form to equation (b), (11.32). For simplicity, only the parameters α through ρ are given in (11.60) below.
II .1 : [α = 1 ; β = 2 ; γ = 1 ; μ = 3 ; ν = 3 ; ρ = 1 ] .
II .2 : [α = 0 ; β = 1 ; γ = 0 ; μ = 2 ; ν = 2 ; ρ = 0 ] .
II .3 : [α = 1 ; β = 0 ; γ = 1 ; μ = 2 ; ν = 1 ; ρ = 1 ] .
II .4 : [α = 0 ; β = 2 ; γ = 2 ; μ = 0 ; ν = 0 ; ρ = 1 ] .
II .5 : [α = 2 ; β = 1 ; γ = 1 ; μ = 5 ; ν = 1 ; ρ = 4 ] . (11.60)
According to an established idea [6] and a recent proof [30], if the differential equation has a 'Regular Singular Point,' say at x = x0, then it has at least one solution that is expressible as a modified Taylor series of the form

$$y(x) = \sum_{n=0}^{\infty} a_n\,(x-x_0)^{n+\nu_0}\,. \tag{11.62}$$

$$\mathrm{(c)}:\quad y''(x) + \frac{\beta x + \gamma}{x}\,y'(x) + \frac{\mu x^2 + \nu x + \rho}{x^2}\,y(x) = 0\,. \tag{11.63}$$
11.2.2 Solution
$$M(x) = \frac{\beta x + \gamma}{x}\,, \tag{11.64}$$

and N(x),

$$N(x) = \frac{\mu x^2 + \nu x + \rho}{x^2}\,, \tag{11.65}$$
11.2 Frobenius Solution Around Regular Singular Point 333
$$x\,M(x) = (\beta x + \gamma)\,, \qquad x^2\,N(x) = \mu x^2 + \nu x + \rho\,, \tag{11.66}$$
In order to solve the second-order differential equation (c), (11.63), the following two derivatives of equation (11.67) are needed:

$$y'(x) = \sum_{n=0}^{\infty} a_n\,(n+\nu_0)\,x^{n+\nu_0-1}\,, \qquad y''(x) = \sum_{n=0}^{\infty} a_n\,(n+\nu_0)(n+\nu_0-1)\,x^{n+\nu_0-2}\,. \tag{11.68}$$
Equations (11.63) and (11.69) can be put together in the more useful format (11.70). This fact is owed to the Frobenius statement that a solution of (11.63) exists as a series of the form (11.67).

$$\sum_{n=0}^{\infty} a_n\left[(n+\nu_0)(n+\nu_0+\gamma-1)+\rho\right]x^{n+\nu_0-2} + \sum_{n=0}^{\infty} a_n\left[\beta(n+\nu_0)+\nu\right]x^{n+\nu_0-1} + \mu\sum_{n=0}^{\infty} a_n\,x^{n+\nu_0} = 0\,. \tag{11.70}$$
The second term of (11.70) can be transformed as follows. Set $n' = n+1$ and then write the second term of (11.70) as

$$\sum_{n=0}^{\infty} a_n\left[\beta(n+\nu_0)+\nu\right]x^{n+\nu_0-1} = \sum_{n'=1}^{\infty} a_{n'-1}\left[\beta(n'-1+\nu_0)+\nu\right]x^{n'+\nu_0-2}\,. \tag{11.72}$$

The third term of (11.70) can also be suitably transformed. To that end, set $n' = n+2$ and proceed as follows:

$$\mu\sum_{n=0}^{\infty} a_n\,x^{n+\nu_0} = \mu\sum_{n'=2}^{\infty} a_{n'-2}\,x^{n'+\nu_0-2} = \mu\sum_{n=2}^{\infty} a_{n-2}\,x^{n+\nu_0-2}\,. \tag{11.74}$$
By adding these terms, the original differential equation (11.63) is represented more
neatly as (11.75) given below.
$$a_0\,x^{\nu_0-2}\left[\nu_0^2+\nu_0(\gamma-1)+\rho\right] + x^{\nu_0-1}\Big\{a_1\left[\nu_0^2+\nu_0(\gamma+1)+\gamma+\rho\right] + a_0\,(\beta\nu_0+\nu)\Big\}$$
$$+ \sum_{n=2}^{\infty} a_n\left\{(n+\nu_0)(n+\nu_0+\gamma-1)+\rho\right\}x^{n+\nu_0-2} + \sum_{n=2}^{\infty} a_{n-1}\left[\beta(n-1+\nu_0)+\nu\right]x^{n+\nu_0-2} + \sum_{n=2}^{\infty} a_{n-2}\,\mu\,x^{n+\nu_0-2} = 0\,. \tag{11.75}$$
In order to properly satisfy the requirement that (11.75) holds true—namely, that the coefficient of every power of x in (11.75) vanishes—we begin with the first term of (11.75), which has the lowest power of x—that is, the term $a_0 x^{\nu_0-2}$—and set its coefficient equal to zero:

$$\nu_0^2 + \nu_0(\gamma-1) + \rho = 0\,. \tag{11.76}$$

Equation (11.76), being the condition on the term that multiplies the lowest power of x in (11.75), is labeled the 'Indicial Equation.'
11.4.1 ν1 and ν2
The indicial equation always refers to a specified differential equation. In this case,
the indicial equation (11.76) is particularized to the γ and ρ that appear in differential
equation (11.63), or equivalently (11.75). And being a quadratic, it has two roots, ν1
and ν2 .
The two roots of the indicial equation recorded in (11.77), namely ν0 = ν1 and ν0 = ν2, signify the two possible solutions of the original differential equation (11.63).

$$y''(x) + \frac{\beta x + \gamma}{x - x_0}\,y'(x) + \frac{\mu x^2 + \nu x + \rho}{(x-x_0)^2}\,y(x) = 0\,. \tag{11.78}$$

It has a regular singular point at x = x0. If ν1 and ν2, the two roots of its indicial equation, are unequal, and their difference

$$|\nu_1 - \nu_2| = 2\left|\sqrt{\left(\frac{\gamma-1}{2}\right)^2 - \rho}\,\right| \tag{11.79}$$

is not an integer, then it will always possess two linearly independent solutions, y1(x) and y2(x), in the form of modified Taylor series:

$$y_1(x) = \sum_{n=0}^{\infty} a_n\,(x-x_0)^{n+\nu_1}\,, \tag{11.80}$$
$$y_2(x) = \sum_{n=0}^{\infty} a_n\,(x-x_0)^{n+\nu_2}\,. \tag{11.81}$$
n=0
11.4 Indicial Equation Roots 337
The second solution, say y2(x), that uses ν2 instead of ν1, may sometimes diverge because of the presence of denominators that go to zero. Often, however, the arbitrary constant multiplying the whole solution may be changed so as to obtain a non-divergent second solution. The details of this procedure will become clear when the relevant solutions are discussed in what follows.
Term with the Second Lowest Power of x in (11.75)
Having set the term with the lowest power of x in (11.75) equal to zero, we consider next the second term—which carries the second lowest power of x, i.e., $x^{\nu_0-1}$—and set it equal to zero:

$$a_1\left[\nu_0^2 + \nu_0(\gamma+1) + \gamma + \rho\right] + a_0\,(\beta\nu_0 + \nu) = 0\,. \tag{11.84}$$
$$\sum_{n=2}^{\infty}\Big[a_n\left\{(n+\nu_0)(n+\nu_0+\gamma-1)+\rho\right\} + a_{n-1}\left[\beta(n-1+\nu_0)+\nu\right] + \mu\,a_{n-2}\Big]\,x^{n+\nu_0-2}\,.$$

In order for this sum to add up identically to zero, the terms multiplying each and every power of x must individually vanish.
$$y''(x) + \frac{\beta x + \gamma}{x}\,y'(x) + \frac{\nu x + \rho}{x^2}\,y(x) = 0\,. \tag{11.87}$$

These equations are of type (c) with the parameter μ dropped. Note that these equations—(1) to (10)—all have a regular singular point at x = 0. Use (11.85), and set n = 2, 3, 4, 5, 6, 7 in (11.86); thereby solve (11.88), (1) to (10). The relevant four parameters—namely β, γ, ν, ρ—are the following.

All ten differential equations in (11.88) belong to category (1): meaning that the two roots of their indicial equation are different, and the difference between the two roots is not an integer.
11.4.3 Solution
Note that the two roots of the indicial equation given above are ν0 ≡ ν1 = (1/3) and ν0 ≡ ν2 = (−1/3). They are different, and their difference is not an integer. Work first with ν0 ≡ ν1 = (1/3) and use (11.85), which relates a1 to a0:

$$a_1 = -\,a_0\,\frac{\beta\nu_0+\nu}{2\nu_0+\gamma} = -\,a_0\,\frac{\frac{1}{3}+\frac{4}{3}}{\frac{2}{3}+1} = -\,a_0\,. \tag{11.89}$$
$$\text{for } n \ge 2\,, \qquad a_n = -\,a_{n-1}\,\frac{\beta(n-1+\nu_0)+\nu}{(n+\nu_0)(n+\nu_0+\gamma-1)+\rho}\,. \tag{11.90}$$

$$a_2 = -\,a_1\,\frac{(4/3)+(4/3)}{(7/3)(7/3)-(1/9)} = -\,\frac{a_1}{2} = \frac{a_0}{2!}\,. \tag{11.91}$$
$$a_3 = -\,a_2\,\frac{(7/3)+(4/3)}{(10/3)(10/3)-(1/9)} = -\,\frac{a_2}{3} = -\,\frac{a_0}{3!}\,. \tag{11.92}$$
$$a_4 = -\,a_3\,\frac{(10/3)+(4/3)}{(13/3)(13/3)-(1/9)} = -\,\frac{a_3}{4} = \frac{a_0}{4!}\,. \tag{11.93}$$
$$a_5 = -\,a_4\,\frac{(13/3)+(4/3)}{(16/3)(16/3)-(1/9)} = -\,\frac{a_4}{5} = -\,\frac{a_0}{5!}\,. \tag{11.94}$$
$$a_6 = -\,a_5\,\frac{(16/3)+(4/3)}{(19/3)(19/3)-(1/9)} = -\,\frac{a_5}{6} = \frac{a_0}{6!}\,. \tag{11.95}$$
$$a_7 = -\,a_6\,\frac{(19/3)+(4/3)}{(22/3)(22/3)-(1/9)} = -\,\frac{a_6}{7} = -\,\frac{a_0}{7!}\,. \tag{11.96}$$
And so on, leading to the following result, where the arbitrary constant a0 is replaced either by the arbitrary constant σ1 or by σ2.

$$\mathrm{(1a)}:\quad \frac{y_1(x)}{\sigma_1\,x^{1/3}} = 1 - \frac{x}{1!} + \frac{x^2}{2!} - \frac{x^3}{3!} + \frac{x^4}{4!} - \frac{x^5}{5!} + \frac{x^6}{6!} - \frac{x^7}{7!} + \cdots = \exp(-x)\,.$$

$$\mathrm{(1b)}:\quad \frac{y_2(x)}{\sigma_2\,x^{-1/3}} = 1 - 3x + \frac{3^2}{4}x^2 - \frac{3^3}{4\cdot 7}x^3 + \frac{3^4}{4\cdot 7\cdot 10}x^4 - \frac{3^5}{4\cdot 7\cdot 10\cdot 13}x^5 + \frac{3^6}{4\cdot 7\cdot 10\cdot 13\cdot 16}x^6 - \frac{3^7}{4\cdot 7\cdot 10\cdot 13\cdot 16\cdot 19}x^7 + \cdots$$
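The two series (1a) and (1b) follow from nothing but (11.89) and the recurrence (11.90), so they are easy to regenerate. The following sketch (not from the book; the function name is illustrative) runs the recurrence for both roots with exact rational arithmetic:

```python
from fractions import Fraction as F

def rsp_coeffs(beta, gamma, nu, rho, nu0, n_terms):
    """a_n around a regular singular point: a_1 from (11.89), a_n from (11.90)."""
    a = [F(1)]
    a.append(-a[0] * (beta * nu0 + nu) / (2 * nu0 + gamma))
    for n in range(2, n_terms):
        a.append(-a[n - 1] * (beta * (n - 1 + nu0) + nu)
                 / ((n + nu0) * (n + nu0 + gamma - 1) + rho))
    return a

# Example (1): beta = 1, gamma = 1, nu = 4/3, rho = -1/9
root1 = rsp_coeffs(1, 1, F(4, 3), F(-1, 9), F(1, 3), 8)    # (1a): exp(-x) pattern
root2 = rsp_coeffs(1, 1, F(4, 3), F(-1, 9), F(-1, 3), 4)   # (1b) leading terms
print(root1)
print(root2)
```

With ν0 = 1/3 the coefficients come out as (−1)^n/n!, confirming the closed form exp(−x) of (1a); with ν0 = −1/3 the first terms 1, −3, 9/4, −27/28 match (1b).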
$$\mathrm{III\text{-}(2a)}:\quad \frac{y_1(x)}{\sigma_1\,x^{5/2}} = 1 + \frac{x}{9} + \frac{x^2}{198} + \frac{x^3}{7722} + \frac{x^4}{463320} + \frac{x^5}{39382200} + \frac{x^6}{4489570800} + \frac{x^7}{659966907600} + O(x^8)\,.$$

$$\mathrm{III\text{-}(2b)}:\quad \frac{y_2(x)}{\sigma_2\,x^{-1}} = 1 - \frac{x}{5} + \frac{x^2}{30} - \frac{x^3}{90} - \frac{x^4}{360} - \frac{x^5}{5400} - \frac{x^6}{162000} - \frac{x^7}{7938000} + \cdots$$
$$\mathrm{III\text{-}(3a)}:\quad \frac{y_1(x)}{\sigma_1\,x^{1+\sqrt{6}}} = 1 - \frac{x}{23}\,(32+5\sqrt{6}) + \frac{x^2}{460}\,(472+183\sqrt{6}) - \frac{x^3}{2070}\,(1491+629\sqrt{6}) + \frac{x^4}{288}\,(126+47\sqrt{6}) + \frac{x^5}{720}\,(-375+37\sqrt{6}) + \frac{x^6}{8640}\,(5615-1731\sqrt{6}) + \frac{x^7}{1512000}\,(-901448+333403\sqrt{6}) + \cdots$$

$$\mathrm{III\text{-}(3b)}:\quad \frac{y_2(x)}{\sigma_2\,x^{1-\sqrt{6}}} = 1 + \frac{x}{23}\,(-32+5\sqrt{6}) + \frac{x^2}{460}\,(472-183\sqrt{6}) + \frac{x^3}{2070}\,(-1491+629\sqrt{6}) + \frac{x^4}{288}\,(126-47\sqrt{6}) - \frac{x^5}{720}\,(375+37\sqrt{6}) + \frac{x^6}{8640}\,(5615+1731\sqrt{6}) - \frac{x^7}{1512000}\,(901448+333403\sqrt{6}) + \cdots$$
For III-(4), β = 0, γ = −1/2, ν = 1/2, ρ = −5/2, and the solutions are

$$\mathrm{III\text{-}(4a)}:\quad \frac{y_1(x)}{\sigma_1\,x^{5/2}} = 1 - \frac{x}{9} + \frac{x^2}{9\cdot 22} - \frac{x^3}{9\cdot 22\cdot 39} + \frac{x^4}{9\cdot 22\cdot 39\cdot 60} - \frac{x^5}{9\cdot 22\cdot 39\cdot 60\cdot 85} + \frac{x^6}{9\cdot 22\cdot 39\cdot 60\cdot 85\cdot 114} - \frac{x^7}{9\cdot 22\cdot 39\cdot 60\cdot 85\cdot 114\cdot 147} + \cdots$$
$$\mathrm{III\text{-}(4b)}:\quad \frac{y_2(x)}{\sigma_2\,x^{-1}} = 1 + \frac{x}{5} + \frac{x^2}{5\cdot 6} + \frac{x^3}{5\cdot 6\cdot 3} - \frac{x^4}{5\cdot 6\cdot 3\cdot 4} + \frac{x^5}{5\cdot 6\cdot 3\cdot 4\cdot 15} - \frac{x^6}{5\cdot 6\cdot 3\cdot 4\cdot 15\cdot 30} + \frac{x^7}{5\cdot 6\cdot 3\cdot 4\cdot 15\cdot 30\cdot 49} + \cdots$$
$$\mathrm{III\text{-}(5a)}:\quad \frac{y_1(x)}{\sigma_1\,x^{1/3}} = 1 + x + \frac{x^2}{2!} + \frac{x^3}{3!} + \frac{x^4}{4!} + \frac{x^5}{5!} + \frac{x^6}{6!} + \frac{x^7}{7!} + \cdots = \exp(x)\,.$$

$$\mathrm{III\text{-}(5b)}:\quad \frac{y_2(x)}{\sigma_2\,x^{-1}} = 1 - 3x - \frac{3^2}{2}x^2 - \frac{3^3}{2\cdot 5}x^3 - \frac{3^4}{2\cdot 5\cdot 8}x^4 - \frac{3^5}{2\cdot 5\cdot 8\cdot 11}x^5 - \frac{3^6}{2\cdot 5\cdot 8\cdot 11\cdot 14}x^6 - \frac{3^7}{2\cdot 5\cdot 8\cdot 11\cdot 14\cdot 17}x^7 + \cdots$$

$$\mathrm{III\text{-}(6a)}:\quad \frac{y_1(x)}{\sigma_1\,x^{1/3}} = 1 - x + \frac{x^2}{2!} - \frac{x^3}{3!} + \frac{x^4}{4!} - \frac{x^5}{5!} + \frac{x^6}{6!} - \frac{x^7}{7!} + \cdots = \exp(-x)\,.$$

$$\mathrm{III\text{-}(6b)}:\quad \frac{y_2(x)}{\sigma_2\,x^{-1}} = 1 + 3x - \frac{3^2}{2}x^2 + \frac{3^3}{2\cdot 5}x^3 - \frac{3^4}{2\cdot 5\cdot 8}x^4 + \frac{3^5}{2\cdot 5\cdot 8\cdot 11}x^5 - \frac{3^6}{2\cdot 5\cdot 8\cdot 11\cdot 14}x^6 + \frac{3^7}{2\cdot 5\cdot 8\cdot 11\cdot 14\cdot 17}x^7 + \cdots$$

$$\mathrm{III\text{-}(7a)}:\quad \frac{y_1(x)}{\sigma_1\,x^{2}} = 1 - x + \frac{x^2}{2!} - \frac{x^3}{3!} + \frac{x^4}{4!} - \frac{x^5}{5!} + \frac{x^6}{6!} - \frac{x^7}{7!} + \cdots = \exp(-x)\,.$$

$$\mathrm{III\text{-}(7b)}:\quad \frac{y_2(x)}{\sigma_2\,x^{1/2}} = 1 + 2x - \frac{2^2}{1}x^2 + \frac{2^3}{1\cdot 3}x^3 - \frac{2^4}{1\cdot 3\cdot 5}x^4 + \frac{2^5}{1\cdot 3\cdot 5\cdot 7}x^5 - \frac{2^6}{1\cdot 3\cdot 5\cdot 7\cdot 9}x^6 + \frac{2^7}{1\cdot 3\cdot 5\cdot 7\cdot 9\cdot 11}x^7 + \cdots$$
$$\mathrm{III\text{-}(8a)}:\quad \frac{y_1(x)}{\sigma_1\,x^{2}} = 1 + x + \frac{x^2}{2!} + \frac{x^3}{3!} + \frac{x^4}{4!} + \frac{x^5}{5!} + \frac{x^6}{6!} + \frac{x^7}{7!} + \cdots = \exp(x)\,.$$

$$\mathrm{III\text{-}(8b)}:\quad \frac{y_2(x)}{\sigma_2\,x^{1/2}} = 1 - 2x - \frac{2^2}{1}x^2 - \frac{2^3}{1\cdot 3}x^3 - \frac{2^4}{1\cdot 3\cdot 5}x^4 - \frac{2^5}{1\cdot 3\cdot 5\cdot 7}x^5 - \frac{2^6}{1\cdot 3\cdot 5\cdot 7\cdot 9}x^6 - \frac{2^7}{1\cdot 3\cdot 5\cdot 7\cdot 9\cdot 11}x^7 + \cdots$$

$$\mathrm{III\text{-}(9a)}:\quad \frac{y_1(x)}{\sigma_1\,x^{1/3}} = 1 + x + \frac{x^2}{2!} + \frac{x^3}{3!} + \frac{x^4}{4!} + \frac{x^5}{5!} + \frac{x^6}{6!} + \frac{x^7}{7!} + \cdots = \exp(x)\,.$$

$$\mathrm{III\text{-}(9b)}:\quad \frac{y_2(x)}{\sigma_2\,x^{-1/3}} = 1 + \frac{3^1}{1}x + \frac{3^2}{1\cdot 4}x^2 + \frac{3^3}{1\cdot 4\cdot 7}x^3 + \frac{3^4}{1\cdot 4\cdot 7\cdot 10}x^4 + \frac{3^5}{1\cdot 4\cdot 7\cdot 10\cdot 13}x^5 + \frac{3^6}{1\cdot 4\cdot 7\cdot 10\cdot 13\cdot 16}x^6 + \frac{3^7}{1\cdot 4\cdot 7\cdot 10\cdot 13\cdot 16\cdot 19}x^7 + \cdots$$
$$\mathrm{III\text{-}(10a)}:\quad \frac{y_1(x)}{\sigma_1\,x^{2+2\sqrt{2}}} = 1 - \frac{x}{31}\,(41+22\sqrt{2}) + \frac{x^2}{62}\,(85+63\sqrt{2}) - \frac{x^3}{4278}\,(5049+3457\sqrt{2}) + \frac{x^4}{1488}\,(1064+797\sqrt{2}) - \frac{x^5}{52080}\,(23316+11485\sqrt{2}) + \frac{x^6}{312480}\,(191026-61923\sqrt{2}) + \frac{x^7}{312480}\,(-264230+159191\sqrt{2}) + \cdots$$

$$\mathrm{III\text{-}(10b)}:\quad \frac{y_2(x)}{\sigma_2\,x^{2-2\sqrt{2}}} = 1 + \frac{x}{31}\,(-41+22\sqrt{2}) + \frac{x^2}{62}\,(85-63\sqrt{2}) + \frac{x^3}{4278}\,(-5049+3457\sqrt{2}) + \frac{x^4}{1488}\,(1064-797\sqrt{2}) + \frac{x^5}{52080}\,(-23316+11485\sqrt{2}) + \frac{x^6}{312480}\,(191026+61923\sqrt{2}) - \frac{x^7}{312480}\,(264230+159191\sqrt{2}) + \cdots$$
(1) : [β = −3 ; γ = 3 ; ν = 2 ; ρ = −4 ] .
(2) : [β = 2 ; γ = −3 ; ν = 1 ; ρ = −2 ] .
(3) : [β = 3 ; γ = −2 ; ν = −2 ; ρ = −3 ] .
(4) : [β = 1 ; γ = −2 ; ν = −1 ; ρ = −1 ] .
(5) : [β = −1 ; γ = −5 ; ν = −1 ; ρ = −4 ] . (11.97)
$$y''(x) + \frac{\beta x + \gamma}{x}\,y'(x) + \frac{\nu x + \rho}{x^2}\,y(x) = 0\,. \tag{11.98}$$
Equations (A)–(H) are all of category (2): meaning, the two roots, ν1 and ν2, of their indicial equation,

$$\nu_0^2 + \nu_0(\gamma-1) + \rho = 0\,,$$

are equal. The relevant parameters are:

(A): [β = −3 ; γ = 1 ; ν = −3 ; ρ = 0].
(B): [β = 4 ; γ = 1 ; ν = −1 ; ρ = 0].
(C): [β = −2 ; γ = 3 ; ν = −4 ; ρ = 1].
(D): [β = 1 ; γ = −3 ; ν = 1 ; ρ = 4].
(E): [β = −1 ; γ = −1 ; ν = −1 ; ρ = 1].
(F): [β = 1 ; γ = 2 ; ν = 2 ; ρ = 1/4].
(G): [β = 1 ; γ = −1 ; ν = 1 ; ρ = 1].
(H): [β = 1 ; γ = −2 ; ν = 1 ; ρ = 9/4]. (11.99)

Label this double root ν0. Use (11.85); then set n = 2, 3, 4, 5, 6, 7 in (11.86), and thereby determine the first solution for each of the eight differential equations listed in (11.99).
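That all eight equations have a double root can be confirmed from the discriminant of the indicial equation (11.76), which by (11.79) also controls the root difference. A quick check (not from the book; the function name is illustrative):

```python
from fractions import Fraction as F

def indicial_discriminant(gamma, rho):
    # Discriminant of nu0^2 + nu0*(gamma - 1) + rho = 0; cf. (11.76) and (11.79)
    return (gamma - 1) ** 2 - 4 * rho

# (gamma, rho) pairs for (11.99)-(A) through -(H)
pairs = [(1, 0), (1, 0), (3, 1), (-3, 4),
         (-1, 1), (2, F(1, 4)), (-1, 1), (-2, F(9, 4))]
for gamma, rho in pairs:
    assert indicial_discriminant(gamma, rho) == 0   # double root
    print("double root at nu0 =", F(1 - gamma, 2))
```

When the discriminant vanishes, the double root is ν0 = (1 − γ)/2; the printed values (0, 0, −1, 2, 1, −1/2, 1, 3/2) match the roots used for (A)–(H) in the text.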
The method for working out the first solution of these differential equations is quite straightforward. Indeed, it is exactly the same method used earlier for the examples of set III. The only difference here is that for (11.99)-(A), similarly to Piaggio's work [10], the first solution is written entirely in terms of the variable ν0. And when so written, the first solution will also be called the general solution, which is described in detail in the following section. Numerical representation—which is done by replacing the variable ν0 with its value equal to the relevant double root of the indicial equation—will be made at the end of the current process. Equation (11.85), reproduced as (11.100) below, relates a1 to a0:

$$a_1 = -\,a_0\,\frac{\beta\nu_0+\nu}{2\nu_0+\gamma}\,, \tag{11.100}$$
$$a_n = -\,a_{n-1}\,\frac{\beta(n-1+\nu_0)+\nu}{(n+\nu_0)(n+\nu_0+\gamma-1)+\rho}\,. \tag{11.101}$$
Henceforth, in this book, the statement general solution of a given differential equation will imply a particular form of its first solution that is written as a function of the variable ν0. The general solution is determined as follows.

First, one considers the indicial equation and determines its roots. In relation to (11.99)-(A), where β = −3, γ = 1, ν = −3, ρ = 0, the indicial equation,

$$\nu_0^2 + \nu_0(\gamma-1) + \rho = \nu_0^2 = 0\,,$$

has two roots, ν0 = ν1 = 0 and ν0 = ν2 = 0. Both roots are equal to zero.

Second, one identifies the relevant differential equation: it is (11.99)-(A)—written below as (11.102)—that is obtained from (11.98) by using the given values of the parameters β = −3, γ = 1, ν = −3, ρ = 0:

$$y''(x) + \frac{1-3x}{x}\,y'(x) - \frac{3}{x}\,y(x) = 0\,. \tag{11.102}$$

$$a_1 = -\,a_0\,\frac{\beta\nu_0+\nu}{2\nu_0+\gamma} = 3\,a_0\,\frac{1+\nu_0}{1+2\nu_0}\,. \tag{11.103}$$
$$a_n = -\,a_{n-1}\,\frac{\beta(n-1+\nu_0)+\nu}{(n+\nu_0)(n+\nu_0+\gamma-1)+\rho} = \frac{3}{(n+\nu_0)}\,a_{n-1}\,. \tag{11.104}$$

As such,

$$a_2 = \frac{3}{(2+\nu_0)}\,a_1 = \frac{3^2\,(1+\nu_0)}{(2+\nu_0)(1+2\nu_0)}\,a_0\,,$$
$$a_3 = \frac{3}{(3+\nu_0)}\,a_2 = \frac{3^3\,(1+\nu_0)}{(2+\nu_0)(3+\nu_0)(1+2\nu_0)}\,a_0\,,$$
$$a_4 = \frac{3}{(4+\nu_0)}\,a_3 = \frac{3^4\,(1+\nu_0)}{(2+\nu_0)(3+\nu_0)(4+\nu_0)(1+2\nu_0)}\,a_0\,,$$
$$a_5 = \frac{3}{(5+\nu_0)}\,a_4 = \frac{3^5\,(1+\nu_0)}{(2+\nu_0)(3+\nu_0)(4+\nu_0)(5+\nu_0)(1+2\nu_0)}\,a_0\,,$$
$$a_6 = \frac{3^6\,(1+\nu_0)}{(2+\nu_0)(3+\nu_0)(4+\nu_0)(5+\nu_0)(6+\nu_0)(1+2\nu_0)}\,a_0\,,$$
11.5 Examples Group IV 347
$$a_7 = \frac{3^7\,(1+\nu_0)}{(2+\nu_0)(3+\nu_0)(4+\nu_0)(5+\nu_0)(6+\nu_0)(7+\nu_0)(1+2\nu_0)}\,a_0\,. \tag{11.105}$$
The first solution, Y_{A;1}(x, ν0), is all contained in (11.103) and (11.105). As stated above, this solution—meaning (11.106)—will henceforth also be called the general solution, G_A(x, ν0), of (11.99)-(A) or equivalently of (11.102).

$$\frac{G_A(x,\nu_0)}{\sigma_1} = \frac{Y_{A;1}(x,\nu_0)}{\sigma_1} = x^{\nu_0}\Bigg[1 + \frac{3(1+\nu_0)\,x}{1+2\nu_0} + \frac{3^2(1+\nu_0)\,x^2}{(2+\nu_0)(1+2\nu_0)} + \frac{3^3(1+\nu_0)\,x^3}{(2+\nu_0)(3+\nu_0)(1+2\nu_0)} + \frac{3^4(1+\nu_0)\,x^4}{(2+\nu_0)(3+\nu_0)(4+\nu_0)(1+2\nu_0)} + \frac{3^5(1+\nu_0)\,x^5}{(2+\nu_0)\cdots(5+\nu_0)(1+2\nu_0)} + \frac{3^6(1+\nu_0)\,x^6}{(2+\nu_0)\cdots(6+\nu_0)(1+2\nu_0)} + \frac{3^7(1+\nu_0)\,x^7}{(2+\nu_0)\cdots(7+\nu_0)(1+2\nu_0)}\Bigg] + O(x^8)\,. \tag{11.106}$$
The double root for (11.99)-(A), i.e., (11.102), is ν0 = 0. Therefore, the first solution, Y_{A;1}(x, ν0), for (11.99)-(A) is immediately available in numerical format, Y_{A;1}(x, ν0 = 0), by setting ν0 = 0 in its general solution (11.106). [See, for instance, (11.107).]

For Y_{A;1}(x, 0), rather than σ0, it was convenient to use σ1 as the arbitrary constant. Similarly, for the second solution Y_{A;2}(x, 0)—to be studied later—σ2 will be the relevant arbitrary constant.

The first solutions of differential equations (B), (C), (D), (E), (F), (G), and (H) can be found in similar fashion, and the results are as follows.
$$\mathrm{(D)}:\quad \frac{Y_{D;1}(x,2)}{\sigma_1\,x^{2}} = 1 - 3x + 3x^2 - \frac{5x^3}{3} + \frac{5x^4}{8} - \frac{7x^5}{40} + \frac{7x^6}{180} - \frac{x^7}{140} + \cdots \tag{11.110}$$

$$\mathrm{(E)}:\quad \frac{Y_{E;1}(x,1)}{\sigma_1\,x} = 1 + 2x + \frac{3x^2}{2} + \frac{2x^3}{3} + \frac{5x^4}{24} + \frac{x^5}{20} + \frac{7x^6}{720} + \frac{x^7}{630} + \cdots \tag{11.111}$$

$$\mathrm{(G)}:\quad \frac{Y_{G;1}(x,1)}{\sigma_1\,x} = 1 - 2x + \frac{3x^2}{2} - \frac{2x^3}{3} + \frac{5x^4}{24} - \frac{x^5}{20} + \frac{7x^6}{720} - \frac{x^7}{630} + \cdots = (1-x)\exp(-x)\,. \tag{11.113}$$
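The closed form in (11.113) is easy to confirm: running the recurrence (11.100)–(11.101) for equation (G) must reproduce the Maclaurin coefficients of (1 − x)exp(−x), which are (−1)^n (n+1)/n!. A quick check (a sketch, not from the book):

```python
from fractions import Fraction as F
from math import factorial

# Equation (11.99)-(G): beta=1, gamma=-1, nu=1, rho=1; double root nu0 = 1.
beta, gamma, nu, rho, nu0 = 1, -1, 1, 1, 1
a = [F(1)]
a.append(-a[0] * F(beta * nu0 + nu, 2 * nu0 + gamma))           # (11.100)
for n in range(2, 8):
    a.append(-a[n - 1] * (beta * (n - 1 + nu0) + nu)
             / F((n + nu0) * (n + nu0 + gamma - 1) + rho))      # (11.101)

# The series must reproduce (1 - x) exp(-x), i.e. a_n = (-1)^n (n+1)/n!
assert all(a[n] == F((-1)**n * (n + 1), factorial(n)) for n in range(8))
print(a)
```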
Equations (11.99)-(A)–(H) are of category (2): meaning, their indicial equations have double roots. The methodology for working out their second solution will involve two different methods: one inspired by a method used by Piaggio [10], and the other based on what is known as the 'method of order reduction.' The relevant details are provided in the following. For convenience, these are termed the 'second methods.'
and

$$Y_{A;2}(x,\nu_0) = \frac{d\,Y_{A;1}(x,\nu_0)}{d\,\nu_0} = \frac{d\,G_A(x,\nu_0)}{d\,\nu_0}\,. \tag{11.116}$$

Because Y_{A;1}(x, ν0) includes the term $x^{\nu_0}$, the derivative of $x^{\nu_0}$ needs to be worked out. To that end, define a symbol $X(\nu_0, x) = x^{\nu_0}$.

$$\frac{1}{X(\nu_0,x)}\cdot\frac{d\,X(\nu_0,x)}{d\,\nu_0} = \log(x)\,. \tag{11.119}$$

[Note: Any additional terms that would arise if x itself depended on ν0 are not included in (11.119).] Now rewrite the above as

$$\frac{d\,X(\nu_0,x)}{d\,\nu_0} = X(\nu_0,x)\,\log(x)\,, \tag{11.120}$$

$$\frac{d\,x^{\nu_0}}{d\,\nu_0} = x^{\nu_0}\,\log(x)\,. \tag{11.121}$$
Equation (11.122) contains all the essentials of the second solution. However, in order to convert it into proper form, one needs to do two simple things.

(i): Replace the arbitrary constant σ1 in (11.122) by an arbitrary constant σ2 everywhere.
(ii): Knowing that the double root of the indicial equation for (11.99)-(A) is ν0 = 0, everywhere in (11.122) replace the symbol ν0 with its value 0.

Once these changes have been executed, the numerical form of the second solution, Y_{A;2}(x, 0), of differential equation (11.99)-(A) can be written as

$$\left.\frac{Y_{A;2}(x,0)}{\sigma_2}\right|_{\text{Piaggio}} = \exp(3x)\cdot\log(x) - 3x - \frac{27}{4}x^2 - \frac{33}{4}x^3 - \frac{225}{32}x^4 - \frac{3699}{800}x^5 - \frac{3969}{1600}x^6 - \frac{88209}{78400}x^7 - \frac{554769}{1254400}x^8 - O(x^9)\,. \tag{11.123}$$

Note: In going from (11.122) to (11.123), we have used the result $G_A(x,\nu_0=0) = Y_{A;1}(x,0) = \sigma_2\exp(3x)$ for the first solution that was given in (11.107). Note also that the arbitrary constant σ1 has been replaced by the arbitrary constant σ2.
In this regard, (7.31)–(7.45) can be used without alteration as long as the old notation is aligned with the current notation. To that purpose, consider first the differential equation (7.31), presented below as (11.125):

$$\left[a_2(x)\,D^2 + a_1(x)\,D + a_0(x)\right]u(x) = 0\,, \tag{11.125}$$

$$y''(x) + \frac{1-3x}{x}\,y'(x) - \frac{3}{x}\,y(x) = 0\,. \tag{11.126}$$

$$a_2(x) = 1\,; \qquad a_1(x) = \frac{1-3x}{x}\,; \qquad a_0(x) = -\,\frac{3}{x}\,. \tag{11.127}$$
Next, using the fact that $y_1(x) = \text{const.}\,\exp(3x)$, we have

$$y_2(x) = \sigma_4\,\exp(3x)\int \frac{\exp(-3x)}{x}\,dx\,. \tag{11.131}$$

$$\frac{y_2(x)}{\sigma_4} = \exp(3x)\,\log(x) + \exp(3x)\left[-3x + \frac{9}{4}x^2 - \frac{3}{2}x^3 + \frac{27}{32}x^4 - \frac{81}{200}x^5 + O(x^6)\right]\,.$$
11.7 Methodology For Second Solution 353
$$\exp(3x) = 1 + 3x + \frac{9}{2}x^2 + \frac{9}{2}x^3 + \frac{27}{8}x^4 + \frac{81}{40}x^5 + \frac{81}{80}x^6 + \frac{243}{560}x^7 + \frac{729}{4480}x^8 + \cdots$$

This leads to

$$\frac{y_2(x)}{\sigma_4} = \exp(3x)\,\log(x) + \left[1 + 3x + \frac{9}{2}x^2 + \frac{9}{2}x^3 + \frac{27}{8}x^4 + \cdots\right]\cdot\left[-3x + \frac{9}{4}x^2 - \frac{3}{2}x^3 + \frac{27}{32}x^4 - \cdots\right]\,.$$
$$\left.\frac{Y_{A;2}(x,0)}{\sigma_4}\right|_{\text{ord.\ reduction}} = \exp(3x)\cdot\log(x) - 3x - \frac{27}{4}x^2 - \frac{33}{4}x^3 - \frac{225}{32}x^4 - \frac{3699}{800}x^5 - \frac{3969}{1600}x^6 - \frac{88209}{78400}x^7 - \frac{554769}{1254400}x^8 - \frac{192483}{1254400}x^9 - \frac{597861}{12544000}x^{10} - \cdots \tag{11.132}$$

Note: $Y_{A;2}(x,0)/\sigma_4$ in (11.132) is the same as $Y_{A;2}(x,0)/\sigma_2$ in (11.123). As usual, σ0, σ1, σ2, etc., are arbitrary constants.
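The Cauchy product in the order-reduction route is mechanical, so it can be checked with exact arithmetic. The sketch below (not from the book; variable names are illustrative) multiplies the series for exp(3x) by the non-logarithmic part of ∫ exp(−3x)/x dx, whose x^k coefficient is (−3)^k/(k·k!):

```python
from fractions import Fraction as F
from math import factorial

N = 8
# exp(3x) series ...
b = [F(3) ** k / factorial(k) for k in range(N + 1)]
# ... and the non-log part of the integral of exp(-3x)/x dx
p = [F(0)] + [F((-3) ** k, k * factorial(k)) for k in range(1, N + 1)]
# Cauchy product: the polynomial part added to exp(3x) log(x) in (11.132)
c = [sum(b[j] * p[k - j] for j in range(k + 1)) for k in range(N + 1)]
print(c[1:])
```

The printed coefficients −3, −27/4, −33/4, −225/32, −3699/800, −3969/1600, −88209/78400, −554769/1254400 agree with both (11.123) and (11.132).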
Complete numerical solution of (11.99)-(A), YA;Complete (x, 0), consists of YA;1 (x, 0)
recorded in (11.107) and YA;2 (x, 0) in (11.132). As such, (11.133)—which is the sum
of these two—represents the complete solution of (11.99)-(A).
$$y''(x) + \frac{x-1}{x}\,y'(x) + \frac{x+1}{x^2}\,y(x) = 0\,. \tag{11.134}$$
$$a_1 = -\,a_0\,\frac{\nu+\beta\nu_0}{\gamma+2\nu_0} = -\,a_0\,\frac{1+\nu_0}{2\nu_0-1}\,. \tag{11.135}$$

[Also see (11.100).] To calculate a2 and the higher-order an, rewrite (11.90) [also see (11.101)]:

$$a_n = -\,a_{n-1}\,\frac{\beta(n-1+\nu_0)+\nu}{(n+\nu_0)(n+\nu_0+\gamma-1)+\rho} = -\,a_{n-1}\,\frac{(n-1+\nu_0)+1}{(n+\nu_0)(n+\nu_0-1-1)+1} = -\,a_{n-1}\,\frac{n+\nu_0}{(n+\nu_0-1)^2}\,. \tag{11.136}$$

$$a_2 = \frac{(2+\nu_0)}{(\nu_0+1)(2\nu_0-1)}\,a_0\,. \tag{11.137}$$
$$a_3 = -\,\frac{(3+\nu_0)}{(\nu_0+2)(\nu_0+1)(2\nu_0-1)}\,a_0\,. \tag{11.138}$$
$$a_4 = \frac{(4+\nu_0)}{(\nu_0+3)(\nu_0+2)(\nu_0+1)(2\nu_0-1)}\,a_0\,. \tag{11.139}$$
$$a_5 = -\,\frac{(5+\nu_0)}{(\nu_0+4)(\nu_0+3)(\nu_0+2)(\nu_0+1)(2\nu_0-1)}\,a_0\,. \tag{11.140}$$
$$a_6 = \frac{(6+\nu_0)}{(\nu_0+5)\cdots(\nu_0+1)(2\nu_0-1)}\,a_0\,, \quad a_7 = -\,\frac{(7+\nu_0)}{(\nu_0+6)\cdots(\nu_0+1)(2\nu_0-1)}\,a_0\,, \quad a_8 = \frac{(8+\nu_0)}{(\nu_0+7)\cdots(\nu_0+1)(2\nu_0-1)}\,a_0\,. \tag{11.141}$$
$$\frac{G_G(x,\nu_0)}{\sigma_0\,x^{\nu_0}} = 1 - \frac{1+\nu_0}{2\nu_0-1}\,x + \frac{(2+\nu_0)}{(\nu_0+1)(2\nu_0-1)}\,x^2 - \frac{(3+\nu_0)}{(\nu_0+2)(\nu_0+1)(2\nu_0-1)}\,x^3 + \frac{(4+\nu_0)}{(\nu_0+3)(\nu_0+2)(\nu_0+1)(2\nu_0-1)}\,x^4 - \frac{(5+\nu_0)}{(\nu_0+4)\cdots(\nu_0+1)(2\nu_0-1)}\,x^5 + \frac{(6+\nu_0)}{(\nu_0+5)\cdots(\nu_0+1)(2\nu_0-1)}\,x^6 - \frac{(7+\nu_0)}{(\nu_0+6)\cdots(\nu_0+1)(2\nu_0-1)}\,x^7 + \frac{(8+\nu_0)}{(\nu_0+7)\cdots(\nu_0+1)(2\nu_0-1)}\,x^8 - O(x^9)\,. \tag{11.142}$$
For the specific value of ν0 equal to the double root of the indicial equation—which for the present case is ν0 = 1—the first and the second solutions of (11.99)-(G) are to be denoted as follows; in particular,

$$Y_{G;2}(x,\nu_0=1) = Y_{G;2}(x,1) = \left.\frac{d\,Y_{G;1}(x,\nu_0)}{d\,\nu_0}\right|_{\nu_0=1}\,. \tag{11.146}$$

As stated above and expressed in (11.144), for unspecified values of ν0 the second solution, Y_{G;2}(x, ν0), of (11.99)-(G) is the derivative, with respect to the variable ν0, of its general solution given in (11.142). That is,
$$Y_{G;2}(x,\nu_0) = \frac{d\,G_G(x,\nu_0)}{d\,\nu_0} = G_G(x,\nu_0)\cdot\log(x) + \sigma_1\,x^{\nu_0}\Bigg[1 + \frac{1+2\nu_0-2\nu_0^2}{(2\nu_0-1)^2}\,x - \frac{2+2\nu_0+3\nu_0^2}{(2\nu_0^2+\nu_0-1)^2}\,x^2 + 2\,\frac{3+2\nu_0+7\nu_0^2+6\nu_0^3+\nu_0^4}{(2\nu_0^3+5\nu_0^2+\nu_0-2)^2}\,x^3 - \frac{24+12\nu_0+63\nu_0^2+88\nu_0^3+35\nu_0^4+4\nu_0^5}{(2\nu_0^4+11\nu_0^3+16\nu_0^2+\nu_0-6)^2}\,x^4 + 3\,\frac{2\nu_0^6+26\nu_0^5+115\nu_0^4+200\nu_0^3+109\nu_0^2+16\nu_0+40}{\left[(\nu_0+4)(\nu_0+3)(\nu_0+2)(\nu_0+1)(2\nu_0-1)\right]^2}\,x^5 - O(x^6)\Bigg]\,. \tag{11.148}$$
Equation (11.148) contains all the essentials of the second solution. However, in order to convert it into proper form, one needs to do two simple things.

(i): Replace the arbitrary constant σ1 in (11.148) by an arbitrary constant σ2 everywhere.
(ii): Knowing that the double root of the indicial equation for (11.99)-(G) is ν0 = 1, everywhere in (11.148) replace the symbol ν0 with its value 1.

Once these changes have been introduced, the second solution, $Y_{G;2}(x,\nu_0=1) \to Y_{G;2}(x)$, of (11.99)-(G) can be written in its final form:

$$\left.\frac{Y_{G;2}(x,\nu_0=1)}{\sigma_2}\right|_{\text{Piaggio}} = \left.\frac{Y_{G;2}(x)}{\sigma_2}\right|_{\text{Piaggio}} = x(1-x)\exp(-x)\,\log(x) + x + x^2 - \frac{7}{4}x^3 + \frac{19}{18}x^4 - \frac{113}{288}x^5 + \frac{127}{1200}x^6 - O(x^7)\,. \tag{11.149}$$
Note: In going from (11.148) to (11.149), we have used the result $G_G(x,\nu_0=1) = Y_{G;1}(x,\nu_0=1) = Y_{G;1}(x) = \sigma_1\,x(1-x)\exp(-x)$ for the first solution that was given in (11.147). Of course, the arbitrary constant σ1 there has been changed to σ2 here.
Exercise for the Reader: (11.99)-(B)
Follow the above procedure and find the second solution for differential equa-
tion (11.99)-(B).
$$y''(x) + \frac{\beta x + \gamma}{x}\,y'(x) + \frac{\nu x + \rho}{x^2}\,y(x) = 0\,. \tag{11.150}$$
(1) : [β = 1 ; γ = −2 ; ν = 1 ; ρ = 2 ] .
(2) : [β = 1 ; γ = −3 ; ν = 1 ; ρ = 3 ] .
(3) : [β = 1 ; γ = −4 ; ν = 1 ; ρ = 4 ] .
(4) : [β = 1 ; γ = −5 ; ν = 1 ; ρ = 5 ] .
(5) : [β = 1 ; γ = −6 ; ν = 1 ; ρ = 6 ] . (11.151)
Use (11.85); set n = 2, 3, 4, 5, 6, 7 in (11.86); and remember that the first solution of these differential equations makes use of the larger of the two roots of the indicial equation, that is, ν0 = ν1. The smaller root, ν0 = ν2, of the indicial equation is used for the second solution. Because of space constraints, detailed information about the general solution, the first solution, and two different techniques for working out the second solution is provided only for (11.151)-(4).

$$y''(x) + \frac{x-5}{x}\,y'(x) + \frac{x+5}{x^2}\,y(x) = 0\,. \tag{11.152}$$
11.8 Examples Group V 359
The roots of its indicial equation are ν0 = 5 and ν0 = 1. Work directly with the variable ν0, without specifying either of its two roots. As such, (11.85), which relates a1 to a0, is written as

$$a_1 = -\,a_0\,\frac{\nu+\beta\nu_0}{\gamma+2\nu_0} = -\,a_0\,\frac{1+\nu_0}{2\nu_0-5}\,. \tag{11.153}$$
$$a_n = -\,a_{n-1}\,\frac{\beta(n-1+\nu_0)+\nu}{(n+\nu_0)(n+\nu_0+\gamma-1)+\rho} = -\,a_{n-1}\,\frac{(n-1+\nu_0)+1}{(n+\nu_0)(n+\nu_0-5-1)+5} = -\,a_{n-1}\,\frac{n+\nu_0}{(n+\nu_0-5)(n+\nu_0-1)}\,. \tag{11.154}$$

$$a_2 = \frac{(2+\nu_0)}{(\nu_0-3)(2\nu_0-5)}\,a_0\,. \tag{11.155}$$
$$a_3 = -\,\frac{(3+\nu_0)}{(\nu_0-3)(\nu_0-2)(2\nu_0-5)}\,a_0\,. \tag{11.156}$$
$$a_4 = \frac{(4+\nu_0)}{(\nu_0-3)(\nu_0-2)(\nu_0-1)(2\nu_0-5)}\,a_0\,. \tag{11.157}$$
$$a_5 = -\,\frac{(5+\nu_0)}{(\nu_0-3)(\nu_0-2)(\nu_0-1)(\nu_0)(2\nu_0-5)}\,a_0\,. \tag{11.158}$$
$$a_6 = \frac{(6+\nu_0)}{(\nu_0-3)(\nu_0-2)(\nu_0-1)(\nu_0)(\nu_0+1)(2\nu_0-5)}\,a_0\,, \quad a_7 = -\,\frac{(7+\nu_0)}{(\nu_0-3)\cdots(\nu_0+2)(2\nu_0-5)}\,a_0\,, \quad a_8 = \frac{(8+\nu_0)}{(\nu_0-3)\cdots(\nu_0+3)(2\nu_0-5)}\,a_0\,. \tag{11.159}$$
$$\frac{G(x,\nu_0)}{\sigma_0\,x^{\nu_0}} = 1 - \frac{1+\nu_0}{2\nu_0-5}\,x + \frac{(2+\nu_0)}{(\nu_0-3)(2\nu_0-5)}\,x^2 - \frac{(3+\nu_0)}{(\nu_0-3)(\nu_0-2)(2\nu_0-5)}\,x^3 + \frac{(4+\nu_0)}{(\nu_0-3)(\nu_0-2)(\nu_0-1)(2\nu_0-5)}\,x^4 - \frac{(5+\nu_0)}{(\nu_0-3)(\nu_0-2)(\nu_0-1)(\nu_0)(2\nu_0-5)}\,x^5 + \frac{(6+\nu_0)}{(\nu_0-3)\cdots(\nu_0+1)(2\nu_0-5)}\,x^6 - \frac{(7+\nu_0)}{(\nu_0-3)\cdots(\nu_0+2)(2\nu_0-5)}\,x^7 + \frac{(8+\nu_0)}{(\nu_0-3)\cdots(\nu_0+3)(2\nu_0-5)}\,x^8 - O(x^9)\,. \tag{11.160}$$
y1;1 (x, ν0 = 2) 3x 5 x3 x4 7 x5 x6 x7
(1) : = 1 − + x 2
− + − + − + ···
σ1 x 2 2 12 8 240 180 1, 120
x
= 1− exp(−x). (11.161)
2
For differential equation (11.151)-(2), β = 1, γ = −3, ν = 1 and ρ = 3, and the
two roots of the indicial equation are ν0 = 3 and ν0 = 1. The following result obtains
for its first solution, y1;2 (x, ν0 = 3).
y1;2 (x, ν0 = 3) 4x 5 x2 x3 7 x4 x5 x6 x7
(2) : =1− + − + − + − + ···
σ1 x 3 3 6 3 72 45 240 1, 512
x
= 1− exp(−x) . (11.162)
3
y1;3 (x, ν0 = 4) 5x 3 x2 7 x3 x4 3 x5 x6 11 x7
(3) : =1− + − + − + − + ···
σ1 x 4 4 4 24 12 160 288 20, 160
x
= 1− exp(−x) . (11.163)
4
(4): \frac{y_{1;4}(x, \nu_0 = 5)}{\sigma_1\, x^5} = 1 - \frac{6x}{5} + \frac{7x^2}{10} - \frac{4x^3}{15} + \frac{3x^4}{40} - \frac{x^5}{60} + \frac{11x^6}{3600} - \frac{x^7}{2100} + \cdots
= \left(1 - \frac{x}{5}\right) \exp(-x) . (11.164)
(5): \frac{y_{1;5}(x, \nu_0 = 6)}{\sigma_1\, x^6} = 1 - \frac{7x}{6} + \frac{2x^2}{3} - \frac{x^3}{4} + \frac{5x^4}{72} - \frac{11x^5}{720} + \frac{x^6}{360} - \frac{13x^7}{30240} + \cdots
= \left(1 - \frac{x}{6}\right) \exp(-x) . (11.165)
The differential equation (11.151)-(4),

y''(x) + \frac{x-5}{x}\, y'(x) + \frac{x+5}{x^2}\, y(x) = 0 , (11.166)

has indicial-equation roots ν0 = 5 and ν0 = 1.
The second solution uses the smaller of the two roots, namely ν0 = 1. A cursory look at the general solution (11.160) of (11.151)-(4) shows that it is replete with infinities. The infinities arise from the term (ν0 − 1) in the denominator, which goes to zero when ν0 = 1. To obviate the occurrence of these infinities, one proceeds as follows.
Replace the arbitrary constant σ0 in the general solution (11.160) by σpia;4 (ν0 − 1), where σpia;4 is also an arbitrary constant. Doing this transforms G(x, ν0) into G_pia(x, ν0). More importantly, it gets rid everywhere of the pesky denominator (ν0 − 1). As a result, the general solution (11.160) changes into
\frac{G_{pia}(x, \nu_0)}{\sigma_{pia;4}\, x^{\nu_0}}
= (\nu_0 - 1)\left[1 - \frac{1+\nu_0}{2\,\nu_0-5}\, x\right]
+ (\nu_0 - 1)\left[\frac{(2+\nu_0)}{(\nu_0-3)(2\,\nu_0-5)}\, x^2 - \frac{(3+\nu_0)}{(\nu_0-3)(\nu_0-2)(2\,\nu_0-5)}\, x^3\right]
+ \frac{(4+\nu_0)}{(\nu_0-3)(\nu_0-2)(2\,\nu_0-5)}\, x^4 - \frac{(5+\nu_0)}{(\nu_0-3)(\nu_0-2)(\nu_0)(2\,\nu_0-5)}\, x^5
+ \frac{(6+\nu_0)}{(\nu_0-3)(\nu_0-2)(\nu_0)(\nu_0+1)(2\,\nu_0-5)}\, x^6
- \frac{(7+\nu_0)}{(\nu_0-3)(\nu_0-2)(\nu_0)(\nu_0+1)(\nu_0+2)(2\,\nu_0-5)}\, x^7
+ \frac{(8+\nu_0)}{(\nu_0-3)(\nu_0-2)(\nu_0)(\nu_0+1)(\nu_0+2)(\nu_0+3)(2\,\nu_0-5)}\, x^8
- O(x^9) . (11.167)
The next step is to differentiate G_pia(x, ν0) with respect to ν0 and call the result y_{2;4}(x, ν0):
\frac{d\, G_{pia}(x, \nu_0)}{d\, \nu_0} = y_{2;4}(x, \nu_0)
= G_{pia}(x, \nu_0) \log(x)
+ \sigma_{pia;4}\, x^{\nu_0} \Big[ 1 - \frac{\nu_0^2 - 5\nu_0 + 1}{(2\nu_0-5)^2}\, x
+ \left( \frac{27}{(2\nu_0-5)^2} - \frac{10}{(\nu_0-3)^2} \right) x^2
- \left( \frac{66}{(2\nu_0-5)^2} - \frac{12}{(\nu_0-3)^2} - \frac{5}{(\nu_0-2)^2} \right) x^3
- \frac{4\nu_0^3 + 9\nu_0^2 - 120\nu_0 + 178}{(2\nu_0^3 - 15\nu_0^2 + 37\nu_0 - 30)^2}\, x^4
+ 2\, \frac{3\nu_0^4 + 5\nu_0^3 - 94\nu_0^2 + 185\nu_0 - 75}{\nu_0^2\,(2\nu_0^3 - 15\nu_0^2 + 37\nu_0 - 30)^2}\, x^5
- \frac{8\nu_0^5 + 21\nu_0^4 - 268\nu_0^3 + 403\nu_0^2 + 84\nu_0 - 180}{\nu_0^2\,(2\nu_0^4 - 13\nu_0^3 + 22\nu_0^2 + 7\nu_0 - 30)^2}\, x^6
+ \frac{10\nu_0^6 + 48\nu_0^5 - 327\nu_0^4 - 10\nu_0^3 + 1055\nu_0^2 - 224\nu_0 - 420}{\nu_0^2\,(2\nu_0^5 - 9\nu_0^4 - 4\nu_0^3 + 51\nu_0^2 - 16\nu_0 - 60)^2}\, x^7
- O(x^8) \Big] . (11.168)
\frac{G_{pia}(x, \nu_0 = 1)}{\sigma_{pia;4}}
= -\frac{5 x^5}{6} \left[ 1 - \frac{6x}{5} + \frac{7 x^2}{2! \cdot 5} - \frac{8 x^3}{3! \cdot 5} + \frac{9 x^4}{4! \cdot 5} - O(x^5) \right]
= -\frac{5}{6}\, x^5 \left( 1 - \frac{x}{5} \right) \exp(-x) , (11.169)

\frac{y_{2;4}(x, \nu_0 = 1)}{\sigma_{pia;4}}
= \frac{G_{pia}(x, \nu_0 = 1) \log(x)}{\sigma_{pia;4}} + x + \frac{2}{3} x^2 + \frac{1}{2} x^3 + \frac{2}{3} x^4 - O(x^5) . (11.170)
y''(x) + \frac{x-5}{x}\, y'(x) + \frac{x+5}{x^2}\, y(x) = 0 , (11.175)

we get

a_2(x) = 1 ; \quad a_1(x) = \frac{x-5}{x} ; \quad a_0(x) = \frac{x+5}{x^2} . (11.176)
x^5 \left(1 - \frac{x}{5}\right) \exp(-x) = x^5 - \frac{6}{5}\, x^6 + \frac{7}{10}\, x^7 - \frac{4}{15}\, x^8 + O(x^9) . (11.181)
The right-hand side of (11.178) is equal to the product of the left-hand sides of
(11.181) and (11.180). Below we equate that with the product of the right-hand sides
of (11.180) and (11.181). We get
\frac{y_2(x)}{(-4\sigma_2)} = x^5 \left(1 - \frac{x}{5}\right) \exp(-x) \int \frac{\exp(x)\, dx}{x^5 \left(1 - \frac{x}{5}\right)^2}
= x^5 \left(1 - \frac{x}{5}\right) \exp(-x) \left[ -\frac{1}{4}\, x^{-4} - \frac{7}{15}\, x^{-3} - \frac{51}{100}\, x^{-2} - \frac{389}{750}\, x^{-1} \right]
+ \frac{5}{24}\, x^5 \left(1 - \frac{x}{5}\right) \exp(-x) \log(x) + O(x^2) ,
= \left[ x^5 - \frac{6}{5}\, x^6 + \frac{7}{10}\, x^7 - \cdots \right] \left[ -\frac{1}{4}\, x^{-4} - \frac{7}{15}\, x^{-3} - \frac{51}{100}\, x^{-2} - \cdots \right]
+ \frac{5}{24}\, x^5 \left(1 - \frac{x}{5}\right) \exp(-x) \log(x) + O(x^2)
= -\frac{1}{4} \left[ -\frac{5}{6}\, x^5 \left(1 - \frac{x}{5}\right) \exp(-x) \log(x) + x + \frac{2x^2}{3} + \frac{x^3}{2} + \frac{2x^4}{3} + O(x^5) \right] .
Finally, multiply both sides by −4 to get the second solution of (11.151)-(4) by the method of order reduction:

y_{complete;4}(x) = \sigma_1\, x^5 \left(1 - \frac{x}{5}\right) \exp(-x)
+ \sigma_2 \left[ -\frac{5}{6}\, x^5 \left(1 - \frac{x}{5}\right) \exp(-x) \log(x) + x + \frac{2x^2}{3} + \frac{x^3}{2} + \frac{2x^4}{3} + O(x^5) \right] . (11.183)
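The order-reduction construction can also be verified numerically. The Python sketch below (illustrative, not from the text; the lower limit x0 = 1 is an arbitrary choice, since changing it only adds a multiple of the first solution) builds y2 = y1 ∫ exp(x) dx / [x^5 (1 − x/5)^2] and checks that both y1 and y2 satisfy (11.166) to within finite-difference accuracy:

```python
from math import exp

def y1(x):
    # first solution of (11.166): x^5 (1 - x/5) exp(-x)
    return x ** 5 * (1 - x / 5) * exp(-x)

def integrand(t):
    # exp(-∫ a1 dx) / y1^2 = exp(t) / (t^5 (1 - t/5)^2)
    return exp(t) / (t ** 5 * (1 - t / 5) ** 2)

def simpson(f, a, b, n=2000):
    # composite Simpson rule, n even
    h = (b - a) / n
    odd = sum(f(a + (2 * k - 1) * h) for k in range(1, n // 2 + 1))
    even = sum(f(a + 2 * k * h) for k in range(1, n // 2))
    return (f(a) + f(b) + 4 * odd + 2 * even) * h / 3

def y2(x, x0=1.0):
    return y1(x) * simpson(integrand, x0, x)

def residual(y, x, h=1e-3):
    # y'' + ((x-5)/x) y' + ((x+5)/x^2) y, via central differences
    d1 = (y(x + h) - y(x - h)) / (2 * h)
    d2 = (y(x + h) - 2 * y(x) + y(x - h)) / h ** 2
    return d2 + (x - 5) / x * d1 + (x + 5) / x ** 2 * y(x)

for x in (1.5, 2.0, 3.0):
    assert abs(residual(y1, x)) < 1e-4
    assert abs(residual(y2, x)) < 1e-3
```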
y''(x) + \frac{\beta x + \gamma}{x}\, y'(x) + \frac{\mu x^2 + \nu x + \rho}{x^2}\, y(x) = 0 . (11.184)

Bessel's equation of order zero obtains for β = 0, γ = 1, μ = 1, ν = 0, and ρ = 0 in (11.184):

y''(x) + \frac{1}{x}\, y'(x) + y(x) = 0 . (11.185)
It has a regular singular point at x = 0, and its indicial equation has a double root ν0 = 0. Accordingly, it is a member of category (2). As such, the following rule applies.
If the two roots—ν1 and ν2—of the indicial equation are equal—meaning if ν1 = ν2 ≡ ν0, or equivalently ρ = [(γ − 1)/2]²—and the relevant differential equation—meaning (11.185)—has a regular singular point at x = x0, then it will always possess a relatively easily accessible first solution, y1(x), in the form of a modified Taylor series:

y_1(x) = \sum_{n=0}^{\infty} a_n (x - x_0)^{(n + \nu_0)} . (11.186)
a_1 = -a_0 \frac{\beta\,\nu_0 + \nu}{2\,\nu_0 + \gamma} = -a_0 \frac{0+0}{0+1} = 0 . (11.187)

a_n = -\frac{a_{n-1} \left[\beta\,(n-1+\nu_0) + \nu\right] + a_{n-2}\,\mu}{(n+\nu_0)(n+\nu_0+\gamma-1) + \rho}
    = -\frac{a_{n-2}}{(n+\nu_0)^2} . (11.188)
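With ν0 = 0, the recurrence (11.188) can be iterated mechanically. A short Python sketch (illustrative, not from the text) confirms that it reproduces the coefficients (−1)^m / (2^{2m} (m!)^2) of J0:

```python
from fractions import Fraction
from math import factorial

nu0 = 0
a = {0: Fraction(1), 1: Fraction(0)}   # a0 = 1 and, by (11.187), a1 = 0
for n in range(2, 11):
    a[n] = -a[n - 2] / (n + nu0) ** 2  # the recurrence (11.188)

# Even coefficients should equal (-1)^m / (2^{2m} (m!)^2); odd ones vanish
for m in range(5):
    assert a[2 * m] == Fraction((-1) ** m, 4 ** m * factorial(m) ** 2)
    assert a[2 * m + 1] == 0
```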
a_2 = -\frac{a_0}{(2+\nu_0)^2} . (11.189)

a_3 = 0 . (11.190)

a_4 = -\frac{a_2}{(4+\nu_0)^2} = \frac{a_0}{(4+\nu_0)^2 (2+\nu_0)^2} . (11.191)

a_5 = a_7 = a_9 = \cdots = 0 , (11.192)

a_6 = -\frac{a_4}{(6+\nu_0)^2} = -\frac{a_0}{(6+\nu_0)^2 (4+\nu_0)^2 (2+\nu_0)^2} ,

a_8 = -\frac{a_6}{(8+\nu_0)^2} = \frac{a_0}{(8+\nu_0)^2 (6+\nu_0)^2 (4+\nu_0)^2 (2+\nu_0)^2} , (11.193)

and so on.
Listed below, in (11.194), is a compact form of the general solution, G_BESS0(x, ν0), of Bessel's equation, (11.185), of order zero. This general solution is made up of (11.187) and (11.189)–(11.193), etc.
\frac{G_{BESS0}(x, \nu_0)}{a_0\, x^{\nu_0}}
= 1 - \frac{1}{(2+\nu_0)^2}\, x^2 + \frac{1}{(4+\nu_0)^2 (2+\nu_0)^2}\, x^4
- \frac{1}{(6+\nu_0)^2 (4+\nu_0)^2 (2+\nu_0)^2}\, x^6
+ \frac{1}{(8+\nu_0)^2 (6+\nu_0)^2 (4+\nu_0)^2 (2+\nu_0)^2}\, x^8
- \frac{1}{(10+\nu_0)^2 (8+\nu_0)^2 (6+\nu_0)^2 (4+\nu_0)^2 (2+\nu_0)^2}\, x^{10}
+ O(x^{12}) . (11.194)
The indicial equation for Bessel's equation of order zero has a double root ν0 = 0. Setting, everywhere in its general solution (11.194), ν0 = 0 and the arbitrary constant a0 equal to unity leads to the first solution of Bessel's equation of order zero. Traditionally, it is labeled J0(x).
\frac{G_{BESS0}(x, \nu_0 = 0)}{a_0} = J_0(x)
= 1 - \frac{x^2}{(2)^2} + \frac{x^4}{(8)^2} - \frac{x^6}{(48)^2} + \frac{x^8}{(384)^2} - \frac{x^{10}}{(3840)^2} + O(x^{12})
= 1 - \frac{1}{(1!)^2} \left(\frac{x}{2}\right)^2 + \frac{1}{(2!)^2} \left(\frac{x}{2}\right)^4 - \frac{1}{(3!)^2} \left(\frac{x}{2}\right)^6 + \frac{1}{(4!)^2} \left(\frac{x}{2}\right)^8 - \frac{1}{(5!)^2} \left(\frac{x}{2}\right)^{10} + O(x^{12}) . (11.195)

J_0(x) = \sum_{n=0}^{\infty} \frac{(-1)^n}{(n!)^2} \left(\frac{x}{2}\right)^{2n} . (11.196)
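The series (11.196) can be compared against the standard integral representation J0(x) = (1/π) ∫₀^π cos(x sin θ) dθ (a known identity, not derived in the text). A Python sketch (illustrative):

```python
from math import cos, sin, pi, factorial

def j0_series(x, terms=30):
    # (11.196): sum of (-1)^n / (n!)^2 (x/2)^(2n)
    return sum((-1) ** n / factorial(n) ** 2 * (x / 2) ** (2 * n)
               for n in range(terms))

def j0_integral(x, n=10000):
    # midpoint-rule evaluation of (1/pi) * integral_0^pi cos(x sin(theta)) dtheta
    h = pi / n
    s = sum(cos(x * sin((k + 0.5) * h)) for k in range(n)) * h
    return s / pi

for x in (0.5, 1.0, 2.0, 5.0):
    assert abs(j0_series(x) - j0_integral(x)) < 1e-6
```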
Y_{BESS0;2;piaggio}(x, \nu_0) \equiv \frac{d\, G_{BESS0}(x, \nu_0)}{d\, \nu_0}
= G_{BESS0}(x, \nu_0) \cdot \log(x)
+ \sigma_1\, x^{\nu_0} \left[ \frac{2}{(2+\nu_0)^3}\, x^2 - \frac{4\,(3+\nu_0)}{(\nu_0^2 + 6\nu_0 + 8)^3}\, x^4
+ \frac{88 + 48\nu_0 + 6\nu_0^2}{(\nu_0^3 + 12\nu_0^2 + 44\nu_0 + 48)^3}\, x^6 - \cdots \right] . (11.197)
Equation (11.197) contains all the essentials of the second solution. However, in
order to convert it into proper form, one needs to do two simple things.
(i) : Replace the arbitrary constants a0 and σ1 in (11.197) by unity everywhere.
(ii) : Knowing that the double root of the relevant indicial equation is ν0 = 0,
replace the symbol ν0 with its value 0 everywhere in (11.197).
Once these changes have been made the second solution, YBESS0;2;piaggio (x, ν0 = 0),
of Bessel’s equation of order zero can be written as
Y_{BESS0;2;piaggio}(x, \nu_0 = 0)
= J_0(x) \log(x) + \frac{x^2}{4} - \frac{3}{128}\, x^4 + \frac{11}{13824}\, x^6 - \frac{25}{1769472}\, x^8
+ \frac{137}{884736000}\, x^{10} + O(x^{12}) . (11.198)
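Each coefficient in (11.198) equals (−1)^{n+1}(1 + 1/2 + ⋯ + 1/n)/(2^{2n} (n!)^2), a fact made explicit in (11.199). A short Python sketch (illustrative, not from the text) verifies this in exact arithmetic:

```python
from fractions import Fraction
from math import factorial

def harmonic(n):
    # n-th harmonic number 1 + 1/2 + ... + 1/n as an exact rational
    return sum(Fraction(1, k) for k in range(1, n + 1))

# Coefficients of x^{2n} in (11.198), n = 1..5
quoted = [Fraction(1, 4), Fraction(-3, 128), Fraction(11, 13824),
          Fraction(-25, 1769472), Fraction(137, 884736000)]
for n, c in enumerate(quoted, start=1):
    formula = Fraction((-1) ** (n + 1), 4 ** n * factorial(n) ** 2) * harmonic(n)
    assert c == formula
```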
Various terms in (11.198) can also be expressed in a form familiar in the literature.
For instance, one can write
\frac{x^2}{4} = (-1)^2\, \frac{1}{2^2 (1!)^2}\, x^2 ,

-\frac{3}{128}\, x^4 = (-1)^3\, \frac{1}{2^4 (2!)^2} \left(1 + \frac{1}{2}\right) x^4 ,

\frac{11}{13824}\, x^6 = (-1)^4\, \frac{1}{2^6 (3!)^2} \left(1 + \frac{1}{2} + \frac{1}{3}\right) x^6 ,

-\frac{25}{1769472}\, x^8 = (-1)^5\, \frac{1}{2^8 (4!)^2} \left(1 + \frac{1}{2} + \frac{1}{3} + \frac{1}{4}\right) x^8 ,

\frac{137}{884736000}\, x^{10} = (-1)^6\, \frac{1}{2^{10} (5!)^2} \left(1 + \frac{1}{2} + \frac{1}{3} + \frac{1}{4} + \frac{1}{5}\right) x^{10} . (11.199)
The results of the Piaggio-like second solution of Bessel’s equation of order zero,
given in (11.199), agree with those in the more elegant (11.200) below.
11.11 Bessel’s Equation of Order Zero 371
Y_{BESS0;2;piaggio}(x, \nu_0 = 0)
= J_0(x) \log(x) + \sum_{n=1}^{\infty} \frac{(-1)^{n+1}}{(n!)^2} \left(\frac{x}{2}\right)^{2n} \left(1 + \frac{1}{2} + \frac{1}{3} + \cdots + \frac{1}{n}\right) . (11.200)
Y_0(x)
= \frac{2}{\pi} \left[ J_0(x) \log(x) + \sum_{n=1}^{\infty} \frac{(-1)^{n+1}}{(n!)^2} \left(\frac{x}{2}\right)^{2n} \left(1 + \frac{1}{2} + \frac{1}{3} + \cdots + \frac{1}{n}\right) \right]
+ \frac{2}{\pi} \left\{ \left[\gamma_e - \log(2)\right] J_0(x) \right\} . (11.201)
Here γe ≈ 0.577215665 is the so-called Euler’s constant.
y''(x) + \frac{1}{x}\, y'(x) + y(x) = 0 , (11.205)

we get

a_2(x) = 1 ; \quad a_1(x) = \frac{1}{x} ; \quad a_0(x) = 1 . (11.206)
Use (11.207); insert y1(x) = J0(x) in (11.204); and write the resultant as

y_2(x) = \sigma_0\, J_0(x) \int \frac{dx}{x \left\{J_0(x)\right\}^2} . (11.208)

Power expand the integrand in (11.208), using the expansion for J0(x) given in (11.196):

\frac{1}{x \left\{J_0(x)\right\}^2} = \frac{1}{x} + \frac{1}{2}\, x + \frac{5}{32}\, x^3 + \frac{23}{576}\, x^5 + \frac{677}{73728}\, x^7
+ \frac{7313}{3686400}\, x^9 + \frac{218491}{530841600}\, x^{11} + O(x^{13}) . (11.209)
Therefore,

\frac{y_2(x)}{\sigma_0} = J_0(x) \int \frac{dx}{x \left\{J_0(x)\right\}^2}
= J_0(x) \log(x) + \left[ 1 - \frac{x^2}{4} + \frac{x^4}{64} - \frac{x^6}{2304} + \frac{x^8}{147456} - \cdots \right]
\times \left[ \frac{x^2}{4} + \frac{5}{128}\, x^4 + \frac{23}{3456}\, x^6 + \frac{677}{589824}\, x^8 + \frac{7313}{36864000}\, x^{10} + \cdots \right]
= J_0(x) \log(x) + \frac{x^2}{4} - \frac{3}{128}\, x^4 + \frac{11}{13824}\, x^6 - \frac{25}{1769472}\, x^8
+ \frac{137}{884736000}\, x^{10} - \cdots . (11.211)
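The expansion (11.209) amounts to a formal power-series inversion of x{J0(x)}². The Python sketch below (illustrative, not from the text) squares the J0 series and inverts it with exact rationals, recovering the quoted coefficients:

```python
from fractions import Fraction
from math import factorial

# J0 coefficients in powers of x^2: c_k = (-1)^k / (2^{2k} (k!)^2)
K = 6
c = [Fraction((-1) ** k, 4 ** k * factorial(k) ** 2) for k in range(K)]

# square the series: d = coefficients of J0(x)^2 in powers of x^2
d = [sum(c[i] * c[k - i] for i in range(k + 1)) for k in range(K)]

# invert it: e = coefficients of 1/J0(x)^2, from sum_{i<=k} e_i d_{k-i} = [k == 0]
e = [Fraction(1)]
for k in range(1, K):
    e.append(-sum(e[i] * d[k - i] for i in range(k)))

# 1/(x J0^2) = 1/x + (1/2) x + (5/32) x^3 + (23/576) x^5 + ... as in (11.209)
assert e[1] == Fraction(1, 2)
assert e[2] == Fraction(5, 32)
assert e[3] == Fraction(23, 576)
assert e[4] == Fraction(677, 73728)
```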
y''(x) + \frac{\beta x + \gamma}{x}\, y'(x) + \frac{\mu x^2 + \nu x + \rho}{x^2}\, y(x) = 0 . (11.214)

y''(x) + \frac{1}{x}\, y'(x) + \frac{x^2 - n_b^2}{x^2}\, y(x) = 0 . (11.215)
Clearly, x = 0 is the only singular point for (11.214) and (11.215). And it is a regular singular point. Recall that a regular singular point at x = 0 would require x · (1/x) and x² · (1/x²) both to be analytic at x = 0, which is clearly the case here. All the other points are ordinary points.
ν0 ≡ ν1 = nb ; ν0 ≡ ν2 = − nb . (11.217)
Because one root, −nb , is the negative of the other, nb , either of the two—say nb —can
be assumed to be positive. Depending on the size of nb , Bessel’s equation of order nb
may belong to any one of the three categories: (1), (2), or (3). Description of what these
categories refer to is available contiguous to (11.80)–(11.81), (11.82), and (11.83).
a_1 = -a_0 \frac{\beta\,\nu_0 + \nu}{2\,\nu_0 + \gamma} = -a_0 \frac{0+0}{2\nu_0 + 1} = 0 . (11.218)
For even values of n, (11.219) provides the recurrence relationship. And for n =
2, 4, 6, 8, etc., it gives
\frac{a_2}{a_0} = -\left[(2+\nu_0)^2 - n_b^2\right]^{-1} ,

\frac{a_4}{a_0} = \left[\left\{(2+\nu_0)^2 - n_b^2\right\} \left\{(4+\nu_0)^2 - n_b^2\right\}\right]^{-1} ,

\frac{a_6}{a_0} = -\left[\left\{(2+\nu_0)^2 - n_b^2\right\} \left\{(4+\nu_0)^2 - n_b^2\right\} \left\{(6+\nu_0)^2 - n_b^2\right\}\right]^{-1} ,

\frac{a_8}{a_0} = \left[\left\{(2+\nu_0)^2 - n_b^2\right\} \left\{(4+\nu_0)^2 - n_b^2\right\} \left\{(6+\nu_0)^2 - n_b^2\right\} \left\{(8+\nu_0)^2 - n_b^2\right\}\right]^{-1} ,

\frac{a_{10}}{a_0} = -\ \text{etc.} (11.220)
\frac{G_{BESSn_b}(x, \nu_0)}{a_0\, x^{\nu_0}}
= 1 - \frac{x^2}{(2+\nu_0)^2 - n_b^2}
+ \frac{x^4}{\left\{(2+\nu_0)^2 - n_b^2\right\} \left\{(4+\nu_0)^2 - n_b^2\right\}}
- \frac{x^6}{\left\{(2+\nu_0)^2 - n_b^2\right\} \left\{(4+\nu_0)^2 - n_b^2\right\} \left\{(6+\nu_0)^2 - n_b^2\right\}}
+ \frac{x^8}{\left\{(2+\nu_0)^2 - n_b^2\right\} \left\{(4+\nu_0)^2 - n_b^2\right\} \left\{(6+\nu_0)^2 - n_b^2\right\} \left\{(8+\nu_0)^2 - n_b^2\right\}}
+ O(x^{10}) . (11.221)
Consider a Bessel’s equation of order nb that belongs to category (1): meaning the
difference, 2 nb , between the two roots—both of which are real—of its indicial
equation is neither zero nor is it equal to an integer. As such, it has two linearly
independent solutions, Bnb (x, ν0 ) which are readily found by setting ν0 = ± nb .
For the case when the indicial equation roots difference 2nb is not an integer, the
first solution, Bnb (x, ν0 = nb ), of Bessel’s equation of order nb , is readily found, by
But, alas, n_b is not an integer and its factorial is not defined. Therefore, as currently written, the relationship (11.224) would be incorrect.
We need to generalize the factorial function to non-integral values of the argument. This can be done through the use of the gamma function, which is defined whenever the following integrals converge:

\Gamma(z) = \int_0^{\infty} t^{z-1} \exp(-t)\, dt = \int_0^1 (-\log t)^{z-1}\, dt . (11.225)

Indeed, Γ(z) is valid for all complex values of z: be they non-integral, negative, or imaginary.
The simplest property of the gamma function is the recurrence relationship

\Gamma(z + 1) = z! . (11.227)
11.12 Bessel’s Equation of Order nb 377
Part of the right-hand side of (11.228) is the function J_{n_b}(x), known as the 'Bessel function of the first kind of order n_b':

J_{n_b}(x) = \sum_{n=0}^{\infty} \frac{(-1)^n}{n!\ \Gamma(n + n_b + 1)} \left(\frac{x}{2}\right)^{2n + n_b} . (11.229)
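For half-odd orders the series (11.229) collapses to elementary functions; e.g., J_{1/2}(x) = √(2/(πx)) sin x (a known identity, not derived in the text). A Python sketch using the standard-library gamma function (illustrative):

```python
from math import gamma, sin, sqrt, pi

def jnb(x, nb, terms=25):
    # (11.229), with the factorials generalized via the gamma function
    return sum((-1) ** n / (gamma(n + 1) * gamma(n + nb + 1)) * (x / 2) ** (2 * n + nb)
               for n in range(terms))

# Known closed form: J_{1/2}(x) = sqrt(2/(pi*x)) * sin(x)
for x in (0.5, 1.0, 3.0):
    assert abs(jnb(x, 0.5) - sqrt(2 / (pi * x)) * sin(x)) < 1e-10
```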
The second solution B_{n_b}(x, ν0 = −n_b), which refers to ν0 = −n_b, is now straightforward to determine. For even values of n, the recurrence relationship is

a_n = -\frac{a_{n-2}}{(n - n_b)^2 - n_b^2} = -\frac{a_{n-2}}{n\,(n - 2 n_b)} . (11.230)
B_{n_b}(x, \nu_0 = -n_b) = a_0\, 2^{-n_b}\, \Gamma(-n_b + 1) \sum_{n=0}^{\infty} \frac{(-1)^n}{n!\ \Gamma(n - n_b + 1)} \left(\frac{x}{2}\right)^{2n - n_b} . (11.233)

Part of the right-hand side of (11.233) is the function J_{-n_b}(x), known as the 'Bessel function of the first kind of order −n_b':

J_{-n_b}(x) = \sum_{n=0}^{\infty} \frac{(-1)^n}{n!\ \Gamma(n - n_b + 1)} \left(\frac{x}{2}\right)^{2n - n_b} . (11.234)
The complete solution of differential equation (11.215), for the case where the indicial-equation roots differ by a non-integer—meaning the Bessel's equation is of category (1)—is the sum of the first and second solutions that were given in (11.228) and (11.233):

B_{n_b}(x, \nu_0 = n_b) + B_{n_b}(x, \nu_0 = -n_b)
= \sigma_1\, 2^{n_b}\, \Gamma(n_b + 1) \sum_{n=0}^{\infty} \frac{(-1)^n}{n!\ \Gamma(n + n_b + 1)} \left(\frac{x}{2}\right)^{2n + n_b}
+ \sigma_2\, 2^{-n_b}\, \Gamma(-n_b + 1) \sum_{n=0}^{\infty} \frac{(-1)^n}{n!\ \Gamma(n - n_b + 1)} \left(\frac{x}{2}\right)^{2n - n_b} . (11.235)
First Solution
Replacing n_b² by unity transforms Bessel's equation of order n_b—see (11.215)—into Bessel's equation of order unity—see (11.236).

y''(x) + \frac{1}{x}\, y'(x) + \frac{x^2 - 1}{x^2}\, y(x) = 0 . (11.236)

The two roots of the indicial equation of (11.236) are n_b = 1 and n_b = −1. Its first solution must use the larger root, namely n_b = 1. Because the general solution G_{BESSn_b}(x, ν0)—see (11.221)—is still applicable, as are the other equations leading to (11.228), all that is needed to convert (11.228) into the first solution, B_{1;x}(n_b = 1), of (11.236) is to change the variable n_b to unity. This leads to the result
B_{1;x}(n_b = 1) = a_0\, 2^1\, \Gamma(1 + 1) \sum_{n=0}^{\infty} \frac{(-1)^n}{n!\ \Gamma(n + 1 + 1)} \left(\frac{x}{2}\right)^{2n + 1}
= 2\, a_0 \sum_{n=0}^{\infty} \frac{(-1)^n}{n!\ (n + 1)!} \left(\frac{x}{2}\right)^{2n + 1} \equiv 2\, a_0\, J_1(x) , (11.237)
where J1 (x) is Bessel’s function of the first kind of order unity. [Compare (11.229).]
Second Solution
Bessel's equation of order unity is treated in Piaggio (ref. 10), pp. 114–115. Because the Piaggio book is not readily available, and the presentation there is minimal and somewhat abstruse, it is helpful to record a detailed solution here.
G_{BESS1}(x, \nu_0) = a_0\, x^{\nu_0} \left[ 1 - \frac{x^2}{(\nu_0+3)(\nu_0+1)} + \frac{x^4}{(\nu_0+5)(\nu_0+3)^2 (\nu_0+1)}
- \frac{x^6}{(\nu_0+7)(\nu_0+5)^2 (\nu_0+3)^2 (\nu_0+1)}
+ \frac{x^8}{(\nu_0+9)(\nu_0+7)^2 (\nu_0+5)^2 (\nu_0+3)^2 (\nu_0+1)}
+ O(x^{10}) \right] . (11.238)
When ν0 = 1, (11.238) is the first solution, B_{1;x}(n_b = 1), of Bessel's equation of order unity. [Reminder: The first solution, B_{1;x}(n_b = 1), of Bessel's equation of order unity is (11.237).] On the other hand, when ν0 is set equal to −1, the denominator of the general solution (11.238) goes to zero because of the presence of the factor (ν0 + 1) everywhere. To avoid this difficulty, Piaggio proposed changing the arbitrary constant a0 to another arbitrary constant a0 (ν0 + 1). The Piaggio (ref. 10) version of the general solution is
G_{BESSpiaggio}(x, \nu_0) = a_0\, x^{\nu_0} \left[ (\nu_0 + 1) - \frac{x^2}{(\nu_0+3)} + \frac{x^4}{(\nu_0+5)(\nu_0+3)^2}
- \frac{x^6}{(\nu_0+7)(\nu_0+5)^2 (\nu_0+3)^2}
+ \frac{x^8}{(\nu_0+9)(\nu_0+7)^2 (\nu_0+5)^2 (\nu_0+3)^2}
+ O(x^{10}) \right] . (11.239)
The important part of Piaggio's suggestion is the following: the second solution, B_{2;x}(n_b), of Bessel's equation of order unity is the derivative with respect to the variable ν0 of its general solution G_{BESSpiaggio}(x, ν0). The final requirement is that the result of such differentiation be evaluated at ν0 = −1, which is the second root of the relevant indicial equation. Recall (11.121), whereby

\frac{d\, x^{\nu_0}}{d\, \nu_0} = x^{\nu_0} \log(x) , (11.240)
11.13 Bessel’s Indicial Equation 381
\frac{d\, G_{BESSpiaggio}(x, \nu_0)}{d\, \nu_0} = \log(x)\, G_{BESSpiaggio}(x, \nu_0)
+ a_0\, x^{\nu_0} \left[ 1 + \frac{1}{(\nu_0+3)^2}\, x^2 - \frac{13 + 3\nu_0}{(\nu_0+5)^2 (\nu_0+3)^3}\, x^4
+ \frac{127 + 52\nu_0 + 5\nu_0^2}{(\nu_0+7)^2 (\nu_0+5)^3 (\nu_0+3)^3}\, x^6
- \frac{1383 + 753\nu_0 + 129\nu_0^2 + 7\nu_0^3}{(\nu_0+9)^2 (\nu_0+7)^3 (\nu_0+5)^3 (\nu_0+3)^3}\, x^8
+ O(x^{10}) \right] . (11.241)
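The x⁴ entry of (11.241) can be spot-checked numerically: it should be the ν0-derivative of the x⁴ coefficient of (11.239). A Python sketch using a central difference (illustrative, not from the text):

```python
def c4(nu0):
    # x^4 coefficient in (11.239): 1 / ((nu0+5)(nu0+3)^2)
    return 1.0 / ((nu0 + 5) * (nu0 + 3) ** 2)

def c4_prime(nu0):
    # x^4 coefficient in (11.241): -(13 + 3 nu0) / ((nu0+5)^2 (nu0+3)^3)
    return -(13 + 3 * nu0) / ((nu0 + 5) ** 2 * (nu0 + 3) ** 3)

for nu0 in (-1.0, 0.0, 1.0, 2.0):
    h = 1e-6
    numeric = (c4(nu0 + h) - c4(nu0 - h)) / (2 * h)   # numerical d c4 / d nu0
    assert abs(numeric - c4_prime(nu0)) < 1e-8
```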
(1): -\frac{c_0}{3} - \frac{c_1}{27} \left( 40 + 42x + 18x^2 + 9x^3 \right) .
(2): -\frac{c_0}{4} + \frac{c_2}{128} \left( 153 - 156x + 72x^2 - 32x^3 \right) .
(3): 4c_0 + 4c_3 \left( -192 + 72x - 12x^2 + x^3 \right) .
(4): 4c_0 + 4c_4 \left( 1920 + 768x + 144x^2 + 16x^3 + x^4 \right) .
(5): \frac{c_0}{8} + \frac{c_5}{16} \left( 45 - 60x + 36x^2 - 12x^3 + 2x^4 \right) .
(6): -\frac{c_0}{8} - \frac{c_6}{16} \left( 45 + 60x + 36x^2 + 12x^3 + 2x^4 \right) .
(7): c_0 + c_7 \left( -120x + 60x^2 - 5x^4 + x^5 \right) .
(8): c_0 + c_8 \left( -120x - 60x^2 + 5x^4 + x^5 \right) .
(9): \frac{81 c_0}{243} + \frac{c_9}{243} \left( 400 - 1320x + 720x^2 + 180x^3 - 270x^4 + 81x^5 \right) .
(10): \frac{256 c_0}{1024} + \frac{c_{10}}{1024} \left( -4095 + 8100x - 3960x^2 - 1440x^3 + 2400x^4 - 1152x^5 + 256x^6 \right) .
(1): -\frac{c_0}{3} + \frac{c_1 \exp(\alpha t)}{\alpha^2 + 2\alpha - 3} .
(2): -\frac{c_0}{4} + \frac{c_2 \exp(\alpha t)}{\alpha^2 - 3\alpha - 4} .
(3): 4c_0 + \frac{4 c_3 \exp(\alpha t)}{4\alpha^2 + 4\alpha + 1} .
(4): 4c_0 + \frac{4 c_4 \exp(\alpha t)}{4\alpha^2 - 4\alpha + 1} .
(5): \frac{c_0}{8} + \frac{c_5 \exp(\alpha t)}{\alpha^3 + 6\alpha^2 + 12\alpha + 8} .
(6): -\frac{c_0}{8} + \frac{c_6 \exp(\alpha t)}{\alpha^3 - 6\alpha^2 + 12\alpha - 8} .
(7): c_0 + \frac{c_7 \exp(\alpha t)}{\alpha^2 + \alpha + 1} .
(8): c_0 + \frac{c_8 \exp(\alpha t)}{\alpha^2 - \alpha + 1} .
(9): \frac{c_0}{3} + \frac{c_9 \exp(\alpha t)}{\alpha^2 + 2\alpha + 3} .
(10): \frac{c_0}{4} + \frac{c_{10} \exp(\alpha t)}{\alpha^2 + 3\alpha + 4} .
(1): \frac{\sin(x) - 2\cos(x)}{5} .
(2): -\frac{i}{17} \left[ 5\sin(x) - 3\cos(x) \right] .
(3): \frac{8}{25}\sin(x) - \frac{24}{1369}\sin(3x) - \frac{6}{25}\cos(x) + \frac{70}{1369}\cos(3x) .
(4): \frac{6}{25}\sin(x) - \frac{70}{1369}\sin(3x) - \frac{8}{25}\cos(x) + \frac{24}{1369}\cos(3x) .
(5): \frac{1}{4} + \frac{\sin(2x) - \cos(2x)}{16} .
(6): \frac{1}{4} + \frac{\sin(2x) + \cos(2x)}{16} .
(7): \frac{4\sin(2x) - 6\cos(2x)}{13} .
(8): -\frac{2i}{13} \left[ 3\sin(2x) - 2\cos(2x) \right] .
(9): \frac{2}{3} + \frac{16\sin(4x) - 26\cos(4x)}{233} .
(10): -\frac{1}{2} + \frac{\sin(4x) - \cos(4x)}{12} .
(1): \frac{\exp(2x)}{169} \left\{ [-4i + 39x]\sin(x) + [3 + 26x]\cos(x) \right\} .
(2): -\frac{i \exp(2x)}{125} \left\{ [2 + 35x]\sin(x) + [11 + 5x]\cos(x) \right\} .
(3): \frac{2}{25} \left[ 3\sin(x) + 4\cos(x) \right] - \frac{2}{1369} \left[ 35\sin(3x) + 12\cos(3x) \right] .
(4): -\frac{2}{25} \left[ 4\sin(x) + 3\cos(x) \right] + \frac{2}{1369} \left[ 12\sin(3x) + 35\cos(3x) \right] .
(5): -\frac{\exp(x)}{625} \left\{ 72\sin(x) + 21\cos(x) - x\left[ 65\sin(x) + 45\cos(x) \right] \right\} .
(6): \exp(x) \left\{ 3\sin(x) + x\left[ \sin(x) - \cos(x) \right] \right\} .
(7): \frac{2}{3}\exp(x)(1 - x) - \frac{2\exp(x)}{1369} \left[ (104 - 222x)\sin(2x) - (153 - 37x)\cos(2x) \right] .
(8): -2\exp(x)(1 - x) + \frac{2\exp(x)}{169} \left[ (32 + 26x)\sin(2x) + (43 - 39x)\cos(2x) \right] .
(9): 4\exp(-x) \left[ 2\sin(x) + x\cos(x) \right] .
(10): \frac{2\exp(x)}{1369} \left[ (130 - 259x)\sin(x) - (151 - 185x)\cos(x) \right] .
In problem (4) set (−x + 3) = exp(−t), (−x + 3)D = ϑ, and thereby transform it into problem (1).
In problem (5) set (2x − 1) = exp(2t), (2x − 1)D = ϑ, and thereby transform it into problem (2).
In problem (6) set (x + 1) = exp(t), (x + 1)D = ϑ and (x + 1)² D² = ϑ(ϑ − 1), and thereby transform it into problem (3).
u(x) = (-1)^{4/3}\, \frac{1}{\exp(x^3)\left\{-18x + \sigma_0\right\}} . (5)

u(x) = \pm \left[ 2x^2 - 1 + \sigma_0 \exp(-2x^2) \right]^{1/4} ; (6)
u(x) = \pm\, i \left[ 2x^2 - 1 + \sigma_0 \exp(-2x^2) \right]^{1/4} . (6)

u(x) = \frac{1}{x(-3x + \sigma_0)} . (7)

u(x) = \frac{1}{x^2 \left[ -2\log(x) + \sigma_0 \right]} . (8)

u(x) = \pm \frac{1}{\left[ 2x^2 + \sigma_0 x^4 \right]^{1/2}} . (9)
u(x) = \left[ \frac{9x^8 + \sigma_1}{2x^2} \right]^{1/3} ; (10)
u(x) = (-1)^{2/3} \left[ \frac{9x^8 + \sigma_1}{2x^2} \right]^{1/3} ; (10)
u(x) = (-1)^{4/3} \left[ \frac{9x^8 + \sigma_1}{2x^2} \right]^{1/3} . (10)
y = x\,\sigma + 3\,\sigma^2 . (1)
y = x\,\sigma + 3\,\sigma^3 . (2)
y = x\,\sigma - 2\sin\sigma . (3)
y = x\,\sigma + 2\cos\sigma . (4)

Singular Solution

12\,y + x^2 = 0 . (1)
81\,y^2 + 4\,x^3 = 0 . (2)
\sqrt{4 - x^2} = y - x\cos^{-1}\left(\frac{x}{2}\right) . (3)
\sqrt{4 - x^2} = y - x\sin^{-1}\left(\frac{x}{2}\right) . (4)
Singular Solution
The singular solution to (1), (2), and (3) is the same, namely: y(x) = 0.
(1): \frac{dy}{y} = \frac{dx}{x} ; \quad \log(y) = \log(x) + \text{const} . Or equivalently: y = \sigma_0\, x .

(2): \frac{dy}{\exp(3y)} = \exp(-2x)\, dx ; \quad \frac{\exp(-3y)}{-3} = \frac{\exp(-2x)}{-2} + \text{const} .
Or equivalently: y = -\frac{1}{3} \log\left[ \frac{3}{2} \exp(-2x) + \sigma_0 \right] .

(3): \frac{dy}{y} = 2x \exp(x^2)\, dx ; \quad \log(y) = \exp(x^2) + \sigma_0 . Or equivalently: y = \log^{-1}\left[ \exp(x^2) + \sigma_0 \right] .

(4): \frac{dy}{y^2} = x^2\, dx ; \quad -\frac{1}{y} = \frac{x^3}{3} + \sigma_0 . Or equivalently: y = -\frac{3}{x^3 + 3\,\sigma_0} .
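Each of these answers can be checked by differentiation. For example, for (2), y = −(1/3) log[(3/2) exp(−2x) + σ0] should satisfy dy/dx = exp(−2x) exp(3y), the separated form of the equation. A Python sketch with a numerical derivative (σ0 = 0.7 is an arbitrary test value, not from the text):

```python
from math import exp, log

sigma0 = 0.7   # arbitrary test value for the integration constant

def y(x):
    # answer (2): y = -(1/3) log[(3/2) exp(-2x) + sigma0]
    return -log(1.5 * exp(-2 * x) + sigma0) / 3.0

for x in (-1.0, 0.0, 2.0):
    h = 1e-6
    lhs = (y(x + h) - y(x - h)) / (2 * h)    # numerical dy/dx
    rhs = exp(-2 * x) * exp(3 * y(x))        # separated form dy/dx = e^{-2x} e^{3y}
    assert abs(lhs - rhs) < 1e-7
```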
(1): y = \frac{\sigma_3 - x}{x^2 + 2} .

(2): \sigma_3 = x\,(1 + y + y^2) ; or equivalently, y = \frac{1}{2}\left[ -1 \pm \sqrt{-3 + \frac{4\,\sigma_3}{x}} \right] .

(3): \sigma_3 = \frac{x^2}{2} + \frac{y^2}{2} + x \exp(y) .
(1): y = \frac{\sigma_0}{(2x + 3)^{3/2}} - \frac{2}{3} .

(2): y = \frac{\sigma_0}{\sqrt{x}} .

(3): y = \sigma_0 \exp\left( -\frac{3x^2}{2} \right) + \frac{2}{3} - x^2 .
(1): y = -1 + \frac{1}{x + \sigma_0} .

(2): y = -\frac{1}{2} + \frac{\sqrt{7}}{2} \tan\left[ \sigma_0 - \frac{\sqrt{7}}{2}\, x \right] .
Given below are the S_comp and I_pi for problems (1)–(5), together with some help with the solution.

I_{pi;\,t} = -\frac{4 \exp(3t)}{71} \sin(3t) ,

S_{comp;\,x} = (3x + 2) \left[ \sigma_1 (3x + 2)^{\sqrt{35}/6} + \sigma_2 (3x + 2)^{-\sqrt{35}/6} \right] ,

I_{pi;\,x} = -\frac{4(3x + 2)}{71} \sin[\log(3x + 2)] . (5)
(1): y = \frac{x^3}{12} - x + \sigma_0\, x^2 + \sigma_0\, x + \sigma_1 ,
or, y = \frac{x^3}{12} - x + \sigma_0 \left( -x^2 + \sigma_0\, x \right) + \sigma_1 .
(1): I_{pi}(x) = \frac{\exp(x)}{2} .
(2): I_{pi}(x) = -\frac{x^2}{2} + x + \frac{3}{2} .
(3): I_{pi}(x) = \frac{2}{3}\, x - \frac{2}{3} .
(4): I_{pi}(x) = \log(x) + 1 .
(5): I_{pi}(x) = \frac{1}{2x} .
(6): I_{pi}(x) = \frac{x}{2} \left[ \log(x) - 1 \right] .
(7): I_{pi}(x) = \frac{1}{8x} \left[ 2\log(x) + 1 \right] .
(8): I_{pi}(x) = (x + 1)\log(x + 1) - (x + 1)\log(x) - 1 .
(9): I_{pi}(x) = \frac{1}{2} .
(10): I_{pi}(x) = \frac{x}{2} + \frac{1}{2} .
u(x) = \exp\left( \frac{3x^2}{4} \right) y(x) .

(4): y(x) = \sigma_1\, x^{(1 + \sqrt{5})/2} + \sigma_2\, x^{(1 - \sqrt{5})/2} + x^{-1} ; \quad u(x) = x\, y(x) .
(1) : [β = −3 ; γ = 3 ; ν = 2 ; ρ = −4 ] .
(2) : [β = 2 ; γ = −3 ; ν = 1 ; ρ = −2 ] .
(3) : [β = 3 ; γ = −2 ; ν = −2 ; ρ = −3 ] .
(4) : [β = 1 ; γ = −2 ; ν = −1 ; ρ = −1 ] .
(5) : [β = −1 ; γ = −5 ; ν = −1 ; ρ = −4 ] .
(1b): \frac{u(x)}{\sigma_1\, x^{-1 - \sqrt{5}}}
= 1 + \frac{x}{19}\,(35 + 13\sqrt{5}) + \frac{x^2}{76}\,(230 + 99\sqrt{5}) + \frac{x^3}{2508}\,(9675 + 4283\sqrt{5})
+ \frac{x^4}{20064}\,(110555 + 49331\sqrt{5}) + \frac{x^5}{25080}\,(15306 + 6805\sqrt{5})
+ \frac{x^6}{109440}\,(23965 + 10671\sqrt{5}) + \frac{x^7}{5554080}\,(432160 + 192689\sqrt{5}) + \cdots
(2b): \frac{u(x)}{\sigma_1\, x^{(3 + \sqrt{21})/2}}
= 1 - \frac{x}{20}\,(29 + \sqrt{21}) + \frac{x^2}{680}\,(647 + 93\sqrt{21}) - \frac{x^3}{12240}\,(5847 + 1573\sqrt{21})
+ \frac{x^4}{288}\,(114 + 11\sqrt{21}) + \frac{x^5}{5760}\,(-3057 + 347\sqrt{21})
+ \frac{x^6}{172800}\,(95543 - 17163\sqrt{21}) + \frac{x^7}{2419200}\,(-1021789 + 205549\sqrt{21}) + \cdots
(3b): \frac{u(x)}{\sigma_1\, x^{(3 - \sqrt{21})/2}}
= 1 + \frac{x}{20}\,(-29 + \sqrt{21}) + \frac{x^2}{680}\,(647 - 93\sqrt{21}) + \frac{x^3}{12240}\,(-5847 + 1573\sqrt{21})
+ \frac{x^4}{288}\,(114 - 11\sqrt{21}) - \frac{x^5}{5760}\,(3057 + 347\sqrt{21})
+ \frac{x^6}{172800}\,(95543 + 17163\sqrt{21}) - \frac{x^7}{2419200}\,(1021789 + 205549\sqrt{21}) + \cdots
(5b): \frac{u(x)}{\sigma_1\, x^{3 - \sqrt{13}}}
= 1 + \frac{x}{51}\,(22 - 7\sqrt{13}) + \frac{x^2}{68}\,(15 - 4\sqrt{13}) + \frac{x^3}{8772}\,(588 - 167\sqrt{13})
+ \frac{x^4}{631584}\,(10267 - 2773\sqrt{13}) + \frac{x^5}{1052640}\,(3121 - 915\sqrt{13})
+ \frac{x^6}{743040}\,(407 - 87\sqrt{13}) - \frac{x^7}{15603840}\,(3205 + 1463\sqrt{13}) + \cdots
Chapter 13
Answer to Additional Assigned Problems
A function f(t) that is not necessarily periodic but is reasonably well behaved can be represented in terms of an integral involving its Fourier transform:

f(t) = \frac{1}{2\pi} \int_{-\infty}^{\infty} F(\omega) \exp(i\,\omega\, t)\, d\omega , (13.1)

where

F(\omega) = \int_{-\infty}^{\infty} f(t) \exp(-i\,\omega\, t)\, dt . (13.2)

Often F(ω) and f(t) are referred to as the inverse Fourier transform of each other.
(6): In order to determine the Fourier transform, F0(ω), of the function [f(t) exp(i ω0 t)], proceed as follows. Write:

F_0(\omega) = \int_{-\infty}^{\infty} \left[ f(t) \exp(i\,\omega_0\, t) \right] \exp(-i\,\omega\, t)\, dt
= \int_{-\infty}^{\infty} f(t) \exp(-i\,[\omega - \omega_0]\, t)\, dt
= F(\omega - \omega_0) . (13.4)

Clearly, therefore, the Fourier transform of [f(t) exp(i ω0 t)] is F(ω − ω0). (6)
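The shift property (13.4) is easy to confirm numerically for a concrete f(t). The Gaussian below, whose transform under the convention (13.2) is F(ω) = √π exp(−ω²/4), is an illustrative choice not taken from the text:

```python
from math import exp, pi, sqrt
import cmath

def f(t):
    # Gaussian test function (an assumption, not from the text)
    return exp(-t * t)

def F(w):
    # its known Fourier transform under the convention (13.2)
    return sqrt(pi) * exp(-w * w / 4.0)

def F0(w, w0, T=8.0, n=4000):
    # midpoint-rule evaluation of the integral of f(t) e^{i w0 t} e^{-i w t}
    h = 2 * T / n
    total = 0j
    for k in range(n):
        t = -T + (k + 0.5) * h
        total += f(t) * cmath.exp(1j * (w0 - w) * t)
    return total * h

w0 = 1.5
for w in (-1.0, 0.0, 2.0):
    assert abs(F0(w, w0) - F(w - w0)) < 1e-8
```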
13.4 Dirac’s Delta Function 405
It is necessary to give some description of the function δ(x − a). Indeed, Dirac's delta function deserves a detailed and thorough review—see, for instance, Dean G. Duffy (ref. 24). Still, the following relationships should suffice for the current needs.
For real a,

\delta(t - a) = \infty , \quad \text{when } t = a ,
\qquad\quad\ \, = 0 , \quad \text{when } t \neq a , (13.5)

and

\int_{-\infty}^{\infty} \delta(t - a)\, dt = 1 , (13.6)

\int_{-\infty}^{\infty} \delta(t - a)\, f(t)\, dt = f(a) . (13.7)
A choice popular with electrical engineers is to relate the delta function to the derivative of the Heaviside step function H(t − a). The Heaviside step function is defined for a ≥ 0 as

H(t - a) = 1 , \quad \text{for } t > a ,
\qquad\quad\ \, = 0 , \quad \text{for } t < a . (13.8)

\delta(t) = \frac{d\, H(t)}{dt} . (13.9)

Upon integration, (13.9) yields

H(t) = \int_{-\infty}^{t} \delta(x)\, dx . (13.10)
\delta(x) = \frac{1}{2L} + \frac{1}{L} \sum_{n=1}^{\infty} \cos\left( \frac{n\pi x}{L} \right) . (13.11)

In order to check whether the Boykin relationship (13.11) is valid, we examine the accuracy of its following prediction.
\int_{-L}^{L} f(x)\, \delta(x)\, dx = f(0) . (13.12)

In other words, we examine whether (13.13) indeed holds true—see (13.15) for the appropriate result for f(0). That is,

f(0) = a_0 + \sum_{n=1}^{\infty} a_n . (13.14)

However, before that can be done, the function f(x) needs to be represented in a suitable format.
To that end, proceed as follows. Given that the function f(x) is piecewise differentiable with period 2L in the range [−L, L], it can be represented by an infinite Fourier series:

f(x) = a_0 + \sum_{n=1}^{\infty} \left[ a_n \cos\left( \frac{n\pi x}{L} \right) + b_n \sin\left( \frac{n\pi x}{L} \right) \right] . (13.15)

The f(x) given above in (13.15) is now inserted into (13.13). Upon working out such (13.13), we find that the result is the same as that predicted by the use of Boykin's equation: meaning, it is equal to f(0). This fact can be confirmed by comparison with (13.16).

\int_{-L}^{L} \left[ a_0 + \sum_{n=1}^{\infty} \left( a_n \cos\frac{n\pi x}{L} + b_n \sin\frac{n\pi x}{L} \right) \right] \frac{1}{2L}\, dx
+ \int_{-L}^{L} \left[ a_0 + \sum_{n=1}^{\infty} \left( a_n \cos\frac{n\pi x}{L} + b_n \sin\frac{n\pi x}{L} \right) \right] \frac{1}{L} \sum_{n=1}^{\infty} \cos\frac{n\pi x}{L}\, dx
= a_0 + \sum_{n=1}^{\infty} a_n = f(0) . \quad \text{Q.E.D.} (13.16)
Bibliography
26. Green, George. "Essay on the Application of Mathematical Analysis to the Theory of Electricity and Magnetism". (1793)–(1841).
27. Maclaurin, Colin. (1698)–(1746).
28. Taylor, Brook. (1685)–(1731).
29. Hooke, R. (1635)–(1703).
30. Whittaker, E. T. and Watson, G. N. "Modern Analysis", pp. 194–203, Macmillan, N.Y. (1943).
31. Ampère, André-Marie. (1775)–(1836).
32. Volta, Alessandro, G. A. A. (1745)–(1827).
33. Dirac, Paul Adrien Maurice. (1902)–(1984).
34. Young, Peter. November (2009), physics.ucsc.edu.
35. Boykin, Timothy B. Am. J. of Phys. 71, 462–468 (2003).