UNIVERSITY OF LUCKNOW
MINOR PROJECT
“NUMERICAL ANALYSIS”
University Of Lucknow
CERTIFICATE
This is to certify that the term paper entitled "NUMERICAL ANALYSIS", which is
being submitted by the B.Sc. Mathematics (Semester 6) student ABHISHEK
GAUTAM, has been carried out as per the requirements given by the Department of
Mathematics, University of Lucknow.
This is an original study and all work has been done by the student himself. He
has fulfilled all conditions for submission of the term paper.
Abhishek Gautam
B.Sc. 3rd year
VIth Semester
[2110011015747]
CONTENTS
Abstract
1.1 Overview
1.2 Finite and Forward Difference Operator
1.3 Error Propagation and its Properties
1.4 Backward and Central Difference Operator
1.5 Shift, Average and Differential Operator
2.1 Overview
2.2 Bisection Method
2.3 Newton's Method and its Extensions
2.4 Secant Method
4.1 Overview
4.2 Newton Forward and Backward for Derivatives
4.3 Derivatives using Bessel's Formulae
4.4 Examples
4.5 Numerical Integration
4.6 Trapezoidal and Simpson's Rules
6.1 Overview
6.2 Formulation of Problem
6.3 Examples
CONCLUSION
REFERENCES
ABSTRACT
This project delves into the realm of numerical analysis with a focus on enhancing
computational efficiency through innovative methodologies and algorithms. Numerical
analysis serves as the backbone for solving mathematical problems using numerical
approximation techniques, indispensable across various scientific and engineering
domains.
The primary objective of this project is to explore and develop numerical
algorithms that can expedite computations without compromising accuracy. Traditional
numerical methods often encounter challenges in handling large-scale problems
efficiently, necessitating advancements to meet the demands of contemporary
computational tasks. By leveraging mathematical insights, algorithmic optimizations,
and computational techniques, this project endeavors to streamline numerical
computations for enhanced performance and scalability.
CHAPTER 1
INTRODUCTION
Numerical analysis is the study of algorithms that use numerical approximation (as opposed
to symbolic manipulations) for the problems of mathematical analysis (as distinguished
from discrete mathematics). It is the study of numerical methods that attempt to find
approximate solutions of problems rather than the exact ones. Numerical analysis finds
application in all fields of engineering and the physical sciences, and in the 21st century also
the life and social sciences, medicine, business and even the arts. Current growth in
computing power has enabled the use of more complex numerical analysis, providing
detailed and realistic mathematical models in science and engineering. Examples of
numerical analysis include: ordinary differential equations as found in celestial
mechanics (predicting the motions of planets, stars and galaxies), numerical linear algebra in
data analysis.
Numerical analysis deals with the process of obtaining numerical solutions to complex
problems. The majority of mathematical problems in science and engineering are difficult to
answer precisely, and in some cases it is impossible. To make a tough mathematical problem
easier to solve, an approximation is essential.
APPLICATIONS OF NUMERICAL
ANALYSIS
The overall goal of the field of numerical analysis is the design and analysis of
techniques to give approximate but accurate solutions to a wide variety of hard
problems, many of which are infeasible to solve symbolically:
Advanced numerical methods are essential in making numerical weather
prediction feasible.
Computing the trajectory of a spacecraft requires the accurate numerical solution of a
system of ordinary differential equations.
Car companies can improve the crash safety of their vehicles by using computer
simulations of car crashes. Such simulations essentially consist of solving partial
differential equations numerically.
Airlines use sophisticated optimization algorithms to decide ticket prices, airplane and
crew assignments and fuel needs. Historically, such algorithms were developed within
the overlapping field of operations research.
CHAPTER 2
OPERATORS USED IN NUMERICAL ANALYSIS
Many operators are used in numerical analysis and computation. Some of the frequently used
operators, viz. forward difference (∆), backward difference (∇), central difference (δ), shift
(E) and mean (µ), are discussed in this chapter. Let the function y = f(x) be defined on the
closed interval [a, b] and let x0, x1, . . . , xn be (n + 1) values of x. Assume that these values are
equidistant, i.e. xi = x0 + ih, i = 0, 1, 2, . . . , n, where h is a suitable real number called the difference
of the interval or spacing. When x = xi, the value of y is denoted by yi and is defined by yi =
f(xi). The values of x and y are called arguments and entries respectively.
Finite difference operators
Different types of finite difference operators are defined; among them the forward difference,
backward difference and central difference operators are widely used. In
this section, these operators are discussed.
△F(x) = F(x + h) − F(x)
△²F(x) = F(x + 2h) − 2F(x + h) + F(x)
△³F(x) = F(x + 3h) − 3F(x + 2h) + 3F(x + h) − F(x)
Then,
In particular, Δ𝑦0 = 𝑦1 − 𝑦0 , Δ𝑦1 = 𝑦2 − 𝑦1 , … , Δ𝑦𝑛−1 = 𝑦𝑛 − 𝑦𝑛−1 . These are called first order
differences.
The differences of the first order differences are called second order differences. The second order
differences are denoted by Δ2 𝑦0 , Δ2 𝑦1 , …
The third order differences are defined in a similar manner. In general,
Δⁿ⁺¹yᵢ = Δⁿyᵢ₊₁ − Δⁿyᵢ ,  n = 0, 1, 2, …
It must be remembered that Δ0 ≡ identity operator, i.e. Δ0 𝑓(𝑥) = 𝑓(𝑥) and Δ1 ≡ Δ.All the forward
differences can be represented in a tabular form, called the forward difference or diagonal difference
table.
Let x0, x1, …, x4 be five arguments. All the forward differences of these arguments are shown in
Table 3.1.
x     y       Δ        Δ²        Δ³       Δ⁴
x0    y0
              Δy0
x1    y1               Δ²y0
              Δy1                Δ³y0
x2    y2               Δ²y1              Δ⁴y0
              Δy2                Δ³y1
x3    y3               Δ²y2
              Δy3
x4    y4
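The construction of this diagonal difference table can be sketched in a few lines of Python. This is an illustrative sketch, not part of the original text; the function name `forward_difference_table` is our own:

```python
def forward_difference_table(y):
    """Build the diagonal (forward) difference table.
    Returns a list of columns: [y, Δy, Δ²y, ...]."""
    cols = [list(y)]
    while len(cols[-1]) > 1:
        prev = cols[-1]
        cols.append([prev[i + 1] - prev[i] for i in range(len(prev) - 1)])
    return cols

# Differences of y = x² tabulated at x = 0, 1, 2, 3, 4 (h = 1); the second
# differences are constant, and the third differences vanish:
table = forward_difference_table([0, 1, 4, 9, 16])
for order, col in enumerate(table):
    print(f"Δ^{order}:", col)
```

Note that each column is one entry shorter than the previous one, exactly as in the diagonal table above.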
If an entry, say y3, is in error by ε, the error propagates through the difference table as shown below:
x     y        Δy        Δ²y        Δ³y        Δ⁴y        Δ⁵y
x0    y0
               Δy0
x1    y1                 Δ²y0
               Δy1                  Δ³y0 + ε
x2    y2                 Δ²y1 + ε              Δ⁴y0 − 4ε
               Δy2 + ε              Δ³y1 − 3ε             Δ⁵y0 + 10ε
x3    y3 + ε             Δ²y2 − 2ε             Δ⁴y1 + 6ε
               Δy3 − ε              Δ³y2 + 3ε             Δ⁵y1 − 10ε
x4    y4                 Δ²y3 + ε              Δ⁴y2 − 4ε
               Δy4                  Δ³y3 − ε
x5    y5                 Δ²y4
               Δy5
x6    y6
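The binomial pattern of the propagated error can be checked numerically. The following sketch is our own illustration (taking ε = 1 on a flat tabulated function for clarity); it reproduces the coefficients 1, −4, 6, −4, 1 in the fourth difference column:

```python
def forward_differences(y, order):
    """Apply the forward difference operator `order` times."""
    for _ in range(order):
        y = [y[i + 1] - y[i] for i in range(len(y) - 1)]
    return y

exact = [0.0] * 9        # a flat tabulated function (all differences are zero)
perturbed = exact[:]
perturbed[4] += 1.0      # introduce an error ε = 1 in a single entry

# The fourth differences pick up ε times the binomial pattern (1, -4, 6, -4, 1):
print(forward_differences(perturbed, 4))   # [1.0, -4.0, 6.0, -4.0, 1.0]
```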
Properties
Some common properties of the forward difference operator are presented below:
(i) Δc = 0, where c is a constant.
(ii) Δ[f1(x) + f2(x) + ⋯ + fn(x)] = Δf1(x) + Δf2(x) + ⋯ + Δfn(x)
(iii) Δ[c f(x)] = c Δf(x)
Combining properties (ii) and (iii), one can generalise property (ii) as
(iv) Δ[c1 f1(x) + c2 f2(x) + ⋯ + cn fn(x)] = c1 Δf1(x) + c2 Δf2(x) + ⋯ + cn Δfn(x)
(vii) Δ[ˣCᵣ] = ˣCᵣ₋₁, where r is fixed and h = 1, since
Δ[ˣCᵣ] = ˣ⁺¹Cᵣ − ˣCᵣ = ˣCᵣ₋₁ as h = 1.
Δ[1/f(x)] = −Δf(x) / [f(x + h) f(x)]
Backward difference operator:
∇yᵢ = yᵢ − yᵢ₋₁ ,  i = n, n − 1, …, 1
In particular, ∇y1 = y1 − y0, ∇y2 = y2 − y1, …, ∇yn = yn − yn−1.
These are called the first order backward differences. The second order differences are denoted by
∇²y2, ∇²y3, …, ∇²yn. For example,
∇²y3 = y3 − 2y2 + y1 ,  ∇²y4 = y4 − 2y3 + y2 .
In general,
∇ⁿ⁺¹yᵢ = ∇ⁿyᵢ − ∇ⁿyᵢ₋₁ ,
where ∇⁰yᵢ = yᵢ, ∇¹yᵢ = ∇yᵢ.
P a g e | 11
Like forward differences, these backward differences can be written in a tabular form, called
backward difference or horizontal difference table.
𝑥 𝑦 ∇ ∇2 ∇3 ∇4
𝑥0 𝑦0
𝑥1 𝑦1 ∇𝑦1
𝑥2 𝑦2 ∇𝑦2 ∇2 𝑦2
𝑥3 𝑦3 ∇𝑦3 ∇2 𝑦3 ∇3 𝑦3
𝑥4 𝑦4 ∇𝑦4 ∇2 𝑦4 ∇3 𝑦4 ∇4 𝑦4
Central difference operator:
δf(x) = f(x + h/2) − f(x − h/2)
In general,
δⁿyᵢ = δⁿ⁻¹yᵢ₊₁/₂ − δⁿ⁻¹yᵢ₋₁/₂
𝑥 𝑦 𝛿 𝛿2 𝛿3 𝛿4
𝑥0 𝑦0
𝛿𝑦1/2
𝑥1 𝑦1 𝛿 2 𝑦1
𝛿𝑦3/2 𝛿 3 𝑦3/2
𝑥2 𝑦2 𝛿 2 𝑦2 𝛿 4 𝑦2
𝛿𝑦5/2 𝛿 3 𝑦5/2
𝑥3 𝑦3 𝛿 2 𝑦3
𝛿𝑦7/2
𝑥4 𝑦4
It may be observed that all odd (even) order differences have fraction suffices (integral suffices).
Shift operator: The shift operator is denoted by E and is defined by Ef(x) = f(x + h), i.e. Eyᵢ = yᵢ₊₁.
Note that the shift operator increases the subscript of y by one. When the shift operator is applied twice to
the function f(x), the subscript of y is increased by 2.
That is, E²f(x) = f(x + 2h).
In general, Eⁿf(x) = f(x + nh).
The inverse shift operator can also be found in a similar manner. It is denoted by E⁻¹ and is defined by
E⁻¹f(x) = f(x − h)
Similarly, second and higher order inverse operators are defined by E⁻²f(x) = f(x − 2h) and, in general, E⁻ⁿf(x) = f(x − nh).
Properties
A few common properties of the E operator are given below:
i. Ec = c, where c is a constant.
ii. E{c f(x)} = c E f(x).
iii. E{c1 f1(x) + c2 f2(x) + ⋯ + cn fn(x)} = c1 Ef1(x) + c2 Ef2(x) + ⋯ + cn Efn(x).
iv. Eᵐ Eⁿ f(x) = Eⁿ Eᵐ f(x) = Eᵐ⁺ⁿ f(x).
v. Eⁿ E⁻ⁿ f(x) = f(x). In particular, EE⁻¹ ≡ I, where I is the identity operator, sometimes denoted by 1.
vi. (Eⁿ)ᵐ f(x) = Eᵐⁿ f(x).
vii. E{f(x)/g(x)} = Ef(x)/Eg(x).
Average operator:
The average operator is denoted by μ and is defined by
μf(x) = (1/2)[f(x + h/2) + f(x − h/2)]
In terms of y,
μyᵢ = (1/2)[yᵢ₊₁/₂ + yᵢ₋₁/₂]
Here the average of the values of f(x) at the two points x + h/2 and x − h/2 is taken as the value
of μf(x).
Differential operator:
The differential operator, well known from differential calculus, is denoted by D. This operator
gives the derivative. That is,
Df(x) = (d/dx) f(x) = f′(x)
D²f(x) = (d²/dx²) f(x) = f″(x)
⋮
Dⁿf(x) = (dⁿ/dxⁿ) f(x) = f⁽ⁿ⁾(x)
CHAPTER 3
SOLUTIONS OF EQUATIONS IN ONE VARIABLE
In this chapter we deal with one of the most basic problems of numerical approximation, the root-finding
problem. For a given function f, we need to solve an equation of the form f(x) = 0; a solution of this
equation is called a root, or a zero, of the function f.
For example, for the equation f(x) = x² − 2x + 1 = 0, we can find the root x = 1, so that f(1) = 0.
For solving such equations, some of the most useful methods are iterative methods, since
these methods are well suited to implementation as computer programs. The methods we will
discuss here are the bisection method, the Secant method, and Newton's method and its extensions.
BISECTION METHOD :
The bisection method is based on the intermediate value theorem for continuous functions:
suppose f is a continuous function defined on the interval [a, b] with f(a) and f(b) of
opposite signs. Then a number p exists in (a, b) with f(p) = 0. Although the procedure will
work when there is more than one root in the interval (a, b), we assume for simplicity that
the root in this interval is unique. The method proceeds by repeatedly bisecting subintervals of [a, b] and,
at each step, locating the half containing the root p.
To begin, set a1 = a and b1 = b, and let p1 be the midpoint of [a, b], that is,
p1 = a1 + (b1 − a1)/2 = (a1 + b1)/2
If f(p1) ≠ 0, then f(p1) has the same sign as either f(a1) or f(b1).
If f(p1) and f(a1) have the same sign, then p ∈ (p1, b1); set a2 = p1 and b2 = b1. Otherwise,
p ∈ (a1, p1); set a2 = a1 and b2 = p1. The process is then reapplied to the interval [a2, b2].
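The procedure described above can be sketched as follows. This is our own illustration; the test equation x³ + 4x² − 10 = 0 on [1, 2] is a standard example and is not taken from this text:

```python
def bisection(f, a, b, tol=1e-8, max_iter=100):
    """Bisection method: f must change sign on [a, b]."""
    if f(a) * f(b) > 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    p = a
    for _ in range(max_iter):
        p = a + (b - a) / 2          # midpoint, as in the text
        if f(p) == 0 or (b - a) / 2 < tol:
            return p
        if f(a) * f(p) > 0:          # root lies in (p, b)
            a = p
        else:                        # root lies in (a, p)
            b = p
    return p

# Root of x³ + 4x² - 10 on [1, 2]:
print(bisection(lambda x: x**3 + 4*x**2 - 10, 1, 2))
```

Each iteration halves the bracketing interval, so the error after n steps is at most (b − a)/2ⁿ.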
NEWTON'S METHOD :
Suppose that f ∈ C²[a, b]. Let p0 ∈ [a, b] be an approximation to p such that f′(p0) ≠ 0 and
|p − p0| is "small." Consider the first Taylor polynomial for f(x) expanded about p0 and
evaluated at x = p:
f(p) = f(p0) + (p − p0) f′(p0) + [(p − p0)²/2] f″(ξ(p))
Where 𝜉(𝑝) lies between 𝑝 and 𝑝0 . Since 𝑓(𝑝) = 0, this equation gives
0 = f(p0) + (p − p0) f′(p0) + [(p − p0)²/2] f″(ξ(p))
Newton's method is derived by assuming that since |𝑝 − 𝑝0 | is small, the term involving
(𝑝 − 𝑝0 )2 is much smaller, so
0 ≈ 𝑓 (𝑝0 ) + (𝑝 − 𝑝0 )𝑓 ′ (𝑝0 )
p ≈ p0 − f(p0)/f′(p0) ≡ p1
This sets the stage for Newton's method, which starts with an initial approximation p0 and
generates the sequence {pn}, n ≥ 0, by
pn = pn−1 − f(pn−1)/f′(pn−1) ,  for n ≥ 1.
The figure above illustrates how the function values vary under the given circumstances.
Example 1
Consider the function 𝑓(𝑥) = cos 𝑥 − 𝑥 = 0. Approximate a root of 𝑓 using (a) a fixed-point
method, and (b) Newton's method.
Solution. (a) A solution to this root-finding problem is also a solution to the fixed-point
problem x = cos x, and the graph in Figure 2.8 implies that a single fixed point p exists in [0, π/2].
(Note that the variable in the trigonometric function is in radian measure, not degrees. This
will always be the case unless specified otherwise.)
(b) With p0 = π/4, Newton's method gives
p1 = p0 − f(p0)/f′(p0)
= π/4 − [cos(π/4) − π/4] / [−sin(π/4) − 1]
= π/4 − [√2/2 − π/4] / [−√2/2 − 1]
= 0.7395361337
p2 = p1 − [cos(p1) − p1] / [−sin(p1) − 1]
= 0.7390851781
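The iteration pn = pn−1 − f(pn−1)/f′(pn−1) for this example can be sketched in Python. This is our own illustrative implementation, starting from p0 = π/4 as in the text:

```python
import math

def newton(f, fprime, p0, tol=1e-10, max_iter=50):
    """Newton's method: p_n = p_{n-1} - f(p_{n-1}) / f'(p_{n-1})."""
    p = p0
    for _ in range(max_iter):
        p_new = p - f(p) / fprime(p)
        if abs(p_new - p) < tol:
            return p_new
        p = p_new
    return p

# Fixed point of x = cos x, i.e. root of f(x) = cos x - x:
root = newton(lambda x: math.cos(x) - x,
              lambda x: -math.sin(x) - 1,
              math.pi / 4)
print(root)   # ≈ 0.7390851332
```

Convergence is quadratic near a simple root, which is why only a few iterations are needed here.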
Secant Method
Newton's method is an extremely powerful technique, but it has a major weakness; the need
to know the value of the derivative of 𝑓 at each approximation. Frequently, 𝑓 ′ (𝑥) is far more
difficult and needs more arithmetic operations to calculate than 𝑓(𝑥).
By definition,
f′(pn−1) = lim_{x→pn−1} [f(x) − f(pn−1)] / (x − pn−1)
Letting x = pn−2, we obtain the approximation
f′(pn−1) ≈ [f(pn−2) − f(pn−1)] / (pn−2 − pn−1).
Using this approximation for f′(pn−1) in Newton's formula gives the Secant method:
pn = pn−1 − f(pn−1)(pn−1 − pn−2) / [f(pn−1) − f(pn−2)].
Example 2
Use the Secant method to find a solution to 𝑥 = cos 𝑥 and compare the approximations with
those given in previous example, which applied Newton's method.
Solution. For the Secant method, we need two initial approximations. Suppose we use p0 =
0.5 and p1 = π/4:
p2 = p1 − (p1 − p0)(cos p1 − p1) / [(cos p1 − p1) − (cos p0 − p0)]
= π/4 − (π/4 − 0.5)(cos(π/4) − π/4) / [(cos(π/4) − π/4) − (cos 0.5 − 0.5)]
= 0.7363841388
We note that although the formula for 𝑝2 seems to indicate a repeated computation, once
𝑓(𝑝0 ) and 𝑓 (𝑝1 ) are computed, they are not computed again.
Secant Method
𝑛 𝑝𝑛
0 0.5
1 0.7853981635
2 0.7363841388
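The table's iterates can be reproduced with a short sketch of the Secant method. This is our own illustration, using the same starting values p0 = 0.5 and p1 = π/4:

```python
import math

def secant(f, p0, p1, tol=1e-10, max_iter=50):
    """Secant method: the derivative in Newton's formula is replaced by a
    difference quotient through the two most recent iterates."""
    f0, f1 = f(p0), f(p1)
    for _ in range(max_iter):
        p2 = p1 - f1 * (p1 - p0) / (f1 - f0)
        if abs(p2 - p1) < tol:
            return p2
        p0, f0, p1, f1 = p1, f1, p2, f(p2)
    return p1

root = secant(lambda x: math.cos(x) - x, 0.5, math.pi / 4)
print(root)   # ≈ 0.7390851332
```

Note how f(p0) and f(p1) are stored and reused, matching the remark above that they are not computed again.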
CHAPTER 4
INTERPOLATION
Interpolation is the process of finding the most appropriate estimate for missing data. It is the
"art of reading between the lines of a table". Making the most probable estimate requires
the following assumptions:
(i) The frequency distribution is normal and not marked by sudden ups and downs.
Extrapolation
Extrapolation is the process of finding the values outside the given interval. It is also
possible that we may require information for future in which case the process of estimating
the most appropriate value is known as extrapolation.
Let 𝑦 = 𝑓(𝑥) be the function such that 𝑓(𝑥) is taking the values 𝑦0 , 𝑦1 , … , 𝑦𝑛 corresponding
to x = x 0 , x1 , … , x n.
In case the values of the independent variable are not equally spaced, and when the differences of
the dependent variable are not small, we use Lagrange's interpolation formula.
y = [(x − x1)(x − x2) ⋯ (x − xn)] / [(x0 − x1)(x0 − x2) ⋯ (x0 − xn)] · y0
 + [(x − x0)(x − x2) ⋯ (x − xn)] / [(x1 − x0)(x1 − x2) ⋯ (x1 − xn)] · y1 + ⋯
 + [(x − x0)(x − x1) ⋯ (x − xn−1)] / [(xn − x0)(xn − x1) ⋯ (xn − xn−1)] · yn
Example:
Using Lagrange's interpolation formula, find the value corresponding to x = 10 from the
following table:
x: 5  6  9  11
y: 12 13 14 16
Solution
Given x0 = 5, x1 = 6, x2 = 9, x3 = 11, x = 10, and
y0 = f(x0) = 12, y1 = f(x1) = 13, y2 = f(x2) = 14, y3 = f(x3) = 16.
Substituting in Lagrange's formula,
y = [(10 − 6)(10 − 9)(10 − 11)] / [(5 − 6)(5 − 9)(5 − 11)] · 12 + [(10 − 5)(10 − 9)(10 − 11)] / [(6 − 5)(6 − 9)(6 − 11)] · 13
 + [(10 − 5)(10 − 6)(10 − 11)] / [(9 − 5)(9 − 6)(9 − 11)] · 14 + [(10 − 5)(10 − 6)(10 − 9)] / [(11 − 5)(11 − 6)(11 − 9)] · 16
= [4 · 1 · (−1)] / [(−1)(−4)(−6)] · 12 + [5 · 1 · (−1)] / [1 · (−3)(−5)] · 13 + [5 · 4 · (−1)] / [4 · 3 · (−2)] · 14 + [5 · 4 · 1] / [6 · 5 · 2] · 16
= 2 − 13/3 + 35/3 + 16/3 = 44/3 = 14.667
Example: 1
Using Newton's divided difference formula, find the values of 𝑓(2), 𝑓(8) and 𝑓(15) given
the following table:
Solution
x:    4   5   7   10  11  13
f(x): 48  100 294 900 1210 2028
Since the intervals are unequal, we form the divided difference table:
x     f(x)    1st    2nd   3rd   4th
4     48
              52
5     100            15
              97           1
7     294            21          0
              202          1
10    900            27          0
              310          1
11    1210           33
              409
13    2028
Newton's divided difference formula then gives
f(x) = 48 + (x − 4)(52) + (x − 4)(x − 5)(15) + (x − 4)(x − 5)(x − 7)(1),
so that
f(2) = 4
f(8) = 448
f(15) = 3150
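The divided difference coefficients and the evaluation of the Newton form can be sketched as follows. This is our own illustration; it reproduces the three values requested in the example:

```python
def divided_difference_coeffs(xs, ys):
    """Top diagonal of the divided difference table (Newton form coefficients)."""
    coeffs = list(ys)
    n = len(xs)
    for order in range(1, n):
        # update in place, from the bottom up, so lower-order values survive
        for i in range(n - 1, order - 1, -1):
            coeffs[i] = (coeffs[i] - coeffs[i - 1]) / (xs[i] - xs[i - order])
    return coeffs

def newton_eval(xs, coeffs, x):
    """Evaluate the Newton form by a Horner-like nested scheme."""
    result = coeffs[-1]
    for i in range(len(coeffs) - 2, -1, -1):
        result = result * (x - xs[i]) + coeffs[i]
    return result

xs = [4, 5, 7, 10, 11, 13]
ys = [48, 100, 294, 900, 1210, 2028]
c = divided_difference_coeffs(xs, ys)
print([newton_eval(xs, c, x) for x in (2, 8, 15)])   # [4.0, 448.0, 3150.0]
```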
NEWTON'S FORWARD INTERPOLATION FORMULA
Let the function y = f(x) take the values y0, y1, … yn at the points x0, x1, … xn, where xn =
x0 + nh. Then Newton's forward interpolation formula is given by
y = y0 + pΔy0 + [p(p − 1)/2!]Δ²y0 + [p(p − 1)(p − 2)/3!]Δ³y0 + ⋯ , where p = (x − x0)/h.
Newton's backward interpolation formula: Let the function y = f(x) take the values
y0, y1, … yn at the points x0, x1, … xn, where xn = x0 + nh. Then Newton's backward interpolation
formula is given by
y = yn + p∇yn + [p(p + 1)/2!]∇²yn + [p(p + 1)(p + 2)/3!]∇³yn + ⋯ , where p = (x − xn)/h.
Example: From the following table, find y(2.65) by Newton's backward interpolation formula:
x: −1  0  1  2  3
y: −21 6  15 12 3
Solution:
p = (x − xn)/h = (2.65 − 3)/1 = −0.35
The difference table is
x    y     Δy    Δ²y   Δ³y   Δ⁴y
−1   −21
0    6     27
1    15    9     −18
2    12    −3    −12   6
3    3     −9    −6    6     0
where p = (x − x0)/h.
EXAMPLE:
Apply the Gauss forward and backward interpolation formulae to find y(25) for the following
data:
x: 20   24   28   32
y: 2854 3162 3544 3992
Solution: Given x = 25.
Let x0 = 24, p = (25 − 24)/4 = 0.25. The difference table is
x    p    y      Δy    Δ²y   Δ³y
20   −1   2854
                308
24   0    3162         74
                382          −8
28   1    3544         66
                448
32   2    3992
Gauss forward interpolation formula is given by
yp = y0 + pΔy0 + [p(p − 1)/2!]Δ²y−1 + [(p + 1)p(p − 1)/3!]Δ³y−1 + [(p + 1)p(p − 1)(p − 2)/4!]Δ⁴y−2 + ⋯
Hence
y(25) = 3162 + 0.25(382) + [0.25(0.25 − 1)/2](74) + [(1.25)(0.25)(0.25 − 1)/6](−8)
= 3162 + 95.5 − 6.9375 + 0.3125 = 3250.875
For the backward formula, let x0 = 28, p = (25 − 28)/4 = −0.75. The difference table is
x    p    y      Δy    Δ²y   Δ³y
20   −2   2854
                308
24   −1   3162         74
                382          −8
28   0    3544         66
                448
32   1    3992
Gauss backward interpolation formula is given by
yp = y0 + pΔy−1 + [(p + 1)p/2!]Δ²y−1 + [(p + 1)p(p − 1)/3!]Δ³y−2 + ⋯
Hence
y(25) = 3544 + (−0.75)(382) + [(0.25)(−0.75)/2](66) + [(0.25)(−0.75)(−1.75)/6](−8)
= 3544 − 286.5 − 6.1875 − 0.4375 = 3250.875
Bessel's formula
Bessel's formula is given by
yp = (y0 + y1)/2 + (p − 1/2)Δy0 + [p(p − 1)/2!]·[(Δ²y−1 + Δ²y0)/2] + [(p − 1/2)p(p − 1)/3!]Δ³y−1
 + [(p + 1)p(p − 1)(p − 2)/4!]·[(Δ⁴y−2 + Δ⁴y−1)/2] + ⋯ , where p = (x − x0)/h.
Example: Apply Bessel's formula to find y(5) from the following data:
x: 0   4   8   12
y: 143 158 177 199
Solution: Take x0 = 4; given h = 4 and p = (5 − 4)/4 = 0.25, the difference table is
x    p    y     Δy   Δ²y   Δ³y
0    −1   143
               15
4    0    158        4
               19          −1
8    1    177        3
               22
12   2    199
By Bessel's formula,
y(5) = (158 + 177)/2 + (0.25 − 0.5)(19) + [0.25(0.25 − 1)/2]·[(4 + 3)/2] + [(0.25 − 0.5)(0.25)(0.25 − 1)/6](−1)
= 167.5 − 4.75 − 0.328 − 0.0078 = 162.41
CHAPTER 5
NUMERICAL DIFFERENTIATION AND INTEGRATION
Definition:
Numerical differentiation is the process of computing the value of the derivative dy/dx, for some
particular value of x, from the given data (xi, yi) when the actual relationship between x
and y is not known.
Differentiating Newton's forward interpolation formula with respect to p,
dy/dp = Δy0 + [(2p − 1)/2!]Δ²y0 + [(3p² − 6p + 2)/3!]Δ³y0 + ⋯
Also dp/dx = 1/h. Now dy/dx = (dy/dp)(dp/dx).
This gives the value of dy/dx at any x which is a non-tabular value.
At x = x0, p = 0, and
(dy/dx)ₓ₌ₓ₀ = (1/h)[Δy0 − (1/2)Δ²y0 + (1/3)Δ³y0 − (1/4)Δ⁴y0 + ⋯]
Similarly,
d²y/dx² = (1/h²)[Δ²y0 + (p − 1)Δ³y0 + [(6p² − 18p + 11)/12]Δ⁴y0 + ⋯]
(d²y/dx²)ₓ₌ₓ₀ = (1/h²)[Δ²y0 − Δ³y0 + (11/12)Δ⁴y0 + ⋯]
d³y/dx³ = (1/h³)[Δ³y0 + [(12p − 18)/12]Δ⁴y0 + ⋯]
(d³y/dx³)ₓ₌ₓ₀ = (1/h³)[Δ³y0 − (3/2)Δ⁴y0 + ⋯]
NEWTON'S BACKWARD DIFFERENCE FORMULA TO COMPUTE DERIVATIVES:
Newton's backward difference formula gives
dy/dp = ∇yn + [(2p + 1)/2!]∇²yn + [(3p² + 6p + 2)/3!]∇³yn + ⋯
Also dp/dx = 1/h. Now dy/dx = (dy/dp)(dp/dx).
This gives the value of dy/dx at any x which is a non-tabular value.
At x = xn, p = 0, and
(dy/dx)ₓ₌ₓₙ = (1/h)[∇yn + (1/2)∇²yn + (1/3)∇³yn + (1/4)∇⁴yn + ⋯]
Similarly,
d²y/dx² = (1/h²)[∇²yn + (p + 1)∇³yn + [(6p² + 18p + 11)/12]∇⁴yn + ⋯]
(d²y/dx²)ₓ₌ₓₙ = (1/h²)[∇²yn + ∇³yn + (11/12)∇⁴yn + ⋯]
d³y/dx³ = (1/h³)[∇³yn + [(12p + 18)/12]∇⁴yn + ⋯]
(d³y/dx³)ₓ₌ₓₙ = (1/h³)[∇³yn + (3/2)∇⁴yn + ⋯]
The maximum and minimum values of a function y = f(x) can be found by equating the
first derivative dy/dx to zero and solving for x.
DERIVATIVES USING BESSEL'S FORMULA:
Bessel's formula is
yp = (y0 + y1)/2 + (p − 1/2)Δy0 + [p(p − 1)/2!]·[(Δ²y−1 + Δ²y0)/2] + [(p − 1/2)p(p − 1)/3!]Δ³y−1
 + [(p + 1)p(p − 1)(p − 2)/4!]·[(Δ⁴y−2 + Δ⁴y−1)/2] + ⋯ , where p = (x − x0)/h.
Differentiating with respect to p,
dy/dp = Δy0 + [(2p − 1)/2!]·[(Δ²y−1 + Δ²y0)/2] + [(3p² − 3p + 1/2)/3!]Δ³y−1
 + [(4p³ − 6p² − 2p + 2)/4!]·[(Δ⁴y−2 + Δ⁴y−1)/2] + ⋯
Also dp/dx = 1/h, so
dy/dx = (dy/dp)(dp/dx) = (1/h)[Δy0 + [(2p − 1)/2!]·[(Δ²y−1 + Δ²y0)/2] + [(3p² − 3p + 1/2)/3!]Δ³y−1
 + [(4p³ − 6p² − 2p + 2)/4!]·[(Δ⁴y−2 + Δ⁴y−1)/2] + ⋯]
At x = x0, p = 0, and
(dy/dx)ₓ₌ₓ₀ = (1/h)[Δy0 − (1/2)·[(Δ²y−1 + Δ²y0)/2] + (1/12)Δ³y−1 + (1/12)·[(Δ⁴y−2 + Δ⁴y−1)/2] + ⋯]
2
Problem 1: From the following table of values of x and y, find dy/dx and d²y/dx² for x = 1.05:
x      y        Δ        Δ²   Δ³   Δ⁴   Δ⁵   Δ⁶
1.00   1        0.02470
…
1.30   1.14017
(dy/dx)ₓ₌ₓ₀ = (1/h)[Δy0 − (1/2)Δ²y0 + (1/3)Δ³y0 − (1/4)Δ⁴y0 + ⋯]
= (1/0.05)[0.02411 − (1/2)(−0.00054) + (1/3)(−0.00003) − (1/4)(0.00001) + (1/5)(−0.00003)]
= 0.487763
(d²y/dx²)ₓ₌ₓ₀ = (1/h²)[Δ²y0 − Δ³y0 + (11/12)Δ⁴y0 − ⋯]
= (1/0.05²)[−0.00054 − 0.00003 + (11/12)(0.00001) − (5/6)(−0.00003)]
= −0.2144
Problem 2: From the following table of values of x and y, find dy/dx and d²y/dx² for x = 1.25:
x      y        Δ        Δ²   Δ³   Δ⁴   Δ⁵   Δ⁶
1.00   1        0.02470
…
1.30   1.14017
(dy/dx)ₓ₌ₓₙ = (1/h)[∇yn + (1/2)∇²yn + (1/3)∇³yn + (1/4)∇⁴yn + ⋯]
= (1/0.05)[0.02259 + (1/2)(−0.00047) + (1/3)(0.00004) + (1/4)(0.00001) + (1/5)(−0.00003)]
= 0.44733
(d²y/dx²)ₓ₌ₓₙ = (1/h²)[∇²yn + ∇³yn + (11/12)∇⁴yn + ⋯]
= (1/0.05²)[−0.00047 + 0.00004 + (11/12)(0.00001) + (5/6)(−0.00003)]
= −0.158332
NUMERICAL INTEGRATION
Definition: Numerical integration is the process of computing the value of the integral
∫ₐᵇ f(x)dx, or ∫ₐᵇ y dx, for particular values of x from the given data (xi, yi) when the
actual relationship between x and y is not known.
Trapezoidal Rule: ∫ₐᵇ y dx = (h/2)(A + 2B), where h = difference of the intervals, A = sum of
the first and last ordinates, and B = sum of the remaining ordinates.
Note:
1. In deriving the trapezoidal rule we replace the arc of the curve y = f(x) over each subinterval by its
chord.
Simpson's One Third Rule: ∫ₐᵇ y dx = (h/3)(A + 4B + 2C), where h = difference of the
intervals, A = sum of the first and last ordinates, B = sum of the even ordinates, and C = sum of
the remaining ordinates.
NOTE:
1. Simpson's one third rule approximates the area of two adjacent strips by the area
under a quadratic parabola.
Simpson's Three Eighth Rule: ∫ₐᵇ y dx = (3h/8)(A + 3B + 2C), where h = difference of the intervals,
A = sum of the first and last ordinates, B = sum of the ordinates whose suffix is not a multiple of 3
(these are multiplied by 3), and C = sum of the remaining ordinates.
Problem 1: Dividing the range into 10 equal parts, find the approximate value of ∫₀^π sin x dx
by a) the trapezoidal rule, b) Simpson's rule, c) Simpson's three eighth rule.
x: 0, π/10, 2π/10, 3π/10, 4π/10, 5π/10, 6π/10, 7π/10, 8π/10, 9π/10, π
Trapezoidal rule: ∫₀^π sin x dx = (h/2)(A + 2B), with h = π/10.
A = sin 0 + sin π = 0; B = sum of the remaining
ordinates = 0.3090 + 0.5878 + 0.8090 + 0.9511 + 1.0000 + 0.9511 + 0.8090 + 0.5878 + 0.3090 = 6.3138
∫₀^π sin x dx = [(π/10)/2](0 + 2(6.3138)) = 1.984
Simpson's rule: ∫₀^π sin x dx = (h/3)(A + 4B + 2C)
∫₀^π sin x dx = [(π/10)/3](0 + 4(3.2360) + 2(3.0778)) = 2.0000
0 3
Simpson's three eighth rule: ∫ₐᵇ y dx = (3h/8)(A + 3B + 2C)
B = sum of the remaining ordinates = 0.3090 + 0.8090 + 0.9511 + 0.9511 + 0.8090 + 0.3090 = 4.1382
∫₀^π sin x dx = [3(π/10)/8](0 + 3(4.1382) + 2(2.1756)) = 1.9742
CHAPTER 6
Euler Method
Euler's method is the most elementary approximation technique for solving initial-value
problems. Although it is seldom used in practice, the simplicity of its derivation can be used
to illustrate the techniques involved in the construction of some of the more advanced
techniques, without the cumbersome algebra that accompanies these constructions. The
object of Euler's method is to obtain approximations to the well-posed initial-value problem
dy/dt = f(t, y),  a ≤ t ≤ b,  y(a) = α  (6)
We first make the stipulation that the mesh points are equally distributed throughout the
interval [a, b]. This condition is ensured by choosing a positive integer N, setting h = (b −
a)/N, and selecting the mesh points
tᵢ = a + ih, for each i = 0, 1, 2, …, N.
The common distance between the points ℎ = 𝑡𝑖+1 − 𝑡𝑖 , is called the step size.
We will use Taylor's Theorem to derive Euler's method. Suppose that 𝑦(𝑡), the unique
solution to (6), has two continuous derivatives on [𝑎, 𝑏], so that for each 𝑖 = 0,1,2, … , 𝑁 − 1
y(tᵢ₊₁) = y(tᵢ) + (tᵢ₊₁ − tᵢ) y′(tᵢ) + [(tᵢ₊₁ − tᵢ)²/2] y″(ξᵢ)
for some ξᵢ in (tᵢ, tᵢ₊₁). Because h = tᵢ₊₁ − tᵢ, we have
y(tᵢ₊₁) = y(tᵢ) + h y′(tᵢ) + (h²/2) y″(ξᵢ)
and, since y′(tᵢ) = f(tᵢ, y(tᵢ)),
y(tᵢ₊₁) ≈ y(tᵢ) + h f(tᵢ, y(tᵢ))
Euler's method constructs wᵢ ≈ y(tᵢ), for each i = 1, 2, …, N, by deleting the remainder term. Thus,
Euler's method is
w₀ = α
wᵢ₊₁ = wᵢ + h f(tᵢ, wᵢ), for each i = 0, 1, …, N − 1.
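The recurrence above can be sketched directly. This is our own illustration on the test problem y′ = y, y(0) = 1, whose exact value at t = 1 is e:

```python
def euler(f, a, b, alpha, N):
    """Euler's method: w_{i+1} = w_i + h f(t_i, w_i), w_0 = alpha."""
    h = (b - a) / N
    t, w = a, alpha
    ws = [w]
    for _ in range(N):
        w = w + h * f(t, w)
        t = t + h
        ws.append(w)
    return ws

# y' = y, y(0) = 1 on [0, 1]; exact y(1) = e ≈ 2.71828.
approx = euler(lambda t, y: y, 0.0, 1.0, 1.0, 100)[-1]
print(approx)   # ≈ 2.7048 (Euler underestimates here; the global error is O(h))
```

Doubling N roughly halves the error, consistent with the h²/2 remainder dropped in the derivation.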
Midpoint Method:
wᵢ₊₁ = wᵢ + h f(tᵢ + h/2, wᵢ + (h/2) f(tᵢ, wᵢ)), for i = 0, 1, …, N − 1
Only three parameters are present in a₁ f(t + α₁, y + β₁), and all are needed in the match of T⁽²⁾. So a
more complicated form is required to satisfy the conditions for any of the higher-order Taylor
methods.
Initial and boundary conditions. The general solution of an ordinary differential equation of the nth order is of the form
φ(x, y, c₁, c₂, ⋯, cₙ) = 0
To obtain a particular solution, n conditions must be given so that the constants c₁, c₂, …, cₙ
can be determined.
If these conditions are prescribed at one point only (say, x = x₀), then the differential equation
together with the conditions constitutes an initial value problem of the nth order.
If the conditions are prescribed at two or more points, then the problem is termed as boundary
value problem.
In this chapter, we shall first describe methods for solving initial value problems and then
explain the finite difference method and shooting method for solving boundary value
problems.
Picard's Method
Consider the first order equation
dy/dx = f(x, y)  (1)
It is required to find that particular solution of (1) which assumes the value y₀ when x = x₀.
Integrating (1) between limits, we get
∫_{y₀}^{y} dy = ∫_{x₀}^{x} f(x, y) dx, or y = y₀ + ∫_{x₀}^{x} f(x, y) dx  (2)
This is an integral equation equivalent to (1), for it contains the unknown 𝑦 under the integral sign.
As a first approximation y₁ to the solution, we put y = y₀ in f(x, y) and integrate (2), giving
y₁ = y₀ + ∫_{x₀}^{x} f(x, y₀) dx
Similarly, the second and third approximations are
y₂ = y₀ + ∫_{x₀}^{x} f(x, y₁) dx
y₃ = y₀ + ∫_{x₀}^{x} f(x, y₂) dx
and, in general,
yₙ = y₀ + ∫_{x₀}^{x} f(x, yₙ₋₁) dx
Hence this method gives a sequence of approximations 𝑦1 , 𝑦2 , 𝑦3 … each giving a better result than the
preceding one.
EXAMPLE 1
Using Picard's process of successive approximations, obtain a solution up to the fifth
approximation of the equation 𝑑𝑦/𝑑𝑥 = 𝑦 + 𝑥, such that 𝑦 = 1 when 𝑥 = 0. Check your
answer by finding the exact particular solution.
Solution:
(i) We have y = 1 + ∫₀ˣ (x + y) dx
y₁ = 1 + ∫₀ˣ (x + 1) dx = 1 + x + x²/2
y₂ = 1 + ∫₀ˣ (x + y₁) dx = 1 + ∫₀ˣ (1 + 2x + x²/2) dx = 1 + x + x² + x³/6
y₃ = 1 + ∫₀ˣ (1 + 2x + x² + x³/6) dx = 1 + x + x² + x³/3 + x⁴/24
y₄ = 1 + ∫₀ˣ (1 + 2x + x² + x³/3 + x⁴/24) dx = 1 + x + x² + x³/3 + x⁴/12 + x⁵/120
y₅ = 1 + ∫₀ˣ (1 + 2x + x² + x³/3 + x⁴/12 + x⁵/120) dx
= 1 + x + x² + x³/3 + x⁴/12 + x⁵/60 + x⁶/720  (1)
(ii) dy/dx − y = x is Leibniz's linear equation in x. The integrating factor is e⁻ˣ, so
y e⁻ˣ = ∫ x e⁻ˣ dx + c = −(x + 1)e⁻ˣ + c
Since y = 1 when x = 0, c = 2. Therefore
y = 2eˣ − x − 1  (3)
Or, using the series eˣ = 1 + x + x²/2! + x³/3! + x⁴/4! + ⋯
We get
y = 1 + x + x² + x³/3 + x⁴/12 + x⁵/60 + x⁶/360 + ⋯ ∞
Comparing (1) and (3), it is clear that (1) approximates the exact particular solution (3) up to the
term in x⁵.
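Because the right-hand side x + y is polynomial in x at each stage, the Picard iterations can be carried out exactly with rational coefficient lists. The following sketch is our own illustration of the five iterations of the example (`picard_step` is our own name):

```python
from fractions import Fraction

def picard_step(coeffs):
    """One Picard iteration for y' = x + y, y(0) = 1.
    coeffs[k] is the coefficient of x**k in the current approximation.
    Returns the coefficients of 1 + ∫₀ˣ (t + y(t)) dt."""
    g = coeffs[:]                    # start from y's coefficients
    while len(g) < 2:
        g.append(Fraction(0))
    g[1] += 1                        # integrand is x + y(x)
    # integrate term by term: x**k -> x**(k+1)/(k+1), then add the constant 1
    return [Fraction(1)] + [c / (k + 1) for k, c in enumerate(g)]

y = [Fraction(1)]                    # y0 = 1
for _ in range(5):
    y = picard_step(y)               # y5 after five iterations

# y5 = 1 + x + x² + x³/3 + x⁴/12 + x⁵/60 + x⁶/720, matching the text
print(y)
```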
CHAPTER 7
MILNE'S PREDICTOR-CORRECTOR METHOD
OVERVIEW
Milne's Predictor Method, also known as Milne's Predictor-Corrector Method, is a
numerical technique for solving ordinary differential equations (ODEs). It
is a multistep method, closely related to the Adams-Bashforth and Adams-Moulton methods, and
belongs to the family of predictor-corrector methods.
The basic idea behind Milne's Predictor Method is to predict the value of the dependent
variable at a future time step using an extrapolation formula, and then use this prediction to
correct the estimate iteratively until a desired level of accuracy is achieved.
1. Prediction Step:
Using previous known values of the dependent variable and the given ODE, predict the value of the
dependent variable at the next time step using an extrapolation formula (typically a higher-order
Adams-Bashforth formula).
2. Correction Step:
Use the predicted value obtained in the previous step as an initial guess to correct the estimate
iteratively. This correction step involves solving an implicit equation derived from the ODE and the
extrapolation formula used in the prediction step.
3. Iteration:
Iterate the correction step until the corrected estimate converges to the desired level of accuracy,
typically using iterative methods like Newton's method or other root-finding techniques.
4. Update:
Once the corrected estimate is obtained, update the solution and proceed to the next time step.
The advantage of Milne's Predictor Method lies in its ability to achieve higher accuracy
compared to explicit methods like Euler's method, while still maintaining simplicity and
computational efficiency. However, it may suffer from stability issues for certain types of
ODEs, and care must be taken in the choice of step size and extrapolation formula to ensure
stability and accuracy.
Overall, Milne's Predictor Method is a valuable tool in the numerical solution of ODEs,
particularly when higher accuracy is desired and when the computational cost of more
complex methods like Runge-Kutta methods is prohibitive.
FORMULATION
Suppose the values y₁, y₂, y₃ at x₁ = x₀ + h, x₂ = x₀ + 2h, x₃ = x₀ + 3h are known. We next
calculate y₄ from the relation
y₄ = y₀ + ∫_{x₀}^{x₀+4h} f(x, y) dx
Substituting Newton's forward interpolation formula for f,
y₄ = y₀ + ∫_{x₀}^{x₀+4h} (f₀ + nΔf₀ + [n(n − 1)/2]Δ²f₀ + ⋯) dx
[Put x = x₀ + nh, dx = h dn]
= y₀ + h ∫₀⁴ (f₀ + nΔf₀ + [n(n − 1)/2]Δ²f₀ + ⋯) dn
= y₀ + h (4f₀ + 8Δf₀ + (20/3)Δ²f₀ + ⋯)
Neglecting fourth and higher order differences and expressing Δf₀, Δ²f₀ and Δ³f₀ in terms of the
function values, we get the predictor
y₄⁽ᵖ⁾ = y₀ + (4h/3)(2f₁ − f₂ + 2f₃)
Then
f₄ = f(x₀ + 4h, y₄)
and the corrector is
y₄⁽ᶜ⁾ = y₂ + (h/3)(f₂ + 4f₃ + f₄)
Then an improved value of f₄ is computed and the corrector is applied again to find a still better
value of y₄. We repeat this step until y₄ remains unchanged. Once y₄ and f₄ are obtained to the desired
degree of accuracy, y₅ = y(x₀ + 5h) is found from the predictor as
y₅⁽ᵖ⁾ = y₁ + (4h/3)(2f₂ − f₃ + 2f₄)
and 𝑓5 = 𝑓(𝑥0 + 5ℎ, 𝑦5 ) is calculated. Then a better approximation to the value of 𝑦5 is obtained
from the corrector as
y₅⁽ᶜ⁾ = y₃ + (h/3)(f₃ + 4f₄ + f₅)
We repeat this step until y₅ becomes stationary, and then proceed to calculate y₆ as before.
This is Milne's predictor-corrector method. To ensure greater accuracy, we must first improve the
accuracy of the starting values and then subdivide the intervals.
EXAMPLE 1
Apply Milne's method, to find a solution of the differential equation 𝑦 ′ = 𝑥 − 𝑦 2 in the range 0 ≤
𝑥 ≤ 1 for the boundary condition 𝑦 = 0 at 𝑥 = 0.
Solution:
Using Picard's method, we have
𝑥
𝑦 = 𝑦(0) + ∫ 𝑓(𝑥, 𝑦)𝑑𝑥, where 𝑓(𝑥, 𝑦) = 𝑥 − 𝑦 2
0
𝑥 𝑥2
Giving 𝑦1 = 0 + ∫0 𝑥𝑑𝑥 =
2
𝑥 𝑥4 𝑥2 𝑥5
Giving 𝑦2 = ∫0 (𝑥 − ) 𝑑𝑥 = −
4 2 20
𝑥 2
𝑥2 𝑥5 𝑥 2 𝑥5 𝑥8 𝑥 11
𝑦3 = ∫ [𝑥 − ( − ) ] 𝑑𝑥 = − + −
0 2 20 2 20 160 4400
Now let us determine the starting values for Milne's method from (i), by choosing h = 0.2.
Using the predictor, y₄⁽ᵖ⁾ = y₀ + (4h/3)(2f₁ − f₂ + 2f₃), at
x = 0.8:  y₄⁽ᵖ⁾ = 0.3049, f₄ = 0.7070
and the corrector, y₄⁽ᶜ⁾ = y₂ + (h/3)(f₂ + 4f₃ + f₄), yields
y₄⁽ᶜ⁾ = 0.3046, f₄ = 0.7072  (ii)
Applying the corrector again gives y₄⁽ᶜ⁾ = 0.3046, the same as in (ii).
Then the predictor
y₅⁽ᵖ⁾ = y₁ + (4h/3)(2f₂ − f₃ + 2f₄)
gives, at x = 1.0:  y₅⁽ᵖ⁾ = 0.4554, f₅ = 0.7926
and the corrector y₅⁽ᶜ⁾ = y₃ + (h/3)(f₃ + 4f₄ + f₅) gives
y₅⁽ᶜ⁾ = 0.4555, f₅ = 0.7925
Applying the corrector again gives y₅⁽ᶜ⁾ = 0.4555, the same value as before.
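The predictor-corrector loop of this example can be sketched in code. This is our own illustration; for simplicity it generates the starting values with a Runge-Kutta step rather than Picard's series, which is an assumption that departs from the text:

```python
def milne(f, x0, y0, h, n_steps):
    """Milne's predictor-corrector; starting values y1..y3 from RK4 steps."""
    def rk4_step(x, y):
        k1 = h * f(x, y)
        k2 = h * f(x + h / 2, y + k1 / 2)
        k3 = h * f(x + h / 2, y + k2 / 2)
        k4 = h * f(x + h, y + k3)
        return y + (k1 + 2 * k2 + 2 * k3 + k4) / 6

    xs = [x0 + i * h for i in range(n_steps + 1)]
    ys = [y0]
    for i in range(3):                          # starting values y1, y2, y3
        ys.append(rk4_step(xs[i], ys[i]))
    fs = [f(x, y) for x, y in zip(xs, ys)]
    for i in range(3, n_steps):
        # predictor: y_{i+1} = y_{i-3} + (4h/3)(2f_{i-2} - f_{i-1} + 2f_i)
        yp = ys[i - 3] + 4 * h / 3 * (2 * fs[i - 2] - fs[i - 1] + 2 * fs[i])
        for _ in range(3):                      # corrector, applied repeatedly
            fp = f(xs[i + 1], yp)
            yp = ys[i - 1] + h / 3 * (fs[i - 1] + 4 * fs[i] + fp)
        ys.append(yp)
        fs.append(f(xs[i + 1], yp))
    return xs, ys

# y' = x - y², y(0) = 0, h = 0.2 (the worked example):
xs, ys = milne(lambda x, y: x - y * y, 0.0, 0.0, 0.2, 5)
print(ys[4], ys[5])   # compare with the text's values 0.3046 and 0.4555
```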
Runge's Method
Consider the differential equation
dy/dx = f(x, y),  y(x₀) = y₀  (1)
Clearly the slope of the curve through P(x₀, y₀) is f(x₀, y₀). Integrating both sides of (1) from
(x₀, y₀) to (x₀ + h, y₀ + k), we have
∫_{y₀}^{y₀+k} dy = ∫_{x₀}^{x₀+h} f(x, y) dx
To evaluate the integral on the right, we take 𝑁 as the mid-point of 𝐿𝑀 and find the values of 𝑓(𝑥, 𝑦)(
i.e., 𝑑𝑦/𝑑𝑥) at the points 𝑥0 , 𝑥0 + ℎ/2, 𝑥0 + ℎ. For this purpose, we first determine the values of 𝑦 at
these points.
Also
y_T = MT = LP + RT = y₀ + PR · tan θ = y₀ + h f(x₀, y₀)
Now the value y_Q at x₀ + h is given by the point T″, where the line through P drawn with the slope at
T(x₀ + h, y_T) meets MQ.
k = ∫_{x₀}^{x₀+h} f(x, y) dx = (h/6)[f_P + 4f_S + f_Q], by Simpson's rule
= (h/6)[f(x₀, y₀) + 4 f(x₀ + h/2, y_S) + f(x₀ + h, y_Q)]
Calculate successively
k₁ = h f(x₀, y₀),
k₂ = h f(x₀ + h/2, y₀ + k₁/2),
k′ = h f(x₀ + h, y₀ + k₁),
and
k₃ = h f(x₀ + h, y₀ + k′).
Finally compute k = (1/6)(k₁ + 4k₂ + k₃), which gives the required approximate value as y₁ = y₀ + k.
6
First order R-K method. We have seen that Euler's method gives
y₁ = y(x₀ + h) = y₀ + h y₀′ + (h²/2) y₀″ + ⋯
It follows that Euler's method agrees with the Taylor series solution up to the term in h.
Second order R-K method. The modified Euler's method gives
y₁ = y₀ + (h/2)[f(x₀, y₀) + f(x₀ + h, y₁)]
Replacing y₁ on the right by its Euler estimate y₀ + h f₀, we get
y₁ = y₀ + (h/2)[f₀ + f(x₀ + h, y₀ + h f₀)], where f₀ = f(x₀, y₀)  (2)
The Taylor series gives
y₁ = y(x₀ + h) = y₀ + h y₀′ + (h²/2!) y₀″ + (h³/3!) y₀‴ + ⋯  (4)
Expanding f(x₀ + h, y₀ + h f₀) by Taylor's series for a function of two variables, (2) gives
y₁ = y₀ + (h/2)[f₀ + {f₀ + h(∂f/∂x)₀ + h f₀ (∂f/∂y)₀ + O(h²)}]
= y₀ + (1/2)[h f₀ + h f₀ + h²{(∂f/∂x)₀ + f₀(∂f/∂y)₀} + O(h³)]
= y₀ + h f₀ + (h²/2) f₀′ + O(h³)  (3)  [∵ df(x, y)/dx = ∂f/∂x + f ∂f/∂y]
Comparing (3) and (4), it follows that the modified Euler's method agrees with the Taylor's series
solution upto the term in ℎ2 .
Hence the modified Euler's method is the Runge-Kutta method of the second order. ∴ The second
order Runge-Kutta formula is
1
𝑦1 = 𝑦0 + (𝑘1 + 𝑘2 )
2
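The second order formula can be sketched in a few lines (again, the names are my own). Tried on the illustrative equation dy/dx = x + y, y(0) = 1, whose exact solution is y = 2e^x − x − 1, the error of a single step is of order h³, as the derivation predicts.

```python
# One second-order Runge-Kutta (modified Euler) step.
import math

def rk2_step(f, x0, y0, h):
    k1 = h * f(x0, y0)
    k2 = h * f(x0 + h, y0 + k1)
    return y0 + (k1 + k2) / 2  # agrees with Taylor's series up to h^2

y1 = rk2_step(lambda x, y: x + y, 0.0, 1.0, 0.2)
print(round(y1, 4))  # → 1.24, against the exact y(0.2) ≈ 1.2428
```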
(iii) Third order R-K method. Similarly, it can be seen that Runge's method agrees with the Taylor's series solution up to the term in h³. Hence Runge's method is the Runge-Kutta method of the third order, with formula

y_1 = y_0 + (1/6)(k_1 + 4k_2 + k_3)

where k_1 = h f(x_0, y_0), k_2 = h f(x_0 + h/2, y_0 + k_1/2), and k_3 = h f(x_0 + h, y_0 + k′) with k′ = h f(x_0 + h, y_0 + k_1), as in Runge's method above.
(iv) Fourth order R-K method. This method is the most commonly used and is often referred to simply as the Runge-Kutta method. The working rule for finding the increment k of y corresponding to an increment h of x, for

dy/dx = f(x, y), y(x_0) = y_0,

is as follows. Calculate successively

k_1 = h f(x_0, y_0),

k_2 = h f(x_0 + h/2, y_0 + k_1/2),

k_3 = h f(x_0 + h/2, y_0 + k_2/2),

and

k_4 = h f(x_0 + h, y_0 + k_3).

Finally compute

k = (1/6)(k_1 + 2k_2 + 2k_3 + k_4),

which gives the required approximate value as y_1 = y_0 + k.
Note: One of the advantages of these methods is that the operation is identical whether the differential equation is linear or non-linear.
EXAMPLE 1:
Apply the Runge-Kutta fourth order method to find an approximate value of 𝑦 when 𝑥 = 0.2 given
that 𝑑𝑦/𝑑𝑥 = 𝑥 + 𝑦 and 𝑦 = 1 when 𝑥 = 0.
Solution:
Here f(x, y) = x + y, x_0 = 0, y_0 = 1 and h = 0.2. Then

k_1 = h f(x_0, y_0) = 0.2 × (0 + 1) = 0.2000

k_2 = h f(x_0 + h/2, y_0 + k_1/2) = 0.2 × f(0.1, 1.1) = 0.2400

k_3 = h f(x_0 + h/2, y_0 + k_2/2) = 0.2 × f(0.1, 1.12) = 0.2440

k_4 = h f(x_0 + h, y_0 + k_3) = 0.2 × f(0.2, 1.244) = 0.2888

k = (1/6)(k_1 + 2k_2 + 2k_3 + k_4) = (1/6)(0.2 + 0.48 + 0.488 + 0.2888) = 0.2428

Hence the required approximate value is y(0.2) ≈ y_0 + k = 1.2428.
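A short script can carry out the computation of Example 1 (the function names are my own): one fourth order Runge-Kutta step for dy/dx = x + y, y(0) = 1 with h = 0.2.

```python
# One classical fourth-order Runge-Kutta step, per the working rule above.
def rk4_step(f, x0, y0, h):
    k1 = h * f(x0, y0)
    k2 = h * f(x0 + h / 2, y0 + k1 / 2)
    k3 = h * f(x0 + h / 2, y0 + k2 / 2)
    k4 = h * f(x0 + h, y0 + k3)
    k = (k1 + 2 * k2 + 2 * k3 + k4) / 6
    return y0 + k

y = rk4_step(lambda x, y: x + y, 0.0, 1.0, 0.2)
print(round(y, 4))  # → 1.2428
```

The exact solution of this equation is y = 2e^x − x − 1, so y(0.2) = 1.2428 to four decimal places; the single RK4 step reproduces all four digits.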
CONCLUSION:
In conclusion, the journey through this numerical analysis project has been an illuminating
exploration into the realm of mathematical modeling and computational techniques.
Throughout this endeavor, we have delved into a myriad of methods and formulas, each
playing a crucial role in the pursuit of accurate solutions to complex problems.
Moreover, the study of interpolation and approximation opened up new avenues for
approximating functions and data points. By employing Lagrange interpolation, Newton's
divided difference method, and cubic spline interpolation, we were able to construct
polynomial approximations that closely mimicked the behavior of the underlying functions,
facilitating the estimation of values between discrete data points with remarkable accuracy.
Furthermore, our exploration extended to the realm of numerical solutions for ordinary
differential equations (ODEs) and partial differential equations (PDEs). Leveraging methods
such as Euler's method, the Runge-Kutta family, and finite difference schemes, we gained
insights into how numerical algorithms can be utilized to simulate dynamic systems and
model physical phenomena with precision.
Additionally, the investigation into root-finding techniques, including the bisection method,
Newton-Raphson method, and secant method, equipped us with powerful tools for locating
the roots of nonlinear equations—a fundamental task with applications spanning various
fields of science and engineering.
Overall, this project has underscored the indispensable role of numerical analysis in
modern scientific inquiry and engineering practice. By harnessing the power of mathematical
abstraction and computational machinery, we stand equipped to confront the complexities of
real-world problems with confidence and competence, armed with a rich toolkit of numerical
methods and formulas to guide us towards meaningful solutions.
As we conclude this project, let us not view it merely as an endpoint, but rather as a
springboard for further exploration and discovery in the boundless realm of numerical
analysis.
REFERENCES

Atkinson, Kendall E. An Introduction to Numerical Analysis. John Wiley & Sons, 1989.

Burden, Richard L., and J. Douglas Faires. Numerical Analysis. Cengage Learning, 2010.

"Numerical methods for ordinary differential equations." Wikipedia. https://en.wikipedia.org/wiki/Numerical_methods_for_ordinary_differential_equations

Allen, Jeff R. "Numerical Methods for Initial Value Problems." University of Utah, Department of Mathematics. http://www.math.utah.edu/~allenf/teaching/2017Spring/2270/lab8/IvpIntro.html

https://people.cs.uchicago.edu/~ridg/newtonapplet/DEtext/html/deintro.html