
DEPARTMENT OF MATHEMATICS

UNIVERSITY OF LUCKNOW

MINOR PROJECT

“NUMERICAL ANALYSIS”

Submitted by:
Abhishek Gautam
Roll No. 2110011015747
B.Sc. 6th Semester
University of Lucknow

Submitted to:
Prof. Rekha Shrivastava
Department of Mathematics
University of Lucknow
CERTIFICATE
This is to certify that the term paper entitled “NUMERICAL ANALYSIS”, which is
being submitted by ABHISHEK GAUTAM, a student of B.Sc. Mathematics
(Semester 6), has been carried out as per the requirements laid down by the
Department of Mathematics, University of Lucknow.

This is an original study and all the work has been done by the student himself.
He has fulfilled all the conditions for submission of the term paper.

Dr. REKHA SHRIVASTAVA


Minor Project Supervisor
Department of Mathematics
University of Lucknow
Lucknow
ACKNOWLEDGEMENT
I would like to extend my heartfelt gratitude to Prof. Rekha Shrivastava for her
invaluable guidance and unwavering support throughout the completion of this
term assignment.

Her expertise, encouragement and dedication have been instrumental in shaping
and refining the contents of this work. I am deeply appreciative of my friends for
their understanding, encouragement and patience during the demanding periods of
research and writing. Their unwavering support provided the necessary motivation to
navigate the intricacies of this academic endeavor.

Finally, I express my gratitude to the numerous resources, literature and
individuals whose contributions in their respective fields formed the foundation of
this assignment.

Abhishek Gautam
B.Sc. 3rd year
VIth Semester
[2110011015747]
CONTENT
Abstract 1

CHAPTER 1: Introduction 2-4

1. Objective of the Thesis


2. Scope and Limitations
3. Applications

CHAPTER 2: Operators used in Numerical Analysis 5-14

1.1 Overview
1.2 Finite And Forward Difference Operator
1.3 Error Propagation And its Properties
1.4 Backward And Central Difference Operator
1.5 Shift, Average and Differential Operators

CHAPTER 3: Solution Of Equation In One Variable 15-22

2.1 Overview
2.2 Bisection Method
2.3 Newton Methods and its Extensions
2.4 Secant Method

CHAPTER 4: Interpolation 23-32

3.1 Overview And Meaning


3.2 Lagrange`s Interpolation
3.3 Newton's Divided Difference, Forward and Backward
3.4 Gauss Forward and Backward Interpolation
3.5 Bessel's Formulae
CHAPTER 5: Numerical Differentiation And Integration 33-41

4.1 Overview
4.2 Newton Forward and Backward for Derivatives
4.3 Derivatives using Bessel`s Formulae
4.4 Examples
4.5 Numerical Integration
4.6 Trapezoidal and Simpson's Rules

CHAPTER 6: Numerical Solution Of Ordinary Differential Equations 42-46

5.1 Euler`s Method


5.2 Formulation Of Problem
5.3 Examples
5.4 Mid- point Method And Picard Method
5.5 Formulation Of Problem
5.6 Examples

CHAPTER 7: Milne`s Method And Runge`s Method 47-55

6.1 Overview
6.2 Formulation Of Problem
6.3 Examples

CONCLUSION 56-57

REFERENCES 58
Page |1

ABSTRACT
This project delves into the realm of numerical analysis with a focus on enhancing
computational efficiency through innovative methodologies and algorithms. Numerical
analysis serves as the backbone for solving mathematical problems using numerical
approximation techniques, indispensable across various scientific and engineering
domains.

The primary objective of this project is to explore and develop novel numerical
algorithms that can expedite computations without compromising accuracy. Traditional
numerical methods often encounter challenges in handling large-scale problems
efficiently, necessitating advancements to meet the demands of contemporary
computational tasks. By leveraging mathematical insights, algorithmic optimizations,
and computational techniques, this research endeavors to streamline numerical
computations for enhanced performance and scalability.

The project's methodology involves a comprehensive review of existing numerical


analysis techniques, identifying areas for improvement, and proposing novel algorithms
to address these shortcomings. Special attention is given to numerical linear algebra,
numerical optimization, and differential equation solvers, as these areas are fundamental
to numerous computational tasks. Moreover, the project explores the integration of
machine learning and artificial intelligence techniques to augment traditional numerical
methods, potentially unlocking new avenues for efficiency gains.
Page |2

CHAPTER 1

INTRODUCTION
Numerical analysis is the study of algorithms that use numerical approximation (as opposed
to symbolic manipulations) for the problems of mathematical analysis (as distinguished
from discrete mathematics). It is the study of numerical methods that attempt to find
approximate solutions of problems rather than the exact ones. Numerical analysis finds
application in all fields of engineering and the physical sciences, and in the 21st century also
the life and social sciences, medicine, business and even the arts. Current growth in
computing power has enabled the use of more complex numerical analysis, providing
detailed and realistic mathematical models in science and engineering. Examples of
numerical analysis include: ordinary differential equations as found in celestial
mechanics (predicting the motions of planets, stars and galaxies), numerical linear algebra in
data analysis.

Numerical analysis is a discipline of mathematics concerned with the development of


efficient methods for getting numerical solutions to complex mathematical problems. There
are three sections to the numerical analysis. The first section of the subject deals with the
creation of a problem-solving approach. The analysis of methods, which includes error
analysis and efficiency analysis, is covered in the second section. The efficiency analysis
shows us how fast we can compute the result, while the error analysis informs us how correct
the result will be if we utilize the approach. The construction of an efficient algorithm to
implement the approach as a computer code is the subject’s third part. All three elements
must be familiar to have a thorough understanding of the numerical analysis.

Numerical Analysis deals with the process of getting the numerical solution to complex
problems. The majority of mathematical problems in science and engineering are difficult to
answer precisely, and in some cases it is impossible. To make a tough Mathematical problem
easier to solve, an approximation is essential.
Page |3

Numerical analysis is a branch of mathematics that solves continuous problems using


numeric approximation. It involves designing methods that give approximate but accurate
numeric solutions, which is useful in cases where the exact solution is impossible or
prohibitively expensive to calculate.

APPLICATIONS OF NUMERICAL
ANALYSIS

 The overall goal of the field of numerical analysis is the design and analysis of
techniques to give approximate but accurate solutions to a wide variety of hard
problems, many of which are infeasible to solve symbolically:
 Advanced numerical methods are essential in making numerical weather
prediction feasible.
 Computing the trajectory of a spacecraft requires the accurate numerical solution of a
system of ordinary differential equations.
 Car companies can improve the crash safety of their vehicles by using computer
simulations of car crashes. Such simulations essentially consist of solving partial
differential equations numerically.
 Airlines use sophisticated optimization algorithms to decide ticket prices, airplane and
crew assignments and fuel needs. Historically, such algorithms were developed within
the overlapping field of operations research.

 Insurance companies use numerical programs for actuarial analysis.


 Numerical analysis is a field of mathematics and computer science that deals with the
development, analysis, and implementation of algorithms to solve problems from
various disciplines. Here are some common applications of numerical analysis:

 Solving Equations: Numerical methods are used to find solutions to equations


that cannot be solved analytically. For example, root-finding algorithms like Newton's
method or bisection method are used to find roots of equations.
Page |4

 Optimization: Numerical optimization techniques are used to find the minimum or


maximum of a function, which has widespread applications in engineering, finance,
and machine learning.
 Interpolation and Extrapolation: Numerical methods are used to approximate
unknown values between known data points (interpolation) or predict values beyond
the range of known data (extrapolation). These techniques are commonly used in data
analysis and signal processing.
 Numerical Integration and Differentiation: Numerical methods are used to
approximate definite integrals and calculate derivatives of functions, which are useful
in physics, engineering, and finance.
 Numerical Linear Algebra: Techniques such as matrix factorization,
eigenvalue calculation, and solving systems of linear equations play a crucial role in
solving problems in areas such as image processing, cryptography, and scientific
computing.

 Numerical Solutions to Differential Equations : Differential equations are


fundamental in modeling many real-world phenomena in physics, engineering,
biology, and finance. Numerical methods such as Euler's method, Runge-Kutta
methods, and finite difference methods are used to approximate solutions to ordinary
and partial differential equations.

 Statistical Analysis: Numerical methods are used for statistical analysis,


including regression analysis, hypothesis testing, and Monte Carlo simulations.

 Simulation and Modeling : Numerical methods are used to simulate complex


systems and phenomena, such as weather patterns, fluid flow, structural analysis, and
population dynamics.
 Financial Mathematics: Numerical methods are widely used in finance for
pricing derivatives, risk management, portfolio optimization, and simulating financial
markets.
 Image and Signal Processing: Numerical methods are used for tasks such as
image compression, image enhancement, noise reduction, and pattern recognition.
Page |5

CHAPTER-2
OPERATORS USED IN NUMERICAL
ANALYSIS

Many operators are used in numerical analysis/computation. Some of the frequently used
operators, viz. forward difference (∆), backward difference (∇), central difference (δ), shift
(E) and mean (µ), are discussed in this module. Let the function y = f(x) be defined on the
closed interval [a, b] and let x0, x1, . . . , xn be (n + 1) values of x. Assume that these values
are equidistant, i.e. xi = x0 + ih, i = 0, 1, 2, . . . , n, where h is a suitable real number called
the difference of the interval or spacing. When x = xi, the value of y is denoted by yi and is
defined by yi = f(xi). The values of x and y are called arguments and entries respectively.

 ΔF(x) = F(x + h) − F(x)
 Δ²F(x) = F(x + 2h) − 2F(x + h) + F(x)
 Δ³F(x) = F(x + 3h) − 3F(x + 2h) + 3F(x + h) − F(x)

THEN, in general,

 Δⁿ F(x) = ∑ᵣ₌₀ⁿ (−1)ⁿ⁻ʳ · ⁿCᵣ · F(x + rh)
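The repeated application of Δ can be sketched directly from the definition. This is a minimal illustration (the function name `forward_difference` is my own); it checks the standard facts that the third difference of x³ with h = 1 is 3! = 6 and that the fourth difference of a cubic vanishes.

```python
# Sketch: the forward difference operator applied repeatedly,
# ΔF(x) = F(x + h) - F(x); order n applies it n times.
def forward_difference(F, x, h=1.0, order=1):
    """Apply the forward difference operator `order` times at x."""
    if order == 0:
        return F(x)
    g = lambda t: F(t + h) - F(t)
    return forward_difference(g, x, h, order - 1)

F = lambda x: x ** 3
print(forward_difference(F, 2.0, 1.0, 1))  # F(3) - F(2) = 27 - 8 = 19.0
print(forward_difference(F, 2.0, 1.0, 3))  # third difference of x^3, h = 1 -> 6.0
print(forward_difference(F, 2.0, 1.0, 4))  # fourth difference of a cubic -> 0.0
```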


Page |6

Finite difference operators


Different types of finite difference operators are defined, among them forward difference, backward
difference and central difference operators are widely used. In this section, these operators are
discussed.

FORWARD DIFFERENCE OPERATOR

The forward difference is denoted by Δ and is defined by,

𝜟𝒇(𝒙) = 𝒇(𝒙 + 𝒉) − 𝒇(𝒙)

When 𝑥 = 𝑥𝑖 then from above equation

𝜟𝒇(𝒙𝒊 ) = 𝒇(𝒙𝒊 + 𝒉) − 𝒇(𝒙𝒊 ), i.e. 𝜟𝒚𝒊 = 𝒚𝒊+𝟏 − 𝒚𝒊 , 𝒊 = 𝟎, 𝟏, 𝟐, … , 𝒏 − 𝟏

In particular, Δ𝑦0 = 𝑦1 − 𝑦0 , Δ𝑦1 = 𝑦2 − 𝑦1 , … , Δ𝑦𝑛−1 = 𝑦𝑛 − 𝑦𝑛−1 . These are called first order
differences.

The differences of the first order differences are called second order differences. The second order
differences are denoted by Δ2 𝑦0 , Δ2 𝑦1 , …

Two second order differences are

𝚫𝟐 𝐲𝟎 = 𝚫𝐲𝟏 − 𝚫𝐲𝟎 = (𝐲𝟐 − 𝐲𝟏 ) − (𝐲𝟏 − 𝐲𝟎 ) = 𝐲𝟐 − 𝟐𝐲𝟏 + 𝐲𝟎


𝚫𝟐 𝐲𝟏 = 𝚫𝐲𝟐 − 𝚫𝐲𝟏 = (𝐲𝟑 − 𝐲𝟐 ) − (𝐲𝟐 − 𝐲𝟏 ) = 𝐲𝟑 − 𝟐𝐲𝟐 + 𝐲𝟏

The third order differences are also defined in similar manner, i.e.

𝚫𝟑 𝐲𝟎 = 𝚫𝟐 𝐲𝟏 − 𝚫𝟐 𝐲𝟎 = (𝐲𝟑 − 𝟐𝐲𝟐 + 𝐲𝟏 ) − (𝐲𝟐 − 𝟐𝐲𝟏 + 𝐲𝟎 ) = 𝐲𝟑 − 𝟑𝐲𝟐 + 𝟑𝐲𝟏 − 𝐲𝟎


𝚫𝟑 𝐲𝟏 = 𝐲𝟒 − 𝟑𝐲𝟑 + 𝟑𝐲𝟐 − 𝐲𝟏

Similarly, higher order differences can be defined.

In general,

𝚫𝐧+𝟏 𝐟(𝐱) = 𝚫[𝚫𝐧 𝐟(𝐱)], i.e. 𝚫𝐧+𝟏 𝐲𝐢 = 𝚫[𝚫𝐧 𝐲𝐢 ], 𝐧 = 𝟎, 𝟏, 𝟐, …


Page |7

Again, 𝜟𝒏+𝟏 𝒇(𝒙) = 𝜟𝒏 [𝒇(𝒙 + 𝒉) − 𝒇(𝒙)] = 𝜟𝒏 𝒇(𝒙 + 𝒉) − 𝜟𝒏 𝒇(𝒙)

And

𝜟𝒏+𝟏 𝒚𝒊 = 𝜟𝒏 𝒚𝒊+𝟏 − 𝜟𝒏 𝒚𝒊 , 𝒏 = 𝟎, 𝟏, 𝟐, …

It must be remembered that Δ⁰ ≡ identity operator, i.e. Δ⁰f(x) = f(x), and Δ¹ ≡ Δ. All the
forward differences can be represented in a tabular form, called the forward difference or
diagonal difference table.

Let x0, x1, …, x4 be five equally spaced arguments. All the forward differences of these
arguments are shown in the table below.

x     y         Δ          Δ²          Δ³          Δ⁴
x0    y0
              Δy0
x1    y1               Δ²y0
              Δy1                  Δ³y0
x2    y2               Δ²y1                    Δ⁴y0
              Δy2                  Δ³y1
x3    y3               Δ²y2
              Δy3
x4    y4
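The columns of such a table can be generated mechanically, since each column is just the pairwise differences of the previous one. A small sketch (the helper name `difference_table` is my own) applied to y = x² at x = 0, 1, 2, 3, 4, for which the second differences are constant:

```python
def difference_table(ys):
    """Return the forward difference table as a list of columns
    [y, Δy, Δ²y, ...]; each column is one entry shorter than the last."""
    cols = [list(ys)]
    while len(cols[-1]) > 1:
        prev = cols[-1]
        cols.append([prev[i + 1] - prev[i] for i in range(len(prev) - 1)])
    return cols

# y-values of x^2 at x = 0..4 (h = 1): second differences are constant (= 2)
for col in difference_table([0, 1, 4, 9, 16]):
    print(col)
# -> [0, 1, 4, 9, 16], [1, 3, 5, 7], [2, 2, 2], [0, 0], [0]
```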
Page |8

Error propagation in a difference table

If any entry of the difference table is erroneous, then this error spreads over the table in a
triangular fashion. The propagation of error in a difference table is illustrated below. Let us
assume that y3 is erroneous and that the amount of the error is ε.

x     y         Δy          Δ²y          Δ³y           Δ⁴y          Δ⁵y
x0    y0        Δy0         Δ²y0         Δ³y0 + ε      Δ⁴y0 − 4ε    Δ⁵y0 + 10ε
x1    y1        Δy1         Δ²y1 + ε     Δ³y1 − 3ε     Δ⁴y1 + 6ε    Δ⁵y1 − 10ε
x2    y2        Δy2 + ε     Δ²y2 − 2ε    Δ³y2 + 3ε     Δ⁴y2 − 4ε
x3    y3 + ε    Δy3 − ε     Δ²y3 + ε     Δ³y3 − ε
x4    y4        Δy4         Δ²y4
x5    y5        Δy5
x6    y6
Page |9

Error propagation in a finite difference table.

i. The error increases with the order of the differences.


ii. The error is maximum (in magnitude) along the horizontal line through the erroneous
tabulated value.
iii. In the 𝑘 th difference column, the coefficients of the errors are the binomial coefficients
in the expansion of 𝜀(1 − 𝑥)𝑘 . In particular, the errors in the second difference column
are 𝜀, −2𝜀, 𝜀; in the third difference column they are 𝜀, −3𝜀, 3𝜀, −𝜀; and so on.
iv. The algebraic sum of errors in any complete column is zero.
v. If there is any error in a single entry of the table, then we can detect and correct it
from the difference table.
vi. The position of the error in an entry can be identified by performing the following
steps.
a. If at any stage, the differences do not follow a smooth pattern, then there is
an error.
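These observations can be checked numerically: seed a single error ε into one entry of otherwise exact data and subtract the clean difference columns from the perturbed ones. The helper name `difference_columns` is my own; the coefficients 1, −3, 3, −1 from observation (iii) appear in the third-difference column.

```python
def difference_columns(ys):
    """Forward difference table as a list of columns [y, Δy, Δ²y, ...]."""
    cols = [list(ys)]
    while len(cols[-1]) > 1:
        prev = cols[-1]
        cols.append([prev[i + 1] - prev[i] for i in range(len(prev) - 1)])
    return cols

# Exact y-values of a cubic at x = 0..6, then the same data with error eps in y3.
exact = [x ** 3 for x in range(7)]
eps = 5
bad = exact.copy()
bad[3] += eps

clean = difference_columns(exact)
dirty = difference_columns(bad)
# In the third-difference column the error appears with coefficients 1, -3, 3, -1.
errors = [d - c for c, d in zip(clean[3], dirty[3])]
print(errors)  # -> [5, -15, 15, -5]
```

Note also that the column sums to zero, which is observation (iv).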

Properties
Some common properties of forward difference operator are presented below:

(i) Δ𝑐 = 0, where 𝑐 is a constant.

(ii) Δ[𝑓1 (𝑥) + 𝑓2 (𝑥) + ⋯ + 𝑓𝑛 (𝑥)] = Δ𝑓1 (𝑥) + Δ𝑓2 (𝑥) + ⋯ + Δ𝑓𝑛 (𝑥)

(iii) Δ[𝑐𝑓(𝑥)] = 𝑐Δ𝑓(𝑥).

Combining properties (ii) and (iii), one can generalise the property (ii) as

(iv) Δ[𝑐1 𝑓1 (𝑥) + 𝑐2 𝑓2 (𝑥) + ⋯ + 𝑐𝑛 𝑓𝑛 (𝑥)] = 𝑐1 Δ𝑓1 (𝑥) + 𝑐2 Δ𝑓2 (𝑥) + ⋯ + 𝑐𝑛 Δ𝑓𝑛(𝑥)

(v) Δ𝑚 Δ𝑛 𝑓(𝑥) = Δ𝑚+𝑛 𝑓(𝑥) = Δ𝑛 Δ𝑚 𝑓(𝑥) = Δ𝑘 Δ𝑚+𝑛−𝑘 𝑓(𝑥), 𝑘 = 0,1,2, … , 𝑚 or 𝑛.

(vi) Δ[𝑐 𝑥 ] = 𝑐 𝑥+ℎ − 𝑐 𝑥 = 𝑐 𝑥 (𝑐 ℎ − 1), for some constant 𝑐.

(vii) Δ[ xCr ] = xCr−1 , where 𝑟 is fixed and ℎ = 1, since

Δ[ xCr ] = x+1Cr − xCr = xCr−1 as ℎ = 1.
P a g e | 10

In particular, when the numerator is 1, then

Δ[ 1/𝑓(𝑥) ] = − Δ𝑓(𝑥) / [ 𝑓(𝑥 + ℎ) 𝑓(𝑥) ]

Backward difference operator:


The symbol ∇ is used to represent backward difference operator. The backward difference operator is
defined as

𝜵𝒇(𝒙) = 𝒇(𝒙) − 𝒇(𝒙 − 𝒉)

When 𝑥 = 𝑥𝑖 , the above relation reduces to

𝜵𝒚𝒊 = 𝒚𝒊 − 𝒚𝒊−𝟏 , 𝒊 = 𝒏, 𝒏 − 𝟏, … , 𝟏

In particular,

∇𝑦1 = 𝑦1 − 𝑦0 , ∇𝑦2 = 𝑦2 − 𝑦1 , … , ∇𝑦𝑛 = 𝑦𝑛 − 𝑦𝑛−1

These are called the first order backward differences. The second order differences are denoted by
∇2 𝑦2 , ∇2 𝑦3 , … , ∇2 𝑦𝑛 . First two second order backward differences are

∇²y2 = ∇(∇y2) = ∇(y2 − y1) = ∇y2 − ∇y1 = (y2 − y1) − (y1 − y0) = y2 − 2y1 + y0,

And

∇²y3 = y3 − 2y2 + y1 ,  ∇²y4 = y4 − 2y3 + y2 .

The other second order differences can be obtained in similar manner.

In general,

𝜵𝒌 𝒚𝒊 = 𝜵𝒌−𝟏 𝒚𝒊 − 𝜵𝒌−𝟏 𝒚𝒊−𝟏 , 𝒊 = 𝒏, 𝒏 − 𝟏, … , 𝒌

Where ∇0 𝑦𝑖 = 𝑦𝑖 , ∇1 𝑦𝑖 = ∇𝑦𝑖 .
P a g e | 11

Like forward differences, these backward differences can be written in a tabular form, called
backward difference or horizontal difference table.

𝑥 𝑦 ∇ ∇2 ∇3 ∇4

𝑥0 𝑦0

𝑥1 𝑦1 ∇𝑦1

𝑥2 𝑦2 ∇𝑦2 ∇2 𝑦2

𝑥3 𝑦3 ∇𝑦3 ∇2 𝑦3 ∇3 𝑦3

𝑥4 𝑦4 ∇𝑦4 ∇2 𝑦4 ∇3 𝑦4 ∇4 𝑦4

Central difference operator


There is another kind of finite difference operator known as central difference operator. This operator
is denoted by 𝛿 and is defined by

δf(x) = f(x + h/2) − f(x − h/2)

When 𝑥 = 𝑥𝑖 , then the first order central difference, in terms of ordinates is

𝜹𝒚𝒊 = 𝒚𝒊+𝟏/𝟐 − 𝒚𝒊−𝟏/𝟐

Where 𝒚𝒊+𝟏/𝟐 = 𝒇(𝒙𝒊 + 𝒉/𝟐) and 𝒚𝒊−𝟏/𝟐 = 𝒇(𝒙𝒊 − 𝒉/𝟐).

In particular, 𝜹𝒚𝟏/𝟐 = 𝒚𝟏 − 𝒚𝟎, 𝜹𝒚𝟑/𝟐 = 𝒚𝟐 − 𝒚𝟏, … , 𝜹𝒚𝒏−𝟏/𝟐 = 𝒚𝒏 − 𝒚𝒏−𝟏.

The second order central differences are

𝜹𝟐 𝒚𝒊 = 𝜹𝒚𝒊+𝟏/𝟐 − 𝜹𝒚𝒊−𝟏/𝟐 = (𝒚𝒊+𝟏 − 𝒚𝒊 ) − (𝒚𝒊 − 𝒚𝒊−𝟏 ) = 𝒚𝒊+𝟏 − 𝟐𝒚𝒊 + 𝒚𝒊−𝟏 .

In general,
P a g e | 12

δⁿyᵢ = δⁿ⁻¹y_{i+1/2} − δⁿ⁻¹y_{i−1/2}

𝑥 𝑦 𝛿 𝛿2 𝛿3 𝛿4

𝑥0 𝑦0

𝛿𝑦1/2

𝑥1 𝑦1 𝛿 2 𝑦1

𝛿𝑦3/2 𝛿 3 𝑦3/2

𝑥2 𝑦2 𝛿 2 𝑦2 𝛿 4 𝑦2

𝛿𝑦5/2 𝛿 3 𝑦5/2

𝑥3 𝑦3 𝛿 2 𝑦3

𝛿𝑦7/2

𝑥4 𝑦4

It may be observed that all odd (even) order differences have fractional (integral) suffixes.

Shift, average and differential operators


The shift operator is denoted by 𝐸 and is defined
by
𝐄𝐟(𝐱) = 𝐟(𝐱 + 𝐡)
In terms of 𝐲,
𝐄𝐲𝐢 = 𝐲𝐢+𝟏
P a g e | 13

Note that the shift operator increases the subscript of 𝑦 by one. When the shift operator is
applied twice to the function 𝑓(𝑥), the subscript of 𝑦 is increased by 2.

That is,

𝑬𝟐 𝒇(𝒙) = 𝑬[𝑬𝒇(𝒙)] = 𝑬[𝒇(𝒙 + 𝒉)] = 𝒇(𝒙 + 𝟐𝒉)

In general,

Eⁿf(x) = f(x + nh) or Eⁿyᵢ = y_{i+n}

The inverse shift operator can also be defined in a similar manner. It is denoted by 𝐸 −1 and is defined by

𝑬−𝟏𝒇(𝒙) = 𝒇(𝒙 − 𝒉)

Similarly, second and higher order inverse operators are defined as follows:

𝑬−𝟐𝒇(𝒙) = 𝒇(𝒙 − 𝟐𝒉) and 𝑬−𝒏𝒇(𝒙) = 𝒇(𝒙 − 𝒏𝒉)

The general definition of shift operator is

𝑬𝒓 𝒇(𝒙) = 𝒇(𝒙 + 𝒓𝒉)

where 𝑟 may be any positive or negative rational number.

Properties
A few common properties of the 𝐸 operator are given below:

i. 𝐸𝑐 = 𝑐, where 𝑐 is a constant.
ii. 𝐸{𝑐𝑓(𝑥)} = 𝑐𝐸𝑓(𝑥).
iii. 𝐸{𝑐1 𝑓1 (𝑥) + 𝑐2 𝑓2 (𝑥) + ⋯ + 𝑐𝑛 𝑓𝑛 (𝑥)} = 𝑐1 𝐸𝑓1 (𝑥) + 𝑐2 𝐸𝑓2 (𝑥) + ⋯ + 𝑐𝑛 𝐸𝑓𝑛 (𝑥).
iv. 𝐸 𝑚 𝐸 𝑛 𝑓(𝑥) = 𝐸 𝑛 𝐸 𝑚 𝑓(𝑥) = 𝐸 𝑚+𝑛 𝑓(𝑥).
v. 𝐸 𝑛 𝐸 −𝑛 𝑓(𝑥) = 𝑓(𝑥). In particular, 𝐸𝐸 −1 ≡ 𝐼, where 𝐼 is the identity operator (sometimes denoted by 1).
vi. (𝐸 𝑛 )𝑚 𝑓(𝑥) = 𝐸 𝑚𝑛 𝑓(𝑥).
vii. 𝐸{𝑓(𝑥)/𝑔(𝑥)} = 𝐸𝑓(𝑥)/𝐸𝑔(𝑥).
P a g e | 14

viii. 𝐸{𝑓(𝑥)𝑔(𝑥)} = 𝐸𝑓(𝑥) ⋅ 𝐸𝑔(𝑥).
ix. 𝐸Δ𝑓(𝑥) = Δ𝐸𝑓(𝑥).
x. Δ𝑚 𝑓(𝑥) = ∇𝑚 𝐸𝑚 𝑓(𝑥) = 𝐸𝑚 ∇𝑚 𝑓(𝑥) and ∇𝑚 𝑓(𝑥) = Δ𝑚 𝐸−𝑚 𝑓(𝑥) = 𝐸−𝑚 Δ𝑚 𝑓(𝑥).
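The operator identities above can be verified numerically by treating each operator as a function-to-function map. A brief sketch (the names `E` and `delta` are my own), checking that Δf(x) equals (E − 1)f(x), i.e. Ef(x) − f(x):

```python
h = 0.5
f = lambda x: x ** 2
E = lambda g: (lambda x: g(x + h))              # shift operator
delta = lambda g: (lambda x: g(x + h) - g(x))   # forward difference operator

x = 1.0
# Δf(x) = Ef(x) - f(x): both sides equal f(1.5) - f(1.0) = 2.25 - 1.0 = 1.25
print(delta(f)(x), E(f)(x) - f(x))
# EΔ and ΔE commute (property ix)
print(E(delta(f))(x) == delta(E(f))(x))  # True
```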

Average operator :
The average operator is denoted by 𝜇 and is defined by

μf(x) = ½ [ f(x + h/2) + f(x − h/2) ]

In terms of 𝑦,

μyᵢ = ½ [ y_{i+1/2} + y_{i−1/2} ]

Here the average of the values of 𝑓(𝑥) at the two points 𝑥 + ℎ/2 and 𝑥 − ℎ/2 is taken as the
value of 𝜇𝑓(𝑥).

Differential operator :
The differential operator is well known from differential calculus and it is denoted by 𝐷. This operator
gives the derivative. That is,

Df(x) = (d/dx) f(x) = f′(x)
D²f(x) = (d²/dx²) f(x) = f″(x)
……
Dⁿf(x) = (dⁿ/dxⁿ) f(x) = f⁽ⁿ⁾(x)
P a g e | 15

CHAPTER 3
SOLUTIONS OF EQUATIONS IN ONE VARIABLE

In this chapter we deal with one of the most basic problems of numerical approximation: the
root-finding problem. For a given function 𝑓, we need to solve an equation of the form
𝑓(𝑥) = 0. A solution of this equation is called a root, or a zero, of the function 𝑓.

For example, for the equation 𝑓(𝑥) = 𝑥 2 − 2𝑥 + 1 = 0, we can find the root 𝑥 = 1, so that
𝑓(1) = 0.

For solving such equations, the most useful methods are iterative methods, since these are
well suited to implementation as computer programs. The methods we discuss here are the
bisection method, the Secant method, and Newton's method and its extensions.

BISECTION METHOD :
The bisection method is based on the intermediate value theorem for continuous functions:

Suppose 𝑓 is a continuous function defined on the interval [𝑎, 𝑏] with 𝑓(𝑎) and 𝑓(𝑏) of
opposite signs. Then a number 𝑝 exists in (𝑎, 𝑏) with 𝑓(𝑝) = 0. Although the procedure will
work when there is more than one root in the interval (𝑎, 𝑏), we assume for simplicity that
the root in this interval is unique. The method repeatedly bisects subintervals of [𝑎, 𝑏] and,
at each step, locates the half containing the root.

To begin, set 𝑎1 = 𝑎 and 𝑏1 = 𝑏, and let 𝑝1 be the midpoint of [𝑎, 𝑏], that is,

p1 = a1 + (b1 − a1)/2 = (a1 + b1)/2

If 𝑓(𝑝1 ) = 0, then 𝑝 = 𝑝1 , and we are done.


P a g e | 16

If 𝑓(𝑝1 ) ≠ 0, then 𝑓(𝑝1 ) has the same sign as either 𝑓(𝑎1 ) or 𝑓(𝑏1 )

If 𝑓(𝑝1 ) and 𝑓(𝑎1 ) have the same sign, 𝑝 ∈ (𝑝1 , 𝑏1 ). Set 𝑎2 = 𝑝1 and 𝑏2 = 𝑏1 .

If 𝑓(𝑝1 ) and 𝑓(𝑎1 ) have opposite signs, 𝑝 ∈ (𝑎1 , 𝑝1 ). Set 𝑎2 = 𝑎1 and 𝑏2 = 𝑝1

Then re-apply the process to the interval [𝑎2 , 𝑏2 ].

As a worked example, consider 𝑓(𝑥) = 𝑥 3 + 4𝑥 2 − 10 = 0 on [1, 2]; 𝑓(1) = −5 < 0 and
𝑓(2) = 14 > 0, so a root lies in (1, 2).

Step 1: 𝑝1 = 1.5 and 𝑓(𝑝1 ) = 2.375 > 0.

Step 2: Since 𝑓(𝑝1 ) > 0, we select the interval [1, 1.5] for the second iteration.

Thus 𝑝2 = 1.25 and 𝑓(𝑝2 ) = −1.796875.

We can continue in this manner, as shown in the table:


P a g e | 17

𝐍 𝒂𝒏 𝒃𝒏 𝒑𝒏 𝒇(𝒑𝒏 )

1 1.0 2.0 1.5 2.375

2 1.0 1.5 1.25 -1.79687

3 1.25 1.5 1.375 0.16211

4 1.25 1.375 1.3125 -0.84839

5 1.3125 1.375 1.34375 -0.35098

6 1.34375 1.375 1.359375 -0.09641

7 1.359375 1.375 1.3671875 0.03236

8 1.359375 1.3671875 1.36328125 -0.03215

9 1.36328125 1.3671875 1.365234375 0.000072

10 1.36328125 1.365234375 1.364257813 -0.01605

11 1.364254813 1.365234375 1.364746094 -0.00799

12 1.364746094 1.365234375 1.364990235 -0.00396

13 1.364990235 1.365234375 1.365112305 -0.00194

P a g e | 18

Newton's Method and Its Extension


Newton-Raphson method:
Newton's (or the Newton- Raphson) method is one of the most powerful and well-known
numerical methods for solving a root finding problem. There are many ways of introducing
Newton's method.

Suppose that 𝑓 ∈ 𝐶 2 [𝑎, 𝑏]. Let 𝑝0 ∈ [𝑎, 𝑏] be an approximation to 𝑝 such that 𝑓 ′ (𝑝0 ) ≠ 0 and
|𝑝 − 𝑝0 | is "small." Consider the first Taylor polynomial for 𝑓(𝑥) expanded about 𝑝0 and
evaluated at 𝑥 = 𝑝 :

f(p) = f(p0) + (p − p0) f′(p0) + [(p − p0)² / 2] f″(ξ(p))

Where 𝜉(𝑝) lies between 𝑝 and 𝑝0 . Since 𝑓(𝑝) = 0, this equation gives

0 = f(p0) + (p − p0) f′(p0) + [(p − p0)² / 2] f″(ξ(p))

Newton's method is derived by assuming that since |𝑝 − 𝑝0 | is small, the term involving
(𝑝 − 𝑝0 )2 is much smaller, so

0 ≈ 𝑓 (𝑝0 ) + (𝑝 − 𝑝0 )𝑓 ′ (𝑝0 )

Solving for 𝑝 gives

p ≈ p0 − f(p0) / f′(p0) ≡ p1

This sets the stage for Newton's method, which starts with an initial approximation 𝑝0 and
generates the sequence {𝑝𝑛 } by

pn = p_{n−1} − f(p_{n−1}) / f′(p_{n−1}),  for n ≥ 1.
P a g e | 19

(Figure omitted: successive Newton iterates approaching the root along tangent lines.)

Example 1

Consider the function 𝑓(𝑥) = cos 𝑥 − 𝑥 = 0. Approximate a root of 𝑓 using (a) a fixed-point
method, and (b) Newton's method.

Solution (a) A solution to this root-finding problem is also a solution to the fixed-point
problem 𝑥 = cos 𝑥, and the graph in Figure 2.8 implies that a single fixed point 𝑝 exists.

(Note that the variable in the trigonometric function is in radian measure, not degrees. This
will always be the case unless specified otherwise.)
P a g e | 20

(b) Applying Newton's method with 𝑝0 = 𝜋/4:

p1 = p0 − f(p0)/f′(p0)
   = π/4 − [cos(π/4) − π/4] / [−sin(π/4) − 1]
   = π/4 − [√2/2 − π/4] / [−√2/2 − 1]
   = 0.7395361337

p2 = p1 − [cos(p1) − p1] / [−sin(p1) − 1]
   = 0.7390851781

We continue generating the sequence by

pn = p_{n−1} − f(p_{n−1}) / f′(p_{n−1}) = p_{n−1} − [cos p_{n−1} − p_{n−1}] / [−sin p_{n−1} − 1]

This gives the approximations in Table . An excellent approximation is obtained with 𝑛 = 3.

In a practical application, an initial approximation is selected, and successive approximations
are generated by Newton's method. These will generally either converge quickly to the root,
or it will be clear that convergence is unlikely.
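The iteration above can be sketched as follows (the function name `newton` is my own), applied to the example f(x) = cos x − x; the limit is the well-known fixed point of cos x, approximately 0.7390851332.

```python
import math

def newton(f, fprime, p0, tol=1e-10, max_iter=50):
    """Newton iteration p_n = p_{n-1} - f(p_{n-1}) / f'(p_{n-1})."""
    p = p0
    for _ in range(max_iter):
        p_new = p - f(p) / fprime(p)
        if abs(p_new - p) < tol:     # stop when successive iterates agree
            return p_new
        p = p_new
    return p

f = lambda x: math.cos(x) - x
fp = lambda x: -math.sin(x) - 1
root = newton(f, fp, math.pi / 4)
print(round(root, 10))  # 0.7390851332
```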
P a g e | 21

Secant Method
Newton's method is an extremely powerful technique, but it has a major weakness; the need
to know the value of the derivative of 𝑓 at each approximation. Frequently, 𝑓 ′ (𝑥) is far more
difficult and needs more arithmetic operations to calculate than 𝑓(𝑥).

By definition,

f′(p_{n−1}) = lim_{x → p_{n−1}} [f(x) − f(p_{n−1})] / (x − p_{n−1})

If 𝑝𝑛−2 is close to 𝑝𝑛−1 , then

f′(p_{n−1}) ≈ [f(p_{n−2}) − f(p_{n−1})] / (p_{n−2} − p_{n−1}) = [f(p_{n−1}) − f(p_{n−2})] / (p_{n−1} − p_{n−2})

Using this approximation for 𝑓 ′ (𝑝𝑛−1 ) in Newton's formula gives

pn = p_{n−1} − f(p_{n−1}) (p_{n−1} − p_{n−2}) / [f(p_{n−1}) − f(p_{n−2})]

This technique is called the Secant method.


P a g e | 22

Example 2

Use the Secant method to find a solution to 𝑥 = cos 𝑥 and compare the approximations with
those given in previous example, which applied Newton's method.

Solution. For the Secant method, we need two initial approximations. Suppose we use 𝑝0 =
0.5 and 𝑝1 = 𝜋/4 :

p2 = p1 − (p1 − p0)(cos p1 − p1) / [(cos p1 − p1) − (cos p0 − p0)]
   = π/4 − (π/4 − 0.5)(cos(π/4) − π/4) / [(cos(π/4) − π/4) − (cos 0.5 − 0.5)]

Succeeding approximations are generated by the formula

pn = p_{n−1} − (p_{n−1} − p_{n−2})(cos p_{n−1} − p_{n−1}) / [(cos p_{n−1} − p_{n−1}) − (cos p_{n−2} − p_{n−2})],  for n ≥ 2

These give the results in Table .

We note that although the formula for 𝑝2 seems to indicate a repeated computation, once
𝑓(𝑝0 ) and 𝑓 (𝑝1 ) are computed, they are not computed again.

Secant Method

𝑛 𝑝𝑛

0 0.5

1 0.7853981635

2 0.7363841388
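The same iteration can be sketched in code (the name `secant` is my own). With p0 = 0.5 and p1 = π/4 as above, p2 matches the tabulated 0.7363841388, and the sequence converges to the same fixed point as Newton's method.

```python
import math

def secant(f, p0, p1, tol=1e-10, max_iter=50):
    """Secant iteration: replace f' in Newton's formula by a difference quotient."""
    f0, f1 = f(p0), f(p1)
    for _ in range(max_iter):
        p2 = p1 - f1 * (p1 - p0) / (f1 - f0)
        if abs(p2 - p1) < tol:
            return p2
        p0, f0, p1, f1 = p1, f1, p2, f(p2)   # only one new f evaluation per step
    return p1

f = lambda x: math.cos(x) - x
root = secant(f, 0.5, math.pi / 4)
print(round(root, 10))  # 0.7390851332
```

Note the design point from the text: once f(p0) and f(p1) are computed, each subsequent step costs only one new function evaluation.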
P a g e | 23

CHAPTER 4
INTERPOLATION
Interpolation is the process of finding the most appropriate estimate for missing data. It is the
"art of reading between the lines of a table". Making the most probable estimate requires the
following assumptions:

(i) The frequency distribution is normal, i.e. not marked by sudden ups and downs.

(ii) The changes in the series are uniform within a period.

Interpolation technique is used in various disciplines like statistics, economics, business,


population studies, price determination etc. It is used to fill in the gaps in the statistical data
for the sake of continuity of information. For example, if we know the records for the past
five years except the third year which is not available due to unforeseen conditions, the
interpolation technique helps to estimate the record for that year too under the assumption
that the changes in the records over these five years have been uniform.

Extrapolation
Extrapolation is the process of finding the values outside the given interval. It is also
possible that we may require information for future in which case the process of estimating
the most appropriate value is known as extrapolation.

Lagrange's interpolation formula for unequal intervals

Let 𝑦 = 𝑓(𝑥) be a function such that 𝑓(𝑥) takes the values 𝑦0 , 𝑦1 , … , 𝑦𝑛 corresponding
to 𝑥 = 𝑥0 , 𝑥1 , … , 𝑥𝑛 .

When the values of the independent variable are not equally spaced and the differences of
the dependent variable are not small, we use Lagrange's interpolation formula.
P a g e | 24

Let 𝑓(𝑥) be a polynomial in 𝑥 of degree 𝑛. Lagrange's interpolation formula for unequal
intervals is

y = [(x − x1)(x − x2)⋯(x − xn)] / [(x0 − x1)(x0 − x2)⋯(x0 − xn)] · y0
  + [(x − x0)(x − x2)⋯(x − xn)] / [(x1 − x0)(x1 − x2)⋯(x1 − xn)] · y1 + ⋯
  + [(x − x0)(x − x1)⋯(x − x_{n−1})] / [(xn − x0)(xn − x1)⋯(xn − x_{n−1})] · yn

Example:

Using Lagrange's interpolation formula, find the value corresponding to x = 10 from the
following table:

Solution

𝑥 5 6 9 11

𝑦 12 13 14 16

Given 𝑥0 = 5, 𝑥1 = 6, 𝑥2 = 9, 𝑥3 = 11, 𝑥 = 10

𝑌0 = 𝑓(𝑥0 ) = 12

𝑌1 = 𝑓 (𝑥1 ) = 13

𝑌2 = 𝑓(𝑥2 ) = 14

Y3 = f(x 3 ) = 16
P a g e | 25

y = [(10 − 6)(10 − 9)(10 − 11)] / [(5 − 6)(5 − 9)(5 − 11)] · 12
  + [(10 − 5)(10 − 9)(10 − 11)] / [(6 − 5)(6 − 9)(6 − 11)] · 13
  + [(10 − 5)(10 − 6)(10 − 11)] / [(9 − 5)(9 − 6)(9 − 11)] · 14
  + [(10 − 5)(10 − 6)(10 − 9)] / [(11 − 5)(11 − 6)(11 − 9)] · 16

  = [4 · 1 · (−1)] / [(−1)(−4)(−6)] · 12 + [5 · 1 · (−1)] / [1 · (−3)(−5)] · 13
  + [5 · 4 · (−1)] / [4 · 3 · (−2)] · 14 + [5 · 4 · 1] / [6 · 5 · 2] · 16

  = 2 − 13/3 + 35/3 + 16/3 = 44/3 = 14.667
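The formula translates directly into code. A compact sketch (the function name `lagrange` is my own), reproducing the worked example's value 44/3 ≈ 14.667 at x = 10:

```python
def lagrange(xs, ys, x):
    """Evaluate the Lagrange interpolating polynomial through (xs, ys) at x."""
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        term = yi
        for j, xj in enumerate(xs):
            if j != i:
                term *= (x - xj) / (xi - xj)   # basis polynomial L_i(x)
        total += term
    return total

# Data from the worked example above
print(round(lagrange([5, 6, 9, 11], [12, 13, 14, 16], 10), 3))  # 14.667
```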

Newton's Divided Difference


Let the function 𝑦 = 𝑓(𝑥) take the values 𝑓(𝑥0 ), 𝑓(𝑥1 ), … , 𝑓(𝑥𝑛 ) corresponding to the
arguments 𝑥0 , 𝑥1 , … , 𝑥𝑛 , where the intervals 𝑥1 − 𝑥0 , 𝑥2 − 𝑥1 , 𝑥3 − 𝑥2 , … , 𝑥𝑛 − 𝑥𝑛−1
need not be equal.

Newton's divided difference formula for unequal intervals is given by

f(x) = f(x0) + (x − x0) f(x0, x1) + (x − x0)(x − x1) f(x0, x1, x2)
     + (x − x0)(x − x1)(x − x2) f(x0, x1, x2, x3) + ⋯
     + (x − x0)(x − x1)⋯(x − x_{n−1}) f(x0, x1, …, xn)

where f(x0, x1) = (y1 − y0)/(x1 − x0),  f(x0, x1, x2) = [f(x1, x2) − f(x0, x1)]/(x2 − x0), …

Example: 1

Using Newton's divided difference formula, find the values of 𝑓(2), 𝑓(8) and 𝑓(15) given
the following table:

Solution

𝑥 4 5 7 10 11 13

𝑓(𝑥) 48 100 294 900 1210 2028


P a g e | 26

Newton's divided difference interpolation formula is

f(x) = f(x0) + (x − x0) f(x0, x1) + (x − x0)(x − x1) f(x0, x1, x2)
     + (x − x0)(x − x1)(x − x2) f(x0, x1, x2, x3) + ⋯
     + (x − x0)(x − x1)⋯(x − x_{n−1}) f(x0, x1, …, xn)

Here 𝑥0 = 4, 𝑥1 = 5, 𝑥2 = 7, 𝑥3 = 10, 𝑥4 = 11, 𝑥5 = 13

We form the divided difference table since the intervals are unequal.

x     f(x)    1st d.d.   2nd d.d.   3rd d.d.   4th d.d.
4     48
5     100     52
7     294     97         15
10    900     202        21         1
11    1210    310        27         1          0
13    2028    409        33         1          0

Also, 𝑓(𝑥0 ) = 48, 𝑓(𝑥0 , 𝑥1 ) = 52, 𝑓(𝑥0 , 𝑥1 , 𝑥2 ) = 15, 𝑓(𝑥0 , 𝑥1 , 𝑥2 , 𝑥3 ) = 1

y = f(x) = 48 + (x − 4)(52) + (x − 4)(x − 5)(15) + (x − 4)(x − 5)(x − 7)(1)

f(2) = 48 + (2 − 4)(52) + (2 − 4)(2 − 5)(15) + (2 − 4)(2 − 5)(2 − 7)(1)

𝑓(2) = 4

𝑓(8) = 48 + (4)(52) + (4)(3)(15) + (4)(3)(1)


P a g e | 27

𝑓(8) = 448

𝑓(15) = 48 + 11(52) + (11)(10)(15) + (11)(10)(8)

𝑓(15) = 3150

Hence, these are the desired results.
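The divided difference table and the nested (Horner-style) evaluation of the Newton form can be sketched as below; the helper names are my own. It reproduces the table's top diagonal 48, 52, 15, 1 and the three required values.

```python
def divided_difference_coeffs(xs, ys):
    """Top-diagonal divided differences f[x0], f[x0,x1], f[x0,x1,x2], ..."""
    n = len(xs)
    table = list(ys)
    coeffs = [table[0]]
    for k in range(1, n):
        table = [(table[i + 1] - table[i]) / (xs[i + k] - xs[i])
                 for i in range(n - k)]
        coeffs.append(table[0])
    return coeffs

def newton_eval(xs, coeffs, x):
    """Evaluate the Newton form by nested multiplication."""
    result = coeffs[-1]
    for c, xi in zip(reversed(coeffs[:-1]), reversed(xs[:len(coeffs) - 1])):
        result = result * (x - xi) + c
    return result

xs = [4, 5, 7, 10, 11, 13]
ys = [48, 100, 294, 900, 1210, 2028]
c = divided_difference_coeffs(xs, ys)
print(c[:4])                                     # [48, 52.0, 15.0, 1.0]
print([newton_eval(xs, c, v) for v in (2, 8, 15)])  # [4.0, 448.0, 3150.0]
```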

NEWTON'S FORWARD
INTERPOLATION FORMULA
Let the function y = f(x) take the values 𝑦0 , 𝑦1 , … 𝑦𝑛 at the points 𝑥0 , 𝑥1 , … 𝑥𝑛 , where
𝑥𝑛 = 𝑥0 + 𝑛ℎ. Then Newton's forward interpolation formula is given by

y = f(x) = y0 + pΔy0 + [p(p − 1)/2!] Δ²y0 + [p(p − 1)(p − 2)/3!] Δ³y0 + ⋯

where p = (x − x0)/h

Newton's backward interpolation formula: Let the function y = f(x) take the values
𝑦0 , 𝑦1 , … 𝑦𝑛 at the points 𝑥0 , 𝑥1 , … 𝑥𝑛 , where 𝑥𝑛 = 𝑥0 + 𝑛ℎ. Then Newton's backward
interpolation formula is given by

y = f(x) = yn + p∇yn + [p(p + 1)/2!] ∇²yn + [p(p + 1)(p + 2)/3!] ∇³yn + ⋯

where p = (x − xn)/h

Example: 1 Find the value of 𝑦 from the following data at 𝑥 = 2.65


P a g e | 28

𝑥 -1 0 1 2 3

𝑦 -21 6 15 12 3

Solution:

p = (x − xn)/h = (2.65 − 3)/1 = −0.35

Newton's backward formula gives

y = f(x) = yn + p∇yn + [p(p + 1)/2!] ∇²yn + [p(p + 1)(p + 2)/3!] ∇³yn + ⋯
  = 3 + (−0.35)(−9) + [(−0.35)(0.65)/2](−6) + [(−0.35)(0.65)(1.65)/6](6)
  = 3 + 3.15 + 0.6825 − 0.3754
  = 6.4571

The difference table is given by

x     y      Δy    Δ²y    Δ³y   Δ⁴y
−1    −21
0     6      27
1     15     9     −18
2     12     −3    −12    6
3     3      −9    −6     6     0
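The backward formula sums terms with coefficients p(p+1)⋯(p+k−1)/k!, which can be accumulated incrementally. A sketch (the function name `newton_backward` is my own) reproducing the example's value 6.4571:

```python
def newton_backward(xs, ys, x):
    """Newton's backward interpolation for equally spaced xs."""
    h = xs[1] - xs[0]
    p = (x - xs[-1]) / h
    # backward differences of the last entry: yn, ∇yn, ∇²yn, ...
    diffs, col = [ys[-1]], list(ys)
    while len(col) > 1:
        col = [col[i + 1] - col[i] for i in range(len(col) - 1)]
        diffs.append(col[-1])
    result, coeff = diffs[0], 1.0
    for k in range(1, len(diffs)):
        coeff *= (p + k - 1) / k         # builds p(p+1)...(p+k-1)/k!
        result += coeff * diffs[k]
    return result

xs = [-1, 0, 1, 2, 3]
ys = [-21, 6, 15, 12, 3]
print(round(newton_backward(xs, ys, 2.65), 4))  # 6.4571
```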
P a g e | 29

Gauss forward interpolation formula


Gauss forward interpolation formula is used for the central differences and it is given by

yp = y0 + pΔy0 + [p(p − 1)/2!] Δ²y−1 + [(p + 1)p(p − 1)/3!] Δ³y−1
   + [(p + 1)p(p − 1)(p − 2)/4!] Δ⁴y−2 + [(p + 2)(p + 1)p(p − 1)(p − 2)/5!] Δ⁵y−2 + ⋯

where p = (x − x0)/h

Gauss backward interpolation formula


Gauss backward interpolation formula is used for the central differences and it is given by

yp = y0 + pΔy−1 + [(p + 1)p/2!] Δ²y−1 + [(p + 1)p(p − 1)/3!] Δ³y−2
   + [(p + 2)(p + 1)p(p − 1)/4!] Δ⁴y−2 + [(p + 2)(p + 1)p(p − 1)(p − 2)/5!] Δ⁵y−3 + ⋯

where p = (x − x0)/h

EXAMPLE:

Apply Gauss forward and backward interpolation formula to find y(25) for the following
data:

Solution: Given x = 25

𝑋 20 24 28 32

𝑌 2854 3162 3544 3992

Gauss forward interpolation:

Let 𝑥0 = 24, 𝑝 = (25 − 24)/4 = 0.25

Difference table is given by


P a g e | 30

x     p     y      Δy     Δ²y    Δ³y
20    −1    2854   308    74     −8
24    0     3162   382    66
28    1     3544   448
32    2     3992

Gauss's forward interpolation formula (truncated after the third difference) gives

yp = y0 + pΔy0 + [p(p − 1)/2!] Δ²y−1 + [(p + 1)p(p − 1)/3!] Δ³y−1 + ⋯
   = 3162 + (0.25)(382) + [0.25(0.25 − 1)/2](74) + [(1.25)(0.25)(0.25 − 1)/6](−8)
   = 3162 + 95.5 − 6.9375 + 0.3125
   = 3250.875

Gauss backward interpolation:

Let x₀ = 28, p = (25 − 28)/4 = −0.75

Difference table is given by

x     p     y      Δy    Δ²y   Δ³y
20   -2   2854    308
24   -1   3162    382    74    -8
28    0   3544    448    66
32    1   3992

Gauss backward interpolation formula gives

y_p = y₀ + pΔy₋₁ + [(p + 1)p/2!]Δ²y₋₁ + [(p + 1)p(p − 1)/3!]Δ³y₋₂ + ⋯

= 3544 + (−0.75)(382) + [(0.25)(−0.75)/2](66) + [(0.25)(−0.75)(−1.75)/6](−8)

= 3544 − 286.5 − 6.1875 − 0.4375 = 3250.875

which agrees with the value obtained from the Gauss forward formula.
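As a cross-check (a sketch of my own, not part of the original solution), both Gauss formulas can be evaluated numerically; they should agree at x = 25, since they interpolate the same cubic through the four points:

```python
# Sketch (names my own): Gauss forward and backward interpolation at x = 25.

def binom(p, k):
    """Generalized binomial coefficient C(p, k) for real p."""
    out = 1.0
    for i in range(k):
        out *= (p - i) / (i + 1)
    return out

ys = [2854, 3162, 3544, 3992]
d1 = [ys[i + 1] - ys[i] for i in range(3)]   # [308, 382, 448]
d2 = [d1[i + 1] - d1[i] for i in range(2)]   # [74, 66]
d3 = [d2[1] - d2[0]]                         # [-8]

# forward, origin x0 = 24: p = (25 - 24)/4
p = 0.25
y_fwd = ys[1] + binom(p, 1)*d1[1] + binom(p, 2)*d2[0] + binom(p + 1, 3)*d3[0]

# backward, origin x0 = 28: p = (25 - 28)/4
q = -0.75
y_bwd = ys[2] + binom(q, 1)*d1[1] + binom(q + 1, 2)*d2[1] + binom(q + 1, 3)*d3[0]

print(y_fwd, y_bwd)  # both 3250.875
```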

Bessel's formula
Bessel's formula is given by

y_p = (y₀ + y₁)/2 + (p − 1/2)Δy₀ + [p(p − 1)/2!]·(Δ²y₋₁ + Δ²y₀)/2 + [(p − 1/2)p(p − 1)/3!]Δ³y₋₁ + [(p + 1)p(p − 1)(p − 2)/4!]·(Δ⁴y₋₂ + Δ⁴y₋₁)/2 + ⋯, where p = (x − x₀)/h

EXAMPLE 1: Using the following table, find y(5).

𝑋 0 4 8 12

𝑌 143 158 177 199

Take x₀ = 4; given h = 4 and p = (5 − 4)/4 = 0.25. The difference table is given by

x     p     y     Δy    Δ²y   Δ³y
 0   -1   143    15
 4    0   158    19     4    -1
 8    1   177    22     3
12    2   199

Bessel's formula is given by

y_p = (y₀ + y₁)/2 + (p − 1/2)Δy₀ + [p(p − 1)/2!]·(Δ²y₋₁ + Δ²y₀)/2 + [(p − 1/2)p(p − 1)/3!]Δ³y₋₁ + ⋯

= (158 + 177)/2 + (0.25 − 0.5)(19) + [0.25(0.25 − 1)/2]·(4 + 3)/2 + [(0.25 − 0.5)(0.25)(0.25 − 1)/6](−1)

= 167.5 − 4.75 − 0.328 − 0.0078 = 162.41
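A minimal sketch of my own verifying the Bessel computation above:

```python
# Sketch (my own): evaluate Bessel's formula for the table above at x = 5.

ys = [143, 158, 177, 199]
d1 = [ys[i + 1] - ys[i] for i in range(3)]   # [15, 19, 22]
d2 = [d1[i + 1] - d1[i] for i in range(2)]   # [4, 3]
d3 = [d2[1] - d2[0]]                         # [-1]

p = (5 - 4) / 4                              # x0 = 4, h = 4
y5 = ((ys[1] + ys[2]) / 2
      + (p - 0.5) * d1[1]
      + (p * (p - 1) / 2) * (d2[0] + d2[1]) / 2
      + ((p - 0.5) * p * (p - 1) / 6) * d3[0])
print(round(y5, 2))  # → 162.41
```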

CHAPTER 5
NUMERICAL DIFFERENTIATION AND INTEGRATION

Definition:

Numerical differentiation is the process of computing the value of the derivative dy/dx, for some particular value of x, from the given data (xᵢ, yᵢ) when the actual relationship between x and y is not known.

NEWTON'S FORWARD DIFFERENCE FORMULA TO COMPUTE


DERIVATIVES:

Newton's forward difference formula is

y = f(x) = y₀ + pΔy₀ + [p(p − 1)/2!]Δ²y₀ + [p(p − 1)(p − 2)/3!]Δ³y₀ + ⋯, where p = (x − x₀)/h

i.e.,  y = y₀ + pΔy₀ + [(p² − p)/2!]Δ²y₀ + [(p³ − 3p² + 2p)/3!]Δ³y₀ + ⋯

Differentiating with respect to p, we get

dy/dp = Δy₀ + [(2p − 1)/2!]Δ²y₀ + [(3p² − 6p + 2)/3!]Δ³y₀ + ⋯

Also dp/dx = 1/h. Now dy/dx = (dy/dp)(dp/dx)

= (1/h)[Δy₀ + ((2p − 1)/2!)Δ²y₀ + ((3p² − 6p + 2)/3!)Δ³y₀ + ((4p³ − 18p² + 22p − 6)/4!)Δ⁴y₀ + ⋯]

This gives the value of dy/dx at any non-tabular value of x.

For tabular values of x, this formula takes a simpler form.



At 𝑥 = 𝑥0 ,

(dy/dx) at x = x₀ = (1/h)[Δy₀ − (1/2!)Δ²y₀ + (2/3!)Δ³y₀ − (6/4!)Δ⁴y₀ + ⋯]
 = (1/h)[Δy₀ − (1/2)Δ²y₀ + (1/3)Δ³y₀ − (1/4)Δ⁴y₀ + ⋯]

d²y/dx² = (1/h²)[Δ²y₀ + (p − 1)Δ³y₀ + ((6p² − 18p + 11)/12)Δ⁴y₀ + ⋯]

(d²y/dx²) at x = x₀ = (1/h²)[Δ²y₀ − Δ³y₀ + (11/12)Δ⁴y₀ + ⋯]

d³y/dx³ = (1/h³)[Δ³y₀ + ((12p − 18)/12)Δ⁴y₀ + ⋯]

(d³y/dx³) at x = x₀ = (1/h³)[Δ³y₀ − (3/2)Δ⁴y₀ + ⋯]
𝟎

NEWTON'S BACKWARD DIFFERENCE FORMULA TO COMPUTE DERIVATIVES:
Newton's backward difference formula is

y = f(x) = yₙ + p∇yₙ + [p(p + 1)/2!]∇²yₙ + [p(p + 1)(p + 2)/3!]∇³yₙ + ⋯, where p = (x − xₙ)/h

Differentiating with respect to p, we get



dy/dp = ∇yₙ + [(2p + 1)/2!]∇²yₙ + [(3p² + 6p + 2)/3!]∇³yₙ + ⋯

Also dp/dx = 1/h. Now dy/dx = (dy/dp)(dp/dx)

= (1/h)[∇yₙ + ((2p + 1)/2!)∇²yₙ + ((3p² + 6p + 2)/3!)∇³yₙ + ((4p³ + 18p² + 22p + 6)/4!)∇⁴yₙ + ⋯]

This gives the value of dy/dx at any non-tabular value of x.

For tabular values of x, this formula takes a simpler form.

At x = xₙ,

(dy/dx) at x = xₙ = (1/h)[∇yₙ + (1/2!)∇²yₙ + (2/3!)∇³yₙ + (6/4!)∇⁴yₙ + ⋯]
 = (1/h)[∇yₙ + (1/2)∇²yₙ + (1/3)∇³yₙ + (1/4)∇⁴yₙ + ⋯]

d²y/dx² = (1/h²)[∇²yₙ + (p + 1)∇³yₙ + ((6p² + 18p + 11)/12)∇⁴yₙ + ⋯]

(d²y/dx²) at x = xₙ = (1/h²)[∇²yₙ + ∇³yₙ + (11/12)∇⁴yₙ + ⋯]

d³y/dx³ = (1/h³)[∇³yₙ + ((12p + 18)/12)∇⁴yₙ + ⋯]

(d³y/dx³) at x = xₙ = (1/h³)[∇³yₙ + (3/2)∇⁴yₙ + ⋯]

The maximum and minimum values of a function y = f(x) can be found by equating the first derivative dy/dx to zero and solving for x.

DERIVATIVES USING BESSEL`S FORMULA:

Bessel`s formula is given by

y_p = (y₀ + y₁)/2 + (p − 1/2)Δy₀ + [p(p − 1)/2!]·(Δ²y₋₁ + Δ²y₀)/2 + [(p − 1/2)p(p − 1)/3!]Δ³y₋₁ + [(p + 1)p(p − 1)(p − 2)/4!]·(Δ⁴y₋₂ + Δ⁴y₋₁)/2 + ⋯, where p = (x − x₀)/h

dy/dp = Δy₀ + [(2p − 1)/2!]·(Δ²y₋₁ + Δ²y₀)/2 + [(3p² − 3p + 1/2)/3!]Δ³y₋₁ + [(4p³ − 6p² − 2p + 2)/4!]·(Δ⁴y₋₂ + Δ⁴y₋₁)/2 + ⋯

Also dp/dx = 1/h

dy/dx = (dy/dp)(dp/dx) = (1/h)[Δy₀ + ((2p − 1)/2!)·(Δ²y₋₁ + Δ²y₀)/2 + ((3p² − 3p + 1/2)/3!)Δ³y₋₁ + ((4p³ − 6p² − 2p + 2)/4!)·(Δ⁴y₋₂ + Δ⁴y₋₁)/2 + ⋯]

(dy/dx) at x = x₀ = (1/h)[Δy₀ − (1/2)·(Δ²y₋₁ + Δ²y₀)/2 + (1/12)Δ³y₋₁ + (1/12)·(Δ⁴y₋₂ + Δ⁴y₋₁)/2 + ⋯]

2
Problem 1: From the following table of values of x and y, find dy/dx and d²y/dx² for x = 1.05.

X 1.00 1.05 1.10 1.15 1.20 1.25 1.30

y 1 1.02470 1.04881 1.07238 1.09544 1.11803 1.14017

Solution: The forward difference table is given by (each difference written on the row of its leading argument)

x      y        Δ        Δ²        Δ³        Δ⁴        Δ⁵        Δ⁶
1.00   1.00000  0.02470  -0.00059   0.00005  -0.00002   0.00003  -0.00006
1.05   1.02470  0.02411  -0.00054   0.00003   0.00001  -0.00003
1.10   1.04881  0.02357  -0.00051   0.00004  -0.00002
1.15   1.07238  0.02306  -0.00047   0.00002
1.20   1.09544  0.02259  -0.00045
1.25   1.11803  0.02214
1.30   1.14017

Take x₀ = 1.05, y₀ = 1.02470, Δy₀ = 0.02411, Δ²y₀ = −0.00054, Δ³y₀ = 0.00003, Δ⁴y₀ = 0.00001, Δ⁵y₀ = −0.00003

Newton's forward difference formula for differentiation gives

(dy/dx) at x = x₀ = (1/h)[Δy₀ − (1/2)Δ²y₀ + (1/3)Δ³y₀ − (1/4)Δ⁴y₀ + (1/5)Δ⁵y₀ − ⋯]
 = (1/0.05)[0.02411 − (1/2)(−0.00054) + (1/3)(0.00003) − (1/4)(0.00001) + (1/5)(−0.00003)]
 = 0.48763

(d²y/dx²) at x = x₀ = (1/h²)[Δ²y₀ − Δ³y₀ + (11/12)Δ⁴y₀ − (5/6)Δ⁵y₀ + ⋯]
 = (1/0.05²)[−0.00054 − 0.00003 + (11/12)(0.00001) − (5/6)(−0.00003)]
 = −0.2143
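The computation above can be reproduced with a short script (my own sketch; the helper name `difference_table` is not from the text):

```python
# Sketch (my own): derivative estimates at x0 = 1.05 from the forward table.

def difference_table(ys):
    cols = [list(ys)]
    while len(cols[-1]) > 1:
        prev = cols[-1]
        cols.append([prev[i + 1] - prev[i] for i in range(len(prev) - 1)])
    return cols

ys = [1.0, 1.02470, 1.04881, 1.07238, 1.09544, 1.11803, 1.14017]
h, i = 0.05, 1                                  # x0 = 1.05 is index 1
t = difference_table(ys)
d = [t[k][i] for k in range(1, 6)]              # Δy0 ... Δ⁵y0

dy = (d[0] - d[1] / 2 + d[2] / 3 - d[3] / 4 + d[4] / 5) / h
d2y = (d[1] - d[2] + (11 / 12) * d[3] - (5 / 6) * d[4]) / h**2
print(round(dy, 5), round(d2y, 4))
```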

Problem 2: From the following table of values of x and y, find dy/dx and d²y/dx² for x = 1.25.

X 1.00 1.05 1.10 1.15 1.20 1.25 1.30

y 1 1.02470 1.04881 1.07238 1.09544 1.11803 1.14017

Solution: The difference table is given by (each difference written on the row of its leading argument)

x      y        Δ        Δ²        Δ³        Δ⁴        Δ⁵        Δ⁶
1.00   1.00000  0.02470  -0.00059   0.00005  -0.00002   0.00003  -0.00006
1.05   1.02470  0.02411  -0.00054   0.00003   0.00001  -0.00003
1.10   1.04881  0.02357  -0.00051   0.00004  -0.00002
1.15   1.07238  0.02306  -0.00047   0.00002
1.20   1.09544  0.02259  -0.00045
1.25   1.11803  0.02214
1.30   1.14017

Take xₙ = 1.25, yₙ = 1.11803, ∇yₙ = 0.02259, ∇²yₙ = −0.00047, ∇³yₙ = 0.00004, ∇⁴yₙ = 0.00001, ∇⁵yₙ = 0.00003

Newton's backward difference formula for differentiation gives

(dy/dx) at x = xₙ = (1/h)[∇yₙ + (1/2)∇²yₙ + (1/3)∇³yₙ + (1/4)∇⁴yₙ + (1/5)∇⁵yₙ + ⋯]
 = (1/0.05)[0.02259 + (1/2)(−0.00047) + (1/3)(0.00004) + (1/4)(0.00001) + (1/5)(0.00003)]
 = 0.44754

(d²y/dx²) at x = xₙ = (1/h²)[∇²yₙ + ∇³yₙ + (11/12)∇⁴yₙ + (5/6)∇⁵yₙ + ⋯]
 = (1/0.05²)[−0.00047 + 0.00004 + (11/12)(0.00001) + (5/6)(0.00003)]
 = −0.1583
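For the backward case, the ∇ᵏyₙ are the bottom entries of the difference columns ending at x = 1.25. A sketch of my own:

```python
# Sketch (my own): backward-difference derivative estimates at xn = 1.25.

def difference_table(ys):
    cols = [list(ys)]
    while len(cols[-1]) > 1:
        prev = cols[-1]
        cols.append([prev[i + 1] - prev[i] for i in range(len(prev) - 1)])
    return cols

ys = [1.0, 1.02470, 1.04881, 1.07238, 1.09544, 1.11803, 1.14017]
h, j = 0.05, 5                                    # xn = 1.25 is index 5
t = difference_table(ys)
b = [t[k][j - k] for k in range(1, 6)]            # ∇y_n ... ∇⁵y_n

dy = (b[0] + b[1] / 2 + b[2] / 3 + b[3] / 4 + b[4] / 5) / h
d2y = (b[1] + b[2] + (11 / 12) * b[3] + (5 / 6) * b[4]) / h**2
print(round(dy, 5), round(d2y, 4))
```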

NUMERICAL INTEGRATION
Definition: Numerical integration is the process of computing the value of the integral
∫ₐᵇ f(x) dx (or ∫ₐᵇ y dx) from the given data (xᵢ, yᵢ) when the actual relationship between x
and y is not known.

Trapezoidal Rule: ∫ₐᵇ y dx = (h/2)(A + 2B), where h = width of each subinterval, A = sum of
the first and last ordinates, B = sum of the remaining ordinates.

Note:
1. In deriving the trapezoidal rule we replace the arc of the curve y = f(x) by its chords.

2. If the number of subintervals is increased, a better approximation to the area will be obtained.

3. The truncation error of the trapezoidal rule is of order h².

Simpson's One-Third Rule: ∫ₐᵇ y dx = (h/3)(A + 4B + 2C), where h = width of each
subinterval, A = sum of the first and last ordinates, B = sum of the even ordinates (the 2nd, 4th, …), C = sum of
the remaining interior ordinates.

NOTE:
1. Simpson's one-third rule approximates the area of two adjacent strips by the area
under a quadratic parabola.

2. The smaller h is, the better the approximation.

3. The truncation error of Simpson's one-third rule is of order h⁴.

4. It is often simply called Simpson's rule.



Simpson's Three-Eighth Rule: ∫ₐᵇ y dx = (3h/8)(A + 3B + 2C), where h = width of each subinterval,
A = sum of the first and last ordinates, C = sum of every third interior ordinate (y₃, y₆, …), which takes coefficient 2, B = sum of
the remaining ordinates, which take coefficient 3.

Note: The rule is applicable only when the number of intervals is a multiple of 3.

Problem 1: Dividing the range into 10 equal parts, find the approximate value of ∫₀^π sin x dx
by (a) the trapezoidal rule, (b) Simpson's one-third rule, (c) Simpson's three-eighth rule.

Solution: The interval is divided into 10 equal parts, so h = π/10.

The values are tabulated as follows:

x:      0   π/10    2π/10   3π/10   4π/10   5π/10   6π/10   7π/10   8π/10   9π/10   π
sin x:  0   0.3090  0.5878  0.8090  0.9511  1.0000  0.9511  0.8090  0.5878  0.3090  0

Trapezoidal rule:

∫₀^π sin x dx = (h/2)(A + 2B)

A = sum of the first and last ordinates = 0 + 0 = 0

B = sum of the remaining ordinates = 0.3090 + 0.5878 + 0.8090 + 0.9511 + 1 + 0.9511 + 0.8090 + 0.5878 + 0.3090 = 6.3138

∫₀^π sin x dx = ((π/10)/2)(0 + 2(6.3138)) = 1.984
0 2

Simpson's rule: ∫₀^π sin x dx = (h/3)(A + 4B + 2C)

A = sum of the first and last ordinates = 0 + 0 = 0

B = sum of the even ordinates = 0.3090 + 0.8090 + 1 + 0.8090 + 0.3090 = 3.2360

C = sum of the remaining ordinates = 0.5878 + 0.9511 + 0.9511 + 0.5878 = 3.0778

∫₀^π sin x dx = ((π/10)/3)(0 + 4(3.2360) + 2(3.0778)) = 2.0000

Simpson's three-eighth rule: ∫ₐᵇ y dx = (3h/8)(A + 3B + 2C)

A = sum of the first and last ordinates = 0 + 0 = 0

C = sum of the ordinates taken with coefficient 2 = 0.5878 + 1 + 0.5878 = 2.1756

B = sum of the remaining ordinates = 0.3090 + 0.8090 + 0.9511 + 0.9511 + 0.8090 + 0.3090 = 4.1382

∫₀^π sin x dx = (3(π/10)/8)(0 + 3(4.1382) + 2(2.1756)) ≈ 1.975

(Since 10 is not a multiple of 3, the three-eighth rule is not strictly applicable here, which accounts for the poorer result.)
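The trapezoidal and Simpson one-third results above can be verified directly (a sketch of my own; the three-eighth rule is omitted since 10 intervals is not a multiple of 3):

```python
# Sketch (my own): trapezoidal and Simpson 1/3 rules for ∫₀^π sin x dx,
# n = 10 subintervals (exact value: 2).
from math import sin, pi

n = 10
h = pi / n
y = [sin(k * h) for k in range(n + 1)]

trap = (h / 2) * (y[0] + y[-1] + 2 * sum(y[1:-1]))
simpson = (h / 3) * (y[0] + y[-1] + 4 * sum(y[1:-1:2]) + 2 * sum(y[2:-1:2]))

print(round(trap, 4), round(simpson, 4))  # → 1.9835 2.0001
```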

CHAPTER 6
Euler Method
Euler's method is the most elementary approximation technique for solving initial-value
problems. Although it is seldom used in practice, the simplicity of its derivation can be used
to illustrate the techniques involved in the construction of some of the more advanced

techniques, without the cumbersome algebra that accompanies these constructions. The
object of Euler's method is to obtain approximations to the well-posed initial-value problem

dy/dt = f(t, y),  a ≤ t ≤ b,  y(a) = α   (6)

A continuous approximation to the solution 𝑦(𝑡) will not be obtained; instead,


approximations to 𝑦 will be generated at various values, called mesh points, in the interval
[𝑎, 𝑏]. Once the approximate solution is obtained at the points, the approximate solution at
other points in the interval can be found by interpolation.

We first make the stipulation that the mesh points are equally distributed throughout the
interval [𝑎, 𝑏]. This condition is ensured by choosing a positive integer N, setting ℎ = (𝑏 −
𝑎)/𝑁, and selecting the mesh points

𝑡𝑖 = 𝑎 + 𝑖ℎ, for each 𝑖 = 0,1,2, … , 𝑁

The common distance between the points ℎ = 𝑡𝑖+1 − 𝑡𝑖 , is called the step size.

We will use Taylor's Theorem to derive Euler's method. Suppose that 𝑦(𝑡), the unique
solution to (6), has two continuous derivatives on [𝑎, 𝑏], so that for each 𝑖 = 0,1,2, … , 𝑁 − 1

y(t_{i+1}) = y(tᵢ) + (t_{i+1} − tᵢ) y′(tᵢ) + ((t_{i+1} − tᵢ)²/2) y″(ξᵢ)

for some number 𝜉𝑖 in (𝑡𝑖 , 𝑡𝑖+1 ). Because ℎ = 𝑡𝑖+1 − 𝑡𝑖 , we have

y(t_{i+1}) = y(tᵢ) + h y′(tᵢ) + (h²/2) y″(ξᵢ)

and, because y(t) satisfies the differential equation (6),

y(t_{i+1}) = y(tᵢ) + h f(tᵢ, y(tᵢ)) + (h²/2) y″(ξᵢ)

Euler's method constructs wᵢ ≈ y(tᵢ), for each i = 1, 2, …, N, by deleting the remainder term. Thus,
Euler's method is

𝑤0 = 𝛼

𝑤𝑖+1 = 𝑤𝑖 + ℎ𝑓(𝑡𝑖 , 𝑤𝑖 ), for each 𝑖 = 0,1, … , 𝑁 − 1.

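The recurrence above can be sketched in a few lines of Python (my own illustration; the test equation y′ = y is my choice, not from the text):

```python
# Minimal sketch of Euler's method: w_{i+1} = w_i + h f(t_i, w_i).

def euler(f, a, b, alpha, N):
    h = (b - a) / N
    t, w = a, alpha
    for _ in range(N):
        w += h * f(t, w)    # Euler update
        t += h
    return w

# y' = y, y(0) = 1: Euler with N steps returns (1 + 1/N)^N,
# which approaches e = y(1) as N grows.
print(euler(lambda t, y: y, 0.0, 1.0, 1.0, 100))
```

For this test equation the iteration reproduces the compound-growth factor (1 + 1/N)^N, which illustrates the method's first-order convergence toward the exact value e.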

Midpoint Method:
w_{i+1} = wᵢ + h f(tᵢ + h/2, wᵢ + (h/2) f(tᵢ, wᵢ)), for i = 0, 1, …, N − 1

Only three parameters are present in a₁f(t + α₁, y + β₁), and all are needed in the match of T⁽²⁾. So a
more complicated, four-parameter form is required to satisfy the conditions of any of the higher-order
Taylor methods.

Initial and boundary conditions . An ordinary differential equation of the 𝑛th order is of the form

𝐹(𝑥, 𝑦, 𝑑𝑦/𝑑𝑥, 𝑑 2 𝑦/𝑑𝑥 2 , ⋯ , 𝑑 𝑛 𝑦/𝑑𝑥 𝑛 ) = 0

Its general solution contains 𝑛 arbitrary constants and is of the form

𝜙(𝑥, 𝑦, 𝑐1 , 𝑐2 , ⋯ , 𝑐𝑛 ) = 0

To obtain its particular solution, 𝑛 conditions must be given so that the constants 𝑐1 , 𝑐2 … , 𝑐𝑛
can be determined.

If these conditions are prescribed at one point only (say, x = x₀), then the differential equation
together with the conditions constitutes an initial value problem of the nth order.

If the conditions are prescribed at two or more points, then the problem is termed as boundary
value problem.

In this chapter, we shall first describe methods for solving initial value problems and then
explain the finite difference method and shooting method for solving boundary value
problems.

Picard's Method
𝑑𝑦
Consider the first order equation = 𝑓(𝑥, 𝑦)
𝑑𝑥

It is required to find that particular solution of (1) which assumes the value 𝑦0 when 𝑥 = 𝑥0.
Integrating (1) between limits, we get


∫_{y₀}^{y} dy = ∫_{x₀}^{x} f(x, y) dx, or y = y₀ + ∫_{x₀}^{x} f(x, y) dx   (2)

This is an integral equation equivalent to (1), for it contains the unknown 𝑦 under the integral sign.

As a first approximation 𝑦1 to the solution, we put 𝑦 = 𝑦0 in 𝑓(𝑥, 𝑦) and integrate (2), giving

𝑥
𝑦1 = 𝑦0 + ∫ 𝑓(𝑥, 𝑦0 )𝑑𝑥
𝑥0

For a second approximation 𝑦2 , we put 𝑦 = 𝑦1 in 𝑓(𝑥, 𝑦) and integrate (2), giving

y₂ = y₀ + ∫_{x₀}^{x} f(x, y₁) dx

Similarly, a third approximation is

𝑥
𝑦3 = 𝑦0 + ∫ 𝑓(𝑥, 𝑦2 )𝑑𝑥
𝑥0

Continuing this process, we obtain 𝑦4 , 𝑦5 , ⋯ 𝑦𝑛 where



𝑥
𝑦𝑛 = 𝑦0 + ∫ 𝑓(𝑥, 𝑦𝑛−1 )𝑑𝑥
𝑥0

Hence this method gives a sequence of approximations 𝑦1 , 𝑦2 , 𝑦3 … each giving a better result than the
preceding one.

EXAMPLE 1
Using Picard's process of successive approximations, obtain a solution up to the fifth
approximation of the equation 𝑑𝑦/𝑑𝑥 = 𝑦 + 𝑥, such that 𝑦 = 1 when 𝑥 = 0. Check your
answer by finding the exact particular solution.

Solution:
(i) We have y = 1 + ∫₀ˣ (x + y) dx

First approximation. Put 𝑦 = 1 in 𝑦 + 𝑥, giving

y₁ = 1 + ∫₀ˣ (1 + x) dx = 1 + x + x²/2

Second approximation. Put y = 1 + x + x²/2 in y + x, giving

y₂ = 1 + ∫₀ˣ (1 + 2x + x²/2) dx = 1 + x + x² + x³/6

Third approximation. Put y = 1 + x + x² + x³/6 in y + x, giving

y₃ = 1 + ∫₀ˣ (1 + 2x + x² + x³/6) dx = 1 + x + x² + x³/3 + x⁴/24

Fourth approximation. Put y = y₃ in y + x, giving

y₄ = 1 + ∫₀ˣ (1 + 2x + x² + x³/3 + x⁴/24) dx = 1 + x + x² + x³/3 + x⁴/12 + x⁵/120

Fifth approximation. Put y = y₄ in y + x, giving

y₅ = 1 + ∫₀ˣ (1 + 2x + x² + x³/3 + x⁴/12 + x⁵/120) dx = 1 + x + x² + x³/3 + x⁴/12 + x⁵/60 + x⁶/720

(ii) The given equation dy/dx − y = x is a Leibnitz linear equation.

Its I.F. being e^{−x}, the solution is

𝑦𝑒 −𝑥 = ∫ 𝑥𝑒 −𝑥 𝑑𝑥 + 𝑐

= −𝑥𝑒 −𝑥 − ∫ (−𝑒 −𝑥 )𝑑𝑥 + 𝑐 = −𝑥𝑒 −𝑥 − 𝑒 −𝑥 + 𝑐


∴ 𝑦 = 𝑐𝑒 𝑥 − 𝑥 − 1

Since 𝑦 = 1, when 𝑥 = 0, ∴ 𝑐 = 2.

Thus the desired particular solution is

𝑦 = 2𝑒 𝑥 − 𝑥 − 1

Or, using the series eˣ = 1 + x + x²/2! + x³/3! + x⁴/4! + ⋯, we get

y = 1 + x + x² + x³/3 + x⁴/12 + x⁵/60 + x⁶/360 + ⋯   (3)

Comparing y₅ with (3), it is clear that the fifth approximation agrees with the exact particular solution up to the
term in x⁵.
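The five iterates above can be generated mechanically by storing each yₙ as a list of polynomial coefficients (a pure-Python sketch of my own):

```python
# Picard iteration for y' = x + y, y(0) = 1, with polynomials stored as
# coefficient lists: poly[k] is the coefficient of x^k.

def integrate0(poly):
    """Term-by-term integral from 0 to x: c_k x^k -> c_k x^(k+1)/(k+1)."""
    return [0.0] + [c / (k + 1) for k, c in enumerate(poly)]

def picard_step(y):
    integrand = list(y) + [0.0, 0.0]   # pad so the x-coefficient exists
    integrand[1] += 1.0                # integrand is f(x, y) = x + y(x)
    new = integrate0(integrand)
    new[0] += 1.0                      # add y(0) = 1
    return new

y = [1.0]                              # y_0(x) = 1
for _ in range(5):
    y = picard_step(y)

# first seven coefficients: 1, 1, 1, 1/3, 1/12, 1/60, 1/720
print([round(c, 6) for c in y[:7]])
```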

CHAPTER 7
MILNE`s PREDICTOR CORRECTOR METHOD

OVERVIEW
Milne's Predictor Method, also known as Milne's Predictor-Corrector Method, is a
numerical technique used for solving ordinary differential equations (ODEs). It
is a multistep method, closely related to the Adams-Bashforth and Adams-Moulton methods, and belongs to the family of predictor-
corrector methods.

The basic idea behind Milne's Predictor Method is to predict the value of the dependent
variable at a future time step using an extrapolation formula, and then use this prediction to
correct the estimate iteratively until a desired level of accuracy is achieved.

The general procedure of Milne's Predictor Method can be summarized as follows:

1. Prediction Step:

Using previous known values of the dependent variable and the given ODE, predict the value of the
dependent variable at the next time step using an extrapolation formula (typically a higher-order
Adams-Bashforth formula).

2. Correction Step:

Use the predicted value obtained in the previous step as an initial guess to correct the estimate
iteratively. This correction step involves solving an implicit equation derived from the ODE and the
extrapolation formula used in the prediction step.

3. Iteration:

Iterate the correction step until the corrected estimate converges to the desired level of accuracy,
typically using iterative methods like Newton's method or other root-finding techniques.

4. Update:

Once the corrected estimate is obtained, update the solution and proceed to the next time step.

The advantage of Milne's Predictor Method lies in its ability to achieve higher accuracy
compared to explicit methods like Euler's method, while still maintaining simplicity and
computational efficiency. However, it may suffer from stability issues for certain types of
ODEs, and care must be taken in the choice of step size and extrapolation formula to ensure
stability and accuracy.

Overall, Milne's Predictor Method is a valuable tool in the numerical solution of ODEs,
particularly when higher accuracy is desired and when the computational cost of more
complex methods like Runge-Kutta methods is prohibitive.

FORMULATION

Given 𝑑𝑦/𝑑𝑥 = 𝑓(𝑥, 𝑦) and 𝑦 = 𝑦0 , 𝑥 = 𝑥0; to find an approximate value of 𝑦 for 𝑥 = 𝑥0 + 𝑛ℎ by


Milne's method, we proceed as follows:

The value y0 = 𝑦(𝑥0 ) being given, we compute

𝒚𝟏 = 𝒚(𝒙𝟎 + 𝒉), 𝒚𝟐 = 𝒚(𝒙𝟎 + 𝟐𝒉), 𝒚𝟑 = 𝒚(𝒙𝟎 + 𝟑𝒉)

by Picard's or Taylor's series method.

Next we calculate,

𝑓0 = 𝑓(𝑥0, 𝑦0 ), 𝑓1 = 𝑓(𝑥0 + ℎ, 𝑦1 ), 𝑓2 = 𝑓(𝑥0 + 2ℎ, 𝑦2 ), 𝑓3 = 𝑓(𝑥0 + 3ℎ, 𝑦3 )

Then to find 𝑦4 = 𝑦(𝑥0 + 4ℎ), we substitute Newton's forward interpolation formula

f(x, y) = f₀ + nΔf₀ + [n(n − 1)/2]Δ²f₀ + [n(n − 1)(n − 2)/6]Δ³f₀ + ⋯

In the relation

y₄ = y₀ + ∫_{x₀}^{x₀+4h} f(x, y) dx

y₄ = y₀ + ∫_{x₀}^{x₀+4h} (f₀ + nΔf₀ + [n(n − 1)/2]Δ²f₀ + ⋯) dx   [put x = x₀ + nh, dx = h dn]

 = y₀ + h ∫₀⁴ (f₀ + nΔf₀ + [n(n − 1)/2]Δ²f₀ + ⋯) dn

 = y₀ + h (4f₀ + 8Δf₀ + (20/3)Δ²f₀ + ⋯)

Neglecting fourth and higher order differences and expressing Δf₀, Δ²f₀, and Δ³f₀ in terms of the
function values, we get

y₄⁽ᵖ⁾ = y₀ + (4h/3)(2f₁ − f₂ + 2f₃)

which is called a predictor.

Having found 𝑦4 , we obtain a first approximation to

𝑓4 = 𝑓(𝑥0 + 4ℎ, 𝑦4 )

Then a better value of 𝑦4 is found by Simpson's rule as

y₄⁽ᶜ⁾ = y₂ + (h/3)(f₂ + 4f₃ + f₄)

which is called a corrector.

Then an improved value of 𝑓4 is computed and again the corrector is applied to find a still better
value of 𝑦4 . We repeat this step until 𝑦4 remains unchanged. Once 𝑦4 and 𝑓4 are obtained to desired
degree of accuracy, 𝑦5 = 𝑦(𝑥0 + 5ℎ) is found from the predictor as

y₅⁽ᵖ⁾ = y₁ + (4h/3)(2f₂ − f₃ + 2f₄)

and 𝑓5 = 𝑓(𝑥0 + 5ℎ, 𝑦5 ) is calculated. Then a better approximation to the value of 𝑦5 is obtained
from the corrector as

y₅⁽ᶜ⁾ = y₃ + (h/3)(f₃ + 4f₄ + f₅)

We repeat this step until 𝑦5 becomes stationary and, then proceed to calculate 𝑦6 as before.

This is Milne's predictor-corrector method. To insure greater accuracy, we must first improve the
accuracy of the starting values and then subdivide the intervals.

EXAMPLE 1
Apply Milne's method, to find a solution of the differential equation 𝑦 ′ = 𝑥 − 𝑦 2 in the range 0 ≤
𝑥 ≤ 1 for the boundary condition 𝑦 = 0 at 𝑥 = 0.

Solution:
Using Picard's method, we have
y = y(0) + ∫₀ˣ f(x, y) dx, where f(x, y) = x − y²

To get the first approximation, we put 𝑦 = 0 in 𝑓(𝑥, 𝑦),



Giving y₁ = 0 + ∫₀ˣ x dx = x²/2

To find the second approximation, we put y = x²/2 in f(x, y),

Giving y₂ = ∫₀ˣ (x − x⁴/4) dx = x²/2 − x⁵/20

Similarly, the third approximation is

y₃ = ∫₀ˣ [x − (x²/2 − x⁵/20)²] dx = x²/2 − x⁵/20 + x⁸/160 − x¹¹/4400

Now let us determine the starting values of the Milne's method from (i), by choosing ℎ = 0.2.

x₀ = 0.0, y₀ = 0.0000, f₀ = 0.0000
x₁ = 0.2, y₁ = 0.0200, f₁ = 0.1996
x₂ = 0.4, y₂ = 0.0795, f₂ = 0.3937
x₃ = 0.6, y₃ = 0.1762, f₃ = 0.5689

Using the predictor y₄⁽ᵖ⁾ = y₀ + (4h/3)(2f₁ − f₂ + 2f₃):

x = 0.8, y₄⁽ᵖ⁾ = 0.3049, f₄ = 0.7070

and the corrector y₄⁽ᶜ⁾ = y₂ + (h/3)(f₂ + 4f₃ + f₄) yields

y₄⁽ᶜ⁾ = 0.3046, f₄ = 0.7072   (ii)

Again using the corrector,

(𝑐)
𝑦4 = 0.3046, which is the same as in (ii)

Now using the predictor y₅⁽ᵖ⁾ = y₁ + (4h/3)(2f₂ − f₃ + 2f₄):

x = 1.0, y₅⁽ᵖ⁾ = 0.4554, f₅ = 0.7926

and the corrector y₅⁽ᶜ⁾ = y₃ + (h/3)(f₃ + 4f₄ + f₅) gives

y₅⁽ᶜ⁾ = 0.4555, f₅ = 0.7925

Again using the corrector,

(𝑐)
𝑦5 = 0.4555, a value which is the same as before.

Hence 𝑦(1) = 0.4555.
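The predictor-corrector loop of this example can be sketched as follows (my own code; variable names are not from the text):

```python
# Sketch (my own): Milne's predictor-corrector for y' = x - y², y(0) = 0,
# using the starting values computed above with h = 0.2.

def f(x, y):
    return x - y * y

h = 0.2
xs = [0.0, 0.2, 0.4, 0.6]
ys = [0.0, 0.0200, 0.0795, 0.1762]

for step in range(2):                   # advance to x = 0.8, then x = 1.0
    fs = [f(x, y) for x, y in zip(xs, ys)]
    # predictor: y_new = y_{n-3} + (4h/3)(2f_{n-2} - f_{n-1} + 2f_n)
    yp = ys[-4] + (4 * h / 3) * (2 * fs[-3] - fs[-2] + 2 * fs[-1])
    xn = xs[-1] + h
    # corrector (Simpson): y_new = y_{n-1} + (h/3)(f_{n-1} + 4f_n + f_new),
    # iterated until the value settles
    yc = yp
    for _ in range(10):
        yc = ys[-2] + (h / 3) * (fs[-2] + 4 * fs[-1] + f(xn, yc))
    xs.append(xn)
    ys.append(yc)

print(round(ys[-1], 4))   # close to the 0.4555 obtained above
```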



Runge's Method
Consider the differential equation dy/dx = f(x, y), y(x₀) = y₀.   (1)

Clearly the slope of the curve through 𝑃(𝑥0 , 𝑦0 ) is 𝑓(𝑥0 , 𝑦0 ). Integrating both sides of (1) from
(𝑥0 , 𝑦0 ) to (𝑥0 + ℎ, 𝑦0 + 𝑘), we have

∫_{y₀}^{y₀+k} dy = ∫_{x₀}^{x₀+h} f(x, y) dx   (2)

Called after the German mathematician Carl Runge (1856-1927).

To evaluate the integral on the right, we take 𝑁 as the mid-point of 𝐿𝑀 and find the values of 𝑓(𝑥, 𝑦)(
i.e., 𝑑𝑦/𝑑𝑥) at the points 𝑥0 , 𝑥0 + ℎ/2, 𝑥0 + ℎ. For this purpose, we first determine the values of 𝑦 at
these points.

Here the value of y at S is obtained by an Euler half-step, y_S = y₀ + (h/2) f(x₀, y₀).   (3)

Also, y_T = MT = LP + RT = y₀ + PR · tan θ = y₀ + h f(x₀, y₀)

Now the value y_Q at x₀ + h is given by the point where the line through T(x₀ + h, y_T), drawn with the
slope at T, meets MQ.

∴ Slope at T = tan θ′ = f(x₀ + h, y_T) = f[x₀ + h, y₀ + h f(x₀, y₀)]

∴ y_Q = y₀ + h tan θ′ = y₀ + h f[x₀ + h, y₀ + h f(x₀, y₀)]   (4)

Thus the value of f(x, y) at P = f(x₀, y₀),

the value of f(x, y) at S = f(x₀ + h/2, y_S),

and the value of f(x, y) at Q = f(x₀ + h, y_Q),

where y_S and y_Q are given by (3) and (4).

Hence from (2), we obtain

k = ∫_{x₀}^{x₀+h} f(x, y) dx = (h/6)[f_P + 4f_S + f_Q]   (by Simpson's rule)

 = (h/6)[f(x₀, y₀) + 4f(x₀ + h/2, y_S) + f(x₀ + h, y_Q)]   (5)

which gives a sufficiently accurate value of k, and hence y₁ = y₀ + k.

The repeated application of (5) gives the values of y at equally spaced points.

Working rule to solve (1) by Runge's method:

Calculate successively

k₁ = h f(x₀, y₀),
k₂ = h f(x₀ + h/2, y₀ + k₁/2),
k′ = h f(x₀ + h, y₀ + k₁),

and

k₃ = h f(x₀ + h, y₀ + k′).

Finally compute k = (1/6)(k₁ + 4k₂ + k₃), which gives the required approximate value as y₁ = y₀ + k.
6

(Note that 𝑘 is the weighted mean of 𝑘1 , 𝑘2 , and 𝑘3 ).
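The working rule can be written as a single step function (a sketch of my own, applied to y′ = x + y with y(0) = 1 as an illustrative test equation of my choosing):

```python
# Sketch (my own): one step of Runge's third-order method.

def runge3_step(f, x0, y0, h):
    k1 = h * f(x0, y0)
    k2 = h * f(x0 + h / 2, y0 + k1 / 2)
    kp = h * f(x0 + h, y0 + k1)          # k'
    k3 = h * f(x0 + h, y0 + kp)
    return y0 + (k1 + 4 * k2 + k3) / 6   # weighted mean of k1, k2, k3

y1 = runge3_step(lambda x, y: x + y, 0.0, 1.0, 0.2)
print(round(y1, 4))  # → 1.2427 (exact: 2e^0.2 - 0.2 - 1 ≈ 1.2428)
```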



Runge - Kutta Method


The Taylor's series method of solving differential equations numerically is restricted by the labor
involved in finding the higher order derivatives. However, there is a class of methods known as
Runge-Kutta methods which do not require the calculations of higher order derivatives and give
greater accuracy. The Runge-Kutta formulae possess the advantage of requiring only the function
values at some selected points. These methods agree with Taylor's series solution up to the term in ℎ𝑟
where 𝑟 differs from method to method and is called the order of that method.

First order R-K method. We have seen that Euler's method gives

𝒚𝟏 = 𝒚𝟎 + 𝒉𝒇(𝒙𝟎 , 𝒚𝟎 ) = 𝒚𝟎 + 𝒉𝒚′𝟎 [∵ 𝒚′ = 𝒇(𝒙, 𝒚)]

Expanding by Taylor's series

ℎ2 ′′
𝑦1 = 𝑦(𝑥0 + ℎ) = 𝑦0 + ℎ𝑦0′ + 𝑦 +⋯
2 0

It follows that the Euler's method agrees with the Taylor's series solution upto the term in ℎ.

Hence, Euler's method is the Runge-Kutta method of the first order.

Second order R-K method. The modified Euler's method gives

y₁ = y₀ + (h/2)[f(x₀, y₀) + f(x₀ + h, y₁)]   (1)

Substituting y₁ = y₀ + h f(x₀, y₀) on the right-hand side of (1), we obtain

y₁ = y₀ + (h/2)[f₀ + f(x₀ + h, y₀ + h f₀)], where f₀ = f(x₀, y₀)   (2)
𝟐 𝟎

Expanding the L.H.S. by Taylor's series, we get

y₁ = y(x₀ + h) = y₀ + h y₀′ + (h²/2!) y₀″ + (h³/3!) y₀‴ + ⋯   (3)

Expanding f(x₀ + h, y₀ + h f₀) by Taylor's series for a function of two variables, (2) gives

y₁ = y₀ + (h/2)[f₀ + {f₀ + h(∂f/∂x)₀ + h f₀ (∂f/∂y)₀ + O(h²)}]

 = y₀ + h f₀ + (h²/2)[(∂f/∂x)₀ + f₀ (∂f/∂y)₀] + O(h³)

 = y₀ + h f₀ + (h²/2) f₀′ + O(h³)   [∵ df(x, y)/dx = ∂f/∂x + f ∂f/∂y]   (4)

Comparing (3) and (4), it follows that the modified Euler's method agrees with the Taylor's series
solution upto the term in ℎ2 .

Hence the modified Euler's method is the Runge-Kutta method of the second order. ∴ The second
order Runge-Kutta formula is

y₁ = y₀ + (1/2)(k₁ + k₂)

where k₁ = h f(x₀, y₀) and k₂ = h f(x₀ + h, y₀ + k₁)

(iii) Third order R-K method. Similarly, it can be seen that Runge's method agrees with the Taylor's
series solution upto the term in ℎ3 .

As such, Runge's method is the Runge-Kutta method of the third order.

∴ The third order Runge-Kutta formula is

y₁ = y₀ + (1/6)(k₁ + 4k₂ + k₃)

where k₁ = h f(x₀, y₀), k₂ = h f(x₀ + h/2, y₀ + k₁/2),

and k₃ = h f(x₀ + h, y₀ + k′), where k′ = h f(x₀ + h, y₀ + k₁).

(iv) Fourth order R-K method. This method is most commonly used and is often referred to as the
Runge-Kutta method only.

Working rule for finding the increment k of y corresponding to an increment h of x by the Runge-Kutta
method for

dy/dx = f(x, y), y(x₀) = y₀

is as follows:

Calculate successively

k₁ = h f(x₀, y₀),
k₂ = h f(x₀ + h/2, y₀ + k₁/2),
k₃ = h f(x₀ + h/2, y₀ + k₂/2),
k₄ = h f(x₀ + h, y₀ + k₃).

Finally compute k = (1/6)(k₁ + 2k₂ + 2k₃ + k₄), which gives the required approximate value as y₁ = y₀ + k.

(Note that k is the weighted mean of 𝑘 1 , 𝑘2 , 𝑘3 , and 𝑘4 ).

NOTE: One of the advantages of these methods is that the operation is identical whether the
differential equation is linear or non-linear.

EXAMPLE 1:
Apply the Runge-Kutta fourth order method to find an approximate value of 𝑦 when 𝑥 = 0.2 given
that 𝑑𝑦/𝑑𝑥 = 𝑥 + 𝑦 and 𝑦 = 1 when 𝑥 = 0.

Solution:

Here x₀ = 0, y₀ = 1, h = 0.2, f(x₀, y₀) = 1

∴ k₁ = h f(x₀, y₀) = 0.2 × 1 = 0.2000
k₂ = h f(x₀ + h/2, y₀ + k₁/2) = 0.2 × f(0.1, 1.1) = 0.2400
k₃ = h f(x₀ + h/2, y₀ + k₂/2) = 0.2 × f(0.1, 1.12) = 0.2440
k₄ = h f(x₀ + h, y₀ + k₃) = 0.2 × f(0.2, 1.244) = 0.2888

k = (1/6)(k₁ + 2k₂ + 2k₃ + k₄) = (1/6)(0.2000 + 0.4800 + 0.4880 + 0.2888)
 = (1/6)(1.4568) = 0.2428

Hence the required approximate value of 𝑦 is 1.2428 .
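The computation above maps directly onto a one-step function (a sketch of my own reproducing the example's numbers):

```python
# Sketch (my own): one classical fourth-order Runge-Kutta step for
# dy/dx = x + y, y(0) = 1, h = 0.2.

def rk4_step(f, x0, y0, h):
    k1 = h * f(x0, y0)
    k2 = h * f(x0 + h / 2, y0 + k1 / 2)
    k3 = h * f(x0 + h / 2, y0 + k2 / 2)
    k4 = h * f(x0 + h, y0 + k3)
    return y0 + (k1 + 2 * k2 + 2 * k3 + k4) / 6

print(round(rk4_step(lambda x, y: x + y, 0.0, 1.0, 0.2), 4))  # → 1.2428
```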



CONCLUSION:
In conclusion, the journey through this numerical analysis project has been an illuminating
exploration into the realm of mathematical modeling and computational techniques.
Throughout this endeavor, we have delved into a myriad of methods and formulas, each
playing a crucial role in the pursuit of accurate solutions to complex problems.

Beginning with foundational techniques such as numerical differentiation and integration, we


established a solid understanding of how to approximate derivatives and integrals using finite
difference and numerical quadrature methods. These techniques, including the central
difference formula and Simpson's rule, provided us with invaluable tools for tackling
problems where analytical solutions were elusive or impractical.

Moreover, the study of interpolation and approximation opened up new avenues for
approximating functions and data points. By employing Lagrange interpolation, Newton's
divided difference method, and cubic spline interpolation, we were able to construct
polynomial approximations that closely mimicked the behavior of the underlying functions,
facilitating the estimation of values between discrete data points with remarkable accuracy.

Furthermore, our exploration extended to the realm of numerical solutions for ordinary
differential equations (ODEs) and partial differential equations (PDEs). Leveraging methods
such as Euler's method, the Runge-Kutta family, and finite difference schemes, we gained
insights into how numerical algorithms can be utilized to simulate dynamic systems and
model physical phenomena with precision.

Additionally, the investigation into root-finding techniques, including the bisection method,
Newton-Raphson method, and secant method, equipped us with powerful tools for locating

the roots of nonlinear equations—a fundamental task with applications spanning various
fields of science and engineering.

Altogether, this project has underscored the indispensable role of numerical analysis in
modern scientific inquiry and engineering practice. By harnessing the power of mathematical
abstraction and computational machinery, we stand equipped to confront the complexities of
real-world problems with confidence and competence, armed with a rich toolkit of numerical
methods and formulas to guide us towards meaningful solutions.

As we conclude this project, let us not view it merely as an endpoint, but rather as a
springboard for further exploration and discovery in the boundless realm of numerical
analysis.

REFERENCES
 Atkinson, Kendall E. An Introduction to Numerical Analysis. John Wiley & Sons,
1989.

 Burden, Richard L., and J. Douglas Faires. Numerical Analysis. Cengage Learning,
2010.

 "Numerical Methods for Ordinary Differential Equations" by Wikipedia contributors.
Wikipedia, The Free Encyclopedia. [Online] Available at:
https://en.wikipedia.org/wiki/Numerical_methods_for_ordinary_differential_equations

 "Numerical Methods for Initial Value Problems" by Jeff R. Allen. University of Utah,
Department of Mathematics. [Online] Available at:
http://www.math.utah.edu/~allenf/teaching/2017Spring/2270/lab8/IvpIntro.html

 "Numerical Methods for Differential Equations" by L. Ridgway Scott. University of
Chicago, Department of Computer Science. [Online] Available at:
https://people.cs.uchicago.edu/~ridg/newtonapplet/DEtext/html/deintro.html
