UNIVERSITY OF LUCKNOW
MINOR PROJECT
ON
DECLARATION
I have acknowledged all the sources of information which I used in the project. The matter embodied in this
minor project has not been submitted by me to any other University or Institution for the award of any other
degree/diploma.
PRANJAL DWIVEDI
Roll no.-2110011015884
Department of Mathematics & Astronomy
Faculty of Science, University of Lucknow,
Lucknow-226007, Uttar Pradesh, India
ACKNOWLEDGEMENT
I would like to extend my heartfelt gratitude to Dr. R.R. Yadav for their invaluable
guidance and unwavering support throughout the completion of this minor project.
Pranjal Dwivedi
B.Sc. 3rd year
VIth Semester
[2110011015884]
CONTENTS
Abstract
Introduction
CHAPTER 1: Picard's Method
1.1 Overview
1.2 Formulation of Problem
1.3 Limitations
1.4 Examples
CHAPTER 2: Taylor's Series Method
2.1 Overview
2.2 Formulation of Problem
2.3 Limitations
2.4 Examples
CHAPTER 3: Euler's Method
3.1 Overview
3.2 Formulation of Problem
3.3 Limitations
3.4 Examples
CHAPTER 4: Modified Euler's Method
4.1 Overview
4.2 Formulation of Problem
4.3 Improvements
4.4 Examples
CHAPTER 5: Runge-Kutta Method
5.1 Overview
5.2 Formulation of Problem
5.3 Limitations and Advantages
5.4 Examples
CHAPTER 6: Milne's Predictor-Corrector Method
6.1 Overview
6.2 Formulation of Problem
6.3 Limitations and Advantages
6.4 Examples
CHAPTER 7: Adams-Bashforth Method
7.1 Overview
7.2 Formulation of Problem
7.3 Limitations and Advantages
7.4 Examples
CONCLUSION
REFERENCES
Abstract
Numerical methods are fundamental tools for solving first-order ordinary
differential equations (ODEs) in various scientific and engineering fields. This
thesis provides a comprehensive study of numerical techniques tailored for
efficient and accurate solutions to such equations. The primary focus is on
understanding, implementing, and analyzing the performance of key methods
including Picard's method, Taylor's method, Euler's method, the modified
Euler method, the Runge-Kutta method, the Milne predictor-corrector method,
and the Adams-Bashforth method. Theoretical analysis and practical examples are used
to demonstrate the strengths and limitations of each method, offering insights
into their application in real-world problems.
Introduction
Differential equations are among the most important mathematical tools used in producing
models in the physical sciences, biological sciences, and engineering. In this text, we consider
numerical methods for solving ordinary differential equations, that is, those differential
equations that have only one independent variable.
The differential equations we consider in most of this text are of the form
y′(t) = f(t, y(t)),
where the given function f(t, y) of two variables defines the differential equation. Examples are given in the chapters that follow.
The primary objective of this thesis is to provide a comprehensive analysis and comparison of
numerical methods for solving first-order ordinary differential equations (ODEs). The study
aims to deepen the understanding of these methods, including their theoretical principles,
practical implementation, and computational efficiency, with the goal of identifying their
strengths, weaknesses, and suitability for different types of ODEs and applications.
The first objective is to analyze and compare Picard's method, Taylor's method, Euler's
method, modified Euler method, Runge-Kutta method, Milne predictor-corrector method, and
Adams-Bashforth method for solving first-order ODEs. Each method will be examined in
detail, including its algorithm, implementation details, and theoretical analysis. The
comparative analysis will evaluate the methods based on accuracy, efficiency, and
computational cost, providing insights into their performance and applicability.
A further objective is to evaluate the practical applicability of each numerical method in
real-world problems. This includes examining the method's performance in solving ODEs
that arise in various fields such as physics, engineering, biology, and economics. By
evaluating the practical applicability of each method, this thesis aims to provide guidance on
selecting the most appropriate method for specific problems and applications.
Finally, this thesis aims to contribute to the existing body of knowledge in numerical methods
for ODEs. By providing a detailed analysis and comparison of these methods, this study
seeks to advance the understanding of these methods and their practical implications.
Additionally, this thesis aims to provide valuable insights for future research and
development in the field of numerical methods for ODEs.
Overall, this thesis seeks to provide a comprehensive analysis and comparison of numerical
methods for solving first-order ODEs, with the goal of advancing the understanding of these
methods and their practical implications.
The scope of this thesis encompasses a detailed examination and comparison of numerical
methods for solving first-order ordinary differential equations (ODEs). The study focuses on
seven main methods: Picard's method, Taylor's method, Euler's method, modified Euler
method, Runge-Kutta method, Milne predictor-corrector method, and Adams-Bashforth
method. These methods are fundamental in numerical analysis and find wide application in
various fields such as physics, engineering, biology, and economics. The scope also includes
an analysis of the theoretical principles, practical implementation, and computational
efficiency of each method.
1. In-Depth Analysis:
One of the key aspects of the scope is to provide an in-depth analysis of each numerical method.
This includes a detailed examination of the algorithm, implementation details, and theoretical
background of each method. By providing a comprehensive analysis, this thesis aims to deepen
the understanding of these methods and their practical implications.
2. Comparative Analysis:
Another important aspect of the scope is the comparative analysis of the numerical methods.
The study will compare the methods based on various criteria such as accuracy, efficiency, and
computational cost. By comparing these methods, this thesis aims to identify their strengths,
weaknesses, and suitability for different types of ODEs and applications.
3. Practical Applicability:
The scope also includes an evaluation of the practical applicability of each numerical method.
This involves examining the performance of the methods in solving ODEs that arise in real-world problems. By evaluating the practical applicability of each method, this thesis aims to
provide guidance on selecting the most appropriate method for specific problems and
applications.
4. Contribution to Knowledge:
Furthermore, this thesis aims to contribute to the existing body of knowledge in numerical
methods for ODEs. By providing a detailed analysis and comparison of these methods, this
study seeks to advance the understanding of these methods and their practical implications.
Additionally, this thesis aims to provide valuable insights for future research and development
in the field of numerical methods for ODEs.
Limitations:
Despite the comprehensive scope outlined above, there are certain limitations to this study.
Firstly, the study focuses solely on first-order ODEs and does not cover higher-order ODEs or
partial differential equations. Secondly, the analysis is limited to the methods mentioned above
and does not include other numerical methods for solving ODEs. Thirdly, the study does not
consider the implementation of these methods in specific software environments or
programming languages. Lastly, the practical applicability of these methods may vary
depending on the specific problem and may require further validation in practical scenarios.
In conclusion, while this thesis aims to provide a comprehensive analysis and comparison of
numerical methods for solving first-order ODEs, it is important to recognize the scope and
limitations of the study. By addressing these limitations, this thesis seeks to provide valuable
insights and contribute to the advancement of knowledge in numerical methods for ODEs.
1. Introduction
The introduction sets the stage for the thesis by providing an overview of the importance of
numerical methods in solving ODEs. It outlines the objectives of the thesis, the scope of the
study, and the structure of the thesis.
2. Mathematical Preliminaries
This chapter provides the necessary background in mathematics for understanding the
numerical methods discussed in later chapters. It covers topics such as differential equations,
linear algebra, and numerical linear algebra.
3. Picard's Method
This chapter focuses on Picard's method for solving first-order ODEs. It includes a detailed
explanation of the method, its algorithm, and practical implementation. The chapter also
discusses the theoretical analysis of the method and its practical applicability.
4. Taylor's Method
The chapter on Taylor's method provides a comprehensive analysis of this method, including
its algorithm, implementation details, and theoretical background. The chapter also compares
Taylor's method with other numerical methods for solving ODEs.
5. Euler's Method
This chapter examines Euler's method for solving first-order ODEs, including its algorithm,
implementation details, and theoretical analysis. The chapter also discusses the strengths and
limitations of Euler's method in practical applications.
6. Modified Euler Method
The chapter on the modified Euler method provides an in-depth analysis of this method,
including its algorithm, implementation details, and theoretical background. The chapter also
compares the modified Euler method with other numerical methods for solving ODEs.
7. Runge-Kutta Method
This chapter focuses on the Runge-Kutta method for solving first-order ODEs. It includes a
detailed explanation of the method, its algorithm, and practical implementation. The chapter
also discusses the theoretical analysis of the method and its practical applicability.
8. Milne Predictor-Corrector Method
The chapter on the Milne predictor-corrector method provides a comprehensive analysis of this
method, including its algorithm, implementation details, and theoretical background. The
chapter also compares the Milne predictor-corrector method with other numerical methods for
solving ODEs.
9. Adams-Bashforth Method
This chapter examines the Adams-Bashforth method for solving first-order ODEs, including
its algorithm, implementation details, and theoretical analysis. The chapter also discusses the
strengths and limitations of the Adams-Bashforth method.
10. Comparative Analysis
The comparative analysis chapter evaluates the numerical methods discussed in earlier chapters
based on criteria such as accuracy, efficiency, and computational cost. The chapter provides
insights into the strengths and limitations of each method and offers guidance for selecting the
most appropriate method for specific problems and applications.
11. Conclusion
The conclusion summarizes the key findings of the thesis and discusses the implications of the
study. It also suggests areas for future research and development in the field of numerical
methods for solving first-order ODEs.
12. References
The references section lists all the sources cited in the thesis, providing readers with additional
resources for further reading and research.
The study of numerical methods for solving first-order ordinary differential equations (ODEs)
holds significant importance in various fields of science and engineering. This thesis aims to
provide a comprehensive analysis and comparison of these methods, highlighting their
strengths, weaknesses, and practical applicability. The significance of this study lies in several
key areas:
This study contributes to the advancement of knowledge in numerical methods for ODEs by
providing a detailed analysis and comparison of seven main methods: Picard's method, Taylor's
method, Euler's method, modified Euler method, Runge-Kutta method, Milne predictor-
corrector method, and Adams-Bashforth method. By conducting a thorough examination of
these methods, this study seeks to deepen the understanding of their theoretical principles and
practical implications.
The comparative analysis of these methods based on criteria such as accuracy, efficiency, and
computational cost offers valuable insights into their strengths and limitations. This guidance
can help practitioners and researchers make informed decisions when choosing a numerical
method for their specific problems and applications.
This study also has the potential to inspire future research and development in the field of
numerical methods for ODEs. By identifying areas where further improvements can be made
and new methods can be developed, this study lays the groundwork for future advancements
in numerical analysis. The insights provided in this study can serve as a foundation for future
research projects aimed at enhancing the efficiency and accuracy of numerical computations
in solving ODEs.
Lastly, this study contributes to education and training in the field of numerical analysis by
providing a comprehensive overview of these methods. The detailed analysis and comparison
of these methods can serve as a valuable resource for students and educators seeking to deepen
their understanding of numerical methods for ODEs. By enhancing the educational resources
available in this field, this study aims to contribute to the development of skilled professionals
capable of solving complex problems using numerical techniques.
In conclusion, this study on numerical methods for solving first-order ODEs holds significant
importance in advancing knowledge, solving real-world problems, guiding practitioners and
researchers, inspiring future research, and contributing to education and training. By
providing a comprehensive analysis and comparison of these methods, this study aims to
make a valuable contribution to the field of numerical analysis and its practical applications.
Mathematical Preliminaries:
Consider the first order differential equation
dy/dx = f(x, y),  given y(x₀) = y₀   (1)
Our aim is to study the various numerical methods of solving such equations. In most of these methods,
we replace the differential equation by a difference equation and then solve it. These methods
yield solutions either as a power series in x from which the values of y can be found by direct
substitution, or a set of values of x and y. The methods of Picard and Taylor series belong to
the former class of solutions. In these methods, y in (1) is approximated by a truncated series,
each term of which is a function of x. The information about the curve at one point is utilized
and the solution is not iterated. As such, these are referred to as single-step methods.
The methods of Euler, Runge-Kutta, Milne, Adams-Bashforth, etc. belong to the latter class
of solutions. In these methods, the next point on the curve is evaluated in short steps ahead,
by performing iterations until sufficient accuracy is achieved. As such, these methods are
called step-by-step methods.
Euler and Runge-Kutta methods are used for computing y over a limited range of x-values,
whereas Milne and Adams methods may be applied for finding y over a wider range of
x-values. The Milne and Adams methods, however, require starting values, which are found by
Picard's, Taylor's series, or Runge-Kutta methods.
An nth order differential equation is of the form
F(x, y, dy/dx, d²y/dx², …, dⁿy/dxⁿ) = 0   (2)
Its general solution contains n arbitrary constants and is of the form
φ(x, y, c₁, c₂, …, cₙ) = 0   (3)
To obtain its particular solution, n conditions must be given so that the constants c1, c2,…, cn
can be determined.
If these conditions are prescribed at one point only (say, x = x₀), then the differential equation
together with the conditions constitutes an initial value problem of the nth order.
If the conditions are prescribed at two or more points, then the problem is termed a
boundary value problem.
In this chapter, we shall first describe methods for solving initial value problems and then
explain the finite difference method and shooting method for solving boundary value
problems.
CHAPTER 1:
PICARD'S METHOD
1.1 Overview
At its core, Picard's method relies on the idea of approximating a solution to an ODE by
iteratively improving an initial guess. The method is based on the Picard-Lindelöf theorem,
which guarantees the existence and uniqueness of a solution to an IVP under certain
conditions on the function f(x, y) and the initial condition y(x₀) = y₀.
1.2 FORMULATION OF PROBLEM
It is required to find that particular solution of (1) which assumes the value y₀ when
x = x₀. Integrating (1) between limits, we get
∫ from y₀ to y of dy = ∫ from x₀ to x of f(x, y) dx,  or  y = y₀ + ∫ from x₀ to x of f(x, y) dx   (2)
This is an integral equation equivalent to (1), for it contains the unknown y under the
integral sign.
As a first approximation y₁ to the solution, we put y = y₀ in f(x, y) and integrate (2), giving
y₁ = y₀ + ∫ from x₀ to x of f(x, y₀) dx
For a second approximation y₂, we put y = y₁ in f(x, y) and integrate (2), giving
y₂ = y₀ + ∫ from x₀ to x of f(x, y₁) dx
Similarly, a third approximation is
y₃ = y₀ + ∫ from x₀ to x of f(x, y₂) dx
Continuing this process, the nth approximation is
yₙ = y₀ + ∫ from x₀ to x of f(x, yₙ₋₁) dx
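The successive substitutions above can be carried out mechanically with a computer algebra system. The following sketch uses the third-party SymPy library; the helper name `picard` and its signature are illustrative assumptions, not part of the text.

```python
import sympy as sp

x, t = sp.symbols('x t')

def picard(f, y0, x0, n):
    """Return the first n Picard approximations y1, ..., yn for
    y' = f(x, y) with y(x0) = y0, starting from the constant guess y0."""
    y = sp.sympify(y0)
    approximations = []
    for _ in range(n):
        # y_{k+1}(x) = y0 + integral from x0 to x of f(t, y_k(t)) dt
        y = y0 + sp.integrate(f(t, y.subs(x, t)), (t, x0, x))
        approximations.append(sp.expand(y))
    return approximations

# y' = x + y, y(0) = 1, as in Example 1 below
y1, y2, y3 = picard(lambda t, y: t + y, 1, 0, 3)
# y1 = 1 + x + x**2/2 and y2 = 1 + x + x**2 + x**3/6, matching the worked example
```

Each call to `sp.integrate` performs exactly one substitution-and-integration step of the iteration, so the returned list reproduces the hand computation term by term.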
1.3 LIMITATIONS
Some generalized limitations of Picard's method are:
1. Local Solution: It provides solutions that are valid only in a small neighborhood of the
initial point, limiting its applicability to broader contexts.
2. Not Universally Applicable: While effective for many problems, Picard's method may
not be suitable for all types of differential equations, particularly those with irregular
solutions or singularities. Moreover, each iteration requires an integral that may not be
expressible in closed form, as Example 2 below illustrates.
These limitations highlight the need for careful consideration and possibly the use of
alternative methods depending on the specific characteristics of the problem.
1.4 EXAMPLES:
EXAMPLE 1
Using Picard's process of successive approximations, solve dy/dx = x + y with y(0) = 1, obtaining approximations up to the fifth, and compare the result with the exact solution.
Solution:
(i) We have y = 1 + ∫₀ˣ (x + y) dx
First approximation: put y = 1 in the integrand:
y₁ = 1 + ∫₀ˣ (1 + x) dx = 1 + x + x²/2
Second approximation: put y = y₁:
y₂ = 1 + ∫₀ˣ (1 + 2x + x²/2) dx = 1 + x + x² + x³/6
Third approximation: put y = y₂:
y₃ = 1 + ∫₀ˣ (1 + 2x + x² + x³/6) dx = 1 + x + x² + x³/3 + x⁴/24
Fourth approximation: put y = y₃:
y₄ = 1 + ∫₀ˣ (1 + 2x + x² + x³/3 + x⁴/24) dx = 1 + x + x² + x³/3 + x⁴/12 + x⁵/120
Fifth approximation: put y = y₄:
y₅ = 1 + ∫₀ˣ (1 + 2x + x² + x³/3 + x⁴/12 + x⁵/120) dx
   = 1 + x + x² + x³/3 + x⁴/12 + x⁵/60 + x⁶/720   (1)
(ii) The exact solution: writing the equation as dy/dx − y = x, it is a Leibnitz linear equation with integrating factor e⁻ˣ, so that
y e⁻ˣ = ∫ x e⁻ˣ dx + c = −e⁻ˣ(x + 1) + c
Since y = 1 when x = 0, c = 2. Hence the exact solution is
y = 2eˣ − x − 1   (2)
Using the series eˣ = 1 + x + x²/2! + x³/3! + x⁴/4! + ⋯, we get
y = 1 + x + x² + x³/3 + x⁴/12 + x⁵/60 + x⁶/360 + ⋯   (3)
Comparing (1) and (3), it is clear that (1) approximates the exact particular solution (3)
up to the term in x⁵.
NOTE: At x = 1, the fourth approximation gives y₄ = 3.433 and the fifth gives y₅ = 3.434, whereas the exact value 2e − 2 is 3.44 (to two decimals).
EXAMPLE 2
Find an approximate value of y for x = 0.1 by Picard's method, given that
dy/dx = (y − x)/(y + x),  y(0) = 1
Solution: We have y = 1 + ∫₀ˣ (y − x)/(y + x) dx
First approximation: putting y = 1 in the integrand,
y₁ = 1 + ∫₀ˣ (1 − x)/(1 + x) dx = 1 + ∫₀ˣ [−1 + 2/(1 + x)] dx
   = 1 + [−x + 2 log(1 + x)]₀ˣ = 1 − x + 2 log(1 + x)   (i)
Second approximation: putting y = y₁ in the integrand,
y₂ = 1 + ∫₀ˣ [1 − x + 2 log(1 + x) − x] / [1 − x + 2 log(1 + x) + x] dx
   = 1 + ∫₀ˣ [1 − 2x/(1 + 2 log(1 + x))] dx,
which cannot be integrated in closed form. Hence we use the first approximation, and taking x = 0.1 in (i) we obtain
y(0.1) = 1 − 0.1 + 2 log(1.1) = 0.9 + 2(0.0953) = 1.0906
CHAPTER 2:
TAYLOR'S SERIES METHOD
2.1 OVERVIEW
Consider the first order equation
dy/dx = f(x, y)   (1)
Differentiating (1), we have
d²y/dx² = ∂f/∂x + (∂f/∂y)(dy/dx),  i.e.,  y″ = fₓ + f_y f   (2)
Differentiating (2) successively, we can get y‴, y⁗, etc. Putting x = x₀ and y = y₀, the values (y′)₀, (y″)₀, (y‴)₀, … can be obtained. Hence the Taylor's series
y = y₀ + (x − x₀)(y′)₀ + ((x − x₀)²/2!)(y″)₀ + ((x − x₀)³/3!)(y‴)₀ + ⋯   (3)
gives the values of y for every value of x for which (3) converges.
On finding the value y₁ for x = x₁ from (3), y′, y″, etc. can be evaluated at x = x₁ by means
of (1), (2), etc. Then y can be expanded about x = x₁. In this way, the solution can be
extended beyond the range of convergence of series (3).
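The derivative chain (1)-(2) can be generated automatically. Below is a minimal sketch using the SymPy library; the helper name `taylor_step` and its argument names are our own illustrative choices, not from the text.

```python
import sympy as sp

x, Y = sp.symbols('x Y')  # Y stands in for y(x) when forming total derivatives

def taylor_step(f, x0, y0, h, order=4):
    """Advance y' = f(x, y) one step of size h from (x0, y0) using the
    Taylor series truncated after the h**order term."""
    derivs = [f(x, Y)]  # y'
    for _ in range(order - 1):
        d = derivs[-1]
        # total derivative: d/dx = partial d/dx + f * partial d/dY
        derivs.append(sp.expand(sp.diff(d, x) + sp.diff(d, Y) * f(x, Y)))
    total, fact = sp.Float(y0), 1
    for k, d in enumerate(derivs, start=1):
        fact *= k
        total += d.subs({x: x0, Y: y0}) * h**k / fact
    return float(total)

# y' = x + y, y(0) = 1, as in Example 1 below; fourth-order series at x = 0.1
y_01 = taylor_step(lambda x, y: x + y, 0.0, 1.0, 0.1)
```

For this equation the successive total derivatives reproduce y″ = 1 + y′, y‴ = y″, etc., and the returned value agrees with the hand-computed 1.1103 of Example 1.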
2.3 LIMITATIONS
Despite its accuracy and effectiveness in certain cases, Taylor's series method has several
limitations that should be considered:
1. Limited Applicability: The method is most suitable for problems where the function
and its derivatives are well-behaved and can be easily evaluated. It may not be suitable for
functions with singularities or discontinuities.
2. Computational Burden: The method can be challenging due to the need to calculate and
manage higher-order derivatives, especially for functions with complex expressions.
Despite these limitations, Taylor's series method remains a valuable tool in numerical analysis,
particularly for problems where high accuracy is required and the function and its derivatives
are well-behaved.
2.4 EXAMPLES
EXAMPLE 1
Solve 𝑦 ′ = 𝑥 + 𝑦, 𝑦(0) = 1 by Taylor's series method. Hence find the values of 𝑦 at 𝑥 = 0.1
and 𝑥 = 0.2.
Solution:
y′ = x + y,  (y′)₀ = 1  [∵ y(0) = 1]
y″ = 1 + y′,  (y″)₀ = 2
y‴ = y″,  (y‴)₀ = 2
y⁗ = y‴,  (y⁗)₀ = 2, etc.
Taylor's series is
y = y₀ + (x − x₀)(y′)₀ + ((x − x₀)²/2!)(y″)₀ + ((x − x₀)³/3!)(y‴)₀ + ⋯
Here x₀ = 0, y₀ = 1.
∴ y = 1 + x(1) + (x²/2)(2) + (x³/3!)(2) + (x⁴/4!)(2) + ⋯ = 1 + x + x² + x³/3 + x⁴/12 + ⋯
Thus y(0.1) = 1 + 0.1 + (0.1)² + (0.1)³/3 + (0.1)⁴/12 + ⋯ = 1.1103
and y(0.2) = 1 + 0.2 + (0.2)² + (0.2)³/3 + (0.2)⁴/12 + ⋯ = 1.2427
EXAMPLE 2
Find by Taylor's series method, the values of 𝑦 at 𝑥 = 0.1 and 𝑥 = 0.2 to five places of
decimals from 𝑑𝑦/𝑑𝑥 = 𝑥 2 𝑦 − 1, 𝑦(0) = 1.
Solution:
y′ = x²y − 1,  (y′)₀ = −1  [∵ y(0) = 1]
y″ = 2xy + x²y′,  (y″)₀ = 0
y‴ = 2y + 4xy′ + x²y″,  (y‴)₀ = 2
y⁗ = 6y′ + 6xy″ + x²y‴,  (y⁗)₀ = −6, etc.
∴ y = 1 + x(−1) + (x²/2)(0) + (x³/3!)(2) + (x⁴/4!)(−6) + ⋯ = 1 − x + x³/3 − x⁴/4 + ⋯
Thus y(0.1) = 1 − 0.1 + (0.1)³/3 − (0.1)⁴/4 + ⋯ = 0.90031
and y(0.2) = 1 − 0.2 + (0.2)³/3 − (0.2)⁴/4 + ⋯ = 0.80227
CHAPTER 3:
EULER’S METHOD
3.1 OVERVIEW
Euler's method is a straightforward and widely used numerical technique for approximating
solutions to ordinary differential equations (ODEs). It is particularly useful when analytical
solutions are difficult or impossible to obtain. Euler's method is based on the concept of
approximating the solution curve of an ODE by using short straight line segments.
Consider the equation
dy/dx = f(x, y)   (1)
given that y(x₀) = y₀. Its curve of solution through P(x₀, y₀) is shown dotted in Figure 10.1. Now
we have to find the ordinate of any other point Q on this curve.
FIGURE 10.1
In the interval LL₁, we approximate the curve by the tangent at P. If the ordinate through L₁
meets this tangent in P₁(x₀ + h, y₁), then
y₁ = L₁P₁ = LP + R₁P₁ = y₀ + PR₁ tan θ = y₀ + h (dy/dx)_P = y₀ + h f(x₀, y₀)
Let P₁Q₁ be the curve of solution of (1) through P₁ and let its tangent at P₁ meet the ordinate
through L₂ in P₂(x₀ + 2h, y₂). Then
y₂ = y₁ + h f(x₀ + h, y₁)
Repeating this process n times, we finally reach an approximation yₙ of y(x₀ + nh) given by
yₙ = yₙ₋₁ + h f(x₀ + (n − 1)h, yₙ₋₁)
NOTE: Obs. In Euler's method, we approximate the curve of solution by the tangent in each
interval, i.e., by a sequence of short lines. Unless ℎ is small, the error is bound to be quite
significant. This sequence of lines may also deviate considerably from the curve of solution.
As such, the method is very slow and hence there is a modification of this method which is
given in the next section.
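The stepping rule y_{n+1} = y_n + h f(x_n, y_n) is only a few lines of code. This is a minimal sketch in Python; the function name `euler` is our own illustrative choice.

```python
def euler(f, x0, y0, h, n):
    """Approximate y(x0 + n*h) for y' = f(x, y), y(x0) = y0 by
    advancing along the tangent in n steps of size h."""
    x, y = x0, y0
    for _ in range(n):
        y += h * f(x, y)   # y_{k+1} = y_k + h f(x_k, y_k)
        x += h
    return y

# Example 1 below: dy/dx = x + y, y(0) = 1, with n = 10 steps of h = 0.1
approx = euler(lambda x, y: x + y, 0.0, 1.0, 0.1, 10)
```

Carrying full precision gives y(1) ≈ 3.187; the hand computation in Example 1, which rounds to two decimals at every step, records 3.18.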
3.3 LIMITATIONS
While Euler's method is a simple and useful technique for approximating solutions to ordinary
differential equations (ODEs), it has several limitations that should be considered:
1. Accuracy: Euler's method is a first-order method, meaning that the error in the
approximation is proportional to the step size h. As a result, it may require very small step
sizes to achieve accurate results, which can be computationally expensive.
2. Global Error Accumulation: Since Euler's method only uses information from the previous
point to calculate the next point, errors can accumulate over time, especially for long
integration intervals.
3. Sensitivity to Step Size: The accuracy of Euler's method is highly sensitive to the choice of
step size h. Using an inappropriate step size can result in significant errors in the
approximation.
4. Dependence on Starting Data: Euler's method propagates the solution from the initial
condition y(x₀) = y₀ alone; if the starting data are inaccurate, the error is carried forward
and the computed solution may be inaccurate over the whole interval.
5. Improvements Needed for Higher Accuracy: While Euler's method is a good starting point
for numerical integration, more advanced methods like the Runge-Kutta methods are often
needed to achieve higher accuracy for complex problems.
Despite these limitations, Euler's method remains a valuable tool in numerical analysis,
particularly for educational purposes and for gaining an initial understanding of numerical
integration techniques.
3.4 EXAMPLES
EXAMPLE 1
Using Euler's method, find an approximate value of y corresponding to x = 1, given that dy/dx = x + y and y = 1 when x = 0.
Solution:
We take 𝑛 = 10 and ℎ = 0.1 which is sufficiently small. The various calculations are arranged
as follows:
x      y      x + y = dy/dx    old y + 0.1 (dy/dx) = new y
0.0    1.00   1.00             1.00 + 0.1(1.00) = 1.10
0.1    1.10   1.20             1.10 + 0.1(1.20) = 1.22
0.2    1.22   1.42             1.22 + 0.1(1.42) = 1.36
0.3    1.36   1.66             1.36 + 0.1(1.66) = 1.53
0.4    1.53   1.93             1.53 + 0.1(1.93) = 1.72
0.5    1.72   2.22             1.72 + 0.1(2.22) = 1.94
0.6    1.94   2.54             1.94 + 0.1(2.54) = 2.19
0.7    2.19   2.89             2.19 + 0.1(2.89) = 2.48
0.8    2.48   3.28             2.48 + 0.1(3.28) = 2.81
0.9    2.81   3.71             2.81 + 0.1(3.71) = 3.18
1.0    3.18
Hence the required approximate value is y(1) = 3.18.
NOTE: The exact solution (obtained in Example 1 of the chapter on Picard's method) gives the
true value y(1) = 3.44, whereas Euler's method gives y = 3.18 and Picard's method gives
y = 3.434. In the above solution, had we chosen n = 20, the accuracy would have been
considerably increased, but at the expense of double the labor of computation. Euler's method
is no doubt very simple, but it cannot be considered one of the best.
EXAMPLE 2
Given dy/dx = (y − x)/(y + x) with initial condition y = 1 at x = 0; find y for x = 0.1 by Euler's method.
Solution:
We divide the interval (0, 0.1) into five steps, i.e., we take n = 5 and h = 0.02. The various
calculations are arranged as follows:
x      y        (y − x)/(y + x)    new y = old y + 0.02 (y − x)/(y + x)
0.00   1.0000   1.0000             1.0200
0.02   1.0200   0.9615             1.0392
0.04   1.0392   0.9259             1.0577
0.06   1.0577   0.8926             1.0756
0.08   1.0756   0.8615             1.0928
0.10   1.0928
Hence y(0.1) = 1.0928.
CHAPTER 4:
MODIFIED EULER'S METHOD
4.1 OVERVIEW
Modified Euler's method, also known as Heun's method or the improved Euler method, is a
numerical technique used to approximate solutions to ordinary differential equations (ODEs).
It is an extension of the basic Euler's method, offering improved accuracy by using a more
sophisticated approach. Here's an overview of the method:
In Euler's method, the curve of solution in the interval LL₁ is approximated by the tangent at P
(Figure 10.1), so that at P₁ we have
y₁ = y₀ + h f(x₀, y₀)   (1)
Now we find a better approximation y₁⁽¹⁾ of y(x₀ + h) by taking the slope of the curve as the
mean of the slopes of the tangents at P and P₁, i.e.,
y₁⁽¹⁾ = y₀ + (h/2)[f(x₀, y₀) + f(x₀ + h, y₁)]   (2)
As the slope of the tangent at P₁ is not known, we take y₁ as found in (1) by Euler's method
and insert it on the R.H.S. of (2) to obtain the first modified value y₁⁽¹⁾.
Again (2) is applied to find a still better value y₁⁽²⁾ corresponding to L₁:
y₁⁽²⁾ = y₀ + (h/2)[f(x₀, y₀) + f(x₀ + h, y₁⁽¹⁾)]
We repeat this step until two consecutive values of y agree. This is then taken as the starting
point for the next interval L₁L₂.
Once y₁ is obtained to the desired accuracy, y₂ corresponding to L₂ is found from (1),
y₂ = y₁ + h f(x₀ + h, y₁),
and a better approximation y₂⁽¹⁾ is obtained from (2),
y₂⁽¹⁾ = y₁ + (h/2)[f(x₀ + h, y₁) + f(x₀ + 2h, y₂)]
We repeat this step until 𝑦2 becomes stationary. Then we proceed to calculate 𝑦3 as above
and so on.
This is the modified Euler's method which gives great improvement in accuracy over the
original method.
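The predict-and-correct loop described above can be sketched as follows; the helper name `modified_euler` and the explicit stopping tolerance are illustrative assumptions (the text simply iterates until two consecutive values agree).

```python
def modified_euler(f, x0, y0, h, n, tol=1e-4, max_iter=25):
    """Modified Euler's method for y' = f(x, y), y(x0) = y0: predict with
    Euler, then iterate the mean-slope corrector until two consecutive
    values of y agree to within tol."""
    x, y = x0, y0
    for _ in range(n):
        y_new = y + h * f(x, y)                       # Euler predictor (1)
        for _ in range(max_iter):
            # corrector (2): mean of slopes at the two ends of the interval
            y_corr = y + (h / 2) * (f(x, y) + f(x + h, y_new))
            converged = abs(y_corr - y_new) < tol
            y_new = y_corr
            if converged:
                break
        x, y = x + h, y_new
    return y

# Example 1 below: y' = x + y, y(0) = 1; three steps of h = 0.1 give y(0.3)
y_03 = modified_euler(lambda x, y: x + y, 0.0, 1.0, 0.1, 3)
# y_03 is close to the hand-computed value 1.4004
```

The inner loop mirrors the repeated application of formula (2), and the outer loop moves from one interval to the next once the corrected value has become stationary.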
4.3 IMPROVEMENTS:
The basic Euler's method has the following shortcomings:
1. Accuracy: Euler's method is a first-order method, meaning that the error in the
approximation is proportional to the step size. It may require very small step sizes to achieve
accurate results, which can be computationally expensive.
2. Numerical Stability: Euler's method can be numerically unstable for certain types of
ODEs, particularly those with rapidly changing or oscillatory solutions. Using too large a step
size can lead to instability and inaccurate results.
3. Sensitivity to Step Size: The accuracy of Euler's method is highly sensitive to the choice
of step size. Using an inappropriate step size can result in significant errors in the
approximation.
4. Global Error Accumulation: Errors can accumulate over time, especially for long
integration intervals, leading to inaccurate results for the overall solution.
Modified Euler's method improves on these points:
1. Accuracy: Being a second-order method, its error per step is proportional to h³ (and the
global error to h²), so it is considerably more accurate than Euler's method for the same
step size.
2. Stability: Modified Euler's method is more stable than Euler's method for certain types of
ODEs, especially those with rapidly changing solutions. It can handle larger step sizes
without sacrificing accuracy.
3. Convergence Rate: Modified Euler's method converges to the true solution faster than
Euler's method, especially for smoother functions.
Overall, Modified Euler's method offers a good balance between accuracy and computational
efficiency, making it a popular choice for many numerical ODE problems. However, for
problems requiring higher accuracy or stability, more advanced methods like the Runge-
Kutta methods may be more appropriate.
4.4 EXAMPLES
EXAMPLE 1
Using modified Euler's method, find an approximate value of 𝑦 when 𝑥 = 0.3, given that
𝑑𝑦/𝑑𝑥 = 𝑥 + 𝑦 and 𝑦 = 1 when 𝑥 = 0.
Solution:
We take h = 0.1 and f(x, y) = x + y. Each trial value is new y = old y + h × (mean slope); the
first entry of each block is obtained from the Euler predictor.
x      x + y (slope at trial point)   mean slope                     new value of y
0.1    0.1 + 1.1000 = 1.2000          ½(1 + 1.2000) = 1.1000         1.0000 + 0.1(1.1000) = 1.1100
0.1    0.1 + 1.1100 = 1.2100          ½(1 + 1.2100) = 1.1050         1.0000 + 0.1(1.1050) = 1.1105
0.1    0.1 + 1.1105 = 1.2105          ½(1 + 1.2105) = 1.1052         1.0000 + 0.1(1.1052) = 1.1105
Since the last two values are equal, we take y(0.1) = 1.1105.
0.2    0.2 + 1.2316 = 1.4316          ½(1.2105 + 1.4316) = 1.3211    1.1105 + 0.1(1.3211) = 1.2426
0.2    0.2 + 1.2426 = 1.4426          ½(1.2105 + 1.4426) = 1.3266    1.1105 + 0.1(1.3266) = 1.2432
0.2    0.2 + 1.2432 = 1.4432          ½(1.2105 + 1.4432) = 1.3268    1.1105 + 0.1(1.3268) = 1.2432
Since the last two values are equal, we take y(0.2) = 1.2432.
0.3    0.3 + 1.3875 = 1.6875          ½(1.4432 + 1.6875) = 1.5654    1.2432 + 0.1(1.5654) = 1.3997
0.3    0.3 + 1.3997 = 1.6997          ½(1.4432 + 1.6997) = 1.5715    1.2432 + 0.1(1.5715) = 1.4003
0.3    0.3 + 1.4003 = 1.7003          ½(1.4432 + 1.7003) = 1.5718    1.2432 + 0.1(1.5718) = 1.4004
0.3    0.3 + 1.4004 = 1.7004          ½(1.4432 + 1.7004) = 1.5718    1.2432 + 0.1(1.5718) = 1.4004
Since the last two values are equal, we take y(0.3) = 1.4004.
NOTE: The exact solution y = 2eˣ − x − 1 gives the true value y(0.3) = 1.3997. The modified
Euler value 1.4004 is considerably nearer this true value than the basic Euler approximation
with the same step size, which shows the improvement in accuracy.
EXAMPLE 2
Using the modified Euler's method, find 𝑦(0.2) and 𝑦(0.4) given
𝑦 ′ = 𝑦 + 𝑒 𝑥 , 𝑦(0) = 0
Solution:
Here f(x, y) = y + eˣ, x₀ = 0, y₀ = 0, and h = 0.2. Each trial value is new y = old y + h × (mean
slope); the first entry of each block comes from the Euler predictor.
To calculate y(0.2):
x      y + eˣ                       mean slope                        new value of y
0.2    0.2 + e^0.2 = 1.4214         ½(1 + 1.4214) = 1.2107            0 + 0.2(1.2107) = 0.2421
0.2    0.2421 + e^0.2 = 1.4635      ½(1 + 1.4635) = 1.2317            0 + 0.2(1.2317) = 0.2463
0.2    0.2463 + e^0.2 = 1.4677      ½(1 + 1.4677) = 1.2338            0 + 0.2(1.2338) = 0.2468
0.2    0.2468 + e^0.2 = 1.4682      ½(1 + 1.4682) = 1.2341            0 + 0.2(1.2341) = 0.2468
Since the last two values of y are equal, we take y(0.2) = 0.2468.
To calculate y(0.4):
0.4    0.5404 + e^0.4 = 2.0322      ½(1.4682 + 2.0322) = 1.7502       0.2468 + 0.2(1.7502) = 0.5968
0.4    0.5968 + e^0.4 = 2.0887      ½(1.4682 + 2.0887) = 1.7784       0.2468 + 0.2(1.7784) = 0.6025
0.4    0.6025 + e^0.4 = 2.0943      ½(1.4682 + 2.0943) = 1.78125      0.2468 + 0.2(1.78125) = 0.6030
0.4    0.6030 + e^0.4 = 2.0949      ½(1.4682 + 2.0949) = 1.7815       0.2468 + 0.2(1.7815) = 0.6031
0.4    0.6031 + e^0.4 = 2.0949      ½(1.4682 + 2.0949) = 1.7816       0.2468 + 0.2(1.7816) = 0.6031
Since the last two values of y are equal, we take y(0.4) = 0.6031.
CHAPTER 5:
Runge-Kutta Method
5.1 OVERVIEW
The Taylor's series method of solving differential equations numerically is restricted by the
labor involved in finding the higher order derivatives. However, there is a class of methods
known as Runge-Kutta methods which do not require the calculations of higher order
derivatives and give greater accuracy. The Runge-Kutta formulae possess the advantage of
requiring only the function values at some selected points. These methods agree with Taylor's
series solution up to the term in h^r, where r differs from method to method and is called the
order of that method.
(i) First order R-K method. Euler's method (Chapter 3) gives y₁ = y₀ + h y₀′, while Taylor's series gives
y₁ = y(x₀ + h) = y₀ + h y₀′ + (h²/2!) y₀″ + ⋯
It follows that Euler's method agrees with the Taylor's series solution up to the term in h;
hence it is the Runge-Kutta method of the first order.
(ii) Second order R-K method. The modified Euler's method gives
y₁ = y₀ + (h/2)[f(x₀, y₀) + f(x₀ + h, y₁)]   (1)
Substituting the Euler value y₁ = y₀ + h f₀ on the right-hand side, this becomes
y₁ = y₀ + (h/2)[f₀ + f(x₀ + h, y₀ + h f₀)],  where f₀ = f(x₀, y₀)   (2)
Also, by Taylor's series,
y₁ = y(x₀ + h) = y₀ + h y₀′ + (h²/2!) y₀″ + (h³/3!) y₀‴ + ⋯   (3)
Expanding f(x₀ + h, y₀ + h f₀) by Taylor's series for a function of two variables, (2) gives
y₁ = y₀ + (h/2)[f₀ + {f₀ + h(∂f/∂x)₀ + h f₀ (∂f/∂y)₀ + O(h²)}]
   = y₀ + (1/2)[h f₀ + h f₀ + h²{(∂f/∂x)₀ + f₀ (∂f/∂y)₀} + O(h³)]
   = y₀ + h f₀ + (h²/2) f₀′ + O(h³)   (4)   [∵ df(x, y)/dx = ∂f/∂x + f ∂f/∂y]
Comparing (3) and (4), it follows that the modified Euler's method agrees with the Taylor's
series solution upto the term in ℎ2 .
Hence the modified Euler's method is the Runge-Kutta method of the second order. ∴ The
second order Runge-Kutta formula is
y₁ = y₀ + ½(k₁ + k₂),  where k₁ = h f(x₀, y₀) and k₂ = h f(x₀ + h, y₀ + k₁).
(iii) Third order R-K method. Similarly, it can be seen that Runge's method agrees with the
Taylor's series solution up to the term in h³. Its formula is
P a g e | 40
1
𝑦1 = 𝑦0 + (𝑘1 + 4𝑘2 + 𝑘3 )
6
1 1
Where, 𝑘1 = ℎ𝑓(𝑥0 , 𝑦0 ), 𝑘2 = ℎ𝑓 (𝑥0 + ℎ, 𝑦0 + 𝑘1 )
2 2
(iv) Fourth order R-K method. This method is most commonly used and is often referred to as the Runge-Kutta method only. It agrees with the Taylor's series solution up to the term in h⁴. The working rule for finding the increment k of y corresponding to an increment h of x is as follows:
Given dy/dx = f(x, y) with y(x0) = y0, calculate successively
k1 = hf(x0, y0)
k2 = hf(x0 + ½h, y0 + ½k1)
k3 = hf(x0 + ½h, y0 + ½k2)
k4 = hf(x0 + h, y0 + k3)
and
k = (1/6)(k1 + 2k2 + 2k3 + k4)
Finally compute y1 = y0 + k, which gives the required approximate value y(x0 + h).
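The working rule translates directly into a short Python function (a sketch; the helper name rk4_step is illustrative):

```python
def rk4_step(f, x0, y0, h):
    """One step of the classical fourth-order Runge-Kutta method."""
    k1 = h * f(x0, y0)
    k2 = h * f(x0 + h/2, y0 + k1/2)
    k3 = h * f(x0 + h/2, y0 + k2/2)
    k4 = h * f(x0 + h, y0 + k3)
    k = (k1 + 2*k2 + 2*k3 + k4) / 6   # weighted mean increment
    return y0 + k

# dy/dx = x + y, y(0) = 1 (Example 1 of Section 5.4), one step of h = 0.2
y = rk4_step(lambda x, y: x + y, 0.0, 1.0, 0.2)
print(round(y, 4))  # 1.2428
```

Only four evaluations of f are needed per step, and no derivatives of f are ever computed.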
Note: One of the advantages of these methods is that the operation is identical whether the differential equation is linear or non-linear.
5.3 LIMITATIONS AND ADVANTAGES
Limitations:
1. Complexity: Runge-Kutta methods are more complex than Euler's method and Modified
Euler's method, requiring more computational effort to implement.
Advantages:
1. Higher Accuracy: Runge-Kutta methods are generally more accurate than Euler's method
and Modified Euler's method, especially for solving ODEs with complex behavior or rapidly
changing solutions.
2. Adaptive Step Size: Runge-Kutta methods can be easily adapted to use variable step
sizes, allowing for more efficient computation by focusing computational effort where it is
most needed.
5.4 EXAMPLES
EXAMPLE 1
Apply the Runge-Kutta fourth order method to find an approximate value of 𝑦 when 𝑥 = 0.2
given that 𝑑𝑦/𝑑𝑥 = 𝑥 + 𝑦 and 𝑦 = 1 when 𝑥 = 0.
Solution:
Here f(x, y) = x + y, x0 = 0, y0 = 1. Taking h = 0.2, we calculate
k1 = hf(x0, y0) = 0.2(0 + 1) = 0.2000
k2 = hf(x0 + ½h, y0 + ½k1) = 0.2(0.1 + 1.1) = 0.2400
k3 = hf(x0 + ½h, y0 + ½k2) = 0.2(0.1 + 1.12) = 0.2440
k4 = hf(x0 + h, y0 + k3) = 0.2(0.2 + 1.244) = 0.2888
∴ k = (1/6)(k1 + 2k2 + 2k3 + k4) = (1/6)(0.2000 + 0.4800 + 0.4880 + 0.2888) = 0.2428
Hence the required approximate value is y(0.2) = y0 + k = 1.2428.
EXAMPLE 2
Using the Runge-Kutta method of fourth order, solve dy/dx = (y² − x²)/(y² + x²) with y(0) = 1 at x = 0.2, 0.4.
Solution:
We have f(x, y) = (y² − x²)/(y² + x²).
To find y(0.2):
Here x0 = 0, y0 = 1, h = 0.2.
k1 = hf(x0, y0) = 0.2 f(0, 1) = 0.2000
k2 = hf(x0 + ½h, y0 + ½k1) = 0.2 f(0.1, 1.1) = 0.19672
k3 = hf(x0 + ½h, y0 + ½k2) = 0.2 f(0.1, 1.09836) = 0.1967
𝑘4 = ℎ𝑓(𝑥0 + ℎ, 𝑦0 + 𝑘3 ) = 0.2𝑓(0.2,1.1967) = 0.1891
k = (1/6)(k1 + 2k2 + 2k3 + k4)
= (1/6)[0.2 + 2(0.19672) + 2(0.1967) + 0.1891] = 0.19599
Hence y(0.2) = y0 + k = 1 + 0.19599 = 1.196.
To find y(0.4):
Now x1 = 0.2, y1 = 1.196.
k1 = hf(x1, y1) = 0.2 f(0.2, 1.196) = 0.1891
k2 = hf(x1 + ½h, y1 + ½k1) = 0.2 f(0.3, 1.2906) = 0.1795
k3 = hf(x1 + ½h, y1 + ½k2) = 0.2 f(0.3, 1.2858) = 0.1793
𝑘4 = ℎ𝑓(𝑥1 + ℎ, 𝑦1 + 𝑘3 ) = 0.2𝑓(0.4,1.3753) = 0.1688
k = (1/6)(k1 + 2k2 + 2k3 + k4)
= (1/6)[0.1891 + 2(0.1795) + 2(0.1793) + 0.1688] = 0.1792
Hence y(0.4) = y1 + k = 1.196 + 0.1792 = 1.3752.
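The figures above can be checked with a short script (a sketch reusing the standard fourth-order step; rk4_step is an illustrative helper name, not from the text):

```python
def rk4_step(f, x, y, h):
    """One classical fourth-order Runge-Kutta step."""
    k1 = h * f(x, y)
    k2 = h * f(x + h/2, y + k1/2)
    k3 = h * f(x + h/2, y + k2/2)
    k4 = h * f(x + h, y + k3)
    return y + (k1 + 2*k2 + 2*k3 + k4) / 6

f = lambda x, y: (y*y - x*x) / (y*y + x*x)
y02 = rk4_step(f, 0.0, 1.0, 0.2)   # y(0.2) ≈ 1.1960
y04 = rk4_step(f, 0.2, y02, 0.2)   # y(0.4) ≈ 1.3753
print(round(y02, 4), round(y04, 4))
```

The small differences in the last decimal place against the hand computation come only from rounding the intermediate k values.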
CHAPTER 6:
Milne's Predictor-Corrector Method
6.1 OVERVIEW
In the methods so far described to solve a differential equation over an interval, only the
value of 𝑦 at the beginning of the interval was required. In the predictor-corrector methods,
four prior values are needed for finding the value of 𝑦 at 𝑥𝑖 . Though slightly complex, these
methods have the advantage of giving an estimate of error from successive approximations to
𝑦𝑖 .
We will now discuss one such method, namely Milne's predictor-corrector method.
6.2 FORMULATION OF PROBLEM
Given dy/dx = f(x, y) with y(x0) = y0, we first obtain the starting values y1 = y(x0 + h), y2 = y(x0 + 2h), y3 = y(x0 + 3h) by Picard's or Taylor's series method. Next we calculate
f0 = f(x0, y0), f1 = f(x0 + h, y1), f2 = f(x0 + 2h, y2), f3 = f(x0 + 3h, y3)
In the relation
y4 = y0 + ∫_{x0}^{x0+4h} f(x, y) dx
we substitute Newton's forward interpolation formula, giving
y4 = y0 + ∫_{x0}^{x0+4h} (f0 + nΔf0 + (n(n − 1)/2)Δ²f0 + ⋯) dx
[Put x = x0 + nh, dx = h dn]
= y0 + h ∫_{0}^{4} (f0 + nΔf0 + (n(n − 1)/2)Δ²f0 + ⋯) dn
= y0 + h(4f0 + 8Δf0 + (20/3)Δ²f0 + ⋯)
3
Neglecting fourth and higher order differences and expressing Δf0, Δ²f0 and Δ³f0 in terms of the function values, we get the predictor
y4⁽ᵖ⁾ = y0 + (4h/3)(2f1 − f2 + 2f3)
Then we compute f4 = f(x0 + 4h, y4) and apply the corrector
y4⁽ᶜ⁾ = y2 + (h/3)(f2 + 4f3 + f4)
Then an improved value of f4 is computed and again the corrector is applied to find a still better value of y4. We repeat this step until y4 remains unchanged. Once y4 and f4 are obtained to the desired degree of accuracy, y5 = y(x0 + 5h) is found from the predictor as
y5⁽ᵖ⁾ = y1 + (4h/3)(2f2 − f3 + 2f4)
then f5 = f(x0 + 5h, y5) is computed and the corrector
y5⁽ᶜ⁾ = y3 + (h/3)(f3 + 4f4 + f5)
is applied.
We repeat this step until y5 becomes stationary, and then proceed to calculate y6 as before. This is Milne's predictor-corrector method. To ensure greater accuracy, we must first improve the accuracy of the starting values and then subdivide the intervals.
6.3 LIMITATIONS AND ADVANTAGES
Limitations:
1. Accuracy: While Milne's method is more accurate than simpler methods like Euler's method, it is not as accurate as some higher-order methods like the Runge-Kutta methods. This can be a limitation for problems requiring very high accuracy.
2. Stability: The stability of Milne's method can be an issue for certain types of ODEs, especially those with rapidly changing solutions or stiff equations. Care must be taken in selecting the step size to ensure stability.
3. Complexity: Milne's method is more complex than Euler's method and Modified Euler's method, requiring both prediction and correction steps. This complexity can make it more difficult to debug and maintain.
Advantages:
1. Accuracy: Milne's method is more accurate than Euler's method and Modified Euler's
method, especially for problems where higher accuracy is required.
2. Stability: While stability can be a limitation, Milne's method is generally stable for a wide
range of ODEs, making it a reliable choice for many problems.
3. Ease of Use: Despite its complexity compared to simpler methods, Milne's method is still
relatively easy to implement and understand, especially compared to more advanced methods
like the Runge-Kutta methods.
6.4 EXAMPLES
EXAMPLE 1
Apply Milne's method to find a solution of the differential equation y′ = x − y² in the range 0 ≤ x ≤ 1, given the boundary condition y = 0 at x = 0.
Solution:
Using Picard's method, we have
y = y(0) + ∫_{0}^{x} f(x, y) dx, where f(x, y) = x − y²
First approximation: y1 = 0 + ∫_{0}^{x} x dx = x²/2
Second approximation: y2 = ∫_{0}^{x} (x − x⁴/4) dx = x²/2 − x⁵/20
Third approximation: y3 = ∫_{0}^{x} [x − (x²/2 − x⁵/20)²] dx = x²/2 − x⁵/20 + x⁸/160 − x¹¹/4400   (i)
Now let us determine the starting values of Milne's method from (i), by choosing h = 0.2:
x0 = 0.0, y0 = 0.0000, f0 = 0.0000
x1 = 0.2, y1 = 0.0200, f1 = 0.1996
x2 = 0.4, y2 = 0.0795, f2 = 0.3937
x3 = 0.6, y3 = 0.1762, f3 = 0.5689
Using the predictor y4⁽ᵖ⁾ = y0 + (4h/3)(2f1 − f2 + 2f3),
x = 0.8: y4⁽ᵖ⁾ = 0.3049, f4 = 0.7070
and the corrector y4⁽ᶜ⁾ = y2 + (h/3)(f2 + 4f3 + f4) yields
y4⁽ᶜ⁾ = 0.3046, f4 = 0.7072   (ii)
Again applying the corrector with this improved f4 gives y4⁽ᶜ⁾ = 0.3046, which is the same as in (ii).
Now using the predictor y5⁽ᵖ⁾ = y1 + (4h/3)(2f2 − f3 + 2f4),
x = 1.0: y5⁽ᵖ⁾ = 0.4554, f5 = 0.7926
and the corrector y5⁽ᶜ⁾ = y3 + (h/3)(f3 + 4f4 + f5) gives
y5⁽ᶜ⁾ = 0.4555, f5 = 0.7925
Again applying the corrector, y5⁽ᶜ⁾ = 0.4555, a value which is the same as before. Hence y(1) = 0.4555.
CHAPTER 7:
Adams-Bashforth Method
7.1 OVERVIEW
Given dy/dx = f(x, y) and y0 = y(x0), we compute the starting values
y−1 = y(x0 − h), y−2 = y(x0 − 2h), y−3 = y(x0 − 3h)
by Taylor's series or any of the earlier methods. Next we calculate
f−1 = f(x0 − h, y−1), f−2 = f(x0 − 2h, y−2), f−3 = f(x0 − 3h, y−3)
Then, to find y1 = y(x0 + h), we substitute Newton's backward interpolation formula for f(x, y) in the relation
y1 = y0 + ∫_{x0}^{x1} f(x, y) dx   (1)
Neglecting fourth and higher order differences and expressing ∇𝑓0 , ∇2 𝑓0 and ∇3 𝑓0 in terms of
function values, we get
y1⁽ᵖ⁾ = y0 + (h/24)(55f0 − 59f−1 + 37f−2 − 9f−3)   (2)
Then, to find a better value of y1, we derive a corrector formula by substituting Newton's backward interpolation formula at f1 in (1):
y1 = y0 + ∫_{x0}^{x1} (f1 + n∇f1 + (n(n + 1)/2)∇²f1 + ⋯) dx
[Put x = x1 + nh, dx = h dn]
= y0 + h ∫_{−1}^{0} (f1 + n∇f1 + (n(n + 1)/2)∇²f1 + ⋯) dn
= y0 + h(f1 − (1/2)∇f1 − (1/12)∇²f1 − (1/24)∇³f1 + ⋯)
2 12 24
Neglecting fourth and higher order differences and expressing ∇f1, ∇²f1 and ∇³f1 in terms of function values, we obtain
y1⁽ᶜ⁾ = y0 + (h/24)(9f1 + 19f0 − 5f−1 + f−2)   (3)
Then an improved value of 𝑓1 is calculated and again the corrector (3) is applied to find a
still better value 𝑦1. This step is repeated until 𝑦1 remains unchanged and then we proceed to
calculate 𝑦2 as above.
Note: To apply both Milne and Adams-Bashforth methods, we require four starting
values of 𝑦 which are calculated by means of Picard's method or Taylor's series method or
Euler's method or the Runge-Kutta method. In practice, the Adams formulae (2) and (3)
above together with the fourth order Runge-Kutta formulae have been found to be the most
useful.
7.3 LIMITATIONS AND ADVANTAGES
Limitations:
1. Accuracy: While Adams-Bashforth methods are generally more accurate than simpler methods like Euler's method, they can still suffer from accuracy issues, especially for stiff equations or with improper step sizes.
Advantages:
1. Accuracy: Adams-Bashforth methods are generally more accurate than Euler's method and Modified Euler's method, especially for problems where higher accuracy is required.
2. Ease of Use: Despite its complexity compared to simpler methods, the Adams-Bashforth method is still relatively easy to implement and understand, especially compared to more advanced methods like the Runge-Kutta methods. It remains a valuable tool in numerical analysis, especially for problems where a balance between accuracy and complexity is required.
7.4 EXAMPLES
EXAMPLE 1
Given dy/dx = x²(1 + y) and y(1) = 1, y(1.1) = 1.233, y(1.2) = 1.548, y(1.3) = 1.979, evaluate y(1.4) by the Adams-Bashforth method.
Solution:
Here f(x, y) = x²(1 + y) and h = 0.1. With x0 = 1.3, the starting values give
f−3 = f(1.0, 1.000) = (1.0)²(1 + 1.000) = 2.000
f−2 = f(1.1, 1.233) = (1.1)²(1 + 1.233) = 2.702
f−1 = f(1.2, 1.548) = (1.2)²(1 + 1.548) = 3.669
f0 = f(1.3, 1.979) = (1.3)²(1 + 1.979) = 5.035
Using the predictor y1⁽ᵖ⁾ = y0 + (h/24)(55f0 − 59f−1 + 37f−2 − 9f−3),
x1 = 1.4: y1⁽ᵖ⁾ = 2.573, f1 = f(1.4, 2.573) = 7.004
Using the corrector y1⁽ᶜ⁾ = y0 + (h/24)(9f1 + 19f0 − 5f−1 + f−2),
y1⁽ᶜ⁾ = 1.979 + (0.1/24)(9 × 7.004 + 19 × 5.035 − 5 × 3.669 + 2.702) = 2.575
Hence y(1.4) = 2.575.
CONCLUSION:
In this thesis, we have explored various numerical methods for solving first-order ordinary
differential equations (ODEs). We have covered Picard's method, Taylor's method, Euler's
method, Modified Euler's method, Runge-Kutta method, Milne predictor-corrector method,
and Adams-Bashforth method. Each method has its strengths and weaknesses, and the choice
of method depends on the specific requirements of the problem at hand.
Summary of Methods:
• Euler's method is a basic numerical technique that approximates the solution using a
simple linear approximation. While easy to implement, Euler's method can be less
accurate, especially for ODEs with rapidly changing solutions. Modified Euler's
method improves upon Euler's method by using a more sophisticated approach to
estimate the solution, leading to higher accuracy.
• The Runge-Kutta method is a popular numerical technique known for its accuracy and
stability. It uses a weighted average of several function evaluations to approximate the
solution. The method is versatile and can handle a wide range of ODEs, making it a
popular choice for many numerical integration problems.
• The Milne predictor-corrector method combines both predictor and corrector steps to achieve higher accuracy than single-step methods of comparable cost, and the successive corrector values provide a built-in estimate of the error at each step.
• The Adams-Bashforth method is another numerical technique that uses past values of
the solution to approximate future values. It is accurate and stable for many ODEs but
may require more memory compared to simpler methods.
Comparative Analysis:
In comparing these methods, we found that the choice of method depends on several factors,
including the desired accuracy, stability requirements, computational complexity, and ease of
implementation. Euler's method and Modified Euler's method are simple to implement but
may not be accurate enough for some problems. Picard's method and Taylor's method are
more accurate but can be computationally expensive.
The Runge-Kutta method is accurate, stable, and versatile, making it a popular choice for
many numerical integration problems. However, it may be computationally more expensive
than simpler methods. The Milne predictor-corrector method offers higher accuracy than
Euler's method and Modified Euler's method but may require more memory.
In conclusion, the choice of numerical method for solving ODEs depends on the specific
requirements of the problem. For simple problems where accuracy is not critical, Euler's
method or Modified Euler's method may be sufficient. For problems requiring higher
accuracy and stability, the Runge-Kutta method or the Milne predictor-corrector method may
be more appropriate.
REFERENCES:
• Burden, Richard L., and J. Douglas Faires. Numerical Analysis. Cengage Learning, 2010.
• Allen, Jeff R. "Numerical Methods for Initial Value Problems." University of Utah, Department of Mathematics. Available at: http://www.math.utah.edu/~allenf/teaching/2017Spring/2270/lab8/IvpIntro.html