
DEPARTMENT OF MATHEMATICS

UNIVERSITY OF LUCKNOW

MINOR PROJECT
ON

“NUMERICAL METHODS IN SOLVING FIRST ORDER


ORDINARY DIFFERENTIAL EQUATIONS”

Submitted by:
Pranjal Dwivedi
Roll No. 2110011015884
B.Sc. 6th Semester
University of Lucknow

Submitted to:
Dr. R. R. Yadav
Professor, Department of Mathematics & Astronomy
University of Lucknow
Lucknow-226007, India
SELF DECLARATION
I, Pranjal Dwivedi, declare that the work embodied in the minor project entitled "Numerical Methods in Solving First Order Ordinary Differential Equations" is my own bona fide work, carried out to the best of my ability in accordance with the standards of research ethics, under the supervision of Dr. R. R. Yadav in the Department of Mathematics and Astronomy, Faculty of Science, University of Lucknow, Lucknow, Uttar Pradesh, India.

I have acknowledged all the sources of information which I used in the project. The matter embodied in this
minor project has not been submitted by me to any other University or Institution for the award of any other
degree/diploma.

PRANJAL DWIVEDI
Roll no.-2110011015884
Department of Mathematics & Astronomy
Faculty of Science, University of Lucknow,
Lucknow-226007, Uttar Pradesh, India
ACKNOWLEDGEMENT
I would like to extend my heartfelt gratitude to Dr. R. R. Yadav for their invaluable
guidance and unwavering support throughout the completion of this minor project.
Their expertise, encouragement, and dedication have been instrumental in shaping
and refining the contents of this work. I am deeply appreciative of my friends for
their understanding, encouragement, and patience during the demanding periods of
research and writing. Their unwavering support provided the necessary motivation to
navigate the intricacies of this academic endeavor.

Finally, I express my gratitude to the numerous resources, literature, and
individuals whose contributions in their respective fields formed the foundation of
this project.

Pranjal Dwivedi
B.Sc. 3rd year
VIth Semester
[2110011015884]
CONTENTS

Abstract

Introduction
• Objective of the Thesis
• Scope and Limitations
• Structure of the Thesis
• Significance of the Study
• Mathematical Preliminaries

CHAPTER 1: Picard's Method of Successive Approximations
1.1 Overview
1.2 Formulation of the Problem
1.3 Limitations
1.4 Examples

CHAPTER 2: Taylor's Series Method
2.1 Overview
2.2 Formulation of the Problem
2.3 Limitations
2.4 Examples

CHAPTER 3: Euler's Method
3.1 Overview
3.2 Formulation of the Problem
3.3 Limitations
3.4 Examples

CHAPTER 4: Modified Euler's Method
4.1 Overview
4.2 Formulation of the Problem
4.3 Improvements
4.4 Examples

CHAPTER 5: Runge-Kutta Method
5.1 Overview
5.2 Formulation of the Problem
5.3 Limitations and Advantages
5.4 Examples

CHAPTER 6: Milne's Predictor-Corrector Method
6.1 Overview
6.2 Formulation of the Problem
6.3 Limitations and Advantages
6.4 Examples

CHAPTER 7: Adams-Bashforth Method
7.1 Overview
7.2 Formulation of the Problem
7.3 Limitations and Advantages
7.4 Examples

CONCLUSION

REFERENCES
Abstract
Numerical methods are fundamental tools for solving first-order ordinary
differential equations (ODEs) in various scientific and engineering fields. This
thesis provides a comprehensive study of numerical techniques tailored for
efficient and accurate solutions to such equations. The primary focus is on
understanding, implementing, and analyzing the performance of key methods,
including Picard's method, Taylor's series method, Euler's method, the modified
Euler method, the Runge-Kutta method, Milne's predictor-corrector method, and the
Adams-Bashforth method. Theoretical analysis and practical examples are used
to demonstrate the strengths and limitations of each method, offering insights
into their application in real-world problems.

Through a comparative study, this thesis aims to guide practitioners and


researchers in selecting the most appropriate numerical method for solving first-
order ODEs based on considerations such as accuracy, efficiency, and
computational cost. The findings presented here contribute to the advancement
of numerical methods for ODEs, offering valuable insights for future research
and practical applications in diverse fields ranging from physics and
engineering to biology and economics.
Introduction

Differential equations are fundamental in describing the behavior of numerous phenomena in


science and engineering. While analytical solutions exist for some differential equations,
many real-world problems require numerical methods for approximation. This thesis focuses
on numerical methods for solving first-order ordinary differential equations (ODEs), which
are crucial in diverse fields like physics, engineering, biology, and economics.

Differential equations are among the most important mathematical tools used in producing
models in the physical sciences, biological sciences, and engineering. In this text, we consider
numerical methods for solving ordinary differential equations, that is, those differential
equations that have only one independent variable.

The differential equations we consider in most of this text are of the form

y'(t) = f(t, y(t))

where y(t) is the unknown function being sought. The given function f(t, y) of two variables
defines the differential equation; examples are given in the chapters that follow.

This equation is called a first-order differential equation because it contains a first-order


derivative of the unknown function, but no higher-order derivative. The numerical methods
for a first-order equation can be extended in a straightforward way to a system of first-order
equations. Moreover, a higher-order differential equation can be reformulated as a system of


first-order equations.

• Objective of the thesis:

The primary objective of this thesis is to provide a comprehensive analysis and comparison of
numerical methods for solving first-order ordinary differential equations (ODEs). The study
aims to deepen the understanding of these methods, including their theoretical principles,
practical implementation, and computational efficiency, with the goal of identifying their
strengths, weaknesses, and suitability for different types of ODEs and applications.

1. To Analyze and Compare Numerical Methods:

The first objective is to analyze and compare Picard's method, Taylor's method, Euler's
method, modified Euler method, Runge-Kutta method, Milne predictor-corrector method, and
Adams-Bashforth method for solving first-order ODEs. Each method will be examined in
detail, including its algorithm, implementation details, and theoretical analysis. The
comparative analysis will evaluate the methods based on accuracy, efficiency, and
computational cost, providing insights into their performance and applicability.

2. To Provide a Comprehensive Understanding:

The second objective is to provide a comprehensive understanding of each numerical method,


including its underlying principles and assumptions. This includes a detailed examination of
the mathematical concepts and techniques used in each method, as well as practical
considerations for implementation. By providing a detailed analysis, this thesis aims to
deepen the understanding of these methods and their practical implications.
3. To Identify Strengths and Weaknesses:


Another objective is to identify the strengths and weaknesses of each numerical method in
solving first-order ODEs. This includes analyzing the method's ability to accurately
approximate solutions, its computational efficiency, and its stability properties. By
identifying the strengths and weaknesses of each method, this thesis aims to provide insights
into when and how to effectively use these methods in practical applications.

4. To Evaluate Practical Applicability:

The fourth objective is to evaluate the practical applicability of each numerical method in
real-world problems. This includes examining the method's performance in solving ODEs
that arise in various fields such as physics, engineering, biology, and economics. By
evaluating the practical applicability of each method, this thesis aims to provide guidance on
selecting the most appropriate method for specific problems and applications.

5. To Contribute to the Body of Knowledge:

Finally, this thesis aims to contribute to the existing body of knowledge in numerical methods
for ODEs. By providing a detailed analysis and comparison of these methods, this study
seeks to advance the understanding of these methods and their practical implications.
Additionally, this thesis aims to provide valuable insights for future research and
development in the field of numerical methods for ODEs.

Overall, this thesis seeks to provide a comprehensive analysis and comparison of numerical
methods for solving first-order ODEs, with the goal of advancing the understanding of these
methods and their practical implications.
• Scope and Limitations:

The scope of this thesis encompasses a detailed examination and comparison of numerical
methods for solving first-order ordinary differential equations (ODEs). The study focuses on
seven main methods: Picard's method, Taylor's method, Euler's method, modified Euler
method, Runge-Kutta method, Milne predictor-corrector method, and Adams-Bashforth
method. These methods are fundamental in numerical analysis and find wide application in
various fields such as physics, engineering, biology, and economics. The scope also includes
an analysis of the theoretical principles, practical implementation, and computational
efficiency of each method.

1. In-Depth Analysis of Numerical Methods:

One of the key aspects of the scope is to provide an in-depth analysis of each numerical method.
This includes a detailed examination of the algorithm, implementation details, and theoretical
background of each method. By providing a comprehensive analysis, this thesis aims to deepen
the understanding of these methods and their practical implications.

2. Comparative Analysis:

Another important aspect of the scope is the comparative analysis of the numerical methods.
The study will compare the methods based on various criteria such as accuracy, efficiency, and
computational cost. By comparing these methods, this thesis aims to identify their strengths,
weaknesses, and suitability for different types of ODEs and applications.

3. Practical Applicability:

The scope also includes an evaluation of the practical applicability of each numerical method.
This involves examining the performance of the methods in solving ODEs that arise in
real-world problems. By evaluating the practical applicability of each method, this thesis aims to
provide guidance on selecting the most appropriate method for specific problems and
applications.

4. Contribution to Knowledge:

Furthermore, this thesis aims to contribute to the existing body of knowledge in numerical
methods for ODEs. By providing a detailed analysis and comparison of these methods, this
study seeks to advance the understanding of these methods and their practical implications.
Additionally, this thesis aims to provide valuable insights for future research and development
in the field of numerical methods for ODEs.

Limitations:

Despite the comprehensive scope outlined above, there are certain limitations to this study.
Firstly, the study focuses solely on first-order ODEs and does not cover higher-order ODEs or
partial differential equations. Secondly, the analysis is limited to the methods mentioned above
and does not include other numerical methods for solving ODEs. Thirdly, the study does not
consider the implementation of these methods in specific software environments or
programming languages. Lastly, the practical applicability of these methods may vary
depending on the specific problem and may require further validation in practical scenarios.

In conclusion, while this thesis aims to provide a comprehensive analysis and comparison of
numerical methods for solving first-order ODEs, it is important to recognize the scope and
limitations of the study. By addressing these limitations, this thesis seeks to provide valuable
insights and contribute to the advancement of knowledge in numerical methods for ODEs.
• Structure of the Thesis:

This thesis is structured to provide a comprehensive analysis and comparison of numerical


methods for solving first-order ordinary differential equations (ODEs). The structure is
designed to guide the reader through the theoretical principles, practical implementation, and
comparative analysis of these methods, with the aim of deepening understanding and providing
valuable insights for practitioners and researchers.

1. Introduction

The introduction sets the stage for the thesis by providing an overview of the importance of
numerical methods in solving ODEs. It outlines the objectives of the thesis, the scope of the
study, and the structure of the thesis.

2. Mathematical Preliminaries

This chapter provides the necessary background in mathematics for understanding the
numerical methods discussed in later chapters. It covers topics such as differential equations,
linear algebra, and numerical linear algebra.

3. Picard's Method

This chapter focuses on Picard's method for solving first-order ODEs. It includes a detailed
explanation of the method, its algorithm, and practical implementation. The chapter also
discusses the theoretical analysis of the method and its practical applicability.

4. Taylor's Method

The chapter on Taylor's method provides a comprehensive analysis of this method, including
its algorithm, implementation details, and theoretical background. The chapter also compares
Taylor's method with other numerical methods for solving ODEs.

5. Euler's Method

This chapter examines Euler's method for solving first-order ODEs, including its algorithm,
implementation details, and theoretical analysis. The chapter also discusses the strengths and
limitations of Euler's method in practical applications.

6. Modified Euler Method

The chapter on the modified Euler method provides an in-depth analysis of this method,
including its algorithm, implementation details, and theoretical background. The chapter also
compares the modified Euler method with other numerical methods for solving ODEs.

7. Runge-Kutta Method

This chapter focuses on the Runge-Kutta method for solving first-order ODEs. It includes a
detailed explanation of the method, its algorithm, and practical implementation. The chapter
also discusses the theoretical analysis of the method and its practical applicability.

8. Milne Predictor-Corrector Method

The chapter on the Milne predictor-corrector method provides a comprehensive analysis of this
method, including its algorithm, implementation details, and theoretical background. The
chapter also compares the Milne predictor-corrector method with other numerical methods for
solving ODEs.
9. Adams-Bashforth Method

This chapter examines the Adams-Bashforth method for solving first-order ODEs, including
its algorithm, implementation details, and theoretical analysis. The chapter also discusses the
strengths and limitations of the Adams-Bashforth method.

10. Comparative Analysis

The comparative analysis chapter evaluates the numerical methods discussed in earlier chapters
based on criteria such as accuracy, efficiency, and computational cost. The chapter provides
insights into the strengths and limitations of each method and offers guidance for selecting the
most appropriate method for specific problems and applications.

11. Conclusion

The conclusion summarizes the key findings of the thesis and discusses the implications of the
study. It also suggests areas for future research and development in the field of numerical
methods for solving first-order ODEs.

12. References

The references section lists all the sources cited in the thesis, providing readers with additional
resources for further reading and research.
• Significance of the Study:

The study of numerical methods for solving first-order ordinary differential equations (ODEs)
holds significant importance in various fields of science and engineering. This thesis aims to
provide a comprehensive analysis and comparison of these methods, highlighting their
strengths, weaknesses, and practical applicability. The significance of this study lies in several
key areas:

1. Advancing Knowledge in Numerical Methods:

This study contributes to the advancement of knowledge in numerical methods for ODEs by
providing a detailed analysis and comparison of seven main methods: Picard's method, Taylor's
method, Euler's method, modified Euler method, Runge-Kutta method, Milne predictor-
corrector method, and Adams-Bashforth method. By conducting a thorough examination of
these methods, this study seeks to deepen the understanding of their theoretical principles and
practical implications.

2. Practical Implications for Real-World Problems:

The practical applicability of numerical methods for ODEs is of paramount importance in


solving real-world problems in various fields such as physics, engineering, biology, and
economics. This study evaluates the performance of these methods in solving ODEs that arise
in practical scenarios, providing valuable insights for practitioners and researchers. By
identifying the most suitable methods for specific problems and applications, this study aims
to enhance the efficiency and accuracy of numerical computations in practical settings.

3. Guidance for Practitioners and Researchers:


One of the key contributions of this study is to provide guidance for practitioners and
researchers in selecting the most appropriate numerical method for solving ODEs. The
comparative analysis of these methods based on criteria such as accuracy, efficiency, and
computational cost offers valuable insights into their strengths and limitations. This guidance
can help practitioners and researchers make informed decisions when choosing a numerical
method for their specific problems and applications.

4. Potential for Future Research and Development:

This study also has the potential to inspire future research and development in the field of
numerical methods for ODEs. By identifying areas where further improvements can be made
and new methods can be developed, this study lays the groundwork for future advancements
in numerical analysis. The insights provided in this study can serve as a foundation for future
research projects aimed at enhancing the efficiency and accuracy of numerical computations
in solving ODEs.

5. Contribution to Education and Training:

Lastly, this study contributes to education and training in the field of numerical analysis by
providing a comprehensive overview of these methods. The detailed analysis and comparison
of these methods can serve as a valuable resource for students and educators seeking to deepen
their understanding of numerical methods for ODEs. By enhancing the educational resources
available in this field, this study aims to contribute to the development of skilled professionals
capable of solving complex problems using numerical techniques.

In conclusion, this study on numerical methods for solving first-order ODEs holds significant
importance in advancing knowledge, solving real-world problems, guiding practitioners and
researchers, inspiring future research, and contributing to education and training. By
providing a comprehensive analysis and comparison of these methods, this study aims to
make a valuable contribution to the field of numerical analysis and its practical applications.
Mathematical Preliminaries:

A number of problems in science and technology can be formulated into differential


equations. The analytical methods of solving differential equations are applicable only to a
limited class of equations. Quite often differential equations appearing in physical problems
do not belong to any of these familiar types and one is obliged to resort to numerical
methods. These methods are of even greater importance when we realize that computing
machines are now readily available which reduce numerical work considerably.

Solution of a differential equation: The solution of an ordinary differential equation


means finding an explicit expression for y in terms of a finite number of elementary functions
of x. Such a solution of a differential equation is known as the closed or finite form of
solution. In the absence of such a solution, we have recourse to numerical methods of
solution.

Let us consider the first-order differential equation

dy/dx = f(x, y),  given y(x0) = y0    (1)

to study the various numerical methods of solving such equations. In most of these methods,
we replace the differential equation by a difference equation and then solve it. These methods
yield solutions either as a power series in x, from which the values of y can be found by direct
substitution, or as a set of values of x and y. The methods of Picard and Taylor series belong to
the former class of solutions. In these methods, y in (1) is approximated by a truncated series,
each term of which is a function of x. The information about the curve at one point is utilized
and the solution is not iterated. As such, these are referred to as single-step methods.

The methods of Euler, Runge-Kutta, Milne, Adams-Bashforth, etc. belong to the latter class
of solutions. In these methods, the next point on the curve is evaluated in short steps ahead,
by performing iterations until sufficient accuracy is achieved. As such, these methods are
called step-by-step methods.

Euler and Runge-Kutta methods are used for computing y over a limited range of x-values,
whereas Milne and Adams methods may be applied for finding y over a wider range of
x-values. The Milne and Adams methods, however, require starting values, which are found by
Picard's, Taylor's series, or Runge-Kutta methods.

• Initial and boundary conditions


An ordinary differential equation of the nth order is of the form

F(x, y, dy/dx, d^2y/dx^2, …, d^ny/dx^n) = 0    (2)

Its general solution contains n arbitrary constants and is of the form

φ(x, y, c1, c2, …, cn) = 0    (3)

To obtain its particular solution, n conditions must be given so that the constants c1, c2,…, cn
can be determined.

If these conditions are prescribed at one point only (say, x = x0), then the differential equation
together with the conditions constitutes an initial value problem of the nth order.

If the conditions are prescribed at two or more points, then the problem is termed a
boundary value problem.

In the chapters that follow, we describe methods for solving such initial value problems.
CHAPTER 1:

Picard’s Method Of Successive Approximations

1.1 Overview

Picard's method, also known as the method of successive approximations, is a fundamental


technique in the field of numerical analysis used to approximate solutions to ordinary
differential equations (ODEs). The method is based on the concept of fixed-point iteration
and is particularly useful for solving initial value problems (IVPs) where an initial condition
is specified.

Principle of Picard's Method:

At its core, Picard's method relies on the idea of approximating a solution to an ODE by
iteratively improving an initial guess. The method is based on the Picard-Lindelöf theorem,
which guarantees the existence and uniqueness of a solution to an IVP under certain
conditions on the function f(x, y) and the initial condition y(x0) = y0.

1.2 FORMULATION OF THE PROBLEM

Consider the first-order equation

dy/dx = f(x, y)    (1)
It is required to find that particular solution of (1) which assumes the value y0 when
x = x0. Integrating (1) between limits, we get

∫_{y0}^{y} dy = ∫_{x0}^{x} f(x, y) dx,  or  y = y0 + ∫_{x0}^{x} f(x, y) dx    (2)

This is an integral equation equivalent to (1), for it contains the unknown y under the
integral sign.
As a first approximation y1 to the solution, we put y = y0 in f(x, y) and integrate (2),
giving

y1 = y0 + ∫_{x0}^{x} f(x, y0) dx

For a second approximation y2, we put y = y1 in f(x, y) and integrate (2), giving

y2 = y0 + ∫_{x0}^{x} f(x, y1) dx

Similarly, a third approximation is

y3 = y0 + ∫_{x0}^{x} f(x, y2) dx

Continuing this process, we obtain y4, y5, …, yn, where

yn = y0 + ∫_{x0}^{x} f(x, y_{n-1}) dx

Hence this method gives a sequence of approximations y1, y2, y3, …, each giving a
better result than the preceding one.
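The iteration above can be carried out symbolically. The following is a minimal sketch in Python using the sympy library; the function name picard and the loop structure are our own illustrative choices, not part of the method itself.

```python
import sympy as sp

x, t = sp.symbols('x t')   # t is a dummy variable of integration

def picard(f, x0, y0, n):
    """Return the Picard approximations y1, ..., yn for y' = f(x, y), y(x0) = y0."""
    y = sp.sympify(y0)                 # y0 serves as the zeroth approximation
    approximations = []
    for _ in range(n):
        # y_{k+1}(x) = y0 + integral from x0 to x of f(t, y_k(t)) dt
        y = y0 + sp.integrate(f(t, y.subs(x, t)), (t, x0, x))
        approximations.append(sp.expand(y))
    return approximations

# Example 1 of this chapter: y' = x + y, y(0) = 1
y1, y2, y3 = picard(lambda t, y: t + y, 0, 1, 3)
print(y3)   # mathematically equal to 1 + x + x**2 + x**3/3 + x**4/24
```

Each call to sp.integrate reproduces one step of the scheme, so the returned expressions can be compared directly with the hand computations in the examples below.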
1.3 LIMITATIONS

Some general limitations of Picard's method are:

1. Convergence Dependence: The method's convergence can depend heavily on the


function and initial guess, making it less reliable for certain types of equations.

2. Computational Intensity: Calculating successive approximations can be


computationally intensive, especially for complex functions or large intervals.

3. Local Solution: It provides solutions that are valid only in a small neighborhood of the
initial point, limiting its applicability to broader contexts.

4. Sensitivity to Parameters: The method can be sensitive to small changes in


parameters, which may affect the convergence behavior.

5. Not Universally Applicable: While effective for many problems, Picard's method may
not be suitable for all types of differential equations, particularly those with irregular
solutions or singularities.

6. Limited to First-Order Equations: While extensions exist for higher-order equations,


the method's simplicity and applicability are more straightforward for first-order
equations.

7. Convergence Criteria: The convergence criteria can be challenging to determine,


requiring careful consideration and potentially multiple iterations to achieve satisfactory
results.

These limitations highlight the need for careful consideration and possibly the use of
alternative methods depending on the specific characteristics of the problem.
1.4 EXAMPLES:

EXAMPLE 1

Using Picard's process of successive approximations, obtain a solution up to the fifth


approximation of the equation 𝑑𝑦/𝑑𝑥 = 𝑦 + 𝑥, such that 𝑦 = 1 when 𝑥 = 0. Check your
answer by finding the exact particular solution.

Solution:

(i) We have y = 1 + ∫_0^x (x + y) dx

First approximation. Put y = 1 in x + y, giving

y1 = 1 + ∫_0^x (1 + x) dx = 1 + x + x^2/2

Second approximation. Put y = 1 + x + x^2/2 in x + y, giving

y2 = 1 + ∫_0^x (1 + 2x + x^2/2) dx = 1 + x + x^2 + x^3/6

Third approximation. Put y = 1 + x + x^2 + x^3/6 in x + y, giving

y3 = 1 + ∫_0^x (1 + 2x + x^2 + x^3/6) dx = 1 + x + x^2 + x^3/3 + x^4/24

Fourth approximation. Put y = y3 in x + y, giving

y4 = 1 + ∫_0^x (1 + 2x + x^2 + x^3/3 + x^4/24) dx
   = 1 + x + x^2 + x^3/3 + x^4/12 + x^5/120

Fifth approximation. Put y = y4 in x + y, giving

y5 = 1 + ∫_0^x (1 + 2x + x^2 + x^3/3 + x^4/12 + x^5/120) dx
   = 1 + x + x^2 + x^3/3 + x^4/12 + x^5/60 + x^6/720    (1)

(ii) The given equation dy/dx − y = x is Leibniz's linear equation in y.

Its integrating factor being e^(−x), the solution is

y e^(−x) = ∫ x e^(−x) dx + c
         = −x e^(−x) − ∫ (−e^(−x)) dx + c = −x e^(−x) − e^(−x) + c

∴ y = c e^x − x − 1

Since y = 1 when x = 0, c = 2.

Thus the desired particular solution is

y = 2e^x − x − 1    (2)

Or, using the series e^x = 1 + x + x^2/2! + x^3/3! + x^4/4! + ⋯, we get

y = 1 + x + x^2 + x^3/3 + x^4/12 + x^5/60 + x^6/360 + ⋯    (3)

Comparing (1) and (3), it is clear that (1) agrees with the exact particular solution (3)
up to the term in x^5.

Note: At x = 1, the fourth approximation gives y4 = 3.425 and the fifth approximation gives
y5 = 3.4347, whereas the exact value is 2e − 2 ≈ 3.4366.
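The note above can be confirmed numerically; a quick sketch in Python (the function names are ours):

```python
import math

# Fourth and fifth Picard approximations for y' = x + y, y(0) = 1 (from above),
# compared with the exact solution y = 2*e^x - x - 1 at x = 1.
def y4(x): return 1 + x + x**2 + x**3/3 + x**4/12 + x**5/120
def y5(x): return 1 + x + x**2 + x**3/3 + x**4/12 + x**5/60 + x**6/720
def exact(x): return 2 * math.exp(x) - x - 1

print(round(y4(1.0), 4), round(y5(1.0), 4), round(exact(1.0), 4))
# 3.425 3.4347 3.4366
```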
EXAMPLE 2

Find the value of 𝑦 for 𝑥 = 0.1 by Picard's method, given that

dy/dx = (y − x)/(y + x),  y(0) = 1

Solution: We have y = 1 + ∫_0^x (y − x)/(y + x) dx

First approximation. Put y = 1 in the integrand, giving

y1 = 1 + ∫_0^x (1 − x)/(1 + x) dx = 1 + ∫_0^x (−1 + 2/(1 + x)) dx
   = 1 + [−x + 2 log(1 + x)]_0^x = 1 − x + 2 log(1 + x)    (i)

Second approximation. Put y = 1 − x + 2 log(1 + x) in the integrand, giving

y2 = 1 + ∫_0^x [1 − 2x + 2 log(1 + x)] / [1 + 2 log(1 + x)] dx
   = 1 + ∫_0^x [1 − 2x/(1 + 2 log(1 + x))] dx,

which is very difficult to integrate.

Hence we use the first approximation: taking x = 0.1 in (i), and noting that the logarithm
arising from the integration is the natural logarithm, we obtain

y(0.1) = 1 − (0.1) + 2 log(1.1) = 0.9 + 2(0.09531) = 1.0906
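The value can be checked numerically; a small sketch (math.log is the natural logarithm, matching the antiderivative obtained above):

```python
import math

# First approximation from (i): y1(x) = 1 - x + 2*log(1 + x)
def y1(x):
    return 1 - x + 2 * math.log(1 + x)

print(round(y1(0.1), 4))  # 1.0906
```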


CHAPTER 2:

TAYLOR’S SERIES METHOD

2.1 OVERVIEW

Taylor's series method is a powerful technique for approximating solutions to ordinary
differential equations (ODEs) with initial conditions. It is based on the idea of expanding the
solution as a Taylor series around a given point.

2.2 FORMULATION OF THE PROBLEM

Consider the first-order equation

dy/dx = f(x, y)    (1)

Differentiating (1) with respect to x, we have

d^2y/dx^2 = ∂f/∂x + (∂f/∂y)(dy/dx),  i.e.  y'' = f_x + f_y f    (2)

Differentiating successively, we can obtain y''', y'''', etc. Putting x = x0 and y = y0, the
values (y')0, (y'')0, (y''')0, … can be obtained. Hence the Taylor series

y = y0 + (x − x0)(y')0 + ((x − x0)^2/2!)(y'')0 + ((x − x0)^3/3!)(y''')0 + ⋯    (3)

gives the value of y for every value of x for which (3) converges.
On finding the value y1 for x = x1 from (3), y', y'', etc. can be evaluated at x = x1 by means
of (1), (2), etc. Then y can be expanded about x = x1. In this way, the solution can be
extended beyond the range of convergence of series (3).
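The procedure above — form successive total derivatives of f, evaluate them at (x0, y0), and sum the series — can be sketched in Python with sympy. The function name taylor_value and the order cutoff are our own choices, not part of the method.

```python
import sympy as sp

x, y = sp.symbols('x y')

def taylor_value(f, x0, y0, h, order=4):
    """Approximate y(x0 + h) for y' = f(x, y), y(x0) = y0,
    using Taylor-series terms up to h**order."""
    derivs = [f]                                   # y' = f
    for _ in range(order - 1):
        g = derivs[-1]
        # total derivative with respect to x: dg/dx = g_x + g_y * y' = g_x + g_y * f
        derivs.append(sp.diff(g, x) + sp.diff(g, y) * f)
    vals = [g.subs({x: x0, y: y0}) for g in derivs]
    return y0 + sum(v * h**(k + 1) / sp.factorial(k + 1) for k, v in enumerate(vals))

# Example 1 of this chapter: y' = x + y, y(0) = 1
print(round(float(taylor_value(x + y, 0, 1, sp.Rational(1, 10))), 4))  # 1.1103
```

The same call with h = 0.2 reproduces the value of y(0.2) computed by hand in Example 1.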

2.3 LIMITATIONS
Despite its accuracy and effectiveness in certain cases, Taylor's series method has several
limitations that should be considered:

1. Local Convergence: The method guarantees convergence only in a small neighborhood


of the initial point. For problems where the solution varies significantly over the interval of
interest, the method may not provide accurate results.

2. Computational Complexity: Calculating higher-order derivatives and evaluating


them at the initial point can be computationally intensive, especially for functions with complex
expressions or for high-order ODEs.

3. Sensitivity to Initial Conditions: Taylor's series method is sensitive to the choice of


initial conditions, and small errors in the initial conditions can lead to significant errors in the
approximation.

4. Limited Applicability: The method is most suitable for problems where the function
and its derivatives are well-behaved and can be easily evaluated. It may not be suitable for
functions with singularities or discontinuities.

5. Difficulty in Higher Dimensions: Extending Taylor's series method to higher-


dimensional problems can be challenging due to the increased complexity of calculating and
managing higher-order derivatives.

6. Practical Implementation Challenges: Implementing Taylor's series method can be
challenging due to the need to calculate and manage higher-order derivatives, especially for
functions with complex expressions.

Despite these limitations, Taylor's series method remains a valuable tool in numerical analysis,
particularly for problems where high accuracy is required and the function and its derivatives
are well-behaved.

2.4 EXAMPLES

EXAMPLE 1

Solve 𝑦 ′ = 𝑥 + 𝑦, 𝑦(0) = 1 by Taylor's series method. Hence find the values of 𝑦 at 𝑥 = 0.1
and 𝑥 = 0.2.

Solution:

Differentiating successively, we get

y' = x + y,      y'(0) = 1    [∵ y(0) = 1]
y'' = 1 + y',    y''(0) = 2
y''' = y'',      y'''(0) = 2
y'''' = y''',    y''''(0) = 2, etc.

Taylor's series is

y = y0 + (x − x0)(y')0 + ((x − x0)^2/2!)(y'')0 + ((x − x0)^3/3!)(y''')0 + ⋯

Here x0 = 0, y0 = 1.

∴ y = 1 + x(1) + (x^2/2!)(2) + (x^3/3!)(2) + (x^4/4!)(2) + ⋯
    = 1 + x + x^2 + x^3/3 + x^4/12 + ⋯

Thus y(0.1) = 1 + 0.1 + (0.1)^2 + (0.1)^3/3 + (0.1)^4/12 = 1.1103

and

y(0.2) = 1 + 0.2 + (0.2)^2 + (0.2)^3/3 + (0.2)^4/12 + ⋯ = 1.2428

EXAMPLE 2

Find by Taylor's series method the values of y at x = 0.1 and x = 0.2, to five places of
decimals, from dy/dx = x^2 y − 1, y(0) = 1.

Solution:

Differentiating successively, we get

y′ = x²y − 1,   (y′)₀ = −1   [∵ y(0) = 1]
y′′ = 2xy + x²y′,   (y′′)₀ = 0
y′′′ = 2y + 4xy′ + x²y′′,   (y′′′)₀ = 2
y⁽ⁱᵛ⁾ = 6y′ + 6xy′′ + x²y′′′,   (y⁽ⁱᵛ⁾)₀ = −6, etc.

Putting these values in the Taylor's series, we have

y = 1 + x(−1) + (x²/2)(0) + (x³/3!)(2) + (x⁴/4!)(−6) + ⋯
  = 1 − x + x³/3 − x⁴/4 + ⋯
3 4

Hence y(0.1) = 0.90033 and y(0.2) = 0.80227.



CHAPTER 3:

EULER’S METHOD

3.1 OVERVIEW

Euler's method is a straightforward and widely used numerical technique for approximating
solutions to ordinary differential equations (ODEs). It is particularly useful when analytical
solutions are difficult or impossible to obtain. Euler's method is based on the concept of
approximating the solution curve of an ODE by using short straight line segments.

3.2 FORMULATION OF THE PROBLEM

Consider the equation

dy/dx = f(x, y)   (1)

given that y(x₀) = y₀. Its curve of solution through P(x₀, y₀) is shown dotted in Figure 10.1. Now we have to find the ordinate of any other point Q on this curve.

FIGURE 10.1

Let us divide LM into n sub-intervals each of width h at L₁, L₂, ⋯, so that h is quite small.

In the interval 𝐿𝐿1 , we approximate the curve by the tangent at 𝑃. If the ordinate through 𝐿1
meets this tangent in 𝑃1 (𝑥0 + ℎ, 𝑦1 ), then

y₁ = L₁P₁ = LP + R₁P₁ = y₀ + PR₁ tan θ
   = y₀ + h (dy/dx)_P = y₀ + h f(x₀, y₀)

Let 𝑃1 𝑄1 be the curve of solution of (1) through 𝑃1 and let its tangent at 𝑃1 meet the ordinate
through 𝐿2 in 𝑃2 (𝑥0 + 2ℎ, 𝑦2 ). Then

y₂ = y₁ + h f(x₀ + h, y₁)

Repeating this process 𝑛 times, we finally reach on an approximation 𝑀𝑃𝑛 of 𝑀𝑄 given by

yₙ = yₙ₋₁ + h f(x₀ + (n − 1)h, yₙ₋₁)

This is Euler's method of finding an approximate solution of (1).

NOTE: In Euler's method, we approximate the curve of solution by the tangent in each
interval, i.e., by a sequence of short lines. Unless ℎ is small, the error is bound to be quite
significant. This sequence of lines may also deviate considerably from the curve of solution.
As such, the method is very slow and hence there is a modification of this method which is
given in the next section.
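The recurrence above translates directly into code. The following is a minimal sketch (the function name and arguments are illustrative choices):

```python
def euler(f, x0, y0, h, n):
    """Advance y' = f(x, y) from (x0, y0) through n Euler steps of size h."""
    x, y = x0, y0
    for _ in range(n):
        y += h * f(x, y)   # y_{k+1} = y_k + h f(x_k, y_k)
        x += h
    return y

# The worked example of Section 3.4: dy/dx = x + y, y(0) = 1, h = 0.1, n = 10.
y_approx = euler(lambda x, y: x + y, 0.0, 1.0, 0.1, 10)
print(y_approx)  # ~ 3.1875 at full precision; rounding each step to two
                 # decimals, as in the hand computation, gives 3.18
```

Halving h (and doubling n) moves the result closer to the true value 3.44, at the cost of twice the work, which is exactly the trade-off discussed in the example.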

3.3 LIMITATIONS

While Euler's method is a simple and useful technique for approximating solutions to ordinary
differential equations (ODEs), it has several limitations that should be considered:

1. Accuracy: Euler's method is a first-order method, meaning that the error in the approximation is proportional to the step size h. As a result, it may require very small step sizes to achieve accurate results, which can be computationally expensive.

2. Global Error Accumulation: Since Euler's method only uses information from the previous
point to calculate the next point, errors can accumulate over time, especially for long
integration intervals.

3. Sensitivity to Step Size: The accuracy of Euler's method is highly sensitive to the choice of step size h. Using an inappropriate step size can result in significant errors in the approximation.

4. Error Propagation from the Starting Value: Euler's method advances the solution from the initial value y(x₀) alone. Any error in this starting value, or in any computed step, is carried forward into every subsequent step and can lead to inaccurate results.

5. Improvements Needed for Higher Accuracy: While Euler's method is a good starting point
for numerical integration, more advanced methods like the Runge-Kutta methods are often
needed to achieve higher accuracy for complex problems.

Despite these limitations, Euler's method remains a valuable tool in numerical analysis,
particularly for educational purposes and for gaining an initial understanding of numerical
integration techniques.

3.4 EXAMPLES

EXAMPLE 1

Using Euler's method, find an approximate value of 𝑦 corresponding to 𝑥 = 1, given that


𝑑𝑦/𝑑𝑥 = 𝑥 + 𝑦 and 𝑦 = 1 when 𝑥 = 0.

Solution:

We take 𝑛 = 10 and ℎ = 0.1 which is sufficiently small. The various calculations are arranged
as follows:

𝑥 𝑦 𝑥 + 𝑦 = 𝑑𝑦/𝑑𝑥 Old 𝑦 + 0.1(𝑑𝑦/𝑑𝑥) = new 𝑦

0.0 1.00 1.00 1.00 + 0.1(1.00) = 1.10



0.1 1.10 1.20 1.10 + 0.1(1.20) = 1.22

0.2 1.22 1.42 1.22 + 0.1(1.42) = 1.36

0.3 1.36 1.66 1.36 + 0.1(1.66) = 1.53

0.4 1.53 1.93 1.53 + 0.1(1.93) = 1.72

0.5 1.72 2.22 1.72 + 0.1(2.22) = 1.94

0.6 1.94 2.54 1.94 + 0.1(2.54) = 2.19

0.7 2.19 2.89 2.19 + 0.1(2.89) = 2.48

0.8 2.48 3.29 2.48 + 0.1(3.29) = 2.81

0.9 2.81 3.71 2.81 + 0.1(3.71) = 3.18

1.0 3.18

Thus the required approximate value of 𝑦 = 3.18.

NOTE: The exact solution of this equation gives the true value of y at x = 1 to be 3.44, whereas by Euler's method y = 3.18 and by Picard's method y = 3.434. In the above solution, had we chosen n = 20, the accuracy would have been considerably increased, but at the expense of double the labor of computation. Euler's method is no doubt very simple but cannot be considered one of the best.

EXAMPLE 2
𝑑𝑦 𝑦−𝑥
Given 𝑑𝑥 = 𝑦+𝑥 with initial condition 𝑦 = 1 at 𝑥 = 0; find 𝑦 for 𝑥 = 0.1 by Euler's method.

Solution:

We divide the interval (0,0.1) in to five steps, i.e., we take 𝑛 = 5 and ℎ = 0.02. The various
calculations are arranged as follows:

x   y   dy/dx   Old y + 0.02(dy/dx) = new y

0.00 1.0000 1.0000 1.0000 + 0.02(1.0000) = 1.0200

0.02 1.0200 0.9615 1.0200 + 0.02(0.9615) = 1.0392

0.04 1.0392 0.926 1.0392 + 0.02(0.926) = 1.0577

0.06 1.0577 0.893 1.0577 + 0.02(0.893) = 1.0756

0.08 1.0756 0.862 1.0756 + 0.02(0.862) = 1.0928

0.10 1.0928

Hence the required approximate value of 𝑦 = 1.0928.
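The five steps above can be reproduced at full precision with a quick sketch (illustrative, with the step loop written inline):

```python
# Euler's method for dy/dx = (y - x)/(y + x), y(0) = 1, h = 0.02, 5 steps.
x, y, h = 0.0, 1.0, 0.02
for _ in range(5):
    y += h * (y - x) / (y + x)
    x += h
print(round(y, 4))  # 1.0928, matching the table
```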



CHAPTER 4:

Modified Euler's Method

4.1 OVERVIEW

Modified Euler's method, also known as Heun's method or the improved Euler method, is a
numerical technique used to approximate solutions to ordinary differential equations (ODEs).
It is an extension of the basic Euler's method, offering improved accuracy by using a more
sophisticated approach. Here's an overview of the method:

4.2 FORMULATION OF THE PROBLEM

In Euler's method, the curve of solution in the interval 𝐿𝐿1 is approximated by the tangent at 𝑃
(Figure 10.1) such that at 𝑃1 , we have

𝑦1 = 𝑦0 + ℎ𝑓(𝑥0 , 𝑦0 ) (1)

Then the slope of the curve of solution through 𝑃1

[ i.e., (𝑑𝑦/𝑑𝑥)𝑃1 = 𝑓(𝑥0 + ℎ, 𝑦1 )]

is computed and the tangent at 𝑃1 to 𝑃1 𝑄1 is drawn meeting the ordinate through 𝐿2 in

𝑃2 (𝑥0 + 2ℎ, 𝑦2 )

Now we find a better approximation y₁⁽¹⁾ of y(x₀ + h) by taking the slope of the curve as the mean of the slopes of the tangents at P and P₁, i.e.,

y₁⁽¹⁾ = y₀ + (h/2)[f(x₀, y₀) + f(x₀ + h, y₁)]   (2)

As the slope of the tangent at P₁ is not known, we take y₁ as found in (1) by Euler's method and insert it on the R.H.S. of (2) to obtain the first modified value y₁⁽¹⁾.

Again (2) is applied and we find a still better value y₁⁽²⁾ corresponding to L₁ as

y₁⁽²⁾ = y₀ + (h/2)[f(x₀, y₀) + f(x₀ + h, y₁⁽¹⁾)]

We repeat this step, until two consecutive values of 𝑦 agree. This is then taken as the starting
point for the next interval 𝐿1 𝐿2 .

Once y₁ is obtained to a desired degree of accuracy, y₂ corresponding to L₂ is found from (1):

y₂ = y₁ + h f(x₀ + h, y₁)

and a better approximation y₂⁽¹⁾ is obtained from (2):

y₂⁽¹⁾ = y₁ + (h/2)[f(x₀ + h, y₁) + f(x₀ + 2h, y₂)]

We repeat this step until 𝑦2 becomes stationary. Then we proceed to calculate 𝑦3 as above
and so on.

This is the modified Euler's method, which gives a great improvement in accuracy over the original method.
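The predict-then-iterate scheme described above can be sketched in a few lines of Python. The function name and the tolerance used to decide when two consecutive values of y "agree" are illustrative choices, not part of the method's classical statement.

```python
def modified_euler(f, x0, y0, h, n, tol=1e-4):
    """Modified Euler: predict with Euler, then iterate the corrector
    until two consecutive values of y agree to within tol."""
    x, y = x0, y0
    for _ in range(n):
        slope0 = f(x, y)
        y_new = y + h * slope0                         # Euler predictor
        while True:
            # mean of the slopes at the two ends of the interval
            y_next = y + h / 2 * (slope0 + f(x + h, y_new))
            if abs(y_next - y_new) < tol:
                break
            y_new = y_next
        x, y = x + h, y_next
    return y

# Worked example of Section 4.4: dy/dx = x + y, y(0) = 1, h = 0.1, to x = 0.3.
print(round(modified_euler(lambda x, y: x + y, 0.0, 1.0, 0.1, 3), 4))  # 1.4004
```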

4.3 LIMITATIONS:

1. Accuracy: Euler's method is a first-order method, meaning that the error in the
approximation is proportional to the step size. It may require very small step sizes to achieve
accurate results, which can be computationally expensive.

2. Numerical Stability: Euler's method can be numerically unstable for certain types of
ODEs, particularly those with rapidly changing or oscillatory solutions. Using too large a step
size can lead to instability and inaccurate results.

3. Sensitivity to Step Size: The accuracy of Euler's method is highly sensitive to the choice
of step size. Using an inappropriate step size can result in significant errors in the
approximation.

4. Global Error Accumulation: Errors can accumulate over time, especially for long
integration intervals, leading to inaccurate results for the overall solution.

Improvements in Modified Euler's Method:

1. Accuracy: Modified Euler's method improves accuracy by using a more sophisticated approach to estimate the slope of the solution curve at each step. This typically involves averaging the slopes at the beginning and end of the interval.

2. Stability: Modified Euler's method is more stable than Euler's method for certain types of
ODEs, especially those with rapidly changing solutions. It can handle larger step sizes
without sacrificing accuracy.

3. Convergence Rate: Modified Euler's method converges to the true solution faster than
Euler's method, especially for smoother functions.

4. Ease of Implementation: Despite its improved accuracy, Modified Euler's method remains relatively simple to implement compared to more advanced methods like the Runge-Kutta methods.

Overall, Modified Euler's method offers a good balance between accuracy and computational
efficiency, making it a popular choice for many numerical ODE problems. However, for
problems requiring higher accuracy or stability, more advanced methods like the Runge-
Kutta methods may be more appropriate.

4.4 EXAMPLES

EXAMPLE 1

Using modified Euler's method, find an approximate value of 𝑦 when 𝑥 = 0.3, given that
𝑑𝑦/𝑑𝑥 = 𝑥 + 𝑦 and 𝑦 = 1 when 𝑥 = 0.

Solution:

The various calculations are arranged as follows taking ℎ = 0.1 :

𝑥 𝑥 + 𝑦 = 𝑦′ Mean slope Old 𝑦 + 0.1 (mean slope ) = new 𝑦

0.0 0+1 - 1.00 + 0.1(1.00) = 1.10

0.1   0.1 + 1.1      ½(1 + 1.2) = 1.1        1.00 + 0.1(1.1) = 1.11

0.1   0.1 + 1.11     ½(1 + 1.21) = 1.105     1.00 + 0.1(1.105) = 1.1105

0.1   0.1 + 1.1105   ½(1 + 1.2105) = 1.1052  1.00 + 0.1(1.1052) = 1.1105

Since the last two values are equal, we take 𝑦(0.1) = 1.1105.

0.1 1.2105 - 1.1105 + 0.1(1.2105) = 1.2316

0.2   0.2 + 1.2316   ½(1.2105 + 1.4316) = 1.3211   1.1105 + 0.1(1.3211) = 1.2426

0.2   0.2 + 1.2426   ½(1.2105 + 1.4426) = 1.3266   1.1105 + 0.1(1.3266) = 1.2432

0.2   0.2 + 1.2432   ½(1.2105 + 1.4432) = 1.3268   1.1105 + 0.1(1.3268) = 1.2432

Since the last two values are equal, we take 𝑦(0.2) = 1.2432.

0.2 1.4432 - 1.2432 + 0.1(1.4432) = 1.3875

0.3   0.3 + 1.3875   ½(1.4432 + 1.6875) = 1.5654   1.2432 + 0.1(1.5654) = 1.3997

0.3   0.3 + 1.3997   ½(1.4432 + 1.6997) = 1.5715   1.2432 + 0.1(1.5715) = 1.4003

0.3   0.3 + 1.4003   ½(1.4432 + 1.7003) = 1.5718   1.2432 + 0.1(1.5718) = 1.4004

0.3   0.3 + 1.4004   ½(1.4432 + 1.7004) = 1.5718   1.2432 + 0.1(1.5718) = 1.4004

Since the last two values are equal, we take 𝑦(0.3) = 1.4004.

Hence 𝑦(0.3) = 1.4004 approximately.



NOTE: By Euler's method (Example 1 of Chapter 3, extended with the same table), the approximate value of y for x = 0.3 is 1.53, whereas by the modified Euler's method the corresponding value is 1.4004, which is much nearer its true value 1.3997, obtained from the exact solution y = 2eˣ − x − 1 by putting x = 0.3.

EXAMPLE 2

Using the modified Euler's method, find 𝑦(0.2) and 𝑦(0.4) given

𝑦 ′ = 𝑦 + 𝑒 𝑥 , 𝑦(0) = 0

Solution:

We have y′ = y + eˣ = f(x, y); x₀ = 0, y₀ = 0 and h = 0.2.

The various calculations are arranged as under:

To calculate y(0.2):

x     y + eˣ = y′              Mean slope               Old y + h(mean slope) = new y

0.0   1                        –                        0 + 0.2(1) = 0.2

0.2   0.2 + e^0.2 = 1.4214     ½(1 + 1.4214) = 1.2107   0 + 0.2(1.2107) = 0.2421

0.2   0.2421 + e^0.2 = 1.4635  ½(1 + 1.4635) = 1.2317   0 + 0.2(1.2317) = 0.2463

0.2   0.2463 + e^0.2 = 1.4677  ½(1 + 1.4677) = 1.2338   0 + 0.2(1.2338) = 0.2468

0.2   0.2468 + e^0.2 = 1.4682  ½(1 + 1.4682) = 1.2341   0 + 0.2(1.2341) = 0.2468

Since the last two values of 𝑦 are equal, we take 𝑦(0.2) = 0.2468.

To calculate y(0.4):

x     y + eˣ = y′              Mean slope                    Old y + 0.2(mean slope) = new y

0.2   0.2468 + e^0.2 = 1.4682  –                             0.2468 + 0.2(1.4682) = 0.5404

0.4   0.5404 + e^0.4 = 2.0322  ½(1.4682 + 2.0322) = 1.7502   0.2468 + 0.2(1.7502) = 0.5968

0.4   0.5968 + e^0.4 = 2.0887  ½(1.4682 + 2.0887) = 1.7784   0.2468 + 0.2(1.7784) = 0.6025

0.4   0.6025 + e^0.4 = 2.0943  ½(1.4682 + 2.0943) = 1.7813   0.2468 + 0.2(1.7813) = 0.6030

0.4   0.6030 + e^0.4 = 2.0949  ½(1.4682 + 2.0949) = 1.7815   0.2468 + 0.2(1.7815) = 0.6031

0.4   0.6031 + e^0.4 = 2.0949  ½(1.4682 + 2.0949) = 1.7815   0.2468 + 0.2(1.7815) = 0.6031

Since the last two values of y are equal, we take y(0.4) = 0.6031.

Hence y(0.2) = 0.2468 and y(0.4) = 0.6031 approximately.



CHAPTER 5:

Runge-Kutta Method

5.1 OVERVIEW

The Taylor's series method of solving differential equations numerically is restricted by the
labor involved in finding the higher order derivatives. However, there is a class of methods
known as Runge-Kutta methods which do not require the calculations of higher order
derivatives and give greater accuracy. The Runge-Kutta formulae possess the advantage of
requiring only the function values at some selected points. These methods agree with Taylor's
series solution up to the term in ℎ𝑟 where 𝑟 differs from method to method and is called the
order of that method.

5.2 FORMULATION OF PROBLEM

First order R-K method. We have seen that Euler's method (CHAPTER 3) gives

𝑦1 = 𝑦0 + ℎ𝑓(𝑥0 , 𝑦0 ) = 𝑦0 + ℎ𝑦0′ [∵ 𝑦 ′ = 𝑓(𝑥, 𝑦)]

Expanding by Taylor's series

y₁ = y(x₀ + h) = y₀ + h y₀′ + (h²/2) y₀′′ + ⋯

It follows that the Euler's method agrees with the Taylor's series solution upto the term in ℎ.

Hence, Euler's method is the Runge-Kutta method of the first order.



Second order R-K method. The modified Euler's method gives

y₁ = y₀ + (h/2)[f(x₀, y₀) + f(x₀ + h, y₁)]   (1)

Substituting y₁ = y₀ + h f(x₀, y₀) on the right-hand side of (1), we obtain

y₁ = y₀ + (h/2)[f₀ + f(x₀ + h, y₀ + h f₀)], where f₀ = f(x₀, y₀)   (2)

Expanding the L.H.S. by Taylor's series, we get

y₁ = y(x₀ + h) = y₀ + h y₀′ + (h²/2!) y₀′′ + (h³/3!) y₀′′′ + ⋯   (3)

Expanding f(x₀ + h, y₀ + h f₀) by Taylor's series for a function of two variables, (2) gives

y₁ = y₀ + (h/2)[f₀ + {f₀ + h(∂f/∂x)₀ + h f₀ (∂f/∂y)₀ + O(h²)}]
   = y₀ + ½[2h f₀ + h²{(∂f/∂x)₀ + f₀(∂f/∂y)₀} + O(h³)]
   = y₀ + h f₀ + (h²/2) f₀′ + O(h³)   (4)   [∵ df(x, y)/dx = ∂f/∂x + f ∂f/∂y]

Comparing (3) and (4), it follows that the modified Euler's method agrees with the Taylor's
series solution upto the term in ℎ2 .

Hence the modified Euler's method is the Runge-Kutta method of the second order. ∴ The
second order Runge-Kutta formula is

1
𝑦1 = 𝑦0 + (𝑘1 + 𝑘2 )
2

where k₁ = h f(x₀, y₀) and k₂ = h f(x₀ + h, y₀ + k₁)
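As a quick numerical check of the second order formula, the following sketch (function name illustrative) performs one step for dy/dx = x + y, y(0) = 1, h = 0.1:

```python
def rk2_step(f, x, y, h):
    # Second-order Runge-Kutta step: average of the two end slopes,
    # equivalent to one modified-Euler pass.
    k1 = h * f(x, y)
    k2 = h * f(x + h, y + k1)
    return y + (k1 + k2) / 2

# k1 = 0.1, k2 = 0.1 * f(0.1, 1.1) = 0.12, so y1 = 1 + 0.11 = 1.11,
# which matches the first corrected value in the modified Euler example.
print(rk2_step(lambda x, y: x + y, 0.0, 1.0, 0.1))
```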

Third order R-K method. Similarly, it can be seen that Runge's method agrees with the Taylor's series solution up to the term in h³.

As such, Runge's method is the Runge-Kutta method of the third order.

∴ The third order Runge-Kutta formula is

y₁ = y₀ + (1/6)(k₁ + 4k₂ + k₃)

where k₁ = h f(x₀, y₀),   k₂ = h f(x₀ + ½h, y₀ + ½k₁)

and k₃ = h f(x₀ + h, y₀ + k′), where k′ = h f(x₀ + h, y₀ + k₁).

Fourth order R-K method. This method is the most commonly used and is often referred to simply as the Runge-Kutta method.

Working rule for finding the increment k of y corresponding to an increment h of x by the Runge-Kutta method from

dy/dx = f(x, y),   y(x₀) = y₀

is as follows:

Calculate successively

k₁ = h f(x₀, y₀),
k₂ = h f(x₀ + ½h, y₀ + ½k₁),
k₃ = h f(x₀ + ½h, y₀ + ½k₂),
k₄ = h f(x₀ + h, y₀ + k₃).

Finally compute

k = (1/6)(k₁ + 2k₂ + 2k₃ + k₄),

which gives the required approximate value as y₁ = y₀ + k.

(Note that k is the weighted mean of k₁, k₂, k₃, and k₄.)



NOTE: One of the advantages of these methods is that the operation is identical whether the differential equation is linear or non-linear.
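The working rule above can be sketched as a single step function (the name is an illustrative choice):

```python
def rk4_step(f, x, y, h):
    """One fourth-order Runge-Kutta step for y' = f(x, y)."""
    k1 = h * f(x, y)
    k2 = h * f(x + h/2, y + k1/2)
    k3 = h * f(x + h/2, y + k2/2)
    k4 = h * f(x + h, y + k3)
    # k is the weighted mean of k1, k2, k3, k4
    return y + (k1 + 2*k2 + 2*k3 + k4) / 6

# Example 1 of Section 5.4: dy/dx = x + y, y(0) = 1, h = 0.2.
print(round(rk4_step(lambda x, y: x + y, 0.0, 1.0, 0.2), 4))  # 1.2428
```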

5.3 LIMITATIONS AND ADVANTAGES

Limitations:

1. Complexity: Runge-Kutta methods are more complex than Euler's method and Modified
Euler's method, requiring more computational effort to implement.

2. Computationally Intensive: The higher-order accuracy of Runge-Kutta methods often requires more function evaluations per step, making them more computationally intensive, especially for stiff equations or systems with many variables.

3. Memory Requirements: Runge-Kutta methods may require more memory to store intermediate results and coefficients compared to simpler methods like Euler's method.

Advantages:

1. Higher Accuracy: Runge-Kutta methods are generally more accurate than Euler's method
and Modified Euler's method, especially for solving ODEs with complex behavior or rapidly
changing solutions.

2. Adaptive Step Size: Runge-Kutta methods can be easily adapted to use variable step
sizes, allowing for more efficient computation by focusing computational effort where it is
most needed.

3. Versatility: Runge-Kutta methods come in various orders, offering a range of trade-offs between accuracy and computational cost, allowing users to choose the most appropriate method for their specific problem.

In summary, while Runge-Kutta methods may be more computationally intensive and complex to implement compared to simpler methods like Euler's method and Modified Euler's method, they offer significantly higher accuracy, stability, and versatility, making them a preferred choice for many numerical ODE problems, especially those requiring high precision and robustness.

5.4 EXAMPLES

EXAMPLE 1

Apply the Runge-Kutta fourth order method to find an approximate value of 𝑦 when 𝑥 = 0.2
given that 𝑑𝑦/𝑑𝑥 = 𝑥 + 𝑦 and 𝑦 = 1 when 𝑥 = 0.

Solution:

Here 𝑥0 = 0, 𝑦0 = 1, ℎ = 0.2, 𝑓(𝑥0 , 𝑦0 ) = 1


∴ k₁ = h f(x₀, y₀) = 0.2 × 1 = 0.2000
k₂ = h f(x₀ + ½h, y₀ + ½k₁) = 0.2 × f(0.1, 1.1) = 0.2400
k₃ = h f(x₀ + ½h, y₀ + ½k₂) = 0.2 × f(0.1, 1.12) = 0.2440
k₄ = h f(x₀ + h, y₀ + k₃) = 0.2 × f(0.2, 1.244) = 0.2888

k = (1/6)(k₁ + 2k₂ + 2k₃ + k₄)
  = (1/6)(0.2000 + 0.4800 + 0.4880 + 0.2888)
  = (1/6)(1.4568) = 0.2428

Hence the required approximate value of 𝑦 is 1.2428 .



EXAMPLE 2
𝑑𝑦 𝑦 2 −𝑥 2
Using the Runge-Kutta method of fourth order, solve 𝑑𝑥 = 𝑦 2 +𝑥 2

𝑦(0) = 1 at 𝑥 = 0.2,0.4.

Solution:

We have f(x, y) = (y² − x²)/(y² + x²)

To find 𝑦(0.2)

Hence 𝑥0 = 0, 𝑦0 = 1, ℎ = 0.2

k₁ = h f(x₀, y₀) = 0.2 f(0, 1) = 0.2000
k₂ = h f(x₀ + ½h, y₀ + ½k₁) = 0.2 × f(0.1, 1.1) = 0.19672
k₃ = h f(x₀ + ½h, y₀ + ½k₂) = 0.2 f(0.1, 1.09836) = 0.1967
k₄ = h f(x₀ + h, y₀ + k₃) = 0.2 f(0.2, 1.1967) = 0.1891

k = (1/6)(k₁ + 2k₂ + 2k₃ + k₄)
  = (1/6)[0.2 + 2(0.19672) + 2(0.1967) + 0.1891] = 0.19599

Hence 𝑦(0.2) = 𝑦0 + 𝑘 = 1.196.

To find y(0.4):

Here 𝑥1 = 0.2, 𝑦1 = 1.196, ℎ = 0.2.



k₁ = h f(x₁, y₁) = 0.1891
k₂ = h f(x₁ + ½h, y₁ + ½k₁) = 0.2 f(0.3, 1.2906) = 0.1795
k₃ = h f(x₁ + ½h, y₁ + ½k₂) = 0.2 f(0.3, 1.2858) = 0.1793
k₄ = h f(x₁ + h, y₁ + k₃) = 0.2 f(0.4, 1.3753) = 0.1688

k = (1/6)(k₁ + 2k₂ + 2k₃ + k₄)
  = (1/6)[0.1891 + 2(0.1795) + 2(0.1793) + 0.1688] = 0.1792
6

Hence 𝑦(0.4) = 𝑦1 + 𝑘 = 1.196 + 0.1792 = 1.3752.
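The two steps above can be checked with a short self-contained sketch; it carries full precision rather than the rounded intermediate values of the hand computation, so the last digit can differ slightly.

```python
# RK4 for dy/dx = (y^2 - x^2)/(y^2 + x^2), y(0) = 1, h = 0.2, two steps.
def f(x, y):
    return (y*y - x*x) / (y*y + x*x)

x, y, h = 0.0, 1.0, 0.2
for _ in range(2):
    k1 = h * f(x, y)
    k2 = h * f(x + h/2, y + k1/2)
    k3 = h * f(x + h/2, y + k2/2)
    k4 = h * f(x + h, y + k3)
    y += (k1 + 2*k2 + 2*k3 + k4) / 6
    x += h
    print(round(x, 1), round(y, 4))
# x = 0.2: y ~ 1.1960; x = 0.4: y ~ 1.3753 (the text's 1.3752 carries
# rounded intermediates)
```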



CHAPTER 6:

MILNE’S PREDICTOR-CORRECTOR METHOD

6.1 OVERVIEW

In the methods so far described to solve a differential equation over an interval, only the
value of 𝑦 at the beginning of the interval was required. In the predictor-corrector methods,
four prior values are needed for finding the value of 𝑦 at 𝑥𝑖 . Though slightly complex, these
methods have the advantage of giving an estimate of error from successive approximations to
𝑦𝑖 .

We will now discuss one such method, namely Milne's predictor-corrector method.

6.2 FORMULATION OF PROBLEM

Given dy/dx = f(x, y) and y = y₀ at x = x₀; to find an approximate value of y for x = x₀ + nh, we proceed as follows:

The value y0 = 𝑦(𝑥0 ) being given, we compute

𝑦1 = 𝑦(𝑥0 + ℎ), 𝑦2 = 𝑦(𝑥0 + 2ℎ), 𝑦3 = 𝑦(𝑥0 + 3ℎ)

by Picard's or Taylor's series method.

Next we calculate,

𝑓0 = 𝑓(𝑥0 , 𝑦0 ), 𝑓1 = 𝑓(𝑥0 + ℎ, 𝑦1 ), 𝑓2 = 𝑓(𝑥0 + 2ℎ, 𝑦2 ), 𝑓3 = 𝑓(𝑥0 + 3ℎ, 𝑦3 )



Then to find 𝑦4 = 𝑦(𝑥0 + 4ℎ), we substitute Newton's forward interpolation formula

f(x, y) = f₀ + nΔf₀ + (n(n − 1)/2)Δ²f₀ + (n(n − 1)(n − 2)/6)Δ³f₀ + ⋯

In the relation

y₄ = y₀ + ∫ from x₀ to x₀+4h of f(x, y) dx
   = y₀ + ∫ from x₀ to x₀+4h of (f₀ + nΔf₀ + (n(n − 1)/2)Δ²f₀ + ⋯) dx   [Put x = x₀ + nh, dx = h dn]
   = y₀ + h ∫ from 0 to 4 of (f₀ + nΔf₀ + (n(n − 1)/2)Δ²f₀ + ⋯) dn
   = y₀ + h(4f₀ + 8Δf₀ + (20/3)Δ²f₀ + ⋯)
3

Neglecting fourth and higher order differences and expressing Δ𝑓0 , Δ2 𝑓0 and Δ3 𝑓0 and in
terms of the function values, we get

y₄⁽ᵖ⁾ = y₀ + (4h/3)(2f₁ − f₂ + 2f₃)

which is called a predictor.

Having found 𝑦4 , we obtain a first approximation to

𝑓4 = 𝑓(𝑥0 + 4ℎ, 𝑦4 )

Then a better value of 𝑦4 is found by Simpson's rule as

y₄⁽ᶜ⁾ = y₂ + (h/3)(f₂ + 4f₃ + f₄)

which is called a corrector.



Then an improved value of 𝑓4 is computed and again the corrector is applied to find a still
better value of 𝑦4 . We repeat this step until 𝑦4 remains unchanged. Once 𝑦4 and 𝑓4 are
obtained to desired degree of accuracy, 𝑦5 = 𝑦(𝑥0 + 5ℎ) is found from the predictor as

y₅⁽ᵖ⁾ = y₁ + (4h/3)(2f₂ − f₃ + 2f₄)

and f₅ = f(x₀ + 5h, y₅) is calculated. Then a better approximation to the value of y₅ is obtained from the corrector as

y₅⁽ᶜ⁾ = y₃ + (h/3)(f₃ + 4f₄ + f₅)

We repeat this step until 𝑦5 becomes stationary and, then proceed to calculate 𝑦6 as before.

This is Milne's predictor-corrector method. To ensure greater accuracy, we must first improve the accuracy of the starting values and then subdivide the intervals.

6.3 LIMITATIONS AND ADVANTAGES

Limitations of Milne's Predictor-Corrector Method:

1. Accuracy: While Milne's method is more accurate than simpler methods like Euler's
method, it is not as accurate as some higher-order methods like the Runge-Kutta methods.
This can be a limitation for problems requiring very high accuracy.

2. Stability: The stability of Milne's method can be an issue for certain types of ODEs,
especially those with rapidly changing solutions or stiff equations. Care must be taken in
selecting the step size to ensure stability.

3. Complexity: Milne's method is more complex to implement compared to Euler's method and Modified Euler's method, requiring both prediction and correction steps. This complexity can make it more difficult to debug and maintain.

Advantages of Milne's Predictor-Corrector Method:

1. Accuracy: Milne's method is more accurate than Euler's method and Modified Euler's
method, especially for problems where higher accuracy is required.

2. Stability: While stability can be a limitation, Milne's method is generally stable for a wide
range of ODEs, making it a reliable choice for many problems.

3. Ease of Use: Despite its complexity compared to simpler methods, Milne's method is still
relatively easy to implement and understand, especially compared to more advanced methods
like the Runge-Kutta methods.

In summary, while Milne's Predictor-Corrector Method offers improved accuracy and


stability compared to simpler methods, it may not be as accurate or stable as more advanced
methods like the Runge-Kutta methods. However, it remains a valuable tool in numerical
analysis, especially for problems where a balance between accuracy and complexity is
required.

6.4 EXAMPLES

EXAMPLE 1

Apply Milne's method, to find a solution of the differential equation 𝑦 ′ = 𝑥 − 𝑦 2 in the range
0 ≤ 𝑥 ≤ 1 for the boundary condition 𝑦 = 0 at 𝑥 = 0.

Solution:

Using Picard's method, we have

y = y(0) + ∫₀ˣ f(x, y) dx, where f(x, y) = x − y²

To get the first approximation, we put y = 0 in f(x, y), giving

y₁ = 0 + ∫₀ˣ x dx = x²/2

To find the second approximation, we put y = x²/2 in f(x, y), giving

y₂ = ∫₀ˣ (x − x⁴/4) dx = x²/2 − x⁵/20

Similarly, the third approximation is

y₃ = ∫₀ˣ [x − (x²/2 − x⁵/20)²] dx = x²/2 − x⁵/20 + x⁸/160 − x¹¹/4400   (i)

Now let us determine the starting values of the Milne's method from (i), by choosing ℎ = 0.2.

x₀ = 0.0,   y₀ = 0.0000,   f₀ = 0.0000
x₁ = 0.2,   y₁ = 0.0200,   f₁ = 0.1996
x₂ = 0.4,   y₂ = 0.0795,   f₂ = 0.3937
x₃ = 0.6,   y₃ = 0.1762,   f₃ = 0.5689

Using the predictor y₄⁽ᵖ⁾ = y₀ + (4h/3)(2f₁ − f₂ + 2f₃):

x = 0.8:   y₄⁽ᵖ⁾ = 0.3049,   f₄ = 0.7070

and the corrector y₄⁽ᶜ⁾ = y₂ + (h/3)(f₂ + 4f₃ + f₄) yields

y₄⁽ᶜ⁾ = 0.3046,   f₄ = 0.7072   (ii)

Again using the corrector,



y₄⁽ᶜ⁾ = 0.3046, which is the same as in (ii).

Now using the predictor y₅⁽ᵖ⁾ = y₁ + (4h/3)(2f₂ − f₃ + 2f₄):

x = 1.0:   y₅⁽ᵖ⁾ = 0.4554,   f₅ = 0.7926

and the corrector y₅⁽ᶜ⁾ = y₃ + (h/3)(f₃ + 4f₄ + f₅) gives

y₅⁽ᶜ⁾ = 0.4555,   f₅ = 0.7925

Again using the corrector,

y₅⁽ᶜ⁾ = 0.4555, a value which is the same as before.

Hence 𝑦(1) = 0.4555.
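The whole predictor-corrector cycle above can be automated; the sketch below (the function name and convergence tolerance are illustrative choices) reproduces y(1) from the same four starting values. Carried at full precision rather than four-decimal rounding, the result agrees with the hand value to about four decimals.

```python
def milne(f, xs, ys, h, x_end, tol=1e-5):
    """Milne's method; xs, ys hold the four starting values (oldest first)."""
    xs, ys = list(xs), list(ys)
    fs = [f(x, y) for x, y in zip(xs, ys)]
    while xs[-1] < x_end - 1e-12:
        x_new = xs[-1] + h
        # predictor: y4 = y0 + (4h/3)(2 f1 - f2 + 2 f3)
        y_new = ys[-4] + 4*h/3 * (2*fs[-3] - fs[-2] + 2*fs[-1])
        while True:
            # corrector: y4 = y2 + (h/3)(f2 + 4 f3 + f4), iterated until stationary
            y_corr = ys[-2] + h/3 * (fs[-2] + 4*fs[-1] + f(x_new, y_new))
            if abs(y_corr - y_new) < tol:
                break
            y_new = y_corr
        xs.append(x_new); ys.append(y_corr); fs.append(f(x_new, y_corr))
    return ys[-1]

y1 = milne(lambda x, y: x - y*y, [0.0, 0.2, 0.4, 0.6],
           [0.0, 0.0200, 0.0795, 0.1762], 0.2, 1.0)
print(y1)  # ~ 0.45555; the hand computation, rounding each entry to four
           # decimals, gives 0.4555
```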



CHAPTER 7:
Adams-Bashforth Method

7.1 OVERVIEW

The Adams-Bashforth method is a numerical technique used to approximate solutions to ordinary differential equations (ODEs). It belongs to a class of explicit multistep methods that use past values of the solution to approximate future values. Here's an overview of the method:

7.2 FORMULATION OF PROBLEM

Given dy/dx = f(x, y) and y₀ = y(x₀), we compute

y₋₁ = y(x₀ − h), y₋₂ = y(x₀ − 2h), y₋₃ = y(x₀ − 3h)

By Taylor's series or Euler's method or the Runge-Kutta method.

Next we calculate

𝑓−1 = 𝑓(𝑥0 − ℎ, 𝑦−1 ), 𝑓−2 = 𝑓(𝑥0 − 2ℎ, 𝑦−2 ), 𝑓−3 = 𝑓(𝑥0 − 3ℎ, 𝑦−3 )

Then to find y₁, we substitute Newton's backward interpolation formula

f(x, y) = f₀ + n∇f₀ + (n(n + 1)/2)∇²f₀ + (n(n + 1)(n + 2)/6)∇³f₀ + ⋯

in the relation

y₁ = y₀ + ∫ from x₀ to x₁ of f(x, y) dx   (1)

Neglecting fourth and higher order differences and expressing ∇𝑓0 , ∇2 𝑓0 and ∇3 𝑓0 in terms of
function values, we get

y₁ = y₀ + (h/24)(55f₀ − 59f₋₁ + 37f₋₂ − 9f₋₃)   (2)

This is called the Adams-Bashforth predictor formula.

Having found y₁, we find f₁ = f(x₀ + h, y₁).

Then to find a better value of y₁, we derive a corrector formula by substituting Newton's backward formula at f₁, i.e.,

f(x, y) = f₁ + n∇f₁ + (n(n + 1)/2)∇²f₁ + (n(n + 1)(n + 2)/6)∇³f₁ + ⋯

in (1):

∴ y₁ = y₀ + ∫ from x₀ to x₁ of (f₁ + n∇f₁ + (n(n + 1)/2)∇²f₁ + ⋯) dx   [Put x = x₁ + nh, dx = h dn]
     = y₀ + h ∫ from −1 to 0 of (f₁ + n∇f₁ + (n(n + 1)/2)∇²f₁ + ⋯) dn
     = y₀ + h(f₁ − (1/2)∇f₁ − (1/12)∇²f₁ − (1/24)∇³f₁ + ⋯)
2 12 24

Neglecting fourth and higher order differences and expressing ∇𝑓1 , ∇2 𝑓1 and ∇3 𝑓1 and in
terms of function values, we obtain

y₁⁽ᶜ⁾ = y₀ + (h/24)(9f₁ + 19f₀ − 5f₋₁ + f₋₂)   (3)

which is called the Adams-Moulton corrector formula.

Then an improved value of 𝑓1 is calculated and again the corrector (3) is applied to find a
still better value 𝑦1. This step is repeated until 𝑦1 remains unchanged and then we proceed to
calculate 𝑦2 as above.

NOTE: To apply both the Milne and Adams-Bashforth methods, we require four starting values of y, which are calculated by means of Picard's method, Taylor's series method, Euler's method, or the Runge-Kutta method. In practice, the Adams formulae (2) and (3) above, together with the fourth order Runge-Kutta formulae, have been found to be the most useful.

7.3 LIMITATIONS AND ADVANTAGES

Limitations of Adams-Bashforth Predictor-Corrector Method:

While Adams-Bashforth methods are generally more accurate than simpler methods like Euler's method, they can still suffer from accuracy issues, especially for stiff equations or improperly chosen step sizes.

Advantages of Adams-Bashforth Predictor-Corrector Method:

1. Accuracy: Adams-Bashforth methods are generally more accurate than Euler's method
and Modified Euler's method, especially for problems where higher accuracy is required.

2. Stability: While stability can be a limitation, Adams-Bashforth methods are generally stable for a wide range of ODEs, making them a reliable choice for many problems.

3. Ease of Use: Despite their complexity compared to simpler methods, Adams-Bashforth methods are still relatively easy to implement and understand, especially compared to more advanced methods like the Runge-Kutta methods.

In summary, while the Adams-Bashforth predictor-corrector method offers improved accuracy and stability compared to simpler methods, it may not be as accurate or stable as more advanced methods like the Runge-Kutta methods. However, it remains a valuable tool in numerical analysis, especially for problems where a balance between accuracy and complexity is required.

7.4 EXAMPLES

EXAMPLE 1
𝑑𝑦
Given 𝑑𝑥 = 𝑥 2 (1 + 𝑦) and 𝑦(1) = 1, 𝑦(1.1) = 1.233, 𝑦(1.2) = 1.548, 𝑦(1.3) = 1.979, evaluate

𝑦(1.4) by the Adams-Bashforth method.

Solution:

Here 𝑓(𝑥, 𝑦) = 𝑥 2 (1 + 𝑦)

Starting values of the Adams-Bashforth method with ℎ = 0.1 are

𝑥 = 1.0, 𝑦−3 = 1.000, 𝑓−3 = (1.0)2 (1 + 1.000) = 2.000


𝑥 = 1.1, 𝑦−2 = 1.233, 𝑓−2 = 2.702
𝑥 = 1.2, 𝑦−1 = 1.548, 𝑓−1 = 3.669
𝑥 = 1.3, 𝑦0 = 1.979, 𝑓0 = 5.035

Using the predictor y₁⁽ᵖ⁾ = y₀ + (h/24)(55f₀ − 59f₋₁ + 37f₋₂ − 9f₋₃):

x = 1.4:   y₁⁽ᵖ⁾ = 2.573,   f₁ = 7.004

Using the corrector y₁⁽ᶜ⁾ = y₀ + (h/24)(9f₁ + 19f₀ − 5f₋₁ + f₋₂):

y₁⁽ᶜ⁾ = 1.979 + (0.1/24)(9 × 7.004 + 19 × 5.035 − 5 × 3.669 + 2.702) = 2.575

Hence 𝑦(1.4) = 2.575
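The step above can be verified with a short sketch. The function name and the fixed number of corrector passes are illustrative choices; the formulae are the predictor (2) and corrector (3) of Section 7.2.

```python
def adams_step(f, xs, ys, h):
    """One Adams-Bashforth predictor + Adams-Moulton corrector step.
    xs, ys: the last four points (oldest first)."""
    fm3, fm2, fm1, f0 = (f(x, y) for x, y in zip(xs, ys))
    x1 = xs[-1] + h
    # predictor (2)
    y1 = ys[-1] + h/24 * (55*f0 - 59*fm1 + 37*fm2 - 9*fm3)
    # corrector (3), applied a few times until y1 settles
    for _ in range(3):
        y1 = ys[-1] + h/24 * (9*f(x1, y1) + 19*f0 - 5*fm1 + fm2)
    return y1

f = lambda x, y: x*x * (1 + y)
y14 = adams_step(f, [1.0, 1.1, 1.2, 1.3], [1.000, 1.233, 1.548, 1.979], 0.1)
print(round(y14, 3))  # 2.575, matching the hand computation
```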



CONCLUSION:
In this thesis, we have explored various numerical methods for solving first-order ordinary
differential equations (ODEs). We have covered Picard's method, Taylor's method, Euler's
method, Modified Euler's method, Runge-Kutta method, Milne predictor-corrector method,
and Adams-Bashforth method. Each method has its strengths and weaknesses, and the choice
of method depends on the specific requirements of the problem at hand.

Summary of Methods:

• Picard's method is a simple iterative technique that approximates the solution to an ODE by successively refining an initial guess. Taylor's method is a more advanced technique that approximates the solution using a Taylor series expansion. Both methods can be computationally expensive and may not be suitable for all ODEs.

• Euler's method is a basic numerical technique that approximates the solution using a
simple linear approximation. While easy to implement, Euler's method can be less
accurate, especially for ODEs with rapidly changing solutions. Modified Euler's
method improves upon Euler's method by using a more sophisticated approach to
estimate the solution, leading to higher accuracy.

• The Runge-Kutta method is a popular numerical technique known for its accuracy and
stability. It uses a weighted average of several function evaluations to approximate the
solution. The method is versatile and can handle a wide range of ODEs, making it a
popular choice for many numerical integration problems.

• The Milne predictor-corrector method combines predictor and corrector steps to improve accuracy. Like the Adams methods, it is a multistep method, and it offers higher accuracy compared to simpler single-step methods. The method is stable and converges to the true solution for a wide range of ODEs.

• The Adams-Bashforth method is another numerical technique that uses past values of
the solution to approximate future values. It is accurate and stable for many ODEs but
may require more memory compared to simpler methods.

Comparative Analysis:

In comparing these methods, we found that the choice of method depends on several factors,
including the desired accuracy, stability requirements, computational complexity, and ease of
implementation. Euler's method and Modified Euler's method are simple to implement but
may not be accurate enough for some problems. Picard's method and Taylor's method are
more accurate but can be computationally expensive.

The Runge-Kutta method is accurate, stable, and versatile, making it a popular choice for
many numerical integration problems. However, it may be computationally more expensive
than simpler methods. The Milne predictor-corrector method offers higher accuracy than
Euler's method and Modified Euler's method but may require more memory.

Conclusion and Recommendations:

In conclusion, the choice of numerical method for solving ODEs depends on the specific
requirements of the problem. For simple problems where accuracy is not critical, Euler's
method or Modified Euler's method may be sufficient. For problems requiring higher
accuracy and stability, the Runge-Kutta method or the Milne predictor-corrector method may
be more appropriate.

In future research, it would be beneficial to explore advanced numerical methods, such as implicit methods or higher-order Runge-Kutta methods, to further improve the accuracy and efficiency of solving ODEs. Additionally, applying these methods to real-world problems in various fields, such as physics, engineering, and biology, would help demonstrate their practical utility.

REFERENCES:

• Atkinson, Kendall E. An Introduction to Numerical Analysis. John Wiley & Sons, 1989.

• Burden, Richard L., and J. Douglas Faires. Numerical Analysis. Cengage Learning, 2010.

• "Numerical Methods for Ordinary Differential Equations" by Wikipedia contributors. Wikipedia, The Free Encyclopedia. [Online] Available at: https://en.wikipedia.org/wiki/Numerical_methods_for_ordinary_differential_equations

• "Numerical Methods for Initial Value Problems" by Jeff R. Allen. University of Utah, Department of Mathematics. [Online] Available at: http://www.math.utah.edu/~allenf/teaching/2017Spring/2270/lab8/IvpIntro.html

• "Numerical Methods for Differential Equations" by L. Ridgway Scott. University of Chicago, Department of Computer Science. [Online] Available at: https://people.cs.uchicago.edu/~ridg/newtonapplet/DEtext/html/deintro.html
