
MODULE 1

SOLUTION OF ALGEBRAIC AND TRANSCENDENTAL EQUATIONS

Introduction

Nature of numerical problems

Solving mathematical equations is an important requirement for various branches of science. The field of numerical analysis explores the techniques that give approximate solutions to such problems with the desired accuracy.
Computer based solutions

The major steps involved in solving a given problem using a computer are:

1. Modeling: setting up a mathematical model, i.e., formulating the problem in mathematical terms, taking into account the type of computer one wants to use.
2. Choosing an appropriate numerical method (algorithm), together with a preliminary error analysis (estimation of errors, determination of step size, etc.).
3. Programming, usually starting with a flowchart showing a block diagram of the procedures to be performed by the computer and then writing, say, a Python program.
4. Operation, or computer execution.
5. Interpretation of results, which may include a decision to rerun if further data are needed.
Errors
Numerically computed solutions are subject to certain errors.
Mainly there are three types of errors. They are inherent errors,
truncation errors and errors due to rounding.
1. Inherent errors, or experimental errors, arise due to the assumptions made in the mathematical modeling of the problem. They can also arise when the data are obtained from physical measurements of the parameters of the problem, i.e., they include errors arising from measurements.

2. Truncation errors are those errors corresponding to the fact that a
finite (or infinite) sequence of computational steps necessary to
produce an exact result is “truncated” prematurely after a certain
number of steps.
3. Round-off errors are errors arising from the process of rounding off numbers during computation. A related source of error is chopping, i.e., discarding all decimals from some decimal on.

Error in Numerical Computation

Due to the errors that we have just discussed, our numerical result is an approximate value of the (sometimes unknown) exact result, except for the rare case where the exact answer is a sufficiently simple rational number.
If ã is an approximate value of a quantity whose exact value is a, then the difference ε = a − ã is called the absolute error of ã or, briefly, the error of ã. Hence, a = ã + ε, i.e.
True value = Approximate value + Error.

For example, if ã = 10.52 is an approximation to a = 10.5, then the error is ε = a − ã = −0.02, with |ε| = 0.02. The relative error εᵣ of ã is defined by

εᵣ = |ε| / |a| = |Error| / |True value|.

For example, consider the value 1.4142 of √2 = 1.41421…, taken to four decimal places; then √2 = 1.4142 + Error. Taking 1.41421 as the true (exact) value,

|Error| = |1.41421 − 1.4142| = 0.00001,

and the relative error is

εᵣ = |ε| / |a| = 0.00001 / 1.4142.
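As a quick illustration, here is a short Python check of the absolute and relative error formulas above (a sketch we add here; the variable names are ours), using 1.41421 as the value taken as exact and 1.4142 as the approximation from the example.

# A quick check of the absolute and relative error formulas from the text.
a_true = 1.41421      # value taken as exact in the example
a_approx = 1.4142     # approximate value

abs_error = abs(a_true - a_approx)          # |Error|
rel_error = abs_error / abs(a_true)         # |Error| / |True value|

print(f"absolute error: {abs_error:.6f}")   # 0.000010
print(f"relative error: {rel_error:.2e}")   # about 7.1e-06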
Significant digit of a number c is any given digit of c, except
possibly for zeros to the left of the first nonzero digit that serve only to fix
the position of the decimal point. (Thus, any other zero is a significant
digit of c). For example, each of the numbers 1360, 1.360, 0.01360 has 4 significant digits.

Numerical Iteration Method


A numerical iteration method, or simply an iteration method, is a mathematical procedure that generates a sequence of improving approximate solutions for a class of problems. A specific way of implementing an iteration method, including the termination criteria, is called an algorithm of the iteration method. In problems of finding the solution of an equation, an iteration method uses an initial guess to generate successive approximations to the solution.
Since iteration methods involve repeating the same process many times, computers are well suited to finding solutions of equations numerically. Some of the iteration methods for finding solutions of equations are:
1. Bisection Method
2. Chord Method ( Regula Falsi method or Method of false position)
3. Newton-Raphson Method and modification
4. Secant Method
5. Combined Method
6. Fixed Point Iteration Method
A numerical method for solving equations may be a long process in some cases. If the method leads to values that approach the exact solution, then we say that the method is convergent. Otherwise, the method is said to be divergent.

Solution of Algebraic and Transcendental Equations

Algebraic and Transcendental Equations

f(x) = 0 is called an algebraic equation if the corresponding f(x) is a polynomial. An example is 7x² + x − 8 = 0.
f(x) = 0 is called a transcendental equation if f(x) contains trigonometric, exponential or logarithmic functions. Examples of transcendental equations are:
sin x − x = 0,  tan x − x = 0  and  7x³ + log(3x − 6) + 3eˣ cos(x) + tan(x) = 0.
There are two types of methods available to find the roots of
algebraic and transcendental equations of the form f (x) = 0.
1. Direct Methods: Direct methods give the exact value of the
roots in a finite number of steps. We assume here that there are no
round off errors. Direct methods determine all the roots at the same time.
2. Indirect or Iterative Methods: Indirect or iterative methods are based on the concept of successive approximations. The general procedure is to start with one or more initial approximations to the root and obtain a sequence of iterates xₖ which in the limit converges to the true root. Indirect or iterative methods determine one or two roots at a time. The indirect or iterative methods are further divided into two categories: bracketing and open methods. The bracketing methods require limits between which the root lies, whereas the open methods require an initial estimate of the solution. The bisection and chord (false position) methods are two well-known examples of the bracketing methods. Among the open methods, the Newton-Raphson method is the most commonly used. It is the most popular method for solving a non-linear equation and has a high rate of convergence to a solution.

Part 1

1.1 Isolation of real roots

One of the most common problems encountered in engineering analysis is the following: given a function f(x), find the values of x for which f(x) = 0. The solutions (values of x) are known as the roots of the equation f(x) = 0, or the zeroes of the function f(x). The roots of equations may be real or complex.
In general, an equation may have any number of (real) roots, or no roots at all. For example, sin(x) − x = 0 has a single root, namely x = 0, whereas tan(x) − x = 0 has an infinite number of roots (x = 0, ±4.493, ±7.725, …).

Any equation with one unknown can be written as


f (x)=0. (1.1)
The solution of (1.1) is such a value ξ (the root of the equation) for
which f (ξ)=0.
Only a few equations have formulas that can be used to calculate exact root values. In practice, we often encounter equations which cannot be solved by elementary methods. Moreover, in most engineering calculations it is impossible to talk about exact solutions, because the quantities involved are themselves approximate. Therefore, methods which allow calculating the roots of (1.1) arbitrarily closely are really important.
To solve an equation at the given accuracy level, you need to fulfill two basic steps:
1. Isolate the roots, i.e., find segments each of which contains one and only one root of (1.1).
2. Refine the approximate roots, i.e., calculate them to the required accuracy.
There are specific numerical methods which can be used to carry out each of these steps.
To isolate the roots of (1.1), we use the Bolzano-Cauchy theorem (the intermediate value theorem for continuous functions).
Any function f(x) continuous on the interval [a, b] which satisfies

f(a) * f(b) < 0

(f(a) and f(b) have opposite signs) must have at least one root in the interval [a, b] (a zero in the interval [a, b]). This root is the only one if the first derivative of f(x) exists and retains its sign inside the interval [a, b]:

sign(f′(x)) = const.

So an interval [a, b] must contain a zero of a continuous function f if the product f(a) * f(b) < 0. Geometrically (Fig. 1.1), this means that if f(a) * f(b) < 0, then the curve of f has to cross the X-axis at some point between a and b.

Fig. 1.1 Geometrical interpretation of the Bolzano-Cauchy theorem

Here are basic tabular and graphical methods which can be used to
isolate the roots of the equation.

1. Tabular method (enumerative technique).

First, find the signs of the function f(x) at a number of points x₁, x₂, x₃, … in the domain of definition of the function. If f(xₖ)·f(xₖ₊₁) < 0, then, according to the theorem above, f(x) = 0 has at least one root inside the interval [xₖ, xₖ₊₁]. Then check whether this is the only root. If f′(x) does not change sign in the interval [xₖ, xₖ₊₁], then the root is unique (because the function f(x) is monotonic there).

Example. Isolate all real roots of the equation x³ − 2.5x² − 1.2x + 3 = 0.

We determine the signs of the function f(x) = x³ − 2.5x² − 1.2x + 3 at a number of points:

x     | −3  −2  −1   0   1   2   3
f(x)  |  −   −   +   +   +   −   +

So, there are three segments at the ends of which f(x) has different signs: [−2, −1], [1, 2], [2, 3].
An algebraic equation of the third degree has three roots. Consequently, each of the three segments contains exactly one root of the equation.
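The sign scan described above is easy to automate. The following Python sketch (the helper name isolate_roots is our choice, not part of any library) reproduces the example for f(x) = x³ − 2.5x² − 1.2x + 3.

# A minimal sketch of the tabular (sign-change) method for isolating real roots.
def f(x):
    return x**3 - 2.5*x**2 - 1.2*x + 3

def isolate_roots(f, x_min, x_max, step=1.0):
    """Return the subintervals [x, x + step] on which f changes sign."""
    intervals = []
    x = x_min
    while x < x_max:
        if f(x) * f(x + step) < 0:
            intervals.append((x, x + step))
        x += step
    return intervals

print(isolate_roots(f, -3, 3))   # the three roots lie in (-2, -1), (1, 2) and (2, 3)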

2. Graphical method.

Its essence lies in plotting the graph of the function y = f(x) and using it to find the intervals which contain the intersection points of the graph with the X-axis (that is, the zeros of the function f(x)). If the equation has no roots that are close to one another in value, its roots can easily be separated by this method. Sometimes f(x) = 0 can be conveniently rewritten as φ(x) = ψ(x), where φ(x) and ψ(x) are simpler functions; then the intervals containing the intersection points of these functions can be found by plotting the curves y = φ(x) and y = ψ(x).
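A minimal plotting sketch for the graphical method is shown below. It assumes NumPy and matplotlib are available (the Python programs later in this module use pylab, which is part of matplotlib) and uses the same polynomial as the tabular example.

# A minimal sketch of the graphical method: plot y = f(x) and read off the
# intervals where the curve crosses the X-axis.
import numpy as np
import matplotlib.pyplot as plt

def f(x):
    return x**3 - 2.5*x**2 - 1.2*x + 3

x = np.linspace(-3, 3, 400)
plt.plot(x, f(x), label="f(x)")
plt.axhline(0, color="gray", linewidth=0.8)   # the X-axis
plt.xlabel("x")
plt.ylabel("f(x)")
plt.legend()
plt.show()   # the zero crossings lie near the intervals found by the tabular method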

Algebraic (polynomial) equation. In mathematics, an algebraic, or polynomial, equation is an equation of the form

P(x) ≡ a₀xⁿ + a₁xⁿ⁻¹ + a₂xⁿ⁻² + … + aₙ = 0,   (1.2)

where a₀, a₁, a₂, …, aₙ are real numbers and a₀ ≠ 0.
An algebraic equation of this form is univariate, which means that it involves only one variable.

The fundamental theorem of algebra states that an algebraic equation of the n-th degree (and, therefore, the polynomial P(x)) has precisely n roots, real or complex, provided that each root is counted as many times as its multiplicity.

Theorem 1. If the coefficients of the algebraic equation (1.2) are real, then the complex roots of this equation occur in complex conjugate pairs, i.e., if ξ = α + iβ (α, β real) is a root of (1.2) of multiplicity s, then ξ̄ = α − iβ is also a root of (1.2) and has the same multiplicity s.
Consequence. An algebraic equation of odd degree with real coefficients has at least one real root.

Theorem 2 can be used to roughly estimate the absolute values of the roots of (1.2).
Theorem 2. Let A = max{|a₁|, |a₂|, …, |aₙ|}, where aₖ are the coefficients of (1.2). Then the absolute values of all roots xₖ (k = 1, …, n) of (1.2) satisfy the inequality

|xₖ| < 1 + A/|a₀|,

i.e., in the complex plane ξ0η (x = ξ + iη), the roots of this equation lie inside the circle of radius 1 + A/|a₀| centered at the origin.
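The bound of Theorem 2 is easy to evaluate numerically. The short Python check below (our example, with variable names of our choosing) applies it to the polynomial from the tabular example.

# A small check of the root bound from Theorem 2 for
# P(x) = x**3 - 2.5*x**2 - 1.2*x + 3  (a0 = 1, a1 = -2.5, a2 = -1.2, a3 = 3).
coeffs = [1, -2.5, -1.2, 3]          # a0, a1, ..., an
A = max(abs(c) for c in coeffs[1:])  # A = max(|a1|, ..., |an|)
bound = 1 + A / abs(coeffs[0])       # |x_k| < 1 + A/|a0|

print(bound)   # 4.0 -- all roots lie inside the circle |x| < 4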

1.2 Bisection Method

Bisection Method is one of the simplest methods. Being based on


interval properties, it is also called the Interval Halving Method, the Binary
Search Method, and the Dichotomy Method.
The bisection method is a numerical method, or algorithm, used for finding values x such that f(x) = 0 for a given function f. Such an x is called a root of the function f. Numerical root-finding methods use iteration to produce sequences of numbers that (hopefully) converge towards a limit (the so-called "fixed point") which is a root. The first values of this sequence are initial guesses. The method computes subsequent values based both on the older ones and on the function f.
Let us consider a segment [a₀, b₀] that contains a unique root of f(x) = 0 and denote this root ξ. To solve the problem, we bisect the interval at the point (a₀ + b₀)/2. If f((a₀ + b₀)/2) = 0, then ξ = (a₀ + b₀)/2. If f((a₀ + b₀)/2) ≠ 0, we choose the half of the segment [a₀, b₀] at the ends of which the function f(x) has opposite signs, divide this new segment [a₁, b₁] in half, and repeat all the steps. As a result, at some step we either obtain the exact root of the equation or an infinite sequence of nested segments [a₀, b₀], [a₁, b₁], …, [aₙ, bₙ], … . It is proved that

lim_{n→∞} aₙ = lim_{n→∞} bₙ = ξ.

So, at each step, the method divides the interval in two by computing the midpoint cᵢ = (aᵢ + bᵢ)/2 (the i-th iteration) of the interval and the value of the function f(cᵢ) at that point.

Criterion for termination

Error Analysis. The maximum error after the i-th iteration of this process is

εᵢ = |b − a| / 2ⁱ   ⟹   i ≥ (log(b − a) − log εᵢ) / log 2.

Since at each iteration the interval is divided in half, we get εᵢ₊₁/εᵢ = 1/2. Thus, this method converges linearly.
The following formula can be used to determine the number of iterations that the bisection method needs in order to converge to a root within a certain tolerance ε:

n > ln((b − a)/ε) / ln 2.
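For instance, the required number of steps can be computed directly; the sketch below assumes the interval [1, 2] and the tolerance ε = 10⁻⁵ used later in the laboratory work.

# A quick computation of the number of bisection steps needed for a given tolerance.
import math

a, b, eps = 1.0, 2.0, 1e-5
n = math.ceil(math.log((b - a) / eps) / math.log(2))
print(n)   # 17 steps guarantee that the interval length drops below eps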

If we want to calculate the root of the equation to an accuracy ε, we keep dividing the segment [a₀, b₀] until the condition |bₙ − aₙ| < 2ε is satisfied. The approximate root value is taken as the midpoint of the segment [aₙ, bₙ]:

ξ ≈ (aₙ + bₙ)/2.

Some other termination criteria are as follows:
• termination after N steps (N given, fixed);
• termination if |f(cₙ)| ≤ α (α > 0 given).

Bisection Scheme

Given a function f(x) continuous on an interval [a, b] with f(a) * f(b) < 0:

Repeat
      c = (a + b)/2
      If f(c) = 0 then stop (c is the root)
      If f(a) * f(c) > 0 then a = c, else b = c
until |b − a| < 2ε (the convergence criterion is satisfied)

The bisection method is simple and reliable: it converges for any continuous function f(x) (including non-differentiable ones) to a simple root, and it is resistant enough to rounding errors, even though convergence is slow because only about one bit of accuracy is gained with each iteration.

Merits of the bisection method

• The iteration using the bisection method always produces a root, since the method brackets the root between two values.
• As iterations are conducted, the length of the interval gets halved. So, one can guarantee convergence to a solution of the equation.
• The bisection method is simple to program on a computer.

Demerits of the bisection method

• The convergence of the bisection method is slow, as it is simply based on halving the interval.
• The bisection method cannot be applied over an interval where there is a discontinuity.
• The bisection method cannot be applied over an interval where the function always takes values of the same sign.
• The method fails to determine complex roots.
• Even if one of the initial guesses a₀ or b₀ is close to the exact solution, the method may still take a large number of iterations to reach the root.

1.3 Laboratory work №1

USING THE BISECTION METHOD TO SOLVE ALGEBRAIC


EQUATIONS

In this laboratory work, you will study the bisection method and learn how to use it for calculating all real roots of an algebraic equation f(x) = 0 at the given accuracy ε = 10⁻⁵.

Procedure

1. Isolate all real roots of the given algebraic equation f(x) = 0. To solve this problem, you can use any analytical, graphical, or tabular method.
2. Refine one of these roots using the bisection method by hand (only five iterations).
3. Solve the problem of isolating and finding all real roots with an IF statement (MathCAD) using programming blocks at the given accuracy ε = 10⁻⁵.
4. Solve the problem of isolating and refining the roots using Python at the accuracy ε = 10⁻⁵.
5. Refine the roots by means of the built-in functions. Compare the obtained values and draw a conclusion.

Plan for students' reports

1. Procedure.
2. Theoretical background.
3. Hand calculations.
4. Calculations performed with MathCAD, Python.
5. Conclusions.

Variants

Variant | Equation                        | Variant | Equation
   1    | x³ − 2.8x² − 6.2x + 3.7 = 0     |   16    | x³ − 0.1x² − 4.6x + 2.2 = 0
   2    | x³ − 9.9x² + 3.5x − 31.9 = 0    |   17    | x³ − 5.9x² + 11.1x − 6.7 = 0
   3    | x³ + 0.3x² − 5.7x + 2.2 = 0     |   18    | x³ − 7.2x² + 16.5x − 11.9 = 0
   4    | x³ − 0.8x² − 6.8x + 0.7 = 0     |   19    | x³ + 4.1x² + 3.6x − 0.4 = 0
   5    | x³ − 0.9x² − 22x + 30.8 = 0     |   20    | x³ − 7.3x² + 13.5x − 5 = 0
   6    | x³ − 5.4x² + 2.5x + 12.5 = 0    |   21    | x³ − 6.7x² + 11.6x − 2.1 = 0
   7    | x³ − 4.7x² + 4.1x + 0.5 = 0     |   22    | x³ − 3.3x² + 1.6x + 1.7 = 0
   8    | x³ − 7.5x² + 15.9x − 7.8 = 0    |   23    | x³ − 2.1x² − 2.6x + 1.7 = 0
   9    | x³ − 4.4x² + 4.7x − 1.1 = 0     |   24    | x³ − 3.6x² + 3.3x − 0.5 = 0
  10    | x³ − 4.8x² + 3.3x + 5 = 0       |   25    | x³ + 0.5x² − 2.2x − 1.1 = 0
  11    | x³ − 2.7x² − 3.5x + 0.8 = 0     |   26    | x³ − 2.5x² − 0.49x + 1.2 = 0
  12    | x³ − 3.9x² + 4.4x − 1.4 = 0     |   27    | x³ − 2.2x² − 0.25x + 1 = 0
  13    | x³ − 4.2x² + 3.1x + 2.7 = 0     |   28    | x³ − 3.2x² + 0.51x + 1.6 = 0
  14    | x³ + 8.5x² + 2.8x + 15.6 = 0    |   29    | x³ − 2.3x² − 0.1x + 0.3 = 0
  15    | x³ − 1.6x² − 2.4x + 0.3 = 0     |   30    | x³ − 1.9x² − 2.6x + 2.8 = 0

1.4 Typical solution examples

Hand calculation

At the accuracy level ε = 0.05, calculate the real root of the equation

x³ + 2x = 6.

Let us denote f(x) = x³ + 2x − 6 (the equation must have the form f(x) = 0).
By isolating the roots, we find f(1) = −3 and f(2) = 6.
Consequently, a single root of the equation exists and lies within the interval [1, 2], because f′(x) = 3x² + 2 > 0 for all x.
Then we set a₀ = 1, b₀ = 2 and apply the bisection method: we divide the intervals until |bₙ − aₙ| ≤ 2ε (see Table 1.1).

Table 1.1

 n | aₙ (f(aₙ)<0) | bₙ (f(bₙ)>0) | bₙ − aₙ | (aₙ+bₙ)/2 | f((aₙ+bₙ)/2)
 0 | 1            | 2            | 1       | 1.5       | 0.37
 1 | 1            | 1.5          | 0.5     | 1.25      | −1.58
 2 | 1.25         | 1.5          | 0.25    | 1.375     | −0.65
 3 | 1.375        | 1.5          | 0.125   | 1.4375    | −0.15
 4 | 1.4375       | 1.5          | 0.0625  | 1.46875   |

Answer: ξ ≈ 1.46875 ≈ 1.47.

Note that f(x) may be computed roughly, since only the sign of this value is required.

MathCAD example

Let us consider f(x) := x² + 3x − 12. Its roots must be found at the accuracy level ε = 10⁻⁵.

1. Isolate the intervals. They are: a1 := −7, b1 := −4 and a2 := 1, b2 := 3.

2. Use the bisection method to solve the equation, with i := 0 .. 20 (iterations i from 0 to 20), starting from a₀ := −7, b₀ := −4. At each step the half-interval on which f changes sign is kept:

(aᵢ₊₁, bᵢ₊₁) := if( f(aᵢ)·f((aᵢ + bᵢ)/2) < 0, (aᵢ, (aᵢ + bᵢ)/2), ((aᵢ + bᵢ)/2, bᵢ) )

(a₂₀ + b₂₀)/2 = −5.274916,   f((a₂₀ + b₂₀)/2) = 7.894165·10⁻⁶

Note that in order to increase the accuracy, you need to increase the number of iterations; the required number of iterations can be estimated from the formula n > ln((b − a)/ε)/ln 2 given above.

3. Use the simplest programming block to solve the equation by the bisection method:

xxx1 := F(−7, −5, 10⁻⁵)    xxx1 = −5.27491
xxx2 := F(1, 3, 10⁻⁵)      xxx2 = 2.27491

Programming blocks which include input validation and counting of the number of iterations might look like the R(a, b, ε) block shown in the MathCAD-to-Python comparison below.
4. Solve the equation using built-in functions.

Below is the solution of the equation by means of the built-in function root (the internal solution accuracy is redefined by TOL := 10⁻⁵):

TOL := 10⁻⁵
xx1 := root(f(x), x, −7, −5)    xx1 = −5.27492
xx2 := root(f(x), x, −2, 4)     xx2 = 2.27492

as well as the solution by means of the built-in function polyroots applied to the coefficient vector of f(x) = x² + 3x − 12:

v := (−12  3  1)ᵀ
xxx := polyroots(v)    xxx = (−5.27492  2.27492)ᵀ
Python program.

Let us consider how to transform the MathCAD programming block into Python functions. On the MathCAD side the worksheet defines f(x) := x⁵ + 2x − 10 and a program block R(a, b, ε) which returns an error when f(a)·f(b) > 0 and otherwise bisects [a, b] while |b − a| > 2ε, replacing the end whose function value has the same sign as f(c), counting the iterations, and returning the midpoint c and the counter i. The equivalent Python code is split into three files:

lab_1.py
from dihotomia import dihotomia

a = float(input('a: '))
b = float(input('b: '))
eps = float(input('eps: '))
print(dihotomia(a, b, eps))

function.py
def f(x):
    return 2*x**5 + 4*x - 3

dihotomia.py
from function import f

def dihotomia(a, b, eps):
    if f(a) * f(b) > 0:
        return 0, 0
    else:
        c = 0
        i = 0
        while abs(b - a) > 2 * eps:
            c = (a + b) / 2
            if f(a) * f(c) < 0:
                b = c
            else:
                a = c
            i += 1
        return c, i

The Python program prints the computed root and the number of iterations.
Part 2

2.1 Chord method (Regula Falsi method, false position method,


method of proportional parts)

This method is also based on the intermediate value theorem and was developed to overcome the slowness of the bisection method. As before, we shall consider the equation

f(x) = 0,   (1.3)

where f(x) is a continuous function defined on the segment [a, b] with f(a)·f(b) < 0 (f(a) and f(b) have opposite signs).
Instead of dividing the interval [a, b] in halves (as is done in the bisection method), we divide it in the ratio f(a) : f(b). This gives an approximate value of the root x₀ = b + h₁, where

h₁ = −f(b)/(f(a) − f(b)) · (a − b) = −f(b)(a − b)/(f(a) − f(b)).
Then, applying this technique to the interval [a, x₀] or [x₀, b] (the one at the ends of which the function f(x) has opposite signs), we obtain the next approximation of the root x₁, etc.
Geometrically, the method of proportional parts is equivalent to replacing the curve y = f(x) by the chord passing through the points A(a, f(a)) and B(b, f(b)) (Fig. 1.2). That is why the method is called the chord method.

Fig. 1.2. Geometrical interpretation of the chord method

We know that the equation of the chord AB is

(x − b)/(a − b) = (y − f(b))/(f(a) − f(b)).

Hence, setting x = x₀ and y = 0, we get

x₀ = b − f(b)(a − b)/(f(a) − f(b)).
For the convergence of the chord method, the following conditions must be satisfied:
1. The end of the chord that is kept stationary is the one at which the sign of the function f(x) coincides with the sign of its second derivative f″(x);
2. The successive approximations xₙ lie on the side of the root ξ where f(x) has the sign opposite to the sign of its second derivative f″(x).
Here is the formula used to implement the method in the case of a fixed point a:

xₙ = xₙ₋₁ − f(xₙ₋₁)(a − xₙ₋₁)/(f(a) − f(xₙ₋₁)).

If the interval [a, b] is sufficiently small, the error of the method can be estimated as |ξ − xₙ| < |xₙ − xₙ₋₁|, where ξ is the root of the equation f(x) = 0.
Thus, |ξ − xₙ| < ε is guaranteed if |xₙ − xₙ₋₁| < ε, where ε is the given limiting absolute error.
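A minimal Python sketch of the fixed-end chord iteration defined by this formula is given below (the full laboratory implementation appears later in the function hord). The helper name chord and the test equation x³ + 2x − 6 = 0 from Part 1 are our choices, not part of the original text.

# A minimal sketch of the chord (false position) iteration with a fixed end a,
# following x_n = x_{n-1} - f(x_{n-1})*(a - x_{n-1}) / (f(a) - f(x_{n-1})).
def chord(f, a, x0, eps=1e-5, max_iter=100):
    x_prev, x = x0, x0
    for _ in range(max_iter):
        x_prev, x = x, x - f(x) * (a - x) / (f(a) - f(x))
        if abs(x - x_prev) < eps:     # |x_n - x_{n-1}| < eps
            break
    return x

# Example: x**3 + 2*x - 6 = 0 on [1, 2]; here a = 2 is kept fixed (f(2) > 0, f'' > 0).
print(chord(lambda x: x**3 + 2*x - 6, a=2.0, x0=1.0))   # about 1.4562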

2.2 Newton-Raphson Method (Newton's method, Newton's iteration)

The Newton-Raphson method, or Newton Method, is a powerful


technique for solving equations numerically. Like so much of the
differential calculus, it is based on the simple idea of linear approximation.
The Newton-Raphson method is a root-finding algorithm that uses
the first few terms of the Taylor series of a function f(x) in the vicinity of a
suspected root.
Let ξ (the root of f(x) = 0) be separated on the interval [a, b], and let f′(x) and f″(x) be continuous and keep definite signs for a ≤ x ≤ b.
Let ξ = xₙ + hₙ, where hₙ is small. Hence, using the Taylor formula, we get

0 = f(xₙ + hₙ) ≈ f(xₙ) + hₙ f′(xₙ).

Therefore,

hₙ = −f(xₙ)/f′(xₙ).

If we introduce this correction into the root refinement formula, we obtain the next approximation of the root:

xₙ₊₁ = xₙ − f(xₙ)/f′(xₙ)   (n = 0, 1, 2, …).

Newton’s method is geometrically equivalent to replacing a small arc


of the curve y = f(x) by a tangent drawn at some point of the curve
(Fig. 1.3).

Fig. 1.3 Geometrical interpretation of the Newton-Raphson method

Newton-Raphson Theorem. If f(a)f(b) < 0, and f′(x) and f″(x) are nonzero and keep their signs for a ≤ x ≤ b, then, starting from an initial approximation x₀ ∈ [a, b] that satisfies f(x₀)f″(x₀) > 0, the unique root ξ of f(x) = 0 can be computed to any degree of accuracy by Newton's method

xₙ₊₁ = xₙ − f(xₙ)/f′(xₙ).

When using the Newton-Raphson method, it is important to remember the following rule: the starting point x₀ should be the end of the interval [a, b] at which the ordinate has the same sign as f″(x).
The iteration termination condition is the inequality |xₙ − xₙ₋₁| < ε, where ε is the given limiting absolute error.

The function g(x) defined by the formula

g(x) = x − f(x)/f′(x)

is called the Newton-Raphson iteration function. Since f(p) = 0, it is easy to see that g(p) = p. Thus, the Newton-Raphson iteration for finding roots of f(x) = 0 is accomplished by finding a fixed point of g(x).

The method can be used for both algebraic and transcendental


equations, and it also works when coefficients or roots are complex. It
should be noted, however, that in the case of an algebraic equation with
real coefficients, a complex root cannot be reached with a real starting
value.

Example 1. Set up a Newton iteration for computing the square root of a given positive number. Using it, find the square root of 2 exact to six decimal places.

Let c be a given positive number and let x be its positive square root, so that x = √c. Then x² = c, or f(x) = x² − c = 0.

Using Newton's iteration formula, we have

xₙ₊₁ = xₙ − (xₙ² − c)/(2xₙ)

or

xₙ₊₁ = xₙ/2 + c/(2xₙ) = (1/2)(xₙ + c/xₙ),   n = 0, 1, 2, …

Now, to find the square root of 2, let c = 2, so that

xₙ₊₁ = (1/2)(xₙ + 2/xₙ),   n = 0, 1, 2, …

Choose x₀ = 1. Then

x₁ = 1.500000, x₂ = 1.416667, x₃ = 1.414216, x₄ = 1.414214, …

and accept 1.414214 as the square root of 2 exact to 6D.


Historical Note: Heron of Alexandria (60 CE?) used a pre-algebra version of
the above recurrence. It is still at the heart of computer algorithms for
finding square roots.
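A minimal Python sketch of this square-root iteration, with c = 2 and x₀ = 1 as in the example, might look as follows (the function name newton_sqrt is ours).

# A minimal sketch of the iteration x_{n+1} = (x_n + c/x_n)/2 derived above.
def newton_sqrt(c, x0=1.0, eps=1e-6):
    x = x0
    while True:
        x_new = 0.5 * (x + c / x)
        if abs(x_new - x) < eps:
            return x_new
        x = x_new

print(newton_sqrt(2))   # 1.4142135..., i.e. 1.414214 to six decimals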
Example 2. Let us find an approximation to √5 accurate to ten decimal places. Note that √5 is an irrational number, so the sequence of decimals which defines it does not terminate. Clearly, √5 is the only zero of f(x) = x² − 5 on the interval [1, 3]. Let xₙ be the successive approximations obtained through Newton's method.

Let us start the process by taking x₁ = 2.

x₁ = 2
x₂ = 2.25
x₃ = 2.236111111111111111111111111
x₄ = 2.236067977915804002760524499654934
x₅ = 2.236067977499789969447872828327110
x₆ = 2.236067977499789696409173668731276

Modified Newton's Method

If the derivative f′(x) changes little on the interval [a, b], then in the calculation formula of Newton's method we can put f′(xₙ) ≈ f′(x₀).
Hence, for the root ξ of f(x) = 0 we get the successive approximations

xₙ₊₁ = xₙ − f(xₙ)/f′(x₀)   (n = 0, 1, 2, …).

Geometrically, this method means that the tangents at the points Bₙ[xₙ, f(xₙ)] are replaced by lines parallel to the tangent to the curve y = f(x) at its fixed point B₀[x₀, f(x₀)] (Fig. 1.4). This formula is quite useful when f′(xₙ) is complicated to evaluate.

Fig. 1.4 Geometric interpretation of the modified Newton's method

2.3 The Secant Method

We have seen that the Newton-Raphson method requires the evaluation of derivatives of the function, and this is not always possible, particularly in the case of functions arising in practical problems. In the secant method, the derivative at xₙ is approximated by the formula

f′(xₙ) ≈ (f(xₙ) − f(xₙ₋₁))/(xₙ − xₙ₋₁).

Hence, the Newton-Raphson formula becomes

xₙ₊₁ = xₙ − f(xₙ)(xₙ − xₙ₋₁)/(f(xₙ) − f(xₙ₋₁)).

It should be noted that this formula requires two initial approximations to the root. The iteration termination condition is the inequality |xₙ − xₙ₋₁| < ε, where ε is the given limiting absolute error.
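A minimal Python sketch of the secant iteration is shown below; the function name secant and the test equation x³ + 2x − 6 = 0 are our choices, and the stopping rule is the inequality |xₙ − xₙ₋₁| < ε from above.

# A minimal sketch of the secant method:
# x_{n+1} = x_n - f(x_n)*(x_n - x_{n-1}) / (f(x_n) - f(x_{n-1})),
# which needs two starting values but no derivative.
def secant(f, x0, x1, eps=1e-5, max_iter=100):
    for _ in range(max_iter):
        x2 = x1 - f(x1) * (x1 - x0) / (f(x1) - f(x0))
        if abs(x2 - x1) < eps:
            return x2
        x0, x1 = x1, x2
    return x1

# Example: the same equation x**3 + 2*x - 6 = 0, starting from x0 = 1, x1 = 2.
print(secant(lambda x: x**3 + 2*x - 6, 1.0, 2.0))   # about 1.4562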

2.4 Combined Method

This method is used to calculate root values at a given accuracy as an alternative to the chord method and the Newton-Raphson method.
Let us use a₀ and b₀ to denote the ends of the segment containing the root of the equation, with a₀ standing for the end of the segment at which the signs of the function f(x) and its second derivative f″(x) are the same.
We draw the chord through the points (a₀, f(a₀)) and (b₀, f(b₀)); the intersection point of the chord with the X-axis is denoted b₁. If we draw the tangent to the curve y = f(x) at the point (a₀, f(a₀)), then a₁ is the intersection point of the tangent with the X-axis. Thus, we obtain a new segment with the ends a₁ and b₁ containing the root of the equation (Fig. 1.5). Similarly, we obtain the segment with the end points a₂ and b₂, etc.

Fig. 1.5 Geometrical interpretation of the combined method

Here are the combined-method calculations for the case shown in Fig. 1.5:

bₙ₊₁ = bₙ − Δbₙ,   Δbₙ = f(bₙ)(bₙ − aₙ)/(f(bₙ) − f(aₙ));
aₙ₊₁ = aₙ − Δaₙ,   Δaₙ = f(aₙ)/f′(aₙ),

where n = 0, 1, … .
If the root of the equation must be calculated to the accuracy ε, the computation is terminated at the moment when |bₙ − aₙ| < 2ε. As the answer we take the arithmetic mean of the last obtained values aₙ and bₙ:

ξ = (aₙ + bₙ)/2.

2.5 Error of numerical solutions of the equation

To estimate the approximation accuracy, we can use the formula

|xₙ − ξ| ≤ |f(xₙ)|/m₁,   where |f′(x)| ≥ m₁ > 0 for a ≤ x ≤ b.
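As a small numeric illustration of this bound (our example, not part of the original text): for the equation x³ + 2x − 6 = 0 on [1, 2] we have m₁ = min|f′(x)| = f′(1) = 5, and the bisection result xₙ ≈ 1.47 from Part 1 gives the following estimate.

# A small illustration of the a-posteriori bound |x_n - xi| <= |f(x_n)| / m1.
def f(x):
    return x**3 + 2*x - 6

x_n = 1.47              # approximate root from the bisection example in Part 1
m1 = 3 * 1**2 + 2       # minimum of |f'(x)| = 3x**2 + 2 on [1, 2], attained at x = 1
bound = abs(f(x_n)) / m1

print(round(bound, 4))  # about 0.0233: x_n lies within roughly 0.023 of the root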

There is another formula which can be used to estimate the absolute error of an approximate value xₙ if we know two successive approximations xₙ₋₁ and xₙ. We assume that the derivative f′(x) is continuous on an interval [a, b] (one containing all the approximations) and has constant sign, with

0 < m₁ ≤ |f′(x)| ≤ M₁ < +∞.

Moreover, we assume that the successive approximations xₙ of the exact root ξ are calculated as

xₙ = xₙ₋₁ − f(xₙ₋₁)(xₙ₋₁ − a)/(f(xₙ₋₁) − f(a))   (n = 1, 2, …),

where the end a is fixed. Hence, since f(ξ) = 0, we get

f(ξ) − f(xₙ₋₁) = (f(xₙ₋₁) − f(a))/(xₙ₋₁ − a) · (xₙ − xₙ₋₁).

By applying the Lagrange theorem on finite increments of a function to both sides, we obtain

(ξ − xₙ₋₁) f′(ξₙ₋₁) = (xₙ − xₙ₋₁) f′(x̄ₙ₋₁),

where ξₙ₋₁ ∈ (xₙ₋₁, ξ) and x̄ₙ₋₁ ∈ (a, xₙ₋₁).
Therefore,

|ξ − xₙ| = |f′(x̄ₙ₋₁) − f′(ξₙ₋₁)| / |f′(ξₙ₋₁)| · |xₙ − xₙ₋₁|.   (1.4)

Since f′(x) has constant sign on the interval [a, b], and x̄ₙ₋₁ ∈ [a, b], ξₙ₋₁ ∈ [a, b], we get

|f′(x̄ₙ₋₁) − f′(ξₙ₋₁)| ≤ M₁ − m₁.

From (1.4) it follows that

|ξ − xₙ| ≤ (M₁ − m₁)/m₁ · |xₙ − xₙ₋₁|,

where m₁ and M₁ can be taken as the lowest and highest values of the absolute value of the derivative f′(x) on the interval [a, b]. If the segment is so narrow that the inequality M₁ ≤ 2m₁ holds, we get

|ξ − xₙ| ≤ |xₙ − xₙ₋₁|.

Thus, |ξ − xₙ| < ε is guaranteed if |xₙ − xₙ₋₁| < ε, where ε is the given limiting absolute error.

2.6. Laboratory work №2

USING CHORD METHOD, NEWTON METHOD, MODIFIED NEWTON’S


METHOD, SECANT AND COMBINED METHODS TO SOLVE
ALGEBRAIC EQUATIONS

In this laboratory work, you will study numerical methods for refining approximate roots, including the chord method, the Newton-Raphson method, the modified Newton's method, and the secant and combined methods, and find out how to use these methods for calculating all real roots of an algebraic equation f(x) = 0 at the accuracy ε = 10⁻⁵.

Procedure

1. Isolate all real roots of the given algebraic equation f(x) = 0. To solve this problem, you can use any analytical, graphical, or tabular method.
2. Refine one of these roots using the combined method by hand (only five iterations).
3. Solve the problem of finding all real roots with a MathCAD programming block at the given accuracy ε = 10⁻⁵. Calculate and record the required number of iterations.
4. Solve the problem of isolating and refining the roots at ε = 10⁻⁵ by the given methods using Python. Calculate and record the required number of iterations.
5. Refine the roots by means of the built-in functions. Compare the obtained values and draw a conclusion about the rate of convergence.

Plan for students’ reports


1. Procedure.
2. Theoretical background.
3. Hand calculations.
4. Calculations performed with MathCAD, Python.
5. Conclusions.
Variants

Variant | Equation           | Variant | Equation
   1    | x⁵ + 2x − 10 = 0   |   16    | 3x³ + 2x − 2 = 0
   2    | x³ + 5x − 4 = 0    |   17    | x³ + 5x + 11 = 0
   3    | x³ + 7x + 3 = 0    |   18    | 2x³ + 4x − 15 = 0
   4    | 2x⁵ + 4x − 3 = 0   |   19    | 3x³ + x + 9 = 0
   5    | x³ + 8x + 13 = 0   |   20    | x⁵ + 7x − 5 = 0
   6    | 3x³ + x − 2 = 0    |   21    | x³ + 11x − 24 = 0
   7    | x⁵ + 5x − 20 = 0   |   22    | x⁵ + 2x − 1 = 0
   8    | x³ + x − 15 = 0    |   23    | 2x⁵ + 8x − 7 = 0
   9    | x³ + 2x + 7 = 0    |   24    | x³ + 3x − 10 = 0
  10    | 2x³ + 6x − 17 = 0  |   25    | x³ + 2x − 20 = 0
  11    | x³ + x − 18 = 0    |   26    | x⁵ + x + 1 = 0
  12    | 3x⁵ + 5x − 3 = 0   |   27    | 3x³ + 2x − 1 = 0
  13    | x³ + 2x − 30 = 0   |   28    | x³ + 7x + 9 = 0
  14    | 2x⁵ + x − 1 = 0    |   29    | 2x⁵ + x − 4 = 0
  15    | x⁵ + 7x + 4 = 0    |   30    | 2x³ + 7x − 8 = 0

2.7 Typical solution examples

Hand calculation
Let us calculate the real root of the equation x³ + 3x + 8 = 0 at the accuracy ε = 10⁻³.

We consider

f(x) = x³ + 3x + 8.

The interval [−2, −1] contains the root of this equation because at its ends the function f(x) has different signs:

f(−2) = −6,   f(−1) = 4.

The derivative f′(x) = 3x² + 3 > 0 for all x, so the equation x³ + 3x + 8 = 0 has a single real root and two complex ones. Thus, the interval [−2, −1] contains the only real root of the equation.

The second derivative is

f″(x) = 6x < 0 for −2 ≤ x ≤ −1.

So, the end x = −2 is the one at which the signs of f(x) and f″(x) coincide; we set b₀ = −2, a₀ = −1 (in this example the tangent, i.e. Newton, step is applied at bₙ and the chord step at aₙ).

Further calculations are shown in Table 1.2:

Table 1.2

For f(x) = x³ + 3x + 8: f(−2) = −6, f(−1) = 4, f′(x) = 3x² + 3 > 0, f″(x) = 6x < 0 for −2 ≤ x ≤ −1;

aₙ₊₁ = aₙ − Δaₙ,   Δaₙ = f(aₙ)(bₙ − aₙ)/(f(bₙ) − f(aₙ));
bₙ₊₁ = bₙ − Δbₙ,   Δbₙ = f(bₙ)/f′(bₙ).

 n | aₙ       | bₙ       | bₙ − aₙ  | f(aₙ)  | f(bₙ)   | f(bₙ) − f(aₙ) | f′(bₙ) | Δaₙ     | Δbₙ
 0 | −1       | −2       | −1       | 4      | −6      | −10           | 15     | 0.4     | −0.4
 1 | −1.4     | −1.6     | −0.2     | 1.056  | −0.896  | −1.952        | 10.68  | 0.1082  | −0.0839
 2 | −1.5082  | −1.5161  | −0.0079  | 0.0447 | −0.0331 | −0.0778       | 9.8956 | 0.00455 | −0.0033
 3 | −1.51273 | −1.51275 | −0.00001 |        |         |               |        |         |

Answer: ξ ≈ (−1.512738 − 1.512750)/2 ≈ −1.5127.
MathCAD example programs

Let us calculate the real root of the equation

2x⁵ + 4x − 3 = 0

at the accuracy ε = 10⁻³.

Let us define in the worksheet:

f(x) := 2x⁵ + 4x − 3
f1(x) := d/dx f(x) = 10x⁴ + 4
f2(x) := d²/dx² f(x) = 40x³

(The worksheet also plots f(x); the curve crosses the X-axis between 0.6 and 0.8.)
Function hord refines the root of the equation y = f(x) on the interval [a, b] to within the given error ε (eps).
Input parameters:
• a, b — the end points of the root isolation interval;
• ε (eps) — the given error.
Output parameters:
• the two last approximations of the root;
• the number of iterations needed to satisfy the accuracy condition.
In the program block, we use
• f(x) as the function of the equation;
• f1(x) as the first derivative of f(x);
• f2(x) as the second derivative of f(x).
These functions must be defined before the main program block as user functions.
For the chord method, the fixed point is defined by the following rule: the fixed end is the one at which the sign of the function f(x) coincides with the sign of its second derivative f″(x). This value is stored in the variable fix.

The MathCAD block hord(a, b, ε) chooses the moving point x as the other end (the one where f(x)·f2(x) < 0), sets prev_x := fix, and repeats

x := x − f(x)·(fix − x)/(f(fix) − f(x))

while |prev_x − x| > ε, counting the iterations. For the example above it returns

hord(a, b, ε) = (0.67823, 0.678236, 11)ᵀ,

i.e. the two last approximations of the root and 11 iterations.
Newton-Raphson Method and Modified Newton's Method

The block newton1(a, b, ε) reports an error if f(a)·f(b) > 0; otherwise it chooses the starting point by the same sign test with f2, stores the other end in fix, and repeats the Newton step

x := prev_x − f(x)/f1(x)

while |x − prev_x| > ε. For the example it returns

newton1(a, b, ε) = (0.67824, 5)ᵀ,

i.e. the root and 5 iterations.

The block newton2(a, b, ε) implements the modified Newton's method: the derivative is evaluated only once, at the fixed point, and the step

x := prev_x − f(x)/f1(fix)

is repeated. For the example it returns

newton2(a, b, ε) = (0.67823, 20)ᵀ,

i.e. the root and 20 iterations.

MathCAD functions for solving an algebraic equation by the combined method and by the secant method

The block combinated(a, b, ε) keeps the two current ends A and B and, while |B − A| > 2ε, applies the chord step to B and the Newton step to A:

B := B − f(B)·(B − A)/(f(B) − f(A)),   A := A − f(A)/f1(A),

returning ((A + B)/2, i). For the example it gives

combinated(a, b, ε) = (0.67824, 4)ᵀ,

i.e. the root after 4 iterations.

The block seq(a, b, ε) implements the secant method: after one starting step x := x − f(x)/f1(prev_x), it repeats

x := x − f(x)·(x − prev_x)/(f(x) − f(prev_x))

while |x − prev_x| > ε, returning (x, i). For the example it gives

seq(a, b, ε) = (0.67824, 6)ᵀ,

i.e. the root after 6 iterations.
Python program

Lab1.py
from methods import hord
from methods import newton1
from methods import newton2
from methods import seq
from methods import combined

a = float(input('a: '))
b = float(input('b: '))
eps = float(input('eps: '))
ans = hord(a, b, eps)
print("hord: %.5f" % (ans))
ans = newton1(a, b, eps)
print("newton1: %.5f" % (ans))
ans = newton2(a, b, eps)
print("newton2: %.5f" % (ans))
ans = seq(a, b, eps)
print("seq: %.5f" % (ans))
ans = combined(a, b, eps)
print("combined: %.5f" % (ans))

Func.py
def f(x):
    return 2 * x ** 5 + 4 * x - 3

def f1(x):
    return 10 * x ** 4 + 4

def f2(x):
    return 40 * x ** 3

Methods.py
from func import f
from func import f2
from func import f1

def hord(a, b, eps):
    if f(a) * f(b) > 0:
        return 0, 0
    else:
        i = 0
        if f(a) * f2(a) < 0:
            x = a
        else:
            x = b
        if x == a:
            fix = b
        else:
            fix = a
        prev_x = fix
        while abs(prev_x - x) > eps:
            prev_x = x
            x = x - (f(x) * (fix - x)) / (f(fix) - f(x))
            i += 1
        return x

def newton1(a, b, eps):
    if f(a) * f(b) > 0:
        return 0, 0
    else:
        i = 0
        if f(a) * f2(a) < 0:
            x = a
        else:
            x = b
        if x == a:
            fix = b
        else:
            fix = a
        prev_x = fix
        while abs(prev_x - x) > eps:
            prev_x = x
            x = prev_x - f(x) / f1(x)
            i += 1
        return x

def newton2(a, b, eps):
    if f(a) * f(b) > 0:
        return 0, 0
    else:
        i = 0
        if f(a) * f2(a) < 0:
            x = a
        else:
            x = b
        if x == a:
            fix = b
        else:
            fix = a
        prev_x = fix
        while abs(prev_x - x) > eps:
            prev_x = x
            x = prev_x - f(x) / f1(fix)
            i += 1
        return x

def seq(a, b, eps):
    if f(a) * f(b) > 0:
        return 0, 0
    else:
        i = 0
        if f(a) * f2(a) < 0:
            x = a
        else:
            x = b
        if x == a:
            prev_x = b
        else:
            prev_x = a
        x = x - f(x) / f1(prev_x)
        while abs(prev_x - x) > eps:
            dumb = x
            x = x - (f(x) * (x - prev_x)) / (f(x) - f(prev_x))
            prev_x = dumb
            i += 1
        return x

def combined(a, b, eps):
    if f(a) * f(b) > 0:
        return 0, 0
    else:
        i = 0
        if f(a) * f2(a) > 0:
            dumb = a
            a = b
            b = dumb
        while abs(b - a) > 2 * eps:
            b = b - (f(b) * (b - a)) / (f(b) - f(a))
            a = a - f(a) / f1(a)
            i += 1
        return (a + b) / 2

Results:

Part 3

3.1 Fixed Point Iteration Method

Let us consider the equation

f(x) = 0,   (1.5)

where f(x) is a continuous function. We want to calculate the real root located in the interval [a, b]. We reduce f(x) = 0 to the equivalent form

x = φ(x),   (1.6)

where φ(x) is a continuous function on the interval [a, b].
Let us select an arbitrary x₀ ∈ [a, b] and substitute it into the right side of (1.6): x₁ = φ(x₀).
Similarly, we get an iterative sequence:

x₂ = φ(x₁);
x₃ = φ(x₂);
…
xₙ = φ(xₙ₋₁).

It is proved that if the iterative sequence x₀, x₁, …, xₙ, … is convergent, then its limit is a root of both (1.5) and (1.6), which are equivalent.
A fixed point of a function φ(x) is a number p such that p = φ(p). A fixed point is not a root of φ(x) = 0; it is a solution of x = φ(x).
Geometrically, the fixed points of a function φ(x) are the intersection points of the curve y = φ(x) and the line y = x.
Fixed-point iteration is the iteration pₙ₊₁ = φ(pₙ) for n = 0, 1, 2, … .
Theorem. Assume that φ(x) ∈ C[a, b].
1. If the range of the mapping y = φ(x) satisfies y ∈ [a, b] for all x ∈ [a, b], then φ(x) has a fixed point in [a, b].
2. If, moreover, φ′(x) is defined over [a, b] and a positive constant M < 1 exists with |φ′(x)| ≤ M < 1 for all x ∈ [a, b], then φ(x) has a unique fixed point p in [a, b].
So, to make the iterative process converge, we reduce the original equation f(x) = 0 to the form x = φ(x) in which the condition

|φ′(x)| ≤ M < 1   (1.7)

is satisfied for a ≤ x ≤ b.
Note that then the iterative sequence converges independently of the choice of x₀.
The iterations have a geometric interpretation. A solution of (1.6) is the abscissa ξ of the intersection point of the line y = x and the curve y = φ(x). Geometrically, it is clear that if 0 < φ′(x) ≤ M < 1 holds in a vicinity of the solution, then the sequence {xₙ} converges to ξ (the root of f(x) = 0) monotonically and lies on the side on which the first approximation is located (Fig. 1.6). For −1 < −M ≤ φ′(x) < 0, the successive approximations are alternately arranged on different sides of the solution ξ (Fig. 1.7).

Fig. 1.6 Monotone convergence for 0 < φ′(x) ≤ M < 1.   Fig. 1.7 Oscillating convergence for −1 < φ′(x) < 0.

The equation f(x) = 0 can be transformed to the form x = φ(x) in different ways, but the function φ(x) must satisfy condition (1.7).

For example, the equation f(x) = 0 can be replaced by the equivalent equation x = x − λf(x). In this case, φ(x) = x − λf(x). The parameter λ is chosen so that

|φ′(x)| = |1 − λf′(x)| < 1 for a ≤ x ≤ b.

The convergence rate of the iteration process is estimated by

|ξ − xₙ| ≤ Mⁿ/(1 − M) · |x₁ − x₀|,

where ξ is the exact solution of the equation and M is a positive constant, M < 1, with |φ′(x)| ≤ M < 1 for all x ∈ [a, b].

An error estimate for the fixed-point iteration method is: the accuracy |xₙ₊₁ − ξ| ≤ ε is guaranteed as soon as

|xₙ₊₁ − xₙ| ≤ (1 − M)/M · ε,

where ε is the given solution accuracy.

In particular, when 0 < φ′(x) ≤ 1/2 or −1 < φ′(x) < 0 on [a, b], the value xₙ₊₁ approximates the root ξ to the accuracy ε as soon as |xₙ₊₁ − xₙ| < ε.
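A minimal Python sketch of the fixed-point iteration with the stopping rule |xₙ₊₁ − xₙ| < ε is given below (the helper name fixed_point is ours). It uses the reduction x = (3 − x³)/10 of the equation x³ + 10x − 3 = 0 that appears in the hand-calculation example of Section 3.3.

# A minimal sketch of the fixed-point iteration x_{n+1} = phi(x_n).
def fixed_point(phi, x0, eps=1e-5, max_iter=100):
    x = x0
    for i in range(max_iter):
        x_new = phi(x)
        if abs(x_new - x) < eps:
            return x_new, i + 1
        x = x_new
    return x, max_iter

root, iterations = fixed_point(lambda x: (3 - x**3) / 10, x0=0.5)
print(root, iterations)   # about 0.29737, reached in a few iterations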

3.2. Laboratory work №3

USING FIXED-POINT ITERATION METHOD TO SOLVE ALGEBRAIC


AND TRANSCENDENTAL EQUATIONS

In this laboratory work, you will study the fixed-point iteration method and find out how to calculate approximate real roots of given equations f(x) = 0 (algebraic and transcendental) with its help. Calculations are carried out at the accuracy 10⁻⁵.

Procedure

1. Separate all real roots of an algebraic equation (take variants from


Lab 2) and find the segment containing the smallest positive real root for
the transcendental equation.
2. Reduce the equations to a form suitable for the fixed-point iteration method.
3. Refine the root of the algebraic equation by hand (only 5 iterations).
4. Solve the problem of refining the roots by the fixed-point iteration method in MathCAD (use as few iterations as possible).
5. Solve the problem of refining the roots by the fixed-point iteration method in Python.
6. Compare the obtained results. Check whether the results are correct by means of built-in functions.

Plan for student’s reports


1. Procedure.
2. Theoretical background, including convergence conditions and
geometric interpretation of the fixed-point iteration method.
3. Hand calculations including the procedure of bringing the equations
to a form for which the fixed-point iteration method can be applied.
4. MathCAD and Python calculations of the successive approximations xₙ (n = 1, 2, …) until |xₙ − xₙ₋₁| < 10⁻⁵ is fulfilled.
5. Result verification by means of built-in functions.
Variants
Transcendental equations

Variant | Equation                       | Variant | Equation
   1    | tan(1.58x) − 2.31x = 0         |   16    | tan(9.15x) − 13.36x = 0
   2    | ln(7.6x) − 8.59x + 0.5 = 0     |   17    | 4.33 sin(4.0x) − 3.5x = 0
   3    | 9.33 sin(6.977x) − 7.5x = 0    |   18    | ln(3.05x) − 3.4x + 2.5 = 0
   4    | 0.973e^(−0.51x) − x = 0        |   19    | 0.93e^(−0.52x) − x = 0
   5    | 7.67 sin(6.0x) − 7.25x = 0     |   20    | tan(6.0x) − 8.75x = 0
   6    | ln(6.1x) − 6.87x + 1.0 = 0     |   21    | 2.67 sin(3.04x) − 2.25x = 0
   7    | 0.833e^(−0.545x) − x = 0       |   22    | 3.7 cos(3.4x) − 5.25x = 0
   8    | ln(4.57x) − 5.1x + 1.5 = 0     |   23    | eˣ + 2 sin x = 0
   9    | 6.67 sin(5.4x) − 5.25x = 0     |   24    | x² − cos x = 0
  10    | tan(2.21x) − 3.23x = 0         |   25    | 0.167e^(−0.867x) − x = 0
  11    | ln(1.52x) − 1.7x + 3.0 = 0     |   26    | cos(x) − 1/(x − 2) = 0
  12    | 0.731e^(−0.58x) − x = 0        |   27    | 4x² − cos x − 4 = 0
  13    | 5.67 sin(4.8x) − 4.5x = 0      |   28    | e^(−x) − ln x = 0
  14    | tan(3.8x) − 5.53x = 0          |   29    | sin x − 1/x = 0
  15    | ln(3.96x) − 4.9x + 2.0 = 0     |   30    | 2.67 sin(3.04x) − 2.25x = 0

3.3. Typical solution examples

Hand calculation

Example 1. Reduce the equation x³ + 10x − 3 = 0 to a form for which the fixed-point iteration method can be applied.

The only real root of the given equation is in the interval [0, 1], because f(0) = −3, f(1) = 8, and the first derivative is positive.
Reduce the original equation to the form x = (3 − x³)/10. In this case,

φ(x) = (3 − x³)/10.

Then φ′(x) = −3x²/10, and |φ′(x)| = 3x²/10 ≤ 3/10 < 1 for 0 ≤ x ≤ 1.

Thus, a sufficient condition for convergence of the iterative process is fulfilled, and the fixed-point iteration method can be used to solve this equation, selecting any x₀ ∈ [0, 1], e.g., x₀ = 1/2.

Example 2. Reduce the equation 2x³ + 4x − 11 = 0 to a form for which the fixed-point iteration method can be applied.

The unique root of the given equation lies in the interval [1, 2].
First try reducing the original equation to the form x = (11 − 2x³)/4. In this case,

φ(x) = (11 − 2x³)/4,   φ′(x) = −6x²/4,

and |φ′(x)| = 6x²/4 > 1 for 1 ≤ x ≤ 2.

Condition (1.7) for convergence of the iterative process is not fulfilled, so this form cannot be used with the fixed-point iteration method.

Instead, replace the original equation with the equivalent one:

x = x − λ(2x³ + 4x − 11).

In this case, φ(x) = x − λ(2x³ + 4x − 11), so φ′(x) = 1 − λ(6x² + 4).

The parameter λ is found from the condition

|φ′(x)| ≤ M < 1 for 1 ≤ x ≤ 2,

that is, −1 < 1 − λ(6x² + 4) < 1, or 0 < λ(6x² + 4) < 2, for 1 ≤ x ≤ 2.

Substituting x = 1 into the inequality gives 0 < λ < 1/5, and x = 2 gives 0 < λ < 1/14.

Let us take λ = 1/20 from the interval 0 < λ < 1/14. The original equation is then transformed into

x = x − 0.05(2x³ + 4x − 11),

and |φ′(x)| = |1 − 0.05(6x² + 4)| ≤ 0.5 < 1 for 1 ≤ x ≤ 2.

Select any x₀ ∈ [1, 2].

Let us choose x₀ = 1.5 and get x₁ = φ(x₀) = 1.5 − 0.05·1.75 = 1.4125. By substituting x₁ into the right side of the equation, we get x₂, etc. The calculation is continued until the inequality |xₙ − xₙ₋₁| < 10⁻⁵ is fulfilled.

MathCAD example program

Example 1. For the algebraic equation we define

f(x) := 2x⁵ + 4x − 3
f1(x) := d/dx f(x) = 10x⁴ + 4

The root isolation interval, the accuracy and the λ coefficient of the reduced equation x = φ(x) = x + λ·f(x) are

a := 0,  b := 1,  ε := 10⁻⁵
λ := if( |f1(a)| > |f1(b)|, −1/f1(a), −1/f1(b) )    λ = −0.07143
φ(x) := x + λ·f(x)

Solving by the fixed-point iteration method and verifying the solution with a built-in function: the block iter(a, φ, ε) starts from x0 := a and repeats x1 := φ(x0) while |x1 − x0| > ε, counting the iterations. It returns

iter(a, φ, ε) = (21, 0.67823)ᵀ,

i.e. 21 iterations and the root x ≈ 0.67823. With TOL := 10⁻⁵, the built-in function gives

root(f(x), x, a, b) = 0.67824.

Example 2. For the transcendental equation we define

f_t(x) := 0.973·e^(−0.51x) − x
f1_t(x) := d/dx f_t(x) = −0.49623·e^(−0.51x) − 1

The root isolation interval, the accuracy and the λ coefficient of the reduced equation are

a := 0,  b := 1,  ε := 10⁻⁵
λ_t := if( |f1_t(a)| > |f1_t(b)|, −1/f1_t(a), −1/f1_t(b) )    λ_t = 0.66835
φ_t(x) := x + λ_t·f_t(x)

Solving by the fixed-point iteration method and verifying the solution with a built-in function:

iter_t(a, φ_t, ε) = (6, 0.68582)ᵀ,

i.e. 6 iterations and the root x ≈ 0.68582. With TOL := 10⁻⁵, the built-in function gives

root(f_t(x), x, a, b) = 0.68582.

Python example program

main.py
from tkinter import *
from tkinter import messagebox
from sympy import *
import pylab
import math
import numpy as np
import PIL.Image
import PIL.ImageTk
import os

def func(f, x):
    num = x
    x = symbols('x')
    return f.subs(x, num)

def u(f, x, _lambda):
    return x + _lambda * (func(f, x))

def func1(f, x):
    num = x
    x = symbols('x')
    expr = diff(f, x)
    return expr.subs(x, num)

def iterations(f, a, _lambda, eps, n):
    x0 = a
    x1 = u(f, x0, _lambda)
    i = 1
    while (abs(x0 - x1) > eps):
        x0 = x1
        x1 = u(f, x0, _lambda)
        i += 1
        if (i > 1000):
            return " Method does not work correctly !"
    template = '{:.' + str(n) + 'f}'
    str1 = template.format(x1)
    str2 = str(i)
    str3 = "Root of the equation: " + str1 + "\n Number of iterations: " + str2
    return str3

# output processing

def show_res():
    a = float(min_entry.get())
    b = float(max_entry.get())
    if (a > b):
        a, b = b, a
    n = int(eps_entry.get())
    eps = 10 ** (-n)
    _str = func_entry.get()
    f = sympify(_str)

    if (abs(func1(f, a)) > abs(func1(f, b))):
        l = -1 / func1(f, a)
    else:
        l = -1 / func1(f, b)

    if islambda.get() == True:
        _lambda = float(lambda_entry.get())
    else:
        _lambda = l

    show_graph(u, f, _lambda)
    messagebox.showinfo("Fixed Point Iteration Method",
                        iterations(f, a, _lambda, eps, n))

# function plotting

def show_graph(u, f, _lambda):
    a = float(min_entry.get())
    b = float(max_entry.get())
    if (a > b):
        a, b = b, a
    xlist = np.arange(a, b, 0.05)
    ylist1 = [u(f, x, _lambda) for x in xlist]
    ylist2 = [x for x in xlist]
    pylab.plot(xlist, ylist1)
    pylab.plot(xlist, ylist2)
    pylab.savefig('graph.png')
    pylab.cla()
    pylab.clf()
    img = PIL.Image.open("graph.png")
    photo = PIL.ImageTk.PhotoImage(img)
    lbl = Label(root, image=photo)
    lbl.image = photo
    lbl.place(x=300, y=20)

root = Tk()
root.title("Lab work №3")
root.geometry("1000x560")

# choice for lambda
islambda = BooleanVar()
islambda.set(0)
islambda_checkbutton = Checkbutton(root, text=" input lambda from the keyboard ",
                                   variable=islambda)

# input fields
min_label = Label(text="from:")
max_label = Label(text="to:")
eps_label = Label(text=" Fractional decimal places: ")
func_label = Label(text="function:")
lambda_label = Label(text="lambda:")
func_entry = Entry()
min_entry = Entry()
max_entry = Entry()
eps_entry = Entry()
lambda_entry = Entry()

# location of fields, buttons, etc.
res = Button(text="Calculate", background="#555", foreground="#ccc",
             font="16", command=show_res)

# graph = Button(text=" Plot function graphs", background="#555",
#                foreground="#ccc", font="16", command=show_graph)
res.place(x=18, y=200)

# graph.place(x=18, y=230)
min_label.place(x=16, y=20)
max_label.place(x=16, y=50)
eps_label.place(x=1, y=80)
func_label.place(x=18, y=110)
lambda_label.place(x=18, y=140)
min_entry.place(x=100, y=20)
max_entry.place(x=100, y=50)
eps_entry.place(x=170, y=80)
func_entry.place(x=100, y=110)
lambda_entry.place(x=100, y=140)
islambda_checkbutton.place(x=20, y=160)

# islambda_checkbutton.pack()

root.mainloop()

Output results:

4. Questions and exercises for Module 1
Questions:
1. What is the difference between algebraic and transcendental equations?
2. Why do we use numerical iterative methods for solving equations?
3. On which principle are the bisection and regula-falsi methods based?
4. What are the advantages and disadvantages of bracketing methods like bisection and regula-falsi?
5. What is the difference between a bracketing and an open method?
6. What is the importance of the secant method over the Newton-Raphson method?
7. How is the accuracy condition satisfied in each method?
8. Give a geometric interpretation of the methods that approximate the root from only one side.
9. Give a geometric interpretation of the methods that approximate the root from both sides.
10. Which methods use the values of the first derivative?
11. Which methods use the values of the second derivative?
12. How can the required number of iterations be determined from the accuracy condition before performing the calculations? For which method is this possible?

Exercises:
1. Approximate to four decimal places the real root of
x³ + 5x − 3 = 0.
Approximate ∛3 to four decimal places.
2. Find a positive root of the equation
x⁴ + 2x + 1 = 0
(choose x₀ = 1.3), correct to 4 decimal places.
3. Explain how to determine the square root of a real number by the Newton-Raphson method, and use it to determine the square root correct to three decimal places.
4. Find the value of √2 correct to four decimal places using the Newton-Raphson method.
5. Use the Newton-Raphson method, with 3 as the starting point, to find a fraction that is within 10⁻⁸ of √10.
6. Calculate √7 by Newton's iteration, starting from x₀ = 2 and calculating x₁, x₂, x₃. Compare the results with the value 2.645751.
7. Design a Newton iteration for computing the k-th root of a positive number c.
8. Find all real solutions of the following equations by Newton's iteration method, the chord method and the combined method:
a) sin(x) = x   b) ln(x) = 1 − 2x   c) cos(x) = √x

9. Using the Newton-Raphson method and the modified Newton-Raphson method, find the root of the equation
x³ − x² − x − 3 = 0,
correct to three decimal places.

10. Apply the secant method to the equation
x³ − 5x + 3 = 0,
starting from the given x₀ = 2 and performing 3 steps.

11. Apply the fixed-point iteration method to the equation
x⁴ − x³ − 2x − 34 = 0,
starting from the given x₀ = 3 and performing 3 steps.
12. Apply the combined method to the equation
x³ − 3.9x² + 4.79x − 1.881 = 0,
starting from the given x₀ = 1 and performing 3 steps.

Examples of answers:
1. Ans: An equation f (x)=0 is called an algebraic equation if the
corresponding f (x) is a polynomial, while, f (x)=0 is called transcendental
equation if the f (x) contains trigonometric, or exponential or logarithmic
functions.
2. Ans: As analytic solutions are often either too tiresome or simply do not
exist, we need to find an approximate method of solution. This is where
numerical analysis comes into the picture.
3. Ans: These methods are based on the intermediate value theorem for continuous functions, stated as: "If f is a continuous function and f(a) and f(b) have opposite signs, then at least one root lies between a and b. If the interval is small enough, it is likely to contain a single root."
4. Ans: (i) The bisection and regula-falsi methods are always convergent. Since these methods bracket the root, they are guaranteed to converge. The main disadvantage is that if it is not possible to bracket the root, the methods are not applicable. For example, if f(x) is such that it always takes values of the same sign, say always positive or always negative, then we cannot work with the bisection method. Some examples of such functions are
• f(x) = x², which takes only non-negative values, and
• f(x) = −x², which takes only non-positive values.
5. Ans: For finding roots of a nonlinear equation f(x) = 0, a bracketing method requires two guesses which bracket the exact root. In an open method, only an initial guess of the root is needed, without any bracketing condition, to start the iterative process of finding the solution of the equation.
6. Ans: Newton-Raphson method requires the evaluation of derivatives of the
function and this is not always possible, particularly in the case of functions
arising in practical problems. In such situations Secant method helps to solve
the equation with an approximation to the derivative.
Contents

Module 1
Introduction
1. Part 1
1.1. Isolation of real roots
1.2. Bisection Method
1.3. Laboratory work №1 Using the Bisection Method to Solve Algebraic Equations
1.4. Typical solution examples
2. Part 2
2.1 Chord Method (Method of false position)
2.2 Newton-Raphson Method and modification
2.3 Secant Method
2.4 Combined Method
2.5 Error of numerical solutions of the equation
2.6 Laboratory work №2 Using Chord Method, Newton-Raphson Method, and Combined Method to Solve Algebraic Equations
2.7 Typical solution examples
3. Part 3
3.1 Fixed Point Iteration Method
3.2 Laboratory work №3 Using Fixed-point Iteration Method to Solve Algebraic and Transcendental Equations
3.3 Typical solution examples
4. Questions and exercises for Module 1
