
Chapter Two

Nonlinear equations

2.1 Locating roots


The roots of a function f(x) are defined as the values of x for which the function becomes equal to zero. So, finding the roots of f(x) means solving the equation f(x) = 0.

Example

If f(x) = ax^2 + bx + c is a quadratic polynomial, the roots are given by the well-known formula

x = (-b ± √(b^2 - 4ac)) / (2a).

Numerical methods are used when


• there is no explicit formula for the root,
• the formula is too complex, or
• f is a “black box”.

Let f(x) be a real-valued function of a real variable x. Suppose that

• f(x) is continuous on an interval [a, b], and

• f(a)f(b) < 0.

Then f(x) changes sign on [a, b], and f(x) = 0 has at least one root in the interval.

2.2 Bisection Methods


The bisection method is one of the bracketing methods for finding roots of an equation. For a given function f(x), we guess an interval which might contain a root and perform a number of iterations; in each iteration the interval containing the root is halved.

The bisection method is based on the intermediate value theorem for continuous functions.

Intermediate value theorem for continuous functions:


If f is a continuous function and f(a) and f(b) have opposite signs, then at least one root lies between a and b. If the interval (a, b) is small enough, it is likely to contain a single root.

That is, an interval [a, b] must contain a zero of a continuous function f if the product f(a)f(b) < 0. Geometrically, this means that if f(a)f(b) < 0, then the curve of f has to cross the x-axis at some point between a and b.



Suppose we want to find the solution to the equation f(x) = 0, where f is continuous. Given a function f(x) continuous on an interval [a0, b0] and satisfying f(a0)f(b0) < 0.

For n = 0, 1, 2, … until termination do:

Compute the midpoint

xn = (an + bn) / 2.

If f(xn) = 0, accept xn as a solution and stop. Otherwise, if f(an)f(xn) < 0, the root lies in [an, xn], so continue with this as the new interval; else continue with [xn, bn].

Criterion for termination



A convenient criterion is to compute the percentage error εr defined by

εr = | (x'r - xr) / x'r | × 100 %,

where x'r is the new value of xr. The computations can be terminated when εr becomes less than a prescribed tolerance, say εp. In addition, the maximum number of iterations may also be specified in advance.

Some other termination criteria are: stop when the length of the current interval, |bn - an|, becomes smaller than a prescribed tolerance, or when |f(xn)| becomes smaller than a prescribed tolerance.



Example
Solve x^3 - 9x + 1 = 0 for the root between x = 2 and x = 4 by the bisection method.

Solution:

Set a0 = 2 and b0 = 4. Then f(a0) = f(2) = 2^3 - 9(2) + 1 = -9 < 0 and f(b0) = f(4) = 4^3 - 9(4) + 1 = 29 > 0, so f(a0)f(b0) < 0 and the interval [2, 4] contains a root.

The steps of the bisection iteration are illustrated in the following table.
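The rows of such a table are easy to regenerate. Below is a minimal Python sketch of the bisection iteration applied to this example, f(x) = x^3 - 9x + 1 on [2, 4]; the tolerance and iteration cap are illustrative choices, not values prescribed in the notes.

# Minimal sketch of the bisection method, applied to the example above:
# f(x) = x^3 - 9x + 1 on the interval [2, 4].

def bisection(f, a, b, tol=1e-6, max_iter=100):
    """Return an approximate root of f in [a, b], assuming f(a)*f(b) < 0."""
    if f(a) * f(b) >= 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    for n in range(max_iter):
        x = (a + b) / 2.0                 # midpoint of the current interval
        print(f"n={n:2d}  a={a:.6f}  b={b:.6f}  x={x:.6f}  f(x)={f(x): .6f}")
        if f(x) == 0 or (b - a) / 2.0 < tol:
            return x                      # root found or interval small enough
        if f(a) * f(x) < 0:               # root lies in [a, x]
            b = x
        else:                             # root lies in [x, b]
            a = x
    return (a + b) / 2.0

root = bisection(lambda x: x**3 - 9 * x + 1, 2.0, 4.0)
print("approximate root:", root)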

Merits (Advantages) of the bisection method
a) The iteration always produces a root, since the method brackets the root between two values.
b) As iterations are conducted, the length of the interval is halved, so convergence to the solution of the equation is guaranteed.
c) The bisection method is simple to program on a computer.

Demerits (Disadvantages) of the bisection method

a) The convergence of the bisection method is slow, as it is based simply on halving the interval.
b) The bisection method cannot be applied over an interval where there is a discontinuity.
c) The bisection method cannot be applied over an interval where the function always takes values of the same sign.
d) The method fails to determine complex roots.
e) Even if one of the initial guesses a0 or b0 is close to the exact solution, the method may still take a large number of iterations to reach the root.

2.3. Interpolation and Secant methods

A). Interpolation
Roughly speaking, interpolation is used when you have a few data points and want to estimate the values of the underlying function between those data points.



We will see different methods of interpolation in the coming topics.

B). Secant methods

Assume we need to find a root of the equation f(x) = 0; call it a. Consider the graph of the function f(x) and two initial estimates of the root, x0 and x1. The two points (x0, f(x0)) and (x1, f(x1)) on the graph of f(x) determine a straight line, called a secant line, which can be viewed as an approximation to the graph. The point x2 where this secant line crosses the x-axis is then an approximation for the root a.

This is the same idea as in Newton’s method, except that the tangent line is approximated by a secant line.

The point x2 is given by

x2 = x1 - f(x1)(x1 - x0) / (f(x1) - f(x0)),

and the general iteration is given by

xn+1 = xn - f(xn)(xn - xn-1) / (f(xn) - f(xn-1)),   n = 1, 2, …

and so on.

Note: The secant method converges more slowly than Newton’s method but faster than the bisection method.

Example:
Find a root of the equation x^6 - x - 1 = 0. (Recall that the true root is a = 1.134724138.)
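Below is a minimal Python sketch of the secant iteration for this equation. The starting guesses x0 = 1 and x1 = 2 (which bracket the true root), the tolerance, and the iteration cap are illustrative assumptions, not values taken from the notes.

# Minimal sketch of the secant method for f(x) = x^6 - x - 1.

def secant(f, x0, x1, tol=1e-10, max_iter=50):
    """Return an approximate root of f using the secant iteration."""
    for n in range(max_iter):
        f0, f1 = f(x0), f(x1)
        if f1 == f0:                           # secant line is horizontal; stop
            break
        x2 = x1 - f1 * (x1 - x0) / (f1 - f0)   # secant update
        print(f"n={n:2d}  x={x2:.9f}")
        if abs(x2 - x1) < tol:
            return x2
        x0, x1 = x1, x2                        # shift the last two iterates
    return x1

root = secant(lambda x: x**6 - x - 1, 1.0, 2.0)
print("approximate root:", root)               # should approach 1.134724138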



Advantages

• better-than-linear (superlinear) convergence near a simple root

• linear convergence near a multiple root
• no derivative needed

Disadvantages

• iterates may diverge


• no practical & rigorous error bound

2.4. Iteration Methods


A numerical iteration method, or simply an iteration method, is a mathematical procedure that generates a sequence of improving approximate solutions for a class of problems. A specific way of implementing an iteration method, including the termination criteria, is called an algorithm of the iteration method. In problems of finding the solution of an equation, an iteration method uses an initial guess to generate successive approximations to the solution. Since iteration methods involve repeating the same process many times, computers are well suited to finding solutions of equations numerically.
Some of the iteration methods for finding solutions of equations are (1) the bisection method, (2) the secant method, and (3) the Newton-Raphson method. A numerical method to solve equations may be a long process in some cases. If the method leads to a value close to the exact solution, we say that the method is convergent. Otherwise, the method is said to be divergent.

Example
Use the method of iteration to find a positive root, between 0 and 1, of the equation xe^x = 1.
Writing the equation in the form x = e^(-x), we iterate xn+1 = e^(-xn) starting from an initial guess x0 in (0, 1).

Depending on how the equation is rearranged into the form x = g(x), the iteration may or may not converge: for some rearrangements the iteration does not converge, while for the form x = e^(-x) the iteration converges to the root.
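A minimal Python sketch of the fixed-point iteration xn+1 = e^(-xn) for this example follows. The starting guess x0 = 0.5, the tolerance, and the iteration cap are illustrative assumptions.

# Minimal sketch of fixed-point iteration for x = e^(-x),
# the convergent rearrangement of xe^x = 1 discussed above.

import math

def fixed_point(g, x0, tol=1e-8, max_iter=100):
    """Iterate x_new = g(x) until successive iterates agree to within tol."""
    x = x0
    for n in range(max_iter):
        x_new = g(x)
        print(f"n={n:2d}  x={x_new:.8f}")
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

root = fixed_point(lambda x: math.exp(-x), 0.5)
print("approximate root:", root)   # converges to about 0.5671433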



2.5. Conditions for convergence
Consider the equation f(x) = 0, which has the root a and can be rewritten as the fixed-point problem x = g(x), so that g(a) = a. If the following conditions hold:

• g(x) and g′(x) are continuous functions near a, and

• |g′(a)| < 1,

then the fixed-point iteration scheme based on the function g will converge to a. Alternatively, if |g′(a)| > 1 then the iteration will not converge to a. (Note that when |g′(a)| = 1 no conclusion can be reached.)
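As a quick check against the example of Section 2.4: for g(x) = e^(-x) we have g′(x) = -e^(-x), and at the root a ≈ 0.567 of x = e^(-x) this gives |g′(a)| = e^(-a) ≈ 0.567 < 1, which is consistent with the convergence observed for that iteration.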

2.6. Newton-Raphson Method


Recall that the equation of a straight line is given by

y = mx + n,    (1)

where m is called the slope of the line. (This means that all points (x, y) on the line satisfy the equation above.)
If we know the slope m and one point (x0, y0) on the line, equation (1) can be rewritten in the point-slope form

y - y0 = m(x - x0).    (2)

Idea behind Newton’s method


Assume we need to find a root of the equation f(x) = 0. Consider the graph of the function f(x) and an initial estimate of the root, x0. To improve this estimate, take the tangent to the graph of f(x) through the point (x0, f(x0)) and let x1 be the point where this line crosses the horizontal axis.

According to eq. (2) above, this point is given by

x1 = x0 - f(x0) / f′(x0),

where f′(x0) is the derivative of f at x0. Then take x1 as the next approximation and continue the procedure.
The general iteration is given by

xn+1 = xn - f(xn) / f′(xn),   n = 0, 1, 2, …

Example 1: Find a root of the equation


x^6 - x - 1 = 0. (Note that the true root is a = 1.134724138.)

We already know that this equation has a root between 1 and 2, so we take x0 = 1.5 as our first approximation. For each iteration we calculate the value |xn - xn-1| which, as we shall see later, is a good approximation to the absolute error |a - xn-1|.

The iterations for Newton’s algorithm are shown in the table below
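A minimal Python sketch of Newton’s iteration for this example is given below, using the analytic derivative f′(x) = 6x^5 - 1 and the starting value x0 = 1.5 chosen above; the printed rows correspond to the iteration table. The tolerance and iteration cap are illustrative choices.

# Minimal sketch of Newton's method for f(x) = x^6 - x - 1 with x0 = 1.5.

def newton(f, df, x0, tol=1e-10, max_iter=50):
    """Return an approximate root of f using Newton's iteration."""
    x = x0
    for n in range(max_iter):
        step = f(x) / df(x)               # Newton correction f(xn)/f'(xn)
        x_new = x - step
        print(f"n={n:2d}  x={x_new:.9f}  |x_new - x|={abs(x_new - x):.2e}")
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

f  = lambda x: x**6 - x - 1
df = lambda x: 6 * x**5 - 1               # derivative of f
root = newton(f, df, 1.5)
print("approximate root:", root)          # should approach 1.134724138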

Example.
Let us approximate the only solution of the equation x = cos x. In fact, looking at the graphs of y = x and y = cos x, we can see that this equation has exactly one solution.



This solution is also the only zero of the function f(x) = x - cos x. So now we see how Newton’s method may be used to approximate r. Since r is between 0 and π/2, we set x1 = 1. The rest of the sequence is generated through the formula

xn+1 = xn - f(xn) / f′(xn).

We have

f(x) = x - cos x   and   f′(x) = 1 + sin x,

and substituting these in Newton’s iterative formula, we have

xn+1 = xn - (xn - cos xn) / (1 + sin xn).
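As an illustrative first step (computed here, not taken from the notes): with x1 = 1 we have cos 1 ≈ 0.5403 and sin 1 ≈ 0.8415, so x2 = 1 - (1 - 0.5403)/(1 + 0.8415) ≈ 0.7504. Continuing the iteration, the values rapidly approach the solution r ≈ 0.7391.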

Advantages

• The error decreases rapidly with each iteration.

• Newton’s method is very fast. (Compare with the bisection method!)
• quadratic convergence near a simple root
• linear convergence near a multiple root



Disadvantages

• Unfortunately, for bad choices of x0 (the initial guess) the method can fail to converge! Therefore the choice of x0 is VERY IMPORTANT!
• Each iteration of Newton’s method requires two function evaluations (f and its derivative f′), while the bisection method requires only one.
• no practical and rigorous error bound

Note:
A good strategy for avoiding failure to converge is to use the bisection method for a few steps (to give an initial estimate and make sure the sequence of guesses is going in the right direction), followed by Newton’s method, which should converge very fast at that point.
