
Nonlinear Systems

Root-Finding Problem

Ng Tin Yau (PhD)


Department of Mechanical Engineering
The Hong Kong Polytechnic University

Jan 2016

By Ng Tin Yau (PhD) 1/28

Table of Contents

Introduction

Single Equation

Systems


Root-Finding Problem
One of the most basic problems of numerical approximation is the
root-finding problem. This process involves finding a root, or
solution, of an equation of the form

f(x) = 0   (1)

Many standard techniques have been proposed to solve this type of
problem. Typical methods include the bisection method, the method of
false position, fixed-point iteration, Müller's method and the
Newton-Raphson method. All of these methods begin with an initial
approximation and generate a sequence that converges to a root of the
equation, if the method is successful. However, the rate of convergence
differs from method to method.
On the other hand, given a specified function f and a tolerance, an
efficient program should produce an approximation to one or more
solutions of f(x) = 0, each having an absolute or relative error within
the tolerance, and the results should be generated in a reasonable
amount of time.

Power Cable Problem


An electric power cable is suspended from towers that are 100 m
apart. The cable is allowed to dip 10 m in the middle. How long is the
cable?


Cont'd
It is known that the curve assumed by a suspended cable is a
catenary. When the y-axis passes through the lowest point, we can
assume an equation of the form

y(x, λ) = λ cosh(x/λ)

Here λ is a parameter to be determined. The condition of the problem
is that y(50, λ) = y(0, λ) + 10. Hence, we obtain

λ cosh(50/λ) = λ + 10

The parameter is found to be λ = 126.632. After this value is
substituted into the arc length formula of the catenary

l = ∫₋₅₀⁵⁰ √(1 + (y′)²) dx

the length is determined to be 102.619 m.
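As a cross-check, λ and the cable length can be computed numerically. The sketch below is mine, not from the notes: it applies the bisection idea (introduced later in these notes) to the dip condition, and uses the closed form l = 2λ sinh(50/λ) of the arc-length integral (since √(1 + sinh²) = cosh).

```python
import math

# g(lam) = 0 expresses the dip condition lam*cosh(50/lam) = lam + 10.
def g(lam):
    return lam * math.cosh(50.0 / lam) - lam - 10.0

# Bisection on a bracket chosen by hand: g(50) > 0 and g(500) < 0.
lo, hi = 50.0, 500.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if g(lo) * g(mid) > 0.0:
        lo = mid
    else:
        hi = mid
lam = 0.5 * (lo + hi)

# Arc length of the catenary in closed form: l = 2*lam*sinh(50/lam).
length = 2.0 * lam * math.sinh(50.0 / lam)
print(lam, length)   # approximately 126.632 and 102.619
```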



Basic Theorems
Theorem (Intermediate Value Theorem (IVT))
Suppose f ∈ C[a, b] and K is any number between f(a) and f(b). Then
there exists a number c ∈ (a, b) for which f(c) = K.

Theorem (Taylor's Theorem)
Suppose f ∈ Cⁿ[a, b], that f⁽ⁿ⁺¹⁾ exists on [a, b], and x0 ∈ [a, b]. For
every x ∈ [a, b], there exists a number ξ(x) between x0 and x with
f(x) = Pn(x) + Rn(x), where

Pn(x) = Σ_{k=0}^{n} [f⁽ᵏ⁾(x0)/k!] (x − x0)ᵏ   (2)

and

Rn(x) = [f⁽ⁿ⁺¹⁾(ξ(x))/(n + 1)!] (x − x0)ⁿ⁺¹   (3)

The Bisection Method
The rationale of the bisection method is based on the Intermediate
Value Theorem (IVT) from calculus. Suppose f is a continuous
function defined on the interval [a, b] with f(a) and f(b) of opposite
sign. By the IVT, there exists a number p ∈ (a, b) with f(p) = 0.
Although the procedure will work when there is more than one root in
the interval (a, b), we assume for simplicity that the root in this
interval is unique. The method calls for a repeated halving of
subintervals of [a, b] and, at each step, locating the half containing p.
To begin, set a1 = a and b1 = b, and let p1 be the midpoint of [a, b],
that is

p1 = a1 + (b1 − a1)/2 = (a1 + b1)/2   (4)

If f(p1) = 0, then p = p1 and we are done. Suppose that this is not the
case, say f(p1) > 0 and f(a) < 0; then p ∈ (a, p1). Now set a2 = a1 and
b2 = p1, so that p2 = (a1 + p1)/2 = (a2 + b2)/2. Now we have to check
the sign of f(p2) again. These steps can be repeated as needed until some
stopping criterion has been met.


A Schematic Diagram

(The slide shows a schematic of the repeated halving; the figure is not
reproduced here.)


Algorithm - Bisection Method


To find a solution to f(x) = 0 given the continuous function f on the
interval [a, b], where f(a) and f(b) have opposite signs:
INPUT: endpoints a, b; tolerance TOL; maximum number of
iterations N
OUTPUT: approximate solution p or message of failure

The Bisection Algorithm

Step 1  Set i = 1 and FA = f(a)
Step 2  While i ≤ N do Steps 3-6
Step 3    Set p = a + (b − a)/2 and FP = f(p)
Step 4    If FP = 0 or (b − a)/2 < TOL then OUTPUT(p) and STOP
Step 5    Set i = i + 1
Step 6    If FA · FP > 0 then set a = p and FA = FP; else set b = p
Step 7  OUTPUT("Method failed after N iterations")
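The steps above translate directly into code. A minimal Python sketch (the function name `bisect` and its signature are my choices, not from the notes), applied to f(x) = x³ + x − 1 from the example that follows:

```python
def bisect(f, a, b, tol=1e-8, nmax=100):
    """Bisection per the algorithm above; f(a) and f(b) must differ in sign."""
    fa = f(a)
    if fa * f(b) > 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    for _ in range(nmax):
        p = a + (b - a) / 2.0
        fp = f(p)
        if fp == 0.0 or (b - a) / 2.0 < tol:
            return p
        if fa * fp > 0:        # root lies in the right half [p, b]
            a, fa = p, fp
        else:                  # root lies in the left half [a, p]
            b = p
    raise RuntimeError("method failed after {} iterations".format(nmax))

# f(x) = x^3 + x - 1: f(0) = -1 and f(1) = 1, so a root lies in [0, 1].
root = bisect(lambda x: x**3 + x - 1.0, 0.0, 1.0)
print(root)   # approximately 0.68233
```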


Example - Bisection Method


Example
Find a root of the function f(x) = x³ + x − 1 on the interval [0, 1].
(The accompanying sketch shows the curve crossing the x-axis between
0 and 1.)


Example cont'd
Let a1 = 0 and b1 = 1. Notice that f(0) = −1 and f(1) = 1 and
therefore f(0)f(1) < 0, so a root exists in the interval [0, 1]. Now using
the algorithm given previously, we obtain the following results for
i = 1, 2, 3, 4, 5.

i   ai      bi       pi = (ai + bi)/2   f(pi)
1   0       1        0.5                −0.375
2   0.5     1        0.75                0.172
3   0.5     0.75     0.625              −0.131
4   0.625   0.75     0.6875              0.012
5   0.625   0.6875   0.65625            −0.061

To this end, if we want to accept p4 = 0.6875 as our approximate root,
then we can stop the computation. Notice that p4 is a better
approximation than p5, since |f(p4)| < |f(p5)|.


Error Analysis
Suppose that f ∈ C[a, b] and f(a)f(b) < 0. Let p be a root of the
equation f(x) = 0 where p ∈ [a, b]. To obtain a better understanding of
the bisection method, for n ≥ 1, let us consider the length of the
interval [an, bn], that is

bn − an = (1/2ⁿ⁻¹)(b1 − a1) = (1/2ⁿ⁻¹)(b − a)   (5)

and we expect the root p ∈ [an, bn] and pn = (an + bn)/2. Now

|pn − p| ≤ (1/2)(bn − an) = (1/2ⁿ)(b − a)   (6)

In other words, lim_{n→∞} pn = p. That is, given ε > 0 (the tolerance),
there exists N ∈ ℕ such that whenever n ≥ N, we have

|pn − p| ≤ (1/2ⁿ)(b − a) < ε

Solving for n gives

n > (ln(b − a) − ln ε)/ln 2   (7)
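Inequality (7) turns into a one-line estimate of the number of bisection steps required. A small sketch (the helper name `bisection_steps` is mine); for [0, 1] and ε = 10⁻³ it gives n = 10:

```python
import math

# Smallest n with (b - a)/2**n < eps, i.e. n > (ln(b - a) - ln(eps))/ln 2.
def bisection_steps(a, b, eps):
    return math.floor((math.log(b - a) - math.log(eps)) / math.log(2.0)) + 1

n = bisection_steps(0.0, 1.0, 1e-3)
print(n)   # 10, since 2**10 = 1024 > 1000 but 2**9 = 512 < 1000
```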


Newton-Raphson Method
The Newton-Raphson method is one of the most powerful and well-known
numerical methods for solving a root-finding problem.
Suppose that f ∈ C²[a, b]. Let p0 ∈ [a, b] be an approximation
to the solution p of f(x) = 0 such that f′(p0) ≠ 0 and |p − p0| is
small. Consider the first Taylor polynomial for f(x) expanded about p0
and evaluated at x = p:

f(p) = f(p0) + (p − p0)f′(p0) + [(p − p0)²/2] f′′(ξ)   (8)

where ξ lies between p and p0. Since f(p) = 0 and |p − p0| is small, this
equation gives

f(p0) + (p − p0)f′(p0) ≈ 0   (9)

Solving for p gives

p ≈ p1 = p0 − f(p0)/f′(p0)   (10)


An Iterative Formula
This sets the stage for Newton's method, which starts with an initial
approximation p0 and generates the sequence

pn = p_{n−1} − f(p_{n−1})/f′(p_{n−1})   for n ≥ 1   (11)

The following figure illustrates how the approximations are obtained
using successive tangents.


Algorithm - Newton-Raphson Method


To find a solution to f(x) = 0 given an initial approximation p0:
INPUT: initial approximation p0; tolerance TOL; maximum
number of iterations N
OUTPUT: approximate solution p or message of failure

The Newton-Raphson Method

Step 1  Set i = 1
Step 2  While i ≤ N do Steps 3-6
Step 3    Set p = p0 − f(p0)/f′(p0)   [Compute pi]
Step 4    If |p − p0| < TOL (or other criteria), then OUTPUT(p) and STOP
Step 5    Set i = i + 1
Step 6    Set p0 = p   [Update p0]
Step 7  OUTPUT("Method failed after N iterations")


Example - Newton-Raphson Method


Example
Find a root of the function f(x) = 3x − eˣ on the interval [1, 2] using
the Newton-Raphson method.
First of all, f′(x) = 3 − eˣ. Choose p0 = 1.2 and use the iterative
formula p_{i+1} = pi − f(pi)/f′(pi) to obtain the following results:

i   pi       f(pi)     f′(pi)
0   1.2       0.2799   −0.3201
1   2.0743   −1.7361   −4.9591
2   1.7242   −0.4355   −2.6082
3   1.5572   −0.0740   −1.7457
4   1.5149   −0.0042   −1.5488
5   1.5121

In fact, the approximation obtained by MATLAB is 1.5121.
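The iteration is short to implement. A minimal sketch (the function name `newton` and the stopping rule |p − p0| < tol are my choices) that reproduces the table's result:

```python
import math

def newton(f, fprime, p0, tol=1e-6, nmax=50):
    """Newton-Raphson: p = p0 - f(p0)/f'(p0), stopping when |p - p0| < tol."""
    for _ in range(nmax):
        p = p0 - f(p0) / fprime(p0)
        if abs(p - p0) < tol:
            return p
        p0 = p
    raise RuntimeError("method failed after {} iterations".format(nmax))

# f(x) = 3x - e^x with f'(x) = 3 - e^x, starting from p0 = 1.2 as above.
root = newton(lambda x: 3.0*x - math.exp(x), lambda x: 3.0 - math.exp(x), 1.2)
print(root)   # approximately 1.51213, matching the table
```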


Stopping Criteria
Given a tolerance ε > 0, suppose that we have generated a sequence
p1, . . . , pN. The following are some common stopping criteria
used in numerical mathematics:

1. |pN − p_{N−1}| < ε
2. |pN − p_{N−1}|/|pN| < ε, with pN ≠ 0
3. |f(pN)| < ε

Unfortunately, difficulties can arise using any of these stopping criteria.
For example, the sequence pn = Σ_{k=1}^{n} (1/k) diverges even though
lim_{n→∞} (pn − p_{n−1}) = 0. It is also possible for f(pn) to be close to zero
while pn differs significantly from p. Without additional knowledge
about f or p, criterion 2 is the best stopping criterion to apply
because it comes closest to testing relative error.
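The harmonic-series caveat is easy to reproduce: a few lines of Python (variable names mine) show |pn − p_{n−1}| falling below a tolerance of 10⁻⁴ even though the partial sums grow without bound.

```python
# p_n = sum_{k=1}^n 1/k: the differences p_n - p_{n-1} = 1/n shrink below
# any tolerance, even though the partial sums themselves diverge.
eps = 1e-4
p = 0.0
n = 0
while True:
    n += 1
    step = 1.0 / n        # this is p_n - p_{n-1}
    p += step
    if step < eps:
        break
print(n, p)   # stops at n = 10001 with p still below 10
```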


Nonlinear Systems
Some physical problems involve the solution of systems of n nonlinear
equations in n unknowns. One approach is to linearize and solve,
repeatedly.
In the general case, a system of n nonlinear equations in n unknowns
xi can be displayed in the form

fi(x1, x2, . . . , xn) = 0,   i = 1, 2, . . . , n   (12)

Using vector notation, we can write this system in a more elegant form:

f(x) = 0   (13)

by defining the column vectors

f = [f1, f2, . . . , fn]^T
x = [x1, x2, . . . , xn]^T

Differentiation
Suppose that U ⊆ ℝⁿ is an open set. A function f : U → ℝᵐ is
differentiable at x ∈ U if there is a linear map T : ℝⁿ → ℝᵐ such that

lim_{h→0} ‖f(x + h) − f(x) − T(h)‖/‖h‖ = 0   (14)

We write T = Df(x) and we call this the derivative of f at x. We say f is
differentiable on U if f is differentiable at each point in U.
Suppose that f is differentiable on U and let x ∈ U. Let v ∈ ℝⁿ be
a unit vector. The directional derivative of f at x in the direction v
is defined as

Dv f(x) = lim_{t→0} [f(x + tv) − f(x)]/t   (15)


Jacobian Matrix
By setting v = ej, a standard basis vector, in the definition of the
directional derivative, we obtain the partial derivative in the j-th
direction, denoted by Dj f(x).
Furthermore, if f = (f1, f2, . . . , fn), then the matrix of partial
derivatives Dj fi(x) with 1 ≤ i ≤ n and 1 ≤ j ≤ n is called the
Jacobian matrix of f at x and is denoted by Jij(x) = ∂fi/∂xj (x).
If we denote ∂j fi = ∂fi/∂xj, then the Jacobian matrix may be written as

         | ∂1f1  ∂2f1  · · ·  ∂nf1 |
J(x) =   | ∂1f2  ∂2f2  · · ·  ∂nf2 |   (16)
         |  ...   ...          ... |
         | ∂1fn  ∂2fn  · · ·  ∂nfn |
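When analytic partial derivatives are inconvenient, the entries Dj fi can be approximated by forward differences. A minimal sketch (the function `jacobian_fd`, the step size h, and the test function are mine, not from the notes):

```python
def jacobian_fd(f, x, h=1e-6):
    """Forward-difference approximation of J_ij = d f_i / d x_j."""
    n = len(x)
    fx = f(x)
    J = [[0.0] * n for _ in range(n)]
    for j in range(n):
        xh = list(x)
        xh[j] += h            # perturb only the j-th coordinate
        fxh = f(xh)
        for i in range(n):
            J[i][j] = (fxh[i] - fx[i]) / h
    return J

# Check against a case with a known Jacobian: f(x1, x2) = (x1^2 + x2, x1*x2)
# has J = [[2*x1, 1], [x2, x1]], which at (2, 3) is [[4, 1], [3, 2]].
J = jacobian_fd(lambda x: [x[0]**2 + x[1], x[0] * x[1]], [2.0, 3.0])
print(J)
```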


Newton's Method for Systems
Suppose that the vector x(k) is an approximate solution to the equation
f(x) = 0 at the k-th iteration. Let h(k) be a computed correction to
the guess x(k) so that x(k) + h(k) is a better approximate solution.
Discarding the higher-order terms in the Taylor expansion, we have

f(x(k) + h(k)) ≈ f(x(k)) + J(x(k))h(k) ≈ 0   (17)

where J(x(k)) = Df(x(k)) is the Jacobian matrix evaluated at the point
x(k). Assuming that the Jacobian matrix is nonsingular at x(k), we obtain

h(k) ≈ −J(x(k))⁻¹ f(x(k))   (18)

Letting x(k+1) = x(k) + h(k), then

x(k+1) = x(k) − J(x(k))⁻¹ f(x(k))   (19)


Newton's Method - computational form
In practice, the computational form of Newton's method does not
involve inverting the Jacobian matrix, but rather solves the Jacobian
linear system

J(x(k)) h(k) = −f(x(k))   (20)

This linear system can be solved by any direct or iterative method,
such as Gaussian elimination or Jacobi iteration. The next
iterate of Newton's method is then given by

x(k+1) = x(k) + h(k)   (21)

This is Newton's method for nonlinear systems.
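Equations (20)-(21) can be sketched in pure Python; the helper names `solve`, `jacobian`, the iteration cap, and the step-size stopping test below are my choices, not from the notes. Applied to the three-equation system of the example that follows, it converges to [0, 1, 2].

```python
import math

def solve(A, b):
    """Solve A h = b by Gaussian elimination with partial pivoting."""
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]   # augmented matrix
    for k in range(n):
        piv = max(range(k, n), key=lambda r: abs(M[r][k]))
        M[k], M[piv] = M[piv], M[k]
        for r in range(k + 1, n):
            m = M[r][k] / M[k][k]
            for c in range(k, n + 1):
                M[r][c] -= m * M[k][c]
    h = [0.0] * n
    for k in range(n - 1, -1, -1):                     # back substitution
        h[k] = (M[k][n] - sum(M[k][c] * h[c] for c in range(k + 1, n))) / M[k][k]
    return h

def f(x):
    x1, x2, x3 = x
    return [x1 + x2 + x3 - 3.0,
            x1**2 + x2**2 + x3**2 - 5.0,
            math.exp(x1) + x1*x2 - x1*x3 - 1.0]

def jacobian(x):
    x1, x2, x3 = x
    return [[1.0, 1.0, 1.0],
            [2.0*x1, 2.0*x2, 2.0*x3],
            [math.exp(x1) + x2 - x3, x1, -x1]]

x = [0.1, 1.2, 2.5]
for _ in range(100):
    h = solve(jacobian(x), [-v for v in f(x)])   # equation (20)
    x = [xi + hi for xi, hi in zip(x, h)]        # equation (21)
    if max(abs(hi) for hi in h) < 1e-9:
        break
print(x)   # converges to (approximately) [0, 1, 2]
```

One quirk worth noting: at the root [0, 1, 2] the last row of J(x) vanishes (e⁰ + 1 − 2 = 0 and x1 = 0), so the Jacobian is singular there. That is consistent with the summary table at the end, where the error only roughly halves per step rather than converging quadratically; the generous iteration cap above accounts for this.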


Example
Given the system

x1 + x2 + x3 − 3 = 0
x1² + x2² + x3² − 5 = 0
e^{x1} + x1x2 − x1x3 − 1 = 0

use Newton's method to perform three iterations with initial guess
x(0) = [0.1  1.2  2.5]^T, using 4 decimal places.
Solution: The Jacobian matrix is

         |        1            1     1  |
J(x) =   |       2x1          2x2   2x3 |
         | e^{x1} + x2 − x3    x1   −x1 |


The first iteration

Using the given initial data, we obtain

f(x(0)) = [0.1 + 1.2 + 2.5 − 3
           0.1² + 1.2² + 2.5² − 5
           e^{0.1} + (0.1)(1.2) − (0.1)(2.5) − 1]  =  [0.8000  2.7000  −0.0248]^T

and the Jacobian matrix J(x(0)) is

|          1           1       1   |   |  1.0000  1.0000   1.0000 |
|       2(0.1)      2(1.2)  2(2.5) | = |  0.2000  2.4000   5.0000 |
| e^{0.1} + 1.2 − 2.5  0.1    −0.1 |   | −0.1948  0.1000  −0.1000 |

By solving the system J(x(0))h(0) = −f(x(0)), we have
h(0) = [−0.0966  −0.3217  −0.3817]^T, which gives

x(1) = x(0) + h(0) = [0.0034  0.8783  2.1183]^T


The second iteration

Now using x1(1) = 0.0034, x2(1) = 0.8783 and x3(1) = 2.1183, compute

f(x(1)) = [x1(1) + x2(1) + x3(1) − 3
           (x1(1))² + (x2(1))² + (x3(1))² − 5
           e^{x1(1)} + x1(1)x2(1) − x1(1)x3(1) − 1]  =  [0.0000  0.2584  −0.0008]^T

Then the Jacobian matrix J(x(1)) is given as

|            1              1       1    |   |  1.0000  1.0000   1.0000 |
|         2x1(1)         2x2(1)  2x3(1)  | = |  0.0068  1.7566   4.2366 |
| e^{x1(1)} + x2(1) − x3(1)  x1(1)  −x1(1) |   | −0.2366  0.0034  −0.0034 |

Solving the system J(x(1))h(1) = −f(x(1)) yields
h(1) = [−0.0004  0.1050  −0.1046]^T, and then

x(2) = x(1) + h(1) = [0.0030  0.9833  2.0137]^T


The third iteration

Now using x1(2) = 0.0030, x2(2) = 0.9833 and x3(2) = 2.0137, compute

f(x(2)) = [x1(2) + x2(2) + x3(2) − 3
           (x1(2))² + (x2(2))² + (x3(2))² − 5
           e^{x1(2)} + x1(2)x2(2) − x1(2)x3(2) − 1]  =  [0.0000  0.0219  −0.0001]^T

Then the Jacobian matrix J(x(2)) is given as

|            1              1       1    |   |  1.0000  1.0000   1.0000 |
|         2x1(2)         2x2(2)  2x3(2)  | = |  0.0060  1.9666   4.0274 |
| e^{x1(2)} + x2(2) − x3(2)  x1(2)  −x1(2) |   | −0.2274  0.0030  −0.0030 |

Solving the system J(x(2))h(2) = −f(x(2)) yields
h(2) = [−0.0006  0.0119  −0.0112]^T, and then

x(3) = x(2) + h(2) = [0.0024  0.9952  2.0025]^T


Conclusions & Remarks

The results are summarized in the following table:

k   x1(k)    x2(k)    x3(k)    ‖x(k) − x(k−1)‖
0   0.1      1.2      2.5
1   0.0034   0.8783   2.1183   0.3817
2   0.0030   0.9833   2.0137   0.1050
3   0.0024   0.9952   2.0025   0.0119
4   0.0012   0.9976   2.0012   0.0024
5   0.0006   0.9988   2.0006   0.0012

When programmed and executed on a computer, we found that the method
converges to x = [0  1  2]^T, but when we change to a different
starting vector [1  0  1]^T, it converges to another root,
[1.2244  −0.0931  1.8687]^T.


Problems

Obtain the zeros of the following functions using the bisection
method (estimate the number of steps needed for the given ε) and the
Newton-Raphson method (initial guess p0 = 1.5):
a. f(x) = x³ − 2x² − 5 on the interval [1, 4] with ε = 0.001.
b. f(x) = x − cos x on the interval [0, π/2] with ε = 0.001.

Using starting values (x, y) = (0.8, 0.8), solve the following
system:

sin(x + y) = e^{x−y}
cos(x + 6) = x²y²
