
Chapter 12
Iterative Methods for System of Equations

Iterative Methods for Solving Matrix Equations
    Ax = b   =>   x = Cx + d ,   Cii = 0
Jacobi method
Gauss-Seidel Method*
Successive Over Relaxation (SOR)
MATLAB's Methods
Iterative Methods
    a11 x1 + a12 x2 + a13 x3 + a14 x4 = b1
    a21 x1 + a22 x2 + a23 x3 + a24 x4 = b2
    a31 x1 + a32 x2 + a33 x3 + a34 x4 = b3
    a41 x1 + a42 x2 + a43 x3 + a44 x4 = b4

can be converted to

    x1 = (b1 - a12 x2 - a13 x3 - a14 x4) / a11
    x2 = (b2 - a21 x1 - a23 x3 - a24 x4) / a22
    x3 = (b3 - a31 x1 - a32 x2 - a34 x4) / a33
    x4 = (b4 - a41 x1 - a42 x2 - a43 x3) / a44
Iterative Methods
Idea behind iterative methods:
Convert Ax = b into x = Cx + d (an equivalent system)
Generate a sequence of approximations (iterations) x^(1), x^(2), ..., starting from an initial guess x^(0):

    x^(j+1) = C x^(j) + d

Similar to the fixed-point iteration method
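A minimal MATLAB sketch (not part of the original slides) of how C and d can be built from A and b and then iterated; it uses the example system that appears later in this chapter:

A = [4 -1 -1; 6 8 0; -5 0 12];      % example system used later in the slides
b = [-2; 45; 80];
D = diag(diag(A));                  % diagonal part of A (assumed nonsingular)
C = eye(size(A)) - D\A;             % zero diagonal, so Cii = 0
d = D\b;
x = zeros(size(b));                 % initial guess x^(0)
for j = 1:25
    x = C*x + d;                    % one sweep of x^(j+1) = C x^(j) + d
end
disp(x')                            % approaches the exact solution [2.375  3.84375  7.65625]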
Rearrange Matrix Equations
    a11 x1 + a12 x2 + a13 x3 + a14 x4 = b1
    a21 x1 + a22 x2 + a23 x3 + a24 x4 = b2
    a31 x1 + a32 x2 + a33 x3 + a34 x4 = b3
    a41 x1 + a42 x2 + a43 x3 + a44 x4 = b4

Rewrite the matrix equation in the same way (n equations in n variables):

    x1 = -(a12/a11) x2 - (a13/a11) x3 - (a14/a11) x4 + b1/a11
    x2 = -(a21/a22) x1 - (a23/a22) x3 - (a24/a22) x4 + b2/a22
    x3 = -(a31/a33) x1 - (a32/a33) x2 - (a34/a33) x4 + b3/a33
    x4 = -(a41/a44) x1 - (a42/a44) x2 - (a43/a44) x3 + b4/a44
Iterative Methods
j 1
Ax  b  x  Cx
j
 d ; C ii  0
x and d are column vectors, and C is a square matrix
 a12 a13 a14   b1 
 0   
a11 a11 a11  a 
   11 
  a 21 a 23 a 24   b2 
0  
 a 22 a 22 a 22   a 22 
C  ; d 
a 31 a 32 a 34 b3
  0    
 a 33 a 33 a 33   a 33 
 a 41 a 42 a 43   b4 
   0   
 a 44 a 44 a 44   a 44 
Convergence Criterion
1. Relative change:   | xi^(j) - xi^(j-1) | / | xi^(j) | × 100% = εa,i ≤ εs   for all xi
2. Norm of the residual vector:   || Ax - b || ≤ εs

For systems of equations the iteration converges when
    | λi | < 1 for all i ,   λ = eig(C)
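A quick MATLAB check of the eigenvalue criterion (a sketch, not from the original slides), using the same construction of C as above:

A = [4 -1 -1; 6 8 0; -5 0 12];
D = diag(diag(A));
C = eye(size(A)) - D\A;
rho = max(abs(eig(C)));          % spectral radius of the iteration matrix (about 0.29 here)
if rho < 1
    disp('rho < 1: the iteration converges')
else
    disp('rho >= 1: the iteration diverges')
end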
Jacobi Method
    x1^new = (b1 - a12 x2^old - a13 x3^old - a14 x4^old) / a11
    x2^new = (b2 - a21 x1^old - a23 x3^old - a24 x4^old) / a22
    x3^new = (b3 - a31 x1^old - a32 x2^old - a34 x4^old) / a33
    x4^new = (b4 - a41 x1^old - a42 x2^old - a43 x3^old) / a44
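A minimal sketch of a Jacobi function consistent with the call x = Jacobi(A, b, tol, maxit) used later in the slides; the zero starting vector is an assumption, and the actual course M-file may differ:

function x = Jacobi(A, b, tol, maxit)
% Jacobi iteration for A*x = b (sketch).  Every unknown is updated
% from the values of the previous sweep only.
n = length(b); x = zeros(n,1); b = b(:);
for iter = 1:maxit
    xold = x;
    for i = 1:n
        x(i) = (b(i) - A(i,[1:i-1 i+1:n])*xold([1:i-1 i+1:n])) / A(i,i);
    end
    disp([iter x']);
    if norm(x - xold) <= tol
        disp('Jacobi method converged');
        return;
    end
end
disp('Jacobi method did not converge');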
Gauss-Seidel Method
Differs from the Jacobi method by sequential updating: use the new xi immediately as they become available

    x1^new = (b1 - a12 x2^old - a13 x3^old - a14 x4^old) / a11
    x2^new = (b2 - a21 x1^new - a23 x3^old - a24 x4^old) / a22
    x3^new = (b3 - a31 x1^new - a32 x2^new - a34 x4^old) / a33
    x4^new = (b4 - a41 x1^new - a42 x2^new - a43 x3^new) / a44
Gauss-Seidel Method
Use the new xi at the jth iteration as soon as they become available:

    x1^(j) = (b1 - a12 x2^(j-1) - a13 x3^(j-1) - a14 x4^(j-1)) / a11
    x2^(j) = (b2 - a21 x1^(j) - a23 x3^(j-1) - a24 x4^(j-1)) / a22
    x3^(j) = (b3 - a31 x1^(j) - a32 x2^(j) - a34 x4^(j-1)) / a33
    x4^(j) = (b4 - a41 x1^(j) - a42 x2^(j) - a43 x3^(j)) / a44

Convergence check:   | xi^(j) - xi^(j-1) | / | xi^(j) | × 100% = εa,i ≤ εs   for all xi

(Figure: (a) Gauss-Seidel method, (b) Jacobi method)
Convergence and Diagonal Dominance
Sufficient condition -- A is diagonally dominant
Diagonally dominant: the magnitude of the diagonal element is larger than the sum of the absolute values of the other elements in the row:

    | aii |  >  Σ (j = 1..n, j ≠ i)  | aij |

Necessary and sufficient condition -- the magnitude of the largest eigenvalue of C (not A) is less than 1
Fortunately, many engineering problems of practical importance satisfy this requirement
Use partial pivoting to rearrange equations!
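A small MATLAB sketch (not from the original slides) that tests this row-by-row condition, using a reconstruction of the first matrix from the Diagonally Dominant Matrix slide below:

A = [8 1 -2 -3; -1 10 2 5; 1 -6 12 -3; 3 2 3 -9];
n = size(A,1);
dominant = true;
for i = 1:n
    offdiag = sum(abs(A(i,:))) - abs(A(i,i));   % sum of |aij| over j ~= i
    if abs(A(i,i)) <= offdiag
        dominant = false;
        fprintf('Row %d is not diagonally dominant\n', i);
    end
end
if dominant, disp('A is strictly diagonally dominant'); end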
Convergence or Divergence of Jacobi and Gauss-Seidel Methods
Same equations, but in different order
Rearrange the order of the equations to ensure convergence
Diagonally Dominant Matrix

        |  8   1  -2  -3 |
    A = | -1  10   2   5 |    is strictly diagonally dominant because each diagonal
        |  1  -6  12  -3 |    element is greater than the sum of the absolute values
        |  3   2   3  -9 |    of the other elements in its row

        |  8   1  -2  -3 |
    A = | -1   6   2   5 |    is not diagonally dominant
        |  1  -6  12  -3 |
        |  3   2   3  -9 |
Jacobi and Gauss-Seidel

Example:
    5x1 + 2x2 + 2x3 = 10
    2x1 + 4x2 +  x3 =  7
    3x1 +  x2 + 6x3 = 12

Jacobi:
    x1^new = (10 - 2 x2^old - 2 x3^old) / 5
    x2^new = ( 7 - 2 x1^old -   x3^old) / 4
    x3^new = (12 - 3 x1^old -   x2^old) / 6

Gauss-Seidel:
    x1^new = (10 - 2 x2^old - 2 x3^old) / 5
    x2^new = ( 7 - 2 x1^new -   x3^old) / 4
    x3^new = (12 - 3 x1^new -   x2^new) / 6
Example
    -5x1        + 12x3 = 80        | -5   0  12 |
     4x1 -  x2 -   x3  = -2        |  4  -1  -1 |
     6x1 + 8x2         = 45        |  6   8   0 |

Not diagonally dominant !!
Order of the equations can be important
Rearrange the equations to ensure convergence:

     4x1 -  x2 -   x3  = -2        |  4  -1  -1 |
     6x1 + 8x2         = 45        |  6   8   0 |
    -5x1        + 12x3 = 80        | -5   0  12 |
Gauss-Seidel Iteration

Rearrange:
    x1 = (-2 + x2 + x3) / 4
    x2 = (45 - 6x1) / 8
    x3 = (80 + 5x1) / 12

Assume x1 = x2 = x3 = 0

First iteration:
    x1 = (-2 + 0 + 0) / 4 = -0.5
    x2 = (45 - 6(-0.5)) / 8 = 6.0
    x3 = (80 + 5(-0.5)) / 12 = 6.4583
Gauss-Seidel Method

Second iteration:
    x1 = (-2 + 6.0 + 6.4583) / 4 = 2.6146
    x2 = (45 - 6(2.6146)) / 8 = 3.6641
    x3 = (80 + 5(2.6146)) / 12 = 7.7561

Third iteration:
    x1 = (-2 + 3.6641 + 7.7561) / 4 = 2.3550
    x2 = (45 - 6(2.3550)) / 8 = 3.8587
    x3 = (80 + 5(2.3550)) / 12 = 7.6479

Fourth iteration:
    x1 = (-2 + 3.8587 + 7.6479) / 4 = 2.3767
    x2 = (45 - 6(2.3767)) / 8 = 3.8425
    x3 = (80 + 5(2.3767)) / 12 = 7.6569

5th:  x1 = 2.3749 ,  x2 = 3.8439 ,  x3 = 7.6562
6th:  x1 = 2.3750 ,  x2 = 3.8437 ,  x3 = 7.6563
7th:  x1 = 2.3750 ,  x2 = 3.8438 ,  x3 = 7.6562
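As a quick cross-check (a sketch, not part of the original M-files), MATLAB's direct solver returns the same values the iteration settles on:

A = [4 -1 -1; 6 8 0; -5 0 12];
b = [-2; 45; 80];
x = A\b        % exact solution [2.375; 3.84375; 7.65625], matching the converged iterates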
MATLAB M-File for Gauss-Seidel method
(The Seidel.m listing spans two slides in the original; a sketch of such a function is given below.)
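A minimal sketch of a Gauss-Seidel function consistent with the call x = Seidel(A, b, x0, tol, 100) on the next slide; the function name and argument order follow that call, but the actual course M-file may differ:

function x = Seidel(A, b, x0, tol, maxit)
% Gauss-Seidel iteration for A*x = b (sketch).
% x0: initial guess, tol: convergence tolerance, maxit: maximum sweeps.
n = length(b); x = x0(:); b = b(:);
for iter = 1:maxit
    xold = x;
    for i = 1:n
        % use the newest available values of x(1:i-1)
        x(i) = (b(i) - A(i,[1:i-1 i+1:n])*x([1:i-1 i+1:n])) / A(i,i);
    end
    disp([iter x']);
    if norm(x - xold) <= tol
        disp('Gauss-Seidel method converged');
        return;
    end
end
disp('Gauss-Seidel method did not converge');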
Gauss-Seidel Iteration
» A = [4 -1 -1; 6 8 0; -5 0 12];
» b = [-2 45 80];
» x = Seidel(A, b, x0, tol, 100);
i x1 x2 x3 x4 ....
1.0000 -0.5000 6.0000 6.4583
2.0000 2.6146 3.6641 7.7561
3.0000 2.3550 3.8587 7.6479
4.0000 2.3767 3.8425 7.6569
5.0000 2.3749 3.8439 7.6562
6.0000 2.3750 3.8437 7.6563
7.0000 2.3750 3.8438 7.6562
8.0000 2.3750 3.8437 7.6563
Gauss-Seidel method converged
Converges faster than the Jacobi method shown on the next page
Jacobi Iteration
» A = [4 -1 -1; 6 8 0; -5 0 12];
» b = [-2 45 80];
» x=Jacobi(A,b,0.0001,100);
i x1 x2 x3 x4 ....
1.0000 -0.5000 5.6250 6.6667
2.0000 2.5729 6.0000 6.4583
3.0000 2.6146 3.6953 7.7387
4.0000 2.3585 3.6641 7.7561
5.0000 2.3550 3.8561 7.6494
6.0000 2.3764 3.8587 7.6479
7.0000 2.3767 3.8427 7.6568
8.0000 2.3749 3.8425 7.6569
9.0000 2.3749 3.8438 7.6562
10.0000 2.3750 3.8439 7.6562
11.0000 2.3750 3.8437 7.6563
12.0000 2.3750 3.8437 7.6563
13.0000 2.3750 3.8438 7.6562
14.0000 2.3750 3.8438 7.6562
Jacobi method converged
Relaxation Method
Relaxation (weighting) factor λ
    Gauss-Seidel method:   λ = 1
    Overrelaxation:        1 < λ < 2
    Underrelaxation:       0 < λ < 1

    xi^new = λ xi^G-S + (1 - λ) xi^old

Successive Over-Relaxation (SOR)
Successive Over Relaxation (SOR)
Relaxation applied to the Gauss-Seidel update:
    G-S method:  x2^new = (b2 - a21 x1^new - a23 x3^old - a24 x4^old) / a22
    SOR method:  x2^new = (1 - λ) x2^old + λ x2^G-S
                        = (1 - λ) x2^old + λ (b2 - a21 x1^new - a23 x3^old - a24 x4^old) / a22

    x1^new = (1 - λ) x1^old + λ (b1 - a12 x2^old - a13 x3^old - a14 x4^old) / a11
    x2^new = (1 - λ) x2^old + λ (b2 - a21 x1^new - a23 x3^old - a24 x4^old) / a22
    x3^new = (1 - λ) x3^old + λ (b3 - a31 x1^new - a32 x2^new - a34 x4^old) / a33
    x4^new = (1 - λ) x4^old + λ (b4 - a41 x1^new - a42 x2^new - a43 x3^new) / a44
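A minimal sketch of an SOR function consistent with the call x = SOR(A, b, x0, w, tol, 100) used later in the slides, where w plays the role of λ; the actual course M-file may differ:

function x = SOR(A, b, x0, w, tol, maxit)
% Successive over-relaxation for A*x = b (sketch).  w = 1 recovers Gauss-Seidel.
n = length(b); x = x0(:); b = b(:);
for iter = 1:maxit
    xold = x;
    for i = 1:n
        xgs  = (b(i) - A(i,[1:i-1 i+1:n])*x([1:i-1 i+1:n])) / A(i,i);  % Gauss-Seidel value
        x(i) = (1 - w)*xold(i) + w*xgs;                                % relaxed update
    end
    disp([iter x']);
    if norm(x - xold) <= tol
        disp('SOR method converged');
        return;
    end
end
disp('SOR method did not converge');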
SOR Iterations

Rearrange:
    x1 = (-2 + x2 + x3) / 4
    x2 = (45 - 6x1) / 8
    x3 = (80 + 5x1) / 12

Assume x1 = x2 = x3 = 0, and λ = 1.2

    xi = λ xi^G-S + (1 - λ) xi^old ;   G-S: Gauss-Seidel

First iteration:
    x1 = (-0.2)(0) + 1.2 (0 + 0 - 2)/4 = -0.6
    x2 = (-0.2)(0) + 1.2 (45 - 6(-0.6))/8 = 7.29
    x3 = (-0.2)(0) + 1.2 (80 + 5(-0.6))/12 = 7.7
SOR Iterations

Second iteration:
    x1 = (-0.2)(-0.6) + 1.2 (7.29 + 7.7 - 2)/4 = 4.017
    x2 = (-0.2)(7.29) + 1.2 (45 - 6(4.017))/8 = 1.6767
    x3 = (-0.2)(7.7) + 1.2 (80 + 5(4.017))/12 = 8.4685

Third iteration:
    x1 = (-0.2)(4.017) + 1.2 (1.6767 + 8.4685 - 2)/4 = 1.6402
    x2 = (-0.2)(1.6767) + 1.2 (45 - 6(1.6402))/8 = 4.9385
    x3 = (-0.2)(8.4685) + 1.2 (80 + 5(1.6402))/12 = 7.1264

Converges slower !! (see MATLAB solutions)
There is an optimal relaxation parameter
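One way to see this numerically is to scan several relaxation factors and count the sweeps needed to reach a fixed tolerance; the self-contained loop below (a sketch, not from the original slides) repeats the SOR update for the example system:

A  = [4 -1 -1; 6 8 0; -5 0 12];
b  = [-2; 45; 80];
n  = length(b);
for w = 0.8:0.1:1.4
    x = zeros(n,1);
    for iter = 1:200
        xold = x;
        for i = 1:n
            xgs  = (b(i) - A(i,[1:i-1 i+1:n])*x([1:i-1 i+1:n])) / A(i,i);
            x(i) = (1 - w)*xold(i) + w*xgs;
        end
        if norm(x - xold) <= 1.e-6, break; end
    end
    fprintf('w = %.1f   iterations = %d\n', w, iter);
end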
Successive Over Relaxation (SOR)
Relaxation factor w (= λ)

SOR with λ = 1.2
» [A,b]=Example
A =
     4    -1    -1
     6     8     0
    -5     0    12
b =
    -2
    45
    80
» x0=[0 0 0]'
x0 =
     0
     0
     0
» tol=1.e-6
tol =
  1.0000e-006
» w=1.2; x = SOR(A, b, x0, w, tol, 100);
      i        x1        x2        x3 ....
  1.0000   -0.6000    7.2900    7.7000
  2.0000    4.0170    1.6767    8.4685
  3.0000    1.6402    4.9385    7.1264
  4.0000    2.6914    3.3400    7.9204
  5.0000    2.2398    4.0661    7.5358
  6.0000    2.4326    3.7474    7.7091
  7.0000    2.3504    3.8851    7.6334
  8.0000    2.3855    3.8261    7.6661
  9.0000    2.3705    3.8513    7.6521
 10.0000    2.3769    3.8405    7.6580
 11.0000    2.3742    3.8451    7.6555
 12.0000    2.3753    3.8432    7.6566
 13.0000    2.3749    3.8440    7.6561
 14.0000    2.3751    3.8436    7.6563
 15.0000    2.3750    3.8438    7.6562
 16.0000    2.3750    3.8437    7.6563
 17.0000    2.3750    3.8438    7.6562
 18.0000    2.3750    3.8437    7.6563
 19.0000    2.3750    3.8438    7.6562
 20.0000    2.3750    3.8437    7.6563
 21.0000    2.3750    3.8438    7.6562
SOR method converged

Converges slower than Gauss-Seidel !!
SOR with λ = 1.5
» [A,b]=Example2
A =
     4    -1    -1
     6     8     0
    -5     0    12
b =
    -2
    45
    80
» x0=[0 0 0]'
x0 =
     0
     0
     0
» tol=1.e-6
tol =
  1.0000e-006
» w = 1.5; x = SOR(A, b, x0, w, tol, 100);
      i        x1        x2        x3 ....
  1.0000   -0.7500    9.2812    9.5312
  2.0000    6.6797   -3.7178    9.4092
  3.0000   -1.9556   12.4964    4.0732
  4.0000    6.4414   -5.0572   11.9893
  5.0000   -1.3712   12.5087    3.1484
  6.0000    5.8070   -4.3497   12.0552
  7.0000   -0.7639   11.4718    3.4949
  8.0000    5.2445   -3.1985   11.5303
  9.0000   -0.2478   10.3155    4.0800
 10.0000    4.7722   -2.0890   10.9426
 11.0000    0.1840    9.2750    4.6437
 12.0000    4.3775   -1.1246   10.4141
 13.0000    0.5448    8.3869    5.1335
 14.0000    4.0477   -0.3097    9.9631
 15.0000    0.8462    7.6404    5.5473
   ………
 20.0000    3.3500    1.4220    9.0016
   ………
 30.0000    2.7716    2.8587    8.2035
   ………
 50.0000    2.4406    3.6808    7.7468
   ………
100.0000    2.3757    3.8419    7.6573
SOR method did not converge

Diverged !!
CVEN 302-501
Homework No. 8

Chapter 12
Prob. 12.4 & 12.5 (Hand calculation; check the results using the programs)
You do it but do not hand in. The solution will be posted on the net.
Nonlinear Systems
Simultaneous nonlinear equations:
    f1(x1, x2, x3, ..., xn) = 0
    f2(x1, x2, x3, ..., xn) = 0
    f3(x1, x2, x3, ..., xn) = 0
    ...
    fn(x1, x2, x3, ..., xn) = 0

Example:
    x1^2 + 2 x2^2 + 4 x3 = 7
    2 x1 x2 + 5 x2 + 7 x2 x3^2 = 8
    5 x1 + 2 x2 + x3^2 = 4
Two Nonlinear Functions
    Circle:   x^2 + y^2 = r^2
    Ellipse:  (x/a)^2 + (y/b)^2 = 1
    f(x,y) = 0 ,  g(x,y) = 0
    The solution depends on the initial guesses

function [x1,y1,x2,y2]=curves(r,a,b,theta)
% Circle with radius r
x1=r*cos(theta); y1=r*sin(theta);
% Ellipse with major and minor axes (a,b)
x2=a*cos(theta); y2=b*sin(theta);

» r=2; a=3; b=1; theta=0:0.02*pi:2*pi;
» [x1,y1,x2,y2]=curves(2,3,1,theta);
» H=plot(x1,y1,'r',x2,y2,'b');
» set(H,'LineWidth',3.0); axis equal;
Newton-Raphson Method
One nonlinear equation (Ch. 6):
    f(x_{i+1}) ≈ f(x_i) + (x_{i+1} - x_i) f'(x_i) = 0
    x_{i+1} = x_i - f(x_i) / f'(x_i)

Two nonlinear equations (Taylor series):
    f_{1,i+1} ≈ f_{1,i} + (x_{1,i+1} - x_{1,i}) ∂f_{1,i}/∂x1 + (x_{2,i+1} - x_{2,i}) ∂f_{1,i}/∂x2
    f_{2,i+1} ≈ f_{2,i} + (x_{1,i+1} - x_{1,i}) ∂f_{2,i}/∂x1 + (x_{2,i+1} - x_{2,i}) ∂f_{2,i}/∂x2
Intersection of Two Curves
Two roots:  f1(x1, x2) = 0 ,  f2(x1, x2) = 0
Set f_{1,i+1} = 0 and f_{2,i+1} = 0:

    (∂f_{1,i}/∂x1) x_{1,i+1} + (∂f_{1,i}/∂x2) x_{2,i+1} = -f_{1,i} + x_{1,i} ∂f_{1,i}/∂x1 + x_{2,i} ∂f_{1,i}/∂x2
    (∂f_{2,i}/∂x1) x_{1,i+1} + (∂f_{2,i}/∂x2) x_{2,i+1} = -f_{2,i} + x_{1,i} ∂f_{2,i}/∂x1 + x_{2,i} ∂f_{2,i}/∂x2

Alternatively,

    | ∂f_{1,i}/∂x1   ∂f_{1,i}/∂x2 | | x_{1,i+1} - x_{1,i} |     | -f_{1,i} |
    | ∂f_{2,i}/∂x1   ∂f_{2,i}/∂x2 | | x_{2,i+1} - x_{2,i} |  =  | -f_{2,i} |

    [J]{Δx} = {-f} ,   [J] is the Jacobian matrix
Intersection of Two Curves
Intersection of a circle and a parabola:
    f(x,y) = x^2 + y^2 - 1 = 0              f1(x1, x2) = x1^2 + x2^2 - 1 = 0
    g(x,y) = x^2 - y = 0            or      f2(x1, x2) = x1^2 - x2 = 0

Intersection of Two Curves
Jacobian:
    | ∂f1/∂x1   ∂f1/∂x2 |     | 2x1   2x2 |
    | ∂f2/∂x1   ∂f2/∂x2 |  =  | 2x1    -1 |

    [J]{Δx} = {-f}:
    | 2x_{1,i}   2x_{2,i} | | Δx_{1,i} |     | -f_{1,i} |
    | 2x_{1,i}      -1    | | Δx_{2,i} |  =  | -f_{2,i} |

Solve for Δx, then update:
    x_{1,i+1} = x_{1,i} + Δx_{1,i}
    x_{2,i+1} = x_{2,i} + Δx_{2,i}          (x_new = x_old + Δx)
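A short MATLAB sketch (not from the original slides) of a single Newton step for this circle-parabola system, starting from the same initial guess (0.5, 0.5) used in the example run later in the slides:

x = [0.5; 0.5];
f = [x(1)^2 + x(2)^2 - 1;  x(1)^2 - x(2)];
J = [2*x(1)  2*x(2);  2*x(1)  -1];
dx = -J\f;                  % solve J*dx = -f
x  = x + dx;                % becomes [0.8750; 0.6250], the first row of the Newton_sys output
disp(x')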
function [x,y,z]=ellipse(a,b,c,imax,jmax)
% Ellipsoidal surface with semi-axes (a,b,c)
for i=1:imax+1
    theta=2*pi*(i-1)/imax;
    for j=1:jmax+1
        phi=2*pi*(j-1)/jmax;
        x(i,j)=a*cos(theta);
        y(i,j)=b*sin(theta)*cos(phi);
        z(i,j)=c*sin(theta)*sin(phi);
    end
end

» a=3; b=2; c=1; imax=50; jmax=50;
» [x,y,z]=ellipse(a,b,c,imax,jmax);
» surf(x,y,z); axis equal;
» print -djpeg100 ellipse.jpg

Parabolic or Hyperbolic Surfaces

function [x,y,z]=parabola(a,b,c,imax,jmax)
% z=a*x^2+b*y^2+c
for i=1:imax+1
    xi=-2+4*(i-1)/imax;
    for j=1:jmax
        eta=-2+4*(j-1)/jmax;
        x(i,j)=xi;
        y(i,j)=eta;
        z(i,j)=a*x(i,j)^2+b*y(i,j)^2+c;
    end
end
» a=0.5; b=-1; c=1; imax=50; jmax=50;
» [x2,y2,z2]=parabola(a,b,c,imax,jmax);
» surf(x2,y2,z2); axis equal;
» a=3; b=2; c=1; imax=50; jmax=50;
» [x1,y1,z1]=ellipse(a,b,c,imax,jmax);
» surf(x1,y1,z1); hold on; surf(x2,y2,z2); axis equal;
» print -djpeg100 parabola.jpg
Intersection of two nonlinear surfaces
Infinitely many solutions
Intersection of three nonlinear surfaces
» [x1,y1,z1]=ellipse(2,2,2,50,50);
» [x2,y2,z2]=ellipse(3,2,1,50,50);
» [x3,y3,z3]=parabola(-0.5,0.5,-0.5,30,30);
» surf(x1,y1,z1); hold on; surf(x2,y2,z2); surf(x3,y3,z3);
» axis equal; view (-30,60);
» print -djpeg100 surf3.jpg
Newton-Raphson Method
n nonlinear equations in n unknowns:

    f1(x1, x2, x3, ..., xn) = 0                 | f1(x) |
    f2(x1, x2, x3, ..., xn) = 0                 | f2(x) |
    f3(x1, x2, x3, ..., xn) = 0        F(x) =   | f3(x) |
      ...                                       |  ...  |
    fn(x1, x2, x3, ..., xn) = 0                 | fn(x) |

    F(x) = 0 ,   x = [ x1  x2  x3  ...  xn ]
Newton-Raphson Method
Jacobian (matrix of partial derivatives):

             | ∂f1/∂x1   ∂f1/∂x2   ∂f1/∂x3   ...   ∂f1/∂xn |
             | ∂f2/∂x1   ∂f2/∂x2   ∂f2/∂x3   ...   ∂f2/∂xn |
    J(x)  =  | ∂f3/∂x1   ∂f3/∂x2   ∂f3/∂x3   ...   ∂f3/∂xn |
             |   ...       ...       ...     ...     ...   |
             | ∂fn/∂x1   ∂fn/∂x2   ∂fn/∂x3   ...   ∂fn/∂xn |

Newton's iteration (without matrix inversion):
    J(x_old) y = -F(x_old) ;    y = Δx = x_new - x_old
Newton-Raphson Method
For a single equation with one variable:
    J(x) = df/dx = f'(x)

Newton's iteration:
    x_new = x_old - J(x_old)^(-1) f(x_old) = x_old - f(x_old) / f'(x_old)
i.e.
    x_{i+1} = x_i - f(x_i) / f'(x_i)
function x = Newton_sys(F, JF, x0, tol, maxit)
% Solve the nonlinear system F(x) = 0 using Newton's method
% Vectors x and x0 are row vectors (for display purposes)
% function F returns a column vector, [f1(x), ..fn(x)]'
% stop if norm of change in solution vector is less than tol
% solve JF(x) y = - F(x) using Matlab's "backslash operator"
% y = -feval(JF, xold) \ feval(F, xold);
% the next approximate solution is x_new = xold + y';

xold = x0;
disp([0 xold ]);
iter = 1;
while (iter <= maxit)
    y = -feval(JF, xold) \ feval(F, xold);
    xnew = xold + y';
    dif = norm(xnew - xold);
    disp([iter xnew dif]);
    if dif <= tol
        x = xnew;
        disp('Newton method has converged')
        return;
    else
        xold = xnew;
    end
    iter = iter + 1;
end
disp('Newton method did not converge')
x = xnew;
Intersection of Circle and Parabola

function f = ex11_1(x)
f = [(x(1)^2 + x(2)^2 - 1)
     (x(1)^2 - x(2)) ];

function df = ex11_1_j(x)
df = [2*x(1)  2*x(2)
      2*x(1)  -1 ];

Use x0 = [0.5 0.5] as initial estimates

» x0=[0.5 0.5]; tol=1.e-6; maxit=100;
» x=Newton_sys('ex11_1','ex11_1_j',x0,tol,maxit);
         0    0.5000    0.5000
    1.0000    0.8750    0.6250    0.3953
    2.0000    0.7907    0.6181    0.0846
    3.0000    0.7862    0.6180    0.0045
    4.0000    0.7862    0.6180    0.0000
    5.0000    0.7862    0.6180    0.0000
Newton method has converged
Intersection of Three Surfaces
Solve the nonlinear system:
    f(x,y,z) = x^2 + y^2 + z^2 - 14 = 0            f1(x1,x2,x3) = x1^2 + x2^2 + x3^2 - 14 = 0
    g(x,y,z) = x^2 + 2y^2 - 9 = 0           or     f2(x1,x2,x3) = x1^2 + 2 x2^2 - 9 = 0
    h(x,y,z) = x - 3y^2 + z^2 = 0                  f3(x1,x2,x3) = x1 - 3 x2^2 + x3^2 = 0

Jacobian:
            | ∂f1/∂x1   ∂f1/∂x2   ∂f1/∂x3 |     | 2x1    2x2   2x3 |
    [J]  =  | ∂f2/∂x1   ∂f2/∂x2   ∂f2/∂x3 |  =  | 2x1    4x2    0  |
            | ∂f3/∂x1   ∂f3/∂x2   ∂f3/∂x3 |     |  1    -6x2   2x3 |
Newton-Raphson Method
Solve the linear system for the increments at each iteration:

    | 2x1    2x2   2x3 | | Δx1 |       | f1 |
    | 2x1    4x2    0  | | Δx2 |  =  - | f2 |
    |  1    -6x2   2x3 | | Δx3 |       | f3 |

    | x_new |     | x_old |     | Δx1 |
    | y_new |  =  | y_old |  +  | Δx2 |
    | z_new |     | z_old |     | Δx3 |

MATLAB statement ( y = -J^(-1) F ):
    y = -feval(JF, xold) \ feval(F, xold)
Intersection of Three Surfaces

function f = ex11_2(x)
% Intersection of three surfaces
f = [ (x(1)^2 + x(2)^2 + x(3)^2 - 14)
      (x(1)^2 + 2*x(2)^2 - 9)
      (x(1) - 3*x(2)^2 + x(3)^2) ];

function df = ex11_2_j(x)
% Jacobian matrix
df = [ 2*x(1)  2*x(2)  2*x(3)
       2*x(1)  4*x(2)  0
       1      -6*x(2)  2*x(3) ] ;

» x0=[1 1 1]; es=1.e-6; maxit=100;
» x=Newton_sys('ex11_2','ex11_2_j',x0,es,maxit);
         0         1         1         1
    1.0000    1.6667    2.1667    4.6667    3.9051
    2.0000    1.5641    1.8407    3.2207    1.4858
    3.0000    1.5616    1.8115    2.8959    0.3261
    4.0000    1.5616    1.8113    2.8777    0.0182
    5.0000    1.5616    1.8113    2.8776    0.0001
    6.0000    1.5616    1.8113    2.8776    0.0000
Newton method has converged
Fixed-Point Iteration
No need to compute partial derivatives

    f1(x1, x2, x3, ..., xn) = 0          x1 = g1(x1, x2, x3, ..., xn)
    f2(x1, x2, x3, ..., xn) = 0          x2 = g2(x1, x2, x3, ..., xn)
    f3(x1, x2, x3, ..., xn) = 0    =>    x3 = g3(x1, x2, x3, ..., xn)
      ...                                  ...
    fn(x1, x2, x3, ..., xn) = 0          xn = gn(x1, x2, x3, ..., xn)

Convergence theorem:   | ∂gi/∂xj | ≤ K/n  with  K < 1
Example 1: Fixed-Point
Solve the nonlinear system:
    f1(x1, x2, x3) = x1^2 + 50 x1 + x2^2 + x3^2 - 200 = 0
    f2(x1, x2, x3) = x1^2 + 20 x2 + x3^2 - 50 = 0
    f3(x1, x2, x3) = -x1^2 - x2^2 + 40 x3 + 75 = 0

Rearrange (initial guess: x = y = z = 2):
    x1 = g1(x1, x2, x3) = (200 - x1^2 - x2^2 - x3^2) / 50
    x2 = g2(x1, x2, x3) = (50 - x1^2 - x3^2) / 20
    x3 = g3(x1, x2, x3) = (x1^2 + x2^2 - 75) / 40
function x = fixed_pt_sys(G, x0, es, maxit)
% Solve the nonlinear system x = G(x) using fixed-point method
% Vectors x and x0 are row vectors (for display purposes)
% function G returns a column vector, [g1(x), ..gn(x)]'
% stop if norm of change in solution vector is less than es
% y = feval(G,xold); the next approximate solution is xnew = y';

disp([0 x0 ]); %display initial estimate
xold = x0;
iter = 1;
while (iter <= maxit)
    y = feval(G, xold);
    xnew = y';
    dif = norm(xnew - xold);
    disp([iter xnew dif]);
    if dif <= es
        x = xnew;
        disp('Fixed-point iteration has converged')
        return;
    else
        xold = xnew;
    end
    iter = iter + 1;
end
disp('Fixed-point iteration did not converge')
x = xnew;
function G = example1(x)
G = [ (-0.02*x(1)^2 - 0.02*x(2)^2 - 0.02*x(3)^2 + 4)
(-0.05*x(1)^2 - 0.05*x(3)^2 + 2.5)
(0.025*x(1)^2 + 0.025*x(2)^2 -1.875) ];
» x0=[0 0 0]; es=1.e-6; maxit=100;
» x=fixed_pt_sys('example1', x0, es, maxit);
0 0 0 0
1.0000 4.0000 2.5000 -1.8750 5.0760
2.0000 3.4847 1.5242 -1.3188 1.2358
3.0000 3.6759 1.8059 -1.5133 0.3921
4.0000 3.6187 1.7099 -1.4557 0.1257
5.0000 3.6372 1.7393 -1.4745 0.0395
6.0000 3.6314 1.7298 -1.4686 0.0126
7.0000 3.6333 1.7328 -1.4705 0.0040
8.0000 3.6327 1.7318 -1.4699 0.0013
9.0000 3.6329 1.7321 -1.4701 0.0004
10.0000 3.6328 1.7321 -1.4700 0.0001
11.0000 3.6328 1.7321 -1.4701 0.0000
12.0000 3.6328 1.7321 -1.4701 0.0000
13.0000 3.6328 1.7321 -1.4701 0.0000
14.0000 3.6328 1.7321 -1.4701 0.0000
15.0000 3.6328 1.7321 -1.4701 0.0000
Fixed-point iteration has converged
Example 2: Fixed-Point
Solve the nonlinear system:
    f1(x1, x2, x3) = x1^2 + 4 x2^2 + 9 x3^2 - 36 = 0
    f2(x1, x2, x3) = x1^2 + 9 x2^2 - 47 = 0
    f3(x1, x2, x3) = x1^2 x3 - 11 = 0

Rearrange (initial guess: x = 0, y = 0, z > 0):
    x1 = g1(x1, x2, x3) = sqrt(11 / x3)
    x2 = g2(x1, x2, x3) = sqrt(47 - x1^2) / 3
    x3 = g3(x1, x2, x3) = sqrt(36 - x1^2 - 4 x2^2) / 3
function G = example2(x)
G = [ (sqrt(11/x(3)))
(sqrt(47 - x(1)^2) / 3 )
(sqrt(36 - x(1)^2 - 4*x(2)^2) / 3 ) ];
» x0=[0 0 10]; es=0.0001; maxit=500;
» x=fixed_pt_sys(‘example2',x0,es,maxit);
0 0 0 10
1.0000 1.0488 2.2852 2.0000 8.3858
2.0000 2.3452 2.2583 1.2477 1.4991
3.0000 2.9692 2.1473 1.0593 0.6612
4.0000 3.2224 2.0598 0.9854 0.2779
5.0000 3.3411 2.0170 0.9801 0.1263
6.0000 3.3501 1.9955 0.9754 0.0238
7.0000 3.3581 1.9938 0.9916 0.0181
8.0000 3.3307 1.9923 0.9901 0.0275
9.0000 3.3332 1.9974 1.0017 0.0129
10.0000 3.3139 1.9969 0.9962 0.0201
11.0000 3.3230 2.0005 1.0037 0.0124
12.0000 3.3105 1.9988 0.9972 0.0142
13.0000 3.3213 2.0011 1.0033 0.0126
14.0000 3.3112 1.9991 0.9973 0.0120
15.0000 3.3212 2.0010 1.0028 0.0116
16.0000 3.3120 1.9992 0.9974 0.0107
17.0000 3.3209 2.0008 1.0024 0.0103
18.0000 3.3126 1.9992 0.9977 0.0097
19.0000 3.3205 2.0007 1.0022 0.0092
20.0000 3.3130 1.9993 0.9979 0.0087
...
50.0000 3.3159 1.9999 0.9996 0.0017
...
102.0000 3.3166 2.0000 1.0000 0.0001
103.0000 3.3167 2.0000 1.0000 0.0001
Fixed-point iteration has converged