Iterative Methods to Solve Systems of Equations

System of linear equations

Iterative Methods
TOPICS
• Jacobi Method
• Gauss-Seidel Method
• Gauss-Seidel Method with relaxation
• Nonlinear equation systems
Description
• The entire presentation uses the following notation to write the matrices:

Given a square system of n linear equations with unknown vector x:

$$A = \begin{pmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{n1} & a_{n2} & \cdots & a_{nn} \end{pmatrix}, \qquad x = \begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{pmatrix}, \qquad b = \begin{pmatrix} b_1 \\ b_2 \\ \vdots \\ b_n \end{pmatrix}$$

Ax = b
Jacobi method
• The Jacobi method is an algorithm for determining the solution of a system of linear equations in which the element of largest absolute value in each row and column is the diagonal element (a diagonally dominant system).

• Each diagonal element is solved for, and an approximate value is plugged in. The process is then iterated until it converges.

• This algorithm is a stripped-down version of the Jacobi transformation method of matrix diagonalization.
Given a square system of n linear equations

$$A = \begin{pmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{n1} & a_{n2} & \cdots & a_{nn} \end{pmatrix}, \qquad x = \begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{pmatrix}, \qquad b = \begin{pmatrix} b_1 \\ b_2 \\ \vdots \\ b_n \end{pmatrix}$$

Ax = b
Then A can be decomposed into a diagonal component D,
and the remainder R:

$$D = \begin{pmatrix} a_{11} & 0 & \cdots & 0 \\ 0 & a_{22} & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & a_{nn} \end{pmatrix}, \qquad R = \begin{pmatrix} 0 & a_{12} & \cdots & a_{1n} \\ a_{21} & 0 & \cdots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{n1} & a_{n2} & \cdots & 0 \end{pmatrix}$$

A = D + R
The system of linear equations may be rewritten as:

(D + R) x = b Dx + Rx = b

And finally

Dx = b - Rx
The Jacobi method is an iterative technique that solves the left-hand side of this expression for x, using the previous value of x on the right-hand side. Analytically, this may be written as

( k 1) 1
xi  D (b  Rx ) (k )

The element-based formula is thus

$$x_i^{(k+1)} = \frac{1}{a_{ii}}\left(b_i - \sum_{j \neq i} a_{ij}\, x_j^{(k)}\right), \qquad i = 1, 2, \ldots, n$$
• The computation of x_i^(k+1) requires every element of x^(k) except x_i^(k) itself. Unlike in the Gauss–Seidel method, we cannot overwrite x_i^(k) with x_i^(k+1), because that value is still needed by the rest of the computation.

• This is the most meaningful difference between the Jacobi and Gauss–Seidel methods, and it is the reason the former can be implemented as a parallel algorithm while the latter cannot. The minimum amount of storage is two vectors of size n.
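The element-based formula above translates almost directly into code. Below is a minimal NumPy sketch under the assumptions stated on the slides (square A with a non-zero, ideally dominant, diagonal); the function name, tolerance, iteration cap, and sample system are illustrative choices, not part of the original presentation.

```python
import numpy as np

def jacobi(A, b, x0=None, tol=1e-10, max_iter=500):
    """Solve Ax = b with the Jacobi iteration.

    Assumes A is square with a non-zero (ideally dominant) diagonal.
    """
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    x = np.zeros_like(b) if x0 is None else np.asarray(x0, dtype=float)

    D = np.diag(A)           # diagonal component D, stored as a vector
    R = A - np.diagflat(D)   # remainder R = A - D

    for _ in range(max_iter):
        # x^(k+1) = D^{-1} (b - R x^(k)); the whole previous vector is needed,
        # so the new iterate goes into a separate vector (two vectors of size n).
        x_new = (b - R @ x) / D
        if np.linalg.norm(x_new - x, ord=np.inf) < tol:
            return x_new
        x = x_new
    return x

# Example with a diagonally dominant system (illustrative values)
A = [[4.0, -1.0, 0.0],
     [-1.0, 4.0, -1.0],
     [0.0, -1.0, 4.0]]
b = [2.0, 4.0, 10.0]
print(jacobi(A, b))  # should be close to the exact solution of Ax = b
```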
Gauss–Seidel Method
• Also known as the Liebmann method or the method of successive displacement, the Gauss–Seidel method is an iterative method used to solve a linear system of equations.

• Though it can be applied to any matrix with non-zero elements on the diagonal, convergence is only guaranteed if the matrix is either strictly diagonally dominant, or symmetric and positive definite.
Given a square system of n linear equations

$$A = \begin{pmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{n1} & a_{n2} & \cdots & a_{nn} \end{pmatrix}, \qquad x = \begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{pmatrix}, \qquad b = \begin{pmatrix} b_1 \\ b_2 \\ \vdots \\ b_n \end{pmatrix}$$

Ax = b
A can be decomposed into a lower triangular component L
and a strictly upper triangular component U

$$L = \begin{pmatrix} a_{11} & 0 & \cdots & 0 \\ a_{21} & a_{22} & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ a_{n1} & a_{n2} & \cdots & a_{nn} \end{pmatrix}, \qquad U = \begin{pmatrix} 0 & a_{12} & \cdots & a_{1n} \\ 0 & 0 & \cdots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & 0 \end{pmatrix}$$

A = L + U,    Lx = b - Ux
The Gauss–Seidel method is an iterative technique that solves the left-hand side of this expression for x, using the previous value of x on the right-hand side. Analytically, this may be written as

$$x^{(k+1)} = L^{-1}\left(b - U\,x^{(k)}\right) \qquad \text{(similar to the Jacobi method)}$$

However, by taking advantage of the triangular form of L, the elements of x^(k+1) can be computed sequentially using forward substitution:

$$x_i^{(k+1)} = \frac{1}{a_{ii}}\left(b_i - \sum_{j > i} a_{ij}\, x_j^{(k)} - \sum_{j < i} a_{ij}\, x_j^{(k+1)}\right), \qquad i = 1, 2, \ldots, n$$
• The computation of x_i^(k+1) uses the elements of x^(k+1) that have already been computed and only those elements of x^(k) that have not yet been updated. Unlike in the Jacobi method, x_i^(k) can therefore be overwritten with x_i^(k+1) as soon as it is computed, so only one storage vector of size n is needed.
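A minimal NumPy sketch of this forward-substitution form, updating x in place so that a single vector of size n suffices; the function name, tolerance, and sample system are illustrative, not taken from the slides.

```python
import numpy as np

def gauss_seidel(A, b, x0=None, tol=1e-10, max_iter=500):
    """Solve Ax = b with the Gauss-Seidel iteration (forward substitution form)."""
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    n = len(b)
    x = np.zeros(n) if x0 is None else np.asarray(x0, dtype=float)

    for _ in range(max_iter):
        x_prev = x.copy()  # kept only to test convergence
        for i in range(n):
            # Entries j < i already hold x^(k+1); entries j > i still hold x^(k),
            # so x can be updated in place with a single storage vector.
            s = A[i, :i] @ x[:i] + A[i, i + 1:] @ x[i + 1:]
            x[i] = (b[i] - s) / A[i, i]
        if np.linalg.norm(x - x_prev, ord=np.inf) < tol:
            break
    return x

# Same illustrative system as in the Jacobi sketch
A = [[4.0, -1.0, 0.0],
     [-1.0, 4.0, -1.0],
     [0.0, -1.0, 4.0]]
b = [2.0, 4.0, 10.0]
print(gauss_seidel(A, b))
```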

Convergence criterion for the Gauss–Seidel method
To guarantee convergence of the method, the absolute value of the diagonal coefficient of each equation must be greater than the sum of the absolute values of the other coefficients in that equation:

$$|a_{ii}| > \sum_{\substack{j=1 \\ j \neq i}}^{n} |a_{ij}|$$

This criterion is sufficient but not necessary for convergence: convergence is guaranteed whenever the restriction is satisfied. Systems that meet the restriction are known as diagonally dominant.
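This row-wise check is straightforward to code. A small sketch, with an illustrative helper name and example matrices:

```python
import numpy as np

def is_diagonally_dominant(A):
    """True if |a_ii| > sum of |a_ij|, j != i, in every row (strict row dominance)."""
    A = np.abs(np.asarray(A, dtype=float))
    diag = np.diag(A)
    off_diagonal_sums = A.sum(axis=1) - diag
    return bool(np.all(diag > off_diagonal_sums))

print(is_diagonally_dominant([[4, -1, 0], [-1, 4, -1], [0, -1, 4]]))  # True
print(is_diagonally_dominant([[1, 2, 3], [2, 1, 2], [3, 2, 1]]))      # False
```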
Gauss-Seidel Method with relaxation

• Relaxation is a modification of the Gauss-Seidel method intended to achieve faster convergence.
Improving convergence with relaxation
Relaxation is a slight modification of the Gauss-Seidel method that improves its convergence. After each new value of x is calculated, that value is replaced by a weighted average of the newly computed value and the value from the previous iteration:

$$x_i^{\text{new}} = \lambda\, x_i^{\text{new}} + (1 - \lambda)\, x_i^{\text{old}}$$
Where λ is a weighting factor with a value between 0 and 2. If λ is between 0 and 1, the result is a weighted average of the new and old values; this type of modification is known as under-relaxation. For values of λ between 1 and 2, extra weight is given to the newly computed value; this modification is called over-relaxation.
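A sketch of this relaxed update layered on the Gauss-Seidel loop shown earlier; the function name and the choice λ = 1.1 in the example are illustrative assumptions, not values from the slides.

```python
import numpy as np

def gauss_seidel_relaxed(A, b, lam=1.1, x0=None, tol=1e-10, max_iter=500):
    """Gauss-Seidel with relaxation: each update is blended with the previous value.

    lam is the weighting factor, 0 < lam < 2
    (0 < lam < 1: under-relaxation, 1 < lam < 2: over-relaxation).
    """
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    n = len(b)
    x = np.zeros(n) if x0 is None else np.asarray(x0, dtype=float)

    for _ in range(max_iter):
        x_prev = x.copy()
        for i in range(n):
            s = A[i, :i] @ x[:i] + A[i, i + 1:] @ x[i + 1:]
            x_gs = (b[i] - s) / A[i, i]             # plain Gauss-Seidel value
            x[i] = lam * x_gs + (1.0 - lam) * x[i]  # x_i^new = lam*x_gs + (1-lam)*x_i^old
        if np.linalg.norm(x - x_prev, ord=np.inf) < tol:
            break
    return x

A = [[4.0, -1.0, 0.0],
     [-1.0, 4.0, -1.0],
     [0.0, -1.0, 4.0]]
b = [2.0, 4.0, 10.0]
print(gauss_seidel_relaxed(A, b, lam=1.1))
```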
NON-LINEAR EQUATIONS
Generally, nonlinear algebraic problems are often exactly solvable, and if not, they can usually be thoroughly understood through qualitative and numerical analysis. As an example, the equation

$$x + x^2 - 1 = 0$$

could be written as

$$f(x) = c, \qquad \text{where } f(x) = x + x^2 \ \text{ and } \ c = 1,$$

and it is nonlinear because f(x) satisfies neither additivity nor homogeneity (the nonlinearity is due to the x² term). Though nonlinear, this simple example may be solved exactly (via the quadratic formula) and is very well understood.
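As a quick check of this example, the two roots can be computed with the quadratic formula and substituted back to confirm f(x) = c (a worked sketch, not part of the original slides):

```python
import math

# Roots of x^2 + x - 1 = 0 via the quadratic formula x = (-b ± sqrt(b^2 - 4ac)) / (2a)
a, b, c = 1.0, 1.0, -1.0
disc = math.sqrt(b * b - 4.0 * a * c)
roots = ((-b + disc) / (2.0 * a), (-b - disc) / (2.0 * a))
print(roots)                       # approximately (0.618..., -1.618...)
print([r + r * r for r in roots])  # both equal 1, confirming f(x) = c with c = 1
```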
