Unit-3: Solutions of Systems of Linear Equations

3.1. Introduction

Numerical methods for solving linear algebraic systems may be divided into two types: direct
and iterative.

 Direct methods are those which, in the absence of round-off or other errors, yield the
exact solution in a finite number of elementary arithmetic operations.
 Iterative methods start with an initial vector 𝑥(0) and generate a sequence of
approximate solution vectors {𝑥(𝑘)} which converges to the exact solution vector 𝑥 as
the number of iterations increases.
3.2. Direct Methods for Solving System of Linear Equations
A general set of 𝑚 linear equations in 𝑛 unknowns is given by:

𝑎11 𝑥1 + 𝑎12 𝑥2 + ⋯ + 𝑎1𝑛 𝑥𝑛 = 𝑏1
𝑎21 𝑥1 + 𝑎22 𝑥2 + ⋯ + 𝑎2𝑛 𝑥𝑛 = 𝑏2
⋮
𝑎𝑚1 𝑥1 + 𝑎𝑚2 𝑥2 + ⋯ + 𝑎𝑚𝑛 𝑥𝑛 = 𝑏𝑚

which can be written in matrix product form as:

[ 𝑎11 𝑎12 ⋯ 𝑎1𝑛 ] [ 𝑥1 ]   [ 𝑏1 ]
[ 𝑎21 𝑎22 ⋯ 𝑎2𝑛 ] [ 𝑥2 ] = [ 𝑏2 ]
[  ⋮    ⋮      ⋮  ] [  ⋮ ]   [  ⋮ ]
[ 𝑎𝑚1 𝑎𝑚2 ⋯ 𝑎𝑚𝑛 ] [ 𝑥𝑛 ]   [ 𝑏𝑚 ]

Or simply [𝐴][𝑋] = [𝐵] where [𝐴] is called the coefficient matrix, [𝐵] is called the constant
matrix and [𝑋] is called the solution matrix of the system.
            [ 𝑎11 𝑎12 ⋯ 𝑎1𝑛 ⋮ 𝑏1 ]
[𝐴 ⋮ 𝐵] =   [ 𝑎21 𝑎22 ⋯ 𝑎2𝑛 ⋮ 𝑏2 ]   is called the augmented matrix of the system.
            [  ⋮    ⋮      ⋮  ⋮  ⋮ ]
            [ 𝑎𝑚1 𝑎𝑚2 ⋯ 𝑎𝑚𝑛 ⋮ 𝑏𝑚 ]
A system of equations [𝐴][𝑋] = [𝐵] is consistent if it has at least one solution, and it is
inconsistent if it has no solution. A consistent system of equations may have either a unique
solution or infinitely many solutions.
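In MATLAB, consistency can be checked by comparing the rank of the coefficient matrix with the rank of the augmented matrix; the short sketch below is only an illustration and uses sample data.

A = [1 -3 1; 2 -8 8; -6 3 -15];       % sample coefficient matrix
b = [4; -2; 9];                       % sample constant vector

if rank(A) == rank([A b])             % consistent system
    if rank(A) == size(A,2)
        disp('Consistent with a unique solution')
    else
        disp('Consistent with infinitely many solutions')
    end
else
    disp('Inconsistent: no solution')
end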

There are four types of direct methods. These are:
a. Gaussian Elimination Methods
b. Gauss-Jordan Elimination Methods
c. Tridiagonal matrix method
d. LU Decomposition Method
3.2.1. Gaussian Elimination Methods (find the echelon form of the augmented matrix)
It consists of two major steps. These are:

 Forward Elimination and
 Back Substitution
 Forward Elimination of unknowns: In this step the unknowns are eliminated in each
equation, starting with the first equation. In this way the system of equations is
reduced to an upper triangular system of equations.
 Back Substitution: The equations are then solved starting from the last equation, as it
has only one unknown.

Example
Solve the following S.L.E by using G.E.M
      𝑥 − 3𝑦 + 𝑧 = 4                    2𝑥 + 𝑦 + 𝑧 = 10
a. {  2𝑥 − 8𝑦 + 8𝑧 = −2          b. {  3𝑥 + 2𝑦 + 3𝑧 = 18
     −6𝑥 + 3𝑦 − 15𝑧 = 9                 𝑥 + 4𝑦 + 9𝑧 = 16
Solution
      𝑥 − 3𝑦 + 𝑧 = 4
a. {  2𝑥 − 8𝑦 + 8𝑧 = −2
     −6𝑥 + 3𝑦 − 15𝑧 = 9
Step 1: Forward Elimination
i. To eliminate 𝑥 from the 2nd and 3rd equations multiply the 1st
equation by −2 and 6 and add to the 2nd and 3rd equation
respectively.
𝑥 − 3𝑦 + 𝑧 = 4
−2𝑦 + 6𝑧 = −10
−15𝑦 − 9𝑧 = 33
Dividing the new 2nd and 3rd equations by 2 and 3 respectively gives
𝑥 − 3𝑦 + 𝑧 = 4
−𝑦 + 3𝑧 = −5
−5𝑦 − 3𝑧 = 11
ii. To eliminate the 𝑦 term in the last equation multiply the 2nd equation
by −5 and add it to the 3rd equation. The resulting new system of
equations becomes
𝑥 − 3𝑦 + 𝑧 = 4
{ −𝑦 + 3𝑧 = −5
−18𝑧 = 36

Step 2: Back substitution
From the 3rd equation we get 𝑧 = −2; substituting this into the 2nd equation yields 𝑦 = −1.
Using both of these results in the first equation gives 𝑥 = 3.
Hence the solution of the above system is given by (𝑥, 𝑦, 𝑧) = (3, −1, −2)
Equivalently, we can find the echelon form of the augmented matrix of the system of equations, which is
[ 1  −3    1     4 ]
[ 0  −1    3    −5 ]
[ 0   0  −18    36 ]
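The two steps can also be sketched as a short MATLAB script; this is a minimal version without partial pivoting (it assumes every pivot is nonzero), shown here applied to example (a).

% Naive Gaussian elimination with back substitution (no pivoting)
A = [1 -3 1; 2 -8 8; -6 3 -15];
b = [4; -2; 9];
n = length(b);
Ab = [A b];                              % augmented matrix

% Forward elimination: reduce to an upper triangular system
for k = 1:n-1
    for i = k+1:n
        m = Ab(i,k)/Ab(k,k);             % multiplier
        Ab(i,:) = Ab(i,:) - m*Ab(k,:);
    end
end

% Back substitution, starting from the last equation
x = zeros(n,1);
for i = n:-1:1
    x(i) = (Ab(i,n+1) - Ab(i,i+1:n)*x(i+1:n))/Ab(i,i);
end
disp(x')                                 % expected: 3  -1  -2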

3.2.2. Gauss-Jordan Elimination Methods (find the rref of the augmented matrix)
The general procedure for Gauss-Jordan elimination can be summarized in the following steps:

1. Write the augmented matrix of the system.
2. Reduce the augmented matrix to its RREF.
3. Stop the process in step 2 if you obtain a row whose elements are all zeros except the
last one on the right. In that case, the system is inconsistent and has no solution.
Otherwise, finish step 2 and read the solutions of the system from the final matrix.
Example
Solve the following S.L.E using G.J.E.M
      𝑥 + 𝑦 + 𝑧 = 5                      2𝑥 + 3𝑦 + 4𝑧 = 19
a. { 2𝑥 + 3𝑦 + 5𝑧 = 8             b. {  𝑥 + 2𝑦 + 𝑧 = 4
      4𝑥 + 5𝑧 = 2                        3𝑥 − 𝑦 − 𝑧 = 9
a. → (3, 4, −2)                   b. → (3.7778, −2.1111, 4.4444)
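In MATLAB the reduced row echelon form can be obtained directly with the built-in rref function, so example (a) can be checked with a few lines.

% Gauss-Jordan elimination via the built-in rref (example a)
A = [1 1 1; 2 3 5; 4 0 5];
b = [5; 8; 2];
R = rref([A b]);        % RREF of the augmented matrix
x = R(:,end);           % last column holds the solution when A is invertible
disp(x')                % expected: 3  4  -2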

Self-assessment exercise
A civil engineer builds 3 types of buildings; each type requires metal, cement
and wood in the following composition:

Building type     Metal (tonnes)     Cement (tonnes)     Wood (tonnes)
A                 5                  1                   1
B                 2                  4                   2
C                 1                  2                   5

If he/she has 12 tonnes of metal, 15 tonnes of cement, and 20 tonnes of wood in total, how many
buildings of each type can be constructed at most?
[Hint: let x, y, and z be the number of buildings of type A, B, and C respectively.]

3.2.3. Tridiagonal matrix method
Thomas algorithm for tridiagonal system

Definition:
Consider the system of equations defined by

𝑏1 𝑥1 + 𝑐1 𝑥2 = 𝑑1
𝑎2 𝑥1 + 𝑏2 𝑥2 + 𝑐2 𝑥3 = 𝑑2
𝑎3 𝑥2 + 𝑏3 𝑥3 + 𝑐3 𝑥4 = 𝑑3
⋮
𝑎𝑛 𝑥𝑛−1 + 𝑏𝑛 𝑥𝑛 = 𝑑𝑛
Or in matrix notation, consider the system of linear simultaneous algebraic equations given by

𝐴𝑥 = 𝑑

where

      [ 𝑏1   𝑐1                   0    ]
      [ 𝑎2   𝑏2   𝑐2                   ]
𝐴 =   [      𝑎3   𝑏3   ⋱               ]
      [            ⋱    ⋱    𝑐𝑛−1      ]
      [ 0              𝑎𝑛    𝑏𝑛        ]

is a tridiagonal matrix, 𝑥 = [𝑥1 , 𝑥2 , … , 𝑥𝑛 ]𝑡 and 𝑑 = [𝑑1 , 𝑑2 , … , 𝑑𝑛 ]𝑡 .

Note:
An 𝒏 × 𝒏 matrix 𝑨 is called tridiagonal matrix if 𝒂𝒊𝒋 = 𝟎 whenever |𝒊 − 𝒋| > 𝟏.

These equations can be solved by using the Thomas algorithm, which is described in the three steps
shown below:

Step 1: set 𝑦1 = 𝑏1 and compute

        𝑦𝑖 = 𝑏𝑖 − (𝑎𝑖 𝑐𝑖−1)/𝑦𝑖−1 ,     𝑖 = 2, 3, … , 𝑛

Step 2: set 𝑧1 = 𝑑1/𝑏1 and compute

        𝑧𝑖 = (𝑑𝑖 − 𝑎𝑖 𝑧𝑖−1)/𝑦𝑖 ,       𝑖 = 2, 3, … , 𝑛

Step 3: set 𝑥𝑛 = 𝑧𝑛 and compute

        𝑥𝑖 = 𝑧𝑖 − (𝑐𝑖 𝑥𝑖+1)/𝑦𝑖 ,       𝑖 = 𝑛 − 1, 𝑛 − 2, … , 1
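A minimal MATLAB sketch of these three steps is given below; the vectors a, b, c and d hold the sub-diagonal, main diagonal, super-diagonal and right-hand side as defined above (with a(1) and c(n) unused), and no check is made for a zero 𝑦𝑖.

function x = thomas(a, b, c, d)
% Thomas algorithm for a tridiagonal system A*x = d
n = length(d);
y = zeros(n,1); z = zeros(n,1); x = zeros(n,1);
y(1) = b(1);                                  % Step 1: modified diagonal
for i = 2:n
    y(i) = b(i) - a(i)*c(i-1)/y(i-1);
end
z(1) = d(1)/b(1);                             % Step 2: modified right-hand side
for i = 2:n
    z(i) = (d(i) - a(i)*z(i-1))/y(i);
end
x(n) = z(n);                                  % Step 3: back substitution
for i = n-1:-1:1
    x(i) = z(i) - c(i)*x(i+1)/y(i);
end
end

For example 1 below, thomas([0;2;1;1], [3;-3;2;-1], [-1;2;5;0], [5;5;10;1]) returns (2, 1, 2, 1).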

Remark: As an illustration, consider a 4 × 4 tridiagonal system of equations given by

[ 𝑎11  𝑎12   0    0  ] [ 𝑥1 ]   [ 𝑑1 ]
[ 𝑎21  𝑎22  𝑎23   0  ] [ 𝑥2 ] = [ 𝑑2 ]
[  0   𝑎32  𝑎33  𝑎34 ] [ 𝑥3 ]   [ 𝑑3 ]
[  0    0   𝑎43  𝑎44 ] [ 𝑥4 ]   [ 𝑑4 ]
It can be written as

𝑎11 𝑥1 + 𝑎12 𝑥2 = 𝑑1
𝑎21 𝑥1 + 𝑎22 𝑥2 + 𝑎23 𝑥3 = 𝑑2
𝑎32 𝑥2 + 𝑎33 𝑥3 + 𝑎34 𝑥4 = 𝑑3
𝑎43 𝑥3 + 𝑎44 𝑥4 = 𝑑4
Example:

1. Solve the following equations by Thomas algorithm

3𝑥1 − 𝑥2 = 5
2𝑥1 − 3𝑥2 + 2𝑥3 = 5
𝑥2 + 2𝑥3 + 5𝑥4 = 10
𝑥3 − 𝑥4 = 1
Solution:
Here

[ 3  −1   0   0 ] [ 𝑥1 ]   [  5 ]        [𝑎2 , 𝑎3 , 𝑎4] = [2, 1, 1]
[ 2  −3   2   0 ] [ 𝑥2 ] = [  5 ]        [𝑏1 , 𝑏2 , 𝑏3 , 𝑏4] = [3, −3, 2, −1]
[ 0   1   2   5 ] [ 𝑥3 ]   [ 10 ]        [𝑐1 , 𝑐2 , 𝑐3] = [−1, 2, 5]
[ 0   0   1  −1 ] [ 𝑥4 ]   [  1 ]        [𝑑1 , 𝑑2 , 𝑑3 , 𝑑4] = [5, 5, 10, 1]

Step 1: set 𝑦1 = 3 and compute 𝑦𝑖 = 𝑏𝑖 − (𝑎𝑖 𝑐𝑖−1)/𝑦𝑖−1 for 𝑖 = 2, 3, 4:

𝑦2 = 𝑏2 − (𝑎2 𝑐1)/𝑦1 = −3 + 2/3 = −7/3
𝑦3 = 𝑏3 − (𝑎3 𝑐2)/𝑦2 = 2 + 6/7 = 20/7
𝑦4 = 𝑏4 − (𝑎4 𝑐3)/𝑦3 = −1 − 35/20 = −55/20

Step 2: set 𝑧1 = 𝑑1/𝑏1 = 5/3 and compute 𝑧𝑖 = (𝑑𝑖 − 𝑎𝑖 𝑧𝑖−1)/𝑦𝑖 for 𝑖 = 2, 3, 4:

𝑧2 = (𝑑2 − 𝑎2 𝑧1)/𝑦2 = −5/7
𝑧3 = (𝑑3 − 𝑎3 𝑧2)/𝑦3 = 75/20
𝑧4 = (𝑑4 − 𝑎4 𝑧3)/𝑦4 = 1

Step 3: 𝑥4 = 𝑧4 = 1 and 𝑥𝑖 = 𝑧𝑖 − (𝑐𝑖 𝑥𝑖+1)/𝑦𝑖 for 𝑖 = 3, 2, 1:

𝑥3 = 𝑧3 − (𝑐3 𝑥4)/𝑦3 = 2
𝑥2 = 𝑧2 − (𝑐2 𝑥3)/𝑦2 = 1
𝑥1 = 𝑧1 − (𝑐1 𝑥2)/𝑦1 = 2

Hence the solution is (𝑥1 , 𝑥2 , 𝑥3 , 𝑥4) = (2, 1, 2, 1).

2. Solve the following tridiagonal system of equations using the Thomas algorithm
2𝑥1 + 𝑥2 = 1
3𝑥1 + 2𝑥2 + 𝑥3 = 2
𝑥2 + 2𝑥3 + 2𝑥4 = −1
𝑥3 + 4𝑥4 = −3
Ans: 𝑥1 = 1, 𝑥2 = −1, 𝑥3 = 1, 𝑥4 = −1.

3.2.4. LU Decomposition Methods
Some linear systems 𝐴𝑋 = 𝐵 are relatively easy to solve. For example, if A is a lower
triangular matrix, then the elements of X can be computed recursively using forward substitution.
Similarly, if A is an upper triangular matrix, then the elements of X can be computed recursively
using backward substitution.

Most linear systems encountered in practice do not have a triangular coefficient matrix. In
such cases, the system is often best solved using the L-U factorization (or L-U decomposition)
method. The L-U factorization method decomposes the coefficient matrix into the product of
lower and upper triangular matrices, allowing the system to be solved using a combination of
forward and backward substitution.
The L-U factorization method involves two phases:

 In the first (factorization) phase, Gaussian elimination is used to factor the matrix A into
the product A = LU of a lower triangular matrix L and an upper triangular matrix U.
 In the solution phase, the factored equation AX = (LU)X = L(UX) = B is solved by first
solving LY = B for Y using forward substitution, and then solving UX = Y for X using
backward substitution.
To decompose A as A = LU, consider an 𝑛 × 𝑛 matrix A given below:

      [ 𝑎11 𝑎12 ⋯ 𝑎1𝑛 ]   [ 𝑙11   0   ⋯   0  ] [ 𝑢11 𝑢12 ⋯ 𝑢1𝑛 ]
𝐴 =   [ 𝑎21 𝑎22 ⋯ 𝑎2𝑛 ] = [ 𝑙21  𝑙22  ⋯   0  ] [  0  𝑢22 ⋯ 𝑢2𝑛 ]
      [  ⋮    ⋮   ⋱  ⋮  ]   [  ⋮    ⋮   ⋱   ⋮  ] [  ⋮   ⋮   ⋱  ⋮  ]
      [ 𝑎𝑛1 𝑎𝑛2 ⋯ 𝑎𝑛𝑛 ]   [ 𝑙𝑛1  𝑙𝑛2  ⋯  𝑙𝑛𝑛 ] [  0   0  ⋯ 𝑢𝑛𝑛 ]
To make the factorization unique it is convenient to choose either 𝑢𝑖𝑖 = 1 or 𝑙𝑖𝑖 = 1, 𝑖 = 1, 2, … , 𝑛.
When we choose
a. 𝑙𝑖𝑖 = 1, the method is called Doolittle's method;
b. 𝑢𝑖𝑖 = 1, the method is called Crout's method.
Consider a 3 × 3 matrix A. We can decompose A using Crout's method as follows:
[ 𝑎11 𝑎12 𝑎13 ]   [ 𝑙11   0    0  ] [ 1  𝑢12  𝑢13 ]
[ 𝑎21 𝑎22 𝑎23 ] = [ 𝑙21  𝑙22   0  ] [ 0   1   𝑢23 ]
[ 𝑎31 𝑎32 𝑎33 ]   [ 𝑙31  𝑙32  𝑙33 ] [ 0   0    1  ]

                  [ 𝑙11        𝑙11 𝑢12              𝑙11 𝑢13              ]
                = [ 𝑙21    𝑙21 𝑢12 + 𝑙22        𝑙21 𝑢13 + 𝑙22 𝑢23        ]
                  [ 𝑙31    𝑙31 𝑢12 + 𝑙32    𝑙31 𝑢13 + 𝑙32 𝑢23 + 𝑙33      ]

We can find, therefore, the elements of the matrices L and U by equating the above two
matrices.
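A minimal MATLAB sketch of Crout factorization, followed by forward and backward substitution, might look as follows (it assumes no zero pivot 𝑙jj arises); it is shown here applied to the system of example (a) below.

% Crout LU factorization (U has unit diagonal) and solution of A*x = b
A = [2 1 -1; 1 3 2; 3 -2 -4];
b = [5; 5; 3];
n = size(A,1);
L = zeros(n); U = eye(n);

for j = 1:n
    for i = j:n                               % column j of L
        L(i,j) = A(i,j) - L(i,1:j-1)*U(1:j-1,j);
    end
    for k = j+1:n                             % row j of U
        U(j,k) = (A(j,k) - L(j,1:j-1)*U(1:j-1,k))/L(j,j);
    end
end

% Forward substitution: L*Y = B
y = zeros(n,1);
for i = 1:n
    y(i) = (b(i) - L(i,1:i-1)*y(1:i-1))/L(i,i);
end

% Backward substitution: U*X = Y
x = zeros(n,1);
for i = n:-1:1
    x(i) = y(i) - U(i,i+1:n)*x(i+1:n);        % U(i,i) = 1
end
disp(x')                                      % expected: 1  2  -1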

Example
Solve the following S.L.E using Crout's LU decomposition method.
2𝑥 + 𝑦 − 𝑧 = 5
a. { 𝑥 + 3𝑦 + 2𝑧 = 5
3𝑥 − 2𝑦 − 4𝑧 = 3
b. The currents 𝐼1 , 𝐼2 , and 𝐼3 in a closed circuit having three mesh points are given
by:
−𝐼1 + 3𝐼2 = 5
{ 3𝐼1 + 𝐼2 + 2𝐼3 = 3
−11𝐼2 − 7𝐼3 = −15
Solution
2 1 −1 5
𝐴 = [1 3 2 ] 𝑎𝑛𝑑 𝐵 = [5]
3 −2 −4 3
      [ 2   1  −1 ]   [ 𝑙11   0    0  ] [ 1  𝑢12  𝑢13 ]
𝐴 =   [ 1   3   2 ] = [ 𝑙21  𝑙22   0  ] [ 0   1   𝑢23 ]
      [ 3  −2  −4 ]   [ 𝑙31  𝑙32  𝑙33 ] [ 0   0    1  ]

                      [ 𝑙11        𝑙11 𝑢12              𝑙11 𝑢13              ]
                    = [ 𝑙21    𝑙21 𝑢12 + 𝑙22        𝑙21 𝑢13 + 𝑙22 𝑢23        ]
                      [ 𝑙31    𝑙31 𝑢12 + 𝑙32    𝑙31 𝑢13 + 𝑙32 𝑢23 + 𝑙33      ]
Equating corresponding entries:

𝑙11 = 2 ,   𝑙11 𝑢12 = 1 ⟹ 𝑢12 = 1/2 ,   𝑙11 𝑢13 = −1 ⟹ 𝑢13 = −1/2

𝑙21 = 1 ,   𝑙31 = 3 ,   𝑙21 𝑢12 + 𝑙22 = 3 ⟹ 0.5 + 𝑙22 = 3 ⟹ 𝑙22 = 2.5

𝑙21 𝑢13 + 𝑙22 𝑢23 = 2 ⟹ −0.5 + 2.5 𝑢23 = 2 ⟹ 𝑢23 = 1

𝑙31 𝑢12 + 𝑙32 = −2 ⟹ 1.5 + 𝑙32 = −2 ⟹ 𝑙32 = −3.5

𝑙31 𝑢13 + 𝑙32 𝑢23 + 𝑙33 = −4 ⟹ −1.5 − 3.5 + 𝑙33 = −4 ⟹ 𝑙33 = 1

𝑙11 = 2 , 𝑙21 = 1 , 𝑙22 = 2.5 , 𝑙31 = 3 , 𝑙32 = −3.5 and 𝑙33 = 1
𝑢12 = 0.5 , 𝑢13 = −0.5 and 𝑢23 = 1

      [ 2    0    0 ]              [ 1  0.5  −0.5 ]
𝐿 =   [ 1   2.5   0 ]   and   𝑈 =  [ 0   1     1  ]
      [ 3  −3.5   1 ]              [ 0   0     1  ]

AX = (LU)X = L(UX) = B

Let UX = Y. Then

           [ 2    0   0 ] [ 𝑦1 ]   [        2𝑦1        ]
LY     =   [ 1   2.5  0 ] [ 𝑦2 ] = [    𝑦1 + 2.5𝑦2     ]
           [ 3  −3.5  1 ] [ 𝑦3 ]   [ 3𝑦1 − 3.5𝑦2 + 𝑦3  ]

LY = B ⟹ 2𝑦1 = 5 ⟹ 𝑦1 = 2.5 ,   𝑦1 + 2.5𝑦2 = 5 ⟹ 𝑦2 = 1 ,   3𝑦1 − 3.5𝑦2 + 𝑦3 = 3 ⟹ 𝑦3 = −1

           [ 1  0.5  −0.5 ] [ 𝑥 ]   [ 2.5 ]        𝑥 + 0.5𝑦 − 0.5𝑧 = 2.5
UX = Y ⟹  [ 0   1     1  ] [ 𝑦 ] = [  1  ]   ⟹         𝑦 + 𝑧 = 1
           [ 0   0     1  ] [ 𝑧 ]   [ −1  ]                 𝑧 = −1

⟹ 𝑧 = −1 ,   𝑦 + 𝑧 = 1 ⟹ 𝑦 = 2 ,   𝑥 + 0.5𝑦 − 0.5𝑧 = 2.5 ⟹ 𝑥 = 2.5 − 1 − 0.5 = 1
S.S= {(1, 2, −1)}
3.3. Iterative Methods for Solving System of Linear Equations
Because of round-off errors, direct methods become less efficient than iterative methods
when they are applied to large systems. In addition, the amount of storage space required for
iterative solutions on a computer is far less than that required for direct methods when the
coefficient matrix of the system is sparse. Thus, especially for sparse matrices, iterative
methods are more attractive than direct methods.
For the iterative solution of a system of equations, one starts with an arbitrary starting
vector 𝑥(0) and computes a sequence of iterates 𝑥(𝑚) for 𝑚 = 1, 2, 3, …
Convergence of the iterative methods described here is guaranteed when the system is diagonally dominant.

Note: an 𝒏 × 𝒏 matrix A is diagonally dominant if |𝒂𝒊𝒊| > ∑𝒋≠𝒊 |𝒂𝒊𝒋| for each 𝑖 = 1, 2, … , 𝑛.
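This condition is easy to test numerically; the MATLAB check below uses an arbitrary sample matrix.

% Check strict diagonal dominance of a square matrix A
A = [20 1 -2; 3 20 -1; 2 -3 20];              % sample coefficient matrix
isDominant = all(2*abs(diag(A)) > sum(abs(A),2));
disp(isDominant)                              % prints 1 (true) for this A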

Under the category of iterative methods, we shall describe the following two methods:
i. Gauss-Jacobi’s method
ii. Gauss-Seidel method

3.3.1 Gauss-Jacobi’s method
Let us consider again a system of 𝑛 linear equations in 𝑛 unknowns:

𝑎11 𝑥1 + 𝑎12 𝑥2 + ⋯ + 𝑎1𝑛 𝑥𝑛 = 𝑏1
𝑎21 𝑥1 + 𝑎22 𝑥2 + ⋯ + 𝑎2𝑛 𝑥𝑛 = 𝑏2          …………………………………(1)
⋮
𝑎𝑛1 𝑥1 + 𝑎𝑛2 𝑥2 + ⋯ + 𝑎𝑛𝑛 𝑥𝑛 = 𝑏𝑛
From equation (1) we get

𝑥1 = (1/𝑎11)(𝑏1 − (𝑎12 𝑥2 + 𝑎13 𝑥3 + ⋯ + 𝑎1𝑛 𝑥𝑛 ))
𝑥2 = (1/𝑎22)(𝑏2 − (𝑎21 𝑥1 + 𝑎23 𝑥3 + ⋯ + 𝑎2𝑛 𝑥𝑛 ))
⋮
𝑥𝑛 = (1/𝑎𝑛𝑛)(𝑏𝑛 − (𝑎𝑛1 𝑥1 + 𝑎𝑛2 𝑥2 + ⋯ + 𝑎𝑛,𝑛−1 𝑥𝑛−1 ))

Let 𝑥1(0) , 𝑥2(0) , … , 𝑥𝑛(0) be the initial approximations of the unknowns 𝑥1 , 𝑥2 , … , 𝑥𝑛 and let
𝑥1(𝑛) , 𝑥2(𝑛) , … , 𝑥𝑛(𝑛) be the 𝑛th iterates. Then the (𝑛 + 1)th iteration is given by:

𝑥1(𝑛+1) = (1/𝑎11)(𝑏1 − (𝑎12 𝑥2(𝑛) + 𝑎13 𝑥3(𝑛) + ⋯ + 𝑎1𝑛 𝑥𝑛(𝑛) ))
𝑥2(𝑛+1) = (1/𝑎22)(𝑏2 − (𝑎21 𝑥1(𝑛) + 𝑎23 𝑥3(𝑛) + ⋯ + 𝑎2𝑛 𝑥𝑛(𝑛) ))
⋮
𝑥𝑛(𝑛+1) = (1/𝑎𝑛𝑛)(𝑏𝑛 − (𝑎𝑛1 𝑥1(𝑛) + 𝑎𝑛2 𝑥2(𝑛) + ⋯ + 𝑎𝑛,𝑛−1 𝑥𝑛−1(𝑛) ))

The process is continued till convergence is secured.


Note 1: In the absence of any better estimates, the initial approximations are taken as
𝑥1(0) = 0, 𝑥2(0) = 0, … , 𝑥𝑛(0) = 0.
Note 2: We assume that the diagonal terms 𝑎11 , 𝑎22 , … , 𝑎𝑛𝑛 are all nonzero and are the largest
coefficients of 𝑥1 , 𝑥2 , … , 𝑥𝑛 respectively, so that convergence is assured. If some of the diagonal
elements are zero, we may have to rearrange the equations.
Note 3: Stop the iteration when |𝑥𝑖(𝑛+1) − 𝑥𝑖(𝑛)| ≤ 𝑇𝑂𝐿 for every 𝑖 = 1, 2, 3, … , 𝑛.

Example
Solve the following S.L.E using G.J.M up to 3 decimal place accuracy.
      20𝑥 + 𝑦 − 2𝑧 = 17                 5𝑥 + 𝑦 + 𝑧 + 𝑢 = 4
a. {  3𝑥 + 20𝑦 − 𝑧 = −18         b. {   𝑥 + 7𝑦 + 𝑧 + 𝑢 = 12
      2𝑥 − 3𝑦 + 20𝑧 = 25                 𝑥 + 𝑦 + 6𝑧 + 𝑢 = −5
                                         𝑥 + 𝑦 + 𝑧 + 4𝑢 = −6
Solution
a. Let (𝑥(0) , 𝑦(0) , 𝑧(0)) = (0, 0, 0). The iteration formulas are

𝑥(𝑛+1) = (1/20)[17 − 𝑦(𝑛) + 2𝑧(𝑛)]
𝑦(𝑛+1) = (1/20)[−18 − 3𝑥(𝑛) + 𝑧(𝑛)]
𝑧(𝑛+1) = (1/20)[25 − 2𝑥(𝑛) + 3𝑦(𝑛)]
𝑛      𝑥(𝑛+1)           𝑦(𝑛+1)           𝑧(𝑛+1)
0      17/20 = 0.85     −18/20 = −0.9    25/20 = 1.25
1      1.02             −0.965           1.03
2      1.00125          −1.0015          1.00325
3      1.0004           −1.000025        0.99965
4      0.99996625       −1.0000775       0.99995625
Tol = (1/2) × 10⁻³ = 0.0005

|𝑥(5) − 𝑥(4)| = 0.00043375 ≤ 0.0005
|𝑦(5) − 𝑦(4)| = 0.0000525 ≤ 0.0005
|𝑧(5) − 𝑧(4)| = 0.00030625 ≤ 0.0005

Therefore the solution is (0.99996625, −1.0000775, 0.99995625) ≈ (1, −1, 1).
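The same computation can be scripted in MATLAB; the sketch below applies the Jacobi iteration to example (a) with the stopping rule of Note 3.

% Gauss-Jacobi iteration for example (a)
A = [20 1 -2; 3 20 -1; 2 -3 20];
b = [17; -18; 25];
x   = zeros(3,1);                 % initial approximation
tol = 0.5e-3;                     % 3 decimal place accuracy
D   = diag(diag(A));              % diagonal part of A

for k = 1:50
    xnew = D \ (b - (A - D)*x);   % every component uses the old values
    if max(abs(xnew - x)) <= tol
        break
    end
    x = xnew;
end
fprintf('%9.6f %9.6f %9.6f  (after %d iterations)\n', xnew, k)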

3.3.2 Gauss-Seidel method
Gauss-Seidel is a modification of the Gauss-Jacobi method. Because each new value can be
stored immediately in the location that held the old value, the storage requirement for 𝑥 with
the Gauss-Seidel method is half of what it would be with the Jacobi method, and the rate of
convergence is usually more rapid.
Let us consider again the S.L.E

𝑎11 𝑥1 + 𝑎12 𝑥2 + ⋯ + 𝑎1𝑛 𝑥𝑛 = 𝑏1
𝑎21 𝑥1 + 𝑎22 𝑥2 + ⋯ + 𝑎2𝑛 𝑥𝑛 = 𝑏2          …………………………………(1)
⋮
𝑎𝑛1 𝑥1 + 𝑎𝑛2 𝑥2 + ⋯ + 𝑎𝑛𝑛 𝑥𝑛 = 𝑏𝑛

From equation (1) we get

𝑥1 = (1/𝑎11)(𝑏1 − (𝑎12 𝑥2 + 𝑎13 𝑥3 + ⋯ + 𝑎1𝑛 𝑥𝑛 ))
𝑥2 = (1/𝑎22)(𝑏2 − (𝑎21 𝑥1 + 𝑎23 𝑥3 + ⋯ + 𝑎2𝑛 𝑥𝑛 ))
⋮
𝑥𝑛 = (1/𝑎𝑛𝑛)(𝑏𝑛 − (𝑎𝑛1 𝑥1 + 𝑎𝑛2 𝑥2 + ⋯ + 𝑎𝑛,𝑛−1 𝑥𝑛−1 ))

Let 𝑥1(0) , 𝑥2(0) , … , 𝑥𝑛(0) be the initial approximations of the unknowns 𝑥1 , 𝑥2 , … , 𝑥𝑛 and let
𝑥1(𝑛) , 𝑥2(𝑛) , … , 𝑥𝑛(𝑛) be the 𝑛th iterates. Then the (𝑛 + 1)th iteration, in which each equation uses
the most recently updated values, is given by:

𝑥1(𝑛+1) = (1/𝑎11)(𝑏1 − (𝑎12 𝑥2(𝑛) + 𝑎13 𝑥3(𝑛) + ⋯ + 𝑎1𝑛 𝑥𝑛(𝑛) ))
𝑥2(𝑛+1) = (1/𝑎22)(𝑏2 − (𝑎21 𝑥1(𝑛+1) + 𝑎23 𝑥3(𝑛) + ⋯ + 𝑎2𝑛 𝑥𝑛(𝑛) ))
⋮
𝑥𝑛(𝑛+1) = (1/𝑎𝑛𝑛)(𝑏𝑛 − (𝑎𝑛1 𝑥1(𝑛+1) + 𝑎𝑛2 𝑥2(𝑛+1) + ⋯ + 𝑎𝑛,𝑛−1 𝑥𝑛−1(𝑛+1) ))

Example
Solve the following S.L.E using G.S.M up to 3 decimal place accuracy.
      20𝑥 + 𝑦 − 3𝑧 = 17              5𝑥 + 𝑦 + 𝑧 + 𝑢 = 4               10𝑥 − 2𝑦 − 𝑧 − 𝑤 = 3
a. {  3𝑥 + 𝑦 − 𝑧 = −18        b. {   𝑥 + 7𝑦 + 𝑧 + 𝑢 = 12       c. {  −2𝑥 + 10𝑦 − 𝑧 − 𝑤 = 15
      2𝑥 − 3𝑦 + 20𝑧 = 25              𝑥 + 𝑦 + 6𝑧 + 𝑢 = −5              −𝑥 − 𝑦 + 10𝑧 − 2𝑤 = 27
                                      𝑥 + 𝑦 + 𝑧 + 4𝑢 = −6              −𝑥 − 𝑦 − 2𝑧 + 10𝑤 = −9
Solution (for system c, starting from (𝑥(0) , 𝑦(0) , 𝑧(0) , 𝑤(0)) = (0, 0, 0, 0)):

𝑥(𝑛+1) = (1/10)[3 + 2𝑦(𝑛) + 𝑧(𝑛) + 𝑤(𝑛)]
𝑦(𝑛+1) = (1/10)[15 + 2𝑥(𝑛+1) + 𝑧(𝑛) + 𝑤(𝑛)]
𝑧(𝑛+1) = (1/10)[27 + 𝑥(𝑛+1) + 𝑦(𝑛+1) + 2𝑤(𝑛)]
𝑤(𝑛+1) = (1/10)[−9 + 𝑥(𝑛+1) + 𝑦(𝑛+1) + 2𝑧(𝑛+1)]

𝑛     𝑥(𝑛+1)        𝑦(𝑛+1)         𝑧(𝑛+1)        𝑤(𝑛+1)
0     3/10 = 0.3    78/50 = 1.56   2.886         −0.1368
1     0.88692       1.952304       2.956562      −0.0247652
2     0.983640      1.989907       2.992401      −0.004165
3     0.996805      1.998184       2.996521      −0.001196

c. → (1, 2, 3, 0)

MATLAB program for the Gauss-Seidel method

Example: solve the following system using a MATLAB program.
𝟒𝒙𝟏 + 𝒙𝟐 + 𝟐𝒙𝟑 = −𝟑
{ 𝟐𝒙𝟏 + 𝟓𝒙𝟐 − 𝒙𝟑 = 𝟓
−𝟑𝒙𝟏 − 𝒙𝟐 + 𝒙𝟑 = 𝟎
k=1; x1=0; x2=0; x3=0;                       % initial approximation
disp('k  x1       x2       x3')
fprintf('%2.0f %-8.5f %-8.5f %-8.5f\n', k, x1, x2, x3)
for k=1:10
    x1=(-3-(x2+2*x3))/4;                     % from the 1st equation
    x2=(5-(2*x1-x3))/5;                      % from the 2nd equation, using the new x1
    x3=(3*x1+x2);                            % from the 3rd equation, using the new x1, x2
    fprintf('%2.0f %-8.5f %-8.5f %-8.5f\n', k, x1, x2, x3)
end

𝒌 𝒙𝟏 𝒙𝟐 𝒙𝟑
𝟏 𝟎. 𝟎𝟎𝟎𝟎𝟎 𝟎. 𝟎𝟎𝟎𝟎𝟎 𝟎. 𝟎𝟎𝟎𝟎𝟎
𝟏 − 𝟎. 𝟕𝟓𝟎𝟎𝟎 𝟏. 𝟑𝟎𝟎𝟎𝟎 − 𝟎. 𝟏𝟗𝟎𝟎𝟎
𝟐 − 𝟎. 𝟗𝟖𝟎𝟎𝟎 𝟏. 𝟑𝟓𝟒𝟎𝟎 − 𝟎. 𝟑𝟏𝟕𝟐𝟎
𝟑 − 𝟎. 𝟗𝟐𝟗𝟗𝟎 𝟏. 𝟑𝟎𝟖𝟓𝟐 − 𝟎. 𝟐𝟗𝟔𝟐𝟒
𝟒 − 𝟎. 𝟗𝟐𝟗𝟎𝟏 𝟏. 𝟑𝟏𝟐𝟑𝟔 − 𝟎. 𝟐𝟗𝟒𝟗𝟒
𝟓 − 𝟎. 𝟗𝟑𝟎𝟔𝟐 𝟏. 𝟑𝟏𝟑𝟐𝟔 − 𝟎. 𝟐𝟗𝟓𝟕𝟐
𝟔 − 𝟎. 𝟗𝟑𝟎𝟒𝟔 𝟏. 𝟑𝟏𝟑𝟎𝟒 − 𝟎. 𝟐𝟗𝟓𝟔𝟕
𝟕 − 𝟎. 𝟗𝟑𝟎𝟒𝟑 𝟏. 𝟑𝟏𝟑𝟎𝟒 − 𝟎. 𝟐𝟗𝟓𝟔𝟓
𝟖 − 𝟎. 𝟗𝟑𝟎𝟒𝟒 𝟏. 𝟑𝟏𝟑𝟎𝟒 − 𝟎. 𝟐𝟗𝟓𝟔𝟓
𝟗 − 𝟎. 𝟗𝟑𝟎𝟒𝟑 𝟏. 𝟑𝟏𝟑𝟎𝟒 − 𝟎. 𝟐𝟗𝟓𝟔𝟓
𝟏𝟎 − 𝟎. 𝟗𝟑𝟎𝟒𝟑 𝟏. 𝟑𝟏𝟑𝟎𝟒 − 𝟎. 𝟐𝟗𝟓𝟔𝟓

Systems of Non-Linear Equations

Newton's Method for Solving Systems of Non-Linear Equations
Now we consider the solution of simultaneous non-linear equations by Newton's method.
Consider the system

𝑓(𝑥, 𝑦) = 0
𝑔(𝑥, 𝑦) = 0                                                              (1)

Let (𝑥0 , 𝑦0 ) be an initial approximation to the root of the system and (𝑥0 + ℎ, 𝑦0 + 𝑘) be the
root of the system given by (1). Then we must have

𝑓(𝑥0 + ℎ, 𝑦0 + 𝑘) = 0
𝑔(𝑥0 + ℎ, 𝑦0 + 𝑘) = 0                                                    (2)

Let us assume that 𝑓 and 𝑔 are differentiable. Expanding (2) by Taylor series, we obtain

𝑓(𝑥0 + ℎ, 𝑦0 + 𝑘) = 𝑓0 + ℎ 𝜕𝑓/𝜕𝑥0 + 𝑘 𝜕𝑓/𝜕𝑦0 + ⋯ = 0
𝑔(𝑥0 + ℎ, 𝑦0 + 𝑘) = 𝑔0 + ℎ 𝜕𝑔/𝜕𝑥0 + 𝑘 𝜕𝑔/𝜕𝑦0 + ⋯ = 0                      (3)

Neglecting the second and higher order terms and retaining only the linear terms of (3), we obtain

ℎ 𝜕𝑓/𝜕𝑥0 + 𝑘 𝜕𝑓/𝜕𝑦0 = −𝑓0
ℎ 𝜕𝑔/𝜕𝑥0 + 𝑘 𝜕𝑔/𝜕𝑦0 = −𝑔0                                                (4)

The matrix form of (4) is

[ 𝜕𝑓/𝜕𝑥0   𝜕𝑓/𝜕𝑦0 ] ( ℎ )     ( 𝑓0 )          [ 𝜕𝑓/𝜕𝑥0   𝜕𝑓/𝜕𝑦0 ] ( 𝑥1 − 𝑥0 )     ( 𝑓0 )
[ 𝜕𝑔/𝜕𝑥0   𝜕𝑔/𝜕𝑦0 ] ( 𝑘 ) = − ( 𝑔0 )    ⇒    [ 𝜕𝑔/𝜕𝑥0   𝜕𝑔/𝜕𝑦0 ] ( 𝑦1 − 𝑦0 ) = − ( 𝑔0 )

where
𝑓0 = 𝑓(𝑥0 , 𝑦0 ),  𝑔0 = 𝑔(𝑥0 , 𝑦0 ),  𝜕𝑓/𝜕𝑥0 = (𝜕𝑓/𝜕𝑥)(𝑥0 ,𝑦0 ),  𝜕𝑓/𝜕𝑦0 = (𝜕𝑓/𝜕𝑦)(𝑥0 ,𝑦0 ),
𝜕𝑔/𝜕𝑥0 = (𝜕𝑔/𝜕𝑥)(𝑥0 ,𝑦0 ),  𝜕𝑔/𝜕𝑦0 = (𝜕𝑔/𝜕𝑦)(𝑥0 ,𝑦0 ).

Solving (4) for ℎ and 𝑘, the next approximation of the root is given by
𝑥1 = 𝑥0 + ℎ
{
𝑦1 = 𝑦0 + 𝑘
The above process is repeated until the desired degree of accuracy is reached.

Suppose we have the following system of non-linear equations

𝑓(𝑥, 𝑦, 𝑧) = 0
{𝑔(𝑥, 𝑦, 𝑧) = 0 (5)
𝑟(𝑥, 𝑦, 𝑧) = 0

Let (𝑥0 , 𝑦0 , 𝑧0 ) be an initial approximation to the root of the system and (𝑥0 + ℎ, 𝑦0 + 𝑘, 𝑧0 + 𝑙)
be the root of the system given by (5). Then we must have

𝑓(𝑥0 + ℎ, 𝑦0 + 𝑘, 𝑧0 + 𝑙) = 0
{ 𝑔(𝑥0 + ℎ, 𝑦0 + 𝑘, 𝑧0 + 𝑙) = 0 (6)
𝑟(𝑥0 + ℎ, 𝑦0 + 𝑘, 𝑧0 + 𝑙) = 0

Let us assume that 𝑓 , 𝑔 and 𝑟 are differentiable. Expanding (6) by Taylor series, we obtain

𝑓(𝑥0 + ℎ, 𝑦0 + 𝑘, 𝑧0 + 𝑙) = 𝑓0 + ℎ 𝜕𝑓/𝜕𝑥0 + 𝑘 𝜕𝑓/𝜕𝑦0 + 𝑙 𝜕𝑓/𝜕𝑧0 + ⋯ = 0
𝑔(𝑥0 + ℎ, 𝑦0 + 𝑘, 𝑧0 + 𝑙) = 𝑔0 + ℎ 𝜕𝑔/𝜕𝑥0 + 𝑘 𝜕𝑔/𝜕𝑦0 + 𝑙 𝜕𝑔/𝜕𝑧0 + ⋯ = 0          (7)
𝑟(𝑥0 + ℎ, 𝑦0 + 𝑘, 𝑧0 + 𝑙) = 𝑟0 + ℎ 𝜕𝑟/𝜕𝑥0 + 𝑘 𝜕𝑟/𝜕𝑦0 + 𝑙 𝜕𝑟/𝜕𝑧0 + ⋯ = 0

Neglecting the second and higher order terms and retaining only the linear terms of (7), we obtain

ℎ 𝜕𝑓/𝜕𝑥0 + 𝑘 𝜕𝑓/𝜕𝑦0 + 𝑙 𝜕𝑓/𝜕𝑧0 = −𝑓0
ℎ 𝜕𝑔/𝜕𝑥0 + 𝑘 𝜕𝑔/𝜕𝑦0 + 𝑙 𝜕𝑔/𝜕𝑧0 = −𝑔0                                              (8)
ℎ 𝜕𝑟/𝜕𝑥0 + 𝑘 𝜕𝑟/𝜕𝑦0 + 𝑙 𝜕𝑟/𝜕𝑧0 = −𝑟0

In matrix form,

𝐽0 (ℎ, 𝑘, 𝑙)𝑡 = −(𝑓0 , 𝑔0 , 𝑟0 )𝑡    ⇒    𝐽0 (𝑥1 − 𝑥0 , 𝑦1 − 𝑦0 , 𝑧1 − 𝑧0 )𝑡 = −(𝑓0 , 𝑔0 , 𝑟0 )𝑡

where

        [ 𝜕𝑓/𝜕𝑥0   𝜕𝑓/𝜕𝑦0   𝜕𝑓/𝜕𝑧0 ]
𝐽0  =   [ 𝜕𝑔/𝜕𝑥0   𝜕𝑔/𝜕𝑦0   𝜕𝑔/𝜕𝑧0 ]
        [ 𝜕𝑟/𝜕𝑥0   𝜕𝑟/𝜕𝑦0   𝜕𝑟/𝜕𝑧0 ]

⇒ (𝑥1 − 𝑥0 , 𝑦1 − 𝑦0 , 𝑧1 − 𝑧0 )𝑡 = −𝑖𝑛𝑣(𝐽0 ) ∗ (𝑓0 , 𝑔0 , 𝑟0 )𝑡
⇒ (𝑥1 , 𝑦1 , 𝑧1 )𝑡 = (𝑥0 , 𝑦0 , 𝑧0 )𝑡 − 𝑖𝑛𝑣(𝐽0 ) ∗ (𝑓0 , 𝑔0 , 𝑟0 )𝑡

The (𝑘 + 1)𝑡ℎ iteration formula is given by

(𝑥𝑘+1 , 𝑦𝑘+1 , 𝑧𝑘+1 )𝑡 = (𝑥𝑘 , 𝑦𝑘 , 𝑧𝑘 )𝑡 − 𝑖𝑛𝑣(𝐽𝑘 ) ∗ (𝑓𝑘 , 𝑔𝑘 , 𝑟𝑘 )𝑡 ,    for 𝑘 ≥ 0
Solving (8) for ℎ , 𝑘 and 𝑙, the next approximation of the root is given by
𝑥1 = 𝑥0 + ℎ
𝑦1 = 𝑦0 + 𝑘
𝑧1 = 𝑧0 + 𝑙
The above process is repeated until the desired degree of accuracy is reached.

Example: solve

𝑓(𝑥, 𝑦) = 𝑥 2 + 𝑦 − 20𝑥 + 40 = 0
{
𝑔(𝑥, 𝑦) = 𝑥 + 𝑦 2 − 20𝑦 + 20 = 0
Solution

Let 𝑥0 = 𝑦0 = 0 be the initial approximation to the root


𝑓 = 𝑥² + 𝑦 − 20𝑥 + 40  ⇒  𝜕𝑓/𝜕𝑥 = 2𝑥 − 20 ,  𝜕𝑓/𝜕𝑦 = 1
𝑔 = 𝑥 + 𝑦² − 20𝑦 + 20  ⇒  𝜕𝑔/𝜕𝑥 = 1 ,  𝜕𝑔/𝜕𝑦 = 2𝑦 − 20

𝑓0 = 𝑓(𝑥0 , 𝑦0 ) = 𝑓(0, 0) = 40 ,   𝑔0 = 𝑔(𝑥0 , 𝑦0 ) = 𝑔(0, 0) = 20

𝜕𝑓/𝜕𝑥0 = −20 ,  𝜕𝑓/𝜕𝑦0 = 1 ,  𝜕𝑔/𝜕𝑥0 = 1 ,  𝜕𝑔/𝜕𝑦0 = −20

The linear equations are

ℎ 𝜕𝑓/𝜕𝑥0 + 𝑘 𝜕𝑓/𝜕𝑦0 = −𝑓0            −20ℎ + 𝑘 = −40
                                  ⇒                      ⇒  ℎ = 2.055 and 𝑘 = 1.103
ℎ 𝜕𝑔/𝜕𝑥0 + 𝑘 𝜕𝑔/𝜕𝑦0 = −𝑔0              ℎ − 20𝑘 = −20

The next approximation is given by

𝑥1 = 𝑥0 + ℎ = 2.055
𝑦1 = 𝑦0 + 𝑘 = 1.103

For the next iteration, new values of ℎ and 𝑘 are computed from the same linear system evaluated
at (𝑥1 , 𝑦1 ), which gives ℎ ≈ 0.271 and 𝑘 ≈ 0.083, so that

𝑥2 = 𝑥1 + ℎ ≈ 2.326
𝑦2 = 𝑦1 + 𝑘 ≈ 1.186

Continuing in the same way until we reach the desired degree of accuracy, the iterates converge to

𝑥 ≈ 2.331
𝑦 ≈ 1.187
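The two-variable iteration can also be scripted in MATLAB; the sketch below solves the worked example above, recomputing the Jacobian at every step and obtaining the increments with the backslash operator.

% Newton's method for the 2-variable system of the example above
F = @(v) [v(1)^2 + v(2) - 20*v(1) + 40;
          v(1) + v(2)^2 - 20*v(2) + 20];
J = @(v) [2*v(1) - 20, 1;
          1, 2*v(2) - 20];

v = [0; 0];                        % initial approximation (x0, y0)
for k = 1:20
    dv = -J(v) \ F(v);             % solve J*(h; k) = -(f; g)
    v  = v + dv;
    if max(abs(dv)) < 1e-6
        break
    end
end
fprintf('x = %.6f, y = %.6f\n', v(1), v(2))   % about x = 2.3310, y = 1.1870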

Example
1. Solve using Newton’s method the following system of nonlinear algebraic equations
a. 𝑥1³ − 2𝑥2 − 2 = 0
   𝑥1³ − 5𝑥3² + 7 = 0
   𝑥2 𝑥3² − 1 = 0
   with 𝑥1(0) = 2, 𝑥2(0) = 1, 𝑥3(0) = 2, correct to three decimal places.

b. 𝑥² + 4𝑦² = 9
   18𝑦 − 14𝑥² = −45
   with 𝑥0 = 1 and 𝑦0 = −1, correct to six decimal places.

Using MATLAB
% Solution of a system of three nonlinear equations by Newton's method
F1=@(x,y)x^3-2*y-2;
F2=@(x,z)x^3-5*z^2+7;
F3=@(y,z)y*z^2-1;
% Partial derivatives (entries of the Jacobian matrix)
F1x=@(x)3*x^2;F1y=-2;F1z=0;
F2x=@(x)3*x^2;F2y=0;F2z=@(z)-10*z;
F3x=0;F3y=@(z)z^2;F3z=@(y,z)2*y*z;
% Determinant of the Jacobian matrix
Jacob=@(x,y,z)30*x^2*z^3+12*x^2*y*z;
xi=2;yi=1;zi=2;Err=0.0001;
for i=1:8
    Jac=Jacob(xi,yi,zi);
    % Increments obtained from Cramer's rule applied to J*[Delx;Dely;Delz] = -F
    Delx=(F3(yi,zi)*F1z*F2y-F3(yi,zi)*F1y*F2z(zi)+F2(xi,zi)*F1y*F3z(yi,zi)- ...
        F2(xi,zi)*F1z*F3y(zi)- ...
        F1(xi,yi)*F2y*F3z(yi,zi)+F1(xi,yi)*F2z(zi)*F3y(zi))/Jac;
    Dely=(F3(yi,zi)*F1x(xi)*F2z(zi)-F3(yi,zi)*F1z*F2x(xi)- ...
        F2(xi,zi)*F1x(xi)*F3z(yi,zi)+F2(xi,zi)*F1z*F3x+F1(xi,yi)*F2x(xi)*F3z(yi,zi)- ...
        F1(xi,yi)*F2z(zi)*F3x)/Jac;
    Delz=(F3(yi,zi)*F1y*F2x(xi)- ...
        F3(yi,zi)*F1x(xi)*F2y+F2(xi,zi)*F1x(xi)*F3y(zi)-F2(xi,zi)*F1y*F3x- ...
        F1(xi,yi)*F2x(xi)*F3y(zi)+F1(xi,yi)*F2y*F3x)/Jac;
    xip1=xi+Delx;
    yip1=yi+Dely;
    zip1=zi+Delz;
    Errx=abs((xip1-xi)/xi);
    Erry=abs((yip1-yi)/yi);
    Errz=abs((zip1-zi)/zi);
    fprintf('i=%2.0f x=%-7.4f y=%-7.4f z=%-7.4f Error in x=%-7.4f Error in y=%-7.4f Error in z=%-7.4f\n',i,xip1,yip1,zip1,Errx,Erry,Errz)
    if Errx<Err && Erry<Err && Errz<Err
        break
    else
        xi=xip1;yi=yip1;zi=zip1;
    end
end

Example: solve the following using MATLAB
𝒙𝟑 − 𝟐𝒚 = 𝟐
{𝒙𝟑 − 𝟓𝒛𝟐 = −𝟕
𝒚𝒛𝟐 = 𝟏
Solution
𝒊 = 𝟏, 𝒙 = 𝟏. 𝟑𝟎𝟓𝟔 𝒚 = 𝟎. 𝟗𝟏𝟔𝟕 𝒛 = 𝟏. 𝟑𝟑𝟑𝟑 𝑬𝒓𝒓𝒐𝒓 𝒊𝒏 𝒙 = 𝟎. 𝟑𝟒𝟕𝟐 𝑬𝒓𝒓𝒐𝒓 𝒊𝒏 𝒚 = 𝟎. 𝟎𝟖𝟑𝟑 𝑬𝒓𝒓𝒐𝒓 𝒊𝒏 𝒛 = 𝟎. 𝟑𝟑𝟑𝟑

𝒊= 𝟐, 𝒙 = 𝟏. 𝟒𝟕𝟐𝟔 𝒚 = 𝟎. 𝟒𝟑𝟗𝟕 𝒛 = 𝟏. 𝟒𝟐𝟐𝟔 𝑬𝒓𝒓𝒐𝒓 𝒊𝒏 𝒙 = 𝟎. 𝟏𝟐𝟕𝟗 𝑬𝒓𝒓𝒐𝒓 𝒊𝒏 𝒚 = 𝟎. 𝟓𝟐𝟎𝟑 𝑬𝒓𝒓𝒐𝒓 𝒊𝒏 𝒛 = 𝟎. 𝟎𝟔𝟕𝟎

𝒊= 𝟑, 𝒙 = 𝟏. 𝟒𝟑𝟎𝟏 𝒚 = 𝟎. 𝟓𝟎𝟐𝟗 𝒛 = 𝟏. 𝟒𝟎𝟖𝟒 𝑬𝒓𝒓𝒐𝒓 𝒊𝒏 𝒙 = 𝟎. 𝟎𝟐𝟖𝟖 𝑬𝒓𝒓𝒐𝒓 𝒊𝒏 𝒚 = 𝟎. 𝟏𝟒𝟑𝟔 𝑬𝒓𝒓𝒐𝒓 𝒊𝒏 𝒛 = 𝟎. 𝟎𝟏𝟎𝟎

𝒊= 𝟒, 𝒙 = 𝟏. 𝟒𝟒𝟔𝟗 𝒚 = 𝟎. 𝟒𝟗𝟖𝟔 𝒛 = 𝟏. 𝟒𝟏𝟔𝟐 𝑬𝒓𝒓𝒐𝒓 𝒊𝒏 𝒙 = 𝟎. 𝟎𝟏𝟏𝟕 𝑬𝒓𝒓𝒐𝒓 𝒊𝒏 𝒚 = 𝟎. 𝟎𝟎𝟖𝟔 𝑬𝒓𝒓𝒐𝒓 𝒊𝒏 𝒛 = 𝟎. 𝟎𝟎𝟓𝟓

𝒊= 𝟓, 𝒙 = 𝟏. 𝟒𝟒𝟎𝟒 𝒚 = 𝟎. 𝟓𝟎𝟎𝟔 𝒛 = 𝟏. 𝟒𝟏𝟑𝟒 𝑬𝒓𝒓𝒐𝒓 𝒊𝒏 𝒙 = 𝟎. 𝟎𝟎𝟒𝟓 𝑬𝒓𝒓𝒐𝒓 𝒊𝒏 𝒚 = 𝟎. 𝟎𝟎𝟒𝟎 𝑬𝒓𝒓𝒐𝒓 𝒊𝒏 𝒛 = 𝟎. 𝟎𝟎𝟐𝟎

𝒊= 𝟔, 𝒙 = 𝟏. 𝟒𝟒𝟐𝟗 𝒚 = 𝟎. 𝟒𝟗𝟗𝟖 𝒛 = 𝟏. 𝟒𝟏𝟒𝟓 𝑬𝒓𝒓𝒐𝒓 𝒊𝒏 𝒙 = 𝟎. 𝟎𝟎𝟏𝟕 𝑬𝒓𝒓𝒐𝒓 𝒊𝒏 𝒚 = 𝟎. 𝟎𝟎𝟏𝟔 𝑬𝒓𝒓𝒐𝒓 𝒊𝒏 𝒛 = 𝟎. 𝟎𝟎𝟎𝟖

𝒊 = 𝟕, 𝒙 = 𝟏. 𝟒𝟒𝟐𝟎 𝒚 = 𝟎. 𝟓𝟎𝟎𝟏 𝒛 = 𝟏. 𝟒𝟏𝟒𝟏 𝑬𝒓𝒓𝒐𝒓 𝒊𝒏 𝒙 = 𝟎. 𝟎𝟎𝟎𝟕 𝑬𝒓𝒓𝒐𝒓 𝒊𝒏 𝒚 = 𝟎. 𝟎𝟎𝟎𝟔 𝑬𝒓𝒓𝒐𝒓 𝒊𝒏 𝒛 = 𝟎. 𝟎𝟎𝟎𝟑

𝒊 = 𝟖, 𝒙 = 𝟏. 𝟒𝟒𝟐𝟒 𝒚 = 𝟎. 𝟓𝟎𝟎𝟎 𝒛 = 𝟏. 𝟒𝟏𝟒𝟑 𝑬𝒓𝒓𝒐𝒓 𝒊𝒏 𝒙 = 𝟎. 𝟎𝟎𝟎𝟑 𝑬𝒓𝒓𝒐𝒓 𝒊𝒏 𝒚 = 𝟎. 𝟎𝟎𝟎𝟐 𝑬𝒓𝒓𝒐𝒓 𝒊𝒏 𝒛 = 𝟎. 𝟎𝟎𝟎𝟏

