
Numerical Methods

Solving Linear Algebraic Equations:


Direct Methods
by

Norhayati Rosli & Mohd Zuki Bin Salleh


Faculty of Industrial Sciences & Technology
norhayati@ump.edu.my, zuki@ump.edu.my

Description

AIMS

This chapter aims to solve small systems of linear algebraic equations by using direct methods, namely the Naïve Gauss elimination method, Gauss elimination with partial pivoting and LU factorization methods.

EXPECTED OUTCOMES

1. Students should be able to transform a system of linear equations into matrix form.
2. Students should be able to solve linear algebraic equations by using the Naïve Gauss elimination method, Gauss elimination with partial pivoting and LU factorization.

REFERENCES

1. Norhayati Rosli, Nadirah Mohd Nasir, Mohd Zuki Salleh, Rozieana Khairuddin, Nurfatihah Mohamad Hanafi & Noraziah Adzhar. Numerical Methods, Second Edition, UMP, 2017 (internal use).
2. Chapra, S. C. & Canale, R. P. Numerical Methods for Engineers, Sixth Edition, McGraw-Hill, 2010.
Content

1 Introduction
2 Naïve Gauss Elimination Method
3 Gauss Elimination with Partial Pivoting
4 LU Factorization
4.1 Gauss Elimination Method as LU Factorization
4.2 Crout’s Method
4.3 Cholesky Method

INTRODUCTION
Solving Linear Algebraic Equations: Direct Methods

Three types of methods:

1. Naïve Gauss Elimination Method
2. Gauss Elimination with Partial Pivoting
3. LU Factorization

NAÏVE GAUSS ELIMINATION
Gauss elimination is a sequential process of eliminating unknowns from a
system of equations by using forward elimination and then solving for the
unknowns by using backward substitution.

Two major steps:

1. Forward elimination
2. Backward substitution

NAÏVE GAUSS ELIMINATION
(Cont.)
Naïve Gauss Elimination Procedures

STEP 1  Write the system of linear equations in augmented matrix form.

STEP 2  Identify the pivot element. The pivot element is used to compute the multipliers that define the row operation formulas.

STEP 3  Perform forward elimination to transform the matrix into an upper triangular matrix.

STEP 4  Solve the system by using backward substitution.


NAÏVE GAUSS ELIMINATION
(Cont.)
Forward Elimination
In the forward elimination step, the augmented matrix [𝐀|𝐛] is reduced to an
upper triangular form.

Forward Elimination Procedure

Step 1  Rewrite the system of linear equations as an augmented matrix, [𝐀|𝐛]. Consider a system of n equations:

a_{11} x_1 + a_{12} x_2 + ... + a_{1n} x_n = b_1
a_{21} x_1 + a_{22} x_2 + ... + a_{2n} x_n = b_2
  ⋮
a_{n1} x_1 + a_{n2} x_2 + ... + a_{nn} x_n = b_n

The corresponding augmented matrix, with a_{11} as the pivot element, is

[ a_{11}  a_{12}  ...  a_{1n} | b_1 ]
[ a_{21}  a_{22}  ...  a_{2n} | b_2 ]
[   ⋮       ⋮            ⋮    |  ⋮  ]
[ a_{n1}  a_{n2}  ...  a_{nn} | b_n ]
NAÏVE GAUSS ELIMINATION
(Cont.)
Forward Elimination Procedure (Cont.)

Step 2  Transform the augmented matrix into an upper triangular matrix. This transformation is executed by forward elimination of unknowns. The initial step is to eliminate the first unknown, x_1, from the second through n-th equations by using the multipliers and row operation formulas stated in Table 1. The new augmented matrix becomes

[ a_{11}  a_{12}  ...  a_{1n}  | b_1  ]
[   0     a'_{22} ...  a'_{2n} | b'_2 ]
[   ⋮       ⋮            ⋮     |  ⋮   ]
[   0     a'_{n2} ...  a'_{nn} | b'_n ]

where the prime indicates that the elements have been changed from their original values.
NAÏVE GAUSS ELIMINATION
(Cont.)
Forward Elimination Procedure (Cont.)

Step 3  The elimination procedure is repeated for the remaining equations to eliminate the second unknown, x_2, from the third through n-th equations. The new augmented matrix becomes

[ a_{11}  a_{12}   a_{13}   ...  a_{1n}   | b_1   ]
[   0     a'_{22}  a'_{23}  ...  a'_{2n}  | b'_2  ]
[   0       0      a''_{33} ...  a''_{3n} | b''_3 ]
[   ⋮       ⋮         ⋮             ⋮     |  ⋮    ]
[   0       0      a''_{n3} ...  a''_{nn} | b''_n ]

where the double prime indicates that the elements have been modified twice.
NAÏVE GAUSS ELIMINATION
(Cont.)
Forward Elimination Procedure (Cont.)

Step 4  The final manipulation in the sequence is to use the (n−1)-th equation to eliminate the x_{n−1} term from the n-th equation. The system is then transformed into an upper triangular matrix, 𝐔:

[ a_{11}  a_{12}   a_{13}   ...  a_{1n}         | b_1         ]
[   0     a'_{22}  a'_{23}  ...  a'_{2n}        | b'_2        ]
[   0       0      a''_{33} ...  a''_{3n}       | b''_3       ]
[   ⋮       ⋮         ⋮             ⋮           |  ⋮          ]
[   0       0         0     ...  a_{nn}^{(n−1)} | b_n^{(n−1)} ]
NAÏVE GAUSS ELIMINATION
(Cont.)
Table 1: Row operations for each step in the forward elimination procedure
of an n × n system.

Step   Row                Multiplier                                        Row Operation
1      i = 2, 3, ..., n   m_{i1} = −a_{i1}/a_{11}                           R_i + m_{i1} R_1
2      i = 3, 4, ..., n   m_{i2} = −a_{i2}/a'_{22}                          R_i + m_{i2} R_2
3      i = 4, 5, ..., n   m_{i3} = −a_{i3}/a''_{33}                         R_i + m_{i3} R_3
⋮      ⋮                  ⋮                                                 ⋮
n−1    i = n              m_{i(n−1)} = −a_{i(n−1)}/a_{(n−1)(n−1)}^{(n−2)}   R_i + m_{i(n−1)} R_{n−1}

NAÏVE GAUSS ELIMINATION
(Cont.)
Backward Substitution
Starting with the last row, solve for the last unknown, x_n. The result is then back-substituted into the (n−1)-th equation to solve for x_{n−1}, and so on, by using the following formulas.

The n-th equation is solved by

x_n = b_n^{(n−1)} / a_{nn}^{(n−1)}

The result is substituted into the (n−1)-th equation, and the remaining unknowns are obtained from

x_i = ( b_i^{(i−1)} − Σ_{j=i+1}^{n} a_{ij}^{(i−1)} x_j ) / a_{ii}^{(i−1)},   i = n−1, n−2, ..., 1
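To make the two phases concrete, the following is a minimal Python/NumPy sketch of the procedure described above (illustrative only; the function name naive_gauss and its interface are assumptions, not part of the original notes).

import numpy as np

def naive_gauss(A, b):
    """Solve Ax = b by Naive Gauss elimination (no pivoting).

    Minimal sketch: assumes A is square with nonzero pivots.
    """
    A = np.array(A, dtype=float)   # work on copies so the inputs are not modified
    b = np.array(b, dtype=float)
    n = len(b)

    # Forward elimination: reduce [A|b] to upper triangular form (Table 1).
    for k in range(n - 1):              # pivot row/column k
        for i in range(k + 1, n):       # rows below the pivot
            m = -A[i, k] / A[k, k]      # multiplier m_ik = -a_ik / a_kk
            A[i, k:] += m * A[k, k:]    # row operation R_i <- R_i + m_ik R_k
            b[i] += m * b[k]

    # Backward substitution using the formulas above.
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        x[i] = (b[i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
    return x

The inner loop applies exactly the multiplier and row operation of Table 1; the final loop implements the back-substitution formula.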
NAÏVE GAUSS ELIMINATION
(Cont.)
Example 1
Solve the following system of linear equations using the Naïve Gauss elimination method.
4x_1 + 3x_2 + 2x_3 = 13
2x_1 − x_2 + x_3 = −1
x_1 + x_2 + 3x_3 = 14

Solution

Forward elimination of unknowns:

[ 4   3  2 |  13 ]
[ 2  −1  1 |  −1 ]
[ 1   1  3 |  14 ]

R_2 → R_2 − (2/4) R_1,  R_3 → R_3 − (1/4) R_1:

[ 4    3    2   |  13   ]
[ 0   −5/2  0   | −15/2 ]
[ 0    1/4  5/2 |  43/4 ]

R_3 → R_3 − ((1/4)/(−5/2)) R_2:

[ 4    3    2   |  13   ]
[ 0   −5/2  0   | −15/2 ]
[ 0    0    5/2 |  10   ]
NAÏVE GAUSS ELIMINATION
(Cont.)
Solution (Cont.)

Backward substitution:

(5/2) x_3 = 10            →  x_3 = 4
−(5/2) x_2 = −15/2        →  x_2 = 3
4x_1 + 3(3) + 2(4) = 13   →  x_1 = −1

Therefore, x_1 = −1, x_2 = 3, x_3 = 4.
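As a quick numerical check of Example 1 (illustrative, not part of the original notes), NumPy's built-in solver reproduces the hand computation; the naive_gauss sketch given earlier should return the same vector.

import numpy as np

A = np.array([[4.0,  3.0, 2.0],
              [2.0, -1.0, 1.0],
              [1.0,  1.0, 3.0]])
b = np.array([13.0, -1.0, 14.0])

print(np.linalg.solve(A, b))   # approximately [-1.  3.  4.]
# naive_gauss(A, b) from the earlier sketch is expected to agree.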
GAUSS ELIMINATION WITH
PARTIAL PIVOTING
Two pitfalls are involved in the Naïve Gauss elimination method:
a) Division by zero, which may occur in the forward elimination and backward substitution phases.
b) Large round-off errors.

GAUSS ELIMINATION WITH
PARTIAL PIVOTING (Cont.)
The simplest remedy to reduce the round-off error is to increase the
number of significant figures in the computation. However, this increases
the computational effort and requires more memory for storage.
The technique of pivoting can be used to avoid division by zero and to
minimize round-off error.
Pivoting involves rearranging the equations so that the pivot element has
the largest magnitude of the elements in its column (from the pivot row
downward).
This technique can be incorporated into the Naïve Gauss elimination
method and is then known as Gauss elimination with partial pivoting. It
involves the same steps as the Naïve Gauss elimination method (forward
elimination and backward substitution), with the additional task of
ensuring that the pivot element has the largest magnitude before each of
the (n − 1) steps of forward elimination is performed.
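A minimal Python/NumPy sketch of Gauss elimination with partial pivoting (illustrative only; the function name gauss_partial_pivot is an assumption, not part of the original notes). Before each elimination step, the row whose pivot-column entry has the largest magnitude is swapped into the pivot position.

import numpy as np

def gauss_partial_pivot(A, b):
    """Solve Ax = b by Gauss elimination with partial pivoting.

    Minimal sketch: assumes A is square and nonsingular.
    """
    A = np.array(A, dtype=float)
    b = np.array(b, dtype=float)
    n = len(b)

    for k in range(n - 1):
        # Partial pivoting: find the largest |a_ik| for i >= k and swap it into row k.
        p = k + np.argmax(np.abs(A[k:, k]))
        if p != k:
            A[[k, p]] = A[[p, k]]
            b[[k, p]] = b[[p, k]]
        # Forward elimination below the pivot.
        for i in range(k + 1, n):
            m = -A[i, k] / A[k, k]
            A[i, k:] += m * A[k, k:]
            b[i] += m * b[k]

    # Backward substitution.
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        x[i] = (b[i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
    return x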
GAUSS ELIMINATION WITH
PARTIAL PIVOTING (Cont.)
Example 2 [Chapra & Canale, 2010]
Solve the following equations using Gauss elimination with partial pivoting.

0.3x_1 − 0.52x_2 + x_3 = −0.01
0.5x_1 − x_2 + 1.9x_3 = 0.67
0.1x_1 − 0.3x_2 + 0.5x_3 = −0.44

Solution

0.3 0.52 1  0.01


Rewrite in augmented matrix form  
 0 .5 1 1.9 0.67 
 0.1 0.3 0.5  0.44 
GAUSS ELIMINATION WITH
PARTIAL PIVOTING (Cont.)
Solution (Cont.)
Identify the element with the largest absolute value in the first column and switch the corresponding rows (if any). Here 0.5 has the largest magnitude, so rows 1 and 2 are switched (R_1 ↔ R_2):

[ 0.3  −0.52  1   | −0.01 ]        [ 0.5  −1     1.9 |  0.67 ]
[ 0.5  −1     1.9 |  0.67 ]   →    [ 0.3  −0.52  1   | −0.01 ]
[ 0.1  −0.3   0.5 | −0.44 ]        [ 0.1  −0.3   0.5 | −0.44 ]

Forward elimination of unknowns, R_2 → R_2 − (0.3/0.5) R_1 and R_3 → R_3 − (0.1/0.5) R_1:

[ 0.5  −1        1.9     |  0.67   ]
[ 0     0.0800  −0.1400  | −0.4120 ]
[ 0    −0.1000   0.1200  | −0.5740 ]
GAUSS ELIMINATION WITH
PARTIAL PIVOTING (Cont.)
Solution (Cont.)

Identify the element with the largest absolute value in the second column (below the pivot row) and switch the corresponding rows (if any). Here |−0.1000| > |0.0800|, so rows 2 and 3 are switched (R_2 ↔ R_3):

[ 0.5  −1        1.9     |  0.67   ]        [ 0.5  −1        1.9     |  0.67   ]
[ 0     0.0800  −0.1400  | −0.4120 ]   →    [ 0    −0.1000   0.1200  | −0.5740 ]
[ 0    −0.1000   0.1200  | −0.5740 ]        [ 0     0.0800  −0.1400  | −0.4120 ]

Forward elimination of unknowns, R_3 → R_3 − (0.0800/(−0.1000)) R_2:

[ 0.5  −1        1.9     |  0.67   ]
[ 0    −0.1000   0.1200  | −0.5740 ]
[ 0     0       −0.0440  | −0.8712 ]
GAUSS ELIMINATION WITH
PARTIAL PIVOTING (Cont.)
Solution (Cont.)

Backward substitution:

−0.0440 x_3 = −0.8712   →  x_3 = 19.8000

−0.1000 x_2 + 0.1200 x_3 = −0.5740
x_2 = (−0.5740 − 0.1200(19.8000)) / (−0.1000)   →  x_2 = 29.5000

0.5 x_1 − x_2 + 1.9 x_3 = 0.67
x_1 = (0.67 + 29.5000 − 1.9(19.8000)) / 0.5   →  x_1 = −14.9000

Therefore, x_1 = −14.9000, x_2 = 29.5000, x_3 = 19.8000.
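As a quick numerical check of Example 2 (illustrative, not part of the original notes), NumPy's built-in solver reproduces the hand computation; the gauss_partial_pivot sketch given earlier should agree.

import numpy as np

A = np.array([[0.3, -0.52, 1.0],
              [0.5, -1.0,  1.9],
              [0.1, -0.3,  0.5]])
b = np.array([-0.01, 0.67, -0.44])

print(np.linalg.solve(A, b))   # approximately [-14.9  29.5  19.8]
# gauss_partial_pivot(A, b) from the earlier sketch is expected to agree.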
LU FACTORIZATION
𝐋𝐔 factorization is designed so that the elimination step involves
operations on the matrix 𝐀 only.
It is well suited to situations where many right-hand-side vectors 𝐛
need to be evaluated for a single matrix 𝐀.
The matrix 𝐀 is decomposed (factored) into a product of two matrices 𝐋
and 𝐔 such that

𝐀 = 𝐋𝐔

LU FACTORIZATION (Cont.)
For 3 × 3 matrices we may have

[ l_11  0     0    ] [ u_11  u_12  u_13 ]   [ a_11  a_12  a_13 ]
[ l_21  l_22  0    ] [ 0     u_22  u_23 ] = [ a_21  a_22  a_23 ]
[ l_31  l_32  l_33 ] [ 0     0     u_33 ]   [ a_31  a_32  a_33 ]

By this decomposition, the system 𝐀𝐱 = 𝐛 to be solved has the form

𝐋𝐔𝐱 = 𝐛          (1)

To solve equation (1), consider

𝐔𝐱 = 𝐝          (2)

Substituting equation (2) into equation (1) gives

𝐋𝐝 = 𝐛          (3)

LU FACTORIZATION (Cont.)
LU Factorization Procedures

Step 1  𝐋𝐔 decomposition: the matrix 𝐀 is decomposed (factored) into a product of two matrices, 𝐋 (lower triangular matrix) and 𝐔 (upper triangular matrix).

Step 2  Substitution: this step consists of two parts, forward and backward substitution. In forward substitution, equation (3) is used to generate an intermediate vector 𝐝. Then, the vector 𝐝 is substituted into equation (2) to obtain the solution 𝐱 in the backward substitution step.

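A minimal Python/NumPy sketch of the substitution step, assuming 𝐋 and 𝐔 are already available as arrays (the helper name solve_lu is an assumption, not part of the original notes): forward substitution solves 𝐋𝐝 = 𝐛, then backward substitution solves 𝐔𝐱 = 𝐝.

import numpy as np

def solve_lu(L, U, b):
    """Solve L U x = b in two triangular solves, as in equations (2) and (3)."""
    n = len(b)
    # Forward substitution: L d = b
    d = np.zeros(n)
    for i in range(n):
        d[i] = (b[i] - L[i, :i] @ d[:i]) / L[i, i]
    # Backward substitution: U x = d
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        x[i] = (d[i] - U[i, i + 1:] @ x[i + 1:]) / U[i, i]
    return x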
LU FACTORIZATION (Cont.)
LU Factorization Methods

Three types of methods:

1. Gauss Elimination Method as LU
2. Crout's Method
3. Cholesky Method

GAUSS ELIMINATION METHOD AS
LU FACTORIZATION
Also known as Doolittle decomposition (the elements of the lower triangular
matrix 𝐋 are 1's on the main diagonal).

Gauss elimination as 𝐋𝐔 decomposition and Naïve Gauss elimination are
equally efficient in terms of operation count.

Gauss elimination as 𝐋𝐔 decomposition saves effort when several
right-hand-side vectors must be solved, since the elimination step is
performed only on the matrix 𝐀.

The forward elimination step of the Naïve Gauss elimination method can be
used to decompose 𝐀 into 𝐋 and 𝐔.

GAUSS ELIMINATION METHOD AS
LU FACTORIZATION
Gauss Elimination as LU Factorization Procedures

From the forward elimination step, we have

[ a_11  a_12  a_13 ]
[ a_21  a_22  a_23 ]
[ a_31  a_32  a_33 ]

R_2 → R_2 − (a_21/a_11) R_1,  R_3 → R_3 − (a_31/a_11) R_1:

[ a_11  a_12   a_13  ]
[ 0     a'_22  a'_23 ]
[ 0     a'_32  a'_33 ]

R_3 → R_3 − (a'_32/a'_22) R_2:

[ a_11  a_12   a_13   ]
[ 0     a'_22  a'_23  ]  = 𝐔
[ 0     0      a''_33 ]

GAUSS ELIMINATION METHOD AS
LU FACTORIZATION (Cont.)
Gauss Elimination as LU Factorization Procedures

Matrix 𝐋 can be obtained from the multipliers of each row operation.
Therefore, we have

[ a_11  a_12  a_13 ]   [ 1     0     0 ] [ u_11  u_12  u_13 ]
[ a_21  a_22  a_23 ] = [ l_21  1     0 ] [ 0     u_22  u_23 ]
[ a_31  a_32  a_33 ]   [ l_31  l_32  1 ] [ 0     0     u_33 ]

                       [ 1     0     0 ] [ a_11  a_12   a_13   ]
                     = [ m_21  1     0 ] [ 0     a'_22  a'_23  ]
                       [ m_31  m_32  1 ] [ 0     0      a''_33 ]

where m_21 = a_21/a_11, m_31 = a_31/a_11 and m_32 = a'_32/a'_22.

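A minimal Python/NumPy sketch of this construction (the name lu_doolittle is an assumption, not part of the original notes): the multipliers are stored below the diagonal of 𝐋 while the eliminated matrix becomes 𝐔. Combined with the solve_lu sketch given earlier, it reproduces the two-phase procedure.

import numpy as np

def lu_doolittle(A):
    """Factor A = L U with 1's on the diagonal of L (Gauss elimination / Doolittle form).

    Minimal sketch: assumes the pivots are nonzero (no pivoting is performed).
    """
    A = np.array(A, dtype=float)
    n = A.shape[0]
    U = A.copy()
    L = np.eye(n)
    for k in range(n - 1):
        for i in range(k + 1, n):
            L[i, k] = U[i, k] / U[k, k]      # multiplier m_ik = a_ik / a_kk
            U[i, k:] -= L[i, k] * U[k, k:]   # eliminate the entry below the pivot
    return L, U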
GAUSS ELIMINATION METHOD AS
LU FACTORIZATION (Cont.)
Example 3
Solve the following equations using Gauss elimination as LU factorization.

4x + 3y + 2z = 13
2x − y + z = −1
x + y + 3z = 14
Solution

LU decomposition

[ 4   3  2 ]
[ 2  −1  1 ]
[ 1   1  3 ]

R_2 → R_2 − (2/4) R_1,  R_3 → R_3 − (1/4) R_1:

[ 4    3    2   ]
[ 0   −5/2  0   ]
[ 0    1/4  5/2 ]

R_3 → R_3 − ((1/4)/(−5/2)) R_2:

[ 4    3    2   ]
[ 0   −5/2  0   ]  = 𝐔
[ 0    0    5/2 ]
GAUSS ELIMINATION METHOD AS
LU FACTORIZATION (Cont.)
Solution (Cont.)
Then write

    [ 4    3    2   ]        [ 1     0     0 ]   [ 1    0      0 ]
𝐔 = [ 0   −5/2  0   ]    𝐋 = [ m_21  1     0 ] = [ 1/2  1      0 ]
    [ 0    0    5/2 ]        [ m_31  m_32  1 ]   [ 1/4  −1/10  1 ]

GAUSS ELIMINATION METHOD AS
LU FACTORIZATION (Cont.)
Solution (Cont.)

Substitution to solve for 𝐱

Set up 𝐋𝐝 = 𝐛:

[ 1    0      0 ] [ d_1 ]   [ 13 ]
[ 1/2  1      0 ] [ d_2 ] = [ −1 ]
[ 1/4  −1/10  1 ] [ d_3 ]   [ 14 ]

Solve for 𝐝 by using forward substitution:

    [ d_1 ]   [  13   ]
𝐝 = [ d_2 ] = [ −15/2 ]
    [ d_3 ]   [  10   ]

GAUSS ELIMINATION METHOD AS
LU FACTORIZATION (Cont.)
Solution (Cont.)

Set up 𝐔𝐱 = 𝐝:

[ 4    3    2   ] [ x_1 ]   [  13   ]
[ 0   −5/2  0   ] [ x_2 ] = [ −15/2 ]
[ 0    0    5/2 ] [ x_3 ]   [  10   ]

Solve for 𝐱 by using backward substitution:

    [ x_1 ]   [ −1 ]
𝐱 = [ x_2 ] = [  3 ]
    [ x_3 ]   [  4 ]
CROUT’S METHOD
In Crout's method, 𝐔 has 1's on its main diagonal and 𝐋 is a general lower triangular matrix. Executing the matrix multiplication 𝐋𝐔 on the right-hand side of 𝐀 = 𝐋𝐔 gives (for a 4 × 4 system):

[ a_11  a_12  a_13  a_14 ]   [ l_11  l_11 u_12          l_11 u_13                      l_11 u_14                                ]
[ a_21  a_22  a_23  a_24 ]   [ l_21  l_21 u_12 + l_22   l_21 u_13 + l_22 u_23          l_21 u_14 + l_22 u_24                    ]
[ a_31  a_32  a_33  a_34 ] = [ l_31  l_31 u_12 + l_32   l_31 u_13 + l_32 u_23 + l_33   l_31 u_14 + l_32 u_24 + l_33 u_34        ]   (*)
[ a_41  a_42  a_43  a_44 ]   [ l_41  l_41 u_12 + l_42   l_41 u_13 + l_42 u_23 + l_43   l_41 u_14 + l_42 u_24 + l_43 u_34 + l_44 ]

CROUT’S METHOD (Cont.)
The elements of the matrices 𝐋 and 𝐔 can be determined from equation (*) by equating them with the corresponding elements of the matrix 𝐀.

First row (i = 1):
l_11 = a_11
u_12 = a_12 / l_11,   u_13 = a_13 / l_11,   u_14 = a_14 / l_11

Second row (i = 2):
l_21 = a_21,   l_22 = a_22 − l_21 u_12
u_23 = (a_23 − l_21 u_13) / l_22,   u_24 = (a_24 − l_21 u_14) / l_22

Third row (i = 3):
l_31 = a_31,   l_32 = a_32 − l_31 u_12,   l_33 = a_33 − l_31 u_13 − l_32 u_23
u_34 = (a_34 − l_31 u_14 − l_32 u_24) / l_33

Fourth row (i = 4):
l_41 = a_41,   l_42 = a_42 − l_41 u_12,   l_43 = a_43 − l_41 u_13 − l_42 u_23,
l_44 = a_44 − l_41 u_14 − l_42 u_24 − l_43 u_34
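A minimal Python/NumPy sketch of Crout's factorization (the name lu_crout is an assumption, not part of the original notes). It fills a column of 𝐋 and then a row of 𝐔 at each stage, which yields the same factors as the row-by-row formulas above.

import numpy as np

def lu_crout(A):
    """Factor A = L U in Crout form: L lower triangular, U with 1's on its diagonal.

    Minimal sketch: assumes the diagonal entries l_ii are nonzero (no pivoting).
    """
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    L = np.zeros((n, n))
    U = np.eye(n)
    for i in range(n):
        # Column i of L: l_ji = a_ji - sum_{k<i} l_jk u_ki
        for j in range(i, n):
            L[j, i] = A[j, i] - L[j, :i] @ U[:i, i]
        # Row i of U: u_ij = (a_ij - sum_{k<i} l_ik u_kj) / l_ii
        for j in range(i + 1, n):
            U[i, j] = (A[i, j] - L[i, :i] @ U[:i, j]) / L[i, i]
    return L, U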
CROUT’S METHOD (Cont.)
Example 4
Solve the following equations using Crout’s method.

12x − 2y + z = 16
15x + 12y − z = 1
13x + 4y + z = 11
Solution
LU decomposition

[ 12  −2   1 ]   [ l_11  l_11 u_12          l_11 u_13                    ]
[ 15  12  −1 ] = [ l_21  l_21 u_12 + l_22   l_21 u_13 + l_22 u_23        ]
[ 13   4   1 ]   [ l_31  l_31 u_12 + l_32   l_31 u_13 + l_32 u_23 + l_33 ]

CROUT’S METHOD (Cont.)
Solution (Cont.)

Equating the elements of the matrix 𝐀 with the right-hand side of the multiplication of 𝐋 and 𝐔 yields:

For i = 1 (first row):    l_11 = 12,   u_12 = −1/6,   u_13 = 1/12
For i = 2 (second row):   l_21 = 15,   l_22 = 29/2,   u_23 = −9/58
For i = 3 (third row):    l_31 = 13,   l_32 = 37/6,   l_33 = 76/87

CROUT’S METHOD (Cont.)
Solution (Cont.)

Therefore

    [ 1  −1/6   1/12 ]        [ 12   0     0     ]
𝐔 = [ 0   1    −9/58 ]    𝐋 = [ 15   29/2  0     ]
    [ 0   0     1    ]        [ 13   37/6  76/87 ]

CROUT’S METHOD (Cont.)
Solution (Cont.)

Substitution to solve for 𝐱

Set up 𝐋𝐝 = 𝐛:

[ 12   0     0     ] [ d_1 ]   [ 16 ]
[ 15   29/2  0     ] [ d_2 ] = [  1 ]
[ 13   37/6  76/87 ] [ d_3 ]   [ 11 ]

Solve for 𝐝 by using forward substitution:

    [ d_1 ]   [   4/3  ]
𝐝 = [ d_2 ] = [ −38/29 ]
    [ d_3 ]   [    2   ]

CROUT’S METHOD (Cont.)
Solution (Cont.)

Set up 𝐔𝐱 = 𝐝:

[ 1  −1/6   1/12 ] [ x_1 ]   [   4/3  ]
[ 0   1    −9/58 ] [ x_2 ] = [ −38/29 ]
[ 0   0     1    ] [ x_3 ]   [    2   ]

Solve for 𝐱 by using backward substitution; we obtain

    [ x_1 ]   [  1 ]
𝐱 = [ x_2 ] = [ −1 ]
    [ x_3 ]   [  2 ]
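A self-contained NumPy check of the factors and solution obtained in Example 4 (illustrative, not part of the original notes); the lu_crout sketch given earlier should produce the same 𝐋 and 𝐔.

import numpy as np

# L and U as obtained in Example 4 (fractions written as floats)
L = np.array([[12.0, 0.0,   0.0],
              [15.0, 29/2,  0.0],
              [13.0, 37/6, 76/87]])
U = np.array([[1.0, -1/6,  1/12],
              [0.0,  1.0, -9/58],
              [0.0,  0.0,  1.0]])
A = np.array([[12.0, -2.0,  1.0],
              [15.0, 12.0, -1.0],
              [13.0,  4.0,  1.0]])
b = np.array([16.0, 1.0, 11.0])

print(np.allclose(L @ U, A))    # True: the factors reproduce A
print(np.linalg.solve(A, b))    # approximately [ 1. -1.  2.]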
CHOLESKY METHOD
Cholesky Method: Introduction

Cholesky decomposition is a technique designed for systems where the matrix 𝐀 is symmetric and positive definite.
A symmetric matrix is a matrix with elements a_ij = a_ji for all i ≠ j; in other words, 𝐀 = 𝐀ᵀ.
The Cholesky decomposition method offers computational advantages because only half of the storage and computation time is required.
In the Cholesky method, a symmetric matrix 𝐀 is decomposed as

𝐀 = 𝐔ᵀ𝐔

CHOLESKY METHOD (Cont.)
Cholesky Method: Formula

i 1
uii  aii   uki2
k 1
Cholesky Method Formula i 1
aij   uki ukj
uij  k 1
for j  i  1,..., n
uii

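A minimal Python/NumPy sketch of these formulas (the name cholesky_upper is an assumption, not part of the original notes; no positive-definiteness checks are included).

import numpy as np

def cholesky_upper(A):
    """Factor a symmetric positive definite A as A = U^T U, with U upper triangular."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    U = np.zeros((n, n))
    for i in range(n):
        # Diagonal entry: u_ii = sqrt(a_ii - sum_{k<i} u_ki^2)
        U[i, i] = np.sqrt(A[i, i] - U[:i, i] @ U[:i, i])
        # Remaining entries of row i: u_ij = (a_ij - sum_{k<i} u_ki u_kj) / u_ii
        for j in range(i + 1, n):
            U[i, j] = (A[i, j] - U[:i, i] @ U[:i, j]) / U[i, i]
    return U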
CHOLESKY METHOD (Cont.)
Cholesky Method: Procedures

Step 1  Decompose 𝐀 such that 𝐀 = 𝐔ᵀ𝐔. Hence, we may write 𝐔ᵀ𝐔𝐱 = 𝐛.

Step 2  Set up and solve 𝐔ᵀ𝐝 = 𝐛, where 𝐝 can be obtained by using forward substitution.

Step 3  Set up and solve 𝐔𝐱 = 𝐝, where 𝐱 can be obtained by using backward substitution.

CHOLESKY METHOD (Cont.)
Example 5
Solve the following equations using Cholesky Factorization.

2x − y = 1
−x + 2y = z
z = y

Solution

In matrix form:

[  2  −1   0 ] [ x_1 ]   [ 1 ]
[ −1   2  −1 ] [ x_2 ] = [ 0 ]
[  0  −1   1 ] [ x_3 ]   [ 0 ]

CHOLESKY METHOD (Cont.)
Solution (Cont.)

Cholesky decomposition (𝐀 = 𝐔ᵀ𝐔)

For the first row (i = 1):

u_11 = sqrt( a_11 ) = sqrt(2) = 1.4142
u_12 = a_12 / u_11 = −1 / 1.4142 = −0.7071
u_13 = a_13 / u_11 = 0

CHOLESKY METHOD (Cont.)
Solution (Cont.)

Cholesky decomposition (Cont.)

For the second row (i = 2):

u_22 = sqrt( a_22 − u_12² ) = sqrt(2 − 0.5) = 1.2247
u_23 = ( a_23 − u_12 u_13 ) / u_22 = (−1 − 0) / 1.2247 = −0.8165

For the third row (i = 3):

u_33 = sqrt( a_33 − u_13² − u_23² ) = sqrt(1 − 0 − 0.6667) = 0.5773

CHOLESKY METHOD (Cont.)
Solution (Cont.)

Therefore

    [ 1.4142  −0.7071   0      ]          [  1.4142   0        0      ]
𝐔 = [ 0        1.2247  −0.8165 ]    𝐔ᵀ =  [ −0.7071   1.2247   0      ]
    [ 0        0        0.5773 ]          [  0       −0.8165   0.5773 ]

CHOLESKY METHOD (Cont.)
Solution (Cont.)

Substitution to solve for 𝐱

Solve 𝐔ᵀ𝐝 = 𝐛 for 𝐝 by using forward substitution:

    [ d_1 ]   [ 0.7071 ]
𝐝 = [ d_2 ] = [ 0.4083 ]
    [ d_3 ]   [ 0.5775 ]

Solve 𝐔𝐱 = 𝐝 for 𝐱 by using backward substitution:

    [ x ]   [ 1.0002 ]
𝐱 = [ y ] = [ 1.0003 ]
    [ z ]   [ 1.0003 ]
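For comparison (illustrative, not part of the original notes), NumPy's built-in Cholesky routine returns the lower triangular factor of 𝐀, whose transpose is the 𝐔 computed by hand in Example 5. The exact solution of this system is (1, 1, 1); the 1.0002/1.0003 values above reflect four-digit rounding.

import numpy as np

A = np.array([[ 2.0, -1.0,  0.0],
              [-1.0,  2.0, -1.0],
              [ 0.0, -1.0,  1.0]])
b = np.array([1.0, 0.0, 0.0])

Lc = np.linalg.cholesky(A)      # lower triangular factor, A = Lc @ Lc.T
U = Lc.T                        # upper triangular factor, A = U.T @ U as in the notes
print(np.round(U, 4))           # compare with the U obtained in Example 5
print(np.linalg.solve(A, b))    # [1. 1. 1.]
# cholesky_upper(A) from the earlier sketch is expected to match U.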
Conclusion

The Naïve Gauss elimination method suffers from two pitfalls: it may involve
division by zero and large round-off errors.
These can be overcome by performing Gauss elimination with partial
pivoting.
𝐋𝐔 factorization is designed so that the elimination step involves
operations on the matrix 𝐀 only.
It is well suited to situations where many right-hand-side vectors 𝐛 need
to be evaluated for a single matrix 𝐀.

Author Information
NORHAYATI BINTI ROSLI
Senior Lecturer,
Applied & Industrial Mathematics Research Group,
Faculty of Industrial Sciences & Technology (FIST),
Universiti Malaysia Pahang,
26300 Gambang, Pahang.
SCOPUS ID: 36603244300
UMPIR ID: 3449
Google Scholar: https://scholar.google.com/citations?user=SLoPW9oAAAAJ&hl=en
E-mail: norhayati@ump.edu.my

MOHD ZUKI SALLEH
Applied & Industrial Mathematics Research Group,
Faculty of Industrial Sciences & Technology (FIST),
Universiti Malaysia Pahang,
Lebuhraya Tun Razak,
26300 Gambang, Kuantan, Pahang, Malaysia
Web: Mohd Zuki Salleh
UMPIR: Mohd Zuki Salleh
Google Scholar: Mohd Zuki Salleh
ResearcherID: F-5440-2012
ORCiD: http://orcid.org/0000-0001-9732-9280
SCOPUS ID: 25227025600
E-mail: zuki@ump.edu.my

