Komputasi Numerik Terapan: Week 3

KOMPUTASI NUMERIK TERAPAN
WEEK 3

5-1
CHAPTER 5 SIMULTANEOUS LINEAR
EQUATIONS
Many engineering and scientific problems can be formulated
in terms of systems of simultaneous linear equations.
In a system consisting of only a few equations, a solution
can be found analytically using the standard methods from
algebra, such as substitution.

5-2
GENERAL FORM FOR A SYSTEM OF
EQUATIONS
a_{11} X_1 + a_{12} X_2 + \cdots + a_{1n} X_n = C_1
a_{21} X_1 + a_{22} X_2 + \cdots + a_{2n} X_n = C_2
\vdots
a_{n1} X_1 + a_{n2} X_2 + \cdots + a_{nn} X_n = C_n
a_{ij} : known coefficient
X_j : unknown variable
C_i : known constant
Assume # of unknowns = # of equations
Assume the equations are linearly independent; that is, any one
equation is not a linear combination of any of the other equations.
5-3
The linear system can be written in a matrix-vector
form:
\begin{bmatrix}
a_{11} & a_{12} & \cdots & a_{1n} \\
a_{21} & a_{22} & \cdots & a_{2n} \\
\vdots & \vdots &        & \vdots \\
a_{n1} & a_{n2} & \cdots & a_{nn}
\end{bmatrix}
\begin{bmatrix} X_1 \\ X_2 \\ \vdots \\ X_n \end{bmatrix}
=
\begin{bmatrix} C_1 \\ C_2 \\ \vdots \\ C_n \end{bmatrix}

or, in compact form, AX = C.

Combining A and C, it can be expressed as the augmented matrix:

\begin{bmatrix}
a_{11} & a_{12} & \cdots & a_{1n} & C_1 \\
a_{21} & a_{22} & \cdots & a_{2n} & C_2 \\
\vdots & \vdots &        & \vdots & \vdots \\
a_{n1} & a_{n2} & \cdots & a_{nn} & C_n
\end{bmatrix}

5-4
The system

2X_1 + 3X_2 = 1
-4X_1 + X_2 = 5

can be expressed as:

\begin{bmatrix} 2 & 3 & 1 \\ -4 & 1 & 5 \end{bmatrix}

5-5
SOLUTION OF TWO EQUATIONS
a_{11} X_1 + a_{12} X_2 = C_1
a_{21} X_1 + a_{22} X_2 = C_2

It can be solved by substitution:

X_1 = ( C_1 - a_{12} X_2 ) / a_{11}

With substitution, we get:

X_1 = ( a_{22} C_1 - a_{12} C_2 ) / ( a_{11} a_{22} - a_{21} a_{12} )
X_2 = ( a_{11} C_2 - a_{21} C_1 ) / ( a_{11} a_{22} - a_{21} a_{12} )
5-6
CLASSIFICATION OF SYSTEMS OF
EQUATIONS
Systems that have unique solutions

2X_1 + 3X_2 = 6
2X_1 + 9X_2 = 12

5-7
Systems without solutions (parallel lines)

3X_1 + 9X_2 = 5
X_1 + 3X_2 = 6

5-8
Systems with an infinite number of solutions (same
line)
2X_1 + 3X_2 = 4
4X_1 + 6X_2 = 8

5-9
A system that has a solution, but has ill-conditioned
parameters.

2X_1 + 2.2X_2 = 5.7
2X_1 + 2X_2 = 5.5

5-10
PERMISSIBLE OPERATIONS
2X_1 + 3X_2 = 1
-4X_1 + X_2 = 5

Rule 1: The solution is not changed if the order of the equations is changed.

-4X_1 + X_2 = 5
2X_1 + 3X_2 = 1

5-11
Rule 2: Any one of the equations can be multiplied or divided by a nonzero constant without changing the solution.

4X_1 + 6X_2 = 2
-4X_1 + X_2 = 5

Rule 3: The solution is not changed if two equations are added together and the resulting equation replaces either of the two original equations.

2X_1 + 3X_2 = 1
-2X_1 + 4X_2 = 6

5-12
GAUSSIAN ELIMINATION
Gaussian elimination procedure:
Phase 1: forward pass
Phase 2: back substitution
The purpose of the forward pass is to apply the three permissible
operations to transform the original augmented matrix into an
upper-triangular matrix:

\begin{bmatrix}
1 & d_{12} & d_{13} & \cdots & d_{1n} & e_1 \\
0 & 1      & d_{23} & \cdots & d_{2n} & e_2 \\
0 & 0      & 1      & \cdots & d_{3n} & e_3 \\
\vdots &   &        &        &        & \vdots \\
0 & 0      & 0      & \cdots & 1      & e_n
\end{bmatrix}
5-13
EXAMPLE: GAUSSIAN ELIMINATION
PROCEDURE
2X_1 + 3X_2 - 2X_3 - X_4 = -2
2X_1 + 5X_2 - 3X_3 + X_4 = 7
2X_1 - X_2 - 3X_3 + 2X_4 = -1
5X_1 - 2X_2 + X_3 - 3X_4 = -8

Represented in the matrix form:

\begin{bmatrix}
2 &  3 & -2 & -1 & -2 \\
2 &  5 & -3 &  1 &  7 \\
2 & -1 & -3 &  2 & -1 \\
5 & -2 &  1 & -3 & -8
\end{bmatrix}

5-15
As step 1 of the forward pass, we convert the element a_{11} (called the
pivot for row 1) to 1 and eliminate, that is, set to zero, all the other
elements in the first column, giving a matrix of the form:
\begin{bmatrix}
1 & d'_{12} & d'_{13} & d'_{14} & e'_1 \\
0 & d'_{22} & d'_{23} & d'_{24} & e'_2 \\
0 & d'_{32} & d'_{33} & d'_{34} & e'_3 \\
0 & d'_{42} & d'_{43} & d'_{44} & e'_4
\end{bmatrix}

Step 1:
Operations: R'_1 = R_1/2, R'_2 = R_2 - 2R'_1, R'_3 = R_3 - 2R'_1, R'_4 = R_4 - 5R'_1.
Resultant matrix:

\begin{bmatrix}
1 &  3/2   & -1 & -1/2 & -1 \\
0 &  2     & -1 &  2   &  9 \\
0 & -4     & -1 &  3   &  1 \\
0 & -19/2  &  6 & -1/2 & -3
\end{bmatrix}

5-16

Step 2:
Operations: R'_1 = R_1, R'_2 = R_2/2, R'_3 = R_3 + 4R'_2, R'_4 = R_4 + (19/2)R'_2.
Resultant matrix:

\begin{bmatrix}
1 & 3/2 & -1   & -1/2 & -1    \\
0 & 1   & -1/2 &  1   &  9/2  \\
0 & 0   & -3   &  7   &  19   \\
0 & 0   &  5/4 &  9   & 159/4
\end{bmatrix}

Step 3:
Operations: R'_1 = R_1, R'_2 = R_2, R'_3 = R_3/(-3), R'_4 = R_4 - (5/4)R'_3.
Resultant matrix:

\begin{bmatrix}
1 & 3/2 & -1   & -1/2   & -1     \\
0 & 1   & -1/2 &  1     &  9/2   \\
0 & 0   &  1   & -7/3   & -19/3  \\
0 & 0   &  0   & 143/12 & 572/12
\end{bmatrix}

5-17
Step 4:
Operation: R'_4 = R_4 / (143/12); the other rows are unchanged.
Resultant matrix:

\begin{bmatrix}
1 & 3/2 & -1   & -1/2 & -1    \\
0 & 1   & -1/2 &  1   &  9/2  \\
0 & 0   &  1   & -7/3 & -19/3 \\
0 & 0   &  0   &  1   &  4
\end{bmatrix}

It represents:

X_1 + (3/2)X_2 - X_3 - (1/2)X_4 = -1
X_2 - (1/2)X_3 + X_4 = 9/2
X_3 - (7/3)X_4 = -19/3
X_4 = 4

5-18
Step 1 of back substitution:
Operations: R'_1 = R_1 + (1/2)R'_4, R'_2 = R_2 - R'_4, R'_3 = R_3 + (7/3)R'_4, R'_4 = R_4.
Resultant matrix:

\begin{bmatrix}
1 & 3/2 & -1   & 0 & 1   \\
0 & 1   & -1/2 & 0 & 1/2 \\
0 & 0   &  1   & 0 & 3   \\
0 & 0   &  0   & 1 & 4
\end{bmatrix}

Step 2:
Operations: R'_1 = R_1 + R'_3, R'_2 = R_2 + (1/2)R'_3.
Resultant matrix:

\begin{bmatrix}
1 & 3/2 & 0 & 0 & 4 \\
0 & 1   & 0 & 0 & 2 \\
0 & 0   & 1 & 0 & 3 \\
0 & 0   & 0 & 1 & 4
\end{bmatrix}

5-19

Step 3:
Operation: R'_1 = R_1 - (3/2)R'_2.
Resultant matrix:

\begin{bmatrix}
1 & 0 & 0 & 0 & 1 \\
0 & 1 & 0 & 0 & 2 \\
0 & 0 & 1 & 0 & 3 \\
0 & 0 & 0 & 1 & 4
\end{bmatrix}

The solution is: X_1 = 1, X_2 = 2, X_3 = 3, X_4 = 4.

5-20
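As a quick cross-check (not part of the original slides), the same system can be solved with NumPy's built-in solver, which reproduces the result of the hand elimination above:

    import numpy as np

    # Coefficient matrix and constants of the worked example, as reconstructed above.
    A = np.array([[2.0,  3.0, -2.0, -1.0],
                  [2.0,  5.0, -3.0,  1.0],
                  [2.0, -1.0, -3.0,  2.0],
                  [5.0, -2.0,  1.0, -3.0]])
    C = np.array([-2.0, 7.0, -1.0, -8.0])

    # NumPy's solver (LU factorization with partial pivoting) gives the same result.
    print(np.linalg.solve(A, C))   # prints [1. 2. 3. 4.]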
GAUSS-JORDAN ELIMINATION
The Gaussian elimination procedure requires a forward pass
to transform the coefficient matrix into an upper-triangular form.
In Gauss-Jordan elimination, all coefficients in a column
except for the pivot element are eliminated.
In Gauss-Jordan elimination, the solution is obtained directly
after the forward pass; there is no back substitution phase.
The Gauss-Jordan method needs more computational effort
than Gaussian elimination.
5-21
EXAMPLE: GAUSS-JORDAN ELIMINATION
Step 1:
Operations: R'_1 = R_1/2, R'_2 = R_2 - 2R'_1, R'_3 = R_3 - 2R'_1, R'_4 = R_4 - 5R'_1.
Resultant matrix:

\begin{bmatrix}
1 &  3/2  & -1 & -1/2 & -1 \\
0 &  2    & -1 &  2   &  9 \\
0 & -4    & -1 &  3   &  1 \\
0 & -19/2 &  6 & -1/2 & -3
\end{bmatrix}

Step 2:
Operations: R'_2 = R_2/2, R'_1 = R_1 - (3/2)R'_2, R'_3 = R_3 + 4R'_2, R'_4 = R_4 + (19/2)R'_2.
Resultant matrix:

\begin{bmatrix}
1 & 0 & -1/4 & -2 & -31/4 \\
0 & 1 & -1/2 &  1 &  9/2  \\
0 & 0 & -3   &  7 &  19   \\
0 & 0 &  5/4 &  9 & 159/4
\end{bmatrix}

5-22

Step 3:
Operations: R'_3 = R_3/(-3), R'_1 = R_1 + (1/4)R'_3, R'_2 = R_2 + (1/2)R'_3, R'_4 = R_4 - (5/4)R'_3.
Resultant matrix:

\begin{bmatrix}
1 & 0 & 0 & -31/12 & -112/12 \\
0 & 1 & 0 & -1/6   &  4/3    \\
0 & 0 & 1 & -7/3   & -19/3   \\
0 & 0 & 0 & 143/12 & 572/12
\end{bmatrix}

Step 4:
Operations: R'_4 = R_4/(143/12), R'_1 = R_1 + (31/12)R'_4, R'_2 = R_2 + (1/6)R'_4, R'_3 = R_3 + (7/3)R'_4.
Resultant matrix:

\begin{bmatrix}
1 & 0 & 0 & 0 & 1 \\
0 & 1 & 0 & 0 & 2 \\
0 & 0 & 1 & 0 & 3 \\
0 & 0 & 0 & 1 & 4
\end{bmatrix}

The last column gives the solution directly: X_1 = 1, X_2 = 2, X_3 = 3, X_4 = 4.

5-23
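A short Python sketch of the Gauss-Jordan procedure illustrated above (not from the slides; the function name and the absence of pivot-row swapping are illustrative simplifications):

    import numpy as np

    def gauss_jordan(A, C):
        """Solve AX = C by Gauss-Jordan elimination (no pivot-row swapping)."""
        C = np.asarray(C, dtype=float)
        n = len(C)
        aug = np.hstack([np.asarray(A, dtype=float), C.reshape(n, 1)])   # augmented matrix [A | C]
        for i in range(n):
            aug[i, :] = aug[i, :] / aug[i, i]        # normalize the pivot row
            for k in range(n):
                if k != i:
                    # Eliminate the pivot column from every other row, above and below.
                    aug[k, :] -= aug[k, i] * aug[i, :]
        return aug[:, -1]                            # the last column now holds the solution

    A = [[2, 3, -2, -1], [2, 5, -3, 1], [2, -1, -3, 2], [5, -2, 1, -3]]
    C = [-2, 7, -1, -8]
    print(gauss_jordan(A, C))   # prints [1. 2. 3. 4.]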
ACCUMULATED ROUND-OFF ERRORS
Problems with round-off and truncation are most
likely to occur when the coefficients in the equations
differ by several orders of magnitude.
Round-off problems can be reduced by
rearranging the equations such that the largest
coefficient in each equation is placed on the
principal diagonal of the matrix.
The equations should be ordered such that the
equation having the largest pivot is reduced first,
followed by the equation having the next largest,
and so on.
5-24
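The rearrangement described above can be automated during elimination: before each elimination step, swap the pivot row with the row below it that has the largest magnitude coefficient in the pivot column (partial pivoting). A minimal NumPy sketch, not from the slides; the helper name pivot_rows is illustrative:

    import numpy as np

    def pivot_rows(aug, i):
        """Swap row i of the augmented matrix with the row at or below it whose
        column-i coefficient has the largest absolute value."""
        best = i + np.argmax(np.abs(aug[i:, i]))     # row with the largest |a_ki| for k >= i
        if best != i:
            aug[[i, best], :] = aug[[best, i], :]    # in-place row swap
        return aug

    aug = np.array([[1.0, 3.0, 2.0, 15.0],
                    [5.0, 2.0, 1.0,  8.0],
                    [3.0, 4.0, 7.0, 39.0]])
    pivot_rows(aug, 0)   # brings the row with the largest leading coefficient (5) to the top
    print(aug)

A forward pass would call such a helper at the start of every elimination step so that the largest available coefficient sits on the principal diagonal.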
PROGRAMMING GAUSSIAN
ELIMINATION
Forward pass:
1. Loop over each row i, making each row i in turn the pivot
row.

2. Normalize the elements of the pivot row (row i) by dividing each element in the row by a_{ii}, as follows:

a_{ij} = a_{ij} / a_{ii}    for j = i+1, i+2, ..., n

C_i = C_i / a_{ii}

(after this step, a_{ii} = 1)

5-25
3. Loop over rows (i+1) to n below the pivot row and reduce the elements in each row as follows:

a_{kj} = a_{kj} - a_{ki} a_{ij}    for j = i, ..., n

C_k = C_k - a_{ki} C_i    for k = i+1, ..., n
Back substitution
1. For the last row n:

X_n = C_n

2. For rows (n-1) through 1:

X_i = C_i - \sum_{j=i+1}^{n} a_{ij} X_j    for i = n-1, n-2, ..., 1

5-26
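Below is a short Python sketch (not part of the original slides) that follows the forward-pass and back-substitution steps listed above. The function name and the use of NumPy are illustrative choices, and no row reordering (pivoting) is performed:

    import numpy as np

    def gaussian_elimination(A, C):
        """Solve AX = C with the forward pass and back substitution described above."""
        A = np.asarray(A, dtype=float).copy()
        C = np.asarray(C, dtype=float).copy()
        n = len(C)
        # Forward pass: reduce A to an upper-triangular matrix with ones on the diagonal.
        for i in range(n):
            pivot = A[i, i]                  # assumes a nonzero pivot (no row swapping here)
            A[i, i:] = A[i, i:] / pivot      # normalize the pivot row
            C[i] = C[i] / pivot
            for k in range(i + 1, n):        # eliminate column i from the rows below
                factor = A[k, i]
                A[k, i:] -= factor * A[i, i:]
                C[k] -= factor * C[i]
        # Back substitution.
        X = np.zeros(n)
        X[n - 1] = C[n - 1]
        for i in range(n - 2, -1, -1):
            X[i] = C[i] - A[i, i + 1:] @ X[i + 1:]
        return X

    A = [[2, 3, -2, -1], [2, 5, -3, 1], [2, -1, -3, 2], [5, -2, 1, -3]]
    C = [-2, 7, -1, -8]
    print(gaussian_elimination(A, C))   # prints [1. 2. 3. 4.]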
LU DECOMPOSITION
A matrix A can be decomposed into L and U, where L is a
lower-triangular matrix and U is an upper-triangular matrix.
LU=A

\begin{bmatrix}
l_{11} & 0      & 0      & \cdots & 0      \\
l_{21} & l_{22} & 0      & \cdots & 0      \\
l_{31} & l_{32} & l_{33} & \cdots & 0      \\
\vdots &        &        & \ddots & \vdots \\
l_{n1} & l_{n2} & l_{n3} & \cdots & l_{nn}
\end{bmatrix}
\begin{bmatrix}
1 & u_{12} & u_{13} & \cdots & u_{1n} \\
0 & 1      & u_{23} & \cdots & u_{2n} \\
0 & 0      & 1      & \cdots & u_{3n} \\
\vdots &   &        & \ddots & \vdots \\
0 & 0      & 0      & \cdots & 1
\end{bmatrix}
=
\begin{bmatrix}
a_{11} & a_{12} & a_{13} & \cdots & a_{1n} \\
a_{21} & a_{22} & a_{23} & \cdots & a_{2n} \\
a_{31} & a_{32} & a_{33} & \cdots & a_{3n} \\
\vdots &        &        &        & \vdots \\
a_{n1} & a_{n2} & a_{n3} & \cdots & a_{nn}
\end{bmatrix}
5-27
L and U can be determined as follows:
l_{i1} = a_{i1}    for i = 1, 2, ..., n

u_{1j} = a_{1j} / l_{11}    for j = 2, 3, ..., n

l_{ij} = a_{ij} - \sum_{k=1}^{j-1} l_{ik} u_{kj}    for j = 2, 3, ..., n and i = j, j+1, ..., n

u_{ji} = \left( a_{ji} - \sum_{k=1}^{j-1} l_{jk} u_{ki} \right) / l_{jj}    for j = 2, 3, ..., n-1 and i = j+1, j+2, ..., n
5-28
AX = C

Writing A = LU gives LUX = C. Letting E = UX, we have LE = C and UX = E.

To calculate E from LE = C (forward substitution):

e_1 = C_1 / l_{11}

e_i = \left( C_i - \sum_{j=1}^{i-1} l_{ij} e_j \right) / l_{ii}    for i = 2, 3, ..., n

To calculate X from UX = E (back substitution):

X_n = e_n

X_i = e_i - \sum_{j=i+1}^{n} u_{ij} X_j    for i = n-1, n-2, ..., 1
5-29
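A sketch of the decomposition and the two substitution passes defined on the last few slides, assuming NumPy. The decomposition follows the convention used here (ones on the diagonal of U), and the function names are illustrative:

    import numpy as np

    def lu_decompose(A):
        """Decompose A into L (lower-triangular) and U (unit upper-triangular) with LU = A."""
        A = np.asarray(A, dtype=float)
        n = A.shape[0]
        L = np.zeros((n, n))
        U = np.eye(n)
        for j in range(n):
            for i in range(j, n):        # l_ij = a_ij - sum_k l_ik u_kj
                L[i, j] = A[i, j] - L[i, :j] @ U[:j, j]
            for i in range(j + 1, n):    # u_ji = (a_ji - sum_k l_jk u_ki) / l_jj
                U[j, i] = (A[j, i] - L[j, :j] @ U[:j, i]) / L[j, j]
        return L, U

    def lu_solve(L, U, C):
        """Solve LUX = C: forward substitution for LE = C, then back substitution for UX = E."""
        n = len(C)
        E = np.zeros(n)
        for i in range(n):
            E[i] = (C[i] - L[i, :i] @ E[:i]) / L[i, i]
        X = np.zeros(n)
        for i in range(n - 1, -1, -1):
            X[i] = E[i] - U[i, i + 1:] @ X[i + 1:]
        return X

    L, U = lu_decompose([[1, 3, 2], [2, 4, 3], [3, 4, 7]])
    print(lu_solve(L, U, [15, 22, 39]))   # prints [1. 2. 4.], matching the example that follows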
RELATION BETWEEN LU
DECOMPOSITION AND GAUSSIAN
ELIMINATION
In the LU decomposition, matrix U is equivalent to the
upper triangular matrix obtained in the forward pass in
Gaussian elimination.
The calculation of UX=E is equivalent to the back
substitution in Gaussian elimination.

5-30
EXAMPLE: LU DECOMPOSITION
X_1 + 3X_2 + 2X_3 = 15
2X_1 + 4X_2 + 3X_3 = 22
3X_1 + 4X_2 + 7X_3 = 39

A = \begin{bmatrix} 1 & 3 & 2 \\ 2 & 4 & 3 \\ 3 & 4 & 7 \end{bmatrix}

Applying LU decomposition:

l_{11} = a_{11} = 1
l_{21} = a_{21} = 2
l_{31} = a_{31} = 3

5-31
u_{12} = a_{12} / l_{11} = 3/1 = 3
u_{13} = a_{13} / l_{11} = 2/1 = 2

l_{22} = a_{22} - \sum_{k=1}^{1} l_{2k} u_{k2} = 4 - 2(3) = -2

l_{32} = a_{32} - \sum_{k=1}^{1} l_{3k} u_{k2} = 4 - 3(3) = -5

u_{23} = \left( a_{23} - \sum_{k=1}^{1} l_{2k} u_{k3} \right) / l_{22} = ( 3 - 2(2) ) / (-2) = 0.5

l_{33} = a_{33} - \sum_{k=1}^{2} l_{3k} u_{k3} = 7 - (3)(2) - (-5)(0.5) = 3.5

5-32
Thus, the L and U matrices are
L = \begin{bmatrix} 1 & 0 & 0 \\ 2 & -2 & 0 \\ 3 & -5 & 3.5 \end{bmatrix}
U = \begin{bmatrix} 1 & 3 & 2 \\ 0 & 1 & 0.5 \\ 0 & 0 & 1 \end{bmatrix}
Forward substitution:
e_1 = C_1 / l_{11} = 15/1 = 15

e_2 = \left( C_2 - \sum_{j=1}^{1} l_{2j} e_j \right) / l_{22} = ( 22 - 2(15) ) / (-2) = 4

e_3 = \left( C_3 - \sum_{j=1}^{2} l_{3j} e_j \right) / l_{33} = ( 39 - 3(15) - (-5)(4) ) / 3.5 = 4

5-33
Back substitution:
X_3 = e_3 = 4

X_2 = e_2 - \sum_{j=3}^{3} u_{2j} X_j = 4 - 0.5(4) = 2

X_1 = e_1 - \sum_{j=2}^{3} u_{1j} X_j = 15 - 3(2) - (2)(4) = 1

5-34
CHOLESKY DECOMPOSITION FOR
SYMMETRIC MATRICES
A symmetric matrix A:
A = A^T

Cholesky decomposition for a symmetric matrix A:

A = L L^T
5-35
Matrix L can be computed as follows:

l_{11} = \sqrt{a_{11}}

l_{ii} = \sqrt{ a_{ii} - \sum_{k=1}^{i-1} l_{ik}^2 }    for i = 2, 3, ..., n

l_{ij} = \left( a_{ij} - \sum_{k=1}^{j-1} l_{ik} l_{jk} \right) / l_{jj}    for i = 2, 3, ..., n and j = 1, 2, ..., i-1
5-36
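A minimal sketch of these formulas, assuming the matrix is symmetric and positive definite so that the square roots exist; the function name and the NumPy usage are illustrative:

    import numpy as np

    def cholesky(A):
        """Return the lower-triangular L with A = L L^T, using the formulas above."""
        A = np.asarray(A, dtype=float)
        n = A.shape[0]
        L = np.zeros((n, n))
        for i in range(n):
            for j in range(i):
                # Off-diagonal terms: l_ij = (a_ij - sum_k l_ik l_jk) / l_jj
                L[i, j] = (A[i, j] - L[i, :j] @ L[j, :j]) / L[j, j]
            # Diagonal terms: l_ii = sqrt(a_ii - sum_k l_ik^2)
            L[i, i] = np.sqrt(A[i, i] - L[i, :i] @ L[i, :i])
        return L

    L = cholesky([[1, 2, 3], [2, 8, 10], [3, 10, 22]])
    print(L)           # [[1 0 0], [2 2 0], [3 2 3]], as in the example that follows
    print(L @ L.T)     # reproduces the original matrix A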
EXAMPLE: CHOLESKY DECOMPOSITION
A = \begin{bmatrix} 1 & 2 & 3 \\ 2 & 8 & 10 \\ 3 & 10 & 22 \end{bmatrix}
We can obtain the following:

l_{11} = \sqrt{a_{11}} = \sqrt{1} = 1

l_{21} = a_{21} / l_{11} = 2/1 = 2

l_{22} = \sqrt{ a_{22} - \sum_{k=1}^{1} l_{2k}^2 } = \sqrt{8 - 2^2} = 2

5-37

l_{31} = a_{31} / l_{11} = 3/1 = 3

l_{32} = \left( a_{32} - \sum_{k=1}^{1} l_{3k} l_{2k} \right) / l_{22} = ( 10 - (3)(2) ) / 2 = 2

l_{33} = \sqrt{ a_{33} - \sum_{k=1}^{2} l_{3k}^2 } = \sqrt{22 - 3^2 - 2^2} = 3
5-38
Therefore, the L matrix is

1 0 0
L 2 2 0

3 2 3

The validity can be verified as

1 0 0 1 2 3 1 2 3
A LLT 2 2 0 0 2 2 2 8 10

3 2 3 0 0 3 3 10 22

5-39
ITERATIVE METHODS

Elimination methods like the Gaussian elimination procedure are often called direct equation-solving methods. An iterative method is a trial-and-error procedure.
In iterative methods, we can assume a solution, that is, a set of
estimates for the unknowns, and successively refine our
estimate of the solution through some set of rules.
A major advantage of iterative methods is that they can be
used to solve nonlinear simultaneous equations, a task that is
not possible using direct elimination methods.

5-40
JACOBI ITERATION

Each equation is rearranged to produce an expression for a single unknown:

X_1 = ( C_1 - a_{12} X_2 - a_{13} X_3 - \cdots - a_{1n} X_n ) / a_{11}
X_2 = ( C_2 - a_{21} X_1 - a_{23} X_3 - \cdots - a_{2n} X_n ) / a_{22}
\vdots
X_n = ( C_n - a_{n1} X_1 - a_{n2} X_2 - \cdots - a_{n,n-1} X_{n-1} ) / a_{nn}

5-41
EXAMPLE: JACOBI ITERATION
3X_1 + X_2 - 2X_3 = 9
X_1 - 4X_2 + 3X_3 = 8
X_1 - X_2 + 4X_3 = 1

Rearrange each equation as follows:

X_1 = ( 9 - X_2 + 2X_3 ) / 3
X_2 = -( 8 - X_1 - 3X_3 ) / 4
X_3 = ( 1 - X_1 + X_2 ) / 4

5-42
Assume an initial estimate for the solution: X_1 = X_2 = X_3 = 1.

First iteration:

X_1 = ( 9 - 1 + 2(1) ) / 3 = 10/3
X_2 = -( 8 - 1 - 3(1) ) / 4 = -1
X_3 = ( 1 - 1 + 1 ) / 4 = 1/4

Second iteration:

X_1 = ( 9 - (-1) + 2(1/4) ) / 3 = 7/2
X_2 = -( 8 - 10/3 - 3(1/4) ) / 4 = -47/48
X_3 = ( 1 - 10/3 + (-1) ) / 4 = -5/6

The solution is shown in the next table.

5-43
Table: Example of Jacobi Iteration

Iteration   X1      |dX1|    X2       |dX2|    X3       |dX3|
0           1                1                 1
1           3.333   2.333    -1.000   2.000     0.250   0.750
2           3.500   0.167    -0.979   0.021    -0.833   1.803
3           2.771   0.729    -1.750   0.771    -0.870   0.036
4           3.003   0.233    -1.960   0.210    -0.880   0.010
5           3.006   0.063    -1.960   0.210    -0.991   0.111
6           2.996   0.090    -1.976   0.067    -0.994   0.003
7           2.996   0.020    -2.001   0.025    -0.988   0.006
8           3.008   0.012    -1.992   0.009    -0.999   0.011
9           2.998   0.011    -1.997   0.005    -1.000   0.001
10          2.999   0.001    -2.001   0.003    -0.999   0.001
11          3.001   0.002    -1.999   0.001    -1.000   0.001
12          3.000   0.001    -2.000   0.000    -1.000   0.001

5-44
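A sketch of the Jacobi procedure applied to this example (assuming NumPy; the function name, tolerance, and iteration cap are illustrative). Every new value is computed from the previous iteration's estimates only:

    import numpy as np

    def jacobi(A, C, x0, tol=1e-3, max_iter=50):
        """Jacobi iteration for AX = C, starting from the estimate x0."""
        A = np.asarray(A, dtype=float)
        C = np.asarray(C, dtype=float)
        x = np.asarray(x0, dtype=float)
        D = np.diag(A)                              # the diagonal coefficients a_ii
        for iteration in range(1, max_iter + 1):
            # Update all components at once from the previous iteration's values.
            x_new = (C - (A @ x - D * x)) / D
            if np.max(np.abs(x_new - x)) < tol:
                return x_new, iteration
            x = x_new
        return x, max_iter

    A = [[3, 1, -2], [1, -4, 3], [1, -1, 4]]
    C = [9, 8, 1]
    x, iterations = jacobi(A, C, x0=[1, 1, 1])
    print(x, iterations)   # converges to approximately (3, -2, -1)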
GAUSS-SEIDEL ITERATION
In the Jacobi iteration procedure, we always
complete a full iteration cycle over all the equations
before updating our solution estimates. In the
Gauss-Seidel iteration procedure, we update each
unknown as soon as a new estimate of that unknown
is computed.
Example: Gauss-Seidel Iteration

3X_1 + X_2 - 2X_3 = 9
X_1 - 4X_2 + 3X_3 = 8
X_1 - X_2 + 4X_3 = 1
5-45
Assume an initial solution estimate of X1=X2=X3=1.
First iteration:
X_1 = ( 9 - 1 + 2(1) ) / 3 = 3.333
X_2 = -( 8 - 3.333 - 3(1) ) / 4 = -0.417
X_3 = ( 1 - 3.333 + (-0.417) ) / 4 = -0.688

Second iteration:

X_1 = ( 9 - (-0.417) + 2(-0.688) ) / 3 = 2.680
X_2 = -( 8 - 2.680 - 3(-0.688) ) / 4 = -1.845
X_3 = ( 1 - 2.680 + (-1.845) ) / 4 = -0.882

5-46
TABLE: EXAMPLE OF GAUSS-SEIDEL ITERATION
Iteration   X1      |dX1|    X2       |dX2|    X3       |dX3|
0           1                1                 1
1           3.333   2.333    -0.417   1.417    -0.688   1.688
2           2.680   0.348    -1.845   1.428    -0.882   0.194
3           3.027   0.346    -1.904   0.059    -0.983   0.101
4           2.979   0.048    -1.992   0.088    -0.993   0.010
5           3.002   0.023    -1.994   0.002    -0.999   0.006
6           2.999   0.003    -2.000   0.006    -1.000   0.001
7           3.000   0.001    -2.000   0.000    -1.000   0.000
8           3.000   0.000    -2.000   0.000    -1.000   0.000
The Jacobi iteration method requires 13 iterations to reach an accuracy of 3 decimal places; the Gauss-Seidel iteration method needs only 7 iterations.

5-47
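The same example with Gauss-Seidel updating, where each new estimate is used immediately within a sweep; again a sketch with illustrative names and tolerance:

    import numpy as np

    def gauss_seidel(A, C, x0, tol=1e-3, max_iter=50):
        """Gauss-Seidel iteration for AX = C, starting from the estimate x0."""
        A = np.asarray(A, dtype=float)
        C = np.asarray(C, dtype=float)
        x = np.asarray(x0, dtype=float)
        n = len(C)
        for sweep in range(1, max_iter + 1):
            x_old = x.copy()
            for i in range(n):
                # Uses the newest values of x[0..i-1] already updated in this sweep.
                s = A[i, :i] @ x[:i] + A[i, i + 1:] @ x[i + 1:]
                x[i] = (C[i] - s) / A[i, i]
            if np.max(np.abs(x - x_old)) < tol:
                return x, sweep
        return x, max_iter

    A = [[3, 1, -2], [1, -4, 3], [1, -1, 4]]
    C = [9, 8, 1]
    x, sweeps = gauss_seidel(A, C, x0=[1, 1, 1])
    print(x, sweeps)   # reaches approximately (3, -2, -1) in fewer sweeps than Jacobi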
CONVERGENCE CONSIDERATIONS OF
THE ITERATIVE METHODS

Both the Jacobi and Gauss-Seidel iterative methods


may diverge.
Interchange the order of equations in the above
example, and solve it by the Gauss-Seidel method:
X1 8 4 X 2 3X 3
X 2 9 3X1 2 X 3
1 X1 X 2
X3
4

5-48
TABLE: DIVERGENCE OF GAUSS-SEIDEL ITERATION
Iteration   X1       |dX1|    X2        |dX2|     X3        |dX3|
0           1                 1                   1
1           9        8        -16       17        -6        7
2           -38      47       111       127       37.5      43.5
3           339.5    377.5    -934.5    1045.5    -318.25   355.75

The solution in the above table would not converge.


That is, we can not get a solution.
The divergence of the iterative calculation does not
imply there is no solution, since it is a permissible
operation to interchange the order in a set of
equations.
5-49
CONVERGENCE AND DIVERGENCE OF
GAUSS-SEIDEL ITERATION

a_{11} X_1 + a_{12} X_2 = C_1
a_{21} X_1 + a_{22} X_2 = C_2

If we solve both equations individually for X_1 we get:

X_1 = C_1 / a_{11} - ( a_{12} / a_{11} ) X_2
X_1 = C_2 / a_{21} - ( a_{22} / a_{21} ) X_2

5-50
5-51
It will converge when the absolute value of the slope of f_1 is less than the absolute value of the slope of f_2 (f_1 and f_2 denote the two expressions for X_1 above, viewed as functions of X_2). Thus the equations should be arranged such that X_1 is expressed in terms of X_2 under the following condition:

| a_{12} / a_{11} | < | a_{22} / a_{21} |
For a system with more than 2 equations, we should select as the first equation the one whose coefficient of X_1 is largest in absolute value, as the second equation the one among the remaining equations whose coefficient of X_2 is largest, and so on.
5-52
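A small sketch of this selection rule, assuming NumPy; the function name and the greedy strategy are illustrative, and the reordering is only a heuristic that tends to produce a diagonally dominant arrangement:

    import numpy as np

    def reorder_for_iteration(A, C):
        """Greedily reorder the equations so the largest remaining coefficient of X_i
        (in absolute value) ends up on diagonal position i."""
        A = np.asarray(A, dtype=float)
        C = np.asarray(C, dtype=float)
        n = len(C)
        remaining = list(range(n))
        order = []
        for i in range(n):
            k = max(remaining, key=lambda r: abs(A[r, i]))   # best equation for position i
            order.append(k)
            remaining.remove(k)
        return A[order], C[order]

    # The divergent ordering from the previous example (equations interchanged).
    A = [[1, -4, 3], [3, 1, -2], [1, -1, 4]]
    C = [8, 9, 1]
    A2, C2 = reorder_for_iteration(A, C)
    print(A2)   # rows return to [[3, 1, -2], [1, -4, 3], [1, -1, 4]], the convergent ordering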
CRAMER'S RULE
A = \begin{bmatrix}
a_{11} & a_{12} & \cdots & a_{1n} \\
a_{21} & a_{22} & \cdots & a_{2n} \\
\vdots & \vdots &        & \vdots \\
a_{n1} & a_{n2} & \cdots & a_{nn}
\end{bmatrix}
C = \begin{bmatrix} C_1 \\ C_2 \\ \vdots \\ C_n \end{bmatrix}

Cramer's rule for obtaining X_i:

X_i = |A_i| / |A|

where |A_i| is the determinant of A with column i replaced by C. An example of |A_i|:

|A_2| = \begin{vmatrix}
a_{11} & C_1 & a_{13} & \cdots & a_{1n} \\
a_{21} & C_2 & a_{23} & \cdots & a_{2n} \\
\vdots & \vdots & \vdots &     & \vdots \\
a_{n1} & C_n & a_{n3} & \cdots & a_{nn}
\end{vmatrix}

5-53
EXAMPLE: CRAMER'S RULE
3X_1 + X_2 - 2X_3 = 9
X_1 - 4X_2 + 3X_3 = 8
X_1 - X_2 + 4X_3 = 1

Writing the equations in the order (second, first, third), which changes the sign of every determinant but not the ratios X_i = |A_i|/|A|, the determinants are:

|A| = \begin{vmatrix} 1 & -4 & 3 \\ 3 & 1 & -2 \\ 1 & -1 & 4 \end{vmatrix} = 46

|A_1| = \begin{vmatrix} 8 & -4 & 3 \\ 9 & 1 & -2 \\ 1 & -1 & 4 \end{vmatrix} = 138

5-54

|A_2| = \begin{vmatrix} 1 & 8 & 3 \\ 3 & 9 & -2 \\ 1 & 1 & 4 \end{vmatrix} = -92

|A_3| = \begin{vmatrix} 1 & -4 & 8 \\ 3 & 1 & 9 \\ 1 & -1 & 1 \end{vmatrix} = -46

The solution is:

X_1 = |A_1| / |A| = 138/46 = 3

X_2 = |A_2| / |A| = -92/46 = -2

X_3 = |A_3| / |A| = -46/46 = -1

5-55
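For small systems, Cramer's rule is easy to express with NumPy's determinant routine; this sketch (not from the slides, names illustrative) reproduces the example's solution:

    import numpy as np

    def cramer(A, C):
        """Solve AX = C by Cramer's rule: X_i = |A_i| / |A|."""
        A = np.asarray(A, dtype=float)
        C = np.asarray(C, dtype=float)
        det_A = np.linalg.det(A)
        X = np.zeros(len(C))
        for i in range(len(C)):
            Ai = A.copy()
            Ai[:, i] = C                       # replace column i with the constants
            X[i] = np.linalg.det(Ai) / det_A
        return X

    A = [[3, 1, -2], [1, -4, 3], [1, -1, 4]]
    C = [9, 8, 1]
    print(cramer(A, C))   # prints [ 3. -2. -1.]

For large n this is far more expensive than elimination, so it is mainly useful for very small systems or hand checks.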
MATRIX INVERSION

The inverse of a matrix P is defined by the following


equation

P1P I
in which I is the identity or unit matrix and both P and I
are square matrices.
The values of P-1 can be computed by solving a set of
. n2 simultaneous equations.

5-56
Let the elements of P^{-1} and P be denoted as q_{ij} and p_{ij}, respectively.

\begin{bmatrix} q_{11} & q_{12} & q_{13} \\ q_{21} & q_{22} & q_{23} \\ q_{31} & q_{32} & q_{33} \end{bmatrix}
\begin{bmatrix} p_{11} & p_{12} & p_{13} \\ p_{21} & p_{22} & p_{23} \\ p_{31} & p_{32} & p_{33} \end{bmatrix}
= \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}

p_{11} q_{11} + p_{21} q_{12} + p_{31} q_{13} = 1
p_{12} q_{11} + p_{22} q_{12} + p_{32} q_{13} = 0
p_{13} q_{11} + p_{23} q_{12} + p_{33} q_{13} = 0
p_{11} q_{21} + p_{21} q_{22} + p_{31} q_{23} = 0
p_{12} q_{21} + p_{22} q_{22} + p_{32} q_{23} = 1
p_{13} q_{21} + p_{23} q_{22} + p_{33} q_{23} = 0
p_{11} q_{31} + p_{21} q_{32} + p_{31} q_{33} = 0
p_{12} q_{31} + p_{22} q_{32} + p_{32} q_{33} = 0
p_{13} q_{31} + p_{23} q_{32} + p_{33} q_{33} = 1

5-57
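Equivalently, each column of P^{-1} can be found by solving one ordinary n-equation system P x = e_j, where e_j is the j-th column of the identity matrix, so the n^2 equations above split into n separate solves. A short NumPy sketch (not from the slides; names illustrative):

    import numpy as np

    def invert(P):
        """Compute P^-1 column by column by solving P x = e_j for each identity column e_j."""
        P = np.asarray(P, dtype=float)
        n = P.shape[0]
        identity = np.eye(n)
        Q = np.zeros((n, n))
        for j in range(n):
            # Each column of the inverse is the solution of one ordinary linear system.
            Q[:, j] = np.linalg.solve(P, identity[:, j])
        return Q

    P = [[1, 3, 2], [2, 4, 3], [3, 4, 7]]
    Q = invert(P)
    print(np.allclose(Q @ P, np.eye(3)))   # True: Q P = I, so Q is the inverse of P

Any of the elimination or decomposition methods from this chapter could be used in place of np.linalg.solve for each column.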
