Chapter 3 Lecture


Introduction
Solution Methods
  Direct Methods
  Iterative Methods
Solution Techniques for Eigen-value Problems
  Eigen-values and Eigenvectors

1 — System of Linear Equations

1.1 Introduction
This chapter is about solving systems of linear equations, which have many applications in engineering and science. Consider a system of n linear equations as shown below.
a11 x1 + a12 x2 + a13 x3 + . . . + a1n xn = b1
a21 x1 + a22 x2 + a23 x3 + . . . + a2n xn = b2
a31 x1 + a32 x2 + a33 x3 + . . . + a3n xn = b3          (1.1)
   ⋮
an1 x1 + an2 x2 + an3 x3 + . . . + ann xn = bn
This equation can be written in a matrix form as

AX = B

Where A is an n × n coefficient matrix and X and B are column vectors of size n, so the matrix product AX is compatible with the left-hand side of the linear algebraic equations. If B = 0 the above system is called a homogeneous system of equations; otherwise it is non-homogeneous. A homogeneous system can be solved by eigen-value methods, leading to an eigen-value problem.
Definition 1.1.1 If a system of equations is satisfied simultaneously by at least one set of values, then it is consistent.

There are three row operations that are useful when solving systems of linear algebraic equations. These operations do not affect the solution of the system, so they can be used freely during the solution process whenever necessary. These are:
1. Scaling: any row of the system can be multiplied by a non-zero constant.
2. Pivoting: the order of the rows can be interchanged as required.
3. Elimination: any row of a system of linear equations can be replaced by a weighted linear combination of that row with any other row.

1.2 Solution Methods


There are two major classes of solution methods, whether the problem is a single equation or a system. These are:

1. Direct Methods and


2. Indirect ( Iterative) methods

1.2.1 Direct Methods:


These are analytical methods by which we find an exact solution (when possible). They include Cramer's rule, Gaussian elimination with and without pivoting, and LU-factorization.

Gauss-Elimination With Pivoting Method


Definition 1.2.1 — Partial Pivoting Procedure. If a zero (or near-zero) element is found in a diagonal position, i.e. an aii, which is a pivot element, interchange the row containing this element with another row so that the diagonal entry has the maximum absolute value in that column. This process can be explained in the following steps.

step 1: The largest coefficient of x1 (whether positive or negative) is selected from all the equations, and the first equation is interchanged with the equation containing this largest value. This value is called the pivot element, and the row containing it is called the pivot row. This row is used to eliminate the other coefficients of x1.
step 2: The numerically largest coefficient of x2 is selected from the remaining equations; in this step we do not consider the first equation. The second equation is then interchanged with the row containing this largest value, and that row is used to eliminate the other coefficients of x2, except in the previously selected rows. This procedure is continued until an upper triangular system is obtained.

R The pivot element is selected to be the largest in absolute value to maximize the precision
of the solution.

Exercise 1.1 Solve the system of equations

4x1 + 4x2 + x3 + 4x4 = 12
2x1 + 5x2 + 7x3 + 4x4 = 1
10x1 + 5x2 − 5x3 = 25
−2x1 − 2x2 + x3 − 3x4 = −10

using Gaussian elimination with partial pivoting.

[  4   4   1   4 |  12 ]            [ 10   5  −5   0 |  25 ]
[  2   5   7   4 |   1 ]  R1 ↔ R3   [  2   5   7   4 |   1 ]
[ 10   5  −5   0 |  25 ]  ------→   [  4   4   1   4 |  12 ]
[ −2  −2   1  −3 | −10 ]            [ −2  −2   1  −3 | −10 ]

R2 → R2 − (1/5)R1, R3 → R3 − (2/5)R1, R4 → R4 + (1/5)R1:

[ 10   5  −5   0 |  25 ]
[  0   4   8   4 |  −4 ]
[  0   2   3   4 |   2 ]
[  0  −1   0  −3 |  −5 ]

R3 → R3 − (1/2)R2, R4 → R4 + (1/4)R2:

[ 10   5  −5   0 |  25 ]
[  0   4   8   4 |  −4 ]
[  0   0  −1   2 |   4 ]
[  0   0   2  −2 |  −6 ]

R3 ↔ R4, then R4 → R4 + (1/2)R3:

[ 10   5  −5   0 |  25 ]
[  0   4   8   4 |  −4 ]
[  0   0   2  −2 |  −6 ]
[  0   0   0   1 |   1 ]

Computational Methods © Numerical Methods

Therefore, after back substitution, the solution of the system is

(x1, x2, x3, x4) = (1/2, 2, −2, 1)
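The elimination just performed can also be sketched programmatically. Below is a minimal NumPy sketch (not the notes' own program; the function name `gauss_pivot` is illustrative) of Gaussian elimination with partial pivoting and back substitution, applied to the system of Exercise 1.1:

```python
import numpy as np

def gauss_pivot(A, b):
    """Solve Ax = b by Gaussian elimination with partial pivoting."""
    A = A.astype(float).copy()
    b = b.astype(float).copy()
    n = len(b)
    for k in range(n - 1):
        # Pivoting: bring the largest |entry| in column k onto the diagonal.
        p = k + np.argmax(np.abs(A[k:, k]))
        if p != k:
            A[[k, p]] = A[[p, k]]
            b[[k, p]] = b[[p, k]]
        # Elimination: zero out the entries below the pivot.
        for i in range(k + 1, n):
            m = A[i, k] / A[k, k]
            A[i, k:] -= m * A[k, k:]
            b[i] -= m * b[k]
    # Back substitution on the upper triangular system.
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        x[i] = (b[i] - A[i, i+1:] @ x[i+1:]) / A[i, i]
    return x

A = np.array([[4, 4, 1, 4], [2, 5, 7, 4], [10, 5, -5, 0], [-2, -2, 1, -3]])
b = np.array([12, 1, 25, -10])
print(gauss_pivot(A, b))  # ≈ (0.5, 2, -2, 1), as found above
```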
ILL-CONDITIONED SYSTEMS OF EQUATIONS
Definition 1.2.2 The system of equations AX = B is said to be ill-conditioned (unstable) if it is highly sensitive to small changes in A or B, i.e. a small change in A or B causes a big difference in the solution of the system.

On the other hand, if small changes in A and/or B cause only small changes in the solution of the system, then it is said to be stable (well-conditioned). Thus in an ill-conditioned system even round-off errors affect the solution badly, and it is quite difficult to recognize an ill-conditioned system.
Exercise 1.2 Consider the system

[  100  −200 ] [ x1 ]   [  100 ]
[ −200   401 ] [ x2 ] = [ −100 ]

By any of the methods, the solution is

X = [ 201 ]
    [ 100 ]

Suppose the elements of the coefficient matrix A are altered (perturbed) slightly to yield the following generally similar system:

[  101  −200 ] [ x1 ]   [  100 ]
[ −200   400 ] [ x2 ] = [ −100 ]

Note that a11 has increased by 1% and a22 has decreased by about 0.25%. With these rather modest changes in A, the exact solution of the perturbed system is

X = [ 50.00 ]
    [ 24.75 ]

so a change of about 1% in the coefficients generates a change on the order of 75% in the solution. Thus, the system is clearly ill-conditioned.
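This sensitivity is easy to reproduce numerically. A small sketch (assuming NumPy; not part of the original notes) that solves both the original and the perturbed systems of Exercise 1.2:

```python
import numpy as np

A  = np.array([[100.0, -200.0], [-200.0, 401.0]])   # original coefficients
Ap = np.array([[101.0, -200.0], [-200.0, 400.0]])   # perturbed coefficients
b  = np.array([100.0, -100.0])

print(np.linalg.solve(A,  b))   # x1 = 201, x2 = 100
print(np.linalg.solve(Ap, b))   # x1 = 50,  x2 = 24.75
# A large condition number is the standard warning sign of ill-conditioning:
print(np.linalg.cond(A))        # on the order of 2.5e3
```

The condition number measures, roughly, by how much relative errors in A or B can be amplified in the solution; for a well-conditioned system it is close to 1.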
Exercise 1.3 Show that the system of equations

[  7  −10 ] [ x1 ]   [ 1   ]
[ −5    7 ] [ x2 ] = [ 0.7 ]

is ill-conditioned, by making the slight change of the right-hand side to (1.01, 0.69)^T:

[  7  −10 ] [ x1 ]   [ 1.01 ]
[ −5    7 ] [ x2 ] = [ 0.69 ]

LU-Factorization (Decomposition)
To solve the general linear system AX = B by this method, factorize the coefficient matrix A into a product of two triangular matrices. This reduces the problem to solving two triangular linear systems. The method is called triangular decomposition and includes the variants of Crout, Doolittle and Cholesky.


Doolittle Version
Consider the system of equations AX = B. We will factor the matrix A into two other matrices: a lower triangular matrix L and an upper triangular matrix U such that A = LU. The method of solving the given system is described as follows.
Starting with the system AX = B, introduce a new variable Y such that Y = UX, so that AX = (LU)X = L(UX) = B ⇒ LY = B. Since L is a lower triangular matrix, this system is almost trivial to solve for the unknown Y; once we have found the vector Y, we solve the system UX = Y for X.
The only thing that remains is to find the triangular factors L = (ℓij) and U = (uij). This is accomplished by the usual Gauss elimination method: when we eliminate the entry in the ith row and jth column of A (or its equivalent) by replacing the ith row by Ri − αRj (i.e. Ri → Ri − αRj), the multiplier α becomes the ith-row, jth-column entry of L (ℓij = α). This will be illustrated by the following example.
Exercise 1.4 Solve the following system using LU-factorization (Doolittle):

[ 1  2  3 ] [ x1 ]   [ 0 ]
[ 3  4  1 ] [ x2 ] = [ 6 ]
[ 1  0  1 ] [ x3 ]   [ 1 ]

The main point here is decomposing the coefficient matrix A into lower and upper triangular matrices L and U such that A = LU by applying Gauss elimination:

    [ 1  2  3 ]  R2 → R2 − 3R1 ⇒ ℓ21 = 3   [ 1   2   3 ]  R3 → R3 − R2   [ 1   2   3 ]
A = [ 3  4  1 ]  R3 → R3 − R1  ⇒ ℓ31 = 1   [ 0  −2  −8 ]  ⇒ ℓ32 = 1      [ 0  −2  −8 ] = U
    [ 1  0  1 ]  -----------------------→  [ 0  −2  −2 ]  -----------→   [ 0   0   6 ]

Therefore,

    [ 1   2   3 ]            [  1    0    0 ]   [ 1  0  0 ]
U = [ 0  −2  −8 ]   and  L = [ ℓ21   1    0 ] = [ 3  1  0 ]
    [ 0   0   6 ]            [ ℓ31  ℓ32   1 ]   [ 1  1  1 ]
 
Since AX = (LU)X = L(UX) = B, let Y = UX = (y1, y2, y3)^T ⇒ LY = B. Thus

[ 1  0  0 ] [ y1 ]   [ 0 ]
[ 3  1  0 ] [ y2 ] = [ 6 ]
[ 1  1  1 ] [ y3 ]   [ 1 ]

Using forward substitution,

(y1, y2, y3) = (0, 6, −5)

But Y = UX ⇒

[ 1   2   3 ] [ x1 ]   [  0 ]
[ 0  −2  −8 ] [ x2 ] = [  6 ]
[ 0   0   6 ] [ x3 ]   [ −5 ]


Using backward substitution,

(x1, x2, x3) = (11/6, 1/3, −5/6)
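The Doolittle factorization described above can be sketched as follows (a NumPy sketch, not the notes' own code; `doolittle` is an illustrative name). It records each elimination multiplier α as ℓij, then solves LY = B forward and UX = Y backward:

```python
import numpy as np

def doolittle(A):
    """Factor A = LU with unit diagonal on L (Doolittle), via Gauss elimination."""
    n = A.shape[0]
    U = A.astype(float).copy()
    L = np.eye(n)
    for j in range(n - 1):
        for i in range(j + 1, n):
            L[i, j] = U[i, j] / U[j, j]   # multiplier alpha = l_ij
            U[i, j:] -= L[i, j] * U[j, j:]
            U[i, j] = 0.0                 # make the eliminated entry exactly zero
    return L, U

A = np.array([[1, 2, 3], [3, 4, 1], [1, 0, 1]])
b = np.array([0.0, 6.0, 1.0])
L, U = doolittle(A)
y = np.linalg.solve(L, b)   # forward substitution,  LY = B
x = np.linalg.solve(U, y)   # backward substitution, UX = Y
print(x)  # ≈ (11/6, 1/3, -5/6)
```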

Exercise 1.5 Cholesky Method is a reading assignment. 

Crout's Method
Let us consider three equations in three unknowns.

a11 x1 + a12 x2 + a13 x3 = b1
a21 x1 + a22 x2 + a23 x3 = b2   ⇒ AX = B
a31 x1 + a32 x2 + a33 x3 = b3

Let A = LU where

    [ ℓ11   0    0  ]          [ 1  u12  u13 ]
L = [ ℓ21  ℓ22   0  ]  and U = [ 0   1   u23 ]
    [ ℓ31  ℓ32  ℓ33 ]          [ 0   0    1  ]

     [ ℓ11   0    0  ] [ 1  u12  u13 ]   [ a11  a12  a13 ]
∴ LU = [ ℓ21  ℓ22   0  ] [ 0   1   u23 ] = [ a21  a22  a23 ] = A
     [ ℓ31  ℓ32  ℓ33 ] [ 0   0    1  ]   [ a31  a32  a33 ]

provided that all the leading principal minors of A are non-zero.

Now, AX = B ⇒ LUX = B. Let UX = V ⇒ AX = LV, thus LV = B becomes

[ ℓ11   0    0  ] [ v1 ]   [ b1 ]
[ ℓ21  ℓ22   0  ] [ v2 ] = [ b2 ]   Solve for v1, v2, v3 by forward substitution.
[ ℓ31  ℓ32  ℓ33 ] [ v3 ]   [ b3 ]

Since UX = V we have

[ 1  u12  u13 ] [ x1 ]   [ v1 ]
[ 0   1   u23 ] [ x2 ] = [ v2 ]   and solve for x1, x2, x3 by backward substitution.
[ 0   0    1  ] [ x3 ]   [ v3 ]

Exercise 1.6 Using Crout's method solve the system

[ 1  1   1 ] [ x1 ]   [ 1 ]
[ 4  3  −1 ] [ x2 ] = [ 6 ]
[ 3  5   3 ] [ x3 ]   [ 4 ]

Let

         [ ℓ11   0    0  ] [ 1  u12  u13 ]   [ 1  1   1 ]
LU = A ⇒ [ ℓ21  ℓ22   0  ] [ 0   1   u23 ] = [ 4  3  −1 ]
         [ ℓ31  ℓ32  ℓ33 ] [ 0   0    1  ]   [ 3  5   3 ]

  [ ℓ11  ℓ11 u12        ℓ11 u13              ]   [ 1  1   1 ]
⇒ [ ℓ21  ℓ21 u12 + ℓ22  ℓ21 u13 + ℓ22 u23    ] = [ 4  3  −1 ]
  [ ℓ31  ℓ31 u12 + ℓ32  ℓ31 u13 + ℓ32 u23 + ℓ33 ]   [ 3  5   3 ]


ℓ11 = 1,  ℓ21 = 4,  ℓ31 = 3,  ℓ22 = −1,  ℓ32 = 2,  ℓ33 = −10,
u12 = 1,  u13 = 1,  u23 = 5

Thus,

    [ 1   0    0  ]          [ 1  1  1 ]
L = [ 4  −1    0  ]  and U = [ 0  1  5 ]
    [ 3   2  −10 ]           [ 0  0  1 ]

Suppose V = UX ⇒ LV = B, then

[ 1   0    0  ] [ v1 ]   [ 1 ]      [ v1 ]   [  1   ]
[ 4  −1    0  ] [ v2 ] = [ 6 ]  ⇒  [ v2 ] = [ −2   ]   by forward substitution
[ 3   2  −10 ]  [ v3 ]   [ 4 ]      [ v3 ]   [ −0.5 ]

Now since UX = V we have

[ 1  1  1 ] [ x1 ]   [  1   ]      [ x1 ]   [  1   ]
[ 0  1  5 ] [ x2 ] = [ −2   ]  ⇒  [ x2 ] = [  0.5 ]   by backward substitution
[ 0  0  1 ] [ x3 ]   [ −0.5 ]      [ x3 ]   [ −0.5 ]

(Indeed x1 = v1 − x2 − x3 = 1 − 0.5 + 0.5 = 1, and (1, 0.5, −0.5) satisfies all three original equations.)
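Crout's construction, which reads off the ℓij and uij column by column and row by row, can be sketched as follows (assuming NumPy; `crout` is an illustrative name):

```python
import numpy as np

def crout(A):
    """Factor A = LU with L lower triangular and U unit upper triangular (Crout)."""
    n = A.shape[0]
    L = np.zeros((n, n))
    U = np.eye(n)
    for j in range(n):
        for i in range(j, n):            # column j of L
            L[i, j] = A[i, j] - L[i, :j] @ U[:j, j]
        for k in range(j + 1, n):        # row j of U
            U[j, k] = (A[j, k] - L[j, :j] @ U[:j, k]) / L[j, j]
    return L, U

A = np.array([[1.0, 1, 1], [4, 3, -1], [3, 5, 3]])
b = np.array([1.0, 6, 4])
L, U = crout(A)
v = np.linalg.solve(L, b)   # forward substitution,  LV = B
x = np.linalg.solve(U, v)   # backward substitution, UX = V
print(np.diag(L))  # diagonal of L: 1, -1, -10
print(x)           # ≈ (1, 0.5, -0.5)
```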

Exercise 1.7 Solve the system

2x1 + x2 + x4 = 1
3x1 + 0.5x2 + x3 + x4 = 2
4x1 + 2x2 + 2x3 + x4 = −1
x2 + x3 + 2x4 = 0

by Crout's method.

1.2.2 Iterative Methods


The basic ingredients for this method are:
• Proper choice of initial values: for the iterative method to converge quickly, proper initial values must be chosen. The physics of the problem helps the analyst choose proper initial values.
• Termination of the iterative process: in practice the true solution X will not be available, so the decision to terminate the process cannot be based on the error norm, defined as the difference between the iterative value and the true value.
Suppose we want to solve

AX = B

A relative vector-difference norm is defined as

‖x^(k) − x^(k−1)‖ / ‖x^(k)‖

Thus the iteration may be terminated when

‖x^(k) − x^(k−1)‖ / ‖x^(k)‖ ≤ ε (tolerance)


Jacobi’s Method
For the iterative scheme to converge to the true solution, the equations must satisfy the diagonal dominance criterion

|aii| > Σ_{j=1, j≠i}^{n} |aij|
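The criterion is mechanical to check. A small sketch (NumPy assumed; the function name is illustrative), tried on the matrix of Exercise 1.8 and on the unarranged matrix of Exercise 1.10:

```python
import numpy as np

def diagonally_dominant(A):
    """True if |a_ii| > sum of |a_ij| over j != i, for every row i."""
    B = np.abs(np.asarray(A, dtype=float))
    off_diag = B.sum(axis=1) - np.diag(B)
    return bool(np.all(np.diag(B) > off_diag))

print(diagonally_dominant([[5, -1, 1], [2, 4, 0], [1, 1, 5]]))  # True
print(diagonally_dominant([[1, 3, 1], [1, 2, 5], [4, 1, 2]]))   # False
```

The second matrix fails the test as given but becomes diagonally dominant once its rows are reordered, which is exactly the rearrangement done in Exercise 1.10 below.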

Consider the system of equations

[ a11  a12  a13  · · ·  a1n ] [ x1 ]   [ b1 ]
[ a21  a22  a23  · · ·  a2n ] [ x2 ]   [ b2 ]
[ a31  a32  a33  · · ·  a3n ] [ x3 ] = [ b3 ]
[  ⋮    ⋮    ⋮          ⋮  ] [ ⋮  ]   [ ⋮  ]
[ an1  an2  an3  · · ·  ann ] [ xn ]   [ bn ]

Dividing each equation by its leading diagonal term, we get

x1 = b1/a11 − Σ_{j=2}^{n} (a1j/a11) xj
x2 = b2/a22 − Σ_{j=1, j≠2}^{n} (a2j/a22) xj
x3 = b3/a33 − Σ_{j=1, j≠3}^{n} (a3j/a33) xj
   ⋮
xi = bi/aii − Σ_{j=1, j≠i}^{n} (aij/aii) xj

Procedure
Guess initial values xi^(0) for all the unknowns and solve each equation in turn for xi^(1) using the xi^(0):

xi^(1) = bi/aii − Σ_{j=1, j≠i}^{n} (aij/aii) xj^(0)

In general, the value of xi^(r) at the rth iteration is found from the values xj^(r−1) of the (r−1)th iteration. The iteration is continued until the error

|xi^(r) − xi^(r−1)| < ε1

or the relative error

|xi^(r) − xi^(r−1)| / |xi^(r)| < ε2

If one needs p decimal places of accuracy,

ε1 = 0.5 × 10^(−p)
ε2 = 0.5 × 10^(−(p+1))


Exercise 1.8 Solve the system

5x1 − x2 + x3 = 10
2x1 + 4x2 = 12
x1 + x2 + 5x3 = −1

using Jacobi's method.

Solution:
Let

    [ 5  −1  1 ]
A = [ 2   4  0 ]
    [ 1   1  5 ]

Then

|a11| = 5 > |a12| + |a13| = |−1| + |1| = 2
|a22| = 4 > |a21| + |a23| = |2| + |0| = 2
|a33| = 5 > |a31| + |a32| = |1| + |1| = 2

⇒ A is diagonally dominant. Thus, the system is ready for Jacobi's method. Rearranging the equations we get

x1^(k) = (1/5)(10 + x2^(k−1) − x3^(k−1))
x2^(k) = (1/4)(12 − 2 x1^(k−1))
x3^(k) = (1/5)(−1 − x1^(k−1) − x2^(k−1))

Let the initial approximation be

X^(0) = (x1^(0), x2^(0), x3^(0))^T = (2, 3, 0)^T
   
X^(1):  x1^(1) = (1/5)(10 + 3 − 0) = 2.6
        x2^(1) = (1/4)(12 − 2(2)) = 2.0
        x3^(1) = (1/5)(−1 − 2 − 3) = −1.2

X^(2):  x1^(2) = (1/5)(10 + 2 + 1.2) = 2.64
        x2^(2) = (1/4)(12 − 2(2.6)) = 1.70
        x3^(2) = (1/5)(−1 − 2.6 − 2) = −1.12

X^(3):  x1^(3) = (1/5)(10 + 1.70 + 1.12) = 2.564
        x2^(3) = (1/4)(12 − 2(2.64)) = 1.68
        x3^(3) = (1/5)(−1 − 2.64 − 1.70) = −1.068

X^(4):  x1^(4) = (1/5)(10 + 1.68 + 1.068) = 2.5496
        x2^(4) = (1/4)(12 − 2(2.564)) = 1.718
        x3^(4) = (1/5)(−1 − 2.564 − 1.68) = −1.0488

X^(5):  x1^(5) = (1/5)(10 + 1.718 + 1.0488) = 2.5534
        x2^(5) = (1/4)(12 − 2(2.5496)) = 1.7252
        x3^(5) = (1/5)(−1 − 2.5496 − 1.718) = −1.0535

 
|X^(5) − X^(4)| = ( |x1^(5) − x1^(4)|, |x2^(5) − x2^(4)|, |x3^(5) − x3^(4)| )^T ≈ (0.004, 0.007, 0.005)^T

Thus, correct to one decimal place, the solution of the given system is X^(5).
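The whole Jacobi iteration above can be sketched compactly (a NumPy sketch, not the notes' own program; `jacobi` is an illustrative name). Splitting A into its diagonal D and the off-diagonal remainder R turns the update into x ← (b − Rx)/D:

```python
import numpy as np

def jacobi(A, b, x0, tol=1e-3, itmax=50):
    """Jacobi iteration: every component is updated from the previous iterate."""
    A, b = np.asarray(A, float), np.asarray(b, float)
    D = np.diag(A)                 # diagonal entries a_ii
    R = A - np.diag(D)             # off-diagonal part of A
    x = np.asarray(x0, float)
    for _ in range(itmax):
        x_new = (b - R @ x) / D
        if np.max(np.abs(x_new - x)) < tol:   # componentwise stopping test
            return x_new
        x = x_new
    return x

A = [[5, -1, 1], [2, 4, 0], [1, 1, 5]]
b = [10, 12, -1]
x = jacobi(A, b, x0=[2, 3, 0], tol=0.05)
print(x)  # ≈ (2.55, 1.72, -1.05)
```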

Exercise 1.9 Solve the system

3x1 + 4x2 + 15x3 = 54.8
x1 + 12x2 + 3x3 = 39.66
10x1 + x2 − 2x3 = 7.74

using Jacobi's method.

Gauss-Seidel Method
This method is a slight improvement of Jacobi's method. Unlike Jacobi's method, updated values of the xi are used as soon as they become available instead of the values from the previous iteration. First get x1^(1) using the formula

x1^(1) = b1/a11 − Σ_{j=2}^{n} (a1j/a11) xj^(0)

then obtain the remaining approximations using the general iterative formula

xi^(r) = bi/aii − Σ_{j=1}^{i−1} (aij/aii) xj^(r) − Σ_{j=i+1}^{n} (aij/aii) xj^(r−1)

R
1. The rate of convergence of the Gauss-Seidel method is roughly twice that of Jacobi's method.
2. For this method to converge, the coefficient matrix of the system should be diagonally dominant.

Exercise 1.10 Solve

[ 1  3  1 ] [ x1 ]   [ 10 ]
[ 1  2  5 ] [ x2 ] = [ 12 ]
[ 4  1  2 ] [ x3 ]   [ 16 ]

using the Gauss-Seidel method.

Solution: Rearranging the equations so that the system is diagonally dominant:

[ 4  1  2 ] [ x1 ]   [ 16 ]
[ 1  3  1 ] [ x2 ] = [ 10 ]
[ 1  2  5 ] [ x3 ]   [ 12 ]

Let

X^(0) = (x1^(0), x2^(0), x3^(0))^T = (0, 0, 0)^T

The general Gauss-Seidel formulas for this system are

x1^(r) = 0.25 (16 − x2^(r−1) − 2 x3^(r−1))
x2^(r) = (1/3)(10 − x1^(r) − x3^(r−1))
x3^(r) = 0.2 (12 − x1^(r) − 2 x2^(r))
Thus,
Thus,

X^(1):  x1^(1) = 0.25(16 − 0 − 2(0)) = 4.0
        x2^(1) = (1/3)(10 − 4 − 0) = 2.0
        x3^(1) = 0.2(12 − 4 − 2(2)) = 0.8

X^(2):  x1^(2) = 0.25(16 − 2.0 − 2(0.8)) = 3.1000
        x2^(2) = (1/3)(10 − 3.1 − 0.8) = 2.0333
        x3^(2) = 0.2(12 − 3.1 − 2(2.0333)) = 0.9667

X^(3):  x1^(3) = 0.25(16 − 2.0333 − 2(0.9667)) = 3.0083
        x2^(3) = (1/3)(10 − 3.0083 − 0.9667) = 2.0083
        x3^(3) = 0.2(12 − 3.0083 − 2(2.0083)) = 0.9950

X^(4):  x1^(4) = 0.25(16 − 2.0083 − 2(0.9950)) = 3.0004
        x2^(4) = (1/3)(10 − 3.0004 − 0.9950) = 2.0015
        x3^(4) = 0.2(12 − 3.0004 − 2(2.0015)) = 0.9993

Clearly, as r → ∞, X → (3, 2, 1)^T.
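The same iteration can be sketched in code (NumPy assumed; `gauss_seidel` is an illustrative name). The only change from the Jacobi sketch is that each component immediately uses the components already updated in the current sweep:

```python
import numpy as np

def gauss_seidel(A, b, x0, tol=1e-4, itmax=100):
    """Gauss-Seidel: each component uses the components already updated."""
    A, b = np.asarray(A, float), np.asarray(b, float)
    x = np.asarray(x0, float)
    n = len(b)
    for _ in range(itmax):
        x_old = x.copy()
        for i in range(n):
            # x[:i] already holds the new values; x_old[i+1:] the old ones.
            s = A[i, :i] @ x[:i] + A[i, i+1:] @ x_old[i+1:]
            x[i] = (b[i] - s) / A[i, i]
        if np.max(np.abs(x - x_old)) < tol:
            break
    return x

A = [[4, 1, 2], [1, 3, 1], [1, 2, 5]]   # the rearranged, diagonally dominant form
b = [16, 10, 12]
x = gauss_seidel(A, b, x0=[0, 0, 0])
print(x)  # ≈ (3, 2, 1)
```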

1.3 Solution Techniques for Eigen-value Problems


1.3.1 Eigen-values and Eigenvectors
Let A = (aij) be a square matrix of order n. We can find a non-zero column vector X and a constant λ such that

AX = λX

where λ is called an eigen-value of A and X is the corresponding eigen-vector of A.

R
1. Eigen-values and eigen-vectors are defined only for square matrices.
2. The zero vector cannot be an eigen-vector even though

A·0 = λ ·0

An eigen-value, however, can be zero.

In Applied Mathematics I, you discussed the analytical solution of this problem. In this section, however, we discuss the numerical solution using the power method.

Power Method
The power method is an iterative method used to find the largest eigenvalue (in magnitude) and the corresponding eigenvector of a square matrix A. The method requires writing AX = λX.
Let X^(0) be a non-zero initial vector; evaluate AX^(0) and write it as

AX^(0) = λ^(1) X^(1)

where λ^(1) is the component of the vector AX^(0) that is largest in absolute value. Then λ^(1) is the first approximation to the eigenvalue and X^(1) is the first approximation to the corresponding eigenvector. Similarly, evaluate AX^(1) and write the result as

AX^(1) = λ^(2) X^(2)

which gives the second approximation. Repeat this process until ‖X^(r) − X^(r−1)‖ is negligible. Then λ^(r) is the largest eigenvalue of A and X^(r) is the corresponding eigenvector.
 Example 1.1 Find the largest eigenvalue and the corresponding eigenvector of the matrix

    [  1  3  −1 ]
A = [  3  2   4 ]
    [ −1  4  10 ]

starting with X^(0) = (0, 0, 1)^T as the initial eigenvector; take the tolerance limit as 0.01.


Solution
First approximation:

         [  1  3  −1 ] [ 0 ]   [ −1 ]      [ −0.1 ]
AX^(0) = [  3  2   4 ] [ 0 ] = [  4 ] = 10 [  0.4 ]  ⇒ λ^(1) = 10 and X^(1) = (−0.1, 0.4, 1.0)^T
         [ −1  4  10 ] [ 1 ]   [ 10 ]      [  1.0 ]

Second approximation:

AX^(1) = (0.1, 4.5, 11.7)^T = 11.7 (0.009, 0.385, 1.000)^T ⇒ λ^(2) = 11.7 and X^(2) = (0.009, 0.385, 1.000)^T

Third approximation:

AX^(2) = (0.164, 4.797, 11.531)^T = 11.531 (0.014, 0.416, 1.000)^T ⇒ λ^(3) = 11.531 and X^(3) = (0.014, 0.416, 1.000)^T

Repeating the above process until successive eigenvectors agree to within the tolerance, we get the largest eigenvalue

λ ≈ 11.66 with corresponding eigenvector X ≈ (0.024, 0.421, 1.000)^T
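The iteration of Example 1.1 can be sketched as follows (a NumPy sketch; `power_method` is an illustrative name). At each step the trial vector is multiplied by A and rescaled by its largest-magnitude component, which becomes the current eigenvalue estimate:

```python
import numpy as np

def power_method(A, x0, tol=0.01, itmax=50):
    """Largest-magnitude eigenvalue and eigenvector, with the eigenvector
    scaled so its largest-magnitude component equals 1."""
    A = np.asarray(A, float)
    x = np.asarray(x0, float)
    lam = 0.0
    for _ in range(itmax):
        y = A @ x
        lam = y[np.argmax(np.abs(y))]   # signed component of largest magnitude
        x_new = y / lam
        if np.max(np.abs(x_new - x)) < tol:
            return lam, x_new
        x = x_new
    return lam, x

A = [[1, 3, -1], [3, 2, 4], [-1, 4, 10]]
lam, v = power_method(A, x0=[0, 0, 1])
print(lam, v)  # lam ≈ 11.65, v ≈ (0.022, 0.418, 1.0) at the 0.01 tolerance
```

Tightening `tol` drives the estimates toward the limiting values λ ≈ 11.66 and eigenvector ≈ (0.024, 0.421, 1)^T.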
