Chapter 3 Lecture
System of Linear Equations
Solution Methods
  Direct Methods
  Iterative Methods
Solution Techniques for Eigen-value Problems
  Eigen-values and Eigenvectors
1.1 Introduction
This chapter is about solving systems of linear equations, which have many applications in engineering and science. Consider a system of n linear equations as shown below.
a11 x1 + a12 x2 + a13 x3 + . . . + a1n xn = b1
a21 x1 + a22 x2 + a23 x3 + . . . + a2n xn = b2
a31 x1 + a32 x2 + a33 x3 + . . . + a3n xn = b3        (1.1)
  ⋮
an1 x1 + an2 x2 + an3 x3 + . . . + ann xn = bn
This system can be written in matrix form as

AX = B

where A is the n × n coefficient matrix and X and B are column vectors of size n; with these dimensions the matrix product AX is compatible with B. If B = 0 the system is called a homogeneous system of equations; otherwise it is non-homogeneous. A homogeneous system can be treated with the eigen-value method, leading to an eigen-value problem.
Definition 1.1.1 If a system of equations is satisfied simultaneously by at least one set of values, then it is consistent.
There are three row operations that are useful when solving systems of linear algebraic equations. These operations do not affect the solution of the system, so they can be used freely during the solution process whenever necessary. These are:
1. Scaling: any row of the system may be multiplied by a non-zero constant.
2. Pivoting: the order of the rows may be interchanged as required.
3. Elimination: any row of the system may be replaced by a weighted linear combination of that row with any other row.
step 1: The coefficient of x1 that is largest in absolute value (it may be positive or negative) is selected from all the equations, and the first equation is interchanged with the equation containing it. This largest value is called the pivot element, and the row containing it is called the pivot row. This row is used to eliminate the other coefficients of x1.
step 2: The numerically largest coefficient of x2 is selected from the remaining equations; the first equation is no longer considered. The second equation is then interchanged with the row containing this value, and that row is used to eliminate the other coefficients of x2, leaving the previously selected rows untouched. This procedure is continued until an upper triangular system is obtained.
Remark: The pivot element is selected to be the largest in absolute value in order to maximize the precision of the solution.
Example: Reduce the following augmented matrix to upper triangular form using Gaussian elimination with pivoting.

[  2  5  7  4 |   1 ]                 [ 10  5 −5  0 |  25 ]
[  4  4  1  4 |  12 ]   R1 ←→ R3      [  4  4  1  4 |  12 ]
[ 10  5 −5  0 |  25 ]   −−−−−−−→      [  2  5  7  4 |   1 ]
[ −2 −2  1 −3 | −10 ]                 [ −2 −2  1 −3 | −10 ]

R2 → R2 − (2/5)R1, R3 → R3 − (1/5)R1, R4 → R4 + (1/5)R1, followed by R2 ←→ R3 to pivot on the 4:

[ 10  5 −5  0 |  25 ]
[  0  4  8  4 |  −4 ]
[  0  2  3  4 |   2 ]
[  0 −1  0 −3 |  −5 ]

R3 → R3 − (1/2)R2 and R4 → R4 + (1/4)R2:

[ 10  5 −5  0 |  25 ]
[  0  4  8  4 |  −4 ]
[  0  0 −1  2 |   4 ]
[  0  0  2 −2 |  −6 ]

R3 ←→ R4 to pivot on the 2, then R4 → R4 + (1/2)R3:

[ 10  5 −5  0 |  25 ]
[  0  4  8  4 |  −4 ]
[  0  0  2 −2 |  −6 ]
[  0  0  0  1 |   1 ]
Computational Methods
© Numerical Methods
1.2 Solution Methods
On the other hand, if small changes in A and/or B cause only a small change in the solution of the system, the system is said to be stable (well conditioned). In an ill-conditioned system, by contrast, even round-off errors can badly affect the solution, and it is quite difficult to recognize an ill-conditioned system.
Exercise 1.2 Consider the system

[  100 −200 ] [x1]   [  100 ]
[ −200  401 ] [x2] = [ −100 ]

The system

[  7 −10 ] [x1]   [ 1   ]                            [  7 −10 ] [x1]   [ 1.01 ]
[ −5   7 ] [x2] = [ 0.7 ]   after a slight change    [ −5   7 ] [x2] = [ 0.69 ]

is ill-conditioned.
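A quick way to see such sensitivity numerically is to solve the first 2 × 2 system by Cramer's rule and then perturb the right-hand side slightly; the 1% perturbation of b1 below is our own choice for illustration:

```python
def solve_2x2(a11, a12, a21, a22, b1, b2):
    """Solve a 2x2 linear system by Cramer's rule."""
    det = a11 * a22 - a12 * a21
    x1 = (b1 * a22 - a12 * b2) / det
    x2 = (a11 * b2 - b1 * a21) / det
    return x1, x2

x = solve_2x2(100, -200, -200, 401, 100, -100)       # original data
x_pert = solve_2x2(100, -200, -200, 401, 101, -100)  # b1 changed by 1%
```

Here x comes out as (201, 100) while x_pert is (205.01, 102): a 1% change in a single entry of B already shifts x1 by about 2%, a symptom of the conditioning of the coefficient matrix.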
LU-Factorization (Decomposition)
To solve the general linear system AX = B by this method, factorize the coefficient matrix A into a product of two triangular matrices. This reduces the problem to solving two triangular linear systems. The method is called triangular decomposition; its variants include those of Crout, Doolittle and Cholesky.
Doolittle Version
Consider the system of equations AX = B. We factor the matrix A into two other matrices, a lower triangular matrix L and an upper triangular matrix U, such that A = LU. The method for solving the given system is described as follows.
Starting with the system AX = B, introduce a new variable Y such that Y = UX, so that AX = (LU)X = L(UX) = B ⇒ LY = B. Since L is a lower triangular matrix, this system is almost trivial to solve for the unknown Y. Once we have found the vector Y, we then solve the system Y = UX for X.
Now, the only thing that remains is to find the triangular factors L = (ℓij) and U = (uij). This is accomplished by the usual Gauss elimination method: when we eliminate the i-th row, j-th column entry of A (or its current equivalent), we replace the i-th row by Ri − αRj (i.e. Ri → Ri − αRj), and this multiplier α becomes the i-th row, j-th column entry of L (ℓij = α). This is illustrated by the following example.
Exercise 1.4 Solve the following system using LU-Factorization (Doolittle).

[ 1  2  3 ] [x1]   [ 0 ]
[ 3  4  1 ] [x2] = [ 6 ]
[ 1  0  1 ] [x3]   [ 1 ]

The main point here is decomposing the coefficient matrix A into lower and upper triangular matrices L and U such that A = LU by applying Gauss elimination. Solving LY = B by forward substitution gives Y = (0, 6, −5)^T, and then Y = UX gives

[ 1   2   3 ] [x1]   [  0 ]
[ 0  −2  −8 ] [x2] = [  6 ]
[ 0   0   6 ] [x3]   [ −5 ]

which is solved for X by backward substitution.
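A sketch of the Doolittle factorization (unit diagonal on L) together with the forward and backward substitutions, applied to Exercise 1.4; this is our own illustration, not code from the notes:

```python
def doolittle_lu(A):
    """Factor A = LU with L unit lower triangular (Doolittle)."""
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    U = [[0.0] * n for _ in range(n)]
    for i in range(n):
        L[i][i] = 1.0
        for j in range(i, n):       # row i of U
            U[i][j] = A[i][j] - sum(L[i][k] * U[k][j] for k in range(i))
        for j in range(i + 1, n):   # column i of L (the multipliers alpha)
            L[j][i] = (A[j][i] - sum(L[j][k] * U[k][i] for k in range(i))) / U[i][i]
    return L, U

def lu_solve(L, U, b):
    n = len(b)
    y = [0.0] * n
    for i in range(n):              # LY = B by forward substitution
        y[i] = (b[i] - sum(L[i][k] * y[k] for k in range(i))) / L[i][i]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):  # UX = Y by backward substitution
        x[i] = (y[i] - sum(U[i][k] * x[k] for k in range(i + 1, n))) / U[i][i]
    return x

A = [[1, 2, 3], [3, 4, 1], [1, 0, 1]]
L, U = doolittle_lu(A)
x = lu_solve(L, U, [0, 6, 1])
```

The factor U comes out as [[1, 2, 3], [0, −2, −8], [0, 0, 6]], matching the triangular system above, and the solution is x = (11/6, 1/3, −5/6)^T.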
Crout’s Method
Let us consider three equations in three unknowns:

a11 x1 + a12 x2 + a13 x3 = b1
a21 x1 + a22 x2 + a23 x3 = b2   ⇒ AX = B
a31 x1 + a32 x2 + a33 x3 = b3
Let A = LU, where

    [ ℓ11   0    0  ]           [ 1  u12  u13 ]
L = [ ℓ21  ℓ22   0  ]   and U = [ 0   1   u23 ]
    [ ℓ31  ℓ32  ℓ33 ]           [ 0   0    1  ]

so that

     [ ℓ11   0    0  ] [ 1  u12  u13 ]   [ a11 a12 a13 ]
LU = [ ℓ21  ℓ22   0  ] [ 0   1   u23 ] = [ a21 a22 a23 ] = A
     [ ℓ31  ℓ32  ℓ33 ] [ 0   0    1  ]   [ a31 a32 a33 ]

Since UX = V we have

[ 1  u12  u13 ] [x1]   [v1]
[ 0   1   u23 ] [x2] = [v2]
[ 0   0    1  ] [x3]   [v3]

and solve for x1, x2, x3 by backward substitution.
Let

     [ ℓ11   0    0  ] [ 1  u12  u13 ]   [ 1  1   1 ]
LU = [ ℓ21  ℓ22   0  ] [ 0   1   u23 ] = [ 4  3  −1 ] = A
     [ ℓ31  ℓ32  ℓ33 ] [ 0   0    1  ]   [ 3  5   3 ]

  [ ℓ11   ℓ11 u12         ℓ11 u13                 ]   [ 1  1   1 ]
⇒ [ ℓ21   ℓ21 u12 + ℓ22   ℓ21 u13 + ℓ22 u23       ] = [ 4  3  −1 ]
  [ ℓ31   ℓ31 u12 + ℓ32   ℓ31 u13 + ℓ32 u23 + ℓ33 ]   [ 3  5   3 ]

Equating entries gives ℓ11 = 1, u12 = 1, u13 = 1, ℓ21 = 4, ℓ22 = −1, u23 = 5, ℓ31 = 3, ℓ32 = 2 and ℓ33 = −10.
Suppose V = UX ⇒ LV = B; then

[ 1   0    0 ] [v1]   [1]        [v1]   [  1   ]
[ 4  −1    0 ] [v2] = [6]   ⇒    [v2] = [ −2   ]   by forward substitution.
[ 3   2  −10 ] [v3]   [4]        [v3]   [ −0.5 ]

Finally, solving UX = V by backward substitution gives X = (1, 0.5, −0.5)^T, the solution of AX = B.
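Crout's variant places the unit diagonal on U instead of L. A sketch in Python (our illustration), applied to the example above:

```python
def crout_lu(A):
    """Factor A = LU with U unit upper triangular (Crout)."""
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    U = [[0.0] * n for _ in range(n)]
    for j in range(n):
        U[j][j] = 1.0
        for i in range(j, n):       # column j of L
            L[i][j] = A[i][j] - sum(L[i][k] * U[k][j] for k in range(j))
        for i in range(j + 1, n):   # row j of U
            U[j][i] = (A[j][i] - sum(L[j][k] * U[k][i] for k in range(j))) / L[j][j]
    return L, U

A = [[1, 1, 1], [4, 3, -1], [3, 5, 3]]
L, U = crout_lu(A)
```

This reproduces L = [[1, 0, 0], [4, −1, 0], [3, 2, −10]] and U = [[1, 1, 1], [0, 1, 5], [0, 0, 1]], after which forward substitution on LV = (1, 6, 4)^T gives V = (1, −2, −0.5)^T as above.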
Jacobi’s Method
For this iterative scheme to converge to the true solution, the equations must satisfy the diagonal dominance criterion

|aii| > ∑_{j=1, j≠i}^{n} |aij|,   i = 1, 2, . . . , n
Procedure
Guess initial values xi^(0) for all the unknowns, then solve each equation in turn for xi^(1) using the xj^(0):

xi^(1) = bi/aii − ∑_{j=1, j≠i}^{n} (aij/aii) xj^(0)
In general, the value of xi^(r) at the r-th iteration is found from the values xj^(r−1) of the (r − 1)-th iteration. This iteration is continued until, for every i, the error satisfies

|xi^(r) − xi^(r−1)| < ε

where one may take ε = ε1 = 0.5 × 10^(−p) for a result correct to p decimal places, or the stricter ε = ε2 = 0.5 × 10^(−(p+1)).
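The procedure can be sketched as follows. The small diagonally dominant system used here is our own illustrative choice (it is not the exercise's system); its exact solution is (3, 2, 1)^T:

```python
def jacobi(A, b, x0, tol=0.5e-4, max_iter=200):
    """Jacobi iteration; requires A to be diagonally dominant."""
    n = len(A)
    for i in range(n):  # diagonal dominance check from the text
        assert abs(A[i][i]) > sum(abs(A[i][j]) for j in range(n) if j != i)
    x = list(x0)
    for _ in range(max_iter):
        # every component uses only values from the previous iteration
        x_new = [(b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
                 for i in range(n)]
        if max(abs(x_new[i] - x[i]) for i in range(n)) < tol:
            return x_new
        x = x_new
    return x

A = [[4, 1, 2], [1, 3, 1], [1, 2, 5]]
b = [16, 10, 12]
x = jacobi(A, b, [0.0, 0.0, 0.0])
```

The iterates converge to (3, 2, 1)^T; with the stopping test |xi^(r) − xi^(r−1)| < 0.5 × 10^(−4) the returned values agree with the exact solution to better than three decimal places.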
Solution: Let

    [ 5  −1  1 ]
A = [ 2   4  0 ]
    [ 1   1  5 ]
|X^(5) − X^(4)| = ( |x1^(5) − x1^(4)|, |x2^(5) − x2^(4)|, |x3^(5) − x3^(4)| )^T = (0.004, 0.007, 0.005)^T

Thus, correct to one decimal place, the solution of the given system is X^(5).
Gauss-Seidel Method
This method is a slight improvement of the Jacobi method. Unlike the Jacobi method, updated values of the xi are used as soon as they become available rather than the values from the previous iteration. First obtain x1^(1) from

x1^(1) = b1/a11 − ∑_{j=2}^{n} (a1j/a11) xj^(0)

and then get the remaining approximations using the general iterative formula

xi^(r) = bi/aii − ∑_{j=1}^{i−1} (aij/aii) xj^(r) − ∑_{j=i+1}^{n} (aij/aii) xj^(r−1)
Remark:
1. The rate of convergence of the Gauss-Seidel method is roughly twice that of the Jacobi method.
2. For this method to converge, the coefficient matrix of the system should be diagonally dominant.
x1^(2) = 0.25(16 − x2^(1) − 2x3^(1)) = 0.25(16 − 2.0 − 2(0.8)) = 3.1000
x2^(2) = (1/3)(10 − x1^(2) − x3^(1)) = (1/3)(10 − 3.1 − 0.8) = 2.0333
x3^(2) = 0.2(12 − x1^(2) − 2x2^(2)) = 0.2(12 − 3.1 − 2(2.0333)) = 0.9667

x1^(3) = 0.25(16 − x2^(2) − 2x3^(2)) = 0.25(16 − 2.0333 − 2(0.9667)) = 3.0083
x2^(3) = (1/3)(10 − x1^(3) − x3^(2)) = (1/3)(10 − 3.0083 − 0.9667) = 2.0083
x3^(3) = 0.2(12 − x1^(3) − 2x2^(3)) = 0.2(12 − 3.0083 − 2(2.0083)) = 0.9950

x1^(4) = 0.25(16 − x2^(3) − 2x3^(3)) = 0.25(16 − 2.0083 − 2(0.9950)) = 3.0004
x2^(4) = (1/3)(10 − x1^(4) − x3^(3)) = (1/3)(10 − 3.0004 − 0.9950) = 2.0015
x3^(4) = 0.2(12 − x1^(4) − 2x2^(4)) = 0.2(12 − 3.0004 − 2(2.0015)) = 0.9993

Clearly, as r → ∞, X^(r) → (3, 2, 1)^T.
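In code, the only change from the Jacobi sketch is that each updated component is used immediately. The iteration formulas above correspond to the system 4x1 + x2 + 2x3 = 16, x1 + 3x2 + x3 = 10, x1 + 2x2 + 5x3 = 12; the sketch below (ours, not from the notes) starts from X^(0) = (0, 0, 0)^T:

```python
def gauss_seidel(A, b, x0, tol=0.5e-4, max_iter=200):
    """Gauss-Seidel iteration: updated values are used immediately."""
    n = len(A)
    x = list(x0)
    for _ in range(max_iter):
        x_old = list(x)
        for i in range(n):
            # x[j] for j < i already holds this iteration's value
            x[i] = (b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
        if max(abs(x[i] - x_old[i]) for i in range(n)) < tol:
            break
    return x

A = [[4, 1, 2], [1, 3, 1], [1, 2, 5]]
b = [16, 10, 12]
x_first = gauss_seidel(A, b, [0.0, 0.0, 0.0], max_iter=1)  # one sweep only
x = gauss_seidel(A, b, [0.0, 0.0, 0.0])
```

The first sweep reproduces X^(1) = (4.0, 2.0, 0.8)^T used in the hand computation above, and the iterates converge to (3, 2, 1)^T.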
1.3 Solution Techniques for Eigen-value Problems
A scalar λ is an eigen-value of a matrix A, with corresponding non-zero eigen-vector X, if

AX = λX

Remark:
1. Eigen-values and eigen-vectors are defined only for square matrices.
2. The zero vector cannot be an eigen-vector even though A · 0 = λ · 0.
In Applied Mathematics I, you discussed the analytical solution of this problem. In this section, however, we discuss the numerical solution using the power method.
Power Method
The power method is an iterative method used to find the largest eigenvalue in magnitude, and the corresponding eigenvector, of a square matrix A.
Let A be the matrix whose eigenvalue and eigenvector are to be determined. The method starts from the relation AX = λX. Let X^(0) be a non-zero initial vector; we evaluate AX^(0) and write it as

AX^(0) = λ^(1) X^(1)

where λ^(1) is the component of AX^(0) that is largest in absolute value. Then λ^(1) is the first approximation to the eigenvalue and X^(1) is the first approximation to the corresponding eigenvector. Similarly, we evaluate AX^(1) and write the result as

AX^(1) = λ^(2) X^(2)

which gives the second approximation. This process is repeated until |X^(r) − X^(r−1)| is negligible. Then λ^(r) is the largest eigenvalue of A and X^(r) is the corresponding eigenvector.
Example 1.1 Find the largest eigenvalue and the corresponding eigenvector of the matrix

    [  1  3  −1 ]
A = [  3  2   4 ]
    [ −1  4  10 ]

starting with X^(0) = (0, 0, 1)^T as the initial eigenvector; take the tolerance limit as 0.01.
Solution
First approximation:
AX^(0) = A(0, 0, 1)^T = (−1, 4, 10)^T = 10 (−0.1, 0.4, 1.0)^T  ⇒  λ^(1) = 10 and X^(1) = (−0.1, 0.4, 1.0)^T
Second approximation:
AX^(1) = (0.1, 4.5, 11.7)^T = 11.7 (0.009, 0.385, 1.000)^T  ⇒  λ^(2) = 11.7 and X^(2) = (0.009, 0.385, 1.000)^T
Third approximation:
AX^(2) = (0.164, 4.797, 11.531)^T = 11.531 (0.014, 0.416, 1.000)^T  ⇒  λ^(3) = 11.531 and X^(3) = (0.014, 0.416, 1.000)^T
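A sketch of the power method in code (our illustration), normalizing by the component of largest magnitude at each step exactly as in the iterations above:

```python
def power_method(A, x0, tol=0.01, max_iter=200):
    """Dominant eigenvalue and eigenvector of A by the power method."""
    n = len(A)
    x = [float(v) for v in x0]
    lam = 0.0
    for _ in range(max_iter):
        y = [sum(A[i][j] * x[j] for j in range(n)) for i in range(n)]
        lam = max(y, key=abs)        # signed component of largest magnitude
        y = [v / lam for v in y]     # normalized eigenvector estimate
        if max(abs(y[i] - x[i]) for i in range(n)) < tol:
            return lam, y
        x = y
    return lam, x

A = [[1, 3, -1], [3, 2, 4], [-1, 4, 10]]
lam, v = power_method(A, [0, 0, 1], tol=1e-6)
```

With the example's tolerance of 0.01 the method stops after a few steps near the hand-computed values; tightening the tolerance as above, λ converges to about 11.662 with eigenvector approximately (0.025, 0.422, 1)^T.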