CH-3 Systems of Linear Equations: Chapter 9 and 10 in Textbook
Diagonal:              Tridiagonal:
    [ 1 0 0 0 ]            [ 1 2 0 0 ]
    [ 0 4 0 0 ]            [ 3 4 1 0 ]
    [ 0 0 0 0 ]            [ 0 1 4 1 ]
    [ 0 0 0 6 ]            [ 0 0 2 1 ]
MATRICES
Examples:

Symmetric:             Upper triangular:
    [ 2 1 1 ]              [ 1 2 1 3 ]
    [ 1 0 5 ]              [ 0 4 1 0 ]
    [ 1 5 4 ]              [ 0 0 4 1 ]
                           [ 0 0 0 1 ]
Matrix Operations
• Transposition
• Addition and Subtraction
• Multiplication
• Inversion
The Transpose of a Matrix: A'
The transpose of a matrix is a new matrix that is formed
by interchanging the rows and columns.

        [ a11 a12 ]
    A = [ a21 a22 ]        A' = [ a11 a21 a31 ]
        [ a31 a32 ]             [ a12 a22 a32 ]
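Since the slides give no implementation, here is a minimal sketch of transposition in Python (the function name `transpose` is my own; no libraries assumed):

```python
def transpose(A):
    """Return A': entry (i, j) of A becomes entry (j, i) of A'."""
    return [[A[i][j] for i in range(len(A))] for j in range(len(A[0]))]

# a 3x2 matrix, matching the a11..a32 pattern above; A' is 2x3
A = [[1, 2],
     [3, 4],
     [5, 6]]
At = transpose(A)   # [[1, 3, 5], [2, 4, 6]]
```

Transposing twice recovers the original matrix, which is a quick sanity check.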
Matrix Addition: C = A + B is formed element by element, where
    a11 + b11 = c11
    a12 + b12 = c12
    a21 + b21 = c21
    a22 + b22 = c22
    a31 + b31 = c31
    a32 + b32 = c32
Matrix Multiplication
To multiply a matrix by a scalar, simply multiply each
element of the matrix by the scalar quantity
Matrix Multiplication
To multiply a matrix by a matrix, we write
• AB (A times B)
This is pre-multiplying B by A, or post-multiplying A by B.
Matrix Multiplication (cont.)
In order to multiply matrices, they must be
CONFORMABLE
That is, the number of columns in A must equal
the number of rows in B
So,
A(m×n) · B(n×p) = C(m×p)
Matrix Multiplication (cont.)
(m × n) · (p × n) = cannot be done
(1 × n) · (n × 1) = a scalar (1 × 1)
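The conformability rule can be sketched in Python (the function name `matmul` and the check are my own):

```python
def matmul(A, B):
    """Multiply A (m x n) by B (n x p); raise if not conformable."""
    m, n, p = len(A), len(A[0]), len(B[0])
    if n != len(B):
        raise ValueError("not conformable: columns of A must equal rows of B")
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(p)]
            for i in range(m)]

# (1 x n) * (n x 1) gives a 1x1 result, i.e. a scalar
row = [[2, 3, 1]]
col = [[4], [5], [6]]
# matmul(row, col) -> [[29]]  (2*4 + 3*5 + 1*6)
```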
Example: expanding along the first column,

        [ 2  3 −1 ]       | 0 5 |       | 3 −1 |       | 3 −1 |
    det [ 1  0  5 ]  =  2 | 5 4 |  − 1  | 5  4 |  + 1  | 0  5 |
        [ 1  5  4 ]

    = 2(−25) − 1(12 + 5) + 1(15 − 0) = −52
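Cofactor expansion generalizes to any order by recursion. A minimal sketch (function name `det` is my own; the test matrix uses the signs as reconstructed in the example above):

```python
def det(M):
    """Determinant by cofactor expansion along the first row (recursive)."""
    n = len(M)
    if n == 1:
        return M[0][0]
    total = 0
    for j in range(n):
        # minor: delete row 0 and column j
        minor = [row[:j] + row[j + 1:] for row in M[1:]]
        total += (-1) ** j * M[0][j] * det(minor)
    return total
```

Expanding along the first row gives the same value as the first-column expansion in the slide, since the determinant is independent of the expansion row or column.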
Inverse of matrix
Example
Reading Assignment
• Augmented matrix
• Orthogonal matrix and vector
• Orthonormal matrix and vector
Linear Equations
Linear equations are common and important for survey problems
Matrices can be used to express these linear equations and aid in
the computation of unknown values
Example: n equations in n unknowns, where the aij are numerical
coefficients, the bi are constants, and the xj are unknowns:

    a11 x1 + a12 x2 + … + a1n xn = b1
    a21 x1 + a22 x2 + … + a2n xn = b2
        ⋮
    an1 x1 + an2 x2 + … + ann xn = bn
Linear Equations
The equations may be expressed in the form
AX = B
where
        [ a11 a12 … a1n ]         [ x1 ]              [ b1 ]
    A = [ a21 a22 … a2n ] ,   X = [ x2 ] ,   and  B = [ b2 ]
        [  ⋮   ⋮       ⋮ ]         [  ⋮ ]              [  ⋮ ]
        [ an1 an2 … ann ]         [ xn ]              [ bn ]

A-1 A X = A-1 B
Now since A-1 A = I
we get X = A-1 B

Example:
    [ 3 −1  1 ] [ x1 ]   [ 2 ]
    [ 2  1  0 ] [ x2 ] = [ 1 ]
    [ 1  2 −1 ] [ x3 ]   [ 3 ]
Linear Equations
When A-1 is computed, the equation X = A-1 B gives:
               x1 = 2,
Therefore:     x2 = −3,
               x3 = −7
Linear Equations
The values for the unknowns should be checked by substitution back into
the initial equations
x1 = 2,
x2 = −3,
x3 = −7
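The X = A⁻¹B recipe and the substitution check can be sketched with NumPy (the use of NumPy is my addition, not part of the slides):

```python
import numpy as np

A = np.array([[3.0, -1.0,  1.0],
              [2.0,  1.0,  0.0],
              [1.0,  2.0, -1.0]])
B = np.array([2.0, 1.0, 3.0])

X = np.linalg.inv(A) @ B          # X = A^-1 B
assert np.allclose(A @ X, B)      # check by substituting back into AX = B
```

In practice `np.linalg.solve(A, B)` is preferred over forming the inverse explicitly, but the inverse route mirrors the derivation above.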
The system
    x1 + 2 x2 = 3
    2 x1 + 4 x2 = 6
has an infinite number of solutions:
    [ x1 ]   [      a      ]
    [ x2 ] = [ 0.5(3 − a)  ]
is a solution for all a
Graphical Solution of Systems of Linear Equations
x1 + x2 = 3
x1 + 2 x2 = 5
Solution
x1=1, x2=2
Cramer’s Rule is Not Practical
Cramer's Rule can be used to solve the system:

         | 3  1 |                 | 1  3 |
         | 5  2 |                 | 1  5 |
    x1 = -------- = 1 ,     x2 = -------- = 2
         | 1  1 |                 | 1  1 |
         | 1  2 |                 | 1  2 |
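For a 2×2 system Cramer's Rule is easy to code directly (names `det2`, `x1`, `x2` are my own):

```python
def det2(M):
    """Determinant of a 2x2 matrix."""
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

A = [[1, 1],
     [1, 2]]
b = [3, 5]

D = det2(A)   # denominator: determinant of the coefficient matrix
x1 = det2([[b[0], A[0][1]], [b[1], A[1][1]]]) / D   # b replaces column 1
x2 = det2([[A[0][0], b[0]], [A[1][0], b[1]]]) / D   # b replaces column 2
```

For n unknowns this needs n + 1 determinants of order n, which is why the slide calls Cramer's Rule impractical for large systems.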
    [ a11 a12 a13 ] [ x1 ]   [ b1 ]       [ a11 a12  a13  ] [ x1 ]   [ b1  ]
    [ a21 a22 a23 ] [ x2 ] = [ b2 ]   ⇒   [  0  a22' a23' ] [ x2 ] = [ b2' ]
    [ a31 a32 a33 ] [ x3 ]   [ b3 ]       [  0   0   a33' ] [ x3 ]   [ b3' ]
Forward Elimination
For an n-equation system, the general formula to eliminate xk from
equations k+1 to n is:

    aij ← aij − (aik / akk) · akj ,   bi ← bi − (aik / akk) · bk ,   i = k+1, …, n
Backward Substitution

    xn = bn / an,n

    xn−1 = ( bn−1 − an−1,n xn ) / an−1,n−1

    xn−2 = ( bn−2 − an−2,n xn − an−2,n−1 xn−1 ) / an−2,n−2

In general:
                     n
    xi = ( bi −     Σ     ai,j xj ) / ai,i
                  j=i+1
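The general formula above translates directly into a loop that runs from the last row up (the function name `back_substitute` is my own):

```python
def back_substitute(a, b):
    """Solve an upper-triangular system a x = b, from the last row up."""
    n = len(b)
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        # sum of the already-solved unknowns: a[i][j] * x[j] for j > i
        s = sum(a[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (b[i] - s) / a[i][i]
    return x
```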
Naive Gaussian Elimination
o The method consists of two steps
o Forward Elimination: the system is reduced to upper
triangular form. A sequence of elementary operations is used.
    [ a11 a12 a13 ] [ x1 ]   [ b1 ]       [ a11 a12  a13  ] [ x1 ]   [ b1  ]
    [ a21 a22 a23 ] [ x2 ] = [ b2 ]   ⇒   [  0  a22' a23' ] [ x2 ] = [ b2' ]
    [ a31 a32 a33 ] [ x3 ]   [ b3 ]       [  0   0   a33' ] [ x3 ]   [ b3' ]

                    [ x1 ]   [ 1 ]
    The solution is [ x2 ] = [ 2 ]
                    [ x3 ]   [ 1 ]
Example 2:
Forward Elimination:
Step 1: Eliminate x1 from equations 2, 3, 4

    [  6  −2  2   4 ] [ x1 ]   [  16 ]
    [ 12  −8  6  10 ] [ x2 ] = [  26 ]
    [  3 −13  9   3 ] [ x3 ]   [ −19 ]
    [ −6   4  1 −18 ] [ x4 ]   [ −34 ]

    [ 6  −2  2   4 ] [ x1 ]   [  16 ]
    [ 0  −4  2   2 ] [ x2 ] = [  −6 ]
    [ 0 −12  8   1 ] [ x3 ]   [ −27 ]
    [ 0   2  3 −14 ] [ x4 ]   [ −18 ]
Example 2:
Forward Elimination:
Step 2: Eliminate x2 from equations 3, 4

    [ 6 −2  2   4 ] [ x1 ]   [  16 ]
    [ 0 −4  2   2 ] [ x2 ] = [  −6 ]
    [ 0  0  2  −5 ] [ x3 ]   [  −9 ]
    [ 0  0  4 −13 ] [ x4 ]   [ −21 ]

Step 3: Eliminate x3 from equation 4

    [ 6 −2  2   4 ] [ x1 ]   [  16 ]
    [ 0 −4  2   2 ] [ x2 ] = [  −6 ]
    [ 0  0  2  −5 ] [ x3 ]   [  −9 ]
    [ 0  0  0  −3 ] [ x4 ]   [  −3 ]
Example 2:
    [  6  −2  2   4 ] [ x1 ]   [  16 ]       [ 6 −2  2   4 ] [ x1 ]   [ 16 ]
    [ 12  −8  6  10 ] [ x2 ] = [  26 ]   ⇒   [ 0 −4  2   2 ] [ x2 ] = [ −6 ]
    [  3 −13  9   3 ] [ x3 ]   [ −19 ]       [ 0  0  2  −5 ] [ x3 ]   [ −9 ]
    [ −6   4  1 −18 ] [ x4 ]   [ −34 ]       [ 0  0  0  −3 ] [ x4 ]   [ −3 ]
Example 2:
Backward Substitution

    [ 6 −2  2   4 ] [ x1 ]   [ 16 ]
    [ 0 −4  2   2 ] [ x2 ] = [ −6 ]
    [ 0  0  2  −5 ] [ x3 ]   [ −9 ]
    [ 0  0  0  −3 ] [ x4 ]   [ −3 ]

Backward substitution gives x4 = 1, x3 = −2, x2 = 1, x1 = 3.
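The two steps of Example 2 can be combined into one routine. A sketch (the function name `naive_gauss` is my own; the 4×4 test system uses the signs as in the standard textbook version of this example):

```python
def naive_gauss(a, b):
    """Forward elimination to upper-triangular form, then back substitution.
    'Naive' = no pivoting, so a zero pivot would fail."""
    n = len(b)
    for k in range(n - 1):                     # forward elimination
        for i in range(k + 1, n):
            m = a[i][k] / a[k][k]              # multiplier
            for j in range(k, n):
                a[i][j] -= m * a[k][j]
            b[i] -= m * b[k]
    x = [0.0] * n                              # back substitution
    for i in range(n - 1, -1, -1):
        s = sum(a[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (b[i] - s) / a[i][i]
    return x

A = [[ 6.0,  -2.0, 2.0,   4.0],
     [12.0,  -8.0, 6.0,  10.0],
     [ 3.0, -13.0, 9.0,   3.0],
     [-6.0,   4.0, 1.0, -18.0]]
b = [16.0, 26.0, -19.0, -34.0]
x = naive_gauss(A, b)   # (3, 1, -2, 1)
```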
IMPORTANT OBSERVATION:
An ill-conditioned system is one with a determinant close to zero.
• If the determinant D = 0, the system is singular: it has no solution or
  infinitely many solutions.
• Scaling (multiplying the coefficients by the same value) does not change the
  equations, but it changes the value of the determinant in a significant way.
  However, it does not change the ill-conditioned state of the equations!
  DANGER! It may hide the fact that the system is ill-conditioned!!
• One way to find out: change the coefficients slightly, then recompute and compare.
COMPUTING THE DETERMINANT OF A MATRIX
USING GAUSSIAN ELIMINATION
The determinant of a matrix can be found using Gaussian elimination.
Here are the rules that apply:
• Interchanging two rows changes the sign of the determinant.
• Multiplying a row by a scalar multiplies the determinant by that scalar.
• Replacing any row by the sum of that row and the multiple of any other
row does NOT change the determinant.
• The determinant of a triangular matrix (upper or lower triangular) is
  the product of the diagonal elements, i.e.

          [ t11 t12 t13 ]
    D  =  [  0  t22 t23 ]  =  t11 · t22 · t33
          [  0   0  t33 ]
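The three rules above can be combined into a determinant routine: eliminate to triangular form, flip the sign on every row interchange, and multiply the diagonal. A sketch (the function name `det_by_elimination` is my own):

```python
def det_by_elimination(a):
    """Determinant via Gaussian elimination with row interchanges.
    Each swap flips the sign; row-replacement leaves D unchanged; the
    result is the product of the diagonal of the triangular form."""
    a = [row[:] for row in a]     # work on a copy
    n = len(a)
    sign = 1.0
    for k in range(n):
        p = max(range(k, n), key=lambda r: abs(a[r][k]))
        if a[p][k] == 0.0:
            return 0.0            # singular matrix: D = 0
        if p != k:
            a[k], a[p] = a[p], a[k]
            sign = -sign          # interchange rule
        for i in range(k + 1, n):
            m = a[i][k] / a[k][k]
            for j in range(k, n):
                a[i][j] -= m * a[k][j]
    d = sign
    for k in range(n):
        d *= a[k][k]
    return d
```

Unlike cofactor expansion, which costs O(n!), this runs in O(n³).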
Techniques for Improving Solutions
• Use of more significant figures – double precision arithmetic
• Pivoting
  If a pivot element is zero, the normalization step leads to division by zero.
  The same problem may arise when the pivot element is close to zero. The
  problem can be avoided by:
  • Partial pivoting
    Switching the rows below so that the largest element becomes the pivot element.
  • Complete pivoting
    Searching for the largest element in all rows and columns, then switching.
    This is rarely used because switching columns changes the order of the x's
    and adds significant complexity and computational overhead.
• Scaling
  Used to reduce round-off errors and improve accuracy.
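Partial pivoting is a small change to the elimination loop: before eliminating column k, swap in the row with the largest pivot candidate. A sketch (the function name `gauss_partial_pivot` is my own):

```python
def gauss_partial_pivot(a, b):
    """Gaussian elimination with partial pivoting: before eliminating
    column k, swap in the row (from k down) with the largest |a[i][k]|."""
    n = len(b)
    for k in range(n - 1):
        p = max(range(k, n), key=lambda i: abs(a[i][k]))
        a[k], a[p] = a[p], a[k]        # row switch (rows below only)
        b[k], b[p] = b[p], b[k]
        for i in range(k + 1, n):
            m = a[i][k] / a[k][k]
            for j in range(k, n):
                a[i][j] -= m * a[k][j]
            b[i] -= m * b[k]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(a[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (b[i] - s) / a[i][i]
    return x

# a system whose first pivot is zero: naive elimination would divide by 0
x = gauss_partial_pivot([[0.0, 1.0], [2.0, 1.0]], [3.0, 4.0])
```

Here the zero in position (1,1) forces a row swap, after which elimination proceeds normally.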
Gauss-Jordan Elimination
• It is a variation of Gauss elimination. The major
differences are:
– When an unknown is eliminated, it is eliminated from all
other equations rather than just the subsequent ones.
– All rows are normalized by dividing them by their pivot
elements.
– Elimination step results in an identity matrix.
– It is not necessary to employ back substitution to obtain
solution.
Gauss-Jordan Elimination: Example
    [  1  1  2 ] [ x1 ]   [  8 ]                       [  1  1  2 |  8 ]
    [ −1 −2  3 ] [ x2 ] = [  1 ]   Augmented matrix:   [ −1 −2  3 |  1 ]
    [  3  7  4 ] [ x3 ]   [ 10 ]                       [  3  7  4 | 10 ]

    R2 ← R2 − (−1)R1   [ 1  1  2 |   8 ]   Scaling R2:    [ 1  1  2 |   8 ]
    R3 ← R3 − (3)R1    [ 0 −1  5 |   9 ]   R2 ← R2/(−1)   [ 0  1 −5 |  −9 ]
                       [ 0  4 −2 | −14 ]                  [ 0  4 −2 | −14 ]

    R1 ← R1 − (1)R2    [ 1  0  7 |  17 ]   Scaling R3:    [ 1  0  7 |   17 ]
    R3 ← R3 − (4)R2    [ 0  1 −5 |  −9 ]   R3 ← R3/(18)   [ 0  1 −5 |   −9 ]
                       [ 0  0 18 |  22 ]                  [ 0  0  1 | 11/9 ]

    R1 ← R1 − (7)R3    [ 1  0  0 |  8.444 ]   RESULT:
    R2 ← R2 − (−5)R3   [ 0  1  0 | −2.889 ]   x1 = 8.444, x2 = −2.889, x3 = 1.222
                       [ 0  0  1 |  1.222 ]
Time complexity? O(n³)
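The three differences listed above fit in one short routine (the function name `gauss_jordan` is my own; no pivoting, as in the worked example):

```python
def gauss_jordan(a, b):
    """Normalize each pivot row, then eliminate that unknown from ALL
    other rows; [A|b] becomes [I|x], so no back substitution is needed."""
    n = len(b)
    for k in range(n):
        piv = a[k][k]
        a[k] = [v / piv for v in a[k]]       # normalize pivot row
        b[k] = b[k] / piv
        for i in range(n):
            if i != k:                       # eliminate above AND below
                m = a[i][k]
                a[i] = [a[i][j] - m * a[k][j] for j in range(n)]
                b[i] -= m * b[k]
    return b                                 # right-hand side now holds x

x = gauss_jordan([[ 1.0,  1.0, 2.0],
                  [-1.0, -2.0, 3.0],
                  [ 3.0,  7.0, 4.0]],
                 [8.0, 1.0, 10.0])   # about (8.444, -2.889, 1.222)
```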
LU Decomposition
• Gauss elimination solves [A]{X} = {B}.
• If L is a lower triangular matrix and U is an upper triangular matrix, then
  [A]{X} = {B} can be decomposed into two matrices [L] and [U] such that:
  1. [L][U] = [A], so ([L][U]){X} = {B}
  2. Consider [U]{X} = {D}. Then [L]{D} = {B} is used to generate an
     intermediate vector {D} by forward substitution.
  3. Then [U]{X} = {D} is used to get {X} by back substitution.

          [ u11 u12 u13 ]          [  1   0   0 ]
    [U] = [  0  u22 u23 ]    [L] = [ l21  1   0 ]
          [  0   0  u33 ]          [ l31 l32  1 ]
LU Decomposition
Step 2: Generate an intermediate vector {D} by forward substitution:

    [  1   0   0 ] [ d1 ]   [ b1 ]
    [ l21  1   0 ] [ d2 ] = [ b2 ]
    [ l31 l32  1 ] [ d3 ]   [ b3 ]

Step 3: Get {X} by back substitution.
          [ 3   −0.1    −0.2   ]          [ 1        0         0 ]
    [U] = [ 0    7.003  −0.293 ]    [L] = [ 0.03333  1         0 ]
          [ 0    0      10.012 ]          [ 0.1000  −0.02713   1 ]
LU Decomposition - Example (cont'd)
Use the previous [L] and [U] matrices to solve the system:

    [ 3    −0.1  −0.2 ] [ x1 ]   [  7.85 ]
    [ 0.1   7    −0.3 ] [ x2 ] = [ −19.3 ]
    [ 0.3  −0.2  10   ] [ x3 ]   [  71.4 ]
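The factor-then-substitute procedure can be sketched end to end (the function names `lu_decompose` and `lu_solve` are my own; Doolittle form with 1s on L's diagonal, no pivoting, matching the [L] and [U] shown above):

```python
def lu_decompose(a):
    """Doolittle factorization A = L U (1s on L's diagonal, no pivoting)."""
    n = len(a)
    L = [[0.0] * n for _ in range(n)]
    U = [[0.0] * n for _ in range(n)]
    for i in range(n):
        L[i][i] = 1.0
        for j in range(i, n):       # row i of U
            U[i][j] = a[i][j] - sum(L[i][k] * U[k][j] for k in range(i))
        for j in range(i + 1, n):   # column i of L
            L[j][i] = (a[j][i] - sum(L[j][k] * U[k][i] for k in range(i))) / U[i][i]
    return L, U

def lu_solve(L, U, b):
    """Step 2: forward substitution L d = b; Step 3: back substitution U x = d."""
    n = len(b)
    d = [0.0] * n
    for i in range(n):
        d[i] = b[i] - sum(L[i][k] * d[k] for k in range(i))
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (d[i] - sum(U[i][k] * x[k] for k in range(i + 1, n))) / U[i][i]
    return x

L, U = lu_decompose([[3.0, -0.1, -0.2],
                     [0.1,  7.0, -0.3],
                     [0.3, -0.2, 10.0]])
x = lu_solve(L, U, [7.85, -19.3, 71.4])   # about (3, -2.5, 7)
```

Once [L] and [U] are computed, each additional right-hand side costs only the two substitution passes, which is the practical advantage of LU over repeating the full elimination.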