
CH-3

Systems of Linear Equations

Chapters 9 and 10 in the textbook


VECTORS
Vector: a one-dimensional array of numbers
Examples:

row vector: [1 4 2]        column vector: [2]
                                          [1]

Identity (unit) vectors:

       [1]        [0]        [0]        [0]
  e1 = [0],  e2 = [1],  e3 = [0],  e4 = [0]
       [0]        [0]        [1]        [0]
       [0]        [0]        [0]        [1]
MATRICES
Matrix: a two-dimensional array of numbers
Examples:

zero matrix          identity matrix
  [0 0 0]              [1 0]
  [0 0 0]              [0 1]

diagonal             tridiagonal
  [1 0 0 0]            [1 2 0 0]
  [0 4 0 0]            [3 4 1 0]
  [0 0 0 0]            [0 1 4 1]
  [0 0 0 6]            [0 0 2 1]
MATRICES
Examples:

symmetric            upper triangular
  [2 1 1]              [1 2 1 3]
  [1 0 5]              [0 4 1 0]
  [1 5 4]              [0 0 4 1]
                       [0 0 0 1]
Matrix Operations
Transposition
Addition and Subtraction
Multiplication
Inversion
The Transpose of a Matrix: A'
The transpose of a matrix is a new matrix that is formed
by interchanging the rows and columns.

The transpose of A is denoted by A' or A^T


Example of a transpose:

     [a11 a12]
A =  [a21 a22]   Thus,   A' = [a11 a21 a31]
     [a31 a32]                [a12 a22 a32]

If A = A', then A is symmetric
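In code, transposition just swaps the row and column indices. A minimal Python sketch (the 3 x 2 matrix is an arbitrary illustration; the symmetric matrix is the one from the earlier examples):

```python
def transpose(A):
    """Interchange the rows and columns of A (a list of rows)."""
    return [[A[i][j] for i in range(len(A))] for j in range(len(A[0]))]

A = [[1, 2],
     [3, 4],
     [5, 6]]              # 3 x 2
print(transpose(A))       # [[1, 3, 5], [2, 4, 6]] -- 2 x 3

# A = A' means A is symmetric
S = [[2, 1, 1],
     [1, 0, 5],
     [1, 5, 4]]
print(transpose(S) == S)  # True
```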


Addition and Subtraction
Two matrices may be added (or subtracted) iff they have
the same order.

Simply add (or subtract) the corresponding elements. So,


A + B = C yields
Addition and Subtraction (cont.)
[a11 a12]   [b11 b12]   [c11 c12]
[a21 a22] + [b21 b22] = [c21 c22]
[a31 a32]   [b31 b32]   [c31 c32]

where
a11 + b11 = c11        a12 + b12 = c12
a21 + b21 = c21        a22 + b22 = c22
a31 + b31 = c31        a32 + b32 = c32
Matrix Multiplication
To multiply a scalar times a matrix, simply multiply each
element of the matrix by the scalar quantity

    [a11 a12]   [k a11  k a12]
k * [a21 a22] = [k a21  k a22]

Matrix Multiplication
To multiply a matrix times a matrix, we write
• AB (A times B)
This is pre-multiplying B by A, or post-multiplying A by B.
Matrix Multiplication (cont.)
In order to multiply matrices, they must be
CONFORMABLE
That is, the number of columns in A must equal
the number of rows in B
So,
A(m x n) * B(n x p) = C(m x p)
Matrix Multiplication (cont.)
(m x n) * (p x n): cannot be done (columns of the first != rows of the second)
(1 x n) * (n x 1): a scalar (1 x 1)

a11 a12 a13  b11 b12  c11 c12 


a a a  x b b  = c c 
 21 22 23   21 22   21 22 
a31 a32 a33  b31 b32  c31 c32 

c11 = a11b11 + a12 b21 + a13b31


where
c12 = a11b12 + a12 b22 + a13b32
c 21 = a21b11 + a22 b21 + a23b31
c 22 = a21b12 + a22 b22 + a23b32
c 31 = a31b11 + a32b21 + a33b31
c 32 = a31b12 + a32b22 + a33b32
Matrix Multiplication- an example
Thus
[1 4 7]   [1 4]   [c11 c12]   [30 66]
[2 5 8] x [2 5] = [c21 c22] = [36 81]
[3 6 9]   [3 6]   [c31 c32]   [42 96]
where
c11 = 1 * 1 + 4 * 2 + 7 * 3 = 30
c12 = 1 * 4 + 4 * 5 + 7 * 6 = 66
c 21 = 2 * 1 + 5 * 2 + 8 * 3 = 36
c 22 = 2 * 4 + 5 * 5 + 8 * 6 = 81
c 31 = 3 * 1 + 6 * 2 + 9 * 3 = 42
c 32 = 3 * 4 + 6 * 5 + 9 * 6 = 96
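The row-into-column rule above translates directly into nested loops (a sum over the inner index). A Python sketch using the same 3 x 3 and 3 x 2 matrices:

```python
def matmul(A, B):
    """Multiply an (m x n) matrix A by an (n x p) matrix B."""
    m, n, p = len(A), len(B), len(B[0])
    assert all(len(row) == n for row in A), "matrices are not conformable"
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(p)]
            for i in range(m)]

A = [[1, 4, 7], [2, 5, 8], [3, 6, 9]]
B = [[1, 4], [2, 5], [3, 6]]
print(matmul(A, B))   # [[30, 66], [36, 81], [42, 96]]
```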
Determinant of a Matrix
Defined for square matrices only
Example (expanding along the first column):

    [ 2  3 -1]     |0 5|     |3 -1|     |3 -1|
det [ 1  0  5] = 2 |5 4| - 1 |5  4| - 1 |0  5|
    [-1  5  4]

             = 2(-25) - 1(12 + 5) - 1(15 - 0) = -82
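For small matrices, the minor/cofactor expansion used above can be written as a short recursive function. A Python sketch (here expanding along the first row, which gives the same value as the column expansion; the 3 x 3 matrix reproduces the example above, det = -82):

```python
def det(A):
    """Determinant by cofactor expansion along the first row (fine for small n)."""
    n = len(A)
    if n == 1:
        return A[0][0]
    total = 0
    for j in range(n):
        # minor: delete row 0 and column j
        minor = [row[:j] + row[j+1:] for row in A[1:]]
        total += (-1) ** j * A[0][j] * det(minor)
    return total

A = [[ 2, 3, -1],
     [ 1, 0,  5],
     [-1, 5,  4]]
print(det(A))   # -82
```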
Inverse of a Matrix
Reading Assignment

• Augmented matrix
• Orthogonal matrix and vector
• Orthonormal matrix and vector
Linear Equations
Linear equations are common and important for survey problems
Matrices can be used to express these linear equations and aid in
the computation of unknown values
Example: n equations in n unknowns, the aij are numerical
coefficients, the bi are constants and the xj are unknowns
a11 x1 + a12 x2 + ... + a1n xn = b1
a21 x1 + a22 x2 + ... + a2n xn = b2
        ...
an1 x1 + an2 x2 + ... + ann xn = bn
Linear Equations
The equations may be expressed in the form
AX = B
where
    [a11 a12 ... a1n]       [x1]            [b1]
A = [a21 a22 ... a2n],  X = [x2],  and  B = [b2]
    [ :   :       : ]       [ :]            [ :]
    [an1 an2 ... ann]       [xn]            [bn]

       n x n               n x 1           n x 1

Number of unknowns = number of equations = n


Linear Equations
If the determinant is nonzero, the equation can be solved to produce n
numerical values for x that satisfy all the simultaneous equations.
To solve, multiply both sides of the equation by A-1, which exists because
the determinant is nonzero:

A-1 A X = A-1 B

Now since A-1 A = I,

we get X = A-1 B

So once the inverse of the coefficient matrix is found, the unknowns X are
determined.
Linear Equations
Example:  3x1 -  x2 + x3 = 2
          2x1 +  x2      = 1
           x1 + 2x2 - x3 = 3

The equations can be expressed as

[3 -1  1] [x1]   [2]
[2  1  0] [x2] = [1]
[1  2 -1] [x3]   [3]
Linear Equations
When A-1 is computed the equation becomes

 0.5 − 0.5 0.5  2  2 


X = A−1 B = − 1.0 2.0 − 1.0  1  = − 3
− 1.5 3.5 − 2.5 3  7 

x1 = 2,
Therefore: x2 = −3,
x3 = −7
Linear Equations
The values for the unknowns should be checked by substitution back into
the initial equations

x1 = 2,
x2 = −3,
x3 = −7

3 × (2) − (−3) + (−7) = 2


2 × (2) + (−3) = 1
(2) + 2 × (−3) − (−7) = 3
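Both the solve step X = A-1 B and the substitution check can be scripted. A Python sketch using the inverse given above (matvec is a small helper, not from the slides):

```python
def matvec(A, x):
    """Multiply matrix A (list of rows) by vector x."""
    return [sum(a * v for a, v in zip(row, x)) for row in A]

A_inv = [[ 0.5, -0.5,  0.5],
         [-1.0,  2.0, -1.0],
         [-1.5,  3.5, -2.5]]
b = [2, 1, 3]
x = matvec(A_inv, b)          # X = A^-1 B
print(x)                      # [2.0, -3.0, -7.0]

# check by substituting back into the original equations
A = [[3, -1, 1], [2, 1, 0], [1, 2, -1]]
print(matvec(A, x))           # [2.0, 1.0, 3.0] -- matches b
```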
Solutions of Linear Equations
A set of equations is inconsistent if there exists no solution to the
system of equations:
x1 + 2x2 = 3
2x1 + 4x2 = 5
These equations are inconsistent.
Solutions of Linear Equations
Some systems of equations may have infinite number of solutions

x1 + 2x2 = 3
2x1 + 4x2 = 6
have an infinite number of solutions:

[x1]   [     a     ]
[x2] = [0.5 (3 - a)]   is a solution for all a
Graphical Solution of Systems of Linear Equations
x1 + x2 = 3
x1 + 2x2 = 5

Solution (the intersection point of the two lines): x1 = 1, x2 = 2
Cramer’s Rule is Not Practical
Cramer's Rule can be used to solve the system
     |3 1|              |1 3|
     |5 2|              |1 5|
x1 = ----- = 1,    x2 = ----- = 2
     |1 1|              |1 1|
     |1 2|              |1 2|

Cramer's Rule is not practical for large systems.

To solve an N by N system requires (N+1)(N-1)N! multiplications.
To solve a 30 by 30 system, 2.38 x 10^35 multiplications are needed.
It can be used if the determinants are computed in an efficient way.
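For a 2 x 2 system Cramer's Rule is perfectly practical; the cost explosion only appears for large N. A Python sketch solving the graphical example above:

```python
def cramer_2x2(A, b):
    """Solve a 2x2 system by Cramer's Rule; fails if the determinant is zero."""
    D = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    if D == 0:
        raise ValueError("singular system")
    x1 = (b[0] * A[1][1] - A[0][1] * b[1]) / D   # replace column 1 by b
    x2 = (A[0][0] * b[1] - b[0] * A[1][0]) / D   # replace column 2 by b
    return x1, x2

print(cramer_2x2([[1, 1], [1, 2]], [3, 5]))   # (1.0, 2.0)
```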
Naive Gaussian Elimination
The method consists of two steps:
Forward Elimination: the system is reduced to upper triangular
form. A sequence of elementary operations is used.
Backward Substitution: Solve the system starting from the last
variable.

[a11 a12 a13] [x1]   [b1]      [a11 a12  a13 ] [x1]   [b1 ]
[a21 a22 a23] [x2] = [b2]  =>  [ 0  a22' a23'] [x2] = [b2']
[a31 a32 a33] [x3]   [b3]      [ 0   0   a33'] [x3]   [b3']
Forward Elimination
For an n-equation system, the general formula to eliminate xk from
equations k+1 to n is (for i = k+1, ..., n):

  factor = a_i,k / a_k,k
  a_i,j  = a_i,j - factor * a_k,j    (j = k+1, ..., n; a_i,k becomes 0)
  b_i    = b_i - factor * b_k
Backward Substitution
x_n = b_n / a_n,n

x_n-1 = (b_n-1 - a_n-1,n x_n) / a_n-1,n-1

x_n-2 = (b_n-2 - a_n-2,n x_n - a_n-2,n-1 x_n-1) / a_n-2,n-2

In general:
              n
x_i = ( b_i - SUM a_i,j x_j ) / a_i,i
            j=i+1
Naive Gaussian Elimination
o The method consists of two steps
o Forward Elimination: the system is reduced to upper
triangular form. A sequence of elementary operations is used.

[a11 a12 a13] [x1]   [b1]      [a11 a12  a13 ] [x1]   [b1 ]
[a21 a22 a23] [x2] = [b2]  =>  [ 0  a22' a23'] [x2] = [b2']
[a31 a32 a33] [x3]   [b3]      [ 0   0   a33'] [x3]   [b3']

o Backward Substitution: Solve the system starting from the last
  variable: solve for xn, xn-1, ..., x1.
Example 1:
Step 2: Backward Substitution
x3 = b3 / a3,3 = 13/13 = 1

x2 = (b2 - a2,3 x3) / a2,2 = (-6 + 4 x3) / 1 = -2

x1 = (b1 - a1,2 x2 - a1,3 x3) / a1,1 = (8 - 2 x2 - 3 x3) / a1,1 = 1

                [x1]   [ 1]
The solution is [x2] = [-2]
                [x3]   [ 1]
Example 2:
Forward Elimination:

[ 6  -2  2   4] [x1]   [ 16]
[12  -8  6  10] [x2] = [ 26]
[ 3 -13  9   3] [x3]   [-19]
[-6   4  1 -18] [x4]   [-34]

Part 1: Forward Elimination

Step 1: Eliminate x1 from equations 2, 3, 4

[6  -2  2   4] [x1]   [ 16]
[0  -4  2   2] [x2] = [ -6]
[0 -12  8   1] [x3]   [-27]
[0   2  3 -14] [x4]   [-18]
Example 2:
Forward Elimination:
Step 2: Eliminate x2 from equations 3, 4

[6 -2  2   4] [x1]   [ 16]
[0 -4  2   2] [x2] = [ -6]
[0  0  2  -5] [x3]   [ -9]
[0  0  4 -13] [x4]   [-21]

Step 3: Eliminate x3 from equation 4

[6 -2  2   4] [x1]   [ 16]
[0 -4  2   2] [x2] = [ -6]
[0  0  2  -5] [x3]   [ -9]
[0  0  0  -3] [x4]   [ -3]
Example 2:

Summary of the Forward Elimination:

[ 6  -2  2   4] [x1]   [ 16]      [6 -2  2   4] [x1]   [ 16]
[12  -8  6  10] [x2] = [ 26]  =>  [0 -4  2   2] [x2] = [ -6]
[ 3 -13  9   3] [x3]   [-19]      [0  0  2  -5] [x3]   [ -9]
[-6   4  1 -18] [x4]   [-34]      [0  0  0  -3] [x4]   [ -3]
Example 2:
Backward Substitution

[6 -2  2   4] [x1]   [ 16]
[0 -4  2   2] [x2] = [ -6]
[0  0  2  -5] [x3]   [ -9]
[0  0  0  -3] [x4]   [ -3]

Solve for x4, then x3, ..., then x1:

x4 = -3/-3 = 1,    x3 = (-9 + 5(1))/2 = -2

x2 = (-6 - 2(-2) - 2(1))/(-4) = 1,    x1 = (16 + 2(1) - 2(-2) - 4(1))/6 = 3
Pseudo-Code:
Forward Elimination:

Do k = 1 to n-1
  Do i = k+1 to n
    factor = ai,k / ak,k
    Do j = k+1 to n
      ai,j = ai,j - factor * ak,j
    End Do
    bi = bi - factor * bk
  End Do
End Do

Back Substitution:

xn = bn / an,n
Do i = n-1 downto 1
  sum = bi
  Do j = i+1 to n
    sum = sum - ai,j * xj
  End Do
  xi = sum / ai,i
End Do
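The two pseudo-code phases translate almost line-for-line into Python. A sketch of naive Gaussian elimination (no pivoting, so it assumes nonzero pivots), checked against Example 2 from the slides:

```python
def gauss_solve(A, b):
    """Naive Gaussian elimination followed by back substitution.
    Follows the pseudo-code above; A and b are modified in place."""
    n = len(A)
    # forward elimination: reduce to upper triangular form
    for k in range(n - 1):
        for i in range(k + 1, n):
            factor = A[i][k] / A[k][k]
            for j in range(k + 1, n):
                A[i][j] -= factor * A[k][j]
            A[i][k] = 0.0
            b[i] -= factor * b[k]
    # back substitution: solve for xn, xn-1, ..., x1
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = b[i] - sum(A[i][j] * x[j] for j in range(i + 1, n))
        x[i] = s / A[i][i]
    return x

# Example 2 from the slides
A = [[6, -2, 2, 4], [12, -8, 6, 10], [3, -13, 9, 3], [-6, 4, 1, -18]]
b = [16, 26, -19, -34]
print(gauss_solve(A, b))   # [3.0, 1.0, -2.0, 1.0]
```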
Pitfalls of Elimination Methods
Division by zero
It is possible that during both elimination and back-
substitution phases a division by zero can occur.
For example:
      0 + 2x2 + 3x3 =  8          [0 2 3]
4x1 + 6x2 + 7x3 = -3         A =  [4 6 7]
2x1 +  x2 + 6x3 =  5              [2 1 6]
Solution: pivoting
Pitfalls (cont.)
Round-off errors
• Because computers carry only a limited number of significant
figures, round-off errors will occur and they will propagate from one
iteration to the next.
• This problem is especially important when large numbers of
equations (100 or more) are to be solved.

• Always use double-precision numbers/arithmetic. It is slow but needed
for correctness!
• It is also a good idea to substitute your results back into the original
equations and check whether a substantial error has occurred.
Pitfalls (cont.)
ill-conditioned systems - small changes in coefficients result in large
changes in the solution. Alternatively, a wide range of answers
can approximately satisfy the equations.
(Well-conditioned systems – small changes in coefficients result
in small changes in the solution)
Problem: Since round off errors can induce small changes in the
coefficients, these changes can lead to large solution errors in ill-conditioned
systems.
Example:
x1 + 2x2 = 10
1.1x1 + 2x2 = 10.4

     |10   2|
     |10.4 2|     2(10) - 2(10.4)   -0.8
x1 = -------- = ----------------- = ---- = 4,    x2 = 3
     |1    2|      1(2) - 2(1.1)    -0.2
     |1.1  2|

With a slight change in one coefficient (1.1 -> 1.05):
x1 + 2x2 = 10
1.05x1 + 2x2 = 10.4

     |10   2|
     |10.4 2|     2(10) - 2(10.4)   -0.8
x1 = -------- = ----------------- = ---- = 8,    x2 = 1
     |1    2|     1(2) - 2(1.05)    -0.1
     |1.05 2|
ill-conditioned systems (cont.) –
• Surprisingly, substitution of the erroneous values, x1=8 and x2=1, into the
original equation will not reveal their incorrect nature clearly:
x1 + 2x2 = 10 8+2(1) = 10 (the same!)
1.1x1 + 2x2 = 10.4 1.1(8)+2(1)=10.8 (close!)

IMPORTANT OBSERVATION:
An ill-conditioned system is one with a determinant close to zero.
• If the determinant D = 0, the system is singular: there is no unique
solution (either none or infinitely many).
• Scaling (multiplying the coefficients by the same value) does not change the
equations but changes the value of the determinant in a significant way.
However, it does not change the ill-conditioned state of the equations!
DANGER! It may hide the fact that the system is ill-conditioned!!

How can we find out whether a system is ill-conditioned or not?


Not easy! Luckily, most engineering systems yield well-conditioned results!

• One way to find out: change the coefficients slightly and recompute & compare
COMPUTING THE DETERMINANT OF A MATRIX
USING GAUSSIAN ELIMINATION
The determinant of a matrix can be found using Gaussian elimination.
Here are the rules that apply:
• Interchanging two rows changes the sign of the determinant.
• Multiplying a row by a scalar multiplies the determinant by that scalar.
• Replacing any row by the sum of that row and a multiple of any other
row does NOT change the determinant.
• The determinant of a triangular matrix (upper or lower triangular) is
the product of the diagonal elements, i.e.

      [t11 t12 t13]
  D = [ 0  t22 t23] = t11 t22 t33
      [ 0   0  t33]
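These rules give an O(n^3) determinant algorithm: eliminate to triangular form, track the row swaps, and multiply the diagonal. A Python sketch (with partial pivoting, whose row interchanges each flip the sign):

```python
def det_by_elimination(A):
    """Determinant via Gaussian elimination: reduce to upper triangular
    form, multiply the diagonal, and flip the sign per row interchange."""
    A = [row[:] for row in A]        # work on a copy
    n = len(A)
    sign = 1.0
    for k in range(n - 1):
        # partial pivoting: bring the largest |a[i][k]| into the pivot row
        p = max(range(k, n), key=lambda i: abs(A[i][k]))
        if A[p][k] == 0:
            return 0.0               # whole column is zero: singular
        if p != k:
            A[k], A[p] = A[p], A[k]
            sign = -sign             # rule 1: a row swap changes the sign
        for i in range(k + 1, n):
            factor = A[i][k] / A[k][k]
            for j in range(k, n):    # rule 3: row replacement, det unchanged
                A[i][j] -= factor * A[k][j]
    d = sign
    for i in range(n):               # rule 4: product of the diagonal
        d *= A[i][i]
    return d

print(det_by_elimination([[2, 3, -1], [1, 0, 5], [-1, 5, 4]]))  # -82, up to rounding
```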
Techniques for Improving Solutions
• Use of more significant figures - double precision arithmetic
• Pivoting
  If a pivot element is zero, the normalization step leads to division by
  zero. The same problem may arise when the pivot element is close to zero.
  The problem can be avoided by:
  • Partial pivoting
    Switching the rows below so that the largest element is the pivot element.
  • Complete pivoting
    Searching for the largest element in all rows and columns, then switching.
    This is rarely used because switching columns changes the order of the x's
    and adds significant complexity and overhead.
• Scaling
  Used to reduce round-off errors and improve accuracy.
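Partial pivoting is a single row swap per elimination step. A Python sketch (the function name pivot_rows is illustrative, not from the slides), using the division-by-zero example above where a11 = 0:

```python
def pivot_rows(A, b, k):
    """Partial pivoting: swap row k with the row (at or below k) whose
    entry in column k has the largest absolute value."""
    p = max(range(k, len(A)), key=lambda i: abs(A[i][k]))
    if p != k:
        A[k], A[p] = A[p], A[k]
        b[k], b[p] = b[p], b[k]

A = [[0, 2, 3], [4, 6, 7], [2, 1, 6]]
b = [8, -3, 5]
pivot_rows(A, b, 0)        # a11 = 0 would divide by zero; swap in the 4
print(A[0], b[0])          # [4, 6, 7] -3
```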
Gauss-Jordan Elimination
• It is a variation of Gauss elimination. The major
differences are:
– When an unknown is eliminated, it is eliminated from all
other equations rather than just the subsequent ones.
– All rows are normalized by dividing them by their pivot
elements.
– Elimination step results in an identity matrix.
– It is not necessary to employ back substitution to obtain
solution.
Gauss-Jordan Elimination: Example
1 1 2  x1  8  1 1 2| 8 
− 1 − 2 3  x  = 1  Augmented Matrix : − 1 − 2 3 | 1 
  2   
 3 7 4  x3  10  3 7 4 |10
1 1 2 | 8  1 1 2 | 8 
 0 −1 5 | 9  Scaling R2:
0 1 −5| −9 
R2 R2 - (-1)R1
  R2 R2/(-1)  
R3 R3 - ( 3)R1 0 4 −2| −14  0 4 −2| −14 
R1 R1 - (1)R2 1 0 7 | 17  1 0 7 | 17 
0 1 − 5 | − 9  0 1 − 5 | − 9 
  Scaling R3:
 
R3 R3-(4)R2 0 0 18 | 22  R3 R3/(18) 0 0 1 |11 / 9
R1 R1 - (7)R3 1 0 0 | 8.444  RESULT:
R2 R2-(-5)R3 0 1 0 | − 2.888
  x1=8.45, x2=-2.89, x3=1.23
0 0 1 | 1.222 
Time Complexity: O(n^3)
LU Decomposition
• Gauss elimination solves [A]{x} = {B}

• It becomes inefficient when these equations must be solved repeatedly for
different values of {B}

• LU decomposition works on the matrix [A] and the vector {B} separately.

• LU decomposition is very useful when the vector of variables {x} must be
found for different parameter vectors {B}, since the forward elimination
of [A] is done only once and is not performed on {B}.
LU Decomposition

If
  L: lower triangular matrix
  U: upper triangular matrix
then [A]{X} = {B} can be decomposed into two matrices [L] and [U] such that:

1. [L][U] = [A], so ([L][U]){X} = {B}
2. Consider [U]{X} = {D}. Then [L]{D} = {B} is used to generate the
   intermediate vector {D} by forward substitution.
3. Then [U]{X} = {D} is used to get {X} by back substitution.

As in Gauss elimination, LU decomposition must employ pivoting to avoid
division by zero and to minimize round-off errors. The pivoting is done
immediately after computing each column.
Summary of LU Decomposition
LU Decomposition
System of linear equations [A]{x} = {B}:

[a11 a12 a13] [x1]   [b1]
[a21 a22 a23] [x2] = [b2]
[a31 a32 a33] [x3]   [b3]

Step 1: Decomposition [L][U] = [A]

      [u11 u12 u13]         [ 1   0   0]
[U] = [ 0  u22 u23]   [L] = [l21  1   0]
      [ 0   0  u33]         [l31 l32  1]
LU Decomposition
Step 2: Generate an intermediate vector {D} by forward substitution:

[ 1   0   0] [d1]   [b1]
[l21  1   0] [d2] = [b2]
[l31 l32  1] [d3]   [b3]

Step 3: Get {X} by back substitution:

[a11 a12  a13 ] [x1]   [d1]
[ 0  a'22 a'23] [x2] = [d2]
[ 0   0  a''33] [x3]   [d3]
LU Decomposition-Example
 3 − 0.1 − 0.2   3 −0.1 −0.2 
 
[A] = 0.1 7 − 0.3
0.3 − 0.2 10 
⇒ 0 7.003 −0.293
 
  0 −0.19 10.02 
0.1 0.3
= = 0.03333; =
3 3
= 0.1000
−0.19
= = = −0.02713
7.003

3 − 0.1 − 0.2   1 0 0
[U ] = 0 7.003 − 0.293 [L ] = 0.03333 1 0 
0 0 10 . 012   0.1000 −.02713 1 
 
LU Decomposition-Example (cont’d)
Use previous L and D matrices to solve the system:
 3 − 0.1 − 0.2   x1   7.85 
0.1   x  = − 19.3
 7 − 0 .31 2   
0.3 − 0.2 10   x3   71.4 

Step 2: Find the intermediate vector {D} by forward substitution

 1 0 0  d1   7.85  d1   7.85 


0.0333         
 1 0 d 2  = − 19.3 ⇒ d 2  = − 19.5617 
0.1000 − 0.02713 1 d 3   71.4  d 3   70.0843 
LU Decomposition-Example (cont’d)
Step 3: Get {X} by back substitution.

3 − 0.1 − 0.2   x1   7.85   x1   3 


0 7.0033 − 0.2933  x  = − 19.5617  ⇒  x  =  − 2.5 
  2     2  
0 0 10.012   x3   70.0843   x3  7.00003
LU Decomposition and substitution algorithm
% Decomposition Step
% (pivot is a helper function, not shown, that applies partial pivoting)
for k = 1:n-1
    [a,o] = pivot(a,o,k,n);
    for i = k+1:n
        a(i,k) = a(i,k)/a(k,k);
        a(i,k+1:n) = a(i,k+1:n) - a(i,k)*a(k,k+1:n);
    end
end

% Forward Substitution
d(1) = bn(1);
for i = 2:n
    d(i) = bn(i) - a(i,1:i-1)*(d(1:i-1))';
end

% Back Substitution
x(n) = d(n)/a(n,n);
for i = n-1:-1:1
    x(i) = (d(i) - a(i,i+1:n)*x(i+1:n)')/a(i,i);
end
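The MATLAB fragments above (minus the pivot helper, which the slides do not show) correspond to the following self-contained Python sketch of Doolittle LU decomposition without pivoting, checked against the worked example. Note that in exact arithmetic the solution of that system is (3, -2.5, 7); the slides' 7.00003 comes from their rounded intermediate values:

```python
def lu_decompose(A):
    """Doolittle LU decomposition without pivoting (a sketch; a robust
    version would pivot, as the slides note). Returns (L, U)."""
    n = len(A)
    U = [row[:] for row in A]
    L = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    for k in range(n - 1):
        for i in range(k + 1, n):
            L[i][k] = U[i][k] / U[k][k]          # store the factor in L
            for j in range(k, n):
                U[i][j] -= L[i][k] * U[k][j]     # eliminate in U
    return L, U

def lu_solve(L, U, b):
    """Solve L d = b by forward substitution, then U x = d by back substitution."""
    n = len(b)
    d = [0.0] * n
    for i in range(n):
        d[i] = b[i] - sum(L[i][j] * d[j] for j in range(i))
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (d[i] - sum(U[i][j] * x[j] for j in range(i + 1, n))) / U[i][i]
    return x

A = [[3, -0.1, -0.2], [0.1, 7, -0.3], [0.3, -0.2, 10]]
L, U = lu_decompose(A)
x = lu_solve(L, U, [7.85, -19.3, 71.4])
print([round(v, 5) for v in x])   # [3.0, -2.5, 7.0]
```

Once L and U are stored, a new right-hand side {B} costs only the two O(n^2) substitution sweeps, which is the point made in the LU Decomposition introduction.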
