Lecture 8 & 9: Gauss Elimination and LU Decomposition
Course Website
https://sites.google.com/view/kporwal/teaching/mtl107
Basic concepts: linear system of equations
▶ Find x = (x1, x2)⊤ which satisfies
      a11 x1 + a12 x2 = b1
      a21 x1 + a22 x2 = b2
or,
      Ax = b
where
      A = ( a11  a12 )
          ( a21  a22 ).
Numerical Methods for Computational Science and Engineering
Basic concepts: linear system of equations (cont.)
Solving a 2 × 2 system.
▶ Unique solution if and only if the lines are not parallel.
We can write Ax = b as
      a1 x1 + a2 x2 + · · · + an xn = b,
where a1, . . . , an are the columns of A.
Let
      A = ( 1  1 )
          ( 3  3 ).
Then
      null(A)  = { α (1, −1)⊤ : α ∈ R },
      range(A) = { β (1, 3)⊤ : β ∈ R }.
Give all solutions of Ax = b with b = (2, 6)⊤.
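As a quick check (a minimal Python sketch, not part of the slides), one can verify that every multiple of (1, −1)⊤ lies in null(A) and that x = (2, 0)⊤ + α (1, −1)⊤ solves Ax = b for every α:

```python
def matvec(A, x):
    """Multiply a 2x2 matrix by a 2-vector."""
    return [A[0][0]*x[0] + A[0][1]*x[1],
            A[1][0]*x[0] + A[1][1]*x[1]]

A = [[1.0, 1.0], [3.0, 3.0]]
b = [2.0, 6.0]

# Every multiple of (1, -1) is mapped to zero: it spans null(A).
for alpha in (-2.0, 0.5, 3.0):
    assert matvec(A, [alpha, -alpha]) == [0.0, 0.0]

# The whole family x = (2, 0) + alpha*(1, -1) satisfies Ax = b.
for alpha in (-1.0, 0.0, 2.0):
    assert matvec(A, [2.0 + alpha, -alpha]) == b
```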
Example (Almost singularity)
Let’s look at the linear system of equations
      ( 1+ε  1 ) ( x1 )   ( 2+ε )
      ( 3    3 ) ( x2 ) = ( 6   ),      where 0 < ε ≪ 1.
The unique solution is x = (1, 1)⊤.
The system matrix above is almost singular, i.e., a small perturbation
makes it singular. The nearby singular problem has many solutions, some
of them far away from x: an ill-conditioned problem.
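The sensitivity can be illustrated numerically (a Python sketch using exact rational arithmetic; the perturbation size δ = 10⁻⁶ is chosen purely for illustration):

```python
from fractions import Fraction

def solve2(a11, a12, a21, a22, b1, b2):
    """Solve a 2x2 system by Cramer's rule in exact rational arithmetic."""
    det = a11*a22 - a12*a21
    return [(b1*a22 - a12*b2)/det, (a11*b2 - b1*a21)/det]

eps = Fraction(1, 10**8)
one, three, six = Fraction(1), Fraction(3), Fraction(6)

# Unperturbed system: the solution is exactly (1, 1).
x = solve2(1 + eps, one, three, three, 2 + eps, six)
assert x == [1, 1]

# Perturb the right-hand side by a tiny delta = 1e-6 ...
delta = Fraction(1, 10**6)
y = solve2(1 + eps, one, three, three, 2 + eps, six + delta)
# ... and the solution moves by about delta/(3*eps) = 100/3: far from (1, 1).
assert abs(y[0] - 1) > 30
```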
Vector norms
Manhattan norm (1-norm): ∥x∥1 = |x1| + · · · + |xn|.
The 2-norm is invariant under orthogonal transformations: ∥Qx∥2 = ∥x∥2 whenever Q⊤Q = I.
Matrix norms (cont.)
The last identity comes from the observation that the supremum is
attained at x = (±1, ±1, · · · , ±1)⊤, with the sign of each entry chosen
according to the sign of the corresponding entry in the row of A with
the largest row sum.
Matrix norms (cont.)
u⊤ v = 0                       (orthogonal vectors)
Q⊤ Q = I.  Hence, also Q−1 = Q⊤, and
      ∥Qx∥2 = ∥x∥2.
Hence,
      ∥Q∥2 = ∥Q−1∥2 = 1.
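A small Python sketch (the rotation matrix and test vector are chosen for illustration) confirms that an orthogonal Q preserves the 2-norm:

```python
import math

def two_norm(v):
    """Euclidean norm of a vector."""
    return math.sqrt(sum(c*c for c in v))

def matvec(Q, x):
    return [sum(Q[i][j]*x[j] for j in range(len(x))) for i in range(len(Q))]

# A plane rotation is orthogonal: Q'Q = I, so ||Qx||_2 = ||x||_2.
t = 0.73  # arbitrary angle
Q = [[math.cos(t), -math.sin(t)],
     [math.sin(t),  math.cos(t)]]
x = [3.0, 4.0]   # ||x||_2 = 5
assert abs(two_norm(matvec(Q, x)) - two_norm(x)) < 1e-12
```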
Linear systems: Problem statement
or,
Ax = b
The matrix elements aik and the right-hand side elements bi are
given. We are looking for the unknowns xk .
Direct vs. iterative methods
Ax = b
A is given, real, nonsingular, n × n and b is real, given vector.
Such problems are ubiquitous!
Two types of solution approaches:
1. Direct method: yield exact solution in absence of roundoff
error.
▶ Variations of Gaussian elimination.
2. Iterative method: iterate in a similar fashion to what we do
for nonlinear problems.
▶ Use only when direct methods are ineffective.
Existence and uniqueness of LU decomposition
Theorem
If A is nonsingular, then one can find a row permutation P such
that PA satisfies the conditions of the previous theorem, that is PA
= LU exists and is unique.
Gaussian elimination for Ax = b
x1   x2   x3   x4  | 1
a11  a12  a13  a14 | b1
a21  a22  a23  a24 | b2
a31  a32  a33  a34 | b3
a41  a42  a43  a44 | b4

x1   x2    x3    x4   | 1
a11  a12   a13   a14  | b1
0    a22′  a23′  a24′ | b2′
0    a32′  a33′  a34′ | b3′
0    a42′  a43′  a44′ | b4′

1. Permute rows i = 2, ..., 4 (if necessary) such that a22′ ≠ 0.
   This is the next pivot.
2. Subtract multiples li2′ = ai2′/a22′ of row 2 from row i, i = 3, 4.
3. Set aik′′ = ai k′ − li2′ a2k′,  k, i = 3, 4.
4. Set bi′′ = bi′ − li2′ b2′,  i = 3, 4.
Gaussian elimination for Ax = b (cont.)
x1   x2    x3     x4    | 1
a11  a12   a13    a14   | b1
0    a22′  a23′   a24′  | b2′
0    0     a33′′  a34′′ | b3′′
0    0     a43′′  a44′′ | b4′′

1. Permute rows i = 3, 4 (if necessary) such that a33′′ ≠ 0.
   This is the next pivot.
2. Subtract the multiple l43′′ = a43′′/a33′′ of row 3 from row 4.
3. Set a44′′′ = a44′′ − l43′′ a34′′.
4. Set b4′′′ = b4′′ − l43′′ b3′′.
Gaussian elimination for Ax = b (cont.)
x1   x2   x3   x4  | 1             x1   x2    x3     x4     | 1
u11  u12  u13  u14 | c1            a11  a12   a13    a14    | b1
0    u22  u23  u24 | c2     ⇐⇒    0    a22′  a23′   a24′   | b2′
0    0    u33  u34 | c3            0    0     a33′′  a34′′  | b3′′
0    0    0    u44 | c4            0    0     0      a44′′′ | b4′′′

Storing the multipliers in the eliminated positions gives

x1   x2    x3     x4     | 1
a11  a12   a13    a14    | b1
l21  a22′  a23′   a24′   | b2′
l31  l32′  a33′′  a34′′  | b3′′
l41  l42′  l43′′  a44′′′ | b4′′′

U = L3−1 P3 L2−1 P2 L1−1 P1 A
U = L3−1 (P3 L2−1 P3−1)(P3 P2 L1−1 P2−1 P3−1)(P3 P2 P1) A
LU = PA.
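The whole elimination with row interchanges can be sketched in Python (a plain-list implementation for illustration; the 4 × 4 test matrix is assumed, not from the slides), verifying LU = PA:

```python
def lu_pp(A):
    """LU factorization with partial pivoting: returns P, L, U with L*U = P*A.
    A sketch of the elimination described above, without error handling."""
    n = len(A)
    U = [row[:] for row in A]
    L = [[0.0]*n for _ in range(n)]
    P = [[float(i == j) for j in range(n)] for i in range(n)]
    for k in range(n - 1):
        # Pivot row: largest |U[i][k]| among rows i >= k.
        p = max(range(k, n), key=lambda i: abs(U[i][k]))
        U[k], U[p] = U[p], U[k]
        P[k], P[p] = P[p], P[k]
        L[k], L[p] = L[p], L[k]   # swap already-computed multipliers too
        for i in range(k + 1, n):
            L[i][k] = U[i][k] / U[k][k]
            for j in range(k, n):
                U[i][j] -= L[i][k] * U[k][j]
    for i in range(n):
        L[i][i] = 1.0
    return P, L, U

def matmul(X, Y):
    n, m, p = len(X), len(Y), len(Y[0])
    return [[sum(X[i][k]*Y[k][j] for k in range(m)) for j in range(p)]
            for i in range(n)]

A = [[2.0, 1.0, 1.0, 0.0],
     [4.0, 3.0, 3.0, 1.0],
     [8.0, 7.0, 9.0, 5.0],
     [6.0, 7.0, 9.0, 8.0]]
P, L, U = lu_pp(A)
PA, LU = matmul(P, A), matmul(L, U)
assert all(abs(PA[i][j] - LU[i][j]) < 1e-9 for i in range(4) for j in range(4))
```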
Since
      det(A) = det(P−1) det(L) det(U)
and
      det(L) = 1,
and
      det(P−1) = det(P) = (−1)^V,
where V is the number of row permutations in the Gaussian elimination,
we have
      det(A) = (−1)^V · u11 u22 · · · unn.
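This determinant formula can be checked with a short Python sketch (the elimination code and the test matrices are illustrative, not from the slides):

```python
def det_via_lu(A):
    """det(A) = (-1)^V * prod(u_ii), where V counts the row swaps."""
    n = len(A)
    U = [row[:] for row in A]
    V = 0
    for k in range(n - 1):
        p = max(range(k, n), key=lambda i: abs(U[i][k]))
        if p != k:
            U[k], U[p] = U[p], U[k]
            V += 1
        for i in range(k + 1, n):
            l = U[i][k] / U[k][k]
            for j in range(k, n):
                U[i][j] -= l * U[k][j]
    d = (-1.0)**V
    for i in range(n):
        d *= U[i][i]
    return d

# det([[0, 1], [1, 1]]) = -1: one row swap, u11 = u22 = 1.
assert abs(det_via_lu([[0.0, 1.0], [1.0, 1.0]]) - (-1.0)) < 1e-12
# det([[2, 1], [1, 3]]) = 5: no swap, u11 = 2, u22 = 2.5.
assert abs(det_via_lu([[2.0, 1.0], [1.0, 3.0]]) - 5.0) < 1e-12
```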
for k=1:n-1
    for i=k+1:n
        l(i,k) = a(i,k)/a(k,k);               % multiplier for row i
        for j=k+1:n
            a(i,j) = a(i,j) - l(i,k)*a(k,j);  % a is overwritten by U
        end
    end
end
Complexity: LU factorization
      = 1/2 n(n − 1) + 1/3 n(n − 1)(2n − 1)
      = 2/3 n³ − 1/2 n² − 1/6 n
      = 2/3 n³ + O(n²)
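The count can be verified by instrumenting the three loops (a Python sketch mirroring the MATLAB loops above: one division per multiplier, one multiplication and one subtraction per updated entry):

```python
def lu_flops(n):
    """Count the operations executed by the elimination loops."""
    divs = mults = subs = 0
    for k in range(1, n):               # k = 1 .. n-1
        for i in range(k + 1, n + 1):
            divs += 1                   # l(i,k) = a(i,k)/a(k,k)
            for j in range(k + 1, n + 1):
                mults += 1              # l(i,k)*a(k,j)
                subs += 1               # a(i,j) - ...
    return divs + mults + subs

def formula(n):
    # 1/2 n(n-1) + 1/3 n(n-1)(2n-1), exactly as derived above
    return n*(n - 1)//2 + n*(n - 1)*(2*n - 1)//3

assert all(lu_flops(n) == formula(n) for n in range(2, 12))
```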
Algorithm: forward and backward substitution
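The substitution algorithms can be sketched in Python (an illustrative implementation; the 2 × 2 factors used in the check are assumed, with A = LU = [[1, 1], [3, 5]]):

```python
def forward_sub(L, b):
    """Solve L y = b for unit lower-triangular L, top row first."""
    n = len(b)
    y = [0.0]*n
    for i in range(n):
        y[i] = b[i] - sum(L[i][j]*y[j] for j in range(i))
    return y

def backward_sub(U, y):
    """Solve U x = y for upper-triangular U, bottom row first."""
    n = len(y)
    x = [0.0]*n
    for i in range(n - 1, -1, -1):
        x[i] = (y[i] - sum(U[i][j]*x[j] for j in range(i + 1, n))) / U[i][i]
    return x

# Solve L U x = b with L = [[1,0],[3,1]], U = [[1,1],[0,2]], b = (2, 10).
y = forward_sub([[1.0, 0.0], [3.0, 1.0]], [2.0, 10.0])
x = backward_sub([[1.0, 1.0], [0.0, 2.0]], y)
assert x == [0.0, 2.0]   # check: A x = (2, 10) for A = [[1,1],[3,5]]
```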
Gaussian elimination with partial pivoting
LU = PA
               ( 0  1 ) ( x1 )   ( 4 )
Ax = b  ⇐⇒    ( 1  1 ) ( x2 ) = ( 7 )

x1  x2 | 1              x1  x2 | 1
0   1  | 4      →       1   1  | 7
1   1  | 7              0   1  | 4
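In Python, the same 2 × 2 example looks as follows (a minimal sketch of one partial-pivoting step on the augmented system):

```python
rows = [[0.0, 1.0, 4.0],            # augmented tableau [A | b]
        [1.0, 1.0, 7.0]]
if abs(rows[0][0]) < abs(rows[1][0]):   # partial pivoting: bring the
    rows[0], rows[1] = rows[1], rows[0] # larger pivot to the top
l21 = rows[1][0] / rows[0][0]
rows[1] = [rows[1][j] - l21*rows[0][j] for j in range(3)]
x2 = rows[1][2] / rows[1][1]
x1 = (rows[0][2] - rows[0][1]*x2) / rows[0][0]
assert (x1, x2) == (3.0, 4.0)
```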
Need for pivoting (cont.)
x1       x2      | 1
0.00035  1.2654  | 3.5267
1.2547   1.3182  | 6.8541

Eliminating without row interchange:

    x1       x2       | 1
→   0.00035  1.2654   | 3.5267      with l21 = 1.2547/0.00035 ≈ 3584.9.
    0        −4535.0  | −12636

Backsubstitution gives
      x2 = −12636/(−4535.0) ≈ 2.7863,
but the exact solution (to five digits) is x1 = 2.5354, x2 = 2.7863, and
backsubstituting for x1 in the first row loses accuracy.

Interchanging the rows first:

    x1      x2      | 1
→   1.2547  1.3182  | 6.8541        with l21 = 0.00027895.
    0       1.2650  | 3.5248

      x2 = 2.7864
      x1 = (6.8541 − 1.3182 × 2.7864)/1.2547 ≈ 3.1811/1.2547 ≈ 2.5353
There is a deviation from the exact solution in the last digit only.
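The example can be reproduced by simulating five-significant-digit decimal arithmetic in Python (a sketch that rounds every intermediate result, as the hand computation above does):

```python
def rd(x):
    """Round to 5 significant decimal digits."""
    return float(f"{x:.4e}")

a11, a12, b1 = 0.00035, 1.2654, 3.5267
a21, a22, b2 = 1.2547, 1.3182, 6.8541

# Without pivoting: the huge multiplier wipes out the information in row 2.
l = rd(a21 / a11)                                  # 3584.9
u22 = rd(a22 - rd(l * a12)); c2 = rd(b2 - rd(l * b1))
x2 = rd(c2 / u22)                                  # 2.7863
x1 = rd(rd(b1 - rd(a12 * x2)) / a11)               # far from the exact 2.5354

# With the rows interchanged: the multiplier is tiny, the result accurate.
lp = rd(a11 / a21)                                 # 0.00027895
v22 = rd(a12 - rd(lp * a22)); d2 = rd(b1 - rd(lp * b2))
y2 = rd(d2 / v22)                                  # 2.7864
y1 = rd(rd(b2 - rd(a22 * y2)) / a21)               # 2.5353
```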
GEPP stability
An = (aij)ni,j=1  with  aij =  1   if i = j or j = n,
                        aij = −1   if i > j,
                        aij =  0   otherwise.

       (  1   0   0   0   0   1 )
       ( −1   1   0   0   0   1 )
A6 =   ( −1  −1   1   0   0   1 )
       ( −1  −1  −1   1   0   1 )
       ( −1  −1  −1  −1   1   1 )
       ( −1  −1  −1  −1  −1   1 )

Partial pivoting triggers no row interchanges, and the entries in the
last column double in every elimination step, growing up to 2^(n−1).
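The growth is easy to observe in a Python sketch (plain Gaussian elimination on A6; partial pivoting would perform the same steps, since no subdiagonal entry exceeds the pivot in magnitude):

```python
def growth_matrix(n):
    """The matrix A_n defined above."""
    return [[1.0 if (i == j or j == n - 1) else (-1.0 if i > j else 0.0)
             for j in range(n)] for i in range(n)]

def eliminate(A):
    """Gaussian elimination without row interchanges, returning U."""
    U = [row[:] for row in A]
    n = len(U)
    for k in range(n - 1):
        for i in range(k + 1, n):
            l = U[i][k] / U[k][k]
            for j in range(k, n):
                U[i][j] -= l * U[k][j]
    return U

U = eliminate(growth_matrix(6))
# The last column doubles at every step: 1, 2, 4, 8, 16, 32 = 2^(n-1).
assert [row[-1] for row in U] == [1.0, 2.0, 4.0, 8.0, 16.0, 32.0]
```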
Complete pivoting
Definition
A matrix A ∈ Rn×n is diagonally dominant, if
      |aii| ≥ ∑k≠i |aik| ,     i = 1, ..., n.
Theorem
If A is nonsingular and diagonally dominant then the LU
factorization can be computed without pivoting.
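The key step of the proof below, that diagonal dominance survives one elimination step, can be checked numerically (a Python sketch with an assumed 3 × 3 test matrix):

```python
def is_diag_dominant(A):
    """Check |a_ii| >= sum_{k != i} |a_ik| for every row i."""
    n = len(A)
    return all(abs(A[i][i]) >= sum(abs(A[i][k]) for k in range(n) if k != i)
               for i in range(n))

def one_step(A):
    """One elimination step: a_ik^(1) = a_ik - a_i1*a_1k/a_11, i,k = 2..n."""
    n = len(A)
    return [[A[i][k] - A[i][0]*A[0][k]/A[0][0] for k in range(1, n)]
            for i in range(1, n)]

A = [[4.0, 1.0, 2.0],
     [1.0, 5.0, 3.0],
     [2.0, 1.0, 6.0]]
assert is_diag_dominant(A)
assert is_diag_dominant(one_step(A))  # the reduced 2x2 system is dominant again
```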
Proof
We show that after reduction of the first row, the reduced system
is again diagonally dominant. We have
      aik(1) = aik − (ai1 a1k)/a11 ,     i, k = 2, ..., n.
For the diagonal elements we get the estimate
      |aii(1)| = |aii − (ai1 a1i)/a11| ≥ |aii| − |(ai1 a1i)/a11| ,     i = 2, ..., n.