APM2613/LNS04/0/2023

Tutorial letter LNS04/0/2023

APM2613

Year module

Department of Mathematical Sciences

This tutorial letter contains important information:

• Lesson 4 - Solution of Linear Systems: Iterative Methods

Additional notes to supplement the prescribed content in Chapter 7 of the textbook.

• Please read this information along with Chapter 7 to get highlights of the concepts discussed in these sections.

University of South Africa
1 Lesson 4: Iterative Methods for Linear Systems
1.1 Objectives
The objectives of this Lesson are:

• To highlight further useful theory and properties of matrices;

• To highlight the study of iterative methods for solving linear systems: the Jacobi method, the Gauss-Seidel method and relaxation techniques;

• To highlight aspects of the error analysis of these methods.

1.2 Learning Outcomes


At the end of this Lesson you should be able to

• Calculate and apply the notions of matrix and vector norms.

• Implement the Jacobi and Gauss-Seidel methods, manually and by computer, to solve a system of linear equations.

• Implement relaxation techniques, manually and by computer, to solve a system of linear equations.

• Identify relationships between the various iterative methods.

• Apply vector and matrix norms in the stopping criterion and error analysis for iterative schemes.

• Use the condition number and other properties of matrices to determine convergence properties of iterative techniques.

1.3 Introduction
Chapter 7 of the textbook continues with the discussion of the solution of the linear system Ax = b, written out as

    [ a_11  a_12  · · ·  a_1n ] [ x_1 ]   [ b_1 ]
    [ a_21  a_22  · · ·  a_2n ] [ x_2 ]   [ b_2 ]
    [   :     :            :  ] [  :  ] = [  :  ]            (1)
    [ a_n1  a_n2  · · ·  a_nn ] [ x_n ]   [ b_n ]

but now using iterative methods. The techniques involve starting with an initial approximation to the solution and repeatedly substituting it into a set of iterative equations until a required level of accuracy is reached, provided the sequence of iterates converges to a solution.

Before going into the actual techniques or algorithms, further ground work is needed in terms of some
concepts of linear algebra.


1.4 A Brief Recap of Linear Algebra


The chapter kicks off by discussing some useful notions or concepts of linear algebra: vector norms and
matrix norms in Section 7.1. A read through this section helps with understanding the terminology used in
the chapter. Of particular note are the definitions of the norms of a vector and of a matrix, and that the most
commonly used norms are the l2 and l∞ norms (see also the definition of the general p-norm given in the
previous lesson).

The distance between two vectors, defined in Definition 7.4 of the textbook, becomes important when measuring the difference between an approximate solution and the true solution, or between two successive solution vectors in the sequence x^(k) of approximations.
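If you want to check your hand computations, the following minimal sketch shows these definitions in code. Python with NumPy is used purely as an illustrative choice, and the two vectors are arbitrary values standing in for successive iterates x^(k−1) and x^(k).

    import numpy as np

    # Two hypothetical successive iterates of a solution vector
    x_prev = np.array([1.0, 2.0, -1.0])   # stands in for x^(k-1)
    x_curr = np.array([1.1, 1.9, -0.9])   # stands in for x^(k)

    # l2 and l-infinity norms of the current iterate
    l2_norm = np.sqrt(np.sum(x_curr**2))    # same value as np.linalg.norm(x_curr, 2)
    linf_norm = np.max(np.abs(x_curr))      # same value as np.linalg.norm(x_curr, np.inf)

    # l-infinity distance between the two iterates (Definition 7.4)
    distance = np.max(np.abs(x_curr - x_prev))

    print(l2_norm, linf_norm, distance)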

A point on notation
Notation is of critical importance in unpacking the formulas used in the textbook. A few examples follow:

1. Understanding the notation adopted in the textbook to distinguish between the i-th component of a vector and the k-th iterate of a scheme. So, for example, whereas

    x = (x_1, x_2, . . . , x_n)

is used to denote an n-vector x with components x_i, i = 1, 2, . . . , n, x^(k) is used to denote the k-th iterate of the solution vector x. Thus

    x^(k) = (x_1^(k), x_2^(k), . . . , x_i^(k), . . . , x_n^(k))

is a vector whose components are the result of k iterations of the initial vector x^(0).

2. Being able to read the meaning of expanded expressions such as

    ∥A∥∞ = max_{1≤i≤n} Σ_{j=1}^{n} |a_ij|,

which reads as 'the infinity norm of A is the maximum over the rows (i = 1 to n) of the sum of the magnitudes of the elements in each row (j = 1 to n)'. This helps you to do the computations systematically, as the sketch after this list illustrates. Note also such distinctions as

    Σ_{j=1}^{n} |a_ij| ≠ Σ_{j=1}^{n} a_ij (in general).
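To make the row-sum reading above concrete, here is a minimal Python/NumPy sketch (the matrix is an arbitrary illustrative one) that computes ∥A∥∞ exactly as the formula reads, and also shows that the sum of magnitudes differs from the magnitude of the sum.

    import numpy as np

    A = np.array([[ 1.0, -2.0,  3.0],
                  [-4.0,  5.0, -6.0],
                  [ 7.0, -8.0,  9.0]])

    # For each row i, sum |a_ij| over j = 1..n, then take the maximum over the rows
    row_sums = np.sum(np.abs(A), axis=1)   # the n row sums: 6, 15, 24
    inf_norm = np.max(row_sums)            # 24; agrees with np.linalg.norm(A, np.inf)

    # Sum of magnitudes vs magnitude of the sum, for the first row
    print(np.sum(np.abs(A[0])))   # |1| + |-2| + |3| = 6
    print(abs(np.sum(A[0])))      # |1 - 2 + 3| = 2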

1.5 Iterative Methods


Iterative techniques are alternative methods for solving linear systems. They involve starting with an initial approximation x^(0) to the solution x, and then generating a sequence of vectors {x^(k)}, k = 0, 1, 2, . . ., which 'hopefully' converges to x. Most iterative schemes involve a conversion of the system Ax = b into an equivalent system of the form

    x = Tx + c                                            (2)

for some n × n matrix T and vector c. The sequence of approximate solution vectors is generated iteratively by computing

    x^(k+1) = Tx^(k) + c,    k = 0, 1, 2, . . .           (3)

This equation is comparable with the iterative equation x_{k+1} = g(x_k) previously discussed for iterative schemes for nonlinear equations (Chapter 2/Lesson 2). The iterative methods are applied to systems where A is a square (n × n) matrix.
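Since each method discussed below is a special case of (3), a generic driver for the scheme can be written once; the individual methods then differ only in how T and c are formed (or in how the update is carried out component by component). The following is a minimal sketch in Python/NumPy; the stopping tolerance, iteration cap and the small test system are illustrative choices, not prescribed values.

    import numpy as np

    def fixed_point_iteration(T, c, x0, tol=1e-6, max_iter=100):
        """Iterate x^(k+1) = T x^(k) + c until successive iterates agree to tol."""
        x = x0
        for k in range(1, max_iter + 1):
            x_new = T @ x + c
            if np.max(np.abs(x_new - x)) < tol:   # l-infinity stopping criterion
                return x_new, k
            x = x_new
        return x, max_iter                        # may not have converged

    # Illustrative 2 x 2 example; converges since the infinity norm of T is 0.5 < 1
    T = np.array([[0.0,  0.5],
                  [0.25, 0.0]])
    c = np.array([1.0, 2.0])
    x, k = fixed_point_iteration(T, c, np.zeros(2))
    print(x, k)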

Although these methods generally take longer when solving systems than direct methods (Chapter 6/ Lesson
3), they are efficient for large sparse matrices and are frequently used in the solution of boundary value prob-
lems and partial differential equations (sequel module). Another desirable characteristic of these iterative
methods is that they are self-corrective.

Note: Although formulas have been given for the various iterative methods, it is more important to understand how the iterative system is obtained by systematic transposition of each equation. Using this approach, you don't have to memorise the huge formulas associated with the schemes, especially if you are dealing with a small system.

1.5.1 Jacobi Method


In rearranging equation (1), by solving the i-th equation for x_i, we obtain

    x_i = Σ_{j=1, j≠i}^{n} ( −(a_ij/a_ii) x_j ) + b_i/a_ii,    i = 1, 2, . . . , n,

so the components of (2) are

    t_ij = −a_ij/a_ii (j ≠ i)    and    c_i = b_i/a_ii,    i = 1, 2, . . . , n.

It is worth noting that the variable x_i itself does not appear on the RHS of the i-th rearranged equation, so its coefficient there is 0, leading to a matrix T with zero diagonal entries.

Note the typical stopping criterion used for the iterations in Example 1 on p. 456, and how it makes use of the norms of x^(k) and x^(k) − x^(k−1).
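The transposition described above translates directly into code. Below is a minimal Python/NumPy sketch of the Jacobi method; it assumes a_ii ≠ 0 for every i (equations would otherwise need interchanging), uses the relative l∞ stopping test just mentioned with an arbitrary tolerance, and the diagonally dominant system A, b is an illustrative one, not taken from the textbook.

    import numpy as np

    def jacobi(A, b, x0, tol=1e-6, max_iter=100):
        """Jacobi iteration: every component is updated from the previous iterate only."""
        n = len(b)
        x = x0.copy()
        for k in range(1, max_iter + 1):
            x_new = np.empty(n)
            for i in range(n):
                # x_i = (b_i - sum over j != i of a_ij x_j) / a_ii, old values throughout
                s = sum(A[i, j] * x[j] for j in range(n) if j != i)
                x_new[i] = (b[i] - s) / A[i, i]
            # stopping test: ||x^(k) - x^(k-1)|| / ||x^(k)|| in the l-infinity norm
            if np.max(np.abs(x_new - x)) / np.max(np.abs(x_new)) < tol:
                return x_new, k
            x = x_new
        return x, max_iter

    A = np.array([[10.0, -1.0,  2.0],
                  [-1.0, 11.0, -1.0],
                  [ 2.0, -1.0, 10.0]])
    b = np.array([6.0, 25.0, -11.0])
    x, k = jacobi(A, b, np.zeros(3))
    print(x, k)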

1.5.2 Gauss-Seidel Method


This method is similar to the Jacobi method, but it makes immediate use of the latest computed values x_j^(k), j = 1, 2, . . . , i − 1, when computing x_i^(k), since they are already available and supposedly better approximations than the previous iterates.

Something worth mentioning here:

• In rearranging the equations, the i-th variable of the i-th equation should lie on the diagonal, so there may be a need to interchange equations to accommodate this.

A comparison between the Jacobi method and the Gauss-Seidel method reveals that the latter generally leads to faster convergence than the former.
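The change from the Jacobi sketch above is a single one: the update is done in place, so components computed earlier in the current sweep are used immediately. The same caveats apply (Python/NumPy as an illustrative choice, arbitrary tolerance and test system).

    import numpy as np

    def gauss_seidel(A, b, x0, tol=1e-6, max_iter=100):
        """Gauss-Seidel iteration: new values are used as soon as they are available."""
        n = len(b)
        x = x0.copy()
        for k in range(1, max_iter + 1):
            x_old = x.copy()
            for i in range(n):
                # x[0..i-1] already hold the new values; x[i+1..] still hold the old ones
                s = sum(A[i, j] * x[j] for j in range(n) if j != i)
                x[i] = (b[i] - s) / A[i, i]
            if np.max(np.abs(x - x_old)) / np.max(np.abs(x)) < tol:
                return x, k
        return x, max_iter

    A = np.array([[10.0, -1.0,  2.0],
                  [-1.0, 11.0, -1.0],
                  [ 2.0, -1.0, 10.0]])
    b = np.array([6.0, 25.0, -11.0])
    x, k = gauss_seidel(A, b, np.zeros(3))
    print(x, k)   # typically fewer iterations than the Jacobi sketch above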


1.5.3 Relaxation Methods


Relaxation methods are formulated with the objective of speeding up the convergence of systems that are convergent by the Gauss-Seidel technique (a specific case of the general iterative scheme as given by (2)), and are given as equation (7.17) in the text. Their general (compact) form is

    x_i^(k) = x_i^(k−1) + ω (r_i^(k)/a_ii),    for certain choices of ω > 0,    (4)

where r_i^(k) is the i-th component of the residual vector associated with the k-th approximation of the solution. The derivation of this formula is given in detail in Section 7.4 of the textbook, together with the values of ω that yield the so-called under-relaxation (0 < ω < 1) and over-relaxation (ω > 1) methods. Further manipulation of (4) gives the more computationally convenient matrix form of the SOR method,

    x^(k) = (D − ωL)⁻¹ [(1 − ω)D + ωU] x^(k−1) + ω(D − ωL)⁻¹ b,

yet another instance of the form

    x^(k) = T_ω x^(k−1) + c_ω.

The matrices D, L and U are respectively diagonal, lower triangular and upper triangular.

The relaxation methods vary according to the choice of ω and of the splitting matrices D and L.
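In component form, (4) amounts to a one-line modification of the Gauss-Seidel sketch: compute the residual component with the latest available values and add ω times its scaled value to the current component; ω = 1 recovers Gauss-Seidel. This is again an illustrative Python/NumPy sketch, with an arbitrary choice of ω.

    import numpy as np

    def sor(A, b, x0, omega, tol=1e-6, max_iter=100):
        """SOR iteration in the compact residual form x_i += omega * r_i / a_ii."""
        n = len(b)
        x = x0.copy()
        for k in range(1, max_iter + 1):
            x_old = x.copy()
            for i in range(n):
                # residual component r_i^(k), using the latest available values of x
                r_i = b[i] - sum(A[i, j] * x[j] for j in range(n))
                x[i] += omega * r_i / A[i, i]   # omega = 1 gives Gauss-Seidel
            if np.max(np.abs(x - x_old)) / np.max(np.abs(x)) < tol:
                return x, k
        return x, max_iter

    A = np.array([[10.0, -1.0,  2.0],
                  [-1.0, 11.0, -1.0],
                  [ 2.0, -1.0, 10.0]])
    b = np.array([6.0, 25.0, -11.0])
    x, k = sor(A, b, np.zeros(3), omega=1.1)   # omega > 1: over-relaxation
    print(x, k)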

1.5.4 Error Analysis


Section 7.5 focuses on using the already discussed concepts of the residual and norms to determine error bounds for the approximate solution, much as presented before. Reading through this section will solidify your understanding of how error analysis is handled for linear systems.
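As a small illustration of the kind of bound discussed there, the sketch below computes the residual r = b − Ax̃ of a (deliberately perturbed) approximate solution x̃ and the condition number K(A) = ∥A∥ ∥A⁻¹∥, and evaluates the standard bounds ∥x − x̃∥ ≤ ∥A⁻¹∥ ∥r∥ and ∥x − x̃∥/∥x∥ ≤ K(A) ∥r∥/∥b∥ in the l∞ norm. The matrix and the size of the perturbation are arbitrary illustrative choices.

    import numpy as np

    A = np.array([[10.0, -1.0,  2.0],
                  [-1.0, 11.0, -1.0],
                  [ 2.0, -1.0, 10.0]])
    b = np.array([6.0, 25.0, -11.0])

    x_true = np.linalg.solve(A, b)                          # reference solution
    x_approx = x_true + 1e-4 * np.array([1.0, -1.0, 1.0])   # perturbed approximation

    r = b - A @ x_approx                                    # residual vector
    A_inv = np.linalg.inv(A)
    K = np.linalg.norm(A, np.inf) * np.linalg.norm(A_inv, np.inf)   # condition number

    # Bounds in the l-infinity norm
    abs_bound = np.linalg.norm(A_inv, np.inf) * np.linalg.norm(r, np.inf)
    rel_bound = K * np.linalg.norm(r, np.inf) / np.linalg.norm(b, np.inf)

    print(np.max(np.abs(x_true - x_approx)), abs_bound)     # actual error vs its bound
    print(K, rel_bound)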

1.6 Summary
Lessons 3 and 4 have presented notes on the subject of the solution of linear systems. Two categories of methods have been discussed for this topic: direct methods (Chapter 6) and indirect or iterative methods (Chapter 7). Direct methods make use of row reduction techniques to compute the approximate solution vector x = (x_1, x_2, . . . , x_n) at the end of the reduction process; Gaussian elimination and related modifications are the key focus in direct methods. Iterative methods, on the other hand, compute the approximate solution vector x^(k) iteratively, starting from an initial solution vector x^(0). Starting with the Jacobi method, the Gauss-Seidel and relaxation methods are successive improvements on it in terms of speed of convergence.

In both approaches, given that the true solution will normally not be known, the only way to estimate the error in the approximation is to use the residual vector to compute bounds for the error. Thus the error for linear systems is likely due to the computations rather than to the method being used.

1.7 Textbook Error


I have picked up an error in the textbook in the equations at the top of p. 471 of Section 7.4. There seems to be an omitted change of sign in the first three equations at the top of the page, namely,

so that, in vector form, we have

    (D − ωL) x^(k) = [(1 − ω)D − ωU] x^(k−1) + ωb.

That is,

    x^(k) = (D − ωL)⁻¹ [(1 − ω)D − ωU] x^(k−1) + ω(D − ωL)⁻¹ b.

Letting

    T_ω = (D − ωL)⁻¹ [(1 − ω)D − ωU]    and    c = ω(D − ωL)⁻¹ b

gives the SOR technique the form ...

...
