APM2613 - Lesson 4 - 3 - 2023
1 Lesson 4: Iterative Methods for Linear Systems
1.1 Objectives
The objectives of this Lesson are:
• To highlight the study of iterative methods for solving linear systems: the Jacobi method, the Gauss-Seidel method, and relaxation techniques;
• To implement the Jacobi and Gauss-Seidel methods, manually and by computer, to solve a system of
linear equations.
• To implement Relaxation Techniques, manually and by computer, to solve a system of linear equations.
• To apply vector and matrix norms in the stopping criterion and error analysis for iterative schemes.
• To use the condition number and other properties of matrices to determine convergence properties of
iterative techniques.
1.3 Introduction
Chapter 7 of the textbook continues with the discussion of the solution of the linear system:
\[
\begin{pmatrix}
a_{11} & a_{12} & \cdots & a_{1n} \\
a_{21} & a_{22} & \cdots & a_{2n} \\
\vdots & \vdots &        & \vdots \\
a_{n1} & a_{n2} & \cdots & a_{nn}
\end{pmatrix}
\begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{pmatrix}
=
\begin{pmatrix} b_1 \\ b_2 \\ \vdots \\ b_n \end{pmatrix}
\qquad (1)
\]
but now using iterative methods. The techniques involve starting with an approximate solution, and using
repeated substitution of the solution to a set of iterative equations until a certain level of accuracy is reached,
provided the system of iterative equations converges to a solution.
Before going into the actual techniques or algorithms, further ground work is needed in terms of some
concepts of linear algebra.
APM2613/LNS04/0/2023
The distance between two vectors, defined in Definition 7.4 of the textbook, becomes important when measuring the difference between an approximate solution and the true solution, or between two successive vector solutions in the sequence x(k) of approximations.
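As a quick illustration of these distances, here is a minimal NumPy sketch; the "true" and "approximate" vectors are made up purely for illustration:

```python
import numpy as np

# Hypothetical true solution and an approximation to it (values made up).
x_true = np.array([1.0, 2.0, -1.0])
x_approx = np.array([1.01, 1.98, -1.02])

diff = x_true - x_approx

# l2 (Euclidean) distance and l-infinity (maximum-magnitude) distance.
dist_l2 = np.sqrt(np.sum(diff**2))   # same as np.linalg.norm(diff)
dist_inf = np.max(np.abs(diff))      # same as np.linalg.norm(diff, np.inf)

print(dist_l2, dist_inf)
```

The infinity-norm distance is the one most often used in the stopping criteria of this chapter, because it is the cheapest to compute.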
A point on notation
Notation is of critical importance in unpacking the formulas used in the textbook. A few examples follow:
1. The notation adopted in the textbook distinguishes between the i-th component of a vector and the k-th iterate of a scheme. For example, whereas
x = (x1 , x2 , . . . , xn )
is used to denote an n-vector x with components xi , i = 1, 2, . . . , n, x(k) is used to denote the k-th
iterate of the solution vector x. Thus
\[
x^{(k)} = \left(x_1^{(k)}, x_2^{(k)}, \ldots, x_i^{(k)}, \ldots, x_n^{(k)}\right)
\]
is a vector whose components are the result of k iterations of the initial vector x(0) .
2. The definition of the matrix infinity norm,
\[
\|A\|_\infty = \max_{1 \le i \le n} \sum_{j=1}^{n} |a_{ij}|,
\]
which reads as 'the infinity norm of A is the maximum of the sums of the magnitudes of the elements in each row (j = 1 to n)'. This helps you to do the computations systematically. Note also such distinctions as
\[
\sum_{j=1}^{n} |a_{ij}| \ne \sum_{j=1}^{n} a_{ij}.
\]
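The row-sum computation of the matrix infinity norm can be checked numerically; a small sketch with a made-up matrix:

```python
import numpy as np

# Made-up 3x3 matrix for illustration.
A = np.array([[ 1.0, -2.0,  3.0],
              [ 4.0,  0.0, -1.0],
              [-2.0,  2.0,  2.0]])

# ||A||_inf = max over rows i of sum_j |a_ij|.
row_sums = np.sum(np.abs(A), axis=1)   # [6, 5, 6]
norm_inf = np.max(row_sums)            # agrees with np.linalg.norm(A, np.inf)

# The absolute values matter: summing the entries of row 0 without them
# gives 1 - 2 + 3 = 2, not the row sum of magnitudes, which is 6.
print(norm_inf)
```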
The fixed-point form x(k) = T x(k−1) + c of these schemes is comparable with the iterative equation xk+1 = g(xk) previously discussed for iterative schemes for nonlinear equations (Chapter 2/Lesson 2). The iterative methods are applied to systems where A is a square (n × n) matrix.
Although these methods generally take longer when solving systems than direct methods (Chapter 6/ Lesson
3), they are efficient for large sparse matrices and are frequently used in the solution of boundary value prob-
lems and partial differential equations (sequel module). Another desirable characteristic of these iterative
methods is that they are self-corrective.
Note: Although formulas have been given for the various iterative methods, it is more important to understand how the iterative system is obtained by basic systematic transposition of each equation. Using this approach, you don't have to memorise the long formulas associated with the schemes, especially if you are dealing with a small system.
It is worth noting that the coefficients of the variables xi , i = 1, . . . , n on the RHS are 0, leading to a matrix
with zero diagonal entries.
Note a typical stopping criterion for the iterations used in Example 1 on p456, and how it makes use of the
norms of x(k) and x(k) − x(k−1) .
• In rearranging the equations, the i-th variable of the i-th equation should lie on the diagonal, so equations may need to be interchanged to accommodate this.
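Putting these pieces together, here is a minimal sketch of the Jacobi method with a relative-difference stopping criterion in the infinity norm; the test system is made up for illustration (diagonally dominant, so the iteration converges):

```python
import numpy as np

def jacobi(A, b, x0, tol=1e-6, max_iter=100):
    """Jacobi iteration: x_i^(k) = (b_i - sum_{j != i} a_ij x_j^(k-1)) / a_ii."""
    n = len(b)
    x_old = np.array(x0, dtype=float)
    for k in range(1, max_iter + 1):
        x_new = np.empty(n)
        for i in range(n):
            # Every term on the right uses the PREVIOUS iterate x_old.
            s = sum(A[i, j] * x_old[j] for j in range(n) if j != i)
            x_new[i] = (b[i] - s) / A[i, i]
        # Stopping criterion: relative change measured in the infinity norm.
        if np.max(np.abs(x_new - x_old)) / np.max(np.abs(x_new)) < tol:
            return x_new, k
        x_old = x_new
    return x_old, max_iter

# Made-up diagonally dominant test system with solution (1, 2, -1).
A = np.array([[10.0, -1.0,  2.0],
              [-1.0, 11.0, -1.0],
              [ 2.0, -1.0, 10.0]])
b = np.array([6.0, 22.0, -10.0])
x, iterations = jacobi(A, b, x0=np.zeros(3))
print(x, iterations)
```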
A comparison between the Jacobi method and the Gauss-Seidel method reveals that the latter generally leads to faster convergence than the former.
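The only change Gauss-Seidel makes is to use each component as soon as it has been updated within the current sweep. A sketch on the same made-up diagonally dominant system:

```python
import numpy as np

def gauss_seidel(A, b, x0, tol=1e-6, max_iter=100):
    """Gauss-Seidel: like Jacobi, but each x_i uses the components
    x_1, ..., x_{i-1} already updated in the current sweep."""
    n = len(b)
    x = np.array(x0, dtype=float)
    for k in range(1, max_iter + 1):
        x_prev = x.copy()
        for i in range(n):
            # x[j] is already updated for j < i and still old for j > i.
            s = sum(A[i, j] * x[j] for j in range(n) if j != i)
            x[i] = (b[i] - s) / A[i, i]
        if np.max(np.abs(x - x_prev)) / np.max(np.abs(x)) < tol:
            return x, k
    return x, max_iter

# Made-up diagonally dominant test system with solution (1, 2, -1).
A = np.array([[10.0, -1.0,  2.0],
              [-1.0, 11.0, -1.0],
              [ 2.0, -1.0, 10.0]])
b = np.array([6.0, 22.0, -10.0])
x, iterations = gauss_seidel(A, b, x0=np.zeros(3))
print(x, iterations)
```

On this system Gauss-Seidel reaches the same tolerance in fewer sweeps than Jacobi, illustrating the comparison above.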
1.6 Summary
Lessons 3 and 4 have presented notes on the subject of solution of linear systems. Two categories of methods
have been discussed for this topic: Direct methods (Chapter 6) and Indirect or Iterative methods (Chap-
ter 7). Direct methods make use of row reduction techniques to compute the approximate solution vector
x = (x1 , x2 , . . . , xn ) at the end of the reduction process. Gaussian elimination and related modifications are
the key focus in direct methods. Iterative methods, on the other hand, compute the approximate solution vector x(k) iteratively, starting with an initial solution vector x(0). The Jacobi method is the basic scheme; the Gauss-Seidel and relaxation methods improve on it in terms of speed of convergence.
In both approaches, since the true solution will normally not be known, the only way to estimate the error in the approximation is to use the residual vector to compute bounds for the error. Thus error for linear systems is likely due to computations rather than to the method being used.
The matrix form of the relaxation (SOR) method is
\[
(D - \omega L)x^{(k)} = \left[(1 - \omega)D + \omega U\right]x^{(k-1)} + \omega b.
\]
...
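In component form, each SOR sweep is a Gauss-Seidel update blended with the previous iterate through the relaxation parameter ω. A sketch on the same made-up diagonally dominant system, with ω chosen arbitrarily for illustration:

```python
import numpy as np

def sor(A, b, x0, omega=1.1, tol=1e-6, max_iter=200):
    """SOR: a Gauss-Seidel sweep blended with the previous iterate via omega.
    omega = 1 recovers Gauss-Seidel; 1 < omega < 2 is over-relaxation."""
    n = len(b)
    x = np.array(x0, dtype=float)
    for k in range(1, max_iter + 1):
        x_prev = x.copy()
        for i in range(n):
            # x[j] is already updated for j < i and still old for j > i.
            s = sum(A[i, j] * x[j] for j in range(n) if j != i)
            x[i] = (1 - omega) * x_prev[i] + omega * (b[i] - s) / A[i, i]
        if np.max(np.abs(x - x_prev)) / np.max(np.abs(x)) < tol:
            return x, k
    return x, max_iter

# Made-up diagonally dominant (and symmetric) system with solution (1, 2, -1).
A = np.array([[10.0, -1.0,  2.0],
              [-1.0, 11.0, -1.0],
              [ 2.0, -1.0, 10.0]])
b = np.array([6.0, 22.0, -10.0])
x, iterations = sor(A, b, x0=np.zeros(3), omega=1.05)
print(x, iterations)
```

Choosing a good ω is problem-dependent; for symmetric positive definite matrices such as this one, SOR converges for any 0 < ω < 2.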