Numerical CH One
Many linear problems can be solved analytically.

Example 1 If ax + b = 0, a ≠ 0, then x = −b/a.

If ax² + bx + c = 0, a ≠ 0, then x = (−b ± √(b² − 4ac))/(2a).

However, many nonlinear equations have no such explicit solution. For these, numerical methods based on approximation can be developed which produce approximately correct results. For instance, if A is the exact value, a an approximation of it, and δa a bound on the absolute error, then

δ = |a − A| ≤ δa, i.e. a − δa ≤ A ≤ a + δa
Manalebish D
An error of 0.01 cm in a measurement of 10 m is better than the same error in a measurement of 1 m. Thus the relative error of a is given by

ε = δ/|A| = |a − A|/|A| = |a/A − 1|

and, in terms of the known quantities,

εa = δa/(a − δa) ≈ δa/a
Here εR = 0.1 × 1/100 = 0.001. Then δR = R·εR = (29.25)(0.001) ≈ 0.03. Therefore the exact value A satisfies R − δR ≤ A ≤ R + δR, i.e. 29.22 ≤ A ≤ 29.28.
2. Initial errors: these are due to numerical parameters such as physical constants (which are imperfectly known).

3. Experimental (or inherent) errors: these are errors in the given data; when 'a' is determined by physical measurement, the error depends upon the measuring instrument.

4. Truncation (residual) errors: this type of error arises from the fact that the (finite or infinite) sequence of computational steps necessary to produce an exact result is truncated prematurely after a certain number of steps.
Example 4

e^x = Σ_{n=0}^∞ xⁿ/n!  is approximated as  e^x ≈ 1 + x + x²/2! + x³/3!
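To make the truncation concrete, the partial sums of the exponential series can be compared against the true value. The sketch below is illustrative only (the helper name `exp_partial` is ours, not from the notes):

```python
import math

def exp_partial(x, n_terms):
    """Partial sum of the exponential series: sum of x^k/k! for k < n_terms."""
    return sum(x**k / math.factorial(k) for k in range(n_terms))

# Truncating e^1 after the x^3/3! term (four terms) leaves a residual error
approx = exp_partial(1.0, 4)              # 1 + 1 + 1/2 + 1/6 = 8/3
truncation_error = math.exp(1.0) - approx # about 0.0516
```

Adding more terms shrinks the truncation error, which is exactly the trade-off described above.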
5. Rounding errors: this error is due to the limitations of the calculating aids; numbers have to be rounded off during computations.

6. Programming errors (blunders): these are due to human errors during programming (bugs, found by debugging when using computers). They can be avoided by being careful.
where the αi ∈ {0, 1, · · · , 9} are called the digits of a, with αm ≠ 0, m ∈ Z.

A significant digit of a number 'a' is any given digit of a, except possibly for zeros to the left of the first nonzero digit, which serve only to fix the position of the decimal point. Significant digits are those which carry real information as to the size of the number, apart from its exponential position.
We say that the first n significant digits of an approximate number are correct if the absolute error does not exceed one half unit in the nth place. That is, if in equation (1.1)

δ = |a − A| ≤ ½ · 10^(m−n+1)

then the first n digits αm, αm−1, · · · , αm−n+1 are correct (and we say the number a is correct to n significant figures).
Example 7

with

δ = |35.97 − 36.0| = 0.03 ≤ ½(10^(−n+1)) = ½(10^(−2+1)) = ½(10⁻¹) = 0.05
1. If the first of the discarded digits is less than 5, leave the remaining digits unchanged (rounding down).

2. If the first of the discarded digits is greater than 5, add 1 to the nth digit (rounding up).

3. If the discarded portion is exactly one half unit, round the nth digit to the nearest even digit (the even-digit rule).
Example 8
6.125753 ≈ 6.1round to (1 decimal place) δ ≤ 12 10−1 = 0.05
Since we want 2 significant digits we go up to 6.1 then the first of the
digits being left out is 2 < 5 hence rounding down gives 6.1 here the
place value of 2 is 10−2 implies n = 2 with
δ = |6.125753 − 6.1| = 0.025753 ≤ 12 (10−2+1 ) = 12 (10−1 ) = 0.05
≈ 6.12 round to (2 decimal place) δ ≤ 12 10−2 = 0.005
≈ 6.126 round to (3 decimal place) δ ≤ 21 10−3 = 0.0005
≈ 6.1258 round to (4 decimal place) δ ≤ 21 10−4 = 0.00005
Since we want 5 significant digits we go up to 6.1257 then the first of the
digits being left out is 5 and since 7 is odd then even digit rule gives 6.1258
here the place value of 5 is 10−5 implies n = 5 with
δ = |6.125753 − 6.1258| = 0.000047 ≤ 21 (10−5+1 ) = 12 (10−4 ) = 0.00005
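The rules above, including the even-digit rule for a discarded half unit, correspond to "round half to even", which Python's decimal module implements directly. This is a sketch with our own helper name, not code from the notes:

```python
from decimal import Decimal, ROUND_HALF_EVEN

def round_places(x, places):
    """Round the decimal string x to `places` decimal places, half to even."""
    q = Decimal(1).scaleb(-places)   # quantum, e.g. Decimal('0.001') for places=3
    return Decimal(x).quantize(q, rounding=ROUND_HALF_EVEN)

r1 = round_places("6.125753", 1)    # first discarded digit 2 < 5 -> 6.1
r3 = round_places("6.125753", 3)    # discarded 753 exceeds half -> 6.126
r4 = round_places("6.125753", 4)    # discarded 53 exceeds half  -> 6.1258
r_even = round_places("6.1255", 3)  # exactly half, 5 is odd     -> 6.126
```

Using decimal strings (rather than binary floats) keeps the half-way cases exact, so the even-digit rule fires only when the discarded portion is exactly one half unit.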
Rounding errors are most dangerous when we have to perform many arithmetic operations.
Example 9 Approximate π = 3.14159265 using 22/7 and 355/113, correct to 2 decimal places and 4 decimal places respectively, and find the corresponding absolute and relative errors.

22/7 = 3.1428571    355/113 = 3.1415929

δ1 = |22/7 − π| = 0.0012645    δ2 = |355/113 − π| = 0.0000002668

ε1 = δ1/|π| = 4.025 × 10⁻⁴    ε2 = δ2/|π| = 8.49 × 10⁻⁸
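These errors are easy to reproduce numerically, using Python's math.pi as the reference value:

```python
import math

d1 = abs(22 / 7 - math.pi)      # absolute error of 22/7
d2 = abs(355 / 113 - math.pi)   # absolute error of 355/113
e1 = d1 / math.pi               # relative error of 22/7
e2 = d2 / math.pi               # relative error of 355/113
```

The relative errors confirm that 355/113 is better than 22/7 by roughly four orders of magnitude.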
1.4 Propagation of errors

Suppose a′ is a computed approximation of a number a (a′ ∈ Q). The initial error is a′ − a, while the difference δ1 = f(a′) − f(a) is the corresponding propagated approximation error. If f is replaced by a simpler function f1 (say a truncated power series representation of f), then δ2 = f1(a′) − f(a′) is the truncation error. But in calculations we obtain, say, f2(a′) instead of f1(a′): a possibly wrongly computed value of the wrong function at the wrong argument. The difference δ3 = f2(a′) − f1(a′) is termed the rounding error. The total error is then the propagated error

δ = f2(a′) − f(a) = δ1 + δ2 + δ3
Example 10 Determine e^(1/3) to 4 decimal places.

We compute e^0.3333 instead of e^0.3̇, with initial error

δ1 = e^0.3333 − e^(1/3) = e^0.3333 (1 − e^0.0000333...) = −0.0000465196
Next, we compute e^x from e^x ≈ 1 + x + x²/2! + x³/3! + x⁴/4! for x = 0.3333, with truncation error

δ2 = (1 + 0.3333 + (0.3333)²/2! + (0.3333)³/3! + (0.3333)⁴/4!)
   − (1 + 0.3333 + (0.3333)²/2! + (0.3333)³/3! + (0.3333)⁴/4! + · · ·)
   = −((0.3333)⁵/5! + (0.3333)⁶/6! + · · ·) = −0.0000362750
Finally, the summation of the truncated series is done with rounded values, giving the result

1 + x + x²/2! + x³/3! + x⁴/4! = 1 + 0.3333 + 0.0555 + 0.0062 + 0.0005 = 1.3955

whereas if we kept 10 decimal places we would have 1.3955296304. Then

δ3 = 1.3955 − 1.3955296304 ≈ −0.0000296
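The three error contributions of this example can be checked numerically. The sketch below follows the definitions of δ1, δ2, δ3 from the text; the variable names are ours:

```python
import math

a_exact = 1 / 3
a_round = 0.3333                    # the rounded argument actually used

# delta1: exact function at the rounded argument vs. at the exact argument
d1 = math.exp(a_round) - math.exp(a_exact)

# delta2: 5-term Taylor polynomial at a_round vs. the full series (= exp)
taylor = sum(a_round**k / math.factorial(k) for k in range(5))
d2 = taylor - math.exp(a_round)

# delta3: the same polynomial summed from terms rounded to 4 decimals
rounded_sum = sum(round(a_round**k / math.factorial(k), 4) for k in range(5))
d3 = rounded_sum - taylor

total = d1 + d2 + d3                # equals rounded_sum - exp(1/3)
```

The total is exactly the difference between the value 1.3955 actually produced and the true e^(1/3), as the telescoping sum in the text predicts.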
Note:-
Investigations of error propagation are important in iterative processes and computations where each value depends on its predecessors.
1.5 Instability

In some cases one may consider a small error negligible and suppress it, yet after some steps the accumulated error may have a fatal effect on the solution. Small changes in initial data may produce large changes in the final results. A problem with this property is called ill-conditioned: for the problem of computing an output value y from an input value x by y = g(x), when x is slightly perturbed to x̄, the result ȳ = g(x̄) is far from y.

A well-conditioned problem has a stable algorithm for computing y = f(x): the output ȳ is the exact result ȳ = f(x̄) for a slightly perturbed input x̄ which is close to the input x. Thus if the algorithm is stable and the problem is well-conditioned, the computed result ȳ is close to the exact y.
An algorithm is a systematic procedure that solves a problem. It is said to be stable if its output is the exact result of a slightly perturbed input.

Performance features that may be expected from a good numerical algorithm:

Accuracy:- This is related to errors: how accurate is the result going to be when a numerical algorithm is run with some particular input data.

Efficiency:- How fast can we solve a certain problem, in terms of the rate of convergence and the number of floating point operations required.

If an error stays small at one point in an algorithm and does not grow further as the calculation continues, the algorithm is considered numerically stable. This happens when the error causes only a very small variation in the formula result. If the opposite occurs and the error grows as the calculation continues, it is considered numerically unstable.
Chapter 2

Non-linear Equations

Locating roots

Consider an equation of the form f(x) = 0, where f(x) is assumed to be a continuously differentiable function of sufficiently high order. We want to find solutions (roots) of the equation f(x) = 0, that is, numbers a such that f(a) = 0. Such a point a is a point at which the graph of the function intersects the x-axis.

The equation can be algebraic, such as a polynomial or rational equation, or transcendental: trigonometric, exponential, logarithmic, nth root, etc. A nonlinear equation may even have infinitely many solutions, and in general it is difficult to find them exactly.

In most cases we have to use approximate solutions, i.e. find a such that |f(a)| < ε, where ε is a given tolerance; this gives an interval where the root is located but not the exact root. Note that with this criterion the equations f(x) = 0 and M·f(x) = 0 (where M is a constant) do not have the same approximate roots, even though their exact roots coincide.
2.1 Bisection method

This is a simple but slowly convergent method for determining the roots of a continuous function f(x). It is based on the intermediate value theorem: if f is continuous on an interval [a0, b0] and has opposite signs at the end points, say f(a0) < 0 and f(b0) > 0, then f has a root in [a0, b0].

Compute the midpoint x0 = (a0 + b0)/2.

1. Let us take the first root between [a0, b0] = [0, 1], with x1 = (0 + 1)/2 = 0.5
continuing the process we get

2. For the second root between [a0, b0] = [1, 2], with x1 = (1 + 2)/2 = 1.5 and f(x1) = f(1.5) < 0, we get [a1, b1] = [1.5, 2], and so on; similarly [a14, b14] = [1.5121, 1.5122]. Continuing the process we get
Example 13 Find the roots of f(x) = x² − 3 on the interval [1, 2] with absolute error δ = 0.01.
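A minimal bisection routine, our own sketch rather than code from the notes, applied to Example 13:

```python
def bisect(f, a, b, tol):
    """Halve the bracketing interval until its half-width drops below tol."""
    if f(a) * f(b) > 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    while (b - a) / 2 > tol:
        mid = (a + b) / 2
        if f(a) * f(mid) <= 0:
            b = mid   # root lies in the left half
        else:
            a = mid   # root lies in the right half
    return (a + b) / 2

root = bisect(lambda x: x**2 - 3, 1.0, 2.0, 0.01)   # approximates sqrt(3)
```

Each step halves the interval, so reaching a tolerance of 0.01 from an interval of width 1 takes about log2(1/0.01) ≈ 7 steps.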
2.2 The method of false position

This method is always convergent for a continuous function f. It requires two initial guesses a0, b0 with f(a0)f(b0) < 0; the end point of the new interval is calculated as a weighted average defined on the previous interval, i.e.

w1 = (f(b0)a0 − f(a0)b0)/(f(b0) − f(a0)), provided f(b0) and f(a0) have opposite signs.

In general the formula is given as

wn = (f(b_{n−1})a_{n−1} − f(a_{n−1})b_{n−1})/(f(b_{n−1}) − f(a_{n−1})) for n = 1, 2, . . . until the convergence criterion is satisfied.

Then if f(an)f(wn) ≤ 0, set a_{n+1} = an and b_{n+1} = wn; otherwise set a_{n+1} = wn and b_{n+1} = bn.
Algorithm

Given a function f(x) continuous on the interval [a, b] satisfying the criterion f(a)f(b) < 0:

1. Set a0 = a, b0 = b.

2. For n = 1, 2, . . . until the convergence criterion is satisfied, compute

wn = (f(b_{n−1})a_{n−1} − f(a_{n−1})b_{n−1})/(f(b_{n−1}) − f(a_{n−1}))

If f(a_{n−1})f(wn) ≤ 0 then keep the left end point and set bn = wn; otherwise set an = wn and keep the right end point.
Example 14 Solve 2x³ − (5/2)x − 5 = 0 on the interval [1, 2] using the false position method with absolute error ε < 10⁻³.

f(x) = 2x³ − (5/2)x − 5

f(1) = 2 − 5/2 − 5 = −11/2

f(2) = 16 − 5 − 5 = 6

f(1)f(2) = (−11/2)(6) = −33 < 0

Let a0 = 1, b0 = 2.

w1 = (f(b0)a0 − f(a0)b0)/(f(b0) − f(a0)) = (f(2)·1 − f(1)·2)/(f(2) − f(1)) = 1.47826

Then f(w1) = f(1.47826) = −2.23489761.

Next f(1)f(w1) = (−11/2)(−2.23489761) > 0, so a1 = w1 and b1 = b0.
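The wn formula translates directly into a short routine; this is our own sketch run on Example 14 (function and variable names are ours):

```python
def false_position(f, a, b, eps=1e-3, max_iter=100):
    """Regula falsi: replace an endpoint by the weighted average w each step."""
    w_old = a
    for _ in range(max_iter):
        w = (f(b) * a - f(a) * b) / (f(b) - f(a))
        if f(a) * f(w) <= 0:
            b = w          # root bracketed in [a, w]
        else:
            a = w          # root bracketed in [w, b]
        if abs(w - w_old) < eps:
            return w
        w_old = w
    return w

f = lambda x: 2 * x**3 - 2.5 * x - 5
root = false_position(f, 1.0, 2.0)   # the first iterate is w1 = 1.47826...
```

As in the worked example, the first step keeps b0 = 2 fixed and moves the left end point to w1.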
Exercise 1 Approximate √2 using f(x) = x² − 2 = 0 on [0, 2] with error δ < 0.01.

Solution:-

[0, 2]: f(0) = −2 < 0 and f(2) = 2 > 0

x1 = (0(2) − 2(−2))/(2 − (−2)) = 1,  f(1) = −1 < 0

[1, 2]: x2 = (1(2) − 2(−1))/(2 − (−1)) = 4/3,  f(4/3) = −2/9 < 0

[4/3, 2]: x3 = ((4/3)(2) − 2(−2/9))/(2 − (−2/9)) = (28/9)/(20/9) = 28/20 = 1.4,  f(1.4) = −0.04 < 0

[1.4, 2]: x4 = (1.4(2) − 2(−0.04))/(2 − (−0.04)) = 1.412,  f(1.412) = −0.006256 < 0
set x0 = x1
set x1 = x2
until |f(x2)| < tolerance value
Solution:-
so that we approach the root a.

We write the equation f(x) = 0 in the form x = g(x), where g is defined in some interval I containing a and the range of g lies in I for x ∈ I.

Compute successively

x1 = g(x0)  (2.1)
x2 = g(x1)  (2.2)
x3 = g(x2)  (2.3)
⋮  (2.4)
x_{n+1} = g(x_n)  (2.5)

which may converge to the actual root a (depending on the choice of g and x0), with lim_{n→∞}(x_n − a) = 0, i.e. ∀ε > 0 ∃M such that |x_n − a| < ε for n ≥ M; or the values x_n may move away from the root a (diverge).

A solution of the equation x = g(x) is called a fixed point of g.
To obtain the second root a2 = 4, if we choose x0 = 5, then

x1 = (5² + 4)/5 = 5.8
x2 = ((5.8)² + 4)/5 = 7.528
x3 = 12.134

[Figure: graphs of y = (x² + 4)/5 and y = x; the iterates move away from the fixed point at x = 4.]
Note 1 From the graph, convergence depends on the fact that in a neighborhood I of a solution, the slope of the curve y = g(x) is less than the slope of y = x (i.e. |g′(x)| < 1 for x ∈ I).
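This slope condition is easy to observe numerically. The sketch below (names ours) iterates g(x) = (x² + 4)/5, whose fixed points are x = 1, where |g′(1)| = 0.4 < 1, and x = 4, where |g′(4)| = 1.6 > 1:

```python
def fixed_point(g, x0, n):
    """Return the trajectory x0, g(x0), g(g(x0)), ... of length n+1."""
    xs = [x0]
    for _ in range(n):
        xs.append(g(xs[-1]))
    return xs

g = lambda x: (x**2 + 4) / 5

towards_1 = fixed_point(g, 2.0, 30)   # converges to the fixed point x = 1
away_from_4 = fixed_point(g, 5.0, 3)  # 5 -> 5.8 -> 7.528 -> 12.13..., diverging
```

Starting anywhere near 4 the iterates run away, exactly as in the worked example above.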
Exercise 3 Find solutions of f(x) = x² − 3x + 1 = 0 using x = g(x) = (x² + 1)/3 and x0 = 1, 2, 3.
since g(a) = a and g(xn ) = xn+1 for n = 0, 1, 2, . . . we have
Example 18 Find the roots of 4x − 4 sin x = 1.

Write f(x) = 0 as x = sin x + 0.25 = g(x), with |g′(x)| = |cos x| < 1, say for 1 < x < 1.3. Choosing x0 = 1.2 we have

Thus a = 1.172 (correct to 3D) with an error ε ≤ ½·10⁻³ = 0.0005, i.e. a = 1.172 ± 0.0005.
⇔ e^x = 3x ⇒ x = e^x/3 = g(x), with |g′(x)| = |e^x/3| < 1 ⇔ x < ln 3 ≈ 1.1.

To find a1, take x0 = 0; then

x1 = 0.3̇
x2 = 0.465
x3 = 0.53
⋮
x7 = 0.610

But a2 ∉ I. Suppose x0 = 2:

x1 = 2.46
x2 = 3.91
x3 = 16.7

which diverges. So to obtain a2, take x = g(x) = 4x − e^x, with g′(x) = 4 − e^x. Then

|g′(x)| = |4 − e^x| < 1 ⇔ −1 < 4 − e^x < 1
⇔ −5 < −e^x < −3
⇔ 3 < e^x < 5
⇔ ln 3 < x < ln 5, i.e. x ∈ (1.1, 1.6)

Hence the iteration is convergent.
Let x1 be the point of intersection of the tangent line with the x-axis. The slope of the tangent to f at x0 is

tan β = f′(x0) = f(x0)/(x0 − x1)  ⟹  x1 = x0 − f(x0)/f′(x0)

If x_{n+1} is close to the actual root a, then f(x_{n+1}) ≈ 0 and x_n ≈ x_{n+1}, so that (x_{n+1} − x_n)² and all higher powers can be neglected, which gives

⟹  x_{n+1} = x_n − f(x_n)/f′(x_n)
Example 20 Find the roots of f(x) = e^x − 3x = 0 starting from x0 = 0 and x0 = 2.

Solution:-

f′(x) = e^x − 3

x_{n+1} = x_n − f(x_n)/f′(x_n) = x_n − (e^{x_n} − 3x_n)/(e^{x_n} − 3) = e^{x_n}(x_n − 1)/(e^{x_n} − 3)

For x0 = 0:  x1 = e⁰(0 − 1)/(e⁰ − 3) = −1/(−2) = 0.5
x2 = e^{0.5}(0.5 − 1)/(e^{0.5} − 3) = 0.61
x3 = 0.619

For x0 = 2:  x1 = e²(2 − 1)/(e² − 3) = 1.6835
x2 = 1.5435
x3 = 1.5134
x4 = 1.512
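Example 20 can be reproduced with a short Newton iteration; this is a sketch with our own names, not code from the notes:

```python
import math

def newton(f, df, x0, tol=1e-8, max_iter=50):
    """Newton's method: x_{n+1} = x_n - f(x_n)/f'(x_n)."""
    x = x0
    for _ in range(max_iter):
        x_next = x - f(x) / df(x)
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    return x

f = lambda x: math.exp(x) - 3 * x
df = lambda x: math.exp(x) - 3

a1 = newton(f, df, 0.0)   # the root near 0.619
a2 = newton(f, df, 2.0)   # the root near 1.512
```

Which root is reached depends on the starting point, just as in the worked iterations above.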
Chapter 3

Systems of Linear Equations
otherwise it is inconsistent.

The homogeneous system

3x1 + x2 − 2x3 = 0
x1 − x2 + x3 = 0
4x1 − x3 = 0

has a non-trivial solution: put x1 = 1, then x3 = 4 and x2 = 5, so (1, 5, 4) is a non-trivial solution. The system

x − 2y = 1
−3x + 6y = 4

leads to 0 = 7, so it has no solution: it is inconsistent.
Cramer's rule
Example 22 Solve

2x + z = 1
4x + 2y − 3z = −1
5x + 3y + z = 2

det A = det [ 2  0  1 ; 4  2 −3 ; 5  3  1 ] = 24 ≠ 0

x = det [ 1  0  1 ; −1  2 −3 ; 2  3  1 ] / det A = 4/24 = 1/6

y = det [ 2  1  1 ; 4 −1 −3 ; 5  2  1 ] / det A = 4/24 = 1/6

z = det [ 2  0  1 ; 4  2 −1 ; 5  3  2 ] / det A = 16/24 = 2/3

(rows of each matrix are separated by semicolons)
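Example 22 can be verified with a direct implementation of Cramer's rule for 3 × 3 systems; this is an illustrative sketch and the helper names are ours:

```python
def det3(m):
    """Determinant of a 3x3 matrix by cofactor expansion along row 0."""
    a, b, c = m[0]
    d, e, f = m[1]
    g, h, i = m[2]
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

def cramer3(A, rhs):
    """Solve a 3x3 system by Cramer's rule (requires det(A) != 0)."""
    D = det3(A)
    sol = []
    for col in range(3):
        Ai = [row[:] for row in A]      # copy A, then replace one column
        for r in range(3):
            Ai[r][col] = rhs[r]
        sol.append(det3(Ai) / D)
    return sol

A = [[2, 0, 1], [4, 2, -3], [5, 3, 1]]
b = [1, -1, 2]
x, y, z = cramer3(A, b)   # 1/6, 1/6, 2/3
```

Cramer's rule is convenient for hand-sized systems; for larger n the determinant evaluations make it far more expensive than elimination.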
3.1 Direct Methods for SLE

Consider the system of linear equations Ax = B:

a11x1 + a12x2 + · · · + a1nxn = b1
a21x1 + a22x2 + · · · + a2nxn = b2
⋮
an1x1 + an2x2 + · · · + annxn = bn

2. Interchange the ith and jth rows (columns): Ai ↔ Aj

3. Replace the ith row (column) by K′ times the jth row plus K times the ith row: Ai ← K′Aj + KAi, K ≠ 0
Apply these operations on the augmented matrix (A \ B) of the system Ax = B to reduce A into the identity matrix.

(A \ B) =
[ a11 a12 · · · a1n | b1 ]
[ a21 a22 · · · a2n | b2 ]
[  ⋮   ⋮         ⋮ |  ⋮ ]
[ an1 an2 · · · ann | bn ]

is called the augmented matrix of the system Ax = B.
x + 2y + z = 3
3x − 2y − 4z = −2
2x + 3y − z = −6

Elimination gives

x + 2y + z = 3
8y + 7z = 11
17z = 85

⟹ z = 85/17 = 5, y = −3, x = 4
(A \ B) =
[ −1  3  5  2 | 10 ]
[  1  9  8  4 | 15 ]
[  0  1  0  1 |  2 ]
[  2  1  1 −1 | −3 ]

A2 = A1 + A2, A4 = 2A1 + A4:
[ −1  3  5  2 | 10 ]
[  0 12 13  6 | 25 ]
[  0  1  0  1 |  2 ]
[  0  7 11  3 | 17 ]

A3 = A2 − 12A3, A4 = A4 − 7A3:
[ −1  3  5  2 | 10 ]
[  0 12 13  6 | 25 ]
[  0  0 13 −6 |  1 ]
[  0  0 11 −4 |  3 ]

A4 = −11A3 + 13A4:
[ −1  3  5  2 | 10 ]
[  0 12 13  6 | 25 ]
[  0  0 13 −6 |  1 ]
[  0  0  0 14 | 28 ]
Thus back substitution gives x4 = 2, x3 = 1, x2 = 0, x1 = −1.

The first row is called the pivotal row and the coefficients of the first unknown the pivotal coefficients.

We choose the pivotal row as the equation which has the numerically (absolutely) largest coefficient of the first variable. This is called partial pivoting.
Example 25
10x − 7y + 3z + 5w = 6 (3.1)
−6x + 8y − z − 4w = 5 (3.2)
3x + y + 4z + 11w = 2 (3.3)
5x − 9y − 2z + 4w = 7 (3.4)
(A \ B) =
[ 10 −7  3  5 | 6 ]
[ −6  8 −1 −4 | 5 ]
[  3  1  4 11 | 2 ]
[  5 −9 −2  4 | 7 ]

A1 = (1/10)A1:
[ 1 −0.7 0.3 0.5 | 0.6 ]
[ −6  8  −1  −4  | 5 ]
[  3  1   4  11  | 2 ]
[  5 −9  −2   4  | 7 ]

A2 = A2 + 6A1, A3 = A3 − 3A1, A4 = A4 − 5A1:
[ 1 −0.7  0.3  0.5 | 0.6 ]
[ 0  3.8  0.8 −1   | 8.6 ]
[ 0  3.1  3.1  9.5 | 0.2 ]
[ 0 −5.5 −3.5  1.5 | 4   ]

Since the absolutely largest y-coefficient belongs to the fourth equation, interchange the second and fourth rows, normalize the new second row with A2 = (1/(−5.5))A2, and eliminate the y-coefficients below it with A3 = A3 − 3.1A2, A4 = A4 − 3.8A2:
[ 1 −0.7  0.3    0.5    | 0.6   ]
[ 0  1    0.63  −0.27   | −0.72 ]
[ 0  0    1.127 10.345  | 2.45  ]
[ 0  0   −1.618  0.036  | 11.36 ]

Then the largest coefficient of z is in the fourth equation, hence we interchange the third and fourth rows and eliminate the coefficient of z in the row below:
[ 1 −0.7  0.3    0.5   | 0.6   ]
[ 0  1    0.63  −0.27  | −0.72 ]
[ 0  0   −1.618  0.036 | 11.36 ]
[ 0  0    1.127 10.345 | 2.45  ]

Normalizing the third row and eliminating gives
[ 1 −0.7 0.3   0.5      | 0.6      ]
[ 0  1   0.63 −0.27     | −0.72    ]
[ 0  0   1    −0.02247  | −7.02247 ]
[ 0  0   0    10.37079  | 10.37079 ]

and finally
[ 1 −0.7 0.3   0.5     | 0.6      ]
[ 0  1   0.63 −0.27    | −0.72    ]
[ 0  0   1    −0.02247 | −7.02247 ]
[ 0  0   0     1       | 1        ]

Using backward substitution the solution is w = 1, z = −7, y = 4, x = 5.
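The whole procedure of Example 25, elimination with partial pivoting followed by backward substitution, can be sketched as follows (the function name is ours):

```python
def gauss_partial_pivot(A, b):
    """Gaussian elimination with partial pivoting, then back substitution."""
    n = len(A)
    M = [row[:] + [bv] for row, bv in zip(A, b)]   # augmented matrix
    for k in range(n):
        # partial pivoting: bring up the absolutely largest coefficient
        p = max(range(k, n), key=lambda r: abs(M[r][k]))
        M[k], M[p] = M[p], M[k]
        for r in range(k + 1, n):
            factor = M[r][k] / M[k][k]
            for c in range(k, n + 1):
                M[r][c] -= factor * M[k][c]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):                 # backward substitution
        s = sum(M[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (M[i][n] - s) / M[i][i]
    return x

A = [[10, -7, 3, 5], [-6, 8, -1, -4], [3, 1, 4, 11], [5, -9, -2, 4]]
b = [6, 5, 2, 7]
sol = gauss_partial_pivot(A, b)   # expect (x, y, z, w) = (5, 4, -7, 1)
```

Choosing the largest available pivot keeps the multipliers at most 1 in magnitude, which is what controls the growth of rounding errors.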
This method minimizes round off errors which may affect the accuracy of the
solution.
Jordan’s Method
A modification of the Gauss elimination method whereby all the coefficients above and below the main diagonal are eliminated is called Jordan's method.
If A is n × n with det A ̸= 0 and we transform the augmented matrix (A \ B)
to the form (In \ D) then the column vector D is the solution of the system.
x + 2y + 5z = −9
x − y + 3z = 2
3x − 6y − z = 25

(A \ B) =
[ 1  2  5 | −9 ]
[ 1 −1  3 |  2 ]
[ 3 −6 −1 | 25 ]

A2 = A1 − A2, A3 = 3A1 − A3:
[ 1  2  5 | −9  ]
[ 0  3  2 | −11 ]
[ 0 12 16 | −52 ]

A1 = 3A1 − 2A2, A3 = A3 − 4A2:
[ 3 0 11 | −5  ]
[ 0 3  2 | −11 ]
[ 0 0  8 | −8  ]

A3 = (1/8)A3:
[ 3 0 11 | −5  ]
[ 0 3  2 | −11 ]
[ 0 0  1 | −1  ]

A2 = A2 − 2A3, A1 = A1 − 11A3:
[ 3 0 0 | 6  ]
[ 0 3 0 | −9 ]
[ 0 0 1 | −1 ]

A1 = (1/3)A1, A2 = (1/3)A2:
[ 1 0 0 | 2  ]
[ 0 1 0 | −3 ]
[ 0 0 1 | −1 ]
The solution is (2, −3, −1)
10x − 7y + 3z + 5w = 6
−6x + 8y − z − 4w = 5
5x − 9y − 2z + 4w = 7
To find the inverse of a nonsingular square matrix A using Jordan's method, we apply the elementary row operations to the combined matrix (A \ In) and transform it to the form (In \ D). Then D = A⁻¹.
Example 28 Find the inverse of A =
[ 1  0 2 ]
[ 2 −1 3 ]
[ 4  1 8 ]
(A \ I3) =
[ 1  0 2 | 1 0 0 ]
[ 2 −1 3 | 0 1 0 ]
[ 4  1 8 | 0 0 1 ]

A2 = 2A1 − A2, A3 = A3 − 4A1:
[ 1 0 2 |  1  0 0 ]
[ 0 1 1 |  2 −1 0 ]
[ 0 1 0 | −4  0 1 ]

A3 ↔ A2:
[ 1 0 2 |  1  0 0 ]
[ 0 1 0 | −4  0 1 ]
[ 0 1 1 |  2 −1 0 ]

A3 = A3 − A2:
[ 1 0 2 |  1  0  0 ]
[ 0 1 0 | −4  0  1 ]
[ 0 0 1 |  6 −1 −1 ]

A1 = A1 − 2A3:
[ 1 0 0 | −11  2  2 ]
[ 0 1 0 |  −4  0  1 ]
[ 0 0 1 |   6 −1 −1 ]

D = A⁻¹ = [ −11 2 2 ; −4 0 1 ; 6 −1 −1 ]
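Jordan's method for the inverse, reducing (A | I) to (I | A⁻¹), can be sketched as below and checked against Example 28 (names are ours):

```python
def jordan_inverse(A):
    """Invert A by Gauss-Jordan: reduce (A | I) to (I | A^-1)."""
    n = len(A)
    M = [row[:] + [float(i == j) for j in range(n)] for i, row in enumerate(A)]
    for k in range(n):
        if M[k][k] == 0:   # swap in a row with a nonzero pivot if needed
            p = next(r for r in range(k + 1, n) if M[r][k] != 0)
            M[k], M[p] = M[p], M[k]
        piv = M[k][k]
        M[k] = [v / piv for v in M[k]]          # normalize the pivot row
        for r in range(n):                      # clear the column above and below
            if r != k and M[r][k] != 0:
                f = M[r][k]
                M[r] = [v - f * w for v, w in zip(M[r], M[k])]
    return [row[n:] for row in M]

A = [[1, 0, 2], [2, -1, 3], [4, 1, 8]]
Ainv = jordan_inverse(A)   # expect [[-11, 2, 2], [-4, 0, 1], [6, -1, -1]]
```

For robust numerical work one would also pivot on the largest entry, as in the elimination example earlier; the simple zero-check above is enough for this small exact example.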
Exercise 7 Find the inverse of the matrix M =
[ 1 1 1 1 ]
[ 1 2 2 2 ]
[ 1 2 3 3 ]
[ 1 2 3 4 ]
Matrix Decomposition

Recall that for a square matrix A = (aij), the diagonal elements are the elements aii, i = 1, . . . , n. If only the diagonal entries are nonzero, A is called a diagonal matrix. A diagonal matrix all of whose diagonal entries are equal to one is called the identity (or unit) matrix of order n.

Example 29 I2 = [ 1 0 ; 0 1 ] and A × I = I × A = A for any A.

If all the elements above the diagonal are zero the matrix is called lower triangular; if all the elements below the diagonal are zero it is called upper triangular.

Example 30 L = [ 1 0 0 ; 4 6 0 ; −2 1 −4 ] is lower triangular;

U = [ 1 −3 3 ; 0 −1 0 ; 0 0 1 ] is upper triangular.
4x − 2y + z = 15
−3x − y + 4z = 8
x − y + 3z = 13

To change the coefficient matrix to an upper triangular matrix we have used Gauss' method.

Note 2 To get the entries of the lower triangular matrix below the diagonal we have used the multipliers of the leading terms. Furthermore, we can see that the original coefficient matrix A can be written as the product

A = [ 4 −2 1 ; −3 −1 4 ; 1 −1 3 ] = [ 1 0 0 ; −0.75 1 0 ; 0.25 0.2 1 ] × [ 4 −2 1 ; 0 −2.5 4.75 ; 0 0 1.8 ]
                                              (L)                                  (U)
Remark 3.1.1 det(LU) = det L ∗ det U = det U, since det L = 1. Thus, for the above example, we see that Gaussian elimination gives

Ax = B ⇔ LUx = B

4x − y + 2z = 15
−x + 2y + 3z = 5
5x − 7y + 9z = 8
(b) Decompose the coefficient matrix into lower and upper triangular matrices.

Solution

(A \ B) =
[  4 −1 2 | 15 ]
[ −1  2 3 |  5 ]
[  5 −7 9 |  8 ]

A2 = (1/4)A1 + A2, A3 = (−5/4)A1 + A3:
[ 4  −1    2    | 15    ]
[ 0   7/4  7/2  | 35/4  ]
[ 0 −23/4 26/4  | −43/4 ]

A3 = (23/7)A2 + A3:
[ 4 −1   2   | 15   ]
[ 0  7/4 7/2 | 35/4 ]
[ 0  0  18   | 18   ]

Backward substitution gives x = 4, y = 3, z = 1.

From these, the coefficient matrix A can be written as

A = [ 4 −1 2 ; −1 2 3 ; 5 −7 9 ] = [ 1 0 0 ; −0.25 1 0 ; 1.25 −23/7 1 ] ∗ [ 4 −1 2 ; 0 7/4 7/2 ; 0 0 18 ]
                                             (L)                                   (U)

det A = det(LU) = det L ∗ det U = det U = 4 · (7/4) · 18 = 126
Now Ax = B ⇔ LUx = B. Put Ux = y; then Ly = B, i.e.

[ 1 0 0 ; −0.25 1 0 ; 1.25 −23/7 1 ] [ y1 ; y2 ; y3 ] = [ 15 ; 5 ; 8 ]

Using forward substitution:

y1 = 15
−0.25(15) + y2 = 5 ⟹ y2 = 5 + 15/4 = 35/4
1.25(15) − (23/7)(35/4) + y3 = 8 ⟹ y3 = 18

Then Ux = y:

[ 4 −1 2 ; 0 7/4 7/2 ; 0 0 18 ] [ x1 ; x2 ; x3 ] = [ 15 ; 35/4 ; 18 ]

Backward substitution:

18x3 = 18 ⟹ x3 = 1
(7/4)x2 + (7/2)(1) = 35/4 ⟹ x2 + 2 = 5 ⟹ x2 = 3
4x1 − 3 + 2 = 15 ⟹ x1 = 4
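The decomposition and the two substitutions can be collected into a short sketch (Doolittle form with unit diagonal in L, no pivoting; the names are ours):

```python
def lu_decompose(A):
    """Doolittle LU: L has unit diagonal, U is upper triangular (no pivoting)."""
    n = len(A)
    L = [[float(i == j) for j in range(n)] for i in range(n)]
    U = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i, n):        # row i of U
            U[i][j] = A[i][j] - sum(L[i][k] * U[k][j] for k in range(i))
        for r in range(i + 1, n):    # column i of L (the multipliers)
            L[r][i] = (A[r][i] - sum(L[r][k] * U[k][i] for k in range(i))) / U[i][i]
    return L, U

def lu_solve(L, U, b):
    """Forward substitution for Ly = b, then back substitution for Ux = y."""
    n = len(b)
    y = [0.0] * n
    for i in range(n):
        y[i] = b[i] - sum(L[i][k] * y[k] for k in range(i))
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (y[i] - sum(U[i][k] * x[k] for k in range(i + 1, n))) / U[i][i]
    return x

A = [[4, -1, 2], [-1, 2, 3], [5, -7, 9]]
b = [15, 5, 8]
L, U = lu_decompose(A)
x = lu_solve(L, U, b)   # expect (4, 3, 1)
```

Once L and U are known, solving for a new right-hand side costs only the two O(n²) substitutions, which is the main practical advantage of the factorization.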
Example 32 Decompose the following matrix into lower and upper triangular matrices.

[ 1 1 1 1 ]
[ 1 2 2 2 ]
[ 1 2 3 3 ]
[ 1 2 3 4 ]

R2 = R2 − R1, R3 = R3 − R1, R4 = R4 − R1:
[ 1 1 1 1 ]
[ 0 1 1 1 ]
[ 0 1 2 2 ]
[ 0 1 2 3 ]

R3 = R3 − R2, R4 = R4 − R2:
[ 1 1 1 1 ]
[ 0 1 1 1 ]
[ 0 0 1 1 ]
[ 0 0 1 2 ]

R4 = R4 − R3:
[ 1 1 1 1 ]
[ 0 1 1 1 ]
[ 0 0 1 1 ]
[ 0 0 0 1 ] = U

The corresponding lower triangular matrix is L =
[ 1 0 0 0 ]
[ 1 1 0 0 ]
[ 1 1 1 0 ]
[ 1 1 1 1 ]
A = [ 5 −2 3 ; 0 1 7 ; 2 −1 0 ]    B = [ 1 0 1 0 ; 1 −1 1 3 ; 5 2 1 1 ; 2 0 −3 9 ]
Tridiagonal Matrices

A tridiagonal matrix is a matrix that has nonzero elements only on the diagonal and in the positions adjacent to the diagonal.

Example 33
[ −4  2  0  0  0 ]
[  1 −4  1  0  0 ]
[  0  1 −4  1  0 ]
[  0  0  1 −4  1 ]
[  0  0  0  2 −4 ]
Consider the system with this tridiagonal matrix as its matrix of coefficients, Ax = B:

−4x1 + 2x2 = 0
x1 − 4x2 + x3 = −4
x2 − 4x3 + x4 = −11
x3 − 4x4 + x5 = 5
2x4 − 4x5 = 6

The corresponding augmented matrix (A \ B) can be stored compactly as the 5 × 4 matrix

[ 0 −4 2  0  ]
[ 1 −4 1 −4  ]
[ 1 −4 1 −11 ]
[ 1 −4 1  5  ]
[ 2 −4 0  6  ]

which can easily be stored in computers; the system can then be solved using an algorithm for Gaussian elimination or using LU decomposition.
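Because only three diagonals are nonzero, elimination reduces to an O(n) sweep, often called the Thomas algorithm. The sketch below (names ours) solves the 5 × 5 system above:

```python
def thomas(sub, diag, sup, d):
    """Thomas algorithm for a tridiagonal system.
    sub[i] multiplies x[i-1], diag[i] multiplies x[i], sup[i] multiplies x[i+1]."""
    n = len(diag)
    c, b, r = sup[:], diag[:], d[:]         # work on copies
    for i in range(1, n):                   # forward elimination
        m = sub[i] / b[i - 1]
        b[i] -= m * c[i - 1]
        r[i] -= m * r[i - 1]
    x = [0.0] * n
    x[-1] = r[-1] / b[-1]
    for i in range(n - 2, -1, -1):          # back substitution
        x[i] = (r[i] - c[i] * x[i + 1]) / b[i]
    return x

# the 5x5 system above, stored as three diagonals plus the right-hand side
sub  = [0, 1, 1, 1, 2]
diag = [-4, -4, -4, -4, -4]
sup  = [2, 1, 1, 1, 0]
d    = [0, -4, -11, 5, 6]
x = thomas(sub, diag, sup, d)   # expect (1, 2, 3, -1, -2)
```

This is exactly the compact row-per-equation storage described above, and it avoids both the O(n³) work and the O(n²) storage of a dense solve.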
Example 34 The system

0.9999x − 1.0001y = 1
x − y = 1

has the solution x = 0.5, y = −0.5, and the system

0.9999x − 1.0001y = 1
x − y = 1 + ε

has the solution x = 0.5 + 5000.5ε and y = −0.5 + 4999.5ε.

Thus a change of magnitude ε produces a change in the solution of magnitude about 5000ε.
The system

a11x1 + a12x2 = b1
a21x1 + a22x2 = b2

is ill-conditioned if

a12/a11 ≈ a22/a21, i.e. a11a22 ≈ a12a21, i.e. a11a22 − a12a21 ≈ 0,

that is, if det A = a11a22 − a12a21 is "near zero".

Thus ill-conditioning can be regarded as an approach towards singularity (i.e. difficulty in obtaining an inverse or a solution).
For an ill-conditioned system Ax = B with an initial approximate solution x′ = (x′1, x′2, . . . , x′n), there corresponds the residual R = B − Ax′. Then Ax′ = B − R, and since B = Ax, we get A(x − x′) = R, whose solution x − x′ is the correction to be applied to x′. This procedure can be repeated iteratively.
• Iterative methods are used when convergence is rapid, so that the solution is obtained with much less work than with the direct methods.

• They are not sensitive to round off errors, provided that the iteration converges.

Consider the system

a11x1 + a12x2 + · · · + a1nxn = b1
a21x1 + a22x2 + · · · + a2nxn = b2
⋮
an1x1 + an2x2 + · · · + annxn = bn

with aii ≠ 0 (if not, rearrange the system so that the ith equation has a nonzero coefficient for xi).
Rewrite it in the form

xi = (1/aii) (bi − Σ_{j=1, j≠i}^{n} aij xj)

for i = 1, 2, . . . , n, and for m ≥ 0 define the iteration

xi^(m+1) = (1/aii) (bi − Σ_{j=1, j≠i}^{n} aij xj^(m))

• This process takes many iterations, if it converges at all, and is generally used only if direct methods fail.

• State a criterion for terminating the iteration, say after K (given) iterations or when |xi^(j+1) − xi^(j)| < ε, i = 1, . . . , n.

For a 3 × 3 system the first step reads

xi^(1) = (1/aii) (bi − Σ_{j=1, j≠i}^{3} aij xj^(0))
m    x1^(m)    x2^(m)    x3^(m)
0    0         0         0
1    1.4       0.5       1.4
2    1.11      1.2       1.11
3    0.929     1.055     0.929
4    0.9906    0.9645    0.9906
5    1.01159   0.9953    1.01159
6    1.00025   1.0058    1.00025

which converges to the exact solution x = (1, 1, 1)ᵀ with an error ε = 10⁻².
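The system behind this particular table is not included in this excerpt, but the Jacobi iteration itself is easy to sketch. Below it is applied to the diagonally dominant system solved later in this section (10x1 − 2x2 + x3 = 12, etc.); the function name is ours:

```python
def jacobi(A, b, x0, iters):
    """Jacobi iteration: every component of the new iterate is computed
    from the previous iterate only."""
    n = len(b)
    x = x0[:]
    for _ in range(iters):
        x = [(b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
             for i in range(n)]
    return x

# a diagonally dominant system (solved again later in this chapter)
A = [[10, -2, 1], [1, 9, -1], [2, -1, 11]]
b = [12, 10, 20]
x = jacobi(A, b, [0.0, 0.0, 0.0], 25)   # approaches (1.2624, 1.1591, 1.6940)
```

Diagonal dominance of A is a standard sufficient condition for this iteration to converge from any starting vector.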
Gauss-Seidel Method

• This is a method of successive corrections. Starting from x^(0) = (x1^(0), x2^(0), . . . , xn^(0)), define the iteration

xi^(m+1) = (1/aii) [ bi − Σ_{j=1}^{i−1} aij xj^(m+1) − Σ_{j=i+1}^{n} aij xj^(m) ],  i = 1, . . . , n

That is, each new component xi^(m+1) is immediately used in the computation of the next component.
m    x1^(m)    x2^(m)    x3^(m)
0    0         0         0
1    1.4       0.78      1.026
2    0.9234    0.99248   1.1092
3    0.99134   1.0310    0.99159
4    0.99154   0.99578   1.0021

which converges to the exact solution x = (1, 1, 1)ᵀ more quickly than the Gauss-Jacobi method.
Remark 3.2.1 If the iteration diverges, a rearrangement of the equations may produce convergence. Solve different equations for different xi (aii ≠ 0), or iterate in a different order.
10x1 − 2x2 + x3 = 12
x1 + 9x2 − x3 = 10
2x1 − x2 + 11x3 = 20

x1^(j+1) = 1.2000 + 0.2000 x2^(j) − 0.1000 x3^(j)
x2^(j+1) = 1.1111 − 0.1111 x1^(j+1) + 0.1111 x3^(j)
x3^(j+1) = 1.8182 − 0.1818 x1^(j+1) + 0.0909 x2^(j+1)
For instance, starting from x^(0) = (0, 0, 0), we obtain

j       0    1        2        3        4        5
x1^(j)  0    1.2000   1.2267   1.2624   1.2625   1.2624
x2^(j)  0    0.9778   1.1624   1.1598   1.1591   1.1591
x3^(j)  0    1.6889   1.7008   1.6941   1.6940   1.6941
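The iteration formulas above translate directly into code; the sketch below (names ours) reproduces the table:

```python
def gauss_seidel(A, b, x0, iters):
    """Gauss-Seidel: each freshly updated component is used immediately."""
    n = len(b)
    x = x0[:]
    for _ in range(iters):
        for i in range(n):
            s = sum(A[i][j] * x[j] for j in range(n) if j != i)
            x[i] = (b[i] - s) / A[i][i]   # overwrite in place
    return x

A = [[10, -2, 1], [1, 9, -1], [2, -1, 11]]
b = [12, 10, 20]
first = gauss_seidel(A, b, [0.0, 0.0, 0.0], 1)  # (1.2, 0.9778, 1.6889), as j = 1
x = gauss_seidel(A, b, [0.0, 0.0, 0.0], 5)      # close to (1.2624, 1.1591, 1.6941)
```

The only difference from the Jacobi sketch is that x is overwritten in place, which is what accelerates the convergence seen in the tables.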
Proof Let x be the exact solution of Ax = B and x^(m) the mth-step iterated solution