
Error Analysis

Round off Errors


• All calculating devices perform finite-precision arithmetic, i.e. they cannot handle more than a specified number of digits.

• If a computer can handle numbers of at most 's' digits, then the product of two such numbers, which in reality has 2s or 2s−1 digits, can only be stored, and hence used in subsequent calculations, up to its first s digits.

• The effect of such rounding off may accumulate over extensive calculations. If the algorithm is unstable, this can rapidly lead to useless results, as seen earlier.
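As a small illustration (added here, not part of the original slides), the Python sketch below mimics an s = 3 digit machine by rounding every result to 3 significant digits; the helper name fl3 is made up for this example.

    def fl3(x):
        # Round x to 3 significant digits, mimicking a hypothetical s = 3 digit machine.
        return float(f"{x:.3g}")

    a, b = 1.23, 4.56     # two 3-digit operands
    exact = a * b         # 5.6088 -- the true product needs more than 3 digits
    stored = fl3(exact)   # 5.61   -- all the 3-digit machine can keep and reuse
    print(exact, stored, exact - stored)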
Example of round-off error

We have already seen examples of round-off errors when we tried to integrate numerically

$$y_n = \int_0^1 \frac{x^n}{x+5}\,dx$$

using the recursion formula, and with s = 3:

$$y_n = \frac{1}{n} - 5\,y_{n-1}$$
Example of round-off error

$$y_0 = \int_0^1 \frac{dx}{x+5} \approx .182$$

$$y_1 = 1 - 5y_0 = .090$$

$$y_2 = \tfrac{1}{2} - 5y_1 = .050$$

$$y_3 = \tfrac{1}{3} - 5y_2 = .083$$

$$y_4 = \tfrac{1}{4} - 5y_3 = -.165$$

• Round-off errors lead to meaningless results due to the instability of the algorithm.
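The instability can be reproduced with a minimal Python sketch (added for illustration, not part of the slides), carrying s = 3 decimals at every step; the exact values of y_n are small and positive, while the recursion quickly produces nonsense.

    import math

    def forward_recursion(n_max, decimals=3):
        # y_n = 1/n - 5*y_{n-1}, rounding every iterate to 'decimals' places (s = 3).
        y = round(math.log(6 / 5), decimals)   # y_0 = ln(6/5) ~ 0.182
        values = [y]
        for n in range(1, n_max + 1):
            y = round(1 / n - 5 * y, decimals)
            values.append(y)
        return values

    print(forward_recursion(6))   # the iterates soon turn negative and blow up, e.g. y_4 ~ -0.165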
Truncation Errors

• Truncation errors occur when a limiting process is truncated before it has reached its limiting value.

• A typical example is a series which converges after 'n' terms but is truncated, i.e. cut off, after the first n/2 terms. The last n/2 terms represent the truncation error in this case.

• Example: the series $\sum_{n=1}^{\infty} \frac{1}{2^n}$ converges to 1. It converges to this value (with each term evaluated up to 6 decimal places) on summing the series up to 22 terms. If instead we sum only 11 terms, our truncation error would be $2^{-11} \approx 4.88 \times 10^{-4}$.
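The truncation error of this series can be checked with a short sketch (added here, not from the slides):

    exact = 1.0                                  # sum of 2**-n for n >= 1
    for terms in (11, 22):
        partial = sum(2.0**-n for n in range(1, terms + 1))
        print(terms, partial, exact - partial)   # errors: 2**-11 ~ 4.88e-4 and 2**-22 ~ 2.4e-7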
Example of truncation error

• Whenever a non-linear function is linearized, we get a truncation error. Its magnitude is governed by the magnitude of the largest nonlinear term, typically the quadratic term.

• A nonlinear function $f(x)$, linearized about $x_0$:

$$f(x) \approx f(x_0) + \frac{df}{dx}\bigg|_{x_0}(x - x_0) + \underbrace{\text{quadratic terms}}_{\text{truncation error}}$$
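A small numerical sketch of this (added, not from the slides), taking f(x) = e^x and x0 = 0 as an assumed example: the error of the linearization tracks the neglected quadratic term for small steps.

    import math

    x0 = 0.0
    for x in (0.1, 0.05, 0.025):
        f = math.exp(x)
        linear = math.exp(x0) + math.exp(x0) * (x - x0)   # f(x0) + f'(x0)*(x - x0)
        quad = 0.5 * math.exp(x0) * (x - x0) ** 2         # leading neglected (quadratic) term
        print(x, f - linear, quad)                        # truncation error ~ quadratic term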

• To see a further example of truncation error, we consider numerical integration of the definite integral:

$$I = \int_a^b y(x)\,dx$$
Example of truncation error

• Suppose we use the secant approximation to evaluate the integral.

• The secant method, we recall, requires approximating the non-linear function by straight lines connecting successive function values.

• We assume that the 'step size' is constant, i.e. successive values of the independent variable x, e.g. x_n and x_{n−1}, differ by the constant step size 'h'.

• Thus the area between the curve y = f(x) and the x-axis is approximated by the sum I(h) of the areas of a series of trapezoids, each with a base width of h.
Trapezoidal rule

• Hence this integration scheme is called the 'trapezoidal rule'.

[Figure: the curve y = f(x) approximated by straight-line (trapezoidal) segments through the ordinates y0, y1, ..., y5, spaced a constant step h apart along the x-axis.]
Trapezoidal rule: truncation error

$$I(h) = \tfrac{1}{2}(y_0 + y_1)h + \tfrac{1}{2}(y_1 + y_2)h + \tfrac{1}{2}(y_2 + y_3)h + \tfrac{1}{2}(y_3 + y_4)h + \tfrac{1}{2}(y_4 + y_5)h$$

• The 'truncation error', the error due to approximating the nonlinear variation of y = f(x) with a series of 'linear variations', is of the order h² when h is small.

• Why is the error of the order h²?
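The order h² behaviour can be checked empirically with a short sketch (added for illustration, not from the slides), using the integral of x³ from 0 to 1 (= 1/4) as an assumed test case: halving h cuts the error by roughly a factor of 4.

    def trapezoid(f, a, b, n):
        # Composite trapezoidal rule with n panels of width h = (b - a) / n.
        h = (b - a) / n
        return h * (0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n)))

    f, exact = (lambda x: x ** 3), 0.25
    for n in (4, 8, 16):                              # step sizes h, h/2, h/4
        print(n, exact - trapezoid(f, 0.0, 1.0, n))   # each error is roughly 1/4 of the previous one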
Reducing truncation error

• This means that the error I − I(2h) for a step size of 2h should be very nearly proportional to 4h², since the error I − I(h) for a step size of h is very nearly proportional to h².

• Hence:

$$\frac{I - I(h)}{I - I(2h)} \approx \frac{1}{4}$$

• Thus one can achieve arbitrarily high accuracy by choosing a sufficiently small step size h.

• However, reducing h means an increased number of function evaluations and hence higher computational expense.

• Also, for very small h, there may be an increase in round-off errors in the computations. (Why?)
Reducing truncation error

• In order to avoid having to take very small step sizes 'h', one can try approximating y(x) by a higher-order polynomial instead of a piecewise linear function.

• For instance, the area under the curve y(x) may be evaluated by approximating y(x) with piecewise quadratic functions.

• The 'truncation error' would now be of the order h³ rather than h².

• Now,

$$\frac{I - I(h)}{I - I(2h)} \approx \frac{1}{8}$$
Reducing truncation error

• That would be an example of polynomial or 'p' refinement.

• Continuing with the secant (piecewise linear) approximation while reducing the size of the step 'h' is an example of 'h' refinement.

• An alternative and highly efficient method for reducing truncation error exists.

• Its basic idea is due to Lewis Fry Richardson and is known as 'Richardson's extrapolation'.

• For the current problem, it says that if we evaluate I using the trapezoidal rule for two different values of h, we can drastically reduce the truncation error.
Richardson's Extrapolation

• By evaluating the integral for two step sizes h and 2h and combining the results, we can come up with a vastly improved solution.

• For example, for the secant approximation we know:

$$I(h) \approx I + Kh^2 \quad \text{and} \quad I(2h) \approx I + 4Kh^2$$

where K is a proportionality constant.

$$\Rightarrow\; 4\bigl(I(h) - I\bigr) \approx I(2h) - I \;\Rightarrow\; 3I \approx 4I(h) - I(2h)$$

This leads to

$$I \approx I(h) + \underbrace{\tfrac{1}{3}\bigl[I(h) - I(2h)\bigr]}_{\text{correction term}}$$

• Thus by adding a correction term to I(h) we can get a much better approximation to I, with far less error than I(h).
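A brief sketch of the correction (added for illustration, not from the slides), using the integral of e^x over [0, 1] (= e − 1) as an assumed test case:

    import math

    def trapezoid(f, a, b, n):
        h = (b - a) / n
        return h * (0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n)))

    f, exact = math.exp, math.e - 1.0
    I_h = trapezoid(f, 0.0, 1.0, 8)            # step size h
    I_2h = trapezoid(f, 0.0, 1.0, 4)           # step size 2h
    I_rich = I_h + (I_h - I_2h) / 3.0          # trapezoidal result plus the correction term
    print(exact - I_h, exact - I_2h, exact - I_rich)   # the corrected error is far smaller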
Lewis Fry Richardson
Lewis Fry Richardson, FRS (1881–1953) was an English
mathematician, physicist, meteorologist, psychologist and pacifist
who pioneered modern mathematical techniques of weather
forecasting. He is also noted for his pioneering work
concerning fractals and a method for solving a system of linear
equations known as modified Richardson iteration.
Richardson's Quaker beliefs entailed an ardent pacifism that
exempted him from military service during World War I as
a conscientious objector, though this subsequently disqualified him
from having any academic post. Richardson worked from 1916 to
1919 for an ambulance unit. After the war, he rejoined the
Meteorological Office but was compelled to resign on grounds of
conscience when it was amalgamated into the Air Ministry in 1920.
He subsequently pursued a career on the fringes of the academic
world before retiring in 1940 to research his own ideas.
His pacifism had direct consequences on his research
interests. The discovery that his meteorological work was of value
to chemical weapons designers caused him to abandon all his
efforts in this field, and destroy findings that he had yet to publish.
(Source: Wikipedia)
Examples: Richardson's extrapolation

1. Compute $\int_{10}^{12} x^3\,dx$ and $\int_{10}^{12} x^4\,dx$ using h = 1 and h = 2. Then use Richardson's extrapolation to improve the result. Compare answers with exact solutions.

2. Consider $f(x) = x\,e^{x}$ with $x_0 = 2.0$, h = 0.1 and h = 0.2. Compute the first derivative using the central difference approximation and Richardson's extrapolation. The central difference approximation is:

$$f'(x) \approx \frac{f(x+h) - f(x-h)}{2h}$$

Compare the original, improved and exact results.

3. Approximate the derivative of the function $f(x) = e^{-x}\sin x$ at the point $x_0 = 1.0$ using central differences and step sizes of h = 0.25 and h = 0.5. Compare the original, improved and exact results.
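A possible sketch for exercise 2 (added here; this is not the slides' worked solution). Since the central-difference error is also of order h², the same one-third correction applies:

    import math

    def central_diff(f, x, h):
        # Central difference approximation to f'(x).
        return (f(x + h) - f(x - h)) / (2.0 * h)

    f = lambda x: x * math.exp(x)
    exact = (1.0 + 2.0) * math.exp(2.0)        # f'(x) = (1 + x) e^x, evaluated at x0 = 2
    D_h, D_2h = central_diff(f, 2.0, 0.1), central_diff(f, 2.0, 0.2)
    D_rich = D_h + (D_h - D_2h) / 3.0          # Richardson's extrapolation
    print(exact - D_h, exact - D_2h, exact - D_rich)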
Error definitions

• Before entering into further details of error analysis, it is appropriate to define a few terms which we will need in our discussion.

• Let $\tilde{a}$ be a numerical approximation for a quantity whose exact value is $a$.

• Then the absolute error is

$$\tilde{a} - a$$

• The relative error is

$$\frac{\tilde{a} - a}{a} \quad \text{if } a \neq 0$$
Error Bounds

• A bound on the magnitude of the error is known as an error bound.

• Thus a bound on the absolute error is a bound on:

$$\left|\tilde{a} - a\right|$$

• A bound on the relative error is a bound on:

$$\left|\frac{\tilde{a} - a}{a}\right|$$
Error Bounds

• The above error definitions hold in general for vectors and matrices as well as scalars.

• However, for them to hold for vectors and matrices we have to define a proper magnitude (norm) for the vector or matrix.

• The norm can then be used to define error bounds for the vector or matrix.

• The error norm is a numerical value. When dealing with numerical values we should be clear about the distinction between the number of decimals and the number of significant digits in a number.
Decimals vs. Digits

• The number of significant digits in a numerical value does not include the zeros at the beginning of the number, as these zeros only help denote where the decimal point should be.

• However, when counting the number of decimals, one has to include the leading zeros to the right of the decimal point.

• Example: .00548 has 3 significant digits but 5 decimals.
Magnitude of error

• If the magnitude of the error (whether absolute or relative) in the numerical result does not exceed

$$\tfrac{1}{2} \times 10^{-t}$$

then the numerical approximation, say $\tilde{a}$, can be assured to have t correct decimals.

Example:
If the absolute error does not exceed, say, $\tfrac{1}{2} \times 10^{-3}$, or 0.0005, then we are certain that the numerical approximation has 3 correct decimals.
Magnitude of error

Example: Given a numerical solution .001234 and an error magnitude of .000004, we can see that:

$$.000004 < .000005 = \tfrac{1}{2} \times 10^{-5}$$

• Hence here t = 5, and the solution .001234 is correct to 5 decimals.

Verify:
Suppose the error magnitude 0.000004 is the magnitude of the absolute error.
Then the largest possible exact value = .001238 and the smallest possible exact value = .001230.
Hence the exact value lies between .001230 and .001238.
Magnitude of error

• The numerical solution of 0.001234 therefore must be correct up to 5 decimal places.

• If instead the absolute error were $0.000006 > \tfrac{1}{2} \times 10^{-5}$, then the largest possible exact value = .001240 and the smallest possible exact value = .001228.

• The numerical solution 0.001234 can then have an error in the 5th decimal place. In that case we can only be certain that it is correct to 4 decimal places.

• The number of digits in $\tilde{a}$ which occupy positions where the unit is greater than or equal to $\tfrac{1}{2} \times 10^{-t}$ (other than zeros at the beginning of the number) is the number of significant digits of $\tilde{a}$.
Magnitude of error

• The number of digits in .001234 which occupy positions where the unit is greater than or equal to $\tfrac{1}{2} \times 10^{-5}$ are the first 5 digits.

• However, of those first 5 digits, the first 2 are zeros. Hence the number of significant digits is 5 − 2 = 3.

• We saw earlier that numerical solutions can be severely affected by round off. However, the "way" the rounding off is done can affect the severity of the errors arising from it.

• If one simply "chops" off all decimals to the right of the t-th decimal place, then the round-off errors tend to accumulate.
Effects of chopping

• This is because the errors consistently have a sign opposite to the true value.

• For instance, if in the steps of an iterative scheme the iterate a is known to be always positive, then after round off by "chopping", the error $\tilde{a} - a$ is always negative.

• Thus, the error from one "chopping" operation has no chance of cancelling out the error from a subsequent "chopping" operation.

• Similarly, if the iterate a is known to be always negative, then after round off by "chopping", $\tilde{a} - a$ is always positive, so again the errors from repeated round-off operations during the iteration keep accumulating.
Effects of chopping

• Depending on the number of significant digits, the accumulation of round-off errors may result in the magnitude of the cumulative error approaching the true solution. This will result in meaningless numerical solutions.

• Round-off errors due to chopping after t decimals can lead to errors as large as $10^{-t}$.

• This can be seen, for instance, if .5559999... is chopped off after 3 decimals. In that case the limiting value of the error is $1 \times 10^{-3}$.
Rounding vs. Chopping

• Thus 'chopping' has severe problems. As an alternative to 'chopping', many computers perform 'rounding'.

• The rules for rounding are as follows. If the magnitude of the error is not to exceed $\tfrac{1}{2} \times 10^{-t}$, then:

1. If the part of the number to the right of the t-th decimal place is less than $\tfrac{1}{2} \times 10^{-t}$, the t-th decimal place is left unchanged.

2. If the part of the number to the right of the t-th decimal place is greater than $\tfrac{1}{2} \times 10^{-t}$, then the t-th decimal place is raised by 1.
Rules for Rounding

3. If the part of the number to the right of the t-th decimal place is exactly equal to $\tfrac{1}{2} \times 10^{-t}$, then the t-th decimal place is raised by 1 if it is odd, and left unchanged if it is even.

• Since the probability of the t-th decimal place being odd or even is equal (each probability being one half), if rule 3 is followed the resulting error will be positive or negative equally often.

• The errors will thus tend to cancel and not accumulate.

• If the above rules are followed, then the error due to round off lies in the interval

$$\left[-\tfrac{1}{2} \times 10^{-t},\; \tfrac{1}{2} \times 10^{-t}\right]$$
Superiority of rounding

• This has to be compared with the fact that 'chopping' can lead to errors of magnitude as large as $10^{-t}$.

• The superiority of 'rounding' over 'chopping' is thus clearly established.

• This is because rounding errors have a greater tendency to cancel each other out. This was shown explicitly in the case of rounding rule (3).

• However, the combined effect of rules (1) and (2) also favours cancellation of errors.

• This is because of the following reason.


Superiority of rounding

• The probability that the part of the number beyond the t-th decimal place is greater than $\tfrac{1}{2} \times 10^{-t}$ is equal to the probability that it is less than $\tfrac{1}{2} \times 10^{-t}$.

• Hence, the errors arising from the application of rules (1) and (2) have opposite signs, and so tend to cancel.

• Irrespective of whether rounding or chopping is used, for a numerical algorithm with a large number of operations, round-off errors due to one operation are bound to propagate to subsequent operations.
Bounds on errors

• The subject of error propagation is complex. However, it is possible to come up with relatively simple bounds on the error for basic operations such as addition, subtraction, multiplication and division.

• These bounds can allow estimation of the error in a final numerical solution, if the error at each intermediate step can be calculated.

• Consider, for instance, $\tilde{x}_1$ and $\tilde{x}_2$, which are numerical approximations for $x_1$ and $x_2$.

• Suppose that the bound on the absolute error in $x_1$ is $\varepsilon_1$ and the bound on the absolute error in $x_2$ is $\varepsilon_2$.
Bounds on Addition & Subtraction

Then we can write:

$$\tilde{x}_1 - \varepsilon_1 \le x_1 \le \tilde{x}_1 + \varepsilon_1, \qquad \tilde{x}_2 - \varepsilon_2 \le x_2 \le \tilde{x}_2 + \varepsilon_2$$

Hence the smallest possible value for $x_1$ is

$$\tilde{x}_1 - \varepsilon_1$$

and the largest possible value for $x_1$ is

$$\tilde{x}_1 + \varepsilon_1$$
Bounds on Addition & Subtraction
Similarly the smallest possible value for x2

x2  ~
x2   2
while the largest possible value for x2
~
x2  x2   2
Since x1  x2 cannot be greater than the largest
value of x1,minus the smallest value of x2, we can
write:

1 2 1 1 2 2 )

32
Bounds on Addition & Subtraction

Similarly, since $x_1 - x_2$ cannot be less than the smallest value of $x_1$ minus the largest value of $x_2$, we can write:

$$x_1 - x_2 \ge (\tilde{x}_1 - \varepsilon_1) - (\tilde{x}_2 + \varepsilon_2) = (\tilde{x}_1 - \tilde{x}_2) - (\varepsilon_1 + \varepsilon_2) \qquad (2)$$

Combining eqns. (1) and (2) we get:

$$(\tilde{x}_1 - \tilde{x}_2) - (\varepsilon_1 + \varepsilon_2) \le x_1 - x_2 \le (\tilde{x}_1 - \tilde{x}_2) + (\varepsilon_1 + \varepsilon_2) \qquad (3)$$
Bounds on Addition & Subtraction

Using a similar argument, it can be shown that:

$$(\tilde{x}_1 + \tilde{x}_2) - (\varepsilon_1 + \varepsilon_2) \le x_1 + x_2 \le (\tilde{x}_1 + \tilde{x}_2) + (\varepsilon_1 + \varepsilon_2) \qquad (4)$$

Combining (3) and (4) we get:

$$\left|(x_1 - x_2) - (\tilde{x}_1 - \tilde{x}_2)\right| \le \varepsilon_1 + \varepsilon_2$$

$$\left|(x_1 + x_2) - (\tilde{x}_1 + \tilde{x}_2)\right| \le \varepsilon_1 + \varepsilon_2$$
Bounds on Addition & Subtraction

• We can state the above result as the following theorem:

"The absolute error due to addition or subtraction of two real numbers is bounded by the sum of the bounds on the absolute error in each number."
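A quick numerical check of the theorem (an added sketch, not from the slides), using assumed values x1 = 2.00 and x2 = 3.00, each correctly rounded to 2 decimals:

    e1 = e2 = 0.005                    # absolute-error bounds for correctly rounded 2-decimal values
    x1, x2 = 2.00, 3.00
    worst_diff = max(abs(((x1 + s1 * e1) - (x2 + s2 * e2)) - (x1 - x2))
                     for s1 in (-1, 1) for s2 in (-1, 1))
    print(worst_diff, e1 + e2)         # the worst-case error of x1 - x2 matches e1 + e2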

• A similar bound can be obtained for multiplication or division of two real numbers, but in that case the bound involves relative rather than absolute errors.
Bounds on Multiplication & Division

Recall that the relative error r due to the approximation of the real number $x$ by $\tilde{x}$ is defined as:

$$r = \frac{\tilde{x} - x}{x} \;\Rightarrow\; \tilde{x} = x(1 + r)$$

Suppose $\tilde{x}_1$ and $\tilde{x}_2$ are approximate numerical solutions for $x_1$ and $x_2$, and $r_1$ and $r_2$ are the relative errors in $x_1$ and $x_2$.
Bounds on Multiplication & Division

Then we can write:

$$\tilde{x}_1 \tilde{x}_2 = x_1(1 + r_1)\,x_2(1 + r_2) = x_1 x_2 (1 + r_1)(1 + r_2)$$

Thus the relative error in the product $x_1 x_2$ is:

$$(1 + r_1)(1 + r_2) - 1 = r_1 + r_2 + r_1 r_2 \approx r_1 + r_2, \quad \text{if } |r_1| \ll 1,\ |r_2| \ll 1$$

Similarly, the relative error in the quotient $x_1 / x_2$, from

$$\frac{\tilde{x}_1}{\tilde{x}_2} = \frac{x_1(1 + r_1)}{x_2(1 + r_2)}$$

is found to be:

$$\frac{1 + r_1}{1 + r_2} - 1 = \frac{r_1 - r_2}{1 + r_2} \approx r_1 - r_2 \quad \text{if } |r_1| \ll 1,\ |r_2| \ll 1$$
Bounds on Multiplication & Division

Thus if $\varepsilon_1$ and $\varepsilon_2$ are bounds on the relative errors $r_1$ and $r_2$, then it is clear that $\varepsilon_1 + \varepsilon_2$ is the best upper bound for both $|r_1 + r_2|$ and $|r_1 - r_2|$.

Thus we get the following theorem:

"In multiplication and division of real numbers, the bound on the relative error due to the operation is equal to the sum of the bounds on the relative errors in the operands."
Example: maximum and relative error

1. Round off to 3 decimals: (a) 0.2397 (b) −0.2397 (c) 0.23750 (d) 0.23650 (e) 0.23652

2. Compute to three correct decimals: (a) 1.3134. (b) 0.3761.e (c) .e

3. Determine the maximum error and relative error in the following results, where x = 2.00, y = 3.00, z = 4.00 are correctly rounded:
(a) f = 3x + y − z   (b) f = x·(y/z)   (c) f = x·sin(y/40)
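A possible sketch for part 3(a) (added here; not the slides' solution): x, y, z are correctly rounded to 2 decimals, so each carries an absolute-error bound of 0.005, and the bound for f = 3x + y − z follows from the addition/subtraction theorem above.

    x, y, z = 2.00, 3.00, 4.00
    ex = ey = ez = 0.005                   # error bounds of the correctly rounded inputs
    f = 3 * x + y - z
    max_err = 3 * ex + ey + ez             # the bounds add; the factor 3 scales the bound on x
    print(f, max_err, max_err / abs(f))    # value, maximum (absolute) error, relative error bound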
Uses of error analysis

• Error analysis enables the user to make crucial decisions that have an important bearing on the accuracy and efficiency of the calculation.

• For instance, consider the case where the product x1·x2 is to be calculated, x1 is known only up to 3 significant digits, and the true value of x1 is of order one.

• We now need to calculate x2. From error analysis it is clear that it does not make a lot of sense to use numerical methods that calculate x2 to, say, 6 significant digits.
Uses of error analysis

• This is because we know that the relative error in x1·x2 may be as high as $\tfrac{1}{2} \times 10^{-3}$ (see the earlier slides on decimals vs. digits and the magnitude of error).

• This follows from the fact that the bound on the relative error in x1·x2 is given by the sum of the bounds on the relative errors in x1 and x2.

• However, it does make a lot of sense to ensure that the relative error in the calculation of x2 is not in excess of $\tfrac{1}{2} \times 10^{-3}$, since exceeding that is likely to worsen the relative error in the product.

• The computational cost involved in the numerical calculation of x2 depends on the degree of accuracy with which we wish to calculate x2. Why?
Uses of error analysis

• Hence, knowledge of how the error bounds behave enables us to improve the efficiency as well as the accuracy of the numerical analysis.
Error due to cancellation of terms

• Numerical calculations involving subtraction of two numbers, where the difference between the operands is much less than either operand, are prone to large errors.

• Suppose we wish to calculate $y = x_1 - x_2$, and the errors in the calculation of $x_1$ and $x_2$ are denoted by $\Delta x_1$ and $\Delta x_2$. Then the error in y, $\Delta y$, satisfies:

$$|\Delta y| \le |\Delta x_1| + |\Delta x_2|$$

$$\therefore \quad \frac{|\Delta y|}{|y|} \le \frac{|\Delta x_1| + |\Delta x_2|}{|x_1 - x_2|}$$

(since the product of absolute values is equal to the absolute value of the product)
Error due to cancellation of terms

• The relative error in y can therefore be very large, irrespective of the errors in x1 and x2, if x1 − x2 is very small, i.e. if x1 and x2 are nearly equal.

• This is seen to be true for the absolute error as well, as can be seen from the following example:

$x_1 = .5764$, and the absolute error in $x_1$ is $.5 \times 10^{-4}$;
$x_2 = .5763$, and the absolute error in $x_2$ is also $.5 \times 10^{-4}$.

• Then $x_1 - x_2 = .0001$, but the error bound on the numerical approximation of $x_1 - x_2$, in relative terms, is:

$$(.5 \times 10^{-4} + .5 \times 10^{-4}) / .0001 = 1.0$$

• Thus the error bound on $x_1 - x_2$ is as large as $x_1 - x_2$ itself.
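A brief sketch of the same example in Python (added for illustration, not from the slides):

    x1, x2 = 0.5764, 0.5763      # each value uncertain by about 0.5e-4
    diff = x1 - x2               # ~0.0001: nearly all significant digits cancel
    err_bound = 0.5e-4 + 0.5e-4  # bound on the absolute error of the difference
    print(diff, err_bound, err_bound / diff)   # relative error bound of the difference ~ 1.0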
Error due to cancellation of terms

• Hence the error bound is as large as the estimate of the result itself!

• One can avoid errors due to cancellation of terms by rewriting formulas to avoid subtraction between nearly equal terms.
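A classic illustration of such a rewrite (an added sketch, not from the slides): for large x, sqrt(x + 1) − sqrt(x) suffers cancellation, while the algebraically equivalent form 1 / (sqrt(x + 1) + sqrt(x)) does not.

    import math

    x = 1.0e12
    naive = math.sqrt(x + 1) - math.sqrt(x)               # subtraction of nearly equal numbers
    rewritten = 1.0 / (math.sqrt(x + 1) + math.sqrt(x))   # same quantity, no cancellation
    print(naive, rewritten)   # the rewritten form keeps full precision; the naive one loses digits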
