Lecture 3.1
y₁ = y₀ + … = 0.090
y₂ = y₁ + … = 0.050
y₃ = y₂ + … = 0.083
y₄ = y₃ + … = 0.165
It converges to this value (with each term evaluated to
6 decimal places) on summing the series up to 22
terms. If instead we sum only 11 terms, the truncation
error would be 4.88E-4
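The quoted truncation error 4.88E-4 equals 2⁻¹¹, so the behaviour can be sketched with a geometric series whose terms halve at each step — an assumed stand-in series, not necessarily the one used in the lecture:

```python
# Sketch: truncation error from summing a convergent series to finitely many
# terms. The geometric series with ratio 1/2 is an assumed stand-in; its
# truncation error after n terms is exactly 2**-n.
def partial_sum(n_terms):
    return sum(2.0**-k for k in range(1, n_terms + 1))

limit = 1.0  # sum of 1/2 + 1/4 + 1/8 + ...
print(limit - partial_sum(22))   # below 5e-7: all 6 decimals have settled
print(limit - partial_sum(11))   # 2**-11, i.e. 4.88e-4
```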
Example of truncation error
• Whenever a non-linear function is linearized, we
incur a truncation error. Its magnitude is governed by
the magnitude of the largest neglected nonlinear term,
typically the quadratic term.
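A quick sketch of this claim, using f(x) = eˣ as an assumed example function linearized about x = 0:

```python
import math

# Sketch: truncation error of a linearization. The function f(x) = exp(x),
# linearized about x = 0 as 1 + x, is an assumed example; the neglected
# quadratic term x**2/2 governs the error magnitude.
def linearization_error(x):
    return math.exp(x) - (1.0 + x)

for x in (0.1, 0.01, 0.001):
    print(x, linearization_error(x), x**2 / 2)
# Shrinking x by a factor of 10 shrinks the error roughly 100-fold,
# exactly as the quadratic term predicts.
```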
I = ∫ₐᵇ y(x) dx
Example of truncation error
•Suppose we use the secant approximation to
evaluate the integral.
[Figure: trapezoidal (secant) approximation of y = f(x), with ordinates y₀ … y₅ at equal spacing h.]
Trapezoidal rule: truncation error
I(h) = ½(y₀ + y₁)h + ½(y₁ + y₂)h + ½(y₂ + y₃)h + ½(y₃ + y₄)h + ½(y₄ + y₅)h
     = h(½y₀ + y₁ + y₂ + y₃ + y₄ + ½y₅)
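The sum above can be written as a short function; the sample integrand x² on [0, 1] is an assumption for illustration:

```python
# Sketch of the composite trapezoidal rule, written exactly as the sum above:
# I(h) = (1/2)(y0 + y1)h + (1/2)(y1 + y2)h + ... + (1/2)(y4 + y5)h
def trapezoid(y, h):
    """y: ordinates y0..yn at equal spacing h."""
    return sum(0.5 * (y[i] + y[i + 1]) * h for i in range(len(y) - 1))

# Usage with an assumed integrand f(x) = x**2 on [0, 1], six ordinates (h = 0.2):
h = 0.2
y = [(i * h) ** 2 for i in range(6)]
print(trapezoid(y, h))   # close to the exact integral 1/3
```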
•Now, (I − I(h)) / (I − I(2h)) ≈ 1/4: the truncation error of the
composite trapezoidal rule is O(h²), so halving the spacing h
reduces the error by about a factor of 4.
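This error ratio can be checked numerically. The composite trapezoidal error is O(h²), so halving h should cut the error by roughly 4; the test integrand sin x on [0, π] is an assumption:

```python
import math

# Check: for the composite trapezoidal rule the error I - I(h) is O(h**2),
# so halving h cuts the error by about a factor of 4.
# Test integrand (assumed): sin(x) on [0, pi], whose exact integral is 2.
def trap_integral(f, a, b, n):
    h = (b - a) / n
    y = [f(a + i * h) for i in range(n + 1)]
    return sum(0.5 * (y[i] + y[i + 1]) * h for i in range(n))

I_exact = 2.0
e_2h = I_exact - trap_integral(math.sin, 0.0, math.pi, 8)    # spacing 2h
e_h  = I_exact - trap_integral(math.sin, 0.0, math.pi, 16)   # spacing h
print(e_2h / e_h)   # roughly 4
```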
Reducing truncation error
•That would be an example of polynomial or ‘p’
refinement.
Example:
If the absolute error does not exceed, say, ½ × 10⁻³ or
0.0005, then we are certain that the numerical
approximation has 3 correct decimals.
Magnitude of error
Example: Given a numerical solution .001234 and an
error magnitude of .000004, we can see that:
.000004 < .000005 = ½ × 10⁻⁵
Verify: Suppose the error magnitude 0.000004 is the
magnitude of the absolute error.
Then the largest possible exact value = .001238 and
the smallest possible exact value = .001230
Hence the exact value lies between .001230 and
.001238
Magnitude of error
•The numerical solution of 0.001234 therefore must
be correct up to 5 decimal places.
•If instead the absolute error were 0.000006 > ½ × 10⁻⁵:
•Then the largest possible exact value = .001240 and
the smallest possible exact value = .001228.
•The numerical solution 0.001234 can have an error
in the 5th decimal place. In that case we can only be
certain that it is correct to 4 decimal places.
•The number of digits in ã which occupy positions
where the unit is greater than or equal to ½ × 10⁻ᵗ
(other than zeros at the beginning of the number)
is the number of significant digits of ã.
Magnitude of error
•The number of digits in .001234 which occupy
positions where the unit is greater than or equal to
½ × 10⁻⁵ is 5 (the first 5 digits).
•However, of those first 5 digits, the first 2 are
zeros. Hence the number of significant digits is 5 − 2 = 3.
•We saw earlier that numerical solutions can be
severely affected by round off. However the “way”
round off is done can affect the severity of the errors
arising from it.
•If one simply “chops” off all decimals to the right of
the t-th decimal place, then the round-off errors tend to
accumulate.
Effects of chopping
•This is because the errors consistently have a sign
opposite to the true value.
•For instance, if in the steps of an iterative scheme the
iterate a is known to be always positive, then after round
off by “chopping”, the error ã − a is always negative.
•Thus, the error from one “chopping” operation has no
chance of cancelling out the error from a subsequent
“chopping” operation.
•Similarly, if the iterate a is known to be always negative,
then after round off by “chopping”, ã − a is always positive,
so again the errors from repeated round-off operations
during the iteration keep accumulating.
Effects of chopping
•Depending on the number of significant digits, the
accumulation of round-off errors may result in the
magnitude of the cumulative error approaching that of the
true solution. This renders the numerical solution
meaningless.
•If the above rules are followed, then the error due to
round off lies in the interval [−½ × 10⁻ᵗ, ½ × 10⁻ᵗ]
Superiority of rounding
•This has to be compared to the fact that ‘chopping’
can lead to errors of magnitude as large as 10⁻ᵗ.
Bounds on Addition & Subtraction
Similarly, the smallest possible value for x₂ is
x̃₂ − ε₂
while the largest possible value for x₂ is
x̃₂ + ε₂
Since x₁ − x₂ cannot be greater than the largest
value of x₁ minus the smallest value of x₂, we can
write:
x₁ − x₂ ≤ (x̃₁ + ε₁) − (x̃₂ − ε₂) = (x̃₁ − x̃₂) + (ε₁ + ε₂)
Bounds on Addition & Subtraction
Similarly, since x₁ − x₂ cannot be less than the smallest
value of x₁ minus the largest value of x₂:
x₁ − x₂ ≥ (x̃₁ − ε₁) − (x̃₂ + ε₂) = (x̃₁ − x̃₂) − (ε₁ + ε₂)
Bounds on Addition & Subtraction
(x₁ − x₂) − (x̃₁ − x̃₂) ≤ ε₁ + ε₂
(x₁ − x₂) − (x̃₁ − x̃₂) ≥ −(ε₁ + ε₂)
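The subtraction bounds can be checked numerically. The approximate values and error radii below are assumed sample numbers:

```python
# Sketch: worst-case bounds for a subtraction x1 - x2 when each operand is
# known only to within an error radius (values below are assumed samples).
x1a, eps1 = 0.5764, 0.5e-4   # true x1 lies in [x1a - eps1, x1a + eps1]
x2a, eps2 = 0.5763, 0.5e-4   # true x2 lies in [x2a - eps2, x2a + eps2]

largest  = (x1a + eps1) - (x2a - eps2)   # largest x1 minus smallest x2
smallest = (x1a - eps1) - (x2a + eps2)   # smallest x1 minus largest x2
approx   = x1a - x2a

print(smallest, approx, largest)
# The true difference deviates from approx by at most eps1 + eps2.
```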
Bounds on Addition & Subtraction
•We can state the above result as the following theorem:
Bounds on Multiplication & Division
Bounds on Multiplication & Division
Then we can write:
x̃₁ x̃₂ = x₁(1 + r₁) · x₂(1 + r₂) = x₁x₂(1 + r₁)(1 + r₂)
The factor (1 + r₁)(1 + r₂) is found to be:
1 + r₁ + r₂ + r₁r₂ ≈ 1 + r₁ + r₂ if |r₁| ≪ 1, |r₂| ≪ 1
For division, the corresponding factor is (1 + r₁)/(1 + r₂).
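A quick check of the multiplication result; the exact values and the small relative errors r₁, r₂ below are assumptions:

```python
# Sketch: relative error of a product. If x1~ = x1(1+r1) and x2~ = x2(1+r2),
# then x1~ * x2~ = x1*x2*(1+r1)(1+r2), i.e. relative error r1 + r2 + r1*r2.
x1, x2 = 3.7, 1.9          # assumed exact values
r1, r2 = 1e-3, -2e-3       # assumed small relative errors

x1a = x1 * (1 + r1)
x2a = x2 * (1 + r2)

rel_err = (x1a * x2a - x1 * x2) / (x1 * x2)
print(rel_err, r1 + r2 + r1 * r2)   # equal; ~ r1 + r2 when |r1|, |r2| << 1
```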
Bounds on Multiplication & Division
Example: maximum and relative error
1. Round off to 3 decimals: (a) 0.2397 (b) −0.2397 (c) 0.23750
(d) 0.23650 (e) 0.23652
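These can be checked with Python's decimal module. For the exact ties in (c) and (d) this assumes the round-half-even tie-breaking rule, which may differ from the convention used in the lecture:

```python
from decimal import Decimal, ROUND_HALF_EVEN

# Sketch: rounding each example value to 3 decimals. For the exact ties
# (0.23750, 0.23650) this assumes round-half-even tie-breaking.
values = ["0.2397", "-0.2397", "0.23750", "0.23650", "0.23652"]
for v in values:
    r = Decimal(v).quantize(Decimal("0.001"), rounding=ROUND_HALF_EVEN)
    print(v, "->", r)
# 0.2397 -> 0.240, -0.2397 -> -0.240, 0.23750 -> 0.238 (7 rounds to even 8),
# 0.23650 -> 0.236 (6 is already even), 0.23652 -> 0.237
```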
Uses of error analysis
•Error analysis enables the user to make crucial
decisions that have an important bearing on the
accuracy and efficiency of the calculation.
Error due to cancellation of terms
•Numerical calculations involving subtractions
between two numbers, where the difference between
the operands is much less than any one of the
operands, are prone to large errors.
•Suppose we wish to calculate y = x₁ − x₂, and the errors
in the calculation of x₁ and x₂ are denoted by Δx₁ and
Δx₂. Then the error in y, Δy, is bounded as:
|Δy| ≤ |Δx₁| + |Δx₂|
∴ |Δy|/|y| = |Δy/y| ≤ (|Δx₁| + |Δx₂|)/|x₁ − x₂|
(since the product of absolute values is equal to the
absolute value of the product)
Error due to cancellation of terms
•Then the relative error in y can be very large,
irrespective of the errors in x₁ and x₂, if x₁ − x₂ is
very small, i.e. x₁ and x₂ are nearly equal.
•This is seen to be true for the absolute error as well,
as can be seen from the following example:
x₁ = .5764 and the absolute error in x₁ is .5 × 10⁻⁴
x₂ = .5763 and the absolute error in x₂ is also .5 × 10⁻⁴
• Then x₁ − x₂ = .0001, but the relative error bound on the
numerical approximation of x₁ − x₂ is:
(.5 × 10⁻⁴ + .5 × 10⁻⁴)/.0001 = 1.0
•Thus the error bound on x₁ − x₂ is as large as x₁ − x₂ itself.
Error due to cancellation of terms
•Hence the error bound is as large as the estimate of
the result itself!
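The arithmetic of this example can be replayed directly:

```python
# Replaying the example above: subtracting nearly equal numbers inflates
# the relative error bound, even though each operand is quite accurate.
x1, dx1 = 0.5764, 0.5e-4
x2, dx2 = 0.5763, 0.5e-4

y = x1 - x2                      # ~ .0001 (up to float representation)
abs_bound = dx1 + dx2            # 1e-4: as large as the result itself
rel_bound = abs_bound / abs(y)   # ~ 1.0, i.e. 100% relative uncertainty
print(y, abs_bound, rel_bound)
```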