
What is the Error?

How does it happen?
1
Definition
• The Error in a computed quantity is defined as:

Error = True Value - Approximate Value

• Example:
  - True value: π = 3.14159265358979
  - Appr. value: 22/7 = 3.14285714285714
  - Error = -0.00126448926735
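As a quick check, the example above can be reproduced in a few lines; this is only an illustrative sketch (the slides do not prescribe a programming language):

```python
import math

true_value = math.pi        # 3.14159265358979...
approx_value = 22 / 7       # 3.14285714285714...

error = true_value - approx_value
print(error)                # about -0.00126448926735
```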
2
Kinds of Error
• The Absolute Error measures the magnitude of the error:

  Ea = |Error|

• The Relative Error measures the error in relation to the size of the true value:

  Er = Ea / True Value
3
Example
• True value: 10 m
  Appr. value: 9 m
  Ea = 1, Er = 0.1
• True value: 1000 m
  Appr. value: 999 m
  Ea = 1, Er = 0.001

4
Sources of Error
• Truncation Error
  – Caused by the approximations used in the mathematical formulation of the scheme
• Round-off Error
  – Caused by the limited number of digits that represent numbers in a computer, and
  – The way numbers are stored and additions and subtractions are performed in a computer
5
Background of
The Truncation Error
• Numerical solutions are mostly approximations of exact solutions
• Most numerical methods are based on approximating functions by polynomials
• How accurately does the polynomial approximate the true function?
• By comparing the polynomial to the exact solution it becomes possible to evaluate the error, called the truncation error

6
Taylor Series
• The most important polynomials used to derive numerical schemes and analyze truncation errors
• As an infinite power series, it represents a function exactly within a certain radius about a given point

7
Taylor’s Theorem
(See Introduction to Real Analysis
by Bartle and Sherbert for a proof)

Let n ∈ N, let I = [a, b], and let f : I → R be such that
f and its derivatives f', f'', ..., f^(n) are continuous on I
and that f^(n+1) exists on (a, b). If x0 ∈ I, then for any x in I
there exists a point c between x and x0 such that

f(x) = f(x0) + f'(x0)(x - x0) + (f''(x0)/2!)(x - x0)^2 + ...
       + (f^(n)(x0)/n!)(x - x0)^n + (f^(n+1)(c)/(n+1)!)(x - x0)^(n+1)
8
Applications
• For convenience, denote h = x - x0
• Find the Taylor expansion of sin(x) about x0 = π/2

Answer:

sin(x) = sin(π/2) + h cos(π/2) - (h^2/2!) sin(π/2) - (h^3/3!) cos(π/2)
         + (h^4/4!) sin(π/2) + (h^5/5!) cos(π/2) - ...

where h = x - π/2
9
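A small sketch of this expansion in Python (sin_taylor is a hypothetical helper, not part of the slides); it sums the first n terms about x0 = π/2 and compares the result with math.sin:

```python
import math

def sin_taylor(x, n_terms):
    """Truncated Taylor expansion of sin(x) about x0 = pi/2."""
    x0 = math.pi / 2
    h = x - x0
    # The k-th derivative of sin cycles through sin, cos, -sin, -cos.
    derivs = [math.sin(x0), math.cos(x0), -math.sin(x0), -math.cos(x0)]
    return sum(derivs[k % 4] * h**k / math.factorial(k) for k in range(n_terms))

x = 2.0
for n in (2, 4, 6, 8):
    approx = sin_taylor(x, n)
    print(n, approx, math.sin(x) - approx)   # the error shrinks as terms are added
```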
• How about the Taylor expansion of tan(x) at x0 = π/2?
• The Taylor expansion of a function about x0 = 0 is called the Maclaurin series. Thus, the Maclaurin series for sin(x) is

sin(x) = x - x^3/3! + x^5/5! - x^7/7! + ...
• It is impossible in practical applications to include
an infinite number of terms. Therefore, the Taylor
series has to be truncated after a certain order term
10
• If the Taylor series is truncated after the N-th order term, it is expressed as

f(x) = f(x0) + h f'(x0) + (h^2/2!) f''(x0) + (h^3/3!) f'''(x0)
       + (h^4/4!) f^(iv)(x0) + ... + (h^N/N!) f^(N)(x0) + O(h^(N+1))

where h = x - x0 and

O(h^(N+1)) = (h^(N+1)/(N+1)!) f^(N+1)(x0 + θh),  0 < θ < 1

• Since θ cannot be found exactly, the error term is often approximated by

O(h^(N+1)) ≈ (h^(N+1)/(N+1)!) f^(N+1)(x0)
11
Example 1:
Find the Taylor expansion of e^x about x0 = 0 using the first two, three, four, and five terms, and evaluate it for x = 0.5 in each case.

Answer (here h = x - x0 = x):

e^x = 1 + x + O(x^2)
e^x = 1 + x + x^2/2! + O(x^3)
e^x = 1 + x + x^2/2! + x^3/3! + O(x^4)
e^x = 1 + x + x^2/2! + x^3/3! + x^4/4! + O(x^5)
12
e^0.5 = 1.648721..

e^0.5 = 1.5       + O(x^2),   O(x^2) = 0.14872..
e^0.5 = 1.625     + O(x^3),   O(x^3) = 0.023721..
e^0.5 = 1.64583.. + O(x^4),   O(x^4) = 0.002887..
e^0.5 = 1.64843.. + O(x^5),   O(x^5) = 0.0002837..
13
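These numbers can be reproduced with a short script (a sketch; exp_taylor is an illustrative helper). The same helper also covers Example 2 on the next slide by passing x0 = 0.25:

```python
import math

def exp_taylor(x, n_terms, x0=0.0):
    """Truncated Taylor expansion of e^x about x0 (every derivative is e^x0)."""
    h = x - x0
    return sum(math.exp(x0) * h**k / math.factorial(k) for k in range(n_terms))

exact = math.exp(0.5)                        # 1.648721...
for n in (2, 3, 4, 5):
    approx = exp_taylor(0.5, n)
    print(n, approx, exact - approx)
# errors: 0.14872.., 0.023721.., 0.002887.., 0.0002837.. as listed above
```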
Example 2:
Find the Taylor expansion of e^x about x0 = 0.25 using the first two, three, four, and five terms, and evaluate it for x = 0.5 in each case.

Answer (here h = x - 0.25):

e^x = e^0.25 + h e^0.25 + O(h^2)
e^x = e^0.25 + h e^0.25 + (h^2/2!) e^0.25 + O(h^3)
e^x = e^0.25 + h e^0.25 + (h^2/2!) e^0.25 + (h^3/3!) e^0.25 + O(h^4)
e^x = e^0.25 + h e^0.25 + (h^2/2!) e^0.25 + (h^3/3!) e^0.25 + (h^4/4!) e^0.25 + O(h^5)
14
e^0.5 = 1.60503.. + O(h^2),   O(h^2) = 0.04368..
e^0.5 = 1.64515.. + O(h^3),   O(h^3) = 0.00356..
e^0.5 = 1.64850.. + O(h^4),   O(h^4) = 2.1988..e-04
e^0.5 = 1.64871.. + O(h^5),   O(h^5) = 1.0900..e-05
15
Summary from Examples 1 and 2:

Order of trunc. error | x0 = 0, h = 0.5 | x0 = 0.25, h = 0.25 | Ratio
O(h^2)                | 0.14872..       | 0.04368..           | 3.4
O(h^3)                | 0.023721..      | 0.00356..           | 6.66
O(h^4)                | 0.002887..      | 2.1988..e-04        | 13.13
O(h^5)                | 0.0002837..     | 1.090..e-05         | 26.03
16
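The table can be checked with the same kind of script (a sketch; exp_taylor as in the earlier example):

```python
import math

def exp_taylor(x, n_terms, x0):
    """Truncated Taylor expansion of e^x about x0."""
    h = x - x0
    return sum(math.exp(x0) * h**k / math.factorial(k) for k in range(n_terms))

exact = math.exp(0.5)
for n in (2, 3, 4, 5):
    err_a = exact - exp_taylor(0.5, n, 0.0)    # x0 = 0,    h = 0.5
    err_b = exact - exp_taylor(0.5, n, 0.25)   # x0 = 0.25, h = 0.25
    print(f"O(h^{n}): {err_a:.4e}  {err_b:.4e}  ratio = {err_a / err_b:.2f}")
# ratios come out near 3.4, 6.7, 13.1, 26.0: halving h reduces the O(h^N)
# truncation error roughly by a factor of 2^N
```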
Numbers on Computers
• Computers do not use the decimal system in
computations and memory but use the binary
system
• This is because computer memory consists of a huge number of electronic and magnetic recording devices, each element of which has only "on" and "off" states

17
• Example: In single precision, 4 bytes, or equivalently 32 bits, are used to store one real number. If a decimal number is given as input, it is first converted to the closest binary number in the normalized format:

± (0.abbbbb...bbb)_2 x 2^z

where a is always 1 and the b's are binary digits that are 0 or 1; z is an exponent which is also expressed in binary.

18
Numbers Stored in Computer's Memory for
Single Precision (IBM 370)

| s (1 bit) | e (7 bits) | m (24 bits) |

where s is the sign (+ or -), e is the exponent, and m is the mantissa, including the a and b's.

Example:
(1.5)_10     = (0.1100 0000 0000 0000 0000 0000)_2 x 2^1
(0.00001)_10 = (0.1010 0111 1100 0101 1010 1100)_2 x 2^-16


19
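One way to look at a stored bit pattern (a sketch; it uses the IEEE 754 single-precision layout of today's hardware, which differs in detail from the IBM 370 format above, but makes the same point that only a fixed number of binary digits is kept):

```python
import struct

def float32_bits(x):
    """Return the 32-bit pattern of x stored as an IEEE 754 single-precision number."""
    (bits,) = struct.unpack(">I", struct.pack(">f", x))
    return format(bits, "032b")

print(float32_bits(1.5))      # 1.5 has a short, exact binary expansion
print(float32_bits(0.00001))  # 0.00001 does not; its binary expansion is rounded off
```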
Causes of Round-off Error
The summation of these two numbers becomes

(1.5)_10 + (0.00001)_10
  = (0.1100 0000 0000 0000 0101 0011 1110 0010 1101 0110 0...)_2 x 2^1

Because the mantissa has only 24 bits, the result of this calculation is stored in memory as

(1.5)_10 + (0.00001)_10 = (0.1100 0000 0000 0000 0101 0011)_2 x 2^1
                        = (1.5000100136)_10

Thus, whenever 0.00001 is added to 1.5, the result gains 0.0000000136 as an error, which is called round-off error
20
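The same effect can be reproduced in single precision with numpy (a sketch; numpy's float32 follows IEEE 754 rather than the IBM 370 layout, but it yields the same stored value here):

```python
import numpy as np

a = np.float32(1.5)
b = np.float32(0.00001)
s = a + b                    # single-precision addition

print(float(s))              # 1.5000100135803223: what is actually stored
print(float(s) - 1.50001)    # about 1.36e-08, the round-off error described above
```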
The effects of round-off error can be minimized by changing the computational algorithm, although the remedy must be devised case by case. Some useful strategies include:
o Double precision
o Grouping
o Taylor expansions
o Changing the definition of variables
o Rewriting the equation to avoid subtraction

21
• Double Precision (IBM 370)
  In double precision, 8 bytes, or equivalently 64 bits, are used to store one real number. In this format 1 bit is used for the sign, 7 bits are used for the exponent, and 56 bits are used for the mantissa.
• Grouping
  When many small numbers are added or subtracted, grouping them helps to reduce round-off errors. For example, to add 0.00001 to unity ten thousand times, one can form 100 groups of 100 small values each, sum each group first, and then add the group sums to unity (see the sketch below).
22
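A sketch of the grouping idea in single precision (numpy's float32 stands in for the machine arithmetic discussed above):

```python
import numpy as np

small = np.float32(0.00001)

# Naive: add the small number to unity ten thousand times, one at a time.
naive = np.float32(1.0)
for _ in range(10000):
    naive = naive + small

# Grouped: 100 groups of 100 small values; sum each group, then add the group sums.
grouped = np.float32(1.0)
for _ in range(100):
    group = np.float32(0.0)
    for _ in range(100):
        group = group + small
    grouped = grouped + group

print(float(naive), float(grouped))   # the exact answer is 1.1; the grouped sum is closer
```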
• Taylor Expansions
  As ε approaches 0, the accuracy of a numerical evaluation of

  f(ε) = (sin(1 + ε) - sin(1)) / ε

  becomes very poor because of round-off errors. By using a Taylor expansion we can rewrite the equation so that the accuracy for small ε is improved:

  f(ε) ≈ cos(1) - 0.5 ε sin(1)
23
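A quick numerical check (a sketch in double precision; the cancellation occurs in any finite precision once ε is small enough):

```python
import math

eps = 1e-12

direct = (math.sin(1 + eps) - math.sin(1)) / eps   # subtraction of nearly equal numbers
taylor = math.cos(1) - 0.5 * eps * math.sin(1)     # rewritten form from the slide

print(direct)   # only a few digits agree with cos(1) = 0.5403023058681398...
print(taylor)   # accurate essentially to full double precision
```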
• Rewriting the equation to avoid subtraction
  Consider the equation

  f(x) = x (√(x + 1) - √x)

  For increasing values of x, evaluating this form suffers a loss-of-significance error. To avoid this error one can reformulate it as

  f(x) = x / (√(x + 1) + √x)
24
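A sketch comparing the two forms for growing x (double precision; the exact value approaches √x / 2 for large x):

```python
import math

def f_subtract(x):
    """Original form: loses significance for large x."""
    return x * (math.sqrt(x + 1) - math.sqrt(x))

def f_rewritten(x):
    """Algebraically identical form without the subtraction."""
    return x / (math.sqrt(x + 1) + math.sqrt(x))

for x in (1e6, 1e10, 1e14):
    print(x, f_subtract(x), f_rewritten(x))
# as x grows, the first column drifts away while the rewritten form stays accurate
```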
