
AMS 147 Computational Methods and Applications

Lecture 09
Copyright by Hongyun Wang, UCSC
Recap:

The total error in numerical differentiation (forward difference) is

    E_T(h) = [fl(f(x+h)) − fl(f(x))]/h − f'(x)
           ≈ e(h) + (|f(x+h)| + |f(x)|) ε / h

where [fl(f(x+h)) − fl(f(x))]/h is the numerical result from a computer, f'(x) is the exact
value, e(h) is the discretization error, and the last term is the effect of round-off error
(ε denotes the machine precision).

The total error diverges to infinity as h goes to zero.
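This behavior is easy to reproduce. Below is a minimal sketch (my own illustration, not from the lecture) using the forward difference for f(x) = sin x at x = 1, where the exact derivative cos 1 is known:

```python
import math

# Forward-difference approximation of f'(x) for f = sin; exact derivative is cos.
x = 1.0
exact = math.cos(x)

for k in range(1, 17, 3):
    h = 10.0 ** (-k)
    approx = (math.sin(x + h) - math.sin(x)) / h
    total_error = abs(approx - exact)
    # For large h the discretization error ~ |f''(x)| h / 2 dominates;
    # for tiny h the round-off term ~ eps/h takes over and the error grows.
    print(f"h = 1e-{k:02d}   total error = {total_error:.3e}")
```

The printed errors first shrink as h decreases and then grow again once round-off dominates, which is exactly the divergence described above.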

Composite trapezoidal rule

    (h/2) [ f_0 + f_N + 2 Σ_{i=1}^{N−1} f_i ] = ∫_a^b f(x) dx + E(h)

where the left-hand side is the numerical approximation, the integral is the exact value, and
E(h) is the error, with

    E(h) = O(h^2)

Composite Simpson rule

    (h/6) [ f_0 + f_N + 2 Σ_{i=1}^{N−1} f_i + 4 Σ_{i=1}^{N} f_{i−1/2} ] = ∫_a^b f(x) dx + E(h)

where the left-hand side is the numerical approximation, the integral is the exact value, and
E(h) is the error, with

    E(h) = O(h^4)

(Draw the numerical grid to explain f_i and f_{i−1/2}.)
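The two composite rules above can be sketched in a few lines. This is my own illustration (not from the lecture); f_{i−1/2} denotes the midpoint values of the subintervals:

```python
import math

def composite_trapezoid(f, a, b, N):
    # (h/2) [ f_0 + f_N + 2 * sum_{i=1}^{N-1} f_i ]
    h = (b - a) / N
    s = f(a) + f(b) + 2.0 * sum(f(a + i * h) for i in range(1, N))
    return h / 2.0 * s

def composite_simpson(f, a, b, N):
    # (h/6) [ f_0 + f_N + 2 * sum_{i=1}^{N-1} f_i + 4 * sum_{i=1}^{N} f_{i-1/2} ]
    h = (b - a) / N
    interior = 2.0 * sum(f(a + i * h) for i in range(1, N))
    midpoints = 4.0 * sum(f(a + (i - 0.5) * h) for i in range(1, N + 1))
    return h / 6.0 * (f(a) + f(b) + interior + midpoints)

# Example: the integral of sin on [0, pi] is exactly 2.
print(composite_trapezoid(math.sin, 0.0, math.pi, 64))  # error O(h^2)
print(composite_simpson(math.sin, 0.0, math.pi, 64))    # error O(h^4)
```

With the same N, the Simpson result is far closer to 2 than the trapezoidal one, reflecting the higher order.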


For both numerical differentiation and numerical integration, there is a penalty for using a
very small h, and an advantage for using a higher order method.



Numerical error estimation

Consider the error in a numerical result:

    Error = Numerical result − Exact value

Question: How to estimate the error when the exact value is unknown?
Here we consider the situation where the total error is dominated by the discretization error and
the effect of round-off error can be neglected.
We first introduce the notation.
Notation:
Let T(h) be the numerical approximation to quantity I, obtained with step size h using a p-th
order numerical method:

    T(h) = I + E(h)

where T(h) is the numerical approximation, I is the exact value, and E(h) is the error, with

    E(h) = C h^p + o(h^p)

==>

    T(h) = I + C h^p + o(h^p)

Example:
Second order numerical differentiation method for approximating f'(x):

    [f(x+h) − f(x−h)] / (2h) = f'(x) + E(h)

where the left-hand side is T(h), f'(x) is I, and E(h) is the error, with

    E(h) = C h^2 + o(h^2)

Example:
Composite Simpson's rule for approximating ∫_a^b f(x) dx:



    T(h) = (h/6) [ f_0 + f_N + 2 Σ_{i=1}^{N−1} f_i + 4 Σ_{i=1}^{N} f_{i−1/2} ] = ∫_a^b f(x) dx + E(h)

where T(h) is the numerical approximation and E(h) is the error, with

    E(h) = C h^4 + o(h^4)

Goal: to estimate E(h) when the exact value of quantity I is unknown.


Strategy:

We calculate T(h) and T(h/2):

    T(h)   = I + C h^p + o(h^p)
    T(h/2) = I + C (h/2)^p + o(h^p)

==>

    T(h) − T(h/2) = C h^p (1 − 1/2^p) + o(h^p)

==>

    C h^p = [T(h) − T(h/2)] / (1 − 1/2^p) + o(h^p)

Using E(h) ≈ C h^p, we obtain

    E(h) ≈ [T(h) − T(h/2)] / (1 − 1/2^p)

This is how we estimate the error numerically when the exact value is unknown.
If p (the order of the numerical method) is unknown, we simply use

    E(h) ~ T(h) − T(h/2)
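The estimate can be checked on a problem where the exact value is actually known. A sketch (my own example, not from the lecture), taking T(h) to be the composite trapezoidal approximation of ∫_0^π sin x dx, a second order method (p = 2):

```python
import math

def T(h):
    # Composite trapezoidal approximation of the integral of sin on [0, pi];
    # this is a second order method, so p = 2 in the error-estimate formula.
    a, b = 0.0, math.pi
    N = round((b - a) / h)
    return h / 2.0 * (math.sin(a) + math.sin(b)
                      + 2.0 * sum(math.sin(a + i * h) for i in range(1, N)))

h = math.pi / 32
p = 2
estimated = (T(h) - T(h / 2)) / (1.0 - 0.5 ** p)  # E(h) ~ [T(h) - T(h/2)] / (1 - 1/2^p)
actual = T(h) - 2.0                               # exact value of the integral is 2
print(estimated, actual)
```

The estimated and actual errors agree to leading order, which is all the formula promises.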

Estimating the order of a method

    E(h)   ≈ C h^p
    E(h/2) ≈ C (h/2)^p

==>

    E(h) / E(h/2) ≈ 2^p

==>

    p ≈ log2( E(h) / E(h/2) )

Using E(h) ≈ [T(h) − T(h/2)] / (1 − 1/2^p), we obtain

    p ≈ log2( [T(h) − T(h/2)] / [T(h/2) − T(h/4)] )

This is how we estimate the order of accuracy of a numerical method.
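Three samples T(h), T(h/2), T(h/4) are enough to estimate the order numerically. A sketch (my own example, not from the lecture), using the forward difference, which is first order:

```python
import math

def T(h):
    # Forward-difference approximation of f'(1) for f = sin (a first order method).
    x = 1.0
    return (math.sin(x + h) - math.sin(x)) / h

h = 0.1
# p ~ log2( [T(h) - T(h/2)] / [T(h/2) - T(h/4)] )
p_est = math.log2(abs(T(h) - T(h / 2)) / abs(T(h / 2) - T(h / 4)))
print(p_est)  # close to 1, the order of the forward difference
```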

Numerical solutions of ODEs

Goal: to solve the initial value problem

    y' = F(y, t)
    y(0) = y_0

We introduce the numerical grid

    t_n = n h

where h is called the time step.
The first method we introduce is the Euler method.
Euler method:
At t_n, the differential equation is

    y'(t_n) = F(y(t_n), t_n)

Strategy: use [y(t_{n+1}) − y(t_n)] / h to approximate y'(t_n).

==>

    [y(t_{n+1}) − y(t_n)] / h ≈ F(y(t_n), t_n)

==>

    y(t_{n+1}) ≈ y(t_n) + h F(y(t_n), t_n)



which leads to the Euler method

    y_{n+1} = y_n + h F(y_n, t_n)

where y_n is the numerical approximation of y(t_n).
Note: the Euler method is an explicit method.
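A minimal sketch of the Euler method (my own illustration; the function euler and the test problem y' = −y are assumptions, not from the lecture):

```python
import math

def euler(F, y0, T, N):
    """Explicit Euler method for y' = F(y, t), y(0) = y0, on [0, T] with N steps."""
    h = T / N
    y, t = y0, 0.0
    ys = [y0]
    for n in range(N):
        y = y + h * F(y, t)   # y_{n+1} = y_n + h F(y_n, t_n)
        t = t + h
        ys.append(y)
    return ys

# Test problem y' = -y, y(0) = 1; the exact solution at T = 1 is exp(-1).
for N in (20, 40, 80):
    err = abs(euler(lambda y, t: -y, 1.0, 1.0, N)[-1] - math.exp(-1.0))
    print(N, err)  # the error roughly halves each time N doubles: first order
```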
To fully understand the difference between explicit methods and implicit methods, we introduce
the backward Euler method, which is an implicit method.
Backward Euler method:
At t_{n+1}, the differential equation is

    y'(t_{n+1}) = F(y(t_{n+1}), t_{n+1})

Strategy: use [y(t_{n+1}) − y(t_n)] / h to approximate y'(t_{n+1}).

==>

    [y(t_{n+1}) − y(t_n)] / h ≈ F(y(t_{n+1}), t_{n+1})

==>

    y(t_{n+1}) ≈ y(t_n) + h F(y(t_{n+1}), t_{n+1})

which leads to the backward Euler method

    y_{n+1} = y_n + h F(y_{n+1}, t_{n+1})

Note: the backward Euler method is an implicit method.

In the backward Euler method, y_{n+1} is not given by an explicit expression. Instead, y_{n+1}
is given as the solution of an equation. If the function F(y, t) is non-linear in y, that
equation is a non-linear equation.
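The implicit equation must be solved at every step. A sketch (my own illustration, not from the lecture) that solves it by fixed-point iteration, which is adequate for small h; a Newton solve is the more robust choice in practice:

```python
import math

def backward_euler(F, y0, T, N, iters=50):
    """Backward Euler for y' = F(y, t). Each step solves
    y_{n+1} = y_n + h F(y_{n+1}, t_{n+1}) by fixed-point iteration."""
    h = T / N
    y, t = y0, 0.0
    for n in range(N):
        t_next = t + h
        z = y                          # initial guess: the previous value
        for _ in range(iters):
            z = y + h * F(z, t_next)   # iterate z <- y_n + h F(z, t_{n+1})
        y, t = z, t_next
    return y

# Nonlinear example y' = -y^3, y(0) = 1; the exact solution is y(T) = 1/sqrt(1 + 2T).
approx = backward_euler(lambda y, t: -y**3, 1.0, 1.0, 100)
print(approx, 1.0 / math.sqrt(3.0))
```

Because F is non-linear in y here, each step really does solve a non-linear equation, exactly as described above.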
Error analysis
Notation:

The exact solution y(t) is a continuous function of t.

The numerical solution {y_0, y_1, y_2, …} contains approximate solutions at discrete times.

Global error:

    E_N(h) := y_N − y(t_N)

where y_N is the numerical solution at t_N, y(t_N) is the exact solution at t_N, and
t_N = N h = T is fixed as h → 0.
(Draw a graph of y(t_N) and y_N to show the global error.)



Goal of the error analysis:
To find the order of the global error E_N(h) = y_N − y(t_N).
To achieve that goal, we first need to study the local truncation error.
Local truncation error (LTE):
When we substitute an exact solution of y' = F(y, t) into the numerical method, the residual
term is called the local truncation error.

Local truncation error (LTE) of Euler method:

Let y(t) be an exact solution of y' = F(y, t).
The local truncation error (LTE) is

    e_n(h) = y(t_{n+1}) − y(t_n) − h F(y(t_n), t_n)

We use the Taylor expansion to find the order of the local truncation error (LTE).
We do the Taylor expansion of y(t_{n+1}) = y(t_n + h) around t_n:

    e_n(h) = [ y(t_n) + y'(t_n) h + (y''(t_n)/2) h^2 + … ] − y(t_n) − h F(y(t_n), t_n)
           = (y''(t_n)/2) h^2 + … = O(h^2)

Local truncation error (LTE) of backward Euler method:

    e_n(h) = y(t_{n+1}) − y(t_n) − h F(y(t_{n+1}), t_{n+1})

Again, we use the Taylor expansion to find the order of the local truncation error (LTE).
We do the Taylor expansion of y(t_n) = y(t_{n+1} − h) around t_{n+1}:

    e_n(h) = y(t_{n+1}) − [ y(t_{n+1}) − y'(t_{n+1}) h + (y''(t_{n+1})/2) h^2 − … ] − h F(y(t_{n+1}), t_{n+1})
           = −(y''(t_{n+1})/2) h^2 + … = O(h^2)

Now we connect the local truncation error (LTE) to the global error.

Global error E_N(h) = y_N − y(t_N) is what we want to know.

Local truncation error (LTE) e_n(h) is what we can derive using Taylor expansion.

Question: How is E_N(h) related to e_n(h)?

Theorem:
If e_n(h) = O(h^{p+1}), then E_N(h) = O(h^p).
That is, the global error is one order lower than the local truncation error (LTE).
(The proof of the theorem is given in the Appendix.)
Using the theorem above, we examine the global error of the Euler method and the global error of
the backward Euler method.
Global error of Euler method:
Local truncation error:

    e_n(h) = O(h^2)

==>  Global error E_N(h) = O(h)
==>  The Euler method is a first order method.

Global error of backward Euler method:
Local truncation error:

    e_n(h) = O(h^2)

==>  Global error E_N(h) = O(h)
==>  The backward Euler method is a first order method.

For a first order method, when we reduce the time step h by a factor of 2, the global error
E_N(h) = O(h) decreases only by a factor of 2. We would like to have higher order methods.
Second order methods:
At t_n, the differential equation is

    y'(t_n) = F(y(t_n), t_n)

Strategy: use [y(t_{n+1}) − y(t_{n−1})] / (2h) to approximate y'(t_n).

==>

    [y(t_{n+1}) − y(t_{n−1})] / (2h) ≈ F(y(t_n), t_n)

==>

    y(t_{n+1}) ≈ y(t_{n−1}) + 2h F(y(t_n), t_n)

which leads to the midpoint method.
The midpoint method (also called the leapfrog method):

    y_{n+1} = y_{n−1} + 2h F(y_n, t_n)

Note: The midpoint method does not work well. It is numerically unstable.
We will demonstrate its instability in a computer illustration.
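The instability is easy to see on a decaying problem. A sketch along those lines (my own illustration; the lecture's computer demonstration may differ), using one Euler step to generate the extra starting value y_1 the two-step method needs:

```python
def leapfrog(F, y0, T, N):
    """Midpoint (leapfrog) method y_{n+1} = y_{n-1} + 2h F(y_n, t_n).
    The extra starting value y_1 is taken from one Euler step."""
    h = T / N
    prev, cur = y0, y0 + h * F(y0, 0.0)
    for n in range(1, N):
        prev, cur = cur, prev + 2.0 * h * F(cur, n * h)
    return cur

# Decaying problem y' = -y: the exact solution tends to 0, but the leapfrog
# solution is eventually taken over by a growing oscillatory parasitic mode.
short = abs(leapfrog(lambda y, t: -y, 1.0, 5.0, 50))
long_ = abs(leapfrog(lambda y, t: -y, 1.0, 50.0, 500))
print(short, long_)  # the long-time value blows up instead of decaying
```

With the same step size h = 0.1, the short run already drifts from the exact solution, and the long run blows up entirely.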
A new approach (for constructing numerical methods):
Start with the differential equation

    y' = F(y, t)

Integrating from t_n to t_{n+1}, we have

    y(t_{n+1}) − y(t_n) = ∫_{t_n}^{t_{n+1}} F(y(t), t) dt

Strategy: use the trapezoidal rule to approximate the integral on the right hand side.

    y(t_{n+1}) − y(t_n) = (h/2) [ F(y(t_n), t_n) + F(y(t_{n+1}), t_{n+1}) ] + O(h^3)

which leads to the trapezoidal method.

The trapezoidal method:

    y_{n+1} = y_n + (h/2) [ F(y_n, t_n) + F(y_{n+1}, t_{n+1}) ]

Note: the trapezoidal method is an implicit method.

The local truncation error of the trapezoidal method is (derivation skipped)

    e_n(h) = O(h^3)

==>  Global error E_N(h) = O(h^2)
==>  It is a second order method.
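The second order behavior can be verified numerically. A sketch (my own illustration, not from the lecture), again solving the implicit equation at each step by fixed-point iteration:

```python
import math

def trapezoidal_method(F, y0, T, N, iters=50):
    """Trapezoidal method y_{n+1} = y_n + (h/2)(F(y_n,t_n) + F(y_{n+1},t_{n+1})),
    with the implicit equation solved by fixed-point iteration."""
    h = T / N
    y, t = y0, 0.0
    for n in range(N):
        z = y
        for _ in range(iters):
            z = y + 0.5 * h * (F(y, t) + F(z, t + h))
        y, t = z, t + h
    return y

# y' = -y, y(0) = 1: halving h reduces the error by about a factor of 4.
exact = math.exp(-1.0)
e1 = abs(trapezoidal_method(lambda y, t: -y, 1.0, 1.0, 20) - exact)
e2 = abs(trapezoidal_method(lambda y, t: -y, 1.0, 1.0, 40) - exact)
print(e1 / e2)  # close to 4: second order
```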


Appendix
Theorem:
If e_n(h) = O(h^{p+1}), then E_N(h) = O(h^p).
Proof of the theorem:
To prove the theorem, we need to look at another type of error.
Error of one step:
Let ỹ(t) be the exact solution of

    ỹ' = F(ỹ, t),  ỹ(t_n) = y_n

The error of one step is defined as

    ẽ_n(h) := ỹ(t_{n+1}) − y_{n+1}

(Draw a graph of y_{n+1} and ỹ(t_{n+1}) to show the error of one step.)
For the Euler method (as a representative of explicit methods), we have

    ẽ_n(h) = ỹ(t_{n+1}) − y_{n+1}
           = ỹ(t_{n+1}) − y_n − h F(y_n, t_n)
           = ỹ(t_{n+1}) − ỹ(t_n) − h F(ỹ(t_n), t_n) = e_n(h)

For explicit methods, the error of one step is the same as the local truncation error.
For the backward Euler method (as a representative of implicit methods), we have

    ẽ_n(h) = ỹ(t_{n+1}) − y_{n+1}
           = ỹ(t_{n+1}) − y_n − h F(y_{n+1}, t_{n+1})
           = [ ỹ(t_{n+1}) − ỹ(t_n) − h F(ỹ(t_{n+1}), t_{n+1}) ]
             + h [ F(ỹ(t_{n+1}), t_{n+1}) − F(y_{n+1}, t_{n+1}) ]
           = e_n(h) + h (∂F/∂y) ( ỹ(t_{n+1}) − y_{n+1} )
           = e_n(h) + h (∂F/∂y) ẽ_n(h)

==>

    ẽ_n(h) (1 − h ∂F/∂y) = e_n(h)

==>

    ẽ_n(h) = e_n(h) (1 − h ∂F/∂y)^{−1} = e_n(h) (1 + h ∂F/∂y + O(h^2))



In general, the error of one step and the local truncation error have the same leading term.
With the help of the error of one step, we now write out the proof of the theorem.

    E_0(h) = 0

    E_{n+1}(h) = | y(t_{n+1}) − y_{n+1} |
               = | ( y(t_{n+1}) − ỹ(t_{n+1}) ) + ( ỹ(t_{n+1}) − y_{n+1} ) |

==>

    E_{n+1}(h) ≤ | y(t_{n+1}) − ỹ(t_{n+1}) | + | ỹ(t_{n+1}) − y_{n+1} |        (E01)

The second term on the right hand side of (E01) is the error of one step:

    | ỹ(t_{n+1}) − y_{n+1} | = | ẽ_n(h) | = O(e_n(h)) = O(h^{p+1})

==>

    | ỹ(t_{n+1}) − y_{n+1} | ≤ C_2 h^{p+1}

The first term on the right hand side of (E01) is the difference between two exact solutions.
Both y(t) and ỹ(t) satisfy y' = F(y, t), but they have different starting values at t_n.
The theory of ODEs tells us that

    | y(t) − ỹ(t) | ≤ e^{c(t−τ)} | y(τ) − ỹ(τ) |    for t ≥ τ

Setting t = t_{n+1} and τ = t_n, we have

    | y(t_{n+1}) − ỹ(t_{n+1}) | ≤ e^{ch} | y(t_n) − ỹ(t_n) |
                                = e^{ch} | y(t_n) − y_n | = e^{ch} E_n(h)

Substituting these into (E01), we get

    E_{n+1}(h) ≤ e^{ch} E_n(h) + C_2 h^{p+1}        (E02)

This is a recursive inequality valid for all values of n. We are going to use it to estimate
E_N(h), where N h = T.
Let β = e^{ch}. Dividing both sides of (E02) by β^{n+1}, we obtain

    E_{n+1}(h) / β^{n+1} ≤ E_n(h) / β^n + C_2 h^{p+1} / β^{n+1}

Summing over n = 0, 1, 2, …, N−1, we get

    E_N(h) / β^N ≤ E_0(h) / β^0 + C_2 h^{p+1} ( 1/β + 1/β^2 + … + 1/β^N )        (E03)

Recall that the sum of a geometric series is given by

    1 + r + … + r^{N−1} = (r^N − 1) / (r − 1)

==>

    1/β + 1/β^2 + … + 1/β^N = (1/β^N) (1 + β + … + β^{N−1}) = (1/β^N) (β^N − 1) / (β − 1)

Substituting it into (E03) and using E_0(h) = 0, we have

    E_N(h) ≤ C_2 h^{p+1} (β^N − 1) / (β − 1)        (E04)

Using the inequality e^x − 1 ≥ x, we obtain

    β − 1 = e^{ch} − 1 ≥ ch
    β^N − 1 = e^{cNh} − 1 = e^{cT} − 1

Thus, we arrive at

    E_N(h) ≤ C_2 h^{p+1} (e^{cT} − 1) / (ch) = C_2 ((e^{cT} − 1) / c) h^p

==>

    E_N(h) = O(h^p)
