
Adaptive stepsize

In numerical analysis, some methods for the numerical solution of ordinary differential equations (including the special case of numerical integration) use an adaptive stepsize in order to control the errors of the method and to ensure stability properties such as A-stability. Romberg's method is an example of a numerical integration method which uses an adaptive stepsize.

1 Example

For simplicity, the following example uses the simplest integration method, the Euler method; in practice, higher-order methods such as Runge–Kutta methods are preferred due to their superior convergence and stability properties.

Consider the initial value problem

y'(t) = f(t, y(t)),   y(a) = y_a

where y and f may denote vectors (in which case this equation represents a system of coupled ODEs in several variables).

We are given the function f(t, y) and the initial conditions (a, y_a), and we are interested in finding the solution at t = b. Let y(b) denote the exact solution at b, and let y_b denote the solution that we compute. We write y_b + ε = y(b), where ε is the error in the numerical solution.

For a sequence (t_n) of values of t, with t_n = a + nh, the Euler method gives approximations to the corresponding values of y(t_n) as

y_{n+1}^{(0)} = y_n + h f(t_n, y_n)

The local truncation error of this approximation is defined by

τ_{n+1}^{(0)} = y(t_{n+1}) − y_{n+1}^{(0)}

and by Taylor's theorem, it can be shown that (provided f is sufficiently smooth) the local truncation error is proportional to the square of the step size:

τ_{n+1}^{(0)} = c h²

where c is some constant of proportionality. We have marked this solution and its error with a (0).

The value of c is not known to us. Let us now apply Euler's method again with a different step size to generate a second approximation to y(t_{n+1}). We get a second solution, which we label with a (1). Take the new step size to be one half of the original step size, and apply two steps of Euler's method. This second solution is presumably more accurate. Since we have to apply Euler's method twice, the local error is (in the worst case) twice the original error.

y_{n+1/2} = y_n + (h/2) f(t_n, y_n)

y_{n+1}^{(1)} = y_{n+1/2} + (h/2) f(t_{n+1/2}, y_{n+1/2})

τ_{n+1}^{(1)} = c(h/2)² + c(h/2)² = 2c(h/2)² = (1/2) c h² = (1/2) τ_{n+1}^{(0)}

y_{n+1}^{(1)} + τ_{n+1}^{(1)} = y(t + h)

Here, we assume the error factor c is constant over the interval [t, t + h]. In reality its rate of change is proportional to y^{(3)}(t). Subtracting the two solutions gives the error estimate:

y_{n+1}^{(1)} − y_{n+1}^{(0)} = τ_{n+1}^{(1)}

This local error estimate is third order accurate.

The local error estimate can be used to decide how stepsize h should be modified to achieve the desired accuracy. For example, if a local tolerance of tol is allowed, we could let h evolve like:

h → 0.9 h min( max( √(tol / |τ_{n+1}^{(1)}|), 0.3 ), 2 )

The 0.9 is a safety factor to ensure success on the next try. The minimum and maximum are to prevent extreme changes from the previous stepsize. This should, in principle, give an error of about 0.9 tol in the next try. If |τ_{n+1}^{(1)}| < tol, we consider the step successful, and the error estimate is used to improve the solution:

y_{n+1}^{(2)} = y_{n+1}^{(1)} + τ_{n+1}^{(1)}
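The per-step procedure just described can be sketched in code. The following Python function is a minimal illustration only, not part of the original article; the names adaptive_euler_step, f, and tol are our own, and a scalar ODE is assumed:

    import math

    def adaptive_euler_step(f, t, y, h, tol):
        """One Euler step with step doubling; returns (t, y, h_next, accepted).

        Illustrative sketch, not from the article.
        """
        # Full step: y_{n+1}^{(0)} = y_n + h f(t_n, y_n)
        y0 = y + h * f(t, y)

        # Two half steps give the presumably more accurate y_{n+1}^{(1)}
        y_half = y + (h / 2) * f(t, y)
        y1 = y_half + (h / 2) * f(t + h / 2, y_half)

        # Local error estimate: tau_{n+1}^{(1)} = y_{n+1}^{(1)} - y_{n+1}^{(0)}
        tau = y1 - y0

        # Stepsize update: h -> 0.9 h min(max(sqrt(tol / |tau|), 0.3), 2)
        factor = 2.0 if tau == 0 else min(max(math.sqrt(tol / abs(tau)), 0.3), 2.0)
        h_next = 0.9 * h * factor

        if abs(tau) < tol:
            # Step successful; the estimate improves the solution:
            # y_{n+1}^{(2)} = y_{n+1}^{(1)} + tau_{n+1}^{(1)}
            return t + h, y1 + tau, h_next, True
        return t, y, h_next, False  # rejected: retry the step with h_next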

This solution, y_{n+1}^{(2)}, is actually third order accurate in the local scope (second order in the global scope), but since there is no error estimate for it, this doesn't help in reducing the number of steps. This technique is called Richardson extrapolation.
Beginning with an initial stepsize of h = b − a, this theory facilitates our controllable integration of the ODE from point a to b, using an optimal number of steps given a local error tolerance. A drawback is that the step size may become prohibitively small, especially when using the low-order Euler method.
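For instance, a driver loop following this recipe might look like the sketch below (again illustrative; integrate is a hypothetical name, reusing adaptive_euler_step from the sketch above):

    def integrate(f, a, b, ya, tol):
        """Integrate y'(t) = f(t, y(t)), y(a) = ya, from a to b adaptively."""
        t, y, h = a, ya, b - a      # initial stepsize h = b - a
        while t < b:
            h = min(h, b - t)       # do not step past the endpoint b
            t, y, h, _ = adaptive_euler_step(f, t, y, h, tol)
        return y                    # approximation to y(b)

A rejected step leaves t and y unchanged and retries with the reduced stepsize, so accuracy is enforced step by step.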
Similar methods can be developed for higher-order methods, such as the fourth-order Runge–Kutta method. Also, a global error tolerance can be achieved by scaling the local error to the global scope.

2 Embedded error estimates


Adaptive stepsize methods that use a so-called 'embedded' error estimate include the Runge–Kutta–Fehlberg, Cash–Karp and Dormand–Prince methods. These methods are considered to be more computationally efficient, but have lower accuracy in their error estimates.
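The tableaux of those named methods are beyond this article's scope, but the idea behind an embedded estimate can be illustrated with the simplest such pair, Heun–Euler, in which a second-order and a first-order solution share the same stage evaluations (again an illustrative sketch, not the article's code; a scalar ODE is assumed):

    def heun_euler_step(f, t, y, h):
        """Embedded Heun-Euler pair: 2nd-order step plus an error estimate.

        Illustrative sketch, not from the article.
        """
        k1 = f(t, y)
        k2 = f(t + h, y + h * k1)
        y_high = y + (h / 2) * (k1 + k2)    # 2nd-order (Heun) solution
        y_low = y + h * k1                  # embedded 1st-order (Euler) solution
        return y_high, abs(y_high - y_low)  # estimate reuses k1: no extra f calls

Because the lower-order solution is built from stages already computed for the higher-order one, the estimate is nearly free, which is the efficiency the text refers to; its accuracy is only that of the lower-order formula.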

3 References

4 Further reading
William H. Press, Saul A. Teukolsky, William T. Vetterling, Brian P. Flannery, Numerical Recipes in C, Second Edition, Cambridge University Press, 1992. ISBN 0-521-43108-5.

Kendall E. Atkinson, Numerical Analysis, Second Edition, John Wiley & Sons, 1989. ISBN 0-471-62489-6.

Silvana Ilie, Gustaf Söderlind, Robert Corless, Adaptivity and computational complexity in the numerical solution of ODEs, J. Complexity, 24(3) (2008) 341–361.
