
Time-stepping techniques

Unsteady flows are parabolic in time, so time-stepping methods are used to advance transient solutions step-by-step or to compute stationary solutions.


Initial-boundary value problem for $u = u(x, t)$: a time-dependent PDE supplemented by boundary conditions and an initial condition

[Figure: space-time diagram showing the domain of dependence (past) and the zone of influence (future) of the present state]

$\dfrac{\partial u}{\partial t} + Lu = f$ in $\Omega \times (0, T)$

$Bu = 0$ on $\Gamma \times (0, T)$

$u = u_0$ in $\Omega$ at $t = 0$

Time discretization

$0 = t^0 < t^1 < t^2 < \dots < t^M = T, \qquad u^0 \approx u_0$ in $\Omega$

Consider a short time interval $(t^n, t^{n+1})$, where $t^{n+1} = t^n + \Delta t$

Given $u^n \approx u(t^n)$, use it as initial condition to compute $u^{n+1} \approx u(t^{n+1})$

Space-time discretization
Space discretization: finite differences / finite volumes / finite elements
Unknowns: $u_i(t)$, i.e., time-dependent nodal values / cell mean values
yields an ODE system for $u_i(t)$: semi-discretized equations

Time discretization: (i) before or (ii) after the discretization in space

The space and time variables are essentially decoupled and can be discretized independently to obtain a sequence of (nonlinear) algebraic systems
$A(u^{n+1}, u^n)\,u^{n+1} = b(u^n), \qquad n = 0, 1, \dots, M-1, \qquad u_i^n \approx u(x_i, t^n)$

Method of lines (MOL): $\quad \dfrac{du_h}{dt} + L_h u_h = f_h$ on $(t^n, t^{n+1})$, where $L \approx L_h$

FEM approximation: $\quad u_h(x, t) = \sum_{j=1}^{N} u_j(t)\,\varphi_j(x)$

Galerkin method of lines


Weak formulation
$\int_\Omega \left( \dfrac{\partial u}{\partial t} + Lu - f \right) v \, dx = 0 \qquad \forall v \in V, \quad t \in (t^n, t^{n+1})$

$\dfrac{d}{dt}(u, v) + a(u, v) = l(v) \qquad \forall v \in V$

Galerkin discretization in space:
$\dfrac{d}{dt}(u_h, v_h) + a(u_h, v_h) = l(v_h) \qquad \forall v_h \in V_h, \quad t \in (t^n, t^{n+1})$

Differential-Algebraic Equations

$M_C \dfrac{du}{dt} + Au = b$

where $M_C = \{m_{ij}\}$ is the mass matrix and $u(t) = [u_1(t), \dots, u_N(t)]^T$

Matrix coefficients: $\quad m_{ij} = (\varphi_i, \varphi_j), \qquad a_{ij} = a(\varphi_i, \varphi_j), \qquad b_i = l(\varphi_i)$

Mass lumping: $\quad M_C \to M_L = \operatorname{diag}\{m_i\}, \qquad m_i = \sum_j m_{ij} = \Bigl(\varphi_i, \sum_j \varphi_j\Bigr) = \int_\Omega \varphi_i \, dx$

due to the fact that $\sum_j \varphi_j \equiv 1$

In the 1D case: FDM = FVM = FEM + lumping
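Below is a minimal Python sketch (not part of the original slides) of row-sum mass lumping for 1D linear finite elements; the mesh and function names are illustrative.

```python
import numpy as np

def consistent_mass_matrix(x):
    """Assemble the 1D P1 (linear FEM) consistent mass matrix; element matrix is h/6 * [[2,1],[1,2]]."""
    N = len(x)
    M = np.zeros((N, N))
    for e in range(N - 1):
        h = x[e + 1] - x[e]
        M[e:e + 2, e:e + 2] += h / 6.0 * np.array([[2.0, 1.0], [1.0, 2.0]])
    return M

def lump_mass_matrix(M_C):
    """Row-sum lumping M_C -> M_L = diag(m_i), m_i = sum_j m_ij (= integral of phi_i, since sum_j phi_j = 1)."""
    return np.diag(M_C.sum(axis=1))

x = np.linspace(0.0, 1.0, 11)             # uniform mesh of (0, 1)
M_L = lump_mass_matrix(consistent_mass_matrix(x))
print(np.diag(M_L))                        # interior entries equal Delta x, boundary entries Delta x / 2
```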

Two-level time-stepping schemes


Lumped-mass discretization: $\quad M_L \dfrac{du}{dt} + Au = b$

First-order ODE system: $\quad \dfrac{du}{dt} + R(u, t) = 0, \qquad R(u, t) = M_L^{-1}[Au - b]$

for $t \in (t^n, t^{n+1}), \quad u(t^n) = u^n, \qquad n = 0, 1, \dots, M-1$

Standard scheme (finite difference discretization of the time derivative):
$\dfrac{u^{n+1} - u^n}{\Delta t} + \theta R(u^{n+1}, t^{n+1}) + (1 - \theta) R(u^n, t^n) = 0, \qquad 0 \le \theta \le 1$

where $\Delta t = t^{n+1} - t^n$ is the time step and $\theta$ is the implicitness parameter

$\theta = 0$: forward Euler scheme, explicit, $O(\Delta t)$
$\theta = 1/2$: Crank-Nicolson scheme, implicit, $O(\Delta t)^2$
$\theta = 1$: backward Euler scheme, implicit, $O(\Delta t)$

Fully discretized problem


Consistent-mass discretization: $\quad M_C \dfrac{du}{dt} + Au = b, \qquad R(u, t) = M_C^{-1}[Au - b]$

$[M_C + \theta \Delta t\, A]\,u^{n+1} = [M_C - (1 - \theta)\Delta t\, A]\,u^n + \Delta t\, b^{n+\theta}$

where $b^{n+\theta} = \theta b^{n+1} + (1 - \theta) b^n, \qquad 0 \le \theta \le 1, \qquad n = 0, \dots, M-1$
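A minimal Python sketch of one $\theta$-scheme update for the fully discretized system above, assuming constant $M$, $A$, $b$; the dense solver is a stand-in for whatever sparse solver would be used in practice.

```python
import numpy as np

def theta_step(M, A, b_old, b_new, u_old, dt, theta):
    """One step of [M + theta*dt*A] u_new = [M - (1-theta)*dt*A] u_old + dt*b^{n+theta}."""
    lhs = M + theta * dt * A
    rhs = (M - (1.0 - theta) * dt * A) @ u_old + dt * (theta * b_new + (1.0 - theta) * b_old)
    return np.linalg.solve(lhs, rhs)

# Example: march a small SPD system with Crank-Nicolson (theta = 1/2)
rng = np.random.default_rng(0)
N = 5
M = np.eye(N)                      # lumped mass matrix (identity for simplicity)
A = np.diag(2.0 * np.ones(N)) - np.diag(np.ones(N - 1), 1) - np.diag(np.ones(N - 1), -1)
b = np.zeros(N)
u = rng.random(N)
for n in range(100):
    u = theta_step(M, A, b, b, u, dt=0.05, theta=0.5)
print(u)                           # decays toward the steady state A u = b (here u -> 0)
```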

Exact integration

In general, time discretization is performed using numerical methods for ODEs

Initial value problem: $\quad \dfrac{du(t)}{dt} = f(t, u(t))$ on $(t^n, t^{n+1}), \qquad u(t^n) = u^n$

$\displaystyle \int_{t^n}^{t^{n+1}} \dfrac{du}{dt}\, dt = u^{n+1} - u^n = \int_{t^n}^{t^{n+1}} f(t, u(t))\, dt$

$u^{n+1} = u^n + f(\tau, u(\tau))\,\Delta t, \qquad \tau \in (t^n, t^{n+1}) \qquad$ by the mean value theorem

Idea: evaluate the integral numerically using a suitable quadrature rule

Example: standard time-stepping schemes


Numerical integration on the interval $(t^n, t^{n+1})$

[Figure: shaded areas under the integrand illustrating the quadrature rules behind the FE, BE, CN and LF schemes]

left endpoint / Forward Euler: $\quad u^{n+1} = u^n + f(t^n, u^n)\,\Delta t + O(\Delta t)^2$
right endpoint / Backward Euler: $\quad u^{n+1} = u^n + f(t^{n+1}, u^{n+1})\,\Delta t + O(\Delta t)^2$
trapezoidal rule / Crank-Nicolson: $\quad u^{n+1} = u^n + \frac{1}{2}[f(t^n, u^n) + f(t^{n+1}, u^{n+1})]\,\Delta t + O(\Delta t)^3$
midpoint rule / Leapfrog method: $\quad u^{n+1} = u^n + f(t^{n+1/2}, u^{n+1/2})\,\Delta t + O(\Delta t)^3$
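As a small illustration (not from the slides), the scalar test problem $du/dt = -u$ can be marched with the left endpoint, right endpoint and trapezoidal rules, whose implicit updates are available in closed form; halving $\Delta t$ roughly halves the global error of FE/BE and quarters that of CN.

```python
import numpy as np

def integrate(scheme, dt, T=1.0, u0=1.0, lam=-1.0):
    """March du/dt = lam*u from 0 to T; the implicit updates are written in closed form."""
    u = u0
    for n in range(int(round(T / dt))):
        if scheme == "FE":     # left endpoint rule
            u = u + dt * lam * u
        elif scheme == "BE":   # right endpoint rule
            u = u / (1.0 - dt * lam)
        elif scheme == "CN":   # trapezoidal rule
            u = u * (1.0 + 0.5 * dt * lam) / (1.0 - 0.5 * dt * lam)
    return u

exact = np.exp(-1.0)
for scheme in ("FE", "BE", "CN"):
    e1 = abs(integrate(scheme, 0.10) - exact)
    e2 = abs(integrate(scheme, 0.05) - exact)
    print(scheme, "error ratio when dt is halved:", round(e1 / e2, 2))   # ~2 for FE/BE, ~4 for CN
```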

Properties of time-stepping schemes


Time discretization: $\quad t^n = n\Delta t, \qquad \Delta t = \dfrac{T}{M}, \qquad M = \dfrac{T}{\Delta t}$

Accumulation of truncation errors:
$\epsilon_{\rm loc} = O\bigl((\Delta t)^p\bigr), \quad n = 0, \dots, M-1 \qquad \Rightarrow \qquad \epsilon_{\rm glob} = M\,\epsilon_{\rm loc} = O\bigl((\Delta t)^{p-1}\bigr)$

Remark. The order of a time-stepping method (i.e., the asymptotic rate at which the error is reduced as $\Delta t \to 0$) is not the sole indicator of accuracy.

The optimal choice of the time-stepping scheme depends on its purpose:
- to obtain a time-accurate discretization of a highly dynamic flow problem (evolution details are essential and must be captured), or
- to march the numerical solution to a steady state starting with some reasonable initial guess (intermediate results are immaterial)

The computational cost of explicit and implicit schemes differs considerably.

Explicit vs. implicit time discretization


Pros and cons of explicit schemes:
- easy to implement and parallelize, low cost per time step
- a good starting point for the development of CFD software
- small time steps are required for stability reasons, especially if the velocity and/or mesh size vary strongly
- extremely inefficient for the solution of stationary problems unless local time-stepping, i.e. $\Delta t = \Delta t(x)$, is employed

Pros and cons of implicit schemes:
- stable over a wide range of time steps, sometimes unconditionally
- constitute excellent iterative solvers for steady-state problems
- difficult to implement and parallelize, high cost per time step
- insufficiently accurate for truly transient problems at large $\Delta t$
- convergence of linear solvers deteriorates/fails as $\Delta t$ increases

Example: 1D convection-diffusion equation

$\dfrac{\partial u}{\partial t} + v \dfrac{\partial u}{\partial x} = d \dfrac{\partial^2 u}{\partial x^2}$ in $(0, X) \times (0, T), \qquad u(0) = u(X) = 0, \qquad u|_{t=0} = u_0$

$f(t, u(t)) = -v \dfrac{\partial u}{\partial x} + d \dfrac{\partial^2 u}{\partial x^2}$

Lagrangian representation: $\quad \dfrac{du(x(t), t)}{dt} = d \dfrac{\partial^2 u}{\partial x^2} \qquad$ (pure diffusion equation)

where $\dfrac{d}{dt}$ is the substantial derivative along the characteristic lines $\dfrac{dx(t)}{dt} = v$

The initial profile is convected at speed $v$ and smeared by diffusion if $d > 0$.

For the pure convection equation ($d = 0$): $\quad \dfrac{du(x(t), t)}{dt} = 0$, i.e., $u$ is constant along the characteristics.

Example: 1D convection-diffusion equation

Uniform space-time mesh: $\quad x_i = i\Delta x, \quad t^n = n\Delta t, \qquad \Delta x = \dfrac{X}{N}, \quad \Delta t = \dfrac{T}{M}, \qquad i = 0, \dots, N, \quad n = 0, \dots, M$

$u^n \to u^{n+1}, \qquad u_i^0 = u_0(x_i), \qquad 0 \le \theta \le 1$

Fully discretized equation:
$u_i^{n+1} = u_i^n + [\theta f_h^{n+1} + (1 - \theta) f_h^n]\,\Delta t$

Central difference / lumped-mass FEM:
$\dfrac{u_i^{n+1} - u_i^n}{\Delta t} = \theta \left[ -v \dfrac{u_{i+1}^{n+1} - u_{i-1}^{n+1}}{2\Delta x} + d \dfrac{u_{i-1}^{n+1} - 2u_i^{n+1} + u_{i+1}^{n+1}}{(\Delta x)^2} \right] + (1 - \theta) \left[ -v \dfrac{u_{i+1}^n - u_{i-1}^n}{2\Delta x} + d \dfrac{u_{i-1}^n - 2u_i^n + u_{i+1}^n}{(\Delta x)^2} \right]$

a sequence of tridiagonal linear systems, $\quad i = 1, \dots, N, \quad n = 0, \dots, M-1$
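A compact Python sketch of this fully discretized scheme, assuming homogeneous Dirichlet values at both ends and using `scipy.linalg.solve_banded` for the tridiagonal systems; the parameter values are illustrative only.

```python
import numpy as np
from scipy.linalg import solve_banded

def convdiff_theta(u0, v, d, dx, dt, nsteps, theta=0.5):
    """theta-scheme for u_t + v u_x = d u_xx with central differences and u = 0 at both ends."""
    u = u0.copy()
    N = len(u)                                   # interior nodes only
    c, s = v * dt / (2 * dx), d * dt / dx**2     # Courant and diffusion numbers
    lower = theta * (-c - s) * np.ones(N)        # sub-diagonal of the implicit operator
    diag  = (1 + 2 * theta * s) * np.ones(N)
    upper = theta * (c - s) * np.ones(N)         # super-diagonal
    ab = np.vstack([np.r_[0, upper[:-1]], diag, np.r_[lower[1:], 0]])   # banded storage
    for _ in range(nsteps):
        # explicit part with zero Dirichlet values outside the interior range
        up, um = np.r_[u[1:], 0.0], np.r_[0.0, u[:-1]]
        rhs = u + (1 - theta) * (-c * (up - um) + s * (um - 2 * u + up))
        u = solve_banded((1, 1), ab, rhs)        # one tridiagonal solve per time step
    return u

x = np.linspace(0, 1, 101)[1:-1]                 # interior grid points of (0, 1)
u = convdiff_theta(np.exp(-200 * (x - 0.3)**2), v=1.0, d=1e-3, dx=x[1]-x[0], dt=2e-3, nsteps=200)
print(u.max())                                   # the initial peak is convected and smeared
```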

Example: 1D convection-diffusion equation

Standard scheme (two-level):
$\dfrac{u_i^{n+1} - u_i^n}{\Delta t} + v \dfrac{u_{i+1}^{n+\theta} - u_{i-1}^{n+\theta}}{2\Delta x} = d \dfrac{u_{i-1}^{n+\theta} - 2u_i^{n+\theta} + u_{i+1}^{n+\theta}}{(\Delta x)^2}, \qquad u_i^{n+\theta} = \theta u_i^{n+1} + (1 - \theta) u_i^n, \quad 0 \le \theta \le 1$

FE: Forward Euler ($\theta = 0$): $\quad u_i^{n+1} = h(u_{i-1}^n, u_i^n, u_{i+1}^n)$
BE: Backward Euler ($\theta = 1$): $\quad u_i^{n+1} = h(u_{i-1}^{n+1}, u_i^n, u_{i+1}^{n+1})$
CN: Crank-Nicolson ($\theta = \frac{1}{2}$): $\quad u_i^{n+1} = h(u_{i-1}^{n+1}, u_{i-1}^n, u_i^n, u_{i+1}^n, u_{i+1}^{n+1})$
LF: Leapfrog time-stepping (explicit, three-level): $\quad u_i^{n+1} = u_i^{n-1} + 2\Delta t\, f_h^n$

$\dfrac{u_i^{n+1} - u_i^{n-1}}{2\Delta t} + v \dfrac{u_{i+1}^n - u_{i-1}^n}{2\Delta x} = d \dfrac{u_{i-1}^n - 2u_i^n + u_{i+1}^n}{(\Delta x)^2}, \qquad u_i^{n+1} = h(u_{i-1}^n, u_i^{n-1}, u_i^n, u_{i+1}^n)$

Fractional-step scheme
Given the parameters $\theta \in (0, 1)$, $\theta' = 1 - 2\theta$, and $\alpha \in [0, 1]$, subdivide the time interval $(t^n, t^{n+1})$ into three substeps and update the solution as follows

Step 1. $\quad u^{n+\theta} = u^n + [\alpha f(t^{n+\theta}, u^{n+\theta}) + (1 - \alpha) f(t^n, u^n)]\,\theta\Delta t$
Step 2. $\quad u^{n+1-\theta} = u^{n+\theta} + [(1 - \alpha) f(t^{n+1-\theta}, u^{n+1-\theta}) + \alpha f(t^{n+\theta}, u^{n+\theta})]\,\theta'\Delta t$
Step 3. $\quad u^{n+1} = u^{n+1-\theta} + [\alpha f(t^{n+1}, u^{n+1}) + (1 - \alpha) f(t^{n+1-\theta}, u^{n+1-\theta})]\,\theta\Delta t$

Properties of this time-stepping method:
- second-order accurate in the special case $\theta = 1 - \frac{\sqrt{2}}{2}$
- coefficient matrices are the same for all substeps if $\alpha = \frac{1 - 2\theta}{1 - \theta}$
- combines the advantages of Crank-Nicolson and backward Euler
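A minimal sketch of the three-substep update for the scalar model problem $du/dt = \lambda u$, so that each implicit substep can be solved in closed form; the choices $\theta = 1 - \sqrt{2}/2$ and $\alpha = (1 - 2\theta)/(1 - \theta)$ follow the properties listed above.

```python
import numpy as np

theta = 1.0 - np.sqrt(2.0) / 2.0                # second-order accurate choice
alpha = (1.0 - 2.0 * theta) / (1.0 - theta)     # same coefficient matrices in all substeps
beta = 1.0 - alpha

def fractional_theta_step(u, dt, lam=-1.0):
    """One fractional-step theta update for du/dt = lam*u (implicit parts solved analytically)."""
    # Step 1: substep of length theta*dt, implicit weight alpha
    u1 = (u + beta * theta * dt * lam * u) / (1.0 - alpha * theta * dt * lam)
    # Step 2: substep of length (1 - 2*theta)*dt, implicit weight beta
    tp = (1.0 - 2.0 * theta) * dt
    u2 = (u1 + alpha * tp * lam * u1) / (1.0 - beta * tp * lam)
    # Step 3: substep of length theta*dt, implicit weight alpha
    return (u2 + beta * theta * dt * lam * u2) / (1.0 - alpha * theta * dt * lam)

u, dt = 1.0, 0.1
for n in range(10):
    u = fractional_theta_step(u, dt)
print(u, np.exp(-1.0))                          # close to the exact value exp(-1)
```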

Predictor-corrector and multipoint methods


Objective: to combine the simplicity of explicit schemes and the robustness of implicit ones in the framework of a fractional-step algorithm, e.g.,

1. Predictor: $\quad \tilde u^{n+1} = u^n + f(t^n, u^n)\,\Delta t$ (forward Euler)
2. Corrector: $\quad u^{n+1} = u^n + \frac{1}{2}[f(t^n, u^n) + f(t^{n+1}, \tilde u^{n+1})]\,\Delta t$ (Crank-Nicolson)
   or $\quad u^{n+1} = u^n + f(t^{n+1}, \tilde u^{n+1})\,\Delta t$ (backward Euler)

Remark. Stability still leaves a lot to be desired; additional correction steps usually do not pay off since the iterations may diverge if $\Delta t$ is too large.

Order barrier: two-level methods are at most second-order accurate, so extra points are needed to construct higher-order integration schemes
- Adams methods: $\quad t^{n-1}, \dots, t^{n-m}, \qquad m = 0, 1, \dots$
- Runge-Kutta methods: $\quad t^{n+\alpha} \in [t^n, t^{n+1}], \qquad \alpha \in [0, 1]$

Adams methods
Derivation: polynomial fitting for polynomials of degree $p - 1$ which interpolate function values at $p$ points

Truncation error: $\quad \epsilon_{\rm glob} = O(\Delta t)^p$

Adams-Bashforth methods (explicit):
$p = 1$: $\quad u^{n+1} = u^n + \Delta t\, f(t^n, u^n)$ (forward Euler)
$p = 2$: $\quad u^{n+1} = u^n + \frac{\Delta t}{2}[3 f(t^n, u^n) - f(t^{n-1}, u^{n-1})]$
$p = 3$: $\quad u^{n+1} = u^n + \frac{\Delta t}{12}[23 f(t^n, u^n) - 16 f(t^{n-1}, u^{n-1}) + 5 f(t^{n-2}, u^{n-2})]$

Adams-Moulton methods (implicit):
$p = 1$: $\quad u^{n+1} = u^n + \Delta t\, f(t^{n+1}, u^{n+1})$ (backward Euler)
$p = 2$: $\quad u^{n+1} = u^n + \frac{\Delta t}{2}[f(t^{n+1}, u^{n+1}) + f(t^n, u^n)]$ (Crank-Nicolson)
$p = 3$: $\quad u^{n+1} = u^n + \frac{\Delta t}{12}[5 f(t^{n+1}, u^{n+1}) + 8 f(t^n, u^n) - f(t^{n-1}, u^{n-1})]$

Adams methods
Predictor-corrector algorithm
1. Compute $\tilde u^{n+1}$ using an Adams-Bashforth method of order $p - 1$
2. Compute $u^{n+1}$ using an Adams-Moulton method of order $p$ with the predicted value $f(t^{n+1}, \tilde u^{n+1})$ instead of $f(t^{n+1}, u^{n+1})$

Pros and cons of Adams methods:
- methods of any order are easy to derive and implement
- only one function evaluation per time step is performed
- error estimators for ODEs can be used to adapt the order
- other methods are needed to start/restart the calculation
- the time step is difficult to change (the coefficients are different)
- tend to be unstable and produce nonphysical oscillations
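A minimal sketch of the order $p = 3$ predictor-corrector pair (Adams-Bashforth of order 2 as predictor, Adams-Moulton of order 3 as corrector evaluated with the predicted slope); the Heun starting step and the test problem are illustrative choices, not prescribed by the slides.

```python
import numpy as np

def adams_pc3(f, t0, u0, dt, nsteps):
    """Order-3 predictor-corrector: AB2 predictor, AM3 corrector with the predicted slope.
    A single Heun (second-order) step is used to start the multistep history."""
    t, u = t0, u0
    f_old = f(t, u)                                   # slope at the previous time level
    u_pred = u + dt * f_old                           # starting step (Heun's method)
    u = u + 0.5 * dt * (f_old + f(t + dt, u_pred))
    t += dt
    us = [u0, u]
    for n in range(1, nsteps):
        f_cur = f(t, u)
        # predictor: Adams-Bashforth, p = 2
        u_star = u + 0.5 * dt * (3.0 * f_cur - f_old)
        # corrector: Adams-Moulton, p = 3, with f(t^{n+1}, u*) in place of the implicit slope
        u_new = u + dt / 12.0 * (5.0 * f(t + dt, u_star) + 8.0 * f_cur - f_old)
        f_old, u, t = f_cur, u_new, t + dt
        us.append(u)
    return np.array(us)

sol = adams_pc3(lambda t, u: -u, 0.0, 1.0, 0.1, 10)
print(sol[-1], np.exp(-1.0))                          # approximates exp(-t) at t = 1
```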

Runge-Kutta methods
Multipredictor-multicorrector algorithms of order $p$

$p = 2$:
$\tilde u^{n+1/2} = u^n + \frac{\Delta t}{2} f(t^n, u^n)$ (forward Euler / predictor)
$u^{n+1} = u^n + \Delta t\, f(t^{n+1/2}, \tilde u^{n+1/2})$ (midpoint rule / corrector)

$p = 4$:
$\tilde u^{n+1/2} = u^n + \frac{\Delta t}{2} f(t^n, u^n)$ (forward Euler / predictor)
$\bar u^{n+1/2} = u^n + \frac{\Delta t}{2} f(t^{n+1/2}, \tilde u^{n+1/2})$ (backward Euler / corrector)
$\tilde u^{n+1} = u^n + \Delta t\, f(t^{n+1/2}, \bar u^{n+1/2})$ (midpoint rule / predictor)
$u^{n+1} = u^n + \frac{\Delta t}{6}[f(t^n, u^n) + 2 f(t^{n+1/2}, \tilde u^{n+1/2}) + 2 f(t^{n+1/2}, \bar u^{n+1/2}) + f(t^{n+1}, \tilde u^{n+1})]$ (Simpson rule / corrector)

Remark. There exist embedded Runge-Kutta methods which perform extra steps in order to estimate the error and adjust $\Delta t$ in an adaptive fashion.
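A short sketch of the $p = 4$ scheme above, which is the classical fourth-order Runge-Kutta method written in predictor-corrector form; the test problem is illustrative.

```python
import numpy as np

def rk4_step(f, t, u, dt):
    """Classical fourth-order Runge-Kutta step in the multipredictor-multicorrector form."""
    u_half_tilde = u + 0.5 * dt * f(t, u)                        # forward Euler / predictor
    u_half_bar = u + 0.5 * dt * f(t + 0.5 * dt, u_half_tilde)    # backward Euler / corrector
    u_full_tilde = u + dt * f(t + 0.5 * dt, u_half_bar)          # midpoint rule / predictor
    return u + dt / 6.0 * (f(t, u)                               # Simpson rule / corrector
                           + 2.0 * f(t + 0.5 * dt, u_half_tilde)
                           + 2.0 * f(t + 0.5 * dt, u_half_bar)
                           + f(t + dt, u_full_tilde))

# Example: du/dt = -u, exact solution exp(-t)
t, u, dt = 0.0, 1.0, 0.1
for n in range(10):
    u = rk4_step(lambda t, u: -u, t, u, dt)
    t += dt
print(u, np.exp(-1.0))        # global error of order (dt)^4
```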

General comments
Pros and cons of Runge-Kutta methods:
- self-starting, easy to operate with variable time steps
- more stable and accurate than Adams methods of the same order
- high-order approximations are rather difficult to derive; $p$ function evaluations per time step are required for a $p$th-order method
- more expensive than Adams methods of comparable order

Adaptive time-stepping strategy: varying $\Delta t$ in the course of the simulation makes it possible to achieve the desired accuracy at a relatively low cost
- Explicit methods: use the largest time step satisfying the stability condition
- Implicit methods: estimate the error and adjust the time step if necessary

Automatic time step control


Objective: make sure that $\|u - u_{\Delta t}\| \approx TOL$ (prescribed tolerance)

Local truncation error:
1. $\quad u_{\Delta t} = u + \Delta t^2\, e(u) + O(\Delta t^4)$
2. $\quad u_{m\Delta t} = u + m^2 \Delta t^2\, e(u) + O(\Delta t^4)$

Heuristic error analysis: $\quad e(u) \approx \dfrac{u_{m\Delta t} - u_{\Delta t}}{\Delta t^2 (m^2 - 1)}$

Remark. It is tacitly assumed that the error at $t = t^n$ is equal to zero.

Estimate of the relative error: $\quad \|u - u_{\Delta t}\| \approx \Delta t^2 \|e(u)\| = \dfrac{\|u_{\Delta t} - u_{m\Delta t}\|}{m^2 - 1}$

Adaptive time stepping: $\quad \Delta t_*^2 = \Delta t^2 (m^2 - 1) \dfrac{TOL}{\|u_{\Delta t} - u_{m\Delta t}\|}$

Richardson extrapolation: eliminate the leading error term
$u = \dfrac{m^2 u_{\Delta t} - u_{m\Delta t}}{m^2 - 1} + O(\Delta t^4)$ (fourth-order accurate)
Practical implementation
Automatic time step control can be executed as follows. Given the old solution $u^n$, do:
1. Make one large time step of size $m\Delta t$ to compute $u_{m\Delta t}$
2. Make $m$ small substeps of size $\Delta t$ to compute $u_{\Delta t}$
3. Evaluate the relative solution changes $\|u_{\Delta t} - u_{m\Delta t}\|$
4. Calculate the optimal value $\Delta t_*$ for the next time step
5. If $\Delta t_* < \Delta t$, reset the solution and go back to step 1
6. Set $u^{n+1} = u_{\Delta t}$ or perform Richardson extrapolation

Note that the cost per time step increases substantially ($u_{m\Delta t}$ may be as expensive to obtain as $u_{\Delta t}$ due to slow convergence at large time steps). On the other hand, adaptive time-stepping enhances the robustness of the code, the overall efficiency and the credibility of the simulation results.
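A sketch of this procedure for a generic one-step solver `step(u, dt)` of a second-order method with $m = 2$; the acceptance test and tolerance are illustrative assumptions.

```python
import numpy as np

def adaptive_step(step, u_old, dt, tol, m=2):
    """Step-doubling control for a second-order scheme: compare one step of size m*dt
    with m steps of size dt, estimate the error and propose the next time step."""
    u_big = step(u_old, m * dt)                  # one large step
    u_small = u_old
    for _ in range(m):                           # m small substeps
        u_small = step(u_small, dt)
    diff = np.linalg.norm(u_small - u_big)
    err = diff / (m**2 - 1)                                      # relative error estimate
    dt_new = dt * np.sqrt(tol * (m**2 - 1) / max(diff, 1e-30))   # optimal step for the next level
    u_extrap = (m**2 * u_small - u_big) / (m**2 - 1)             # Richardson extrapolation
    return err <= tol, u_extrap, dt_new

# Example with a Crank-Nicolson step for du/dt = -u
cn = lambda u, dt: u * (1 - 0.5 * dt) / (1 + 0.5 * dt)
accepted, u_new, dt_next = adaptive_step(cn, np.array([1.0]), dt=0.1, tol=1e-3)
print(accepted, u_new, dt_next)
```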

Evolutionary PID controller


Another simple mechanism for capturing the dynamics of the flow: monitor the relative changes
$e_n = \dfrac{\|u^{n+1} - u^n\|}{\|u^{n+1}\|}$
of an indicator variable $u$

If $e_n > \delta$, reject the solution and repeat the time step using $\Delta t_* = \dfrac{\delta}{e_n}\,\Delta t_n$

Adjust the time step smoothly so as to approach the prescribed tolerance
$\Delta t_{n+1} = \left( \dfrac{e_{n-1}}{e_n} \right)^{k_P} \left( \dfrac{TOL}{e_n} \right)^{k_I} \left( \dfrac{e_{n-1}^2}{e_n e_{n-2}} \right)^{k_D} \Delta t_n$

Limit the growth and reduction of the time step so that
$\Delta t_{\min} \le \Delta t_{n+1} \le \Delta t_{\max}, \qquad l \le \dfrac{\Delta t_{n+1}}{\Delta t_n} \le L$

Empirical PID parameters: $\quad k_P = 0.075, \qquad k_I = 0.175, \qquad k_D = 0.01$
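A minimal sketch of the PID step-size update with the empirical parameters above; the limiter values `l`, `L`, `dt_min`, `dt_max` and the sample error history are illustrative placeholders.

```python
import numpy as np

KP, KI, KD = 0.075, 0.175, 0.01      # empirical PID parameters

def pid_time_step(dt_n, e_hist, tol, l=0.7, L=1.5, dt_min=1e-6, dt_max=1.0):
    """PID update dt_{n+1} = (e_{n-1}/e_n)^kP (TOL/e_n)^kI (e_{n-1}^2/(e_n e_{n-2}))^kD dt_n,
    limited so that l <= dt_{n+1}/dt_n <= L and dt_min <= dt_{n+1} <= dt_max."""
    e_n, e_nm1, e_nm2 = e_hist[-1], e_hist[-2], e_hist[-3]
    factor = (e_nm1 / e_n)**KP * (tol / e_n)**KI * (e_nm1**2 / (e_n * e_nm2))**KD
    factor = min(max(factor, l), L)                   # limit growth and reduction
    return min(max(factor * dt_n, dt_min), dt_max)

def relative_change(u_new, u_old):
    """Indicator e_n = ||u^{n+1} - u^n|| / ||u^{n+1}||."""
    return np.linalg.norm(u_new - u_old) / max(np.linalg.norm(u_new), 1e-30)

# Example: the step shrinks slightly while the solution changes faster than the tolerance allows
e_hist = [2e-3, 1.5e-3, 1.2e-3]
print(pid_time_step(0.05, e_hist, tol=1e-3))
```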

Pseudo time-stepping
Solutions to boundary value problems of the form $Lu = f$ represent the steady-state limit of the associated time-dependent problem
$\dfrac{\partial u}{\partial t} + Lu = f, \qquad u(x, 0) = u_0(x)$ in $\Omega$

where $u_0$ is an arbitrary initial condition. Therefore, the numerical solution can be marched to the steady state using a pseudo time-stepping technique:
- it can be interpreted as an iterative solver for the stationary problem
- the artificial time step represents an adjustable relaxation parameter
- evolution details are immaterial, so $\Delta t$ should be as large as possible
- the unconditionally stable backward Euler method is to be recommended
- explicit schemes can be used in conjunction with local time-stepping
- it is worthwhile to perform an adaptive time step control (e.g., PID)
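A brief sketch of pseudo time-stepping with backward Euler for a linear problem $Au = b$, where the artificial time step acts as a relaxation parameter and the march stops once the steady-state residual is small; the matrix and tolerances are illustrative.

```python
import numpy as np

def pseudo_time_march(A, b, u0, dt=1e3, tol=1e-10, max_iter=200):
    """Backward Euler pseudo time-stepping for du/dt + A u = b:
    (I/dt + A) u^{n+1} = u^n/dt + b, iterated until the steady residual b - A u is small."""
    N = len(b)
    u = u0.copy()
    lhs = np.eye(N) / dt + A                 # constant matrix (factorize once in practice)
    for n in range(max_iter):
        u = np.linalg.solve(lhs, u / dt + b)
        if np.linalg.norm(b - A @ u) < tol:  # steady state reached
            return u, n + 1
    return u, max_iter

# Example: 1D Poisson-like system marched to the steady state from a zero initial guess
N = 20
A = 2.0 * np.eye(N) - np.eye(N, k=1) - np.eye(N, k=-1)
b = np.ones(N)
u, iters = pseudo_time_march(A, b, np.zeros(N))
print(iters, np.linalg.norm(b - A @ u))
```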
