Section 1


Outline

1. Source

2. Introduction

3. Partial Differential Equations

4. Discrete approximations

5. Big-oh and little-oh notation


References
RJL Chapter: 1.0
JCS Sections 1.1 - 1.3
Scholarpedia
Numerical analysis is the area of mathematics and computer
science that creates, analyzes, and implements algorithms for
solving numerically the problems of continuous mathematics.
Wikipedia
Numerical analysis is the study of algorithms that use
numerical approximation (as opposed to general symbolic
manipulations) for the problems of mathematical analysis (as
distinguished from discrete mathematics).
Trefethen
Numerical analysis is the study of algorithms for the problems
of continuous mathematics.
MathWorks
Numerical analysis is a branch of mathematics that solves
continuous problems using numeric approximation. It involves
designing methods that give approximate but accurate numeric
solutions, which is useful in cases where the exact solution
is impossible or prohibitively expensive to calculate.
◮ Many natural, biological, chemical, mechanical,
economic or financial systems and processes can be
described at a macroscopic level by a set of PDEs for
averaged quantities such as density, temperature,
concentration, velocity, etc.
◮ Most PDEs used in practice were introduced in the
19th century and involve only 1st and 2nd derivatives.
◮ Nonetheless, PDE theory is not restricted to the analysis
of equations of two independent variables and interesting
equations are often nonlinear.
◮ We will mainly restrict attention to the PDEs of classical
mathematical physics, and treat these with well-known,
classical numerical methods.
◮ This is for the sake of brevity; but these elementary
methods must be mastered prior to any attempt to study
more complicated problems and correspondingly more
advanced numerical procedures.
◮ A PDE is an equation in which the unknown is a function
F : Ω → R of multiple independent variables, and which
involves the partial derivatives of this function.
◮ Notice that like ordinary derivatives, a partial
derivative is defined as a limit.
◮ More precisely, given Ω ⊂ Rd an open subset and a function
F : Ω → R, the partial derivative of F at x = (x1 , · · · , xd )
with respect to the variable xi is:

∂F/∂xi (x) = lim_{∆x→0} [F(x1, · · · , xi + ∆x, · · · , xd) − F(x)] / ∆x.

◮ The function F is totally differentiable (i.e. is a C¹
function) if all its partial derivatives exist in a
neighborhood of x and are continuous there.
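As a quick illustration of this definition (a sketch, not part of the notes; the example function and the helper name are mine), the limit can be approximated numerically by taking a small but finite ∆x:

```python
# Forward-difference approximation of the defining limit of a
# partial derivative, with a small but finite Delta x.
def partial_derivative(F, x, i, dx=1e-6):
    """Approximate dF/dx_i at the point x (a list of coordinates)."""
    x_shifted = list(x)
    x_shifted[i] += dx        # perturb only the i-th coordinate
    return (F(x_shifted) - F(list(x))) / dx

# Example: F(x1, x2) = x1^2 * x2, so dF/dx1 at (3, 2) is 2*3*2 = 12.
F = lambda x: x[0] ** 2 * x[1]
print(partial_derivative(F, [3.0, 2.0], i=0))  # ~ 12.0
```

The truncation error of this one-sided quotient is O(∆x), which anticipates the big-oh discussion later in the section.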
Definition 1 (Order of PDE)
An expression of the form

F(∇^k u(x), ∇^{k−1} u(x), · · · , ∇u(x), u(x), x) = 0,  x ∈ Ω,

where u : Ω → R is the unknown, is called a kth order partial
differential equation.

Notes
◮ Solving the PDE means finding all functions u(x)
satisfying the equation, and possibly satisfying boundary
conditions on the domain boundary, ∂Ω.
◮ When the solution cannot be found explicitly, it may still
be necessary to establish existence and other properties.
Definition 2 (linear PDE)
1. The PDE is called linear if it has the form

Σ_{|α|≤k} aα(x) ∇^α u(x) = f(x),   (1)

for given functions aα and f. Moreover, this equation is
homogeneous if f ≡ 0.
2. The equation is nonlinear if it depends nonlinearly
upon the highest order derivatives.
a. Linear Equations
1. Laplace equation: ∆u = 0
2. 1st order wave equation: ut + c ux = 0
3. Heat/diffusion equation: ut − ∇²u = 0
4. Schrödinger's equation: i ut + ∇²u = 0
5. Wave equation: utt − c² ∇²u = 0
6. Black-Scholes equation: ut + 0.5 σ² x² uxx + r x ux − r u = 0

b. Nonlinear Equations
1. Burgers' equation: ut + u ux = 0
2. Korteweg-de Vries equation: ut + u ux + uxxx = 0
3. Reaction-diffusion equation: ut − ∇·(D(u)∇u) = f(u)
4. Thin film equation: ut + (u² − u³)x + (u³ uxxx)x = 0
◮ In the previous slide we listed different types of
equations.
◮ Here, ∀x ∈ Ω, we consider linear second order PDEs of the
form

A : ∇(∇u) + B · ∇u + cu = f,  Ω ⊂ R^d,   (2)

where v : w = Σ_{i,j=1}^d vij wij denotes the contracted
product, and f(x), c(x) ∈ R, B(x) ∈ R^d, A(x) ∈ R^{d×d} are
the coefficients.
◮ An equivalent, and perhaps more conventional, form is

Σ_{i,j=1}^d aij ∂²u/(∂xi ∂xj) + Σ_{i=1}^d bi ∂u/∂xi + cu = f,  Ω ⊂ R^d.
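The contracted product v : w = Σ vij wij used in equation (2) is simply a sum of elementwise products of two matrices. As a quick sketch (the matrix values and variable names are mine), it can be evaluated with NumPy's einsum:

```python
import numpy as np

# The contracted product A : H = sum_{i,j} a_ij * h_ij, where H
# plays the role of the Hessian grad(grad u) in equation (2).
A = np.array([[2.0, 0.0],
              [0.0, 3.0]])   # coefficient matrix (values are mine)
H = np.array([[1.0, 4.0],
              [4.0, 5.0]])   # stand-in Hessian values

# 'ij,ij->' multiplies elementwise and sums over both indices.
contracted = np.einsum('ij,ij->', A, H)
print(contracted)  # 2*1 + 0*4 + 0*4 + 3*5 = 17.0
```

For the identity matrix A = I this reduces to the trace of the Hessian, i.e. the Laplacian ∆u, recovering the heat and Laplace equations above as special cases.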
Proposition 1
If
Σ_{i,j=1}^d aij ∂²u/(∂xi ∂xj) + Σ_{i=1}^d bi ∂u/∂xi + cu = f,  Ω ⊂ R^d,

is a second order linear partial differential equation, there
are three possibilities in the classification, depending on the
sign of the determinant of the matrix A = (aij).
Notes
◮ Usually, 2nd order PDEs are either elliptic, parabolic or
hyperbolic.
◮ Elliptic equations are associated with a special state of
the system, in principle corresponding to the minimum of
the energy.
◮ Parabolic equations describe evolutionary phenomena that
lead to a steady state described by an elliptic equation.
◮ Hyperbolic equations model the transport of some
physical quantity, such as fluids or waves.
Remark 1
◮ The terminology elliptic, parabolic, and hyperbolic has
also a geometric interpretation involving planar conics.
◮ Let us consider a linear second-order PDE with constant
coefficients in R2 of the general form

a ∂²u/∂x1² + b ∂²u/(∂x1 ∂x2) + c ∂²u/∂x2² + d ∂u/∂x1 + e ∂u/∂x2 + fu = g,  in Ω.

◮ The equation is said to be elliptic if b² − 4ac < 0,
parabolic if b² − 4ac = 0, and hyperbolic if b² − 4ac > 0.
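This classification rule is easy to express as a small helper (a sketch; the function name is mine, though the test cases correspond to equations listed earlier):

```python
def classify(a, b, c):
    """Classify a*u_x1x1 + b*u_x1x2 + c*u_x2x2 + (lower order) = g
    by the sign of the discriminant b^2 - 4ac."""
    disc = b * b - 4 * a * c
    if disc < 0:
        return "elliptic"
    if disc == 0:
        return "parabolic"
    return "hyperbolic"

# With x1 = t, x2 = x where a time variable is present:
print(classify(1, 0, 1))    # Laplace equation u_xx + u_yy = 0
print(classify(0, 0, -1))   # heat equation u_t - u_xx = 0
print(classify(1, 0, -1))   # wave equation u_tt - u_xx = 0
```

Note that for variable coefficients the type can change from point to point in Ω, so the classification is local.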
Elliptic Equations
◮ Elliptic equations give boundary value problems (BVPs),
where the solution at all points must be determined
simultaneously from the boundary conditions around the
domain.
◮ More generally, an elliptic equation has the general form

Lu = f

where L is some elliptic operator.


◮ Here u = u(x) varies with space alone.
Parabolic Equations
◮ If L is an elliptic operator then the time dependent
equation
ut = Lu
is called a parabolic equation. If L = ∇², the
Laplacian, then we have the heat or diffusion equation,
modelling the spread of heat or of some material.
◮ Here u = u(x, t) varies with space and time. If boundary
conditions are independent of time then the system will
reach steady state.
◮ We would solve for the steady state by setting ut = 0,
which results in an elliptic equation.
Hyperbolic Equations
◮ Here we give an example of a first order hyperbolic
equation
ut + Aux = 0
where u(x, t) ∈ Rm and A is an m × m matrix.
◮ The problem is hyperbolic if A has real eigenvalues and
is diagonalizable.
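This condition can be checked numerically, as in the following sketch (the helper name, tolerance, and example matrices are mine, not from the notes):

```python
import numpy as np

def is_hyperbolic(A, tol=1e-12):
    """Check that A has real eigenvalues and a full set of linearly
    independent eigenvectors (i.e. is diagonalizable)."""
    eigvals, eigvecs = np.linalg.eig(A)
    real = np.all(np.abs(eigvals.imag) < tol)
    full_rank = np.linalg.matrix_rank(eigvecs) == A.shape[0]
    return bool(real and full_rank)

# A system u_t + A u_x = 0 with real distinct eigenvalues +-2:
A = np.array([[0.0, 1.0],
              [4.0, 0.0]])
print(is_hyperbolic(A))   # True

# A rotation-like matrix with eigenvalues +-i: not hyperbolic.
B = np.array([[0.0, -1.0],
              [1.0, 0.0]])
print(is_hyperbolic(B))   # False
```

For the scalar case m = 1 with A = c this recovers the 1st order wave (advection) equation ut + c ux = 0 listed earlier, which is always hyperbolic.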
Boundary conditions
◮ Additional information is needed on the boundary ∂Ω, or on
a portion Γ of the boundary. Such data are called
boundary conditions.
◮ If the boundary condition fixes the value of the solution:

∀x ∈ Γ, u(x) is fixed,

then it is a Dirichlet boundary condition.


◮ If the boundary condition fixes the value of the normal
derivative of the solution:

∀x ∈ Γ, ∂u/∂n (x) = (∇u · n)(x) is fixed,

(where n is the outward normal to Γ), then it is a Neumann
boundary condition.
◮ Mixed boundary conditions indicate that different boundary
conditions are used on different parts of the boundary.
Example 1
◮ Consider a rod heated at both ends.
◮ We denote u(x) the temperature of the rod at point x.
◮ This gives two Dirichlet boundary conditions: u(0) = g (0),
u(1) = g (1).
◮ Thus we have the problem

−∂²u/∂x² (x) + c(x)u(x) = f(x),  ∀x ∈ (0, 1),
u(0) = g(0),
u(1) = g(1).
Notes
◮ For most PDEs no exact solution is known and, in some
cases, it is not even clear whether a unique solution
exists.
◮ For this reason, numerical methods have been developed in
combination with some analysis.
◮ Thus before proceeding to introduce numerical methods for
solving each of the three main classes of the problems it
is worthwhile to give some considerations to the question
of under what circumstances these equations do, or do not,
have solutions.
◮ This is part of the mathematical concept of
well-posedness.
◮ The notion of well-posedness, due to J. Hadamard
(1865–1963), is related to what can reasonably be
expected from solving a PDE.
Definition 3
A problem consisting of a PDE and boundary and/or initial
conditions is said to be well-posed in the sense of Hadamard
if it satisfies the following conditions:
◮ a solution exists;
◮ the solution is unique;
◮ and the solution depends continuously on the given data.
Otherwise it is ill-posed.

Remark 2
This is an important property because essentially all
numerical algorithms assume that the problems to which they
are applied are well-posed. When this assumption fails, we
◮ may fail to obtain a solution,
◮ or may obtain a set of numbers with no relation to
the real problem.
Definition 4
A numerical algorithm is called stable if small changes in the
data result in small changes of the numerical solution.

Notes
◮ The approximation property of the numerical solution is
defined differently for the different types of problems.
◮ However, only stable algorithms can be guaranteed to
produce numerical solutions which are close to the exact
solution.
◮ Hence the typical problem of numerical analysis is to
construct a stable algorithm for well-posed problems.
Notes
◮ Consider a linear partial differential equation

P(∂t , ∂x , ∂xx , · · · )u = f (x, t),

◮ And a corresponding finite difference scheme

P_{k,h} v^n_m = f(xm, tn),   (3)

for the function v defined on the grid tn = nk, n ≥ 0,
xm = mh, m = 0, 1, 2, · · · , M.
Definition 5
Let {v^n_m} be a solution of (3). The characteristic
polynomial of the linear difference equation satisfied by the
Fourier transform {v̂^n(ξ)} of {v^n_m} is called an
amplification polynomial for the scheme (3).

Remark 3
The coefficients of the amplification polynomial may depend on
h, k and ξ.

Remark 4
An equivalent way of obtaining the amplification polynomial is
to substitute v^n_m = g^n e^{imθ} in equation (3) and simplify.
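As an illustration of Remark 4 (the particular scheme is my choice, not one analyzed in these notes), substituting v^n_m = g^n e^{imθ} into the forward-time centred-space (FTCS) scheme for the heat equation gives the amplification factor g(θ) = 1 − 4µ sin²(θ/2) with µ = k/h²; the scheme is stable when |g(θ)| ≤ 1 for all θ, i.e. when µ ≤ 1/2:

```python
import numpy as np

# FTCS for the heat equation:
#   v^{n+1}_m = v^n_m + mu * (v^n_{m+1} - 2 v^n_m + v^n_{m-1}),
# with mu = k / h^2. Substituting v^n_m = g^n e^{i m theta} and
# simplifying yields g(theta) = 1 - 4 * mu * sin^2(theta / 2).
def amplification_factor(mu, theta):
    return 1.0 - 4.0 * mu * np.sin(theta / 2.0) ** 2

theta = np.linspace(0.0, np.pi, 1001)
# Stability requires |g(theta)| <= 1 for all theta.
print(np.max(np.abs(amplification_factor(0.4, theta))) <= 1.0)  # True
print(np.max(np.abs(amplification_factor(0.6, theta))) <= 1.0)  # False
```

This is the von Neumann approach to stability: the worst case occurs at θ = π, where g = 1 − 4µ, giving the familiar restriction k ≤ h²/2.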
◮ The rate of convergence of a numerical method is normally
discussed using the notation O(h^p), the so-called big-oh
notation.
◮ If f(h) and g(h) are two functions of h, we say that
f(h) = O(g(h)) as h → 0 if there exists a constant c such
that |f(h)/g(h)| < c for h sufficiently small, or,
equivalently, if we can bound |f(h)| < c|g(h)| for h
sufficiently small.
◮ It is sometimes convenient to use the little-oh notation:
f(h) = o(g(h)) as h → 0.
◮ This means that f(h)/g(h) → 0 as h → 0, i.e. f(h)
decays to zero faster than g(h).
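The big-oh rate can be observed numerically, as in this sketch (the test function and step sizes are mine): the centred difference (f(x + h) − f(x − h))/(2h) has truncation error O(h²), so halving h should reduce the error by roughly a factor of four:

```python
import numpy as np

f, df = np.sin, np.cos   # test function and its exact derivative
x = 1.0

# Centred-difference errors for a sequence of halved step sizes.
errors = []
for h in [0.1, 0.05, 0.025]:
    approx = (f(x + h) - f(x - h)) / (2 * h)
    errors.append(abs(approx - df(x)))

# Observed order from consecutive errors: p = log2(e_i / e_{i+1}).
orders = [np.log2(errors[i] / errors[i + 1]) for i in range(2)]
print(orders)  # each ~ 2, consistent with O(h^2)
```

Estimating the observed order from successive refinements in this way is the standard practical check that an implementation achieves its theoretical convergence rate.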
[Flowchart. An exact problem with exact solution u cannot, in
general, be solved exactly. A finite difference method replaces
it by an approximate discrete problem, which is checked for
consistency and stability; if the check passes, the computer
implementation converges to the exact solution.]

Figure: The numerical analysis approach
