LECTURE 2

Elliptic PDEs and the Finite Difference Method
Aim of Lecture
• During this lecture we will discuss:
– Elliptic Partial Differential Equations
– Finite Difference Method
• Taylor’s Series Expansions
• High Order Terms & Truncation
• Finite Difference Discretisation
– Solution Methods
• Linear/Matrix System Solvers
• Iterative Solvers: Jacobi, Gauss-Seidel
• Use of Excel
Elliptic PDEs
• Elliptic PDEs represent phenomena that have already reached a steady state and are, hence, time independent.
• Two classic elliptic equations are:
  – Laplace's equation
    \frac{\partial^2 u}{\partial x^2} + \frac{\partial^2 u}{\partial y^2} = 0 \quad \text{or} \quad \nabla^2 u = 0
  – Poisson's equation
    \frac{\partial^2 u}{\partial x^2} + \frac{\partial^2 u}{\partial y^2} + g = 0 \quad \text{or} \quad \nabla^2 u + g = 0
  – u(x, y) is the dependent variable and g is a constant
Elliptic PDE – Example
• Temperature profile u(x, y) around two computer chips on a printed circuit board:
  \nabla^2 u + g = 0
• Where g is the heat source.
[Figure: temperature contours around two chips, with the heat source g marked]
FINITE DIFFERENCE METHOD
Taylor's Series Expansions
[Diagram: points x − h, x, x + h on a line]
• Recall the Taylor's series expansion of a function about a point x:
  f(x + h) = f(x) + h \frac{df}{dx} + \frac{1}{2!} h^2 \frac{d^2 f}{dx^2} + \frac{1}{3!} h^3 \frac{d^3 f}{dx^3} + O(h^4)
• We will use this to find approximate solutions to PDEs.
High Order Terms
• Note the O(h^4) in the previous expansion.
• This refers to all powers of h greater than or equal to 4, i.e. h^4, h^5, h^6, h^7, ...
• When h is small, these high order terms are tiny, e.g. h = 0.01 gives h^4 = 0.00000001.
• We can simplify expressions by making h small and ignoring the high order terms.
• This is known as truncation.
Taylor Series Expansions in 2D
[Diagram: grid of points x − Δx, x, x + Δx by y − Δy, y, y + Δy around (x, y)]
• Consider the function u(x, y) expanded about the point (x, y):
  u(x + \Delta x, y) = u(x, y) + \Delta x \frac{\partial u}{\partial x} + \frac{1}{2!} \Delta x^2 \frac{\partial^2 u}{\partial x^2} + O(\Delta x^3)
  u(x, y + \Delta y) = u(x, y) + \Delta y \frac{\partial u}{\partial y} + \frac{1}{2!} \Delta y^2 \frac{\partial^2 u}{\partial y^2} + O(\Delta y^3)
Taylor Series Expansions in 2D
• Now consider a regular grid of points and use the notation:
  u(x, y) = u_{i,j}
  u(x + \Delta x, y) = u_{i+1,j}
  u(x, y + \Delta y) = u_{i,j+1}
[Diagram: grid indices i − 1, i, i + 1 and j − 1, j, j + 1]
• This gives:
  u_{i+1,j} = u_{i,j} + \Delta x \left.\frac{\partial u}{\partial x}\right|_{i,j} + \frac{1}{2!} \Delta x^2 \left.\frac{\partial^2 u}{\partial x^2}\right|_{i,j} + O(\Delta x^3)
  u_{i,j+1} = u_{i,j} + \Delta y \left.\frac{\partial u}{\partial y}\right|_{i,j} + \frac{1}{2!} \Delta y^2 \left.\frac{\partial^2 u}{\partial y^2}\right|_{i,j} + O(\Delta y^3)
Finite Differences
• We can rearrange the Taylor's series:
  u_{i+1,j} = u_{i,j} + \Delta x \left.\frac{\partial u}{\partial x}\right|_{i,j} + \frac{1}{2!} \Delta x^2 \left.\frac{\partial^2 u}{\partial x^2}\right|_{i,j} + O(\Delta x^3)
  gives
  \left.\frac{\partial u}{\partial x}\right|_{i,j} = \frac{u_{i+1,j} - u_{i,j}}{\Delta x} + O(\Delta x)   (known as the forward difference)
  u_{i-1,j} = u_{i,j} - \Delta x \left.\frac{\partial u}{\partial x}\right|_{i,j} + \frac{1}{2!} \Delta x^2 \left.\frac{\partial^2 u}{\partial x^2}\right|_{i,j} - O(\Delta x^3)
  gives
  \left.\frac{\partial u}{\partial x}\right|_{i,j} = \frac{u_{i,j} - u_{i-1,j}}{\Delta x} + O(\Delta x)   (known as the backward difference)
Finite Differences
• We can also add and subtract Taylor's series:
  u_{i+1,j} = u_{i,j} + \Delta x \left.\frac{\partial u}{\partial x}\right|_{i,j} + \frac{1}{2!} \Delta x^2 \left.\frac{\partial^2 u}{\partial x^2}\right|_{i,j} + O(\Delta x^3)   (1)
  u_{i-1,j} = u_{i,j} - \Delta x \left.\frac{\partial u}{\partial x}\right|_{i,j} + \frac{1}{2!} \Delta x^2 \left.\frac{\partial^2 u}{\partial x^2}\right|_{i,j} - O(\Delta x^3)   (2)
• (1) − (2) gives
  \left.\frac{\partial u}{\partial x}\right|_{i,j} = \frac{u_{i+1,j} - u_{i-1,j}}{2 \Delta x} + O(\Delta x^2)   (known as the central difference)
• (1) + (2) gives
  \left.\frac{\partial^2 u}{\partial x^2}\right|_{i,j} = \frac{u_{i+1,j} - 2u_{i,j} + u_{i-1,j}}{\Delta x^2} + O(\Delta x^2)   (also known as the central difference [2nd order])
Finite Differences: Summary
• We can rearrange the Taylor's series to get:
  \left.\frac{\partial u}{\partial x}\right|_{i,j} = \frac{u_{i+1,j} - u_{i,j}}{\Delta x} + O(\Delta x)   – forward difference
  \left.\frac{\partial u}{\partial x}\right|_{i,j} = \frac{u_{i,j} - u_{i-1,j}}{\Delta x} + O(\Delta x)   – backward difference
  \left.\frac{\partial u}{\partial x}\right|_{i,j} = \frac{u_{i+1,j} - u_{i-1,j}}{2 \Delta x} + O(\Delta x^2)   – central difference
  \left.\frac{\partial^2 u}{\partial x^2}\right|_{i,j} = \frac{u_{i+1,j} - 2u_{i,j} + u_{i-1,j}}{\Delta x^2} + O(\Delta x^2)   – central difference (2nd order)
• Then truncate the higher order terms and substitute for the derivatives in the PDE (a numerical check of these orders follows below).
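A quick way to see the stated orders of accuracy is to evaluate the three first-derivative formulas numerically. The sketch below is not part of the original slides; f(x) = sin x and the sample point are arbitrary illustrative choices. The forward and backward errors shrink roughly linearly with h, the central error quadratically.

```python
# A minimal sketch comparing the three difference formulas on f(x) = sin(x),
# whose exact derivative is cos(x); f and x are illustrative choices.
import numpy as np

f, x = np.sin, 1.0
exact = np.cos(x)

for h in (0.1, 0.01, 0.001):
    fwd = (f(x + h) - f(x)) / h            # forward difference,  O(h)
    bwd = (f(x) - f(x - h)) / h            # backward difference, O(h)
    ctr = (f(x + h) - f(x - h)) / (2 * h)  # central difference,  O(h^2)
    print(f"h={h:7.3f}  fwd err={abs(fwd - exact):.1e}  "
          f"bwd err={abs(bwd - exact):.1e}  ctr err={abs(ctr - exact):.1e}")
```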
Exercise
• Write down the central difference approximations for:
  \left.\frac{\partial u}{\partial y}\right|_{i,j} \quad \text{and} \quad \left.\frac{\partial^2 u}{\partial y^2}\right|_{i,j}
Truncation Error
• Approximating derivatives, in this case using finite differences, is known as discretisation.
• These approximations will result in errors known as truncation error:
  \left.\frac{\partial^2 u}{\partial x^2}\right|_{i,j} = \frac{u_{i+1,j} - 2u_{i,j} + u_{i-1,j}}{\Delta x^2} - \left( \frac{\Delta x^2}{12} \frac{\partial^4 u}{\partial x^4} + \cdots \right)
  where the bracketed term is the truncation error, so
  \left.\frac{\partial^2 u}{\partial x^2}\right|_{i,j} \approx \frac{u_{i+1,j} - 2u_{i,j} + u_{i-1,j}}{\Delta x^2}
Finite Difference Method – Example
• Consider Poisson's equation:
  \frac{\partial^2 u}{\partial x^2} + \frac{\partial^2 u}{\partial y^2} + g = 0
• Discretise:
  \frac{u_{i+1,j} - 2u_{i,j} + u_{i-1,j}}{\Delta x^2} + \frac{u_{i,j+1} - 2u_{i,j} + u_{i,j-1}}{\Delta y^2} + g = 0
• Difference formula for each node:
  2(\Delta x^2 + \Delta y^2)\, u_{i,j} = \Delta y^2 (u_{i+1,j} + u_{i-1,j}) + \Delta x^2 (u_{i,j+1} + u_{i,j-1}) + \Delta x^2 \Delta y^2 g
Finite Difference Method – Example
• Consider the case \Delta x = \Delta y = h. Then
  2(2h^2)\, u_{i,j} = h^2 (u_{i+1,j} + u_{i-1,j} + u_{i,j+1} + u_{i,j-1}) + h^4 g
• The difference equation can be written as:
  u_{i,j} = \frac{1}{4}(u_{i+1,j} + u_{i-1,j} + u_{i,j+1} + u_{i,j-1}) + \frac{h^2 g}{4}
  or
  4u_{i,j} - (u_{i+1,j} + u_{i-1,j} + u_{i,j+1} + u_{i,j-1}) = h^2 g
• The first form can be applied directly as an update rule (see the sketch below).
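The first form of the difference equation says: replace each interior value by the average of its four neighbours plus h²g/4, and repeat until the values settle. A minimal sketch of such sweeps follows; the grid size, source term, and sweep count are illustrative assumptions, and the boundary rows and columns are simply held at zero.

```python
# Jacobi-style sweeps of u_ij = (u_E + u_W + u_N + u_S)/4 + h^2*g/4 on an
# n-by-n grid with u = 0 held on the boundary (n, g, sweep count assumed).
import numpy as np

n, g = 21, 9.0                 # grid points per side and source term
h = 1.0 / (n - 1)              # uniform spacing, dx = dy = h
u = np.zeros((n, n))           # boundary entries stay at u = 0

for sweep in range(500):       # fixed sweep count, for brevity
    # The right-hand side is evaluated from the old array before assignment,
    # so each pass is one Jacobi sweep over all interior nodes at once.
    u[1:-1, 1:-1] = 0.25 * (u[2:, 1:-1] + u[:-2, 1:-1]
                            + u[1:-1, 2:] + u[1:-1, :-2]) + h**2 * g / 4.0
```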
Example
• Consider the PDE shown, on a square domain (0,0)–(1,1) with zero boundary conditions (u = 0):
  \frac{\partial^2 u}{\partial x^2} + \frac{\partial^2 u}{\partial y^2} + 9 = 0
[Figure: unit square with u = 0 on all four sides]
Approximate Solution
• Represent using a finite difference grid with \Delta x = \Delta y = h = 1/3.
• Need to find approximations to u for all nodal values.
• However, we know that u = 0 on all boundaries.
• So we need only find approximations for u at the four internal nodes.
• Required values are u_{2,2}, u_{3,2}, u_{2,3}, u_{3,3}.
[Figure: 4-by-4 grid of nodes, i = 1, ..., 4 and j = 1, ..., 4]
Example 2
• In general we have:
  4u_{i,j} - u_{i,j-1} - u_{i-1,j} - u_{i+1,j} - u_{i,j+1} = \left(\tfrac{1}{3}\right)^2 \times 9 = 1
• Or, in terms of the 4 unknowns:
  4u_{2,2} - u_{3,2} - u_{2,3} = 1   (i = 2, j = 2)
  4u_{2,3} - u_{2,2} - u_{3,3} = 1   (i = 2, j = 3)
  4u_{3,2} - u_{2,2} - u_{3,3} = 1   (i = 3, j = 2)
  4u_{3,3} - u_{3,2} - u_{2,3} = 1   (i = 3, j = 3)
• So that
  \begin{bmatrix} 4 & -1 & -1 & 0 \\ -1 & 4 & 0 & -1 \\ -1 & 0 & 4 & -1 \\ 0 & -1 & -1 & 4 \end{bmatrix}
  \begin{bmatrix} u_{2,2} \\ u_{3,2} \\ u_{2,3} \\ u_{3,3} \end{bmatrix} =
  \begin{bmatrix} 1 \\ 1 \\ 1 \\ 1 \end{bmatrix}
  and then (e.g. using Matlab, or NumPy as sketched below) u_{2,2} = u_{3,2} = u_{2,3} = u_{3,3} = 0.5
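The same linear solve is easy to reproduce outside Matlab; here is a sketch in Python/NumPy (my substitution for the Matlab step mentioned on the slide):

```python
# Direct solution of the 4-by-4 system from this slide.
import numpy as np

A = np.array([[ 4., -1., -1.,  0.],
              [-1.,  4.,  0., -1.],
              [-1.,  0.,  4., -1.],
              [ 0., -1., -1.,  4.]])
b = np.ones(4)

u = np.linalg.solve(A, b)
print(u)  # [0.5 0.5 0.5 0.5], i.e. u_22 = u_32 = u_23 = u_33 = 0.5
```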
Solvers
• Notice that the finite difference method will generally result in a matrix system of the form
  Au = b
  where
  u = [u_1, u_2, \ldots, u_n]^T, \quad b = [b_1, b_2, \ldots, b_n]^T
  and
  A = \begin{bmatrix} a_{11} & a_{12} & a_{13} & \cdots & a_{1n} \\ a_{21} & a_{22} & a_{23} & \cdots & a_{2n} \\ a_{31} & a_{32} & a_{33} & \cdots & a_{3n} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ a_{n1} & a_{n2} & a_{n3} & \cdots & a_{nn} \end{bmatrix}
Solvers
• Generally in 2D we will get matrices of the form (here for 16 unknowns, i.e. a 4 × 4 grid):
  A = \begin{bmatrix}
  a_{11} & a_{12} & 0 & 0 & a_{15} & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
  a_{21} & a_{22} & a_{23} & 0 & 0 & a_{26} & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
  0 & a_{32} & a_{33} & a_{34} & 0 & 0 & a_{37} & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
  0 & 0 & a_{43} & a_{44} & 0 & 0 & 0 & a_{48} & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
  a_{51} & 0 & 0 & 0 & a_{55} & a_{56} & 0 & 0 & a_{59} & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
  0 & a_{62} & 0 & 0 & a_{65} & a_{66} & a_{67} & 0 & 0 & a_{6,10} & 0 & 0 & 0 & 0 & 0 & 0 \\
  0 & 0 & a_{73} & 0 & 0 & a_{76} & a_{77} & a_{78} & 0 & 0 & a_{7,11} & 0 & 0 & 0 & 0 & 0 \\
  0 & 0 & 0 & a_{84} & 0 & 0 & a_{87} & a_{88} & 0 & 0 & 0 & a_{8,12} & 0 & 0 & 0 & 0 \\
  0 & 0 & 0 & 0 & a_{95} & 0 & 0 & 0 & a_{99} & a_{9,10} & 0 & 0 & a_{9,13} & 0 & 0 & 0 \\
  0 & 0 & 0 & 0 & 0 & a_{10,6} & 0 & 0 & a_{10,9} & a_{10,10} & a_{10,11} & 0 & 0 & a_{10,14} & 0 & 0 \\
  0 & 0 & 0 & 0 & 0 & 0 & a_{11,7} & 0 & 0 & a_{11,10} & a_{11,11} & a_{11,12} & 0 & 0 & a_{11,15} & 0 \\
  0 & 0 & 0 & 0 & 0 & 0 & 0 & a_{12,8} & 0 & 0 & a_{12,11} & a_{12,12} & 0 & 0 & 0 & a_{12,16} \\
  0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & a_{13,9} & 0 & 0 & 0 & a_{13,13} & a_{13,14} & 0 & 0 \\
  0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & a_{14,10} & 0 & 0 & a_{14,13} & a_{14,14} & a_{14,15} & 0 \\
  0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & a_{15,11} & 0 & 0 & a_{15,14} & a_{15,15} & a_{15,16} \\
  0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & a_{16,12} & 0 & 0 & a_{16,15} & a_{16,16}
  \end{bmatrix}
• Note the banded structure of the matrix.


Banded matrices
• Banded matrices arise in finite difference methods because (in 2D) the value at each node depends directly only on its four nearest neighbours.
• Banded matrices are sparse (i.e. mostly full of zeros) with a regular structure, and hence can be stored in minimal space.
• For example, if the finite difference grid is 100 by 100, the number of unknowns is 10,000 and the number of entries in the matrix is 100,000,000, which might require 1,600 MB (megabytes) to store in a computer.
• However, by storing just the five non-zero diagonals we can reduce the storage requirement to around 50,000 values, or around 800 KB (kilobytes) = 0.8 MB.
• This means the system can be solved much more rapidly (see the sparse-storage sketch below).
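In practice the banded structure is exploited by using a sparse matrix format rather than a dense array. Here is a sketch using SciPy (my choice of library, not from the slides) that stores only the five diagonals; for brevity it ignores the small gaps in the ±1 diagonals at grid-row boundaries, which are visible in the full matrix above.

```python
# Storing only the five non-zero diagonals of the 2D finite difference
# matrix with scipy.sparse (illustrative sizes; the +/-1 diagonals here
# ignore the grid-edge gaps for brevity).
import numpy as np
from scipy.sparse import diags

m = 100                        # interior grid is m by m, so n = m*m unknowns
n = m * m
main = 4.0 * np.ones(n)        # main diagonal (from the 4*u_ij term)
ew = -1.0 * np.ones(n - 1)     # east/west neighbours, offsets +1 and -1
ns = -1.0 * np.ones(n - m)     # north/south neighbours, offsets +m and -m

A = diags([main, ew, ew, ns, ns], [0, 1, -1, m, -m], format="csr")
print(A.shape, A.nnz)          # (10000, 10000) with ~50,000 stored entries
```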
Direct and Iterative Solvers
• Exact solution requires inversion of A:
  – Very slow. Huge memory requirements.
• Direct solvers (Gaussian elimination, etc.):
  – Need to store the whole matrix. (Disadvantage)
  – Slow, especially for large matrices. (Disadvantage)
  – Robust, even with ill-conditioned matrices. (Advantage)
• Iterative solvers (Jacobi, Gauss-Seidel, etc.):
  – Good for large matrix systems; no need to store the whole matrix. (Advantage)
  – Fast, even for large matrices. (Advantage)
  – Poor for ill-conditioned matrices; may converge only slowly. (Disadvantage)
Iterative Solvers
• Two classical examples are Jacobi and Gauss-Seidel.
• Consider the following system of equations:
  8x_1 + x_2 - x_3 = 8
  -x_1 + 7x_2 - 2x_3 = 4
  2x_1 + x_2 + 9x_3 = 12
• Jacobi:
  x_1^{(k+1)} = (8 - x_2^{(k)} + x_3^{(k)})/8
  x_2^{(k+1)} = (4 + x_1^{(k)} + 2x_3^{(k)})/7
  x_3^{(k+1)} = (12 - 2x_1^{(k)} - x_2^{(k)})/9
• Gauss-Seidel:
  x_1^{(k+1)} = (8 - x_2^{(k)} + x_3^{(k)})/8
  x_2^{(k+1)} = (4 + x_1^{(k+1)} + 2x_3^{(k)})/7
  x_3^{(k+1)} = (12 - 2x_1^{(k+1)} - x_2^{(k+1)})/9
• Start with the initial vector x = [0, 0, 0]^T. The final solution is x = [1, 1, 1]^T. It takes 8 iterations for Jacobi and 6 iterations for Gauss-Seidel (a runnable version follows below).
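Both update rules are easy to run directly; here is a sketch (the 10⁻⁴ stopping tolerance is my assumption, and the iteration counts reported depend on it):

```python
# Jacobi vs Gauss-Seidel on the 3-by-3 system above, iterated until every
# component is within 1e-4 of the exact solution [1, 1, 1] (assumed tolerance).
import numpy as np

def jacobi(x):
    x1, x2, x3 = x                      # all updates use the old values
    return np.array([(8 - x2 + x3) / 8,
                     (4 + x1 + 2 * x3) / 7,
                     (12 - 2 * x1 - x2) / 9])

def gauss_seidel(x):
    x1, x2, x3 = x
    x1 = (8 - x2 + x3) / 8              # new x1 ...
    x2 = (4 + x1 + 2 * x3) / 7          # ... used immediately here
    x3 = (12 - 2 * x1 - x2) / 9         # newest x1 and x2 used here
    return np.array([x1, x2, x3])

for name, step in [("Jacobi", jacobi), ("Gauss-Seidel", gauss_seidel)]:
    x, k = np.zeros(3), 0
    while np.max(np.abs(x - 1.0)) > 1e-4:
        x, k = step(x), k + 1
    print(f"{name}: {k} iterations, x = {np.round(x, 5)}")
```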
Iterative Solvers – Matrix Version
• Need to solve Ax = b.
• Let A = D + L + U (diagonal + lower triangle + upper triangle):
  A = \begin{bmatrix} a_{11} & a_{12} & a_{13} & a_{14} \\ a_{21} & a_{22} & a_{23} & a_{24} \\ a_{31} & a_{32} & a_{33} & a_{34} \\ a_{41} & a_{42} & a_{43} & a_{44} \end{bmatrix}
  D = \begin{bmatrix} a_{11} & 0 & 0 & 0 \\ 0 & a_{22} & 0 & 0 \\ 0 & 0 & a_{33} & 0 \\ 0 & 0 & 0 & a_{44} \end{bmatrix} \quad
  L = \begin{bmatrix} 0 & 0 & 0 & 0 \\ a_{21} & 0 & 0 & 0 \\ a_{31} & a_{32} & 0 & 0 \\ a_{41} & a_{42} & a_{43} & 0 \end{bmatrix} \quad
  U = \begin{bmatrix} 0 & a_{12} & a_{13} & a_{14} \\ 0 & 0 & a_{23} & a_{24} \\ 0 & 0 & 0 & a_{34} \\ 0 & 0 & 0 & 0 \end{bmatrix}
• The Jacobi method can be written as: D x^{(k+1)} = -(L + U) x^{(k)} + b
• The Gauss-Seidel method can be written as: (D + L) x^{(k+1)} = -U x^{(k)} + b
  (generally converges faster as it uses the most recent information; see the sketch below)
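The matrix forms translate directly into code. A sketch applying both splittings to the 3-by-3 system from the previous slide (the fixed count of 20 iterations is an arbitrary choice):

```python
# Matrix form of Jacobi and Gauss-Seidel via the splitting A = D + L + U.
import numpy as np

A = np.array([[ 8., 1., -1.],
              [-1., 7., -2.],
              [ 2., 1.,  9.]])
b = np.array([8., 4., 12.])

D = np.diag(np.diag(A))   # diagonal part
L = np.tril(A, -1)        # strictly lower triangle
U = np.triu(A, 1)         # strictly upper triangle

x = np.zeros(3)
for _ in range(20):       # Jacobi: D x_{k+1} = -(L + U) x_k + b
    x = np.linalg.solve(D, b - (L + U) @ x)
print("Jacobi:      ", np.round(x, 6))

x = np.zeros(3)
for _ in range(20):       # Gauss-Seidel: (D + L) x_{k+1} = -U x_k + b
    x = np.linalg.solve(D + L, b - U @ x)
print("Gauss-Seidel:", np.round(x, 6))
```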
Iterative Solvers – Convergence
• To guarantee convergence of an iterative solver such as Jacobi or Gauss-Seidel, it is sufficient to have diagonal dominance in the matrix, i.e. for each row i:
  |a_{i,i}| > \sum_{j \neq i} |a_{i,j}|
• In other words, the diagonal element in each row (or column) is greater in magnitude than the sum of the magnitudes of the off-diagonal elements.
• We can sometimes rearrange the order of the equations to ensure diagonal dominance (a quick check is sketched below).
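A quick programmatic check of the row-wise condition (a hypothetical helper, not from the lecture):

```python
# Check strict row-wise diagonal dominance: |a_ii| > sum over j != i of |a_ij|.
import numpy as np

def is_diagonally_dominant(A):
    A = np.abs(np.asarray(A, dtype=float))
    off_diag_sums = A.sum(axis=1) - np.diag(A)
    return bool(np.all(np.diag(A) > off_diag_sums))

print(is_diagonally_dominant([[8, 1, -1], [-1, 7, -2], [2, 1, 9]]))  # True
print(is_diagonally_dominant([[1, 4], [2, 1]]))                      # False
```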
Exercise
• Write down the Jacobi method for the following system:
  x_1 + 4x_2 = 5
  2x_1 + x_2 = 3
• Would you expect it to converge?
• If not, how would you rewrite it so that it would converge?
Example – Gauss-Seidel
• Example: use Excel to solve the following:
  2x_1 + x_2 = 3
  x_1 + 4x_2 = 5
• Rewrite as Gauss-Seidel iterations (the spreadsheet on the next slides repeats these two formulas row by row):
  x_1^{(k+1)} = (3 - x_2^{(k)})/2
  x_2^{(k+1)} = (5 - x_1^{(k+1)})/4
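The same iteration the spreadsheet performs can be written as a short loop (the starting guess of zero is an assumption):

```python
# The Gauss-Seidel iteration performed by the Excel sheet, as a short loop.
x1, x2 = 0.0, 0.0           # starting guess (assumed)
for k in range(20):
    x1 = (3 - x2) / 2       # uses the old value of x2
    x2 = (5 - x1) / 4       # uses the x1 just computed
print(x1, x2)               # converges to x1 = 1.0, x2 = 1.0
```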
Gauss-Seidel in Excel
[Screenshot: FLAG cell to reset the iteration (see Tutorial 1); the formula for x1 uses the old value of x2]
Gauss-Seidel in Excel
[Screenshot: the formula for x2 uses the current value of x1]
