
Lecture 24:

Analysis of Iterative Methods


Introduction to Multigrid Methods
Last Time…

• Completed looking at co-located schemes and the SIMPLE algorithm
This Time…

We will:
• Revisit linear solvers to look at multigrid schemes
• Start by looking at the performance of Jacobi and Gauss-Seidel schemes for different types of initial guesses
• Find that the performance of these schemes is tied to the frequency content of the initial guess
• Find a way to exploit this fact in devising multigrid schemes
Poisson Equation

• Consider the 1-D Poisson equation:

  d²φ/dx² = s(x),  0 ≤ x ≤ L

• Boundary conditions: φ(0) and φ(L) are specified
• Discretize this on a mesh of N equal control volumes of size h
Poisson Equation (Cont’d)

• Discretization yields, for each interior cell j:

  φ_{j+1} − 2φ_j + φ_{j−1} = s_j h²
Poisson Equation (Cont’d)

• Choose φ(0) = 0, φ(L) = 0 and s(x) = 0
• The solution to this problem is φ = 0 everywhere
• In order to study convergence, let us start with an initial guess of the form:

  φ(x) = sin(kπx/L)

• This represents a “Fourier mode”, and k is the wavenumber
• The error at any iteration is simply the value of φ itself
Initial Guess

• Small k is smoother
• High k is more wiggly
Overall Idea

• Start with the initial guess above
• N = 64, 1-D uniform mesh
• Do 50 Gauss-Seidel iterations and plot φ_max versus iteration
• Recall that φ(x) is the error
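The experiment above can be sketched in a few lines of Python. This is a node-based sketch assuming the mode shape sin(kπx/L) on a unit domain (the lecture's control-volume mesh behaves the same way); `gauss_seidel_smoothing` is a name of my choosing:

```python
import numpy as np

def gauss_seidel_smoothing(k, N=64, sweeps=50):
    """Gauss-Seidel for the 1-D problem phi'' = 0, phi(0) = phi(1) = 0.
    The exact solution is phi = 0, so phi itself is the error.
    Returns max|phi| after each sweep for a Fourier-mode initial guess."""
    x = np.linspace(0.0, 1.0, N + 1)
    phi = np.sin(k * np.pi * x)          # assumed mode shape sin(k*pi*x/L), L = 1
    phi[0] = phi[-1] = 0.0               # enforce the boundary conditions
    history = []
    for _ in range(sweeps):
        for j in range(1, N):            # one lexicographic Gauss-Seidel sweep
            phi[j] = 0.5 * (phi[j - 1] + phi[j + 1])
        history.append(np.abs(phi).max())
    return history

low = gauss_seidel_smoothing(k=2)        # smooth initial guess
high = gauss_seidel_smoothing(k=16)      # wiggly initial guess
# the wiggly (high-k) mode is smoothed far faster than the smooth (low-k) one
```

Plotting `low` and `high` against the sweep number reproduces the behaviour described on the following slides.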
Error For Different Initial Guesses

• The low-wavenumber (low-frequency) initial profile has high error after 50 iterations
• The high-wavenumber initial profile is almost converged by the 10th iteration
Multiple-Frequency Initial Guess

• Realistic initial guesses would normally have many frequency components
• To mimic this, consider an initial profile with a mixture of frequencies: the signal contains k = 2, k = 8, and k = 16
• The mesh is still N = 64; again do 50 iterations
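A sketch of this mixed-frequency experiment, assuming an equal-weight mixture of the three modes (the lecture does not give the exact weights):

```python
import numpy as np

N = 64
x = np.linspace(0.0, 1.0, N + 1)
# assumed equal-weight mixture of the k = 2, 8, 16 modes on a unit domain
phi = (np.sin(2 * np.pi * x) + np.sin(8 * np.pi * x) + np.sin(16 * np.pi * x)) / 3.0
phi[0] = phi[-1] = 0.0

err = []
for sweep in range(50):
    for j in range(1, N):                # Gauss-Seidel sweep for phi'' = 0
        phi[j] = 0.5 * (phi[j - 1] + phi[j + 1])
    err.append(np.abs(phi).max())
# err drops quickly while the k = 8 and k = 16 components are removed,
# then stalls once only the smooth k = 2 component is left
```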
Error For Mixed-Frequency Case

• The error drops rapidly for the first few iterations and then stalls
Error Profiles for k=2

• Plot φ(x) for the k = 2 initial profile
• There is not much difference between the initial and final profiles after 50 iterations
• Gauss-Seidel is not doing much
Error Profiles for k=8

• For k = 8, Gauss-Seidel is much more effective
• The error is almost zero after 50 iterations
Error Profile for Mixed Frequency
• The high-k component of the initial profile is eliminated by the GS scheme
• Low-k components remain
• The fast drop in error at the beginning is due to removal of the high-k components
One way to think about this …

• Gauss-Seidel and Jacobi schemes have a local stencil
• They feel the influence of near neighbors easily
• Points which are computationally far away (i.e., many cells away) are not felt easily
• Therefore high-frequency variations of φ are easily eliminated (local smoothing); GS/Jacobi do not have enough “reach” for low-frequency signals
• This is not related to the physical size of the domain
  » It is tied to the number of cells in the mesh vis-à-vis the frequency content of the profile
Introduction to Multigrid Schemes

• Notice from our analysis that low k is bad for Gauss-Seidel; similarly, high N is bad
• It seems like a good idea to involve coarse meshes to remove low-k components
• But how exactly?
Constraints

• The final solution should depend only on the finest mesh
  » We don’t want to compromise on accuracy
• The coarse mesh must therefore only provide corrections or guesses to accelerate convergence
  » Corrections must go to zero as convergence is approached on the fine mesh (i.e., as the fine-mesh residuals go to zero)
• We can therefore get away with solving only an approximate coarse-mesh problem, since it does not govern accuracy
Nested Iteration?

• How about the following strategy:
  1. Create a sequence of coarse meshes
  2. Start with the coarsest mesh. Discretize and solve the problem
  3. Use this solution as the initial guess on the next-finer level
  4. Discretize and solve the problem on the next-finer level
  5. Keep going until the finest level
  » The solution on the finest level is your answer if the problem is linear
  » If the problem is non-linear, go back to 2
Issues with Nested Iteration

• We don’t want to waste too much effort on coarse-mesh solutions
• Nested iteration does not make use of the previous outer iteration on the fine mesh in the current coarse-mesh solutions
• The coarse-mesh solution may not be a good guess of the solution on the fine mesh
  » It may give a reasonable guess of the error distribution, though
Coarse Grid Correction

• Recall that we are trying to solve A x = b
• Say we have an iterate x_k at the kth iteration. The error is

  e = x − x_k

• Therefore

  A (x − x_k) = b − A x_k = r_k
  A e = r_k        (r_k is the residual at the kth iteration)

• If we solved A e = r_k and corrected

  x = x_k + e

  this would be equivalent to solving A x = b
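This identity is easy to verify numerically. A minimal check with a made-up 3×3 system and an arbitrary iterate (the matrix and numbers are illustrative only):

```python
import numpy as np

# made-up symmetric system A x = b
A = np.array([[ 2.0, -1.0,  0.0],
              [-1.0,  2.0, -1.0],
              [ 0.0, -1.0,  2.0]])
b = np.array([1.0, 0.0, 1.0])

xk = np.array([0.3, -0.1, 0.2])      # some intermediate iterate x_k
rk = b - A @ xk                      # residual r_k = b - A x_k
e = np.linalg.solve(A, rk)           # solve the error equation A e = r_k
x = xk + e                           # corrected iterate

# correcting with the exact error reproduces the solution of A x = b
assert np.allclose(A @ x, b)
```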
Coarse Grid Correction (Cont’d)
• In the multigrid method, this process is carried out across two mesh levels
• We iterate for k iterations on the fine mesh until the rate of residual reduction becomes untenably low
• At this point, only low-wavenumber errors are left. These cannot be removed efficiently on the fine mesh, but they can be removed efficiently on a coarse mesh
• We find the residual r on the fine mesh
• We solve A e = r on the coarse mesh and correct the fine-mesh solution using x = x_k + e
Multigrid Method

• On the coarse level, we have to solve A e = r
• This is no different from solving A x = b and will also run into low-wavenumber error-reduction problems on the coarse mesh
• So we solve for the correction to e on a yet-coarser mesh, and so on, recursively, until we have used the coarsest mesh possible (typically consisting of 2-4 cells)
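The recursion just described can be sketched as a classic V-cycle. This is a node-based geometric-multigrid sketch using full-weighting restriction and linear-interpolation prolongation (standard choices, not necessarily the lecture's control-volume transfers); the function names are mine:

```python
import numpy as np

def smooth(phi, f, h, sweeps):
    # Gauss-Seidel sweeps for -phi'' = f discretized as
    # (2*phi[j] - phi[j-1] - phi[j+1]) / h^2 = f[j]
    for _ in range(sweeps):
        for j in range(1, len(phi) - 1):
            phi[j] = 0.5 * (phi[j - 1] + phi[j + 1] + h * h * f[j])

def v_cycle(phi, f, h):
    smooth(phi, f, h, 3)                              # pre-smoothing
    if len(phi) <= 3:                                 # coarsest level reached
        return phi
    r = np.zeros_like(phi)                            # fine-level residual r = f - A phi
    r[1:-1] = f[1:-1] - (2 * phi[1:-1] - phi[:-2] - phi[2:]) / h**2
    nc = (len(phi) - 1) // 2 + 1                      # coarse node count
    fc = np.zeros(nc)                                 # restriction (full weighting)
    fc[1:-1] = 0.25 * r[1:-2:2] + 0.5 * r[2:-1:2] + 0.25 * r[3::2]
    ec = np.zeros(nc)
    v_cycle(ec, fc, 2 * h)                            # solve A e = r recursively
    e = np.zeros_like(phi)                            # prolongation (linear interpolation)
    e[::2] = ec
    e[1::2] = 0.5 * (ec[:-1] + ec[1:])
    phi += e                                          # coarse-grid correction
    smooth(phi, f, h, 3)                              # post-smoothing
    return phi

# -phi'' = pi^2 sin(pi x) on [0, 1]; exact solution phi = sin(pi x)
n = 65
x = np.linspace(0.0, 1.0, n)
f = np.pi**2 * np.sin(np.pi * x)
phi = np.zeros(n)
for _ in range(8):
    v_cycle(phi, f, 1.0 / (n - 1))
# a handful of V-cycles reduce the error to the discretization level
```

Each level pre-smooths, restricts its residual to the next-coarser level, recursively solves A e = r there, prolongs and adds the correction, and post-smooths; the recursion bottoms out on a mesh of just a few cells.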
Multigrid Method (Cont’d)

• Each coarse level solves for the error in the next-finer level
• The finest level solves for the solution vector x
• The source vector (RHS) at each coarse level is the residual on the next-finer level
• On the finest level it is the actual source vector b
Properties of Multigrid Method

• Notice that at each coarse level we solve A e = r
• When the finer-mesh solution has converged, the residual on the finer mesh, r, is zero
• Correspondingly, the correction e computed on the coarse mesh is zero
• So the solution on the finer mesh receives a zero correction
• This holds at all levels. So once the finest mesh converges, the coarse-mesh corrections all go to zero
Properties (Cont’d)

• Since the coarse-mesh corrections go to zero at convergence, the precise equation used to generate them is not important
• The coarse-mesh coefficient matrix determines the rate of convergence but not the actual final solution on the fine mesh
• There are different methods for creating the coarse-level coefficient matrix
Notation

• At any level l, all vectors and matrices are represented by the superscript (l)
• The finest level is l = 0; higher values of l represent ever-coarser levels
• Thus, at any level (l), the problem to be solved is:

  A^(l) x^(l) = b^(l)

• The residual at level (l) after k iterations is:

  r^(l) = b^(l) − A^(l) x_k^(l)


Notation (Cont’d)

• At the finest level (l = 0), we iterate for k iterations on A x = b and find the residual r^(0)
• The unknown at the next-coarser level (l = 1) is x^(1); it represents the error at level (l = 0)
• In general, the solution vector x^(l) is the error, or correction, to level (l − 1)
• The source vector b^(l) is the residual at level (l − 1)
Multigrid Idea

• We do k iterations on the finest mesh
• Find the residuals and transfer them by some means to the next-coarser level l = 1 to find b^(1)
• Do some iterations on the coarse mesh at l = 1 (recursively repeat the process on coarser levels if necessary)
• Transfer x^(1) to the finest mesh and use it to correct x
Multigrid Idea (Cont’d)

• We need to decide:
  » What equations to solve on the coarse mesh
  » What the procedures are for transferring residuals from the fine mesh to the coarse mesh
  » What the procedures are for transferring corrections from the coarse mesh to the fine mesh
• Obviously, the process on succeeding mesh levels is recursive
Creating Coarse Meshes

• A coarse mesh can be created by merging (agglomerating) fine-mesh cells


Restriction

• For our mesh, the centroid of a coarse-level cell lies midway between the centroids of its parent fine cells
• The source term at the coarse level may be found by averaging
• This process of transferring residuals from the fine level to the coarse level is called restriction
• It can be represented by the operator notation:

  b^(l+1) = I_l^(l+1) r^(l)
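As a concrete sketch, assuming 8 fine cells agglomerated in pairs and made-up residual values:

```python
import numpy as np

# fine-level residuals on N = 8 control volumes (made-up values)
r_fine = np.array([0.2, 0.4, -0.1, 0.3, 0.0, 0.6, 0.5, -0.3])

# restriction: each coarse cell averages its two parent fine cells
b_coarse = 0.5 * (r_fine[0::2] + r_fine[1::2])
# b_coarse is [0.3, 0.1, 0.3, 0.1]
```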
Restriction (Cont’d)

• Averaging need not be the only type of restriction operator
• Other types of interpolation can be used as the restriction operator
Prolongation

• Once we have a solution on the coarse mesh, we use it to correct the solution at the next-finer level:

  x^(l) = x^(l) + I_(l+1)^(l) x^(l+1)

• Here, I_(l+1)^(l) is the prolongation operator; it transfers the solution from the coarse mesh to the fine mesh
• The simplest prolongation operator applies the coarse-level correction to both parent cells
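A sketch of this simplest prolongation, continuing the 8-cell example (the values are made up):

```python
import numpy as np

x_fine = np.zeros(8)                        # current fine-level iterate (illustrative)
e_coarse = np.array([0.3, 0.1, 0.3, 0.1])   # corrections from 4 coarse cells

# prolongation: copy each coarse-cell correction to both of its parent fine cells
e_fine = np.repeat(e_coarse, 2)             # [0.3, 0.3, 0.1, 0.1, 0.3, 0.3, 0.1, 0.1]
x_fine += e_fine                            # corrected fine-level solution
```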
Closure

In this lecture, we saw that:
• Jacobi and Gauss-Seidel schemes do well on high-wavenumber errors but badly on low-wavenumber errors
• They do well on coarse grids but badly on fine grids
• Multigrid methods use coarse-mesh corrections to accelerate convergence on the fine mesh
