
CS 294-73
Software Engineering for Scientific Computing
Lecture 16: Multigrid (structured grids revisited)


Laplacian, Poisson’s equation, Heat Equation
Laplacian: $\Delta \phi = \nabla\cdot\nabla\phi = \frac{\partial^2\phi}{\partial x^2} + \frac{\partial^2\phi}{\partial y^2}$ (in 2D)

Poisson's equation: $\Delta \phi = f$ on $\Omega$, $\phi = 0$ on $\partial\Omega$

Heat equation: $\frac{\partial\phi}{\partial t} = \Delta\phi$ on $\Omega$, $\phi = 0$ on $\partial\Omega$

Blue: Ω
Black: ∂Ω



FFT1DW problem

void FFT1DW::forwardFFTCC(vector<complex<double> >& a_fHat,
                          const vector<complex<double> >& a_f) const
{
  for (int i = 0; i < a_f.size(); i++)
  {
    m_in[i] = a_f[i];
  }
  fftw_execute(m_forward);
  ...
}
Compiler error: m_in is written to inside a const member function.
Fixed by declaring it mutable in the .H file:
...
mutable vector<complex<double> > m_in;



Discretizing on a rectangle (or a union of rectangles)

Defined only on interior (blue) nodes. Values at black nodes are set to zero if we are using the boundary conditions given above.
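For reference, the nodal discretization used here is the standard 5-point ($2D+1$-point in $D$ dimensions) Laplacian, consistent with the Multigrid::residual code later in the lecture:

$$(\Delta^h \phi)_{i,j} = \frac{\phi_{i+1,j} + \phi_{i-1,j} + \phi_{i,j+1} + \phi_{i,j-1} - 4\phi_{i,j}}{h^2},
\qquad
(\Delta^h \phi)_{\mathbf{i}} = \frac{1}{h^2}\Big(\sum_{d=1}^{D}\big(\phi_{\mathbf{i}+e_d} + \phi_{\mathbf{i}-e_d}\big) - 2D\,\phi_{\mathbf{i}}\Big).$$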



Solving Poisson’s equation

Want to solve the linear system of equations

$$\Delta^h \phi_i = f_i$$

for $\phi$ defined on the interior (blue) grid points $i$. This makes sense, since the stencil for the operator requires only nearest neighbors, and we have values at the black points defined by the boundary conditions to be set to zero.



Point Jacobi

We have used point Jacobi to solve these equations.
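Written out (using the same $\lambda$ as in the pseudocode and in m_lambda later in the lecture), one point Jacobi sweep is

$$\phi^{l+1} = \phi^{l} + \lambda\,\big(\Delta^h \phi^{l} - f\big), \qquad \lambda = \frac{h^2}{4D}.$$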



Smoothing properties of point Jacobi

Define the error $\epsilon^l = \phi^l - \phi^{exact}$, where $\Delta^h \phi^{exact} = f$.

Even though we don't know the error, we can compute the residual $R^l = f - \Delta^h\phi^l$ to provide a measure of the error: e.g. declare convergence if $\|R^l\| < tol\,\|R^0\|$.

We can also see how the error behaves under point Jacobi: $\epsilon^{l+1} = \epsilon^l + \lambda\,\Delta^h\epsilon^l$.



Smoothing properties of point Jacobi

For example, choose $\lambda = \frac{h^2}{4D}$ (so $h^2/8$ in 2D). Then

$$\phi^{l+1}_{i,j} = \frac{1}{2}\phi^{l}_{i,j} + \frac{1}{8}\big(\phi^{l}_{i+1,j} + \phi^{l}_{i-1,j} + \phi^{l}_{i,j+1} + \phi^{l}_{i,j-1}\big) - \frac{h^2}{8}\,f_{i,j},$$

which is exactly the update computed in Multigrid::pointRelax below.



Smoothing properties of point Jacobi

•  The value at the new iteration is an average of the values at the old iteration: a weighted sum with positive weights that sum to 1.
•  The max never increases and the min never decreases; local error is smoothed very rapidly.
•  The error equation looks like forward Euler for the heat equation: it smooths higher wavenumbers faster.



Discrete Fourier Analysis of Point Jacobi
$$\Delta^h W^{(k)} = \frac{1}{h^2}\big(-4 + 2\cos(2\pi k_x h) + 2\cos(2\pi k_y h)\big)\,W^{(k)} \equiv \lambda_k W^{(k)}$$

Let $f = W^{(k)}$. Then $(\Delta^h)^{-1} f = \frac{1}{\lambda_k} W^{(k)}$.

Point Jacobi approximates this; the Fourier coefficient of the error satisfies

$$\epsilon^l = (1 + \lambda\lambda_k)\,\epsilon^{l-1} = (1 + \lambda\lambda_k)^l\,\epsilon^0$$

For the error to converge to zero for every Fourier mode, we require that $0 \le 1 + \lambda\lambda_k < 1$.

$$k = \Big(\frac{N}{2}, \frac{N}{2}\Big): \quad \lambda_k = -\frac{8}{h^2}\,, \quad 1 + \lambda\lambda_k = 0$$

$$|k|\,h \ll 1: \quad \lambda_k = O(1)\,, \quad 1 + \lambda\lambda_k = 1 - O(h^2)$$

Generally, high wavenumbers in the error are damped efficiently, but the others are not. But this is relative to the grid spacing.
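To make the damping rates concrete, here is a small standalone sketch (not part of the class code; the grid size and mode choices are arbitrary) that tabulates $1 + \lambda\lambda_k$ for modes with $k_x = k_y$ on a $64\times 64$ grid:

#include <cmath>
#include <cstdio>

int main()
{
  // 2D grid with N points per direction; damped Jacobi with lambda = h^2/(4D), D = 2.
  const int N = 64;
  const double h = 1.0/N;
  const double lambda = h*h/8.0;
  const double pi = 4.0*atan(1.0);
  const int ks[4] = {1, 4, 16, 32};          // kx = ky; 32 = N/2 is the highest mode
  for (int m = 0; m < 4; m++)
  {
    int k = ks[m];
    double lambdak = (-4.0 + 4.0*cos(2.0*pi*k*h))/(h*h);   // lambda_k with kx = ky = k
    printf("k = (%d,%d): 1 + lambda*lambda_k = %8.5f\n", k, k, 1.0 + lambda*lambdak);
  }
  return 0;
}

For $k = N/2$ the factor is exactly 0, while for $k = 1$ it is $1 - O(h^2)$, illustrating the statement above.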



Multigrid
Idea:
1.  Apply point Jacobi to the current approximation to the solution.
2.  Solve a coarse-grid version of the problem on a coarsened grid.
3.  Correct the fine-grid solution using the coarse-grid solution.

Why should this work? Point Jacobi eliminates the high-wavenumber components of the error, so that what remains is representable on the coarse grid.

How does this work?
•  Residual-correction form: solve $\Delta^h \delta = R^h$ with $R^h = f - \Delta^h \tilde{\phi}$; then $\phi = \tilde{\phi} + \delta$ satisfies $\Delta^h \phi = f$.
   $\delta$ captures the smooth remnants of the error.
•  Have to iterate, since this isn't exact.
Deep mathematics: point Jacobi smooths <-> a special property of the Laplacian.
Multigrid

•  Do a few iterations of point Jacobi on your grid to obtain $\tilde{\phi}$.
•  Compute the residual $R = f - \Delta^h \tilde{\phi}$.
•  Average the residual down onto a grid coarsened by a factor of 2: $R_c = A(R)$.
•  Apply point Jacobi to solve the residual equation for $\delta$: $\Delta^{2h}\delta = R_c$.
•  Interpolate the correction back onto the fine-grid solution: $\tilde{\phi} := \tilde{\phi} + I(\delta)$.
•  Smooth again using point Jacobi.



Multigrid

•  If the number of grid points is $2^M+1$, we can apply this recursively.
multigrid(phi, f, h, level)
  phi := phi + lambda*(L(phi) - f);   (p times)
  if (level > 0)
    R = L(phi) - f;
    Rcoarse = Av(R);
    delta = 0.;
    call multigrid(delta, Rcoarse, 2h, level-1);
    phi += Int(delta);
  endif;
  phi := phi + lambda*(L(phi) - f);   (p times)



Averaging and Interpolation

•  Conservation of total charge.
•  Adjoint conditions, order conditions (see Trottenberg).

For our second-order accurate discretization on a nodal-point grid, we can use the trapezoidal rule (full weighting) for averaging,

            1  2  1
   1/16 ×   2  4  2
            1  2  1

and bilinear interpolation for the correction,

   1/4  1/2  1/4
   1/2   1   1/2
   1/4  1/2  1/4

Even if the grid size is not a power of 2, we can apply a direct solve at the bottom: 3 levels of coarsening in 3D already reduces the number of unknowns by a factor of 512.
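As a concrete illustration, here is a minimal sketch of full-weighting averaging and bilinear interpolation on a 2D nodal grid, written with plain nested vectors rather than the BoxData containers used in the class code; the function names (avgDownSketch, fineInterpSketch) and indexing conventions are mine, not the course library's:

#include <vector>
using std::vector;

// Average a fine nodal field ((2*Nc+1)^2 points) onto a grid coarsened by 2,
// using the 1/16 [1 2 1; 2 4 2; 1 2 1] full-weighting stencil at interior nodes.
void avgDownSketch(vector<vector<double> >& a_coarse,
                   const vector<vector<double> >& a_fine, int a_Nc)
{
  for (int i = 1; i < a_Nc; i++)
  {
    for (int j = 1; j < a_Nc; j++)
    {
      int fi = 2*i, fj = 2*j;
      a_coarse[i][j] =
        ( 4.0*a_fine[fi][fj]
        + 2.0*(a_fine[fi+1][fj] + a_fine[fi-1][fj] + a_fine[fi][fj+1] + a_fine[fi][fj-1])
        +      a_fine[fi+1][fj+1] + a_fine[fi+1][fj-1]
        +      a_fine[fi-1][fj+1] + a_fine[fi-1][fj-1] )/16.0;
    }
  }
}

// Add the bilinear interpolant of a coarse correction onto the fine field.
void fineInterpSketch(vector<vector<double> >& a_fine,
                      const vector<vector<double> >& a_coarse, int a_Nc)
{
  int Nf = 2*a_Nc;
  for (int fi = 0; fi <= Nf; fi++)
  {
    for (int fj = 0; fj <= Nf; fj++)
    {
      int i = fi/2, j = fj/2;            // coarse node at or below (fi,fj)
      double sx = 0.5*(fi % 2);          // 0 on coarse lines, 1/2 halfway between
      double sy = 0.5*(fj % 2);
      int ip = (fi % 2) ? i+1 : i;       // stay in range at the high boundary
      int jp = (fj % 2) ? j+1 : j;
      a_fine[fi][fj] += (1-sx)*(1-sy)*a_coarse[i][j]  + sx*(1-sy)*a_coarse[ip][j]
                      + (1-sx)*sy    *a_coarse[i][jp] + sx*sy    *a_coarse[ip][jp];
    }
  }
}

Note that the averaging stencil here is 1/4 of the transpose of the interpolation stencil, which is the adjoint condition mentioned above.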
Results



Finite-Volume Methods on Structured Grids

We write the Laplacian as the divergence of fluxes.

Using the Divergence Theorem, this leads to a natural discretization of the Laplacian.

This discretization satisfies a discrete analogue of the divergence theorem.
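Spelled out in standard notation (the notation here is mine, not taken from the slide), for a control volume $V_i$ of side $h$ with face fluxes $F$:

$$\int_{V_i} \Delta\phi\, dV = \oint_{\partial V_i} \nabla\phi\cdot\hat{n}\, dA
\quad\Rightarrow\quad
(\Delta^h\phi)_i = \frac{1}{h}\sum_{d=1}^{D}\Big(F_{d,\,i+\frac{1}{2}e_d} - F_{d,\,i-\frac{1}{2}e_d}\Big).$$

Summing $h^D\,(\Delta^h\phi)_i$ over any union of control volumes telescopes the interior fluxes, leaving only a sum over boundary faces; that is the discrete analogue of the divergence theorem referred to above.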
Finite-Volume Methods on Structured Grids

We use centered differences and the midpoint rule to approximate fluxes.

Away from boundaries, this leads to the same finite-difference approximation to the Laplacian.

At boundaries, it is different. It also interacts differently with multigrid.
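In the notation above, the centered-difference / midpoint-rule flux is simply

$$F_{d,\,i+\frac{1}{2}e_d} = \frac{\phi_{i+e_d} - \phi_i}{h},$$

and substituting it into the flux-difference form recovers the usual $(2D+1)$-point Laplacian at interior cells.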



Finite-Volume Methods on Structured Grids

Boundary conditions are expressed in terms of computing fluxes at boundary faces.

Coarsening is done by aggregating volumes, rather than by sampling grid points.



Finite-Volume Methods on Structured Grids

Averaging and interpolation:



Results (Fourth-Order Finite-Volume)



The Heat Equation

Explicit discretization in time leads to time step restrictions of the form $\Delta t \le C\,h^2$.

So instead, we use implicit discretizations, e.g. Backward Euler:

$$(I - \Delta t\,\Delta^h)\,\phi^{n+1} = \phi^n$$

More generally, any implicit method for the heat equation will involve solving a system of the form $(I - \mu\,\Delta t\,\Delta^h)\,\phi = g$ with $\mu > 0$.


The Heat Equation – Point Jacobi

Can do the same residual / error analysis as for Poisson's equation to obtain the corresponding Jacobi error iteration.

The new value is again a weighted sum with positive weights, but now the weights sum to less than 1, more so as h gets larger.
Fourier analysis works the same way as for Poisson.
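One way to see the "sum to less than 1" claim (assuming the same Jacobi splitting as for Poisson): for the Backward Euler operator $I - \Delta t\,\Delta^h$ the diagonal entry is $1 + 4D\,\Delta t/h^2$, so the error update averages the $2D$ neighbors with weights $\frac{\Delta t/h^2}{1 + 4D\,\Delta t/h^2}$, which sum to $\frac{4D\,\Delta t/h^2}{1 + 4D\,\Delta t/h^2} < 1$; the sum moves further below 1 as $h$ gets larger for fixed $\Delta t$.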
Multigrid Pseudocode
vcycle(φ, ρ)
{
  φ := φ + λ (L(φ) - ρ)     (p times)
  if (level > 0)
  {
    R = ρ - L(φ)
    Rc = A(R)
    δ : B_c → ℝ , δ = 0
    vcycle(δ, Rc)
    φ := φ + I(δ)
    φ := φ + λ (L(φ) - ρ)   (p times)
  }
  else
  {
    φ := φ + λ (L(φ) - ρ)   (pB times)
  }
}
At the top level, iterate until residual is reduced by some factor.
Implementing Multigrid
The algorithm is naturally recursive: implement it as a nested set of calls to vcycle.



main
...
Multigrid mg(domain,dx,numLevels);
...
int maxiter;
double tol;
cin >> maxiter >> tol;
double resnorm0 = mg.resnorm(phi,rho);
cout << "initial residual = " << resnorm0 << endl;
for (int iter = 0; iter < maxiter; iter++)
{
  mg.vCycle(phi,rho);
  double resnorm = mg.resnorm(phi,rho);
  cout << "iter = " << iter << ", resnorm = " << resnorm << endl;
  if (resnorm < tol*resnorm0) break;
}



Multigrid.H
...
private:
BoxData<double > m_res; //Residual at current level.
BoxData<double > m_resc; // Average of residual.
BoxData<double > m_delta; // Coarse-level correction.
Box m_box;
Box m_bdry[2*DIM];
int m_domainSize;
shared_ptr<Multigrid> m_coarsePtr;
double m_dx;
double m_lambda;
int m_level; // What level ? (m_level = 0 is the bottom).
int m_preRelax = 2*DIM;
int m_postRelax = 2*DIM;
int m_bottomRelax = 10;
int m_flops = 0; // I’m going to use this to count flops.
...
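For orientation, here is a hedged sketch of the class interface these members sit inside, inferred from how main, Multigrid::define, and Multigrid::vCycle use the class; the exact declarations in the course code may differ:

class Multigrid
{
public:
  Multigrid();
  Multigrid(const Box& a_domain, double a_dx, int a_numLevels);   // calls define
  void define(const Box& a_box, double a_dx, int a_level);
  void vCycle(BoxData<double >& a_phi, const BoxData<double >& a_rhs);
  double resnorm(BoxData<double >& a_phi, const BoxData<double >& a_rhs);
private:
  void pointRelax(BoxData<double >& a_phi, const BoxData<double >& a_rhs,
                  int a_numIter);
  void residual(BoxData<double >& a_res, BoxData<double >& a_phi,
                const BoxData<double >& a_rhs);
  void avgDown(BoxData<double >& a_resc, const BoxData<double >& a_res);
  void fineInterp(BoxData<double >& a_phi, const BoxData<double >& a_delta);
  void getGhost(BoxData<double >& a_phi);
  // ... data members as listed above ...
};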



Multigrid::define
...
void Multigrid::define(const Box& a_box, double a_dx, int a_level)
{
  m_box = a_box;
  m_level = a_level;
  m_res.define(m_box);
  m_dx = a_dx;
  // Number of points across the domain; assumes the box's low corner is 0
  // and that Box::high() returns the high corner.
  m_domainSize = m_box.high()[0] + 1;
  if (m_level > 0)
  {
    Box bCoarse = m_box.coarsen(2);
    m_resc.define(bCoarse);
    m_delta.define(bCoarse.grow(1));
    m_coarsePtr = shared_ptr<Multigrid>
                  (new Multigrid(bCoarse,2*m_dx,m_level-1));
  }
  m_lambda = m_dx*m_dx/(4*DIM);
...
Multigrid Pseudocode
vcycle(φ, ρ)
{
  φ := φ + λ (L(φ) - ρ)     (p times)
  if (level > 0)
  {
    R = ρ - L(φ)
    Rc = A(R)
    δ : B_c → ℝ , δ = 0
    vcycle(δ, Rc)
    φ := φ + I(δ)
    φ := φ + λ (L(φ) - ρ)   (p times)
  }
  else
  {
    φ := φ + λ (L(φ) - ρ)   (pB times)
  }
}



Multigrid::vcycle
void Multigrid::vCycle(
                       BoxData<double >& a_phi,
                       const BoxData<double >& a_rhs)
{
  pointRelax(a_phi,a_rhs,m_preRelax);
  if (m_level > 0)
  {
    residual(m_res,a_phi,a_rhs);
    avgDown(m_resc,m_res);
    m_delta.setVal(0.);
    m_coarsePtr->vCycle(m_delta,m_resc);
    fineInterp(a_phi,m_delta);
    pointRelax(a_phi,a_rhs,m_postRelax);
  }
  else
  {
    pointRelax(a_phi,a_rhs,m_bottomRelax);
  }
}



Multigrid::residual
void Multigrid::residual(
                         BoxData<double >& a_res,
                         BoxData<double >& a_phi,
                         const BoxData<double >& a_rhs)
{
  getGhost(a_phi);               // fill ghost values of a_phi before applying the stencil
  a_res.setVal(0.);
  double dxsqi = 1./(m_dx*m_dx);
  for (auto it = m_box.begin();it.done();++it)
  {
    Point pt = *it;
    a_res[pt] = -2*DIM*a_phi[pt];
    for (int dir = 0; dir < DIM ; dir++)
    {
      Point edir = getUnitv(dir);
      a_res[pt] += (a_phi[pt + edir] + a_phi[pt-edir]);
    }
    // a_res now holds h^2 * L(phi); the residual is rhs - L(phi).
    a_res[pt] = -a_res[pt]*dxsqi + a_rhs[pt];
  }
}
Multigrid::pointRelax
void Multigrid::pointRelax(BoxData<double >& a_phi,
                           const BoxData<double >& a_rhs, int a_numIter)
{
  for (int iter = 0; iter < a_numIter; iter++)
  {
    for (auto it = m_box.begin();it.done();++it)
    {
      Point pt = *it;
      // Weighted-Jacobi average of the old iterate into the scratch array m_res.
      m_res[pt] = .5*a_phi[pt];
      for (int dir = 0; dir < DIM ; dir++)
      {
        Point edir = Basis(dir);
        m_res[pt] += (a_phi[pt + edir] + a_phi[pt-edir])/(4*DIM);
      }
    }
    for (auto it = m_box.begin();it.done();++it)
    {
      Point pt = *it;
      a_phi[pt] = m_res[pt] - m_lambda*a_rhs[pt];
    }
  }
}
Multigrid::getGhost
void Multigrid::getGhost(BoxData<double >& a_phi )
{
  ...
  for (int k = 0; k < 2*DIM; k++)
  {
    Box bx = m_bdry[k];
    // Fill each boundary (ghost) point with its periodic image inside the domain.
    for (auto it = bx.begin();it.done();++it)
    {
      Point pt = *it;
      int image[DIM];
      for (int dir = 0; dir < DIM; dir++)
      {
        image[dir] = (pt[dir] + m_domainSize)%m_domainSize;
      }
      Point ptimage(image);
      a_phi[pt] = a_phi[ptimage];
    }
  }
  ...
}

