
Chapter TWO

Finite Difference Method

Basic Principles of Numerical Analysis


The approximation of a continuously varying quantity in terms of values at a finite
number of points is called discretization.

The fundamental elements of any numerical simulation are:

(a) The fluid continuum is discretized; i.e. field variables (p, u, v, w, .....) are
approximated by their values at a finite number of nodes.
(b) The equations of motion are discretized; i.e. approximated in terms of values at
nodes:
differential or integral equations (continuum) → algebraic equations (discrete)

(c) The system of algebraic equations is solved to give values at the nodes.

Three stages in a numerical analysis are:

Pre-processing: - problem formulation (governing equations with boundary and/or
initial conditions);
- construction of a computational domain with mesh.
These depend upon:
– the desired outcome of the simulation (e.g. forces, loss coefficients, flow
rate, concentration distribution, heat transfer, ...);
– the capabilities of the solver.

Solver: - numerical solution of the governing equations.

In commercial CFD packages the solver is often operated as a “black box”.


Nevertheless, intelligent user intervention is necessary – to set under-relaxation
factors and input parameters, for example – whilst an understanding of discretisation
methods and internal data structures is necessary in order to supply mesh data in an
appropriate form and to analyse the output.

Post-processing: - plotting and analysis of the results.

The raw output of the solver is a set of numbers corresponding to the values of each
field variable (u, v, w, p, …) at each point of the mesh. This huge quantity of
numbers must be reduced to some meaningful subset and, usually, manipulated
further to obtain the desired predictive quantities. For example, a set of surface
pressures and cell-face areas is required to compute a drag coefficient or a set of
velocities and areas to determine a flow rate.
Commercial packages often provide post-processing facilities to plot, interpolate or
simply extract quantities from the output dataset. A key component of post-
processing is being able to visualise complex flows – either to indicate important
features of the flow or, unfortunately, sometimes to establish why a calculation is
diverging.

The Main Discretization Methods


Three basic methods are commonly used to discretize the governing equations:
(a) the finite-difference method, (b) the finite-volume method, and (c) the
finite-element method.

When a differential equation is solved analytically over a given region subject to


given boundary conditions, the resulting solution satisfies the differential equation at
every point in the region. When the problem cannot be solved analytically, or the
analytical solution becomes so complex that numerical computation is very difficult,
one generally resorts to a numerical technique for solution. When the finite-difference
approach is used, the problem domain is discretized so that the values of the unknown
dependent variable are considered only at a finite number of nodal points instead of
at every point over the region. If N nodes are chosen, N algebraic equations are
developed by discretizing the governing differential equations and the boundary
conditions for the problem. That is, the problem of solving the ordinary or partial
differential equations over the problem domain is transformed into the task of
developing a set of algebraic equations and solving them by a suitable method.

This simple approach is complicated by the fact that the nature of the resulting set
of algebraic equations depends on the character of the partial differential equations
governing the physical problem; that is, whether they are parabolic, elliptic or
hyperbolic. Furthermore, there are many discretization schemes, hence one must
select the one which is the most appropriate for the nature of the problem.

Two basic approaches commonly used to discretize the derivatives in partial


differential equations include (i) the use of Taylor series expansion, and (ii) the use
of a polynomial of degree n.

2.1 Taylor Series Formulations

A formal basis for developing finite difference approximations to derivatives is the
Taylor series expansion. Consider the Taylor series expansion of a function f(x) about a
point x0, given by

    f(x_0 + \Delta x) = f(x_0) + \Delta x \frac{\partial f}{\partial x}\Big|_{x_0} + \frac{(\Delta x)^2}{2!} \frac{\partial^2 f}{\partial x^2}\Big|_{x_0} + \frac{(\Delta x)^3}{3!} \frac{\partial^3 f}{\partial x^3}\Big|_{x_0} + \cdots   (2.1)

                     = f(x_0) + \sum_{n=1}^{\infty} \frac{(\Delta x)^n}{n!} \frac{\partial^n f}{\partial x^n}\Big|_{x_0}

Solving for the first derivative, one obtains


    \frac{\partial f}{\partial x}\Big|_{x_0} = \frac{f(x_0+\Delta x) - f(x_0)}{\Delta x} - \frac{\Delta x}{2!} \frac{\partial^2 f}{\partial x^2}\Big|_{x_0} - \frac{(\Delta x)^2}{3!} \frac{\partial^3 f}{\partial x^3}\Big|_{x_0} - \cdots   (2.2)

Lumping together all the terms with factors of \Delta x and higher and representing them
as O(\Delta x) (read as "terms of order \Delta x") yields

    \frac{\partial f}{\partial x}\Big|_{x_0} = \frac{f(x_0+\Delta x) - f(x_0)}{\Delta x} + O(\Delta x)   (2.3)

which is an approximation for the first partial derivative of f with respect to x at x0.

If the subscript index i is used to represent the discrete point x0, then the indices
i+1 and i-1 refer to the discrete points x0 + \Delta x and x0 - \Delta x, respectively.

Equation (2.3) is then written as

    \frac{\partial f}{\partial x}\Big|_i = \frac{f_{i+1} - f_i}{\Delta x} + O(\Delta x)   (2.4)

This equation is known as the first forward difference approximation of \partial f/\partial x|_i,
of order O(\Delta x). It is obvious that as the step size \Delta x decreases, the error term is
reduced and therefore the accuracy of the approximation is increased. Now consider the
Taylor series expansion of f(x_0 - \Delta x) about x0:

    f(x_0 - \Delta x) = f(x_0) - \Delta x \frac{\partial f}{\partial x}\Big|_{x_0} + \frac{(\Delta x)^2}{2!} \frac{\partial^2 f}{\partial x^2}\Big|_{x_0} - \frac{(\Delta x)^3}{3!} \frac{\partial^3 f}{\partial x^3}\Big|_{x_0} + \cdots   (2.5)

                     = f(x_0) + \sum_{n=1}^{\infty} \frac{(-\Delta x)^n}{n!} \frac{\partial^n f}{\partial x^n}\Big|_{x_0}

Solving for the first derivative, one obtains

    \frac{\partial f}{\partial x}\Big|_{x_0} = \frac{f(x_0) - f(x_0-\Delta x)}{\Delta x} + O(\Delta x)

or

    \frac{\partial f}{\partial x}\Big|_i = \frac{f_i - f_{i-1}}{\Delta x} + O(\Delta x)   (2.6)

Equation (2.6) is the first backward difference approximation of \partial f/\partial x|_i, of order O(\Delta x).

Subtracting equation (2.5) from equation (2.1), one obtains

    f(x_0+\Delta x) - f(x_0-\Delta x) = 2\,\Delta x \frac{\partial f}{\partial x}\Big|_{x_0} + \frac{2(\Delta x)^3}{3!} \frac{\partial^3 f}{\partial x^3}\Big|_{x_0} + \cdots   (2.7)

Solving for \partial f/\partial x|_{x_0},

    \frac{\partial f}{\partial x}\Big|_{x_0} = \frac{f(x_0+\Delta x) - f(x_0-\Delta x)}{2\,\Delta x} + O((\Delta x)^2)

    \frac{\partial f}{\partial x}\Big|_i = \frac{f_{i+1} - f_{i-1}}{2\,\Delta x} + O((\Delta x)^2)   (2.8)

This representation of \partial f/\partial x|_i is known as the central difference approximation,
of order O((\Delta x)^2). Thus, three approximations for the first derivative have been introduced.
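These three approximations are easy to check numerically. The following Python sketch (not part of the original notes; the test function f(x) = sin x and the point x0 = 1 are illustrative choices) evaluates the forward, backward and central differences and compares them with the exact derivative cos x0:

```python
import math

def forward_diff(f, x0, dx):
    # (f(x0 + dx) - f(x0)) / dx, first-order accurate: O(dx)
    return (f(x0 + dx) - f(x0)) / dx

def backward_diff(f, x0, dx):
    # (f(x0) - f(x0 - dx)) / dx, first-order accurate: O(dx)
    return (f(x0) - f(x0 - dx)) / dx

def central_diff(f, x0, dx):
    # (f(x0 + dx) - f(x0 - dx)) / (2 dx), second-order accurate: O(dx^2)
    return (f(x0 + dx) - f(x0 - dx)) / (2.0 * dx)

x0, dx = 1.0, 1e-3
exact = math.cos(x0)

err_fwd = abs(forward_diff(math.sin, x0, dx) - exact)
err_bwd = abs(backward_diff(math.sin, x0, dx) - exact)
err_cen = abs(central_diff(math.sin, x0, dx) - exact)
```

With \Delta x = 10^{-3} the one-sided errors come out of order 10^{-4} (proportional to \Delta x), while the central error is of order 10^{-7} (proportional to (\Delta x)^2), as equations (2.4), (2.6) and (2.8) predict.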

Three-point forward difference approximation


The first forward difference approximation can also be expressed using three points.
Again consider the Taylor series expansion

    f(x_0 + \Delta x) = f(x_0) + \Delta x \frac{\partial f}{\partial x}\Big|_{x_0} + \frac{(\Delta x)^2}{2!} \frac{\partial^2 f}{\partial x^2}\Big|_{x_0} + \frac{(\Delta x)^3}{3!} \frac{\partial^3 f}{\partial x^3}\Big|_{x_0} + \cdots   (2.9)

Expanding f(x_0 + 2\Delta x) by Taylor series about x0 produces the expansion

    f(x_0 + 2\Delta x) = f(x_0) + 2\Delta x \frac{\partial f}{\partial x}\Big|_{x_0} + \frac{(2\Delta x)^2}{2!} \frac{\partial^2 f}{\partial x^2}\Big|_{x_0} + \frac{(2\Delta x)^3}{3!} \frac{\partial^3 f}{\partial x^3}\Big|_{x_0} + \cdots   (2.10)

Multiplying equation (2.9) by 4 and subtracting it from equation (2.10), the result is

    f(x_0+2\Delta x) - 4 f(x_0+\Delta x) = -3 f(x_0) - 2\Delta x \frac{\partial f}{\partial x}\Big|_{x_0} + \frac{2(\Delta x)^3}{3} \frac{\partial^3 f}{\partial x^3}\Big|_{x_0} + \cdots   (2.11)

Solving for \partial f/\partial x|_{x_0},

    \frac{\partial f}{\partial x}\Big|_{x_0} = \frac{-3 f(x_0) + 4 f(x_0+\Delta x) - f(x_0+2\Delta x)}{2\,\Delta x} + O((\Delta x)^2)

    \frac{\partial f}{\partial x}\Big|_i = \frac{-3 f_i + 4 f_{i+1} - f_{i+2}}{2\,\Delta x} + O((\Delta x)^2)   (2.12)

This equation represents the forward difference approximation for the first derivative of f
with respect to x at x0 using three points on one side, and it is of order O((\Delta x)^2). A
similar three-point backward difference approximation for the first derivative can be
produced using the Taylor series expansions of f(x_0 - \Delta x) and f(x_0 - 2\Delta x).
The result is

    \frac{\partial f}{\partial x}\Big|_i = \frac{3 f_i - 4 f_{i-1} + f_{i-2}}{2\,\Delta x} + O((\Delta x)^2)   (2.13)
Now approximate expressions for the second order derivative are considered. Again
consider the Taylor series expansions (2.9) and (2.10), which are repeated here:

    f(x_0 + \Delta x) = f(x_0) + \Delta x \frac{\partial f}{\partial x}\Big|_{x_0} + \frac{(\Delta x)^2}{2!} \frac{\partial^2 f}{\partial x^2}\Big|_{x_0} + \frac{(\Delta x)^3}{3!} \frac{\partial^3 f}{\partial x^3}\Big|_{x_0} + \cdots   (2.14)

and

    f(x_0 + 2\Delta x) = f(x_0) + 2\Delta x \frac{\partial f}{\partial x}\Big|_{x_0} + \frac{(2\Delta x)^2}{2!} \frac{\partial^2 f}{\partial x^2}\Big|_{x_0} + \frac{(2\Delta x)^3}{3!} \frac{\partial^3 f}{\partial x^3}\Big|_{x_0} + \cdots   (2.15)

Multiplying equation (2.14) by 2 and subtracting it from equation (2.15), the result is

    f(x_0+2\Delta x) - 2 f(x_0+\Delta x) = -f(x_0) + (\Delta x)^2 \frac{\partial^2 f}{\partial x^2}\Big|_{x_0} + (\Delta x)^3 \frac{\partial^3 f}{\partial x^3}\Big|_{x_0} + \cdots   (2.16)

Solving for \partial^2 f/\partial x^2|_{x_0},

    \frac{\partial^2 f}{\partial x^2}\Big|_{x_0} = \frac{f(x_0) - 2 f(x_0+\Delta x) + f(x_0+2\Delta x)}{(\Delta x)^2} + O(\Delta x)

or

    \frac{\partial^2 f}{\partial x^2}\Big|_i = \frac{f_i - 2 f_{i+1} + f_{i+2}}{(\Delta x)^2} + O(\Delta x)   (2.17)

This equation represents the forward difference approximation for the second order
derivative and is of order O(\Delta x). A similar backward difference approximation for
the second order derivative can be produced as

    \frac{\partial^2 f}{\partial x^2}\Big|_i = \frac{f_i - 2 f_{i-1} + f_{i-2}}{(\Delta x)^2} + O(\Delta x)   (2.18)

To obtain a central difference approximation of the second order derivative, simply
add equations (2.1) and (2.5). Thus

    \frac{\partial^2 f}{\partial x^2}\Big|_{x_0} = \frac{f(x_0+\Delta x) - 2 f(x_0) + f(x_0-\Delta x)}{(\Delta x)^2} + O((\Delta x)^2)   (2.19)

or

    \frac{\partial^2 f}{\partial x^2}\Big|_i = \frac{f_{i+1} - 2 f_i + f_{i-1}}{(\Delta x)^2} + O((\Delta x)^2)

So far the first and second order derivatives have been expressed using forward and
backward differencing of order O(\Delta x) and central differencing of order O((\Delta x)^2).
By considering additional terms in the Taylor series expansion, a more accurate
approximation of the derivatives can be produced. Rewrite equations (2.1) and (2.2),
respectively:

    f(x_0 + \Delta x) = f(x_0) + \Delta x \frac{\partial f}{\partial x}\Big|_{x_0} + \frac{(\Delta x)^2}{2!} \frac{\partial^2 f}{\partial x^2}\Big|_{x_0} + \frac{(\Delta x)^3}{3!} \frac{\partial^3 f}{\partial x^3}\Big|_{x_0} + \cdots   (2.20)

    \frac{\partial f}{\partial x}\Big|_{x_0} = \frac{f(x_0+\Delta x) - f(x_0)}{\Delta x} - \frac{\Delta x}{2!} \frac{\partial^2 f}{\partial x^2}\Big|_{x_0} - \frac{(\Delta x)^2}{3!} \frac{\partial^3 f}{\partial x^3}\Big|_{x_0} - \cdots   (2.21)

Substitute a forward difference expression for \partial^2 f/\partial x^2|_{x_0}, that is

    \frac{\partial^2 f}{\partial x^2}\Big|_{x_0} = \frac{f(x_0) - 2 f(x_0+\Delta x) + f(x_0+2\Delta x)}{(\Delta x)^2} + O(\Delta x)

and one gets

    \frac{\partial f}{\partial x}\Big|_{x_0} = \frac{f(x_0+\Delta x) - f(x_0)}{\Delta x} - \frac{\Delta x}{2} \left[ \frac{f(x_0) - 2 f(x_0+\Delta x) + f(x_0+2\Delta x)}{(\Delta x)^2} + O(\Delta x) \right]

or

    \frac{\partial f}{\partial x}\Big|_{x_0} = \frac{-3 f(x_0) + 4 f(x_0+\Delta x) - f(x_0+2\Delta x)}{2\,\Delta x} + O((\Delta x)^2)   (2.22)

or

    \frac{\partial f}{\partial x}\Big|_i = \frac{-3 f_i + 4 f_{i+1} - f_{i+2}}{2\,\Delta x} + O((\Delta x)^2)

Thus a second-order accurate finite difference approximation for the first order
derivative has been obtained, which is identical to equation (2.12). A similar expression
for the backward difference approximation can be obtained by substituting a first-order
accurate backward approximation for the second-order derivative. In general, higher
order approximations are obtained by substituting more terms of the Taylor series
with their respective difference representations of the derivatives. In practice,
approximations of order three or more are rarely used because they require more
computation time, since computation time typically increases as (number of nodes)^3.
However, with sufficient convergence criteria, a good approximation with a more
reasonable computation time can be obtained with second-order differencing.
Conditions for using Forward, Backward, and Central-Difference
Approximations
– Forward-difference approximation is used when data to the left of a point, at
which a derivative is desired, is not known.
– Backward-difference approximation is used when data to the right of the
desired point is not known.
– Central-difference approximations are used when data on both sides of the
desired point are available; they are more accurate than either forward- or
backward-difference approximations.

2.2 Polynomial Formulation

The second procedure for approximating a derivative is to represent the function as
a polynomial. The coefficients of the polynomial are evaluated by substituting the
values of the dependent variable at a series of equally spaced points of the
independent variable. The approximate values of the derivatives are then computed
from the polynomial. For example, consider a second-order polynomial,

    f(x) ≈ Ax^2 + Bx + C   (2.23)

Take the origin at x_i = 0. Thus x_{i+1} = \Delta x and x_{i+2} = 2\Delta x, and the values of
the function f at these locations are f(x_i) = f_i, f(x_{i+1}) = f_{i+1}, and f(x_{i+2}) = f_{i+2}.
Thus,

    f_i = C

    f_{i+1} = A(\Delta x)^2 + B\,\Delta x + C

and

    f_{i+2} = 4A(\Delta x)^2 + 2B\,\Delta x + C

After solving the above three equations we obtain

    C = f_i

    B = \frac{-3 f_i + 4 f_{i+1} - f_{i+2}}{2\,\Delta x}

and

    A = \frac{f_i - 2 f_{i+1} + f_{i+2}}{2 (\Delta x)^2}

from which we determine the first derivative of f:

    \frac{df}{dx} = 2Ax + B,   so at x_i = 0:   \frac{\partial f}{\partial x}\Big|_i = B

Hence,

    \frac{\partial f}{\partial x}\Big|_i = \frac{-3 f_i + 4 f_{i+1} - f_{i+2}}{2\,\Delta x}

which is the same as the second-order accurate forward difference approximation
obtained from the Taylor series. Similarly, the second-order derivative of f may be
determined as

    \frac{d^2 f}{dx^2} = 2A

from which

    \frac{\partial^2 f}{\partial x^2}\Big|_i = \frac{f_i - 2 f_{i+1} + f_{i+2}}{(\Delta x)^2}

which is identical to Eq. (2.17). This approach is particularly useful in developing
finite difference expressions for nonuniform values of \Delta x, as well as for evaluating
the gradient needed to determine the mass or heat flux at a wall.
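A minimal sketch of the polynomial procedure (the sample function sin x and the spacing are illustrative choices): fit A, B, C through three equally spaced values, then read the derivatives off the polynomial.

```python
import math

# Quadratic f(x) ~ A x^2 + B x + C through (0, fi), (dx, fi1), (2 dx, fi2),
# as in equation (2.23) with the origin at x_i = 0.
def quadratic_coeffs(fi, fi1, fi2, dx):
    C = fi
    B = (-3.0 * fi + 4.0 * fi1 - fi2) / (2.0 * dx)
    A = (fi - 2.0 * fi1 + fi2) / (2.0 * dx**2)
    return A, B, C

dx = 0.1
x = [0.0, dx, 2.0 * dx]
f = [math.sin(v) for v in x]          # illustrative sample values

A, B, C = quadratic_coeffs(f[0], f[1], f[2], dx)

# The fitted polynomial must reproduce the three sample values exactly.
check = [A * v**2 + B * v + C for v in x]

first_deriv = B                        # df/dx at x_i = 0 (df/dx = 2Ax + B)
second_deriv = 2.0 * A                 # d2f/dx2, constant for a quadratic
```

B reproduces the three-point forward formula (2.12) and 2A reproduces (2.17); note that the second derivative obtained this way is only first-order accurate, so its error at this coarse spacing is noticeably larger than that of the first derivative.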

Finite Difference Approximation of Mixed Partial Derivatives

Often, it may be necessary to represent mixed partial derivatives in finite
difference form. The finite difference approximation can be developed by successive
application of finite differencing of the first derivative in the x and y variables. We
consider the finite-difference approximation of the mixed derivative \partial^2 f/\partial x \partial y and
use the central difference equation (2.8) to discretize the first derivative for both the
x and y variables. Thus

    \frac{\partial^2 f}{\partial x\,\partial y}\Big|_{i,j} = \frac{1}{2\,\Delta x}\left( \frac{\partial f}{\partial y}\Big|_{i+1,j} - \frac{\partial f}{\partial y}\Big|_{i-1,j} \right) + O((\Delta x)^2)   (2.24)

where subscripts i and j denote the grid points associated with the discretization in
the x and y variables, respectively. Applying the central difference approximation
once more to discretize the partial derivatives with respect to the y variable in
equation (2.24), we obtain

    \frac{\partial^2 f}{\partial x\,\partial y}\Big|_{i,j} = \frac{f_{i+1,j+1} - f_{i+1,j-1} - f_{i-1,j+1} + f_{i-1,j-1}}{4\,\Delta x\,\Delta y} + O((\Delta x)^2, (\Delta y)^2)   (2.25)

which is the finite difference approximation of the mixed derivative using central
differences for both the x and y variables. The order of differentiation is immaterial
if the derivatives are continuous; that is, \partial^2 f/\partial x \partial y and \partial^2 f/\partial y \partial x are the same.
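A quick numerical check of the four-point mixed-derivative stencil (the test function sin x cos y and the evaluation point are illustrative choices):

```python
import math

def mixed_xy(f, x, y, dx, dy):
    # Central approximation of d2f/(dx dy):
    # (f[i+1,j+1] - f[i+1,j-1] - f[i-1,j+1] + f[i-1,j-1]) / (4 dx dy)
    return (f(x + dx, y + dy) - f(x + dx, y - dy)
            - f(x - dx, y + dy) + f(x - dx, y - dy)) / (4.0 * dx * dy)

f = lambda x, y: math.sin(x) * math.cos(y)     # illustrative test function
x0, y0 = 0.5, 0.5
exact = -math.cos(x0) * math.sin(y0)           # analytic d2f/(dx dy)

approx = mixed_xy(f, x0, y0, 1e-3, 1e-3)
err = abs(approx - exact)
```

With both step sizes set to 10^{-3}, the error is of order 10^{-7}, consistent with the quoted O((\Delta x)^2, (\Delta y)^2) accuracy.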

Non-uniform Mesh Size


In most engineering problems, one will often have some idea of the general shape of
the solution, especially of the locations where the profile will exhibit a sudden change
in the first derivative. Therefore, to obtain higher resolution in the region where the
gradients are expected to vary rapidly, it is desirable to use a finer mesh over that
particular region rather than refining the mesh over the entire domain. To illustrate
this matter we consider the simplest situation involving a change in mesh spacing
only in one direction at some point in the region.

If the spacing (or mesh size) of the points i, i+1 and i+2 is not uniform (identical), a
finite difference approximation of the derivative is found by the same procedure.
Assume x_i = 0, x_{i+1} = \Delta x, and x_{i+2} = (1+\alpha)\Delta x. Then,

    f_i = C

    f_{i+1} = A(\Delta x)^2 + B\,\Delta x + C

    f_{i+2} = A(1+\alpha)^2(\Delta x)^2 + B(1+\alpha)\Delta x + C

Consequently, C = f_i,

    B = \frac{-\alpha(\alpha+2)\,f_i + (1+\alpha)^2 f_{i+1} - f_{i+2}}{\alpha(1+\alpha)\,\Delta x}

and

    A = \frac{\alpha f_i - (1+\alpha) f_{i+1} + f_{i+2}}{\alpha(1+\alpha)(\Delta x)^2}

Therefore,

    \frac{\partial f}{\partial x}\Big|_i = B = \frac{-\alpha(\alpha+2)\,f_i + (1+\alpha)^2 f_{i+1} - f_{i+2}}{\alpha(1+\alpha)\,\Delta x} + O((\Delta x)^2)   (2.26)

which is a second-order accurate approximation. Similarly, the second derivative
of f is obtained as

    \frac{\partial^2 f}{\partial x^2}\Big|_i = 2A = \frac{2\left[\alpha f_i - (1+\alpha) f_{i+1} + f_{i+2}\right]}{\alpha(1+\alpha)(\Delta x)^2} + O(\Delta x)

which is a first-order accurate approximation. Similar relations for backward and


central difference approximations may be obtained by this procedure.
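The non-uniform first-derivative formula can be sketched as follows. The stretching factor α and the test function e^x are illustrative; setting α = 1 must recover the uniform three-point formula (2.12).

```python
import math

def dfdx_nonuniform(fi, fi1, fi2, dx, alpha):
    # First derivative at x_i = 0 with x_{i+1} = dx, x_{i+2} = (1 + alpha) dx,
    # obtained from the quadratic fit (equation (2.26)); second-order accurate.
    return (-alpha * (alpha + 2.0) * fi
            + (1.0 + alpha)**2 * fi1
            - fi2) / (alpha * (1.0 + alpha) * dx)

f = math.exp
dx, alpha = 0.01, 0.5               # illustrative spacing and stretching
approx = dfdx_nonuniform(f(0.0), f(dx), f((1.0 + alpha) * dx), dx, alpha)
err = abs(approx - 1.0)             # exact derivative of e^x at 0 is 1

# alpha = 1 reduces to the uniform formula (-3 fi + 4 fi1 - fi2) / (2 dx)
uniform = dfdx_nonuniform(f(0.0), f(dx), f(2 * dx), dx, 1.0)
uniform_ref = (-3.0 * f(0.0) + 4.0 * f(dx) - f(2 * dx)) / (2.0 * dx)
```

The error with dx = 0.01 is of order 10^{-5}, consistent with second-order accuracy, and the α = 1 case agrees with the uniform three-point formula to round-off.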

Notation for functions of several variables


We will use upper case F to denote the analytic (exact) solution of the PDEs and
lower case f to denote the numerical approximate solution. Subscripts will denote
discrete points in space and superscripts discrete levels in time. Assume f is a
function of the independent variables x and t. Subdivide the x-t plane into sets of
equal rectangles of sides \Delta x, \Delta t, by equally spaced grid lines defined by
x_i = i\Delta x, i = 0, 1, 2, ..., and equally spaced grid lines defined by t_n = n\Delta t,
n = 0, 1, 2, ....

Denote the value of f at the representative mesh point P(i\Delta x, n\Delta t) by

    f(i\Delta x, n\Delta t) = f_i^n

Finite-Difference Operators
Finite difference operators are generally used in order to express the finite difference
approximation in compact forms and different notations have been used by different
authors; however, completely universal notations are not yet available. Here we
present some of the commonly used difference operators.

Assume i refers to the grid points selected along the x-axis. With the forward, backward,
and central difference operators defined as

    \Delta f_i = f_{i+1} - f_i,   \nabla f_i = f_i - f_{i-1},   \bar{\delta} f_i = \tfrac{1}{2}(f_{i+1} - f_{i-1}),

the approximations of the first derivatives about the grid node i can be expressed with
operator notations as

    Forward:    \frac{\partial f}{\partial x}\Big|_i = \frac{\Delta f_i}{\Delta x} + O(\Delta x)   (2.27)

    Backward:   \frac{\partial f}{\partial x}\Big|_i = \frac{\nabla f_i}{\Delta x} + O(\Delta x)   (2.28)

    Central:    \frac{\partial f}{\partial x}\Big|_i = \frac{\bar{\delta} f_i}{\Delta x} + O((\Delta x)^2)   (2.29)

The finite difference approximation for the second derivative is expressed with the
central operator \delta^2 f_i = f_{i+1} - 2 f_i + f_{i-1} as

    \frac{\partial^2 f}{\partial x^2}\Big|_i = \frac{\delta^2 f_i}{(\Delta x)^2} + O((\Delta x)^2)   (2.30)

For example, consider the one-dimensional convection-diffusion equation, written here
in nondimensional form as

    Pe \frac{d\phi}{dx} = \frac{d^2\phi}{dx^2}

where Pe, the Peclet number, is assumed constant. It can be expressed with the
above operator notation, using the central difference approximation for both the first
and second derivatives, as

    Pe \frac{\bar{\delta} \phi_i}{\Delta x} = \frac{\delta^2 \phi_i}{(\Delta x)^2}
Numerical Errors
Error is defined as the difference between an observed or calculated value and a
true value; such as variation in measurements, calculations, and observations of a
quantity due to mistakes or uncontrollable factors. Generally, error may be
associated with consistent or repeatable sources, called systematic or bias errors,
or it may be associated with random fluctuations, which tend to have a Gaussian
distribution if truly random. In the context of numerical simulations on computers,
systematic or bias errors are the only type of error that will occur. The only source of
random error that may be introduced in a simulation is through the user, and for a
single user even this error would have a bias. Systematic errors can be studied through
inter-comparisons based on parameter variations, such as variation in grid
resolution, variation in numerical schemes, and variation in models and model
inputs. Error in numerical simulations does not necessarily imply a mistake or
blunder. If you know about a mistake or blunder, then you can at least fix the
problem and eliminate it. However, because of the fact that we are representing a
continuous system by a finite length, discrete approximation, error becomes intrinsic
to the process and we can only hope at this time to minimize it. Fortunately, we do
understand that this error is created by those terms truncated in the Taylor series
representation of derivatives, or introduced by iterative solution process, if
appropriate. These discretization errors have a definite magnitude and assignable
cause, and can all be cast, eventually, in terms of two parameters - the grid size and
the time step size.

In the solution of differential equations with finite difference approximations, many


schemes are available for the discretization of derivatives and the solution of the
resulting system of algebraic equations. The three most important errors that commonly
occur in numerical solutions are: (a) round-off error, (b) truncation error, and (c)
discretization error.

Round-off error: The round-off error is introduced because of the inability of the
computer to handle a large number of significant digits. Typically, the number of
significant digits retained ranges from about 7 (single precision) to 16 (double
precision), although it may vary from one computer system to another. The round-off
error arises because a finite number of significant digits or decimal places are
retained and all real numbers are rounded off by the computer. The last retained
digit is rounded up if the first discarded digit is equal to or greater than 5; otherwise,
it is unchanged. For example, if five significant digits are to be kept, 6.28537 is
rounded off to 6.2854, and 6.28534 to 6.2853.
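The rounding rule and the accumulation of round-off can both be illustrated in a few lines of Python (the helper round_sig is our own, not a standard routine):

```python
import math

# Rounding to five significant digits, as in the text's example.
def round_sig(x, sig=5):
    return round(x, sig - 1 - math.floor(math.log10(abs(x))))

r1 = round_sig(6.28537)   # first discarded digit 7 >= 5 -> rounded up
r2 = round_sig(6.28534)   # first discarded digit 4 <  5 -> unchanged

# Accumulated round-off: summing 0.1 ten thousand times does not give
# exactly 1000.0, because 0.1 is not representable in binary floating point.
s = 0.0
for _ in range(10000):
    s += 0.1
accum_err = abs(s - 1000.0)
```

The accumulated error is tiny for a single sum, but it grows with the number of arithmetic operations, which is the effect discussed below for the total error of a solution.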

Truncation error: In the finite difference expression of derivatives obtained from Taylor
series expansion, the higher order terms are neglected by truncating the series, and the
error caused as a result of such truncation is called the truncation error. For example,
in the forward differencing of the first derivative to order O(\Delta x), as given by equation
(2.3), the term

    -\frac{\Delta x}{2!} \frac{\partial^2 f}{\partial x^2}\Big|_{x_0} - \frac{(\Delta x)^2}{3!} \frac{\partial^3 f}{\partial x^3}\Big|_{x_0} - \cdots

represents the truncation error, and its lowest order term, i.e. O(\Delta x), gives the order
of the method. The truncation error identifies the difference between the exact solution
of a partial differential equation and its finite difference solution without the round-off
error, that is

(Exact solution of PDE) - (Solution of finite difference equation without the round-off error) =
(Truncation error)

Discretization error: The discretization error is the error in the solution to the PDE
caused by replacing the continuous problem by a discrete one, and is defined as the
difference between the exact solution of the PDE (round-off free) and the exact
solution of the finite difference equation (round-off free). In general, the difference
between the exact solution of the PDE and the computer solution to the finite
difference equations would be equal to the sum of the discretization error and the
round-off error associated with the finite difference calculation. One can also observe
that the discretization error is the error in the solution that is caused by the truncation
error in the difference representation of the PDE plus any errors introduced by the
treatment of the boundary conditions.

Accuracy of a Numerical Solution

The accuracy of a numerical solution is determined by its total error, which is the
sum of the round-off error and the truncation error. However, it is obvious that the
round-off error increases as the total number of arithmetic operations increases.
Again, the total number of arithmetic operations increases if the step size decreases
(that is, when the number of grid points increases). Therefore, the round-off error is
inversely proportional to the step size. On the other hand, the truncation error
decreases as the step size decreases (or as the number of grid points increases).

Because of the aforementioned opposing effects, an optimum step size is required,


which will produce minimum total error in the overall solution.

Method of selecting optimum step size: Grid Independence Test


A numerical analyst has to be extremely careful as regards the accuracy of a solution.
To get the most accurate numerical solution, one has to perform a grid
independence test. The test is carried out by computing with various grid sizes and
watching how the solution changes with respect to the changes in the grid size.
Finally, a stage will come when changing the grid spacing does not affect the
solution; in other words, the solution becomes independent of the grid spacing. The
largest value of grid spacing for which the solution is essentially independent of the
step size is chosen, so that both the computational time and effort and the round-off
error are minimized.
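The grid-independence loop can be sketched generically. Here the "solution" is a simple numerical integral standing in for a full flow computation (the integrand, interval and tolerance are illustrative choices); the grid is refined until the computed quantity stops changing to within the tolerance:

```python
import math

def trapezoid(f, a, b, n):
    # Composite trapezoidal rule with n intervals.
    h = (b - a) / n
    s = 0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n))
    return h * s

# Grid-independence loop: halve the spacing until the computed quantity
# changes by less than the chosen tolerance.
tol, n = 1e-6, 8
prev = trapezoid(math.sin, 0.0, math.pi, n)
while True:
    n *= 2
    curr = trapezoid(math.sin, 0.0, math.pi, n)
    if abs(curr - prev) < tol:
        break
    prev = curr
```

The loop stops at the coarsest grid whose further refinement no longer changes the answer appreciably; in a real CFD study the "quantity" would be a force, flow rate or heat-transfer coefficient rather than this model integral.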

Requirements of Numerical Analysis


There are five fundamental requirements for any numerical method:
consistency, stability, convergence, conservation, and boundedness.

Consistency: Consistency deals with the extent to which the finite difference
equations approximate the governing differential equation. The difference between
the governing differential equation (or PDE) and the finite-difference approximation
has already been defined as the truncation error of the difference expression. A
finite-difference expression of a PDE or GDE is said to be consistent if one can show
that the difference between the PDE and its difference expression vanishes as the
step sizes (i.e., \Delta x, \Delta t, etc.) tend to zero, i.e. as the mesh is refined. This can be
written as

    \lim_{\Delta x, \Delta t \to 0} (\mathrm{PDE} - \mathrm{FDE}) = \lim_{\Delta x, \Delta t \to 0} (\mathrm{T.E.}) = 0

This should always be the case if the order of the truncation error (T.E.) vanishes under
mesh refinement.

In brief, it can be stated that the discretization of a PDE should become exact as the
mesh size tends to zero.

Stability: Numerical errors that are generated during the solution of discretized
equations should not be magnified.

Condition 1: Numerical errors (round-off due to the finite precision of the computer)
should not be allowed to grow unboundedly.

Condition 2: The numerical solution itself should remain uniformly bounded.

Stability is difficult to check. As a rule, it can only be verified for linear problems with
constant coefficients and periodic boundary conditions.

Convergence: The numerical solution is said to be convergent if the numerical


solution approaches the exact solution of the PDE as the mesh size (time and space
steps) tends to zero. It is to be noted that the conditions of the stability, consistency,
and convergence are related to each other. According to Lax equivalence theorem,
for a well-posed problem discretized by a consistent linear method, stability is the
necessary and sufficient condition for convergence.

For practical purposes, convergence can be investigated numerically by comparing


the results computed on a series of successively refined grids. The rate of
convergence is governed by the leading truncation error of the discretization scheme
and can also be estimated numerically.

Conservation: The underlying conservation laws should be respected at the
discrete level (no artificial sources or sinks). Numerical methods should comply
with the physical principles underlying the equations of fluid dynamics. If mass,
momentum and energy are conserved at the discrete level, they can at worst be
distributed improperly. Nonconservative discretizations may produce
reasonable-looking results which are totally wrong (e.g. shocks moving with a
wrong speed). Even nonconservative schemes can be consistent and stable, and
correct solutions are recovered in the limit of very fine grids; however, it is usually
unclear whether or not the mesh is sufficiently fine for the effect of artificial
sources and/or sinks to be negligible. A finite difference method is conservative if
it can be cast in the form

    \frac{f_i^{n+1} - f_i^n}{\Delta t} + \frac{F_{i+1/2} - F_{i-1/2}}{\Delta x} = 0

for a suitable numerical flux function F. In this case, the internal fluxes cancel out
when summed over the grid, which proves the global conservation property.

Boundedness: Quantities like densities, temperatures, concentration, etc should


remain non-negative and free of spurious wiggles. Conventional discretization methods
tend to produce nonphysical solutions to convection-dominated problems governed
by equations of hyperbolic type. Spurious undershoots and overshoots occur in the
vicinity of steep gradients.

Checking Results
Before applying a numerical scheme to real life situations modelled by PDEs there
are two important steps that should always be considered.

Verification
Verification is defined to be the process of determining whether the mathematical
model is solved correctly. Roache simplifies the definition of verification to "solving
the equations right."

The computer program implementing the scheme must be verified. This is a check to
see if the program is doing what it is supposed to do. Comparing results from pen
and paper calculations at a small number of points to equivalent computer output is
a way to (partially) verify a program. Give or take a small amount of rounding error,
the numbers should be the same. Another way to verify the program is to find an
exact solution to the PDE for a simpler problem (if one exists) and compare
numerical and exact results.
Computer program verification involves testing that all branches, program elements
and statements are executed and produce the expected outcomes. For large
programs, there exist software verification programs to facilitate the verification
process. For a commercial solver it may not be possible to completely verify the
program if the source code is unavailable.

Validation
Validation is defined to be the process of assessing the accuracy of the simulation
model for its domain of application. Roache simplifies the definition of validation to
"solving the right equations." For example, one could have two fully verified codes:
we are solving the equations correctly with the specified methods. However, when we
simulate a specific problem with both codes, one code successfully meets our criteria
for validation, while the other code does not. The model equations are the same in the
two codes, but one code solves the equations with a bounded scheme, and the other
solves them with an unbounded scheme. Therefore, for certain values of the input
parameters, both codes will generate the same results, but for another set of input
values they generate widely different results. By allowing the distinction between the
right model equations and the right methods, one can then determine that, in this
example, it is the method that generates the error and not the model equations.

Validation is really a check on whether the PDE is a good model for the real problem
being studied. Validation means comparing numerical results with results from
similar physical problems. Physical results may come from measurements from real
life or from small-scale laboratory experiments. Either way, due to measurement
errors, scaling problems and the inevitable failure of the PDEs to capture all the
underlying physics, agreement between numerical and physical results will not be
perfect and the user will have to decide what is close enough.

Numerical Uncertainty

Uncertainty is defined as the estimated amount or percentage by which an observed


or calculated value may differ from the true value. Uncertainty may have three
origins in a simulation: (1) input uncertainty, (2) model uncertainty, and (3) numerical
uncertainty. Input uncertainty results from the fact that some input parameters are
not well defined, such as the magnitude of equation-of-state parameters or the
thermal conductivity for different materials. This uncertainty exists
independently of the model or computer code. Model uncertainty results from
alternative model formulation, structure, or implementation.

Numerical uncertainty is the only uncertainty that cannot be eliminated, but only
minimized, or bounded in a simulation. Input uncertainty has the potential to be
eliminated, or made a second order effect through improved definition of input
parameters (i.e., a better-measured value) or through the use of probabilistic
methods which define the uncertainty bounds for the parameter on the simulation
results. Model uncertainty can be minimized, or eliminated by the use of different
model or even code. But numerical uncertainty is a first-order effect that for the
foreseeable future (until we can routinely perform simulations with spatial and
temporal resolutions defined by the smallest scales) we are stuck with. The
challenge then is to develop effective error estimators that quantify this numerical
uncertainty. The vision is that single-grid error estimators would be embedded
directly in the solution process and, with no additional effort expended by the user,
provide an error bound on the solution. The simulation result plus the error bound
would then allow the user or code developer to determine when the mathematical
model is incorrect. Further, such an approach would allow code users the flexibility to
perform lower fidelity simulations and then state the accuracy of the simulation. So, if
an 80% accurate answer is acceptable, then the user could expend that level of effort
in the analysis.

To summarize, code mistakes can be eliminated through the process of verification.


Model mistakes can be eliminated through the process of validation. What remains
are the uncontrollable factors, those introduced by using finite length, discrete
methods to represent a continuum system. The aim then is to minimize and bound
these uncontrollable factors. If successful, with a verified code and a validated
model, one can then state that a given predicted quantity is true, plus and minus an
uncertainty magnitude.

Examples: The Diffusion Equation

Solution Methods:
Its unconditional stability and second-order time accuracy make Crank-Nicolson the method of
choice for the diffusion equation. The general rationale ("centred-time/centred-space") is readily
extended to other parabolic equations.

Poisson and Laplace Equations

The canonical elliptic equation is the Poisson equation:

    \frac{\partial^2 \varphi}{\partial x^2} + \frac{\partial^2 \varphi}{\partial y^2} = f

f may be any function of x, y, φ and its first derivatives. If f ≡ 0 then the equation is known as
the Laplace equation.
Boundary Conditions

For an elliptic problem to be well-posed, some condition on φ or its normal derivative must
be imposed on all boundaries.
Properties of the Laplace Equation

Functions satisfying this equation are called harmonic. If Dirichlet or Neumann


boundary conditions apply on all boundaries of the domain then it can be shown that:
(i) a unique solution exists;

(ii) the maximum and minimum values occur on the boundaries.

The Laplace equation can be discretised on a regular, rectangular mesh with equal x
and y step size h as

    \varphi_{i+1,j} + \varphi_{i-1,j} + \varphi_{i,j+1} + \varphi_{i,j-1} - 4\varphi_{i,j} = 0

The Matrix Form of the Discretised Equations


The Wave Equation

Solution Method
Treatment of the First Time Step

Stability Analysis
The following is a Von Neumann stability analysis for the explicit (forward-time,
central-space) differencing method applied to the 1-D transient diffusion equation.
Represent the error at node i and time level n by a single Fourier mode,

    \epsilon_i^n = \xi^n e^{I \theta i},   \theta = k\,\Delta x,   I = \sqrt{-1}   (2.33)

so that

    \epsilon_{i+1}^n = \xi^n e^{I\theta(i+1)}   (2.34)

    \epsilon_{i-1}^n = \xi^n e^{I\theta(i-1)}   (2.35)

    \epsilon_i^{n+1} = \xi^{n+1} e^{I\theta i}   (2.36)

For the explicit formulation of the 1-D transient problem, the error satisfies the same
difference equation as the solution:

    \epsilon_i^{n+1} = \epsilon_i^n + d\,(\epsilon_{i+1}^n - 2\epsilon_i^n + \epsilon_{i-1}^n),   d = \frac{\alpha\,\Delta t}{(\Delta x)^2}   (2.37)

Substituting equations (2.34), (2.35) and (2.36) into eqn. (2.37) and dividing through
by \xi^n e^{I\theta i} gives the amplification factor

    \xi = 1 + d\,(e^{I\theta} - 2 + e^{-I\theta}) = 1 - 2d\,(1 - \cos\theta)   (2.38)

For stability, |\xi| \le 1, so that

    1 - 2d\,(1 - \cos\theta) \le 1   (2.39)

and

    1 - 2d\,(1 - \cos\theta) \ge -1   (2.40)

Inequality (2.39) is satisfied for all values of \theta. With the maximum value of
(1 - \cos\theta) = 2, the left hand side of (2.40) becomes (1 - 4d), which must be greater
than or equal to -1; thus 1 - 4d \ge -1, or 4d \le 2. So the stability condition is

    d = \frac{\alpha\,\Delta t}{(\Delta x)^2} \le \frac{1}{2}   (2.41)
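The stability bound d ≤ 1/2 is easy to confirm by experiment. The sketch below (grid size, step counts and the sine initial condition are illustrative) runs the explicit scheme once with d = 0.4 and once with d = 0.6; in the second case, round-off errors in the θ ≈ π modes are amplified by ξ = 1 − 4d < −1 and eventually swamp the solution:

```python
import math

def ftcs_max_after(d, nsteps, N=21):
    # Explicit forward-time, central-space scheme for 1-D diffusion:
    #   f_i^{n+1} = f_i^n + d (f_{i+1}^n - 2 f_i^n + f_{i-1}^n),
    # with f = 0 at both ends and a sine initial condition.
    f = [math.sin(math.pi * i / (N - 1)) for i in range(N)]
    for _ in range(nsteps):
        g = f[:]
        for i in range(1, N - 1):
            g[i] = f[i] + d * (f[i + 1] - 2.0 * f[i] + f[i - 1])
        f = g
    return max(abs(v) for v in f)

stable = ftcs_max_after(0.4, 200)     # d <= 1/2: the solution decays
unstable = ftcs_max_after(0.6, 200)   # d >  1/2: errors grow without bound

# Amplification factor xi = 1 - 2 d (1 - cos(theta)); worst case theta = pi
xi_worst = lambda d: 1.0 - 4.0 * d
```

With d = 0.4 the solution has decayed well below its initial amplitude after 200 steps, whereas with d = 0.6 the amplified round-off has grown by many orders of magnitude.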
Methods for Solving Equations

Many of the solution methods for elliptic equations are particular cases of
general solution methods for systems of linear algebraic equations:

    A x = b
Note that, since the discretisation methods only connect a node with its neighbouring
nodes, the matrices arising here are very sparse; i.e. most of their elements are zero.
In practice, the most efficient computational methods only store and operate with the
non-zero diagonals.

Direct Methods

Direct methods are only useful for very small matrices and are highly inefficient, both in
terms of memory storage and number of computer operations, for the large, sparse
matrices associated with finite-difference methods. They are included here for reference
but are far inferior to iterative schemes for real engineering problems.
Gaussian Elimination

Subtract multiples of the first equation to eliminate x1 from later equations. i.e. in the
matrix form, subtract multiples of the first row to get zeros in elements below the
diagonal element:
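A sketch of the procedure in Python (partial pivoting is added here for robustness, which the notes do not discuss; the 3×3 system is an illustrative example with known solution x = (1, 2, 3)):

```python
def gaussian_elimination(A, b):
    # Solve A x = b by forward elimination with partial pivoting,
    # followed by back substitution.  Works on copies of A and b.
    n = len(b)
    A = [row[:] for row in A]
    b = b[:]
    for k in range(n - 1):
        # Partial pivoting: swap in the row with the largest pivot element.
        p = max(range(k, n), key=lambda i: abs(A[i][k]))
        A[k], A[p] = A[p], A[k]
        b[k], b[p] = b[p], b[k]
        # Subtract multiples of row k to zero the entries below the pivot.
        for i in range(k + 1, n):
            m = A[i][k] / A[k][k]
            for j in range(k, n):
                A[i][j] -= m * A[k][j]
            b[i] -= m * b[k]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(A[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (b[i] - s) / A[i][i]
    return x

# Example system with known solution x = (1, 2, 3)
A = [[2.0, 1.0, 0.0], [1.0, 3.0, 1.0], [0.0, 1.0, 2.0]]
b = [4.0, 10.0, 8.0]
x = gaussian_elimination(A, b)
```

For a dense n×n matrix the cost grows as n³, which is why this direct approach loses out to iterative schemes on large sparse systems.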

Explicit Iterative Methods


Jacobi Method

Gauss-Seidel

Under/Over-Relaxation
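Minimal sketches of the two explicit iterations (the 3×3 diagonally dominant system is illustrative; its exact solution is x = (1, 2, 3)). The only difference is that Gauss-Seidel uses each new value as soon as it is computed, which typically speeds up convergence; under/over-relaxation would further blend old and new values, x_new = x_old + ω (x_GS − x_old):

```python
def jacobi_step(A, b, x):
    # One Jacobi sweep: every new value uses only old-iteration values.
    n = len(b)
    return [(b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
            for i in range(n)]

def gauss_seidel_step(A, b, x):
    # One Gauss-Seidel sweep: new values are used as soon as available.
    n = len(b)
    x = x[:]
    for i in range(n):
        s = sum(A[i][j] * x[j] for j in range(n) if j != i)
        x[i] = (b[i] - s) / A[i][i]
    return x

A = [[4.0, -1.0, 0.0], [-1.0, 4.0, -1.0], [0.0, -1.0, 4.0]]  # diagonally dominant
b = [2.0, 4.0, 10.0]

xj = [0.0, 0.0, 0.0]
xg = [0.0, 0.0, 0.0]
for _ in range(50):
    xj = jacobi_step(A, b, xj)
    xg = gauss_seidel_step(A, b, xg)

residual = lambda x: max(abs(sum(A[i][j] * x[j] for j in range(3)) - b[i])
                         for i in range(3))
```

Both iterations converge here because the matrix is diagonally dominant, a sufficient condition for convergence of Jacobi and Gauss-Seidel.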
Implicit Iterative Methods

The commonest of these is the alternating-direction-implicit (ADI) method (also


referred to as the line-iteration procedure). The idea is to solve an implicit (typically
tri-diagonal) set of equations along each row in turn, with everything off that row kept
constant. The same is then done for each column (hence the name “alternating-
direction”). The method works particularly well for computational molecules of the
form
Tridiagonal Matrix Method
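For the tridiagonal systems that arise from three-point stencils and from ADI sweeps, Gaussian elimination reduces to the Thomas algorithm (TDMA), which runs in O(n) operations. A sketch (the 3×3 example system, with solution (1, 2, 3), is illustrative):

```python
def thomas(a, b, c, d):
    # Solve a tridiagonal system: a[i] x[i-1] + b[i] x[i] + c[i] x[i+1] = d[i]
    # (a[0] and c[-1] are unused).  Forward elimination + back substitution.
    n = len(d)
    b = b[:]
    d = d[:]
    for i in range(1, n):
        m = a[i] / b[i - 1]
        b[i] -= m * c[i - 1]
        d[i] -= m * d[i - 1]
    x = [0.0] * n
    x[-1] = d[-1] / b[-1]
    for i in range(n - 2, -1, -1):
        x[i] = (d[i] - c[i] * x[i + 1]) / b[i]
    return x

# Example tridiagonal system with solution (1, 2, 3)
a = [0.0, -1.0, -1.0]
b = [4.0, 4.0, 4.0]
c = [-1.0, -1.0, 0.0]
d = [2.0, 4.0, 10.0]
x = thomas(a, b, c, d)
```

Because only the three diagonals are stored and visited, the cost is linear in the number of unknowns, which is what makes line-by-line (ADI) iteration affordable.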
