
Lecture 3: What we learnt

Ø The Finite Difference Method (FDM)


§ Application of various types of Boundary Conditions
Ø Direct Solution of a set of Linear Algebraic Equations
§ Gaussian Elimination
§ Tri-Diagonal Matrix System

Lecture 4: What we will learn

Ø Treatment of Non-Linear Source Terms


Ø Residuals and Convergence
Ø Newton’s Method for System of Non-Linear Equations

Treatment of Non-Linear Sources
Recall our governing equation

    d²φ/dx² = S_φ

In general, the source term could be a function of both x and φ. In some
cases, the source term could be a non-linear function of φ.

Let us consider the case where the source is a non-linear function of φ
only, e.g.,

    d²φ/dx² = S_φ = exp(φ)

After FD approximation, we get

    (φ_{i+1} - 2φ_i + φ_{i-1}) / (Δx)² = exp(φ_i)

This equation can only be solved iteratively, using the following procedure:
Ø Guess values of φ at all nodes (= φ_i*)
Ø Evaluate the non-linear source (RHS) using the guess: exp(φ_i*)
Ø Solve the tri-diagonal system of equations and obtain new values φ_i
Ø Repeat the procedure until convergence
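The procedure above can be sketched as follows. This is a minimal example, assuming a model problem on [0, 1] with zero boundary values; the grid size N = 21, the fixed iteration count, and the Thomas-algorithm tri-diagonal solver are illustrative choices, not part of the slide.

```python
import math

def solve_tridiagonal(a, b, c, d):
    """Thomas algorithm: a = sub-, b = main, c = super-diagonal, d = RHS."""
    n = len(d)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

N = 21
dx = 1.0 / (N - 1)
phi = [0.0] * N                      # guess phi* at all nodes

for _ in range(50):                  # repeat until convergence
    # Interior rows, diagonally dominant form:
    # (2/dx^2) phi_i - (1/dx^2) phi_{i+1} - (1/dx^2) phi_{i-1} = -exp(phi_i*)
    a = [0.0] + [-1.0 / dx**2] * (N - 2) + [0.0]
    b = [1.0] + [2.0 / dx**2] * (N - 2) + [1.0]   # boundary rows: phi = 0
    c = [0.0] + [-1.0 / dx**2] * (N - 2) + [0.0]
    d = [0.0] + [-math.exp(p) for p in phi[1:-1]] + [0.0]  # lagged source
    phi = solve_tridiagonal(a, b, c, d)           # new phi at all nodes
```

Note that the source is evaluated at the previous iterate, so each pass solves a purely linear tri-diagonal system.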
Treatment of Non-linear Sources
Unfortunately, with this procedure, convergence is often very slow, or not
attained at all.
A better approach is to linearize the source using a Taylor series
expansion. Note that this is, in fact, the principle of Newton's method
for solving non-linear equations, to be discussed later.
Using a Taylor series expansion, we can write

    S_φ(φ) ≈ S_φ(φ*) + (dS_φ/dφ)* (φ - φ*)

Therefore

    (φ_{i+1} - 2φ_i + φ_{i-1}) / (Δx)² = exp(φ_i*) + exp(φ_i*)(φ_i - φ_i*)

Rearranging,

    [exp(φ_i*) + 2/(Δx)²] φ_i - (1/(Δx)²) φ_{i+1} - (1/(Δx)²) φ_{i-1} = exp(φ_i*)[φ_i* - 1]


The same iterative procedure described earlier is likely to converge faster
with this approach than with the previous approach, where no linearization
was performed.
Treatment of Non-linear Sources
Why does the linearized procedure produce better convergence?
Linearization results in a stronger diagonal, which is indicative of a better
conditioned matrix, and hence, results in better convergence.

What if linearization results in a weaker diagonal?


Consider

    d²φ/dx² = S_φ = exp(-φ)

After FD approximation, we get

    (φ_{i+1} - 2φ_i + φ_{i-1}) / (Δx)² = exp(-φ_i)

Linearizing the source term, we get

    (φ_{i+1} - 2φ_i + φ_{i-1}) / (Δx)² = exp(-φ_i*) - exp(-φ_i*)(φ_i - φ_i*)
Rearranging,

    [2/(Δx)² - exp(-φ_i*)] φ_i - (1/(Δx)²) φ_{i+1} - (1/(Δx)²) φ_{i-1} = -exp(-φ_i*)[φ_i* + 1]


In this case linearization makes the diagonal weaker.
Treatment of Non-linear Sources
To linearize or not to linearize?
In general, always linearize, but follow the procedure listed below.

Step 1: Re-write difference equation such that the diagonal without


linearization is positive
Thus

    (φ_{i+1} - 2φ_i + φ_{i-1}) / (Δx)² = exp(-φ_i)

should be re-written as

    (2/(Δx)²) φ_i - (1/(Δx)²) φ_{i+1} - (1/(Δx)²) φ_{i-1} = -exp(-φ_i)

Step 2: Linearize the source term using a Taylor series expansion

    (2/(Δx)²) φ_i - (1/(Δx)²) φ_{i+1} - (1/(Δx)²) φ_{i-1} = -exp(-φ_i*) + exp(-φ_i*)(φ_i - φ_i*)
Treatment of Non-linear Sources
Step 3: Write the source term in "equation of a line" form

    (2/(Δx)²) φ_i - (1/(Δx)²) φ_{i+1} - (1/(Δx)²) φ_{i-1} = S_P φ_i + S_C

where

    S_P = exp(-φ_i*)    and    S_C = -exp(-φ_i*) - exp(-φ_i*) φ_i*

In general, the above linear equation may be written as

    A_i φ_i + Σ_{j≠i} A_j φ_j = S_P φ_i + S_C
Since Ai is positive, we want to move the first term on the RHS of the
above equation to the LHS of the equation only if SP is negative. This
will make the diagonal stronger (larger positive value).

Step 4: Modify the diagonal and source using the following logic

    A_i = A_i - MIN(0, S_P)
    S_C = S_C + MAX(0, S_P) φ_i*

This means that the S_P-containing term is moved to the LHS only if it is
negative; otherwise, it stays on the RHS.
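The Step 4 logic can be sketched as a small helper. The function name and the scalar arguments are illustrative; in a real solver this would be applied node by node while assembling the matrix.

```python
def apply_source_linearization(A_i, S_C, S_P, phi_star):
    """Fold the linearized source S_P*phi_i + S_C into one matrix row.
    The S_P term strengthens the diagonal only when S_P is negative;
    otherwise it is evaluated at the previous iterate phi_star and
    kept in the constant source."""
    A_i = A_i - min(0.0, S_P)             # move S_P*phi_i to LHS only if S_P < 0
    S_C = S_C + max(0.0, S_P) * phi_star  # else keep it on the RHS, lagged
    return A_i, S_C
```

Either way the diagonal never weakens: a negative S_P adds to A_i, and a positive S_P is deferred to the source.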
Convergence
In the iterative algorithm discussed earlier, one of the steps was to
“Repeat procedure until convergence”
What exactly does that mean?

Recall that the algebraic equation we are trying to solve iteratively at the
interior nodes is

    (φ_{i+1} - 2φ_i + φ_{i-1}) / (Δx)² = exp(φ_i)    ∀ i = 2, ..., N-1

Convergence is a measure of how closely (or accurately) these equations


have been satisfied.
By definition, as we approach a converged solution with iterations, the
error between the solutions obtained at successive iterations will be
vanishingly small.
However, the underlying assumption in this argument is that we do not
have any programming errors.
Convergence
Imagine a scenario in which there is a programming error because of which
the solution at successive iterations does not change. In that case, using
the criterion of a vanishingly small change between successive iterations
would give a false message.
On the other hand, if we checked to see if the equation below has been
satisfied or not, it would tell us that it has not been satisfied.
    (φ_{i+1} - 2φ_i + φ_{i-1}) / (Δx)² = exp(φ_i)    ∀ i = 2, ..., N-1

Therefore, the only foolproof way to test convergence is to check if the


equations that we set out to solve (above) have indeed been satisfied.
In order to check how closely the equation has been satisfied, we
transpose all terms of the equation to one side, and calculate a residual
at each node

    R_i = exp(φ_i) - (φ_{i+1} - 2φ_i + φ_{i-1}) / (Δx)²    ∀ i = 2, ..., N-1
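The nodal residuals, and their sum of squares used on the next slide, can be sketched as two small functions; the names `residuals` and `net_residual` and the list-based representation of φ are my choices.

```python
import math

def residuals(phi, dx):
    """R_i = exp(phi_i) - (phi_{i+1} - 2 phi_i + phi_{i-1}) / dx^2
    at every interior node of the grid."""
    return [math.exp(phi[i]) - (phi[i + 1] - 2.0 * phi[i] + phi[i - 1]) / dx**2
            for i in range(1, len(phi) - 1)]

def net_residual(phi, dx):
    """Sum of squared nodal residuals (the R2 measure)."""
    return sum(r * r for r in residuals(phi, dx))
```

Squaring before summing prevents positive and negative nodal residuals from cancelling each other.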

Convergence
Performing an IF check on the residual of each node would be
computationally expensive.
One alternative would be to perform an IF check on the net residual, i.e.,
combined residual of all the cells.
However, simply adding the residuals of the individual nodes will not work
because the residuals may be positive or negative. Therefore, adding
them might result in a small number (and a false message).
Therefore, the most common way of calculating the net residual is to square
the nodal residuals and then add them. This is referred to as the L2 norm,
and is defined as

    R2 = Σ_i [exp(φ_i) - (φ_{i+1} - 2φ_i + φ_{i-1}) / (Δx)²]²

Convergence means getting R2 down to a small enough value.
A converged solution implies that the governing algebraic equations have
been satisfied.
Convergence
A typical plot of the residuals on a semi-log scale looks as follows:

[Figure: Log10(R2) versus number of iterations. The residual falls steadily
from about -5 to below -15 and then flattens out, indicating that the
calculations have hit round-off error.]

At what order of magnitude the residuals hit round-off error depends on the
precision (single or double) of the calculation, and on the hardware and
software of the machine being used.
In general, the round-off errors are much smaller than the truncation errors
in the calculation, and therefore, 6 to 8 orders of magnitude of residual
reduction is more than sufficient.
Convergence
What tolerance do we use to stop the calculations?
Method 1: Scale the residual by the largest residual.
Ø At each iteration, store the largest residual encountered up to that point.
Ø Normalize the current residual by the largest residual.
Ø Set a tolerance of 10⁻⁶ (six orders of magnitude) on the normalized residual.

Method 2: Set an absolute tolerance
Ø Get an understanding of the magnitude of the dependent variable you are
solving for, e.g., if φ = temperature, then its magnitude is ~1000.
Ø Set a tolerance value that is six orders of magnitude or so lower than the
value of the variable you are solving for, e.g., the tolerance for the T
equation may be set to 10⁻³. This would imply that temperatures would be
accurate up to approximately 3 decimal places.

Note: Do not use DO WHILE (R2 > tolerance) logic. This has the danger
of the iteration going into an infinite loop.
Instead, use dual logic:
DO WHILE (R2 > tolerance .AND. Iteration# < max_iterations)
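The dual logic above can be sketched in Python; here the solver sweep is mocked by halving a fake residual each pass, so the names and values are purely illustrative.

```python
tolerance = 1.0e-6
max_iterations = 1000     # hard cap: guards against an infinite loop

iteration = 0
R2 = 1.0                  # net residual of the initial guess (mock value)
while R2 > tolerance and iteration < max_iterations:
    iteration += 1
    R2 = 0.5 * R2         # stand-in for: one solver sweep + residual update
```

If the residual stalls above the tolerance, the iteration counter still terminates the loop.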
Newton’s Method
Newton's method is the most widely used method for finding roots of non-linear
equations. It uses a gradient-based algorithm to approach the root.
Let’s first review Newton’s method for a single non-linear equation
(scalar case). We desire to find the roots of the equation
    f(x) = 0
Since this is a non-linear equation of arbitrary form, we must start with a
guess for the root, and then perform iterations to approach it in some
manner. Let the initial guess for the root be x*.
Using a Taylor series expansion, we can write

    f(x) = f(x*) + (df/dx)* (x - x*) + ...
One approach to solving f(x) = 0 is to instead set the RHS of the above
equation to zero, and solve the resulting equation for the change in x, i.e.,

    0 = f(x*) + (df/dx)* Δx + (d²f/dx²)* (Δx)²/2 + ...    where Δx = x - x*
Newton’s Method
Unfortunately, this equation is also a non-linear equation (albeit in
polynomial form) and is equally difficult to solve.
In order to make a solution possible, we drop all non-linear terms in the
equation, such that

    f(x*) + (df/dx)* Δx ≈ 0    or    x = x* - f(x*) / (df/dx)*
Since the higher-order terms have been dropped, the above equation is
not the same as the equation at the bottom of the previous page.
Therefore, its solution will only give an approximate value for the root.
However, it is easy to see graphically that this root is closer to the actual
root than the initial guess.

[Figure: plot of f(x) showing the actual root, the initial guess for the
root x*, and the root obtained after one linear approximation.]

The process may be repeated by replacing the initial guess by the new
estimate, i.e., by iterating.
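The scalar iteration x ← x - f(x)/f'(x) can be sketched as follows; the test equation f(x) = exp(x) - 2 (root x = ln 2), the initial guess, and the tolerance are my illustrative choices.

```python
import math

def newton_scalar(f, dfdx, x, tol=1.0e-12, max_iter=50):
    """Scalar Newton iteration starting from the guess x."""
    for _ in range(max_iter):
        dx = -f(x) / dfdx(x)   # solve the linearized equation for the step
        x += dx                # replace the guess by the new estimate
        if abs(dx) < tol:      # stop once the step is vanishingly small
            break
    return x

root = newton_scalar(lambda x: math.exp(x) - 2.0,  # f(x)
                     lambda x: math.exp(x),        # df/dx
                     1.0)                          # initial guess x*
```

Each pass performs exactly one linear approximation of the kind shown graphically above.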
Newton’s Method for Coupled Non-Linear Equations
The approach discussed earlier for a single unknown (scalar case) can be
extended to multiple unknowns (vector case).
Note that

    (φ_{i+1} - 2φ_i + φ_{i-1}) / (Δx)² = exp(φ_i)

is a non-linear set of equations with multiple unknowns: φ = [φ_1, φ_2, ..., φ_N]

We now seek the solution to the coupled set of non-linear equations


represented by
    f_i(φ) = 0    ∀ i = 1, 2, ..., N

Written explicitly:

    f_1(φ_1, φ_2, ..., φ_N) = 0
    f_2(φ_1, φ_2, ..., φ_N) = 0
    .
    .
    .
    f_N(φ_1, φ_2, ..., φ_N) = 0
Newton’s Method for Coupled Non-Linear Equations
Perform a Taylor series expansion, as for the single-variable case:

    f_1(φ) = f_1(φ*) + Δφ_1 (∂f_1/∂φ_1)* + Δφ_2 (∂f_1/∂φ_2)* + ... + Δφ_N (∂f_1/∂φ_N)* + ...
    f_2(φ) = f_2(φ*) + Δφ_1 (∂f_2/∂φ_1)* + Δφ_2 (∂f_2/∂φ_2)* + ... + Δφ_N (∂f_2/∂φ_N)* + ...
    .
    .
    .
    f_N(φ) = f_N(φ*) + Δφ_1 (∂f_N/∂φ_1)* + Δφ_2 (∂f_N/∂φ_2)* + ... + Δφ_N (∂f_N/∂φ_N)* + ...

As in the scalar case, we now set the RHS equal to zero and solve, after
neglecting the higher-order terms. After some rearrangement, and after
writing the resulting equation in matrix form, this yields

16
Newton’s Method for Coupled Non-Linear Equations

    [ (∂f_1/∂φ_1)*  (∂f_1/∂φ_2)*  ...  (∂f_1/∂φ_N)* ] [ Δφ_1 ]     [ f_1(φ*) ]
    [ (∂f_2/∂φ_1)*  (∂f_2/∂φ_2)*  ...  (∂f_2/∂φ_N)* ] [ Δφ_2 ]     [ f_2(φ*) ]
    [      .             .        ...       .       ] [  .   ] = - [    .    ]
    [      .             .        ...       .       ] [  .   ]     [    .    ]
    [ (∂f_N/∂φ_1)*  (∂f_N/∂φ_2)*  ...  (∂f_N/∂φ_N)* ] [ Δφ_N ]     [ f_N(φ*) ]

or, compactly, [J][Δφ] = -[f(φ*)]
The square matrix on the LHS is called the Jacobian matrix (J). Its
determination requires all individual partial derivatives.
The above equation represents a set of linear algebraic equations that can
be solved using Gaussian Elimination to yield the solution for Df.
Update the solution using φ = φ* + Δφ
Newton’s Method for Coupled Non-Linear Equations
Solution Algorithm
1. Start with a guessed value at all nodes, φ^(0) (= φ*), and set l = 0
2. Calculate f(φ^(l))
3. Calculate [J^(l)] (derive the partial derivatives analytically)
4. Solve [J^(l)][Δφ^(l)] = -f(φ^(l))
5. Update the solution: φ^(l+1) = φ^(l) + Δφ^(l)
6. Set l = l + 1 and repeat Steps 2-5 until convergence
As before, convergence may be enforced by calculating the L2 norm and
monitoring it. In this case, the L2 norm is simply

    R2 = Σ_i [f_i]²

If the equations are strongly non-linear, it may be necessary to use a
so-called linear relaxation factor, such that updating is done using

    φ^(l+1) = φ^(l) + ω Δφ^(l)

where ω is the linear relaxation factor. Generally, 0 ≤ ω ≤ 1:
a small ω is slow but stable; a large ω is fast but unstable.
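Steps 1-6 with under-relaxation can be sketched on a small coupled system. The 2x2 test system (x² + y² = 4, x·y = 1), the value ω = 0.8, and the starting guess are my choices; for a 2x2 Jacobian the linear solve is done by Cramer's rule instead of Gaussian elimination.

```python
def newton_2x2(x, y, omega=0.8, n_iter=40):
    """Relaxed Newton iteration for f1 = x^2 + y^2 - 4, f2 = x*y - 1."""
    for _ in range(n_iter):
        f1 = x * x + y * y - 4.0
        f2 = x * y - 1.0
        # Jacobian entries, derived analytically (step 3)
        j11, j12 = 2.0 * x, 2.0 * y
        j21, j22 = y, x
        det = j11 * j22 - j12 * j21
        # Solve [J][dphi] = -[f] (Cramer's rule for the 2x2 system, step 4)
        dx = (-f1 * j22 + f2 * j12) / det
        dy = (-f2 * j11 + f1 * j21) / det
        x += omega * dx            # under-relaxed update (step 5)
        y += omega * dy
    return x, y

x, y = newton_2x2(2.0, 0.5)
```

With ω = 1 this reduces to the plain Newton update; reducing ω trades speed for stability, as noted above.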
Exercise
Consider the discrete system with Dirichlet boundary conditions:

    i = 1:            φ_1 = φ_L
    i = 2, ..., N-1:  (φ_{i+1} - 2φ_i + φ_{i-1}) / (Δx)² = exp(φ_i)
    i = N:            φ_N = φ_R

For the interior nodes, define the residual functions

    f_i = exp(φ_i) - (φ_{i+1} - 2φ_i + φ_{i-1}) / (Δx)²    i = 2, ..., N-1

Calculate the Jacobian matrix

    [J] = [ ∂f_i/∂φ_j ]    i, j = 2, ..., N-1

Since each f_i involves only φ_{i-1}, φ_i, and φ_{i+1}, the Jacobian is
tri-diagonal:

    ∂f_i/∂φ_i     = exp(φ_i) + 2/(Δx)²
    ∂f_i/∂φ_{i±1} = -1/(Δx)²
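A sketch of this exercise in Python: Newton's method with the tri-diagonal Jacobian above, solved by the Thomas algorithm from Lecture 3. The grid (N = 21 on [0, 1]), the boundary values φ_L = φ_R = 0, the initial guess, and the iteration count are my assumptions.

```python
import math

def thomas(a, b, c, d):
    """Solve a tri-diagonal system (a sub-, b main, c super-diagonal, d RHS)."""
    n = len(d)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

N, phi_L, phi_R = 21, 0.0, 0.0           # assumed boundary values
dx = 1.0 / (N - 1)
phi = [0.0] * N                          # initial guess phi^(0)
phi[0], phi[-1] = phi_L, phi_R

for _ in range(20):                      # Newton iterations
    # f_i = exp(phi_i) - (phi_{i+1} - 2 phi_i + phi_{i-1}) / dx^2 at interior nodes
    f = [math.exp(phi[i]) - (phi[i + 1] - 2.0 * phi[i] + phi[i - 1]) / dx**2
         for i in range(1, N - 1)]
    # Tri-diagonal Jacobian: diagonal exp(phi_i) + 2/dx^2, off-diagonals -1/dx^2
    n = N - 2
    a = [-1.0 / dx**2] * n; a[0] = 0.0
    b = [math.exp(phi[i]) + 2.0 / dx**2 for i in range(1, N - 1)]
    c = [-1.0 / dx**2] * n; c[-1] = 0.0
    dphi = thomas(a, b, c, [-fi for fi in f])   # solve [J][dphi] = -[f]
    for i in range(1, N - 1):
        phi[i] += dphi[i - 1]                   # update interior nodes
```

Because the Jacobian is tri-diagonal, each Newton step costs only O(N), in contrast to the O(N³) Gaussian elimination needed for a full Jacobian.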
