
Numerical Analysis thoughts

Matthew O'Connell
Graduate Student, Computational Engineering Department, University of Tennessee Chattanooga

January 12, 2010


Regions of stability

Region of Stability: Heun

We wish to describe the region of stability for stiff problems of different numerical ODE methods, particularly Heun, Crank-Nicolson, and the classical Runge-Kutta method, beginning with Heun. Heun is a one-step method for solving Cauchy ODEs given by:
\[ u_{n+1} = u_n + \frac{h}{2}\left[ f(t_n, u_n) + f\left(t_n + h,\; u_n + h f(t_n, u_n)\right) \right] \]
Using the test problem
\[ P = \begin{cases} y'(t) = \lambda y \\ y(0) = 1 \end{cases} \]
we can see that $f(t_n, u_n) = \lambda u_n$:
\[ u_{n+1} = u_n + \frac{h}{2}\left[ \lambda u_n + \lambda (u_n + h \lambda u_n) \right] \]

after some simplication


u
n+1
= u
n
+
_
1 + h +
h
2
2

2

We can now apply the initial condition, $u_0 = 1$, and see
\[ u_1 = \left[1 + h\lambda + \frac{h^2\lambda^2}{2}\right], \qquad
u_2 = u_1\left[1 + h\lambda + \frac{h^2\lambda^2}{2}\right] \;\text{ or }\; \left[1 + h\lambda + \frac{h^2\lambda^2}{2}\right]^2; \]
furthermore, successive values of $u$ will be given by:
\[ u_{n+1} = \left[1 + h\lambda + \frac{h^2\lambda^2}{2}\right]^{n+1}. \]
We know from the analytical solution of the test problem that $u_{n+1} \to 0$ as $n \to \infty$. This will only occur if
\[ \left| 1 + h\lambda + \frac{h^2\lambda^2}{2} \right| \le 1. \]
Thus, with $z = h\lambda$, Heun is stable within the region:
\[ \left\{ z \in \mathbb{C} : \left| 1 + z + \frac{z^2}{2} \right| \le 1 \right\}. \]
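A region like the one in Figure 1 can be traced numerically by evaluating the amplification factor on a grid and drawing the level curve where its modulus equals one. A minimal Octave sketch (the grid bounds are my own choice):

% Trace the boundary |R(z)| = 1 of Heun's stability region, R(z) = 1 + z + z^2/2.
[X, Y] = meshgrid(linspace(-4, 2, 400), linspace(-3, 3, 400));
Z = X + 1i * Y;
R = 1 + Z + Z.^2 / 2;            % amplification factor for Heun
contour(X, Y, abs(R), [1 1]);    % the level curve |R(z)| = 1
axis equal; grid on;
xlabel('Re(z)'); ylabel('Im(z)');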



Figure 1: Region of stability for Heun
Region of Stability: Crank-Nicolson

Crank-Nicolson (CN) is a one-step method for solving Cauchy problems given by:
\[ u_{n+1} = u_n + \frac{h}{2}\left( f(t_n, u_n) + f(t_{n+1}, u_{n+1}) \right) \]
From the test problem, $f_n = \lambda u_n$. Making this substitution:
\[ u_{n+1} = u_n + \frac{h}{2}\left[ \lambda u_n + \lambda u_{n+1} \right] \]
And rewritten:
\[ u_{n+1}\left(1 - \frac{h\lambda}{2}\right) = u_n\left(1 + \frac{h\lambda}{2}\right), \qquad
u_{n+1} = u_n\, \frac{1 + \frac{h\lambda}{2}}{1 - \frac{h\lambda}{2}}. \]
Applying the initial condition:
\[ u_1 = u_0\, \frac{1 + \frac{h\lambda}{2}}{1 - \frac{h\lambda}{2}} = \frac{1 + \frac{h\lambda}{2}}{1 - \frac{h\lambda}{2}} \]
and successively:
\[ u_{n+1} = \left( \frac{1 + \frac{h\lambda}{2}}{1 - \frac{h\lambda}{2}} \right)^{n+1}. \]
And we know from the analytical solution that $u_{n+1} \to 0$ as $n \to \infty$. This can only occur if
\[ \left| \frac{1 + \frac{h\lambda}{2}}{1 - \frac{h\lambda}{2}} \right| \le 1. \]
This will occur for any negative value of $\lambda$. Therefore the region of stability is
\[ \left\{ z \in \mathbb{C} : \left| \frac{1 + \frac{z}{2}}{1 - \frac{z}{2}} \right| \le 1 \right\}. \]
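To see explicitly that this region is the entire left half-plane, write $z = x + iy$ and compare squared moduli:
\[ \left|1 + \tfrac{z}{2}\right| \le \left|1 - \tfrac{z}{2}\right|
\;\Longleftrightarrow\; (2 + x)^2 + y^2 \le (2 - x)^2 + y^2
\;\Longleftrightarrow\; 8x \le 0
\;\Longleftrightarrow\; \operatorname{Re}(z) \le 0, \]
which is the A-stability of Crank-Nicolson.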
Region of Stability: Classic 4-stage Runge-Kutta

Classic 4-stage Runge-Kutta is a one-step method described as:
\[ u_{n+1} = u_n + h\Phi, \qquad \Phi = \frac{1}{6}\left[ K_1 + 2K_2 + 2K_3 + K_4 \right] \]


Figure 2: Crank-Nicolson stability, stable for any negative real part.
\[
\begin{aligned}
K_1 &= f(t_n, u_n) \\
K_2 &= f\left(t_n + \tfrac{h}{2},\; u_n + \tfrac{h}{2} K_1\right) \\
K_3 &= f\left(t_n + \tfrac{h}{2},\; u_n + \tfrac{h}{2} K_2\right) \\
K_4 &= f\left(t_n + h,\; u_n + h K_3\right)
\end{aligned}
\]
From the test problem, the stages become:
\[
\begin{aligned}
K_1 &= \lambda u_n \\
K_2 &= \lambda\left[u_n + \tfrac{h}{2} K_1\right] \\
K_3 &= \lambda\left[u_n + \tfrac{h}{2} K_2\right] \\
K_4 &= \lambda\left[u_n + h K_3\right]
\end{aligned}
\]
$h\Phi$ is now:
\[ h\Phi = h\lambda u_n + \frac{h^2\lambda^2}{2} u_n + \frac{h^3\lambda^3}{6} u_n + \frac{h^4\lambda^4}{24} u_n \]
and the RK method can be written as:
\[ u_{n+1} = u_n + h\lambda u_n + \frac{h^2\lambda^2}{2} u_n + \frac{h^3\lambda^3}{6} u_n + \frac{h^4\lambda^4}{24} u_n \]
\[ u_{n+1} = u_n \left[ 1 + h\lambda + \frac{h^2\lambda^2}{2} + \frac{h^3\lambda^3}{6} + \frac{h^4\lambda^4}{24} \right] \]
Making the substitution $z = h\lambda$:
\[ u_{n+1} = u_n \left[ 1 + z + \frac{z^2}{2} + \frac{z^3}{6} + \frac{z^4}{24} \right]. \]
As we have done before, knowing that $u_0 = 1$, successive values $u_{n+1}$ are
\[ u_{n+1} = \left[ 1 + z + \frac{z^2}{2} + \frac{z^3}{6} + \frac{z^4}{24} \right]^{n+1}. \]
From the analytic solution we know that as $n \to \infty$, $u_{n+1} \to 0$. This requires that
\[ \left| 1 + z + \frac{z^2}{2} + \frac{z^3}{6} + \frac{z^4}{24} \right| \le 1. \]
Thus classic fourth-order Runge-Kutta is stable within the region
\[ R = \left\{ z \in \mathbb{C} : \left| 1 + z + \frac{z^2}{2} + \frac{z^3}{6} + \frac{z^4}{24} \right| \le 1 \right\} \]
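The same grid test sketched in the Heun section applies to all three methods; only the amplification factor changes. For classic Runge-Kutta, for example:

R = 1 + Z + Z.^2/2 + Z.^3/6 + Z.^4/24;   % RK4 amplification factor
contour(X, Y, abs(R), [1 1]);            % boundary of the RK4 stability region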
Figure 3: Region of stability for classic Runge-Kutta.
Higher Order Polynomial Interpolants and Condition Number

We wish to explore polynomial interpolation with many interpolating nodes. We begin by considering a function $f(x)$ with known functional values at the nodes $x = \{0, 1, 2, 3, 4\}$, and we wish to interpolate this function using a polynomial
\[ p(x) = c_4 x^4 + c_3 x^3 + c_2 x^2 + c_1 x + c_0. \]
Knowing that $p$ must interpolate $f$ at all nodes, we have five equations:
\[
\begin{aligned}
p(0) &= f(0) && (1) \\
p(1) &= f(1) && (2) \\
p(2) &= f(2) && (3) \\
p(3) &= f(3) && (4) \\
p(4) &= f(4) && (5)
\end{aligned}
\]
which expand to
\[
\begin{aligned}
f(0) &= c_0 \\
f(1) &= c_0 + c_1 + c_2 + c_3 + c_4 \\
f(2) &= c_0 + 2c_1 + 4c_2 + 8c_3 + 16c_4 \\
f(3) &= c_0 + 3c_1 + 9c_2 + 27c_3 + 81c_4 \\
f(4) &= c_0 + 4c_1 + 16c_2 + 64c_3 + 256c_4
\end{aligned}
\]
And we have a linear system $Ac = b$:
\[
\begin{bmatrix}
1 & 0 & 0 & 0 & 0 \\
1 & 1 & 1 & 1 & 1 \\
1 & 2 & 4 & 8 & 16 \\
1 & 3 & 9 & 27 & 81 \\
1 & 4 & 16 & 64 & 256
\end{bmatrix}
\begin{bmatrix} c_0 \\ c_1 \\ c_2 \\ c_3 \\ c_4 \end{bmatrix}
=
\begin{bmatrix} f(0) \\ f(1) \\ f(2) \\ f(3) \\ f(4) \end{bmatrix}.
\]
$A$ is an ill-conditioned matrix, with a condition number $K_2 = 2592.9$. This is an ill-conditioned problem without regard to the function $f(x)$, as $A$ does not depend on the functional values being used to interpolate.
If we begin using more nodes, and thus more equations, the condition number of the resulting $A$ increases:
\[ x = \{0, 1, 2, 3, 4, 5\}: \quad K_2 = 5.7689 \times 10^4 \]
\[ x = \{0, 1, 2, 3, 4, 5, 6\}: \quad K_2 = 1.5973 \times 10^6 \]
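These values are straightforward to reproduce; a short Octave sketch, assuming equispaced nodes and the 2-norm condition number reported by cond:

% Condition number growth of the interpolation matrix for nodes x = 0..n
for n = 4:6
  x = (0:n)';
  A = x .^ (0:n);          % Vandermonde-type matrix, increasing powers
  printf('nodes 0..%d: K_2 = %.4e\n', n, cond(A));
end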
So we can see from the increasing condition number that higher-order polynomial interpolants cannot be expected to give a desirable interpolation of a function. The reason for this can be thought of as the columns of the matrix $A$ beginning to point in nearly the same direction. But we can also see this just by looking at the polynomials being used. As shown in Figure 4, as the degree of the polynomial increases, the distinction between successive degrees decreases.
Figure 4: Polynomials of degree 1-6. As the degree of the polynomial increases, the distinction between one polynomial and the next lessens.
Modified Euler and Heun methods as Runge-Kutta methods

Runge-Kutta Review

It can be shown that both the Heun and Modified Euler numerical methods for Cauchy ODEs are two-stage Runge-Kutta methods, so first a quick review of Runge-Kutta methods. Runge-Kutta methods are one-step methods. All one-step methods have the form:
\[ u_{n+1} = u_n + h\,\Phi(t_n, u_n; h) \]
For Runge-Kutta methods the step function is given by:
\[ \Phi(t_n, u_n; h) = \sum_{i=1}^{s} b_i k_i \qquad \text{for an } s\text{-stage method,} \]
where the $k$'s are given by:
\[ k_i = f\left(t_n + c_i h,\; u_n + h \sum_{j=1}^{s} a_{ij} k_j\right) \]
and the $a$'s, $b$'s, and $c$'s are described in the Butcher tableau for the method, in the form:
\[
\begin{array}{c|ccc}
c_1 & a_{11} & \cdots & a_{1s} \\
\vdots & \vdots & \ddots & \vdots \\
c_s & a_{s1} & \cdots & a_{ss} \\ \hline
 & b_1 & \cdots & b_s
\end{array}
\]
As a side note: if $a_{ij} = 0$ for all $i, j$ such that $j \ge i$, then the method is explicit.
It can be (and indeed will be) shown that Heun and Modified Euler are two-stage explicit Runge-Kutta methods by showing the coefficients $a$, $b$, and $c$ for each method.
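For concreteness, here is a minimal Octave sketch of a single explicit Runge-Kutta step driven by a Butcher tableau; rk_step is my own helper name, not something from the text:

% One explicit RK step for u' = f(t, u), given a tableau (A, b, c).
function u1 = rk_step(f, t, u, h, A, b, c)
  s = numel(b);
  k = zeros(s, 1);
  for i = 1:s
    % explicit: stage i only uses stages 1..i-1
    k(i) = f(t + c(i) * h, u + h * (A(i, 1:i-1) * k(1:i-1)));
  end
  u1 = u + h * (b(:)' * k);    % u_{n+1} = u_n + h * sum_i b_i k_i
end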
Modified Euler as a Runge-Kutta Method

Beginning with the step function for Modified Euler:
\[ \Phi = f\left(t_n + \tfrac{1}{2}h,\; u_n + \tfrac{1}{2}h\, f(t_n, u_n)\right) \]
or the step function can be rewritten as:
\[ \Phi = 0 \cdot k_1 + 1 \cdot k_2, \qquad \text{thus } b = [0,\; 1], \]
where the $k$'s are:
\[ k_1 = f(t_n + 0 \cdot h,\; u_n + 0 \cdot h), \qquad
k_2 = f\left(t_n + \tfrac{1}{2}h,\; u_n + \tfrac{1}{2}h\, k_1\right). \]
In this form we can easily see that this is a Runge-Kutta method with the coefficients in the Butcher tableau below.
\[
\begin{array}{c|cc}
0 & 0 & 0 \\
\tfrac{1}{2} & \tfrac{1}{2} & 0 \\ \hline
 & 0 & 1
\end{array}
\]
Heun as a Runge-Kutta Method

Using the same methodology, Heun's step function is
\[ \Phi = \frac{1}{2}\left[ f(t_n, u_n) + f(t_n + h,\; u_n + h f(t_n, u_n)) \right] \]
which can be rewritten as
\[ \Phi = \frac{1}{2} k_1 + \frac{1}{2} k_2, \qquad \text{thus } b = \left[\tfrac{1}{2},\; \tfrac{1}{2}\right]. \]
The stages are calculated by
\[ k_1 = f(t_n + 0 \cdot h,\; u_n + 0 \cdot h), \qquad
k_2 = f(t_n + 1 \cdot h,\; u_n + 1 \cdot h\, k_1). \]
And we have the Butcher tableau:
\[
\begin{array}{c|cc}
0 & 0 & 0 \\
1 & 1 & 0 \\ \hline
 & \tfrac{1}{2} & \tfrac{1}{2}
\end{array}
\]
Therefore both Heun and Modified Euler are two-stage explicit Runge-Kutta methods.
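Plugging the two tableaus into the rk_step sketch above reproduces one step of each method; on the linear test problem both reduce to the same amplification factor $1 + h\lambda + (h\lambda)^2/2$ derived in the stability section. For example (with my own choice of $\lambda = -2$, $h = 0.1$):

f = @(t, u) -2 * u;                % test problem u' = lambda*u, lambda = -2
uME = rk_step(f, 0, 1, 0.1, [0 0; 1/2 0], [0 1],     [0; 1/2]);   % Modified Euler
uH  = rk_step(f, 0, 1, 0.1, [0 0; 1 0],   [1/2 1/2], [0; 1]);     % Heun
% both return 1 + (-0.2) + (-0.2)^2/2 = 0.82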
Example Lagrange and Cubic Spline Interpolation

As an example use of Lagrange and natural cubic spline interpolation, we wish to find an interpolating polynomial for the leftmost morning glory cloud photographed in Figure 5.[1] To begin, we take some data points on the front of the leftmost cloud.

Table 1: Data set for the cloud.

x     73.7   161.2  226.1  282.1  322.4  372.3  413.9  488.4  545.1  610.7
f(x)  418.6  408.7  396.4  373.9  345.9  268.5  240.0  194.3  152.6  97.3
Figure 5: Morning Glory Clouds shot over Australia.
[1] APOD 2009 August 24, picture by Mick Petro, used under Creative Commons Attribution-Share Alike 3.0 Unported.

Natural Cubic Splines

We will use natural cubic splines to describe the function between interpolating nodes. The spline polynomials on the intervals between the nodes are given by
\[ S_{j-1}(x) = \frac{M_{j-1}(x_j - x)^3}{6 h_j} + \frac{M_j (x - x_{j-1})^3}{6 h_j} + C_{j-1}(x - x_{j-1}) + \widetilde{C}_{j-1} \]
where $\widetilde{C}_{j-1}$ and $C_{j-1}$ are given by
\[ \widetilde{C}_{j-1} = f_{j-1} - M_{j-1}\frac{h_j^2}{6}, \qquad
C_{j-1} = \frac{f_j - f_{j-1}}{h_j} - \frac{h_j}{6}\left(M_j - M_{j-1}\right). \tag{6} \]
The $M$'s are given by solving the linear system:
\[
\begin{bmatrix}
2 & \lambda_0 & & & & \\
\mu_1 & 2 & \lambda_1 & & & \\
 & \mu_2 & 2 & \lambda_2 & & \\
 & & \ddots & \ddots & \ddots & \\
 & & & \mu_8 & 2 & \lambda_8 \\
 & & & & \mu_9 & 2
\end{bmatrix}
\begin{bmatrix} M_0 \\ M_1 \\ M_2 \\ \vdots \\ M_8 \\ M_9 \end{bmatrix}
=
\begin{bmatrix} d_0 \\ d_1 \\ d_2 \\ \vdots \\ d_8 \\ d_9 \end{bmatrix}
\]
with $\mu$, $\lambda$, and $d$ given by
\[ \mu_j = \frac{h_j}{h_j + h_{j+1}}, \qquad
\lambda_j = \frac{h_{j+1}}{h_j + h_{j+1}}, \qquad
d_j = \frac{6}{h_j + h_{j+1}}\left( \frac{f_{j+1} - f_j}{h_{j+1}} - \frac{f_j - f_{j-1}}{h_j} \right) \tag{7} \]
for $j = 1 \ldots (n-1)$.
Since we are using a natural cubic spline (the second derivative at the boundary nodes is fixed at zero),
\[ \lambda_0 = \mu_n = d_0 = d_n = 0. \]
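As a sketch of how the moments can be computed in Octave for the data in Table 1 (variable names are my own):

% Assemble and solve the natural-spline moment system for Table 1.
x = [73.7 161.2 226.1 282.1 322.4 372.3 413.9 488.4 545.1 610.7];
f = [418.6 408.7 396.4 373.9 345.9 268.5 240.0 194.3 152.6 97.3];
n = numel(x) - 1;
h = diff(x);                            % h_j = x_j - x_{j-1}
A = 2 * eye(n + 1);                     % boundary rows stay [2 0 ... 0]
d = zeros(n + 1, 1);                    % d_0 = d_n = 0 (natural spline)
for j = 1:n-1
  A(j+1, j)   = h(j)   / (h(j) + h(j+1));    % mu_j
  A(j+1, j+2) = h(j+1) / (h(j) + h(j+1));    % lambda_j
  d(j+1) = 6 / (h(j) + h(j+1)) * ...
           ((f(j+2) - f(j+1)) / h(j+1) - (f(j+1) - f(j)) / h(j));
end
M = A \ d;                              % moments M_0 ... M_n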
With all the required data at hand, we can now solve for all the $S_j$'s, yielding the cubic polynomials for each interval $x_j \le x \le x_{j+1}$.
\[
\begin{aligned}
S_0 &= 418.6 + 1.2189667 \times 10^{-6}\,(t - 400315.5) - 0.12247557\,(t - 73.7) \\
S_1 &= 409.1492 - 1.6434451 \times 10^{-6}\,(226.1 - t)^3 - 0.1525749\,(t - 161.2) - 0.000010415\,(t - 161.2)^3 \\
S_2 &= 398.5198 - 0.0000120706\,(282.1 - t)^3 - 0.39774469\,(t - 226.1) - 0.000013359260\,(t - 226.1)^3 \\
S_3 &= 375.11501 - 0.000018563736\,(322.4 - t)^3 - 0.6138761\,(t - 282.1) - 0.00006838420\,(t - 282.1)^3 \\
S_4 &= 352.7621 - 0.00005522812\,(372.3 - t)^3 - 1.3596182\,(t - 322.4) + 0.000012738458\,(t - 322.4)^3 \\
S_5 &= 285.39997 + 0.00001528002\,(413.9 - t)^3 - 1.1851325\,(t - 372.3) + 0.00005419465\,(t - 372.3)^3 \\
S_6 &= 227.48697 + 0.000030261715\,(488.4 - t)^3 - 0.36623957\,(t - 413.9) - 0.000014273799\,(t - 413.9)^3 \\
S_7 &= 197.71870 - 0.000018754815\,(545.1 - t)^3 - 0.78479305\,(t - 488.4) - 3.406446 \times 10^{-6}\,(t - 488.4)^3 \\
S_8 &= 153.4311 - 2.9442917 \times 10^{-6}\,(610.7 - t)^3 - 0.8556581\,(t - 545.1) + 0.\,(t - 545.1)^3
\end{aligned}
\]
The plot of the $S_j$'s along with the original data points is in Figure 6, and you can see how well it matches the original function (the cloud) in Figure 8.

Figure 6: Cubic spline interpolant with interpolating nodes.
Lagrange Interpolation

Lagrange interpolation is defined as
\[ L(t) = \sum_{j=0}^{k} y_j\, \ell_j(t) \tag{8} \]
with $\ell_j(t)$ given by
\[ \ell_j(t) = \prod_{i=0,\, i \ne j}^{k} \frac{t - t_i}{t_j - t_i} \tag{9} \]
which will yield an order-9 polynomial:
\[
\begin{aligned}
L(x) = 97.3 + (&0.598324 + (0.00106141 + (3.564172124982928 \times 10^{-6} \\
 + (&1.0552218733964638 \times 10^{-8} + (7.933757878347993 \times 10^{-11} \\
 + (&1.744071613713679 \times 10^{-13} + (7.947314825996885 \times 10^{-15} \\
 + (&5.151899580586824 \times 10^{-17} - 4.410873647694004 \times 10^{-19}\,(x - 282.1)) \\
 &(x - 413.9))(x - 226.1))(x - 545.1))(x - 161.2))(x - 488.4))(x - 322.4))(x - 73.7))(x - 610.7)
\end{aligned}
\]
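For evaluation, a direct Octave sketch of equations (8) and (9); the helper name is my own, and this is the $O(k^2)$-per-point form rather than the barycentric one:

% Evaluate the Lagrange interpolant through (xs, ys) at t.
function L = lagrange_eval(t, xs, ys)
  k = numel(xs);
  L = 0;
  for j = 1:k
    lj = 1;
    for i = [1:j-1, j+1:k]
      lj = lj * (t - xs(i)) / (xs(j) - xs(i));   % builds ell_j(t)
    end
    L = L + ys(j) * lj;
  end
end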
The results of Lagrange interpolation can be seen in Figure 7.

Figure 7: Lagrange interpolating polynomial with the interpolating nodes.
Summary

The original function, data points, and the cubic spline and Lagrange interpolants are all plotted in Figure 8. On the interval $[x_0, x_n]$, the spline interpolant was quite good, only deviating visibly from the original function on intervals 2, 3, 6, and 7 (intervals where the function has the highest curvature). Lagrange interpolation did a better-than-expected job on intervals 3 through 8, with serious deviations on intervals 1, 2, and 9. Given that the Lagrange polynomial is 9th order, wild deviations are to be expected on intervals near the boundary.
Figure 8: Cubic spline in red, Lagrange interpolating polynomial in blue, and the interpolation nodes.
Hermite Interpolating polynomials cannot exist for some functions

Given the set of data:
\[
\begin{cases}
f_0 = f(1) = 1 \\
f_1 = f'(1) = 1 \\
f_2 = f'(1) = 2 \\
f_3 = f(2) = 1
\end{cases}
\]
we wish to show that there does not exist a Hermite interpolating polynomial. We know that if a polynomial $\tilde{H}$ were the Hermite interpolating polynomial, then $\tilde{H}(x_i) = f(x_i)$ and $\tilde{H}'(x_i) = f'(x_i)$.
Therefore we assume that $\tilde{H}(x)$ and $\tilde{H}'(x)$ are polynomials of degree three and two, respectively:
\[ \tilde{H}(x) = a_0 + a_1 x + a_2 x^2 + a_3 x^3 \]
\[ \tilde{H}'(x) = a_1 + 2a_2 x + 3a_3 x^2 \]
Applying the conditions given by the data set, we have a linear system $Ac = f$ for the coefficient vector $c = (a_0, a_1, a_2, a_3)^T$:
\[
\begin{bmatrix}
1 & 1 & 1 & 1 \\
0 & 1 & 2 & 3 \\
0 & 1 & 2 & 3 \\
1 & 2 & 4 & 8
\end{bmatrix}
\begin{bmatrix} c_0 \\ c_1 \\ c_2 \\ c_3 \end{bmatrix}
=
\begin{bmatrix} f(1) \\ f'(1) \\ f'(1) \\ f(2) \end{bmatrix}
=
\begin{bmatrix} 1 \\ 1 \\ 2 \\ 1 \end{bmatrix}.
\]
Here $\det(A) = 0$, therefore $A$ is singular and there exists no unique solution for $c$; therefore there is no Hermite interpolating polynomial for this set of nodes, regardless of their functional values.
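A quick Octave check of the singularity (rows two and three repeat, since both impose conditions on the derivative at x = 1):

A = [1 1 1 1; 0 1 2 3; 0 1 2 3; 1 2 4 8];
det(A)     % 0: A is singular
rank(A)    % 3 < 4: no unique solution for any right-hand side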
Example Numerical Integration

Numerical integration to a desired error

We wish to numerically integrate the function $f(x) = e^x$ between 0 and 1:
\[ \int_0^1 e^x \, dx \]
The exact integral is
\[ \int_0^1 e^x \, dx = e^x \Big|_0^1 = e - 1. \]
We wish to evaluate this integral within an absolute error of $5 \times 10^{-4}$ using Composite Trapezoid (ct) and Composite Simpson's (cs).
Composite Trapezoid

We know the error term for Composite Trapezoid is:
\[ E = -\frac{(b - a)}{12} h^2 f''(\xi) \]
Knowing $f(x) = e^x$, $f''(x) = e^x$. To guarantee that we will be within the error, we will choose $\xi$ such that $f''(\xi)$ is the maximum on the interval $[0, 1]$. This occurs at the right end, $x = 1$, so $f''(\xi) = e$. The error term then becomes
\[ |E| = \frac{1}{12} h^2 e. \]
If we desire the error term to be at most $5 \times 10^{-4}$:
\[ \left| \frac{1}{12} h^2 e \right| \le 5 \times 10^{-4} \]
Knowing that $h = \frac{1}{n}$, where $n$ is the number of subintervals, and solving for the smallest $n$ that will give the desired error:
\[ \left| \frac{e}{12 n^2} \right| \le 5 \times 10^{-4}
\quad\Longrightarrow\quad
n \ge \sqrt{\frac{e}{12 \cdot 5 \times 10^{-4}}} = 21.2849
\quad\Longrightarrow\quad
n = \lceil 21.2849 \rceil = 22. \]
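A minimal Octave sketch of Composite Trapezoid (the original computation used a C program from assignment 4; this is my own stand-in):

% Composite Trapezoid on [a, b] with n subintervals.
function I = comp_trap(f, a, b, n)
  x = linspace(a, b, n + 1);
  y = f(x);
  I = (b - a) / n * (sum(y) - (y(1) + y(end)) / 2);
end
% abs(comp_trap(@exp, 0, 1, 22) - (e - 1)) is about 2.96e-4 (cf. Table 2)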
Using Composite Trapezoid to integrate $f(x)$ between 0 and 1 with 22 subintervals, we get a value of 1.718578 (using a C program written for assignment 4) for an absolute error of
\[ E_{ct} = \left| \int f(x)\,dx - I_{ct}(f) \right| = 2.96172 \times 10^{-4}. \]
So we have met our goal and are under the desired error. However, upon further experimentation we can see that we can lower the number of subintervals.
Table 2: The error of Composite Trapezoid on $f(x)$ with differing numbers of subintervals.

m    error
22   2.96172  x 10^-4
21   3.246820 x 10^-4
20   3.579605 x 10^-4
19   2.958372 x 10^-4
18   4.419222 x 10^-4
17   4.954391 x 10^-4
16   5.593001 x 10^-4
As we can see from Table 2, we can actually go as low as n = 17 and still be within our desired error. It should be noted that this type of analysis would not be possible without being able to evaluate the derivative of $f(x)$ to gather concavity information.
Composite Simpson's

Following a similar method for determining the least number of subintervals, we begin with the error term for Composite Simpson's:
\[ E_{cs} = -\frac{(b - a)}{180}\left(\frac{h}{2}\right)^4 f^{(4)}(\xi) \]
\[ f^{(4)}(x) = e^x \quad\Longrightarrow\quad f^{(4)}(\xi) = e \]
\[ |E_{cs}| = \frac{h^4}{180 \cdot 2^4}\, e \]
Making the substitution $h = \frac{1}{n}$:
\[ |E_{cs}| = \frac{e}{2880\, n^4} \]
We want $n$ such that $E_{cs} \le 5 \times 10^{-4}$:
\[ \frac{e}{2880\, n^4} \le 5 \times 10^{-4} \]
therefore $n$ must be at least
\[ n = \left\lceil \sqrt[4]{\frac{e}{2880 \cdot 5 \times 10^{-4}}} \right\rceil = \lceil 1.034418 \rceil = 2. \]
Using Composite Simpson's to integrate $f(x)$ between 0 and 1 with 2 subintervals, we get a value of 1.71886115 for an absolute error of
\[ E_{cs} = \left| \int f(x)\,dx - I_{cs}(f) \right| = 5.79323 \times 10^{-4}, \]
which is actually larger than the desired error. In order to get an error below $5 \times 10^{-4}$ we have to use 4 subintervals in Composite Simpson's, which yields an error of $3.701346 \times 10^{-5}$. (The apparent failure of the bound is a bookkeeping issue: in the error term above, $h/2$ is the spacing between consecutive sample points, so the $n$ in the bound counts pairs of subintervals. Counting subintervals the way the program does, the bound for 2 subintervals is $e/2880 \approx 9.4 \times 10^{-4}$, which the computed error does satisfy.)
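A matching Octave sketch of Composite Simpson's, with an even number n of subintervals of width h = (b - a)/n, which reproduces the values above:

% Composite Simpson's on [a, b] with an even number n of subintervals.
function I = comp_simp(f, a, b, n)
  h = (b - a) / n;
  x = linspace(a, b, n + 1);
  y = f(x);
  I = h / 3 * (y(1) + y(end) + 4 * sum(y(2:2:end-1)) + 2 * sum(y(3:2:end-2)));
end
% comp_simp(@exp, 0, 1, 2) = 1.71886115;
% abs(comp_simp(@exp, 0, 1, 4) - (e - 1)) is about 3.70e-5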
Steffensen's root finding example

We wish to numerically find the first positive root of
\[ f(x) = e^{-x} - \sin(x). \]
We can see visually that the first positive root of this function exists and that $f$ has Lipschitz continuity near the root. We will use two methods to calculate the root: the standard Newton's method, which requires derivative evaluations, and Steffensen's method, which uses only function evaluations to estimate a derivative (an average slope). We will see that both methods converge quadratically.
Figure 9: $f(x) = e^{-x} - \sin(x)$ on the interval $-1.5 \le x \le 4.5$.
Steffensen's Method

Steffensen's Method is a method for solving nonlinear equations which is implemented by
\[ x^{(k+1)} = x^{(k)} - \frac{f(x^{(k)})}{\varphi(x^{(k)})}, \qquad
\varphi(x^{(k)}) = \frac{f\left(x^{(k)} + f(x^{(k)})\right) - f(x^{(k)})}{f(x^{(k)})}. \]
Using Steffensen's Method we wish to solve for the first positive root of
\[ f(x) = e^{-x} - \sin(x). \]
Using Octave, the following functions were used to calculate this root. A few words on the Octave functions: some of the constants chosen (particularly the upper bound on the for loop) were chosen after seeing how long it took for the method to converge, so that the program didn't error during runtime from a division by zero.

Steffens(1) = 0;                           % initial guess x = 0
for i = 1:4
  Steffens(i+1) = Steffens(i) - step(Steffens(i));
end
Steffens
SError = 0.588532743981861 - Steffens;     % accepted value minus iterates
y = [0, 1, 2, 3, 4];
semilogy(y, abs(SError))
xlabel('Iteration')
ylabel('Error')

% ff is the residual f(x) = exp(-x) - sin(x), assumed defined in its own file ff.m
function l = step(x)
  y = ff(x);                               % f(x)
  phi = ff(x + y) - y;                     % f(x + f(x)) - f(x)
  phi = phi / y;                           % phi(x)
  l = y / phi;                             % the Steffensen update f(x)/phi(x)
end
Using an initial guess of x = 0, Steffensen's method converged in 4 iterations to yield a solution x = 0.588532743981867. The intermediate values calculated and their corresponding errors are in Table 3. The accepted value, shown in the Steffensen Octave function, was the final value given by Newton's Method on a machine with
\[ \epsilon_{\text{machine}} = 2.22044604925031 \times 10^{-16}. \]
Table 3: Successive estimates for x given by Steffensen's Method and their errors.

Iteration  Value              Error
0          0.00
1          0.678614100575150  9.00813565932895 x 10^-2
2          0.589658358068303  1.12561408644185 x 10^-3
3          0.588532939946140  1.95964279137151 x 10^-7
4          0.588532743981867  6.10622663543836 x 10^-15
Newton's Method

Newton's method also solves nonlinear equations; it is given by:
\[ x^{(k+1)} = x^{(k)} - \frac{f(x^{(k)})}{f'(x^{(k)})}. \]
Newton's method was implemented in Octave by the function given below.
Newtons method was implemented in Octave code by thefunction given below.
Newtons ( 1) = 0;
f o r i = 1: 5
Newtons ( i+1) = Newtons ( i) ff( Newtons ( i) ) /dff ( Newtons ( i) ) ;
end
Random thoughts on Numerical Analysis 16
Newtons
NError = 0. 588532743981861 Newtons
y = [ 0 , 1 , 2 , 3 , 4 , 5 ] ;
semi l ogy ( y , abs ( NError ) )
xl abe l ( ' I t e r a t i o n ' )
yl abe l ( ' Error ' )
Newton's Method converged in 5 iterations with an initial guess of x = 0; all intermediate values and their corresponding errors are given in Table 4.

Table 4: Successive estimates for x given by Newton's Method.

Iteration  Value              Error
0          0.00
1          0.500000000000000  8.85327439818610 x 10^-2
2          0.585643816966433  2.88892701542842 x 10^-3
3          0.588529412626355  3.33135550611985 x 10^-6
4          0.588532743977419  4.44211334382771 x 10^-12
5          0.588532743981861  1.11022302462516 x 10^-16
Figure 10: The convergence of Newton's Method in 5 iterations.

Figure 11: The convergence of Steffensen's Method in 4 iterations.
Summary

Both Steffensen's and Newton's methods converged (to within $10^{-15}$), and both did so fairly quickly, using at most 5 iterations. By looking at Tables 3 and 4, as well as Figures 10 and 11, we can see that the errors of both dropped off quadratically. It is quite surprising that Steffensen's method converges quadratically without directly calculating the derivative.