Random Thoughts On Numerical Analysis
Matthew O'Connell
Given the model problem
\[ y'(t) = y, \qquad y(0) = 1, \]
we can see that $f(t_n, u_n) = u_n$. The update is then
\[ u_{n+1} = u_n + \frac{h}{2}\left[ u_n + (u_n + h u_n) \right] = u_n \left( 1 + h + \frac{h^2}{2} \right), \]
so the region of stability is $\left\{ z : \left| 1 + z + \frac{z^2}{2} \right| \le 1 \right\}$.
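The boundary of this region on the real axis can be checked numerically. Below is a small sketch (Python, not part of the original text) that samples the stability polynomial $R(z) = 1 + z + z^2/2$ from the derivation above:

```python
def R(z):
    # Stability polynomial of the two-stage method above.
    return 1.0 + z + z**2 / 2.0

# Sample the real axis: |R(z)| <= 1 exactly on the interval [-2, 0].
stable = [z / 100.0 for z in range(-300, 101) if abs(R(z / 100.0)) <= 1.0]
print(min(stable), max(stable))  # -2.0 0.0
```

The real interval of absolute stability is $[-2, 0]$, matching the second-order Taylor polynomial of $e^z$.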
Region of Stability: Classic 4-stage Runge-Kutta
Classic 4-stage Runge-Kutta is a one-step
method described as:
\[ u_{n+1} = u_n + h\,\Phi = u_n + \frac{h}{6}\left[ K_1 + 2K_2 + 2K_3 + K_4 \right]. \]
Expanding the stages for the model problem gives
\[ u_{n+1} = u_n + h u_n + \frac{h^2}{2} u_n + \frac{h^3}{6} u_n + \frac{h^4}{24} u_n \]
and the RK method can be written as:
\[ u_{n+1} = u_n \left[ 1 + h + \frac{h^2}{2} + \frac{h^3}{6} + \frac{h^4}{24} \right], \]
so the region of stability is $\left\{ z : \left| 1 + z + \frac{z^2}{2} + \frac{z^3}{6} + \frac{z^4}{24} \right| \le 1 \right\}$.
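As a quick numerical check (a Python sketch; `rk4_step` is my helper, not from the text), one classic RK4 step on $u' = u$ reproduces the amplification factor above:

```python
def rk4_step(f, t, u, h):
    # Classic 4-stage Runge-Kutta: K1..K4 as in the formula above.
    k1 = f(t, u)
    k2 = f(t + h / 2, u + h / 2 * k1)
    k3 = f(t + h / 2, u + h / 2 * k2)
    k4 = f(t + h, u + h * k3)
    return u + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

h = 0.1
amplification = rk4_step(lambda t, u: u, 0.0, 1.0, h)
polynomial = 1 + h + h**2 / 2 + h**3 / 6 + h**4 / 24
print(amplification, polynomial)  # agree to rounding error
```

The single step multiplies $u_n$ by exactly the fourth-order Taylor polynomial of $e^h$, which is where the stability polynomial comes from.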
Interpolating $f$ at the nodes $x = \{0, 1, 2, 3, 4\}$ leads to the linear system $Ac = f$:
\[
\begin{bmatrix}
1 & 0 & 0 & 0 & 0 \\
1 & 1 & 1 & 1 & 1 \\
1 & 2 & 4 & 8 & 16 \\
1 & 3 & 9 & 27 & 81 \\
1 & 4 & 16 & 64 & 256
\end{bmatrix}
\begin{bmatrix} c_0 \\ c_1 \\ c_2 \\ c_3 \\ c_4 \end{bmatrix}
=
\begin{bmatrix} f(0) \\ f(1) \\ f(2) \\ f(3) \\ f(4) \end{bmatrix}.
\]
$A$ is an ill-conditioned matrix, with a condition number $K_2 = 2592.9$. This is an ill-conditioned problem without regard to the function $f(x)$, as $A$ does not depend on the functional values being used to interpolate.
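This observation is easy to reproduce. The sketch below (Python with NumPy, not part of the original text) builds the same Vandermonde matrices and measures their 2-norm condition numbers:

```python
import numpy as np

def vandermonde_condition(nodes):
    # Rows are [1, x, x^2, ...], matching the system A c = f above.
    A = np.vander(nodes, increasing=True)
    return np.linalg.cond(A, 2)

# Nodes {0..4}, {0..5}, {0..6}: the condition number grows rapidly.
conds = [vandermonde_condition(np.arange(n, dtype=float)) for n in (5, 6, 7)]
print(conds)  # roughly 2.6e3, 5.8e4, 1.6e6
```

Each added node multiplies the condition number by roughly a factor of twenty here, which is the quantitative version of "the columns begin to point in the same direction."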
If we begin using more nodes, and thus more
equations, the condition number of the resulting
$A$ increases:
\[ x = \{0, 1, 2, 3, 4, 5\}: \quad K_2 = 5.7689 \times 10^4 \]
\[ x = \{0, 1, 2, 3, 4, 5, 6\}: \quad K_2 = 1.5973 \times 10^6 \]
So we can see from the increasing condition number that higher-order polynomial interpolants cannot be expected to give a desirable interpolation of a function. One way to think about the reason is that the column vectors of the matrix $A$ begin to point in the same direction. But we can also see this just by looking at the polynomials being used. As shown in Figure 4, as the degree of the polynomial increases, the distinction between successive degrees decreases.
Figure 4: Polynomials of degree 1 through 6. As the degree of the polynomial increases, the distinction between one polynomial and the next lessens.
Modified Euler and Heun methods as Runge-Kutta methods
Runge-Kutta Review
It can be shown that both the Heun and Modified Euler numerical methods for Cauchy ODEs are two-stage Runge-Kutta methods, so first a quick review of Runge-Kutta methods. Runge-Kutta methods are one-step methods. All one-step methods have the form:
\[ u_{n+1} = u_n + h\,\Phi(t_n, u_n; h) \]
For Runge-Kutta methods the step function is
given by:
\[ \Phi(t_n, u_n; h) = \sum_{i=1}^{s} b_i k_i \quad \text{for an } s\text{-stage method} \]
where the $k$'s are given by:
\[ k_i = f\left(t_n + c_i h,\; u_n + h \sum_{j=1}^{s} a_{ij} k_j\right) \]
and the $a$'s, $b$'s, and $c$'s are described in the
Butcher tableau for the method in the form:
\[
\begin{array}{c|ccc}
c_1 & a_{11} & \cdots & a_{1s} \\
\vdots & \vdots & \ddots & \vdots \\
c_s & a_{s1} & \cdots & a_{ss} \\
\hline
 & b_1 & \cdots & b_s
\end{array}
\]
As a side note: if $a_{ij} = 0$ for all $j \ge i$,
then the method is explicit.
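This condition says the Butcher matrix must be strictly lower triangular, which is easy to test mechanically. A small sketch (Python; `is_explicit` is my helper name, not from the text):

```python
def is_explicit(A):
    # Explicit iff every entry on or above the diagonal is zero
    # (a_ij = 0 for all j >= i, written here with 0-based indices).
    return all(A[i][j] == 0.0
               for i in range(len(A))
               for j in range(i, len(A)))

# Butcher matrix of classic RK4: strictly lower triangular, so explicit.
rk4_A = [[0, 0, 0, 0],
         [0.5, 0, 0, 0],
         [0, 0.5, 0, 0],
         [0, 0, 1, 0]]
print(is_explicit(rk4_A))  # True
```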
It can be (and indeed will be) shown that Heun and Modified Euler are two-stage explicit Runge-Kutta methods by exhibiting the coefficients $a$, $b$, and $c$ for each method.
Modified Euler as a Runge-Kutta Method
Beginning with the step function for Modified Euler:
\[ \Phi = f\left(t_n + \tfrac{1}{2}h,\; u_n + \tfrac{1}{2}h\, f(t_n, u_n)\right) \]
or the step function can be rewritten as:
\[ \Phi = 0 \cdot k_1 + 1 \cdot k_2, \quad \text{thus } b = (0, 1) \]
where the $k$'s are:
\[ k_1 = f(t_n + 0h,\; u_n + 0h) \]
\[ k_2 = f\left(t_n + \tfrac{1}{2}h,\; u_n + \tfrac{1}{2}h k_1\right) \]
In this form we can easily see that this is a
Runge-Kutta method with the coefficients in the
Butcher tableau below.
\[
\begin{array}{c|cc}
0 & 0 & 0 \\
\tfrac{1}{2} & \tfrac{1}{2} & 0 \\
\hline
 & 0 & 1
\end{array}
\]
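A generic explicit RK step driven by this tableau should reproduce the Modified Euler update exactly. The sketch below (Python; `explicit_rk_step` is my helper, not from the text) checks this on the model problem:

```python
def explicit_rk_step(f, t, u, h, A, b, c):
    # Build stages k_i = f(t + c_i h, u + h * sum_{j<i} a_ij k_j).
    k = []
    for i in range(len(b)):
        stage_u = u + h * sum(A[i][j] * k[j] for j in range(i))
        k.append(f(t + c[i] * h, stage_u))
    return u + h * sum(bi * ki for bi, ki in zip(b, k))

f = lambda t, u: u  # model problem u' = u
rk = explicit_rk_step(f, 0.0, 1.0, 0.1,
                      A=[[0, 0], [0.5, 0]], b=[0, 1], c=[0, 0.5])
direct = 1.0 + 0.1 * f(0.05, 1.0 + 0.05 * f(0.0, 1.0))  # Modified Euler directly
print(rk, direct)  # both ~1.105
```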
Heun as a Runge-Kutta Method
Using the same methodology, Heun's step function is
\[ \Phi = \frac{1}{2}\left[ f(t_n, u_n) + f\left(t_n + h,\; u_n + h f(t_n, u_n)\right) \right] \]
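The step function implies the two-stage tableau $b = (\tfrac{1}{2}, \tfrac{1}{2})$, $c = (0, 1)$, $a_{21} = 1$. A quick check (Python sketch, not from the text) that the two formulations agree:

```python
def heun_direct(f, t, u, h):
    # Heun's update exactly as written in the step function above.
    return u + h / 2 * (f(t, u) + f(t + h, u + h * f(t, u)))

def heun_as_rk(f, t, u, h):
    k1 = f(t, u)                 # c1 = 0, no a-terms
    k2 = f(t + h, u + h * k1)    # c2 = 1, a21 = 1
    return u + h * (0.5 * k1 + 0.5 * k2)  # b = (1/2, 1/2)

f = lambda t, u: u
a = heun_direct(f, 0.0, 1.0, 0.1)
b = heun_as_rk(f, 0.0, 1.0, 0.1)
print(a, b)  # both ~1.105
```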
\[ \tilde{C}_{j-1} = f_{j-1} - M_{j-1}\frac{h_j^2}{6}, \qquad C_{j-1} = \frac{f_j - f_{j-1}}{h_j} - \frac{h_j}{6}\left(M_j - M_{j-1}\right). \quad (6) \]
The $M$'s are given by solving the linear system:
\[
\begin{bmatrix}
2 & \lambda_0 & & & & \\
\mu_1 & 2 & \lambda_1 & & & \\
 & \mu_2 & 2 & \lambda_2 & & \\
 & & \ddots & \ddots & \ddots & \\
 & & & \mu_8 & 2 & \lambda_8 \\
 & & & & \mu_9 & 2
\end{bmatrix}
\begin{bmatrix} M_0 \\ M_1 \\ M_2 \\ \vdots \\ M_8 \\ M_9 \end{bmatrix}
=
\begin{bmatrix} d_0 \\ d_1 \\ d_2 \\ \vdots \\ d_8 \\ d_9 \end{bmatrix}
\]
with $\mu$, $\lambda$, and $d$ given by
\[ \mu_j = \frac{h_j}{h_j + h_{j+1}}, \qquad \lambda_j = \frac{h_{j+1}}{h_j + h_{j+1}}, \]
\[ d_j = \frac{6}{h_j + h_{j+1}}\left[ \frac{f_{j+1} - f_j}{h_{j+1}} - \frac{f_j - f_{j-1}}{h_j} \right] \quad (7) \]
for $j = 1, \ldots, (n-1)$.
Since we are using a natural cubic spline (the second derivative on the boundary nodes is fixed at zero),
\[ \lambda_0 = \mu_n = d_0 = d_n = 0. \]
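Equations (6)-(7) plus the natural end conditions are enough to assemble and solve the moment system. Below is a pure-Python sketch (hypothetical sample data, not from the text) that builds the tridiagonal system and solves it by elimination:

```python
def natural_spline_moments(x, f):
    # Assemble the system above: mu_j M_{j-1} + 2 M_j + lam_j M_{j+1} = d_j,
    # with rows 0 and n enforcing M_0 = M_n = 0 (d_0 = d_n = 0).
    n = len(x) - 1
    h = [x[j + 1] - x[j] for j in range(n)]  # h[j-1] is the document's h_j
    A = [[0.0] * (n + 1) for _ in range(n + 1)]
    d = [0.0] * (n + 1)
    A[0][0] = A[n][n] = 2.0
    for j in range(1, n):
        mu = h[j - 1] / (h[j - 1] + h[j])   # sub-diagonal
        lam = h[j] / (h[j - 1] + h[j])      # super-diagonal
        A[j][j - 1], A[j][j], A[j][j + 1] = mu, 2.0, lam
        d[j] = 6.0 / (h[j - 1] + h[j]) * (
            (f[j + 1] - f[j]) / h[j] - (f[j] - f[j - 1]) / h[j - 1])
    # Thomas-style forward elimination, then back substitution.
    for j in range(1, n + 1):
        w = A[j][j - 1] / A[j - 1][j - 1]
        A[j][j] -= w * A[j - 1][j]
        d[j] -= w * d[j - 1]
    M = [0.0] * (n + 1)
    for j in range(n - 1, 0, -1):
        M[j] = (d[j] - A[j][j + 1] * M[j + 1]) / A[j][j]
    return M

# Data sampled from x^2: interior second derivatives come out near 2,
# pulled down to 2.4/0 by the natural end conditions.
M = natural_spline_moments([0.0, 1.0, 2.0, 3.0], [0.0, 1.0, 4.0, 9.0])
print(M)  # ~[0.0, 2.4, 2.4, 0.0]
```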
With all required data at hand, we can now solve for all $S_j$'s, yielding the cubic polynomials for each interval $x_j \le x \le x_{j+1}$.
S_0 = 418.6 + 1.2189667 × 10^-6 (-400315.5 + t) - 0.12247557 (-73.7 + t)
S_1 = 409.1492 - 1.6434451 × 10^-6 (226.1 - t)^3 - 0.1525749 (-161.2 + t) - 0.000010415 (-161.2 + t)^3
S_2 = 398.5198 - 0.0000120706 (282.1 - t)^3 - 0.39774469 (-226.1 + t) - 0.000013359260 (-226.1 + t)^3
S_3 = 375.11501 - 0.000018563736 (322.4 - t)^3 - 0.6138761 (-282.1 + t) - 0.00006838420 (-282.1 + t)^3
S_4 = 352.7621 - 0.00005522812 (372.3 - t)^3 - 1.3596182 (-322.4 + t) + 0.000012738458 (-322.4 + t)^3
S_5 = 285.39997 + 0.00001528002 (413.9 - t)^3 - 1.1851325 (-372.3 + t) + 0.00005419465 (-372.3 + t)^3
S_6 = 227.48697 + 0.000030261715 (488.4 - t)^3 - 0.36623957 (-413.9 + t) - 0.000014273799 (-413.9 + t)^3
S_7 = 197.71870 - 0.000018754815 (545.1 - t)^3 - 0.78479305 (-488.4 + t) - 3.406446 × 10^-6 (-488.4 + t)^3
S_8 = 153.4311 - 2.9442917 × 10^-6 (610.7 - t)^3 - 0.8556581 (-545.1 + t) + 0.(-545.1 + t)^3
The plot of the $S_j$'s along with the original data points is in Figure 6, and you can see how well it matched the original function (cloud) in figure ??.
Figure 6: Cubic Spline Interpolant with interpolating nodes.
Lagrange Interpolation
Lagrange Interpolation is defined as
\[ L(t) = \sum_{j=0}^{k} y_j \, l_j(t) \quad (8) \]
with $l_j(t)$ given by
\[ l_j(t) = \prod_{i=0,\, i \ne j}^{k} \frac{t - t_i}{t_j - t_i} \quad (9) \]
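Equations (8)-(9) translate directly into code. A small sketch (Python, not from the text) evaluating the interpolant straight from the basis polynomials:

```python
def lagrange(t, nodes, values):
    # L(t) = sum_j y_j * l_j(t), with l_j the basis product of eq. (9).
    total = 0.0
    for j, (tj, yj) in enumerate(zip(nodes, values)):
        lj = 1.0
        for i, ti in enumerate(nodes):
            if i != j:
                lj *= (t - ti) / (tj - ti)
        total += yj * lj
    return total

# Through samples of x^2 the interpolant reproduces x^2 exactly (degree <= k).
nodes = [0.0, 1.0, 2.0]
values = [v**2 for v in nodes]
print(lagrange(1.5, nodes, values))  # 2.25
```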
which will yield an order 9 polynomial:

L(x) = 97.3 + (0.598324 + (0.00106141 + (3.564172124982928 × 10^-6 + (1.0552218733964638 × 10^-8 + (7.933757878347993 × 10^-11 + (1.744071613713679 × 10^-13 + (7.947314825996885 × 10^-15 + (5.151899580586824 × 10^-17 - 4.410873647694004 × 10^-19 (-282.1 + x)) (-413.9 + x)) (-226.1 + x)) (-545.1 + x)) (-161.2 + x)) (-488.4 + x)) (-322.4 + x)) (-73.7 + x)) (-610.7 + x)
The results of Lagrange interpolation can be seen in Figure 7.
Figure 7: Lagrange Interpolating polynomial with the interpolating nodes.
Summary
The original function, data points, and the cubic and Lagrange interpolations are all plotted in Figure 8. On the interval $[x_0, x_n]$, the spline interpolant was quite good, only deviating visibly from the original function on intervals 2, 3, 6 and 7 (intervals where the function has the highest curvature). Lagrange interpolation did a better than expected job on intervals 3 through 8, with serious deviations on intervals 1, 2 and 9. Given that the Lagrange polynomial was 9th order, wild deviations are to be expected on intervals near the boundary.
Figure 8: Cubic Spline in red, Lagrange Interpolating polynomial in blue, and the interpolation nodes.
Hermite Interpolating polynomials cannot exist for some functions
Given the set of data:
\[ f_0 = f(1) = 1, \quad f_1 = f'(1) = 1, \quad f_2 = f'(1) = 2, \quad f_3 = f(2) = 1 \]
we wish to show that there does not exist a Hermite interpolating polynomial. We know that if a polynomial $H$ were the Hermite interpolating polynomial, then $H(x_i) = f(x_i)$ and $H'(x_i) = f'(x_i)$. Therefore we assume that $H(x)$ and $H'(x)$ are of the form of third and second degree polynomials:
\[ H(x) = a_0 + a_1 x + a_2 x^2 + a_3 x^3 \]
\[ H'(x) = 0 + a_1 + 2a_2 x + 3a_3 x^2 \]
Applying the conditions given by the data set we
have a linear system $Ac = f$:
\[
\begin{bmatrix}
1 & 1 & 1 & 1 \\
0 & 1 & 2 & 3 \\
0 & 1 & 2 & 3 \\
1 & 2 & 4 & 8
\end{bmatrix}
\begin{bmatrix} c_0 \\ c_1 \\ c_2 \\ c_3 \end{bmatrix}
=
\begin{bmatrix} f(1) \\ f'(1) \\ f'(1) \\ f(2) \end{bmatrix}
=
\begin{bmatrix} 1 \\ 1 \\ 2 \\ 1 \end{bmatrix}.
\]
Here $\det(A) = 0$, therefore $A$ is singular and there exists no unique solution for $c$, and therefore no Hermite interpolating polynomial for this set of nodes regardless of their functional values.
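The singularity is visible immediately: rows 2 and 3 (both coming from $H'(1)$) are identical. A quick sketch (Python; the cofactor `det` is my helper, not from the text) confirming the determinant is zero:

```python
def det(A):
    # Determinant by cofactor expansion along the first row (exact for ints).
    if len(A) == 1:
        return A[0][0]
    return sum((-1) ** j * A[0][j] *
               det([row[:j] + row[j + 1:] for row in A[1:]])
               for j in range(len(A)))

A = [[1, 1, 1, 1],
     [0, 1, 2, 3],
     [0, 1, 2, 3],
     [1, 2, 4, 8]]
print(det(A))  # 0
```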
Example Numerical Integration
Numerical Integration to a desired error
We wish to numerically integrate the function $f(x) = e^x$:
\[ \int_0^1 e^x \, dx \]
between 0 and 1. The exact integral is
\[ \int_0^1 e^x \, dx = e^x \Big|_0^1 = e - 1 \]
We wish to evaluate this integral within an absolute error of $5 \times 10^{-4}$ using Composite Trapezoid (ct) and Composite Simpson's (cs).
Composite Trapezoid
We know the error term for Composite Trapezoid is:
\[ E = -\frac{(b-a)}{12} h^2 f''(\xi) \]
Knowing $f(x) = e^x$, we have $f''(x) = e^x$, which is bounded by $e$ on $[0, 1]$.
Composite Simpson's
For Composite Simpson's, to guarantee that we will be within the error we choose $n$ such that
\[ \frac{e}{2880\, n^4} \le 5 \times 10^{-4}, \]
therefore $n$ must be at least
\[ n = \left\lceil \sqrt[4]{\frac{e}{2880 \cdot 5 \times 10^{-4}}} \right\rceil = \lceil 1.034418 \rceil = 2. \]
Using Composite Simpson's to integrate $f(x)$ between 0 and 1 with 2 subintervals, we get a value of 1.71886115 for an absolute error of
\[ E_{cs} = \left| \int f(x)\,dx - I_{cs}(f) \right| = 5.79323 \times 10^{-4}, \]
which is actually larger than the desired error. In order to get an error below $5 \times 10^{-4}$ we have to use 4 subintervals in Composite Simpson's, which yields an error of $3.701346 \times 10^{-5}$.
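Both error figures are easy to reproduce. The sketch below (Python, not from the text) applies composite Simpson's rule to $e^x$ on $[0, 1]$:

```python
import math

def composite_simpson(f, a, b, n):
    # n even subintervals of width h; standard 1,4,2,...,2,4,1 weights.
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 == 1 else 2) * f(a + i * h)
    return h / 3 * s

exact = math.e - 1
errors = {n: abs(composite_simpson(math.exp, 0.0, 1.0, n) - exact)
          for n in (2, 4)}
print(errors)  # ~5.79e-4 for n=2, ~3.70e-5 for n=4
```

Doubling the subinterval count cuts the error by roughly $2^4 = 16$, as the $h^4$ error term predicts.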
Steffensen's root finding example
We wish to numerically find the first positive root of
\[ f(x) = e^{-x} - \sin(x) \]
We can see visually that the first positive root for this function exists and that $f$ has Lipschitz continuity near the root. We will use two methods to calculate the root: the standard Newton's method, which requires derivative evaluations, and Steffensen's method, which uses only function evaluations to estimate a derivative (an average slope). We will see that both methods converge quadratically.
Figure 9: $f(x) = e^{-x} - \sin(x)$ on the interval $-1.5 \le x \le 4.5$.
Steffensen's Method
Steffensen's Method is a method for solving nonlinear equations which is implemented by
\[ x^{(k+1)} = x^{(k)} - \frac{f(x^{(k)})}{\varphi(x^{(k)})}, \qquad \varphi(x^{(k)}) = \frac{f\left(x^{(k)} + f(x^{(k)})\right) - f(x^{(k)})}{f(x^{(k)})} \]
Using Steffensen's Method we wish to solve for the first positive root of
\[ f(x) = e^{-x} - \sin(x) \]
Using Octave, the following functions were used to calculate this root.
A few words on the Octave functions: some of the constants chosen in the functions (particularly the upper bound on the for loop) were chosen after seeing how long it took for the method to converge, so that the program didn't error during runtime with a division by zero.
Steffens = 0;
for i = 1:4
  Steffens(i+1) = Steffens(i) - step(Steffens(i));
end
Steffens
SError = 0.588532743981861 - Steffens;
y = [0, 1, 2, 3, 4];
semilogy(y, abs(SError))
xlabel('Iteration')
ylabel('Error')

function l = step(x)
  % ff(x) evaluates f(x) = exp(-x) - sin(x), defined elsewhere
  y = ff(x);
  phi = ff(x + y) - y;
  phi = phi / y;
  l = y / phi;
end
Using an initial guess of $x = 0$, Steffensen's method converged in 4 iterations to yield a solution $x = 0.588532743981867$.
The intermediate values calculated and their corresponding errors are in Table 3. The accepted value, shown in the Steffensen Octave function, was the final value given by Newton's Method on a machine with $\epsilon_{\text{machine}} = 2.22044604925031 \times 10^{-16}$.
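The same iteration can be written as a plain Python loop (a sketch mirroring the Octave routine above, not part of the original text):

```python
import math

def f(x):
    return math.exp(-x) - math.sin(x)

x = 0.0
for _ in range(4):  # 4 iterations, as in the Octave run above
    fx = f(x)
    if fx == 0.0:
        break
    phi = (f(x + fx) - fx) / fx  # slope estimate from function values only
    x -= fx / phi
print(x)  # ~0.588532743981867
```

Capping the loop at 4 iterations matters: once $f(x)$ reaches machine scale, the difference quotient for $\varphi$ can become $0/0$, which is the division-by-zero issue noted above.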
Table 3: Successive estimates for x given by Steffensen's Method and their errors.

Iteration   Value               Error
0           0.00
1           0.678614100575150   9.00813565932895 × 10^-2
2           0.589658358068303   1.12561408644185 × 10^-3
3           0.588532939946140   1.95964279137151 × 10^-7
4           0.588532743981867   6.10622663543836 × 10^-15
Newton's Method
Newton's method also solves nonlinear equations; Newton's method is given by:
\[ x^{(k+1)} = x^{(k)} - \frac{f(x^{(k)})}{f'(x^{(k)})} \]
Newton's method was implemented in Octave by the function given below.
Newtons(1) = 0;
for i = 1:5
  % dff(x) evaluates f'(x) = -exp(-x) - cos(x), defined elsewhere
  Newtons(i+1) = Newtons(i) - ff(Newtons(i)) / dff(Newtons(i));
end
Newtons
NError = 0.588532743981861 - Newtons;
y = [0, 1, 2, 3, 4, 5];
semilogy(y, abs(NError))
xlabel('Iteration')
ylabel('Error')
Newton's Method converged in 5 iterations with an initial guess of $x = 0$; all intermediate values and their corresponding errors are given in Table 4.
Table 4: Successive estimates for x given by Newton's Method.

Iteration   Value               Error
0           0.00
1           0.500000000000000   8.85327439818610 × 10^-2
2           0.585643816966433   2.88892701542842 × 10^-3
3           0.588529412626355   3.33135550611985 × 10^-6
4           0.588532743977419   4.44211334382771 × 10^-12
5           0.588532743981861   1.11022302462516 × 10^-16
Figure 10: The convergence of Newton's Method in 5 iterations.
Figure 11: The convergence of Steffensen's Method in 4 iterations.
Summary
Both Steffensen's and Newton's methods converged (to within $10^{-15}$) and both did so fairly quickly, using at most 5 iterations. By looking at Tables 3 and 4 as well as Figures 10 and 11, we can see that the errors of both dropped off quadratically. It is quite surprising that Steffensen's method converges quadratically without directly calculating the derivative.