ENUME 2024 - Lecture Notes
Lecturer: R. Z. Morawski
phone: (22) 234-7721, e-mail: roman.morawski@pw.edu.pl
The programme of study, leading to the B.Sc. degree in Information and Communications
Technology, is aimed at providing graduates with knowledge and skills specific to information
technology, control and robotics, electronics, and telecommunications.
Classes in the ENUME course are intended to contribute to achieving the above goal by enabling
the students of Information and Communications Technology to acquire the knowledge and skills
necessary for:
independently solving elementary numerical problems,
commissioning the solution of more complex numerical tasks to experts of numerical
methods, and
evaluating the solutions they provide.
Contents:
Analysis of numerical problems and algorithms
Solving linear algebraic equations
Solving nonlinear algebraic equations
Approximation of functions
Solving ordinary differential equations
Schedule:
Date Lectures Tests
February 22 2h
February 29 2h
March 7 2h
March 14 2h
March 21 2h
March 28 2h
April 4 1h 1 h (Test#1)
April 11 2h
April 18 2h
April 25 2h
May 9 2h
May 16 2h
May 23 2h
June 6 1h 1 h (Test#2)
June 13 2 h (Tests#1'&2')
Total 26 h 4h
Note: Test #1' is fully equivalent to Test #1, and Test #2' is fully equivalent to Test #2; all the ENUME students are eligible to take all the tests.
Teaching materials:
R. Z. Morawski, Lecture notes for ENUME students, Spring 2024
R. Z. Morawski, A. Miękina: Solved problems in
numerical methods, Oficyna Wydawnicza PW,
Warszawa 2021
Electronic textbooks on numerical methods available
in the Main Library of Warsaw University of
Technology (examples on the next slide)
Addresses of selected e-books on numerical methods, available in the Main Library of WUT:
https://primo-48tuw.hosted.exlibrisgroup.com/permalink/f/1nmcehm/48TUW_ejournals3710000000360322
https://primo-48tuw.hosted.exlibrisgroup.com/permalink/f/1nmcehm/48TUW_ejournals3710000000269608
https://primo-48tuw.hosted.exlibrisgroup.com/permalink/f/1nmcehm/48TUW_ejournals3710000000717968
https://primo-48tuw.hosted.exlibrisgroup.com/permalink/f/1nmcehm/48TUW_ejournals3710000000097667
https://primo-48tuw.hosted.exlibrisgroup.com/permalink/f/1nmcehm/48TUW_ejournals2550000000104910
https://primo-48tuw.hosted.exlibrisgroup.com/permalink/f/1nmcehm/48TUW_ejournals3710000000205415
https://primo-48tuw.hosted.exlibrisgroup.com/permalink/f/1nmcehm/48TUW_ejournals3710000000331863
https://primo-48tuw.hosted.exlibrisgroup.com/permalink/f/1nmcehm/48TUW_WTU01000196082
https://primo-48tuw.hosted.exlibrisgroup.com/permalink/f/1nmcehm/48TUW_ejournals2670000000501196
https://primo-48tuw.hosted.exlibrisgroup.com/permalink/f/1nmcehm/48TUW_ejournals2550000000045851
https://primo-48tuw.hosted.exlibrisgroup.com/permalink/f/1nmcehm/48TUW_ejournals2560000000324791
Tutors:
S. Kruszewski, e-mail: szymon.kruszewski@pw.edu.pl
P. Mazurek, e-mail: pawel.mazurek@pw.edu.pl
J. Wagner, e-mail: jakub.wagner@pw.edu.pl
Teaching materials:
P. Mazurek & J. Wagner, Introduction to MATLAB (3 parts)
J. Wagner, Guidelines for report preparation
J. Wagner, GitLab guidelines
Assignments:
A. Accuracy of computation ca. 8 h of homework
B. Approximation of functions ca. 10 h of homework
C. Ordinary differential equations ca. 12 h of homework
Partial grading:
Project assignment A: accuracy of computation - 15 pts
Project assignment B: approximation of functions - 15 pts
Project assignment C: ordinary differential equations - 20 pts
Test #1 - 20 pts
Test #2 - 30 pts
Total - 100 pts
Final grading
precondition: ≥ 25 pts from project assignments & ≥ 25 pts from tests
0 ≤ Σpts < 50 → 2
50 ≤ Σpts < 60 → 3
60 ≤ Σpts < 70 → 3.5
70 ≤ Σpts < 80 → 4
80 ≤ Σpts < 90 → 4.5
90 ≤ Σpts ≤ 100 → 5
Basic definitions

x = [x_1 x_2 … x_N]^T - an N-dimensional vector

A = [a_{m,n}], m = 1,…,M, n = 1,…,N - an (M×N)-dimensional matrix; its transpose is A^T = [a_{n,m}]

‖x‖_2 = √(Σ_{n=1}^{N} x_n²) - the Euclidean norm

‖x‖_∞ = sup{|x_n| : n = 1,…,N} - the maximum norm

A useful inequality:

‖A x‖_p ≤ ‖A‖_p · ‖x‖_p for p = 2, ∞
Definitions:

the eigenvalues (λ_n) and eigenvectors (v_n) of a square (N×N) non-singular matrix A:

A v_n = λ_n v_n with ‖v_n‖_2 = 1 for n = 1,…,N

the spectral radius of A:

sr(A) = sup{|λ_1|,…,|λ_N|}

Properties:

For a matrix A whose eigenvectors (v_n) are linearly independent:

A V = V Λ, i.e. A = V Λ V^(-1)

where: V = [v_1 … v_N] and Λ = diag(λ_1,…,λ_N)

For a symmetrical real-valued matrix A, the eigenvectors (v_n) are orthonormal, and - consequently - the matrix V is orthogonal, i.e.:

V = [v_1 … v_N] with V^(-1) = V^T

which implies: A = V Λ V^T.
Numerical problem: a mapping φ: D → W satisfying requirements R, where:

D - vectors of data: d_1, d_2, …
W - vectors of results: w_1, w_2, …
R - requirements

Example: evaluation of a polynomial y = P_N(x; a) = Σ_{n=0}^{N} a_n x^n:

vector of data: d = [x̂ a_0 a_1 … a_N]^T
vector of results: w = ŷ
requirements: R: ŷ = P_N(x̂; a).
Forms of NA description:
(traditional) mathematical notation
programming languages
flow diagrams
sequential notation

Example - the Horner scheme for evaluating ŷ = P_N(x̂; a) in the C language:

y = a[N]; for (n = N; n > 0; n--) y = y*x + a[n-1];

[Flow diagram: the same scheme as a loop ŷ ← ŷ·x̂ + a_{n-1}, starting from ŷ = a_N, with data D = {x̂, a_0, …, a_N} and result W = {ŷ}]
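The same scheme can be sketched in Python (the coefficient list and argument are illustrative):

```python
def horner(a, x):
    """Evaluate y = a[0] + a[1]*x + ... + a[N]*x**N by the Horner scheme."""
    y = a[-1]                      # start from the leading coefficient a_N
    for n in range(len(a) - 1, 0, -1):
        y = y * x + a[n - 1]       # y <- y*x + a_{n-1}
    return y

# P(x) = 1 + 2x + 3x^2 at x = 2: 1 + 4 + 12 = 17
print(horner([1, 2, 3], 2.0))  # 17.0
```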
Example of an ill-conditioned problem: with the coefficient a_19 of a degree-20 polynomial disturbed at the level of 6·10^(-6) %, some of its real zeros turn into complex zeros, e.g. 13.99 ± j·2.51.
The numerical algorithm for solving the numerical problem φ: D → W has the form of a superposition of elementary operations:

d ≡ v^(0) → v^(1) → … → v^(K-1) → v^(K) ≡ w

i.e. φ = φ_K ∘ φ_{K-1} ∘ … ∘ φ_2 ∘ φ_1.

In floating-point arithmetic each operation introduces an error η^(k), so the computed intermediate results satisfy:

ṽ^(k) = φ_k(ṽ^(k-1)) + η^(k) for k = 1,…,K

After linearisation:

δv^(k) ≅ T^(k) δv^(k-1) + η^(k) for k = 1,…,K

with T^(k) being a matrix of the coefficients of relative-error propagation.
For y = φ(x):

Δy ≅ (dφ/dx)·Δx, where Δy, Δx - absolute errors corrupting y and x, respectively

(more precisely: y(x+Δx) = y(x) + y'(x)Δx + ½y''(x)Δx² + …; for |Δx/x| ≪ 1 the first-order term dominates).

For the relative errors δy = Δy/y and δx = Δx/x:

δy ≅ T·δx, where T = (dy/dx)·(x/y) = d(ln y)/d(ln x)

For a function of many variables, y = φ(x_1, x_2, …, x_N):

δy ≅ Σ_{n=1}^{N} T_n δ_n

where: δ_n = Δx_n/x_n and T_n = (∂y/∂x_n)·(x_n/y) = ∂(ln y)/∂(ln x_n)
Useful first-order approximations of the "epsilon calculus" (for |δ_i| ≪ 1):

(1+δ_1)(1+δ_2) ≅ 1 + δ_1 + δ_2, 1/(1+δ) ≅ 1 - δ, ln(1+δ) ≅ δ

and the floating-point model: fl(a) = a(1+ε) with |ε| ≤ eps ≪ 1.
Example: two algorithms for computing y = x_1² - x_2²:

A1: v_1 = x_1², v_2 = x_2², y = v_1 - v_2

A2: v_1 = x_1 - x_2, v_2 = x_1 + x_2, y = v_1·v_2

The epsilon calculus applied to A1 gives:

ỹ = [x_1²(1+ε̃_1) - x_2²(1+ε̃_2)](1+ε_3) with |ε̃_1|, |ε̃_2|, |ε_3| of the order of eps

so that:

|δy^(1)| ≲ [(x_1² + x_2²)/|x_1² - x_2²|]·3·eps

which may be arbitrarily large for x_1² ≈ x_2²; the analogous analysis for A2 yields |δy^(2)| ≲ 3·eps regardless of the data. Hence A2 is numerically superior.
Notice: The dots indicating the exact values of variables (e.g. x , y , …) are usually omitted, if their absence does not
imply ambiguity.
R. Z. Morawski: Numerical methods (ENUME) – 2. Accuracy and complexity of computing 2-9
Warsaw University of Technology, Faculty of Electronics and Information Technology
2.4. Propagation of errors in the data
If small relative changes of the data d cause large relative changes of the result w, then the
numerical problem is called ill-conditioned.
The conditioning of the numerical problem (and, consequently, the error δy) is characterised by the matrix T,
which does not depend on the numerical algorithm used for solving this problem.
Example: for a problem with two data x_1, x_2 and two results y_1, y_2, the conditioning coefficients

T_{i,j} = (x_j/y_i)·(∂y_i/∂x_j)

all satisfy:

|T_{i,j}| ≤ 1 for i, j = 1, 2 ⇒ very good numerical conditioning.
The condition number of the problem at the data d:

cond(d) = sup_{Δd} [ (‖Δw‖/‖w‖) / (‖Δd‖/‖d‖) ]

where: δd = ‖Δd‖/‖d‖ and δw = ‖Δw‖/‖w‖.

The cumulative error-propagation matrices are:

T^(K→k) = T^(K)·T^(K-1) ⋯ T^(k+1) for k = 1,…,K-1; T^(K→K) = I
Example: two algorithms for computing y = 1/x - 1/(1+x) = 1/[x(1+x)] for x > 0:

A1: v_1 = 1/x, v_2 = 1/(1+x), y = v_1 - v_2

A2: v_1 = x, v_2 = 1+x, y = 1/(v_1·v_2)

The epsilon-calculus analysis yields |δy^(2)| ≲ 3·eps for A2, while the corresponding bound for A1 grows with x because of the subtraction of nearly equal numbers; for x ≫ 1: sup|δy^(1)| ≫ sup|δy^(2)|.
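The difference between A1 and A2 can be observed directly in double-precision arithmetic; a minimal sketch (the value x = 10^8 is illustrative):

```python
from fractions import Fraction

x = 1e8
y1 = 1/x - 1/(1 + x)     # algorithm A1: subtraction of nearly equal numbers
y2 = 1/(x * (1 + x))     # algorithm A2: no cancellation

# exact reference value computed in rational arithmetic
y = float(Fraction(1) / (Fraction(x) * (1 + Fraction(x))))

print(abs(y1 - y) / y)   # relative error of A1: many digits lost to cancellation
print(abs(y2 - y) / y)   # relative error of A2: stays at the eps level
```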
Numerical correctness of a numerical algorithm

An algorithm is numerically correct if the computed result may be interpreted as the exact result corresponding to slightly disturbed data, i.e. data corrupted by relative errors of the order of eps. For the example y = x_1² - x_2²:

ỹ = x_1²(1+ε̃_1) - x_2²(1+ε̃_2) with |ε̃_1|, |ε̃_2| of the order of eps

so the algorithm is numerically correct even though the error-amplification coefficient K = (x_1² + x_2²)/|x_1² - x_2²| may be large (K → 2 for x_2 → 0).
The most important techniques for evaluation of the coefficients of error propagation:
"epsilon" calculus
symbolic differentiation
numerical differentiation
statistical simulation
Example: the cut-off frequency of the RC filter below:

[Circuit: u_1(t) - R_1 in series - R_2 in parallel with C_2 - u_2(t)]

ω_3dB = (R_1 + R_2)/(R_1·R_2·C_2)

computed from disturbed element values R_1(1+δ_1), C_2(1+δ_2), …, carries a relative error determined by the corresponding coefficients T_n.
Example: the Hilbert matrix, a_{m,n} = 1/(m+n-1) for m, n = 1,…,N:

N        | 2     | 5        | 10        | 50
cond(A)  | 19.28 | 4.77·10^5 | 1.60·10^13 | 8.51·10^19
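The growth of cond(A) can be reproduced numerically; a short sketch, assuming NumPy is available:

```python
import numpy as np

def hilbert(N):
    """Hilbert matrix a[m,n] = 1/(m+n-1) for m, n = 1, ..., N."""
    m, n = np.indices((N, N))          # zero-based indices
    return 1.0 / (m + n + 1)           # equals 1/((m+1)+(n+1)-1)

for N in (2, 5, 10):
    print(N, np.linalg.cond(hilbert(N)))   # ~19.3, ~4.8e5, ~1.6e13
```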
The algorithm (back substitution for the triangular system Ā x = b̄ obtained after elimination):

x_N = b̄_N / ā_{N,N} and x_n = (b̄_n - Σ_{ν=n+1}^{N} ā_{n,ν} x_ν) / ā_{n,n} for n = N-1,…,1

where: Ā ≡ A^(N-1) and b̄ ≡ b^(N-1).

The triangular system results from N-1 elimination steps:

by elimination of x_1 from the equations number 2,…,N, under the assumption that a_{1,1} ≠ 0, using the multipliers:

l_{n,1} ≝ a_{n,1}/a_{1,1} for n = 2,…,N

which yields a^(1)_{n,ν} = a_{n,ν} - l_{n,1}·a_{1,ν} and b^(1)_n = b_n - l_{n,1}·b_1, i.e. a system with zeros in the first column below a_{1,1};

by elimination of x_2 from the equations number 3,…,N, under the assumption that a^(1)_{2,2} ≠ 0:

l_{n,2} ≝ a^(1)_{n,2}/a^(1)_{2,2} for n = 3,…,N

and so on, until only a^(N-1)_{N,N} x_N = b^(N-1)_N remains in the last row. The multipliers form the unit lower-triangular matrix:

L = [1 0 … 0; l_{2,1} 1 … 0; … ; l_{N,1} l_{N,2} … 1]
Example:

[3 1 6; 2 1 3; 1 1 1]·[x_1; x_2; x_3] = [2; 7; 4]

Step 1: l_{2,1} = 2/3, l_{3,1} = 1/3:

[3 1 6; 0 1/3 -1; 0 2/3 -1]·x = [2; 17/3; 10/3]

Step 2: l_{3,2} = (2/3)/(1/3) = 2:

[3 1 6; 0 1/3 -1; 0 0 1]·x = [2; 17/3; -8]

Hence:

L = [1 0 0; 2/3 1 0; 1/3 2 1] and U = [3 1 6; 0 1/3 -1; 0 0 1]
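The elimination above can be replayed exactly in rational arithmetic; a minimal sketch of LU factorisation without pivoting (the matrix is the one from the example):

```python
from fractions import Fraction as F

def lu(A):
    """Doolittle LU factorisation by Gaussian elimination (no pivoting)."""
    N = len(A)
    U = [row[:] for row in A]
    L = [[F(1) if i == j else F(0) for j in range(N)] for i in range(N)]
    for k in range(N - 1):
        for n in range(k + 1, N):
            L[n][k] = U[n][k] / U[k][k]          # multiplier l[n,k]
            for j in range(k, N):
                U[n][j] -= L[n][k] * U[k][j]     # eliminate below the pivot
    return L, U

A = [[F(3), F(1), F(6)], [F(2), F(1), F(3)], [F(1), F(1), F(1)]]
L, U = lu(A)
print(L)  # multipliers: l21 = 2/3, l31 = 1/3, l32 = 2
print(U)  # upper triangle: [3, 1, 6], [0, 1/3, -1], [0, 0, 1]
```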
The solution x̂^(1), obtained by means of the LU or LL^T factorisation, may be given the form:

x̂^(1) = x + Δx

where x is the exact solution of A x = b. The equation resulting from the substitution of x = x̂^(1) - Δx into A x = b:

A (x̂^(1) - Δx) = b

may be given the form:

A Δx = A x̂^(1) - b

An estimate Δx̂ of Δx may be obtained by solving this equation, using the already obtained results of the LU or LL^T factorisation of A. It may then be used for correction of the solution:

x̂^(2) = x̂^(1) - Δx̂
To calculate the determinant det A with a possibly small numerical error, one should use the LU or
LL^T factorisation of the matrix A in the following way:

det A = det(L·U) = det L · det U = det U = Π_{n=1}^{N} u_{n,n}

det A = det(L·L^T) = (det L)² = (Π_{n=1}^{N} l_{n,n})²
The inverse matrix Y = A^(-1) = [y_1 y_2 … y_N] satisfies A·Y = I, i.e.:

A·[y_1 y_2 … y_N] = [e_1 e_2 … e_N]
or in the form of a set of N systems of linear equations with the same matrix A :
A y1 e1
A y 2 e2
…
A y N eN
The use of the LU (or LLT) factorisation is recommended for solving those equations:
L U y1 e1
L U y 2 e2
…
L U y N eN
Example: The equation a x = b (a ≠ 0) may be solved by means of the following iterative algorithm:

x̂^(i+1) = (1 - a)·x̂^(i) + b for i = 0, 1, …

In general, a stationary iteration has the form x̂^(i+1) = M x̂^(i) + w, where x̂^(0) is the initial approximation of the solution, while M and w satisfy the coherence condition:

x = M x + w.

The method is convergent if sr(M) < 1; the speed of convergence is the speed of approaching zero by the norm:

‖x̂^(i) - x‖ for i = 0, 1, …

The iterations are stopped when ‖x̂^(i+1) - x̂^(i)‖ ≤ δx·‖x̂^(i+1)‖ or ‖A x̂^(i) - b‖ ≤ δb, where δx and δb are indicators of expected accuracy.

For the splitting A = L + U (L: the lower-triangular part including the diagonal, U: the strictly upper-triangular part), the coherence condition holds for:

M = -L^(-1)·U and w = L^(-1)·b

or, equivalently:

L x̂^(i+1) = v^(i) for i = 0, 1, 2, …

where:

v^(i) = -U x̂^(i) + b for i = 0, 1, 2, …
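A minimal sketch of this splitting-based iteration (the test system is illustrative, chosen diagonally dominant so that sr(M) < 1):

```python
def gauss_seidel(A, b, x0, tol=1e-12, max_iter=1000):
    """Iterate L x_new = b - U x_old, i.e. the splitting A = L + U
    (L: lower triangle incl. diagonal), via forward substitution."""
    N = len(A)
    x = list(x0)
    for _ in range(max_iter):
        x_new = x[:]
        for i in range(N):
            s = sum(A[i][j] * x_new[j] for j in range(i)) \
              + sum(A[i][j] * x[j] for j in range(i + 1, N))
            x_new[i] = (b[i] - s) / A[i][i]
        if max(abs(x_new[i] - x[i]) for i in range(N)) <= tol:
            return x_new
        x = x_new
    return x

# diagonally dominant system with exact solution [1, 2, 3]
A = [[4.0, 1.0, 0.0], [1.0, 4.0, 1.0], [0.0, 1.0, 4.0]]
x = gauss_seidel(A, [6.0, 12.0, 14.0], [0.0, 0.0, 0.0])
print(x)
```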
A system of N nonlinear equations:

f(x) = 0, where x = [x_1 … x_N]^T and f(x) = [f_1(x) … f_N(x)]^T

Example:

f(x) = [sin(x_1) + x_2 - 1; cos(x_2) + x_3 - 1; tan(x_3) + x_4 - 1], x = [x_1 x_2 x_3 x_4]^T
An iterative algorithm (IA) for solving a numerical problem d → w has the form:

w^(i+1) = Φ_i(w^(i), w^(i-1), …; d) for i = 0, 1, …

where:
Φ_i is an operator defining the IA
w^(i) is the i-th approximation of the solution to the problem d → w

Examples:

The Heron algorithm for solving w² - d = 0 (d > 0), i.e. for computing √d:

w^(i+1) = ½·(w^(i) + d/w^(i)) for i = 0, 1, …, with w^(0) = max(d, 1)

The Newton algorithm for solving f(w; d) = 0:

w^(i+1) = w^(i) - f(w^(i); d)/f'(w^(i); d) for i = 0, 1, …

Convergence means:

Δw^(i) ≡ w^(i) - w̄ → 0 for i → ∞
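A minimal sketch of the Heron algorithm (the argument d = 2 is illustrative):

```python
def heron(d, tol=1e-15):
    """Heron iteration w <- (w + d/w)/2 for sqrt(d), started at w0 = max(d, 1)."""
    w = max(d, 1.0)
    history = [w]
    while abs(w * w - d) > tol * d:
        w = 0.5 * (w + d / w)
        history.append(w)
    return w, history

w, hist = heron(2.0)
print(w)          # ~1.4142135623730951
print(len(hist))  # quadratic convergence: only a handful of iterations
```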
An IA is locally convergent if it is convergent for any (even very small) vicinity W_0 of the convergence point w̄. The analysis of local convergence may be performed by means of the error function:

Δw^(i+1) = Φ̃(Δw^(i); d) for i = 0, 1, …

taking into account that the error should diminish to zero for i → ∞. The local convergence of a scalar IA is characterised by an approximation of the function Φ̃ in the vicinity of w̄:

‖Δw^(i+1)‖ ≅ C·‖Δw^(i)‖^p (or Δw^(i+1) ≅ C·(Δw^(i))^p in the scalar case)

where:

C > 0 is the coefficient of local convergence and p ∈ [1, ∞) is the exponent of local convergence.

Using the simplified notation w_i ≡ w^(i) and Δ_i ≡ Δw^(i), the parameters of local convergence may be determined from the Taylor series expansion of φ(w_i) in the vicinity of w̄:

w_{i+1} = φ(w_i) = φ(w̄) + φ'(w̄)·Δ_i + ½φ''(w̄)·Δ_i² + …

Since φ(w̄) = w̄:

Δ_{i+1} = φ'(w̄)·Δ_i + ½φ''(w̄)·Δ_i² + …
Example: for the iteration φ(w) = w - (2/15)·(w³ - d), converging to w̄ = d^(1/3), the exponent of convergence is p = 1 because:

φ'(w̄) = 1 - (2/5)·w̄² = 1 - (2/5)·d^(2/3) ≠ 0

Consequently:

C = |1 - (2/5)·d^(2/3)| ≤ 3/5 for d ∈ [1, 8]

Example: for the Heron iteration φ(w) = ½·(w + d/w), converging to w̄ = √d, the exponent of convergence is p = 2 because:

φ'(w̄) = ½·(1 - d/w̄²) = 0 and φ''(w̄) = d/w̄³ = 1/√d ≠ 0

Thus:

C = ½·|φ''(w̄)| = 1/(2√d)
In floating-point arithmetic, each iteration w_{i+1} = w_i - (2/15)·(w_i³ - d) is additionally corrupted by rounding errors of the order of eps; as a consequence, the error |Δ_i| cannot be reduced below a limiting level of the order of K·eps, where K depends on sup{C(d) : d ∈ D}.
Bisection method

For f(x) continuous in [a_0, b_0] and satisfying the condition f(a_0)·f(b_0) < 0, the bisection method is defined by the formulae: x_i = ½·(a_i + b_i) and:

a_{i+1} = a_i, b_{i+1} = x_i if f(a_i)·f(x_i) < 0
a_{i+1} = x_i, b_{i+1} = b_i if f(a_i)·f(x_i) > 0

for i = 0, 1, 2, …

The number of iterations necessary for reducing the error below δ is:

I_min = ⌈log_2(|b_0 - a_0|/δ)⌉
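A minimal sketch of the method, checking the iteration count against I_min (the function and interval are illustrative):

```python
import math

def bisect(f, a, b, delta):
    """Bisection for f continuous on [a, b] with f(a)*f(b) < 0."""
    assert f(a) * f(b) < 0
    i = 0
    while (b - a) / 2 > delta:
        x = (a + b) / 2
        if f(a) * f(x) < 0:
            b = x          # root in [a, x]
        else:
            a = x          # root in [x, b]
        i += 1
    return (a + b) / 2, i

delta = 1e-6
root, i = bisect(lambda x: x**3 - 2, 0.0, 2.0, delta)
print(root)                                      # ~1.259921 (cube root of 2)
print(i, math.ceil(math.log2(2.0 / delta)))      # iterations vs. the bound I_min
```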
For the method of chords with a fixed point x_0 (regula falsi), the iteration function is:

φ(x) = [x_0·f(x) - x·f(x_0)] / [f(x) - f(x_0)]

Since f(x̄) = 0:

φ'(x̄) = 1 + (x̄ - x_0)·f'(x̄)/f(x_0)

Thus: p = 1 and C = |1 + (x̄ - x_0)·f'(x̄)/f(x_0)|
For the Newton method, φ(x) = x - f(x)/f'(x), so:

φ'(x) = f(x)·f''(x) / [f'(x)]²

Since f(x̄) = 0 (and f'(x̄) ≠ 0):

φ'(x̄) = 0 and φ''(x̄) = f''(x̄)/f'(x̄)

Thus: p = 2 and C = ½·|f''(x̄)/f'(x̄)|
The secant method:

x_{i+1} = x_i - f(x_i)·(x_i - x_{i-1}) / [f(x_i) - f(x_{i-1})] for i = 1, 2, …

Expanding f about x̄ (f(x̄) = 0): f(x̄ + Δ) = f'(x̄)·Δ + ½f''(x̄)·Δ² + …, which leads to the error relation:

Δ_{i+1} ≅ [f''(x̄)/(2f'(x̄))]·Δ_i·Δ_{i-1}

and hence to the exponent of convergence p = (1 + √5)/2 ≈ 1.618.
ERRORS VS. NUMBER OF ITERATIONS

METHOD                          | 1        | 2        | 3        | 4         | 5        | 6
BISECTION (p = 1), x_0 = 3, x_1 = 1.5 | 1.3·10^-1 | 6.3·10^-2 | 3.1·10^-2 | 1.6·10^-2 | 7.8·10^-3 | 3.9·10^-3
SECANT (p ≈ 1.618), x_0 = 3, x_1 = 1.5 | 7.1·10^-2 | 1.5·10^-2 | 7.5·10^-4 | 7.5·10^-6 | 3.8·10^-9 | 1.9·10^-14
NEWTON (p = 2), x_0 = 3        | 1.0·10^-1 | 5.9·10^-3 | 2.3·10^-5 | 3.5·10^-10 | K·eps   | K·eps
Müller's method fits the quadratic a_i·(x - x_i)² + b_i·(x - x_i) + c_i, with c_i = f(x_i), either to the values of f, f', f'' at x_i (version I) or to the values of f at x_i, x_{i-1}, x_{i-2} (version II).
The iterative formula, implemented in complex arithmetic, is identical for both versions:

x_{i+1} = x_i - 2c_i / [b_i + sgn(b_i)·√(b_i² - 4·a_i·c_i)]

The convergence exponent is: p = 3 for version I, and p ≈ 1.84 for version II.
The Newton method for systems:

x^(i+1) = x^(i) - [f'(x^(i))]^(-1)·f(x^(i)) for i = 0, 1, …

where f'(x) = [∂f_m/∂x_n] is the N×N Jacobian matrix of f.

The corresponding algorithm has the form:
compute f(x^(i)) and f'(x^(i));
solve f'(x^(i))·Δx^(i) = -f(x^(i)) with respect to Δx^(i) ≡ x^(i+1) - x^(i);
compute x^(i+1) = x^(i) + Δx^(i);
return to the first step if ‖Δx^(i)‖ > δx or ‖Δx^(i)‖ > ‖x^(i)‖·δx.
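A minimal sketch of the algorithm for N = 2, with the linear system solved by Cramer's rule (the example system is illustrative, not taken from the notes):

```python
def newton_system(f, jac, x0, tol=1e-12, max_iter=50):
    """Newton iteration: solve f'(x) dx = -f(x), then x <- x + dx."""
    x = list(x0)
    for _ in range(max_iter):
        f1, f2 = f(x)
        (a, b), (c, d) = jac(x)
        det = a * d - b * c               # 2x2 Cramer solve of J dx = -f
        dx1 = (-f1 * d + f2 * b) / det
        dx2 = (-f2 * a + f1 * c) / det
        x = [x[0] + dx1, x[1] + dx2]
        if max(abs(dx1), abs(dx2)) <= tol:
            return x
    return x

# illustrative system: x1^2 + x2^2 - 1 = 0, x1 - x2 = 0 -> (sqrt(2)/2, sqrt(2)/2)
f = lambda x: (x[0]**2 + x[1]**2 - 1.0, x[0] - x[1])
jac = lambda x: ((2*x[0], 2*x[1]), (1.0, -1.0))
x = newton_system(f, jac, [1.0, 0.5])
print(x)
```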
Any method for solving scalar nonlinear equations, described in § 3.3, may be used for finding real
zeros of a polynomial. A zero already found may be eliminated using the so-called linear deflation.
A pair of complex roots x̄ and x̄* may be found using Müller's method. They may be eliminated
using the so-called quadratic deflation. The coefficients of the quadratic polynomial:

m(x) = (x - x̄)·(x - x̄*) = x² - p·x + r

whose values are p = 2·Re(x̄) and r = |x̄|², are used for finding the polynomial:

f_{N-2}(x) = b_{N-2}·x^(N-2) + … + b_1·x + b_0

such that:

f_N(x) = (x² - p·x + r)·f_{N-2}(x)
Example: The errors δ_n of the estimates x̃_n = x_n·(1 + δ_n) of the roots x_n (n = 1, 2, 3) of the polynomial:

f(x) = x³ - 30x² + 299x - 990

may be evaluated in the following way:

f(x̃_n) = x̃_n³ - 30x̃_n² + 299x̃_n - 990 ≅ (3x_n³ - 60x_n² + 299x_n)·δ_n

Since x_n³ - 30x_n² + 299x_n - 990 = 0, the errors may be determined as follows:

δ_n ≅ f(x̃_n) / (3x_n³ - 60x_n² + 299x_n)
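The roots of this polynomial are 9, 10 and 11, so the error formula can be checked numerically; a short sketch (the disturbance level is illustrative):

```python
def f(x):
    return x**3 - 30*x**2 + 299*x - 990

def xfp(x):
    return 3*x**3 - 60*x**2 + 299*x   # x * f'(x)

delta = 1e-6                           # illustrative relative disturbance
for xn in (9.0, 10.0, 11.0):
    print(f(xn))                       # 0.0: exact roots 9, 10, 11
    xt = xn * (1 + delta)
    # ratio ~1 confirms: delta_n ~ f(x~_n) / (3x^3 - 60x^2 + 299x)
    print(f(xt) / (xfp(xn) * delta))
```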
[Figure: two plots of y vs. x for x ∈ [-1, 1] (y axis from -0.2 to -0.06)]
Having N 1 pairs:
xn , f xn for n 0, ..., N
determine a function fˆN x interpolating a function f x , i.e. satisfying the following conditions:
fˆN xn f xn for n 0, ..., N
The solution of the interpolation problem consists in estimation of the parameters p of a known
function f̂_N(x) ≡ f̂_N(x; p), where f̂_N(x) is (most frequently):

an algebraic polynomial: f̂_N(x; p) = Σ_{n=0}^{N} a_n·x^n with p = [a_0 … a_N]^T

a rational function: f̂_N(x; p) = (a_0 + a_1·x + … + a_{N-M-1}·x^(N-M-1)) / (1 + b_1·x + … + b_M·x^M) with p = [a_0 … a_{N-M-1} b_1 … b_M]^T

The error of polynomial interpolation satisfies:

|f̂_N(x) - f(x)| ≤ [Y_{N+1}/(N+1)!]·Π_{n=0}^{N} |x - x_n|, where Y_{N+1} = sup{|f^(N+1)(x)| : x ∈ [x_0, x_N]}

Conclusion: a "large" error of interpolation may happen in the vicinity of "large" changes of f(x) or
its derivatives.
The Lagrange polynomial:

f̂_N(x) = Σ_{n=0}^{N} f(x_n)·L_n(x) with L_n(x) = Π_{ν=0, ν≠n}^{N} (x - x_ν)/(x_n - x_ν)
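A minimal sketch of the Lagrange formula (the test function is illustrative; a degree-3 polynomial is reproduced exactly by four nodes):

```python
def lagrange(nodes, x):
    """Evaluate the Lagrange interpolation polynomial through (x_n, f_n)."""
    total = 0.0
    for n, (xn, fn) in enumerate(nodes):
        Ln = 1.0
        for nu, (xv, _) in enumerate(nodes):
            if nu != n:
                Ln *= (x - xv) / (xn - xv)   # basis polynomial L_n(x)
        total += fn * Ln
    return total

g = lambda x: x**3 - 2*x + 1
nodes = [(x, g(x)) for x in (-1.0, 0.0, 1.0, 2.0)]
print(lagrange(nodes, 0.5))  # ~g(0.5) = 0.125
```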
The function s(x; x_0, …, x_N), where x_0 < x_1 < … < x_N, is a polynomial spline function of order M, if:

it is a polynomial of order not higher than M in each subinterval [x_n, x_{n+1}];
it has continuous derivatives up to order M-1 in [x_0, x_N].

Additional boundary conditions for a spline function of order 3 (alternatively):
a) s''(x_0) = α_2, s''(x_N) = β_2
b) s'(x_0) = α_1, s'(x_N) = β_1
c) periodicity: s(x_0) = s(x_N)
Example: a spline of order 2, s(x) = a_n·(x - x_n)² + b_n·(x - x_n) + c_n for x ∈ [x_n, x_{n+1}], interpolating the data (0, 1), (1, 2), (2, 4), (3, 9), with b_0 = 0 assumed. The interpolation and continuity conditions for s:

c_n = y_n and a_n + b_n + c_n = y_{n+1}, i.e.: a_0 + b_0 = 1, a_1 + b_1 = 2, a_2 + b_2 = 5

The continuity conditions for the first derivative of the function have the form:

s'(x_1): 2a_0 + b_0 = b_1 and s'(x_2): 2a_1 + b_1 = b_2

The solution of those equations has the form:

a_0 = 1, a_1 = 0 and a_2 = 3
b_0 = 0, b_1 = 2 and b_2 = 2

Hence:

s(x) = x² + 1 for x ∈ [0, 1]
s(x) = 2x for x ∈ [1, 2]
s(x) = 3(x - 2)² + 2(x - 2) + 4 for x ∈ [2, 3]

[Figure: y = s(x) for x ∈ [0, 3]]
A vector of parameters p = [p_1 … p_K]^T is sought for, the vector that makes the linear combination

f̂(x; p) = Σ_{k=1}^{K} p_k·φ_k(x)

the best approximation of the data {(x_n, f(x_n)) : n = 1, 2, …, N}

with respect to the following (least-squares) criterion:

J_2(p) = Σ_{n=1}^{N} [f̂(x_n; p) - f(x_n)]² (or its mean, (1/N)·Σ_{n=1}^{N} [f̂(x_n; p) - f(x_n)]²)

Examples of φ_k(x): 1, x, x², …, x^(K-1)

The solution p̂ is obtained in numerical form from the normal equations:

(Φ^T·Φ)·p = Φ^T·y, where Φ = [φ_k(x_n)] and y = [f(x_1) … f(x_N)]^T
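A minimal sketch of the normal-equations approach (monomial basis; the data, lying exactly on a line, are illustrative):

```python
def ls_poly(xs, ys, K):
    """Least-squares fit of a degree-(K-1) polynomial via the normal
    equations (Phi^T Phi) p = Phi^T y, solved by Gaussian elimination."""
    N = len(xs)
    Phi = [[x**k for k in range(K)] for x in xs]
    A = [[sum(Phi[n][i] * Phi[n][j] for n in range(N)) for j in range(K)]
         for i in range(K)]
    rhs = [sum(Phi[n][i] * ys[n] for n in range(N)) for i in range(K)]
    for k in range(K - 1):                 # elimination
        for i in range(k + 1, K):
            l = A[i][k] / A[k][k]
            for j in range(k, K):
                A[i][j] -= l * A[k][j]
            rhs[i] -= l * rhs[k]
    p = [0.0] * K                          # back substitution
    for i in range(K - 1, -1, -1):
        p[i] = (rhs[i] - sum(A[i][j] * p[j] for j in range(i + 1, K))) / A[i][i]
    return p

xs = [0.0, 0.5, 1.0, 1.5, 2.0]
ys = [1.0 + 2.0 * x for x in xs]   # data exactly on the line 1 + 2x
print(ls_poly(xs, ys, 2))          # ~[1.0, 2.0]
```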
A vector of parameters p = [p_1 … p_K]^T is sought for, the vector that makes the linear combination

f̂(x; p) = Σ_{k=1}^{K} p_k·φ_k(x)

the best approximation of a function f(x) on [a, b] with respect to the criterion:

J(p) = ∫_a^b [f̂(x; p) - f(x)]² dx

A particularly simple solution may be obtained if {φ_k} is a subset of an orthogonal basis, i.e.:

⟨φ_k | φ_j⟩ = ∫_a^b φ_k(x)·φ_j(x) dx = 0 for k ≠ j

If, moreover, ‖φ_k‖² = ⟨φ_k | φ_k⟩ = 1 for k = 1, …, K, then:

A = I and p_k = ⟨f | φ_k⟩ for k = 1, …, K

With the weighted inner product:

⟨φ_k | φ_j⟩_w = ∫_a^b w(x)·φ_k(x)·φ_j(x) dx

the orthogonalisation of the monomials x^k, k = 0, 1, …, for various intervals [a, b] and weighing functions
w(x) generates various families of orthogonal polynomials.
Example: Chebyshev polynomials of the I kind - orthogonal in [-1, 1] with w(x) = 1/√(1 - x²):

T_0(x) = 1, T_1(x) = x
T_k(x) = 2x·T_{k-1}(x) - T_{k-2}(x) for k = 2, 3, …

∫_{-1}^{1} w(x)·T_k(x)·T_j(x) dx = π for k = j = 0; π/2 for k = j ≠ 0; 0 for k ≠ j

Example: Hermite polynomials - orthogonal in (-∞, ∞) with w(x) = exp(-x²):

H_0(x) = 1, H_1(x) = 2x
H_k(x) = 2x·H_{k-1}(x) - (2k - 2)·H_{k-2}(x) for k = 2, 3, …
Notice: If the functions φ_1(x), φ_2(x), … are orthogonal with respect to the inner product:

⟨φ_k | φ_j⟩_w = ∫_a^b w(x)·φ_k(x)·φ_j(x) dx

then the functions ψ_k(x) = √(w(x))·φ_k(x) are orthogonal with respect to the inner product:

⟨ψ_k | ψ_j⟩ = ∫_a^b ψ_k(x)·ψ_j(x) dx
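The Chebyshev recurrence can be checked against the closed form T_k(cos θ) = cos(kθ); a minimal sketch:

```python
import math

def chebyshev_T(k, x):
    """T_k(x) via the recurrence T_k = 2x*T_{k-1} - T_{k-2}."""
    t_prev, t = 1.0, x          # T_0 and T_1
    if k == 0:
        return t_prev
    for _ in range(k - 1):
        t_prev, t = t, 2.0 * x * t - t_prev
    return t

theta = 0.4
for k in range(6):
    # the recurrence reproduces the closed form T_k(cos(theta)) = cos(k*theta)
    print(k, chebyshev_T(k, math.cos(theta)), math.cos(k * theta))
```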
The coefficients of the Padé (rational) approximant:

f̂(x; p) = (a_0 + a_1·x + … + a_L·x^L) / (1 + b_1·x + … + b_M·x^M) for L = 0, 1, …; M = 1, 2, …

are determined in such a way as to satisfy the equality up to L + M + 1 terms:

(1 + b_1·x + … + b_M·x^M)·(c_0 + c_1·x + c_2·x² + c_3·x³ + …) = a_0 + a_1·x + … + a_L·x^L

where c_0 + c_1·x + c_2·x² + c_3·x³ + … is the MacLaurin expansion of f(x).
Thus, the parameters a_0, …, a_L, b_1, …, b_M should satisfy the following equations:

c_0 = a_0, c_l + Σ_{k=0}^{l-1} c_k·b_{l-k} = a_l for l = 1, …, L

c_l + Σ_{k=0}^{l-1} c_k·b_{l-k} = 0 for l = L+1, …, M

c_l + Σ_{k=l-M}^{l-1} c_k·b_{l-k} = 0 for l = M+1, …, M+L
The accuracy of interpolation and approximation has been assessed using the criterion:

Δf = √( (1/2)·∫_{-1}^{1} [f̂(x) - f(x)]² dx ) (root-mean-square error = RMS error)
[Figure: RMS error (log scale, 10^-20 … 10^5) vs. number of nodes / order of polynomial (0 … 60), for: Lagrange interpolation; cubic spline interpolation; LS polynomial approximation (N = 100); LS approximation with Chebyshev polynomials (N = 100)]
[Figure: the same comparison for two other test functions - RMS error vs. number of nodes / order of polynomial]
Using the elemental equations and Kirchhoff's laws, one may model the following network:

[Circuit: source e(t), resistors R_1, R_2, capacitors C_1, C_2, node voltages u_1, u_2]
In scalar notation: Find the functions y_1(t), …, y_M(t) satisfying the following set of ODEs:

dy_m(t)/dt = f_m(t, y_1(t), …, y_M(t)) for t ∈ [0, T]

given the initial conditions, i.e. the values of y_m(0) for m = 1, …, M.

In vector notation: Find a vector of functions y(t) = [y_1(t) … y_M(t)]^T satisfying the following system of ODEs:

dy(t)/dt = f(t, y(t)) for t ∈ [0, T]

given the initial condition, i.e. the vector y(0) = [y_1(0) … y_M(0)]^T.
The algorithm for solving the system of ODEs from Section 6.1:

u'(t) = A·u(t) + b·e(t) for t ∈ [0, T]

based on the above method takes on the form:

u_n = u_{n-1} + h·[A·u_{n-1} + b·e(t_{n-1})] for n = 1, 2, … and u_0 = u(0)
A numerical method for solving an IVP is convergent if, for any set of ODEs having a unique solution
y(t), the approximate solution y_n, dependent on the step h, converges to y(t_n) for h → 0, i.e.
the global error of the numerical solution diminishes to zero:

e_n = y_n - y(t_n) → 0 for h → 0, n = 0, 1, …

A method is of order p if its local error satisfies r_n = O(h^(p+1)). The Taylor series of the RHS of the formula y_n = y_{n-1} + h·f(t_{n-1}, y_{n-1}) has the form:

y(t_{n-1}) + h·y'(t_{n-1}) = y(t_n) - ½·y''(t_n)·h² + …

Hence the conclusion:

p = 1, r_n ≅ ½·y''(t_n)·h²
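The order p = 1 can be observed numerically: halving the step should halve the global error; a minimal sketch on the test problem y' = -y:

```python
import math

def euler(f, y0, T, n):
    """Explicit Euler with n steps of size h = T/n."""
    h = T / n
    t, y = 0.0, y0
    for _ in range(n):
        y = y + h * f(t, y)
        t += h
    return y

f = lambda t, y: -y
err1 = abs(euler(f, 1.0, 1.0, 100) - math.exp(-1.0))
err2 = abs(euler(f, 1.0, 1.0, 200) - math.exp(-1.0))
print(err1 / err2)  # ~2: halving h halves the global error (p = 1)
```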
z'(t) = Λ·z(t) + V^(-1)·B·u(t), where z(t) = V^(-1)·y(t).

Thus, the results of stability analysis obtained for a scalar ODE, called the "test equation":

y'(t) = λ·y(t) with λ ∈ C, for y(0) = 1 and t ∈ [0, T]

may be generalised to linear systems of ODEs.
% REGION OF STABILITY FOR EXPLICIT EULER METHOD.
% NOTATION: h*lambda = x+j*y
% Range of x and y coordinates:
N=300;xx=linspace(-2.25,0.25,N);yy=linspace(-1.25,1.25,N);
% Generation of grid with all points (x,y):
[x,y] = meshgrid(xx,yy);
% Representation of inequality in logical matrix:
hl=x+1j*y;I=abs(1+hl)<1;
% Plot of points (x,y) that satisfy inequality:
scatter(x(I),y(I),0.3);grid on
xlabel('Re[h*lambda]');ylabel('Im[h*lambda]');
axis('equal');axis([-2.25 0.25 -1.25 1.25]);
The implicit Euler method is stable when |1/(1 - hλ)| < 1, i.e. for all hλ with Re(hλ) < 0.
% REGION OF STABILITY FOR IMPLICIT EULER METHOD.
% NOTATION: h*lambda = x+j*y
% Range of x and y coordinates:
N=300;xx=linspace(-0.25,2.25,N);yy=linspace(-1.25,1.25,N);
% Generation of grid with all points (x,y):
[x,y] = meshgrid(xx,yy);
% Representation of inequality in logical matrix:
hl=x+1j*y;I=1./abs(1-hl)<1;
% Plot of points (x,y) that satisfy inequality:
scatter(x(I),y(I),0.3);grid on
xlabel('Re[h*lambda]');ylabel('Im[h*lambda]');
axis('equal');axis([-0.25 2.25 -1.25 1.25]);
A K-stage Runge-Kutta method:

y_n = y_{n-1} + h·Σ_{k=1}^{K} w_k·f_k

where:

f_k = f(t_{n-1} + c_k·h, y_{n-1} + h·Σ_{κ=1}^{K} a_{k,κ}·f_κ) for k = 1, 2, …, K

with c_k ∈ [0, 1] and w_k ∈ [0, 1] for k = 1, 2, …, K, and Σ_{k=1}^{K} w_k = 1.

Butcher's table of coefficients:

c_1 | a_{1,1} a_{1,2} … a_{1,K}
c_2 | a_{2,1} a_{2,2} … a_{2,K}
…
c_K | a_{K,1} a_{K,2} … a_{K,K}
    | w_1 w_2 … w_K

If a_{k,κ} = 0 for κ ≥ k, then the method is explicit, and the values of f_k may be determined recursively:

f_k = f(t_{n-1} + c_k·h, y_{n-1} + h·Σ_{κ=1}^{k-1} a_{k,κ}·f_κ) for k = 1, 2, …, K

Otherwise it is implicit, and therefore the values of f_k must be determined by solving the system of
algebraic equations (nonlinear, if the function f is nonlinear):

f_1 = f(t_{n-1} + c_1·h, y_{n-1} + h·Σ_{κ=1}^{K} a_{1,κ}·f_κ)
f_2 = f(t_{n-1} + c_2·h, y_{n-1} + h·Σ_{κ=1}^{K} a_{2,κ}·f_κ)
…
f_K = f(t_{n-1} + c_K·h, y_{n-1} + h·Σ_{κ=1}^{K} a_{K,κ}·f_κ)
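A minimal sketch of one step of the classical 4-stage explicit Runge-Kutta method (this particular tableau is the standard RK4, used here for illustration):

```python
import math

def rk4_step(f, t, y, h):
    """One step of the classical explicit 4-stage Runge-Kutta method:
    c = (0, 1/2, 1/2, 1), w = (1/6, 1/3, 1/3, 1/6)."""
    f1 = f(t, y)
    f2 = f(t + h/2, y + h/2 * f1)
    f3 = f(t + h/2, y + h/2 * f2)
    f4 = f(t + h, y + h * f3)
    return y + h * (f1 + 2*f2 + 2*f3 + f4) / 6

# y' = y, y(0) = 1: one step of size h matches exp(h) to O(h^5)
h = 0.1
y1 = rk4_step(lambda t, y: y, 0.0, 1.0, h)
print(abs(y1 - math.exp(h)))  # tiny residual, consistent with p = 4
```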
[Figure: stability regions of explicit Runge-Kutta methods with K = 1, 2, 3, 4 stages in the complex hλ plane (Re(hλ) ∈ [-3, 0], Im(hλ) ∈ [-3, 3])]
Example: a 3-stage method with the weights 1/6, 2/3, 1/6 and f_1 = f(t_{n-1}, y_{n-1}), applied to the circuit equation u'(t) = A·u(t) + b·e(t); the implicit stages are obtained by solving linear equations of the form:

(I - h·a_{k,k}·A)·f_k = A·(u_{n-1} + …) + b·e(…)

[Figure: the computed circuit response over time [ms]]
A multi-step method has the form:

y_n = Σ_{k=1}^{K_α} α_k·y_{n-k} + h·Σ_{k=κ}^{K_β} β_k·f(t_{n-k}, y_{n-k}) with y_0 = y(0), …, y_{K̄-1} given

where κ = 0 or 1, K_α ≥ 1, K_β ≥ κ, and K̄ = max(K_α, K_β).

The method is called explicit if κ = 1 (β_0 = 0), since then y_n depends only on y_{n-1}, y_{n-2}, …
and f(t_{n-1}, y_{n-1}), f(t_{n-2}, y_{n-2}), …; otherwise y_n appears on both sides, and the implicit equation

y_n - h·β_0·f(t_n, y_n) = F(y_{n-1}, y_{n-2}, …) ≡ Σ_{k=1}^{K_α} α_k·y_{n-k} + h·Σ_{k=1}^{K_β} β_k·f(t_{n-k}, y_{n-k})

must be solved at each step.

For the sake of convenience, the formula defining a designed multi-step method will be re-written in
the form:

y_n = Σ_{k=1}^{K_α} α_k·y_{n-k} + h·Σ_{k=κ}^{K_β} β_k·ẏ_{n-k}

resulting from the substitution ẏ_{n-1} = f(t_{n-1}, y_{n-1}), …, ẏ_{n-K} = f(t_{n-K}, y_{n-K}).

The simplest method for determination of the parameters α_k and β_k, given K_α, κ and K_β, consists in:

the development of the RHS of the above formula into the Taylor series at t_n;

the solution of K_α + K_β + 1 linear algebraic equations resulting from setting the coefficient at
the term without h to 1 and zeroing the coefficients at the terms with h¹, h², …, h^(K_α+K_β).

The above procedure provides, as a by-product, an estimate of the local error, viz. the term with
h^(K_α+K_β+1).
The application of a multi-step method for solving y'(t) = λ·y(t) results in the difference equation:

Σ_{k=0}^{K} (ᾱ_k - hλ·β_k)·y_{n-k} = 0, where ᾱ_0 ≝ 1 and ᾱ_k ≝ -α_k for k ≥ 1

Its solution is stable if the zeros of the characteristic equation:

Σ_{k=0}^{K} (ᾱ_k - hλ·β_k)·z^(K-k) = 0

are located inside the unit circle. The border of the region of stability, i.e. the set of hλ values for
which this condition is satisfied, may be determined by solving the above equation with respect to hλ
and substituting z = e^(jφ):

hλ = Σ_{k=0}^{K} ᾱ_k·e^(jφ(K-k)) / Σ_{k=0}^{K} β_k·e^(jφ(K-k)) for φ ∈ [0, 2π)
% REGION OF STABILITY FOR A TWO-STEP METHOD.
% NOTATION: h*lambda = x+j*y; zz(1), zz(2) - symbolic zeros of the
% characteristic equation as functions of hl = h*lambda
N=300;xx=linspace(-1,4,N);yy=linspace(-2.5,2.5,N);
[x,y] = meshgrid(xx,yy);hlambda=x+1j*y;
z1=double(subs(zz(1),hl,hlambda));
z2=double(subs(zz(2),hl,hlambda));
L=(abs(z1)<=1) & (abs(z2)<=1);
% Plot of points (x,y) that satisfy inequality:
scatter(x(L),y(L),0.1);grid on
xlabel('Re[h*lambda]');ylabel('Im[h*lambda]');
axis('equal');axis([-1 4 -2.5 2.5]);
Adams-Bashforth (explicit) methods:

y_n = y_{n-1} + h·Σ_{k=1}^{K} β_k·f(t_{n-k}, y_{n-k}), r_n ≅ c_{p+1}·y^(p+1)(t_n)·h^(p+1)

K | p | c_{p+1}      | β_1, β_2, …, β_K
1 | 1 | 1/2          | 1
2 | 2 | 5/12         | 3/2, -1/2
3 | 3 | 3/8          | 23/12, -16/12, 5/12
4 | 4 | 251/720      | 55/24, -59/24, 37/24, -9/24
5 | 5 | 95/288       | 1901/720, -2774/720, 2616/720, -1274/720, 251/720
6 | 6 | 19087/60480  | 4277/1440, -7923/1440, 9982/1440, -7298/1440, 2877/1440, -475/1440
[Figure: stability regions of the above methods for K = 1, 2, 3, 4 (Re(hλ) ∈ [-2, 0.5], Im(hλ) ∈ [-1, 1])]
Adams-Moulton (implicit) methods:

y_n = y_{n-1} + h·Σ_{k=0}^{K} β_k·f(t_{n-k}, y_{n-k}), r_n ≅ c_{p+1}·y^(p+1)(t_n)·h^(p+1)

K | p | c_{p+1}       | β_0, β_1, …, β_K
0 | 1 | -1/2          | 1
1 | 2 | -1/12         | 1/2, 1/2
2 | 3 | -1/24         | 5/12, 8/12, -1/12
3 | 4 | -19/720       | 9/24, 19/24, -5/24, 1/24
4 | 5 | -3/160        | 251/720, 646/720, -264/720, 106/720, -19/720
5 | 6 | -863/60480    | 475/1440, 1427/1440, -798/1440, 482/1440, -173/1440, 27/1440
6 | 7 | -275/24192    | 19087/60480, 65112/60480, -46461/60480, 37504/60480, -20211/60480, 6312/60480, -863/60480
[Figure: stability regions of the above methods for K = 1, 2, 3, 4 (Re(hλ) ∈ [-6, 2], Im(hλ) ∈ [-4, 4])]
Explicit methods using a single value of f:

y_n = Σ_{k=1}^{K} α_k·y_{n-k} + h·β_1·f(t_{n-1}, y_{n-1}), r_n ≅ c_{p+1}·y^(p+1)(t_n)·h^(p+1)

K | p | c_{p+1} | β_1 | α_1, α_2, …, α_K
1 | 1 | 1/2     | 1   | 1
2 | 2 | 1/3     | 2   | 0, 1
3 | 3 | 1/4     | 3   | -3/2, 3, -1/2
4 | 4 | 1/5     | 4   | -10/3, 6, -2, 1/3
5 | 5 | 1/6     | 5   | -65/12, 10, -5, 5/3, -1/4
6 | 6 | 1/7     | 6   | -77/10, 15, -10, 5, -3/2, 1/5
[Figure: stability regions of the above methods for K = 1, 2, 3, 4 (Re(hλ) ∈ [-2, 2.5], Im(hλ) ∈ [-2, 2])]
Gear's (implicit) methods:

y_n = Σ_{k=1}^{K} α_k·y_{n-k} + h·β_0·f(t_n, y_n), r_n ≅ c_{p+1}·y^(p+1)(t_n)·h^(p+1)

K | p | c_{p+1} | β_0    | α_1, α_2, …, α_K
1 | 1 | -1/2    | 1      | 1
2 | 2 | -2/9    | 2/3    | 4/3, -1/3
3 | 3 | -3/22   | 6/11   | 18/11, -9/11, 2/11
4 | 4 | -12/125 | 12/25  | 48/25, -36/25, 16/25, -3/25
5 | 5 | -10/137 | 60/137 | 300/137, -300/137, 200/137, -75/137, 12/137
6 | 6 | -20/343 | 60/147 | 360/147, -450/147, 400/147, -225/147, 72/147, -10/147
[Figure: stability regions of Gear's methods for K = 1, 2, 3, 4 in the complex hλ plane (Re(hλ) ∈ [-5, 15], Im(hλ) ∈ [-8, 2])]
Its solution for y(0) = [0 3]^T has the form:

y_1(t) = e^(-t) - e^(-1000t) and y_2(t) = 2e^(-t) + e^(-1000t)
If the number of steps needed to find a solution over a given interval [0, T] increases, then:
the number of arithmetic operations needed increases,
the accumulated effect of the rounding errors of those operations increases.
The above dilemma may be resolved by adaptation of the step size, the adaptation based on
the evaluation of the local error.
Let y_n^(1) be an estimate of the solution obtained with the step h, and y_n^(2) - with the step h/2:

y_n^(1) - y(t_n) ≅ α·h^(p+1) + O(h^(p+2)) (the main part of the error: α·h^(p+1))

y_n^(2) - y(t_n) ≅ α·h^(p+1)/2^p + O(h^(p+2)) (the main part of the error: α·h^(p+1)/2^p)

The coefficient α may be determined from those equations:

y_n^(1) - y_n^(2) ≅ α·h^(p+1)·(1 - 2^(-p))

and used for evaluation of the local errors:

r_n^(1) ≅ y_n^(1) - y(t_n) ≅ 2^p·(y_n^(1) - y_n^(2))/(2^p - 1)

r_n^(2) ≅ y_n^(2) - y(t_n) ≅ (y_n^(1) - y_n^(2))/(2^p - 1)
The rules for step selection are as follows:

if |r_n| < r_max/2^(p+1), then double the step h;
if r_max/2^(p+1) ≤ |r_n| < r_max, then continue without changing h;
if |r_n| ≥ r_max, then halve the step h and repeat the current step.
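The step-halving estimate can be checked on the explicit Euler method (p = 1); a minimal sketch on the test problem y' = -y:

```python
import math

def euler_step(f, t, y, h):
    """One explicit Euler step."""
    return y + h * f(t, y)

f = lambda t, y: -y
t, y, h, p = 0.0, 1.0, 0.1, 1
y1 = euler_step(f, t, y, h)              # one step of size h
ym = euler_step(f, t, y, h/2)
y2 = euler_step(f, t + h/2, ym, h/2)     # two steps of size h/2
r2 = (y1 - y2) / (2**p - 1)              # estimate of the error of y2
exact = math.exp(-h)
print(r2, y2 - exact)                    # the estimate tracks the true error
```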
The main limitation of the step-size control, based on the local error assessment, is implied by
the stability constraints.
Those constraints are much less severe for implicit methods than for explicit methods, but implicit
methods require the solution of a nonlinear algebraic equation at each step.
The computational complexity of this operation may be significantly reduced by using an initial
approximation of the solution obtained by means of a corresponding explicit method (prediction
phase):
y_n^(0) = Σ_{k=1}^{K_α} α_k·y_{n-k} + h·Σ_{k=1}^{K_β} β_k·f(t_{n-k}, y_{n-k})

Next, the implicit equation:

y_n = h·β_0·f(t_n, y_n) + Σ_{k=1}^{K_α} α_k·y_{n-k} + h·Σ_{k=1}^{K_β} β_k·f(t_{n-k}, y_{n-k})

is solved with respect to y_n using an iterative method starting at y_n^(0) (correction phase).
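A minimal sketch of the prediction-correction idea, using the explicit Euler predictor and one pass of the trapezoidal corrector (Heun's scheme, chosen here for illustration):

```python
import math

def heun(f, y0, T, n):
    """Euler predictor + one trapezoidal correction per step:
    the simplest instance of the prediction-correction scheme."""
    h = T / n
    t, y = 0.0, y0
    for _ in range(n):
        y_pred = y + h * f(t, y)                      # prediction (explicit)
        y = y + h/2 * (f(t, y) + f(t + h, y_pred))    # correction (trapezoid)
        t += h
    return y

f = lambda t, y: -y
err1 = abs(heun(f, 1.0, 1.0, 100) - math.exp(-1.0))
err2 = abs(heun(f, 1.0, 1.0, 200) - math.exp(-1.0))
print(err1, err1 / err2)  # halving h divides the error by ~4 (p = 2)
```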