Lecture 2


1.5 Numerical integration


We consider a function f : [α, β] ⊂ ℝ → ℝ and we wish to compute the integral
I = ∫_α^β f(x) dx. This integral may be computed directly if a primitive F of the
function f is known, in which case we have I = F(β) − F(α).
However, if F is not known (or its computation is too complicated), an ap-
proximate value of I may be computed if the values of f at the nodes of a division
∆ of the interval [α, β] are known.
In other words, given the division ∆ : α = x1 < x2 < ... < xn+1 = β, if
yi = f(xi), i = 1, ..., n+1 are known, we can approximate I = ∫_α^β f(x) dx using
only the values xi, yi, i = 1, ..., n+1.
The computation is based on the following simple idea: if we approximate
the function f by its interpolating polynomial Pn(x) corresponding to ∆ and Y,
then the integral of f can obviously be approximated by the integral of Pn(x),
i.e.

    I = ∫_α^β f(x) dx ≈ ∫_α^β Pn(x) dx.
We will use in this regard the Lagrange form of the interpolating polynomial:

    Pn(x) = Σ_{i=1}^{n+1} Li(x) · yi = Σ_{i=1}^{n+1} Li(x) · f(xi) =
          = L1(x) · f(x1) + L2(x) · f(x2) + ... + Ln+1(x) · f(xn+1)

where

    Li(x) = ∏_{j=1, j≠i}^{n+1} (x − xj)/(xi − xj),   i = 1, ..., n+1

Taking into account well-known properties of the integral, we obtain:

    I ≈ ∫_α^β Pn(x) dx = ∫_α^β Σ_{i=1}^{n+1} Li(x)·f(xi) dx =
      = Σ_{i=1}^{n+1} ∫_α^β Li(x)·f(xi) dx = Σ_{i=1}^{n+1} f(xi) · ∫_α^β Li(x) dx

It follows that we actually need to integrate only the functions Li(x). We will
perform the computations for the cases n = 1 and n = 2.

1.5.1. The case n = 1. Trapeze Formula


For the case n = 1 the division of [α, β] is ∆ : α = x1 < x2 = β. The corre-
sponding interpolating polynomial is P1(x) = L1(x) · f(x1) + L2(x) · f(x2), with

    L1(x) = (x − x2)/(x1 − x2),   L2(x) = (x − x1)/(x2 − x1).

The approximate value of the integral in this case is

    I ≈ f(x1) · ∫_α^β L1(x) dx + f(x2) · ∫_α^β L2(x) dx.
We have:

    ∫_α^β L1(x) dx = ∫_{x1}^{x2} (x − x2)/(x1 − x2) dx = 1/(x1 − x2) · ∫_{x1}^{x2} (x − x2) dx =
    = 1/(x1 − x2) · (x²/2 − x2·x) |_{x1}^{x2} = 1/(x1 − x2) · ((x2² − x1²)/2 − x2·(x2 − x1)) =
    = −((x2 + x1)/2 − x2) = (x2 − x1)/2.

    ∫_α^β L2(x) dx = ∫_{x1}^{x2} (x − x1)/(x2 − x1) dx = 1/(x2 − x1) · ∫_{x1}^{x2} (x − x1) dx =
    = 1/(x2 − x1) · (x²/2 − x1·x) |_{x1}^{x2} = 1/(x2 − x1) · ((x2² − x1²)/2 − x1·(x2 − x1)) =
    = (x2 + x1)/2 − x1 = (x2 − x1)/2.

It follows that:

    I ≈ f(x1) · ∫_α^β L1(x) dx + f(x2) · ∫_α^β L2(x) dx =
    = f(x1) · (x2 − x1)/2 + f(x2) · (x2 − x1)/2 = (f(x1) + f(x2)) · (x2 − x1)/2

Replacing x1 = α, x2 = β we obtain the so-called Trapeze Formula:

    ∫_α^β f(x) dx ≈ (f(α) + f(β)) · (β − α)/2                    (1)

This formula has a simple geometric interpretation. We recall the fact that
the value of the integral is numerically equal to the area of the surface delimited
by the plot of f , the Ox axis and the parallels to the Oy axis passing through
x = α and x = β. The following figure shows this area:


The simplest approximation of the area is given by the area of the trapeze
from the following figure:

The area of this trapeze can be computed as the sum of the lengths of its bases
(f(α) + f(β)) multiplied by half the length of its height ((β − α)/2). Thus we
obtain by geometrical means the same approximation for I as the one obtained
above using the interpolating polynomial.
The Trapeze Formula may also be written as:

    ∫_α^β f(x) dx = (f(α) + f(β)) · (β − α)/2 + ε

where ε denotes the error obtained by replacing the exact value of the integral
by the area of the trapeze. The value of this error is actually given by the
difference-area presented in the following figure:

1.5.2. Iterative Trapeze Method


In the case of some integrals, the error corresponding to the Trapeze Formula is
very large. If so, a better approximation may be obtained by using the so-called
Iterative Trapeze Method, presented in the following.
In essence, first we divide the interval [α, β] into n subintervals of equal length
by means of the equidistant division ∆ : α = x1 < x2 < ... < xn+1 = β, then we
apply the Trapeze Formula on each subinterval [xi, xi+1], i = 1, ..., n.
For an equidistant division, the distance between two successive nodes, called
the step of the division, is constant: h = xi+1 − xi, i = 1, ..., n.
In this case we have xi = α + (i − 1) · h, where h = (β − α)/n.
The area of the trapeze corresponding to the subinterval [xi, xi+1], i = 1, ..., n,
is Ai = (f(xi) + f(xi+1)) · h/2. Since the area which approximates the integral
is the sum of the areas of all the trapezes, we obtain the formula:

    I ≈ Σ_{i=1}^{n} Ai = Σ_{i=1}^{n} (f(xi) + f(xi+1)) · h/2                    (2)

The way in which the approximation of the integral is computed as the sum
of the trapeze areas is illustrated in the following figures (for n taking the values
2, 3 and 4):


We remark that by increasing the number of trapezes used, the error of the ap-
proximation (the areas in the last column of the figures) decreases.

Remark:

The Iterative Trapeze Method can be easily implemented in a programming lan-
guage using the following pseudocode algorithm:

Input data: interval [α, β], function f, number n of subintervals.
Output data: approximate value of the integral v ≈ ∫_α^β f(x) dx.

Start
  h = (β − α)/n; v = 0;
  For i from 1 to n
    v = v + (f(α + (i − 1)·h) + f(α + i·h))·h/2;
Stop
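A direct transcription of this pseudocode in Python might look as follows (a minimal sketch; the function and variable names are mine, not part of the original algorithm):

```python
def trapeze_iterative(f, alpha, beta, n):
    """Approximate the integral of f on [alpha, beta] by summing the
    areas of n trapezes of equal width h = (beta - alpha)/n (formula (2))."""
    h = (beta - alpha) / n
    v = 0.0
    for i in range(1, n + 1):
        # trapeze over the subinterval [alpha + (i-1)*h, alpha + i*h]
        v += (f(alpha + (i - 1) * h) + f(alpha + i * h)) * h / 2
    return v
```

For n = 1 this reduces to the Trapeze Formula (1); increasing n decreases the error, as the figures above illustrate.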

1.5.3. The case n = 2. Simpson’s Formula
For the case n = 2 the division of [α, β] consists of the nodes α = x1 <
(α + β)/2 = x2 < x3 = β.
The corresponding interpolating polynomial is P2(x) = L1(x) · f(x1) + L2(x) ·
f(x2) + L3(x) · f(x3), with

    L1(x) = (x − x2)/(x1 − x2) · (x − x3)/(x1 − x3),
    L2(x) = (x − x1)/(x2 − x1) · (x − x3)/(x2 − x3),
    L3(x) = (x − x1)/(x3 − x1) · (x − x2)/(x3 − x2).

The approximate value of the integral in this case is

    I ≈ f(x1) · ∫_α^β L1(x) dx + f(x2) · ∫_α^β L2(x) dx + f(x3) · ∫_α^β L3(x) dx.
We have:

    ∫_α^β L1(x) dx = ∫_{x1}^{x3} (x − x2)/(x1 − x2) · (x − x3)/(x1 − x3) dx =
    = 1/((x1 − x2)·(x1 − x3)) · ∫_{x1}^{x3} (x − x2)·(x − x3) dx =
    = 1/((x1 − x2)·(x1 − x3)) · ∫_{x1}^{x3} (x² − (x2 + x3)·x + x2·x3) dx =
    = 1/((x1 − x2)·(x1 − x3)) · (x³/3 − (x2 + x3)·x²/2 + x2·x3·x) |_{x1}^{x3} =
    = 1/((x1 − x2)·(x1 − x3)) · ((x3³ − x1³)/3 − (x2 + x3)·(x3² − x1²)/2 + x2·x3·(x3 − x1)) =
    = 1/(x1 − x2) · (−(x1² + x1·x3 + x3²)/3 + (x2 + x3)·(x3 + x1)/2 − x2·x3)

Taking into account the fact that x2 = (α + β)/2 = (x1 + x3)/2, so that
x1 − x2 = (x1 − x3)/2, the expression in the parentheses reduces to −(x1 − x3)²/12,
and we obtain:

    ∫_α^β L1(x) dx = 2/(x1 − x3) · (−(x1 − x3)²/12) = (x3 − x1)/6

The integrals ∫_α^β L2(x) dx and ∫_α^β L3(x) dx are computed in the same way
(exercise!), obtaining:

    ∫_α^β L1(x) dx = (x3 − x1)/6,   ∫_α^β L2(x) dx = 2·(x3 − x1)/3,   ∫_α^β L3(x) dx = (x3 − x1)/6


Thus:

    I ≈ f(x1) · ∫_α^β L1(x) dx + f(x2) · ∫_α^β L2(x) dx + f(x3) · ∫_α^β L3(x) dx =
    = f(x1) · (x3 − x1)/6 + f(x2) · 2·(x3 − x1)/3 + f(x3) · (x3 − x1)/6.

Replacing x1 = α, x2 = (α + β)/2, x3 = β we obtain Simpson's Formula:

    ∫_α^β f(x) dx ≈ (f(α) + 4·f((α + β)/2) + f(β)) · (β − α)/6                    (3)

Remark:
The approximation given by Simpson's Formula is better than the one given by
the Trapeze Formula, a fact illustrated by the following example:

Example
Compute the exact value of the integral I = ∫_1^2 1/x² dx and approximate
values for I using the Trapeze Formula and Simpson's Formula.

Solution:
The exact value of the integral is:

    I = ∫_1^2 1/x² dx = −1/x |_1^2 = −1/2 − (−1/1) = 1 − 1/2 = 0.5.

Using the Trapeze Formula we obtain:

    I ≈ (f(1) + f(2)) · (2 − 1)/2 = (1/1² + 1/2²) · 1/2 = (1 + 1/4) · 1/2 = 5/8 = 0.625.

Using Simpson's Formula we obtain:

    I ≈ (f(1) + 4·f((1 + 2)/2) + f(2)) · (2 − 1)/6 = (1/1² + 4·(1/(3/2)²) + 1/2²) · 1/6 =
    = (1 + 4·4/9 + 1/4) · 1/6 = (36 + 64 + 9)/36 · 1/6 = 109/36 · 1/6 = 109/216 ≈ 0.50463
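The computations in this example are easy to check numerically; the sketch below (the function names are mine) implements formulas (1) and (3) directly:

```python
def trapeze(f, a, b):
    # Trapeze Formula (1)
    return (f(a) + f(b)) * (b - a) / 2

def simpson(f, a, b):
    # Simpson's Formula (3)
    return (f(a) + 4 * f((a + b) / 2) + f(b)) * (b - a) / 6

f = lambda x: 1 / x**2
print(trapeze(f, 1, 2))   # 0.625
print(simpson(f, 1, 2))   # ≈ 0.50463, much closer to the exact value 0.5
```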

2. Approximate solutions for
nonlinear equations

2.1 Introduction
The problem:
Consider the nonlinear equation:

f (x) = 0 (4)

where f : [α, β] ⊂ ℝ → ℝ is a nonlinear function. Find x∗ ∈ [α, β] such that
f(x∗) = 0 (x∗ is a solution of the equation (4)).

Remarks:
In principle a nonlinear equation is an equation which is not linear, i.e. not of
the type a · x + b = 0. However, in this chapter we will address types of equations
for which there are no methods to find the exact solutions, or, even worse,
equations for which the exact solutions cannot be found, such as, for example,
e^x + x² − 10 = 0. This equation belongs to a category of equations called
transcendental equations, whose solutions are in general irrational numbers (i.e.
numbers with an infinite, non-repeating decimal expansion) and thus impossible
to compute exactly. In the following we will compute approximate solutions for
this type of equations, solutions which verify the equation in an approximate
way: f(x̃) ≈ 0.
As a nonlinear equation, equation (4) may have not one but several solutions
on a given interval. However, in the following we will make the assumption
that the interval [α, β] involved in the computations is small enough that on
this interval there is a single solution of the equation (4). The choice of such
an interval can be made by using one of the several theorems available in the
literature (we will show an example of such a theorem in the next section) or, in
a more practical way, by using a graphical representation of the function f(x)
in a mathematical software package (such as Matlab).

2.2 Newton’s method


To find an approximation of the exact solution x∗ of the equation (4), Newton's
method computes a sequence x1, x2, ..., xk, ... convergent to the exact solution:

    {xn}n → x∗ as n → ∞.

The recurrence relation defining the terms of the sequence is chosen as:

xk+1 = xk + h.

We will find h taking into account the following two relations:

- On one hand, if we consider xk+1 to be an approximate solution of the
equation (4), then f(xk+1) ≈ 0.
- On the other hand, expanding the function f(xk+1) in a Taylor series and
keeping only the first two terms of the expansion (and ignoring the rest) we
obtain f(xk+1) = f(xk + h) ≈ f(xk) + f′(xk)·h/1!.

Comparing the two approximations obtained above for f(xk+1), we can con-
clude that f(xk) + f′(xk)·h/1! ≈ 0. This clearly suggests that we should choose
for h the expression: h = −f(xk)/f′(xk).
Hence we obtain for Newton's sequence the expression:

    xk+1 = xk − f(xk)/f′(xk).                    (5)

Geometrical interpretation:
Newton’s method is also called the tangent method and the reason for this name
will be made clear in the following.
We recall the fact that the equation of the tangent to the curve y = f (x) at
the point of coordinates (xk , f (xk )) is:

T : y − f (xk ) = f 0 (xk ) · (x − xk ).

The intersection of this tangent with the Ox axis, obtained by choosing y = 0
in the equation of the tangent, is the point with x coordinate x = xk − f(xk)/f′(xk).
Taking into account the definition of Newton's sequence (5), we remark that the
intersection point actually gives the following term of Newton's sequence,
namely xk+1.
In fact the computational process of finding the approximate solutions may
be represented geometrically like this: starting with the value of xk, i.e. from
the point of coordinates (xk, 0), we plot the parallel to the Oy axis passing
through this point; at the point where this parallel intersects the curve y = f(x)
we plot the tangent to the curve; the x coordinate of the intersection of the
tangent with the Ox axis is the next value xk+1; then we repeat the whole
process to obtain xk+2 and so on.
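The recurrence (5) gives a very short implementation. A minimal sketch in Python (the tolerance-based stopping rule and the names are my additions, not part of the text):

```python
def newton(f, df, x1, tol=1e-10, max_iter=50):
    """Newton's sequence (5): x_{k+1} = x_k - f(x_k)/f'(x_k)."""
    x = x1
    for _ in range(max_iter):
        x_new = x - f(x) / df(x)
        if abs(x_new - x) < tol:  # stop when successive terms are close
            return x_new
        x = x_new
    return x

# the equation x^2 + x - 2 = 0 on [0, 2], starting from x1 = 2
root = newton(lambda x: x**2 + x - 2, lambda x: 2 * x + 1, 2.0)
```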

Remark:
The sequence (5) does not always converge to the exact solution x∗ . The next
theorem (given without proof) presents a set on sufficient conditions for the
convergence of Newton’s sequence:

Theorem:
If f(α) · f(β) < 0 and both f′(x) and f′′(x) do not change their sign on the
interval [α, β], then the sequence (5) converges to the exact solution x∗ of the
equation (4) for any x1 ∈ [α, β] satisfying the condition f′(x1) · f′′(x1) > 0.


Definition:
If x∗ is the exact solution of the equation (4) and xk is an approximate solution,
then the error associated with the approximation xk is defined as the absolute
value of the difference between the exact solution and the approximate one:

    ε = |x∗ − xk|.

Theorem:
The error ε corresponding to an approximate solution xk given by Newton's
method may be estimated by using the relation:

    ε ≤ |f(xk)|/M,   where M = min_{x∈[α,β]} |f′(x)|                    (6)

Proof:
The proof uses Lagrange’s theorem, which states that for any continuous function
f : [α, β] ⊂ < → < there exists a value γ ∈ [α, β] such that |f (β) − f (α)| =
|f 0 (γ)| · |β − α|.
Since f (x∗ ) = 0, using Lagrange’s theorem we obtain:
|f (xk )| = |0 − f (xk )| = |f (x∗ ) − f (xk )| = |f 0 (γ)| · |x∗ − xk |,
where γ ∈ [x∗ , xk ] or γ ∈ [xk , x∗ ]. It follows that:
ε = |x∗ − xk | = |f (xk )| |f (xk )|
|f 0 (γ)| ≤ min |f 0 (x)| .
x∈[α,β]

Example 8
Using Newton’s method, compute on the [0, 2] interval an approximate solution
of the equation x2 + x − 2 = 0 and estimate the error of the approximation.

Solution:
Since the equation is a second degree polynomial one, its exact solutions can be
computed with ease using the quadratic formula: x∗_{1,2} = (−1 ± √(1 + 8))/2 =
(−1 ± 3)/2, hence the exact solutions are x∗ = −2 and x∗ = 1. We remark that in general, if
for a given equation it is possible to find the exact solutions then, obviously, in
practice there is no point in computing approximate solutions. However, from the
teaching point of view it makes sense to take an equation (such as the one from
this example) for which the exact solutions are known, to compute approximate
solutions and their correspondent errors and make the comparison between the


exact and approximate values of the solutions and between the errors estimated
by the previous theorem and the real errors. Such comparisons are made for each
of the examples which include approximate computations.
In order to apply Newton's method, taking into account the general form of
the equation (4), it is clear that f(x) = x² + x − 2.
Since we are looking for approximate solutions on the interval [0, 2], we expect
that the sequence of approximate solutions given by Newton's method will
converge to the exact solution x∗ = 1.
The convergence itself is assured by the above theorem. Indeed, f(0) · f(2) =
−2 · 4 < 0, f′(x) = 2·x + 1, f′′(x) = 2, so both f′(x) and f′′(x) have constant
sign on [0, 2] (namely positive), and, by choosing for example x1 = 2 we have
f′(x1) · f′′(x1) = 5 · 2 > 0.
Using Newton's sequence (5), for k = 1 we obtain:

    x2 = x1 − f(x1)/f′(x1) = 2 − f(2)/f′(2) = 2 − (2² + 2 − 2)/(2·2 + 1) = 2 − 4/5 = 6/5 = 1.2

For k = 2 we obtain:

    x3 = x2 − f(x2)/f′(x2) = 6/5 − f(6/5)/f′(6/5) = 6/5 − (36/25 + 6/5 − 2)/(2·6/5 + 1) =
    = 6/5 − (16/25)/(17/5) = 6/5 − 16/85 = (6·17 − 16)/85 = 86/85 ≈ 1.01176...

The Newton sequence x1 , x2 , x3 clearly approaches the exact solution x∗ = 1.


In theory, each value of a Newton sequence (except maybe the first one) may
be considered an approximate solution of the equation; evidently the larger the
value of n in xn the more precise the approximation and the smaller the error.
Speaking of errors, next we will use the preceding theorem to compute esti-
mates of the errors corresponding to the approximations found above (namely
x2 and x3; the starting value x1 is usually not considered an approximation).
Denoting by εk the error corresponding to the approximation xk , the theorem
gives the following estimates:

    εk = |x∗ − xk| ≤ |f(xk)| / min_{x∈[α,β]} |f′(x)|

Since f′(x) = 2·x + 1 is monotonically increasing on the interval [0, 2], its
minimal value on this interval is evidently reached at 0, hence min_{x∈[0,2]} |f′(x)| =
|f′(0)| = 1.
The estimate of the error becomes: εk = |x∗ − xk| ≤ |f(xk)|.


It follows that for the error ε2 corresponding to the approximate solution
x2 we have the estimate: ε2 ≤ |f(x2)| = |x2² + x2 − 2| = |(1.2)² + 1.2 − 2| =
|1.44 + 1.2 − 2| = 0.64.
Hence the error ε2 corresponding to the approximate solution x2 = 1.2 is
smaller than 0.64. This is indeed true because the actual value of this error is in
fact ε2 = |x∗ − x2| = |1 − 1.2| = 0.2.
For the error ε3 corresponding to the approximate solution x3 we have the
estimate: ε3 ≤ |f(x3)| = |x3² + x3 − 2| = |(86/85)² + 86/85 − 2| ≈ 0.0354...
Again the estimate of the error given by the theorem is good enough, since
the real value of the error is ε3 = |x∗ − x3| ≈ 0.0117...
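These estimates are straightforward to reproduce numerically (a sketch; the variable names are mine):

```python
f  = lambda x: x**2 + x - 2
df = lambda x: 2 * x + 1

x2 = 2.0 - f(2.0) / df(2.0)   # = 1.2
x3 = x2 - f(x2) / df(x2)      # = 86/85, about 1.01176

# error bounds from the theorem, with min |f'(x)| on [0, 2] equal to |f'(0)| = 1
print(abs(f(x2)), abs(1 - x2))  # bound 0.64 vs real error 0.2
print(abs(f(x3)), abs(1 - x3))  # bound about 0.0354 vs real error about 0.0118
```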

2.2.1 Newton’s method for systems of equations


Newton's method may be generalized to the case of a system of n nonlinear
equations with n unknowns, of the type:

    f1(x1, x2, ..., xn) = 0
    ...
    fn(x1, x2, ..., xn) = 0

where the fi are real continuous functions of n real variables whose partial
derivatives are also continuous on their domains of definition.
If we denote f̄ = (f1, ..., fn)ᵀ and x̄^k = (x1^k, ..., xn^k)ᵀ, then the sequences
{x1^k}k, ..., {xn^k}k (converging to the exact solutions of the system, x1∗, ..., xn∗
respectively) are defined by the vector form of Newton's method:

    x̄^{k+1} = x̄^k − [f̄′(x̄^k)]⁻¹ · f̄(x̄^k)

in which the derivative is replaced by the Jacobian matrix f̄′(x̄) = (∂fi/∂xj),
i, j = 1, ..., n, i.e. the n × n matrix whose row i contains the partial derivatives
∂fi/∂x1, ..., ∂fi/∂xn.

Example 9
Solve using Newton's method the system:

    x1³ + x2 − 1 = 0
    x2³ − x1 + 1 = 0

choosing the initial values x̄^1 = (x1^1, x2^1)ᵀ = (0, 0)ᵀ.

Solution:
We have f1(x1, x2) = x1³ + x2 − 1 and f2(x1, x2) = x2³ − x1 + 1, hence the
associated Jacobian matrix is (rows separated by semicolons):

    f̄′(x̄) = [ 3x1²  1 ; −1  3x2² ]

Its inverse is:

    [f̄′(x̄)]⁻¹ = 1/(1 + 9x1²x2²) · [ 3x2²  −1 ; 1  3x1² ]

The vector form of Newton's method is in this case, written componentwise:

    x1^{k+1} = x1^k − (3(x2^k)² · f1(x1^k, x2^k) − f2(x1^k, x2^k)) / (1 + 9(x1^k)²(x2^k)²)
    x2^{k+1} = x2^k − (f1(x1^k, x2^k) + 3(x1^k)² · f2(x1^k, x2^k)) / (1 + 9(x1^k)²(x2^k)²)

On the first step, for k = 1, starting from (x1^1, x2^1) = (0, 0) we have
f1 = −1, f2 = 1 and determinant 1, so:

    x̄^2 = [ 0 ; 0 ] − [ 0  −1 ; 1  0 ] · [ −1 ; 1 ] = [ 1 ; 1 ]

On the second step, for k = 2 we obtain:

    x̄^3 = [ 1 ; 1 ] − 1/10 · [ 3  −1 ; 1  3 ] · [ 1 ; 1 ] = [ 1 − 2/10 ; 1 − 4/10 ] = [ 0.8 ; 0.6 ]

Proceeding in the same manner, we obtain the sequence of approximations:

    x̄^4 = [ 0.895992 ; 0.303696 ],  x̄^5 = [ 0.998269 ; 0.00730683 ],  x̄^6 = [ 0.999999 ; 0.0000113025 ],

a sequence which clearly converges to the solution (x1, x2) = (1, 0).
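The iteration above can be reproduced with a short program; a sketch (the 2×2 inverse Jacobian is written out explicitly, and the function name is my choice):

```python
def newton_system(x1, x2, steps):
    """Vector Newton's method for f1 = x1^3 + x2 - 1, f2 = x2^3 - x1 + 1."""
    for _ in range(steps):
        f1 = x1**3 + x2 - 1
        f2 = x2**3 - x1 + 1
        # Jacobian [[3*x1^2, 1], [-1, 3*x2^2]] has determinant 1 + 9*x1^2*x2^2;
        # its inverse is [[3*x2^2, -1], [1, 3*x1^2]] / det
        det = 1 + 9 * x1**2 * x2**2
        d1 = (3 * x2**2 * f1 - f2) / det
        d2 = (f1 + 3 * x1**2 * f2) / det
        x1, x2 = x1 - d1, x2 - d2
    return x1, x2

print(newton_system(0.0, 0.0, 2))  # (0.8, 0.6), as in the second step above
print(newton_system(0.0, 0.0, 6))  # very close to the solution (1, 0)
```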
