
Chapter 4 Numerical Differentiation and Integration

4.1 Numerical Differentiation


4.2 Richardson’s Extrapolation
4.3 Elements of Numerical Integration
4.4 Composite Numerical Integration
4.5 Romberg Integration
4.6 Multiple Integrals
4.7 Improper Integrals
4.1 Numerical Differentiation
The derivative of the function f at x0 is

    f'(x_0) = \lim_{h \to 0} \frac{f(x_0 + h) - f(x_0)}{h}.

This formula gives an obvious way to generate an approximation to f'(x_0):

    f'(x_0) \approx \frac{f(x_0 + h) - f(x_0)}{h}.
For small values of h this quotient is not very successful, due to roundoff error. From Taylor's theorem,

    f'(x_0) = \frac{f(x_0 + h) - f(x_0)}{h} - \frac{h}{2} f''(\xi).

This formula is known as the forward-difference formula if h > 0
and the backward-difference formula if h < 0.
To obtain general derivative approximation formulas,
suppose that {x_0, x_1, ..., x_n} are (n + 1) distinct numbers
in some interval I and that f \in C^{n+1}(I). From Theorem 3.3,

    f(x) = \sum_{k=0}^{n} f(x_k) L_k(x) + \frac{(x - x_0) \cdots (x - x_n)}{(n+1)!} f^{(n+1)}(\xi(x)),

for some ξ(x) in I, where L_k(x) denotes the kth Lagrange coefficient polynomial
for f at x_0, x_1, ..., x_n.
Differentiating the interpolation formula gives

    f'(x) = \sum_{k=0}^{n} f(x_k) L_k'(x) + D_x\!\left[ \frac{(x - x_0) \cdots (x - x_n)}{(n+1)!} \right] f^{(n+1)}(\xi(x)) + \frac{(x - x_0) \cdots (x - x_n)}{(n+1)!} D_x\!\left[ f^{(n+1)}(\xi(x)) \right].

When x is one of the nodes x_j, the term multiplying D_x[f^{(n+1)}(\xi(x))] vanishes, and

    f'(x_j) = \sum_{k=0}^{n} f(x_k) L_k'(x_j) + \frac{f^{(n+1)}(\xi(x_j))}{(n+1)!} \prod_{\substack{k=0 \\ k \neq j}}^{n} (x_j - x_k),

which is called an (n + 1)-point formula to approximate f'(x_j).
We first derive some useful three-point formulas and
consider aspects of their errors. Since

    L_0(x) = \frac{(x - x_1)(x - x_2)}{(x_0 - x_1)(x_0 - x_2)}, \quad\text{we have}\quad L_0'(x) = \frac{2x - x_1 - x_2}{(x_0 - x_1)(x_0 - x_2)}.

Similarly,

    L_1'(x) = \frac{2x - x_0 - x_2}{(x_1 - x_0)(x_1 - x_2)} \quad\text{and}\quad L_2'(x) = \frac{2x - x_0 - x_1}{(x_2 - x_0)(x_2 - x_1)}.
Hence,

    f'(x_j) = f(x_0) \left[ \frac{2x_j - x_1 - x_2}{(x_0 - x_1)(x_0 - x_2)} \right] + f(x_1) \left[ \frac{2x_j - x_0 - x_2}{(x_1 - x_0)(x_1 - x_2)} \right] + f(x_2) \left[ \frac{2x_j - x_0 - x_1}{(x_2 - x_0)(x_2 - x_1)} \right] + \frac{1}{6} f^{(3)}(\xi_j) \prod_{\substack{k=0 \\ k \neq j}}^{2} (x_j - x_k),

for each j = 0, 1, 2.

With x_j = x_0, x_1 = x_0 + h, and x_2 = x_0 + 2h,

    f'(x_0) = \frac{1}{h} \left[ -\frac{3}{2} f(x_0) + 2 f(x_1) - \frac{1}{2} f(x_2) \right] + \frac{h^2}{3} f^{(3)}(\xi_0).
For x_j = x_1,

    f'(x_1) = \frac{1}{h} \left[ -\frac{1}{2} f(x_0) + \frac{1}{2} f(x_2) \right] - \frac{h^2}{6} f^{(3)}(\xi_1),

and for x_j = x_2,

    f'(x_2) = \frac{1}{h} \left[ \frac{1}{2} f(x_0) - 2 f(x_1) + \frac{3}{2} f(x_2) \right] + \frac{h^2}{3} f^{(3)}(\xi_2).

Since x_1 = x_0 + h and x_2 = x_0 + 2h, these become

    f'(x_0) = \frac{1}{h} \left[ -\frac{3}{2} f(x_0) + 2 f(x_0 + h) - \frac{1}{2} f(x_0 + 2h) \right] + \frac{h^2}{3} f^{(3)}(\xi_0),

    f'(x_0 + h) = \frac{1}{h} \left[ -\frac{1}{2} f(x_0) + \frac{1}{2} f(x_0 + 2h) \right] - \frac{h^2}{6} f^{(3)}(\xi_1), \quad\text{and}

    f'(x_0 + 2h) = \frac{1}{h} \left[ \frac{1}{2} f(x_0) - 2 f(x_0 + h) + \frac{3}{2} f(x_0 + 2h) \right] + \frac{h^2}{3} f^{(3)}(\xi_2).
As a matter of convenience, the variable substitution of x_0 for x_0 + h
is made in the middle equation, giving the two standard forms

    f'(x_0) = \frac{1}{2h} \left[ -3 f(x_0) + 4 f(x_0 + h) - f(x_0 + 2h) \right] + \frac{h^2}{3} f^{(3)}(\xi_0),

    f'(x_0) = \frac{1}{2h} \left[ f(x_0 + h) - f(x_0 - h) \right] - \frac{h^2}{6} f^{(3)}(\xi_1) \quad\text{(centered-difference formula)},

where ξ_1 lies between x_0 - h and x_0 + h.

The methods presented are called three-point formulas.

Similarly, there are five-point formulas. One is

    f'(x_0) = \frac{1}{12h} \left[ f(x_0 - 2h) - 8 f(x_0 - h) + 8 f(x_0 + h) - f(x_0 + 2h) \right] + \frac{h^4}{30} f^{(5)}(\xi).
EXAMPLE
Values for f(x) = xe^x are given in the accompanying table.
Since f'(x) = (x + 1)e^x, we have f'(2.0) = 22.167168.
Approximating f'(2.0) using the various three- and five-point formulas
produces the following results.
Three-Point Formulas
With h = 0.1:

    \frac{1}{0.2} \left[ -3 f(2.0) + 4 f(2.1) - f(2.2) \right] = 22.032310,

    \frac{1}{0.2} \left[ f(2.1) - f(1.9) \right] = 22.228790.

The errors are approximately 1.35 \times 10^{-1} and -6.16 \times 10^{-2}, respectively.

Five-Point Formula
With h = 0.1:

    \frac{1}{1.2} \left[ f(1.8) - 8 f(1.9) + 8 f(2.1) - f(2.2) \right] = 22.166999.

The error is approximately 1.69 \times 10^{-4}.
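The three- and five-point formulas above are easy to check numerically. A minimal Python sketch, using the same f(x) = xe^x and h = 0.1 as the example (exact function values are used here, so the last digits differ slightly from the text's results, which were computed from rounded table data):

```python
import math

def f(x):
    return x * math.exp(x)

x0, h = 2.0, 0.1
exact = (x0 + 1) * math.exp(x0)   # f'(x) = (x + 1)e^x, so f'(2.0) = 22.167168...

# Three-point endpoint formula: (1/2h)[-3 f(x0) + 4 f(x0+h) - f(x0+2h)]
endpoint = (-3 * f(x0) + 4 * f(x0 + h) - f(x0 + 2 * h)) / (2 * h)

# Three-point midpoint (centered-difference) formula
midpoint = (f(x0 + h) - f(x0 - h)) / (2 * h)

# Five-point midpoint formula
five_pt = (f(x0 - 2 * h) - 8 * f(x0 - h) + 8 * f(x0 + h) - f(x0 + 2 * h)) / (12 * h)

for name, approx in [("3-pt endpoint", endpoint),
                     ("3-pt midpoint", midpoint),
                     ("5-pt midpoint", five_pt)]:
    print(f"{name}: {approx:.6f}  error {exact - approx:+.2e}")
```

At the same h, the five-point approximation is roughly three orders of magnitude more accurate than the three-point ones, as the h^4 versus h^2 error terms predict.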
A particularly important subject is the role roundoff error plays in the approximation.
Let us examine

    f'(x_0) = \frac{1}{2h} \left[ f(x_0 + h) - f(x_0 - h) \right] - \frac{h^2}{6} f^{(3)}(\xi_1)

more closely. Suppose that in evaluating f(x_0 + h) and f(x_0 - h)
we encounter roundoff errors e(x_0 + h) and e(x_0 - h).
Then our computed values \tilde{f}(x_0 + h) and \tilde{f}(x_0 - h) are related
to the true values f(x_0 + h) and f(x_0 - h) by the formulas

    f(x_0 + h) = \tilde{f}(x_0 + h) + e(x_0 + h),
    f(x_0 - h) = \tilde{f}(x_0 - h) + e(x_0 - h).

The total error in the approximation,

    f'(x_0) - \frac{\tilde{f}(x_0 + h) - \tilde{f}(x_0 - h)}{2h} = \frac{e(x_0 + h) - e(x_0 - h)}{2h} - \frac{h^2}{6} f^{(3)}(\xi_1),

is due in part to roundoff error and in part to truncation error.
If we assume that the roundoff errors e(x_0 ± h) are bounded
by some number ε > 0 and that
the third derivative of f is bounded by a number M > 0,
then

    \left| f'(x_0) - \frac{\tilde{f}(x_0 + h) - \tilde{f}(x_0 - h)}{2h} \right| \le \frac{\varepsilon}{h} + \frac{h^2}{6} M.

To reduce the truncation error, h^2 M / 6, we must reduce h.
But as h is reduced, the roundoff error ε/h grows.
EXAMPLE
Consider using tabulated five-decimal values of f(x) = \sin x
to approximate f'(0.900).
The true value is \cos 0.900 = 0.62161.
Using the formula

    f'(0.900) \approx \frac{f(0.900 + h) - f(0.900 - h)}{2h}

with different values of h gives the approximations in the accompanying table.
The optimal choice for h appears to lie between 0.005 and 0.05.
If we perform some analysis on the error bound

    e(h) = \frac{\varepsilon}{h} + \frac{h^2}{6} M,

a minimum for e occurs at h = \sqrt[3]{3\varepsilon / M}, where

    M = \max_{x \in [0.800, 1.00]} |f'''(x)| = \max_{x \in [0.800, 1.00]} |\cos x| = \cos 0.800 \approx 0.69671.

Since values of f are given to five decimal places,
it is reasonable to assume that
the roundoff error is bounded by ε = 0.000005.
Therefore, the optimal choice of h is approximately

    h = \sqrt[3]{\frac{3(0.000005)}{0.69671}} \approx 0.028,

which is consistent with the results in the table.
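The ε/h versus h²M/6 trade-off can be observed directly. A small sketch, rounding sin x to five decimals to mimic the example's table data (so the bound ε = 5 × 10⁻⁶ of the text applies):

```python
import math

x0 = 0.900
exact = math.cos(x0)              # true value f'(0.900) = 0.62161...

def f5(x):
    # Five-decimal function values, as in the example's table
    return round(math.sin(x), 5)

for h in [0.1, 0.05, 0.028, 0.01, 0.005, 0.001]:
    approx = (f5(x0 + h) - f5(x0 - h)) / (2 * h)
    print(f"h = {h:<6}  approx = {approx:.5f}  error = {abs(exact - approx):.2e}")
```

The error shrinks until h is near the predicted optimum 0.028 and then grows again as the roundoff term ε/h takes over.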
4.1 (17)
In practice, we cannot compute an optimal h to use in
approximating the derivative,
but we must remain aware that reducing the step size will
not always improve the approximation.

As an approximation method, numerical differentiation is unstable,


since the small values of h needed to reduce truncation error also
cause the roundoff error to grow.

HW EXERCISE SET 4.1


1. a.
3. a.
4. a.
4.2 Richardson’s Extrapolation
Richardson's extrapolation is used to generate high-accuracy results
while using low-order formulas.

Extrapolation can be applied whenever it is known that


an approximation technique has an error term with
a predictable form, one that depends on a parameter,
usually the step size h.
Suppose that for each number h, h≠ 0, we have a formula N (h)
that approximates an unknown value M.
Let us consider approximation formulas for M of the form

    M = N(h) + K_1 h + K_2 h^2 + K_3 h^3 + \cdots.   (1a)

Replacing h by h/2 gives

    M = N\!\left(\frac{h}{2}\right) + K_1 \frac{h}{2} + K_2 \frac{h^2}{4} + K_3 \frac{h^3}{8} + \cdots.   (1b)

Computing 2 × (1b) - (1a) gives

    M = \left[ N\!\left(\frac{h}{2}\right) + \left( N\!\left(\frac{h}{2}\right) - N(h) \right) \right] + K_2 \left( \frac{h^2}{2} - h^2 \right) + K_3 \left( \frac{h^3}{4} - h^3 \right) + \cdots.

We define N_1(h) ≡ N(h) and

    N_2(h) = N_1\!\left(\frac{h}{2}\right) + \left[ N_1\!\left(\frac{h}{2}\right) - N_1(h) \right].
Then we have the O(h^2) approximation formula for M:

    M = N_2(h) - \frac{K_2}{2} h^2 - \frac{3K_3}{4} h^3 - \cdots.   (2a)

Replacing h by h/2,

    M = N_2\!\left(\frac{h}{2}\right) - \frac{K_2}{8} h^2 - \frac{3K_3}{32} h^3 - \cdots.   (2b)

Computing 4 × (2b) - (2a) gives

    3M = \left[ 4 N_2\!\left(\frac{h}{2}\right) - N_2(h) \right] + \frac{3K_3}{8} h^3 + \cdots,

and dividing by 3 gives an O(h^3) formula for approximating M:

    M = \left[ N_2\!\left(\frac{h}{2}\right) + \frac{N_2(h/2) - N_2(h)}{3} \right] + \frac{K_3}{8} h^3 + \cdots.

By defining

    N_3(h) = N_2\!\left(\frac{h}{2}\right) + \frac{N_2(h/2) - N_2(h)}{3},

we have the O(h^3) formula

    M = N_3(h) + \frac{K_3}{8} h^3 + \cdots.
The process is continued by constructing the O(h^4) approximation

    N_4(h) = N_3\!\left(\frac{h}{2}\right) + \frac{N_3(h/2) - N_3(h)}{7},

the O(h^5) approximation

    N_5(h) = N_4\!\left(\frac{h}{2}\right) + \frac{N_4(h/2) - N_4(h)}{15},

and so on. In general, if M can be written in the form

    M = N(h) + \sum_{j=1}^{m-1} K_j h^j + O(h^m),

then for each j = 2, 3, ..., m, we have an O(h^j) approximation of the form

    N_j(h) = N_{j-1}\!\left(\frac{h}{2}\right) + \frac{N_{j-1}(h/2) - N_{j-1}(h)}{2^{j-1} - 1}.

These approximations are generated row by row, in the order indicated by
the numbered entries in the extrapolation table.
Extrapolation can be applied whenever the truncation error for
a formula has the form

    \sum_{j=1}^{m-1} K_j h^{\alpha_j} + O(h^{\alpha_m}),

for a collection of constants K_j and exponents α_1 < α_2 < α_3 < ... < α_m.
In the next example we have α_j = 2j.

EXAMPLE
The centered-difference formula to approximate f'(x_0)
can be expressed with an error formula

    f'(x_0) = \frac{1}{2h} \left[ f(x_0 + h) - f(x_0 - h) \right] - \frac{h^2}{6} f'''(x_0) - \frac{h^4}{120} f^{(5)}(x_0) - \cdots.
Since this error formula contains only even powers of h,
extrapolation is much more effective. Writing

    f'(x_0) = N_1(h) - \frac{h^2}{6} f'''(x_0) - \frac{h^4}{120} f^{(5)}(x_0) - \cdots,   (1a)

where

    N_1(h) = N(h) = \frac{1}{2h} \left[ f(x_0 + h) - f(x_0 - h) \right],

and replacing h by h/2,

    f'(x_0) = N_1\!\left(\frac{h}{2}\right) - \frac{h^2}{24} f'''(x_0) - \frac{h^4}{1920} f^{(5)}(x_0) - \cdots.   (1b)

Computing [4 × (1b) - (1a)] / 3 gives

    f'(x_0) = N_2(h) + \frac{h^4}{480} f^{(5)}(x_0) + \cdots,

where

    N_2(h) = N_1\!\left(\frac{h}{2}\right) + \frac{N_1(h/2) - N_1(h)}{3}.
Continuing this procedure gives, for each j = 2, 3, ...,
an O(h^{2j}) approximation

    N_j(h) = N_{j-1}\!\left(\frac{h}{2}\right) + \frac{N_{j-1}(h/2) - N_{j-1}(h)}{4^{j-1} - 1}.

Notice that the denominator of the quotient is 4^{j-1} - 1 instead of 2^{j-1} - 1,
because the error expansion contains only even powers of h.

Suppose that x_0 = 2.0, h = 0.2, and f(x) = xe^x. Then

    N_1(0.2) = N(0.2) = \frac{1}{0.4} \left[ f(2.2) - f(1.8) \right] = 22.414160,
    N_1(0.1) = N(0.1) = 22.228786,
    N_1(0.05) = N(0.05) = 22.182564.
The extrapolation table for these data is shown in the accompanying table.
The exact value of f'(x) = xe^x + e^x at x_0 = 2.0 to
six decimal places is 22.167168,
so all the digits of N_3(0.2) are exact,
even though the best original approximation,
N_1(0.05), has only one decimal place of accuracy.
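This extrapolation process can be sketched in Python, rebuilding the N_j table for f(x) = xe^x at x_0 = 2.0 with h = 0.2 (exact function values are used, so the last digit can differ from the text's table, which was built from rounded values):

```python
import math

def f(x):
    return x * math.exp(x)

def n1(h, x0=2.0):
    # Centered difference N1(h) = [f(x0 + h) - f(x0 - h)] / (2h)
    return (f(x0 + h) - f(x0 - h)) / (2 * h)

def richardson(h, levels):
    # Row k holds N1(h/2^k), N2(h/2^(k-1)), ...; denominators are 4^j - 1
    # because the centered-difference error has only even powers of h.
    rows = []
    for k in range(levels):
        row = [n1(h / 2 ** k)]
        for j in range(1, k + 1):
            row.append(row[j - 1] + (row[j - 1] - rows[k - 1][j - 1]) / (4 ** j - 1))
        rows.append(row)
    return rows

table = richardson(0.2, 3)
print(f"N1(0.2) = {table[0][0]:.6f}")   # ≈ 22.414161
print(f"N1(0.1) = {table[1][0]:.6f}")   # ≈ 22.228787
print(f"N3(0.2) = {table[2][2]:.6f}")   # ≈ 22.167168
```

Two levels of extrapolation recover essentially all six correct digits from three crude centered differences.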
4.2 (10)
Since each column beyond the first in the extrapolation table
is obtained by a simple averaging process,
the technique can produce high-order approximations
with minimal computational cost.

However, as k increases, the roundoff error in N1 (h /2k)


will generally increase because the instability of
numerical differentiation is related to the step size h/2k.

Extrapolation can also be used to derive the three- and five-point formulas more easily.
Suppose we expand the function f in a fourth Taylor polynomial about x_0. Then

    f(x) = f(x_0) + f'(x_0)(x - x_0) + \frac{1}{2} f''(x_0)(x - x_0)^2 + \frac{1}{6} f'''(x_0)(x - x_0)^3 + \frac{1}{24} f^{(4)}(x_0)(x - x_0)^4 + \frac{1}{120} f^{(5)}(\xi)(x - x_0)^5,

for some number ξ between x and x_0.
Evaluating f at x_0 + h and x_0 - h gives

    f(x_0 + h) = f(x_0) + f'(x_0)h + \frac{1}{2} f''(x_0)h^2 + \frac{1}{6} f'''(x_0)h^3 + \frac{1}{24} f^{(4)}(x_0)h^4 + \frac{1}{120} f^{(5)}(\xi_1)h^5,

    f(x_0 - h) = f(x_0) - f'(x_0)h + \frac{1}{2} f''(x_0)h^2 - \frac{1}{6} f'''(x_0)h^3 + \frac{1}{24} f^{(4)}(x_0)h^4 - \frac{1}{120} f^{(5)}(\xi_2)h^5,

where x_0 - h < ξ_2 < x_0 < ξ_1 < x_0 + h. Subtracting,

    f(x_0 + h) - f(x_0 - h) = 2h f'(x_0) + \frac{h^3}{3} f'''(x_0) + \frac{h^5}{120} \left[ f^{(5)}(\xi_1) + f^{(5)}(\xi_2) \right].

If f^{(5)} is continuous on [x_0 - h, x_0 + h], the Intermediate Value Theorem
implies that a number \tilde{\xi} in (x_0 - h, x_0 + h) exists with

    f^{(5)}(\tilde{\xi}) = \frac{1}{2} \left[ f^{(5)}(\xi_1) + f^{(5)}(\xi_2) \right].
Although the approximation is the same as that given by
the three-point formula, the unknown evaluation point
now occurs in f^{(5)}, rather than in f'''.
Extrapolation takes advantage of this by first replacing h
with 2h to give a new formula:

    f'(x_0) = \frac{1}{2h} \left[ f(x_0 + h) - f(x_0 - h) \right] - \frac{h^2}{6} f'''(x_0) - \frac{h^4}{120} f^{(5)}(\tilde{\xi}),   (1a)

    f'(x_0) = \frac{1}{4h} \left[ f(x_0 + 2h) - f(x_0 - 2h) \right] - \frac{4h^2}{6} f'''(x_0) - \frac{16h^4}{120} f^{(5)}(\hat{\xi}).   (1b)

Computing [4 × (1a) - (1b)] / 3 gives

    f'(x_0) = \frac{1}{12h} \left[ f(x_0 - 2h) - 8 f(x_0 - h) + 8 f(x_0 + h) - f(x_0 + 2h) \right] + \frac{h^4}{30} f^{(5)}(\xi),

which is the five-point formula.
4.2 (14)
HW EXERCISE SET 4.2 6.
4.3 Elements of Numerical Integration
The basic method involved in approximating \int_a^b f(x)\,dx
is called numerical quadrature.
It uses a sum \sum_{i=0}^{n} a_i f(x_i) to approximate \int_a^b f(x)\,dx.

The methods of quadrature in this section are based on
the interpolation polynomials given in Chapter 3.
We first select a set of distinct nodes {x_0, ..., x_n}
from the interval [a, b].
Then we integrate the Lagrange interpolating polynomial

    P_n(x) = \sum_{i=0}^{n} f(x_i) L_i(x)

and its truncation error term over [a, b] to obtain

    \int_a^b f(x)\,dx = \sum_{i=0}^{n} a_i f(x_i) + \frac{1}{(n+1)!} \int_a^b \prod_{i=0}^{n} (x - x_i)\, f^{(n+1)}(\xi(x))\,dx,

where

    a_i = \int_a^b L_i(x)\,dx \quad\text{for each } i = 0, 1, ..., n.
The quadrature formula is, therefore,

    \int_a^b f(x)\,dx \approx \sum_{i=0}^{n} a_i f(x_i).

Before discussing the general situation of quadrature formulas,
let us consider the formulas produced by
using first and second Lagrange polynomials
with equally spaced nodes.
This gives the Trapezoidal rule and Simpson's rule.
To derive the Trapezoidal rule, let x_0 = a, x_1 = b, h = b - a,
and use the linear Lagrange polynomial

    P_1(x) = \frac{(x - x_1)}{(x_0 - x_1)} f(x_0) + \frac{(x - x_0)}{(x_1 - x_0)} f(x_1).

Then

    \int_a^b f(x)\,dx = \int_{x_0}^{x_1} \left[ \frac{(x - x_1)}{(x_0 - x_1)} f(x_0) + \frac{(x - x_0)}{(x_1 - x_0)} f(x_1) \right] dx + \frac{1}{2} \int_{x_0}^{x_1} f''(\xi(x))(x - x_0)(x - x_1)\,dx.
Since (x - x_0)(x - x_1) does not change sign on [x_0, x_1],
the Weighted Mean Value Theorem for Integrals
can be applied to the error term to give

    \int_{x_0}^{x_1} f''(\xi(x))(x - x_0)(x - x_1)\,dx = f''(\xi) \int_{x_0}^{x_1} (x - x_0)(x - x_1)\,dx
        = f''(\xi) \left[ \frac{x^3}{3} - \frac{(x_1 + x_0)}{2} x^2 + x_0 x_1 x \right]_{x_0}^{x_1}
        = -\frac{h^3}{6} f''(\xi).

Consequently,

    \int_a^b f(x)\,dx = \left[ \frac{(x - x_1)^2}{2(x_0 - x_1)} f(x_0) + \frac{(x - x_0)^2}{2(x_1 - x_0)} f(x_1) \right]_{x_0}^{x_1} - \frac{h^3}{12} f''(\xi)
                      = \frac{(x_1 - x_0)}{2} \left[ f(x_0) + f(x_1) \right] - \frac{h^3}{12} f''(\xi).
Since h = x_1 - x_0, we have the following rule:

Trapezoidal Rule:

    \int_a^b f(x)\,dx = \frac{h}{2} \left[ f(x_0) + f(x_1) \right] - \frac{h^3}{12} f''(\xi).

This is called the Trapezoidal rule because,
when f is a function with positive values,
\int_a^b f(x)\,dx is approximated by the area of a trapezoid.
The rule gives the exact result
when the second derivative of f is identically zero.
4.3 (7)
Simpson's rule results from integrating over [a, b]
the second Lagrange polynomial with nodes x0 = a, x2 = b,
and x1 = a + h, where h = (b - a)/2
Suppose that f is expanded in the third Taylor polynomial about x_1. Then

    f(x) = f(x_1) + f'(x_1)(x - x_1) + \frac{f''(x_1)}{2}(x - x_1)^2 + \frac{f'''(x_1)}{6}(x - x_1)^3 + \frac{f^{(4)}(\xi(x))}{24}(x - x_1)^4,

so

    \int_{x_0}^{x_2} f(x)\,dx = \left[ f(x_1)(x - x_1) + \frac{f'(x_1)}{2}(x - x_1)^2 + \frac{f''(x_1)}{6}(x - x_1)^3 + \frac{f'''(x_1)}{24}(x - x_1)^4 \right]_{x_0}^{x_2} + \frac{1}{24} \int_{x_0}^{x_2} f^{(4)}(\xi(x))(x - x_1)^4\,dx.

Because (x - x_1)^4 is never negative, the Weighted Mean Value Theorem gives

    \frac{1}{24} \int_{x_0}^{x_2} f^{(4)}(\xi(x))(x - x_1)^4\,dx = \frac{f^{(4)}(\xi_1)}{24} \int_{x_0}^{x_2} (x - x_1)^4\,dx = \frac{f^{(4)}(\xi_1)}{120} \left[ (x - x_1)^5 \right]_{x_0}^{x_2},

for some number ξ_1 in (x_0, x_2).
4.3 (9)
However, h = x2 – x1 = x1 – x0,
So
(x2 - x1)2 - (x0 - x1)2 = (x2 – x1)4 - (x0 – x1)4 = 0,
(x2 – x1)3 - (x0 – x1)3 = 2h3 and (x2 – x1)5 - (x0 – x1)5 = 2h5.

Consequently
x2 h 3 '' f ( 4) (1 ) 5
x0 f ( x )dx  2hf ( x1 )  f ( x1 )  h
3 60
If we now replace f''(x_1) by the approximation given in Section 4.1, we have

    \int_{x_0}^{x_2} f(x)\,dx = 2h f(x_1) + \frac{h^3}{3} \left\{ \frac{1}{h^2} \left[ f(x_0) - 2 f(x_1) + f(x_2) \right] - \frac{h^2}{12} f^{(4)}(\xi_2) \right\} + \frac{f^{(4)}(\xi_1)}{60} h^5
                              = \frac{h}{3} \left[ f(x_0) + 4 f(x_1) + f(x_2) \right] - \frac{h^5}{12} \left[ \frac{1}{3} f^{(4)}(\xi_2) - \frac{1}{5} f^{(4)}(\xi_1) \right].

This gives Simpson's rule:

Simpson's Rule:

    \int_{x_0}^{x_2} f(x)\,dx = \frac{h}{3} \left[ f(x_0) + 4 f(x_1) + f(x_2) \right] - \frac{h^5}{90} f^{(4)}(\xi).

Simpson's rule gives exact results
when applied to any polynomial of degree three or less.
EXAMPLE
The Trapezoidal rule for a function f on the interval [0, 2] is

    \int_0^2 f(x)\,dx \approx f(0) + f(2),

and Simpson's rule for f on [0, 2] is

    \int_0^2 f(x)\,dx \approx \frac{1}{3} \left[ f(0) + 4 f(1) + f(2) \right].

The results for some elementary functions are summarized in the accompanying table.
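A quick sketch of both basic rules on [0, 2], of the kind used to build such a table:

```python
import math

def trapezoid(f, a, b):
    # Basic Trapezoidal rule: (h/2)[f(a) + f(b)], with h = b - a
    return (b - a) / 2 * (f(a) + f(b))

def simpson(f, a, b):
    # Basic Simpson's rule: (h/3)[f(a) + 4 f(a+h) + f(b)], with h = (b - a)/2
    h = (b - a) / 2
    return h / 3 * (f(a) + 4 * f(a + h) + f(b))

for name, g, exact in [("x^2", lambda x: x ** 2, 8 / 3),
                       ("x^4", lambda x: x ** 4, 32 / 5),
                       ("sin x", math.sin, 1 - math.cos(2))]:
    print(f"{name:6} exact {exact:.7f}  trapezoid {trapezoid(g, 0, 2):.7f}"
          f"  Simpson {simpson(g, 0, 2):.7f}")
```

Simpson's rule reproduces x² (and any cubic) exactly but not x⁴, while the Trapezoidal rule is exact only through degree one, matching the degrees of precision given below.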
Definition 4.1
The degree of accuracy, or precision, of a quadrature formula
is the largest positive integer n such that
the formula is exact for x^k, for each k = 0, 1, ..., n.

Definition 4.1 implies that the Trapezoidal and Simpson's rules
have degrees of precision one and three, respectively.

The Trapezoidal and Simpson's rules are examples
of a class of methods known as Newton-Cotes formulas.
There are two types of Newton-Cotes formulas, open and closed.

The (n + 1)-point closed Newton-Cotes formula
uses nodes x_i = x_0 + ih, for i = 0, 1, ..., n,
where x_0 = a, x_n = b, and h = (b - a)/n.
The formula assumes the form

    \int_a^b f(x)\,dx \approx \sum_{i=0}^{n} a_i f(x_i),

where

    a_i = \int_{x_0}^{x_n} L_i(x)\,dx = \int_{x_0}^{x_n} \prod_{\substack{j=0 \\ j \neq i}}^{n} \frac{x - x_j}{x_i - x_j}\,dx.
Theorem 4.2
Suppose that \sum_{i=0}^{n} a_i f(x_i) denotes
the (n + 1)-point closed Newton-Cotes formula
with x_0 = a, x_n = b, and h = (b - a)/n.
There exists ξ ∈ (a, b) for which

    \int_a^b f(x)\,dx = \sum_{i=0}^{n} a_i f(x_i) + \frac{h^{n+3} f^{(n+2)}(\xi)}{(n+2)!} \int_0^n t^2 (t - 1) \cdots (t - n)\,dt,

if n is even and f \in C^{n+2}[a, b], and

    \int_a^b f(x)\,dx = \sum_{i=0}^{n} a_i f(x_i) + \frac{h^{n+2} f^{(n+1)}(\xi)}{(n+1)!} \int_0^n t (t - 1) \cdots (t - n)\,dt,

if n is odd and f \in C^{n+1}[a, b].

Note that when n is an even integer,
the degree of precision is n + 1,
although the interpolation polynomial is of degree at most n.
In the case where n is odd, the degree of precision is only n.

Some of the common closed Newton-Cotes formulas
with their error terms are as follows:

n = 1: Trapezoidal rule

    \int_{x_0}^{x_1} f(x)\,dx = \frac{h}{2} \left[ f(x_0) + f(x_1) \right] - \frac{h^3}{12} f''(\xi), \quad\text{where } x_0 < \xi < x_1.

n = 2: Simpson's rule

    \int_{x_0}^{x_2} f(x)\,dx = \frac{h}{3} \left[ f(x_0) + 4 f(x_1) + f(x_2) \right] - \frac{h^5}{90} f^{(4)}(\xi), \quad\text{where } x_0 < \xi < x_2.
n = 3: Simpson's Three-Eighths rule

    \int_{x_0}^{x_3} f(x)\,dx = \frac{3h}{8} \left[ f(x_0) + 3 f(x_1) + 3 f(x_2) + f(x_3) \right] - \frac{3h^5}{80} f^{(4)}(\xi), \quad\text{where } x_0 < \xi < x_3.

n = 4:

    \int_{x_0}^{x_4} f(x)\,dx = \frac{2h}{45} \left[ 7 f(x_0) + 32 f(x_1) + 12 f(x_2) + 32 f(x_3) + 7 f(x_4) \right] - \frac{8h^7}{945} f^{(6)}(\xi), \quad\text{where } x_0 < \xi < x_4.
The open Newton-Cotes formulas use the nodes x_i = x_0 + ih,
for each i = 0, 1, ..., n,
where h = (b - a)/(n + 2) and x_0 = a + h.
This implies that x_n = b - h, so we set x_{-1} = a and x_{n+1} = b.
The formulas become

    \int_a^b f(x)\,dx = \int_{x_{-1}}^{x_{n+1}} f(x)\,dx \approx \sum_{i=0}^{n} a_i f(x_i), \quad\text{where}\quad a_i = \int_a^b L_i(x)\,dx.
Theorem 4.3
Suppose that \sum_{i=0}^{n} a_i f(x_i) denotes
the (n + 1)-point open Newton-Cotes formula
with x_{-1} = a, x_{n+1} = b, and h = (b - a)/(n + 2).
There exists ξ ∈ (a, b) for which

    \int_a^b f(x)\,dx = \sum_{i=0}^{n} a_i f(x_i) + \frac{h^{n+3} f^{(n+2)}(\xi)}{(n+2)!} \int_{-1}^{n+1} t^2 (t - 1) \cdots (t - n)\,dt,

if n is even and f \in C^{n+2}[a, b], and

    \int_a^b f(x)\,dx = \sum_{i=0}^{n} a_i f(x_i) + \frac{h^{n+2} f^{(n+1)}(\xi)}{(n+1)!} \int_{-1}^{n+1} t (t - 1) \cdots (t - n)\,dt,

if n is odd and f \in C^{n+1}[a, b].
Some of the common open Newton-Cotes formulas are as follows:

n = 0: Midpoint rule

    \int_{x_{-1}}^{x_1} f(x)\,dx = 2h f(x_0) + \frac{h^3}{3} f''(\xi), \quad\text{where } x_{-1} < \xi < x_1.

n = 1:

    \int_{x_{-1}}^{x_2} f(x)\,dx = \frac{3h}{2} \left[ f(x_0) + f(x_1) \right] + \frac{3h^3}{4} f''(\xi), \quad\text{where } x_{-1} < \xi < x_2.

n = 2:

    \int_{x_{-1}}^{x_3} f(x)\,dx = \frac{4h}{3} \left[ 2 f(x_0) - f(x_1) + 2 f(x_2) \right] + \frac{14h^5}{45} f^{(4)}(\xi), \quad\text{where } x_{-1} < \xi < x_3.

n = 3:

    \int_{x_{-1}}^{x_4} f(x)\,dx = \frac{5h}{24} \left[ 11 f(x_0) + f(x_1) + f(x_2) + 11 f(x_3) \right] + \frac{95h^5}{144} f^{(4)}(\xi), \quad\text{where } x_{-1} < \xi < x_4.
EXAMPLE
Using the closed and open Newton-Cotes formulas to approximate

    \int_0^{\pi/4} \sin x\,dx = 1 - \frac{\sqrt{2}}{2} \approx 0.29289322

gives the results summarized in the accompanying table.
4.3 (21)

HW. EXERCISE SET 4.3


16. c. d.
4.4 Composite Numerical Integration
The Newton-Cotes formulas are generally unsuitable
for use over large integration intervals.

In this section, we discuss a piecewise approach to numerical integration


that uses the low-order Newton-Cotes formulas.
Consider finding an approximation to \int_0^4 e^x\,dx.
Simpson's rule with h = 2 gives

    \int_0^4 e^x\,dx \approx \frac{2}{3} \left( e^0 + 4e^2 + e^4 \right) = 56.76958.

Since the exact answer in this case is e^4 - e^0 = 53.59815,
the error -3.17143 is far larger than we would normally accept.
To apply a piecewise technique to this problem,
divide [0, 4] into [0, 2] and [2, 4], and
use Simpson's rule twice with h = 1.
This gives

    \int_0^4 e^x\,dx = \int_0^2 e^x\,dx + \int_2^4 e^x\,dx
                     \approx \frac{1}{3} \left[ e^0 + 4e + e^2 \right] + \frac{1}{3} \left[ e^2 + 4e^3 + e^4 \right]
                     = \frac{1}{3} \left[ e^0 + 4e + 2e^2 + 4e^3 + e^4 \right]
                     = 53.86385.

The error has been reduced to -0.26570.
If we subdivide further, applying Simpson's rule on [0, 1], [1, 2], [2, 3], and [3, 4]
with h = 1/2, we get

    \int_0^4 e^x\,dx = \int_0^1 e^x\,dx + \int_1^2 e^x\,dx + \int_2^3 e^x\,dx + \int_3^4 e^x\,dx
                     \approx \frac{1}{6} \left[ e^0 + 4e^{1/2} + 2e + 4e^{3/2} + 2e^2 + 4e^{5/2} + 2e^3 + 4e^{7/2} + e^4 \right]
                     = 53.61622.

The error for this approximation is -0.01807.
To generalize this procedure, choose an even integer n.
Subdivide the interval [a, b] into n subintervals, and
apply Simpson's rule on each consecutive pair of subintervals.

With h = (b - a)/n and x_j = a + jh, for each j = 0, 1, ..., n, we have

    \int_a^b f(x)\,dx = \sum_{j=1}^{n/2} \int_{x_{2j-2}}^{x_{2j}} f(x)\,dx
                      = \sum_{j=1}^{n/2} \left\{ \frac{h}{3} \left[ f(x_{2j-2}) + 4 f(x_{2j-1}) + f(x_{2j}) \right] - \frac{h^5}{90} f^{(4)}(\xi_j) \right\},

for some ξ_j with x_{2j-2} < ξ_j < x_{2j}, provided that f \in C^4[a, b].

For each j = 1, 2, ..., (n/2) - 1, the value f(x_{2j}) appears both in the term
for the interval [x_{2j-2}, x_{2j}] and in the term for the interval [x_{2j}, x_{2j+2}],
so we can reduce this sum to

    \int_a^b f(x)\,dx = \frac{h}{3} \left[ f(x_0) + 2 \sum_{j=1}^{(n/2)-1} f(x_{2j}) + 4 \sum_{j=1}^{n/2} f(x_{2j-1}) + f(x_n) \right] - \frac{h^5}{90} \sum_{j=1}^{n/2} f^{(4)}(\xi_j).
Theorem 4.4
Let f \in C^4[a, b], n be even, h = (b - a)/n,
and x_j = a + jh, for each j = 0, 1, ..., n.
There exists a μ ∈ (a, b) for which
the Composite Simpson's rule for n subintervals
can be written with its error term as

    \int_a^b f(x)\,dx = \frac{h}{3} \left[ f(a) + 2 \sum_{j=1}^{(n/2)-1} f(x_{2j}) + 4 \sum_{j=1}^{n/2} f(x_{2j-1}) + f(b) \right] - \frac{b - a}{180} h^4 f^{(4)}(\mu).
ALGORITHM 4.1 Composite Simpson's Rule
To approximate the integral I = \int_a^b f(x)\,dx:

INPUT endpoints a, b; even positive integer n.
OUTPUT approximation XI to I.

Step 1 Set h = (b - a)/n.
Step 2 Set XI0 = f(a) + f(b);
       XI1 = 0;  (Summation of f(x_{2i-1}).)
       XI2 = 0.  (Summation of f(x_{2i}).)
Step 3 For i = 1, ..., n - 1 do Steps 4 and 5.
Step 4   Set X = a + ih.
Step 5   If i is even then set XI2 = XI2 + f(X)
         else set XI1 = XI1 + f(X).
Step 6 Set XI = h(XI0 + 2·XI2 + 4·XI1)/3.
Step 7 OUTPUT (XI);
       STOP.

The subdivision approach can be applied


to any of the Newton-Cotes formulas.
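Algorithm 4.1 translates almost line for line into Python. A sketch, rerun on the ∫₀⁴ eˣ dx example above:

```python
import math

def composite_simpson(f, a, b, n):
    # Algorithm 4.1 (Composite Simpson's rule); n must be an even positive integer.
    if n <= 0 or n % 2 != 0:
        raise ValueError("n must be an even positive integer")
    h = (b - a) / n
    xi0 = f(a) + f(b)
    xi1 = 0.0   # summation of f(x_{2i-1})
    xi2 = 0.0   # summation of f(x_{2i})
    for i in range(1, n):
        x = a + i * h
        if i % 2 == 0:
            xi2 += f(x)
        else:
            xi1 += f(x)
    return h * (xi0 + 2 * xi2 + 4 * xi1) / 3

exact = math.exp(4) - 1
for n in (2, 4, 8):
    approx = composite_simpson(math.exp, 0, 4, n)
    print(f"n = {n}: {approx:.5f}  error {exact - approx:+.5f}")
```

With n = 2, 4, and 8 this reproduces the 56.76958, 53.86385, and 53.61622 approximations obtained above.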
Theorem 4.5
Let f \in C^2[a, b], h = (b - a)/n,
and x_j = a + jh, for each j = 0, 1, ..., n.
There exists a μ ∈ (a, b) for which
the Composite Trapezoidal rule for n subintervals
can be written with its error term as

    \int_a^b f(x)\,dx = \frac{h}{2} \left[ f(a) + 2 \sum_{j=1}^{n-1} f(x_j) + f(b) \right] - \frac{b - a}{12} h^2 f''(\mu).
Theorem 4.6
Let f \in C^2[a, b], n be even, h = (b - a)/(n + 2),
and x_j = a + (j + 1)h for each j = -1, 0, ..., n + 1.
There exists a μ ∈ (a, b) for which
the Composite Midpoint rule for n + 2 subintervals
can be written with its error term as

    \int_a^b f(x)\,dx = 2h \sum_{j=0}^{n/2} f(x_{2j}) + \frac{b - a}{6} h^2 f''(\mu).
EXAMPLE
Consider approximating \int_0^{\pi} \sin x\,dx
with an absolute error less than 0.00002.
The Composite Simpson's rule gives, for some μ in (0, π),

    \int_0^{\pi} \sin x\,dx = \frac{h}{3} \left[ 2 \sum_{j=1}^{(n/2)-1} \sin x_{2j} + 4 \sum_{j=1}^{n/2} \sin x_{2j-1} \right] - \frac{\pi h^4}{180} \sin\mu.

The inequality

    \left| \frac{\pi h^4}{180} \sin\mu \right| \le \frac{\pi h^4}{180} = \frac{\pi^5}{180 n^4} < 0.00002

is used to determine n and h.
Completing these calculations gives n ≥ 18.
If n = 20, then h = π/20, and the formula gives

    \int_0^{\pi} \sin x\,dx \approx \frac{\pi}{60} \left[ 2 \sum_{j=1}^{9} \sin\!\left(\frac{j\pi}{10}\right) + 4 \sum_{j=1}^{10} \sin\!\left(\frac{(2j - 1)\pi}{20}\right) \right] = 2.000006.

The Composite Trapezoidal rule requires

    \left| \frac{\pi h^2}{12} \sin\mu \right| \le \frac{\pi h^2}{12} = \frac{\pi^3}{12 n^2} < 0.00002, \quad\text{or } n \ge 360.

The Composite Trapezoidal rule with n = 20 and h = π/20 gives

    \int_0^{\pi} \sin x\,dx \approx \frac{\pi}{40} \left[ 2 \sum_{j=1}^{19} \sin\!\left(\frac{j\pi}{20}\right) + \sin 0 + \sin\pi \right] = 1.9958860.
The exact answer is 2,
so Simpson’s rule with n = 20 gave an answer well
within the required error bound,
whereas the Trapezoidal rule with n = 20 clearly did not.

HW. EXERCISE SET 4.4

[h] [p] 14.b(9)


4.5 Romberg Integration
Romberg integration uses the Composite Trapezoidal rule
to give preliminary approximations and
then applies the Richardson extrapolation process
to improve the approximations.
The Composite Trapezoidal rule for
approximating the integral of a function f on an interval [a, b]
using m subintervals is

    \int_a^b f(x)\,dx = \frac{h}{2} \left[ f(a) + f(b) + 2 \sum_{j=1}^{m-1} f(x_j) \right] - \frac{(b - a)}{12} h^2 f''(\mu),

where a < μ < b, h = (b - a)/m, and
x_j = a + jh, for each j = 0, 1, ..., m.
We first obtain Composite Trapezoidal rule approximations
with m_1 = 1, m_2 = 2, m_3 = 4, ..., and m_n = 2^{n-1},
where n is a positive integer.
The step size h_k corresponding to m_k is h_k = (b - a)/m_k = (b - a)/2^{k-1}.

The Trapezoidal rule becomes

    \int_a^b f(x)\,dx = \frac{h_k}{2} \left[ f(a) + f(b) + 2 \sum_{i=1}^{2^{k-1} - 1} f(a + i h_k) \right] - \frac{(b - a)}{12} h_k^2 f''(\mu_k),

where μ_k is a number in (a, b).
The notation R_{k,1} is introduced for these Composite Trapezoidal approximations:

    R_{1,1} = \frac{h_1}{2} \left[ f(a) + f(b) \right] = \frac{(b - a)}{2} \left[ f(a) + f(b) \right],

    R_{2,1} = \frac{h_2}{2} \left[ f(a) + f(b) + 2 f(a + h_2) \right]
            = \frac{(b - a)}{4} \left[ f(a) + f(b) + 2 f\!\left( a + \frac{b - a}{2} \right) \right]
            = \frac{1}{2} \left[ R_{1,1} + h_1 f(a + h_2) \right],

    R_{3,1} = \frac{1}{2} \left\{ R_{2,1} + h_2 \left[ f(a + h_3) + f(a + 3h_3) \right] \right\},

and, in general, for each k = 2, 3, ..., n,

    R_{k,1} = \frac{1}{2} \left[ R_{k-1,1} + h_{k-1} \sum_{i=1}^{2^{k-2}} f(a + (2i - 1) h_k) \right].
EXAMPLE
Performing the first step of the Romberg integration
scheme for approximating \int_0^{\pi} \sin x\,dx
with n = 6 leads to the equations

    R_{1,1} = \frac{\pi}{2} \left[ \sin 0 + \sin\pi \right] = 0;
    R_{2,1} = \frac{1}{2} \left[ R_{1,1} + \pi \sin\frac{\pi}{2} \right] = 1.57079633;
    R_{3,1} = \frac{1}{2} \left[ R_{2,1} + \frac{\pi}{2} \left( \sin\frac{\pi}{4} + \sin\frac{3\pi}{4} \right) \right] = 1.89611890;
    R_{4,1} = \frac{1}{2} \left[ R_{3,1} + \frac{\pi}{4} \left( \sin\frac{\pi}{8} + \sin\frac{3\pi}{8} + \sin\frac{5\pi}{8} + \sin\frac{7\pi}{8} \right) \right] = 1.97423160;
    R_{5,1} = 1.99357034; \quad R_{6,1} = 1.99839336.

The correct value for the integral is 2.
Richardson extrapolation will be used to speed the convergence.
If f \in C^{\infty}[a, b], the Composite Trapezoidal rule can be written
with an alternative error term in the form

    \int_a^b f(x)\,dx - R_{k,1} = K_1 h_k^2 + K_2 h_k^4 + \cdots = \sum_{i=1}^{\infty} K_i h_k^{2i},   (1)

where each K_i is independent of h_k and
depends only on f^{(2i-1)}(a) and f^{(2i-1)}(b).

With h_k replaced by h_{k+1} = h_k / 2,

    \int_a^b f(x)\,dx - R_{k+1,1} = \sum_{i=1}^{\infty} K_i h_{k+1}^{2i} = \sum_{i=1}^{\infty} \frac{K_i h_k^{2i}}{4^i} = \frac{K_1 h_k^2}{4} + \sum_{i=2}^{\infty} \frac{K_i h_k^{2i}}{4^i}.   (2)
Subtracting (1) from 4 times (2) and dividing by 3 gives the O(h_k^4) formula

    \int_a^b f(x)\,dx - \left[ R_{k+1,1} + \frac{R_{k+1,1} - R_{k,1}}{3} \right] = \sum_{i=2}^{\infty} \frac{K_i}{3} \left( \frac{1 - 4^{i-1}}{4^{i-1}} \right) h_k^{2i}.

Extrapolation can then be applied to this formula
to obtain an O(h_k^6) result, and so on.
We define

    R_{k,2} = R_{k,1} + \frac{R_{k,1} - R_{k-1,1}}{3}, \quad\text{for each } k = 2, 3, ..., n.

Continuing, for each k = 2, 3, 4, ..., n and j = 2, ..., k,
an O(h_k^{2j}) approximation formula is defined by

    R_{k,j} = R_{k,j-1} + \frac{R_{k,j-1} - R_{k-1,j-1}}{4^{j-1} - 1},

which eliminates the h_k^{2(j-1)} error term.

The results that are generated from these formulas form a triangular table.
4.5(11)
The Romberg technique allows
an entire new row in the table to be calculated
by doing one additional application
of the composite Trapezoidal rule.

Then it uses an averaging of the previously calculated values


to obtain the succeeding entries in the row.

The method used to construct a table of this type


calculates the entries row by row, that is,
in the order R1,1, R2,1, R2,2,R3,1,R3,2,R3,3, etc.
ALGORITHM 4.2 Romberg
To approximate the integral I = \int_a^b f(x)\,dx, select an integer n > 0.

INPUT endpoints a, b; integer n.
OUTPUT an array R.
(Compute R by rows; only the last 2 rows are saved in storage.)

Step 1 Set h = b - a;
       R_{1,1} = (h/2)(f(a) + f(b)).
Step 2 OUTPUT (R_{1,1}).
Step 3 For i = 2, ..., n do Steps 4-8.
Step 4   Set R_{2,1} = \frac{1}{2} \left[ R_{1,1} + h \sum_{k=1}^{2^{i-2}} f(a + (k - 0.5)h) \right].
         (Approximation from the Composite Trapezoidal method.)
Step 5   For j = 2, ..., i set R_{2,j} = R_{2,j-1} + \frac{R_{2,j-1} - R_{1,j-1}}{4^{j-1} - 1}.  (Extrapolation.)
Step 6   OUTPUT (R_{2,j} for j = 1, 2, ..., i).
Step 7   Set h = h/2.
Step 8   For j = 1, 2, ..., i set R_{1,j} = R_{2,j}.  (Update row 1 of R.)
Step 9 STOP.
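A Python sketch of Algorithm 4.2; for clarity it stores the whole table rather than only the last two rows, and it reproduces the ∫₀^π sin x dx example:

```python
import math

def romberg(f, a, b, n):
    # R[k][j] corresponds to R_{k+1, j+1} in the text.
    R = [[0.0] * n for _ in range(n)]
    h = b - a
    R[0][0] = h / 2 * (f(a) + f(b))
    for k in range(1, n):
        # Refine the Composite Trapezoidal value, reusing R[k-1][0] (Step 4).
        new_points = sum(f(a + (i - 0.5) * h) for i in range(1, 2 ** (k - 1) + 1))
        R[k][0] = (R[k - 1][0] + h * new_points) / 2
        h /= 2
        # Richardson extrapolation across the row (Step 5).
        for j in range(1, k + 1):
            R[k][j] = R[k][j - 1] + (R[k][j - 1] - R[k - 1][j - 1]) / (4 ** j - 1)
    return R

R = romberg(math.sin, 0, math.pi, 6)
print(f"R_1,1 = {R[0][0]:.8f}")   # ≈ 0
print(f"R_6,1 = {R[5][0]:.8f}")   # ≈ 1.99839336
print(f"R_6,6 = {R[5][5]:.8f}")   # ≈ 2, the exact value
```

Each new row costs only the function evaluations of one trapezoidal refinement; every entry to its right is an inexpensive weighted average.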
EXAMPLE
The values for R_{1,1} through R_{6,1} were obtained above for
approximating \int_0^{\pi} \sin x\,dx.

With Algorithm 4.2, the Romberg table is shown in the accompanying table.

Although there are 21 entries in the table,
only the six in the first column require function evaluations.
The other entries are obtained by an averaging process.
Algorithm 4.2 requires a preset integer n.
We could instead set an error tolerance for the approximation
and generate rows, up to some bound n_max,
until consecutive diagonal entries R_{n-1,n-1} and R_{n,n}
agree to within the tolerance.

To guard against the possibility


that two consecutive row elements agree with each other
but not with the value of the integral being approximated,
it is common to generate approximations
until not only |Rn-1,n-1 – Rn,n| is within the tolerance,
but also |Rn-2,n-2 – Rn-1,n-1|.

We must have f C2k+2[a, b] for the kth row to be generated.


4.5(16)
General-purpose algorithms check at each stage to ensure
that this assumption is fulfilled.

These methods are known as cautious Romberg Algorithms.

HW. EXERCISE SET 4.5


[h][p]4. a
4.6 Multiple Integrals
The techniques discussed can be modified for use
in the approximation of multiple integrals.
Consider the double integral

    \iint_R f(x, y)\,dA,

where R = \{(x, y) \mid a \le x \le b,\; c \le y \le d\},
for some constants a, b, c, and d,
is a rectangular region in the plane.
4.6(2)
We will employ the Composite Simpson's rule
to illustrate the approximation technique,
although any other composite formula could be used
in its place.
To apply the Composite Simpson's rule,
we divide the region R by partitioning both [a, b] and [c, d]
into an even number of subintervals.
With the evenly spaced mesh points x0, x1, … , xn
and y0, yl, ... , ym, respectively,
step sizes are h = (b - a)/n and k = (d - c)/m.
Writing the double integral as an iterated integral,

    \iint_R f(x, y)\,dA = \int_a^b \left( \int_c^d f(x, y)\,dy \right) dx.
We first use the Composite Simpson's rule
to approximate \int_c^d f(x, y)\,dy, treating x as a constant.

Let y_j = c + jk, for each j = 0, 1, ..., m. Then

    \int_c^d f(x, y)\,dy = \frac{k}{3} \left[ f(x, y_0) + 2 \sum_{j=1}^{(m/2)-1} f(x, y_{2j}) + 4 \sum_{j=1}^{m/2} f(x, y_{2j-1}) + f(x, y_m) \right] - \frac{(d - c) k^4}{180} \frac{\partial^4 f}{\partial y^4}(x, \mu),

for some μ in (c, d).
Thus,

    \int_a^b \int_c^d f(x, y)\,dy\,dx = \frac{k}{3} \left[ \int_a^b f(x, y_0)\,dx + 2 \sum_{j=1}^{(m/2)-1} \int_a^b f(x, y_{2j})\,dx + 4 \sum_{j=1}^{m/2} \int_a^b f(x, y_{2j-1})\,dx + \int_a^b f(x, y_m)\,dx \right] - \frac{(d - c) k^4}{180} \int_a^b \frac{\partial^4 f}{\partial y^4}(x, \mu)\,dx.

The Composite Simpson's rule is now employed on each of the integrals
in this equation.
Let x_i = a + ih, for each i = 0, 1, ..., n.
Then for each j = 0, 1, ..., m, we have

    \int_a^b f(x, y_j)\,dx = \frac{h}{3} \left[ f(x_0, y_j) + 2 \sum_{i=1}^{(n/2)-1} f(x_{2i}, y_j) + 4 \sum_{i=1}^{n/2} f(x_{2i-1}, y_j) + f(x_n, y_j) \right] - \frac{(b - a) h^4}{180} \frac{\partial^4 f}{\partial x^4}(\xi_j, y_j),

for some ξ_j in (a, b).
The resulting approximation has the form

    \int_a^b \int_c^d f(x, y)\,dy\,dx \approx \frac{hk}{9} \Bigg\{ \left[ f(x_0, y_0) + 2 \sum_{i=1}^{(n/2)-1} f(x_{2i}, y_0) + 4 \sum_{i=1}^{n/2} f(x_{2i-1}, y_0) + f(x_n, y_0) \right]

        + 2 \left[ \sum_{j=1}^{(m/2)-1} f(x_0, y_{2j}) + 2 \sum_{j=1}^{(m/2)-1} \sum_{i=1}^{(n/2)-1} f(x_{2i}, y_{2j}) + 4 \sum_{j=1}^{(m/2)-1} \sum_{i=1}^{n/2} f(x_{2i-1}, y_{2j}) + \sum_{j=1}^{(m/2)-1} f(x_n, y_{2j}) \right]

        + 4 \left[ \sum_{j=1}^{m/2} f(x_0, y_{2j-1}) + 2 \sum_{j=1}^{m/2} \sum_{i=1}^{(n/2)-1} f(x_{2i}, y_{2j-1}) + 4 \sum_{j=1}^{m/2} \sum_{i=1}^{n/2} f(x_{2i-1}, y_{2j-1}) + \sum_{j=1}^{m/2} f(x_n, y_{2j-1}) \right]

        + \left[ f(x_0, y_m) + 2 \sum_{i=1}^{(n/2)-1} f(x_{2i}, y_m) + 4 \sum_{i=1}^{n/2} f(x_{2i-1}, y_m) + f(x_n, y_m) \right] \Bigg\}.
The error term E is given by

    E = -\frac{k (b - a) h^4}{540} \left[ \frac{\partial^4 f}{\partial x^4}(\xi_0, y_0) + 2 \sum_{j=1}^{(m/2)-1} \frac{\partial^4 f}{\partial x^4}(\xi_{2j}, y_{2j}) + 4 \sum_{j=1}^{m/2} \frac{\partial^4 f}{\partial x^4}(\xi_{2j-1}, y_{2j-1}) + \frac{\partial^4 f}{\partial x^4}(\xi_m, y_m) \right] - \frac{(d - c) k^4}{180} \int_a^b \frac{\partial^4 f}{\partial y^4}(x, \mu)\,dx.

If \partial^4 f / \partial x^4 and \partial^4 f / \partial y^4 are both continuous,
the error term has the form

    E = -\frac{(d - c)(b - a)}{180} \left[ h^4 \frac{\partial^4 f}{\partial x^4}(\bar{\eta}, \bar{\mu}) + k^4 \frac{\partial^4 f}{\partial y^4}(\hat{\eta}, \hat{\mu}) \right],

for some (\bar{\eta}, \bar{\mu}) and (\hat{\eta}, \hat{\mu}) in R.
4.6(7)
EXAMPLE
The Composite Simpson's rule applied to approximate
\[
\int_{1.4}^{2.0} \int_{1.0}^{1.5} \ln(x+2y)\,dy\,dx,
\]
with n = 4 and m = 2 uses the step sizes h = 0.15 and k = 0.25.
The region of integration R is shown below, together with the nodes (x_i, y_j), where i = 0, 1, 2, 3, 4 and j = 0, 1, 2, and the coefficients w_{i,j} of f(x_i, y_j) = ln(x_i + 2y_j) in the sum.
4.6(8)
The approximation is
\[
\int_{1.4}^{2.0} \int_{1.0}^{1.5} \ln(x+2y)\,dy\,dx \approx \frac{(0.15)(0.25)}{9} \sum_{i=0}^{4} \sum_{j=0}^{2} w_{i,j} \ln(x_i+2y_j) = 0.4295524387.
\]
Since
\[
\frac{\partial^4 f}{\partial x^4} = \frac{-6}{(x+2y)^4} \quad\text{and}\quad \frac{\partial^4 f}{\partial y^4} = \frac{-96}{(x+2y)^4},
\]
and the maximum value of 1/(x + 2y)⁴ on R occurs at (1.4, 1.0), the error is bounded by
\[
|E| \le \frac{(0.5)(0.6)}{180} \left[ (0.15)^4 \max_{(x,y)\in R} \frac{6}{(x+2y)^4} + (0.25)^4 \max_{(x,y)\in R} \frac{96}{(x+2y)^4} \right] \le 4.72 \times 10^{-6}.
\]
The actual value of the integral to ten decimal places is
\[
\int_{1.4}^{2.0} \int_{1.0}^{1.5} \ln(x+2y)\,dy\,dx = 0.4295545265,
\]
so the approximation is accurate to within 2.1 × 10⁻⁶.
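This calculation can be cross-checked with a short script. The sketch below (our own helper names, not the book's code) builds the Simpson coefficient pattern in each variable and forms the weighted double sum from the formula above:

```python
import math

def simpson_weights(n):
    # Composite Simpson coefficient pattern 1, 4, 2, 4, ..., 2, 4, 1 (n even).
    return [1 if i in (0, n) else (4 if i % 2 else 2) for i in range(n + 1)]

def double_simpson(f, a, b, c, d, n, m):
    # Composite Simpson's rule in both variables on the rectangle [a,b] x [c,d]:
    # (hk/9) * sum over i, j of w_i * w_j * f(x_i, y_j).
    h, k = (b - a) / n, (d - c) / m
    wx, wy = simpson_weights(n), simpson_weights(m)
    return h * k / 9 * sum(wx[i] * wy[j] * f(a + i * h, c + j * k)
                           for i in range(n + 1) for j in range(m + 1))

approx = double_simpson(lambda x, y: math.log(x + 2 * y), 1.4, 2.0, 1.0, 1.5, 4, 2)
print(approx)  # about 0.4295524, vs the true value 0.4295545265
```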


4.6(9)
To reduce the number of functional evaluations, more efficient methods, such as Romberg integration, can be incorporated in place of the Newton–Cotes formulas.
The techniques previously discussed can be modified to approximate double integrals of the form
\[
\int_a^b \int_{c(x)}^{d(x)} f(x,y)\,dy\,dx \quad\text{or}\quad \int_c^d \int_{a(y)}^{b(y)} f(x,y)\,dx\,dy.
\]
To describe the technique involved
with approximating an integral in the form
\[
\int_a^b \int_{c(x)}^{d(x)} f(x,y)\,dy\,dx,
\]
we will use the basic Simpson's rule to integrate
with respect to both variables.
4.6(10)
The step size for the variable x is h = (b - a) /2,
but the step size for y varies with x and is written
\[
k(x) = \frac{d(x)-c(x)}{2}.
\]
4.6(11)
Consequently,
\[
\begin{aligned}
\int_a^b \int_{c(x)}^{d(x)} f(x,y)\,dy\,dx &\approx \int_a^b \frac{k(x)}{3} \big[ f(x,c(x)) + 4f(x,c(x)+k(x)) + f(x,d(x)) \big]\,dx \\
&\approx \frac{h}{3} \bigg\{ \frac{k(a)}{3} \big[ f(a,c(a)) + 4f(a,c(a)+k(a)) + f(a,d(a)) \big] \\
&\qquad + \frac{4k(a+h)}{3} \big[ f(a+h,c(a+h)) + 4f(a+h,c(a+h)+k(a+h)) + f(a+h,d(a+h)) \big] \\
&\qquad + \frac{k(b)}{3} \big[ f(b,c(b)) + 4f(b,c(b)+k(b)) + f(b,d(b)) \big] \bigg\}.
\end{aligned}
\]
4.6(12)
ALGORITHM 4.4 Simpson's Double Integral

To approximate the integral
\[
I = \int_a^b \int_{c(x)}^{d(x)} f(x,y)\,dy\,dx:
\]
INPUT endpoints a, b; even positive integers m, n.
OUTPUT approximation J to I.
Step 1 Set h = (b - a) / n;
J1 = 0; (End terms.)
J2 = 0; (Even terms.)
J3 = 0. (Odd terms.)
Step 2 For i = 0, 1, . . . , n do Steps 3-8.
Step 3 Set x = a + ih; (Composite Simpson's method for x.)
HX = ( d(x) - c(x) ) / m;
K1 = f(x, c(x)) + f(x, d(x)); (End terms.)
K2 = 0; (Even terms.)
K3 = 0. (Odd terms.)
4.6(13)
Step 4 For j = 1, 2, …, m − 1 do Steps 5 and 6.
Step 5 Set y = c(x) +jHX;
Q = f (x, y).
Step 6 If j is even then set K2 = K2 + Q
else set K3 = K3 + Q.
Step 7 Set L = (K1 + 2K2 + 4K3)HX / 3.
    (\(L \approx \int_{c(x_i)}^{d(x_i)} f(x_i, y)\,dy\) by the Composite Simpson's method.)
Step 8 If i = 0 or i = n then set J1 = J1 + L
    else if i is even then set J2 = J2 + L
    else set J3 = J3 + L.
Step 9 Set J = h(J1 + 2J2 + 4J3)/3.
Step 10 OUTPUT (J);
STOP.
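Algorithm 4.4 translates nearly line for line into code. The sketch below (our own naming, not the textbook's implementation) follows Steps 1–10 and applies it to the e^{y/x} integral from the example below:

```python
import math

def simpson_double_integral(f, a, b, c, d, n, m):
    # Algorithm 4.4: composite Simpson in x (n subintervals, n even) with a
    # composite Simpson pass in y (m subintervals, m even) at each x-node.
    h = (b - a) / n
    J1 = J2 = J3 = 0.0                       # end / even / odd terms in x
    for i in range(n + 1):
        x = a + i * h
        HX = (d(x) - c(x)) / m
        K1 = f(x, c(x)) + f(x, d(x))         # end terms in y
        K2 = K3 = 0.0                        # even / odd terms in y
        for j in range(1, m):
            y = c(x) + j * HX
            if j % 2 == 0:
                K2 += f(x, y)
            else:
                K3 += f(x, y)
        L = (K1 + 2 * K2 + 4 * K3) * HX / 3  # inner integral at x_i
        if i == 0 or i == n:
            J1 += L
        elif i % 2 == 0:
            J2 += L
        else:
            J3 += L
    return h * (J1 + 2 * J2 + 4 * J3) / 3

J = simpson_double_integral(lambda x, y: math.exp(y / x),
                            0.1, 0.5, lambda x: x**3, lambda x: x**2, 10, 10)
print(J)  # about 0.0333054
```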
4.6(14)
EXAMPLE
The volume of the solid in the figure is approximated by applying Simpson's Double Integral Algorithm with n = m = 10 to
\[
\int_{0.1}^{0.5} \int_{x^3}^{x^2} e^{y/x}\,dy\,dx.
\]
4.6(15)
This requires 121 evaluations of the function f(x, y) = e^{y/x} and produces the approximation 0.0333054, which approximates the volume of the solid shown in the figure to nearly seven decimal places.
4.6(16)
HW. EXERCISE SET 4.8
10.
4.7 Improper Integrals
Improper integrals result when the notion of integration is
extended either to an interval of integration
on which the function is unbounded or to an interval
with one or more infinite endpoints. In either circumstance,
the normal rules of integral approximation must be modified.

First consider the situation when the integrand is unbounded at the left endpoint of the interval of integration, as shown in the figure.
4.7(2)
The improper integral with a singularity at the left endpoint,
\[
\int_a^b \frac{dx}{(x-a)^p},
\]
converges if and only if 0 < p < 1, and in this case we define
\[
\int_a^b \frac{dx}{(x-a)^p} = \frac{(b-a)^{1-p}}{1-p}.
\]
If f is a function that can be written in the form
\[
f(x) = \frac{g(x)}{(x-a)^p},
\]
where 0 < p < 1 and g is continuous on [a, b], then the improper integral
\[
\int_a^b f(x)\,dx
\]
also exists.
4.7(3)
We will approximate this integral using the Composite Simpson's rule, provided that g ∈ C⁵[a, b]. In that case, we can construct the fourth Taylor polynomial, P₄(x), for g about a,
\[
P_4(x) = g(a) + g'(a)(x-a) + \frac{g''(a)}{2!}(x-a)^2 + \frac{g'''(a)}{3!}(x-a)^3 + \frac{g^{(4)}(a)}{4!}(x-a)^4,
\]
and write
\[
\int_a^b f(x)\,dx = \int_a^b \frac{g(x)-P_4(x)}{(x-a)^p}\,dx + \int_a^b \frac{P_4(x)}{(x-a)^p}\,dx.
\]
Since P₄(x) is a polynomial, we can exactly determine the value of
\[
\int_a^b \frac{P_4(x)}{(x-a)^p}\,dx = \sum_{k=0}^{4} \frac{g^{(k)}(a)}{k!} \int_a^b (x-a)^{k-p}\,dx = \sum_{k=0}^{4} \frac{g^{(k)}(a)}{k!\,(k+1-p)}\,(b-a)^{k+1-p}.
\]
4.7(4)
This is generally the dominant portion of the approximation, especially when the Taylor polynomial P₄(x) agrees closely with g(x) throughout the interval [a, b].
To approximate the integral of f, we also need
\[
\int_a^b \frac{g(x)-P_4(x)}{(x-a)^p}\,dx.
\]
To determine this, we first define
\[
G(x) =
\begin{cases}
\dfrac{g(x)-P_4(x)}{(x-a)^p}, & \text{if } a < x \le b, \\[6pt]
0, & \text{if } x = a.
\end{cases}
\]
Since 0 < p < 1 and P₄⁽ᵏ⁾(a) agrees with g⁽ᵏ⁾(a) for each k = 0, 1, 2, 3, 4, we have G ∈ C⁴[a, b]. This implies that the Composite Simpson's rule can be applied to approximate the integral of G on [a, b].
4.7(5)
Adding this result to the value of the polynomial part gives an approximation to the improper integral of f on [a, b], accurate to within the error of the Composite Simpson's rule approximation.
EXAMPLE
Use the Composite Simpson's rule with h = 0.25 to approximate
\[
\int_0^1 \frac{e^x}{\sqrt{x}}\,dx.
\]
Since the fourth Taylor polynomial for e^x about x = 0 is
\[
P_4(x) = 1 + x + \frac{x^2}{2} + \frac{x^3}{6} + \frac{x^4}{24},
\]
we have
\[
\begin{aligned}
\int_0^1 \frac{P_4(x)}{\sqrt{x}}\,dx &= \int_0^1 \left( x^{-1/2} + x^{1/2} + \frac{1}{2}x^{3/2} + \frac{1}{6}x^{5/2} + \frac{1}{24}x^{7/2} \right) dx \\
&= \lim_{M \to 0^+} \left[ 2x^{1/2} + \frac{2}{3}x^{3/2} + \frac{1}{5}x^{5/2} + \frac{1}{21}x^{7/2} + \frac{1}{108}x^{9/2} \right]_M^1 \\
&= 2 + \frac{2}{3} + \frac{1}{5} + \frac{1}{21} + \frac{1}{108} \approx 2.9235450.
\end{aligned}
\]
4.7(6)
The table lists the approximate values of
\[
G(x) =
\begin{cases}
\dfrac{1}{\sqrt{x}}\big(e^x - P_4(x)\big), & \text{when } 0 < x \le 1, \\[6pt]
0, & \text{when } x = 0.
\end{cases}
\]
4.7(7)
Applying the Composite Simpson's rule using these data gives
\[
\int_0^1 G(x)\,dx \approx \frac{0.25}{3}\big[ 0 + 4(0.0000170) + 2(0.0004013) + 4(0.0026026) + 0.0099485 \big] = 0.0017691,
\]
so
\[
\int_0^1 \frac{e^x}{\sqrt{x}}\,dx \approx 2.9235450 + 0.0017691 = 2.9253141.
\]
Since |G⁽⁴⁾(x)| < 1 on [0, 1], the error is bounded by
\[
\frac{1-0}{180}(0.25)^4 \approx 0.0000217.
\]
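The whole procedure for this example can be scripted. A sketch with our own helper names, evaluating G exactly rather than from the rounded table values:

```python
import math

def composite_simpson(g, a, b, n):
    # Composite Simpson's rule with n (even) subintervals.
    h = (b - a) / n
    s = g(a) + g(b)
    for i in range(1, n):
        s += (4 if i % 2 == 1 else 2) * g(a + i * h)
    return s * h / 3

# Fourth Taylor polynomial of e^x about 0.
P4 = lambda x: 1 + x + x**2/2 + x**3/6 + x**4/24

# Exact value of the integral of P4(x)/sqrt(x) on [0, 1]:
# 2 + 2/3 + 1/5 + 1/21 + 1/108, as computed above.
poly_part = 2 + 2/3 + 1/5 + 1/21 + 1/108

# Setting G(0) = 0 removes the singularity; G is C^4 on [0, 1].
G = lambda x: 0.0 if x == 0 else (math.exp(x) - P4(x)) / math.sqrt(x)

approx = poly_part + composite_simpson(G, 0.0, 1.0, 4)   # h = 0.25
print(approx)  # about 2.925314
```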
4.7(7)
To approximate an improper integral with a singularity at the right endpoint, we make the substitution z = −x, dz = −dx, so that
\[
\int_a^b f(x)\,dx = \int_{-b}^{-a} f(-z)\,dz,
\]
which has its singularity at the left endpoint.
4.7(8)
An improper integral with a singularity at c, where a < c < b, is treated as the sum of improper integrals with endpoint singularities, since
\[
\int_a^b f(x)\,dx = \int_a^c f(x)\,dx + \int_c^b f(x)\,dx.
\]
The other type of improper integral involves infinite limits of integration. The basic integral of this type is
\[
\int_a^\infty \frac{1}{x^p}\,dx, \quad\text{for } p > 1.
\]
Making the substitution
\[
t = x^{-1}, \quad dt = -x^{-2}\,dx, \quad\text{so}\quad dx = -x^2\,dt = -t^{-2}\,dt,
\]
gives
\[
\int_a^\infty \frac{1}{x^p}\,dx = \int_{1/a}^{0} -\frac{t^p}{t^2}\,dt = \int_0^{1/a} \frac{1}{t^{2-p}}\,dt.
\]
4.7(8)
In general, the variable change t = x⁻¹ converts the improper integral ∫_a^∞ f(x) dx into
\[
\int_a^\infty f(x)\,dx = \int_0^{1/a} t^{-2}\, f\!\left(\frac{1}{t}\right) dt.
\]
EXAMPLE
To approximate
\[
I = \int_1^\infty x^{-3/2} \sin\frac{1}{x}\,dx,
\]
we make the change of variable t = x⁻¹ to obtain
\[
I = \int_0^1 t^{-1/2} \sin t\,dt.
\]
The fourth Taylor polynomial, P₄(t), for sin t about 0 is
\[
P_4(t) = t - \frac{1}{6}t^3,
\]
4.7(9)
so
\[
G(t) =
\begin{cases}
\dfrac{\sin t - t + \frac{1}{6}t^3}{t^{1/2}}, & \text{if } 0 < t \le 1, \\[6pt]
0, & \text{if } t = 0,
\end{cases}
\]
is in C⁴[0, 1], and we have
\[
\begin{aligned}
I &= \int_0^1 \left( t^{1/2} - \frac{1}{6}t^{5/2} \right) dt + \int_0^1 \frac{\sin t - t + \frac{1}{6}t^3}{t^{1/2}}\,dt \\
&= \left[ \frac{2}{3}t^{3/2} - \frac{1}{21}t^{7/2} \right]_0^1 + \int_0^1 \frac{\sin t - t + \frac{1}{6}t^3}{t^{1/2}}\,dt \\
&= 0.61904761 + \int_0^1 \frac{\sin t - t + \frac{1}{6}t^3}{t^{1/2}}\,dt.
\end{aligned}
\]
Applying the Composite Simpson's rule with n = 16 to the remaining integral gives
\[
I \approx 0.0014890097 + 0.61904761 = 0.62053661,
\]
which is accurate to within 4.0 × 10⁻⁸.
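The same pattern, scripted for this example (helper names are ours):

```python
import math

def composite_simpson(g, a, b, n):
    # Composite Simpson's rule with n (even) subintervals.
    h = (b - a) / n
    s = g(a) + g(b)
    for i in range(1, n):
        s += (4 if i % 2 == 1 else 2) * g(a + i * h)
    return s * h / 3

# After t = 1/x, I = integral of t^(-1/2) sin t over [0, 1].
# Subtracting P4(t) = t - t^3/6 leaves a C^4 integrand G with G(0) = 0.
G = lambda t: 0.0 if t == 0 else (math.sin(t) - t + t**3 / 6) / math.sqrt(t)

# Exact integral of (t - t^3/6)/sqrt(t) on [0, 1] is 2/3 - 1/21 = 13/21.
I = 13/21 + composite_simpson(G, 0.0, 1.0, 16)
print(I)  # about 0.6205366
```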
4.7(10)

HW. EXERCISE SET 4.9


2. a.
