
Numer. Math. 40, 407-422 (1982)

Numerische Mathematik
© Springer-Verlag 1982

Calculation of the Weights of Interpolatory Quadratures
J. Kautsky¹ and S. Elhay²
¹ School of Mathematical Sciences, The Flinders University of South Australia, Bedford Park, S.A. 5042, Australia
² Department of Computer Science, University of Adelaide, G.P.O. Box 498, Adelaide, S.A. 5001, Australia

Summary. We present an algorithm for the stable evaluation of the weights


of interpolatory quadratures with prescribed simple or multiple knots and
compare its performance with that obtained by directly solving, using the
method proposed by Galimberti and Pereyra [1], the confluent Vander-
monde system of linear equations satisfied by the weights. Elsewhere Kaut-
sky [5] has described a property which relates the weights of interpolatory
quadratures to the principal vectors of certain non-derogatory matrices.
Using this property one can get the information about the weight function
w of the approximated integral implicitly through the (symmetric tri-
diagonal) Jacobi matrix associated with the polynomials orthonormal with
respect to w. The results indicate that the accuracy of the method presented
is much higher than that achieved by solving the Vandermonde system
directly.
Subject Classifications: AMS (MOS): 65F15, 65D30; CR: 5.14, 5.16.

1. Introduction

In this paper we present an algorithm for the evaluation of the weights of


interpolatory quadratures with prescribed simple or multiple knots, and we
compare its performance with that obtained by directly solving, using the
method proposed in [1], the confluent Vandermonde system of linear equa-
tions satisfied by the weights.
In [4] the method presented for the evaluation of Gauss quadratures with
multiple free and fixed knots is such that the weights for simple Gauss knots
can be determined in a numerically stable way as a by-product of the de-
termination of the Gaussian knots. The present investigation arose out of the
need to produce numerically stable methods for the weights of the prescribed
and multiple Gauss knots and has led rather naturally to methods for the
weights of arbitrary interpolatory quadratures. These weights are the solution

of a linear algebraic system with a confluent Vandermonde matrix and with a


right hand side consisting of the standard power moments of the weight
function w in the approximated integral. Of the methods solving the Vandermonde system directly we found the one proposed in [1] to be the most stable.
However, in [5] a property is described which relates the weights of
interpolatory quadratures to the principal vectors of certain non-derogatory
matrices. By using this property one can avoid dealing directly with the
moments and instead get the information about the weight function w im-
plicitly through the (symmetric tridiagonal) Jacobi matrix J associated with the
polynomials orthonormal with respect to w. In fact this property can be
implemented in several ways to evaluate the weights and in this paper we
present that algorithm which, from our numerical experiments, we found most
promising. The results in §5 indicate that the accuracy of the method presented
is indeed much higher than that achieved by solving the Vandermonde system
directly.
A detailed description and comparison of various alternative methods for
the weights of interpolatory quadrature will be presented in a forthcoming
paper.
In §2 we review the results of [5] and introduce a general method to
implement them. In §3 we discuss details of one particular implementation
which is formulated as an algorithm in §4. The last two sections contain
implementation details, tests and conclusions.

2. Matrices Related to Interpolatory Quadratures

An interpolatory quadrature

Q(f) := Σ_{j=1}^{s} Σ_{i=0}^{m_j - 1} d_{ji} f^{(i)}(v_j)

has the weights d_{ji} chosen, for given distinct knots v_1, ..., v_s and multiplicities m_1, m_2, ..., m_s, in such a way that

Q(f) = ∫_a^b f(t) w(t) dt

whenever f is a polynomial of degree less than n. Here n := m_1 + m_2 + ... + m_s is the number of weights and thus the number of knots counting their multiplicities.
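To make this linear-algebra formulation concrete, here is a minimal sketch (ours, in Python/NumPy, not the scaled formulation of [1]): imposing exactness on the monomials t^k, k = 0, ..., n-1, gives a confluent Vandermonde system for the weights with the power moments of w on the right hand side. The weight function, interval and knots below are illustrative choices only.

```python
import numpy as np
from math import factorial

def confluent_vandermonde_system(knots, mults, moments):
    """Exactness on t^0..t^{n-1}: rows are monomials, columns are the weights d_{ji},
    ordered knot by knot and, within a knot, by derivative order i."""
    n = sum(mults)
    A = np.zeros((n, n))
    col = 0
    for v, m in zip(knots, mults):
        for i in range(m):                        # weight of f^(i)(v)
            for k in range(i, n):                 # (d/dt)^i t^k = k!/(k-i)! t^(k-i)
                A[k, col] = factorial(k) // factorial(k - i) * v ** (k - i)
            col += 1
    return A, np.asarray(moments[:n], dtype=float)

# Illustrative data: w = 1 on [-1, 1], simple knots -1, 0, 1 (Simpson's rule).
knots, mults = [-1.0, 0.0, 1.0], [1, 1, 1]
n = sum(mults)
mu = [2.0 / (k + 1) if k % 2 == 0 else 0.0 for k in range(n)]   # moments of w = 1 on [-1, 1]
A, rhs = confluent_vandermonde_system(knots, mults, mu)
print(np.linalg.solve(A, rhs))                    # ~ [1/3, 4/3, 1/3]
```

Directly solving such a system is the approach the present method is compared against in §5; the rest of the paper develops a better-conditioned route via the Jacobi matrix.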
Let x_1, x_2, ..., x_n be the knots v_1, v_2, ..., v_s repeated according to their multiplicities and let q_n(t) := ν ∏_{j=1}^{n} (t - x_j), ν ≠ 0. Let v := v_j be one of the knots and d_i := d_{ji}, i = 0, 1, ..., m-1, its corresponding weights and m := m_j its multiplicity (we drop the index identifying the knot as we will be dealing with the weights of each knot separately). Let p := (p_0, ..., p_{n-1})^T, p_j a polynomial of exact degree j, j = 0, 1, ..., n-1. In [5] it was shown that there exists a constant μ ≠ 0 and a non-derogatory lower Hessenberg matrix K such that, for all t,

t p(t) = K p(t) + μ q_n(t) e_n,    (2.1)

where, as in the rest of this paper, e_j := (0, ..., 0, 1, 0, ..., 0)^T is the j-th unit vector of appropriate dimension. Furthermore, v is an eigenvalue of K of multiplicity m and

r_j = p^{(j)}(v)/j!,   j = 0, 1, ..., m-1    (2.2)

are its right principal vectors of grade j+1 corresponding to v. If now z_j, j = 0, 1, 2, ..., m-1 are the left principal vectors satisfying

z_j^T r_{m-1} = δ_{j0}    (2.3)

then

j! d_j = y^T z_{m-j-1}    (2.4)

where

y := ∫_a^b p(t) w(t) dt.    (2.5)

Consider now the polynomial basis p_0, p_1, ..., p_n which is orthonormal with respect to the weight function w on the interval (a, b). These polynomials satisfy the three term relation

t p(t) = J p(t) + β_n p_n(t) e_n    (2.6)

where J is an n × n symmetric tridiagonal (Jacobi) matrix whose diagonal elements we denote by α_1, α_2, ..., α_n and subdiagonal elements β_1, β_2, ..., β_{n-1}. In the methods we are presenting the information needed about the weight function will be given by this matrix J and the zero-th moment μ_0 := ∫_a^b w(t) dt. Obviously, p_0² μ_0 = 1, y = ∫_a^b p(t) w(t) dt = e_1/p_0 and e_1^T r_i = p_0 δ_{i0}. Furthermore, comparison of (2.1) and (2.6) shows that in this case

K = J + e_n b^T    (2.7)

where b is such that q_n = (β_n p_n - b^T p)/μ. The determination of K is thus equivalent to expanding the polynomial q_n (which is given by its roots - the knots of the quadrature) in terms of the basis p, p_n. The evaluation of the vector b and the polynomial basis p for (2.2) is a process which is numerically unreliable and can be avoided by reformulating the problem as follows.
Lemma 1. Assume K of the form (2.7). Let A be a nonsingular matrix and denote K̂ := A^{-1} K A, ŷ := (1/p_0) A^{-1} e_1, ê_1 := (1/p_0) A^T e_1. Furthermore, let the right and left principal vectors be determined as follows:

(K̂ - vI) r̂_j = r̂_{j-1},   r̂_{-1} = 0    (2.8)
and
ê_1^T r̂_j = δ_{j0},   j = 0, 1, ..., m-1,    (2.9)

(K̂^T - vI) ẑ_j = ẑ_{j-1},   ẑ_{-1} = 0    (2.10)
and
r̂_{m-1}^T ẑ_j = δ_{j0},   j = 0, 1, ..., m-1.    (2.11)
Then
j! d_j = ŷ^T ẑ_{m-j-1},   j = 0, 1, ..., m-1.    (2.12)

Proof is immediate if we observe that r̂_j = A^{-1} r_j and ẑ_j = A^T z_j. □


Thus to exploit the properties (2.3), (2.4) and (2.5) and Lemma 1 we need the matrix K̂, the vector ŷ and the right and left principal vectors r̂_j and ẑ_j. We have examined various choices of the matrix K̂ and of the transformation A which simplify the calculation of the principal vectors at a cost of complicating the calculation of the vector ŷ. We will now present an algorithm based on one such choice.

3. The Method

3.1. K̂ and Its Principal Vectors

As we know that K̂ has eigenvalues x_1, x_2, ..., x_n and that it should be non-derogatory, we may choose simply

K̂ := diag(x_1, ..., x_n) + Σ_{j=2}^{n} e_{j-1} e_j^T,    (3.1)

i.e. the bidiagonal matrix with diagonal x_1, x_2, ..., x_n and all superdiagonal elements equal to 1. Obviously, for q = (q_0, q_1, ..., q_{n-1})^T,

t q(t) = K̂ q(t) + (q_0/ν) q_n(t) e_n    (3.2)

where

q_j(t) = q_0 ∏_{s=1}^{j} (t - x_s),   j = 0, 1, ..., n-1.    (3.3)

The q_j's are of exact degree j and so there exists a lower triangular matrix L such that p = Lq. Substituting this into (2.1) gives us directly that A = L. The choice of q_0 ≠ 0 is arbitrary and, as

ê_1 = (1/p_0) L^T e_1 = (e_1^T L e_1) e_1/p_0 = e_1/q_0,    (3.4)

the relation (2.9) orthonormalizing the right principal vectors reduces to

e_1^T r̂_i = δ_{i0} q_0,   i = 0, 1, 2, ..., m-1.    (3.5)

Thus the r̂_j's could be found by choosing their first elements according to (3.5) and forward substituting in the first (n-1) rows of (2.8). Each ẑ_i could then be found from an intermediate vector ĝ_i by setting e_n^T ĝ_i = 1, backsubstituting for ĝ_i in (2.10) and setting ẑ_i to a linear combination of ĝ_i and ĝ_0, chosen to satisfy
(2.11). However a more economical method, found to be consistently reliable,

especially in the case of knots with high multiplicity, takes advantage of the special form of the right and left principal vectors corresponding to the knot which appears last in the diagonal of the matrix K̂.
Lemma 2. Suppose K̂ has the knot v, of multiplicity m, in its last m places on the main diagonal. Denote

R := (r̂_0, r̂_1, ..., r̂_{m-1}) = [R_1; R_2]   and   Z := (ẑ_{m-1}, ẑ_{m-2}, ..., ẑ_0) = [Z_1; Z_2]

(R_1 and Z_1 on top), where R_1 and Z_1 are (n-m) × m dimensional and R_2 and Z_2 are m × m. Denote also r̂_{ij} = e_i^T R e_{j+1} and ẑ_{ij} = e_i^T Z e_{m-j} for i = 1, 2, ..., n and j = 0, 1, ..., m-1.
Then (2.8), (2.9), (2.10) and (2.11) imply that
(a) R_2 is upper triangular. The top m × m submatrix of R is lower triangular so that R is n-m+1 banded and R_2 is ℓ := min(m, n-m+1) banded. Furthermore R_2 is Toeplitz. More precisely r̂_{ij} = 0 for all i ≠ j+1, j+2, ..., n-m+j+1.
(b) Z_1 = 0 and Z_2 is lower triangular Toeplitz.
(c) In particular, denoting r̂_{m-1}^T = (0, 0, ..., 0, λ_{n-m}, ..., λ_1, λ_0) and

ẑ_j^T = (0, 0, ..., 0, γ_0, γ_1, ..., γ_j),   j = 0, 1, 2, ..., m-1,

then

γ_0 = λ_0^{-1}   and   γ_i = -γ_0 Σ_{k=1}^{i} λ_k γ_{i-k},   i = 1, 2, ..., m-1.
Proof. (a) Equation (2.8) can be written as

r̂_{i,j} = r̂_{i-1,j-1} - p_{i-1} r̂_{i-1,j},   j = 0, 1, 2, ..., m-1,   i = 2, 3, ..., n,    (3.6)

where p_i := x_i - v. From the ordering of the knots in the diagonal of K̂ we have

p_{n-m+1} = p_{n-m+2} = ... = p_n = 0.    (3.7)

From (2.8) and (2.9) we have r̂_{i,-1} = 0, i = 1, 2, ..., n, r̂_{1,0} = q_0 and r̂_{1,j} = 0, j = 1, 2, ..., m-1. Part (a) of the Lemma now follows from (3.6) and (3.7) by simple induction on j. The fact that R_2 is a Toeplitz matrix is a consequence of (3.6) reducing to

r̂_{i,j} = r̂_{i-1,j-1},   i ≥ n-m+2.

(b) In [5, p. 314] it is shown that

Z^T R = (λ*)^{-1} I    (3.8)

where here λ* = 1. Consider first the special case n = m. Then from (a) we have that R is a multiple of I, and so (b) follows immediately.
Now suppose n > m. Then (2.10) gives

p_1 ẑ_{1,j} = ẑ_{1,j-1},   j = 0, 1, ..., m-1

showing that ẑ_{1,j} = 0 for all j since p_1 ≠ 0. Noting that p_i ≠ 0, i = 1, 2, ..., n-m and that ẑ_{i,-1} = 0, i = 1, 2, ..., n, then (2.10) written as

ẑ_{i-1,j} + p_i ẑ_{i,j} = ẑ_{i,j-1},   i = 2, 3, ..., n,

easily shows that ẑ_{i,j} = 0, i = 1, 2, ..., n-m, or Z_1 = 0. Now from (3.8) Z^T R = Z_1^T R_1 + Z_2^T R_2 = I immediately shows that Z_2 is lower triangular Toeplitz because the inverse of any triangular Toeplitz matrix is both triangular and Toeplitz.
(c) This follows from Z_2^T R_2 e_m = e_m. □
Using the results of Lemma 2 we can economize in the application of (3.5) and (3.6) to find r̂_{m-1} because we only need columns 1 to ℓ in rows 1 to n-m+1 of R (since R_2 is ℓ banded) to get the first row of R_2. This row, since R_2 is Toeplitz, contains the only elements of r̂_{m-1} needed to evaluate the left principal vectors. Furthermore the single vector Z_2 e_1, defining the whole of Z_2, can be found very economically from the forward substitution formula given in Lemma 2 part (c). Of course only the last m components of ŷ will then be needed to find all the weights of v by (2.12).
Thus our method will be to order the knots in the diagonal of K̂ so that the last m elements equal v and then compute all the weights of this knot v. A similar cycle will then be repeated for each knot for which the weights are required. Of course we will have to compute a new vector ŷ in each cycle. It is interesting to note that in our evaluation of the right and left principal vectors of K̂ we have used only the knots x_1, x_2, ..., x_n of the quadrature and an arbitrary multiplicative factor q_0 = r̂_{1,0}. The weight function of the approximated integral enters into the calculation via the vector ŷ, the evaluation of which we now describe.

3.2. The Vector ŷ

Recalling that for our case A = L we have, from Lemma 1, that

ŷ := (1/p_0) L^{-1} e_1

where the n × n transformation matrix L^{-1} =: [σ_{ij}] satisfies the identity

K̂ L^{-1} = L^{-1} (J + e_n b^T).    (3.9)

Using (2.7) and Lemma 1 we now show that, given the first diagonal element σ_{11} = q_0/p_0 of L^{-1}, the vector ŷ can be explicitly calculated from K̂ and the top half of J only. In fact we have the following result.
Lemma 3. Let J* be the principal submatrix of J of order n*, 1 ≤ n* ≤ n, and let L* = [σ*_{ij}] be an n × n* matrix satisfying
(a) σ*_{1j} = σ_{1j}, j = 1, ..., n*,
(b) K̂ L* = L* J*.

Then σ*_{ij} = σ_{ij} for all 1 ≤ j ≤ n* and 1 ≤ i ≤ min(n, 2n* - j + 1). In particular, if n* ≥ n/2 then ŷ = (1/p_0) L* e_1.
Proof. Defining σ*_{i0} = σ*_{i,n*+1} = 0 we may write the first n-1 rows of equation (b) as

σ*_{i+1,j} = β_{j-1} σ*_{i,j-1} + (α_j - x_i) σ*_{ij} + β_j σ*_{i,j+1},   i = 1, 2, ..., n-1,   j = 1, 2, ..., n*,    (3.10)

as indeed we may the first n-1 rows of (3.9) by replacing σ* by σ and n* by n in (3.10). Thus (3.9) and (3.10) explicitly determine L^{-1} and L* respectively, one row at a time starting from the (given) first row. The range of agreement between L* and L^{-1} follows from the nature of the right hand side of (3.10), the boundary condition σ*_{i,n*+1} = 0 together with the fact that σ_{ij} = 0, j > i. □
Thus for any n* ≥ n/2 we can compute ŷ from Lemma 3 part (b) as follows. Define u_i^T := e_i^T L*, i = 1, ..., n. Then L^{-1} p = q implies

u_1^T = (q_0/p_0) e_1^T.    (3.11)

Now we can write

L* J* = K̂ L* = Σ_{k=1}^{n} K̂ e_k e_k^T L* = Σ_{k=1}^{n} K̂ e_k u_k^T.

Thus

u_i^T J* = e_i^T L* J* = Σ_{k=1}^{n} (e_i^T K̂ e_k) u_k^T = x_i u_i^T + u_{i+1}^T,   i = 1, 2, ..., n-1,

and therefore

u_{i+1} = (J* - x_i I) u_i,   i = 1, 2, ..., n-1,    (3.12)

since J* is symmetric.
Equations (3.11) and (3.12) can be used directly as a Lanczos-type method to compute the required first column of L*. But we will use, instead, the following scheme which suffers far less from round-off errors.
We factor J* as

J* = Q Φ Q^T    (3.13)

where Q^T Q = I and Φ := diag(φ_1, φ_2, ..., φ_{n*}). Then u_{i+1} = Q(Φ - x_i I) Q^T u_i and, denoting a_i := (1/p_0) Q^T u_i, we have a_{i+1} = (Φ - x_i I) a_i, i = 1, 2, ..., n-1. Choosing u_1 = p_0 e_1 (i.e. choosing the arbitrary parameter q_0 = p_0²) we have (recalling that the unit vectors have dimensions such that the products conform)

ŷ_i = (1/p_0) e_1^T u_i = e_1^T Q a_i = a_1^T a_i = a_1^T ( ∏_{k=1}^{i-1} (Φ - x_k I) ) a_1,    (3.14)

having used a_1 = (1/p_0) Q^T u_1 = Q^T e_1. Letting

a_1 =: (s_1, s_2, ..., s_{n*})^T

we have

ŷ_1 = Σ_{i=1}^{n*} s_i² = 1,

because Q^T Q = I, and

ŷ_i = Σ_{j=1}^{n*} s_j² ∏_{k=1}^{i-1} (φ_j - x_k),   i = 2, 3, ..., n.

From Lemma 2, part (c), it follows that we need only compute ŷ_i for i = n-m+1, ..., n.
The choice q_0 = p_0² above leads to the normalization (cf. (3.5))

e_1^T r̂_j = q_0 δ_{j0} = (1/μ_0) δ_{j0},   j = 0, 1, 2, ..., m-1

for the right principal vectors.
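As a small numerical check of (3.11)-(3.14), the following sketch (ours, with assumed data: the well-known Jacobi matrix of the Legendre weight w = 1 on [-1, 1], for which α_i = 0, β_i = i/√(4i²-1) and μ_0 = 2, together with arbitrarily chosen knots) compares the components ŷ_i obtained from the Lanczos-type recursion (3.12) with those obtained through the factorization (3.13) and formula (3.14); the two agree, and the algorithm below uses the second route.

```python
import numpy as np

n_star = 6
alpha = np.zeros(n_star)                               # Legendre weight w = 1 on [-1, 1]
beta = np.array([k / np.sqrt(4.0 * k * k - 1.0) for k in range(1, n_star)])
Jstar = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)

x = np.array([-0.9, -0.3, 0.2, 0.2, 0.7, 0.7])         # knots x_1..x_n (assumed), n <= 2 n*
mu0 = 2.0
p0 = 1.0 / np.sqrt(mu0)                                # p_0^2 mu_0 = 1
e1 = np.zeros(n_star); e1[0] = 1.0

# Route 1: recursion (3.12) with u_1 = p_0 e_1 (i.e. q_0 = p_0^2); yhat_i = e_1^T u_i / p_0.
u, y_rec = p0 * e1, []
for xi in x:
    y_rec.append(u[0] / p0)
    u = (Jstar - xi * np.eye(n_star)) @ u

# Route 2: factor J* = Q Phi Q^T and use (3.14): yhat_i = sum_j s_j^2 prod_{k<i}(phi_j - x_k).
phi, Q = np.linalg.eigh(Jstar)
s2 = Q[0, :] ** 2                                      # s_j^2, where (s_j) = Q^T e_1
prod, y_fac = np.ones(n_star), []
for xi in x:
    y_fac.append(float(np.sum(s2 * prod)))
    prod *= phi - xi

print(np.allclose(y_rec, y_fac))                       # True
```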


Remark. We have mentioned in §2 that the direct evaluation of the vector b in (2.7) would be numerically unreliable. However, combining (3.9) with u_n^T = e_n^T L^{-1} (for n* = n) we obtain

b = (x_n I - J) u_n / (e_n^T u_n).

This is in fact a method for solving an inverse eigenvalue problem, i.e. finding a vector b such that the matrix J + e_n b^T has prescribed eigenvalues.
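A small sketch of this remark (our illustration, with an assumed Jacobi matrix, that of the Legendre weight, and arbitrarily prescribed eigenvalues): running the recursion u_{i+1} = (J - x_i I) u_i from a multiple of e_1 and forming b as above does give J + e_n b^T the prescribed spectrum, although, as noted above, this Lanczos-type route can suffer from round-off for larger n.

```python
import numpy as np

n = 6
beta = np.array([k / np.sqrt(4.0 * k * k - 1.0) for k in range(1, n)])
J = np.diag(beta, 1) + np.diag(beta, -1)      # Jacobi matrix of the Legendre weight (alpha_i = 0)

x = np.array([-0.8, -0.4, -0.1, 0.3, 0.6, 0.9])   # prescribed eigenvalues (assumed)

u = np.zeros(n); u[0] = 1.0                   # any multiple of e_1 will do: b is scale-free
for xi in x[:-1]:
    u = (J - xi * np.eye(n)) @ u              # u_{i+1} = (J - x_i I) u_i, cf. (3.12)

b = (x[-1] * np.eye(n) - J) @ u / u[-1]       # b = (x_n I - J) u_n / (e_n^T u_n)

K = J.copy()
K[-1, :] += b                                 # K = J + e_n b^T
print(np.sort(np.linalg.eigvals(K).real))     # ~ the prescribed x_1, ..., x_n
```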

4. The Algorithm

Input: A set V := {v_1, v_2, ..., v_s} of distinct knots with multiplicities m_1, m_2, ..., m_s, the first moment μ_0, the Jacobi matrix J* of dimension n* where n/2 ≤ n* ≤ n := m_1 + m_2 + ... + m_s, and the subset

V_t := {v_{i_1}, v_{i_2}, ..., v_{i_t}} ⊆ V,   1 ≤ i_k ≤ s,   k = 1, 2, ..., t

containing the knots the weights of which are required.

Output: Weights d_0, d_1, ..., d_{m-1} for each v ∈ V_t.

1. 1.1: Factor J* = Q Φ Q^T, Φ = diag(φ_1, φ_2, ..., φ_{n*}),
   1.2: form Q^T e_1 = (s_1, s_2, ..., s_{n*})^T,
   1.3: save s_i², φ_i, i = 1, ..., n*.
2. {Find all weights for each knot v ∈ V_t in turn}
   for each v ∈ V_t do
   2.1: {Prepare matrix K̂ and the band width ℓ}
        2.1.1: set ℓ := min(m, n-m+1) where m is the multiplicity of v and
        2.1.2: set up the array x_1, x_2, ..., x_{n-m} containing each knot (except v) repeated according to its multiplicity.
   2.2: {Compute the required components of ŷ for this matrix K̂}
        for i = n-m+1 to n do set

        ŷ_i = Σ_{j=1}^{n*} s_j² (φ_j - v)^{i-n+m-1} ∏_{k=1}^{n-m} (φ_j - x_k).

   2.3: {Compute columns 1 to ℓ of e_k^T R, k = 1, 2, ..., n-m+1, overwriting e_k^T R with e_{k+1}^T R as it is generated}
        2.3.1: Set the ℓ-dimensional vector

        r := (1/μ_0) e_1 =: (r_1, r_2, ..., r_ℓ)^T.

        2.3.2: for k = 1, 2, ..., n-m do
               for i = min(k+1, ℓ), ..., 2, 1 do set
               r_i := r_{i-1} - (x_k - v) r_i,   (r_0 := 0)
        {At the end of the process r_{i+1} = λ_i, i = 0, 1, ..., ℓ-1}.
   2.4: {Compute the left principal vectors}
        2.4.1: set γ_0 = λ_0^{-1},
        2.4.2: form d_{m-1} = ŷ_n γ_0/(m-1)!,
        2.4.3: if m > 1 then for i = 1, 2, ..., m-1 do set

               γ_i = -γ_0 Σ_{k=1}^{i} λ_k γ_{i-k},   σ = Σ_{k=0}^{i} γ_k ŷ_{n-i+k},

               and set d_{m-i-1} = σ/(m-i-1)!.
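A compact Python/NumPy sketch of the algorithm above follows. It is ours, not the authors' implementation: step 1.1 is done here with numpy.linalg.eigh instead of the IMTQL2-based factorization mentioned in the following remark, the helper names are invented, and the data of the final example (Legendre weight w = 1 on [-1, 1], double knots at -1 and 1, i.e. the two-point Hermite rule with d_0 = 1, d_1 = 1/3 at v = -1 and d_0 = 1, d_1 = -1/3 at v = 1) are used only as a check.

```python
import numpy as np
from math import factorial

def interpolatory_weights(alpha, beta, mu0, knots, mults, wanted):
    """Weights d_0..d_{m-1} (d_i multiplies f^(i)(v)) for each requested knot v.

    alpha, beta : diagonal and subdiagonal of the Jacobi matrix J* (order n* >= n/2),
    mu0         : zeroth moment of the weight function,
    knots, mults: distinct knots v_j and their multiplicities m_j, n = sum(mults),
    wanted      : the knots for which weights are required."""
    n, n_star = sum(mults), len(alpha)
    assert 2 * n_star >= n, "need n* >= n/2"

    # Step 1: factor J* = Q Phi Q^T, keep phi_j and s_j^2 = ((Q^T e_1)_j)^2.
    Jstar = np.diag(np.asarray(alpha)) + np.diag(np.asarray(beta), 1) + np.diag(np.asarray(beta), -1)
    phi, Q = np.linalg.eigh(Jstar)
    s2 = Q[0, :] ** 2

    result = {}
    for v in wanted:
        m = mults[knots.index(v)]
        ell = min(m, n - m + 1)                           # step 2.1.1: band width
        others = [np.full(mj, vj) for vj, mj in zip(knots, mults) if vj != v]
        x = np.concatenate(others) if others else np.zeros(0)   # step 2.1.2

        # Step 2.2: yhat_i for i = n-m+1..n, stored as Y[0..m-1].
        base = s2 * np.prod(phi[:, None] - x[None, :], axis=1)
        Y = np.array([np.sum(base * (phi - v) ** t) for t in range(m)])

        # Step 2.3: forward substitution for the first row of R_2, i.e. lambda_0..lambda_{ell-1}.
        r = np.zeros(ell)
        r[0] = 1.0 / mu0
        for k in range(n - m):                            # knot x_{k+1}
            for i in range(min(k + 1, ell - 1), -1, -1):
                r[i] = (r[i - 1] if i > 0 else 0.0) - (x[k] - v) * r[i]
        lam = np.zeros(m)
        lam[:ell] = r                                     # lambda_i = 0 for i >= ell

        # Step 2.4: gamma's of Lemma 2(c) and the weights via (2.12).
        gamma, d = np.zeros(m), np.zeros(m)
        gamma[0] = 1.0 / lam[0]
        d[m - 1] = Y[m - 1] * gamma[0] / factorial(m - 1)
        for i in range(1, m):
            gamma[i] = -gamma[0] * np.dot(gamma[:i], lam[i:0:-1])
            sigma = np.dot(gamma[:i + 1], Y[m - 1 - i:])
            d[m - i - 1] = sigma / factorial(m - i - 1)
        result[v] = d
    return result

# Check (assumed data): w = 1 on [-1, 1], double knots at -1 and 1 (two-point Hermite rule).
n_star = 2                                                # n* = 2 suffices for n = 4
alpha = np.zeros(n_star)                                  # Legendre weight: alpha_i = 0,
beta = [k / np.sqrt(4.0 * k * k - 1.0) for k in range(1, n_star)]   # beta_k = k / sqrt(4k^2 - 1)
print(interpolatory_weights(alpha, beta, 2.0, [-1.0, 1.0], [2, 2], [-1.0, 1.0]))
# -> {-1.0: array([1., 0.333...]), 1.0: array([1., -0.333...])}
```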

Remark. The factoring (3.13) in Step 1.1 of the algorithm can be stably and efficiently computed using, for example, the procedure IMTQL2 described in [6]. In fact, as observed in [3] and [4], this procedure can be modified to produce the required vector Q^T e_1 without forming the whole matrix Q.
Two work vectors of length

ℓ_max := max_{i: v_i ∈ V_t} min(m_i, n - m_i + 1)

will suffice for the computation of all the left and right principal vectors required. It will require (t+1)n* locations to save products of the form

(Φ - v_i I)^{m_i}    (4.1)

so that the computation of (3.14) in step 2.2 of the algorithm will require fewer multiplications. This saving of computation time at the expense of storage may be worthwhile, for example, in dealing with knots of high multiplicity or in the case where several quadratures which have some knots in common are to be computed.
The choice of the parameter n* will depend on the nature of the quadrature(s) being computed. In the case that only one quadrature is needed then the obvious choice is n* equal to the integer part of (n+1)/2, the smallest admissible value. However, suppose several quadratures are required which all have the same weight function w and which have between k and 2k knots. Then by choosing n* = k only one factoring will be needed for all the quadratures and any products (4.1) corresponding to knots common to several quadratures can be saved.

5. Tests

In [1] a method is suggested for the solution of confluent Vandermonde systems and this method can therefore be used to find the weights of interpolatory quadratures by solving the system of linear equations that the weights satisfy.
The 2-norm condition number κ of this system increases rapidly with the size of the system, as is demonstrated in Table 1 and Fig. 1. Here κ was calculated from the singular value decomposition of the Vandermonde matrix (scaled as in [1]) for equidistant knots on [-1, 1] of various multiplicities.
Table 1. Condition numbers for confluent Vandermonde matrices: log₁₀(κ) for s knots, equally spaced in [-1, 1], of multiplicity m. The order of the matrix is n = ms

m   s   n   log₁₀(κ)        m   s   n   log₁₀(κ)

1 4 4 0.9 1 28 28 12.3
2 2 4 0.8 2 14 28 12.2
1 8 8 2.7 4 7 28 13.0
2 4 8 2.5 7 4 28 13.2
4 2 8 2.3 14 2 28 10.9
1 12 12 4.6 1 32 32 14.2
2 6 12 4.4 2 16 32 14.1
3 4 12 4.6 4 8 32 15.1
4 3 12 4.1 8 4 32 15.4
6 2 12 4.0 16 2 32 12.6
1 16 16 6.5 1 36 36 16.2
2 8 16 6.3 2 18 36 16.1
4 4 16 6.7 3 12 36 17.1
8 2 16 5.7 4 9 36 17.1
1 20 20 8.4 6 6 36 17.7
2 10 20 8.3 9 4 36 17.6
4 5 20 8.7 12 3 36 15.8
5 4 20 8.8 18 2 36 14.4
10 2 20 7.4 1 40 40 18.1
1 24 24 10.4 2 20 40 18.0
2 12 24 10.2 4 10 40 19.2
3 8 24 11.0 5 8 40 19.8
4 6 24 10.9 8 5 40 19.6
6 4 24 11.0 10 4 40 19.8
8 3 24 9.9 20 2 40 16.2
12 2 24 9.1

[Figure 1: log₁₀(κ) plotted against the order of the matrix (4 to 40); the vertical intervals show the range of log₁₀(κ) for systems of a given order with at least 3 knots, and separate marks show the 2-knot systems.]
Fig. 1. Intervals showing the range of log₁₀(κ) for the data of Table 1

Solutions calculated by solving the system in arithmetic with precision μ will
therefore be unreliable whenever μκ > 1. Similarly, for such ill-conditioned
systems, the size of the residuals of solutions obtained by any method cannot
be used to draw conclusions about the accuracy of the weights.
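The growth documented in Table 1 is easy to reproduce in outline. The sketch below (ours) uses the raw, unscaled confluent Vandermonde matrix rather than the particular scaling of [1], so the computed values need not match the tabulated ones exactly; it only illustrates the rapid growth of κ with the order n = ms.

```python
import numpy as np
from math import factorial

def confluent_vandermonde(knots, m):
    """Rows: monomials t^0..t^{n-1}; columns: value and first m-1 derivatives at each knot."""
    n = m * len(knots)
    A = np.zeros((n, n))
    col = 0
    for v in knots:
        for i in range(m):
            for k in range(i, n):
                A[k, col] = factorial(k) // factorial(k - i) * v ** (k - i)
            col += 1
    return A

for m, s in [(1, 8), (2, 4), (1, 16), (2, 8), (4, 4)]:
    sv = np.linalg.svd(confluent_vandermonde(np.linspace(-1.0, 1.0, s), m), compute_uv=False)
    print(m, s, m * s, round(float(np.log10(sv[0] / sv[-1])), 1))   # log10 of the 2-norm condition number
```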
To test the method of §3 (M1) and to compare it with that based on [1] (M2) we implemented both methods (for the VAX 11/780) in single (24-bit mantissa) and double (56-bit mantissa) precision. Figures 2-5 show the maximal relative errors E_k computed as

E_k := max_{i,j} |d_{ji}^{(reference)} - d_{ji}^{(k)}| / (1 + |d_{ji}^{(reference)}|),   k = 1, 2,

where the d_{ji}^{(1)} were computed by M1 and the d_{ji}^{(2)} by M2, both in single precision. As a reference set we took the weights calculated in double precision by M1, which agreed to sufficient accuracy with those calculated by M2 in double precision except for those cases which are marked by ( ).

[Figure 2: maximal relative errors of the weights, plotted against the order of K̂, for the Legendre weight function w := 1 on [a, b] = [-1, 1] with equidistant knots v_j, j = 1, 2, ..., s, knot multiplicities 1, 2 and 5, methods M1 and M2.]
Fig. 2. Comparison of the maximal relative errors of the weights produced by M1 and M2 for multiplicities 1, 2 and 5
We infer that in the marked cases the M2 reference has become unreliable
because the condition number of the system is too large even for double
precision calculation. We have also marked, by [ ], the cases where no double
precision M2 reference was available because the evaluation of the moments
failed due to overflow.
We show the results for four weight functions (Legendre, Jacobi, Laguerre
and Chebyshev type) and quadratures with equidistant and transformed
equidistant knots (as indicated in the figures) of multiplicities 1, 2 and 5. The
choice of weight functions and knots represents symmetric and non-symmetric
quadratures with weights of maximal magnitude varying from 10^{-1} (uniform weights of simple Gauss-Chebyshev quadratures in Fig. 5) up to the much larger weights of the Newton-Cotes quadratures in Fig. 2. We observe that the accuracy of M2 deteriorates
at a rate which corresponds to the increase in the condition number of the
system. We note also that although this condition number is essentially inde-
pendent of the multiplicity the performance of M2 is generally worse for knots
of lower multiplicity.
Examination of the errors of M1 shows some oscillatory behaviour which is most pronounced in Fig. 2. The relative error is generally very uniform, but in some cases isolated larger errors occur at knots the weights of which are "accidentally" small. This is most noticeable in Fig. 2 whenever 0 is a double knot in a symmetric quadrature with n large, because then the weight for f^{(1)}(0) vanishes but is calculated with the same absolute error as the neighbouring large magnitude weights.

[Figure 3: maximal relative errors of the weights, plotted against the order of K̂, for a Jacobi-type weight function on [a, b] = [-1, 1] with equidistant knots v_j, j = 1, 2, ..., s, knot multiplicities 1, 2 and 5, methods M1 and M2.]
Fig. 3. Comparison of the maximal relative errors of the weights produced by M1 and M2 for multiplicities 1, 2 and 5

[Figure 4: maximal relative errors of the weights, plotted against the order of K̂, for the Laguerre weight function w := exp(-t) with transformed equidistant knots, knot multiplicities 1, 2 and 5, methods M1 and M2.]
Fig. 4. Comparison of the maximal relative errors of the weights produced by M1 and M2 for multiplicities 1, 2 and 5

[Figure 5: maximal relative errors of the weights, plotted against the order of K̂, for the Chebyshev weight function w := (1-t²)^{-1/2} on [a, b] = [-1, 1], knot multiplicities 1, 2 and 5, methods M1 and M2.]
Fig. 5. Comparison of the maximal relative errors of the weights produced by M1 and M2 for multiplicities 1, 2 and 5

Otherwise the loss of accuracy in the calculation by M1 in single precision increases at a much slower rate than that of M2 and appears to level out as n increases. We have, in fact, extended the computation by M1 shown in Fig. 3 until exponent overflow, which occurred at n = 95, 98 and 105 for multiplicities 1, 2 and 5 respectively. Throughout this computation the error E_1 satisfied 3.6 ≤ -log₁₀(E₁) ≤ 5.7. By extrapolating
from Figure 1 we estimate the condition number of the equivalent Vandermonde matrix to be roughly 10^{50}.

6. Applicability and Conclusions

The algorithm described in this paper for the calculation of the weights of interpolatory quadratures requires as input the diagonal elements α_1, α_2, ..., α_n and the subdiagonal elements β_1, β_2, ..., β_{n-1} of the symmetric tridiagonal matrix J associated with the polynomials orthonormal with respect to the weight function w of the approximated integral. In Table 2 we list the explicit formulae for J for some typical weight functions as well as the monomial moments
μ_j = ∫_a^b w(t) t^j dt,   j = 0, 1, ....

[Table 2. The Jacobi matrix J and the monomial moments μ_j for some typical weight functions.]
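For orientation, the following sketch tabulates, in the form the algorithm of §4 consumes (α, β and μ_0), the standard and well-known recurrence data for three common weight functions; it is our illustration of the kind of information such a table provides, not a transcription of Table 2.

```python
import numpy as np

def jacobi_matrix_data(name, n):
    """Standard recurrence data (alpha_1..alpha_n, beta_1..beta_{n-1}, mu_0) for some
    classical weight functions; beta_k is the k-th subdiagonal element of J."""
    k = np.arange(1, n)
    if name == "legendre":            # w = 1 on (-1, 1)
        return np.zeros(n), k / np.sqrt(4.0 * k * k - 1.0), 2.0
    if name == "chebyshev":           # w = (1 - t^2)^(-1/2) on (-1, 1)
        beta = np.full(n - 1, 0.5)
        if n > 1:
            beta[0] = 1.0 / np.sqrt(2.0)
        return np.zeros(n), beta, np.pi
    if name == "laguerre":            # w = exp(-t) on (0, infinity)
        return 2.0 * np.arange(1, n + 1) - 1.0, k.astype(float), 1.0
    raise ValueError(name)

for name in ("legendre", "chebyshev", "laguerre"):
    alpha, beta, mu0 = jacobi_matrix_data(name, 4)
    print(name, alpha, beta, mu0)
```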

Note that only μ_0 is needed in our algorithm and that for some of the weight functions the evaluation of J is in fact more stable than that of the moments. Thus the accuracy of M1 and its remarkable stability, even in extreme cases, may be attributable to several factors.
(i) We use the Jacobi matrix rather than the moments.
(ii) The right and left principal vectors that are produced are already normalized - no linear combinations of vectors need be formed to normalize them.
(iii) The forward substitution for the right principal vectors, being from a bidiagonal matrix with all super-diagonal elements equal to 1, involves only one multiplication and one addition per element produced.
(iv) Each left principal vector ẑ_i is found from ẑ_{i-1} by the computation of only 1 new element.
(v) The inner products that are formed are between vectors that have many (never explicitly calculated) zeros.
It seems that the problem of finding the weights via the Jacobi matrix is better conditioned than that of finding them via the moments. In any case factors (ii) to (v) will undoubtedly have a damping effect on the propagation of round-off error.
The matrix J is available also for a wider range of weight functions than is shown in Table 2. For example, it may be readily obtained for any w_1 = wp where J is known for w and p is a polynomial, or where the weight function may be approximated by an expression of the form w_1 (see [4]). For a discussion of the evaluation of Jacobi matrices from modified moments and by other means see Sects. 5.2 and 5.3 of [2] and the references there.
Furthermore, the problem of computing the weights of interpolatory quadratures is potentially a very difficult one - the condition of the problem may be arbitrarily large for sufficiently large n and certain knot positions. Under these circumstances we consider the performance of our algorithm very satisfactory.

References
1. Galimberti, G., Pereyra, V.: Solving Confluent Vandermonde Systems of Hermite Type. Numer.
Math. 18, 44-60 (1971)
2. Gautschi, W.: A survey of Gauss-Christoffel quadrature formulae. In: Christoffel, E.B.: The
Influence of his work on Mathematics and the Physical Sciences. Butzer, P., Feher, F. (eds.).
Birkhäuser, pp. 72-147, 1981
3. Golub, G.H., Welsch, J.H.: Calculation of Gauss quadrature rules. Math. Comput. 23, 221-230
(1969)
4. Golub, G.H., Kautsky, J.: Calculation of Gauss Quadratures with Multiple Free and Fixed
Knots. N.A. Project NA-82-01, Stanford University, 1981 (to appear in Numer. Math.)
5. Kautsky, J.: Matrices related to interpolatory quadratures. Numer. Math. 36, 309-318 (1981)
6. Martin, R.S., Wilkinson, J.H.: The Implicit QL Algorithm. Numer. Math. 12, 377-383 (1968)

Received April 4, 1982
