
as YACC, Yet Another Compiler Compiler. Passing up the obvious YANG (Yet Another Network Guru?), they settled on Yahoo!, including the exclamation mark, and began to seek venture capital.

Their first success was with the prestigious Sequoia Capital, which had to its credit such winners as Apple, Atari, Oracle, and Cisco. With $4 million from that source, Yahoo! was able to recruit additional staff and the company began to grow.

Yahoo! is sometimes said to stand for Yet Another Hierarchical Officious Oracle. The "H" is significant in that only Yahoo! and a few other search engines use the intelligence of human labor to categorize the sites they find.

The basic decision facing the Yahoo! founders was how to produce revenue from such an obviously useful product. If they were to charge for access, many users would choose not to pay. And if they were to decide to accept paid ads, customers might be turned off. But it was decided there was no alternative to paid advertising. And, after a small amount of initial grousing, users stopped complaining, and the new revenue source prompted Reuters to grant Yahoo! $40 million of fresh venture capital.

In 1996, Yahoo! sold a 30% interest in the company to Softbank, the dominant software firm of Japan, with Yang retaining a 15% interest. Softbank, which had previously purchased the Ziff-Davis publishing company for $2.1 billion, prompted the launching of the publication Yahoo! Internet Life, which promptly attracted many thousands of subscribers.

In 1998 Yahoo! earned its first profit and improved greatly on this in 1999. By 1999 also, Yahoo! had become by far the most commonly visited search engine site on the World Wide Web. Yang has, indeed, become more than Yet Another Network Guru.

Bibliography

1988. Rifkin, G., and Harrar, G. The Ultimate Entrepreneur: The Story of Ken Olsen and Digital Equipment Corporation. Chicago, IL: Contemporary Books.
1992. Wallace, J., and Erickson, J. Hard Drive: Bill Gates and the Making of the Microsoft Empire. New York: John Wiley.
1993. Rogowski, S., and Reilly, E. D. "Entrepreneurs," in The Encyclopedia of Computer Science, 3rd Ed. (eds. A. Ralston and E. D. Reilly), 517-526. New York: Van Nostrand Reinhold. The third edition version included profiles of Seymour Cray and Robert Noyce, who have been accorded full biographies in this edition, and of Dan Bricklin, Nolan Bushnell, Marylene Delbourg-Delphis, Gary Kildall, William Millard, and Adam Osborne, which are not included in this Fourth Edition article.
1996. Grove, A. Only the Paranoid Survive: How to Exploit the Crisis Points that Challenge Every Company and Career. New York: Doubleday.
1997. Cringely, R. X. Accidental Empires: How the Boys of Silicon Valley Make Their Millions, Battle Foreign Competition, and Still Can't Get a Date. New York: Harper.
1997. Jager, R. D., and Ortiz, R. In the Company of Giants. New York: McGraw-Hill. Profiles of Steve Case, Michael Dell, Bill Gates, Andrew Grove, William Hewlett, Steve Jobs, Sandy Kurtzig, and Ken Olsen.
1997. Murray, C. J. The Supermen: The Story of Seymour Cray and the Technical Wizards Behind the Supercomputer. New York: John Wiley.
1997. Reid, R. H. Architects of the Web. New York: John Wiley. Profiles of Marc Andreessen, Kim Polese, Jerry Yang, and others.
1997. Stross, R. E. Steve Jobs and the NeXT Big Thing. New York: Atheneum.
1997. Wallace, J. Overdrive: Bill Gates and the Race to Control Cyberspace. New York: John Wiley.
1997. Wilson, M. The Difference Between God and Larry Ellison: Inside Oracle Corporation. New York: William Morrow.
1998. Swisher, K. AOL.COM: How Steve Case Beat Bill Gates, Nailed the Netheads, and Made Millions in the War for the Web. New York: Times Books.
1998. Hamm, S. "The Education of Marc Andreessen," Business Week, 13 April, 85-92.
1998. Kirsner, S. "The Legend of Bob Metcalfe," Wired, 6.11, November, 182-186, 232-234, 246-247.
1999. Byman, J. Andrew Grove and the Intel Corporation. Greensboro, NC: Morgan Reynolds.
1999. Clark, J. Netscape Time: The Making of the Billion-Dollar Start-up that Took on Microsoft. New York: St Martin's Press.
1999. Dell, M. Direct from Dell: Strategies That Revolutionized an Industry. New York: HarperBusiness.
1999. Hof, R. D., Hamm, S., and Sager, I. "Sun Power: How Scott McNealy is Shaping the Future of Computing," Business Week, 18 January, 64-72.
1999. Quittner, J. "Tim Berners-Lee," Time, 153, 12 (29 March), 192-194.
1999. Ramo, J. C., and Quittner, J. "An Eye on the Future: The Laughing Billionaire," Time, 154, 26 (27 Dec.), 50-67 (Man-of-the-Year Jeff Bezos).
2000. Lewis, M. The New New Thing. New York: W. W. Norton.

Stephen J. Rogowski and Edwin D. Reilly

ERGONOMICS

See HUMAN FACTORS IN COMPUTING.

ERROR ANALYSIS

In general, the basic arithmetic operations on digital computers are not exact but are subject to rounding or truncation errors. This article is concerned with the cumulative effect of these errors. It will be assumed that the reader has read the article on MATRIX COMPUTATIONS, since the results will be illustrated by examples from that area.

Definitions

There are two main methods of error analysis, known as forward analysis and backward analysis, respectively. They may be illustrated by considering the solution of an $n \times n$ system of linear equations by

Gaussian elimination. In this algorithm, the original system is reduced successively to equivalent systems $A^{(r)}x = b^{(r)}$, $r = 1, 2, \ldots, n-1$. In the final system the matrix of coefficients, $A^{(n-1)}$, is upper triangular, and the solution is found by back substitution.

In a forward analysis, one adopts the following strategy: because of rounding errors, the computed derived system $\bar{A}^{(r)}x = \bar{b}^{(r)}$ differs from that which would be obtained by exact arithmetic. It seems reasonable to assume that, if the algorithm is stable, $A^{(r)} - \bar{A}^{(r)}$ and $b^{(r)} - \bar{b}^{(r)}$ will be small, and, with sufficient ingenuity, bounds would be found for these "errors." This is perhaps the most natural approach.

Alternatively, one could adopt the following strategy: if the algorithm is stable, presumably the computed solution $\bar{x}$ is the exact solution of some system $(A + E)\bar{x} = b + e$, where $E$ and $e$ are relatively small. Of course, there will be an infinite number of sets $(E, e)$ of which $\bar{x}$ is the exact solution. A successful error analysis will obtain satisfactory bounds for the elements of $E$ and $e$. Such an approach is known as backward error analysis, since it seeks to replace all errors made in the course of the solution by an equivalent perturbation of the original problem. It has one immediate advantage: it puts the errors made during the computation on the same footing as those arising from the data. Hence, when the initial data is itself inexact, no additional problem is posed.
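Such a pair $(E, e)$ can always be exhibited explicitly. The sketch below (an editorial illustration in Python, not part of the original article) takes $e = 0$ and builds a rank-one $E$ from the residual of a computed solution, one of the infinitely many possible choices:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 50
A = rng.standard_normal((n, n))
b = rng.standard_normal(n)

x_bar = np.linalg.solve(A, b)          # computed solution, contaminated by rounding
r = b - A @ x_bar                      # its residual

# Rank-one backward perturbation: E @ x_bar = r, so (A + E) x_bar = b exactly.
E = np.outer(r, x_bar) / (x_bar @ x_bar)
print(np.linalg.norm(b - (A + E) @ x_bar))    # ~ 0, up to roundoff
print(np.linalg.norm(E) / np.linalg.norm(A))  # a tiny relative perturbation
```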
Early Error Analysis of Elimination Processes

In the 1940s, the imminent arrival of electronic computers stimulated an interest in error analysis, and one of the first algorithms to be studied was Gaussian elimination. Early analyses were all of the forward type, and typical of the results obtained was that of Hotelling, who showed that errors in solving an $n \times n$ system might build up by a factor of $4^{n-1}$. The relevance of this result was widely accepted at the time. Writing in 1946, Bargmann, Montgomery, and von Neumann said of Gaussian elimination: "An error at any stage affects all succeeding results and may become greatly magnified; this explains why instability should be expected." The mood of pessimism was very infectious, and the tendency to become enmeshed in the formal complexity of the algebra of the analysis seems to have precluded a sound assessment of the nature of the problem. Before giving any error analyses, we discuss fundamental limitations on the attainable accuracy.

Norms and Floating-point Arithmetic

We will need some way of assessing the "size" of a vector or a matrix. Such a measure is provided by vector and matrix norms. A norm of a vector $x$, denoted by $\|x\|$, is a nonnegative quantity satisfying the relations

$$\|x\| \ge 0 \text{ and } \|x\| = 0 \text{ iff } x = 0,$$
$$\|\alpha x\| = |\alpha|\,\|x\|,$$
$$\|x + y\| \le \|x\| + \|y\|.$$

We will use only two norms, denoted by $\|x\|_2$ and $\|x\|_\infty$ and defined by

$$\|x\|_2 = \Big(\sum |x_i|^2\Big)^{1/2}, \qquad \|x\|_\infty = \max_i |x_i|.$$

Similarly, a norm of a matrix $A$, denoted by $\|A\|$, is a nonnegative quantity satisfying the relations

$$\|A\| \ge 0 \text{ and } \|A\| = 0 \text{ iff } A = 0,$$
$$\|\alpha A\| = |\alpha|\,\|A\|,$$
$$\|A + B\| \le \|A\| + \|B\|,$$
$$\|AB\| \le \|A\|\,\|B\|.$$

We will use only two norms, denoted by $\|A\|_2$ and $\|A\|_\infty$ and defined by

$$\|A\|_2 = (\text{max eigenvalue of } AA^H)^{1/2}, \text{ where } A^H \text{ represents the conjugate transpose of } A,$$
$$\|A\|_\infty = \max_i \sum_j |a_{ij}|.$$

It may be verified that

$$\|Ax\|_2 \le \|A\|_2\,\|x\|_2, \qquad \|Ax\|_\infty \le \|A\|_\infty\,\|x\|_\infty.$$
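These definitions are easy to check numerically; the following sketch (an editorial illustration using NumPy's built-in norms) verifies the two consistency relations:

```python
import numpy as np

rng = np.random.default_rng(6)
A = rng.standard_normal((4, 4))
x = rng.standard_normal(4)

x2   = np.linalg.norm(x)             # (sum |x_i|^2)^(1/2)
xinf = np.linalg.norm(x, np.inf)     # max |x_i|
A2   = np.linalg.norm(A, 2)          # (max eigenvalue of A^H A)^(1/2)
Ainf = np.linalg.norm(A, np.inf)     # max_i sum_j |a_ij|

# ||Ax|| <= ||A|| ||x|| holds in both norms (small slack for roundoff):
assert np.linalg.norm(A @ x) <= A2 * x2 + 1e-12
assert np.linalg.norm(A @ x, np.inf) <= Ainf * xinf + 1e-12
```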
Most of the early error analyses were for fixed-point computation, but, since virtually all scientific computation is now done in floating point, we restrict discussion to this case. We use the notation $\mathrm{fl}(x \times y)$ to denote the product of two standard floating-point numbers as given by the computer under examination, with an analogous notation for the other arithmetic operations. We have the following results for each of the basic operations, using a mantissa of $t$ digits in the base $\beta$:

$$\mathrm{fl}(x \times y) = xy(1 + \varepsilon), \quad |\varepsilon| \le m\beta^{-t},$$
$$\mathrm{fl}(x \div y) = (x/y)(1 + \varepsilon), \quad |\varepsilon| \le d\beta^{-t},$$
$$\mathrm{fl}(x \pm y) = x(1 + \varepsilon_1) \pm y(1 + \varepsilon_2), \quad |\varepsilon_1|, |\varepsilon_2| \le s\beta^{-t},$$

where $m$, $d$, and $s$ are constants on the order of unity, depending on the details of the rounding or chopping procedure. Described in the language of backward error analysis, we might say, for example, that the computed sum of two numbers $x$ and $y$ is the exact sum of two numbers $x(1 + \varepsilon_1)$ and $y(1 + \varepsilon_2)$, each having a low relative error. On well-designed computers,

$$\mathrm{fl}(x \pm y) = (x \pm y)(1 + \varepsilon), \quad |\varepsilon| \le s\beta^{-t}.$$
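In IEEE double precision, for example, $\beta = 2$ and $t = 53$, so $\beta^{-t} \approx 10^{-16}$; the model is directly observable (an editorial sketch; on platforms where `np.longdouble` is no wider than double, the measured $\varepsilon$ simply comes out as zero):

```python
import numpy as np

u = np.finfo(np.float64).eps / 2    # unit roundoff, of the order beta^(-t)

x, y = 0.1, 0.2
s = x + y                           # fl(x + y)
exact = float(np.longdouble(x) + np.longdouble(y))  # wider-precision reference
eps = (s - exact) / exact           # the epsilon in fl(x+y) = (x+y)(1+eps)
print(f"relative rounding error {eps:.2e}, unit roundoff {u:.2e}")
assert abs(eps) <= u
```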

For convenience, from now on we assume that all $\varepsilon$ in the above satisfy the bound $|\varepsilon| \le k\beta^{-t}$, where $k$ is of the order of unity.

By repeated application we have, with an obvious notation,

$$\mathrm{fl}(a_1 + a_2 + \cdots + a_n) = a_1(1 + E_1) + a_2(1 + E_2) + \cdots + a_n(1 + E_n),$$

where

$$(1 - k\beta^{-t})^{n-1} \le 1 + E_1 \le (1 + k\beta^{-t})^{n-1},$$
$$(1 - k\beta^{-t})^{n+1-r} \le 1 + E_r \le (1 + k\beta^{-t})^{n+1-r}, \quad r = 2, 3, \ldots, n.$$
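The gap between these worst-case bounds and typical behavior is easy to see in an experiment (an editorial sketch; it takes $k = 1$ and uses the double-precision unit roundoff for $\beta^{-t}$):

```python
import numpy as np

def summation_error_demo(n=100_000, seed=0):
    rng = np.random.default_rng(seed)
    a = rng.uniform(0.0, 1.0, n)

    computed = np.float64(0.0)
    for term in a:                      # summed in the order written
        computed += term

    exact = float(np.sum(a.astype(np.longdouble)))   # wider-precision reference
    u = np.finfo(np.float64).eps / 2                 # unit roundoff

    # |error| <= sum_r (n + 1 - r) * u * |a_r|; the weight n for the first
    # term is slightly generous (the bound above gives n - 1).
    weights = np.arange(n, 0, -1)
    bound = u * float(np.sum(weights * np.abs(a)))

    print(f"actual error  : {abs(computed - exact):.3e}")
    print(f"a priori bound: {bound:.3e}")

summation_error_demo()
```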
The bounds on the errors are reasonably realistic, and examples can be constructed in which they are almost attained. Naturally, when $n$ is large, the statistical distribution of the errors can be expected, in general, to result in some cancellation of errors and, thus, in actual errors substantially less than the bounds.

One of the most important elements in elimination methods is the computation of expressions of the form

$$p = \mathrm{fl}(a - x_1 \times y_1 - \cdots - x_n \times y_n).$$

The computed $p$ and the error bounds are dependent on the order in which the operations are performed. If the operations are performed in the order written above, we obtain
$$p = a(1 + E) - x_1y_1(1 + F_1) - \cdots - x_ny_n(1 + F_n),$$

where

$$(1 - k\beta^{-t})^{n} \le 1 + E \le (1 + k\beta^{-t})^{n},$$
$$(1 - k\beta^{-t})^{n+2-i} \le 1 + F_i \le (1 + k\beta^{-t})^{n+2-i}.$$

If one computes

$$p = \mathrm{fl}(-x_1 \times y_1 - x_2 \times y_2 - \cdots - x_n \times y_n + a),$$

then

$$p = -x_1y_1(1 + E_1) - \cdots - x_ny_n(1 + E_n) + a(1 + F),$$

$$(1 - k\beta^{-t})^{n+3-i} \le 1 + E_i \le (1 + k\beta^{-t})^{n+3-i}, \qquad |F| \le k\beta^{-t}.$$

In describing the last result in terms of backward error analysis, we might say, for example, that it is exact for data $x_i(1 + E_i)$, $y_i$, and $a(1 + F)$, putting all the perturbations in the $x_i$ and $a$. Alternatively, we could say it is exact for data $x_i$, $y_i(1 + E_i)$, and $a(1 + F)$.

Note that although the errors made can be equated with the effect of small relative perturbations in the data, the relative error in the computed $p$ may be arbitrarily high, depending on the degree of cancellation that takes place. Indeed, if the true $p$ is zero, one may have an infinite relative error. One would not think of attributing this to some malignant instability in this simple arithmetic process; it is the natural loss to be expected.
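Cancellation of this kind is reproducible in a few lines (an editorial sketch; the data are contrived so that the true $p$ is 1 but almost all of it cancels):

```python
# p = a - x1*y1 - x2*y2 with a = 1e16, x1*y1 = -1, x2*y2 = 1e16.
# The exact p is 1, but fl(1e16 + 1) = 1e16 in double precision, so the
# computed p is 0: each datum is perturbed by at most one rounding error
# (a small backward error), yet the relative error of p itself is 100%.
a, x, y = 1.0e16, [1.0, 1.0e16], [-1.0, 1.0]
p = a - x[0] * y[0] - x[1] * y[1]
print(p)    # 0.0, although the exact value is 1.0
```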
Inherent Sensitivity of the Solution of a Linear System

For any computational problem, the inherent sensitivity of the solution to changes in the data is of fundamental importance; yet oddly enough the early analyses of Gaussian elimination paid little attention to it. We consider in a very elementary way the effect of perturbations $\delta A$ in the matrix $A$. We have

$$\bar{x} = (A + \delta A)^{-1}b = (A^{-1} - A^{-1}\delta A\,A^{-1} + \cdots)b = x - A^{-1}\delta A\,x + (A^{-1}\delta A)^2x - \cdots,$$

giving

$$\|\bar{x} - x\|/\|x\| \le \|A^{-1}\delta A\|/(1 - \|A^{-1}\delta A\|),$$

provided $\|A^{-1}\delta A\| < 1$. The relative error in $\bar{x}$ will not be low unless $\|A^{-1}\delta A\|$ is small. Writing

$$\|\delta A\| = \eta\,\|A\|,$$

we see that

$$\|A^{-1}\delta A\| \le \|A^{-1}\|\,\|\delta A\| = \eta\,\|A\|\,\|A^{-1}\|.$$

The inherent sensitivity is therefore dependent on $\|A\|\,\|A^{-1}\|$, and this is usually known as the condition number of $A$ (for the given norm), with respect to matrix inversion or to the solution of linear systems.
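A small experiment shows the condition number at work (an editorial sketch; the matrices and the perturbation size $\eta = 10^{-10}$ are arbitrary choices):

```python
import numpy as np

def sensitivity(A, b, eta=1e-10, seed=1):
    """Relative change in the solution of Ax = b under a random
    perturbation dA scaled so that ||dA|| = eta * ||A||."""
    rng = np.random.default_rng(seed)
    x = np.linalg.solve(A, b)
    dA = rng.standard_normal(A.shape)
    dA *= eta * np.linalg.norm(A) / np.linalg.norm(dA)
    x_pert = np.linalg.solve(A + dA, b)
    return np.linalg.norm(x_pert - x) / np.linalg.norm(x)

b = np.ones(2)
well = np.array([[2.0, 1.0], [1.0, 2.0]])          # condition number ~ 3
ill  = np.array([[1.0, 1.0], [1.0, 1.0 + 1e-8]])   # condition number ~ 4e8

for A in (well, ill):
    print(f"cond = {np.linalg.cond(A):.2e}, "
          f"relative change = {sensitivity(A, b):.2e}")
```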

We might now ask ourselves what sort of limitation we should expect on the accuracy of Gaussian elimination even if it had no menacing instability. The solution of $Ax = b$ requires $n^3/3$ multiplications and additions, an average of $n/3$ per element. From the elementary discussion given so far, we might risk the following prophecy: even if Gaussian elimination is a stable process, then we can scarcely expect to obtain a bound for the resulting error which is less than that resulting from a perturbation $\delta A$ in $A$ satisfying, say,

$$\|\delta A\| \le nk\beta^{-t}\,\|A\|.$$

In fact, this bound for the effect is usually reasonably realistic, provided that pivoting is used. Indeed, the advantages conferred by the statistical distribution of the rounding errors is such that the error is usually less than the maximum error that could be caused by such a perturbation.

Backward Error Analysis of Gaussian Elimination

Gaussian elimination provides a very good illustration of the power and simplicity of backward error analysis. The elimination process may be described as

the production of a unit lower triangular matrix $L$ and an upper triangular matrix $U$ such that $LU = A$. The solution of the system $Ax = b$ is then carried out in the two steps:

$$Ly = b, \qquad Ux = y.$$

In the backward error analysis, one shows that the computed $L$ and $U$ satisfy the relation $LU = A + E$ and obtains bounds for the elements of $E$. One then shows that the computed solutions $y$ and $x$ of the triangular systems satisfy the equations

$$(L + \delta L)y = b, \qquad (U + \delta U)x = y,$$

and obtains bounds for the elements of $\delta L$ and $\delta U$. The computed $x$ therefore solves exactly the system

$$(L + \delta L)(U + \delta U)x = b,$$

that is,

$$(A + E + \delta L\,U + L\,\delta U + \delta L\,\delta U)x = b.$$

Hence it is the exact solution of $(A + F)x = b$, where

$$\|F\| = \|E + \delta L\,U + L\,\delta U + \delta L\,\delta U\| \le \|E\| + \|L\|\,\|\delta U\| + \|U\|\,\|\delta L\| + \|\delta L\|\,\|\delta U\|,$$

and from the bounds for $E$, $\delta L$, and $\delta U$, one obtains a bound for $F$.
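This can be checked experimentally. The sketch below (an editorial illustration; it assumes SciPy is available for the factorization, and the comparison bound is only indicative) measures the backward error term $E$ directly:

```python
import numpy as np
from scipy.linalg import lu

rng = np.random.default_rng(8)
n = 200
A = rng.standard_normal((n, n))

P, L, U = lu(A)                    # A = P @ L @ U, with partial pivoting
E = P @ L @ U - A                  # the backward error term in LU = A + E
u_round = np.finfo(float).eps / 2

print(np.abs(E).max())                  # observed: a modest multiple of roundoff
print(n * u_round * np.abs(U).max())    # indicative a priori bound ~ n*g*k*beta^-t
```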
The simplicity of the technique may be illustrated by presenting the analysis of the solution of the system $Ly = b$. We first make the following observations:

1. The relevant system to be analyzed is that with the computed matrix $L$, not the $L$ that would have resulted from exact computation.
2. Since during the course of the analysis we do not attempt a direct comparison between computed and exact values, there is no need to denote computed quantities by bars. It is to be understood that all symbols refer to computed quantities.
3. It is only at the final stage, when we have expressed the computed solution as the exact solution of $(A + F)x = b$ and have obtained a bound for $\|F\|$, that we attempt to compare the computed $x$ with the true $x$, and at this stage we can use the result of the previous section.
At a typical stage in the triangular solution, $y_1, y_2, \ldots, y_{r-1}$ have been computed and $y_r$ is determined from the relation

$$y_r = \mathrm{fl}(-l_{r1}y_1 - l_{r2}y_2 - \cdots - l_{r,r-1}y_{r-1} + b_r),$$

so that

$$y_r = -l_{r1}y_1(1 + E_{r1}) - \cdots - l_{r,r-1}y_{r-1}(1 + E_{r,r-1}) + b_r(1 + F_r),$$

where the factors $1 + E_{ri}$ and $1 + F_r$ are of the type discussed in connection with the computation of $p$ above. Hence, the computed $y_i$ satisfy exactly the relation

$$l_{r1}y_1(1 + G_{r1}) + l_{r2}y_2(1 + G_{r2}) + \cdots + l_{r,r-1}y_{r-1}(1 + G_{r,r-1}) + y_r(1 + G_{rr}) = b_r,$$

where

$$(1 + G_{ri}) = (1 + E_{ri})/(1 + F_r), \quad i = 1, \ldots, r-1, \qquad 1 + G_{rr} = 1/(1 + F_r).$$

Notice that by dividing through by $1 + F_r$ we are able to restrict ourselves to perturbations in $L$. The computed $y$ therefore satisfies exactly the relation $(L + \delta L)y = b$, where $\delta L_{ij} = l_{ij}G_{ij}$. We certainly have

$$(1 - k\beta^{-t})^{n} \le (1 + G_{ij}) \le (1 + k\beta^{-t})^{n},$$

most of the factors, of course, satisfying much better bounds. Bounds of the above type are cumbersome to use, and we observe that, if $kn\beta^{-t} < 0.1$, as will usually be the case, then, using the binomial theorem,

$$(1 + k\beta^{-t})^{n} \le 1 + (1.06)kn\beta^{-t}, \qquad (1 - k\beta^{-t})^{n} \ge 1 - (1.06)kn\beta^{-t}.$$

Hence, we have

$$|\delta L_{ij}| \le (1.06)kn\beta^{-t}\,|l_{ij}|,$$

giving, for example,

$$\|\delta L\|_\infty \le (1.06)kn\beta^{-t}\,\|L\|_\infty.$$

The analysis is almost trivial, though earlier error analyses of the solution of triangular systems were extremely complicated.
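In code, the loop is as short as the analysis suggests (an editorial sketch of forward substitution in the ordering analyzed above; the residual-based check at the end is only a rough indicator):

```python
import numpy as np

def forward_substitution(L, b):
    """Solve Ly = b for unit lower triangular L, forming
    y_r = fl(-l_r1*y_1 - ... - l_r,r-1*y_{r-1} + b_r)."""
    n = len(b)
    y = np.zeros(n)
    for r in range(n):
        s = 0.0
        for i in range(r):
            s -= L[r, i] * y[i]   # each product and subtraction rounds once
        y[r] = s + b[r]           # b_r enters last; unit diagonal, no division
    return y

rng = np.random.default_rng(3)
n = 100
L = np.tril(rng.standard_normal((n, n)), -1) + np.eye(n)
b = rng.standard_normal(n)
y = forward_substitution(L, b)

# (L + dL)y = b with |dL_ij| <= ~1.06*k*n*u*|l_ij|; the scaled residual is tiny.
r = b - L @ y
print(np.linalg.norm(r, np.inf) /
      (np.linalg.norm(L, np.inf) * np.linalg.norm(y, np.inf)))
```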

If the computation of $y_r$ had been expressed in the form

$$y_r = \mathrm{fl}(b_r - l_{r1}y_1 - \cdots - l_{r,r-1}y_{r-1}),$$

then we could still obtain a relation of the form $(L + \delta L)y = b$, but in this case the bounds on the elements of $\delta L$ would be appreciably larger.

On most computers it is possible to accumulate either of the expressions for $y_r$ in double precision, rounding to single precision only on completion. If this is done, then we again obtain a relation of the form

$$l_{r1}y_1(1 + G_{r1}) + l_{r2}y_2(1 + G_{r2}) + \cdots + l_{r,r-1}y_{r-1}(1 + G_{r,r-1}) + y_r(1 + G_{rr}) = b_r,$$

but now the quantities $|G_{ri}|\ (i < r)$ have bounds of order $\beta^{-2t}$ and can therefore virtually be neglected, while $|G_{rr}|$ has the bound $k\beta^{-t}$. We therefore have a

result that might well be described as best possible, having regard to the precision of computation. Indeed, the residual vector $b - Ly$ corresponding to the computed $y$ will almost certainly be smaller than that corresponding to the correctly rounded solution!

The analysis of the solution of $Ux = y$ is almost identical to that of $Ly = b$, while the analysis of the factorization process is only marginally more complicated. If the $L$ and $U$ are produced as in classical Gaussian elimination, then one can show that $LU = A + E$, where, denoting the maximum modulus of any element arising during the decomposition by $g$, we certainly have

$$|e_{ij}| \le kng\beta^{-t}.$$

If the factors $L$ and $U$ are determined directly, using the relations

$$l_{ij}u_{jj} = a_{ij} - l_{i1}u_{1j} - \cdots - l_{i,j-1}u_{j-1,j}, \quad j = 1, \ldots, i-1,$$

and

$$u_{ij} = a_{ij} - l_{i1}u_{1j} - \cdots - l_{i,i-1}u_{i-1,j}, \quad j = i, \ldots, n,$$

and the expressions on the right are accumulated in double precision, an even more satisfactory bound may be determined for $E$. Indeed, ignoring quantities of the order of magnitude of $\beta^{-2t}$, we certainly have $|e_{ij}| \le gk\beta^{-t}$, where $g$ is now the element of maximum modulus in the computed $U$. Again, we have what may be regarded as a "best possible" result.

The reader may be surprised that no reference has been made to pivoting or to the size of the $l_{pq}$. The importance of pivoting is concealed. If any of the multipliers is large, $g$ will usually be much larger than $\max |a_{ij}|$. When pivoting is used, $|l_{pq}| \le 1$, and there will not usually be much growth in the size of the elements of the reduced matrices or of $U$ relative to the initial set of $a_{ij}$. When $A$ is positive definite or diagonally dominant, no growth can take place, and we have a guaranteed a priori bound for $\|E\|$ in terms of $A$.
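The growth that pivoting prevents can be seen even in a $2 \times 2$ example (an editorial sketch; `lu_no_pivot` is a deliberately naive illustration, not a recommended routine):

```python
import numpy as np

def lu_no_pivot(A):
    """Gaussian elimination without pivoting; returns L (unit lower) and U."""
    A = A.astype(float).copy()
    n = len(A)
    L = np.eye(n)
    for k in range(n - 1):
        for i in range(k + 1, n):
            L[i, k] = A[i, k] / A[k, k]        # multiplier l_ik
            A[i, k:] -= L[i, k] * A[k, k:]
    return L, np.triu(A)

A = np.array([[1e-12, 1.0],
              [1.0,   1.0]])
_, U = lu_no_pivot(A)          # tiny pivot: huge multiplier and huge growth
print(np.abs(U).max())         # ~ 1e12, so g >> max |a_ij|
_, U2 = lu_no_pivot(A[::-1])   # rows interchanged first (partial pivoting)
print(np.abs(U2).max())        # everything stays of order unity
```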

In 1947, von Neumann and Goldstine considered the special case of the inversion of a positive definite matrix with pivoting, and obtained a result for fixed-point computation that is only marginally weaker than can be obtained by arguments of the above type, though the analysis was far more complicated. Their analysis is often described as a forward error analysis, but it is in fact of the backward type, although at no stage are the results expressed in a form such as to emphasize this. The final result of an analysis of the above type for the solution of a positive definite system is to guarantee that it is the exact solution of $(A + E)x = b$ and to give a bound for $E$ of the type

$$\|E\| \le f(n)k\beta^{-t}\,\|A\|,$$

where $f(n)$ is a modest function of $n$, depending a little on the details of the arithmetic. When backward error analysis is applied to matrix inversion, one cannot show that the computed inverse $X$ is the exact solution of $(A + E)X = I$, with a similar bound for $E$, because it is not true. However, the $r$th column, $x_r$, of $X$ is the exact solution of some $(A + E_r)x_r = e_r$, where $e_r$ is the $r$th column of $I$; the $E_r$ are all different, but have the same satisfactory uniform bound. This result is implicit in that of von Neumann and Goldstine, but it is well concealed!
Orthogonal Transformations

Experience with error analyses of matrix processes gradually exposed the fact that control of growth in derived matrices is the key to stability. If orthogonal transformations $Q$ are used, then, since $\|QA\|_2 = \|AQ\|_2 = \|A\|_2$, no general growth can take place. Although the algebra is a little complicated, a fairly general analysis can be given of whole classes of algorithms based on orthogonal transformations, both for the solution of equations and the eigenvalue problem. One can show, for example, that for a sequence of $r$ orthogonal similarity transformations, the final computed transform $A^{(r)}$ satisfies exactly a relation of the form

$$A^{(r)} = Q^T(A + E)Q,$$

where $Q$ is exactly orthogonal and

$$\|E\| \le rf(n)\,\|A\|\,k\beta^{-t},$$

where $f(n)$ is some quite innocuous function of $n$. Hence, the eigenvalues of $A^{(r)}$ are exactly those of $A + E$, and we are back with perturbation theory.
the above type for the solution of a positive definite linear equations, one can compute r = b - Ax. If r is

computed accurately, it can then be used to obtain an improved solution by solving $A\delta = r$. This process is called iterative refinement.
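Both devices fit in a few lines (an editorial sketch; in practice the residual for refinement should be accumulated in higher precision, which is not done here):

```python
import numpy as np

rng = np.random.default_rng(4)
n = 100
A = rng.standard_normal((n, n))
A = (A + A.T) / 2                      # Hermitian (real symmetric) test matrix

# A posteriori eigenvalue bound: lam and u are exact for A - r u^H, and
# some eigenvalue of A lies in [lam - ||r||_2, lam + ||r||_2].
w, V = np.linalg.eigh(A)
lam, u = w[0], V[:, 0]                 # a computed eigenpair with ||u||_2 = 1
r = A @ u - lam * u
print(f"eigenvalue residual ||r||_2 = {np.linalg.norm(r):.2e}")

# One step of iterative refinement for Ax = b.
b = rng.standard_normal(n)
x = np.linalg.solve(A, b)
res = b - A @ x
x = x + np.linalg.solve(A, res)        # correct x by the computed delta
print(f"residual after refinement: {np.linalg.norm(b - A @ x):.2e}")
```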
Iterative Methods

It was at one time thought that iterative methods for solving linear equations or the eigenvalue problem would give far greater accuracy than direct methods, since one works with the initial $A$ throughout. In fact, this advantage is largely illusory. In Jacobi's method for linear equations, one derives an improved $x^{(r+1)}$ from the relation

$$x_i^{(r+1)} = \Big(b_i - \sum_{j \ne i} a_{ij}x_j^{(r)}\Big)\Big/a_{ii},$$

but the right-hand side cannot be computed exactly. From the above analysis it is clear that one is really working with a matrix with elements $a_{ij}(1 + \varepsilon_{ij})$, where the $\varepsilon_{ij}$ are different in each iteration. When iterative methods are used in practice, iteration is usually terminated before attaining the accuracy given immediately by a direct method, even without iterative refinement. Since, as we mentioned earlier, the results obtained with good direct methods are almost "best possible," this is to be expected.
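A minimal Jacobi sweep (an editorial sketch; the test matrix is diagonally dominant so the iteration converges):

```python
import numpy as np

def jacobi(A, b, iters=50):
    """Jacobi: x_i^(r+1) = (b_i - sum_{j != i} a_ij * x_j^(r)) / a_ii."""
    x = np.zeros_like(b)
    D = np.diag(A)
    off = A - np.diagflat(D)
    for _ in range(iters):
        x = (b - off @ x) / D        # the right-hand side rounds at every sweep
    return x

A = np.array([[4.0, 1.0, 0.0],
              [1.0, 4.0, 1.0],
              [0.0, 1.0, 4.0]])
b = np.array([1.0, 2.0, 3.0])
print(jacobi(A, b))
print(np.linalg.solve(A, b))         # direct solution for comparison
```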
Interval Arithmetic and Significant Digit Arithmetic

Attempts have been made to obtain error bounds for computed quantities on the computer itself. In interval arithmetic (q.v.), an ordered pair $[a_1, a_2]$ of floating-point numbers is stored at each stage in the computation, and it is guaranteed that the true number $a$ lies in the interval $a_1 \le a \le a_2$. Used in a direct manner, the results achieved are very pessimistic; in fact, the computer merely performs numerically the analog of what was done algebraically in the early forward error analysis of the Hotelling type. The intervals become very large. The apparently reasonable assumption that in stable algorithms the computed quantities will be close to those arising in exact computation is frequently quite false. This is particularly true of algorithms for the eigenvalue problem.
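The pessimism of naive interval bounds is easy to reproduce with a toy interval type (an editorial sketch; a faithful implementation would use directed rounding, omitted here):

```python
from dataclasses import dataclass

@dataclass
class Interval:
    lo: float
    hi: float
    def __add__(self, o):
        return Interval(self.lo + o.lo, self.hi + o.hi)
    def __sub__(self, o):
        return Interval(self.lo - o.hi, self.hi - o.lo)
    def __mul__(self, o):
        ps = (self.lo*o.lo, self.lo*o.hi, self.hi*o.lo, self.hi*o.hi)
        return Interval(min(ps), max(ps))

x = Interval(0.999, 1.001)
acc = x
for _ in range(20):
    acc = acc * x - (acc - x)   # dependent operations, treated as independent
print(acc.lo, acc.hi)           # the interval explodes, though the true value
                                # of the underlying computation stays near 1
```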
In significant digit (q.v.) arithmetic, one does not work with normalized floating-point numbers, on the grounds that when cancellation takes place, the zeros introduced are non-significant. The possibilities of significant digit arithmetic have been well exploited by Metropolis and Ashenhurst.

The realization that neither interval arithmetic nor significant digit arithmetic provides an automatic answer to error analysis led to an overreaction against them. The provision of the relevant hardware facilities should make them economic, and when combined with a more general appreciation of theoretical error analysis, they have an important role to play.

Bibliography

1963. Wilkinson, J. H. Rounding Errors in Algebraic Processes. London: Her Majesty's Stationery Office and Upper Saddle River, NJ: Prentice Hall.
1965. Wilkinson, J. H. The Algebraic Eigenvalue Problem. Oxford: Clarendon Press.
1967. Forsythe, G. E., and Moler, C. B. Computer Solution of Linear Algebraic Systems. Upper Saddle River, NJ: Prentice Hall.
1971. Wilkinson, J. H. "Modern Error Analysis," SIAM Review, 13, 548-568.
1979. Moore, R. E. Methods and Applications of Interval Analysis. Philadelphia: SIAM Publications.
1989. Kahaner, D., Moler, C., and Nash, S. Numerical Methods and Software. Upper Saddle River, NJ: Prentice Hall.

James H. Wilkinson

ERROR CORRECTING AND DETECTING CODE

Error-detecting and error-correcting codes are essential parts of most forms of digital communication and storage. Proper operation of the Internet (q.v.), modems (q.v.), compact discs (see OPTICAL STORAGE), and computers would be impossible without them. Many recent advances in the speed and reliability of digital communication are directly related to improved design of codes.
Codes introduce redundant bits into a data stream so that transmission errors in the original data bits can be detected and sometimes corrected. Perhaps the best-known code is the simple parity check code (see PARITY), which uses a single redundant bit and can detect an odd number of errors in a group of bits. In interactive communication, such as the Internet, it is often sufficient to detect errors and then request retransmission of the data. In other situations, such as data storage on a compact disc, rereading erroneous data from the disc would frequently result in the same error, so the errors must be corrected.

A diagram of a simplified coding scheme is shown in Fig. 1. In this type of coding (called block coding), a group of $k$ data bits is read into the encoder, which then produces a corresponding $n$-bit code word with $n > k$. There are $2^k$ possible $k$-bit patterns at the input, and $2^k$ corresponding $n$-bit code words. This collection of possible code words is called a "code." Since it can produce only $2^k$ out of the $2^n$ possible $n$-bit patterns, the encoder introduces redundancy into the data stream. The code rate, defined as the ratio of $k$ to $n$, is a measure of this redundancy.
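The parity check code mentioned above makes a compact illustration of these definitions (an editorial sketch; the helper names are hypothetical): with $k$ data bits and one redundant bit, $n = k + 1$, the rate is $k/(k+1)$, and any odd number of errors is detected but none can be corrected:

```python
def add_even_parity(bits):
    """Encode k data bits as an n = k + 1 bit word with even parity."""
    return bits + [sum(bits) % 2]

def parity_ok(word):
    """True if the received word contains an even number of 1s."""
    return sum(word) % 2 == 0

data = [1, 0, 1, 1]              # k = 4 data bits
word = add_even_parity(data)     # n = 5, code rate k/n = 4/5
print(word, parity_ok(word))     # [1, 0, 1, 1, 1] True

word[2] ^= 1                     # a single transmission error...
print(parity_ok(word))           # ...is detected (False) but cannot be located
```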
