
Report

Measurement Uncertainty
Uncertainty in the broad sense is
no new concept in chemistry;
analysts have always sought to
quantify and control the accuracy of their
results. Few analysts would dispute that a
result is of little value without some
knowledge of the associated uncertainty;
clearly, without such information, inter-
pretation is impossible.
Correct interpretation depends on a
good assessment of accuracy. When esti-
mates of accuracy are optimistic, results
may appear irreconcilable and may be
overinterpreted; with unduly pessimistic
assessments, methods may appear unfit
for a particular purpose and may be opti-
mized when it is not necessary.
In general, different methods of estimat-
ing uncertainty will lead to different values.
Most estimates of accuracy have been
based on the standard deviation of a series
of experiments or interlaboratory compari-
sons, often in association with estimates of
bias in the form of recovery estimates.
When individual effects are being consid-
ered, the contribution from random vari-
ability can be estimated from repeatability, reproducibility, or other precision measures. In addition, separate contributions
from several systematic or random effects
can be combined linearly or by the root
sum of squares. Finally, the way uncer-
tainty is expressed can vary substantially.
Confidence intervals, absolute limits, stan-
dard deviations, and coefficients of variation
are all in common use. Clearly, with so
many possibilities for estimating and ex-
pressing such a critical parameter, a con-
sensus is essential for comparability.
The most recent recommendation is
that accuracy be expressed in terms of a
quantitative estimate of uncertainty as described in the International Organization for Standardization's (ISO) Guide to the Expression of Uncertainty in Measurement (1) and by other measurement authorities (2, 3). The guide is published under the auspices of several organizations, including ISO, the International Bureau of Weights and Measures (BIPM), the International Organization of Legal Metrology (OIML), and the International Union of Pure and Applied Chemistry (IUPAC).
"Correct interpretation of accuracy ensures that results are judged neither overly optimistically nor unduly pessimistically."

Steve Ellison, Laboratory of the Government Chemist (U.K.)
Wolfhard Wegscheider, University of Leoben (Austria)
Alex Williams, EURACHEM Working Group on Measurement Uncertainty (U.K.)

S0003-2700(97)09035-5 CCC: $14.00 1997 American Chemical Society
Analytical Chemistry News & Features, October 1, 1997, 607 A

The document lays out a standard approach to estimating and expressing uncertainty across many fields of measurement and, in view of its pedigree, is widely accepted by accreditation and certification agencies worldwide. This Report deals primarily with the provisions and interpretation of this document, though it is recognized that different approaches are used in other ISO documents.
Definitions
The definition of measurement uncertainty
is "a parameter associated with the result of
a measurement that characterizes the dis-
persion of the values that could reasonably
be attributed to the measurand" (1,4).
Thus, measurement uncertainty describes
a range or distribution of possible values.
For example, 82 ± 5 describes a range of values. Measurement uncertainty, therefore, differs from "error", which is defined as a single value: the difference between a result and the true value.
The stated range must also include the
values the measurand could reasonably
take, on the basis of the result. That
makes it quite different from measures of
precision, which give only the range
within which the mean of a series of ex-
periments will lie. Precision makes no
allowance for bias; measurement uncer-
tainty includes random components and
systematic components. Note that known
systematic errors, or bias, should be cor-
rected for as fully as possible; failure to
make such a correction is simply to report
a result known to be wrong. But an uncer-
tainty associated with each correction fac-
tor remains and must be considered. This
consideration of systematic effects makes
measurement uncertainty more realistic
than measures such as standard error.
Finally, measurement uncertainty is an
estimate. Obviously, all statistical calcula-
tions on finite samples provide estimates
of population parameters, but the estimate
goes deeper than this. Devising experi-
ments that can accurately characterize
uncertainties in method bias and other
systematic effects is extremely difficult.
For example, most derivatizations are pre-
sumed to proceed to completion. How
certain can the analyst be of this? Unfortunately, statistics help little; in practice, the chemist is often forced to make an educated estimate from prior experience. However, it is crucial to realize that the attempt must be made: the correction for bias and the uncertainty of this correction factor cannot simply be ignored if comparability is to be established.
Error and uncertainty. In com-
mon parlance, the terms error and uncer-
tainty are frequently used interchange-
ably. However, several significant differ-
ences in the concepts are implied by the
terms defined by ISO (4). Error is defined
as the difference between an individual
result and the true value of the measur-
and. Error, therefore, has a single value
for each result. In principle, an error could
be corrected if all the sources of error
were known, though the random part of
an error is variable from one determina-
tion to the next.
Uncertainty, on the other hand, takes
the form of a range and, if estimated for
an analytical procedure and a defined
sample type, may apply to all determina-
tions so described. No part of uncertainty
can be corrected for. In addition, estima-
tion of uncertainty does not require refer-
ence to a true value, only to a result and
the factors that affect the result. This shift
in philosophy marks a concept rooted in
observable, rather than theoretical, quan-
Box 1. Calculating uncertainty using ISO rules

Rule 1: All contributions are combined in the form of standard deviations (SDs). Combining as SDs allows calculating a rigorous combined SD using standard forms. It does not imply that the underlying distribution is, or needs to be, normal: every distribution has an SD. It is not perfectly rigorous to deduce a confidence interval from a combined SD, but in most cases, especially when three or more comparable contributions are involved, the approximation is at least as good as most contributing estimates.

Rule 2: Uncertainties are combined according to

u_c(y) = [ Σ_i (∂y/∂x_i)² u(x_i)² ]^(1/2)    (Eq. 1)

in which u(y) is the uncertainty of a value y; u(x_1), u(x_2), ... are the uncertainties of the independent parameters x_1, x_2, ... on which it depends; and ∂y/∂x_i is the partial differential of y with respect to x_i. When variables are not independent, the relationship is extended to include a correlation term (1).

Rule 2 establishes the principle of combination by root sum of squares. One corollary is that small components are quickly swamped by larger contributions, making it particularly important to obtain good values for large uncertainties and unnecessary to spend time on small components. In pictures, this looks like a simple Pythagorean triangle: for the uncertainties u_1 and u_2, the combination u_c can be visualized as the hypotenuse.

Rule 3: The SD obtained from Eq. 1 needs to be multiplied by a coverage factor k to obtain a range called the expanded uncertainty, which includes a large fraction of the distribution. For most purposes, k = 2 is sufficient (2) and will give a range corresponding to an approximately 95% confidence interval. Similarly, k = 3 is recommended for more demanding cases.
tities. To further illustrate the difference, the result of an analysis after correction may, by chance, be very close to the value of the measurand and hence have a negligible error. The uncertainty may nonetheless still be very large, simply because the analyst is unsure of the size of the error.
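The root-sum-of-squares combination of Box 1 (Rule 2) can be sketched in a few lines of code. The measurement model and all numerical values below are hypothetical, chosen only to illustrate the arithmetic:

```python
import math

def combined_uncertainty(partials, uncertainties):
    """Box 1, Rule 2: u_c(y) = sqrt(sum((dy/dx_i)^2 * u(x_i)^2))."""
    return math.sqrt(sum((p * u) ** 2 for p, u in zip(partials, uncertainties)))

# Hypothetical measurement model: concentration c = m / V
m, V = 100.0, 25.0      # mass (mg) and volume (mL); illustrative values only
u_m, u_V = 0.1, 0.05    # standard uncertainties of m and V
dc_dm = 1.0 / V         # sensitivity coefficient dc/dm
dc_dV = -m / V**2       # sensitivity coefficient dc/dV

u_c = combined_uncertainty([dc_dm, dc_dV], [u_m, u_V])
# The smaller contribution (0.004) is largely swamped by the larger
# one (0.008): u_c = sqrt(0.004^2 + 0.008^2), roughly 0.0089 mg/mL
```

The corollary in Rule 2 shows up directly: halving the smaller contribution here would change u_c by only a few percent, so effort is best spent characterizing the dominant terms.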
Uncertainty and quality assur-
ance. ISO explicitly excludes gross errors
of procedure from consideration within an
uncertainty assessment. Uncertainty esti-
mates can realistically apply only to well-
established measurement processes in sta-
tistical control, and thus they are a state-
ment of the uncertainty expected when
proper quality control (QC) measures are
in place. It is thus implicit that QC and qual-
ity assurance (QA) processes be in place
and within specification if an uncertainty
statement is to be at all meaningful.
Repeatability and
reproducibility
The most generally applied estimates of
uncertainty at present are those obtained
from interlaboratory comparisons, particu-
larly those using the collaborative trial
protocols of ISO 5725 (5) and the Associa-
tion of Official Analytical Chemists
(AOAC) (6).
For legislative purposes, the collaborative trial reproducibility figure is the closest approach to uncertainty that attempts to estimate the full dispersion of results obtained by a particular method, and it has the considerable advantages of simplicity and generality, though at high cost. Another substantial advantage is its objectivity, because it is based entirely on experimental observations in a representative range of laboratories. Though it serves well in cases in which the chief issue is comparability among particular laboratories with a common aim, several factors leave this approach wanting.
Reproducibility is inevitably a measure
of precision; although it covers a range of
laboratory bias, it cannot cover bias inher-
ent in the method itself, nor, in general,
sample matrix effects. Arguably, these
effects are not relevant for a standard
method, which may simply define a proce-
dure that generates a result for trade or
legislative purposes. Many methods, indeed, fall into this class; even when a method purports to determine a specific molecular species, there is no guarantee that it determines all that is present or, indeed, any particular species at all.
An example is the semiquantitative
AOAC method for detecting cholesterol.
Though standardized and properly ac-
cepted for certain trade and regulatory
purposes on the basis of collaborative trial
data showing sufficient agreement be-
tween laboratories (7), subsequent work
using internal calibration (8) has shown
that method recovery is poorer than the
reproducibility figure suggests. It follows that long-term studies of cholesterol levels in food could be expected to misinterpret changes in apparent level, particularly among nations using different cholesterol determination methods. Reproducibility figures will in general suffer from the absence of bias information.
These arguments suggest that repro-
ducibility will generally underestimate
uncertainty, but not necessarily. A single
laboratory can have much smaller uncer-
tainties for a determination than the repro-
ducibility figure would indicate, which
tends to include a range of poor as well as
good results. This issue can be put more
bluntly: What does the spread of results
found by a handful of laboratories on a
specific set of samples at some time in the
past have to do with the results of an indi-
vidual laboratory today? Indeed, this is the
very question that must be answered
quantitatively before any laboratory can
make use of collaborative trial information
in a formal uncertainty estimate. It follows
that reproducibility, although a powerful
tool, is not a panacea.
The ISO approach
The approach recommended in the ISO
guide, outlined below, is based on com-
bining the uncertainties in contributory
parameters to provide an overall estimate
of uncertainty (Figure 1).
To begin, write down a clear statement
of what is being measured, including the
relationship between the measurand and
the parameters (measured quantities, con-
stants, calibration standards, and other
influences) on which it depends. When
Figure 1. Uncertainty estimation process.
Figure 2. Dioxin analysis.
possible, include corrections for known
systematic effects. Though the basic spec-
ification information is normally given in
the relevant standard operating procedure
or other method description, it is often
necessary to add explicit terms for factors
such as operating temperatures and ex-
traction time or temperature, which will
not normally be included in the basic cal-
culation given in a method description.
Then, for each parameter in this rela-
tionship, list the possible sources of uncer-
tainty, including chemical assumptions.
Measure or estimate the size of the uncer-
tainty associated with each possible source
of uncertainty (or for a combination of
sources). Combine the quantified uncertainty components, expressed as standard deviations, according to the appropriate rules (see Box 1) to give a combined standard uncertainty, and apply the appropriate coverage factor to give an expanded combined uncertainty.
The most important features are that
all contributing uncertainty components
are quantified as standard deviations in
the first instance, whether they arise from
random variability or systematic effects;
also, that estimates of uncertainty from
experiment, prior knowledge, and profes-
sional judgment are treated in the same
way and given the same weight.
Quantifying all contributing uncertainty
components as standard deviations pro-
vides a particularly simple and consistent
method of calculation based on standard
expressions for combining variances. It is
justified in principle because, although an
error in a particular case may be system-
atic, lack of knowledge about the size of the
error leads to a probability distribution for
the error. This distribution can be treated
in the same way as that of a random vari-
able. Treating estimates of uncertainty from experiment, prior knowledge, and professional judgment the same way, and giving them the same weight, ensures that all known factors contributing to uncertainty are accounted for even when experimental determination is not possible.
In principle, this approach overcomes
many of the deficiencies in currently used
approaches. It is much quicker and less
costly to apply than a collaborative trial,
but it can use collaborative trial data ad-
vantageously if available. The approach
covers all the effects on a result, system-
atic or random, and it takes into account
all available knowledge. In addition, it
mandates a particular form of expression,
leading to improved comparability in uncertainty estimates.
However, disadvantages exist. The
ISO approach, because it requires appro-
priate judgment, cannot be entirely objec-
tive; to some extent it relies on the experi-
ence of the analyst. A significant cost in
time and effort is a factor; estimating un-
certainties on the basis of local conditions
without using published data involves
more effort than simply quoting a pub-
lished reproducibility figure.
The lack of objectivity can be compen-
sated for by third-party review, such as
quality system assessment, interlabora-
tory comparisons, in-house QC sample
results, and certified reference material
checks. Finally, it should be clear that a
decision to exclude a particular contribu-
tion entirely rather than make some judg-
ment of its size represents a de facto deci-
sion to allocate the contribution a size of
zero, hardly an improvement.
Cost, too, may be recouped in direct or
indirect benefits. Uncertainty estimation
improves knowledge of analytical tech-
niques and principles, forming a powerful
adjunct to training. Knowing the main con-
tributions to uncertainty determines the
direction of method improvement most
effectively. Efficiency can be improved with
minimal impact on method performance.
Finally, normal QA procedures, such as
checking the method for use, maintaining
records of calibration and statistical QC
procedures, should provide all the required
data; additional cost should be no more than
that of combining the data appropriately.
Sources of uncertainty
Many factors affect analytical results, and
every one is a potential source of uncer-
tainty. In sampling, effects such as ran-
dom variations between different samples
and any potential for bias in the sampling
procedure are components of uncertainty
affecting the final result. Recovery of an
analyte from a complex matrix, or an in-
strument response, may be affected by
other constituents of the matrix. Analyte
speciation may further compound this
effect. When a spike is used to estimate recovery, the recovery of the analyte from the sample may differ from the recovery of the spike. Stability effects are also important but frequently are not well known. Cross-contamination between samples and contamination from the laboratory environment are ever-present risks.
Though ISO does not include accidental
gross cross-contamination in its definition
of uncertainty, as it represents loss of con-
trol of the measurement process, the pos-
sibility of background contamination
should nonetheless be considered and
evaluated when appropriate.
Although instruments are regularly
checked and calibrated, the limits of accu-
racy on the calibration constitute uncer-
tainties. The calibration used may not accurately reflect the samples presented; for
example, analytical balances are com-
monly calibrated using nickel check
weights, although samples are rarely of
such high density. Though not large in
most circumstances, buoyancy effects
differ between calibration weight and sam-
ple. Other factors include carry-over and
systematic truncation effects.
The molarity of a volumetric solution is
not exactly known, even if the parent ma-
terial has been assayed, because some
uncertainty relating to the assay proce-
dure exists. A wide range of ambient con-
ditions, notably temperature, affects ana-
lytical results. Reference materials are
also subject to uncertainty; fortunately,
most providers of reference materials now
state the uncertainty in the manner rec-
ommended in the guide.
The uncritical use of computer soft-
ware can also introduce errors. Selecting
the appropriate calibration model is im-
portant, and software may not permit the
best choice. Early truncation and round-
ing off can also lead to inaccuracies in the
final result.
Operator effects may be significant;
they can be evaluated either by predicting
them or by conducting experiments in-
volving many operators. The latter ap-
proach will not normally detect an overall
operator bias (for example, a particular
scale reading may be taken in the same
manner by a group of operators similarly
trained), but the scope of variation can be
estimated. "Operator effect" could reason-
ably be considered a proxy for a range of
poorly controlled input parameters such
as scale-reading accuracy, time and duration of agitation during extraction, and so on. It follows that a formal mathematical model of the experimental process would not normally include "operator" as an input factor, but only the specific factors under operator control.
Random effects contribute to uncer-
tainty in all determinations, and this en-
try is usually included in the list as a
matter of course. Conceptualizing every
component of uncertainty as arising
from both systematic and random effects
is also frequently useful; this step avoids
the most common trap for the unwary:
overlooking systematic effects in the
effort to obtain good precision measures.
Both need to be taken into account,
though the ISO approach requires only
the overall value.
Determinands are not always com-
pletely defined. For example, volumes
may or may not be defined with refer-
ence to a particular set of ambient con-
ditions. Similarly, the determinand may
be defined in terms of a range of condi-
tions. For example, material extracted
from an acidified aqueous solution at pH
below 3.0 allows substantial latitude.
Such incomplete definitions result in the
determinand itself having a range of val-
ues, irrespective of good analytical tech-
nique, and that range constitutes an
uncertainty.
Many common analytes, such as fat,
moisture, ash, and protein, are defined not
in terms of a particular molecular or
atomic species but against some essen-
tially arbitrary process. In effect, the re-
sult is simply a response to a stated proce-
dure, expressed in the most convenient
units. Such measurements are not gener-
ally compared with results from other
methods; in effect, bias is neglected by
convention. However, the procedure itself
may lack full definition or permit a range
of conditions, giving rise to uncertainties.
Of course, if comparison with other methods is desired, additional sources of uncertainty, including method bias, must be taken into account.
Increasing confidence
The ISO guide suggests multiplying the
standard uncertainty by a coverage fac-
tor k to express uncertainties when a
high degree of confidence is desired.
This representation exactly mirrors the
situation in conventional statistics, in
which a confidence interval is obtained
by multiplying a standard deviation for a
parameter by a factor derived from the
Student t-distribution for the appropriate
number of degrees of freedom.
The formal approach in the guide re-
quires estimation of a similar parameter,
the "effective degrees of freedom", and
uses this value in the same way. Though
the details are beyond the scope of this
article, some important points can be
made.
This parameter is almost invariably
dominated by the number of degrees of
freedom in the dominant contribution to
the overall uncertainty. When the domi-
nant contribution arises from sound and
well-researched information, effective de-
grees of freedom remain high, normally
leading to k = 2 for near 95% confidence.
Only where large uncertainty contribu-
tions are based on meager data will the
choice of k become significant. A prag-
matic approach, therefore, is simply to adopt k = 2 for routine work and k = 3 when a particularly high confidence is required (2).
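The "effective degrees of freedom" referred to above is obtained in the ISO guide from the Welch-Satterthwaite formula. A minimal sketch, with hypothetical component values, shows why a dominant, well-characterized contribution keeps the effective degrees of freedom high:

```python
def effective_dof(u_components, dofs):
    """Welch-Satterthwaite: nu_eff = u_c^4 / sum(u_i^4 / nu_i)."""
    u_c_sq = sum(u * u for u in u_components)  # combined variance
    return u_c_sq**2 / sum(u**4 / nu for u, nu in zip(u_components, dofs))

# Hypothetical budget: the dominant contribution (0.10) rests on 30
# degrees of freedom, so nu_eff stays high and k = 2 is appropriate.
nu_eff = effective_dof([0.10, 0.02, 0.02], [30, 5, 5])
```

With these illustrative numbers nu_eff comes out in the mid-thirties; only when the dominant contribution rested on very few degrees of freedom would a larger k be warranted.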
The question of possible distributions
must also be considered at the point of de-
ciding coverage factors. Although the guide
uses a combination of standard deviations
based on established error propagation
theory, the step from standard deviation to
confidence involves some assumptions.
Table 1. Contributions to uncertainty in dioxin analysis. (a)

Parameter       u(RSD)   Main contribution
Q_ss            0.02     Syringe specification; certified reference solution uncertainty
V               0.02     Density (volume determined by weight)
A_k and A_ss    0.09     Permitted abundance ratio range (b)
RRF_n           0.08     Range permitted by method (b)
R_spk           0.12     Permitted range of spike recovery (b, c)
Combined
uncertainty     0.17     [(0.02)² + (0.02)² + (0.09)² + (0.08)² + (0.12)²]^(1/2)

(a) Contributions are listed if they contribute more than 10% of the stated uncertainty.
(b) Permitted ranges are treated as limits of rectangular distributions and adjusted to SD values (1) by dividing by the square root of 3.
(c) Recovery of added material is not, in general, fully representative of recovery of analyte materials.

The guide takes the view that, in most circumstances, the central limit theorem will apply, and the appropriate distribution will be approximately normal. Certainly it is
rare to calculate confidence intervals based
on other distributions in general analytical
chemistry, if only because it is unusual to
have sufficient data to justify other assump-
tions. Nonetheless, when additional knowl-
edge about underlying distributions exists,
it is most sensible to base k on the best
available information.
Dioxin example
The analysis of dioxins in the effluent of
paper and pulp mills by isotope dilution
MS (Figure 2) is a good example (9). For
the sake of discussion, we will consider
only the analysis of 2,3,7,8-tetrachloro-
dibenzodioxin (2,3,7,8-TCDD) and will
ignore the normally important uncertain-
ties caused by interference from other
TCDD isomers, GC integration, and res-
olution difficulties. By way of illustration,
some minor contributions that would
not normally be included will also be
examined.
The basic equation for determining the concentration C_x of TCDD is

C_x = (A_k × Q_ss) / (A_ss × RRF_n × V × R_spk)

in which A_k is the peak area of the analyte, Q_ss is the amount of spike, A_ss is the peak area of the standard, RRF_n is the relative response factor for the relevant ¹³C₁₂ ion, V is the original sample volume, and R_spk is the (nominal) recovery of the analyte relative to added material.

R_spk merits explanation, because it is not used in the standard. Because the ¹³C₁₂ calibration spike is added to the slurry and is not naturally part of the sample, differential behavior is possible. If measurable, this behavior would appear as imperfect recovery of analyte. A complete mathematical model of the system therefore requires some representation of the effect. Because no existing parameter in the equation is directly influenced by recovery, the recovery term has been added in the form of a nominal correction factor. The result is a basic equation encompassing all the main effects on the result.
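Evaluating the basic equation is then straightforward. Only the form of the equation comes from the method; the readings below are invented purely to show the shape of the calculation:

```python
# Hypothetical readings; units and values are illustrative only.
A_k   = 1.2e5    # peak area of the analyte
Q_ss  = 2.0      # amount of spike (ng)
A_ss  = 1.0e5    # peak area of the standard
RRF_n = 1.05     # relative response factor
V     = 1.0      # original sample volume (L)
R_spk = 0.80     # nominal recovery relative to added material

# Concentration of 2,3,7,8-TCDD from the basic equation
C_x = (A_k * Q_ss) / (A_ss * RRF_n * V * R_spk)
```

Note how the nominal recovery term sits in the denominator: a recovery below 1 corrects the reported concentration upward.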
Identification of the remaining contri-
butions to the overall uncertainty is best
achieved by considering the parameters
in the equation, any intermediate mea-
surements from which they are derived,
and any effects that arise from particular
operations within the method (such as the
possibility of "spike partitioning"). Table 1
lists parameters, calculated uncertainties
(as relative standard deviations), and
some contributory factors.
Information in Table 1 shows that uncer-
tainties associated with the physical mea-
surements of volume and mass contribute
essentially nothing to the combined uncer-
tainty and that any further study should be
directed primarily at the remaining compo-
nents. The largest contribution arises from
the extraction recovery step, in line with
most analysts' experience.
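The combined figure in Table 1 can be reproduced directly: permitted ranges are first converted from rectangular-distribution limits to standard deviations (dividing by the square root of 3, per the table footnote), and the relative standard deviations are then combined by root sum of squares. A short sketch:

```python
import math

def rect_to_sd(half_range):
    """Treat a permitted range of +/- half_range as a rectangular
    distribution and convert it to a standard deviation."""
    return half_range / math.sqrt(3)

# Relative standard deviations from Table 1:
# Q_ss, V, A_k and A_ss, RRF_n, R_spk
components = [0.02, 0.02, 0.09, 0.08, 0.12]

# Root-sum-of-squares combination (Box 1, Rule 2); about 0.17
u_combined = math.sqrt(sum(u * u for u in components))
```

As the text notes, dropping the two 0.02 terms entirely would change the combined value only in the third decimal place, which is why further study is best directed at the larger components.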
The method studied here is unusual in
specifying direct control of all the major
factors affecting uncertainty, which makes
it relatively easy to estimate uncertainty as
long as the method is operating within con-
trol. For most methods currently in use,
however, such control limits are not closely
specified. Typically, one or two critical pa-
rameters are given single target values, and
precision control limits are set. It then falls
to the laboratory to estimate the contribu-
tion of its own level of control to the uncer-
tainty, rather than simply demonstrating
compliance with an established set of fig-
ures and an associated, carefully studied,
uncertainty estimate.
Legislation and compliance
Two issues are important when uncer-
tainty is considered in the context of legis-
lation and enforcement. The first con-
cerns the simpler problem of whether a
result constitutes evidence of noncompli-
ance with some limit, particularly when
the limit is within the uncertainty quoted.
The second issue is the use of uncertainty
information in setting limits.
Two instances in compliance are
clear-cut: Either the result is above the
upper limit, including its uncertainty,
which means that the result is in non-
compliance (Figure 3a); or the result,
including its uncertainty, is between the
upper and lower limits, and is therefore
in compliance (Figure 3d). For any other
case, some interpretation is necessary
and can be made only in the light of the intended use of the information, and with the knowledge and understanding of the end user.
For example, Figure 3b represents
probable noncompliance with the limit,
but noncompliance is not demonstrated
beyond reasonable doubt. In the case of
legislation, the precise wording needs to
be consulted; some legislation requires
that, for example, process operators dem-
onstrate that they are complying with a
limit. In such a case, Figure 3b represents
noncompliance with the legislation; com-
pliance has not been demonstrated be-
yond doubt.
Similarly, if legislation requires clear
evidence of noncompliance with a limit
that triggers enforcement, although there
is no clear demonstration of compliance,
there is insufficient evidence of noncom-
pliance to support action, as in Figure 3c.
In these situations, end-users and legisla-
tors must spell out how the situation
should be handled.
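The four cases of Figure 3 amount to a simple decision rule. The sketch below considers only an upper limit and assumes the expanded uncertainty defines a symmetric interval around the result; the function name is ours, not from any standard:

```python
def compliance_case(result, u_expanded, upper_limit):
    """Classify a result against an upper limit (cases of Figure 3)."""
    low, high = result - u_expanded, result + u_expanded
    if low > upper_limit:
        return "a"  # clearly noncompliant: result above limit plus uncertainty
    if result > upper_limit:
        return "b"  # probable noncompliance, but limit within uncertainty
    if high > upper_limit:
        return "c"  # probable compliance, but limit within uncertainty
    return "d"      # clearly compliant: result below limit minus uncertainty
```

Only cases "a" and "d" are clear-cut; how "b" and "c" are acted upon depends on the wording of the legislation, exactly as discussed above.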
Figure 3. Uncertainty and compliance limits: (a) result above limit plus uncertainty; (b) result above limit, but limit within uncertainty; (c) result below limit, but limit within uncertainty; (d) result below limit minus uncertainty.
A recent editorial (10) pointed out the
need to avoid setting limits that cannot be
enforced without disproportionate effort.
An important factor to consider is the actual
uncertainty involved in determining a level
of analyte; if legislation is to be effective, the
uncertainty must be small in relation to any
limiting range. In chemistry, measurement
requirements tend to follow a "best avail-
able" presumption, even when this policy is
not actually written into legislation; an ex-
ample is the Delaney Clause (11).
As the state of the art improves, new
measurements become possible and are
immediately applied, leading to a situa-
tion in which the best available technol-
ogy is the only acceptable technology. In
such a situation, uncertainties are, inevi-
tably, hard to quantify well; they will of-
ten be larger than required for the pur-
pose. That legislation takes into account
the full uncertainty is particularly impor-
tant; failure to include significant compo-
nents may unreasonably restrict enforce-
ment. In particular, it is vital that the possibility of systematic effects be considered;
legislation based on measurement of
absolute amounts of substance, as in
most new European environmental legis-
lation, must consider the full range of
methods and sample matrices that may
fall within that legislation.
Another important consideration is the
interpretation of results and their relevant
uncertainties against limits. Assumptions
about the handling of experimental uncer-
tainty in interpretation for enforcement
purposes must be clearly stated in the
legislation. Specifically, do limits allow for
an experimental uncertainty or not? If so,
how large is the allowance?
A fundamental factor is how well leg-
islators understand uncertainty. The
need to set limits in some contexts is
easily understood, such as how much of
a toxic compound is acceptable in an
environmental matrix. However, judging
compliance is trickier, and a better un-
derstanding of analytical uncertainty is
required. The current move toward spec-
ifying method performance parameters,
such as repeatability, reproducibility,
and recovery, rather than the method
itself, is a step in the right direction; but
these parameters do not necessarily
cover all of the significant components of
uncertainty. What is required is the addi-
tional specification of the measurement
uncertainty to meet the needs of the
legislation.
Ellison's work was supported under contract
with the Department of Trade and Industry as
part of the National Measurement System Valid
Analytical Measurement Programme.
References
(1) Guide to the Expression of Uncertainty in Measurement; ISO: Geneva, 1993; ISBN 92-67-10188-9.
(2) Quantifying Uncertainty in Analytical Measurement; published on behalf of EURACHEM by Laboratory of the Government Chemist: London, 1995; ISBN 0-948926-08-2.
(3) Taylor, B. N.; Kuyatt, C. E. Guidelines for Evaluating and Expressing the Uncertainty of NIST Measurement Results; NIST Technical Note 1297; National Institute of Standards and Technology: Gaithersburg, MD, 1994.
(4) International Vocabulary of Basic and General Standard Terms in Metrology; ISO: Geneva, 1993; ISBN 92-67-10175-1.
(5) ISO 5725:1986, Precision of Test Methods: Determination of Repeatability and Reproducibility for a Standard Method by Interlaboratory Tests; ISO: Geneva, 1987.
(6) Youden, W. J.; Steiner, E. H. Statistical Manual of the Association of Official Analytical Chemists; AOAC: Washington, DC, 1982.
(7) Thorpe, C. W. J. Assoc. Off. Anal. Chem. 1969, 52, 778-81.
(8) Lognay, G. C.; Pearse, J.; Pocklington, D.; Boenke, A.; Schurer, B.; Wagstaffe, P. J. Analyst 1995, 120, 1831-35.
(9) Report EPS 1/RM/19; Environment Canada: Ottawa, Ontario, 1992.
(10) Thompson, M. Analyst 1995, 120, 117N-118N.
(11) Delaney Clause: Federal Food, Drug, and Cosmetic Act; Food Additives Amendment, 1958.
Steve Ellison is head of the analytical
quality and chemometrics section at the
Laboratory of the Government Chemist
(U.K.). His research interests include sta-
tistics, validation and measurement un-
certainty, and chemometrics in the con-
texts of regulatory analysis and analytical
chemistry. Wolfhard Wegscheider is professor of chemistry and dean of graduate studies at the University of Leoben (Austria) and chair of EURACHEM Austria. Alex Williams is chair of the EURACHEM Working Group on Measurement Uncertainty. Address correspondence about this article to Wegscheider at Institute of General and Analytical Chemistry, University of Leoben, A-8700 Leoben, Austria (wegschei@unileoben.ac.at).