CHM122 Topic A - Introduction To Analytical Processes
[Figure: External standard calibration. Signal versus analyte mass or concentration for external calibration standards, including a blank; the unknown sample amount is read from the curve, and the LOD corresponds to S/N = 3.]
Errors in External Standard Calibration
The functional relationship between the response and the analyte concentration established by the calibration must apply to the sample as well.
The raw analytical response is corrected by measuring a blank.
An ideal blank is identical to the sample but without the analyte.
For complex samples, it is often too time-consuming or impossible to prepare an ideal blank, and a compromise must be made.
Most often a real blank is either a solvent blank, containing the
same solvent in which the sample is dissolved, or a reagent
blank containing the solvent plus all the reagents used in sample
preparation.
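In symbols (notation assumed here, not from the slides), the blank-corrected response used for calibration is

S_{\mathrm{corrected}} = S_{\mathrm{measured}} - S_{\mathrm{blank}}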
Errors in External Standard Calibration
Matrix effects, due to extraneous species in the sample that are not present in
the standards or blank, can cause the same analyte concentrations in the sample
and standards to give different responses.
Systematic errors can occur during the calibration process. If the standards are
prepared incorrectly, an error will occur. Also, the accuracy with which the
standards are prepared depends on the accuracy of the analytical techniques and
equipment used.
Random errors can also influence the accuracy of results obtained from
calibration curves. The uncertainty in the concentration of analyte obtained
from a calibration curve is lowest when the response is close to the mean
response ȳ. The point (x̄, ȳ) represents the centroid of the regression line.
Note that measurements made near the center of the curve will give less
uncertainty in analyte concentration than those made at the extremes.
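As a minimal sketch (hypothetical data; Python with numpy assumed, not part of the slides), an external calibration curve can be built by least squares and an unknown read back from it:

import numpy as np

# Hypothetical external standards: concentration (ppm) and blank-corrected signal
conc = np.array([0.0, 2.0, 4.0, 6.0, 8.0])
signal = np.array([0.01, 0.41, 0.80, 1.22, 1.58])

# Least-squares fit of signal = m*conc + b (univariate calibration)
m, b = np.polyfit(conc, signal, 1)

# Concentration of an unknown from its measured, blank-corrected signal
s_unknown = 0.95
c_unknown = (s_unknown - b) / m
print(f"slope = {m:.3f}, intercept = {b:.3f}, unknown = {c_unknown:.2f} ppm")

The uncertainty in c_unknown is smallest when s_unknown lies near the centroid of the standards, as noted above.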
Multivariate Calibration
The least-squares procedure is an example of a univariate calibration procedure (only
one instrument response is used per sample).
The process of relating multiple instrument responses to an analyte or a mixture
of analytes is known as multivariate calibration.
Multivariate calibration methods have become quite popular in recent years as
new instruments become available that produce multidimensional responses
(absorbance of several samples at multiple wavelengths, mass spectrum of
chromatographically separated components, etc.).
Multivariate calibration methods can be used to determine multiple components
in mixtures simultaneously and can provide redundancy in measurements to
improve precision, because repeating a measurement N times improves the
precision of the mean value by a factor of √N.
They can also be used to detect the presence of interferences that would not be
identified in a univariate calibration.
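As a minimal sketch (hypothetical absorptivities and absorbances; Python with numpy assumed), a two-component mixture can be resolved from absorbances at several wavelengths by classical least squares, A ≈ K c:

import numpy as np

# Hypothetical sensitivity matrix K (rows: wavelengths, columns: analytes 1 and 2)
K = np.array([[0.90, 0.10],
              [0.50, 0.45],
              [0.15, 0.80],
              [0.05, 0.95]])

# Measured absorbances of the mixture at the four wavelengths
A = np.array([0.210, 0.236, 0.271, 0.294])

# Solve the overdetermined system A ≈ K @ c; the redundant wavelengths improve precision
c, residuals, rank, _ = np.linalg.lstsq(K, A, rcond=None)
print("estimated concentrations:", c)

Large residuals from the fit would signal an interference not modeled in K, something a univariate calibration could not reveal.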
Real-life calibration
Subject to matrix interferences
Matrix = what the real sample is in
pH, salts, contaminants, particulates
Glucose in blood, oil in shrimp
Concomitant species in the real sample cause the detector or sensor response
to differ from that of standards at the same concentration, mass, or amount
(moles)
Several clever schemes are typically employed to solve real-world
calibration problems:
Standard Addition
Internal Standard
Standard Addition Method
Standard-addition methods are particularly useful for analyzing
complex samples in which the likelihood of matrix effects is
substantial.
One of the most common forms of the standard-addition method involves
adding one or more increments of a standard solution to sample aliquots
of identical volume. This process is often called spiking the
sample. Each solution is then diluted to a fixed volume before
measurement. Note that when the amount of sample is limited, standard
additions can be carried out by successive introductions of increments
of the standard to a single measured volume of the unknown.
Measurements are made on the original sample and on the sample plus
the standard after each addition. In most versions of the standard-
addition method, the sample matrix is nearly identical after each
addition.
Standard Addition Method
Classic method for reducing (or simply accommodating) matrix
effects especially for complex samples (biosamples)
Often the only way to do it right: you spike the sample by adding
known amounts of standard solution to it
Have to know your analyte in advance
Assumes that matrix is nearly identical after standard addition
(you add a small amount of standard to the actual sample)
As with the internal-standard method, this approach accounts for random
and systematic errors, and it is more widely applicable
Must have a linear calibration curve
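A minimal sketch (hypothetical numbers; Python with numpy assumed; the equal-final-volume, multiple-addition variant is assumed) of the standard-addition calculation: the signal is fitted against the concentration of added standard in each final solution, and the unknown concentration in that solution is the magnitude of the x-intercept:

import numpy as np

# Concentration of added standard in each diluted solution (ppm) and the measured signals
c_added = np.array([0.0, 5.0, 10.0, 15.0, 20.0])
signal = np.array([0.32, 0.57, 0.83, 1.07, 1.32])

# Linear fit signal = m*c_added + b (a linear calibration relationship is required)
m, b = np.polyfit(c_added, signal, 1)

# Analyte concentration in the diluted solution = |x-intercept| = b/m
c_diluted = b / m

# Scale back to the original sample by the dilution factor (assumed values)
dilution_factor = 50.0 / 10.0   # e.g., 10.00 mL aliquot diluted to 50.00 mL
c_sample = c_diluted * dilution_factor
print(f"diluted: {c_diluted:.2f} ppm, original sample: {c_sample:.1f} ppm")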
Internal Standard Method
An internal standard is a substance, different from the analyte, that is
added in a constant amount to all samples, blanks, and standards, or a
major component of the samples and standards present at a concentration
high enough that it can be assumed to be constant.
Plotting the ratio of the analyte signal to the internal-standard signal
as a function of analyte concentration gives the calibration curve.
Accounts for random and systematic errors.
Difficult to apply because of challenges associated with
identifying and introducing an appropriate internal standard
substance.
Similar but not identical; can’t be present in sample.
Internal Standard Method
Calibration then involves plotting the ratio of the analyte signal to the
internal-standard signal as a function of the analyte concentration of the
standards.
This ratio for the samples is then used to obtain their analyte concentrations
from a calibration curve.
If the analyte and internal-standard signals respond proportionally to
random instrumental and method fluctuations, the ratio of these signals is
independent of such fluctuations.
If the two signals are influenced in the same way by matrix effects,
compensation of these effects also occurs.
In those instances where the internal standard is a major constituent of
samples and standards, compensation for errors that arise in sample
preparation, solution, and cleanup may also occur.
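A minimal sketch (hypothetical intensities; Python with numpy assumed) of internal-standard calibration: the analyte-to-internal-standard signal ratio is fitted against the standard concentrations, and the same ratio measured for a sample is converted to concentration:

import numpy as np

# Standards: analyte concentration (ppm), analyte signal, internal-standard signal
c_std = np.array([1.0, 2.0, 4.0, 6.0, 8.0])
s_analyte = np.array([0.21, 0.40, 0.83, 1.19, 1.62])
s_istd = np.array([1.02, 0.98, 1.05, 0.99, 1.03])   # nearly constant, as intended

ratio = s_analyte / s_istd            # fluctuations common to both signals cancel in the ratio
m, b = np.polyfit(c_std, ratio, 1)    # calibration curve: ratio versus concentration

# Sample measurement
ratio_sample = 0.65 / 1.01
c_sample = (ratio_sample - b) / m
print(f"analyte concentration in sample = {c_sample:.2f} ppm")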
Internal Standard Method
A major difficulty in applying the internal-standard method is that of
finding a suitable substance to serve as the internal standard and of
introducing that substance into both samples and standards in a
reproducible way.
The internal standard should provide a signal that is similar to the
analyte signal in most ways but sufficiently different so that the two
signals are distinguishable by the instrument. The internal standard must
be known to be absent from the sample matrix so that the only source of
the standard is the added amount.
For example, lithium is a good internal standard for the
determination of sodium or potassium in blood serum because the
chemical behavior of lithium is similar to both analytes, but it does not
occur naturally in blood.
Internal Standard Method
Consider the determination of sodium in blood by flame
spectrometry using lithium as an internal standard.
In the figure on the next slide, the upper plot shows the normal calibration
curve of sodium intensity versus sodium concentration in ppm. Although
a fairly linear plot is obtained, quite a bit of scatter is observed.
The lower plot shows the intensity ratio of sodium to lithium plotted against
the sodium concentration in ppm.
Note the improvement in the calibration curve when the internal standard is
used. In the development of any new internal-standard method, we must
verify that changes in concentration of analyte do not affect the signal
intensity that results from the internal standard and that the internal
standard does not suppress or enhance the analyte signal.
Selecting an Analytical Method
Analytical procedures are characterized by a number of
figures of merit such as accuracy, precision, sensitivity,
detection limit, and dynamic range.
Additional figures of merit that are commonly used are also described,
along with the validation and reporting of analytical results.
Selecting an Analytical Method
Defining a problem
Performance characteristics of instruments (figures of merit):
Precision
Bias
Sensitivity
Detection limit
Dynamic range
Selectivity
Defining a problem
To select an analytical method intelligently, it is essential to define
clearly the nature of the analytical problem. Such a definition
requires answers to the following questions:
1. What accuracy is required?
2. How much sample is available?
3. What is the concentration range of the analyte?
4. What components of the sample might cause interference?
5. What are the physical and chemical properties of
the sample matrix?
6. How many samples are to be analyzed?
Performance Characteristics of Instruments
Performance Characteristics of Instruments
Figures of merit permit us to narrow the choice of instruments
for a given analytical problem to a relatively few. Selection
among these few can then be based on the qualitative
performance criteria listed in Table 1-4.
(2) Bias
Provides a measure of the systematic/determinate error of an analytical
method
Bias is defined in terms of the population mean μ of the measured analyte
concentration in a sample and the true value x_t: bias = μ - x_t.
Determining bias involves analyzing one or more standard reference
materials whose analyte concentration is known.
The results from such an analysis will, however, contain both random and
systematic errors; but if we repeat the measurements a sufficient number of
times, the mean value may be determined with a given level of confidence.
Usually in developing an analytical method, we attempt to identify the
source of bias and eliminate it or correct for it by the use of blanks and
by instrument calibration.
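A minimal sketch (hypothetical replicate results; Python with numpy and scipy assumed) of estimating bias from a standard reference material and judging whether it is significant at the 95% confidence level:

import numpy as np
from scipy import stats

true_value = 5.00                                    # certified SRM concentration (hypothetical)
results = np.array([5.12, 5.05, 5.09, 5.11, 5.02])   # replicate analyses of the SRM

mean = results.mean()
s = results.std(ddof=1)
bias = mean - true_value

# 95% confidence interval on the mean; if it excludes zero bias, the bias is significant
t = stats.t.ppf(0.975, df=len(results) - 1)
ci = t * s / np.sqrt(len(results))
print(f"bias = {bias:+.3f} +/- {ci:.3f} (95% CI)")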
(3) Sensitivity
There is general agreement that the sensitivity of an instrument or a
method is a measure of its ability to discriminate between small
differences in analyte concentration.
Two factors limit sensitivity: the slope of the calibration curve and
the reproducibility or precision of the measuring device.
Of two methods that have equal precision, the one that has the
steeper calibration curve will be the more sensitive.
If the two methods have calibration curves with equal slopes, the one
that exhibits the better precision will be the more sensitive.
The quantitative definition of sensitivity that is accepted by the
International Union of Pure and Applied Chemistry (IUPAC) is
calibration sensitivity, which is the slope of the calibration curve at
the concentration of interest.
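In symbols (standard notation assumed, not from the slides), a linear calibration gives a signal S = mc + S_blank, so the calibration sensitivity is the slope:

\text{calibration sensitivity} = \frac{dS}{dc} = m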
(4) Detection Limit
The most generally accepted qualitative definition of detection
limit is that it is the minimum concentration or mass of analyte
that can be detected at a known confidence level.
This limit depends on the ratio of the magnitude of the analytical
signal to the size of the statistical fluctuations in the blank signal.
Unless the analytical signal is larger than the blank by some
multiple k of the variation in the blank due to random errors, it is
impossible to detect the analytical signal with certainty.
As the limit of detection is approached, the analytical signal
and its standard deviation approach the blank signal and its
standard deviation.
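Quantitatively (standard notation assumed), with the mean blank signal S̄_bl, the blank standard deviation s_bl, and the calibration sensitivity m, the minimum distinguishable signal S_m and the corresponding detection limit c_m are commonly written as

S_m = \bar{S}_{\mathrm{bl}} + k\, s_{\mathrm{bl}}, \qquad c_m = \frac{S_m - \bar{S}_{\mathrm{bl}}}{m} = \frac{k\, s_{\mathrm{bl}}}{m}, \quad \text{with } k = 3 \text{ typically}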
Sensitivity vs. Limit of Detection
NOT THE SAME THING!!!!!
Sensitivity: Ability to discriminate between small differences in
analyte concentration at a particular concentration.
calibration sensitivity—the slope of the calibration curve at the
concentration of interest
Limit of detection: Minimum concentration that can be detected at
a known confidence limit
Typically three times the standard deviation of the noise from
the blank measurement (3σ, or 3s when estimated from a sample,
corresponding to roughly the 99.7% confidence level)
Such a signal is very probably not merely noise
Sensitivity and Detection Limit
Sensitivity is often used in describing an analytical
method. Unfortunately, it is occasionally used
indiscriminately and incorrectly.
The definition of sensitivity most often used is the
calibration sensitivity, or the change in the response
signal per unit change in analyte concentration.
The calibration sensitivity is thus the slope of the
calibration curve.
(5) Dynamic Range
Figure 1-13 illustrates the definition of the dynamic range of an
analytical method, which extends from the lowest concentration at which
quantitative measurements can be made (limit of quantitation, or LOQ) to
the concentration at which the calibration curve departs from linearity
by a specified amount (limit of linearity, or LOL).
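By a common convention (an assumption here, consistent with the 3s criterion used for the LOD), the LOQ is taken as the concentration whose signal lies ten blank standard deviations above the blank:

\mathrm{LOQ} = \frac{10\, s_{\mathrm{bl}}}{m}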
[Figure: Calibration curves, sensitivity and LOD. Signal versus analyte mass or concentration; the sensitivity is the slope of the calibration curve, and the LOD is the amount of analyte giving S/N = 3.]
[Figure: Calibration curves, dynamic range and noise regions. Below the LOD lies the noise region; quantitation is poor between the LOD and the LOQ; the dynamic range extends from the LOQ to the LOL, above which the calibration curve becomes poor.]