CHM122 Topic A - Introduction To Analytical Processes


CHM122 Topic A

Introduction to Analytical Processes

1st Semester 2023-2024

Classification
 Analytical methods are often classified as being either:
 Classical methods
 sometimes called wet-chemical methods
 both qualitative and quantitative approach
 uses gravimetric and titrimetric methods for analysis
 their general application has declined over time as instrumental methods have emerged
 Instrumental methods
Instrumental methods
 Measurements of such analyte physical properties as light absorption or
emission, fluorescence, mass-to-charge ratio, conductivity and electrode
potential began to be used for quantitative analysis.
 Highly efficient chromatographic and electrophoretic techniques began
to replace distillation, extraction, and precipitation for the separation of
components of complex mixtures prior to their qualitative or quantitative
determination.
 These newer methods for separating and determining chemical species are
known collectively as instrumental methods of analysis.
 Application was delayed by lack of reliable and simple instrumentation.
The growth of modern instrumental methods of analysis has paralleled the
development of the electronics and computer industries.
Types of Instrumental Methods
Instrumental Components
Calibration of Instrumental Methods
 Comparison with standards
 Direct Comparison
 Titrations
 External Standard Calibration
 Least-squares method
 Errors in External Standard Calibration
 Multivariate Calibration
 Standard Addition Method
 Internal Standard Method
(A) Comparison with standards:
Direct Comparison
 The analytical procedure involves comparing a property of the analyte
(or the product of a reaction with the analyte) with standards such that
the property being tested matches or nearly matches that of the standard.
 In colorimeters, the color produced as the result of a chemical reaction
of the analyte was compared with the color produced by reaction of
standards.
 If the concentration of the standard was varied by dilution, for
example, it was possible to obtain a fairly exact color match.
 The concentration of the analyte was then equal to the concentration
of the standard after dilution.
 Such a procedure is called a null comparison or isomation method.
(A) Comparison with standards:
Titrations
 Titrations are among the most accurate of all analytical
procedures.
 In a titration, the analyte reacts with a standardized reagent (the
titrant) in a reaction of known stoichiometry.
 The equivalence point is signaled by the color change of a chemical indicator or by a change in an instrument response.
 The amount of the standardized reagent needed to reach chemical equivalence can then be related to the amount of analyte present.
 The titration is thus a type of chemical comparison.
(B) External Standard Calibration
 An external standard is prepared separately from the sample. By contrast, an
internal standard is added to the sample itself.
 External standards are used to calibrate instruments and procedures when there are
no interference effects from matrix components in the analyte solution.
 Calibration is accomplished by obtaining the response signal (absorbance, peak
height, peak area) as a function of the known analyte concentration.
 A calibration curve is prepared by plotting the data or by fitting them to a suitable
mathematical equation, such as the slope-intercept form used in the method of linear
least squares or regression analysis.
 The response signal obtained for the sample is then used to predict the unknown analyte concentration, c, from the calibration curve or best-fit equation.
 The concentration of the analyte in the original bulk sample is then calculated from c by applying the appropriate dilution factors from the sample preparation steps.
(B) External Standard Calibration (ideal)
 External Standard – standards are not in the sample and are run
separately
 Generate calibration curve
 Run known standards and measure signals
 Plot vs. known standard amount (conc., mass, or mole)
 Linear regression via least squares analysis
 Compare response of sample unknown and solve for unknown
concentration
 All well and good if the standards are just like the sample unknown (a minimal worked sketch follows below)
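To make the steps above concrete, here is a minimal sketch in Python with invented concentrations and signals; numpy.polyfit stands in for the least-squares regression step.

```python
# Minimal sketch of external-standard calibration (all numbers hypothetical).
import numpy as np

# Known standard concentrations (ppm) and their measured signals
# (e.g., absorbance), including a blank at 0 ppm.
conc = np.array([0.0, 2.0, 4.0, 6.0, 8.0])
signal = np.array([0.002, 0.125, 0.251, 0.374, 0.499])

# Least-squares fit of the calibration line: signal = m*conc + b.
m, b = np.polyfit(conc, signal, 1)

# Predict the unknown concentration from its measured signal.
sample_signal = 0.310
c_unknown = (sample_signal - b) / m
print(f"slope m = {m:.4f}, intercept b = {b:.4f}")
print(f"unknown concentration = {c_unknown:.2f} ppm")
```

Any dilution factors from sample preparation would then be applied to convert c_unknown to the concentration in the original bulk sample.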
(B) External Standard Calibration (ideal)

[Figure: external-standard calibration; signal is plotted against analyte mass or concentration for the calibration standards, including a blank, and the sample unknown's signal is read back through the curve to give its amount; the LOD corresponds to S/N = 3.]
Errors in External Standard Calibration
 The calibration functional relationship between the response and
the analyte concentration must apply to the sample as well.
 The raw analytical response is corrected by measuring a blank.
An ideal blank is identical to the sample but without the analyte.
 For complex samples, it is too time-consuming or impossible to
prepare an ideal blank and a compromise must be made.
 Most often a real blank is either a solvent blank, containing the
same solvent in which the sample is dissolved, or a reagent
blank containing the solvent plus all the reagents used in sample
preparation.
Errors in External Standard Calibration
 Matrix effects, due to extraneous species in the sample that are not present in
the standards or blank, can cause the same analyte concentrations in the sample
and standards to give different responses.
 Systematic errors can occur during the calibration process. If the standards are
prepared incorrectly, an error will occur. Also, the accuracy with which the
standards are prepared depends on the accuracy of the analytical techniques and
equipment used.
 Random errors can also influence the accuracy of results obtained from calibration curves. The uncertainty in the concentration of analyte obtained from a calibration curve is lowest when the response is close to the mean response ȳ. The point (x̄, ȳ) represents the centroid of the regression line.
 Note that measurements made near the center of the curve will give less
uncertainty in analyte concentration than those made at the extremes.
Multivariate Calibration
 Least-squares procedure is an example of a univariate calibration procedure (only
one response is used per sample).
 The process of relating multiple instrument responses to an analyte or a mixture
of analytes is known as multivariate calibration.
 Multivariate calibration methods have become quite popular in recent years as
new instruments become available that produce multidimensional responses
(absorbance of several samples at multiple wavelengths, mass spectrum of
chromatographically separated components, etc.).
 Multivariate calibration methods can be used to determine multiple components in mixtures simultaneously and can provide redundancy in measurements to improve precision, because repeating a measurement N times improves the precision of the mean by a factor of √N.
 They can also be used to detect the presence of interferences that would not be
identified in a univariate calibration.
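As a toy illustration of relating multiple responses to multiple analytes, the sketch below solves a two-component Beer's-law system measured at four wavelengths; the absorptivity matrix and absorbances are invented, and classical least squares is only one of several multivariate calibration approaches.

```python
# Sketch of a classical-least-squares multivariate calibration for a
# two-component mixture measured at four wavelengths (hypothetical numbers).
import numpy as np

# Absorptivity-times-pathlength matrix K (rows = wavelengths, columns = components),
# obtained beforehand from pure-component standards.
K = np.array([[0.90, 0.10],
              [0.60, 0.40],
              [0.30, 0.70],
              [0.05, 0.95]])

# Measured absorbances of the mixture at the same four wavelengths.
A = np.array([0.215, 0.260, 0.305, 0.340])

# Solve A = K @ c for the concentration vector c in the least-squares sense;
# the extra (redundant) wavelengths improve precision and flag interferences
# through the size of the residuals.
c, residuals, rank, _ = np.linalg.lstsq(K, A, rcond=None)
print("estimated concentrations:", c)
```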
Real-life calibration
 Subject to matrix interferences
 Matrix = what the real sample is in
 pH, salts, contaminants, particulates
 Glucose in blood, oil in shrimp
 Concomitant species in the real sample cause the detector or sensor response to differ from that of standards at the same concentration, mass, or molar amount
 Several clever schemes are typically employed to solve real-world
calibration problems:
 Standard Addition
 Internal Standard
Standard Addition Method
 Standard-addition methods are particularly useful for analyzing
complex samples in which the likelihood of matrix effects is
substantial.
 One of the most common forms of the standard-addition method involves
adding one or more increments of a standard solution to sample aliquots
containing identical volumes. This process is often called spiking the
sample. Each solution is then diluted to a fixed volume before
measurement. Note that when the amount of sample is limited, standard
additions can be carried out by successive introductions of increments
of the standard to a single measured volume of the unknown.
 Measurements are made on the original sample and on the sample plus
the standard after each addition. In most versions of the standard-
addition method, the sample matrix is nearly identical after each
addition.
Standard Addition Method
 Classic method for reducing (or simply accommodating) matrix
effects especially for complex samples (biosamples)
 Often the only way to do it right: you spike the sample by adding known amounts of standard solution to the sample
 You have to know your analyte in advance
 Assumes that the matrix is nearly identical after standard addition (you add a small amount of standard to the actual sample)
 As with the internal-standard method, this approach compensates for random and systematic errors, and it is more widely applicable
 Must have a linear calibration curve (see the sketch below)
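Here is a minimal sketch of the multiple-addition version with invented signals; the analyte concentration in the diluted sample is obtained from the x-intercept of the fitted line, assuming a linear response.

```python
# Sketch of the multiple standard-addition method (hypothetical numbers).
import numpy as np

# Concentration of added standard in each diluted aliquot (ppm) and the
# corresponding measured signals; the first point is the unspiked sample.
added = np.array([0.0, 5.0, 10.0, 15.0, 20.0])
signal = np.array([0.32, 0.57, 0.83, 1.07, 1.32])

# Fit signal = m*added + b; the analyte concentration in the diluted sample
# equals the magnitude of the x-intercept, i.e. b/m.
m, b = np.polyfit(added, signal, 1)
c_sample_diluted = b / m
print(f"analyte concentration in the diluted sample = {c_sample_diluted:.2f} ppm")
# Multiply by the dilution factor to recover the concentration in the original sample.
```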
Internal Standard Method
 An internal standard is a substance, different from the analyte, added in a constant amount to all samples, blanks, and standards, or a major component of the sample present at a concentration high enough that it can be assumed constant.
 Plotting the ratio of the analyte signal to the internal-standard signal as a function of analyte concentration gives the calibration curve.
 Accounts for random and systematic errors.
 Difficult to apply because of challenges associated with
identifying and introducing an appropriate internal standard
substance.
 Similar but not identical; can’t be present in sample.
Internal Standard Method
 Calibration then involves plotting the ratio of the analyte signal to the
internal-standard signal as a function of the analyte concentration of the
standards.
 This ratio for the samples is then used to obtain their analyte concentrations
from a calibration curve.
 Thus, if the analyte and internal standard signals respond proportionally to
random instrumental and method fluctuations, the ratio of these signals is
independent of such fluctuations.
 If the two signals are influenced in the same way by matrix effects,
compensation of these effects also occurs.
 In those instances where the internal standard is a major constituent of
samples and standards, compensation for errors that arise in sample
preparation, solution, and cleanup may also occur.
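A minimal sketch of ratio-based calibration with invented flame-emission intensities for sodium (analyte) and lithium (internal standard), anticipating the example discussed below.

```python
# Sketch of internal-standard calibration (hypothetical flame-emission data).
import numpy as np

# Analyte (Na) standard concentrations in ppm; every standard and sample also
# contains the same fixed amount of the internal standard (Li).
c_na = np.array([0.1, 0.5, 1.0, 5.0, 10.0])
i_na = np.array([0.11, 0.52, 1.08, 5.30, 10.90])   # analyte emission intensity
i_li = np.array([1.05, 1.00, 1.02, 1.01, 1.03])    # internal-standard intensity

# Calibrate on the signal ratio, which cancels fluctuations that affect
# both lines in the same way.
ratio = i_na / i_li
m, b = np.polyfit(c_na, ratio, 1)

# Sample measurement: take its Na/Li ratio and read the concentration off the fit.
sample_ratio = 3.10 / 1.02
c_sample = (sample_ratio - b) / m
print(f"sodium concentration in the sample = {c_sample:.2f} ppm")
```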
Internal Standard Method
 A major difficulty in applying the internal-standard method is that of
finding a suitable substance to serve as the internal standard and of
introducing that substance into both samples and standards in a
reproducible way.
 The internal standard should provide a signal that is similar to the
analyte signal in most ways but sufficiently different so that the two
signals are distinguishable by the instrument. The internal standard must
be known to be absent from the sample matrix so that the only source of
the standard is the added amount.
 For example, lithium is a good internal standard for the
determination of sodium or potassium in blood serum because the
chemical behavior of lithium is similar to both analytes, but it does not
occur naturally in blood.
Internal Standard Method
 An example is the determination of sodium in blood by flame spectrometry using lithium as an internal standard.
 In the figure on the next slide, the upper plot shows the normal calibration curve of sodium intensity versus sodium concentration in ppm. Although a fairly linear plot is obtained, quite a bit of scatter is observed.
 The lower plot shows the intensity ratio of sodium to lithium plotted against the sodium concentration in ppm.
 Note the improvement in the calibration curve when the internal standard is
used. In the development of any new internal-standard method, we must
verify that changes in concentration of analyte do not affect the signal
intensity that results from the internal standard and that the internal
standard does not suppress or enhance the analyte signal.
Selecting an Analytical method
 Analytical procedures are characterized by a number of
figures of merit such as accuracy, precision, sensitivity,
detection limit, and dynamic range.
 We also consider additional figures of merit that are commonly used and discuss the validation and reporting of analytical results.
Selecting an Analytical method
 Defining a problem
 Performance characteristics of instruments
 Precision
 Bias
 Sensitivity
 Detection limit
 Dynamic range
 Selectivity
(these performance characteristics are collectively called FIGURES OF MERIT)
Defining a problem
 To select an analytical method intelligently, it is essential to define
clearly the nature of the analytical problem. Such a definition
requires answers to the following questions:
1. What accuracy is required?
2. How much sample is available?
3. What is the concentration range of the analyte?
4. What components of the sample might cause interference?
5. What are the physical and chemical properties of
the sample matrix?
6. How many samples are to be analyzed?
Performance Characteristics of Instruments

The list comprises quantitative instrument performance criteria that can be used to decide whether a given instrumental method is suitable for attacking an analytical problem. These criteria are called figures of merit.
Precision vs. Accuracy in the common verbiage
(Webster’s)
 Precision:
 The quality or state of being precise; exactness;
accuracy; strict conformity to a rule or a standard;
definiteness.
 Accuracy:
 The state of being accurate; exact conformity to truth,
or to a rule or model; precision.
 These are not synonymous when describing
instrumental measurements!
Precision and Accuracy in this course
 Precision: Degree of mutual agreement among data obtained in the
same way.
 Absolute and relative standard deviation, standard error of the
mean, coefficient of variation, variance.
 Accuracy: Measure of closeness to accepted value
 Can be assessed by agreement among different methods of measuring the same value
 Absolute or relative error
 Not known for unknown samples
 Can be precise without being accurate!!!
 Precisely wrong!
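A short sketch computing the precision metrics listed above for a set of replicate results (numbers invented).

```python
# Sketch: precision figures of merit for replicate results (hypothetical data).
import numpy as np

x = np.array([10.1, 10.3, 9.9, 10.2, 10.0])   # replicate results

mean = x.mean()
s = x.std(ddof=1)                  # absolute (sample) standard deviation
rsd = s / mean                     # relative standard deviation
cv = 100 * rsd                     # coefficient of variation, in percent
variance = s**2
sem = s / np.sqrt(x.size)          # standard error of the mean

print(f"mean={mean:.2f}  s={s:.3f}  RSD={rsd:.4f}  CV={cv:.1f}%  "
      f"variance={variance:.4f}  SEM={sem:.3f}")
```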
(1) Precision - Metrics

[Slide table: common precision metrics, such as the absolute and relative standard deviation, standard error of the mean, coefficient of variation (often expressed as %), and variance.]
Performance Characteristics of
Instruments
 Figures of merit permit us to narrow the choice of instruments
for a given analytical problem to a relatively few. Selection
among these few can then be based on the qualitative
performance criteria listed in Table 1-4.
(2) Bias
 Provides a measure of the systematic/determinate error of an analytical
method
 Defined as the difference between the population mean for the concentration of an analyte in a sample and the true (accepted) value.
 Determining bias involves analyzing one or more standard reference
materials whose analyte concentration is known.
 The results from such an analysis will, however, contain both random and
systematic errors; but if we repeat the measurements a sufficient number of
times, the mean value may be determined with a given level of confidence.
 Usually in developing an analytical method, we attempt to identify the
source of bias and eliminate it or correct for it by the use of blanks and
by instrument calibration.
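One common way to check for bias is to compare replicate results on a standard reference material with the certified value, for example with a one-sample t-test; the sketch below uses invented data.

```python
# Sketch: testing for bias against a standard reference material (hypothetical data).
import numpy as np
from scipy import stats

certified = 5.00                                    # certified concentration (ppm)
results = np.array([4.92, 4.95, 4.88, 4.97, 4.90])  # replicate results on the SRM

bias = results.mean() - certified
t_stat, p_value = stats.ttest_1samp(results, certified)
print(f"bias = {bias:+.3f} ppm, p = {p_value:.3f}")
# A small p-value (e.g., < 0.05) points to a systematic error that should be
# tracked down, or corrected for with blanks and calibration.
```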
(3) Sensitivity
 There is general agreement that the sensitivity of an instrument or a
method is a measure of its ability to discriminate between small
differences in analyte concentration.
 Two factors limit sensitivity: the slope of the calibration curve and
the reproducibility or precision of the measuring device.
 Of two methods that have equal precision, the one that has the
steeper calibration curve will be the more sensitive.
 If the two methods have calibration curves with equal slopes, the one
that exhibits the better precision will be the more sensitive.
 The quantitative definition of sensitivity that is accepted by the
International Union of Pure and Applied Chemistry (IUPAC) is
calibration sensitivity, which is the slope of the calibration curve at
the concentration of interest.
(4) Detection Limit
 The most generally accepted qualitative definition of detection
limit is that it is the minimum concentration or mass of analyte
that can be detected at a known confidence level.
 This limit depends on the ratio of the magnitude of the analytical
signal to the size of the statistical fluctuations in the blank signal.
 Unless the analytical signal is larger than the blank by some
multiple k of the variation in the blank due to random errors, it is
impossible to detect the analytical signal with certainty.
 As the limit of detection is approached, the analytical signal
and its standard deviation approach the blank signal and its
standard deviation.
Sensitivity vs. Limit of Detection
 NOT THE SAME THING!!!!!
 Sensitivity: Ability to discriminate between small differences in
analyte concentration at a particular concentration.
 calibration sensitivity—the slope of the calibration curve at the
concentration of interest
 Limit of detection: Minimum concentration that can be detected at
a known confidence limit
 Typically three times the standard deviation of the noise from the blank measurement (3s or 3σ, equivalent to a 99.7% confidence limit)
 Such a signal is very probably not merely noise
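A minimal sketch of that 3s criterion, using invented blank readings and an assumed calibration slope.

```python
# Sketch: detection limit from blank replicates and the calibration slope
# (all numbers hypothetical).
import numpy as np

blank_signals = np.array([0.0021, 0.0025, 0.0018, 0.0023,
                          0.0020, 0.0026, 0.0019, 0.0022])
m = 0.0622                      # calibration sensitivity (signal per ppm)

s_blank = blank_signals.std(ddof=1)
k = 3                           # confidence factor (the S/N = 3 criterion)
lod = k * s_blank / m
print(f"LOD = {lod:.4f} ppm")
```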
Sensitivity and Detection Limit
 Sensitivity is often used in describing an analytical
method. Unfortunately, it is occasionally used
indiscriminately and incorrectly.
 The definition of sensitivity most often used is the
calibration sensitivity, or the change in the response
signal per unit change in analyte concentration.
 The calibration sensitivity is thus the slope of the
calibration curve.
(5) Dynamic Range
Figure 1-13 illustrates the definition of the dynamic range of an analytical method, which extends from the lowest concentration at which quantitative measurements can be made (limit of quantitation, or LOQ) to the concentration at which the calibration curve departs from linearity by a specified amount (limit of linearity, or LOL).
 At the LOQ, the relative standard deviation is about 30% and decreases rapidly as concentrations become larger.
(5) Dynamic range
 The maximum range over which an accurate measurement can be made
 From limit of quantitation to limit of linearity
 LOQ: 10 × the standard deviation of the blank
 LOL: 5% deviation from linearity
 Some analytical techniques, such as absorption spectrophotometry, are linear over only one to two orders of magnitude. Other methods, such as mass spectrometry and fluorescence, may exhibit linearity over four to five orders of magnitude, and NMR even higher.
 Absorbance: 1-2
 MS, Fluorescence: 4-5
 NMR: 6
Linear Dynamic Range
 The linear dynamic range of an analytical method most often refers to the
concentration range over which the analyte can be determined using a linear
calibration curve.
 The lower limit of the dynamic range is generally considered to be the
detection limit. The upper end is usually taken as the concentration at
which the analytical signal or the slope of the calibration curve deviates by a
specified amount.
 Deviations from linearity are common at high concentrations because of
nonideal detector responses or chemical effects.
 A linear calibration curve is preferred because of its mathematical simplicity
and because it makes it easy to detect an abnormal response. With linear
calibration curves, fewer standards and a linear regression procedure can be
used.
Linear Dynamic Range
 Nonlinear calibration curves are often useful, but more standards
are required to establish the calibration function than with linear
cases.
 A large linear dynamic range is desirable because a wide
range of concentrations can be determined without dilution
of samples, which is time consuming and a potential source of
error.
 In some determinations, only a small dynamic range is required.
For example, in the determination of sodium in blood serum,
only a small range is needed because variations of the sodium level in humans are quite limited.
Analytical Sensitivity
 The ratio of the calibration curve slope to the standard deviation
of the analytical signal at a given analyte concentration.
 Usually a strong function of concentration.
 The detection limit, DL, is the smallest concentration that can
be reported with a certain level of confidence.
 Every analytical technique has a detection limit. For methods
that require a calibration curve, the detection limit is defined as
the analyte concentration that produces a response equal to k
times the standard deviation of the blank sb:
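In symbols, following the standard treatment in Skoog, these definitions read:

```latex
% Analytical sensitivity: calibration slope divided by the standard deviation
% of the signal at the concentration of interest.
\gamma = \frac{m}{s_S}

% Minimum distinguishable analytical signal and the corresponding detection limit,
% where \bar{S}_{bl} and s_{bl} are the mean and standard deviation of the blank,
% k is the confidence factor, and m is the calibration sensitivity.
S_m = \bar{S}_{bl} + k\,s_{bl}
\qquad\Rightarrow\qquad
c_m = \frac{S_m - \bar{S}_{bl}}{m} = \frac{k\,s_{bl}}{m}
```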
Analytical Sensitivity

 where k is called the confidence factor and m is the calibration sensitivity.
 The factor k is usually chosen to be 2 or 3. A k value of 2
corresponds to a confidence level of 92.1%, while a k value of 3
corresponds to a 98.3% confidence level.
 Detection limits reported by researchers or instrument companies
may not apply to real samples. The values reported are usually
measured on ideal standards with optimized instruments. These
limits are useful, however, in comparing methods or instruments.
Calibration Curve,
Limit of Detection, Sensitivity

[Figure: calibration (working) curve of signal versus analyte mass or concentration; the sensitivity is the slope of the curve and is not improved by amplification alone, and the LOD corresponds to S/N = 3.]
Calibration Curves:
Sensitivity and LOD

⚫ For a given sample standard deviation, s, a steeper calibration curve means better sensitivity and a lower LOD (S/N = 3); this is insensitive to amplification alone.
[Figure: two calibration curves of different slope, signal versus analyte mass or concentration, with their respective LODs marked.]
Calibration Curves:
Dynamic Range and Noise Regions

[Figure: calibration curve of signal versus analyte mass or concentration, annotated with the noise region below the LOD (S/N = 3), a poor-quantitation region between the LOD and the LOQ, the dynamic range extending from the LOQ to the LOL, and the region above the LOL where the calibration curve becomes poor.]


(6) Selectivity
 Selectivity of an analytical method refers to the degree to which the method
is free from interference by other species contained in the sample matrix.
 Unfortunately, no analytical method is totally free from interference from
other species, and frequently steps must be taken to minimize the effects of
these interferences
 Selectivity coefficients are useful figures of merit for describing the
selectivity of analytical methods and can range from zero (no interference) to
values considerably greater than unity.
 Note that a coefficient is negative when the interference causes a reduction in
the intensity of the output signal of the analyte.
 Unfortunately, selectivity coefficients are not widely used except to
characterize the performance of ion-selective electrodes.
Signals and Noise
▪Signal carries information about the analyte that is of interest
to us.
▪Noise is made up of extraneous information that is unwanted
because it degrades the accuracy and precision of an analysis
▪Signal-to-Noise Ratio (S/N)
▪Signal-to-noise (S/N) is a much more useful figure of merit than noise alone for describing the quality of an analytical method. The magnitude of the noise is defined as the standard deviation s of numerous measurements, and the signal is given by their mean x̄, so S/N = x̄/s, the reciprocal of the relative standard deviation.
Signals and Noise
 Signal carries the information about the analyte, while the noise is made up of extraneous information that is unwanted because it degrades the accuracy and precision of the measurement.
Sources of Noise in Instrumental
Analyses
Analyses are affected by two types of noise:
1. Chemical noise
2. Instrumental noise

 Chemical noise: arises from uncontrollable variables that affect the chemistry of the system being analyzed. Examples are undetected variations in temperature, pressure, chemical equilibria, humidity, light intensity, etc.
Sources of Noise in Instrumental
Analyses
 Instrumental Noise: Noise is associated with each
component of an instrument – i.e., with the source, the
input transducer, signal processing elements and output
transducer. Noise is a complex composite that usually
cannot be fully characterized. Certain kinds of
instrumental noise are recognizable, such as:
1. Thermal or Johnson noise
2. Shot noise
3. Flicker noise
4. Environmental noise
Signal-to-Noise Enhancement:
 When the need for sensitivity and accuracy increases, the signal-to-noise ratio often becomes the limiting factor in the precision of a measurement. Both hardware and software methods are available for improving the signal-to-noise ratio of an instrumental method.
 Hardware method: Hardware noise reduction is accomplished by
incorporating into the instrument design components such as filters,
choppers, shields, modulators, and synchronous detectors. These devices
remove or attenuate the noise without affecting the analytical signal
significantly.
 Software Method: Software methods are based on various computer algorithms that permit extraction of signals from noisy data. The hardware converts the signal from analog to digital form, and the digitized signal is then collected by a computer equipped with a data-acquisition module.
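As an illustration of the software route, the sketch below simulates ensemble averaging of repeated scans and shows the roughly √n gain in S/N expected for random noise (the signal, noise level, and scan count are all invented).

```python
# Sketch of a software S/N enhancement: ensemble averaging of n repeated scans
# of a constant signal buried in Gaussian noise (synthetic data).
import numpy as np

rng = np.random.default_rng(0)
true_signal = 1.0          # constant "analyte" signal
noise_sd = 0.5             # standard deviation of the random noise
n_scans = 64

scans = true_signal + rng.normal(0.0, noise_sd, size=(n_scans, 200))

def snr(trace):
    # S/N taken as the mean of the measurements divided by their standard deviation.
    return trace.mean() / trace.std(ddof=1)

print(f"S/N of a single scan     : {snr(scans[0]):.1f}")
print(f"S/N of {n_scans}-scan average: {snr(scans.mean(axis=0)):.1f}")
# Averaging n scans improves S/N by roughly sqrt(n) for random (white) noise.
```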
Quality Assurance of Analytical Results
 When analytical methods are applied to real-world
problems, the quality of results as well as the performance
quality of the tools and instruments used must be evaluated
constantly.
 The major activities involved are quality control,
validation of results, and reporting.
Control Charts
 A control chart is a sequential plot of some quality characteristic that is important in quality assurance; it shows the statistical limits of variation that are permissible for the characteristic being measured.
 As an example, we will consider monitoring the performance of an
analytical balance. Both the accuracy and the precision of the
balance can be monitored by periodically determining the mass of a
standard. We can then determine whether the measurements on
consecutive days are within certain limits of the standard mass.
These limits are called
 upper control limit (UCL)
 lower control limit (LCL)
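A minimal sketch of setting those limits for the balance example, assuming the usual three-sigma-of-the-mean control limits (all numbers invented).

```python
# Sketch: control-chart limits for daily checks of a standard mass on a balance
# (hypothetical 20 g standard; sigma estimated from long-run replicate weighings).
import numpy as np

mu = 20.000          # mean (accepted) mass of the standard, g
sigma = 0.00012      # long-run standard deviation of single weighings, g
n = 5                # number of replicate weighings averaged each day

ucl = mu + 3 * sigma / np.sqrt(n)   # upper control limit
lcl = mu - 3 * sigma / np.sqrt(n)   # lower control limit
print(f"UCL = {ucl:.5f} g, LCL = {lcl:.5f} g")

daily_mean = 20.00021               # today's mean of the n weighings
if not (lcl <= daily_mean <= ucl):
    print("Out of control: investigate (e.g., a dirty balance pan).")
```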
Control Charts

A control chart for a modern analytical balance. The results appear to fluctuate normally about the mean except for those obtained on day 17. Investigation led to the conclusion that the questionable value resulted from a dirty balance pan. UCL = upper control limit; LCL = lower control limit.
Validation
 Determines the suitability of an analysis for providing the sought-for
information and can apply to samples, to methodologies, and to data.
 Often done by the analyst, but it can also be done by supervisory
personnel.
 Often used to accept samples as members of the population being
studied, to admit samples for measurement, to establish the authenticity
of samples, and to allow for resampling if necessary.
 Samples can be rejected because of questions about the sample identity,
questions about sample handling, or knowledge that the method of
sample collection was not appropriate or was in doubt. For example, contamination of blood samples collected as evidence for a forensic examination would be reason to reject the samples.
Validation
 The most common methods for validation include analysis of standard
reference materials when available, analysis by a different analytical
method, analysis of “spiked” samples, and analysis of synthetic
samples approximating the chemical composition of the test samples.
 Individual analysts and laboratories often must periodically demonstrate the
validity of the methods and techniques used.
 Data validation is the final step before release of the results. This
process starts with validating the samples and methods used. Then, the
data are reported with statistically valid limits of uncertainty after a
thorough check has been made to eliminate blunders in sampling and
sample handling, mistakes in performing the analysis, errors in
identifying samples, and mistakes in the calculations used.
Reporting Analytical Results
 Specific reporting formats and procedures vary from laboratory to
laboratory.
 Whenever appropriate, reports should follow the procedure of a good
laboratory practice (GLP).
 Generally, analytical results should be reported as the mean value and
the standard deviation. Sometimes the standard deviation of the mean
is reported instead of that of the data set. Either of these is acceptable as
long as it is clear what is being reported.
 A confidence interval for the mean should also be reported. Usually the 95% confidence level is a reasonable compromise between being too inclusive and too restrictive.
Reporting Analytical Results
 The results of various statistical tests on the data should
also be reported when appropriate, as should the rejection
of any outlying results along with the rejection criterion.
 Significant figures are quite important when reporting
results and should be based on statistical evaluation of
the data.
 Whenever possible the significant figure convention
should be followed. Rounding of the data should be done
with careful attention to the guidelines.
Reporting Analytical Results
 Whenever possible graphical presentation should include error
bars on the data points to indicate uncertainty.
 Whenever appropriate the regression equation and its statistics
should also be reported.
 Validating and reporting analytical results are not the most
glamorous parts of an analysis, but they are among the most
important because validation gives us confidence in the
conclusions drawn.
 The report is often the “public” part of the procedure and may be
brought to light during hearings, trials, patent applications, and
other events.
References
 Textbook
▪ D.A. Skoog, D.M. West, F.J. Holler, and S.R. Crouch,
Fundamentals of Analytical Chemistry, 9th ed., Thomson Learning
Asia, Singapore, 2014.
 Supplemental Notes
▪ D.A. Skoog, F.J. Holler and S.R. Crouch, Principles of
Instrumental Analysis, 7th ed., Thomson Learning, Canada, 2016.
 Web references
