Performance Characteristics
Dr. B. R. Thorat
Department of Chemistry
Ismail Yusuf College, Mumbai 400060
[Diagram] Method Development → Method Validation → Method Transfer → Approved
VALIDATION
The process of providing documented evidence that something does what it is intended to do.
WHY VALIDATE?
[Diagram] Method Validation performance characteristics:
Limit of Quantitation
Specificity
Linearity
Range
Robustness
System Suitability
Identification tests.
Quantitative tests for impurities content.
Limit tests for the control of impurities.
Quantitative tests of the active moiety in samples of the drug substance (raw material), finished product, or other selected components in the drug product.
ACCURACY
The closeness of a measured value to the true or accepted value; commonly expressed as the relative error.
Accuracy is often more difficult to determine because the true value is usually unknown. An
accepted or reference value must be used instead.
PRECISION
Precision (reproducibility of measurements)
The measure of the degree of agreement (degree of scatter)
among test results when the method is applied repeatedly to
multiple samplings of a homogeneous sample.
Expressed as % RSD for a statistically significant number of
samples.
Intermediate precision expresses within-laboratories variations (different days, different analysts, different equipment).
Reproducibility expresses the precision between laboratories.
SPECIFICITY/SELECTIVITY
The ability of analytical method to accurately and specifically measure a signal is a
function of only the amount or concentration of analyte present in the sample. In presence of
interfering components, selectivity can be calculated as -
Where, A is analyte and I is interferent, S is the signal, k is the sensitivities and n is the
number of moles.
The selectivity of the method for the interferent relative to the analyte is defined by a
selectivity coefficient,
Selectivity may be positive or negative depending on whether the interferents effect on the
signal is opposite or negative to that of the analyte. A selectivity coefficient greater than +1 or
less than 1 indicates that the method is more selective for the interferent than for the analyte
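As a minimal sketch, the selectivity coefficient can be computed directly from the two sensitivities. The numerical values below are illustrative, not from the text:

```python
def selectivity_coefficient(k_interferent, k_analyte):
    """K_(A,I) = k_I / k_A.
    |K| > 1 means the method responds more strongly to the
    interferent than to the analyte."""
    return k_interferent / k_analyte

# Illustrative sensitivities in signal units per mole (assumed values)
K = selectivity_coefficient(k_interferent=0.5, k_analyte=2.0)
print(K)  # 0.25 -> method is more selective for the analyte
```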
SPECIFICITY/SELECTIVITY
The degree of interference:
Active Ingredients
Excipients
Impurities (synthetic precursors, enantiomers)
Degradation Products
Placebo Ingredients
A combination of two or more analytical procedures may be required to achieve the
necessary level of discrimination.
Sensitivity
The definition of sensitivity most often used is the calibration sensitivity: the change in the response signal per unit change in analyte concentration, equivalent to the proportionality constant k (the slope of the calibration curve).
If ΔS_A is the smallest increment in signal that can be measured, then the smallest difference in concentration that can be detected is Δc_A = ΔS_A / k.
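A short sketch of this relationship, using assumed numbers (the slope and signal increment below are illustrative, not from the text):

```python
def min_detectable_delta(delta_signal_min, sensitivity_k):
    """Smallest detectable concentration difference:
    delta_c = delta_S_A / k, with k the calibration sensitivity (slope)."""
    return delta_signal_min / sensitivity_k

# Illustrative: k = 0.05 signal units per ppm,
# smallest measurable signal increment = 0.001 signal units
print(min_detectable_delta(0.001, 0.05))
```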
[Figure: chromatogram showing baseline noise relative to Peak A, illustrating the LOD]
LOD: The lowest concentration of an analyte in a sample that can be detected, not quantified.
LOQ: The lowest concentration of analyte in a sample that can be determined with acceptable
precision and accuracy under stated operational conditions.
Detection Limit
The detection limit, DL, is the smallest concentration that can be reported with a certain level of confidence.
Every analytical technique has a detection limit. For methods that require a calibration curve, the detection limit should be defined. It is determined using the following equation:

c_DL = k * s_b / m

It is the analyte concentration that produces a response equal to k (called the confidence factor) times the standard deviation of the blank, s_b, where m is the calibration sensitivity.
The factor k is usually chosen to be 2 or 3. A k value of 2 corresponds to a confidence level of 92.1%, while a k value of 3 corresponds to a 98.3% confidence level.
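The equation c_DL = k * s_b / m can be sketched as follows; the blank readings and calibration slope are assumed for illustration:

```python
import statistics

def detection_limit(blank_signals, sensitivity_m, k=3):
    """c_DL = k * s_b / m, where s_b is the standard deviation of
    replicate blank measurements and m is the calibration slope."""
    s_b = statistics.stdev(blank_signals)
    return k * s_b / sensitivity_m

# Illustrative blank signals and slope (assumed, not from the text)
blanks = [0.012, 0.010, 0.013, 0.011, 0.009]
print(detection_limit(blanks, sensitivity_m=0.50, k=3))
```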
Example: In the determination of sodium in blood serum, only a small range is needed because variation of the sodium level in humans is quite limited.
Limit of quantitation
An alternative expression for the detection limit, which minimizes both type 1 and type 2 errors, is the limit of identification, (S_A)_LOI:

(S_A)_LOI = S_reag + z*σ_reag + z*σ_samp

where z is the relative deviation and σ is the standard deviation. The limit of identification is selected such that there is an equal probability of type 1 and type 2 errors.
The American Chemical Society's Committee on Environmental Analytical Chemistry recommends the limit of quantitation as:

(S_A)_LOQ = S_reag + 10*σ_reag
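The ACS recommendation can be sketched directly from replicate reagent-blank measurements; the signal values below are assumed for illustration:

```python
import statistics

def limit_of_quantitation(reagent_blank_signals):
    """ACS recommendation: (S_A)_LOQ = S_reag + 10 * sigma_reag,
    from the mean and standard deviation of reagent blanks."""
    s_reag = statistics.mean(reagent_blank_signals)
    sigma_reag = statistics.stdev(reagent_blank_signals)
    return s_reag + 10 * sigma_reag

# Illustrative reagent-blank signals (assumed, not from the text)
print(limit_of_quantitation([0.10, 0.12, 0.11, 0.09, 0.13]))
```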
RUGGEDNESS
Random variations in experimental conditions also introduce uncertainty. If a method's sensitivity is highly dependent on experimental conditions, the method is not rugged. Ruggedness is assessed by repeating the analysis under deliberately varied conditions, such as:
Different analysts
Different laboratories
Different instruments
Different reagents
Different days
Expressed as % RSD.
ROBUSTNESS
Many methods are subject to a variety of chemical and physical interferences that contribute uncertainty to the analysis.
When a method is relatively free from chemical interferences,
it can be applied to the determination of analytes in a wide
variety of sample matrices. Such methods are considered
robust.
Errors in analysis
Error: the result of a measurement minus the true value of the measurand.
Errors in analysis
Minimization of errors
Absolute Error
The absolute error is the difference between the measured value and the true value:

E = x_i - x_t

The sign of the absolute error tells you whether the value in question is high or low.
Relative Error
The relative error of a measurement is the absolute error divided by the true value.
Relative error may be expressed in percent, parts per thousand, or parts per million,
depending on the magnitude of the result.
The relative error is often a more useful quantity than the absolute error. The percent relative error is given by the expression

E_r = (x_i - x_t) / x_t × 100%

where E_r indicates the relative error of the measurement. Chemical analyses are affected by at least two types of errors: determinate (systematic) and indeterminate (random) errors.
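The two error expressions above can be sketched as follows; the 19.8 mL measurement against a 20.0 mL true value is an assumed example, not from the text:

```python
def absolute_error(measured, true_value):
    """E = x_i - x_t; the sign shows whether the result is high or low."""
    return measured - true_value

def percent_relative_error(measured, true_value):
    """E_r = (x_i - x_t) / x_t * 100%."""
    return (measured - true_value) / true_value * 100

# Illustrative: a 19.8 mL result against a 20.0 mL true value
print(absolute_error(19.8, 20.0))          # ≈ -0.2 (low result)
print(percent_relative_error(19.8, 20.0))  # ≈ -1.0
```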
[Diagram] Sources of Systematic Errors:
Instrumental errors
Method errors
Personal errors
Instrumental errors
Errors arise from the instruments or apparatus used for measurement: pipettes, burettes, and volumetric flasks may hold or deliver volumes slightly different from those indicated by their graduations.
These differences arise from using glassware at a temperature that differs significantly
from the calibration temperature,
from distortions in container walls due to heating while drying,
from errors in the original calibration,
from contaminants on the inner surfaces of the containers.
Electronic instruments are also subject to systematic errors.
Errors may emerge as the voltage of a battery-operated power supply decreases with use.
Errors can also occur if instruments are not calibrated frequently or if they are calibrated
incorrectly.
Method errors
The non-ideal chemical or physical behavior of the reagents and reactions on which an analysis is based often introduces systematic method errors.
Such sources of non-ideality include the slowness of some reactions,
the incompleteness of others,
the instability of some species,
the lack of specificity of most reagents,
the possible occurrence of side reactions that interfere with the measurement process.
Example
The analytical method used involves the decomposition of the organic samples in hot
concentrated sulfuric acid, which converts the nitrogen in the samples to ammonium
sulfate. Often a catalyst, such as mercuric oxide or a selenium or copper salt, is added to
speed the decomposition.
Personal Errors
Many measurements require personal judgments.
estimating the position of a pointer between two scale divisions,
the color of a solution at the end point in a titration,
the level of a liquid with respect to a graduation in a pipette or burette.
Minimization of errors
There are steps that can be taken to ensure accuracy in analytical procedures.
Most of these depend on minimizing or correcting errors that might occur in the
measurement step.
The overall accuracy and precision of an analysis might not be limited by the
measurement step and might instead be limited by factors such as sampling, sample
preparation, and calibration.
[Diagram] Minimization of errors:
Dilution and matrix matching
Internal standard methods
Saturation, matrix modification, and masking
Separations
Separations
Sample cleanup by separation methods is an important way to minimize errors from
possible interferences in the sample matrix.
Techniques such as filtration, precipitation, dialysis, solvent extraction, volatilization, ion
exchange, and chromatography are all very useful in ridding the sample of potential
interfering constituents.
Most separation methods are, however, time-consuming and may increase the chances that some of the analyte will be lost or that the sample will be contaminated.
In many cases, though, separations are the only way to eliminate an interfering species.
Mean
The mean, also called the arithmetic mean or the average, is obtained by dividing the sum of replicate measurements by the number of measurements in the set:

x̄ = (Σ x_i) / N

where x_i represents the individual values of x making up the set of N replicate measurements.
Median
The median is the middle result when replicate data are arranged in increasing or
decreasing order.
There are equal numbers of results that are larger and smaller than the median.
For an odd number of results, the median can be found by arranging the results in order
and locating the middle result.
For an even number, the average value of the middle pair is used.
In ideal cases, the mean and median are identical.
Mode
The "Mode" for a data set is the element that occurs the most often.
It is not uncommon for a data set to have more than one mode. This happens when two or
more elements occur with equal frequency in the data set.
A data set with two modes is called bimodal.
A data set with three modes is called trimodal.
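The three measures of central tendency above can be computed with Python's standard statistics module. The replicate data below are illustrative, not from the text:

```python
import statistics

# Illustrative replicate results (assumed values); 19.5 appears twice
data = [19.4, 19.5, 19.6, 19.8, 20.1, 20.3, 19.5]

print(statistics.mean(data))    # arithmetic mean: sum / N
print(statistics.median(data))  # middle value of the sorted results: 19.6
print(statistics.mode(data))    # most frequent value: 19.5
```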
Measures of dispersion
Population: The population is the collection of all measurements of interest and must be carefully defined by the experimenter. In some cases, the population is finite and real, while in others, it is hypothetical or conceptual in nature.
Sampling: Even when the population is finite, we usually do not have the time or resources to test all individuals. Hence, we select a sample for analysis according to statistical sampling principles and infer the characteristics of the population from those of the sample.
Example: 1. The determination of calcium in a community water supply to determine water hardness. 2. In determining glucose in the blood of a patient, we could hypothetically make an extremely large number of measurements if we used the entire blood supply.
Correlation coefficient
Correlation coefficients (denoted r) are statistics that quantify the relation between X and Y in unit-free terms.
When all points of a scatter plot fall directly on a line with an upward incline, r = +1; when all points fall directly on a line with a downward incline, r = -1.
To calculate a correlation coefficient, you normally need three different sums of squares (SS): the sum of squares for variable X, the sum of squares for variable Y, and the sum of the cross-products of X and Y:

r = SS_xy / sqrt(SS_xx * SS_yy)
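The three sums of squares can be sketched as a small function; the calibration points are assumed for illustration:

```python
import math

def correlation(xs, ys):
    """r = SS_xy / sqrt(SS_xx * SS_yy), computed from deviations
    of each point from its variable's mean."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    ss_xx = sum((x - mx) ** 2 for x in xs)
    ss_yy = sum((y - my) ** 2 for y in ys)
    ss_xy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    return ss_xy / math.sqrt(ss_xx * ss_yy)

# Illustrative calibration data: perfectly linear with an upward incline
print(correlation([1, 2, 3, 4], [2.0, 4.0, 6.0, 8.0]))  # 1.0
```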
Sample Mean and Standard Deviation
The difference between the sample mean x̄ and the population mean μ decreases as the sample size increases and becomes negligible when N is 20-30. In most cases we do not know the population mean, which is then estimated from the sample mean.
The quantity (x_i - μ) is the deviation d_i of data value x_i from the population mean, and (x_i - x̄) is the deviation of x_i from the sample mean.
The population standard deviation σ, a measure of the precision of the population, is given by the equation

σ = sqrt( Σ (x_i - μ)² / N )

where N is the number of data points making up the population.
The sample standard deviation s is given by

s = sqrt( Σ (x_i - x̄)² / (N - 1) )

where the quantity (x_i - x̄) represents the deviation d_i of value x_i from the mean, and (N - 1) is called the number of degrees of freedom.
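The distinction between the N and N - 1 denominators corresponds to Python's pstdev and stdev; the replicate data are assumed for illustration:

```python
import statistics

# Illustrative replicate results (assumed values)
data = [19.4, 19.5, 19.6, 19.8, 20.1]

# Population formula divides by N; sample formula divides by N - 1
sigma = statistics.pstdev(data)  # sqrt(sum((x - mean)^2) / N)
s = statistics.stdev(data)       # sqrt(sum((x - mean)^2) / (N - 1))
print(sigma, s)  # s > sigma for the same data set
```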
The relative standard deviation, RSD = s / x̄, is often expressed in parts per thousand (ppt) or in percent by multiplying this ratio by 1000 ppt or by 100%.
The relative standard deviation multiplied by 100% is called the coefficient of variation (CV).
Relative standard deviations often give a clearer picture of data quality than do absolute
standard deviations.
Example: Suppose that a copper determination has a standard deviation of 2 mg. If the
sample has a mean value of 50 mg of copper, the CV for this sample is 4%. For a sample
containing only 10 mg, the CV is 20%.
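The copper example above can be reproduced in a couple of lines:

```python
def coefficient_of_variation(std_dev, mean):
    """CV = (s / mean) * 100%."""
    return std_dev / mean * 100

# The copper example from the text: s = 2 mg
print(coefficient_of_variation(2, 50))  # 4.0  -> 4% for the 50 mg sample
print(coefficient_of_variation(2, 10))  # 20.0 -> 20% for the 10 mg sample
```

The same absolute scatter of 2 mg looks far worse relative to the smaller sample, which is why relative standard deviations often give a clearer picture of data quality.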