
Electronic Measurements & Instrumentation.

Prof. S. Lakshminarayana, M.Tech., Ph.D.

FACTORS IN MAKING MEASUREMENTS


The "goodness" of measurements involves
several important concepts. Some of the more
significant of these are:
Error,
Validity,
Reliability,
Repeatability,
Accuracy,
Precision,
Resolution.

1. Error
In all measurements there is a certain degree of error present.
The word "error" in this context refers to normal random variation
and in no way means "mistakes" or "blunders."

FIGURE 2-5: The accuracy of a measurement is indicated by the size of ΔX. Xo is the true value.

The measured value (Xi) may deviate from the true value (Xo) by a certain amount (ΔX) if measurements are made repeatedly on the same parameter (which is truly unchanging), or if different instruments or instrument operators are used to make successive measurements.

ΔX = absolute error
Xi = measured value
Xo = true steady-state value
ΔX = Xi − Xo
If ΔX → 0, then Xi → Xo.

Relative Error: Er = ΔX / Xo

Percentage Error = Er × 100
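
As a quick numeric sketch of these formulas in Python (the values here are hypothetical):

Xo = 100.0      # true value, volts (hypothetical)
Xi = 99.2       # measured value, volts (hypothetical)

dX = Xi - Xo    # absolute error: dX = Xi - Xo = -0.8 V
Er = dX / Xo    # relative error: -0.008
pct = Er * 100  # percentage error: -0.8 %

print(dX, Er, pct)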

Errors in Measurement Systems

Error is the difference between the measured value and the true value of a physical quantity. The accuracy of a measuring system is expressed in terms of error.
Errors may be positive (reading a higher value) or negative (reading a lower value).
TYPES OF ERRORS.
1. Errors due to calibration
2. Human Errors.
3. Loading Error.
4. Environmental Error (e.g., temperature).
5. Random Error.
6. Instrument Error.

1. Accuracy: The maximum expected difference in magnitude between measured and true values (often expressed as a percentage of the full-scale value).
2. Precision: The ability of the instrument to repeat the measurement of a constant measurand. More precise measurements have less random error.
3. Resolution: The smallest possible increment discernible between measured values. As the term is used, higher resolution means smaller increments. Thus, an instrument with a five-digit display (say, 0.0000 to 9.9999) is said to have higher resolution than an otherwise identical instrument with a three-digit display (say, 0.00 to 9.99), subject to the condition that the source has the required resolution.

Precision vs. Accuracy

[Figure: three target diagrams]
Accurate but not precise: the average is close to the center, but the individual values are not similar.
Precise, but not accurate: the values are similar to each other but lie away from the center.
Accurate and precise: the values are similar to each other and close to the center.

WHAT IS THE DIFFERENCE BETWEEN ACCURACY AND PRECISION?

Accuracy is defined as "the ability of a measurement to match the actual value of the quantity being measured." If in reality it is 34.0 F outside and a temperature sensor reads 34.0 F, then that sensor is accurate.
Precision is defined as "(1) the ability of a measurement to be consistently reproduced" and "(2) the number of significant digits to which a value has been reliably measured." If on several tests the temperature sensor matches the actual temperature while the actual temperature is held constant, then the temperature sensor is precise. By the second definition, the number 3.1415 is more precise than the number 3.14.

An example of a sensor with BAD accuracy and BAD precision: Suppose a lab refrigerator holds a constant temperature of 38.0 F. A temperature sensor is tested 10 times in the refrigerator, yielding the readings 39.4, 38.1, 39.3, 37.5, 38.3, 39.1, 37.1, 37.8, 38.8, 39.0. This distribution shows no tendency toward a particular value (lack of precision) and does not acceptably match the actual temperature (lack of accuracy).

An example of a sensor with GOOD accuracy and BAD precision: Suppose a lab refrigerator holds a constant temperature of 38.0 F. A temperature sensor is tested 10 times in the refrigerator, yielding the readings 37.8, 38.3, 38.1, 38.0, 37.6, 38.2, 38.0, 38.0, 37.4, 38.3. This distribution shows no impressive tendency toward a particular value (lack of precision), but each value does come close to the actual temperature (high accuracy).

An example of a sensor with BAD accuracy and GOOD precision: Suppose a lab refrigerator holds a constant temperature of 38.0 F. A temperature sensor is tested 10 times in the refrigerator, yielding the readings 39.2, 39.3, 39.1, 39.0, 39.1, 39.3, 39.2, 39.1, 39.2, 39.2. This distribution does show a tendency toward a particular value (high precision), but every measurement is well off the actual temperature (low accuracy).

An example of a sensor with GOOD accuracy and GOOD precision: Suppose a lab refrigerator holds a constant temperature of 38.0 F. A temperature sensor is tested 10 times in the refrigerator, yielding the readings 38.0, 38.0, 37.8, 38.1, 38.0, 37.9, 38.0, 38.2, 38.0, 37.9. This distribution does show a tendency toward a particular value (high precision) and is very near the actual temperature each time (high accuracy).

The goal of any instrument is to have high accuracy (the sensor matching reality as closely as possible) and also high precision (consistently replicating results and measuring with as many significant digits as appropriately possible). Instruments, including radar, need to be calibrated so that they sustain high accuracy and high precision.
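
These four cases can be checked numerically: the mean of the readings indicates accuracy (closeness to the true 38.0 F) and the spread indicates precision. A minimal Python sketch using the sample data quoted above (statistics.stdev gives the sample standard deviation, a measure introduced later in this section):

import statistics

TRUE_TEMP = 38.0  # actual refrigerator temperature, deg F

datasets = {
    "bad accuracy, bad precision":   [39.4, 38.1, 39.3, 37.5, 38.3, 39.1, 37.1, 37.8, 38.8, 39.0],
    "good accuracy, bad precision":  [37.8, 38.3, 38.1, 38.0, 37.6, 38.2, 38.0, 38.0, 37.4, 38.3],
    "bad accuracy, good precision":  [39.2, 39.3, 39.1, 39.0, 39.1, 39.3, 39.2, 39.1, 39.2, 39.2],
    "good accuracy, good precision": [38.0, 38.0, 37.8, 38.1, 38.0, 37.9, 38.0, 38.2, 38.0, 37.9],
}

for label, readings in datasets.items():
    mean = statistics.mean(readings)
    spread = statistics.stdev(readings)  # small spread = good precision
    offset = mean - TRUE_TEMP            # small offset = good accuracy
    print(f"{label}: mean={mean:.2f} F, offset={offset:+.2f} F, stdev={spread:.2f} F")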

Statistical Analysis:
1. Arithmetic Mean:
The arithmetic mean is the average of a number of readings taken. The best approximation is obtained when the number of readings of the same quantity is very large. Theoretically, an infinite number of readings would give the best result, although in practice only a finite number of measurements can be made. The arithmetic mean is given by

x̄ = (x1 + x2 + x3 + … + xn) / n = (Σ xi) / n
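
A minimal sketch of this computation (the readings are hypothetical):

readings = [98.6, 98.7, 98.5, 98.8, 98.6]  # hypothetical repeated readings

# Arithmetic mean: sum of the readings divided by their number.
mean = sum(readings) / len(readings)
print(mean)  # 98.64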

2. Mean Deviation

Deviation is the departure of a given reading from the arithmetic mean of the group of readings. If the deviation of the first reading x1 is d1, that of the second reading x2 is d2, and so on, then the deviations from the mean can be expressed as

d1 = x1 − x̄,  d2 = x2 − x̄,  …,  dn = xn − x̄

The deviation from the mean may have a positive or a negative value, and the algebraic sum of all the deviations must be zero.

3. Average Deviation (D)

Average deviation is an indication of the precision of the instrument used in making the measurements: a highly precise instrument yields a low average deviation between readings. The average deviation is the sum of the absolute values of the deviations divided by the number of readings, where the absolute value of a deviation is its value without regard to sign. Average deviation may be expressed as

D = (|d1| + |d2| + … + |dn|) / n = (Σ |di|) / n
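
Continuing the sketch above with the same hypothetical readings, the deviations and the average deviation follow directly from the definitions:

readings = [98.6, 98.7, 98.5, 98.8, 98.6]
mean = sum(readings) / len(readings)

# Deviation of each reading from the mean; the algebraic sum is zero.
deviations = [x - mean for x in readings]
print(sum(deviations))  # ~0, within floating-point rounding

# Average deviation D: mean of the absolute deviations.
D = sum(abs(d) for d in deviations) / len(readings)
print(D)  # 0.088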

4. Standard Deviation
The standard deviation of an infinite number of data is the square root of the sum of all the individual deviations squared, divided by the number of readings. It is represented by σ and is also known as the root-mean-square deviation. Mathematically it is given by

σ = √(Σ di² / n) = √((d1² + d2² + … + dn²) / n)

Variance
Variance, or mean-square deviation, is the same as the standard deviation except that the square root is not taken. Therefore

Variance (V) = mean-square deviation = σ²

Variance is used in many computations because variances are additive.
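
A minimal sketch of both quantities, continuing the same hypothetical readings (the definition above divides by n; note that for a small finite sample, dividing by n − 1 is common practice instead):

import math

readings = [98.6, 98.7, 98.5, 98.8, 98.6]
mean = sum(readings) / len(readings)
deviations = [x - mean for x in readings]

# Standard deviation: square root of the mean of the squared deviations.
sigma = math.sqrt(sum(d * d for d in deviations) / len(readings))

# Variance: the standard deviation squared.
variance = sigma ** 2
print(sigma, variance)  # ~0.102, ~0.0104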

PROBABILITY OF ERRORS
VOLTAGE MEASUREMENTS

Mains voltage is measured with a digital voltmeter to the nearest 0.1 V at short time intervals. The data are presented in Table 1.2. The nominal value of the measured voltage is 220 V. The data are also presented in the histogram of Fig. 1.4, in which the ordinate represents the number of observed readings (frequency of occurrence) of a particular value. Such a plot is also called the frequency distribution curve.

TABLE 1.2: Observed voltage readings and their frequency of occurrence (data not reproduced).

An examination of the histogram reveals the following points:

(i) The largest number of readings (20) have their values at the central value of 220.0 V. The other readings are situated symmetrically about it: small errors are more probable than large errors.
(ii) The shape of the curve indicates that if more readings were taken at smaller intervals, the curve would remain symmetrical; only the contour of the histogram would become smoother. There is an equal probability of positive and negative errors.
(iii) The resulting bell-shaped curve is called the Gaussian distribution curve.
(iv) A sharp and narrow distribution curve enables the observer to state that the most probable value of the measured quantity is the central value.
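
Since Table 1.2 itself is not reproduced here, the following is a minimal sketch of how such a frequency distribution can be tallied from raw readings (the readings below are hypothetical):

from collections import Counter

# Hypothetical voltmeter readings to the nearest 0.1 V.
readings = [219.9, 220.0, 220.1, 220.0, 219.8, 220.0,
            220.2, 219.9, 220.1, 220.0, 220.0, 219.9]

# Frequency of occurrence of each observed value: the histogram ordinate.
histogram = Counter(readings)
for value in sorted(histogram):
    print(f"{value:.1f} V: {'*' * histogram[value]} ({histogram[value]})")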

PROBABLE ERROR
The Gaussian distribution curve as a function of the standard deviation is shown in Fig. 1.5. When a large number of readings are taken, the relation is given by

y = (1 / (σ √(2π))) e^(−x² / 2σ²)

where x = magnitude of the deviation from the mean, and
y = number of readings at any given deviation x (the probability of occurrence of deviation x).

Fig. 1.5 ERROR DISTRIBUTION CURVE

The error distribution curve shows a symmetrical distribution of errors and can be treated as the limiting form of the histogram of Fig. 1.4. The area under the curve between the limits +∞ and −∞ represents the entire number of observations. The area between the limits +σ and −σ gives the number of cases that differ from the mean by no more than the standard deviation: about 68 percent of all cases lie between the limits +σ and −σ from the mean. Half of all cases lie within ±0.6745σ; hence, the probable error is taken as r = ±0.6745σ.

To illustrate the meaning of the probable error, consider the measurement of nominally 100 pF capacitors. When a large number of measurements are taken, the mean value is found to be 100.00 pF with a standard deviation of 0.20 pF. This means that 68 percent of the capacitors have values which lie within ±0.20 pF of the mean; if a capacitor is selected from the lot at random, there is an approximately two-to-one chance that its value lies between the limits of ±0.20 pF.

Probable error r = ±0.6745σ = ±0.6745 × 0.20 pF ≈ ±0.135 pF
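
The 68 percent figure and the probable-error factor can be verified numerically from the Gaussian distribution; a minimal check using Python's standard error function:

import math

def fraction_within(k):
    # Fraction of a Gaussian population lying within +/- k standard deviations.
    return math.erf(k / math.sqrt(2))

print(fraction_within(1.0))     # ~0.6827: about 68% of cases within +/- 1 sigma
print(fraction_within(0.6745))  # ~0.5000: half of all cases, hence "probable error"

sigma = 0.20                    # pF, from the capacitor example above
print(0.6745 * sigma)           # ~0.135 pF probable error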

PERFORMANCE CHARACTERISTICS OF INSTRUMENTS

Stability: The ability of a measurement system to maintain its performance over prolonged periods of time is called stability. If a system is highly stable, it need not be calibrated frequently; frequent calibration is required for less stable instruments.
Zero Stability: The ability of an instrument to return to a zero reading after the input is made zero, while other conditions remain the same.
Resolution: The smallest change of input to which the measuring system responds. If a digital instrument has a readout of 9999, its resolution is 1 part in 9999.
Accuracy and resolution are not the same.

PROBLEMS
