Lec 3


Performance Characteristics of Measurement:

• These characteristics of a particular instrument are divided into Static and Dynamic characteristics and are given in the data sheet provided by the manufacturer.
• They are only applicable under the specified standard calibration conditions.

Static Characteristics
“Performance characteristics which measure slowly varying or unvarying data and thereby indicate the response of the instruments.”
• Static Calibration is used to obtain static characteristics
Static characteristics : accuracy, precision, resolution, repeatability,
reproducibility, range, static error, sensitivity, linearity, and drift
 Accuracy:
is a measure of how close the output reading of the instrument is to
the correct value (also termed as true or desired value) of the
quantity measured.
• If an accuracy of ±1% is specified for a 100 V voltmeter, then the true value of the voltage lies between 99 V and 101 V, with the maximum error not exceeding ±1 V.
• In practice, inaccuracy or measurement uncertainty value is usually
used rather than the accuracy value for an instrument.
Inaccuracy is often quoted as a percentage of the full-scale (F.S.) reading of an instrument. So, instruments having a range appropriate to the spread of the measurand’s values are chosen in order to maintain the best possible accuracy in instrument readings.
 The accuracy of an instrument can be specified in any of the following ways:
Point Accuracy
Percentage of Scale Range Accuracy
Percentage of True Value Accuracy
Point Accuracy
• It does not specify the general accuracy of an instrument; rather, it gives information about the accuracy at only one point on its scale.
• Making a table of accuracy at a number of points in the range of the
instrument may help in calculating the general accuracy of an
instrument.
Percentage of Scale Range Accuracy
• In this case, the accuracy of a uniform scale instrument is expressed
in terms of scale range.
• This type of accuracy specification can be highly misleading.
Consider a thermometer having a range of 200°C and an accuracy of ±0.5% of scale range. This implies:
For a reading of 200°C the accuracy is ±0.5% of the reading, while for a reading of 40°C the accuracy becomes (200/40) × (±0.5%) = ±2.5% of the reading.
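The same arithmetic can be sketched in a few lines of Python (an illustrative sketch only, not part of the original notes): a “% of scale range” specification fixes the absolute error, so the error expressed as a percentage of the reading grows as the reading gets smaller.

```python
scale_range = 200.0          # °C, full scale of the thermometer
accuracy_of_range = 0.5      # quoted accuracy: ±0.5% of scale range

# The absolute error implied by a "% of scale range" specification is fixed:
abs_error = accuracy_of_range / 100 * scale_range        # ±1 °C everywhere on the scale

# Expressed as a percentage of the actual reading, the error grows
# as the reading gets smaller.
for reading in (200.0, 40.0):
    print(f"at {reading} °C: ±{abs_error} °C = ±{abs_error / reading * 100}% of reading")
```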
Percentage of True Value Accuracy

• In this case, the accuracy is defined in terms of the true value of the quantity being measured. Thus, the errors are proportional to the readings; that is, the smaller the reading, the smaller the error.
• This is considered the best way to specify the accuracy of an instrument.

Numerical Problem:
A pressure gauge with a measurement range of 0–10 bar has a quoted
inaccuracy of 1.0% F.S (full-scale).
(a) What is the maximum measurement error expected for this instrument?
(b) What is the likely measurement error, expressed as a percentage of the output reading, if this pressure gauge is measuring a pressure of 1 bar?

✔ Solution is done in class.
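A minimal Python sketch of the arithmetic behind this problem (the in-class solution may present it differently): the maximum error follows from the quoted inaccuracy applied to the full-scale value, and the same absolute error is then re-expressed as a percentage of a 1 bar reading.

```python
full_scale = 10.0        # bar, upper limit of the measurement range
inaccuracy_fs = 0.01     # quoted inaccuracy: 1.0% of full scale

# (a) Maximum measurement error is 1.0% of the full-scale value.
max_error = inaccuracy_fs * full_scale                   # 0.1 bar

# (b) The same absolute error expressed as a percentage of a 1 bar reading.
reading = 1.0                                            # bar
error_percent_of_reading = max_error / reading * 100     # 10%

print(f"(a) max error = {max_error} bar")
print(f"(b) error at 1 bar = {error_percent_of_reading}% of reading")
```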


 Precision:
• Precision shows an instrument’s degree of freedom from random
errors.
• Hence, it is a measure of the degree to which repeated readings in a group of measurements agree with one another, provided they are taken under the same conditions.
• A precise reading need not be accurate, and vice versa; that is, a high-precision instrument may have low accuracy. The low accuracy causes a bias in the measurement, which can be removed by recalibration.

e.g. a voltmeter may be highly precise owing to its finely divided, clearly legible, mirror-backed scale and incredibly sharp pointer, which remove parallax errors. Suppose it can measure voltage to 1/1000th of a volt, but its zero adjustment is not accurate. It then yields highly precise but not accurate readings.
• Precision is also an indication of the spread of readings: if a large number of readings of the same quantity are taken by a high-precision instrument, then the spread of the readings will be very small.
Significant Figures: The number of significant figures provides information about the magnitude and precision of a measured quantity. The greater the number of significant figures, the more precise the quantity.
☞ The number of significant figures in the result of a calculation should not exceed the number of significant figures in the original quantities. This means that if extra digits accumulate in the answer, they should be discarded or rounded off.

• For example: for a current I of 2.34 A and a voltage V of 5.42 V,

R = V / I = 5.42 / 2.34 = 2.31623932 Ω

So, the reported value of the resistance R should contain only three significant figures, that is, 2.31 Ω.
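A small helper for this rounding rule can be sketched as follows; the function name and demonstration values are illustrative assumptions, not taken from these notes. Python’s `g` format rounds a number to a chosen count of significant figures.

```python
def round_sig(value: float, sig_figs: int) -> float:
    """Round a value to the given number of significant figures."""
    return float(f"{value:.{sig_figs}g}")

# Illustrative values only: keep three significant figures in each result.
print(round_sig(0.0123456, 3))   # 0.0123
print(round_sig(98765.4, 3))     # 98800.0
```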

 Consider a resistor whose resistance is stated as 106 Ω. Its true resistance should then be closer to 106 Ω than to 107 Ω or 105 Ω.
(3 significant figures, less precise)
 If its resistance is stated as 106.0 Ω, then its true resistance should be closer to 106.0 Ω than to 106.1 Ω or 105.9 Ω.
(4 significant figures, more precise)

 It is common practice in measurement to record all the digits nearest to the true value, e.g.
• if an observer reads a voltmeter as 117.1 V, the best estimate is that the true value is closer to 117.1 V than to 117.0 V or 117.2 V.

 Another way is to state the range of possible error, i.e. 117.1 ± 0.05 V, which means the voltage lies between 117.05 V and 117.15 V.

• When independent measurements are taken, the best estimate of the true value is taken as the arithmetic mean of all readings, and the range of error is the largest deviation from the mean, as explained in the following numerical problem.
Numerical Problem:
A set of independent voltage measurements taken by four observers was
recorded as 117.02 V, 117.11 V, 117.08 V, and 117.03 V. Calculate:
(a) The average voltage?
(b) The range of error?
✔ Solution is done in class.
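A minimal sketch of this calculation (the in-class solution may present it differently): the best estimate is the arithmetic mean, and the range of error is taken here as the largest deviation from that mean, as stated above.

```python
readings = [117.02, 117.11, 117.08, 117.03]   # V, four observers

# (a) Best estimate of the true value: the arithmetic mean of the readings.
average = sum(readings) / len(readings)

# (b) Range of error: the largest deviation of any reading from the mean.
range_of_error = max(abs(r - average) for r in readings)

print(f"average = {average:.2f} V, range of error = ±{range_of_error:.2f} V")
```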

☞ When two or more measurements with different degrees of accuracy are added, the result is only as accurate as the least accurate measurement, as explained in the following numerical problem.

Numerical Problem:
Two resistors 𝑅1 and 𝑅2 are connected in series. Their individual resistances, as measured by a Wheatstone bridge, were 𝑅1 = 18.7 Ω and 𝑅2 = 3.624 Ω.
Calculate the total resistance with a suitable number of significant figures.

✔ Solution is done in class.
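A minimal sketch of the reasoning (the in-class solution may present it differently): the sum is first computed in full and then rounded to the same decimal place as the least accurate of the two measurements.

```python
r1 = 18.7    # Ω, doubtful figure in the first decimal place
r2 = 3.624   # Ω, doubtful figure in the third decimal place

total = r1 + r2                    # 22.324 Ω, carries more digits than justified

# The result can be no more accurate than the least accurate measurement,
# so it is rounded to one decimal place, matching R1.
total_reported = round(total, 1)   # 22.3 Ω

print(f"R_total = {total_reported} Ω")
```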


 Resolution:
• Defined as the smallest change in the measured quantity to which an instrument will respond, i.e. the smallest change that can be reliably detected.
• The needle of the instrument will show no deflection unless a change at least equal to its resolution occurs in the input.
• One of the major factors influencing the
resolution of an instrument is how finely its output
scale is divided into subdivisions.
• The resolution of a Digital Readout is given by its
least count.

• e.g. a car speedometer typically has subdivisions of 20 km/h, so speeds lying between scale markings can only be roughly estimated.
• Also, a 200 V voltmeter with a resolution of 1 V cannot be used to measure 50 mV.
 Repeatability :  Reproducibility :
• also termed as test-retest • describes the closeness of
reliability output readings when the
• describes the closeness of same input is applied over a
output readings when the period of time and there are
same input is applied changes in
repetitively over a short  the method of measurement,
period of time and  The observer,
 with the same measurement  Measuring instrument,
conditions,  location,
 same instrument & observer,  conditions of use,
 same location, and  and time of measurement
 same conditions of use • It indicates the steady state
maintained throughout response of an instrument.
• Both terms describe the spread of output readings for the same
input. This spread is referred to as repeatability if the measurement
conditions are constant and as reproducibility if the measurement
conditions vary.

 Range or Span:
The range or span of an instrument
defines the minimum and maximum
values of a quantity that the
instrument is designed to measure.

Fig. L 2.1: The relation between input & output with ± repeatability
 Sensitivity:
“ defined as the ratio of change in output with respect to the change in
input (measurand) of the instrument.” thus
sensitivity = scale deflection / value of measurand producing deflection = 𝑄𝑜 / 𝑄𝑖

• The sensitivity of measurement is therefore the slope of the straight line drawn through the calibration points, as shown in Fig. L 2.2.
• e.g. if a pressure of 2 bar produces a 10° deflection in a pressure transducer, then

sensitivity = 10° / 2 bar = 5°/bar

(assuming that the deflection is zero with zero pressure applied)

Fig. L 2.2 Instrument output characteristic


• When the calibration curve is a straight line, the sensitivity 𝑄𝑜/𝑄𝑖 is constant over the entire range of the instrument, as shown in Fig. L 2.3.

• In the other case, when the calibration curve is not a straight line, the sensitivity is no longer constant and varies with the input, as shown in Fig. L 2.3.
Fig. L 2.3
Numerical Problem:
The following resistance values of a platinum resistance thermometer
were measured at a range of temperatures. Determine the
measurement sensitivity of the instrument in ohms/°C.

Resistance (Ω)   Temperature (°C)
307              200
314              230
321              260
328              290

✔ Solution is done in class.
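A minimal sketch of the calculation (the in-class solution may present it differently): since the characteristic here is linear, the sensitivity is the change in resistance divided by the corresponding change in temperature.

```python
resistance = [307, 314, 321, 328]     # Ω
temperature = [200, 230, 260, 290]    # °C

# Sensitivity is the slope of the output/input characteristic:
# change in resistance per unit change in temperature.
sensitivity = (resistance[-1] - resistance[0]) / (temperature[-1] - temperature[0])

print(f"sensitivity = {sensitivity:.3f} ohms/°C")   # (328 - 307) / (290 - 200) ≈ 0.233 ohms/°C
```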
 Linearity:
• It is usually desirable that the output reading of an instrument is
linearly proportional to the quantity being measured, as shown in
Fig. L 2.2 by Xs marks.
• The normal method is to draw a best-fit straight line through the X points, which is preferably obtained through a mathematical least-squares line-fitting technique.

• Nonlinearity is then defined as the maximum deviation of any of the output readings (marked as X) from this best-fit straight line, and it is usually expressed as a percentage of the full-scale reading (see the sketch below).
Most sensors are designed to have a linear output, but perfect linearity is never completely achieved; hence, the deviations from the ideal are termed non-linearity errors.
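A minimal Python sketch of this procedure, using illustrative (made-up) calibration data; the input values, output values, and full-scale figure below are assumptions, not data from these notes. The best-fit line is obtained with numpy's least-squares polynomial fit, and nonlinearity is the maximum deviation expressed as a percentage of full scale.

```python
import numpy as np

# Hypothetical calibration data: applied input vs. measured output (the X points).
inputs  = np.array([0.0, 2.0, 4.0, 6.0, 8.0, 10.0])   # e.g. bar
outputs = np.array([0.1, 2.1, 3.9, 6.2, 7.9, 10.0])   # e.g. volts

# Best-fit straight line through the points (least-squares).
slope, intercept = np.polyfit(inputs, outputs, 1)
fitted = slope * inputs + intercept

# Nonlinearity: maximum deviation from the best-fit line,
# expressed as a percentage of the full-scale reading.
full_scale = outputs.max()
nonlinearity_percent_fs = np.max(np.abs(outputs - fitted)) / full_scale * 100

print(f"sensitivity (slope) = {slope:.3f}")
print(f"nonlinearity = {nonlinearity_percent_fs:.2f}% of full scale")
```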
 Threshold:
“The minimum level of input that causes a change in the instrument’s
output reading of a large enough magnitude to be detectable.”
• Some manufacturers quote absolute values for threshold of instruments,
whereas others quote threshold as a percentage of full-scale readings.

e.g. A car speedometer typically has a threshold of about 15 km/h.

 Drift:
“The gradual shift in the indication of an instrument over a period of time during which the true value of the quantity does not change.”
• Drift is categorized into three types, namely:
zero drift, span drift, and zonal drift.
Zero Drift (Bias):
“A constant shift (a constant error) across the whole calibration due to a change in ambient conditions is termed zero drift, also called calibration drift, as shown in Fig. L 2.4(a).”
• It can occur for many reasons, including slippage, undue warming up of electronic tube circuits, or an initial zero adjustment in the instrument not being made.
• Zero drift is normally removable by the calibration process.
• Zero drift is typically quantified by a zero drift coefficient (e.g. volts/°C) when it is related to temperature changes.

 The mechanical form of the bathroom scale is a common example of an instrument that is prone to bias.

Sensitivity Drift (Scale Factor Drift or Span Drift):
“Defines the amount by which an instrument’s sensitivity of measurement (the deflection of the pointer per unit input) varies gradually as the ambient conditions change, as shown in Fig. L 2.4(b).”
• Sensitivity drift is quantified by sensitivity drift coefficients, which define how much drift there is for a unit change in each environmental parameter that the instrument’s characteristics are sensitive to;
unit: (angular degree/bar)/°C

e.g. the Modulus of Elasticity of a spring is temperature dependent.

• If an instrument suffers both zero drift and sensitivity drift at the same
time, then the typical change of the output characteristic of an instrument is
shown in Fig. L 2.4(c).

Zonal drift:
“is the drift that occurs only in a particular zone of an instrument
due to various environmental factors”,
such as:
change in temperature, mechanical vibrations, wear and tear, stray
electric and magnetic fields, and high mechanical stresses developed
in some parts of the instruments and systems.
Numerical Problem:
The following table shows output measurements of a voltmeter under two
sets of conditions:
(a) Use in an environment kept at 20°C, which is the temperature at which it was calibrated.
(b) Use in an environment at a temperature of 50°C.

Voltage readings at 20°C (assumed correct)   Voltage readings at 50°C
10.2                                         10.5
20.3                                         20.6
30.7                                         40.0
40.8                                         50.1

Determine the zero drift when it is used in the 50°C environment, assuming that the measurement values when it was used in the 20°C environment are correct. Also calculate the zero drift coefficient.

✔ Solution is done in class.


Numerical Problem:

 A spring balance is calibrated in an environment at 20°C and has the following deflection/load characteristic:

Load (kg) 0 1 2 3
Deflection (mm) 0 20 40 60
It is then used in an environment at 30°C, and the following deflection/load characteristic is measured:

Load (kg) 0 1 2 3
Deflection (mm) 5 27 49 71

Determine the zero drift and sensitivity drift per °C change in ambient temperature.

✔ Solution is done in class.
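A minimal sketch of the drift calculations for this problem (the in-class solution may present it differently): zero drift is the shift of the output at zero load, sensitivity drift is the change in the slope of the deflection/load line, and both are divided by the 10°C change in ambient temperature.

```python
loads = [0, 1, 2, 3]                 # kg
deflection_20 = [0, 20, 40, 60]      # mm, calibration environment (20 °C)
deflection_30 = [5, 27, 49, 71]      # mm, operating environment (30 °C)

temp_change = 30 - 20                # °C

# Zero drift: shift of the output at zero input.
zero_drift = deflection_30[0] - deflection_20[0]                              # 5 mm

# Sensitivity at each temperature: slope of the deflection/load line.
sens_20 = (deflection_20[-1] - deflection_20[0]) / (loads[-1] - loads[0])     # 20 mm/kg
sens_30 = (deflection_30[-1] - deflection_30[0]) / (loads[-1] - loads[0])     # 22 mm/kg

print(f"zero drift        = {zero_drift / temp_change} mm/°C")                # 0.5 mm/°C
print(f"sensitivity drift = {(sens_30 - sens_20) / temp_change} mm/kg/°C")    # 0.2 mm/kg/°C
```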


Homework 1:
