METROLOGY
Metrology. Metrology is the science of measurement. Metrology may be
divided, depending upon the quantity under consideration, into metrology
of length, metrology of time etc. Depending upon the field of application, it
is divided into industrial metrology, medical metrology etc.
Engineering metrology is restricted to the measurement of length,
angles and other quantities which are expressed in linear or angular terms.
For every kind of quantity measured, there must be a unit to measure
it. This will enable the quantity to be measured in number of that unit.
Further, in order that this unit is followed by all, there must be a universal
standard and the various units for various parameters of importance must
be standardized. It is also necessary to see whether the result is given with
sufficient correctness and accuracy for a particular need or not. This will
depend on the method of measurement, measuring devices used etc.
Thus, in a broader sense metrology is not limited to length and angle
measurement but is also concerned with numerous problems, theoretical as
well as practical, related to measurement, such as :
(i) Units of measurement and their standards, which is concerned
with the establishment, reproduction, conservation and transfer
of units of measurement and their standards.
(ii) Methods of measurement based on agreed units and standards.
(iii) Errors of measurement.
(iv) Measuring instruments and devices.
(v) Accuracy of measuring instruments and their care.
(vi) Industrial inspection and its various techniques.
(vii) Design, manufacturing and testing of gauges of all kinds.
Legal Metrology
Legal metrology is directed by a national organisation which is called
National Service of Legal Metrology. It includes a number of international
organisations whose ultimate object is to maintain uniformity of measure-
ment throughout the world. The functions of National Service of Legal
Metrology may be listed as :
1. To ensure conservation of national standards.
2. To guarantee their accuracy by comparison with international
standards.
3. To guarantee and where necessary, impart, proper accuracy to
the secondary standards for being used in the country, by com-
parison with national standards.
4. To carry out scientific and technical work in all fields of metrology
and methods of measurement.
5. To take part in the work of other national organisations inter-
ested in metrology.
6. To regulate, advise, supervise and control the manufacture and
repair of measuring instruments.
7. To inspect the use of these instruments and the measurement
operations when covered under public guarantee.
8. To detect frauds of measurement.
9. To organise training in legal metrology.
10. To represent the country in international activities regarding
metrology.
Legal metrology is thus concerned with the units of measurement,
methods of measurement and the measuring instruments, in relation to the
statutory, technical and legal requirements.
Need of Inspection
Inspection means checking of all materials, products or component
parts at various stages during manufacturing. It is the act of comparing
materials, products or components with some established standard.
In the old days production was on a small scale; different component
parts were made and assembled by the same craftsman. If the parts did not
fit properly at the time of assembly, he used to make the necessary
adjustments in either of the mating parts so that each assembly functioned
properly. Therefore, it was not necessary to make similar parts exactly
alike or with the same accuracy, and there was no need of inspection.
Due to technological development, new production techniques have
been developed. Products are being manufactured on a large scale due
to low-cost methods of mass production. So, the hand-fit method cannot serve
the purpose any more. The modern industrial mass production system is
based on interchangeable manufacture, when the articles are to be
produced on a large scale. In mass production the production of complete
article is broken up into various component parts. Thus the production of
each component part becomes an independent process. The different com-
ponent parts are made in large quantities in different shops. Some parts
are purchased from other factories also and then assembled together at one
place. Therefore, it becomes essential that any part chosen at random
should fit properly with any other mating part, also selected at random.
This is possible only when the dimensions of the component parts are made
with close dimensional tolerances, which in turn is possible only when the
parts are inspected at various stages during manufacturing.
When a large number of identical parts are manufactured on the basis
of interchangeability, actually measuring their dimensions every time
would require a lot of time. Hence, to save time, gauges are used, which
can tell whether the part manufactured is within the prescribed limits or
not.
Thus, the need of inspection can be summarised as :
1. To ensure that the part, material or a component conforms to the
established standard.
2. To meet the interchangeability of manufacture.
3. To maintain customer relations by ensuring that no faulty product
reaches the customers.
4. To provide the means of finding out shortcomings in manufacture.
The results of inspection are not only recorded but forwarded to
the manufacturing department for taking necessary steps, so as
to produce acceptable parts and reduce scrap.
5. It also helps to purchase good quality raw materials, tools and
equipment, which govern the quality of the finished products.
6. It also helps to co-ordinate the functions of quality control,
production, purchasing and other departments of the organisa-
tion.
7. To take decision on the defective parts i.e., to judge the possibility
of making some of these parts acceptable after minor repairs.
Principles of Measurement
Mechanical and production engineers are concerned with special
aspects of measurement in designing and manufacturing engineering
products. The latter, which vary widely in size, shape etc., have one thing
in common, namely, that they have to be made to a specification involving
dimensional accuracy.
Measurement is an essential part of the development of technology
and as technology becomes more complex the techniques of measurement
become more sophisticated.
The classic statement concerning metrology, made by Lord Kelvin
about a century ago —
“When you can measure what you are speaking about and express it
in numbers, you know something about it; but when you cannot measure
it, when you cannot express it in numbers, your knowledge is of a meagre
and unsatisfactory kind. It may be the beginning of knowledge, but you have
scarcely, in your thoughts, advanced to the stage of science.”
This statement presents a powerful case for the subject of metrology
to be a part of the studies of an engineer. It also points the way to the fact
that if we are to further improve the control of the process of manufacture,
we must continuously develop our means of measurement. As already
pointed out for every kind of quantity measured, there must be a unit to
measure it. This will enable the quantity to be measured in number of that
unit.
It also signifies that the study and practice of metrology demands a
clear knowledge of certain basic mathematical and physical principles.
Mathematics naturally forms the basis of metrology and at least a working
knowledge of arithmetic, geometry, trigonometry and algebra is desirable.
Some knowledge of applied mathematics, mechanics and physics is also an
asset to the proper understanding of the operation of instruments and the
design of gauges and equipment.
If we say that ‘A is longer than B’, then unless we answer by how
much, our statement is incomplete. We must be able to express this
difference in quantitative terms, i.e., in terms of numbers.
Thus a physical measurement can be defined as the act of obtaining
quantitative information about a physical object. It can also be defined as
the set of experimental operations carried out to determine the value of a
quantity.
Process of Measurement. The sequence of operations necessary for
the execution of measurement is called process of measurement.
There are three important elements of measurement :
(i) Measurand (ii) Reference (iii) Comparator.
(i) Measurand. Measurand is the physical quantity or property like
length, angle, diameter, thickness etc. to be measured.
(ii) Reference. It is the physical quantity or property to which quantita-
tive comparisons are made.
(iii) Comparator. It is the means of comparing measurand with some
reference.
Suppose a fitter has to measure the length of an m.s. flat. He first lays
his rule along the flat. He then carefully aligns the zero end of his rule with
one end of the m.s. flat and finally compares the length of the flat with
the graduations on his rule by eye. In this example, the length of the m.s.
flat is the measurand, the steel rule is the reference and the eye can be
considered as the comparator.
Methods of Measurement
These are the methods of comparison used in measurement process.
In precision measurement various methods of measurement are adopted
depending upon the accuracy required and the amount of permissible error.
The methods of measurement can be classified as :
1. Direct method
2. Indirect method
3. Absolute or fundamental method
4. Comparative method
5. Transposition method
6. Coincidence method
7. Deflection method
8. Complementary method
9. Contact method
10. Contactless method etc.
1. Direct method of measurement. This is a simple method of measure-
ment, in which the value of the quantity to be measured is obtained directly
without any calculations. For example, measurements by using scales,
vernier callipers, micrometers, bevel protractors etc. This method is most
widely used in production. This method is not very accurate because it
depends on the limitations of human judgement.
2. Indirect method of measurement. In indirect method the value of
quantity to be measured is obtained by measuring other quantities which
are functionally related to the required value, e.g., angle measurement by
sine bar, measurement of screw pitch diameter by the three-wire method etc.
3. Absolute or Fundamental method. It is based on the measurement
of the base quantities used to define the quantity. For example, measuring
a quantity directly in accordance with the definition of that quantity, or
measuring a quantity indirectly by direct measurement of the quantities
linked with the definition of the quantity to be measured.
4. Comparative method. In this method the value of the quantity to be
measured is compared with known value of the same quantity or other
quantity practically related to it. So, in this method only the deviations
from a master gauge are determined, e.g., dial indicators, or other com-
parators.
5. Transposition method. It is a method of measurement by direct
comparison in which the value of the quantity measured is first balanced
by an initial known value A of the same quantity, then the value of the
quantity measured is put in place of this known value and is balanced again
by another known value B. If the position of the element indicating
equilibrium is the same in both cases, the value of the quantity to be
measured is √(A·B). For example, determination of a mass by means of a
balance and known weights, using the Gauss double weighing method.
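The transposition idea can be put into a short numerical sketch (all readings hypothetical): if an unknown mass balances a known value A, and, after the pans are swapped, balances a known value B, the unequal balance arms cancel out and the mass comes out as √(A·B).

```python
import math

def gauss_double_weighing(known_a: float, known_b: float) -> float:
    """Estimate an unknown mass from two balancing weights.

    With unequal balance arms l1 and l2, the first weighing gives
    m * l1 = A * l2 and the second (pans swapped) gives B * l1 = m * l2.
    Multiplying the two relations eliminates l1 and l2, so m = sqrt(A * B).
    """
    return math.sqrt(known_a * known_b)

# Hypothetical readings: the unknown balances 100.40 g in one pan
# and 99.60 g after transposition.
mass = gauss_double_weighing(100.40, 99.60)
print(round(mass, 4))  # the geometric mean of the two readings
```

Note that the result is the geometric, not arithmetic, mean of the two readings, which is exactly why the method cancels the arm-length error.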
6. Coincidence method. It is a differential method of measurement, in
which a very small difference between the value of the quantity to be
measured and the reference is determined by the observation of the coin-
cidence of certain lines or signals. For example, measurement by vernier
calliper or micrometer.
7. Deflection method. In this method the value of the quantity to be
measured is directly indicated by a deflection of a pointer on a calibrated
scale.
8. Complementary method. In this method the value of the quantity to
be measured is combined with a known value of the same quantity. The
combination is so adjusted that the sum of these two values is equal to
predetermined comparison value. For example, determination of the
volume of a solid by liquid displacement.
9. Method of measurement by substitution. It is a method of direct
comparison in which the value of a quantity to be measured is replaced by
a known value of the same quantity, so selected that the effects produced in
the indicating device by these two values are the same.
10. Method of null measurement. It is a method of differential meas-
urement. In this method the difference between the value of the quantity
to be measured and the known value of the same quantity with which it is
compared is brought to zero.
11. Contact method. In this method the sensor or measuring tip of the
instrument actually touches the surface to be measured, e.g., measurement
by micrometer, vernier calliper, dial indicators etc. In such cases an
arrangement for constant contact pressure should be provided to prevent
errors due to excessive contact pressure.
12. Contactless method. In the contactless method of measurement, there
is no direct contact with the surface to be measured, e.g., measurement by
optical instruments, such as the tool maker's microscope, projection
comparator etc.
Measuring system :
A measuring system is made of five basic elements. These are
(i) Standard (ii) Workpiece
(iii) Instrument (iv) Person (v) Environment.
The most basic element of measurement is a standard, without which
no measurement is possible. Once the standard is chosen, a measuring
instrument incorporating this standard should be obtained. This instru-
ment is then used to measure the job parameters, in terms of the units of
the standard contained in it. The measurement should be performed under
a standard environment. And, lastly, there must be some person or mecha-
nism (if automatic) to carry out the measurement.
Accuracy of Measurement
The purpose of measurement is to determine the true dimensions of a
part. But no measurement can be made absolutely accurate. There is
always some error. The amount of error depends upon the following
factors :
(i) The accuracy and design of the measuring instrument
(ii) The skill of the operator
(iii) Method adopted for measurement
(iv) Temperature variations
(v) Elastic deformation of the part or instrument etc.
Thus, the true dimension of the part cannot be determined but can
only be approximated.
The agreement of the measured value with the true value of the
measured quantity is called accuracy. If the measurement of a dimension
of a part approximates very closely to the true value of that dimension, it is
said to be accurate. Thus the term accuracy denotes the closeness of the
measured value to the true value. The difference between the measured
value and the true value is the error of measurement. The smaller the
error, the greater is the accuracy.
Precision and Accuracy
Precision. The terms precision and accuracy are used in connection
with the performance of the instrument. Precision is the repeatability of the
measuring process. It refers to the group of measurements for the same
characteristics taken under identical conditions. It indicates to what extent
the identically performed measurements agree with each other. If the
instrument is not precise it will give different (widely varying) results for
the same dimension when measured again and again. The set of observations
will scatter about the mean. The scatter of these measurements is
designated as σ, the standard deviation, which is used as an index of
precision. The less the scattering, the more precise is the instrument.
Thus, the lower the value of σ, the more precise is the instrument.
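As a sketch (readings hypothetical), the standard deviation of repeated readings of the same dimension can serve as the index of precision just described:

```python
import statistics

# Hypothetical repeated readings (mm) of the same dimension,
# taken under identical conditions.
readings = [25.01, 25.02, 24.99, 25.00, 25.01, 24.98, 25.02, 25.00]

mean = statistics.mean(readings)
sigma = statistics.stdev(readings)  # sample standard deviation: the index of precision

print(f"mean = {mean:.4f} mm, sigma = {sigma:.4f} mm")
# A smaller sigma means less scatter about the mean,
# i.e. a more precise instrument.
```

Note that sigma says nothing about accuracy: a precise instrument may still have a large, consistent offset from the true value.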
Accuracy. Accuracy is the degree to which the measured value of the
quality characteristic agrees with the true value. The difference between
the true value and the measured value is known as error of measurement.
It is practically difficult to measure the true value exactly; therefore
a set of observations is made, whose mean value is taken as the true
value of the quantity measured.
Distinction between Precision and Accuracy
Accuracy is very often confused with precision though the two are much
different. The distinction between precision and accuracy will become clear
from the following example.

Fig. 1.1. Precision and accuracy : (a) precise but not accurate, (b) accurate
but not precise, (c) accurate and precise. (Each panel plots the frequency of
readings against dimension, showing their scatter about the average value
relative to the true value.)

Several measurements are made on a component by
different types of instruments (A, B and C respectively) and the results are
plotted. In any set of measurements, the individual measurements are
scattered about the mean, and the precision signifies how well the various
measurements performed by the same instrument on the same quality
characteristic agree with each other.
The difference between the mean of a set of readings on the same quality
characteristic and the true value is called the error. The less the error, the
more accurate is the instrument.
Figure 1.1 (a) shows that instrument A is precise, since the results of a
number of measurements are close to the average value. However, there is
a large difference (error) between the true value and the average value;
hence it is not accurate.
Fig. 1.1 (b) shows that the readings taken by instrument B are scattered
far from the average value and hence it is not precise, but it is accurate,
as there is only a small difference between the average value and the true
value.
Fig. 1.1 (c) shows that instrument C is accurate as well as precise.
Factors affecting the accuracy of the measuring system :
The basic components of an accuracy evaluation are the five elements
of a measuring system such as :
1. Factors affecting the calibration standards.
2. Factors affecting the workpiece.
3. Factors affecting the inherent characteristics of the instrument.
4. Factors affecting the person, who carries out the measurements,
and
5. Factors affecting the environment.
1. Factors affecting the Standard. It may be affected by :
— coefficient of thermal expansion,
— calibration interval,
— stability with time,
— elastic properties,
— geometric compatibility.
2. Factors affecting the Workpiece. These are :
— cleanliness, surface finish, waviness, scratches, surface defects etc.,
— hidden geometry,
— elastic properties,
— adequate datum on the workpiece,
— arrangement of supporting the workpiece,
— thermal equalization etc.
3. Factors affecting the inherent characteristics of the Instrument :
— adequate amplification for the accuracy objective,
— scale error,
— effect of friction, backlash, hysteresis, zero drift error,
— deformation in handling or use when heavy workpieces are
measured,
— calibration errors,
— mechanical parts (slides, guide ways or moving elements),
— repeatability and readability,
— contact geometry for both workpiece and standard.
4. Factors affecting the Person :
— training, skill,
— sense of precision appreciation,
— ability to select measuring instruments and standards,
— sensible appreciation of measuring cost,
— attitude towards personal accuracy achievements,
— planning measurement techniques for minimum cost, consistent
with precision requirements etc.
5. Factors affecting the Environment :
— temperature, humidity etc.,
— clean surroundings and minimum vibration enhance precision,
— adequate illumination,
— temperature equalization between standard, workpiece and
instrument,
— thermal expansion effects due to heat radiation from lights,
heating elements, sunlight and people,
— manual handling may also introduce thermal expansion.
Higher accuracy can be achieved only if all the sources of error due to
the above five elements in the measuring system are analysed and steps
taken to eliminate them.
The above analysis of the five basic metrology elements can be composed
into the acronym SWIPE, for convenient reference,
where, S - STANDARD
W - WORKPIECE
I - INSTRUMENT
P - PERSON
E - ENVIRONMENT
Sensitivity
Sensitivity may be defined as the rate of displacement of the indicating
device of an instrument, with respect to the measured quantity. In other
words, sensitivity of an instrument is the ratio of the scale spacing to the
scale division value. For example, if on a dial indicator the scale spacing is
1.0 mm and the scale division value is 0.01 mm, then the sensitivity is 100.
It is also called the amplification factor or gearing ratio.
If we now consider sensitivity over the full range of instrument readings
with respect to measured quantities, as shown in Fig. 1.2, the sensitivity
at any value of y is dy/dx, where dx and dy are increments of x and y taken
over the full instrument scale; thus the sensitivity is the slope of the curve
at any value of y.
Fig. 1.2. Sensitivity of an instrument (instrument reading y plotted
against measured quantity x; the slope dy/dx gives the sensitivity).
The sensitivity may be constant or variable along the scale. In the first
case we get linear transmission and in the second non-linear transmission.
Sensitivity refers to the ability of a measuring device to detect small
differences in the quantity being measured. High sensitivity instruments
may suffer drift due to thermal or other effects, and their indications may
be less repeatable or less precise than those of an instrument of lower
sensitivity.
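The ratio definition above can be reduced to a one-line sketch, using the dial-indicator numbers from the text:

```python
def sensitivity(scale_spacing_mm: float, scale_division_value_mm: float) -> float:
    """Sensitivity (amplification factor) = scale spacing / scale division value."""
    return scale_spacing_mm / scale_division_value_mm

# Dial indicator from the text: 1.0 mm spacing between graduations,
# each graduation representing 0.01 mm of measured displacement.
print(sensitivity(1.0, 0.01))
```

A larger ratio means the pointer moves further for the same change in the measurand, i.e. a more sensitive instrument.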
Readability
Readability refers to the ease with which the readings of a measuring
instrument can be read. It is the susceptibility of a measuring device to
have its indications converted into meaningful numbers. Fine and widely
spaced graduation lines ordinarily improve readability. If the graduation
lines are very finely spaced, the scale will be more readable using a
microscope; with the naked eye, however, the readability will be poor.
To make micrometers more readable they are provided with a vernier
scale. Readability can also be improved by using magnifying devices.
Calibration
The calibration of any measuring instrument is necessary to measure
the quantity in terms of standard unit. It is the process of framing the scale
of the instrument by applying some standardized signals. Calibration is a
premeasurement process, generally carried out by manufacturers.
It is carried out by making adjustments such that the read-out device
produces zero output for zero measured input. Similarly, it should display
an output equivalent to the known measured input near the full scale input
value.
The accuracy of the instrument depends upon the calibration. Con-
stant use of instruments affects their accuracy. If the accuracy is to be
maintained, the instruments must be checked and recalibrated if neces-
sary. The schedule of such calibration depends upon the severity of use,
environmental conditions, accuracy of measurement required etc. As far as
possible, calibration should be performed under environmental conditions
which are very close to the conditions under which the actual measurements
are carried out. If the output of a measuring system is linear and
repeatable, it can be easily calibrated.
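The zero and full-scale adjustments described above amount to a two-point linear calibration. A minimal sketch (all readings hypothetical), assuming the instrument response is linear and repeatable as the text requires:

```python
def two_point_calibration(raw_zero, raw_full, true_full):
    """Return a function mapping a raw reading to a calibrated value.

    raw_zero  : instrument output for zero input
    raw_full  : instrument output for the known full-scale input
    true_full : the known full-scale input value
    Assumes a linear, repeatable instrument response.
    """
    gain = true_full / (raw_full - raw_zero)
    return lambda raw: (raw - raw_zero) * gain

# Hypothetical comparator: it reads 0.03 for zero input and 10.08 when
# a known 10.00 mm standard is measured.
calibrate = two_point_calibration(0.03, 10.08, 10.00)
print(round(calibrate(5.055), 3))  # a mid-range raw reading mapped to mm
```

After calibration, the zero reading maps to 0 and the full-scale reading maps to the known standard value, exactly as described in the text.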
Magnification
In order to measure small differences in dimensions the movement of
the measuring tip in contact with the work must be magnified. For this the
output signal from a measuring instrument is to be magnified. This mag-
nification means increasing the magnitude of output signal of measuring
instrument many times to make it more readable.
The degree of magnification used should bear some relation to the
accuracy of measurement desired and should not be larger than necessary.
Generally, the greater the magnification, the smaller is the range of meas-
urement on the instrument and greater the need for care in using it.
The magnification obtained in a measuring instrument may be based on
mechanical, electrical, electronic, optical or pneumatic principles, or a
combination of these. Mechanical magnification is the simplest and most
economical method. It is obtained by means of levers or gear trains. In electrical
magnification, the change in the inductance or capacitance of electric
circuit, made by change in the quantity being measured is used to amplify
the output of the measuring instrument. Electronic magnification is ob-
tained by the use of valves, transistors or ICS. Optical magnification uses
the principle of reflection and pneumatic magnification makes use of com-
pressed air for amplifying the output of measuring instrument.
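For the mechanical case, when magnification is obtained through several lever or gear stages in series, the overall magnification is the product of the individual stage ratios. A sketch with hypothetical stage ratios:

```python
from math import prod

def overall_magnification(stage_ratios):
    """Overall magnification of stages in series = product of the stage ratios."""
    return prod(stage_ratios)

# Hypothetical mechanical comparator: a 5:1 lever followed by a
# 20:1 gear train driving the pointer.
print(overall_magnification([5.0, 20.0]))  # -> 100.0
```

This also illustrates the trade-off noted above: multiplying up the stages raises the magnification but correspondingly shrinks the usable measuring range.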
Repeatability
It is the ability of the measuring instrument to repeat the same results
for measurements of the same quantity, when the measurements are
carried out :
— by the same observer,
— with the same instrument,
— under the same conditions,
— without any change in location,
— without change in the method of measurement,
— and the measurements are carried out in short intervals of time.
It may be expressed quantitatively in terms of the dispersion of the results.
Reproducibility
Reproducibility is the consistency of the pattern of variation in measure-
ment, i.e., the closeness of the agreement between the results of measurements
of the same quantity, when individual measurements are carried out :
— by different observers,
— by different methods,
— using different instruments,
— under different conditions, locations, times etc.
It may also be expressed quantitatively in terms of the dispersion of
the results.
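Both quantities are expressed through the dispersion of results, as noted above. A sketch with hypothetical readings: repeatability from one observer's repeated readings, reproducibility from readings pooled across different observers:

```python
import statistics

# Hypothetical readings (mm) of the same dimension.
observer_a = [10.01, 10.02, 10.01, 10.00, 10.01]  # same observer, instrument, conditions
observer_b = [10.05, 10.04, 10.06, 10.05, 10.04]  # a different observer and instrument

# Repeatability: dispersion of one observer's readings under identical conditions.
repeatability = statistics.stdev(observer_a)

# Reproducibility: dispersion when measurements taken under
# different conditions are pooled together.
reproducibility = statistics.stdev(observer_a + observer_b)

print(f"repeatability ~ {repeatability:.4f} mm, reproducibility ~ {reproducibility:.4f} mm")
```

In this sketch the pooled dispersion is the larger of the two, since it includes observer-to-observer variation as well as each observer's own scatter.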
Errors in Measurement
It is never possible to measure the true value of a dimension; there is
always some error. The error in measurement is the difference between the
measured value and the true value of the measured dimension : Error in
measurement = Measured value − True value.
The error in measurement may be expressed or evaluated either as an
absolute error or as a relative error.
Absolute Error
True absolute error. It is the algebraic difference between the result of
measurement and the conventional true value of the quantity measured.
Apparent absolute error. If a series of measurements is made, then
the algebraic difference between one of the results of measurement and the
arithmetical mean is known as the apparent absolute error.
Relative Error
It is the quotient of the absolute error and the value of comparison
used for the calculation of that absolute error. This value of comparison may
be the true value, the conventional true value or the arithmetic mean of a
series of measurements.
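These definitions can be sketched numerically (values hypothetical):

```python
measured = 25.04    # measured value (mm)
true_value = 25.00  # conventional true value (mm)

absolute_error = measured - true_value        # error of measurement
relative_error = absolute_error / true_value  # absolute error / value of comparison

print(round(absolute_error, 4))         # absolute error in mm
print(round(relative_error * 100, 4))   # relative error as a percentage
```

The relative error is dimensionless, which makes it convenient for comparing the quality of measurements made on parts of very different sizes.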
The accuracy of measurement, and hence the error depends upon so
many factors, such as :
— calibration standard,
— workpiece,
— instrument,
— person,
— environment etc., as already described.
No matter how modern the measuring instrument, how skilful the
operator, or how accurate the measurement process, there will always
be some error. It is therefore attempted to minimize the error. To minimize
the error, usually a number of observations are made and their average is
taken as the value of that measurement.
If these observations are made under identical conditions, i.e., the same
observer, the same instrument and similar working conditions, except for
time, then it is called a ‘Single Sample Test’.
If, however, repeated measurements of a given property are made using
alternative test conditions, such as a different observer and/or a different
instrument, the procedure is called a ‘Multi-Sample Test’. The multi-sample
test avoids many controllable errors, e.g., personal error, instrument zero
error etc. The multi-sample test is costlier than the single sample test and
hence the latter is in wide use.
In practice, a good number of observations are made under the single
sample test and statistical techniques are applied to get results which
approximate those obtainable from a multi-sample test.
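The single-sample approach can be sketched as follows (readings hypothetical): the mean of n observations is reported as the measured value, and its standard error σ/√n shrinks as more observations are averaged, which is why averaging reduces the effect of random error:

```python
import math
import statistics

# Hypothetical single-sample test: one observer, one instrument,
# repeated readings (mm) differing only in time.
readings = [12.003, 12.001, 12.004, 12.000, 12.002, 12.003, 11.999, 12.002]

n = len(readings)
mean = statistics.mean(readings)
sigma = statistics.stdev(readings)
standard_error = sigma / math.sqrt(n)  # uncertainty of the mean shrinks as n grows

print(f"reported value = {mean:.4f} mm +/- {standard_error:.4f} mm")
```

Note that this only suppresses random scatter; a systematic offset shared by all the readings is untouched by averaging, which is one motivation for the multi-sample test.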
Types of Error
During measurement several types of error may arise. These are :
1. Static errors, which include :
(a) Reading errors
(b) Characteristic errors
(c) Environmental errors.
2. Instrument loading errors.
3. Dynamic errors.
Static errors
These errors result from the physical nature of the various components
of the measuring system. There are three basic sources of static errors.
The static error divided by the measurement range (difference between the
upper and lower limits of measurement) gives the measurement precision.
(a) Reading errors. Reading errors apply exclusively to the read-out
device. These do not have any direct relationship with other types of errors
within the measuring system.
Reading errors include :
— parallax error,
— interpolation error.
Attempts have been made to reduce or eliminate reading errors by
relatively simple techniques. For example, the use of a mirror behind the
read-out pointer or indicator virtually eliminates the occurrence of parallax
error.
Interpolation error. It is the reading error resulting from the inexact
evaluation of the position of the index with regard to the two adjacent
graduation marks between which the index is located. How accurately can
a scale be read ? This depends upon the thickness of the graduation marks,
the spacing of the scale divisions and the thickness of the pointer used to
give the reading. Interpolation error can be tackled by using a magnifier
over the scale in the vicinity of the pointer or by using a digital read-out
system.
Characteristic Errors
It is defined as the deviation of the output of the measuring system
from the theoretically predicted performance or from nominal performance
specifications.
Linearity errors, repeatability, hysteresis and resolution errors are
part of characteristic errors if the theoretical output is a straight line.
Calibration error is also included in characteristic error.
Loading Errors
Loading errors result from the change in the measurand itself when it is
being measured (i.e., after the measuring system or instrument is con-
nected for measurement). Instrument loading error is the difference be-
tween the value of the measurand before and after the measuring system
is connected/contacted for measurement. For example, a soft or delicate
component is subjected to deformation during measurement due to the
contact pressure of the instrument, causing a loading error. The effects of
instrument loading errors are unavoidable. Therefore, the measuring
system or instrument should be selected such that its sensing element will
minimize the instrument loading error in the particular measurement
involved.
Environmental Errors
These errors result from the effect of the surroundings, such as tempera-
ture, pressure, humidity etc., on the measuring system.
External influences like magnetic or electric fields, nuclear radiations,
vibrations or shocks etc. also lead to environmental errors.
Environmental errors of each component of the measuring system
make a separate contribution to the static error. They can be reduced by
controlling the atmosphere according to the specific requirements.
Dynamic Errors
Dynamic error is the error caused by time variations in the
measurand. It results from the inability of the system to respond faithfully
to a time varying measurement. It is caused by inertia, damping, friction
or other physical constraints in the sensing or readout or display system.
For statistical study and the study of accumulation of errors, these
errors can be broadly classified into two categories :
(a) Systematic or controllable errors, and
(b) Random errors.
Systematic Errors. Systematic errors are regularly repetitive in
nature. They are of constant and similar form. They result from improper
conditions or procedures that are consistent in action. Of the systematic
errors, the personal error varies from individual to individual, depending
on the personality of the observer. Other systematic errors can be
controlled in magnitude as well as in sense. If properly analysed, they can
be determined and reduced. Hence, these are also called controllable
errors.
Systematic errors include :
1. Calibration Errors. These are caused due to the variation in the
calibrated scale from its normal value. The actual length of standards such
as slip gauge and engraved scales will vary from the nominal value by a
small amount. This will cause an error of constant magnitude in measurement.
Sometimes instrument inertia and hysteresis effects do not allow the
instrument to transmit the measurement accurately. Drop in voltage along
the wires of an electric meter may introduce an error (called signal
transmission error) in measurement.
2. Ambient or Atmospheric Conditions (Environmental Errors). Variation
in atmospheric conditions (i.e., temperature, pressure and moisture
content) at the place of measurement from the internationally agreed
standard values (20°C temperature and 760 mm of Hg pressure) can give
rise to error in the measured size of the component. Instruments are
calibrated at these standard conditions, therefore error may creep into the
result if the atmospheric conditions at the place of measurement are
different. Of these, temperature is the most significant factor causing error
in measurement, due to expansion or contraction of the component being
measured or of the instrument used for measurement.
3. Stylus Pressure. Another common source of error is the pressure
with which the workpiece is pressed while measuring. Though the pressure
involved is generally small, it is sufficient to cause appreciable deformation
of both the stylus and the workpiece.
In the ideal case, the stylus should just touch the workpiece. Besides
the deformation effect, the stylus pressure can also deflect the workpiece.
Variations in the force applied by the anvils of a micrometer on the work
being measured result in differences in its readings. In this case the error
is caused by the distortion of both the micrometer frame and the workpiece.
4. Avoidable Errors. These errors may occur due to parallax, non-
alignment of workpiece centres, or improper location of measuring instruments,
such as placing a thermometer in sunlight while measuring
temperature. The error due to misalignment is caused when the centre line
of the workpiece is not normal to the centre line of the measuring instrument.
Random Errors. Random errors are non-consistent. They occur
randomly and are accidental in nature. Such errors are inherent in the
measuring system. It is difficult to eliminate such errors. Their specific
causes, magnitudes and sources cannot be determined from a knowledge
of the measuring system or the conditions of measurement.
The possible sources of such errors are :
(i) Small variations in the position of setting standard and workpiece.
(ii) Slight displacement of lever joints of measuring instruments.
(iii) Operator error in scale reading.
(iv) Fluctuations in the friction of measuring instrument etc.
Comparison between Systematic Errors and Random Errors.

Systematic Errors
1. These errors are repetitive in nature and are of constant and similar form.
2. These errors result from improper conditions or procedures that are consistent in action.
3. Except personal errors, all other systematic errors can be controlled in magnitude and sense.
4. If properly analysed these can be determined and reduced or eliminated.
5. These include calibration errors, variation in contact pressure, variation in atmospheric conditions, parallax errors, misalignment errors etc.

Random Errors
1. These are non-consistent. The sources giving rise to such errors are random.
2. Such errors are inherent in the measuring system or measuring instruments.
3. Specific causes, magnitudes and sense of these errors cannot be determined from a knowledge of the measuring system or conditions.
4. These errors cannot be eliminated, but the results obtained can be corrected.
5. These include errors caused due to variation in position of setting standard and work-piece, errors due to displacement of lever joints of instruments, errors resulting from backlash, friction etc.
Errors likely to Creep in Precision Measurements
The standard temperature for measurement is 20°C and all instru-
ments are calibrated at this temperature. If the measurements are carried
out at temperature other than the standard temperature, an error will be
introduced due to expansion or contraction of instrument or part to be
measured. But if the instrument and the workpiece to be measured are of
the same material, accuracy of measurement will not be affected even if the
standard temperature is not maintained, since both will expand or contract
by the same amount.
The difference between the temperature of instrument and the
workpiece will also introduce an error in the measurement, especially when
the material of the workpiece or instrument has higher coefficient of
expansion. To avoid such errors, instrument and the workpiece to be
measured should be allowed to attain the same temperature before use and
should be handled as little as possible. For example, after wringing together
several slip gauges to form a stock for checking a gauge, they should be left
with the gauge for an hour, preferably on the table of the comparator which
is to be used for performing the comparison.
For accurate results, high grade reference gauges should be used only
in rooms where the temperature is maintained very close to the standard
temperature.
Handling of gauges changes their temperature, so they should be allowed
to stabilize.
There are two situations to be considered in connection with the effect
of temperature, these are :
(a) Direct measurement. Let us consider a gauge block being measured
directly by interferometry. Here, the effect of using a non-standard
temperature produces a proportional error,
E = l α (t − tₛ)
where, l = nominal length
α = coefficient of expansion
(t − tₛ) = deviation from standard temperature
t = temperature during measurement
tₛ = standard temperature.
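The size of this error is easy to evaluate; a short sketch (the 100 mm steel gauge block, its expansion coefficient and the 22°C measuring temperature are assumed values for illustration):

```python
# Proportional error E = l * alpha * (t - ts) when a length is measured
# directly at a non-standard temperature. Values are illustrative only.

def direct_measurement_error(l_mm, alpha_per_degC, t_degC, ts_degC=20.0):
    """Error (mm) in a length l measured at temperature t instead of ts."""
    return l_mm * alpha_per_degC * (t_degC - ts_degC)

# 100 mm steel gauge block (alpha ~ 11.5e-6 per degC) measured at 22 degC
error = direct_measurement_error(100.0, 11.5e-6, 22.0)
print(f"Error = {error * 1000:.2f} um")  # → Error = 2.30 um
```

Even a 2°C departure from the standard temperature thus produces an error of over 2 µm on a 100 mm gauge, which is significant in precision work.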
(b) Comparative measurement. If we consider two gauges whose expansion
coefficients are α₁ and α₂ respectively, then the error due to non-standard
temperature will be,
Error, E = l (α₁ − α₂)(t − tₛ).
As the expansion coefficients are small numbers, the error will be very
small as long as both parts are at the same temperature. Thus, in compara-
tive measurement it is important that all components in the measuring
system are at the same temperature rather than necessarily at the stand-
ard temperature.
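This can be illustrated numerically; a sketch in which the two expansion coefficients (steel against tungsten carbide) and the 23°C temperature are assumed values:

```python
# Comparative measurement error E = l * (a1 - a2) * (t - ts).
# Because (a1 - a2) is small, the error stays small whenever both
# parts are at the same temperature t, even if t differs from ts.

def comparative_error(l_mm, alpha1, alpha2, t_degC, ts_degC=20.0):
    """Error (mm) when comparing two gauges of length l at temperature t."""
    return l_mm * (alpha1 - alpha2) * (t_degC - ts_degC)

# 100 mm steel gauge (11.5e-6) compared with a carbide standard (6.0e-6)
e = comparative_error(100.0, 11.5e-6, 6.0e-6, 23.0)
print(f"Error = {e * 1000:.2f} um")  # → Error = 1.65 um
```

When both gauges are of the same material (α₁ = α₂) the error vanishes entirely, which is the point made in the text above.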
Other ambient conditions may affect the result of measurement. For
example, if a gauge block is being measured by interferometry, then relative
humidity, atmospheric pressure and CO₂ content of the air affect the
refractive index of the atmosphere. These conditions should all be recorded
during the measurement and the necessary corrections made.
The internationally accepted temperature for measurement is 20°C and all
instruments are calibrated at this temperature. To maintain such a
controlled temperature, the laboratory should be air-conditioned.
Effect of supports
When long measuring bars or straight edges are supported as beams,
they are deflected or deformed. This elastic deformation occurs because a
long bar supported at two ends sags under its own weight. This problem
was considered by Sir G.B. Airy, who showed that the positions of the
supports can be arranged to give minimum error. The amount of deflection
depends upon the positions of the supports.
Fig. 1.3. Effect of support
Two conditions are considered, as shown in Fig. 1.4.
(i) A bar of length L supported equidistant from the centre. In this case
the slope at the ends of the bar is zero. For this condition the distance
between the supports should be 0.577 times the length of the bar,
S = 0.577 L or S/L = 0.577
(a) Line standards and end bars (slope at ends zero)
(b) Straight edges (deflection at ends equal to deflection at centre)
Fig. 1.4. Support positions for different conditions of measurement
(ii) A straight edge of length L supported equidistant from the centre.
Straight edges are used to check the straightness and flatness of parts.
They are made of H section.
In this case the deflection at the ends is made equal to the deflection
at the centre. For this the distance between the supports should be 0.554
times the length, i.e.,
S = 0.554 L or S/L = 0.554.
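Both spacings follow from simple multiplication; a short sketch, taking the commonly quoted factors 1/√3 ≈ 0.577 for zero end slope (the Airy points) and 0.554 for equal end and centre deflection, with a 1 m bar assumed for illustration:

```python
import math

# Support spacing S for a uniform bar of length L:
#  - zero slope at the ends (line standards, end bars): S = L / sqrt(3)
#  - equal deflection at ends and centre (straight edges): S = 0.554 L

def airy_spacing(L):
    """Support spacing giving zero slope at the bar ends."""
    return L / math.sqrt(3)

def straightedge_spacing(L):
    """Support spacing giving equal deflection at ends and centre."""
    return 0.554 * L

L = 1000.0  # mm, assumed bar length
print(f"Airy points:   S = {airy_spacing(L):.1f} mm")        # → 577.4 mm
print(f"Straight edge: S = {straightedge_spacing(L):.1f} mm")  # → 554.0 mm
```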
Effect of alignment
Abbe’s alignment principle. It states that “the axis or line of measure-
ment should coincide with the axis of measuring instrument or line of the
measuring scale.”
If while measuring the length of a workpiece the measuring scale is
inclined to the true line of the dimension being measured there will be an
error in the measurement.
The length recorded will be more than the true length. This error is
called “cosine error”. In many cases the angle θ is very small and the error
will be negligible.
The cosine error may also occur while using a dial gauge, if its axis is
not along the direction of measurement of the work. Also, when the
instrument is fitted with a ball-end stylus, the arm should be so set
that the direction of movement of the work is tangential to the arc along
which the ball moves, otherwise co-sine error will be introduced.
Referring to Fig. 1.5,
L = measured length
L cos θ = true length
Error = L (1 − cos θ)
Fig. 1.5 (a) and (b). Effect of misalignment
The combined cosine and sine error
will occur if the micrometer axis is not
truly perpendicular to the axis of the
workpiece (Refer Fig. 1.6). The same error occurs while measuring the
length of an end gauge in a horizontal comparator if the gauge is not
supported so that its axis is parallel to the axis of the measuring anvils,
or if its ends, though parallel to each other, are not square with the axis.
Fig. 1.6. Combined sine and cosine error
Referring to Fig. 1.6,
if, D = true diameter
L = apparent length
d = micrometer anvil diameter
then D = L cos θ − d sin θ
and Error, E = L − D = L − (L cos θ − d sin θ) = L (1 − cos θ) + d sin θ.
The errors of the above nature are avoided by using gauges with spherical
ends.
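The combined error relation can be evaluated numerically; a sketch in which the 25 mm measured length, 2 mm anvil diameter and 0.5° misalignment are assumed values for illustration:

```python
import math

# Combined cosine and sine error when the micrometer axis makes an
# angle theta with the workpiece axis: E = L(1 - cos t) + d sin t.

def combined_error(L_mm, anvil_d_mm, theta_deg):
    """Measurement error (mm) for apparent length L and anvil diameter d."""
    t = math.radians(theta_deg)
    return L_mm * (1 - math.cos(t)) + anvil_d_mm * math.sin(t)

e = combined_error(25.0, 2.0, 0.5)
print(f"Error = {e * 1000:.2f} um")
```

Note that for small angles the sine term d sin θ dominates the cosine term L(1 − cos θ), which is why the anvil geometry matters and why gauges with spherical ends reduce this error.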
Contact Pressure
The variations in the contact pressure between the anvils of the
instrument and the work being measured produce considerable difference
in reading. Though the pressure involved is generally small, it is
sufficient to cause appreciable deformation of both the anvil (or
stylus) and the workpiece. The deformation of the workpiece and the anvils
of instrument depends upon the contact pressure and the shape of the
contact surfaces. When there is a surface contact between the instrument
anvils and workpiece, there is very little deformation, but when there is a
point contact the deformation is appreciable.
Fig. 1.7. Effect of contact pressure on measurement (showing workpiece,
stylus and combined deformation)
Fig. 1.7 shows the error caused by combined deformation of the stylus
and the workpiece.
To minimize this error, the development of correct feel is one of the skills
to be acquired by the inspector. To avoid the effect of contact pressure, the
micrometer is fitted with a ratchet mechanism to apply the correct pressure
during measurement. The ratchet slips when the applied pressure exceeds
the minimum required operating pressure.
Parallax Error
A very common error that may occur in an instrument while taking
the readings is parallax error.
Parallax error occurs when :
(i) The line of vision is not directly in line with the measuring scale,
or
(ii) The scale and the pointer are separated from each other (not in
the same plane).
Refer Fig. 1.8.
Let d = separation of scale and pointer
D = distance between the pointer and the eye of the observer
θ = angle which the line of sight makes with the normal to the scale.
Fig. 1.8. Parallax error
Referring to Fig. 1.8,
PA/NE = d/(d + D)
and error, PA = [d/(d + D)] × NE = [d/(d + D)] × (d + D) tan θ
∴ error = d tan θ.
Now generally θ is small, therefore tan θ ≈ θ and error e = dθ.
For least error, d should be the minimum possible; the value of θ can be
reduced to zero by placing a mirror behind the pointer, which ensures
normal reading of the scale.
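The relation e = d tan θ is easily evaluated; a sketch with assumed values (a pointer 1 mm above the scale, viewed 5° off the normal):

```python
import math

# Parallax error e = d * tan(theta); for small theta, e ~ d * theta.

def parallax_error(d_mm, theta_deg):
    """Reading error (mm) for scale-pointer gap d and viewing angle theta."""
    return d_mm * math.tan(math.radians(theta_deg))

# Pointer 1 mm above the scale, line of sight 5 degrees off the normal
e = parallax_error(1.0, 5.0)
print(f"Parallax error = {e * 1000:.1f} um")  # → Parallax error = 87.5 um
```

This shows why even a small scale-to-pointer gap matters: at 5° off-normal viewing, a 1 mm gap already costs nearly 0.09 mm of reading error.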
Dust. The dust present in the atmosphere may change the reading by
a fraction of a micron. Where accuracy of the order of a micron is desired,
such as while using slip gauges, the effect of dust can be prevented as
follows :
(i) Incorporating electrostatic precipitators in the laboratory or in the
air ducts in addition to air filters.
(ii) The workpieces and masters should be cleaned by clean chamois
or by a soft brush.
(iii) Gauges should never be touched with moist fingers.
(iv) The contact surfaces should be sprayed with a suitable filtered
clean solvent.
Errors due to vibrations
Vibrations may affect the accuracy of measurement. The instrument
anvil will not give consistent and repeatable readings if it is subjected to
vibration.
For eliminating or reducing the effect of vibration on measurement, the
following precautions should be taken :
1. The laboratory should be located away from sources of vibration.
2. Cork, felt or rubber pads should be used under the gauges.
3. Putting a gauge on a surface plate resting in turn on a heavy plate
also reduces the effect of vibrations.
4. Precision measurement should be carried out away from the shop
floor.
Errors due to location
The part to be measured is located on a table or a surface plate which
forms the datum for comparison with the standard. The reading taken by
the comparator is thus the indication of the displacement of the upper
surface of the measured part from the datum. If the datum surface is not
flat, or if foreign matter such as dirt, chips etc. is present between the
datum and the workpiece surface, then error will be introduced in the
reading taken.
Fig. 1.9. Surface displacement (dirt etc. between datum and workpiece)
Error due to poor contact. Fig. 1.10 shows how the poor contact
between the working gauge or instrument and the workpiece causes an
error. Although everything feels all right, the error is bound to occur.
To avoid this type of error, a gauge with a wide area of contact should not
be used while measuring an irregular or curved surface, and correct
pressure should be applied while making the contact.
Fig. 1.10. Error due to poor contact.
Error due to wear in gauges. The measuring surfaces of instruments
are subjected to wear due to repeated use. The anvils of the micrometer,
for example, wear with repeated use. Internal instrument errors, such as
wear in the threads of the micrometer spindle, can also lead to error in
measurement.
Wear can be minimized by keeping gauges, masters and workpieces
clean and away from dust. Gauge anvils and such other parts of the
instrument which are subjected to wear should be properly hardened.
Chrome plated parts are more resistant to wear.
The lack of parallelism due to wear of anvils can be checked by optical
flats ; and the wear on spherical contacts of the instrument by means of
a microscope.
The effect of averaging results. The statistical parameters of arithmetic
mean and standard deviation may be used to assess random errors by
taking repeated measurements. Thus, if we repeat the complete measurement
a great many times, we could obtain a frequency distribution by
plotting a tally chart of these values. Further,
Arithmetic mean, x̄ = (x₁ + x₂ + x₃ + ... + xₙ)/n
and standard deviation, σ = √[Σ(xᵢ − x̄)²/n]
It is known that 99.73% of the observations will lie within ± 3σ of the
mean of the observations, so that we can say that, for all practical purposes,
the estimated accuracy of determination is equal to ± 3σ.
If now we take the observations and divide them into random sub-groups
of n, and for each sub-group calculate its mean size x̄, a little thought
will show that the distribution of these means will be more closely grouped
about the true mean size than that of the individual sizes. It can be shown
that :
σₘ = σ/√n
where, σₘ = standard deviation of the means x̄
σ = standard deviation of the individual observations
n = sub-group size.
It follows that the accuracy of determination of the mean size of a
sample of n observations is ± 3σₘ = ± 3σ/√n.
If we apply this to the mean size of n observations, we see that
99.73% confidence limits = ± 3σ/√n
68.27% confidence limits = ± σ/√n
95% confidence limits = ± 1.96σ/√n
By estimating the accuracy of determination (the true value of the
standard deviation), we can calculate the approximate degree of confidence
which we can assign to that value.
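These relations can be checked with the standard library; a sketch in which the eight repeated readings are assumed values for illustration:

```python
import math
import statistics

# Accuracy of determination of the mean of n observations: +/- 3*sigma/sqrt(n),
# where sigma is the standard deviation of the individual observations.
readings = [10.02, 10.00, 10.03, 9.99, 10.01, 10.02, 10.00, 10.01]  # mm (assumed)

n = len(readings)
mean = statistics.mean(readings)
sigma = statistics.pstdev(readings)        # population standard deviation
limits_3sigma = 3 * sigma / math.sqrt(n)   # ~99.73% confidence
limits_95 = 1.96 * sigma / math.sqrt(n)    # 95% confidence

print(f"mean = {mean:.4f} mm, sigma = {sigma:.4f} mm")
print(f"99.73% limits: +/- {limits_3sigma:.4f} mm")
print(f"95%    limits: +/- {limits_95:.4f} mm")
```

As expected, the confidence band on the mean shrinks as 1/√n, so quadrupling the number of readings halves the uncertainty of the determined mean.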
Method of Least Squares
The least square principle has wide applications in metrology when
assessing the deviation of errors relative to some particular datum.
The least square principle states that the most probable value of
observed quantities is that which renders the sum of the squares of the
residual errors a minimum.
Method of least squares for a series of observed values of two dependent
variables :
This method is applicable for two dependent measured variables, for
example in the measurement of a straight edge, when the variations (y)
from a truly straight line are determined at a number of positions (x) along
the straight edge.
Let the observed values be given by (xᵢ, yᵢ) where i varies from 1 to n.
The equation of the straight line is given by y = ax + b, where a is the
slope or gradient of the line and b is the intercept on the y axis.
The slope a may be expressed as the average increase in y for a given
increment of x, and it can be expressed as
a = Σ(xᵢ − x̄)(yᵢ − ȳ) / Σ(xᵢ − x̄)²
In practice this expression for a necessitates a laborious calculation.
A general expression which simplifies the calculation, by enabling the
original data to be used without first calculating the mean, is :
a = [Σxᵢyᵢ − (Σxᵢ)(Σyᵢ)/n] / [Σxᵢ² − (Σxᵢ)²/n]    ...(1)
After calculating the best value for a, the best value for b can be found
by substituting the values of x̄ and ȳ in the expression,
ȳ = a x̄ + b    ...(2)
where ȳ = Σyᵢ/n and x̄ = Σxᵢ/n.
Consider the experimental values of x and y as below :

xᵢ : 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13 ; Σxᵢ = 91
yᵢ : 17, 18, 24, 31, 33, 37, 33, 36, 41, 44, 57, 57, 54 ; Σyᵢ = 482
xᵢ² : 1, 4, 9, 16, 25, 36, 49, 64, 81, 100, 121, 144, 169 ; Σxᵢ² = 819
xᵢyᵢ : 17, 36, 72, 124, 165, 222, 231, 288, 369, 440, 627, 684, 702 ; Σxᵢyᵢ = 3977

From (1), substituting the values we get,
a = [3977 − (91 × 482)/13] / [819 − (91 × 91)/13]
  = (3977 − 3374) / (819 − 637)
  = 603/182
Thus, a = 3.31.
Substituting for x̄ = 91/13 = 7 and ȳ = 482/13 = 37.08 in expression (2),
37.08 = (3.31 × 7) + b
Therefore, b = 13.88
The law of this particular straight line is then,
y = 3.31 x + 13.88
If now we substitute the values of x in the above expression, we obtain
the theoretical values of y, which we may denote Y.
The errors in the observed values are then (yᵢ − Y).
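The same fit can be reproduced programmatically; a sketch using equation (1) directly, with the sums computed from the tabulated x and y values:

```python
# Least-squares slope and intercept for y = a*x + b via equation (1):
# a = [Sxy - Sx*Sy/n] / [Sxx - Sx^2/n],  b = ybar - a*xbar

def fit_line(xs, ys):
    """Return (slope, intercept) of the least-squares line through the data."""
    n = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    a = (sxy - sx * sy / n) / (sxx - sx * sx / n)
    b = sy / n - a * sx / n
    return a, b

xs = list(range(1, 14))
ys = [17, 18, 24, 31, 33, 37, 33, 36, 41, 44, 57, 57, 54]
a, b = fit_line(xs, ys)
print(f"y = {a:.2f} x + {b:.2f}")  # → y = 3.31 x + 13.88
```

The residual errors (yᵢ − Y) then follow by evaluating Y = a·xᵢ + b at each observed xᵢ.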
(b) Series of observed values for three dependent variables : Such a
problem arises in the measurement of flatness of a surface plate, when the
variations (z) from a truly flat plane are determined at a number of
positions (x, y) on the surface plate.
Let the observed values be given by (xᵢ, yᵢ, zᵢ) where i varies from 1 to
n. The equation of the flat plane is given by :
z = ax + by + c
In this case it can be shown that,
a = [Σyₘ² Σxₘzₘ − Σxₘyₘ Σyₘzₘ] / [Σxₘ² Σyₘ² − (Σxₘyₘ)²]
b = [Σxₘ² Σyₘzₘ − Σxₘyₘ Σxₘzₘ] / [Σxₘ² Σyₘ² − (Σxₘyₘ)²]
where xₘ = reading relative to the mean, i.e., (xᵢ − x̄)
yₘ = (yᵢ − ȳ)
and zₘ = (zᵢ − z̄), i varying from 1 to n,
if the origin of the mean plane is taken through the mean point (x̄, ȳ, z̄).
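A sketch of this mean-plane calculation, with the survey positions and deviations assumed for illustration (points generated to lie exactly on a known plane, so the fit should recover its coefficients):

```python
def fit_plane(xs, ys, zs):
    """Least-squares plane z = a*x + b*y + c through points (xi, yi, zi)."""
    n = len(xs)
    xbar, ybar, zbar = sum(xs) / n, sum(ys) / n, sum(zs) / n
    # Centre the data about the mean point, as in the text
    xm = [x - xbar for x in xs]
    ym = [y - ybar for y in ys]
    zm = [z - zbar for z in zs]
    sxx = sum(v * v for v in xm)
    syy = sum(v * v for v in ym)
    sxy = sum(p * q for p, q in zip(xm, ym))
    sxz = sum(p * q for p, q in zip(xm, zm))
    syz = sum(p * q for p, q in zip(ym, zm))
    den = sxx * syy - sxy * sxy
    a = (syy * sxz - sxy * syz) / den
    b = (sxx * syz - sxy * sxz) / den
    c = zbar - a * xbar - b * ybar
    return a, b, c

# Points lying exactly on z = 0.002 x - 0.001 y + 0.05 (assumed plane)
pts = [(0, 0), (1, 0), (0, 1), (1, 1), (2, 1), (1, 2)]
xs = [p[0] for p in pts]
ys = [p[1] for p in pts]
zs = [0.002 * x - 0.001 * y + 0.05 for x, y in pts]
a, b, c = fit_plane(xs, ys, zs)
print(round(a, 6), round(b, 6), round(c, 6))  # → 0.002 -0.001 0.05
```

The flatness deviations are then the residuals (zᵢ − a·xᵢ − b·yᵢ − c) measured from this mean plane.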
SOLVED PROBLEMS
Problem 1. A dial indicator has a scale from zero to 1 mm and 100
divisions on the scale. During a calibration test the following results were
obtained.

Calibration length (mm) : 0.00, 0.10, 0.20, 0.30, 0.40, 0.50, 0.60, 0.70, 0.80, 0.90, 1.00
Scale value (mm)        : 0.00, 0.09, 0.20, 0.29, 0.41, 0.51, 0.61, 0.69, 0.79, 0.91, 1.00

Determine :
(a) the sensitivity
(b) maximum error
(i) as a percentage of scale value
(ii) as a percentage of full scale value
(c) whether the dial indicator conforms to the maker's specification of
accuracy of within ± 1.0% of full scale deflection.
Sol. (a) Sensitivity = Range / No. of divisions = 1 mm/100 = 0.01 mm.
(b) The maximum error as a percentage of scale value and of full scale
value can be calculated by tabulating the results as follows, using the
relations :
(i) Error as % of scale value
= [(Calibration length − Scale value)/Calibration length] × 100
(ii) Error as % of full scale value
= [(Calibration length − Scale value)/Maximum scale value] × 100

Calibration length L₁ (mm) | Scale value L₂ (mm) | Error as % of scale value | Error as % of full scale value
0.00 | 0.00 | 0 | 0
0.10 | 0.09 | +10.0 | +1.0
0.20 | 0.20 | 0 | 0
0.30 | 0.29 | +3.33 | +1.0
0.40 | 0.41 | −2.5 | −1.0
0.50 | 0.51 | −2.0 | −1.0
0.60 | 0.61 | −1.67 | −1.0
0.70 | 0.69 | +1.43 | +1.0
0.80 | 0.79 | +1.25 | +1.0
0.90 | 0.91 | −1.11 | −1.0
1.00 | 1.00 | 0 | 0

The maximum error is 10.0% of scale value (at the 0.10 mm point) and
1.0% of full scale value.
(c) Since the maximum error is within ± 1.0% of full scale deflection,
the dial indicator conforms to the maker's specification.
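The tabulation in the solution of Problem 1 can be generated as follows; a sketch reproducing its arithmetic from the given calibration data:

```python
# Dial indicator calibration data from Problem 1
cal = [0.00, 0.10, 0.20, 0.30, 0.40, 0.50, 0.60, 0.70, 0.80, 0.90, 1.00]
scale = [0.00, 0.09, 0.20, 0.29, 0.41, 0.51, 0.61, 0.69, 0.79, 0.91, 1.00]

sensitivity = 1.0 / 100          # range / number of divisions = 0.01 mm
full_scale = max(cal)            # 1.00 mm

rows = []
for L1, L2 in zip(cal, scale):
    err = L1 - L2
    pct_scale = 100 * err / L1 if L1 else 0.0   # % of scale value
    pct_fs = 100 * err / full_scale             # % of full scale value
    rows.append((L1, L2, pct_scale, pct_fs))

max_pct_scale = max(abs(r[2]) for r in rows)    # largest at the 0.10 mm point
max_pct_fs = max(abs(r[3]) for r in rows)
print(f"sensitivity = {sensitivity} mm")
print(f"max error: {max_pct_scale:.1f}% of scale value, "
      f"{max_pct_fs:.1f}% of full scale")
```

The percentage-of-scale-value error is largest at the low end of the range, where the fixed 0.01 mm deviation is divided by a small reading.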
EXERCISE
1. Define the term ‘Metrology’ as applied to engineering industry. State
its significance in modern industries.
2. Define ‘Inspection’, explain its need in industries.
3. Lord Kelvin said that, “When you can measure what you are speaking
about and express it in numbers, you know something about it, but
when you cannot measure it, and express it in numbers, your
knowledge is meagre and unsatisfactory.” Justify the statement.
4. State and describe the important elements of measurement.
5. Name the various methods of measurement and explain any three of
them with suitable examples.
6. State and explain the five basic elements of a measuring system
(factors affecting accuracy of measurement).
7. Differentiate between “Precision” and “Accuracy” with suitable
examples.
8. Explain the following terms :
(i) Calibration (ii) Readability
(iii) Sensitivity (iv) Magnification.
9. Differentiate between :
(i) Absolute error and Relative error.
(ii) Repeatability and Reproducibility of measurement.
10. Differentiate between (i) Systematic error and (ii) Random error.
11. Differentiate between : Direct and Indirect methods of measurement.
12. Describe : (i) Loading errors, (ii) Environmental errors, (iii) Dynamic
errors.
13. Define “Systematic Errors” and describe the causes of these errors.
14. Describe the effect of temperature variation on measurement.
What is the standard temperature for measurement ?
15. Explain the effect of the following on precision measurement :
(i) support (ii) alignment (iii) contact pressure.
16. Describe the effect of (i) parallax error (ii) poor contact (iii) temperature
on precision measurement.
17. Explain the term “Cosine error” with a suitable sketch.
18. Explain the term “Parallax error” with a suitable sketch.
19. Describe the following types of errors, and state how they can be taken
care of :
(i) Environmental error
(ii) Parallax error
(iii) Errors due to vibrations.
OBJECTIVE QUESTIONS
(A) Fill in the blanks with suitable words :
1
2.
3.
1. Metrology is defined as ________________ .
2. Inspection is an act of ________________ .
3. The three important elements of measurement are :
(i) measurand, (ii) ________ and (iii) ________ .
4. The error in measurement is the difference between ________ and
________ .
5. For every kind of quantity measured there must be a ________ to
measure it.
6. ________ designates the degree of agreement of the measured size
with its true value as expressed in standard units.
7. ________ implies the ease with which the observations can be made
accurately.
8. The capability of the measuring instrument to detect and to faithfully
indicate even small variations of the measured dimension from the
nominal size of the part is called ________ .
9. ________ expresses the degree of repeatability of the measuring
process.
10. The process of comparing one measuring device against a higher order
standard of greater accuracy is known as ________ .
11. The internationally accepted temperature for measurement is taken
as ________ .
12. ________ is a comparison process whereby an unknown quantity is
compared with a known quantity.
13. ________ refers to the minimum change in value that the instrument
can reliably indicate.
14. ________ is used as an index of precision.
15. ________ principle states that “the axis or line of measurement
should coincide with the axis of the measuring instrument.”
(B) Match the following two parts :

Part I
(i) The degree of agreement between measured value and true value.
(ii) Degree of repeatability of the measuring process.
(iii) Index of precision.
(iv) The ease with which the observations are made accurately.

Part II
(a) Standard deviation.
(b) Accuracy.
(c) Sensitivity.
(d) Precision.