
SVKM’S D. J. Sanghvi College of Engineering


Department of Mechanical Engineering

Mechanical Measurements and Metrology


(Course Code: DJMEC 403)

Learning Objectives
 To understand the concept of metrology and standards of measurement.
 To equip with knowledge of limits, fits, tolerances and gauging.
 To acquire knowledge of linear and Angular measurements, Screw thread and gear
measurement & comparators.
 To understand the knowledge of measurement systems and methods with emphasis on
different Transducers, intermediate modifying and terminating devices.
 To understand the measurement of Force, Torque, Pressure, Temperature and Strain.
Outcomes
 Understand the objectives of metrology, methods of measurement, standards of
measurement & various measurement parameters.
 Explain tolerance, limits of size, fits, geometric and position tolerances, gauges and
their design and also understand the working principle of different types of
Comparators
 Describe measurement of major & minor diameter, pitch, angle and effective diameter
of screw threads & understand advanced metrology concepts.
 Explain measurement systems, transducers, intermediate modifying devices and
terminating devices
 Describe functioning of force, torque, pressure, strain and temperature measuring
devices.
Objectives of metrology and measurements
 To ascertain that newly developed components are comprehensively evaluated and
designed within the process, and that facilities possessing measuring capabilities are
available in the plant.
 To ensure uniformity of measurements.
 To carry out process capability studies to achieve better component tolerances
 To assess the adequacy of measuring instrument capabilities to carry out their
respective measurements
 To ensure cost-effective inspection and optimal use of available facilities
 To adopt quality control techniques to minimize scrap rate and rework
 To establish inspection procedures from the design stage itself, so that the measuring
methods are standardized
 To calibrate measuring instruments regularly in order to maintain accuracy in
measurement
 To resolve the measurement problems that might arise in the shop floor
 To design gauges and special fixtures required to carry out inspection
 To investigate and eliminate different sources of measuring errors

Subject: Mechanical Measurements and Metrology MECH /Sem-IV /S.E.


SVKM’S D. J. Sanghvi College of Engineering
Department of Mechanical Engineering

Standards and their roles


 A standard is defined as the fundamental value of any known physical quantity, as
established by national and international organizations of authority, which can be
reproduced.
 Standards play a vital role for manufacturers across the world in achieving
consistency, accuracy, precision, and repeatability in measurements and in supporting
the system.
National physical laboratory
 The National Physical Laboratory (NPL) was established in the UK in 1900.
 It is a public institution for standardizing and verifying instruments, testing materials,
and determining physical constants.
 NPL India (NPLI) was established in 1947 in New Delhi under the Council of
Scientific and Industrial Research (CSIR).
NPL India (NPLI) Roles
• To reinforce and carry out research and development activities in the areas of physical
sciences and key physics-based technologies.
• Maintaining national standards and ensuring that they conform to international standards.
• To support industries (National and private) in their research and development activities, by
carrying out calibration and testing, precision measurements, and development of processes
and devices.
Standards of Measurement
A standard is an exact quantity that people agree to use for comparison.
Types of Standards
• Primary Standard
• Secondary Standard
• Tertiary Standard
• Working Standard
Primary Standard
• They are material standards preserved under the most careful conditions.
• These are not used directly for measurements but are used once every 10 or 20 years for
calibrating secondary standards.
• Ex: International Prototype meter, Imperial Standard yard.
Example: International Prototype Meter
The bars were to be made of a special alloy, 90% platinum and 10% iridium, which is
significantly harder than pure platinum, and have a special X-shaped cross section (a

"Tresca section", named after French engineer Henri Tresca) to minimise the effects of
torsional strain during length comparisons.
International Prototype Weight
Secondary Standard
• The value of a secondary standard is less accurate than that of a primary standard. It is
obtained by comparison with a primary standard.
• These are close copies of primary standards with respect to design, material, and length.
Tertiary Standard
• Maintained in National Physical Laboratories (NPL).
• The primary or secondary standards exist as the ultimate controls for reference at rare
intervals.
• They are made as close copies of secondary standards & are kept as reference for
comparison with working standards.
Working Standards
• These standards are similar in design to primary, secondary, and tertiary standards.
• However, being lower in cost and made of lower-grade materials, they are used for general
applications in metrology laboratories.

Q1. With a block diagram, explain the three stages of a generalized measurement system
giving suitable examples.

A measuring instrument essentially comprises three basic physical elements. Each of these
elements is recognized by a functional element. Each physical element in a measuring
instrument consists of a component or a group of components that perform certain functions in
the measurement process. Hence, the measurement system is described in a more generalized
method. A generalized measurement system essentially consists of three stages. Each of these

stages performs certain steps so that the value of the physical variable to be measured is
displayed as an output for our reference.
The three stages of a measurement system are as follows:
1. Primary detector–transducer stage
2. Intermediate modifying stage
3. Output or terminating stage
The primary detector–transducer stage senses the quantity to be measured and converts it into
an analogous signal. It is necessary to condition or modify the signal obtained from the primary
detector–transducer stage so that it is suitable for instrumentation purposes. The signal is
passed on to the intermediate modifying stage, wherein it is amplified so that it can be
used in the terminating stage for display purposes. These three stages of a measurement system
act as a bridge between the input given to the measuring system and its output.
Primary detector transducer stage
The main function of the primary detector–transducer stage is to sense the input signal and
transform it into its analogous signal, which can be easily measured. The input signal is a
physical quantity such as pressure, temperature, velocity, heat, or intensity of light. The device
used for detecting the input signal is known as a transducer or sensor. The transducer converts
the sensed input signal into a detectable signal, which may be electrical, mechanical, optical,
thermal, etc. The generated signal is further modified in the second stage. The transducer
should have the ability to detect only the input quantity to be measured and exclude all other
signals.
Intermediate modifying stage
In the intermediate modifying stage of a measurement system, the transduced signal is modified
and amplified appropriately with the help of conditioning and processing devices before
passing it on to the output stage for display. Signal conditioning (by noise reduction and
filtering) is performed to enhance the condition of the signal obtained in the first stage, in order
to increase the signal-to-noise ratio. If required, the obtained signal is further processed by
means of integration, differentiation, addition, subtraction, digitization, modulation, etc.

Output or terminating stage


The output or terminating stage of a measurement system presents the value of the output that
is analogous to the input value. The output value is provided by either indicating or recording

for subsequent evaluations by human beings or a controller, or a combination of both. The
indication may be provided by a scale and pointer, digital display, or cathode ray oscilloscope.
Recording may be in the form of an ink trace on a paper chart or a computer printout. Other
methods of recording include punched paper tapes, magnetic tapes, or video tapes. Else, a
camera could be used to photograph a cathode ray oscilloscope trace.
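The three stages can be pictured as a simple signal chain. The sketch below is a minimal illustration only; the stage names follow the text, but the transducer sensitivity, amplifier gain, and input value are assumed example numbers, not data from these notes.

```python
# Minimal sketch of a generalized three-stage measurement system.
# All numeric values are assumed for illustration.

def detector_transducer(temperature_c):
    """Stage 1: sense the measurand and convert it to an analogous signal (mV)."""
    sensitivity_mv_per_c = 0.05          # assumed transducer sensitivity
    return sensitivity_mv_per_c * temperature_c

def intermediate_modifying(signal_mv):
    """Stage 2: condition and amplify the transduced signal (mV -> V)."""
    gain_v_per_mv = 2.0                  # assumed amplifier gain
    return gain_v_per_mv * signal_mv

def terminating(signal_v):
    """Stage 3: present the output, here as a simple text display."""
    return f"Indicated output: {signal_v:.2f} V"

if __name__ == "__main__":
    measurand = 40.0                     # input physical quantity (deg C)
    print(terminating(intermediate_modifying(detector_transducer(measurand))))
    # -> Indicated output: 4.00 V
```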
Q2. With a block diagram, explain a generalized measurement system giving suitable
examples.
Generalized measurement system
The generalized measurement system is a set of elements in which the measurement process
is carried on by the system. There are many measuring instruments, but they exist for
measuring some values of some variables. In some measuring instruments, the measuring
process is done easily, and finally, it gives the output for the input as reading or signal based
on the magnitude of the input variable.
For example, the measuring process in some simple instruments is easy and direct (e.g., for
measuring length or mass). Many instruments, such as an indicating thermometer, involve a
more complex measuring process. In such complex measuring instruments, there is a separate
element for each function.
Block diagram of the generalized measurement system
The blocks in the block diagram represent each element in the generalized measurement
system.

Elements of generalized measurement system


The elements of the measurement system are listed below,
Input variables
Primary sensing element
Variable conversion element
Variable manipulation element

Data transmission element
Data processing element
Data presentation element
Observer
Input variables
The input variable is the unknown quantity to be measured (the measurand). Without an input
variable, the system cannot produce a final result.
The input should be a definite amount of the measured quantity.
Primary sensing element
The first element of the measurement system is the primary sensing element. The main
function of the primary sensing element is to sense the input variable and give an output
according to the measurand. This output becomes the input of the next element, so it is
usually converted into an analogous electrical signal. This is achieved by using transducers.
Variable conversion element
It receives the output of the primary sensing element as its input. As the name indicates, the
variable is converted from one form to another. The conversion is done without altering any
information contained in the input.
Whether this element is required depends on the measuring instrument; some instruments may
not need it because the signal is already converted into the required form by the previous
element (the primary sensing element).
Variable manipulation element
This element changes the magnitude of the signal.
The signal is manipulated, usually by amplification, to the required magnification so that the
desired output can be obtained from the input variable.
The manipulation process does not depend upon the variable conversion element, so in some
cases the variables can be manipulated directly without any conversion element.
Data transmission element
Transmission of data or information from one element to another takes place in this element;
data transmission is its main function.
Data transmission elements such as data cables, transmitters and receivers, transmission
shafts, etc. are used to transmit the data from one element to another.
Data processing element
The data is modified and processed before the final result is presented. The data processing
element modifies the data for reasons such as:
Modification for final output form,
Modification for some final calculations,
Modification for errors in the instruments such as positive error, negative error, zero error,
temperature error, etc.
Data presentation element
Finally, the data is presented to the observer via the data presentation element. Presentation
elements include monitors, recorders, needle pointers, LCD and LED displays, alarms, and
indicators such as analog and digital indicators. Without a data presentation element, the data
cannot be delivered to the observer.
Observer
The measurement data is finally delivered to the observer via the data presentation element,
for further interpretation and calculation. The observer may record the data for future
reference; the recorded data are stored either as a hard copy or in digital form.
Example 01: Bourdon tube pressure gauge

An example of a simple measurement system


In this case, the Bourdon tube (BT) acts as both the primary sensing element and the variable
conversion element. It senses the input quantity (pressure); under the applied pressure, the
closed end of the BT is displaced, and thus the pressure is converted into a small displacement.
The closed end of the BT is connected to a gearing arrangement through a mechanical linkage.
The gearing arrangement amplifies the small displacement, and consequently the pointer rotates
through a large angle. Thus the mechanical linkage acts as a data transmission element, while
the gearing arrangement acts as a data (variable) manipulation element.
The final data presentation stage consists of a pointer and dial arrangement which, when
calibrated with known pressure inputs, gives an indication of the pressure signal applied to the
BT.


Example 02: Pressure actuated thermometer:

 The liquid bulb acts as the primary sensing and variable conversion element, since a
temperature change results in a pressure build-up within the bulb due to the constrained
thermal expansion of the fill liquid.
 The pressure tubing transmits the pressure to the Bourdon tube and thus functions as
the data transmission element.
 The BT converts the fluid pressure into a displacement of its tip, and thus acts as a
variable conversion element.
 The displacement is manipulated by the linkage and gearing (manipulation element) to
give a larger pointer motion.
 The scale and the pointer serve as the data presentation element.
The various inputs to the measurement system are classified as follows
Desired inputs: The quantities that the instrument or the measurement system is
specifically designed to measure and respond to are called desired inputs.
The desired input i_D produces an output component according to an input–output relation
symbolised by G_D; here G_D (the transfer function) represents the mathematical operation
necessary to obtain the output from the input.

Interfering inputs: The quantities to which an instrument or a measurement system becomes
unintentionally sensitive are called interfering inputs.
The interfering input i_I produces an output component according to an input–output
relation symbolised by G_I.
Modifying inputs: The inputs that cause a change in the input–output relationship for either
the desired input or the interfering input, or for both, are called modifying inputs.
Example: Desired input: strain created by the load on the specimen.
Interfering input: temperature and a 50 Hz electromagnetic field.
Modifying input: temperature and battery voltage.


Example: Measurement of differential pressure of a fluid.

Desired input: pressures on either side.
Interfering input: acceleration and angle of tilt.
Modifying input: temperature and the gravitational force due to tilt.
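These relations can be written compactly as output = G_D·i_D + G_I·i_I, with the modifying input altering G_D and G_I themselves. The sketch below is only an illustration of this idea; every gain and input value in it is an assumed number, not taken from the notes.

```python
# Illustrative sketch of desired, interfering and modifying inputs.
# All numbers are assumed for demonstration only.

def instrument_output(i_d, i_i, i_m):
    """Output = G_D*i_D + G_I*i_I, where the modifying input i_m
    changes both transfer functions (e.g. a temperature effect)."""
    g_d = 10.0 * (1 + 0.002 * i_m)   # desired-input gain, shifted by i_m
    g_i = 0.5 * (1 + 0.010 * i_m)    # interfering-input gain, shifted by i_m
    return g_d * i_d + g_i * i_i

# Desired input only, at reference conditions (i_m = 0)
print(instrument_output(i_d=2.0, i_i=0.0, i_m=0.0))    # 20.0
# Same desired input, with interference and a modifying input present
print(instrument_output(i_d=2.0, i_i=1.0, i_m=10.0))   # 20.95
```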

STATIC CHARACTERISTICS
 To choose the instrument most suited to a particular measurement application, we
have to know the system characteristics.

 The performance characteristics may be broadly divided into two groups, namely
‘static’ (do not vary with time) and ‘dynamic’ (vary with time) characteristics.
 The set of criteria defined for the instruments, which do not vary with time (static).
 The set of criteria defined for the instruments, which varies with respect to time
(dynamic).
The various static characteristics are:
Accuracy, Sensitivity, Linearity, Reproducibility, Repeatability, Resolution, Threshold,
Stability, Tolerance, etc.
The various dynamic characteristics are
Speed of response
Measuring lag
Dynamic error.
Need for performance characteristics:
To know the quality of measurement instrument
To check whether the instrument is suitable for its application.
To compare the instrument with its alternatives.
When should we calibrate instruments?
1. Before a critical measurement
2. On the manufacturer's recommendation
3. Considering environmental conditions
4. After transportation, and
5. After an accidental drop/shock.
Calibration of Measuring Instruments:
• Process through which the reliability of instrument is maintained by comparing the instrument
with known standards.
Need of Calibration
To take into account the error producing properties of each component.
To take into account environmental errors.
To understand/detect error generated due to frequent use of equipment
To assure quality of inspection during manufacturing process.
1. Calibration is the procedure used to establish a relationship between the values of the
quantities indicated by the measuring instrument and the corresponding values realized
by standards under specified conditions.
2. If the values of the variable involved remain constant (not time dependent) while
calibrating a given instrument, this type of calibration is known as static calibration,
whereas if the value is time dependent or time-based information is required, it is called
dynamic calibration. The relationship between an input of known dynamic behaviour
and the measurement system output is determined by dynamic calibration.
3. The main objective of all calibration activities is to ensure that the measuring instrument
will function to realize its accuracy objectives.
4. General calibration requirements of the measuring systems are as follows:
(a) Accepting calibration of the new system,
(b) Ensuring traceability of standards for the unit of measurement under consideration (The
process of validation of the measurements to ascertain whether the given physical quantity
conforms to the original/national standard of measurement is known as traceability of the
standard.), and
(c) Carrying out calibration of measurement periodically, depending on the usage or when it is
used after storage.
5. Calibration is achieved by comparing the measuring instrument with the following:
(a) a primary standard, (b) a known source of input, and (c) a secondary standard that possesses
a higher accuracy than the instrument to be calibrated.
6. During calibration, the dimensions and tolerances of the gauge or accuracy of the measuring
instrument is checked by comparing it with a standard instrument or gauge of known accuracy.
7. If deviations are detected, suitable adjustments are made in the instrument to ensure an
acceptable level of accuracy.


Static calibration (an act of comparison) refers to a situation where the interfering and
modifying inputs are kept at constant values and the desired input is then varied over some
range of constant values, causing the output to vary over some range of constant values.
Manufacturers and testing laboratories normally furnish the static characteristics of a
measuring device, stating the levels of interfering and modifying inputs under which the
calibration is done.
Traceability: It is the ability to trace the accuracy of a standard back to its ultimate source in
the fundamental standards maintained at the National Physical Laboratories.


TERMINOLOGIES
Range
The region between the limits within which an instrument is designed to operate for measuring
its input quantity is called the range of the instrument.
The range is expressed by stating its upper and lower values.
Example: a thermometer with a range of −100 °C to 100 °C.
Span
Algebraic difference between the upper and lower range values of the instrument.
• Example: span of a thermometer having a range of −50 °C to 50 °C:
Span = Upper Value − Lower Value
Span = 50 − (−50) = 100 °C

Accuracy
Closeness of measured value with true value
Can be determined by single reading
Accuracy is defined as the closeness of indicated value to the true value of the quantity being
measured.
Accuracy is the degree of agreement of the measured dimension with its true magnitude.
• The maximum amount by which the result differs from the true value.
• The nearness of the measured value to its true value.
• Expressed as a percentage.
If the accuracy of an instrument is stated to be ±1%, it implies that the maximum
departure of the reading from the true value may be as much as ±1% of the span of the
instrument.
Example: range of thermometer 0 °C to 100 °C, Span = 100 − 0 = 100 °C
Accuracy = ±1% of the instrument span = ±1% of 100 °C = ±1 °C
A measurement of 70 °C (±1% of span) therefore lies in
(70 °C − 1 °C, 70 °C + 1 °C) = (69 °C, 71 °C); the actual temperature is expected to lie in this interval.

Accuracy may be expressed in two ways:
• as a percentage of span (percentage of scale range), or
• as a percentage of true value (percentage of reading).


Example: We have a voltmeter with range (0–20) V. One of the measurements was specified
as 10 ± 0.2 V. Express the measurement with accuracy as a % of true value and as a % of span.
Solution: 10 ± 0.2 V is the point accuracy.
As a % of full-scale deflection (% of instrument span):
Span = 20 − 0 = 20 V
10 V ± (0.2/20 × 100)% = 10 V ± 1% of span
As a % of true value (% of measurand value):
10 V ± (0.2/10 × 100)% = 10 V ± 2% of reading
Accuracy mentioned in terms of scale range is constant for the whole range of instrument.
Accuracy mentioned in terms of true value changes depending on the true value.
Consider a voltmeter of range (0-20V)
Reading    Case 1: Accuracy = ±1% of FSD    Case 2: Accuracy = ±1% of true value
1V 1 ± (1% of 20V) = 1 ± 0.2V 1 ± (1% of 1V) = 1 ± 0.01V
5V 5 ± (1% of 20V) = 5 ± 0.2V 5 ± (1% of 5V) = 5 ± 0.05V
10V 10 ± (1% of 20V) = 10 ± 0.2V 10 ± (1% of 10V) = 10 ± 0.1V
15V 15 ± (1% of 20V) = 15 ± 0.2V 15 ± (1% of 15V) = 15 ± 0.15V
20V 20 ± (1% of 20V) = 20 ± 0.2V 20 ± (1% of 20V) = 20 ± 0.2V
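The two conventions in the table above can be reproduced with a few lines of arithmetic; the snippet below is a simple check of the voltmeter example (0–20 V range, ±1% in both cases).

```python
# Error band of a 0-20 V voltmeter specified two ways:
# (a) +/-1% of full-scale deflection (span), (b) +/-1% of true value.

span = 20.0 - 0.0
readings = [1.0, 5.0, 10.0, 15.0, 20.0]

for v in readings:
    err_fsd = 0.01 * span     # constant over the whole range
    err_true = 0.01 * v       # grows with the reading
    print(f"{v:5.1f} V  ±{err_fsd:.2f} V (of FSD)   ±{err_true:.2f} V (of true value)")
```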

Example: Four students performed an experiment to measure the voltage across a


resistor. The actual value of voltage is 4.3V.

A B C D
4.524 4.016 4.250 4.301
4.523 4.137 4.321 4.299
4.525 4.541 4.295 4.302
4.526 5.104 4.342 4.298
A: not accurate but precise;  B: not accurate and not precise;  C: accurate but not precise;  D: accurate and precise
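One common way to quantify these judgements is to compare each student's mean error against the true value (accuracy) with the spread of the readings (precision). Below is a rough sketch only: the standard deviation is used as the precision measure, and the cut-off values are arbitrary, chosen simply to reproduce the verdicts in the table.

```python
import statistics

# Readings of a resistor voltage whose true value is 4.3 V (from the table above).
true_value = 4.3
data = {
    "A": [4.524, 4.523, 4.525, 4.526],
    "B": [4.016, 4.137, 4.541, 5.104],
    "C": [4.250, 4.321, 4.295, 4.342],
    "D": [4.301, 4.299, 4.302, 4.298],
}

for student, readings in data.items():
    bias = statistics.mean(readings) - true_value   # systematic offset -> accuracy
    spread = statistics.stdev(readings)             # scatter -> precision
    # The 0.05 V accuracy and 0.01 V precision cut-offs are arbitrary,
    # chosen only so that the output matches the table's verdicts.
    accurate = abs(bias) < 0.05
    precise = spread < 0.01
    print(f"{student}: bias={bias:+.3f} V, stdev={spread:.3f} V, "
          f"{'accurate' if accurate else 'not accurate'}, "
          f"{'precise' if precise else 'not precise'}")
```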

Factors affecting the Accuracy


• Two terms are associated with accuracy in measuring equipment:
• Sensitivity
• Consistency.
Sensitivity
• The ratio of the change of instrument indication to the change of quantity being measured.
• the ability of the measuring equipment to detect small variations in the quantity being
measured
Consistency
• If the successive readings of the measured quantity obtained from the measuring instrument
are the same every time, the equipment is said to be consistent.
Accuracy and Cost

It can be observed from the figure that as the requirement of accuracy increases, the cost increases
exponentially. If the tolerance of a component is to be measured, then the accuracy requirement
will normally be 10% of the tolerance values. Demanding high accuracy unless it is absolutely
required is not viable, as it increases the cost of the measuring equipment and hence the
inspection cost. In addition, it makes the measuring equipment unreliable, because higher
accuracy increases sensitivity. Therefore, in practice, while designing the measuring
equipment, the trade-off between the desired/required accuracy and cost depends on the quality
and reliability of the component/product and on the inspection cost.
Interchangeability
A) Modern production techniques require that a complete product be broken into various
component parts so that the production of each part becomes an independent process,
leading to specialization. The various components are manufactured in one or more
batches by different persons on different machines at different locations and are then
assembled at one place. To achieve this, it is essential that the parts are manufactured
in bulk to the desired accuracy and, at the same time, adhere to the limits of accuracy
specified. Manufacture of components under such conditions is called interchangeable
manufacture.
B) When interchangeable manufacture is adopted, any one component selected at random
should assemble with any other arbitrarily chosen mating component. In order to
assemble with a predetermined fit, the dimensions of the components must be confined
within the permissible tolerance limits. By interchangeable assembly, we mean that
identical components, manufactured by different operators, using different machine
tools and under different environmental conditions, can be assembled and replaced
without any further modification during the assembly stage and without affecting the
functioning of the component when assembled.
C) For example, consider the assembly of a shaft and a part with a hole. The two mating
parts are produced in bulk, say 1000 each. By interchangeable assembly any shaft
chosen randomly should assemble with any part with a hole selected at random,
providing the desired fit.
D)
1) Interchangeable manufacture increases productivity and reduces production and time
costs.
2) In order to achieve interchangeability, certain standards need to be followed, based on
which interchangeability can be categorized into two types—universal
interchangeability and local interchangeability.
3) When the parts that are manufactured at different locations are randomly chosen for
assembly, it is known as universal interchangeability.
4) When the parts that are manufactured at the same manufacturing unit are randomly
drawn for assembly, it is referred to as local interchangeability.

Precision
Precision is the degree of repetitiveness of the measuring process.
Precision is the repeatability of the measuring process.
Precision refers to the consistent reproducibility of a measurement.
• If an instrument is not precise, it would give different results for the same dimension for
repeated readings.
• In most measurements, precision assumes more significance than accuracy.

OR
Precision
• Defined as the repeatability of a measuring instrument, i.e., how close the measured values
are to each other.
• Precision cannot be determined from a single reading; describing precision requires a set of readings.
Example: Reading obtained from measuring instrument
– True reading – 25mm
– 24.7 , 25.31, 24.69, 24.89, 25.02 - Set 1
– 24.98, 25.02, 25.01, 25.00, 25.00 – Set 2

Difference between Precision and Accuracy


Example:
Three industrial robots programmed to place components at a particular point on a table. The
target point was at the center of the concentric circles shown, and black dots represent points
where each robot actually deposited components at each attempt.
Both the accuracy and the precision of Robot 1 are shown to be low in this trial.
Robot 2 consistently puts the component down at approximately the same place but this is the
wrong point. Therefore, it has high precision but low accuracy.
Finally, Robot 3 has both high precision and high accuracy because it consistently places the
component at the correct target position.


Precision/Repeatability/Reproducibility
Precision: High precision does not imply anything about measurement accuracy. A high-
precision instrument may have a low accuracy. Low accuracy measurements from a high-
precision instrument are normally caused by a bias in the measurements, which is removable
by recalibration.
The terms repeatability and reproducibility mean approximately the same but are applied in
different contexts.
Repeatability describes the closeness of output readings when the same input is applied
repetitively over a short period of time, with the same measurement conditions, same
instrument and observer, same location, and same conditions of use maintained throughout.
Reproducibility describes the closeness of output readings for the same input when there are
changes in the method of measurement, observer, measuring instrument, location, conditions
of use, and time of measurement.
Both terms thus describe the spread of output readings for the same input. This spread is
referred to as repeatability if the measurement conditions are constant and as reproducibility if
the measurement conditions vary.
Resolution of Measuring Instruments
Resolution is the smallest change in a physical property that an instrument can sense.
For example, a weighing machine in a gymnasium normally senses weight variations in
kilograms, whereas a weighing machine in a jewellery shop can detect weight in milligrams.
Naturally, the weighing machine in the jewellery shop has a superior resolution than the one at
the gymnasium.
Resolution: It is the smallest change in the input quantity that cause a detectable change in its
output. If the input is slowly increased from some arbitrary value, it’ll be found that the output
does not change at all until a certain increment is exceeded. This increment is known as
resolution.
Example:

The degree of fineness to which an instrument can be read is known as the resolution. In this
case, the ruler has a resolution of 1 cm. We can see that the object is closer to the 5 cm
marking than the 6 cm mark, so we would record the length as 5 cm. However, it is clearly
not exactly 5 cm. Using this ruler, we would record any object that is closer to the 5 cm mark
than to any other as measuring 5 cm. This means an object could be as short as 4.5 cm, or
anywhere up to 5.5 cm, and we would record its length as 5 cm.
We call this the uncertainty in the measurement. There are many sources of uncertainty, but
here it is the uncertainty due to the resolution of the ruler. The uncertainty on that
measurement is equal to half of the range of likely values. In this case, the range is 1 cm
(from 4.5 cm to 5.5 cm), so the uncertainty is ±0.5 cm.
Note that this is equal to half of the resolution of the ruler. When calculating uncertainty due
to the resolution of an instrument, the range of likely values is equal to the resolution. We can
therefore say that the uncertainty is equal to half of the resolution.
We could reduce the uncertainty in the measurement of our object by using a different ruler,
say, one that has markings every millimetre instead of every centimetre. This ruler has a
resolution of 1 mm. When an instrument can be read more finely, we say that it has higher
resolution.
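As a quick check of the "uncertainty equals half the resolution" rule, the small snippet below reports a reading together with its resolution-limited uncertainty for the two rulers discussed (1 cm and 1 mm resolution); the readings themselves are assumed example values.

```python
# Uncertainty due to instrument resolution: half of the smallest scale division.

def report(reading, resolution, unit):
    uncertainty = resolution / 2.0
    return f"{reading} ± {uncertainty} {unit} (resolution {resolution} {unit})"

print(report(5.0, 1.0, "cm"))    # coarse ruler: 5.0 ± 0.5 cm
print(report(5.3, 0.1, "cm"))    # millimetre ruler: 5.3 ± 0.05 cm
```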


Example: A moving coil voltmeter has a uniform scale with 100 divisions. The full-scale reading is
200 V and 1/10th of a scale division can be estimated with a fair degree of certainty. Determine the
resolution of the instrument in volts.
Given data: Number of divisions = 100
Max. output = full-scale reading = 200 V
Solution:
We know the scale division is given by:
1 scale division = full-scale reading / number of divisions
1 scale division = 200 V/100 = 2 V
And the resolution is given by
Resolution = 1/10 × 2 V = 0.2 V
The resolution of the moving coil voltmeter is 0.2 V.

Sensitivity:
The ratio of the change in the output of an instrument to the change in its input.

• Example: If the sensitivity of a voltmeter is, say, 1 mV, then if you apply a potential
difference of 1 mV the display moves; if you apply less than 1 mV the display does not move.
The ratio of the magnitude of the output signal to the input signal, or the response of the
measuring system to the quantity being measured, is called sensitivity.

It is represented by the slope of the calibration curve if the ordinates are expressed in actual units.
When the calibration curve is linear (Fig.1) the sensitivity is constant. However, if the
calibration curve is nonlinear (Fig.2) the sensitivity is different at different points, being the
slope of curve at various points.


Static sensitivity, in general is defined as:


Static sensitivity(K) = Infinitesimal change in output/ Infinitesimal change in input =
∆Ao/∆Ai.
Similarly, inverse sensitivity or deflection factor or scale factor = ∆Ai/∆Ao.

Linearity in Measurement Systems: (Output is linearly proportional to input)


It is desirable to design instruments having a linear relationship between the applied static input
and the indicated output values, as shown in Fig.
A measuring instrument/system is said to be linear if it uniformly responds to incremental
changes, that is, the output value is equal to the input value of the measured property over a
specified range.

Linearity is defined as the maximum deviation of the output of the measuring system from a
specified straight line applied to a plot of data points on a curve of measured (output) values
versus the measurand (input) values.
In order to obtain accurate measurement readings, a high degree of linearity should be
maintained in the instrument or efforts have to be made to minimize linearity errors.


Before making any interpretation or comparison of the linearity specifications of the measuring
instrument, it is necessary to define the exact nature of the reference straight line adopted, as
several lines can be used as the reference of linearity. The most common lines are as follows:
Best-fit line: The plot of the output values versus the input values with the best line fit is shown
in Fig. The line of best fit is the most common way to show the correlation between two
variables. This line, which is also known as the trend line, is drawn through the centre of a
group of data points on a scatter plot. The best-fit line may pass through all the points, some of
the points, or none of the points.
End point line: This is employed when the output is bipolar. It is the line drawn by joining the
end points of the data plot without any consideration of the origin. This is represented in
Fig.
Terminal line: When the line is drawn from the origin to the data point at full scale output, it
is known as terminal line. The terminal line is shown in Fig.

Least square line: This is the most preferred and extensively used method in regression
analysis. It is a statistical technique and a more precise way of determining the line of best fit
for a given set of data points. The best-fit line is drawn through a number of data points by
minimizing the sum of the squares of the deviations of the data points from the line of best fit,
hence the name least squares. The line is specified by an equation relating the input value to
the output value by considering the set of data points.
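A least-squares reference line can be obtained directly from calibration data. The sketch below fits a straight line to a few assumed input–output points using numpy.polyfit and reports the maximum deviation from that line, which is one way of expressing the linearity error; the data values are invented for illustration.

```python
import numpy as np

# Assumed calibration data: input (measurand) vs. instrument output.
x = np.array([0.0, 2.0, 4.0, 6.0, 8.0, 10.0])
y = np.array([0.1, 2.0, 4.3, 5.9, 8.2, 9.9])

# Least-squares straight line y = m*x + c
m, c = np.polyfit(x, y, 1)
fitted = m * x + c

# Linearity error: maximum deviation of the data from the best-fit line
max_dev = np.max(np.abs(y - fitted))
print(f"slope = {m:.3f}, intercept = {c:.3f}, max deviation = {max_dev:.3f}")
```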
Any departure from a straight-line relationship is nonlinearity. The nonlinearity may be
due to the following factors:
Viscous flow or creep
Mechanical hysteresis
The elastic after effects in the mechanical system.
Drift: All calibrations and specifications of an instrument are only valid under controlled
conditions of temperature, pressure, and so on. These standard ambient conditions are usually
defined in the instrument specification. As variations occur in the ambient temperature, certain
static instrument characteristics change. Such environmental changes affect the output of the
instrument and can be attributed to a general term called drift.
No drift means that with a given input the measured values do not change with time.
Definition: Drift is an undesired gradual departure of the instrument output over a period of
time that is unrelated to changes in the input, operating conditions, or load.
Classification of drift:
Zero drift: If the whole calibration gradually shifts by the same amount (due to slippage or
permanent set), zero drift has set in.
Span drift: If there is a proportional change in the indication all along the upward scale, the
drift is called span drift.
The drift may be caused by the following factors:
Wear and tear
Mechanical vibration
Temperature changes
High mechanical stresses developed in some part of instruments and systems.

• It is an instrument's ability to maintain its calibration over a period of time.

• If an instrument does not reproduce the same reading at different times of measurement for
the same input signal, it is said to have drift. If an instrument has perfect reproducibility, it is
said to have no drift.

Fig. Zero drift

Fig. Span drift


Fig. Combined zero and span drift


Q. The following table shows output measurements of a voltmeter under two sets of conditions:
(a) use in an environment kept at 20 °C, which is the temperature at which it was calibrated;
(b) use in an environment at a temperature of 50 °C.
Voltage readings at calibration temperature of 20 °C (assumed correct)    Voltage readings at temperature of 50 °C
10.2    10.5
20.3    20.6

Determine the zero drift when it is used in the 50 °C environment, assuming that the
measurement values when it was used in the 20 °C environment are correct. Also calculate the
zero drift coefficient.
Solution: Zero drift = 10.5 − 10.2 = 0.3 V, and 20.6 − 20.3 = 0.3 V.
Zero drift coefficient = magnitude of zero drift / magnitude of temperature change
causing the drift = 0.3 V/(50 °C − 20 °C) = 0.01 V/°C
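The same calculation can be written out explicitly; the snippet below simply reproduces the zero-drift figures from the table above.

```python
# Zero drift of a voltmeter: readings shift by a constant amount at 50 deg C
# relative to the (assumed correct) 20 deg C calibration readings.

readings_20c = [10.2, 20.3]
readings_50c = [10.5, 20.6]

drifts = [b - a for a, b in zip(readings_20c, readings_50c)]
zero_drift = drifts[0]                       # 0.3 V (same shift for both readings)
coefficient = zero_drift / (50.0 - 20.0)     # volts per deg C of temperature change
print([round(d, 2) for d in drifts], round(coefficient, 4))   # [0.3, 0.3] 0.01
```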

Q2


Hysteresis Effects
Figure illustrates the output characteristic of an instrument that exhibits hysteresis. If the input
measured quantity to the instrument is increased steadily from a negative value, the output
reading varies in the manner shown in curve A. If the input variable is then decreased steadily,
the output varies in the manner shown in curve B. The noncoincidence between these loading
and unloading curves is known as hysteresis. Two quantities are defined, maximum input
hysteresis and maximum output hysteresis, as shown in Figure. These are normally expressed
as a percentage of the full-scale input or output reading, respectively.

Hysteresis is found most commonly in instruments that contain springs, such as a passive
pressure gauge (Figure 2.1) and a Prony brake (used for measuring torque). It is also evident
when friction forces in a system have different magnitudes depending on the direction of
movement, such as in the pendulum-scale mass-measuring device. Devices such as the
mechanical flyball (a device for measuring rotational velocity) suffer hysteresis from both of
the aforementioned sources because they have friction in moving parts and also contain a

spring. Hysteresis can also occur in instruments that contain electrical windings formed round
an iron core, due to magnetic hysteresis in the iron. This occurs in devices such as the variable
inductance displacement transducer, the linear variable differential transformer, and the rotary
differential transformer.
Dead Space
Dead space is defined as the range of different input values over which there is no change in
output value. Any instrument that exhibits hysteresis also displays dead space, as marked on
Figure 2.8. Some instruments that do not suffer from any significant hysteresis can still exhibit
a dead space in their output characteristics, however. Backlash in gears is a typical cause of
dead space and results in the sort of instrument output characteristic shown in Figure 2.9.
Backlash is commonly experienced in gear sets used to convert between translational and
rotational motion (which is a common technique used to measure translational velocity).

ERRORS IN MEASUREMENTS
While performing physical measurements, it is important to note that the measurements
obtained are not completely accurate, as they are associated with uncertainty. Thus, in order to
analyse the measurement data, we need to understand the nature of errors associated with the
measurements.
Two broad categories of errors in measurement have been identified: systematic and random
errors.
A systematic error is a type of error that deviates by a fixed amount from the true value of
measurement. These types of errors are controllable in both their magnitude and their direction,
and can be assessed and minimized if efforts are made to analyse them.

Examples of such errors include measurement of length using a metre scale, measurement of
current with inaccurately calibrated ammeters, etc. When the systematic errors obtained are
minimum, the measurement is said to be extremely accurate.
The following are the reasons for their occurrence:
1. Calibration errors
2. Ambient conditions (The most significant ambient condition affecting the accuracy of
measurement is temperature. An increase in temperature of 1 °C increases the length of a 25 mm
piece of C25 steel by about 0.3 μm, and this is substantial when precision measurement is required.)
3. Deformation of workpiece (Any elastic body, when subjected to a load, undergoes elastic
deformation. The stylus pressure applied during measurement affects the accuracy of
measurement.)

Errors
• Difference between measured value and true value.
• Types of Error
– Systematic Error
– Random Error
Systematic Error
• Systematic errors in experimental observations usually come from the measuring instruments.
They are controllable in nature
• They may occur because:
• There is something wrong with the instrument or its data handling system, or
• Because the instrument is wrongly used by the experimenter.
Example of Systematic Error : Parallax Error
Random Error
• These errors occur randomly; hence they cannot be eliminated, but their effect can be
minimized.
• Example:
– Positioning standard or work piece, slight displacement of the jaws, fluctuation of instrument,
operator error etc.
Sources of Errors
• Defect in the instrument
• Adjustment of instrument
• Imperfection of instrument design.
• Method of location of instrument
• Environmental effects
• Error because of properties of work piece
• Error due to surface finish of object
• Error due to change in size of object
Environmental Error
• A 25 mm steel length will increase by about 0.3 μm for a 1 °C change in temperature.
• Standard conditions for measurement: a temperature of 20 °C at 35 to 45% relative humidity.
Dirt Error
• Dirt particles can enter the inspection room through doors, windows, etc. These particles can
introduce small errors at the time of measurement. For this reason, metrology laboratories are
kept in dust-proof rooms.
Reading errors: These errors occur due to the mistakes committed by the observer while
noting down the values of the quantity being measured. Digital readout devices, which are
increasingly being used for display purposes, eliminate or minimize most of the reading errors
usually made by the observer.

Errors due to parallax effect


Parallax errors occur when the sight is not perpendicular to the instrument scale or the observer
reads the instrument from an angle. Instruments having a scale and a pointer are normally
associated with this type of error. The presence of a mirror behind the pointer or indicator
virtually eliminates the occurrence of this type of error.
Zero errors
When no measurement is being carried out, the reading on the scale of the instrument should
be zero. A zero error is defined as that value when the initial value of a physical quantity
indicated by the measuring instrument is a non-zero value when it should have actually been
zero.
For example, a voltmeter might read 1 V even when it is not under any electromagnetic
influence. This voltmeter indicates 1 V more than the true value for all subsequent
measurements made. This error is constant for all the values measured using the same
instrument.
Random Errors
The following are the likely sources of random errors:
1. Presence of transient fluctuations in friction in the measuring instrument
2. Play in the linkages of the measuring instruments
3. Error in operator’s judgement in reading the fractional part of engraved scale divisions
4. Operator’s inability to note the readings because of fluctuations during measurement
5. Positional errors associated with the measured object and standard, arising due to small
variations in setting

Differences between systematic and random errors

Therefore, in order to find out and eliminate any systematic error, it is required to calibrate the
measuring instrument before conducting an experiment. Calibration reveals the presence of any
systematic error in the measuring instrument.
Numerical Problems


Q1. A Wheatstone bridge requires a change of 7 Ω in the unknown arm of the bridge to
produce a deflection of 3 mm of the galvanometer. Determine the sensitivity and the deflection
factor.
Given data:
Input = change in resistance = 7 Ω
Output = deflection of the galvanometer = 3 mm
Sensitivity = K = Output/Input = 3/7 = 0.4286 mm/Ω
Deflection factor = reciprocal of sensitivity = 1/K = 7/3 = 2.333 Ω/mm
Q2. A temperature measuring device consists of a transducer, an amplifier and a pen recorder. Their
static sensitivities are: temperature transducer sensitivity = 0.25 mV/°C, amplifier gain = 2.0 V/mV,
recorder sensitivity = 5 mm/V. How much displacement will be seen by the recorder for a 1 °C change
in temperature?
Given data:
Temperature transducer sensitivity = 0.25 mV/°C,
Amplifier gain = 2.0 V/mV, and
Recorder sensitivity = 5 mm/V
We know that the sensitivity of the overall system is given by K = K1 × K2 × K3
K = K1 × K2 × K3 = 0.25 mV/°C × 2.0 V/mV × 5 mm/V = 2.5 mm/°C
For an input change of 1 °C in temperature,
Sensitivity = K = 2.5 mm/°C = output/1 °C
Output = 2.5 mm.
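The overall sensitivity of a chain of elements is simply the product of the individual sensitivities, as the worked example shows; a one-line check in code:

```python
# Overall static sensitivity of a cascaded measurement chain (Q2 data).
transducer = 0.25   # mV per deg C
amplifier = 2.0     # V per mV
recorder = 5.0      # mm per V

overall = transducer * amplifier * recorder   # mm per deg C
print(overall, "mm/degC")                     # 2.5 mm/degC
print(overall * 1.0, "mm displacement for a 1 degC change in temperature")
```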
Q3. A mercury thermometer has a capillary tube of 0.25 mm diameter. If the bulb and capillary tube
are made of a zero-expansion material, what volume must the bulb have if a sensitivity of 2.5 mm/°C is
desired? Assume the operating temperature is 20 °C and the coefficient of volumetric expansion of
mercury is 0.181 × 10⁻³/°C.
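Q3 is not worked out in these notes; a possible solution sketch follows. It assumes the usual simple model in which the column sensitivity equals the mercury volume times the volumetric expansion coefficient divided by the capillary cross-sectional area (dL/dT = V·β/A), with the glass itself non-expanding as stated.

```python
import math

# Sketch of a solution to Q3 (assumed model: sensitivity K = V * beta / A).
d = 0.25            # capillary diameter, mm
beta = 0.181e-3     # volumetric expansion coefficient of mercury, per deg C
K = 2.5             # required sensitivity, mm of column rise per deg C

A = math.pi / 4.0 * d**2        # capillary cross-sectional area, mm^2
V = K * A / beta                # required mercury (bulb) volume, mm^3
print(f"A = {A:.4f} mm^2, V = {V:.0f} mm^3 (about {V/1000:.2f} cm^3)")
# -> roughly 678 mm^3, i.e. about 0.68 cm^3 of mercury
```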
Q4. A moving coil voltmeter has a uniform scale with 100 divisions. The full-scale reading is 200 V
and 1/10th of a scale division can be estimated with a fair degree of certainty. Determine the
resolution of the instrument in volts.
Given data: Number of divisions = 100
Max. output = full-scale reading = 200 V
Solution: We know the scale division is given by:
1 scale division = full-scale reading / number of divisions
1 scale division = 200 V/100 = 2 V
And the resolution is given by
Resolution = 1/10 × 2 V = 0.2 V
The resolution of the moving coil voltmeter is 0.2 V.
Q5. The dead zone of a certain pyrometer is 0.15% of the span. The calibration is 500 °C to 850 °C.
What temperature change might occur before it is detected?
Solution: The calibration range of the pyrometer is 500 °C to 850 °C, and the dead zone is 0.15% of the span.
The span of the pyrometer is 850 − 500 = 350 °C
Span = 350 °C
And the dead zone = 0.15 × span/100 = 0.15 × 350/100 = 0.525 °C
A temperature change of 0.525 °C might occur before it is detected.
Probable Error:
Normal curve of error: The law of probability states that the normal occurrence of deviations from
the average value of an infinite number of measurements or observations can be expressed by
y = (h/√π) exp(−h²x²) -----(1)
where x = magnitude of the deviation from the mean,
y = number of readings at any deviation x (the probability of occurrence of deviation x), and
h = a constant called the precision index.
Equation (1) leads to a curve of the type shown in the figure, and this curve, showing y plotted
against x, is called the normal or Gaussian probability curve.
Another, more convenient, form of the equation describing the Gaussian curve uses the standard
deviation σ and is given by
y = (1/(σ√(2π))) exp(−x²/(2σ²)) -----(2)
Equation (2) is particularly useful because σ is usually the known quantity of interest.
The figure drawn is a normal probability curve.


Consider the two points marked r and −r on the figure. The reason for the name "probable error"
is that half of the observed values lie between the limits ±r. If we determine r from the results of
n measurements and then make an additional measurement, the chances are 50–50 that the new
value will lie between −r and +r; that is, the chances are even that any one reading will have an
error not greater than ±r. The location of the point r is found from

0.5 = ∫_{−r}^{+r} (h/√π) exp(−h²x²) dx

which gives r = 0.4769/h. Thus a convenient measure of precision is the quantity r, called the
probable error. Since h = 1/(σ√2), the probable error of a single reading can also be written as

PE = 0.6745 σ ≈ 0.6745 √(Σd²/(n − 1))

where d is the deviation of a reading from the mean of the n readings; the probable error of the
mean is PE/√n.
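Numerically, the probable error of a set of readings can be estimated from the sample standard deviation using the relation PE ≈ 0.6745 σ derived above; the readings in the sketch below are assumed sample data, not values from the notes.

```python
import statistics

# Probable error of a set of readings (assumed sample data).
readings = [25.02, 24.98, 25.01, 25.00, 24.99, 25.03, 24.97]
n = len(readings)

sigma = statistics.stdev(readings)        # sample standard deviation (n - 1 in the denominator)
pe_single = 0.6745 * sigma                # probable error of a single reading
pe_mean = pe_single / n**0.5              # probable error of the mean
print(f"sigma = {sigma:.4f}, PE(single) = {pe_single:.4f}, PE(mean) = {pe_mean:.4f}")
```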
Q. A pressure gauge is calibrated from 0 to 800 kg/cm². Its accuracy is specified as within
±1% of the full-scale value in the first 20% of the scale and ±0.5% in the remaining 80% of
the scale. What static error is expected if the instrument indicates
(i) 130 kg/cm² and (ii) 320 kg/cm²?
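A sketch of the expected working for this question is given below; it assumes that both accuracy figures are quoted as a percentage of the full-scale value of 800 kg/cm² (the wording of the 0.5% figure is ambiguous).

```python
# Static error of a 0-800 kg/cm^2 pressure gauge (assumed interpretation:
# both tolerances are percentages of the full-scale value of 800 kg/cm^2).
full_scale = 800.0

def static_error(indication):
    if indication <= 0.20 * full_scale:      # first 20% of the scale (0-160 kg/cm^2)
        return 0.01 * full_scale             # +/- 1% of full scale
    return 0.005 * full_scale                # +/- 0.5% of full scale

print(static_error(130.0))   # 8.0  -> +/- 8 kg/cm^2
print(static_error(320.0))   # 4.0  -> +/- 4 kg/cm^2
```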
Q. The power factor of a circuit is given by cos θ = P/(VI), where P is the power in watts, V is
the voltage in volts, and I is the current in amperes. The relative errors in power, voltage, and
current are respectively ±0.5%, ±1%, and ±1%. Calculate the relative error in the power factor.
Also calculate the uncertainty in the power factor if the errors were specified as uncertainties.
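A sketch of the calculation for this question: for cos θ = P/(V·I), the worst-case (limiting) relative error is the sum of the individual relative errors, while the uncertainty, if the figures are treated as independent uncertainties, combines by root-sum-of-squares.

```python
import math

# Relative error and uncertainty of the power factor cos(theta) = P / (V * I).
rel_p, rel_v, rel_i = 0.005, 0.01, 0.01      # +/-0.5%, +/-1%, +/-1%

# Worst-case (limiting) relative error: the terms add directly
rel_error = rel_p + rel_v + rel_i            # 0.025 -> +/- 2.5%

# Uncertainty: independent contributions combine in quadrature
uncertainty = math.sqrt(rel_p**2 + rel_v**2 + rel_i**2)   # ~0.015 -> +/- 1.5%

print(f"relative error = ±{rel_error*100:.1f}%, uncertainty = ±{uncertainty*100:.2f}%")
```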

Q1. Explain with block diagram the generalised measurement system elements and give one
example indicating clearly various elements.
Q2. What are desired, interfering and modified inputs? Explain them with neat figures?
Q3 Consider a mercury-in-glass thermometer as a temperature-measuring system. Discuss
the various stages of this measuring system in detail.
Q4 Name and discuss three application areas for measurement systems
Q5 Discuss briefly the need for precision measurement in an engineering industry.
Q6 Describe some sources of errors in precision measurement.
Q7 What do you understand by the terms precision, reproducibility, and accuracy as applied
to methods of measurement?
Q8 Explain the following terms in mechanical measurements:
Calibration, Sensitivity and Precision.
