
Engineering Measurement

Measurement is the process of quantitative comparison between a predetermined standard and an unknown magnitude.

 Measurements are required:
- for assessing the performance of a product/system,
- for performing analysis to ascertain the response to a specific input function,
- for studying some fundamental principle or law of nature, etc.
 Metrology is the science of measurement.
Metrology may be divided:
 Depending upon the quantity under consideration, into metrology of length, time, mass, volume, temperature, pressure, voltage, current, etc.
 Depending upon the field of application, into industrial metrology, medical metrology, etc.

Metrology may also be classified into three types: Legal, Dynamic, and Deterministic.
Types of Metrology
Legal Metrology
 'Legal metrology' is that part of metrology which treats units of measurement, methods of measurement and measuring instruments, in relation to the technical and legal requirements.
 The activities of the service of 'Legal Metrology' are:
- Control of measuring instruments;
- Testing of prototypes/models of measuring instruments;
- Examination of a measuring instrument to verify its conformity to the statutory requirements, etc.
Dynamic Metrology
 'Dynamic metrology' is the technique of measuring small variations of a continuous nature.
 The technique has proved very valuable, and record of continuous measurement, over a
surface, for instance, has obvious advantages over individual measurements of an isolated
character.
Types of Metrology
Deterministic Metrology
 Deterministic Metrology is a new philosophy in which part measurement is replaced by
process measurement.
 This technology is used for very high precision manufacturing machinery and control
systems to achieve micro technology and nanotechnology accuracies.

Thus metrology is primarily concerned with:
i) Methods of measurement based on agreed units and standards (uniformity of measurements);
ii) Developing methods of measurement;
iii) Analyzing the accuracy of methods of measurement, establishing the uncertainty of measurement, researching the causes of measuring errors, and eliminating them.
GENERAL MEASUREMENT CONCEPTS
 The primary objective of measurement in industrial inspection is to determine the quality of the component manufactured.

A. Measurand, a physical quantity such as length, weight, and angle to be measured


B. Comparator, to compare the measurand (physical quantity) with a known standard (reference)
for evaluation
C. Reference, the physical quantity or property to which quantitative comparisons are to be
made, which is internationally accepted.
Precision, Accuracy and Reliability
 Accuracy and Precision are two important factors to consider when taking data measurements.

 Both accuracy and precision reflect how close a measurement is to an actual value, but Accuracy
reflects how close a measurement is to a known or accepted value, while Precision reflects how
reproducible measurements are, even if they are far from the accepted value.
PRECISION

 Precision is the repeatability of the measuring process.

 It refers to a group of measurements of the same characteristic taken under identical conditions (the extent to which identically performed measurements agree with each other).

 The ability of a measuring instrument to repeat the same result during repeated measurements of the same quantity is known as Repeatability.
ACCURACY
 Accuracy is the degree to which the measured value of the quality characteristic agrees with the true
value.
 Accuracy is the degree of agreement of the measured dimension with its true magnitude.
 It can also be defined as the maximum amount by which the result differs from the true value or as the
nearness of the measured value to its true value, often expressed as a percentage.
 In practice, realization of the true value is not possible due to uncertainties of the measuring process and
hence cannot be determined experimentally.
 Positive and negative deviations from the true value are not equal and will not cancel each other

Examples of Accuracy and Precision


Take experimental measurements for another example of precision and accuracy. If you take the measurements of
the mass of a 50.0-gram standard sample and get values of 47.5, 47.6, 47.5, and 47.7 grams, your scale is precise,
but not very accurate. If your scale gives you values of 49.8, 50.5, 51.0, 49.6, it is more accurate than the first
balance, but not as precise. The more precise scale would be better to use in the lab, providing you made an
adjustment for its error.
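The scale comparison above can be made quantitative with a short sketch. The readings come from the example; using the mean to judge accuracy and the standard deviation to judge precision is the usual convention, not something stated in the source.

```python
import statistics

true_value = 50.0  # grams, the standard sample

scale_a = [47.5, 47.6, 47.5, 47.7]  # precise but not accurate
scale_b = [49.8, 50.5, 51.0, 49.6]  # accurate but less precise

for name, readings in [("Scale A", scale_a), ("Scale B", scale_b)]:
    mean = statistics.mean(readings)
    accuracy_error = abs(mean - true_value)   # closeness to the true value
    spread = statistics.stdev(readings)       # repeatability of the readings
    print(f"{name}: mean={mean:.3f} g, error={accuracy_error:.3f} g, spread={spread:.3f} g")
```

Scale A shows a large error but a small spread; Scale B shows the opposite, which is exactly the accuracy/precision distinction drawn above.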
Reference value
 A value taken to be very close to the true value and usually accepted as a point of reference, e.g. a ‘standard
weight’ has been measured on a balance that has little or no error and so the ‘measured weight’ is very close to
the true value and accepted.

Reliability

 Reliability is assessed through comparison of an individual result with a reference or class mean. Assessing the reliability of an individual result allows a judgment to be made regarding the level of mistakes.

Calibration

 It is the process of comparing the indication of an instrument or the value of a material measure (e.g. value of a
weight or graduations of a length measuring ruler) against values indicated by a measurement standard under
specified conditions.

 In the process of calibration of an instrument or material measure the test item is either adjusted or correction
factors are determined.
Errors in Measurement
 It is never possible to measure the true value of a dimension; there is always some error.
 Error is the amount by which each observed measurement differs from the "true, but unknown" value.
 "True value" is the value that would be attained by a perfect measurement.
 The Error in Measurement is the difference between the measured value and the true value of the measured dimension:
Error in measurement = Measured value - True value.
The error in measurement may be expressed or evaluated either as an Absolute error or as a Relative
error.
I. Absolute Error
 True Absolute Error: the algebraic difference between the result of measurement and the conventional true value of the quantity measured.
 Apparent Absolute Error: if a series of measurements is made, the algebraic difference between one of the results of measurement and the arithmetical mean of the series.
II. Relative Error
 It is the quotient of the absolute error and the value of comparison used for calculation of that absolute error.
 This value of comparison may be the true value, the conventional true value, or the arithmetic mean of the series of measurements.
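The definitions above can be illustrated numerically. The readings and the conventional true value below are hypothetical; the formulas follow the definitions directly.

```python
# Illustrative sketch: absolute and relative error for a series of
# hypothetical length readings (mm) against a conventional true value.
measurements = [25.02, 24.98, 25.05, 24.99]  # mm, hypothetical readings
conventional_true_value = 25.00               # mm

mean = sum(measurements) / len(measurements)

# True absolute error of a single reading (vs the conventional true value)
true_abs_error = measurements[0] - conventional_true_value

# Apparent absolute error of the same reading (vs the series mean)
apparent_abs_error = measurements[0] - mean

# Relative error, using the conventional true value as the value of comparison
relative_error = true_abs_error / conventional_true_value

print(f"mean = {mean:.3f} mm")
print(f"true absolute error = {true_abs_error:+.3f} mm")
print(f"apparent absolute error = {apparent_abs_error:+.3f} mm")
print(f"relative error = {relative_error:+.4%}")
```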
 The accuracy of measurement, and hence the error, depends upon many factors, such as:
- Calibration standard
- Environment
- Work piece
- Person
- Instrument
 No matter how modern the measuring instrument, how skillful the operator, or how accurate the measurement process, there will always be some error.
 It is therefore attempted to minimize the error. To minimize it, usually a number of observations are made and their average is taken as the value of the measurement.
Types of Error
During measurement several types of error may arise, these are
1. Static errors
- Reading errors
- Characteristic errors
- Environmental errors.
2. Instrument loading errors.
3. Dynamic errors.
- Systematic or controllable errors
- Random or non-controllable errors
1. Static errors
 These errors result from the physical nature of the various components of measuring
system. There are three basic sources of static errors. The static error divided by the
measurement range (difference between the upper and lower limits of measurement)
gives the measurement precision.
a) Reading Errors
 Reading errors apply exclusively to the read-out device. These do not have any direct relationship with
other types of errors within the measuring system. Reading errors include: Parallax error,
Interpolation error.

b) Characteristic Errors
 It is defined as the deviation of the output of the measuring system from the theoretical predicted
performance or from nominal performance specifications.
 Linearity errors, repeatability, hysteresis and resolution errors are part of characteristic errors if the
theoretical output is a straight line. Calibration error is also included in characteristic error.

c) Environmental Errors
 These errors result from the effect of surrounding such as temperature, pressure, humidity etc. on
measuring system.
 External influences like magnetic or electric fields, nuclear radiations, vibrations or shocks etc. also
lead to environmental errors.
2. Instrumental Loading Errors
 Loading errors result from the change in the measurand itself when it is being measured (i.e., after the measuring system or instrument is connected for measurement).
 Instrument loading error is the difference between the value of the measurand before and after the measuring system is connected/contacted for measurement.
 For example, soft or delicate components are subjected to deformation during measurement due to the contact pressure of the instrument, causing a loading error.
3. Dynamic Errors
 Dynamic error is the error caused by time variations in the measurand.
 It is caused by inertia, damping, friction or other physical constraints in the sensing or readout
or display system.
 These errors can be broadly classified into two categories:
a) Systematic or controllable errors:
- Calibration errors
- Ambient or atmospheric conditions (environmental errors)
- Stylus pressure
- Avoidable errors
b) Random or non-controllable errors.

a) Systematic or Controllable Errors

 Systematic errors are regularly repetitive in nature. They are of constant and similar form.

 They result from improper conditions or procedures that are consistent in action. Among the systematic errors, the personal error varies from individual to individual, depending on the personality of the observer.
1) Calibration Errors
 These are caused due to the variation of the calibrated scale from its normal value.
 The actual length of standards such as slip gauges and engraved scales will vary from the nominal value by a small amount.
 This will cause an error in measurement of constant magnitude.

2) Ambient or Atmospheric Conditions (Environmental Errors)
 Variation in atmospheric conditions (i.e., temperature, pressure, and moisture content) at the place of measurement from the internationally agreed standard values (20 °C temperature and 760 mm of Hg pressure) can give rise to error in the measured size of the component.

3) Stylus Pressure
 Another common source of error is the pressure with which the work piece is pressed while measuring.
 Though the pressure involved is generally small, it is sufficient to cause appreciable deformation of both the stylus and the work piece.

4) Avoidable Errors
 These errors may occur due to parallax, non-alignment of work piece centers, or improper location of measuring instruments, such as placing a thermometer in sunlight while measuring temperature.
 The error due to misalignment is caused when the center line of the work piece is not normal to the center line of the measuring instrument.
b) Random Errors.
 Random errors are non-consistent. They occur randomly and are accidental in nature. Such errors are
inherent in the measuring system. It is difficult to eliminate such errors.
The possible sources of such errors are:
1. Small variations in the position of setting standard and work piece.
2. Slight displacement of lever joints of measuring instruments.
3. Operator error in scale reading.
4. Fluctuations in the friction of measuring instrument etc.
Linear measurements
 Linear measurement applies to measurement of lengths, diameters, heights, and thickness including external and
internal measurements.
 The line measuring instruments have a series of accurately spaced lines marked on them, e.g. a scale. The dimension to be measured is aligned with the graduations of the scale.
 Linear measuring instruments are designed either for line measurement or for end measurement. In end measuring instruments, the measurement is taken between two end surfaces, as in micrometers, slip gauges, etc.
The direct measuring instruments are of two types:
- Graduated (precision) measuring instruments
- Non-graduated (non-precision) instruments
- The Graduated Instruments include rules, vernier calipers, vernier height gauges, vernier depth gauges, micrometers, dial indicators, etc.
- The Non-Graduated Instruments include calipers, trammels, telescopic gauges, surface gauges, straight gauges, wire gauges, screw pitch gauges, thickness gauges, slip gauges, etc.
1. Non-precision instruments
Calipers
Steel rule
2. Precision measuring instruments
Vernier Instruments
Angular Measurement
 The angle is defined as the opening between two lines which meet at a point.
 If a circle is divided into 360 equal parts, each part is called a degree (°). Each degree is divided into 60 minutes ('), and each minute is divided into 60 seconds (").
 The radian is more widely used in mathematical investigation:
2π radians = 360°, giving 1 radian = 57.2958 degrees.
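The degree/minute/second subdivision and the radian relation above can be sketched as two small conversion helpers (the sample angle is an arbitrary illustration):

```python
import math

def dms_to_degrees(deg, minutes, seconds):
    """Combine degrees, minutes, seconds: 1 deg = 60 min, 1 min = 60 sec."""
    return deg + minutes / 60 + seconds / 3600

def degrees_to_radians(deg):
    """2*pi radians = 360 degrees."""
    return deg * math.pi / 180

angle = dms_to_degrees(57, 17, 44.8)   # roughly one radian, in d/m/s
print(f"{angle:.4f} deg = {degrees_to_radians(angle):.6f} rad")
print(f"1 rad = {math.degrees(1):.4f} deg")
```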

Bevel Protractor
 It is probably the simplest instrument for measuring the angle between two faces of a component.
 It consists of a base plate attached to the main body, and an adjustable blade which is attached to a circular plate containing a vernier scale.
 The adjustable blade is capable of rotating freely about the center of the main scale engraved on the body of the instrument and can be locked in any position.
Types of Bevel Protractors
1. Mechanical Bevel Protractor
2. Optical Bevel Protractor
3. Universal Bevel Protractor
Clinometer
 A clinometer is a special case of a spirit level and is used to determine the straightness and flatness of surfaces.
 While the spirit level is restricted to relatively small angles, clinometers can be used for much larger angles.
 It comprises a level mounted on a frame so that the frame may be turned to any desired angle with respect to a
horizontal reference.
Surface finish and its measurements

 Functioning of machine parts, load carrying capacity, tool life, fatigue life, bearing corrosion, and wear qualities
of any component of a machine have direct bearing with its surface texture.

 Good bearing properties in any part are obtained when the surface has large number of irregularities, i.e. a large
number of hills and valleys. The rate of wear is proportional to the surface areas in contact and the load per unit
area.

 If the hills and valleys on a surface are very close together, the surface appears rough. This is due to the action of the cutting tool and is referred to as Primary Texture.

 If the hills and valleys on the surface are far apart, it is due to imperfection in the machine tool and is referred to
as Secondary Texture or waviness.

 This distinction between primary and secondary texture is due to difference in wave length.

 A surface actually is quite complex and consists of many different wavelengths caused due to feed of the tool,
cutting action, vibration, imperfection in machine tools, etc.
First order:
 This includes the irregularities arising out of the straightness of the guideways on which the tool post is moving.
Second order:
 Some irregularities are caused due to vibration of any kind such as chatter marks and are
included in second order
Third order:
 Even if the machine were perfect and completely free of vibration, some irregularities are
caused by machining itself due to characteristics of the process
Fourth order:
 This include the irregularities arising from the rupture of the material during the separation of
the chip.
Surface Roughness
 Roughness or texture in the form of a succession of minute irregularities is produced directly by the
finishing process employed.
 The characteristic roughness produced by the tool is not the only cause of roughness in a machining operation; more openly spaced components of roughness are also produced by faults in the machining operation.
 Surface roughness is concerned both with the size and the shape of the irregularities; e.g., in certain profiles the height of departure from the nominal profile may be the same, but the spacing of the irregularities may be wider or closer, or the irregularities may be of various forms.
Terminology
Real Surface: is the surface limiting the body and separating it from the surrounding space.

Geometrical Surface: is the surface prescribed by the design or by the process of manufacturing,
neglecting the errors of form and surface roughness.

Effective Surface: is the close representation of real surface obtained by instrumental means

Surface Texture: repetitive or random deviation from the nominal surface which form the pattern of the
surface. Surface texture include roughness, waviness, lay and flaws.

Surface Roughness: it concerns all those irregularities which form surface relief and which are
conventionally defined within the area where deviation of form and waviness are eliminated

Primary Texture(Roughness): it is caused due to the irregularities in the surface roughness which result
from the inherent action of the production process. These are deemed to include transverse feed mark and
the irregularities within them.
Secondary Texture (Waviness): it results from factors such as machine or work deflections, vibration, chatter, heat treatment, or warping strains. Waviness is the component of surface texture upon which roughness is superimposed.

Flaws: are irregularities which occur at one place or at relatively infrequent or widely varying intervals in a surface (like scratches, cracks, and random blemishes).

Center line: the line about which roughness is measured.

Lay: is the direction of the predominant surface pattern, ordinarily determined by the method of production used.
Methods of Measuring Surface Finish
 Touch inspection
 Visual inspection
 Scratch inspection
 Microscope inspection
 Surface photographs
 Micro interferometer
 Wallace surface dynamometer
 Reflected light intensity

Touch Inspection
 This method can simply tell which surface is more rough. The finger-tip is moved along the surface at a speed of about 25 mm per second, and irregularities as small as 0.01 mm can be easily detected.

Visual Inspection
 Inspection by the naked eye is always likely to be misleading, particularly when surfaces having a high degree of finish are inspected. This method is limited to rougher surfaces, and results vary from person to person.

Scratch Inspection
 In this method, a softer material like lead, babbitt, or plastic is rubbed over the surface to be inspected. By doing so, it carries the impression of the scratches on the surface, which can be easily visualized.

Microscopic Inspection
 In this method, a master finished surface is placed under the microscope and compared with the surface under inspection.
 Only a small portion of the surface can be inspected at a time, so several readings are required to get an average value.

Micro Interferometer
 In this method, an optical flat is placed on the surface to be inspected and illuminated by a monochromatic source of light. Interference bands are studied through a microscope.
 Defects, i.e. scratches in the surface, appear as interference lines extending from the dark bands into the bright bands. The depth of the defect is measured in terms of the fraction of an interference band.

Surface Photographs
 In this method, magnified photographs of the surface are taken with different types of illumination.
 With vertical illumination, defects such as irregularities appear as bright areas; with oblique illumination, the reverse is the case. Photographs taken with different illumination are compared and the results assessed.

Reflected Light Intensity
 In this method, a beam of light of known intensity is projected upon the surface. The light is reflected in several directions as beams of lesser intensity, and the change in light intensity in different directions is measured by a photocell.
 The measured intensity changes are calibrated against readings taken from surfaces of known roughness measured by some other suitable method.
Screw Threads
 Screw Threads are of prime importance, they are used as fasteners. It is a helical groove, used to transmit
force and motion.
 Screw Thread Gauging plays a vital role in Industrial Metrology to measure inter-related geometric aspects
such as Pitch diameter, Lead, Helix, and flank angle, among others.
 Lead: The axial distance advanced by the screw in one revolution.
 Pitch: It is the distance measured parallel to the screw threads axis between the corresponding points on two
adjacent threads in the same axial plane.
 Minor diameter: It is the diameter of an imaginary co-axial cylinder which touches the roots of external threads.
 Major diameter: It is the diameter of an imaginary co-axial cylinder which touches the crests of an external
thread and the root of an internal thread.
 Pitch diameter: It is the diameter of an imaginary cylinder on which the width of the thread and the width of the space between threads are equal.
 Helix angle: It is the angle made by the helix of the thread at the pitch line with the axis. The angle is measured
in an axial plane.
 Flank angle: It is the angle between the flank and a line normal to the axis passing through the apex of the
thread.
 Height of thread: It is the distance measured radially between the major and minor diameters respectively.
 Depth of thread: It is the distance from the tip of thread to the root of the thread measured perpendicular to the
longitudinal axis.
 Form of thread: This is the shape of the contour of one complete thread as seen in axial section.
 Axis of the thread: An imaginary line running longitudinally through the center of the screw.
 Angle of the thread: It is the angle between the flanks or slope of the thread measured in an axial plane.
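The lead and pitch defined above are linked through the number of thread starts. The relation below is the standard one (it is not stated explicitly in the source):

```python
# Standard relation (assumed, not from the source):
# lead = number_of_starts * pitch. For a single-start thread, lead = pitch.
def lead(pitch_mm, starts=1):
    return starts * pitch_mm

print(lead(1.5))      # single-start: the screw advances one pitch per revolution
print(lead(1.5, 2))   # two-start: the screw advances two pitches per revolution
```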

Screw Thread Measuring Instruments
1. Screw thread micrometer
2. Screw pitch gauge
3. Three wire method

1. Screw Thread Micrometer
 It is used for accurate measurement of the pitch diameter of screw threads. The micrometer has a pointed spindle and a double V-anvil, both correctly shaped to contact the screw thread of the work being gauged.
2. Screw Pitch Gauge

3. Three Wire Method
 Three wires of equal and precise diameter are placed in the thread grooves at opposite sides of the screw, and the distance over the outer surfaces of the wires is measured with a micrometer.
 This method of measuring the effective diameter is more accurate.
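The measurement over the wires is converted to the effective diameter with the standard three-wire relation for a symmetrical thread; the formula and the M10 x 1.5 figures below are illustrative assumptions, not values given in the source.

```python
import math

# Standard three-wire relation (assumed here) for a symmetrical thread of
# included angle x and pitch p:
#   M = E + d * (1 + 1/sin(x/2)) - (p/2) / tan(x/2)
# where M = measurement over the wires, d = wire diameter, E = effective diameter.
def effective_diameter(M, d, pitch, thread_angle_deg=60.0):
    half = math.radians(thread_angle_deg / 2)
    return M - d * (1 + 1 / math.sin(half)) + (pitch / 2) / math.tan(half)

# Hypothetical metric thread, pitch 1.5 mm, measured with 0.9 mm wires
E = effective_diameter(M=10.35, d=0.9, pitch=1.5)
print(f"effective diameter = {E:.4f} mm")
```

For a 60° thread the relation reduces to the familiar E = M - 3d + 0.866p.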
Measurement of Various Elements of Thread
A) Measurement of Major Diameter
B) Measurement of Minor Diameter (Floating Carriage Micrometer)
 The carriage has a micrometer with a fixed spindle on one side and a movable spindle with a micrometer on the other side.
 The carriage moves on a finely ground 'V' guideway or an anti-friction guideway to facilitate movement in a direction parallel to the axis of the plug gauge mounted between centers.
Minor diameter of internal threads
a) Using taper parallels
 The taper parallels are pairs of wedges having radiused and parallel outer edges. The diameter across their outer edges can be changed by sliding them over each other.
b) Using rollers
 For threads bigger than 10 mm diameter, precision rollers are inserted inside the thread and a proper slip gauge is inserted between the rollers so that firm contact is obtained. The minor diameter is then the length of the slip gauges plus twice the diameter of the rollers.
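The roller method stated above reduces to a one-line calculation (the gauge and roller sizes below are hypothetical):

```python
# Relation from the text: minor diameter = slip-gauge length + 2 * roller diameter
def minor_diameter(slip_gauge_mm, roller_diameter_mm):
    return slip_gauge_mm + 2 * roller_diameter_mm

print(minor_diameter(18.5, 3.0))  # hypothetical 18.5 mm slips, 3 mm rollers
```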
Gears

 A gear is a rotating machine element having cut teeth which mesh with another toothed part, usually
having teeth of similar size and shape, in order to transmit power.

 A transmission (or gear set) can be used to change the speed, torque, direction of rotation, direction of
power source or the type of motion
Gear Measuring Instruments

Disc micrometer (measuring the gear tooth thickness)
 The tooth thickness is generally measured at the pitch circle and is, therefore, the pitch-line thickness of the tooth.

Tooth thickness by gear tooth vernier caliper
 It measures the thickness of gear teeth at the pitch line (the chordal thickness of teeth) and the distance from the top of a tooth to the chord.
Radius Measurement
1) Radius gauge (fillet gauge)
2) Spherometer
3) Cylindrometer
4) Adjustable outside/inside radius gauges
5) Cutting tool radius measurement (optical system)
6) Digital radius gauge
7) Profile projector

Radius Gauge (Fillet Gauge)
 Radius gauges require a bright light behind the object to be measured.
 The gauge is placed against the edge to be checked, and any light leakage between the blade and edge indicates a mismatch that requires correction.

Spherometer
 Spherometers are particularly useful in situations where only a portion of the spherical surface is available. Such situations are very common in optics workshops while fabricating lenses and mirrors.
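A spherometer infers the radius of curvature from the sagitta read on its central screw. The sagitta relation below is standard geometry, assumed here since the source gives no formula, and the dimensions are hypothetical.

```python
# Sagitta geometry (assumed, not from the source): if the legs sit on a
# circle of radius a about the central screw and the screw reads the
# sagitta h, then a**2 = h * (2*R - h), giving
#   R = a**2 / (2*h) + h / 2
def radius_of_curvature(a_mm, h_mm):
    return a_mm**2 / (2 * h_mm) + h_mm / 2

# Hypothetical reading: legs 25 mm from the screw, sagitta 1.2 mm
print(f"R = {radius_of_curvature(a_mm=25.0, h_mm=1.2):.2f} mm")
```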
Adjustable outside/inside radius gauges
Cutting tool radius measurement (optical system)
 Optical measuring systems for measuring the shape of optical surfaces, of polished as well as ground surfaces.

Digital radius gauge

Profile projector
Temperature Measurement and Thermometer
 Temperature is difficult to measure directly, so we usually measure it indirectly by measuring one of many physical properties that change with temperature. We then relate the physical property to temperature by a suitable calibration.

 In the United States, the most common temperature scale is the Fahrenheit (°F) scale. Water freezes at 32 °F and boils at 212 °F, and normal body temperature is about 98.6 °F.

 Fahrenheit devised this scale in 1724 so that 100 °F would represent the normal body temperature and 0 °F would represent the coldest temperature man could then produce (by mixing ice and salt).

 Most scientists in the United States use the Celsius (°C) scale (formerly called the centigrade scale), which is in common use throughout most of the world. Water freezes at 0 °C and boils at 100 °C, and normal body temperature is about 37 °C.

 Another important temperature scale used for scientific work is the Kelvin (K), or absolute, scale, which has the same degree intervals as the Celsius scale; 0 K (absolute zero) is -273.15 °C. On the absolute scale, water freezes at 273.15 K and boils at 373.15 K, and normal body temperature (rectal) is about 310 K. This temperature scale is not used in medicine.
The relationships between the different temperature scales are:

Celsius to Fahrenheit: [°F] = [°C] × 9/5 + 32
Fahrenheit to Celsius: [°C] = ([°F] - 32) × 5/9
Celsius to Kelvin: [K] = [°C] + 273
Kelvin to Celsius: [°C] = [K] - 273
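The four relations above translate directly into code (keeping the source's rounded constant 273 for the Kelvin offset):

```python
def c_to_f(c): return c * 9 / 5 + 32
def f_to_c(f): return (f - 32) * 5 / 9
def c_to_k(c): return c + 273   # the source uses 273; 273.15 for exact work
def k_to_c(k): return k - 273

assert c_to_f(100) == 212                 # water boils at 212 F
assert abs(f_to_c(98.6) - 37.0) < 1e-9    # normal body temperature
assert c_to_k(0) == 273                   # water freezes near 273 K
```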
Heat vs. Temperature
Definition:
- Temperature: the degree of hotness or coldness of a body.
- Heat: a form of energy which flows from a hotter region to a cooler region.

Unit of measurement:
- Temperature: Kelvin (K), Celsius (°C).
- Heat: Joule (J).

Property:
- Temperature: increases when heated; decreases when cooled.
- Heat: flows from a hot area to a cold area.
 Temperature does not depend on the size or type of object. For example, the temperature of a small cup
of water might be the same as the temperature of a large tub of water, but the tub of water has more heat
because it has more water and thus more total thermal energy.

 If we add heat, the temperature will become higher. If we remove heat, the temperature will become
lower. Higher temperatures mean that the molecules are moving, vibrating and rotating with more
energy.
The Mercury Thermometer
 The most common way to measure temperature is with a glass thermometer containing Mercury or
Alcohol.
 The principle behind this thermometer is that an increase in the temperature of different materials usually
causes them to expand different amounts.
 In a glass thermometer, a temperature increase causes the alcohol or mercury to expand more than the
glass and thus produces an increase in the level of the liquid.
Measurement of force, torque and pressure
Force: It is defined as the reaction between two bodies or components.
 The reaction can be either a tensile force (pull) or a compressive force (push).
 Measurement of force can be done by either of two methods:
Direct Method: This involves a direct comparison with a known gravitational force on a standard mass. Example: a physical balance.
Indirect Method: This involves measuring the effect of the force on a body, e.g. force is calculated from the acceleration due to gravity and the mass of the component.
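The indirect method described above amounts to F = m × g; the mass below is a hypothetical example.

```python
# Indirect force measurement as stated in the text: F = m * g
g = 9.81          # m/s^2, standard acceleration due to gravity
mass_kg = 12.5    # hypothetical component mass
force_N = mass_kg * g
print(f"force = {force_N:.2f} N")
```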
PROVING RING

 The proving ring is a device used to measure force. It consists of an elastic


ring of known diameter with a measuring device located in the center of the
ring.
 They are made of a steel alloy.
 Proving rings can be designed to measure either compression or tension.
 They serve as a standard for calibrating material testing machines.
 Capacity: 1000 N to 1000 kN.
 Deflection is used as the measure of applied load.
 This deflection is measured by a precision micrometer.
 The micrometer is set with the help of a vibrating reed.
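Because the deflection serves as the measure of the applied load, converting a micrometer reading to a force is a one-line calculation once the ring has been calibrated. The linear elastic assumption and the numbers below are illustrative, not from the source.

```python
# Hedged sketch: within its elastic range the ring deflection is assumed
# proportional to the load, F = k * delta, with the ring constant k
# obtained from calibration. Values are hypothetical.
k_N_per_mm = 250_000.0     # ring stiffness from calibration
deflection_mm = 0.48       # precision micrometer reading
load_N = k_N_per_mm * deflection_mm
print(f"applied load = {load_N:.0f} N")
```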
Dynamometers
Absorption dynamometers:
 They are useful for measuring power or torque developed by power source such as engines or electric
motors.
Driving dynamometers:
 These dynamometers measure power or torque and also provide the energy to operate the device being tested.
 They are useful in determining the performance characteristics of devices such as pumps and compressors.
Transmission dynamometers:
 These are passive devices placed at an appropriate location within a machine, or between machines, to sense the torque at that location.
What is Strain?
 Strain describes the measurement of the deformation of a material.
 The material of a component or object can be elongated (stretched) or contracted (compressed),
 thus experiencing strain due to the following factors:
- the effect of an applied external force (mechanical strain)
- the influence of heat and cold (thermal strain)
- internal forces from the non-uniform cooling of cast components, forging, or welding (residual strain)
Why is Strain Measured?
 Most commonly, strain is measured to determine the level of stress on the material – Experimental Stress Analysis
 The absolute value and direction of the mechanical stress is determined from the measured strain and known
properties of the material (modulus of elasticity and Poisson’s ratio).
 These calculations are based on Hooke’s Law.
 In its simplest form, Hooke's law states the direct proportionality of the strain ε [m/m] and the stress σ [N/mm²] of a certain material through its elasticity or Young's modulus E [N/mm²]:
σ = ε · E
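The stress calculation above is a single multiplication; the modulus value is a typical figure for mild steel and the strain reading is hypothetical.

```python
# Hooke's law as used above: sigma = epsilon * E
E = 210_000.0          # Young's modulus, N/mm^2 (typical mild steel)
strain = 0.0012        # measured strain, m/m (1200 microstrain, hypothetical)
stress = strain * E    # N/mm^2 (MPa)
print(f"stress = {stress:.1f} N/mm^2")
```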
Interferometry

 Interferometry is a technique used in physics, astronomy, and engineering to study waves, such as light or sound,
by observing the interference patterns they create when they interact.
 The basic principle involves combining two or more wave fronts and analyzing the resulting pattern of
constructive and destructive interference.
 In optical interferometry, for instance, light waves from a source are split into two or more beams using mirrors
or beam splitters.
 These beams travel different paths and are then recombined. When the beams intersect, their waves interfere
with each other, producing an interference pattern that contains information about the phase, amplitude, and
polarization of the original waves.
Interferometry has various applications:
1. Precision Measurement: it is used to measure small displacements, distances, and angles with very high accuracy, making it invaluable in fields such as metrology and microscopy.
2. Astronomy: astronomers use interferometry to enhance the resolution of telescopes by combining signals from multiple telescopes, creating a virtual telescope with a much larger aperture. This technique, known as interferometric imaging, allows astronomers to observe fine details of distant objects in space.
3. Medical Imaging: in medical imaging, interferometry can be used for techniques like optical coherence tomography (OCT), which produces high-resolution cross-sectional images of biological tissues.
4. Engineering: interferometry is used in fields such as optics, semiconductor manufacturing, and surface metrology for precise measurements and quality control.
Overall, interferometry is a powerful tool for studying wave phenomena and making precise measurements in various scientific and technical fields.
 In interferometry, flatness refers to the evenness or uniformity of a surface.
 Interferometers are often used to measure the flatness of surfaces by analyzing interference patterns produced
when light reflects off the surface.
 By examining the interference fringes, precise measurements of the surface flatness can be obtained.
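As a sketch of how fringe analysis yields a flatness figure, the standard optical-flat relation can be applied: under monochromatic illumination, each fringe corresponds to a half-wavelength change in the air gap. The wavelength and fringe count below are illustrative assumptions, not values from the text:

```python
# Each interference fringe corresponds to a change of lambda/2 in the air gap
# between the reference flat and the surface under test.
wavelength_nm = 632.8    # assumed He-Ne laser source, nm
fringe_count = 4         # fringes of curvature observed across the surface (illustrative)

deviation_nm = fringe_count * wavelength_nm / 2
print(f"flatness deviation = {deviation_nm:.1f} nm")  # 1265.6 nm
```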
 The gauge length interferometer is a specific type of interferometer used for measuring the flatness of surfaces
over large distances.
 It typically consists of a laser source, beam splitters, mirrors, and detectors.
 The gauge length interferometer operates by splitting a laser beam into two paths, directing them towards the
surface being measured, and then recombining them. The interference pattern produced when the beams
recombine is analyzed to determine the flatness of the surface.
Comparators
 Comparators are precision measuring instruments used to compare the dimensions of a workpiece with a standard
reference. They are widely used in manufacturing and metrology for quality control and inspection purposes.
Comparators typically consist of a measuring system, a magnifying system, and a scale or dial for reading
measurements. Here are some features and classifications of comparators:
Features of Comparators:
1. **Magnification**: Comparators often incorporate magnifying lenses or optical systems to enhance the visibility
of small features on the workpiece.
2. **Resolution**: The resolution of a comparator refers to the smallest measurable difference in dimensions. Higher
resolution comparators can detect smaller deviations from the standard.
3. **Accuracy**: Accuracy refers to the closeness of measurements to the true value. Comparators are designed to
provide precise and accurate measurements within specified tolerances.
4. **Versatility**: Some comparators are designed for specific types of measurements, while others offer versatility
by accommodating various accessories and configurations for different measurement tasks.
5. **Ease of Use**: User-friendly features such as adjustable focus, easy-to-read scales, and intuitive controls
enhance the usability of comparators.
6. **Stability and Durability**: Comparators are typically built to withstand environmental factors such as
temperature variations and mechanical shocks to ensure reliable performance over time.
Classification of Comparators:
1. **Mechanical Comparators**: These comparators use mechanical mechanisms, such as gears and levers, to
amplify and measure the displacement between the workpiece and the reference standard. Examples include dial
indicators, lever-type comparators, and snap gauges.
2. **Optical Comparators**: Optical comparators utilize optical systems, such as lenses and mirrors, to
magnify and project the image of the workpiece onto a screen or sensor. This allows for precise visual inspection
and measurement of dimensions. Profile projectors are a common type of optical comparators.
3. **Electrical Comparators**: Electrical comparators use electronic sensors, such as LVDTs (Linear Variable
Differential Transformers) or eddy current probes, to measure the displacement between the workpiece and the
reference standard. These comparators offer high accuracy and are often used in automated inspection systems.
Concepts of interchangeability
System of Limits, Fits, Tolerance and Gauging
Introduction
 In the manufacture of different components for engineering applications, no two parts
can be produced with identical measurements by any manufacturing process.
 A manufacturing process essentially involves five M's: Man, Machine, Materials, Money, and
Management.
 Some variability in dimension within certain limits must be tolerated during manufacture.
 The permissible level of tolerance depends on the functional requirements, which cannot be
compromised.
 Generally in engineering, any component Manufactured is required to fit or match with some
other component.
 The correct and prolonged functioning of the two components in match depends upon the correct
size relationships between the two, i.e., the parts must fit with each other in a desired way.
Tolerance
 Tolerance can be defined as the magnitude of permissible variation of a dimension or other
measured value or control criterion from the specified value.
 It can also be defined as the total variation permitted in the size of a dimension, and is the
Algebraic difference between the upper and lower acceptable dimensions. It is an absolute value.
 The basic purpose of providing tolerances is to permit dimensional variations in the manufacture of
components, adhering to the performance criterion as established by the specification and design.
 It is impossible to make anything to an exact size, therefore it is essential to allow a definite
tolerance or permissible variation on every specified dimension.
 To achieve an increased compatibility between mating parts to enable interchangeable assembly,
the manufacturer needs to practice good Tolerance Principles.
Upper Deviation: It is the algebraic difference between the maximum size and the Basic size.
Lower Deviation: It is the algebraic difference between the minimum size and the Basic size.
Limits of Size: The two extreme permissible sizes of a part between which the Actual size should lie.
Maximum Limit of Size: The greater of the two limits of size.
Minimum Limit of Size: The smaller of the two limits of size.
Basic Size: The size with reference to which the limits of size are fixed.
Zero Line: It is a straight line corresponding to the Basic size. The deviations are measured from this
line. The positive and negative deviations are shown above and below the zero line respectively.
Shaft: A term used by convention to designate all External features of a part, including those which are
not cylindrical.
Hole: A term used by convention to designate all Internal features of a part, including those which are not
cylindrical.
Tolerance Zone: It is the zone between the maximum and minimum limit size.
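The definitions above can be tied together in a short sketch (the limit sizes below are illustrative, for a Basic size of 25 mm):

```python
# Illustrative limit sizes for a shaft of Basic size 25 mm
basic = 25.00
max_limit = 25.18   # Maximum Limit of Size
min_limit = 25.10   # Minimum Limit of Size

upper_deviation = round(max_limit - basic, 2)   # algebraic difference from Basic size: +0.18
lower_deviation = round(min_limit - basic, 2)   # algebraic difference from Basic size: +0.10
tolerance = round(max_limit - min_limit, 2)     # always an absolute value: 0.08

print(upper_deviation, lower_deviation, tolerance)
```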
Classification of Tolerance
Tolerance (the total amount by which a specified dimension is permitted to vary) is classified into:
1. Unilateral Tolerance
2. Bilateral Tolerance
3. Compound Tolerance
4. Geometrical Tolerance
1. Unilateral Tolerance
 When the two limit dimensions are only on one side of the nominal size (either above or below).
 For unilateral tolerances, a case may occur when one of the limits coincides with the Basic size.
 Unilateral Tolerance is employed when Precision fits are required during Assembly. This type of
tolerance is usually indicated when the mating parts are also machined by the same operator.
 Unilateral Tolerance is employed in the Drilling process wherein dimensions of the hole are most
likely to deviate in one direction only, that is, the hole is always oversized rather than undersized.
Example: 25 (+0.18 / +0.10), 25 (−0.10 / −0.20)
2. Bilateral Tolerance
 When the two limit dimensions are above and below Nominal size, (i.e. on either side of the nominal
size).
 The dimension of the part is allowed to vary on both sides of the Basic size but may not be necessarily
equally disposed about it.
 This system is generally preferred in mass production where the machine is set for the Basic size.
Example: 28 ± 0.2
3. Compound Tolerance
 When the Tolerance is determined by established tolerances on more than one dimension.
4. Geometric Tolerance
 Geometric Tolerance is defined as the total amount that the dimension of a Manufactured part can vary.
 Depending on the functional requirements, Tolerance on Diameter,
Straightness, and Roundness may be specified separately.
 Geometric tolerances are used to indicate the relationship of one part of an
object with another.
Geometric Tolerance is classified into:
- Form Tolerances
- Orientation Tolerances
- Positional Tolerances
A. Form tolerances
 Form Tolerances are a group of Geometric Tolerances applied to individual features.
 Form tolerances as such do not require locating Dimensions. These include Straightness,
Circularity, Flatness, and Cylindricity.
B. Orientation Tolerances
 Orientation Tolerances are a type of Geometric Tolerances used to limit the direction or orientation of
a feature in relation to other features.
 These are related Tolerances. Perpendicularity, Parallelism, and Angularity fall into this category.
C. Positional Tolerances
 Positional Tolerances are a group of Geometric Tolerances that controls the extent of deviation of
the location of a feature from its true position
 This is a Three-dimensional Geometric Tolerance comprising Position, Symmetry, and
Concentricity.
GRADES OF TOLERANCES
 Grade is a measure of the magnitude of the tolerance.
 The lower the Grade number, the finer the Tolerance. There are a total of 18 Grades, which are designated
IT01, IT0, IT1, IT2, ....., IT16.
 As the Grade numbers get larger, the Tolerance zone becomes progressively wider. The selection of
Grade should depend on the circumstances. As the Grades get finer, the cost of production increases at a
sharper rate.
 The Tolerance Grades may be numerically determined in terms of the standard tolerance unit ‘i’,
where i in microns is given by
i = 0.45 ∛D + 0.001D (for Basic size up to and including 500 mm) and
i = 0.004D + 2.1 (for Basic size above 500 mm up to and including 3150 mm),
Where: D is in mm, and is the Geometric Mean of the Lower and Upper Diameters of a diameter step
 The various Diameter steps specified by ISI are: 1-3, 3-6, 6-10, 10-18, 18-30, 30-50, 50-80,
80-120, 120-180, 180-250, 250-315, 315-400, and 400-500 mm.
 The value of ‘D’ is taken as the Geometric Mean for a particular range of size to avoid continuous
variation of Tolerance with size.
 The Fundamental Deviations of Shafts of types d, e, f, and g are respectively −16D^0.44, −11D^0.41,
−5.5D^0.41, and −2.5D^0.34 (in microns).
 The Fundamental Deviations of Holes of types D, E, F, and G are respectively +16D^0.44, +11D^0.41,
+5.5D^0.41, and +2.5D^0.34 (in microns).
 The relative magnitude of each Grade, in terms of the standard tolerance unit i, is:
IT5 = 7i, IT6 = 10i, IT7 = 16i, IT8 = 25i, IT9 = 40i, IT10 = 64i,
IT11 = 100i, IT12 = 160i, IT13 = 250i, IT14 = 400i, IT15 = 640i, IT16 = 1000i
Example: Calculate the Tolerance and hence the limits of size for the Shaft and the Hole for the
Fit 60 mm H8-f7. The diameter steps are 50 mm and 80 mm.
Soln: D = √(50 × 80) = 63.25 mm
i = 0.45 ∛D + 0.001D
 = 0.45 ∛63.25 + 0.001(63.25)
 = 1.856 micron = 0.001856 mm
Tolerance for Hole H8 = 25i = 25 × 0.001856 = 0.0464 mm
Tolerance for Shaft f7 = 16i = 16 × 0.001856 = 0.0297 mm
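The worked example can be reproduced in a few lines. The limit sizes below additionally use the fundamental-deviation formula for an 'f' shaft quoted earlier (−5.5·D^0.41 microns), so treat the final limits as a sketch rather than certified standard data:

```python
import math

def std_tolerance_unit_microns(d_low, d_high):
    """Standard tolerance unit i (microns) for a diameter step up to 500 mm;
    D is the geometric mean of the step boundaries, in mm."""
    D = math.sqrt(d_low * d_high)
    return 0.45 * D ** (1 / 3) + 0.001 * D, D

i, D = std_tolerance_unit_microns(50, 80)   # D ~ 63.25 mm, i ~ 1.856 microns

IT8 = 25 * i / 1000   # Hole tolerance for H8, mm  (~0.0464)
IT7 = 16 * i / 1000   # Shaft tolerance for f7, mm (~0.0297)

fd_f = -5.5 * D ** 0.41 / 1000   # fundamental deviation of an 'f' shaft, mm

basic = 60.0
hole_low, hole_high = basic, basic + IT8          # H hole: lower deviation is zero
shaft_high = basic + fd_f                         # f shaft: upper deviation = fd
shaft_low = shaft_high - IT7

print(f"i = {i:.3f} micron")
print(f"Hole  60H8: {hole_low:.4f} to {hole_high:.4f} mm")
print(f"Shaft 60f7: {shaft_low:.4f} to {shaft_high:.4f} mm")
```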
Fits System
 The degree of tightness or looseness between two mating parts.
 Fit is an assembly condition between ‘Hole’ & ‘Shaft’
 Hole: A feature enveloping (surrounding) a component.
 Shaft: A feature enveloped (surrounded) by a component.
Clearance Fit:
 In this type of fit, the Largest permitted Shaft diameter is less than the smallest Hole diameter so
that the Shaft can rotate or slide according to the purpose of the assembly.
 A Clearance Fit has positive allowance, i.e. there is Minimum positive clearance between High
limit of the Shaft and Low limit of the Hole.
Types of Clearance Fit
Loose Fit
 It is used between those mating parts where no precision is required.
 It provides maximum allowance and is used on loose pulleys, agricultural machinery, etc.
Running Fit
 For a running fit, the dimension of the Shaft should be small enough to maintain a film of oil for
lubrication. It is used in bearing pairs etc.
Slide Fit or Medium Fit
 It is used on those mating parts where great precision is required.
 It provides medium allowance and is used in Tool slides, Slide valves, Automobile parts, etc.
Interference Fit:
 It is defined as the fit established when a Negative clearance exists between the sizes of Holes
and the Shaft.
 In this type of fit, the Minimum permitted diameter of the Shaft is larger than the Maximum
allowable diameter of the Hole.
 In case of this type of fit, the members are intended to be permanently attached.
Ex: Bearing bushes, Keys & key ways
Types of Interference Fit
Shrink Fit or Heavy Force Fit
 It refers to Maximum Negative Allowance.
 In assembly of the Hole and the Shaft, the Hole is expanded by heating and then rapidly cooled in its
position.
 It is used in fitting of Rims etc.
Medium Force Fit
 These fits have Medium Negative Allowance.
 Considerable pressure is required to assemble the Hole and the Shaft.
 It is used in car wheels, Armature of Dynamos etc.
Tight Fit or Force Fit
 A slight Negative Allowance exists between two mating parts.
 One part can be assembled into the other with a hand hammer or by light pressure.
Transition Fit:
 In this type of fit, the diameter of the largest allowable Hole is greater than that of the smallest
Shaft, and the smallest Hole is smaller than the largest Shaft, such that a small Positive or Negative
Clearance exists between the Shaft & Hole.
Ex: Coupling rings, Spigot in mating holes, etc.
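The three fit classes can be distinguished numerically from the limit sizes of the hole and shaft; here is a minimal sketch (the function name and sizes are illustrative, not from the text):

```python
def classify_fit(hole_min, hole_max, shaft_min, shaft_max):
    """Classify a hole/shaft pairing by its limit sizes (all in mm)."""
    if shaft_max < hole_min:        # even the largest shaft clears the smallest hole
        return "clearance"
    if shaft_min > hole_max:        # even the smallest shaft exceeds the largest hole
        return "interference"
    return "transition"             # overlap: small positive or negative clearance

# Illustrative limit sizes around a 60 mm basic size
print(classify_fit(60.000, 60.046, 59.940, 59.970))   # clearance
print(classify_fit(60.000, 60.046, 60.060, 60.080))   # interference
print(classify_fit(60.000, 60.046, 60.030, 60.060))   # transition
```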
Types of Transition Fit
Push Fit or Snug Fit
 It refers to zero Allowance and a light pressure is required in assembling the Hole and the Shaft.
 The moving parts show least vibration with this type of fit.
Force Fit or Shrink Fit
 A force fit is used when the two mating parts are to be rigidly fixed so that one cannot move
without the other.
 It either requires high pressure to force the Shaft into the Hole or the Hole to be expanded by
heating.
 It is used in Railway wheels, etc.
Wringing Fit
 A slight Negative Allowance exists between two mating parts in wringing fit.
 It requires pressure to force the Shaft into the Hole and gives a light assembly.
Basis of Fits
Hole Basis:
 In this system, the Basic Diameter of the Hole is constant while the Shaft size is varied according to
the type of fit.
Significance of Hole Basis System:
 Their selection depends on the production methods. Generally, Holes are produced by Drilling,
Boring, Reaming, Broaching, etc. whereas Shafts are either turned or ground.
 If the Hole Basis System is used, there will be reduction in production costs as only one tool is
required to produce the Hole and the Shaft can be easily machined to any desired size.
 Hence Hole Basis System is preferred over Shaft Basis System.
Shaft Basis system:
 In this system, the Basic Diameter of the Shaft is constant while the Hole size is varied according
to the type of Fit.
 It may, however, be necessary to use Shaft Basis System where different fits are required along a
long shaft.
 For example, in the case of driving shafts, a single shaft may have to accommodate a
variety of accessories such as Couplings, Bearings, Collars, etc.
 If the Shaft Basis System is used to specify the limit dimensions to obtain various types of Fits,
number of Holes of different sizes are required, which in turn requires tools of different sizes.
Fig. Hole basis system (a) Clearance fit (b) Transition fit (c) Interference fit
Fig. Shaft basis system (a) Clearance fit (b) Transition fit (c) Interference fit
GAUGES
 Production of components within the permissible tolerance limits facilitates interchangeable
manufacture.
 Various precision measuring instruments can be used to measure the actual dimensions of the
components.
 Gauges ensure that the components lie within the permissible limits, but they do not determine the
actual size or dimensions.
 Gauges are scale-less inspection tools, which are used to check the conformance of the parts, along with
the forms and relative positions of their surfaces, to the limits.
 The gauges required to check the dimensions of the components correspond to two sizes conforming to
the maximum and minimum limits of the components.
 A Go-No GO gauge refers to an inspection tool used to check a work piece against its allowed
tolerances.
 It derives its name from its use: the gauge has two tests; the check involves the work piece having to
pass one test (Go) and fail the other (No Go).
 It is an integral part of the quality process that is used in the manufacturing industry to ensure
interchangeability of parts between processes, or even between different manufacturers.
 A Go - No Go gauge is a measuring tool that does not return a size in the conventional sense, but
instead returns a state. The state is either acceptable (the part is within tolerance and may be used) or it
is unacceptable (and must be rejected).
 They are well suited for use in the production area of the factory as they require little skill or
interpretation to use effectively and have few, if any, moving parts to be damaged in the often hostile
production environment.
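The Go/No-Go decision described above reduces to two comparisons; a minimal sketch for inspecting a shaft (the helper name and limits are illustrative, not from the text):

```python
def inspect_shaft(diameter, min_limit, max_limit):
    """Go/No-Go inspection of a shaft: the GO gauge (set at the maximum limit) must
    pass over the shaft, and the NO-GO gauge (set at the minimum limit) must not."""
    go_passes = diameter <= max_limit       # GO ring gauge slides over the shaft
    no_go_passes = diameter < min_limit     # NO-GO ring gauge must NOT slide over
    return "accept" if (go_passes and not no_go_passes) else "reject"

# Illustrative shaft limits 59.940-59.970 mm
print(inspect_shaft(59.955, 59.940, 59.970))   # accept
print(inspect_shaft(59.930, 59.940, 59.970))   # reject (undersized: NO-GO passes)
print(inspect_shaft(59.985, 59.940, 59.970))   # reject (oversized: GO fails)
```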
Gauge Type: Description
Bore gauge: A device used for measuring holes.
Center gauges and fishtail gauges: Used in lathe work for checking the angles when grinding the
profiles of single-point screw-cutting tool bits and centers.
Dial indicator/dial gauge: An instrument used to accurately measure small linear distances.
Feeler gauge: A simple tool used to measure gap widths.
Gauge block: Used as a reference for the setting of measuring equipment used in machine shops, such
as micrometers, sine bars, calipers, and dial indicators.
Gauge pin: A precision ground cylindrical bar for use in Go/No-Go gauges or similar applications.
Go/No-Go gauge: Used to check a work piece against its allowed tolerances.
Profile gauge or contour gauge: A tool for recording the cross-sectional shape of a surface.
Radius gauge: A tool used to measure the radius of an object.
Ring gauge: Used for checking the external diameter of a cylindrical object.
Wire gauge: A measuring tool that determines the thickness of a wire.
