
Measurement & Metrology

Introduction
Metrology literally means science of measurements. Metrology is
practised almost every day, often unknowingly, in our day-to-day tasks.
It is of utmost importance to measure different types of parameters or
physical variables and quantify each of them with a specific unit.
Measurement is closely associated with all the activities pertaining to
scientific, industrial, commercial, and human aspects. Its role is ever
increasing and encompasses different fields such as communications,
energy, medical sciences, food sciences, environment, trade,
transportation, and military applications.
Measurement is an act of assigning an accurate and precise value to
a physical variable.
The physical variable then gets transformed into a measured variable.
Meaningful measurements require common measurement standards.
The common methods of measurement are based on the development
of international specification standards.
Metrology is also concerned with the reproduction, conservation, and
transfer of units of measurements and their standards.
Measurements provide a basis for judgements about process
information, quality assurance, and process control.
Metrology may be divided, depending upon the quantity under
consideration, into metrology of length, metrology of time, etc.
Depending upon the field of application, it is divided into industrial
metrology, medical metrology, etc.
Engineering metrology is restricted to the measurement of length,
angles, and other quantities that are expressed in linear or angular units.
It is also necessary to see whether the result is given with sufficient
correctness and accuracy for a particular need. This will depend
on the method of measurement, the measuring devices used, etc.
Thus, in a broader sense metrology is not limited to length and angle
measurement but also concerned with numerous problems theoretical
as well as practical related with measurement such as:
1. Units of measurement and their standards, which is concerned with
the establishment,
reproduction, conservation and transfer of units of measurement and
their standards.
2. Methods of measurement based on agreed units and standards.
3. Errors of measurement.
4. Measuring instruments and devices.
5. Accuracy of measuring instruments and their care.
6. Industrial inspection and its various techniques.
7. Design, manufacturing and testing of gauges of all kinds.
Need for Inspection
Inspection is defined as a procedure in which a part or product
characteristic, such as a dimension, is examined to determine whether it
conforms to the design specification.
Inspection means checking of all materials, products or component
parts at various stages during manufacturing. It is the act of comparing
materials, products or components with some established standard.
Inspection is carried out to isolate and evaluate a specific design or
quality attribute of a component or product.
Industrial inspection assumed importance because of mass
production, which involved interchangeability of parts.
Measurement is an integral part of inspection.
Many inspection methods rely on measurement techniques, that is,
measuring the actual dimension of a part, while others employ the
gauging method.
The gauging method does not provide any information about the
actual value of the characteristic but is faster when compared to the
measurement technique.
In inspection, the part either passes or fails.
Thus, industrial inspection has become a very important aspect of
quality control.
Inspection essentially encompasses the following:
1. Ascertain that the part, material, or component conforms to the
established or desired standard.
2. Accomplish interchangeability of manufacture.
3. Sustain customer good will by ensuring that no defective product
reaches the customers.
4. Provide the means of finding out inadequacies in manufacture. The
results of inspection are recorded and reported to the manufacturing
department for further action to ensure production of acceptable parts
and reduction in scrap.
5. Purchase good-quality raw materials, tools, and equipment that
govern the quality of the finished products.
6. Coordinate the functions of quality control, production, purchasing,
and other departments of the organizations.
7. Take the decision to perform rework on defective parts, that is, to
assess the possibility of making some of these parts acceptable after
minor repairs.
8. Promote the spirit of competition, which leads to the manufacture of
quality products in bulk by eliminating bottlenecks and adopting better
production techniques.
Accuracy and Precision
Accuracy is the degree of agreement of the measured dimension with
its true magnitude. It can also be defined as the maximum amount by
which the result differs from the true value or as the nearness of the
measured value to its true value, often expressed as a percentage.
Precision is the degree of repetitiveness of the measuring process. It
is the degree of agreement of the repeated measurements of a quantity
made by using the same method, under similar conditions. In other
words, precision is the repeatability of the measuring process.
The ability of the measuring instrument to repeat the same results
during the act of measurements for the same quantity is known as
repeatability.
Precision refers to the consistent reproducibility of a measurement.
It is essential to know the difference between precision and
accuracy. Accuracy gives information regarding how far the measured
value is with respect to the true value, whereas precision indicates
quality of measurement, without giving any assurance that the
measurement is correct. These concepts are directly related to random
and systematic measurement errors.
Example
Figure clearly depicts the difference between precision and accuracy,
wherein several measurements are made on a component using
different types of instruments and the results plotted.
[Figure: four targets illustrating (i) precise but not accurate, (ii) accurate
but not precise, (iii) precise and accurate, and (iv) neither precise nor
accurate measurements]
Precision is not a single measurement but is associated with a
process or set of measurements.
Normally, in any set of measurements performed by the same
instrument on the same component, individual measurements are
distributed around the mean value and precision is the agreement of
these values with each other.
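As a sketch of this distinction, the following Python snippet (the readings and true value are made-up illustrative numbers, not from the text) quantifies accuracy as the offset of the mean from the true value, and precision as the spread of the repeated readings:

```python
# Illustrative sketch (readings and true value are assumed examples):
# accuracy is how close the mean of the readings is to the true value;
# precision is how well the readings agree with each other.
true_value = 25.000  # mm, assumed known reference

readings = [25.11, 25.12, 25.10, 25.11, 25.13]  # mm: precise but not accurate

mean = sum(readings) / len(readings)
accuracy_error = mean - true_value                # offset from the true value
precision_spread = max(readings) - min(readings)  # agreement among readings

print(f"mean = {mean:.3f} mm, accuracy error = {accuracy_error:+.3f} mm")
print(f"precision spread = {precision_spread:.3f} mm")
```

Here the small spread shows a precise process, while the large offset of the mean shows poor accuracy, matching the first panel of the figure.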
Error
The difference between the true value and the mean value of the set
of readings on the same component is termed as an error.
Error can also be defined as the difference between the indicated
value and the true value of the quantity measured.

E = Vm − Vt
where E is the error, Vm the measured value, and Vt the true value.
The value of E is also known as the absolute error.
The percentage error is sometimes known as relative error.
Relative error is expressed as the ratio of the error to the true value of the
quantity being measured. Accuracy of an instrument can also be expressed as
a percentage error. If an instrument measures Vm instead of Vt, then

% error = (Vm − Vt)/Vt × 100
Accuracy of an instrument is always assessed in terms of error. The
instrument is more accurate if the magnitude of error is low.
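The error definitions above can be expressed directly in code; in this Python sketch the values of Vm and Vt are assumed examples:

```python
# Sketch of the error definitions; Vm and Vt are assumed example values.
def absolute_error(vm, vt):
    return vm - vt                      # E = Vm - Vt

def relative_error_percent(vm, vt):
    return (vm - vt) / vt * 100         # error as a percentage of the true value

Vm, Vt = 50.05, 50.00
print(round(absolute_error(Vm, Vt), 3))          # 0.05
print(round(relative_error_percent(Vm, Vt), 3))  # 0.1
```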
In order to maintain the quality of manufactured components, accuracy
of measurement is an important characteristic.
Therefore, it becomes essential to know the different factors that affect
accuracy.
Two terms are associated with accuracy, especially when one strives
for higher accuracy in measuring equipment: sensitivity and
consistency.
The ratio of the change of instrument indication to the change of
quantity being measured is termed as sensitivity.
When successive readings of the measured quantity obtained from the
measuring instrument are same all the time, the equipment is said to be
consistent.
A highly accurate instrument possesses both sensitivity and
consistency.
Range is defined as the difference between the lower and higher
values that an instrument is able to measure.
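The sensitivity and range definitions can be sketched as follows; the dial-indicator numbers are illustrative assumptions:

```python
# Sketch of the sensitivity and range definitions; numbers are illustrative.
def sensitivity(delta_indication, delta_quantity):
    # ratio of the change in instrument indication to the change in the
    # quantity being measured
    return delta_indication / delta_quantity

def instrument_range(lowest, highest):
    # difference between the lowest and highest values the instrument measures
    return highest - lowest

# e.g. a dial indicator whose pointer travels 5 mm per 0.05 mm of plunger movement
print(sensitivity(5.0, 0.05))     # a magnification of about 100
print(instrument_range(0, 10))    # 10
```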
Objectives of Metrology and Measurements
The objectives of metrology and measurements include the following:
1. To ascertain that the newly developed components are
comprehensively evaluated and designed within the process, and
that facilities possessing measuring capabilities are available in the
plant
2. To ensure uniformity of measurements
3. To carry out process capability studies to achieve better component
tolerances
4. To assess the adequacy of measuring instrument capabilities to carry
out their respective measurements
5. To ensure cost-effective inspection and optimal use of available
facilities
6. To adopt quality control techniques to minimize scrap rate and rework
7. To establish inspection procedures from the design stage itself, so that the measuring methods are standardized
8. To calibrate measuring instruments regularly in order to maintain accuracy in measurement
9. To resolve the measurement problems that might arise on the shop floor
10. To design gauges and special fixtures required to carry out inspection
11. To investigate and eliminate different sources of measuring errors


General Measurement Concepts
We know that the primary objective of measurement in industrial
inspection is to determine the quality of the component manufactured.
Different quality requirements, such as permissible tolerance limits,
form, surface finish, size, and flatness, have to be considered to check
the conformity of the component to the quality specifications.
In order to realize this, quantitative information of a physical object or
process has to be acquired by comparison with a reference.

The three basic elements of measurement, which are of significance, are
the following:
1. Measurand, a physical quantity such as length, weight, and angle to
be measured.
2. Comparator, to compare the measurand (physical quantity) with a
known standard (reference) for evaluation.
3. Reference, the physical quantity or property to which quantitative
comparisons are to be made, which is internationally accepted.
Methods of Measurement
The common methods employed for making measurements are as
follows:
Direct method In this method, the quantity to be measured is directly
compared with the primary or secondary standard. Scales, vernier
callipers, micrometers, bevel protractors, etc., are used in the direct
method.
Indirect method In this method, the value of a quantity is obtained by
measuring other quantities that are functionally related to the required
value.
Fundamental or absolute method In this case, the measurement is
based on the measurements of base quantities used to define the
quantity.
Comparative method In this method, as the name suggests, the
quantity to be measured is compared with the known value of the same
quantity or any other quantity practically related to it.
Transposition method In this method, the quantity to be measured is first
balanced by a known value and then balanced again by another known value
of the same quantity; the measured value is deduced from the two known
values, as in the Gauss method of double weighing.
Coincidence method This is a differential method of measurement
wherein a very minute difference between the quantity to be measured
and the reference is determined by careful observation of the
coincidence of certain lines and signals. Measurements on vernier
calliper and micrometer are examples of this method.
Deflection method This method involves the indication of the value of
the quantity to be measured directly by deflection of a pointer on a
calibrated scale. Pressure measurement is an example of this method.
Complementary method The value of the quantity to be measured is
combined with a known value of the same quantity.
Null measurement method In this method, the difference between the
value of the quantity to be measured and the known value of the same
quantity with which comparison is to be made is brought to zero.
Substitution method It is a direct comparison method in which the quantity
to be measured is replaced by a known value of the same quantity, chosen so
that the effect produced on the indicating device is the same.
Contact method In this method, the surface to be measured is touched
by the sensor or measuring tip of the instrument.
Contactless method As the name indicates, there is no direct contact
with the surface to be measured.
Composite method The actual contour of a component to be checked
is compared with its maximum and minimum tolerance limits.
Standards of Measurement
A standard is defined as the fundamental value of any known physical
quantity, as established by national and international organizations of
authority, which can be reproduced. Fundamental units of physical
quantities such as length, mass, time, and temperature form the basis
for establishing a measurement system.
Standards play a vital role for manufacturers across the world in
achieving consistency, accuracy, precision, and repeatability in
measurements.
Two standard systems for linear measurement that have been accepted
and adopted worldwide are English and metric (yard and metre)
systems.
Most countries have realized the importance and advantages of the
metric system and accepted metre as the fundamental unit of linear
measurement.
Yard
The imperial standard yard is a bronze bar (82% copper, 13% tin, 5% zinc)
of 1 inch square cross section and 38 inches long. The bar has two 1/2 inch
diameter × 1/2 inch deep holes, each fitted with a 1/10 inch diameter gold
plug. The top surfaces of these plugs lie on the neutral axis of the bronze bar.
The yard is then defined as the distance between the two central
transverse lines on the plugs when the bar is maintained at a temperature of 62 °F.
One of the advantages of maintaining the gold plug lines at the neutral
axis is that this axis remains unaffected by bending of the bar.
Another advantage is that the gold plug is protected from getting
accidentally damaged.
It is important to note that an error occurs in the neutral axis because
of the support provided at the ends.
Imperial standard yard
Metre
This standard is also known as international prototype metre, which
was established in 1875.
It is defined as the distance between the centre positions of the two
lines engraved on the highly polished surface of a 102 cm bar of pure
platinum–iridium alloy (90% platinum and 10% iridium) maintained at 0
°C under normal atmospheric pressure and having the cross-section of
a web, as shown in Fig.

The top surface of the web contains graduations coinciding with the
neutral axis of the section.
This type of cross-section provides greater rigidity for the amount of
metal involved and is economical.
Modern Metre
The modern metre was defined in the 17th General Conference of
Weights and Measures held on 20 October 1983.
According to this, the metre is the length of the path travelled by light
in vacuum during a time interval of 1/299,792,458 of a second.

Disadvantages of Material Standards
The following disadvantages are associated with material standards:
1. Material standards are affected by changes in environmental conditions
such as temperature, pressure, humidity, and ageing, resulting in
variations in length.
2. Preservation of these standards is difficult because they must have
appropriate security to prevent their damage or destruction.
3. Replicas of material standards are not available for use at other places.
4. They cannot be easily reproduced.
5. Comparison and verification of the sizes of gauges pose considerable
difficulty.
6. While changing to the metric system, a conversion factor is necessary.
Subdivision of Standards
The international standard yard and the international prototype metre
cannot be used for general purposes. For practical measurement, there
is a hierarchy of working standards. Thus, depending upon the importance
and accuracy required for the work, the standards are subdivided into
four grades:
1. Primary standards, 2. Secondary standards, 3. Tertiary standards,
and 4. Working standards.
Primary Standards
For precise definition of the unit, there shall be one, and only one,
material standard, which is to be preserved under the most careful
conditions. It is called the primary standard. The international yard and
the international metre are examples of primary standards.
Secondary Standards
Secondary standards are made as nearly as possible exactly similar to
primary standards as regards design, material and length. They are
compared with primary standards after long intervals and the records of
deviation are noted.
Tertiary Standards
The primary and secondary standards are applicable only as ultimate
controls. Tertiary standards are the first standards to be used for
reference purposes in laboratories and workshops.
Working Standards
Working standards are used more frequently in laboratories and
workshops. They are usually made of a lower grade of material compared
to primary, secondary, and tertiary standards, for the sake of economy.
They are derived from fundamental standards. Both line and end working
standards are used.
Line and End Measurements
A length may be measured as the distance between two lines or as the
distance between two parallel faces. So, the instruments for direct
measurement of linear dimensions fall into two categories.
1. Line standards and 2. End standards.
Line Standards
When the length is measured as the distance between centers of two
engraved lines, it is called line standard. Both material standards yard
and meter are line standards. The most common example of line
measurements is the rule with divisions shown as lines marked on it.
Characteristics of Line Standards
1. Scales can be accurately engraved, but the engraved lines themselves
possess thickness, so it is not possible to take measurements with high
accuracy.
2. A scale is quick and easy to use over a wide range.
3. The scale markings are not subjected to wear. However, the leading
ends are subjected to wear and this may lead to undersize
measurements.
4. A scale does not possess a "built in" datum. Therefore it is not
possible to align the scale with the axis of measurement.
5. Scales are subjected to parallax error.
6. Also, the assistance of magnifying glass or microscope is required if
sufficient accuracy is to be achieved.
End standards
When length is expressed as the distance between two flat parallel
faces, it is known as end standard. Examples: Measurement by slip
gauges, end bars, ends of micrometer anvils, vernier calipers etc. The
end faces are hardened, lapped flat and parallel to a very high degree of
accuracy.
Characteristics of End Standards
1. These standards are highly accurate and used for measurement of
close tolerances in precision engineering as well as in standard
laboratories, tool rooms, inspection departments etc.
2. They require more time for measurements and measure only one
dimension at a time.
3. They are subjected to wear on their measuring faces.
4. Group of slips can be "wrung" together to build up a given size; faulty
wringing and careless use may lead to inaccurate results.
5. End standards have a built-in datum, since their measuring faces are
flat and parallel and can be positively located on a datum surface.
6. They are not subjected to parallax effect as their use depends on feel.
Comparison between Line Standards and End Standards:

1. Principle: In a line standard, length is expressed as the distance between two lines; in an end standard, as the distance between two flat parallel faces.
2. Accuracy: Line standards are limited to about ± 0.2 mm; for high accuracy, scales have to be used in conjunction with a magnifying glass or microscope. End standards are highly accurate, suitable for measurement of close tolerances up to ± 0.001 mm.
3. Ease and time of measurement: Measurement with a line standard is quick and easy. Use of an end standard requires skill and is time consuming.
4. Effect of wear: Scale markings are not subject to wear, but significant wear may occur on the leading ends, so it may be difficult to take the zero of the scale as a datum. End standards are subjected to wear on their measuring surfaces.
5. Alignment: A line standard cannot be easily aligned with the axis of measurement; an end standard can.
6. Manufacture and cost: Line standards are simple to manufacture at low cost; the manufacturing process for end standards is complex and the cost is high.
7. Parallax effect: Line standards are subjected to parallax error; end standards are not.
8. Examples: Line standards — scale (yard, metre, etc.); end standards — slip gauges, end bars, vernier calipers, micrometers, etc.
Limits, Fits and Tolerance
In the design and manufacture of engineering products a great deal of
attention has to be paid to the mating, assembly and fitting of various
components.

Present-day standards of quantity production, interchangeability, and
continuous assembly of many complex products could not exist under a
system in which each part is individually fitted, nor could many of the
exacting design requirements of modern machines be fulfilled.
Modern mechanical production engineering is based on a system of limits
and fits, which not only ensures the necessary accuracies of manufacture
but also forms a schedule or specification to which manufacture can adhere.
In order that a system of limits and fits may be successful, following
conditions must be fulfilled:
The range of sizes covered by the system must be sufficient for most
purpose.
It must be based on some standard so that everybody understands alike,
and a given dimension has the same meaning at all places.
For any basic size it must be possible to select, from a carefully
designed range of fits, the most suitable one for a given application.
Each basic size of hole and shaft must have a range of tolerance
values for each of the different fits.
The system must provide for both unilateral and bilateral methods of
applying the tolerance.
It must be possible for a manufacturer to use the system to apply
either a hole based or a shaft based system.
The system should cover work from high-class tool and gauge work down to
rough work, where wide limits of size are permissible.
Nominal Size and Basic Dimensions
Nominal size: A nominal size is the size which is used for purpose of
general identification.
Basic dimension: A basic dimension is the dimension as worked out by
purely design considerations.
Shaft: The term shaft refers not only to the diameter of a circular shaft
but to any external dimension of a component.
Hole: This term refers not only to the diameter of a circular hole but to
any internal dimension of a component.
Basic size: The basic size is the standard size for the part and is the
same for both hole and its shaft.
Zero line: This is the line which
represents the basic size so that the
deviation from the basic size is zero.
Limits of size: These are maximum and
minimum permissible sizes of the part.
Minimum limit of size: the
minimum size permitted for the
part.
Maximum limit of size: The
maximum size permitted for the
part.
Tolerance: The difference
between the maximum and
minimum limits of size.

Grade of Tolerance: The tolerance grade is an indication of the degree of
accuracy of manufacture. It is designated by the letters IT followed by a
number. The tolerance grades are IT01, IT0, IT1, and so on up to IT16;
the larger the number, the larger the tolerance.
Upper deviation: This is the amount from the basic size or zero line to
the maximum limit of size, for either a hole or a shaft. It is designated
‘ES’ for a hole and ‘es’ for a shaft.
Upper deviation is a positive quantity when the maximum limit of size is
greater than the basic size and negative quantity when the maximum
limit of size is less than the basic size.
Lower deviation: This is the amount from the basic size or zero line to the
minimum limit of size. It is designated ‘EI’ for a hole and ‘ei’ for a shaft.
Lower deviation is a positive quantity when the minimum limits of size is
greater than the basic size and a negative quantity when the minimum
limit of size is less than the basic size.
Fundamental deviation: It fixes the position of the tolerance zone in
relation to the zero line.
Fit: Fit means a degree of tightness or looseness between two mating
parts to perform a definite function.
Clearance: The difference between the sizes of a hole and a shaft which are to
be assembled together, when the shaft is smaller than the hole.
Interference: The difference between the sizes of a hole and a shaft which are
to be assembled together, when the shaft is larger than the hole.
Clearance fit: A clearance fit could be
obtained by making the lower limit on the
hole equal to or larger than the upper limit
on the shaft. The largest permissible
diameter of the shaft is smaller than the
diameter of the smallest hole. This type of
fit always provides clearance.
Interference fit: An interference fit would
be obtained with equal certainty by
making the lower limit on the shaft equal
to or larger than the upper limit on the
hole. Interference fit is a form of a tight fit.
Transition fit: Between these two
conditions lies a range of fits known as
transition fits. These are obtained when the
tolerance zones of the hole and the shaft
overlap, so that an assembly may result in
either a clearance or an interference.
Allowance: The difference between the maximum shaft and minimum
hole is known as allowance. In a clearance fit, this is the minimum
clearance and is a positive allowance. In an interference fit, it is
maximum interference and is a negative allowance.
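The clearance, interference, and allowance definitions above can be sketched in code; the helper names and the limit values are illustrative assumptions, not from the text:

```python
# Illustrative sketch (names and limit values assumed): classifying the fit
# between a hole and a shaft from their limit sizes, per the definitions above.
def classify_fit(hole_min, hole_max, shaft_min, shaft_max):
    if shaft_max < hole_min:       # largest shaft smaller than smallest hole
        return "clearance"
    if shaft_min > hole_max:       # smallest shaft larger than largest hole
        return "interference"
    return "transition"            # tolerance zones overlap

def allowance(hole_min, shaft_max):
    # minimum hole minus maximum shaft: positive (minimum clearance) for a
    # clearance fit, negative (maximum interference) for an interference fit
    return hole_min - shaft_max

print(classify_fit(25.00, 25.03, 24.980, 24.994))  # clearance
print(classify_fit(25.00, 25.03, 25.035, 25.050))  # interference
print(round(allowance(25.00, 24.994), 3))          # 0.006
```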

Basis of Fit (or limit) System


A fit or limit system consists of a series of tolerances arranged to suit a
specific range of sizes and functions, so that limits of size may be
selected and given to mating components to ensure specific classes of
fit. This system may be arranged on the following basis:
1. Hole basis system
2. Shaft basis system

Hole Basis System:
The hole basis system is one in which the limits
on the hole are kept constant and the
variations necessary to obtain the various
classes of fit are arranged by varying those
on the shaft.
Shaft basis system:
Shaft basis system is one in which
the limits on the shaft are kept
constant and variations necessary
to obtain the classes of fit are
arranged by varying the limits on
the holes.

Systems of Specifying Tolerance
The tolerance, or the error permitted in manufacturing a particular
dimension, may be allowed to vary either on one side of the basic size or
on both sides of it. Accordingly, two systems of specifying tolerance exist:
1. Unilateral System
2. Bilateral System
Unilateral System
When the tolerance distribution is only in one direction of the basic size,
it is known as unilateral tolerance. Unilateral tolerance is employed
when precision fits are required during assembly.
The unilateral system applies satisfactorily and realistically to certain
machining processes where it is common knowledge that dimensions
will most likely deviate in one direction. This system is most commonly
used in interchangeable manufacturing, especially where precision fits
are required.
Examples: 40 +0.02/+0.01, 40 +0.02/−0.00, 40 −0.01/−0.02, 40 +0.00/−0.02
Bilateral System
When the tolerance distribution lies in both directions of the basic size, it is
known as bilateral tolerance. This system is generally preferred in mass
production where the machine is set for the basic size.
In the bilateral system, to retain the same fit when the tolerance is varied,
the basic size dimension of one or both of the mating parts will also have to
be varied. Bilateral tolerances help in machine setting and are used in
large-scale manufacturing.
Examples: 40 ± 0.02, 40 +0.02/−0.01
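Either way of specifying a tolerance resolves to the same pair of limit sizes. The following sketch (the helper name and example values are assumptions for illustration) converts a basic size and two deviations into the maximum and minimum limits of size:

```python
# Sketch (helper name and values assumed): a toleranced dimension reduces to
# a pair of limit sizes whether it is specified unilaterally or bilaterally.
def limit_sizes(basic, upper_dev, lower_dev):
    # returns (maximum limit of size, minimum limit of size)
    return basic + upper_dev, basic + lower_dev

print(limit_sizes(40.0, 0.02, 0.01))    # unilateral 40 +0.02/+0.01
print(limit_sizes(40.0, 0.02, -0.02))   # bilateral  40 ± 0.02
```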
Designation of Holes, Shaft and Fits
A hole or shaft is completely described if the basic size, followed by the
appropriate letter and by the number of the tolerance grade is given.

Example: A 25 mm H-hole with a tolerance grade IT8 is given as:


25 mm H8 or simply 25H8
A 25 mm f-shaft with the tolerance grade IT7 is given as
25 mm f7 or simply 25f7

A fit is indicated by combining the designations for both the hole and
shaft with the hole designation written first, regardless of system.

Example: 25H8-f7 or
25H8/f7
Commonly used Holes and Shaft
In several engineering applications the fits required can be met by a
quite small selection from the full range available in the standards. The
holes and shafts commonly used are as follows:

Holes: H6, H7, H8, H9, H11.


Shafts: c11, d10, e9, f7, g6, h6, k6, n6, s6

Concept of Interchangeability
Interchangeability refers to assembling a number of unit components
taken at random from stock so as to build up a complete assembly
without fitting or adjustment.
Modern production techniques require that a complete product be
broken into various component parts so that the production of each part
becomes an independent process, leading to specialization. The various
components are manufactured in one or more batches by different
persons on different machines at different locations and are then
assembled at one place.
To achieve this, it is essential that the parts are manufactured in bulk to
the desired accuracy and, at the same time, adhere to the limits of
accuracy specified. Manufacture of components under such conditions
is called interchangeable manufacture.
When interchangeable manufacture is adopted, any one component
selected at random should assemble with any other arbitrarily chosen
mating component. In order to assemble with a predetermined fit, the
dimensions of the components must be confined within the permissible
tolerance limits.
Production on an interchangeable basis results in an increased
productivity with a corresponding reduction in manufacturing cost.
Modern manufacturing techniques that complement mass production of
identical parts facilitating interchangeability of components have been
developed.
Another major advantage of interchangeability is the ease with which
replacement of defective or worn-out parts is carried out, resulting in
reduced maintenance cost.
Interchangeable manufacture increases productivity and reduces
production and time costs.
Example: The following limits are specified in a limit system, to give a
clearance fit between a hole and a shaft:
Hole = 25 +0.03/−0.00 mm and shaft = 25 −0.006/−0.020 mm
Determine the following:
(a) Basic size
(b) Tolerances on shaft and hole
(c) Maximum and minimum clearances
Solution
(a) Basic size is the same for both the shaft and the hole, that is, 25 mm.
(b) Determination of tolerance:
Tolerance on hole = HLH − LLH [High limit of the hole (HLH) and Low limit
of the hole(LLH)] = 25.03 − 25.00 = 0.03 mm
Tolerance on shaft = HLS − LLS [High limit of the shaft (HLS) and Low
limit of the shaft (LLS)] = [(25 − 0.006) − (25 − 0.020)] = 0.014 mm
(c) Determination of clearances:
Maximum clearance = HLH − LLS
= 25.03 − 24.98 = 0.05 mm
Minimum clearance = LLH − HLS
= 25.00 − (25 − 0.006) = 25.00 − 24.994 = 0.006 mm
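The worked example can be checked numerically; this sketch simply recomputes the four results from the given limits:

```python
# Recomputing the worked example: hole 25 +0.03/-0.00 mm, shaft 25 -0.006/-0.020 mm.
HLH, LLH = 25.030, 25.000              # high and low limits of the hole
HLS, LLS = 25.0 - 0.006, 25.0 - 0.020  # high and low limits of the shaft

hole_tolerance = HLH - LLH    # tolerance on hole
shaft_tolerance = HLS - LLS   # tolerance on shaft
max_clearance = HLH - LLS     # largest hole against smallest shaft
min_clearance = LLH - HLS     # smallest hole against largest shaft

print(round(hole_tolerance, 3))   # 0.03
print(round(shaft_tolerance, 3))  # 0.014
print(round(max_clearance, 3))    # 0.05
print(round(min_clearance, 3))    # 0.006
```

Note that the minimum clearance is positive, confirming that the specified limits do give a clearance fit.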
Gauges
A gauge is a tool or instrument used to measure or compare a
component.
It is here employed in the sense of an instrument which, having fixed
dimensions, is used to determine whether the size of some component
exceeds or is less than the size of the gauge itself.
The true value of a gauge is measured by its accuracy and service life.
High-carbon and alloy tool steels have been the principal materials used
for many years.
Classification of Gauges
The following gauges represent those most commonly used in
production work.
The classification is principally according to the shape or purpose for
which each is used.
1. Snap gauges
2. Plug gauges
3. Ring gauges
4. Length gauges
5. Form comparison gauges
6. Thickness gauges
7. Indicating gauges
8. Pneumatic gauges
9. Electric gauges
10. Electronic gauges
11. Projecting gauges
12. Multiple dimension gauges
Linear Measurement
Linear measurement applies to measurement of lengths, diameters,
heights and thickness including external and internal measurements.
The line measuring instruments have series of accurately spaced lines
marked on them, e.g. scale. The dimension to be measured is aligned
with the graduations of the scale.
Linear measuring instruments are designed either as line instruments or
as end instruments. In end measuring instruments, the measurement is
taken between two end surfaces, as in micrometers, slip gauges etc.
The instruments used for linear measurements can be classified as:
1. Direct measuring instruments
2. Indirect measuring instruments
The direct measuring instruments are of two types:
1. Graduated
2. Non Graduated
The graduated instruments include rules, vernier calipers, vernier
height gauges, vernier depth gauges, micrometers, dial indicators etc.
The non-graduated instruments include calipers, trammels, telescopic
gauges, surface gauges, straight gauges, wire gauges, screw pitch
gauges, thickness gauges, slip gauges etc. They can also be classified
as:
1. Non-precision instruments such as steel rule, calipers etc.
2. Precision measuring instruments, such as vernier instruments,
micrometers, dial gauges etc.
Engineering Steel Rule
An engineer's steel rule, also known as a scale, is a line measuring
device. It is a precision measuring instrument and must be treated as
such and kept in a nicely polished condition.
It works on the basic measuring technique of comparing an unknown
length to the one previously calibrated.
It consists of strip of hardened steel having line graduations etched or
engraved at interval of fraction of a standard unit of length.
The scale is available in 150 mm, 300 mm, 600 mm and 1000 mm
length.
Some scales are provided with some attachments and special feature
to make their use versatile e.g., very small scales may be provided with
a handle to use it conveniently.
Shrink rules are scales which take into account the shrinkage of
materials after cooling.
Following are the desirable qualities of a steel rule:
Good quality spring steel
Clearly engraved lines
Reputed make
Metric on two edges
Thickness should be minimum
Use of Scale:
To get good results it is necessary that certain techniques must be
followed while using a scale.
The scale is worn at the ends, and it is also very difficult to line up
the end of the scale accurately with the edge of the part to be
measured.
The scale should never be laid flat on the part to be measured
because it is difficult to read the correct dimension.
The degree of accuracy which can be obtained while making
measurements with a steel rule or scale depends upon:
Quality of the rule
Skill of the user
Calipers
Caliper is an instrument used for measuring distance between or over
surface, or for comparing dimensions of work pieces with such
standards as plug gauges, graduated rules etc.
Calipers do only the job of transferring a dimension; they are not
measuring instruments on their own.
This is illustrated in Fig., where a caliper is shown transferring the
outer diameter of a job on to a graduated steel rule, so that the
dimension can be read accurately and conveniently.
Calipers are available in various types and sizes. The two major types
are the firm joint caliper and the spring caliper.
Classification of Callipers
Types of callipers: (a) Outside calliper (b) Inside calliper (c) Divider
(d) Hermaphrodite calliper
Vernier Calipers
The vernier instruments generally used in workshop and engineering
metrology have comparatively low accuracy.
The line of measurement of such instruments does not coincide with
the line of the scale.
The accuracy therefore depends upon the straightness of the beam
and squareness of the sliding jaw with respect to the beam.
To ensure the squareness, the sliding jaw must be clamped before
taking the reading.
The zero error must also be taken into consideration.
They are made of alloy steel, hardened and tempered (to about 58 HRC),
and the contact surfaces are lap finished.
The Vernier Principle
The principle of vernier is that when two scales or division slightly
different in size are used, the difference between them can be utilised to
enhance the accuracy of measurement.
Principle of 0.1 Vernier
The main scale is accurately graduated in 1 mm steps and terminates
in the form of a caliper jaw.
There is a second, movable scale, which is attached to the sliding
caliper jaw.
The movable scale is equally divided into 10 parts but its length is only
9 mm. Therefore one division on this scale is equivalent to 9/10 = 0.9
mm. This means the difference between one graduation on the main
scale and one graduation on the sliding or vernier scale is 1.0 – 0.9 =
0.1 mm.

Hence, if the vernier caliper is initially closed and then opened so that
the first graduation on the sliding scale coincides with the first
graduation on the main scale, the jaw has moved a distance equal to
0.1 mm, as shown in Fig.
Such a vernier scale is of limited use because measurements of
greater accuracy are normally required in precision engineering work.
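The 0.1 mm vernier principle described above can be sketched numerically; the reading of 24.3 mm below is a hypothetical example:

```python
# Minimal sketch of the 0.1 mm vernier principle.
msd = 1.0                          # one main-scale division, mm
vsd = 9.0 / 10.0                   # one vernier division: 10 divisions over 9 mm
least_count = round(msd - vsd, 2)  # 1.0 - 0.9 = 0.1 mm

# Hypothetical reading: main scale shows 24 mm, 3rd vernier line coincides.
main_reading, coinciding_line = 24, 3
total = main_reading + coinciding_line * least_count
print(least_count, round(total, 1))   # 0.1 24.3
```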
Types of Vernier Calipers
According to IS 3651—1974 (Specification for vernier caliper), three
types of vernier calipers have been specified to meet the various needs
of external and internal measurements up to 2000 mm with vernier
accuracy of 0.02, 0.05 and 0.1 mm.
The three types are called types A, B, C and have been shown in Figs.
respectively. All the three types are made with only one scale on the
front of the beam for direct reading.
Type A has jaws on both sides for external and internal
measurements, and also has a blade for depth measurements. Type B
is provided with jaws on one side for external and internal
measurements. Type C has jaws on both sides for making the
measurements and for marking operations.
All parts of the vernier calipers are made of good quality steel and the
measuring faces hardened to 650 H.V. minimum. The recommended
measuring ranges (nominal sizes) of vernier calipers as per IS 3651—
1974 are 0—125, 0—200, 0—250, 0—300, 0—500, 0—750, 0—1000,
750—1500 and 750—2000 mm.
Errors in Vernier Calipers
Errors are usually made in measurements with vernier calipers from
manipulation of vernier caliper and its jaws on the work piece.
For instance, in measuring an outside diameter, one should be sure
that the caliper bar and the plane of the caliper jaws are truly
perpendicular to the work piece’s longitudinal centre line.
One should be sure that the caliper is not canted, tilted, or twisted.
This happens because the relatively long, extending main bar of the
average vernier caliper tips readily in one direction or the other.
The accuracy of measurement with vernier calipers depends to a great
extent upon the condition of the jaws of the caliper. Natural wear and
warping of the vernier caliper jaws should be tested frequently by
closing the jaws together tightly, i.e., setting them to the 0.0 point
of the main and vernier scales. In this position the caliper is held
against a light source. If there is wear, spring, or warp, a knock-kneed
condition will be observed. If the measurement error on this account is
expected to be greater than 0.005 mm, the instrument should not be used
and should be sent for repair.
When the sliding jaw frame has become so worn or warped that it does
not slide squarely and snugly on the main caliper beam, the jaws will
appear as shown in fig. Where a vernier caliper is used mostly for
measuring inside diameters, the jaws may become bowlegged or their
outside edges worn down.
Precautions in using Vernier Caliper
The following precaution should be taken while using a vernier caliper:
No play should be there between the sliding jaws on scale, otherwise
the accuracy of the vernier caliper will be lost. If play exists then the gib
at the back of jaw assembly must be bent so that gib holds the jaw
against the frame and play is removed.
Usually the tips of measuring jaws are worn and that must be taken
into account. Most of the errors usually result from manipulation of the
vernier caliper and its jaws on the work piece.
In measuring an outside diameter it should be ensured that the caliper
bar and the plane of the caliper jaws are truly perpendicular to the work
piece's longitudinal centre line. It should be ensured that the caliper is
not canted, tilted or twisted.
The stationary caliper jaw of the vernier caliper should be used as the
reference point and measured point is obtained by advancing or
withdrawing the sliding jaw.
In general, the vernier caliper should be gripped near or opposite the
jaws; one hand for the stationary jaw and the other hand generally
supporting the sliding jaw. The instrument should not be held by the
over-hanging “tail” formed by the projecting main bar of the caliper.
The accuracy in measurement primarily depends on two senses, viz.,
sense of sight and sense of touch (feel).
The short-comings of imperfect vision can however be overcome by
the use of corrective eye-glass and magnifying glass. But sense of
touch is an important factor in measurements. Sense of touch varies
from person to person and can be developed with practice and proper
handling of tools.
One very important thing to note here is that sense of touch is most
prominent in the finger-tips, therefore, the measuring instrument must
always be properly balanced in hand and held lightly in such a way that
only fingers handle the moving and adjusting screws etc. If tool be held
by force, then sense of feel is reduced.
Vernier height gauge
Vernier height gauge is similar to vernier calliper but in this instrument
the graduated bar is held in a vertical position and it is used in
conjunction with a surface plate.
Construction:
A vernier height gauge consists of
A finely ground and lapped base. The base is massive and robust in
construction to ensure rigidity and stability.
A vertical graduated beam or column supported on a massive base.
Attached to the beam is a sliding vernier head carrying the vernier
scale and a clamping screw.
An auxiliary head which is also attached to the beam above the sliding
vernier head. It has fine adjusting and clamping screw.
A measuring jaw or a scriber attached to the front of the sliding vernier
Vernier height gauge
Use
The vernier height gauge is designed for accurate measurements and
marking of vertical heights above a surface plate datum.
It can also be used to measure differences in heights by taking the
vernier scale readings at each height and determining the difference by
subtraction.
It can be used for a number of applications in the tool room and
inspection department. The important features of vernier height gauge
are:
•All the parts are made of good quality steel or stainless steel.
•The beam should be sufficiently rigid and square with the base.
•The measuring jaw should have a clear projection from the edge of the
beam at least equal to the projection of the base from the beam.
•The upper and lower gauging surfaces of the measuring jaw shall be
flat and parallel to the base.
•The scriber should also be of the same nominal depth as the
measuring jaw so that it may be reversed.
•The projection of the jaw should be at least 25 mm.
•The slider should have a good sliding fit all along the full working
length of the beam.
•Height gauges can also be provided with dial gauges instead of vernier.

Precautions
When not in use, the vernier height gauge should be kept in its case.
It should be tested for straightness, squareness and parallelism of the
working faces of the beam, measuring jaw and scriber.
The springing of the measuring jaw should always be avoided.
Vernier Depth Gauge
Vernier depth gauge is used to measure the depths of holes, slots and
recesses, to locate centre distances etc. It consists of
A sliding head having flat and true base free from curves and
waviness.
A graduated beam known as main scale. The sliding head slides over
the graduated beam.
An auxiliary head with a fine adjustment and a clamping screw.
The beam is perpendicular to the base in both directions and its ends
square and flat.
The end of the sliding head can be set at any point with the fine
adjustment mechanism, locked, and read from the vernier provided on it.
While using the instrument, the base is held firmly on the reference
surface and the beam is lowered into the hole until it contacts the
bottom surface of the hole.
The final adjustment, depending upon the sense of correct feel, is made
by the fine adjustment screw. The clamping screw is then tightened and
the instrument is removed from the hole, the reading being taken in the
same way as with the vernier calliper. While using the instrument it
should be ensured that the reference surface on which the depth gauge
base is rested is satisfactorily true, flat and square.
Micrometers
Micrometers are designed on the principle of a screw and nut.
The micrometer screw gauge essentially consists of an accurate
screw having about 10 or 20 threads per cm and revolves in a fixed nut.
The end of the screw forms one measuring tip and the other
measuring tip is constituted by a stationary anvil in the base of the
frame. The screw is threaded for a certain length and is plain afterwards.
The plain portion is called the sleeve and its end is the measuring surface.
The spindle is advanced or retracted by turning a thimble connected to
the spindle. The spindle is a slide fit over the barrel and barrel is the
fixed part attached with the frame.
The barrel is graduated in unit of 0.05 cm. i.e. 20 divisions per cm,
which is the lead of the screw for one complete revolution.
The thimble has got 25 divisions around its periphery on circular
portion. Thus it subdivides each revolution of the screw in 25 equal
parts, i.e. each division corresponds to 0.002 cm. A lock nut is provided
for locking a dimension by preventing motion of the spindle.
Ratchet stop is provided at the end of the thimble cap to maintain
sufficient and uniform measuring pressure so that standard conditions of
measurement are attained.
Ratchet stop consists of an overriding clutch held by a weak spring.
When the spindle is brought into contact with the work at the correct
measuring pressure, the clutch starts slipping and no further movement
of the spindle takes place by the rotation of ratchet. In the backward
movement it is positive due to shape of ratchet.
Reading a Micrometer:
In order to make it possible to read up to 0.0001 inch in micrometer
screw gauge, a vernier scale is generally made on the barrel.
The vernier scale has 10 straight lines on barrel and these coincide
with exact 9 divisions on the thimble. Thus one small deviation on
thimble is further subdivided into 10 parts and taking the reading one
has to see which of the vernier scale division coincides with division of
the thimble.
Accordingly the reading for the given arrangement in fig. will be:
On main barrel: 0.120″
On thimble: 0.014″
On vernier scale: 0.0001″
Total reading: 0.1341″
Before taking the reading anvil and spindle must be brought together
carefully and initial reading noted down. Its calibration must be checked
by using standard gauge blocks.
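Summing the three components of such a reading can be sketched as follows (values taken from the example above):

```python
# Sketch: summing the three parts of an inch micrometer reading.
barrel = 0.120     # main barrel scale, inches
thimble = 0.014    # thimble reading, inches (0.001" divisions)
vernier = 0.0001   # vernier on the barrel, inches (0.0001" divisions)

total = barrel + thimble + vernier
print(round(total, 4))   # 0.1341
```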
Practical Applications of Micrometers

In metric micrometers, the pitch of the screw thread is 0.5 mm, so that
one revolution of the screw moves it axially by 0.5 mm. The main scale
on the barrel has a least division of 0.5 mm. The thimble has 50
divisions on its circumference.
One division on thimble = 0.5 / 50 mm = 0.01 mm
If a vernier scale is also incorporated, then subdivisions on the thimble
can be estimated up to an accuracy of 0.001 mm.
Reading of micrometer: 3.5 mm on barrel and 7 divisions on thimble
= 3.5 + 7 × 0.01 = 3.5 + 0.07 = 3.57 mm
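The least count and the metric reading can be recomputed as a quick sketch:

```python
# Sketch of the metric micrometer least count and reading.
pitch = 0.5                              # mm of axial travel per revolution
thimble_divisions = 50
least_count = pitch / thimble_divisions  # 0.5 / 50 = 0.01 mm

barrel_reading = 3.5                     # mm, from the main scale
thimble_reading = 7                      # divisions on the thimble
total = barrel_reading + thimble_reading * least_count
print(least_count, round(total, 2))      # 0.01 3.57
```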
Cleaning the Micrometer:
Micrometer screw gauge should be wiped free from oil, dirt, dust and
grit.
When micrometer feels gummy and dust ridden and the thimble fails
to turn freely, it should never be bodily dunked in kerosene or solvent
because just soaking the assembled micrometer fails to float the dirt
away.
Further it must be remembered that the apparent stickiness of the
micrometer may not be due to the grit and gum but to a damaged thread
and sprung frame or spindle.
Every time the micrometer is used, measuring surface, the anvil and
spindle should be cleaned. Screw the spindle lightly but firmly down to a
clean piece of paper held between spindle and anvil.
Pull the piece of paper out from between the measuring surfaces. Then
unscrew the spindle a few turns and blow out any fuzz or particles of
paper that may have clung to the sharp edges of the anvil and spindle.
Precautions in using Micrometer
In order to get good results out of the use of micrometer screw gauge,
the inspection of the parts must be made as follows. Micrometer should
be cleaned of any dust and spindle should move freely.
The part whose dimension is to be measured must be held in the left
hand and the micrometer in the right hand. The way to hold the
micrometer is to place the small finger and adjoining finger in the
U-shaped frame.
The forefinger and thumb are placed near the thimble to rotate it and
the middle finger supports the micrometer holding it firmly.
The micrometer dimension is set slightly larger than the size of the
part and part is slid over the contact surfaces of micrometer gently. After
it, the thimble is turned till the measuring pressure is applied.
In the case of circular parts, the micrometer must be moved carefully
over representative arc so as to note maximum dimension only. Then
the micrometer reading is taken.
The micrometers are available in various sizes and ranges, and
corresponding micrometer should be chosen depending upon the
dimension.
Bore gauge:
The dial bore gauges shown in fig. are for miniature hole
measurements.
The gauge is supplied with a set of split-ball measuring contact points
which are hard chrome-plated to retain their original sphericity.
Along with the measuring probes, setting rings are also provided to zero
set the indicator whenever the probes are interchanged.
Actual ring size is engraved on the ring frames to the closest 0.001 mm
value.
Dial indicators
Dial indicators are small indicating devices using mechanical means
such as gears and pinions or levers for magnification system. They are
basically used for making and checking linear measurements.
Many a time they are also used as comparators. A dial indicator, in fact,
is a simple type of mechanical comparator.
When a dial indicator is used as an essential part of the mechanism of
any set-up for comparison measurement purposes, it is called a gauge.
The dial indicator measures the displacement of its plunger or a stylus
on a circular dial by means of a rotating pointer.
Dial indicators are very sensitive and versatile instruments.
They require less skill in their use than other precision instruments,
such as micrometers, vernier callipers, gauges etc. However, a dial
indicator by itself is not of much use unless it is properly mounted and
set before using for inspection purposes.
Uses:
By mounting a dial indicator on any suitable base and with various
attachments, it can be used for variety of purposes as follows.
1. Determining errors in geometrical forms, e.g., ovality,
out-of-roundness, taper etc.
2. Determining positional errors of surfaces, e.g., in squareness,
parallelism, alignment etc.
3. Taking accurate measurements of deformation (extension or
compression) in tension and compression testing of materials.
4. Comparing two heights or distances between narrow limits
(comparator).
The practical applications of dial indicators include:
1. To check alignment of lathe centers by using a suitable accurate bar
between centers.
2. To check trueness of milling machine arbors.
3. To check parallelism of the shaper ram with table surface or like.
Figure: Dial Indicators
Slip Gauges
Slip gauges or gauge blocks are a universally accepted end standard of
length in industry. They were introduced by C. E. Johansson, a Swedish
engineer, and are also called Johansson gauges.
Slip gauges are rectangular blocks of high grade steel with
exceptionally close tolerances. These blocks are suitably hardened
through out to ensure maximum resistance to wear.
They are then stabilized by heating and cooling successively in stages
so that hardening stresses are removed. After being hardened, they are
carefully finished by high grade lapping to a high degree of finish,
flatness and accuracy.
For successful use of slip gauges their working faces are made truly
flat and parallel. A slip gauge is shown in fig. 3.36. Slip gauges are
also made from tungsten carbide, which is extremely hard and wear
resistant.
The cross-sections of these gauges are 9 mm x 30 mm for sizes up to
10 mm and 9 mm x 35 mm for larger sizes. Any two slips when perfectly
clean may be wrung together. The dimensions are permanently marked
on one of the measuring faces of gauge blocks.
Dimensions of a Slip Gauge
Gauges blocks are used for:
Direct precise measurement, where the accuracy of the work piece
demands it.
For checking accuracy of vernier callipers, micrometers, and such
other measuring instruments.
Setting up a comparator to a specific dimension.
For measuring angle of work piece and also for angular setting in
conjunction with a sine bar.
The distances of plugs, spigots, etc. on fixture are often best
measured with the slip gauges or end bars for large dimensions.
To check gap between parallel locations such as in gap gauges or
between two mating parts.
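Slip gauges are wrung into stacks that sum to the required dimension; the usual rule of thumb is to eliminate the smallest decimal place first. A hedged greedy sketch follows; the block values it produces are an idealized illustration, not drawn from any particular standard set, and build_stack is an invented helper name:

```python
# Illustrative sketch: choosing slip gauges for a target dimension by
# clearing the smallest decimal place first (thousandths, then
# hundredths, then tenths), finishing with an idealized whole-mm block.
def build_stack(target_mm):
    stack = []
    remaining = round(target_mm, 3)
    for step in (0.001, 0.01, 0.1):
        digit = int(round(remaining / step)) % 10
        if digit:
            block = round(1.0 + digit * step, 3)   # e.g., 1.005, 1.02, 1.3
            stack.append(block)
            remaining = round(remaining - block, 3)
    if remaining:
        stack.append(remaining)                    # remainder, idealized
    return stack

print(build_stack(41.125))   # [1.005, 1.02, 1.1, 38.0]
```

In practice the blocks chosen must of course exist in the set at hand, so real stack selection also checks availability.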
Angular Measurement
Angular measurements are frequently necessary for the manufacture
of interchangeable parts. Ships and aeroplanes can navigate
confidently without sight of land only because precise angular
measuring devices are available. Such devices are also used in
astronomy to determine the positions of stars and their approximate
distances.
The angle is defined as the opening between two lines which meet at
a point. If one of the two lines is moved at a point in an arc, a complete
circle can be formed.
The basic unit in angular measurement is the right angle, which is
defined as the angle between two lines which intersect so as to make
the adjacent angles equal.
If a circle is divided into 360 equal parts, each part is called a
degree (°). Each degree is divided into 60 minutes (′), and each minute
is divided into 60 seconds (″).
This method of defining angular units is known as sexagesimal
system, which is used for engineering purposes.
An alternative method of defining angle is based on the relationship
between the radius and arc of a circle. It is called as radian.
Radian is defined as the angle subtended at the centre by an arc of a
circle of length equal to its radius.
It is more widely used in mathematical investigation.
2π radians = 360°, giving
1 radian = 57.2958 degrees.
In addition, linear ratios such as 1 in 30 or millimetres per metre are
often used for specifying tapers and departures from squareness or
parallelism.
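These conversions can be sketched with Python's math module; the 1-in-30 taper angle is included as an illustration:

```python
import math

# Converting between the sexagesimal and radian systems.
one_rad_deg = math.degrees(1.0)    # 1 radian in degrees, about 57.2958
rad = math.radians(57.2958)        # back to about 1.0 radian

# A "1 in 30" taper expressed as an included-angle estimate, degrees:
taper_angle = math.degrees(math.atan(1 / 30))
print(round(one_rad_deg, 4), round(rad, 4), round(taper_angle, 2))
```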
Instruments for angular measurements
The following instruments are used for angular measurements:
1. Protractors: Vernier bevel protractor, Dial bevel protractor, Optical
bevel protractor
2. Sine bars
3. Sine tables, 4. Sine centre, 5. Angle gauges, 6. Sprit level, 7.
Clinometers, 8. Plain index centre, and 9. Optical instruments for
angular measurements as Auto – collimators.
Bevel Protractor
It is probably the simplest instrument for measuring the angle between
two faces of a component.
It consists of a base plate attached to the main body, and an adjustable
blade which is attached to a circular plate containing vernier scale. The
adjustable blade is capable of rotating freely about the centre of the
main scale engraved on the body of the instrument and can be locked in
any position.
An acute angle attachment is provided at the top, as shown in fig., for
the purpose of measuring acute angles. The base of the base plate is
made flat so that it can be laid flat upon the work and any type of angle
measured. It is capable of measurement from 0° to 360°.
The vernier scale has 24 divisions occupying 46 main scale divisions,
so that each vernier division equals 1°55′ and the least count of the
instrument is 5′. This instrument is most commonly used in workshops
for angular measurements unless greater precision is required.
A recent development of the vernier bevel protractor is the optical
bevel protractor. In this instrument, a glass circle divided at 10′
intervals throughout the whole 360° is fitted inside the main body.
A small microscope is fitted through which the circle graduations can be
viewed. The adjustable blade is clamped to a rotating member which
carries this microscope. With the aid of the microscope it is possible to
read by estimation to about 2′.
Universal Bevel Protractor
It is used for measuring and laying out angles accurately and
precisely to within 5 minutes. The protractor dial is slotted to hold a
blade which can be rotated with the dial to the required angle and also
independently adjusted to any desired length. The blade can be locked
in any position.
Bevel Protractors as Per Indian Standard Practice
The bevel protractors are of two types, viz.
1. Mechanical Bevel Protractor, and
2. Optical Bevel Protractor.
Mechanical bevel protractor:
The mechanical bevel protractors are further classified into four
types: A, B, C and D.
In types A and B, the vernier is graduated to read to 5 minutes of arc,
whereas in case of type C, the scale is graduated to read in degrees
and the bevel protractor is without a vernier, fine adjustment device or
acute angle attachment.
Optical bevel protractor:
In the case of the optical bevel protractor, it is possible to take
readings up to approximately 2 minutes of arc. Provision is made for an
internal circular scale which is graduated in divisions of 10 minutes
of arc.
Readings are taken against a fixed index line or vernier by means of
an optical magnifying system which is integral with the instrument. The
scale is graduated as a full circle marked 0—90—0—90. The zero
positions correspond to the condition when the blade is parallel to the
stock. Provision is also made for adjusting the focus of the system to
accommodate normal variations in eye-sight. The scale and vernier are
so arranged that they are always in focus in the optical system.

Various Components of Bevel Protractors
Body: It is designed in such a way that its back is flat and there are no
projections beyond its back, so that when the bevel protractor is placed
on its back on a surface plate there shall be no perceptible rock. The
flatness of the working edge of the stock and body is tested by checking
the squareness of the blade with respect to the stock when the blade is
set at 90°.
Stock: The working edge of the stock is about 90 mm in length and 7
mm thick. It is very essential that the working edge of the stock be
perfectly straight and if at all departure is there, it should be in the form
of concavity and of the order of 0.01 mm maximum over the whole span.
Blade: It can be moved along the turret throughout its length and can
also be reversed. It is about 150 or 300 mm long, 13 mm wide and 2 mm
thick, with ends bevelled at angles of 45° and 60° within an accuracy of
5 minutes of arc. Its working edge should be straight up to 0.02 mm and
parallel up to 0.03 mm over the entire length of 300 mm. It can be
clamped in any position.
Acute Angle Attachment
It can be readily fitted into body and clamped in any position. Its
working edge should be flat to within 0.005 mm and parallel to the
working edge of the stock within 0.015 mm over the entire length of
attachment.
The bevel protractors are tested for flatness, squareness, parallelism,
straightness and angular intervals by suitable methods.
Sine Principle and Sine Bars
A sine bar is used to measure angles based on the sine
principle.
The sine principle uses the ratio of the length of two sides of a right
triangle in deriving a given angle. It may be noted that devices operating
on sine principle are capable of “self generation.”
The measurement is usually limited to 45° from the loss-of-accuracy
point of view. The accuracy with which the sine principle can be put to
use is dependent, in practice, on some form of linear measurement.
The sine bar in itself is not a complete measuring instrument. Another
datum such as a surface plate is needed, as well as other auxiliary
equipment, notably slip gauges, and indicating device to make
measurements. Sine bars used in conjunction with slip gauges
constitute a very good device for the precise measurement of angles.
Sine bars are used either to measure angles very accurately or for
locating any work to a given angle within very close limits.
Sine bars are made from high carbon, high chromium, corrosion
resistant steel, hardened, ground and stabilized.
Figure illustrates the application of a sine rule for angle measurement.
The sine of angle θ formed between the upper surface of a sine bar and
the surface plate (datum) is given by
Sin (θ) = h/L
where h is the height difference between the two rollers and L is the
distance between the centres of the rollers.
Therefore, h = L Sin (θ)
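Both directions of this sine rule can be sketched as follows, assuming a 200 mm sine bar (a common size):

```python
import math

# Sine rule for a sine bar: sin(theta) = h / L.
L = 200.0                        # roller centre distance, mm (assumed size)
theta = 30.0                     # desired angle, degrees
h = L * math.sin(math.radians(theta))
print(round(h, 3))               # 100.0 -> slip gauge height for 30 deg

# Inverse: from a measured slip-gauge height back to the angle.
angle = math.degrees(math.asin(h / L))
print(round(angle, 4))           # 30.0
```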
Setting Sine Bars to Desired Angles
By building slip gauges to height h and placing the sine bar on a
surface plate with one roller on top of the slip gauges, the upper surface
can be set to a desired angle with respect to the surface plate.
The set-up is easier because the cylinders are integral parts of the
sine bar and no separate clamping is required.
No measurement is required between the cylinders since this is a
known length.
It is preferable to use the sine bar on a grade A surface plate.
Measuring Unknown Angles with Sine Bar
A sine bar can also be used to measure unknown angles with a high
degree of precision.
The angle of the work part is first measured using an instrument like a
bevel protractor.
Then, the work part is clamped to the sine bar and set on top of a
surface plate to that angle using slip gauges, as shown in Fig.
A dial gauge fixed to a stand is
brought in contact with the top
surface of the work part at one end
and set to zero.
Now, the dial indicator is moved to
the other end of the work part in a
straight line.
A zero reading on the dial indicator
indicates that the work part surface
is perfectly horizontal and the set
angle is the right one.
A few guidelines should be followed to ensure proper usage of the sine
bar:
It is not recommended to use sine bars for angles greater than 45°
because any error in the sine bar or height of slip gauges gets
accentuated.
Sine bars provide the most reliable measurements for angles less than
15°.
The longer the sine bar, the better the measurement accuracy.
It is preferable to use the sine bar at a temperature recommended by
the supplier. The accuracy of measurement is influenced by the ambient
temperature.
It is recommended to clamp the sine bar and the work part against an
angle plate. This prevents misalignment of the workpiece with the sine
bar while making measurements.
One should always keep in mind that the sine principle can be put to
use provided the sine bar is used along with a high-quality surface plate
and set of slip gauges.
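The guideline about avoiding angles above 45° can be illustrated by differentiating h = L sin θ: for a fixed slip-gauge error dh, the angle error dθ = dh/(L cos θ) grows as θ increases. A hedged sketch, with an assumed 200 mm bar and an assumed 0.005 mm stack error:

```python
import math

# How a fixed height error dh maps to angle error at different angles.
L, dh = 200.0, 0.005             # mm; both values are illustrative

for theta in (15, 45, 75):
    dtheta_rad = dh / (L * math.cos(math.radians(theta)))  # small-angle estimate
    arcsec = math.degrees(dtheta_rad) * 3600
    print(theta, round(arcsec, 1))
```

The error at 75° comes out nearly four times that at 15°, which is why small angles give the most reliable sine bar measurements.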
Sine Blocks, Sine Plates, and Sine Tables
A sine block is a sine bar that is wide enough to stand unsupported
(Fig. 5.11). If it rests on an integral base, it becomes a sine plate (Fig.
5.12).
A sine plate is wider than a sine block.
A heavy-duty sine plate is rugged enough to hold work parts for
machining or inspection of angles.
If a sine plate is an integral part of another device, for example, a
machine tool, it is called a sine table.
However, there is no exact dividing line between the three.

Figure: Sine block, sine plate and sine centre


In all these three devices, the work part rests on them. They are often
used like a fixture to keep the work part at a particular orientation, so
that the required angle is machined.
Sine centre
A sine centre provides a convenient means of measuring angles of
conical work pieces that are held between centres, as shown in Fig.
One of the rollers is pivoted about its axis, thereby allowing the sine
bar to be set to an angle by lifting the other roller.
The base of the sine centre has a high degree of flatness, and slip
gauges are wrung and placed on it to set the sine bar at the required
angle.
Conical workpieces that need to be inspected are placed between the
centres. The sine centre is used for measuring angles up to 60°.
The sine bar is set to an angle such that the dial gauge registers no
deviation when moved from one end of the workpiece to the other. The
angle is determined by applying the sine rule.
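The nulling step above can be sketched as a small calculation. This is an illustrative assumption, not a formula from the text: if the dial gauge still shows a net deviation over the traverse, the setting is off by a small angle that can be estimated and corrected.

```python
import math

def setting_error_deg(reading_diff, traverse_length):
    """Approximate angular error in the sine-bar setting when the dial
    gauge reading changes by reading_diff over a traverse of length
    traverse_length along the cone (both in the same units)."""
    return math.degrees(math.atan2(reading_diff, traverse_length))

# A residual rise of 0.05 mm over a 100 mm traverse means the set
# angle is off by roughly 0.03 degrees
err = setting_error_deg(0.05, 100.0)
assert 0.02 < err < 0.04
```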
Angle Gauges
Angle gauges, which are made of high-grade wear-resistant steel,
work on a principle similar to slip gauges. While slip gauges can be built
to give linear dimensions, angle gauges can be built to give the required
angle.
The gauges come in a standard set of angle blocks that can be wrung
together in a suitable combination to build an angle.
At the outset, it seems improbable that a set of 10 gauges is sufficient
to build so many angles. However, angle blocks have a special feature
that slip gauges lack: they can be subtracted as well as added.

Angle gauge blocks: (a) addition (b) subtraction


This illustration shows the way in which two gauge blocks can be used
in combination to generate two different angles. If a 5° angle block is
used along with a 30° angle block, as shown in Fig., the resulting angle
is 35°. If the 5° angle block is reversed and combined with the 30° angle
block, as shown in Fig., the resulting angle is 25°.
Angle gauges are made of hardened steel, which is lapped and
polished to a high degree of accuracy and flatness. The gauges are
about 75 mm long and 15 mm wide, and the two surfaces that generate
the angles are accurate up to ±2".
Most angles can be built in several ways. However, since error
compounds as the number of gauges increases, it is preferable to use
the least possible number of angle gauge blocks.
The gauges are available in sets of 6, 11, or 16.
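The add-and-subtract principle can be illustrated with a small search over sign choices for each block. The degree values used here are purely illustrative assumptions, not the contents of any standard set.

```python
from itertools import product

# Illustrative degree-value blocks (an assumption; real sets also
# include minute and second blocks).
BLOCKS = [1, 3, 9, 27, 41]

def build_angle(target):
    """Find a combination of angle blocks summing to target degrees.
    Each block may be added (+1), reversed and subtracted (-1),
    or left out (0)."""
    for signs in product((0, 1, -1), repeat=len(BLOCKS)):
        if sum(s * b for s, b in zip(signs, BLOCKS)) == target:
            return [(s, b) for s, b in zip(signs, BLOCKS) if s != 0]
    return None

# 25 degrees can be built as 27 - 3 + 1, using subtraction as well
# as addition
combo = build_angle(25)
assert combo is not None
assert sum(s * b for s, b in combo) == 25
```

Exactly as in the figure, reversing a block changes the sign of its contribution, which is why so few blocks cover so many angles.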
Uses
Angle gauges are used for measurement and calibration purposes in
tool rooms.
They can be used for measuring the angle of a die insert or for inspecting
compound angles of tools and dies.
They are also used in machine shops for either setting up a machine
(e.g., the revolving centre of a magnetic chuck) or for grinding notches
on a cylindrical grinding machine.
Angle gauges offer a simple means for inspecting such a compound
angle by using a dial gauge mounted on a stand.
Comparator
All measurements require the unknown quantity to be compared with a
known quantity, called the standard. In certain devices, the standard is
separated from the instrument, and the instrument compares the
unknown length with the standard. Such a measurement is known as
comparison measurement, and the instrument that provides the
comparison is called a comparator.
A comparator works on the principle of relative measurement: it gives
only dimensional differences in relation to a basic dimension or master
setting.
A comparator has to fulfil many functional requirements in order to be
effective in the industry.
A comparator should have a high degree of accuracy and precision.
We can safely say that in general, comparison measurement provides
better accuracy and precision than direct measurement.
In direct measurement, precision is dependent on the least count of
the scale and the means for reading it.
In comparison measurement, it is dependent on the least count of the
standard and the means for comparing.
The scale should be linear and have a wide range. Since a
comparator, be it mechanical, pneumatic, or electrical, has a means of
amplification of signals, linearity of the scale within the measuring range
should be assured.
A comparator is required to have high amplification. It should be able
to amplify changes in the input value, so that readings can be taken and
recorded accurately and with ease.
A comparator should have good resolution, which is the least possible
unit of measurement that can be read on the display device of the
comparator. Resolution should not be confused with readability, the
former being one among many factors that influence the latter. Other
factors include size of graduations, dial contrast, and parallax.
There should be a provision incorporated to compensate for
temperature effects.
Finally, the comparator should be versatile.
Classification of Comparators
With respect to the principle used for amplifying and recording
measurements, comparators are classified as follows:
1. Mechanical comparators
2. Mechanical–optical comparators
3. Electrical and electronic comparators
4. Pneumatic comparators
5. Other types such as projection comparators and multi-check
comparators
Mechanical Comparators
Dial Indicator
The dial indicator, or dial gauge, is one of the simplest and most
widely used comparators.
It is primarily used to compare workpieces against a master.
The basic features of a dial gauge consist of a body with a circular
graduated dial, a contact point connected to a gear train, and an
indicating hand that directly indicates the linear displacement of the
contact point.
The contact point is first set against the master, and the dial scale is
set to zero by rotating the bezel.
Now, the master is removed and the workpiece is set below the
contact point; the difference in dimensions between the master and the
workpiece can be directly read on the dial scale.
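The zero-setting procedure above amounts to simple arithmetic. A minimal sketch, with illustrative values:

```python
def part_size(master_size, dial_reading):
    """Comparison measurement: the dial is zeroed on the master, so the
    workpiece size is the master size plus the observed deviation
    (both in the same units)."""
    return master_size + dial_reading

# Master slip-gauge stack of 25.000 mm; the dial reads -0.02 mm when
# the workpiece replaces the master, so the part is 24.98 mm
assert abs(part_size(25.000, -0.02) - 24.98) < 1e-9
```

Note that the accuracy of the result depends on the master as well as on the indicator, which is why comparison measurement relies on calibrated standards.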
A dial gauge is also part of standard measuring devices such as bore
gauges, depth gauges, and vibrometers.
The contact point in a dial indicator is of an interchangeable type and
provides versatility to the instrument.
It is available in a variety of mountings and in a range of hard, wear-
resistant materials. Heat-treated steel, boron carbide, sapphire, and
diamond are some of the preferred materials.
Flat and round contact points are commonly used; tapered and button-
type contact points are also used in some applications.
The stem holds the contact point and provides the required length and
rigidity for ease of measurement. The bezel clamp enables locking of
the dial after setting the scale to zero.
The dials are of two types: continuous and balanced.
A continuous dial has graduations starting from zero that extend to
the end of the recommended range. The numbering can run either
clockwise or anticlockwise. This dial corresponds to the use of
unilateral tolerances.
A balanced dial has graduations marked on both sides of zero. This dial
corresponds to the use of bilateral tolerances.
Functional parts of a dial indicator

Method for designating numbers


Use of dial indicators
A dial indicator is a delicate instrument as the slender spindle can be
damaged easily. The user should avoid sudden contact with the
workpiece surface, over-tightening of contact points, and side pressure.
Any sharp fall or blow can damage the contact points or upset the
alignment of bearings, and hence should be avoided.
Standard reference surfaces should be used. It is not recommended to
use non-standard attachments or accessories for reference surfaces.
The dial indicator should be cleaned thoroughly before and after use.
This is very important because unwanted dust, oil, and cutting fluid may
seep inside the instrument and cause havoc to the maze of moving
parts.
Periodic calibration of the dial gauge is a must.
Mechanical–optical Comparator
In a mechanical–optical comparator, small displacements of the plunger
are first amplified by a mechanical lever system. This is followed by a
simple optical system wherein the image of an index is projected onto a
screen to facilitate direct reading on a scale.
The plunger is spring loaded such that it is biased to exert a downward
force on the work part.
This bias also enables both positive and negative readings, depending
on whether the plunger is moving up or down.
The scale is set to zero by inserting a reference gauge below the
plunger.
Now, the reference gauge is taken out and the work part is introduced
below the plunger.
This causes a small displacement of the plunger, which is amplified by
the mechanical levers.
The amplified mechanical movement is further amplified by the optical
system due to the tilting of the plane reflector.
A condensed beam of light passes through an index, which normally
comprises a set of cross-wires.
This image is projected by another lens onto the plane mirror.
The mirror, in turn, reflects this image onto the inner surface of a
ground glass screen, which has a scale.
The difference in reading can be directly read on this calibrated
screen, which provides the linear difference in millimetres or fractions of
a millimetre.
Optical magnifications provide a high degree of precision in
measurements due to the reduction of moving members and better
wear-resistance qualities.
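The two-stage amplification described above multiplies. A minimal sketch with illustrative lever ratios (assumptions, not values from the text); the doubling factor follows from the fact that a mirror tilted by an angle deflects a reflected ray through twice that angle:

```python
def overall_magnification(mech_ratio, optical_lever_ratio):
    """Overall magnification of a mechanical-optical comparator:
    the mechanical lever ratio multiplied by the optical amplification.
    The tilting mirror doubles the angular deflection, so the optical
    stage contributes 2 * its lever ratio."""
    return mech_ratio * 2 * optical_lever_ratio

# e.g. a 20:1 mechanical stage with a 25:1 optical lever gives 1000x
assert overall_magnification(20, 25) == 1000
```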

Principle of a mechanical–optical comparator
Optical Projector
An optical projector is a versatile comparator, which is widely used for
inspection purpose. It is especially used in tool room applications.
It projects a two-dimensional magnified image of the workpiece onto a
viewing screen to facilitate measurement.
It comprises three main elements: the projector itself, comprising a light
source and a set of lenses housed inside an enclosure; a work table to
hold the workpiece in place; and a transparent screen, with or without a
chart gauge, for comparison or measurement of parts.
Figure illustrates the various parts of an optical projector.
The workpiece to be inspected is mounted on a table such that it is in
line with the light beam coming from the light source.
The table may be either stationary or movable.
In most projectors, the table can be moved in two mutually
perpendicular directions in the horizontal plane. The movement is
effected by operating a knob attached to a double vernier micrometer,
which can provide a positional accuracy of up to 5 μm or better.
The light beam originating from the
lamp is condensed by means of a
condenser and falls on the workpiece.
The image of the workpiece is carried
by the light beam, which passes through
a projection lens.
The projection lens magnifies the
image, which falls on a highly polished
mirror kept at an angle.
The reflected light beam carrying the
image of the workpiece falls on a
transparent screen.
The most preferred light source is the tungsten filament lamp,
although mercury or xenon lamps are also used sometimes.
An achromatic collimator lens is placed in the path of a light beam
coming from the lamp.
The collimator lens will reorient the light rays into a parallel beam large
enough in diameter to provide coverage of the workpiece.
The light beam, after passing through the projection lens, is directed
by a mirror onto the viewing screen.
Screens are made of glass, with the surface facing the operator,
ground to a very fine grain size.

The following are the typical applications of profile projectors:


1. Inspection of elements of gears and screws
2. Measurement of pitch circle diameters of holes located on
components
3. Measurement of unusual profiles on components such as involute
and cycloidal, which are difficult to measure by other means
4. Measurement of tool wear (Drawing of a tool to scale is made on a
tracing sheet. The tracing sheet is clamped on to the screen. Now, the
used tool is fixed on the table and the image is projected to the required
magnification. Using a pencil, one can easily trace the actual profile of
the tool on to the tracing sheet. This image superimposed on the actual
drawing is useful for measuring the tool wear.)
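Converting screen measurements back to workpiece dimensions, as in the tool-wear procedure above, is a division by the magnification. A minimal sketch with illustrative values:

```python
def actual_size(screen_dim, magnification):
    """True workpiece dimension from a length measured on the screen."""
    return screen_dim / magnification

def tool_wear(drawn_dim, traced_dim, magnification):
    """Wear estimated from the gap between the scaled drawing and the
    traced profile of the used tool, both measured on the screen."""
    return (drawn_dim - traced_dim) / magnification

# A feature measuring 47.5 mm on the screen at 10x is 4.75 mm on the part
assert abs(actual_size(47.5, 10) - 4.75) < 1e-12
# A 2 mm gap on the screen at 20x corresponds to 0.1 mm of wear
assert abs(tool_wear(52.0, 50.0, 20) - 0.1) < 1e-12
```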
