
UNIT 5 - INTRODUCTION TO MEASUREMENT & MECHATRONICS

CONCEPT OF MEASUREMENT
Measurement is the process of associating numbers with physical quantities and phenomena.
Measurement is fundamental to the sciences; to engineering, construction, and other technical
fields; and to almost all everyday activities.

Measurement begins with a definition of the quantity that is to be measured, and it always
involves a comparison with some known quantity of the same kind. If the object or quantity to be
measured is not accessible for direct comparison, it is converted or "transduced" into
an analogous measurement signal. Since measurement always involves some interaction between
the object and the observer or observing instrument, there is always an exchange of energy
which, although negligible in everyday applications, can become considerable in some types
of measurement and thereby limit accuracy.

ERRORS IN MEASUREMENT
Types of Errors
Errors are classified into three types based on the source from which they arise. They are:

 Gross Errors
 Random Errors
 Systematic Errors

Gross Errors
This category basically takes into account human oversight and other mistakes while reading,
recording, and calculating readings. The most common human errors in measurement fall under this
category. For example, the person taking the reading from the meter of
the instrument may read 23 as 28. Gross errors can be avoided by taking two suitable measures,
which are listed below:

 Proper care should be taken in reading and recording the data. Also, the calculation of error
should be done accurately.
 By increasing the number of experimenters, we can reduce the gross errors. If each
experimenter takes readings at different points, then by taking the average of a larger number
of readings we can reduce the gross errors, as the sketch below illustrates.
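A minimal Python sketch of this averaging idea, using made-up readings in which the value 28.0 stands in for the "23 read as 28" misreading mentioned above; the outlier threshold is an arbitrary assumption:

```python
# Illustrative sketch: all readings below are invented values.
readings = [23.1, 23.0, 28.0, 23.2, 22.9]  # 28.0 is a gross error (23 misread as 28)

mean_all = sum(readings) / len(readings)
print(f"Mean of all readings: {mean_all:.2f}")  # pulled upward by the outlier

# Readings far from the median are flagged as likely gross errors.
median = sorted(readings)[len(readings) // 2]
good = [r for r in readings if abs(r - median) < 1.0]  # 1.0 is an arbitrary threshold
print(f"Mean after discarding outliers: {sum(good) / len(good):.2f}")
```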

Random Errors
The random errors are those errors which occur irregularly and hence are random. They can
arise due to random and unpredictable fluctuations in experimental conditions (for example,
unpredictable fluctuations in temperature, voltage supply, or mechanical vibrations of the
experimental set-up), or due to errors by the observer taking readings. For example, when the
same person repeats the same observation, they are likely to get different readings every time.

Systematic Errors:
Systematic errors can be better understood if we divide them into subgroups. They are:

 Environmental Errors
 Observational Errors
 Instrumental Errors
Environmental Errors: This type of error arises in the measurement due to the effect of the
external conditions on the measurement. The external conditions include temperature, pressure,
and humidity, and can also include an external magnetic field. For example, if you measure your
temperature under the armpit and, during the measurement, the electricity goes out and the room
gets hot, the rise in your body temperature will affect the reading.
Observational Errors: These are the errors that arise due to an individual's bias, lack of proper
setting of the apparatus, or an individual's carelessness in taking observations. Measurement
errors also include wrong readings due to parallax error.
Instrumental Errors: These errors arise due to faulty construction and calibration of the
measuring instruments. Such errors arise due to the hysteresis of the equipment or due to friction.
Often the equipment in use is faulty due to misuse or neglect, which changes the reading of
the equipment. The zero error is a very common type of error. This error is common
in devices like Vernier callipers and screw gauges. The zero error can be either positive or
negative. Sometimes the scale readings are worn off, which can also lead to a bad reading.
Instrumental error takes place due to:

 An inherent constraint of the device
 Misuse of the apparatus
 Effect of loading

Errors Calculation
Different measures of errors include:

Absolute Error
The difference between the measured value of a quantity and its actual value gives the absolute
error. It is the variation between the actual value and the measured value. It is given by
Absolute error = |VA - VE|, where VA is the actual (accepted) value and VE is the experimental (measured) value.

Percent Error
It is another way of expressing the error in measurement. This calculation allows us to gauge
how accurate a measured value is with respect to the true value. Percent error is given by the
formula
Percentage error (%) = (|VA - VE| / VA) × 100

Relative Error
The ratio of the absolute error to the accepted measurement gives the relative error. The relative
error is given by the formula:
Relative Error = Absolute error / Actual value
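A minimal Python sketch of the three error measures, using the displacement figures quoted later in this unit (a reading of 29.8 mm against an actual value of 30 mm); the variable names are ours, chosen for clarity:

```python
# V_actual is the accepted (true) value; V_measured is the experimental reading.
V_actual = 30.0     # true displacement, mm
V_measured = 29.8   # reading from the instrument, mm

absolute_error = abs(V_actual - V_measured)
relative_error = absolute_error / V_actual
percent_error = relative_error * 100

print(f"Absolute error: {absolute_error:.2f} mm")   # 0.20 mm
print(f"Relative error: {relative_error:.4f}")      # 0.0067
print(f"Percent error:  {percent_error:.2f} %")     # 0.67 %
```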

How To Reduce Errors In Measurement


Keeping an eye on the procedure and following the points listed below can help reduce the
error.

 Make sure the formulas used for measurement are correct.
 Cross-check the measured value of a quantity for improved accuracy.
 Use the instrument that has the highest precision.
 It is suggested to pilot test measuring instruments for better accuracy.
 Use multiple measures for the same construct.
 Note the measurements under controlled conditions.
Why is calibration important?
The accuracy of all measuring devices degrades over time. This is typically caused by normal
wear and tear. However, changes in accuracy can also be caused by electric or mechanical shock
or a hazardous manufacturing environment (e.g., oils, metal chips, etc.). Depending on the type of
instrument and the environment in which it is being used, it may degrade very quickly or over a
long period of time. The bottom line is that calibration improves the accuracy of the measuring
device. Accurate measuring devices improve product quality.

When should you calibrate your measuring device?


A measuring device should be calibrated:

 According to the recommendation of the manufacturer
 After any mechanical or electrical shock
 Periodically (annually, quarterly, monthly)
The hidden costs and risks associated with an uncalibrated measuring device can be much higher
than the cost of calibration. Therefore, it is recommended that measuring instruments be
calibrated regularly by a reputable company to ensure that the errors associated with the
measurements remain within the acceptable range.

MEASUREMENTS OF PRESSURE
BOURDON TUBE PRESSURE GAUGE
U TUBE MANOMETER
The simplest form of manometer consists of a U-shaped glass tube containing liquid. It is used to
measure gauge pressure and is one of the primary instruments used in the workshop for calibration.

The principle of the manometer is that the pressure to be measured is applied to one side of the
tube, producing a movement of liquid, as shown in the figure above. It can be seen that the level of
the filling liquid in the leg where the pressure is applied, i.e. the left leg of the tube, has dropped,
while that in the right-hand leg has risen. A scale is fitted between the tubes to enable us to
measure this displacement.

Let us assume that the pressure we are measuring, applied to the left-hand side of the
manometer, is of constant value. The liquid will only stop moving when the pressure exerted by
the column of liquid, H, is sufficient to balance the pressure applied to the left side of the
manometer, i.e. when the head pressure produced by column "H" is equal to the pressure to be
measured.

Knowing the height of the column of liquid, H, and the density of the filling liquid, we can
calculate the value of the applied pressure.

The applied pressure, P = ρ × g × H
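A minimal sketch of this calculation, assuming a water-filled manometer and an arbitrary column height:

```python
# Gauge pressure from a U-tube manometer column (illustrative values).
rho = 1000.0   # density of the filling liquid, kg/m^3 (water assumed)
g = 9.81       # acceleration due to gravity, m/s^2
H = 0.25       # height difference of the liquid column, m (assumed)

applied_pressure = rho * g * H   # pressure in pascals (Pa)
print(f"Applied gauge pressure: {applied_pressure:.1f} Pa")  # 2452.5 Pa
```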


OPTICAL PYROMETER WORKING PRINCIPLE
A pyrometer is a non-contact device, also known as a radiation thermometer. The main
function of this instrument is to detect the surface temperature of an object by measuring the
electromagnetic radiation emitted from the object. So, thermal radiation can
be measured by using this non-contact device, and from it we can determine the
temperature of the surface of the object. There are different types of pyrometers available in the
market, such as infrared and optical pyrometers.

What is an Optical Pyrometer?


Definition: A temperature-measuring device that is used to measure the temperature of molten
metals, overheated materials, furnaces, or liquids. It is a non-contact type of temperature-measuring
instrument. The working principle of the optical pyrometer is to match
the brightness of the object with that of a filament within the device. Measuring the temperature
of a highly heated body with contact-type instruments is not possible, so this non-contact
device is used to measure the temperature. The optical pyrometer diagram is shown below.
Optical Pyrometer Construction
The shape of the pyrometer is cylindrical, and the internal parts of the optical pyrometer mainly
include an eyepiece, a power source, an absorption screen, and a red filter.

 An eyepiece and the lens of an object are arranged at both sides of the device.
 A battery, millivoltmeter & rheostat are connected to a temperature bulb.
 An absorption screen is arranged between the reference temperature lamp and the
objective lens to increase the temperature range that can be measured with the
device.
 The red filter is placed between the lamp and the eyepiece so that the lamp passes only a
narrow band of wavelengths around 0.65 μm.
Optical Pyrometer Working
It includes a lens to focus the energy radiated from the heated object onto the filament of
the lamp. The brightness of the filament mainly depends on the flow of current through it;
therefore a variable current can be supplied to the lamp.

The magnitude of the current is changed until the filament's intensity matches the intensity
of the object. When the intensities of the filament and the object are the same, the outline of
the filament vanishes completely. The filament in the bulb appears bright when its temperature
is higher than that of the source. Similarly, the filament appears dim if its
temperature is lower than that required for equal brightness.

Optical Pyrometer Advantages

 It is used for high temperatures.
 It can measure the temperature of distant objects as well as of moving objects.
 Good accuracy.
 The temperature can be measured without contact with the target.
 Low weight.
 It is flexible and portable.
Optical Pyrometer Disadvantages
 Due to thermal background radiation, dust, and smoke, the accuracy of this device can
be affected.
 It cannot be used to measure the temperature of burning gases, because they do not emit
visible energy.
 It is expensive.
 Manual-type pyrometers are not suitable for evaluating object temperatures below 800 °C
because, at lower temperatures, the energy emitted is too low.

Applications
The applications of optical pyrometer include the following.

 It is used to measure the temperature of highly heated materials.
 It is useful for measuring furnace temperatures.
 It is used in critical process measurements in semiconductor manufacture, medicine,
induction heat treating, crystal growth, furnace control, glass manufacture, etc.

PRONY BRAKE DYNAMOMETER

Prony Brake is one of the simplest dynamometers for measuring power output (brake power). The
idea is to attempt to stop the engine by applying a brake to the flywheel and to measure the weight
which an arm attached to the brake will support as it tries to rotate with the flywheel.

The Prony brake shown above consists of a wooden block, frame, rope, brake shoes and
flywheel. It works on the principle of converting power into heat by dry friction. Spring-loaded
bolts are provided to increase the friction by tightening the wooden block.
The whole of the power absorbed is converted into heat, and hence this type of dynamometer
must be cooled.
The brake power is given by the formula

Brake Power (Pb) = 2πNT

where N = rotational speed of the shaft (revolutions per second) and
T = braking torque = weight applied (W) × arm length (l)
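A minimal sketch of this calculation, assuming N is in revolutions per second and using arbitrary illustrative values for the weight, arm length and speed:

```python
import math

W = 50.0    # weight applied at the end of the brake arm, N (assumed)
l = 0.8     # length of the brake arm, m (assumed)
N = 25.0    # shaft speed, rev/s (equivalent to 1500 rpm, assumed)

T = W * l                     # braking torque, N*m
Pb = 2 * math.pi * N * T      # brake power, watts
print(f"Torque: {T:.1f} N*m, Brake power: {Pb/1000:.2f} kW")  # 40.0 N*m, 6.28 kW
```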

CONCEPT OF ACCURACY, PRECISION AND RESOLUTION

Accuracy: An instrument's degree of veracity—how close its measurement comes to the
actual or reference value of the signal being measured.

Resolution: The smallest increment an instrument can detect and display—hundredths,
thousandths, millionths.

Range: The upper and lower limits within which an instrument can measure a value or signal,
such as amps, volts and ohms.

Precision: An instrument's degree of repeatability—how reliably it can reproduce the
same measurement over and over.

Accuracy:

Accuracy refers to the largest allowable error that occurs under specific operating conditions

Accuracy is expressed as a percentage and indicates how close the displayed measurement is to
the actual (standard) value of the signal measured. Accuracy requires a comparison to an
accepted industry standard.

The accuracy of a specific digital multimeter is more or less important depending on the
application. For example, most AC power line voltages vary ±5% or more. An example of this
variation is a voltage measurement taken at a standard 115 V AC receptacle. If a digital
multimeter (DMM) is only used to check whether a receptacle is energized, a DMM with ±3%
measurement accuracy is appropriate.

Some applications, such as calibration of automotive, medical, aviation or specialized industrial
equipment, may require higher accuracy. A reading of 100.0 V on a DMM with an accuracy of
±2% can range from 98.0 V to 102.0 V. This may be fine for some applications, but
unacceptable for sensitive electronic equipment.
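A minimal sketch of the accuracy band from this example (±2% of a 100.0 V reading):

```python
# Measurement band implied by a DMM's accuracy specification.
reading = 100.0      # displayed value, volts
accuracy = 0.02      # +/-2% of reading

low = reading * (1 - accuracy)
high = reading * (1 + accuracy)
print(f"True value lies between {low:.1f} V and {high:.1f} V")  # 98.0 V and 102.0 V
```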

Resolution

Resolution is the smallest increment a tool can detect and display.

For a nonelectrical example, consider two rulers. One marked in 1/16-inch segments offers
greater resolution than one marked in quarter-inch segments.
Imagine a simple test of a 1.5 V household battery. If a digital multimeter (DMM) has a
resolution of 1 mV on the 3 V range, it is possible to see a change of 1 mV while reading 1 V.
The user can see changes as small as one-thousandth of a volt, or 0.001 V.

Resolution may be listed in a meter's specifications as maximum resolution, which is the
smallest value that can be discerned on the meter's lowest range setting.
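A minimal sketch of what a 1 mV resolution implies for a displayed value; the battery voltage below is an assumed figure:

```python
# Quantizing a reading to the instrument's resolution (illustrative).
resolution = 0.001   # smallest displayable increment on the 3 V range, volts
true_value = 1.4567  # assumed battery voltage, volts

displayed = round(true_value / resolution) * resolution
print(f"Displayed reading: {displayed:.3f} V")  # 1.457 V -- steps of 0.001 V
```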

What is “Mechatronics”?
Mechatronics can be defined as the application of electronics and computer technology to control
the motions of mechanical systems.

It is a multidisciplinary approach to product and manufacturing system design (Figure). It
involves the application of electrical, mechanical, control and computer engineering to develop
products, processes and systems with greater flexibility, ease of redesign and the ability to be
reprogrammed. It includes all these disciplines concurrently.

Mechatronics can also be described as the replacement of mechanics with electronics, or the
enhancement of mechanics with electronics. For example, in modern automobiles, mechanical
fuel-injection systems have been replaced with electronic fuel-injection systems. This
replacement made automobiles more efficient and less polluting. With the help of
microelectronics and sensor technology, mechatronic systems provide high levels of precision
and reliability.
Evolution Level of Mechatronics

1. Primary Level Mechatronics: This level incorporates I/O devices such as sensors and
actuators that integrate electrical signals with mechanical action at the basic control level.
Examples: electrically controlled fluid valves and relays

2. Secondary Level Mechatronics: This level integrates microelectronics into electrically
controlled devices. Examples: cassette players

3. Third Level Mechatronics: This level incorporates advanced feedback functions into the control
strategy, thereby enhancing the quality of the system in terms of sophistication; such a system is
called a smart system. The control strategy includes microelectronics, microprocessors and other
'Application Specific Integrated Circuits' (ASICs). Example: control of the electric motors used to
actuate industrial robots, hard disks, CD drives and automatic washing machines. A minimal sketch
of the underlying feedback idea is given below.
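The sketch below illustrates the basic feedback idea behind such control strategies with a simple proportional speed controller; the gain, set point and first-order motor response are arbitrary assumptions, not any specific product's algorithm:

```python
# Proportional feedback control of a motor speed (illustrative only).
setpoint = 1200.0   # desired motor speed, rpm (assumed)
speed = 0.0         # current speed, rpm
Kp = 0.5            # proportional gain (chosen arbitrarily)

for step in range(20):
    error = setpoint - speed    # feedback: compare measurement with set point
    drive = Kp * error          # control effort proportional to the error
    speed += 0.1 * drive        # crude first-order response of the motor
    print(f"step {step:2d}: speed = {speed:7.1f} rpm")
```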

4. Fourth Level Mechatronics: This level incorporates intelligent control in mechatronics
systems. It introduces intelligence and fault detection and isolation (FDI) capability into the
system.

Advantages and Disadvantages of Mechatronics system:


Sensors and Transducers: An introduction to sensors and transducers; the use of sensors and
transducers for specific purposes in mechatronics; transducer signal conditioning; devices for
data conversion; programmable controllers.

Sensors and transducers

Measurement is an important subsystem of a mechatronics system. Its main function is to collect
information on the system status and to feed it to the microprocessor(s) controlling the
whole system.

For a mechatronics system designer it is quite difficult to choose suitable sensors/transducers for
the desired application(s). It is therefore essential to learn the principle of working of commonly
used sensors/transducers. Sensors in manufacturing are basically employed to automatically
carry out the production operations as well as process monitoring activities. Sensor technology
has the following important advantages in transforming a conventional manufacturing unit into a
modern one.

 Sensors alert the system operators to the failure of any of the sub-units of the manufacturing
system. This helps operators reduce the downtime of the complete manufacturing system by
carrying out preventive measures.

 Reduces the requirement for skilled and experienced labour.
 Ultra-precision in product quality can be achieved.

Sensor

It is defined as an element which produces a signal relating to the quantity being measured.
According to the Instrument Society of America, a sensor can be defined as "a device which
provides a usable output in response to a specified measurand." Here, the output is usually an
'electrical quantity' and the measurand is a 'physical quantity, property or condition which is to be
measured'. Thus, in the case of, say, a variable-inductance displacement element, the quantity
being measured is displacement, and the sensor transforms an input of displacement into a change
in inductance.

Sensors are also called detectors.

Need for Sensors

 Sensors are omnipresent. They are embedded in our bodies, automobiles, airplanes, cellular
telephones, radios, chemical plants, industrial plants and countless other applications.

 Without the use of sensors, there would be no automation.

Transducer
It is defined as an element which, when subjected to some physical change, experiences a related
change, or an element which converts a specified measurand into a usable output by using a
transduction principle. It can also be defined as a device that converts a signal from one form of
energy to another form.

A wire of constantan alloy (a 55-45% copper-nickel alloy) can be called a sensor because a
variation in mechanical displacement (tension or compression) can be sensed as a change in
electrical resistance. This wire becomes a transducer when appropriate electrodes and an
input-output mechanism are attached to it. Thus we can say that 'sensors are transducers'.

Basic elements of transducer

• There are basically two elements which constitute a transducer:
• A sensing element
• A transduction element (which converts the sensed quantity into a usable output)

Sensor/transducers specifications
Transducers or measurement systems are not perfect systems. A mechatronics design engineer
must know the capabilities and shortcomings of a transducer or measurement system to properly
assess its performance. There are a number of performance-related parameters of a transducer or
measurement system. These parameters are called sensor specifications.

Sensor specifications inform the user about deviations from the ideal behaviour of the
sensors. The following are the various specifications of a sensor/transducer system.

1. Range
The range of a sensor indicates the limits between which the input can vary. For
example, a thermocouple for the measurement of temperature might have a range
of 25-225 °C.
2. Span
The span is the difference between the maximum and minimum values of the input.
Thus, the above-mentioned thermocouple will have a span of 200 °C.
3. Error
Error is the difference between the result of the measurement and the true value of
the quantity being measured. If a sensor gives a displacement reading of 29.8 mm
when the actual displacement is 30 mm, then the error is –0.2 mm.
4. Accuracy
The accuracy defines the closeness of the agreement between the actual
measurement result and a true value of the measurand. It is often expressed as a
percentage of the full-range output or full-scale deflection. A piezoelectric
transducer used to evaluate dynamic pressure phenomena associated with
explosions, pulsations, or dynamic pressure conditions in motors, rocket engines,
compressors, and other pressurized devices may be capable of detecting pressures
between 0.1 and 10,000 psig (0.7 kPa to 70 MPa). If it is specified with an accuracy
of about ±1% of full scale, then any reading can be expected to be within
±0.7 MPa.
5. Sensitivity
Sensitivity of a sensor is defined as the ratio of the change in the output value of a
sensor to the change in input value that causes it. For example, a
general-purpose thermocouple may have a sensitivity of 41 μV/°C. The sketch
below works through these specifications with the figures quoted above.
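A minimal Python sketch tying these specifications together, using the thermocouple and displacement figures quoted above; the temperature change is an assumed value:

```python
# Sensor specifications worked through with the figures from the text.
range_low, range_high = 25.0, 225.0   # thermocouple range, deg C
span = range_high - range_low         # 200 deg C

true_value = 30.0                     # actual displacement, mm
measured = 29.8                       # sensor reading, mm
error = measured - true_value         # -0.2 mm

sensitivity = 41e-6                   # thermocouple output, V per deg C
delta_T = 10.0                        # assumed temperature change, deg C
output_change = sensitivity * delta_T # resulting change in output voltage

print(f"Span: {span:.0f} deg C, error: {error:+.1f} mm, "
      f"output change: {output_change * 1e6:.0f} uV")  # 200, -0.2, 410 uV
```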

Classification of sensors
Sensors can be classified into various groups according to factors such as the
measurand, application field, conversion principle, energy domain of the
measurand and thermodynamic considerations. These general classifications of
sensors are well described in the references. A detailed classification of sensors in
view of their applications in manufacturing is as follows.
A. Displacement, position and proximity sensors
• Potentiometer
• Strain-gauged element
• Capacitive element
• Differential transformers
• Eddy current proximity sensors
• Inductive proximity switch
• Optical encoders
• Pneumatic sensors
• Proximity switches (magnetic)
• Hall effect sensors
B. Velocity and motion
• Incremental encoder
• Tachogenerator
• Pyroelectric sensors
C. Force
• Strain gauge load cell
D. Fluid pressure
• Diaphragm pressure gauge
• Capsules, bellows, pressure tubes
• Piezoelectric sensors
• Tactile sensor
E. Liquid flow
• Orifice plate
• Turbine meter
F. Liquid level
• Floats
• Differential pressure
G. Temperature
• Bimetallic strips
• Resistance temperature detectors
• Thermistors
• Thermo-diodes and transistors
• Thermocouples
H. Light sensors
• Photo diodes
• Photo resistors
• Photo transistors

Strain Measurements
When a system of forces or loads acts on a body, it undergoes some deformation. This
deformation per unit length is known as unit strain or simply strain. Mathematically,
Strain ε = δl / l, where δl = change in length of the body and
l = original length of the body.
If the net change in a dimension is required, then the term total strain is used. Since
the strains applied to most engineering materials are very small, they are expressed in
"micro-strain", as the short sketch below illustrates.
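A minimal sketch of the strain calculation, with assumed dimensions:

```python
# Strain and micro-strain from a change in gauge length (illustrative values).
l = 100.0        # original gauge length, mm (assumed)
delta_l = 0.012  # change in length, mm (assumed)

strain = delta_l / l        # dimensionless
microstrain = strain * 1e6  # expressed in micro-strain
print(f"Strain = {strain:.6f} ({microstrain:.0f} micro-strain)")  # 120 micro-strain
```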
Strain is the quantity used for finding the stress at any point. For measuring
strain, it is usual practice to make measurements over the shortest possible gauge
length. This is because the measurement of a change in a given length does not give the
strain at any fixed point but rather the average value over that length. The strain at
various points may differ, depending on the strain gradient along the gauge
length; the average strain then corresponds to the point strain at the middle of the gauge
length. Since the change in length over a small gauge length is very small, a high-magnification
system is required, and on this basis strain gauges are classified
as follows:
i) Mechanical strain gauges
ii) Optical strain gauges
iii) Electrical strain gauges
Mechanical Strain Gauges
This type of strain gauge uses mechanical means for magnification.
Extensometers employing compound levers with high magnification were used. The figure
shows a simple mechanical strain gauge. It consists of two gauge points which are
seated on the specimen whose strain is to be measured. One gauge point is fixed while
the second gauge point is connected to a magnifying lever, which in turn gives its input
to a dial indicator. The lever magnifies the displacement, which is indicated directly on
the calibrated dial indicator. This displacement is used to calculate the strain value.
The most commonly used mechanical strain gauges are the Berry type and the Huggenberger
type. The Berry extensometer, shown in the figure, is used for structural applications in
civil engineering for long gauge lengths of up to 200 mm.

Mechanical Strain Gauge ( Berry Extensometer)

Advantages

1. It has a self-contained magnification system.
2. No auxiliary equipment is needed, as in the case of electrical strain gauges.
Disadvantages
1. Limited only to static tests.
2. The high inertia of the gauge makes it unsuitable for dynamic measurements
and varying strains.
3. The response of the system is slow, and there is no method of recording
the readings automatically.
4. There should be sufficient surface area on the test specimen and clearance
above it in order to accommodate the gauge together with its mountings.

GAUGES:
Limit Gauges:
Two sets of limit gauges are necessary for checking the size of various parts. There are two
gauges: the Go limit gauge and the Not-Go limit gauge.
1. Go Limit: The Go limit applies to that one of the two limits of size which corresponds to the
maximum material condition, i.e. (i) the upper limit of a shaft, and (ii) the lower limit of a
hole. This is checked by the Go gauge.
2. Not-Go Limit: The Not-Go limit applies to that one of the two limits of size which corresponds
to the minimum material condition, i.e. (i) the lower limit of a shaft, and (ii) the upper limit of a
hole. This is checked by the Not-Go gauge.
The types are:
1. Plug Gauge
2. Snap Gauge

1. Plug Gauge:

A plug gauge is a cylindrical type of gauge, used to check the accuracy of holes. The plug
gauge checks whether the hole diameter is within the specified tolerance or not. The 'Go' plug
gauge is the size of the low limit of the hole, while the 'Not-Go' plug gauge corresponds to the
high limit of the hole.
Fig: Types of Plug gauges

It should engage the hole to be checked without using pressure and should be able to stand in
the hole without falling. The sketch below illustrates the resulting Go/Not-Go acceptance logic.
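A minimal sketch of that logic, with assumed hole limits and treating a snug fit as "entering":

```python
# Go / Not-Go acceptance check for a hole (illustrative limits).
low_limit = 25.00    # minimum permissible hole diameter, mm (Go plug size, assumed)
high_limit = 25.05   # maximum permissible hole diameter, mm (Not-Go plug size, assumed)

def check_hole(diameter_mm: float) -> str:
    go_enters = diameter_mm >= low_limit       # the Go plug must enter the hole
    not_go_enters = diameter_mm >= high_limit  # the Not-Go plug must NOT enter
    return "ACCEPT" if go_enters and not not_go_enters else "REJECT"

for d in (24.98, 25.02, 25.07):
    print(f"hole {d:.2f} mm -> {check_hole(d)}")
# 24.98 REJECT (Go does not enter), 25.02 ACCEPT, 25.07 REJECT (Not-Go enters)
```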

Snap Gauge:

A snap gauge is a U-shaped frame having jaws, used to check the accuracy of shafts and male
members. The snap gauge checks whether the shaft diameter is within the specified tolerances
or not.
The 'Go' snap gauge is the size of the high (maximum) limit of the shaft, while the 'Not-Go'
snap gauge corresponds to the low (minimum) limit of the shaft.
