
COURSE CODE - EEE 315

COURSE TITLE - MEASUREMENT AND INSTRUMENTATION


SEMESTER/SESSION – FIRST/2023/2024 SESSION
CHAPTER ONE
LESSON ONE
CONCEPT OF MEASUREMENT
Measurement is the process of finding the value of a physical quantity. The result of the
measurement is expressed by a pointer deflection over a predefined scale or through a digital
readout.

Fig. 1a: Analog instrument. Fig. 1b: Digital instrument.


From Fig. 1b:
1. Function/Range Switch: selects the function (voltmeter, ammeter, or ohmmeter) and the range for
the measurement.
2. COM Input Terminal: Common ground, used in ALL measurements.
3. V Input Terminal: for voltage or resistance measurements.
4. 200 mA Input Terminal: for small current measurements.
5. 10 A Input Terminal: for large current measurements.
6. Low-battery indicator: appears on the LCD when the battery needs replacement.
The measurement of a given quantity is an act or result of comparison between the quantity
(whose magnitude is unknown) and a known standard. Since two quantities are compared, the
result is expressed in numerical values. Instrumentation, in turn, consists of a set of sensors and
electronic apparatus for measuring or monitoring a process. The instrument used for comparing
the unknown quantity with a standard quantity is called a measuring instrument.
The value of the unknown quantity can be measured by DIRECT OR INDIRECT METHODS.
In direct measurement methods, the unknown quantity is read directly from the instrument.
Examples of direct measurement are current by an ammeter, voltage by a voltmeter,
resistance by an ohmmeter, power by a wattmeter, etc.
In indirect measurement methods, the value of the unknown quantity is determined by measuring
a related quantity and calculating the desired quantity from it, rather than measuring it directly.
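For instance, a resistance can be measured indirectly by measuring the voltage across it and the current through it and computing R = V/I from Ohm's law. A minimal Python sketch (the 12 V and 0.5 A readings are assumed example values):

def indirect_resistance(voltage_v, current_a):
    # Ohm's law: the resistance is computed from two direct measurements,
    # not read off an ohmmeter
    return voltage_v / current_a

print(indirect_resistance(12.0, 0.5))  # assumed readings -> 24.0 ohms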

Examples of measuring instruments include the actinometer, an instrument for measuring the
intensity of electromagnetic radiation (usually by the photochemical effect); the dynamometer or
ergometer, designed to measure power; and the electrodynamometer, a measuring instrument
that uses the interaction of the magnetic fields of two coils to measure current, voltage or power.
From an industrial point of view, measurement can be applied in the following areas:
(a) Monitoring of processes and operations.
(b) Control of processes and operations.
(c) Experimental engineering analysis.
Monitoring of processes and operations refers to situations where the measuring device is being
used to keep track of some quantity. The thermometers, barometers, radars, and anemometers used
by the weather bureau fit this definition. They simply indicate the condition of the environment,
and their readings do not serve any control functions in the ordinary sense. Similarly, water, gas,
and electric meters in the home keep track of the quantity of the commodity used so that the cost
to the user can be computed. In an automotive context, the speedometer, fuel gauge, outdoor
temperature sensor, and compass belong to this monitoring class of applications.
Control of processes and operations is one of the most important classes of measurement
application; the typical example is the automatic feedback control system. This type of application
is studied in control systems in the fields of aerospace, electrical, chemical, and industrial engineering.
Experimental engineering analysis is that part of engineering design, development, and research
that relies on laboratory testing of one kind or another to answer questions. That is, as engineers,
we have only two basic ways of solving engineering problems: theory and experimentation.
For a given engineering problem, theoretical or experimental methods may be used depending upon
the nature of the problem.
LESSON TWO
CLASSIFICATION OF MEASURING INSTRUMENTS
Measuring instruments can be classified as follows:
1. Mechanical instruments: - They are very reliable under static and stable conditions. The
disadvantage is that they are unable to respond rapidly to measurement under dynamic and
transient conditions.
2. Electrical instruments: - Electrical instruments depend on a mechanical meter movement
as the indicating device.

3. Electronic instruments: - These instruments have a very fast response. For example, a cathode
ray oscilloscope (CRO) is capable of following dynamic and transient changes of the order of a few
nanoseconds (10⁻⁹ s).

2.1 OTHER WAYS OF CLASSIFYING MEASURING INSTRUMENTS


Apart from the above, measuring instruments are further classified as:
a. Primary (absolute) instruments
b. Secondary instruments
1. Absolute instruments or Primary Instruments: - These instruments give the magnitude of the
quantity under measurement in terms of the physical constants of the instrument itself. They do not
require comparison with any other standard instrument, so no calibration is necessary, and they are
mainly used for laboratory work. Some examples of absolute instruments are the tangent
galvanometer, the Rayleigh current balance and the absolute electrometer.
2. Secondary instruments: - These instruments determine the value of the measured quantity
directly from the deflection of the instrument. They are calibrated by comparison with an absolute
instrument, or with another secondary instrument that has itself been calibrated against an absolute
instrument. Some examples of secondary instruments are ammeters, voltmeters, wattmeters,
energy meters (watt-hour meters), ampere-hour meters, etc.

Classification of Secondary Instruments:


(a) Classification based on the various effects of electric current (or voltage) upon which
their operation depend. They are:
1. Magnetic effect: Used in ammeters, voltmeters, watt-meters, integrating meters etc.
2. Heating effect: Used in ammeters and voltmeters.
3. Chemical effect: Used in dc ampere hour meters.
4. Electrostatic effect: Used in voltmeters.
5. Electromagnetic induction effect: Used in ac ammeters, voltmeters, wattmeters and
integrating meters.
Generally, the magnetic effect and the electromagnetic induction effect are utilized in the
construction of commercial instruments. Some instruments are also named after the effect used,
such as the electrostatic voltmeter, induction instruments, etc.

(b) Classification based on the Nature of their Operations


The instruments are as follows;
• Indicating instruments: Indicating instruments show the value of the quantity measured by means
of a pointer that moves over a scale. Examples of these instruments are the ammeter, voltmeter,
wattmeter, etc.
• Recording instruments: These instruments continuously record the variation of an electrical
quantity with respect to time. Quantities such as current, voltage, power, etc. (which may be
measured with indicating instruments) may be arranged to be recorded by a suitable recording
mechanism. Graphic recorders and galvanometer recorders are examples of these instruments.
• Integrating instruments: These instruments record the total quantity of electricity consumed
during a particular period of time. Some widely used integrating instruments are the
ampere-hour meter, the kilowatt-hour (kWh) meter, the kilovolt-ampere-hour (kVAh) meter and
energy meters.

Electromechanical indicating instruments: a type of instrument used in electrical engineering
to measure and indicate various electrical quantities such as voltage, current, and power.
A deflection instrument uses a pointer that moves over a calibrated scale to indicate the measured
quantity.

Operating Torque in Electromechanical Indicating Instruments


Three forces operate in the electromechanical mechanism inside the instrument. They are:
1. Deflecting force 2. Controlling force 3. Damping force
1. Deflecting force: - The deflecting force causes the pointer to move from its zero position
when a current flows. The system that produces the deflecting force is known as the
deflecting system; it converts an electrical signal into a mechanical force. In
the PMMC instrument, the deflecting force is magnetic.

The effects used to produce the deflecting torque are as follows:


1. Magnetic Effect: When a current flows through a conductor, a magnetic field is produced around
the wire. This is called the magnetic effect of current.
a) When a current-carrying conductor is kept between the poles of a permanent magnet, there
is a force of attraction or repulsion between the wire and the magnet. If the wire is wound into a
coil placed on a spindle with a pointer attached, the pointer gives a deflection on the scale when
current passes through the coil.

b) If a piece of soft iron is brought near a current-carrying coil, the soft iron piece is
attracted by the coil. Attaching a pointer to the soft iron piece gives a reading on the scale.
This principle is used in attraction-type moving-iron instruments.

c) If two soft iron pieces are placed near a current-carrying coil, both are magnetized with the
same polarity, so there is a force of repulsion between them. One iron piece is stationary while the
other is movable, with the pointer attached to the moving piece. This principle is utilized in the
construction of repulsion-type moving-iron instruments.

d) When two current-carrying coils are placed close to each other, there is a force of attraction
or repulsion between them, depending on the directions of the currents. If one coil is movable and
the other fixed, the movable coil moves relative to the fixed one. This principle is utilized in
electrodynamometer-type instruments.

2. Electrodynamic Effect - When two current-carrying coils are placed close to each other, they
produce unlike poles near each other, so there is a force of attraction between them. One coil is
fixed and the other is free to move; the pointer is attached to the moving coil. Such instruments
are called electrodynamic instruments. Wattmeters are generally constructed using this effect.

3. Thermal Effect - If the current to be measured is passed through a wire, heat is produced
in the wire. With the help of a thermocouple (a transducer), the heat produced in the wire is
converted into an emf, and the current produced by this emf is measured by an ammeter. Hot-wire
instruments also use the thermal effect to produce the deflecting torque.

4. Electrostatic Effect - A force exists between two charged plates. One plate is fixed while the
other is movable, with the pointer attached to the moving plate. Note that only voltmeters can be
made using this effect; such voltmeters are called electrostatic voltmeters.

Fig – An electrostatic instrument

5. Induction Effect - If a disc is placed between the poles of an electromagnet, an emf is
induced and hence an eddy current is produced in the disc. By Fleming's left-hand rule, a force
is exerted on the disc due to the interaction of the eddy current and the magnetic field of the
electromagnet, making the disc rotate. A pointer is attached to the disc. This effect is utilized
in ac energy meters; instruments using this effect cannot be used to measure dc quantities.

6. Chemical Effect - This effect is utilized in ampere-hour meters, which measure the capacity
of batteries.

2. Controlling force: - The controlling force in the PMMC instrument is provided by spiral
springs. The springs retain the coil and pointer at their zero position when no current is
flowing. The coil and pointer stop rotating when the controlling force becomes equal to
the deflecting force (see figure below). The spring material must be nonmagnetic to avoid any
magnetic-field influence on the controlling force.

The controlling torque can be produced in two ways. They are,
• Spring control
• Gravity control

1. SPRING CONTROL
Two hairsprings S1 and S2 are wound on the spindle, coiled in opposite senses so that they act
against each other. When a deflecting torque is applied, the pointer starts moving; at that moment
one of the springs unwinds while the other is twisted more tightly. The spring that is wound
tighter opposes the deflecting torque with a force producing the controlling torque.
The controlling torque produced is proportional to the angle of deflection (θ) of the pointer,
whereas the deflecting torque (Td) depends upon the current flowing through the coil,
i.e., Td increases with an increase in current and vice versa. At steady state,
Td = Tc
K1 I = K2 θ (K1, K2 = constants)
∴ I ∝ θ

Since the current is directly proportional to the deflection angle, a uniform scale can be graduated.
The springs used are usually made of a low-resistance bronze alloy and consist of a large number
of turns to avoid deformation of the spring.
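This linear balance can be illustrated with a small Python sketch (the torque constants K1 and K2 are assumed illustration values, not taken from any particular instrument):

K1 = 2.0e-6  # deflecting torque per ampere (N·m/A), assumed
K2 = 1.0e-6  # spring constant (N·m/rad), assumed

def current_from_deflection(theta_rad):
    # at balance Td = Tc: K1*I = K2*theta, so I = K2*theta/K1
    return K2 * theta_rad / K1

for theta in (0.5, 1.0, 1.5):
    print(theta, current_from_deflection(theta))  # equal angle steps -> equal current steps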

Advantages of Spring Control:


• A uniform scale is obtained, since the current flowing is proportional to the deflection.
• Readings with high accuracy can be obtained.

• This method of providing controlling torque is simple.
• This system can be used in any position.
• It is most preferable and is commonly used in many systems.

Disadvantages of Spring Control:


• The spring control torque depends upon temperature changes.
• The spring control system is costly.
• The controlling torque cannot be varied.

2. GRAVITY CONTROL:
Gravity-controlled instruments produce the controlling torque by means of a small weight and are
independent of temperature. In such instruments, when the moving system deflects through an
angle θ from its zero position, a controlling torque (Tc) is produced by a small weight W attached
to the moving arm. By adjusting the position of the weight W on the arm, the controlling torque Tc
can be varied, as shown in the figure below.

Initially, i.e., at the zero position, the control weight hangs vertically at position A. When the
pointer is deflected through an angle θ, the control weight moves to position B as shown in the
figure below, producing a restoring component W sinθ. The controlling torque is therefore
Tc = W l sinθ
where l is the distance of the weight from the axis of rotation. At steady state, Td = Tc, so
K I = W l sinθ and the current is proportional to sinθ rather than θ.

The disadvantage of this system is that the spindle must always be mounted vertically and the
instrument leveled properly, since the controlling torque is produced by gravity. The advantages of
gravity control are that it is independent of temperature, cheap, and does not deteriorate with time.
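The non-uniform scale that results can be seen numerically in the sketch below (all constants are assumed illustration values):

import math

K = 1.0e-3   # deflecting torque per ampere (N·m/A), assumed
W = 0.05     # control weight (N), assumed
l = 0.02     # distance of the weight from the axis (m), assumed

for I in (0.2, 0.4, 0.6, 0.8):
    # at balance K*I = W*l*sin(theta), so theta = asin(K*I / (W*l))
    theta = math.degrees(math.asin(K * I / (W * l)))
    print(f"I = {I} A -> deflection = {theta:.1f} deg")  # equal current steps, unequal angles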

Advantages of Gravity Control:


• Gravity control is cheaper than spring control.
• This method of providing controlling torque is simple.
• In this method, variable controlling torques are achieved.
• The controlling torque does not depend upon the temperature changes.

Disadvantages of Gravity Control:


• Gravity controlled instruments must be kept in a vertical position.
• The scale is not uniform.
• The system used in this method is delicate.
• This method is very rarely used and is only used in some of the indicating and portable
instruments.

3. Damping force: The damping force is required to minimize (or damp out) oscillations
of the pointer and coil so that they settle quickly at their final position. Damping torque
is the torque that opposes the natural oscillation of the moving system. An indicating
instrument must provide this damping torque. The damping torque and the speed of rotation
of the moving system are proportional to each other. This relationship is given as
Tv = kv (dθ/dt), where kv is the damping torque constant and dθ/dt is the speed of rotation
of the moving system.
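The effect of this term can be illustrated with a crude time-stepped simulation of a spring-controlled movement (a sketch only; the inertia, spring and damping constants are assumed values):

J = 1.0e-6    # moment of inertia of the moving system (kg·m²), assumed
kc = 2.0e-4   # spring (controlling) constant (N·m/rad), assumed
kv = 1.0e-5   # damping torque constant (N·m·s/rad), assumed
Td = 1.0e-4   # constant deflecting torque (N·m), assumed

theta, omega, dt = 0.0, 0.0, 1.0e-4
for _ in range(30000):  # simulate 3 s of pointer motion
    accel = (Td - kc * theta - kv * omega) / J  # deflecting - controlling - damping torque
    omega += accel * dt
    theta += omega * dt

print(f"settled deflection: {theta:.3f} rad (steady state Td/kc = {Td/kc:.3f} rad)")
# with kv = 0 the pointer would oscillate about Td/kc indefinitely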

There are four ways of producing damping torque or force, and they are:
• Air friction damping.
• Fluid friction damping.
• Eddy current damping.
• Electromagnetic damping.

Air Friction Damping


The air friction damping is created in an air chamber by an aluminum piston moving in and out.
As the piston enters the chamber, the air inside is compressed and opposes the motion; as the
piston moves out of the chamber, it again experiences an opposing force. Air friction damping is
the most suitable method of damping where the operating (magnetic) field is relatively weak,
because the damper contains no magnetic components that could distort the field. The chamber
has a small opening, and its cross section may be rectangular or circular.

Fluid Friction Damping


The fluid friction damping is created by the motion of a disc in and out of a liquid, generally oil.
The working of fluid friction damping is similar to that of air friction damping; the only
difference is that a fluid-filled chamber is used instead of air. However, oil damping is not much
used because of several disadvantages, such as leakage of oil and the need to keep the instrument
in a vertical position.

Eddy Current Damping


In eddy current damping, eddy currents induced in a moving conductor interact with a magnetic
field to create an electromagnetic torque that opposes the motion. The damping torque produced
is proportional to the strength of the eddy current and of the magnetic field. Eddy current damping
is considered one of the most efficient methods of producing damping torque; its disadvantage is
that the permanent magnet used may distort a weak operating field. The damper consists of a thin
disc of a conducting but non-magnetic material, such as copper or aluminum, mounted on the
spindle that carries the moving system and pointer of the instrument. The disc is placed between
the poles of a permanent magnet.

Fig – disc type eddy current damping


Electromagnetic Damping
Electromagnetic damping is achieved by the current induced in the coil circuit itself, such that
the torque generated acts against the natural movement of the coil. The disadvantage of
electromagnetic damping is similar to that of eddy current damping. This method of damping is
commonly used in galvanometers.

(c) Classification based on the kind of current measured (the measurand).


Under this heading, we have:
• Direct current (dc) instruments
• Alternating current (ac) instruments
• Both direct current and alternating current instruments (dc/ac instruments).

(d) Classification based on the method used.


Under this category, we have:
1. Direct measuring instruments: These instruments convert the energy of the measured
quantity directly into energy that actuates the instrument. They are the most widely used in
engineering practice because they are simple and inexpensive, and the time involved in the
measurement is short. Examples are the ammeter, voltmeter, wattmeter, etc.
2. Comparison instruments: These instruments measure the unknown quantity by
comparison with a standard. Examples are dc and ac bridges and potentiometers. They are
used when a higher accuracy of measurement is needed.
3. Nature of contact: - The instruments can be classified into two types:
• Contact type: the initial sensing element comes into direct contact with the medium whose
parameter is to be measured, e.g., temperature measurement by a thermometer.
• Non-contact type: the initial sensing element does not come into contact with the medium
whose parameter is to be measured, e.g., temperature measurement by a radiation pyrometer.

4. Signal being processed: - The instruments can be classified as:

a. Analog measuring instruments: the output is in the form of a continuous (step-less) signal.
b. Digital measuring instruments: the output is in the form of a digital signal.

5. Condition of pointer: - The classification is as follows:

a. Null type: the pointer is maintained at a fixed (null) position, and the measurement is made by balancing.
b. Deflection type: the pointer is deflected from its zero position to indicate the measured quantity.

6. Power source required: - The classification is as follows:

a. Self-sufficient instruments (active): do not need external power.
b. Power-operated instruments (passive): need an external source of power.

The classification of measuring instruments is summarized as follows:

Measuring instruments
• Primary (absolute) and secondary instruments
• Secondary instruments: indicating, recording and integrating instruments
• Analog and digital instruments
• Mechanical, electrical, electronic and electromechanical instruments
• Manual and automatic instruments
• Self-operated and power-operated instruments
• Deflection-type and non-deflection (null)-type instruments

LESSON THREE
PERFORMANCE CHARACTERISTICS OF MEASURING INSTRUMENTS
The response of an instrument to a particular input is the guiding factor in choosing among the
available options. The input to the instrument can be constant or varying with time; the
performance characteristics are therefore divided into:
1. Static Performance Characteristics
2. Dynamic Performance Characteristics

1. Static performance characteristics: The set of criteria defined for instruments that
measure quantities which remain constant or vary only slowly with time is called the
'static characteristics'.

The static characteristics are; (i) Accuracy (ii) Precision (iii) Sensitivity (iv) Linearity (v)
Reproducibility (vi) Repeatability (vii) Resolution (viii) Threshold (ix) Drift (x) Stability (xi)
Tolerance (xii) Range or span

ACCURACY: It is the degree of closeness with which the reading approaches the true value of
the quantity being measured. Accuracy can be expressed in the following ways:
a) Point accuracy: such accuracy is specified at only one particular point of the scale; it gives
no information about the accuracy at any other point on the scale.
b) Accuracy as a percentage of scale span: when an instrument has a uniform scale, its
accuracy may be expressed in terms of the scale range.
c) Accuracy as a percentage of true value: the best way to specify accuracy is in terms of the
true value of the quantity being measured.

PRECISION: It is a measure of reproducibility, i.e., given a fixed value of a quantity, precision
is a measure of the degree of agreement within a group of measurements. Precision is
composed of two characteristics:
1) Conformity: Consider a resistor with a true value of 2,385,692 Ω, measured by an ohmmeter
that consistently reads 2.4 MΩ because no finer scale is available. The error created by this
limitation of the scale reading is a precision error.
2) Number of significant figures: The precision of the measurement is indicated by the number
of significant figures in which the reading is expressed. The significant figures convey the actual
information about the magnitude and the measurement precision of the quantity.
SENSITIVITY: The sensitivity denotes the smallest change in the measured variable to which
the instrument responds. It is defined as the ratio of the change in the output of an instrument to
the change in the value of the quantity being measured. Mathematically,

Sensitivity = change in output / change in input = Δqo / Δqi
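A trivial sketch of this ratio (the example numbers are assumed: a scale that moves 10 mm for a 5 °C change in input):

def sensitivity(delta_output, delta_input):
    # ratio of the change in instrument output to the change in measured quantity
    return delta_output / delta_input

print(sensitivity(10.0, 5.0))  # 10 mm per 5 deg C -> 2.0 mm/deg C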

REPRODUCIBILITY: It is the degree of closeness with which a given value may be repeatedly
measured. It is specified in terms of scale readings over a given period of time.
REPEATABILITY: It is defined as the variation of the scale readings; it is random in nature.
DRIFT: Drift may be classified into three categories:
a) Zero drift: If the whole calibration gradually shifts due to slippage, permanent set, or
undue warming up of electronic tube circuits, zero drift sets in.

b) Span drift or sensitivity drift: If there is a proportional change in the indication all along the
upward scale, the drift is called span drift or sensitivity drift.

c) Zonal drift: When the drift occurs only over a portion of the span of an instrument, it is called
zonal drift.
RESOLUTION: If the input is slowly increased from some arbitrary value, it will be found that
the output does not change at all until a certain increment of input is exceeded. This increment is
called the resolution.
THRESHOLD: If the instrument input is increased very gradually from zero there will be some
minimum value below which no output change can be detected. This minimum value defines the
threshold of the instrument.
STABILITY: It is the ability of an instrument to retain its performance throughout its specified
operating life.
TOLERANCE: The maximum allowable error in the measurement is specified in terms of some
value which is called tolerance.
RANGE OR SPAN: The minimum and maximum values of a quantity that an instrument is
designed to measure define its range; the difference between them is the span.

2. Dynamic performance characteristics: The set of criteria defined for instruments that
measure quantities changing rapidly with time is called the dynamic characteristics.

The dynamic characteristics are: i) Speed of response ii) Measuring lag iii) Fidelity iv) Dynamic error

SPEED OF RESPONSE: It is defined as the rapidity with which a measurement system responds
to changes in the measured quantity.

MEASURING LAG: It is the retardation or delay in the response of a measurement system to
changes in the measured quantity. Measuring lags are of two types: a) Retardation type: the
response of the measurement system begins immediately after the change in the measured
quantity has occurred. b) Time delay lag: the response of the measurement system begins after a
dead time following the application of the input.

FIDELITY: It is defined as the degree to which a measurement system indicates changes in the
measured quantity without dynamic error.

DYNAMIC ERROR: It is the difference between the true value of the quantity changing with
time and the value indicated by the measurement system, assuming zero static error. It is also
called the measurement error.
LESSON FOUR
CALIBRATION OF MEASURING INSTRUMENT
Calibration is the process of adjusting and verifying the accuracy of a measuring instrument or
system, such as an electronic device or sensor, to ensure that it provides correct readings or
outputs within specified tolerance levels. This helps to ensure that the device operates within its
specified accuracy range and provides reliable, consistent measurements over time.
Most instruments contain a facility for making two adjustments. These are a. the RANGE
adjustment and b. the ZERO adjustment.
In order to calibrate the instrument, an accurate reference gauge is required. This is likely to be a
SECONDARY STANDARD. Instruments used as secondary standards have themselves
been calibrated against a PRIMARY STANDARD.
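A minimal sketch of how these two adjustments correct a raw reading (the function name and values below are illustrative assumptions):

def corrected_reading(raw, zero_offset, span_factor):
    # ZERO adjustment removes the offset; RANGE (span) adjustment rescales the reading
    return (raw - zero_offset) * span_factor

# an instrument that reads 0.2 at zero input and 9.8 at a true full scale of 10.0:
span_factor = 10.0 / (9.8 - 0.2)
print(corrected_reading(5.0, 0.2, span_factor))  # raw mid-range reading corrected to 5.0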

4.1 WHY IS CALIBRATION IMPORTANT?
Calibration is important for several reasons:
1. Accuracy: Calibration helps to maintain the accuracy of an electronic device or system,
ensuring that it provides accurate and reliable measurements. This is important in
applications where precise measurements are required, such as scientific experiments,
industrial processes, or quality control.
2. Compliance: Calibration is often required by industry standards, regulations, and
quality control systems to ensure that electronic devices comply with established
specifications and requirements. This helps to ensure that devices are safe and meet
performance standards, and also helps to maintain consistency across different devices
and systems.
3. Safety: In some applications, such as medical equipment or safety-critical systems,
accurate and reliable measurements are crucial for ensuring the safety of the user or the
environment. Calibration helps to minimize the risk of safety incidents by ensuring that
devices are operating within their specified accuracy range.
4. Quality control: Calibration is a crucial component of quality control processes. It
helps ensure that electronic devices provide consistent and accurate measurements,
which is important for product quality and reliability. This helps to reduce the risk of
producing faulty or defective products and minimizes the need for rework or customer
returns.
5. Maintenance: Regular calibration can help to identify and correct problems with
electronic devices before they become major issues, reducing the need for costly repairs
and downtime. This helps to maintain the performance and longevity of the device and
ensures that it continues to operate within its specified accuracy range over time.
These are the main reasons why calibration is important, and why it is a critical component of
many electronic and industrial applications.

4.2 TYPES OF CALIBRATION


Below are some of the main types of calibration in electronics:
1. Dynamic calibration: This type of calibration involves measuring the response of a
device to a changing input signal. Dynamic calibration is commonly used for devices
such as accelerometers, microphones, and other transducers.
2. Static calibration: This type of calibration involves measuring the output of a device
at a fixed input signal. Static calibration is commonly used for devices such as voltage
or current sources, digital-to-analog converters, and other signal generators.
3. Field calibration: This type of calibration involves adjusting the readings of a device
in its actual operating environment. Field calibration is commonly used for devices such
as temperature sensors, pressure transducers, and other sensors that are installed in
remote locations.
4. Traceable calibration: This type of calibration involves comparing the readings of a
device to a reference standard that is traceable to national or international standards.
Traceable calibration is commonly used to ensure the accuracy and reliability of devices
used in scientific experiments, industrial processes, and other applications that require
accurate measurements.
5. Master calibration: This type of calibration involves using a highly accurate reference
standard to calibrate other standards and measuring devices. Master calibration is

commonly used in metrology labs and other organizations responsible for maintaining
the accuracy of calibration equipment and procedures.

4.3 CALIBRATION PROCESS


The calibration process in electronics generally involves the following steps:
1. Preparation: This step involves ensuring that the device to be calibrated is properly
cleaned and in good working condition and that all necessary tools and reference
standards are available.
2. Connection: The device to be calibrated is connected to the reference standard and any
necessary test equipment is set up.
3. Measurement: The device is then measured using the reference standard, and the
readings are compared to the known values of the reference standard.
4. Adjustment: If necessary, the device is adjusted to bring its readings into alignment
with the reference standard. This may involve adjusting internal electronics or physical
components or making changes to the device’s software or firmware.
5. Documentation: The results of the calibration are documented, including the readings
of the device before and after calibration, the reference standard used, and any
adjustments made to the device.
6. Verification: The device is then re-measured to verify that it is providing accurate and
consistent readings, and to ensure that the calibration process was successful.
7. Repeat: If necessary, the calibration process may be repeated several times to ensure
that the device provides accurate readings.
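These steps amount to a measure-compare-adjust-verify loop. The sketch below is purely illustrative; the device read/adjust functions and the tolerance are hypothetical, not a real instrument API:

def calibrate(read_device, adjust_device, reference_value, tolerance, max_passes=5):
    for _ in range(max_passes):
        error = read_device() - reference_value  # steps 2-3: measure against the reference
        if abs(error) <= tolerance:              # step 6: verify within tolerance
            return True
        adjust_device(-error)                    # step 4: adjust toward the reference
    return False                                 # step 7: repeat limit reached

class ToyDevice:                                 # a fake device with a 0.7-unit offset
    def __init__(self):
        self.offset = 0.7
    def read(self):
        return 10.0 + self.offset
    def adjust(self, correction):
        self.offset += correction

dev = ToyDevice()
print(calibrate(dev.read, dev.adjust, reference_value=10.0, tolerance=0.01))  # True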

4.4 APPLICATIONS OF CALIBRATION


Calibration is used in a wide range of applications in electronics, including:
1. Manufacturing and quality control: Calibration is used in manufacturing processes
to ensure that electronic devices and systems are producing consistent and accurate
results, and to maintain quality control throughout the production process.
2. Medical and scientific research: Calibration is used in medical and scientific research
to ensure that electronic devices and systems used in experiments and research projects
are providing accurate and reliable data.
3. Environmental monitoring: It is used in environmental monitoring to ensure that
electronic sensors and devices used for measuring temperature, pressure, humidity, and
other environmental factors are providing accurate and consistent readings.
4. Aerospace and defense: Calibration is used in aerospace and defense applications to
ensure that electronic devices and systems used in these industries are providing
accurate and reliable readings and are in compliance with industry standards and
regulations.
5. Energy production and distribution: It is used in the energy industry to ensure that
electronic devices and systems used for generating, transmitting, and distributing
electrical power are providing accurate and reliable readings and are in compliance with
industry standards and regulations.
6. Consumer electronics: Calibration is also used in consumer electronics, such as
smartphones, televisions, and other devices, to ensure that they are providing accurate
and consistent readings and are functioning correctly.

These are some of the main applications of calibration in electronics; the specific applications
can vary depending on the type of device being calibrated and the level of accuracy required for
the application.

4.5 WHAT ARE THE CHALLENGES?


Calibration of electronic devices and systems can face several challenges such as:
1. Complexity: Some electronic devices and systems are complex and have many
components, making calibration a time-consuming and challenging process.
2. Cost: Calibration equipment and reference standards can be expensive, and maintaining
a calibration program can also be costly.
3. Accuracy: Calibration is only as accurate as the reference standard used, and some
devices may require highly accurate reference standards that are difficult to obtain or
maintain.
4. Environmental factors: Environmental factors such as temperature, humidity, and
vibration can impact the accuracy of electronic devices and systems and may require
special procedures and techniques to overcome.
5. Interference: Electronic interference from other devices and systems can impact the
accuracy of calibration results and may require special procedures and techniques to
mitigate.
6. Compliance: Some industries and applications may require compliance with specific
standards and regulations, which can present additional challenges and requirements for
calibration.
7. Maintenance: Electronic devices and systems may require regular calibration to
maintain their accuracy over time, and the cost and effort involved in maintaining a
calibration program can be significant.
These are some of the main challenges associated with calibration in electronics; the specific
challenges can vary depending on the type of device being calibrated and the level of accuracy
required for the application.

4.6 CALIBRATION STANDARDS FOR ELECTRONICS


Different types of electronic devices and systems may require different calibration standards,
based on the quantity being measured and the level of accuracy required.
Some common examples of calibration standards for electronics include:
1. Voltage reference: A voltage reference is used to calibrate voltage meters and other
devices that measure electrical potential.
2. Resistance standard: A resistance standard is used to calibrate resistance meters and
other devices that measure electrical resistance.
3. Power standard: A power standard is used to calibrate power meters and other devices
that measure electrical power.
4. Time and frequency standards: These are used to calibrate clocks, oscillators, and
other devices that measure time or frequency.
5. RF power standards: These are used to calibrate radio frequency power meters and
other devices that measure RF power.
6. Impedance standards: These are used to calibrate impedance meters and other devices
that measure the opposition to electrical flow (impedance).

7. Spectrophotometer standards: These are used to calibrate spectrophotometers and
other devices that measure light and color.
Each of these standards must be regularly calibrated to maintain accuracy, and the choice of the
standard will depend on the specific application and measurement requirements.

4.7 INSTRUMENT CALIBRATION METHODS


The following are some of the most common methods of instrument calibration service:

Pressure Calibration
This calibration process uses gas and hydraulic pressure. A number of pressure balances and
calibrators are generally used, along with a variety of pressure gauges. ISO 17025 (UKAS)
accreditation is often taken into consideration when calibrating pressure instruments, and national
standards must also generally be adhered to. Examples of pressure equipment that can be tested
for calibration include:
• Barometers
• Analogue Pressure Gauges
• Digital Pressure Gauges
• Digital Indicators
• Transmitters
• Test Gauges

Electrical Calibration
This calibration service is used to measure voltage, current, frequency and resistance. The process
also covers resistance monitoring and thermocouple simulation for process instrumentation.
Examples of electrical equipment that can be tested for calibration include:
• Multi-meters
• Counter timers
• Insulation Testers
• Loop Testers
• Clamp Meters
• Data Loggers

Mechanical Calibration
Mechanical calibration facilities are temperature controlled. A number of dimensional, mass,
force, torque and vibration quantities are calibrated during the testing process. Examples
of mechanical equipment that can be tested for calibration include:
• Weight & Mass Sets
• Torque Wrenches & Screwdrivers
• Micrometers, Vernier Calipers, Height Gauges
• Accelerometers
• Load Cells & Force Gauges
Equipment and instruments for which mechanical and electro-mechanical calibration services
are provided include, but are not limited to:
• Force gauge calibration for testing machines, weighing devices or other equipment
measuring force, to ensure accurate readings for tension, compression, and torque.

• Pressure gauge calibration to ensure process and product compliance for a variety of
gauges, including air, oxygen, and hydraulic dial and digital gauges, up to high-accuracy
pressure calibrators.
• Strain gauge calibration (including load cells, transducers, etc.) to ensure the equipment
accurately converts a physical characteristic (e.g. deflection) into an output signal displayed
in psi, newtons, foot-pounds or most other unit quantities.
• Vacuum gauge calibration, including low-level capacitance diaphragms and transducers.

Temperature and Humidity Calibration


Temperature calibration usually takes place in a controlled environment. A number of different
types of equipment can be tested using temperature calibration, including the following;
• Thermometers/Thermocouples
• Thermal Cameras
• Infrared Meters
• Chambers/Furnaces
• Weather Stations
• Data Acquisition Systems
Again, humidity calibration will usually take place in a controlled environment and will generally
cover a range of 10 - 98% RH. A variety of instruments can be tested for humidity calibration,
including the following;
• Humidity Recorders
• Humidity Generators
• Digital Indicators and Probes
• Transmitters
• Psychrometers
• Thermohygrographs
• Tinytag Sensors
The calibration processes listed above are perhaps the most commonly used. A few additional
examples of calibration types are:
• Waterflow Calibration
• Oilflow Calibration
• Air Velocity Calibration
• Air Flow Calibration

Electronic Calibration
Electronic calibration is one of the three main types of calibration methods used today; it deals
with the calibration of electric and electronic instruments.
It involves either supplying (stimulating) a known electrical signal to, or measuring the electrical
signal of, the instrument being calibrated with respect to that of a master (standard) instrument.
Known reference standards are used for the calibration to ensure traceability. These
internationally defined units include the volt, the watt and the ampere, amongst others.

LESSON FIVE
CALIBRATION ERRORS
Calibration errors in measuring instruments include:-
1. Range and Zero Error
After obtaining the correct zero and range for the instrument, a calibration graph should be
produced. This involves plotting the indicated reading against the correct reading from the
standard gauge, in about ten steps, first with increasing signals and then with decreasing signals.
Several forms of error could show up; if the zero or range is still incorrect, the error will appear
in the graph as a constant offset or an incorrect slope, respectively.

2. Hysteresis and Non Linear Errors


Hysteresis is produced when the displayed values are too small for increasing signals and too large
for decreasing signals. In mechanical instruments this is commonly caused by loose gears and
linkages and by friction; it also occurs widely in devices involving magnetization and
demagnetization.
The calibration may be correct at the maximum and minimum values of the range, but the graph
joining them may not be a straight line (when it ought to be). This is a non-linear error. The
instrument may have an adjustment for this, and it may be possible to make the calibration correct
at mid-range as well.

4.8 CALIBRATION MAY BE REQUIRED FOR THE FOLLOWING REASONS:


• a new instrument (before it is first used)
• after an instrument has been repaired or modified
• when a specified time period has elapsed
• when a specified usage (operating hours) has elapsed
• after an event, for example:
o after an instrument has been exposed to shock, vibration, or physical damage,
which might potentially have compromised the integrity of its calibration
o after sudden changes in weather

4.9 ERRORS IN MEASUREMENT
In order to understand the concept of errors in measurement, we should know the two terms that
define the error. These include:
True value (absolute or exact value)
In practice, it is not possible to determine the true value of a quantity by experimental means. The
true value is defined as the average of an infinite number of measured values, as the average
deviation due to the various contributing factors approaches zero.
Measured value
It is defined as the approximation to the true value obtained by measurement. It can be found by
taking the MEAN of several measured readings during an experiment.
Absolute error or static error
Static error is defined as the difference between the measured value and the true value of the
quantity. Mathematically, we can write the expression for the error as δA = Am − A, where δA is
the static error, Am is the measured value and A is the true value.

Relative Error or Fractional Error


The relative error is the ratio of the absolute error to the true value of the unknown quantity being
measured. Mathematically,

εr = δA / A

When the absolute error δA is negligible, i.e., when the difference between the true value A and
the measured value Am of the unknown quantity is very small, the relative error may be expressed as

εr ≈ δA / Am

The relative error is generally expressed as a fraction (e.g., 5 parts in 1000) or as a percentage.
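Both definitions translate directly into code; a small sketch:

def absolute_error(measured, true_value):
    # static (absolute) error: dA = Am - A
    return measured - true_value

def relative_error(measured, true_value):
    # relative error: er = dA / A
    return (measured - true_value) / true_value

print(absolute_error(10.25, 10.22))  # ~0.03 (see Example 1 below)
print(relative_error(10.25, 10.22))  # ~0.0029, i.e. about 0.29%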

Limiting Errors or Guarantee Errors


The concept of guarantee errors becomes clear from an example. Suppose a manufacturer of
ammeters promises that the error in any ammeter sold is not greater than a limit that the
manufacturer sets. This limit of error is known as the limiting error or guarantee error.
A quantity with a specified magnitude Am and a maximum or limiting error ±δA must have a true
magnitude between the limits

Am − δA ≤ A ≤ Am + δA

For example, if the measured value of a resistance is 100 Ω with a limiting error of ±0.5 Ω, then
the true value of the resistance lies between the limits 100 ± 0.5 Ω, i.e., between 99.5 Ω and 100.5 Ω.
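In code form, with the same numbers as the example above:

Am, dA = 100.0, 0.5        # specified magnitude and limiting error
print(Am - dA, Am + dA)    # the true value lies between 99.5 and 100.5 ohms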

TYPES OF ERRORS
The types of errors in measurement are follows;
i) Gross errors
ii) Systematic errors
iii) Random errors
i) Gross errors: Gross errors are caused by human mistakes when using instruments or meters,
and when calculating and recording measurement results.
ii) Systematic errors: Errors that occur due to faults in the measuring device or measuring
technique are known as systematic errors. They typically appear as a constant (zero) error, which
may be positive or negative. These errors are classified into: (1) instrumental errors,
(2) environmental errors, and (3) observational errors.
a. Instrumental errors: Instrumental errors are caused by misuse of the instrument, loading effects,
ageing of instruments, etc.
b. Environmental errors: These errors are caused by external factors such as the surrounding
temperature, pressure, humidity, dust, etc.
c. Observational errors: These errors occur due to human observational factors, e.g., errors due to
parallax, reading difficulty, etc.
iii) Random errors: Random errors are caused by sudden changes in experimental conditions,
noise, and fatigue of laboratory personnel. These errors may be positive or negative. Examples of
random errors include changes in humidity, unexpected changes in temperature and fluctuations
in voltage. These errors may be reduced by taking the average of a large number of readings.

Example 1
The measured value of a resistance is 10.25 Ω, whereas its true value is 10.22 Ω. Determine the
absolute error of the measurement.
Answer
Measured value Am = 10.25 Ω
True value A = 10.22 Ω
Absolute error δA = Am − A = 10.25 − 10.22 = 0.03 Ω

Example 2
The measured value of a capacitor is 205.3µF, whereas its true value is 201.4 µF. Determine the
relative error.
Solution
Measured value Am = 205.3 × 10⁻⁶ F
True value A = 201.4 × 10⁻⁶ F
Absolute error δA = Am − A
= 205.3 × 10⁻⁶ − 201.4 × 10⁻⁶
= 3.9 × 10⁻⁶ F
Relative error εr = δA / A = (3.9 × 10⁻⁶) / (201.4 × 10⁻⁶) ≈ 0.0194, i.e., about 1.94%
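Both examples can be checked in a few lines (values from the text; expect small floating-point rounding in the output):

Am1, A1 = 10.25, 10.22
print(Am1 - A1)             # Example 1: absolute error ~0.03 ohm

Am2, A2 = 205.3e-6, 201.4e-6
print(Am2 - A2)             # Example 2: absolute error ~3.9e-06 F
print((Am2 - A2) / A2)      # Example 2: relative error ~0.0194, about 1.94%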

2. Statistical analysis of random errors in measurement
1. (a) What is the equation that may be used to find the accumulated error in a measuring
instrument? Define all parameters used.
(b) The sum of ten measured values of the resistance of a particular resistor is 1014.0 Ω, the
individual readings (in ohms) being 101.2, 101.7, 101.3, 101.7, 101.5, 101.3, m, 101.4, 101.3
and 101.4. Assuming the presence of only random errors, calculate
(i) the arithmetic mean, (ii) the deviations from the mean, (iii) the standard deviation,
(iv) the average deviation, and (v) the probable error.
Answer
(a) The equation that may be used to find the accumulated error in a measuring instrument is given as

e = (eT² + eS² + eA² + eD²)^½

where eT = transducer error, eS = signal conditioning element error, eA = amplifier error and
eD = display element error.
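A numeric sketch of this root-sum-of-squares combination (the four element errors are assumed illustration values):

eT, eS, eA, eD = 0.2, 0.1, 0.15, 0.05            # element errors, assumed
e = (eT**2 + eS**2 + eA**2 + eD**2) ** 0.5
print(e)  # ~0.27, smaller than the simple sum 0.5
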
(b) To find the value of m:
101.2 + 101.7 + 101.3 + 101.7 + 101.5 + 101.3 + 101.4 + 101.3 + 101.4 + m = 1014.0
m = 1014.0 − 912.8 = 101.2 Ω

(i) Arithmetic mean x̄ = 1014.0 / 10 = 101.4 Ω

(ii) Deviation from the mean: d = x − x̄


Reading x (Ω)    d = x − x̄    |d|    d²
101.2            −0.2         0.2    0.04
101.7             0.3         0.3    0.09
101.3            −0.1         0.1    0.01
101.7             0.3         0.3    0.09
101.5             0.1         0.1    0.01
101.3            −0.1         0.1    0.01
101.4             0.0         0.0    0.00
101.2            −0.2         0.2    0.04
101.3            −0.1         0.1    0.01
101.4             0.0         0.0    0.00
∑x = 1014.0      ∑d = 0.0     ∑|d| = 1.4    ∑d² = 0.30

(iii) Standard deviation σ = √(∑d² / (n − 1)) = √(0.30 / 9) = √0.0333 = 0.183 Ω

(iv) Average deviation D = ∑|d| / n = 1.4 / 10 = 0.14 Ω

(v) Probable error r = 0.6745σ = 0.6745 × 0.183 = 0.123 Ω
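Part (b) can be verified with Python's standard library (readings in the table order, with m = 101.2):

import statistics

x = [101.2, 101.7, 101.3, 101.7, 101.5, 101.3, 101.4, 101.2, 101.3, 101.4]
mean = statistics.fmean(x)                            # (i) arithmetic mean = 101.4
deviations = [xi - mean for xi in x]                  # (ii) deviations from the mean
sigma = statistics.stdev(x)                           # (iii) sample std dev (n-1 divisor) ~0.183
avg_dev = sum(abs(d) for d in deviations) / len(x)    # (iv) average deviation = 0.14
r = 0.6745 * sigma                                    # (v) probable error ~0.123
print(mean, sigma, avg_dev, r)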
