
Al-Imam Muhammad Ibn Saud Islamic University – College of Engineering – Department of Electrical Engineering – Prof. Dr. Ali S Hennache – January 3, 2013 – Riyadh, Kingdom of Saudi Arabia

Al-Imam Muhammad Ibn Saud Islamic University
College of Engineering
Department of Electrical Engineering
Prof. Dr. Ali S Hennache

EE 362 – INSTRUMENTATION AND CONTROL SYSTEMS

Chapter One
Introduction to Instrumentation
1.1 Introduction
Instrumentation is the science of automated measurement and control.
Applications of this science abound in modern research, industry, and everyday living.
From automobile engine control systems to home thermostats to aircraft autopilots to
the manufacture of pharmaceutical drugs, automation is everywhere around us.

The objective of any measurement endeavor is to measure a given process variable so that it can, if required, be controlled. Hence, what you cannot measure, you cannot control.

Instruments are used to measure and control the condition of process streams as they pass through a plant. They measure and control process variables such as temperature, flow, level, pressure, and quality. Automatic instrument control systems are most commonly used to continually monitor these process conditions and, without operator intervention, correct any deviation from the required process value. The main reason for using automatic controls is that production is achieved more economically and safely; in fact, some processes could not be kept in a stable condition without automatic control systems. This chapter lays out the basic foundation for measurement and control.

1.2 Common Measurement Terminology


Absolute Error:
The algebraic difference between the indication and the true value of the quantity to be measured: absolute error = indication − true value, i.e. ΔX = X′ − X.

Accuracy:
Accuracy is the conformity of an indicated value to an accepted standard value, or true value. It is usually measured in terms of inaccuracy but expressed as accuracy: a number or quantity that defines the limit the errors will not exceed when the device is used under reference operating conditions. The units used must be stated explicitly, and it is preferred that a + and − sign precede the number or quantity; the absence of a sign implies both signs (±). Accuracy can be expressed in a number of forms:
Accuracy expressed in terms of the measured variable: accuracy = ±1 °F
Accuracy expressed in percent of span: accuracy = ±1/2 %
Accuracy expressed in percent of the upper range-value: accuracy = ±1/2 % of URV

Ambient:
The surroundings or environment in reference to a particular point or object.

Attenuation:
A decrease in signal magnitude over a period of time.

Backlash:
In mechanical engineering, backlash is the clearance between mating components, sometimes described as the amount of lost motion due to clearance or slackness when movement is reversed and contact is re-established.

Bias:
A constant error that occurs during the measurement of an instrument. This error is usually rectified through calibration.
Example: A weighing scale gives a biased reading of 1 kg even without any load applied. Therefore, if a person with a weight of 70 kg weighs himself, the given reading would be 71 kg. This indicates a constant bias of 1 kg to be corrected.
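The bias correction described above is a simple offset subtraction. A minimal sketch (the 1 kg offset and the readings are the example's values; the function name is illustrative, not from the text):

```python
def correct_bias(reading, zero_offset):
    """Subtract the constant bias observed with no load applied."""
    return reading - zero_offset

# The scale shows 1 kg with no load, so the zero offset is 1 kg.
zero_offset = 1.0
print(correct_bias(71.0, zero_offset))  # corrected weight: 70.0 kg
```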

Calibration:
The procedure of comparing and determining the performance accuracy of an instrument is called calibration: configuring a device so that the required output represents (to a defined degree of accuracy) the respective input.
• The calibration of any measuring instrument is necessary to measure the quantity in terms of standard units.
• It is carried out by making adjustments such that the readout device produces zero output for zero input.
• It is the process whereby the magnitude of the output of a measuring instrument is related to the magnitude of the input driving the instrument (e.g. adjusting a weighing scale to read zero when there is nothing on it).
• The accuracy of the instrument depends on the calibration.
• If the output of the measuring instrument is linear and repeatable, it can be easily calibrated.
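A common practical form of this is a two-point (zero and span) calibration, which maps a raw sensor output linearly onto engineering units. This sketch is a generic illustration, not a procedure from the text; the 4–20 mA range and temperature values are assumed for the example:

```python
def make_calibration(raw_zero, raw_span, eng_zero, eng_span):
    """Return a function mapping raw readings to engineering units,
    given the raw outputs observed at two known reference inputs."""
    slope = (eng_span - eng_zero) / (raw_span - raw_zero)
    return lambda raw: eng_zero + slope * (raw - raw_zero)

# Hypothetical transmitter: 4 mA at 0 degC, 20 mA at 150 degC
to_celsius = make_calibration(4.0, 20.0, 0.0, 150.0)
print(to_celsius(12.0))  # mid-scale current reads 75.0 degC
```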
Magnification:
Magnification is the process of enlarging something only in appearance, not in
physical size so that it is more readable.

Closed loop:
Relates to a control loop in which the measured process variable is used to calculate the controller output. In a closed-loop system the control action depends on the measured output, which is compared with the desired output.

Controller:
A device that operates automatically to regulate a controlled variable of a process.

Drift:
An undesirable gradual deviation of the instrument output over a period of time that is unrelated to changes in input, operating conditions, or load.
• An instrument is said to have no drift if it reproduces the same readings at different times for the same variation in the measured quantity.
• It is caused by wear and tear, high stress developed in some parts, etc.
Elevated zero:
Used when the lower range-value is less than zero. Example: range = −20 to 200 °C.

Gain:

This is the ratio of the change of the output to the change in the applied input.
Gain is a special case of sensitivity, where the units for the input and output are
identical and the gain is unitless.

Hunting:
An undesirable oscillation at or near the required setpoint is generally called hunting. Hunting typically occurs when the demands on the system performance are high and possibly exceed the system capabilities. The controller output can be over-controlled due to resolution or accuracy limitations.

Hysteresis:
Hysteresis is the difference between the output for a given input when the input is increasing and the output for the same input when the input is decreasing. When the input of an instrument is slowly varied from zero to full scale and then back to zero, its output varies as shown in the diagram below.

This is where the accuracy of the device depends on the previous value and the direction of variation. Hysteresis causes a device to show an inaccuracy from the correct value, as it is affected by the previous measurement.
• It is caused by friction, slack motion in the bearings and gears, elastic deformation, and magnetic and thermal effects.

Input Signal:
A signal applied to a device, element, or system. The pressure applied to the input connection of a pressure transmitter is an input signal.

Linearity:
Linearity expresses the deviation of the actual reading from a straight line (from a linear relation between input and output). If all outputs are in the same proportion to the corresponding inputs over the span of values, the input-output plot is a straight line; otherwise it is non-linear (see diagram below). For continuous control applications, problems arise because the gain of a non-linear device changes as the ratio of output change to input change varies. In a closed-loop system, changes in gain affect the loop dynamics. In such an application the linearity needs to be assessed, and if a problem exists, the signal needs to be linearised.

Measured Variable:
Is the physical quantity or condition, which is to be measured. Common
measured variables are: Temperature, pressure, rate of flow, level, speed, etc

Measured Signal:
The electrical, mechanical, pneumatic, or other variable applied to the input of a device. In a thermocouple, the measured signal is an EMF, which is the electrical analogue of the temperature applied to the thermocouple. A measured signal is normally produced by the primary element (sensing element) of an instrument.

Output Signal:
A signal delivered by a device, element, or system. The signal (3 to 15 psig, 4 to 20 mA DC, etc.) produced at the output connections of a transmitter is an output signal.

Precision:
• Defined as the capability of an instrument to show the same reading each time it is used (the reproducibility of the instrument).
• An instrument that is precise is not necessarily accurate.
Ramp:
Defines the delayed and accumulated response of the output for a sudden
change in the input.


Range:
The region between the limits within which a quantity is measured, received, or transmitted, expressed by stating the lower and upper range-values, e.g. −20 to +200 °C; 20 to 150 °C; 4 mA to 20 mA.

Range of Span:
Defined as the range of readings between the minimum and maximum values measurable by an instrument. The span of an instrument with a reading range of −100 °C to 100 °C is 200 °C.

Rangeability or turndown:
The ratio of the maximum adjustable span to the minimum adjustable span for a given instrument, e.g. R = 100 bar / 10 bar = 10.

Readability:
Readability refers to the ease with which the readings of a measuring instrument can be read.
• Fine and widely spaced graduation lines improve readability.
• To make micrometers more readable, they are provided with a vernier scale or magnifying devices.

Relative Error:
The ratio between the absolute error and the true value of the quantity to be measured, expressed in percent: x = (ΔX/X) × 100.

Reliability:
The probability that a device will perform within its specifications for the number
of operations or time period specified.

Repeatability:
The ability of an instrument to give identical indications or responses for repeated applications of the same value of the measured quantity under the same conditions of use. Good repeatability does not guarantee accuracy.
It is the ability of a measuring instrument to repeat the same results for measurements of the same quantity when the measurements are carried out:
- by the same observer,
- with the same instrument,
- under the same conditions,
- without any change in location,
- without change in the method of measurement,
- within short intervals of time.


Reproducibility:
The similarity of one measurement to another over time, where the operating
conditions have varied within the time span, but the input is restored.

In other words, reproducibility is the closeness of the agreement between the results of measurements of the same quantity when the individual measurements are carried out:
- by different observers,
- by different methods,
- using different instruments,
- under different conditions, locations, times etc.

Resolution:
• When the input is slowly increased from some non-zero value, the output does not change at all until a certain increment is exceeded; this increment is called the resolution.
• It is the minimum change in the measured variable that produces an effective response of the instrument.
Resonance:
The frequency of oscillation is maintained due to the natural dynamics of the
system.

Response:
When the output of a device is expressed as a function of time (due to an applied
input) the time taken to respond can provide critical information about the suitability of
the device. A slow responding device may not be suitable for an application. This
typically applies to continuous control applications where the response of the device
becomes a dynamic response characteristic of the overall control loop. However, in critical alarm applications where devices are used for point measurement, the response may be just as important. The diagram below shows the response of the system
to a step input.

Sensitivity:
Is the ratio of the change in transducer output to the corresponding change in the
measured value, i.e. sensitivity = (change of output signal) / (change of input signal).
For example: A pressure-to-current converter could have a sensitivity of 0.1 mA / mbar.
• Sensitivity may be defined as the rate of displacement of the indicating device of an instrument with respect to the measured quantity.
• The sensitivity of a thermometer is the length of increase of the liquid column per degree rise in temperature; more sensitive means more noticeable expansion.
• In other words, the sensitivity of an instrument is the ratio of the scale spacing to the scale division value. For example, if on a dial indicator the scale spacing is 1 mm and the scale division value is 0.01 mm, then the sensitivity is 100. It is also called the amplification factor or gearing ratio.
Sensitivity (K) = Δθο / Δθi
Δθο: change in output; Δθi: change in input
Example 1: The resistance value of a platinum resistance thermometer changes when the temperature increases. Therefore, the unit of sensitivity for this equipment is Ohm/°C.
Example 2: Pressure sensor A with an input of 2 bar caused a deviation of 10 degrees. Therefore, the sensitivity of the equipment is 5 degrees/bar.
• The sensitivity of a whole system of elements in series is k = k1 × k2 × k3 × … × kn

Example 3: Consider a measuring system consisting of a transducer (K1), amplifier (K2) and recorder (K3) in series, θi → K1 → K2 → K3 → θο, with the sensitivity of each element given below:
Transducer sensitivity: 0.2 mV/°C
Amplifier gain: 2.0 V/mV
Recorder sensitivity: 5.0 mV/V
Therefore, the sensitivity of the whole system is:
k = k1 × k2 × k3 = (0.2 mV/°C) × (2.0 V/mV) × (5.0 mV/V) = 2.0 mV/°C
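The series-sensitivity rule can be sketched in a few lines of Python (a minimal illustration using the element values of Example 3; the function name is assumed):

```python
from math import prod

def system_sensitivity(element_sensitivities):
    """Overall sensitivity of elements in series is the product
    of the individual sensitivities: k = k1 * k2 * ... * kn."""
    return prod(element_sensitivities)

# Example 3: transducer 0.2 mV/degC, amplifier 2.0 V/mV, recorder 5.0 mV/V
k = system_sensitivity([0.2, 2.0, 5.0])
print(k)  # approximately 2.0 (mV/degC)
```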

Example 4: The output of a platinum resistance thermometer (RTD) is as follows:

Input (°C)   Output (Ohm)
0            0
100          200
200          400
300          600
400          800

Calculate the sensitivity of the equipment.
Answer:
On the graph of input versus output, the sensitivity is the slope:
K = Δθο/Δθi = (400 − 200) Ohm / (200 − 100) °C = 2 Ohm/°C

Setpoint:
Used in closed-loop control, the setpoint is the desired value of the process variable. It is represented in the units of the process variable and is used by the controller to determine the output to the process.

Span:
The algebraic difference between the upper and lower range-values. Range: −20 to 200 °C, span is 220 °C; range: 20 to 150 °C, span is 130 °C.
• Input span: I_max − I_min
• Output span: O_max − O_min

Span Adjustment:
The difference between the maximum and minimum range values. When
provided in an instrument, this changes the slope of the input-output curve.

Steady state:
Used in closed loop control where the process no longer oscillates or changes
and settles at some defined value.

Suppressed zero:
Used when the lower range-value is greater than zero. Example: range = 20 to 150 °C.

Threshold:
The minimum value below which no output change can be detected when the input of an instrument is increased gradually from zero is called the threshold of the instrument. Threshold may be caused by backlash.
Time constant:
The time constant of a first order system is defined as the time taken for the
output to reach 63.2% of the total change, when subjected to a step input change.
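The 63.2% figure follows from the first-order step response y(t) = 1 − e^(−t/τ): at t = τ the output has covered 1 − e⁻¹ ≈ 0.632 of the total change. A minimal sketch (the 5 s time constant is an arbitrary illustrative value):

```python
import math

def step_response(t, tau):
    """Fraction of the total change reached at time t for a
    first-order system with time constant tau, given a step input."""
    return 1.0 - math.exp(-t / tau)

tau = 5.0  # illustrative time constant, seconds
print(round(step_response(tau, tau), 3))      # 0.632 at t = tau
print(round(step_response(5 * tau, tau), 3))  # 0.993 at t = 5*tau
```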

Tolerance:
- Closely related to the accuracy of equipment; the accuracy of an equipment is sometimes referred to in the form of a tolerance limit.
- Defined as the maximum error expected in an instrument.
- Expresses the maximum deviation of an output component at a certain value.

Transducer:
An element or device that converts information from one form (usually a physical quantity such as temperature or pressure) to another (usually electrical, such as volts, millivolts, or a resistance change). A transducer can be considered to comprise a sensor at the front end (at the process) and a transmitter.

Transient:
A sudden change in a variable, which is neither a controlled response, nor long
lasting.

Transmitter:
A device that converts one form of energy to another, usually from mechanical to electrical, for the purpose of signal integrity over longer transmission distances and compatibility with control equipment.

Uncertainty:
The range of values within which the true value lies with a specified probability. An uncertainty of ±1 % at 95 % confidence means the instrument will give the user a range of ±1 % for 95 readings out of 100.

Variable:
Generally, this is some quantity of the system or process. The two main types of
variables that exist in the system are the measured variable and the controlled variable.
The measured variable is the measured quantity and is also referred to as the process
variable as it measures process information. The controlled variable is the controller
output which controls the process.

Vibration:
This is the periodic motion (mechanical) or oscillation of an object.

Zero adjustment:
The zero of an instrument is the output provided when zero input is applied. The zero adjustment produces a parallel shift of the input-output curve.

1.3 SENSORS
 Human natural observation capabilities are generally not suited to process conditions.
 Instruments must have the capabilities required to match process conditions.
 Process control has the role of a decision maker (like the brain).
Sensors feel the condition and originate the signal, which is then modified and amplified for effective display, transmission, or control.


1.4 INSTRUMENT
 Typical components of an instrument:
 A sensor (measures a physical quantity and converts it into a signal)
 A modifier (changes the type of signal)
 A display unit (or transmitting arrangement)

1.5 FUNCTIONAL ELEMENTS OF AN INSTRUMENT

Measurement is the process of determining the amount, degree, or capacity of something by comparison with the accepted standard units of the system being used.
Instrumentation is a technology of measurement which serves science, engineering, medicine, etc.
An instrument is a device for determining the value or magnitude of a quantity or variable.
An electronic instrument is based on electrical or electronic principles for its measurement functions.
1.5.1 ELECTRONIC INSTRUMENT
• Basic elements of an electronic instrument:
1) Transducer – converts a non-electrical signal into an electrical signal
2) Signal modifier – converts the input signal into a signal suitable for the indicating device (e.g. an amplifier)
3) Indicating device – indicates the value of the quantity being measured (e.g. an ammeter)

Block diagram: Transducer → Signal Modifier → Indicating Device

1.5.2 FUNCTIONS
The 3 basic functions of instrumentation are:
 Indicating – visualize the process/operation
 Recording – observe and save the measurement readings
 Controlling – control the measurement and the process

1.6 PERFORMANCE CHARACTERISTICS


Performance characteristics – characteristics that show the performance of an instrument,
 e.g. accuracy, precision, resolution, sensitivity.


Allows users to select the most suitable instrument for specific measuring jobs.
Two basic characteristics :
 Static characteristics: refer to the comparison between steady output and
ideal output when the input is constant
 Dynamic characteristics: refer to the comparison between instrument
output and ideal output when the input changes
1.6.1 STATIC CHARACTERISTICS
Accuracy - Resolution - Precision - Expected value - Error- Sensitivity
Accuracy:
• Accuracy is defined as the closeness of the measured value to the true value.
OR
• Accuracy is defined as the degree to which the measured value agrees with the true value.
• Practically it is very difficult to measure the true value, so a set of observations is made and their mean value is taken as the true value of the quantity measured.
Precision:
• A measure of how close repeated trials are to each other.
OR
• The closeness of repeated measurements.
• Precision is the repeatability of the measuring process. It refers to a group of measurements of the same characteristic taken under identical conditions.
• It indicates to what extent identically performed measurements agree with each other.
• If the instrument is not precise, it will give different results for the same dimension when measured again and again.
Distinction between Precision and Accuracy


 Figure above shows the difference between the concepts of accuracy versus
precision using a dartboard analogy that shows four different scenarios that
contrast the two terms.
 A: Three darts hit the target center and are very close together = high accuracy
and precision
 B: Three darts hit the target center but are not very close together = high
accuracy, low precision
 C: Three darts do not hit the target center but are very close together = low
accuracy, high precision
 D: Three darts do not hit the target center and are not close together = low
accuracy and precision

Accuracy vs Precision
High accuracy means that the mean is close to the true value, while high precision
means that the standard deviation σ is small.

1.6.2 ERRORS IN MEASUREMENTS
• It is never possible to measure the true value of a dimension; there is always some error.
• The error in a measurement is the difference between the measured value and the true value of the measured dimension.
• The error in measurement may be expressed either as an absolute error or as a relative error.


Absolute error or percentage error:

- True absolute error: the algebraic difference between the result of measurement and the conventional true value of the quantity.
- Apparent absolute error: if a series of measurements is made, the algebraic difference between one of the results of measurement and the arithmetical mean is known as the apparent absolute error.
Relative error:
- The quotient of the absolute error and the true value (or the arithmetical mean for a series of measurements).

Absolute error: e = Yn − Xn
where Yn = expected value, Xn = measured value

% error = (Yn − Xn)/Yn × 100

Relative accuracy: A = 1 − |Yn − Xn|/Yn

% accuracy: a = 100% − % error = A × 100%

Precision: P = 1 − |Xn − X̄n|/X̄n
where Xn = value of the nth measurement, X̄n = average of the set of measurements

Example 01
Given expected voltage value across a resistor is 80V. The measurement is 79V.
Calculate:
i. The absolute error
ii. The % of error
iii. The relative accuracy
iv. The % of accuracy
Solution
Given: expected value Yn = 80 V, measured value Xn = 79 V
i. Absolute error: e = Yn − Xn = 80 V − 79 V = 1 V
ii. % error = (Yn − Xn)/Yn × 100 = (80 − 79)/80 × 100 = 1.25%
iii. Relative accuracy: A = 1 − |Yn − Xn|/Yn = 1 − 1/80 = 0.9875
iv. % accuracy: a = A × 100% = 0.9875 × 100% = 98.75%
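The four quantities of Example 01 can be computed with a few helper functions (a minimal sketch of the formulas above; the function names are illustrative):

```python
def absolute_error(expected, measured):
    """e = Yn - Xn"""
    return expected - measured

def percent_error(expected, measured):
    """Percentage error relative to the expected value."""
    return abs(expected - measured) / expected * 100

def relative_accuracy(expected, measured):
    """A = 1 - |Yn - Xn| / Yn"""
    return 1 - abs(expected - measured) / expected

# Example 01: expected 80 V, measured 79 V
Yn, Xn = 80.0, 79.0
print(absolute_error(Yn, Xn))            # 1.0 V
print(percent_error(Yn, Xn))             # 1.25 %
print(relative_accuracy(Yn, Xn))         # 0.9875
print(relative_accuracy(Yn, Xn) * 100)   # 98.75 % accuracy
```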


Example 02
From the values in Table 1, calculate the precision of the 6th measurement.
Table 1
No Xn
1 98
2 101
3 102
4 97
5 101
6 100
7 103
8 98
9 106
10 99
Solution
The average of the measured values:
X̄n = (98 + 101 + … + 99)/10 = 1005/10 = 100.5
The 6th reading is 100.
Precision = 1 − |100 − 100.5|/100.5 = 1 − 0.5/100.5 = 0.995
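The same calculation in Python, using the ten readings of Table 1 (the function name is illustrative):

```python
def precision(readings, n):
    """Precision of the n-th reading (1-indexed): P = 1 - |Xn - mean| / mean."""
    mean = sum(readings) / len(readings)
    return 1 - abs(readings[n - 1] - mean) / mean

readings = [98, 101, 102, 97, 101, 100, 103, 98, 106, 99]
print(round(precision(readings, 6), 3))  # 0.995
```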
1.6.3 SIGNIFICANT FIGURES
Significant figures convey actual information regarding the magnitude and precision of a quantity.
More significant figures represent greater precision of measurement.
Examples:
1) 1, 20 and 300 have 1 significant figure

2) 123.45 has 5 significant figures

3) 1001 has 4 significant figures

4) 100.02 has 5 significant figures

5) 0.00001 has 1 significant figure

6) 1.100 has 4 significant figures

7) 0.00100 has 3 significant figures

Example 03
Find the precision of X1 and X2, given the mean X̄n = 101 and:
X1 = 98 (2 s.f.)
X2 = 98.5 (3 s.f.)
Solution
Precision of X1 = 1 − |98 − 101|/101 = 1 − 3/101 ≈ 0.97
Precision of X2 = 1 − |98.5 − 101|/101 = 1 − 2.5/101 ≈ 0.975 ===> more precise
a- Rules for significant figures
1) All non-zero digits are significant
2) Zeros between two non-zero digits are significant
3) Leading zeros are not significant
4) Trailing zeros to the right of the decimal point are significant
b- Rules regarding significant figures in calculation
1) For adding and subtraction, all figures in columns to the right of the last column
in which all figures are significant should be dropped
Example 04
V1 = 6.31 V
+ V2 = 8.736 V
= 15.046 V
Since V1 is known only to two decimal places, the sum is rounded to VT = 15.05 V.

Example 05
3.76 g + 14.83 g + 2.1 g = 20.69 g
1) 2.1 shows the least number of decimal places
We must round our answer, 20.69, to one decimal place. Therefore, our final
answer is 20.7 g.
2) For multiplication and division, retain only as many significant figures as the least
precise quantity contains
Example 06
Calculate the value of
22.37 cm x 3.10 cm x 85.75 cm
Solution
22.37 cm x 3.10 cm x 85.75 cm = 5946.50525 cm3


22.37 shows 4 significant figures


3.10 shows 3 significant figures
85.75 shows 4 significant figures
The least number of significant figures is 3
Therefore our final answer becomes 5950 cm3.
When dropping non-significant figures
0.0148 ==> 0.015 (2 s.f)
==> 0.01 (1 s.f)
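The rounding rule can be sketched as a small helper (a generic illustration, not code from the text; note that Python's round uses round-half-to-even):

```python
import math

def round_sig(x, sig):
    """Round x to `sig` significant figures."""
    if x == 0:
        return 0.0
    exponent = math.floor(math.log10(abs(x)))
    return round(x, sig - 1 - exponent)

print(round_sig(5946.50525, 3))  # 5950.0 (Example 06)
print(round_sig(0.0148, 2))      # 0.015
print(round_sig(0.0148, 1))      # 0.01
```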
1.6.4 TYPES OF STATIC ERROR
Types of error in measurement:
1) Gross error / human error or reading errors – These errors occur due to carelessness of operators. They have no direct relationship with other types of errors within the measuring system. They cannot be eliminated but can be minimised.
2) Systematic error – Due to shortcomings of the instrument (such as defective or worn parts). There are 3 types of systematic error:
(i) Instrumental error
(ii) Environmental error
(iii) Observational error
(i) Instrumental error
Loading error results from the change in the measurand itself when it is being measured. The instrument loading error is the difference between the value of the measurand before and after the measurement. For example, a soft or ductile component is subjected to deformation during measurement due to the contact pressure of the instrument, causing a loading error. The effect of this error is unavoidable.
- Instrumental errors are inherent in measuring instruments because of their mechanical structure (bearing friction, irregular spring tension, stretching of springs, etc.)
- The error can be reduced by:
(a) selecting a suitable instrument for the particular measurement application
(b) applying a correction factor after determining the instrumental error
(c) calibrating the instrument against a standard
(ii) Environmental error
- Due to external conditions affecting the measurement, including surrounding conditions such as changes in temperature, humidity, barometric pressure, etc.
- To avoid the error:
(a) use air conditioning
(b) seal certain components in the instrument
(c) use magnetic shields


(iii) Observational error
- introduced by the observer
- most common: parallax error and estimation error (while reading the scale)
3) Random error – Due to unknown causes; these errors remain when all systematic errors have been accounted for.
- They are the accumulation of many small effects and matter when a high degree of accuracy is required.
- They can be reduced by:
(a) increasing the number of readings
(b) using statistical means to obtain the best approximation of the true value
1.6.5 DYNAMIC CHARACTERISTICS
Dynamic error, also called measurement error, is the difference between the true value of the measured quantity and the value indicated by the measurement system, assuming no static error.
These errors can be broadly classified as:
(a) Systematic or controllable errors: These errors are controllable in both magnitude and sense, and can be determined and reduced. They are due to:
(1) Calibration errors:
- The actual length of standards such as scales will vary from the nominal value by a small amount. This causes an error of constant magnitude in the measurement.

Instruments rarely respond instantaneously to changes in the measured variables, due to such things as mass, thermal capacitance, fluid capacitance, or electrical capacitance.
The three most common variations in the measured quantity:
 Step change
 Linear change
 Sinusoidal change
The dynamic characteristics of an instrument are:
 Speed of response
 Dynamic error – the difference between the true and measured value, with no static error
 Lag – response delay
 Fidelity – the degree to which an instrument indicates changes in the measured variable without dynamic error (faithful reproduction)
1.6.6 LIMITING ERROR


The accuracy of a measuring instrument is guaranteed within a certain percentage (%) of the full-scale reading. For example, a manufacturer may specify the instrument to be accurate within ±2% of full-scale deflection. For readings less than full scale, the limiting error increases.

Example 07
Given a 600 V voltmeter with an accuracy of 2% of full scale, calculate the limiting error when the instrument is used to measure a voltage of 250 V.
Solution
The magnitude of the limiting error: 0.02 × 600 = 12 V
Therefore, the limiting error at 250 V = 12/250 × 100 = 4.8%

Example 08
For a certain measurement, the limiting error of the voltmeter at 70 V is 2.143% and the limiting error of the ammeter at 80 mA is 2.813%. Determine the limiting error of the power.
Solution
Since power P = VI, the worst-case percentage limiting errors add:
limiting error of the power = 2.143% + 2.813% = 4.956%
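Both examples can be checked with a short script (a minimal sketch; the function name is illustrative, and the percentage-add rule is the worst-case combination for a product of measured quantities):

```python
def limiting_error_pct(full_scale, accuracy_pct, reading):
    """Limiting error (%) at a given reading, for an instrument
    specified as accuracy_pct of full scale."""
    error = accuracy_pct / 100 * full_scale  # absolute error, same units as reading
    return error / reading * 100

# Example 07: 600 V voltmeter, 2% of full scale, reading 250 V
print(round(limiting_error_pct(600, 2, 250), 2))  # 4.8 %

# Example 08: worst-case error of P = V*I adds the percentage errors
print(round(2.143 + 2.813, 3))                    # 4.956 %
```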
1.7 STANDARD
A standard is a known, accurate measure of a physical quantity. Standards are used to determine the values of other physical quantities by the comparison method.
The international standards are preserved at the International Bureau of Weights and Measures (BIPM), near Paris.
Four categories of standard:
 International Standard
 Primary Standard
 Secondary Standard
 Working Standard
International Std
 Defined by International Agreement
 Represent the closest accuracy attainable with current science
and technology
Primary Std
 Maintained at the National Standards Lab (different for every country)
 Function: the calibration and verification of secondary standards
Secondary Std
 Each lab has its own secondary standards, which are periodically checked
and certified by the National Standards Lab.
Working Std
 Used to check and calibrate laboratory instruments for accuracy and performance.
1.8 PROCESS OF MEASUREMENT
• The sequence of operations necessary for the execution of a measurement is
called the process of measurement.
• There are three main elements of measurement:
(1) Measurand:
- Measurand is the physical quantity or property like length, angle,
diameter, thickness etc. to be measured.
(2) Reference:
- It is the physical quantity or property to which quantitative comparisons are
made.
(3) Comparator:
- It is the means of comparing the measurand with the reference.
1.8.1 Methods of Measurement
 The methods of measurement can be classified as:
(1) Direct method:
• This is a simple method of measurement, in which the value of the
quantity to be measured is obtained directly, without calculation.
• This method is widely used in production, but it is not very
accurate because it depends on human judgment.
(2) Indirect method:
• In the indirect method, the value of the quantity to be measured is obtained
by measuring other quantities that are functionally related to the required value.
• For example, angle measurement by sine bar, measurement of shaft
power by dynamometer, etc.
1.8.2 Measuring system
A measuring system is made up of five elements. These are:
(1) Standard
(2) Work piece
(3) Instrument
(4) Person
(5) Environment
- The most basic element of measurement is a standard, without which no
measurement is possible.
- Once the standard is chosen, select a workpiece on which the measurement will be
performed.
- Then select an instrument with which the measurement will be done.
- The measurement should be performed under a standard environment.
- And lastly, there must be a person or mechanism to carry out the
measurement.
1.8.3 Factors affecting the accuracy of the measuring system
The basic components of an accuracy evaluation are the five elements of a measuring
system:
1. Factors affecting the calibration standards.
2. Factors affecting the work piece.
3. Factors affecting the inherent characteristics of the instrument.
4. Factors affecting the person, who carries out the measurements.
5. Factors affecting the environment.
1. Factors affecting the standard:
- Coefficient of thermal expansion
- Calibration interval
- Stability with time
- Elastic properties
- Geometric compatibility
2. Factors affecting the work piece:
- Cleanliness, surface finish, surface defects, etc.
- Elastic properties
- Hidden properties
- Arrangement for supporting the workpiece
3. Factors affecting the inherent characteristics of the instrument:
- Scale error
- Effect of friction, hysteresis, zero drift
- Calibration errors
- Repeatability and readability
- Constant geometry for both work piece and standard
4. Factors affecting the person:
- Training and skill
- Ability to select the measuring instruments and standards
- Attitude towards personal accuracy achievements
- Sense of precision appreciation
5. Factors affecting the environment:
- Temperature, humidity, etc.
- Clean surroundings and minimum vibration enhance precision
- Temperature equalization between standard, workpiece and instrument
- Thermal expansion effects due to heat radiation from lights, heating elements,
sunlight and people
The above analysis of the five basic metrology elements can be summarized by the
acronym SWIPE for convenient reference,
where S – Standard, W – Workpiece, I – Instrument, P – Person, E – Environment.
1. 9 DEAD ZONE AND DEAD TIME
Dead Zone:
• The largest change in the input quantity for which there is no change in the output
of the instrument is termed the dead zone.
• It may occur due to friction in the instrument, which does not allow the pointer to
move until sufficient driving force is developed to overcome the friction loss.
• The dead zone may also be caused by backlash and hysteresis in the instrument.
Dead Time:
• The time required by a measurement system to begin to respond to a change in
the measurand is termed the dead time.
• It represents the delay before the instrument begins to respond after the
measured quantity has been changed.
Difference between Systematic and Random errors:
Systematic error
- These errors are repetitive in nature and are of constant and similar form.
- These errors result from improper conditions.
- Except for personal errors, all other systematic errors can be controlled in magnitude
and sense.
- If properly analyzed, these errors can be determined and reduced or eliminated.
- These errors include calibration errors, variations in atmospheric pressure,
misalignment errors, etc.
Random error
Random errors occur irregularly, and their specific causes cannot be
determined. The likely sources of this type of error are:
• Small variations in the position of setting standard and workpiece.
• Slight displacement of lever joints in the measuring instrument.
• Friction in measuring system.
• Operator errors in reading scale.
- These errors are not consistent; the sources giving rise to them are random.
- Such errors are inherent in the measuring system.
- The specific causes, magnitudes and sense of these errors cannot be determined
from knowledge of the measuring system.
- These errors cannot be eliminated, but the results obtained can be corrected.
Example 09:
A pressure gauge with a range of 0–1 bar and an accuracy of ±5% fs (full
scale) has a maximum error of:
(5/100) x 1 bar = ±0.05 bar
Note: It is essential to choose equipment which has a suitable operating range.
Example 10:
A pressure gauge with a range between 0 - 10 bar is found to have an error of ±
0.15 bar when calibrated by the manufacturer.
Calculate:
a. The error percentage of the gauge.
b. The error percentage when the reading obtained is 2.0 bar.
Answer:
a. Error percentage = ±0.15 bar / 10.0 bar x 100 = ±1.5%
b. Error percentage = ±0.15 bar / 2.0 bar x 100 = ±7.5%
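The arithmetic of Example 10 can be checked with a short sketch (the function name is mine, not from the notes):

```python
def error_percent_at_reading(absolute_error, reading):
    """Percentage error at a given reading, for a gauge with a fixed
    absolute (calibration) error."""
    return absolute_error / reading * 100.0

# Example 10: 0-10 bar gauge with a ±0.15 bar calibration error
print(error_percent_at_reading(0.15, 10.0))  # ≈ 1.5 (% at full scale)
print(error_percent_at_reading(0.15, 2.0))   # ≈ 7.5 (% at a 2.0 bar reading)
```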
• The gauge is not suitable for low-range readings.
• Alternative: use a gauge with a suitable range.
Example 11:
Two pressure gauges, A and B, each have a full-scale accuracy of ±5%.
Gauge A has a range of 0–1 bar and gauge B a range of 0–10 bar. Which gauge is more
suitable to use if the reading is 0.9 bar?
Answer:
Gauge A: maximum error = (5/100) x 1 bar = ±0.05 bar
Accuracy at 0.9 bar (in %) = ±0.05 bar / 0.9 bar x 100 = ±5.6%
Gauge B: maximum error = (5/100) x 10 bar = ±0.5 bar
Accuracy at 0.9 bar (in %) = ±0.5 bar / 0.9 bar x 100 = ±55%
Conclusion: Gauge A is more suitable at a reading of 0.9 bar because its
percentage error (±5.6%) is smaller than that of gauge B (±55%).
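The gauge-selection logic of Example 11 can be sketched as follows (a hypothetical helper of my own, not part of the notes): compute each gauge's percentage error at the intended reading and pick the smallest.

```python
def percent_error_at_reading(full_scale, fs_accuracy_percent, reading):
    """Percentage error at a given reading for a gauge specified as a
    percentage of full scale."""
    max_error = fs_accuracy_percent / 100.0 * full_scale
    return max_error / reading * 100.0

def better_gauge(gauges, reading):
    """Return the name of the gauge with the smallest percentage error at
    the given reading. `gauges` maps name -> (full_scale, fs_accuracy_%)."""
    return min(gauges, key=lambda g: percent_error_at_reading(*gauges[g], reading))

gauges = {"A": (1.0, 5.0), "B": (10.0, 5.0)}
print(better_gauge(gauges, 0.9))  # -> A  (±5.6% vs ±55%)
```

The general rule this encodes: for a given full-scale accuracy, prefer the gauge whose range puts the expected reading as close to full scale as possible.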
Prof. Dr. Ali Hennache/ AIMISIU/CE/DEE/Riyadh- 2013