
CALIBRATION

ENGR. RENDELL LAGMAN


In process industries, Instrumentation Maintenance
includes:
∙ Calibration – Instruments are periodically calibrated to ensure
accurate measurement of process conditions.
∙ Diagnosis and troubleshooting – problematic instruments must be
investigated to determine the origins of poor or erratic operation so
that reliable measurement can be restored.
∙ Repair and Installation – instruments may at times require
specialty repair away from the process facility. The instruments
must be removed from operation, dismantled, serviced, reinstalled,
calibrated, and then returned to service.
ELEMENTS OF THE INSTRUMENT LOOP
• Process
Operation involving physical or chemical change of matter, conversion of
energy, state, composition, dimension or other properties, e.g., change of
pressure, temperature, etc.

• Measuring Element
Sensors, Transmitters, Transducers, Process Switches
• Receiving Element
Indicators, Recorders, Controllers, Alarm Units, Totalizers, Computer-based
instruments & systems

• Final Control Element
Control Valve, damper, variable-pitch blades, motor drives, feeders, relay/contactor, thyristors.
ELEMENTS OF THE INSTRUMENT LOOP
BLOCK DIAGRAM OF THE INSTRUMENT LOOP
Performance Characteristics
Static characteristics are the characteristics of an
element that describe the operation of the element at
steady-state conditions when the process is not
changing.

Dynamic characteristics are the characteristics of an element that describe the operation of the element at unsteady-state conditions when the process is changing.
Every measurement system has several
types of measurement errors such as:
Static error:
The difference between the measured value and the actual value of the quantity is known as static error. These errors are caused by limitations of the measuring instrument or the physical laws governing its behavior. The instrument may be considered accurate if its measured value is nearly equal to its true value.
Dynamic Error:
The difference between the true value of a quantity varying with respect to time and the reading indicated by the measuring instrument, assuming zero static error. These errors occur because the instrument responds too slowly to follow changes in the measured variable. For example, a room thermometer does not indicate the correct temperature until the temperature reaches a steady value.
STATIC
CHARACTERISTICS
The static characteristics of control elements are the
properties during steady-state operations.
Range is the boundary of the values that identify the
minimum and maximum limits of an element.
For example, a temperature sensor may have a range of
–50°F to 200°F.
Likewise, control valves are available that operate over a
variety of ranges. The valve is sized to be able to
regulate the fluid flow as required by the process.

An instrument or controller may be calibrated to use only part of the maximum range. An operating range is a part of the total range. Range is specified with two numbers representing the lowest and the highest values.

Range and Span
Span is the difference between the highest and lowest numbers in the range.
Bias, Accuracy, and Precision
Bias is a systematic error or offset introduced into a measurement system.
Bias typically shows up as an error in measurement where the measurements are all on one side of the true value.
For example, a temperature sensor may read 2° high under all conditions. A bias may be intentionally introduced
into control strategy.
Bias, Accuracy, and Precision
Accuracy is the degree to which an observed value matches the actual value of a measurement over a
specified range. Manufacturers usually specify control element accuracy as the worst-case accuracy over the
entire range. Accuracy is often stated as a percentage of the full-scale range or as a percentage of the reading.
However, there is no standard definition of this word. Manufacturers may have different meanings when they
specify accuracy.
Bias, Accuracy, and Precision
Precision is the closeness to which elements provide agreement among measured values.
Precision does not describe the same thing as accuracy. Precision only measures
agreement among the measured values. It does not compare the measured values to a
standard or true value.
Bias, Accuracy, and Precision
Repeatability means an element can produce consistent results under the same
conditions. For example, an air pressure switch should actuate at the same air pressure
every time. It is often reported as a percentage of the average reading.
Drift refers to the gradual
change in a variable over
time when the process
conditions remain
constant. For example,
valves may not work as
well over time due to wear
and tear, and instruments
with solid-state parts can
drift due to aging or
changes in conditions.
Sensitivity refers to how easily a measurement or control element can detect small
changes. For example, Sensor A can detect a change as small as ½°F, while Sensor B can
only detect a change of 1°F. Therefore, Sensor A is twice as sensitive as Sensor B.
Dead zone occurs when an element does not respond to a change in the input because
the change is too small to be detected by the sensitivity of the element.
Dynamic
Characteristics
Dynamic characteristics are the characteristics of an element that describe the
operation of the element at unsteady-state conditions when the process is changing.
The dynamic
characteristics of control
elements are the properties
during changing
conditions.

Response time is how quickly an element reacts to a change in the measured variable or
produces a 100% change in the output signal due to a 100% change in the input signal.
For example, the response time of a temperature sensor determines how quickly it indicates or
records a change in temperature.
Dynamic error is the difference between a changing value and the momentary instrument
reading or the controller action.
Hysteresis is a property of physical systems that do not react immediately to the forces applied to them or do not return completely to their original state. Systems that exhibit hysteresis are systems whose condition depends on their immediate history. Frictional or magnetic forces may cause hysteresis. Hysteresis affects valve actuators by slowing down the response to a changing control signal because of a worn linkage or overtightened packing nut.

Hysteresis is the property of a control element that results in different performance when a measurement is increasing than when the measurement is decreasing.
Linearity is the closeness to which multiple measurements approximate a straight line on a graph. Linearity is usually measured as nonlinearity but expressed as linearity.

Nonlinearity is the degree to which multiple measurements do not approximate a straight line on a graph.
What is Calibration?
1. A set of operations wherein known values of a
quantity are applied to an instrument and
corresponding readings or output values are
recorded under specified conditions.
2. A comparison of the reading of a measuring instrument against the reading of a standard instrument of higher accuracy.
• It may include adjusting the instrument to
read correctly (not in all cases).
• During calibration, an instrument is checked
at several points throughout the range of the
instrument.
Calibration according to Legal
Metrology http://legacy.senate.gov.ph/lisdata/2973226440!.pdf
(R.A. 9236 of 2003)
Calibration is an operation that, under specified
conditions, in a first step, establishes a relation
between the quantity values with measurement
uncertainties provided by measurement standards
and corresponding indications with associated
measurement uncertainties and, in a second step,
uses this information to establish a relation for
obtaining a measurement result from an indication.
INSTRUMENT CALIBRATION BLOCK DIAGRAM

An Input Measurement Standard (IMS) applies a known input to the Unit Under Test (UUT), and an Output Measurement Standard (OMS) measures the resulting UUT output.

If a is the expected output and b is the actual UUT output, the error is c = b – a.
Note: Instrument sensitivity must be adjusted until c = 0.
Note: IMS and OMS are commonly known as CALIBRATORS.
Typical Instrument Calibration Errors
with Linear Response
Recall that the slope-intercept form of a linear equation, y = mx + b, describes the response of any linear instrument.
Example 1:
A flow transmitter is ranged 0 to 350 gallons per
minute, 4-20mA output, direct responding. Calculate the
current signal value at a flow rate of 204 GPM.
Example 2:
A pneumatic temperature transmitter is ranged 50 to
140 degrees Fahrenheit and has a 3-15 PSI output signal.
Calculate the pneumatic output pressure if the temperature is
79 degrees Fahrenheit.
Example 3:
A pH transmitter has a calibrated range of 4pH to
10pH, with a 4-20mA output signal. Calculate the pH sensed
by the transmitter if its output is 11.3mA.
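The three examples above are all instances of the same linear (slope-intercept) scaling. A minimal sketch that reproduces the arithmetic — the function names are mine, not from the slides:

```python
def output_signal(value, in_lrv, in_urv, out_lrv, out_urv):
    """Direct-acting linear response: map a measured value onto the output range."""
    pct = (value - in_lrv) / (in_urv - in_lrv)        # fraction of input span
    return out_lrv + (out_urv - out_lrv) * pct

def input_from_output(signal, in_lrv, in_urv, out_lrv, out_urv):
    """Inverse: recover the measured variable from the output signal."""
    pct = (signal - out_lrv) / (out_urv - out_lrv)    # fraction of output span
    return in_lrv + (in_urv - in_lrv) * pct

# Example 1: 0-350 GPM, 4-20 mA, at 204 GPM
print(round(output_signal(204, 0, 350, 4, 20), 2))      # 13.33 (mA)
# Example 2: 50-140 deg F, 3-15 PSI, at 79 deg F
print(round(output_signal(79, 50, 140, 3, 15), 2))      # 6.87 (PSI)
# Example 3: 4-10 pH, 4-20 mA, output 11.3 mA
print(round(input_from_output(11.3, 4, 10, 4, 20), 2))  # 6.74 (pH)
```

The same two functions cover any direct-responding linear instrument in this section; a reverse-acting instrument would simply swap the output limits.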
Instrument Errors:
An "error" is defined as the algebraic difference between the
instrument indication and the actual value of the measured
variable. Typical errors that are corrected by performing a
calibration include:
∙ Zero error
∙ Span error
∙ Linearity error
Some instruments are provided with a means of adjusting the
zero and span of the instrument.
Calibration helps to correct such errors and improve the
accuracy of the instrument's readings.
INSTRUMENT ERRORS
Zero Error - produces a parallel shift of the input-output curve.

A zero error is usually correctable by simply adjusting the “zero” screw on an analog instrument, without making any other adjustments.
INSTRUMENT ERRORS
Span Error - changes the slope of the input-output curve.

Span error, when displayed on an input/output calibration graph, translates as a line that is not parallel to the ideal line. That is to say, both lines are straight, but their slopes are different.
INSTRUMENT ERRORS
Combined Zero and Span Error

This usually requires multiple adjustments of the “zero” and “span” screws while alternately applying 0% and 100% input range values to check for correspondence at both ends of the linear function.
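A two-point (0% and 100%) check is enough to separate the two error types. The helper and the sample readings below are hypothetical, not from the slides:

```python
def zero_span_errors(ideal_0, ideal_100, actual_0, actual_100):
    """Decompose a two-point check into a zero error (parallel shift)
    and a span error (change of slope)."""
    zero_error = actual_0 - ideal_0               # offset at the 0% point
    span_ideal = ideal_100 - ideal_0
    span_actual = actual_100 - actual_0
    span_error_pct = 100 * (span_actual - span_ideal) / span_ideal
    return zero_error, span_error_pct

# A 4-20 mA transmitter reading 4.3 mA at 0% and 20.8 mA at 100%:
z, s = zero_span_errors(4, 20, 4.3, 20.8)
# z is about 0.3 mA of zero error; s is about +3.125 % of span error
```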
INSTRUMENT ERRORS
Linearity Error - produces a non-linear input-output response curve.

Note: If the magnitude of the non-linear error is unacceptable and cannot be adjusted out, the instrument must be replaced.
INSTRUMENT ERRORS

A hysteresis calibration error occurs when the instrument responds differently to an increasing input compared to a decreasing input. The only way to detect this type of error is to perform an up-down calibration test.
TYPICAL CALIBRATION BLOCK DIAGRAM

Utilities (power or air supply) feed all three elements in the chain: IMS → UUT → OMS.

TYPICAL BENCH CALIBRATION SET-UP

A precise pressure source (input standard) applies pressure to the high (H) side of a D/P transmitter (UUT, range: 0-200 mBar) with the low (L) side vented to atmosphere. A pressure calibrator (output standard) reads the transmitter output, and a 24 V supply powers the loop.
How Often is an Instrument Calibrated?
By practice, the frequency of calibration depends upon the
classification of the instruments:
Critical: An instrument which, if not conforming to
specification, could potentially compromise product or process
quality and safety. (Typical is twice yearly)
Non-critical: An instrument whose function is not critical to
product or process quality, but whose function is more of an
operational significance. (Typical is yearly)
Reference Only: An instrument whose function is not critical to
product quality, not significant to equipment operation, and not
used for making quality decisions. (When required)
Instrument range:
The lower and upper limits the instrument is capable of
measuring.

Calibration range:
The portion (or the whole) of the instrument range over which the instrument is calibrated; expressed by stating the lower and upper range values.

Calibration limits are defined by the "zero" and span values.

"Zero" value:
The lower end of the range.

Span:
The algebraic difference between the upper and lower
range values.
Tolerance is the maximum deviation accepted by the user in the design of its manufactured product or components.

Tolerance is defined by the user according to the needs of the product.

Note: Accuracy is defined by the manufacturer.
Accuracy in Calibration:
Instruments are calibrated to make them accurate within
manufacturer's specifications.
Accurate calibration therefore is an essential factor in instrument
performance.
Ways of Determining Instrument Accuracy:
1. Manufacturer's specifications - according to the instrument data sheet.
2. By calculation - if no accuracy statement is given.
How is Accuracy Expressed?

1. As a percent of output span.
2. As a percent of the measured value.
1. As a percent of output span.
Example: A pressure transmitter has an output span of 50 psi.
It measures an actual tank pressure of 25 psig but reads 26
psi. In this case, the transmitter is accurate within 1 psi or 2%
of span.
Example of Accuracy calculation (% of span)

% INPUT  INPUT      EXPECTED (mA)  ACTUAL (mA)  Error (mA)  Accuracy (% of Span)
0%       100 deg C  4              4.2          0.2         1.25%
25%      200 deg C  8              8.2          0.2         1.25%
50%      300 deg C  12             12.2         0.2         1.25%
75%      400 deg C  16             16.2         0.2         1.25%
100%     500 deg C  20             20.2         0.2         1.25%
2. As a percent of the measured value.
Example: A pressure transmitter has an output span of 50 psi.
It measures an actual tank pressure of 25 psig but reads 26
psi. In this case, the transmitter accuracy is 3.85% of
measured value.
Example of Accuracy calculation (% of measured value)

% INPUT  INPUT      EXPECTED (mA)  ACTUAL (mA)  Error (mA)  Accuracy (% of Measured Value)
0%       100 deg C  4              4.2          0.2         4.76%
25%      200 deg C  8              8.2          0.2         2.44%
50%      300 deg C  12             12.2         0.2         1.64%
75%      400 deg C  16             16.2         0.2         1.23%
100%     500 deg C  20             20.2         0.2         0.99%
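Both accuracy tables follow from one short calculation. A sketch (the helper name is mine) that reproduces the figures for the same 0.2 mA error on a 4-20 mA output:

```python
def accuracy_table(expected, actual, out_lrv, out_urv):
    """Per-point error expressed two ways: % of span and % of reading."""
    span = out_urv - out_lrv
    rows = []
    for e, a in zip(expected, actual):
        err = abs(a - e)
        rows.append((round(100 * err / span, 2),   # % of span
                     round(100 * err / a, 2)))     # % of measured value
    return rows

rows = accuracy_table([4, 8, 12, 16, 20], [4.2, 8.2, 12.2, 16.2, 20.2], 4, 20)
# The same 1.25 % of span at every point, but 4.76 % down to 0.99 % of reading.
```

This is why the two conventions matter: the same absolute error looks constant in % of span but shrinks in % of reading as the reading grows.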
CHARACTERISTICS OF A CALIBRATION

1. Compliance to the Required Accuracy Ratio of Standards
2. Traceability of Calibration Standards
3. Uncertainty of Measurements
4. Compliance to ISO-17025 Technical Requirements
1. Accuracy Ratio

∙ Describes the relationship between the accuracy of the calibration standard and the accuracy of the instrument under calibration
∙ The calibration standard should be four times more accurate than the process instrument being calibrated
∙ The standard used to calibrate the calibration standard should also be four times more accurate
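The 4:1 rule is a simple ratio test. A hypothetical helper (names are mine), using accuracy figures of the kind that appear later in this material:

```python
def meets_accuracy_ratio(standard_acc_pct, uut_acc_pct, ratio=4.0):
    """True if the standard's accuracy figure is at least `ratio`
    times smaller (i.e. better) than the unit under test's."""
    return uut_acc_pct / standard_acc_pct >= ratio

# A 0.05% calibrator against a 0.25% transmitter gives a 5:1 ratio:
print(meets_accuracy_ratio(0.05, 0.25))  # True  (5:1, better than 4:1)
print(meets_accuracy_ratio(0.10, 0.25))  # False (only 2.5:1)
```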
What is a Calibration Standard?

A Calibration Standard is an internationally accepted and traceable instrument or material used as a reference in calibrating instruments.

They are commonly known as Calibrators.
Types of Calibration Standards:
1. “Equipment Type” Calibration Standard
2. “Material Type” Calibration Standard

“Equipment Type” Calibration Standard
∙ Digital Multimeters
∙ Signal Simulators
∙ Distance/height measuring devices
∙ Plotting/Recording devices
∙ Indicating devices
“Material Type” Calibration Standard

∙ Calibration solutions (such as pH buffer solution, conductivity calibration solution, etc.)
∙ Calibration gases (such as GC calibration gas, smoke calibration gas, etc.)
2. Traceability
The ability to relate measurements back to
international standards through an unbroken chain of
calibration.

It means:
• All measuring instruments must be calibrated.
• All standards used to calibrate the measuring
instrument must also be calibrated.
• The calibration standard must, in turn, be
calibrated.

All calibrations must be traceable to international standards, no matter how many calibration levels exist.
3. UNCERTAINTY of Measurements
Uncertainty analysis is necessary for calibration
laboratories complying with ISO 17025
requirements.
It is performed to assess and identify the factors
related to the calibration equipment and process
instrument that impact the calibration accuracy.
What is "Uncertainty"?
The word "uncertainty" means DOUBT, and thus in its broadest sense "uncertainty of measurement" means doubt about the validity of the result of a measurement.
Measurement and Measurand:
In general, the result of a measurement is
only an approximation or estimate of the
value of the measurand and thus is complete
only when accompanied by a statement of the
uncertainty of that estimate.
THREE PROPERTIES OF MEASUREMENT

∙Numeric Value
∙Unit of Measure
∙Indication of Accuracy
THREE TYPES OF ERROR

∙ Random error refers to unpredictable fluctuations in measured data.
∙ Systematic error stems from consistent biases or inaccuracies in measurement techniques.
∙ Spurious error arises from external factors that erroneously influence measurements. Examples include human errors such as improper calibration or misinterpretation of readings, as well as instrument malfunctions such as electrical interference or sensor drift.
TYPICAL BENCH CALIBRATION SET-UP

The same bench set-up: a 24 V supply and a precise pressure source (input standard) drive the D/P transmitter (UUT, range: 0-200 mBar) with the low side vented to atmosphere, while the pressure calibrator (output standard) reads an output of 12 mA.

Measured Value = 12 mA + Uncertainty of Measurement
Uncertainty
Uncertainty is a parameter associated with the result of
a measurement that characterizes the dispersion of the
values that could reasonably be attributed to the
measurand.
Uncertainty analysis is required for calibration labs
conforming to ISO 17025 requirements to evaluate
and identify factors that affect the calibration accuracy
of the equipment and process instrument.
Uncertainty
Calibration technicians should be aware of basic
uncertainty analysis factors, such as environmental
effects and how to combine multiple calibration
equipment accuracies to arrive at a single
calibration equipment accuracy. The combined
accuracy is determined by calculating the square
root of the sum of the squares.
The two ways to estimate uncertainties

∙ Type A evaluations - uncertainty estimates using statistics (usually from repeated readings).
∙ Type B evaluations - uncertainty estimates from any other information. This could be information from experience of the measurements, from calibration certificates, manufacturer's specifications, from calculations, from published information, and from common sense.
Uncertainty can be expressed in
terms of the following:
∙ Standard Uncertainty: ui
∙ Combined Uncertainty: uc
∙ Expanded Uncertainty: U= uc (k)
However, in instrument calibration, uncertainty is always expressed in terms of Expanded Uncertainty.
Steps in calculating the calibration uncertainty:

1. Determine the Standard Uncertainty (ui) of each individual calibration system component.
2. Compute the Combined Standard Uncertainty (uc).
3. Compute the Expanded Uncertainty (U) using coverage factor k = 2, corresponding to a 95% confidence level.
Step 1: Determine the Standard Uncertainty (ui) of each individual calibration system component.
Based on the given manufacturers' accuracy statements, the following are established:
1. Pressure Calibrator: ui = 0.05%
2. D/P Transmitter: ui = 0.25%
3. Multimeter: ui = 0.05%
4. Power Supply: ui = 0.1%
For technician and lab:
5. Technician: ui = 0.06%
6. Environmental Conditions: ui = 0.06%
Step 2: Compute the Combined Standard Uncertainty (uc).

The combined standard uncertainty of a measurement result, suggested symbol uc, is taken to represent the estimated standard deviation of the result.
It is obtained by combining the individual standard uncertainties ui using the usual method for combining standard deviations (the square root of the sum of the squares).
Step 3: Compute the Expanded Uncertainty (U)
using factor; k=2,corresponding to 95% confidence
level.
A quantity defining an interval about the result of a measurement
that may be expected to encompass a large fraction of the
distribution of values that could reasonably be attributed to the
measurand. The expanded uncertainty denoted by U is obtained by
multiplying the combined standard uncertainty uc by a coverage
factor k.
Thus: U = uc (k)
TRUE Measurement = Observed Measurement ± U
Calculating the Expanded Uncertainty (U) of the Calibration Set-up:

Expanded Uncertainty (U) = uc × k = 0.29 × 2 = 0.58%

Where:
uc = Combined standard uncertainty
k = Coverage factor
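Plugging the Step 1 figures into Steps 2 and 3 reproduces the 0.29% and 0.58% results; a quick sketch:

```python
import math

# Standard uncertainties (% of span) from Step 1:
ui = [0.05,  # pressure calibrator
      0.25,  # D/P transmitter
      0.05,  # multimeter
      0.10,  # power supply
      0.06,  # technician
      0.06]  # environmental conditions

uc = math.sqrt(sum(u**2 for u in ui))  # combined standard uncertainty (RSS)
U = uc * 2                             # expanded uncertainty, k = 2 (~95 %)
print(round(uc, 2), round(U, 2))       # 0.29 0.58
```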
Level of Confidence:
Most expanded uncertainty calculations are based on a coverage factor of k = 2 and a confidence level of 95% (1 chance in 20 that the value of the measurand lies outside the interval).
Some other coverage factors are:
k = 1 for a confidence level of approximately 68 percent
k = 2.58 for a confidence level of 99 percent
k = 3 for a confidence level of 99.7 percent
4. Compliance to ISO-17025:2000
Technical Requirements
∙ Human Factors (Personnel)
∙ Environmental Conditions
∙ Test & Calibration Methods and Method Validation
∙ Test/Calibration Equipment
∙ Traceability
∙ Handling of test and calibration items.
ISO/IEC 17025

∙ ISO/IEC 17025 enables laboratories to demonstrate that they operate competently and generate valid results, thereby promoting confidence in their work both nationally and around the world.
∙ It also helps facilitate cooperation between laboratories and
other bodies by generating wider acceptance of results
between countries. Test reports and certificates can be
accepted from one country to another without the need for
further testing, which, in turn, improves international trade.
Who is ISO/IEC 17025 for?
∙ ISO/IEC 17025 is useful for any organization that performs
testing, sampling or calibration and wants reliable results. This
includes all types of laboratories, whether they be owned and
operated by government, industry or, in fact, any other
organization. The standard is also useful to universities, research
centres, governments, regulators, inspection bodies, product
certification organizations and other conformity assessment
bodies with the need to do testing, sampling or calibration.
Why has ISO/IEC 17025 been revised?

∙ The last version of ISO/IEC 17025 was published in 2005 and, since then, market conditions and technology have changed. The new version covers technical changes, vocabulary and developments in IT techniques. It also takes into consideration the latest version of ISO 9001.
Set-up Guidelines:
1. The input and output standards used in calibration must be regularly
inspected, tested and calibrated according to method and standard
being used.
2. The accuracy of the input and output standard used in the calibration
must be at least 4 times better than the instrument to be calibrated.
3. The reference input and output standards must be kept in a
controlled environment.
4. The power must be well regulated.
5. UUT must be at process position.
Set-up Guidelines:

6. Calibration must be done in a controlled environment.
7. Must be done by qualified and competent personnel only.
8. Standards and instruments for calibration must be handled properly.
9. Observe the proper calibration set-up (utilities, polarity, size of wires, etc.).
10. Traceability and uncertainty of measurement must be established.
Terms used in Calibration
Deadweight Tester – A device to generate pressures for the purpose of calibrating pressure instruments.
Freely balanced weights (deadweights) are loaded on a calibrated piston to produce a static hydraulic
pressure output.
Decade Resistance Box – A device that provides precision resistance values in units expressed as ohms.
Used as a standard input instrument for calibrating RTD transmitters.
Readability – The smallest fraction of the scale or unit of measure on an instrument that can easily be
read.
Re-ranging – The process of assigning upper and lower measurement limits to a transmitter.
Multi-Function Calibrator – A device capable of producing or receiving several types of instrument
input or output signals, using selectable units of measure to compare against known standards.
NIST– National Institute of Standards and Technology, formerly known as the National Bureau of
Standards.
Range – The extent of measuring, indicating, or recording scale; beginning with the lower range value
(LRV) and ending with the upper range value (URV).
Classification of Calibration Standards:
1. Primary Reference Standard or Material
∙ Directly traceable to international standards.
∙ A standard which has the highest metrological quality in a specified field.
2. Secondary or Certified Reference Standard or Material (±0.05%)
∙ Traceable only to the manufacturer's reference standards.
∙ One whose value is fixed by comparison with a primary standard.
National Measurement Standard Institute:
(Most Common)

NIST - National Institute of Standards & Technology (USA)


NPL - National Physical Laboratory (UK)
PTB - Physikalisch-Technische Bundesanstalt (Germany)
NMi - Nederlands Meetinstituut (Netherlands)
The Importance of Calibration
Standards (Calibrators)
To determine whether a measurement is accurate and precise, it must be compared to a known STANDARD. A measurement standard is one that has been established as a model.
Instruments that are used as measurement standards (Calibrators) are calibrated against an internationally accepted standard (Primary Std.). These certified standard instruments are then used to calibrate test equipment (Secondary Std.) which, in turn, is used to calibrate process instruments.
GENERAL CALIBRATION PROCEDURES

1. Prepare the UUT, calibrators, and other accessories needed for calibration.
2. Connect the calibration set-up.
3. Fill in the instrument data in the data sheet.
4. Determine AS-FOUND DATA.
5. Analyze AS-FOUND DATA.
6. Decide if correction of error is needed.
7. Perform the necessary corrective adjustment.
8. Determine AS-LEFT DATA.
9. Perform the necessary calculations.
10. Document results (including the Calibration Certificate) and apply the calibration sticker.
CALIBRATION PROCEDURES

A calibration procedure refers to the manner in which calibration is carried out in relation to the instrument's input/output relationship.

It can be either a 5-point or a 10-point input/output relationship.
5-Point Calibration Procedure
A calibration procedure which utilizes 5 input and output test points. This is the most widely used calibration procedure. Test points commonly used are 0, 25, 50, 75 and 100% of the input and output span.
5-Point Input and Output Relationship Table
Example: Direct Acting Electronic Temperature Controller with a calibration range of 100-500 deg C input and 4-20 mA output.

% INPUT  INPUT      % OUTPUT  OUTPUT
0%       100 deg C  0%        4 mA
25%      200 deg C  25%       8 mA
50%      300 deg C  50%       12 mA
75%      400 deg C  75%       16 mA
100%     500 deg C  100%      20 mA


[Figure: Ideal 5-point calibration curve - output rises linearly from 4 mA at 100°C to 20 mA at 500°C; axes: % input vs. % output]
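The ideal table and curve are just the linear response sampled at the five standard test points. A small generator, assuming a direct-acting linear instrument (the function name is my own):

```python
def five_point_table(in_lrv, in_urv, out_lrv, out_urv):
    """Ideal 0/25/50/75/100% test points for a direct-acting linear instrument."""
    rows = []
    for pct in (0, 25, 50, 75, 100):
        inp = in_lrv + (in_urv - in_lrv) * pct / 100   # input test point
        out = out_lrv + (out_urv - out_lrv) * pct / 100  # expected output
        rows.append((pct, inp, out))
    return rows

# The example controller: 100-500 deg C in, 4-20 mA out
for pct, degc, ma in five_point_table(100, 500, 4, 20):
    print(f"{pct:3d}%  {degc:5.0f} deg C  {ma:4.0f} mA")
```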
10-Point Calibration Procedure:
A calibration procedure which utilizes 10 input and output test points: five taken with the input increasing and five with the input decreasing.
Test points commonly used are 0, 25, 50, 75 and 100% of the input and output span on the way up, then 100, 75, 50, 25 and 0% on the way down.

This procedure is used to determine the instrument error known as hysteresis.
10-Point Input and Output Relationship Table
Example: Direct Acting Electronic Temperature Controller with a calibration range of 100-500 deg C input and 4-20 mA output.

% INPUT  INPUT      % OUTPUT  OUTPUT
Up-test:
0%       100 deg C  0%        4 mA
25%      200 deg C  25%       8 mA
50%      300 deg C  50%       12 mA
75%      400 deg C  75%       16 mA
100%     500 deg C  100%      20 mA
Down-test:
100%     500 deg C  100%      20 mA
75%      400 deg C  75%       16 mA
50%      300 deg C  50%       12 mA
25%      200 deg C  25%       8 mA
0%       100 deg C  0%        4 mA
[Figure: Ideal 10-point calibration curve - the up-test and down-test traces coincide on the same straight line from 4 mA at 100°C to 20 mA at 500°C]
Hysteresis

A hysteresis calibration error occurs when an instrument reacts differently to increasing and decreasing inputs.

It is caused by mechanical friction or loose coupling between moving elements in the instrument, such as bourdon tubes or pivots.

Friction always acts in the opposite direction of relative motion, causing the output to lag behind the input and register false readings.

This error cannot be fixed by adjusting the calibration; it usually requires replacing defective components or correcting the coupling between moving elements.
Up-tests and Down-tests

While performing such a directional calibration test, it is important not to overshoot any of the
test points. If you do happen to overshoot a test point in setting up one of the input conditions
for the instrument, simply “back up” the test stimulus and re-approach the test point from the
same direction as before. Unless each test point’s value is approached from the proper
direction, the data cannot be used to determine hysteresis/deadband error.
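Given up-test and down-test readings at the same test points, the hysteresis/deadband error is the worst-case disagreement between the two passes. A sketch with made-up readings (both the helper and the data are hypothetical):

```python
def hysteresis_error(up_readings, down_readings, span):
    """Worst-case up-vs-down disagreement, as a percent of output span."""
    worst = max(abs(u - d) for u, d in zip(up_readings, down_readings))
    return 100 * worst / span

# Hypothetical 4-20 mA readings at 0/25/50/75/100 % (span = 16 mA):
up   = [4.0, 8.0, 12.0, 16.0, 20.0]   # each point approached from below
down = [4.3, 8.4, 12.3, 16.2, 20.0]   # each point approached from above
print(round(hysteresis_error(up, down, 16), 2))  # 2.5 (% of span)
```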
Non-Conventional or Specific Calibration Procedure:

A calibration procedure which utilizes a number of points other than 5 or 10.
Examples:
∙ Analyzer calibrations
∙ Flowmeter calibrations
Basic Steps in Calibrating an Instrument
1.Identify UUT & record necessary information.
2.Prepare necessary tools & utilities.
3.Set up the calibration system.
4.Calibrate UUT per manual/work instruction.
5.Evaluate/correct instrument error.
6.Finalize Calibration Certificate.
Example:
TASK: Calibrate an electronic d/p transmitter with:
Accuracy: 0.15% of Span
Input Range: 0-200 mBar
Calibration range: 0-100 mBar
Output range: 4-20 mA
Calibration procedure: 5-Point Calibration
Uncertainty: +/-0.56%
1. Identify the type of UUT to be calibrated and record all necessary information required for the calibration job. Information such as:
- Manufacturer
- Model No.
- Instrument Description
- Serial No.
- Measuring range
- Cal range
- Specified accuracy
- Other available information
2. Identify and prepare the appropriate IMS, OMS and UTILITIES required for the calibration job.

IMS required: UUT dependent (measuring P, T, L, F or A)
OMS required: UUT dependent
UTILITIES required: UUT dependent
3. Set up the calibration system
4. Calibrate UUT per manual/work instruction.

Input (%)  Input (mbar)  Expected Output (mA)  Actual UUT Indication (mA)  Error (mA)  Error in % of Span
0          0             4                     4.10                        0.10        0.625
25         25            8                     8.10                        0.10        0.625
50         50            12                    12.10                       0.10        0.625
75         75            16                    16.10                       0.10        0.625
100        100           20                    20.10                       0.10        0.625
5. Evaluate/correct instrument error.

UUT Performance Analysis:
The UUT exhibits a "Zero Shift" error.
Required Corrective Action:
Adjustment of the "Zero" point.
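The analysis above (a constant +0.10 mA error at every point pointing to a zero shift) can be sketched in code. The helper and its tolerance are my own illustration, not a standard procedure:

```python
def diagnose(expected, actual, tol=1e-6):
    """Rough as-found analysis: a constant error suggests a zero shift;
    an error that grows linearly with input suggests a span error;
    anything else is treated as a linearity error."""
    errors = [a - e for e, a in zip(expected, actual)]
    if max(errors) - min(errors) < tol:          # same error everywhere
        return f"zero shift of {errors[0]:+.2f} mA"
    deltas = [errors[i + 1] - errors[i] for i in range(len(errors) - 1)]
    if max(deltas) - min(deltas) < tol:          # error grows at a constant rate
        return "span error (error grows linearly with input)"
    return "linearity error"

print(diagnose([4, 8, 12, 16, 20], [4.10, 8.10, 12.10, 16.10, 20.10]))
# zero shift of +0.10 mA
```

A real hysteresis check would still need the up-down data; this only classifies single-pass results.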
As-found and as-left documentation
An important principle in calibration practice is to document every instrument’s
calibration as it was found and as it was left after adjustments were made.
The purpose for documenting both conditions is to make data available for
calculating instrument drift over time. If only one of these conditions is
documented during each calibration event, it will be difficult to determine how
well an instrument is holding its calibration over long periods of time.
Excessive drift is often an indicator of impending failure, which is vital for any
program of predictive maintenance or quality control.
Typically, the format for documenting both As-Found and As-Left data is a
simple table showing the points of calibration, the ideal instrument responses,
the actual instrument responses, and the calculated error at each point.
As-found and as-left documentation
The following table is an example for a pressure transmitter with a
range of 0 to 200 PSI over a five-point scale:
After correcting the error, perform another 5-point test to establish the "As-Left" data table:
Actual UUT
Expected Error in % of
Input (%) Input (mbar) Indication in Error (mA)
Output (mA) Span
mA
As found 0 0 4 4.10 0.1 0.625
Data 25 25 8 8.10 0.1 0.625
Table 50 50 12 12.10 0.1 0.625
75 75 16 16.10 0.1 0.625
100 100 20 20.10 0.1 0.625

As-Left Data Table:

Input (%) | Input (mbar) | Expected Output (mA) | Actual UUT Indication (mA) | Error (mA) | Error in % of Span
    0     |       0      |           4          |             4              |      0     |         0
   25     |      25      |           8          |             8              |      0     |         0
   50     |      50      |          12          |            12              |      0     |         0
   75     |      75      |          16          |            16              |      0     |         0
  100     |     100      |          20          |            20              |      0     |         0
Finalize Calibration Certificate.

(Figure: UUT, manual, and the resulting Calibration Certificate)


CALIBRATION

QMS Docs:                 QMS Records:
- Procedure               - Checklist
- Work Instruction        - Maintenance Service Report
- Checklist               - Etc.
- Etc.
What is a Calibration Certificate or
Calibration Report?
A document that contains the result of a calibration activity.
Under clause 5.10 of ISO/IEC 17025, the results of a calibration activity shall
be reported accurately, clearly and objectively, and shall include all the
information required by the client and necessary for the interpretation of the
calibration results.

Note: There is no fixed format for a Calibration Certificate.


Calibration Certificate/Report must at least contain the
following elements as per 5.10.2 of PNS ISO/IEC 17025-
2000.
• A title
• Name and address of the laboratory where calibration was carried out.
• Certificate Identification
• Name and address of client/custodian
• Identification of the method being used
• Unit Identification.
∙ Date Unit received & calibrated.
∙ Traceability, Uncertainty & Environmental Conditions.
∙ Test results & units of measurement
∙ Findings & observations.
∙ A statement to the effect that the results relate only to the item calibrated
(i.e., only to that particular instrument).
∙ Signatures authorizing the certificate
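The list of required elements above maps naturally onto a structured record. The sketch below models a certificate as a Python dataclass; the field names are illustrative choices, not mandated by ISO/IEC 17025:

```python
# Sketch: a minimal record of the certificate elements listed above.
# Field names are an assumption for illustration, not an ISO 17025 format.
from dataclasses import dataclass, field

@dataclass
class CalibrationCertificate:
    title: str
    laboratory: str          # name and address of the calibrating laboratory
    certificate_id: str      # unique certificate identification
    client: str              # name and address of client/custodian
    method: str              # identification of the method used
    unit_id: str             # identification of the unit under test
    date_received: str
    date_calibrated: str
    traceability: str        # traceability statement
    uncertainty: str         # measurement uncertainty
    environment: str         # environmental conditions during calibration
    results: dict = field(default_factory=dict)  # test point -> reading
    findings: str = ""       # findings and observations
    scope_note: str = "Results relate only to the item calibrated."
    authorized_by: str = ""  # signature authorizing the certificate
```

Because there is no fixed format for a certificate, a structure like this is simply one convenient way to ensure no mandatory element is omitted when generating the document.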
In addition, calibration certificates shall include the
following, where necessary for the interpretation of
calibration results:
1. When an instrument for calibration has been adjusted or repaired, the calibration
results before and after adjustment shall be reported.
2. A calibration certificate or label shall not contain any recommendation on the
calibration interval, except where agreed with the client.
Example: Let's say a company has a temperature sensor that is critical to their
manufacturing process. They require the sensor to be accurate within +/- 1°C. They
decide that they will send the sensor to a calibration lab for calibration once per year
to ensure it is within specification.
In this case, the calibration interval is one year. The company has determined that the
sensor should be calibrated annually to ensure it remains within their required
accuracy specification.
In addition, calibration certificates shall include the
following, where necessary for the interpretation of
calibration results:
3. When calibration work has been subcontracted, the laboratory
performing the work shall issue the calibration certificate to the
contracting laboratory.
4. The format of the calibration certificate shall be designed to
accommodate data and to minimize the possibility of misunderstanding.
5. When it is necessary to issue a complete new calibration certificate, it
shall be uniquely identified and shall contain a reference to the
original that it replaces.
6. Calibration certificates are part of the controlled documents in a
company's QMS.
Important Notes Regarding Certificates:
1.Hard copies of calibration certificates should also include
the page number and total number of pages.
2. It is highly recommended to include a statement specifying that
the test report or calibration certificate shall not be
reproduced, except in full, without written approval from
the issuing laboratory.
3.Calibration certificates must be controlled and considered
legal documents.
Automated calibration

Automated and semi-automated calibration tools have been developed to help
manage the data associated with calibration, and to make the instrument
technician's job more manageable.
An example of a fully automated calibration system is a process chemical analyzer
in which a set of solenoid valves directs chemical samples of known composition to
the analyzer at programmed time intervals. A computer inside the analyzer records
the analyzer's error (compared to the known standard) and auto-adjusts the
analyzer to correct for whatever errors are detected.
In the illustration we see a schematic of a gas analyzer with two compressed-gas
cylinders holding gases of 0% and 100% concentration of the compound(s) of
interest, called “zero gas” and “span gas”, connected through solenoid valves so
that the chemical analyzer may be automatically tested against these standards.
The only time a human technician needs to attend to the analyzer is when
parameters not monitored by the auto-calibration system must be
checked, or when the auto-calibration system detects an error too large
to self-correct (thus indicating a fault).
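The auto-calibration logic described above can be sketched as a simple zero/span routine. Everything here is an assumption for illustration: the simulated analyzer model, the 2% auto-correct limit, and the order of adjustment (zero first, then span):

```python
# Sketch of the auto-calibration logic described above: switch the analyzer
# to zero gas (0%) and span gas (100%), measure the error at each reference,
# auto-adjust if the error is small, and flag a fault if it is too large to
# self-correct. Thresholds and the analyzer model are illustrative assumptions.
MAX_AUTO_CORRECT = 2.0   # % of full scale; beyond this, flag a fault

class Analyzer:
    def __init__(self):
        self.zero_offset = 0.5   # simulated zero drift (%)
        self.span_gain = 1.01    # simulated span drift (dimensionless)

    def read(self, true_concentration):
        """Analyzer indication for a known reference gas concentration."""
        return self.zero_offset + self.span_gain * true_concentration

def auto_calibrate(analyzer):
    zero_error = analyzer.read(0.0) - 0.0          # zero-gas check
    if abs(zero_error) > MAX_AUTO_CORRECT:
        return "FAULT: zero error too large to self-correct"
    analyzer.zero_offset -= zero_error             # correct zero first

    span_error = analyzer.read(100.0) - 100.0      # span-gas check
    if abs(span_error) > MAX_AUTO_CORRECT:
        return "FAULT: span error too large to self-correct"
    analyzer.span_gain *= 100.0 / analyzer.read(100.0)  # then rescale span
    return "OK"

a = Analyzer()
status = auto_calibrate(a)
```

Correcting zero before span mirrors manual calibration practice, since a zero shift would otherwise be folded into the span adjustment.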
An example of a semi-automated calibration system is
an instrument such as Fluke’s series of Documenting
Process Calibrators (DPC). These devices function as
standards for electrical measurements such as voltage,
current, and resistance, with built-in database capability
for storing calibration records and test conditions:
REFERENCES
∙ Control.com. (n.d.). Calibration Errors and Testing. Retrieved from
https://control.com/textbook/instrument-calibration/calibration-errors-and-testing/
∙ InstrumentationTools.com. (n.d.). Retrieved from https://instrumentationtools.com/
∙ Philippine Instrumentation and Control Society. (n.d.). Retrieved from
https://www.picst.org/
∙ I and E Center for Instrumentation Training & Technical Studies, Inc.
∙ Kirk, F. W., Weedon, T. A., & Kirk, P. (2010). Instrumentation (5th ed.). American
Technical Publishers.
∙ Kuphaldt, T. R. (2015). Lessons in Industrial Instrumentation.
∙ Automation Forum. (n.d.). Retrieved from https://automationforum.co
