G 63 Quality Risk Management (QRM) Application For Critical Instrument Calibration
Introduction
This document offers a risk assessment approach to document a critical instrument
calibration interval change request.
Non-Critical Instruments
Calibration frequencies for non-critical instruments, if any, can be adjusted by the
maintenance team as appropriate based on instrument history and other factors. This
practice does not constrain interval change opportunities for non-critical instruments.
Critical Instruments
Calibration frequencies for critical instruments may be adjusted as necessary based on
calibration data or other information that may support a change. Before extending
calibration intervals, review the calibration history of the instrument based on the table
below. Consider the results of the calibrations [e.g., Return to Service (RTS) limit
exceeded, etc.] in the listed time window when modifying frequency.
The interval change table above, which should be based on instrument history, is the
primary method of determining opportunities for calibration interval changes.
Consideration should be given to the level of risk before making changes in calibration
interval. For instruments that are considered to be minimal risk, an informal concise
assessment is appropriate.
The calibration risk evaluation should consider how a deviation report involving the
instrument might affect release of the product lots in question. The extent to which the
instrument would impact the product is a good indicator of risk. A more conservative
extension of the calibration interval can then be made, if appropriate.
Probability
The probability (or likelihood) of instrument failure may be attributed to:
a) design and construction,
b) the environment it is exposed to, and
c) how it is used.
Knowledge of the effects of design and construction can be gained through a
review of the maintenance history of the instrument, comparing it to similarly
designed instruments, and by knowing the age of the instrument (period of time
in use). For each of these parameters, if the data and relevant information are not
known, the risk should be assumed to be high.
The following criteria may be used to determine risk ranking for failure probability.
Refer to Table I below.
1) History – There are three (3) possible scenarios, illustrated in Table I, where
instrument history may be used to determine risk ranking for failure probability.
Specifically:
(i) availability of recorded history of an instrument in its current location,
(ii) availability of history of identical instrumentation of the same make and
model in the same area, and
(iii) availability of history of similar instrumentation in a similar
environment.
Risk ranking is determined by the length of recorded history available for an
instrument, the number of available instruments for use in data gathering, and
the typical interval between observed failures [mean time between failures
(MTBF)]. When the number of instruments in place combined with the use history
(e.g., >2 years) is sufficient to have observed most, if not all, potential
failure modes (i.e., MTBF is long, >24 months), the risk should be considered low.
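The history-based ranking above can be sketched in code. This is a minimal illustration, not a prescribed method: the failure dates, the two-year history threshold, and the 24-month MTBF threshold from the text are applied to hypothetical data, and the "Medium" tier for short MTBF is an assumed middle case.

```python
from datetime import date

# Hypothetical failure dates recorded for one instrument in its current
# location (illustrative data, not from the guideline).
failure_dates = [date(2020, 1, 15), date(2021, 4, 2), date(2022, 9, 20)]

def mtbf_months(failures):
    """Approximate mean time between recorded failures, in months."""
    if len(failures) < 2:
        return None  # not enough history to compute an interval
    gaps = [(b - a).days / 30.44 for a, b in zip(failures, failures[1:])]
    return sum(gaps) / len(gaps)

def history_risk(failures, years_in_service):
    """Low risk only when use history is long (>2 years) and MTBF > 24 months;
    unknown or insufficient data is assumed to be high risk, per the text."""
    mtbf = mtbf_months(failures)
    if mtbf is None or years_in_service <= 2:
        return "High"
    return "Low" if mtbf > 24 else "Medium"

print(history_risk(failure_dates, years_in_service=3))
```

With the sample dates above the MTBF is roughly 16 months, so the history alone would not support a "Low" ranking even though more than two years of data exist.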
2) Environment – Due to the dynamic nature of a work environment, a ranking cannot be
based solely on one factor; rather, all influences should be considered. For example,
consider a transmitter located in a clean, dry area that does not get washed down but
that operates in an unstable environment: while the transmitter may be adequately
protected and isolated from dirt, dust, and exposure to wash down, the vibration and
shock to which it is exposed can physically fatigue components and cause erroneous
readings. In this case, the sub-category of vibration and shock will determine the
environmental risk ranking for failure probability.
3) Range of use – Instruments designed around transducers need to be used
within their design range; instruments are more linear and predictable in the middle of
their range. Exposing instruments to inputs outside of the design range may create
conditions of non-linearity that are not readily apparent. Risk ranking is determined
by the potential range excursions and by exposure to conditions that may take the device
to the edge of its range or beyond.
Severity
There are several factors that may define the severity (or consequence) of
instrument failure. The following brief narrative description for each factor will
supplement the guidance provided in Table II below.
1) Human safety – A direct threat to human safety defines the most severe
consequence of an out-of-tolerance (OOT) calibration. If an instrument
reading (or alarm) is the main protection against severe or potentially
fatal injury, such as a breathing-air, oxygen-level, or lethal-compound
monitor, then severity is potentially high. Whether the instrument is a
primary component in a safety system or part of a redundant system will
determine the severity of this risk.
Detectability
Being able to immediately detect an instrument OOT condition may mitigate the
impact of such a condition upon the system, process, or even the product with
which it is associated or used. Immediate detection is determined by whether
the system or process
utilizing the instrument is automated, or manual, and whether there are other
instruments or tell-tale parameters that occur as a direct result of incorrect
instrumentation. Refer to Table III below. Systems or processes that are
equipped with automation features or components that make it easier to detect
OOT conditions should have a reduced risk in detectability ranking. Systems that
have additional instruments or detectable parameters that are
frequently observed/compared will enable timely identification of OOT
conditions, thus resulting in lower risk.
Risk Acceptance:
Once the probability, severity, and detectability of instrument failure are
individually assessed and agreement is reached on the risk associated with each
instrument, a site should then define the level of risk it is willing to accept. The
FMEA ranking criteria can be used to assign numerical ratings and complete
the overall risk evaluation. See Table IV.
Table IV: FMEA Ranking Criteria and Failure Scores using a Three
Point Ranking System
To assign the appropriate level of risk, a simple Low, Medium, High model with a
corresponding numerical designation of 1, 2, and 3 will be used. Each criterion
(probability, severity, detectability) can therefore have a numerical rating of 1, 2, or 3
that will determine the risk score. The risk score for each failure mode is obtained by
multiplying the individual scores for each criterion.
For example:
Probability x Severity x Detectability = Risk Score
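The three-point scoring scheme above can be sketched as follows. The ratings fed in are hypothetical examples; a site would substitute the rankings it assigned in the preceding steps.

```python
# Numeric designations from the three-point model: Low=1, Medium=2, High=3.
RATING = {"Low": 1, "Medium": 2, "High": 3}

def risk_score(probability, severity, detectability):
    """FMEA risk score = probability x severity x detectability (range 1-27)."""
    return RATING[probability] * RATING[severity] * RATING[detectability]

# Illustrative example: moderately likely failure, severe consequence,
# easily detected (low detectability risk).
print(risk_score("Medium", "High", "Low"))  # 2 * 3 * 1 = 6
```

Because each criterion is 1, 2, or 3, scores fall between 1 and 27; a site would map bands of that range onto its accepted risk levels when applying Table IV.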
A high risk score will require adherence to the calibration interval table, or
possibly a team review to set intervals tighter than the published guidance.