
3/20/2014

Hardware Hazards

Accident Case: Flixborough

The explosion of a large cloud of cyclohexane at Flixborough (UK) in 1974, which killed 28 people and caused extensive plant damage, serves as a very instructive case. The triggering event was the failure of a temporary pipe serving as a substitute in a reactor unit. The accident was caused by a piece of hardware breaking down, but closer investigation revealed that the breakdown followed from overload, and that the temporary construction was in fact inadequate for its intended use. After two months' service, the pipe was exposed to bending forces due to a slight pressure rise of the 10-bar (10⁶ Pa) cyclohexane content at about 150 °C. The two bellows between the pipe and the nearby reactors broke, and 30 to 50 tonnes of cyclohexane were released and soon ignited, probably by a furnace some distance from the leak (see figure 1). A very readable account of the case is found in Kletz (1988).

Figure 1. Temporary connection between tanks at Flixborough

Hazard Analysis

The methods that have been developed to find the risks that may be relevant to a piece of equipment, to a chemical process or to a certain operation are referred to as hazard analysis. These methods ask questions such as: "What may possibly go wrong?", "Could it be serious?" and "What can be done about it?" Different methods of conducting the analyses are often combined to achieve reasonable coverage, but no such set can do more than guide or assist a clever team of analysts in their determinations. The main difficulties with hazard analysis are as follows:

- availability of relevant data
- limitations of models and calculations
- new and unfamiliar materials, constructions and processes
- system complexity
- limitations on human imagination
http://www.ilo.org/oshenc/part-viii/audits-inspections-and-investigations/item/915-hardware-hazards 1/3


- limitations on practical tests.

To produce usable risk evaluations under these circumstances, it is important to stringently define the scope and the level of ambition appropriate to the analysis at hand; for example, it is clear that one does not need the same sort of information for insurance purposes as for design purposes, or for the planning of protection schemes and the construction of emergency arrangements. Generally speaking, the risk picture must be filled in by mixing empirical techniques (i.e., statistics) with deductive reasoning and a creative imagination. Different risk evaluation tools - even computer programs for risk analysis - can be very helpful. The hazard and operability study (HAZOP) and the failure mode and effect analysis (FMEA) are commonly used methods for investigating hazards, especially in the chemical industry. The point of departure for the HAZOP method is the tracing of possible risk scenarios based on a set of guide words; for each scenario one has to identify probable causes and consequences. In a second stage, one tries to find means for reducing the probabilities or mitigating the consequences of those scenarios judged to be unacceptable. A review of the HAZOP method can be found in Charsley (1995). The FMEA method asks a series of "what if" questions for every possible risk component in order to determine thoroughly whatever failure modes may exist, and then to identify the effects that they may have on system performance; such an analysis will be illustrated in the demonstration example (for a gas system) presented later in this article. Fault trees and event trees, and the modes of logical analysis proper to accident causation structures and probability reasoning, are in no way specific to the analysis of hardware hazards, as they are general tools for system risk evaluations.
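The fault-tree reasoning mentioned above can be given a minimal quantitative sketch: basic-event probabilities are combined through AND and OR gates to estimate the probability of a top event. Note that every event name and probability in this fragment is a hypothetical illustration (loosely themed on the Flixborough scenario), not data from the actual case, and that the basic events are assumed independent - an assumption a real analysis must justify.

```python
# Minimal fault-tree sketch: combining basic-event probabilities through
# AND/OR gates to estimate a top-event probability. All event names and
# probabilities are hypothetical illustrations, not data from the
# Flixborough case. Basic events are assumed independent.

def and_gate(*probs):
    """AND gate: every input event must occur; multiply probabilities."""
    result = 1.0
    for p in probs:
        result *= p
    return result

def or_gate(*probs):
    """OR gate: at least one input event occurs; one minus the
    probability that all inputs fail to occur."""
    none_occur = 1.0
    for p in probs:
        none_occur *= (1.0 - p)
    return 1.0 - none_occur

# Hypothetical basic-event probabilities (per year of operation):
p_pipe_failure  = 1e-3   # temporary pipe fails under load
p_bellows_break = 5e-3   # a bellows breaks
p_ignition      = 0.5    # a released cloud finds an ignition source

# Top event "explosion" = (pipe fails OR bellows breaks) AND ignition.
p_release   = or_gate(p_pipe_failure, p_bellows_break)
p_explosion = and_gate(p_release, p_ignition)

print(f"P(release)   = {p_release:.6f}")
print(f"P(explosion) = {p_explosion:.7f}")
```

An event tree works in the opposite direction, branching forward from an initiating event; the same gate arithmetic applies along each branch. Real fault-tree tools must also handle shared (common-cause) events, for which this simple multiplication is no longer valid.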
Tracing hardware hazards in an industrial plant

To identify possible hazards, information on construction and function can be sought from:

- actual equipment and plant
- substitutes and models
- drawings, electrical diagrams, piping and instrumentation (P/I) diagrams, etc.
- process descriptions
- control schemes
- operation modes and phases
- work orders, change orders, maintenance reports, etc.

By selecting and digesting such information, analysts form a picture of the risk object itself, its functions and its actual use. Where things are not yet constructed - or are unavailable for inspection - important observations cannot be made, and the evaluation must be based entirely on descriptions, intentions and plans. Such evaluation might seem rather poor, but in fact, most practical risk evaluations are made this way, either in order to seek authoritative approval for applications to undertake new construction, or to compare the relative safety of alternative design solutions. Real-life processes will be consulted for the information not shown on the formal diagrams or described verbally by interview, and to verify that the information gathered from these sources is factual and represents actual conditions. This additional information includes the following:

- actual practice and culture
- additional failure mechanisms/construction details
- sneak paths (see below)
- common error causes


- risks from external sources/missiles
- particular exposures or consequences
- past incidents, accidents and near accidents.

Most of this additional information, especially sneak paths, is detectable only by creative, skilled observers with considerable experience, and some of it would be almost impossible to trace with maps and diagrams. Sneak paths denote unintended and unforeseen interactions between systems, where the operation of one system affects the condition or operation of another through ways other than the functional ones. They typically arise where functionally different parts are situated near each other, or where (for example) a leaking substance drips onto equipment beneath and causes a failure. Another mode of sneak-path action may involve the introduction of wrong substances or parts into a system by means of instruments or tools during operation or maintenance: the intended structures and their intended functions are changed through the sneak paths. By common-mode failures one means that certain conditions - like flooding, lightning or power failure - can disturb several systems at once, perhaps leading to unexpectedly large blackouts or accidents. Generally, one tries to avoid sneak-path effects and common-mode failures through proper layouts and by introducing distance, insulation and diversity into working operations.
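The quantitative weight of common-mode failures can be illustrated with the standard beta-factor model, a technique from reliability engineering not discussed in the text itself: a fraction beta of each redundant unit's failure probability is attributed to a shared cause (flooding, lightning, power loss) that defeats all units at once. All figures below are hypothetical.

```python
# Beta-factor sketch of common-mode (common-cause) failure. A fraction
# `beta` of each unit's failure probability stems from a shared cause
# that fails all redundant units simultaneously; only the remaining
# share can fail independently. All numbers are hypothetical.

def joint_failure_independent(p, n):
    """n redundant units failing together, assuming full independence."""
    return p ** n

def joint_failure_beta(p, n, beta):
    """Beta-factor model: the common-cause share (beta * p) defeats every
    unit at once; the independent shares must all fail together."""
    return beta * p + ((1.0 - beta) * p) ** n

p, n, beta = 1e-3, 2, 0.1   # two redundant units, 10% common-cause share

print(joint_failure_independent(p, n))   # optimistic independence estimate
print(joint_failure_beta(p, n, beta))    # roughly 100x larger here
```

Even a modest common-cause share dominates the joint failure probability, which is why distance, insulation and diversity are recommended: they reduce beta rather than the individual failure probability p.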

