The Deepwater Horizon Explosion: Non-Technical Skills, Safety Culture, and System Complexity
Introduction
Despite extensive research investigating safety within the oil and gas sector, serious
mishaps still occur. The 2010 explosion and sinking of the Deepwater Horizon
(DH) semi-submersible drilling rig in the Gulf of Mexico resulted in 11 fatalities,
and the world's worst offshore oil spill. As the exploration and production of hydrocarbons
move to deeper, remoter, and more ecologically sensitive settings, learning
from incidents becomes even more essential. To date, analyses of the DH mishap
have tended to focus on technical decision-making (BP 2010), the detection of
blowouts (Skogdalen, Utne, and Vinnem 2011), and safety regulation (Steinzor
2011). The investigation by the National Oil Spill Commission (2011) highlights
behaviours and organisational factors underlying the incident. However, these are
yet to be considered systematically within a human factors framework; this is
necessary for developing appropriate interventions and refining theory on accident
causation. In this paper, we consider the DH mishap from a non-technical skills
(NTS) and safety culture perspective. Then, through applying a systems-thinking
perspective, we reflect on the role of the human factors-related problems underlying
the incident.
Mishap data often provide the catalyst for safety interventions; however, their
actual contribution to safety improvement can be limited. For example, data from
accident investigations often identify multiple contradictory contributory factors
linked to a mishap (Katsakiori, Sakellaropoulos, and Manatakis 2009), they interpret
behaviours out of context (Rasmussen 1997), lessons can quickly become
obsolete (Kirwan 2001), obvious problems are easier to identify than latent threats
(O'Hare 2000), and the influencers of behaviour are often poorly described (Lawton
and Parker 1998). Analyses frequently identify causes that are not observable
in real-time, and risky behaviours in one context can prevent mishaps in another
(Perrow 1999). A recurring critique of accident analysis techniques is their focus
on the individual components of an incident (i.e. the actions preceding a mishap),
rather than their interdependencies or systemic causes (Leveson 2011). Systems-thinking
approaches attempt to explain the accident process, and to understand
how the beliefs and actions of an operator are dependent on factors outside their
control or awareness. To be effective, interventions to reduce future events must
consider these interdependencies (Rasmussen 1997).
The National Oil Spill Commission's (2011) investigation of the DH provides
rich data on why the mishap occurred. Specific instances of behaviour and
organisational management are described. Yet it does not frame these using relevant
psychology literatures, or fully consider system interdependencies. This is necessary
for (i) describing the underlying psychological dimensions (rather than specific
instances) of worker behaviour and organisational environment that led to the
incident, (ii) explaining how risky activity occurred, and (iii) developing interventions
to reduce the likelihood of error or future mishaps. DH is a seminal event for
safety management in the offshore industry, and we apply psychology theory to
understand the behaviour and organisational environment underlying the incident.
We consider the incident using two of the leading approaches used to understand
safety-related activity in high-risk workplaces, and then adopt a systems thinking
perspective to integrate our findings.
The first, NTS, refers to the cognitive (decision-making, risk assessment,
situation awareness (SA)) and social (teamwork, leadership) skills that underpin safe
performance in high-risk workplaces (Flin, O'Connor, and Crichton 2008).
Psychology research demonstrates the importance of worker NTS for managing
safety emergencies offshore, although offshore research in the area remains quite
limited (O'Connor and Flin 2003). NTS research analyses the non-technical skill
requirements of a workplace through investigating accident data, and triangulates
this analysis through other data collection techniques such as observations, surveys
and interviews (Flin, O'Connor, and Crichton 2008). Based upon this analysis,
training, assessment, and other safety intervention programmes are tailored to the
specific NTS needs of a work domain.
Journal of Risk Research 407
Glossary

Annular space: The space between the well casing and the surrounding rock formation.
Blowout preventer: A large valve used to control and sever the link between the oil well and the platform.
Cement plug: A piece of cement used to seal the well below the seabed.
Cement slurry (with nitrogen): Cement with tiny bubbles of nitrogen injected into it in order to reduce the weight placed upon the well. Unstable nitrogen foam slurry can become porous, with nitrogen breaking out of the cement.
Cementing process: Cement is pumped down the well and up the annular space so that it seals the production casing to the surrounding rock formation.
Channelling: Leaving gaps in the annular space (during the placement of the production casing) through which hydrocarbons can flow up.
Centralisers: Components placed between sections of the production casing to ensure optimal positioning of the casing in the well, and even displacement of drilling mud from the wellbore.
Drilling mud: Fluid used to aid the drilling of boreholes into the earth (e.g. keeping the drill bit cool, transporting rock cuttings to the surface).
Drill string: Hollow sections of drill pipe consisting of drill collars, tools and bits that are used to drill oil wells.
Float valves: Valves on the production casing that ensure cement and mud can be pumped into the well without reversing direction and flowing back up the well.
Hydrocarbon zone: Reservoir of hydrocarbons being extracted.
(Well) kick: Flow of hydrocarbons and/or water into the wellbore and up the annular space (signalling a blowout is going to occur).
Lockdown sleeve: A device used to lock in place the wellhead and production casing.
Lost return/circulation: This occurs when drilling fluid leaks into the surrounding geological formations (instead of returning up the annular space), potentially increasing the fragility of the rock being drilled.
Mud line: The ocean floor where the wellhead is placed.
Negative pressure test: Used to assess whether, under conditions of low pressure, hydrocarbons leak into the well.
Production casing: A continuous wall of steel placed between the hydrocarbon zone at the bottom of the well, and the wellhead on the seafloor.
Riser: The pipe linking the well to the drilling rig.
Shoe track barriers: Barrier for blocking hydrocarbons from entering the bottom of the pipe.
Spacer: Chemicals used to separate drilling mud from cement slurry.
Temporary abandonment: The well is closed off to allow the drilling rig to be moved on, and a smaller production unit to begin hydrocarbon extraction.
Table 1. Key stages leading to the explosion on the Deepwater Horizon (National Oil Spill Commission Report 2011).

1. The cement barrier used to isolate the hydrocarbon zone at the bottom of the well from the annular space failed. Causes: errors in conducting and interpreting the negative-pressure test, creating the belief that the cement job had been successful; errors in the design of the cementing process; the use of an inappropriate foam cement slurry to seal the well; and the design of the temporary abandonment, which resulted in overly high levels of pressure being placed on the cement job. (Report pp. 101–107)

2. Hydrocarbons entered the well and travelled up the riser. Causes: failure of the cement job integrity; errors in monitoring and interpreting real-time data displays showing signs of a kick. (pp. 109–113)

3. Hydrocarbons on the rig floor ignited. Causes: hydrocarbons were not contained, and diesel generators ingested and released them onto deck areas where ignition was possible; deck areas lacked automatic fire and gas detection systems, resulting in equipment in potential ignition locations not being shut down. (pp. 113–114)

4. The blowout preventer (BOP) used to seal the well, and prevent the uncontrolled flow of hydrocarbons towards the rig, did not activate. Causes: the cables linking the emergency disconnect system (EDS) and the BOP were damaged by the fire; failures in the maintenance of the BOP (possibly of batteries) prevented activation of the emergency automatic system for shearing the drill pipe and sealing the wellbore. (pp. 114–115)
Cognitive/individual factors
Research to understand mishaps in complex organisational settings often focuses on
the cognitions (e.g. decision-making, risk assessment, SA) of the operators most
closely associated with a mishap. The DH investigation highlights a variety of
situations where problems in decision-making, risk assessment, and SA contributed
to the mishap.
Decision-making
The National Oil Spill Commission (2011) report highlights a series of operational
and managerial decisions which (either directly or indirectly) contributed to the
mishap. Many of the decisions identified refer to the design and management of the
Macondo well, for example the depth of the well or the cement casing that was to
be used (Hopkins 2012). Decisions involved participants distributed across locations
and various companies, and Table 2 lists key decisions identified as problematic.
Table 2. Key organisational decisions in the immediate lead-up to the Deepwater Horizon mishap (National Oil Spill Commission 2011).

February–April. Halliburton cement design and BP team: omission to investigate or consider negative results on the stability of the foam cement slurry. Influencing factors: saved time and costs; however, the possibility that the nitrogen foam slurry would be porous was not investigated, and the BP team were not aware of the potential problems with the cement slurry. (Report pp. 101–102)

9th April. BP engineering team: cessation of drilling activities at 18,360 feet. Influencing factors: at 18,193 feet cracks started to appear in the well formation; to avoid further damage, fractures were plugged, and drilling to the intended depth of 20,200 feet was stopped. (pp. 93–94)

11th–15th April. BP engineering team: placing and cementing of a final production casing string. Influencing factors: the exploratory Macondo well had drilled into an accessible hydrocarbon reservoir of at least 50 million barrels, although the rock formations at the bottom of (p. 94)

20th April. BP engineering team: rejection of the full suite of tests for evaluating the cement job. Influencing factors: saved time and money; however, it meant the evaluation of the cement job was reliant on a far more limited set of data. (p. 102)

20th April. BP engineering team, with MMS approval: setting the cement plug 3300 feet below the mud line. Influencing factors: reduced the likelihood of damage occurring to the lockdown sleeve, but placed greater pressure on the cement job at the bottom of the well. (pp. 103–104)

20th April. BP engineering team: replacing 3000 feet of mud in the well with lighter seawater for setting the cement plug. Influencing factors: use of seawater avoided mud contaminating the cement plug whilst it set; however, the lighter seawater placed greater pressure on the cement job at the bottom of the well. (p. 104)

20th April. BP engineering team: not installing additional physical barriers. Influencing factors: saved time and costs, but meant that the cement job at the bottom of the well was the only physical barrier. (p. 104)

T.W. Reader and P. O'Connor
SA (risk perception)
Judging risk is a key aspect of SA as it reflects how operators interpret information,
think ahead, and make decisions. The DH investigation highlights SA problems in
the real-time management of risk. Specifically, the investigation critiques the lack of
risk assessment on the DH, with the platform team not accurately assessing risks.
Risk research shows that problems in assessing risk are central to poor decision-making
prior to organisational mishaps (Wagenaar 1992) and that numerous factors influence
how we consider and respond to risk. For example, subjectivity (how a risk is
framed), control (whether we have control over a risk), and expertise for understanding
risk are important (Finucane et al. 2000; Mack 2003; Slovic 1979). Formalised
methods of risk assessment are used to reduce biases in risk perception (Pidgeon
1991), and offshore research shows that risk perception is influenced by factors such
as safety climate, previous experiences of harm, work conditions, and management
commitment to safety (Mearns, Whitaker, and Flin 2003; Rundmo 1996).
Decision-making on the DH was often influenced by misperceptions of risk. For
example, the salience of causing long-term damage to the viability of the well (i.e.
production potential) outweighed consideration of short-term risks. The selection of
a long-string production casing (used to secure the borehole and lower the drill
string through) by BP engineers to cement the well (National Oil Spill Commission
2011, 115, 123) is found to have been made to reduce the likelihood of future lost
returns. This was despite initial computer modelling showing a long-string casing
would increase the difficulty of the immediate cement job. Similarly, the decision
during the temporary abandonment phase to pump cement into the well at a low
rate (National Oil Spill Commission 2011, 116) was made to reduce lost returns.
Yet it created short-term risk through reducing the efficiency of mud displacement
from the well. Also, the decision to use foam cement slurry was taken to preserve the
long-term viability of the well (as it was light and less likely to cause damage), yet
it increased risk through creating greater reliance on external expertise.
Risk assessment research shows the importance of structured risk analysis in
uncertain decision environments (Faber and Stewart 2003), and the National Oil
Spill Commission (2011) report highlights the informal nature of risk appraisal and
decision-making. An example is the decision by the BP engineering team, on
learning that only six out of 16 centralisers were available, not to perform a formal
risk assessment of using six centralisers. This decision was made despite it being
contrary to well design specications, initial computer simulations, and staff
concerns. Similarly, the decision to set the cement plugs in seawater (due to fears
over mud entering the cement plug) rather than heavy drilling mud is critiqued, as
there was no precedent for performing this operation at such depth. Although the
decision was made to minimise risk, little formal analysis of potential new risks
was made (e.g. through the stress placed on the cement formation). Poor risk
assessment is seen as emerging from a drive to save time and resources, as well as
from poor regulation, and is judged central to the mishap. Yet, as discussed below, risk
assessments were also shaped by a range of other factors.
SA (process)
The investigation into the DH describes a number of instances where inaccurate SA
of the technical process of establishing the well contributed to the mishap. For
example, due to a managerial decision not to run a cement evaluation log (tests
used to assess the integrity of the cement job), assessments on the success of the
cement job were based on a limited set of information (National Oil Spill Commission
2011, 117). Evaluation of the cement job was based on an awareness of
whether fluid was flowing back up the well, and indicators of problems in the well
(e.g. during the negative pressure test (NPT)) were not acknowledged. The report
describes crew members as entering a form of confirmation bias (National Oil Spill
Commission 2011, 118–119). In particular, when the negative-pressure test team
were unable to retain a drill-pipe pressure of zero (indicating a successful negative-
pressure test), they instead based their SA on information (pressure on
the kill line) that showed the negative-pressure test to be successful, ignoring
contradictory data.
Loss of SA is also specied as contributing to the blowout. Crew members on
the drill floor were not aware of dynamic and substantial changes in the status of
the Macondo well. Kicks that signalled a blowout was imminent (National Oil
Spill Commission 2011, 120–121) were not detected, and over a 45-min period
various data (e.g. increases in drilling-pipe pressure, pressure differences between
the drill pipe and kill line) indicating an impending blowout were not acted upon.
The reasons for this appear to be the demands placed upon crew members on the
drill floor, whereby they were required to perform multiple tasks at once, resulting
in their attention being split. This was compounded by instrumentation and displays
used for monitoring the well not alerting the drilling crew that a kick had occurred,
and detection of the blowout relied solely on the drilling crew's awareness.
Social/team factors
NTS research has long shown the impact of teamwork upon decision-making and
risk assessment in high-risk domains (Stout, Salas, and Fowlkes 1997). These
activities are crucial for maintaining safety offshore (O'Connor and Flin 2003) and
are considered in the DH investigation. Understanding and improving teamwork in
offshore settings is challenging, as decision-making occurs between various parties
(e.g. engineers, technicians, managers) who are not together, work across shifts,
belong to different organisations, and have different expertise and seniority.
Teamwork
The National Oil Spill Commission (2011) reports various instances of poor
communication and leadership between crew members on the rig (and with
operating companies). An example is the communication between the rig and the
onshore support teams: formal discussions were frequently not held for key operational
decisions, and communication on well management was poor. The rationale
underlying decision-making was not shared or clearly documented, and risks were
not formally assessed by crew members (National Oil Spill Commission 2011,
120). Changes in decision-making were frequently made, yet BP well site leaders
and rig crew had minimal time to become familiar with these changes due to poor
communication. Critically for the blowout, communication between operational
decision-makers on the drilling rig was also problematic (National Oil Spill
Commission 2011, 123). Frequently, operational staff were unaware of emerging
risks and difficulties, for example the negative-pressure test. This shaped their
understanding of the work environment, and meant decisions occurred with a
bounded awareness of the relevant information relating to potential risk (Simon
1991), leaving operators vulnerable to a mishap.
Furthermore, poor inter-organisational communication between Halliburton and
BP is also cited as a key factor in the mishap (National Oil Spill Commission 2011,
117, 123124). In February 2010, Halliburton specialists recognised the potential
for the cement slurry to be unstable. Yet no investigation was made into the slurry
design, and BP was not informed. A further test in April (prior to the scheduled
cement job) repeated this result, and Halliburton failed to communicate to BP the
possibility that the foam cement slurry was potentially unstable. Thus, decisions on
the cement job were made on an incorrect assumption of cement slurry design
stability. This again speaks to the shared nature of risk assessment, and a lack of
clear procedures for outlining communication between the two organisations. More
pointedly, it highlights the importance of considering system interdependencies.
Leadership
The National Oil Spill Commission (2011) also identifies problems in leadership for
technical and risk-related decision-making (e.g. communicating on the centralisers),
and overall management of the DH (National Oil Spill Commission 2011, 122).
Hopkins (2011) elaborates and considers safety leadership and communication on
the DH with respect to the leadership of four VIPs (two from BP and two from
Transocean, all experts on offshore drilling) visiting the DH rig on the day of the
mishap. The purpose of the visit was in part to reinforce safety. Signs that the well
had not been sealed, and the potential for a well blowout, were not investigated
or queried. In particular, Hopkins (2011) highlights the nature of the
flawed communication between the VIPs and the DH team as contributing to them
not identifying the problems that were occurring. First, the VIPs tended to focus on
slips, trips, and falls rather than on other aspects of safety and risk assessment.
Second, the walkround was conducted in as unobtrusive a manner as possible in
order not to undermine the crew. Third, safety questions were framed in a manner that
assumed things had gone successfully, limiting opportunities to discuss problems.
Furthermore, in considering the management of the accident, leadership again
appears important. For example, problems in command and control during the
evacuation sequence were critical (Skogdalen, Khorsandi, and Vinnem 2012).
The DH had a split chain of command between the Offshore Installation Manager
(OIM) and the vessel captain. Leadership depended on the status of the rig, whether it
was latched, underway, or in an emergency situation. Critical decisions relating
to lifeboat launches had to be made, and although the responsibility for decision-making
lay with the Captain, crew members were unclear as to who was in charge
due to missing handover procedures. To some extent, the report finds leadership issues
(e.g. setting clear expectations, developing clear communication procedures, listening
to junior staff) as underlying the problems in communication, SA, and risk assessment
leading to the incident.
Regulation
As discussed above, the role of the regulator is critiqued by the DH investigation.
Effective regulation shapes offshore safety culture through creating expectations and
norms on safety management (Cox and Cheyne 2000; Taylor 1979). The MMS
lacked the staff, resources, technical expertise (e.g. a growing awareness of the
increased likelihood of blowout preventer failures in deepwater conditions (National
Oil Spill Commission 2011, 74)), decision-making autonomy, and political influence
to regulate safely. Senior officials focused on maximising revenue from leasing and
production. This impacted upon safety culture through the following mechanisms.
First, external inspections were often less rigorous than internal safety
audits, focussing on quantity rather than quality (National Oil Spill Commission
2011, 78). Inspectors did not ask tough questions and avoided reaching conclusions
that would increase regulation or costs (National Oil Spill Commission 2011, 126).
Second, contingency planning for DH disaster scenarios was inadequate (National
Oil Spill Commission 2011, 84), and there were no meaningful regulations for
testing cement, managing well-cementing, or conducting negative-pressure tests
(National Oil Spill Commission 2011, 228). Where guidelines were available (e.g.
depths for installing cement plugs), exclusions were accepted.
The above issues in regulation are seen as contributing to an environment where
production was prioritised over safety, an underlying cause of the DH mishap. The
lack of safety cases is seen as emblematic of the poor industry-wide safety culture
within which the DH operated. Safety cases involve operating companies validating
the effectiveness of their installation safety management systems through demonstrating
that hazards have been mitigated to as low as reasonably practicable.
Although previously rejected, there have been calls to introduce
safety cases in the United States in a manner similar to the North Sea
(National Oil Spill Commission 2011). However, it has been suggested that safety
cases are not compatible with US law, and regulators lack the resources necessary
to make a safety case regime minimally successful (Steinzor 2011). Ideally, this
would result in organisations outlining their safety management systems and procedures
in greater detail, alongside changing perceptions on the prioritisation of safety
and production.
to system safety offshore (e.g. changing procedures for the NPT) only addresses a
specific scenario or problem, and there are multiple pathways through which error
or problems in managing hazards can result in a mishap (Anderson 1999).
Whilst the specific failures leading to an incident might be addressed (e.g.
blowout preventer technology), the deeper problems (e.g. regulation, safety
culture) that underlie the failures remain and can combine to create similar events
in different areas of the offshore system. Furthermore, behaviours that appear
erroneous are often understood with hindsight, despite decision-makers being unable
to assess (e.g. due to information limitations) the consequences of their judgements
in real-time (Cook and Nemeth 2010). In utilising the DH analysis to shape safety
management across the offshore industry, it is necessary to begin charting and
explaining the relationships between the various components within the offshore
system that contributed to, or were involved in, the mishap. Understanding activity in
the context of the organisational system is essential for developing a more
integrated accident analysis.
Using the NTS and safety culture analysis above, Figure 1 captures the interactions
between the various events identified in the DH investigation as leading to the
mishap. The model takes inspiration from Rasmussen's (1997) description of the
inter-connected systems and complex interactions leading to the Zeebrugge ferry
accident. It utilises Leveson's (2011) focus on the hierarchical relationships
between: (i) the specific events/errors leading to an accident; (ii) the environmental
conditions that allowed the mishap to occur; and (iii) the underlying system factors
shaping the organisation and work conducted within it. In Leveson's (2012)
Systems-Theoretic Accident Model and Processes (STAMP) approach, safety is
viewed as a control problem. Traditionally, accident models explain accident
causation in terms of a series of events. However, the STAMP approach considers
accidents to occur as a result of a lack of constraints imposed on the system design
and during operational deployment. In this approach, accidents in complex systems
are not considered to occur due to independent component failures. Rather they
result from external disturbances or dysfunctional interactions amongst system
components that are not adequately handled by the control system (see Leveson
2012 for more detail).
In the model presented, time is not explicitly captured. Rather, systemic aspects
of the accident process such as regulation, safety culture, communication, use of
third parties and human factors engineering are conceptualised as factors that
implicitly influenced the risk environment over time (e.g. regulation). Also, the
model is limited to the analysis of factors leading to the DH mishap described in
the official National Oil Spill Commission (2011) report, and undoubtedly, there are
factors yet to be included.
Figure 1 finds no single root cause or chain of behaviours to have caused the
accident. Although the failure of the blowout preventer is often seen as the core
failure underlying the oil spill (and remains the focus of technical and non-technical
research on safety interventions), a combination of interlinked events that occurred
simultaneously underlies the incident. These led to the earlier described non-technical
skills and safety culture problems. Many of the accident mechanisms (e.g. the
unsuccessful cement job) leading to the blowout emerged from separate and distinct
failures (e.g. flaws in the cement design, inappropriate foam cement slurry). It is
not clear whether ameliorating these individually would have prevented the accident
mechanisms from occurring, or whether the deeper-lying conditions (flaws in the
design of the well, poor information sharing between contract and operating
companies) would simply have created alternative accident mechanisms (or
postponed the mishap to a later point in time).

Figure 1 depicts the Deepwater Horizon destruction and oil spill as arising from interacting conditions (flaws in the design of the well; rejection of the cement evaluation log; lack of clear procedures for running the NPT; poor information sharing between operator and contract companies; minimal communication on operational decisions; informal risk assessment procedures; not learning from previous incidents), underpinned by system factors (production/cost-saving pressure; third-party companies conducting safety-critical work; industry safety standards/regulation; communication culture between operational, management, and contract staff; human factors engineering).

Figure 1. Interactions between events leading to the Deepwater Horizon mishap, the conditions that allowed the mishap to occur, and the system factors underlying the mishap (as described within the National Oil Spill Commission Report (2011)).
Furthermore, the conditions leading to the DH mishap can be seen as manifestations
of systemic factors or system migration to states of higher risk (Leveson
2011, 60). For example, the lack of industry regulation for running safety critical
processes created risks across the offshore system, including negating the need for
formal risk assessments in the design of the well, training for managing the NPT
and emergency scenarios, and maintenance and inspection routines. It also
negatively influenced other system factors, such as safety culture and the
requirement for effective human factors engineering.
Other system factors, such as the culture of communication, shaped the work
environment through more implicit mechanisms (e.g. information sharing between
contract and operator staff, management listening to the concerns of junior staff)
and heightened the risks associated with safety critical tasks leading to the mishap.
For example, in terms of the decision-making and risk assessment of operators, the
analysis shows how errors were shaped. In particular, the drilling crews monitoring
the progress of the well were unaware of the problems with the NPT, and their
attention was split across different tasks. This reflects a human factors engineering
problem (a systemic issue), and also that their decision-making was constrained through a
lack of communication (and thus awareness) on the doubts surrounding the
problems with the NPT. In turn, those performing the NPT did not have findings from
the cement evaluation log available and were unaware of the problems surrounding
the cement slurry (known by Halliburton, but not communicated to BP) and thus
the need to take a more conservative approach to assessing well integrity.
In summary, at the level of event/accident mechanism, the DH mishap has a
relatively traditional accident causation pathway. Active (NTS) and latent factors
(safety culture) combined to contribute to the incident, and it can be
explained from a traditional linear accident analysis perspective. However, closer
inspection of the incident using a systems-thinking perspective reveals a range of
complicated interdependent systems, with the accident potentially occurring through a
multitude of pathways. As offshore systems become increasingly complex, and the
nature of work itself more challenging, it becomes necessary to better understand
the inter-linking components that underlie the incident, and may combine again in
future to cause different mishaps with similar causes.
Conclusions
A range of operational behaviours and underlying safety management problems are
identified as causing the DH disaster. We apply NTS and safety culture concepts to
interpret these factors, as they facilitate the design of interventions through
identifying common (yet highly specific) patterns of cognition, social behaviour,
and organisational management underlying mishaps. Yet, this alone would appear
insufficient for applying the lessons from the DH to prevent future offshore
mishaps. Specifically, for interventions to be effective, a better understanding is
required of the interactions and relationships between the various events, conditions,
and system factors that combined to produce the incident. We begin the
development of a systems-model delineating the various pathways through which
underlying system factors created organisational risk and compromised the activity
of operational staff on the DH in the lead-up to the blowout. The principles of this
model have applications beyond the offshore sector, as it seeks to capture the
migration of risk across complex organisational systems.
References
Anderson, P. 1999. Complexity Theory and Organization Science. Organization Science
10: 216–232.
Bea, R. 2002. Human and Organizational Factors in Reliability Assessment and
Management of Offshore Structures. Risk Analysis 22: 29–45.
BP. 2010. Deepwater Horizon: Accident Investigation Report. Houston, TX: BP.
Cook, R., and C. Nemeth. 2010. Those Found Responsible Have Been Sacked: Some
Observations on the Usefulness of Error. Cognition, Technology & Work 12: 87–93.
Cox, S., and A. Cheyne. 2000. Assessing Safety Culture in Offshore Environments. Safety
Science 34: 111–129.
Crichton, M. 2009. Improving Team Effectiveness Using Tactical Decision Games. Safety
Science 47: 330–336.
Cullen. 1990. Report of the Official Inquiry into the Piper Alpha Disaster. London: HMSO.
DeJoy, D. 2005. Behavior Change versus Culture Change: Divergent Approaches to
Managing Workplace Safety. Safety Science 43: 105–129.
Dekker, S., P. Cilliers, and J. Hofmeyr. 2011. The Complexity of Failure: Implications of
Complexity Theory for Safety Investigations. Safety Science 49: 939–945.
Endsley, M. 1995. Towards a Theory of Situation Awareness in Dynamic Systems. Human
Factors 37: 32–64.
Faber, M., and M. Stewart. 2003. Risk Assessment for Civil Engineering Facilities: Critical
Overview and Discussion. Reliability Engineering & System Safety 80: 173–184.
Finucane, M., A. Alhakami, P. Slovic, and S. Johnson. 2000. The Affect Heuristic in
Judgements of Risks and Benefits. Journal of Behavioral Decision Making 13: 1–17.
Flin, R. 1995. Crew Resource Management for Training Teams in the Offshore Oil
Industry. European Journal of Industrial Training 9: 23–27.
Flin, R., K. Mearns, P. O'Connor, and R. Bryden. 2000. Safety Climate: Identifying the
Common Features. Safety Science 34: 177–192.
Flin, R., P. O'Connor, and M. Crichton. 2008. Safety at the Sharp End: A Guide to
Non-technical Skills. Aldershot: Ashgate.
Gordon, R., R. Flin, and K. Mearns. 2005. Designing and Evaluating a Human Factors
Investigation Tool (HFIT) for Accident Analysis. Safety Science 43: 147–171.
Guldenmund, F. 2000. The Nature of Safety Culture: A Review of Theory and Research.
Safety Science 34: 215–257.
Hopkins, A. 2011. Management Walk-arounds: Lessons from the Gulf of Mexico Oil Well
Blowout. Safety Science 49: 1421–1425.
Hopkins, A. 2012. Disastrous Decisions: The Human and Organisational Causes of the Gulf
of Mexico Blowout. Sydney: CCH Australia.
Katsakiori, P., G. Sakellaropoulos, and E. Manatakis. 2009. Towards an Evaluation of
Accident Investigation Methods in Terms of Their Alignment with Accident Causation
Models. Safety Science 47: 1007–1015.
Kirwan, B. 2001. Coping with Accelerating Socio-technical Systems. Safety Science 37:
77–107.
Kujath, M., P. Amyotte, and F. Khan. 2010. A Conceptual Offshore Oil and Gas Process
Accident Model. Journal of Loss Prevention in the Process Industries 23: 323–330.
Lawton, R., and D. Parker. 1998. Individual Differences in Accident Liability: A Review
and Integrative Approach. Human Factors 40: 655–671.
Leveson, N. 2011. Applying Systems Thinking to Analyze and Learn from Events. Safety
Science 49: 55–64.
Leveson, N. 2012. Engineering a Safer World: Systems Thinking Applied to Safety.
Cambridge, MA: MIT Press.
Mack, A. 2003. Inattentional Blindness: Looking Without Seeing. Current Directions in
Psychological Science 12: 179–184.
424 T.W. Reader and P. O'Connor
Mearns, K., R. Flin, and P. O'Connor. 2001. Sharing Worlds of Risk: Improving
Communication with Crew Resource Management. Journal of Risk Research 4: 377–392.
Mearns, K., B. Kirwan, T. W. Reader, J. Jackson, R. Kennedy, and R. Gordon. 2013.
Development of a Methodology for Understanding and Enhancing Safety Culture in Air
Traffic Management. Safety Science 53: 123–133.
Mearns, K., S. Whitaker, and R. Flin. 2001. Benchmarking Safety Climate in Hazardous
Environments: A Longitudinal, Inter-organisational Approach. Risk Analysis 21: 771–786.
Mearns, K., S. Whitaker, and R. Flin. 2003. Safety Climate, Safety Management Practice
and Safety Performance in Offshore Environments. Safety Science 41: 641–680.
National Oil Spill Commission. 2011. Deep Water: The Gulf Oil Disaster and the Future of
Offshore Drilling. Washington, DC: National Commission on the BP Deepwater Horizon
Oil Spill and Offshore Drilling.
O'Connor, P., and R. Flin. 2003. Crew Resource Management Training for Offshore Oil
Production Teams. Safety Science 33: 111–129.
O'Hare, D. 2000. The Wheel of Misfortune: A Taxonomic Approach to Human Factors in
Accident Investigation and Analysis in Aviation and Other Complex Systems.
Ergonomics 43: 2001–2019.
Okstad, E., E. Jersin, and R. K. Tinmannsvik. 2012. Accident Investigation in the
Norwegian Petroleum Industry – Common Features and Future Challenges. Safety
Science 50: 1408–1414.
Paté-Cornell, M. E. 1993. Learning from the Piper Alpha Accident: A Postmortem Analysis
of Technical and Organizational Factors. Risk Analysis 13: 215–232.
Perrow, C. 1999. Normal Accidents: Living with High-risk Technologies. Princeton, NJ:
Princeton University Press.
Pidgeon, N. 1991. Safety Culture and Risk Management in Organizations. Journal of
Cross-Cultural Psychology 22: 129–140.
Rasmussen, J. 1997. Risk Management in a Dynamic Society: A Modelling Problem.
Safety Science 27: 183–213.
Reader, T., R. Flin, K. Mearns, and B. Cuthbertson. 2011. Team Situation Awareness and
the Anticipation of Patient Progress during ICU Rounds. BMJ Quality & Safety 20:
1035–1042.
Reason, J. 1997. Managing the Risks of Organisational Accidents. Aldershot: Ashgate.
Rundmo, T. 1996. Associations Between Risk Perception and Safety. Safety Science 24:
197–209.
Simon, H. A. 1991. Bounded Rationality and Organizational Learning. Organization
Science 2: 125–134.
Skogdalen, J., J. Khorsandi, and J. Vinnem. 2012. Evacuation, Escape, and Rescue
Experiences from Offshore Accidents including the Deepwater Horizon. Journal of Loss
Prevention in the Process Industries 25: 148–158.
Skogdalen, J., I. Utne, and J. Vinnem. 2011. Developing Safety Indicators for Preventing
Offshore Oil and Gas Deepwater Drilling Blowouts. Safety Science 49: 1187–1199.
Slovic, P. 1979. Rating the Risks. Environment 21: 36–39.
Sneddon, A., K. Mearns, and R. Flin. 2006. Safety and Situation Awareness in Offshore
Crews. Cognition, Technology & Work 8: 255–267.
Steinzor, R. 2011. Lessons from the North Sea: Should Safety Cases Come to America?
Boston College Environmental Affairs Law Review 38: 417–444.
Stout, R., E. Salas, and J. Fowlkes. 1997. Enhancing Teamwork in Complex Environments
through Team Training. Group Dynamics: Theory, Research and Practice 1: 169–182.
Taylor, S. 1979. Hospital Patient Behavior: Reactance, Helplessness, or Control? Journal
of Social Issues 35: 156–184.
Turner, B., and N. Pidgeon. 1997. Man-made Disasters. 2nd ed. London: Butterworth-
Heinemann.
Wagenaar, W. 1992. Risk Taking and Accident Causation. In Risk Taking Behaviour, edited
by J. Yates. Chichester: Wiley.
Woodcock, B., and K. Toy. 2011. Improving Situational Awareness through the Design of
Offshore Installations. Journal of Loss Prevention in the Process Industries 24: 847–851.
Copyright of Journal of Risk Research is the property of Routledge and its content may not be
copied or emailed to multiple sites or posted to a listserv without the copyright holder's
express written permission. However, users may print, download, or email articles for
individual use.