
Safety Science 45 (2007) 305–327

www.elsevier.com/locate/ssci

Safe by design: where are we now?


Andrew Hale a,*, Barry Kirwan b, Urban Kjellén c

a Safety Science Group, Delft University of Technology, Postbus 5015, 2600 GA Delft, Netherlands
b EUROCONTROL – EEC BP15, Bois des bordes 91222 Bretigny Cedex, France
c Hydro Oil and Energy, N-0246 Oslo, Norway

Abstract

This paper reviews and discusses the principal findings of the preceding papers in the special issue and draws out the lessons to be learned by designers, safety specialists and researchers.
It returns to the questions posed in the editorial and groups them under the headings of the case for design as an important contributor to operational safety, the general principles of the design process and whether they are universally applicable across different technologies and fields of application, the dilemmas facing designers and the help which can be offered to assist them in their vital and difficult work.
The paper ends with a summary of the gaps in our knowledge of the design process and its contribution to safety. These are large and cry out for more research to study them.
© 2006 Elsevier Ltd. All rights reserved.

Keywords: Safe design; User and designer mental models; Design standards; Use situations

The papers in this special issue can only give a broad-brush impression of the state of the art of safety in design. Most of them are written from the point of view of the safety and human factors experts, working to improve the attention paid in the design stage to safety issues. Only a few are written from the point of view of the designer or design team, the people who ultimately have to carry out the difficult task of achieving that improvement. However, we believe that a clear picture nonetheless emerges, which can form the basis for designers to achieve a more systematic approach to inherently safe design. We try to summarise that approach in this paper.
* Corresponding author.
E-mail addresses: a.r.hale@tudelft.nl (A. Hale), barry.kirwan@eurocontrol.int (B. Kirwan), urban.kjellen@hydro.com (U. Kjellén).

doi:10.1016/j.ssci.2006.08.007

In the subsequent sections we will develop a number of issues further under the following headings, which we derive from the questions we posed in the editorial:

1. The case for safety by design.
2. The general principles of the design process: context, nature, content, roles and responsibilities.
3. Dilemmas facing the design organisation.
4. Conclusions: what is the scope for improvements?

1. The case for safety by design

We started this special issue by addressing the question: How important is design in determining the level of safety during use of a system or product? Before we can answer that question, we need first to be clearer about what is covered by the design process.

1.1. Defining design

An issue on which there was no clear agreement in the papers is what is covered by the design process – where does it start and when does it end? Does it include the initial choice of the high level concept for fulfilling the system objectives, or does the design process only start once this has been specified and is being worked out? If so, what is this transition point? Clearly this boundary will reflect in part what is being designed. If we consider the design of oil and gas installations, it makes no sense for the contractor responsible for detailed design to question the high level choices that led the customer (oil company) to define the need for the specified installation, rather than another type of installation or other design capacities. For the customer of consumer products, it may well be sensible to consider this choice as a design decision, as the customer has no control over it. We see this boundary question also reflected at the other end of the design process. Kinnersley and Roelen indicate that, in aviation, the operating procedures for a plane (or other parts of the system) are considered as part of the design. The European standard on safety in machinery design also considers the instruction manual as part of the design (CEN, 1991). Errors in this aspect of design, related to procedures development, should therefore be considered design errors.
As Fadier and De la Garza's paper indicates, the boundary of the design process cannot be completely sharp. In the process of installation of equipment and start-up, the system boundary shifts because the plant cannot be installed as specified, or proves not to be operable within the defined design limits. In some of the studies reviewed by Kinnersley and Roelen these sorts of shifts (use beyond the design base and change of operational context) are counted as design errors. This seems to be pushing the responsibility of the designer very far, raising as it does the issue of 'predictable misuse'. The EU Machinery regulations also include mention of this concept (European Council, 1989/98). In a paper presented to the workshop, but not included in this special issue, Blanquart (2003) indicated that the space industry has an even wider definition. Designers are asked to consider how the crew and controllers of a spacecraft can use the craft outside the planned design envelope, since this may be the only possibility to save the mission (as Apollo 13 demonstrated).
There is no simple solution to this issue of defining the boundary of design. The definition will need to depend on the context and the purpose for which it is being made. We suggest, as a pragmatic solution, that the definition should be tailored to identify those who can take specific decisions in the course of the whole design process. This will vary depending on whether we are dealing with consumer products, road infrastructure, cars, medical devices, aircraft, air traffic control systems or complex chemical or nuclear plant. However, in general, we have taken a rather broad view of design to include the design specification and requirements at one end and the instructions and procedures for use at the other. This conclusion also has implications for our definition of "design errors". We should not talk of "design errors" but of errors in a specified step in the design process.

1.2. The significance of design as a causal factor

The answers given to the question of how much influence design has on accidents and incidents varied across the papers, but were typically in the range of 20–60% of accidents having at least one significant or root cause attributed to erroneous design. The answers were strongly dependent on the definition which was given of what 'design error' consisted of, and whether the errors were defined on the basis of accidents, or of errors recognised and corrected in the design process. Attempts to pin this down ranged across the following spectrum:

• Did the system fail to meet its design specification (or even what that specification should have been to match all legal requirements)?
• Could other, safer decisions feasibly have been made during design, which would have prevented the accident?
• Were proposals for redesign made after an accident during use?
• Were errors picked up during the design review processes and corrected by design changes?

We have included the last bullet in the list to emphasise that many design errors are already identified in the design process and its design reviews. This is analogous to the recognition that many human errors in operation and maintenance are detected and corrected by those making them, or their colleagues. This has led human factors experts to concentrate on enhancing these 'natural' error detection and correction mechanisms, a lesson we should learn in the design field also. The other definitions rely strongly on hindsight, particularly the one concerning proposals for redesign after an accident. This is bound to differ from the viewpoint of foresight during the design process. It is clear from these differences that there is at present no standard definition of what a design error is. Hence all studies based on accident analysis are open to large differences of interpretation and the possibility that the more an analyst wants to prove that design is important, the more likely the study is to come up with a large percentage of design errors. We should perhaps take the range of 20–60% as an indication of the size of this possible difference. But we should also remember that the figures coming from accident analysis tell us only of the design errors that escaped the many checks already built into the design and fabrication processes of the relevant technologies. As Taylor points out in his second paper in this issue, designers are already competent error detectors and correctors. This is a feature they share with all humans. The initial error rates in design, measured in terms of failures per decision taken, are much higher than those surfacing during actual use. The reliability of error correction in design has received only marginal research study. We also need to understand better in a qualitative way how, and how well, these current mechanisms for preventing and correcting errors work in order to improve them.
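To see why these natural correction mechanisms matter so much, a minimal illustrative calculation helps (our own sketch with invented numbers, not figures from the papers): if errors occur at some rate per design decision, and reviews independently catch and correct each one with some probability, only the residue surfaces in use.

```python
# Illustrative sketch with hypothetical numbers: how review reliability turns
# a high initial design error rate into a much lower rate surfacing in use.
p_err = 0.05      # assumed errors per design decision (invented figure)
p_catch = 0.98    # assumed probability that reviews detect and correct an error
residual = p_err * (1 - p_catch)
print(residual)   # 0.001 errors per decision, 50 times below the initial rate
```

Even small changes in the catch probability move the residual rate substantially, which is one more reason why the reliability of error correction in design deserves closer study.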
The material on the role of design in accidents presented in this special issue comes from industries with major accident potential such as the process, oil and gas, nuclear and transportation (air, rail) industries. It is clear that there is an overwhelming case for paying strong attention to safety in the design stage in these industries. What we notice in accident cases are the design errors that have slipped through the check and review process until start-up and use, or design that meets the requirements but still involves residual risk. We should assume that design plays a similar or even more significant role in accidents in less hazardous industries, simply because safe design of products and workplaces has probably not received the same degree of attention there.
We need more studies of both the design process and design errors, and the link between the two, in order to make more progress in this area. In the absence of well-founded figures on the scope for design improvement, the interest in safety in design is driven largely by the logical conclusion that systems development begins with design, and so design offers the earliest, and hopefully the cheapest, place to intervene and get it right.

1.3. What is in it for the company?

In this section, we will look more closely into the factors that shape the willingness of the company responsible for designing the product, equipment, process or system to pay attention to safety in design. We cannot assume that a company will strive towards a completely safe design irrespective of the costs involved. Rather, there is a crucial balance between tangible and intangible costs and benefits that shapes the company's decisions (Fig. 1). The extent to which these influence design decisions is to a large extent determined by the frame conditions under which a company operates. Is it designing high profile, high hazard systems or plant for a critical, safety conscious customer, mass products for a distributed, world-wide market, or specialist equipment for a niche market, for example?

Fig. 1. The business case for safety by design.

Among the downsides is the fact that safety imposes additional requirements on the design and design process that may add to costs. This may lead to decreased profit margins or even loss of market share to competitors with less safe but cheaper designs. The customer may also be unwilling to accept safe products that impose constraints on their use. To meet these challenges, companies have to consider safety implications early, rather than having to make expensive and less user-friendly safety add-ons later. Timing of safety input to design is crucial to reduce costs and requires significant organisational skills, an issue that we will address later in this paper.
Many factors speak in favour of safe design. Proof of safe design is increasingly a required ticket to the market. Guarantee obligations, contracts and liability claims increasingly concentrate company management on preventing damage and injury. Ethical considerations and concern for company reputation underline the responsibility of designers to exert their considerable influence to prevent the suffering and waste which can result from design errors and missed safety opportunities.
An issue of increasing significance is the liability of a company for the consequences of accidents involving the company's product, as discussed by Baram in his paper in this issue. The fact that there is strict liability for accidents from some technologies speaks for the paramount importance of design in these cases. However, Baram also points to the requirement to prove that alternative design decisions were feasible as a possible defence by suppliers, accepted in some jurisdictions to temper what otherwise might be seen as a too harsh application of strict liability. His discussion of negligence makes use of the notion of a 'state of the art' design process or design solutions that suppliers should follow. If we cannot test, at the end of the design process, that something is safe, then we can only hold the suppliers responsible for a requirement to use best process and technology. Mandating such a requirement in more jurisdictions might have the effect of forcing more studies to define what such best practice in design is. However, the very identification of the need to adopt a 'state of the art' design process provides a strong argument for the case for safety by design.
The choice as to how strict the control on the design process needs to be will depend on what is at stake. We have seen, in contrasting the design of medical devices (de Mol, this issue; 10.1016/j.ssci.2006.08.003) with that of chemical process plant (Kjellén, this issue; 10.1016/j.ssci.2006.08.012 and Taylor, this issue; 10.1016/j.ssci.2006.08.014), that the balance between the pressure for innovation and the need for application of the precautionary principle can differ dramatically. We would argue, however, that this should be seen as a choice of how strictly to impose safety checks at intervals in the design process, not as a choice whether or not to define and use an explicit design process with specified safety reviews in the first place.
Safe design in this context also means a design that allows and conditions, as far as feasible, safe use across the whole life cycle of the product, from manufacture, construction, transportation and installation, through use, maintenance and modification, to decommissioning, demolition and disposal. While the case for incorporating safety at the design stage is strong, it is not universally accepted, not even by all suppliers, who may seek to limit their liability by pushing the main decisions about safety over to the user. To combat this tendency to limit or diffuse the suppliers' responsibility, it is essential that safety takes its place beside the other criteria, such as profit, business opportunity, quality, etc., in the corporate business case which provides the framework for any design process.

1.4. What is in it for the designer?

The individual designer is also faced with a trade-off between the tangible and intangible costs and benefits of taking safety into account. Many of the categories of costs and benefits that apply at the company level also have their parallel at the individual designer's level. We see in almost all the papers the conflicting objectives in plant or product design that the designer has to reconcile in the design process. It is not possible to design a plant or machine that maximises both safety and all other performance criteria such as production, quality, cost, etc. in parallel. This means that explicit attention has to be paid to the way in which these trade-offs are made. If the influence of any specific decision on safety is not made explicit, there is the danger that this trade-off will be made without thinking, and then usually in favour of the objective which is, or can be, made explicit. Much of the assistance which can be given to designers, and which we discuss in the later sections of this paper, is directed at making these safety implications clear.
At a legal level we see an increasing emphasis on the liability of the designer for incorrect design decisions. However, this liability is limited in most cases, whether under strict liability or tort law systems, to what the designer has control of and can reasonably be expected to do. Unforeseeable consequences are generally excluded, and in many cases it is necessary to show that another, safer design was feasible and not disproportionately more expensive.
At the personal level the trade-off revolves around the two factors of reputation and effort. On the one hand, as Taylor emphasises, designers want to be able to be proud of 'their babies'. They invest a great deal of time and effort into producing a design and seeing it come to fruition. A later history of accidents will spoil that sense of pride. On the other side, many of the papers (e.g. Fadier and De la Garza, Jagtman and Hale, van Duijne et al.) emphasise the complexity and uncertainty involved in understanding and influencing all of the factors relevant for the control of risk in design. By accepting the challenge to increase that control, designers take on a great deal of extra work, which may tax their ingenuity to the limit. We therefore see them using a number of strategies to limit that complexity, such as limiting their scope to meeting pre-defined safety specifications and excluding many forms of (unforeseen) use and misuse from their sphere of responsibility. This is a form of bounded rationality, which we come across also in users of equipment (Reason, 1990). We deal with this topic in more detail in Sections 3.1 and 3.2 below. Above all it is the professionalism of designers that needs to be the driving force. They need to see the challenge of achieving safe design as the ultimate proof of their competence.

1.5. Limitations of design

Whilst we have stressed the positive incentives to pay attention to safety in the design process, we need to admit its limitations also. Clearly not all accidents can be prevented by design. There are some consequences of technology that cannot reasonably be predicted at the design stage, particularly in new technologies using new materials and scientific principles. However, once these have led to accidents, there is a clear responsibility for designers to prevent them in future designs. This emphasises the need for a learning loop in the design process, which stretches both over design generations and within them.
A major dilemma, which we return to in Section 3.1, is the difficulty of defining and predicting at the design stage how a product or system will be used. Users are very creative and make use of the potential of designs to do things which the designer may not have thought of. Suppliers may be very pleased with this phenomenon, as it increases sales. However, this immediately raises the spectre of accidents as a result of this extended use. Our plea is for more attention to be paid to this process of prediction, but we accept that this can never result in 100% foresight about how a design will or can be used.

2. Normative model of the design process

A point of discussion permeating the whole special issue is the question whether there is a generic design process, formulated in terms of functional phases, which applies, or should apply, to all industries and products. If so, the next question is whether the way in which safety is, or should be, integrated into that process is also generic across all of those applications. In Kjellén's paper and Taylor's first paper, the phases found in the high hazard process and oil and gas industries are put forward as candidates for this position. The general tenor of this special issue is that a well-managed design process progresses in phases, and that the management of safety in design would improve if such a design process were to be introduced in each specific industry. Table 1 represents an attempt to summarise some important characteristics from those two papers of a typical design process for complex technical systems involving major accident hazards.
Table 1
The phases of a typical project involving development of complex technical systems and safety management tasks in each phase

Business development
  Objective: Clarify the business case for pursuing an opportunity to develop a new technical system.
  Main safety issues: Are there any showstoppers related to safety (unfamiliar or prohibitive safety hazards, legal constraints, reputation risks)?
  Main safety management tasks: Screening of available information sources.

Feasibility study
  Objective: Clarify the technical feasibility of the project and the possibilities of meeting profitability requirements.
  Main safety issues: Is basic technology adequately proven from a safety point of view? Will it be possible to implement regulatory, corporate and customer safety requirements within acceptable cost limits?
  Main safety management tasks: Benchmarking with similar existing design.

Conceptual design
  Objective: Develop concept alternatives by selecting and arranging building blocks; select the best solution with respect to project objectives.
  Main safety issues: Is the selected concept proven from a safety point of view? Will it meet risk acceptance criteria (explicit/tacit)? Are intrinsically safe solutions adequately implemented?
  Main safety management tasks: Concept risk analysis, design reviews against conceptual safety requirements.

Basic design
  Objective: Optimise basic design, define detailed design requirements and mature design to reduce cost, schedule and quality uncertainties.
  Main safety issues: Are the inherently safe solutions and safety barriers adequately implemented? Are the safety requirements for detailed design adequately defined?
  Main safety management tasks: Risk analyses and design reviews, audits of design organisation.

Detailed design
  Objective: Meet design requirements.
  Main safety issues: Have the detailed safety requirements been adequately implemented? Have suitable documents been made to hand over the design to safe fabrication/use?
  Main safety management tasks: Detailed risk analyses and design reviews, audits of design organisation.

Fabrication, installation, commissioning, start-up
  Objective: Realisation of design, front-end engineering, final checking and test before hand-over to customer.
  Main safety issues: Does design meet the safety requirements? Have design errors and weaknesses been identified and resolved?
  Main safety management tasks: Inspections and testing.
The details may vary depending on the branch of industry, the means of organising projects and stakeholder influence, etc., but we believe that the basic phases in Table 1 are, or should be, found in other industries.
The generic design process and its link to safety is described here largely in functional terms – i.e. what needs to happen in the process to ensure a good design according to all criteria, including safety. When it comes to allocating those functions to people and organisations, we can expect major differences between industries and activities, because of the different organisation of the sector and the incentives and influences which operate within it. What is common, however, is the idea of waypoints at which the safety of the design is checked before moving on to the next phase. This iteration of safety checks, linked into decisions to move to the next design phase, ensures that safety issues are constantly kept in the focus of attention as the design is progressively worked out. We saw in Taylor's papers that it is through inadequate risk analysis at the different design stages that many of the design errors creep in.
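The stage-gate logic described here and in Table 1 can be made concrete in a small sketch. The following Python fragment is purely illustrative and not drawn from the papers; the phase names follow Table 1, while the class and function names and the condensed gate questions are our own hypothetical simplifications.

```python
from dataclasses import dataclass

@dataclass
class Phase:
    """One design phase from Table 1, with the safety questions forming its gate."""
    name: str
    gate_questions: list[str]   # condensed from the 'main safety issues' column

@dataclass
class DesignProject:
    phases: list[Phase]
    current: int = 0

    def advance(self, answers: dict[str, bool]) -> None:
        """Move to the next phase only if every gate question is answered 'yes'."""
        phase = self.phases[self.current]
        unresolved = [q for q in phase.gate_questions if not answers.get(q, False)]
        if unresolved:
            raise RuntimeError(f"Gate failed for '{phase.name}': rework {unresolved}")
        self.current += 1

project = DesignProject(phases=[
    Phase("Business development", ["No safety showstoppers?"]),
    Phase("Feasibility study", ["Technology proven?", "Requirements affordable?"]),
    Phase("Conceptual design", ["Concept proven?", "Risk criteria met?"]),
    Phase("Basic design", ["Barriers implemented?", "Detailed requirements defined?"]),
    Phase("Detailed design", ["Requirements implemented?", "Handover documents ready?"]),
    Phase("Fabrication and start-up", ["Design errors identified and resolved?"]),
])

project.advance({"No safety showstoppers?": True})   # close the first gate
```

The point of the sketch is only that progression is conditional: the gate questions travel with each phase, so safety is re-examined at every transition rather than once at the end.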

2.1. From generic to specific

The papers that make up the special issue range across an enormous spectrum of activities, systems and equipment. These vary greatly in the number of parties that are involved in the whole design process and the way in which these are organised and relate or communicate with each other. At one end of the spectrum there are the tightly controlled, high technology plants and activities such as oil and gas, chemical, nuclear and aviation. Here there are clearly defined customers in very powerful positions to specify the requirements for design, keep the designers and suppliers under constant surveillance and guarantee to a large extent that the conditions of use and the characteristics of the users can be kept within defined bounds. At the other end we have diffuse and far less regulated sectors where many manufacturers are in competition, and the customer/user is the general public, with little or no restriction on competence or conditions of use. Typical of these are consumer products and road transport (both vehicles and infrastructure). Even in the most coordinated and centrally controlled design processes it is striking how complex the process is, with many participants, each often working on only a small segment of the total design. This makes it difficult to put the finger on any one person as 'the designer' who should make better decisions.
The generic principles outlined here therefore need adaptation to the specific sectors, to match the differences in technology, number and competence of players involved in the design process, and the sophistication of the customers.

2.2. Coordinating the distributed design process

The nature of design as a distributed process raises the same sort of concerns as the division of labour characterised by the Taylorian¹ approach to production and assembly line manufacture. This led to problems because no individual participant in the process has the overview of, or the sense of ownership for, the product being made. Such Taylorian production lines only work when there is a strong central planning and control function, which ensures this overview and the necessary communication and optimisation. The same lack of ownership of the total design and the problems of interfaces between the different actors can be seen in the design process. Kjellén and Taylor show how the mature system of design in their high hazard industries tries to solve the problem organisationally, by creating many arenas for the exchange of design and safety information. Other papers, such as those by Fadier and De la Garza, de Mol, and Jagtman and Hale go no further than identifying the problem and its effects and pleading for a solution.

¹ This is F.W. Taylor, father of the production line, not Robert Taylor, writer of two papers in this special issue.

Within the diverse systems with many uncoordinated players, the same issues of responsibility for predicting risks and making choices to control them are played out, but the allocation of those responsibilities, and above all the possibility of checking and enforcing that those responsibilities are carried out, differs enormously. The 'best practice' for coping with these issues is bound to differ across systems also. We enter a plea that there should be more explicit attention to this question in the sectors with less developed design processes. Where there is no existing organisation with a powerful central role in managing the parallel design processes, there is then a task for government in bringing together the players in the design process to define and coordinate their roles.

3. Dilemmas facing the design organisation

3.1. Taking the user perspective into account

A fundamental concern of the design process is the issue of predicting and influencing the conditions and methods of handling and use of what is being designed. This is central to the dilemma of the designer, who has to create, from a situation of uncertainty and freedom of choice, a system or product design that imposes safe limits of use and creates the certainty of how to stay within them. The designer needs to see his/her role not simply as designing safe hard- and software, but as designing safe use. This is a constant process of juggling with different concepts and alternatives contributing to safe design. As discussed by Kjellén in his paper, there needs to be a suitable balance between the use of hardware and software barriers constraining use and reliance on good user interface and user instructions to ensure safe use. Such decisions about design philosophy are inextricably bound up with decisions about responsibility and liability. When designers think of taking the responsibility to restrain use, they normally think in terms of hardware barriers, something they feel well equipped to design, as engineers. However, they tend to be technical optimists, thinking that the barriers will be very reliable and that people will use them. A particularly worrying aspect of van Duijne et al.'s findings was the overwhelming tendency of the designers to think they had solved the safety problems of the gas lamp she was studying and, hence, to want to concentrate in their design on projecting an image of a safe, reliable, robust product to alleviate any fears users might have. Only one subject took the opposite track of wanting to emphasise risk, since it could never be eliminated. Here we see a particular characteristic of designers: being optimistic about risk control because they assume their operating rules will be followed. They do not take account of the trial and error behaviour and differing individual expectations that may lead to completely different behaviour.
Users, on the other hand, often find design restraints tiresome and limiting, and take the responsibility of removing them. User-centred design tries to take account of this aspect of user responsibility as a starting point to produce designs which guide and support correct use, rather than (solely) blocking incorrect use.

3.1.1. Gaps between designer and user models of use


A common theme in the whole workshop and in many of the papers is the significant gap between the use situation as envisaged by the designers and that which actually exists in practice. Fadier and De la Garza concentrate particularly on this gap and portray the actual use situation as a shift in system boundary compared to the limits of use as envisaged by the designer. This is an example of the shift which Rasmussen portrays in his model of 'drift to danger' (Rasmussen and Svedung, 2000). Use legitimates practices which are at or beyond the safe limits provided by design.
Fadier and De la Garza and other paper authors show that the picture that many designers have of the use situation is limited and optimistic. There is a concentration on how the design should be used and less on how it might be used. There is too little attention paid to the implications of operating the equipment in non-nominal situations, outside specified limits, and with parts of the system non-operational or working in degraded mode, despite the fact that this situation may be the reality in complex technologies for significant parts of the time. Designers may underestimate the 'costs' of their operating procedures to ensure safe use, in terms of effort, lost production or quality, and hence not realise how much they motivate misuse. They may also think too little about the maintainability of their equipment, thus leading maintenance staff into having to take excessive risks to get their work done effectively and efficiently. Ignoring all of these complexities of the use situation may be a decision to 'bound rationality' to avoid overload, or it may be a result of ignorance of that reality. Many designers, particularly those working for design bureaus, might not even have visited operational sites using the equipment they design. Their information about use is therefore limited to what they can imagine or see from pictures, or hear from descriptions or feedback from customers.
There is overwhelming evidence that this gap in perception exists. The interesting question is why it persists and what can be done about it. Would more contact between designers and users help matters? van Duijne et al.'s paper gives us some clues. Designers did seem to welcome and use the abstracted information she provided about use, highlighting the problems users have with a product. They were not significantly more inspired by richer data, though they often enjoyed it. However, a significant number paid little attention to the feedback, or misinterpreted it to match their expectations, and the impression remains that their own experience was just as powerful an influence as actual user feedback.
She also reports (see also van Duijne, 2005) that a number of designers rejected the value of user trials, saying that they were too unrepresentative and, in any case, could never be complete, always leaving surprising behaviour undiscovered. Her example is taken from consumer products, where there is no restriction, apart from that imposed by the users themselves, on who interacts where and when with the product. This forms the extreme of unregulated use, contrasting with the relatively strictly controlled use in aviation and the process industry. However, even here it is difficult to get across to (potential) designers the idea that they are not (or should not be) just designing hardware, but should, as far as possible, see their task as designing the behaviour which uses the hardware safely. Taylor also emphasises that designers should be taught much more explicitly and fundamentally that they can and should be designing behaviour. Van Duijne's original results about what determined the behaviour of her subjects changing the gas cylinders on the camping gas lamps and stoves showed how great that influence could be. She documented just how important different aspects of the design were in suggesting particular user behaviour, sometimes correct, but sometimes wrong.

3.1.2. Collecting data on use situations


Acquiring knowledge about use and conditions of use is essential to safe design. As the analysis of design errors showed, a common factor in accidents is the breakdown of a design when used outside its defined limits. The underlying question is then always whether that use could and should have been predicted and allowed for, or prevented in the design. It is essential that a design process provides for and supports efficient arenas of exchange of experience about potential and actual use. These need to be both proactive, projecting new designs into their future use situations, and reactive, feeding back experience of using earlier designs. Assumptions about use conditions and methods need to be made explicit, recorded and regularly checked, so that they can form the measuring stick against which design decisions and changes can be checked.
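One way to make such assumptions explicit, recorded and regularly checked is to hold them as structured records rather than as prose buried in design reports. The sketch below is our own illustration, not a tool described in any of the papers; all names and the example content are hypothetical.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class UseAssumption:
    """One recorded assumption about the conditions or methods of use."""
    statement: str        # e.g. "equipment is operated within specified limits"
    basis: str            # where the assumption came from (standard, user trial, ...)
    last_checked: date    # when it was last confirmed against user feedback
    still_valid: bool     # set False when feedback or accidents contradict it

def needs_review(register: list[UseAssumption], today: date,
                 max_age_days: int = 365) -> list[UseAssumption]:
    """Return assumptions that are invalidated or overdue for re-checking."""
    return [a for a in register
            if not a.still_valid or (today - a.last_checked).days > max_age_days]

register = [
    UseAssumption("Operators follow the written start-up procedure",
                  basis="design specification", last_checked=date(2005, 3, 1),
                  still_valid=True),
]
print(needs_review(register, today=date(2006, 6, 1)))  # flags the stale assumption
```

The design choice is that each assumption carries its own provenance and review date, so the register can serve as the 'measuring stick' the text calls for when design decisions or use conditions change.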
One way of taking use into account is to collect reactive data on use problems and to feed it back to the designer. However, this may fail truly to close the design loop, as there is a considerable time lag in many technologies between the design phase and the point at which feedback becomes available. It is questionable whether the original designer of a product or system will still be involved in the design of similar products when significant user experience is available. Will that person or another be on the receiving end of that feedback, if it comes at all? Hence designers may need to learn most from the lessons of previous generations of equipment or systems. This, however, poses an important problem of abstraction and translation from the use of the old technology to the possibilities created by the new generation.

3.1.3. Specifying use situations and safe use limits


The designer can, and indeed should, specify the limits within which the design can operate safely. But this definition cannot, and should not, be too restrictive. Otherwise the designer simply shuffles off the responsibility onto the user, since the reality will be that use situations will frequently fall outside these limits. Accidents then occur because users have to use the product outside the safe limits envisaged by the designer. It is clear from the papers in this special issue that the designer's inability to foresee the great variety of influences in the user environment is the cause of a significant number of safety problems which otherwise could be solved in the design stage.
Both Fadier and De la Garza, and Jagtman and Hale demonstrate that designers often have a far too restricted vision of what environments their products will end up in. The classification of design errors in the papers by Kinnersley and Roelen and by Taylor also demonstrates that there is a drift between the boundary as defined by the designer and what happens in practice.
This problem is particularly significant when a product is used as an element of a larger system and where there is little or no coordination between the designers of the different elements that work together. The road transport system is even today almost totally lacking in coordination between the designers of vehicles and of infrastructure. The recent advent of intelligent infrastructure and telematic applications linking infrastructure to vehicles may force changes here. This may shift the road traffic system in the direction of the air traffic system, where that interdependency has long existed and is becoming steadily more intimate, with the development of data links and concepts such as free flight. In the process industry we see the most central control over all aspects of the design and its environment, with one principal 'conductor', the plant manager. However, even here the task of ensuring that all influences of the user environment have been anticipated and taken care of in design is not simple.

All this leads to the requirement that the design process should be explicit and transparent about its assumptions concerning situations of use. They need to be written down and, if necessary, updated based on user feedback and accident experience.

3.1.4. Involving the user


The user is the expert on the situations and problems of use. This is exploited in participative ergonomics and user-centred design, where the user has significant influence on design. A similar situation arises in e.g. the process and oil and gas industries, where the client (user company) is monitoring the whole design process and is responsible for commissioning and start-up. Baram makes a related point in his paper about legal liability. He deals with the question of what the designer can and should be made responsible and liable for concerning safe use, and how these responsibilities interact with those of the user. This may be a powerful client commissioning the design (such as is the case for the chemical, nuclear or aerospace company), or, at the other end of the spectrum, the often inexpert consumer purchasing the product on the open market. In between we find the professional user of pharmaceutical and medical devices, of printing presses, etc. Baram makes the argument that the more competent the user, the less liability the designer has (or needs to have) for the product.

3.2. Complexity and bounded rationality

We can envisage the designers' task as one of exerting the maximum influence on all factors that can increase the risk of accidents, and ensuring that their disturbing influence is either eliminated or controlled. This makes it an enormously complex task. We know from studies of decision-making (see e.g. Reason, 1990) that the standard response to complexity is for the decision maker to simplify and leave out aspects of the problem. Reason calls this bounded rationality. Certain influences are ignored as being negligible or uncontrollable, or are simply not taken account of for lack of time or knowledge. So it is with designers. Baram and other authors argue that the designer's responsibility for constraining the use of a product should depend on the competence of those who will use it. The more competent they can be assumed to be, the more the designer can leave the boundary of acceptable use less rigidly defined. The less competent the user can be guaranteed to be, as in the case of consumer products or road transport, the more this boundary needs to be defined and protected against misuse. However, papers such as those by van Duijne et al. and Fadier and De la Garza illustrate designers who consider user behaviour in the 'less competent' sector as essentially unpredictable or uncontrollable, and therefore something that the designer can only try to eliminate (with automation or interlocks), to exclude (with operating rules or warnings) or to wash his hands of (with disclaimers). One of the possibilities which we need to influence in the design process is to try to enlarge the designers' feeling of responsibility for understanding how the product is used, and to see it as a challenge, and not an impossible task, to accomplish robust design that is safe under varying user conditions.

3.3. Safety, a specialist issue or the responsibility of every designer?

Is there a need for safety specialists in the design organisation with a specific responsibility to ensure safe design, or is this responsibility better taken care of by every designer in the project team within his or her discipline?

Underlying much of the discussion at the workshop, and reflected in the papers, is the question of the allocation of responsibility to ensure that safety becomes integrated in the design process. The generic design process set out in Section 2 above describes formalised milestones in the design process, at which, among other issues, the approach to safety and the incorporation of suitable safety provisions are checked and verified. The actual work of meeting the criteria at those checkpoints is, of course, that of the design team. Various options for allocating the two tasks of designing in the safety and carrying out the checks are discussed in this special issue. One extreme alternative is to train the designers so as to make them capable of carrying the main responsibility both for incorporating safety and for checking it at the milestones. The designer would have to resolve, alone, any conflicts between safety and other criteria, with the danger that safety would lose out to the more immediately valued project cost and schedule criteria and the performance criteria of the design object. At the other end of the spectrum is the option to give both tasks to safety and human factors specialists. To do the first task the specialist would need to be a member of the design team. Hence he would likely become insufficiently independent to act as assessor at the milestones. A typical solution, e.g. in the Norwegian oil and gas industry, is therefore to have two sets of specialists, one as member of the design team and the other as independent assessor (see the paper by Kjellén).
The safety and human factors specialists should not withdraw too completely from active participation in the design process. They might be pressured into codifying their knowledge into easy-to-follow criteria or codes, which would not do justice to the complexity and nuances of the interactions between people and hardware. Several papers (e.g. Jagtman and Hale) expressed concern about the over-reliance of many designers on codified safety standards, placed as they are under severe time and cost pressures, which discourage them from seeking deeper understanding. This is certainly a possible danger, but we should see it in the context of the maturity of the design management systems. We may envisage different levels of integration of safety in design. The first is a disregard for safety and a lack of resources devoted to it. The second is the appointment of dedicated safety staff to sort out the safety issues while letting the designers get on with the 'real' job of designing the product. The third and final stage is the integration of much safety knowledge and responsibility into the design functions, including designers with specialist competence on safety issues such as the design of safety systems and man–machine interfaces. When shifting from the second to the third stage, there is a need for much of the basic safety knowledge to be codified in a way that designers can use it. The remaining safety staff may then withdraw to positions of monitor/auditor and expert support, facilitating risk analyses of design and advising on difficult and controversial points. They will also be responsible for updating and improving the design management system and internal design specifications in the light of new experience.
It is clear from this special issue that the task of the designer is a challenging one, which needs all the support it can get from guidance, training, information sources and instruments and tools. The safety and human factors (HF) specialists have the task of providing them. The papers of the special issue give some guidance on what are useful tools, particularly that of Kirwan.

3.4. Diffusion of responsibility: when to tackle safety in the design process?

Design is a complex process, involving many steps, sometimes spread across several companies. These may be customer companies, designers, manufacturers or R&D companies. Such complexity and distributed operation give considerable scope for diffusion of responsibility. It makes it difficult to put the finger on any one person as 'the designer' who should make better decisions. The nature of design as a distributed 'Taylorian' process has led to problems because no participant in the process has the overview of, or the sense of ownership for, the design being made. Different parties may deliberately avoid claiming that overview, for fear that they will be pointed to as holding the ultimate responsibility for failure of the design.

3.4.1. Contradictory incentives


The designer faces contradictory incentive systems. There are, on the one hand, incentives promoting a postponement of crucial safety decisions to later phases or to the end user, including increased likelihood of meeting cost and schedule targets in the current phase and reduced pressure to find relevant information and adequate solutions. There are, on the other hand, conditions that punish designers if essential safety decisions are postponed or not taken at all. They involve increased complexity and costs for safety measures implemented at a late stage in design, or even after the product has been delivered, and the chance that the designer will be held responsible when things go wrong. Typically, the advantages of delaying decisions to consider safety are immediate and certain, whereas the disadvantages are delayed and uncertain.

3.4.2. Legal pressures


Even the most ambitious designer when it comes to safety faces the dilemma of being either too early or too late, but rarely on time. Many safety related decisions depend on detailed knowledge about the design object; knowledge that is not available until design has matured to a certain degree. At that stage, the costs of changes to secure a safe design may be prohibitive. It requires considerable knowledge and skills, and adequate support tools such as risk analysis, for the designer to be able to resolve this dilemma. This calls for the development of communities of practice of designers that share experiences in the art of safety by design (Kjellén, 2004).
The case for considering and managing safety from the very beginning of the design process is therefore a problematic issue. There is a long history of leaving thinking about safety until the installation, commissioning and use phases of a technology. This attitude was encouraged by the way in which legal duties under much legislation were defined as falling primarily on the user of the technology. Shifts in legislation over the last 30 years have altered that view and made it urgent to consider systematically the case for increased attention during design.
Baram presents a convincing argument for defining a generic design process with predefined checkpoints as a criterion for determining whether designers have fulfilled their obligations. Whether the designers should actually go through these steps, or just be able to describe and justify their design with reference to the criteria and tests of the final product, is an issue which requires more debate and examination. A definitive answer would require much more knowledge of what the actual design processes are, as opposed to normative specifications of what they should be. We believe that a standard of 'good design practice', such as described in Section 2 of this paper or found in the EN standard 292 (CEN, 1991), would be a powerful incentive to systematise and make design explicit, and could be used in court cases as a touchstone for assessing state of the art design and whether each party had lived up to its responsibility. Such a standard of good practice would need to reflect differences per technology in the distribution of power and competence between the designer and user. Communication between them needs to be mandated, but the responsibilities for defining the system boundary and use situations, predicting and analysing the risks, testing design and adapting it to actual use conditions will vary per activity, industry or technology. Each industry or company would therefore need to define how these functions were allocated in its own case.
The most difficult design areas to regulate are the ones in which the roles and communication channels between players are the most diffuse and the conflicts between safety and other system objectives are the sharpest. We can point to the medical area as one such, and consumer products and transport systems (particularly road traffic) as others. These require clearer definition of where the chief responsibility for safety will be laid; is that with the user (caveat emptor) or with the designer? Where there are separate designers of different system elements, such as the infrastructure, vehicles and traffic control systems of transport systems, there are particular requirements for mandating some form of collaboration, so that safety problems do not appear out of the cracks where the different sub-systems do not fit each other.

3.4.3. Standards and responsibility


Design standards provide designers with instructions and guidance on how and for what situations to design. They encapsulate the learning from many experts in codified form, usually in the form of solutions to design problems (e.g. the size of holes in mesh guards which will prevent finger access to dangerous parts, or the standard sign for representing a particular danger). Fadier and De la Garza showed that designers place a great reliance on these standards as measuring sticks to judge their design, but also as a way of limiting their responsibility. If something is not in the standard, then they claim not to need to consider it. This reveals a danger of standards which are not complete, or which are not appropriate for a given use situation, or not updated with recent experience. They may lull the designer into a sense of false security. They may be substitutes for thinking about use situations and their challenge to design, instead of a stimulus to do so. Modern standards such as ISO 11064 on ergonomic principles for control centre design and NORSOK S-002 on design of the working environment on offshore installations aim to resolve this issue by defining work processes to be used by the design organisation (ISO, 2000; Standard Norge, 2004). As Kjellén points out in his paper, they involve using a combination of goal-oriented requirements and risk analyses, with the participation of experienced users, to arrive at detailed design solutions that are better adapted to the user situation. This is necessary because it is in just these areas, namely the incorporation of the diversity and complexity of human operators and of use, that the more traditional standards have been weak.

3.4.4. Unexpected scenarios


We now move to the designer's dilemmas when there is less certainty due to unexpected scenarios, which are not postulated in the design stage but materialise in operation. Could and should they have been predicted? Do they represent design failures, or just the essential uncertainty of the future? As Jagtman and Hale indicate, there is always a cut-off which has to be made, at which point we label a potential scenario as not credible. Brainstorm sessions, if suitably structured and manned, can produce almost endless lists of potential failures and deviations. Kirwan (2002) has commented on this problem as a dilemma facing the human factors specialist who postulates a failure path based on potential combinations of technical failure and/or human error. He/she is sometimes countered by management or operators who say that such errors and failures are unthinkable, or could never happen. How many major accidents have been 'unthinkable' until they happened? We need to understand better what moves designers (or other participants in design reviews) to label a proposed scenario as credible or incredible.

3.5. Innovation and safety

There is a fundamental conflict which often underlies the choices, time and care that designers take and the degree to which they see the priority of safety. This concerns the issue of how much value is placed on innovation per se, and to what extent some formal or informal precautionary principle applies to the design process. De Mol argues that the incentives in the design process for pharmaceutical products and medical devices favour innovation too much, at the cost of careful and safety-conscious design and testing. He considers that these incentives apply both at the level of the companies producing the products (the profit motive) and at the level of the customer (the drive for distinction among doctors and surgeons and desperation for a cure among patients). He sees the balance tending too much against safety, partly because of the emphasis on innovation as the only solution to some illnesses and deformities, which would otherwise be hopeless cases. In such cases anything is considered to be better than nothing.
This is perhaps the most explicit example given in the special issue of these conflicts, but they are also to be found to a greater or lesser degree in the other systems considered; the technology push of electronics in the automobile industry, the incentive for production in the chemical and printing industries, and for increased traffic density and aircraft performance in the air traffic industry, can all conflict with safety. This conflict needs to be seen as the background to, and the reason for, the controls placed on the design process in all the examples cited in the special issue. It pervades the legal discussion in Baram's paper, with the formulation of laws and regulations and the acceptance of different defences against liability playing out the shifting positions between a highly restrictive brake on innovation (epitomised by the precautionary principle) and the liberal acceptance of 'earn as you learn', as de Mol phrases the trial and error process which characterises some other industries. It seems understandable and defensible that the balance between innovation and precaution should be different in different technologies, since the public good that can result from the innovations also differs, as do the consequences of the accidents that may result from too unbridled an innovation. We can only plead for transparency in arguing what is the correct balance for a given activity.

4. What is the scope for improvement?

Analyses of accident causes show a varying, but significant, contribution from design. About half of the accidents in the high-hazard technologies might be prevented, or made less likely, if design were done perfectly. This is not to underestimate or undervalue the enormous contribution that designers already make to safety, merely to underline the value of paying more attention to the contributions to safety from design, and the merits of improving the design process.
In a number of industries there is a relatively mature design process, incorporating
safety, which performs well although further improvement is still possible. We believe that
the generic principles from that mature process can be an inspiration and source of learning for other industries and activities. The papers in this special issue show that each design process studied has its blind spots. Suitable adaptation of approaches from other areas can fill these in. In some instances careful thought and ingenuity may be needed to adapt the generic principles to the particular organisation and environment of an industry, such as medical equipment or consumer products. This is because an essential part of improving the design process is precisely to see it as an explicit process with well-defined milestones. A certain minimum degree of coordination and communication between often disparate and distributed groups is needed at each milestone, as well as timely inputs of information, in order to take appropriate safety decisions. The use of the array of design support tools and methods needs to be planned and coordinated throughout this process. Above all, attention to safety in design requires great transparency in making and documenting design decisions and the assumptions on which they are based.

4.1. Operational input to design

The single most important issue in improving the design process from a safety point of view is how to ensure operational input. We can summarise what we would like designers to take account of, in addition to 'correct use' in defined nominal situations (a sketch of how these points might be encoded in a review checklist follows the list):

• The various use situations, including transportation of the equipment, installation, start-up, maintenance and cleaning, and the handling of disturbances.
• The inevitability of trial and error behaviour on the part of the user.
• The reality of operating with degraded and partially non-functional equipment.
• The importance of supporting users in developing a realistic perception of the risks of using the equipment and of the consequences of transgressing the limits of use assumed in the design basis.
• The possibility that users will remove or make inoperative defined safety barriers, for reasons which may be legitimate (coping with unexpected failures or situations) or not (protecting production or quality at the expense of safety).
• The role that trust, or mistrust, of equipment plays in determining whether and how it will be used.
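
As an illustration only (not a tool proposed in any of the papers), such points can be encoded as a machine-readable design review checklist, so that each milestone records explicitly which use situations have been considered and on what grounds. All names in the sketch below are hypothetical, and Python is used purely for concreteness:

    from dataclasses import dataclass
    from enum import Enum, auto

    class UseSituation(Enum):
        # Use situations beyond 'correct use', following the list above
        TRANSPORT = auto()
        INSTALLATION = auto()
        START_UP = auto()
        MAINTENANCE_AND_CLEANING = auto()
        DISTURBANCE_HANDLING = auto()
        TRIAL_AND_ERROR_USE = auto()
        DEGRADED_OPERATION = auto()
        BARRIER_REMOVAL = auto()

    @dataclass
    class UseReviewItem:
        situation: UseSituation
        considered: bool = False
        justification: str = ""   # why the situation is judged covered or not applicable

    def open_items(checklist):
        # Situations not yet explicitly considered at this milestone
        return [item.situation for item in checklist if not item.considered]

    checklist = [UseReviewItem(s) for s in UseSituation]
    checklist[0].considered = True
    checklist[0].justification = "Lifting points and transport locks specified (hypothetical)."
    print(open_items(checklist))   # everything still open except TRANSPORT

The point of such a structure is not the code itself, but that 'not yet considered' becomes a visible, reviewable state rather than a silent omission.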

The papers in this special issue point to various tools to support designers in considering these different issues; see e.g. the papers by Kjellén, by Fadier and De la Garza, by Jagtman and Hale, and by van Duijne et al. The experience carriers in the form of design standards are central here. When functioning well, they represent the accumulated experience of designers and users and are written in a language that designers understand. As pointed out earlier, however, design standards may be conservative and seldom represent the most recent user experience. We need processes that ensure the timely updating of design standards. This is not feasible for industry standards, which undergo extensive and bureaucratic revision processes. Rather, the individual design house or customer must develop internal design standards, which can be kept up to date with the most recent user experience, as a complement to the industry standards. These must be accompanied by efficient work processes for experience exchange between design and use, for example via Hazop or job safety analysis (Kjellén, 2002).
Design standards may also lull designers into thinking that they do not need to consider
use situations further. If this problem is to be resolved we have to look to a combination of
persuading designers of the value of direct operational input and the need for transparency in justifying and documenting design assumptions regarding use. The safety expert in the design team may play a central role here, serving as a communication link between designers and users. The safety expert's toolbox contains various risk analysis and design review methods, such as job safety analysis, which offer an arena for direct experience exchange between designers and users. When used and documented correctly, these methods help designers and users communicate directly about how the equipment will perform under varying use situations.

4.2. Documenting design and design memory

Design is a process which must contain a learning loop and a memory of its successes and failures. Whilst it is possible to do better at predicting use situations and problems than is now done, this will never be a perfect process. Hence designers have to learn from past accidents and errors. Incident and accident databases can assist in this, but they need to be much more oriented to designers and much more accessible to them. They also need a translation process to generalise from errors with old technology to the implications for designing its replacement.
An important adjunct to the hazard logs and incident databases is documentation of designs. This is not a strong point of designers, but without a record of why decisions in design were taken as they were, it is difficult to learn how they could have been made better. This design memory is also needed during the design process, as it progresses from concept to detailed design decisions. Without such documentation it is hard to assess the implications of changing aspects of the design at a later stage.
Finally, the design process needs a running 'big picture' of the whole system being designed (Kirwan, this issue; 10.1016/j.ssci.2006.08.011), so that all those involved can see where their part fits and those responsible for integration can track down interface problems between the various sub-systems and parts being independently designed.
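
One concrete form such a design memory could take is a structured decision record that travels with the design, capturing what was decided, why, and under which assumptions, so that later changes can be checked against the original reasoning. The sketch below is our own illustration of the idea, not a tool described in the special issue; all names are hypothetical:

    from __future__ import annotations
    from dataclasses import dataclass
    from datetime import date

    @dataclass
    class DesignDecision:
        decision_id: str
        description: str              # what was decided
        rationale: str                # why it was decided this way
        assumptions: list[str]        # conditions under which the decision holds
        affected_parts: list[str]     # sub-systems or interfaces the decision touches
        decided_on: date
        superseded_by: str | None = None   # set when a later decision revisits this one

    def decisions_touching(memory: list[DesignDecision], part: str) -> list[DesignDecision]:
        # The still-valid decisions a proposed change to 'part' must respect
        return [d for d in memory
                if part in d.affected_parts and d.superseded_by is None]

A change proposal late in detailed design would then start by retrieving the still-valid decisions touching the affected part and re-examining their recorded assumptions, rather than rediscovering the reasoning from scratch.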

4.3. Understanding how designers think about safety

We have seen that in industries with major accident potential and with a mature design process, designers already pay great attention to safety and naturally check their designs against safety criteria. In this way they already pick up many potential design errors and correct them before they get frozen into the fabricated product. This natural error correction mechanism is a central postulate of all current human error theories. The task in supporting that process is therefore to help minimise error making, but also to enhance error detection and correction. The former consists of training and informing designers and providing them with design tools to avoid errors. The latter consists of formalising the milestones at which safety checks are carried out and providing the tools to conduct those checks.
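
That formalisation of milestones can be sketched very simply: a milestone becomes a gate which the design may only pass when every registered safety check returns no findings. This is a minimal sketch under our own assumptions, not a prescription drawn from the papers; the check shown is hypothetical:

    from __future__ import annotations
    from typing import Callable

    # Each check inspects the design state and returns a list of findings;
    # an empty list means the check passed.
    SafetyCheck = Callable[[dict], list]

    def pass_milestone(design: dict, checks: list[SafetyCheck]) -> bool:
        # Allow the design to proceed only when every registered check passes
        findings = [f for check in checks for f in check(design)]
        for f in findings:
            print("Open finding:", f)
        return not findings

    # Hypothetical example check: every identified hazard must have a barrier
    def all_hazards_barriered(design: dict) -> list:
        return [f"hazard '{h}' has no barrier"
                for h in design.get("hazards", [])
                if h not in design.get("barriers", {})]

    ok = pass_milestone({"hazards": ["overpressure"], "barriers": {}},
                        [all_hazards_barriered])   # prints the open finding; ok is False
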
Design is often a complex process involving several stages, from conceptual design to the engineering of design changes during installation and commissioning. In this process the design gets passed to different people, who do their bit and pass it on to others for integration. This requires good communication and, ideally, good recording of the history of design decisions. As Taylor has shown in his second paper, these aspects of documentation and communication have not, in the past, been the strong points of designers. They are satisfied once they have reasoned out their own decisions, and do not find it interesting or necessary to record how and why they arrived where they did. Designers therefore need efficient tools to document the history of decisions and assumptions, for communication further down the design process and even through into the stages of manufacture, installation, commissioning, start-up and use. This will help the various actors to check performance against the reasoning that led to the original choices.
We also need to take account of the fact that designers are people with their own goals and objectives, which will never be entirely subordinate to the goals of their employers. Any support offered to them must take account of these personal goals. It needs to speak to the pride in performance which designers have, but must also acknowledge the role of their specialist knowledge and experience in furthering their own careers. Protecting this knowledge can sometimes lead to hiding behind the myth of an 'unfathomable creativity', which hinders the free exchange of design experience.
To support designers in their task of incorporating safety into design, we must link in to the way designers think and work; otherwise the transplant will be rejected. However, the vast majority of the paper authors in this special issue are safety and human factors specialists whose task is to influence, steer and monitor the design process. Hence almost all of the papers are written from that point of view. The large gap in the special issue, but also in our knowledge in this area, concerns what actually goes on in the heads of designers as they design and, in particular, how safety is represented and considered in their mental processes (Wilpert, this issue; 10.1016/j.ssci.2006.08.016). There are no studies of the mental processes and communication patterns of designers or design teams in this special issue, nor do we know of any in the safety literature. Taylor's and van Duijne et al.'s papers lift this veil to a small extent, whilst Wilpert's paper provides a first structuring of what we need to know in this area. We make a strong plea for such studies to be undertaken. Meanwhile we must make progress using the insights we do have.
In particular we must help designers modify their views of users and use situations, so that they are not so optimistic about technological solutions and so pessimistic about those involving people using the technology. This means providing more insight into the strengths and weaknesses of human operators (and maintainers), into means of using design triggers and affordances to guide use, and into the importance of operators' trust in technology if they are to use it correctly. This is all needed to give designers mental models against which they can test their designs in thought experiments, to supplement actual user trials and design reviews.
Additionally, we need to help designers develop their own 'safety culture', so that they take a reasonable share of ownership of safety in their designed artefacts and systems. Perhaps the best way to deal with this is to encourage designers to think about safety from the start, even in the very early design or R&D phases, as shown in the paper by Kirwan. This can then lead to a safety policy for designers, from which can flow safety practices that are not too constraining, but which nevertheless ensure a high level of safety in the resultant 'product'. The best situation is not when designers are merely answering safety questions, but when they are themselves asking such questions.

4.4. Defining safety and how to achieve it

It has not been our aim in this special issue to go into the details of exactly what safety measures should be incorporated into the different types of technologies. Such discussions
and recommendations can be found in textbooks of ergonomics and in the extensive professional literature on preventive measures. Our emphasis is on how those detailed solutions get considered and chosen: what prompts the designer to think that measures are necessary and appropriate? Does the codification of safety measures into standards help the designer and answer the question 'how'?
What we are looking for is that the designer has a coherent and systematic way of considering possible safety problems and how to avoid them. The approach taken by Kjellén, among others, is to define two different approaches to safety in design: one is to avoid user errors and misuse through a human-centred design approach; the other is based on the notion of barriers that prevent human and technical errors from escalating into accidents with severe consequences. Both approaches have a long history in safety science. The different industries with major accident potential represented in this special issue tend to select one of the two as their preferred method of achieving safety in design, with the aviation industry focusing primarily on human-centred design and the process and oil and gas industries being dominated by barrier thinking (see the papers by Kjellén and Kirwan).
The concept of preventing disturbances from escalating into serious accidents through the use of barriers has a long history in the engineering disciplines; its scientific history dates back to Haddon (1966), who systematised the concept into different strategies for accomplishing safety. Such a concept is important for the designer, because it gives a systematic way of thinking about the task of implementing safety in design. As discussed by Kjellén, prescriptive design standards on safety usually present the designer with solutions to safety problems in the form of requirements on the use and dimensioning of barriers, such as the effectiveness of brakes, means of relieving pressure in vessels in case of a fire, methods of guarding moving machinery parts, etc. These fit well with the engineers' way of thinking but, as has been discussed earlier in this paper, they can lull designers into a false sense of security: that they only have to meet the prescribed design criteria and all will be well.
Prescriptive requirements must be complemented with goal-oriented requirements and a scenario way of thinking, as embedded in several of the risk analyses prescribed in modern design standards. This approach requires the designer to envisage all of the possible scenarios which could result from the proposed design concept, to understand the ways in which they could arise and lead to uncontrolled energy release, and to devise suitable barriers to block that development. If the barriers are conceptualised as consisting of hardware, behavioural and procedural actions, or a combination of these, the designer has potentially a complete vocabulary and conceptual model with which to structure the search for suitable preventive, recovery and mitigation measures (see also Schupp et al., 2006).
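
To make that vocabulary concrete, the two classification axes named above (what a barrier consists of: hardware, behavioural or procedural; and what it does: prevention, recovery or mitigation) can be encoded so that a design review can ask mechanically where the defences for a given scenario are thin. The sketch below is our illustration only, not drawn from Schupp et al. (2006) or from any particular standard:

    from __future__ import annotations
    from dataclasses import dataclass
    from enum import Enum, auto

    class BarrierElement(Enum):
        HARDWARE = auto()      # e.g. a pressure relief valve
        BEHAVIOURAL = auto()   # e.g. an operator detecting and intervening
        PROCEDURAL = auto()    # e.g. a permit-to-work requirement

    class BarrierFunction(Enum):
        PREVENTIVE = auto()    # stops the scenario from developing at all
        RECOVERY = auto()      # brings a deviating scenario back under control
        MITIGATION = auto()    # limits the consequences once energy is released

    @dataclass
    class Barrier:
        name: str
        elements: list[BarrierElement]   # barriers often combine elements
        function: BarrierFunction
        scenario: str                    # the accident scenario it is meant to block

    def missing_functions(barriers: list[Barrier], scenario: str) -> list[BarrierFunction]:
        # Barrier functions not yet provided for a given scenario
        present = {b.function for b in barriers if b.scenario == scenario}
        return [f for f in BarrierFunction if f not in present]

A review question then becomes concrete: for each credible scenario, which functions are missing, and which barriers rely solely on behavioural elements?
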
Whilst this scenario way of thinking is fairly well accepted in the process and oil and gas industries, it is still novel for many other technologies. The scenario approach relies on stimulating the designer, and/or the design review team, to think creatively about what might go wrong. Analogy can be used, as can feedback from previous designs and their failures, but the process is aimed essentially at making the designers think and challenge their assumptions and design choices.
The main area in which the barrier approach needs more development and systematisation is in the way it treats behavioural and procedural actions. The engineers who developed the concept have applied it most rigorously to hardware. Behaviour is often seen as such an unreliable method of achieving safety that it is not even considered as a valid element of a barrier. Yet it is just this behaviour which can be the saviour of systems whose
hardware has failed in unexpected or unpredicted ways, or which can undermine the functioning of hardware that was thought to be highly effective, because the operators in the system had other (often quite legitimate) ideas about how it was functioning, which led them to disable it.
It is thus necessary to see the two approaches to safety in design, the human-centred approach and the barrier approach, as essentially complementary. This places a heavy mental workload on designers, who need "split brains", i.e. the ability to design from a human and a hardware point of view in parallel. We need risk analysis methods that are better than those existing today at supporting designers in sorting out the vital issues from both a physical and a behavioural point of view, and at challenging their traditional design assumptions and choices.
This special issue does not present a clear-cut approach to arriving at a safe design. Rather, it provides the reader with more questions than answers. We therefore end the special issue with a list of questions that we feel need further study to reduce the gaps in our knowledge.

4.5. A research agenda

A. What is the scope for safety improvements through design in various industries?
   i. What are the root causes of accidents, and what percentage of accidents can be prevented or mitigated through changed design? What is the role of design errors in accidents, i.e. of design not meeting specified requirements? What is the role of violations of design assumptions regarding user behaviour?
   ii. What are the improvement potentials of alternative strategies for safety in design? What is the scope for improvements in probability- and consequence-reducing measures? Is safety best accomplished through a fail-safe approach or through a design that promotes learning and understanding? Is it possible to provide a safe design that also meets customer needs regarding user friendliness and satisfaction, productivity, and cost-efficiency?
B. What are the scope and role of legislation in providing safe design?
   i. How do market mechanisms function in promoting safe design? What is the role of product safety legislation? What is the significance of product certification as a means of penetrating the market?
   ii. How do the different regulatory regimes on machinery safety (the OSHA standards in the USA and the EU Machinery directives) compare regarding safety results, costs and user satisfaction?
C. How to ensure that safety is adequately considered in the design process?
   i. Is safety best accomplished through a structured model of decision-making in design? How does this compare across industries? What are the important safety considerations and check-points at each milestone of such a model? What are the roles of the different actors (designers, client, users, authorities, safety experts) in promoting safe design? How does the structured design process in the different safety standards, such as the Machinery directives and the ISO/EN design standards (EN 292/1050, ISO 11064, IEC 61508), perform?
   ii. What are the conditions for accomplishing safe design in different industries: in industries with major hazards (process, offshore, nuclear, etc.)? In manufacturing industry? In the transportation sector, with different actors responsible for the design of their sub-systems (e.g. car manufacturers/road authorities; airplane manufacturers/aviation agencies)? In industries producing consumer products?
D. How do designers tick? What is the best way to support them in integrating safety in design?
   i. What is the designer's perception of the risks associated with the "safe use" of the design object, and of their own role in controlling them? How do they perceive the split of responsibility for safety between designer and user? What are the limits of design: what can we reasonably expect designers to take into account and influence? What are the means to influence how designers think about safety?
   ii. How to accomplish feedback of operational (user) experience to the design community? How do designers regard the value of user experience, and how do they respond to it? How to translate user experience into the designers' language? What information do users trust, and why? How do users understand the intention of a design, and how can that conception be fed back to the designers? How to involve users in the design process? What is the value of using simulators and prototypes to model and demonstrate user behaviour and potential accident scenarios with a new design?
   iii. How do design errors emerge and progress through the design process? How do existing methods for tracking and correcting errors perform, and what is the scope for improvements?
   iv. How do existing methods of risk analysis perform, and what is the scope for improvement? How to analyse and simulate different uses, including transgressions of assumed use? How to consider safety from a life-cycle perspective? How to analyse and simulate the development of disturbances, and how to model their consequences? How to identify and evaluate possible accident paths in well-defended systems? How to analyse new and emerging technologies where operational experience is lacking? How to assess the wider human factors and organisational issues of socio-technical systems at the design stage (health, psycho-social working environment, user satisfaction)?
   v. What is the optimal use of safety/human factors experts in design: as designers, as reviewers and verifiers, or as definers of premises? What are the competence needs of designers and of safety/human factors experts respectively?
   vi. What is the scope for improvements through software tools which keep track of design decisions and rationale? Through software tools that reduce and manage the complexity of multiple interacting influences? Through databases of incidents and other user experience? Through databases of design solutions?

References

Blanquart, J., 2003. Design for safety, and the way back. Paper to the New Technology and Work Workshop on Safety and Design, Blankensee.
CEN, 1991. Safety of machinery – basic concepts, general principles for design – Part 1: Basic terminology, methodology. European Standard EN 292-1:1991, Brussels.
European Council, 1989/98. Machinery. Council Directives 89/392/EEC, 91/68/EEC and 93/44/EEC, Brussels.
Haddon Jr., W., 1966. The prevention of accidents. In: Preventive Medicine. Little Brown, Boston.
ISO, 2000. Ergonomic design of control centres – Part 1: Principles for the design of control centres. ISO Standard ISO 11064-1:2000, International Organization for Standardization (ISO), Geneva.
Kirwan, B., 2002. Soft systems, hard lessons – strategies and tactical approaches for the integration of human factors into industrial organisations. In: Wilpert, B., Fahlbruch, B. (Eds.), System Safety – Challenges and Pitfalls of Intervention. Pergamon, Oxford.
Kjellén, U., 2002. Transfer of experience from the users to design to improve safety in offshore oil and gas production. In: Wilpert, B., Fahlbruch, B. (Eds.), System Safety – Challenges and Pitfalls of Intervention. Pergamon, Oxford.
Kjellén, U., 2004. Improving knowledge sharing and learning in an organisation of safety, health and environmental project engineers. In: Andriessen, J.H.E., Fahlbruch, B. (Eds.), How to Manage Experience Sharing? From Organisational Surprises to Organisational Knowledge. Elsevier, Amsterdam.
Rasmussen, J., Svedung, I., 2000. Proactive Risk Management in a Dynamic Society. Swedish Rescue Services Agency, Karlstad, Sweden.
Reason, J.T., 1990. Human Error. Cambridge University Press, Cambridge.
Schupp, B.A., Hale, A.R., Pasman, H.J., Lemkovitz, S.M., Goossens, L.H.J., 2006. Design for safety: systematic support for the integration of risk reduction into early chemical process design. Safety Science 44 (1), 37–54.
Standard Norge, 2004. Working Environment. Norsok Standard S-002, Lysaker.
van Duijne, F.H., 2005. Risk perception in consumer product use. Ph.D. thesis, Faculty of Industrial Design Engineering and Faculty of Technology, Policy and Management, Delft University of Technology, Netherlands.
