H7 SafetyHFE


SYSE-812 Human Factors Engg

Dr. Adnan Maqsood

➢ Safety in Human Factors Engineering


Topics of the Course SYSE-812
Handout Contents
1 Introduction to HFE
2 Human Centric System Analysis & Design
3 Investigation Techniques in HFE
4 Affective Design in HFE
5 Cognitive & Mental Workload Analysis
6 Physical Workload Assessment
7 Safety in HFE
8 Job Satisfaction
9 Social Implications in HFE
10 Future of HFE

2
What is an Accident?
◼ Something without apparent cause, unexpected,
unintentional act, mishap, chance occurrence, act of
God
◼ Chen (1972). “An error with sad consequences”.
Implies human error
◼ Arbous and Kerrick (1981). “Unplanned event in a
chain of planned and/or controlled events.” Implies
sequential development
◼ Schutzinger (1954). “Resulting from the integration of
a constellation of forces.” Implies mechanical or other
forces

3
What is an accident?
◼ Haddon (1964). “Occurrence of an unexpected
physical or chemical damage to living or non-living
structures.” Implies unexpected event
◼ Suchman (1961). “It is doubtful that any single
definition will cover all types of events or interests.”
Generally, to qualify as an accident, there should be:
❑ 1. Low degree of expectedness
❑ 2. Low degree of avoidability
❑ 3. Low degree of intention
❑ 4. Quick occurrence

4
Myths, Misconceptions & Problems in Safety
Analysis

◼ 1. Semantic confusion: A drops something on B, then B has an accident
◼ 2. Accidents happen to other people – they are accident prone – I am not. This means that safety propaganda or safety programs at work have little effect

5
Accident Statistics from UK
Could be any country

Place            Deaths   Serious Injuries   Slight Injuries
Home             7,561    120,000            1,500,000 (est.)
Road             6,810    88,563             253,835
Rail             216      920                11,570
Aircraft         147      ?                  ?
Water transport  158      ?                  ?
Factory          628      ?                  11,805 (3+ days away)
Farm             136      ?                  8,945 (3+ days away)

6
What do we learn from this?
◼ Home vs. road, and road vs. work
◼ Compare to Heinrich's theory, stated for aircraft mishaps: out of 330 mishaps, 300 produce no injury, 29 produce minor injuries, and 1 produces a major injury
◼ It is difficult to get data with high reliability. The number of deaths is usually somewhat more accurate – but it depends on the country/culture
◼ What about trends? Technology brings its own problems. In 1870, 8% of accidents in the UK were traffic accidents – today 40%. Powered hand tools, nuclear power plants, etc.
◼ Society matures with time. In general, the trend is downwards, cf. Smeed's Law
7
Smeed’s Law (1972) – Revalidated several times

8
Smeed’s Law (1972) – Revalidated several times
◼ Increasing experience with greater motorization
◼ The more vehicles, the fewer miles driven per vehicle
◼ Improvements in legislation, roads, and vehicles
◼ In developing countries, the drivers get more
experienced over time
◼ Social protest regarding high death rates
◼ The dynamics of these factors are unknown. But it is clear that large-scale actions, such as the switch from left- to right-hand traffic in Sweden (1967), improved traffic safety in the first year
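For reference, the commonly quoted form of Smeed's relationship (not shown on the slides) links annual road deaths D to the number of registered vehicles n and the population p:

    % Smeed's Law, with the commonly quoted constant and exponents
    % D = annual road deaths, n = registered vehicles, p = population
    D = 0.0003 \, (n p^{2})^{1/3}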

9
Conceptual Models of the Accident Process
◼ 1. Chain of Multiple Events
◼ Each accident is the result of a series of events. No single
cause exists – many factors influence the accident.
◼ The probability p of an accident is a function of several different variables: p = f(x1, x2, …, xn); a minimal sketch of such a multi-factor model is given at the end of this list.
◼ This model is also used in epidemiological models, see below.
◼ 2. Epidemiological Model
◼ Originated from the study of disease. (Water supplies and
cholera in London).
◼ The host (accident victim) is described in terms of age, sex,
economic status, intelligence, behavior, etc. The agent (injury
deliverer) is described in terms of type, potential hazard,
method of use, etc. The environment is described in terms of
the effects on the host and agent: e.g. temperature, noise,
social climate.
◼ Useful for classifying accidents, but is not so helpful for
analyzing cause and effect
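As a purely illustrative sketch of the multi-factor function p = f(x1, …, xn) mentioned above, the snippet below combines several contributing factors through a logistic link; the factor names, weights, and bias are hypothetical, not taken from the course material.

    # Illustrative only: operationalising p = f(x1, ..., xn) as a logistic
    # combination of contributing factors. Factor names, weights and bias are
    # hypothetical, not taken from the course material.
    import math

    def accident_probability(factors: dict[str, float],
                             weights: dict[str, float],
                             bias: float = -4.0) -> float:
        score = bias + sum(weights[name] * value for name, value in factors.items())
        return 1 / (1 + math.exp(-score))   # logistic link keeps p in (0, 1)

    p = accident_probability(
        factors={"fatigue": 0.7, "poor_visibility": 1.0, "worn_tyres": 0.3},
        weights={"fatigue": 1.5, "poor_visibility": 2.0, "worn_tyres": 1.0},
    )
    print(f"p = {p:.3f}")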
10
Epidemiology Origin
◼ Investigation of Cholera Epidemics in London in 1855

Water Company           Number of Houses   Deaths from Cholera   Deaths per 10,000 Houses
Southwark and Vauxhall  40,046             1,263                 315
Lambeth                 26,107             98                    37
Rest of London          256,423            1,422                 59
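The right-hand column of the table is simply the death count scaled per 10,000 houses; a minimal sketch of that arithmetic, using the figures from the table above:

    # Deaths per 10,000 houses for each water company,
    # using the house and death counts from the table above.
    data = {
        "Southwark and Vauxhall": (40_046, 1_263),
        "Lambeth": (26_107, 98),
        "Rest of London": (256_423, 1_422),
    }
    for company, (houses, deaths) in data.items():
        rate = deaths / houses * 10_000
        print(f"{company}: {rate:.0f} deaths per 10,000 houses")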

11
Conceptual Models of the Accident Process
◼ 3. Energy Exchange model
❑ Injuries produced by energy exchange: e.g. mechanical,
chemical, thermal, electrical, etc.
❑ For example: a blow from a moving object crushed a passenger's leg in a car.
❑ This concept is a bit naïve, since all physical events involve energy exchange.
❑ It does not help much in understanding causation.
❑ But the classification can be useful to suggest barriers
against accidents

12
Conceptual Models of the Accident Process
◼ 4. Behavioral Models
❑ A. Risk-taking models: Whenever a decision is made,
it is affected by the degree of risk. Risk taking is affected
by the amount of uncertainty and the amount of danger.
The assumption is that those taking higher risks have
more accidents. But sometimes people are not aware of
risks at all.
❑ B. Accident Proneness: Proposes that some persons
are more liable, due to their personality, to have more
accidents. There has been a tremendous amount of
research. The notion of accident proneness has proven
not to be useful
❑ C. Concept of Overloading: Overload due to information, etc. The aim is to match environmental requirements with operator capabilities
13
Conceptual Models of the Accident Process
◼ 5. Systems Safety
❑ Safety is a systems problem and the person must be
understood in the context of the total system
[Diagram: predisposing factors (equipment factors, task factors, environmental factors) lead to precipitating factors; these produce an operator failure or a failure of part of the system; depending on the response to the failure, the outcome is either an accident or an accident avoided.]

Example of a systems approach to accident analysis. There are predisposing factors such as worn tires, wet road, and glare. These can lead to precipitating factors and eventually an accident.

14
Conceptual Models of the Accident Process
◼ 6. Combined Models. Surry classified the process by
analyzing a series of questions

Predisposing Characteristics   Situational Characteristics   Accident Conditions
Susceptible host               Risk taking                   Unexpected
Hazardous environment          Appraisal of hazard           Unavoidable
Injury-producing agent         Margin of error               Unintentional

15
Ramsey’s Model

◼ Old lady sees a water puddle when crossing the road
◼ She recognizes the slipping hazard
◼ She decides to avoid the puddle
◼ But she does not step to the side quickly enough
◼ She slips and falls!

16
Human Error Classification Scheme. Rouse (1983)
◼ 1. Observation of System State: Incorrect reading of appropriate state variables; erroneous interpretation of correct readings; failure to observe a sufficient number of variables; observation of inappropriate state variables
◼ 2. Choice of Hypothesis: Hypothesis does not functionally relate to the variables observed; hypothesis could not cause the values of the state variables observed; better hypotheses could have been formulated
◼ 3. Testing of Hypothesis: Hypothesis not tested.
Stopped before reaching a conclusion; Reached wrong
conclusion; Considered but discarded correct
conclusion;

17
Human Error Classification Scheme. Rouse (1983)
◼ 4. Choice of goal: Goal not chosen; insufficient specification of goal; choice of a counter-productive goal
◼ 5. Choice of procedure: Procedure not chosen.
Choice would not achieve goal. Choice would achieve
incorrect goal; Choice unnecessary for achieving goal
◼ 6. Execution of procedure: Unrelated inappropriate
step executed; Required step omitted; Unnecessary
repetition of required step; Unnecessary step added;
Steps executed in wrong order; Step executed too
early or too late; Control in wrong position or range;
Stopped before procedure complete
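As an illustration only, Rouse's six categories can be carried around in analysis code as a small enumeration; the class and member names below are my own labels for the categories listed above.

    # Rouse's (1983) six top-level error categories, encoded as an enum so that
    # incident records can be tagged consistently (labels are illustrative).
    from enum import Enum, auto

    class RouseErrorCategory(Enum):
        OBSERVATION_OF_SYSTEM_STATE = auto()
        CHOICE_OF_HYPOTHESIS = auto()
        TESTING_OF_HYPOTHESIS = auto()
        CHOICE_OF_GOAL = auto()
        CHOICE_OF_PROCEDURE = auto()
        EXECUTION_OF_PROCEDURE = auto()

    incident_tag = RouseErrorCategory.EXECUTION_OF_PROCEDURE  # e.g. a required step omitted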

18
Human Error
◼ Human error is the primary cause of 60-90 percent of major accidents. Doctors and nurses make on average 1.7 errors per patient
◼ Thirty percent of command selections in word processing are erroneous (Card et al., 1980)
◼ But many of these errors are the results of bad system
design and bad organization rather than irresponsible
actions
◼ 1. There are many reasons for errors:
❑ Poor discriminability
❑ Memory lapses
❑ Communication breakdown
❑ Biases in decision making
❑ Selection of compatible, but incorrect response
19
Human Error
◼ 2. Speed-Accuracy Trade Off
❑ It is impossible to work very fast and accurately at the same time
❑ Fast and sloppy OR Slow and accurate
◼ 3. Signal Detection Theory
❑ Assumes two kinds of human errors: false alarms and
misses
❑ The study of human errors has become a science by
itself
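A minimal sketch of the signal-detection view of errors, assuming the usual equal-variance Gaussian model: hits and false alarms are counted and the sensitivity index d′ is derived from them (the example counts are invented).

    # Signal detection sketch: derive hit rate, false-alarm rate and d'
    # from response counts (equal-variance Gaussian assumption).
    from statistics import NormalDist

    def d_prime(hits, misses, false_alarms, correct_rejections):
        z = NormalDist().inv_cdf           # inverse of the standard normal CDF
        hit_rate = hits / (hits + misses)
        fa_rate = false_alarms / (false_alarms + correct_rejections)
        return z(hit_rate) - z(fa_rate)

    # e.g. 45 hits, 5 misses, 10 false alarms, 40 correct rejections
    print(round(d_prime(45, 5, 10, 40), 2))   # prints 2.12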

20
Categories of Human Errors by James Reason
◼ Mistakes & Slips
◼ Mistakes
❑ Failure to formulate the right intention, due to
shortcomings in: Perception, memory and cognition
❑ James Reason used Rasmussen’s distinction between:
◼ Knowledge-based mistakes
◼ Rule-based mistakes

21
Categories of Human Errors
◼ Knowledge-Based Mistakes
◼ These are due to failure to understand the situation. The
operator may not be able to consider alternative decisions,
since she is overwhelmed by the complexity of evidence
and cannot interpret it correctly
◼ Rule-Based Mistakes
◼ Example of rules: It is correct to turn the wheels in the
direction you want to go – unless you are skidding on ice.
Formulated as IF-THEN rules. There may be exceptions or
qualifications that are overlooked – the THEN part may be
wrong. The choice of rule is guided by frequency and
reinforcement – Rules that have been successful are
chosen again
◼ Rule-based mistakes tend to be made with great confidence – "strong but wrong"
◼ But there is less confidence in knowledge-based situations, maybe because these involve a more conscious effort
22
Categories of Human Errors
◼ Slips
❑ The right intention is carried out – but incorrectly. A
common class of slips are “capture errors”. These may
happen when
◼ a. The intended action is almost the same as routine action
◼ b. The action sequence is relatively automatic
◼ e.g. pouring orange juice into the coffee cup while reading the morning paper during breakfast
◼ These routine situations are not attended to, and the errors are produced because the stimulus and the response are similar
◼ In flying, the controls for flaps and landing gear have a similar feel, appearance, direction, and location, and both are relevant for take-off and landing
23
Categories of Human Errors
◼ Lapses
❑ Failure to carry out an action – due to forgetfulness
❑ Sometimes an interruption may cause a sequence to be
stopped (What was I saying?)
◼ Mode Errors
❑ An action that is appropriate in one mode of operation is
not appropriate for another
❑ Example: Raising the landing gear although the aircraft is still on the runway – because the pilot thought it was airborne
❑ Mode errors are of great concern in flying and HCI,
where the same key may have different meanings
❑ Mode errors are a joint consequence of relatively
automated performance and improperly conceived
systems design.
24
Remedial Actions of Errors

Potential Error                                     Error Type   Action
Loco not returned to service bay for 24 hr check    Violation    Organization / Management
Setting off with parking brake on                   Slip         Design
Driving locos despite earth tester warning on       Violation    Design / Training
Drivers leaning out of cab when travelling          Violation    Design / Training
Inadequate use of warning horns                     Violation    Design
Misreading of displays                              Slip         Design
Guards leaning out of cabs when travelling          Violation    Design / Training
Insufficient warning of objects/people on track     Slip         Design
Inability to effectively use fire extinguishers     Mistake      Design
Incorrect control operations                        Mistake      Training / Design

25
How to deal with Mistakes?
◼ What can we do about Knowledge-based mistakes,
Rule-based mistakes, Slips, and lapses?
❑ Knowledge-based: Train the operator
❑ Rule-based: Training and redesign
❑ Slips: Redesign the task / environment
❑ Lapses: Redesign the task
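The mapping above can be read as a simple lookup table; the sketch below encodes it directly (the helper function and the fallback entry are my own illustrative additions).

    # Remedies per error type, taken from the list above; the lookup helper
    # and the fallback value are illustrative additions.
    REMEDIES = {
        "knowledge-based mistake": ["training"],
        "rule-based mistake": ["training", "redesign"],
        "slip": ["redesign the task/environment"],
        "lapse": ["redesign the task"],
    }

    def suggest_remedies(error_type: str) -> list[str]:
        return REMEDIES.get(error_type.lower(), ["investigate further"])

    print(suggest_remedies("Slip"))   # -> ['redesign the task/environment']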

26
Conclusion
◼ There are several ways to remedy causes of human
error
◼ In industry, it is common to implement work
procedures and training of operators
◼ In supervisory control, we try to redesign the
workplace and tools – and train the operator

◼ This approach has been adopted by many organizations – e.g. the military. It is now common in nuclear power plants and other complex environments. Lately it has also been adopted by industry

27
Reason’s Cheese Model of Accidents
◼ James Reason’s Swiss Cheese Model of
Organizational Accidents
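The figure itself is not reproduced here. On the usual reading of the model, an accident requires the "holes" in every defensive layer to line up; a toy calculation under an independence assumption (the layer probabilities are invented):

    # Toy Swiss Cheese calculation: with independent defensive layers, an
    # accident needs the hole in every layer to line up, so its probability is
    # the product of the per-layer hole probabilities (numbers are hypothetical).
    layer_hole_probs = [0.1, 0.05, 0.02]
    p_accident = 1.0
    for p in layer_hole_probs:
        p_accident *= p
    print(round(p_accident, 6))   # 0.0001 for these numbers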

28
Errors in Organizational Context
◼ Reason thinks that human errors represent only a
small part of the deficiencies in an organization
◼ Accidents are visible, and therefore analyzed. Less
visible organizational errors are often performed in
management decision making
❑ Example: Industrial managers have limited resources –
often not enough to allocate to both productivity and
safety
❑ Managers get positive reinforcement from production, but safety is often seen as a "show stopper", and successful safety is usually characterized by an absence of evidence – nothing visible happens when it works

29
Consequences of Reason’s Model of Human Error
◼ Training
❑ Lack of knowledge can lead to mistakes, so training is helpful. But operators must also practise correcting errors – this is naturalistic; error-free training is not
◼ Memory aids and rules
❑ For example, use memory aids for procedures (e.g.
checklists)
❑ Rules must be logical. The "band aid" approach to human error makes the situation worse

30
Consequences of Reason’s Model of Human Error
◼ Error-Tolerant Systems
❑ There is one positive aspect of errors – the opportunity
for the operator to correct them. This gives the operator
a sense of control. Driving a car involves continuous
error correction (of lateral and longitudinal position).
❑ Often there are many strategies and the operator must
be allowed to act in an opportunistic fashion. The
operator must be allowed to respond differently
according to the conditions of the moment. Operators
must be given a chance to explore the functionality of
the system. Is there an undo button?
❑ In an error-tolerant system, one can recover by undoing
an action – there is a back-up option
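As a small illustration of the error-tolerant idea (undo as the "back-up option"), the sketch below records each action together with its inverse; the class and method names are hypothetical.

    # Minimal undo-stack sketch: every action is stored with its inverse so the
    # operator can back out of a slip (names are illustrative).
    class UndoStack:
        def __init__(self):
            self._undo_actions = []

        def do(self, action, undo_action):
            action()
            self._undo_actions.append(undo_action)

        def undo(self):
            if self._undo_actions:            # the back-up option
                self._undo_actions.pop()()

    doc = []
    stack = UndoStack()
    stack.do(lambda: doc.append("line 1"), lambda: doc.pop())
    stack.undo()
    print(doc)   # -> []  (the slip has been recovered)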

31
Starr (1969) risk taking model
◼ The horizontal line represents the natural death rate
due to old age

32
Human Errors are Commonplace
◼ But many of the errors people commit in operating
systems are the result of bad system design or bad
organizational structure rather than irresponsible
action (Norman 1988; Reason, 1990, 1997; Woods &
Cook, 1999)
◼ Although human error may be statistically defined as a contributing cause of an accident, it is usually only one link in a complex chain of breakdowns – many of them of a mechanical or organizational nature
◼ These breakdowns affect the system and weaken its defenses (Perrow, 1984; Reason, 1997)

33
Stop Blaming the Operator
◼ By minimizing human error, we can improve both
safety and industrial production. This is a matter of
design and training
◼ The notion that the operator should be punished or
personally made responsible is unwarranted – (unless
there is a clear violation of regulations).
◼ Accident proneness is not a viable concept (Shaw and
Sichel). Therefore the blame for accidents and poor
quality falls on poor design, poor procedures, poor
training and in the end poor management!

34
Fault Tree Analysis
◼ Has been used extensively in spacecraft design, analysis of nuclear power plant safety, etc.
❑ 1. The fault tree starts with a specific failure (the top of the tree). The choice of failure is important: if it is too general, it cannot be analyzed; if it is too specific, the analysis will not produce enough information
❑ 2. The purpose is to find all credible ways in which the
undesirable event can occur. (Very expensive analysis)
❑ 3. It is a graphical model of various parallel and sequential
faults that will result in the occurrence of the undesired fault
(at the top of the tree)
❑ 4. Primary events are caused by inherent characteristics of a component, such as failure of a light bulb due to a worn filament. Secondary events are caused by external sources – such as excessive voltage, which burns out the filament
35
Construction of a fault tree
◼ A. By analysis (top-down)
❑ 1. Select one head event that is to be prevented
❑ 2. Determine all primary and secondary events that may
cause the head event
❑ 3. Determine relationships between causal events and
the head event in terms of AND and OR Boolean
operators
❑ 4. Determine the value and need for further analysis
according to steps 2 and 3
❑ 5. Continue to reiterate steps 2-4 until all events are
basic, or until it is not desirable to go further.
❑ 6. Diagram the events using the symbols below
❑ 7. Perform qualitative and quantitative analyses
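For the quantitative step (7), a minimal sketch of how gate logic is usually evaluated, assuming independent basic events with known probabilities; the event names and numbers are invented.

    # Minimal quantitative fault-tree sketch (independent basic events):
    # an AND gate multiplies input probabilities (all inputs must occur),
    # an OR gate combines them as 1 - prod(1 - p) (at least one input occurs).
    from math import prod

    def and_gate(*probs: float) -> float:
        return prod(probs)

    def or_gate(*probs: float) -> float:
        return 1 - prod(1 - p for p in probs)

    # Hypothetical tree: top event occurs if (A AND B) OR C
    p_a, p_b, p_c = 0.01, 0.02, 0.001
    p_top = or_gate(and_gate(p_a, p_b), p_c)
    print(f"P(top event) = {p_top:.6f}")   # prints 0.001200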
36
Fault-Tree Analysis

Symbols used in fault-tree diagrams:

Top event – cannot be developed further; inconsequential event, or insufficient data to develop
Basic event
Event to be further developed
Normal event – an event that is normal, but can become a fault
AND gate – several input events must all occur to cause the output event
OR gate – at least one input event must occur to cause the output event

37
