Demonstrating the Value of Human Factors for Process Design in a Controlled Experiment
SAND2018-0725
Unlimited Release
Printed February 2018
Prepared by
Sandia National Laboratories
Albuquerque, New Mexico 87185 and Livermore, California 94550
NOTICE: This report was prepared as an account of work sponsored by an agency of the United
States Government. Neither the United States Government, nor any agency thereof, nor any of their
employees, nor any of their contractors, subcontractors, or their employees, make any warranty,
express or implied, or assume any legal liability or responsibility for the accuracy, completeness, or
usefulness of any information, apparatus, product, or process disclosed, or represent that its use
would not infringe privately owned rights. Reference herein to any specific commercial product,
process, or service by trade name, trademark, manufacturer, or otherwise, does not necessarily
constitute or imply its endorsement, recommendation, or favoring by the United States Government,
any agency thereof, or any of their contractors or subcontractors. The views and opinions expressed
herein do not necessarily state or reflect those of the United States Government, any agency thereof,
or any of their contractors.
Printed in the United States of America. This report has been reproduced directly from the best
available copy.
Judi E. See
Nuclear Weapons Systems Analysis Department
Sandia National Laboratories
P. O. Box 5800
Albuquerque, New Mexico 87185-MS0151
Abstract
A controlled between-groups experiment was conducted to demonstrate the value of
human factors for process design. Most evidence to convey the benefits of human
factors is derived from reactive studies of existing flawed systems designed with little
or no human factors involvement. Controlled experiments conducted explicitly to
demonstrate the benefits of human factors have been scarce since the 1990s. Further,
most previous research focused on product or interface design as opposed to process
design. The present study was designed to fill these research gaps. Toward that end, 24
Sandia National Laboratories employees completed a simple visual inspection task
simulating receipt inspection. The experimental group process was designed to
conform to human factors and visual inspection principles, whereas the control group
process was designed without consideration of such principles. Results indicated the
experimental group exhibited superior performance accuracy, lower workload, and
more favorable usability ratings as compared to the control group. Given the
differences observed in the simple task used in the present study, the author concluded
that incorporating human factors should have even greater benefits for complex
products and processes. The study provides evidence to help human factors
practitioners revitalize the critical message regarding the benefits of human factors
involvement for a new generation of designers.
ACKNOWLEDGMENTS
Dr. Susan Adams and Ms. Victoria Newton provided constructive feedback during concept
development. Ms. Allison Noble, Ms. Kay Rivers, and Ms. Liza Kittinger conducted human factors
heuristic evaluations of the experimental group and control group task designs. Dr. Thor Osborn
located a reference supporting the value of process variation reduction and discussed the concept
with the author. Mr. Richard Craft, Dr. Mallory Stites, and Ms. Victoria Newton completed peer
reviews of the final paper.
TABLE OF CONTENTS
1. Introduction ..........................................................................................................................9
1.1. Reactive Case Studies ............................................................................................10
1.2. Controlled Experiments .........................................................................................10
1.3. Research Gaps ........................................................................................................11
1.4. Objectives of the Present Study .............................................................................12
2. Methodology ......................................................................................................................13
2.1. Participants .............................................................................................................13
2.2. Design ....................................................................................................................13
2.3. Primary Task Design..............................................................................................13
2.4. Procedure ...............................................................................................................14
2.4.1. Orientation and Practice ........................................................................14
2.4.2. Task Implementation .............................................................................17
3. Results ................................................................................................................................21
3.1. Task Accuracy and Speed ......................................................................................21
3.1.1. Task Accuracy.......................................................................................21
3.1.2. Task Speed ............................................................................................22
3.1.3. Error Analysis .......................................................................................22
3.1.4. Reductions in Variability ......................................................................23
3.2. Workload................................................................................................................24
3.2.1. NASA-TLX Workload Ratings ............................................................24
3.2.2. Secondary Task Analysis ......................................................................25
3.3. Usability Ratings ....................................................................................................25
4. Discussion ..........................................................................................................................27
4.1. Process Variation Reduction ..................................................................................27
4.2. Limitations of the Present Study ............................................................................28
4.3. Directions for Future Research ..............................................................................28
5. Conclusions ........................................................................................................................29
5.1. Key Points ..............................................................................................................29
References ................................................................................................................................31
TABLES
Table 1. Primary Task Design for the Experimental and Control Groups ................................... 15
Table 2. Acceptable Tile Accuracy Results ................................................................................. 21
Table 3. Rejectable Tile Accuracy Results .................................................................................. 22
Table 4. Control Group Errors and Experimental Group Mitigations ......................................... 23
Table 5. Experimental and Control Group Standard Deviations ................................................. 24
Table 6. Experimental and Control Group Usability Ratings ...................................................... 26
NOMENCLATURE
Abbreviation Definition
CI confidence interval
dBA decibel, A scale
F false
GSA General Services Administration
ID identification
M mean
min minutes
NASA-TLX National Aeronautics and Space Administration – Task Load Index
SD standard deviation
sec seconds
SNL Sandia National Laboratories
T true
1. INTRODUCTION
It is well established that the discipline of human factors provides numerous benefits
throughout the product lifecycle (Bailey, 1993; Bruseberg, 2008; Burgess-Limerick,
Cotea, & Pietrzak, 2010; Hendrick, 1996; Hendrick, 2008; Rouse, Kober, & Mavor,
1997; Sager & Grier, 2005; Shaver & Braun, 2008; Yousefi & Yousefi, 2011). Such
benefits have been demonstrated through positive examples, which highlight the value
of including human factors engineers early and often throughout the lifecycle; and
negative examples, which underscore the adverse consequences of neglecting human
factors (Burgess-Limerick, Cotea, & Pietrzak, 2010). Prominent success stories include
the Comanche helicopter acquisition program, maintainability of the F119 engine for
the F-22 Raptor, C-141 cargo plane development, and the F/A-18 Hornet (Burgess-
Limerick, Cotea, & Pietrzak, 2010; Sager & Grier, 2005). Well-known historical
failures include the Three Mile Island nuclear reactor accident in 1979, the Titan II
missile explosion in 1980, the Bhopal chemical leak in 1984, the Chernobyl accident in
1986, and the grounding of the Royal Majesty cruise ship in 1995 (Burgess-Limerick,
Cotea, & Pietrzak, 2010; See, 2017).
Demonstrated benefits encompass both the design process itself and the subsequent
operations and maintenance phase of the lifecycle. Incorporating human factors engineering can reduce product development time and costs by as much as 50% (Sager & Grier, 2005). As just one example, Bailey (1993)
demonstrated that interfaces developed by human factors experts had fewer design
errors after a single iteration as compared to the same interfaces developed by
programmers after three to five iterations. In sum, the extra time, labor, and costs for
programmers to develop an effective and usable product could have been saved by
incorporating human factors experts from the beginning.
The operations and maintenance phase is impacted in terms of increases in advantageous
states and reductions in detrimental states. Advantages include improved safety,
effectiveness, efficiency, productivity, and operator satisfaction (Burgess-Limerick,
Cotea, & Pietrzak, 2010; Sager & Grier, 2005). Positive reductions include decreased
training time and costs, accidents, error rates, maintenance costs, and equipment damage
(Shaver & Braun, 2008). One particularly noteworthy positive reduction involves a
decrease in the number of errors requiring resolution during operations and
maintenance, attributable explicitly to investing in human factors early in design. First,
errors can become 30 to 1500 times costlier to correct in operations and maintenance as
compared to early design phases (Stecklein, Dabney, Dick, Haskins, Lovell, & Moroney, 2004). Second, most of the costs during operations and maintenance, as much
as 67%, stem from modifications to resolve operator dissatisfaction with the original
system (Rauterberg & Strohm, 1992). Ultimately, a large portion of detrimental impacts
typically associated with error cost escalation could be avoided with proper attention to
human factors during design.
Some benefits such as productivity can be expressed in quantitative cost savings; other
less tangible benefits such as improved operator attitudes and enhanced safety may be
difficult to measure and quantify, but nevertheless have a positive influence. In a review
of 24 diverse human factors projects, Hendrick (2008) concluded that human factors
typically yields a direct cost-benefit ratio of approximately 1:10 or better, with a typical payback period
of 6 to 24 months. He further inferred that earlier incorporation of human factors
translates into even lower costs and greater benefits.
Despite a proven record of success, human factors practitioners continue to face
challenges convincing personnel outside their field. As Sager and Grier (2005) stated,
“inadequate consideration of human factors engineering issues is a familiar
problem…issues are not expressly considered, they are considered but their importance
is underestimated, or they are considered too late in the design process” (p. 1). Hence,
there is an ongoing need for evidence documenting the benefits of human factors.
Controlled experiments allow precise control both of independent variables, which affords establishing cause-and-effect relationships, and of extraneous variables that can bias results. To be sure, some researchers have conducted controlled experiments comparing
“old designs” (without human factors involvement) and “new designs” (with human
factors involvement) to provide additional confidence in reactive studies demonstrating
the value of human factors.
Walkenstein and Eisenberg (1996) conducted an experimental comparison of old and
new designs of a computer-telephony product. The original version had been developed
without human factors engineering involvement. Subsequently, human factors
practitioners were asked to redesign the product late in the development cycle, under
time constraints, and with limitations imposed on the amount and types of changes that
could be made. Twenty-three target customers used either the old design or the new
design to complete a series of 13 tasks. Results indicated that 9 of 15 features were
significantly easier to use in the new design, and overall ease of use was rated more
highly for the new design. Participants working with the new design were able to
successfully complete 11 of 13 assigned tasks, whereas participants working with the
old design could complete only 8 of 13 tasks. The authors concluded that the direct
involvement of human factors engineers led to substantial improvements in the user
interface, despite the issues associated with involvement late in the development cycle.
Similarly, two other studies used controlled experiments to demonstrate the value of
human factors in medicine. Lin, Isla, Doniz, Harkness, Vicente, and Doyle (1995)
compared the original design for a patient-controlled analgesia machine interface and a
redesigned interface guided by a cognitive task analysis as well as a set of human factors
design principles. Twenty-four novice users who programmed three different sets of
doctors’ orders into both the old and new machines demonstrated faster performance,
fewer errors, and lower workload with the new interface. Russ et al. (2014)
demonstrated that applying human factors principles for the redesign of a medication
alert interface improved performance time and usability, while reducing prescriber
workload and prescribing errors.
1.4. Objectives of the Present Study
The primary goal of the present study was to conduct a controlled experiment to
demonstrate the value of human factors for process design. Toward that end, a visual
inspection task simulating a typical receipt inspection process was designed. The
process was designed with adherence to common human factors principles
(experimental group) and without (control group). Impacts of incorporating human
factors into process design were evaluated in terms of performance accuracy and speed,
workload, and ratings of process usability. It was hypothesized the experimental group
would exhibit more accurate performance, faster task completion time, reduced
workload, and higher usability ratings as compared to the control group.
2. METHODOLOGY
This research complied with the American Psychological Association Code of Ethics
and was approved by the Institutional Review Board at SNL (ID# SNL000155).
Informed consent was obtained from each participant.
2.1. Participants
Twenty-five SNL employees volunteered to participate in the experiment in response to
an advertisement published in the electronic Sandia Daily News. No special knowledge,
skills, or previous experience were required to participate. However, participants had to
meet a criterion of 70% correct on the secondary task used in the experiment in order to
continue. As a result, one individual who volunteered was dismissed after the practice
session and replaced with the next volunteer. The 24 participants (14 males) who
completed the experiment ranged in age from their twenties to their sixties, with 42%
of participants in their thirties.
2.2. Design
A between-groups design with two groups (N = 12 per group) was used. The
experimental group task was designed to conform to human factors principles, whereas
the control group task was designed without consideration of human factors principles.
components of the inspection process—sorting and counting tiles and calculating fees—
to maximize accuracy, speed, and usability and minimize workload.
The control group task design provided the minimum tools necessary to complete the
process, without consideration of usability or user preferences. While designing the
control group task, every effort was made to avoid intentionally exaggerating task
difficulty (i.e., to prevent artificially biasing outcomes in favor of the experimental
group). Toward that end, the process design for the control group was grounded in issues
commonly reported in the research literature (Table 1). In reality, the greater difficulty
was refraining from incorporating human factors principles in the control group task.
For example, the experimenter had to overcome a natural inclination to format the
control group work instruction for readability and ease of use and to supply well-
designed ergonomic tools for task completion (e.g., calculator).
Application of human factors principles for the experimental group (or lack thereof for
the control group) was confirmed through independent heuristic evaluations of each
design. For example, the control group heuristic evaluator recommended providing
sorting bins to facilitate counting and a calculator with larger buttons and a larger
display for usability. Design of the experimental group task effectively eliminated or
resolved these issues. Specifically, the experimental group heuristic evaluator
highlighted the benefits of workspace customization; sorting trays with divided,
redundantly coded slots that each held five tiles; pictorial labels on each sorting tray;
and electronic spreadsheet design (preloaded with tile pictures and values, formatted
with variable shading to support row scanning, and designed to provide automatic
calculations).
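The automatic fee calculation described above can be sketched as follows; the tile values and quantities here are hypothetical illustrations, not the study's actual lot contents.

```python
# Sketch of the automatic dollar-amount calculation the experimental
# group's spreadsheet provided. Tile values and counts are hypothetical.
tile_values = {"A": 1, "E": 1, "D": 2, "B": 3}   # preloaded static values

def lot_fees(counts: dict) -> dict:
    """Dollar amount per tile type = preloaded tile value x quantity counted."""
    return {tile: tile_values[tile] * qty for tile, qty in counts.items()}

counts = {"A": 25, "D": 10, "B": 4}
fees = lot_fees(counts)
print(fees)                # → {'A': 25, 'D': 20, 'B': 12}
print(sum(fees.values()))  # → 57
```

Pre-populating the static values and automating the arithmetic removes the two entry steps in which control group participants made value and calculation errors.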
2.4. Procedure
The experiment occurred in a private enclosed office at SNL. Two side-by-side sit-stand
workstations provided a large adjustable workspace for task completion. Light levels
during the experiment, achieved via overhead LED lights as well as natural outdoor
lighting from the office window, were 430 lux. All sessions were conducted weekdays
between 8:30 a.m. and 3:30 p.m. Given the environments in which visual inspection
may occur in the field, no attempt was made to minimize surrounding office noise. Each
participant individually completed a study session lasting approximately 1.3 hours.
Table 1. Primary Task Design for the Experimental and Control Groups

Activity: Workspace
  Control: could use any available table space for work area, but no other customization options were offered
  Experimental: customized workspace for standing/sitting, table/chair heights, tool placement, and laptop/tablet mode
  Issue/Principle:
  • Configurability adheres to principle of … dimensions for fit and reach (Kroemer & Grandjean, 1997)
  • Workspace flexibility supports usability/efficiency (Nielsen, 1995)

Activity: Follow Work Instruction
  Control: minimal formatting; broad text …
  Issue/Principle:
  • Formatted instructions with structured blocks of information reduce cognitive …
2.4.1.1. NASA-TLX Practice
First, participants practiced the NASA-TLX workload rating scale (Hart & Staveland,
1988). The NASA-TLX provides an overall workload score based on a weighted
average of six subscale ratings (mental demand, physical demand, temporal demand,
performance, effort, and frustration). Weightings are achieved by presenting 15 pairwise
comparisons and asking participants to choose which subscale in each pair was more
important to task workload. The number of times each subscale is selected provides a
weighting to compute an overall workload score. Both subscale scores and overall
weighted workload scores range from 0 to 100, with higher scores representing greater
workload. After the experimenter described the NASA-TLX, participants practiced
using the rating scale to rate the workload of a simple task (sorting an ordinary deck of
cards based on suit).
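The weighting procedure described above can be sketched as follows; the subscale ratings and pairwise choices below are hypothetical examples, not data collected in the study.

```python
# Sketch of the NASA-TLX overall workload computation described above.
# Ratings and pairwise winners are hypothetical, not study data.

SUBSCALES = ["mental", "physical", "temporal", "performance", "effort", "frustration"]

def tlx_overall(ratings: dict, pairwise_winners: list) -> float:
    """Weighted overall workload: each subscale's weight is the number of
    times it was chosen across the 15 pairwise comparisons (weights sum to 15)."""
    assert len(pairwise_winners) == 15
    weights = {s: pairwise_winners.count(s) for s in SUBSCALES}
    return sum(ratings[s] * weights[s] for s in SUBSCALES) / 15.0

# Hypothetical participant: 0-100 subscale ratings and 15 pairwise winners.
ratings = {"mental": 60, "physical": 10, "temporal": 40,
           "performance": 30, "effort": 50, "frustration": 20}
winners = (["mental"] * 4 + ["temporal"] * 3 + ["performance"] * 2 +
           ["effort"] * 4 + ["frustration"] * 2)  # "physical" never chosen
print(tlx_overall(ratings, winners))  # → 44.0
```

Because the weights sum to 15 and each rating is at most 100, the overall score stays within the 0 to 100 range noted above.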
[Figure: sample letter-pair stimuli (BA, AB) for the grammatical reasoning task]
Participants first practiced the grammatical reasoning task with five paper examples,
circling T or F to indicate whether each statement was true or false. The majority of
participants (88%) answered all five examples correctly; the remaining three
participants missed only one example. The experimenter reviewed response accuracy
with participants before directing them to a computer practice session. For the electronic
practice session, 10 grammatical reasoning problems with feedback were presented.
Each stimulus remained on the screen until participants pressed either the T key (true)
or the F key (false). Given that the grammatical reasoning task was designed to run in
the background while participants performed the primary visual inspection task, a one-
second 57 dBA auditory stimulus served as a warning signal to indicate that a
grammatical reasoning problem was on the computer screen awaiting response. During
practice, the auditory alert preceded each stimulus to prepare participants for its
occurrence during the primary task. Participants had up to three opportunities to
complete the electronic practice session and reach a criterion of 70% correct by the final
attempt. All participants achieved at least 90% accuracy. Two participants required two
attempts, and two participants required three attempts.
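A minimal sketch of how such true/false items can be scored; the statement grammar here is an illustrative reduction of Baddeley's task, not the study's exact stimulus set.

```python
# Illustrative checker for Baddeley-style grammatical reasoning items:
# a statement about letter order (e.g., "B follows A") is judged true or
# false against a two-letter pair (e.g., "AB"). Simplified stimulus grammar.

def evaluate(statement: str, pair: str) -> bool:
    """Return True if the statement correctly describes the letter pair."""
    words = statement.split()
    subject, target = words[0], words[-1]
    negated = "not" in words
    if "precede" in statement:   # "A precedes B", "A does not precede B"
        claim = pair.index(subject) < pair.index(target)
    else:                        # "follows" variants
        claim = pair.index(subject) > pair.index(target)
    return claim != negated

print(evaluate("B follows A", "AB"))           # → True
print(evaluate("A precedes B", "BA"))          # → False
print(evaluate("B does not precede A", "AB"))  # → True
```

A participant's keypress (T or F) would simply be compared against this ground truth to score accuracy on each trial.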
time and accuracy on the grammatical reasoning task were recorded for analysis. Stimuli
were presented for the duration of the primary visual inspection task at random intervals
of either 30 or 45 seconds. The experimenter remained in the room during task
completion to document observations for post-session interviews.
Most of the task time was consumed by sorting and categorizing the tiles. Experimental
group participants used trays to sort and categorize tiles. Each of the six acceptable letter
types had its own labeled sorting tray, with numbered and color-coded slots holding
exactly five tiles each. A separate tray contained four labeled holding fixtures, one for
each rejectable letter type. The wooden racks included in Scrabble games to hold tiles
were used in the experiment as holding fixtures for rejectable tiles. The wooden racks
were labeled with numbered slots to facilitate subsequent counting of tile quantities.
Control group participants received only the bin containing the lot of tiles to be
inspected. They developed their own individual methods to sort and categorize tiles
within the available workspace. Techniques included sorting tiles into rows, columns,
piles, or vertical stacks according to letter type. Figure 2 illustrates sorting techniques
used for acceptable tiles in the experimental and control groups (experimental group
trays for rejectable tiles are not shown).
questionnaire to address inspection task (1) ease of completion, (2) amount of time, and
(3) task work instructions. A seven-point rating scale for each item ranged from Strongly
Disagree (1) to Strongly Agree (7). Before concluding the session, the experimenter
interviewed participants to gain insight into their thought processes throughout the
experiment, collect subjective descriptions of any errors that occurred, and discuss
experimenter observations.
Figure 3. Experimental and Control Group Data Entry Forms for Acceptable Tiles
3. RESULTS
Incorporating human factors in process design led to superior performance accuracy,
lower workload, and more favorable usability ratings in the experimental group as
compared to the control group. The experimental group process design promoted more
uniform task approaches among participants, effectively reducing process variation and
mitigating or eliminating errors observed in the control group.
For rejectable tiles, there were accuracy differences between the two groups for tile
values, quantities, and dollar amounts (Table 3). In all instances, errors were confined
to the control group. Value errors occurred when value and quantity entries for a single
tile were transposed on the paper form. Rejectable quantities were all under-recorded by
1 to 3 tiles due to the transposition error, misclassifying one rejectable tile type as
acceptable, and miscounting tiles. All four dollar amount errors consisted of
undercharging, ranging from $2 to $24. Differences in accuracy for quantities and dollar
amounts were statistically significant.
Table 3. Rejectable Tile Accuracy Results

Dependent Variable       Group          Incorrect Responses   Statistical Significance
Rejectable Tile Values   Experimental   0                     p = .500, Fisher's exact test, one-tailed
                         Control        1
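The one-tailed Fisher's exact p-value reported in Table 3 for tile values (0 of 12 experimental vs. 1 of 12 control participants in error) can be reproduced directly from the hypergeometric distribution, without a statistics package.

```python
# One-tailed Fisher's exact test for a 2x2 table [[a, b], [c, d]],
# computed from the hypergeometric distribution (stdlib only).
from math import comb

def fisher_one_tailed(a: int, b: int, c: int, d: int) -> float:
    """P(observing <= a successes in row 1) given the table's fixed margins."""
    row1, col1, n = a + b, a + c, a + b + c + d
    p = 0.0
    for k in range(a + 1):
        p += comb(col1, k) * comb(n - col1, row1 - k) / comb(n, row1)
    return p

# Rejectable tile values: rows = groups, columns = (errors, correct).
print(round(fisher_one_tailed(0, 12, 1, 11), 3))  # → 0.5
```

With a single error participant among 24, the chance that the error falls in the control group by luck alone is 12/24, hence p = .500.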
Table 4. Control Group Errors and Experimental Group Mitigations

Miscounts
  Control: tiles placed into piles or groupings prone to miscounting
  Mitigation: sorting trays contained multiple slots that each accommodated five tiles to minimize miscounting

Incorrect Categorizations
  Control: rejectable tiles incorrectly categorized as acceptable
  Mitigation: work instruction and sorting trays contained photos of acceptable and rejectable tile types

Incorrect Tile Values
  Control: tile values entered incorrectly on paper form
  Mitigation: electronic spreadsheet was pre-populated with static information such as tile values

Incorrect Calculations
  Control: dollar amounts calculated and recorded incorrectly on paper form
  Mitigation: electronic spreadsheet automatically calculated dollar amounts

Overturned Tiles
  Control: stacks of sorted tiles bumped during inspection of remaining tiles
  Mitigation: sorting trays contained inspected tiles separate from unsorted tiles

Missing Entries
  Control: study ID, date, and lot number fields left blank
  Mitigation: electronic spreadsheet was pre-populated with this information

Scratchouts
  Control: incorrect entries scratched out or overwritten
  Mitigation: changes made in the electronic form replaced existing entries

Handwriting
  Control: handwriting sometimes ambiguous and open to interpretation
  Mitigation: electronic spreadsheet used only legible typewritten entries

Space Allocation
  Control: amount of space required to sort 350 tiles not well planned
  Mitigation: sorting trays accommodated all 350 tiles and required a finite, identifiable amount of table space

Re-Counting
  Control: tiles arranged in configurations that did not support possible need for re-counting
  Mitigation: sorting trays used numbered and divided slots to facilitate re-counting and verification of counts

End State
  Control: end state not conducive to transfer for follow-on work (tiles in various types of groupings on the table)
  Mitigation: sorting trays also served as a convenient mechanism to transfer tiles for next level of work
design minimized individual differences that contribute to process variation and hinder
consistency in manufacturing.
3.2. Workload
[Figure: mean NASA-TLX subscale ratings (Mental, Physical, Temporal, Performance, Effort, Frustration) for the experimental and control groups; rating axis 0 to 50]
Table 6. Experimental and Control Group Usability Ratings

Participant   Ease of Completion   Amount of Time   Task Work Instructions   All 6 or 7?

Experimental Group
1             5                    5                6                        No
2             7                    6                7                        Yes
3             7                    6                7                        Yes
4             7                    7                7                        Yes
5             7                    6                7                        Yes
6             7                    6                7                        Yes
7             7                    6                7                        Yes
8             6                    6                6                        Yes
9             7                    7                7                        Yes
10            6                    6                7                        Yes
11            7                    7                7                        Yes
12            7                    7                7                        Yes

Control Group
13            5                    4                7                        No
14            5                    6                7                        No
15            5                    5                5                        No
16            4                    4                6                        No
17            7                    6                7                        Yes
18            7                    7                7                        Yes
19            6                    6                6                        Yes
20            7                    7                7                        Yes
21            7                    7                7                        Yes
22            6                    6                7                        Yes
23            7                    6                7                        Yes
24            6                    6                7                        Yes
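The "All 6 or 7?" flag in Table 6 simply records whether every one of a participant's three ratings was at least 6 on the seven-point scale. Tallying the flag per group from the table's data:

```python
# Tally the "All 6 or 7?" usability flag from Table 6.
# Each tuple: (ease of completion, amount of time, task work instructions).
experimental = [(5,5,6),(7,6,7),(7,6,7),(7,7,7),(7,6,7),(7,6,7),
                (7,6,7),(6,6,6),(7,7,7),(6,6,7),(7,7,7),(7,7,7)]
control      = [(5,4,7),(5,6,7),(5,5,5),(4,4,6),(7,6,7),(7,7,7),
                (6,6,6),(7,7,7),(7,7,7),(6,6,7),(7,6,7),(6,6,7)]

def all_six_or_seven(ratings):
    """Count participants whose every rating on the 7-point scale was 6 or 7."""
    return sum(1 for r in ratings if all(x >= 6 for x in r))

print(all_six_or_seven(experimental), "of", len(experimental))  # → 11 of 12
print(all_six_or_seven(control), "of", len(control))            # → 8 of 12
```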
4. DISCUSSION
The value of human factors was demonstrated in a controlled between-groups
experiment for a simple visual inspection process. The experimental group achieved
greater performance accuracy, with reduced workload and more favorable usability
ratings, as compared to the control group. Such improvements stemmed from applying
a user-centered design approach for the experimental group that focused on thorough
consideration of general human factors and specific visual inspection principles. The
result was a more efficient and usable process for the experimental group that lends
itself well to follow-on manufacturing steps and analysis. For example, use of electronic
spreadsheets to enter inspection outcomes resulted in legible records for archiving and
future analysis, with none of the scratchouts or writeovers observed in the control group
(refer to Figure 3). Further, the experimental group process design resulted in an end
state configuration suitable for the next level of processing (refer to Figure 2). By
contrast, control group participants concluded the task with rows, columns, piles, or
vertical stacks of tiles scattered across the table.
In a realistic manufacturing process, inspected parts must be organized to support
subsequent processing, either installation in the next level of assembly (acceptable
parts) or preparation for analysis and troubleshooting (rejectable parts). In the present
study, this step was incorporated into the experimental group process via sorting bins
and trays; however, this step was not specifically required in the control group. At a
minimum, requiring control group participants to deposit tiles into separate bins after sorting would have not only prolonged their completion time but also increased opportunities for error. In effect, although task completion time differences were not statistically significant, the experimental group was ultimately faster, since the control group would still have needed to complete this final step.
4.2. Limitations of the Present Study
The magnitude of effects in the present study was limited by inspection task simplicity,
as evidenced in part by performance accuracy ceiling effects for acceptable tiles. This
outcome was the result of applying a simple inspection criterion based on a single tile
feature (the character printed on the tile). In reality, receipt inspection typically involves
simultaneous inspection for numerous defect types such as scratches, dents,
discolorations, and the presence of foreign material. As indicated in See’s (2012)
review, inspection only becomes more difficult as the number of different defect types
increases, magnifying task demand and reducing performance accuracy. Thus,
additional differences between the experimental and control groups might have
occurred in the current study with a more complex inspection task.
Another limitation in the present study was the relative ineffectiveness of the secondary
grammatical reasoning task as a supplemental measure of task demand. Previous
research has demonstrated its sensitivity to various stressors such as narcosis during
diving, automobile driving, and white noise (Baddeley, 1968). In the driving study, as in the present study, grammatical reasoning served as a secondary task; reaction times increased by 44% and accuracy fell by 28% when participants completed an auditory version of the task while navigating a vehicle. In comparison to previous applications, the inspection task used in
the present study may not have been stressful enough to impact grammatical reasoning
task performance. Notably, 83% of NASA-TLX frustration ratings (which encompass stress, annoyance, and irritation) were 10 or below. As in the driving study, an auditory
version of the grammatical reasoning task may also have been useful for a more
continuous secondary task presentation minimizing task switching.
5. CONCLUSIONS
In summary, if the incorporation of human factors can make a difference in a simple
task such as that used in the present study, even greater benefits might be expected to
accrue for more complex products and processes. In effect, designing a task simply by
using available tools, without true consideration of the human in the system, might
yield a workable process, but not an optimal process that promotes effectiveness,
reduces workload, and enhances usability. Non-human factors practitioners may
periodically require current, relevant evidence to help convince them that human
factors issues must be expressly considered early and often throughout the lifecycle.
To paraphrase Walkenstein and Eisenberg (1996), experimental results such as these
help demonstrate the value of, and the need for, involving human factors engineering in
the design and development process and making it an integral part of that process.
REFERENCES
15. Luna, S. F., Sturdivant, M. H., & McKay, R. C. (1988). Factoring humans into procedures.
In Conference Record for 1988 IEEE Fourth Conference on Human Factors and Power
Plants, Monterey, CA (pp. 201-207). Monterey, CA: Institute of Electrical and Electronics
Engineers.
16. Matzen, L. E. (2009). Recommendations for Reducing Ambiguity in Written Procedures.
Report SAND2009-7522. Albuquerque, NM: Sandia National Laboratories.
17. Nielsen, J. (1995). 10 usability heuristics for user interface design. Retrieved from
https://www.nngroup.com/articles/ten-usability-heuristics/
18. Norman, D. (1988). The psychology of everyday things. New York: Basic Books, Inc.
19. Rauterberg, M., & Strohm, O. (1992). Work organization and software development. Annual
Review of Automatic Programming, 16, 121-128.
20. Reason, J. T. (1990). Human error. Cambridge, England: Cambridge University Press.
21. Rouse, W., Kober, N., & Mavor, A. (Eds.) (1997). The case for human factors in industry
and government: Report of a workshop. Washington, D. C.: National Academy Press.
22. Russ, A. L., Zillich, A. J., Melton, B. L., Russell, S. A., Chen, S., Spina, J. R.,…Saleem, J.
J. (2014). Applying human factors principles to alert design increases efficiency and reduces
prescribing errors in a scenario-based simulation. Journal of the American Medical
Informatics Association, 21, 287-296.
23. Sager, L., & Grier, R. A. (2005). Identifying and measuring the value of human factors to an
acquisition project. Paper presented at the Human Systems Integration Symposium,
Arlington, VA.
24. Sanders, M. S., & McCormick, E. J. (1993). Human factors in engineering and design (7th
ed.). New York: McGraw-Hill.
25. See, J. E. (2017). Human Factors for NES: 18 Fundamental Topics. Report SAND2017-2739 O.
Albuquerque, NM: Sandia National Laboratories.
26. See, J. E. (2012). Visual Inspection: A Review of the Literature. Report SAND2012-8590.
Albuquerque, NM: Sandia National Laboratories.
27. Sen, R. N., & Yeow, P. H. P. (2003). Cost effectiveness of ergonomic redesign of electronic
motherboard. Applied Ergonomics, 34, 453-463.
28. Shaver, E. F., & Braun, C. C. (2008). The return on investment (ROI) for human factors and
ergonomics initiatives. Moscow, ID: Benchmark Research & Safety, Inc.
29. Smith, S. L., & Mosier, J. N. (1986). Guidelines for Designing User Interface Software.
Report ESD-TR-86-278. Bedford, MA: The MITRE Corporation. Retrieved from
http://www.hcibib.org/sam/
30. Stecklein, J. M., Dabney, J., Dick, B., Haskins, B., Lovell, R., & Moroney, G. (2004, June).
Error Cost Escalation Through the Project Life Cycle. Report JSC-CN-8435. Houston, TX:
NASA Johnson Space Center.
31. Steiner, S. H., & MacKay, R. J. (2014). Statistical engineering and variation reduction.
Quality Engineering, 26, 44-60.
32. Swain, A. D., & Guttmann, H. E. (1983). Handbook of Human Reliability Analysis with
Emphasis on Nuclear Power Plant Applications. Technical Report NUREG/CR-1278-F,
SAND80-0200. Albuquerque, NM: Sandia Corporation.
33. Walkenstein, M., & Eisenberg, R. (1996). Benefiting design even late in the development
cycle: Contributions by human factors engineers. Proceedings of the Human Factors and
Ergonomics Society 40th Annual Meeting, 40, 318-322.
34. Yeow, P.H.P., & Sen, R.N. (2004). Ergonomics improvements of the visual inspection
process in a printed circuit assembly factory. International Journal of Occupational Safety
and Ergonomics, 10, 369-385.
35. Yousefi, P., & Yousefi, P. (2011). Cost justifying usability: A case study at Ericsson
(Unpublished master’s thesis). Blekinge Institute of Technology, Karlskrona, Sweden.