Computers & Industrial Engineering: Renran Tian, Vincent G. Duffy
Computerized task risk assessment using digital human modeling based Job Risk Classification Model

Renran Tian a, Vincent G. Duffy a,b

a School of Industrial Engineering, Purdue University, 315 N. Grant Street, West Lafayette, IN 47907-2023, USA
b Department of Agricultural & Biological Engineering, Purdue University, 225 S. University Street, West Lafayette, IN 47907-2093, USA
Article info
Article history:
Received 23 April 2010
Received in revised form 10 May 2011
Accepted 23 June 2011
Available online 8 July 2011
Keywords:
Digital human modeling
Computer-Aided Ergonomics
Virtual Interactive Design
Dynamic ergonomics analysis
Abstract
Unlike classical ergonomics analysis, the Virtual Interactive Design methodology relies on manikin movements in a virtual environment. Researchers have implemented this platform to accomplish static ergonomic analyses, including Rapid Upper Limb Assessment, the NIOSH lifting equation, and Static Strength Prediction. However, considering only static posture information limits the capacity of ergonomics analysis. In this study, a methodology for performing dynamic ergonomics analysis on the Virtual Interactive Design platform is proposed. The environment allows the velocity and angular velocity of specified body segments/joints, calculated for designed tasks, to be used to assess the corresponding risk levels based on the Job Risk Classification Model. The motion calculation is based on the captured interaction between human participants and a virtual workplace/mockup. To evaluate the validity and reliability of this upgraded platform, potential errors are analyzed by comparing outputs from several designed experimental conditions.
© 2011 Elsevier Ltd. All rights reserved.
1. Introduction
Table 1
JRCM parameters and coefficients (Marras et al., 1993).

Parameter                          Coefficient
Constant                           3.80
Lift Rate (LR)                     0.0014
Maximum Moment (MM)                0.024
Maximum Sagittal Flexion (MSF)     0.020
Average Twisting Velocity (ATV)    0.061
Maximum Lateral Velocity (MLV)     0.026

Probability of high-risk group membership = e^R / (1 + e^R), where R is the linear combination of the parameters and coefficients above.
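The JRCM combines the Table 1 coefficients in a logistic model of high-risk group membership, p = e^R / (1 + e^R). A minimal sketch of that combination follows, assuming the standard logistic form with R as a linear combination of the five parameters; the coefficient signs are taken as printed above and should be checked against Marras et al. (1993) before any practical use:

```python
import math

# JRCM coefficients as printed in Table 1 (Marras et al., 1993).
# Sign conventions are an assumption here: the printed values carry
# no signs, so consult the original paper before applying this model.
COEFFS = {
    "constant": 3.80,
    "LR": 0.0014,   # Lift Rate
    "MM": 0.024,    # Maximum Moment
    "MSF": 0.020,   # Maximum Sagittal Flexion
    "ATV": 0.061,   # Average Twisting Velocity
    "MLV": 0.026,   # Maximum Lateral Velocity
}

def jrcm_probability(lr, mm, msf, atv, mlv):
    """Logistic probability of high-risk group membership, p = e^R / (1 + e^R)."""
    r = (COEFFS["constant"]
         + COEFFS["LR"] * lr
         + COEFFS["MM"] * mm
         + COEFFS["MSF"] * msf
         + COEFFS["ATV"] * atv
         + COEFFS["MLV"] * mlv)
    return math.exp(r) / (1.0 + math.exp(r))
```

With positive coefficients, faster twisting or larger moments move the probability toward the high-risk group, which is the monotonic behavior the dynamic analysis in this paper relies on.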
for several seconds before and after each task. Another important tool used in this experiment is a data filter to improve data quality. A seven-point smoothing routine is applied as follows: for every seven consecutive position data points X = [x1, x2, . . . , x7], assume X ~ N(μ, σ²), where F(x) is the normal density for the estimated mean (μ) and standard deviation (σ). The middle point x4 is then replaced by the F(x)-weighted average of the seven points.
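A sketch of such a filter, assuming the smoothed middle point of each seven-point window is the F(x)-weighted average of the window (the exact weighting used in the original study may differ):

```python
import math
from statistics import mean, stdev

def smooth_seven_point(series):
    """Smooth a position series with a sliding seven-point window.

    For each window X = [x1..x7], the sample mean (mu) and standard
    deviation (sigma) are estimated, each point is weighted by the
    normal density F(x_i; mu, sigma), and the middle point x4 is
    replaced by the weighted average of the window.
    """
    out = list(series)
    for i in range(3, len(series) - 3):
        window = series[i - 3:i + 4]
        mu, sigma = mean(window), stdev(window)
        if sigma == 0:  # constant window: nothing to smooth
            continue
        weights = [math.exp(-((x - mu) ** 2) / (2 * sigma ** 2)) for x in window]
        out[i] = sum(w * x for w, x in zip(weights, window)) / sum(weights)
    return out

# A spike in the middle of the window is pulled toward its neighbours,
# because outlying points receive small normal-density weights:
raw = [1.0, 1.0, 1.0, 5.0, 1.0, 1.0, 1.0]
smoothed = smooth_seven_point(raw)
```

The first and last three points fall outside any full window and are left unchanged, which matches the need to record extra data for several seconds before and after each task.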
is the plane perpendicular to the other two planes, and ATV is the average twisting angular velocity in this plane. The following algorithm is applied to calculate dynamic information from the UGS/Siemens Jack markers. All the markers are shown in Fig. 3.
- Since JRCM focuses on trunk motion, it is important to define the two ends of the trunk. In the UGS/Siemens Jack marker set, PSIS_R and PSIS_L are two markers on the rear waist at the same height, and their center point can be used to represent the bottom end of the trunk; NECK_BASE_REAR can be used to represent the top end of the trunk. Calculation of MSF and MLV is based on the position changes of these three markers.
- For calculating the ATV, two other markers are used: Acromion_L and Acromion_R, attached to the left and right shoulders. Because participants were asked to rotate only from the waist without moving their legs, the twisting velocity of the trunk can be analyzed by tracking the position changes of these two markers.
- For calculating the MM, the distance between the load and the trunk is needed. The virtual markers Left_Hand_Palm and Right_Hand_Palm are used to determine the position of the load, while PSIS_R and PSIS_L are again used to determine the position of the trunk for this distance.
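The marker-based calculations above can be sketched as follows. The coordinate convention (x lateral, y forward in the sagittal direction, z vertical) and the exact angle definitions are illustrative assumptions, not the study's actual algorithm:

```python
import math

# Hypothetical coordinate convention: x lateral, y forward, z vertical.
# Marker names follow the UGS/Siemens Jack set described in the text.

def midpoint(a, b):
    return tuple((a[i] + b[i]) / 2 for i in range(3))

def trunk_angles(frame):
    """Return (sagittal_flexion, lateral_bend, twist) in degrees for one frame."""
    pelvis = midpoint(frame["PSIS_L"], frame["PSIS_R"])  # bottom end of trunk
    neck = frame["NECK_BASE_REAR"]                       # top end of trunk
    dx, dy, dz = (neck[i] - pelvis[i] for i in range(3))
    sagittal = math.degrees(math.atan2(dy, dz))   # forward lean of trunk vector
    lateral = math.degrees(math.atan2(dx, dz))    # sideways lean of trunk vector
    sl, sr = frame["Acromion_L"], frame["Acromion_R"]
    twist = math.degrees(math.atan2(sr[1] - sl[1], sr[0] - sl[0]))  # shoulder-line heading
    return sagittal, lateral, twist

def jrcm_features(frames, fs=60.0):
    """MSF, MLV, ATV from a sequence of marker frames sampled at fs Hz."""
    angles = [trunk_angles(f) for f in frames]
    sag = [a[0] for a in angles]
    lat_vel = [(b[1] - a[1]) * fs for a, b in zip(angles, angles[1:])]
    twist_vel = [(b[2] - a[2]) * fs for a, b in zip(angles, angles[1:])]
    msf = max(sag)                                         # Maximum Sagittal Flexion (deg)
    mlv = max(abs(v) for v in lat_vel)                     # Maximum Lateral Velocity (deg/s)
    atv = sum(abs(v) for v in twist_vel) / len(twist_vel)  # Average Twisting Velocity (deg/s)
    return msf, mlv, atv
```

MM would be computed per frame from the distance between the hand-palm marker midpoint and the PSIS midpoint, multiplied by the load force; it is omitted here since the load weight is task-specific.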
2.3. Error analysis and validation
Fig. 4 shows all the potential errors generated in the DVID environment; four main errors exist. The first arises from differences between the actions human participants perform in the immersive VE and those they perform in the mockup environment. Because the immersive VE has limitations in view range and interaction feedback, and because participants are not familiar with this interaction environment, they tend to perform tasks with actions different from their usual ones. The second concerns the MOCAP: when human motions are captured, the captured motions may differ from the original ones due to the accuracy limitations of the MOCAP system. The third comes from the DHM algorithm that uses the captured marker motion to drive the kinematic model; it can arise in various ways, such as the definition and construction of the skeleton linkage model and the algorithms used to calculate the movements of different segments and rotation centers. This error is tied to the DHM used in the DVID methodology. The last potential error comes from the deviation of the output manikin motion from the animated human-machine interaction. Since the experimental environment of this study is the UGS/Siemens Jack software, the last two errors are related to the algorithm and the manikin motion capture tool provided by UGS/Siemens Jack.
Considering the above potential errors, Cappelli and Duffy's (2007) and Sutherland's (2007) studies have shown the validity and reliability of using MOCAP for this methodology. Wu et al.'s (2011) study showed some influence of human participants' performance in different environments on the ergonomics assessment results, yet the validity and reliability of an integrated VID environment were still demonstrated. Since the last two errors proposed above are related to the UGS/Siemens Jack software (Jack DHM errors and Jack motion capture tool errors) in this study, they are combined as errors from the experimental environment. Based on previous studies, hypotheses for a validation test include the following:
- In terms of the JRCM assessment output and detectable differences in risk classifications, human participants' actions are consistent in both the VE and the mockup;
- In terms of the JRCM assessment output and detectable differences in risk classifications, manikin motions and the animated human-machine interaction are consistent with those captured from real human participants.
For the second hypothesis, although improving or testing the validity and reliability of the DHM technology within the Jack software is out of the scope of this study, it is still necessary to test this hypothesis to fully understand the errors in the proposed DVID system.
The three tasks were Front Lift with 0 lb, Front Lift with 20 lb, and Side Lift. The Front Lift tasks did not involve any twisting; participants only lifted the specified weight from a standing stance. The Side Lift required participants to lift a 1 lb weight while twisting 90° from a standing position. In the mockup, participants used one box (weighing less than 0.1 lb) to carry the different weights; a Front Lift with 0 lb means they carried the empty box. The VE used in this experiment involved no physical boundary, and the only feedback available to participants was visual (in the VE, participants saw a virtual box attached to their hands, and color changes between the box and other objects indicated collisions). Thus, during trials in the VE, participants were instructed to imagine the weight they were carrying, while there was actually no physical box in their hands. The Front Lift with 0 lb task was intended to enable a direct comparison between movements in the physical mockup and the virtual mockup.
The environment included one long corner table with a two-level corner shelf on it. The virtual mockup is identical to the physical mockup in all dimensions. An eight-camera optical motion capture system by Motion Analysis Company was used in the experiment at a frequency of 60 Hz. One motion capture server was used to run the Motion Analysis software and collect motion data. The other (DHM) server was used to run the UGS/Siemens Jack software, including the digital human manikin and the virtual environment. A 5DT head-mounted display with a resolution of 800 × 600 was connected to the DHM server to present the VE and the simulated interaction to human participants.
3. Results

Data were collected to assess the validity of the VID environment. A total of 36 human participants were recruited through an advertisement at Mississippi State University. Participants ranged from 95th-percentile male to 5th-percentile female. All participants were screened for self-reported musculoskeletal disorder history and tendency toward motion sickness. The Institutional Review Board (IRB) of Mississippi State University reviewed and approved the study prior to data collection.
Each participant completed three different tasks twice in each environment (mockup and representative VE), meaning each participant performed 12 trials: six in the virtual environment and six in the mockup environment. The three tasks were the Front Lift with 0 lb (FL0), Front Lift with 20 lb (FL20), and Side Lift (SL) described above.
Table 2
Mean (standard deviation) JRCM risk classification values for each task and environment.

Task   Mockup             VE
FL0    0.8977 (0.0396)    0.8427 (0.0759)
FL20   0.7943 (0.102)     0.8246 (0.0733)
SL     0.4551 (0.195)     0.5256 (0.2964)

Mockup and VE refer to the two environments in which the human movements used to drive the human manikin are obtained; FL0, FL20, and SL refer to the three tasks.
From the table, we can observe the following:
1. The Side Lift task carries the most risk for participants. Although Front Lift with 20 lb carries more risk than Front Lift with 0 lb, the difference is not a practical one for healthy individuals, since a load of 20 lb (9 kg) is not very heavy for a normal adult. This trend is seen in both the mockup-based and VE-based analyses.
2. For the FL20 and SL tasks, the mockup-based analysis shows lower values (0.7943 and 0.4551), which imply higher risk for Front Lift with 20 lb in the box and Side Lift with 1 lb in the box. For the Front Lift with empty box (FL0) task, the result is the opposite. This may be because participants moved faster when carrying a heavier weight in the more familiar environment of the physical mockup. This result also suggests one potential limitation of our experiment in using a pure VE: force feedback is very difficult to apply when no physical boundary exists, and this absence may make it more difficult for participants to control their actions by only imagining the weights in hand.
3.1.2. Two-factor ANOVA test
In this experiment, mockup-based and VE-based analyses are used to analyze the motions of three tasks (Front Lift with empty box, Front Lift with 20 lb, and Side Lift with 1 lb) performed by 36 participants. A two-factor ANOVA test is applied to check the influence of two factors, analysis method and task, on risk. The results show that the interaction does not significantly affect the risk value (p = 0.053, df = 2, 210), while different tasks do significantly affect the calculated risk value (p < 0.0001, df = 2, 210). Using mockup-based or VE-based analysis does not significantly influence the calculated risk value (p = 0.479, df = 1, 210).
3.1.3. Modified paired t-test
Although the two-way ANOVA shows no significant difference between the VE-based and mockup-based analysis methods, the two analyses do produce different risk values. Clearly, the mean values of the two methods differ, and a directly applied paired t-test, comparing the calculated risk for the same task for each participant, shows that the outputs of the two methods are not equal. Since the main source of this error is the internal limitation of the VE used in our experimental environment, we suppose that the outputs of the two methods represent the same trend: beyond that internal limitation of the VE, the two methods can offer similar trends and analysis results. To test this hypothesis, the following steps were implemented:
1. Calculate the difference between the mean outputs of the two methods for each task using the trial 2 data;
2. If most of the difference between the outputs of the two methods is generated by the limitation of the VE, then for a specified task, the difference between the outputs of the two methods should be related in every trial;
For the Front Lift with empty box (FL0), in the trial 2 data the mean risk value for the mockup-based analysis is 0.9133 and the mean risk value for the VE-based analysis is 0.8549, so the difference is 0.05834. In the trial 1 data, 0.05834 is added to all output risk values of the VE-based analysis to obtain fixed VE-based outputs. A two-tailed t-test shows that the mean value of the mockup-based analysis in trial 1 is equal to the calculated fixed VE-based outputs, with a p-value of 0.8142. Similar results hold for the FL20 and SL tasks, with p-values of 0.3252 and 0.0738, respectively.
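This procedure amounts to shifting the trial-1 VE outputs by the trial-2 mean difference between methods and then applying an ordinary paired t-test. A sketch with hypothetical numbers (not the study's data), using the critical value 2.0301 for df = 35 that appears in Table 3:

```python
import math
from statistics import mean, stdev

T_CRIT_35 = 2.0301  # two-tailed 0.05 critical value for df = 35 (as in Table 3)

def paired_t(x, y):
    """Paired t statistic for two equal-length samples."""
    d = [a - b for a, b in zip(x, y)]
    return mean(d) / (stdev(d) / math.sqrt(len(d)))

def modified_paired_t(mockup_t1, ve_t1, mockup_t2, ve_t2):
    """Modified paired t-test from the text.

    The trial-2 mean difference between methods is treated as a fixed
    offset (the systematic VE limitation); it is added to the trial-1
    VE outputs, and a plain paired t-test then checks whether the
    'fixed' VE outputs match the mockup outputs.
    """
    offset = mean(mockup_t2) - mean(ve_t2)
    fixed_ve_t1 = [v + offset for v in ve_t1]
    return paired_t(mockup_t1, fixed_ve_t1)

# Hypothetical illustration: VE outputs sit roughly 0.06 below the
# mockup outputs, with small trial-to-trial noise.
mockup_t1 = [0.900, 0.920, 0.880, 0.910, 0.930, 0.890]
ve_t1     = [0.845, 0.865, 0.815, 0.855, 0.870, 0.825]
mockup_t2 = [0.910, 0.930, 0.900, 0.920, 0.900, 0.910]
ve_t2     = [0.850, 0.875, 0.840, 0.855, 0.845, 0.850]
t_fixed = modified_paired_t(mockup_t1, ve_t1, mockup_t2, ve_t2)
```

On data like this, the direct paired t-test rejects equality of the two methods, while the modified test does not, mirroring the pattern reported above.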
Thus, although the mockup-based and VE-based analyses offer different risk values, their outputs show similar trends.
3.2. DHM-based method vs. MOCAP-based method
In this section, the motions of real participants acting in the physical mockup environment are directly imported into JRCM to calculate the corresponding risk values; this is referred to as the MOCAP-based method. Since this kind of real motion is more accurate and realistic than DHM motions derived from simulated virtual interactions (i.e., the human manikin motions used in the VE-based and mockup-based analyses defined in the previous section), the MOCAP-based JRCM output is used as the standard.
3.2.1. Mean value
Fig. 5 shows the mean risk values calculated for the three different tasks, averaged across all participants. As defined in the previous section, Mockup and VE refer to the JRCM output based on the DHM manikin driven by captured human motions in the corresponding environment, and MOCAP refers to the JRCM output based directly on the motion of the human participant in the physical mockup. This result shows that (1) the MOCAP-based analysis yields the highest risk classification values for all three tasks, which means the DHM's motions contain more risk than the human participants' motions; (2) all three analyses show a similar trend, with the two front lift tasks carrying almost equally low risk and the side lift carrying much higher risk; and (3) only the VE-based analysis shows a similar result for the two front lifting tasks, which indicates the limitation of the VE with no force feedback.
Table 3
Modified paired t-test between MOCAP-based and mockup-based analysis results.

Task   Average mockup-based risk      Average MOCAP-based risk       Difference between the   t Stat for fixed trial-one risk      p-Value   t Critical
       classification value           classification values          average values of the    classification values between
       based on trial two             based on trial two             two methods              the two methods
FL0    0.9133                         0.9212                         0.00789                  2.7773                               0.0087*   2.0301
FL20   0.7915                         0.8418                         0.0503                   0.9922                               0.3279    2.0301
SL     0.5133                         0.5651                         0.05179                  2.4336                               0.0202*   2.0301

* Significant difference between the two methods at the 0.05 level.

Table 4
Test–retest reliability evaluation results.

                        FL0                FL20                 SL
Mockup-based analysis   t Stat: 0.8237     t Stat: 1.7221       t Stat: 2.5366 (b)
                        p-Value: 0.423     p-Value: 0.106 (a)   p-Value: 0.0228 (b)
VE-based analysis       t Stat: 0.6272     t Stat: 1.0047       t Stat: 1.4429
                        p-Value: 0.540     p-Value: 0.331       p-Value: 0.1696

(a) There is one outlier for this t-test. If the data are analyzed without the outlier, t Stat = 1.3132 and the p-value is 0.2102.
(b) Significant difference between the two trials at the 0.05 level.
For the FL20 task, both the mockup-based and VE-based analyses offer good reliability for their output risk values, but the test results show that the reliability is not as good as for the FL0 task. This may be caused by the increased load weight, which makes it harder for participants to keep the same actions across different trials. For the SL task, the mockup-based analysis output is significantly different between trials, although the VE-based analysis still outputs reliable risk values.
This result shows that as task complexity increases (a heavier load or more trunk rotation in different planes), the reliability of the outputs decreases. One reason is that participants change their actions during the second trial once they have practiced; another possible reason is that it is difficult to keep the features of two trials similar for complex tasks with regard to speed, acceleration, and joint angles. Also, for all the tasks, the VE-based analysis has better reliability than the mockup-based experiments, and the difference between the reliability of the two methods increases with task complexity. This is understandable because participants tend to pay more attention when performing their tasks in the VE, with much slower actions, which appears to help them finish the two trials in a consistent way.
One limitation of the current study is that, due to the difficulty of setting up and calibrating the motion capture system used in data collection, no training session was provided for participants to familiarize themselves with the experimental conditions. This may introduce a learning effect into the data collection and increase inconsistency during the experiment, which may affect the test–retest reliability.
4. Conclusion and discussion

4.1. Conclusion of validity and reliability test
In this paper, the reliability and validity of using VID for dynamic ergonomics analysis are examined through comparisons among three integration methods. VE-based analysis refers to the procedure that uses JRCM to analyze the movements of a human manikin (DHM) driven by participant motions captured in a VE. Mockup-based analysis refers to a similar procedure in which the participant motions are captured in a physical mockup environment. The MOCAP-based analysis method refers to the procedure of using JRCM to directly analyze participant motions captured in a physical mockup environment.
The validity test focuses on the potential errors within this integrated environment related to the VE limitation, the calculation of manikin motions from marker movements, and the dynamic analysis tool (JRCM). The results have shown several important points. Firstly, VE-based analysis is valid compared to mockup-based analysis when checking the mean risk values of all three tasks; this has also been supported by the ANOVA test. Also, although the direct paired t-test rejects the hypothesis that these two methods provide the same risk values, a modified t-test analysis showed that they are
Table 5
Average values of the JRCM dynamic features for each task, by capture source (mockup-driven manikin, VE-driven manikin, and direct MOCAP).

MM (N·m)    Mockup     VE         MOCAP
FL0         0          0          0
FL20        6.4820     5.9174     6.8989
SL          0.3175     0.2981     0.3338

MSF (°)     Mockup     VE         MOCAP
FL0         11.3452    33.5509    5.5159
FL20        20.0434    31.6753    7.2684
SL          11.6730    31.7371    2.0391

ATV (°/s)   Mockup     VE         MOCAP
FL0         3.9208     2.6451     3.8105
FL20        6.8912     2.5778     6.8501
SL          29.4983    20.2431    25.2742

MLV (°/s)   Mockup     VE         MOCAP
FL0         7.6031     10.2442    3.7997
FL20        15.1079    11.2543    8.9167
SL          32.7137    12.7969    26.4258
following the same trends for all tasks. This result partly suggests that conclusions about the limitation of the VE are significantly affected by the analysis procedure. Secondly, by comparison with the output risk levels generated by some static ergonomic analyses, this research demonstrates that dynamic analysis can be a valid and even better predictor of risk because it considers more motion features, especially for complicated movements. Finally, a comparison has been made between the same dynamic analysis based on manikin motions and the corresponding real human motions used to drive the manikin. Although the mean values show similar trends across tasks, the ANOVA and t-tests suggest that the errors generated by the DHM used in this research are statistically significant and difficult to remove. These results highlight the importance of improving the accuracy of manikin motions derived from marker movements in commercially available software.
Test–retest reliability has been investigated for both the VE-based and mockup-based analyses and has generally been demonstrated for all tasks. Two trends were found: (1) reliability decreases as task complexity increases, and (2) VE-based analysis has better reliability than mockup-based analysis. These results are further analyzed in the following section.
4.2. Discussion of the dynamic data underlying the statistical analysis
DHM-related errors have a significant effect on VID outputs. However, since the commercial DHM software used in this methodology is difficult to modify, and the algorithms for manikin movement calculation are out of the scope of this study, it is more important to understand the effect of the VE on human motions for different tasks. Direct calculation eliminates the errors introduced by adding the manikin-movement step to the assessment. JRCM analysis is based on the dynamic features of the human motions used to complete different tasks, including Maximum Moment (MM), Maximum Sagittal Flexion (MSF), Average Twisting Velocity (ATV), and Maximum Lateral Velocity (MLV). Table 5 shows the average values of these important dynamic features for the different tasks when captured in the physical mockup and the VE.
We can see trends in how different aspects of human motion are influenced by the VID process.
1. Maximum Moment (MM) values reflect the maximum distance of the object from the participant's body when lifting. Since the VE and physical mockup environments are identical, all three experimental environments generate similar values for MM when participants stand at the same distance from the table. This partly shows that the VID environment can reliably predict hand moving distances.
2. Another value that looks similar across all three experimental environments is Average Twisting Velocity (ATV): a smaller twisting speed during the front lifts and a larger value for the side lift. Also, for all three tasks, participants tend to twist more slowly within the VE.
References
Li, K., Duffy, V. G., & Zheng, L. (2006). Universal accessibility assessments through virtual interactive design. International Journal of Human Factors Modeling and Simulation, 1(1), 52–68.
Marras, W. S., Lavender, S. A., Leurgans, S. E., Rajulu, S. L., Allread, W. G., Fathallah, F. A., et al. (1993). The role of dynamic three-dimensional trunk motion in occupationally-related low back disorders. Spine, 18(5), 617–628.
Morrissey, M. (1998). Human-centric design. Mechanical Engineering, 120(7), 60–62.
Porter, J. M., Case, K., & Freer, M. T. (1999). Computer-aided design and human models. In W. Karwowski & W. S. Marras (Eds.), The occupational ergonomics handbook (pp. 479–500). Boca Raton: CRC Press.
Sutherland, J. A. (2007). Development of validation methods for dynamic ergonomic assessment tools of the lumbar spine. Master's thesis, Purdue University.
Wu, T., Tian, R., & Duffy, V. G. (2011). Performing ergonomics analyses through virtual interactive design: Validity and reliability assessment. Human Factors and Ergonomics in Manufacturing & Service Industries. doi:10.1002/hfm.20267.
Yang, J., & Pitarch, E. P. (2004). Kinematic human modeling. End-of-year technical report for project digital human modeling and virtual reality for FCS. The Virtual Soldier Research Program, University of Iowa.
Zhang, X., & Chaffin, D. B. (1996). Task effects on three-dimensional dynamic postures during seated reaching movements: An analysis method and illustration. In Proceedings of the 1996 40th annual meeting of the Human Factors and Ergonomics Society, Philadelphia, PA, Part 1 (Vol. 1, pp. 594–598).
Zhang, X., & Chaffin, D. B. (2000). A three-dimensional dynamic posture prediction model for simulating in-vehicle seated reaching movements: Development and validation. Ergonomics, 43, 1314–1330.
Zhang, X., & Chaffin, D. B. (2005). Digital human modeling for computer-aided ergonomics. In W. S. Marras & W. Karwowski (Eds.), The occupational ergonomics handbook (2nd ed.). Boca Raton: CRC Press.