Computers & Industrial Engineering 61 (2011) 1044-1052

Contents lists available at ScienceDirect

Computers & Industrial Engineering


journal homepage: www.elsevier.com/locate/caie

Computerized task risk assessment using digital human modeling based Job Risk Classification Model

Renran Tian a,*, Vincent G. Duffy a,b

a School of Industrial Engineering, Purdue University, 315 N. Grant Street, West Lafayette, IN 47907-2023, USA
b Department of Agricultural & Biological Engineering, Purdue University, 225 S. University Street, West Lafayette, IN 47907-2093, USA

* Corresponding author. Tel.: +1 765 337 7519.
E-mail addresses: rtian@purdue.edu (R. Tian), duffy@purdue.edu (V.G. Duffy).
doi:10.1016/j.cie.2011.06.018

article info

Article history:
Received 23 April 2010
Received in revised form 10 May 2011
Accepted 23 June 2011
Available online 8 July 2011
Keywords:
Digital human modeling
Computer-Aided Ergonomics
Virtual Interactive Design
Dynamic ergonomics analysis

abstract
Different from classical ergonomics analysis, the Virtual Interactive Design methodology relies on manikin movements in a virtual environment. This platform has been implemented by researchers to accomplish static ergonomic analyses including Rapid Upper Limb Assessment, the NIOSH lifting equation and Static Strength Prediction. However, considering only static posture information limits the capacity of ergonomics analysis. In this study, a methodology for performing dynamic ergonomics analysis based on this Virtual Interactive Design platform is proposed. This environment allows velocity and angular velocity of specified body segments/joints to be calculated for designed tasks and used to assess the corresponding risk levels based on the Job Risk Classification Model. The motion calculation is based on the captured interaction between human participants and a virtual workplace/mockup. To evaluate the validity and reliability of this upgraded platform, potential errors are analyzed by comparing outputs from several designed experimental conditions.
© 2011 Elsevier Ltd. All rights reserved.

1. Introduction

1.1. Digital human modeling and motion generation methods

With the development of computer technology and changing demands, ergonomics has received greater assistance from computer-related technologies over the last two decades. This situation has expedited the development of Computer-Aided Ergonomics & Safety (CAES). As pointed out by Feyen, Liu, Chaffin, Jimmerson, and Joseph (2000), engineers can use computer-aided techniques to evaluate the performance of human operators in a workplace design, which allows ergonomics information from several sources to be examined before an actual job is implemented. Contemporary research shows that ergonomic analyses based on human motion generate more accurate results than those based solely on postures. Without consideration of the dynamic aspects of a task, in addition to the initial and final positions that describe its static aspect, potential risk may be underestimated by as much as 40% (Feyen et al., 2000). Thus, using captured human motions as the source to drive the virtual interaction between a digital human manikin and a virtual work environment, and applying dynamic ergonomics analysis models such as the Job Risk Classification Model to the motions obtained from such a virtual interaction, provide a promising way to further improve current CAES systems.

As Zhang and Chaffin (2005) described, the core of digital human modeling (DHM) and simulation is a model: a biomechanical representation of a human body along with the computational algorithms that configure or drive the representation to produce postures or motions. Despite the many benefits of DHM, including easier and earlier identification of ergonomics problems, lessening or sometimes even eliminating the need for physical mock-ups and real human participant testing, and proactive ergonomics analysis (Badler, Phillips, & Webber, 1993; Morrissey, 1998; Porter, Case, & Freer, 1999; Zhang & Chaffin, 2000), Chaffin (2001) pointed out the insufficient ability of existing DHM to predict the position, posture and motion of a person in most task conditions. Two approaches are commonly used for posture and motion prediction: empirical statistical modeling and inverse kinematics solutions (Yang & Pitarch, 2004). The first method is based on captured real human motions and uses statistical methods, such as regression, to calculate the most probable posture. Such models include Zhang and Chaffin (1996) and Faraway, Chaffin, Woolley, Wang, and Park (1999). The second method mostly uses static optimization and inverse kinematics to solve a discrete posture determination problem (Zhang & Chaffin, 2005). Some optimization-based human-figure positioning algorithms for computer animation belong to this class. Although great progress has been achieved with these two approaches, there are still many limitations, including the inability to simulate key characteristics of real human motions such as smooth velocity and acceleration, and the challenge of kinematic redundancy.


Limitations of posture and motion prediction in various environments limit the animation of actual human-machine interaction as well as DHM-based ergonomics analysis.

1.2. Virtual Interactive Design methodology


Virtual Interactive Design (VID) was introduced by Li, Duffy, and Zheng (2006) to examine seated reaching behavior for accessing an ATM under limited-mobility task conditions. Different from previous studies, which rely on computational algorithms to drive or configure the kinematic human model, VID focuses on integrating motion capture systems (MOCAP) with the visualization part of DHM so that the DHM is driven by actual human motions. By connecting the DHM with different static ergonomics analysis tools, this method enables real-time ergonomic analysis to justify modifications to human-machine interface designs. Wu, Tian, and Duffy (2011) examined the validity and reliability of the static VID methodology by comparing VID output based on MOCAP with traditional DHM-based ergonomics analysis results, and they concluded that static VID is valid and reliable in many cases. Du and Duffy (2007) applied static VID in industry to assess workstation redesign. These published studies using the VID method are based on static postures. The benefit of further developing VID can be considered in the following two points:

• VID can help in situations where human posture and motion prediction models do not work well. Although VID still requires human participation, it can eliminate the need for constructing a physical prototype, expediting the design and redesign process.
• VID is based on reliable human motions, and it drives kinematic models for ergonomics analysis, so it serves the experimental environment by facilitating the validation of various human posture and motion prediction models in different task conditions.
1.3. Job Risk Classification Model

The VID platform provides a convenient way to use motion data for job design. Use of motion capture for dynamic ergonomics analysis was previously shown (Cappelli & Duffy, 2007; Sutherland, 2007). This study demonstrates a dynamic virtual interactive design (DVID) methodology which uses the motion-capture-driven digital human manikin to support risk assessment based on the Job Risk Classification Model.

The Job Risk Classification Model (JRCM) was proposed by Marras et al. (1993). It is a multiple regression model which can discriminate between high- and low-risk lifting tasks. The model mainly considers trunk motion and related information during lift actions, including Maximum Moment (MM), Maximum Sagittal Flexion (MSF), Average Twisting Velocity (ATV), and Maximum Lateral Velocity (MLV). JRCM has been shown to work well with MOCAP output (Cappelli & Duffy, 2007). Results from JRCM using a Lumbar Motion Monitor, as shown by Marras, are compared to MOCAP in Sutherland (2007). The parameters and corresponding coefficients used are shown in Table 1, based on Marras et al. (1993).
For each task, a risk value is calculated using the formula below:

$R = 3.8 - 0.0014 \times LR - 0.024 \times MM - 0.02 \times MSF - 0.061 \times ATV - 0.026 \times MLV$

An estimated logistic probability can be calculated to normalize the risk value into the [0, 1] interval and to remove negative values, which makes the results more comparable and more easily understood (Cappelli, 2006a; Cappelli, 2006b):

$\hat{R} = \dfrac{e^{R}}{1 + e^{R}}$

A greater number represents less danger for a task.

Table 1
JRCM^a parameters and coefficients (Marras et al., 1993).

Parameter                          Coefficient
Constant                           3.80
Lift Rate (LR)                     0.0014
Maximum Moment (MM)                0.024
Maximum Sagittal Flexion (MSF)     0.020
Average Twisting Velocity (ATV)    0.061
Maximum Lateral Velocity (MLV)     0.026

a JRCM refers to Job Risk Classification Model (Marras et al., 1993).
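As an illustration of how this scoring can be automated, the sketch below computes the raw and normalized JRCM values from the five inputs. It follows the sign convention of the equation above (risk-increasing inputs reduce the normalized value, so that a greater result means less danger); the function and variable names are illustrative rather than taken from the authors' implementation.

```python
import math

# Table 1 coefficients (Marras et al., 1993), applied with the sign convention
# of the equation above: a larger normalized value means a lower-risk task.
JRCM_CONSTANT = 3.8
JRCM_COEFFS = {"LR": 0.0014, "MM": 0.024, "MSF": 0.020, "ATV": 0.061, "MLV": 0.026}

def jrcm_risk(lr, mm, msf, atv, mlv):
    """Raw JRCM risk value R for one lifting task."""
    return (JRCM_CONSTANT
            - JRCM_COEFFS["LR"] * lr     # lift rate
            - JRCM_COEFFS["MM"] * mm     # maximum moment
            - JRCM_COEFFS["MSF"] * msf   # maximum sagittal flexion
            - JRCM_COEFFS["ATV"] * atv   # average twisting velocity
            - JRCM_COEFFS["MLV"] * mlv)  # maximum lateral velocity

def jrcm_normalized(lr, mm, msf, atv, mlv):
    """Logistic normalization of R into [0, 1]; greater value = less danger."""
    r = jrcm_risk(lr, mm, msf, atv, mlv)
    return math.exp(r) / (1.0 + math.exp(r))

# Hypothetical inputs, loosely in the range of the trunk features reported later.
print(round(jrcm_normalized(lr=2.0, mm=6.5, msf=20.0, atv=6.9, mlv=15.1), 3))
```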


In this paper, we first describe the structure of the DVID system, which integrates MOCAP, DHM, the virtual environment, and JRCM. The working process and underlying calculation algorithms are also demonstrated. Then, in order to evaluate the proposed integrated methodology, different error sources are analyzed, including checking the influence of the virtual environment on human motion, analyzing the internal calculation errors of the commercial digital human model software involved, and calculating test-retest reliability. Based on the error analysis, an experiment is designed for data collection and statistical analyses. The validity and reliability of the constructed environment are partially confirmed, and limitations related to the virtual environment and the human manikin are illustrated.
2. Method
2.1. Dynamic virtual interactive design structure
DVID is the platform used to support dynamic ergonomics analysis. As in VID, captured human motions are again the source for driving the kinematic model to animate the human-machine interaction. However, since the dynamic motion information is of interest, (1) not only postures but also velocity information is derived from the captured motion data, and (2) the dynamic ergonomics assessment model, JRCM in this study, is integrated into the analysis.
Fig. 1 shows the integrated environment for the DVID methodology constructed in this study. Two separate servers are used. In the DHM & Virtual Environment Analysis server, a Virtual Environment (VE) identical to the tested mockup is constructed, and the DHM is inserted into the VE to visualize the whole environment using UGS/Siemens Jack software. The VE can be viewed through a head-mounted display to create the immersive VE, and any collisions between or interactions with virtual objects are represented by color changes on the surfaces. Human participants wearing the head-mounted display were asked to wear a suit covered with 34 reflective markers. Different tasks can be performed in such environments while participants interact with objects in the immersive VE. A total of eight cameras were set up to capture the movements of the markers, which were input to a second Motion Capture server and analyzed to generate the captured human motions; one motion capture interface shows the captured marker movements. The captured human motions are fed back into the DHM & Ergonomics Analysis server to update the DHM-VE interaction as the participant moves. First, the real-time captured motions are used to drive the movement of the DHM in the virtual environment to animate the human-machine interaction. The movement of the DHM and the changes of the virtual environment can be inspected through the head-mounted display to update the participants' view of the immersive virtual environment. After the animated human-machine interaction is achieved, the dynamic ergonomics analysis can be applied, also in the DHM & Ergonomics Analysis server.


Fig. 1. DVID (dynamic virtual interactive design) integrated environment.

2.2. Dynamic information analyzer


Compared to prior VID studies, the unique contribution of this study is the part that analyzes the dynamic aspects of the human-machine interaction in motion-capture-based VID. Fig. 2 shows the workflow for this part. Generally, four steps are included:
1. Motion Data Loader: for the animated human-machine interaction, position data of the DHM are obtained through virtual markers attached to the manikin, and initial data modifications are performed.
2. Information Analyzer: angular velocity and acceleration for the segments of interest in the kinematic model are calculated based on the position data, and a data smoothing filter is applied.
3. Dynamic Ergonomics Analysis Model: in this study, Marras's JRCM is applied based on the position, angular velocity and acceleration data obtained in the previous step.
4. Risk Results Average: an automated tool averages the ergonomics assessment results over different trials and different participants. These averages are compared among tasks to obtain the final task risk assessment results.
The essential step in this process is to calculate the necessary position and angular velocity data for the JRCM equation. Before this step, missing frames in all trials are interpolated assuming a linear relationship. The starting point and ending point of each movement cycle are determined by searching for static postures lasting several seconds before and after each task. Another important tool used in this experiment is the data filter used to improve data quality. A seven-point smoothing routine is used to smooth the data through the following formula.

For every seven consecutive position data points $X = [x_1, x_2, \ldots, x_7]$, assume $X \sim N(\mu, \sigma^2)$ and let $F(x)$ be the normal distribution function for the specified mean ($\mu$) and standard deviation ($\sigma$). Then

$\tilde{x}_4 = \dfrac{F(x_1)x_1 + F(x_2)x_2 + F(x_3)x_3 + F(x_4)x_4 + (1 - F(x_5))x_5 + (1 - F(x_6))x_6 + (1 - F(x_7))x_7}{F(x_1) + F(x_2) + F(x_3) + F(x_4) + (1 - F(x_5)) + (1 - F(x_6)) + (1 - F(x_7))}$

For the loaded position data, one cycle of data smoothing is applied, and the smoothed position data are used to calculate translations. The translation data are smoothed twice, and the smoothed translation data are then used to calculate velocities. The velocity data are smoothed twice before calculating accelerations.
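A minimal sketch of this preprocessing chain is given below, assuming 60 Hz marker data and using SciPy's normal CDF for the weighting. The gap-filling, the handling of the trace ends, and the exact smoothing schedule are simplifications of the procedure described above, and all names are illustrative.

```python
import numpy as np
from scipy.stats import norm

def fill_gaps(x):
    """Linearly interpolate missing frames (NaN) in a 1-D coordinate trace."""
    x = np.asarray(x, dtype=float)
    idx = np.arange(len(x))
    ok = ~np.isnan(x)
    return np.interp(idx, idx[ok], x[ok])

def seven_point_smooth(x):
    """Seven-point smoother: each interior sample is replaced by the
    normal-CDF-weighted average of its seven-sample neighbourhood."""
    x = np.asarray(x, dtype=float)
    y = x.copy()
    for i in range(3, len(x) - 3):
        w = x[i - 3:i + 4]
        scale = w.std(ddof=1)
        F = norm.cdf(w, loc=w.mean(), scale=scale if scale > 0 else 1.0)
        weights = np.concatenate([F[:4], 1.0 - F[4:]])
        y[i] = np.sum(weights * w) / np.sum(weights)
    return y

def smoothed_kinematics(position, dt=1.0 / 60.0):
    """Smooth positions once, translations twice before taking velocities,
    and velocities twice before taking accelerations, as described above."""
    pos = seven_point_smooth(fill_gaps(position))
    translation = seven_point_smooth(seven_point_smooth(np.diff(pos)))
    velocity = translation / dt
    acceleration = np.diff(seven_point_smooth(seven_point_smooth(velocity))) / dt
    return pos, velocity, acceleration
```

In the study itself this filtering is applied to the Jack virtual-marker trajectories before the trunk-based JRCM inputs are extracted.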
The calculation of each input for JRCM is then determined based on the UGS/Siemens Jack software marker definitions. Of the five inputs for JRCM, the lifting rate is determined by the task itself; the other four inputs are based on the participants' trunk actions: Maximum Moment (MM), Maximum Sagittal Flexion (MSF), Average Twisting Velocity (ATV), and Maximum Lateral Velocity (MLV). Here, the three reference planes of conventional anthropometry are used. The sagittal plane divides the body into left and right halves, and MSF is the maximum forward bending of the participant during the task action. The coronal plane divides the body into front and rear halves, and MLV is the maximum side-bending angular velocity in the coronal plane.


Fig. 2. Dynamic information analysis process.

The transverse plane is perpendicular to the other two planes, and ATV is the average twisting angular velocity in this plane. In order to use the UGS/Siemens Jack markers to calculate the dynamic information, the following algorithm is applied (a minimal code sketch is given after the list); all the markers are shown in Fig. 3.

• Since JRCM focuses on trunk motion, it is important to define the two ends of the trunk. In the UGS/Siemens Jack marker set, PSIS_R and PSIS_L are two markers at the rear of the waist at the same height, and their center point can be used to represent the bottom end of the trunk; NECK_BASE_REAR can be used to represent the top end of the trunk. The calculation of MSF and MLV is based on the changes in the position data of these three markers.
• For calculating the ATV, two other markers are used, Acromion_L and Acromion_R, which are attached to the left and right shoulders. Because participants were asked to rotate only from the waist without moving their legs, the twisting velocity of the trunk can be analyzed by checking the changes in their position data.
• For calculating the MM, the distance between the load and the trunk is needed, so the virtual markers Left_Hand_Palm and Right_Hand_Palm are used to determine the position of the load. PSIS_R and PSIS_L are still used to determine the position of the trunk for calculating this distance.
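The sketch below illustrates one way to turn these marker trajectories into the trunk-based JRCM inputs. The axis convention (x forward, y to the participant's left, z up), the use of simple frame-to-frame differences, and the function names are assumptions for illustration only; the study derives these quantities inside the Jack environment after the smoothing described above.

```python
import numpy as np

def trunk_angles(psis_r, psis_l, neck_base, acr_l, acr_r):
    """Frame-wise trunk angles (degrees) from (n_frames, 3) marker arrays.
    Assumed axes: x forward, y to the participant's left, z up."""
    pelvis = 0.5 * (psis_r + psis_l)       # bottom end of the trunk
    trunk = neck_base - pelvis             # trunk vector, pelvis -> neck base
    sagittal = np.degrees(np.arctan2(trunk[:, 0], trunk[:, 2]))   # forward flexion
    lateral = np.degrees(np.arctan2(trunk[:, 1], trunk[:, 2]))    # side bending
    shoulder = acr_l - acr_r               # shoulder line for axial twist
    twist = np.degrees(np.arctan2(shoulder[:, 0], shoulder[:, 1]))
    return sagittal, lateral, twist

def trunk_jrcm_inputs(sagittal, lateral, twist, dt=1.0 / 60.0):
    """MSF (deg), MLV (deg/s) and ATV (deg/s) as defined in the text."""
    msf = float(np.max(sagittal))
    mlv = float(np.max(np.abs(np.diff(lateral)) / dt))
    atv = float(np.mean(np.abs(np.diff(twist)) / dt))
    return msf, mlv, atv

# MM would additionally use the hand-palm markers and the load weight to form a
# moment about the PSIS midpoint; that step is omitted here.
```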
2.3. Error analysis and validation

Fig. 4 shows the potential errors generated in the DVID environment. There are mainly four different errors. The first comes from the different actions performed by human participants in the immersive VE compared with those in the mockup environment: since the immersive VE has limitations in viewing range and interaction feedback, and since participants are not familiar with this interaction environment, they tend to perform the tasks with actions that differ from their usual ones.
Fig. 3. Rear view of marker placement.

tasks from their common actions. The second one is about the MOCAP, since when human motions are captured by MOCAP, the captured motions may be different from the original ones due to the
accuracy limitation of MOCAP. The third one comes from the algorithm of DHM when trying to use the captured marker motion to
drive the kinematic model. This can come from various ways like
the denition and construction of skeleton linkage model, the algorithm used to calculate movements of different segments and rotation centers, and so on. This error is related to the DHM used in
DVID methodology. The last potential error comes from the deviation of outputted manikin motion from the animated humanmachine interaction. Since the experimental environment of this

1048

R. Tian, V.G. Duffy / Computers & Industrial Engineering 61 (2011) 10441052

study is UGS/Siemens Jack software, the last two errors are related to the algorithm and manikin motion capture tool provided
by UGS/Siemens Jack software.
Considering the above potential errors, the studies of Cappelli and Duffy (2007) and Sutherland (2007) have shown the validity and reliability of using MOCAP for this methodology. The study of Wu et al. (2011) showed some influence of the human participants' performance in different environments on the ergonomics assessment results, yet the validity and reliability of an integrated VID environment were still demonstrated. Since the last two errors proposed above are related to the UGS/Siemens Jack software (Jack DHM errors and Jack motion capture tool errors) in this study, they are combined as errors from the experimental environment. Based on previous studies, the hypotheses for a validation test include the following:

• In terms of the assessment output of JRCM results and detectable differences in risk classifications, human participants' actions are consistent in both the VE and the mockup;
• In terms of the assessment output of JRCM results and detectable differences in risk classifications, manikin motions and the animated human-machine interaction are consistent with those captured from real human participants.

For the second hypothesis, although improving or testing the validity and reliability of the DHM technology within the Jack software is out of the scope of this study, it is still necessary to test this hypothesis to fully understand the errors in the proposed DVID system.

2.4. Data collection

Data were collected in order to assess the validity of the VID environment. A total of 36 human participants were recruited through an advertisement at Mississippi State University. Participants ranged from 95th-percentile male to 5th-percentile female. All of the participants were screened for self-reported musculoskeletal disorder history and tendency toward motion sickness. The Institutional Review Board (IRB) of Mississippi State University reviewed and approved the study prior to data collection.

Each participant completed three different tasks twice in each environment (mockup and representative VE), which means that each participant performed 12 trials, six in the virtual environment and six in the mockup environment. The three tasks were Front Lift with 0 lb, Front Lift with 20 lb and Side Lift. The Front Lift tasks did not involve any twisting and only required participants to lift a specified weight from a standing stance, whereas the Side Lift required participants to lift a 1 lb weight with a 90° twist while standing. In the mockup, participants used one box (weighing less than 0.1 lb) to carry the different weights, so a Front Lift with 0 lb means they were carrying the empty box. The VE used in this experiment involved no physical boundary, and the only feedback available to participants was visual (in the VE, participants saw a virtual box attached to their hands, and color changes between the box and other objects indicated collisions). Thus, during trials in the VE, participants were instructed to imagine the weight they were carrying while there was actually no physical box in their hands. The task of Front Lift with 0 lb was intended to enable a direct comparison between movements in the physical mockup and the virtual mockup.

The environment included one long corner table with a two-level corner shelf on it. The virtual mockup is identical to the physical mockup in all dimensions. An eight-camera optical motion capture system by Motion Analysis Company was used in the experiment at a sampling frequency of 60 Hz. One Motion Capture server was used to run the Motion Analysis software and collect motion data. The other (DHM) server was used to run the UGS/Siemens Jack software, including the digital human manikin and the virtual environment. A 5DT head-mounted display with a resolution of 800 × 600 was connected to the DHM server to present the VE and the simulated interaction to the human participants.

3. Result

3.1. Mockup-based method vs. VE-based method
The VE-based analysis refers to the analysis procedure that uses JRCM to analyze movements of the digital human manikin driven by motions captured in the virtual environment, and the mockup-based analysis refers to the analysis procedure that uses JRCM to analyze movements of the DHM driven by motions captured in the physical mockup. Data from three different tasks, Front Lift with empty box (FL0), Front Lift with 20 lb (FL20), and Side Lift with 1 lb (SL), are used.
3.1.1. Mean value
Table 2 shows the mean risk value across all participants for each trial of the three different kinds of tasks.
Fig. 4. Potential errors in DVID environment.



Table 2
Comparison of general mean risk classification value and standard deviation across 32 subjects for two experimental conditions.^a

        Mockup            VE
FL0     0.8977 (0.0396)   0.8427 (0.0759)
FL20    0.7943 (0.102)    0.8246 (0.0733)
SL      0.4551 (0.195)    0.5256 (0.2964)

a Lower risk classification value means higher risk.

Physical Mockup and VE refer to the two environments in which the human movements used to drive the human manikin were captured; FL0, FL20, and SL refer to the three tasks. From the table, we can observe the following:
1. The side lifting task carries the most risk for participants, and although front lifting with 20 lb carries more risk than front lifting with 0 lb, the difference is not practically meaningful for healthy individuals; a load of 20 lb (9 kg) is not very heavy for a normal adult. This trend can be seen in both the mockup-based and the VE-based analysis.
2. For the FL20 and SL tasks, the mockup-based analysis shows lower values (0.7943 and 0.4551), which implies higher risk for Front Lift with 20 lb in the box and Side Lift with 1 lb in the box, while for the Front Lift with empty box (FL0) task the result is the opposite. This may be because participants moved faster when carrying a heavier weight in the more familiar environment of the physical mockup. This result also suggests a potential limitation of our experiment concerning the use of a pure VE: force feedback is very difficult to apply if no physical boundary exists, and this absence may make it more difficult for participants to control their actions by only imagining the weights in their hands.
3.1.2. Two-factor ANOVA test
In this experiment, mockup-based analysis and VE-based analysis are used to analyze motions of three tasks (Front Lift with empty box, Front Lift with 20 lb, and Side Lift with 1 lb) performed by 36 participants. A two-factor ANOVA test is applied to check the influence of the two factors, analysis method and task, on the risk value. The results show that the interaction does not significantly affect the risk value (p = 0.053, df = 2, 210), while task does significantly affect the calculated risk value (p < 0.0001, df = 2, 210). Using mockup-based analysis or VE-based analysis does not significantly influence the calculated risk value (p = 0.479, df = 1, 210).
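A sketch of how such a two-factor ANOVA could be run on the per-participant risk values is shown below, using statsmodels; the data file and column names are placeholders, not the authors' actual data layout.

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

# One normalized JRCM risk value per participant, analysis method (mockup/VE)
# and task (FL0/FL20/SL); file and column names are hypothetical.
df = pd.read_csv("jrcm_risk_values.csv")

model = ols("risk ~ C(method) * C(task)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))  # main effects of method and task, plus interaction
```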
3.1.3. Modified paired t-test
Although the two-way ANOVA shows no significant difference between the VE-based and mockup-based analysis methods, the two analyses do produce different risk values. Clearly, the mean values of the two methods differ, and a directly applied paired t-test comparing the calculated risk for the same task for each participant shows that the outputs of the two methods are not equal. Since the main source of this error is the internal limitation of the VE used in our experimental environment, we suppose that the outputs of the two methods represent the same trend; beyond that internal limitation of the VE, the two methods can offer similar trends and analysis results. In order to test this hypothesis, the following steps were implemented:
1. Calculate the difference between the mean outputs of the two methods for each task using the data of trial 2.
2. If most of the difference between the outputs of these two methods is generated by the limitation of the VE, then for a specified task the difference between the outputs of the two methods should be similar in every trial.
3. Use the difference calculated from the data of trial 2 to fix the results of the VE-based dynamic analysis of trial 1, and assume that the fixed VE-based dynamic analysis outputs (trial 1) should be equal to the mockup-based dynamic analysis outputs (trial 1). If so, we can show that these two methods produce outputs following the same trend; in other words, the difference has a fixed reference point.

For the Front Lift with empty box (FL0), the mean risk value in the data of trial 2 is 0.9133 for the mockup-based analysis and 0.8549 for the VE-based analysis, so the difference is 0.05834. In the data of trial 1, 0.05834 is added to all output risk values of the VE-based analysis to obtain the fixed VE-based outputs. A two-tail t-test shows that the mean value of the mockup-based analysis in trial 1 is equal to the fixed VE-based outputs (p = 0.8142). Similar results hold for the FL20 and SL tasks, with p = 0.3252 and p = 0.0738, respectively.
Thus, although the mockup-based and VE-based analyses produce different risk values, their outputs show similar trends.
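The fixing step described above can be expressed compactly as follows. Whether the comparison on the fixed trial-1 values is a paired or an independent two-tail t-test is not stated explicitly, so the paired version used here is an assumption, and all names are illustrative.

```python
import numpy as np
from scipy.stats import ttest_rel

def modified_paired_ttest(mockup_t1, ve_t1, mockup_t2, ve_t2):
    """Shift trial-1 VE risk values by the trial-2 mean difference between the
    two methods, then test whether the fixed VE values equal the mockup values."""
    offset = np.mean(mockup_t2) - np.mean(ve_t2)        # e.g. 0.05834 for FL0
    fixed_ve_t1 = np.asarray(ve_t1, dtype=float) + offset
    return ttest_rel(np.asarray(mockup_t1, dtype=float), fixed_ve_t1)
```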
3.2. DHM-based method vs. MOCAP-based method

In this section, motions of real participants acting in the physical mockup environment are directly imported into JRCM to calculate the corresponding risk values, which is referred to as the MOCAP-based method. Since these real motions are more accurate and realistic than the DHM motions derived from simulated virtual interactions (such as the human manikin motions used in the VE-based and mockup-based analyses defined in the previous section), the MOCAP-based JRCM output is used as the standard.
3.2.1. Mean value
Fig. 5 shows the mean risk values calculated for the three different tasks, averaged across all participants. As defined in the previous section, Mockup and VE refer to the JRCM output based on the DHM manikin driven by human motions captured in the corresponding environment, and MOCAP refers to the JRCM output based directly on the motion of the human participant in the physical mockup. This result shows that (1) the MOCAP-based analysis offers the highest risk classification values for all three tasks, which means the DHM's motions contain more risk compared to the human participants' motions, (2) all three analyses show a similar trend in which the two front lift tasks are of almost equally low risk while the side lift has a much higher risk, and (3) only the VE-based analysis shows a similar result for the two front lifting tasks, which indicates the limitation of the VE with no force feedback.

Fig. 5. Mean risk values comparison between MOCAP-based and DHM-based ergonomics analysis.


Table 3
Modified paired t-test between MOCAP-based and mockup-based analyses results.^a

                                                                     FL0       FL20      SL
Average mockup-based risk classification value based on trial two   0.9133    0.7915    0.5133
Average MOCAP-based risk classification value based on trial two    0.9212    0.8418    0.5651
Difference between average values of the two methods                0.00789   0.0503    0.05179
t Stat for fixed trial-one risk classification values                2.7773    0.9922    2.4336
p-Value                                                              0.0087*   0.3279    0.0202*
t Critical                                                           2.0301    2.0301    2.0301

a Higher risk classification value means lower risk.
* Means significant difference at the 0.05 level.

3.2.2. Two-factor ANOVA test

In this experiment, MOCAP-based analysis, mockup-based analysis and VE-based analysis are used to analyze motions of the three tasks (FL0, FL20 and SL) performed by 36 participants. A 3 × 3 ANOVA test is applied to check the influence of the two factors, analysis method and task, on the risk value. The results show that the interaction does not significantly affect the risk value (p = 0.0759, df = 4, 315), different tasks significantly affect the calculated risk value (p < 0.001, df = 2, 315), and different analysis methods also significantly influence the calculated risk value (p < 0.001, df = 2, 315). Compared to the previous ANOVA test comparing mockup-based and VE-based analyses, the MOCAP-based analysis shows significantly different risk values from the other two. This result shows that the DHM used in this experiment, which transfers marker movements into manikin motions, introduces significant errors.
3.2.3. Modified paired t-test
The same modified paired t-test has been applied as described in the section above. In order to eliminate the influence of the VE and focus only on the effect of the DHM calculation, this t-test is limited to comparing the MOCAP-based and mockup-based analyses. Table 3 shows the results for all three tasks.
This result shows that even after fixing the mockup-based results, there is still a significant difference between the outputs of these two analysis methods except for the FL20 task. This implies that errors from the DHM calculation within the commercial software are statistically significant and cannot be explained clearly. However, although statistical analysis is very important, the result does not always significantly weaken the practical meaning. For this specific analysis, as shown in Fig. 5, both MOCAP-based and mockup-based analyses can discriminate risk levels for the three tasks, with similar risk differences found among them, which means that both have practical potential for ergonomics analysis. This is also partly supported by the results shown in Table 3: the differences between the outputs of these two methods for all tasks are less than one tenth of the output risk values.
3.3. Reliability test

To evaluate the test-retest reliability of the VID environment for dynamic ergonomics analysis, risk values calculated from the mockup-based and VE-based analyses for all tasks are compared between the two trials for all participants. Table 4 shows the results.
Table 4
Test-retest reliability evaluation results.

                         FL0                  FL20                  SL
Mockup-based analysis    t Stat: 0.8237       t Stat: 1.7221        t Stat: 2.5366^b
                         p-Value: 0.423       p-Value: 0.106^a      p-Value: 0.0228^b
VE-based analysis        t Stat: 0.6272       t Stat: 1.0047        t Stat: 1.4429
                         p-Value: 0.540       p-Value: 0.331        p-Value: 0.1696

a There is one outlier for this t-test. If the data are analyzed without the outlier, t Stat = 1.3132 and the p-value is 0.2102.
b Significant difference between the two trials at the 0.05 level.

For the task of Front Lift with empty box (FL0), both mockup-based analysis and VE-based analysis of the dynamic aspects of the task offer very good reliability in their output results. This may be due to the simplicity of the FL0 task for adults: they can keep similar actions without much practice in the first trial. For the task of Front Lift with 20 lb (FL20), both mockup-based analysis and VE-based analysis offer good reliability for their output risk values, but the test results show that the reliability is not as good as that for the FL0 task. This may be caused by the increase in load weight, which makes it harder for participants to keep the same actions across trials. For the task of SL, the mockup-based analysis output is significantly different between trials, although the VE-based analysis still outputs reliable risk values.
This result shows that with the increase of task complexity (increase of load weight or more trunk rotations in different planes), the reliability of the outputs decreases. One reason is that participants change their actions during the second trial after having practiced once; another possible reason is that it is difficult to keep similar features across two trials of a complex task with regard to speed, acceleration and joint angles. Also, for all tasks, the VE-based analysis has better reliability than the mockup-based experiments, and the difference between the reliability of the two methods increases with task complexity. This is understandable because participants tend to pay more attention when performing their tasks in the VE, with a much slower action speed, which appears to help them finish the two trials in a consistent way.
One limitation of the current study is that, due to the difficulty of setting up and calibrating the motion capture system used in data collection, a training session was not provided for participants to familiarize themselves with the experimental condition. This may introduce a learning effect into the data collection and increase inconsistency during the experiment; the test-retest reliability may be affected as a result.
4. Conclusion and discussion

4.1. Conclusion of validity and reliability test

In this paper, the reliability and validity of using VID for dynamic ergonomics analysis are examined through comparisons among three integrated methods: VE-based analysis refers to the analysis procedure that uses JRCM to analyze movements of a human manikin (DHM) driven by participant motions captured in a VE. Mockup-based analysis refers to a similar procedure in which the participant motions are captured in a physical mockup environment. The MOCAP-based analysis method refers to the procedure of using JRCM to analyze participant motions captured in a physical mockup environment directly.

The validity test focuses on the potential errors within this integrated environment related to the VE limitation, the calculation of manikin motions based on marker movements, and the dynamic analysis tool used (JRCM). The results have shown several important points. Firstly, VE-based analysis is valid when compared to mockup-based analysis by checking the means of the risk values for all three tasks. This has also been supported by the ANOVA test. Also, although a direct paired t-test rejects the hypothesis that these two methods provide the same risk values, a modified t-test analysis showed that they follow the same trends for all tasks.




This result partly suggests that a conclusion about the limitation of the VE is significantly affected by the analysis procedure. Secondly, by comparing to the output risk levels generated from some static ergonomic analyses, this research demonstrates that dynamic analysis can be a valid and even better prediction of risk because it considers more motion features, especially for complicated movements. Finally, a comparison has been made between the same dynamic analysis based on manikin motions and on the corresponding real human motions used to drive the manikin. Although the mean values show a similar trend across tasks, the ANOVA and t-tests suggest that the errors generated by the DHM used in this research are statistically significant and difficult to remove. These results suggest the importance of improving the accuracy of manikin motions in commercially available software that may be based on marker movements.
Test-retest reliability has been investigated for both VE-based and mockup-based analyses and generally has been demonstrated for all tasks. Two trends have been found: (1) reliability decreases with the increase of task complexity and (2) VE-based analysis has better reliability than mockup-based analysis. These results are further analyzed in the following section.
4.2. Discussion of dynamic data under the statistical analysis

DHM-related errors have a significant effect on VID outputs. However, since the commercial DHM software used in this methodology is difficult to modify and the algorithms for manikin movement calculation are out of the scope of this study, it is more important to understand the effect of the VE on human motions for different tasks. The direct calculation eliminates the errors introduced by adding the manikin-movement step to the assessment. The JRCM analysis is based on dynamic features of the human motions used to complete the different tasks, including Maximum Moment (MM), Maximum Sagittal Flexion (MSF), Average Twisting Velocity (ATV), and Maximum Lateral Velocity (MLV). Table 5 shows the average values of these important dynamic features for the different tasks when captured in the physical mockup and in the VE.

Table 5
Mean values of input information for JRCM.^a

MM (N·m)     Mockup    VE        MOCAP
FL0          0         0         0
FL20         6.4820    5.9174    6.8989
SL           0.3175    0.2981    0.3338

MSF (°)      Mockup    VE        MOCAP
FL0          11.3452   33.5509   5.5159
FL20         20.0434   31.6753   7.2684
SL           11.6730   31.7371   2.0391

ATV (°/s)    Mockup    VE        MOCAP
FL0          3.9208    2.6451    3.8105
FL20         6.8912    2.5778    6.8501
SL           29.4983   20.2431   25.2742

MLV (°/s)    Mockup    VE        MOCAP
FL0          7.6031    10.2442   3.7997
FL20         15.1079   11.2543   8.9167
SL           32.7137   12.7969   26.4258

a JRCM refers to Job Risk Classification Model.

From Table 5 we can see trends in how different aspects of human motion are influenced by the VID process.
1. Maximum Moment (MM) values reflect the maximum distance of the object from the participant's body when lifting. Since the VE and physical mockup environments are identical, all three experimental environments generate similar values for MM when participants stand at the same distance from the table. This partly shows that the VID environment is capable of predicting hand moving distances.
2. Another value that is similar for all three experimental environments is Average Twisting Velocity (ATV): a smaller twisting speed during the front lifts and a larger one for the side lift. Also, for all three tasks, participants tend to twist more slowly in the VE.
3. One significant finding is the value of Maximum Sagittal Flexion (MSF). It is clear that participants bend much more in the VE than in the mockup, and the manikin motions calculated from real human motions also tend to have a larger bending angle. The limited vision angle in the VE is the main reason for this limitation.
4. Lateral bending is not a targeted motion in these tasks, so the Maximum Lateral Velocity (MLV) value indicates the stability of motions in each task/environment. Firstly, we can notice that as task complexity increases, stability decreases (MLV increases); secondly, manikin motions are less stable compared to real human motions; and finally, the stability of motions in the physical mockup environment changes much more with task complexity than in the VE.
5. In general, considering all these features, participants move more slowly in the VE, and their moving speed and bending angle do not change much across different tasks in the VE.
4.3. Future work

Future work on VID research will focus on two directions. The first important improvement needed is the VE used. Not only does the limited field of view need to be extended, but necessary feedback should also be added. In the current VID system, no feedback other than visual feedback has been applied in the VE, which results in increased error when using VID for higher-complexity tasks. Appropriate feedback methods can be investigated, especially force feedback with a moderate physical boundary involved (Demirel & Duffy, 2009). The other direction for improving the VID environment is to further study the DHM motion calculation algorithm based on marker movements. Both the marker placement design and the joint center determination can be considered.
References

Badler, N. I., Phillips, C. B., & Webber, B. L. (1993). Simulating humans: Computer graphics, animation, and control. Oxford: Oxford University Press.
Cappelli, T. (2006a). Two- and three-plane job risk classification using motion capture: An examination of the Marras et al. model, 1993. Master thesis, Mississippi State University.
Cappelli, T. (2006b). Personal communication.
Cappelli, T., & Duffy, V. G. (2007). Motion capture for job risk classifications incorporating dynamic aspects of work, SAE 2006. Transactions Journal of Passenger Cars - Electronic and Electrical Systems, 2006-01-2317, 1069-1072.
Chaffin, D. B. (2001). Digital human modeling for vehicle and workplace design. Warrendale, PA: Society of Automotive Engineers.
Demirel, H. O., & Duffy, V. G. (2009). Impact of force feedback on computer aided ergonomics analyses. In Proceedings of HCI International, San Diego, July.
Du, C. J., & Duffy, V. G. (2007). Development of a methodology for assessing industrial workstations using optical motion capture integrated with digital human models. Occupational Ergonomics, 7(1), 11-25.
Faraway, J. J., Chaffin, D., Woolley, C., Wang, Y., & Park, W. (1999). Simulating industrial reach motions for biomechanical analyses. In Industrial Engineering Research Conference, Phoenix, AZ, May 23-25.
Feyen, R., Liu, Y., Chaffin, D. B., Jimmerson, G., & Joseph, B. (2000). Computer-aided ergonomics: A case study of incorporating ergonomics analyses into workplace design. Applied Ergonomics, 31, 291-300.
Li, K., Duffy, V. G., & Zheng, L. (2006). Universal accessibility assessments through virtual interactive design. International Journal of Human Factors Modeling and Simulation, 1(1), 52-68.
Marras, W. S., Lavender, S. A., Leurgans, S. E., Rajulu, S. L., Allread, W. G., Fathallah, F. A., et al. (1993). The role of dynamic three-dimensional trunk motion in occupationally-related low back disorders. Spine, 18(5), 617-628.
Morrissey, M. (1998). Human-centric design. Mechanical Engineering, 120(7), 60-62.
Porter, J. M., Case, K., & Freer, M. T. (1999). Computer-aided design and human models. In W. Karwowski & W. S. Marras (Eds.), The occupational ergonomics handbook (pp. 479-500). Boca Raton: CRC Press.
Sutherland, J. A. (2007). Development of validation methods for dynamic ergonomic assessment tools of the lumbar spine. Master thesis, Purdue University.
Wu, T., Tian, R., & Duffy, V. G. (2011). Performing ergonomics analyses through virtual interactive design: Validity and reliability assessment. Human Factors and Ergonomics in Manufacturing & Service Industries. doi:10.1002/hfm.20267.
Yang, J., & Pitarch, E. P. (2004). Kinematic human modeling. End-of-year technical report for project digital human modeling and virtual reality for FCS. The Virtual Soldier Research Program, University of Iowa.
Zhang, X., & Chaffin, D. B. (1996). Task effects on three-dimensional dynamic postures during seated reaching movements: An analysis method and illustration. In Proceedings of the 1996 40th Annual Meeting of the Human Factors and Ergonomics Society, Philadelphia, PA, Part 1 (Vol. 1, pp. 594-598).
Zhang, X., & Chaffin, D. B. (2000). A three-dimensional dynamic posture prediction model for simulating in-vehicle seated reaching movements: Development and validation. Ergonomics, 43, 1314-1330.
Zhang, X., & Chaffin, D. B. (2005). Digital human modeling for computer-aided ergonomics. In W. S. Marras & W. Karwowski (Eds.), The occupational ergonomics handbook (2nd ed.). CRC Press.
