
Multimed Tools Appl

https://doi.org/10.1007/s11042-018-6216-x

Virtual reality training for assembly of hybrid medical devices

Nicholas Ho 1 & Pooi-Mun Wong 1 & Matthew Chua 2 & Chee-Kong Chui 1

Received: 9 November 2017 / Revised: 19 April 2018 / Accepted: 25 May 2018


© Springer Science+Business Media, LLC, part of Springer Nature 2018

Abstract Skill training in the medical device manufacturing industry is essential to bring new workers up to the required efficiency level quickly. This process, however, gives rise to many underlying issues such as contamination and safety risks, long training periods, high skill and experience requirements for operators, and high training costs. In this paper, we propose and evaluate a novel virtual reality (VR) training system for the assembly of hybrid medical devices. The proposed system, which integrates Artificial Intelligence (AI), VR and gaming concepts, is self-adaptive and autonomous. This enables the training to take place in a virtual workcell environment without the supervision of a physical trainer. In this system, a sequential framework is proposed and utilized to enhance the training through its various "game" levels of familiarity-building processes. A type of hybrid medical device, a carbon nanotube-polydimethylsiloxane (CNT-PDMS) based artificial trachea prosthesis, is used as a case study in this paper to demonstrate the effectiveness of the proposed system. Evaluation results with quantitative and qualitative comparisons demonstrated that our proposed training method has significant advantages over common VR training and conventional training methods. The proposed system addresses the underlying training issues of hybrid medical device assembly by providing trainees with effective, efficient, risk-free and low-cost training.

Keywords Interactive training environment · Virtual reality · Virtual assembly workcell · Hybrid medical device · Assembly training

* Nicholas Ho
e0013180@u.nus.edu

1 Department of Mechanical Engineering, National University of Singapore, 9 Engineering Drive 1, Singapore 117576, Singapore
2 Institute of Systems Science, National University of Singapore, 25 Heng Mui Keng Terrace, Singapore 119615, Singapore

1 Introduction

1.1 Hybrid medical device fabrication

Medical devices, in particular hybrid medical devices, comprise multiple materials, including both synthetic and biological components (e.g. nanotubes, polymers, stem cells, deoxyribonucleic acid) [8, 38, 42]. Cells, a commonly used biological material for hybrid devices, are a critical consideration in the development and modelling of hybrid medical device fabrication processes [18, 35]. As hybrid medical devices require a series of controlled and specialized processes, their assembly is considered a highly complex and time-consuming process. Furthermore, the working environment has to be free from contamination throughout the fabrication and preparation process [18, 31]. Although physical robot aids are commonly used to reduce over-reliance on human skills and abilities [26], they may not be feasible in this application because of the complexity of the assembly process, which calls into question the feasibility of Human-Robot Collaboration (HRC) and fully automated assembly for such applications. Manual assembly is therefore the most suitable approach for such complex tasks with relatively low volume [15].
As hybrid medical device fabrication involves manual assembly due to its complex nature, skill training in this industry is essential to optimize and expedite the efficiency level of new assembly operators [5, 19]. Existing training methods for manual assembly include mentoring and attending lectures in classrooms. However, these methods pose many problems, such as the difficulty of transferring tacit skills/knowledge to the trainees and the requirement of intensive guidance by a qualified manager or supervisor [33]. This training process also gives rise to other underlying issues such as long training periods, high skill and experience requirements for operators, and high training costs [25]. Moreover, the training is likely to be executed within a workcell shared with other existing operators, due to the high costs involved in purchasing, constructing and/or maintaining the equipment and the cleanroom [19]. In such cases, the training process may increase contamination and safety risks, which is not desirable for hybrid medical device fabrication [2, 41]. In order to solve these underlying issues, a risk-free, low-cost and efficient assembly guidance training system has to be introduced.

1.2 Virtual reality (VR) training

Virtual reality (VR) technology is recognised as a useful tool for physical skills training. By definition, VR is a technology that allows user interaction and engagement within virtual worlds generated by computers [17, 21]. The main advantage of developing a VR system is that it enables users to interactively update the simulation and input new control orders in a virtual environment by directly updating the virtual objects and operations in real time [57]. VR training systems have been implemented in many training disciplines such as military, engineering, flight simulation, automotive, space and manufacturing [22, 52]. Because of the increasing demand for mobility, VR training is increasingly available over the Internet and ad-hoc wireless networks. This, however, leads to increased handling complexity and higher quality goals at run time [52]. Moreover, as every trainee learns differently and may require their training to focus on certain aspects of the tasks, VR training systems also face another challenge: customising the training to suit individual learning patterns. This customisation process conventionally involves human supervision, which is time-consuming and costly. In order to solve this issue, a self-adaptive and autonomous VR training system can be implemented to automate and customise training for each trainee. As a result, it can not only eliminate the need for human supervision, but also increase training effectiveness and efficiency [52].
VR technologies are widely considered in manufacturing industries because they can improve manufacturing operations, leading to efficient and effective learning and training, and hence increasing productivity, improving quality and reducing cost. Moreover, these technologies can address complex problems in manufacturing. VR training simulations, which are usually conducted before the actual operations, can reduce the number of errors, trials and re-works incurred during actual operations, thus saving energy, materials and labour [34, 37, 40]. Research on VR manufacturing applications has progressed significantly over the past decade due to advances in both hardware and software, making it an attractive research topic. Hardware has become noticeably more powerful and smaller, while many robust and efficient algorithms have been developed to speed up response and to improve accuracy in tracking and registration [34].
According to Brough et al. [5], training effectiveness, efficiency and cost can be further improved by utilizing VR technology, which has the potential to expedite the training process. They also highlighted that during VR training, providing images of equipment/materials/tools and visual hints via animations in a 3D setting, and allowing practice of the assembly tasks, are helpful for trainees, as these aids help in the transfer of knowledge to real-life applications. This cognitive-based training method enables trainees to learn to recognize/remember assembly parts and sequences, and to handle the parts correctly during assembly operations. In addition, Hyppölä et al. [21] highlighted that VR is capable of enhancing the learning experience by providing experiential learning in acquiring new skills. They also mentioned that VR can be used to (a) create effective virtual environments that are not available or easily accessible in the real world, (b) provide a safe, low-cost training method with no harm/risk for patients in the medical industry, and (c) support active learning and repeated task practice.
In light of these advantages, the integration of VR technology can help solve the previously mentioned underlying training issues for hybrid medical device assembly. As a VR-based assembly training system does not require (a) physical equipment [22], (b) a built-in cleanroom system [22], (c) a physical training manager/supervisor [52], or (d) a long training period (because it can reduce/skip "waiting time"), it is able to provide risk-free, low-cost and efficient training [44, 52], apart from its main advantage of being effective as discussed earlier.

2 Related works

VR technology plays an important role in simulating advanced 3D human-computer interactions (HCI), especially for mechanical assemblies, by allowing operators to be fully immersed in an artificial environment [22, 34]. HCI technologies have enabled computers to approximately replicate human activities and the dynamic behaviours of virtual objects in a virtual environment [57]. Many VR-based systems have been proposed that successfully assist training activities, in particular assembly training activities [22, 34].

2.1 VR training for assembly activities

Developed VR training methods for assembly activities include that of Brough et al. [5], who developed a virtual environment-based training system for simple mechanical assembly operations. The system consists of three components: (a) a virtual workspace that is responsible for dynamically generating animations, (b) a virtual author that allows trainers to create a VR-based tutorial without doing any programming, and (c) a virtual mentor that monitors the operator's actions in the virtual workspace and assists him/her when necessary via various forms of instruction. It was proven effective in training new operators based on a conducted user study. In the work by Xia et al. [55], a haptics-based virtual environment system was developed for complex product assembly training. This system is based on a rebuilt hierarchical constraint-based data model, and it builds the virtual assembly environment using a programme that automatically integrates data to transfer relevant data to a VR application from a computer-aided design system. Haptic feedback and physics-based modelling methods are utilized to simulate the assembly operations. Experiment results illustrated the potential of this application for assembly training processes. Müller et al. [33] introduced a concept in which VR-based simulation games are used to support training for industrial manual assembly processes. This concept involves a tutorial game (Assembly Game) that aims to give trainees an initial understanding of the assembly steps, and a practical training session (Gaming System). Their human trial studies proved that the concept was effective. Wasfy et al. [54] developed an intelligent virtual environment that has the potential to effectively train users in complex engineering system operations. The system involves an intelligent agent that tutors, guides and/or supervises the process training, and it comprises three main components: (a) a rule-based expert system, (b) a hierarchical process knowledge base, and (c) a human-like virtual character engine. Stratos et al. [48] introduced an innovative VR game-based learning framework that aims to attract young students at secondary level to manufacturing. It involves using a CAVE (an immersive VR environment) to simulate manual assembly tasks. Experiment results illustrated an increase in the secondary students' interest in the assembly process. Besides the emergence of new frameworks, researchers have also been investigating analysis methods for VR training. One example is the new task analysis method developed by Yuviler-Gavish et al. [58] for developing maintenance and assembly VR training simulators. The proposed method, which was based on analysing the training and technology requirements simultaneously, was compared with traditional methods. It was concluded that the proposed method increased the effectiveness of VR training simulators.

2.2 VR training in the medical industry

Research on VR training in the medical industry is also gaining popularity because of its time savings and cost effectiveness compared to conventional training methods [52]. Such works include Grantcharov et al. [16], who examined the effectiveness of utilizing VR training simulators to train users' laparoscopic psychomotor skills. It was concluded that the surgeons who received the VR training displayed significantly greater improvement in performance (i.e. greater operating speed and reduction in error) in the operating room than those in the control group. Gallagher et al. [14] also conducted an assessment of the laparoscopic psychomotor skills of surgeons using MIST VR, a VR training simulator developed specifically for laparoscopic surgery. The assessment results illustrated that VR training simulators can be utilized as a useful tool to identify performance levels and evaluate psychomotor skills among experienced, junior and novice laparoscopic surgeons. In the work by Suzuki et al. [50], a VR training system, which had the same manipulation interface as the real system, was developed to train surgeons in endoscopic surgery. A function was added to the system to record the trainee's performance, enabling a four-dimensional analysis during training. The evaluation of the system illustrated that the time to completion decreased as the number of training sessions increased. It was concluded that the proposed system was effective for learning surgical procedures. Schwaitzberg et al. [44] also developed a virtual transluminal endoscopic surgical trainer, a VR training simulator, with the aim of accelerating the development of natural orifice transluminal endoscopic surgery (NOTES) procedures and devices. A survey was conducted at the 2012 Natural Orifice Surgery Consortium for Assessment and Research annual meeting. The survey results illustrated the importance of developing a VR-based NOTES simulator with haptic capabilities and highly realistic interfaces. Wang et al. [53] developed a tissue protection-based VR training system for robotic catheter surgery to prevent collateral damage caused by collision. The effectiveness of the tissue protection mechanism in the proposed system was proven by the evaluation results, which illustrated a reduction in tissue damage and decreased collision frequency.

2.3 Discussions on related works

In view of the existing related works, we have observed a growing trend in favour of developing VR training methods for mechanical assemblies and for medical activities like surgery. However, these works have not considered utilizing VR technology for assembly operations involving both biological and mechanical components (i.e. hybrid medical device assembly). Moreover, the support activities (e.g. the fabrication of medical devices) are as important as the primary activities (e.g. surgery) in the medical industry, as they depend on one another for efficient and effective operations [9]. As such, given the capability of VR technology to train users in assembly operations and the lack of research on VR training methods in the medical device manufacturing industry, we believe that there is strong motivation to explore this aspect.
In this paper, we propose and evaluate a novel VR training system for assembly operations involving both biological and mechanical components (i.e. hybrid medical device assembly). The proposed system, which integrates AI, VR and gaming concepts, is self-adaptive and autonomous, and thus enables the training to take place in a virtual workcell environment without the supervision of a physical trainer. It also utilizes a unique sequential framework that enhances training through its various "game" levels of familiarity-building processes. The motivation of this work is to solve the underlying training issues for hybrid medical device assembly by providing trainees with effective, efficient, risk-free and low-cost training through the proposed VR training system. The structure of the proposed VR game-based training system is described in Section 3. Section 4 presents the constructed VR workcell and illustrates the conduct of the assembly guidance training, based on the proposed training system and a case study chosen from recent research findings. Section 5 presents the evaluation results, which not only compare the proposed VR training system with other training methods, but also compare qualitative metrics among various VR immersion techniques applicable to the proposed system. Lastly, Section 6 discusses the advantages of the proposed concept based on the evaluation results, and concludes the paper.

3 Materials and methods

3.1 Virtual Reality Assembly Guidance Training System (VRAGTS)

Inspired by popular first-person game concepts, the VRAGTS is an intelligent, VR and game-based assembly training system. The motivation of this system is to promote effective real-time guidance, interaction and fun for trainees during the training process. It aims to solve the underlying training issues for hybrid medical device assembly by providing trainees with effective, efficient, risk-free and low-cost training.
The proposed system comprises two main components: the Virtual Environment and the Interactive Channel. The Virtual Environment is the space where all virtual activities and interactions take place in real time, ranging from the operator's assembly training step actions and walking movements to movements caused by gravity and other forces [56]. We define step actions as the actions conducted by the operator in the Virtual Environment during the relevant assembly step. These assembly step actions are performed using the motion-based HCI system, which is explained in detail in Section 3.3. The Virtual Environment allows both the assembly operator and the VR supervisor (a programmed virtual supervisor explained in detail in the next section) to be readily updated on these virtual activities and interactions; the assembly operator is updated via a physical display set (e.g. computer monitor, projector screen, VR headset), while the VR supervisor, being part of the Virtual Environment programme, is constantly updated with the generated data. The Interactive Channel is the communication bridge between the assembly operator and the VR supervisor, through which the operator can send special request commands (e.g. request for hints, fast forward time, restart) and the VR supervisor can give assembly training step guidance to the operator when requested and/or when the operator repeatedly performs the wrong assembly training step action (e.g. word instructions, virtual animations of the required assembly training step action, visual directions to the equipment/materials/items involved in the assembly training step) [5]. The Interactive Channel can be integrated as part of the Virtual Environment, allowing operators to send out their special request commands by pressing virtual buttons or performing various forms of prominent hand gestures. This component also allows the VR supervisor to provide assembly training step guidance to the operator within the Virtual Environment itself. The Virtual Reality Assembly Guidance Training System (VRAGTS) architecture is summarized in Fig. 1.

Fig. 1 Virtual Reality Assembly Guidance Training System (VRAGTS) Architecture
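To make the operator-supervisor message flow concrete, below is a minimal sketch of how the Interactive Channel's special request commands could be wired up in C#, the scripting language of the Unity engine used in this work. All type and member names (OperatorCommand, InteractiveChannel, and so on) are illustrative assumptions, not the authors' actual implementation.

```csharp
using System;
using System.Collections.Generic;

// Special request commands the operator can send, per the description above.
public enum OperatorCommand { RequestHint, FastForwardTime, Restart }

// Communication bridge between the assembly operator and the VR supervisor.
public class InteractiveChannel
{
    // Handlers registered by the VR supervisor for each command type.
    private readonly Dictionary<OperatorCommand, Action> handlers = new();

    public void Register(OperatorCommand cmd, Action handler) => handlers[cmd] = handler;

    // Invoked when the operator presses a virtual button or makes a prominent hand gesture.
    public void Send(OperatorCommand cmd)
    {
        if (handlers.TryGetValue(cmd, out var handler)) handler();
    }
}

public static class Demo
{
    public static void Main()
    {
        var channel = new InteractiveChannel();
        channel.Register(OperatorCommand.RequestHint,
            () => Console.WriteLine("VR supervisor: presenting next hint"));
        channel.Send(OperatorCommand.RequestHint); // operator taps the hint button
    }
}
```

In the full system, the supervisor's handlers would update the Virtual Environment (e.g. spawn an instruction panel) rather than print to a console.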
The learning process of the proposed training system is developed based on game concepts to introduce playfulness and fun, as these characteristics have been shown to be related to efficient and effective learning [33]. In order to cater for operators wholly new to the industry, the game-learning process of the proposed system is designed on a step-by-step basis, allowing trainees to gradually acquire their skills and knowledge over time [5, 33]. Tutorial, practice and assessment phases are implemented to prepare trainees and to test whether they are ready for the real-life tasks. The proposed training system can be used as an effective learning platform not only for new operators but also for existing operators who want to revise the assembly steps [33, 56].
The game concept integrated into the system's learning process consists of three gaming stages. At the first stage, the trainees undergo a tutorial lesson (Tutorial No. 1) to familiarize themselves with the materials, equipment and other relevant supporting items that will be utilized during the assembly. After this, they are tested and evaluated on this knowledge via a game assessment (Assessment No. 1). This ensures that operators who are new to the industry will at least acquire knowledge about the equipment/materials/supporting items required during the assembly, ranging from the very basics of familiarizing themselves with the terms to understanding their functions and purposes. At the second stage, the trainees undergo the second tutorial lesson (Tutorial No. 2) to learn the assembly step sequence. After the tutorial lesson, they are tested and evaluated on this knowledge via a game assessment (Assessment No. 2). At the last stage (the most important stage in the game-learning process), the trainees undergo the last tutorial lesson (Tutorial No. 3), which demonstrates how to perform the assembly steps. After the demonstration, they are allowed hands-on practice of the assembly steps to build familiarity and confidence before being tested and evaluated on this skill and knowledge via a game assessment (Assessment No. 3).
All tutorials, practices and assessments are operated within the virtual environment workspace under the VRAGTS. The trainees are required to pass each game assessment test based on a minimum score requirement to move on to the next stage; otherwise, they are required to restart their training from the beginning of the stage that they are currently in. At any point during any assessment test, the trainees can request hints from the VR supervisor. However, they are penalized for each hint request they make. The trainees are deemed fit by the system to proceed to the real-life training phase when they pass all the game assessment tests. The overall flowchart of the game-learning process of the proposed training system is summarized in Fig. 2.
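As a minimal sketch of this pass/penalty logic (the pass marks and the per-hint penalty below are placeholder values; the paper does not state its actual scoring parameters):

```csharp
using System;

// Tracks a trainee's score in one game assessment, with a penalty per hint request.
public class GameAssessment
{
    private const int HintPenalty = 5;          // assumed penalty per hint request
    public int Score { get; private set; }

    public void AwardPoints(int points) => Score += points;
    public void RequestHint() => Score -= HintPenalty;
    public bool Passed(int passMark) => Score >= passMark;
}

public static class TrainingFlow
{
    public static void Main()
    {
        int[] passMarks = { 60, 60, 70 };       // assumed minimum scores for Stages 1-3
        for (int stage = 0; stage < 3; )
        {
            GameAssessment assessment = RunTutorialAndAssessment(stage);
            if (assessment.Passed(passMarks[stage]))
                stage++;                        // proceed to the next stage
            // otherwise the trainee restarts from the beginning of the current stage
        }
        Console.WriteLine("All assessments passed: ready for real-life training.");
    }

    // Stand-in for the tutorial, practice and scored assessment of one stage.
    private static GameAssessment RunTutorialAndAssessment(int stage)
    {
        var assessment = new GameAssessment();
        assessment.AwardPoints(80);             // placeholder for the scored tasks
        return assessment;
    }
}
```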

3.2 VR supervisor

The VR supervisor is an AI programme used to simulate a real human supervisor that is used
to train/guide the trainee in the assembly training process. It is aware of any activities within
the virtual workspace and will react accordingly in real-time. Its main functions are to: (a)
decide and provide accurate, real-time guidance and training to the trainees via clear directions
and instructions during the assembly training process, (b) detect and/or differentiate which
training step the operator is at and if the steps are performed correctly, and (c) update VR
contents in the virtual environment upon special request from the trainee [5]. The motivation of
utilizing this programme is to eliminate the need for any physical manager/supervisor, and
significantly reduce the learning curve, training period and costs.

Fig. 2 Game-learning process overall flowchart of the VRAGTS

Figure 3 illustrates the summarized algorithm flowchart for the proposed VR training system. It also illustrates the VR supervisor's involvement in each assembly training stage. When an assembly training stage starts, the VR supervisor first gives a tutorial to teach the trainee the relevant skills and/or knowledge required in that particular stage. After the tutorial, the practice/assessment starts (the assembly practice activity is only applicable to Stage 3, as mentioned in the previous section). During the practice/assessment activity, the VR supervisor verifies each assembly training step action taken by the trainee, and provides the relevant training step instructions only if the trainee repeatedly performs wrong step actions (or, in special cases, requests hints). If the step action is correctly executed, the trainee is allowed to continue to the next training step, and this cyclical process continues until the practice/assessment task is completed. For Stage 3, the practice and assessment activities are the same, except that the practice activity does not involve any evaluation process and the trainee is allowed to practise an unlimited number of times without penalty. Lastly, for the assessment activity, the VR supervisor determines whether the trainee has passed the test based on the scores obtained during the assessment and the passing-mark criteria. If the trainee passes the assessment test, the trainee is allowed to continue to the next assembly training stage. This cyclical process continues until the trainee passes the final assessment test (Stage 3), upon which the training ends.

Fig. 3 Summarized algorithm flowchart: VR supervisor involvement
In order to integrate intelligence into the VR supervisor, a rule-based approach is considered, as it has been successfully applied to interactive training processes within a virtual environment [54]. The deterministic finite state machine concept, a type of rule-based system, is utilized to configure the VR supervisor. As a popular method of configuring artificial intelligence (AI) for games, a deterministic finite state machine is defined mathematically as a quintuple:

$(I, \Sigma, \delta, i_0, i_F)$  (1)

where $I$ is a finite set of states ($I = \{i_0, i_1, \ldots, i_F\}$), $\Sigma$ is the input alphabet (a finite set of symbols), $\delta : I \times \Sigma \rightarrow I$ is the state transition function, $i_0$ is the initial state and $i_F$ is the final state [28].
Figure 4 represents the VR supervisor finite state machine utilized during assembly practice or assessment activities, where guidance aids are most relevant. The motivation for this configuration is to enable the VR supervisor to effectively and efficiently make optimal decisions, and to allow seamless state transitions based on the various transition conditions. For instance, when the trainee completes an assembly training step task (Step i), the VR supervisor recognizes that Step i is completed (State i) and that the trainee will be tackling the next step (Step i + 1). Hence, it gives the trainee instructions/directions for Step i + 1 only if the trainee repeatedly performs wrong training step actions or requests hints while attempting to complete Step i + 1; this state is also known as State (i + 1)g.

Fig. 4 VR supervisor finite state machine
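A hedged sketch of how the quintuple of Eq. (1) could be realized for the supervisor is shown below; the states and inputs are simplified relative to the full transition graph of Fig. 4, and all names are our own illustrative choices.

```csharp
using System;

// Input alphabet (Sigma): the events the VR supervisor reacts to during practice/assessment.
public enum StepInput { CorrectAction, RepeatedWrongAction, HintRequest }

// Deterministic finite state machine (I, Sigma, delta, i0, iF) for the VR supervisor.
public class SupervisorFsm
{
    private readonly int finalState;            // iF: index of the last assembly step
    public int State { get; private set; }      // current state i, starting at i0 = 0
    public bool GuidanceActive { get; private set; } // true while in a guided state (i)g

    public SupervisorFsm(int stepCount) { finalState = stepCount; }

    // delta : I x Sigma -> I
    public void Transition(StepInput input)
    {
        switch (input)
        {
            case StepInput.CorrectAction:
                GuidanceActive = false;                    // leave the guided state
                State = Math.Min(State + 1, finalState);   // Step i done -> State i+1
                break;
            case StepInput.RepeatedWrongAction:
            case StepInput.HintRequest:
                GuidanceActive = true;                     // enter guided state (i)g
                break;
        }
    }

    public bool Completed => State == finalState;          // reached iF
}
```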

3.3 Physics-based modelling and hand motion simulation model

The dynamic behaviours of the virtual objects (e.g. their motions and deformations) have to be characterized to enable users to perform motion-based HCI in the immersive virtual environment [57]. There are many existing motion-based HCI techniques that can be considered and adopted in our VR training system. For instance, an articulated and generalized Gaussian kernel correlation (GKC)-based framework for human pose estimation was developed by Ding and Fan [13]. The framework provided a continuous and differentiable similarity measure between a template and an observation, which were represented by various sum-of-Gaussians variants. An articulated Gaussian kernel correlation (AGKC) function was further developed by embedding a kinematic skeleton in a multivariate sum-of-Gaussians model. This new function supported subject-specific shape modelling and articulated pose estimation for the user's body and hands. Lastly, by incorporating three regularization terms in the AGKC function (i.e. pose continuity, intersection penalty, visibility), a sequential pose tracking algorithm was developed. Based on benchmarking evaluations, the proposed algorithm was proven effective and efficient compared to state-of-the-art algorithms. In the work by Kılıboz and Güdükbay [23], a method to recognize trajectory-based dynamic hand gestures in real time for HCI was proposed. A fast learning mechanism, which does not require much training data, was introduced to teach gestures to the system; the sample gesture data were filtered and processed to develop gesture recognizers. A six-degrees-of-freedom position tracker was also utilized to capture trajectory data. Gestures were represented as an ordered sequence of directional movements in two dimensions. The proposed method was proven to give users the freedom to develop gesture commands according to their personal preferences for chosen tasks. As such, the proposed hand gesture recognition technique made the HCI process more user-specific and intuitive.
Physics-based modelling and the hand motion simulation model are significant in VR training systems, as they play an important role in characterizing the virtual objects' dynamic behaviours and in the motion-based HCI. Moreover, they increase the accuracy of assembly performance evaluation and increase training task efficiency [55]. We have utilized a mass-spring-damper model to characterize the dynamic behaviours of the virtual objects in the proposed system. These dynamic behaviours are governed by the geometries of the virtual objects and the external forces/torques that act upon them [45]. The motivation for using the mass-spring-damper model for this purpose is to effectively tackle the issue of visual interpenetration, and to allow whole-hand gripping and manipulation [4, 55]. According to Borst et al. [4] and Xia et al. [55], the model can be summarized using the following equations:

$[M]\ddot{x} + [B_T]\dot{x} + [K_T]x = F$  (2)

$[M]\ddot{\theta} + [B_R]\dot{\theta} + [K_R]\theta = T$  (3)

where $F$ and $T$ are the external forces and torques applied to the virtual objects respectively, $[M]$ is the mass matrix, $[B_T]$ and $[B_R]$ are the linear and torsional damping matrices respectively, and $[K_T]$ and $[K_R]$ are the linear and torsional stiffness matrices respectively. $\ddot{x}$, $\dot{x}$ and $x$ are the acceleration, velocity and displacement vectors of the virtual objects respectively, while $\ddot{\theta}$, $\dot{\theta}$ and $\theta$ are the corresponding angular acceleration, angular velocity and angular displacement vectors.
When two virtual objects are placed close enough to each other, a real-time computation based on the above physics model is carried out, and a geometry constraint is automatically captured. The precise assembly position is then calculated, leading to the generation of attractive or repulsive forces that naturally and realistically realize the assembly process.
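The following single-axis sketch shows Eq. (2) integrated with a semi-implicit Euler scheme, assuming diagonal (scalar) mass, damping and stiffness for readability; all parameter values are placeholders rather than those of the actual system. The same update with [B_R], [K_R] and an external torque gives the rotational case of Eq. (3).

```csharp
using System;

// One translational axis of the mass-spring-damper model (Eq. 2).
public class MassSpringDamper
{
    public double Mass = 1.0, Damping = 2.0, Stiffness = 10.0; // M, B_T, K_T (diagonal)
    public double Position, Velocity;                          // x and x-dot

    public void Step(double force, double dt)
    {
        // a = (F - B*v - K*x) / M, rearranged from Eq. (2)
        double accel = (force - Damping * Velocity - Stiffness * Position) / Mass;
        Velocity += accel * dt;     // semi-implicit Euler: velocity first,
        Position += Velocity * dt;  // then position, for numerical stability
    }
}

public static class SnapDemo
{
    public static void Main()
    {
        // Object released 5 cm from its captured assembly position; the spring
        // term acts as the attractive force pulling it onto the constraint.
        var obj = new MassSpringDamper { Position = 0.05 };
        for (int i = 0; i < 5000; i++) obj.Step(force: 0.0, dt: 0.001);
        Console.WriteLine($"Settled at {obj.Position:F4} m"); // approaches 0 (assembled)
    }
}
```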

There is also a need to create virtual hand movements in the virtual environment based on the user's hand movements, so as to allow effective motion-based HCI between the user's virtual hands and the virtual objects. The special computing sensory hardware needed to capture hand movement input and provide multimodal computer feedback is called an interface device [49]. For the experiments and evaluations conducted in this paper, we utilize Perception Neuron sensors to provide multimodal recognition of hand movements, which enables us to simulate hand motions in the developed VR training system [1]. The motivation behind this VR application is to create systematic human testing and training environments, which enable precise control of complex dynamic 3-D stimulus presentations, behavioural tracking, performance measurement, data recording and analysis [59]. The developed VR training system adopts the neuron sensor integrated system; the sensing mechanisms of the accelerometer and gyroscope sensors are illustrated in Fig. 5.
The input measurements of the neuron sensor integrated system comprise three parts: (i) the translational position of the user's hand using a 3-axis accelerometer, (ii) the rotational position of the user's forearm using a 3-axis gyroscope, and (iii) the user's hand gesture using a 3-axis magnetometer [59]. These measured input data are transferred via a USB cable to a computer, where they are converted into weighted matrices. Based on these weighted matrices, the user's hand motion is simulated as virtual hand models in the VR training system. The structure diagram of the motion-based HCI system, which is utilized in the VR training system, is illustrated in Fig. 6.
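On the Unity side, the consumer of these three inputs can be as simple as the component sketched below; the mapping shown is a simplification of the weighted-matrix pipeline described above, and all names are illustrative rather than the authors' actual script.

```csharp
using UnityEngine;

// Drives the virtual hand model from the latest fused sensor readings.
// A hypothetical Unity component, shown only to illustrate the data flow.
public class VirtualHandDriver : MonoBehaviour
{
    public Transform virtualHand;   // hand model placed in the virtual workcell

    // Expected to be called once per frame by the sensor-reading layer.
    public void ApplySensorFrame(Vector3 handPosition,       // from the 3-axis accelerometer
                                 Quaternion forearmRotation, // from the 3-axis gyroscope
                                 int gestureId)              // from the 3-axis magnetometer
    {
        virtualHand.SetPositionAndRotation(handPosition, forearmRotation);
        // gestureId would select a hand pose (e.g. open palm, pinch) on the finger rig.
    }
}
```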

3.4 Utilized hardware and software

As our VR training system involves VR technology and Perception Neuron sensors, we have utilized a virtual reality head-mounted display (HMD) and wearable sensors. The Oculus Rift, a type of VR HMD, is utilized for rendering the developed virtual interactive environment to orchestrate the training process. The Oculus Rift comes with a sensor to position the headset in space. As for the wearable sensors, we utilize the Perception Neuron Lite, a product of NOITOM. This equipment, a set of six neuron sensors, fuses accelerometer, gyroscope and magnetometer sensors; this sensor fusion forms an inertial measurement unit (IMU). IMUs are commonly used in aircraft, UAVs, spacecraft, satellites and health care for measuring orientations, movements and positions [10]. The Perception Neuron Lite is useful for measuring the degrees of freedom of an object because of its sensor fusion mechanism [39]. As such, it is able to accurately track hand motions in 3-D [36]. We have attached the neuron sensors to the user's arm, hand, wrist, index finger, middle finger and thumb, with one sensor attached to each part. These sensors synchronize with one another to provide accurate and complete data on the user's arm, hand and finger movements [11, 51].

Fig. 5 Sensing mechanisms of (a) accelerometer for axis data measurements and (b) gyroscope for angular data measurements

Fig. 6 Structure diagram of the motion-based HCI system
As mentioned earlier, the gyroscope sensor is used to measure changes in rotational velocity, mainly for the rotational movement of the hand/arm part to which it is attached [47]. The accelerometer sensors, on the other hand, are used to measure changes in the velocities and positions of the user's hands [46]. A MEMS (microelectromechanical systems) based accelerometer has been utilized because of its small size and ease of operation [32]. Lastly, the magnetometer, which measures the local magnetic field, helps in determining the absolute orientation of the user's hands on the N-S-E-W plane [51].
Many software tools are commercially available for developing virtual interactive environments; commonly used ones include Unreal and Unity 3D. We have utilized Unity 3D for the development of the virtual interactive environment; Unity 3D is a game engine that supports 2D and 3D graphics, and C# and JavaScript scripting. As for the modelling of the virtual objects, we have utilized Blender, a 3D computer graphics software tool. Blender is used to create the 3D virtual object models because it can easily convert these models into files that are compatible with the Unity software. Lastly, Axis Neuron, a support software package for the Perception Neuron Lite, is utilized to capture and broadcast the motion data received by the individual sensors. These broadcast data are utilized in the Unity software to simulate the motions of the user's hand.
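As a hedged sketch of the receiving end: Axis Neuron streams motion frames over the local network, and a listener like the one below could collect them for use in Unity. The port number and the assumption of a text payload are illustrative only; they are not the documented Axis Neuron protocol.

```csharp
using System.Net;
using System.Net.Sockets;
using System.Text;

// Receives broadcast motion-data frames over UDP (illustrative port and format).
public class MotionDataReceiver
{
    private readonly UdpClient client;

    public MotionDataReceiver(int port = 7001)   // placeholder port, not Axis Neuron's default
    {
        client = new UdpClient(port);
    }

    public string ReceiveFrame()
    {
        var remote = new IPEndPoint(IPAddress.Any, 0);
        byte[] packet = client.Receive(ref remote);  // blocking receive of one frame
        return Encoding.ASCII.GetString(packet);     // decode for downstream parsing
    }
}
```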

3.5 Overview of the proposed VR training system

In summary, the proposed VR training system utilizes a range of HCI techniques. Firstly, it
utilizes a task-based HCI method that enables trainees to gradually acquire their skills and
knowledge over time by learning on a step-by-step basis (i.e. from Stage 1 to Stage 3). The
objectives at each stage are predetermined and established, and the trainees have to accomplish
these objectives before they are allowed to move on to the next stage. Secondly, it utilizes a
game-based HCI method that allows trainees to enjoy the training process with playfulness and
fun through tutorials, assessments and/or practices at each stage. Thirdly, it introduces a
motion-based HCI method that enables trainees to directly interact with the virtual objects
using their simulated virtual hands (e.g. gripping, object manipulation). It also enables other
virtual objects to interact with one another, and thus preventing visual interpenetration and
enabling assembly operations to occur within the virtual environment. However, this method is
only applicable at Stage 3, as only Stage 3 involves hands-on assembly practices and
assessments. An overview flowchart of the proposed VR training system is illustrated in Fig. 7.

Fig. 7 Overview flowchart of the proposed VR training system

4 Assembly of hybrid medical device case study: artificial trachea

4.1 CNT-PDMS based artificial trachea prosthesis

We applied the proposed system framework in a practical setting based on recent research, so as to prove its feasibility and effectiveness. We took as reference our previous work on the development of a patient-specific, CNT-PDMS (carbon nanotube-polydimethylsiloxane) based artificial trachea prosthesis [6–8]. Based on experiments done to replicate the fabrication of the CNT-PDMS based artificial trachea according to this reference, we noted the following details: (a) the equipment, materials and supporting items required for this development, summarized in Table 1, and (b) the assembly steps for the CNT-PDMS based artificial trachea prosthesis, shown in Table 2. We utilize these known details to construct an appropriate virtual workcell.

Table 1 Equipment, materials and supporting items for CNT-PDMS based artificial trachea prosthesis

Equipment required:
1. Weighing scale
2. Fume hood
3. 2-in-1 oven (vacuum cum heat oven)
4. Homogenizer
5. Freeze drier
6. Biosafety cabinet (BSC)
7. Cryopreservation system
8. Water bath
9. Bioreactor/incubator

Materials required:
1. PDMS pre-polymer
2. PDMS curing agent
3. Multi-Walled Carbon Nanotubes (MWCNT)
4. Type I collagen sponge
5. Harvested cryopreserved epithelial cells

Supporting items required:
1. Mould No. 1 (to assemble CNT-PDMS composite)
2. Mould No. 2 (to assemble collagen coated prosthesis)
3. Beaker
4. Pipette
5. Disposable cup
6. Stirrer/Spoon

Table 2 Assembly steps for CNT-PDMS based artificial trachea prosthesis

A) CNT-PDMS composite skeleton
1. Pour PDMS pre-polymer into a beaker and add PDMS curing agent at a 10:1 ratio using a pipette.
2. Put a disposable cup on the weighing scale and tare it. Pour the PDMS mixture into the cup and take note of the weight. Using the spoon, add 0.2 to 1% mass of MWCNT to the mixture.
3. Bring the cup of mixture to the fume hood, switch on the fume hood, and stir the mixture thoroughly using the stirrer.
4. Put the cup of well-stirred mixture in the vacuum oven, switch it on, turn off the gas intake, switch on the vacuum suction pump, and leave the cup of mixture there for 1 h. After which, switch off the vacuum suction pump and then the vacuum oven.
5. Bring the cup of mixture to the fume hood. Slowly and carefully pour the vacuumed mixture into Mould No. 1.
6. Put the mould of mixture in the oven, switch it on, and configure the temperature and period settings. After 2 h of heating, allow the material to cool for 15 mins. After which, remove the mould from the oven.
7. Bring the mould of mixture to the work bench and remove the cured CNT-PDMS composite from the mould.

B) CNT-PDMS composite with Type I collagen sponge
8. Dissolve Type I collagen in aqueous hydrochloric acid in the fume hood.
9. Homogenize the solution at 8000 rpm for 15 min using the homogenizer in the fume hood.
10. Place the cured CNT-PDMS composite in the hollow region of Mould No. 2, and carefully pour the collagen mixture into the mould cavity.
11. Put the mould of mixture in the freeze drier, configure the temperature and period settings, and leave it for 2 h.
12. Transfer the mould of mixture to the vacuum oven, configure the temperature and period settings, and leave it for 12 h. After which, remove the mould from the oven.
13. Bring the mould of mixture to the work bench and remove the composite from the mould.
14. Spray the composite with 70% ethanol, and place the composite in the BSC. Close the glass door and switch on the UV light for 1 h to sterilize it.

C) Complete trachea prosthesis
15. Remove the cryopreserved epithelial cells from the cryopreservation system.
16. Thaw the cryopreserved epithelial cells in the water bath.
17. Seed the thawed epithelial cells onto the composite using a pipette in the BSC.
18. Transfer the seeded composite to the bioreactor/incubator.
19. Allow the epithelial cells to grow and form a substantial amount of cilia.
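For the VR supervisor to verify step actions against this sequence, Table 2 can be encoded as plain data. One possible encoding is sketched below; the record layout is our own illustration, and only the first two steps are spelled out.

```csharp
using System.Collections.Generic;

// One assembly step, with the virtual items the trainee must use to perform it.
public record AssemblyStep(int Number, string Description, string[] RequiredItems);

public static class TracheaAssembly
{
    public static readonly List<AssemblyStep> Steps = new()
    {
        new(1, "Mix PDMS pre-polymer and curing agent at a 10:1 ratio",
            new[] { "Beaker", "Pipette", "PDMS pre-polymer", "PDMS curing agent" }),
        new(2, "Weigh the mixture and add 0.2 to 1% mass of MWCNT",
            new[] { "Weighing scale", "Disposable cup", "MWCNT", "Spoon" }),
        // ... steps 3 to 19 follow the same pattern (see Table 2)
    };
}
```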

4.2 Virtual workcell for CNT-PDMS based artificial trachea prosthesis assembly

Figure 8 illustrates an overview of the virtual workcell designed and developed for CNT-PDMS based artificial trachea prosthesis assembly. The virtual workcell is constructed using Unity 3D, considering the proposed system framework, basic physics-based modelling (e.g. collision detection, object manipulation) and the relevant assembly steps. It is a platform that houses all virtual training activities (i.e. tutorials, practices and assessments), interactions between the operator and the virtual items (i.e. equipment/materials/supporting items), and interactions between the operator and the VR supervisor. Referring to Fig. 9, upon entering Activation Mode when the training commences, the trainee is placed within the virtual workcell with the provided equipment, materials and supporting items. The trainee can interact with these virtual components using the virtual hands. The operator-to-VR supervisor interactive channel, in the form of virtual buttons, is placed at the top left corner of the trainee's screen. This allows the trainee to conveniently send out special request commands (i.e. request for hints, restart, fast forward time) if needed.
Fig. 8 An overview of the developed virtual workcell

Fig. 9 The virtual workcell during activation mode

4.3 VR assembly guidance training (stage 1 and 2)

As mentioned in Section 3.1 (Fig. 2), the VR supervisor first exposes the trainees to Stages 1 and 2 of the game-learning process, in order to build their familiarity with the involved equipment/materials/supporting items and with the assembly step sequence respectively. As illustrated in Figs. 10 and 11, during Tutorial No. 1 in Stage 1, the trainee is allowed to find out more about each piece of equipment, material or supporting item simply by browsing around the virtual workcell. When the trainee stops and focuses on any relevant virtual item, details of the selected item are presented to the trainee (e.g. its real image(s), its parts, and its purpose). This allows the trainee (especially a new one who has no prior knowledge/experience) not only to understand what the item looks like in reality, but also to understand its importance and role in the assembly process.
Fig. 10 Illustration of details of the BSC provided to the trainee for stage 1's tutorial no. 1

Fig. 11 Illustration of details of the water bath provided to the trainee for stage 1's tutorial no. 1

On the other hand, during Tutorial No. 2 in Stage 2, the trainee is not allowed to browse around the virtual workcell freely while being exposed to the assembly step sequence. Instead, the VR supervisor gives the trainee a tour around the virtual workcell according to the step sequence. For instance, when Tutorial No. 2 commences, the trainee is stationed in front of the workbench because the first two assembly steps involve it and the items on it. Details of the relevant assembly steps are then presented to the trainee, as illustrated in Fig. 12. Upon clicking next, the trainee is directed to stand in front of the fume hood, where details of the third assembly step are presented to him/her, as illustrated in Fig. 13. This helps the trainee not only to familiarize with the step details, but also to remember the involvement of the relevant items for each step (e.g. Steps 1 and 2 involve the workbench and Step 3 involves the fume hood, as illustrated in Figs. 12 and 13).

Fig. 12 Illustration of details of step 1 and 2 provided to the trainee at the workbench for stage 2's tutorial no. 2

Fig. 13 Illustration of details of step 3 provided to the trainee at the fume hood for stage 2's tutorial no. 2

4.4 VR assembly guidance training (stage 3)

As mentioned in Section 3.2, the VR supervisor is the key to providing effective and efficient VR assembly guidance training. When the trainee presses the "Request Hints" virtual button or repeatedly performs any wrong step action, the VR supervisor provides the relevant assembly training step guidance. In order to promote effective learning, the VR supervisor is programmed to give out the hints in a relevant and orderly manner. This prevents "spoon-feeding" the trainees: giving them the hints bit by bit, instead of the full guidance instructions all at once, allows them to explore and think critically about how to execute the assembly training step actions.
The first-level hint, consisting of only word instructions, is provided at the first hint request or after a fixed number of repeated mistakes; these word instructions are passed to the trainee via the VR supervisor-to-operator interactive channel, in the form of a virtual panel. Next, the second-level hint (at the second hint request or after a further fixed number of repeated mistakes) consists of not only word instructions, but also visual directions to the relevant items involved in the training step. Lastly, the third-level hint (at the third hint request or after a further fixed number of repeated mistakes) consists of both word instructions and virtual animations of the required assembly training step action; this hint is the same as what is presented in Tutorial No. 3. Figures 14, 15 and 16 illustrate how the VR supervisor guides the trainee during the assembly training (for Stage 3 practice/assessment) with the first-level, second-level and third-level hints respectively.
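A minimal sketch of this three-level escalation is given below; the mistake threshold is an assumed placeholder, and the names are illustrative.

```csharp
// Escalating hint levels, matching the first-, second- and third-level hints above.
public enum HintLevel { None, Words, WordsAndDirections, WordsAndAnimation }

public class HintManager
{
    private const int MistakesPerEscalation = 3;   // assumed "fixed number" of mistakes
    private HintLevel level = HintLevel.None;
    private int wrongActions;

    // Explicit hint request: always escalates (and would also deduct assessment points).
    public HintLevel OnHintRequest() => Escalate();

    // Wrong step action: escalates only after repeated mistakes.
    public HintLevel OnWrongAction()
    {
        wrongActions++;
        return wrongActions % MistakesPerEscalation == 0 ? Escalate() : level;
    }

    // A correct step resets the guidance for the next training step.
    public void OnStepCompleted() { level = HintLevel.None; wrongActions = 0; }

    private HintLevel Escalate()
    {
        if (level < HintLevel.WordsAndAnimation) level++;  // cap at the third-level hint
        return level;
    }
}
```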

Fig. 14 Illustration of the VR assembly guidance via word instructions (first-level hint)

5 Evaluation of the proposed system

5.1 Evaluation objectives and settings

We conducted an evaluation experiment (Evaluation No. 1) to compare the proposed VR training method (Method A) with a common VR training method (Method B) and a conventional physical training method (Method C). The experimental group comprised a total of 30 participants, who had no prior skill or knowledge of the given application (i.e. hybrid medical device assembly). The participants were divided into three groups of 10 (Group A, Group B and Group C), with the groups assigned respectively to try out the three training methods. The aim of this experiment is to demonstrate the effectiveness and efficiency of Method A. This is achieved by comparing, among the various training methods, the average training time taken and the assessment scores obtained by the participants. Besides these quantitative data analyses, a questionnaire survey (Questionnaire No. 1) was also conducted to compare qualitative metrics (e.g. effectiveness, confidence level, enjoyment level) among the various training methods, based on the participants' opinions.

Fig. 15 Illustration of the VR assembly guidance via word instructions and visual directions (second-level hint)

Fig. 16 Illustration of the VR assembly guidance via word instructions and virtual animations (third-level hint)

The experimental procedure for the evaluation of Method A involves three training stages, as mentioned earlier in Section 3.1. The participants involved in this method (Group A) are required to undergo VR tutorials and assessments in Stages 1 and 2, and a VR tutorial, practices and an assessment in Stage 3. In other words, they undergo a wholly VR training experience and do not require any human assistance during the training process. As for Method B, its participants (Group B) are required to undergo a physical tutorial first, followed by VR practices and an assessment during the training process. Unlike Group A, Group B's participants undergo a partial VR training experience. Lastly, for Method C, its participants (Group C) are required to undergo one round each of physical tutorial, interview and assessment. In this case, Group C's participants undergo a wholly physical training experience, as there is no VR training involved. In order to enable a fair analysis of the obtained assessment scores among the various training methods, the same tutorial content and assessment questions are used during the evaluation process for each method.
We also conducted another evaluation experiment (Evaluation No. 2) to compare qualitative metrics among various VR immersion techniques that can be applied to the proposed VR training system. These qualitative metrics include the level of immersion in the virtual environment, the amount of control over the virtual objects, how realistic the system is, the level of comfort during training, the level of enjoyment during the training, and the potential of the technique for the development of an effective VR training system [24]. The selected VR immersion techniques involved in this evaluation process include an HMD with neuron sensors (attached to the right hand and arm only), an HMD with Leap Motion, an HMD with remote, a desktop PC with monitor display and mouse, and an Android tablet with touch-screen display. The aim of this experiment is to analyse and compare the advantages and disadvantages of each VR immersion technique, in order to determine which technique(s) can potentially contribute to the development of an effective VR training system. The 20 participants involved in the VR-related training methods (Methods A and B) are required to try out the various VR immersion techniques. A questionnaire survey (Questionnaire No. 2) was also conducted to obtain the subjective responses of these participants with regard to their VR training experiences for each VR immersion technique. A summary of the entire evaluation process is illustrated in Fig. 17.

Fig. 17 A summary of the entire evaluation process

5.2 Evaluation results – comparison among various training methods

The effectiveness and efficiency of the various training methods are evaluated using quantitative data (i.e. the average training time taken and assessment scores obtained by the participants). Figures 18 and 19 summarize the comparisons, among the various methods, of the average training time taken and the average assessment scores obtained by the participants respectively.
Referring to Fig. 18, we can observe that the average training time taken per participant is the lowest for Method A, followed by Method B, and then Method C. The reductions in the average training time for Method A are 24.4 and 26.7% when compared to Methods B and C respectively. Figure 18 also shows that for each training stage (i.e. tutorial, interview/practice, assessment), the average time taken per participant is lower for Method A than for Methods B and C. An interesting observation is that the average time taken per participant during the tutorial stage is significantly lower for Method A (61.9 mins) than for Methods B and C (90 mins), a reduction of 31.2%. The reason for this is that Method A uses a VR tutorial, whereby individual participants who learn faster can proceed to the next training stage. Methods B and C, on the other hand, use a physical tutorial, whereby the participants attend the tutorial in a classroom setting and are required to stay until the lesson and Q&A sessions are over. The last observation to note is that although the average training time taken per participant is lower for Method B than for Method C, the time reduction of 3% is not significant. A possible explanation is that Method B's VR system does not significantly reduce the interview/practice and assessment times, with time reductions of only 10.3 and 6.6% respectively. This calls into question the effectiveness and efficiency of Method B's one-stage VR training. On the other hand, when comparing the interview/practice and assessment times for Method A to those of Method C, the time reductions for these training stages are more significant (25.4 and 11.1% respectively).

Fig. 18 Comparison of average training time among the various training methods
Referring to Fig. 19, we can observe that the average assessment score per participant is the highest for Method A, followed by Method B, and then Method C. The average assessment score for Method A is 32.0 and 62.0% higher than Method B's and C's scores respectively. While the proposed VR training system (Method A) focuses on a sequential framework that enhances training through its various "game" levels of familiarity-building processes, the common VR training system (Method B) only provides one-stage VR training, and the conventional training system (Method C) does not involve any VR-related training. This set of results therefore demonstrates the effectiveness of Method A's VR system for training new users, compared to Methods B and C.

Fig. 19 Comparison of average assessment scores among the various training methods
The effectiveness of the various training methods is further evaluated and analysed using qualitative ratings based on the participants' subjective responses. Figure 20 summarizes the comparisons of the average ratings for the different qualitative metrics (i.e. effectiveness, confidence, enjoyment, comfort) among the various training methods. Referring to Fig. 20, we can observe that for every qualitative metric, the average rating per participant is the highest for Method A, followed by Method B, and then Method C.
Firstly, Method A's participants on average felt that the proposed VR training system was very effective, Method B's participants on average felt that the common VR training system was moderately effective, and Method C's participants on average felt that the conventional training system was not effective. These results are expected due to the multi-stage VR training in Method A, the one-stage VR training in Method B, and the absence of VR-related training in Method C, as mentioned earlier in the quantitative analysis.
Secondly, Method A's participants on average felt that the proposed VR training system significantly boosted their confidence in handling the assembly tasks, Method B's participants on average felt that the common VR training system fairly increased their confidence, and Method C's participants on average felt that the conventional training system did little to increase their confidence. These results are further evaluated by comparing the readiness levels of the participants among the various training methods, as summarized in Fig. 21. Referring to Fig. 21, it can be observed that 90% of Method A's participants felt ready to proceed to the next training level (i.e. physical training in the lab), whereas only 40% of Method B's participants and 30% of Method C's participants felt ready.
Thirdly, Method A's participants on average felt that the proposed VR training system was significantly enjoyable, Method B's participants on average felt that the common VR training system was moderately enjoyable, and Method C's participants on average felt that the conventional training system was not enjoyable. Based on these results, we deduced that the level of enjoyment stems from the amount of VR involvement in the training system, which explains the low average rating for Method C and the higher average ratings for Methods A and B. The higher average rating for Method A compared to Method B could be due to the longer exposure to the VR system in Method A, as illustrated earlier in Fig. 18.

Fig. 20 Comparison of average ratings for the different qualitative metrics among the various training methods

Fig. 21 Comparison of readiness level of participants among the various training methods

Lastly, Method A's participants on average felt that the proposed VR training system was very comfortable, whereas Method B's and Method C's participants on average felt that their respective training systems were moderately comfortable. Based on these results, we deduced that the level of comfort is inversely related to the time taken to conduct the physical tutorial. This explains the lower average ratings for Methods B and C as compared to Method A: Method A does not involve any physical tutorial, whereas Methods B and C do. Moreover, the time taken to conduct the physical tutorial is the same for Methods B and C, which explains their nearly identical average ratings.
In summary, the quantitative comparisons among the various training methods have shown that the proposed VR training system (Method A) is more effective and efficient than the common VR training system (Method B) and the conventional training system (Method C): its participants took less training time on average and obtained higher average assessment scores than those of Methods B and C. The qualitative comparisons among the various training methods further support the superior effectiveness of Method A, as its average ratings are higher in every metric when compared to the other methods.

5.3 Evaluation results – comparison among various VR immersion techniques

As mentioned earlier in Section 3.4, we have utilized HMD with neuron sensors to orchestrate
the VR training processes for Evaluation No. 1. This section covers the analysis of the results that
were obtained from Evaluation No. 2. For Evaluation No. 2, we also considered various other VR immersion techniques (i.e. HMD with leap motion, HMD with remote, desktop PC
with monitor display and mouse, Android tablet with touch screen display) that can be applicable
to the proposed VR training system. The effectiveness of the various VR immersion techniques is evaluated and analysed using qualitative ratings based on the participants' subjective
responses. Figure 22 summarizes the comparisons of the average ratings for the different
qualitative metrics among the various VR immersion techniques. Figure 23, on the other hand,
illustrates the various VR immersion techniques that are evaluated for Evaluation No. 2.
Fig. 22 Comparison of average ratings for the different qualitative metrics among the various VR immersion techniques

Referring to Fig. 22, we can observe that the participants on average felt most immersed in the virtual environment with the HMD with leap motion and the HMD with neuron sensors techniques, followed by the HMD with remote technique, and then the desktop PC and the Android tablet techniques. Several other metrics were rated in a similar order by the participants, including how realistic the technique is, the level of enjoyment during the training, and the potential of the technique to support the development of an effective VR training system. The reason for these results is that the HMD-related techniques allow the users to view the virtual environment in very close proximity, unlike the desktop PC and Android tablet techniques, which confine the view to a limited-size screen.
Hence, the HMD-related techniques achieve a higher immersion level, more realistic settings, a higher enjoyment level and a higher potential for developing an effective VR training system than the desktop PC and Android tablet techniques.

Fig. 23 Illustration of the various VR immersion techniques: a HMD with neuron sensors, b HMD with leap motion, c HMD with remote, d desktop PC with monitor display and mouse, and e Android tablet with touch screen display

In addition, considering these same metrics, the higher average ratings for the HMD with leap motion and the HMD with neuron sensors techniques (as compared to the other techniques) are due to their superior interaction ability.
These two techniques are armed with supporting devices to match the virtual hand movements
with the users’ hand movements, and thus allowing the users to directly interact with the virtual
objects in a realistic manner. The other techniques, by contrast, only allow the users to interact with the virtual objects through a virtual cursor driven by a remote, a mouse, or a touch screen display, depending on the technique. This also explains why the participants on average felt that the HMD with leap motion and the HMD with neuron sensors techniques best allowed them to control the virtual objects, as compared to the other techniques.
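To make this distinction concrete, the sketch below outlines the kind of per-frame hand retargeting and pinch-grab loop that direct-interaction techniques rely on. The class names, grab threshold and frame structure are assumptions for illustration, not the actual system implementation:

```python
import numpy as np

GRAB_DISTANCE = 0.05  # metres; assumed pinch threshold, not from the paper

class VirtualObject:
    """Minimal stand-in for a grabbable part in the assembly scene."""
    def __init__(self, position):
        self.position = np.asarray(position, dtype=float)

class VirtualHand:
    """Minimal stand-in for the virtual hand model."""
    def __init__(self):
        self.position = np.zeros(3)
        self.rotation = np.array([0.0, 0.0, 0.0, 1.0])  # quaternion (x, y, z, w)
        self.held_object = None

def update_hand(hand, palm_pos, palm_quat, thumb_tip, index_tip, nearby):
    """Map the tracked palm pose (from leap motion or neuron sensors) onto
    the virtual hand each frame, and grab a nearby part on a pinch gesture."""
    hand.position = np.asarray(palm_pos, dtype=float)
    hand.rotation = np.asarray(palm_quat, dtype=float)
    pinch = np.linalg.norm(np.asarray(thumb_tip) - np.asarray(index_tip))
    hand.held_object = nearby if (pinch < GRAB_DISTANCE and nearby) else None
    if hand.held_object is not None:
        hand.held_object.position = hand.position  # held part follows the hand
```

Cursor-based techniques skip this retargeting entirely, which is the design difference underlying the lower interaction ratings discussed above.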
The participants on average felt the most comfortable when it comes to the HMD with leap
motion technique, followed by the desktop PC technique, and then the HMD with neuron
sensors and the HMD with remote techniques. The Android tablet technique is ranked last in
terms of comfort. The HMD with leap motion and desktop PC techniques are ranked highest because they do not require the users to put on any wearables or
to hold on to any device. This is unlike the HMD with neuron sensors, the HMD with remote
and the Android tablet techniques, which respectively require the users to put on wearable
sensors, to hold on to a remote, and to hold on to the Android tablet, hence explaining their
lower average ratings.
In summary, the qualitative comparisons among the various VR immersion techniques have demonstrated the effectiveness of the HMD with leap motion and the HMD with neuron sensors techniques, whose average ratings are higher in almost every metric when compared to the other techniques. One interesting observation is that for almost every
qualitative metric, the average rating per participant for the HMD with leap motion technique
is slightly higher than the average rating for the HMD with neuron sensors technique. The
reason for this could be that the HMD with neuron sensors technique, as configured for this evaluation, uses only six wearable neuron sensors placed at selected locations on the users' right arm/fingers (see Section 3.4). This configuration may limit its effectiveness, as it does not fully
immerse the users into the virtual environment. However, despite its lower average ratings
as compared to the HMD with leap motion technique, we still believe in the superior potential
of the HMD with neuron sensors technique as it can be upgraded to have more wearable
sensors throughout the whole body (i.e. on both sides of the arm, fingers, legs, feet and body).
This modified configuration would fully immerse the users into the virtual environment, significantly increasing its effectiveness.
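A sketch of how such a sensor placement configuration could be expressed is shown below, contrasting the six-sensor right-arm setup used in this evaluation with the proposed full-body upgrade. The sensor identifiers and joint names are illustrative assumptions, not the actual device identifiers:

```python
# Six-sensor configuration used in Evaluation No. 2 (illustrative joint names)
RIGHT_ARM_SENSORS = {
    1: "right_upper_arm", 2: "right_forearm", 3: "right_hand",
    4: "right_thumb", 5: "right_index", 6: "right_middle",
}

# Proposed full-body upgrade: mirror the arm chain and add trunk and legs
FULL_BODY_SENSORS = dict(RIGHT_ARM_SENSORS)
FULL_BODY_SENSORS.update({
    7: "left_upper_arm", 8: "left_forearm", 9: "left_hand",
    10: "spine", 11: "left_thigh", 12: "left_shin", 13: "left_foot",
    14: "right_thigh", 15: "right_shin", 16: "right_foot",
})
```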

6 Discussions and conclusion

As hybrid medical device fabrication involves manual assembly due to its complexity, skill
training in this industry is essential to optimize and expedite the efficiency level of new
assembly operators. The main contributions of this paper are that we (a) proposed and
evaluated a novel VR training system (the VRAGTS) that addresses the underlying training
issues for hybrid medical device assembly by providing trainees with effective, efficient, risk-
free and low cost training, and (b) evaluated and analysed the effectiveness of the various VR
immersion techniques that can be applicable to the proposed VR training system.
By incorporating concepts from AI, VR and gaming, the proposed system is self-adaptive and autonomous, enabling the training to take place in a virtual workcell environment without the supervision of a physical trainer. It also utilizes a unique sequential framework that enhances training through its various "game" levels of familiarity-building processes. As such, the proposed system promotes effective real-time guidance, interaction and enjoyment for trainees during the training process, enabling easy transfer of skills/knowledge to the trainees. Moreover, in a VR environment there are no contamination/safety risks, and no physical equipment, materials or supporting items are required, allowing significant cost savings. With the help of the VR supervisor, the proposed system can also eliminate the need for any physical manager/supervisor, and significantly reduce the learning curve, training period and costs [5].
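As an illustrative sketch, the sequential framework can be viewed as a simple state machine in which a trainee advances through the familiarity-building levels only after meeting each level's criterion. The level names and passing threshold below are assumptions for illustration, not the actual VRAGTS parameters:

```python
# Illustrative level progression; names and threshold are assumptions
LEVELS = ["orientation", "guided_assembly", "unguided_assembly", "assessment"]
PASS_SCORE = 0.8  # assumed fraction of sub-tasks completed correctly

def next_level(current: str, score: float) -> str:
    """Advance to the next familiarity-building level once the trainee
    meets the passing criterion; otherwise repeat the current level."""
    i = LEVELS.index(current)
    if score >= PASS_SCORE and i + 1 < len(LEVELS):
        return LEVELS[i + 1]
    return current  # repeat the level until the criterion is met

print(next_level("guided_assembly", 0.85))  # -> unguided_assembly
```

Gating progression on an explicit criterion is what removes the need for a physical trainer: the system itself decides when a trainee is familiar enough to move on.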
The evaluation results of the proposed VR training system have demonstrated these advantages over the other training methods. Comparing the features among the various training methods, one can observe that the proposed system has an advantage in all aspects over the common VR training and the conventional training methods. Firstly, the higher average assessment scores obtained by its participants, as compared to the other methods, show that the proposed VR training system is able to effectively transfer skills/knowledge to the users. On top of this, the qualitative comparisons among the various training methods further support the superior effectiveness of the proposed VR training system, as its participants on average felt that they could learn better due to the high levels of confidence, enjoyment and comfort experienced during their training. Secondly, the lower average training time taken by its participants, as compared to the other methods, shows that the proposed VR training system is able to efficiently reduce the learning curve and training period. Lastly, the proposed system is a wholly VR system and does not require any human assistance during the training process, eliminating any costs that would be incurred from hiring a physical trainer. This is unlike the common VR training and the conventional training methods, which still require physical trainers to conduct the training. The proven advantages of the proposed framework over the other training methods illustrate its potential to improve current training systems, specifically for complex assembly operations such as the fabrication of hybrid medical devices. The comparison of the features among the various training methods is summarized in Table 3.
We have also evaluated and analysed the effectiveness of the various VR immersion
techniques that can be applicable to the proposed VR training system. Based on our evaluation
results, the most effective techniques are the HMD with leap motion and the HMD with
neuron sensors techniques. The participants on average felt that these techniques enable them
to (a) be deeply immersed in the virtual environment, (b) effectively control the virtual objects,
(c) be in a very realistic setting, and (d) greatly enjoy the training process. As such, they
acknowledge the high potential of these techniques to develop an effective VR training system.

Table 3 Comparison of features among various training methods

Metrics         Features                              VRAGTS    Common VR         Conventional
                                                                training method   training method
Effectiveness   Transferability of skill/knowledge    ✓✓✓✓✓     ✓✓✓               X
                Level of user confidence              ✓✓✓✓✓     ✓✓✓               X
                Level of user enjoyment               ✓✓✓✓✓     ✓✓✓               X
                Level of user comfort                 ✓✓✓✓✓     ✓✓✓               X
Efficiency      Reduced learning curve                ✓✓✓✓✓     ✓✓✓               X
                Reduced training period               ✓✓✓✓✓     ✓✓✓               X
                Training cost savings                 ✓✓✓✓✓     ✓✓✓               X
                No need for physical trainer          ✓✓✓✓✓     ✓✓✓               X

Comparing the level of comfort metric between these two techniques, the HMD with leap
motion technique is rated as more comfortable than the HMD with neuron sensors technique.
This is due to the requirement to put on wearable sensors for the latter technique, as compared
to the former technique that does not require the users to put on any wearables or to hold on to
any device. Another disadvantage of the HMD with neuron sensors technique is that the current configuration of only six wearable neuron sensors on the right arm/fingers does not fully immerse the users into the virtual environment, which explains its slightly lower effectiveness when compared to the HMD with leap motion technique. However, despite these disadvantages, the HMD with neuron sensors technique can be further upgraded with an increased number of wearable sensors throughout the whole body (i.e. on both sides of the arm, fingers, legs, feet and body). We believe in the superior potential of this modified
configuration to fully immerse the users into the virtual environment, which will significantly
increase its effectiveness. As such, the HMD with neuron sensors technique is the most
suitable candidate to develop an effective VR system based on the proposed training method.
The proposed VR training system can also be used for other applications, apart from the
mentioned hybrid medical device assembly training application. For instance, in the healthcare
industry, the proposed system can be utilized to effectively train caregivers on how to use
sophisticated sensors to track and analyse the Activities of Daily Living (ADL) of elderly
people. This will enable the caregivers to provide the elderly with proactive assistance, allowing effective facilitation of their activity tracking and management [29, 30]. Another potential application would be rehabilitation training, whereby the proposed system could be utilized to effectively train patients to perform rehabilitation exercises [3, 29, 43].
In order to develop a complete VR system for hybrid medical device assembly operations,
we shall consider biology-based modelling on top of the currently implemented physics-based
modelling for future work. We also intend to (a) upgrade the configuration of the HMD with
neuron sensors technique to support more wearable sensors throughout the whole body, and
(b) integrate more sophisticated and useful hardware aids like haptic feedback and force-
sensors integrated gloves to enhance the effectiveness and user-experience of our proposed VR
training system. We shall also consider applying video-based 3D human motion tracking
technology to the proposed system [12]. This method, which is based on a novel fusion
formulation that integrates low- and high-dimensional tracking approaches into one frame-
work, will enable us to effectively and efficiently track the user’s movements [12], which can
then be mapped to the virtual environment. Future work also includes further testing our VR
training system on human subjects by applying subjective evaluations during the experiment.
The evaluations conducted in this paper are objective-based, as they investigated the effectiveness and efficiency of the proposed system using operators without any prior relevant knowledge and/or experience of the assembly operation [27], for instance by comparing the time taken to successfully complete the tasks using the proposed system against conventional methods [20]. Subjective evaluations, on the other hand, will aim to investigate the acceptance and potential value-adding elements of the proposed system from the perspective of experienced operators, trainers and managers, who have the ability to compare the proposed system with the conventional training methods [27].

Acknowledgements We wish to acknowledge the contributions from Mr. Rahul Singh Chauhan, Mr. Jayendra
Laxman Zambre and Mr. Ritesh Kumar Agrahari for their assistance in the development of the VR systems. This
project is supported in parts by a MOE FRC Tier 1 Grant from National University of Singapore (WBS: R-265-
000-614-114).

References

1. Alamri A, Eid M, Iglesias R, Shirmohammadi S, Saddik AE (2008) Haptic virtual rehabilitation exercises
for post-stroke diagnosis. IEEE Trans Instrum Meas 57(9):1876–1884
2. Albert DE (2015) Methods for verifying medical device cleanliness. In: Developments in surface contam-
ination and cleaning, volume 7: cleanliness validation and verification. Elsevier, Amsterdam, pp 109–128
3. Boian RF, Sharma AK, Han C, Merians AN, Burdea GC, Adamovich S, Recce ML, Tremaine M, Poizner H
(2002) Virtual reality-based post-stroke hand rehabilitation. Stud Health Technol Inform 85(1):64–70
4. Borst CW, Indugula AP (2006) A spring model for whole-hand virtual grasping. Presence-Teleop VIRT
15(1):47–61
5. Brough JE, Schwartz M, Gupta SK, Anand DK, Kavetsky R, Pettersen R (2007) Towards the development of a
virtual environment-based training system for mechanical assembly operations. Virtual Reality 11(4):189–206
6. Chua M, Chui CK, Chng CB, Lau D (2013) Carbon nanotube-based artificial tracheal prosthesis: carbon
nanocomposite implants for patient-specific ENT care. IEEE Nanotechnol Mag 7(4):27–31
7. Chua M, Chui CK, Chng CB, Lau D (2013) Experiments of carbon nanocomposite implants for patient
specific ENT care. In: The 7th IEEE international conference on nano/molecular medicine and engineering
(NANOMED), Phuket
8. Chua M, Chui CK, Teo C, Lau D (2015) Patient-specific carbon nanocomposite tracheal prosthesis. Int J
Artif Organs 38(1):31–38
9. Ciurana J (2014) Designing, prototyping and manufacturing medical devices: an overview. Int J Comput
Integr Manuf 27(10):901–918
10. Ciuti G, Ricotti L, Menciassi A, Dario P (2015) MEMS sensor technologies for human centred applications
in healthcare, physical activities, safety and environmental sensing: a review on research activities in Italy.
Sensors (Basel) 15(3):6441–6468
11. Connolly J, Condell J, O’Flynn B, Sanchez JT, Gardiner P (2018) IMU sensor-based electronic goniometric
glove for clinical finger movement analysis. IEEE Sensors J 18(3):1273–1281
12. Cui J, Liu Y, Xu Y, Zhao H, Zha H (2013) Tracking generic human motion via fusion of low- and high-
dimensional approaches. IEEE Trans Syst Man Cybern Syst Hum 43(4):996–1002
13. Ding M, Fan G (2016) Articulated and generalized gaussian kernel correlation for human pose estimation.
IEEE Trans Image Process 25(2):776–789
14. Gallagher AG, Satava RM (2002) Virtual reality as a metric for the assessment of laparoscopic psychomotor
skills. Surg Endosc 16(12):1746–1752
15. Gislén L (2016) Alternative design of robot cell concepts for flexible production. Master's thesis, University West, Trollhättan, Sweden
16. Grantcharov TP, Kristiansen VB, Bendi J, Bardram L, Rosenberg J, Funch-Jensen P (2004) Randomized
clinical trial of virtual reality simulation for laparoscopic skills training. Br J Surg 91(2):146–150
17. He Y, Zhang Z, Nan X, Zhang N, Guo F, Rosales E, Guan L (2017) vConnect: perceive and interact with
real world from CAVE. Multimed Tools Appl 76(1):1479–1508
18. Ho N, Ang W, Chui CK (2017) Workcell for hybrid medical device fabrication. In: 3rd CIRP conference on
biomanufacturing, Chicago
19. Hoedt S, Claeys A, Landeghem HV, Cottyn J (2016) Evaluation framework for virtual training within
mixed-model manual assembly. IFAC-PapersOnLine 49(12):261–266
20. Hořejší P (2015) Augmented reality system for virtual training of parts assembly. In: 25th DAAAM
international symposium on intelligent manufacturing & automation, Vienna
21. Hyppölä J, Martínez H, Laukkanen S (2014) Experiential learning theory and virtual and augmented reality
applications. In: 4th global conference on experiential learning in virtual worlds, Prague, Czech Republic
22. Jayaram S, Jayaram U, Kim YJ, DeChenne C, Lyons KW, Palmer C, Mitsui T (2007) Industry case studies
in the use of immersive virtual assembly. Virtual Reality 11(4):217–228
23. Kılıboz NÇ, Güdükbay U (2015) A hand gesture recognition technique for human–computer interaction. J
Vis Commun Image Represent 28(C):97–104
24. Kizony R, Raz L, Katz N, Weingarden H, Weiss PLT (2005) Video-capture virtual reality system for
patients with paraplegic spinal cord injury. J Rehabil Res Dev 42(5):595–608
25. Konig J, Koskela E (2013) The role of profit sharing in dual labour markets with flexible outsourcing.
Labour 27(4):357–370
26. Kusuda Y (2010) IDEC's robot-based cellular production system: a challenge to automate high-mix low-
volume production. Assem Autom 30(4):306–312
27. Langley A, Lawson G, Hermawati S, D'Cruz M, Apold J, Arlt F, Mura K (2016) Establishing the usability
of a virtual training system for assembly operations within the automotive industry. Hum Factors Ergon
Manuf Serv Ind 26(6):667–679
28. Lawson MV (2003) Finite automata. Chapman and Hall/CRC, Boca Raton
29. Liu Y, Nie L, Liu L, Rosenblum DS (2016) From action to activity: sensor-based activity recognition.
Neurocomputing 181(1):108–115
30. Liu Y, Nie L, Han L, Zhang L, Rosenblum DS (2016) Action2Activity: recognizing complex activities from
sensor data. In: Proceedings of the twenty-fourth international joint conference on artificial intelligence,
Buenos Aires
31. Lucke L, Anderson D, Smith D (2009) Harnessing experience for efficient medical device product
development. J Med Devices 3(2):027551
32. Manjiyani ZAA, Jacob RT, R KK, Varghese B (2014) Development of MEMS based 3-axis accelerometer for hand movement monitoring. Int J Sci Res Publ 4(2):1–4
33. Müller BC, Reise C, Duc BM, Seliger G (2016) Simulation-games for learning conducive workplaces: a
case study for manual assembly. In: 13th global conference on sustainable manufacturing - decoupling
growth from resource use, Ho Chi Minh
34. Nee AYC, Ong SK (2013) Virtual and augmented reality applications in manufacturing. In: 7th IFAC
conference on manufacturing modelling, management, and control, Saint Petersburg, Russia
35. Nerem RM, Sambanis A (2007) Tissue engineering: from biology to biological substitutes. Tissue Eng 1(1):
3–13
36. Neto P, Pires JN, Moreira AP (2013) 3-D position estimation from inertial sensing: minimizing the error
from the process of double integration of accelerations. In: IECON 2013 - 39th annual conference of the
IEEE industrial electronics society, Vienna
37. Pilia A, Brun C, Doceul L, Gargiulo L, Hatchressian JC, Keller D, Le R, Poli S, Zago B (2015) Application
of virtual reality tools for assembly of WEST components: comparison between simulations and physical
mockups. Fusion Eng Des 98–99:1589–1592
38. Premkumar T, Geckeler KE (2012) Graphene–DNA hybrid materials: assembly, applications, and prospects.
Prog Polym Sci 37(4):515–529
39. Rahni AAA, Yahya I (2008) Obtaining translation from a 6-DOF MEMS IMU — an overview. In: 2007
Asia-Pacific conference on applied electromagnetics, Melaka
40. Rodriguez L, Quint F, Gorecky D, Romero D, Siller HR (2015) Developing a mixed reality assistance
system based on projection mapping technology for manual operations at assembly workstations. In:
International conference on virtual and augmented reality in education, Monterrey
41. Rose P (2014) Whitepaper: contamination control of medical device cleanroom. Maetrics, Indianapolis
42. Sahithia K, Swethaa M, Ramasamya K, Srinivasanb N, Selvamurugan N (2010) Polymeric composites
containing carbon nanotubes for bone tissue engineering. Int J Biol Macromol 46(3):281–283
43. Saposnik G, Levin M (2011) Virtual reality in stroke rehabilitation: a meta-analysis and implications for
clinicians. Stroke 42(5):1380–1386
44. Schwaitzberg SD, Dorozhkin D, Sankaranarayanan G, Matthes K, Jones DB, De S (2016) Natural orifice
translumenal endoscopic surgery (NOTES): emerging trends and specifications for a virtual simulator. Surg
Endosc 30(1):190–198
45. Seth A, Vance JM, Oliver JH (2011) Virtual reality for assembly methods prototyping: a review. Virtual
Reality 15(1):5–20
46. Shirke S, Chilaveri PG (2013) Traffic light control using accelerometer sensor on ARM platform. Int J Eng
Res Technol 2(9):3124–3126
47. Stančin S, Tomažič S (2011) Angle estimation of simultaneous orthogonal rotations from 3D gyroscope
measurements. Sensors (Basel) 11(9):8536–8549
48. Stratos A, Loukas R, Dimitris M, Konstantinos G, Dimitris M, George C (2016) A virtual reality application to attract young talents to manufacturing. In: 49th CIRP conference on manufacturing systems - CIRP CMS 2016, Stuttgart, Germany
49. Sturman DJ, Zeltzer D (1994) A survey of glove-based input. IEEE Comput Graph Appl 14(1):30–39
50. Suzuki N, Hattori A, Ieiri S, Tomikawa M, Kenmotsu H, Hashizume M (2012) VR training system for
endoscopic surgery robot. In: Augmented environments for computer-assisted interventions. AE-CAI 2011.
Lecture notes in computer science, vol 7264. Springer, Berlin, pp 117–129
51. Tang Z, Sekine M, Tamura T, Tanaka N, Yoshida M, Chen W (2015) Measurement and estimation of 3D
orientation using magnetic and inertial sensors. Adv Biomed Eng 4(1):135–143
52. Vaughan N, Gabrys B, Dubey VN (2016) An overview of self-adaptive technologies within virtual reality
training. Comput Sci Rev 22:65–87
53. Wang Y, Guo S, Li Y, Tamiya T, Song Y (2018) Design and evaluation of safety operation VR training
system for robotic catheter surgery. Med Biol Eng Comput 56(1):25–35
54. Wasfy A, Wasfy T, Noor A (2004) Intelligent virtual environment for process training. Adv Eng Softw
35(6):337–355
55. Xia P, Lopes AM, Restivo MT, Yao Y (2012) A new type haptics-based virtual environment system for
assembly training of complex products. Int J Adv Manuf Technol 58(1–4):379–396
56. Yang Q, Wu DL, Zhu HM, Bao JS, Wei ZH (2013) Assembly operation process planning by mapping a
virtual assembly simulation to real operation. Comput Ind 64(7):869–879
57. Yu Y, Duan M, Sun C, Zhong Z, Liu H (2017) A virtual reality simulation for coordination and interaction
based on dynamics calculation. Ships Offshore Struct 12(6):873–884
58. Yuviler-Gavish N, Krupenia S, Gopher D (2013) Task analysis for developing maintenance and assembly
VR training simulators. Ergon Des 21(1):12–19
59. Zhou J, Malric F, Shirmohammadi S (2010) A new hand-measurement method to simplify calibration in
cyber glove-based virtual rehabilitation. IEEE Trans Instrum Meas 59(10):2496–2504

MR. Nicholas HO is a PhD mechanical engineering student in the Control and Mechatronics Division of the Department of Mechanical Engineering, National University of Singapore. He is currently working on "Digital manufacturing of patient-specific ENT devices" as part of his PhD project topic.

MS. Pooi-Mun WONG is a PhD mechanical engineering student in the Control and Mechatronics Division of the Department of Mechanical Engineering, National University of Singapore. She is currently working on "Cyber-Physical System (CPS)" as part of her PhD project topic.

DR. Matthew CHUA is a lecturer and principal scientist at the NUS Institute of Systems Science, where he is spearheading Medical & Cybernetics Systems. His experience also includes artificial organs, soft robotics for medical applications, intelligent assistive devices for the elderly, and medical implant devices for immediate voice restoration.

DR. Chee-Kong CHUI is a tenured Associate Professor with the Control and Mechatronics Division of the
Department of Mechanical Engineering, National University of Singapore. His technical expertise is in the areas
of biomechanics, computer vision and graphics, image processing and visualization, machine intelligence,
mechatronics and parallel processing.
