Virtual Reality (2020) 24:527–539

https://doi.org/10.1007/s10055-019-00415-8

ORIGINAL ARTICLE

An augmented reality-based training system with a natural user interface for manual milling operations

Chih-Kai Yang1 · Yu-Hsi Chen1 · Tung-Jui Chuang1 · Kalpana Shankhwar1 · Shana Smith1

1 Department of Mechanical Engineering, National Taiwan University, Taipei, Taiwan
* Corresponding author: Shana Smith, ssmith@ntu.edu.tw

Received: 27 July 2019 / Accepted: 27 November 2019 / Published online: 3 December 2019
© Springer-Verlag London Ltd., part of Springer Nature 2019

Abstract
This study developed an augmented reality (AR)-based training system for conventional manual milling operations. An Intel RealSense R200 depth camera and a Leap Motion controller were mounted on an HTC Vive head-mounted display to allow users to walk freely around a room-size AR environment and operate a full-size virtual milling machine with their bare hands, using their natural operation behaviors, as if they were operating a real milling machine in the real world, without additional worn or handheld devices. GPU parallel computing was used to handle dynamic occlusions and to accelerate the machining simulation to real-time performance. Using the developed AR-based training system, novices can receive hands-on training in a safe environment, without any injury or damage. User test results showed that the AR-based training resulted in lower failure rates and fewer inquiries than video training. Users also commented that the AR-based training was interesting and helpful for novices learning basic manual milling operation techniques.

Keywords Augmented reality · Natural operation behavior · Manual milling operation · Occlusion

1 Introduction

Machining operations are often complicated and dangerous. Because of safety issues and the lack of one-to-one training resources, many machine tool training programs are not effective. Some researchers have therefore applied virtual reality (VR) or augmented reality (AR) to machining training to avoid potential injury or damage. For example, Zhang et al. (2008) developed an AR-assisted computer numerical control (CNC) machining simulation system that allowed operators to analyze machining processes on real machines. Neugebauer et al. (2010) created a VR-based numerical control (NC) milling machine system by combining a virtual environment with an actual control panel; the system enabled novices to quickly comprehend complex operation processes and recognize potential errors without any extra cost or danger. Kiswanto and Ariansyah (2013) developed an AR-based 3-axis CNC machining simulation system to machine a virtual workpiece on a real milling machine, with which users could validate their machining processes to avoid collisions. Chardonnet et al. (2017) used a handheld device to display AR images to assist CNC machining; real-time operation conditions and cutting tool motions were displayed on the handheld device.

Although NC machines have many advantages over conventional machine tools, most small-quantity and low-cost projects are still carried out on conventional machine tools, and conventional machining training is an important part of vocational education. Although many prior studies applied AR or VR technologies to machining simulation, most of them targeted NC machines, which do not require heavy manual skills. In contrast, operating a conventional manual machine tool requires a highly skilled operator, yet there has been very little research on applying AR or VR to conventional manual machine tool training.

The purpose of this study was to create a realistic AR-based training system for novices learning to operate a manual milling machine at beginner's level. The system provided a natural user interface that allowed users to interact with the virtual milling machine using their natural operation behaviors, as if they were operating a real machine in the real world, without additional worn or handheld devices. Furthermore, the system used GPU parallel computing to provide dynamic occlusion effects and real-time machining simulation to increase the realism and immersiveness of the simulation.


1.1 Interaction interfaces

In an AR environment, an effective and real-time interaction interface not only enhances the accuracy of the simulation but also increases the sense of presence. Interaction interfaces can be natural or non-natural. Non-natural interactions often use an intermediary stylus or wand, e.g., a PHANToM, Razer Hydra, or HTC Vive controller, to interact with virtual objects. Users need extra time to learn how to use these intermediaries. In addition, the usage of an intermediary often differs from users' natural operation behavior when operating a real machine tool in the real world, so the AR training might not be effective. However, an intermediary device often provides higher controllability and stability.

Some other non-natural interactions use gestures to interact with virtual objects. For example, Shim et al. (2016) developed a gesture-based AR system to interact with virtual objects without extra intermediary devices. However, gesture-based interaction is often neither intuitive nor in line with natural human operation behaviors. In addition, users often need to learn and remember complex gestures.

Unlike non-natural interaction interfaces, natural interactions are more intuitive. Regazzoni et al. (2018) stated that natural user interface design is one of the most important issues in AR/VR applications, and that a good natural user interface allows users to concentrate on achieving the final goals instead of monitoring the correct execution of gestures. Users often do not need to learn how to use complex intermediary devices or gestures; they can move their bodies and interact with objects in a very natural way. For example, Qiu et al. (2013) proposed a VR assembly and maintenance simulation system. The system integrated a data helmet and a data glove into a virtual human control system; users could execute tasks by controlling an avatar in a VR environment to grasp, move, and release virtual objects naturally and interactively. Sportillo et al. (2015) placed color markers on the thumb and the index finger to track hand positions. Through color filtering with an RGB depth camera, users could grab and drag virtual objects naturally in a virtual environment.

Recently, with advances in sensor technology, natural interactions without any worn devices have become possible. For example, Kinect is a body motion sensing device released by Microsoft. It consists of an RGB camera, an infrared emitter, an infrared camera, and a multi-array microphone, which together provide full-body 3D motion tracking, facial recognition, and voice recognition capabilities. Corbett-Davies et al. (2012) used Kinect to capture the depth information of a fixed scene and analyzed the relative locations of the foreground and the virtual objects to achieve intuitive interaction and masking. In contrast to Kinect's full-body tracking, Leap Motion is a hand tracking device which can recognize and track hands and fingers. Weichert et al. (2013) showed that the accuracy of the Leap Motion device was about 0.2 mm for a static setup and 1.2 mm for a dynamic setup.

Penelle and Debeir (2014) promoted the fusion of the 3D data acquired by Leap Motion and Kinect to improve hand tracking performance. By registering the position and orientation of the Leap Motion in the reference frame of the Kinect, the system could accurately detect the interactions between real hands and virtual objects.

Natural user interfaces without worn devices provide users with more intuitive and immersive sensations. However, in the prior research, most of the Leap Motion or Kinect devices were non-movable, which limited the portability and mobility of the interactions in the AR environments. In this research, a portable natural user interface integrating a Leap Motion controller, an Intel RealSense R200 RGB depth (RGB-D) camera, and an HTC Vive HMD was developed. The system allowed users to walk freely around a room-size AR environment to interact with a full-size conventional manual milling machine using their natural operation behaviors.

1.2 Occlusion

Occlusion is another significant issue which determines the degree of presence and the realism of an AR environment. Without occlusion handling, virtual objects are always rendered on the top layer of the color image captured by an RGB color camera, so it is difficult for users to correctly identify the relative positions of the virtual objects. Lu and Smith (2009) accessed the images before and after virtual objects appeared and compared the two images to compute the potential occlusion area for stereo matching; the work divided the image into multiple layers and applied a GPU to reduce the matching cost. Gheorghe et al. (2015) built a virtual CNC machining simulation using AR. They managed the occlusion problem based on prior knowledge of the position and the shape of the real objects in two particular scenarios; however, their method was not applicable to objects which were not in their database.

Leal-Meléndrez et al. (2013) presented a pixel-wise occlusion handling method using Kinect to achieve real-time occlusion. Khattak et al. (2014) fixed a Leap Motion controller on a table to track hand movements and attached an RGB-D (RGB depth) camera to an Oculus HMD to create an AR environment. Their method first captured the static environment and reconstructed it using the RGB-D camera. Then, they compared the depth maps of the dynamic scene and the static scene to handle occlusions using a fragment shader.


Although most of the prior occlusion methods can successfully handle occlusion problems, the requirement for expensive computational resources often resulted in rendering delay, which not only reduced the realism and immersiveness of the AR simulation but also caused discomfort to users. A more efficient and intuitive occlusion method is needed to handle real-time dynamic occlusions in AR applications.

This paper is organized as follows. Section 2 gives an overview of the hardware and software employed in this study. Section 3 describes the unification of the different coordinate systems. Section 4 describes the dynamic occlusion method developed in this study. Section 5 presents the machining simulation and user interface designs. Section 6 gives a user test. Finally, Sect. 7 offers conclusions.

2 System architecture

This study developed an AR-based training system for an LC-1 1/2 series conventional manual milling machine, as shown in Fig. 1. The system architecture is shown in Fig. 2. An HTC Vive HMD was used to track head positions and display AR images. In order to achieve natural operation interactions, the HTC Vive handheld controllers were not used, so that users could operate the virtual milling machine with their bare hands. An Intel RealSense R200 RGB-D camera was used to capture the color images and the depth information of the real scenes. The R200 camera contained three components: an RGB color camera, an infrared laser projector, and two infrared cameras, which obtained the depth information of the real scenes.

Fig. 1 A conventional LC-1 1/2 series milling machine
Fig. 2 System architecture


Leap Motion was used to track hand positions. Leap Motion is a small hand tracking device containing three infrared emitters and two infrared cameras.

To increase users' mobility in the AR environment, the RealSense R200 camera and the Leap Motion controller were fixed on the HTC Vive HMD (as shown in Fig. 3), so that users' hand motions could be captured and viewed according to the users' head position and orientation. This also allowed users to walk freely around a room-size environment to interact with a full-size virtual milling machine using their natural operation behaviors.

Unity3D is a cross-platform game engine developed by Unity Technologies; it can use compute shaders to accelerate computation on the GPU. In this study, Unity3D was used to build the AR environment. Four modules were constructed: a rendering module, an interaction module, a user interface module, and a milling machine simulation module. The rendering module integrated the real scene image and the virtual world scene and ensured that the result could be rendered on the HTC Vive HMD. The interaction module integrated the user motion information tracked by Leap Motion and maintained the logic and rules of the interaction behaviors between users and virtual objects. The user interface module provided text or image instructions to users during the AR-based training. The milling machine simulation module handled the machining logic and provided dynamic machining behaviors.

Two HTC Vive lighthouses were placed at two corners of a 3.0 m × 2.25 m room. When users put on the HTC Vive HMD, the two lighthouses tracked the users' head movements. The HTC Vive handheld controllers were not used in this study, so that users could operate the virtual milling machine with their bare hands.

3 Coordinate unification

In the AR system, each device had its own coordinate system. These coordinate systems needed to be integrated in order to have correct data communication. In this study, the following coordinate systems were conformed to each other:

• Unity3D (virtual world)
• HTC Vive HMD
• Intel RealSense R200 color camera
• Intel RealSense R200 depth camera
• Leap Motion

There were four unification steps: from HTC Vive to Unity3D (M_Vive/Unity), from the R200 color camera to Unity3D (M_R200Color/Unity), from Leap Motion to Unity3D (M_Leap/Unity), and from the R200 depth camera to the color camera (M_depth/color).

3.1 Step 1: from HTC Vive to Unity3D

"SteamVR" from the Unity Asset Store was used to manage the Vive in Unity3D. "CameraRig" in "SteamVR" is a Unity gameobject that controls stereo rendering and head tracking. "CameraRig" contains one Unity camera, anchored at the center between the poses of the left eye and the right eye. Through "CameraRig", the script "SteamVR_TrackedObject" can also be accessed to get the position and the orientation of the HMD between the two lighthouses in each frame. Therefore, the transformation matrix between Vive and the Unity virtual world could be calculated by:

M_Vive/Unity = M_t × M_r × M_s    (1)

where M_Vive/Unity is the transformation matrix from Vive to Unity3D, M_t is the translation matrix, M_r is the rotation matrix, and M_s is the scale matrix (scaling from meters to millimeters).
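To make Eq. (1) concrete, the following minimal NumPy sketch composes a translation, a rotation, and a meter-to-millimeter scale into a single homogeneous transformation. It is only an offline illustration of the matrix bookkeeping, not the Unity/SteamVR code; the pose values are hypothetical stand-ins for what "SteamVR_TrackedObject" would report each frame.

```python
import numpy as np

def translation(t):
    """4x4 homogeneous translation matrix."""
    M = np.eye(4)
    M[:3, 3] = t
    return M

def rotation_z(angle_rad):
    """4x4 homogeneous rotation about the z-axis (any rotation would do here)."""
    c, s = np.cos(angle_rad), np.sin(angle_rad)
    M = np.eye(4)
    M[:2, :2] = [[c, -s], [s, c]]
    return M

def scale(factor):
    """4x4 uniform scale, e.g. 1000 to convert meters to millimeters."""
    M = np.eye(4)
    M[0, 0] = M[1, 1] = M[2, 2] = factor
    return M

# Hypothetical HMD pose (in the running system this comes from SteamVR each frame).
hmd_position_m = np.array([1.2, 1.6, 0.8])   # meters, placeholder values
hmd_yaw_rad = np.deg2rad(30.0)               # placeholder heading

# Eq. (1): M_Vive/Unity = M_t x M_r x M_s, with M_s converting meters to millimeters.
M_vive_unity = translation(hmd_position_m * 1000) @ rotation_z(hmd_yaw_rad) @ scale(1000)

# Map a point given in the Vive (meter) frame into the Unity (millimeter) world.
p_vive = np.array([0.1, 0.0, 0.2, 1.0])
p_unity = M_vive_unity @ p_vive
print(p_unity[:3])
```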

3.2 Step 2: from Intel RealSense R200 color camera to Unity3D

In this study, the Intel RealSense R200 color camera was mounted in the front center of the HMD. Figure 4 shows the relative position of the color camera to the Vive HMD. The transformation matrix between the R200 color camera and Vive was:

M_R200Color/Vive =
⎡ 1  0  0  0     ⎤
⎢ 0  1  0  0.015 ⎥
⎢ 0  0  1  0.07  ⎥
⎣ 0  0  0  1     ⎦

Fig. 3 Leap Motion and the RealSense R200 were fixed on the HTC Vive
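The matrix above is a pure translation describing the fixed mounting offset of the color camera relative to the HMD origin (0.015 m and 0.07 m along the HMD's y- and z-axes). A small sketch, assuming NumPy and a column-vector convention, of building and applying that offset:

```python
import numpy as np

# Fixed offset of the R200 color camera relative to the Vive HMD origin,
# taken from the matrix in the text: 0.015 m along y and 0.07 m along z.
M_r200color_vive = np.array([
    [1.0, 0.0, 0.0, 0.0],
    [0.0, 1.0, 0.0, 0.015],
    [0.0, 0.0, 1.0, 0.07],
    [0.0, 0.0, 0.0, 1.0],
])

# A point expressed in the color-camera frame (hypothetical, in meters).
p_color = np.array([0.0, 0.0, 0.5, 1.0])

# The same point expressed in the Vive HMD frame.
p_vive = M_r200color_vive @ p_color
print(p_vive[:3])   # -> [0.    0.015 0.57 ]
```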


Fig. 4 Relative position of the R200 color camera to Vive

Therefore, the transformation matrix between the R200 color camera and the Unity3D virtual world was:

M_R200Color/Unity = M_Vive/Unity × M_R200Color/Vive = M_t × M_r × M_s ×
⎡ 1  0  0  0     ⎤
⎢ 0  1  0  0.015 ⎥    (2)
⎢ 0  0  1  0.07  ⎥
⎣ 0  0  0  1     ⎦

Now the virtual objects were integrated with the real world, and users could see the integrated scenes through the Vive in Unity3D, as shown in Fig. 5. However, at this stage, there was still no depth information concerning the real-world images.

Fig. 5 Unity3D game view in Vive

3.3 Step 3: from Leap Motion to Unity3D

The relative position of the Leap Motion controller and the R200 color camera was fixed on the Vive HMD. Therefore, the transformation matrix from Leap Motion to Unity3D could be obtained in two steps: the transformation from the Leap Motion controller to the R200 color camera, and from the R200 color camera to Unity3D. In this study, Zhang (2000)'s camera calibration method was used to obtain the exact transformation matrix between the Leap Motion controller and the R200 color camera. Twenty-two pairs of images were sampled by the R200 color camera and the Leap Motion controller at the same time to obtain the transformation matrix, as shown in Fig. 6. In Fig. 6, the right camera was the Leap Motion controller, and the left camera was the R200 color camera. The result is shown below:

M_Leap/R200Color =
⎡ 0.9996  −0.0182  −0.0227   22.3281 ⎤
⎢ 0.0158   0.9944  −0.1041   22.0347 ⎥
⎢ 0.0245   0.1037   0.9943  −40.0447 ⎥
⎣ 0        0        0         1      ⎦

Then, the transformation between Leap Motion and Unity3D could be obtained as follows:

M_Leap/Unity = M_R200Color/Unity × M_Leap/R200Color    (3)

Now, the Leap Motion controller could be used to track users' hands in the AR environment.

3.4 Step 4: from R200 depth camera to R200 color camera

It was necessary to correctly project the depth information onto the corresponding color image to obtain correct occlusion effects. However, there existed a deviation between the depth information and the corresponding color image. The RealSense SDK was used to query the intrinsic and extrinsic parameters of the color streams and depth streams of the RealSense R200 RGB-D camera, giving:

M_depth/color =
⎡  2.0604  −0.0305  −1.7019 ⎤
⎢ −0.0023   2.0683  −0.6136 ⎥
⎣  0       −0.0001   1.0136 ⎦

After M_Vive/Unity, M_R200Color/Unity, M_Leap/Unity, and M_depth/color were obtained, all the coordinate systems were unified in Unity3D.
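The following sketch chains Eqs. (2) and (3) with the constant matrices reported above. It is an illustrative NumPy computation rather than the system's Unity code: M_Vive/Unity is replaced by a placeholder pure-scale matrix, because the real value is rebuilt every frame from the tracked HMD pose, and the translation units of the two printed constant matrices appear to differ (meters versus millimeters), so the numbers only demonstrate the matrix bookkeeping, not a metrically exact result.

```python
import numpy as np

def scale(factor):
    """Uniform 4x4 scale, used here for the meter-to-millimeter conversion."""
    M = np.eye(4)
    M[0, 0] = M[1, 1] = M[2, 2] = factor
    return M

# Placeholder for the per-frame Vive-to-Unity transform of Eq. (1).
M_vive_unity = scale(1000.0)

# Fixed mounting offset of the R200 color camera on the HMD (Sect. 3.2).
M_r200color_vive = np.array([
    [1, 0, 0, 0.0],
    [0, 1, 0, 0.015],
    [0, 0, 1, 0.07],
    [0, 0, 0, 1.0],
])

# Calibrated Leap-Motion-to-color-camera transform reported in Sect. 3.3.
M_leap_r200color = np.array([
    [0.9996, -0.0182, -0.0227,  22.3281],
    [0.0158,  0.9944, -0.1041,  22.0347],
    [0.0245,  0.1037,  0.9943, -40.0447],
    [0.0,     0.0,     0.0,      1.0],
])

# Eq. (2): color camera -> Unity.
M_r200color_unity = M_vive_unity @ M_r200color_vive

# Eq. (3): Leap Motion -> Unity.
M_leap_unity = M_r200color_unity @ M_leap_r200color

# A hand joint reported by Leap Motion (hypothetical coordinates).
joint_leap = np.array([10.0, -5.0, 120.0, 1.0])
joint_unity = M_leap_unity @ joint_leap
print(joint_unity[:3])
```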


Fig. 6  Twenty-two pairs of images for obtaining the transformation matrix

4 Occlusion handling

The real scene captured by the color camera was represented as a 2D image plane in Unity3D, and this image plane carried no relative depth information. Without occlusion handling, virtual objects were therefore always rendered in front of the real scene, as shown in Fig. 7a. Typical occlusion approaches cut out the occluded images and paste the forefront image back into the scene, which is time-consuming and makes real-time dynamic occlusion difficult to achieve. In this study, a more efficient and intuitive method was proposed: overwriting the Z-buffer in the rendering engine of Unity3D.

Figure 8 shows a standard Unity3D rendering pipeline. Before the frame buffer step, per-sample operations perform a series of tests to ensure that the rendering is correct. Among these tests, the depth test checks the depth order of the objects and makes sure that only the forefront objects are drawn in the scene. In this study, before the per-sample operations step, a custom fragment shader was used to overwrite the Z-buffer of the image plane in the pixel processor, so that each pixel would have correct depth information.

Whenever the RealSense R200 RGB-D camera refreshed the captured images, the system mapped the depth data to the color images. Then, both the color information and the depth information were written into the texture of the image plane. After the fragment shader computed the pixel values from the texture, the depths of the real-world objects were reproduced correctly on the image plane. After comparing the depth information of the virtual objects and the real objects in the per-sample operations step, the system rendered the forefront objects and rejected the farther objects.

Most Z-buffer resolutions are 8, 16, 24, or 32 bits. The resolution determines how many bits of a pixel's depth value can be stored: the higher the resolution, the higher the precision, but the more memory is needed. In this research, a full 32-bit Z-buffer was used on the Unity3D PC platform. Figure 7b shows the occlusion result of using the custom fragment shader; the virtual object is correctly occluded by the real hand.

Fig. 7 Occlusion handling: (a) without occlusion handling; (b) with occlusion handling
Fig. 8 Unity3D rendering pipeline
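Conceptually, the custom fragment shader gives the 2D image plane a per-pixel depth taken from the RGB-D camera, so that the standard depth test can decide, pixel by pixel, whether the real scene or the virtual object is in front. The sketch below is a CPU-side NumPy illustration of that per-pixel comparison under a simple "smaller depth wins" convention; it is not the actual Unity shader, and the image arrays are hypothetical.

```python
import numpy as np

def composite_with_occlusion(real_rgb, real_depth, virt_rgb, virt_depth):
    """Per-pixel composite: whichever layer is nearer to the camera wins.

    real_rgb   : (H, W, 3) color image from the RGB-D camera
    real_depth : (H, W)    depth of the real scene, mapped onto the color image
    virt_rgb   : (H, W, 3) rendered color of the virtual objects
    virt_depth : (H, W)    depth buffer of the virtual objects
    Depths use "smaller value = closer"; pixels with no virtual geometry
    are assumed to carry an infinite depth.
    """
    virtual_in_front = virt_depth < real_depth          # per-pixel depth test
    return np.where(virtual_in_front[..., None], virt_rgb, real_rgb)

# Tiny hypothetical 2x2 example: the virtual object covers the right column
# but is behind the real hand in the bottom-right pixel.
real_rgb   = np.zeros((2, 2, 3)) + [0.8, 0.6, 0.5]      # "hand" color
real_depth = np.array([[2.0, 1.0],
                       [2.0, 0.4]])
virt_rgb   = np.zeros((2, 2, 3)) + [0.2, 0.2, 0.9]      # "machine" color
virt_depth = np.array([[np.inf, 0.8],
                       [np.inf, 0.8]])

print(composite_with_occlusion(real_rgb, real_depth, virt_rgb, virt_depth))
```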


5 AR-based training system

5.1 Milling simulation

In this study, a GPU implementation of the marching cubes algorithm was used to accelerate the machining simulation. The marching cubes algorithm is a voxel reconstruction method proposed by Lorensen and Cline (1987). By searching a lookup table, the method establishes an iso-surface mesh for each voxel in a scalar field; the voxel meshes are then connected to reconstruct a 3D object.

In this study, the virtual workpiece was an 80 mm × 80 mm × 80 mm cube represented by 80 × 80 × 80 voxels, so the resolution of the milling simulation was 1 mm. Since the system was intended for novices learning the basic manual milling operations at beginner's level, a resolution of 1 mm was sufficient for the training purpose. The scalar value of each vertex (grid point) of the voxel field was set to 0 or 1, representing whether or not the vertex had been removed by the cutter, and the isolevel was set to 0.5.

Figure 9 shows a flowchart of the milling simulation. The process contains four steps: (1) establishing a scalar field for the workpiece, (2) detecting the relative location of the cutter and the workpiece, (3) using the marching cubes algorithm to update the geometry of the workpiece, and (4) rendering the meshes of the workpiece.

This study first assigned the scalar value of each grid point to the "R" channel of the RGBA color information using the "Texture3D" class in Unity3D. The parallel computing capability of the GPU was used to calculate the relative positions of the cutter and all grid points. If a grid point was within the cutting area, its "R" value was changed to 0. Then, the mesh of each voxel was reconstructed. Finally, using the function "Graphics.DrawProceduralIndirect()" in Unity3D, the system could redraw the scene directly on the GPU, without sending data back to the CPU.

Figure 10 shows a snapshot of the milling simulation. The blue lines in the figure guide the machining process during training. Using the parallel processing capability of the GPU, the machining simulation reached a real-time rate of 30 frames per second.
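As a rough illustration of steps (1)–(3), the sketch below keeps the 80 × 80 × 80 scalar field in a NumPy array and clears the scalar value of every grid point that falls inside a cylindrical end mill; the real system performs the same per-grid-point test in a GPU compute shader and then re-meshes the affected voxels with marching cubes. The grid spacing, cutter model, feed path, and end-mill radius used here are assumptions made only for the example.

```python
import numpy as np

VOXELS = 80          # 80 x 80 x 80 grid points, 1 mm spacing (80 mm cube)
scalar = np.ones((VOXELS, VOXELS, VOXELS), dtype=np.float32)  # 1 = material present

# Grid-point coordinates in millimeters (workpiece corner at the origin).
x, y, z = np.meshgrid(np.arange(VOXELS), np.arange(VOXELS), np.arange(VOXELS),
                      indexing="ij")

def remove_material(cutter_xy, cutter_depth, radius=8.0):
    """Set the scalar value to 0 wherever a vertical cylindrical end mill,
    centered at cutter_xy (mm) and plunged cutter_depth (mm) below the top
    face, overlaps the workpiece."""
    cx, cy = cutter_xy
    inside_radius = (x - cx) ** 2 + (y - cy) ** 2 <= radius ** 2
    inside_depth = z >= (VOXELS - 1) - cutter_depth      # top face is z = 79
    scalar[inside_radius & inside_depth] = 0.0

# Feed the cutter along y at x = 10 mm, 2 mm deep (one hypothetical pass).
for cy in range(0, VOXELS, 2):
    remove_material((10.0, float(cy)), cutter_depth=2.0)

removed = np.count_nonzero(scalar == 0.0)
print(f"grid points removed: {removed}")
# In the real system the updated field lives in the R channel of a Texture3D,
# and marching cubes (isolevel 0.5) rebuilds the surface mesh on the GPU.
```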
5.2 User interface

In order to assist learning, augmented instructions and animations were provided in the system. Figure 11a shows a user's first-person view from within the HTC Vive HMD; the augmented translucent text guides the user through operating the machine step by step. Figure 11b is a third-person view of the system. Users could walk freely around the room-size AR environment to operate the full-size virtual milling machine with their bare hands using their natural operation behaviors. Figure 11c shows an augmented instruction and an auxiliary view that helped users understand parts of the environment that were not in their view. Figure 11d shows an interaction between a user and the virtual objects: users could grasp a virtual object and place it at the proper location. Objects which needed to be activated were also highlighted to remind the users, as shown in Fig. 11e. For complex operations or important milling techniques, the system also provided illustrated 3D animations showing the operation procedures during the training, as shown in Fig. 11f. Users could easily understand the operation procedures and follow the instructions to operate the virtual milling machine step by step.


Fig. 9  Flowchart of the milling simulation

6 User test

6.1 Training session

To verify the capability of the developed AR-based training system, a user test was conducted to measure subjective and objective performance. Twenty subjects, none of whom had any prior milling machine operation experience, were randomly divided into an experiment group and a control group. Subjects in the experiment group were trained using the AR-based training system; they had to follow the AR instructions and complete the specified operations in the system. Subjects in the control group were trained using an education video.

Fig. 10 Real-time machining simulation

Both the AR-based training and the education video training included a complete milling machining operation, which was composed of four main steps: (1) introduction to the machine tools, (2) introduction to the workpiece setup, (3) introduction to the cutter setup, and (4) introduction to the milling task. Both training times were about 30 min.


Fig. 11  User interface of the AR-based training system

Twenty-four hours after completing the training sessions, the participants were asked to complete a practical milling task; they had to complete a real milling task with the correct operation steps. After that, subjects in the experiment group were asked to watch the same education video. Finally, subjects in the experiment group were asked to fill out a subjective questionnaire and a System Usability Scale (SUS) analysis (Brooke 1996). The training flowchart is shown in Fig. 12.

6.2 Practical milling operation session

Before operating the real milling machine, subjects needed to wear safety equipment and to understand the safety rules.


Fig. 13  A sample product

During the test, whenever the subjects encountered any problem or forgot any operation step, they could ask for help. Each subject was given a 50 mm × 50 mm × 25 mm workpiece and a two-flute end mill cutter with a 16 mm diameter. A finished sample product and its dimensions were given to the subjects for reference, and the subjects were asked to machine the workpiece to the same geometry as the given sample product. The finished product should include two adjacent open slots: one 16 mm wide, 2 mm deep, and 10 mm away from the reference plane; the other 8 mm wide, 1 mm deep, and 26 mm away from the reference plane, as shown in Fig. 13. The evaluation consisted of three parts.

In Part 1 (Workpiece), the participants had to correctly fasten the workpiece on the vise of the milling machine. This part included four steps: (1) put the baseplate on the vise, (2) put the workpiece on the baseplate, (3) fasten the vise to hold the workpiece, and (4) knock the workpiece with a rubber hammer to remove the gap between the plate and the workpiece.

In Part 2 (Cutter), the participants had to correctly install the cutter assembly. This part included five steps: (1) assemble the collet nut and the collet, (2) put the milling cutter in the collet, (3) move the lever knob to the "IN" position to lock the rotation of the chuck, (4) fasten the cutter assembly on the chuck with a C spanner, and (5) move the lever knob to the "OUT" position.

In Part 3 (Milling), the subjects had to complete the milling task with the correct milling operations. This part included five steps: (1) turn on the milling machine in the clockwise rotation mode, (2) rotate the x–y–z handles to adjust the cutter to the proper location, (3) machine the workpiece in the clockwise direction against the feeding direction, (4) machine the workpiece to the correct width, and (5) machine the workpiece to the correct depth.

Fig. 12 Training flowchart

6.3 Milling performance

After the 20 participants completed the user test, their performance was evaluated on three items: failure rate, the number of inquiries, and the time taken. The recorded time did not include the time spent on inquiries. Since none of the participants had any prior milling experience and this was their first time operating a real milling machine, the dimensional accuracy was not rigorously evaluated. The allowances on the locations and widths of the two slots were ±4 mm, and the allowances on the depths of the two slots were ±0.5 mm. Table 1 shows the evaluation results. The independent-samples t test was used to check whether there were significant differences in failure rate, inquiry number, and time taken between the control group and the experiment group.

In Part 1 (Workpiece), there were no significant differences between the control group and the experiment group in any of the three evaluation items, perhaps because the steps in Part 1 were easy and fast. In Part 2 (Cutter), there was a significant difference in the number of inquiries: subjects in the video training group had more questions concerning the setup sequence of the cutter, collet, and collet nut. In Part 3 (Milling), the failure rate in the video training group was significantly higher. Subjects in the video training group tended to use the center of the cutter, instead of the edge of the cutter, as the reference, which resulted in incorrect milling dimensions. The number of inquiries in the video training group was also significantly higher than in the AR-based training group.

Table 1  Results of the practical milling task

Task operation                                                             Failure rate (%)    Inquiry number (avg.)    Time taken (s)
                                                                           Video     AR        Video     AR             Video     AR
Part 1 (Workpiece)                                                                             1.1       0.7            107       85.78
  Put the baseplate on the vise                                            0         0
  Put the workpiece on the baseplate                                       0         0
  Fasten the vise                                                          0         10
  Knock the workpiece using a rubber hammer                                0         0
  p value                                                                  0.339               0.213                    0.163
Part 2 (Cutter)                                                                                2.4       1.1            148.2     145.1
  Assemble the collet nut and the collet                                   10        10
  Put the milling cutter in the collet                                     10        10
  Move the lever knob to the "IN" position                                 10        10
  Fasten the nut on the collet chuck with a C spanner                      10        10
  Move the lever knob to the "OUT" position                                0         10
  p value                                                                  0.646               0.023                    0.563
Part 3 (Milling)                                                                               1.6       0.7            1018.8    913.3
  Switch on the milling machine                                            0         0
  Rotate the x–y–z handle to adjust the cutter to the proper position      0         0
  Machine the workpiece in the clockwise direction against the feeding
  direction                                                                50        20
  Complete the milling assignment with the correct width                   40        10
  Complete the milling assignment with the correct depth                   50        0
  p value                                                                  0.003               0.001                    0.28
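The group comparisons in Table 1 rely on an independent-samples t test. A minimal SciPy sketch of such a comparison is shown below; the two arrays are hypothetical per-subject inquiry counts (ten subjects per group) and are not the study's raw data, which the paper does not report.

```python
from scipy import stats

# Hypothetical per-subject inquiry counts for one task part (10 subjects each);
# the actual raw data behind Table 1 are not published.
video_group = [3, 2, 2, 4, 1, 3, 2, 3, 2, 2]
ar_group    = [1, 1, 2, 0, 1, 2, 1, 1, 0, 2]

t_stat, p_value = stats.ttest_ind(video_group, ar_group)
print(f"t = {t_stat:.3f}, p = {p_value:.3f}")
# A p value below 0.05 would indicate a significant difference between groups.
```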

The results showed that AR-based training with a natural user interface helped users form a deeper impression of the practical operation processes. In contrast, the video training was not interactive, so those subjects might not have gained sufficient subjective experience of how to operate a real milling machine.

6.4 Subjective questionnaire analysis

After completing the practical milling task, subjects in the experiment group were asked to watch the same education video. Then, they were asked to fill out a subjective questionnaire and an SUS questionnaire. For the subjective questionnaire, a 7-point Likert scale was used, with 1 representing "strongly disagree" and 7 representing "strongly agree". The questionnaire contained 18 questions. Questions 1–8 concerned the interaction mode of the AR system; questions 9–12 concerned the occlusion effects; questions 13–16 concerned the instruction mode in the AR system; and questions 17 and 18 were the subjective comparison between the video training and the AR-based training. The results of the subjective questionnaire are given in Table 2.

Regarding the interaction mode, most scores were higher than 6, except for questions 4, 5, and 7. These three questions were related to the interaction between real hands and virtual objects. One problem might be that, although the depth information of the real scenes could be obtained using the RGB-D camera, the real-world images rendered to the users were still 2D images, not in stereoscopic format; therefore, it was difficult for the users to judge the distance between the virtual objects and the real hands. Another problem might be the tracking error of Leap Motion: if users moved their hands too fast or rotated their hands to certain angles, Leap Motion failed to track the hands correctly.

Subjects generally believed that the color changing features and the augmented instructions were helpful in understanding the milling task. Finally, most subjects agreed that AR-based training was more helpful and interesting than video training.

6.5 System usability scale analysis results

The SUS analysis was based on the metric developed by John Brooke (Brooke 1996). The SUS metric provides a reliable usability evaluation for a product and consists of 10 questions. Tullis and Stetson (2004) showed that, using the SUS metric, a sample size of around 8–12 participants is enough to give a reasonably accurate measure of the usability of a system.


Table 2  Subjective questionnaire results

                                                                                                  Mean    SD
1   The AR system was easy to operate                                                            6       1.20
2   The milling operations in the AR system were realistic                                       6.3     0.65
3   The milling operations in the AR system were accurate                                        6.6     0.51
4   The grabbing and moving of virtual objects in the AR system were stable and robust           5.1     1.38
5   The interactions in the AR system were real time                                             5.7     1.23
6   The interactions in the AR system were intuitive                                             6.2     0.72
7   It was easy to use the AR system to complete the milling operations                          5.7     1.44
8   I would like to use this AR system again to complete similar tasks                           6.5     0.67
9   The occlusion feature in the AR system was helpful                                           6.7     0.65
10  The occlusion feature made the AR system more realistic                                      6.7     0.65
11  The occlusion feature made the AR system more accurate                                       6.7     0.65
12  The occlusion feature made the AR system more intuitive                                      6.7     0.65
13  The color changing feature in the AR system was helpful                                      6.6     0.67
14  The augmented instructions in the AR system were helpful                                     6.8     0.45
15  It was helpful to learn the milling steps by using the AR-based training system              6.6     0.51
16  It was helpful to operate a real milling machine after using the AR-based training system    6.6     0.51
17  AR-based milling machining training was more helpful than the video training                 6.6     0.67
18  AR-based milling machining training was more interesting than the video training             6.6     0.67

Table 3 shows the results of the SUS analysis. The developed AR-based training system received an overall mean score of 79.6, which is above the average mean score of 68 (Tullis and Stetson 2004), indicating that the developed AR-based training system was easy to use.

Table 3  Questionnaire results for SUS

                                                                                                  Mean    SD
I think that I would like to use this system frequently                                          6.0     1.54
I found the system unnecessarily complex                                                         2.3     1.50
I thought the system was easy to use                                                             5.7     1.56
I think that I would need the support of a technical person to be able to use this system        4.2     1.64
I found the various functions in this system were well integrated                                6       0.85
I thought there was too much inconsistency in this system                                        2.4     1.24
I would imagine that most people would learn to use this system very quickly                     5.6     1.38
I found the system very cumbersome to use                                                        2.2     1.03
I felt very confident using the system                                                           5.6     1.51
I needed to learn a lot of things before I could get going with this system                      2.8     1.53
Total score                                                                                       79.6


7 Conclusion

Manual milling machine operation training is an important job training in vocational education. However, most prior AR-based milling applications were for NC machine operations, which do not need heavy manual skills. Conventional manual milling operations require skillful operators to produce a quality product. In this study, an AR-based training system for conventional manual milling machine operations was developed to provide novices with a hands-on, but safe, training environment. Users can operate a full-size virtual milling machine using their natural operation behaviors, without any worn or handheld intermediaries.

The system was portable and allowed users to walk around a room-size environment. Dynamic occlusion was realized using a Unity3D fragment shader on the GPU to increase the realism and immersiveness of the simulation. GPU parallel processing was used to implement the marching cubes algorithm and achieve real-time simulation at 30 frames per second. With the augmented instructions and illustrated 3D animations, users were able to learn and practice the milling operations step by step, without any fear of injury or damage. In addition, by presenting users' physical hands in the view, the sense of detachment from the AR environment was eliminated.

The user test results showed that users tended to have a deeper impression after taking the AR-based training. The failure rate in the milling operations, the number of inquiries in the cutter setup, and the number of inquiries in the milling operations were significantly lower for the AR-based training than for the video training.

Although the occlusion handling greatly improved the interaction effects, users still had trouble grasping virtual objects with their bare hands. This was because the color images of the real scenes were rendered as 2D images, so users could not use their binocular vision to determine the relative positions of the virtual objects and the real objects. In the future, stereo color cameras will be used to provide stereoscopic color images of the real scenes in the AR environment. In addition, the current system was designed to train novices in the basic manual milling machine operations at beginner's level; in the future, training at a professional level will be added. The resolution of the marching cubes will also be increased to improve the accuracy of the simulation.

Acknowledgements The authors would like to thank the Ministry of Science and Technology, Taiwan, Republic of China for financially supporting this research under Contract MOST 108-2221-E-002-161-MY2.

References

Brooke J (1996) SUS: a quick and dirty usability scale. Usability Eval Ind 189(194):4–7
Chardonnet J-R, Fromentin G, Outeiro J (2017) Augmented reality as an aid for the use of machine tools. In: The 15th management and innovative technologies (MIT) conference, Sinaia, Romania, pp 1–4
Corbett-Davies S, Green R, Clark A (2012) Physically interactive tabletop augmented reality using the Kinect. In: Proceedings of the 27th conference on image and vision computing, Dunedin, New Zealand, pp 210–215
Gheorghe C, Rizzotti D, Tièche F, Carrino F, Khaled OA, Mugellini E (2015) Occlusion management in augmented reality systems for machine-tools. In: International conference on virtual, augmented and mixed reality, Los Angeles, CA, USA, pp 438–446
Khattak S, Cowan B, Chepurna I, Hogue A (2014) A real-time reconstructed 3D environment augmented with virtual objects rendered with correct occlusion. In: 2014 IEEE games media entertainment, Toronto, ON, Canada, pp 1–8
Kiswanto G, Ariansyah D (2013) Development of augmented reality (AR) for machining simulation of 3-axis CNC milling. In: International conference on advanced computer science and information systems (ICACSIS), Bali, Indonesia, pp 143–148
Leal-Meléndrez JA, Altamirano-Robles L, Gonzalez JA (2013) Occlusion handling in video-based augmented reality using the Kinect sensor for indoor registration. In: Proceedings of the 18th Iberoamerican congress CIARP 2013 on progress in pattern recognition, image analysis, computer vision, and applications, Havana, Cuba, pp 447–454
Lorensen WE, Cline HE (1987) Marching cubes: a high resolution 3D surface construction algorithm. ACM SIGGRAPH Comput Graph 21(4):163–169
Lu Y, Smith S (2009) GPU-based real-time occlusion in an immersive augmented reality environment. J Comput Inf Sci Eng 9(2):024501
Neugebauer R, Klimant P, Wittstock V (2010) Virtual-reality-based simulation of NC programs for milling machines. In: Proceedings of the 20th CIRP design conference on global product development, Ecole Centrale de Nantes, Nantes, France, pp 697–703
Penelle B, Debeir O (2014) Multi-sensor data fusion for hand tracking using Kinect and Leap Motion. In: Proceedings of the 2014 virtual reality international conference, Laval, France, pp 1–7
Qiu S, Fan X, Wu D, He Q, Zhou D (2013) Virtual human modeling for interactive assembly and disassembly operation in virtual reality environment. Int J Adv Manuf Technol 69:9–12
Regazzoni D, Rizzi C, Vitali A (2018) Virtual reality applications: guidelines to design natural user interface. In: Proceedings of the ASME 2018 international design engineering technical conferences and computers and information in engineering conference, Quebec, Canada, DETC2018-85867
Shim J, Yang Y, Kang N, Seo J, Han T-D (2016) Gesture-based interactive augmented reality content authoring system using HMD. Virtual Real 20(1):57–69
Sportillo D, Avveduto G, Tecchia F, Carrozzino M (2015) Training in VR: a preliminary study on learning assembly/disassembly sequences. In: International conference on augmented and virtual reality, Lecce, Italy, pp 332–343
Tullis TS, Stetson JN (2004) A comparison of questionnaires for assessing website usability. In: Usability professional association conference, Minneapolis, Minnesota, USA, pp 1–12
Weichert F, Bachmann D, Rudak B, Fisseler D (2013) Analysis of the accuracy and robustness of the Leap Motion controller. Sensors 13(5):6380–6393
Zhang Z (2000) A flexible new technique for camera calibration. IEEE Trans Pattern Anal Mach Intell 22(11):1330–1334
Zhang J, Ong S-K, Nee AY (2008) AR-assisted in situ machining simulation: architecture and implementation. In: Proceedings of the 7th ACM SIGGRAPH international conference on virtual-reality continuum and its applications in industry, p 26

Publisher's Note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

13
