Surg Endosc (2018) 32:62–72
https://doi.org/10.1007/s00464-017-5634-6

Can a virtual reality surgical simulation training provide a self-driven and mentor-free skills learning? Investigation of the practical influence of the performance metrics from the virtual reality robotic surgery simulator on the skill learning and associated cognitive workloads

Gyusung I. Lee · Mija R. Lee

Received: 6 January 2017 / Accepted: 1 June 2017 / Published online: 20 June 2017
© Springer Science+Business Media, LLC 2017

Presented at the 2015 Clinical Congress, Chicago, Illinois, October 7th, 2015.

Correspondence: Gyusung I. Lee, gyusunglee@gmail.com
Department of Surgery, Johns Hopkins University School of Medicine, 600 North Wolfe Street, Blalock 1204, Baltimore, MD 21287, USA

Abstract

Background While it is often claimed that a virtual reality (VR) training system can offer self-directed and mentor-free skill learning using the system's performance metrics (PM), no studies have yet provided evidence-based confirmation. This experimental study investigated to what extent trainees achieved self-learning with a current VR simulator and whether additional mentoring improved skill learning, skill transfer, and cognitive workloads in robotic surgery simulation training.

Methods Thirty-two surgical trainees were randomly assigned to either the Control Group (CG) or the Experiment Group (EG). While the CG participants reviewed the PM at their discretion, the EG participants received explanations about the PM and instructions on how to improve their scores. Each subject completed a 5-week training using four simulation tasks. Pre- and post-training data were collected using both a simulator and a robot. Peri-training data were collected after each session. Skill learning, time spent on PM (TPM), and cognitive workloads were compared between groups.

Results After the simulation training, the CG showed substantially lower simulation task scores (82.9 ± 6.0) compared with the EG (93.2 ± 4.8). Both groups demonstrated improved physical model task performance with the actual robot, but the EG had a greater improvement in two tasks. The EG exhibited lower global mental workload/distress, higher engagement, and a better understanding of how to use the PM to improve performance. The EG's TPM was initially long but shortened substantially as the group became familiar with the PM.

Conclusion Our study demonstrated that the current VR simulator offered limited self-skill learning and that additional mentoring still played an important role in improving robotic surgery simulation training.

Keywords Virtual reality · Simulation · Robotic surgery · Training · Performance metrics · Mentoring

The use of robotic surgery systems is rapidly and continuously increasing as more surgical specialties have started using this new technology in their patient care [1–7]. Approximately 570,000 robotic procedures were performed in 2014, representing an increase of about 9% since 2013. This substantial rise is often attributed to several unique features possessed by robotic surgical systems but not present in traditional laparoscopy: notably, 3D stereo visualization, increased degrees of freedom, motion scaling, and tremor reduction [8, 9]. While robotic surgery provides several benefits in patient care, the performance of actual robotic surgery, especially by novice surgeons, may be challenging. For instance, surgeons remotely manipulate very powerful robotic arms without tactile feedback because robotic surgery is, by nature, a remote surgery. This challenge may pose a higher risk to the patient; therefore, robotic surgeons are required to learn how to operate the robotic surgery system accurately and carefully to maximize the benefits of robotic surgery while maintaining patient safety [10, 11].

Surgical education plays a very important role in safe patient care. There has been a growing need for more effective and efficient surgical training due to increasingly restrictive limits on resident duty hours (i.e., an 80-h maximum and no more than 24 h of continuous duty), which result in increased workload and fewer hours for education [12–14]. As a potential solution, virtual reality (VR) technology has become a popular tool in minimally invasive surgery (MIS) education during the past decade, largely due to the increase in commercially available surgical trainers and simulators [15–17]. As a result, VR surgical education has been integrated into MIS training at numerous residency and fellowship programs [18–20]. These VR simulators provide trainees not only with easy access to various training modules that often provide very realistic recreations of both the human anatomy and relevant surgical tasks but also with performance evaluation metrics enabling immediate feedback to trainees following each task performance, something not always readily available or provided in a lab training setup [21–23].

Several research studies have demonstrated that the VR training system is a useful training tool for providing sufficient surgical skill learning in robotic surgery (content validity) [24, 25]. It has also been shown that the VR training environment and the console experience are very realistic (face validity), that the training models are effective in distinguishing between novice and expert surgeons (construct validity), and that the performance scores from the VR system correlate well with experts' assessments (concurrent validity) [26–29].

The da Vinci robotic surgical system from Intuitive Surgical, Inc., offers a VR training option which provides a comprehensive set of performance metrics (PM) for evaluating trainee practice. These PMs are designed to assess trainees' task performance levels quantitatively and to show the performance components requiring skill improvement. Several research studies revealed that the use of these metrics was very beneficial, especially for criterion-based robotic basic skill training [24, 30]. Considering trainees' potential benefit from the PM provided by VR surgical simulators, it is often claimed that VR simulators could provide trainees with a self-driven and mentor-free surgical skill learning environment; however, no evidence-based data yet support such a claim. Additionally, the system's provision of feedback on how specifically trainees can improve the scores of poorly performed metrics may still be limited.

As there is demand for well-structured robotic training curricula providing more effective and efficient robotic surgery training, and VR training can be a good option [31–36], the true value of the VR simulator in robotic surgery education should be carefully examined. No research studies have investigated how carefully surgical trainees interact with the PM available from the VR surgical simulator and how actively the trainees use the information available from the PM for their skill improvement. It has not been thoroughly examined whether surgical trainees understand the meaning of each PM provided by a VR robotic surgical skill trainer after each task performance and, more importantly, whether surgical trainees know how they can improve particular PM scores to improve their overall skill learning.

While mentoring or coaching that provides feedback to surgical trainees is considered a crucial component of surgical education, a recent publication by the Association for Surgical Education noted that the best methods of feedback or debriefing have not yet been established and that continued research on different feedback deliveries should be conducted [37]. While several research studies demonstrated that mentoring or coaching could be valuable in surgical training, including VR simulation [38–41], no studies have yet investigated the effectiveness of mentoring on skill learning during VR robotic surgery training.

The goal of this study was to examine whether the current VR simulation training with the PMs from the simulator could provide a sufficient self-learning environment for surgical trainees and to investigate whether additional mentoring is still required to improve the trainees' skill learning and transfer and their cognitive workloads in robotic surgery education. We hypothesized that surgical trainees who used the PMs at their discretion during robotic VR simulation training would not achieve high skill levels, whereas surgeons trained on the same system who received additional mentoring explanations and instructions would demonstrate faster skill acquisition, higher performance scores, better learning transfer to performance with an actual robotic system, and substantially lower mental workload.

Methods

Participants

This IRB-approved study was performed in the Robotic Surgery Training Laboratory of the Minimally Invasive Surgical Training and Innovation Center (MISTIC) at Johns Hopkins University School of Medicine (JHSOM). Thirty-two (n = 32) surgical residents from different specialties at the JHSOM were recruited for this study. Informed consent was obtained from all participants. Participants ranged from 30 to 42 years of age, were residents or fellows with no previous robotic training experience, and were randomly assigned to one of two groups: the Control Group (CG, n = 16) or the Experiment Group (EG, n = 16).


Training protocol

Participants were engaged in a training program lasting 5 weeks. A detailed training and evaluation plan is summarized in Table 1. Once potential study participants were identified, an email was sent to them providing a link to a server where example performance videos were available. These video files included example performances of the four VR simulation tasks and the four physical model tasks. Subjects were instructed to watch these videos before coming to their orientation sessions to learn about the tasks which they would later perform for skill learning and evaluation. The purpose of these videos was to help participants understand the individual tasks and their goals, but they did not necessarily demonstrate the expert way to perform each task. During the orientation sessions, these videos were reviewed again with the research team member to make sure the participants were familiar with the VR simulation and physical model tasks before their actual involvement in the study. The research team member provided a didactic orientation addressing the brief history of robotic surgery, its nature as a remote surgery, the main system components, and how to use the surgeon console, including the master controllers, clutch buttons, camera pedals, and ergonomic adjustment settings. Once it was confirmed that each participant had no questions regarding the tasks and goals, subjects tried each of the tasks once with no scores recorded.

When a VR simulation task was about to be launched, the simulator displayed the purpose and objectives of the task, and this information page was reviewed with the research team member so that all participants, regardless of their group assignments, could have a clear understanding of the tasks. After the completion of each simulation task, the simulator displayed the performance evaluation pages, including the PMs. During this orientation, no explanations were provided by the research team member, because only the EG participants were to receive additional mentoring later during their study participation. All of these initial instructions and explanations were given to the participants in both the CG and the EG and were not part of the mentoring which this study investigated.

After the orientation, the pre-training evaluation was performed for each participant; the details of this evaluation are described in "Pre-, peri-, and post-training evaluations." Each participant then attended the 5-week training sessions shown in Table 1. During each session, participants repeated each VR simulation training task three times. After completing the training, participants were asked to repeat the four training tasks for evaluation (described below). Surgeons were trained using a standardized number of repetitions (not training time) to better accommodate anticipated time differences between training methods and between participants who did or did not use the provided feedback.

Table 1 Training/evaluation protocol and tasks

Week 0: Watching the example videos of the four simulation training tasks and the four physical model tasks.
Week 1: 1. Introduction and system orientation: basic learning about system master manipulation, control pedals, clutches, instruments, and camera navigation. 2. Review of the simulation and physical model tasks and a single trial with no score recording. 3. Pre-training data collection (simulator and da Vinci system): performance of the four simulation training tasks at the da Vinci Skills Simulator; performance of the four physical model tasks with the da Vinci robot system; baseline performances and mental workloads were measured. 4. The 32 subjects were randomly assigned to the EG or the CG.
Week 2: Skill training at the da Vinci Skills Simulator.
  EG (16 subjects): 1. First performance of the four simulation tasks. 2. Mentoring provision. 3. Second performance of the four simulation tasks. 4. Mentoring provision. 5. Third performance of the four simulation tasks. 6. Peri-training evaluation. 7. Survey.
  CG (16 subjects): 1. First performance of the four simulation tasks. 2. Metric review at their own discretion. 3. Second performance of the four simulation tasks. 4. Metric review at their own discretion. 5. Third performance of the four simulation tasks. 6. Peri-training evaluation. 7. Survey.
Weeks 3–4: Skill training and peri-training evaluation (same as week 2).
Week 5: 1. Same as week 2. 2. The peri-training evaluation data collected in week 5 were used as the post-training evaluation data for the simulator. 3. Post-training evaluation data collection with the da Vinci robot system.
Training and evaluation tasks

Training sessions included four selected VR training tasks (Camera Targeting 2, Ring and Rail 2, Thread the Ring, and Continuous Suturing) at the da Vinci Skills Simulator (Table 2). These four training tasks from the simulator were used both for skills training and for evaluation. The tasks were selected based on published data documenting their construct and content validities [26, 35, 36]. This selection was also influenced by the four categories of essential robotic skills (camera navigation and clutch control, wrist manipulation, needle driving, and suturing) identified at the Fundamentals of Robotic Surgery (FRS) consortium meetings. Additionally, four physical model tasks (Lazy Susan tower, Loopy Rollercoaster, custom-developed needle passing and driving, and custom-developed tension running suturing), considered compatible with the selected simulation tasks in terms of the skills required to perform them, were identified (Table 2). These physical model tasks were used for the pre- and post-training evaluations to examine how much of the skill learning from the VR simulation training models would transfer to actual robot performance on these physical model tasks. The physical model tasks were not used as part of the training because a comparison between VR simulation training and physical model training was not within this study's scope. Only the VR simulation training tasks were used for participants' skill learning.

Table 2 VR training tasks and physical model tasks

Training category | VR simulator task | Physical model task
Camera and clutching | Camera Targeting 2 | Lazy Susan tower
Endo-wrist manipulation | Ring and Rail 2 | Loopy Rollercoaster
Needle driving | Thread the Ring | Custom needle passing/driving
Suturing | Continuous Suturing | Custom tension running suturing

Pre-, peri-, and post-training evaluations

To investigate the learning curves and the changes in mental workload experienced by each participant throughout the training program, pre-, peri-, and post-training evaluations were conducted. Two pre-training evaluations were performed: one with the VR simulator and another with the da Vinci robot. The pre-training evaluation at the VR simulator consisted of all subjects performing each of the four VR simulation tasks once before beginning the training program. The simulator's PM scores were recorded, and mental workloads were assessed using several cognitive tools described in detail in "Cognitive workload assessments." These data comprised the participants' baseline dataset. The additional pre-training evaluation at the da Vinci robot consisted of all subjects performing the four physical model tasks, which were different from but relatable to the four VR simulation training tasks. Because of the lack of automated PM with the da Vinci robot, we recorded the scope view during the performance evaluation and quantified task performance based on metrics including time, accuracy, the number of errors, and how often an instrument was moved out of the camera view. Mental workload was assessed as well.

Peri-training evaluations were performed only with the VR simulator. During each session, participants repeated each training task two times. After these repetitions, participants were asked to repeat the four training tasks one more time for evaluation. The participants' PM scores were recorded, and their mental workloads were assessed using the same methods used in the pre-training evaluation.

Finally, post-training evaluations took place immediately following the last training session of the 4-week training program. Using the same evaluation methods used in the pre-training evaluation, each participant's performance and mental workload were measured at both the VR simulator and the da Vinci system for the post-training evaluation.

Feedback provision

To maintain consistency in providing guidance, a set of explanations and instructions associated with each PM was documented prior to subject recruitment. These explanations and instructions were then provided uniformly to all EG trainees. When an individual metric did not receive a green check-mark symbol but instead a yellow triangle or a red X mark, feedback for improving that metric score was provided to the trainee.
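To make the rule above concrete, the following sketch models a pre-documented feedback set keyed by metric name, returning instructions only for metrics flagged with a yellow triangle or a red X rather than a green check mark. It is a minimal illustration under assumed names: the MetricStatus values, the metric keys, and the instruction texts in FEEDBACK_SET are hypothetical and are not the study's actual wording or the simulator's interface.

```python
# Hypothetical sketch of a documented, uniform feedback set (see hedges above).
from enum import Enum


class MetricStatus(Enum):
    GREEN_CHECK = "green_check"          # metric met its target; no feedback needed
    YELLOW_TRIANGLE = "yellow_triangle"  # borderline performance
    RED_X = "red_x"                      # metric clearly below target


# Pre-written explanations/instructions per metric (illustrative text only).
FEEDBACK_SET = {
    "economy_of_motion": "Plan instrument paths and avoid unnecessary back-and-forth moves.",
    "master_workspace_range": "Clutch to re-center the masters and keep your hands near the middle of the workspace.",
    "instruments_out_of_view": "Move the camera first so the instrument tips always stay in view.",
}


def feedback_for(metric_statuses):
    """Return the documented instructions for every metric not marked with a green check."""
    return [
        f"{metric}: {FEEDBACK_SET[metric]}"
        for metric, status in metric_statuses.items()
        if status is not MetricStatus.GREEN_CHECK and metric in FEEDBACK_SET
    ]


# Example review after one task attempt (made-up statuses).
print(feedback_for({
    "economy_of_motion": MetricStatus.YELLOW_TRIANGLE,
    "master_workspace_range": MetricStatus.GREEN_CHECK,
    "instruments_out_of_view": MetricStatus.RED_X,
}))
```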
Study procedures

As described earlier, participants had an orientation session at the beginning of their participation to establish a basic level of competency in using the system's master manipulators and control pedals to accomplish basic instrument movement and camera navigation, and to understand the skills, goals, and tasks which they would perform. After this training, the pre-training evaluation was performed. Participating surgeons were then randomized to the CG or the EG.

After each task performance, there was a break for reviewing the PM provided by the VR simulator. The CG participants were allowed to spend time on the metric review alone and at their discretion. The EG participants, however, worked with a research team member to review the metrics together. This intervention provided an additional opportunity for the research team member to give instructions and explanations regarding the practical meaning of individual PMs, instructions on how to improve specific metric scores, and specific information regarding which metrics should be improved in order to earn better global performance scores, as needed or requested by the participant. The research team member who provided this orientation session and the mentoring during the participants' learning was the Director of Robotic Surgery Education, who developed a comprehensive robotic surgery training curriculum and has offered basic and advanced level training to Johns Hopkins surgeons from multiple specialties using this curriculum.

During the reviews, the actual time spent on the PM (TPM) by each participant was recorded as an indirect measure of attention level for data analysis. At the end of each day's training session, participants were asked to complete a brief survey documenting their perception of the usefulness of the PM in improving their robotic skill learning.

Cognitive workload assessments

Mental workload assessments comprised a vital portion of the evaluations included in this study, since lower mental workload is associated with ease of task performance and the ability to assimilate information efficiently. The assessment was performed using several psychologic survey tools: the National Aeronautics and Space Administration Task Load Index (NASA-TLX), the Multiple Resources Questionnaire (MRQ), and the Short Stress State Questionnaire (SSSQ). The NASA-TLX allowed participants to rate their perceived levels of mental demand, physical demand, temporal demand, effort, performance, and frustration during task execution. The MRQ is a subjective mental workload assessment characterizing the nature of the mental processes used in task performance; examples include the auditory, manual, and spatial attentive processes. The SSSQ is a subjective mental stress assessment tool whose questions can be categorized to investigate three mental stress states: engagement reflects subjects' interest, motivation, and concentration; distress represents unpleasant mood and tension with a lack of confidence and perceived control; and worry represents self-esteem and cognitive interference.
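As a concrete illustration of how the subscale ratings listed above can be reduced to a single global workload value, the sketch below averages the six NASA-TLX subscales, i.e., the unweighted "raw TLX" variant. The paper does not state whether raw or pairwise-weighted TLX scoring was used, so this is an assumption; the field names and example ratings are invented.

```python
# Raw (unweighted) NASA-TLX global workload: mean of the six subscale ratings.
# This scoring variant and the numbers below are assumptions for illustration.
from statistics import mean

NASA_TLX_SCALES = (
    "mental_demand", "physical_demand", "temporal_demand",
    "effort", "performance", "frustration",
)


def raw_tlx_global(ratings):
    """Return the mean of the six subscale ratings (each on a 0-100 scale)."""
    return mean(ratings[scale] for scale in NASA_TLX_SCALES)


# One participant's hypothetical ratings after a simulation task.
example = {
    "mental_demand": 55, "physical_demand": 30, "temporal_demand": 45,
    "effort": 50, "performance": 25, "frustration": 35,
}
print(raw_tlx_global(example))  # -> 40
```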
Results

Learning progress with the simulation training

Figure 1 shows the learning progress with the simulation training over the training weeks for each task. The pre-training scores of the CG and EG were statistically compared, and no significant differences were found between the two groups, demonstrating the comparability of the CG and EG performance scores at the pre-training evaluation. The CG participants' post-training simulation performance scores across all tasks (82.9 ± 6.0) were substantially lower than the EG subjects' performance scores (93.2 ± 4.8) (p < 0.05). The performance scores of the CG participants plateaued early at a lower level for several tasks. When we examined the learning slopes across the training weeks, we found that the EG participants demonstrated significantly greater improvement between the pre-training evaluation and the first training session when compared with the CG data (p < 0.05). It appeared that the EG participants learned more during the first session and so could perform better than the CG. For the following weeks, both groups displayed similar learning effects. With this learning trend, the EG demonstrated greater performance improvement on all tasks except the Camera Targeting 2 task.

Fig. 1 The learning progress with the simulation training over the training weeks for each task: A Camera Targeting 2, B Ring and Rail 2, C Thread the Ring, and D Continuous Suturing

Simulator performance score improvements after training

Figure 2A shows the amount of performance improvement, calculated as the difference between the pre- and post-training performance scores, for each simulation training task for each group. After the simulation training, both groups showed performance improvement; however, the EG exhibited significantly greater improvement than the CG participants in three simulation tasks: Ring and Rail 2, Thread the Ring, and Continuous Suturing. The improvements exhibited by the two groups were not statistically different for the Camera Targeting 2 task.

Fig. 2 The amount of performance improvement, calculated as the difference between pre- and post-training performance scores, for A the simulation training tasks and B the bench-top da Vinci training tasks for each group
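For readers who want the computation spelled out, the sketch below shows one way to derive and compare the improvement scores plotted in Fig. 2: each participant's improvement is the post-training score minus the pre-training score, and the two groups' improvements are then compared with an independent-samples t test. This is not the authors' analysis code; the scores and the choice of test are assumptions for illustration.

```python
# Illustrative computation of per-participant improvement (post - pre) and a
# between-group comparison of those improvements. All numbers are invented.
import numpy as np
from scipy import stats

# Hypothetical overall scores (0-100) for one simulation task.
cg_pre, cg_post = np.array([62, 70, 65, 68]), np.array([80, 84, 78, 83])
eg_pre, eg_post = np.array([63, 69, 66, 67]), np.array([92, 95, 90, 94])

cg_gain = cg_post - cg_pre  # Control Group improvement
eg_gain = eg_post - eg_pre  # Experiment Group improvement

# Two-sided independent-samples t test on the improvement scores.
t, p = stats.ttest_ind(eg_gain, cg_gain)
print(f"mean gain CG = {cg_gain.mean():.1f}, EG = {eg_gain.mean():.1f}, p = {p:.4f}")
```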

Actual time spent on performance metrics (TPM)

The TPM was used as an indirect measure of trainees' attention to the PM, and the results from both groups are shown in Fig. 3. The EG participants spent 76.6 s on average reviewing the PM and receiving mentoring feedback from a research team member. The TPMs for the EG trainees were initially very long because of the explanations provided about the performance evaluation page and the details regarding individual PMs; however, TPM shortened substantially as the EG trainees became more familiar with the PM and instructions. The CG participants spent only 11.6 s on average on the review, and their TPM did not change significantly across the training weeks.
Fig. 3 Actual time spent on performance metrics (TPM)

Robot performance improvements after training

Figure 2B shows the amount of performance improvement, as skill learning transfer, calculated as the difference between the pre- and post-training performance scores for each bench-top training task for each group at the da Vinci robot. After simulation training, both the CG and the EG showed improvements. Similar to the results for the simulation performance improvement, the EG showed greater improvements with training. The EG showed significantly greater performance improvement after simulation training when performing the Lazy Susan tower and tension running suturing tasks (p < 0.05). There were no significant differences between the groups for the Loopy Rollercoaster and needle driving tasks.

NASA-TLX analysis with the simulation training

The NASA-TLX global scores from the CG and EG participants are shown in Fig. 4A. The results showed that both groups had decreased global workload as they participated in the simulation training. A significant Training × Group interaction showed that the EG had a greater decrease in global workload than the CG (p < 0.05). When we looked into the individual scales, we found that the Training × Group interaction was driven mainly by the 'performance' scale. The EG participants exhibited a higher performance satisfaction level as they received more training than the CG subjects. This result agreed with the performance evaluation analysis, which showed that the EG's performance improvement was greater with training.
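One conventional way to obtain the Training × Group interaction reported in this and the following section is a two-way mixed-design ANOVA, with training (pre vs. post) as the within-subject factor and group (CG vs. EG) as the between-subject factor. The paper does not name its statistical software or exact model, so the sketch below, which uses the third-party pingouin package and invented workload values, is only an assumed reconstruction of that kind of analysis.

```python
# Hedged sketch of a 2 (pre/post, within) x 2 (CG/EG, between) mixed ANOVA on
# NASA-TLX global workload. Library choice and all values are assumptions.
import pandas as pd
import pingouin as pg

records = []
for subj, group, pre, post in [
    (1, "CG", 58, 52), (2, "CG", 62, 57), (3, "CG", 55, 50), (4, "CG", 60, 56),
    (5, "EG", 59, 40), (6, "EG", 61, 44), (7, "EG", 57, 41), (8, "EG", 63, 47),
]:
    records.append({"subject": subj, "group": group, "training": "pre", "workload": pre})
    records.append({"subject": subj, "group": group, "training": "post", "workload": post})
df = pd.DataFrame(records)

aov = pg.mixed_anova(data=df, dv="workload", within="training",
                     subject="subject", between="group")
# The 'Interaction' row is the Training x Group effect of interest.
print(aov[["Source", "F", "p-unc"]])
```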
NASA-TLX analysis with the robot performances

The NASA-TLX workload analysis with the da Vinci performance is shown in Fig. 4B. Similar to what we observed in the NASA-TLX results with the simulation training, both groups had decreased global workload after training. However, a significant Training × Group interaction showed that the EG had a greater decrease in global workload after training (p < 0.05). When the individual scales were examined, the EG participants exhibited higher performance satisfaction levels with a lower level of effort after training.

Other cognitive workload analysis

The SSSQ analysis data are displayed in Fig. 5A, B. We found that the EG participants exhibited substantially decreased distress levels after training (p < 0.05), while the CG participants showed no significant difference between the pre- and post-training data. It was also observed that the EG showed greater improvement in engagement levels after training when compared with the CG (p < 0.05). The SSSQ analysis of the worry level did not show any differences between the two groups.

Figure 5C shows the global rating of the MRQ analysis, which compared how differently each group used resources for their task performance. Our results demonstrated that the CG used a smaller amount of cognitive resources, while the EG involved more cognitive resources after training.

Performance metric survey analysis

After each training session, all participants completed a PM survey with multiple statements. The rating results from both groups are summarized in Table 3, with p values indicating statistically significant differences between the two groups. Each participant rated their impression of each statement from 1 to 5, with 1 for strongly disagree, 3 for neutral, and 5 for strongly agree. For Table 3, the ratings were averaged over training sessions because the preliminary statistical analysis had shown that the ratings did not change significantly over training sessions. Overall, the EG participants rated each statement significantly higher than the CG. For the EG participants, there were two extra questions, which asked how helpful the extra instructional feedback provided during their training sessions was for better understanding the meaning of each PM and the strategies for improving their performance. The EG participants rated the extra feedback as very helpful in both respects (4.85 ± 0.36 for both).

Table 3 Performance metric survey

Statement | CG rating | EG rating | p
1. The overall performance scores were very helpful for my training today. | 4.25 ± 0.59 | 4.83 ± 0.38 | <0.05
2. I knew how to obtain higher overall performance scores. | 3.80 ± 0.91 | 4.83 ± 0.38 | <0.05
3. I confidently understood the meaning of each performance metric. | 3.28 ± 1.26 | 4.70 ± 0.46 | <0.05
4. The individual performance metrics were very helpful for my training today. | 3.98 ± 0.83 | 4.75 ± 0.49 | <0.05
5. I knew specific strategies for higher individual metric scores. | 3.33 ± 1.07 | 4.73 ± 0.45 | <0.05
6. I understood well how overall performance score was connected with individual metrics. | 3.49 ± 1.10 | 4.58 ± 0.68 | <0.05
7. I feel my skill is improving. | 4.18 ± 0.81 | 4.73 ± 0.45 | <0.05
8. Extra instructions help me to better understand the meaning of each performance metric. | N/A | 4.85 ± 0.36 | N/A
9. Extra instructions help me to better understand how I can improve my performance. | N/A | 4.85 ± 0.36 | N/A
Fig. 4 The NASA-TLX global scores with A the VR simulator and B the da Vinci robotic system

Fig. 5 Additional cognitive workload assessments: A SSSQ distress, B SSSQ engagement, C MRQ global rate

Discussion

This study investigated how practically and sufficiently the VR simulator for robotic surgery training could offer a self-directed, mentor-free learning environment for trainees and whether additional mentoring could still enhance the trainees' robotic skill learning with the VR simulator. Because both the EG and CG participants exhibited skill learning, the current VR simulator clearly provided trainees with a certain degree of self-learning. However, the lower post-training performance scores of the CG participants demonstrated that the VR simulation training was not as effective when trainees used the PMs at their own discretion, without a full understanding of the PMs, and had to figure out by themselves how to improve their simulation task performance. The EG participants who received the additional explanations and instructions showed better skill learning at the VR simulator and improved skill transfer at the da Vinci system, with a higher satisfaction level regarding their performance and learning. These data confirmed that proper mentoring still played a key role in VR simulation education for robotic surgery and suggested that extra mentoring should be included in the VR simulation training curriculum to improve training efficiency and effectiveness. Although the EG participants initially spent a longer time on the PM as they learned about the PM and received additional mentoring, the TPM became shorter as the trainees grew more familiar with the PM and instructions.

When we examined the learning progress, the largest differences in learning between the two groups were observed during the first training session, as the EG participants demonstrated greater skill improvement during that session, whereas both groups displayed similar slopes in their skill improvement over the following training weeks, as shown in Fig. 1. Most EG participants reached over 90% in their overall performance scores for each task, whereas most CG participants had scores lower than 90%. After the completion of the CG subjects' participation, the research team provided the CG participants with the same explanations and instructions which were used for the EG participants and asked them to perform each of the four simulation training tasks once more. It was observed that most CG participants showed immediate score improvement to levels similar to those of the EG participants. These extra task performance scores were not officially collected for all CG participants, so no statistical analysis could be performed on them; however, this was another confirmation that self-learning can be improved with proper mentoring.

Reflecting what we learned from this study, our current VR simulator training begins with comprehensive explanations regarding the performance evaluation pages and PMs. After performing a few VR training tasks and reviewing the performance evaluation pages and PMs, trainees continue their skill training by themselves, and our robotic surgery skill educator provides skill-improving instructions when needed or requested by trainees.
The cognitive workload analysis showed that the EG exhibited substantially decreased cognitive workloads after training, in terms of a decreased NASA-TLX global workload and higher engagement with lower distress levels. The MRQ analysis showed that the EG participants' better performances involved more cognitive resources during their task performances. The survey data also showed that the EG subjects understood the PM better, highly acknowledged the value of the extra explanations and instructions, and better knew the strategies that were required for performance improvement. These outcomes reflected how practically the learners perceived the importance of proper mentoring provision during their simulation training. It is often assumed that surgical trainees can learn fundamental surgical skills using VR surgical simulators without experienced mentors or educators present, because the simulators provide trainees with performance feedback and the trainees can learn from that feedback. Our study demonstrated that the current voluntary use of the VR system is not as effective as it potentially could be and that VR simulators could not provide trainees with a self-driven and mentor-free learning environment, as many have assumed or expected. The provision of extra explanations and instructions will still play a very important role in complementing the benefits of VR simulation training with PM and in making current surgical VR training more effective, efficient, and accurate.

During this study, it was noticed that many participants did not have a clear understanding of two PMs: motion economy and master workspace range. The participants were not sure whether these metrics were associated with the instruments, the camera, the master controllers, or something else. Motion economy measures the total distance traveled by all instruments during an exercise; however, camera motion is not included. Master workspace range reports the larger of the two radii of motion of the user's working volume on the master controllers. Because the definitions of these metrics were not intuitively understood by the study participants, most CG participants did not know how they could improve the scores associated with these metrics. When VR surgical simulators provide a more user-friendly interface for their PM, surgeons will have a clear understanding of the metrics so that they can repeat each training task applying score-improving strategies accordingly.
Conclusions

Our study demonstrated that robotic surgery trainees experienced limited self-directed, mentor-free skill learning when trained using the current VR simulator and that this limitation could be reduced by providing trainees with additional mentoring for a better understanding of the PMs and of skill-improving techniques, making VR robotic simulation education more effective and efficient. As such mentoring explanations and instructions are traditionally provided by experienced educators, this approach still poses a challenge in surgical education due to the restriction caused by the limited availability of educators. Therefore, our research endeavor in improving traditional surgical education will continue by developing a surgeon-friendly mentoring interface which can provide trainees with useful mentoring instructions in an electronic delivery. Such virtual mentoring will help trainees avoid repeating the same mistakes that cause lower-level task performance and guide them to better understand the goal of each skill training task so that they can achieve more proficient skill learning.

Acknowledgements Dr. Gyusung Lee received the Association for Surgical Education Center for Excellence in Surgical Education, Research and Training (CESERT) Grant and the Intuitive Surgical Clinical Robotic Research Grant as the Principal Investigator of this study. The authors acknowledge the generous support of Dr. Michael Marohn, the Director of MISTIC, and the thoughtful and careful assistance of Karyn Rhyder in editing this manuscript.

Funding This project was supported by the Association for Surgical Education Center for Excellence in Surgical Education, Research and Training (CESERT) Grant and the Intuitive Surgical Clinical Robotic Research Grant.

Compliance with ethical standards

Disclosures Dr. Gyusung Lee received the Association for Surgical Education Center for Excellence in Surgical Education, Research and Training (CESERT) grant and the Intuitive Surgical Clinical Robotic Research Grant for this study. Dr. Mija Lee was the co-investigator in this study and is Dr. Gyusung Lee's spouse.

References

1. Ahmed K, Khan MS, Vats A, Nagpal K, Priest O, Patel V, Vecht JA, Ashrafian H, Yang GZ, Athanasiou T, Darzi A (2009) Current status of robotic assisted pelvic surgery and future developments. Int J Surg 7(5):431–440. doi:10.1016/j.ijsu.2009.08.008
2. Patel VR, Tully AS, Holmes R, Lindsay J (2005) Robotic radical prostatectomy in the community setting–the learning curve and beyond: initial 200 cases. J Urol 174(1):269–272. doi:10.1097/01.ju.0000162082.12962.40
3. Patel MN, Bhandari M, Menon M, Rogers CG (2009) Robotic-assisted partial nephrectomy. BJU Int 103(9):1296–1311. doi:10.1111/j.1464-410X.2009.08584.x
4. Anger JT, Mueller ER, Tarnay C, Smith B, Stroupe K, Rosenman A, Brubaker L, Bresee C, Kenton K (2014) Robotic compared with laparoscopic sacrocolpopexy: a randomized controlled trial. Obstet Gynecol 123(1):5–12. doi:10.1097/AOG.0000000000000006
5. Martino MA, Berger EA, McFetridge JT, Shubella J, Gosciniak G, Wejkszner T, Kainz GF, Patriarco J, Thomas MB, Boulay R (2014) A comparison of quality outcome measures in patients having a hysterectomy for benign disease: robotic vs. non-robotic approaches. J Minim Invasive Gynecol 21(3):389–393. doi:10.1016/j.jmig.2013.10.008
6. Ma J, Shukla PJ, Milsom JW (2011) The evolving role of robotic colorectal surgery. Dis Colon Rectum 54(3):376–377. doi:10.1007/DCR.0b013e318204a8d5
7. Wilson EB (2009) The evolution of robotic general surgery. Scand J Surg 98(2):125–129
8. You JY, Lee HY, Son GS, Lee JB, Bae JW, Kim HY (2013) Comparison of robotic adrenalectomy with traditional laparoscopic adrenalectomy with a lateral transperitoneal approach: a single-surgeon experience. Int J Med Robot Comput Assist Surg MRCAS 9(3):345–350. doi:10.1002/rcs.1497
9. Orady M, Hrynewych A, Nawfal AK, Wegienka G (2012) Comparison of robotic-assisted hysterectomy to other minimally invasive approaches. J Soc Laparoendosc Surg 16(4):542–548. doi:10.4293/108680812X13462882736899
10. Dulan G, Rege RV, Hogg DC, Gilberg-Fisher KM, Arain NA, Tesfay ST, Scott DJ (2012) Developing a comprehensive, proficiency-based training program for robotic surgery. Surgery 152(3):477–488. doi:10.1016/j.surg.2012.07.028
11. Rashid HH, Leung YY, Rashid MJ, Oleyourryk G, Valvo JR, Eichel L (2006) Robotic surgical education: a systematic approach to training urology residents to perform robotic-assisted laparoscopic radical prostatectomy. Urology 68(1):75–79. doi:10.1016/j.urology.2006.01.057
12. Antiel RM, Van Arendonk KJ, Reed DA, Terhune KP, Tarpley JL, Porterfield JR, Hall DE, Joyce DL, Wightman SC, Horvath KD, Heller SF, Farley DR (2012) Surgical training, duty-hour restrictions, and implications for meeting the Accreditation Council for Graduate Medical Education core competencies: views of surgical interns compared with program directors. Arch Surg 147(6):536–541. doi:10.1001/archsurg.2012.89
13. Ahmed N, Devitt KS, Keshet I, Spicer J, Imrie K, Feldman L, Cools-Lartigue J, Kayssi A, Lipsman N, Elmi M, Kulkarni AV, Parshuram C, Mainprize T, Warren RJ, Fata P, Gorman MS, Feinberg S, Rutka J (2014) A systematic review of the effects of resident duty hour restrictions in surgery: impact on resident wellness, training, and patient outcomes. Ann Surg 259(6):1041–1053. doi:10.1097/SLA.0000000000000595
14. Lee JY, Mucksavage P, Sundaram CP, McDougall EM (2011) Best practices for robotic surgery training and credentialing. J Urol 185(4):1191–1197. doi:10.1016/j.juro.2010.11.067
15. Liu JJ, Gonzalgo ML (2011) Minimally invasive training in urologic oncology. Arch Esp Urol 64(9):865–868
16. Schreuder HW, Oei G, Maas M, Borleffs JC, Schijven MP (2011) Implementation of simulation in surgical practice: minimally invasive surgery has taken the lead: the Dutch experience. Med Teach 33(2):105–115. doi:10.3109/0142159X.2011.550967
17. Johnson E (2007) Surgical simulators and simulated surgeons: reconstituting medical practice and practitioners in simulations. Soc Stud Sci 37(4):585–608
18. Gallagher AG, Ritter EM, Champion H, Higgins G, Fried MP, Moses G, Smith CD, Satava RM (2005) Virtual reality simulation for the operating room: proficiency-based training as a paradigm shift in surgical skills training. Ann Surg 241(2):364–372
19. van Empel PJ, Verdam MG, Strypet M, van Rijssen LB, Huirne JA, Scheele F, Bonjer HJ, Meijerink WJ (2012) Voluntary autonomous simulator based training in minimally invasive surgery, residents' compliance and reflection. J Surg Educ 69(4):564–570. doi:10.1016/j.jsurg.2012.04.011
20. Chan B, Martel G, Poulin EC, Mamazza J, Boushey RP (2010) Resident training in minimally invasive surgery: a survey of Canadian department and division chairs. Surg Endosc 24(3):499–503. doi:10.1007/s00464-009-0611-3
21. Rosenthal R, Schafer J, Hoffmann H, Vitz M, Oertli D, Hahnloser D (2013) Personality traits and virtual reality performance. Surg Endosc 27(1):222–230. doi:10.1007/s00464-012-2424-z
22. Pitzul KB, Grantcharov TP, Okrainec A (2012) Validation of three virtual reality Fundamentals of Laparoscopic Surgery (FLS) modules. Stud Health Technol Inform 173:349–355
23. Brinkman WM, Buzink SN, Alevizos L, de Hingh IH, Jakimowicz JJ (2012) Criterion-based laparoscopic training reduces total training time. Surg Endosc 26(4):1095–1101. doi:10.1007/s00464-011-2005-6
24. Kang SG, Yang KS, Ko YH, Kang SH, Park HS, Lee JG, Kim JJ, Cheon J (2012) A study on the learning curve of the robotic virtual reality simulator. J Laparoendosc Adv Surg Tech Part A 22(5):438–442. doi:10.1089/lap.2011.0452
25. Lerner MA, Ayalew M, Peine WJ, Sundaram CP (2010) Does training on a virtual reality robotic simulator improve performance on the da Vinci surgical system? J Endourol 24(3):467–472. doi:10.1089/end.2009.0190
26. Hung AJ, Zehnder P, Patil MB, Cai J, Ng CK, Aron M, Gill IS, Desai MM (2011) Face, content and construct validity of a novel robotic surgery simulator. J Urol 186(3):1019–1024. doi:10.1016/j.juro.2011.04.064
27. Kenney PA, Wszolek MF, Gould JJ, Libertino JA, Moinzadeh A (2009) Face, content, and construct validity of dV-trainer, a novel virtual reality simulator for robotic surgery. Urology 73(6):1288–1292. doi:10.1016/j.urology.2008.12.044
28. Perrenot C, Perez M, Tran N, Jehl JP, Felblinger J, Bresler L, Hubert J (2012) The virtual reality simulator dV-Trainer® is a valid assessment tool for robotic surgical skills. Surg Endosc 26(9):2587–2593. doi:10.1007/s00464-012-2237-0
29. Lee JY, Mucksavage P, Kerbl DC, Huynh VB, Etafy M, McDougall EM (2012) Validation study of a virtual reality robotic simulator–role as an assessment tool? J Urol 187(3):998–1002. doi:10.1016/j.juro.2011.10.160
30. Brinkman WM, Luursema JM, Kengen B, Schout BM, Witjes JA, Bekkers RL (2013) da Vinci skills simulator for assessing learning curve and criterion-based training of robotic basic skills. Urology 81(3):562–566. doi:10.1016/j.urology.2012.10.020
31. Stefanidis D, Wang F, Korndorffer JR Jr, Dunne JB, Scott DJ (2010) Robotic assistance improves intracorporeal suturing performance and safety in the operating room while decreasing operator workload. Surg Endosc 24(2):377–382. doi:10.1007/s00464-009-0578-0
32. Stefanidis D, Hope WW, Scott DJ (2011) Robotic suturing on the FLS model possesses construct validity, is less physically demanding, and is favored by more surgeons compared with laparoscopy. Surg Endosc 25(7):2141–2146. doi:10.1007/s00464-010-1512-1
33. Patel YR, Donias HW, Boyd DW, Pande RU, Amodeo JL, Karamanoukian RL, D'Ancona G, Karamanoukian HL (2003) Are you ready to become a robo-surgeon? Am Surg 69(7):599–603
34. Shaligram A, Meyer A, Simorov A, Pallati P, Oleynikov D (2013) Survey of minimally invasive general surgery fellows training in robotic surgery. J Robot Surg 7(2):131–136. doi:10.1007/s11701-012-0355-2
35. Hung AJ, Patil MB, Zehnder P, Cai J, Ng CK, Aron M, Gill IS, Desai MM (2012) Concurrent and predictive validation of a novel robotic surgery simulator: a prospective, randomized study. J Urol 187(2):630–637. doi:10.1016/j.juro.2011.09.154
36. Finnegan KT, Meraney AM, Staff I, Shichman SJ (2012) da Vinci Skills Simulator construct validation study: correlation of prior robotic experience with overall score and time score simulator performance. Urology 80(2):330–335. doi:10.1016/j.urology.2012.02.059
37. Johnston MJ, Paige JT, Aggarwal R, Stefanidis D, Tsuda S, Khajuria A, Arora S (2016) An overview of research priorities in surgical simulation: what the literature shows has been achieved during the 21st century and what remains. Am J Surg 211(1):214–225. doi:10.1016/j.amjsurg.2015.06.014
38. Panait L, Rafiq A, Tomulescu V, Boanca C, Popescu I, Carbonell A, Merrell RC (2006) Telementoring versus on-site mentoring in virtual reality-based surgical training. Surg Endosc 20(1):113–118. doi:10.1007/s00464-005-0113-x
39. Alaker M, Wynn GR, Arulampalam T (2016) Virtual reality training in laparoscopic surgery: a systematic review & meta-analysis. Int J Surg 29:85–94. doi:10.1016/j.ijsu.2016.03.034
40. Ahlborg L, Weurlander M, Hedman L, Nisel H, Lindqvist PG, Fellander-Tsai L, Enochsson L (2015) Individualized feedback during simulated laparoscopic training: a mixed methods study. Int J Med Educ 6:93–100. doi:10.5116/ijme.55a2.218b
41. Paschold M, Huber T, Zeissig SR, Lang H, Kneist W (2014) Tailored instructor feedback leads to more effective virtual-reality laparoscopic training. Surg Endosc 28(3):967–973. doi:10.1007/s00464-013-3258-z