https://doi.org/10.1007/s00464-017-5634-6
Received: 6 January 2017 / Accepted: 1 June 2017 / Published online: 20 June 2017
© Springer Science+Business Media, LLC 2017
Surg Endosc (2018) 32:62–72
effective and efficient surgical training due to increasingly restrictive limits on resident duty hours (i.e., an 80-h maximum and no more than 24 h of continuous duty), resulting in increased workload and fewer hours for education [12–14].

As a potential solution, Virtual Reality (VR) technology has become a popular tool in minimally invasive surgery (MIS) education during the past decade, largely due to the increase in commercially available surgical trainers/simulators [15–17]. As a result, VR surgical education has been integrated into MIS training at numerous residency and fellowship programs [18–20]. These VR simulators provide trainees not only with easy access to training modules that often offer very realistic recreations of both human anatomy and relevant surgical tasks, but also with performance evaluation metrics that enable immediate feedback after each task performance, something not always readily available in a lab training setup [21–23].

Several research studies have demonstrated that VR training systems are useful training tools in robotic surgery, providing sufficient surgical skill learning (content validity) [24, 25]. It has also been shown that the VR training environment and console experience are very realistic (face validity), that the training models are effective in distinguishing between novice and expert surgeons (construct validity), and that performance scores from the VR system correlate well with experts' assessments (concurrent validity) [26–29].

The da Vinci robotic surgical system from Intuitive Surgical, Inc., offers a VR training option that provides a comprehensive set of performance metrics (PMs) for evaluating trainee practice. These PMs are designed to assess trainees' task performance quantitatively and to highlight the performance components requiring skill improvement. Several research studies revealed that the use of these metrics was very beneficial, especially for criterion-based robotic basic skill training [24, 30]. Considering trainees' potential benefit from the PMs provided by VR surgical simulators, it is often claimed that VR simulators can offer trainees a self-driven, mentor-free surgical skill learning environment; however, no evidence-based data yet support such a claim. Additionally, the system's feedback on how specifically trainees can improve the scores of poorly performed metrics may still be limited.

Because demand exists for well-structured robotic training curricula that provide more effective and efficient robotic surgery training, and VR training can be a good option [31–36], the true value of VR simulators in robotic surgery education should be carefully examined. No research studies have investigated how carefully surgical trainees interact with the PMs available from a VR surgical simulator or how actively they use the information available from the PMs for skill improvement. It has not been thoroughly examined whether surgical trainees understand the meaning of each PM provided by a VR robotic surgical skill trainer after each task performance or, more importantly, whether they know how to improve particular PM scores to advance their overall skill learning.

While mentoring or coaching that provides feedback to surgical trainees is considered a crucial component of surgical education, a recent publication by the Association for Surgical Education noted that the best methods of feedback or debriefing have not yet been established and that research into different feedback deliveries should continue [37]. While several research studies have demonstrated that mentoring or coaching can be valuable in surgical training, including VR simulation [38–41], no studies have yet investigated the effectiveness of mentoring on skill learning during VR robotic surgery training.

The goal of this study was to examine whether current VR simulation training with the simulator's PMs provides a sufficient self-learning environment for surgical trainees and to investigate whether additional mentoring is still required to improve trainees' skill learning, skill transfer, and cognitive workload in robotic surgery education. We hypothesized that surgical trainees who used the PMs at their own discretion during robotic VR simulation training would not achieve high skill levels, whereas surgeons trained on the same system who received additional mentoring explanations and instructions would demonstrate faster skill acquisition, higher performance scores, better transfer of learning to performance with an actual robotic system, and substantially lower mental workload.

Methods

Participants

This IRB-approved study was performed in the Robotic Surgery Training Laboratory of the Minimally Invasive Surgical Training and Innovation Center (MISTIC) at Johns Hopkins University School of Medicine (JHSOM). Thirty-two (n = 32) surgical residents from different specialties at the JHSOM were recruited for this study. Informed consent was obtained from all participants. Participants ranged from 30 to 42 years of age, were residents or fellows with no previous robotic training experience, and were randomly assigned to one of two groups: the Control Group (CG, n = 16) or the Experiment Group (EG, n = 16).
Week 0
  Watching the example videos of the four simulation training and four physical model tasks

Week 1
  1. Introduction and system orientation: basic learning about system master manipulation, control pedals, clutches, instruments, and camera navigation
  2. Review of the simulation and physical model tasks and a single trial with no score recording
  3. Pre-training data collection (simulator and da Vinci system):
     - Four simulation training tasks performed at the da Vinci Skills Simulator
     - Four physical model tasks performed with the da Vinci robot system
     - Baseline performances and mental workloads measured
  4. 32 subjects randomly assigned to the EG or CG

Week 2: Skill training at the da Vinci Skills Simulator
  EG group (16 subjects):
    1. First performance of four simulation tasks
    2. Mentoring provision
    3. Second performance of four simulation tasks
    4. Mentoring provision
    5. Third performance of four simulation tasks
    6. Peri-training evaluation
    7. Survey
  CG group (16 subjects):
    1. First performance of four simulation tasks
    2. Metric review at their own discretion
    3. Second performance of four simulation tasks
    4. Metric review at their own discretion
    5. Third performance of four simulation tasks
    6. Peri-training evaluation
    7. Survey

Weeks 3–4
  Skill training and peri-training evaluation (same as week 2)

Week 5
  1. Same as week 2
  2. The peri-training evaluation data collected in week 5 were used as post-training evaluation data for the simulator
  3. Post-training evaluation data collection with the da Vinci robot system
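The week-by-week protocol above can be encoded as plain data, for example to drive a session checklist or a training log. This is only an illustrative sketch; every identifier in it is our own naming, not part of the published protocol.

```python
# Illustrative encoding of the study schedule as plain data (e.g., to generate
# per-session checklists). The structure mirrors the timeline above; all names
# are ours, not the authors'.
PER_SESSION = {
    "EG": ["Task run 1", "Mentoring provision", "Task run 2", "Mentoring provision",
           "Task run 3", "Peri-training evaluation", "Survey"],
    "CG": ["Task run 1", "Self-paced metric review", "Task run 2",
           "Self-paced metric review", "Task run 3", "Peri-training evaluation", "Survey"],
}

SCHEDULE = {
    "Week 0": "Watch example videos of the four simulation and four physical model tasks",
    "Week 1": ["Introduction and system orientation",
               "Task review and single unscored trial",
               "Pre-training data collection (simulator and da Vinci system)",
               "Random assignment to EG or CG"],
    "Week 2": PER_SESSION,
    "Weeks 3-4": PER_SESSION,  # identical structure to week 2
    "Week 5": ["Training session as in week 2 (peri-training data reused as "
               "simulator post-training data)",
               "Post-training evaluation with the da Vinci robot"],
}
```

The only structural difference between the arms is what fills the breaks between task runs: mentoring for the EG versus self-paced metric review for the CG.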
times. After completing the training, participants were asked to repeat the four training tasks for evaluation (described below). Surgeons were trained using a standardized number of repetitions (rather than training time) to better accommodate anticipated time differences between training methods and whether or not participants used the provided feedback.

Training and evaluation tasks

Training sessions included four selected VR training tasks at the da Vinci Skills Simulator: Camera Targeting 2, Ring and Rail 2, Thread the Ring, and Continuous Suturing (Table 2). These four training tasks from the simulator were used both for skill training and for evaluation. The tasks were selected based on published data documenting their construct and content validities [26, 35, 36]. The selection was also influenced by the four categories of essential robotic skills (camera navigation and clutch control, wrist manipulation, needle driving, and suturing) identified at the Fundamentals of Robotic Surgery (FRS) consortium meetings. Additionally, four physical model tasks (Lazy Susan tower, Loopy Rollercoaster, custom-developed needle passing and driving, and custom-developed tension running suturing), considered compatible with the selected simulation tasks in terms of the skills required to perform them, were identified (Table 2). These physical model tasks were used for the pre- and post-training evaluations to examine how much of the skill learned with the VR simulation training models would transfer to actual robot performance. The physical model tasks were not used as part of the training, because a comparison between VR simulation training and physical model training was not within this study's scope. Only the VR simulation training tasks were used for participants' skill learning.

Pre-, peri-, and post-training evaluations

To investigate the learning curves and the changes in mental workload experienced by each participant throughout the training program, pre-, peri-, and post-training evaluations were conducted. Two pre-training evaluations were performed: one with the VR simulator and another with the da Vinci robot. The pre-training evaluation at the VR simulator consisted of all subjects performing each of the four VR simulated tasks once before beginning the training program. The simulator's PM scores were recorded, and mental workloads were assessed using several cognitive tools described in detail in the Cognitive Workload Assessment section. These data comprised the participants' baseline dataset. An additional pre-training evaluation at the da Vinci robot consisted of all subjects performing the four physical model tasks, which were different from but related to the four VR simulation training tasks. Due to the lack of automated PMs with the da Vinci robot, we recorded the scope view during the performance evaluation and quantified task performance using metrics including time, accuracy, the number of errors, and how often an instrument was moved out of the camera view. Mental workload was assessed as well.

Peri-training evaluations were done only with the VR simulator. During each session, participants repeated each training task two times. After these repetitions, participants were asked to repeat the four training tasks one more time for evaluation. The participants' PM scores were recorded, and their mental workloads were assessed using the same methods as in the pre-training evaluation.

Finally, post-training evaluations took place immediately after the last training session of the 4-week training program. Using the same evaluation methods as the pre-training evaluation, each participant's performance and mental workload were measured at both the VR simulator and the da Vinci system.

Feedback provision

To maintain consistency in providing guidance, a set of explanations and instructions associated with each PM was documented prior to subject recruitment. These explanations and instructions were then uniformly provided to all EG trainees. Whenever an individual metric received a yellow triangle or red X mark rather than a green check-mark symbol, feedback for improving that metric's score was provided to the trainee.

Study procedures

As described earlier, participants had an orientation session to establish a basic level of competency using the system's
master manipulators and control pedals to accomplish basic instrument movement and camera navigation at the beginning of their participation, and to understand the skills, goals, and tasks they would perform. After this training, the pre-training evaluation was performed. Participating surgeons were then randomized to the CG or the EG.

After each task performance, there was a break for reviewing the PM provided by the VR simulator. The CG participants were allowed to spend time on the metric review alone and at their own discretion. The EG participants, however, worked with a research team member to review the metrics together. This intervention gave the research team member an additional opportunity to provide instructions and explanations regarding the practical meaning of individual PMs, instructions on how to improve specific metric scores, and specific information on which metrics should be improved to earn better global performance scores, as needed or requested by the participant. The research team member who provided the orientation session and mentoring was the Director of Robotic Surgery Education, who developed a comprehensive robotic surgery training curriculum and offered basic and advanced training to Johns Hopkins surgeons from multiple specialties using this curriculum.

During the reviews, the actual time each participant spent on the PM (TPM) was recorded as an indirect measure of attention for data analysis. At the end of each day's training session, participants completed a brief survey documenting their perception of the usefulness of the PM for improving their robotic skill learning.

Engagement represents interest, motivation, and concentration; distress represents unpleasant mood and tension with a lack of confidence and perceived control; and worry represents self-esteem and cognitive interference.

Results

Learning progress with the simulation training

Figure 1 shows the learning progress with the simulation training over the training weeks for each task. The pre-training scores of the CG and EG were statistically compared, and no significant differences were found between the two groups, demonstrating the comparability of the CG and EG pre-training performance scores. The CG participants' post-training simulation performance scores across all tasks (82.9 ± 6.0) were substantially lower than the EG participants' scores (93.2 ± 4.8) (p < 0.05). The performance scores of the CG participants plateaued early at a lower level for several tasks. When we examined the learning slopes across the training weeks, the EG participants demonstrated significantly greater improvement between the pre-training evaluation and the first training session compared with the CG (p < 0.05). It appears that the EG participants learned more during the first session and so could perform better than the CG. Over the following weeks, both groups displayed similar learning effects. With this learning trend, the EG demonstrated greater performance improvement on all tasks except the Camera Targeting 2 task.
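The group comparison reported above (CG 82.9 ± 6.0 vs. EG 93.2 ± 4.8, n = 16 per group, p < 0.05) can be checked from the summary statistics alone. The paper does not state which statistical test was used, so the Welch t-test below is a sketch of one reasonable choice, not the authors' actual analysis.

```python
# Welch two-sample t-test computed from group summary statistics only
# (means, SDs, group sizes), as reported in the Results.
import math

def welch_t_from_stats(m1, s1, n1, m2, s2, n2):
    """Return Welch's t statistic and degrees of freedom from group summaries."""
    v1, v2 = s1 ** 2 / n1, s2 ** 2 / n2          # per-group variance of the mean
    t = (m2 - m1) / math.sqrt(v1 + v2)           # standardized mean difference
    df = (v1 + v2) ** 2 / (v1 ** 2 / (n1 - 1) + v2 ** 2 / (n2 - 1))
    return t, df

# CG: 82.9 +/- 6.0, EG: 93.2 +/- 4.8, n = 16 each (values from the text)
t, df = welch_t_from_stats(82.9, 6.0, 16, 93.2, 4.8, 16)
# t is about 5.36 with df around 28.6, well past the two-tailed 5% critical
# value (roughly 2.05), consistent with the significant difference reported.
```

The same calculation applied to the other reported contrasts would let a reader verify each p < 0.05 claim from the published means and SDs.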
Fig. 2 The amount of performance improvement, calculated as the difference between pre- and post-training performance scores, for A simulation training tasks and B bench-top da Vinci training tasks for each group
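The bench-top da Vinci scores behind panel B were quantified from the recorded scope view using completion time, accuracy, error counts, and out-of-view instrument events (see Methods). The paper does not publish the weighting used to combine these into a score, so the composite below is purely an illustrative sketch with assumed penalty weights.

```python
# Hypothetical composite scoring for the bench-top (da Vinci) tasks.
# The recorded metrics follow the Methods section; the time limit and the
# penalty weights are illustrative assumptions, not the authors' values.
from dataclasses import dataclass

@dataclass
class TrialRecord:
    completion_time_s: float   # time to finish the task
    accuracy_pct: float        # task-specific accuracy, 0-100
    n_errors: int              # counted errors during the trial
    n_out_of_view: int         # times an instrument left the camera view

def bench_score(trial: TrialRecord,
                time_limit_s: float = 300.0,
                error_penalty: float = 5.0,
                out_of_view_penalty: float = 3.0) -> float:
    """Combine the recorded metrics into a single 0-100 score (illustrative)."""
    time_score = max(0.0, 100.0 * (1.0 - trial.completion_time_s / time_limit_s))
    raw = 0.5 * time_score + 0.5 * trial.accuracy_pct
    raw -= error_penalty * trial.n_errors
    raw -= out_of_view_penalty * trial.n_out_of_view
    return max(0.0, min(100.0, raw))
```

For example, a trial finished in 150 s at 90% accuracy with one error and two out-of-view events scores `bench_score(TrialRecord(150.0, 90.0, 1, 2))`, which is 59.0 under these assumed weights.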
average for reviewing the PM and receiving mentoring feedback from a research team member. The TPMs for the EG trainees were initially very long due to the provision of explanations about the performance evaluation page and the details of individual PMs. However, TPM substantially shortened as the EG trainees became more familiar with the PMs and instructions. The CG participants spent only 11.6 s on average on the review, and their TPM did not change significantly across the training weeks.
Survey item (rated 1–5)                                                                       CG            EG            p
1. The overall performance scores were very helpful for my training today.                    4.25 ± 0.59   4.83 ± 0.38   <0.05
2. I knew how to obtain higher overall performance scores.                                    3.80 ± 0.91   4.83 ± 0.38   <0.05
3. I confidently understood the meaning of each performance metric.                           3.28 ± 1.26   4.70 ± 0.46   <0.05
4. The individual performance metrics were very helpful for my training today.                3.98 ± 0.83   4.75 ± 0.49   <0.05
5. I knew specific strategies for higher individual metric scores.                            3.33 ± 1.07   4.73 ± 0.45   <0.05
6. I understood well how the overall performance score was connected with individual metrics. 3.49 ± 1.10   4.58 ± 0.68   <0.05
7. I feel my skill is improving.                                                              4.18 ± 0.81   4.73 ± 0.45   <0.05
8. Extra instructions help me to better understand the meaning of each performance metric.    N/A           4.85 ± 0.36   N/A
9. Extra instructions help me to better understand how I can improve my performance.          N/A           4.85 ± 0.36   N/A
between the two groups were observed during the first training session, as the EG participants demonstrated greater skill improvement during that session, whereas both groups displayed similar slopes in their skill improvement over the following training weeks, as shown in Fig. 1. Most EG participants reached over 90% in their overall performance scores for each task, whereas most CG participants scored below 90%. After the completion of the CG subjects' participation, the research team provided the CG participants with the same explanations and instructions used for the EG participants and asked them to perform each of the four simulation training tasks once more. Most CG participants showed immediate score improvement to levels similar to those of the EG participants. These extra task performance scores were not officially collected for all CG participants, so no statistical analysis could be performed on them. However, this was another confirmation that self-learning can be improved with proper mentoring. Reflecting what we learned from this study, our current VR simulator training begins with comprehensive explanations of the performance evaluation pages and PMs. After performing a few VR training tasks and reviewing the performance evaluation pages and PMs, trainees continue their skill training by themselves, and our robotic surgery skill educator provides skill-improving instructions when needed or requested by trainees.

The cognitive workload analysis showed that the EG exhibited substantially decreased cognitive workload after training, in terms of a decreased NASA-TLX global workload and higher engagement with lower distress levels. The MRQ analysis showed that the EG participants' better performances involved more cognitive resources during their task performances. The survey data also showed that the EG subjects understood the PMs better, highly acknowledged the value of the extra explanations and instructions, and better knew the strategies required for performance improvement. These outcomes reflected how the learners perceived the practical importance of proper mentoring during their simulation training. It is often assumed that surgical trainees can learn fundamental surgical skills using VR surgical simulators without experienced mentors or educators present, because the simulators provide trainees with performance feedback from which they can learn. Our study demonstrated that the current voluntary use of the VR system is not as effective as it potentially could be and that VR simulators do not provide trainees with a self-driven, mentor-free learning environment, as many have assumed or expected. The provision of extra explanations and instructions will still play a very important role in complementing the benefits of VR simulation training with PMs and in making current surgical VR training more effective, efficient, and accurate.

During this study, we noticed that many participants did not have a clear understanding of two PMs: motion economy and master workspace range. The participants were not sure whether these metrics were associated with the instruments, the camera, the master controllers, or something else. Motion economy measures the total distance traveled by all instruments during an exercise; camera motion is not included. Master workspace range reports the larger of the two radii of motion of the user's working volume on the master controllers. Because the definitions of these metrics were not intuitively understood by the study participants, most CG participants did not know how they could improve the scores associated with them. If VR surgical simulators provided a more user-friendly interface for their PMs, surgeons would have a clearer understanding of the metrics and could repeat each training task applying score-improving strategies accordingly.
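The two metric definitions above can be made concrete with a small sketch. The geometry follows the text (summed instrument path length with the camera excluded; the larger of the two master-controller working radii), while the trajectory format and the centroid-based radius are our assumptions, since the simulator's exact computation is not published.

```python
# Sketch of the two metrics participants found unintuitive, computed from
# sampled 3-D positions. Definitions paraphrase the text; data format and the
# centroid-based radius are assumptions, not the simulator's documented method.
import math

Point = tuple[float, float, float]

def path_length(traj: list[Point]) -> float:
    """Total distance traveled along a sampled trajectory."""
    return sum(math.dist(a, b) for a, b in zip(traj, traj[1:]))

def motion_economy(instrument_trajs: list[list[Point]]) -> float:
    """Motion economy: summed path length of all instruments (camera excluded)."""
    return sum(path_length(t) for t in instrument_trajs)

def workspace_radius(traj: list[Point]) -> float:
    """Radius of one master controller's working volume around its mean position."""
    n = len(traj)
    centroid = (sum(p[0] for p in traj) / n,
                sum(p[1] for p in traj) / n,
                sum(p[2] for p in traj) / n)
    return max(math.dist(p, centroid) for p in traj)

def master_workspace_range(left: list[Point], right: list[Point]) -> float:
    """Reported value: the larger of the two controllers' working radii."""
    return max(workspace_radius(left), workspace_radius(right))
```

Seen this way, the improvement strategies become obvious: motion economy rewards shorter, more deliberate instrument paths, and master workspace range rewards keeping both hands within a compact working volume (clutching instead of reaching).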
surgery, residents' compliance and reflection. J Surg Educ 69(4):564–570. doi:10.1016/j.jsurg.2012.04.011
20. Chan B, Martel G, Poulin EC, Mamazza J, Boushey RP (2010) Resident training in minimally invasive surgery: a survey of Canadian department and division chairs. Surg Endosc 24(3):499–503. doi:10.1007/s00464-009-0611-3
21. Rosenthal R, Schafer J, Hoffmann H, Vitz M, Oertli D, Hahnloser D (2013) Personality traits and virtual reality performance. Surg Endosc 27(1):222–230. doi:10.1007/s00464-012-2424-z
22. Pitzul KB, Grantcharov TP, Okrainec A (2012) Validation of three virtual reality Fundamentals of Laparoscopic Surgery (FLS) modules. Stud Health Technol Inform 173:349–355
23. Brinkman WM, Buzink SN, Alevizos L, de Hingh IH, Jakimowicz JJ (2012) Criterion-based laparoscopic training reduces total training time. Surg Endosc 26(4):1095–1101. doi:10.1007/s00464-011-2005-6
24. Kang SG, Yang KS, Ko YH, Kang SH, Park HS, Lee JG, Kim JJ, Cheon J (2012) A study on the learning curve of the robotic virtual reality simulator. J Laparoendosc Adv Surg Tech Part A 22(5):438–442. doi:10.1089/lap.2011.0452
25. Lerner MA, Ayalew M, Peine WJ, Sundaram CP (2010) Does training on a virtual reality robotic simulator improve performance on the da Vinci surgical system? J Endourol 24(3):467–472. doi:10.1089/end.2009.0190
26. Hung AJ, Zehnder P, Patil MB, Cai J, Ng CK, Aron M, Gill IS, Desai MM (2011) Face, content and construct validity of a novel robotic surgery simulator. J Urol 186(3):1019–1024. doi:10.1016/j.juro.2011.04.064
27. Kenney PA, Wszolek MF, Gould JJ, Libertino JA, Moinzadeh A (2009) Face, content, and construct validity of dV-trainer, a novel virtual reality simulator for robotic surgery. Urology 73(6):1288–1292. doi:10.1016/j.urology.2008.12.044
28. Perrenot C, Perez M, Tran N, Jehl JP, Felblinger J, Bresler L, Hubert J (2012) The virtual reality simulator dV-Trainer® is a valid assessment tool for robotic surgical skills. Surg Endosc 26(9):2587–2593. doi:10.1007/s00464-012-2237-0
29. Lee JY, Mucksavage P, Kerbl DC, Huynh VB, Etafy M, McDougall EM (2012) Validation study of a virtual reality robotic simulator: role as an assessment tool? J Urol 187(3):998–1002. doi:10.1016/j.juro.2011.10.160
30. Brinkman WM, Luursema JM, Kengen B, Schout BM, Witjes JA, Bekkers RL (2013) da Vinci Skills Simulator for assessing learning curve and criterion-based training of robotic basic skills. Urology 81(3):562–566. doi:10.1016/j.urology.2012.10.020
31. Stefanidis D, Wang F, Korndorffer JR Jr, Dunne JB, Scott DJ (2010) Robotic assistance improves intracorporeal suturing performance and safety in the operating room while decreasing operator workload. Surg Endosc 24(2):377–382. doi:10.1007/s00464-009-0578-0
32. Stefanidis D, Hope WW, Scott DJ (2011) Robotic suturing on the FLS model possesses construct validity, is less physically demanding, and is favored by more surgeons compared with laparoscopy. Surg Endosc 25(7):2141–2146. doi:10.1007/s00464-010-1512-1
33. Patel YR, Donias HW, Boyd DW, Pande RU, Amodeo JL, Karamanoukian RL, D'Ancona G, Karamanoukian HL (2003) Are you ready to become a robo-surgeon? Am Surg 69(7):599–603
34. Shaligram A, Meyer A, Simorov A, Pallati P, Oleynikov D (2013) Survey of minimally invasive general surgery fellows training in robotic surgery. J Robot Surg 7(2):131–136. doi:10.1007/s11701-012-0355-2
35. Hung AJ, Patil MB, Zehnder P, Cai J, Ng CK, Aron M, Gill IS, Desai MM (2012) Concurrent and predictive validation of a novel robotic surgery simulator: a prospective, randomized study. J Urol 187(2):630–637. doi:10.1016/j.juro.2011.09.154
36. Finnegan KT, Meraney AM, Staff I, Shichman SJ (2012) da Vinci Skills Simulator construct validation study: correlation of prior robotic experience with overall score and time score simulator performance. Urology 80(2):330–335. doi:10.1016/j.urology.2012.02.059
37. Johnston MJ, Paige JT, Aggarwal R, Stefanidis D, Tsuda S, Khajuria A, Arora S (2016) An overview of research priorities in surgical simulation: what the literature shows has been achieved during the 21st century and what remains. Am J Surg 211(1):214–225. doi:10.1016/j.amjsurg.2015.06.014
38. Panait L, Rafiq A, Tomulescu V, Boanca C, Popescu I, Carbonell A, Merrell RC (2006) Telementoring versus on-site mentoring in virtual reality-based surgical training. Surg Endosc 20(1):113–118. doi:10.1007/s00464-005-0113-x
39. Alaker M, Wynn GR, Arulampalam T (2016) Virtual reality training in laparoscopic surgery: a systematic review & meta-analysis. Int J Surg 29:85–94. doi:10.1016/j.ijsu.2016.03.034
40. Ahlborg L, Weurlander M, Hedman L, Nisel H, Lindqvist PG, Fellander-Tsai L, Enochsson L (2015) Individualized feedback during simulated laparoscopic training: a mixed methods study. Int J Med Educ 6:93–100. doi:10.5116/ijme.55a2.218b
41. Paschold M, Huber T, Zeissig SR, Lang H, Kneist W (2014) Tailored instructor feedback leads to more effective virtual-reality laparoscopic training. Surg Endosc 28(3):967–973. doi:10.1007/s00464-013-3258-z