Technical Note

Construction of Neuroanatomical Volumetric Models Using 3-Dimensional Scanning Techniques: Technical Note and Applications

Roberto Rodriguez Rubio1-3, Joseph Shehata1,2, Ioannis Kournoutas1,2, Ricky Chae1,2, Vera Vigo1,2, Minghao Wang1,2,4, Ivan El-Sayed2,3, Adib A. Abla1,2

- BACKGROUND: Visuospatial features of neuroanatomy are likely the most difficult concepts to learn in anatomy. Three-dimensional (3D) modalities have gradually begun to supplement traditional 2-dimensional representations of dissections and illustrations. We have introduced and described the workflow of 2 innovative methods—photogrammetry (PGM) and structured light scanning (SLS)—which have typically been used for reverse-engineering applications. In the present study, we have described a novel application of SLS and PGM that could enhance medical education and operative planning in neurosurgery.

- METHODS: We have described the workflow of SLS and PGM for creating volumetric models (VMs) of neuroanatomical dissections, including the requisite equipment and software. We have also provided step-by-step procedures on how users can postprocess and refine these images according to their specifications. Finally, we applied both methods to 3 dissected hemispheres to demonstrate the quality of the VMs and their applications.

- RESULTS: Both methods yielded VMs with suitable clarity and structural integrity for anatomical education, surgical illustration, and procedural simulation.

- CONCLUSIONS: The application of 3D computer graphics to neurosurgical applications has shown great promise. SLS and PGM can facilitate the construction of VMs with high accuracy and quality that can be used and shared in a variety of 3D platforms. Similarly, the technical demands are not high; thus, it is plausible that neurosurgeons could become quickly proficient and enlist their use in education and surgical planning. Although SLS is preferable in settings in which high accuracy is required, PGM is a viable alternative with a short learning curve.

INTRODUCTION

Neuroanatomy is one of the most challenging concepts to understand within anatomy, primarily owing to the difficulty in attaining a comprehensive visuospatial understanding of the many neurovascular structures.1 Teaching neurosurgical anatomy has traditionally involved dissecting cadaveric specimens and reviewing 2-dimensional (2D) anatomic representations. A number of recent studies have suggested that incorporating stereoscopic (SS) images in medical education will lead to improved spatial reasoning abilities compared with traditional methods.2,3 The fundamental principle behind SS is stereopsis—the perception of depth that arises from binocular vision—a feature that can be reproduced using a variety of technologies.4 Because performing surgery itself requires the simultaneous application of anatomical knowledge and spatial reasoning, the use of SS modalities in resident education remains a promising frontier.

On the cutting edge of SS imaging are volumetric models (VMs), which are often built from DICOM (digital imaging and communications in medicine) files—a modality that is apt at capturing the
Key words
- 3D animation
- 3D scanning
- Extended reality
- Photogrammetry
- Structured light scanning
- Surgical neuroanatomy

Abbreviations and Acronyms
2D: 2-Dimensional
3D: 3-Dimensional
PGM: Photogrammetry
SLS: Structured light scanning
SS: Stereoscopic
VM: Volumetric model

From the 1Department of Neurological Surgery, 2Skull Base and Cerebrovascular Laboratory, and 3Department of Otolaryngology-Head and Neck Surgery, University of California, San Francisco, California, USA; and 4Department of Neurological Surgery, First Hospital of China Medical University, Shenyang, China

To whom correspondence should be addressed: Roberto Rodriguez Rubio, M.D. [E-mail: luis.rodriguezrubio@ucsf.edu]

Supplementary digital content available online.

Citation: World Neurosurg. (2019) 126:359-368. https://doi.org/10.1016/j.wneu.2019.03.099

Journal homepage: www.journals.elsevier.com/world-neurosurgery

Available online: www.sciencedirect.com

1878-8750/$ - see front matter. Published by Elsevier Inc.

WORLD NEUROSURGERY 126: 359-368, JUNE 2019 www.journals.elsevier.com/world-neurosurgery 359



Figure 1. Standard photogrammetry setup. After the dissection, the specimen is placed on a rotating surface and covered with black cloth to avoid the detection of undesired surface and facial characteristics. The camera (D810 SLR [Nikon, Tokyo, Japan]) is fixed on a tripod to maintain the same distance and focus. With the specimen rotating slowly, the operator takes photographs, ensuring that the light and focus point do not change during each 360° turn. Next, the process is repeated at different elevations and angles to attain high-quality alignment later in the process.
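The turntable geometry in this setup reduces to simple bookkeeping: a fixed angular step between shots and a total image count across elevations. A small illustrative helper (not part of the published workflow; the defaults of 45 shots per revolution and 3 elevations follow the text):

```python
# Back-of-the-envelope capture plan for the turntable setup described above.
# Illustrative only; the published workflow does not prescribe this helper.

def capture_plan(shots_per_revolution: int = 45, elevations: int = 3):
    """Return the turntable step (degrees) and the total image count."""
    angular_step = 360.0 / shots_per_revolution  # rotation between shots
    total_images = shots_per_revolution * elevations
    return angular_step, total_images

step, total = capture_plan()
print(f"rotate {step:.0f} degrees between shots, {total} images in total")
# -> rotate 8 degrees between shots, 135 images in total
```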

intricate minutiae of neuroanatomical topography. Anatomical VMs acquired from DICOM were initially introduced for diagnostic and therapeutic planning and have now been routinely used in both neuroanatomical research and teaching.5 Moreover, interacting with the models using virtual reality could partially simulate training in a cadaveric laboratory.6 Although the superiority of VM compared with traditional pedagogical methods has been questioned in various fields,7-16 studies of neuroanatomical applications and neurosurgical education have demonstrated their role in improving performance.17-19 This can be explained by the spatial complexity and density of surgical neuroanatomy. Thus, it is plausible that volumetric depiction of

Figure 2. A screenshot of the photogrammetry postprocessing software, depicting the reconstruction process where images are overlapped. The undesired portion of the model (e.g., excess black sheet or external outliers) can be easily erased before the construction of the cloud to expedite the workflow.
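The reconstruction step rests on triangulating tie points: each photograph constrains a surface point to a ray from its camera, and rays from different viewpoints intersect at the point's 3D location. A toy 2D sketch of that principle (the cameras and bearings here are invented for the example):

```python
# Toy 2D illustration of tie-point triangulation: intersect two rays
# cam + t * dir from known camera positions. Invented example data.

def triangulate(cam1, dir1, cam2, dir2):
    """Return the intersection point of two 2D rays."""
    (x1, y1), (dx1, dy1) = cam1, dir1
    (x2, y2), (dx2, dy2) = cam2, dir2
    det = dx1 * (-dy2) - (-dx2) * dy1
    if abs(det) < 1e-12:
        raise ValueError("rays are parallel; add a viewpoint")
    # Cramer's rule for t1 in: t1*dir1 - t2*dir2 = cam2 - cam1
    t1 = ((x2 - x1) * (-dy2) - (-dx2) * (y2 - y1)) / det
    return (x1 + t1 * dx1, y1 + t1 * dy1)

# Two cameras sighting the point (1, 1) from (0, 0) and from (2, 0):
print(triangulate((0, 0), (1, 1), (2, 0), (-1, 1)))  # -> (1.0, 1.0)
```

Real photogrammetry software solves the same intersection problem in 3D, jointly over thousands of tie points and with the camera poses themselves unknown.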


Figure 3. A textured volumetric model created via photogrammetry before additional postprocessing highlights. The principal structures are easily recognized, such as the pre- and postcentral gyri and the central sulcus. The same model is further dissected in model 5 after postprocessing, where it is possible to appreciate the greater image quality and the deep white matter tracts.

professional surgical dissections will facilitate faster acquisition and longer retention of the relevant concepts.

Fundamentally, 3-dimensional (3D) scanning describes the process by which a detector traverses an object or specimen and collects data on its shape, texture, and color before digitizing it.20 Two popular techniques for acquiring this initial 3D blueprint of a target object are photogrammetry (PGM) and structured light scanning (SLS). PGM is a process by which the metrics of common surface points are taken from photographs to create clouds of colorized coordinates that will subsequently be triangulated to construct the VM. Although methods for obtaining 3D data from 2D photographs have been available for half a century,21 3D PGM applications have appeared only sporadically in neurosurgery.22,23 However, recent advances in 3D PGM software have made it a potentially practical alternative to dedicated surface scanners.24 Its principal advantages are portability, less required equipment, and rapid data collection, all of which can reduce the overall research costs. Notable applications of PGM include producing film and video games, preparing topographic maps, and planning architectural designs.25,26 In contrast, 3D scanning techniques such as SLS represent a portable method to rapidly obtain 3D anatomical data using structured light or lasers to capture surface topography.24 The scanner will typically be connected to a computer that automatically registers and reconstructs the coordinated frames.27 Subsequently, this rough image can be fine-tuned and postprocessed to prepare it for export to a 3D modeling server or 3D printing modality. Popular applications of such modalities include Microsoft Kinect (Microsoft Corp., Redmond, Washington, USA), biometric identification software (e.g., face, retinal scans), and reverse-engineering industrial tasks.28

METHODS

After performing microdissections of embalmed cadaveric specimens, our laboratory followed a standardized workflow to record the relevant neurosurgical anatomy. The major steps consisted of capturing 2D/SS media and subsequently setting up specimens in a diffuse-light studio for scanning.

The fundamental process underlying both SLS and PGM techniques was as follows: a series of images of the target specimen's surface were acquired from multiple viewpoints using 360° rotation about an arbitrary vertical axis, at the superior, middle, and inferior elevations of the camera with respect to the specimen. Postprocessing software can then superimpose the acquired images and measure the intersecting tie points to triangulate their location in 3D space, thereby creating a 3D model. Finally, lighting is an essential consideration, because surface detection is the fundamental mechanism by which these techniques operate. The object should remain static, and the lighting should, ideally, be diffuse. Objects that are glossy, metallic, or transparent should be covered with talcum powder or scanning spray to avoid reflections. Round target stickers can also be applied to the background of objects with a homogenous surface and/or texture to help the scanner detect the surface. After securing the specimens on a turntable (Foldio360 [OrangeMonkie, San Diego, California, USA]), the scanning process was initiated.

PGM and Image Postprocessing

The PGM workflow consists of 3 steps: 1) image capture, 2) preprocessing (optional), and 3) postprocessing. Because the image was taken in 2 dimensions, the surface area that can be captured was limited by the camera's field of view. To obtain additional surface angles, the assembly must be moved relative to the surface, similar to the method used in the scanning workflow.

Image capture can proceed using 3 methods: 1) taking multiple photographs with a handheld camera moving relative to the specimen between photographs; 2) rotating the specimen on a turntable with the camera fixed on a tripod; and 3) extracting


frames from a video, again with the camera moving relative to the specimen. The camera used will typically be a digital single-lens reflex, charge-coupled device, mirrorless, or smartphone camera. The selected camera should ideally have a high megapixel count (>12 megapixels), with a lens that can provide a clear and static view of the entire surface. Changes in distance or focus during photographic cycles will be deleterious to the image quality and surface recognition. In the videography option, auto-focus features should be disabled.

In our experience, we believe that the first 2 methods involving the handheld camera and tripod will be the most efficient for image capture (step 1). First, the specimen is placed on a black cloth and secured on a turntable (a green screen can also be used to facilitate masking). Next, the operator takes a video or a series of photographs from the superior, middle, and inferior aspects of the specimen, rotating the object 360° on the turntable for each view of the specimen (~45 photographs should be acquired about the axis from each aspect; Figure 1). In the video option, capturing a 45-90-second video at a frame rate of 24 frames/second will yield ~1000-2000 frames. Therefore, setting a frame step of ~10-25 should yield ~80 images.

Although an optional step, preprocessing (step 2) the raw image data set will yield a superior final result by increasing the visibility of the details in the shaded and lit areas and increasing the microcontrast. To achieve a visually aesthetic final image, the exposure should be fixed such that the parts in shade are lightened and the lit areas darkened. Chromatic aberrations and noise will then be removed, and the image will be sharpened. All processes are run in 16-bit and exported in 16-bit tag image file format (.tiff), 16-bit portable network graphic format (.png), or 100% quality 8-bit joint photographic experts group format (.jpeg) using image processing software (DxO OpticsPro 11 [DxO Labs, Paris, France]).

Postprocessing (step 3) describes generating the mesh and texture data from the images. The images are uploaded to a reconstruction application and put through a sequence of processing steps. A cache location is set to a dedicated large solid-state drive (512 GB) or a hard disk drive. The images are first subjected to automated masking, followed by manual correction of the automatically generated masks. This step excludes the background during model construction. Image grouping with low overlap should be enabled to help avoid incorrect camera and lens estimations. To construct the digital model, the masked photographs are then run through alignment, geometric reconstruction, and texturing steps in PGM software (Reality Capture Beta 1.0 [Capturing Reality, Bratislava, Slovakia]). These processes will usually be run in a semimanual mode (Figure 2). The reconstruction application will compare the shapes in the photographs (alignment step) to generate a high-resolution 3D mesh. During geometric reconstruction, sharp reconstruction should be used to prevent the software from filling holes in the 3D polygon. Afterward, texturing will superimpose a precisely shaded and colored representation of the specimen over the geometric reconstruction (i.e., the color contained in the photographs is transferred to the textures used on the surface of the mesh). The mesh can then be decimated to yield a simpler mesh with fewer polygons (500,000-1,000,000), smoothed, and exported in a wavefront data format (.obj) file with the accompanying material files (.mtl format) and texture files (.png format). The steps for postprocessing the image data sets obtained from videos or photographs are identical.

Additional images of the most geometrically complex surfaces, including partially or entirely hidden surfaces and surfaces with fine detail, can be acquired to improve the completeness and accuracy of the final volumetric model (Figure 3). The image sharpness and depth of field should be maximized.

Figure 4. A standard structured light scanning setup, with the scanner positioned at an optimal distance from the rotating specimen. Using structured light scanning, the operator must be more accurate and give more attention to the distance to capture all the desired surface. A rough preview of the upcoming model will be created almost instantaneously.
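The frame-extraction arithmetic for the video option of image capture can be made explicit: at 24 frames/second a 45-90-second clip yields roughly 1000-2000 frames, and choosing the step so that about 80 survive lands in the ~10-25 range the text describes. A small illustrative helper (the function and its defaults are ours, not part of the published workflow):

```python
# Frame-step arithmetic for the video capture option. Illustrative only;
# frame_step and target_images are hypothetical names, not the workflow's.

def frame_step(duration_s: float, fps: float = 24.0, target_images: int = 80) -> int:
    """Pick the extraction step so that ~target_images frames are kept."""
    total_frames = int(duration_s * fps)
    return max(1, round(total_frames / target_images))

for seconds in (45, 90):
    n = int(seconds * 24)
    step = frame_step(seconds)
    print(f"{seconds}s clip: {n} frames, keep every {step}th -> ~{n // step} images")
```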


Figure 5. An image depicting 3 preliminary scans before alignment.

3D SLS

The 3D scanning workflow for SLS involves 1) scanning and 2) postprocessing. The specimen is first placed on a black cloth on top of a turntable (if unstable, a wax support should be used). The SLS scanner (Artec Space Spider [Artec, Luxembourg City, Luxembourg]) should be held at an optimized distance (9-18 cm) from the surface of the specimen, as determined by the preview screen shown by the scanning software (Artec Studio

Figure 6. Postalignment model depicting the synthesis of the 3 scans obtained via structured light scanning.
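The alignment shown in Figure 6 is performed by the scanner's own software. As a heavily simplified, illustrative stand-in, the translation component of rigid alignment can be sketched by shifting each scan so that the point-cloud centroids coincide (rotation is deliberately ignored; all names here are hypothetical):

```python
# Crude stand-in for the translation part of rigid scan alignment:
# shift one point cloud so its centroid matches the reference's.
# Real alignment also solves for rotation; this sketch does not.

def centroid(points):
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(3))

def align_to(reference, scan):
    """Translate `scan` so its centroid matches `reference`'s centroid."""
    rx, ry, rz = centroid(reference)
    sx, sy, sz = centroid(scan)
    shift = (rx - sx, ry - sy, rz - sz)
    return [tuple(p[i] + shift[i] for i in range(3)) for p in scan]

ref = [(0, 0, 0), (2, 0, 0)]
scan = [(10, 0, 0), (12, 0, 0)]   # same shape, offset by 10 on x
print(align_to(ref, scan))        # -> [(0.0, 0.0, 0.0), (2.0, 0.0, 0.0)]
```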


Figure 7. The volumetric model after the mesh has been decimated and smoothed. At this point, the model can be exported and texture added to complete
the final processing steps (e.g., texture touchups, smoothing).
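The exported Wavefront file mentioned in both workflows is plain text: "v" lines list vertex coordinates and "f" lines list 1-based vertex indices per face, with an .mtl reference carrying the texture. A toy writer for a single triangle, only to illustrate the format (the material name is hypothetical; real exports come from the modeling software):

```python
# Minimal sketch of the Wavefront .obj text format used for export.
# Illustrative only; "specimen_texture" is a made-up material name.

def write_obj(vertices, faces, material="specimen_texture"):
    lines = [f"mtllib {material}.mtl", f"usemtl {material}"]
    for x, y, z in vertices:
        lines.append(f"v {x} {y} {z}")
    for face in faces:
        lines.append("f " + " ".join(str(i + 1) for i in face))  # .obj is 1-based
    return "\n".join(lines) + "\n"

obj_text = write_obj([(0, 0, 0), (1, 0, 0), (0, 1, 0)], [(0, 1, 2)])
print(obj_text)
```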

12 [Artec]). The turntable should be smoothly rotated such that the specimen rotates relative to the scanner. Multiple scans are then acquired (Figure 4). For surface capture, 3 scans are conventionally acquired (Figure 5). For capturing specimen surfaces that cannot be fully scanned in a single orientation on the turntable, 6-12 scans should be acquired by adjusting the orientation of the specimen. The scanner can also be moved over the surface of the static object. Regardless of the method used, the scans should be taken with 360° rotation of the specimen about an arbitrary vertical axis passing through the center of the surface from the superior, middle, and inferior aspects. The software will capture both geometrical features and the color texture of the object. Overlap on the scans will be achieved with the approximate aim that each element is present in 3 images.

Each frame should be registered during postprocessing. For rigid specimens, rigid alignment will be conducted (Figure 6). For nonrigid specimens, the specimen should be frozen and semithawed to facilitate the use of the nonrigid alignment tool. The color texture should permit robust autoalignment of the scans. Registration between the scans should be performed and the outliers removed. The scan views should be merged with sharp fusion to create a watertight polygon mesh file. This semiautomatic process can be performed using the 3D scanner's proprietary software. The mesh is then decimated to 500,000-1,000,000 polygons and smoothed, if necessary (Figure 7). Finally, the mesh is exported using the same file format as described for PGM (Figure 8).

Additional Postprocessing

Common problems that arise in the raw mesh are holes and nonmanifold edges or vertices (intersections) from the decimation step. Standard semiautomated postprocessing, including noise reduction and hole filling, can be applied to fix raw mesh topology errors for both PGM and SLS. The quality of the clean mesh can then be checked for the same errors. After repairing the mesh with 3D modeling software (MeshMixer, version 3.5 [AutoDesk, Inc., San Rafael, California, USA]), a clean mesh can be textured and exported. If the cameras were correctly aligned and the shots well taken, the texture should be accurate and require further editing only in areas where the surface capture proceeded from a high camera angle or hidden surfaces were reconstructed. Healing brush tools from various software packages (Photoshop CC 2018 "19.00" [Adobe Systems, San Jose, California, USA]) allow for cloning and fusing textures from adjacent regions to the part under restoration or re-creation.

We recorded and scanned a stepwise dissection of 3 hemispheres that had been prepared via the Klingler method.29 One of the specimens was latex injected to depict the relevant neurovascular anatomy of the medial and basal surfaces.

After VMs had been obtained using the reported workflows (Table 1), the models were uploaded to a 3D viewing platform (Sketchfab [Sketchfab Inc., New York, New York, USA]) to facilitate their interaction in 2 dimensions and extended reality (i.e., virtual reality, mixed reality, augmented reality).
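The "watertight" property the fusion step aims for can be checked directly on the face list: in a closed manifold triangle mesh every edge is shared by exactly two triangles, edges seen once bound a hole, and edges seen three or more times are nonmanifold. A toy check illustrating the idea (not the scanner software's implementation):

```python
# Toy topology check for a triangle mesh: classify edges by how many
# faces share them. Illustrative; real tools (e.g., the scanner's
# software or MeshMixer) perform this far more robustly.
from collections import Counter

def boundary_and_nonmanifold(faces):
    """Return (hole-boundary edges, nonmanifold edges) of a triangle list."""
    edges = Counter()
    for a, b, c in faces:
        for e in ((a, b), (b, c), (c, a)):
            edges[tuple(sorted(e))] += 1   # undirected edge
    boundary = [e for e, n in edges.items() if n == 1]
    nonmanifold = [e for e, n in edges.items() if n > 2]
    return boundary, nonmanifold

# A tetrahedron is watertight; removing one face opens a 3-edge hole:
tetra = [(0, 1, 2), (0, 1, 3), (0, 2, 3), (1, 2, 3)]
print(boundary_and_nonmanifold(tetra))       # -> ([], [])
print(boundary_and_nonmanifold(tetra[:-1]))  # boundary edges of the hole
```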


Figure 8. A textured volumetric model created via structured light scanning before export. The details of the structured light scanning model are clearly visible, with the nerves and arteries easily visualized. After rendering, the model can be exported and used as an interactive model in 2 or 3 dimensions—or used to perform a cinematic rendering (Videos 3 and 4).

Annotations were added to the relevant anatomical landmarks on the models. Two cinematic videos depicting the VM in different stages of microdissection were rendered using 3D modeling software (Blender 2.79 [Blender Foundation, Amsterdam, Netherlands]) to provide a stylized view of the corresponding neuroanatomy.

RESULTS

Both 3D scanning methods yielded models that upheld suitable clarity and structural integrity for anatomical education, surgical illustration, and, potentially, procedural simulation. Eight models of the brain obtained via PGM (models 1-5) and SLS (models 6-8) are depicted in Videos 1 and 2, respectively. Model 5 (available at: https://sketchfab.com/models/c8bd14b4d3374658a42a95a37e341511) and model 8 (available at: https://sketchfab.com/models/b9e3d39d2d8a4acc875873888c101233) are available on Sketchfab for a fully interactive experience. In addition, rendered animations of the VM were produced using Blender software (Blender Foundation) to demonstrate a clip using a professional render engine (Videos 3 and 4). The workflow and characteristics of each technique are briefly summarized in Tables 1 and 2.

DISCUSSION

Computer graphics is a rapidly developing field with applications in science, engineering, architecture, entertainment, and medicine.30-32 Because most neurosurgical procedures and corresponding pathologic entities involve intricate, minute anatomical structures that cannot be easily perceived, neuroimaging has become an integral component of clinical practice.5 Although neuroimaging has enabled visualization of structures for both diagnosis and surgical treatment, radiographs, computed tomography scans, and magnetic resonance imaging have typically been limited to 2 dimensions or a 3D volume in 2D slices.33 Some studies have suggested that incorporating 3D imaging of relevant anatomical structures can help medical students and residents learn and retain the information, which has supported the proliferation of anatomic volumetric models.34

Both SLS and PGM have specific advantages and limitations. The best commercial 3D scanners have a reported accuracy of 30 µm, a nominal resolution of 100 µm, and a rate point accuracy of 50 µm.35-38 Thus, these scanners will be most effective when digitizing small-size objects with intricate details, sharp edges, and thin ribs, such as certain neuroanatomical specimens. These scanners can generate 3D models with an accuracy of up to ~100 µm.24 Although SLS setups have typically been more expensive compared with PGM, the physical sensors on SLS scanners allow for reconstruction of meatus, foramina, canals, and other anatomical corridors. In contrast, PGM lacks the reflection of visible light from the deeper structures, which results in constructing black meshes. PGM-textured models seem to have improved texture quality and, thus, greater cosmetic precision compared with those


generated by SLS, which might improve anatomical fidelity. In judging whether PGM is an appropriate tool for generating 3D neuroanatomical reconstructions for a given project, several practical considerations must be weighed (Table 2). The primary benefits of PGM technology include faster raw data collection, portability, greater texture quality, and lower costs.

However, the primary drawback of PGM is the postprocessing time required. Data collection tends to be faster with PGM than with SLS when video recording is implemented.39 In particular, taking photographs of a single specimen can be accomplished in ~5-10 minutes and taking a video can require 1-2 minutes. In contrast, scanning the specimen via SLS will require ~10-20 minutes. Although SLS requires an independent power source, the PGM hardware is highly portable. Given an adequate light source, the only equipment needed to collect photographic data is a digital single-lens reflex camera or smartphone, a tripod, and adequate flash card storage capacity. Neither a computer nor electrical power is required on-site unless plug-in lighting is used. Such portability advantages make PGM useful for data collection in the operating theater or at a dissection station distant from a main power source or computer, because photographs can be taken for future remote digital reconstruction.

The tradeoff for the speed and flexibility of data collection with PGM is that the processing after image capture is time intensive. For SLS, standard postprocessing requires ~15 minutes (interactive). For PGM, manual correction of photographic masks averages ~1 minute per image. The suite of PGM model-building operations (alignment, geometry, and texture) is demanding of both RAM and video card, commanding >10 GB of memory at peak. The time required to complete these operations depends on the processing capability of the investigator's computer and the number of photographs used to build the model.

Graphics issues in PGM include forming and decimating triangle meshes, merging multiple range images, and detecting scan artifacts. One issue with SLS includes self-occlusion (e.g., noses on faces), which can be readily solved by taking more images of that region from multiple angles. SLS also has an inability to reconstruct cavities (not resolvable by taking more images) owing to the lack of physical sensors, resulting in gaps remaining in the point cloud. Radial basis function interpolation or Delaunay triangulation can be used to patch over such holes.40 Furthermore, edge curl will produce artifacts at sharp corners, permitting discontinuities in the computed mesh to arise. Other SLS problems are related to surface characteristics such as surface reflectance and color, the translucence or transparency of surfaces, and speckle (constructive and destructive interference of light due to the microstructure of the reflecting surface, although this will not usually be a problem). Ambient lighting can also be problematic in PGM. The substantial flexibility offered to the investigator in photographing a specimen suggests that different protocols could create variances in model quality.

Table 1. Brief Summary of General Workflow for Photogrammetry and Structured Light Scanning

Imaging capture
- Photogrammetry: Surface of the object is captured from the superior, middle, and inferior aspects; if videography is chosen, one must complete frame extraction to obtain the image data set.
- Structured light scanning: The object is fixed on a turntable and rotated 360° as the scanner is held at an optimized distance (9-18 cm); the scanning is completed at the superior, middle, and inferior aspects of each surface.

Preprocessing
- Photogrammetry: Photographs can be optimized and exported in .tiff, .png, or .jpeg format.
- Structured light scanning: Frames are obtained in .scan format.

Postprocessing
- Photogrammetry: Using postprocessing software, images are automatically masked, aligned, reconstructed, and textured; the mesh is decimated (500,000-1,000,000 polygons), smoothed, and exported.
- Structured light scanning: Workflow is semiautomatic and includes scan alignment, outlier removal, texture addition, mesh decimation/smoothing, and export.

Table 2. Comparison of Efficiency, Quality, and Cost Between Photogrammetry and Structured Light Scanning

Pros
- Photogrammetry: Faster raw data collection (1-10 minutes); portability; low cost ($50-$300); high texture quality.
- Structured light scanning: Accuracy 0.05 mm; automated and time-efficient postprocessing workflow; reconstruction of vessels, nerves, and foramina.

Cons
- Photogrammetry: Increased postprocessing time; postprocessing requires high RAM (>16 GB); unable to reconstruct small cavities and canals; ambient light can affect quality.
- Structured light scanning: Slower raw data collection (10-20 minutes); high cost (>$3000); limited portability.

CONCLUSIONS

The application of 3D computer graphics to neurosurgical education has shown great promise and has become increasingly prevalent in the reported data because development in other fields has facilitated its use in medicine. VMs are an innovative and immersive method to experience the intricacies of neuroanatomy and will likely continue to proliferate in academia and popular culture in the future. The present study has described 2 of the most common methods for producing a volumetric reconstruction of anatomical models: surface capture via PGM and SLS.


Given the straightforward workflow, it is feasible for neurosurgeons and anatomists to use the necessary hardware and software and generate high-quality in situ VMs for neuroanatomical education, surgical simulation, and surgical planning. SLS is preferable if high accuracy is desired and the imaging of anatomical corridors (e.g., foramina, meatus, canals) is required. PGM yields a viable alternative to dedicated scanners, with the potential to significantly broaden the accessibility of 3D research projects to academic institutions.

ACKNOWLEDGMENTS

We would like to express our gratitude to the body donors and their families, who, through their altruism, contributed to making this review article possible.

REFERENCES

1. Javaid MA, Chakraborty S, Cryan JF, Schellekens H, Toulouse AE. Understanding neurophobia: reasons behind impaired understanding and learning of neuroanatomy in cross-disciplinary healthcare students. Anat Sci Educ. 2018;11:81-93.

2. Berney S, Bétrancourt M, Molinari G, Hoyek N. How spatial abilities and dynamic visualizations interplay when learning functional anatomy with 3D anatomical models. Anat Sci Educ. 2015;8:452-462.

3. Nguyen N, Mulla A, Nelson AJ, Wilson TD. Visuospatial anatomy comprehension: the role of spatial visualization ability and problem-solving strategies. Anat Sci Educ. 2014;7:280-288.

4. McIntire JP, Havig PR, Geiselman EE. Stereoscopic 3D displays and human performance: a comprehensive review. Displays. 2014;35:18-26.

5. Aso-Escario J, Martinez-Quiñones JV, Aso-Vizán J, Gil-Albero P, Arregui-Calvo R. Image analysis and processing: fundaments and applications in neurology and neurosurgery. Rev Neurol. 2011;53:494-503.

6. Locketz GD, Lui JT, Chan S, et al. Anatomy-specific virtual reality simulation in temporal bone dissection: perceived utility and impact on surgeon confidence. Otolaryngol Head Neck Surg. 2017;156:1142-1149.

7. Levinson AJ, Weaver B, Garside S, McGinn H, Norman GR. Virtual reality and brain anatomy: a randomised trial of e-learning instructional designs. Med Educ. 2007;41:495-501.

8. Donnelly L, Patten D, White P, Finn G. Virtual human dissector as a learning tool for studying cross-sectional anatomy. Med Teach. 2009;31:553-555.

9. Keedy AW, Durack JC, Sandhu P, Chen EM, O'Sullivan PS, Breiman RS. Comparison of traditional methods with 3D computer models in the instruction of hepatobiliary anatomy. Anat Sci Educ. 2011;4:84-91.

12. …the understanding of according CT images: a randomized controlled study. Teach Learn Med. 2012;24:140-148.

13. Roach VA, Brandt MG, Moore CC, Wilson TD. Is three-dimensional videography the cutting edge of surgical skill acquisition? Anat Sci Educ. 2012;5:138-145.

14. Khot Z, Quinlan K, Norman GR, Wainman B. The relative effectiveness of computer-based and traditional resources for education in anatomy. Anat Sci Educ. 2013;6:211-215.

15. Lisk K, McKee P, Baskwill A, Agur AM. Student perceptions and effectiveness of an innovative learning tool: anatomy Glove Learning System. Anat Sci Educ. 2015;8:140-148.

16. Hu A, Wilson T, Ladak H, Haase P, Doyle P, Fung K. Evaluation of a three-dimensional educational computer model of the larynx: voicing a new direction. J Otolaryngol Head Neck Surg. 2010;39:315-322.

17. Estevez ME, Lindgren KA, Bergethon PR. A novel three-dimensional tool for teaching human neuroanatomy. Anat Sci Educ. 2010;3:309-317.

18. Ruisoto P, Juanes JA, Contador I, Mayoral P, Prats-Galino A. Experimental evidence for improved neuroimaging interpretation using three-dimensional graphic models. Anat Sci Educ. 2012;5:132-137.

19. Kirkman MA, Ahmed M, Albert AF, Wilson MH, Nandi D, Sevdalis N. The use of simulation in neurosurgical education and training. J Neurosurg. 2014;121:228-246.

20. Kuş A. Implementation of 3D optical scanning technology for automotive applications. Sensors. 2009;9:1967-1979.

21. Wu B, Klatzky RL, Stetten G. Visualizing 3D objects from 2D cross sectional images displayed in-situ versus ex-situ. J Exp Psychol Appl. 2010;16:45-59.

22. De Benedictis A, Nocerino E, Menna F, et al. Photogrammetry of the human brain: a novel method for three-dimensional quantitative explo-

24. …paleontological specimens. PLoS One. 2017;12:e0179264.

25. Fraser CS. A resume of some industrial applications of photogrammetry. ISPRS J Photogramm Remote Sens. 1993;48:12-23.

26. Valença J, Júlio E, Araújo H. Applications of photogrammetry to structural assessment. Exp Techn. 2011;36:71-81.

27. Sturm P, Ramalingam S, Tardif J-P, et al. Camera models and fundamental concepts used in geometric computer vision. Found Trends Comput Graph Vis. 2011;6:1-183.

28. Bell T, Li B, Zhang S. Structured light techniques and applications. In: Wiley Encyclopedia of Electrical and Electronics Engineering. New York, NY: Wiley; 2016:1-24.

29. Klingler J. Development of the macroscopic preparation of the brain through the process of freezing. Schweiz Arch Neurol Psychiatr. 1935;36:247-256 [in German].

30. Dorsey J, McMillan L. Computer graphics and architecture. ACM SIGGRAPH Comput Graph. 1998;32:45-48.

31. Machover C, England N, Whitted T. Computer graphics in entertainment. IEEE Comput Graph Appl. 1998;18:22-23.

32. Vidal FP, Bello F, Brodlie KW, et al. Principles and applications of computer graphics in medicine. Comput Graph Forum. 2006;25:113-137.

33. El-Gamal FE-ZA, Elmogy M, Atwan A. Current trends in medical image registration and fusion. Egypt Inform J. 2016;17:99-124.

34. Levoy M. Display of surfaces from volume data. 1987, revised February 1988. Available at: https://graphics.stanford.edu/papers/volume-cga88/volume.pdf. Accessed August 10, 2018.

35. Zhang Q, Eagleson R, Peters TM. Volume visualization: a technical overview with a focus on medical applications. J Digit Imaging. 2011;24:640-664.
10. Bareither ML, Arbel V, Growe M, Muszczynski E, ration of the structural connectivity in neurosur- 36. Adams JW, Olah A, McCurry MR, Potze S. Surface
Rudd A, Marone JR. Clay modeling versus written gery and neurosciences. World Neurosurg. 2018;115: model and tomographic archive of fossil primate
modules as effective interventions in understand- e279-e291. and other mammal holotype and paratype speci-
ing human anatomy. Anat Sci Educ. 2013;6:170-176. mens of the Ditsong National Museum of Natural
23. Barbero-García I, Lerma JL, Marqués-Mateu Á, History, Pretoria, South Africa. PLoS One. 2015;10:
11. Maggio MP, Hariton-Gross K, Gluch J. The use of Miranda P. Low-cost smartphone-based photo- e0139800.
independent, interactive media for education in grammetry for the analysis of cranial deformation
dental morphology. J Dent Educ. 2012;76:1497-1511. in infants. World Neurosurg. 2017;102:545-554. 37. Hayes A, Easton K, Devanaboyina PT, Wu J-P,
Kirk TB, Lloyd D. Structured white light scanning
12. Metzler R, Stein D, Tetzlaff R, et al. Teaching on 24. Das AJ, Murmann DC, Cohrn K, Raskar R. of rabbit Achilles tendon. J Biomech. 2016;49:
three-dimensional presentation does not improve A method for rapid 3D scanning and replication of 3753-3758.

WORLD NEUROSURGERY 126: 359-368, JUNE 2019 www.journals.elsevier.com/world-neurosurgery 367
38. Ríos L, Palancar C, Pastor F, Llidó S, Sanchís-Gimeno JA, Bastir M. Shape change in the atlas with congenital midline non-union of its posterior arch: a morphometric geometric study. Spine J. 2017;17:1523-1528.

39. Rist F, Herzog K, Mack J, Richter R, Steinhage V, Töpfer R. High-precision phenotyping of grape bunch architecture using fast 3D sensor and automation. Sensors. 2018;18:763.

40. Daneshmand M, Helmi A, Avots E, et al. 3D scanning: a comprehensive survey. Available at: https://arxiv.org/pdf/1801.08863.pdf. Accessed August 10, 2018.

Conflict of interest statement: The authors declare that the article content was composed in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Received 6 December 2018; accepted 9 March 2019.

Citation: World Neurosurg. (2019) 126:359-368. https://doi.org/10.1016/j.wneu.2019.03.099

Journal homepage: www.journals.elsevier.com/world-neurosurgery

Available online: www.sciencedirect.com

1878-8750/$ - see front matter. Published by Elsevier Inc.