
JOURNAL OF DISPLAY TECHNOLOGY, VOL. 4, NO. 4, DECEMBER 2008

Advanced Medical Displays: A Literature Review of Augmented Reality
Tobias Sielhorst, Marco Feuerstein, and Nassir Navab

Abstract—The impressive development of medical imaging technology during the last decades provided physicians with an increasing amount of patient specific anatomical and functional data. In addition, the increasing use of non-ionizing real-time imaging, in particular ultrasound and optical imaging, during surgical procedures created the need for design and development of new visualization and display technology allowing physicians to take full advantage of rich sources of heterogeneous preoperative and intraoperative data. During the 1990s, medical augmented reality was proposed as a paradigm bringing new visualization and interaction solutions into perspective. This paper not only reviews the related literature but also establishes the relationship between subsets of this body of work in medical augmented reality. It finally discusses the remaining challenges for this young and active multidisciplinary research community.

I. INTRODUCTION

MEDICAL augmented reality takes its main motivation from the need of visualizing medical data and the patient within the same physical space. It goes back to the vision of having x-ray vision, seeing through objects. This would require real-time in-situ visualization of co-registered heterogeneous data, and was probably the goal of many medical augmented reality solutions proposed in the literature. As early as 1938, Steinhaus [1] suggested a method for visualizing a piece of metal inside tissue registered to its real view even before the invention of computers. The method was based on the geometry of the setup, and the registration and augmentation were guaranteed by construction. In 1968, Sutherland [2] suggested a tracked head-mounted display as a novel human-computer interface enabling viewpoint-dependent visualization of virtual objects. His visionary idea and first prototype were conceived at a time when computers were commonly controlled in batch mode rather than interactively. It was only two decades later that the advances in computer technology allowed scientists to consider such technological ideas within a real-world application. It is interesting to note that this also corresponds to the first implementation of a medical augmented reality system proposed by Roberts et al. [3] in 1986. They developed a system integrating segmented computed tomography (CT) images into the optics of an operating microscope. After an initial interactive CT-to-patient registration, movements of the operating microscope were measured using an ultrasonic tracking system. In the early 1990s, augmented reality was also considered for other applications including industrial assembly [4], the paperless office [5], and machine maintenance [6].

While virtual reality (VR) aimed at immersing the user entirely into a computer-generated virtual world, augmented reality (AR) took the opposite approach, in which virtual computer-generated objects were added to the real physical world [7]. Within their so-called virtuality continuum [8], Milgram and Kishino described AR as a mixture of virtual reality (VR) and the real world in which the real part is more dominant than the virtual one. Azuma described AR by its properties of aligning virtual and real objects, and running interactively and in real-time [9], [10].

Inherent in augmented reality is the philosophy that intelligence amplification (IA) of a user has more potential than artificial intelligence (AI) [11], because human experience and intuition can be coupled with the computational power of computers.

II. OVERVIEW OF MEDICAL AR SYSTEMS AND TECHNOLOGIES

The first setup augmenting imaging data registered to an object was described in 1938 by the Austrian mathematician Steinhaus [1]. He described the geometric layout to reveal a bullet inside a patient with a pointer that is visually overlaid on the invisible bullet. This overlay was aligned by construction from any point of view and its registration works without any computation. However, the registration procedure is cumbersome and it has to be repeated for each patient. The setup involves two cathodes that emit X-rays projecting the bullet on a fluoroscopic screen (see Fig. 2). On the other side of the X-ray screen, two spheres are placed symmetrically to the X-ray cathodes. A third sphere is fixed on the crossing of the lines between the two spheres and the two projections of the bullet on the screen. The third sphere represents the bullet. Replacing the screen with a semi-transparent mirror and watching the object through the mirror, the third sphere is overlaid exactly on top of the bullet from any point of view. This is possible because the third sphere is at the location to which the bullet is mirrored. Therefore, the setup yields a stereoscopic depth impression. The overlay is restricted to a single point and the system has to be manually calibrated for each augmentation with the support of an X-ray image with two X-ray sources.

In the next decades, different technologies followed that allow for medical augmentation of images. This section will introduce them as seven fundamental classes, including their specific limitations and advantages. Each subsection begins with the definition of the respective category. Fig. 15 gives a short overview on these technologies.


Fig. 1. Inventions timeline of selected imaging and AR technology [2] © 1968 IEEE.

Fig. 2. Early suggestion for overlay of imaging data by Steinhaus [1] in 1938. Computation-free calibration (left and middle) and visualization (right) of the
proposed setup.

We start with devices that allow for in-situ visualization. This means that the view is registered to the physical space.

A. HMD Based AR System

The first head-mounted display (HMD)-based AR system was described by Sutherland [2] in 1968 (see Fig. 3). A stereoscopic monochrome HMD combined real and virtual images by means of a semi-transparent mirror. This is also referred to as an optical see-through HMD. The tracking was performed mechanically. Research on this display was not application driven, but aimed at the "ultimate display" as Sutherland referred to it.

Fig. 3. The first (optical see-through) HMD by Sutherland [2].

Bajura et al. [12] reported in 1992 on their video see-through system for the augmentation of ultrasound images (see Fig. 4). The system used a magnetic tracking system to determine the pose of the ultrasound probe and the HMD.

Fig. 4. First video see-through HMD: Augmentation of ultrasound slices [12]. © 1992 ACM.

The idea of augmenting live video instead of optical image fusion appears counterproductive at first sight, since it reduces image quality and introduces latency for the real view. However, by this means the real view can be controlled electronically, resulting in the following advantages:
1) No eye-to-display calibration is needed; only the camera-to-tracker transformation has to be calculated, which may remain fixed.
2) Arbitrary merging functions between virtual and real objects are possible, as opposed to brightening up the real view by virtual objects in optical overlays. Only video overlay allows for opaque virtual objects, dark virtual objects, and correct color representation of virtual objects.
3) By delaying the real view until the data from the tracking system is available, the relative lag between real and virtual objects can be eliminated, as described by Bajura et al. [13] and Jacobs et al. [14]; a minimal sketch of this pairing follows the list.
4) For the real view, the image quality is limited by the display specifications in a similar way as it is for the rendered objects. Since the color spectrum, brightness, resolution, accommodation, field of view, etc. are the same for real and virtual objects, they can be merged in a smoother way than for optical overlays.
5) The overlay is not user dependent, since the generation of the augmentation is already performed in the computer, as opposed to the physical overlay of light in the eye. The resulting image of an optical see-through system is in general not known. A validation without interaction is hardly possible with optical overlays.
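The pairing of video frames and tracking data mentioned in advantage 3) can be sketched in a few lines. The following Python fragment is only an illustrative sketch and is not taken from any of the cited systems: it buffers time-stamped frames and poses and releases a frame only together with a pose of the same point of time, so that real and virtual content are rendered consistently.

    from collections import deque

    # Illustrative sketch: pair each video frame with the tracking pose of the
    # same point of time, delaying the real view until the pose has arrived.
    frames = deque()   # entries: (timestamp, image)
    poses = deque()    # entries: (timestamp, 4x4 pose matrix)

    def next_synchronized_pair():
        """Return an (image, pose) pair with matching time stamps, or None
        if the corresponding tracking data has not been computed yet."""
        if not frames:
            return None
        t_frame, image = frames[0]
        while poses and poses[0][0] < t_frame:   # drop poses older than the frame
            poses.popleft()
        if not poses:
            return None   # wait: zero relative lag at the cost of a small absolute lag
        frames.popleft()
        return image, poses[0][1]

With hardware genlocking, as in the system of Sauer et al. [17] described below, the time stamps coincide by construction and the pairing becomes trivial.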
In 1996, in a continuation of the work of Bajura et al. [12], [13], State et al. [15] reported on a system with 10 frames per second (fps) creating VGA output. This system facilitates hybrid magnetic and optical tracking and offers higher accuracy and faster performance than the previous prototypes. The speed was mainly limited by the optical tracking hardware. Nowadays, optical tracking is fast enough to be used exclusively. The continued system has been evaluated in randomized phantom studies in a needle biopsy experiment [16]. Users hit the targets significantly more accurately using AR guidance compared to standard guidance.

In 2000, Sauer and colleagues [17] presented a video see-through system that allowed for a synchronized view of real and virtual images in real-time, i.e., 30 fps. In order to ensure that camera images and tracking data are from exactly the same point of time, the tracking camera and the video cameras are genlocked, i.e., the tracking system shutter triggers the cameras. Their visualization software waits for the calculated tracking data before an image is augmented. This way, the relative lag is reduced to zero without interpolating tracking data. The system uses inside-out optical tracking, which means that the tracking camera is placed on the HMD to track a reference frame rather than the other way around (see Fig. 5). This way of tracking allows for very low reprojection errors, since the orientation of the head can be computed in a numerically more stable way than by outside-in tracking using the same technology [18].

Wright et al. [19] reported in 1995 on optical see-through visualization for medical education. The continuation of the system [20] augments anatomical data on a flexible knee joint


Fig. 5. Video see-through HMD without relative lag [17]. © 2000 IEEE.

phantom in order to teach the dynamic spatial behavior of anatomy. Our group [21] suggested augmentation of recorded expert motions in regard to a simulator phantom in order to teach medical actions. The system allows for comparative visualization and automatic quantitative comparison of two actions.

Luo and Peli [22] use head-mounted display visualization as an aid for the visually impaired rather than for supporting physicians. They use an optical see-through system to superimpose contour images from an attached camera. The system is meant to help patients with tunnel vision to improve visual search performance.

Rolland and Fuchs [23] discuss in detail advantages and shortcomings of optical and video see-through technology. Cakmakci and Rolland [24] provide a recent and comprehensive overview of HMD designs.

B. Augmented Optics

Operating microscopes and operating binoculars can be augmented by inserting a semi-transparent mirror into the optics. The mirror reflects the virtual image into the optical path of the real image. This allows for high optical quality of real images without further eye-to-display calibration, which is one of the major issues of optical see-through augmentation. Research on augmented optics evolved from stereotaxy in brain surgery in the early 1980s, which brought the enabling technology together, as for instance described by Kelly [25].

The first augmented microscope was proposed by Roberts et al. [3], [26], showing a segmented tumor slice of a computed tomography data set in a monocular operating microscope. This system can be said to be the first operational medical AR system. Its application area was interventional navigation. The accuracy requirement for the system was defined to be 1 mm [27] in order to be in the same range as the CT slice thickness. An average error of 3 mm [27] was measured for reprojection of contours, which is a remarkable result for the first system. However, the ultrasonic tracking did not allow for real-time data acquisition. A change in position of the operating microscope required approximately 20 s for acquiring the new position.

In 1995, Edwards et al. [28] presented their augmented stereoscopic operating microscope for neurosurgical interventions. It allowed for multicolor representation of segmented 3D imaging data as wireframe surface models or labeled 3D points (see Fig. 6). The interactive update rate of 1–2 Hz was limited by the infrared tracking system. The accuracy of 2–5 mm is in the same range as the system introduced by Friets et al. [27]. In 2000, the group reported on an enhanced version [29] with submillimeter accuracy, which was evaluated in phantom studies, as well as clinical studies, for maxillofacial surgery. The new version also allows for calibration of different focal lengths to support variable zoom level settings during the augmentation.

Fig. 6. Augmented microscope: Ordinary and augmented view [29]. © 2000 IEEE.

For ophthalmology, Berger et al. [30] suggest augmenting angiographic images into a biomicroscope. The system uses no external tracking but image-based tracking, which is possible because the retina offers a relatively flat surface that is textured with visible blood vessel structures. According to the authors, the system offers an update rate of 1–5 Hz and an accuracy of 5 pixels in the digital version of the microscope image.

Birkfellner and colleagues have developed an augmented operating binocular for maxillofacial surgery in 2000 [31], [32] (see Fig. 7). It enables augmentation employing variable zoom and focus as well as customizable eye distances [33]. As opposed to the operating microscopes that are mounted on a swivel arm, an operating binocular is worn by the user.

Fig. 7. Augmented binoculars [31]. © 2002 IEEE.

A drawback of augmented optics in comparison with other augmentation technology is the process of merging real and


computed images. As virtual images can only be added and may not entirely cover real ones, certain graphical effects cannot be realized. The impact of possible misperception is discussed in Section IV-D. Additionally, the relative lag between the visualization of real and virtual images cannot be neglected for head-worn systems (cf. Holloway [34]).

In addition to the superior imaging quality of the real view, a noteworthy advantage of augmented optics is the seamless integration of its technology into the surgical workflow. The augmented optics can be used as usual if the augmentation is not desired. Furthermore, the calibration or registration routine in the operating room need not be more complicated than for a navigation system.

C. AR Windows

The third type of devices that allows for in situ visualization is the AR window. In 1995, Masutani et al. [35] presented a system with a semi-transparent mirror that is placed between the user and the object to be augmented. The virtual images are created by an autostereoscopic screen with integral photography technology (see Fig. 8). With microlenses in front of an ordinary screen, different images can be created for different viewing angles. This either reduces the resolution or limits the effective viewing range of the user. However, no tracking system is necessary in this setup to maintain the registration after it has been established once. The correct alignment is independent of the point of view. Therefore, these autostereoscopic AR windows involve no lag when the viewer is moving. The first system could not compute the integral photography dynamically. It had to be precomputed for a certain data set.

Fig. 8. Concept of integral videography based augmentation and examples [37]. © 2004 IEEE.

In 2002, Liao et al. [36], [37] proposed a medical AR window based on integral videography that could handle dynamic scenes. The authors realized the system for a navigation scenario, in which the position of an instrument was supposed to be visualized in the scene. Their algorithm performed the recalculation of a changed image in less than a second.

Blackwell et al. [38] presented in 1998 an AR window using a semi-transparent mirror for merging the real view with virtual images from an ordinary monitor. This technology requires tracked shutter glasses for the correct alignment of augmented objects and stereo vision, but it can handle dynamic images for navigation purposes at a high resolution and update rate.

For in situ visualization, AR windows seem to be a perfect match to the operating room at first sight. For ergonomic and sterility reasons it is a good idea not to make surgeons wear a display. There are different ways of realizing AR windows. In detail, each one introduces a trade-off: Autostereoscopic displays suffer from poorer image quality in comparison with other display technologies. In principle, they offer a visualization for multiple users. However, this feature introduces another trade-off regarding image quality.

Display technology using shutter glasses needs cables for trigger signals and power supply. Polarization glasses, as for instance used in the system introduced by Goebbels et al. [39], do not need cables and weigh less than an HMD, but limit the viewing angle of the surgeon to match the polarization. Non-autostereoscopic AR windows need to track the position of the user's eyes in addition to the position of the patient and the AR window. This introduces another source of error.

Fig. 9. AR window that needs polarization glasses [39]. © 2003 IEEE.

Wesarg et al. [40] suggest a monoscopic AR window based on a transparent display. The design offers a compact setup, since no mirror is used, and no special glasses are required. However, it cannot display stereoscopic images and only one eye can see a correct image overlay. Since no mirror is used, the foci of the virtual and real image are at completely different distances.

All AR window designs have to take care of distracting reflections from different light sources. Last but not least, the display must be placed between the patient and the viewer. This may obstruct the surgeons' working area.

We believe that an optimal in-situ visualization device could consist of a combination of an AR window and an HMD; an example may be an HMD attached to a boom.

D. Augmented Monitors

In this section, we cluster all systems that augment video images on ordinary monitors. The point of view is defined by an


additional tracked video camera. In 1993, Lorensen and Kikinis [41] published their live video augmentation of segmented MRI
data on a monitor. This initial system did not include tracking
of the video camera yet. The camera-to-image registration had
to be performed manually. The successor of this setup included
a vision-based tracking system with fiducial markers [42].
Sato et al. [43] visualize segmented 3D ultrasound images
registered to video camera images on a monitor for image guid-
ance of breast cancer surgery. Nicolau et al. [44] describe a
camera-based AR system using markers that are detected in
the camera image. The system aims at minimally invasive liver
ablation.
As an advantage of augmented monitors, users need not wear
an HMD or glasses. By definition, however, augmented monitors offer neither in situ visualization nor stereoscopic vision.
Using them adds a tracked camera to the clinical setup.
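The core operation behind such monitor-based augmentation is the projection of tracked 3D data into the image of the calibrated video camera. The following sketch is a generic pinhole-camera illustration with hypothetical variable names, not the implementation of any cited system; K denotes the camera intrinsics and (R, t) the world-to-camera transform obtained from calibration and external tracking.

    import numpy as np

    def project_points(points_world, K, R, t):
        """Project Nx3 points from tracking/world coordinates to pixel coordinates.
        K: 3x3 intrinsic matrix; (R, t): world-to-camera rigid transform."""
        p_cam = R @ points_world.T + t.reshape(3, 1)   # 3xN camera coordinates
        p_img = K @ p_cam                              # homogeneous pixel coordinates
        return (p_img[:2] / p_img[2]).T                # Nx2 pixel coordinates

Virtual objects, e.g., a planned target point, can then be drawn at the returned pixel positions on top of the live video.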

E. Augmented Endoscopes
A separate paragraph is dedicated to endoscope augmentation, although it might be considered as a special case of monitor-based augmented reality or augmented imaging devices (see Section II-F). In contrast to augmented imaging devices, endoscopic images need a tracking system for augmentation. As opposed to monitor-based AR, the endoscopic setup already contains a camera. Hence, the integration of augmented reality techniques does not necessarily introduce additional hardware into the workflow of navigated interventions. A lot of investigative work has been carried out that dealt specifically with endoscopic augmentation.

The first usage of endoscopes as telescopic instruments utilizing a light source dates back to the 19th century. Endoscopy was mainly dedicated to diagnosis until the invention of video-based systems in the 1980s. Video endoscopy permits different team members to see the endoscopic view simultaneously. With this approach, it is possible for an assistant to position the endoscope while the operating surgeon can use both hands for the procedure. This feature opened the field of endoscopic surgeries. The removal of the gallbladder was one of the first laparoscopic surgeries. This operation also became a standard minimally invasive procedure. Since then, endoscopy has been successfully introduced into other surgical disciplines. Comprehensive literature reviews on the history of endoscopy can be found, for instance, in [45], [46], and [47].

Although endoscopic augmentation seems to be a straightforward step, it has been realized as recently as the end of the 1990s by Freysinger et al. [48] for ear, nose and throat (ENT) surgery and by Shahidi and colleagues [49] for brain surgery. Fig. 10 shows a visualization of the latter system including a targeting help in the endoscope image. Scholz et al. presented a navigation system for neurosurgery based on processed images [50]. Shahidi and Scholz use infrared tracking technology and a rigid endoscope, while Freysinger's system uses magnetic tracking.

Fig. 10. Augmentation in an endoscope [55]. © 2002 IEEE.

Mourgues et al. [51] describe endoscope augmentation in a robotic surgery system. The tracking is done implicitly by the robot, since the endoscope is moved by the robot's arm. Therefore no additional tracking system is necessary.

For endoscopic augmentation, the issues of calibration, tracking, and visualization are partly different than for other types of AR devices:

1) Calibration and Undistortion of Wide Angle Optics: Because of their wide-angle optics, endoscopes suffer from a noticeable image distortion. If a perfect, distortion-free pinhole camera model is assumed for superimposition, a particular source of error in the augmented image will be introduced [52]. This issue can be neglected in other AR systems with telephoto optics. Common types of distortion are radial distortion (also referred to as barrel distortion) and tangential distortion. Either the endoscope image has to be undistorted or the rendered overlay has to be distorted to achieve a perfect superimposition. While first approaches [53] required several minutes to undistort a single endoscope image, this process can now be completed in real-time: De Buck et al. [54] undistort sample points in the image and map a texture of the endoscope image on the resulting tiles; Shahidi et al. [55] precompute a look-up table (LUT) for each pixel for real-time undistortion.
stance, in [45], [46], and [47]. constant, although many endoscopes incorporate zoom lenses
Although endoscopic augmentation seems to be a straight- to change it intraoperatively, invalidating a certain calibration.
forward step it has been realized as recently as the end of the Stoyanov et al. suggest to automatically adjust the calibration
1990s by Freysinger et al. [48] for ear, nose and throat (ENT) for intraoperative changes of the focal length of a stereoscopic
surgery and Shahidi and colleagues [49] for brain surgery. Fig. camera [59]. Even though models for the calibration of mono-
10 shows a visualization of the latter system including a tar- scopic cameras with zoom lenses exist [60], they are not easily
geting help in the endoscope image. Scholz et al. presented applicable to endoscopes. These models require the (preferably
a navigation system for neurosurgery based on processed im- automatic) determination of the physical ranges for the lens set-
ages [50]. Shahidi and Scholz use infrared tracking technology tings e.g., in terms of motor units. However, the zoom settings
and a rigid endoscope while Freysinger’s system uses magnetic of endoscopes are usually adjusted manually, rather than by a
tracking. precise motor.
Mourgues et al. [51] describe endoscope augmentation in a To obtain a rigid transformation from the camera coordinate
robotic surgery system. The tracking is done implicitly by the frame to the coordinate frame of an attached tracking body or
robot since the endoscope is moved by the robot’s arm. There- sensor, most authors employ hand-eye calibration techniques
fore no additional tracking system is necessary. [51], [61]–[64]. An alternative approach makes use of a tracked


calibration pattern, whose physical coordinates are known with respect to the tracker [54], [55], [65].
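Hand-eye calibration solves the classical AX = XB problem for the fixed transformation between the camera and the tracked body attached to it, using a set of relative motions. As an illustrative sketch only (the cited systems use their own formulations), recent OpenCV versions provide a solver that can be fed with poses of the tracked body reported by the tracker and poses of a calibration pattern estimated in the camera; the function and argument names below are the assumed OpenCV API, everything else is hypothetical.

    import cv2

    def camera_to_body_transform(R_body2tracker, t_body2tracker,
                                 R_pattern2cam, t_pattern2cam):
        """Estimate the fixed camera-to-tracked-body transform (hand-eye).
        Inputs are lists of rotations/translations collected at several stations:
        body poses from the tracking system and pattern poses from the camera
        (e.g., obtained with cv2.solvePnP on a checkerboard)."""
        R_cam2body, t_cam2body = cv2.calibrateHandEye(
            R_body2tracker, t_body2tracker,
            R_pattern2cam, t_pattern2cam,
            method=cv2.CALIB_HAND_EYE_TSAI)
        return R_cam2body, t_cam2body

The resulting transform is what allows tracker poses to be chained into the camera coordinate frame for augmentation.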
In certain applications oblique-viewing endoscopes are used, for which the viewing direction can be changed by rotating the scope cylinder. Yamaguchi et al. [66] and De Buck et al. [67] developed calibration procedures for such endoscopes.

2) Tracking of Flexible Endoscopes: Non-rigid endoscopes cannot be tracked by optical tracking systems. Bricault et al. [68] describe the registration of bronchoscopy and virtual bronchoscopy images using only geometric knowledge and image processing. The algorithms employed did not have real-time capability; however, they proved to be stable when used on recorded videos. In contrast to Bricault's shape-from-shading approach, Mori et al. [69] use epipolar geometry for image processing. In order to improve the performance of their registration algorithm they suggest the addition of electromagnetic tracking of the bronchoscope [70]. To achieve a fusion of the bronchoscopic video with a target path, Wegner et al. restrict electromagnetic tracking data onto positions inside a previously segmented bronchial tree [71]. Some groups, for instance Klein et al. [72], use electromagnetic tracking exclusively.

3) Endoscopy Related Visualization Issues: The augmentation of endoscopic data does not only entail fusion with other imaging data. Konen et al. [50] suggest several image-based methods with a tracked endoscope to overcome typical limitations, such as replay of former images in case of loss of sight, image mosaicing, landmark tracking, and recalibration with anatomical landmarks. Krueger et al. [73] evaluate endoscopic distortion correction, color normalization, and temporal filtering for clinical use.

One of the reasons for augmenting endoscope images is to provide the anatomical context, since the point of view and the horizon are changing. Recovering each of these issues requires a heightened level of concentration from surgeons, since their field of view is very limited and the operating surgeon generally does not move the endoscope personally. Fuchs et al. [74] suggest provision of anatomical context by visualizing laparoscopic images in situ with a head-mounted display. The necessary three-dimensional model of the surface as seen by the laparoscope is created with a pattern projector. Dey et al. [75] project endoscope images on segmented surfaces for providing context and creating endoscopic panorama images (see Fig. 11). Kawamata et al. [76] visualize the anatomical context by drawing virtual objects in a larger area of the screen than endoscope images are available. Ellsmere and colleagues [77] suggest augmenting laparoscopic ultrasound images into CT slices and using segmented CT data for improved context sensing.

Fig. 11. Context sensing by texturing a segmented model [75]. © 2002 IEEE.

F. Augmented Medical Imaging Devices

Augmented imaging devices can be defined as imaging devices that allow for an augmentation of their images without a tracking system. The alignment is guaranteed by their geometry.

A construction for the overlay of fluoroscopic images on the scene has been proposed by Navab et al. [78] in 1999 (see Fig. 12). An ordinary mirror is inserted into the X-ray path of a mobile C-arm (a widespread X-ray imaging device with a C-shaped gantry). By this means it is possible to place a video camera that records light following the same path as the X-rays. Thus it is possible to register both images by estimating the homography between them without spatial knowledge of the objects in the image. The correct camera position is determined once during the construction of the system. For image fusion, one image can be transformed electronically to match the other using the estimated homography. The system provides augmented images without continuous X-ray exposure for both patient and physician. The overlay is correct until the patient moves relative to the fluoroscope. A new X-ray image has to be taken in such a case.

Fig. 12. Camera-augmented C-arm (CamC) [78]. © 1999 IEEE. (a) Principle of CamC, (b) camera image, (c) fused image, (d) fluoroscopic X-ray image.
Tomographic reflection is a subgroup of augmented imaging devices. In 2000, Masamune and colleagues [79], [80] proposed an image overlay system that displays CT slices in-situ. A semi-transparent mirror allows for a direct view on the patient as well as a view on the aligned CT slice (see Fig. 13). The viewer may move freely while the CT slice remains registered without any


Fig. 13. CT reflection [79]: Concept and prototypical setup. © 2005 IEEE.

tracking. The overlaid image is generated by a screen that is placed on top of the imaging plane of the scanner. The semi-transparent mirror is placed in the plane that halves the angle between the slice and the screen. The resulting overlay is correct from any point of view up to a similarity transform that has to be calibrated during the construction of the system. The system is restricted to a single slice per position of the patient. For any different slice, the patient has to be moved on the bed. Fischer et al. [81] have extended this principle to magnetic resonance imaging.

A similar principle has been applied to create an augmented ultrasound echography device. Stetten et al. [82], [83] proposed in 2000 the overlay of ultrasound images on the patient with a semi-transparent mirror and a small screen that is attached to the ultrasound probe (see Fig. 14). The mirror is placed on the plane that halves the angle between the screen and the B-scan plane of the ultrasonic measurements. Similarly to the reflection of CT or MRI slices, it allows for in situ visualization without tracking. In addition to real-time images, it allows for arbitrary slice views, as the ultrasound probe can be freely moved.

Fig. 14. Ultrasound augmentation by tomographic reflection: Sonic Flashlight [82], [83]. © 2000 IEEE.

G. Projections on the Patient

Lastly, we present systems augmenting data directly onto the patient. The advantage of these systems is that the images are generally visible in situ without looking through an additional device such as glasses, an HMD, a microscope, loupes, etc. As another beneficial feature, the user need not be tracked if the visualization is meant to be on the skin rather than beneath it. This also means that such a visualization can be used for multiple users. The simplicity of the system introduces certain limitations as a compromise, though.

Glossop et al. [84] suggested in 2003 a laser projector that moves a laser beam into arbitrary directions by means of controlled mirrors. Trajectories of the laser appear as lines due to the persistence-of-vision effect. The images are limited to a certain number of bright monochrome lines or dots and non-raster images. The system also includes an infrared laser for interactive patient digitization.

Sasama et al. [85] use two lasers for mere guidance. Each of these lasers creates a plane by means of a moving mirror system. The intersection of both planes is used to guide laparoscopic instruments in two ways. The intersecting lines or the laser on the patient mark the spot of interest, for instance an incision point. The laser planes can also be used for determining an orientation in space. The system manipulates the two laser planes in such a way that their intersecting line defines the desired orientation. If both lasers are projected in parallel to the instrument, the latter has the correct orientation. The system can only guide instruments to points and lines in space, but it cannot show contours or more complex structures.
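The guidance geometry of the two laser planes can be written down compactly: the direction of the guidance line is the cross product of the two plane normals, and solving the two plane equations fixes a point on the line. This is a minimal linear-algebra sketch, not the implementation of the cited system.

    import numpy as np

    def plane_intersection(n1, d1, n2, d2):
        """Intersect the planes n1.x = d1 and n2.x = d2 (non-parallel normals).
        Returns a point on the intersection line and its unit direction."""
        direction = np.cross(n1, n2)
        direction = direction / np.linalg.norm(direction)
        # Third constraint (direction . x = 0) picks one point on the line.
        A = np.vstack([n1, n2, direction])
        point = np.linalg.solve(A, np.array([d1, d2, 0.0]))
        return point, direction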


Fig. 15. Simplified relationship between technology and potential benefits. Grey color indicates in situ visualization.

III. POTENTIAL BENEFITS OF AR VISUALIZATION


The crucial question regarding new visualization paradigms
is “What can it do for us that established technology cannot?”.
AR provides an intuitive human computer interface. Since in-
tuition is difficult to measure for an evaluation we subdivide
the differences between AR and ordinary display technology
in this section into four phenomena: Image fusion, 3D interac-
tion, 3D visualization, and hand-eye coordination. Fig. 15 de-
picts a simplified relationship between these phenomena and
AR technology.

A. Extra Value From Image Fusion


Fusing registered images into the same display offers the best
of two modalities in the same view.
An extra value provided by this approach may be a better
understanding of the image by visualizing anatomical context
that has not been obvious before. This is the case for endoscopic
camera and ultrasound images, where each image corresponds
only to a small area (see Section II-E3).
Another example for additional value is displaying two phys-
ical properties in the same image that can only be seen in either
of the modalities. An example is the overlay of beta probe ac-
tivity. Wendler et al. [86] support doctors by augmenting pre-
viously measured activity emitted by radioactive tracers on the
live video of a laparoscopic camera. By this means, physicians
can directly relate the functional tissue information to the real
view showing the anatomy and instrument position.
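One way to picture this kind of fusion is a registered overlay whose per-pixel merging can be chosen freely, e.g., color-coding the functional signal only where it is significant while keeping the anatomy visible. The snippet below is a generic illustration, not the implementation of [86]; the activity map is assumed to be already resampled into the camera image.

    import cv2
    import numpy as np

    def fuse_activity(video_frame, activity_map, threshold=0.2, alpha=0.6):
        """Overlay a registered functional image (float values in [0, 1], same
        size as the frame) onto a BGR video frame where activity is high."""
        heat = cv2.applyColorMap((activity_map * 255).astype(np.uint8),
                                 cv2.COLORMAP_JET)
        out = video_frame.copy()
        mask = activity_map > threshold
        out[mask] = ((1.0 - alpha) * video_frame[mask] + alpha * heat[mask]).astype(np.uint8)
        return out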
A further advantage concerns the surgical workflow. Cur-
rently, each imaging device introduces another display into the
operating room [see Fig. 16(b)] thus the staff spends valuable
time on finding a useful arrangement of the displays. A single display integrating all data could solve this issue. Each imaging device also introduces its own interaction hardware and graphical user interface. A unified system could replace the inefficient multitude of interaction systems.

Fig. 16. Current minimally invasive spine surgery setup at the "Chirurgische Klinik", hospital of the Ludwig-Maximilians-Universität München. (a) The action takes place at a very different position than the endoscope display. (b) Each imaging device introduces another display.

B. Implicit 3D Interaction

Interaction with 3D data is a cumbersome task with 2D displays and 2D interfaces (cf. Bowman [87]). Currently, there is no best practice for three-dimensional user interfaces as opposed to 2D interfaces using the WIMP paradigm (windows, icons, menus, and pointing).

AR technology facilitates implicit viewpoint generation by matching the viewport of the eye/endoscope on real objects to the viewport on virtual objects. Changing the eye position relative to an object is a natural approach for 3D inspection.

3D user interfaces reveal their power only in tasks that cannot be easily reduced to two dimensions, because 2D user interfaces benefit from simplification by dimension reduction and the fact that they are widespread. Recent work by Traub et al. [88] suggests that navigated implant screw placement is a task that can benefit from 3D user interaction, as surgeons were able to perform drilling experiments faster with in situ visualization compared to a navigation system with a classic display. Although


the performance speed is probably not a valid metric for a medical drilling task, the experiments indicate that the surgeons had faster mental access to the spatial situation.

C. 3D Visualization

Many augmented reality systems allow for stereoscopic data representation. Stereo disparity and motion parallax due to viewpoint changes (see Section III-B) can give a strong spatial impression of structures.

In digital subtraction angiography, stereoscopic displays can help doctors to analyze the complex vessel structures [89]. Calvano et al. [90] report on positive effects of the stereoscopic view provided by a stereo endoscope for in-utero surgery. The enhanced spatial perception may also be useful in other fields.

D. Improved Hand-Eye Coordination

A differing position and orientation between image acquisition and visualization may interfere with the hand-eye coordination of the operating person. This is a typical situation in minimally invasive surgery [see Fig. 16(a)]. Hanna et al. [91] showed that the position of an endoscope display has a significant impact on the performance of a surgeon during a knotting task. Their experiments suggest the best positions of the display to be in front of the operator at the level of his or her hands.

Using in situ visualization, there is no offset between working space and visualization. No mental transformation is necessary to convert the viewed objects to the hand coordinates.

IV. CURRENT ISSUES

We have presented different systems in this paper with their history and implications. In this section we present current limiting factors for most of the presented types and approaches to solve them.

A. Registration, Tracking, and Calibration

Registration is the process of relating two or more data sets to each other in order to match their content. For augmented reality, the registration of real and virtual objects is a central piece of the technology. Maintz and Viergever [92] give a general review about medical image registration and its subclassification.

In the AR community the term tracking refers to the pose estimation of objects in real time. The registration can be computed using tracking data after an initial calibration step that provides the registration for a certain pose. This is only true if the object moves but does not change. Calibration of a system can be performed by computing the registration using known data sets, e.g., measurements of a calibration object. Tuceryan et al. [93] describe different calibration procedures that are necessary for video augmentation of tracked objects. These include image distortion determination, camera calibration, and object-to-fiducial calibration.

Tracking technology is one of the bottlenecks for augmented reality in general [10]. As an exception, this is quite different for medical augmented reality. In medical AR, the working volume and hence the augmented space is indoors, predefined, and small. Therefore, the environment, i.e., the operating room, can be prepared for the augmented reality system. Optical (infrared) tracking systems are already in use in modern operating rooms for intraoperative navigation. In orthopedics, trauma surgery, and neurosurgery, which only require a rigid body registration, available navigation systems proved to be sufficiently accurate. King et al. [29] proved in clinical studies to have overall errors in the submillimeter range for their microscope-based augmented reality system for neurosurgery. For the pose determination of the real view, infrared tracking is currently the best choice. Only augmented flexible endoscopes have to use different ways of tracking [see Section II-E2)].

As the last piece in the alignment chain of real and virtual, there is the patient registration. The transformation between image data and patient data in the tracking coordinate system has to be computed. Two possibilities may apply:

1) Rigid Patient Registration: Registration algorithms are well discussed in the community. Their integration into the surgical workflow mostly requires a trade-off between simplicity, accuracy, and invasiveness.

Registration of patient data with the AR system can be performed with fiducials that are fixed on the skin or implanted [94]. These fiducials must be touched with a tracked pointer for the registration process. Alternatively, the fiducials can be segmented in the images of a tracked endoscope rather than touching them with a tracked pointer, for usability reasons. Whereas Stefansic et al. propose the direct linear transform (DLT) to map the 3D locations of fiducials into their corresponding 2D endoscope images [95], Feuerstein et al. suggest a triangulation of automatically segmented fiducials from several views [96], [97]. Baumhauer et al. study different methods for endoscope pose estimation based on navigation aids stuck onto the prostate and propose to augment 3D transrectal ultrasound data on the camera images [98]. Using this method, no external tracking system is needed.

Especially for maxillofacial surgery, fiducials can be integrated in a reproducibly fixed geometry [29]. For spine surgery, Thoranaghatte et al. try to attach an optical fiducial to the vertebrae and use the endoscope to track it in situ [99].

Point-based registration is known to be a reliable solution in principle. However, the accuracy of a fiducial-based registration varies with the number of fiducials and the quality of measurement of each fiducial, but also with the spatial arrangement of the fiducials [100].
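The computation underlying such point-based registration is a least-squares rigid transform between corresponding fiducial positions in image space and in tracker space, classically solved in closed form with an SVD (e.g., the methods of Arun or Horn). The following is a compact generic sketch rather than the algorithm of any particular cited system.

    import numpy as np

    def rigid_registration(P_image, P_tracker):
        """Least-squares rigid transform mapping Nx3 image-space fiducials onto
        their Nx3 counterparts measured in tracker space (SVD-based)."""
        c_img, c_trk = P_image.mean(axis=0), P_tracker.mean(axis=0)
        H = (P_image - c_img).T @ (P_tracker - c_trk)
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        R = Vt.T @ D @ U.T
        t = c_trk - R @ c_img
        return R, t   # x_tracker ~ R @ x_image + t

The residual distances of the transformed fiducials are the fiducial registration error (FRE), which is routinely reported by navigation systems.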
Another approach is to track the imaging device and register the data to it. This procedure has the advantage that no fiducials have to be added to the patient while preserving high accuracy. Grzeszczuk et al. [101] and Murphy [102] use a fluoroscope to acquire intraoperative X-ray images and register them to digitally reconstructed radiographs (DRR) created from preoperative CT. This 2D–3D image registration procedure, which could also be used in principle for an AR system, has the advantage that no fiducials have to be added to the patient while keeping high accuracy. By also tracking the C-arm, its subsequent motions can be updated in the registered CT data set.

Feuerstein et al. [97], [103] augment 3D images of an intraoperative flat panel C-arm into a laparoscope. This approach is sometimes also called registration-free [104], because doctors need not perform a registration procedure. As a drawback, such


an intrinsic registration is only valid as long as the patient does not move after imaging.

Grimson et al. [42] follow a completely different approach by matching surface data of a laser range scanner to CT data of the head. For sinus surgery, Burschka et al. propose to reconstruct 3D structures using a non-tracked monocular endoscopic camera and register them to a preoperative CT data set [105]. For spine surgery, Wengert et al. describe a system that uses a tracked endoscope to achieve a photogrammetric reconstruction of the surgical scene and its registration to preoperative data [106].

2) Deformable Tissue: The approaches mentioned above model the registration as a rigid transformation. This is useful for the visualization before an intervention and for a visualization of non-deformed objects. The implicit assumption of a rigid structure is correct for bones and tissue exposed to the same forces during registration and imaging, but not for soft tissue deformed by, e.g., respiration or heart beat.

A well known example breaking this assumption is the brain shift in open brain surgery. Maurer et al. [107] show clearly that the deformation of the brain after opening the skull may result in a misalignment of several millimeters.

There are three possible directions to handle deformable anatomy:
1) Use very recent imaging data for a visualization that includes the deformation. Several groups use ultrasound images that are directly overlaid onto the endoscopic view [108]–[110].
2) Use very recent data to update a deformable model of the preoperative data. For instance, Azar et al. [111] model and predict mechanical deformations of the breast.
3) Make sure that the same forces are applied to the tissue. Active breathing control is an example for compensating deformations due to respiration [112], [113].

Baumhauer et al. [114] give a recent review on the perspectives and limitations in soft tissue surgery, particularly focusing on navigation and AR in endoscopy.

B. Time Synchronization

Time synchronization of tracking data and video images is an important issue for an augmented endoscope system. In the unsynchronized case, data from different points of time would be visualized. Holloway et al. [34] investigated the sources of error for augmented reality systems. The errors of time mismatch can rise to be the highest error sources when the camera is moving. To overcome this problem, Jacobs et al. [14] suggest methods to visualize data from multiple input streams with different latencies from only the same point of time. Sauer et al. [17] describe an augmented reality system that synchronizes tracking and video data by hardware triggering. Their software waits for the slowest component before the visualization is updated. For endoscopic surgery, Vogt [115] also uses hardware triggering to synchronize tracking and video data by connecting the S-Video signal (PAL, 50 Hz) of the endoscope system to the synchronization card of the tracking system, which can also be run at 50 Hz.

If virtual and real images do not show a relative lag, it means that the images are consistent and there is no error due to a time shift. However, there is still the visual offset to haptic senses. Ware et al. [116] investigated the effect of latency in a virtual environment with a grasping experiment. They conclude that, depending on the size of the object, the latency should ideally be as little as 50 ms. Small additions in latency may cause big decreases in performance. According to their experiment and theory, a latency of 175 ms can result in 1200 ms slower grasping than for immediate feedback. This depends on the difficulty of the task. Their experiments also showed a significantly larger percentage of error in performance with 180 ms latency than with a system that had only 80 ms latency. The experiments of our group [117] confirm these findings for a medical AR system. Therefore an optimal system should feature data synchronization and short latency. We recently proposed an easy and accurate way of measuring the latency in a video see-through system [118].

C. Error Estimation

Tracking in medical AR is mostly fiducial-based because it can guarantee a predictable quality of tracking, which is necessary for the approval of a navigation system.

For an estimation of the overall error, calibration, registration, and tracking errors have to be computed, propagated, and accumulated. Nicolau and colleagues [44] propose a registration with error prediction for endoscopic augmentation. Fitzpatrick et al. [100] compute tracking-based errors based on the spatial distribution of marker sets. Hoff et al. [18] predict the error for an HMD based navigation system.
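For point-based registration, the analysis associated with [100] gives a widely used first-order estimate of how the fiducial localization error (FLE) propagates to a target. As a reminder of its approximate form (see [100] for the derivation and exact assumptions), the expected squared target registration error at a position r depends on the number of fiducials N, the distances d_k of the target from the three principal axes of the fiducial configuration, and the RMS distances f_k of the fiducials from those axes:

    \langle \mathrm{TRE}^2(\mathbf{r}) \rangle \approx \frac{\langle \mathrm{FLE}^2 \rangle}{N} \left( 1 + \frac{1}{3} \sum_{k=1}^{3} \frac{d_k^2}{f_k^2} \right)

The formula makes explicit why both the number and the spatial arrangement of the fiducials matter.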
An online error estimation is a desirable feature, since physicians have to rely on the visualized data. In current clinical practice, navigation systems stop their visualization in case a factor that decreases the accuracy is known to the system. Instead of stopping the whole system, it would be useful to estimate the remaining accuracy and visualize it, so that a surgeon can decide in critical moments whether to carefully use the data or not.

MacIntyre et al. [119] suggest, in a non-medical setup, to predict the error empirically by an adaptive estimator. Our group [120] suggests a way of dynamically estimating the accuracy for optical tracking by modeling the physical situation. By integrating the visibility of markers for each camera into the model, a multiple camera setup for reducing the line-of-sight problem is possible, which ensures a desired level of accuracy. Nafis et al. [121] investigate the dynamic accuracy of electromagnetic (EM) tracking. The magnetic field in the tracking volume can be influenced by metallic objects, which can change the measurements significantly. Also the distance of the probe to the field generator and its speed have a strong influence on the accuracy.

Finally, it is not enough to estimate the error; the whole system has to be validated (cf. Jannin et al. [122]). Standardized validation procedures have not been used to validate the described systems in order to make the results comparable. The validation of the overall accuracy of an AR system must include the perception of the visualization. In the next section we discuss the effect of misperception in spite of mathematically correct positions in visualizations.

D. Visualization and Depth Perception

The issue of wrong depth perception has been discussed as early as 1992, when Bajura and colleagues [12] described their system. When merging real and virtual images, the relative position in depth may not be perceived correctly although all positions are computed correctly. When creating their first setup, also


Edwards et al. [28] realized that “Experimentation with intra-op- suitable for complex spatial geometry, and the virtual window
erative graphics will be a major part of the continuation of the locally covers the real view. Lerotic et al. [128] suggest to super-
project”. Drascic and Milgram [123] provide an overview of per- impose contours of the real view on the virtual image for better
ceptual issues in augmented reality system. While many prob- depth perception.
lems of early systems have already been addressed, the issue of 2) Adaption: A wrong visual depth perception can be cor-
a correct depth visualization remains unsolved. Depth cues are rected by learning if another sense can disambiguate the spatial
physical facts that the human visual system can use in order to re- constellation. The sense of proprioception provides exact infor-
fine the spatial model of the environment. These include visual mation about the position of the human body. Biocca and Rolland
stimuli such as shading but also muscular stimuli such as accom- [129] set up an experiment where the point of view of each subject
modation and convergence. Psychologists distinguish between a was repositioned with a video see-through HMD. The adaption
number of different depth cues. Cutting and Vishton review and time for hand-eye coordination is relatively short and the adap-
summarize psychologists’ research on nine of the most relevant tion is successful. Unfortunately, another adaption process is
depth cues [124] revealing the relevance of different depth cues started when the subject is exposed to the normal view again.
in comparison to each other. They identify interposition as the 3) Motion Sickness: In the worst case conflicting visual cues
most important depth cue even though it is only an ordinary qual- can cause reduced concentration, headache, nausea etc. These
ifier. This means that it can only reveal the order but not a rel- effects have been discussed in the virtual reality and psychology
ative or absolute distance. Stereo disparity and motion parallax community [130].
are the next strongest depth cues in the personal space of up to Modern theories state that the sickness is not caused by the
two meters distance in the named order. The visual system calcu- conflict of cues, but the absence of better information to keep
lates the spatial information together with the depth cues of rela- the body upright [131]. Therefore engineers should concentrate
tive size/density, accommodation, conversion, and areal perspec- on providing more information to the sense of balance (e.g., by
tive. Especially the latest one is hardly taken into account for the making the user sit, unobstructed peripheral view) rather than
space under 2 meters unless the subject is in fog or under water. reducing conflicting visual cues in order to avoid motion sick-
It is the very nature of AR to provide a view that does not rep- ness. However, motion sickness does not seem to play a big role
resent the present physical conditions while the visual system in AR: In a video see-through HMD based experiment with 20
expects natural behavior of its environment for correct depth surgeons we [117] found no indication of the above symptoms
perception. What happens if conflicting depth cues are present? even after an average performance time of 16 minutes. In the
The visual system weights the estimates according to its impor- experiment the subjects were asked to perform a pointing task
tance and personal experience [124]. while standing. The overall lag was reported to be 100 ms for
Conflicting cues can result in misperception, adaptation, and motion sickness.
1) Misperception: If there are conflicting depth cues, at least one depth cue is wrong. Since the visual system weights the depth cues together, the overall estimate will generally not be correct, even though the computer generates geometrically correct images.
Optical augmentation in particular provides different parameters for real and virtual images, resulting in possibly incompatible depth cues. The effect is described as a ghost-like visualization because of its unreal and confusing spatial relationship to the real world. The visual system is quite sensitive to such relative differences.
Current AR systems handle depth cues that are based on geometry well, for instance relative size, motion parallax, and stereo disparity. Incorrect visualization of interposition between real and virtual objects was already identified as a serious issue by Bajura et al. [12]. It has been discussed in more detail by Johnson et al. [125] for augmentation in operating microscopes, by Furmanski et al. [126] and Livingston et al. [127] for optical see-through displays, and by our group [117] for video see-through HMDs. The type of AR display makes a difference, since relative brightness plays a role in depth perception and optical see-through technology can only overlay virtual images that are brighter than the real background. Opaque superimposition of virtual objects that lie inside a real one is not recommended. Alternatives are a transparent overlay, wireframes, and a virtual window. Each possibility imposes a trade-off: a transparent overlay reduces the contrast of the virtual image, the wireframe is not

community [130]. Modern theories state that the sickness is not caused by the conflict of cues, but by the absence of better information to keep the body upright [131]. Engineers should therefore concentrate on providing more information to the sense of balance (e.g., by making the user sit, or by leaving the peripheral view unobstructed) rather than on reducing conflicting visual cues in order to avoid motion sickness. However, motion sickness does not seem to play a big role in AR: in a video see-through HMD based experiment with 20 surgeons, we [117] found no indication of the above symptoms even after an average performance time of 16 minutes. In the experiment the subjects were asked to perform a pointing task while standing. The overall lag was reported to be 100 ms for fast rendering visualizations. The peripheral field of view was not covered. For AR systems that are less immersive than this HMD-based one, and for systems with similar properties, motion sickness is therefore expected to be unlikely.

E. Visualization and Data Representation
3D voxel data cannot be displayed directly with an opaque value for each voxel, as is done for 2D bitmaps. There are three major ways of representing 3D data.
1) Slice Rendering: Slice rendering is the simplest way of rendering. Only a single slice of the whole volume is taken for visualization. Radiologists commonly examine CT or MRI data represented by three orthogonal slices intersecting a certain point. The main advantages of this visualization technique are its prevalence in medicine and its simplicity. Another advantage that should not be underestimated is the fact that the visualization defines a plane. Since one degree of freedom is fixed, distances in this plane can be perceived easily. Traub et al. [88] showed that slice representations as used in first-generation navigation systems have superior capabilities in representing the precise position of a target point. They also found that, for finding a target point, it can be more efficient to use a different representation of the data.
When two or three points of interest and their spatial relationship are to be displayed, an oblique slice can be useful. Without a tracked instrument, however, it is cumbersome to position such a plane.
The major drawback of slice rendering is that this visualization does not show any data off the plane. This is not a constraint for in-plane measuring questions such as "How far can I go with a drill?", but it is one for optimizing questions such as "In which direction would a drill be furthest from critical tissue?"
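As a minimal illustration of slice-based viewing, the following Python sketch extracts an axial slice and resamples an oblique slice from a synthetic volume; the volume and the plane parameters (origin, orientation, extent) are invented placeholders rather than values from any system discussed here.

import numpy as np
from scipy.ndimage import map_coordinates

# Synthetic stand-in for a CT/MRI volume, indexed (z, y, x).
volume = np.random.randint(0, 256, size=(64, 256, 256)).astype(np.float32)

# Axial slice: fixing one degree of freedom (here z) yields a 2D image
# in which in-plane distances can be measured directly.
axial = volume[32, :, :]

# Oblique slice: sample the plane spanned by two unit vectors u, v around
# an origin (e.g., given by a tracked instrument), with trilinear interpolation.
origin = np.array([32.0, 128.0, 128.0])                # (z, y, x) in voxel coordinates
u = np.array([0.2, 0.98, 0.0]); u /= np.linalg.norm(u)
v = np.array([0.0, 0.0, 1.0])
rows, cols = np.mgrid[-100:100, -100:100]              # plane extent in voxels
coords = (origin[:, None, None]
          + rows * u[:, None, None]
          + cols * v[:, None, None])                   # shape (3, 200, 200)
oblique = map_coordinates(volume, coords, order=1)     # interpolated oblique slice

print(axial.shape, oblique.shape)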

2) Surface Rendering: Surface rendering shows transitions between structures.
Often these transitions are segmented and converted to polygons. The desired tissue is segmented either manually, semi-automatically, or automatically, depending on the image source and the desired tissue. The surface polygons of a segmented volume can be calculated by the marching cubes algorithm [132]. Graphics cards offer hardware support for this vertex-based 3D data. This includes lighting effects based on the surface normals at only little extra computation time.
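A minimal sketch of this surface extraction step, using the marching cubes implementation in scikit-image (named marching_cubes_lewiner in older releases), is given below; the spherical test volume and the iso-level stand in for a segmented or thresholded data set.

import numpy as np
from skimage import measure

# Synthetic segmented volume: a sphere as a stand-in for a segmented structure.
z, y, x = np.mgrid[-32:32, -32:32, -32:32]
volume = (np.sqrt(x**2 + y**2 + z**2) < 20).astype(np.float32)

# Marching cubes extracts a triangle mesh at the chosen iso-level; the returned
# per-vertex normals are what allow cheap hardware lighting of the surface.
verts, faces, normals, values = measure.marching_cubes(volume, level=0.5)

print(verts.shape, faces.shape)   # (N, 3) vertex coordinates, (M, 3) triangle indices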
Recently, ray casting techniques have become fast enough on graphics cards equipped with a programmable graphics processing unit (GPU) [133]. As the surfaces need not be transformed into polygons, the images are smoother. They do not suffer from holes due to discontinuities in the image, and the sampling is optimal for a specific viewing direction. The integration of physical phenomena like refraction, reflection, and shadows is possible with this rendering technique.
As a welcome side effect of surface rendering, distances and cutting points can be computed while visualizing the surfaces.
The segmentation step is a major obstacle for this kind of visualization. Segmentation of image data is still considered a hard problem with brisk research going on. Available solutions offer automatic segmentation only for a limited number of structures. Manual and semiautomatic solutions can be time-consuming, or at least time-consuming to learn. The benefit of such a visualization based on an interactive segmentation has to justify the extra workload on the team.
3) Volume Rendering: Direct volume rendering [134] creates the visualization by following rays from a certain viewpoint through the 3D data. Depending on the source of the data and the intended visualization, different functions are available for generating a pixel from a ray. The most prominent function is the weighted sum of voxels. A transfer function assigns a color and a transparency to each voxel intensity. It may be further refined with the image gradient. A special kind of volume rendering is the digitally reconstructed radiograph (DRR), which provides projections of a CT data set that are similar to X-ray images.
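The following sketch illustrates both ideas on synthetic data: a DRR obtained by accumulating toy attenuation values along parallel rays, and a direct volume rendering that composites a simple gray-level transfer function front to back. The attenuation scaling and the transfer function are arbitrary placeholders, not values from any cited system.

import numpy as np

# Synthetic CT-like volume (z, y, x); placeholder data in rough Hounsfield-style units.
volume = np.random.normal(0.0, 200.0, size=(128, 256, 256)).astype(np.float32)

# DRR: for orthographic rays along z, an X-ray-like image follows from the
# Beer-Lambert law, i.e., from summed attenuation coefficients along each ray.
mu = np.clip((volume + 1000.0) / 1000.0, 0.0, None) * 0.02   # toy attenuation map
drr = np.exp(-mu.sum(axis=0))

# Direct volume rendering: a transfer function maps intensity to color and opacity,
# which are composited front to back along the ray (here: along the z axis).
def transfer_function(values):
    opacity = np.clip((values - 100.0) / 400.0, 0.0, 0.05)   # emphasize denser tissue
    color = np.clip((values + 200.0) / 800.0, 0.0, 1.0)      # simple gray ramp
    return color, opacity

image = np.zeros(volume.shape[1:], dtype=np.float32)
transmittance = np.ones_like(image)
for slab in volume:                                          # front-to-back compositing
    c, a = transfer_function(slab)
    image += transmittance * a * c
    transmittance *= (1.0 - a)

print(drr.shape, image.shape)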
The advantage of direct volume rendering is a visualization that can emphasize certain tissues without an explicit segmentation, given a suitable transfer function. Clear transitions between structures are not necessary. Cloudy structures and their density can also be visualized.
The major disadvantage used to be rendering that was too slow, in particular for AR, but hardware-supported rendering algorithms on current graphics cards can provide sufficient frame rates on real 3D data. Currently, 3D texture based [135] and GPU accelerated raycast [133] renderers are the state of the art in terms of speed and image quality, with the latter offering better image quality. The ray casting technique also needs clear structures in the image for acceptable results, which can be realized with contrast agents or segmentation.

F. User Interaction in Medical AR Environments
Classic 2D computer interaction paradigms such as windows, mouse pointer, menus, and keyboards do not translate well to 3D displays in general. Bowman et al. [87] give a comprehensive introduction to 3D user interfaces and detailed information on why 3D interaction is difficult. The book also gives general advice for creating new user interfaces. Reitinger et al. [136] suggest a 3D user interface for liver surgery planning. Even though the suggested planning is done in purely virtual space, the ideas apply to AR as well. They use tracked instruments and a tracked glass plane for defining points and planes in a tangible way, and they combine tangible 3D interaction and classic 2D user interfaces in an effective way.
Navab et al. [137] suggest a new paradigm for interaction with 3D data. A virtual mirror is augmented into the scene. The physician has the possibility to explore the data from any side using the mirror without giving up the registered view. Since the interaction uses a metaphor that has a very similar real counterpart, no extra learning is expected for the user.
Apart from 2D/3D issues, standard 2D computer interfaces such as mice are not suited for the OR for sterility and ergonomic reasons. Fortunately, medical systems are highly specialized for the therapy. Since a specialized application has a limited number of meaningful visualization modes, the user interface can be highly specialized as well. Context-aware systems can further reduce the degree of interaction. Automatic workflow recovery, as suggested by Ahmadi et al. [138], could detect the phases of the surgery, and with this information the computer system could offer suitable information for each phase.
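As a purely illustrative sketch of such context-aware behavior, a detected phase could simply select a predefined visualization preset; the phase names and preset parameters below are invented for illustration and are not taken from [138].

# Hypothetical mapping from a detected surgical phase to a visualization preset;
# the phase names and preset parameters are illustrative placeholders only.
PRESETS = {
    "port_placement": {"view": "volume_rendering", "overlay": "vessels", "opacity": 0.4},
    "resection":      {"view": "slice",            "overlay": "tumor_margin", "opacity": 0.8},
    "closing":        {"view": "video_only",       "overlay": None, "opacity": 0.0},
}

def configure_display(detected_phase):
    # Fall back to an unobtrusive video-only view for unrecognized phases.
    return PRESETS.get(detected_phase, PRESETS["closing"])

print(configure_display("resection"))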
V. PERSPECTIVE

After two decades of research on medical AR, the basic concepts seem to be well understood, and the enabling technologies are now sufficiently advanced to meet the basic requirements of a number of medical applications. We are encouraged by our clinical partners to believe that medical AR systems and solutions could be accepted by physicians if they are integrated seamlessly into the clinical workflow and if they provide a significant benefit at least for one particular phase of this workflow. A perfect medical AR user interface would be integrated in such a way that the user would not feel its existence, while taking full advantage of the additional in situ information it provides.
Generally, augmented optics and augmented endoscopes do not dramatically change the OR environment, apart from adding a tracking system that imposes free line-of-sight constraints, and they change the current workflow minimally and smoothly. The major issue they face is appropriate depth perception within a mixed environment, which is the subject of active research within the AR community [128], [137], [139], [140]. Augmented medical imaging devices provide aligned views by construction and do not need additional tracking systems. If they do not restrict the working space of the physicians, these systems have the advantage of a smooth integration into the medical workflow. In particular, the CAMC system is currently being deployed in three German hospitals and will soon be tested on 40 patients in each of these medical centers.
The AR window and video see-through HMD systems still need hardware and software improvements in order to satisfy the requirements of operating physicians.

In both cases, the community also needs new concepts and paradigms allowing the physicians to take full advantage of the augmented virtual data, to easily and intuitively interact with it, and to experience this dynamic mixed environment as one unique and correctly perceived 3D world. We share the expectations of business analysts [141] that the hype level of augmented reality will reach its maximum in a few years and that medical AR will be one of its first killer applications, saving the lives of many future patients.

REFERENCES

[1] H. Steinhaus, "Sur la localisation au moyen des rayons x," Comptes Rendus de l'Acad. des Sci., vol. 206, pp. 1473–1475, 1938.
[2] I. Sutherland, "A head-mounted three dimensional display," in Proc. Fall Joint Computer Conf., 1968, pp. 757–764.
[3] D. Roberts, J. Strohbehn, J. Hatch, W. Murray, and H. Kettenberger, "A frameless stereotaxic integration of computerized tomographic imaging and the operating microscope," J. Neurosurg., vol. 65, no. 4, pp. 545–549, 1986.
[4] T. Caudell and D. Mizell, "Augmented reality: An application of heads-up display technology to manual manufacturing processes," in Proc. Hawaii Int. Conf. on Syst. Sci., 1992.
[5] W. Mackay, G. Velay, K. Carter, C. Ma, and D. Pagani, "Augmenting reality: Adding computational dimensions to paper," Commun. ACM, vol. 36, no. 7, pp. 96–97, 1993.
[6] S. Feiner, B. MacIntyre, and D. Seligmann, "Knowledge-based augmented reality," Commun. ACM, vol. 36, no. 7, pp. 53–62, 1993.
[7] P. Wellner, W. Mackay, and R. Gold, "Computer augmented environments: Back to the real world," Commun. ACM, vol. 36, no. 7, pp. 24–26, 1993.
[8] P. Milgram and F. Kishino, "A taxonomy of mixed reality visual displays," IEICE Trans. Inf. Syst., pp. 1321–1329, 1994.
[9] R. T. Azuma, "A survey of augmented reality," Presence: Teleoperators and Virtual Environments, vol. 6, no. 4, pp. 355–385, 1997.
[10] R. T. Azuma, Y. Baillot, R. Behringer, S. Feiner, S. Julier, and B. MacIntyre, "Recent advances in augmented reality," IEEE Comput. Graphics Appl., vol. 21, pp. 34–47, 2001.
[11] F. P. Brooks, "The computer scientist as toolsmith II," Commun. ACM, vol. 39, no. 3, pp. 61–68, 1996.
[12] M. Bajura, H. Fuchs, and R. Ohbuchi, "Merging virtual objects with the real world: Seeing ultrasound imagery within the patient," in Proc. 19th Annu. Conf. on Computer Graphics and Interactive Techniques, 1992, pp. 203–210.
[13] M. Bajura and U. Neumann, "Dynamic registration correction in video-based augmented reality systems," IEEE Comput. Graph. Appl., vol. 15, no. 5, pp. 52–60, 1995.
[14] M. C. Jacobs, M. A. Livingston, and A. State, "Managing latency in complex augmented reality systems," in Proc. ACM 1997 Symp. on Interactive 3D Graphics, 1997, pp. 49–54.
[15] A. State, M. A. Livingston, W. F. Garrett, G. Hirota, M. C. Whitton, E. D. Pisano, and H. Fuchs, "Technologies for augmented reality systems: Realizing ultrasound-guided needle biopsies," in SIGGRAPH '96: Proc. 23rd Annu. Conf. on Computer Graphics and Interactive Techniques, New York, NY, 1996, pp. 439–446.
[16] M. Rosenthal, A. State, J. Lee, G. Hirota, J. Ackerman, E. D. Pisano, K. Keller, M. Jiroutek, K. Muller, and H. Fuchs, "Augmented reality guidance for needle biopsies: An initial randomized, controlled trial in phantoms," Medical Image Anal., vol. 6, no. 3, pp. 313–320, 2002.
[17] F. Sauer, F. Wenzel, S. Vogt, Y. Tao, Y. Genc, and A. Bani-Hashemi, "Augmented workspace: Designing an AR testbed," in Proc. IEEE and ACM Int. Symp. on Augmented Reality, 2000, pp. 47–53.
[18] W. A. Hoff and T. L. Vincent, "Analysis of head pose accuracy in augmented reality," IEEE Trans. Visualiz. Computer Graphics, vol. 6, 2000.
[19] D. L. Wright, J. P. Rolland, and A. R. Kancherla, "Using virtual reality to teach radiographic positioning," Radiologic Technology, vol. 66, no. 4, pp. 233–238, 1995.
[20] Y. Argotti, L. Davis, V. Outters, and J. Rolland, "Dynamic superimposition of synthetic objects on rigid and simple-deformable real objects," Computers & Graphics, vol. 26, no. 6, pp. 919–930, 2002.
[21] T. Sielhorst, T. Blum, and N. Navab, "Synchronizing 3D movements for quantitative comparison and simultaneous visualization of actions," in Proc. IEEE and ACM Int. Symp. on Mixed and Augmented Reality (ISMAR), 2005.
[22] G. Luo and E. Peli, "Use of an augmented-vision device for visual search by patients with tunnel vision," Investigative Ophthalmology & Visual Science, vol. 47, no. 9, pp. 4152–4159, 2006.
[23] J. P. Rolland and H. Fuchs, "Optical versus video see-through head-mounted displays in medical visualization," Presence, vol. 9, pp. 287–309, 2000.
[24] O. Cakmakci and J. Rolland, "Head-worn displays: A review," J. Display Technol., vol. 2, pp. 199–216, Sept. 2006.
[25] P. J. Kelly, G. Alker, and S. Goerss, "Computer-assisted stereotactic laser microsurgery for the treatment of intracranial neoplasms," Neurosurgery, vol. 10, pp. 324–331, 1982.
[26] J. Hatch and J. S. D. W. Roberts, "Reference-display system for the integration of CT scanning and the operating microscope," in Proc. 11th Annu. Northeast Bioeng. Conf., 1985.
[27] E. M. Friets, J. W. Strohbehn, J. F. Hatch, and D. W. Roberts, "A frameless stereotaxic operating microscope for neurosurgery," IEEE Trans. Biomed. Eng., vol. 36, no. 6, Jun. 1989.
[28] P. J. Edwards, D. D. Hill, D. D. Hawkes, and D. A. Colchester, "Neurosurgical guidance using the stereo microscope," in Proc. First Int. Conf. Computer Vision, Virtual Reality and Robotics in Medicine (CVRMed '95), 1995.
[29] A. P. King, P. J. Edwards, C. R. Maurer Jr., D. A. de Cunha, D. J. Hawkes, D. L. G. Hill, R. P. Gaston, M. R. Fenlon, A. J. Strong, C. L. Chandler, A. Richards, and M. J. Gleeson, "Design and evaluation of a system for microscope-assisted guided interventions," IEEE Trans. Med. Imag., vol. 19, no. 11, pp. 1082–1093, Nov. 2000.
[30] J. Berger and D. Shin, "Computer-vision-enabled augmented reality fundus biomicroscopy," Ophthalmology, vol. 106, no. 10, pp. 1935–1941, 1999.
[31] W. Birkfellner, M. Figl, K. Huber, F. Watzinger, F. Wanschitz, J. Hummel, R. Hanel, W. Greimel, P. Homolka, R. Ewers, and H. Bergmann, "A head-mounted operating binocular for augmented reality visualization in medicine—Design and initial evaluation," IEEE Trans. Med. Imag., vol. 21, no. 8, pp. 991–997, Aug. 2002.
[32] W. Birkfellner, K. Huber, F. Watzinger, M. Figl, F. Wanschitz, R. Hanel, D. Rafolt, R. Ewers, and H. Bergmann, "Development of the Varioscope AR—A see-through HMD for computer-aided surgery," in Proc. IEEE and ACM Int. Symp. on Augmented Reality, 2000, pp. 54–59.
[33] M. Figl, C. Ede, J. Hummel, F. Wanschitz, R. Ewers, H. Bergmann, and W. Birkfellner, "A fully automated calibration method for an optical see-through head-mounted operating microscope with variable zoom and focus," IEEE Trans. Med. Imag., vol. 24, no. 11, pp. 1492–1499, Nov. 2005.
[34] R. Holloway, "Registration error analysis for augmented reality," Presence: Teleoperators and Virtual Env., vol. 6, no. 4, pp. 413–432, 1997.
[35] Y. Masutani, M. Iwahara, O. Samuta, Y. Nishi, N. Suzuki, M. Suzuki, T. Dohi, H. Iseki, and K. Takakura, "Development of integral photography-based enhanced reality visualization system for surgical support," Proc. ISCAS, vol. 95, pp. 16–17, 1995.
[36] H. Liao, S. Nakajima, M. Iwahara, E. Kobayashi, I. Sakuma, N. Yahagi, and T. Dohi, "Intra-operative real-time 3-D information display system based on integral videography," in Proc. Int. Conf. Medical Image Computing and Computer Assisted Intervention (MICCAI), 2001, pp. 392–400.
[37] H. Liao, N. Hata, S. Nakajima, M. Iwahara, I. Sakuma, and T. Dohi, "Surgical navigation by autostereoscopic image overlay of integral videography," IEEE Trans. Inf. Technol. Biomed., vol. 8, no. 2, pp. 114–121, 2004.
[38] M. Blackwell, C. Nikou, A. M. DiGioia, and T. Kanade, "An image overlay system for medical data visualization," in Proc. First Int. Conf. Medical Image Computing and Computer-Assisted Intervention (MICCAI), W. M. Wells, A. C. F. Colchester, and S. L. Delp, Eds., Cambridge, MA, Oct. 1998, vol. 1496, pp. 232–240, Lecture Notes in Computer Science.
[39] G. Goebbels, K. Troche, M. Braun, A. Ivanovic, A. Grab, K. von Lübtow, R. Sader, F. Zeilhofer, K. Albrecht, and K. Praxmarer, "ARSyS-Tricorder—Development of an augmented reality system for intraoperative navigation in maxillofacial surgery," in Proc. IEEE and ACM Int. Symp. on Mixed and Augmented Reality (ISMAR), 2003.
[40] S. Wesarg, E. Firle, B. Schwald, H. Seibert, P. Zogal, and S. Roeddiger, "Accuracy of needle implantation in brachytherapy using a medical AR system: A phantom study," in Proc. SPIE Medical Imaging 2004: Visualization, Image-Guided Procedures, and Display, R. L. Galloway Jr., Ed., vol. 5367, pp. 341–352, 2004.
[41] W. Lorensen, H. Cline, C. Nafis, R. Kikinis, D. Altobelli, and L. Gleason, "Enhancing reality in the operating room," in Proc. IEEE Conf. on Visualization, 1993, pp. 410–415.
[42] W. E. L. Grimson, T. Lozano-Perez, W. M. Wells III, G. J. Ettinger, S. J. White, and R. Kikinis, "An automatic registration method for frameless stereotaxy, image guided surgery, and enhanced reality visualization," IEEE Trans. Med. Imag., vol. 15, no. 2, pp. 129–140, Apr. 1996.

[43] Y. Sato, M. Nakamoto, Y. Tamaki, T. Sasama, I. Sakita, Y. Nakajima, M. Monden, and S. Tamura, "Image guidance of breast cancer surgery using 3-D ultrasound images and augmented reality visualization," IEEE Trans. Med. Imag., vol. 17, no. 5, Oct. 1998.
[44] S. Nicolau, X. Pennec, L. Soler, and N. Ayache, "An accuracy certified augmented reality system for therapy guidance," in Proc. 8th Eur. Conf. on Computer Vision (ECCV '04), Prague, May 2004, vol. 3023, pp. 79–91, Lecture Notes in Computer Science.
[45] W. Lau, "History of endoscopic and laparoscopic surgery," World J. Surgery, vol. 21, no. 4, pp. 444–453, 1997.
[46] G. Litynski, "Endoscopic surgery: The history, the pioneers," World J. Surgery, vol. 23, no. 8, pp. 745–753, 1999.
[47] G. Berci and K. A. Forde, "History of endoscopy—What lessons have we learned from the past?," Surgical Endoscopy, vol. 14, pp. 5–15, 2002.
[48] W. Freysinger, A. Gunkel, and W. Thumfart, "Image-guided endoscopic ENT surgery," Eur. Archives of Otorhinolaryngology, vol. 254, no. 7, pp. 343–346, 1997.
[49] R. Shahidi, B. Wang, M. Epitaux, R. Grzeszczuk, and J. Adler, "Volumetric image guidance via a stereotactic endoscope," in Proc. First Int. Conf. Medical Image Computing and Computer-Assisted Intervention (MICCAI), W. M. Wells, A. C. F. Colchester, and S. L. Delp, Eds., Cambridge, MA, Oct. 1998, vol. 1496, pp. 241–252, Lecture Notes in Computer Science.
[50] M. Scholz, W. Konen, S. Tombrock, B. Fricke, and L. Adams, "Development of an endoscopic navigation system based on digital image processing," Computer Aided Surgery, vol. 3, pp. 134–143, 1998.
[51] F. Mourgues and È. Coste-Manière, "Flexible calibration of actuated stereoscopic endoscope for overlay in robot assisted surgery," in Proc. Int. Conf. Medical Image Computing and Computer Assisted Intervention (MICCAI), 2002, pp. 25–34.
[52] R. Khadem, M. R. Bax, J. A. Johnson, E. P. Wilkinson, and R. Shahidi, "Endoscope calibration and accuracy testing for 3D/2D image registration," in Proc. Int. Conf. Medical Image Computing and Computer-Assisted Intervention (MICCAI), 2001, pp. 1361–1362.
[53] W. E. Smith, N. Vakil, and S. A. Maislin, "Correction of distortion in endoscope images," IEEE Trans. Med. Imag., vol. 11, no. 1, pp. 117–122, Feb. 1992.
[54] S. De Buck, J. Van Cleynenbreugel, I. Geys, T. Koninckx, P. R. Koninckx, and P. Suetens, "A system to support laparoscopic surgery by augmented reality visualization," in Proc. Int. Conf. Medical Image Computing and Computer Assisted Intervention (MICCAI), 2001, pp. 691–698.
[55] R. Shahidi, M. R. Bax, C. R. Maurer Jr., J. A. Johnson, E. P. Wilkinson, B. Wang, J. B. West, M. J. Citardi, K. H. Manwaring, and R. Khadem, "Implementation, calibration and accuracy testing of an image-enhanced endoscopy system," IEEE Trans. Med. Imag., vol. 21, no. 12, pp. 1524–1535, 2002.
[56] J. Heikkilä and O. Silvén, "A four-step camera calibration procedure with implicit image correction," in Proc. IEEE Conf. Computer Vision and Pattern Recognition (CVPR), 1997, pp. 1106–1112.
[57] R. Tsai, "A versatile camera calibration technique for high accuracy 3D machine vision metrology using off-the-shelf TV cameras and lenses," IEEE J. Robot. Autom., vol. RA-3, no. 4, pp. 323–344, 1987.
[58] Z. Zhang, "A flexible new technique for camera calibration," IEEE Trans. Pattern Anal. Mach. Intell., vol. 22, pp. 1330–1334, Nov. 2000.
[59] D. Stoyanov, A. Darzi, and G. Z. Yang, "Laparoscope self-calibration for robotic assisted minimally invasive surgery," in Proc. Int. Conf. Medical Image Computing and Computer Assisted Intervention (MICCAI), J. Duncan and G. Gerig, Eds., 2005, vol. 3750, pp. 114–121, Lecture Notes in Computer Science.
[60] R. Willson, "Modeling and calibration of automated zoom lenses," Ph.D. dissertation, Robotics Inst., Carnegie Mellon Univ., Pittsburgh, PA, Jan. 1994.
[61] G. Bianchi, C. Wengert, M. Harders, P. Cattin, and G. Székely, "Camera-marker alignment framework and comparison with hand-eye calibration for augmented reality applications," in Proc. IEEE and ACM Int. Symp. on Mixed and Augmented Reality (ISMAR), pp. 188–189, Oct. 2005.
[62] S. Nicolau, L. Goffin, and L. Soler, "A low cost and accurate guidance system for laparoscopic surgery: Validation on an abdominal phantom," in Proc. ACM Symp. on Virtual Reality Software and Technology, Nov. 2005, pp. 124–133.
[63] M. Scheuering, A. Schenk, A. Schneider, B. Preim, and G. Greiner, "Intraoperative augmented reality for minimally invasive liver interventions," in Proc. SPIE Medical Imaging 2003: Visualization, Image-Guided Procedures, and Display, 2003.
[64] J. Schmidt, F. Vogt, and H. Niemann, "Robust hand-eye calibration of an endoscopic surgery robot using dual quaternions," in Proc. 25th DAGM Symp. Pattern Recognition, B. Michaelis and G. Krell, Eds., 2003, vol. 2781, pp. 548–556, Lecture Notes in Computer Science.
[65] G. Marti, V. Bettschart, J.-S. Billiard, and C. Baur, "Hybrid method for both calibration and registration of an endoscope with an active optical tracker," Computer Assisted Radiology and Surgery, 2004.
[66] T. Yamaguchi, M. Nakamoto, Y. Sato, K. Konishi, M. Hashizume, N. Sugano, H. Yoshikawa, and S. Tamura, "Development of a camera model and calibration procedure for oblique-viewing endoscopes," Computer Aided Surgery, vol. 9, no. 5, pp. 203–214, 2004.
[67] S. De Buck, F. Maes, A. D'Hoore, and P. Suetens, "Evaluation of a novel calibration technique for optically tracked oblique laparoscopes," in Proc. Int. Conf. Medical Image Computing and Computer Assisted Intervention (MICCAI), N. Ayache, S. Ourselin, and A. Maeder, Eds., Brisbane, Australia, Oct./Nov. 2007, vol. 4791, pp. 467–474, Lecture Notes in Computer Science.
[68] I. Bricault, G. Ferretti, and P. Cinquin, "Registration of real and CT-derived virtual bronchoscopic images to assist transbronchial biopsy," IEEE Trans. Med. Imag., vol. 17, no. 5, pp. 703–714, 1998.
[69] K. Mori, D. Deguchi, J. Hasegawa, Y. Suenaga, J. Toriwaki, H. Takabatake, and H. Natori, "A method for tracking the camera motion of real endoscope by epipolar geometry analysis and virtual endoscopy system," in Proc. Int. Conf. Medical Image Computing and Computer Assisted Intervention (MICCAI), 2001, vol. 2208, pp. 1–8, Lecture Notes in Computer Science.
[70] K. Mori, D. Deguchi, K. Akiyama, T. Kitasaka, C. R. Maurer Jr., Y. Suenaga, H. Takabatake, M. Mori, and H. Natori, "Hybrid bronchoscope tracking using a magnetic tracking sensor and image registration," in Proc. Int. Conf. Medical Image Computing and Computer Assisted Intervention (MICCAI), 2005, pp. 543–550.
[71] I. Wegner, M. Vetter, M. Schoebinger, I. Wolf, and H. Meinzer, "Development of a navigation system for endoluminal brachytherapy in human lungs," in Proc. SPIE Medical Imaging 2006: Visualization, Image-Guided Procedures, and Display, K. R. Cleary and R. L. Galloway Jr., Eds., Mar. 2006, vol. 6141, pp. 23–30.
[72] T. Klein, J. Traub, H. Hautmann, A. Ahmadian, and N. Navab, "Fiducial free registration procedure for navigated bronchoscopy," in Proc. Int. Conf. Medical Image Computing and Computer Assisted Intervention (MICCAI), 2007.
[73] S. Krüger, F. Vogt, W. Hohenberger, D. Paulus, H. Niemann, and C. H. Schick, "Evaluation of computer-assisted image enhancement in minimal invasive endoscopic surgery," Methods Inf. Med., vol. 43, no. 4, pp. 362–366, 2004.
[74] H. Fuchs, M. A. Livingston, R. Raskar, D. Colucci, K. Keller, A. State, J. R. Crawford, P. Rademacher, S. H. Drake, and A. A. Meyer, "Augmented reality visualization for laparoscopic surgery," in Proc. First Int. Conf. Medical Image Computing and Computer-Assisted Intervention (MICCAI), W. M. Wells, A. C. F. Colchester, and S. L. Delp, Eds., Cambridge, MA, Oct. 1998, vol. 1496, pp. 934–943, Lecture Notes in Computer Science.
[75] D. Dey, D. Gobbi, P. Slomka, K. Surry, and T. Peters, "Automatic fusion of freehand endoscopic brain images to three-dimensional surfaces: Creating stereoscopic panoramas," IEEE Trans. Med. Imag., vol. 21, no. 1, pp. 23–30, Jan. 2002.
[76] T. Kawamata, H. Iseki, T. Shibasaki, and T. Hori, "Endoscopic augmented reality navigation system for endonasal transsphenoidal surgery to treat pituitary tumors: Technical note," Neurosurgery, vol. 50, no. 6, pp. 1393–1397, 2002.
[77] J. Ellsmere, J. Stoll, D. W. Rattner, D. Brooks, R. Kane, W. M. Wells III, R. Kikinis, and K. Vosburgh, "A navigation system for augmenting laparoscopic ultrasound," in Proc. Int. Conf. Medical Image Computing and Computer Assisted Intervention (MICCAI), R. E. Ellis and T. M. Peters, Eds., 2003, pp. 184–191, Lecture Notes in Computer Science.
[78] N. Navab, M. Mitschke, and A. Bani-Hashemi, "Merging visible and invisible: Two camera-augmented mobile C-arm (CAMC) applications," in Proc. IEEE and ACM Int. Workshop on Augmented Reality, San Francisco, CA, 1999, pp. 134–141.
[79] G. Fichtinger, A. Deguet, K. Masamune, E. Balogh, G. S. Fischer, H. Mathieu, R. H. Taylor, S. J. Zinreich, and L. M. Fayad, "Image overlay guidance for needle insertion in CT scanner," IEEE Trans. Biomed. Eng., vol. 52, no. 8, pp. 1415–1424, Aug. 2005.
[80] K. Masamune, Y. Masutani, S. Nakajima, I. Sakuma, T. Dohi, H. Iseki, and K. Takakura, "Three-dimensional slice image overlay system with accurate depth perception for surgery," in Proc. Int. Conf. Medical Image Computing and Computer Assisted Intervention (MICCAI), Oct. 2000, vol. 1935, pp. 395–402, Lecture Notes in Computer Science.

[81] G. Fischer, A. Deguet, D. Schlattman, L. Fayad, S. Zinreich, R. Taylor, and G. Fichtinger, "Image overlay guidance for MRI arthrography needle insertion," Computer Aided Surgery, vol. 12, no. 1, pp. 2–4, 2007.
[82] G. D. Stetten and V. S. Chib, "Overlaying ultrasound images on direct vision," J. Ultrasound in Medicine, vol. 20, pp. 235–240, 2001.
[83] G. Stetten, V. Chib, and R. Tamburo, "Tomographic reflection to merge ultrasound images with direct vision," in Proc. IEEE Applied Imagery Pattern Recognition (AIPR) Annu. Workshop, 2000, pp. 200–205.
[84] N. Glossop and Z. Wang, "Laser projection augmented reality system for computer assisted surgery," in Proc. Int. Conf. Medical Image Computing and Computer Assisted Intervention (MICCAI), R. E. Ellis and T. M. Peters, Eds., 2003, vol. 2879, pp. 239–246, Lecture Notes in Computer Science.
[85] T. Sasama et al., "A novel laser guidance system for alignment of linear surgical tools: Its principles and performance evaluation as a man-machine system," in Proc. Int. Conf. Medical Image Computing and Computer Assisted Intervention (MICCAI), London, U.K., 2002, pp. 125–132.
[86] T. Wendler, J. Traub, S. Ziegler, and N. Navab, "Navigated three-dimensional beta probe for optimal cancer resection," in Proc. Int. Conf. Medical Image Computing and Computer Assisted Intervention (MICCAI), R. Larsen, M. Nielsen, and J. Sporring, Eds., vol. 4190 of Lecture Notes in Computer Science, Copenhagen, Denmark, Oct. 2006, pp. 561–569.
[87] D. A. Bowman, E. Kruijff, J. J. LaViola, and I. Poupyrev, 3D User Interfaces: Theory and Practice. Redwood City, CA: Addison Wesley Longman, 2004.
[88] J. Traub, P. Stefan, S.-M. Heining, T. Sielhorst, C. Riquarts, E. Euler, and N. Navab, "Hybrid navigation interface for orthopedic and trauma surgery," in Proc. Int. Conf. Medical Image Computing and Computer Assisted Intervention (MICCAI), R. Larsen, M. Nielsen, and J. Sporring, Eds., Copenhagen, Denmark, Oct. 2006, vol. 4190, pp. 373–380, Lecture Notes in Computer Science.
[89] T. Peters, C. Henri, P. Munger, A. Takahashi, A. Evans, B. Davey, and A. Olivier, "Integration of stereoscopic DSA and 3D MRI for image-guided neurosurgery," Comput. Med. Imaging Graph., vol. 18, no. 4, pp. 289–299, 1994.
[90] C. Calvano, M. Moran, L. Tackett, P. Reddy, K. Boyle, and M. Pankratov, "New visualization techniques for in utero surgery: Amnioscopy with a three-dimensional head-mounted display and a computer-controlled endoscope," J. Endourol., vol. 12, no. 5, pp. 407–410, 1998.
[91] G. Hanna, S. Shimi, and A. Cuschieri, "Task performance in endoscopic surgery is influenced by location of the image display," Ann. Surg., vol. 227, no. 4, pp. 481–484, 1998.
[92] J. B. A. Maintz and M. A. Viergever, "A survey of medical image registration," Medical Image Analysis, vol. 2, pp. 1–36, Mar. 1998.
[93] M. Tuceryan, D. S. Greer, R. T. Whitaker, D. E. Breen, C. Crampton, E. Rose, and K. H. Ahlers, "Calibration requirements and procedures for a monitor-based augmented reality system," IEEE Trans. Visualiz. Computer Graphics, vol. 1, pp. 255–273, Sept. 1995.
[94] C. R. Maurer Jr., J. M. Fitzpatrick, M. Y. Wang, R. L. Galloway Jr., R. J. Maciunas, and G. S. Allen, "Registration of head volume images using implantable fiducial markers," IEEE Trans. Med. Imag., vol. 16, no. 4, pp. 447–462, Aug. 1997.
[95] J. D. Stefansic, A. J. Herline, Y. Shyr, W. C. Chapman, J. M. Fitzpatrick, and R. L. Galloway, "Registration of physical space to laparoscopic image space for use in minimally invasive hepatic surgery," IEEE Trans. Med. Imag., vol. 19, no. 10, pp. 1012–1023, Oct. 2000.
[96] M. Feuerstein, S. M. Wildhirt, R. Bauernschmitt, and N. Navab, "Automatic patient registration for port placement in minimally invasive endoscopic surgery," in Proc. Int. Conf. Medical Image Computing and Computer Assisted Intervention (MICCAI), J. S. Duncan and G. Gerig, Eds., Palm Springs, CA, Sept. 2005, vol. 3750, pp. 287–294, Lecture Notes in Computer Science.
[97] M. Feuerstein, T. Mussack, S. M. Heining, and N. Navab, "Intra-operative laparoscope augmentation for port placement and resection planning in minimally invasive liver resection," IEEE Trans. Med. Imag., vol. 27, pp. 355–369, Mar. 2008.
[98] M. Baumhauer, T. Simpfendörfer, R. Schwarz, M. Seitel, B. Müller-Stich, C. Gutt, J. Rassweiler, H.-P. Meinzer, and I. Wolf, "Soft tissue navigation for laparoscopic prostatectomy: Evaluation of camera pose estimation for enhanced visualization," in Proc. SPIE Medical Imaging 2007: Visualization and Image-Guided Procedures, K. R. Cleary and M. I. Miga, Eds., San Diego, CA, Feb. 2007.
[99] R. U. Thoranaghatte, G. Zheng, F. Langlotz, and L.-P. Nolte, "Endoscope-based hybrid navigation system for minimally invasive ventral spine surgeries," Computer Aided Surgery, vol. 10, pp. 351–356, Sept./Nov. 2005.
[100] J. M. Fitzpatrick, J. B. West, and C. R. Maurer Jr., "Predicting error in rigid-body point-based registration," IEEE Trans. Med. Imag., vol. 14, no. 5, pp. 694–702, Oct. 1998.
[101] R. Grzeszczuk, S. Chin, R. Fahrig, H. Abassi, H. Holz, J. A. Daniel Kiml, and R. Shahidi, "A fluoroscopic X-ray registration process for three-dimensional surgical navigation," in Proc. Int. Conf. Medical Image Computing and Computer Assisted Intervention (MICCAI), Lecture Notes in Computer Science, Springer-Verlag, 2000.
[102] M. J. Murphy, "An automatic six-degree-of-freedom image registration algorithm for image-guided frameless stereotaxic radiosurgery," Medical Phys., vol. 24, no. 6, pp. 857–866, 1997.
[103] M. Feuerstein, T. Mussack, S. M. Heining, and N. Navab, "Registration-free laparoscope augmentation for intra-operative liver resection planning," in Proc. SPIE Medical Imaging 2007: Visualization and Image-Guided Procedures, K. R. Cleary and M. I. Miga, Eds., San Diego, CA, Feb. 2007.
[104] P. A. Grützner, A. Hebecker, H. Waelti, B. Vock, L.-P. Nolte, and A. Wentzensen, "Clinical study for registration-free 3D-navigation with the SIREMOBIL Iso-C mobile C-arm," Electromedica, vol. 71, no. 1, pp. 7–16, 2003.
[105] D. Burschka, M. Li, R. Taylor, and G. D. Hager, "Scale-invariant registration of monocular endoscopic images to CT-scans for sinus surgery," in Proc. Int. Conf. Medical Image Computing and Computer Assisted Intervention (MICCAI), C. Barillot, D. Haynor, and P. Hellier, Eds., 2004, vol. 3217, pp. 413–421, Lecture Notes in Computer Science.
[106] C. Wengert, P. Cattin, J. M. Duff, C. Baur, and G. Székely, "Markerless endoscopic registration and referencing," in Proc. Int. Conf. Medical Image Computing and Computer Assisted Intervention (MICCAI), R. Larsen, M. Nielsen, and J. Sporring, Eds., 2006, Lecture Notes in Computer Science.
[107] C. Maurer Jr., "Investigation of intraoperative brain deformation using a 1.5-T interventional MR system: Preliminary results," IEEE Trans. Med. Imag., vol. 17, no. 5, p. 817, Oct. 1998.
[108] M. Nakamoto, Y. Sato, M. Miyamoto, Y. Nakajima, K. Konishi, M. Shimada, M. Hashizume, and S. Tamura, "3D ultrasound system using a magneto-optic hybrid tracker for augmented reality visualization in laparoscopic liver surgery," in Proc. Int. Conf. Medical Image Computing and Computer Assisted Intervention (MICCAI), T. Dohi and R. Kikinis, Eds., 2002, vol. 2489, pp. 148–155, Lecture Notes in Computer Science.
[109] J. Leven, D. Burschka, R. Kumar, G. Zhang, S. Blumenkranz, X. D. Dai, M. Awad, G. D. Hager, M. Marohn, M. Choti, C. Hasser, and R. H. Taylor, "DaVinci canvas: A telerobotic surgical system with integrated, robot-assisted, laparoscopic ultrasound capability," in Proc. Int. Conf. Medical Image Computing and Computer Assisted Intervention (MICCAI), vol. 3749 of Lecture Notes in Computer Science, Sept. 2005, pp. 811–818.
[110] M. Feuerstein, T. Reichl, J. Vogel, A. Schneider, H. Feussner, and N. Navab, "Magneto-optic tracking of a flexible laparoscopic ultrasound transducer for laparoscope augmentation," in Proc. Int. Conf. Medical Image Computing and Computer Assisted Intervention (MICCAI), N. Ayache, S. Ourselin, and A. Maeder, Eds., Brisbane, Australia, Oct./Nov. 2007, vol. 4791, pp. 458–466, Lecture Notes in Computer Science.
[111] F. S. Azar, D. N. Metaxas, and M. D. Schnall, "Methods for modeling and predicting mechanical deformations of the breast under external perturbations," Medical Image Analysis, vol. 6, no. 1, pp. 1–27, 2002.
[112] J. W. Wong, M. B. Sharpe, D. A. Jaffray, V. R. Kini, J. M. Robertson, J. S. Stromberg, and A. A. Martinez, "The use of active breathing control (ABC) to reduce margin for breathing motion," Int. J. Radiation Oncology Biology Physics, vol. 44, pp. 911–919, July 1999.
[113] J. M. Balter, L. A. Dawson, S. Kazanjian, C. McGinn, K. K. Brock, T. Lawrence, and R. T. Haken, "Determination of ventilatory liver movement via radiographic evaluation of diaphragm position," Int. J. Radiation Oncology Biology Physics, vol. 51, pp. 267–270, Sept. 2001.
[114] M. Baumhauer, M. Feuerstein, H.-P. Meinzer, and J. Rassweiler, "Navigation in endoscopic soft tissue surgery: Perspectives and limitations," J. Endourology, vol. 22, pp. 1–16, Apr. 2008.
[115] F. Vogt, "Augmented light field visualization and real-time image enhancement for computer assisted endoscopic surgery," Ph.D. dissertation, Universität Erlangen-Nürnberg, 2005.
[116] C. Ware and R. Balakrishnan, "Reaching for objects in VR displays: Lag and frame rate," ACM Trans. Computer-Human Interaction, vol. 1, no. 4, pp. 331–356, 1994.

[117] T. Sielhorst, C. Bichlmeier, S. M. Heining, and N. Navab, "Depth perception—A major issue in medical AR: Evaluation study by twenty surgeons," in Proc. Int. Conf. Medical Image Computing and Computer Assisted Intervention (MICCAI), R. Larsen, M. Nielsen, and J. Sporring, Eds., 2006, Lecture Notes in Computer Science.
[118] T. Sielhorst, W. Sa, A. Khamene, F. Sauer, and N. Navab, "Measurement of absolute latency for video see through augmented reality," in Proc. IEEE and ACM Int. Symp. on Mixed and Augmented Reality (ISMAR), 2007.
[119] B. MacIntyre, E. M. Coelho, and S. Julier, "Estimating and adapting to registration errors in augmented reality systems," in Proc. IEEE Virtual Reality (VR), 2002.
[120] T. Sielhorst, M. A. Bauer, O. Wenisch, G. Klinker, and N. Navab, "Online estimation of the target registration error for n-ocular optical tracking systems," in Proc. Int. Conf. Medical Image Computing and Computer Assisted Intervention (MICCAI), 2007.
[121] C. Nafis, V. Jensen, L. Beauregard, and P. Anderson, "Method for estimating dynamic EM tracking accuracy of surgical navigation tools," in Proc. SPIE Medical Imaging 2006: Visualization, Image-Guided Procedures, and Display, K. R. Cleary and R. L. Galloway Jr., Eds., Mar. 2006, vol. 6141.
[122] P. Jannin, J. Fitzpatrick, D. Hawkes, X. Pennec, R. Shahidi, and M. Vannier, "Validation of medical image processing in image-guided therapy," IEEE Trans. Med. Imag., vol. 21, no. 12, pp. 1445–1449, Dec. 2002.
[123] D. Drascic and P. Milgram, "Perceptual issues in augmented reality," in Proc. SPIE Stereoscopic Displays and Virtual Reality Systems, vol. 2653, pp. 123–134, 1996.
[124] J. E. Cutting and P. M. Vishton, "Perceiving layout and knowing distances: The integration, relative potency, and contextual use of different information about depth," in Perception of Space and Motion, W. Epstein and S. Rogers, Eds., pp. 69–117, 1995.
[125] L. G. Johnson, P. Edwards, and D. Hawkes, "Surface transparency makes stereo overlays unpredictable: The implications for augmented reality," in Medicine Meets Virtual Reality (MMVR), vol. 94 of Studies in Health Technology and Informatics, J. D. Westwood, Ed., IOS Press, 2002, pp. 131–136.
[126] C. Furmanski, R. Azuma, and M. Daily, "Augmented-reality visualizations guided by cognition: Perceptual heuristics for combining visible and obscured information," in Proc. IEEE and ACM Int. Symp. on Mixed and Augmented Reality (ISMAR), 2002.
[127] M. Livingston et al., "Resolving multiple occluded layers in augmented reality," in Proc. IEEE and ACM Int. Symp. on Mixed and Augmented Reality (ISMAR), 2003, pp. 56–65.
[128] M. Lerotic, A. J. Chung, G. Mylonas, and G.-Z. Yang, "pq-space based non-photorealistic rendering for augmented reality," in Proc. Int. Conf. Medical Image Computing and Computer Assisted Intervention (MICCAI), 2007, pp. 102–109.
[129] F. A. Biocca and J. P. Rolland, "Virtual eyes can rearrange your body: Adaptation to visual displacement in see-through, head-mounted displays," Presence: Teleoperators and Virtual Env., vol. 7, pp. 262–277, 1998.
[130] E. M. Kolasinski, S. L. Goldberg, and J. H. Hiller, "Simulator sickness in virtual environments," U.S. Army Research Institute, Simulator Systems Research Unit, Technical Report 1027, 1995.
[131] G. Riccio, "An ecological theory of motion and postural instability," Ecological Psychology, vol. 3, no. 3, pp. 195–240, 1991.
[132] W. E. Lorensen and H. E. Cline, "Marching cubes: A high resolution 3D surface construction algorithm," in Proc. SIGGRAPH '87: 14th Annu. Conf. on Computer Graphics and Interactive Techniques, New York, NY, 1987, pp. 163–169.
[133] J. Krüger and R. Westermann, "Acceleration techniques for GPU-based volume rendering," in Proc. IEEE Visualization 2003, 2003.
[134] M. Levoy, "Display of surfaces from volume data," IEEE Computer Graphics and Appl., vol. 8, no. 3, pp. 29–37, 1988.
[135] A. Van Gelder and K. Kim, "Direct volume rendering with shading via three-dimensional textures," in Proc. 1996 Symp. on Volume Visualization, Piscataway, NJ, 1996, pp. 23–30.
[136] B. Reitinger, A. Bornik, R. Beichel, and D. Schmalstieg, "Liver surgery planning using virtual reality," IEEE Computer Graphics and Applications, vol. 26, no. 6, pp. 36–47, 2006.
[137] N. Navab, M. Feuerstein, and C. Bichlmeier, "Laparoscopic virtual mirror—New interaction paradigm for monitor based augmented reality," in Proc. IEEE Virtual Reality, Charlotte, NC, Mar. 2007, pp. 43–50.
[138] S.-A. Ahmadi, T. Sielhorst, R. Stauder, M. Horn, H. Feussner, and N. Navab, "Recovery of surgical workflow without explicit models," in Proc. Int. Conf. Medical Image Computing and Computer Assisted Intervention (MICCAI), 2006, pp. 420–428.
[139] C. Bichlmeier, F. Wimmer, S. M. Heining, and N. Navab, "Contextual anatomic mimesis: Hybrid in-situ visualization method for improving multi-sensory depth perception in medical augmented reality," in Proc. IEEE and ACM Int. Symp. on Mixed and Augmented Reality (ISMAR), Nov. 2007, pp. 129–138.
[140] D. Kalkofen, E. Mendez, and D. Schmalstieg, "Interactive focus and context visualization for augmented reality," in Proc. IEEE and ACM Int. Symp. on Mixed and Augmented Reality (ISMAR), 2007.
[141] C. Pettey and L. Goasduff, Emerging Technologies Hype Cycle. Stamford, CT: Gartner, 2006.

Tobias Sielhorst received the M.S. and Ph.D. degrees from Technische Universität München in 2003 and 2008, respectively. His major research interest is medical augmented reality. Dr. Sielhorst has served on the program committee of different conferences and workshops, including the IEEE and ACM International Symposium on Mixed and Augmented Reality (ISMAR), Medical Image Computing and Computer-Assisted Intervention (MICCAI), and the international workshop series AMI-ARCS on medical augmented reality. He co-organizes the latter one this year as one of the general chairs.

Marco Feuerstein received the M.S. and Ph.D. degrees in computer science from Technische Universität München (TU München) in 2003 and 2007, respectively. He received scholarships for the National University of Singapore, the University of Sydney, the National Taiwan University, and Nagoya University. From January 2004 to April 2005 he was employed at the Department of Cardiothoracic Surgery at the German Heart Center Munich. From May 2005 to November 2007, he was with the Chair for Computer Aided Medical Procedures & Augmented Reality at TU München. He is currently a postdoctoral associate in the Department of Media Science, Graduate School of Information Science, at Nagoya University. His research interests are computer aided diagnosis and surgery, medical imaging, augmented reality, and computer vision.

Nassir Navab is a full professor and director of the institute for Computer Aided Medical Procedures and Augmented Reality (CAMP) at Technische Universität München (TUM). He also has a secondary faculty appointment at the Medical School of TU München. Before joining the Computer Science Department at TU München, he was a distinguished member at Siemens Corporate Research (SCR) in Princeton, New Jersey. He received the prestigious Siemens Inventor of the Year Award in 2001 for the body of his work in interventional imaging. He received his Ph.D. from INRIA and the University of Paris XI in France and enjoyed two years of postdoctoral fellowship at the MIT Media Laboratory before joining SCR in 1994. In November 2006, Dr. Navab was elected as a member of the board of directors of the MICCAI Society. He has been serving on the Steering Committee of the IEEE Symposium on Mixed and Augmented Reality since 2001. He has served on the program committee of over 30 international conferences. He is the author of hundreds of peer-reviewed scientific papers and over 40 US and international patents. His main fields of interest include medical augmented reality, computer aided surgery, and medical image registration.
