Three-dimensional Visualization and Analysis Methodologies: A Current Perspective

Jayaram K. Udupa, PhD
INTRODUCTION
The main purpose of three-dimensional (3D) imaging is to provide both qualitative
and quantitative information about an object or object system from images obtained
with multiple modalities including digital radiography, computed tomography (CT), mag-
netic resonance (MR) imaging, positron emission tomography (PET), single photon
emission computed tomography (SPECT), and ultrasonography (US). Objects that are
studied may be rigid (eg, bones), deformable (eg, muscles), static (eg, skull), dynamic
(eg, heart, joints), or conceptual (eg, activity regions in PET, SPECT, and functional MR
imaging; isodose surfaces in radiation therapy).
Abbreviations: MIP = maximum intensity projection, PD = proton density, PET = positron emission tomography, 3D = three-dimensional, 2D = two-dimensional

Index terms: Computed tomography (CT) • Computers • Computers, simulation • Images, analysis • Images, display • Images, processing • Magnetic resonance (MR) • Single-photon emission tomography (SPECT) • Ultrasound (US)

At present, it is possible to acquire medical images in two, three, four, or even five dimensions. For example, two-dimensional (2D) images might include a digital radiograph or a tomographic section obtained with CT, MR imaging, PET, SPECT, or US; a 3D image might be used to demonstrate a volume of tomographic sections of a static object; a time sequence of 3D images of a dynamic object would be displayed in four dimensions; and an image of a dynamic object for a range of parameters (eg, MR spectroscopic images of a dynamic object) would be displayed in five dimensions.

It is not currently feasible to acquire truly realistic-looking four- and five-dimensional images; consequently, approximations are made. In most applications, the object system being investigated consists of only a few static objects. For example, a 3D MR imaging study of the head may focus on white matter, gray matter, and cerebrospinal fluid.

A textbook with a systematic presentation of 3D imaging is not currently available. However, edited works may be helpful for readers unfamiliar with the subject (1–3). The reference list at the end of this article is representative but not exhaustive.

In this article, we provide an overview of the current status of the science of 3D imaging, identify the primary challenges now being encountered, and point out the opportunities available for advancing the science. We describe and illustrate the main 3D imaging operations currently being used. In addition, we delineate major concepts and attempt to clear up some common misconceptions. Our intended audience includes developers of 3D imaging methods and software as well as developers of 3D imaging applications and clinicians interested in these applications. We assume the reader has some familiarity with medical imaging modalities and a knowledge of the rudimentary concepts related to digital images.

CLASSIFICATION OF 3D IMAGING OPERATIONS
Three-dimensional imaging operations can be broadly classified into the following categories: (a) preprocessing (defining the object system to create a geometric model of the objects under investigation), (b) visualization (viewing and comprehending the object system), (c) manipulation (altering the objects [eg, virtual surgery]), and (d) analysis (quantifying information about the object system). These operations are highly interdependent. For example, some form of visualization is essential to facilitate the other three classes of operations. Similarly, object definition through an appropriate set of preprocessing operations is vital to the effective visualization, manipulation, and analysis of the object system. We use the phrase "3D imaging" to collectively refer to these four classes of operations.

A monoscopic or stereoscopic video display monitor of a computer workstation is the most commonly used viewing medium for images. However, other media such as holography and head-mounted displays are also available. Unlike the 2D computer monitor, holography offers a 3D medium for viewing. The head-mounted display basically consists of two tiny monitors positioned in front of the eyes as part of a helmetlike device worn by the user. This arrangement creates the sensation of being free from one's natural surroundings and immersed in an artificial environment. However, the computer monitor is by far the most commonly used viewing medium, mainly because of its superior flexibility, speed of interaction, and resolution compared with other media.

Figure 1. Schematic illustrates a typical 3D imaging system.

A generic 3D imaging system is represented in Figure 1. A workstation with appropriate software implementing 3D imaging operations forms the core of the system. A wide variety of input or output devices are used depending on the application. On the basis of the core of the system (ie, independent of input or output), 3D imaging systems may be categorized as those having (a) physician display consoles provided by imaging equipment vendors, (b) image processing and visualization workstations supplied by other independent vendors, (c) 3D imaging software supplied independent of the workstation, and (d) university-based 3D imaging software (often freely available via the Internet). Systems produced by scanner manufacturers and workstation vendors usually provide effective solutions but may cost $50,000–$150,000. For users with expertise in accessing, installing, and running the software, university-based 3D imaging software is available that can provide very effective, inexpensive solutions. For
Scene: Multidimensional image; a rectangular array of voxels with assigned values.
Scene domain: Anatomic region represented by the scene.
Scene intensity: Values assigned to the voxels in a scene.
Pixel size: Length of a side of the square cross section of a voxel.
Scanner coordinate system: Origin and orthogonal axes affixed to the imaging device.
Scene coordinate system: Origin and orthogonal axes affixed to the scene (the origin is usually assumed to be the upper left corner of the first section of the scene; the axes are the edges of the scene domain that converge at the origin).
Object coordinate system: Origin and orthogonal axes affixed to the object or object system.
Display coordinate system: Origin and orthogonal axes affixed to the display device.
Rendition: 2D image depicting the object information captured in a scene or object system.

Figure 2. Drawing provides graphic representation of the basic terminology used in 3D imaging. abc = scanner coordinate system, rst = display coordinate system, uvw = object coordinate system, xyz = scene coordinate system.
aging operations: graded composition and hanging-togetherness.
Volume of Interest
Volume of interest converts a given scene into
another scene. Its purpose is to reduce the
amount of data by specifying a region of inter-
est and a range of intensity of interest.
A region of interest is specified by creating a rectangular box that delimits the scene domain in all dimensions (Fig 4a). A range of intensity of interest is specified by designating an intensity interval. Within this interval, scene intensities are transferred unaltered to the output. Outside the interval, they are set to the lower and upper limits. The range of intensity of interest is indicated as an interval on a histogram of the scene (Fig 4b). The corresponding section in the output scene is shown in Figure 4c. This operation can often reduce storage requirements for scenes by a factor of 2–5. It is advisable to use the volume of interest operation first in any sequence of 3D imaging operations.

Figure 3. Graded composition and hanging-togetherness. CT scan of the knee illustrates graded composition of intensities and hanging-togetherness. Voxels within the same object (eg, the femur) are assigned considerably different values. Despite this gradation of values, however, it is not difficult to identify the voxels as belonging to the same object (hanging-togetherness).
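As a concrete sketch, the volume of interest operation amounts to cropping the scene to the region of interest and clamping intensities to the interval of interest. The following fragment is an illustrative NumPy sketch of both steps (the function name and box convention are assumptions, not from the article):

```python
import numpy as np

def volume_of_interest(scene, box, lo, hi):
    """Reduce a scene by a region of interest and a range of intensity of interest.

    scene : 3D array of voxel intensities
    box   : ((z0, z1), (y0, y1), (x0, x1)), the rectangular region of interest
    lo, hi: intensity interval of interest; intensities inside pass unaltered,
            intensities outside are set to the nearer limit
    """
    (z0, z1), (y0, y1), (x0, x1) = box
    roi = scene[z0:z1, y0:y1, x0:x1]   # region of interest
    return np.clip(roi, lo, hi)        # range of intensity of interest
```

Cropping typically accounts for most of the storage reduction mentioned above; clamping narrows the intensity range that later operations must handle.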
The challenge in making use of the volume of interest operation is to completely automate this operation and to do so in an optimal fashion, which requires explicit delineation of objects at the outset.

Filtering
Filtering converts a given scene into another scene. Its purpose is to enhance wanted (object) information and suppress unwanted (noise, background, other object) information in the output scene. Two kinds of filters are available: suppressing filters and enhancing filters. Ideally, unwanted information is suppressed without affecting wanted information and wanted information is enhanced without affecting unwanted information.

The most commonly used suppressing filter is a smoothing operation used mainly for suppressing noise (Fig 5a, 5b). In this operation, a voxel v in the output scene is assigned an intensity that represents a weighted average of the intensities of voxels in the neighborhood of v in the input scene (4). Methods differ as to how neighborhoods are determined and how weight is assigned (5). Another commonly used method is median filtering. In this method, the voxel v in the output scene is assigned a value that simply represents the middle value (median) of the intensities of the voxels in the neighborhood of v in the input scene when the voxels are arranged in ascending order.

In another method (5), often used in processing MR images, a process of diffusion and flow is considered to govern the nature and extent of smoothing. The idea is that in regions of voxels with a low rate of change in intensity, voxel intensities diffuse and flow into neighboring regions. This process is prevented by voxels with a high rate of change in intensity. Certain parameters control the extent of diffusion that takes place and the limits of the magnitude of the rate of change in scene intensity that are considered "low" and "high." This method is quite effective in overcoming noise but sensitive enough not to suppress subtle details or blur edges.

The most commonly used enhancing filter is an edge enhancer (Fig 5c) (4). With this filter, the intensity of a voxel v in the output is the rate of change in the intensity of v in the input. If we think of the input scene as a function, then this rate of change is given by the magnitude of the gradient of the function. Because this function is not known in analytic form, various digital approximations are used for this operation. The gradient has a magnitude (rate of change) and a direction in which this change is maximal. For filtering, the direction is usually
Figure 4. Preprocessing with a volume of interest operation. (a) Head CT scan includes a specified region of interest (rectangle). (b) Histogram depicts the intensities of the scene designated in a and includes a specified intensity of interest. (c) Resulting image corresponds to the specified region of interest in a.
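The suppressing and enhancing filters described in the Filtering section can be sketched in a few lines. This is an illustrative NumPy version (a 3 × 3 × 3 median filter and a digital gradient-magnitude edge enhancer), not the cited implementations:

```python
import numpy as np

def median_filter(scene):
    """Assign each output voxel the middle value (median) of the intensities
    in its 3 x 3 x 3 neighborhood in the input scene."""
    padded = np.pad(scene, 1, mode="edge")  # replicate border voxels
    out = np.empty_like(scene)
    for z, y, x in np.ndindex(scene.shape):
        out[z, y, x] = np.median(padded[z:z + 3, y:y + 3, x:x + 3])
    return out

def edge_enhance(scene):
    """Edge enhancer: the output intensity is a digital approximation of the
    magnitude of the gradient of the input scene."""
    gz, gy, gx = np.gradient(scene.astype(float))
    return np.sqrt(gz ** 2 + gy ** 2 + gx ** 2)
```

Production implementations replace the explicit voxel loop with vectorized or incremental neighborhood updates; the loop above is kept only for clarity.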
Figure 7. Scene-based registration. (a) Three-dimensional scenes corresponding to proton-density (PD)-weighted MR images of the head obtained in a patient with multiple sclerosis demonstrate a typical preregistration appearance. The scenes were acquired at four different times. (b) Same scenes as in a after 3D registration. The progression of the disease (hyperintense lesions around the ventricles) is now readily apparent. At registration, the scenes were re-sectioned with a scene-based interpolation method to obtain sections at the same location.
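Shape-based interpolation, discussed in this section, can be sketched as follows: convert each binary section to a signed distance map, interpolate the maps, and re-threshold at 0. This toy NumPy version uses a brute-force distance to the nearest voxel of the opposite class (adequate only for tiny arrays, and assuming each section contains both classes) in place of a proper distance transform:

```python
import numpy as np

def signed_distance(binary):
    """Signed distance map: 1 voxels get the (positive) distance to the
    nearest 0 voxel; 0 voxels get the negative distance to the nearest
    1 voxel. Assumes both classes are present."""
    ones = np.argwhere(binary == 1)
    zeros = np.argwhere(binary == 0)
    dist = np.empty(binary.shape, dtype=float)
    for idx in np.ndindex(binary.shape):
        p = np.array(idx)
        if binary[idx]:
            dist[idx] = np.min(np.linalg.norm(zeros - p, axis=1))
        else:
            dist[idx] = -np.min(np.linalg.norm(ones - p, axis=1))
    return dist

def interpolate_midway(section_a, section_b):
    """Binary section halfway between two sections: average the signed
    distance maps and threshold at 0."""
    mid = 0.5 * (signed_distance(section_a) + signed_distance(section_b))
    return (mid >= 0).astype(np.uint8)
```

Averaging the distance maps, rather than the binary values themselves, is what lets the interpolated section follow the shape of the object instead of its raw intensities.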
represented by the 1 voxels (the object) is then used to create an output binary scene with a similar shape (9,10) by way of interpolation. This is done by first converting the binary scene back into a (gray-valued) scene by assigning every voxel in this scene a value that represents the shortest distance between the voxel and the boundary between the 0 voxels and the 1 voxels. The 0 voxels are assigned a negative distance, whereas the 1 voxels are assigned a positive distance. This scene is then interpolated with a scene-based technique and is subsequently converted back to a binary scene by setting a threshold at 0. At the other extreme, the shape of the intensity profile of the input scene is itself considered an object to be used to guide interpolation so that this shape is retained as faithfully as possible in the output scene (11). For example, in the interpolation of a 2D scene with this method, the scene is converted into a 3D surface of intensity profile wherein the height of the surface represents pixel intensities. This (binary) object is then interpolated with a shape-based method. Several methods exist between these two extremes (12,13). The shape-based methods have been shown to produce more accurate results (8–11) than most of the commonly used scene-based methods.

Figure 6 demonstrates binary shape-based interpolation of an image derived from CT data at coarse and fine levels of discretization. The original 3D scene was first assigned a threshold to create a binary scene. This binary scene was then interpolated at coarse (Fig 6a) and fine (Fig 6b) levels and surface rendered.

The challenge in interpolation is to identify specific object information and incorporate it into the process. With such information, the accuracy of interpolation can be improved.

Registration
Registration takes two scenes or objects as input and outputs a transformation that, when applied to the second scene or object, matches it as closely as possible to the first. Its purpose is to combine scene or object information from multiple modalities and protocols to determine change, growth, motion, and displacement of objects as well as aid in object identification. Registration may be either scene-based or object-based.

Scene-based Registration.
To match two scenes, a rigid transformation made with translation and rotation (and often scaling) is calculated for one scene S2 such that the intensity pattern of the transformed scene matches that of the first scene S1 as closely as possible (Fig 7) (14). Methods differ with respect to the matching criterion used and the means of
determining which of the infinite number of possible translations and rotations are optimal (15). Scene-based registration methods are also available for cases in which objects undergo elastic (nonrigid) deformation (16).

Object-based Registration.
In object-based registration, two scenes are registered on the basis of object information extracted from the scenes. Ideally, the two objects should be as similar as possible. For example, to match 3D scenes of the head obtained with MR imaging and PET, one may use the outer skin surface of the head as computed from each scene and match the two surfaces (17). Alternatively (or in addition), landmarks such as points, curves, or planes that are observable in and computable from both scenes as well as implanted objects may be used (18–20). Optimal translation and rotation parameters for matching the two objects are determined by minimizing some measure of distance between the two (sets of) objects. Methods differ as to how distances are defined and optimal solutions are computed.

Rigid object-based registration is illustrated in Figure 8. In contrast, deformable matching operations can also be used on objects (21,22). These operations may be more appropriate than rigid matching for nonrigid soft-tissue structures. Typically, a global approximate rigid matching operation is performed, followed by local deformations for more precise matching. Deformable registration is also used to match computerized brain atlases to brain scene data obtained in a given patient (23). Initially, some object information has to be identified in the scene. This procedure has several potential applications in functional imaging, neurology, and neurosurgery as well as in object definition per se.

The challenge in registration is that scene-based methods require that the intensity patterns in the two scenes be similar. This is often not the case, however. Converting scenes into fuzzy (nonbinary) object descriptions that retain object gradation can potentially overcome this but may still retain the strength of scene-based methods. Deformable fuzzy object matching seems natural and appropriate in most situations but will require the development of fuzzy mechanics theory and algorithms.

Segmentation
From a given set of scenes, segmentation outputs computer models of object information captured in the scenes. Its purpose is to identify and delineate objects. Segmentation consists of two related tasks: recognition and delineation.

Recognition.
Recognition consists of roughly determining the whereabouts of an object in the scene. In Figure 3, for example, recognition involves determining that "this is the femur" and "this is the patella." This task does not involve the precise specification of the region occupied by the object.

Recognition may be accomplished either automatically or with human assistance. In automatic (knowledge- and atlas-based) recognition, artificial intelligence methods are used to represent knowledge about objects and their relationships (24–26). Preliminary delineation is usually needed in these methods to extract object components and to form and test hypotheses related to whole objects.

A carefully created atlas consisting of a complete description of the geometry and interrelationships of objects is used (16,27,28). Some delineation of object components in the given scene is necessary. This information is used to determine the mapping necessary to transform voxels or other geometric elements from the scene space to the atlas. Conversely, the information is also used to deform the atlas so that it matches the delineated object components in the scene.

In human-assisted recognition, simple assistance is often sufficient to help solve a segmentation problem. This assistance may take several forms: for example, specification of several
Figure 12. Clustering. (a) Sections from an MR imaging scene with T2 (top) and PD (bottom) values assigned to voxels. (b) Scatter plot of the sections in a. A cluster outline for cerebrospinal fluid is indicated. (c) Segmented binary section demonstrates cerebrospinal fluid.
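Thresholding (Fig 11) and clustering in a T2, PD feature space (Fig 12) both reduce to a few lines of array code. The sketch below is illustrative NumPy, with a rectangular region in feature space standing in for the hand-drawn cluster outline of Figure 12b:

```python
import numpy as np

def threshold(scene, lower, upper=np.inf):
    """A voxel belongs to the object if its intensity lies between the lower
    and upper thresholds. If the object is the brightest in the scene
    (eg, bone in CT scans), only the lower threshold is needed."""
    return ((scene >= lower) & (scene <= upper)).astype(np.uint8)

def segment_by_cluster(t2, pd, t2_range, pd_range):
    """Clustering: a voxel is kept if its (T2, PD) feature pair falls inside
    the delineated cluster, simplified here to a rectangle in feature space."""
    in_t2 = (t2 >= t2_range[0]) & (t2 <= t2_range[1])
    in_pd = (pd >= pd_range[0]) & (pd <= pd_range[1])
    return (in_t2 & in_pd).astype(np.uint8)
```

Note that thresholding is the special case of clustering with a single feature; both are hard, region-based methods in the taxonomy used in this section.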
to three times faster and statistically significantly more repeatable than manual tracing (47). Its 3D version (48) is about 3–15 times faster than manual tracing. Note that, in this method, recognition is manual but delineation is automatic. To our knowledge, no fuzzy, boundary-based, human-assisted methods have been described in the literature.

The most commonly used hard, region-based, automatic method of segmentation is thresholding (Fig 11). A voxel is considered to belong to the object region if its intensity is at an upper or lower threshold or between the two thresholds. If the object is the brightest in the scene (eg, bone in CT scans), then only the lower threshold needs to be specified. The threshold interval is specified with a scene intensity histogram in Figure 11b, and the segmented object is shown as a binary scene in Figure 11c.

Another commonly used method is clustering (Fig 12). If, for example, multiple values associated with each voxel are determined (eg, T2 and PD values), then a 2D histogram (also known as a scatter plot) represents a plot of the number of voxels in the given 3D scene for each possible value pair. The 2D histogram of all possible value pairs is usually referred to as a feature space. The idea in clustering is that feature values corresponding to the objects of interest cluster together in the feature space. Therefore, to segment an object, one need only identify and delineate this cluster. In other words, the problem of segmenting the scene becomes the problem of segmenting the 2D scene representing the 2D histogram. In addition to T2 and PD values, it is possible to use computed values such as the rate of change in T2 and PD for
Figure 15. Fuzzy connected segmentation. (a, b) Sections from an MR imaging scene with T2 (a) and PD (b) values assigned to voxels. (c–e) Sections created with 3D fuzzy connected segmentation demonstrate the union of white matter and gray matter objects (c), the cerebrospinal fluid object (d), and the union of multiple sclerosis lesions (e) detected from the scene in a and b.
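The fuzzy connected segmentation illustrated in Figure 15 grows an object from a seed voxel: the strength of a path is the smallest affinity along it, and a voxel's fuzzy connectedness to the seed is the strength of its best path. The toy sketch below uses a hand-rolled intensity-similarity affinity and a Dijkstra-like propagation; it illustrates the idea only and is not the published algorithm:

```python
import heapq
import numpy as np

def fuzzy_connected_object(scene, seed, strength=0.5):
    """Threshold the fuzzy connectedness map grown from `seed`.

    Affinity between adjacent voxels decays with their intensity difference
    (an assumed toy affinity). Connectedness of a voxel is the max over
    paths from the seed of the min affinity along the path.
    """
    scale = max(float(scene.max()) - float(scene.min()), 1.0)
    conn = np.zeros(scene.shape)
    conn[seed] = 1.0
    heap = [(-1.0, seed)]
    while heap:
        neg, v = heapq.heappop(heap)
        if -neg < conn[v]:
            continue  # stale heap entry
        for axis in range(scene.ndim):
            for step in (-1, 1):
                w = list(v)
                w[axis] += step
                if not 0 <= w[axis] < scene.shape[axis]:
                    continue
                w = tuple(w)
                affinity = 1.0 - abs(float(scene[v]) - float(scene[w])) / scale
                s = min(-neg, affinity)  # strength of the extended path
                if s > conn[w]:
                    conn[w] = s
                    heapq.heappush(heap, (-s, w))
    return (conn >= strength).astype(np.uint8)
```

The best-first propagation is one example of the algorithmic simplification alluded to in the text: it avoids enumerating paths explicitly, which would be computationally impractical.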
between any two voxels in the pool is greater than a small threshold value (typically about 0.1) and the strength between any two voxels (only one of which is in the pool) is less than the threshold value. Obviously, computing fuzzy objects even for this simple affinity function is computationally impractical if we proceed straight from the definitions. However, the theory allows us to simplify the complexity considerably for a wide variety of affinity relations so that fuzzy object computation can be done in practical time (about 15–20 minutes for a 256 × 256 × 64 3D scene [16 bits per voxel] on a SPARCstation 20 workstation [Sun Microsystems, Mountain View, Calif]). A wide spectrum of application-specific knowledge of image characteristics can be incorporated into the affinity relation.

Figure 15 shows an example of fuzzy connected segmentation (in 3D) of white matter, gray matter, cerebrospinal fluid, and multiple sclerosis lesions in a T2, PD scene pair. Figure 16a shows an MIP rendition of an MR angiography data set, whereas Figure 16b demonstrates a rendition of a 3D fuzzy connected vessel tree detected from a point specified on the vessel.

There are a number of challenges associated with segmentation, including (a) developing general segmentation methods that can be easily and quickly adapted to a given application, (b) keeping human assistance required on a per scene basis to a minimum, (c) developing fuzzy methods that can realistically handle uncertainties in data, and (d) assessing the efficacy of segmentation methods.

VISUALIZATION
Visualization operations create renditions of given scenes or object systems. Their purpose is to create renditions from a given set of scenes or objects that facilitate the visual perception of object information. Two approaches are available: scene-based visualization and object-based visualization.

Scene-based Visualization
In scene-based visualization, renditions are created directly from given scenes. Within this approach, two further subclasses may be identified: section mode and volume mode.

Section Mode.
Methods differ as to what constitutes a section and how this information is displayed. Natural sections may be axial, coronal, or sagittal; oblique or curved sections are also possible. Information is displayed as a montage with use of roam-through (fly-through) and gray scale and pseudocolor. Figure 17 shows a montage display of the natural sections of a CT
Figures 18, 19. (18) Three-dimensional display-guided extraction of an oblique section from CT data obtained in a patient with a craniofacial disorder. A plane is selected interactively by means of the 3D display to indicate the orientation of the section plane (left). The section corresponding to the oblique plane is shown on the right. (19) Pseudocolor display. (a) Head MR imaging sections obtained at different times are displayed in green and red, respectively. Where there is a match, the composite image appears yellow. Green and red areas indicate regions of mismatch. (b) On the same composite image displayed after 3D scene-based registration, green and red areas indicate either a registration error or a change in an object (eg, a lesion) over the time interval between the two acquisitions.
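The pseudocolor display of Figure 19 can be sketched as a red-green composite: one registered section drives the red channel and the other the green, so matching areas trend toward yellow and mismatches show as red or green. An illustrative NumPy fragment, not the article's implementation:

```python
import numpy as np

def red_green_composite(section_a, section_b):
    """Composite two registered sections: R = first section, G = second,
    B = 0. Equal normalized intensities mix toward yellow; differences
    remain predominantly red or green."""
    a = section_a.astype(float) / max(float(section_a.max()), 1.0)
    b = section_b.astype(float) / max(float(section_b.max()), 1.0)
    return np.stack([a, b, np.zeros_like(a)], axis=-1)  # H x W x RGB
```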
scene. Figure 18 demonstrates a 3D display-guided extraction of an oblique section from a CT scene of a pediatric patient's head. This re-sectioning operation illustrates how visualization is needed to perform visualization itself.

Figure 19 illustrates pseudocolor display with two sections from a brain MR imaging study in a patient with multiple sclerosis. The two sections, which represent approximately the same location in the patient's head, were taken from 3D scenes that were obtained at different times and subsequently registered. The sections are assigned red and green hues. The display shows yellow (produced by a combination of red and green hues) where the sections match perfectly or where there has been no change (for example, in the lesions). At other places, either red or green is demonstrated.

Volume Mode.
In volume mode visualization, information may be displayed as surfaces, interfaces, or intensity distributions with use of surface rendering, volume rendering, or MIP. A projection technique is always needed to move from the higher-dimensional scene to the 2D screen of the monitor. For scenes of four or more dimensions, 3D cross sections must first be determined, after which a projection technique can be applied to move from 3D to 2D. Two approaches may be used: ray casting (34), which consists of tracing a line perpendicular to the viewing plane from every pixel in the viewing plane into the scene domain, or voxel projection (72), which consists of directly projecting voxels encountered along the projection line from the scene onto the viewing plane (Fig 20). Voxel projection is generally considerably faster than ray casting; however, either of these projection methods may be used with any of the three rendering techniques (MIP, surface rendering, volume rendering).

In MIP, the intensity assigned to a pixel in the rendition is simply the maximum scene intensity encountered along the projection line (Fig 16a) (73,74). MIP is the simplest of all 3D rendering techniques. It is most effective when the objects of interest are the brightest in the scene and have a simple 3D morphology and a minimal gradation of intensity values. Contrast material-enhanced CT angiography and MR angiography are ideal applications for this method; consequently, MIP is commonly used in these applications (75,76). Its main advantage is that it requires no segmentation. However, the ideal conditions mentioned earlier frequently go unfulfilled, due (for example) to the presence of other bright objects such as clutter from surface coils in MR angiography, bone in CT angiography, or other obscuring vessels that may not be of interest. Consequently, some segmentation eventually becomes necessary.

In surface rendering (77), object surfaces are portrayed in the rendition. A threshold interval must be specified to indicate the object of interest in the given scene. Clearly, speed is of the utmost importance in surface rendering because the idea is that object renditions are created interactively directly from the scene as the threshold is changed. Instead of thresholding, any automatic, hard, boundary- or region-based method can be used. In such cases, however, the parameters of the method will have to be specified interactively, and the speed of segmentation and rendition must be sufficient to make this mode of visualization useful. Although rendering based on thresholding can presently be accomplished in about 0.03–0.25 seconds on a Pentium 300 with use of appropriate algorithms in software (61), more sophisticated segmentation methods (eg, kNN) may not offer interactive speed.

The actual rendering process consists of three basic steps: projection, hidden part removal, and shading. These steps are needed to impart a sense of three-dimensionality to the rendered image that is created. Additional cues for three-dimensionality may be provided by techniques such as stereoscopic display, motion parallax by rotation of the objects, shadowing, and texture mapping.

If ray casting is used as the method of projection, hidden part removal is performed by stopping at the first voxel encountered along each ray that satisfies the threshold criterion (78). The value (shading) assigned to the pixel in the viewing plane that corresponds to the ray is determined as described later. If voxel projection is used, hidden parts can be removed by projecting voxels from the farthest to the closest (with respect to the viewing plane) and always overwriting the shading value, which can be achieved in a number of computationally efficient ways (72,79–81).
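For the simplest projection geometry (orthographic rays along one axis of the scene array), MIP and threshold-based hidden part removal are one-liners in array code. An illustrative NumPy sketch:

```python
import numpy as np

def mip(scene, axis=0):
    """Maximum intensity projection: each rendition pixel receives the
    maximum scene intensity encountered along its projection line."""
    return scene.max(axis=axis)

def first_hit_depth(scene, threshold, axis=0):
    """Ray casting with hidden part removal: stop at the first voxel along
    each ray that satisfies the threshold criterion; -1 marks rays with
    no hit. The recorded depth can then be fed to a shading step."""
    mask = scene >= threshold
    depth = mask.argmax(axis=axis)  # index of first voxel meeting the criterion
    return np.where(mask.any(axis=axis), depth, -1)
```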
may also make calculations from front to back, which has actually been shown to be faster (35). In volume rendering (as in surface rendering), voxel projection is substantially faster than ray casting. Figure 21 shows the CT knee data set illustrated in Figure 3 as rendered with this method. Three types of tissue (bone, fat, and soft tissue) have been identified.

Object-based Visualization
In object-based visualization, objects are first explicitly defined and then rendered. In difficult segmentation situations, or when segmentation is time consuming or involves too many parameters, it is impractical to perform direct scene-based rendering. The intermediate step of completing object definition then becomes necessary.

Surface Rendering.
Surface rendering methods take hard object descriptions as input and create renditions. The methods of projection, hidden-part removal, and shading are similar to those described for scene-based surface rendering, except that a variety of surface description methods have been investigated using voxels (72,79,81), points, voxel faces (29,80,85,86), triangles (30,37,87), and other surface patches. Therefore, projection methods that are appropriate for specific surface elements have been developed. Figure 22a shows a rendition, created with use of voxel faces on the basis of CT data, of the craniofacial skeleton in a patient with agnathia. Figure 8 shows renditions of the bones of the foot created by way of the same method on the basis of MR imaging data.

Volume Rendering.
Volume rendering methods take as input fuzzy object descriptions, which are in the form of a set of voxels wherein values for objectness and a number of other parameters (eg, gradient magnitude) are associated with each voxel (35). Because the object description is more compact than the original scene and additional information for increasing computation speed can be stored as part of the object description, volume rendering based on fuzzy object description can be performed at interactive speeds even on personal computers such as the Pentium 300 entirely in software. In fact, the rendering speed (2–15 seconds) is now comparable to that of scene-based volume rendering with specialized hardware engines. Figure 22b shows a fuzzy object rendition of the data set in Figure 22a. Figure 23a shows a rendition of craniofacial bone and soft tissue, both of which were defined separately with use of the fuzzy connected methods described earlier. Note that if one uses a direct scene-based volume rendering method with the opacity function illustrated in Figure 13, the skin becomes indistinguishable from other soft tissues and always obscures the rendition of muscles (Fig 23b).

Misconceptions in Visualization
Several inaccurate statements concerning visualization frequently appear in the literature. The following statements are seen most often:

"Surface rendering is the same as thresholding." Clearly, thresholding is only one (indeed, the simplest) of the many available hard region- and boundary-based segmentation methods, the output of any of which can be surface rendered.

"Volume rendering does not require segmentation." Although volume rendering is a general term and is used in different ways, the statement is false. The only useful volume rendering or visualization technique that requires no segmentation is MIP. The opacity assignment schemes illustrated in Figure 13 and described in the section entitled "Scene-based Visualization" are clearly fuzzy segmentation strategies and involve the same problems that are encountered with any segmentation method. It is untenable to hold that opacity functions such as the one shown in Figure 13 do not represent segmentation while maintaining that the manifestation that results when t1 = t2 and t3 = t4 (corresponding to thresholding) does represent segmentation.

(and, therefore, visualization operations) can be applied in many different sequences to achieve the desired result. For example, the filtering-interpolation-segmentation-rendering sequence may produce renditions that are significantly different from those produced by interpolation-segmentation-filtering-rendering. With the large number of different methods possible for each operation and the various parameters associated with each operation, there are myriad ways of achieving the desired results. Figure 24 shows five images derived from CT data that were created by performing different operations. Systematic study is needed to determine which combination of operations is optimal for a given application. Normally, the fixed combination provided by the 3D imaging system is assumed to be the best for that application. Second, objective comparison of visualization methods becomes an enormous task in view of the vast number of ways one may reach the desired goal. A third challenge is achieving realistic tissue display that includes color, texture, and surface properties.
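A trapezoidal opacity function over four intensity parameters t1 ≤ t2 ≤ t3 ≤ t4, of the kind discussed above in connection with Figure 13, makes the point concrete: the assignment is itself a fuzzy segmentation, and collapsing t1 = t2 and t3 = t4 degenerates it to hard thresholding. A hypothetical NumPy sketch of such a function (the exact shape of the published opacity curves is an assumption here):

```python
import numpy as np

def trapezoid_opacity(intensity, t1, t2, t3, t4):
    """Opacity ramps from 0 to 1 over [t1, t2], stays 1 over [t2, t3], and
    falls back to 0 over [t3, t4]. With t1 = t2 and t3 = t4 this approaches
    a hard threshold on [t2, t3], ie, thresholding as a special case."""
    x = np.asarray(intensity, dtype=float)
    up = np.clip((x - t1) / max(t2 - t1, 1e-9), 0.0, 1.0)
    down = np.clip((t4 - x) / max(t4 - t3, 1e-9), 0.0, 1.0)
    return np.minimum(up, down)
```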