Abstract—Recent advances in medical imaging have resulted in the introduction of dual-modality scanners that can simultaneously acquire two independent imaging modalities, such as positron emission tomography (PET) and computed tomography (CT) image data, in a single session. These multi-dimensional PET-CT data contain both the functional and anatomical information of the human body, providing, for example, the ability to identify anatomical structures of interest and then overlay the result onto the corresponding functional structure. The utilization of the combined functional and anatomical information has been proven to be an effective approach in the diagnosis and interpretation of certain medical conditions. However, the increase in image dimensions has not been accompanied by new visualization techniques, with two-dimensional (2D) displays remaining the norm. In this study, we propose a new approach to three-dimensional (3D) visualization of dual-modality PET-CT data in order to complement 2D visualization and potentially improve medical diagnosis and interpretation. We have designed and implemented a prototype visualization technique using real-time volume rendering and image fusion running on a commodity graphics card. We further propose the application of interactive segmentation of 3D anatomical structures from CT data, which can be used to identify the corresponding functional structures in volume visualization.

I. INTRODUCTION

Dual-modality medical imaging scanners such as PET-CT permit the simultaneous acquisition of two independent 3D image data sets, of functional and anatomical images respectively. These four-dimensional (4D) dual-modality imaging data have introduced significant improvements in medical diagnosis and interpretation of the human body [1-2]. Conventional approaches to visualizing multi-modality medical data are to navigate through 2D cross-sectional slices in orthogonal views, which requires physicians to perform sequential examinations of images and mentally reconstruct the 3D structures. However, such 2D visualization approaches applied to 4D data are inefficient and may not fully utilize the visualization possibilities of the 4D data, such as navigational 3D rendering.

A common supplement to visualizing multi-dimensional medical data is volume rendering. Volume rendering is a method of displaying 3D data in a 2D format by using ray-tracing, and is able to produce renditions of complex and highly detailed medical data [3]. The ability to interactively navigate the rendered volume by "flying" the viewing window through the volume data is rapidly becoming a very attractive method for many medical imaging applications, including image-guided surgery and computer-aided diagnosis [1, 4-6].

In volume visualization of PET-CT data, the two separate imaging data sets are often fused into a single data set prior to volume rendering [4,5]. Alternatively, the two data sets can be individually rendered using different rendering techniques, such as surface and volume rendering, prior to fusion [5,7]. Whereas fused volume rendering permits the adjustment of the fusion ratio, the ability to apply volume manipulation tools, such as brightness/contrast and lookup table (LUT) manipulation, is often limited to the volume rendition of the fused data. Therefore, manipulation of only a particular modality in fused PET-CT must be performed as a pre-process prior to volume rendering.

A further complication in volume visualization of PET-CT data is image segmentation. Often in medical imaging applications, the identification of the structures of interest and subsequent segmentation of these structures are necessary [4,8]. Image segmentation refers to the process of partitioning an image into distinct regions by grouping together voxels based on a certain similarity criterion. With PET-CT data, the two different modalities can
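The fused volume rendering discussed above reduces, per voxel, to a linear blend of the two co-registered data sets weighted by the fusion ratio. A minimal sketch in Python, with a hypothetical helper name (production systems evaluate the same blend per fragment on the GPU):

```python
def fuse_volumes(pet, ct, pet_ratio=0.5):
    """Linearly blend two co-registered volumes voxel by voxel.

    pet, ct   -- nested lists [slice][row][column] of intensities in [0, 1]
    pet_ratio -- fusion ratio: 1.0 shows only PET, 0.0 shows only CT
    """
    return [[[pet_ratio * p + (1.0 - pet_ratio) * c
              for p, c in zip(pet_row, ct_row)]
             for pet_row, ct_row in zip(pet_slice, ct_slice)]
            for pet_slice, ct_slice in zip(pet, ct)]

# A toy 1x1x2 volume fused at 70% PET / 30% CT, the ratio used in
# Fig. 3(d) of this paper.
pet = [[[1.0, 0.5]]]
ct = [[[0.0, 0.5]]]
fused = fuse_volumes(pet, ct, pet_ratio=0.7)
print(fused)
```

Because the blend is computed independently at every voxel, adjusting `pet_ratio` interactively only re-weights the two source intensities and does not require re-loading either volume.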
II. METHODS

The flowchart of the proposed real-time volume rendering visualization technique with interactive segmentation is shown in Fig. 1, exemplified with whole-body PET-CT data.

A. Overview of PET-CT Visualization and Manipulations

In the proposed visualization technique, PET and CT data can be volume rendered individually or fused together with fusion ratio adjustments in real-time. The rendered volume can be interactively navigated using a combination of rotation, scaling, and translation. In order to increase the responsiveness of the volume rendering to user movements, the sampling rate, measured as the number of samples used for volume rendering from a voxel in the data set, can be lowered. A further utility of our visualization technique is intensity-based thresholding segmentation of the CT data in real-time volume rendering. Thresholding of the CT can potentially permit the segmentation of anatomical structures.

B. Real-Time Volume Rendering Implementation

A prototype of the PET-CT visualization technique has been developed using the SGI OpenGL Volumizer application programming interface (API) [11]. OpenGL Volumizer is a C++ programming library that provides implementations of visualization algorithms optimally designed for volume rendering, utilizing the high-level texture-mapping capabilities of commodity graphics cards. It also provides the flexibility of direct interfacing with the graphics hardware. The texture-mapping volume rendering technique refers to creating parallel planes through the volume of the 3D data, in the principal direction most perpendicular to the viewer's line of sight. The planes are then drawn back to front with appropriate 3D texture co-ordinates. To simultaneously render two separate volumes, a fusion of the volumes is required which preserves the depth information and image characteristics of the different imaging modalities. We utilize the hardware-based per-voxel fusion method [12], which uses a single
ray-cast to perform the fusion calculation during the rendering of each voxel. Such an approach has the advantage of producing interactive volume rendering performance. The intensity-based thresholding is performed by interactively adjusting the opacity function of the LUT. All other volume manipulations are implemented using the Volumizer API.

III. RESULTS

The clinical whole-body PET-CT data were acquired on a SIEMENS Biograph LSO scanner. The images have 16-bit resolution, and the numbers of imaging slices for PET and CT are 263 and 262, respectively. The PET data are 128 × 128 in size with voxel dimensions of 5.148 × 5.148 × 3.38 mm, and the CT data are 512 × 512 in size with voxel dimensions of 0.977 × 0.977 × 3.40 mm. Both data sets were cropped and re-scaled to the same volume size of 256 × 256 × 263 with voxel dimensions of 1.953 × 1.953 mm.

The technique demonstrates potential aid in the interpretation of the PET data, with clear selection of the lung boundary from the CT segment. Using PET data alone, it is unlikely that the extraction of such structures could be accurately performed. As the CT segmentation is applied in real-time, the physician is able to interactively adjust the threshold while exploring the 3D data. As the two data sets are rendered independently, the user can rapidly interchange between the PET, CT, and fused PET-CT, which is a commonly practiced routine in clinical PET-CT diagnosis.

Fig. 3 further illustrates the proposed PET-CT volume rendering and volume manipulations. In Fig. 3(a), the CT data has been segmented to reveal the bone structures surrounding the functional organs apparent in the PET data, from the lower chest section of the human body. The volume has been clipped to show the internal structures. The window level of the CT from Fig. 3(a) has been adjusted to better visualize the PET data in Fig. 3(b). In Fig. 3(c), a thin skin layer from the CT data has been thresholded, which also reveals the airway tree within the lung. Equal fusion ratios were set for the PET and CT data in Fig. 3(a) to (c). Finally, Fig. 3(d) shows the PET-CT data clipped using the clipping box, revealing internal structures of the fused PET-CT data with the fusion ratio set to 70% PET and 30% CT, in order to better visualize the functional structures.
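The interactive CT thresholding described above can be expressed as an opacity lookup table: voxels with intensities below the chosen threshold receive zero opacity and vanish from the rendition. A minimal sketch in Python, with a hypothetical helper name (the prototype itself adjusts the LUT opacity function through the Volumizer API):

```python
def make_opacity_lut(threshold, size=256):
    """Build an opacity LUT for 8-bit intensities.

    Voxels below `threshold` are fully transparent (opacity 0.0);
    voxels above it ramp linearly up to fully opaque (opacity 1.0).
    """
    lut = []
    for intensity in range(size):
        if intensity < threshold or threshold >= size - 1:
            lut.append(0.0)  # suppressed: below the chosen threshold
        else:
            lut.append((intensity - threshold) / (size - 1 - threshold))
    return lut

# Raising the threshold hides soft tissue and leaves denser
# structures (e.g. bone) visible.
lut = make_opacity_lut(threshold=128)
print(lut[0], lut[127], lut[128], lut[255])  # 0.0 0.0 0.0 1.0
```

Because only the 256-entry table changes, the threshold can be moved every frame without touching the volume data itself, which is what makes this form of segmentation cheap enough to run during rendering.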
Fig. 2. Real-time volume rendering and fusion of PET-CT data: (a) PET; (b) CT; (c) fused PET-CT; and (d) fused PET-CT with CT segmentation of the lung. The volumes have been rotated and clipped.

[Fig. 3: panels (a)-(d).]
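The re-scaling of both data sets onto a single 256 × 256 × 263 grid, as reported in the Results, requires resampling at least one volume. A sketch using nearest-neighbour lookup; both the helper name and the choice of interpolation are assumptions, since the paper does not state how the re-scaling was performed:

```python
def resample_nearest(volume, new_shape):
    """Nearest-neighbour resample of a [z][y][x] nested-list volume."""
    nz, ny, nx = len(volume), len(volume[0]), len(volume[0][0])
    tz, ty, tx = new_shape
    # Map each target voxel back to its nearest source voxel.
    return [[[volume[z * nz // tz][y * ny // ty][x * nx // tx]
              for x in range(tx)]
             for y in range(ty)]
            for z in range(tz)]

# A 1x2x2 volume doubled in-plane to 1x4x4: each source voxel
# becomes a 2x2 block, mimicking a 128 -> 256 in-plane upsample.
small = [[[1, 2],
          [3, 4]]]
big = resample_nearest(small, (1, 4, 4))
print(big[0])  # [[1, 1, 2, 2], [1, 1, 2, 2], [3, 3, 4, 4], [3, 3, 4, 4]]
```

Once the PET and CT volumes share a grid, the per-voxel fusion and LUT operations sketched earlier can index both volumes with the same coordinates.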
The prototype runs on a Radeon 9600 graphics card with 64 MB of memory on a Windows XP platform. Based on these measurements, our visualization technique can animate the volumes with good response times of 4~15 FPS across all the manipulations presented in this study, prior to optimization of our prototype implementation. The lowest frame rate, of only 4 FPS, occurs during interactive segmentation of the CT data, which requires the most computation. For greater performance, frame rates can be significantly improved by lowering the voxel resolution and the viewing window size.

IV. CONCLUSIONS AND DISCUSSIONS

This paper described a novel technique and a prototype implementation for the visualization of dual-modality PET-CT data. The main contributions of this study are: (1) real-time volume rendering and fusion of the dual-modality PET-CT data; (2) interactive segmentation of volume rendered CT data; and (3) volume manipulations in real-time. These proposed techniques can potentially benefit the visualization of PET-CT data in diagnosis and interpretation. Overall, the proposed technique was shown to permit interactive visualization of dual-modality PET-CT data using real-time volume rendition in a variety of potentially useful ways. We believe that our proposed technique is able to complement the 2D visualization approaches commonly applied in practice.

In our current hardware and software configuration, the proposed PET-CT visualization technique was unable to render entire whole-body PET-CT images at a high sampling rate with responsive frame rates. Although the performance may be substantially improved by lowering the sampling rates, the need for interactive manipulations to be applied at a high sampling rate must be addressed. In this study, we therefore provide the ability for the user to select slices corresponding to a sub-section of the human body from a conventional 2D PET-CT viewer. Nevertheless, we will investigate the optimization of the proposed technique and also the utilization of more powerful graphics cards currently available, which may address this limitation.

The segmentation of the CT data has been demonstrated in this study to be a useful tool in interpreting the dual-modality PET-CT data, with examples illustrating how these modalities can complement each other. Further studies will investigate the integration of different segmentation techniques, applied not only to the CT data but also to segment both the anatomical and functional structures for PET-CT visualization.

ACKNOWLEDGEMENT

This study was supported by the ARC, HKRGC, and CMSP Grants. We would like to thank the staff at the Hong Kong Sanitarium Hospital (HKSP) for providing the image data sets used in this study.

REFERENCES

[1] O. Ratib, "PET/CT image navigation and communication", J. Nucl. Med., vol. 45, pp. 46S-55S, 2004.
[2] D.W. Townsend, J.P.J. Carney, J.T. Yap, and N.C. Hall, "PET/CT today and tomorrow", J. Nucl. Med., vol. 45, pp. 4S-14S, 2004.
[3] R. Shahidi, R. Tombropoulos, and R.P. Grzeszczuk, "Clinical applications of three-dimensional rendering of medical data sets", Proc. IEEE, vol. 86, pp. 555-565, 1998.
[4] D. Gering, A. Nabavi, R. Kikinis, W. Grimson, N. Hata, P. Everett, F. Jolesz, and W. Wells, "An integrated visualization system for surgical planning and guidance using image fusion and interventional imaging", Proc. MICCAI Int. Conf. Med. Image Computing and Computer Assisted Intervention, Springer Verlag, pp. 809-819, 1999.
[5] A. Rosset, L. Spadola, and O. Ratib, "OsiriX: an open-source software for navigating in multidimensional DICOM images", J. Digital Imag., vol. 17, pp. 205-216, 2004.
[6] F. Beltrame, G. De Leo, M. Fato, F. Masulli, and A. Schenone, "A three-dimensional visualization and navigation tool for diagnostic and surgical planning applications", Proc. SPIE Visualization, Display, and Image-Guided Procedures, vol. 4319, pp. 507-514, 2001.
[7] H. Hauser, L. Mroz, G.I. Bischi, and M.E. Gröller, "Two-level volume rendering", IEEE Trans. Visualization and Computer Graphics, vol. 7, pp. 242-252, 2001.
[8] E. Bullitt and S.R. Aylward, "Volume rendering of segmented image objects", IEEE Trans. Med. Imag., vol. 21, pp. 998-1002, 2002.
[9] S. Hu, E.A. Hoffman, and J.M. Reinhardt, "Automatic lung segmentation for accurate quantification of volumetric X-ray CT images", IEEE Trans. Med. Imag., vol. 20, pp. 490-498, 2001.
[10] R. Wiemker and A. Zwartkruis, "Optimal thresholding for 3D segmentation of pulmonary nodules in high resolution CT", International Congress Series, vol. 1230, pp. 653-658, 2001.
[11] P. Bhaniramka and Y. Demange, "OpenGL Volumizer: a toolkit for high quality volume rendering of large data sets", IEEE/ACM SIGGRAPH Symp. Volume Visualization and Graphics, pp. 45-53, 2002.
[12] ATI Research, "ARB fragment program specification", 2002. URL: http://oss.sgi.com/projects/ogl-sample/registry/ARB/fragment_program.txt [accessed July 2005]