
Conference Paper · May 2012
DOI: 10.1109/CSNT.2012.36

Analysis of CT and MRI Image Fusion using Wavelet Transform

Kiran Parmar (PG Student, EC Dept.), Rahul K. Kher (Asso. Professor, EC Dept.), Falgun N. Thakkar (Asst. Professor, EC Dept.)
G. H. Patel College of Engineering and Technology, Vallabh Vidyanagar, India
parmarkiran.ec@gmail.com, rahul2777@gmail.com, falgungcet@gmail.com

Abstract — Medical image fusion is used to derive useful information from multimodality medical image data. The idea is to improve the image content by fusing images such as computed tomography (CT) and magnetic resonance imaging (MRI) images, so as to provide more information to the doctor and to the clinical treatment planning system. This paper demonstrates the application of the wavelet transform to multi-modality medical image fusion. The work covers the selection of the wavelet function, the use of wavelet-based fusion algorithms for fusing CT and MRI medical images, the implementation of fusion rules, and the evaluation of fused-image quality. Fusion performance is evaluated on the basis of the root mean square error (RMSE) and the peak signal to noise ratio (PSNR).

Key words — Medical image fusion, Multimodality images, Wavelet transform, Fusion rules.

I. INTRODUCTION

Image fusion combines two or more images to obtain the most useful features for a specific application. For instance, doctors can manually combine the CT and MRI images of a patient with a tumor to make a more accurate diagnosis, but doing so is inconvenient and tedious. More importantly, given the same images, doctors with different levels of experience may make inconsistent decisions. It is therefore necessary to develop efficient automatic image fusion systems to decrease the doctor's workload and improve the consistency of diagnosis. Image fusion has a wide application domain in medical diagnosis. Medical images come in different modalities, such as CT, MRI, PET, ECT and SPECT, and these different modalities have their respective application ranges. For instance, functional information can be obtained from PET and SPECT; they have relatively low spatial resolution, but they provide information about visceral metabolism and blood circulation. Anatomical images such as CT, MRI and B-mode ultrasound have high spatial resolution. Medical image fusion combines a functional image and an anatomical image into one image, which can provide abundant information to the doctor for clinical diagnosis.

Many techniques for image fusion have been proposed in the literature [3]. The simplest is to take the average of the two images pixel by pixel; however, this method usually leads to undesirable side effects such as reduced contrast. Other methods based on intensity-hue-saturation (IHS), principal component analysis (PCA), synthetic variable ratio (SVR), etc., have also been developed [3]. Because multiresolution transforms provide a good mathematical model of the human visual system and can capture contrast changes, multiresolution techniques have attracted increasing interest in image fusion. They fall into two types, viz. pyramidal transforms and wavelet transforms. The pyramid method was first introduced by Burt and Adelson and later extended by Toet. However, because the pyramid method fails to introduce any spatial orientation selectivity into the decomposition process and often produces blocking effects in the fusion results, the wavelet transform has come to be used more widely than other methods. More details on wavelet-transform-based medical image fusion may be found in [1]–[11].

In this paper, an approach for the fusion of computed tomography (CT) and magnetic resonance (MR) images based on the wavelet transform is presented. Different fusion rules are applied to the wavelet coefficients of the low- and high-frequency portions. Registered CT and MRI images of the same person and the same spatial region have been used for the analysis.

II. IMAGE FUSION BASED ON WAVELET TRANSFORM

The original concept and theory of wavelet-based multiresolution analysis came from Mallat. The wavelet transform is a mathematical tool that can detect local features in a signal. It can also be used to decompose two-dimensional (2D) signals, such as 2D gray-scale images, into different resolution levels for multiresolution analysis. The wavelet transform has been used widely in many areas, such as texture analysis, data compression, feature detection and image fusion. In this section, we briefly review and analyze the wavelet-based image fusion technique.

A. Wavelet Transform

Wavelet transforms provide a framework in which a signal is decomposed, with each level corresponding to a coarser resolution (a lower-frequency band) together with higher-frequency bands. There are two main groups of transforms, continuous and discrete. The discrete wavelet transform (DWT) applies a two-channel filter bank (with down-sampling) iteratively to the low-pass band, starting with the original signal.
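To make the two-channel filter bank concrete, the following sketch (an editor-added illustration, not code from the paper) implements one analysis step and the matching synthesis step. It assumes the Haar filter pair h = (1, 1)/√2 and g = (1, −1)/√2 and an even-length signal; the analysis itself applies to any conjugate filter pair.

```python
import numpy as np

def haar_analysis_step(a_prev):
    """One level of the two-channel analysis filter bank: filter with
    low-pass h and high-pass g, then downsample by 2 (Haar case)."""
    a_prev = np.asarray(a_prev, dtype=float)
    s = 1.0 / np.sqrt(2.0)
    approx = s * (a_prev[0::2] + a_prev[1::2])   # approximation coefficients
    detail = s * (a_prev[0::2] - a_prev[1::2])   # detail (wavelet) coefficients
    return approx, detail

def haar_synthesis_step(approx, detail):
    """Inverse step: upsample and filter with the synthesis pair,
    giving perfect reconstruction for the Haar filters."""
    s = 1.0 / np.sqrt(2.0)
    out = np.empty(2 * approx.size)
    out[0::2] = s * (approx + detail)
    out[1::2] = s * (approx - detail)
    return out

signal = np.array([4.0, 2.0, 6.0, 8.0])
a1, c1 = haar_analysis_step(signal)      # one decomposition level
rec = haar_synthesis_step(a1, c1)        # reconstructs the original signal
```

Iterating `haar_analysis_step` on the approximation output `a1` yields the multilevel decomposition described above.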
The wavelet representation then consists of the low-pass band at the lowest resolution and the high-pass bands obtained at each step. This transform is invertible and non-redundant. The DWT is a spatial-frequency decomposition that provides a flexible multiresolution analysis of an image. In one dimension (1D), the basic idea of the DWT is to represent the signal as a superposition of wavelets. Suppose that a discrete signal is represented by f(t); the wavelet decomposition is then defined as

    f(t) = Σ_{m,n} c_{m,n} ψ_{m,n}(t)    (1)

where ψ_{m,n}(t) = 2^{−m/2} ψ(2^{−m} t − n) and m and n are integers. There exist very special choices of ψ such that the ψ_{m,n}(t) constitute an orthonormal basis, so that the wavelet transform coefficients can be obtained by an inner product:

    c_{m,n} = ⟨f, ψ_{m,n}⟩ = ∫ ψ_{m,n}(t) f(t) dt    (2.1)

In order to develop a multiresolution analysis, a scaling function φ is needed, together with its dilated and translated versions,

    φ_{m,n}(t) = 2^{−m/2} φ(2^{−m} t − n).    (2.2)

According to the characteristics of the scale spaces spanned by φ and ψ, the signal f(t) can be decomposed into its coarse part and details of various sizes by projecting it onto the corresponding spaces. To find such a decomposition explicitly, additional coefficients a_{m,n} are required at each scale. At each scale, a_{m,n} and a_{m−1,n} describe the approximations of the function f(t) at resolution 2^m and at the coarser resolution 2^{m−1}, respectively, while the coefficients c_{m,n} describe the information lost when going from one approximation to the next. Using the scaling function defined in (2.2) together with the wavelet, the approximation coefficients and the wavelet coefficients at each scale and position can be obtained as:

    a_{m,n} = Σ_k h_{2n−k} a_{m−1,k}    (3)

    c_{m,n} = Σ_k g_{2n−k} a_{m−1,k}    (4)

where h_n is a low-pass FIR filter and g_n is the related high-pass FIR filter. To reconstruct the original signal, the analysis filters can be selected from a biorthogonal set that has a related set of synthesis filters. These synthesis filters h′ and g′ can be used to perfectly reconstruct the signal using the reconstruction formula:

    a_{m−1,l}(f) = Σ_n [ h′_{2n−l} a_{m,n}(f) + g′_{2n−l} c_{m,n}(f) ]    (5)

Equations (3) and (4) are implemented by filtering and down-sampling; conversely, eqn. (5) is implemented by an initial up-sampling and a subsequent filtering.

B. Wavelet Transform for Fusing Images

In this subsection, to better understand the concept and procedure of the wavelet-based fusion technique, a schematic diagram is first given in Figure 1.

Figure 1. The scheme for image fusion using the wavelet transform [6].

In general, the basic idea of image fusion based on the wavelet transform is to perform a multiresolution decomposition on each source image; the coefficients of both the low-frequency band and the high-frequency bands are then combined according to a certain fusion rule, as displayed in the middle block of Figure 1. The most widely used fusion rule is the maximum-selection scheme. This simple scheme selects the wavelet coefficient with the largest absolute value at each location in the input images as the coefficient at that location in the fused image. After that, the fused image is obtained by applying the inverse DWT (IDWT) to the combined wavelet coefficients. The detailed fusion steps based on the wavelet transform can be summarized as follows:

Step 1. The images to be fused must be registered to ensure that the corresponding pixels are aligned.

Step 2. The images are decomposed by the wavelet transform. With a K-level decomposition, each transformed image includes one low-frequency portion (the low-low band) and 3K high-frequency portions (the low-high, high-low and high-high bands).

Step 3. The transform coefficients of the different portions or bands are combined according to a certain fusion rule.

Step 4. The fused image is constructed by performing an inverse wavelet transform on the combined transform coefficients from Step 3.

In order to measure the fusion performance, we calculate the root mean square error (RMSE) between the reconstructed image and the original image for every fusion performed, and present this error as a percentage of the mean intensity of the original image. The RMSE is given by [8]

    RMSE = sqrt( (1/(M·N)) Σ_x Σ_y ( I_true(x, y) − I_fused(x, y) )² )    (6)

where I_true(x, y) is the reference image, I_fused(x, y) is the fused image, and M and N are the dimensions of the images. For each level of reconstruction, the RMSE is measured and compared.
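The four steps and the RMSE measure can be sketched in Python as follows. This is a minimal editor-added illustration, not the paper's implementation: it assumes a single-level 2-D Haar transform and registered images with even dimensions, whereas the paper uses Daubechies filters and multiple decomposition levels.

```python
import numpy as np

def haar_dwt2(img):
    """Single-level 2-D Haar DWT: returns the LL, LH, HL, HH subbands."""
    img = np.asarray(img, dtype=float)
    s = 1.0 / np.sqrt(2.0)
    lo = s * (img[:, 0::2] + img[:, 1::2])   # row transform
    hi = s * (img[:, 0::2] - img[:, 1::2])
    ll = s * (lo[0::2, :] + lo[1::2, :])     # column transform
    lh = s * (lo[0::2, :] - lo[1::2, :])
    hl = s * (hi[0::2, :] + hi[1::2, :])
    hh = s * (hi[0::2, :] - hi[1::2, :])
    return ll, lh, hl, hh

def haar_idwt2(ll, lh, hl, hh):
    """Inverse of haar_dwt2 (perfect reconstruction)."""
    s = 1.0 / np.sqrt(2.0)
    rows = 2 * ll.shape[0]
    lo = np.empty((rows, ll.shape[1]))
    hi = np.empty((rows, ll.shape[1]))
    lo[0::2, :] = s * (ll + lh)
    lo[1::2, :] = s * (ll - lh)
    hi[0::2, :] = s * (hl + hh)
    hi[1::2, :] = s * (hl - hh)
    out = np.empty((rows, 2 * ll.shape[1]))
    out[:, 0::2] = s * (lo + hi)
    out[:, 1::2] = s * (lo - hi)
    return out

def fuse_max(img_a, img_b):
    """Steps 2-4: decompose both (already registered) images, keep the
    coefficient with the larger magnitude at each location in every
    subband (maximum-selection rule), then invert the transform."""
    fused = [np.where(np.abs(ca) >= np.abs(cb), ca, cb)
             for ca, cb in zip(haar_dwt2(img_a), haar_dwt2(img_b))]
    return haar_idwt2(*fused)

def rmse(ref, fused):
    """Eq. (6): root mean square error between reference and fused image."""
    return np.sqrt(np.mean((np.asarray(ref, float) - np.asarray(fused, float)) ** 2))
```

For a K-level decomposition one would recurse on the LL band K times, fusing all 3K detail subbands and the final LL band before inverting.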
III. EXPERIMENTAL RESULTS

We considered five wavelet families, namely Haar, Daubechies (db), Symlets, Coiflets and Biorsplines, for fusing CT and MRI medical images. The Daubechies (db) filter, which produced the smallest RMSE, was chosen for further analysis. Different fusion rules were tested, including the mean rule, maximum rule, minimum rule and random rule. The maximum rule gave the best result, so it was selected. We applied the maximum fusion rule to three CT and MRI image pairs: (1) the brain CT and MRI images shown in Figures 2(a) and 2(b) respectively, with the resultant fused image in Figure 2(c); (2) the head CT and MRI images shown in Figures 3(a) and 3(b) respectively, with the resultant fused image in Figure 3(c); and (3) the abdomen CT and MRI images shown in Figures 4(a) and 4(b) respectively, with the resultant fused image, produced by the above-mentioned fusion algorithm, in Figure 4(c). For the fused images, our observation was that the smallest RMSE and the highest PSNR were achieved at the higher decomposition levels.
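The paper reports PSNR values but does not state the formula used. A common definition for 8-bit grayscale images (an assumption here, not taken from the paper) is PSNR = 10·log10(peak²/MSE) with peak = 255:

```python
import numpy as np

def psnr(ref, fused, peak=255.0):
    """Peak signal-to-noise ratio in dB; assumes 8-bit images
    (peak = 255) unless another peak value is supplied."""
    ref = np.asarray(ref, dtype=float)
    fused = np.asarray(fused, dtype=float)
    mse = np.mean((ref - fused) ** 2)
    if mse == 0:
        return np.inf          # identical images: error-free fusion
    return 10.0 * np.log10(peak ** 2 / mse)
```

Under this convention, higher PSNR corresponds directly to lower RMSE, so the two metrics in the paper rank the decomposition levels consistently.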

Fig. 2. (a) CT image (brain), (b) MRI image (brain), and (c) resultant fused image using the maximum fusion rule.

Fig. 3. (a) CT image (abdomen), (b) MRI image (abdomen), and (c) resultant fused image using the maximum fusion rule.

Fig. 4. (a) CT image (head), (b) MRI image (head), and (c) resultant fused image using the maximum fusion rule.
Figure 5. (a), (c) and (e): plots of PSNR vs. level of decomposition for the fused brain, abdomen and head images, respectively; (b), (d) and (f): similar plots of the RMSE values (RMSE values have a multiplier of 100).

IV. CONCLUSION AND FUTURE WORK

We have combined the wavelet transform with various fusion rules to fuse CT and MRI images. The method gives encouraging results in terms of smaller RMSE and higher PSNR values. Among all the fusion rules, the maximum fusion rule performs best, achieving the lowest RMSE and the highest PSNR values. Using this method we have also fused other head and abdomen images. The images used here are grayscale CT and MRI images; however, images of other modalities (such as PET, SPECT and X-ray), with their true-color nature, may also be fused using the same method.

REFERENCES
[1] Wenjun Chi, Sir Michael Brady, Niall R. Moore and Julia A. Schnabel, "Fusion of Perpendicular Anisotropic MRI Sequences", IEEE Int. Symp. on Biomedical Imaging (ISBI 2011), Chicago, March 30 – April 2, 2011, pp. 1455-1458.
[2] Anna Wang, Haijing Sun and Yueyang Guan, "The Application of Wavelet Transform to Multi-modality Medical Image Fusion", IEEE Int. Conf. on Networking, Sensing and Control (ICNSC '06), Florida, pp. 270-274.
[3] Yong Yang, Dong Sun Park, Shuying Huang, Zhijun Fang and Zhengyou Wang, "Wavelet based Approach for Fusing Computed Tomography and Magnetic Resonance Images", Chinese Control and Decision Conference (CCDC '09), Guilin, China, June 17-19, 2009, pp. 5770-5774.
[4] F. E. Ali and A. A. Saad, "Fusion of MR and CT images using the curvelet transform", 25th National Radio Science Conference (NRSC 2008), Egypt, March 18-20, 2008, pp. 1-8.
[5] Sabalan Daneshvar and Hassan Ghassemian, "Fusion of MRI and PET Images Using Retina Based Multiresolution Transforms", 9th Int. Symp. on Signal Processing and its Applications (ISSPA '07), Sharjah, Feb. 12-15, 2007, pp. 1-7.
[6] Yong Yang, Dong Sun Park, Shuying Huang and Nini Rao, "Medical Image Fusion via an Effective Wavelet-Based Approach", EURASIP Journal on Advances in Signal Processing, vol. 2010 (2010), pp. 1-13.
[7] Zhao Wencang and Cheng Lin, "Medical Image Fusion Method based on Wavelet Multi-resolution and Entropy", IEEE Int. Conf. on Automation and Logistics (ICAL 2008), Qingdao, China, Sept. 1-3, 2008, pp. 2329-2333.
[8] Zhiming Cui, Guangming Zhang and Jian Wu, "Medical Image Fusion Based on Wavelet Transform and Independent Component Analysis", Int. Joint Conf. on Artificial Intelligence (JCAI '09), Hainan Island, April 25-26, 2009, pp. 480-483.
[9] Xiaoqing Zhang, Yongguo Zheng, Yanjun Peng, Weike Liu and Changqiang Yang, "Research on Multi-Mode Medical Image Fusion Algorithm Based on Wavelet Transform and the Edge Characteristics of Images", 2nd Int. Congress on Image and Signal Processing (CISP '09), China.
[10] R. Singh, M. Vatsa and A. Noore, "Multimodal Medical Image Fusion Using Redundant Discrete Wavelet Transform", 7th Int. Conf. on Advances in Pattern Recognition (ICAPR '09), Feb. 4-6, 2009, pp. 232-235.
[11] Yang Licai, Liu Xin and Yao Yucui, "Medical Image Fusion Based on Wavelet Packet Transform and Self-Adaptive Operator", 2nd Int. Conf. on Bioinformatics and Biomedical Engineering (ICBBE 2008), May 16-18, 2008, pp. 2647-2650.
[12] Zhao Tongzhou, Wang Yanli, Wang Haihui, Song Hongxian and Gao Shen, "Approach of Medicinal Image Fusion Based on Multiwavelet Transform", Chinese Control and Decision Conference (CCDC '09), Guilin, China, pp. 3411-3416.
