
Medical Image Fusion Based on Redundancy DWT and Mamdani Type Min-sum Mean-of-max Techniques with Quantitative Analysis

Chandra Prakash
SCSE VIT University, Vellore cp5767@gmail.com

S Rajkumar
SCSE VIT University, Vellore rajkumarsrajkumar@gmail.com

P.V.S.S.R. Chandra Mouli


SCSE VIT University, Vellore mouli.chand@gmail.com

Abstract—Image fusion is a process in which multiple images are combined to form a single fused image that is more informative than any of its inputs. Fusion of medical images is useful for efficient disease diagnosis. This paper describes two multimodality medical image fusion techniques and assesses their results with various quantitative metrics. First, two registered images, CT (anatomical information) and MRI-T2 (functional information), are taken as input. The fusion techniques, Mamdani type minimum-sum-mean of maximum (MIN-SUM-MOM) and the Redundancy Discrete Wavelet Transform (RDWT), are then applied to the input images, and the fused images are analyzed with quantitative metrics, namely Overall Cross Entropy (OCE), Peak Signal-to-Noise Ratio (PSNR), Signal-to-Noise Ratio (SNR), Structural Similarity Index (SSIM) and Mutual Information (MI). The derived results show that Mamdani type MIN-SUM-MOM is more productive than RDWT, and that both fusion techniques provide more information than the input images, as confirmed by all the metrics.

Keywords—CT; MRI-T2; MIN-SUM-MOM algorithm; Redundancy Discrete Wavelet Transform; medical image fusion; quantitative metrics; registered images

I. INTRODUCTION
The medical imaging field demands images with high resolution and high information content about bones and tissues for disease diagnosis [10]. A single modality cannot provide this: X-ray computed tomography (CT) is suited mainly to imaging bone structure, MRI gives clear information about soft tissues, and so on. In practice, complementary information from different modalities is required for diagnosis. Medical image fusion is an emerging technique that helps doctors by fusing images and retrieving relevant information from multiple modalities such as CT, MRI, fMRI, SPECT and PET. Several fusion techniques exist, but each has limitations [1]: the Contrast Pyramid cannot retain sufficient information from its source images, the Ratio Pyramid introduces false information that never existed in the original images, and the Morphological Pyramid creates many false edges. Multimodality medical image fusion has therefore emerged as a promising research area in recent years.

Image fusion aims at integrating information from the source images on a pixel basis, thus obtaining more precise and complete information about an object. The fusion process can be carried out at various levels. In pixel-level image fusion, the fused image should provide all relevant information present in the original images with no artifacts or inconsistencies. Pixel-level image fusion is classified into spatial domain fusion and transform domain fusion. Spatial domain fusion is applied directly to the source images; simple averaging reduces the signal-to-noise ratio of the resultant image, and spatial distortion persists in the fused image. In transform domain fusion, the input images are first decomposed into transform coefficients, the fusion rule is applied to obtain a fusion decision map, and inverse transformation of this map yields the fused image. The fused image then carries the details of the source images with reduced spatial distortion, which is why the majority of earlier fusion techniques were based on wavelet transforms. However, the DWT produces fused images that suffer from shift variance and additive noise [6]. These problems can be overcome using RDWT and the Mamdani type MIN-SUM-MOM fusion technique. RDWT preserves the overall information (exact edge and spectral content) of the input images without spatial distortion; the low-pass (approximation) and high-pass (detail) subbands of the input images are fused using an averaging method and an entropy method, respectively. The Mamdani type MIN-SUM-MOM technique uses the MIN algorithm for the fuzzy implication operation, the SUM algorithm for calculating the membership degrees of the derived output fuzzy sets, and the MOM (mean of maximum) algorithm for defuzzification of the output set [3]. The performance of the fusion techniques is evaluated using the quantitative metrics OCE, PSNR, SNR, SSIM and MI.


The rest of the paper is organized as follows: Section II discusses the system design; Section III deals with the experimental results and the evaluation of the proposed methods based on the quantitative metrics; Section IV summarizes the discussion and future work.

II. SYSTEM DESIGN
In this system, two medical images of different modalities, CT and MRI-T2, are taken as input. The registered input images are fused using the Mamdani type MIN-SUM-MOM and RDWT techniques, and the fused image is then analyzed using the quantitative metrics. The overall system design is shown in Fig. 1. The dataset was collected from the Indian Scan Center, Madurai, Tamil Nadu, India. The images were acquired from the brain in two modalities, CT and MRI (MR-T2); each set of images was taken from the same patient [10].

Figure 1. An overall system design
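To make the data flow of Fig. 1 concrete, the short sketch below mirrors the pipeline with plain pixel averaging standing in for the fusion block; the two actual fusion techniques are developed in Sections II-A and II-B. The file names are illustrative assumptions, not part of the dataset.

```python
# Sketch of the Fig. 1 pipeline; per-pixel averaging is only a placeholder
# for the fusion step (Sections II-A and II-B give the real techniques).
import numpy as np
from PIL import Image

def load_gray(path):
    # Registered 8-bit grayscale input (CT or MR-T2).
    return np.asarray(Image.open(path).convert("L"), dtype=np.float64)

def fuse_average(a, b):
    # Placeholder fusion rule: per-pixel mean of the two modalities.
    return (a + b) / 2.0

if __name__ == "__main__":
    ct = load_gray("dataset1_ct.png")     # illustrative file name
    mri = load_gray("dataset1_mrt2.png")  # illustrative file name
    fused = fuse_average(ct, mri)
    Image.fromarray(fused.astype(np.uint8)).save("dataset1_fused.png")
```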

A. Mamdani type minimum-sum-mean of maximum
Mamdani type minimum-sum-mean of maximum is a pixel-based fusion technique implemented in three stages. In the first stage, fuzzy sets are used to describe the gray levels of the input images and the fuzzy inference system (FIS) is built. In the second stage, fuzzy inference is carried out according to the fuzzy rules and the membership degree of each output pixel is obtained. Finally, the output gray-level value is calculated through defuzzification and the fused image is obtained [2].
1) Fuzzification of inputs and setting up of membership functions: The input grayscale images have pixel values in the range 0-255, giving 256 gray levels in total. These gray levels are divided into a fuzzy set {VS, S, M, L, VL} with five membership functions: VS, very small; S, small; M, medium; L, large; VL, very large. The output image also has 256 gray levels and uses the same fuzzy set. Triangular membership functions are used for the simulations because they require less computation than Gaussian or trapezoidal membership functions. The input and output membership functions are built as shown in Fig. 2 and Fig. 3.

Figure 2. The Input Membership Function Curves

Figure 3. The Output Membership Function Curves
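As a concrete sketch, the five triangular membership functions can be implemented as below. The paper specifies the curves only graphically (Fig. 2 and Fig. 3), so the breakpoints here are illustrative assumptions that partition the 256 gray levels evenly.

```python
import numpy as np

def trimf(x, a, b, c):
    """Triangular membership function with feet a, c and peak b."""
    x = np.asarray(x, dtype=np.float64)
    left = (x - a) / (b - a)
    right = (c - x) / (c - b)
    return np.clip(np.minimum(left, right), 0.0, 1.0)

# Evenly spaced partition of the 256 gray levels (illustrative breakpoints).
MF = {
    "VS": lambda x: trimf(x, -64, 0, 64),
    "S":  lambda x: trimf(x, 0, 64, 128),
    "M":  lambda x: trimf(x, 64, 128, 192),
    "L":  lambda x: trimf(x, 128, 192, 255),
    "VL": lambda x: trimf(x, 192, 255, 320),
}

# Example: membership degrees of gray level 100 in every fuzzy set.
print({name: float(mu(100)) for name, mu in MF.items()})
```

With this partition the membership degrees of neighboring sets sum to one over most of the gray range, which keeps the fuzzification smooth.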

2) Fuzzy Rules: With Mamdani-type fuzzy inference, the rules take the IF-THEN form given below:

If (CT is VS) and (MRI is VS) then fused image is VS.
If (CT is VS) and (MRI is S) then fused image is S.
If (CT is VS) and (MRI is M) then fused image is M.
If (CT is VS) and (MRI is L) then fused image is L.
If (CT is S) and (MRI is VS) then fused image is S.
If (CT is S) and (MRI is S) then fused image is S.
If (CT is S) and (MRI is M) then fused image is M.
If (CT is S) and (MRI is L) then fused image is L.
If (CT is M) and (MRI is VS) then fused image is M.
If (CT is M) and (MRI is S) then fused image is M.
If (CT is M) and (MRI is M) then fused image is M.
If (CT is M) and (MRI is L) then fused image is L.
If (CT is L) and (MRI is VS) then fused image is L.


If (CT is L) and (MRI is S) then fused image is L.
If (CT is L) and (MRI is M) then fused image is L.
If (CT is L) and (MRI is L) then fused image is L.
If (CT is VL) or (MRI is VL) then fused image is VL.
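Because the consequent of every AND rule above is simply the larger of its two antecedent labels, the sixteen AND rules can be generated rather than listed. A minimal sketch (the OR rule for VL is kept separate, as in the paper):

```python
# The 16 AND rules from the list above, encoded as (CT, MRI) -> fused label.
# The consequent is always the larger of the two antecedent labels.
LABELS = ["VS", "S", "M", "L"]  # VL is handled by the separate OR rule

RULES = {
    (ct, mri): LABELS[max(i, j)]
    for i, ct in enumerate(LABELS)
    for j, mri in enumerate(LABELS)
}
# Seventeenth rule: if CT is VL or MRI is VL, the fused image is VL.

assert RULES[("VS", "M")] == "M" and RULES[("L", "S")] == "L"
```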

3) MIN-SUM-MOM algorithm: On the basis of the MIN-SUM-MOM algorithm, an FIS is created with two inputs, one output and the seventeen fuzzy rules. Fuzzy inference is carried out according to the given rules and the membership function of the output fuzzy set is obtained. The fuzzy rules in this paper have two antecedents, connected by logical AND or logical OR. For logical AND, the MIN algorithm is employed to infer the output membership degree; for logical OR, the MAX algorithm [3]. Let k1 denote the gray value of the input image CT, k2 the gray value of the input image MRI and t the gray value of the output fused image. For three representative rules we have:

F1(t) = min{μVS(k1), μL(k2)}    (1)
F2(t) = min{μS(k1), μL(k2)}    (2)
F3(t) = max{μVL(k1), μVL(k2)}    (3)

where μVS(k1) denotes the membership degree to which the element k1 belongs to the fuzzy subset VS. Each such firing strength clips the corresponding output membership function (MIN implication), and the clipped output sets of all fired rules are aggregated by a bounded sum, giving the membership function of the total output fuzzy subset F:

F(t) = 1, if F1(t) + F2(t) + F3(t) ≥ 1
F(t) = F1(t) + F2(t) + F3(t), otherwise    (4)

To get the gray value of the fused image, a defuzzification operation is used; in this paper, MOM (mean of maximum) defuzzification. Define the set

r(F) = { t ∈ T | F(t) = sup_{t∈T} F(t) }    (5)

where r(F) is the set of values of t at which F(t) attains its supremum over T. The MOM defuzzification is then given by

t* = ∫_{r(F)} t dt / ∫_{r(F)} dt    (6)

where the integrals are conventional integrals when r(F) is continuous and become sums when r(F) is discrete.
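Putting the three stages together, the sketch below fuses two registered 8-bit images with MIN inference, bounded-SUM aggregation of the clipped output sets, and mean-of-maximum defuzzification. It reuses MF and RULES from the earlier sketches and is written for clarity, not speed: since the output depends only on the pair of input gray levels, it precomputes a 256 x 256 lookup table once. The fallback value when no rule fires is an assumption.

```python
import numpy as np

GRAY = np.arange(256, dtype=np.float64)
# Membership degrees of every gray level, built from the MF sketch above.
MU = {name: mf(GRAY) for name, mf in MF.items()}

def fuse_min_sum_mom(ct, mri):
    # Build the fused-gray-level lookup table.
    lut = np.empty((256, 256), dtype=np.uint8)
    for k1 in range(256):
        for k2 in range(256):
            agg = np.zeros(256)
            for (a, b), out in RULES.items():
                w = min(MU[a][k1], MU[b][k2])     # AND antecedent -> MIN
                agg += np.minimum(w, MU[out])     # MIN implication (clip)
            w = max(MU["VL"][k1], MU["VL"][k2])   # OR rule 17 -> MAX
            agg += np.minimum(w, MU["VL"])
            F = np.minimum(agg, 1.0)              # bounded SUM, eq. (4)
            peak = F.max()
            if peak == 0.0:                       # no rule fired (assumed fallback)
                lut[k1, k2] = (k1 + k2) // 2
            else:                                 # mean of maxima, eqs. (5)-(6)
                lut[k1, k2] = int(round(GRAY[F == peak].mean()))
    ct8 = np.asarray(ct, dtype=np.uint8)
    mri8 = np.asarray(mri, dtype=np.uint8)
    return lut[ct8, mri8]
```

After the table is filled, fusing an image pair is a single array-indexing operation, so the per-pixel cost is negligible.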

B. Redundancy Discrete Wavelet Transform
The RDWT fusion technique is performed on two registered images of different modalities (CT and MRI-T2), denoted X and Y. The input images are decomposed into three levels of RDWT decomposition using Daubechies filters to produce the approximation and detail wavelet bands, as shown in Fig. 4 [5].

Figure 4. Three Levels of Decomposition

In the first stage, image Ii is decomposed into the RDWT subbands Iia, Iib, Iic, Iid, and image Iii into Iiia, Iiib, Iiic, Iiid. To utilize the features of both images, the coefficient values of the two approximation subbands are averaged:

IMa = mean(Iia, Iiia)    (7)

where IMa is the approximation band of the fused image. In the second stage, the remaining three detail subbands LH, HL and HH are divided into blocks of size 3×3 and the entropy of each block is calculated:

e_jpq = (1/n²) ln( μ_jpq Σ_{x,y=1}^{3,3} I_jpq(x,y) / σ²_jpq )    (8)

where p ∈ {b, c, d} indexes the RDWT detail subbands, n = 3 is the block size, q is the block number, and j ∈ {i, ii} distinguishes the two multimodal medical images Ii and Iii; μ_jpq and σ_jpq are the mean and standard deviation of the RDWT coefficients in the block. These entropy values generate the detail subbands IMb, IMc and IMd of the fused image, as shown in (9): the RDWT coefficients of image Ii are selected for a block if its entropy exceeds that of the corresponding block of image Iii, otherwise the block of Iii is selected [9].

IMpq = Iipq, if e_ipq > e_iipq
IMpq = Iiipq, otherwise    (9)

Finally, the inverse RDWT is applied to all the subbands to generate the resultant fused medical image IM:

IM = IRDWT(IMa, IMb, IMc, IMd)    (10)
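A compact way to experiment with this scheme is PyWavelets' stationary wavelet transform (swt2/iswt2), which, like RDWT, is undecimated and shift-invariant. The sketch below follows eqs. (7), (9) and (10); for numerical robustness it uses the block standard deviation as the activity measure in place of the entropy expression (8), and it assumes a recent PyWavelets (>= 1.1) and image sides divisible by 2**level (e.g. 256 x 256 with level 3).

```python
import numpy as np
import pywt  # PyWavelets

def _select_blocks(d1, d2, n=3):
    # Eq. (9): per n x n block, keep the subband block with the larger
    # activity (block std stands in for the entropy of eq. (8) here).
    # Leftover border pixels, if any, keep d1's coefficients.
    out = d1.copy()
    h, w = d1.shape
    for r in range(0, h - h % n, n):
        for c in range(0, w - w % n, n):
            if d2[r:r+n, c:c+n].std() > d1[r:r+n, c:c+n].std():
                out[r:r+n, c:c+n] = d2[r:r+n, c:c+n]
    return out

def fuse_rdwt(ct, mri, wavelet="db1", level=3):
    a = np.asarray(ct, dtype=np.float64)
    b = np.asarray(mri, dtype=np.float64)
    c1 = pywt.swt2(a, wavelet, level=level, trim_approx=True)
    c2 = pywt.swt2(b, wavelet, level=level, trim_approx=True)
    fused = [(c1[0] + c2[0]) / 2.0]          # eq. (7): average approximations
    for d1, d2 in zip(c1[1:], c2[1:]):       # (LH, HL, HH) bands per level
        fused.append(tuple(_select_blocks(x, y) for x, y in zip(d1, d2)))
    rec = pywt.iswt2(fused, wavelet)         # eq. (10): inverse transform
    return np.clip(rec, 0, 255).astype(np.uint8)
```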

C. Quantitative Analysis


Performance evaluation is required to judge the result of any fusion technique, both qualitatively and quantitatively. Quantitative assessment is carried out on the fused images by means of various quality measures; it verifies that the fused images are more informative and that the applied fusion techniques are satisfactory. The following quantitative metrics are used in the analysis of the proposed system.

1) Overall Cross Entropy (OCE): OCE measures the difference between the input images and the fused image; the lower the value, the better the fusion result [10]. It is given as

OCE(IA, IB, F) = (CE(IA, F) + CE(IB, F)) / 2    (11)

where IA and IB are the input images of different modalities, F is the fused image, and CE(IA, F) and CE(IB, F) are the cross entropies of the input images with the fused image.

2) Peak Signal-to-Noise Ratio (PSNR): PSNR measures the quality of the fused image with respect to an original input image. It is defined as [7]:

MSE = (1/pq) Σ_{i=0}^{p-1} Σ_{j=0}^{q-1} [A(i,j) − B(i,j)]²    (12)

PSNR = 10 log10(MAX² / MSE)    (13)

where MAX is the maximum pixel value of the image, p and q are the height and width of the image, A(i,j) is a pixel of the input image and B(i,j) the corresponding pixel of the fused image.

3) Signal-to-Noise Ratio (SNR): SNR is defined as the ratio of the mean pixel value to the standard deviation of the pixel values [8]:

SNR = Mean / Standard Deviation    (14)

It reflects the contrast of an image; a higher value indicates more contrast.

4) Structural Similarity Index (SSIM): SSIM relates structural information changes within the images to the perceived distortion of the images. It assesses the similarity between two images A and B by the expression [8]

SSIM(A, B) = ((2 μA μB + k1)(2 σAB + k2)) / ((μA² + μB² + k1)(σA² + σB² + k2))    (15)

where μA and μB denote the mean intensities, σA and σB the standard deviations, σAB the covariance of A and B, and k1 and k2 are constants.

5) Mutual Information (MI): Let A and B be two registered multimodal images; their mutual information is given as

M(A, B) = K(A) + K(B) − K(A, B)    (16)

where K(A) denotes the entropy of image A, K(B) the entropy of image B and K(A, B) their joint entropy. Registering A with respect to B maximizes the mutual information between A and B, maximizing the entropies K(A) and K(B) while minimizing the joint entropy K(A, B). A higher value of M(A, B) indicates a better fusion algorithm [4].
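The metrics above are straightforward to compute with NumPy. In the sketch below the histograms use 256 bins; the cross entropy inside OCE is taken as the KL-style histogram cross entropy, and the SSIM stabilizing constants are illustrative, since the paper fixes neither choice.

```python
import numpy as np

EPS = np.finfo(float).eps

def _hist(img):
    # Normalized 256-bin gray-level histogram.
    h, _ = np.histogram(img, bins=256, range=(0, 256))
    return h / h.sum()

def psnr(ref, fused, max_val=255.0):
    # Eqs. (12)-(13).
    a = np.asarray(ref, dtype=np.float64)
    b = np.asarray(fused, dtype=np.float64)
    mse = np.mean((a - b) ** 2)
    return 10.0 * np.log10(max_val ** 2 / (mse + EPS))

def snr(img):
    # Eq. (14): mean pixel value over standard deviation.
    return img.mean() / (img.std() + EPS)

def ssim_global(a, b, k1=0.01, k2=0.03):
    # Eq. (15), evaluated once over the whole image.
    a = np.asarray(a, dtype=np.float64)
    b = np.asarray(b, dtype=np.float64)
    ma, mb = a.mean(), b.mean()
    cov = ((a - ma) * (b - mb)).mean()
    return ((2 * ma * mb + k1) * (2 * cov + k2)) / (
        (ma ** 2 + mb ** 2 + k1) * (a.var() + b.var() + k2))

def entropy(img):
    p = _hist(img)
    return -np.sum(p * np.log2(p + EPS))

def mutual_information(a, b):
    # Eq. (16): M(A,B) = K(A) + K(B) - K(A,B).
    joint, _, _ = np.histogram2d(np.ravel(a), np.ravel(b), bins=256,
                                 range=[[0, 256], [0, 256]])
    pj = joint / joint.sum()
    joint_entropy = -np.sum(pj * np.log2(pj + EPS))
    return entropy(a) + entropy(b) - joint_entropy

def oce(a, b, fused):
    # Eq. (11); CE(I, F) as KL-style cross entropy of histograms (assumed).
    def ce(x, y):
        p, q = _hist(x), _hist(y)
        return np.sum(p * np.log2((p + EPS) / (q + EPS)))
    return (ce(a, fused) + ce(b, fused)) / 2.0
```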

III. EXPERIMENTAL RESULTS
The following steps explain, in sequence, the fusion of the input images and the performance analysis.
Step 1: The CT and MRI (MR-T2) images of a patient are considered as one group. All images have the same size of 256 × 256 pixels with 256 gray levels. In total, six groups of images are used in the analysis. Sample input images (Dataset-1 and Dataset-4) are shown in Fig. 5 and Fig. 6.

Figure 5. Original Multimodality Image Dataset 1 (CT, MRI (MR-T2))

Figure 6. Original Multimodality Image Dataset 4 (CT, MRI (MR-T2))

Step 2: The input images of each dataset are fused using Mamdani type MIN-SUM-MOM and the Redundant Discrete Wavelet Transform; sample results are shown in Fig. 7 and Fig. 8.

Figure 7. Fused Images of Dataset 1 (RDWT-1, RDWT-2)


Figure 8. Fused Images of Dataset 1 (RDWT-3, MSM)

Step 3: The performance of the fused images is analyzed with the quantitative metrics discussed in Section II. The results of all metrics are shown in Table I and plotted as graphs in Figs. 9 to 13, where in each graph the x axis specifies the dataset (1-6) and the y axis the value of the metric for the corresponding fusion technique.

TABLE I. COMPARISON OF IMAGE FUSION ALGORITHMS FOR CT AND MRI-T2

Metric  Algorithm  Dataset1  Dataset2  Dataset3  Dataset4  Dataset5  Dataset6
OCE     RDWT-1     1.0705    3.7607    3.7873    2.7278    2.9605    2.2524
        RDWT-2     1.7954    3.2571    3.1154    3.7660    3.7405    3.0920
        RDWT-3     1.3620    3.0505    2.7211    3.3382    3.3529    3.3516
        MSM        0.6711    1.1062    0.8634    0.9497    0.9794    0.7773
PSNR    RDWT-1     47.6414   47.7731   47.8608   47.8931   48.0332   47.8314
        RDWT-2     45.9942   45.9836   46.2343   46.1345   46.1042   46.2141
        RDWT-3     48.3606   48.0577   48.3422   48.3951   48.1451   49.2433
        MSM        56.5536   56.2715   56.7041   56.7312   56.7933   57.1547
SNR     RDWT-1     0.4576    0.4325    0.4255    0.4384    0.4540    0.4507
        RDWT-2     0.4694    0.4405    0.4313    0.4467    0.4624    0.4335
        RDWT-3     0.4441    0.4048    0.4065    0.4236    0.4362    0.4390
        MSM        0.4812    0.4532    0.4413    0.4434    0.4614    0.4641
SSIM    RDWT-1     54.6852   53.0165   54.8243   56.1597   55.5667   54.6788
        RDWT-2     59.4354   58.6767   60.1493   61.3857   59.8026   59.6076
        RDWT-3     60.7342   59.0593   60.1309   62.0662   60.9873   61.5968
        MSM        64.7323   66.7851   70.6996   68.2089   66.8447   68.7561
MI      RDWT-1     60.7772   60.0504   60.9053   60.5664   60.2753   60.8612
        RDWT-2     60.4854   59.8624   60.5463   60.2553   60.0164   60.5293
        RDWT-3     60.1853   59.6415   60.2893   59.9942   59.9614   60.2493
        MSM        62.9324   63.0753   62.2324   62.0294   62.5595   63.1621

Figure 9. Graph-1 Overall Cross Entropy Calculation

Figure 10. Graph-2 Peak Signal to Noise Ratio Calculation


Figure 11. Graph-3 Signal to Noise Ratio Calculation

Figure 12. Graph-4 Structural Similarity Index Calculation

Figure 13. Graph-5 Mutual Information Calculation

IV. CONCLUSIONS AND FUTURE WORK
In this paper, we proposed two different fusion techniques, which were analyzed with quantitative metrics on six sets of brain images acquired from CT and MRI-T2. The experimental results show that Mamdani type MIN-SUM-MOM outperforms RDWT from the visual perspective and is also more satisfactory as verified with the quantitative metrics. From Table I and Figs. 9 to 13 we find that the lower OCE values for MIN-SUM-MOM indicate better fused images, the higher PSNR values signify better image quality, the higher SNR values confirm that the contrast information of the fused images is higher, the higher SSIM values confirm that the fused images are similar to the original input images, and the higher MI values suggest that MIN-SUM-MOM gives better fusion results than RDWT. Thus the fused image obtained from MIN-SUM-MOM is more informative and more suitable from the clinical perspective for efficient retrieval, and the fused images are also obtained quickly, so it is advisable. In future work we would like to extend these fusion techniques to fuse multiple sensor images (CT, MRI, PET and SPECT), motion images and color images, to provide better fusion results in real-time scenarios.

ACKNOWLEDGMENT
The authors would like to thank the Indian Scan Center, Madurai for providing the brain images of the same patient in different modalities.

REFERENCES
[1] Firooz Sadjadi, "Comparative Image Fusion Analysis," IEEE Computer Society Conference on Computer Vision and Pattern Recognition, vol. 3, June 2005.
[2] Harpreet Singh, Jyoti Raj, Gulsheen Kaur, Thomas Meitzler, "Image Fusion using Fuzzy Logic and Applications," vol. XXVIII-1749, pp. 25-29, July 2004.
[3] Jionghua Teng, Suhuan Wang, Jingzhou Zhang, Xue Wang, "Fusion Algorithm of Medical Images Based on Fuzzy Logic," Seventh International Conference on Fuzzy Systems and Knowledge Discovery, August 2010.
[4] Josien P.W. Pluim, J.B. Antoine Maintz, Max A. Viergever, "Mutual Information based Registration of Medical Images: a Survey," IEEE Transactions on Medical Imaging, vol. 22, pp. 986-1004, August 2003.
[5] Ligia Chiorean, Mircea-Florin Vaida, "Medical Image Fusion Based on Discrete Wavelet Transform using Java Technology," 31st International Conference on Information Technology Interfaces, pp. 22-25, June 2009.
[6] Oliver Rockinger, "Image Sequence Fusion Using a Shift-Invariant Wavelet Transform," International Conference on Image Processing, vol. 3, p. 288, October 1997.
[7] Pamela C. Cosman, Richard A. Olshen, "Evaluating Quality of Compressed Medical Images: SNR, Subjective Rating and Diagnostic Accuracy," Proceedings of the IEEE, vol. 82, pp. 919-932, June 1994.
[8] R. Maruthi, R.M. Suresh, "Metrics for Measuring the Quality of Fused Images," International Conference on Computational Intelligence and Multimedia Applications, December 2007.
[9] Richa Singh, Mayank Vatsa, "Multimodal Medical Image Fusion using Redundant Discrete Wavelet Transform," Proceedings of the Seventh International Conference on Advances in Pattern Recognition, pp. 232-235, February 2009.
[10] S. Rajkumar, S. Kavitha, "Redundancy Discrete Wavelet Transform and Contourlet Transform for Multimodality Medical Image Fusion with Quantitative Analysis," 3rd International Conference on Emerging Trends in Engineering and Technology, November 2010.

