
Image Fusion Method for Microwave Tomography Image

Arif Saputra 1, a), Prawito Prajitno 1, b)


1 Department of Physics, Faculty of Science and Mathematics, University of Indonesia, West Java, Indonesia
a) aripsapu@gmail.com
b) prawito@sci.ui.ac.id

Abstract. Image reconstruction in microwave tomography is limited at certain frequencies, so the resulting image does not convey complete information, especially when more than one object is present in an experiment. In this study we used foam and plastic cylinders; the foam is imaged clearly at frequencies of 3 GHz and 5 GHz, while the plastic cylinder is visible at 9 GHz and 10 GHz. The resulting images therefore require an image fusion method so that both objects can be seen in a single image. The image fusion methods used are the wavelet and pyramid methods. The resulting SSIM value is 0.699 for the wavelet method and 0.748 for the Gauss-Laplacian method. It can be concluded that Gauss-Laplacian image fusion gives better results for microwave tomography images than the wavelet method.

Keywords. Image, Fusion, Tomography, Microwave, Wavelet, Pyramid, Gauss-Laplacian.

INTRODUCTION

The image generated from the microwave acquisition process requires further processing before it can convey the desired information completely and clearly. Tomography is fundamentally an imaging method that uses waves able to penetrate objects, so that the transmitted waves not absorbed by the object can be captured by sensors acting as detectors. Microwave tomography is an imaging modality that uses electromagnetic radiation to reconstruct a quantitative image of the dielectric properties of the object being observed [1]. Microwave tomography can provide safety, mobility, and cost-effectiveness in the imaging process, especially in the medical field [2].
However, the information obtained from a microwave tomography image is only part of the whole, because the image that appears after processing depends on the frequency used. For example, in the images reconstructed from 2 GHz to 6 GHz data the cylindrical plastic object is clearly visible, whereas the foam cylinder is clearly imaged at frequencies from 7 GHz to 9 GHz [2]. To obtain complete image information, two or more images based on the information produced at these different frequencies need to be combined. This method is known as image fusion.
Image fusion is widely used to combine two or more images that contain different information. Image fusion transforms can be divided into three categories, namely the pyramid, wavelet, and multi-scale decomposition methods [3].
The key step in wavelet-based image fusion is coefficient combination, i.e., combining the coefficients in the right way to obtain the best-quality fused image [4]. The discrete wavelet transform (DWT) is one of the algorithms commonly used in the wavelet method. The DWT is a spatial-frequency decomposition that performs a multi-resolution analysis of an image.
One of the effective and clear structures used to describe multi-resolution images is the image pyramid proposed by Burt and Adelson in 1983. The basic principle of this method is to decompose the original image into sub-images with different spatial resolutions through several mathematical operations [5].
With the various methods mentioned above, separate information from different images can be combined into one image with complete information. The next obstacle concerns the quality of the fused image. The information in the images produced by these various methods is, of course, reduced during the process. To determine the quality of the resulting image, it is therefore necessary to evaluate it using performance metrics computed from the image fusion result.

METHODS

Image fusion is the process of unifying several different images containing different information so as to produce an image that is more useful and contains more complete information. Image fusion also refers to the merging of images obtained by sensors of different wavelengths simultaneously viewing the same scene to form a composite image. Composite images are formed to enhance image content and make it easier for users to detect, recognize, and identify targets, and to improve the quality of the existing information [6].
Image acquisition is the retrieval of image data using one sensor or several sensors with different modalities. Image registration is the stage in which the images whose information will be used are selected, including the reduction or removal of noise in the images. Evaluation of the fusion performance is needed to determine the quality of the fused image qualitatively and quantitatively, as shown in Fig. 1.
Several algorithms or methods are used in this image fusion process, among them the pyramid, wavelet, and Gaussian-Laplacian fusion methods.

Wavelet

The wavelet transform is a mathematical tool for analyzing data whose features vary across different scales. In the context of a signal, a feature can be a frequency that varies over time, a transient, or a slowly changing trend, as shown in Fig. 2.
The wavelet method used in image fusion here is the discrete wavelet transform (DWT). In 1-D, the wavelet transform represents the signal as a superposition of wavelets. If a signal f(t) is decomposed, its wavelet decomposition becomes:

$f(t) = \sum_{m,n} C_{m,n}\,\varphi_{m,n}(t)$ (1)

where m and n are integers. This ensures that the signal is decomposed into normalized wavelets at octave scales. Mathematically, the DWT decomposition of a one-dimensional signal xl at level l is [7]:

𝑦𝑙+1 =↓ (𝑥𝑙 ⊗ 𝐻) (2)


𝑥𝑙+1 =↓ (𝑥𝑙 ⊗ 𝐿) (3)

FIGURE 1. Analysis and synthesis image fusion process [2].


FIGURE 2. Capturing transient behavior in signals using the MATLAB wavelet transform [8].

FIGURE 3. Block diagram of general wavelet method image fusion [9].

where ↓ (xl ⊗ H) and ↓ (xl ⊗ L) denote the convolution of xl with the high-pass (H) and low-pass (L) filters, respectively, followed by sub-sampling.
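As an illustration, below is a minimal sketch of the one-level decomposition in Eqs. (2)-(3) using the PyWavelets library; the Daubechies "db2" wavelet and the example signal are assumptions chosen only for illustration.

import numpy as np
import pywt

# Example 1-D signal x_l (assumed for illustration): a slow oscillation
# plus a short transient.
t = np.linspace(0, 1, 256)
x = np.sin(2 * np.pi * 4 * t)
x[120:130] += 1.0  # transient

# One DWT level: convolution with the low-pass (L) and high-pass (H)
# filters of the chosen wavelet, followed by sub-sampling by two,
# as in Eqs. (2)-(3).
x_next, y_next = pywt.dwt(x, "db2")  # x_{l+1}: approximation, y_{l+1}: detail

print(len(x), len(x_next), len(y_next))  # each output has roughly half the samples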
The DWT is preferred over the pyramid approach due to various advantages. It provides a coherent representation and precise information about a particular image. These qualities make the DWT suitable for fusion purposes. The wavelet method also produces less blocking effect than the pyramid method. In the DWT, each source image is decomposed into wavelet coefficients at different levels. These wavelet coefficients are then combined using different fusion rules. In the final step, the inverse wavelet transform is applied to the combined wavelet coefficients to obtain the desired composite image.
As depicted in Fig. 3, the source image I1 is decomposed into a set of four sub-images in various directions. {CLL}1 is the approximation coefficient of I1, which represents the low-frequency content. {CLH}1, {CHL}1, and {CHH}1 are the detail coefficients in the horizontal, vertical, and diagonal directions, respectively. The approximation coefficient {CLL}1 at level 1 is further decomposed into approximation and detail coefficients in the horizontal, vertical, and diagonal directions at level 2, and so on. The same applies to the source image I2. Using various fusion rules, the wavelet coefficients of the source images at the various levels are combined. From these fused wavelet coefficients, the final fused image F is generated using the inverse wavelet transform.
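To make this concrete, below is a minimal sketch of such a wavelet fusion in Python with PyWavelets; the single decomposition level, the "db2" wavelet, and the maximum-absolute-coefficient rule for the detail bands are illustrative assumptions rather than the exact settings of this study.

import numpy as np
import pywt

def wavelet_fuse(img1, img2, wavelet="db2"):
    """Fuse two grayscale images of equal size with a one-level 2-D DWT."""
    # Decompose each source into approximation (CLL) and detail
    # (CLH, CHL, CHH) coefficients.
    cA1, (cH1, cV1, cD1) = pywt.dwt2(img1, wavelet)
    cA2, (cH2, cV2, cD2) = pywt.dwt2(img2, wavelet)

    # Fusion rules: average the approximations, keep the detail
    # coefficient with the larger magnitude at each position.
    pick = lambda a, b: np.where(np.abs(a) >= np.abs(b), a, b)
    fused = (0.5 * (cA1 + cA2),
             (pick(cH1, cH2), pick(cV1, cV2), pick(cD1, cD2)))

    # Inverse DWT produces the composite image F.
    return pywt.idwt2(fused, wavelet)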

Pyramid

The basic idea of this method is as follows. First, the source image is decomposed into successive sub-images using operations such as blurring and down-sampling. Next, a fusion rule is applied to this sub-image decomposition. Finally, the composite image is reconstructed from the merged sub-images.
FIGURE 4. Block diagram of pyramid image fusion [9].

A general diagram of pyramid-based image fusion can be seen in Fig. 4. The source images I1 and I2 are blurred using a linear filter and down-sampled by a factor of two along the rows and columns. This process can be formulated as:
$\{CP_{i+1}\}^{1} = \left[\{CP_{i}(x,y)\}^{1} * w(x,y)\right]_{\downarrow 2}$ (4)

$\{CP_{i+1}\}^{2} = \left[\{CP_{i}(x,y)\}^{2} * w(x,y)\right]_{\downarrow 2}$ (5)

$i = 0, 1, 2, \ldots, N.$

where $\{CP_{i+1}\}^{1}$ represents the sub-image obtained from the pyramid decomposition of source image I1 at level $i+1$, which depends on the previous-level sub-image $\{CP_{i}(x,y)\}^{1}$, and $\{CP_{0}\}^{1}$ represents the input image I1. The convolution operation is denoted by $*$, $w$ is a linear filter, and $N$ is the number of levels.
The same concept also applies to the source image I2. Various fusion rules can be applied to these sub-image decompositions to obtain the fused sub-images $\{CP_{i}\}^{F}$ at the various levels $i$. The pyramid is then reconstructed from these fused sub-images to obtain the fused image F.
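A minimal sketch of the decomposition in Eqs. (4)-(5) is given below using OpenCV, where cv2.pyrDown applies a fixed 5x5 Gaussian kernel as the linear filter w(x, y) before down-sampling by two; this particular filter choice is an assumption for illustration.

import cv2

def pyramid_decomposition(img, levels):
    """Build the sub-images {CP_0, ..., CP_N} of Eqs. (4)-(5) for one source image."""
    pyramid = [img]  # CP_0 is the input image
    for _ in range(levels):
        # Blur with a 5x5 Gaussian kernel and down-sample rows and columns by 2.
        pyramid.append(cv2.pyrDown(pyramid[-1]))
    return pyramid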
The Laplacian and Gaussian transforms are commonly used in image fusion with this pyramid method, so one of the image fusion models is called the Laplacian pyramid method. The Laplacian pyramid applies a "pattern selective" image fusion approach, so that the composite image is constructed from selected patterns rather than one pixel at a time. The basic idea is to perform a pyramid decomposition of each source image, then integrate all these decompositions to form a composite representation, and finally reconstruct the fused image by performing the inverse pyramid transform [10].
The first step is to build a pyramid for each source image. Fusion is then implemented at each level of the pyramid using a feature-selection decision mechanism. The feature-selection method selects the most prominent patterns from the sources and copies them to the composite pyramid, while discarding the less salient patterns. In this way, each location in the composite is taken from the source image in which it is most clearly represented. The salient components are selected based on the following equation:
$F_{i}(x,y) = \begin{cases} A_{i}(x,y), & \text{if } |A_{i}(x,y)| > |B_{i}(x,y)| \\ B_{i}(x,y), & \text{otherwise} \end{cases}$ (6)
where A and B are the source images, F is the fused image, and $0 \le i \le N-1$.
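Below is a minimal sketch of this Laplacian pyramid fusion using OpenCV and NumPy; the number of levels and the averaging of the coarsest Gaussian level are illustrative assumptions, while the detail levels are merged with the maximum-absolute selection rule of Eq. (6).

import cv2
import numpy as np

def laplacian_pyramid(img, levels):
    """Laplacian pyramid: each level is a Gaussian level minus its upsampled successor."""
    gauss = [img.astype(np.float32)]
    for _ in range(levels):
        gauss.append(cv2.pyrDown(gauss[-1]))
    lap = []
    for i in range(levels):
        h, w = gauss[i].shape[:2]
        lap.append(gauss[i] - cv2.pyrUp(gauss[i + 1], dstsize=(w, h)))
    lap.append(gauss[-1])  # keep the coarsest Gaussian level as the base
    return lap

def laplacian_fuse(img_a, img_b, levels=4):
    """Fuse two equal-size grayscale images with the selection rule of Eq. (6)."""
    lap_a = laplacian_pyramid(img_a, levels)
    lap_b = laplacian_pyramid(img_b, levels)
    fused = [np.where(np.abs(a) > np.abs(b), a, b)
             for a, b in zip(lap_a[:-1], lap_b[:-1])]
    fused.append(0.5 * (lap_a[-1] + lap_b[-1]))  # average the base level
    # Reconstruct F: start from the coarsest level and add the details back.
    out = fused[-1]
    for detail in reversed(fused[:-1]):
        h, w = detail.shape[:2]
        out = cv2.pyrUp(out, dstsize=(w, h)) + detail
    return out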

EXPERIMENTS AND RESULTS

In this study we used a large set of tomographic scattering data from the Institut Fresnel database. The data were then reconstructed into images using the Born approximation. The Born approximation is one of the best-known and most commonly used methods for imaging from waves scattered onto the detector plane. This method assumes that the target is a weakly scattering object and is generally used in inversion methods. The total field at the detector can be expressed by the following equation:
Ψ(𝑟) = Ψ𝑖𝑛𝑐 (𝑟) + Ψ𝑠 (𝑟) (7)
where $\Psi_{inc}$ is the incident field and $\Psi_s$ is the scattered field. By applying the Fourier relation between the target V(r) and the scattering amplitude $\Psi_s^{BA}(k\hat{r}, k\hat{r}_{inc})$ that can be calculated at the detector, the equation can be written as follows [2]:
$\Psi_s^{BA}(r,\hat{r}_{inc}) = \frac{e^{i(kr+\pi/4)}}{\sqrt{8\pi k r}}\, k^2 \int_{D} V(r')\, e^{-ik(\hat{r}-\hat{r}_{inc})\cdot r'}\, dr'$ (8)

If Eq. (8) is rearranged, the equation becomes:


$\int_{D} V(r')\, e^{-ik(\hat{r}-\hat{r}_{inc})\cdot r'}\, dr' = \frac{\sqrt{8\pi k r}\, e^{-i(kr+\pi/4)}}{k^2}\, \Psi_s^{BA}(r,\hat{r}_{inc})$ (9)
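For concreteness, a minimal numerical sketch of evaluating the scattering amplitude of Eq. (8) on a discretized contrast map V(r') is given below; the grid, detector distance, and unit-vector inputs are illustrative assumptions, and this is not the inversion chain actually applied to the Fresnel data.

import numpy as np

def born_scattering_amplitude(V, xs, ys, k, r_det, r_hat, r_hat_inc):
    """Discretized evaluation of Eq. (8).

    V         : 2-D array sampling the contrast function V(r') on a grid
    xs, ys    : 1-D coordinate axes (in metres) of that grid
    k         : wavenumber of the illumination
    r_det     : distance from the origin to the detector
    r_hat     : unit vector towards the observation direction
    r_hat_inc : unit vector of the incident direction
    """
    X, Y = np.meshgrid(xs, ys, indexing="xy")
    dA = (xs[1] - xs[0]) * (ys[1] - ys[0])        # area element dr'
    q = k * (np.asarray(r_hat) - np.asarray(r_hat_inc))
    phase = np.exp(-1j * (q[0] * X + q[1] * Y))   # exp(-ik(r_hat - r_hat_inc) . r')
    integral = np.sum(V * phase) * dA             # integral over the domain D
    prefactor = np.exp(1j * (k * r_det + np.pi / 4)) / np.sqrt(8 * np.pi * k * r_det)
    return prefactor * k**2 * integral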
After obtaining the reconstructed images, we computed the Structural Similarity Index Measure (SSIM), which measures the similarity and degradation between the ground truth $I_G$ and the fused image $I_F$. Mathematically, the SSIM can be determined by Eq. (10), which defines a map between $I_F$ and $I_G$:
$SSIM_{map}(I_F, I_G) = \frac{\sigma_{I_F I_G}}{\sigma_{I_F}\sigma_{I_G}} \cdot \frac{2\mu_{I_F}\mu_{I_G}}{\mu_{I_F}^2 + \mu_{I_G}^2} \cdot \frac{2\sigma_{I_F}\sigma_{I_G}}{\sigma_{I_F}^2 + \sigma_{I_G}^2}$ (10)

where $\sigma_{I_F I_G}$ is the block-wise covariance between $I_F$ and $I_G$, $\mu_{I_F}$ is the mean, and $\sigma_{I_F}$ is the standard deviation of $I_F$.
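As a reference, a minimal sketch of computing this metric with scikit-image is shown below; the library evaluates the SSIM map over a sliding window (7x7 by default) and averages it, and it adds small stabilizing constants that are omitted from Eq. (10).

import numpy as np
from skimage.metrics import structural_similarity

def ssim_score(fused, ground_truth):
    """Mean SSIM between a fused image I_F and the ground truth I_G, cf. Eq. (10)."""
    fused = fused.astype(np.float64)
    ground_truth = ground_truth.astype(np.float64)
    data_range = ground_truth.max() - ground_truth.min()
    # structural_similarity computes the block-wise SSIM map and returns its mean.
    return structural_similarity(fused, ground_truth, data_range=data_range)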
The ground truth images that we used as $I_G$ are shown in Fig. 5.

FIGURE 5. (a) FoamDielExt, (b) FoamDielInt, (c) FoamTwinDiel, and (d) FoamMetExt.

FoamDielExt consists of a foam cylinder (SAITEC SBF 300, Ԑr = 1.45) with a diameter of 80 mm and a plastic (betylon) cylinder (Ԑr = 3.0) with a diameter of 31 mm, where the plastic cylinder is positioned outside the foam cylinder. FoamDielInt consists of a foam cylinder (SAITEC SBF 300, Ԑr = 1.45) with a diameter of 80 mm and a plastic (betylon) cylinder (Ԑr = 3.0) with a diameter of 31 mm, where the plastic cylinder is positioned inside the foam cylinder. FoamTwinDiel consists of a foam cylinder (SAITEC SBF 300, Ԑr = 1.45) with a diameter of 80 mm and plastic (betylon) cylinders (Ԑr = 3.0) with a diameter of 31 mm, positioned outside and inside the foam cylinder. FoamMetExt consists of a foam cylinder (SAITEC SBF 300, Ԑr = 1.45) with a diameter of 80 mm and a copper cylinder with a diameter of 28.5 mm, where the copper cylinder is positioned outside the foam cylinder.
There are two measurement modes for each object arrangement in the available ground truth images: Transverse Electric (TE) mode and Transverse Magnetic (TM) mode, because two-dimensional inverse scattering data are used.
TABLE 1. FoamDielExt reconstruction results using the Born approximation and SSIM values before the image fusion process.

FoamDielExt
TM Mode     SSIM      TE Mode     SSIM
3 GHz       0.653     3 GHz       0.663
5 GHz       0.66      4 GHz       0.656
9 GHz       0.625     9 GHz       0.663
10 GHz      0.608     10 GHz      0.626

Table 1 shows the result of the Born approximation applied to the FoamDielExt scattering data for the TE and TM modes. In TE mode, the images reconstructed from the 3, 4, 9, and 10 GHz scattering data were selected because, based on their visualization, they represent parts of the ground truth image used, and because their SSIM values are higher than those of the other reconstructions. The same applies to the selection of images in TM mode.
TABLE 2. Results of the FoamDielExt image fusion process using the Wavelet, Pyramid, and Gauss-Laplacian methods.

Method               TM Mode SSIM    TE Mode SSIM
Pyramid              0.699           0.714
Wavelet              0.69            0.699
Gaussian-Laplacian   0.743           0.748

Based on Table 2, it can be seen that the SSIM values increase after image fusion. The largest increase is obtained with the Gauss-Laplacian method, with SSIM values of 0.743 for TM mode and 0.748 for TE mode.
TABLE 3. FoamDielInt reconstruction results using the Born approximation and SSIM values before the image fusion process.

FoamDielInt
TM Mode     SSIM      TE Mode     SSIM
4 GHz       0.627     3 GHz       0.595
5 GHz       0.592     4 GHz       0.583
8 GHz       0.598     9 GHz       0.569
9 GHz       0.554     10 GHz      0.562

Table 3 shows the result of the Born approximation applied to the FoamDielInt scattering data for the TE and TM modes. In TM mode, the images reconstructed from the 4, 5, 8, and 9 GHz scattering data were selected because, based on their visualization, they represent parts of the ground truth image used, and because their SSIM values are higher than those of the other reconstructions. The same applies to the selection of images in TE mode.
In TM mode, the 4 GHz and 5 GHz reconstructions capture the plastic cylinder quite well, while at 8 GHz and 9 GHz the foam cylinder is imaged quite well.
TABLE 4. Results of the FoamDielInt image fusion process using the Wavelet, Pyramid, and Gauss-Laplacian methods.

FoamDielInt
Method               TM Mode SSIM    TE Mode SSIM
Pyramid              0.644           0.652
Wavelet              0.646           0.656
Gaussian-Laplacian   0.68            0.693

Based on Table 4, it can be seen that the SSIM values increase after image fusion. The largest increase is obtained with the Gauss-Laplacian method, with SSIM values of 0.68 for TM mode and 0.693 for TE mode.
TABLE 5. TwinDielExt reconstruction results using the Born approximation and SSIM values before the image fusion process.

TwinDielExt
TM Mode     SSIM      TE Mode     SSIM
3 GHz       0.676     3 GHz       0.672
4 GHz       0.695     4 GHz       0.688
9 GHz       0.669     9 GHz       0.669
10 GHz      0.648     10 GHz      0.651

Table 5 shows the result of the Born approximation applied to the TwinDielExt scattering data for the TE and TM modes. However, the scattering produced by the TwinDielExt object is poor, so the reconstruction results do not show a good image. This may occur because there are too many objects, so the scattering reflected from one object to another is affected by considerable noise. The selection of images for the TwinDielExt object is therefore based only on the highest SSIM values compared to the others. In this case the images reconstructed from the 3, 4, 9, and 10 GHz scattering data were selected for TM mode, and likewise for TE mode.
TABLE 6. Results of the TwinDielExt image fusion process using the Wavelet, Pyramid, and Gauss-Laplacian methods.

Method               TM Mode SSIM    TE Mode SSIM
Pyramid              0.717           0.718
Wavelet              0.702           0.707
Gaussian-Laplacian   0.751           0.746

Although the reconstruction results are not visually good, the SSIM values increase after the image fusion process. In particular, the Gauss-Laplacian method produces the highest SSIM values of 0.751 for TM mode and 0.746 for TE mode.
TABLE 7. MetDielExt reconstruction results using the Born approximation and SSIM values before the image fusion process.

MetDielExt
TM Mode     SSIM
4 GHz       0.656
5 GHz       0.673
8 GHz       0.684
10 GHz      0.659

For the MetDielExt object, only TM mode samples are used because the TE mode reconstructions do not display the copper cylinder well enough. In TM mode, the 4 GHz and 5 GHz reconstructions are selected because they visualize the copper cylinder quite well and have good SSIM values; likewise, the scattering interaction with the foam cylinder is captured quite well at 8 GHz and 10 GHz.
TABLE 8. Results of the MetDielExt image fusion process using the Wavelet, Pyramid, and Gauss-Laplacian methods.

Method               TM Mode SSIM
Pyramid              0.729
Wavelet              0.696
Gaussian-Laplacian   0.764

Based on Table 8, it can be seen that the SSIM value increases after image fusion. The largest increase is obtained with the Gauss-Laplacian method, with an SSIM value of 0.764 for TM mode.

CONCLUSION

Based on the SSIM calculations, it can be concluded that combining two images with different information produces a better image, although visually the fused image still looks abstract and does not yet have the best possible quality. This is because the reconstructions of several images using the Born approximation still contain anomalies.
Although it is still visually difficult to identify the shapes of the objects in the tomography results, the SSIM values obtained show an increase in image quality. The Gauss-Laplacian method produces the image with the best SSIM value, followed by the pyramid method and finally the wavelet method.
The results of this study are still not optimal because the image fusion process was carried out by trial and error on several parameters. This tuning process remains a subject for discussion so that it can be eliminated in further research.

REFERENCES

[1] M. OstadRahimi, K. Nemez, A. Zakaria, J. LoVetri, L. Shafai and S. Pistorius, "A novel microwave
tomography system for breast imaging based on the modulated scattering technique," USNC-URSI Radio
Science Meeting (Joint with AP-S Symposium), no. 14, p. 54, 2014.
[2] S. Siburian, S. Siburian and P. Prajitno, "Image Fusion-based Multi-frequency Microwave Tomography,"
International Conference on Applied Information Technology and Innovation, no. 19, pp. 220-226, 2019.
[3] S. Singh, N. Mittal and H. Singh, "Review of Various Image Fusion Algorithms and Image Fusion
Performance Metric," in Archives of Computational Methods in Engineering , Barcelona, 2021.
[4] V. Sahu and D. Sahu, "Image Fusion using Wavelet Transform: A Review," Global Journal of Computer
Science and Technology: Graphics & Vision, vol. 14, no. 5, 2014.
[5] W. Wang and F. Chang, "A Multi-focus Image Fusion Method Based on Laplacian Pyramid," Journal of
Computer, vol. 6, no. 12, pp. 2559-2566, 2011.
[6] M. Q, L. J and X. P, "Image fusion based on NSCT and Bandelet transform," in Proceedings of the 2012 8th
International Conference on Computational Intelligence and Security, Guangzhou, 2012.
[7] H. Mitchell, Image Fusion: Theories, Techniques and Applications, Berlin: Springer, 2010.
[8] "MathWorks," The MathWorks, Inc., [Online]. Available: https://www.mathworks.com/discovery/wavelet-
transforms.html. [Accessed 22 03 2021].
[9] G. Xiao, D. P. Bavirisetti, G. Liu and X. Zhang, Image Fusion, Shanghai: Springer, 2020.
[10] M. Pradeep, "Implementation of Image Fusion algorithm using MATLAB (LAPLACIAN PYRAMID)," IEEE,
vol. 13, p. 165, 2013.
