Fourth International Workshop on Advanced Computational Intelligence

Wuhan, Hubei, China; October 19-21, 2011

Research of Multi-Focus Image Fusion Based on M-Band Multi-Wavelet Transformation

Haozheng Ren, Yihua Lan, and Yong Zhang

Manuscript received June 9, 2010. This work was supported in part by the JiangSu Natural Science Foundation under Grant BK20082140, by the Huaihai Institute of Technology Natural Science Foundation under Grant Z2009013, and by the Key Disciplines in Computer Application Construction Foundation of JiangSu.
Haozheng Ren is with the School of Computer Engineering, Huaihai Institute of Technology, Lianyungang 222005, China (e-mail: HaozhengRen@163.com).
Yihua Lan is with the School of Computer Engineering, Huaihai Institute of Technology, Lianyungang 222005, China (corresponding author; phone: 18986188662; e-mail: lanhua_2000@sina.com.cn).
Yong Zhang is with the School of Computer Engineering, Huaihai Institute of Technology, Lianyungang 222005, China (e-mail: Zhangy@163.com).

Abstract—Image fusion is an important branch of data fusion. Its purpose is to synthesize the information carried by several images of one scene into a single image that is better suited to human vision and computer vision, or better adapted to further image processing such as target identification. This paper mainly discusses image fusion methods based on the wavelet transform. First, the paper gives the basic concept of multi-focus image fusion. On this basis, it presents the theory of wavelet analysis and its fast algorithms, and then the image fusion method based on the single wavelet. Starting from the single wavelet, the paper introduces several improved wavelets, namely the multi-wavelet and the multi-band (M-band) multi-wavelet, including their theory and their decomposition and reconstruction algorithms, and applies the multi-band multi-wavelet to image fusion within the usual wavelet fusion framework. For the choice of fusion operators, the paper compares pixel-based, window-based and region-based methods, and adopts a fusion rule based on gradient and regional-energy feature measurements. It also compares fused images obtained with different fusion rules and different wavelets in terms of entropy, peak signal-to-noise ratio, root mean square error and standard error.

Using Matlab as the experimental platform, we verified the feasibility and validity of the proposed method through extensive experiments. The results indicate that the multi-band multi-wavelet is very effective for image fusion. Furthermore, the paper applies post-processing to the fused image, based on an anisotropic diffusion algorithm built on partial differential equations. The experiments show that selective edge diffusion applied to the fused image increases its PSNR and suppresses the blocking artifacts caused by the wavelet fusion method.

I. INTRODUCTION

At present, imaging systems can generally be divided into visible-light imaging systems and non-visible-light imaging systems. Typical non-visible-light imaging systems, such as infrared imaging systems, work in general with radar, other electromagnetic waves or infrared light, whose wavelengths are relatively long; over most of a wide-angle view such a system can produce an image that is clear everywhere. For visible light, whose wavelength is short, it is very difficult for the imaging system to obtain a wide-angle view that is clear in all parts of the image [1].

Therefore, when a visible-light imaging system is used to take a photograph, its focusing range over a wide scene is limited, and we can only bring our regions of interest into focus, that is, aim the optical focus at them. Those regions of interest can indeed be rendered very clearly, but if the imaged area contains more than one region of interest and the objects to be observed lie at different imaging depths, the visible-light imaging system is powerless. Objects at different depths from the optical lens are imaged at different image distances because their object distances differ. If an object close to the lens is brought into accurate focus, that near object appears clear on the photographic image plane, while the image of an object far from the lens necessarily forms in front of the image plane, so its picture on the sensor becomes blurred. By the same principle, if the lens is accurately focused on distant objects, those objects appear clear on the image plane, but the image of an object close to the lens necessarily forms behind the image plane, and its picture on the image plane is again blurred. Therefore, when the field of view contains two or more targets to be photographed and we want an image of the scene in which all targets are clear, we generally shoot with separate focus settings and then use an image fusion method to obtain an image in which every required region of interest is clear [2].

We can first focus on a particular target and obtain a clear picture of it, then focus on the next target and obtain a second clear picture, then focus again on a third target and obtain a third clear picture, and so on. Shooting in this way, we obtain a sequence of multi-focus images that observe the same region but differ in the regions that are in focus. Digital image processing methods are then used to fuse this sequence of multi-focus images, and we finally obtain a satisfactory result.

Multi-focus image fusion can combine the captured image information at several different levels of representation during the fusion process [3], [4]; in general it can be divided into three levels: pixel-level fusion, feature-level fusion and decision-level fusion. Pixel-level fusion is the most basic fusion method [5], [6].



The first step is feature detection on the multi-focus images and region matching; the characteristics of the images are then extracted and the corresponding features of the images are selected, so that finally an image of high information content is obtained from the images delivered by a number of sensors. In the entire fusion process, the fusion algorithm may work on the gray-level information of the image, on RGB color information, or on other representations such as HSI color information; the images obtained from the different sensors are combined into a single image that is both clear and rich in pixel color information [7].
II. METHODS

With its further development and expanding use in engineering, multi-band (M-band) wavelet analysis offers a wider choice of wavelet types, retaining many of the good features of the classical wavelet while removing some of its drawbacks. Therefore the multi-band wavelet deserves in-depth study and discussion. Here we give a brief introduction to the orthogonal M-band wavelet transform.

Let $\{V_j\}$ be the multi-resolution spaces, $\varphi(x)$ the scaling function and $\{\psi^s(x),\ 1 \le s \le M-1\}$ the M-band wavelet functions, with

$$\varphi_{j,k}(x) = M^{j/2}\,\varphi(M^j x - k), \qquad (1)$$

$$\psi^s_{j,k}(x) = M^{j/2}\,\psi^s(M^j x - k), \quad 1 \le s \le M-1. \qquad (2)$$

Assume $f_{j+1} \in V_{j+1}$ is the orthogonal projection of $f(x)$ onto the space $V_{j+1}$, $f_{j+1}(x) = \sum_{k\in\mathbb{Z}} a_{j+1,k}\,\varphi_{j+1,k}(x)$; hence

$$f_{j+1} = f_j + g_j = f_j + \sum_{s=1}^{M-1} g^s_j .$$

The decomposition algorithm can be expressed as follows. With

$$\langle \varphi_{j+1,k}, \varphi_{j,l}\rangle = c_{k-Ml}, \qquad \langle \varphi_{j+1,k}, \psi^s_{j,l}\rangle = d^s_{k-Ml}, \quad 1 \le s \le M-1,$$

and

$$f_j = \sum_{k\in\mathbb{Z}} a_{j,k}\,\varphi_{j,k}(x), \qquad g^s_j = \sum_{k\in\mathbb{Z}} b^s_{j,k}\,\psi^s_{j,k}(x), \quad 1 \le s \le M-1,$$

the decomposition is

$$a_{j,k} = \sum_{n\in\mathbb{Z}} c_{n-Mk}\, a_{j+1,n}, \qquad b^s_{j,k} = \sum_{n\in\mathbb{Z}} d^s_{n-Mk}\, a_{j+1,n}, \quad 1 \le s \le M-1 .$$

The corresponding reconstruction algorithm is

$$a_{j+1,k} = \sum_{n\in\mathbb{Z}} \Bigl( c_{k-Mn}\, a_{j,n} + \sum_{s=1}^{M-1} d^s_{k-Mn}\, b^s_{j,n} \Bigr).$$

For $f(x,y) \in L^2(\mathbb{R}^2)$, the M-band wavelet decomposition and reconstruction are

$$a_{j,k,l} = \sum_m\sum_n c_{m-Mk}\, c_{n-Ml}\, a_{j+1,m,n},$$

$$b^{t,s}_{j,k,l} = \begin{cases} \sum_m\sum_n c_{m-Mk}\, d^s_{n-Ml}\, a_{j+1,m,n}, & t = 0,\ 1 \le s \le M-1,\\ \sum_m\sum_n d^t_{m-Mk}\, c_{n-Ml}\, a_{j+1,m,n}, & 1 \le t \le M-1,\ s = 0,\\ \sum_m\sum_n d^t_{m-Mk}\, d^s_{n-Ml}\, a_{j+1,m,n}, & 1 \le t,s \le M-1, \end{cases} \qquad j = 0,1,2,\dots,$$

and

$$a_{j+1,k,l} = \sum_m\sum_n c_{k-Mm}\, c_{l-Mn}\, a_{j,m,n} + \sum_{\substack{t,s=0\\ s+t\neq 0}}^{M-1} \sum_m\sum_n d^t_{k-Mm}\, d^s_{l-Mn}\, b^{t,s}_{j,m,n},$$

with the convention $d^0 = c$.
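To make the scalar M-band analysis and synthesis equations concrete, the following NumPy sketch implements one level of the 1-D transform for a periodically extended signal; the 2-D transform in the paper is the separable application of the same filter bank along rows and then columns. The filter coefficients are supplied by the caller (the paper does not list its M-band filters), and the 4-band example below uses normalized Hadamard rows purely as an illustrative orthogonal filter bank, not the authors' filters.

```python
import numpy as np

def mband_analysis(a, filters):
    """One level of M-band analysis on a periodic 1-D signal.

    a       : approximation coefficients a_{j+1,n}, length divisible by M
    filters : list [c, d^1, ..., d^{M-1}] of equal-length 1-D filters
    Returns : list [a_j, b^1_j, ..., b^{M-1}_j], each of length len(a) // M,
              following a_{j,k} = sum_n c_{n-Mk} a_{j+1,n} (likewise with d^s).
    """
    M, L = len(filters), len(a)
    K = L // M
    out = [np.zeros(K) for _ in filters]
    for band, f in enumerate(filters):
        for k in range(K):
            # n = M*k + m, with the signal extended periodically
            out[band][k] = sum(f[m] * a[(M * k + m) % L] for m in range(len(f)))
    return out

def mband_synthesis(bands, filters, L):
    """One level of M-band synthesis:
    a_{j+1,k} = sum_n (c_{k-Mn} a_{j,n} + sum_s d^s_{k-Mn} b^s_{j,n})."""
    M = len(filters)
    a = np.zeros(L)
    for band, f in enumerate(filters):
        for n in range(L // M):
            for m in range(len(f)):          # k = M*n + m, so k - M*n = m
                a[(M * n + m) % L] += f[m] * bands[band][n]
    return a

if __name__ == "__main__":
    # Illustrative orthogonal 4-band bank: normalized rows of a 4x4 Hadamard matrix.
    H = np.array([[1, 1, 1, 1],
                  [1, -1, 1, -1],
                  [1, 1, -1, -1],
                  [1, -1, -1, 1]]) / 2.0
    filters = [H[i] for i in range(4)]
    x = np.random.rand(64)
    bands = mband_analysis(x, filters)
    print(np.allclose(x, mband_synthesis(bands, filters, len(x))))  # True for an orthogonal bank
```

For the 2-D case the same routines would be applied to every row and then to every column of the resulting coefficient arrays, which is exactly what the tensor-product formulas above express.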
The vector wavelet (multiwavelet) was first proposed by Goodman et al., and constructions of vector wavelets were developed later. In the following we first describe the vector wavelet multi-resolution analysis. Let

$$\Phi(x) = (\phi_1(x), \phi_2(x), \dots, \phi_N(x))^T \in L^2(\mathbb{R})^N, \qquad \phi_l(x) \in L^2(\mathbb{R}),\ l = 1,\dots,N,$$

and define the multi-resolution analysis spaces

$$V_j = \overline{\operatorname{span}}\{ M^{j/2}\phi_l(M^j x - k) : 1 \le l \le N,\ k \in \mathbb{Z} \}.$$

If $\{V_j\}$ satisfies the following conditions:

(1) $\dots \subset V_{-1} \subset V_0 \subset V_1 \subset \dots$;

(2) $\overline{\bigcup_{j\in\mathbb{Z}} V_j} = L^2(\mathbb{R})$ and $\bigcap_{j\in\mathbb{Z}} V_j = \{0\}$;

(3) $f(x) \in V_j \Leftrightarrow f(Mx) \in V_{j+1}$ for all $j \in \mathbb{Z}$;

(4) $\{\phi_l(\cdot - k) : 1 \le l \le N,\ k \in \mathbb{Z}\}$ is a Riesz basis, i.e., for every $C = \{c_k\}_{k\in\mathbb{Z}} \in l^2(\mathbb{Z})^N$,

$$A\,\|C\|^2_{l^2(\mathbb{Z})^N} \;\le\; \Bigl\| \sum_{k\in\mathbb{Z}} \sum_{l=1}^N c_k^l\, \phi_l(\cdot - k) \Bigr\|^2_{L^2(\mathbb{R})} \;\le\; B\,\|C\|^2_{l^2(\mathbb{Z})^N}, \qquad 0 < A \le B < \infty,$$

then we say that the vector scaling function $\Phi(x)$ generates a vector wavelet multi-resolution approximation. If there exist matrix sequences $\{H_k\}, \{G_k^{(i)}\} \in l^2(\mathbb{Z})^{N\times N}$ satisfying

$$\Phi(x) = \sum_{k\in\mathbb{Z}} H_k\, \Phi(Mx - k),$$

then the vector wavelets are

$$\Psi^{(i)}(x) = \sum_{k\in\mathbb{Z}} G_k^{(i)}\, \Phi(Mx - k), \qquad i = 1,\dots,M-1,$$

where

$$\phi_{l,j,k}(x) = M^{j/2}\phi_l(M^j x - k), \qquad \psi^{(i)}_{l,j,k}(x) = M^{j/2}\psi^{(i)}_l(M^j x - k).$$

The vector (multi-)wavelet decomposition is

$$C_{j,k} = \langle f(x), \Phi_{j,k}\rangle = \frac{1}{M}\sum_n H_n\, C_{j+1,Mk+n}, \qquad D^{(i)}_{j,k} = \langle f(x), \Psi^{(i)}_{j,k}\rangle = \frac{1}{M}\sum_n G^{(i)}_n\, C_{j+1,Mk+n},$$

and the reconstruction is

$$C_{j+1,k} = \frac{1}{M}\sum_n \Bigl( H_n\, C_{j,Mn+k} + \sum_{i=1}^{M-1} G^{(i)}_n\, D^{(i)}_{j,Mn+k} \Bigr).$$
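As a small illustration of the matrix-filter form of this decomposition, the sketch below applies $C_{j,k} = \frac{1}{M}\sum_n H_n C_{j+1,Mk+n}$ and $D^{(i)}_{j,k} = \frac{1}{M}\sum_n G^{(i)}_n C_{j+1,Mk+n}$ to a periodic sequence of length-$N$ coefficient vectors. The matrix filters `H` and `G` are caller-supplied placeholders (the paper does not list its multi-wavelet filter banks), so this is only a sketch of the computation pattern, not the authors' implementation.

```python
import numpy as np

def vector_wavelet_decompose(C, H, G, M=2):
    """One level of vector (multi-)wavelet decomposition.

    C : array of shape (L, N), the fine-scale vectors C_{j+1,n}, L divisible by M
    H : array of shape (T, N, N), the matrix low-pass filters H_0..H_{T-1}
    G : list of M-1 arrays of shape (T, N, N), the matrix filters G^{(i)}_n
    Returns (C_coarse, [D^(1), ..., D^(M-1)]), each of shape (L // M, N).
    """
    L, N = C.shape
    K, T = L // M, H.shape[0]
    C_coarse = np.zeros((K, N))
    D = [np.zeros((K, N)) for _ in G]
    for k in range(K):
        for n in range(T):
            v = C[(M * k + n) % L]          # periodic extension of the coefficient sequence
            C_coarse[k] += H[n] @ v / M
            for i, Gi in enumerate(G):
                D[i][k] += Gi[n] @ v / M
    return C_coarse, D

if __name__ == "__main__":
    # Example call with placeholder 2x2 matrix filters (not the GHM filters).
    rng = np.random.default_rng(0)
    H = rng.standard_normal((4, 2, 2))
    G = [rng.standard_normal((4, 2, 2))]
    C_fine = rng.standard_normal((32, 2))
    C_coarse, details = vector_wavelet_decompose(C_fine, H, G, M=2)
    print(C_coarse.shape, details[0].shape)   # (16, 2) (16, 2)
```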

We use the pixel-level image fusion method; based on pixel-level multi-focus image fusion, the basic steps are as follows.

Step 1. All source images A and B to be fused are first pre-filtered. This pre-filtering decomposes each source image into 16 sub-images; the upper-left one is the sub-image carrying the low-frequency signal, and the other 15 are sub-images carrying high-frequency signals.

Step 2. In the two groups of images, each of the 16 sub-images is decomposed by the vector wavelet with multiplicity 2. After that, 64 blocks of low-band and high-band coefficients are obtained, i.e., for each source image there are 64 decomposed image blocks. Among these blocks, the 4 in the upper left are low-frequency blocks and will be fused with one fusion rule; the remaining 60 blocks are high-frequency blocks and will be fused with another fusion rule.

Step 3. The 64 coefficient blocks from the different source images are fused with the rules described in Step 2, yielding 64 new coefficient blocks. Finally, an inverse wavelet transform is performed on these new coefficients, and the fused image is thus constructed.

Step 4. Based on the results of Step 2, we determine the pixels that need post-filtering.

Step 5. We apply an anisotropic diffusion algorithm based on partial differential equations to the fused image in order to eliminate the blocking effects. After this diffusion process, the final result image is obtained. The post-processing method is described later in the paper.
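The following sketch shows the overall shape of Steps 1-3 using an ordinary discrete wavelet transform from PyWavelets in place of the paper's 4-band pre-filter and multiplicity-2 vector wavelet (which are not reproduced here): the approximation coefficients of the two source images are averaged, the detail coefficients are fused with a choose-max-absolute rule, and the inverse transform gives the fused image. It is a generic wavelet-fusion sketch under those substitutions, not the authors' exact algorithm.

```python
import numpy as np
import pywt

def wavelet_fuse(img_a, img_b, wavelet="db1", level=2):
    """Fuse two registered grayscale multi-focus images in the wavelet domain.

    Low-frequency (approximation) bands are averaged; high-frequency (detail)
    bands are fused by keeping, at each position, the coefficient with the
    larger absolute value, which tends to retain the in-focus structure.
    """
    ca = pywt.wavedec2(img_a.astype(float), wavelet, level=level)
    cb = pywt.wavedec2(img_b.astype(float), wavelet, level=level)

    fused = [(ca[0] + cb[0]) / 2.0]                       # approximation: average
    for (ha, va, da), (hb, vb, db) in zip(ca[1:], cb[1:]):
        fused.append(tuple(np.where(np.abs(a) >= np.abs(b), a, b)
                           for a, b in ((ha, hb), (va, vb), (da, db))))
    return pywt.waverec2(fused, wavelet)

# Example call (hypothetical file names):
# import imageio.v3 as iio
# fused = wavelet_fuse(iio.imread("focus_left.png"), iio.imread("focus_right.png"))
```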

Fig. 1. Image fusion based on our wavelet decomposition and reconstruction method.

The commonly used evaluation criteria for image fusion results include subjective evaluation and objective evaluation. Subjective evaluation relies on subjective scoring by a human observer who judges the fusion effect: the clearer the image, the lower the distortion of its spectrum. The objective evaluation criteria can mainly be expressed as follows:

1) Root mean square error:

$$\mathrm{RMSE} = \sqrt{\frac{1}{MN}\sum_{i=1}^{M}\sum_{j=1}^{N}\bigl[R(i,j) - F(i,j)\bigr]^2}$$

2) Mean absolute error:

$$\mathrm{MAE} = \frac{1}{MN}\sum_{i=1}^{M}\sum_{j=1}^{N}\bigl|R(i,j) - F(i,j)\bigr|$$

3) Peak signal-to-noise ratio:

$$\mathrm{PSNR} = 10\,\lg\frac{255 \times 255}{\frac{1}{MN}\sum_{i=1}^{M}\sum_{j=1}^{N}\bigl[R(i,j) - F(i,j)\bigr]^2}$$

where $R$ is the reference image, $F$ is the fused image, and $M \times N$ is the image size.
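These three criteria translate directly into code; a minimal NumPy version, assuming 8-bit images so that the PSNR peak value is 255, is:

```python
import numpy as np

def rmse(ref, fused):
    """Root mean square error between reference R and fused image F."""
    diff = ref.astype(float) - fused.astype(float)
    return np.sqrt(np.mean(diff ** 2))

def mae(ref, fused):
    """Mean absolute error between reference R and fused image F."""
    return np.mean(np.abs(ref.astype(float) - fused.astype(float)))

def psnr(ref, fused):
    """Peak signal-to-noise ratio in dB, with a peak value of 255."""
    mse = np.mean((ref.astype(float) - fused.astype(float)) ** 2)
    return 10.0 * np.log10(255.0 ** 2 / mse)
```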
III. EXPERIMENT RESULTS

In order to compare the fused image quality obtained with different wavelet decompositions, we use three images for the experiments: the Lena image, the Cameraman image, and the Michael & Lincoln image. A 7 × 7 Gaussian blur template with standard deviation 5 and high intensity is used to simulate the focusing process and to generate, for each test image, one image focused on the right side and one focused on the left side. We compare the single wavelets 'db1' and 'coif3', a four-band wavelet, the vector wavelet ('ghm' multiwavelet), and our method. The results are shown in Figs. 2-3 and Tables I-II. The experimental results show that the quality of the fused image based on the multi-band vector wavelet is much better.
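A possible way to generate such test pairs, assuming the blur is applied to one half of a sharp reference image at a time (the exact masking used by the authors is not specified), is sketched below with an explicit 7 × 7 Gaussian template of standard deviation 5:

```python
import numpy as np
from scipy.ndimage import convolve

def gaussian_kernel(size=7, sigma=5.0):
    """Normalized size x size Gaussian template."""
    r = np.arange(size) - size // 2
    g = np.exp(-(r[:, None] ** 2 + r[None, :] ** 2) / (2.0 * sigma ** 2))
    return g / g.sum()

def make_multifocus_pair(img, size=7, sigma=5.0):
    """Blur one half of a sharp image at a time to simulate two focus settings."""
    blurred = convolve(img.astype(float), gaussian_kernel(size, sigma), mode="reflect")
    half = img.shape[1] // 2
    focus_left = img.astype(float).copy()
    focus_left[:, half:] = blurred[:, half:]    # right half defocused -> focused on the left
    focus_right = img.astype(float).copy()
    focus_right[:, :half] = blurred[:, :half]   # left half defocused -> focused on the right
    return focus_left, focus_right
```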

Fig. 2. Comparison of the fusion for the Cameraman image.

Fig. 3. Comparison of the fusion for the Michael & Lincoln image.

TABLE I
FUSION RESULTS COMPARISON FOR THE CAMERAMAN IMAGE

Methods             PSNR      MAE      RMSE
Wavelet 'db1'       30.8243   2.1935   7.3337
Wavelet 'coif3'     30.6240   2.5575   7.5048
Four-band wavelet   33.2384   2.2877   5.5542
Vector wavelet      31.8081   2.0539   6.5484
Our method          34.4235   1.4433   4.8458

TABLE II
FUSION RESULTS COMPARISON FOR THE MICHAEL & LINCOLN IMAGE

Methods             PSNR      MAE      RMSE
Wavelet 'db1'       35.4921   1.7018   4.2849
Wavelet 'coif3'     35.6863   1.8517   4.1901
Four-band wavelet   37.0199   1.8088   3.5937
Vector wavelet      35.9927   1.5836   4.0449
Our method          39.0494   1.0891   2.8449

IV. DISCUSSION

Compared with a single-wavelet (wavelet packet) decomposition, which needs three levels of decomposition of the original image to obtain 64 sub-images of the same size, one low-frequency sub-image and 63 high-frequency sub-images, the quaternary (four-band) double-vector decomposition generates four low-frequency sub-images and 60 high-frequency sub-images. The multi-band multi-wavelet image fusion algorithm is illustrated in Fig. 1, and its results are shown in Figs. 2-3 and Tables I-II. First, the multi-focus images A and B are each decomposed by the pre-filtering step into 16 sub-images, where the sub-image in the upper left corner carries the low-frequency signal and the remaining 15 sub-images carry high-frequency signals. Then these 16 sub-images are further transformed: each sub-image is treated by the double-vector (multiplicity-2) decomposition to form its decomposition coefficients, so that the decomposition result consists of 64 coefficient blocks, of which the four blocks in the upper left corner are low-frequency signals and share one fusion rule, while the remaining 60 blocks are high-frequency coefficients and share the other fusion rule. After fusing these into 64 new coefficient blocks, we can apply the inverse wavelet transform and, after post-filtering, form the final fused image. The multi-vector wavelet decomposition could also be carried out over multiple layers, but as the decomposition level increases the amount of computation grows greatly, which is unsuitable for applications requiring real-time processing, and a higher decomposition level also causes larger losses of information in the top-level fusion; hence, with the quaternary double decomposition, a single level of decomposition already achieves very good results for small images.
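The paper does not spell out its PDE-based post-filter; a common choice with the stated properties (smoothing flat regions and block boundaries while preserving strong edges) is Perona-Malik anisotropic diffusion, sketched below as one plausible form of the post-processing step. Parameters such as `kappa`, `gamma` and the number of iterations are illustrative, not values taken from the paper.

```python
import numpy as np

def anisotropic_diffusion(img, n_iter=10, kappa=30.0, gamma=0.15):
    """Perona-Malik anisotropic diffusion (4-neighbour explicit scheme).

    Differences across strong edges (|grad| >> kappa) get a small conduction
    coefficient and are preserved; weak variations such as blocking artifacts
    are smoothed out. gamma <= 0.25 keeps the explicit scheme stable.
    """
    u = img.astype(float).copy()
    for _ in range(n_iter):
        # differences to the four neighbours (periodic border via np.roll,
        # adequate for illustration)
        dn = np.roll(u, 1, axis=0) - u
        ds = np.roll(u, -1, axis=0) - u
        de = np.roll(u, -1, axis=1) - u
        dw = np.roll(u, 1, axis=1) - u
        # conduction coefficients g(|grad|) = exp(-(|grad| / kappa)^2)
        cn = np.exp(-(dn / kappa) ** 2)
        cs = np.exp(-(ds / kappa) ** 2)
        ce = np.exp(-(de / kappa) ** 2)
        cw = np.exp(-(dw / kappa) ** 2)
        u += gamma * (cn * dn + cs * ds + ce * de + cw * dw)
    return u
```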

V. CONCLUSION

The wavelet transform (WT) is a very popular tool in pixel-level image fusion schemes. This paper provides a novel multi-focus image fusion method based on a multi-band vector wavelet decomposition and reconstruction algorithm. Several WT-based image fusion methods and the proposed wavelet-based multi-focus image fusion method were evaluated on three images. The experimental results show that the proposed technique provides better performance than the other fusion methods.

REFERENCES

[1] H. Li, "Multisensor image fusion using the wavelet transform," Graphical Models and Image Processing, vol. 57, pp. 235-245, 1995.
[2] Z. Zhang and R. S. Blum, "A categorization of multiscale-decomposition-based image fusion schemes with a performance study for a digital camera application," Proceedings of the IEEE, vol. 87, pp. 1315-1326, 1999.
[3] M. Choi, "A new intensity-hue-saturation fusion approach to image fusion with a tradeoff parameter," IEEE Transactions on Geoscience and Remote Sensing, vol. 44, pp. 1672-1682, 2006.
[4] X. Otazu, "Introduction of sensor spectral response into image fusion methods. Application to wavelet-based methods," IEEE Transactions on Geoscience and Remote Sensing, vol. 43, pp. 2376-2385, 2005.
[5] Z. Wang, "A comparative analysis of image fusion methods," IEEE Transactions on Geoscience and Remote Sensing, vol. 43, pp. 1391-1402, 2005.
[6] H. Chen and P. K. Varshney, "A human perception inspired quality metric for image fusion based on regional information," Information Fusion, vol. 8, pp. 193-207, 2007.
