Fusion of Infrared and Visible Sensor Images Based on Anisotropic Diffusion and Karhunen-Loeve Transform

Durga Prasad Bavirisetti, Ravindra Dhuli

Durga Prasad Bavirisetti and Ravindra Dhuli are with the School of Electronics Engineering, VIT University, Vellore, 632014 India. E-mail: bdps1989@gmail.com, ravindradhuli@vit.ac.in.

Abstract: Image fusion is the process of generating a more informative image from a set of source images. Major applications of image fusion are in navigation and the military, where infrared and visible sensors are used to capture complementary images of the targeted scene. The complementary information in these source images has to be integrated into a single image using a fusion algorithm. The aim of any fusion method is to transfer the maximum amount of information from the source images to the fused image with minimum information loss, while keeping artifacts in the fused image to a minimum. In this context, we propose a new edge-preserving image fusion method for infrared and visible sensor images. Anisotropic diffusion is used to decompose the source images into approximation and detail layers. The final detail and approximation layers are computed with the help of the Karhunen-Loeve (KL) transform and weighted linear superposition, respectively, and the fused image is generated from the linear combination of the final detail and approximation layers. Performance of the proposed algorithm is assessed with the help of Petrovic metrics, and its results are compared with traditional and recent image fusion algorithms. The results reveal that the proposed method outperforms the existing methods.

Index Terms: Anisotropic diffusion, KL-transform, edge preserving, imaging sensors.

I. INTRODUCTION

Single-sensor image capture is insufficient to provide complete information about a targeted scene because of sensor system limitations. Multiple captures using a single sensor or multiple sensors are therefore essential to obtain detailed knowledge about the scene, and the information from these several captures should be integrated into a single image. Image fusion is the process of merging useful information from several captures of a particular scene into a single image, such that the fused image provides more knowledge about the scene than the individual source captures. Digital photography [24], remote sensing [2], concealed weapon detection [5], helicopter navigation aid [22], medical imaging, navigation and military applications [11], [14], [9], [15], [8] are among the areas in need of image fusion.
In all these applications, image fusion is used to extract information that is not available in the individual source images. In particular, IR-visible image fusion plays a vital role in military and navigation applications, for example in accurate target identification. Under bad weather conditions, after rain or during winter, the images captured using visible sensors alone are not sufficient to describe a situation. Visible sensors capture background details such as vegetation and soil, whereas infrared sensors provide information about the foreground, such as weapons, enemies and vehicle movements. For the detection and localization of a target and to improve situational awareness, information from both infrared and visible images needs to be fused into a single image.
Several fusion methods have been proposed in the literature. Among them, multi-scale decomposition methods (pyramid, wavelet) [16], [22] and data-driven methods [13], [12] are the most successful. However, these methods may introduce artifacts into the fused image. To overcome these problems, optimization-based fusion schemes [24], [25] have been proposed. These methods take multiple iterations to find the optimal solution (the fused image) and may over-smooth the fused image because of the multiple iterations.
In addition, edge-preserving image fusion schemes are becoming popular. These methods use an edge-preserving smoothing filter or process for the purpose of fusion. Popular methods in this class are based on the guided image filter [10], the weighted least squares filter [3], [6], the bilateral filter [20], the cross bilateral filter [8] and 3-D anisotropic diffusion [7]. Most of these methods decompose each source image into base and detail layers; the manipulated base layer, the manipulated detail layer, or both are then combined to obtain the fused image. Bilateral filter and cross bilateral filter fusion methods produce gradient reversal artifacts in the fused image, whereas guided image fusion produces halo effects.
An efficient image fusion scheme should possess three properties:
1) It has to transfer most of the useful information from the source images into the fused image.
2) It should not lose useful information of the source imagery in the fusion process.
3) It should not introduce artifacts or extra information into the fused image.
A new anisotropic diffusion based image fusion (ADF) method using the KL-transform is proposed to address the problems of the existing methods while keeping the above properties in mind. Each source image is filtered using the anisotropic diffusion process to extract base and detail layers, and useful information from the base and detail layers is integrated into the fused image.

This method is very effective and easy to implement, and it has the following advantages:



- It transfers most of the information from the source images to the fused image.
- Fusion loss is very low.
- Fusion artifacts introduced in the fused image are almost negligible.
- Computation time is low.

The remainder of this paper is structured as follows. Section II briefly reviews the preliminaries. Section III presents the proposed method. Section IV deals with the experimental setup. Section V describes the results and analysis. Section VI concludes the paper.

Fig. 1. Proposed ADF method: the source images are decomposed by anisotropic diffusion into base and detail layers; the detail layers are fused using the KL-transform, the base layers are fused by weighted superposition, and the two results are linearly combined into the fused image.

II. ANISOTROPIC DIFFUSION
The anisotropic diffusion process [17] smooths a given image in homogeneous regions while preserving the non-homogeneous regions (edges), using partial differential equations (PDE). It overcomes the drawbacks of isotropic diffusion: isotropic diffusion performs inter-region smoothing, so edge information is lost. In contrast, anisotropic diffusion uses intra-region smoothing to generate coarser resolution images, and at each coarser resolution the edges remain sharp and meaningful.
The anisotropic diffusion equation uses a flux function to control the diffusion of an image I as
$$I_t = c(x, y, t)\,\Delta I + \nabla c \cdot \nabla I, \qquad (1)$$

where $c(x, y, t)$ is the flux function or rate of diffusion, $\Delta$ is the Laplacian operator, $\nabla$ is the gradient operator, and $t$ denotes time, scale or iteration.
We can also regard (1) as a heat equation. The forward-time-central-space (FTCS) scheme is used to solve this equation, and the solution of the PDE is

$$I_{i,j}^{t+1} = I_{i,j}^{t} + \lambda \left[ c_N \cdot \nabla_N I + c_S \cdot \nabla_S I + c_E \cdot \nabla_E I + c_W \cdot \nabla_W I \right]_{i,j}^{t}. \qquad (2)$$

In (2), $I_{i,j}^{t+1}$ is the coarser resolution image at scale $t+1$, which depends on the previous coarser scale image $I_{i,j}^{t}$. $\lambda$ is a stability constant satisfying $0 \le \lambda \le 1/4$. $\nabla_N$, $\nabla_S$, $\nabla_E$ and $\nabla_W$ are the nearest-neighbor differences in the north, south, east and west directions respectively. They are defined as

$$\nabla_N I_{i,j} \equiv I_{i-1,j} - I_{i,j}, \quad \nabla_S I_{i,j} \equiv I_{i+1,j} - I_{i,j}, \quad \nabla_E I_{i,j} \equiv I_{i,j+1} - I_{i,j}, \quad \nabla_W I_{i,j} \equiv I_{i,j-1} - I_{i,j}. \qquad (3)$$


Similarly, $c_N$, $c_S$, $c_E$ and $c_W$ are the conduction coefficients or flux functions in the north, south, east and west directions:

$$c_{N_{i,j}}^{t} = g(\nabla I)_{i+1/2,j}^{t} = g\left(\left|\nabla_N I_{i,j}^{t}\right|\right), \quad c_{S_{i,j}}^{t} = g(\nabla I)_{i-1/2,j}^{t} = g\left(\left|\nabla_S I_{i,j}^{t}\right|\right),$$
$$c_{E_{i,j}}^{t} = g(\nabla I)_{i,j+1/2}^{t} = g\left(\left|\nabla_E I_{i,j}^{t}\right|\right), \quad c_{W_{i,j}}^{t} = g(\nabla I)_{i,j-1/2}^{t} = g\left(\left|\nabla_W I_{i,j}^{t}\right|\right). \qquad (4)$$

In (4), $g(\cdot)$ is a monotonically decreasing function with $g(0) = 1$. Different functions can be used for $g(\cdot)$, but Perona and Malik [17] suggested the two functions below:

$$g(\nabla I) = e^{-\left(\|\nabla I\| / k\right)^{2}}, \qquad (5)$$

$$g(\nabla I) = \frac{1}{1 + \left(\|\nabla I\| / k\right)^{2}}. \qquad (6)$$

These functions offer a trade-off between smoothing and edge preservation. The first function is useful if the image contains high-contrast edges rather than low-contrast edges, while the second is useful if the image contains wide regions rather than small regions. Both functions contain a free parameter k, which is used to decide the validity of a region boundary based on its edge strength. Anisotropic diffusion applied to a given image I is denoted aniso(I).
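To make the diffusion process concrete, the following is a minimal Python/NumPy sketch of the iteration in (2)-(6). It assumes a grayscale image stored as a 2-D float array with replicated borders; the function name aniso and the argument names (t for the number of iterations, lam for the stability constant, k for the edge threshold) mirror the notation above, but the implementation is our illustration rather than the authors' code.

import numpy as np

def aniso(img, t=10, lam=0.15, k=30, option=1):
    """Perona-Malik anisotropic diffusion, sketching (2)-(6).

    img    : 2-D float array (grayscale image)
    t      : number of iterations (scale)
    lam    : stability constant, 0 <= lam <= 1/4
    k      : edge-strength threshold used in g(.)
    option : 1 -> exponential g in (5), 2 -> rational g in (6)
    """
    I = np.asarray(img, dtype=np.float64).copy()
    for _ in range(t):
        # nearest-neighbor differences (3), borders replicated by edge padding
        P = np.pad(I, 1, mode='edge')
        dN = P[:-2, 1:-1] - I   # I[i-1, j] - I[i, j]
        dS = P[2:, 1:-1] - I    # I[i+1, j] - I[i, j]
        dE = P[1:-1, 2:] - I    # I[i, j+1] - I[i, j]
        dW = P[1:-1, :-2] - I   # I[i, j-1] - I[i, j]
        # conduction coefficients (4) computed from g in (5) or (6)
        if option == 1:
            cN, cS, cE, cW = (np.exp(-(d / k) ** 2) for d in (dN, dS, dE, dW))
        else:
            cN, cS, cE, cW = (1.0 / (1.0 + (d / k) ** 2) for d in (dN, dS, dE, dW))
        # FTCS update (2)
        I = I + lam * (cN * dN + cS * dS + cE * dE + cW * dW)
    return I

Calling aniso(I, t=10, lam=0.15, k=30) corresponds to the parameter setting reported as optimal in Section IV.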
III. METHODOLOGY

The ADF methodology is shown in Fig. 1. The steps are listed below and explained in detail in the following subsections.
(A) Extract the base and detail layers from the source images using anisotropic diffusion.
(B) Fuse the detail layers based on the KL-transform.
(C) Fuse the base layers using weighted superposition.
(D) Superimpose the final detail and base layers.
A. Extracting the base and detail layers from source images using anisotropic diffusion

Let the source images $\{I_n(x, y)\}_{n=1}^{N}$ be of size $p \times q$ and co-registered. These images are passed through the edge-preserving anisotropic diffusion process to obtain the base layers

$$B_n(x, y) = \text{aniso}\left(I_n(x, y)\right), \qquad (7)$$

where $B_n(x, y)$ is the $n$-th base layer and $\text{aniso}(I_n(x, y))$ denotes the anisotropic diffusion process applied to the $n$-th source image (see Section II for details). The detail layers are obtained by subtracting the base layers from the source images:

$$D_n(x, y) = I_n(x, y) - B_n(x, y). \qquad (8)$$

The base and detail layer decomposition for the IR-visible data set is shown in Fig. 2.

Fig. 2. (a) IR image. (b) Visible image. (c) Base layer of IR image. (d) Base layer of visible image. (e) Detail layer of IR image. (f) Detail layer of visible image.
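Under the same assumptions, the decomposition in (7)-(8) reduces to one call per source image. The helper below is illustrative only and reuses the hypothetical aniso function sketched in Section II.

def decompose(sources, t=10, lam=0.15, k=30):
    """Split each co-registered source image into a base layer (7) and a detail layer (8)."""
    bases = [aniso(I, t=t, lam=lam, k=k) for I in sources]   # B_n = aniso(I_n)
    details = [I - B for I, B in zip(sources, bases)]        # D_n = I_n - B_n
    return bases, details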

B. Detail layer fusion based on KL-transform

The detail layers are fused with the help of the KL-transform [4]. It transforms correlated components into uncorrelated components and gives a compact representation of the given data set. Unlike the FFT and DCT, the KL-transform basis vectors depend on the data set. The KL-transform algorithm used for detail layer fusion is outlined below.
(1) The ADF algorithm can be applied to N input images. For simplicity, consider two detail layers $D_1(x, y)$ and $D_2(x, y)$ corresponding to two input images $I_1(x, y)$ and $I_2(x, y)$, and arrange these detail layers as the column vectors of a matrix X.
(2) Find the covariance matrix $C_{XX}$ of X, treating each row as an observation and each column as a variable.
(3) Calculate the eigenvalues $\lambda_1$, $\lambda_2$ and the eigenvectors $\xi_1 = [\xi_1(1), \xi_1(2)]^{T}$ and $\xi_2 = [\xi_2(1), \xi_2(2)]^{T}$ of $C_{XX}$.
(4) Compute the uncorrelated components $KL_1$ and $KL_2$ corresponding to the larger eigenvalue $\lambda_{\max} = \max(\lambda_1, \lambda_2)$. If $\xi_{\max}$ is the eigenvector corresponding to $\lambda_{\max}$, then $KL_1$ and $KL_2$ are given by

$$KL_1 = \frac{\xi_{\max}(1)}{\sum_i \xi_{\max}(i)}, \qquad KL_2 = \frac{\xi_{\max}(2)}{\sum_i \xi_{\max}(i)}. \qquad (9)$$

(5) Finally, the fused detail layer D is given by

$$D(x, y) = KL_1 D_1(x, y) + KL_2 D_2(x, y). \qquad (10)$$

(6) The generalized expression for N detail layers is

$$D(x, y) = \sum_{n=1}^{N} KL_n D_n(x, y). \qquad (11)$$
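A compact NumPy sketch of steps (1)-(6) is given below. It follows (9)-(11) as reconstructed above; the names fuse_details and kl_weights are our own, and the normalization assumes the components of the dominant eigenvector do not sum to zero.

import numpy as np

def fuse_details(details):
    """Fuse detail layers with KL-transform weights, following (9)-(11)."""
    # (1) arrange each detail layer as one column of X
    X = np.stack([d.ravel() for d in details], axis=1)   # shape (p*q, N)
    # (2) covariance with rows as observations, columns as variables
    C = np.cov(X, rowvar=False)
    # (3)-(4) eigenvector of the largest eigenvalue, normalized so the weights sum to 1
    eigvals, eigvecs = np.linalg.eigh(C)
    v = eigvecs[:, np.argmax(eigvals)]
    kl_weights = v / v.sum()
    # (5)-(6) weighted superposition of the detail layers
    D = sum(w * d for w, d in zip(kl_weights, details))
    return D, kl_weights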

C. Base layer fusion using weighted superposition

Here, the required base layer information in each source image is selected by assigning proper weights $w_n$. The fusion of the base layers is computed as

$$B(x, y) = \sum_{n=1}^{N} w_n B_n(x, y), \qquad (12)$$

where $\sum_n w_n = 1$ and $0 \le w_n \le 1$. If $w_1 = w_2 = \dots = w_N = 1/N$ for all n, this process reduces to the average of the base layers. The final detail layer (11) obtained using the KL-transform and the final base layer (12) obtained using weighted superposition for the IR-visible images are shown in Fig. 3.
D. Superposition of final detail and base layers

The fused image F is given by a simple linear combination of the final base layer B and detail layer D:

$$F = B + D. \qquad (13)$$
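The base layer fusion in (12) and the final superposition in (13) complete the pipeline. The end-to-end sketch below again builds on the hypothetical helpers defined earlier and defaults to equal weights, which matches the w1 = w2 = 0.5 setting used in Section IV for two source images.

def adf_fuse(sources, weights=None, t=10, lam=0.15, k=30):
    """End-to-end ADF sketch: decompose (7)-(8), fuse details (9)-(11),
    fuse bases (12) and superimpose (13)."""
    bases, details = decompose(sources, t=t, lam=lam, k=k)
    if weights is None:
        weights = [1.0 / len(sources)] * len(sources)   # e.g. 0.5 and 0.5 for two images
    B = sum(w * b for w, b in zip(weights, bases))      # base layer fusion (12)
    D, _ = fuse_details(details)                        # detail layer fusion (11)
    return B + D                                        # fused image (13)

For the IR-visible pair, adf_fuse([ir, vis]) would then realize the equal-weight configuration used in the experiments.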

IV. EXPERIMENTAL SETUP

This section describes the image database, the image fusion algorithms used for comparison, the fusion metrics used to assess fused image quality, and the effect of the free parameters on the proposed algorithm.

A. Image database

Fig. 3. (a) Final detail layer (11), (b) final base layer (12) of the IR-visible image data set.

The ADF image fusion method can be applied to various infrared and visible images. However, two example data sets (infrared (IR)-visible and millimeter wave (MMW)-visible) are presented in this paper. These example data sets are available at [1] and are extensively used in the literature [5], [23], [9].


Fig. 4. Analysis of $Q^{XY/F}$, $L^{XY/F}$, $N^{XY/F}$ and $N_k^{XY/F}$ as k changes, for the IR-visible and MMW-visible data sets.

Fig. 5. Analysis of $Q^{XY/F}$, $L^{XY/F}$, $N^{XY/F}$ and $N_k^{XY/F}$ as t changes, for the IR-visible and MMW-visible data sets.

Fig. 6. Analysis of $Q^{XY/F}$, $L^{XY/F}$, $N^{XY/F}$ and $N_k^{XY/F}$ as $\lambda$ changes, for the IR-visible and MMW-visible data sets.

B. Methods for comparison

The ADF method is compared with the transform domain techniques in [9], [21], the singular value decomposition based technique [15], and the recently proposed edge-preserving fusion techniques [8], [10]. Default parameter settings are used for all of these methods. For the proposed ADF, the $g(\cdot)$ function in (5) is used, and weights $w_1 = w_2 = 0.5$ are taken for the base layer fusion.


Fig. 7. (a) IR image. (b) Visible image. (c) [9]. (d) [21]. (e) [15]. (f) [8]. (g) [10]. (h) ADF method.

Fig. 8. Fusion score $Q^{XY/F}$, fusion loss $L^{XY/F}$, fusion artifacts $N^{XY/F}$ and modified fusion artifacts $N_k^{XY/F}$ analysis of various fusion algorithms for the IR-visible data set.

Fig. 9. (a) Visible image. (b) MMW image. (c) [9]. (d) [21]. (e) [15]. (f) [8]. (g) [10]. (h) ADF method.


C. Fusion metrics

Gradient-based objective fusion metrics, termed Petrovic metrics, are considered for an in-depth evaluation of the proposed ADF method.


Fig. 10. Fusion score $Q^{XY/F}$, fusion loss $L^{XY/F}$, fusion artifacts $N^{XY/F}$ and modified fusion artifacts $N_k^{XY/F}$ analysis of various fusion algorithms for the MMW-visible data set.
TABLE I
SUMMATION OF PETROVIC METRICS (14), (15) FOR IR AND VISIBLE IMAGES.

Metric    [9]      [21]     [15]     [8]      [10]     ADF
(14)      1.0323   1.0093   1.0088   1.2293   1.0077   1.0042
(15)      1.0000   1.0000   1.0000   1.0000   1.0000   1.0000

TABLE II
SUMMATION OF PETROVIC METRICS (14), (15) FOR MMW AND VISIBLE IMAGES.

Metric    [9]      [21]     [15]     [8]      [10]     ADF
(14)      1.0226   1.0017   1.0010   1.0340   1.0020   1.0003
(15)      1.0000   1.0000   1.0000   1.0000   1.0000   1.0000

If X and Y are the source images and F is the fused image, quantitative analysis is done by considering the information contribution from each sensor: the fusion gain or fusion score $Q^{XY/F}$, the fusion loss $L^{XY/F}$ and the fusion artifacts $N^{XY/F}$. The metrics $Q^{XY/F}$, $L^{XY/F}$ and $N^{XY/F}$ represent complementary information, so their summation should be unity:

$$Q^{XY/F} + L^{XY/F} + N^{XY/F} = 1. \qquad (14)$$

However, this is not always satisfied, so the modified fusion artifacts $N_k^{XY/F}$ are considered to make the summation unity:

$$Q^{XY/F} + L^{XY/F} + N_k^{XY/F} = 1. \qquad (15)$$

For more details about these fusion metrics one may refer to [23], [19], [18]. Note that for better performance the $Q^{XY/F}$ value should be high and the $L^{XY/F}$, $N^{XY/F}$ and $N_k^{XY/F}$ values should be low.
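As a purely hypothetical numerical illustration of (14) and (15) (these values are not taken from the paper's experiments): if a fused result yields $Q^{XY/F} = 0.55$ and $L^{XY/F} = 0.40$ while the measured fusion artifacts are $N^{XY/F} = 0.08$, the sum in (14) is 1.03 rather than 1; enforcing (15) instead gives modified fusion artifacts $N_k^{XY/F} = 1 - 0.55 - 0.40 = 0.05$.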
D. Effect of free parameters on the ADF method

Here, the effect of the free parameters on the performance of the ADF method is evaluated with the help of the Petrovic fusion metrics. In this method, each source image is filtered using the anisotropic diffusion process, and the degree of smoothing depends on the parameters k, t and $\lambda$. While inspecting the effect of k we set t = 10 and $\lambda$ = 0.15; when examining the influence of t, the remaining parameters are taken as $\lambda$ = 0.15 and k = 30; similarly, for $\lambda$, the free parameters are set to t = 10 and k = 30. The effect of k on the ADF algorithm for both data sets (IR-visible, MMW-visible) is illustrated in Fig. 4, which shows the behavior of $Q^{XY/F}$, $L^{XY/F}$, $N^{XY/F}$ and $N_k^{XY/F}$ as k changes. We observe that $Q^{XY/F}$ is almost constant after k = 20 for both image data sets, and the same observation holds for $L^{XY/F}$. The metrics $N^{XY/F}$ and $N_k^{XY/F}$ are almost constant for any value of k. A similar analysis is done for t: in Fig. 5, as the number of iterations t increases, $Q^{XY/F}$, $L^{XY/F}$, $N^{XY/F}$ and $N_k^{XY/F}$ change up to t = 10, after which they are almost constant. Similarly, the effect of $\lambda$ is demonstrated in Fig. 6; above $\lambda$ = 0.1 these metrics show consistent performance. We observe optimal performance of the proposed method for t = 10, $\lambda$ = 0.15 and k = 30.

V. RESULTS AND ANALYSIS

Here, a comparative analysis of various image fusion algorithms and the ADF algorithm on two image data sets is carried out in terms of visual quality and fusion metrics. Fig. 7 compares the visual quality of the ADF method with the existing methods. Fig. 7a and Fig. 7b are the IR and visible images of a scene, respectively. These images contain information about a seashore, men, and a ship at sea. Neither the IR nor the visible image alone provides complete information about the scene; through fusion one can attempt to perceive the complete scene. The method in [9] produces blocking artifacts in the fused image (Fig. 7c). The fused images of [21] and [15] (Fig. 7d and Fig. 7e) are visually good but do not provide all the details of the scene. The fusion scheme in [8] produces gradient reversal artifacts (Fig. 7f) and [10] introduces halo effects (Fig. 7g). It can be observed that the ADF method (Fig. 7h) provides more information with fewer artifacts than the existing methods [9] (Fig. 7c), [21] (Fig. 7d), [15] (Fig. 7e), [8] (Fig. 7f) and [10] (Fig. 7g).
Fig. 8a-8d and Fig. 10a-10d illustrate the bar chart comparison of the various image fusion methods for the IR-visible and MMW-visible data sets. As demonstrated in Fig. 8a-8d, our method gives superior performance in all fusion metrics for the IR-visible data set. The metrics summation in (14) and the modified summation in (15) are tabulated in Table I for reference.


TABLE III
COMPUTATIONAL TIME IN SECONDS OF DIFFERENT IMAGE FUSION ALGORITHMS ON BOTH IR-VISIBLE AND MMW-VISIBLE DATA SETS.

Data set       [9]      [21]      [15]     [10]     [8]       ADF
IR-visible     1.3868   0.29026   0.2504   0.2840   15.3090   0.6629
MMW-visible    2.4166   0.3572    0.4149   0.5643   31.4187   1.2446

The fused images of the various methods and the ADF method for the second data set are shown in Fig. 9. Fig. 9a and Fig. 9b are the images captured using visible and MMW imaging technologies. The visible image conveys information about three persons, with the middle person holding an object; the MMW image conveys information about the concealed weapon inside the shirt of the third person from the left. Information from both the visible and MMW images needs to appear in the fused image. Blocking effects can be observed in Fig. 9c of [9]. The fused images of [21] and [15] (Fig. 9d and Fig. 9e) are not visually good. Fusion artifacts can be observed in Fig. 9f of [8], and there is no object information in Fig. 9g of [10]. The ADF method gives a good quality fused image (Fig. 9h) compared with the existing methods [9] (Fig. 9c), [21] (Fig. 9d), [15] (Fig. 9e), [8] (Fig. 9f) and [10] (Fig. 9g).
Fig. 10a-10d display the $Q^{XY/F}$, $L^{XY/F}$, $N^{XY/F}$ and $N_k^{XY/F}$ metric performance for the MMW-visible image data set. The metrics summation in (14) and the modified summation in (15) are tabulated in Table II. For this case as well, ADF gives good performance in all metrics.
A. Computational Time

A comparison of the computational time of the various image fusion methods for the IR-visible data set of size 320×322 and the MMW-visible data set of size 256×200 is shown in Table III. The experiments are carried out on a computer with 4 GB RAM and a 2.27 GHz CPU. Each data set is processed 25 times and the average of the 25 computational times is reported for better accuracy. The average computational time of ADF is less than that of [9] and [8], and more than that of the methods [21], [15] and [10].
It is found that our ADF method gives superior results to the transform domain and edge-preserving methods for IR-visible images in terms of visual quality and Petrovic metrics, with a computational time low enough to be considered for real-time implementation.
VI. CONCLUSION

A new edge-preserving image fusion method is proposed for IR-visible images. First, each image is separated into base and detail layers with the help of anisotropic diffusion. The fusion process is then applied to the base and detail layers: the detail layers are fused using the KL-transform and the base layers are fused using a weighted average. The performance of the ADF method is compared with various image fusion algorithms using Petrovic fusion metrics, and it is observed that our method outperforms the existing methods.


REFERENCES
[1] www.imagefusion.org.
[2] Shaohui Chen, Renhua Zhang, Hongbo Su, Jing Tian, and Jun Xia. SAR and multispectral image fusion using generalized IHS transform based on à trous wavelet and EMD decompositions. IEEE Sensors Journal, 10(3):737–745, 2010.
[3] Zeev Farbman, Raanan Fattal, Dani Lischinski, and Richard Szeliski. Edge-preserving decompositions for multi-scale tone and detail manipulation. In ACM Transactions on Graphics (TOG), volume 27, page 67. ACM, 2008.
[4] Rafael C. Gonzalez. Digital Image Processing. Pearson Education India, 2009.
[5] A. Jameel, A. Ghafoor, and M. M. Riaz. Adaptive compressive fusion for visible/IR sensors. IEEE Sensors Journal, 14(7):2230–2231, July 2014.
[6] Yong Jiang and Minghui Wang. Image fusion using multiscale edge-preserving decomposition based on weighted least squares filter. IET Image Processing, 8(3):183–190, 2014.
[7] Fatih Kahraman, C. Deniz Mendi, and M. Gokmen. Image frame fusion using 3D anisotropic diffusion. In 23rd International Symposium on Computer and Information Sciences, ISCIS'08, pages 1–6. IEEE, 2008.
[8] B. K. Shreyamsha Kumar. Image fusion based on pixel significance using cross bilateral filter. Signal, Image and Video Processing, pages 1–12, 2013.
[9] B. K. Shreyamsha Kumar. Multifocus and multispectral image fusion based on pixel significance using discrete cosine harmonic wavelet transform. Signal, Image and Video Processing, 7(6):1125–1143, 2013.
[10] Shutao Li, Xudong Kang, and Jianwen Hu. Image fusion with guided filtering. IEEE Transactions on Image Processing, 22(7):2864–2875, 2013.
[11] Shutao Li and Bin Yang. Hybrid multiresolution method for multisensor multimodal image fusion. IEEE Sensors Journal, 10(9):1519–1526, 2010.
[12] Junli Liang, Yang He, Ding Liu, and Xianju Zeng. Image fusion using higher order singular value decomposition. IEEE Transactions on Image Processing, 21(5):2898–2909, 2012.
[13] David Looney and Danilo P. Mandic. Multiscale image fusion using complex extensions of EMD. IEEE Transactions on Signal Processing, 57(4):1626–1630, 2009.
[14] Nikolaos Mitianoudis and Tania Stathaki. Optimal contrast correction for ICA-based fusion of multimodal images. IEEE Sensors Journal, 8(12):2016–2026, 2008.
[15] V. P. S. Naidu. Image fusion technique using multi-resolution singular value decomposition. Defence Science Journal, 61(5):479–484, 2011.
[16] Gonzalo Pajares and Jesus Manuel De La Cruz. A wavelet-based image fusion tutorial. Pattern Recognition, 37(9):1855–1872, 2004.
[17] Pietro Perona and Jitendra Malik. Scale-space and edge detection using anisotropic diffusion. IEEE Transactions on Pattern Analysis and Machine Intelligence, 12(7):629–639, 1990.
[18] Vladimir Petrovic. Multisensor pixel-level image fusion. University of Manchester, 2001.
[19] Vladimir Petrovic and Costas Xydeas. Objective image fusion performance characterisation. In Tenth IEEE International Conference on Computer Vision, ICCV 2005, volume 2, pages 1866–1871. IEEE, 2005.
[20] Georg Petschnigg, Richard Szeliski, Maneesh Agrawala, Michael Cohen, Hugues Hoppe, and Kentaro Toyama. Digital photography with flash and no-flash image pairs. ACM Transactions on Graphics (TOG), 23(3):664–672, 2004.
[21] Oliver Rockinger. Image sequence fusion using a shift-invariant wavelet transform. In International Conference on Image Processing, volume 3, pages 288–291. IEEE, 1997.
[22] Oliver Rockinger. Multiresolution-Verfahren zur Fusion dynamischer Bildfolgen. dissertation.de, 1999.
[23] Parul Shah, Shabbir N. Merchant, and Uday B. Desai. Multifocus and multispectral image fusion based on pixel significance using multiresolution decomposition. Signal, Image and Video Processing, 7(1):95–109, 2013.


[24] Rui Shen, Irene Cheng, Jianbo Shi, and Anup Basu. Generalized random walks for fusion of multi-exposure images. IEEE Transactions on Image Processing, 20(12):3634–3646, 2011.
[25] Min Xu, Hao Chen, and Pramod K. Varshney. An image fusion approach based on Markov random fields. IEEE Transactions on Geoscience and Remote Sensing, 49(12):5116–5127, 2011.

Durga Prasad Bavirisetti received the B.Tech degree in electronics and communication engineering from Jawaharlal Nehru Technological University, Kakinada, India, in 2010, and the M.Tech degree in communication engineering from VIT University, India, in 2012. At present, he is pursuing the Ph.D. degree in the School of Electronics Engineering, VIT University, India. His research interests are in signal and image processing.

Dr. Ravindra Dhuli was born in Tuni, Andhra Pradesh, India. He received the B.Tech degree in electronics and communication engineering from Bapatla Engineering College, Andhra Pradesh, India, in 2005, and the Ph.D. degree in signal processing from the Department of Electrical Engineering, Indian Institute of Technology Delhi, in 2010. He is currently an associate professor with the School of Electronics Engineering, VIT University, India. His research interests include multirate signal processing, statistical signal processing, image processing and mathematical modeling.
