
International Journal of Remote Sensing

ISSN: (Print) (Online) Journal homepage: https://www.tandfonline.com/loi/tres20

A Multi-Objective Enhanced Fruit Fly Optimization


(MO-EFOA) Framework for Despeckling SAR Images
using DTCWT based Local Adaptive Thresholding

Bibek Kumar, Ranjeet Kumar Ranjan & Arshad Husain

To cite this article: Bibek Kumar, Ranjeet Kumar Ranjan & Arshad Husain (2021) A Multi-
Objective Enhanced Fruit Fly Optimization (MO-EFOA) Framework for Despeckling SAR Images
using DTCWT based Local Adaptive Thresholding, International Journal of Remote Sensing, 42:14,
5497-5518, DOI: 10.1080/01431161.2021.1921875

To link to this article: https://doi.org/10.1080/01431161.2021.1921875

Published online: 06 May 2021.

INTERNATIONAL JOURNAL OF REMOTE SENSING
2021, VOL. 42, NO. 14, 5497–5518
https://doi.org/10.1080/01431161.2021.1921875

A Multi-Objective Enhanced Fruit Fly Optimization


(MO-EFOA) Framework for Despeckling SAR Images using
DTCWT based Local Adaptive Thresholding
Bibek Kumar , Ranjeet Kumar Ranjan and Arshad Husain
School of Computing, DIT University, Dehradun, India

ABSTRACT
ARTICLE HISTORY: Received 13 October 2020; Accepted 5 March 2021

The importance of Synthetic Aperture Radar (SAR) imagery systems is increasing day by day in various fields such as earth observation, high-technology war mechanisms, etc. The images captured by SAR imagery systems are mainly used to detect and classify objects captured in the images. Due to the complexity of the image capturing process, SAR images can be highly noisy, often containing multiplicative noise, also known as speckle. To detect or classify objects in SAR images, speckle noise must be removed from the images. During the despeckling process, preserving important information in the SAR images, such as edges or patterns, while removing noise is a crucial task. Despeckling methods are liable to compromise edge preservation ability while aspiring for good quality denoised images. Many researchers have proposed wavelet transform based despeckling approaches such as the Discrete Wavelet Transform (DWT), the Undecimated Wavelet Transform (UDWT) and the Dual Tree Complex Wavelet Transform (DTCWT). In these approaches, finding the best values of the coefficients plays an important role in yielding excellent denoised images with preserved edges. In this paper, we propose a novel optimization framework that optimizes the thresholding coefficients of the DTCWT despeckling method for SAR images. The proposed optimization framework is based on the Fruit Fly Optimization Algorithm (FOA). The approach is a multi-objective optimization algorithm that is used to find maximum values of the Peak Signal-to-Noise Ratio (PSNR), Mean Structural Similarity Index (MSSIM) and Equivalent Number of Looks (ENL). A maximum MSSIM value indicates high edge preservation capacity, whereas a maximum PSNR value indicates good quality denoising of images. A maximum ENL value represents good speckle-noise smoothing capability. We have applied our framework to some classical images as well as to SAR images of the MSTAR dataset. In our experiments, we found that the proposed framework results in excellent PSNR values of 36.87 dB, 35.4 dB and 37.8 dB and MSSIM values of 0.92, 0.93 and 0.92, respectively, for the Lena image, an MSTAR dataset image and a TerraSAR-X dataset image.

CONTACT Bibek Kumar bkknith@gmail.com School of Computing, DIT University, Dehradun, Uttarakhand, India
© 2021 Informa UK Limited, trading as Taylor & Francis Group

1. Introduction
In the field of active remote sensing, SAR imaging is a growing area that has attracted several researchers in the past decade (J. Wang et al. 2018; Xu et al. 2019). SAR
imaging system has a wide range of applications in several domains such as topography,
oceanography, agriculture, geology, forestry, environment monitoring, military surveil­
lance, moving target recognition etc. The imaging system captures high-resolution
images during both day and night and in different weather conditions (Hou et al. 2018).
In order to capture images, the active sensor transmits microwave signals to the observa­
tion point on earth. Further, it receives back signals or backscattered signals from the
surface of the earth (Zhai et al. 2019). From the viewpoint of electromagnetic scattering, simulation of SAR images can be categorized into two parts: signal-level simulation, which mainly emphasizes electromagnetic scattering, and image-level simulation, which emphasizes a pre-existing or hypothetical distribution (Huang et al. 2019). SAR images are typically captured from a moving aircraft or satellite antenna, as shown in Figure 1. The distance between the aircraft or satellite antenna and the penetration point on the earth's surface is known as the slant range. These sensors determine the amount of speckle in the system. Speckle is a scattering pattern that arises because the spatial resolution is not sufficient to resolve the individual scatterers.

Figure 1. SAR images capturing system.



In general, SAR images can be captured in the form of single look complex (SLC) images, which are characterized by speckle or multiplicative noise. To reduce this speckle noise, a multi-looking process is applied to SAR SLC images, which improves the quality of SAR images by producing square pixels in the output images. After this process, the resolution of the SLC images may be reduced. In SAR images, some key metrics explain the performance of SAR imagery systems. These metrics are sub-metre resolution, spatial resolution, radiometric resolution, speckle, NESZ (Noise Equivalent Sigma Zero), etc.
As per the available literature, numerous researchers have worked on the Moving and Stationary Target Acquisition and Recognition (MSTAR) dataset (Keydel, Lee, and Moore 1996). Sample images of the MSTAR dataset are shown in Figure 2. The dataset was collected by Sandia National Laboratories, jointly sponsored by the Defense Advanced Research Projects Agency. The dataset is freely available on the internet and contains SAR images of 10 targets such as armed vehicles, weapon systems, etc. The data were captured with an X-band radar operated at 9.60 GHz with a bandwidth of 0.591 GHz. The cross-range and range resolutions are identical (0.3047 m).
SAR images play an essential role in identifying objects, which is applicable in defence-related automatic target recognition (ATR) and other applications such as earth observation and monitoring (Zhou et al. 2018), but the noise present in the images can degrade the accuracy and performance of these applications. To improve accuracy and performance, we need to remove the speckle noise (J. Wang et al. 2018). This process is known as despeckling. Despeckling is used to reduce the speckle noise and enhance the quality of images, but it can also eliminate some essential features such as edges, textures and structure. Therefore, the despeckling process should be carried out in a way that preserves such features in the images.
Recently, several denoising filters have been proposed by various researchers (Panetta,
Bao, and Agaian 2016; Suresh and Lal 2017). The existing denoising filters can be broadly
categorized into two domains, the spatial domain (Lee 1983) and the frequency domain (Hurley
and Simeoni 2016). In spatial filtering, operations are performed on image pixels.
Therefore, the success of spatial filtering mainly depends on window (mask) size.
In the area of SAR image despeckling, multi-scale wavelet transform has been widely
used. Kang, Lee, and Yoo (2016) proposed an FESR (Feature Enhanced Speckle Reduction) methodology in which coherence, contrast and edge filtering were enhanced while suppressing speckle with a robust diffusion filtering method. So far, various approaches have been proposed for despeckling. One such approach is the DWT (Discrete Wavelet Transform), which has fast computation. However, it lacks translation invariance (Choi and Jeong 2019). For a stationary signal in the frequency domain, the performance of the DWT is

Figure 2. MSTAR (Moving and Stationary Target Acquisition and Recognition) dataset images affected
with multiplicative noise.

not up to the mark. To overcome this drawback, the UDWT (Undecimated Wavelet Transform) (Hu et al. 2018; Starck, Fadili, and Murtagh 2007; Argenti and Alparone 2002) was developed, which achieves near shift invariance by omitting the downsampling in the forward wavelet transform and the upsampling in the inverse wavelet transform. It also provides better computational time to overcome the drawback of the DWT. Kingsbury (Selesnick, Baraniuk, and Kingsbury 2005) proposed another popular method named the DTCWT (Dual-Tree Complex Wavelet Transform) (Farhadiani, Homayouni, and Safari 2019; Vimalraj, Esakkirajan, and Sreevidya 2018). The DTCWT was designed to overcome the drawbacks of the DWT and UDWT, and it combines the properties of the CWT (complex wavelet transform) and the DWT. The approach gives limited redundancy, near shift invariance, improved directional selectivity and low time complexity. The DWT method uses one filter tree, whereas the DTCWT uses two filter trees, which provide a pair of coefficients. These coefficients are combined to produce complex coefficients: the first DWT coefficient gives the real part and the second DWT coefficient provides the imaginary part. In the DTCWT, the coefficients of the real and imaginary DWTs are calculated independently and in parallel, with an offset of half the sampling interval between the two transforms, which improves the time complexity of the process. However, these techniques are not directly suitable for restoring images affected by multiplicative or speckle noise, e.g. SAR images (Goodman 1984). The nature of speckle noise can be converted from multiplicative to additive by applying a preprocessing method known as the log transform. The log transformation process takes less computation time for the restoration of images compared to restoring images corrupted by multiplicative noise directly. Performing the log transformation on the speckled image changes the statistical attributes of the speckled image (Cao et al. 2019). However, some issues can also occur during the log transformation process; for example, it gives better results only if most of the pixels have low contrast. It is therefore crucial to examine the impact on the statistical attributes of the speckled image after performing the log transformation.
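To make this preprocessing step concrete, the following minimal NumPy sketch (an illustrative assumption, not the authors' implementation; `denoise_fn` is a hypothetical placeholder for any additive-noise filter) applies the log transform before filtering and maps the result back with an exponential.

```python
import numpy as np

def log_domain_despeckle(speckled, denoise_fn, eps=1e-6):
    """Illustrative wrapper: multiplicative speckle becomes approximately
    additive in the log domain, is filtered there, and the result is mapped
    back with an exponential."""
    log_img = np.log(speckled.astype(np.float64) + eps)  # eps avoids log(0)
    log_denoised = denoise_fn(log_img)                   # any additive-noise filter
    return np.exp(log_denoised) - eps

# Hypothetical usage with an identity "filter", for illustration only:
# restored = log_domain_despeckle(sar_image, lambda x: x)
```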
Along with log transformation, Hazarika and Bhuyan (2013) proposed a lapped
orthogonal transform (LOT) in which the transform coefficient is remapped into an
octave form. Further, enhanced lee filtering is applied to its sub-band LOT coefficients.
The lapped transform can preserve the texture information of the image. Some
researchers (Tomassi, Milone, and Nelson 2015) developed a bilateral filter (non-linear
filter) for smoothing and protecting the edge data by combining range and spatial
filters. Over-smoothing an image yields a high-contrast image but might lose some important features. To overcome this, a thresholding approach based on SURE (Stein's Unbiased Risk Estimator) has been proposed (Simi et al. 2019) for denoising the image.
An improved version of SURE shrink thresholding gives reasonably good results where
the mean square error (MSE) has been considered as the performance metric. In trans­
form domain-based thresholding, the main factor is to set the threshold and to find the
correct coefficients used in the despeckling approach. To find the accurate coefficients,
optimization techniques based on the maximization of reference parameters can be
used.
Transformed coefficient thresholding is one of the popular methodologies for noise
removal, which provides a tremendous improvement in PSNR (Peak Signal-to-Noise Ratio)
and SSIM (Structural Similarity Index). PSNR and SSIM are two important parameters used
to validate noise reduction in SAR images. Panigrahi (2019) proposed thresholding of the

curvelet magnitude of residual noise to remove multiplicative noise from a noisy image
considering total variation.
The selection of the best possible values for these coefficients leads to good quality despeckled images. In general, reference and non-reference parameters are considered to validate the quality of images after despeckling. Such parameters are the Signal-to-Noise Ratio (SNR), Mean Square Error (MSE), Peak Signal-to-Noise Ratio (PSNR), Energy Signal-to-Noise Ratio (ESNR), Equivalent Number of Looks (ENL), α and β, where the α and β parameters are known as the gain and bias factors and are used to monitor contrast and brightness, respectively. To assess the preservation of edge information, the Figure of Merit (FoM), Mean Structural Similarity Index Metric (MSSIM), Edge Correlation (EC), coefficient of variation and Despeckling Evaluation Index (DEI) are used.
In this research, the main focus is to get optimized PSNR, ENL and MSSIM para­
meters. The proposed method is used to find the optimal values for the thresholding
approach based on the maximization of PSNR and MSSIM. The maximization of PSNR
ensures the removal of maximal noise from the images. At the same time, the max­
imization of MSSIM parameters validates better edge preservation. To deal with the
preservation of edge information and reduce noise, a multi-objective optimization
technique has been used. The primary aim of using the multi-objective optimization algorithm is to provide a threshold value for the coefficients of the transform-based despeckling approach.
In evolutionary computing, optimization is the way to obtain the ideal threshold value for frequency-domain filtering. Many optimization algorithms are available to find the ideal threshold value for removing noise from an image, for example the Fruit Fly Optimization Algorithm, Ant Colony Optimization, Particle Swarm Optimization and Genetic Algorithms. In the curvelet transform approach proposed by Li et al. (2011), the authors used particle swarm optimization to reduce speckle noise. A PSNR-based approach with the cuckoo optimization algorithm has been used to denoise images (Malik, Ahsan, and Mohsin 2016). Some researchers have used a lion optimization-based denoising method (Jayapal and Subban 2020). A Multi-Objective PSO (MOPSO) on wavelet transform approach has been proposed (Sivaranjani, Roomi, and Senthilarasi 2019) to decrease speckle noise in SAR images. Most of the available optimization approaches for SAR image despeckling focus on a single objective, i.e. they concentrate on optimizing a single parameter.
Very few multi-objective optimization approaches have been explored in this domain. In most of the single-objective methods, the focus is on reducing the speckle noise, whereas in multi-objective techniques, along with reducing speckle noise, edge information in the images is also preserved. Therefore, considering multiple parameters is a better choice for obtaining good quality despeckled SAR images without degrading edge information. This motivated the authors to use a multi-objective optimization technique to obtain an optimized despeckling framework for SAR images.
In this paper, a DTCWT based despeckling approach has been proposed to despeckle SAR images. This study uses a Multi-Objective Enhanced Fruit Fly Optimization Algorithm (MO-EFOA) in order to find a suitable threshold value for favourable despeckling of SAR images. The authors' contributions are as follows:

(1) Finding a suitable objective function to optimize the despeckling approach along with edge information preservation.
(2) Implementation of an MO-EFOA model against the reference-based objective function.
(3) Design of an MO-EFOA-based threshold value for dual tree complex wavelet transform (DTCWT) despeckling.
(4) Analysis of the MO-EFOA tuned DTCWT approach on speckle-affected imagery.
(5) Evaluation of MO-EFOA tuned despeckled images for target detection.

2. Methodology Overview
In this section, the authors explain the bivariate shrinkage method using the DTCWT (Dual Tree Complex Wavelet Transform), the FOA (Fruit Fly Optimization Algorithm) and local adaptive thresholding.

2.1. Bivariate Shrinkage Method based DTCWT


Because of the multiresolution property in non-linear despeckling processes, many researchers have focused on despeckling methods based on the wavelet transform. The main drawbacks of the wavelet transform can be categorized as aliasing and oscillations. Aliasing occurs when one signal is not completed but another signal has already started, which leads to a poor representation of the analogue signal. When aliasing occurs in the signal, it becomes difficult to observe the oscillations. Due to aliasing and oscillations, under-sampling of the signal becomes a challenging task. An approach called the NSCT (Non-Subsampled Contourlet Transform) has been proposed (Da Cunha, Zhou, and Do 2006) to overcome the shortcomings of the wavelet transform. However, the NSCT is more complex in terms of implementation and it also takes more time to process. The DTCWT is used to overcome the drawbacks of the NSCT and UDWT by improving the time complexity. The filters used in the 2-dimensional DWT are designed in such a way that they meet the Hilbert pair requirement (Toda and Zhang 2017). The Hilbert transform pair is evaluated in three steps. The first step evaluates the Fourier transform of the given signal s(t). The second step removes the negative frequencies, and the last step evaluates the inverse Fourier transform, which leads to a complex-valued signal. The real and imaginary parts of this complex-valued signal form the Hilbert transform pair.
The DTCWT finds the complex transform of a signal with two different two-dimensional discrete wavelet transform (2D-DWT) decompositions. The first decomposed part is p (real), and the second decomposed part is q (imaginary). A block diagram describing the flow of the DTCWT approach is shown in Figure 3. In this diagram, a0(n) represents the low pass filter and a1(n) the high pass filter of the top filter bank, which is used to calculate the wavelet and scaling coefficients of the real part. Similarly, b0(n) represents the low pass filter and b1(n) the high pass filter of the bottom filter bank, which is used to calculate the wavelet and scaling coefficients of the imaginary part. In the first phase, the image undergoes a row (line) transformation with the filters [b0(n), b1(n)] and then a column (rank) transformation with the filters [a0(n), a1(n)]. In the second phase, the image passes through a row transformation with the filters [a0(n), a1(n)] and then a column transformation with the filters [b0(n), b1(n)].

Figure 3. Dual tree complex wavelet transform.

At each level of the DTCWT decomposition, it produces HH (high-high), HL (high-low) and LH (low-high) high pass sub-bands and an LL (low-low) low pass sub-band. The addition or subtraction of every pair of sub-bands produces one low-frequency wavelet coefficient and six directional wavelet transform coefficients. The bivariate shrinkage method is applied to process the wavelet coefficients.
The bivariate shrinkage function (Şendur and Selesnick 2002) can be used with DTCWT
as follows.
Consider a noisy image represented as

I_{noisy} = I_{original} \odot N    (1)

where I_{noisy} represents the noisy image; \odot is an operator that represents the noise model, such as additive noise (+) or multiplicative noise (×); I_{original} is the original image; and N is a disturbance in the captured image known as noise. Noise originates during the capturing process due to unwanted signals, weather conditions, etc.
In the DTCWT domain this is reproduced as

O = \psi + N    (2)

where the wavelet coefficient of the captured image is represented by O, \psi stands for the wavelet coefficient of the original image and N represents the noise wavelet coefficient.
By considering MAP (maximum a posteriori probability) estimation (Liu et al. 2017), \psi is estimated from the observed coefficient O as

\hat{\psi}(O) = \arg\max_{\psi} k_{\psi|O}(\psi \mid O)    (3)

where k represents the posterior probability function.
Equation (3) can be rewritten based on Bayesian estimation theory as

\hat{\psi}(O) = \arg\max_{\psi} \left( k_N(N) \cdot k_\psi(\psi) \right)    (4)

k_N(N) = \frac{1}{2\pi\sigma^2} \exp\!\left( -\frac{N_1^2 + N_2^2}{2\sigma^2} \right)    (5)

where \sigma^2 is the noise variance and k_\psi(\psi) is represented as a bivariate probability density function.

k_\psi(\psi) = \frac{3}{2\pi\sigma_\psi^2} \exp\!\left( -\frac{\sqrt{3}}{\sigma_\psi} \sqrt{\psi_1^2 + \psi_2^2} \right)    (6)

where \sigma_\psi^2 denotes the signal variance, \psi_1 denotes the wavelet coefficient at the current level, and \psi_2 denotes the wavelet coefficient at the level next to \psi_1. The maximum a posteriori estimate of \psi_1 can be calculated as

\hat{\psi}_1 = \frac{\left( \sqrt{O_1^2 + O_2^2} - \frac{\sqrt{3}\,\sigma^2}{\sigma_\psi} \right)_{+}}{\sqrt{O_1^2 + O_2^2}} \cdot O_1    (7)

The bivariate shrinkage operator (u)_+ is obtained by

(u)_+ = \begin{cases} 0, & u < 0 \\ u, & \text{otherwise} \end{cases}    (8)

The median estimator is used to find the noise variance \sigma^2:

\hat{\sigma}^2 = \frac{\mathrm{Median}(|O_i|)}{0.675}, \quad O_i \in \text{subband HH}    (9)

where subband HH denotes the wavelet coefficients of the HH sub-band, and G(b) denotes the aggregation of wavelet coefficients centred on the coefficient b. In the discussed model, the observed variance satisfies \sigma_O^2 = \sigma^2 + \sigma_\psi^2, and it is estimated as

\hat{\sigma}_O^2 = \frac{1}{M} \sum_{O_i \in G(b)} O_i^2    (10)

where M represents the size of G(b), and the estimated value of \sigma_\psi is

\hat{\sigma}_\psi = \sqrt{\left( \hat{\sigma}_O^2 - \hat{\sigma}^2 \right)_{+}}    (11)

Once the DTCWT has been applied to the noisy image, the wavelet coefficients of the DTCWT are processed with the local adaptive bivariate shrinkage model, which uses a local variance estimate. Then, the inverse DTCWT (IDTCWT) is performed on the new wavelet coefficients to obtain the denoised image.
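As an illustration of Equations (7)-(11), a minimal NumPy/SciPy sketch of the local adaptive bivariate shrinkage rule is given below. It is not the authors' implementation: the window size, the alignment of child and parent subbands, and the small stabilizing constants are assumptions.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def estimate_noise_var(hh_subband):
    """Robust noise-variance estimate from the finest HH subband (cf. Equation (9))."""
    return (np.median(np.abs(hh_subband)) / 0.675) ** 2

def bivariate_shrink(child, parent, noise_var, window=7):
    """Bivariate shrinkage of Equation (7): `child` is the current-level subband,
    `parent` the corresponding coarser-level coefficients resampled to the same
    shape, and `noise_var` the noise variance estimate."""
    # Local observed variance around each coefficient (Equation (10)).
    local_obs_var = uniform_filter(np.abs(child) ** 2, size=window)
    # Local signal standard deviation (Equation (11)), floored at zero.
    sigma_psi = np.sqrt(np.maximum(local_obs_var - noise_var, 0.0)) + 1e-12
    # Joint child/parent magnitude and the soft-threshold gain of Equation (7).
    mag = np.sqrt(np.abs(child) ** 2 + np.abs(parent) ** 2) + 1e-12
    gain = np.maximum(mag - np.sqrt(3.0) * noise_var / sigma_psi, 0.0) / mag
    return gain * child
```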

2.2. Fruit Fly Optimization Algorithm (FOA)


Due to its low computational requirements and its simplicity of implementation and execution, the fruit fly optimization algorithm has become one of the popular optimization techniques in soft computing. The fruit fly optimization algorithm was proposed by Wen-Tsao Pan (2012). The FOA is inspired by the food-finding behaviour of fruit flies. Using this algorithm, we can find a globally optimized (minimum or maximum) value of a function. It follows the mechanism of finding food by sensing and perception, particularly the osphresis (smell) and vision of fruit flies, as shown in Figure 4.
The implementation of the FOA is carried out in the following phases:
Phase 1: Assign values to the different parameters.

Figure 4. Fruit fly optimization algorithm.

The population count PC, the maximum iteration count IC_max, the location range R_LOC and the fly distance range R_FD are initialized. The starting location of the fruit fly swarm is calculated as

x_horizontal = random_value(R_LOC)    (12)

y_vertical = random_value(R_LOC)    (13)

Phase 2: To search for food by osphresis, each fruit fly is given a random direction and distance:

x_i = x_horizontal + random_value(R_FD)    (14)

y_i = y_vertical + random_value(R_FD)    (15)

Phase 3: Since the food location is unknown, the distance D_i from the origin is first calculated, and then the smell concentration judgement value SC_i is computed as

D_i = sqrt(x_i^2 + y_i^2)    (16)

SC_i = 1 / D_i    (17)


Phase 4: Among the different fruit fly batches, the flies find the food according to the smell concentration method (fitness function); the fly with the largest smell concentration is selected:

smell_i = fitness_function(SC_i)    (18)

(high_smell, high_index) = largest_value(smell_i)    (19)

Here, the largest value of the smell concentration is denoted by high_smell and the corresponding fruit fly index is denoted by high_index.
Phase 5: Now, the new and the previous best smell concentrations are compared and the better one is selected:

smell_best = high_smell    (20)

x_horizontal = x(high_index)    (21)

y_vertical = y(high_index)    (22)

If the new smell concentration is not better than the previous one, phases 1 to 4 are repeated to obtain an improved smell concentration.
Phase 6: The execution stops if the iteration count IC_max is reached or a sufficiently precise value of the smell concentration is found. If not, phases 2 to 5 are repeated.
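The six phases above can be condensed into a short Python loop, as sketched below. This is an illustrative toy version under assumed parameter names; the quadratic example fitness in the comment is purely hypothetical.

```python
import numpy as np

def foa(fitness, pc=30, ic_max=50, r_loc=(-2.0, 2.0), r_fd=(-5.0, 5.0), seed=0):
    """Basic fruit fly optimization loop (Phases 1-6); `fitness` maps a smell
    concentration SC to a score that is maximized."""
    rng = np.random.default_rng(seed)
    x0, y0 = rng.uniform(*r_loc), rng.uniform(*r_loc)   # Phase 1: swarm location
    best_smell, best_sc = -np.inf, None

    for _ in range(ic_max):                              # Phase 6: stop at IC_max
        x = x0 + rng.uniform(*r_fd, size=pc)             # Phase 2: random step
        y = y0 + rng.uniform(*r_fd, size=pc)
        d = np.sqrt(x ** 2 + y ** 2)                     # Phase 3: Equation (16)
        sc = 1.0 / d                                     # Phase 3: Equation (17)
        smell = np.array([fitness(s) for s in sc])       # Phase 4: Equation (18)
        idx = int(np.argmax(smell))                      # Phase 4: Equation (19)
        if smell[idx] > best_smell:                      # Phase 5: Equations (20)-(22)
            best_smell, best_sc = smell[idx], sc[idx]
            x0, y0 = x[idx], y[idx]
    return best_sc, best_smell

# Hypothetical usage: maximize a toy function with optimum near SC = 0.3.
# best_sc, best_val = foa(lambda sc: -(sc - 0.3) ** 2)
```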

2.3. Fitness Functions


In our proposed framework, the fitness functions (edge-preserving and denoising parameters) for MO-EFOA are considered. To validate the efficiency of the MO-EFOA based despeckling approach on simulated speckled images, the fitness functions are constructed using full-reference parameters such as MSSIM and PSNR. Reference (noise-free) SAR image data are generally not available (Sivaranjani, Roomi, and Senthilarasi 2019); that is why the fitness functions considered in this framework use the full-reference metrics PSNR (Peak Signal-to-Noise Ratio) and MSSIM (Mean Structural Similarity Index) computed on simulated speckled images. These functions are described in the following subsections.

2.3.1. PSNR
The Peak Signal-to-Noise Ratio measures the intensity difference between the despeckled image and the original image. The target of this fitness function is to achieve a high PSNR value, which means better quality of the despeckled image (Horé and Ziou 2010). PSNR is calculated as

\mathrm{PSNR}(I_{filtered}, I_{original}) = 10 \log_{10}\!\left( \frac{(I_{original})^2_{peak}}{\mathrm{MSE}} \right)    (23)

where I_{filtered} represents the despeckled image and I_{original} represents the original image.
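A direct NumPy transcription of Equation (23) could look like the following; the default peak value of 255 is an assumption for 8-bit imagery.

```python
import numpy as np

def psnr(filtered, original, peak=255.0):
    """Peak Signal-to-Noise Ratio of Equation (23), in dB."""
    diff = filtered.astype(np.float64) - original.astype(np.float64)
    mse = np.mean(diff ** 2)
    if mse == 0:
        return np.inf          # identical images
    return 10.0 * np.log10(peak ** 2 / mse)
```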

2.3.2. MSSIM
MSSIM is the second fitness function measured in this work. This fitness function is used to focus on edge information, i.e. the edge-preserving strength of the despeckling method. To measure MSSIM, we first have to calculate the SSIM value (Rai and Chatterjee 2019), which is used to find the similarity between two images. The value of MSSIM lies in the range 0 to 1; a larger value of MSSIM means a better despeckled image. SSIM and MSSIM are defined as

\mathrm{SSIM}(I_{filtered}, I_{original}) = \frac{(2\chi_{I_{filtered}}\chi_{I_{original}} + z_1)(2\,SD_{I_{filtered} I_{original}} + z_2)}{(\chi^2_{I_{filtered}} + \chi^2_{I_{original}} + z_1)(SD^2_{I_{filtered}} + SD^2_{I_{original}} + z_2)}    (24)

\mathrm{MSSIM}(I_{filtered}, I_{original}) = \frac{1}{W} \sum_{p=0}^{W-1} \mathrm{SSIM}(I_{filtered}, I_{original})    (25)

where z_1 and z_2 are constants, W represents the total count of local windows, the mean and standard deviation are denoted by \chi and SD, respectively, and SD_{I_{filtered} I_{original}} denotes the covariance between the two images.
In order to validate the despeckling results of the proposed framework, the authors calculated and compared the ENL (Equivalent Number of Looks) metric (Lopes, Touzi, and Nezry 1990). Sahebi and Heidarian (2015) used the ENL metric for validating the smoothness of the despeckled SAR image. The ENL is an index that measures the relative intensity of speckle-noise smoothing in SAR images. ENL is calculated as given in Equation (26):

\mathrm{ENL} = \frac{M^2(I_{filtered})}{SD^2(I_{filtered})}    (26)

where I_{filtered} is the despeckled image, SD is the standard deviation and M is the mean intensity of a homogeneous region.
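The two remaining metrics can be sketched as follows. The window-based MSSIM is a simplified stand-in for Equations (24)-(25) (the non-overlapping 8 × 8 windows and the constants z1, z2, taken from the usual SSIM defaults, are assumptions), and ENL follows Equation (26) over a user-supplied homogeneous region.

```python
import numpy as np

def mssim(filtered, original, win=8, z1=(0.01 * 255) ** 2, z2=(0.03 * 255) ** 2):
    """Simplified MSSIM (Equations (24)-(25)): average SSIM over
    non-overlapping win x win windows."""
    f = filtered.astype(np.float64)
    o = original.astype(np.float64)
    scores = []
    for r in range(0, f.shape[0] - win + 1, win):
        for c in range(0, f.shape[1] - win + 1, win):
            a, b = f[r:r + win, c:c + win], o[r:r + win, c:c + win]
            mu_a, mu_b = a.mean(), b.mean()
            cov = ((a - mu_a) * (b - mu_b)).mean()
            num = (2 * mu_a * mu_b + z1) * (2 * cov + z2)
            den = (mu_a ** 2 + mu_b ** 2 + z1) * (a.var() + b.var() + z2)
            scores.append(num / den)
    return float(np.mean(scores))

def enl(filtered, region):
    """Equivalent Number of Looks of Equation (26) for a homogeneous region,
    given as a boolean mask or index tuple."""
    patch = filtered[region].astype(np.float64)
    return float(patch.mean() ** 2 / patch.var())
```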

2.4. Local Adaptive Thresholding


In the case of greyscale images, the pixel intensity value is P(x, y) ∈ [0, 255], where x and y are the coordinates of the pixel matrix. The goal of local adaptive thresholding is to find the threshold value T(x, y) (Jidesh et al. 2018) of every pixel, as in Equation (27):

B(x, y) = \begin{cases} 0, & \text{if } P(x, y) < T(x, y) \\ 255, & \text{otherwise} \end{cases}    (27)

where B(x, y) is the binarized image. With the help of Sauvola's binarization technique (Sauvola and Pietikäinen 2000), T(x, y) can be found from the mean Mean(x, y) and standard deviation SD(x, y) of the pixel intensities within an n × n window:

T(x, y) = Mean(x, y) \cdot \left( 1 + l \left( \frac{SD(x, y)}{SD_{max}} - 1 \right) \right)    (28)

where the maximal value of the standard deviation is denoted by SD_max (128 for a greyscale image) and l ∈ [0.2, 0.5].
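A possible NumPy/SciPy rendering of Equations (27)-(28) is given below; the window size and the sensitivity parameter l are assumptions chosen within the ranges quoted above.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def sauvola_binarize(gray, win=15, l=0.2, sd_max=128.0):
    """Sauvola threshold of Equation (28) applied with the binarization rule
    of Equation (27)."""
    g = gray.astype(np.float64)
    mean = uniform_filter(g, size=win)                     # local mean
    mean_sq = uniform_filter(g ** 2, size=win)
    sd = np.sqrt(np.maximum(mean_sq - mean ** 2, 0.0))     # local standard deviation
    t = mean * (1.0 + l * (sd / sd_max - 1.0))             # Equation (28)
    return np.where(g < t, 0, 255).astype(np.uint8)        # Equation (27)
```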

3. Proposed Framework Design


In this section, we describe the proposed framework architecture for optimization of the despeckling process and the flow of the proposed methodology.
Our proposed methodology optimizes the thresholding parameters used in DTCWT
despeckling method for SAR Images. The optimization is performed using the Fruit Fly
optimization algorithm.

3.1. Proposed Framework Architecture


The proposed framework, as shown in Figure 5, is used to handle noisy images with reference images. The framework design is decomposed into multiple phases. In the first phase, the bivariate shrinkage based DTCWT approach is used with the enhanced FOA to remove noise from images. We have compared the denoised image with the noisy image based on denoising parameters such as PSNR, MSSIM and ENL. A candidate noise variance is generated with the enhanced FOA. A new set of wavelet coefficients is generated with the help of the bivariate shrinkage function by passing in the value of the noise variance. Then, the denoised image is formed by calculating the inverse DTCWT of the new wavelet coefficients. After applying the proposed method, some square-shaped regions may be present in the denoised image. We have used adaptive thresholding to remove these from the denoised image and obtain a good despeckled image.

3.2. EFOA (Enhanced Fruit Fly Optimization Algorithm)


In the original FOA, the search ability and convergence speed are limited by the fly distance range, and to improve the global search ability we need a sufficiently large fly distance range. To increase the fly distance range, we can

Figure 5. Improved DTCWT using a threshold value provided by the fruit fly optimization algorithm.

use a booster function to improve the global search ability. In the early stages of the iteration, we apply a heavy booster to get a large enough fly distance range. In the later stages, we reduce the booster value to get a small fly distance range. To achieve better global search ability, we used the booster function described in Equation (29):

b(s) = 1 - \sqrt{\frac{s - 1}{s_{max}}}    (29)

where s is the current iteration and s_max represents the total iteration count. We use this booster function in our proposed method to enhance the performance of the FOA.
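Under these assumptions, the booster of Equation (29) can be written and folded into the FOA loop by rescaling the fly distance range at each iteration; the base range value in the comment mirrors the doubled range used later in Section 4 and is otherwise an assumption.

```python
import numpy as np

def booster(s, s_max):
    """Booster weight of Equation (29): close to 1 in early iterations and
    shrinking towards later iterations."""
    return 1.0 - np.sqrt((s - 1) / s_max)

# Sketch of how the booster could scale the fly distance range inside EFOA:
# base_r_fd = 10.0                                  # doubled range, cf. Section 4
# for s in range(1, ic_max + 1):
#     r_fd = base_r_fd * booster(s, ic_max)
#     step_x = rng.uniform(-r_fd, r_fd, size=pc)    # replaces the fixed R_FD step
```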

3.3. Proposed Framework Flow


In this section, we explain the different phases of the proposed framework.

● Phase 1: If the original image is in RGB, it is first converted into a greyscale image. After that, the log transform of the image is calculated and considered for further processing.
● Phase 2: The different parameters required in the enhanced MO-EFOA-DTCWT are initialized, i.e. the population count PC, the iteration count IC_max, the fly distance range R_FD and the position range P_R.
● Phase 3: The position of every fly is generated within the fly distance range R_FD; the distance D and smell concentration SC for each individual fly are measured with the help of Equation (16) and Equation (17), respectively. Each SC is assumed to be a potential threshold. The calculated new threshold is taken as the noise variance. With the help of the DTCWT and the shrinkage function, the noisy image is denoised after passing the new variance to the bivariate shrinkage function. To measure the performance of the proposed framework, PSNR (defined in Equation (23)) and MSSIM (defined in Equation (25)) are considered as the fitness functions.
● Phase 4: The high_smell is considered the maximal fitness among the fruit flies; if the current high_smell value is larger than the previous smell_best, then the respective coordinate values are updated, otherwise smell_best, x_horizontal and y_vertical remain unchanged.
● Phase 5: If the termination condition is met, then smell_best, x_horizontal and y_vertical are treated as the optimal solution; otherwise, phase 3 and phase 4 are repeated.
● Phase 6: The optimal solution from phase 3 is taken as the threshold value. Again, this value is used as the noise variance of the noisy image, and the bivariate shrinkage filter based on the DTCWT is applied. After the denoising process, adaptive thresholding is applied to the denoised image (a minimal end-to-end sketch is given after this list).
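For clarity, a minimal end-to-end sketch of these phases is given below. All helper names are placeholders: `dtcwt_forward` and `dtcwt_inverse` stand in for any DTCWT implementation and are assumed to exchange the lowpass band plus aligned (child, parent) subband pairs; `efoa` is an optimizer with the interface sketched in Section 2.2; `bivariate_shrink`, `psnr`, `mssim` and `sauvola_binarize` refer to the earlier sketches; and the scalarization of the two objectives is an assumption, since the paper does not state explicit weights.

```python
import numpy as np

def despeckle_mo_efoa(speckled, reference, dtcwt_forward, dtcwt_inverse, efoa):
    """Illustrative MO-EFOA-DTCWT flow: EFOA proposes a noise-variance
    threshold, DTCWT plus bivariate shrinkage denoises with it, and the
    multi-objective fitness combines PSNR and MSSIM against the reference."""
    log_img = np.log(speckled.astype(np.float64) + 1e-6)         # Phase 1: log domain

    def denoise(noise_var):
        lowpass, pairs = dtcwt_forward(log_img)                  # assumed helper API
        shrunk = [bivariate_shrink(child, parent, noise_var)
                  for child, parent in pairs]
        return np.exp(dtcwt_inverse(lowpass, shrunk)) - 1e-6

    def fitness(noise_var):                                      # Phase 3: fitness
        den = denoise(noise_var)
        # Assumed weighting of the two objectives (not specified in the paper).
        return psnr(den, reference) + 100.0 * mssim(den, reference)

    best_var, _ = efoa(fitness)                                  # Phases 2-5: EFOA search
    denoised = denoise(best_var)                                 # Phase 6: final denoising
    artefact_map = sauvola_binarize(np.clip(denoised, 0, 255))   # adaptive thresholding step
    return denoised, artefact_map
```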

4. Experiment Analysis and Results


The evaluation of SAR image despeckling cannot be performed directly because the actual noise-free image is not known. To evaluate our proposed method, we first applied speckle noise to the original images and then validated our model by denoising these images. Our proposed method has been applied to classical and popular images of the image processing domain, such as Lena. We have also applied our proposed method to SAR images of the MSTAR and TerraSAR-X datasets. Our proposed method is a despeckling approach that uses bivariate shrinkage based DTCWT. We use a DTCWT window size of 5 × 5, which is most suitable for eliminating small noise in a small area. In this method, we have used the multi-objective enhanced fruit fly optimization algorithm. In this MO-EFOA with DTCWT using the bivariate function, we tuned the MSSIM and PSNR values to get an optimized threshold value.
As described in Section 2.3, the fitness functions for MO-EFOA use the full-reference parameters PSNR and MSSIM computed on simulated speckled images, since reference (noise-free) SAR image data are not available.
The results of the proposed method are compared with the existing approaches Kuan
(Kuan et al. 1987), Lee (Lee, 1983), Mean (Goodman 1984), Lee diffusion (Aksel et al. 2006),
Hybrid median (Darus et al. 2017), BM3D (Parrilli et al. 2012), BDSS (Yuan, Guan, and Sun
2019), SAR-DRN (Zhang et al. 2018), ID-CNN (P. Wang, Zhang, and Patel 2017) and
MOSPSO (Sivaranjani, Roomi, and Senthilarasi 2019).
In EFOA, we initialized the population count PC = 30, the initial location range R_LOC = [−2, 2], the iteration count IC_max = 50 and the fly distance range R_FD = [−5, 5] (Shan, Cao, and Dong 2013). Due to the booster (inertia weight) function b(s), the fly distance range in our proposed model is increased up to double that of the original FOA, i.e. it can be represented in the range

Figure 6. Maximal value of MSSIM and PSNR by MO-EFOA on lena image.



[−10, 10]. In the global search, the proposed MO-EFOA-DTCWT gives faster speed compared to the FOA.
The MO-EFOA-DTCWT tuning is used to obtain the optimized value of the threshold by maximizing the MSSIM and PSNR, as shown in Figure 6. The fitness functions MSSIM and PSNR are increased towards their optimal values on the processed speckled images. The fitness functions (edge-preserving and denoising metrics) for MO-EFOA-DTCWT are computed with respect to the reference image. Since reference SAR images are not available, we used full-reference parameters (PSNR and MSSIM) computed on simulated speckled images in the fitness function to validate the efficiency of the proposed framework.
To check the efficiency of the proposed framework, we applied it to the Lena image, an MSTAR dataset image and a TerraSAR-X image and compared the performance of the optimized threshold function with the other existing filtering techniques mentioned earlier. These values were calculated for different iteration counts and population counts. The results show that the proposed MO-EFOA-DTCWT outperformed the existing methods on the Lena image (128 × 128 pixels) in terms of the reference-based parameters MSSIM and PSNR. The proposed framework achieved a PSNR of 36.87 dB compared with some existing studies, as shown in Figure 7, and an excellent MSSIM value of 0.92 compared with some existing literature, as shown in Figure 8.
The proposed MO-EFOA-DTCWT method was also applied to an image (128 × 128 pixels) collected from the MSTAR dataset, where it was again evaluated with the reference-based parameters MSSIM and PSNR. The proposed framework achieved the highest PSNR value, 35.4 dB, compared with some existing despeckling techniques, as shown in Figure 7, and an excellent MSSIM value of 0.93, much higher than some existing despeckling techniques, as shown in Figure 8.
The proposed MO-EFOA-DTCWT method was further applied to an image collected from the TerraSAR-X dataset and evaluated with the reference-based parameters MSSIM and PSNR. The proposed framework achieved a PSNR value of 37.8 dB, which is the highest among some existing despeckling techniques, as shown in Figure 7, and an excellent MSSIM of 0.92 compared with some existing despeckling techniques, as shown in Figure 8.
In this article, we applied our proposed framework as well as different state-of-the-art despeckling methods to the speckled Lena image, the MSTAR image and the TerraSAR-X image, as shown in Figure 9, Figure 10 and Figure 11, respectively. From this experiment, we found that the proposed framework results in better despeckled images, as shown in Figure 9(xii), Figure 10(xii) and Figure 11(xii) for the Lena image, MSTAR image and TerraSAR-X image, respectively.
After despeckling the SAR images, the authors validated the smoothness of the despeckled SAR images with the ENL metric. The ENL metric indicates the efficiency of speckle-noise smoothing in homogeneous regions. The authors compared the ENL values of the despeckled SAR images shown in Figure 9(xii), Figure 10(xii) and Figure 11(xiii) with those of the other available state-of-the-art methods, as shown in Table 1. Table 1 also presents a comparison of the proposed framework's PSNR and MSSIM metrics. The proposed method results in the highest PSNR values for the Lena image, the MSTAR image and the TerraSAR-X image (36.9 dB, 35.4 dB and 37.8 dB, respectively).

Figure 7. PSNR(dB) results analysis (lena, MSTAR dataset and TerraSAR-X dataset image) of proposed framework with different existing despeckling methods.

Figure 8. MSSIM results analysis (lena, MSTAR dataset and TerraSAR-X dataset image) of the proposed
framework with different existing despeckling methods.

Figure 9. (a) Speckled lena image with noise variance σ = 0.01, (b)-(m) various despeckled images
after applying different filtering methods (b) Kuan, (c) Lee, (d) Mean, (e) Lee diffusion, (f) Hybrid
median, (g) PSO, (h) SAR-BM3D, (i)BDSS, (j) SAR-DRN, (k)ID-CNN, (l)MOPSO, and (m) proposed
framework.

Figure 10. (a) Speckled MSTAR Dataset image, (b)-(m) various despeckled images after applying
different filtering methods (b) Kuan, (c) Lee, (d) Mean, (e) Lee diffusion, (f) Hybrid median, (g) PSO, (h)
SAR-BM3D, (i)BDSS, (j) SAR-DRN, (k)ID-CNN, (l)MOPSO, and (m) proposed framework.

Similarly, the MSSIM values are 0.92, 0.93 and 0.92, respectively. The results show that the proposed despeckling method, MO-EFOA-DTCWT, produces a better quality despeckled image and provides better edge preservation. The ENL values obtained for the Lena image, MSTAR image and TerraSAR-X image are 6.3, 7.87 and 7.89, respectively. This verifies that better smoothness is obtained in the despeckled image using the proposed despeckling approach.

5. Conclusion
Images captured using a SAR imaging system can be highly corrupted with
noise. The noisy images can impede target identification and recognition. Because of
the multiplicative nature of noise in SAR images, it can be challenging to remove
noise. In this article, we have proposed a transform domain-based optimization
framework, called multi-objective enhanced fruit fly optimization algorithm (MO-
EFOA). A bivariate-shrinkage-based dual tree complex wavelet transform is applied
to different popular speckled images, with an optimal threshold given by MO-EFOA.
To check the performance of EFOA, which uses a booster function, we compared it
with the original FOA and found that our proposed MO-EFOA achieves improved results in terms of PSNR, MSSIM and execution time. The parameters PSNR
and MSSIM are made maximal with the help of the proposed enhanced fruit fly
optimization algorithm. The proposed framework has the ability to remove speckle
noise as well as to preserve edge information. We compared our proposed

Figure 11. (a) Speckled TerraSAR-X Dataset image, (b)-(m) Various despeckled images after applying
different filtering methods (b) Kuan, (c) Lee, (d) Mean, (e) Lee diffusion, (f) Hybrid median, (g) PSO, (h)
SAR-BM3D, (i)BDSS, (j) SAR-DRN, (k)ID-CNN, (l)MOPSO, and (m) proposed framework.

Table 1. PSNR, MSSIM and ENL indexes for lena, MSTAR dataset image and TerraSAR-X dataset images
respectively.
lena image MSTAR dataset image TerraSAR-X image
Filtering method PSNR MSSIM ENL PSNR MSSIM ENL PSNR MSSIM ENL
Kuan 24.8 0.77 5.4 22.3 0.75 5.67 23.5 0.71 5.4
Lee 25 0.73 5.5 24.6 0.72 5.8 24.2 0.74 5.9
Mean 26 0.71 4.9 26.8 0.69 5.6 23.7 0.7 6.1
Lee diffusion 27.5 0.72 5.6 26.75 0.74 6.36 27.8 0.78 6.5
Hybrid median 26.5 0.59 5.3 27.9 0.63 5.68 26.7 0.68 5.8
PSO 27.5 0.79 5.6 28.3 0.81 6.7 28.7 0.8 6.7
SAR-BM3D 28.5 0.79 5.5 28.75 0.81 6.48 28.7 0.83 6.9
BDSS 32.3 0.91 5.2 32.1 0.91 6.95 33.5 0.9 7.4
SAR-DRN 31.9 0.89 5.2 32.3 0.9 6.78 34.6 0.89 7.1
ID-CNN 33.6 0.91 5.6 33.9 0.92 7.12 33.2 0.91 7.6
MOPSO 30.7 0.88 5.7 31.7 0.88 7.36 35.9 0.89 7.6
Proposed 36.9 0.92 6.3 35.4 0.93 7.87 37.8 0.92 7.98

framework with state-of-the-art techniques and achieved excellent results compared to the other existing techniques, with improved PSNR and MSSIM. The framework provides constructive results suitable for achieving better accuracy in the field of automated target identification. This work gives a significant improvement in terms of maximizing the PSNR, MSSIM and ENL values. However, it has some shortcomings, mainly in the computation time needed for automated target identification. To mitigate these shortcomings, we

will work to reduce the calculation time in target identification.

Disclosure statement
No potential conflict of interest was reported by the author(s).

ORCID
Bibek Kumar http://orcid.org/0000-0002-2081-2327
Ranjeet Kumar Ranjan http://orcid.org/0000-0002-8796-4579
Arshad Husain http://orcid.org/0000-0003-0982-4789

References
Aksel, A., A. D. Gilliam, J. A. Hossack, and S. T. Acton. 2006. “Speckle Reducing Anisotropic Diffusion
for Echocardiography.” Conference Record - Asilomar Conference on Signals, Systems and
Computers 11 (11): 1988–1992. doi:10.1109/ACSSC.2006.355113.
Argenti, F., and L. Alparone. 2002. “Speckle Removal from SAR Images in the Undecimated Wavelet
Domain.” IEEE Transactions on Geoscience and Remote Sensing 40 (11): 2363–2374. doi:10.1109/
TGRS.2002.805083.
Bi, H., G. Bi, B. Zhang, and W. Hong. 2018. “Complex-image-based Sparse Sar Imaging and Its
Equivalence.” IEEE Transactions on Geoscience and Remote Sensing 56 (9): 5006–5014.
doi:10.1109/TGRS.2018.2803802.
Cao, X., Y. Ji, L. Wang, B. Ji, L. Jiao, and J. Han. 2019. “SAR Image Change Detection Based on Deep
Denoising and CNN.” IET Image Processing 13 (9): 1509–1515. doi:10.1049/iet-ipr.2018.5172.
Choi, H., and J. Jeong. 2019. “Speckle Noise Reduction Technique for Sar Images Using Statistical
Characteristics of Speckle Noise and Discrete Wavelet Transform.” Remote Sensing 11 (10): 10.
doi:10.3390/rs11101184.
Da Cunha, A. L., J. Zhou, and M. N. Do. 2006. “The Nonsubsampled Contourlet Transform: Theory,
Design, and Applications.” IEEE Transactions on Image Processing 15 (10): 3089–3101. doi:10.1109/
TIP.2006.877507.
Darus, M. S., S. N. Sulaiman, I. S. Isa, Z. Hussain, N. M. Tahir, and N. A. M. Isa (2017). Modified Hybrid
Median Filter for Removal of Low Density Random-valued Impulse Noise in Images. Proceedings -
6th IEEE International Conference on Control System, Computing and Engineering, ICCSCE
2016, November, 528–533,Penang, Malaysia. 10.1109/ICCSCE.2016.7893633
Farhadiani, R., S. Homayouni, and A. Safari. 2019. “Hybrid SAR Speckle Reduction Using Complex
Wavelet Shrinkage and Non-local PCA-based Filtering.” IEEE Journal of Selected Topics in
Applied Earth Observations and Remote Sensing 12 (5): 1489–1496. doi:10.1109/
JSTARS.2019.2907655.
Goodman, J.W. (1984). Statistical Properties of Laser Speckle Patterns. In: Dainty J.C. (eds) Laser
Speckle and Related Phenomena. Topics in Applied Physics, vol 9. Berlin: Springer. https://doi.
org/10.1007/978-3-662-43205-1_2
Hazarika, D., and M. Bhuyan (2013). Despeckling SAR Images in the Lapped Transform Domain. 2013
4th National Conference on Computer Vision, Pattern Recognition, Image Processing and Graphics,
NCVPRIPG 2013, Jodhpur, India. 10.1109/NCVPRIPG.2013.6776255
Horé, A., and D. Ziou (2010). Image Quality Metrics: PSNR Vs. SSIM. Proceedings - International
Conference on Pattern Recognition, ICPR 2010 2366–2369. ,Instanbul, Turkey. 10.1109/
ICPR.2010.579
Hou, B., Z. Wen, L. Jiao, and Q. Wu. 2018. “Target-oriented High-resolution SAR Image Formation via
Semantic Information Guided Regularizations.” IEEE Transactions on Geoscience and Remote
Sensing 56 (4): 1922–1939. doi:10.1109/TGRS.2017.2769808.

Hu, K., Q. Cheng, B. Li, and X. Gao. 2018. “The Complex Data Denoising in MR Images Based on the
Directional Extension for the Undecimated Wavelet Transform.” Biomedical Signal Processing and
Control 39: 336–350. doi:10.1016/j.bspc.2017.08.014.
Huang, H., F. Zhang, Y. Zhou, Q. Yin, and W. Hu (2019). High Resolution SAR Image Synthesis with
Hierarchical Generative Adversarial Networks.IGARSS 2019–2019 IEEE International Geoscience and
Remote Sensing Symposium, 2782–2785, Yokohama, Japan. 10.1109/igarss.2019.8900494
Hurley, P., and M. Simeoni. 2016. “Flexibeam: Analytic Spatial Filtering by Beamforming.” In Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP 2016), 2877–2880.
Jayapal, J., and R. Subban. 2020. “Automated Lion Optimization Algorithm Assisted Denoising
Approach with Multiple Filters.” Multimedia Tools and Applications 79 (5–6): 4041–4056.
doi:10.1007/s11042-019-07803-x.
Jidesh, P., and B. Balaji. 2018. “Adaptive Non-local Level-set Model for Despeckling and Deblurring of
Synthetic Aperture Radar Imagery.” International Journal of Remote Sensing 39 (20): 6540–6556.
doi:10.1080/01431161.2018.1460510.
Kang, J., J. Y. Lee, and Y. Yoo. 2016. “A New Feature-Enhanced Speckle Reduction Method Based on
Multiscale Analysis for Ultrasound B-Mode Imaging.” IEEE Transactions on Biomedical Engineering
63 (6): 1178–1191. doi:10.1109/TBME.2015.2486042.
Keydel, E. R., S. W. Lee, and J. T. Moore. 1996. “MSTAR Extended Operating Conditions: A Tutorial.” Proceedings of SPIE 2757, Algorithms for Synthetic Aperture Radar Imagery III: 228–242.
Kuan, D. T., A. A. Sawchuk, T. C. Strand, and P. Chavel. 1987. “Adaptive Restoration of Images with
Speckle.” IEEE Transactions on Acoustics, Speech, and Signal Processing 35 (3): 373–383.
doi:10.1109/TASSP.1987.1165131.
Lee, J. S. 1983. “Digital Image Smoothing and the Sigma Filter.” Computer Vision, Graphics and Image
Processing 24 (2): 255–269. doi:10.1016/0734-189X(83)90047-6.
Li, Y., H. Gong, D. Feng, and Y. Zhang. 2011. “An Adaptive Method of Speckle Reduction and Feature
Enhancement for SAR Images Based on Curvelet Transform and Particle Swarm Optimization.” IEEE
Transactions on Geoscience and Remote Sensing 49 (8): 3105–3116. doi:10.1109/TGRS.2011.2121072.
Liu, Y., Z. Wang, L. Si, L. Zhang, C. Tan, and J. Xu. 2017. “A Non-reference Image Denoising Method for
Infrared Thermal Image Based on Enhanced Dual-tree Complex Wavelet Optimized by Fruit Fly
Algorithm and Bilateral Filter.” Applied Sciences (Switzerland) 7: 11. doi:10.3390/app7111190.
Lopes, A., R. Touzi, and E. Nezry. 1990. “Adaptive Speckle Filters and Scene Heterogeneity.” IEEE
Transactions on Geoscience and Remote Sensing 28 (6): 992–1000. doi:10.1109/36.62623.
Malik, M., F. Ahsan, and S. Mohsin. 2016. “Adaptive Image Denoising Using Cuckoo Algorithm.” Soft
Computing 20 (3): 925–938. doi:10.1007/s00500-014-1552-x.
Pan, W. T. 2012. “A New Fruit Fly Optimization Algorithm: Taking the Financial Distress Model as an
Example.” Knowledge-Based Systems 26: 69–74. doi:10.1016/j.knosys.2011.07.001.
Panetta, K., L. Bao, and S. Agaian. 2016. “Sequence-to-Sequence Similarity-Based Filter for Image
Denoising.” IEEE Sensors Journal 16 (11): 4380–4388. doi:10.1109/JSEN.2016.2548782.
Panigrahi, S. K. (2019). 2019 International Conference on Wireless Communications, Signal Processing
and Networking, Speckle noise removal by total variation and curvelet coefficient shrinkage of
residual noise, WiSPNET 2019, 101–106, Chennai, India. 10.1109/WiSPNET45539.2019.9032763
Parrilli, S., M. Poderico, C. V. Angelino, and L. Verdoliva. 2012. “A Nonlocal SAR Image Denoising
Algorithm Based on LLMMSE Wavelet Shrinkage.” IEEE Transactions on Geoscience and Remote
Sensing 50 (2): 606–616. doi:10.1109/TGRS.2011.2161586.
Rai, H. M., and K. Chatterjee. 2019. “Hybrid Adaptive Algorithm Based on Wavelet Transform and
Independent Component Analysis for Denoising of MRI Images.” Measurement: Journal of the
International Measurement Confederation 144: 72–82. doi:10.1016/j.measurement.2019.05.028.
Sahebi, M. R., and A. Heidarian. 2015. “Criterion for Designing Adaptive Filters Based on Segregation
of Grey Levels in SAR Images.” Electronics Letters 51 (12): 935–937. doi:10.1049/el.2014.4178.
Sauvola, J., and M. Pietikäinen. 2000. “Adaptive Document Image Binarization.” Pattern Recognition
33 (2): 225–236. doi:10.1016/S0031-3203(99)00055-2.

Selesnick, I. W., R. G. Baraniuk, and N. G. Kingsbury. 2005. “The Dual-tree Complex Wavelet
Transform.” IEEE Signal Processing Magazine 22 (6): 123–151. doi:10.1109/MSP.2005.1550194.
Şendur, L., and I. W. Selesnick. 2002. “Bivariate Shrinkage Functions for Wavelet-based Denoising
Exploiting Interscale Dependency.” IEEE Transactions on Signal Processing 50 (11): 2744–2756.
doi:10.1109/TSP.2002.804091.
Shan, D., G. Cao, and H. Dong. 2013. “LGMS-FOA: An Improved Fruit Fly Optimization Algorithm for
Solving Optimization Problems.” Mathematical Problems in Engineering 2013: 1–9. doi:10.1155/
2013/108768.
Simi, V. R., D. R. Edla, J. Joseph, and V. Kuppili (2019). Prospect of Stein’s Unbiased Risk Estimate as
Objective Function for Parameter Optimization in Image Denoising Algorithms - A Case Study on
Gaussian Smoothing Kernel. 2019 International Conference on Data Science and Engineering, ICDSE
2019, 149–153, Patna, India. 10.1109/ICDSE47409.2019.8971487
Sivaranjani, R., S. M. M. Roomi, and M. Senthilarasi. 2019. “Speckle Noise Removal in SAR Images
Using Multi-Objective PSO (MOPSO) Algorithm.” Applied Soft Computing Journal 76: 671–681.
doi:10.1016/j.asoc.2018.12.030.
Starck, J. L., J. Fadili, and F. Murtagh. 2007. “The Undecimated Wavelet Decomposition and Its
Reconstruction.” IEEE Transactions on Image Processing 16 (2): 297–309. doi:10.1109/TIP.2006.887733.
Suresh, S., and S. Lal. 2017. “Two-Dimensional CS Adaptive FIR Wiener Filtering Algorithm for the
Denoising of Satellite Images.” IEEE Journal of Selected Topics in Applied Earth Observations and
Remote Sensing 10 (12): 5245–5257. doi:10.1109/JSTARS.2017.2755068.
Toda, H., and Z. Zhang. 2017. “Hilbert Transform Pairs Oforthonormal Bases of Chromatic-scale
Wavelets.” International Conference on Wavelet Analysis and Pattern Recognition 1 (7): 115–121.
doi:10.1109/ICWAPR.2017.8076674.
Tomassi, D., D. Milone, and J. D. B. Nelson. 2015. “Wavelet Shrinkage Using Adaptive Structured
Sparsity Constraints.” Signal Processing 106: 73–87. doi:10.1016/j.sigpro.2014.07.001.
Vimalraj, C., S. Esakkirajan, and P. Sreevidya (2018). DTCWT with Fuzzy Based Thresholding for
Despeckling of Ultrasound Images. 2017 International Conference on Intelligent Computing,
Instrumentation and Control Technologies, ICICICT 2017, 2018-Janua, 515–519, Kerala, India.
10.1109/ICICICT1.2017.8342616
Wang, J., T. Zheng, P. Lei, and X. Bai. 2018. “Ground Target Classification in Noisy SAR Images Using
Convolutional Neural Networks.” IEEE Journal of Selected Topics in Applied Earth Observations and
Remote Sensing 11 (11): 4180–4192. doi:10.1109/JSTARS.2018.2871556.
Wang, P., H. Zhang, and V. M. Patel. 2017. “SAR Image Despeckling Using a Convolutional Neural
Network.” IEEE Signal Processing Letters 24 (12): 1763–1767. doi:10.1109/LSP.2017.2758203.
Xu, Z., H. C. Li, Q. Shi, H. Wang, M. Wei, J. Shi, and Y. Shao. 2019. “Effect Analysis and Spectral
Weighting Optimization of Sidelobe Reduction on SAR Image Understanding.” IEEE Journal of
Selected Topics in Applied Earth Observations and Remote Sensing 12 (9): 3434–3444. doi:10.1109/
JSTARS.2019.2925420.
Yuan, Y., J. Guan, and J. Sun. 2019. “Blind SAR Image Despeckling Using Self-supervised Dense
Dilated Convolutional Neural Network.” ArXiv (v1): 1–12.
Yue, D.-X., F. Xu, A. C. Frery, and Y.-Q. Jin (2019). SAR Image Generation with Semantic-Statistical
Convolution. IGARSS 2019–2019 IEEE International Geoscience and Remote Sensing Symposium,
9999–10002, Yokohama, Japan. 10.1109/igarss.2019.8900225
Zhai, J., X. Dang, F. Chen, X. Xie, Y. Zhu, and H. Yin (2019). SAR Image Generation Using Structural
Bayesian Deep Generative Adversarial Network. 2019 Photonics and Electromagnetics Research
Symposium - Fall, PIERS - Fall 2019 - Proceedings, 1386–1392, Xiamen, China. 10.1109/PIERS-
Fall48861.2019.9021403
Zhang, Q., Q. Yuan, J. Li, Z. Yang, and X. Ma. 2018. “Learning a Dilated Residual Network for SAR
Image Despeckling.” Remote Sensing 10 (2): 1–18. doi:10.3390/rs10020196.
Zhou, F., L. Wang, X. Bai, and Y. Hui. 2018. “SAR ATR of Ground Vehicles Based on LM-BN-CNN.” IEEE
Transactions on Geoscience and Remote Sensing 56 (12): 7282–7293. doi:10.1109/TGRS.2018.2849967.
