
Biomedical Signal Processing and Control 66 (2021) 102480

Contents lists available at ScienceDirect

Biomedical Signal Processing and Control


journal homepage: www.elsevier.com/locate/bspc

An image quality enhancement scheme employing adolescent identity search algorithm in the NSST domain for multimodal medical image fusion

Jais Jose a,*, Neha Gautam b, Mohit Tiwari c, Tripti Tiwari d, Arjun Suresh e, Vinu Sundararaj f,**, Rejeesh MR f,**

a Amity Institute of Geoinformatics and Remote Sensing, Amity University, Noida, India
b Faculty of Engineering & Technology, JAIN (Deemed-to-be University), Bengaluru, Karnataka, India
c Department of Computer Science and Engineering, Bharati Vidyapeeth's College of Engineering, New Delhi, India
d Management Studies, Bharati Vidyapeeth Deemed University, India
e Amity Institute of Environmental Science, Amity University, Noida, India
f Department of Electronics and Communications, Anna University, Chennai, India

A R T I C L E  I N F O

Keywords:
Non-subsampled Shearlet transform
Multi-modal medical image fusion
Magnetic resonance
PET
AISA

A B S T R A C T

Image fusion has rapidly become a necessary technique in many fields, and evaluating the efficiency of the various fusion technologies, both analytically and objectively, is an essential task. Image fusion is now inseparable from the medical field, since medical images play a crucial role when radiologists and doctors diagnose and identify diseases at an early stage. The different modalities used in clinical applications each offer unique information available in no other form, and to diagnose diseases with high accuracy clinicians require data from more than one modality. Multimodal image fusion has therefore gained wide popularity in the medical field: it enhances the accuracy of clinical diagnosis by fusing the complementary information present in more than one image. Obtaining an optimal result while reducing the cost and time of multimodal medical image fusion is a critical problem. In this paper, a new multi-modality medical image fusion algorithm based on the Adolescent Identity Search Algorithm (AISA) in the Non-Subsampled Shearlet Transform (NSST) domain is proposed to optimize the fused image and to reduce the computational cost and time. The NSST is a multi-directional and multi-dimensional example of a multiscale wavelet transform. The input source images are first decomposed into NSST subbands. The boundary measure is modulated by the AISA, which fuses the sub-bands in the NSST domain, thereby reducing the complexity and increasing the computational speed. The proposed method is tested on real disease datasets, including Glioma, mild Alzheimer's, and encephalopathy with hypertension, that contain similar pairs of images, and is analyzed with evaluation measures such as entropy, standard deviation, structural similarity index measure, mutual information, average gradient, the Xydeas and Petrovic metric, peak signal-to-noise ratio, and processing time. The experimental findings and discussions indicate that the proposed algorithm outperforms other approaches and offers high-quality fused images for an accurate diagnosis.

1. Introduction

The critical objective of analysis is to separate the most helpful information present in the processed images. Two schemes serve this purpose of separating the underlying information: image registration and image fusion. The aim of registering images is to align images that are linked to each other [1]. Image registration, also called image matching, is the process of bringing two or more images together based on their external appearance. Medical image registration seeks the optimal spatial transformation that aligns the underlying anatomical structures, and it is employed in various medical applications such as motion tracking, image reconstruction, image guidance, and dose accumulation [2,49-55].

* Corresponding author.
** Corresponding authors.
E-mail addresses: jaisjose.phd@gmail.com (J. Jose), vinovinu2020@gmail.com (V. Sundararaj), rejeeshmr@gmail.com (R. MR).

https://doi.org/10.1016/j.bspc.2021.102480
Received 25 August 2020; Received in revised form 2 December 2020; Accepted 30 January 2021
Available online 19 February 2021
1746-8094/© 2021 Elsevier Ltd. All rights reserved.

The combination of multiple images with special features to create a single fused image, using one or several modalities, in order to achieve an enhanced result is known as image fusion. The method has its footprint in numerous fields such as robotics, forensics, image enhancement, medical imaging, surveillance, remote sensing, and medical diagnosis, and it has proven useful in medical applications (MRI, CT, PET, etc.) for the pre-diagnosis and classification of diseases [3]. Specifically, medical imaging plays a major part in the analysis of several body parts, namely the bone marrow, brain, stomach, liver, lungs, bones, teeth, soft tissues, and mouth. Image fusion is also an increasingly reliable tool for image-guided surgery, linking the pre-operative images with the patient's actual anatomy during surgical planning. Several images of organs are generated by utilizing various sensor methods [4].

Feature extraction methods enhance the pathological data by discarding the ineffective regions, which increases the speed of diagnosis. These features hold the important information, and the features linking the various image modalities are needed for fusion so that the data contained in them can be analyzed accurately. Image fusion first requires a spatial correspondence among the image constituents, called registration. Feature-based image registration detects important points on the images to be registered, and these points are coordinated by linear or non-linear functions [5]. Traditional image registration is an iterative optimization process that requires appropriate feature extraction, the selection of a similarity measure and a transformation model, and finally a method for searching the parameter space. Registration techniques take a pair of images as input, of which one image is fixed and the other is the moving image. The similarity measure is computed first and the relations between the input images are observed. The optimization algorithm then evaluates new transformation parameters, and with these parameters the moving image is mapped to the newly aligned image [6].

In the case of fusion, the spatial deformation of the image caused by soft-tissue motion during an interventional process must be accounted for. Image registration allows the integration of different images into one representation, allowing simpler and more precise interpretation of the complementary data. Consequently, deviations in multi-modal fused images of the same or different patients are estimated by local arithmetic variations. Such fused images can then be used to enhance surgery planning, medical diagnosis, and intra-operative navigation [7]. Further, the method also serves to improve the spatial quality of medical images acquired from different modalities and to reduce the noise that occurs during acquisition. The first step is the integration process that brings the various modalities into spatial alignment, called image registration. The second step is image fusion, in which the combined data from the multiple input images are displayed [8].

Recent studies show that multi-parametric MRI (mp-MRI) obtained for diagnosis could decrease unwanted biopsies, and their possible dangerous side effects, in men with elevated prostate-specific antigen (PSA). Ultrasound lacks the sensitivity to be utilized as the main diagnostic modality in contrast to mp-MRI, so pre-biopsy mp-MRI is fused with the TRUS-guided biopsy to improve the yield of aggressive cancers [9]. In our work, a novel multimodal image fusion method is developed in which the boundary measure is modulated by the Adolescent Identity Search Algorithm (AISA) in the NSST domain to enhance the fused image quality. Previous methods exhibit several disadvantages such as low accuracy, high sensitivity, high error, computational overhead, and poor reliability. The proposed methodology, an image quality enhancement scheme employing the Adolescent Identity Search Algorithm in the NSST domain for multimodal medical image fusion, improves the accuracy and thereby reduces the sensitivity, erroneous predictions, and computational overhead. This paper fulfils the following objectives:

• The input source image is decomposed into low- and high-frequency subbands by employing a non-subsampled Shearlet transform.
• The proposed algorithm is evaluated by comparing its performance on various medical image pairs such as CT-PET, MR-CT, CT-MR, and MR-T2/MR-T1 image fusion.
• The proposed approach obtains better fusion results than other methods for the multimodal medical images in the datasets.
• The experimental outcome illustrates that the proposed approach executes better than the majority of previous fusion techniques.

The remainder of the paper is arranged as follows. Previous literature on medical image fusion is reviewed in Section 2. Section 3 describes the proposed methodology for medical image fusion. The metrics used for evaluation are presented in Section 4. The experimental evaluations and discussions are reported in Section 5. Section 6 concludes the paper.

2. Summary of previous works

Many registration and fusion techniques have been developed, evaluated, and reviewed. Li D et al. (2020) demonstrated an improved RANSAC approach for handling the electronic noise and scattering issues in medical image processing. The images used for fusion were X-ray images, and the performance metrics were information entropy, contrast ratio, average gradient, and mean; the accuracy of the fusion remains to be improved in future work [10]. Wu J et al. (2020) proposed integrating the nonsubsampled Contourlet transform (NSCT) with a pulse-coupled neural network for processing the higher-frequency images. The metrics used for evaluation were information entropy, average gradient, mutual information, cross-entropy, and standard error [11].

Chu Y et al. quantified the fusion and registration display error of augmented-reality-based nasal endoscopic surgery in order to study the link between the possible error sources and misleading cues in surgical navigation, and so reduce their disturbing effect on the surgical process. A hierarchical rendering method was proposed to adjust the fused images to the required visual perception. The data used were endoscopic images, real-time ERT positions, CT images, and 3D-scanned depth images [12]. Tian X et al. proposed a feature-based registration method that separates the vessels via the ISODATA algorithm and morphological procedures on the OCT image, while Generalized Minimax-Concave functions maintain the overall convexity. The data used for the fusion of images were the OCT image and the original confocal image [13].

Polinati S et al. combined the corresponding information from several dissimilar imaging modalities such as MRI, SPECT, and PET, reducing distortion by employing the empirical wavelet transform (EWT) together with a local energy maxima (LEM) method. The basis function was optimally chosen by the EWT based on the character of the input image, and the fundamental information is highlighted by the LEM through energy concentration. The metrics used for evaluation were entropy, mutual information, edge-based similarity measure, visual information fidelity, and structural similarity index measure. This work will be extended to select the basis function for improved decision making and object discrimination in medical imaging [14].

Sandhya S et al. utilized hybrid image fusion with numerous fusion rules: the Shearlet transform is combined with principal component analysis (PCA), the MRI and CT images are converted using the Shearlet transform, and PCA is applied to the converted images [15]. Biswas B et al. presented a new fusion method for PET-MRI images utilizing the 2-D discrete Fourier and Karhunen-Loeve transforms together with singular value decomposition in the Shearlet domain. The domain was employed to screen the coefficients of the ST, and the KLT and 2DFT were employed to calculate the best low-pass ST coefficients in every decomposed image. Metrics such as entropy, average gradient, and MI were used for evaluation, and the images used for fusion were PET and MRI images [16].

El-Hoseny HM et al. obtained the best wavelet-based medical image fusion results by employing various wavelet families as well as PCA based on a modified central force optimization algorithm, maximizing the fusion quality so as to establish an exact diagnosis and optimal therapy. The images used for fusion were MRI, PET, and CT images, increasing the image quality, visualization, and diagnostic accuracy [17]. Maqsood S et al. introduced a multimodal image fusion approach based on two-scale image decomposition and sparse representation. The source images were first pre-processed by a contrast improvement method for better visualization. The metrics used to assess the image quality were spatial similarity measure, entropy, mutual information, visual information fidelity, and feature mutual information, evaluated on the multimodal image databases Med-1, Med-2, Med-3, and Med-4; the algorithm can further be applied to other image processing tasks [18]. Zhang et al. proposed a new image fusion method with global-regional-local rules to overcome the problem of wrongly interpreting the source image. The source images are statistically correlated by the G-CHMM, R-CHMM, and L-CHMM models in the high-subband region. The high-pass subbands were fused by the global-regional-local CHMM design and choose-max rules based on a local gradient measure, and the fused images were finally extracted by exploiting the inverse NSST [46].

Xiaoxiao Li et al. introduced a method known as Laplacian decomposition for multi-modal medical image fusion to overcome issues such as color distortion, blurring, and noise. The overlapping and non-overlapping domain concept was applied to combine complementary and redundant information, respectively. The high-frequency sub-band fusion image was reframed using a high-frequency scheme along with a local mean and a global decision graph [47]. Sneha Singh proposed a two-level hybrid scheme to improve the structural detail by eliminating noise and artifacts. A convolutional neural network was employed for structural patch clustering of the decomposed base. Post-processing of the decision map was introduced to generate the fusion score, and a detail-layer fusion rule was applied to map the structural details of the layers effectively. All the resulting images are integrated to obtain the final fused image. The survey of medical image fusion methods is summarized in Table 1.

Table 1
Survey on medical image fusion methods.

Reference | Pre-processing method | Decomposition method | Fusion rule, low frequency | Fusion rule, high frequency
Maqsood S et al. (2020) [18] | Histogram equalization | Two-scale image decomposition | Base layer fusion | Approximate layer fusion
Liu X et al. (2016) [19] | Edge-preserving smoothing filters | GMSF | Regional weighted sum of pixel energy and gradient energy | PCNN
Liu X et al. (2018) [20] | Smoothing filter | MFDF and NSST | Texture component fusion by the maximum selection rule | Approximation component fusion by NSST
Parvathy VS et al. (2019) [21] | Median filter | Discrete wavelet transform (DWT) | Approximation fusion | Detailed fusion
Rajalingam B et al. (2019) [22] | Image scaling | DFRWT with DTCWT | Average fusion rule | Maximum fusion rule
Rajalingam B et al. (2019) [23] | Smoothing filter | Nonsubsampled contourlet transform (NSCT) | Gabor filter | Orthogonal matching pursuit (OMP) and manifold-based conjugate gradient
Polinati S et al. (2020) [14] | RGB color space to YUV color | EWT | LEM rule | LEM fusion rule
Zhancheng Zhang et al. (2020) [46] | Edge probability density function | NSST transformation | Choose-max rule based on the local gradient measure | Global-regional-local CHMM
Xiaoxiao Li et al. (2020) [47] | LEM fusion rule | Laplacian decision graph decomposition scheme | OD and NOD fusion rule (overlapping and non-overlapping domains) | IRS fusion rule
Sneha Singh (2020) [48] | RGB color space transform | Convolutional neural network (CNN) | Post-processing of decision map | Detail layer fusion

3. Proposed methodology

Image fusion is the process of collecting the required data from various sources to obtain a more informative result. Techniques based on multi-resolution analysis have been extensively employed in this area owing to their better performance on medical images. The steps involved in the proposed multimodal medical image fusion are pre-processing, registration, and image fusion.

3.1. Pre-processing

The pre-processing, or data enhancement, phase is applied to minimize the noise in the images. Image pre-processing consists of three steps: smoothing, sharpening, and scaling. A Gaussian smoothing filter is used for the smoothing process. It smoothes the image by evaluating weighted averages within the filter window [24]; it blurs the image and removes noise, giving more weight at the central axis and generating a clean smoothing effect without side effects. The unique Gaussian smoothing mask is obtained as follows. The index (a, b) of the middle element of the 5 × 5 mask is set to (0, 0), so the whole mask ranges from (-2, -2) to (2, 2). The Gaussian mask value for an element of the 5 × 5 mask is

g_d(a, b) = e^(-(a^2 + b^2)/2σ^2)  (1)

where a, b ∈ {-2, -1, 0, 1, 2} and σ ∈ [0.1, 5]. Moreover, the sum of the weights of the mask affects the overall intensity of the resulting image. Some masks have positive coefficients whose summed value equals 1, while others have negative weights and sum to 0. Masks with negative values may produce negative pixels, which affects the smoothing; hence normalization is required to map the negative values into the positive range. Subsequently, every element is normalized with respect to the sum of the 25 elements, described as

g_d(a, b; σ) = (1/(√(2π) σ)) e^(-(a^2 + b^2)/2σ^2)  (2)

where 1/(√(2π) σ) is the normalization constant, reflecting the fact that the integral over the exponential function is not equal to unity, i.e.,

∫ ∫ e^(-(a^2 + b^2)/2σ^2) da db = √(2π) σ.

The second step in the pre-processing stage is image sharpening. Sharpening increases the visible image sharpness by enhancing the contrast along the edges, and the result is applied to the original image. Edge sharpening is the filter used for improving the edge contrast of the image.
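As a concrete illustration, the 5 × 5 Gaussian mask of Eqs. (1) and (2) can be sketched as below. This is a minimal NumPy example; the function name and the sum-to-one normalization (the practical form of the normalization described above) are our own choices, not part of the original method:

```python
import numpy as np

def gaussian_mask(sigma=1.0, radius=2):
    """Build the (2*radius+1) x (2*radius+1) Gaussian smoothing mask of
    Eq. (1), indexed from (-radius, -radius) to (radius, radius), then
    normalize it so that the weights sum to one."""
    a, b = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    mask = np.exp(-(a**2 + b**2) / (2.0 * sigma**2))  # Eq. (1)
    return mask / mask.sum()                          # normalization step

mask = gaussian_mask(sigma=1.0)
# The mask is 5 x 5, peaks at the central element (0, 0), and sums to 1,
# so smoothing with it preserves the overall image intensity.
```

Smoothing is then a 2-D convolution of the source image with this mask (e.g., with scipy.ndimage.convolve).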


This edge sharpening is used to improve image quality. The last step in the pre-processing stage is scaling; here we use image-centered scaling. The mean value of the whole database is evaluated and then subtracted from every image. When scaling the graphics image, a new image with lower or higher pixel values is generated.

3.2. Image registration

Image registration is defined as the procedure that aligns two or more images by estimating a spatial deformation. It has become the most popular step in medical image fusion since it can map images of different modalities obtained from the same or different patients at different times. Predominantly, image registration is used to identify the corresponding anatomical or functional locations of the static images with respect to the reference images. The proposed method is implemented with the aid of the B-spline registration method. Spline-based image registration is the most widely used technique in fields such as medical image fusion and computer graphics. The added advantage of the B-spline registration method is that it provides elastic deformation registration, i.e., it guarantees edge smoothness as well as transparency of the fused images [35]. To proceed with B-spline registration, two image sets from the same patient but at different time intervals are taken. Initially, by using B-spline registration, the coordinates v in the reference image I_2 are mapped to the first image I_1 through the mapping function M(v) given by the geometric transformation M. The transformation function ε̂ minimizes the image divergence function D, described as

ε̂ = arg min_ε D(I_2, I_1 * M(v | ε))  (3)

The transformation function ε̂ decreases the image divergence function D for the registration of the test image with respect to the second (reference) image. The non-rigid transformation is an important part of the construction of the image registration method. Therefore, the cubic B-spline registration method is employed for the deformations, and the kernel-based B-spline is applied to compute the deformation between the control points. The deformation of the test image at any control point is interpolated by the kernel functions as

c(v | ∂) = Σ_y ∂_y χ_3((v - η_y)/Δα)  (4)

where χ_3(v) = χ_3(a) χ_3(b) χ_3(c) represents the kernel function of the cubic B-spline and ∂_y represents the control points nearest to v. The described deformation coefficient c(v | ∂) is applied to the test image as

M(v | ε) = c(v | ∂)  (5)

where v = [a, b, c]^d is a location in the reference image and ∂_y represents the set of deformation functions. The transformation function is then described as

ε = ∂_y  (6)

3.3. Non-subsampled Shearlet transform (NSST)

The nonsubsampled Shearlet transform is a multi-directional and multi-dimensional representation of the wavelet transform that provides multidirectional and multiscale analysis [25]. Fig. 1 shows the schematic of the 2-level NSST decomposition.

Fig. 1. Schematic of 2-level NSST decomposition.

The NSLP (nonsubsampled Laplacian pyramid) first applies a two-level decomposition that splits the source image into low-pass and high-pass filtered images. This process continues at every iteration, with the NSLP decomposing the low-pass filtered image of the upper layer until the required number of decomposition layers is reached. The result is a low-pass subband together with a series of high-pass directional subbands. The two-dimensional affine system with composite dilations was founded by Easley et al. [26]:

X_IS = { ϕ_{k,l,n}(a) = |det I|^(k/2) ϕ(S^l I^k a - n) : k, l ∈ C, n ∈ C^2 }  (7)

where ϕ represents the mother function used to create the basis functions, and X_IS represents the basis functions generated by scale, shift, and orientation alterations of ϕ. I indicates the anisotropic matrix, the shear matrix is indicated by S, and k, l, and n indicate the scale, direction, and shift factors, correspondingly. I and S are invertible 2 × 2 matrices with |det S| = 1. The anisotropic matrix I takes the form [i 0; 0 i^(1/2)] or [i^(1/2) 0; 0 i], in which i > 0 manages the shearlet scale. The shear matrix S takes the form [1 s; 0 1] or [1 0; s 1] and manages only the Shearlet direction. The transform function (by Singh et al. [27]) is then applied as shown below:

ϕ^(0)_{k,l,n}(a) = 2^(3k/2) ϕ^(0)(S_0^l I_0^k a - n)  (8)

ϕ^(1)_{k,l,n}(a) = 2^(3k/2) ϕ^(1)(S_1^l I_1^k a - n)  (9)

in which k ≥ 0, -2^k ≤ l ≤ 2^k, n ∈ C^2, ϕ̂^(0)(ζ) = ϕ̂^(0)(ζ_1, ζ_2) = ϕ̂^(1)(ζ_1) ϕ̂^(1)(ζ_2/ζ_1), and

ϕ̂^(1)(ζ) = ϕ̂^(1)(ζ_1, ζ_2) = ϕ̂^(1)(ζ_2) ϕ̂^(1)(ζ_1/ζ_2)  (10)

where ζ = (ζ_1, ζ_2) ∈ R^2 and the ϕ̂^(i) are the basis functions supported on particular regions of space. The discrete transform is able to handle discontinuities efficiently [28]. The NSST decomposition into low- and high-frequency sub-bands is explained below. The input source images 1 and 2 are decomposed by the NSST into one estimation sub-band each, X_low and Y_low, respectively, and into the detailed sub-band sequences X_high^(D,k) and Y_high^(D,k), respectively.

3.3.1. Low-frequency sub-band fusion

Compared with other multi-scale decomposition tools, the low-frequency sub-band produced by the non-subsampled Shearlet transform exhibits the average gray values as well as the contours of the original images. Most often, the coefficients of the low-frequency components are integrated by taking a simple linear weighted average [36]. This can result in reduced transparency and information loss in the fused image. Hence the regional relationship and the contour information of the source images among adjoining coefficients must be sustained [37]. Moreover, the low-frequency sub-band also holds the energy of the source images; considering the above norms, the regional energy attribute [38] can be described as

rE[C_low(l, m)] = Σ_{j=-1}^{1} Σ_{k=-1}^{1} W × |C_low(l + j, m + k)|^2  (11)

where C_low indicates either X_low or Y_low. The regional energy [38] is employed to calculate the initial fusion decision map with the subsequent equation:

map_j(l, m) = { 1, if rE[X_low(l, m)] ≥ rE[Y_low(l, m)] ; 0, if rE[X_low(l, m)] < rE[Y_low(l, m)] }  (12)

The above operation is followed by consistency verification with a majority filter of window size 3 × 3, which replaces the present coefficient of map_j(l, m) by 1 if the number of 1s in the 3 × 3 window is larger than or equal to 5. This implies that if the majority of the window coefficients come from the first image while the central coefficient comes from the second image, then the central pixel is also taken from the first image. This procedure decreases the noise effect and makes the fused image more homogeneous [29]. The decision map for the fusion, evaluated after the consistency verification, is obtained as follows:

map_F(l, m) = { 1, if Σ_W map_j(l, m) ≥ 5 ; 0, elsewhere }  (13)

The coefficients of the low-frequency sub-band fusion are computed from the above-stated decision map as follows:

f_low(l, m) = { X_low(l, m), if map_F(l, m) = 1 ; Y_low(l, m), if map_F(l, m) = 0 }  (14)

3.3.2. Fusion of high-frequency sub-bands

The high-frequency sub-bands include the comprehensive edge information of the source images; thus a morphological-gradient-based activity measure is recommended in this article to combine the high-frequency sub-bands of the NSST decomposition. The morphological gradient (MG) utilizes the two fundamental morphological operators, grayscale dilation and erosion. Numerically, the MG [29] of the detailed sub-bands X_H^(D,k) and Y_H^(D,k) of the source images 1 and 2 is derived as follows:

MG(C_H^(D,k)) = (C_H^(D,k) ⊕ Y_S) - (C_H^(D,k) Θ Y_S)  (15)

where C_H^(D,k) indicates either X_H^(D,k) or Y_H^(D,k), i.e., the detailed sub-bands of source images 1 and 2, respectively, and Y_S represents the unit matrix used as the structuring element, described as

Y_S = [ 1 1 1 1 1 ; 1 1 1 1 1 ; 1 1 1 1 1 ; 1 1 1 1 1 ; 1 1 1 1 1 ]  (16)

The above 5 × 5 structuring matrix separates the lines, corners, and edges more efficiently than other matrices for the high-frequency sub-bands. The MG values along the corner and edge areas are large, while the values in smooth regions are small. The MG of the sub-band is fed as the external stimulus to the AISA. The details of the AISA algorithm are explained below.

3.4. Adolescent identity search algorithm (AISA)

According to the WHO, an adolescent is a person aged between 10 and 19 [30]. Adolescent identity development, for the purposes of the search, is represented by three major cases:

• The adolescent creates a personal identity by examining and explaining values, social attitudes, behaviors, and beliefs.
• Identity is formed by imitating a role model; those whose role models have high status, prestige, and power are observed.
• The adolescent may assume a negative identity option such as early sexual behavior, smoking, substance use, and harassment within the group.

Considering the above statements, the process of obtaining optimization by exploiting adolescent behavior can be described as follows. The maturity of an adolescent depends completely on the individual, since human intelligence varies from person to person; correspondingly, the optimization time interval can also differ. The next factor to be considered is the cost function: some individuals behave better within the structure of community norms, while the behavior of others may vary. Hence the optimal cost function can be obtained by considering well-behaving individuals. If none of the adolescents reaches the best value, it is treated as an optimization issue, which is a possible situation in the optimization process. Identity search behavior is the feature proposed by Marcia to overcome these issues and attain the identity status of committed individuals, and it can be expressed numerically as below.

3.4.1. AISA implementation

The superscripts (.)^-1 and (.)^T indicate the inverse and transpose operators, respectively. Consider the AISA executed on the unconstrained single-objective optimization problem described as

min F(a_1, a_2, ..., a_m),  y_k ≤ a_k ≤ ȳ_k,  k = 1, 2, ..., m  (17)

The objective function to be decreased is F(.), a_k indicates the kth decision variable, the number of decision variables is m, and y_k and ȳ_k indicate the lower and upper bounds on the kth variable, correspondingly [30].


3.4.2. Random initialization

The AISA begins by creating the initial population randomly inside the solution space boundary. Suppose there are M adolescents in the group and the individuality of every adolescent comprises m identity attributes. The identity of the jth adolescent a^j (j = 1, 2, ..., M) may then be denoted by a vector obtained as

a_k^j = y_k + V(0, 1)_k × (ȳ_k - y_k),  j = 1, 2, ..., M;  k = 1, 2, ..., m  (18)

where a_k^j indicates the kth identity attribute of the jth adolescent and V(0, 1) represents a uniformly distributed random number in the interval [0, 1]. The population of all the adolescents with their individualities is given by the matrix

A = [ a_1^1 a_2^1 ⋯ a_m^1 ; a_1^2 a_2^2 ⋯ a_m^2 ; ⋮ ; a_1^M a_2^M ⋯ a_m^M ]_{M×m}  (19)

where A is termed the population matrix.

3.4.3. Create a new identity

As in several metaheuristic methods, new search agents are generated repeatedly until a predefined termination criterion is satisfied. The formation of identity is numerically modeled as follows.

Case 1: The adolescent imitates by detecting the optimal attributes in the group. The Chebyshev functional-link network (CFLN) is a well-specified orthogonal function approximation scheme that has a fast and precise approximation capacity, particularly for online approximation problems [31]. Chebyshev polynomials are appropriate as approximation functions because they are orthogonal, satisfy recurrence relations, and have a compact definition interval [-1, 1] [39]. For a ∈ [-1, 1], the Chebyshev polynomials {T_l(a)}, l = 0, 1, 2, ..., are produced through the recurrence

T_l(a) := { 1, if l = 0 ; a, if l = 1 ; 2a T_{l-1}(a) - T_{l-2}(a), if l ≥ 2 }  (20)

The CFLNs are built as in [31] and evaluated efficiently. At first, the populations of the CFLN scheme are normalized in the range [-1,

Φ = [ φ_1^1 ⋯ φ_m^1 ; φ_1^2 ⋯ φ_m^2 ; ⋮ ; φ_1^M ⋯ φ_m^M ]  (23)

The weighting parameters of the approximation scheme are calculated employing the LSE [31] as

ŵ = (Φ^T Φ)^-1 Φ^T F = [w_1 ⋯ w_m]^T  (24)

where w_k ∈ ℜ^{1×l} represents the kth input weight vector.

For every matrix component specified in Eqn. (22), the partial suitability value is obtained by employing Eq. (25) and stored as in Eq. (26):

F̂_k^j = φ_k^j w_k  (25)

F̂_p = [ F̂_1^1 F̂_2^1 ⋯ F̂_m^1 ; F̂_1^2 F̂_2^2 ⋯ F̂_m^2 ; ⋮ ; F̂_1^M F̂_2^M ⋯ F̂_m^M ]_{M×m}  (26)

At last, the optimal identity vector of the present population contains the values in the population matrix A of Eqn. (19) corresponding to the row indices of the minima in each column of Eqn. (26), and it can be established as

a_k* = a_k^{n_k},  n_k = arg min_L { F̂_k^L | L = 1, 2, ..., M },  ∀k  (27)

For the first case, the new identity a_new^j of the jth adolescent is evaluated as

a_new^j = a^j - r_1 (a^j - a*)  (28)

In the above equation, r_1 represents a random number in the interval [0, 1], and a* indicates the set of optimum identity attributes estimated from the peers' identity attributes.

Case 2: Identity is created by duplicating a role model who has high power, prestige, and status. The novel identity for the adolescent [30] is achieved as Eqn. (29)
1] as
ajnew = aj − r2 (aq − arn ) (29)
( )
ajk − y k From the above equation, the random number in the interval [0,1] is
a jk = 2 (
̂ ) , j = 1, 2, ..., M ; k = 1, 2, ..., m. (21) represented as r2 and the role model or the optimum individual is rep­
yk − y k resented as arn . For q ∕
= rn, aq represents the qth adolescent.
Case 3: The adolescent will accept unwanted identity attributes like
From the above equation, ̂
j
a k represents the normalized value at the substance usage, smoking, premature sexual action, etc. Let us consider
kth identity attribute of the jth adolescent. The normalized inputs shall the negative identity attribute (av ) represents the arbitrarily chosen
be characterized as the following matrix, component from the population matrix. For this, the new identity of jth
⎡ ⎤ adolescent [30] is represented as

⎢̂a 11 a 12
̂ ⋯ a 1m ⎥
̂ ajnew = aj − r3 (aj − as ) (30)
⎢ 2 ⎥
⎢ a1
̂ = ⎢̂ a 22
̂ ⋯ a 2m ⎥
̂
A ⎢

⎥ (22) From the above equation, row vector of uniformly distributed
⎢ ⋮ ⋮ ⋱ ⋮ ⎥ number in the range [0,1] is represented by r3 . as represents the negative
⎣ M ⎦
a1
̂ aM
̂ 2 ⋯ am
̂ M
identity vector specified as peruses.
M×m

as = [ av av ⋯ av ] T1 × m (31)
A
̂ represents the standardized input matrix employed to the orthogonal
function schemes. First, the regression matrix Φ and the vector φ is Consequently, three cases are merged to be utilized as
defined as follows, if sub-regressor is indicated for each input variable. ⎧
⎨ Case 1 : aj − r1 (aj − a* ), r/4 ≤ 1 3
/
j
/
anew = Case 2 : aj − r2 (aq − arn ), 1 3/ ≤ r4 ≤ 2 3

Case 3 : aj − r3 (aj − as ), 2 3 ≤ r4
(32)
The above equation r4 represents the random number in the range
[0,1] employed as the selection scheme.
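To make the three update cases of Eqs. (28)-(32) and the boundary control of Sect. 3.4.4 concrete, here is a minimal Python sketch. This is not the authors' implementation; the function names and the toy vectors are hypothetical.

```python
import random

def aisa_new_identity(a_j, a_star, a_q, a_rn, a_s):
    """One AISA identity update in the spirit of Eq. (32): with equal
    probability, mimic the best attributes (Case 1), imitate a role
    model (Case 2), or move away from a negative identity (Case 3)."""
    r4 = random.random()
    if r4 <= 1 / 3:
        # Case 1, Eq. (28): a_new = a_j - r1 * (a_j - a*)
        r1 = random.random()
        return [x - r1 * (x - s) for x, s in zip(a_j, a_star)]
    elif r4 <= 2 / 3:
        # Case 2, Eq. (29): a_new = a_j - r2 * (a_q - a_rn)
        r2 = random.random()
        return [x - r2 * (q - rn) for x, q, rn in zip(a_j, a_q, a_rn)]
    else:
        # Case 3, Eq. (30): a_new = a_j - r3 * (a_j - a_s), r3 elementwise
        return [x - random.random() * (x - s) for x, s in zip(a_j, a_s)]

def boundary_control(a, lower, upper):
    """Sect. 3.4.4: components that leave [lower_k, upper_k] are
    re-drawn uniformly inside their interval, as in Eq. (18)."""
    return [v if lo <= v <= hi else random.uniform(lo, hi)
            for v, lo, hi in zip(a, lower, upper)]

random.seed(1)
child = aisa_new_identity([0.2, 0.8], [0.5, 0.5], [0.3, 0.7],
                          [0.6, 0.4], [0.9, 0.1])
print(boundary_control(child, [0.0, 0.0], [1.0, 1.0]))
```

In a full optimizer this update, followed by boundary control and the per-evaluation population update of Sect. 3.4.5, would be repeated until the stopping criterion is met.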


Fig. 2. Proposed medical image fusion approach.
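To illustrate the morphological-gradient activity measure of Sect. 3.3.2 and the activity-based pixel selection used for the high-frequency sub-bands (cf. Eq. (34)), here is a minimal pure-Python sketch. The names are hypothetical, and a real implementation would operate on NSST coefficient sub-bands rather than raw patches.

```python
def morphological_gradient(img, radius=1):
    """Grayscale morphological gradient: flat dilation (local max)
    minus flat erosion (local min) over a square neighbourhood."""
    h, w = len(img), len(img[0])
    grad = [[0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            # Neighbourhood clipped at the image border.
            window = [img[a][b]
                      for a in range(max(0, i - radius), min(h, i + radius + 1))
                      for b in range(max(0, j - radius), min(w, j + radius + 1))]
            grad[i][j] = max(window) - min(window)
    return grad

def fuse_by_activity(x, y):
    """Keep, at every pixel, the coefficient whose MG activity is
    larger, in the spirit of the selection rule of Eq. (34)."""
    ta, tb = morphological_gradient(x), morphological_gradient(y)
    return [[xa if taa >= tbb else ya
             for xa, ya, taa, tbb in zip(rx, ry, rta, rtb)]
            for rx, ry, rta, rtb in zip(x, y, ta, tb)]

patch = [[10, 10, 10, 10],
         [10, 50, 50, 10],
         [10, 50, 50, 10],
         [10, 10, 10, 10]]
print(morphological_gradient(patch)[1][1])  # 40: strong edge activity
```

The gradient is large along intensity transitions and zero in flat regions, which is why it serves as an edge-sensitive activity measure.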

3.4.4. Method of boundary control
When a new individual departs from the specified search space, the boundary-control method is activated: variables that violate their restricted intervals are randomly re-established inside those intervals, and the violators are subjected to Eqn. (18) [34].

3.4.5. Updating method
The updating method decides whether the new identity is admitted into the population. Prior to its application, the fitness value of the adolescent's new identity is estimated. Several metaheuristic algorithms apply the updating method only at the finishing stage of every iteration, whereas AISA applies it at every evaluation. The main aim is to improve the population instantly by inserting the more capable solution. The overall step is iterated until the stopping criterion is satisfied.

In the fusion phase, the grayscale values of the input are evaluated by employing the MG, which does not depend upon neighboring pixels or exponential decay attributes. The proposed image fusion block diagram is illustrated in Fig. 2. It is designed to decrease the AISA complexity and increase computational speed.

f_{j,k}^{C}[m] = MG(C_{high}^{D,k}) \tag{33}

The fused high-frequency sub-band image at the Dth scale and kth direction is evaluated as

f_{high}^{D,k}(l,m) = \begin{cases} X_{high}^{D,k}(l,m), & \text{if } T_A(l,m) \ge T_B(l,m) \\ Y_{high}^{D,k}(l,m), & \text{if } T_A(l,m) < T_B(l,m) \end{cases} \tag{34}

where T_A and T_B represent the firing times of image 1 and image 2, respectively.

3.5. Inverse NSST
The inverse NSST is executed on the fused high- and low-frequency sub-band images, as described in the following equation:

Fused\ image = NSST^{-1}(f_{low}, f_{high}^{D,k}) \tag{35}

4. Evaluation metrics
Performance measures for fusion are employed to compute the effectiveness of the proposed algorithm. The fusion metrics are explained below.

4.1. Entropy
Entropy [44] measures the amount of information included in the image, acquiring values ranging from 0 to 8:

En = -\sum_{i=0}^{n} p(a_i) \log p(a_i) \tag{36}

where a_i represents the gray-scale value of the ith point and p(a_i) its probability.

4.2. Average gradient (Ga)
The amount of change in the textures of the image is called the average gradient. It is evaluated as [44]:

Ga = \frac{1}{(X-1)(Y-1)} \sum_{i=1}^{(X-1)(Y-1)} \sqrt{\frac{(\partial f / \partial a)^2 + (\partial f / \partial b)^2}{2}} \tag{37}

where X and Y signify the dimensions of images a and b, correspondingly.

4.3. Standard deviation (SD)
The standard deviation is utilized to find how much the data differ from the mean; a larger standard deviation indicates greater dispersion of the input data [42]. SD is calculated using the equation

SD = \sqrt{\frac{\sum_{i=1}^{X} \sum_{j=1}^{Y} |F(i,j) - m|^2}{XY}} \tag{38}
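As a concrete illustration of Eqs. (36) and (38) (a sketch, not the paper's code; the function names are hypothetical), entropy and standard deviation of an 8-bit grayscale image can be computed as:

```python
import math

def entropy(img):
    """Shannon entropy, Eq. (36); lies in [0, 8] for 8-bit images."""
    hist = [0] * 256
    n = 0
    for row in img:
        for v in row:
            hist[v] += 1
            n += 1
    return -sum((c / n) * math.log2(c / n) for c in hist if c)

def std_dev(img):
    """Standard deviation of pixel values about their mean, Eq. (38)."""
    vals = [v for row in img for v in row]
    mean = sum(vals) / len(vals)
    return math.sqrt(sum((v - mean) ** 2 for v in vals) / len(vals))

two_tone = [[0, 255], [255, 0]]
print(entropy(two_tone))  # 1.0: two equally likely gray levels
print(std_dev(two_tone))  # 127.5
```

A constant image has zero entropy and zero standard deviation; a well-fused image is expected to score high on both.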


Fig. 3. Input source images for fusion. Set-1 (a) CT and (b) PET of the brain, Set-2 (c) CT and (d) PET of the brain, Set-3 (e) CT and (f) PET of the brain, Set-4 (g) MR and (h) CT of the brain, Set-5 (i) MR-T2 and (j) MR-T1 of the brain, Set-6 (k) MR-T2 and (l) MR-T1, Set-7 (m) CT and (n) MR of the brain, Set-8 (o) CT and (p) PET of the abdomen.

where X and Y indicate the image dimensions and the mean value is indicated as m.

4.4. Mutual information (MI)
This index estimates the quantity of dependence between the two images (X, Y) and provides the disconnection of the joint distribution among them [41]:

I(x, y) = \sum_{x \in X} \sum_{y \in Y} P(x, y) \log \frac{P(x, y)}{P(x) P(y)} \tag{39}

In Eqn. (39), P(x, y) represents the joint probability distribution function, and P(x) and P(y) the marginal probability distribution functions.

MI(x, y, f) = \frac{I(x, f) + I(y, f)}{H(x) + H(y)} \tag{40}

In Eqn. (40), the entropies of x and y are denoted as H(x) and H(y).

4.5. Xydeas and Petrovic metric
It is utilized to compute the amount of edge information transmitted from the source images to the fused image [40]:

Q^{XY/F} = \frac{\sum_{i=1}^{I} \sum_{j=1}^{J} \left( Q_{(i,j)}^{XF} W_{(i,j)}^{XF} + Q_{(i,j)}^{YF} W_{(i,j)}^{YF} \right)}{\sum_{i=1}^{I} \sum_{j=1}^{J} \left( W_{(i,j)}^{XF} + W_{(i,j)}^{YF} \right)} \tag{41}

where Q_{(i,j)}^{XF} and Q_{(i,j)}^{YF} represent the edge-information preservation values, and the weights are represented as W_{(i,j)}^{XF} and W_{(i,j)}^{YF}.

4.6. Structural similarity index measure (SSIM)
SSIM [45] calculates the connection between two regions w_a and w_b of two images a and b:

SSIM(a, b \,|\, w) = \frac{(2 \bar{w}_a \bar{w}_b + z_1)(2 \sigma_{w_a w_b} + z_2)}{(\bar{w}_a^2 + \bar{w}_b^2 + z_1)(\sigma_{w_a}^2 + \sigma_{w_b}^2 + z_2)} \tag{42}
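The mutual information of Eq. (39) can be estimated directly from the joint gray-level histogram of two images. A minimal sketch (hypothetical names, not the evaluation code used in the paper):

```python
import math

def mutual_information(x, y):
    """I(X;Y) of Eq. (39), estimated from the joint histogram of two
    equally sized grayscale images given as nested lists."""
    joint = {}
    px, py = {}, {}
    n = 0
    for rx, ry in zip(x, y):
        for a, b in zip(rx, ry):
            joint[(a, b)] = joint.get((a, b), 0) + 1
            px[a] = px.get(a, 0) + 1
            py[b] = py.get(b, 0) + 1
            n += 1
    mi = 0.0
    for (a, b), c in joint.items():
        p_ab = c / n
        mi += p_ab * math.log2(p_ab / ((px[a] / n) * (py[b] / n)))
    return mi

# Identical images: I(X;X) = H(X); two equally likely levels -> 1 bit.
img = [[0, 255], [255, 0]]
print(mutual_information(img, img))  # 1.0
```

For fusion assessment, Eq. (40) then combines I(x, f) and I(y, f) between each source and the fused image, normalized by the source entropies.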

Fig. 4. Graphical user interface of the subjective evaluation.


where z_1 and z_2 represent constants, \bar{w}_a and \bar{w}_b the mean values of w_a and w_b, \sigma_{w_a}^2 and \sigma_{w_b}^2 the variances of w_a and w_b, and \sigma_{w_a w_b} the covariance between the two regions.

4.7. Peak signal to noise ratio (PSNR)
PSNR is a quantitative measure that depends on the root mean square error, as indicated in the following equation [43]:

Psnr = 10 \times \log \left( \frac{(F_{max})^2}{rmse^2} \right) \tag{43}

where F_{max} indicates the maximum pixel gray level in the reconstructed image.

4.8. Processing time
The processing time reports the time needed for the fusion phase, in seconds, under the given computational conditions [45].

5. Results and discussions
The evaluation of the fused images is both subjective and objective. Subjective evaluation can easily be affected by personal or external factors; hence objective evaluation is essential, and the fused image quality is quantitatively compared with various methods. The proposed fusion approach is applied to databases of CT-PET and CT-MRI images and estimated using all the metrics. The execution is performed in MATLAB R2016b on Windows 10 with an Intel Core i5 processor, 4.0 GB RAM, and a 500 GB hard disk. The input is collected from various diseases such as glioma, mild Alzheimer's, and hypertensive encephalopathy. The objective evaluation is based on the objective metrics usually utilized in multimodal medical image fusion and is implemented to make qualitative as well as quantitative evaluations. Eight sets of multimodality medical images from the whole-brain atlas medical image database (http://www.med.harvard.edu/aanlib/) are demonstrated in Fig. 3. To establish the performance of the method, the comparison is performed with previous image fusion approaches, namely NSST (M1) [32], GMSF + PCNN (M2) [19], EWT + LEM (M3) [14], MFDF + NSST (M4) [20], and SSOWCO (M5) [33], against the proposed approach (M6).

5.1. Subjective evaluation
The subjective evaluation uses the graphical user interface depicted in Fig. 4, in which the eight sets of images and their respective fused images are displayed.

5.2. Accuracy evaluation and error prediction
To analyze the performance of the proposed method, we consider accuracy, sensitivity, precision, and root mean square error. The results are tabulated in Table 2.

Table 2
Performance evaluation on the basis of accuracy and error prediction.

Algorithm | Accuracy | Sensitivity | Precision | RMSE
M1        | 92.43%   | 89          | 84        | 0.102
M2        | 91.37%   | 91          | 89        | 0.305
M3        | 95.45%   | 88          | 91        | 0.321
M4        | 94.67%   | 94          | 89        | 0.094
M5        | 90.76%   | 95          | 94        | 0.083
M6        | 98.54%   | 97          | 96        | 0.057

Bold values indicate best results.

• Accuracy: Accuracy can be determined as the ratio of true predictions to the total number of classified cases. It is given as

Accuracy = \frac{tP + tN}{tP + tN + fP + fN} \tag{44}

where tP, tN, fP, and fN signify the true positives, true negatives, false positives, and false negatives.
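The definitions of Eqs. (43)-(46) translate directly into code; a small sketch with illustrative confusion-matrix counts (the numbers below are examples, not the paper's results):

```python
import math

def classification_metrics(tp, tn, fp, fn):
    """Eqs. (44)-(46): accuracy, sensitivity and precision from
    confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    sensitivity = tp / (tp + fn)
    precision = tp / (tp + fp)
    return accuracy, sensitivity, precision

def psnr(mse, f_max=255.0):
    """Eq. (43), with rmse^2 written as the mean squared error."""
    return 10.0 * math.log10(f_max ** 2 / mse)

acc, sens, prec = classification_metrics(tp=90, tn=85, fp=10, fn=15)
print(round(acc, 3), round(sens, 3), round(prec, 3))  # 0.875 0.857 0.9
```

Note that PSNR diverges as the MSE tends to zero, so identical images are usually reported as having infinite PSNR rather than evaluated by this formula.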

Fig. 5. (a-h): Fusion results for all the datasets, for the proposed approach and the other approaches.


Fig. 6. Comparison of various metrics with other approaches.


• Sensitivity: Sensitivity is the ratio of accurately identified images to the total number of predicted images:

Sensitivity = \frac{tP}{tP + fN} \tag{45}

• Precision: Precision is described as the ratio of true positives to all elements assigned to the positive class:

Precision = \frac{tP}{tP + fP} \tag{46}

• RMSE: The RMSE between the fused image values and the targeted input images over T sample images is given as

RMSE = \sqrt{\frac{1}{T} \sum_{i=1}^{T} (\hat{Y}_i - Y_i)^2} \tag{47}

where \hat{Y}_i is the fused image value, Y_i the targeted input image, and T the number of sample images used.

5.3. Experimental evaluation based on the pairing of images
CT images are sensitive to bone structure, while MR images provide features concerning the parenchyma inside the area of interest; the functional features are conveyed by PET. MR-T1-weighted and MR-T2-weighted images provide the corresponding details of the identical person at various periods. Four sets of PET and CT images are employed for this multimodal medical image fusion. The evaluation measures, namely entropy, average gradient, mutual information, the Xydeas and Petrovic metric, SSIM, PSNR, standard deviation, and processing time, are analyzed. Fig. 5 presents the fused output.

The various metrics, namely average gradient (AG), standard deviation (SD), entropy (E), peak signal to noise ratio (PSNR), the Xydeas and Petrovic measure (Q^{XY/F}), structural similarity index measure (SSIM), mutual information (MI), and processing time (Time), obtained with the proposed approach on the eight datasets are compared with the various other approaches in Fig. 6. The processing time is very small, and the PSNR and entropy are high, compared with the other existing approaches.

6. Conclusions
This paper proposed a novel multimodal medical image fusion algorithm that depends upon morphological-gradient-motivated AISA fusion in the NSST domain. The input source images are decomposed into high- and low-frequency sub-bands using the NSST, the two sub-bands are fused using the AISA method, and the fused image is reconstructed using the inverse NSST. The simulation results of the proposed approach on public datasets are compared with those of the other approaches. The performance evaluation shows that the proposed approach outperforms the other methods on various metrics: the processing time is smaller, and the entropy, PSNR, mutual information, and SSIM are higher than those of the other approaches. The experimental evaluations and discussions depict that the proposed algorithm outperforms the other approaches in both objective and subjective evaluation and provides better-fused images for accurate diagnosis.

CRediT authorship contribution statement
Jais Jose: Conceptualization, Methodology, Software. Neha Gautam: Data curation. Mohit Tiwari: Visualization, Investigation. Tripti Tiwari: Supervision. Arjun Suresh: Software, Validation. Vinu Sundararaj: Writing - review & editing, Writing - original draft. Rejeesh MR: Writing - review & editing, Writing - original draft.

Declaration of Competing Interest
The authors report no declarations of interest.

References
[1] F.E. El-Gamal, M. Elmogy, A. Atwan, Current trends in medical image registration and fusion, Egypt. Inform. J. 17 (1) (2016) 99–124.
[2] Y. Fu, Y. Lei, T. Wang, W.J. Curran, T. Liu, X. Yang, Deep learning in medical image registration: a review, Phys. Med. Biol. (2020). Mar 27.
[3] S.P. Yadav, S. Yadav, Image fusion using hybrid methods in multimodality medical images, Med. Biol. Eng. Comput. 28 (2020) 1–9.
[4] S. Polinati, R. Dhuli, Structural and functional medical image fusion using an adaptive Fourier analysis, Multimed. Tools Appl. 11 (June) (2020) 1–24.
[5] P.K. Yelampalli, J. Nayak, V.H. Gaidhane, Daubechies wavelet-based local feature descriptor for multimodal medical image registration, IET Image Process. 12 (April (10)) (2018) 1692–1702.
[6] H.R. Boveiri, R. Khayami, R. Javidan, A.R. MehdiZadeh, Medical Image Registration Using Deep Neural Networks: A Comprehensive Review, arXiv preprint arXiv:2002.03401, 2020. Feb 9.
[7] S. Nandish, G. Prabhu, K.V. Rajagopal, Multiresolution image registration for multimodal brain images and fusion for better neurosurgical planning, Biomed. J. 40 (6) (2017) 329–338. Dec 1.
[8] R. Revathy, S.V. Kumar, V.V. Reddy, V. Bhavana, Medical image registration using landmark registration technique and fusion, in: International Conference on Computational Vision and Bio Inspired Computing, Springer, Cham, 2019, pp. 402–412. Sep 25.
[9] A. Sedghi, A. Mehrtash, A. Jamzad, A. Amalou, William M. Wells 3rd, T. Kapur, J.T. Kwak, B. Turkbey, P. Choyke, P. Pinto, B. Wood, Improving detection of prostate cancer foci via information fusion of MRI and temporal enhanced ultrasound, Int. J. Comput. Assist. Radiol. Surg. (2020). May 5.
[10] D. Li, L. Chen, W. Bao, J. Sun, B. Ding, Z. Li, An improved image registration and fusion algorithm, Wirel. Netw. (2020) 1–5. Jan 2.
[11] J. Wu, X. Ren, Z. Xiao, F. Zhang, L. Geng, S. Zhang, Research on fundus image registration and fusion method based on nonsubsampled contourlet and adaptive pulse coupled neural network, Multimed. Tools Appl. (2019) 1–8. Oct 14.
[12] Y. Chu, J. Yang, S. Ma, D. Ai, W. Li, H. Song, L. Li, D. Chen, L. Chen, Y. Wang, Registration and fusion quantification of augmented reality based nasal endoscopic surgery, Med. Image Anal. 42 (2017) 241–256. Dec 1.
[13] X. Tian, R. Zheng, C.J. Chu, O.H. Bell, L.B. Nicholson, A. Achim, Multimodal retinal image registration and fusion based on sparse regularization via a generalized minimax-concave penalty, in: ICASSP 2019-2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), IEEE, 2019, pp. 1010–1014. May 12.
[14] S. Polinati, R. Dhuli, Multimodal medical image fusion using empirical wavelet decomposition and local energy maxima, Optik 205 (2020) 163947. Mar 1.
[15] S. Sandhya, M.S. Kumar, L. Karthikeyan, A hybrid fusion of multimodal medical images for the enhancement of visual quality in medical diagnosis, in: Computer Aided Intervention and Diagnostics in Clinical and Medical Images, Springer, Cham, 2019, pp. 61–70.
[16] B. Biswas, B.K. Sen, Color PET-MRI medical image fusion combining matching regional spectrum in shearlet domain, Int. J. Image Graph. 19 (01) (2019) 1950004. Jan 28.
[17] H.M. El-Hoseny, Z.Z. El Kareh, W.A. Mohamed, G.M. El Banby, K.R. Mahmoud, O.S. Faragallah, S. El-Rabaie, E. El-Madbouly, F.E. Abd El-Samie, An optimal wavelet-based multi-modality medical image fusion approach based on modified central force optimization and histogram matching, Multimed. Tools Appl. 78 (18) (2019) 26373–26397. Sep 30.
[18] S. Maqsood, U. Javed, Multi-modal medical image fusion based on two-scale image decomposition and sparse representation, Biomed. Signal Process. Control 57 (2020) 101810. Mar 1.
[19] X. Liu, W. Mei, H. Du, Multimodality medical image fusion algorithm based on gradient minimization smoothing filter and pulse coupled neural network, Biomed. Signal Process. Control 30 (2016) 140–148. Sep 1.
[20] X. Liu, W. Mei, H. Du, Multi-modality medical image fusion based on image decomposition framework and nonsubsampled shearlet transform, Biomed. Signal Process. Control 40 (2018) 343–350. Feb 1.
[21] V.S. Parvathy, S. Pothiraj, Multi-modality medical image fusion using hybridization of binary crow search optimization, Health Care Manag. Sci. (2019) 1–9. Jul 10.
[22] B. Rajalingam, R. Priya, R. Bhavani, Hybrid multimodal medical image fusion algorithms for astrocytoma disease analysis, in: International Conference on Emerging Technologies in Computer Engineering, Springer, Singapore, 2019, pp. 336–348. Feb 1.
[23] B. Rajalingam, R. Priya, R. Bhavani, Hybrid multimodal medical image fusion algorithms for astrocytoma disease analysis, in: International Conference on Emerging Technologies in Computer Engineering, Springer, Singapore, 2019, pp. 336–348. Feb 1.
[24] P.Y. Hsiao, S.S. Chou, F.C. Huang, Generic 2-d gaussian smoothing filter for noisy image processing, TENCON 2007-2007 IEEE Region 10 Conference (2007). Oct 30.
[25] A. Tannaz, S. Mousa, D. Sabalan, P. Masoud, Fusion of multimodal medical images using nonsubsampled shearlet transform and particle swarm optimization, Multidimens. Syst. Signal Process. 31 (January (1)) (2020) 269–287.
[26] G. Easley, D. Labate, W.-Q. Lim, Sparse directional image representations using the discrete shearlet transform, Appl. Comput. Harmon. Anal. 25 (1) (2008) 25–46.
[27] S. Singh, D. Gupta, R. Anand, V. Kumar, Nonsubsampled shearlet based CT and MR medical image fusion using biologically inspired spiking neural network, Biomed. Signal Process. Control 18 (2015) 91–101.
[28] Y. Cao, S. Li, J. Hu, Multi-focus image fusion by nonsubsampled shearlet transform, in: 2011 Sixth International Conference on Image and Graphics (ICIG), IEEE, 2011, pp. 17–21.
[29] S.D. Ramlal, J. Sachdeva, C.K. Ahuja, N. Khandelwal, Multimodal medical image fusion using non-subsampled shearlet transform and pulse coupled neural network incorporated with morphological gradient, Signal Image Video Process. 12 (8) (2018) 1479–1487. Nov 1.
[30] E. Bogar, S. Beyhan, Adolescent Identity Search Algorithm (AISA): a novel metaheuristic approach for solving optimization problems, Appl. Soft Comput. (2020) 106503. Jun 27.
[31] M. Çetin, B. Bahtiyar, S. Beyhan, Adaptive uncertainty compensation-based nonlinear model predictive control with real-time applications, Neural Comput. Appl. 31 (2) (2019) 1029–1043.
[32] P. Ganasala, V. Kumar, Multimodality medical image fusion based on new features in NSST domain, Biomed. Eng. Lett. 4 (December (4)) (2014) 414–424.
[33] L. Xu, Y. Si, S. Jiang, Y. Sun, H. Ebrahimian, Medical image fusion using a modified shark smell optimization algorithm and hybrid wavelet-homomorphic filter, Biomed. Signal Process. Control 59 (2020) 101885. May 1.
[34] P. Civicioglu, Backtracking search optimization algorithm for numerical optimization problems, Appl. Math. Comput. 219 (15) (2013) 8121–8144.
[35] W.U. Zufeng, L.A.N. Tian, W.A.N.G. Jiang, D.I.N.G. Yi, Q.I.N. Zhiguang, Medical image registration using B-Spline transform, Int. J. Simul. 17 (48) (2016).
[36] Z.-h Wang, J.-q Wang, D.-g Zhao, W. Fu, Image fusion based on shearlet and improved PCNN, Laser Infrared 42 (2) (2012) 213–216.
[37] Q. Miao, C. Shi, P. Xu, et al., Multi-focus image fusion algorithm based on shearlets, Chin. Opt. Lett. 9 (4) (2011) 041001.
[38] Xuan Liu, Yue Zhou, Jiajun Wang, Image fusion based on shearlet transform and regional features, AEU 68 (6) (2014) 471–477.
[39] J.C. Patra, A.C. Kot, Nonlinear dynamic system identification using Chebyshev functional link artificial neural networks, IEEE Trans. Syst. Man Cybern. B 32 (4) (2002) 505–511.
[40] C.S. Xydeas, V. Petrovic, Objective image fusion performance measure, Electron. Lett. 36 (4) (2000) 308–309.
[41] G. Qu, D. Zhang, P. Yan, Information measure for performance of image fusion, Electron. Lett. 38 (7) (2002) 313–315.
[42] H. Li, B.S. Manjunath, S.K. Mitra, Multisensor image fusion using the wavelet transform, Graph. Model. Image Process. 57 (3) (1995) 235–245.
[43] J.-H. Park, K.-O. Kim, Y.-K. Yang, Image fusion using multiresolution analysis, in: IGARSS 2001. Scanning the Present and Resolving the Future. Proceedings. IEEE 2001 International Geoscience and Remote Sensing Symposium (Cat. No. 01CH37217), 2, 2001, pp. 864–866.
[44] S. Singh, N. Mittal, H. Singh, Classification of various image fusion algorithms and their performance evaluation metrics, Comput. Intell. Mach. Learn. Healthc. Inform. 1 (2020) 179.
[45] U. Sara, M. Akter, M.S. Uddin, Image quality assessment through FSIM, SSIM, MSE and PSNR—a comparative study, J. Comput. Commun. 7 (3) (2019) 8–18.
[46] Z. Zhang, X. Xi, X. Luo, Y. Jiang, J. Dong, X. Wu, Multimodal image fusion based on global-regional-local rule in NSST domain, Multimed. Tools Appl. (2020) 1–27.
[47] X. Li, X. Guo, P. Han, X. Wang, H. Li, T. Luo, Laplacian re-decomposition for multimodal medical image fusion, IEEE Trans. Instrum. Meas. (2020).
[48] S. Singh, R.S. Anand, Multimodal medical image fusion using hybrid layer decomposition with CNN-based feature mapping and structural clustering, IEEE Trans. Instrum. Meas. 69 (6) (2019) 3855–3865.
[49] V. Sundararaj, et al., An optimal cluster formation based energy efficient dynamic scheduling hybrid MAC protocol for heavy traffic load in wireless sensor networks, Comput. Secur. 77 (2018) 277–288, https://doi.org/10.1016/j.cose.2018.04.009.
[50] V. Sundararaj, An efficient threshold prediction scheme for wavelet based ECG signal noise reduction using variable step size firefly algorithm, Int. J. Intell. Eng. Syst. 9 (3) (2016) 117–126, https://doi.org/10.22266/ijies2016.0930.12.
[51] V. Sundararaj, Optimal task assignment in mobile cloud computing by queue based ant-bee algorithm, Wirel. Pers. Commun. 104 (1) (2019) 173–197, https://doi.org/10.1007/s11277-018-6014-9.
[52] V. Sundararaj, Optimised denoising scheme via opposition-based self-adaptive learning PSO algorithm for wavelet-based ECG signal noise reduction, Int. J. Biomed. Eng. Technol. 31 (4) (2019) 325–345, https://doi.org/10.1504/IJBET.2019.103242.
[53] V. Sundararaj, et al., CCGPA-MPPT: cauchy preferential crossover-based global pollination algorithm for MPPT in photovoltaic system, Prog. Photovolt.: Res. Appl. 28 (11) (2020) 1128–1145, https://doi.org/10.1002/pip.3315.
[54] M.R. Rejeesh, Interest point based face recognition using adaptive neuro fuzzy inference system, Multimed. Tools Appl. 78 (16) (2019) 22691–22710.
[55] M.R. Rejeesh, P. Thejaswini, MOTF: Multi-objective optimal trilateral filtering based partial moving frame algorithm for image denoising, Multimed. Tools Appl. 79 (37) (2020) 28411–28430.
