

Biomedical Signal Processing and Control 87 (2024) 105419


Hybrid Archimedes Sine Cosine optimization enabled Deep Learning for multilevel brain tumor classification using MRI images

Geetha M. (a,*), Srinadh V. (b), Janet J. (c), Sumathi S. (d)

a Department of Computer Science and Engineering, S.A. Engineering College, Poonamallee-Avadi Road, Thiruverkadu, Chennai - 600077, Tamil Nadu, India
b Department of Computer Science and Engineering, GMR Institute of Technology, Rajam, Andhra Pradesh, India
c Department of CSE, Sri Krishna College of Engineering and Technology, Coimbatore, India
d Department of EEE, Mahendra Engineering College, Namakkal, India

ARTICLE INFO

Keywords:
Gaussian filter
Archimedes Optimization Algorithm
Sine Cosine Algorithm
DenseNet
Shepard Convolutional Neural Network

ABSTRACT

The most terrible form of cancer caused by uncontrolled and aberrant cell division is the Brain Tumor (BT). Current methodologies are insufficient for precise categorization due to the variety of tumor sizes, forms, and placements in the brain. The main objective of the proposed method is to classify the presence of a brain tumor and also classify its type. This is necessary for effective treatment and recovery, and it improves the survival rate. The Sine Cosine Archimedes Optimization Algorithm (SCAOA) is used in this study to construct a highly successful model for BT classification. Primarily, an input Magnetic Resonance Imaging (MRI) brain image is taken from the databases and promoted to the pre-processing phase, in which a Gaussian filter is used to eliminate undesirable noise in the image. BT segmentation is done using SegNet, which is tuned using SCAOA. The proposed SCAOA is established by combining the Archimedes Optimization Algorithm (AOA) and the Sine Cosine Algorithm (SCA). The segmented image samples are then exposed to the feature extraction phase. The extracted features are sent to BT detection, where a Shepard Convolutional Neural Network (ShCNN) is applied to detect the BT. If the detected output is found to be a tumor, the BT image is classified into Pituitary tumors, Gliomas, and Meningiomas using DenseNet, which is tuned using the proposed SCAOA. Finally, SCAOA_DenseNet attained a high accuracy of 93.0%, sensitivity of 92.3%, and specificity of 92.0%.

1. Introduction

The condition of the internal organs of the human body is regularly examined using MRI. The diagnosis of brain-related diseases is done using the computerized imaging technique called MRI [36]. The most beneficial imaging method is MRI, since it is the only non-invasive and non-ionizing imaging method that offers useful information in 2D and 3D formats on the size, location and shape of a BT. Manual image evaluation is time-consuming, demanding, and potentially error-prone due to the large number of patients. To address this problem, the development of an automatic Computer-Aided Diagnosis (CAD) system is required to alleviate the workload of the classification and diagnosis of brain MRI and to act as a tool that helps radiologists and doctors [8]. MRI image processing is used for early-stage brain tumor diagnosis and treatment [33]. MRI is regarded as an advanced method that provides information on the structure of human soft tissues. This information is broadened for looking at the structure of the area that contributes to the generation of complex images in each direction. MRI is utilized in medical imaging to reveal a wide range of differences in the different soft tissues present in the body. Information about the BT is contained in the MRI image. MRI delivers a wide range of images, each of which contains a variety of parameters dependent on interior anatomical structures. To reduce risks, BT diagnoses should be made as early as possible. The image is scanned using the MRI method, which is widely recognised for its accuracy. The tumor appearance is very precise and high in MRI [4]. BTs are frequently found and categorized using MRI. It assists doctors in evaluating tumors in order to plan further treatment [21].

BT is the most common and dangerous tumor type in both adults and children. BTs account for 85% to 90% of the primary Central Nervous System (CNS) tumors in the US. The abnormal growth of glial and neural cells is known as a BT. Several types of brain tumors exist nowadays. Among these, the non-cancerous and most prevalent case is the meningioma. The cancerous form includes gliomas and

* Corresponding author.
E-mail address: geethachenbapan@gmail.com (M. Geetha).

https://doi.org/10.1016/j.bspc.2023.105419
Received 12 May 2023; Received in revised form 19 August 2023; Accepted 12 September 2023
Available online 20 September 2023
1746-8094/© 2023 Elsevier Ltd. All rights reserved.
glioblastomas. A cancer that spreads to various parts of the body is called brain metastasis. The tumor that develops in the pituitary gland of the brain is called a pituitary tumor [34]. Medical interventions will be easier to perform, the survival rate will rise, and quality of life will also improve with an early diagnosis of BT. The signs and symptoms of a BT vary greatly and depend on the BT size, location and rate of growth [23]. The current research focus in computerised medical diagnostics is BT segmentation in MRI. A method extensively used in the field of neuroimaging for visualizing the inner body is MRI [34]. Compared with computed tomography, MRI presents superior results. Additionally, it provides a notable variation between a number of soft tissues in the human body in computerised medical diagnostic systems. MRI has become considerably more effective in imaging the brain and cancer. The obtained features of an MRI image are considered the crucial part, since they indicate an image in its compact shape [2]. These variables include the genus, grade, size, location, and form of the malignancy. These variables can differ greatly depending on the health of a patient. Hence, accurate recognition and classification of BT are critical for proper treatment [21]. The early discovery of BT can significantly improve the possibility of treatment and the survival rate of patients [20].

Convolutional Neural Networks (CNN), a popular early detection technique, exhibit outstanding performance on both 2D and 3D medical images. Deep Convolutional Neural Networks (DCNN) were developed for brain tumor classification, but due to accuracy limitations and ineffective decision making they failed to provide improved classification [4,27,28,29,30]. Similarly, the transfer learning method is frequently used to speed up processing times when fewer computing and data resources are available. By using this technique, knowledge acquired for one task is transferred to other related tasks. Feature fusion is the detection of correlated features in order to fuse them, identifying and compacting a set of salient features to improve detection accuracy [21]. For the problem of cancer categorization, there are various methods, among which two main groups of techniques can be distinguished. The first group combines the feature extraction and classification parts, which is the case in most Deep Learning (DL) methods [23]. The application of DL in new Computer-Aided Diagnosis (CAD) techniques results in increased performance. DL, a subset of machine learning, does not require manually constructed features. It has been proven to be more efficient than traditional methods and closes the pattern recognition gap between human and computer vision. It surpassed state-of-the-art schemes in several fields, including classification, detection, and segmentation of medical images [5].

The main goal is to develop an effective technique for the classification of BT based on the suggested SCAOA_DenseNet. This method begins with the collection of an input MRI brain image from two types of datasets, which is then pre-processed with Gaussian filtering to remove any external disruptions. After that, BT regions are proficiently segmented using SegNet, with the SegNet parameters optimally biased using the newly created SCAOA. Moreover, feature extraction is done after the segmentation process. In this instance, features like statistical features, Haralick texture features, Spider Local Image Features (SLIF), Shape Local Binary Texture (SLBT), Speeded Up Robust Features (SURF) and Oriented FAST and Rotated BRIEF (ORB) are extracted. The statistical features consist of kurtosis, variance, tumor size, mean, and entropy, whereas the Haralick texture features, namely contrast, correlation, homogeneity, and Angular Second Moment (ASM), are also extracted. Furthermore, BT is detected using ShCNN, which is trained by SCAOA. If a BT is detected, it is further classified into pituitary tumors, Gliomas, and Meningiomas using DenseNet [37], which is tuned with the developed SCAOA. The created SCAOA combines the AOA and SCA.

The following points describe the important contributions of the research:

• SCAOA_DenseNet for BT classification: The SCAOA_DenseNet is used to generate an effective framework for classifying BTs. In this scenario, SegNet is employed to efficiently segment the BT. Moreover, BT detection is done by utilizing ShCNN with SCAOA. If a BT is identified, further classification takes place using DenseNet with SCAOA to classify the BT into Pituitary tumors, Gliomas, and Meningiomas.

The remainder of this study is organized as follows: Section 2 reviews traditional approaches used to classify BTs, while Section 3 describes the proposed SCAOA_DenseNet model for BT classification along with its training procedure. Section 4 examines the results. At last, in Section 5, a conclusion with a plan for further research is devised.

2. Motivation

BTs are frequently treatable when they are in their early stages, which can lower the death rate linked with such cases. To realize the current condition of cancer patients, BT categorization necessitates a highly complicated automated approach. This idea inspired the development of a plan by looking at already existing techniques for recording BTs. The following research for finding BTs is discussed, along with its drawbacks.

2.1. Literature survey

Deepak, S. and Ameer, P.M. [1] developed an efficient method named transfer learning to extract features from brain MRI. In this case, the TL system performed well despite having fewer training examples. However, tuning of the transfer learning model was not implemented in this method. Raja, P.S. [2] established Bayesian Fuzzy Clustering for the segmentation of BTs. This method achieved a low error rate. However, using more than one classifier together with this method does not increase accuracy. Ayadi, W., et al. [3] introduced Dense Speeded Up Robust Features and Histogram of Gradients to classify three BT types. Here, the features of this method were sufficient to accomplish greater results. Nevertheless, this approach was not suitable for real-time applications. Kumar, S. and Mankame, D.P. [4] devised an efficient method named Deep CNN for BT classification. Using features derived from the image, this Deep CNN identifies the image as abnormal or normal. However, this method of analysis did not succeed in classifying the tumor cells as benign or malignant.

Ayadi, W., et al. [5] established a CNN for BT classification. The ability to classify MRI BTs into categories or grades in this case allowed clinicians to quickly and accurately make a choice. However, this technique was ineffective in exploiting images from different modalities. Swati, Z.N.K., et al. [6] developed transfer learning and fine-tuning for BT classification. Here, incremental block-wise fine-tuning gradually improves classification performance. However, this approach was unable to add normal brain Contrast Enhanced-Magnetic Resonance Images (CE-MRI) to the dataset. Badža, M.M. and Barjaktarović, M.Č. [7] established a CNN for BT classification of three tumor types. This method was quick to execute and has a good capacity to generalize. Nevertheless, this method finds difficulty in the efficient training of large sets of images. Kang, J., et al. [8] introduced a Deep CNN to extract deep features from brain MRI. Here, the pre-trained convolutional networks were useful for extracting deep features from brain MRI. However, this classifier depends on the number of training sets. Taha Muezzinoglu, et al. [35] developed a patch-based deep feature engineering model for increasing the classification performance using CNN. This method attains high performance and is a self-organized framework. However, it suffers from huge memory requirements and higher computational demands.

2.2. Major challenges

The following are some of the issues that standard BT categorization models face:
Fig. 1. Block diagram of proposed SCAOA_DenseNet for classification of BT.

• In [2], the Bayesian fuzzy clustering technique is developed for BT classification. In this method, conjoining multiple classifiers could not improve accuracy. The proposed method has high performance and thus improves the accuracy.
• The Dense Speeded Up Robust Features and Histogram of Gradients method was designed in [3]. However, this method is ineffective in acquiring images from numerous modalities to add robustness to the system. The AOA algorithm used here is robust, as it can solve optimization problems with minimum error.
• In [4], Deep CNN was developed for the classification of BT. It was unable to enhance the Histogram of Gradients (HoG) features, along with the statistical features and LBP features, because it ignored the performance of Deep CNN. The proposed method has higher performance than the other existing methods.
• DL approaches have made significant advances in the field of medical analysis and processing of images in recent years. Although brain cancers can be categorised using MRI, there are some difficulties. First, the structure of the brain and the way its tissues are intertwined
make it challenging. Second, it can be difficult to classify BT due to the high density of the brain.

3. Proposed SCAOA_DenseNet for BT classification

Effective treatment planning and patient care depend on a timely diagnosis of a BT. The ability and experience of the radiologist to recognize the BT and classify it is a requirement for the manual categorization of BT using MRI with similar structures or appearances. The primary goal is to design and develop an effective technique named SCAOA_DenseNet for BT classification. Initially, the input MRI brain image is acquired from the databases [18,19] and is later forwarded to the pre-processing phase. In the pre-processing phase, a Gaussian filter [9] is used to remove the unwanted noise in the image. Then, the preprocessed image is exposed to the segmentation phase, in which the preprocessed images get segmented. The BT segmentation is done using SegNet [10], which is trained using SCAOA. The proposed SCAOA is developed by integrating AOA [11] and SCA [12]. The segmented image samples are then subjected to the feature extraction phase, wherein features like statistical features, Haralick texture features, SLIF [15], LGXP [16], SURF [24] and ORB [25] are extracted. The statistical features [26] consist of tumor size, mean, variance, kurtosis, and entropy. The Haralick texture features [22] contain contrast, homogeneity, correlation and ASM. The extracted features are forwarded to the BT detection phase, which is done using ShCNN [13], trained by SCAOA. If a BT is detected, it is further classified into Gliomas, Meningiomas and Pituitary tumors using DenseNet [14], which is trained by the proposed SCAOA. Fig. 1 displays a block diagram of SCAOA_DenseNet for BT classification.

3.1. Image acquisition

For a dataset φ with n images of BTs, consider the following expression:

$$\varphi = \{Q_1, Q_2, ..., Q_y, ..., Q_n\} \tag{1}$$

Here, Q_y stands for the yth input MRI and the total MRI is signified as Q_n.

3.2. Image pre-processing using Gaussian filter

Here, Q_y is provided as the input for pre-processing. In order to improve picture restoration using a Gaussian filter [9] and remove noise from an image, the input image from the raw dataset is allowed to proceed through the pre-processing stage. Pre-processing helps to provide clean data. Techniques for image pre-processing are offered by the great majority of hyperspectral software programmes. Gaussian filtering is a type of linear smoothing filter that combines time-domain and frequency-domain filters. Gaussian filtering is the most effective technique for reducing noise associated with the normal distribution. According to its mean ψ_d and variance ς_d², the overall surface responsibility image is average-weighted and is provided as,

$$W_d(\psi_d, \varsigma_d^2, K) = \frac{1}{\sqrt{2\pi\varsigma_d^2}}\exp\left\{-\frac{(K - \psi_d)^2}{2\varsigma_d^2}\right\} \tag{2}$$

In this case, ψ_d signifies the mean of the filter, ς_d² denotes the variance, and K runs over [−Φ, Φ].

3.3. BT segmentation using SegNet

The pre-processed image E is used in the segmentation stage, where SegNet [10] is employed to obtain BT segments. To precisely optimize the parameters of SegNet, the developed SCAOA is used in this case. The SegNet model is applied to produce expected segmentation masks for tumor regions of unfamiliar images. Among DL structures, SegNet is the network most usually used for segmentation because it has fewer parameters. In addition, it is simple to train end-to-end and does not need as many powerful computing resources as DeconvNet. Another advantage is that the high memory requirements of the system are greatly reduced by SegNet's exclusive usage of pooling indices. SegNet only requires forward evaluation of a fully learnt function to obtain smooth label predictions for increasing depth, which improves accuracy, and it is easy to visualize the effect of feature activation. As a result, the network can offer a comprehensive feature map with related geographic information.

Fig. 2. The architecture of SegNet using BT segmentation.

3.3.1. SegNet's architecture for BT segmentation

Medical semantic image segmentation was accomplished using the SegNet architecture, also referred to as a CNN encoder-decoder. As shown in Fig. 2, it has five symmetrically arranged encoders and decoders, a max-pooling layer, a rectified linear unit layer, convolution layers, upsampling, batch normalization, and a Softmax classifier. Batch
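As an illustration of the pre-processing step in Eq. (2), the sketch below builds a normalized 1-D Gaussian kernel and applies it separably to an image. This is our own minimal reading of the filter, not the authors' code: the function names, the reflect padding and the default half-width are assumptions.

```python
import numpy as np

def gaussian_kernel(mean, var, half_width):
    """Sample W_d(psi_d, var, K) of Eq. (2) for K in [-half_width, half_width]."""
    k = np.arange(-half_width, half_width + 1, dtype=float)
    w = np.exp(-(k - mean) ** 2 / (2.0 * var)) / np.sqrt(2.0 * np.pi * var)
    return w / w.sum()  # normalize so smoothing preserves overall intensity

def gaussian_smooth(image, var=1.0, half_width=2):
    """Separable Gaussian filtering: convolve every row, then every column."""
    k = gaussian_kernel(0.0, var, half_width)
    pad = half_width
    padded = np.pad(image, pad, mode="reflect")
    rows = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, padded)
    cols = np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, rows)
    return cols[pad:-pad, pad:-pad]  # crop the padding back off
```

Because the 2-D Gaussian is separable, two 1-D passes cost O(half_width) per pixel instead of O(half_width²), which is why this form is standard for noise removal before segmentation.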
normalization is used by the encoder and decoder networks after each convolutional layer. The network is only convolutional because there isn't a fully connected layer. With the CNN encoder-decoder technique, the pooling layers of VGG-16 or VGG-19 are used to significantly decrease the temporal component of the images. The result of SegNet is indicated as E_s. Fig. 2 shows the SegNet architecture.

3.3.2. Training algorithm of SegNet using proposed SCAOA

The SCAOA method, which combines the AOA [11] and SCA [12] techniques, is successfully used to train SegNet to separate the regions of BTs. AOA is a metaheuristic algorithm established to address the challenges of optimization. AOA was developed by taking inspiration from the intriguing physics law known as the Archimedes Principle. It mimics the idea that the upward force applied to an item that is partially or completely submerged in a fluid is comparable to the weight of the displaced fluid. The AOA technique generates objective function values with the least amount of error to resolve optimization issues. SCA is also a population-based optimization algorithm that can be used to solve optimization issues. SCA is able to solve real problems with unknown, challenging, and constrained search spaces. Combining these two methods allows one to solve difficult optimization issues with excellent results and a faster convergence rate.

Step i): Initialization.

Each position of the object is initialized using the following expression (3).

$$F_p = gj_p + rand \times (xv_p - gj_p);\quad p = 1, 2, ..., D \tag{3}$$

Here, F_p represents the pth object among D total objects. The lower bound and upper bound of the search space are gj_p and xv_p, correspondingly. Use equations (4) and (5) to initialize the volume (α) and density (β) of each pth object.

$$\beta_p = rand \tag{4}$$

$$\alpha_p = rand \tag{5}$$

Wherein, rand is a random number between [0, 1]. With equation (6), the acceleration (X) of the pth object is initialized and provided as,

$$X_p = gj_p + rand \times (xv_p - gj_p) \tag{6}$$

Step ii): Evaluating the fitness function.

The best solution is the one with the lowest Mean Square Error (MSE), since it is regarded as easy to obtain the best solution using an error function. As a result, the MSE is represented below.

$$\chi = \frac{1}{m}\sum_{s=1}^{m}\left[\varepsilon - E_s\right]^2 \tag{7}$$

In this case, ε represents the expected output, E_s represents the categorized output from SegNet, m represents the total number of samples and χ indicates the MSE.

Step iii): Density and volume updation.

Equations (8) and (9) are utilized to appraise the density and volume of the object p for iteration Q + 1. The expressions are mentioned below.

$$\beta_p^{Q+1} = \beta_p^Q + rand \times (\beta_{best} - \beta_p^Q) \tag{8}$$

$$\alpha_p^{Q+1} = \alpha_p^Q + rand \times (\alpha_{best} - \alpha_p^Q) \tag{9}$$

Here, α_best and β_best represent the volume and density of the best object so far discovered, while rand denotes a uniformly distributed random number.

Step iv): Updation of density factor and transfer operator.

Additionally helping AOA move from global to local search is the density-reducing factor w. It gets smaller with time, as given by equation (10).

$$w^{Q+1} = \exp\left(\frac{Q_{max} - Q}{Q_{max}}\right) - \left(\frac{Q}{Q_{max}}\right) \tag{10}$$

Wherein, w^{Q+1} declines with time, permitting convergence in a formerly selected appropriate zone.

While initially colliding with one another, objects attempt to reach an equilibrium condition over time. This is accomplished in AOA with the aid of the transfer operator μ, specified in equation (11), which changes the search from exploration to exploitation.

$$\mu = \exp\left(\frac{Q - Q_{max}}{Q_{max}}\right) \tag{11}$$

Here, the maximum number of iterations is indicated as Q_max and Q represents the iteration number.

Step v): Exploration stage.

If μ ≤ 0.5, a collision between objects happens: choose a random material τ and adjust the acceleration of the object for iteration Q + 1 using equation (12).

$$X_p^{Q+1} = \frac{\beta_\tau + \alpha_\tau \times X_\tau}{\beta_p^{Q+1} \times \alpha_p^{Q+1}} \tag{12}$$

Here, X_p, β_p and α_p represent the acceleration, density and volume of an object p. Similarly, the density, volume and acceleration of the material are indicated as β_τ, α_τ, and X_τ respectively.

Step vi): Exploitation stage.

Update the object's acceleration using equation (13) if μ > 0.5 and there is no object collision for iteration Q + 1.

$$X_p^{Q+1} = \frac{\beta_{best} + \alpha_{best} \times X_{best}}{\beta_p^{Q+1} \times \alpha_p^{Q+1}} \tag{13}$$

where X_best denotes the best object's acceleration.

Step vii): Acceleration normalization.

Using equation (14), normalize the acceleration to regulate the percentage of change.

$$X_{p\text{-}norm}^{Q+1} = f \times \frac{X_p^{Q+1} - \min(X)}{\max(X) - \min(X)} + t \tag{14}$$

Here, t and f signify the range of normalization.

Step viii): Position updation.

If μ ≤ 0.5, then determine the location of the pth object for iteration Q + 1 by using equation (15) (exploration phase).

$$h_p^{Q+1} = h_p^Q + J_1 \times rand \times X_{p\text{-}norm}^{Q+1} \times h \times (h_{rand} - h_p^Q) \tag{15}$$

$$h_p^{Q+1} = h_p^Q\left(1 - J_1 \times rand \times X_{p\text{-}norm}^{Q+1} \times h\right) + J_1 \times rand \times X_{p\text{-}norm}^{Q+1} \times h \times rand \tag{16}$$

From SCA [12], the update equation of the location is expressed as,

$$h_p^{Q+1} = h_p^Q + l_1 \times \sin(l_2) \times \left|l_3 M_p^Q - h_p^Q\right|,\ \text{if}\ l_4 < 0.5 \tag{17}$$

Consider M_p^Q > h_p^Q,
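Steps iii)-viii) above can be condensed into a single-iteration sketch. This is our own illustrative reading of Eqs. (8)-(15), not the authors' code: the symbol h in Eq. (15) is interpreted here as the density factor w of Eq. (10), and the bound clipping, default J1 and normalization range [t, t + f] are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def scaoa_step(pos, dens, vol, acc, best, Q, Q_max, lo, hi, J1=2.0, t=0.1, f=0.9):
    """One SCAOA iteration. best = (best_pos, best_dens, best_vol, best_acc)."""
    b_pos, b_dens, b_vol, b_acc = best
    # Eqs. (8)-(9): pull each object's density and volume toward the best object
    dens = dens + rng.random(dens.shape) * (b_dens - dens)
    vol = vol + rng.random(vol.shape) * (b_vol - vol)
    # Eqs. (10)-(11): density-decreasing factor w and transfer operator mu
    w = np.exp((Q_max - Q) / Q_max) - Q / Q_max
    mu = np.exp((Q - Q_max) / Q_max)
    if mu <= 0.5:
        # Eq. (12): exploration -- collide with a randomly chosen material tau
        tau = rng.integers(len(pos))
        acc = (dens[tau] + vol[tau] * acc[tau]) / (dens * vol)
    else:
        # Eq. (13): exploitation -- follow the best object's acceleration
        acc = (b_dens + b_vol * b_acc) / (dens * vol)
    # Eq. (14): normalize acceleration into [t, t + f]
    acc_n = f * (acc - acc.min()) / (acc.max() - acc.min() + 1e-12) + t
    # Eq. (15): exploration move toward a random object's position
    h_rand = pos[rng.integers(len(pos))]
    pos = pos + J1 * rng.random(pos.shape) * acc_n[:, None] * w * (h_rand - pos)
    return np.clip(pos, lo, hi), dens, vol, acc
```

Early iterations give μ ≤ 0.5, so the population explores via collisions; as Q approaches Q_max, μ exceeds 0.5 and the update switches to the exploitation branch.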
$$h_p^{Q+1} = h_p^Q + l_1\sin(l_2) \times l_3 M_p^Q - l_1\sin(l_2) \times h_p^Q \tag{18}$$

$$h_p^{Q+1} = h_p^Q\left[1 - l_1\sin(l_2)\right] + l_1\sin(l_2) \times l_3 M_p^Q \tag{19}$$

$$h_p^Q = \frac{h_p^{Q+1} - l_1\sin(l_2) \times l_3 M_p^Q}{1 - l_1\sin(l_2)} \tag{20}$$

Substituting equation (20) in equation (16), the expression becomes,

$$h_p^{Q+1} = \left(\frac{h_p^{Q+1} - l_1\sin(l_2) \times l_3 M_p^Q}{1 - l_1\sin(l_2)}\right)\left(1 - J_1 \times rand \times X_{p\text{-}norm}^{Q+1} \times h\right) + J_1 \times rand \times X_{p\text{-}norm}^{Q+1} \times h \times rand \tag{21}$$

$$h_p^{Q+1} - \frac{h_p^{Q+1}\left(1 - J_1 \times rand \times X_{p\text{-}norm}^{Q+1} \times h\right)}{1 - l_1\sin(l_2)} = \frac{J_1 \times rand \times X_{p\text{-}norm}^{Q+1} \times h \times rand\left(1 - l_1\sin(l_2)\right) - l_1\sin(l_2) \times l_3 M_p^Q}{1 - l_1\sin(l_2)} \tag{22}$$

$$\frac{h_p^{Q+1}\left(1 - l_1\sin(l_2) - 1 + J_1 \times rand \times X_{p\text{-}norm}^{Q+1} \times h\right)}{1 - l_1\sin(l_2)} = \frac{J_1 \times rand \times X_{p\text{-}norm}^{Q+1} \times h \times rand\left(1 - l_1\sin(l_2)\right) - l_1\sin(l_2) \times l_3 M_p^Q}{1 - l_1\sin(l_2)} \tag{23}$$

$$h_p^{Q+1} = \frac{J_1 \times rand \times X_{p\text{-}norm}^{Q+1} \times h \times rand\left(1 - l_1\sin(l_2)\right) - l_1\sin(l_2) \times l_3 M_p^Q}{J_1 \times rand \times X_{p\text{-}norm}^{Q+1} \times h - l_1\sin(l_2)} \tag{24}$$

The position of the best solution is modified using the equation above. Here, J_1 indicates a constant, the uniformly distributed random number is signified as rand, f and t represent the range of normalization, and l_1, l_2, l_3 are random numbers. Moreover, the position of the destination point in the Qth iteration is denoted as M_p^Q.

Step ix): Re-evaluating the fitness.

After moving, the object is at the position with the smallest objective function value; equation (7) is employed to recalculate the fitness of the solution and select the best position.

Step x): Termination.

The above steps are repeated until the termination condition is met. Algorithm 1 shows the pseudo-code of the SCAOA technique.

Algorithm 1. Pseudocode of SCAOA
1. Procedure SCAOA with population size D, maximum iterations Q_max, J_1, J_2, J_3, and J_4
2. Initialize the population of objects with random positions, densities and volumes using equations (3), (4), (5), (6)
3. Evaluate the initial population, then choose the object with the best fitness value
4. Set iteration counter Q = 1
5. While Q ≤ Q_max
6.   For each object p do
7.     Update the density and volume of each object using equations (8) and (9)
8.     Update the transfer operator μ and the density-decreasing factor w using equations (11) and (10)
9.     If μ ≤ 0.5 then
10.      Update the acceleration using equation (12) and normalize it using equation (14)
11.      Update the position using equation (15)
12.    Else
13.      Update the position using equation (24)
14.    End if
15.  End for
16.  Determine the fitness value of each object and choose the best
17.  Set Q = Q + 1
18. End while
19. Return the object with the best fitness value
20. End procedure

The exploratory ability of the suggested SCAOA approach is improved by combining the AOA and SCA.

3.4. Feature extraction

The majority of image processing systems view feature extraction as a crucial step in the identification and classification of diseases with accuracy and efficiency. The feature extraction step receives the segmented image as input. In this scenario, the statistical features of kurtosis, tumor size, variance, mean, and entropy, as well as SLIF, SURF, and ORB, are involved, whereas the Haralick features, namely homogeneity, contrast, ASM and correlation, are also relevant.

3.4.1. SURF

One of the best methods suitable for real-time applications is the "SURF" algorithm [24], which is regarded as a reliable local feature detector and extractor algorithm. It is useful for many computer vision tasks, including object detection and 3D reconstruction. According to equation (25), the integral image is an image where each point e holds the total number of pixels of the input image U in the rectangle created by the origin and e at (q, b).

$$U(e) = \sum_{c=0}^{c \le q}\ \sum_{C=0}^{C \le b} U(c, C) \tag{25}$$

The Hessian matrix κ(e, H) at point e and scale H is defined in equation (26), as follows:

$$\kappa(e, H) = \begin{bmatrix} L_{qq}(q, H) & L_{qb}(q, H) \\ L_{bq}(q, H) & L_{bb}(q, H) \end{bmatrix} \tag{26}$$

Wherein, L_qq(q, H) and L_bb(q, H) are produced by the second-order derivative of the Gaussian filter when applied to the image at the point e. The weights assigned to the rectangular regions have been kept as simple as possible to maximize computing efficiency.

$$\det(\kappa_{appr}) = \omega_{qq}\,\omega_{bb} - \left(G\,\omega_{qb}\right)^2 \tag{27}$$

Here, G specifies the filter response's relative weight and is revealed in equation (28).

$$G = \frac{\left\|L_{qb}(1,2)\right\|_T \left\|\omega_{bb}(9)\right\|_T}{\left\|L_{bb}(1,2)\right\|_T \left\|\omega_{qb}(9)\right\|_T} \tag{28}$$

G is employed in this computation as the relative weight of the filter in order to preserve the equilibrium of the Hessian determinant process. The SURF feature is denoted as I_a1.

3.4.2. ORB

The ORB feature [25], developed by Rublee, appears to be quicker than the SIFT and SURF features. It uses the FAST keypoint detector to get its best features. ORB also extracts fewer features, and it appears to be a necessary feature. Additionally, adopting the ORB functionality reduces computing costs. Here, the result of the ORB feature is denoted as I_a2.
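The integral image of Eq. (25) is what lets SURF evaluate box filters in constant time. A minimal sketch (the function names are our own) is:

```python
import numpy as np

def integral_image(img):
    """Eq. (25): each entry is the sum over the rectangle from the origin to e."""
    return img.cumsum(axis=0).cumsum(axis=1)

def box_sum(ii, r0, c0, r1, c1):
    """Sum of img[r0:r1+1, c0:c1+1] recovered in O(1) from the integral image
    via inclusion-exclusion on its four corners."""
    total = ii[r1, c1]
    if r0 > 0:
        total -= ii[r0 - 1, c1]
    if c0 > 0:
        total -= ii[r1, c0 - 1]
    if r0 > 0 and c0 > 0:
        total += ii[r0 - 1, c0 - 1]
    return total
```

SURF approximates the second-order Gaussian derivatives in the Hessian of Eq. (26) with box filters evaluated this way, which is why its detection cost is independent of the filter size.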
M. Geetha et al. Biomedical Signal Processing and Control 87 (2024) 105419

Table 1 other elements of image processing since it has low-dimensional feature


Features. descriptors. Thus, Ia3 is listed as the SLIF extracted feature.
Number Features Definition
3.4.4. LGXP
1. SURF One of the best methods suitable for real-time
applications is the “SURF” algorithm, which is regarded In the LGXP feature [16], the phases are first quantized to great di­
as a reliable local feature detector and extractor versity, and the inner pixel is then quantized using the Local XOR
algorithm. Pattern (LXP) operator. A local central pixel pattern is created by equally
2. ORB It used the FAST key point detector to get its best combining the resulting binary labels. Equation (29)describes the LGXP
features. The ORB also excavates less features, and it
appears to be a necessary feature.
pattern in the binary and decimal models.
3. SLIF SLIF is a vital image descriptor that is developed in [ ]
order to overcome the issues with image matching LGXPV,η (bz ) = LGXPZV,η , LGXPZ− 1 1
V,η , ..., LGXPV,η , (29)
Table 1 (continued). Features used during the feature extraction process.

4. LGXP: In the LGXP feature, the phases are first quantized to great diversity, and the inner pixel is then quantized using the Local XOR Pattern (LXP) operator. A local central pixel pattern is created by equally combining the resulting binary labels.
5. Mean: A mean is defined as "the average of the total number of pixels in the binary image."
6. Variance: The variance value represents the variation between the image's actual and averaged grey levels.
7. Kurtosis: Kurtosis is commonly referred to as the "fourth normalized moment".
8. Entropy: Entropy is a metric that evaluates both the degree of disorder in the system and the information included in the segmented images.
9. Contrast: Contrast is defined as the difference in intensity or grey level between a reference pixel and a neighboring pixel.
10. Homogeneity: The degree of homogeneity describes how well the diagonal of GLCM is matched by the distribution of its elements.
11. Correlation: The co-occurrence matrix for correlation features shows that the values of the grey levels are linearly related.
12. ASM: The local uniformity of the grey levels is measured by the ASM. The ASM value will be high for closely related pixels.

LGXP_{V,η}(bz) = [Σ_{l=1}^{Z} 2^{l−1} · LGXP^{l}_{V,η}]_u    (30)

In the scale η and orientation V Gabor phase map, bz represents the position of the centre pixel, the size of the neighborhood is indicated as Z, and the pattern calculation is represented as LGXP^{A}_{V,η} (A = 1, 2, ..., Z). Moreover, decimal is signified as u and binary is denoted as i. The LGXP feature is symbolized as Ia4.

I = {Ia1, Ia2, Ia3, Ia4}    (31)

3.4.3. SLIF
SLIF [15] is a vital image descriptor that is developed in order to overcome the issues with image matching. In order to find pixel information based on the SLIF feature, a particular or web sampling pattern is employed to adjacent point-of-interest pixels. The SLIF method produces simple low-dimensional feature descriptors, which are robust to several image transformations and distortions. It is difficult to process the final image using conventional techniques for detection, classification, and

3.4.5. Statistical features
The descriptions for statistical features [26] like mean, variance, kurtosis, and entropy are as follows. The textural feature I is fed as input of the statistical features.

3.4.5.1. Mean. A mean is defined as "the average of the total number of pixels in the binary image," and the mean is calculated by the following equation. It is easy to calculate and useful for comparison.

Mean (Na), ψ = Σ_{Y=0}^{ħ−1} Y · s(Y)    (32)

Wherein, the total number of grey levels is represented as ħ, the possibility of Y happening is signified as s(Y), and Y indicates every grey level in the images.

3.4.5.2. Variance. The variance value represents the variation between

Fig. 3. Architecture of ShCNN.

M. Geetha et al. Biomedical Signal Processing and Control 87 (2024) 105419

Fig. 4. The general architecture of DenseNet.

the image's actual and averaged grey levels. The variance is computed using equation (33). Variance helps to find the distribution of data in a population from a mean.

Variance (Nb), Υ² = Σ_{Y=0}^{ħ−1} (Y − ξ)² · s(Y)    (33)

3.4.5.3. Kurtosis. Kurtosis is commonly referred to as the "fourth normalized moment". A distribution's leveling in reference to a normal distribution is calculated using equation (34). It is used to understand the shape of a distribution and identify whether it deviates from a normal distribution.

kurtosis (Nc), ∂₂ = ξ⁻⁴ Σ_{Y=0}^{ħ−1} (Y − ξ)⁴ · s(Y)    (34)

3.4.5.4. Entropy. Entropy is a metric that evaluates both the degree of disorder in the system and the information included in the segmented images. The entropy decreases with increasing temperature distribution symmetry. Large entropy differences between the left and right images of BT in an individual are associated with greater asymmetry and an elevated risk of abnormalities. The following equation (35) describes the entropy feature. It can be used in any weight-determination process.

Entropy (Nd), λ = Σ_{Y=0}^{ħ−1} s(Y) · log₂[s(Y)]    (35)

Table 2
Experimental parameters of SegNet and DenseNet.

Methods  | Parameters    | Values
SegNet   | Epochs        | 10
         | Input height  | 256
         | Input width   | 256
         | Encoder level | 3
         | Channels      | 3
DenseNet | Learning rate | 0.0001
         | Batch size    | 32
         | Epochs        | 10

3.4.6. Haralick features
The following description is given for the Haralick features [22] like contrast, homogeneity, correlation, and ASM.

3.4.6.1. Contrast. Contrast is defined as the difference in intensity or grey level between a reference pixel and a neighboring pixel. The contrast relation is expressed as below. It is used to create the textures and clarity in a photograph.

contrast (Ne) = Σ_r Σ_S (r − S)² λo(r, S)    (36)

3.4.6.2. Homogeneity. The degree of homogeneity describes how well the diagonal of GLCM is matched by the distribution of its elements. Usually, the contrast reduces with increasing homogeneity. It involves easier communication and faster decision-making. An expression of homogeneity is given below.


Fig. 5. Experimental results of SCAOA_DenseNet, a) input image, b) segmented image, c) SLIF, d) LGXP, e) SURF feature, and f) ORB features.


Fig. 6. Experimental outcomes of SCAOA_DenseNet, a) Input images, b) segmented image, c) SLIF feature image, d) LGXP feature image, e) SURF feature image, and
f) ORB features.


Fig. 7. An investigation based on BRATS 2020 dataset for the first-level classifier, a) accuracy, b) specificity, c) sensitivity.
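The histogram statistics of Eqs. (32)-(35) and the GLCM measures of Eqs. (36)-(39) can be sketched together in Python. This is an illustrative helper, not the authors' implementation: it assumes an integer grey-level image, adds the conventional negative sign and a log(0) guard to the entropy, uses the common 1/(1+(r−S)²) homogeneity weighting, and follows Eq. (34) as printed, where the ξ⁻⁴ factor is taken with ξ equal to the mean.

```python
import numpy as np

def histogram_statistics(image, levels=256):
    """Mean, variance, kurtosis and entropy from the grey-level
    probabilities s(Y) of an integer-valued image (Eqs. (32)-(35))."""
    hist = np.bincount(image.ravel(), minlength=levels).astype(float)
    s = hist / hist.sum()                              # s(Y), Y = 0..levels-1
    Y = np.arange(levels, dtype=float)
    mean = np.sum(Y * s)                               # Eq. (32), psi
    var = np.sum((Y - mean) ** 2 * s)                  # Eq. (33), xi = mean
    kurt = mean ** -4 * np.sum((Y - mean) ** 4 * s)    # Eq. (34) as printed
    nz = s > 0                                         # guard against log2(0)
    entropy = -np.sum(s[nz] * np.log2(s[nz]))          # Eq. (35), lambda
    return mean, var, kurt, entropy

def haralick_features(glcm):
    """Contrast, homogeneity, correlation and ASM from a normalized,
    symmetric GLCM lambda_o (Eqs. (36)-(39))."""
    n = glcm.shape[0]
    r, S = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
    contrast = np.sum((r - S) ** 2 * glcm)             # Eq. (36)
    homogeneity = np.sum(glcm / (1.0 + (r - S) ** 2))  # Eq. (37)
    mu_r, mu_S = np.sum(r * glcm), np.sum(S * glcm)    # marginal means
    sd_r = np.sqrt(np.sum((r - mu_r) ** 2 * glcm))     # marginal std devs
    sd_S = np.sqrt(np.sum((S - mu_S) ** 2 * glcm))
    corr = np.sum((r - mu_r) * (S - mu_S) * glcm) / (sd_r * sd_S)  # Eq. (38)
    asm = np.sum(glcm ** 2)                            # Eq. (39)
    return contrast, homogeneity, corr, asm
```

For a two-level test image with grey values {0, 2} in equal proportion, the sketch returns a mean of 1, a variance of 1, and an entropy of 1 bit, which is easy to verify by hand.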

Homogeneity (Nf) = Σ_r Σ_S λo(r, S) / (1 + (r − S)²)    (37)

3.4.6.3. Correlation. The co-occurrence matrix for correlation features shows that the values of the grey levels are linearly related. It helps to identify the absence or presence of a relationship between two variables. An expression of correlation is given below.

correlation (Ng) = Σ_r Σ_S (r − ϖy)(S − ϖx) λo(r, S) / (ρy ρx)    (38)

3.4.6.4. ASM. The local uniformity of the grey levels is measured by the ASM. The ASM value will be high for closely related pixels. It measures the uniformity of an image. An expression of ASM is given below.

ASM (Nh) = Σ_r Σ_S λo²(r, S)    (39)

Here, the normalized symmetric GLCM is represented as λo, and λo(r, S) is the (r, S)th element of the normalized GLCM. Moreover, ϖy, ϖx are denoted as the mean and ρy, ρx are signified as the standard deviation.

Combining the various statistical, Haralick and texture features produces a feature vector, which is shown as,

I = {Na, Nb, Nc, Nd, Ne, Nf, Ng, Nh}    (40)

Here, the feature vector is denoted as I. Table 1 shows the features used during the feature extraction process.

3.5. BT detection

Once the feature extraction process is completed, the BT is also detected using ShCNN. Now, BT detection is done using the feature vector output I. The proposed SCAOA algorithm is newly integrated by the combination of AOA [11] and SCA [12] and is used to train the ShCNN [4], which is then used to classify BTs.

3.5.1. Architecture of ShCNN
End-to-end trainable TVI operators across the network are understood by ShCNN [13]. A few feature maps were incorporated into


Fig. 8. Analysis based on BRATS 2020 dataset for second-level classifier, a) accuracy, b) specificity, c) sensitivity.

Shepard layers, which helped the network function more effectively. An effort is made to use traditional interpolation techniques to inform the creation of the neural network TVI model. The construction of the ShCNN method is displayed in Fig. 3, where a framework called Shepard is developed that weighs predictable pixels inversely based on how far away they are from the processing pixel. In addition, the Shepard approach is now expressed as a convolution, which is expressed as,

Bp = (P ∗ ϕ)p / (P ∗ ℘)p, if ℘p = 0;  Bp = ϕp, if ℘p = 1    (41)

Here, image coordinates are indicated as p, input and output images are denoted as ϕ and B, binary coordinates are signified as ℘, the kernel function is represented as P, and the convolutional operator is denoted as ∗. The element-wise separation between the convolved image and the convolved mask governs the organic distribution of pixel information over the areas. As a result, it makes it possible to manage interpolation for data with uneven spacing and to make translation variants practical. The definition of the convolution kernel, which has a significant impact on the interpolation outcome, is a fundamental Shepard method component.

3.5.1.1. Shepard interpolation layer. The subsequent equation can be used to mathematically represent the feed-forward pass of the trainable interpolation layer.

Γ_t^R(Γ^{R−1}, ℘^R) = Δ( Σ_ℓ (P_tj ∗ Γ_ℓ^{R−1}) / (P_tj ∗ ℘_ℓ^R) + g^R ),  R = 1, 2, 3, ...    (42)

Here, the index of layers is represented as R, the mask of the current layer is denoted as ℘^R, and the input of the current layer is signified as Γ^{R−1}. When computing the fraction, the trainable kernels P_tj are shared by the numerator and denominator, a smooth and differentiable function is designated as Δ, the feature map index in the layer R is represented by the subscript t in Γ_t^R, and the bias term is denoted as g. To identify the interpolation regions as inputs, the Shepard interpolation layer additionally makes use of pictures, feature maps, and masks. It should be noted that several interpolation layers can be applied to produce interpolation functions with multiple nonlinearity layers. The mask is a binary map, where one signifies a known area and 0 represents the unknown area. A single kernel is used to apply both the image and mask. It is crucial for tasks like inpainting that have relatively large missing areas since these tasks allow for the learning of advanced propagation methods from data using a multi-stage Shepard interpolation layer with nonlinearity. Additionally, it offers flexibility in balancing the depth of the network with kernel size. The outcome of ShCNN is signified as Ωδ. Shepard interpolation layers are used in the ShCNN model, as seen in Fig. 3.

3.5.2. Training of ShCNN using proposed SCAOA
The developed SCAOA, a hybrid optimization that combines the advantages of AOA and SCA, is used to optimize the ShCNN [13] weights and parameters. An extensive examination of the SCAOA training algorithm is provided in section 3.3.2. Here, the specified fitness parameter is given by,


Fig. 9. Investigation based on Figshare dataset for the first level classifier, a) accuracy, b) specificity, c) sensitivity.

χ = (1/m) Σ_{δ=1}^{m} [ε − Ωδ]²    (43)

Here, the targeted goal is ε and the result achieved by ShCNN is designated as Ωδ. Furthermore, the overall number of samples is symbolized as m and the MSE is represented by χ.

3.6. DenseNet for BT classification

BT classification is accomplished following the BT detection technique. The output of BT detection Ωδ is now input into the classification of BTs. DenseNet [14], which was developed using the proposed SCAOA algorithm and newly integrated using the combination of AOA [11] and SCA [12], is used to categorize BTs.

3.6.1. Architecture of DenseNet
In this work, the classification of BT is done using DenseNet [14,17], which is one of the commonly applied networks for the classification of images. DenseNet contains numerous layers such as the Batch Normalization (BN) layer, the Rectified Linear Units (ReLU) layer, the pooling layer and the convolution layer. DenseNet can achieve excellent classification accuracy. It obtains deep image information and reconstructs dense segmentation masks for brain tumor classification. The DenseNet model can enhance the flow of information between levels by providing direct connections from each layer to all subsequent layers. In order to achieve feature maps of all preceding layers, a layer is calculated as follows,

Πw = H([Π0, Π1, ..., Π_{w−1}])    (44)

Wherein, w indicates the layer, and Πw signifies the outcome of the wth layer. In addition, the consolidation of feature maps created in layers 0, 1, 2, ..., w − 1 is designated as [Π0, Π1, ..., Π_{w−1}]. The main distinction between the ResNet model and the DenseNet model is the addition of skip connections, which employ shortcut connections to link non-linear transformations with an identity function. The expression is explained below.

Πw = Hw(Π_{w−1}) + Π_{w−1}    (45)

Here, the output released from the DenseNet model is ℵd. Fig. 4


Fig. 10. An investigation based on the figshare dataset for the second-level classifier, a) accuracy, b) specificity, c) sensitivity.

depicts the architecture of the DenseNet design.

3.6.2. Training of DenseNet using proposed SCAOA
The DenseNet weights and parameters are optimized using the developed SCAOA, a hybrid optimization that combines the benefits of AOA and SCA. Moreover, the SCAOA training algorithm is thoroughly explored in section 3.3.2. Here, the applied fitness parameter is provided by,

χ = (1/m) Σ_{d=1}^{m} [ε − ℵd]²    (46)

Here, the targeted goal is ε and the result achieved by DenseNet is designated as ℵd. Furthermore, the overall number of samples is symbolized as m and the MSE is represented by χ.

4. Results and discussion

In this section, two datasets are utilized to analyze and describe SCAOA_DenseNet; the evaluation measures, dataset description, simulation parameters, comparative analysis, and experimental outcomes are specified in this section.

4.1. Experimental set-up

The SCAOA_DenseNet is run in the Python tool on a computer with an Intel Core i3 processor and 4 GB of RAM. Here, sensitivity, accuracy, and specificity are the evaluation metrics applied to assess SCAOA_DenseNet performance. Table 2 shows the experimental parameters of the SegNet and DenseNet.

4.2. Dataset description

SCAOA_DenseNet experiments were conducted using the BRATS 2020 dataset [19] and the Figshare dataset [18].

4.2.1. Figshare dataset
In this BT dataset [18], there are three different types of BTs represented by the 3064 T1-weighted contrast-enhanced images.

4.2.2. BRATS 2020 dataset
The BRATS 2020 dataset [19] mainly aims to explore, analyze, and share quality data. Datasets can be shared publicly or


Fig. 11. Segmentation analysis based on the BRATS 2020 dataset.

Fig. 12. Segmentation analysis based on the Figshare dataset.

privately using Kaggle Datasets. It is primarily focused on the chances of survival of the patient. Ample multi-institutional routine clinically-acquired pre-operative multimodal MRI scans of glioblastoma (GBM/HGG) and lower grade glioma (LGG), with pathologically confirmed diagnosis and available OS, are provided as the training, validation and testing data.

4.3. Experimental outcomes

The experimental results made feasible by an investigation into the classification of BTs using the proposed SCAOA_DenseNet approach are briefly summarized in this section. Here, experimental results of BT images are represented using two different dataset types, namely the Figshare and BRATS 2020.

4.3.1. Experimental outcomes with BRATS 2020 dataset
The experimental outcomes for BT classification using SCAOA_DenseNet from the BRATS 2020 dataset are shown in Fig. 5. Additionally, the input image is demonstrated in Fig. 5a), the segmented image is signified in Fig. 5b), and Fig. 5c), 5d), 5e), and 5f) represent the Haralick


Fig. 13. The confusion matrix for a) the BRATS dataset and b) the Figshare dataset.
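From per-class counts like those summarized in the confusion matrices of Fig. 13, the accuracy, sensitivity, and specificity reported throughout this section can be derived. The following is a minimal sketch, not the authors' evaluation code, and the counts used in the example are hypothetical:

```python
def classification_metrics(tp, tn, fp, fn):
    """Accuracy, sensitivity (true-positive rate) and specificity
    (true-negative rate) from binary confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    sensitivity = tp / (tp + fn)   # recall on the positive (tumor) class
    specificity = tn / (tn + fp)   # recall on the negative class
    return accuracy, sensitivity, specificity

# Hypothetical counts for illustration only (not taken from the paper):
acc, sen, spe = classification_metrics(tp=90, tn=95, fp=5, fn=10)
# acc = 0.925, sen = 0.9, spe = 0.95
```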

Table 3
Comparative discussion of SCAOA_DenseNet.

Dataset    | Classifier   | Metrics         | Transfer learning | Bayesian fuzzy clustering | Deep CNN | CNN  | ResNet model | DNN  | Proposed SCAOA_DenseNet
BRATS 2020 | First level  | Accuracy (%)    | 79.5 | 83.4 | 87.8 | 90.1 | 90.2 | 90.6 | 93.0
BRATS 2020 | First level  | Sensitivity (%) | 79.1 | 84.4 | 86.9 | 88.6 | 89   | 89.4 | 92.3
BRATS 2020 | First level  | Specificity (%) | 80.0 | 83.3 | 87.0 | 88.6 | 88.9 | 89.4 | 92.0
BRATS 2020 | Second level | Accuracy (%)    | 79.5 | 82.1 | 89.6 | 90.5 | 90.7 | 90.9 | 92.3
BRATS 2020 | Second level | Sensitivity (%) | 79.6 | 84.8 | 86.9 | 87.0 | 87.5 | 87.7 | 91.8
BRATS 2020 | Second level | Specificity (%) | 79.9 | 82.6 | 88.6 | 89.2 | 89.6 | 90   | 91.0
Figshare   | First level  | Accuracy (%)    | 79.5 | 84.3 | 88.2 | 90.5 | 90.8 | 91.1 | 92.7
Figshare   | First level  | Sensitivity (%) | 79.0 | 83.8 | 88.1 | 89.3 | 89.7 | 90   | 91.4
Figshare   | First level  | Specificity (%) | 79.9 | 82.6 | 89.4 | 90.0 | 90.4 | 90.8 | 92.5
Figshare   | Second level | Accuracy (%)    | 79.2 | 82.7 | 89.1 | 89.1 | 89.5 | 89.9 | 91.1
Figshare   | Second level | Sensitivity (%) | 88.0 | 84.3 | 79.5 | 82.3 | 88.1 | 88.6 | 91.3
Figshare   | Second level | Specificity (%) | 79.4 | 82.8 | 88.7 | 89.2 | 89.7 | 90   | 91.8
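The relative improvements quoted later in the results (for example, Section 4.5.2's 13.38%, 8.44%, 7.46%, and 4.71%) are consistent with expressing each baseline's gap as a percentage of the proposed method's score. A worked check, assuming that formula:

```python
def relative_improvement(proposed, baseline):
    """Percentage by which `baseline` trails `proposed`,
    expressed relative to the proposed score."""
    return (proposed - baseline) / proposed * 100.0

# Figshare segmentation accuracies from Section 4.5.2 (in %):
# Deep joint clustering, GAN, Psi-Net, Psp-Net vs. SCAOA + SegNet (91.2%)
baselines = [79.0, 83.5, 84.4, 86.9]
gaps = [round(relative_improvement(91.2, b), 2) for b in baselines]
# gaps == [13.38, 8.44, 7.46, 4.71]
```

The assumed formula reproduces all four quoted percentages exactly, which suggests this is how the improvements were computed.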

texture features of SLIF, LGXP, SURF, and ORB.

4.3.2. Experimental outcomes with Figshare dataset
Fig. 6 depicts the experimental findings for categorizing BTs using SCAOA_DenseNet on the Figshare dataset. Fig. 6a) represents an input image, Fig. 6b) depicts a segmented image, and Fig. 6c), 6d), 6e), and 6f) show features such as SLIF, LGXP, SURF, and ORB.

4.4. Comparative analysis

This section includes an evaluation of SCAOA_DenseNet in relation to the evaluation metrics for two unique datasets, namely the Figshare and BRATS 2020 datasets. The existing techniques utilized for comparison include Transfer Learning [1], Bayesian Fuzzy Clustering [2], Deep CNN [4], CNN [7], ResNet model [31], and DNN [32].

Table 4
Computational time of the methods.

Methods                   | Time (sec)
Transfer Learning         | 8.458
Bayesian Fuzzy Clustering | 7.712
Deep CNN                  | 6.576
CNN                       | 5.439
ResNet model              | 4.403
DNN                       | 4.396
Proposed SCAOA_DenseNet   | 4.210

4.4.1. Analysis based on BRATS 2020 dataset for first-level classifier
Fig. 7 displays an assessment of SCAOA_DenseNet with respect to the evaluation measures for the BRATS 2020 dataset for the first-level classifier by increasing the training set proportion from 50% to 90%. Here, Fig. 7a) depicts the accuracy of the SCAOA_DenseNet analysis. By increasing the proportion to training set = 90%, the proposed SCAOA_DenseNet model improves its accuracy to 93%. Existing methods like Transfer Learning, Bayesian Fuzzy Clustering, Deep CNN, CNN, ResNet model, and DNN achieve accuracy of 79.5%, 83.4%, 87.8%, 90.1%, 90.2%, and 90.6%, respectively. Fig. 7b) illustrates the sensitivity evaluation of the suggested SCAOA_DenseNet strategy. Here, Transfer Learning had a sensitivity of 79.1%, Bayesian Fuzzy Clustering of 84.4%, Deep CNN of 86.9%, CNN of 88.6%, ResNet model of 89%, DNN of 89.4% and the proposed SCAOA_DenseNet of 92.3%, which used training set = 90%. Fig. 7c) displays the specificity assessment of the SCAOA_DenseNet. Moreover, the proposed SCAOA_DenseNet achieved a specificity of 92.0% when training set = 90% was used, while Transfer Learning, Bayesian Fuzzy Clustering, Deep CNN, CNN, ResNet model, and DNN achieved 80%, 83.3%, 87%, 88.6%, 88.9%, and 89.4%, respectively.

4.4.2. Analysis based on BRATS 2020 dataset for second-level classifier
According to the evaluation metrics for the BRATS 2020 dataset for


the second-level classifier, SCAOA_DenseNet was evaluated in Fig. 8 by increasing the training set fraction from 50% to 90%. Fig. 8a) displays the accuracy assessment of the SCAOA_DenseNet. Moreover, the proposed SCAOA_DenseNet achieved an accuracy of 92.3% when training set = 90% was used, while Transfer Learning, Bayesian Fuzzy Clustering, Deep CNN, CNN, ResNet model, and DNN achieved 79.5%, 82.1%, 89.6%, 90.5%, 90.7%, and 90.9%, respectively. Fig. 8b) depicts the sensitivity of the SCAOA_DenseNet analysis. By increasing the proportion to training set = 90%, the proposed SCAOA_DenseNet model improves its sensitivity to 91.8%. Existing systems, namely Transfer Learning, Bayesian Fuzzy Clustering, Deep CNN, CNN, ResNet model, and DNN, achieve sensitivity of 79.6%, 84.8%, 86.9%, 87%, 87.5%, and 87.7%, correspondingly. Fig. 8c) illustrates the specificity evaluation of the suggested SCAOA_DenseNet strategy. Here, Transfer Learning had a specificity of 79.9%, Bayesian Fuzzy Clustering of 82.6%, Deep CNN of 88.6%, CNN of 89.2%, ResNet model of 89.6%, DNN of 90% and the proposed SCAOA_DenseNet of 91%, which used training set = 90%.

4.4.3. Analysis based on Figshare dataset for first-level classifier
Fig. 9 shows the suggested SCAOA_DenseNet analysis in connection to the evaluation criteria for the Figshare dataset for the first-level classifier. Fig. 9a) displays an accuracy analysis of the suggested methodology. If the training set = 90%, Transfer Learning, Bayesian Fuzzy Clustering, Deep CNN, CNN, ResNet model, DNN and the suggested SCAOA_DenseNet achieved an accuracy of 79.5%, 84.3%, 88.2%, 90.5%, 90.8%, 91.1%, and 92.7%, respectively. Fig. 9b) shows the sensitivity analysis graphs. Here, the proposed SCAOA_DenseNet achieved a sensitivity of 91.4% for a training set = 90%. The sensitivity calculated by the different existing methods was 79% for Transfer Learning, 83.8% for Bayesian Fuzzy Clustering, 88.1% for Deep CNN, 89.3% for CNN, 89.7% for ResNet model, and 90% for DNN. In Fig. 9c), the specificity evaluation of SCAOA_DenseNet is depicted. The specificity values for Transfer Learning, Bayesian Fuzzy Clustering, Deep CNN, CNN, ResNet model, and DNN are 79.9%, 82.6%, 89.4%, 90%, 90.4%, and 90.8%, correspondingly. The suggested SCAOA_DenseNet strategy has a specificity of 92.5%.

4.4.4. Analysis based on Figshare dataset for second-level classifier
The suggested SCAOA_DenseNet analysis in relation to the second-level classifier's evaluation criteria for the Figshare dataset is revealed in Fig. 10. An accuracy analysis of the suggested method is exposed in Fig. 10a). When training set = 90%, Transfer Learning, Bayesian Fuzzy Clustering, Deep CNN, CNN, ResNet model, DNN and the proposed SCAOA_DenseNet achieved accuracy results of 79.2%, 82.7%, 89.1%, 89.1%, 89.5%, 89.9% and 91.1%, respectively. A sensitivity analysis of the suggested SCAOA_DenseNet is shown in Fig. 10b). In this instance, the proposed SCAOA_DenseNet obtained a sensitivity of 91.3% for a training set = 90%. The sensitivity estimated by the various approaches in use was 88.1% for the ResNet model, 88.6% for DNN, 88% for CNN, 84.3% for Deep CNN, 79.5% for Transfer Learning, and 82.3% for Bayesian Fuzzy Clustering. The specificity valuation of SCAOA_DenseNet is shown in Fig. 10c). The specificity values for Transfer Learning, Bayesian Fuzzy Clustering, Deep CNN, CNN, ResNet model, and DNN are 79.4%, 82.8%, 88.7%, 89.2%, 89.7%, and 90%, respectively. The proposed SCAOA_DenseNet method has 91.8% specificity.

4.5. Segmentation analysis

The segmentation accuracy analysis of the proposed SCAOA_DenseNet for two distinct datasets, namely the Figshare and BRATS 2020 datasets, is shown in Figs. 11 and 12.

4.5.1. Analysis based on BRATS 2020 dataset
For the BRATS 2020 dataset, the segmentation accuracy of SCAOA + SegNet is examined in Fig. 11. Here, the segmentation accuracy achieved by the proposed SCAOA + SegNet is 91.3%, while existing techniques like Deep joint clustering, GAN, Psi-Net, and Psp-Net gained segmentation accuracy of 79.4%, 85.0%, 85.7%, and 87.2%, for training set = 90%.

4.5.2. Analysis based on the Figshare dataset
Fig. 12 shows an evaluation of the segmentation accuracy of the proposed SCAOA + SegNet for the Figshare dataset. With training set = 90%, the segmentation accuracy of the proposed SCAOA + SegNet is 91.2%, the segmentation accuracy of Deep joint clustering is 79%, the segmentation accuracy of GAN is 83.5%, the segmentation accuracy of Psi-Net is 84.4%, and the segmentation accuracy of Psp-Net is 86.9%. Compared with SCAOA + SegNet, these segmentation accuracies are lower by 13.38%, 8.44%, 7.46%, and 4.71%, respectively.

4.6. Confusion matrix

A confusion matrix is a summary of prediction results on a classification problem. The number of correct and incorrect predictions is summarized with count values and broken down by each class. This is the key to the confusion matrix. The confusion matrix shows the ways in which the classification model is confused when it makes predictions. The confusion matrix of the proposed method is given in Fig. 13. Fig. 13a) shows the confusion matrix of the BRATS dataset and Fig. 13b) shows the confusion matrix of the Figshare dataset.

4.7. Comparative discussion

Table 3 provides a comparative discussion of SCAOA_DenseNet. With a maximum accuracy of 93.0%, sensitivity of 92.3%, and specificity of 92.0%, it is obvious that the proposed methodology outperforms all present models. The accuracy obtained by Transfer Learning is 79.5%, Bayesian Fuzzy Clustering is 83.4%, Deep CNN is 87.8%, CNN is 90.1%, ResNet model is 90.2%, and DNN is 90.6%. The sensitivity acquired by Transfer Learning, Bayesian Fuzzy Clustering, Deep CNN, CNN, ResNet model, and DNN is 79.1%, 84.4%, 86.9%, 88.6%, 89%, and 89.4%. Similarly, the specificity obtained by Transfer Learning is 80%, Bayesian Fuzzy Clustering is 83.3%, Deep CNN is 87%, CNN is 88.6%, ResNet model is 88.9%, and DNN is 89.4%.

4.7.1. Computational complexity
Table 4 shows the computational time of the proposed method compared with the other literature. The methods compared are Transfer Learning, Bayesian Fuzzy Clustering, Deep CNN, CNN, ResNet model, DNN and the proposed SCAOA_DenseNet, with computational times of 8.458 sec, 7.712 sec, 6.576 sec, 5.439 sec, 4.403 sec, 4.396 sec and 4.210 sec, respectively. Thus, the proposed method has a low computational time compared to the other methods.

5. Conclusion

BT is the deadliest disease, which has an extremely short life expectancy in the highest grade. BT is the most common and dangerous tumor type in both adults and children. Ineffective medical treatment and worse patient survival rates will result from the misdiagnosis of BTs. This paper suggests a practical method for the classification of BTs using SCAOA_DenseNet. The main determination of this research is to construct a technique for BT classification using SCAOA_DenseNet. Firstly, the input image is attained from a specified database. The Gaussian filter is used in the pre-processing phase to remove noise from the input image. SegNet, trained with SCAOA, is used for segmentation in this case. After this stage, feature extraction is carried out, and features such as statistical features, Haralick texture features, SLIF, LGXP, SURF and ORB are extracted. After that, BT detection, whether tumor or non-tumor, is done by using ShCNN, which is tuned by SCAOA. Finally, BT


is classified into three sections, namely meningiomas, gliomas, and pituitary tumors, using DenseNet, which is trained using SCAOA. The SCAOA combines AOA and SCA. Finally, the SCAOA_DenseNet attained an extreme accuracy of 93.0%, sensitivity of 92.3%, and specificity of 92.0%. In future work, the discovered technique will be employed in mobile devices to accurately classify BTs, and in the field of stochastic optimization it can be hybridized with other algorithms to improve its performance.

Author Contribution

All authors have made substantial contributions to the conception and design, revising the manuscript, and the final approval of the version to be published. Also, all authors agreed to be accountable for all aspects of the work in ensuring that questions related to the accuracy or integrity of any part of the work are appropriately investigated and resolved.

Data Availability Statement:

The BRATS 2020 and Figshare datasets used for the manuscript are taken from "https://www.med.upenn.edu/cbica/brats2020/data.html" and "https://figshare.com/articles/brain_tumor_dataset/1512427".

CRediT authorship contribution statement

Geetha M: Conceptualization, Methodology, Software, Resources, Validation, Data curation, Formal analysis, Investigation. Srinadh V: Conceptualization, Methodology, Software, Resources, Validation, Data curation, Formal analysis, Investigation. Janet J: Writing – original draft, Writing – review & editing, Project administration. Sumathi S: Writing – original draft, Writing – review & editing, Project administration.

Declaration of Competing Interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Data availability

No data was used for the research described in the article.

Acknowledgements

I would like to express my very great appreciation to the co-authors of this manuscript for their valuable and constructive suggestions during the planning and development of this research work.

References

[1] S. Deepak, P.M. Ameer, Brain tumor classification using deep CNN features via transfer learning, Comput. Biol. Med. 111 (2019) 103345.
[2] P.S. Raja, BT classification using a hybrid deep autoencoder with Bayesian fuzzy clustering-based segmentation approach, Biocybern. Biomed. Eng. 40 (1) (2020) 440–453.
[3] W. Ayadi, I. Charfi, W. Elhamzi, M. Atri, Brain tumor classification based on hybrid approach, Vis. Comput. (2020) 1–11.
[4] S. Kumar, D.P. Mankame, Optimization driven deep convolution neural network for BT classification, Biocybern. Biomed. Eng. 40 (3) (2020) 1190–1204.
[5] W. Ayadi, W. Elhamzi, I. Charfi, M. Atri, Deep CNN for brain tumor classification, Neural Process. Lett. 53 (1) (2021) 671–700.
[6] Z.N.K. Swati, Q. Zhao, M. Kabir, F. Ali, Z. Ali, S. Ahmed, J. Lu, Brain tumor classification for MR images using transfer learning and fine-tuning, Comput. Med. Imag. Graph. 75 (2019) 34–46.
[7] M.M. Badža, M.Č. Barjaktarović, Classification of BTs from MRI images using a convolutional neural network, Appl. Sci. 10 (6) (2020) 1999.
[8] J. Kang, Z. Ullah, J. Gwak, MRI-based brain tumor classification using ensemble of deep features and machine learning classifiers, Sensors 21 (6) (2021) 2222.
[9] S.K. Kopparapu, M. Satish, Identifying optimal Gaussian filter for Gaussian noise removal, in: 2011 Third National Conference on Computer Vision, Pattern Recognition, Image Processing and Graphics, IEEE, December 2011, pp. 126–129.
[10] H.M. Afify, K.K. Mohammed, A.E. Hassanien, An improved framework for polyp image segmentation based on SegNet architecture, Int. J. Imaging Syst. Technol. 31 (3) (2021) 1741–1751.
[11] F.A. Hashim, K. Hussain, E.H. Houssein, M.S. Mabrouk, W. Al-Atabany, Archimedes optimization algorithm: a new metaheuristic algorithm for solving optimization problems, Appl. Intell. 51 (2021) 1531–1551.
[12] S. Mirjalili, SCA: a sine cosine algorithm for solving optimization problems, Knowl.-Based Syst. 96 (2016) 120–133.
[13] J.S. Ren, L. Xu, Q. Yan, W. Sun, Shepard convolutional neural networks, Adv. Neural Inf. Proces. Syst. 28 (2015).
[14] G. Huang, Z. Liu, L. Van Der Maaten, K.Q. Weinberger, Densely connected convolutional networks, in: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017, pp. 4700–4708.
[15] F. Fausto, E. Cuevas, A. Gonzales, A new descriptor for image matching based on bionic principles, Pattern Anal. Appl. 20 (2017) 1245–1259.
[16] S. Xie, S. Shan, X. Chen, J. Chen, Fusing local patterns of Gabor magnitude and phase for face recognition, IEEE Trans. Image Process. 19 (5) (2010) 1349–1361.
[17] Z. Zhong, M. Zheng, H. Mai, J. Zhao, X. Liu, Cancer image classification based on DenseNet model, in: Journal of Physics: Conference Series, vol. 1651, no. 1, IOP Publishing, 2020, p. 012143.
[18] The Figshare dataset, available at "https://figshare.com/articles/brain_tumor_dataset/1512427", accessed on April 2023.
[19] The BRATS 2020 dataset, available at "https://www.med.upenn.edu/cbica/brats2020/data.html", accessed on April 2023.
[20] M. Arbane, R. Benlamri, Y. Brik, M. Djerioui, Transfer learning for automatic BT classification using MRI images, in: 2020 2nd International Workshop on Human-Centric Smart Environments for Health and Well-being (IHSH), IEEE, February 2021, pp. 210–214.
[21] M.A. Khan, I. Ashraf, M. Alhaisoni, R. Damaševičius, R. Scherer, A. Rehman, S.A.C. Bukhari, Multimodal BT classification using deep learning and robust feature selection: a machine learning application for radiologists, Diagnostics 10 (8) (2020) 565.
[22] N. Zayed, H.A. Elnemr, Statistical analysis of Haralick texture features to discriminate lung abnormalities, J. Biomed. Imag. (2015) 12.
[23] M.S. Fasihi, W.B. Mikhael, MRI BT classification employing transform domain projections, in: 2020 IEEE 63rd International Midwest Symposium on Circuits and Systems (MWSCAS), IEEE, August 2020, pp. 1020–1023.
[24] H. Bay, A. Ess, T. Tuytelaars, L. Van Gool, Speeded-up robust features (SURF), Comput. Vis. Image Underst. 110 (3) (2008) 346–359.
[25] M. Bansal, M. Kumar, M. Kumar, 2D object recognition: a comparative analysis of SIFT, SURF and ORB feature descriptors, Multimed. Tools Appl. 80 (2021) 18839–18857.
[26] V. Lessa, M. Marengoni, Applying artificial neural network for the classification of breast cancer using infrared thermographic images, in: International Conference on Computer Vision and Graphics, Springer, Cham, September 2016, pp. 429–438.
[27] C. Cai, B. Gou, M. Khishe, M. Mohammadi, S. Rashidi, R. Moradpour, S. Mirjalili, Improved deep convolutional neural networks using chimp optimization algorithm for Covid-19 diagnosis from the X-ray images, Expert Syst. Appl. 213 (C) (March 2023).
[28] A. Saffari, M. Khishe, M. Mohammadi, A. Hussein Mohammed, S. Rashidi, DCNN-FuzzyWOA: artificial intelligence solution for automatic detection of COVID-19 using X-ray images, Comput. Intell. Neurosci. 2022 (August 2022) 1–11.
[29] B. Xu, D. Martín, M. Khishe, R. Boostani, COVID-19 diagnosis using chest CT scans and deep convolutional neural networks evolved by IP-based sine-cosine algorithm, Med. Biol. Eng. Comput. 60 (10) (October 2022) 2931–2949.
[30] M. Khishe, An automatic COVID-19 diagnosis from chest X-ray images using a deep trigonometric convolutional neural network, Imag. Sci. J. 71 (2) (February 2023) 128–141.
[31] M. Aggarwal, A.K. Tiwari, M.P. Sarathi, A. Bijalwan, An early detection and segmentation of brain tumor using deep neural network, BMC Med. Inf. Decis. Making 23 (1) (April 2023).
[32] M.B. Sahaai, G.R. Jothilakshmi, R. Prasath, S. Singh, Brain tumor detection using DNN algorithm, Turkish J. Comput. Math. Educ. 12 (11) (May 2021) 3338–3345.
[33] M.H. Gohari Raouf, A. Fallah, S. Rashidi, Use of discrete cosine-based Stockwell transform in the binary classification of magnetic resonance images of brain tumor, in: 2022 29th National and 7th International Iranian Conference on Biomedical Engineering (ICBME), December 2022, pp. 293–298.
[34] S. Tummala, S. Kadry, S.A.C. Bukhari, H.T. Rauf, Classification of brain tumor from magnetic resonance imaging using vision transformers ensembling, Curr. Oncol. 29 (10) (October 2022) 7498–7511.
[35] T. Muezzinoglu, N. Baygin, I. Tuncer, P.D. Barua, M. Baygin, S. Dogan, T. Tuncer, E.E. Palmer, K.H. Cheong, U. Rajendra Acharya, PatchResNet: multiple patch division-based deep feature fusion framework for brain tumor classification using MRI images, J. Digit. Imaging 36 (February 2023) 973–987.
[36] S. Dogan, P.D. Barua, M. Baygin, S. Chakraborty, E.J. Ciaccio, T. Tuncer, K.A.A. Kadir, M. Mohammad Nazri, R.R. Shah, C.C. Azman, K.H. Lee Ng, U. Rajendra Acharya, Novel multiple pooling and local phase quantization stable feature extraction techniques for automated classification of brain infarcts, Biocybern. Biomed. Eng. 42 (3) (September 2022) 815–828.
[37] S. Gungor Kobat, N. Baygin, E. Yusufoglu, M. Baygin, P.D. Barua, S. Dogan, O. Yaman, U. Celiker, H. Yildirim, R.-S. Tan, T. Tuncer, N. Islam, U. Rajendra Acharya, Automated diabetic trained DenseNET with digital fundus images, Diagnostics (Basel), vol. 12, no. 8,
Retinopathy Detection Using Horizontal and Vertical Patch Division-Based Pre- August 2022.

19

You might also like