
Comparative Analysis of Deep Learning Approach

for Detection and Segmentation of Brain Tumor

Mawaddah Harahap, Amir Mahmud Husein*, Shahin Singh Deol



Faculty of Technology and Computer Science, Universitas Prima Indonesia, Medan, Indonesia
mawaddah@unprimdn.ac.id, amirmahmud@unprimdn.ac.id*, shahindeol@gmail.com

Sukhbir Singh, Septuagesima D.P. Situmorang, Junardy Saputra


Faculty of Technology and Computer Science, Universitas Prima Indonesia, Medan, Indonesia
rahulsnghh03@gmail.com, gesimaseptua@gmail.com, ijunardysaputra18@gmail.com

Abstract — A brain tumor is one of the more aggressive diseases that affect both children and adults. The survival rate for patients with brain cancer or CNS tumors is approximately 34% for men and 36% for women. If not detected promptly and accurately, brain tumors can result in death. To extend the patient's life expectancy, appropriate treatment, planning, and precise diagnostics are required. Magnetic Resonance Imaging (MRI) is the best method for finding brain tumors. However, MRI calls for a skilled neurosurgeon to conduct an in-depth analysis. In the field of Computer Vision (CV), the emergence of Deep Learning (DL) offers cutting-edge solutions to image processing issues. This study used the Convolution Neural Network (CNN) algorithm and five DL models with Transfer Learning (TL) to classify brain tumors. The proposed CNN model reached an accuracy of 93.23%, outperforming the ResNet50 and DenseNet121 models. The VGG16 model performed the best, with an accuracy of 97.08%, followed by MobileNetV2 with 97.02%, while DenseNet121 had the lowest accuracy at 92.86%.

Keywords — Brain Tumor, MRI, Transfer Learning, Deep Learning, VGG16

I. INTRODUCTION

A brain tumor is one of the most dangerous diseases, and it affects both children and adults. The survival rate for patients with brain cancer or CNS tumors is approximately 34% for men and 36% for women. When abnormal cells in the brain begin to grow, they become brain tumors [1]. Primary tumors, which spread within the brain, and secondary tumors, also known as brain metastatic tumors, which have spread from other organs in the body, are the two types of cancer tumors. Depending on the region of the brain affected by the tumor, each type of brain tumor can present with a different set of symptoms [2].

Headaches, seizures, visual disturbances, vomiting, and indications of mental illness are all possible manifestations of these symptoms. In most cases, headaches get worse in the morning and are relieved by vomiting. People with brain tumors may also experience difficulty walking, speaking, or feeling. Patients may experience sudden unconsciousness or collapse as the disease progresses [3]. If not detected promptly and accurately, brain tumors can result in death. Meningiomas, gliomas, and pituitary tumors are among the most common types of brain tumors [4]. To extend the patient's lifespan, appropriate treatment, strategic planning, and precise diagnostics must be carried out. Magnetic Resonance Imaging (MRI) is the most effective method for identifying brain tumors [5]. Scanning generates a significant amount of image data, which a radiologist then examines. Due to the complexity and nature of brain tumors, manual examination can be error-prone [6].

Because it is noninvasive and can depict a wide range of tissue types and physiological processes, MRI is the most commonly used diagnostic tool for brain tumors [7], [8]. Repeated axial slices of the brain are taken using MRI, which creates a three-dimensional representation through the use of magnetic gradients and radiofrequency pulses. There are 155 slices in each brain scan, with each pixel representing a 3 mm voxel [1]. A brain tumor is a complicated issue: brain tumors exhibit numerous abnormalities in size and location, so the tumor's nature cannot be fully understood from the images alone. MRI analysis also requires the expertise of a qualified neurosurgeon [9]. It is frequently difficult and time-consuming to produce MRI reports in developing nations due to a lack of skilled physicians and tumor knowledge. Automatic classification methods like Machine Learning (ML) [10]-[12] and Deep Learning (DL) [2], [13]-[18] consistently outperform manual classification. The ML approach, however, relies on hand-crafted features: features must be extracted from the training images before learning can begin, and identifying the most crucial features may require an expert with extensive knowledge.

A cutting-edge solution to image processing problems has emerged with the rise of DL in Computer Vision (CV) [19]. Its application in medical imaging is becoming increasingly popular as CV advances [4], [12], [16], [19]-[21]. DL appears to be an ideal candidate for modeling data that is both complex and high-dimensional and has a direct impact on human life. ML-based techniques, by contrast, are limited and susceptible to errors when handling large data sets because their detection accuracy depends on the quality and representation of the extracted features [17]. Segmentation of brain tumors aims to distinguish healthy tissue from tumor regions such as the active tumor, necrotic core, and edema. In order to maximize the likelihood of successful treatment in cases of cancer, this step is essential to both diagnosis and treatment planning [2].

Computer algorithms that can segment data quickly and precisely are in high demand due to the time-consuming and laborious nature of manual segmentation. To detect brain tumors in images with a Convolution Neural Network, the pre-trained transfer learning models consist of VGG19 and ResNet50 trained on ImageNet for feature extraction. The brain tumor MRI dataset was sourced from Kaggle [22].

In this decade, the application of deep learning in the field of computer vision has expanded into various areas, one of which is medical imaging. Some recent work on the success of this approach is presented in [23], which conducted a comparative performance analysis of the CNN-based transfer learning models VGG-16, ResNet-50, and Inception-v3 for automatic prediction of tumor cells in the brain. The authors evaluated the pre-trained transfer learning models based on training time and number of epochs and found the VGG16 model to be more accurate than the other models. In addition, Runwei Zhou et al. [9] built a suitable network model for the brain tumor segmentation task based on the CNN framework to enhance the tumor core region, improve the accuracy of tumor region segmentation, and obtain good substructure segmentation predictions on the 2017 BraTS data set. Using Convolution Neural Networks (CNN) and Transfer Learning (TL), we propose a system for detection and classification in this study. The proposed model is expected to be very helpful to doctors all over the world.
II. RESEARCH METHODS

A. Type of Research
Experimentation was used in this research. The goal of the study was to determine how well the CNN algorithm with the Transfer Learning (TL) approach classified positive and negative X-ray images of brain tumors from computed tomography (CT) results.

B. Time and Place of Research Implementation
The goal of the study was to learn more about various aspects of image recognition, particularly transfer learning (TL) and convolutional neural networks (CNN). The study began in April 2020 and was completed in September 2020.

C. Transfer Learning
Transfer Learning is a technique that uses a model trained on one dataset as a starting point for solving a similar problem, adapting its parameters to the new dataset. This is done to avoid redundancy and to capitalize on previous knowledge rather than training from scratch. ResNet is one of the transfer learning networks used in this study.

Fig. 1. Residual Network (ResNet).

D. Architecture Used
VGG16, VGG19, MobileNetV2, InceptionResNetV2, and ResNet (ResNet101V2, ResNet152V2, and ResNet50V2) were the architectures utilized in the transfer learning network that was the focus of this study. These architectures were used for the detection of normal and brain tumor images; a minimal sketch of this setup is given below.

Fig. 2. Method
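The following is a minimal sketch of the transfer-learning setup described in Section II-D, assuming a TensorFlow/Keras implementation with a frozen ImageNet backbone and a single sigmoid output for the normal-versus-tumor decision; the input size, dropout rate, and choice of ResNet50V2 as the backbone are illustrative assumptions, not the authors' exact configuration.

```python
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import ResNet50V2

IMG_SHAPE = (224, 224, 3)  # assumed input size; images are resized before training

# Load a backbone pre-trained on ImageNet and freeze its weights,
# so only the new classification head is trained (transfer learning).
base = ResNet50V2(weights="imagenet", include_top=False, input_shape=IMG_SHAPE)
base.trainable = False

# Attach a small binary head: global pooling -> dropout -> sigmoid unit.
model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dropout(0.3),
    layers.Dense(1, activation="sigmoid"),  # 1 = tumor, 0 = normal
])

model.summary()
```

Any of the other backbones listed above (VGG16, VGG19, MobileNetV2, InceptionResNetV2, and so on) can be swapped in by changing the imported application class.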

E. Dataset
This study utilized a Brain Tumor X-ray online dataset as its source of data. Example images from the testing and analysis of brain tumor detection are shown in fig. 3 and fig. 4.

Fig. 3. Normal Brain X-Ray

Fig. 4. Brain X-Ray Detected Tumor

F. Work Procedure
The following steps describe the procedure for detecting NORMAL or brain tumor images in the dataset (a data-preparation sketch follows the list):
1) The X-ray data set was used to collect the brain MRI images, which were divided into two folders, NORMAL and TUMOR; the folders were later combined into a single set of images that was split into a 252-image training section, a 155-image validation section, and a 97-image testing section;
2) Next, a transfer learning network (VGG19, MobileNetV2, InceptionResNetV2, and ResNet (ResNet101V2, ResNet152V2, and ResNet50V2)) was used to process the brain images in order to determine whether or not a patient had a tumor;
3) During the testing phase, CNN was used to evaluate and monitor potential features by applying a number of convolution and pooling operations;
4) The fully connected layer served as a classifier during this phase, utilizing the evaluated probabilities and the features extracted from the image's objects.
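As a rough illustration of step 1, the sketch below gathers images from two hypothetical class folders (NORMAL and TUMOR) and splits them into training, validation, and test subsets; the directory layout and the split proportions, chosen to roughly reproduce the reported 252/155/97 counts, are assumptions for illustration only.

```python
import random
from pathlib import Path

random.seed(42)

# Hypothetical layout: dataset/NORMAL/*.jpg and dataset/TUMOR/*.jpg
DATA_DIR = Path("dataset")
paths = [(p, label) for label in ("NORMAL", "TUMOR")
         for p in (DATA_DIR / label).glob("*.jpg")]
random.shuffle(paths)

# Split proportions approximating the reported counts
# (252 training, 155 validation, 97 test images out of 504).
n = len(paths)
n_train, n_val = round(0.50 * n), round(0.31 * n)

train_set = paths[:n_train]
val_set = paths[n_train:n_train + n_val]
test_set = paths[n_train + n_val:]

print(len(train_set), len(val_set), len(test_set))
```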

III. RESULT AND DISCUSSION

Various kinds of experiments were carried out in this section to demonstrate the efficacy of the proposed Convolution Neural Network (CNN) and Transfer Learning (TL) architecture. To begin, the dataset augmentation procedure was crucial in determining the appropriate image patch size for the pre-training procedure's input. The goal of this operation was to determine the best patch size for CNN and TL training. Experiments were carried out on ImageNet with only one path to evaluate the results of traditional CNN and pre-trained TL. The performance of CNN with various layers and iteration times was also evaluated in this section. Then, a different number of iterations was used to evaluate each traditional CNN and TL tested in the preceding section. Finally, an experiment was conducted to demonstrate the CNN and TL architecture's high accuracy and segmentation efficiency.
A. Dataset Augmentation
We tested the CNN and TL models' generalizability in this step by utilizing a data augmentation method. In image classification, data augmentation, that is, applying transformations to the existing data to increase the amount of data available without acquiring new data, has proven to be beneficial. We used data augmentation in this study due to the limited number of images in the dataset. To generate additional images, images in the training set were rotated at random angles between -20 and 20 degrees and translated arbitrarily by up to thirty pixels vertically and horizontally. Additionally, a dynamically enlarged set of images was created using the image data augmenter function during each training phase. This data augmentation technique significantly increased the number of images in the training set, making it possible to train our CNN and DL models more efficiently with a significantly larger number of training images. Furthermore, the augmented images were used only for training and not for testing the proposed framework; as a result, the studied framework was evaluated only on actual images of the data set. Each of the brain tumor MRI image datasets was augmented using the stages of the data augmentation process depicted in fig. 5 and fig. 6.
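A minimal sketch of the augmentation described above, assuming a Keras ImageDataGenerator; the paper refers to an image data augmenter function, so the exact API may differ, but the rotation and translation ranges below follow the values stated in the text.

```python
import numpy as np
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Random rotations in [-20, 20] degrees and shifts of up to 30 pixels
# in each direction, matching the ranges stated above; applied on the fly
# so the stored training set itself is never modified.
augmenter = ImageDataGenerator(
    rotation_range=20,      # degrees
    width_shift_range=30,   # integer value means pixels
    height_shift_range=30,  # integer value means pixels
    fill_mode="nearest",
)

# Tiny demo on a dummy image batch (1 image, 256x256 RGB).
dummy_batch = np.random.rand(1, 256, 256, 3)
augmented = next(augmenter.flow(dummy_batch, batch_size=1))
print(augmented.shape)  # (1, 256, 256, 3)
```

Because these transformations are generated on the fly, the original training images stay untouched, and, as noted above, evaluation is carried out only on un-augmented images.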
Fig. 5. Dataset Augmentation Process

How an image is cropped is shown in fig. 5. Preparing the image was the first step, followed by locating the largest contour and the extreme points based on that contour in the second and third stages. The final step, step 4, was to crop the image at the extreme points, as shown in fig. 6.

Fig. 6. Dataset Augmentation Result. (a) MRI image of the non-tumor. (b) MRI image of the tumor.

Without actually collecting new data, the goal of applying data augmentation was to raise the variance of the data used to train the model.
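The cropping procedure (prepare the image, find the largest contour, take its extreme points, and crop) can be sketched with OpenCV as follows; the blur kernel and threshold value are assumptions rather than the authors' exact settings.

```python
import cv2
import numpy as np

def crop_to_brain(image: np.ndarray) -> np.ndarray:
    """Crop an MRI slice to the extreme points of its largest contour."""
    # Step 1: prepare the image (grayscale, blur, threshold).
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    gray = cv2.GaussianBlur(gray, (5, 5), 0)
    _, thresh = cv2.threshold(gray, 45, 255, cv2.THRESH_BINARY)

    # Step 2: locate the largest contour.
    contours, _ = cv2.findContours(thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    largest = max(contours, key=cv2.contourArea)

    # Step 3: extreme points of that contour.
    left = tuple(largest[largest[:, :, 0].argmin()][0])
    right = tuple(largest[largest[:, :, 0].argmax()][0])
    top = tuple(largest[largest[:, :, 1].argmin()][0])
    bottom = tuple(largest[largest[:, :, 1].argmax()][0])

    # Step 4: crop at the extreme points.
    return image[top[1]:bottom[1], left[0]:right[0]]
```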
B. Dataset Training
The results of the various TL classifiers used to classify brain MRI images from the brain tumor classification (MRI) dataset are discussed in this section. In order to simplify the proposed model's classification process, the dataset that underwent data augmentation was divided into two classes: yes = 1 and no = 0. In the image batch pre-processing stage, an Image Data Generator was used to resize the images to 244 x 244 x 3, rotate them slightly (probability = 1, maximum rotation = 15), and zoom them (probability = 0.5, percentage area = 0.9), as well as to rescale pixel values from [0..255] to [0..1] to remove as many text annotations and bright pixels as possible.
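A hedged sketch of this batch pre-processing stage, again assuming Keras generators; the probability-based rotation and zoom controls described above resemble a different augmentation API, so the settings below (1/255 rescaling, up to 15 degrees of rotation, about 10% zoom, 244 x 244 resizing, binary labels) only approximate the stated configuration.

```python
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Rescale to [0, 1], small random rotations (<= 15 degrees) and zooms,
# approximating the batch pre-processing described in Section III-B.
preprocess = ImageDataGenerator(
    rescale=1.0 / 255,
    rotation_range=15,
    zoom_range=0.1,
)

# Hypothetical folder layout with one subfolder per class (no/yes).
train_batches = preprocess.flow_from_directory(
    "data/train",
    target_size=(244, 244),
    batch_size=15,
    class_mode="binary",  # yes = 1, no = 0
)
```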

All models were trained using Adam as the optimizer and a sigmoid activation function with a cross-entropy loss.

All of the trained models are depicted in fig. 7; it is evident that the validation accuracy increased along with the epoch value. The researchers made observations to ascertain the training, validation, and test accuracy using the parameters batch size = 15, epoch value = 100, the weight values, and Adam as the optimizer.
Fig. 7. Results of training and validation of all models (panels: InceptionV3 model, VGG16 model, DenseNet-201 model, and the proposed CNN model).
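A compile-and-fit sketch under the reported training settings (Adam optimizer, sigmoid output with binary cross-entropy, batch size 15, 100 epochs); the tiny stand-in network and random arrays below are placeholders so the snippet runs on its own and are not the authors' architecture or data.

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

# Stand-in data so the sketch runs end to end; in the study the batches
# come from the augmented image generators described earlier.
x_train = np.random.rand(60, 244, 244, 3).astype("float32")
y_train = np.random.randint(0, 2, size=(60,))
x_val = np.random.rand(20, 244, 244, 3).astype("float32")
y_val = np.random.randint(0, 2, size=(20,))

# Small CNN used purely for illustration (not the authors' exact model).
model = models.Sequential([
    layers.Conv2D(16, 3, activation="relu", input_shape=(244, 244, 3)),
    layers.MaxPooling2D(),
    layers.Conv2D(32, 3, activation="relu"),
    layers.GlobalAveragePooling2D(),
    layers.Dense(1, activation="sigmoid"),
])

# Adam optimizer, binary cross-entropy loss, sigmoid output, batch size 15,
# and (here shortened) epoch count, as reported in the text.
model.compile(optimizer=tf.keras.optimizers.Adam(),
              loss="binary_crossentropy",
              metrics=["accuracy"])
history = model.fit(x_train, y_train,
                    validation_data=(x_val, y_val),
                    batch_size=15,
                    epochs=2)  # the paper trains for 100 epochs
```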

C. Result

TABLE 1. Comparison results based on the accuracy of training, validation and test (%)

Model         | Training Accuracy | Validation Accuracy | Test Accuracy
VGG16         | 98.07             | 97.56               | 97.92
ResNet50      | 99.03             | 98.23               | 94.03
DenseNet121   | 92.87             | 90.34               | 92.86
InceptionV3   | 96.76             | 94.23               | 94.78
MobileNetV2   | 97.05             | 96.23               | 97.02
Proposed CNN  | 98.34             | 96.56               | 93.23

The training, testing, and validation results for every model considered are shown in Table 1. We found that the proposed CNN model outperformed the ResNet50 and DenseNet121 models, with an accuracy of 98.34 on the training dataset, 96.56 on the validation dataset, and 93.23 on the testing dataset. However, the ResNet50 model reached a higher training accuracy, indicating its suitability for large datasets.
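The test-accuracy column of Table 1 corresponds to evaluating each trained model on the held-out images; a minimal sketch, assuming a saved Keras model and an un-augmented test generator (the file name and directory are hypothetical):

```python
from tensorflow.keras.models import load_model
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Load a previously trained model (hypothetical file name).
model = load_model("vgg16_brain_tumor.h5")

# Test images are only rescaled, never augmented.
test_batches = ImageDataGenerator(rescale=1.0 / 255).flow_from_directory(
    "data/test",
    target_size=(244, 244),
    batch_size=15,
    class_mode="binary",
    shuffle=False,
)

loss, accuracy = model.evaluate(test_batches)
print(f"Test accuracy: {accuracy:.2%}")
```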
D. Discussion
In the medical field, the classification of brain tumors is important. It was not easy to build a CNN that performed well; as a result, it was critical to employ an optimization strategy when setting the CNN hyperparameters. A novel method for classifying common brain tumors using the CNN network structure was proposed in this paper.

Five transfer learning models, VGG16, ResNet50, DenseNet121, InceptionV3, and MobileNetV2, were compared to the proposed CNN model, and the proposed model was not found to be superior to all of the existing models. According to the findings of the study, the optimized model offered classification performance with an accuracy of 93.23 percent. With an accuracy of 97.08%, the VGG16 model performed the best, followed by MobileNetV2 with 97.02%, and DenseNet121 came in last with an accuracy of 92.86%. Finally, Amou et al.'s research [24] used the Bayesian Optimization algorithm to improve CNN performance in classifying three types of tumor diseases; that CNN model, with its optimal hyperparameters, is taken into consideration for future work and will be taken into account in subsequent research.

IV. CONCLUSION

For the purpose of using Transfer Learning (TL) to classify brain tumors, this study compares a CNN and five Deep Learning (DL) models. Finding the best DL classifier for brain tumor classification is the goal of this effort, which aims to automate the process of detecting brain tumors. The proposed CNN model outperforms the ResNet50 and DenseNet121 models in terms of classification accuracy with 93.23 percent, but the VGG16 model outperforms them all with 97.08 percent accuracy, followed by MobileNetV2 with 97.02 percent accuracy, and DenseNet121 comes in last with 92.86 percent accuracy.

Several directions are recommended for further research: improving classification performance by setting the CNN hyperparameters with optimization techniques; performing additional tests to classify data into more class labels more accurately; and utilizing other models for segmentation, such as Mask R-CNN, RefineDet, and others.
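As a pointer toward the recommended hyperparameter optimization, and toward the Bayesian Optimization used in [24], the following is a minimal sketch using the KerasTuner library; the search space, trial budget, input size, and dummy data are illustrative assumptions only, not the setup of [24].

```python
import numpy as np
import keras_tuner as kt
from tensorflow.keras import layers, models, optimizers

def build_model(hp):
    # Hyperparameters to tune: number of filters, dropout rate, learning rate.
    model = models.Sequential([
        layers.Conv2D(hp.Choice("filters", [16, 32, 64]), 3, activation="relu",
                      input_shape=(64, 64, 3)),
        layers.MaxPooling2D(),
        layers.GlobalAveragePooling2D(),
        layers.Dropout(hp.Float("dropout", 0.1, 0.5, step=0.1)),
        layers.Dense(1, activation="sigmoid"),
    ])
    lr = hp.Float("lr", 1e-4, 1e-2, sampling="log")
    model.compile(optimizer=optimizers.Adam(lr), loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model

tuner = kt.BayesianOptimization(build_model, objective="val_accuracy",
                                max_trials=5, overwrite=True,
                                directory="tuning", project_name="brain_tumor")

# Dummy data so the sketch is self-contained; real MRI batches would go here.
x = np.random.rand(40, 64, 64, 3).astype("float32")
y = np.random.randint(0, 2, size=(40,))
tuner.search(x, y, validation_split=0.25, epochs=2, batch_size=15)
print(tuner.get_best_hyperparameters(1)[0].values)
```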
ACKNOWLEDGMENT
The researchers wish to express their gratitude to the Rector of Universitas Prima Indonesia.

REFERENCES
[1] L. M. Ballestar and V. Vilaplana, "MRI Brain Tumor Segmentation and Uncertainty Estimation Using 3D-UNet Architectures," in Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), vol. 12658 LNCS, Dec. 2021, pp. 376–390, doi: 10.1007/978-3-030-72084-1_34.
[2] R. Ranjbarzadeh, A. Bagherian Kasgari, S. Jafarzadeh Ghoushchi, S. Anari, M. Naseri, and M. Bendechache, "Brain tumor segmentation based on deep learning and an attention mechanism using MRI multi-modalities brain images," Sci. Rep., vol. 11, no. 1, pp. 1–17, May 2021, doi: 10.1038/s41598-021-90428-8.
[3] M. S. Majib, M. M. Rahman, T. M. Shahriar Sazzad, N. I. Khan, and S. K. Dey, "VGG-SCNet: A VGG Net based Deep Learning framework for Brain Tumor Detection on MRI Images," IEEE Access, 2021, doi: 10.1109/ACCESS.2021.3105874.
[4] A. Naseer, T. Yasir, A. Azhar, T. Shakeel, and K. Zafar, "Computer-Aided Brain Tumor Diagnosis: Performance Evaluation of Deep Learner CNN Using Augmented Brain MRI," Int. J. Biomed. Imaging, vol. 2021, 2021, doi: 10.1155/2021/5513500.
[5] F. Ekong et al., "Bayesian Depth-Wise Convolutional Neural Network Design for Brain Tumor MRI Classification," Diagnostics, vol. 12, no. 7, p. 1657, Jul. 2022, doi: 10.3390/diagnostics12071657.
[6] L. Liu, L. Kuang, and Y. Ji, "Multimodal MRI Brain Tumor Image Segmentation Using Sparse Subspace Clustering Algorithm," Comput. Math. Methods Med., vol. 2020, 2020, doi: 10.1155/2020/8620403.
[7] N. Wang, C. Chen, Y. Xie, and L. Ma, "Brain Tumor Anomaly Detection via Latent Regularized Adversarial Network," arXiv, Jul. 2020, doi: 10.48550/arxiv.2007.04734.
[8] M. F. Safdar, S. S. Alkobaisi, and F. T. Zahra, "A comparative analysis of data augmentation approaches for magnetic resonance imaging (MRI) scan images of brain tumor," Acta Inform. Medica, vol. 28, no. 1, pp. 29–36, Mar. 2020, doi: 10.5455/AIM.2020.28.29-36.
[9] R. Zhou, S. Hu, B. Ma, and B. Ma, "Automatic Segmentation of MRI of Brain Tumor Using Deep Convolutional Network," Biomed Res. Int., vol. 2022, pp. 1–9, Jun. 2022, doi: 10.1155/2022/4247631.
[10] A. M. U. D. Khanday, S. T. Rabani, Q. R. Khan, N. Rouf, and M. Mohi Ud Din, "Machine learning based approaches for detecting COVID-19 using clinical text data," Int. J. Inf. Technol., 2020, doi: 10.1007/s41870-020-00495-9.
[11] T. Xiao, L. Liu, K. Li, W. Qin, S. Yu, and Z. Li, "Comparison of Transferred Deep Neural Networks in Ultrasonic Breast Masses Discrimination," Biomed Res. Int., vol. 2018, 2018, doi: 10.1155/2018/4605191.
[12] Y. H. Li, N. N. Yeh, S. J. Chen, and Y. C. Chung, "Computer-Assisted Diagnosis for Diabetic Retinopathy Based on Fundus Images Using Deep Convolutional Neural Network," Mob. Inf. Syst., vol. 2019, no. 1, 2019, doi: 10.1155/2019/6142839.
[13] J. Chen et al., "TransUNet: Transformers Make Strong Encoders for Medical Image Segmentation," arXiv, Feb. 2021. [Online]. Available: http://arxiv.org/abs/2102.04306 (accessed Feb. 28, 2021).
[14] M. F. Safdar, S. S. Alkobaisi, and F. T. Zahra, "A comparative analysis of data augmentation approaches for magnetic resonance imaging (MRI) scan images of brain tumor," Acta Inform. Medica, vol. 28, no. 1, pp. 29–36, Mar. 2020, doi: 10.5455/AIM.2020.28.29-36.
[15] X. Li, X. Hu, L. Yu, L. Zhu, C. W. Fu, and P. A. Heng, "CANet: Cross-Disease Attention Network for Joint Diabetic Retinopathy and Diabetic Macular Edema Grading," IEEE Trans. Med. Imaging, vol. 39, no. 5, pp. 1483–1493, 2020, doi: 10.1109/TMI.2019.2951844.
[16] C. Qin, D. Yao, Y. Shi, and Z. Song, "Computer-aided detection in chest radiography based on artificial intelligence: A survey," BioMedical Engineering Online, vol. 17, no. 1, pp. 1–23, Aug. 2018, doi: 10.1186/s12938-018-0544-y.
[17] M. Masood et al., "A novel deep learning method for recognition and classification of brain tumors from MRI images," Diagnostics, vol. 11, no. 5, p. 744, Apr. 2021, doi: 10.3390/diagnostics11050744.
[18] D. Jha, M. A. Riegler, D. Johansen, P. Halvorsen, and H. D. Johansen, "DoubleU-Net: A Deep Convolutional Neural Network for Medical Image Segmentation," Proc. IEEE Symp. Comput. Med. Syst., vol. 2020-July, pp. 558–564, Jun. 2020. [Online]. Available: https://arxiv.org/abs/2006.04868v2 (accessed Nov. 07, 2021).
[19] A. Esteva et al., "Deep learning-enabled medical computer vision," npj Digital Medicine, vol. 4, no. 1, pp. 1–9, Dec. 2021, doi: 10.1038/s41746-020-00376-2.
[20] S. Yeung et al., "A computer vision system for deep learning-based detection of patient mobilization activities in the ICU," npj Digit. Med., vol. 2, no. 1, pp. 1–5, Dec. 2019, doi: 10.1038/s41746-019-0087-z.
[21] X. Wang, Y. Peng, L. Lu, Z. Lu, M. Bagheri, and R. M. Summers, "ChestX-ray8: Hospital-scale chest X-ray database and benchmarks on weakly-supervised classification and localization of common thorax diseases," Proc. 30th IEEE Conf. Comput. Vis. Pattern Recognition (CVPR 2017), vol. 2017-Janua, pp. 3462–3471, 2017, doi: 10.1109/CVPR.2017.369.
[22] N. Chakrabarty, "Brain MRI Images for Brain Tumor Detection," Kaggle, 2019. [Online]. Available: https://www.kaggle.com/datasets/navoneel/brain-mri-images-for-brain-tumor-detection (accessed Jul. 10, 2022).
[23] C. Srinivas et al., "Deep Transfer Learning Approaches in Performance Analysis of Brain Tumor Classification Using MRI Images," J. Healthc. Eng., vol. 2022, 2022, doi: 10.1155/2022/3264367.
[24] M. A. Amou, K. Xia, S. Kamhi, and M. Mouhafid, "A Novel MRI Diagnosis Method for Brain Tumor Classification Based on CNN and Bayesian Optimization," Healthc., vol. 10, no. 3, p. 494, Mar. 2022, doi: 10.3390/healthcare10030494.
