
A NOVEL TRANSFER LEARNING BASED

BRAIN TUMOR CLASSIFICATION

ABSTRACT:
Accurate detection and classification of brain tumors play a pivotal role in clinical
diagnosis and treatment planning. In this study, we present a comprehensive approach for brain
tumor detection leveraging state-of-the-art techniques. Our methodology integrates data
augmentation as a preprocessing step to enhance the diversity and abundance of training data, thereby
improving the robustness of the classifier. We employ a fine-tuned pre-trained model,
specifically Inception V3, to extract informative features from brain MRI images. These
features are then utilized by a Convolutional Neural Network (CNN) model for classification
across four categories: Glioma, Meningioma, Non-tumor, and Pituitary tumors. Evaluation of
our model is conducted using various performance metrics, including precision, recall, F1-
score, as well as training and testing accuracy. Through extensive experimentation and
comparative analysis, we demonstrate the efficacy of our approach in accurately detecting and
categorizing brain tumors. Our proposed framework not only contributes to advancing the field
of medical image analysis but also holds significant promise for enhancing clinical decision-
making and patient care in neuro-oncology.

1. INTRODUCTION
1.1. About the project
The brain and spinal cord (together known as the Central Nervous System (CNS))
regulate many biological tasks such as organizing, analyzing, making decisions, giving orders,
and integrating. The human brain is incredibly complicated because of its elaborate physical
make-up. Stroke, infection, brain tumors, and migraines are only a few examples of CNS
illnesses that present considerable difficulties in diagnosis, evaluation, and the development of
effective treatments. In terms of early diagnosis, brain tumors—which are caused by the
abnormal proliferation of brain cells—present a significant problem for neuropathologists and
radiologists. Detecting brain tumors in magnetic resonance imaging (MRI) scans is a difficult
and error-prone manual process. Brain tumors are characterized by the abnormal development of nerve
cells, leading to a mass. About 130 different forms of tumors can develop in the brain and CNS,
ranging from benign to malignant and from extremely rare to common occurrences. These
malignancies can either form in the brain (primary brain tumors) or spread there from elsewhere
in the body (secondary or metastatic brain tumors). Primary brain tumors refer to tumors that
originate within the brain itself. These tumors are formed from the brain cells or can be
encapsulated within the nerve cells surrounding the brain. Primary brain tumors can exhibit a
range of characteristics, including both benign and malignant forms. Secondary brain tumors,
also known as metastatic brain tumors, are the most common type of malignant brain tumor. It
is important to note that while benign tumors do not typically spread from one area of the body
to another, secondary brain tumors are invariably cancerous and pose a serious threat to health.
Roughly 700,000 patients in the United States have been diagnosed with primary
brain tumors, and in the United States alone approximately 85,000
new cases of brain tumors were detected in 2021. A patient’s age is just one of several factors
that affect prognosis and survival rates when dealing with a brain tumor. According to the cited
research, patients aged 55–64 had a 46.1% one-year survival rate, whereas those aged
65–74 had a 29.3% survival rate. These studies highlight the importance of early tumor
detection in increasing the likelihood of survival.

Meningioma, glioma, and pituitary tumors are the most frequent primary
brain tumors seen in clinical practice. Most cases of meningioma arise near the meninges
tissues on the periphery of the brain or spinal cord. This benign tumor develops in the
membranes that protect the brain and spinal cord. However,
glioma, the brain tumor with the highest fatality rate, develops from the glial cells that surround
and support the neurons. About a third of all cases of brain tumors are gliomas. Benign pituitary
tumors develop inside the pituitary gland. Prognosis and treatment options for brain tumors
depend on a correct diagnosis. However, conventional biopsy techniques are painful, time-
consuming, and fraught with inaccuracy in sampling. Histopathological tumor grading
(Biopsy) has its own set of problems, including intra-tumor heterogeneity and differences in
the subjective assessments of different experts. The diagnostic process for tumors is made more
difficult and restrictive by these characteristics.

Effective treatment planning and patient outcomes depend on a quick and precise
diagnosis of brain tumors. However, radiologists may spend considerable effort on image analysis
when dealing with brain tumors. Today, radiologists must rely on their own skills and
subjective interpretation of images to detect tumors and make decisions manually. Accurate
diagnosis by human visual examination alone is difficult due to the wide range of practitioners’
expertise and the inherent complexity of brain tumor images. MRI scanning is commonly
utilized in neurology because it allows for an in-depth examination of the skull and brain. It
provides axial, coronal, and sagittal imaging for a more thorough evaluation. In addition to
producing high-resolution pictures with great contrast, MRI also has the benefit of being a
radiation-free technology. For this reason, it is the preferred noninvasive imaging technique for
identifying many forms of brain malignancy.

Artificial intelligence (AI) plays a significant role in the detection and diagnosis of brain
tumors, making it a useful complement to the notoriously difficult field of brain tumor surgery.
Subsets of artificial intelligence like machine learning (ML) and deep learning (DL) and
transfer learning have revolutionized neuropathological practices. Preprocessing of data,
feature extraction, feature selection, feature reduction, and classification are only a few of the
steps involved in these methods. AI has helped boost neuropathologists' confidence in making
diagnoses of brain tumors, allowing them to make better decisions for their patients. Recent
developments in deep learning have led to a wide range of useful applications in fields as
disparate as pattern classification, object detection, voice recognition, and decision-making. Researchers
in the healthcare industry have used a variety of ML algorithms, such as support vector
machines (SVMs), k-nearest neighbor (k-NN), decision trees, Naive Bayes, and DL algorithms,
such as trained convolutional neural networks (CNNs), VGGNets, GoogleNet, and ResNets,
to aid in the diagnosis of cancer. However, research progress is hampered by the dearth
of comprehensive medical datasets, owing to privacy issues that prevent the exchange of patient
information. In addition, because existing methods lack precision and recall, they are inefficient
and take too long to classify images, which can postpone the start of treatment. Deep learning
can be used to diagnose neurological diseases and analyze brain tumor images.

A new automated method based on a transfer learning algorithm is fine-tuned using
our proposed module; it can replace conventional invasive brain tumor detection and
enhance overall detection accuracy. In order to enhance the accuracy of the brain tumor
identification algorithm, a large dataset of brain tumor images was collected from open-source
resources. To improve the readability of low-resolution MRI images, a three-stage image
preparation strategy was put in place. In addition, we examined the effect of overfitting on
classification accuracy and utilized a data augmentation technique to boost performance on
limited datasets. We developed a fully automated brain tumor detection model using deep
learning and transfer learning algorithms. This model aims to reduce false detections
and ultimately minimize the loss of human lives associated with brain tumors.

2. SYSTEM ANALYSIS
2.1. EXISTING SYSTEM

Manav Sharma has suggested a method for identifying brain tumors. CNN is a deep
learning algorithm that is used for image processing; it takes an image as input and
differentiates it on different bases or features. The model, based on machine learning
algorithms, detects brain tumours from magnetic resonance images with low accuracy. The
time taken to train the dataset is high, and the model is highly accurate but not completely
accurate.

Mohammad Zafer Khaliki observes that big data enables high performance in image
processing classification problems, a subfield of artificial intelligence. That study aims to
classify brain tumors such as glioma, meningioma, and pituitary tumor from brain MR images.
Convolutional Neural Network (CNN) and CNN-based Inception-V3, EfficientNetB4, and
VGG19 transfer learning methods were used for classification. Motivated to investigate how
single-CNN and multilayer CNN-based transfer learning models would perform, the authors
subjected the dataset to classification as-is, without rotation and cropping operations, which is
the most important limitation of their study.

Md. Saikat Islam Khan proposes a system for automatically classifying brain tumors
based on two deep learning models. A “fine-tuned proposed model with the attachment of the
transfer learning based VGG16” architecture is used for classifying normal and abnormal brain
images. Although the proposed models achieved promising classification outcomes, a number
of issues remain for future work; one of the key difficulties in deep learning-based automated
detection of brain tumors is the requirement for a substantial amount of annotated images
collected by a qualified physician or radiologist.

Shtwai Alsubai proposes a system for automatically classifying early-stage brain tumors
utilizing deep and machine learning approaches. That paper proposes a hybrid deep learning
model, Convolutional Neural Network-Long Short-Term Memory (CNN-LSTM), for
classifying and predicting brain tumors. Future work would investigate the performance of the
proposed approach on the multi-class MR brain tumor classification problem and use different
datasets, such as BraTS 2022 and T1-weighted images, to enhance the performance of the
proposed model.

2.2. PROPOSED SYSTEM

Brain tumor detection based on transfer learning would involve leveraging pre-trained
deep learning models and fine-tuning them on brain tumor imaging datasets. Gather a labeled
dataset of brain MRI scans, including images of both tumor and non-tumor cases. Preprocess
the images by resizing them to a uniform size, applying intensity normalization, and potentially
augmenting the dataset to increase variability and robustness. Using transfer learning, select
a pre-trained convolutional neural network (CNN) model that has been trained on a large-scale
dataset (e.g., ImageNet) and has learned general image features. Remove the final classification
layers of the pre-trained CNN, retaining the convolutional layers that capture hierarchical
image features. Freeze the weights of the pre-trained convolutional layers to prevent them from
being updated during training. Extract features from the pre-trained CNN for each MRI image
in the dataset. These features serve as a high-level representation of the input images and
capture relevant patterns for tumor detection. Design a classification model to learn from the
extracted features and make predictions about tumor presence. Add fully connected layers to
the feature extraction output to perform classification. The number of neurons in the final layer
corresponds to the number of classes (Glioma, Meningioma, Notumor, and Pituitary).
Introduce activation functions and regularization techniques to improve the model's
generalization capability and prevent overfitting. Define the loss function and choose an
optimizer (e.g., Adam) for training the classification model. Split the dataset into training,
validation, and test sets, ensuring that each set contains a balanced distribution of images. Train
the classification model using the extracted features as input and the corresponding labels
(Glioma, Meningioma, Notumor, and Pituitary).
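A minimal sketch of this pipeline in Keras follows. The head width, dropout rate, optimizer settings, and input resolution are illustrative assumptions, not the exact configuration of this project:

    import tensorflow as tf
    from tensorflow.keras.applications import InceptionV3
    from tensorflow.keras import layers, models

    # Load Inception V3 pre-trained on ImageNet, without its final classification layers.
    base_model = InceptionV3(weights="imagenet", include_top=False,
                             input_shape=(299, 299, 3))
    base_model.trainable = False  # freeze the pre-trained convolutional layers

    # Attach a classification head for the four tumor classes.
    model = models.Sequential([
        base_model,
        layers.GlobalAveragePooling2D(),
        layers.Dense(256, activation="relu"),   # assumed head width
        layers.Dropout(0.5),                    # regularization against overfitting
        layers.Dense(4, activation="softmax"),  # Glioma, Meningioma, Notumor, Pituitary
    ])

    model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4),
                  loss="categorical_crossentropy", metrics=["accuracy"])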

3. DEVELOPMENT ENVIRONMENT

3.1 HARDWARE ENVIRONMENT

Processor : NVIDIA GPU

RAM : 4GB

Mouse : Compatible Mouse

Keyboard : Logitech

Monitor : 15" Color Monitor

3.2. SOFTWARE ENVIRONMENT

 Operating System

 Windows 11

 Python - Google Colab

 Packages

 Numpy

 Pandas

 Tensorflow

3.3. GOOGLE COLAB

Google Colaboratory, commonly known as Google Colab, is a cloud-based Jupyter
notebook environment that provides a platform for writing and executing Python code through
your browser. It is especially popular in the data science and machine learning communities.
Colab is a hosted Jupyter Notebook service that requires no setup to use and provides free
access to computing resources, including GPUs and TPUs, which makes it especially well
suited to machine learning, data science, and education. With Colab you can import an image
dataset, train an image classifier on it, and evaluate the model, all in just a few lines of code.
Colab notebooks execute code on Google's cloud servers, meaning you can leverage the power
of Google hardware, including GPUs and TPUs, regardless of the power of your machine. This
makes Colab an excellent platform for developing and running Python code without any local
installation, and its GPU and TPU support turns out to be very useful when running machine
learning and deep learning models.

Figure 1: Google Colab with deep learning

Colaboratory by Google (Google Colab for short) is a Jupyter notebook based runtime
environment which allows you to run code entirely on the cloud. This matters because it
means that you can train large-scale ML and DL models even if you don't have access to a
powerful machine or a high-speed internet connection.

Why Use Google Colab?

Most users prefer Colab as it is very convenient and easy to use, and you get free
processing power with it. Colab is particularly suited for machine learning and data analysis,
as it provides its users free access to high computing resources such as GPUs and TPUs that
are essential to training models quickly and efficiently. When it comes to deep learning and
artificial intelligence, you need to train your program on test data. The program reads and
interprets this data, and the more data you feed it, the more accurate the AI will be. However,
processing all this data can require powerful hardware, which is where Google’s cloud comes
in. With Google’s resources (GPUs, TPUs), you can train your model on their servers, taking
advantage of their processing power.

Benefits of Google Colab:

Google Colab offers several benefits that make it a popular choice among data
scientists, researchers, and machine learning practitioners. Key features of Google
Colaboratory notebooks include:

Free Access to GPUs: Colab offers free GPU access, which is particularly useful for
training machine learning models that require significant computational power.

No Setup Required: Colab runs in the cloud, eliminating the need for users to set up
and configure their own development environment. This makes it convenient for quick coding
and collaboration.

Collaborative Editing: Multiple users can work on the same Colab notebook
simultaneously, making it a useful tool for collaborative projects.

Integration with Google Drive: Colab is integrated with Google Drive, allowing users
to save their work directly to their Google Drive account. This enables easy sharing and access
to notebooks from different devices.

Support for Popular Libraries: Colab comes pre-installed with many popular Python
libraries for machine learning, data analysis, and visualization, such as TensorFlow, PyTorch,
Matplotlib, and more.

Easy Sharing: Colab notebooks can be easily shared just like Google Docs or Sheets.
Users can provide a link to the notebook, and others can view or edit the code in real-time.

3.4. PYTHON

Python is an interpreted, interactive, object-oriented programming language. It
incorporates modules, exceptions, dynamic typing, very high level dynamic data types, and
classes. It supports multiple programming paradigms beyond object-oriented programming,
such as procedural and functional programming. Python combines remarkable power with very
clear syntax.

Figure 2: Python

It has interfaces to many system calls and libraries, as well as to various window
systems, and is extensible in C or C++. It is also usable as an extension language for applications
that need a programmable interface. Finally, Python is portable: it runs on many Unix variants
including Linux and macOS, and on Windows. Python is a high-level general-purpose
programming language that can be applied to many different classes of problems.

Python is used for server-side web development, software development, mathematics,
and system scripting, and is popular for Rapid Application Development and as a scripting or
glue language to tie existing components together because of its high-level, built-in data
structures, dynamic typing, and dynamic binding. Program maintenance costs are reduced with
Python due to its easily learned syntax and emphasis on readability. Additionally, Python's
support of modules and packages facilitates modular programs and reuse of code. Python is an
open source community language, so numerous independent programmers are continually
building libraries and functionality for it. The language comes with a large standard library that
covers areas such as string processing (regular expressions, Unicode, calculating differences

between files), internet protocols (HTTP, FTP, SMTP, XML-RPC, POP, IMAP), software
engineering (unit testing, logging, profiling, parsing Python code), and operating system
interfaces (system calls, filesystems, TCP/IP sockets).

PYTHON SYNTAX:

Syntax in a programming language is the standard way of expressing values or statements,
which every program written in that language must follow.

To print a statement: print("Hello World")

Output: Hello World

FEATURES OF PYTHON:

Python has plenty of features that make it the most demanding and popular. Let’s read
about a few of the best features that Python has:

 Easy to read and understand


 Interpreted language
 Object-oriented programming language
 Free and open-source
 Versatile and Extensible
 Multi-platform
 Hundreds of libraries and frameworks
 Flexible, supports GUI
 Dynamically typed
 Huge and active community

These features are also reasons to choose Python, whether you are a beginner learning to
program or a developer using it for production work.

PYTHON USE CASES:

 Creating web applications on a server

 Building software workflows

 Connecting to database systems

 Reading and modifying files

 Performing complex mathematics

 Processing big data

 Fast prototyping

 Developing production-ready software

Professionally, Python is great for backend web development, data analysis, artificial
intelligence, and scientific computing. Developers also use Python to build productivity tools,
games, and desktop apps.

Advantages and Disadvantages of Python

Every programming language comes with benefits and limitations as well. These
benefits and limitations can be treated as advantages and disadvantages. Python also has a few
disadvantages over many advantages. Let’s discuss each here:

Advantages of Python:

 Easy to learn, read, and understand


 Versatile and open-source
 Improves productivity
 Supports libraries
 Huge library
 Strong community
 Interpreted language

Disadvantages of Python:

 Restrictions in design
 Memory inefficient
 Weak mobile computing
 Runtime errors
 Slow execution speed

PYTHON AND AI:

AI researchers are fans of Python. Google's TensorFlow, along with other libraries such as
scikit-learn and Keras, establishes a foundation for AI development because of the usability
and flexibility they offer Python users. These libraries, and their availability, are critical because
they enable developers to focus on growth and building.

NUMPY:

NumPy is the fundamental package for scientific computing in Python. It is a Python
library that provides a multidimensional array object, various derived objects (such as masked
arrays and matrices), and an assortment of routines for fast operations on arrays, including
mathematical, logical, shape manipulation, sorting, selecting, I/O, discrete Fourier transforms,
basic linear algebra, basic statistical operations, random simulation and much more.

At the core of the NumPy package is the ndarray object. This encapsulates n-dimensional
arrays of homogeneous data types, with many operations being performed in compiled code
for performance. There are several important differences between NumPy arrays and the
standard Python sequences:

 NumPy arrays have a fixed size at creation, unlike Python lists (which can grow
dynamically). Changing the size of an ndarray will create a new array and delete the
original.
 The elements in a NumPy array are all required to be of the same data type, and thus will
be the same size in memory. The exception: one can have arrays of (Python, including
NumPy) objects, thereby allowing for arrays of different sized elements.
 NumPy arrays facilitate advanced mathematical and other types of operations on large
numbers of data. Typically, such operations are executed more efficiently and with less
code than is possible using Python’s built-in sequences.
 A growing plethora of scientific and mathematical Python-based packages are using
NumPy arrays; though these typically support Python-sequence input, they convert such
input to NumPy arrays prior to processing, and they often output NumPy arrays. In other
words, in order to efficiently use much (perhaps even most) of today’s
scientific/mathematical Python-based software, just knowing how to use Python’s built-
in sequence types is insufficient - one also needs to know how to use NumPy arrays.
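A short, hypothetical example of the ndarray behavior described above:

    import numpy as np

    a = np.array([[1, 2, 3], [4, 5, 6]])   # 2-D ndarray with a single dtype
    print(a.shape, a.dtype)                # (2, 3) int64 (dtype is platform dependent)

    # Vectorized operations run element-wise in compiled code.
    print(a * 2)                           # [[ 2  4  6] [ 8 10 12]]
    print(a.sum(axis=0))                   # column sums: [5 7 9]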

MATPLOTLIB:

Matplotlib is a Python library used to create 2D graphs and plots from Python
scripts. It has a module named pyplot which makes plotting easy by providing features
to control line styles, font properties, axis formatting, etc. It supports a very wide variety of
graphs and plots, namely histograms, bar charts, power spectra, error charts, etc. It is used along
with NumPy to provide an environment that is an effective open-source alternative to MATLAB.
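For instance, a minimal pyplot script (the plotted function is an arbitrary choice):

    import numpy as np
    import matplotlib.pyplot as plt

    x = np.linspace(0, 2 * np.pi, 100)
    plt.plot(x, np.sin(x), label="sin(x)")  # line plot with a legend label
    plt.xlabel("x")
    plt.ylabel("sin(x)")
    plt.legend()
    plt.show()                              # render the figure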

TENSORFLOW:

TensorFlow is an open-source library developed by Google primarily for deep learning
applications. It also supports traditional machine learning. TensorFlow was originally
developed for large numerical computations without keeping deep learning in mind. However,
it proved to be very useful for deep learning development as well, and therefore Google open-
sourced it. TensorFlow accepts data in the form of multi-dimensional arrays of higher
dimensions called tensors. Multi-dimensional arrays are very handy in handling large amounts
of data.

TensorFlow works on the basis of data flow graphs that have nodes and edges. As the
execution mechanism is in the form of graphs, it is much easier to execute TensorFlow code in
a distributed manner across a cluster of computers while using GPUs.

TensorFlow with Python:

Many programmers access TensorFlow by way of the Python programming language.


Python is easy to learn and work with, and it provides convenient ways to express and couple
high-level abstractions. TensorFlow is supported on Python versions 3.7 through 3.11, and
while it may work on earlier versions of Python it's not guaranteed to do so. Nodes and tensors
in TensorFlow are Python objects, and TensorFlow applications are themselves Python
applications. The actual math operations, however, are not performed in Python. The libraries
of transformations that are available through TensorFlow are written as high-performance C++
binaries. Python just directs traffic between the pieces and provides the programming
abstractions to hook them together. High-level work in TensorFlow, creating nodes and layers
and linking them together, relies on the Keras library.
The Keras API is outwardly simple; you can define a basic model with three layers in less than
10 lines of code, and the training code for the same takes just a few more lines.

But if you want to "lift the hood" and do more fine-grained work, such as writing your own
training loop, you can do that.
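As an illustration of that claim, here is a hypothetical three-layer Keras model (the layer widths and input shape are arbitrary):

    from tensorflow import keras

    model = keras.Sequential([
        keras.layers.Dense(64, activation="relu", input_shape=(784,)),
        keras.layers.Dense(64, activation="relu"),
        keras.layers.Dense(10, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy", metrics=["accuracy"])
    # model.fit(x_train, y_train, epochs=5)  # training adds just one more line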


PANDAS:

Pandas is a Python library used for working with data sets. It has functions for
analyzing, cleaning, exploring, and manipulating data. The name "Pandas" has a reference to
both "Panel Data", and "Python Data Analysis" and was created by Wes McKinney in 2008.
Pandas allows us to analyze big data and make conclusions based on statistical theories. Pandas
can clean messy data sets, and make them readable and relevant. Relevant data is very
important in data science. As an open-source software library built on top of Python specifically
for data manipulation and analysis, Pandas offers data structures and operations for powerful,
flexible, and easy-to-use data analysis and manipulation. Pandas is a predominantly used
Python data analysis library. It provides many functions and methods to expedite the data
analysis process. What makes Pandas so common is its functionality, flexibility, and simple
syntax.
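A tiny, hypothetical example of the kind of exploration Pandas makes easy:

    import pandas as pd

    # Hypothetical records of MRI scans and their class labels.
    df = pd.DataFrame({
        "image_id": ["img_001", "img_002", "img_003"],
        "label": ["glioma", "notumor", "pituitary"],
    })
    print(df["label"].value_counts())  # quick class-distribution check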

KERAS:

Keras is an open-source Python library that is known to be beneficial for efficient and
fast experimentation with deep neural networks. This neural network Python library
was initially developed as part of the research project Open-Ended Neuro-Electronic Intelligent
Robot Operating System (ONEIROS). It is now a standalone library and is also supported in the
core library of TensorFlow, which makes it available alongside TensorFlow. Keras is extensively
used for creating Machine Learning and Deep Learning models that assist engineers and
developers in building applications like Yelp, Uber, Square, and Netflix. This Python-written
library augments the neural network creation speed.

It employs an API that enables access to various ML frameworks. Keras is a user-
friendly library that works on the model level. Keras allows us to prototype, research and
deploy deep learning models in an intuitive and streamlined manner. Keras has something for
every user: easy customisability for the academic; out-of-the-box, performant models and
pipelines for industry; and readable, modular code for the student.

Pros:

 It is the most efficient Python library for prototyping.


 It is best used for research applications.
 It is a portable framework.
 It offers high-level abstractions and multi-backend support.
 It makes neural network representation easier.
 It allows more efficient modeling and visualization.
 It comes with an extensible and modular architecture that helps hasten development.

Cons:

 Before you can implement an operation, you will need to leverage a computational
graph. This makes this Python library slow.

4. FEASIBILITY STUDY
A feasibility study for brain tumor detection involves assessing the practicality,
viability, and potential challenges associated with implementing a brain tumor detection
system.

4.1. Market Analysis:

Evaluate the market demand for brain tumor detection systems, considering factors
such as the prevalence of brain tumors, the importance of early detection in patient outcomes,
and the current availability of diagnostic tools. Identify potential stakeholders, including
healthcare providers, hospitals, research institutions, and medical device companies.

4.2. Technical Feasibility:

Assess the availability and quality of brain MRI datasets for training machine learning
models. Consider factors such as data size, diversity, and annotation quality. Evaluate the
suitability of transfer learning and classification techniques for brain tumor detection based on
existing research and available resources. Investigate the computational requirements for
training and deploying machine learning models, considering hardware infrastructure and
computational costs.

4.3. Financial Feasibility:

Estimate the costs associated with data acquisition, preprocessing, model development,
and deployment. Compare the potential costs of developing an in-house brain tumor detection
system versus purchasing a commercially available solution. Conduct a cost-benefit analysis
to determine the return on investment (ROI) and potential revenue streams, such as sales of the
detection system or provision of diagnostic services.

4.4. Regulatory and Ethical Considerations:

Identify regulatory requirements and standards governing medical device development
and deployment, such as FDA approval in the United States or CE marking in the European
Union.

Ensure compliance with patient privacy regulations (e.g., HIPAA) and ethical guidelines for
the use of medical data in research and development. Address potential ethical concerns related
to algorithm bias, patient consent, and the responsible use of AI in healthcare.

4.5. Clinical Validation:

Plan clinical studies or validation trials to assess the performance of the brain tumor
detection system in real-world settings. Define metrics for evaluating the system's accuracy,
sensitivity, specificity, and clinical utility. Collaborate with healthcare providers and
institutions to collect feedback and validate the system's effectiveness in improving patient
outcomes.

4.6. Market Entry Strategy:

Develop a market entry strategy that takes into account competitive landscape, pricing,
distribution channels, and marketing efforts. Identify potential partners or collaborators for
pilot studies, clinical trials, or commercialization efforts. Consider geographical markets, target
customer segments, and reimbursement mechanisms for healthcare services related to brain
tumor detection.

4.7. Risk Assessment:

Identify potential risks and challenges associated with developing and deploying a brain
tumor detection system, such as technical limitations, regulatory hurdles, market competition,
and data privacy concerns. Develop mitigation strategies to address identified risks and ensure
project success.

4.8. Technical and Operational Feasibility:

Even if the financials are looking good and the market is ready, this initiative may not
be something your organization can support. To evaluate operational feasibility, consider any
staffing or equipment requirements this project needs. What organizational resources, including
time, money, and skills, are necessary in order for this project to succeed? Depending on the
project, it may also be necessary to consider the legal impact of the initiative. For example, if
the project involves developing a new patent for your product, you will need to involve your
legal team and incorporate that requirement into the project plan.

4.9. Proposal:

Summarize the findings of the feasibility study, including the technical, financial,
regulatory, and clinical aspects of developing a brain tumor detection system. Make
recommendations regarding the viability and next steps for the project, considering the
opportunities and challenges identified during the assessment. By conducting a comprehensive
feasibility study, stakeholders can make informed decisions about whether to proceed with the
development and implementation of a brain tumor detection system, taking into account
technical, financial, regulatory, and clinical considerations.

5. METHODOLOGY

5.1. ARTIFICIAL INTELLIGENCE

Artificial intelligence, or AI, is technology that enables computers and machines to
simulate human intelligence and problem-solving capabilities. On its own or combined with
other technologies (e.g., sensors, geolocation, robotics) AI can perform tasks that would
otherwise require human intelligence or intervention. Digital assistants, GPS guidance,
autonomous vehicles, and generative AI tools (like OpenAI's ChatGPT) are just a few
examples of AI in the daily news and our daily lives.

As a field of computer science, artificial intelligence encompasses (and is often
mentioned together with) machine learning and deep learning. These disciplines involve the
development of AI algorithms, modeled after the decision-making processes of the human
brain, that can ‘learn’ from available data and make increasingly more accurate classifications
or predictions over time.

Figure 3: Artificial Intelligence

Artificial intelligence has gone through many cycles of hype, but even to skeptics, the
release of ChatGPT seems to mark a turning point. The last time generative AI loomed this
large, the breakthroughs were in computer vision, but now the leap forward is in natural
language processing (NLP). Today, generative AI can learn and synthesize not just human
language but other data types including images, video, software code, and even molecular
structure.

Applications for AI are growing every day. But as the hype around the use of AI tools
in business takes off, conversations around AI ethics and responsible AI become critically
important.
5.2. MACHINE LEARNING

Machine learning (ML) is a branch of artificial intelligence (AI) and computer science
that focuses on using data and algorithms to enable AI to imitate the way that humans learn,
gradually improving its accuracy.

Figure 4: Machine Learning

Machine learning is a subfield of artificial intelligence, which is broadly defined as the
capability of a machine to imitate intelligent human behavior. Artificial intelligence systems
are used to perform complex tasks in a way that is similar to how humans solve problems.

There are four types of machine learning algorithms: supervised, unsupervised, semi-
supervised, and reinforcement learning.

 Supervised Learning:

Supervised learning is a category of machine learning that uses labeled datasets to
train algorithms to predict outcomes and recognize patterns. Unlike unsupervised learning,
supervised learning algorithms are given labeled training data to learn the relationship between
the inputs and the outputs.

Figure 5: Supervised Learning

It is called supervised learning because algorithms that learn from the training dataset
can be thought of as a guide supervising the learning process. We already know the correct
answers; the algorithm iteratively makes predictions on the training data and is corrected by
the guide. Supervised machine learning can be used to make more accurate financial
predictions because it can sort through far more data much faster than a
human, allowing you to make better decisions based on more reliable data.

While many different applications use supervised machine learning, the most common
examples include retail (where it is used to predict customers' purchasing behavior), finance,
health care, spam detection, weather forecasting, image classification, face recognition, and
therapeutic drug interaction.

Some common examples of supervised learning include spam filters, fraud detection
systems, recommendation engines, and image recognition systems.

 Unsupervised Learning:

Unsupervised learning, also known as unsupervised machine learning, uses machine
learning (ML) algorithms to analyze and cluster unlabeled data sets. These algorithms discover
hidden patterns or data groupings without the need for human intervention.

Unsupervised learning's ability to discover similarities and differences in information
makes it the ideal solution for exploratory data analysis, cross-selling strategies, customer
segmentation, and image recognition.

Figure 6: Unsupervised Learning

The main difference between supervised and unsupervised learning: Labeled data. The
main distinction between the two approaches is the use of labeled datasets. To put it simply,
supervised learning uses labeled input and output data, while an unsupervised learning
algorithm does not.

Some of the most common real-world applications of unsupervised learning are: News
Sections: Google News uses unsupervised learning to categorize articles on the same story
from various online news outlets. For example, the results of a presidential election could be
categorized under their label for “US” news. In unsupervised feature learning, features are
learned with unlabeled input data by analyzing the relationship between points in the dataset.
Examples include dictionary learning, independent component analysis, matrix factorization
and various forms of clustering.

Unsupervised learning is beneficial in exploratory data analysis because it automatically
detects structure and relationships in data, providing initial insights that can be used to test
individual hypotheses.

 Semi-Supervised Learning:

Semi-supervised learning is a branch of machine learning that combines supervised
and unsupervised learning by using both labeled and unlabeled data to train artificial
intelligence (AI) models for classification and regression tasks. Semi-supervised learning
allows the algorithm to learn from a small amount of labeled text documents while still
classifying a large amount of unlabeled text documents in the training data.

Semi-supervised learning is applied in a variety of industries, from fintech and education to
entertainment. Image and speech analysis is the most popular example of semi-supervised
learning models: images and audio files are usually not labeled, and labeling them is an
arduous task that is expensive as well.

Figure 7: Semi-supervised Learning

Though semi-supervised learning is generally employed for the same use cases in which one
might otherwise use supervised learning methods, it’s distinguished by various techniques that
incorporate unlabeled data into model training, in addition to the labeled data required for
conventional supervised learning.

Semi-supervised learning methods are especially relevant in situations where obtaining a
sufficient amount of labeled data is prohibitively difficult or expensive, but large amounts of
unlabeled data are relatively easy to acquire. In such scenarios, neither fully supervised nor
unsupervised learning methods will provide adequate solutions.

 Reinforcement Learning:

Reinforcement learning is an area of machine learning. It is about taking suitable
action to maximize reward in a particular situation. Reinforcement Learning (RL) is the science
of decision making: it is about learning the optimal behavior in an environment to obtain
maximum reward. In RL, the data is accumulated from machine learning systems that use a
trial-and-error method. Data is not part of the input that we would find in supervised or
unsupervised machine learning.

Figure 8: Reinforcement Learning

Reinforcement learning uses algorithms that learn from outcomes and decide which
action to take next. After each action, the algorithm receives feedback that helps it determine
whether the choice it made was correct, neutral or incorrect. It is a good technique to use for
automated systems that have to make a lot of small decisions without human guidance.
Reinforcement learning is an autonomous, self-teaching system that essentially learns by trial
and error. It performs actions with the aim of maximizing rewards, or in other words, it is
learning by doing in order to achieve the best outcomes.

Typical ML algorithms divide problems into subproblems and address them
individually without concern for the main problem. However, RL is more about achieving the
long-term goal without dividing the problem into sub-tasks, thereby maximizing the rewards.

Reinforcement learning can be used to create personalized learning experiences for
students. This includes tutoring systems that adapt to student needs, identify knowledge gaps,
and suggest customized learning trajectories to enhance educational outcomes. It is also
applied in natural language processing.

5.3. DEEP LEARNING

Deep learning is a subset of machine learning that uses multi-layered neural networks,
called deep neural networks, to simulate the complex decision-making power of the human
brain. Some form of deep learning powers most of the artificial intelligence (AI) in our lives
today.

Figure 9: Deep Learning

Deep learning is a method in artificial intelligence (AI) that teaches computers to
process data in a way that is inspired by the human brain. Deep learning models can recognize
complex patterns in pictures, text, sounds, and other data to produce accurate insights and
predictions.

The hierarchy of concepts allows the computer to learn complicated concepts by
building them out of simpler ones. If we draw a graph showing how these concepts are built
on top of each other, the graph is deep, with many layers. For this reason, we call this approach
to AI deep learning.

Deep learning networks learn by discovering intricate structures in the data they
experience. By building computational models that are composed of multiple processing layers,
the networks can create multiple levels of abstraction to represent the data.

Deep learning models are best used on large volumes of data, while machine learning
algorithms are generally used for smaller datasets. In fact, using complex DL models on small,
simple datasets culminates in inaccurate results and high variance, a mistake often made by
beginners in the field.

ML is a good choice for simple classification or regression problems. At the same time,
DL is better suited for complex tasks such as image and speech recognition, natural language
processing, and robotics.

One of the biggest advantages of using deep learning approach is its ability to execute
feature engineering by itself. In this approach, an algorithm scans the data to identify features
which correlate and then combine them to promote faster learning without being told to do so
explicitly.

Deep learning changes how you think about representing the problems that you're
solving. With deep learning, data trains the computer, through deep algorithms, to learn on its
own by recognizing patterns using layers of processing.

Deep learning is ideal for predicting outcomes whenever you have a lot of data to learn
from, 'a lot' meaning a huge dataset with hundreds of thousands or, better, millions of data points.
Where you have a huge volume of data like this, the system has what it needs to train itself.

5.4. CONVOLUTION NEURAL NETWORK (CNN)

A convolutional neural network (CNN) is a category of machine learning model,
namely a type of deep learning algorithm well suited to analyzing visual data. CNNs --
sometimes referred to as ConvNets -- use principles from linear algebra, particularly convolution
operations, to extract features and identify patterns within images. Although CNNs are
predominantly used to process images, they can also be adapted to work with audio and other
signal data.

CNNs are particularly useful for finding patterns in images to recognize objects,
classes, and categories. A CNN is a deep learning algorithm, inspired by the animal visual
cortex, that processes images in the form of grid patterns. CNNs are designed to automatically
detect and segment specific objects and to learn spatial hierarchies of features from low- to
high-level patterns.

Figure 10: CNN

1. Input Layers: It’s the layer in which we give input to our model. The number of neurons in
this layer is equal to the total number of features in our data (number of pixels in the case of an
image).

2. Hidden Layer: The input from the input layer is then fed into the hidden layers. There can
be many hidden layers depending upon our model and data size. Each hidden layer can have
a different number of neurons, which is generally greater than the number of features. The
output from each layer is computed by matrix multiplication of the output of the previous layer
with the learnable weights of that layer, followed by the addition of learnable biases and an
activation function, which makes the network nonlinear.

3. Output Layer: The output from the hidden layer is then fed into a logistic function like
sigmoid or softmax which converts the output of each class into the probability score of each
class.

5.5. CNN ARCHITECTURE

Convolutional Neural Network consists of multiple layers like the input layer,
Convolutional layer, Pooling layer, and fully connected layers.

Figure 11: CNN Architecture

CNN LAYERS:

A deep learning CNN consists of three layers: a convolutional layer, a pooling layer,
and a fully connected (FC) layer. The convolutional layer is the first layer while the FC layer
is the last. From the convolutional layer to the FC layer, the complexity of the CNN increases.
It is this increasing complexity that allows the CNN to successively identify larger portions and
more complex features of an image until it finally identifies the object in its entirety.

KEY COMPONENTS OF CNN:

The convolutional neural network is made of four main parts.

 Convolutional layers
 Rectified Linear Unit (ReLU for short)
 Pooling layers
 Fully connected layers
CONVOLUTIONAL LAYER:
The majority of computations happen in the convolutional layer, which is the core
building block of a CNN. A second convolutional layer can follow the initial convolutional
layer. The process of convolution involves a kernel or filter inside this layer moving across the
receptive fields of the image, checking if a feature is present in the image. Over multiple
iterations, the kernel sweeps over the entire image. After each iteration a dot product is
calculated between the input pixels and the filter. The final output from the series of dot
products is known as a feature map or convolved feature. Ultimately, the image is converted
into numerical values in this layer, which allows the CNN to interpret the image and extract
relevant patterns from it.

POOLING LAYER:

Like the convolutional layer, the pooling layer also sweeps a kernel or filter across the
input image. But unlike the convolutional layer, the pooling layer reduces the number of
parameters in the input and also results in some information loss. On the positive side, this
layer reduces complexity and improves the efficiency of the CNN.

FULLY CONNECTED LAYER:

The FC layer is where image classification happens in the CNN based on the features
extracted in the previous layers. Here, fully connected means that all the inputs or nodes from
one layer are connected to every activation unit or node of the next layer. All the layers in the
CNN are not fully connected because it would result in an unnecessarily dense network. It also
would increase losses and affect the output quality, and it would be computationally expensive.

FLATTEN LAYER:

The intuition behind the flatten layer is to convert the data into a one-dimensional array
for feeding into the next layer. We flatten the output of the convolutional layers into a single
long feature vector, which is connected to the final classification model, called the fully
connected layer. For example, a [5, 5, 5] pooled feature map is flattened into a single 1x125
vector. So, the flatten layer converts a multidimensional array into a single-dimensional vector.
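Putting these layer types together, a minimal CNN for small grayscale images might look like the following hypothetical Keras sketch (filter counts and sizes are illustrative):

    from tensorflow.keras import layers, models

    model = models.Sequential([
        layers.Conv2D(32, (3, 3), activation="relu",
                      input_shape=(64, 64, 1)),   # convolution + ReLU activation
        layers.MaxPooling2D((2, 2)),              # pooling shrinks the feature maps
        layers.Conv2D(64, (3, 3), activation="relu"),
        layers.MaxPooling2D((2, 2)),
        layers.Flatten(),                         # feature maps -> 1-D feature vector
        layers.Dense(64, activation="relu"),      # fully connected layer
        layers.Dense(4, activation="softmax"),    # per-class probability scores
    ])
    model.summary()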

How do convolutional neural networks work?

CNNs use a series of layers, each of which detects different features of an input image.
Depending on the complexity of its intended purpose, a CNN can contain dozens, hundreds or
even thousands of layers, each building on the outputs of previous layers to recognize detailed
patterns. The process starts by sliding a filter designed to detect certain features over the input
image, a process known as the convolution operation (hence the name "convolutional neural
network"). The result of this process is a feature map that highlights the presence of the detected
features in the image. This feature map then serves as input for the next layer, enabling a CNN
to gradually build a hierarchical representation of the image.

Initial filters usually detect basic features, such as lines or simple textures. Subsequent
layers' filters are more complex, combining the basic features identified earlier on to recognize
more complex patterns. For example, after an initial layer detects the presence of edges, a
deeper layer could use that information to start identifying shapes. Between these layers, the
network takes steps to reduce the spatial dimensions of the feature maps to improve efficiency
and accuracy. In the final layers of a CNN, the model makes a final decision -- for example,
classifying an object in an image -- based on the output from the previous layers.

Advantages of Convolutional Neural Networks:

 Good at detecting patterns and features in images, videos, and audio signals.
 Provides invariance to translation, rotation, and scaling.
 End-to-end training, no need for manual feature extraction.
 Can handle large amounts of data and achieve high accuracy.

Disadvantages of Convolutional Neural Networks:

 Computationally expensive to train and require a lot of memory.


 Can be prone to overfitting if not enough data or proper regularization is used.
 Requires large amounts of labeled data.
 Interpretability is limited; it’s hard to understand what the network has learned.

ACTIVATION FUNCTION:

Sigmoid:

The importance of the sigmoid is to some degree historical. It’s one of the earliest
activation functions that was used in neural networks. A sigmoid function is any mathematical
function whose graph has a characteristic S-shaped curve or sigmoid curve. A common
example of a sigmoid function is the logistic function. A sigmoid function is a bounded,
differentiable, real function that is defined for all real input values and has a non-negative
derivative at each point and exactly one inflection point.
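In LaTeX notation, the logistic function mentioned above is

    \sigma(x) = \frac{1}{1 + e^{-x}}

which maps every real input into the interval (0, 1), rises monotonically, and has its single inflection point at x = 0, where \sigma(0) = 0.5.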

ReLu:

The Rectified Linear Unit is the most commonly used activation function in deep
learning models. The function returns 0 if it receives any negative input, but for any positive
value x it returns that value back. So it can be written as f(x)=max(0,x). The usage of ReLU
helps to prevent the exponential growth in the computation required to operate the neural
network. If the CNN scales in size, the computational cost of adding extra ReLUs increases
linearly.

ReLU is popular because it's simple and fast, and its ability to introduce non-linearity
into neural networks enables them to learn complex patterns and relationships in data, making
it an essential tool in various applications.

32
5.6. TRANSFER LEARNING:

Transfer learning is the reuse of a pre-trained model on a new problem. It’s currently very
popular in deep learning because it can train deep neural networks with comparatively little
data. This is very useful in the data science field since most real-world problems typically do
not have millions of labeled data points to train such complex models. Transfer of learning
means the use of previously acquired knowledge and skills in new learning or problem-solving
situations. Thereby similarities and analogies between previous and actual learning content and
processes may play a crucial role. The concept of transfer of learning was introduced by E.
Thorndike and R. S. Woodworth. Transfer of learning refers to applying acquired abilities or
knowledge in a new situation.

Transfer learning is a technique in machine learning where a model trained on one task is
used as the starting point for a model on a second task. This can be useful when the second task
is similar to the first task, or when there is limited data available for the second task. By using
the learned features from the first task as a starting point, the model can learn more quickly and
effectively on the second task. This can also help to prevent overfitting, as the model will have
already learned general features that are likely to be useful in the second task.

Figure 12: Transfer Learning

Why do we need Transfer Learning?

Many deep neural networks trained on images have a curious phenomenon in common:
in the early layers of the network, a deep learning model tries to learn a low level of features,
like detecting edges, colours, variations of intensities, etc. Such features appear not to be
specific to a particular dataset or task: no matter what type of image we are processing, whether
for detecting a lion or a car, these low-level features must be detected in both cases. All these
features occur regardless of the exact cost function or image dataset. Thus, the features learned
in one task of detecting lions can be used in other tasks like detecting humans.

How does Transfer Learning work?

Pre-trained Model: Start with a model that has previously been trained for a certain task
using a large set of data. Having been trained on extensive datasets, this model has identified
general features and patterns relevant to numerous related tasks.

 Base Model: The model that has been pre-trained is known as the base model. It is
made up of layers that have utilized the incoming data to learn hierarchical feature
representations.
 Transfer Layers: In the pre-trained model, find a set of layers that capture generic
information relevant to the new task as well as the previous one. Because they are prone
to learning low-level information, these layers are frequently found in the early part of
the network, near the input.
 Fine-tuning: Retrain the chosen layers using the dataset from the new task; this
procedure is called fine-tuning. The goal is to preserve the knowledge from the
pre-training while enabling the model to modify its parameters to better suit the
demands of the current task, as in the sketch below.
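A minimal sketch of this fine-tuning step, assuming a model built as in Section 2.2 with a frozen Inception V3 backbone already trained on the new head (the number of unfrozen layers and the learning rate are illustrative assumptions):

    import tensorflow as tf

    def fine_tune(model, base_model, n_unfrozen=30):
        # Unfreeze only the last n layers of the pre-trained backbone.
        base_model.trainable = True
        for layer in base_model.layers[:-n_unfrozen]:
            layer.trainable = False
        # Recompile with a low learning rate so pre-trained weights change slowly.
        model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-5),
                      loss="categorical_crossentropy", metrics=["accuracy"])
        return model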

5.7. PREPROCESSING


Preprocessing refers to the steps taken to prepare raw data for analysis or modeling. It
involves cleaning, transforming, and organizing data to make it suitable for further analysis or
machine learning tasks. The goal of preprocessing is to enhance the quality of the data and to
ensure that it can be effectively utilized by algorithms. Common preprocessing steps include:

data cleaning, data transformation, normalization and scaling, standardization, encoding
categorical variables, feature engineering, handling outliers, data reduction, data integration,
data sampling, and handling imbalanced data.

5.8. IMAGE DATA AUGMENTATION:

Data augmentation is a technique which is used to increase the number of sample
images in a dataset in order to reduce class imbalance. This technique is used to increase the
number of samples of each image in the dataset to prevent the model from being undertrained.
The diversity of the training set can be increased by applying several different transformation
techniques to our image dataset, such as flipping, rotating, and stretching the images.

Image data augmentation is perhaps the most well-known type of data augmentation
and involves creating transformed versions of images in the training dataset that belong to the
same class as the original image. Transforms include a range of operations from the field of
image manipulation, such as shifts, flips, zooms, and much more. The intent is to expand the
training dataset with new, plausible examples; that is, variations of the training set images
that are likely to be seen by the model. For example, a horizontal flip of a picture of a cat may
make sense, because the photo could have been taken from the left or right. A vertical flip of
the photo of a cat does not make sense and would probably not be appropriate given that the
model is very unlikely to see a photo of an upside down cat. As such, it is clear that the choice
of the specific data augmentation techniques used for a training dataset must be chosen
carefully and within the context of the training dataset and knowledge of the problem domain.
In addition, it can be useful to experiment with data augmentation methods in isolation and in
concert to see whether they result in a measurable improvement to model performance,
perhaps with a small prototype dataset, model, and training run. Modern deep learning
algorithms, such as the convolutional neural network (CNN), can learn features that are
invariant to their location in the image. Nevertheless, augmentation can further aid this
transform-invariant approach to learning and can help the model learn features that are also
invariant to transforms such as left-to-right and top-to-bottom ordering, light levels in
photographs, and more.

Image data augmentation is typically only applied to the training dataset, and not to the
validation or test dataset. This is different from data preparation such as image resizing and
pixel scaling; they must be performed consistently across all datasets that interact with the
model.
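As a hedged illustration, the following Keras sketch configures random transforms on the
training generator only, while the validation/test generator applies the same pixel rescaling
with no augmentation; the specific transform parameters are illustrative, not the settings used
in this project.

from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Augmentation is configured on the training generator only.
train_datagen = ImageDataGenerator(rescale=1./255,
                                   rotation_range=15,      # small random rotations
                                   width_shift_range=0.1,  # horizontal shifts
                                   height_shift_range=0.1, # vertical shifts
                                   zoom_range=0.1,
                                   horizontal_flip=True)
# The validation/test generator rescales pixels but applies no random transforms.
test_datagen = ImageDataGenerator(rescale=1./255)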

6. SYSTEM MODEL

6.1. SYSTEM PERFORMANCE

[Block diagram: MRI images → Pre-Processing → Augmentation → Feature extraction →
CNN Classifier → Glioma / Meningioma / Notumor / Pituitary]

Figure 13: System Performance

6.2. DATASET:

Datasets are an important part of any machine learning or deep learning approach. The
quality of the available data helps develop, train, and improve the algorithms. In medical
imaging applications, the available data must be validated and labeled by experts in order to
be useful in any development. This section presents the dataset used in this work on deep
learning for brain tumor detection.

The dataset used here, provided by Kaggle, contains 1311 human brain MRI images classified
into 4 classes: glioma, meningioma, no tumor, and pituitary. The images come in different
sizes; after pre-processing and removing the extra margins, they are resized to the desired
input size. This pre-processing step improves the accuracy of the model.

GLIOMA

A glioma is a tumor that forms when glial cells grow out of control. Normally, these
cells support nerves and help your central nervous system work. Gliomas usually grow in the
brain, but can also form in the spinal cord. Gliomas are malignant (cancerous), but some can
be very slow growing.

Figure 14: Glioma

MENINGIOMA

A meningioma is a tumor that grows from the membranes that surround the brain and
spinal cord, called the meninges. A meningioma is not a brain tumor, but it may press on the
nearby brain, nerves and vessels. Meningioma is the most common type of tumor that forms in
the head.

Figure 15: Meningioma

NOTUMOR

Figure 16: Notumor

PITUITARY

The pituitary gland is referred to as the “master gland” because it monitors and regulates
many bodily functions through the hormones that it produces, including growth,
sexual/reproductive development and function, and the activity of other glands (the thyroid
gland, adrenal glands, and gonads).

Figure 17: Pituitary

6.3. PREPROCESSING:

Preprocessing plays a crucial role in preparing medical images, such as brain MRI scans, for
input into transfer learning-based models for brain tumor detection. Effective preprocessing
techniques can enhance the quality of the data, reduce noise, standardize the input, improve
the performance of the neural network, and ensure robustness against variations in image
characteristics. MRI scans often come in various dimensions, and resizing them to a
consistent size is essential for compatibility with the neural network architecture; resizing
also helps reduce computational complexity and memory requirements during training and
inference. Here are some common preprocessing steps for brain tumor detection:

Image Rescaling and Standardization: MRI images may vary in resolution and size. Rescale
the images to a uniform size to ensure consistency across the dataset. Additionally, standardize
the intensity values of the images to a common range (e.g., [0, 1] or [-1, 1]) to make the model
training more stable.

Normalization: Normalize the pixel intensity values of the MRI images to have a mean of zero
and a standard deviation of one. Normalization helps in reducing the effect of variations in
imaging conditions and enhances the convergence of the neural network during training.

Noise Reduction: MRI images can contain noise, artifacts, or inconsistencies due to various
factors such as scanner imperfections or patient motion. Apply noise reduction techniques such
as Gaussian blurring or median filtering to improve image quality and remove unwanted noise.

Skull Stripping: In brain MRI images, the skull and surrounding tissues may not be relevant
for tumor detection and can introduce noise. Perform skull stripping to remove non-brain
tissues and focus the analysis on the relevant regions of the brain.

Image Registration: MRI scans from different patients may have variations in position and
orientation. Register or align the images to a common anatomical space to ensure consistency
in the data and improve the model's ability to generalize across different scans.
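A compact sketch of the resizing, noise-reduction, and intensity-standardization steps is given
below, using OpenCV and NumPy. The 224x224 target size matches the generators used later
in this work; the median-filter kernel size and the helper name preprocess_mri are illustrative
assumptions, not the exact code of this project.

import cv2
import numpy as np

def preprocess_mri(path, size=(224, 224)):
    """Resize an MRI slice, reduce noise, and standardize its intensities."""
    img = cv2.imread(path)                         # load image as uint8 BGR
    img = cv2.resize(img, size)                    # uniform size for the network
    img = cv2.medianBlur(img, 3)                   # simple noise reduction
    img = img.astype(np.float32)
    img = (img - img.mean()) / (img.std() + 1e-7)  # zero mean, unit variance
    return img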

6.4. DATA AUGMENTATION

Augmentor:

Augmentation using libraries like Augmentor can be a powerful tool to artificially increase
the size and diversity of your dataset, which is especially beneficial when working
with medical imaging data like MRI scans. Augmentor is a Python library that provides a
simple interface for augmenting images with a variety of transformations. Once the
augmentation process is complete, you will have a larger dataset containing the original images
along with their augmented versions. You can then use this augmented dataset for training your
transfer learning-based brain tumor detection model. By augmenting your dataset with
transformations such as rotation, flipping, and mirroring, you introduce variability into the
training data, which can help improve the generalization and robustness of your model.
Additionally, data augmentation can help address issues such as class imbalance and limited
dataset size, ultimately leading to better performance of your brain tumor detection model.

6.5. FEATURE EXTRACTION :

EfficientNetB3:

EfficientNet is a convolutional neural network architecture and scaling method that uniformly
scales all dimensions of depth, width, and resolution using a compound coefficient. The stem
layer is the initial part of the network, performing the first convolutions that process the input
image before the deeper layers; it starts with a Conv2D layer (3x3 convolution with stride 2).
EfficientNet aims to improve performance while increasing computational efficiency by
reducing the number of parameters and FLOPs (floating-point operations). The B3 variant
used in this work was trained on ImageNet-1k at a resolution of 300x300. The architecture
was introduced in the paper EfficientNet: Rethinking Model Scaling for Convolutional Neural
Networks by Mingxing Tan and Quoc V. Le.
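For reference, the compound scaling rule from the EfficientNet paper can be written as
follows, where φ is the user-chosen compound coefficient and α, β, γ are constants found by a
grid search on the baseline network (the paper reports α ≈ 1.2, β ≈ 1.1, γ ≈ 1.15):

depth: d = α^φ        width: w = β^φ        resolution: r = γ^φ

subject to α · β² · γ² ≈ 2 and α ≥ 1, β ≥ 1, γ ≥ 1, so that scaling by φ roughly doubles the
FLOPs.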

Figure 18: EfficientNetB3

6.6. CNN CLASSIFIER:

A classifier in machine learning is an algorithm that automatically orders or categorizes data
into one or more of a set of “classes.” In this work, the classifier scans brain MRI images and
assigns each one a class label: glioma, meningioma, notumor, or pituitary. It is a supervised
learning problem, where a model is trained on a labeled dataset of images and their
corresponding class labels, and then used to predict the class label of new, unseen images.
CNNs perform this classification by applying learned filters to the input. In this section we
explore the implementation of a Convolutional Neural Network (CNN) classifier for brain
tumor detection, using the Keras library in Python.

The CNN classifier consists of the following layers:

Conv2D: A convolutional layer with 32 filters, kernel size 3x3, and activation function ReLU.

MaxPooling2D: A max pooling layer with pool size 2x2.

Conv2D: A convolutional layer with 64 filters, kernel size 3x3, and activation function ReLU.

MaxPooling2D: A max pooling layer with pool size 2x2.

Flatten: A flatten layer to flatten the output of the convolutional and pooling layers.

Dense: A fully connected layer with 128 units and activation function ReLU.

Dropout: A dropout layer with dropout rate 0.2.

Dense: A fully connected layer with 4 units (one per tumor class) and activation function
softmax. A minimal Keras sketch of this layer stack is given below.
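The following sketch builds the layer stack just described, assuming 224x224 RGB inputs
and the four tumor classes; it mirrors the list above rather than the exact model trained in
Section 7.1.

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D, MaxPooling2D, Flatten, Dense, Dropout

cnn = Sequential([
    Conv2D(32, (3, 3), activation='relu', input_shape=(224, 224, 3)),
    MaxPooling2D(pool_size=(2, 2)),
    Conv2D(64, (3, 3), activation='relu'),
    MaxPooling2D(pool_size=(2, 2)),
    Flatten(),
    Dense(128, activation='relu'),
    Dropout(0.2),
    Dense(4, activation='softmax')  # one output unit per tumor class
])
cnn.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])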

7. SYSTEM IMPLEMENTATION

7.1. SAMPLE CODING

Import needed modules

!pip install tensorflow==2.9.1

# import system libs


import os
import time
import shutil
import pathlib
import itertools
from PIL import Image
# import data handling tools
import cv2
import numpy as np
import pandas as pd
import seaborn as sns
sns.set_style('darkgrid')
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix, classification_report
# import Deep learning Libraries
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras.models import Sequential
from tensorflow.keras.optimizers import SGD, Adam, Adamax
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tensorflow.keras.layers import (Conv2D, MaxPooling2D, Flatten, Dense, Activation,
                                     Dropout, BatchNormalization)
from tensorflow.keras import regularizers

# Ignore Warnings
import warnings
warnings.filterwarnings("ignore")
print ('modules loaded')

Data Preprocessing
# Generate data paths with labels
train_data_dir = '/content/drive/MyDrive/DataSet'
filepaths = []
labels = []
folds = os.listdir(train_data_dir)
for fold in folds:
    foldpath = os.path.join(train_data_dir, fold)
    filelist = os.listdir(foldpath)
    for file in filelist:
        fpath = os.path.join(foldpath, file)
        filepaths.append(fpath)
        labels.append(fold)
# Concatenate data paths with labels into one dataframe
Fseries = pd.Series(filepaths, name= 'filepaths')
Lseries = pd.Series(labels, name='labels')
train_df = pd.concat([Fseries, Lseries], axis= 1)
train_df

Data Augmentation:
!pip install Augmentor
import Augmentor
p = Augmentor.Pipeline(r'/content/drive/MyDrive/meningioma')
p.zoom(probability=0.3, min_factor=0.8, max_factor=1.5)

p.flip_top_bottom(probability=0.3)
p.random_brightness(probability=0.3, min_factor=0.3, max_factor=1.2 )
p.random_distortion(probability=1, grid_width=4, grid_height=4, magnitude=8)
p.sample(410)
# Generate data paths with labels
test_data_dir = '/content/drive/MyDrive/DataSet'
filepaths = []
labels = []
folds = os.listdir(test_data_dir)
for fold in folds:
    foldpath = os.path.join(test_data_dir, fold)
    filelist = os.listdir(foldpath)
    for file in filelist:
        fpath = os.path.join(foldpath, file)
        filepaths.append(fpath)
        labels.append(fold)
# Concatenate data paths with labels into one dataframe
Fseries = pd.Series(filepaths, name= 'filepaths')
Lseries = pd.Series(labels, name='labels')
ts_df = pd.concat([Fseries, Lseries], axis= 1)

Split dataframe into train, valid, and test


# valid and test dataframe
valid_df, test_df = train_test_split(ts_df, train_size= 0.5, shuffle= True, random_state= 123)

Create image data generator


# cropped image size
batch_size = 16
img_size = (224, 224)
channels = 3

img_shape = (img_size[0], img_size[1], channels)

tr_gen = ImageDataGenerator()
ts_gen = ImageDataGenerator()

train_gen = tr_gen.flow_from_dataframe(train_df, x_col='filepaths', y_col='labels',
                                       target_size=img_size, class_mode='categorical',
                                       color_mode='rgb', shuffle=True, batch_size=batch_size)

valid_gen = ts_gen.flow_from_dataframe(valid_df, x_col='filepaths', y_col='labels',
                                       target_size=img_size, class_mode='categorical',
                                       color_mode='rgb', shuffle=True, batch_size=batch_size)

test_gen = ts_gen.flow_from_dataframe(test_df, x_col='filepaths', y_col='labels',
                                      target_size=img_size, class_mode='categorical',
                                      color_mode='rgb', shuffle=False, batch_size=batch_size)

Show sample from train data


g_dict = train_gen.class_indices  # dictionary {'class name': index}
classes = list(g_dict.keys())  # list of class names (the dictionary's keys)
images, labels = next(train_gen)  # get one batch of samples from the generator

plt.figure(figsize= (20, 20))


for i in range(16):
    plt.subplot(4, 4, i + 1)
    image = images[i] / 255  # scale pixel values to the range 0-1 for display
    plt.imshow(image)
    index = np.argmax(labels[i])  # index of the image's class
    class_name = classes[index]  # class name of the image
    plt.title(class_name, color='blue', fontsize=12)
    plt.axis('off')
plt.show()

Model Structure
Generic model creation

# Create Model Structure


img_size = (224, 224)
channels = 3
img_shape = (img_size[0], img_size[1], channels)
class_count = len(list(train_gen.class_indices.keys()))  # number of classes for the final dense layer
# create the pre-trained model (you can build on a pretrained model such as EfficientNet, VGG, or ResNet)
# we will use EfficientNetB3 from the EfficientNet family.
base_model = tf.keras.applications.efficientnet.EfficientNetB3(include_top= False, weights=
"imagenet", input_shape= img_shape, pooling= 'max')
# base_model.trainable = False
model = Sequential([
    base_model,
    # BatchNormalization(axis=-1, momentum=0.99, epsilon=0.001),
    Dense(64, kernel_regularizer=regularizers.l2(l=0.016),
          activity_regularizer=regularizers.l1(0.016),
          bias_regularizer=regularizers.l1(0.016), activation='relu'),
    Dropout(rate=0.35, seed=123),
    Dense(class_count, activation='softmax')
])
model.compile(Adam(learning_rate= 0.001), loss= 'categorical_crossentropy', metrics=
['accuracy'])
model.summary()

Train model
epochs = 10 # number of all epochs in training
history = model.fit(x= train_gen, epochs= epochs, verbose= 1, validation_data= valid_gen,
validation_steps= None, shuffle= False)

Display model performance


# Define needed variables
tr_acc = history.history['accuracy']
tr_loss = history.history['loss']
val_acc = history.history['val_accuracy']
val_loss = history.history['val_loss']
index_loss = np.argmin(val_loss)
val_lowest = val_loss[index_loss]
index_acc = np.argmax(val_acc)
acc_highest = val_acc[index_acc]
Epochs = [i+1 for i in range(len(tr_acc))]
loss_label = f'best epoch= {str(index_loss + 1)}'
acc_label = f'best epoch= {str(index_acc + 1)}'
# Plot training history
plt.figure(figsize= (20, 8))
plt.style.use('fivethirtyeight')
plt.subplot(1, 2, 1)
plt.plot(Epochs, tr_loss, 'r', label= 'Training loss')
plt.plot(Epochs, val_loss, 'g', label= 'Validation loss')
plt.scatter(index_loss + 1, val_lowest, s= 150, c= 'blue', label= loss_label)
plt.title('Training and Validation Loss')
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.legend()

plt.subplot(1, 2, 2)
plt.plot(Epochs, tr_acc, 'r', label= 'Training Accuracy')
plt.plot(Epochs, val_acc, 'g', label= 'Validation Accuracy')
plt.scatter(index_acc + 1 , acc_highest, s= 150, c= 'blue', label= acc_label)
plt.title('Training and Validation Accuracy')
plt.xlabel('Epochs')
plt.ylabel('Accuracy')
plt.legend()
plt.tight_layout()
plt.show()

Evaluate model
ts_length = len(test_df)
test_batch_size = max(sorted([ts_length // n for n in range(1, ts_length + 1) if ts_length%n ==
0 and ts_length/n <= 80]))
test_steps = ts_length // test_batch_size
train_score = model.evaluate(train_gen, steps= test_steps, verbose= 1)
valid_score = model.evaluate(valid_gen, steps= test_steps, verbose= 1)
test_score = model.evaluate(test_gen, steps= test_steps, verbose= 1)
print("Train Loss: ", train_score[0])
print("Train Accuracy: ", train_score[1])
print('-' * 20)
print("Validation Loss: ", valid_score[0])
print("Validation Accuracy: ", valid_score[1])
print('-' * 20)
print("Test Loss: ", test_score[0])
print("Test Accuracy: ", test_score[1])

Get Predictions
preds = model.predict(test_gen)  # predict_generator is deprecated in TF 2.x; predict accepts generators
y_pred = np.argmax(preds, axis=1)

Confusion Matrices and Classification Report


g_dict = test_gen.class_indices
classes = list(g_dict.keys())

# Confusion matrix
cm = confusion_matrix(test_gen.classes, y_pred)
plt.figure(figsize= (10, 10))
plt.imshow(cm, interpolation= 'nearest', cmap= plt.cm.Blues)
plt.title('Confusion Matrix')
plt.colorbar()
tick_marks = np.arange(len(classes))
plt.xticks(tick_marks, classes, rotation= 45)
plt.yticks(tick_marks, classes)
thresh = cm.max() / 2.
for i, j in itertools.product(range(cm.shape[0]), range(cm.shape[1])):
    plt.text(j, i, cm[i, j], horizontalalignment='center',
             color='white' if cm[i, j] > thresh else 'black')
plt.tight_layout()
plt.ylabel('True Label')
plt.xlabel('Predicted Label')
plt.show()

# Classification report
print(classification_report(test_gen.classes, y_pred, target_names= classes))

8. RESULT AND DESCRIPTION
8.1. PERFORMANCE MATRICES

When performing brain tumor detection with four classes (e.g., different tumor types and
non-tumor cases) using machine learning models, specific performance metrics are used to
evaluate the model's effectiveness in classifying these categories. Accuracy measures the
proportion of correctly predicted samples (all classes combined) out of the total number of
samples. To evaluate the performance of a classification model, different metrics are used,
some of which are as follows:

 Accuracy
 Confusion Matrix
 Precision
 Recall
 F-Score
 AUC-ROC (Area Under the ROC Curve)

ACCURACY:

The accuracy metric is one of the simplest Classification metrics to implement, and it
can be determined as the number of correct predictions to the total number of predictions.
Accuracy measures the overall correctness of the model on the dataset.

It can be formulated as:

Accuracy = Number of correct predictions / Total number of predictions

To implement an accuracy metric, we can compare ground truth and predicted values
in a loop, or we can also use the scikit-learn module for this.

Although it is simple to use and implement, it is suitable only for cases where an equal number
of samples belong to each class.
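As a small illustration with hypothetical label arrays, accuracy can be computed with the
scikit-learn module mentioned above:

from sklearn.metrics import accuracy_score

y_true = [0, 1, 2, 3, 0, 1]  # hypothetical ground-truth class indices
y_pred = [0, 1, 2, 2, 0, 1]  # hypothetical model predictions
print(accuracy_score(y_true, y_pred))  # 5 correct out of 6 predictions = 0.833...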

When to Use Accuracy?

It is good to use the Accuracy metric when the target variable classes in the data are
approximately balanced. For example, if 60% of the images in a fruit dataset are of apples
and 40% are of mangoes, the classes are reasonably balanced, and accuracy is a meaningful
measure of how well the model distinguishes apples from mangoes.

When not to use Accuracy?

It is recommended not to use the Accuracy measure when the target variable majorly
belongs to one class. For example, Suppose there is a model for a disease prediction in which,
out of 100 people, only five people have a disease, and 95 people don't have one. In this case,
if our model predicts every person with no disease (which means a bad prediction), the
Accuracy measure will be 95%, which is not correct.

CONFUSION MATRIX:

The Confusion Matrix is a visual assessment method for classification models. The columns
of a Confusion Matrix represent the predicted class results, whereas the rows represent the
real class results. This matrix includes all the raw data regarding a classification model's
decisions on a specified data collection and is used to determine how accurate a model is. For
a binary problem, the confusion matrix is a 2 x 2 matrix that reports the number of true
positives (TP), true negatives (TN), false positives (FP), and false negatives (FN):

                    Predicted positive    Predicted negative
Actual positive            TP                    FN
Actual negative            FP                    TN

Precision, recall, and F-measure, which are commonly utilized in the text mining and
machine learning communities, were used to evaluate the algorithms. The four types of
classified items are: true positives (TP - items correctly labeled as belonging to the class),
false positives (FP - items falsely labeled as belonging to the class), false negatives (FN -
items incorrectly labeled as not belonging to the class), and true negatives (TN - items
correctly labeled as not belonging to the class).

RECALL:

Recall is determined from the number of true positives and false negatives. It is similar to the
Precision metric; however, it aims to calculate the proportion of actual positives that were
identified correctly. It is calculated as the True Positives (predictions that are actually
positive) divided by the total number of actual positives, whether correctly predicted as
positive or incorrectly predicted as negative (True Positives plus False Negatives).

Recall measures the ability of the model to correctly identify instances of a class.

Recall = TP for a class / (TP for a class + FN for a class)

PRECISION:

The precision metric is used to overcome the limitation of Accuracy. Precision determines
the proportion of positive predictions that were actually correct. It is calculated as the True
Positives divided by the total number of positive predictions (True Positives plus False
Positives).

Precision measures the accuracy of positive predictions (tumor predictions) for each class.

Precision = TP for a class / (TP for a class + FP for a class)
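As a hedged illustration with hypothetical label arrays (0 = glioma, 1 = meningioma,
2 = notumor, 3 = pituitary), per-class precision and recall can be computed with scikit-learn:

from sklearn.metrics import precision_score, recall_score

y_true = [0, 0, 1, 1, 2, 2, 3, 3]  # hypothetical ground-truth class indices
y_pred = [0, 1, 1, 1, 2, 2, 3, 0]  # hypothetical model predictions
print(precision_score(y_true, y_pred, average=None))  # per-class precision
print(recall_score(y_true, y_pred, average=None))     # per-class recall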

When to use Precision and Recall?

From the above definitions of Precision and Recall, we can say that recall describes the
performance of a classifier with respect to false negatives, whereas precision describes its
performance with respect to false positives. So, if we want to minimize false negatives, Recall
should be as close to 100% as possible, and if we want to minimize false positives, Precision
should be as close to 100% as possible. In simple words, maximizing precision minimizes FP
errors, and maximizing recall minimizes FN errors.

The measure that combines precision and recall is known as the F-measure, given as:

Fβ = (1 + β²) × (precision × recall) / (β² × precision + recall)

where β denotes recall's relative importance. A value of β = 1 (which is often used) means
that recall and precision are of equal importance. A lower value implies that precision is
more important, whereas a higher value indicates that recall is more important.

F-SCORES:

F-score or F1 Score is a metric to evaluate a binary classification model on the basis of
predictions that are made for the positive class. It is calculated with the help of Precision and
Recall and is a single score that represents both. The F1 Score is the harmonic mean of
precision and recall, assigning equal weight to each of them.

The formula for calculating the F1 score is:

F1 = 2 × (Precision × Recall) / (Precision + Recall)
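For example, taking the glioma row of the classification report in Section 8.4 (precision 0.97,
recall 1.00): F1 = 2 × (0.97 × 1.00) / (0.97 + 1.00) ≈ 0.98, which matches the reported value.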
When to use F-Score?

As the F-score makes use of both precision and recall, it should be used when both are
important for evaluation but one (precision or recall) is slightly more important to consider
than the other, for example when false negatives are comparatively more costly than false
positives, or vice versa.

ROC CURVE AND AUC-ROC:

The ROC curve is useful for understanding the trade-off between the true positive rate and
the false positive rate across different decision thresholds. By leveraging these performance
metrics, researchers and clinicians can assess the accuracy, sensitivity, and specificity of
machine learning models in multi-class brain tumor detection tasks, facilitating better
decision-making and model optimization for clinical applications.

Sometimes we need to visualize the performance of the classification model on charts; then,
we can use the AUC-ROC curve. It is one of the popular and important metrics for evaluating
the performance of the classification model.

Firstly, let's understand the ROC (Receiver Operating Characteristic) curve. ROC represents
a graph showing the performance of a classification model at different threshold levels. The
curve is plotted between two parameters, which are:

o True Positive Rate

o False Positive Rate

TPR, or True Positive Rate, is a synonym for Recall and can be calculated as:

TPR = TP / (TP + FN)

FPR, or False Positive Rate, can be calculated as:

FPR = FP / (FP + TN)

To calculate the value at any point on a ROC curve, we could evaluate a logistic regression
model multiple times with different classification thresholds, but this would not be very
efficient. Instead, an efficient aggregate measure known as AUC is used.

AUC: Area Under the ROC Curve:

AUC stands for Area Under the ROC Curve. As its name suggests, AUC calculates the
two-dimensional area under the entire ROC curve, aggregating performance across all
classification thresholds. The value of AUC ranges from 0 to 1: a model whose predictions
are 100% wrong has an AUC of 0.0, whereas a model whose predictions are 100% correct
has an AUC of 1.0.
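As a sketch, a macro-averaged one-vs-rest AUC for the four-class problem can be computed
with scikit-learn, assuming preds holds the per-class probabilities returned by the model and
test_gen.classes the true label indices, as in the sample code of Section 7.1:

from sklearn.metrics import roc_auc_score

# Macro-averaged one-vs-rest AUC across the four tumor classes.
auc = roc_auc_score(test_gen.classes, preds, multi_class='ovr', average='macro')
print('Macro one-vs-rest AUC:', auc)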

When to Use AUC:

AUC should be used to measure how well the predictions are ranked rather than their
absolute values. Moreover, it measures the quality of predictions of the model without
considering the classification threshold.

When not to use AUC:

As AUC is scale-invariant, which is not always desirable, it is not preferable when calibrated
probability outputs are needed. Further, AUC is not a useful metric when there are wide
disparities in the cost of false negatives versus false positives and it is important to minimize
one particular type of classification error.

8.2. TEST CASE REPORT

Figure 19: Training and Testing - Loss and Accuracy

The purpose of the test case report is to evaluate the performance of the brain tumor
detection model using specific test cases. Model Details: information about the machine
learning model used, including the architecture, dataset, and training methodology. The
specific test cases designed to evaluate the model's performance are: Test Case 1 - detection
of Glioblastoma (Class 1); Test Case 2 - differentiation between Meningioma (Class 2) and
Pituitary Tumor (Class 3); Test Case 3 - overall accuracy assessment across all classes
(tumor and non-tumor). The report also documents the dataset used for testing, including the
number of samples per class and any preprocessing steps applied, together with the
performance metrics selected for evaluation (e.g., accuracy, precision, recall, F1-score).
Procedure: outline the steps followed during the test execution phase, beginning with loading
the trained model.

Processing test images, making predictions, and evaluating performance. Performance
Metrics: the quantitative results obtained from the test cases, namely accuracy, precision,
recall, and F1-score for each class, together with a confusion matrix illustrating true
positives, false positives, true negatives, and false negatives. Visualizations: visual
representations of model predictions (e.g., overlaying predicted tumor regions on brain MRI
scans).

8.3. CONFUSION MATRIX

Figure 20: Confusion Matrix

To evaluate the performance of a brain tumor detection model, the confusion matrix is
a critical tool that provides a detailed breakdown of model predictions versus actual outcomes.
This matrix is particularly useful for understanding the types of errors made by the model. A
confusion matrix is a table that summarizes the performance of a classification model. It
organizes predictions into four categories based on the true and predicted classes:

True Positive (TP): Model correctly predicts the presence of a tumor.

False Positive (FP): Model incorrectly predicts the presence of a tumor (false alarm).

True Negative (TN): Model correctly predicts the absence of a tumor.

False Negative (FN): Model incorrectly predicts the absence of a tumor (miss).

The confusion matrix visualizes the model's performance as a heatmap. Each cell in the
matrix represents the count of instances for a specific combination of predicted and true labels.
Analyzing the confusion matrix allows you to understand the strengths and weaknesses of your
brain tumor detection model. For instance, a high number of false positives (FP) could indicate
a need to adjust the model's threshold, while a high number of false negatives (FN) may require
improvements in feature selection or model architecture. Use the confusion matrix in
conjunction with other evaluation metrics to optimize your model for accurate brain tumor
detection.

8.4. CLASSIFICATION REPORT

              precision    recall  f1-score   support

      glioma       0.97      1.00      0.98       213
  meningioma       0.97      0.96      0.97       200
     notumor       1.00      0.97      0.98       209
   pituitary       0.99      1.00      1.00       198

    accuracy                           0.98       820
   macro avg       0.98      0.98      0.98       820
weighted avg       0.98      0.98      0.98       820

For brain tumor detection using machine learning or deep learning techniques, a
classification report can provide valuable insights into the performance of your model. This
classification report presents the performance evaluation of a brain tumor detection model. The
goal of this model is to accurately classify brain MRI images into four categories: glioma,
meningioma, notumor, and pituitary.

 Precision: The model achieved a precision of 0.97 for detecting glioma cases, 0.97
for meningioma cases, 1.00 for notumor cases, and 0.99 for pituitary cases.
 Recall: The recall (sensitivity) for glioma, meningioma, notumor, and pituitary cases
is 1.00, 0.96, 0.97, and 1.00, respectively. For example, the model correctly identifies
96% of actual meningioma cases and 97% of actual non-tumor cases.
 F1-Score: The F1-score, which considers both precision and recall, is 0.98 for glioma
cases, 0.97 for meningioma cases, 0.98 for notumor cases, and 1.00 for pituitary cases.
 Accuracy: The overall accuracy of the model is 98%, which suggests that the model
performs well in classifying brain MRI images into the glioma, meningioma, notumor,
and pituitary categories.

9. CONCLUSION

To reduce global death rates, early diagnosis of brain cancers is essential. Brain tumors can be
difficult to identify because of their complex architecture, size variability, and unusual forms.
In our research, we used a large collection of MRI scans of brain tumors to overcome this
obstacle, applying transfer learning and fine-tuning to detect glioma, meningioma, and
pituitary brain tumors in MRI data. Our proposed CNN model demonstrates the substantial
influence of deep learning models on tumor identification and shows how these models have
changed this field. Using a large collection of MRI images, we obtained encouraging findings
in the diagnosis of brain cancers, and we used a wide range of performance measures to assess
the effectiveness of our deep learning models. Compared to standard categorization
techniques, the proposed technology not only detects the existence of brain tumors but also
pinpoints their precise location within the MRI images. This localization allows for
fine-grained categorization without laborious human interpretation. The proposed solution, in
contrast to segmentation techniques, uses a small amount of storage space and has a low
computational cost, making it portable across a variety of systems. The suggested approach
not only achieved better accuracy than prior efforts using bounding box detection techniques,
it also outperformed those techniques when applied to meningioma, glioma, and pituitary
brain cancers. Image data augmentation improved the results and helped tackle the problem
even though the dataset was relatively small. Using the available data, the proposed method
for classifying brain tumors in medical images achieved an accuracy of 99.5% in our analysis.

10. REFERENCE
1. Lee D.Y. Roles of mTOR signaling in brain development. Exp. Neurobiol. 2015;24:177–185.
doi: 10.5607/en.2015.24.3.177.
2. Zahoor M.M., Qureshi S.A., Bibi S., Khan S.H., Khan A., Ghafoor U., Bhutta M.R. A New
Deep Hybrid Boosted and Ensemble Learning-Based Brain Tumor Analysis Using MRI.
Sensors. 2022;22:2726. doi: 10.3390/s22072726.
3. Arabahmadi M., Farahbakhsh R., Rezazadeh J. Deep Learning for Smart healthcare—A
Survey on Brain Tumor Detection from Medical Imaging. Sensors. 2022;22:1960. doi:
10.3390/s22051960.
4. Gore D.V., Deshpande V. Comparative study of various techniques using deep learning for
brain tumor detection. Proceedings of the 2020 IEEE International Conference for Emerging
Technology (INCET); Belgaum, India. 5–7 June 2020; pp. 1–4.
5. Sapra P., Singh R., Khurana S. Brain tumor detection using neural network. Int. J. Sci. Mod.
Eng. 2013;1:2319–6386.
6. Soomro T.A., Zheng L., Afifi A.J., Ali A., Soomro S., Yin M., Gao J. Image Segmentation
for MR Brain Tumor Detection Using Machine Learning: A Review. IEEE Rev. Biomed. Eng.
2022;16:70–90. doi: 10.1109/RBME.2022.3185292.
7. Yavuz B.B., Kanyilmaz G., Aktan M. Factors affecting survival in glioblastoma patients
below and above 65 years of age: A retrospective observational study. Indian J. Cancer.
2021;58:210.
8. Fahmideh M.A., Scheurer M.E. Pediatric brain tumors: Descriptive epidemiology, risk
factors, and future directions. Cancer Epidemiol. Prev. Biomark. 2021;30:813–821. doi:
10.1158/1055-9965.EPI-20-1443.
9. Nodirov J., Abdusalomov A.B., Whangbo T.K. Attention 3D U-Net with Multiple Skip
Connections for Segmentation of Brain Tumor Images. Sensors. 2022;22:6501. doi:
10.3390/s22176501.
10. Shafi A.S.M., Rahman M.B., Anwar T., Halder R.S., Kays H.E. Classification of brain
tumors and auto-immune disease using ensemble learning. Inform. Med. Unlocked.
2021;24:100608. doi: 10.1016/j.imu.2021.100608.
