
University of Gondar

College of Informatics
Department of Information Science
Data Science and Analytics Postgraduate Program
Course name: Computer Vision
Title: Brain Tumor Classification by Transfer Learning

Prepared By: 1. Mohammed Seid

2. Yewoinhareg Girma

3. Azeb Gezahegn

Submitted to Dr. Million

Date of submission: April 28, 2023

Contents
Abstract
1. Introduction
2. Objective
2.1. General objective
2.2. Specific objective
3. Literature review
4. About the data
5. Methodology
6. Methods
6.1. Preprocessing
6.1.1. Resizing
6.1.2. Image-to-array
6.1.3. Image normalization
6.2. Data augmentation
6.3. Train test splitting
6.4. Transfer learning
7. Applying pre-trained ResNet50V2 model
8. Model Architecture of ResNet50V2
8.1. Model summary
9. Model Architecture of VGG-19
10. Data flow diagram
11. Experimental discussion
12. Overview of the results on the dataset
13. Testing the model
14. Deployment
15. Strength
16. Weakness
17. Future work
Conclusion
Tools used
References

Abstract
A brain tumor is an abnormal mass of tissue in which cells grow and multiply uncontrollably; it can lead to brain cancer, one of the most dangerous cancer types in the world. Thousands of people suffer from malignant brain tumors, and life expectancy is very short when the tumor is diagnosed at a higher grade. Depending on the level of the cancer, early diagnosis and grading are critical steps after detecting the tumor in order to devise an effective treatment plan. However, thousands of scans must be studied to classify tumor types with high accuracy. Deep learning and computer vision models can handle this amount of data and produce highly accurate results, but they may require long computational time. The aim of this project is therefore to classify MRI images into four classes, one normal and three abnormal brain tumor classes, using the pre-trained Convolutional Neural Networks (CNNs) ResNet50V2 and VGG-19 through transfer learning, in order to address the above problem. The preferred imaging modality for detecting brain tumors, Magnetic Resonance Imaging (MRI), is used: a dataset of 200 weighted contrast-enhanced brain MRI images for grading (classifying) brain tumors into four classes (no-tumor, glioma-tumor, meningioma-tumor, pituitary-tumor) is drawn from a publicly available dataset in the Kaggle repository. The dataset is shuffled randomly into 80% training, 10% validation, and 10% testing. For fine-tuning, the models are modified so that the output channel of the classifier equals the number of classes in the dataset. The pre-trained and fine-tuned ResNet50V2 model achieved an accuracy of 85%, and the pre-trained and fine-tuned VGG-19 model achieved an accuracy of 70%; the model transferred from ResNet50V2 therefore shows better accuracy, and with further modification and deployment it can be used as a classifier for brain tumors when diagnosis is required.

Keywords

MRI - Magnetic Resonance Imaging

CNN - Convolutional Neural Network

ResNet50V2 - Residual Network, 50 layers, version 2

VGG-19 - Visual Geometry Group network, 19 layers

1. Introduction

The brain is the most complex organ in vertebrates, and it sits at the center of the nervous system. Tumors in the brain can be broadly classified as benign or malignant. Additionally, brain tumors can be classified as primary or secondary: tumors that start to grow in the tissue of the brain are called primary brain tumors, while a neoplasm that has grown in another organ and then spread to the brain is called a secondary brain tumor. The most common primary brain tumors are meningiomas (referred to as meningioma tumors), pituitary adenomas (referred to as pituitary tumors), and astroglial neoplasms (including glioblastoma and referred to as glioma tumors) [1]. Treatments depend on the patient, but common treatment techniques for primary brain tumors are multimodality treatments, radiation, and chemotherapy []. Meningioma tumors form in the thin layers of tissue that cover the spinal cord and brain. Gliomas are tumors thought to derive from neuroglial stem or progenitor cells; they comprise 80% of all malignant brain tumors. Pituitary adenomas are tumors of the anterior pituitary, and most of them are benign and slow-growing [4].

In recent decades, many imaging techniques for detecting these types of tumors have emerged, such as X-ray, magnetoencephalography (MEG), computed tomography (CT), ultrasonography, electroencephalography (EEG), single-photon emission computed tomography (SPECT), positron emission tomography (PET), and magnetic resonance imaging (MRI). They not only exhibit the detailed and complete facets of brain tumors but also help doctors accurately diagnose the tumor and determine the correct treatment mechanism. MRI is considered the most popular imaging technique for detecting brain tumors [3]. It is a non-invasive technique with excellent soft-tissue contrast that gives essential information about brain tumor shape, location, and size without subjecting patients to excessive ionizing radiation; because of its harmless nature it is more favorable for brain tumor classification.

In recent years, among many other applications, artificial intelligence (AI) has been used to detect diseases and to build support systems that assist in detecting diseases and establishing precise medical diagnoses. To address practical problems, researchers and governments focus on AI techniques such as deep learning, machine learning, and computer vision. Deep Learning (DL) techniques have recently been widely employed to build automatic systems that can accurately classify or segment brain tumors in less time. DL enables the use of pre-trained Convolutional Neural Network (CNN) models for medical imagery; such models have been created for various applications, including GoogLeNet, AlexNet, and ResNet-34. For this paper, from the different pre-trained convolutional neural networks we use ResNet50V2, a residual network architecture introduced with the 2015 paper "Deep Residual Learning for Image Recognition" by He Kaiming, Zhang Xiangyu, Ren Shaoqing, and Sun Jian, and VGG-19, which stands for Visual Geometry Group 19. VGG-19 is a successor of AlexNet, created by the Visual Geometry Group at Oxford (hence the name VGG); it carries over some ideas from its predecessors, improves on them, and uses deep convolutional layers to improve accuracy. We apply transfer learning from both pre-trained CNN models. CNNs are commonly used to power computer vision applications, and transfer learning involves using a pre-trained model as a starting point for most computer vision and natural language processing tasks [4]. Pre-trained models are state-of-the-art deep learning models that were trained on millions of samples, often for months. These models have an astonishing capability to detect nuances in different images and can be used as a base for our model; most are so good that we do not need to add further convolution and pooling layers.

In medicine, positive and negative test results are typically used for diagnostic purposes to ascertain whether a disease or condition is present (positive) or not (negative). In layperson's terms, positive (in our case, tumor) means that whatever the test was looking for was found, and negative (non-tumor) means that it was not found. The problem is finding these results accurately.

The old method of classifying tumor and non-tumor brain MRI images is manual, which is time-consuming, inaccurate, and prone to human error, since it depends on the skills and experience of the radiologist. A huge amount of image data is generated through the scans, and these images are examined by the radiologist. This manual examination can be error-prone because of the complexity of brain tumors and their properties. Automated classification techniques using deep learning and computer vision have consistently shown higher accuracy than manual classification, but alongside this advantage they take much computational time to analyze the image data and are hard to train from scratch because of the vanishing gradient problem and the challenge of feature extraction, which is one of the main difficulties in training deep neural networks. To address this, we propose a transfer learning technique, a subfield of machine learning and artificial intelligence that aims to apply the knowledge gained from one task (the source task) to a different but similar task (the target task). Fine-tuning then involves further training the pre-trained model on the new task by updating its weights.

2. Objective
2.1. General objective
The general objective of this project is to develop a categorical deep learning classification model using transfer learning.

2.2. Specific objective


To achieve the above general objective, we set out the following specific tasks:

• To collect brain MRI images (data acquisition)
• To apply data augmentation to the training dataset
• To apply preprocessing to the MRI images
• To build a model transferred from the ResNet50V2 model
• To build a model transferred from the VGG-19 model
• To test our models
• To measure the performance of our models
• To deploy the better of the two new models

3. Literature review
This section introduces a collection of DL-based brain tumor classification techniques. There are numerous methods for classifying brain tumors based on DL and transfer learning algorithms; state-of-the-art techniques can be grouped into deep learning-based, machine learning-based, and hybrid-based techniques. The table below summarizes different classification studies from the literature.

Table: Different brain tumor classification techniques

[2] Technique: deep dense inception residual network (DDIRN). Dataset: publicly available brain tumor image dataset with 3,064 images. Accuracy: mean accuracy of 99.69%. Advantages: the results demonstrate that the proposed approach outperforms several existing state-of-the-art methods in terms of accuracy, sensitivity, specificity, and F1 score; the study shows promising results for using deep learning models for the classification of brain tumors. Gap: other deep learning architectures have been developed and could be explored for brain tumor classification; it is unclear whether DDIRN is the optimal architecture for this task, so other models should also be explored for a more comprehensive comparison.

[4] Technique: ViT pre-trained models (B/16, B/32, L/16, and L/32). Dataset: a brain tumor dataset from figshare. Accuracy: the best model is L/32, with an overall test accuracy of 98.2%. Advantages: the study provides exciting insights into using advanced deep learning methods for diagnosing medical conditions such as brain tumors, which can potentially save lives and improve overall healthcare outcomes. Gap: although the article claims that the proposed Vision Transformer-based classifier achieved higher accuracy than other state-of-the-art methods, it is unclear whether the performance improvement is clinically significant.

[6] Technique: convolutional neural network. Dataset: MRI images assembled from several institutes, hospitals, and colleges. Accuracy: 98%. Advantages: proposes a new method for brain tumor classification using a hybrid approach that combines texture features, wavelet transform, and three-dimensional fractal dimension to improve classification accuracy; the paper also compares results with different approaches. Gap: lack of generalizability, since the proposed model was not tested on independent datasets that would have validated its capability to generalize to new data.

[3] Technique: RCNN. Dataset: two publicly available datasets from Figshare (Cheng et al., 2017) and Kaggle (2020). Accuracy: 98.21%. Advantages: presents more comprehensive evaluation metrics; the authors propose an improved architecture for detecting and classifying brain tumors using a combination of a region-based convolutional neural network (RCNN) and a two-channel convolutional neural network (CNN). Gap: interpretability, since although the proposed architecture is said to be efficient it is unclear how it accomplishes its goals, and therefore it is difficult to interpret; the authors also do not provide any comparison with other state-of-the-art methods in this domain.
4. About the data
The collected data are grouped into four classes, one class of healthy brain images (non-tumor) and three classes of unhealthy brain images (tumor), collected from the MRI image category of Kaggle (https://www.kaggle.com/datasets/navoneel/brain-mri-images-for-brain-tumor-detection).

Folder            Description
glioma-tumor      Contains 50 brain MRI images that are tumorous
meningioma-tumor  Contains 50 brain MRI images that are tumorous
pituitary-tumor   Contains 50 brain MRI images that are tumorous
no-tumor          Contains 50 brain MRI images that are non-tumorous

Figure 1 Colored tumor brain image and MRI brain tumor image

5. Methodology
For this project we followed a design science research approach. Given the availability of labeled data to train and generate the models, we used a supervised machine learning technique. The statement of the problem describes a classification problem, and to answer it a Convolutional Neural Network (CNN) is the most preferable choice, as it is widely used in image recognition and computer vision.

6. Methods
To obtain better accuracy, we followed the supervised machine learning steps described below.

6.1. Preprocessing
Preprocessing data is a common first step in the deep learning workflow to prepare raw data in a
format that the network can accept.

6.1.1. Resizing
Since neural networks receive inputs of the same size, all images need to be resized to a fixed size before being input to the CNN. The larger the fixed size, the less shrinking is required, and less shrinking means less deformation of the features and patterns inside the image. We use target_size=(img_width, img_height) and resize the images to 224 × 224.
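As a minimal sketch (assuming the Keras preprocessing utilities; the image path is a placeholder), resizing can be done while loading the image:

from tensorflow.keras.preprocessing.image import load_img

img_width, img_height = 224, 224  # fixed input size expected by the network

# load_img resizes the image to target_size while loading it (the path below is a placeholder)
img = load_img("brain_mri/example.jpg", target_size=(img_width, img_height))
print(img.size)  # (224, 224)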

6.1.2. Image-to-array
The Keras preprocessing API provides the img_to_array() function for converting a loaded image in PIL format into a NumPy array for use with deep learning models. The API also provides the array_to_img() function, which can be used to convert a NumPy array of pixel data back into a PIL image.
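A short sketch of these two conversions, continuing from the image loaded above:

from tensorflow.keras.preprocessing.image import img_to_array, array_to_img

# PIL image -> NumPy array of shape (224, 224, 3) for use with the model
array = img_to_array(img)
print(array.shape, array.dtype)  # (224, 224, 3) float32

# NumPy array -> PIL image, e.g. for visual inspection
pil_img = array_to_img(array)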

6.1.3. Image normalization


Image normalization is a technique often applied as part of data preparation for deep learning. Normalizing an image consists of dividing each of its pixel values by the maximum value a pixel can take (255 for an 8-bit image), so that all inputs lie in a common range, which typically helps the network train more stably.
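A minimal sketch of this step, assuming 8-bit images with pixel values in [0, 255] and the array from the previous sketch:

# scale pixel values from [0, 255] to [0, 1]
normalized = array / 255.0
print(normalized.min(), normalized.max())  # values now lie between 0.0 and 1.0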

6.2. Data augmentation
Data augmentation is a technique for artificially increasing the training set by creating modified copies of the existing data. It involves making minor changes to the dataset or using deep learning to generate new data points. In this project we used the zoom_range, shear_range, width_shift_range, height_shift_range, and rescale techniques, as well as, for example, a rotation range of 90 degrees (rotation_range=90).
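A minimal sketch of these augmentations using the Keras ImageDataGenerator; parameter values not stated in the report are illustrative assumptions, as are the directory path and batch size:

from tensorflow.keras.preprocessing.image import ImageDataGenerator

train_datagen = ImageDataGenerator(
    rescale=1.0 / 255,        # normalization, as described above
    rotation_range=90,        # rotation range of 90 degrees
    zoom_range=0.2,           # illustrative value
    shear_range=0.2,          # illustrative value
    width_shift_range=0.1,    # illustrative value
    height_shift_range=0.1,   # illustrative value
)

# flow_from_directory reads images class-by-class from sub-folders (path is a placeholder)
train_generator = train_datagen.flow_from_directory(
    "dataset/train", target_size=(224, 224), batch_size=32, class_mode="categorical"
)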

6.3. Train test splitting


A train/test split is when you split your data into a training set and a testing set. The training set is used for training the model, and the testing set is used to test it; this lets you train your models on the training set and then measure their accuracy on the unseen testing set. In this project we also added a validation set, which is used to tune the parameters of the classifier. We followed the common splitting ratio of 80% for training, 10% for validation, and 10% for testing. This ensures that all sets are representative of the entire dataset and gives us a good way to measure the accuracy of our models.
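One way to sketch the 80/10/10 split; the report does not show the exact code, so this assumes scikit-learn's train_test_split and uses placeholder arrays standing in for the preprocessed images and one-hot labels:

import numpy as np
from sklearn.model_selection import train_test_split

# placeholder arrays standing in for the 200 preprocessed images and their one-hot labels
X = np.random.rand(200, 224, 224, 3).astype("float32")
y = np.eye(4)[np.random.randint(0, 4, size=200)]

# first split off 20%, then halve it into validation and testing
X_train, X_tmp, y_train, y_tmp = train_test_split(X, y, test_size=0.20, random_state=42, shuffle=True)
X_val, X_test, y_val, y_test = train_test_split(X_tmp, y_tmp, test_size=0.50, random_state=42)
print(len(X_train), len(X_val), len(X_test))  # 160 20 20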

Figure 2 data set splitting

6.4. Transfer Learning


Transfer learning is often used in DL networks to re-train what has been learned on one object for other objects. There are four types of transfer learning: case-based, feature-based, parameter-based, and relationship-based. Choosing the trained parameters for the best classification system is obviously a big challenge: a suitable network architecture needs to be chosen along with the network parameters, and these values need to be estimated for the new input data. Then the new network needs to be fine-tuned to improve performance [8]. In this paper, the parameter-based transfer learning method was applied for classifying brain tumor images.

7. Applying pre-trained ResNet50V2 model


The plain network on which ResNet is based was derived from the VGG neural networks (VGG-16 and VGG-19), in which each convolutional layer uses a 3×3 filter. However, a ResNet has fewer filters and is less complex than a VGGNet: a 34-layer ResNet requires 3.6 billion FLOPs, and a smaller 18-layer ResNet requires 1.8 billion FLOPs, significantly less than a VGG-19 network with 19.6 billion FLOPs [4].

The ResNet architecture follows two basic design rules. First, the layers have the same number of filters for the same output feature map size. Second, if the size of the feature map is halved, the number of filters is doubled to maintain the time complexity of each layer.
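A minimal sketch of loading the pre-trained ResNet50V2 base and attaching a new classification head for the four classes; the added layers follow the average-pooling, dropout, and two dense layers described in the experimental section, while the hidden layer size and dropout rate are illustrative assumptions:

from tensorflow.keras.models import Sequential
from tensorflow.keras.applications import ResNet50V2
from tensorflow.keras.layers import GlobalAveragePooling2D, Dropout, Dense

# ImageNet-pre-trained convolutional base, without its original 1000-class classifier
base = ResNet50V2(weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = False  # keep the transferred weights frozen; only the new head is trained

model = Sequential([
    base,
    GlobalAveragePooling2D(),        # average pooling, as in the experimental setup
    Dropout(0.5),                    # illustrative dropout rate
    Dense(128, activation="relu"),   # illustrative hidden dense layer
    Dense(4, activation="softmax"),  # four output classes
])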

8. Model Architecture of ResNet50V2

Figure 3 ResNet50V2 architecture

The 50-layer ResNet architecture includes the following elements, as shown in the figure above:

• A 7×7 convolution with 64 kernels and a stride of 2.

• A max pooling layer with a stride of 2.

• 9 more layers: a 3×3, 64-kernel convolution, another with 1×1, 64 kernels, and a third with 1×1, 256 kernels. These 3 layers are repeated 3 times.

• 12 more layers with 1×1, 128 kernels; 3×3, 128 kernels; and 1×1, 512 kernels, iterated 4 times.

• 18 more layers with 1×1, 256 kernels; 3×3, 256 kernels; and 1×1, 1024 kernels, iterated 6 times.

• 9 more layers with 1×1, 512 kernels; 3×3, 512 kernels; and 1×1, 2048 kernels, iterated 3 times.

(Up to this point the network has 50 layers.)

• Average pooling, followed by a fully connected layer with 1000 nodes, using the softmax activation function.

Description of the layers:

• Convolution layer: a convolution layer transforms the input image in order to extract features from it. In this transformation, the image is convolved with a kernel (or filter), a small matrix whose height and width are smaller than the image to be convolved.

• Pooling layers: pooling layers down-sample feature maps by summarizing the presence of features in patches of the feature map. Two common pooling methods are average pooling and max pooling, which summarize the average presence and the most activated presence of a feature, respectively; we used average pooling.

• Dense layer: a simple layer of neurons in which each neuron receives input from all the neurons of the previous layer is called a dense layer. A dense layer is used to classify images based on the output from the convolutional layers.

• Activation function: an activation function gives the neural network non-linear expressive ability so that it can better fit the data and thereby improve accuracy. We used the softmax activation function in the output layer.

8.1. Model summary
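Assuming the ResNet50V2-based model assembled in the sketch above, the layer-by-layer summary would be produced with:

# prints each layer, its output shape, and its trainable/non-trainable parameter counts
model.summary()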

9. Model Architecture of VGG-19


VGG is a CNN architecture, and VGG-19 is one of the VGG-based architectures. VGG-19 is a deep-learning neural network with 19 weight layers, comprising 16 convolution layers and 3 fully connected layers. The convolution layers extract features from the input images, and the fully connected layers classify the images based on those features. In addition, the max-pooling layers reduce the feature dimensions and help avoid overfitting [8].

Figure 4 VGG-19 architecture, adopted from [8]

A fixed-size 224 × 224 RGB image is given as input to this network, so the input matrix has shape (224, 224, 3). The only preprocessing applied was subtracting the mean RGB value, computed over the whole training set, from each pixel. Kernels of size 3 × 3 with a stride of 1 pixel were used, which enabled the network to cover the whole notion of the image, and spatial padding was used to preserve the spatial resolution of the image. Max pooling was performed over 2 × 2 pixel windows with stride 2. This was followed by the Rectified Linear Unit (ReLU) to introduce non-linearity, which makes the model classify better and improves computational time compared with previous models that used tanh or sigmoid functions. Three fully connected layers were implemented, of which the first two have size 4096, followed by a layer with 1000 channels for the 1000-way ILSVRC classification, and the final layer is a softmax function.
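A minimal sketch of reusing the pre-trained VGG-19 convolutional base in the same way as ResNet50V2; the new classification head shown is an illustrative assumption, not the exact head used in the project:

from tensorflow.keras.models import Sequential
from tensorflow.keras.applications import VGG19
from tensorflow.keras.layers import Flatten, Dense, Dropout

# ImageNet-pre-trained VGG-19 feature extractor, without the original fully connected layers
vgg_base = VGG19(weights="imagenet", include_top=False, input_shape=(224, 224, 3))
vgg_base.trainable = False

vgg_model = Sequential([
    vgg_base,
    Flatten(),
    Dense(256, activation="relu"),   # illustrative hidden dense layer
    Dropout(0.5),                    # illustrative dropout rate
    Dense(4, activation="softmax"),  # four output classes
])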

10. Data flow diagram


A data flow diagram (DFD) is a graphical or visual representation using a standardized set of
symbols and notations to describe operations through data movement.

Figure 5 data flow diagram

11. Experimental discussion


In this study we conducted experiments on an Intel Xeon CPU with 12 GB RAM on Google Colab, where we developed pre-trained convolutional neural network (CNN) models to classify brain tumors into four different classes: meningioma, glioma, pituitary adenoma, and no tumor. The proposed model was implemented using Python and TensorFlow. It was trained on a dataset of 200 MRI images, divided into 80% training, 10% validation, and 10% testing sets, ensuring a balanced distribution of samples across the classes. We fine-tuned the pre-trained ResNet50V2 model using transfer learning, fine-tuned the last dense (softmax) layer, and, using the sequential method, added one average-pooling layer, one dropout layer, and two dense layers. We applied data augmentation techniques such as zoom_range, width_shift_range, height_shift_range, shear_range, and rescale to prevent overfitting and improve generalization performance. We also used the Adam optimizer, one of the most time-efficient optimizers for deep networks, and optimized hyperparameters such as the learning rate and the number of epochs using a random search algorithm.
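A minimal sketch of compiling and training the ResNet50V2-based model described above; the learning rate, batch size, and epoch count shown here are illustrative placeholders, since the report tunes them by random search (model, X_train, X_val, etc. come from the earlier sketches):

from tensorflow.keras.optimizers import Adam

model.compile(
    optimizer=Adam(learning_rate=1e-4),   # illustrative starting learning rate
    loss="categorical_crossentropy",      # four-class classification
    metrics=["accuracy"],
)

history = model.fit(
    X_train, y_train,
    validation_data=(X_val, y_val),
    epochs=10,        # tuned together with the learning rate
    batch_size=32,    # illustrative batch size
)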

On the other hand, when we fine-tuned VGG-19, which has a deep and wide structure in which the number of computational parameters is well optimized, the parameters configured for training the network were: epochs (10), hidden-layer activation function (tansig), output activation function (softmax), initial learning rate (0.00001), and batch size (60). The number of epochs describes how many times the neural network is trained before training is stopped. The model will not fit the training data (under-fitting) when the epoch count is too small, and it will over-fit when this value is too large; in both cases the classification result is not good. However, we cannot calculate a suitable epoch count analytically and must choose this value based on the model and dataset.
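A sketch of training the VGG-19-based model with the stated configuration (10 epochs, initial learning rate 0.00001, batch size 60); the compile call itself mirrors the ResNet50V2 sketch and is an assumption:

from tensorflow.keras.optimizers import Adam

vgg_model.compile(
    optimizer=Adam(learning_rate=0.00001),  # initial learning rate from the report
    loss="categorical_crossentropy",
    metrics=["accuracy"],
)

vgg_history = vgg_model.fit(
    X_train, y_train,
    validation_data=(X_val, y_val),
    epochs=10,        # epochs from the report
    batch_size=60,    # batch size from the report
)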

12. Overview of the results on the dataset


12.1. Accuracy of the model pre-trained from ResNet50V2
After applying hyperparameter optimization with a learning rate of 0.1, 10 epochs, and a patience of 10, the pre-trained model yielded its best performance: an accuracy score of 93.7% on the training set, 70% on the validation set, and 80% on the testing set. For the performance measurement we take the accuracy on the testing set, which is 80%.
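The patience value suggests that early stopping was used; a sketch of how such a callback would look in Keras (an assumption, since the report does not show the callback code):

from tensorflow.keras.callbacks import EarlyStopping

# stop training when the validation loss stops improving for 10 consecutive epochs
early_stop = EarlyStopping(monitor="val_loss", patience=10, restore_best_weights=True)

# passed to training via: model.fit(..., callbacks=[early_stop])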

The training and validation accuracy with respect to the number of epochs is plotted below:

Figure 6 Accuracy ResNet50V2

12.2. Loss of the model pre-trained from ResNet50V2

Loss is the penalty for a bad prediction: it is a number indicating how bad the model's prediction was on a single example. If the model's prediction is perfect, the loss is zero; otherwise, the loss is greater. The training and validation losses of our model are plotted below.
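For reference, a common loss for multi-class softmax classifiers such as this one (an assumption, since the report does not state the loss function explicitly) is categorical cross-entropy, which for a single example with one-hot label y and predicted probabilities \hat{y} over the C = 4 classes is:

L(y, \hat{y}) = -\sum_{c=1}^{C} y_c \log \hat{y}_c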

Figure 7 loss ResNet50V2

12.3. Accuracy of the model pre-trained from VGG-19

12.4. Loss of the model pre-trained from VGG-19

13. Testing the model


The best model, transferred from ResNet50V2, is applied to data that already contains the target field values the model can predict. To test our model in this paper, we give it an image path, preprocess the image, and pass it to the model, which then predicts which class the image belongs to. An example for a no-tumor image is given below.
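A minimal sketch of this test step; the image path is a placeholder and the class-name ordering is an assumption (in practice it should match the training generator's class indices):

import numpy as np
from tensorflow.keras.preprocessing.image import load_img, img_to_array

class_names = ["glioma-tumor", "meningioma-tumor", "no-tumor", "pituitary-tumor"]  # assumed order

# load and preprocess a single test image exactly as during training
img = load_img("test_images/no_tumor_example.jpg", target_size=(224, 224))
x = img_to_array(img) / 255.0
x = np.expand_dims(x, axis=0)            # add the batch dimension: (1, 224, 224, 3)

probabilities = model.predict(x)[0]      # softmax scores for the four classes
print(class_names[np.argmax(probabilities)])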

14. Deployment
Deployment is the method by which we integrate a learning model into an existing production environment to make practical business decisions based on data. It is one of the last stages of the machine learning life cycle and can be one of the most cumbersome. In this project we deploy the model using Streamlit, which lets us create apps for our project with simple code. It also supports hot-reloading, which lets the app update live as we edit and save our file. Creating an app with Streamlit is very easy: adding a widget is as simple as declaring a variable.
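A minimal sketch of what such an app.py might look like; the model file name, class ordering, and widget layout are assumptions, not the exact code used in the project:

# app.py
import numpy as np
import streamlit as st
from PIL import Image
from tensorflow.keras.models import load_model

CLASS_NAMES = ["glioma-tumor", "meningioma-tumor", "no-tumor", "pituitary-tumor"]  # assumed order

st.title("Brain Tumor Classification")
model = load_model("brain_tumor_resnet50v2.h5")   # path to the saved fine-tuned model (assumed)

uploaded = st.file_uploader("Upload a brain MRI image", type=["jpg", "jpeg", "png"])
if uploaded is not None:
    image = Image.open(uploaded).convert("RGB").resize((224, 224))
    st.image(image, caption="Uploaded MRI")
    x = np.expand_dims(np.asarray(image) / 255.0, axis=0)
    prediction = CLASS_NAMES[int(np.argmax(model.predict(x)[0]))]
    st.write(f"Predicted class: {prediction}")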

After creating the app.py file, we run it with Streamlit and obtain a temporary URL, https://famous-hats-appear-34-73-58-41.loca.lt, for accessing the created app.

15. Strength
The strength of our study is that it used two pre-trained CNN models for transfer learning, which require less training time and data, and compared the two to find the better one. The incorporation of data augmentation techniques also lowers the risk of overfitting. Moreover, our results show the applicability and efficiency of pre-trained CNN models in the field of medical imaging and diagnosis, saving clinicians and medical practitioners valuable time and effort.

16. Weakness
The weakness of our multi-class brain tumor classification using the pre-trained ResNet50V2 and VGG-19 CNN models is that the models may struggle to accurately classify images containing abnormalities or unique features not present in the datasets the networks were originally trained on. This can cause misclassifications or decreased accuracy in identifying certain types of brain tumors, which is why our test accuracy decreased; because the task is directly related to life and death, the accuracy should be increased. Additionally, the models may require significant computational resources and time for training and optimization, especially if the dataset is large or complex. The model also classifies only MRI images accurately, not other image types.

17. Future work

For future work, we will attempt brain tumor classification on a larger dataset and on different types of brain images other than MRI, apply state-of-the-art deep learning or transfer learning algorithms to the dataset to achieve better classification accuracy, and deploy the model for real-world use.

Conclusion
Despite the limitations, our study highlights how pre-trained CNN models can be utilized efficiently to accurately classify different types of brain tumors; of the two pre-trained models, the one transferred from ResNet50V2 was more accurate. Overall, the results demonstrate the potential of a model built from the pre-trained ResNet50V2 CNN for multi-class brain tumor classification and its applicability in clinical settings, making it a promising tool for aiding the accurate diagnosis of these conditions. However, further validation and testing will be necessary before considering broad clinical implementation. Future studies can build on our findings, and we look forward to exploring new ways in which transfer learning techniques can be used to solve other medical image classification problems.

Tools used
A set of software tools designed to help us plan the project, track and manage it, and achieve the defined project goals within the allotted time is shown below.

Reference

[1] Ayadi, W., Elhamzi, W., Charfi, I. et al. Deep CNN for Brain Tumor Classification. Neural
Process Lett 53, 671–700 (2021)

[2] Kokkalla, S.; Kakarla, J.; Venkateswarlu, I.B.; Singh, M. Three-class brain tumor
classification using deep dense inception residual network. Soft Comput. 2021, 25, 8721–8729.

[3] Kesav, N.; Jibukumar, M. Efficient and low complex architecture for detection and
classification of Brain Tumor using RCNN with Two Channel CNN. J. King Saud Univ.-
Comput. Inf. Sci. 2022, 34, 6229–6242.

[4] Tummala, Sudhakar & Kadry, Seifedine & Bukhari, Syed Ahmad Chan & Rauf, Hafiz
Tayyab. Classification of Brain Tumor from Magnetic Resonance Imaging Using Vision
Transformers Ensembling. Current Oncology. (2022) 29. 7498-7511.

[5] Hassan Ali Khan, Wu Jue, Muhammad Mushtaq, Muhammad Umer Mushtaq. Brain tumor
classification in MRI image using convolutional neural network, Mathematical Biosciences and
Engineering, 2020,

[6] Ayadi, W.; Charfi, I.; Elhamzi, W.; Atri, M. Brain tumor classification based on hybrid
approach. Vis. Comput. 2020, 38, 107–117.

[7] Hao, R., Namdar, K., Liu, L., and Khalvati, F. A Transfer Learning Based Active Learning
Framework for Brain Tumor Classification. (2020)

[8] Nguyen, T.-H.; Nguyen, T.-N.; Ngo, B.-V. A VGG-19 Model with Transfer Learning and
Image Segmentation for Classification of Tomato Leaf Disease. AgriEngineering 2022, 4, 871-
887.

Contribution of group members
1. Mohammed Seid
He contributed to writing the Python code, discussing and resolving different types of syntax and semantic errors in the project.
2. Yewoinhareg Girma
She contributed by assessing different literature, discussing which types of models were used, and finding different resources for our project.
3. Azeb Gezahegn
She contributed by writing the documentation with a proper formatting style, following the IEEE referencing style.

In conclusion, every group member contributed from the start to the end of the project, and we hope we have gained a solid understanding of how to prepare and write our future theses and projects.
