
Brain image classification is a critical task in medical imaging analysis, aiding in the diagnosis and
treatment of neurological conditions. In this study, we employed convolutional neural networks (CNNs)
and recurrent neural networks (RNNs) to classify brain images and compared their accuracies. We used
two datasets from different sources: one from Kaggle and the other from OASIS. In the Kaggle dataset
the images are divided into four categories: Pituitary, Notumor, Meningioma, and Glioma; in the OASIS
dataset the brain images are classified into four categories: Non-Demented, Very Mild Dementia,
Mild Dementia, and Moderate Dementia. We imported libraries such as NumPy, Pandas, Keras, OS,
matplotlib, seaborn, sklearn, TensorFlow, and PIL to implement our Alzheimer's Disease detection
model. For the Kaggle dataset, we used 4417 images for training, 632 for validation, and 1262 for
testing; there the data is already separated into training and testing folders. For the OASIS dataset,
1561 images were used for training and 391 for testing, with the split performed using the
train_test_split method from sklearn, imported at the beginning. Each image was resized to 128x128
pixels to keep the input dimensions consistent across all samples. After loading and preprocessing the
data, which involved resizing the images and converting them into NumPy arrays, one-hot encoding was
applied to represent the categorical labels. The OASIS data was split using an 80-20 ratio, with 20%
of the data reserved for testing and evaluation.
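The preprocessing steps above (resizing to 128x128, conversion to NumPy arrays, one-hot encoding, and the 80-20 split) can be sketched as follows. The study does not publish its exact code, so this is a minimal illustration using the libraries it names (PIL, NumPy, sklearn); the function name `load_and_preprocess` and the normalization to [0, 1] are assumptions for the example.

```python
import numpy as np
from PIL import Image
from sklearn.model_selection import train_test_split

def load_and_preprocess(image_paths, labels, num_classes, size=(128, 128)):
    """Resize images, convert to NumPy arrays, one-hot labels, 80-20 split."""
    # Resize every image to 128x128 and stack into one float array
    images = np.array(
        [np.array(Image.open(p).convert("RGB").resize(size)) for p in image_paths],
        dtype=np.float32,
    ) / 255.0  # scale pixel values to [0, 1] (assumed normalization)
    # One-hot encode the integer class labels (4 classes in both datasets)
    one_hot = np.eye(num_classes)[np.asarray(labels)]
    # 80-20 split, as described for the OASIS dataset
    return train_test_split(images, one_hot, test_size=0.2, random_state=42)
```

The same routine serves both datasets; for Kaggle, where training and testing folders already exist, the `train_test_split` call would simply be skipped.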
The CNN is a sequential architecture consisting of multiple convolutional layers followed by batch
normalization, max-pooling, dropout, and dense blocks, while the RNN model was developed by
modifying the CNN architecture to use SimpleRNN layers followed by dense layers. In one of the
models, the last layer of the CNN was replaced with an LSTM layer to capture sequential information
inherent in the data. Both models were trained on the preprocessed image data and evaluated on separate
test sets to assess their performance. The CNN model demonstrated robust performance during training,
achieving high accuracy and low loss on the training set. Additionally, metrics such as area under the
curve (AUC), precision, and recall exhibited promising results, indicating effective classification of
brain images. Validation results further reinforced the model's generalization ability, with training
and validation metrics converging over the epochs. Evaluation on the test set provided real-world
performance insights, revealing the CNN's effectiveness in accurately classifying brain images across
the different categories. Early stopping was implemented to prevent overfitting and ensure optimal
model performance.
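A CNN of the kind described (convolutional layers with batch normalization, max-pooling, dropout, and dense blocks, compiled with accuracy, AUC, precision, and recall, and trained with early stopping) could be built in Keras roughly as below. The study does not report filter counts, dropout rates, or patience values, so those figures are illustrative assumptions, not the authors' configuration.

```python
import tensorflow as tf
from tensorflow.keras import layers, models, callbacks

def build_cnn(input_shape=(128, 128, 3), num_classes=4):
    # Two conv blocks: Conv -> BatchNorm -> MaxPool -> Dropout (sizes assumed)
    model = models.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv2D(32, 3, activation="relu", padding="same"),
        layers.BatchNormalization(),
        layers.MaxPooling2D(),
        layers.Dropout(0.25),
        layers.Conv2D(64, 3, activation="relu", padding="same"),
        layers.BatchNormalization(),
        layers.MaxPooling2D(),
        layers.Dropout(0.25),
        # Dense block feeding a 4-way softmax (four classes in both datasets)
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dropout(0.5),
        layers.Dense(num_classes, activation="softmax"),
    ])
    # Track the metrics the study reports: accuracy, AUC, precision, recall
    model.compile(optimizer="adam", loss="categorical_crossentropy",
                  metrics=["accuracy", tf.keras.metrics.AUC(),
                           tf.keras.metrics.Precision(),
                           tf.keras.metrics.Recall()])
    return model

# Early stopping halts training when validation loss stops improving
early_stop = callbacks.EarlyStopping(monitor="val_loss", patience=5,
                                     restore_best_weights=True)
```

Training would then pass `callbacks=[early_stop]` to `model.fit`, which is how early stopping prevents the overfitting the study mentions.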
In contrast, the RNN model leveraged sequential information inherent in the image data, treating each
row of pixels as one step of a sequence. Although the RNN architecture was simpler than the CNN, it
demonstrated competitive performance in classifying brain images. The model was compiled and trained
with a configuration similar to the CNN's, with early stopping again applied to prevent overfitting.
Training and validation metrics tracked the RNN's learning progress, showing improvements in accuracy
and loss over the epochs, and its training history was plotted to visualize this progress, showing
convergence behavior similar to that observed for the CNN. Evaluation on the test set revealed
competitive performance, demonstrating the effectiveness of leveraging sequential patterns for brain
image classification and complementing the CNN's results.
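Treating each row of pixels as a sequence step means a 128x128 image becomes a sequence of 128 timesteps with 128 features each. A minimal sketch of such a model, assuming grayscale input and an assumed hidden size of 64 (the study does not report the layer widths):

```python
from tensorflow.keras import layers, models

def build_rnn(rows=128, features=128, num_classes=4):
    # Each of the 128 pixel rows is fed to the RNN as one timestep
    model = models.Sequential([
        layers.Input(shape=(rows, features)),
        layers.SimpleRNN(64),                     # hidden size assumed
        layers.Dense(64, activation="relu"),      # dense block, as described
        layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam", loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```

For RGB input, the 128x128x3 images would first be reshaped to (128, 384) so that each row's three channels form one feature vector per timestep.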
In conclusion, our study demonstrates the effectiveness of convolutional neural networks (CNNs) and
recurrent neural networks (RNNs) in classifying brain images into multiple categories. The CNN achieved
remarkable accuracies of 100% on the OASIS dataset and 97.33% on the Kaggle dataset, while the RNN
exhibited markedly lower accuracies of 73.15% and 80.67% on the respective datasets. This disparity
underscores the differing strengths of each architecture, with CNNs excelling in capturing spatial features
and RNNs in capturing temporal dependencies and sequential patterns. Our findings emphasize the
importance of selecting the appropriate architecture based on the characteristics of the data and the
specific task requirements. Moving forward, refining model architectures, exploring advanced data
augmentation techniques, and leveraging larger datasets hold promise for further enhancing classification
accuracy and clinical relevance. Overall, our study highlights the potential of deep learning models in
accurately classifying brain images and underscores the need for continued research and development to
address clinical challenges effectively.
