
FINAL YEAR PROJECT

ON

FIRE DETECTION USING DEEP NEURAL NETWORK

PROJECT GUIDE:
Mrs. Poonam Singh
Asst. Professor, CSE Department

GROUP MEMBERS:
Ansh Shivhare (2000640100018)
Antra Bansal (2000640100021)
Ayush Gupta (2000640100032)
Khushi Mittal (2000640100062)

INTRODUCTION

In recent years, deep neural networks (DNNs) have transformed many fields, including fire
detection. A DNN is a computational model inspired by the human brain, made up of many connected
layers of artificial neurons. Our project uses DNNs to build advanced fire detection models. Using
Convolutional Neural Networks (CNNs), a type of DNN especially well suited to analyzing images and
videos, we aim to accurately spot fires in different areas. These models combine data from cameras to
detect fires quickly and reliably.

OBJECTIVE

The objective of this project is to use the ResNet-18 model for fire detection, experimenting with
different learning rate schedulers, optimizers, and activation functions. We will train multiple
model variants to find the configuration that produces the best results, and we will use the VGG-19
model as a benchmark. By comparing the performance of these models, we aim to identify the optimal
configuration for accurate and reliable fire detection.
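
As a rough illustration of the setup, the sketch below shows how ResNet-18 and the VGG-19 benchmark could be instantiated for binary fire / non-fire classification with torchvision; the pretrained weights and the two-class output head are assumptions for illustration, not details taken from the report.

import torch.nn as nn
from torchvision import models

# Instantiate both architectures and replace their ImageNet heads with
# a two-class (fire / non-fire) output layer.
resnet = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
resnet.fc = nn.Linear(resnet.fc.in_features, 2)

vgg = models.vgg19(weights=models.VGG19_Weights.DEFAULT)
vgg.classifier[6] = nn.Linear(vgg.classifier[6].in_features, 2)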

DATASET

Training Dataset:
We sourced our training dataset from Kaggle: the "Fire Segmentation Image
Dataset." It provides labeled images tailored specifically to fire detection tasks,
which streamlined our model training process.
Testing Dataset:
We created a custom test dataset for fire detection, containing real-life images we collected
ourselves. These images include both fire and non-fire instances, labeled accordingly, and we
preprocessed them so they are suitable for evaluating our model's performance.
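
A minimal loading and preprocessing sketch is shown below, assuming the images are arranged in fire/ and non_fire/ subfolders; the directory path, image size, and ImageNet normalization statistics are illustrative assumptions rather than settings taken from the report.

import torch
from torchvision import datasets, transforms

# Resize, convert to tensors, and normalize with standard ImageNet stats.
transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# ImageFolder infers labels from the subfolder names (hypothetical path).
train_set = datasets.ImageFolder("fire_segmentation/train", transform=transform)
train_loader = torch.utils.data.DataLoader(train_set, batch_size=32, shuffle=True)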
MODEL ARCHITECTURE

ResNet-18, a member of the ResNet family, revolutionized deep neural network training by
introducing residual connections. The architecture comprises convolutional layers for initial feature
extraction, followed by residual blocks designed to combat the vanishing gradient problem. These
blocks incorporate skip connections, enabling smoother gradient flow during training. Additionally,
downsampling is performed with stride-2 convolutions after every two residual blocks. The network
concludes with global average pooling, which condenses the spatial features into a one-dimensional
vector, followed by a fully connected layer for classification. Despite its relative simplicity,
ResNet-18 excels at a wide range of image tasks thanks to its effective use of residual connections.
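
To make the skip connection concrete, here is a simplified sketch of a ResNet basic block in PyTorch; it follows the standard published design rather than any project-specific code.

import torch.nn as nn
import torch.nn.functional as F

class BasicBlock(nn.Module):
    def __init__(self, in_channels, out_channels, stride=1):
        super().__init__()
        # Two 3x3 convolutions form the residual branch.
        self.conv1 = nn.Conv2d(in_channels, out_channels, 3,
                               stride=stride, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(out_channels)
        self.conv2 = nn.Conv2d(out_channels, out_channels, 3,
                               padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(out_channels)
        # A 1x1 convolution matches shapes when the block downsamples.
        self.shortcut = nn.Sequential()
        if stride != 1 or in_channels != out_channels:
            self.shortcut = nn.Sequential(
                nn.Conv2d(in_channels, out_channels, 1,
                          stride=stride, bias=False),
                nn.BatchNorm2d(out_channels),
            )

    def forward(self, x):
        out = F.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        # The skip connection adds the input back to the branch output,
        # letting gradients bypass the convolutions during training.
        return F.relu(out + self.shortcut(x))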
COMPARISON OF DIFFERENT MODELS

We trained 13 different ResNet-18 models, each with a different combination of learning rate
scheduler, optimizer, and activation function, to see which combination works best. To compare their
performance, we plotted each model's training accuracy over time, along with bar charts of the
final training accuracies. These graphs make it easy to see which setups lead to better
accuracy, helping us find the best way to optimize ResNet-18.
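
One way to organize such a sweep is sketched below; the grid contents, learning rate, and scheduler settings are illustrative, and build_resnet18 and train_and_evaluate are hypothetical helpers standing in for the project's actual training loop.

import itertools
import torch.nn as nn
import torch.optim as optim

# Candidate components for the sweep (Swish is nn.SiLU in PyTorch).
activations = {"ReLU": nn.ReLU, "ELU": nn.ELU, "Swish": nn.SiLU}
optimizers = {"Adam": optim.Adam, "SGD": optim.SGD}
schedulers = {
    "StepLR": lambda o: optim.lr_scheduler.StepLR(o, step_size=10),
    "CosineWarmRestarts":
        lambda o: optim.lr_scheduler.CosineAnnealingWarmRestarts(o, T_0=10),
}

results = {}
for a, o, s in itertools.product(activations, optimizers, schedulers):
    model = build_resnet18(activation=activations[a])  # hypothetical helper
    opt = optimizers[o](model.parameters(), lr=1e-3)
    sched = schedulers[s](opt)
    # train_and_evaluate is a hypothetical stand-in for the training loop.
    results[(a, o, s)] = train_and_evaluate(model, opt, sched)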
RESULT

• We ran 12 ResNet-18 models, each configured with a unique combination of learning rate
scheduler, optimizer, and activation function, to evaluate their performance. To compare the
results effectively, we created a detailed table listing each model's configuration and its
corresponding training and testing accuracy. This comprehensive comparison allows us to identify
which combination of hyperparameters yields the highest testing accuracy, providing insights into
the most effective strategies for optimizing ResNet-18.
From these results, the ResNet-18 model with Swish as the activation function, Adam as the
optimizer, and CosineAnnealingWarmRestarts as the learning rate scheduler achieves the highest
testing accuracy at 75.12%, with a training accuracy of 93.34%.
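
For reference, the sketch below wires up that best-performing combination in PyTorch (Swish is exposed as nn.SiLU); the learning rate and restart period are illustrative assumptions, not values from the report.

import torch.nn as nn
import torch.optim as optim
from torchvision import models

model = models.resnet18(weights=None)
model.fc = nn.Linear(model.fc.in_features, 2)  # fire / non-fire head

# Recursively swap every ReLU in the network for Swish (SiLU).
def replace_relu(module):
    for name, child in module.named_children():
        if isinstance(child, nn.ReLU):
            setattr(module, name, nn.SiLU())
        else:
            replace_relu(child)

replace_relu(model)

optimizer = optim.Adam(model.parameters(), lr=1e-3)
scheduler = optim.lr_scheduler.CosineAnnealingWarmRestarts(optimizer, T_0=10)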
Here are the accuracy and loss graphs of the modified ResNet-18 model:
We classified 10 random images from the testing dataset as either fire or non-fire; some
predictions were incorrect, which is consistent with the 75.12% testing accuracy. The results
are shown below.
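
A minimal spot-check of this kind could look like the following sketch; model and transform are assumed to be defined as above, and test_paths is a hypothetical list of file paths for the custom test images.

import random
import torch
from PIL import Image

classes = ["fire", "non_fire"]  # assumed index-to-label mapping
model.eval()
with torch.no_grad():
    for path in random.sample(test_paths, 10):  # test_paths is hypothetical
        img = transform(Image.open(path).convert("RGB")).unsqueeze(0)
        pred = model(img).argmax(dim=1).item()
        print(f"{path}: {classes[pred]}")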
CONCLUSION

Our study focused on how activation functions, optimizers, and learning rate schedulers
impact fire detection models. We experimented with different combinations of these
factors, particularly the ReLU, ELU, and Swish activations. Our findings show that these
choices significantly influence model accuracy and effectiveness, with varied impacts on
computational efficiency and convergence that underscore the crucial role of the activation
function. This study lays the groundwork for further research and optimization in fire
detection, aiming to enhance safety with highly reliable systems.
THANK YOU…!!
