
Intelligent Fire-Fighting Robot with Deep Learning

Mohamed Imran
Mechanical Engineering
Sri Sairam Engineering College
Chennai, India
mimran.ias@gmail.com

Reshmaja K Ramesh
Electronics and Communication Engineering
Sri Sairam Engineering College
Chennai, India
reshmajakramesh@ieee.org

S. S Abineshwar
Mechanical Engineering
Sri Sairam Engineering College
Chennai, India
e8mel89@sairamtap.edu.in

V. Pandyaraj
Mechanical Engineering
Sri Sairam Engineering College
Chennai, India
pandyaraj.mech@sairam.edu.in

Abstract-An accidental fire is a mishap that can be man-made or natural. Accidental fires occur frequently and can usually be controlled, but they sometimes result in severe loss of life and property. To prevent such deadly events, an autonomous fire-fighting robot is introduced. The bot is activated as soon as the information about the fire accident reaches it. Once the location is fixed with the Google API, waypoints can be marked through the mission planner with the help of the GPS module. The bot autonomously reaches the destination using computer vision technologies. Lane, obstacle, and traffic light detection are executed to enable the autonomous drive, with the YOLOv5 architecture playing a major role in the detection methodologies. As soon as the bot reaches the destination, the fire and heat sensors detect the fire and spray water to douse it through a nozzle that can rotate about 180°. Initially, only one bot is sent for rescue purposes; if it cannot satisfy the needs and requirements, the GSM module sends a message reporting the shortfall so that more bots can reach the destination and resolve the problem. To protect the electronic components, the bot is thermally insulated with a special alloy of stainless steel. The proposed system will help reduce the burden on firefighters as well as the manual errors caused during firefighting; thus, it can be owned by private companies as well as the government.

Keywords-Arduino UNO; GSM Module; GPS Module; Fire Sensor; Raspberry Pi 4

I. INTRODUCTION

Fire accidents can be man-made or natural. They cause great destruction to livelihoods. According to a survey, it is estimated that around 20,000 people die due to fire accidents every year. The National Fire Protection Association reports that fire accidents cause damage of around $7 billion annually, with catastrophic and devastating effects on property. To avoid such destruction, firefighters need to risk their lives to save people's lives and property. Safeguarding people who risk their lives to save others is very crucial, yet it is estimated that around 30% of firefighters get hurt during firefights.

The RS1-T1 Thermite, invented in 2012 by Howe & Howe, was the first commercial firefighting robot. The Thermite robot set a trend for firefighting robots and was widely used in the US military [13]. Howe & Howe extended the technology with the RS1-T2, RS1-T3, and RS1-T4, but the model most popular among military troops is the Thermite RS3. From 2019 onwards, firefighting robots became more common and advanced, because this was the time of the emergence of popular technologies like computer vision, deep learning, and machine learning. The Intel Research initiative launched the OpenCV project in 1999; it advanced a series of projects including real-time ray tracing, 3D display walls, and mask detection, in which CPU-intensive applications were also a part. Intelligent robots can not only detect fire but also classify fire and douse it with the help of deep learning [14]. Sensors like proximity, vision, and IR sensors help to build a complete fire detection model [15].

To reduce the burden undergone by firefighters and douse fires quicker, the proposed system is introduced. Hence, fire dousing will happen effectively and efficiently without manual errors. The main function of the proposed system is to reach the destination autonomously and extinguish the fire to prevent great losses.

II. LITERATURE SURVEY

IoT-based firefighting robots have become quite common due to their reliability and low cost. These bots can detect the fire type as well as the carbon-monoxide level, and graphs can be plotted for analysis. Such bots help the authorities communicate with the people at risk with the help of a receiver attached to the bot [1][5]. Additional features like a suction vacuum fan with a cylinder can remove the smoke present at a fire accident location, so that trapped people do not have to suffer from smoke. People who are trapped can be detected using an AI camera and the Viola-Jones face detection algorithm [2]; as an alternative, a PIR sensor can also be used to detect motion [4], but it has many limitations and lower accuracy because the PIR sensor works on the principle of thermal energy generated by motion. Detection of fire can be made possible using different methods; one such method is using a fire sensor.

Autonomous navigation of robots plays a crucial role, equal in importance to fire detection and dousing. The best end-user access to data is through a mobile application; hence, manual control of a robot can be operated through a mobile application [8], and either Bluetooth or WiFi can best serve navigation purposes [6]. Firefighting robots can also be controlled using RF communication. The principal motto of such robots is to douse fire effectively, so, to make sure water is sprayed in all directions, a servo motor can be used [3].

ROS (Robot Operating System) is an advanced technology that can provide an efficient and advanced control unit for the robot. Data collected from the accident location needs to be transferred to the ground station for analysis and for deciding the next action. Cloud databases can store the data so that it can be accessed by anyone around the world [7]. A well-structured and well-designed firefighting robot will have good climbing ability, terrain adaptability, and also gait uplift, angular speed, and forward displacement [9].


The limitation of purely sensor-based robots is that physical sensors have a very short detection range; thus, the rate of failure in their sole purpose is higher. A computerized method of detection has a wider range and is less sensitive to the environment. Thus, computer vision methods such as color segmentation and correlation can be used to detect fire [10]. Thermal imaging using statistical texture also helps to detect fire [11]. An autonomous vehicle can be designed with a few automation features like lane detection, obstacle detection, and traffic light detection along with a web interface [12]. Even ultrasonic sensors or LiDAR can be used for obstacle detection and avoidance. The Raspberry Pi is the most commonly used core processor in any detection model that involves computer vision [16].

III. PROPOSED SYSTEM

Automation is a technology to provide goods and services with minimal human intervention, and in today's world it is utilized in several sectors. As stated in Section I, a firefighting bot is introduced to reduce the risk taken by firefighters. The proposed idea has three modules, namely the GPS Enabling Module, the Navigation Module, and the Fire-Fighting Module. These modules are explained in detail in the following subsections. Figure 1 depicts the 3D design of the proposed system, and Figure 2 shows the flow of the three modules.

Fig. 1. 3D design of the system

Fig. 2. Flow diagram of the proposed system

A. GPS Enabling Module

This is the first module in the workflow. Since the bot is GPS-guided, this module plays a vital role in navigation. The functioning of this module is based on a GPS module (u-blox NEO-6M) and a magnetometer (HMC5883L). In addition, for driving the bot, components like a motor driver (L293D), DC motors, and an Arduino UNO are added. To set the waypoints for the bot, Mission Planner is used. Mission Planner is a community-supported, free application that uses the MAVLink2 protocol, which is compatible with ArduPilot software. Once the waypoints are fixed, the bot starts from the fire station. A detailed explanation of the navigation mechanism is given in the Navigation Module.

The working of the GPS (Global Positioning System) module is based on distance measurements using satellites. With the data received from the satellites, the GPS module marks the longitude and latitude of the object. The magnetometer is an electronic compass that works on the electromagnetic field emitted by the Earth; it is used to make the bot move in the direction pointed to by the GPS module. Figure 3 shows the flow diagram of the GPS enabling module. The bot chooses the path that is shortest and has the least traffic. This feature is supported by the Google Maps SDK, a ready-made package supplied by Google through which the traffic status of the road and the shortest path can be identified. Once the shortest path is identified, the bot proceeds along it so that it can reach the destination quickly. The speed is also determined by the gear mechanism, which is explained in the Navigation Module.

Fig. 3. Flow diagram of the GPS enabling module (Mission Planner waypoints, GPS, and magnetometer providing location and direction to the GPS enabling module)
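The paper does not give code for this module; as an illustrative sketch only, the following Python fragment shows how the bearing from the current GPS fix to the next Mission Planner waypoint could be computed and compared with the HMC5883L compass heading to obtain a steering correction. The coordinate values, function names, and waypoint are assumptions introduced here, not part of the published system.

```python
import math

def bearing_to_waypoint(lat, lon, wp_lat, wp_lon):
    """Initial great-circle bearing (degrees, 0-360) from the bot to a waypoint."""
    phi1, phi2 = math.radians(lat), math.radians(wp_lat)
    dlon = math.radians(wp_lon - lon)
    y = math.sin(dlon) * math.cos(phi2)
    x = math.cos(phi1) * math.sin(phi2) - math.sin(phi1) * math.cos(phi2) * math.cos(dlon)
    return (math.degrees(math.atan2(y, x)) + 360.0) % 360.0

def steering_error(target_bearing, compass_heading):
    """Signed error in degrees (-180..180); positive means the bot should turn right."""
    return (target_bearing - compass_heading + 180.0) % 360.0 - 180.0

# Illustrative values: current GPS fix, magnetometer heading, next waypoint.
lat, lon = 13.0108, 80.2339        # hypothetical position near Chennai
heading = 95.0                     # degrees, as read from the HMC5883L
waypoint = (13.0152, 80.2410)      # hypothetical Mission Planner waypoint

target = bearing_to_waypoint(lat, lon, *waypoint)
error = steering_error(target, heading)
print(f"target bearing {target:.1f} deg, steering error {error:+.1f} deg")
```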

B. Navigation Module

Once the bot starts from the fire station, it has to travel on its own until it reaches the desired destination. For navigation of the bot, lane detection and obstacle detection are used. The model was developed through a machine learning algorithm with an image classification method, MobileNetV2, a CNN (Convolutional Neural Network) based architecture developed by Google that is used in many visual recognition tasks for its improved performance and efficiency. Figure 4 depicts the computer vision model for lane and obstacle detection, and Figure 5 shows the flow diagram of the navigation module.

Fig. 4. Lane and obstacle detection
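MobileNetV2 itself is not shown as code in the paper; the fragment below is a minimal sketch, under the assumption of the Keras/TensorFlow workflow named in Section IV, of how a frozen MobileNetV2 backbone can be given a small classification head for the lane/obstacle classes. The class count, image size, and layer choices are illustrative.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 3          # assumed labels, e.g. straight / left curve / right curve
IMG_SIZE = (224, 224)    # MobileNetV2's default input resolution

# ImageNet-pretrained MobileNetV2 backbone without its original classification head.
base = tf.keras.applications.MobileNetV2(
    input_shape=IMG_SIZE + (3,), include_top=False, weights="imagenet")
base.trainable = False   # transfer learning: keep the convolutional features frozen

model = models.Sequential([
    tf.keras.Input(shape=IMG_SIZE + (3,)),
    layers.Rescaling(1.0 / 127.5, offset=-1.0),  # MobileNetV2 expects inputs in [-1, 1]
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dropout(0.2),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=20)  # datasets built as in 1) Data Collection
```

Freezing the backbone keeps the trainable parameter count small, which is consistent with the embedded deployment on the Raspberry Pi described later.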

Steps to construct the model:

1) Data Collection: The development of the model begins with collecting data to train the model. The dataset for lane detection contains photos of lanes taken at different angles, in different directions, and at different times of day (with respect to sunlight). The external dataset for obstacle detection includes various obstacles that a vehicle would meet during a road journey. It incorporates different types and models of vehicles such as cars, buses, and two-wheelers, along with other obstacles like pedestrians, animals (dogs, cows, cats, etc.), poles, and barricades. The COCO dataset with the YOLO architecture is used for traffic light detection and also for obstacle detection. All the collected data is converted into image folders and log files for the subsequent processing steps.
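The paper names the COCO dataset and the YOLO architecture for traffic light and obstacle detection but gives no interface details. The sketch below assumes the publicly available Ultralytics YOLOv5 hub model and an arbitrary confidence threshold, and shows how such COCO-pretrained detections could be filtered to the object classes listed above.

```python
import torch

# COCO-pretrained YOLOv5 "small" model from the Ultralytics hub.
model = torch.hub.load("ultralytics/yolov5", "yolov5s", pretrained=True)
model.conf = 0.4  # assumed confidence threshold

# COCO class names relevant to the obstacle / traffic-light detector described above.
RELEVANT = {"person", "car", "bus", "motorcycle", "traffic light", "dog", "cow", "cat"}

def detect_obstacles(frame):
    """Return (name, confidence, xyxy box) tuples for the classes of interest."""
    results = model(frame)                       # frame: numpy image array or image path
    hits = []
    for *xyxy, conf, cls in results.xyxy[0].tolist():
        name = model.names[int(cls)]
        if name in RELEVANT:
            hits.append((name, conf, xyxy))
    return hits

# Example: detections = detect_obstacles("road_frame.jpg")
```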

Fig. 5. Flow diagram of the navigation module (image data feeding location identification, data pre-processing, and gear detection; lane, obstacle, and traffic light detection driving direction identification and bot movement)

2) Training: The training process involves three sub-processes:

a) Data Pre-Processing: All the collected data, i.e., the dataset, is pre-processed to get better accuracy and to avoid data distortion. Grayscaling is the method followed here for noise reduction. To avoid unwanted data being stored in the dataset, data scrubbing is performed.
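As a concrete illustration of this pre-processing step (grayscaling for noise reduction plus scrubbing of unreadable files), a minimal OpenCV sketch is given below; the blur kernel, target size, and file path are assumptions.

```python
import cv2

def preprocess(path, size=(224, 224)):
    """Load an image, grayscale it for noise reduction, and resize it for the model."""
    img = cv2.imread(path)
    if img is None:                              # data scrubbing: drop unreadable/corrupt files
        return None
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    gray = cv2.GaussianBlur(gray, (5, 5), 0)     # assumed 5x5 kernel for noise suppression
    return cv2.resize(gray, size)

# Example: frame = preprocess("dataset/lanes/img_001.jpg")
```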
including image resize, network conversion, preprocessing
b) Direction Identification: An inverse binarization method and a pixel summation method are used to detect the lane in the proposed model. Based on a thresholding algorithm, each pixel is classified as either black or white with respect to the chosen threshold value. Here, due to inverse binarization, the lane turns white with a pixel value of 255. As the next step, pixel summation is done for each column and, by comparing adjacent columns, the lane and its curvature can be detected. Figure 6 shows the diagrammatic representation of the pixel summation method. For traffic light detection and obstacle detection, the YOLOv5 model is used.

Fig. 6. Lane detection using the pixel summation method (right curve, straight, left curve)
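The inverse binarization and column-wise pixel summation described above can be sketched as follows; the threshold value and the left/right margin used to call a curve are illustrative assumptions, not tuned values from the paper.

```python
import cv2
import numpy as np

def lane_direction(gray, thresh=120, margin=0.15):
    """Classify a grayscale road frame as 'left', 'right', or 'straight'."""
    # Inverse binarization: lane pixels become white (255), everything else black.
    _, binary = cv2.threshold(gray, thresh, 255, cv2.THRESH_BINARY_INV)

    # Pixel summation per column, then compare the left and right halves of the frame.
    col_sums = binary.sum(axis=0).astype(np.float64)
    mid = col_sums.size // 2
    left, right = col_sums[:mid].sum(), col_sums[mid:].sum()
    total = left + right + 1e-9

    if (left - right) / total > margin:
        return "left"        # lane mass concentrated on the left suggests a left curve
    if (right - left) / total > margin:
        return "right"
    return "straight"

# Example: direction = lane_direction(cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE))
```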

c) Gear Detection: To automate the gear mechanism, an accelerometer, the Google API, and a neural network are used. The accelerometer is a sensor that detects the speed of the cycle. Using the neural network with image processing and the Google Maps SDK, the traffic status of the road is identified. By integrating both parameters and connecting them to the gear shaft, the bicycle gear mechanism can be automated. This information is used to activate a motorized shifter mounted on the rear wheel's hub gear to change the gear ratio. Based on the traffic status, the speed of the bot adjusts automatically.

3) Implementation of the Model: The model works on the captured video. The lane is detected first, along with any obstacles. After this, preprocessing is performed, including image resizing, network conversion, and input preprocessing using MobileNetV2. The message sent to the servo motor via the Raspberry Pi 4 is depicted in Figure 7. If the bot detects any obstacle, it instructs the servo motor to change the direction of the bot. The same method applies to lane curvature too.

Fig. 7. Information sent to the servo motor via Raspberry Pi 4
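The message path from the Raspberry Pi 4 to the steering servo (Figure 7) is described only at block level; the sketch below assumes a serial link to the Arduino UNO that drives the servo and a made-up one-byte command protocol, purely for illustration.

```python
import serial  # pyserial

# Assumed wiring: Raspberry Pi 4 USB <-> Arduino UNO that drives the steering servo.
arduino = serial.Serial("/dev/ttyACM0", 9600, timeout=1)

# Hypothetical one-byte command protocol, not specified in the paper.
COMMANDS = {"left": b"L", "right": b"R", "straight": b"S", "stop": b"X"}

def steer(direction):
    """Send a one-byte steering command for the detected lane/obstacle situation."""
    arduino.write(COMMANDS[direction])

# Example: after the detector reports an obstacle ahead, swerve around it.
# steer("right")
```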

C. Fire-Fighting Module

Once the bot reaches the destination, the flame sensors placed on it are activated if they detect flame. The proposed system comprises multiple flame sensors and heat sensors for better accuracy. Flame detectors consist of an electromagnetic radiation receiver; the flame sensor detects flame through the electromagnetic radiation received at defined wavelengths of the infrared spectrum. The heat sensor works on the principle of temperature readings received through an electric signal.
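Typical flame-sensor breakout boards expose a digital output; as an illustration only, the fragment below polls such an output from a Raspberry Pi with RPi.GPIO. The pin number and the active-low convention are assumptions about the particular sensor board.

```python
import time
import RPi.GPIO as GPIO

FLAME_PIN = 17  # assumed BCM pin wired to the flame sensor's digital output

GPIO.setmode(GPIO.BCM)
GPIO.setup(FLAME_PIN, GPIO.IN)

def flame_detected():
    """Many IR flame-sensor boards pull their digital output LOW when a flame is seen."""
    return GPIO.input(FLAME_PIN) == GPIO.LOW

try:
    while True:
        if flame_detected():
            print("Flame detected - trigger the fire-fighting module")
        time.sleep(0.2)
finally:
    GPIO.cleanup()
```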

So, as soon as a response is generated, the water pump is activated. The water pump sprays water through the nozzle attached at the top of the bot, which is connected through a servo motor. The servo motor helps the nozzle sweep through 180° to spray water. Based on the heat sensor's reading, the pressure of the water being sprayed is adjusted. Initially, only one bot is sent to the targeted location. Since it is a compact bot, it can hold only up to 200 gallons of water. If the water is not sufficient, a single bot cannot douse the fire; in that scenario, the GSM module present in the bot sends an alert message to the fire station, and the fire station then sends a few more robots to solve the issue.

Fire-fighting bots are expected to deal with high temperatures, so the electronic components present in the bot are protected by thermal insulation. The first layer of thermal insulation is a coating of stainless steel; the thermal conductivity of stainless steel is 16.2 W/m·K and its melting point is 1450 °C. The second layer is coated with polystyrene to ensure the safety of the electronic components. Figure 8 shows a picture of the prototype, and Figure 9 shows the flowchart of the fire-fighting module.
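The coordination of the pump, the 180° nozzle sweep, and the GSM alert is given only in prose. The following sketch assumes a relay-driven pump and a hobby servo on Raspberry Pi GPIO pins and a SIM800-style GSM modem driven by AT commands over serial; all pin numbers, the phone number, and the water-sufficiency check are placeholders, not values from the paper.

```python
import time
import serial
import RPi.GPIO as GPIO

PUMP_PIN, SERVO_PIN = 27, 18             # assumed BCM pins: pump relay, nozzle servo
GPIO.setmode(GPIO.BCM)
GPIO.setup(PUMP_PIN, GPIO.OUT)
GPIO.setup(SERVO_PIN, GPIO.OUT)
servo = GPIO.PWM(SERVO_PIN, 50)          # 50 Hz hobby-servo signal
servo.start(7.5)                         # roughly the centre position

gsm = serial.Serial("/dev/ttyS0", 9600, timeout=2)   # assumed GSM modem port

def send_alert(number, text):
    """Send an SMS via standard AT commands (SIM800-style modem assumed)."""
    gsm.write(b"AT+CMGF=1\r")                        # switch the modem to text mode
    time.sleep(0.5)
    gsm.write(f'AT+CMGS="{number}"\r'.encode())
    time.sleep(0.5)
    gsm.write(text.encode() + b"\x1a")               # Ctrl-Z terminates the message

def douse_fire(water_sufficient):
    GPIO.output(PUMP_PIN, GPIO.HIGH)                 # start spraying
    for duty in (2.5, 7.5, 12.5, 7.5):               # sweep the nozzle across roughly 180 degrees
        servo.ChangeDutyCycle(duty)
        time.sleep(1.0)
    GPIO.output(PUMP_PIN, GPIO.LOW)
    if not water_sufficient:                         # e.g. tank level below a threshold
        send_alert("+910000000000", "Bot 1: water exhausted, send backup units")
```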

Fig. 8. Fire-fighting bot prototype

Fig. 9. Flowchart of the fire-fighting module

IV. RESULTS AND DISCUSSION

This section summarizes the results obtained in the various modules of the system. The system comprises three modules: the GPS Enabling Module and the Fire-Fighting Module are implemented using the Arduino UNO and the Arduino IDE software, while the Navigation Module is implemented using Keras, TensorFlow, MobileNet, the serial library, and OpenCV. The list of hardware components used in the proposed system is given in Table I.

TABLE I. HARDWARE COMPONENTS OF THE PROPOSED SYSTEM

S.No  Component
1     GPS module (u-blox NEO-6M)
2     GSM module
3     Magnetometer (HMC5883L)
4     Motor driver (L293D)
5     Raspberry Pi V2 camera
6     Flame sensor

Through computer vision technology, the lane detection and obstacle detection models run with the help of the Raspberry Pi and the Pi camera. The obstacle and lane detector models are accurate, and the MobileNetV2 architecture is computationally efficient, which makes it easier to deploy the model on embedded systems (Raspberry Pi, etc.). The graphical representation of the accuracy and training of the system is depicted in Figure 10, where the x-axis shows the epoch and the y-axis shows the accuracy/loss.

Fig. 10. Training loss and accuracy of the model (accuracy > loss; curves: train_loss, val_loss, train_acc, val_acc)

Through the mission planner, waypoints can be marked. Once the system knows the desired location, it starts from the fire station. In real time, this is achieved with the help of the GPS module, which gives the location, and the magnetometer, which gives the direction for the bot to navigate. Figure 11 depicts the waypoints marked through the mission planner. The bot moves exactly through the same waypoints and reaches the location.

Fig. 11. Waypoints marked through the mission planner

Figure 12 depicts the representation of the Google Maps SDK, which shows the shortest path with respect to time, traffic, and toll gates.

Authorized licensed use limited to: Bahria University. Downloaded on March 26,2024 at 05:02:29 UTC from IEEE Xplore. Restrictions apply.
The bot chooses the path accordingly so that it can reach the destination in the least span of time. Thus, by integrating the Google Maps SDK and the mission planner, the bot works effectively in its navigation process.

As soon as the flame sensor detects a flame, the water pump starts to spray water through the nozzle. As an IR flame sensor is used, accuracy degrades when the sensor is exposed to strong sunlight. If the initial bot cannot douse the whole fire alone, an alert message is sent to the respective fire station. The stainless steel and polystyrene used in the bot are excellent insulators, thus safeguarding the internal environment and the components of the bot. Figure 13 shows the graphical representation of the response of the flame sensor when it is activated and when it is not activated. Since it is an IR-based sensor, it gives a binary output only; the x-axis shows time in seconds and the y-axis shows the digital output.

Fig. 12. Representation of the Google Maps SDK for choosing the shortest path

Fig. 13. Graph plotted when (a) the flame sensor is activated and (b) the flame sensor is not activated

V. CONCLUSION

To conclude, a fire-fighting system with a Pi camera, a fire sensor, a heat sensor, a magnetometer, a Bluetooth module, a GPS module, and a GSM module is successfully developed. In addition, the remote function of the self-driving vehicle helps the bot reach the destination without any human interference. The system is efficient enough to reach the destination accurately since it is trained with a wide dataset. The proposed solution generates accurate results under certain performance limitations. The results rely on both hardware and software, which is a definite and desirable advantage for such systems. Future work plans to conduct demonstrations with various deep learning algorithms and computer vision frameworks along with integrated LiDAR for object detection and lane detection using the Raspberry Pi to achieve a higher frame rate. Since the current system can only detect fire with lesser accuracy because of the usage of IR-based sensors, future enhancement also includes detection and classification of fire using machine vision technology.

REFERENCES

[1] Kanwar, Megha, and L. Agilandeeswari. "IoT based firefighting robot." In 2018 7th International Conference on Reliability, Infocom Technologies and Optimization (Trends and Future Directions) (ICRITO), pp. 718-723. IEEE, 2018.
[2] Uaday, Md Aowrongajab, Md Nazmul Islam Shuzan, Saffan Shanewaze, Rakibol Islam Raktb, and Hasan U. Zaman. "The design of a novel multipurpose firefighting robot with video streaming capability." In 2019 IEEE 5th International Conference for Convergence in Technology (I2CT), pp. 1-5. IEEE, 2019.
[3] Pillai, H. O. C. "Fire Fighting Robot." (2021).
[4] Wu, Changzhong, Fan Ge, Guangchao Shang, Mingpeng Zhao, Guitao Wang, Hengshuai Guo, and Liang Wu. "Design and Development of Intelligent Fire-fighting Robot Based on STM32." In Journal of Physics: Conference Series, vol. 1748, no. 6, p. 062019. IOP Publishing, 2021.
[5] Sabhanayagam, T., T. Senthil Kumar, M. Narendra, and J. Martin Sahayaraj. "Internet Connected Modern Fire Fighting Robot." In Journal of Physics: Conference Series, vol. 1964, no. 4, p. 042088. IOP Publishing, 2021.
[6] Murad, Ahmed, Oguz Bayat, and Hamzah M. Marhoon. "Implementation of rover tank firefighting robot for closed areas based on arduino microcontroller." Indonesian Journal of Electrical Engineering and Computer Science 21, no. 1 (2021): 56-63.
[7] Gan, Zhijian, Guofang Huang, Jing Zhang, Xiaoming Liu, and Chao Shan. "The Control System and Application Based on ROS Elevating FireFighting Robot." In Journal of Physics: Conference Series, vol. 2029, no. 1, p. 012004. IOP Publishing, 2021.
[8] Yashaswini, R., H. V. Mallikarjuna, P. Bharathi, A. Prajwal, and B. Jagadeesh. "Fire Fighting Robot Using Various Wi-Fi Module: A Review." International Journal of Progressive Research in Science and Engineering 2, no. 7 (2021): 62-64.
[9] Guo, Anfu, Tao Jiang, Junjie Li, Yajun Cui, Jin Li, and Zhipeng Chen. "Design of a small wheel-foot hybrid firefighting robot for infrared visual fire recognition." Mechanics Based Design of Structures and Machines (2021): 1-20.
[10] Rangan, M. K., S. M. Rakesh, G. S. P. Sandeep, and C. Sharmila Suttur. "A computer vision-based approach for detection of fire and direction control for enhanced operation of firefighting robot." In 2013 International Conference on Control, Automation, Robotics and Embedded Systems (CARE), pp. 1-6. IEEE, 2013.
[11] Kim, Jong-Hwan, and Brian Y. Lattimer. "Real-time probabilistic classification of fire and smoke using thermal imagery for intelligent firefighting robot." Fire Safety Journal 72 (2015): 40-49.
[12] Hossai, Mohammad Rubaiyat Tanvir, Md Asif Shahjalal, and Nowroz Farhan Nuri. "Design of an IoT based autonomous vehicle with the aid of computer vision." In 2017 International Conference on Electrical, Computer and Communication Engineering (ECCE), pp. 752-756. IEEE, 2017.
[13] Pransky, Joanne. "Geoff Howe, senior vice president, Howe and Howe, Inc., a subsidiary of Textron Systems; co-pioneer of robotic firefighting technologies, including Thermite™ firefighting robots." Industrial Robot: the international journal of robotics research and application (2021).
[14] Dhiman, Amit, Neel Shah, Pranali Adhikari, Sayali Kumbhar, Inderjit Singh Dhanjal, and Ninad Mehendale. "Firefighting robot with deep learning and machine vision." Neural Computing and Applications (2021): 1-9.
[15] Gao, Shang, Zhiyang Zhang, Zihan Zhao, and Mohsin M. Jamali. "Vision and infra-red sensor-based firefighting robot." In 2018 IEEE 61st International Midwest Symposium on Circuits and Systems (MWSCAS), pp. 873-876. IEEE, 2018.
[16] Kumer, SV Aswin, LSP Sairam Nadipalli, P. Kanakaraja, K. Sarat Kumar, and K. Ch Sri Kavya. "Controlling the autonomous vehicle using computer vision and cloud server." Materials Today: Proceedings 37 (2021): 2982-2985.
