
Design and Implementation of an IoT Drowsiness Detection System for Drivers


Fathi KALLEL (fathi.kallel@gmail.com)
Sfax University

Research Article

Keywords: Drowsiness detection, IoT, EAR, CNN, web and mobile applications

Posted Date: August 23rd, 2023

DOI: https://doi.org/10.21203/rs.3.rs-3268242/v1

License: This work is licensed under a Creative Commons Attribution 4.0 International License.

Additional Declarations: No competing interests reported.


Design and Implementation of an IoT Drowsiness Detection System
for Drivers

Fathi KALLEL 1,2
1 Advanced Technologies for Medicine and Signals Laboratory ‘ATMS’, National Engineering School of Sfax, Sfax University, Sfax, Tunisia
2 National School of Electronics and Communications, Sfax University, Sfax, Tunisia

fathi.kallel@enetcom.usf.tn
Abstract
Drowsiness stands as a significant peril to road safety, manifesting as a prominent contributor to severe
injuries, fatalities, and substantial economic ramifications within the realm of road accidents. The presence of
drowsiness substantially diminishes driving performance, fostering a decline in attentiveness and reaction
times. This, in turn, exacerbates the potential for accidents and underscores the criticality of addressing
drowsiness-related issues to mitigate the adverse consequences on road safety. The objective of this research
work is to design and implement an IoT-based intelligent alert system for vehicles, capable of automatically
mitigating the risks associated with drowsy driving. Indeed, we propose a real-time drowsy driver alert system
comprising a hardware part and a software part. The hardware part includes a camera for face image acquisition
and a Raspberry Pi 4 platform for real-time face image processing, eye-blink analysis and drowsiness detection.
detection. The software part includes a web application for drivers’ management and a mobile application for
drowsiness detection and notification management. In fact, once the driver's drowsiness is detected, the system
instantaneously sends all details to a wireless connected real-time database and the mobile application module
issues a warning message, while a Raspberry Pi monitoring system delivers an audible alert to the driver.

Keywords: Drowsiness detection, IoT, EAR, CNN, web and mobile applications

1. Introduction
Driver fatigue is indeed a significant factor in road accidents, leading to numerous mishaps and
fatalities worldwide [1]. The World Health Organization reported approximately 1.4 million deaths
annually due to vehicle crashes. Many accidents occur due to improper driving behaviors, including
driving under the influence of alcohol or while experiencing drowsiness [2, 3]. One of the most
dangerous consequences of driver fatigue is falling asleep at the wheel, which results in a loss of
control over the vehicle [4]. To address this issue, there is a need for smart and intelligent vehicle
systems that employ advanced technology [5]. Indeed, driver drowsiness can be assessed using various
indicators, and combining multiple measurements and predictive algorithms can improve the accuracy of
the detection system.
Several methods and techniques are developed in literature to detect drowsiness for drivers. Face and
Eye Detection are widely used for drowsiness detection using several approaches. Poursadeghiyan et
al. proposed an eye tracking method for monitoring the driver's eye movements and patterns to detect
signs of drowsiness. This included measuring blink frequency and percentage of eye closure measures
to assess the level of drowsiness [6]. Convolutional Neural Network (CNN) technique is proposed by
Jabbar et al. for drowsiness and microsleep detection [2]. A camera is used in this paper to detect a
driver's facial landmarks, which is then fed into a CNN algorithm to determine whether the driver is
drowsy. Considering several datasets, such as without glasses and with glasses in day or night vision,
the classification operation of eye detection is achieved. Therefore, it works well for detecting
drowsiness with high precision using Android modules. Using the Deep CNN method as described
by Sanyal and Chakrabarty [7], eye blink detection and state recognition were done.
For the classification of driver's behaviors using sensors, Saleh et al. developed a long short-term
memory algorithm and a recurrent neural network (RNN). Ed-Doughmi et al. also conducted an
analysis of driver behavior using the RNN algorithm [8]. Their primary focus was on developing a
real-time solution for fatigue detection and roadside accidents prevention. The system utilized
multilayered 3D CNN architecture to analyze drivers' faces, successfully identifying drowsy drivers

with an impressive 92% acceptance rate. Jemai et al. proposed a drowsy alert system based on wavelet
networking [9]. This network tracks eye movements by employing classifying algorithms like the
Wavelet Network Classifier (WNC), which relies on Fast Wavelet Transform (FWT) to make binary
decisions regarding a driver's level of consciousness.
Various physiological aspects, including heart rate, electroencephalography (EEG),
electrocardiography (ECG), electrooculogram (EOG), and electromyogram (EMG), were explored in
different drowsy alert systems. Babaeian et al. explored features extracted from different
physiological parameters using both wavelet transformation and regression methods to detect fatigue
[10]. Budak and colleagues [11] proposed a drowsiness detection system employing EEG technology.
It integrated multiple processing methods like AlexNet, VGGNet and wavelet transform methods.
This comprehensive approaches proficiently assessed the level of drowsiness by analyzing brain
indicator signals (EEG), alongside the input from cameras and sensors. This amalgamation of data
was processed using a machine learning technique, resulting in timely alerts to notify drowsy drivers
and mitigate potential risks.
Hayawi and Waleed introduced a technique to monitor drowsiness by analyzing Heart Rate Variability
(HRV) signals acquired through EEG sensors [12]. Furthermore, they developed an intrusive
approach involving Electrooculography (EOG) to track eyeball movements, resulting in the creation
of a fatigue alert system. This system was furthermore integrated and embedded with an Arduino
controller board and a K Nearest Neighbors (KNN) classifier. This integration aimed at refining the
accuracy of the system's drowsiness detection capabilities.
Song et al. [13] introduced a system designed to recognize driver fatigue by analyzing the
movement of the muscles around the eyes. This analysis was achieved through the utilization of
Electromyography (EMG) sensors and a human-machine interface. Additionally, the system
monitored the closure of eyelids and muscle movements, with EMG sensor signals serving as the
primary input. The data collected through this process was then harnessed by the ESP8266, a
microcontroller, to transmit or supervise drowsiness-related information on the Internet, in line with
the design proposed by Artanto et al. [14].
In their work, Zhongmin et al. [15] put forth an effective application to detect driver fatigue using
facial expressions. Their approach involved leveraging deep learning techniques, specifically
multiblock local binary patterns (MB-LBP), in conjunction with an AdaBoost classifier. This
combined methodology enabled the rapid and precise detection of drowsiness in drivers. However, a
reduction in detection accuracy was noted when drivers wore glasses, prompting the researchers to
explore solutions.
Ma and co-authors [16] developed a system for detecting driving fatigue through the measurement of
EEG signals. This system offered a robust framework for identifying drowsiness, utilizing a deep
learning methodology to ascertain the accuracy of fatigue detection from EEG signals. Notably, the
deep learning process was structured around a principal component analysis network (PCANet),
which preprocessed the EEG data to enhance detection accuracy. While initially evaluated with small
sample sizes and in an offline mode, this system exhibited limitations in accuracy when applied to a
larger population of samples in real-time scenarios. In response, an Internet of Things (IoT) module
was integrated to facilitate testing on a larger scale, both online and offline. This adjustment aimed to
address the accuracy challenges observed when the system was deployed in more complex and
dynamic real-world conditions.
Vitabile et al. [17] developed a drowsiness detection system with minimal intrusion, employing a
field programmable gate array (FPGA). This innovative system centers on the detection of bright
pupils in the eyes, facilitated by an infrared sensor light source integrated within a vehicle.
Remarkably, this approach achieves a retina identification rate of up to 90%, enabling the analysis of
drivers' drowsiness across multiple frames. The system's ability to monitor drowsiness contributes
significantly to the prevention of severe accidents.

Navaneethan et al. [18] took a different route, creating a real-time system that tracks human eyes
using the Cyclone II FPGA. This implementation demonstrates the versatility of FPGA technology
in eye tracking applications, offering potential benefits for various fields, including driver monitoring
and safety.
Jang and Ahn [19] introduced a system designed to detect both alcohol addiction and drowsy drivers,
achieved through the integration of sensors and the Raspberry Pi controller module. This system's
capabilities are further extended by incorporating Internet of Things (IoT) modules, which facilitate
the transmission of messages in response to abnormal driver behavior. The system's vigilance is
enhanced by leveraging a webcam for image processing and the controller unit. In a novel approach,
a process was developed to ensure regular surveillance of facial detection and eye blink patterns,
enabling the prediction of driver drowsiness [20]. This innovative technique expands upon existing
methods that calculate driver fatigue using factors such as eye or facial movements, deep learning,
FPGA-based methodologies, and signals like ECG, EEG, or EOG. The incorporation of IoT-based
techniques adds a layer of intelligent control to address various driver drowsiness concerns. These
techniques automatically trigger alarms, precisely locate potential mishaps, and promptly send
warning messages or emails to the vehicle owner, thereby contributing to enhanced road safety.
The rest of the paper is organized as follows. The methodology and architecture of the proposed
drowsiness detection system are detailed in Section 2, together with a flowchart illustrating the
step-by-step process of drowsy driver detection. Implementation details are described in Section 3,
including a full description and performance comparison of the considered drowsiness detection
techniques. The results obtained from implementing the system, together with a description of both
the developed mobile and web applications, are discussed and analyzed in Section 4. The paper
concludes with a summary of the achieved outcomes, the potential for further improvements or
extensions to the system, and future research directions.
2. Architecture of the proposed drowsiness system
In this paper, we propose a new system to detect driver drowsiness and promptly alert the driver using
various monitoring techniques. The proposed system includes a hardware part and a software part.
The integration of the hardware part of the proposed system involves the use of a Raspberry Pi4
platform, along with a camera for face image acquisition, in order to monitor the driver's face and
detect eye blinking and closure. In fact, advanced image processing algorithms are used to measure
face movement and eye blinks and thereby assess the driver's state; eye blinking is the main cue used
to detect drowsiness. By extracting adequate features, the system can determine the level of driver
fatigue and provide instant alerts through a sonar alarm. The hardware part of the system is wirelessly
connected to a Firebase real-time database designed and implemented to save and store all received
notifications for each driver. These notifications can be consulted and visualized at any time by the
system administrator. Additionally, the software part of our system sends a notification, via a
developed mobile application, to the vehicle owner, who can then take appropriate action to keep the
driver alert through a phone call. We note here that the driver's mobile phone is already connected
to the vehicle Bluetooth speaker. The software part also includes a web application for driver and
account management.
Figure 1 provides an overview of our proposed system, labeled as 'Smart Transport System'. The
system configuration diagram outlines the flow of data and processes involved in detecting driver
drowsiness using camera images. It highlights the importance of facial recognition and blink detection
as key elements in assessing the driver's condition. The automatic notifications serve as a proactive
safety measure to prevent accidents caused by drowsy driving. It's important to consider that the
success of such a system relies on the accuracy and reliability of the algorithms used for face
recognition and drowsiness detection. Additionally, user interface design and the integration of alerts
into the driving experience are critical factors for ensuring the system's effectiveness in real-world
scenarios.

[Figure 1 depicts the ‘Smart Transport System’: a hardware part installed on each driver's vehicle (a Raspberry Pi 4 with a camera for face image acquisition, image processing algorithms for face and drowsiness detection, and a buzzer for sonar notification), connected over WiFi to a Firebase real-time database, and a software part for the vehicle owner (mobile and web applications, with a phone-call notification delivered through the vehicle's Bluetooth speaker via mobile phone tethering).]
Figure 1. Architecture of the proposed drowsiness system

The objectives of our proposed IoT-based drowsiness detection system can be summarized as
follows:

- Drowsy driver detection: using a Raspberry Pi4 model with a camera to monitor the driver's
eye movements and detect signs of drowsiness during driving.
- Firebase real-time database: when drowsiness is detected, the system instantaneously sends
all details to a wirelessly connected Firebase real-time database, where they are stored and
saved for further exploration.
- Mobile application and owner notification: if the drowsiness persists, the system sends a
notification via a mobile application to the vehicle owner, providing them with information
about the driver's condition. As a second, supplementary notification, the vehicle owner can
call the driver, whose mobile phone is already connected to the vehicle Bluetooth speaker.
- Web application: a web application including a dashboard for account creation, driver data
management (adding a new driver, editing a specific driver's data, displaying the list of
drivers, etc.) and charts showing the percentage of drowsiness notifications for each driver.
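As an illustration of the database objective above, the sketch below assembles the alert record that would be pushed to the Firebase real-time database. The field names, the `build_drowsiness_alert` helper, and the commented `firebase-admin` push are assumptions for illustration, not details taken from the paper.

```python
import time

def build_drowsiness_alert(driver_id: str, ear_value: float) -> dict:
    """Assemble the record sent to the real-time database when
    drowsiness is detected (field names are illustrative)."""
    return {
        "driver_id": driver_id,
        "event": "drowsiness_detected",
        "ear": round(ear_value, 3),
        "timestamp": int(time.time()),
    }

# Hypothetical push with the firebase-admin SDK (requires credentials):
# from firebase_admin import credentials, db, initialize_app
# initialize_app(credentials.Certificate("key.json"),
#                {"databaseURL": "https://<project>.firebaseio.com"})
# db.reference("alerts").push(build_drowsiness_alert("driver42", 0.12))
```

Keeping the payload construction separate from the SDK call makes the record easy to test offline, before wiring in the actual database connection.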

3. Implementation of the proposed system
When the camera is successfully integrated with the Raspberry Pi 4 model, it continuously records each
movement of the driver's face. The Raspberry Pi 4 Model B ensures secure processing due to its
operating system and predictable secure shell (SSH) keys. The utilization of SSH host keys enables
secure network communications, preventing unauthorized access and file transfers. To develop the
IoT-based application, several IoT modules such as wireless sensors, Pi camera, and intelligent code
for detecting driver drowsiness are integrated. These modules are effectively integrated with the
Raspberry Pi controller, enabling intelligent control and smart warnings for drowsy drivers. The
system successfully detects potential causes of mishaps and promptly alerts drowsy drivers to avoid
careless driving. The Internet of Things (IoT) plays a crucial role in managing real-time complexities,
particularly in handling complex sensing environments, and it offers a highly flexible platform for
managing multiple connectivities. The IoT module provides a reliable method for capturing images
of the driver's drowsiness and sending alert messages to the vehicle owner, increasing awareness and
safety. Figure 2 describes a flowchart of the drowsy detection system.
The step-by-step method for detecting drowsy drivers in our scenario is summarized as follows:
1. Video recording and frame extraction
2. Face detection
3. Face recognition
4. Eye detection and drowsiness identification

[Figure 2 shows the processing loop: start → video recording using the camera → face detection → face recognition (returning to detection if the face is not recognized) → eye detection → drowsiness identification (returning to eye detection if none) → sonar notification together with Firebase connection and updating → stop.]
Figure 2. A flowchart of the drowsy detection system

3.1. Video recording
The camera is an 8-megapixel video camera which captures continuous video stream in good quality.
Recorded video stream is converted into a number of frames which are forwarded to face detection
step. Images are captured using a camera installed in the vehicle. Access to this camera is obtained
using the ‘cv2.VideoCapture(0)’ function of the OpenCV Python library. OpenCV is used for the
real-time image processing implemented by the computer vision algorithms.

3.2. Face detection and recognition


To accomplish face recognition, the system leverages the Haar Cascade technique within the OpenCV
framework. This technique exploits patterns of light and shadow to discern human faces. In the
context of drivers' faces, the eyes typically appear darker than the surrounding features, such as the
nose. This distinction is utilized to extract facial information from black-and-white images. The
extracted face information is then subjected to recognition using the Haar Cascade technique within
OpenCV [21].

It's worth noting that this study specifically applies the Haar Cascade technique to identify human
eye features, contributing to the overall process of detecting driver drowsiness. Figure 3 illustrates
two examples for face detection and recognition using Haar cascade technique.

Figure 3. Face detection and recognition using Haar cascade technique (a) known user (already
included in the database) (b) unknown user

3.3. Eye blink detection techniques


Our research work aims at detecting whether a driver is drowsy or active while driving, based on
whether both eyes are closed (representing drowsiness) or open. Both the Eye Aspect Ratio and CNN
techniques are considered in our work for eye-blink detection.

3.3.1. Eye Aspect Ratio algorithm (EAR)


The Eye Aspect Ratio (EAR) serves as a numerical representation particularly sensitive to the act of
opening and closing the eyes [22]. Throughout the blinking process, the EAR value experiences swift
increments or significant reductions. A noteworthy observation regarding its robustness emerged from
the utilization of EAR to identify blinks. To calculate the EAR value, six specific coordinates
surrounding the eyes (as depicted in Figure 4) are firstly computed.

Figure 4. Open and closed eyes with facial landmarks (P1, P2, P3, P4, P5, P6). (a) Open eye. (b)
Close eye.

These coordinates are therefore incorporated into Equation (1) [23, 24]. This computation captures
crucial eye-related measurements, forming the foundation for our drowsiness detection mechanism.
EAR = (‖P2 − P6‖ + ‖P3 − P5‖) / (2 ‖P1 − P4‖)    (1)

In our approach, each frame from the video stream is utilized to compute the EAR. The EAR value
experiences a drop when the user closes their eyes and returns to a baseline level upon reopening.
This characteristic enables the detection of both blinks and instances of eye opening. Importantly, the
EAR formula's insensitivity to face orientation and distance between the face and observer allows for
effective face detection even from a considerable distance.
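Equation (1) translates directly into code once the six landmark coordinates are available (the function name is illustrative; landmarks would come from a facial-landmark detector in the real pipeline):

```python
from math import dist  # Euclidean distance, Python 3.8+

def eye_aspect_ratio(p1, p2, p3, p4, p5, p6):
    """EAR from the six eye landmarks of Figure 4, per Equation (1):
    the two vertical distances over twice the horizontal distance."""
    return (dist(p2, p6) + dist(p3, p5)) / (2.0 * dist(p1, p4))
```

An open eye gives a high ratio, while closing the eye collapses the vertical distances and drives the EAR toward zero, which is what makes the threshold test work.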

Indeed, we employed an EAR threshold value to detect rapid EAR value changes induced by blinking.
This approach was based on insights from previous research and was chosen to accommodate a
broader spectrum of individuals, thereby enhancing the system's practicality and adaptability to real-
world conditions. Past research has often adopted a fixed EAR threshold to discern blinking instances.
However, this approach proves challenging when applied across diverse individuals, given the
variations in appearance and characteristics like natural eye openness, which were addressed in this
study. Dewi et al. [25] employed a variable EAR threshold strategy to systematically categorize
different types of blinks (0.18, 0.2, 0.225, 0.25). Experimental results indicated the best EAR
threshold was 0.18. This value provided the best accuracy and Area Under the Curve (AUC) values
in all experiments. Hence, 0.25 is the worst EAR threshold value because it obtained the minimum
accuracy and AUC values.

In our work, we considered a fixed EAR threshold equal to 0.18. The predictions of the model on
images of eyes are shown in Figure 5.

Figure 5. Prediction of the EAR model (a) Open eye (b) Closed eye

3.3.2. CNN architecture


A Convolutional Neural Network (CNN) is a computational model that efficiently analyzes image
features by employing specific convolution and pooling operations. It functions as a classification
framework, categorizing images into predefined classes. The layers within a CNN progressively
extract image features and learn to identify objects within images. Consequently, the output of a CNN
typically indicates the classes or labels that the network has been trained to recognize.

Malhotra [26] proposed the implementation of a custom-designed Convolutional Neural Network that
possesses the following characteristics:
• Three Convolution Blocks: These blocks consist of 2, 3, and 3 convolutional layers,
respectively.
• Batch Normalization Layer: This layer follows each Convolution Layer to stabilize training
and accelerate convergence.
• Dropout Layer and MaxPool Layer: Each Convolution Block is followed by a Dropout Layer
to prevent overfitting, along with a MaxPool Layer for subsampling.

• Three Fully Connected Layers: These layers come after the convolutional layers, contributing
to the final classification process.

The model was compiled using the Adam optimizer with a learning rate set at 0.0001. The training
process encompasses a total of 200 epochs, utilizing a batch size of 128. To enhance performance,
the training images are randomized using the ImageDataGenerator. In summary, this approach
utilizes a custom-designed CNN architecture with strategically integrated layers and optimization
techniques to effectively classify images and mitigate overfitting concerns during the training process.

Figure 6 illustrates a program section of the CNN model training.

Figure 6. Program section of the CNN model training
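The architecture described above can be sketched in Keras as follows. The three convolution blocks (2, 3 and 3 conv layers), BatchNorm after every conv, Dropout and MaxPool after every block, three fully connected layers, and the Adam optimizer at 0.0001 follow the text; the filter counts, dense-layer widths and dropout rate are assumptions not stated in the paper:

```python
from tensorflow.keras import layers, models, optimizers

def build_cnn(input_shape=(32, 32, 3), n_classes=2):
    """Sketch of the custom CNN: three convolution blocks with 2, 3 and 3
    conv layers, BatchNorm after each conv, Dropout + MaxPool per block,
    then three fully connected layers."""
    model = models.Sequential([layers.Input(shape=input_shape)])
    for n_convs, filters in [(2, 32), (3, 64), (3, 128)]:
        for _ in range(n_convs):
            model.add(layers.Conv2D(filters, 3, padding="same", activation="relu"))
            model.add(layers.BatchNormalization())
        model.add(layers.MaxPooling2D())
        model.add(layers.Dropout(0.25))
    model.add(layers.Flatten())
    model.add(layers.Dense(128, activation="relu"))
    model.add(layers.Dense(64, activation="relu"))
    model.add(layers.Dense(n_classes, activation="softmax"))
    model.compile(optimizer=optimizers.Adam(learning_rate=1e-4),
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```

Training would then call `model.fit` for 200 epochs with a batch size of 128 on batches produced by an `ImageDataGenerator`, as described above.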


3.3.3. Material
The project relies on the utilization of the Drowsiness dataset accessible on the Kaggle platform
(https://www.kaggle.com/datasets/dheerajperumandla/drowsiness-dataset). The original dataset
initially encompassed four distinct classes, categorizing images into Open Eyes, Closed Eyes,
Yawning, or No-Yawning. However, the primary focus of this project is to differentiate drowsiness
based on whether the eyes are closed or open. Consequently, only two classes from the dataset will
be utilized. This modified dataset comprises a total of 1452 images, equally distributed across two
categories. Each category is composed of 726 images. Notably, the dataset is inherently balanced,
obviating the need for additional balancing procedures. The class labels are designated as 'Open Eye'
(encoded as 0) and 'Closed Eye' (encoded as 1).
A preprocessing phase has been incorporated to ensure uniformity in image dimensions, thereby
resizing each image to a common size of (32, 32, 3). Subsequently, the dataset is divided into two
subsets: a Training Set and a Test Set, distributed at an 80%-20% ratio. The model's predictions on
eye images, shown in Figure 7, demonstrate its ability to discern between open and closed eyes,
showcasing its potential efficacy in detecting drowsiness based on eye states.


Figure 7. Prediction of the CNN model (a) open eye (b) closed eye

3.3.4. Performance evaluation
To evaluate the performance of our proposed models, we first considered both the Accuracy and Loss
metrics (prediction errors). Accuracy is calculated using Equation (2):

Accuracy = (TP + TN) / (TP + TN + FP + FN)    (2)

where TP, TN, FP and FN are respectively the True Positive, True Negative, False Positive and False
Negative.

Figure 8 displays the variation of Loss and Accuracy according to the number of Epochs.

Figure 8. Performance evaluation of CNN model (a) Accuracy vs Number of Epochs. (b) Loss vs
Number of Epochs.

Figure 9 illustrates the confusion matrix for the considered CNN model, arranged as [[TP, FN], [FP, TN]].

Figure 9. Confusion matrix for the considered CNN model.

To compare the performance of the two considered classification algorithms (EAR and CNN), in
addition to the accuracy, we considered the precision, recall and F1-score evaluation metrics,
calculated using Equations (3), (4) and (5), respectively:

Precision = TP / (TP + FP)    (3)

Recall = TP / (TP + FN)    (4)

F1-Score = 2 × (Precision × Recall) / (Precision + Recall)    (5)
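Under the definitions above, all four metrics follow directly from the confusion-matrix counts; a small helper makes the computation explicit:

```python
def classification_metrics(tp, tn, fp, fn):
    """Accuracy, precision, recall and F1 from Equations (2)-(5)."""
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return accuracy, precision, recall, f1
```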

Table 1 summarizes the obtained results for the different evaluation metrics of both considered
algorithms.

Table 1. Performance evaluation and comparison of the EAR and CNN algorithms

Metric               | EAR  | CNN
---------------------|------|-----
Accuracy             | 0.98 | 0.99
Precision            | 0.97 | 0.99
Recall               | 0.98 | 0.98
F1-score             | 0.99 | 0.99
Processing time (µs) | 2.46 | 4.51

Experimental results show that both the EAR and CNN algorithms give good classification
performance, with a slight superiority for the CNN model. Both models were implemented in our
Raspberry Pi environment, where real-time operation is required. Since the Raspberry Pi platform
used in this study is limited in computational performance, we chose the EAR model, which is less
time consuming and able to perform real-time drowsiness detection.

3.4. Drowsiness detection


Eye closure pertains to the count of frames in which closed eyes are detected within a specified time
interval. As outlined in [27], a driver is classified as drowsy when the number of consecutive frames
depicting closed eyes surpasses 45 frames, equivalent to approximately 1.5 seconds. The detection of
eye blinking occurs when the eye's state transitions from open to closed.

Based on existing literature, it is established that an average individual blinks approximately ten times
per minute [27]. Therefore, if the frequency of eye blinking falls below this norm, the driver is deemed
drowsy. This approach emphasizes the significance of monitoring both the frequency of eye blinking
and the duration of consecutive eye-closed frames to accurately identify drowsiness in drivers.

When a driver's eyes close involuntarily, the calculated Eye Aspect Ratio (EAR) falls below a
predefined threshold. A second, frame-count threshold defines the drowsy eye-blink pattern: it
represents the number of consecutive video frames capturing the driver's closed eyes. If the count of
consecutive frames surpasses this threshold value, the system identifies the driver as drowsy. The
camera continuously monitors eye movement, the EAR value is computed for each frame, and a
counter integrated into the system keeps track of consecutive closed-eye frames. When the frame
count exceeds a predefined value, set to 35 in our case, the system triggers an audio alert through a
buzzer and sends an automated notification to an authorized individual (administrator) associated
with the vehicle via the mobile application. This proactive approach ensures a timely response to
drowsiness detection instances. The entire process is orchestrated by a Raspberry Pi 4 programmed
in the Python programming language.
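The counting logic of this section can be sketched as follows. The 0.18 EAR threshold and 35-frame limit are the values adopted above; the factory-function structure and names are illustrative:

```python
EAR_THRESHOLD = 0.18   # fixed EAR threshold adopted in this work
CONSEC_FRAMES = 35     # consecutive closed-eye frames before alerting

def make_drowsiness_monitor(threshold=EAR_THRESHOLD, limit=CONSEC_FRAMES):
    """Return a per-frame callable that counts consecutive low-EAR frames
    and reports True once the count exceeds the limit (alert condition)."""
    counter = 0
    def update(ear: float) -> bool:
        nonlocal counter
        # Grow the streak while the eyes read as closed; reset otherwise.
        counter = counter + 1 if ear < threshold else 0
        return counter > limit
    return update

# In the real system a True result would trigger the buzzer and the
# Firebase/mobile notification; those side effects are omitted here.
```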

4. Mobile and Web applications


The software part of our developed Smart Transport System includes two main applications. The first
is a web application dedicated to the vehicle owner and/or the manager/administrator of the system.
This web application allows the administrator to monitor the status of drivers and manage their
profiles. It includes several features such as adding, modifying or deleting drivers. An authentication
step is always required, using the login and password defined during the registration process or an
existing Google account.

Figure 10 describes a flowchart of the developed web application.

[Figure 10 shows the web application flow: an existing member enters an email and password (with a reset-password path after failed credentials) or logs in via a Google account; a new user fills in the registration form or exits; once logged in successfully, the administrator reaches the driver status and profile management screens.]
Figure 10. Flowchart of the developed web application

Figure 11 illustrates some screens of the developed web application.



Figure 11. Developed ‘Smart Transport’ web application (a) main screen (b) registration screen for
new user (c) reset password screen (d) Dashboard screen (e) Drivers management screen (f) Add
new driver screen (g) Update existing driver screen (h) Delete existing driver screen (i) Drowsiness
historical screen

The second software part of our developed Smart Transport application is a mobile application. It is
developed to allow the manager to supervise the condition of the drivers and receive notifications in
case of emergency, so that he can contact the driver and recommend a break. All tasks are
synchronized in real time. Figure 12 describes a flowchart of the developed mobile application.

[Figure 12 shows the mobile application flow: login with email and password (with registration and reset-password paths); once logged in, the dashboard is displayed and drivers are managed; when drowsiness is detected, the sonar notification fires, the Firebase database is updated, and a real-time phone-call notification is issued.]
Figure 12. Flowchart of the developed mobile application

Figure 13 illustrates some screens of the developed mobile application.



Figure 13. Developed ‘Smart Transport’ mobile application (a) main screen (b) registration of new
administrator and confirmation email screens (c) Database updating with new administrator screen
(d) Dashboard screen (e) Drowsiness notifications percentages screen (f) Database updating with
new drowsiness alert screen (g) Drowsiness notification alert screen (h) Real time phone call
notification screen (i) Notifications historical screen for a particular driver

Note that two databases are considered in our case. The first, denoted ‘administrator database’ and shown in figure 10-c, stores and manages all users who are allowed to use and manage the mobile application. Management of the mobile application covers several operations: adding, updating, and removing drivers; receiving drowsiness notification alerts; consulting the notification history of each driver through a dedicated dashboard; and managing the real-time phone call notifications. The second database, denoted ‘drowsiness alert database’ and shown in figure 10-f, stores and manages all drowsiness alerts for all drivers.
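Each entry in the drowsiness alert database holds the details of one alert. As a minimal sketch of what such a record might look like before being pushed to the Firebase real-time database (the field names and values below are assumptions, not the paper's actual schema):

```python
import json
from datetime import datetime, timezone

def build_alert_record(driver_id: str, ear_value: float) -> dict:
    """Build one drowsiness-alert record (hypothetical field names)."""
    return {
        "driver_id": driver_id,
        "ear": round(ear_value, 3),          # eye aspect ratio at detection time
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "acknowledged": False,               # set True once the manager reacts
    }

record = build_alert_record("driver_42", 0.25)
payload = json.dumps(record)  # JSON body, e.g. for a POST to the Firebase REST API
```

Keeping the record flat and JSON-serializable makes it straightforward to store under a per-driver path and to aggregate later for the dashboard's notification percentages.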

5. Conclusion and Future Works
This study addresses a key issue in traffic safety: developing a reliable and user-acceptable drowsiness detection system. A camera monitors the facial features of the driver, and a Raspberry Pi hardware platform performs real-time facial feature processing and drowsiness detection. The Haar Cascade technique is applied to detect human faces. Both the Eye Aspect Ratio and CNN techniques are considered in our work for drowsiness detection; their performances are evaluated and compared using different evaluation metrics on a specific public dataset. When drowsiness is detected, our proposed system instantly sends all details over a wireless connection to a dedicated Firebase real-time database, where they are stored for further exploration. If the drowsiness persists, the system sends a notification via a mobile application to the vehicle owner, providing information about the driver's condition. As a second, supplementary notification, the vehicle owner can call the driver, whose mobile phone is already connected to the vehicle's Bluetooth speaker. A web application was also developed; it includes a dashboard for creating accounts, managing drivers' data, and displaying charts such as the percentage of drowsiness notifications for each driver.
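For reference, the Eye Aspect Ratio mentioned above can be computed from six eye landmarks as EAR = (|p2−p6| + |p3−p5|) / (2·|p1−p4|). The sketch below assumes the common dlib 6-point eye landmark ordering; the toy coordinates are illustrative, not real landmark data.

```python
from math import dist

def eye_aspect_ratio(eye):
    """EAR = (|p2-p6| + |p3-p5|) / (2 * |p1-p4|) for six (x, y) eye
    landmarks in the usual dlib ordering (assumed here); the ratio
    drops toward zero as the eyelids close."""
    p1, p2, p3, p4, p5, p6 = eye
    return (dist(p2, p6) + dist(p3, p5)) / (2.0 * dist(p1, p4))

open_eye = [(0, 0), (1, 2), (3, 2), (4, 0), (3, -2), (1, -2)]            # wide open
closed_eye = [(0, 0), (1, 0.2), (3, 0.2), (4, 0), (3, -0.2), (1, -0.2)]  # nearly shut
```

In practice an eye is typically counted as closed when the EAR falls below a fixed threshold for several consecutive frames, which is what turns per-frame ratios into a drowsiness decision.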

Competing interests: No conflict of interest to report.

References

1. Jeong, Y.S.; Yon, Y.H.; Ku, Y.J.H. Hash-chain-based IoT authentication scheme suitable for small and
medium enterprises. J. Converg. Inf. Technol. 2017, 7, 105–111.

2. Azmat, M.; Kummer, S.; Moura, L.T.; Gennaro, F.D.; Moser, R. Future Outlook of Highway Operations
with Implementation of Innovative Technologies Like AV, CV, IoT and Big Data. Logistics 2019, 3, 15.

3. Wintersberger, S.; Azmat, M.; Kummer, S. Are We Ready to Ride Autonomous Vehicles? A Pilot Study on
Austrian Consumers’ Perspective. Logistics 2019, 3, 20.

4. Kim, J.H.; Kim, W.Y. Eye Detection in Facial Images Using Zernike Moments with SVM. ETRI J. 2008,
30, 335–337.

5. Azmat, M.; Kummer, S. Potential applications of unmanned ground and aerial vehicles to mitigate
challenges of transport and logistics-related critical success factors in the humanitarian supply chain. AJSSR
2020, 5, 3.

6. Poursadeghiyan, M.; Mazlaumi, A.; Saraji, N.; et al. Determination the Levels of Subjective and Observer
Rating of Drowsiness and Their Associations with Facial Dynamic Changes. Iran J Public Health 2017, 46(1),
93–102.

7. R. Sanyal and K. Chakrabarty, “Two stream deep convolutional neural network for eye state recognition and
blink detection,” in 2019 3rd International Conference on Electronics, Materials Engineering & Nano-
Technology (IEMENTech), pp. 1–8, Kolkata, India, 2019.

8. Y. Ed-Doughmi, N. Idrissi, and Y. Hbali, “Real-time system for driver fatigue detection based on a recurrent
neuronal network,” Journal of Imaging, vol. 6, no. 3, p. 8, 2020.

9. O. Jemai, I. Teyeb, and T. Bouchrika, “A novel approach for drowsy driver detection using eyes recognition
system based on wavelet network,” International Journal of Recent Contributions from Engineering, Science
& IT, vol. 1, no. 1, pp. 46–52, 2013.

10. M. Babaeian, K. A. Francis, K. Dajani, and M. Mozumdar, “Real-time driver drowsiness detection using
wavelet transform and ensemble logistic regression,” International Journal of Intelligent Transportation
Systems Research, vol. 17, no. 3, pp. 212–222, 2019.

11. U. Budak, V. Bajaj, Y. Akbulut, O. Atila, and A. Sengur, “An effective hybrid model for EEG-based
drowsiness detection,” IEEE Sensors Journal, vol. 19, no. 17, pp. 7624–7631, 2019.

12. A. A. Hayawi and J. Waleed, “Driver's drowsiness monitoring and alarming auto-system based on EOG
signals,” in 2019 2nd International Conference on Engineering Technology and its Applications (IICETA), pp.
214–218, IEEE, 2019.

13. M. S. Song, S. G. Kang, K. T. Lee, and J. Kim, “Wireless, skin-mountable EMG sensor for human–machine
interface application,” Micromachines, vol. 10, no. 12, p. 879, 2019.

14. D. Artanto, M. P. Sulistyanto, I. D. Pranowo, and E. E. Pramesta, “Drowsiness detection system based on
eye-closure using a low-cost EMG and ESP 8266,” in 2017 2nd International conferences on Information
Technology, Information Systems and Electrical Engineering (ICITISEE), pp. 235–238, IEEE, 2017.

15. Zhongmin Liu, Yuxi Peng, Wenjin Hu, Driver fatigue detection based on deeply-learned facial expression
representation, Journal of Visual Communication and Image Representation, Volume 71,
2020, 102723, ISSN 1047-3203, https://doi.org/10.1016/j.jvcir.2019.102723.

16. Y. Ma, B. Chen, R. Li et al., “Driving fatigue detection from EEG using a modified PCANet method,”
Computational Intelligence and Neuroscience, vol. 2019, pp. 1–9, 2019.

17. S. Vitabile, A. De Paola, and F. Sorbello, “A real-time non-intrusive fpga-based drowsiness detection
system,” Journal of Ambient Intelligence and Humanized Computing, vol. 2, no. 4, pp. 251–262, 2011.

18. S. Navaneethan, N. Nandhagopal, and V. Nivedita, “An FPGA-based real-time human eye pupil detection
system using E2V smart camera,” Journal of Computational and Theoretical Nanoscience, vol. 16, no. 2, pp.
649–654, 2019.

19. S. W. Jang and B. Ahn, “Implementation of detection system for drowsy driving prevention using image
recognition and iot,” Sustainability, vol. 12, no. 7, p. 3037, 2020.

20. D. Singh and B. K. Pattanayak, “Markovian model analysis for energy harvesting nodes in a modified
opportunistic routing protocol,” International Journal of Electronics, vol. 107, no. 12, pp. 1963–1984, 2020.

21. Dixon, C. Unobtrusive eyelid closure and visual point of regard measurement system. In Proceedings of the
Conference on Ocular Measures of Driver Alertness, Washington, DC, USA, 26–27 April 1999.

22. Rakshita, R. Communication Through Real-Time Video Oculography Using Face Landmark Detection. In
Proceedings of the International Conference on Inventive Communication and Computational Technologies,
ICICCT 2018, Coimbatore, India, 20–21 April 2018; pp. 1094–1098.

23. You, F.; Li, X.; Gong, Y.; Wang, H.; Li, H. A Real-Time Driving Drowsiness Detection Algorithm with
Individual Differences Consideration. IEEE Access 2019, 7, 179396–179408.

24. Noor, A.Z.M.; Jafar, F.A.; Ibrahim, M.R.; Soid, S.N.M. Fatigue Detection among Operators in Industry
Based on Euclidean Distance Computation Using Python Software. Int. J. Emerg. Trends Eng. Res. 2020, 8,
6375–6379.

25. Dewi, C.; Chen, R.-C.; Chang, C.-W.; Wu, S.-H.; Jiang, X.; Yu, H. Eye Aspect Ratio for Real-Time
Drowsiness Detection to Improve Driver Safety. Electronics 2022, 11, 3183.
https://doi.org/10.3390/electronics11193183.

26. https://medium.com/ai-techsystems/driver-drowsiness-detection-using-cnn-ac66863718d

27. Theresia, N.; Pasaribu, Br.; Prijono, A.; Ratnadewi, R.; Adhie, R.; Felix, J. “Drowsiness Detection
According to the Number of Blinking Eyes Specified from Eye Aspect Ratio Value Modification”, Proceeding
of the 1st International Conference on Life, Innovation, Change and Knowledge, 2018.

