Batch 9 Report Chapter (C)


CHAPTER 1

INTRODUCTION

1.1 OVERVIEW

Safe drive and deep behavior analysis and detection using learning
methods encompass a set of techniques aimed at improving road safety by
leveraging advanced technologies, particularly machine learning and deep
learning algorithms. Safe drive systems typically involve the integration of
sensors, cameras, and other data-gathering devices within vehicles to
monitor various aspects of driving behavior and the surrounding
environment. These systems often employ real-time analysis to detect
potential hazards such as lane departures, collisions, or drowsy driving.
Deep behavior analysis involves the use of deep learning algorithms to
analyze and understand complex patterns of human behavior, particularly in
the context of driving. By processing large amounts of data collected from
sensors and cameras, deep learning models can extract meaningful insights
into driver behavior, such as detecting aggressive driving, distracted driving,
or fatigue. Detection using learning methods refers to the application of
machine learning and deep learning techniques to identify and classify
various driving behaviors and potential safety risks. This includes the
development of predictive models that can anticipate dangerous situations
on the road based on historical data and real-time inputs. The acceptance
ratio in this context refers to the degree to which these advanced
technologies are embraced and adopted by both vehicle manufacturers and
consumers. High acceptance ratios indicate widespread recognition of the
benefits of safe drive and behavior analysis systems, leading to their
integration into vehicles and widespread usage by drivers.

1.2 OBJECTIVES

The specific objectives of a safe drive and deep behavior analysis and
detection using learning methods project will vary depending on the
specific needs of the organization. However, some common objectives
include:
 Develop a real-time system for analyzing driver behavior using deep
learning techniques.
 Implement an adaptive learning model that continuously refines
behavior analysis based on driving scenarios.
 Enhance road safety by providing timely alerts for potentially unsafe
driving behavior and create a user-friendly environment.

CHAPTER 2

LITERATURE SURVEY

[1] B. K. Dar, M. A. Shah, S. U. Islam, C. Maple, S. Mussadiq, and S. Khan,
“DELAY-AWARE ACCIDENT DETECTION AND RESPONSE SYSTEM
USING FOG COMPUTING,” IEEE Access, 2019

Emergencies, by definition, are unpredictable and rapid response is a
key requirement in emergency management. Globally, a significant number
of deaths occur each year, caused by excessive delays in rescue activities.
Vehicles embedded with sophisticated technologies, along with roads
equipped with advanced infrastructure, can play a vital role in the timely
identification and notification of roadside incidents. However, such
infrastructure and technologically-rich vehicles are rarely available in less
developed countries. Hence, in such countries, low-cost solutions are
required to address the issue. Systems based on the Internet of Things (IoT)
have begun to be used to detect and report roadside incidents. The majority
of the systems designed for this purpose involve the use of the cloud to
compute, manage, and store information. However, the centralization and
remoteness of cloud resources can result in an increased delay that raises
serious concerns about its feasibility in emergency situations; in life-
threatening situations, all delays should be minimized where feasible. To
address the problem of latency, fog computing has emerged as a middleware
paradigm that brings the cloud-like resources closer to end devices. In light
of this, the research proposed here leverages the advantages of sophisticated
features of smartphones and fog computing to propose and develop a low-
cost and delay-aware accident detection and response system, which we term
Emergency Response and Disaster Management System (ERDMS). An

Android application is developed that utilizes smartphone sensors for the
detection of incidents. When an accident is detected, a plan of action is
devised. Initially, a nearby hospital is located using the Global Positioning
System (GPS). The emergency department of the hospital is notified about
the accident that directs an ambulance to the accident site. In addition, the
family contacts of the victim are also informed about the accident. All the
required computation is performed on the nearby available fog nodes.
Moreover, the proposed scheme is simulated using iFogSim to evaluate and
compare the performance using fog nodes and cloud data centers.

[2] Y. Ji, S. Wang, Y. Zhao, J. Wei, and Y. Lu, “FATIGUE STATE
DETECTION BASED ON MULTI-INDEX FUSION AND STATE
RECOGNITION NETWORK,” IEEE Access, vol. 7, pp. 64136–64147, 2019

Fatigued driving detection in complex environments is a challenging
problem. This paper proposes a fatigued driving detection algorithm based
on multi-index fusion and a state recognition network, for further analysis
of driver fatigue states. This study uses a multi-task cascade convolutional
neural network for face detection and facial key point detection, corrects the
face according to the key points of the eye, intercepts a binoculus image to
recognize the eye state, and intercepts a mouth image according to the left
and right corner points to recognize the mouth state. This can improve the
detection accuracy of the driver’s head tilt, deflection, and so on. Next, an
eye state recognition network is constructed for the binoculus image to
identify the eye closure state, and a mouth state recognition network is used
to identify the mouth state. Finally, a fatigue judgment model is established
by combining the two characteristics of the eye state and the mouth state to
further analyze the driver fatigue state. The algorithm achieved 98.42%
detection accuracy on a public eye dataset and achieved 97.93% detection

accuracy on an open mouth dataset. As compared with other existing
algorithms, the proposed algorithm has the advantages of high accuracy and
simple implementation.

[3] J. Hu, L. Xu, X. He, and W. Meng, “ABNORMAL DRIVING
DETECTION BASED ON NORMALIZED DRIVING BEHAVIOR,”
IEEE Trans. Veh. Technol., vol. 66, no. 8, pp. 6645–6652, Aug. 2017.

Abnormal driving behavior may cause serious danger to both the
driver and the public. In this work, we propose to detect abnormal driving
by analyzing normalized driving behavior. Serving as the virtual driver, a
personalized driver model is established for the speed control purpose by
using the locally designed neural network and the real-world Vehicle Test
Data. The driving behavior is normalized by employing the virtual driver to
conduct the speed following task as defined by the standard driving cycle
test, e.g., the FTP-72. Three typical abnormal driving behaviors are
characterized and simulated, namely, the fatigue/drunk, the reckless and the
phone use while driving. An abnormality index is proposed based on the
analysis of normalized driving behaviors and is applied to quantitatively
evaluate the abnormity. Numerical experiments are conducted to verify the
effectiveness of the proposed scheme.

[4] Koichi Fujiwara, Erika Abe, Keisuke Kamata, et al., “HEART RATE
VARIABILITY-BASED DRIVER DROWSINESS DETECTION AND
ITS VALIDATION WITH EEG,” IEEE Trans. Biomed. Eng., vol. 66, no. 6, 2019

Driver drowsiness detection is a key technology that can prevent fatal
car accidents caused by drowsy driving. The present work proposes a driver
drowsiness detection algorithm based on heart rate variability (HRV)
analysis and validates the proposed method by comparing with
electroencephalography (EEG)-based sleep scoring. Methods: Changes in

sleep condition affect the autonomic nervous system and then HRV, which
is defined as an RR interval (RRI) fluctuation on an electrocardiogram trace.
Eight HRV features are monitored for detecting changes in HRV by using
multivariate statistical process control, which is a well-known anomaly
detection method. Result: The performance of the proposed algorithm was
evaluated through an experiment using a driving simulator. In this
experiment, RRI data were measured from 34 participants during driving,
and their sleep onsets were determined based on the EEG data by a sleep
specialist. The validation results of the experimental data with the EEG data
showed that drowsiness was detected in 12 out of 13 pre-N1 episodes prior
to the sleep onsets, and the false positive rate was 1.7 times per hour.
Conclusion: The present work also demonstrates the usefulness of the
framework of HRV-based anomaly detection that was originally proposed
for epileptic seizure prediction. Significance: The proposed method can
contribute to preventing accidents caused by drowsy driving.

[5] S. Mühlbacher-Karrer et al., “A DRIVER STATE DETECTION
SYSTEM—COMBINING A CAPACITIVE HAND DETECTION
SENSOR WITH PHYSIOLOGICAL SENSORS,” IEEE Trans. Instrum. Meas., vol. 66, no. 4, 2017

With respect to automotive safety, the driver plays a crucial role.
Stress level, tiredness, and distraction of the driver are therefore of high
interest. In this paper, a driver state detection system based on cellular neural
networks (CNNs) to monitor the driver's stress level is presented. We
propose to include a capacitive-based wireless hand detection (position and
touch) sensor for a steering wheel utilizing ink-jet printed sensor mats as an
input sensor in order to improve the performance. A driving simulator
platform providing a realistic virtual traffic environment is utilized to
conduct a study with 22 participants for the evaluation of the proposed
system. Each participant is driving in two different scenarios, each

representing one of the two no-stress/stress driver states. A “threefold” cross
validation is applied to evaluate our concept. The subject dependence is
considered carefully by separating the training and testing data.
Furthermore, the CNN approach is benchmarked against other state-of-the-
art machine learning techniques. The results show a significant improvement
combining sensor inputs from different driver inherent domains, giving a
total related detection accuracy of 92%. Besides that, this paper shows that
in case of including the capacitive hand detection sensor, the accuracy
increases by 10%. These findings indicate that adding a subject-independent
sensor, such as the proposed capacitive hand detection sensor, can
significantly improve the detection performance.

[6] K. Fujiwara et al., “EPILEPTIC SEIZURE PREDICTION BASED ON
MULTIVARIATE STATISTICAL PROCESS CONTROL OF
HEART RATE VARIABILITY FEATURES,” IEEE Trans. Biomed. Eng., vol. 63, no. 6, Jun. 2016.

The present study proposes a new epileptic seizure prediction
method through integrating heart rate variability (HRV) analysis and an
anomaly monitoring technique. Methods: Because excessive neuronal
activities in the preictal period of epilepsy affect the autonomic nervous
systems and autonomic nervous function affects HRV, it is assumed that a
seizure can be predicted through monitoring HRV. In the proposed method,
eight HRV features are monitored for predicting seizures by using
multivariate statistical process control, which is a well-known anomaly
monitoring method. Results: We applied the proposed method to the clinical
data collected from 14 patients. In the collected data, 8 patients had a total
of 11 awakening preictal episodes and the total length of interictal episodes
was about 57 h. The application results of the proposed method
demonstrated that seizures in ten out of eleven awakening preictal episodes
could be predicted prior to the seizure onset, that is, its sensitivity was 91%,

and its false positive rate was about 0.7 times per hour. Conclusion: This
study proposed a new HRV-based epileptic seizure prediction method, and
the possibility of realizing an HRV-based epileptic seizure prediction system
was shown. Significance: The proposed method can be used in daily life,
because the heart rate can be measured easily by using a wearable sensor.

CHAPTER 3

EXISTING SYSTEM

According to World Health Organization (WHO) statistics, traffic
accidents cause millions of people to lose their lives every year, and most
fatal accidents are attributed to driver fatigue and carelessness.

To detect the driver's condition, methods based on vision or on
physiological signals can be used. The vision-based method uses cameras
and image processing to monitor the head movement and facial expression
of the driver. However, sensitivity to environmental factors, such as lighting
conditions and items worn by the driver, is a major problem that must be
solved. Furthermore, this method can detect a condition only after it begins
to show on the face.
The method based on physiological signals detects abnormal
conditions by monitoring how these signals change with each condition. It
is not sensitive to environmental factors and allows early detection; thus,
this method has been studied widely. Driver drowsiness can also be detected
automatically using artificial intelligence and visual information. Such a
system detects, tracks, and examines the face and eyes of drivers, and real
in-vehicle images of drivers are used to validate the algorithms. It is a
real-time system that works under different lighting conditions.

The number of accidents has increased due to several factors, one of
the main ones being driver fatigue. Driver sleepiness detection has also been
implemented using a video-based approach; this system is non-invasive and
uses human-related elements. Band power and Empirical Mode
Decomposition methods are used to investigate and extract the signal, and an
SVM (Support Vector Machine) is used to confirm the analysis and to
categorize the driver's state of vigilance.

Another system finds driver drowsiness using Bayesian networks:
features of the interaction between driver and vehicle are extracted to obtain
reliable symptoms of driver drowsiness, giving more suitable and accurate
strategies for designing a drowsy-driver detection system.

Brain and visual activity are also used in drowsiness detection systems.
An electroencephalographic (EEG) channel is used to monitor brain activity,
with diagnostic techniques and fuzzy logic applied in the EEG-based
drowsiness detector. Visual activity is monitored through blink detection and
characterization, and an electrooculography (EOG) channel is used to extract
the blinking features.
DISADVANTAGES
 Research regarding methods that use a nonintrusive wearable
device to classify multiple conditions is limited.
 The system requires the driver to wear an eye-blink sensor frame
while driving.
 Notifications are sent only after the accident has occurred.
 The implemented systems have limited detection and recognition
of vehicle collisions.
 There is a delay in the ambulance reaching the location and
providing medical service, which puts the victim at high risk.

CHAPTER 4

PROPOSED SYSTEM

Driving at night has become a risky situation, with many accidents
and growing concern for transport authorities and the public, especially
because of increasing heavy-vehicle movement. Drivers are forced to drive
with minimal rest, which takes a toll on their driving capability after a few
days of continuous driving, reducing their reflexes and thus causing
accidents. In most accident cases, fatigue is found to be the reason for
nodding off. The term fatigue refers to a combination of symptoms such as
impaired performance and a subjective feeling of drowsiness. Even with the
intensive research that has been performed, the term still does not have a
universally accepted definition. From the viewpoint of individual organ
functionality, there are different kinds of fatigue, such as the following cases:
1) Local physical fatigue (e.g., in a skeletal or ocular muscle).
2) General physical fatigue (following heavy manual labor).
3) Central nervous fatigue (sleepiness).
4) Mental fatigue (not having the energy to do anything).
In this proposed system, we implement face detection using linear
discriminant analysis and also track the eye states with improved accuracy.
If abnormal behavior is found, that is, the driver's eyes are detected as
closed, an alarm signal is raised as a corrective action. The system enters the
analysis stage after locating the driver's head and eyes in the image captured
through the camera. This image is then preprocessed using various image
processing techniques for drowsiness detection. Finally, the system provides
alerts in the form of voice, SMS, and email to the admin along with face
recognition, and it also reduces the vehicle speed using an embedded system
to protect the driver from accidents.

ADVANTAGES

 Accidents due to drowsiness in intelligent transportation can be
avoided.
 Physiological signals are found to be more accurate and consistent
in detecting drivers' drowsiness levels.
 The proposed system is practical and easy to use.
 All signals are measured by one wearable device on the wrist.
 The feature set and the classification method were devised to deal
with the inter-driver variance problem.

4.1 BLOCK DIAGRAM AND EXPLANATION

[Block diagram: the input video from a PC webcam (USB interface) feeds face detection and feature extraction, eye detection, and eye acceptance ratio computation, producing the behavior classification output. An Arduino NANO with its power supply drives the vehicle model: motor driver and DC motor (vehicle slow and OFF), engine ON-OFF push button, parking LED via an LED driver, buzzer via a buzzer driver with indicator, and an LCD, with ECG and alcohol sensor inputs.]

Figure 4.1 Block diagram of the proposed system

MICROCONTROLLER

This is the main working unit, which connects all the inputs and
outputs to its three ports, namely PB, PC, and PD; each port has pins that act
as input and output pins. The main AC/DC power supply of 12 V is given to
this board and is converted to the microcontroller's DC operating voltage of
+5 V. The controller has comparatively large RAM and ROM, which helps
the program run faster than on other microcontrollers and also provides
extra memory storage capacity. The development methods decide a large
part of how the final system functions, so care is taken to ensure that best
practices, tools, and equipment are used. The system will be developed using
the Arduino IDE, and the embedded C programming language will be used.
Certain sections of the system will be ported to an Android application for
performance enhancement.

PSU BOARD

This board is connected to the relay chip, which is in turn connected
to the vehicle model system. The board's main purpose is to fulfil the power
requirements of the microcontroller, DC motor, and ultrasonic sensor.

BUZZER

The buzzer receives power from this driver, which in turn is powered
from the microcontroller board. The enable pins of this driver should be high
to send the power to the buzzer.

CAMERA

The camera is used to capture continuous images of traffic signs and
signals from the real world. The images available from the camera are sent
to the Raspberry Pi to perform the car's control action.

PYTHON SOFTWARE

Python is a high-level, general-purpose programming language used
widely in industry and research, as well as for general-purpose projects. Its
software comes in various versions, i.e., IDLE for Python 2 and Python 3,
and within these two lines different versions of the Python IDLE are
available for programming in Python.

OPEN CV

OpenCV stands for Open Source Computer Vision. It is a library of
programming functions mainly aimed at real-time computer vision. It
contains more than 2500 optimized algorithms, covering both classical and
state-of-the-art computer vision techniques, and is basically used for image
processing; in this project we use it for face detection, object detection,
image recognition, tracking, and other functions.

The overall block diagram of the proposed approach is given in Figure 4.1.
In this project, we take a real-time video using a webcam and apply the Haar
cascade algorithm to detect the driver: the face is detected, the video is then
used to detect the eyes of the driver using the same Haar cascade algorithm,
and drowsiness is detected using the Eye Aspect Ratio technique. The
proposed algorithm works based on three methods.

FACE DETECTION USING THE VIOLA-JONES ALGORITHM:

The Viola-Jones object detection framework, proposed by Paul Viola and
Michael Jones, was the first object detection framework to provide
competitive real-time object detection. Although it can be trained to detect a
variety of object categories, it was motivated primarily by the problem of
face detection. The algorithm is implemented in OpenCV, and the face
detection step used here relies on it.
FACE DETECTION USING HAAR CASCADE CLASSIFIER
ALGORITHM:

The Haar cascade classifier algorithm consists of four different stages:
calculating Haar features, creating integral images, AdaBoost training, and
cascading classifiers, as shown in the figure.

The features used by the detection framework all involve sums of image
pixels within rectangular areas. As such, they bear some resemblance to
Haar basis functions, which have previously been used in image-based
object detection. However, since the features used by Viola and Jones each
rely on more than one rectangular area, they are generally more complex.

The figure illustrates the four different types of features used within the
framework. The value of any given feature is the sum of the pixels within
the clear rectangles subtracted from the sum of the pixels within the shaded
rectangles. Rectangular features of this sort are primitive when compared
with alternatives such as steerable filters; although they are sensitive to
vertical and horizontal features, their response is considerably coarser. A
short OpenCV sketch of this face and eye detection step is given below.
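The following is a minimal, illustrative sketch of Haar cascade face and eye detection with OpenCV (not the exact pipeline of this project, which uses dlib landmarks after face detection). The cascade XML file names are the standard ones shipped with the opencv-python package, and the input image name is hypothetical.

import cv2

# load the pre-trained Haar cascades bundled with OpenCV
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
eye_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_eye.xml")

frame = cv2.imread("driver.jpg")          # hypothetical input frame
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

# detectMultiScale returns (x, y, w, h) boxes for each detection
faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
for (x, y, w, h) in faces:
    cv2.rectangle(frame, (x, y), (x + w, y + h), (255, 0, 0), 2)
    roi_gray = gray[y:y + h, x:x + w]     # search for eyes only inside the face
    eyes = eye_cascade.detectMultiScale(roi_gray)
    for (ex, ey, ew, eh) in eyes:
        cv2.rectangle(frame, (x + ex, y + ey), (x + ex + ew, y + ey + eh),
                      (0, 255, 0), 2)

cv2.imwrite("detected.jpg", frame)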

EYE DETECTION:

In this system we have used facial landmark prediction for eye detection.
Facial landmarks are used to localize and represent salient regions of the
face, such as the eyes, eyebrows, nose, mouth, and jawline. Facial landmarks
have been successfully applied to face alignment, head pose estimation, face
swapping, blink detection, and much more.

In the context of facial landmarks, our goal is detecting important
facial structures on the face using shape prediction methods.

Detecting facial landmarks is therefore a two-step process:

• Localize the face in the image.

• Detect the key facial structures on the face ROI.

Localize the face in the image: The face image is localized by Haar
feature based cascade classifiers which were discussed in the first step of
our algorithm i.e. face detection.

Detect the key facial structures on the face ROI: There are a variety
of facial landmark detectors, but all methods essentially try to localize and
label the following facial regions: Mouth, Right eyebrow, Left eyebrow,
Right eye, Left eye, Nose.

We use OpenCV's Haar cascades to detect the face in an image, which boils
down to finding the bounding box (x, y)-coordinates of the face in the frame.

Given the bounding box of the face, we can apply dlib's facial landmark
predictor to obtain the 68 salient points used to localize the eyes, eyebrows,
nose, mouth, and jawline.

Figure: the 68 facial landmark coordinates from the iBUG 300-W dataset.

The 68 facial landmarks from dlib are indexable, which enables us to extract
the various facial structures using simple Python array slices.

Figure: top-left, a visualization of the eye landmarks when the eye is open;
top-right, the eye landmarks when the eye is closed; bottom, a plot of the eye
aspect ratio over time, where the dip indicates a blink.

On the top-left we have an eye that is fully open and the eye facial
landmarks plotted. Then on the top-right we have an eye that is closed. The
bottom then plots the eye aspect ratio over time. As we can see, the eye
aspect ratio is constant (indicating that the eye is open), then rapidly drops
to close to zero, then increases again, indicating a blink has taken place.
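For reference, the eye aspect ratio used here follows the standard formulation from the facial-landmark literature (the exact formula is not spelled out in this report, but it is what the eye_aspect_ratio() helper in Section 7.4 computes): with the six eye landmarks p1...p6 ordered starting from the left eye corner,

EAR = (||p2 - p6|| + ||p3 - p5||) / (2 * ||p1 - p4||)

The numerator measures the vertical eye opening and the denominator the horizontal eye width, so EAR stays roughly constant while the eye is open and drops toward zero when it closes.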

4.2 SYSTEM SPECIFICATION

4.2.1 HARDWARE SPECIFICATION

• Arduino NANO
• PC
• Web Cam
• LCD display
• Bluetooth
• Power Supply

4.2.2 SOFTWARE SPECIFICATION

• Arduino IDE

• Python IDE

CHAPTER 5

ALGORITHM

Developing an algorithm for safe driving and deep behavior analysis and
detection using a learning method acceptance ratio involves several steps.
Here's a high-level overview of how you might approach it:

5.1 DATA COLLECTION


Data collection for safe driving and deep behaviour analysis and detection
using learning methods involves gathering a diverse range of driving-related
data. Here are some key types of data to collect:
[1] Vehicle Sensor Data
 Speed: Collect data on vehicle speed from the vehicle's
speedometer or onboard sensors.
 Acceleration and Deceleration: Measure changes in vehicle speed
over time to capture acceleration and deceleration patterns.
 Braking: Monitor braking events, including intensity and
duration.
 Steering Angle: Record steering wheel positions to analyse
turning behaviour.
[2] GPS Data
 Location: Collect GPS coordinates to track the vehicle's position
and route.
 Time and Date: Record timestamps for each GPS data point to
analyse driving patterns over time.
[3] Environmental Data
 Weather: Record weather conditions (rain, fog, glare) that affect
driving.
 Road Conditions: Note road type, surface quality, and lighting
conditions along the route.
[4] Driver Behaviour Data
 Driver Input: Capture driver actions such as use of turn signals,
lane changes, and use of headlights.
 Driver State: Monitor driver behaviour indicators like
drowsiness, distraction, or impairment (if possible and ethically
permissible).
[5] Video and Image Data
 In-Cabin Camera: Capture video of the driver's face and posture
for drowsiness and distraction analysis.
 Forward-Facing Camera: Record the road scene ahead, including
traffic signs, signals, and nearby vehicles.
[6] Telematics Data
 Trip Data: Log trip duration, distance, and route summaries
transmitted from the vehicle.
 Vehicle Diagnostics: Collect on-board diagnostic information
such as engine status and fault codes.
[7] Contextual Data
 Traffic Conditions: Record traffic density and congestion along
the route.
 Time Context: Note time of day, day of week, and trip purpose
where available.
[8] Safety Critical Events
 Event Logs: Record harsh braking, sharp cornering, collisions,
and near-miss events.
 Alerts Raised: Log warnings issued by the system and the
driver's response to them.
[9] Ethical and Privacy Considerations
 Ensure that data collection methods respect the privacy rights of
drivers and passengers.
 Anonymize or pseudonymize personally identifiable information
(PII) to protect individual privacy.
 Comply with relevant data protection regulations and obtain
informed consent where necessary.

5.2 FEATURE ENGINEERING

 Extract relevant features from the raw data that can be used to
characterize driving behavior. These features may include statistical
measures, time-series analysis, frequency-domain analysis, etc. (a short
sketch of such feature extraction follows below).

 Consider incorporating contextual information such as weather
conditions, road conditions, time of day, etc.
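The following is a minimal sketch of the kind of per-trip feature extraction described above. It assumes trip data is available as a pandas DataFrame with hypothetical columns 'speed', 'accel', and 'brake'; the column names and the harsh-braking threshold are illustrative only.

import numpy as np
import pandas as pd

def extract_trip_features(trip: pd.DataFrame) -> dict:
    features = {
        # simple statistical measures of the time series
        "mean_speed": trip["speed"].mean(),
        "std_speed": trip["speed"].std(),
        "max_accel": trip["accel"].max(),
        "min_accel": trip["accel"].min(),                        # hardest deceleration
        "harsh_brake_count": int((trip["accel"] < -3.0).sum()),  # threshold is illustrative
        "brake_ratio": trip["brake"].mean(),                     # fraction of samples spent braking
    }
    # a simple frequency-domain summary: dominant frequency bin of speed variation
    spectrum = np.abs(np.fft.rfft(trip["speed"] - trip["speed"].mean()))
    features["dominant_speed_freq_bin"] = int(spectrum.argmax())
    return features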

5.3 MODEL SELECTION

 Choose appropriate machine learning models for analyzing driving
behavior. This could include traditional classifiers such as Decision
Trees, Random Forests, Support Vector Machines (SVM), or more
advanced deep learning models, for example Convolutional Neural
Networks or Recurrent Neural Networks for analyzing video data.

 Ensemble methods like AdaBoost or Gradient Boosting may also be
useful for combining multiple weak learners into a stronger model.

5.4 TRAINING

 Split the dataset into training, validation, and test sets.

 Train the selected models on the training data, optimizing their
parameters using techniques like cross-validation or grid search.

 Validate the models on the validation set to tune hyperparameters and
prevent overfitting. A minimal sketch of this workflow is given below.
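The sketch below illustrates the split-train-tune workflow with scikit-learn. The synthetic dataset stands in for real trip features and labels, and the model choice and parameter grid are illustrative assumptions, not the project's fixed configuration.

from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, train_test_split

# stand-in for a real feature matrix X and behavior labels y (0 = normal, 1 = unsafe)
X, y = make_classification(n_samples=500, n_features=12, random_state=42)

# hold out a test set; cross-validation on the remainder plays the validation role
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42)

param_grid = {"n_estimators": [100, 300], "max_depth": [None, 10, 20]}
search = GridSearchCV(
    RandomForestClassifier(random_state=42),
    param_grid,
    cv=5,              # 5-fold cross-validation for hyperparameter tuning
    scoring="f1")
search.fit(X_train, y_train)

model = search.best_estimator_
print("Best parameters:", search.best_params_)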

5.5 EVALUATION

 Evaluate the trained models on the test set using appropriate metrics
such as accuracy, precision, recall, F1-score, or area under the ROC
curve (AUC).

 AUC stands for Area Under the ROC Curve. As its name suggests, AUC
calculates the two-dimensional area under the entire ROC curve, ranging
from (0,0) to (1,1).

 Using the ROC curve, AUC summarizes the performance of the binary
classifier across different thresholds and provides an aggregate measure.

 The value of AUC ranges from 0 to 1; an excellent model will have an
AUC near 1, showing a good measure of separability.

 ROC, or Receiver Operating Characteristic curve, is a probability graph
that shows the performance of a classification model at different
threshold levels. The curve is plotted between two parameters, which are:

i. True Positive Rate (TPR): TPR = TP / (TP + FN), the fraction of actual
positives that are correctly identified.

ii. False Positive Rate (FPR): FPR = FP / (FP + TN), the fraction of actual
negatives that are incorrectly flagged as positive.

 Pay particular attention to metrics related to safe driving behavior
detection, as this is the primary goal of the algorithm; a short sketch of
computing these metrics follows.
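A minimal sketch of the evaluation step, continuing from the training sketch above (the fitted model and the held-out X_test / y_test from that sketch are assumed).

from sklearn.metrics import (accuracy_score, precision_score, recall_score,
                             f1_score, roc_auc_score)

y_pred = model.predict(X_test)
y_score = model.predict_proba(X_test)[:, 1]   # probability of the unsafe class

print("Accuracy :", accuracy_score(y_test, y_pred))
print("Precision:", precision_score(y_test, y_pred))
print("Recall   :", recall_score(y_test, y_pred))
print("F1-score :", f1_score(y_test, y_pred))
print("ROC AUC  :", roc_auc_score(y_test, y_score))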

5.6 DEPLOYMENT

 Integrate the trained models into a real-time system that can analyze
driving behavior as it occurs.

 This may involve developing software that interfaces with onboard
vehicle sensors and/or cameras to continuously monitor driving behavior.

5.7 FEEDBACK LOOP

 Continuously collect new data from the deployed system to monitor its
performance in real-world conditions.

 Periodically retrain the models using updated data to adapt to changing
driving patterns and improve performance over time.

5.8 INTERPRETABILITY

• Ensure that the algorithm provides explanations for its decisions,
especially in safety-critical applications like driving.

• Techniques like feature importance analysis, SHAP values, or attention
mechanisms in deep learning models can help provide insights into the
factors driving the algorithm's decisions; an illustrative sketch is given
below.
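A minimal, illustrative sketch of inspecting feature importance for the random forest fitted earlier, plus a SHAP explainer for per-prediction contributions. It assumes the shap package is installed and reuses model and X_test from the previous sketches; the feature names are placeholders.

import numpy as np
import shap  # assumption: the shap package is installed

feature_names = [f"feature_{i}" for i in range(X_test.shape[1])]  # placeholder names

# global view: impurity-based importances from the random forest itself
for name, imp in sorted(zip(feature_names, model.feature_importances_),
                        key=lambda pair: pair[1], reverse=True)[:5]:
    print(f"{name}: {imp:.3f}")

# local view: SHAP values give per-feature contributions for each prediction
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_test)
print("SHAP output shape(s):", np.shape(shap_values))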

5.9 MACHINE LEARNING METHODS

The features of supervised machine learning methods generally include:


Labeled Data:
Supervised learning requires labeled data, where each example in the dataset
is paired with the correct output or target value. This labeled data is essential
for training the model to make predictions.
Input Features:
Supervised learning models rely on input features, which are the variables or
attributes that the model uses to make predictions. These features can be
numerical, categorical, or even text-based, depending on the nature of the
problem.
Target Variable:
Also known as the output variable or label, the target variable is what the
model aims to predict based on the input features. In classification tasks, the
target variable consists of discrete categories or classes, while in regression
tasks, it consists of continuous values.
Training Data:
Supervised learning models are trained on a portion of the labeled dataset
called the training data. The model learns from this data by adjusting its
parameters to minimize a predefined loss function, which measures the
difference between the predicted outputs and the actual labels.
Validation Data:
During the training process, a separate portion of the labeled dataset called
the validation data is used to tune hyperparameters and assess the model's
performance. This helps prevent overfitting by providing an independent
measure of how well the model generalizes to new, unseen data.
Model Evaluation Metrics:
Supervised learning models are evaluated using various metrics depending
on the nature of the problem. For classification tasks, common evaluation
metrics include accuracy, precision, recall, F1-score, and ROC-AUC. For
regression tasks, common evaluation metrics include mean squared error
(MSE), mean absolute error (MAE), and R-squared.
Generalization:
A key goal of supervised learning is to generalize well to new, unseen data.
A model that generalizes well performs accurately on data it hasn't seen
before, indicating that it has learned the underlying patterns in the data rather
than memorizing specific examples.
Model Complexity:
Supervised learning models vary in complexity, ranging from simple linear
models like linear regression to complex nonlinear models like neural
networks. The choice of model depends on the complexity of the problem
and the amount of available data.

Overall, supervised machine learning methods leverage labeled data to train
models that can make accurate predictions or decisions on new, unseen data.
These methods have a wide range of applications across various domains,
including healthcare, finance, natural language processing, computer vision,
and more.

CHAPTER 6

SOFTWARE REQUIREMENTS

6.1 ARDUINO NANO

Arduino Nano is a small, compatible open-source electronic development
board based on an 8-bit AVR microcontroller. Two versions of this board
are available, one based on the ATmega328P and the other on the ATmega168.

Arduino Nano can perform some functions similar to other boards available
in the market; however, it is smaller in size and is a good match for projects
requiring less memory space and fewer GPIO pins.

This unit features 14 digital pins which you can use to connect with external
components, while 8 analog pins of 10-bit resolution each, 2 reset pins, and
6 power pins are integrated on the board.

Like other Arduino boards, the operating voltage of this device is 5 V; the
input voltage limit is 6 V to 20 V, while the recommended input voltage
range is 7 V to 12 V.

The clock frequency of this unit is 16 MHz, which provides the timing
reference for the microcontroller.

The board supports a USB interface through a mini-USB port, unlike most
Arduino boards that use the standard USB port. There is no DC power jack
on this unit, i.e., you cannot power the board through a barrel-jack supply as
on the Uno.

In addition, this device is breadboard-friendly, meaning you can plug it into
a breadboard and build a range of electronic projects. The flash memory is
used to store the program: the ATmega168 has 16 KB of flash (of which
2 KB is used by the bootloader) and the ATmega328 has 32 KB. Similarly,
the EEPROM is 512 bytes and 1 KB, and the SRAM is 1 KB and 2 KB, for
the ATmega168 and ATmega328 respectively. The Nano board is very
similar to the UNO board, the former being smaller and lacking a DC power
jack.

Features

 Microcontroller: Microchip ATmega328P

 Operating voltage: 5 volts

 Input voltage: 6 to 20 volts

 Digital I/O pins: 14 (6 optional PWM outputs)

 Analog input pins: 8

 DC per I/O pin: 40 mA

 DC for 3.3 V pin: 50 mA

 Flash memory: 32 KB, of which 2 KB is used by bootloader

 SRAM: 2 KB

 EEPROM: 1 KB

 Clock speed: 16 MHz

 Length: 45 mm

 Width: 18 mm

 Mass: 7 g

 USB: Mini-USB Type-B

 ICSP Header: Yes

 DC Power Jack: No

Communication

The Arduino Nano has a number of facilities for communicating with a
computer, another Arduino, or other microcontrollers. The ATmega328
provides UART TTL serial (5V) communication, which is available on
digital pins 0 (RX) and 1 (TX).

An FTDI FT232RL on the board channels this serial communication over
USB, and the FTDI drivers (included with the Arduino software) provide a
virtual COM port to software on the computer. The Arduino software
includes a serial monitor which allows simple textual data to be sent to and
from the Arduino board. The RX and TX LEDs on the board flash when data
is being transmitted via the FTDI chip and the USB connection to the
computer (but not for serial communication on pins 0 and 1).

A SoftwareSerial library allows for serial communication on any of the
Nano's digital pins. The ATmega328 also supports I2C and SPI
communication, and the Arduino software includes the Wire library to
simplify use of the I2C bus. A short sketch of driving this serial link from
the PC side is shown below.
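A minimal sketch of talking to the Nano over its FTDI virtual COM port from the PC side with pyserial, mirroring how the appendix code signals the hardware. The port name ('COM5' here, typically '/dev/ttyUSB0' on Linux) and the meaning of the byte sent are assumptions specific to this setup.

import time
import serial

arduino = serial.Serial('COM5', 9600, timeout=0.1)  # 9600 baud, matching the board-side sketch
time.sleep(2)                 # the board resets when the port opens; give it a moment

arduino.write(b'1\r\n')       # e.g. '1' = drowsiness detected, slow the vehicle
reply = arduino.readline()    # read any acknowledgement sent back by the board
print(reply.decode(errors='ignore'))
arduino.close()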

6.1.1 SOFTWARE PROGRAMMING FOR ATmega328P

Just like other Arduino boards, the ATmega328P-based Nano is programmed
with the Arduino IDE. The IDE supports C/C++ programming, so we write
the program in embedded C.

We can type our program in the IDE, burn that code into the
microcontroller, and change the code according to requirements. The
program written in the IDE is known as a sketch; it is compiled in the
software and then transferred to the microcontroller using a USB cable.

To restart the program on the microcontroller we can use the on-board reset
button. The board comes with a built-in bootloader, which rules out the need
for an external programmer to burn code into the microcontroller; this
bootloader uses the STK500 protocol for communication.

Another feature of the board is support for multitasking-style programs.
Although the IDE itself does not provide this directly, other toolchains and
operating systems can be used to build C programs for this purpose, which
also gives the flexibility to load our own build of the program through the
board's ISP connector.

6.2 SOFTWARE APPLICATION VERSION 2.1.1
The new major release of the Arduino IDE is faster and even more powerful!
In addition to a more modern editor and a more responsive interface it
features autocompletion, code navigation, and even a live debugger.

Code Editor:
The IDE provides a text editor where you write your Arduino code. This
editor typically includes features like syntax highlighting, auto-indentation,
and code completion to assist you in writing your programs.
Sketches and Libraries:
In Arduino, a program is called a "sketch". The IDE supports multiple
sketches within a project. Additionally, it provides a way to manage
libraries, which are pre-written pieces of code that extend the functionality
of the Arduino.
Compiler:
The IDE has a built-in compiler that translates your human-readable code
into machine-readable instructions (known as machine code or binary
code). This process is known as compilation.
Serial Monitor:
Arduino boards often communicate with a computer over a serial
connection. The IDE includes a Serial Monitor tool, which allows you to
send and receive data between the computer and the Arduino board. This
is particularly useful for debugging and testing.
Upload and Burn Bootloader:
Once you've written and compiled your code, you use the IDE to upload it
to the Arduino board. In some cases, you might also need to burn a
bootloader onto the microcontroller, which is a small piece of software that
enables the microcontroller to communicate with the IDE.

Board Manager:
The IDE allows you to select the type of Arduino board you're using.
Different boards have different specifications and require specific
configurations.
Tools and Options:
The IDE provides various tools and options for configuring settings,
managing libraries, and selecting different types of Arduino-compatible
hardware.

6.3 FUNCTIONS INVOLVED

Looking into the future, advancements in technology and methodologies can
lead to the development of new functions and capabilities for safe driving
and deep behavior analysis and detection systems. Here are some potential
future functions:
[1] Multimodal Data Fusion
 Integrating data from diverse sources such as vehicle sensors,
external cameras, smartphone sensors, and IoT devices to provide a
comprehensive understanding of driving behavior.
[2] Advanced Sensor Technologies
 Adoption of advanced sensor technologies such as LiDAR, radar,
and 360-degree cameras for enhanced perception and analysis of the
driving environment.
[3] Behavioral Biometrics
 Incorporating biometric data (e.g., facial expressions, heart rate
variability) to assess driver cognitive and emotional states and their
impact on driving behavior.
[4] Contextual Analysis
 Analyzing contextual factors such as traffic patterns, infrastructure
conditions, and social dynamics to adapt driving behavior models to
specific environments.

33
[5] Predictive Analytics
 Utilizing predictive analytics techniques to anticipate potential safety
risks and proactively intervene to prevent accidents or mitigate their
severity.
[6] Human-Machine Collaboration
 Developing systems that facilitate collaboration between human
drivers and AI-based assistance systems to enhance safety and
improve driving performance.
[7] Explainable AI
 Advancing interpretability and explainability techniques to provide
more transparent insights into the decision-making processes of AI
models, fostering trust and user acceptance.
[8] Autonomous Driving Integration
 Integrating safe driving and behavior analysis functions into
autonomous driving systems to ensure safe interaction with human-
driven vehicles and pedestrians.
[9] Continuous Learning and Adaptation
 Implementing algorithms capable of continuous learning and
adaptation to evolving driving conditions, user preferences, and
regulatory requirements.
[10] Robustness and Security
 Enhancing the robustness and security of safe driving systems
against adversarial attacks, cyber threats, and system failures to
ensure reliability and safety.
[11] Personalized Feedback and Coaching
 Providing personalized feedback and coaching to drivers based on
their individual driving behaviors and preferences to promote safer
and more efficient driving habits.

[12] Social and Environmental Impact Analysis
 Assessing the social and environmental impact of driving behavior
patterns to inform policy decisions and promote sustainable
transportation practices.

These future functions represent potential directions for innovation and
research in the field of safe driving and behavior analysis, leveraging
advances in technology, data science, and human-centered design to create
safer and more efficient transportation systems.

CHAPTER 7

APPENDIX

7.1 ARDUINO DRIVER DROWSINESS DETECTION

from scipy.spatial import distance as dist
from imutils.video import VideoStream
from imutils import face_utils
import argparse
import imutils
import time
import dlib
import math
from cv2 import cv2
import numpy as np
from EAR import eye_aspect_ratio
from MAR import mouth_aspect_ratio
from HeadPose import getHeadTiltAndCoords
import serial
import yagmail
import playsound

# open the serial link to the Arduino Nano (adjust the COM port as required)
arduino = serial.Serial('COM5', 9600, timeout=.1)


def arduino_data():
    # send '1' to the Nano to signal that drowsiness was detected
    arduino.write(b'1\r\n')
def mail_send():
    mail = 'gunabalar.@gmail.com'
    password = 'SDAD@BADlr'
    # list of email_id to send the mail
    li = ["@gmail.com"]
    body = "Drowsiness / Fatigue Detected...!"
    filename = "drowsy.jpg"
    yag = yagmail.SMTP(mail, password)
    for dest in li:
        yag.send(
            to=dest,
            subject="Alert...! Driver in Drowsy",
            contents=body,
            attachments=filename,
        )
    print("Mail sent to all...!")
    time.sleep(1)


def sound_alarm():
    # Play an alarm sound
    playsound.playsound("./alarm.wav")

# Initialize dlib's face detector (HOG-based) and then create the
# facial landmark predictor
print("[INFO] loading facial landmark predictor...")
detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor(
    './dlib_shape_predictor/shape_predictor_68_face_landmarks.dat')

# Initialize the video stream and sleep for a bit, allowing the
# camera sensor to warm up
print("[INFO] initializing camera...")
vs = VideoStream(src=0).start()
# vs = VideoStream(usePiCamera=True).start()  # Raspberry Pi
time.sleep(2.0)

# 400x225 to 1024x576
frame_width = 1024
frame_height = 576

# 2D image points. If you change the image, you need to change vector
image_points = np.array([
    (359, 391),  # Nose tip 34
    (399, 561),  # Chin 9
    (337, 297),  # Left eye left corner 37
    (513, 301),  # Right eye right corner 46
    (345, 465),  # Left mouth corner 49
    (453, 469)   # Right mouth corner 55
], dtype="double")

(lStart, lEnd) = face_utils.FACIAL_LANDMARKS_IDXS["left_eye"]
(rStart, rEnd) = face_utils.FACIAL_LANDMARKS_IDXS["right_eye"]

EYE_AR_THRESH = 0.25
MOUTH_AR_THRESH = 0.65
EYE_AR_CONSEC_FRAMES = 3
COUNTER = 0

# grab the indexes of the facial landmarks for the mouth
(mStart, mEnd) = (49, 68)

# loop over the frames from the video stream
while True:
    # grab the frame from the threaded video stream, resize it to
    # a width of 1024 pixels, and convert it to grayscale
    frame = vs.read()
    frame = imutils.resize(frame, width=1024, height=576)
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    size = gray.shape

    # detect faces in the grayscale frame
    rects = detector(gray, 0)

    # check to see if a face was detected, and if so, draw the total
    # number of faces on the frame
    if len(rects) > 0:
        text = "{} face(s) found".format(len(rects))
        cv2.putText(frame, text, (10, 20),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 0, 255), 2)

    # loop over the face detections
    for rect in rects:
        # compute the bounding box of the face and draw it on the frame
        (bX, bY, bW, bH) = face_utils.rect_to_bb(rect)
        cv2.rectangle(frame, (bX, bY), (bX + bW, bY + bH), (0, 255, 0), 1)

        # determine the facial landmarks for the face region, then
        # convert the facial landmark (x, y)-coordinates to a NumPy array
        shape = predictor(gray, rect)
        shape = face_utils.shape_to_np(shape)

        # extract the left and right eye coordinates, then use the
        # coordinates to compute the eye aspect ratio for both eyes
        leftEye = shape[lStart:lEnd]
        rightEye = shape[rStart:rEnd]
        leftEAR = eye_aspect_ratio(leftEye)
        rightEAR = eye_aspect_ratio(rightEye)

        # average the eye aspect ratio together for both eyes
        ear = (leftEAR + rightEAR) / 2.0

        # compute the convex hull for the left and right eye, then
        # visualize each of the eyes
        leftEyeHull = cv2.convexHull(leftEye)
        rightEyeHull = cv2.convexHull(rightEye)
        cv2.drawContours(frame, [leftEyeHull], -1, (0, 255, 0), 1)
        cv2.drawContours(frame, [rightEyeHull], -1, (0, 255, 0), 1)

        # check to see if the eye aspect ratio is below the blink
        # threshold, and if so, increment the blink frame counter
        if ear < EYE_AR_THRESH:
            COUNTER += 1

            # if the eyes were closed for a sufficient number of frames,
            # show the warning, raise the alarm and notify the hardware
            if COUNTER >= EYE_AR_CONSEC_FRAMES:
                cv2.putText(frame, "Eyes Closed!", (500, 20),
                            cv2.FONT_HERSHEY_SIMPLEX, 0.7, (0, 0, 255), 2)
                sound_alarm()
                print("Drowsiness Detected... Driver Slept...!")
                time.sleep(0.25)
                arduino_data()
                print("Data Send to Hardware...!")
                cv2.imwrite('drowsy.jpg', frame)
                # print("Image Captured and Ready send Mail...")
                time.sleep(0.25)
                # mail_send()
                print("Done...!")
                time.sleep(1)
        # otherwise, the eye aspect ratio is not below the blink
        # threshold, so reset the counter
        else:
            COUNTER = 0

        # extract the mouth coordinates, then use the
        # coordinates to compute the mouth aspect ratio
        mouth = shape[mStart:mEnd]
        mouthMAR = mouth_aspect_ratio(mouth)
        mar = mouthMAR

        # compute the convex hull for the mouth, then visualize the mouth
        mouthHull = cv2.convexHull(mouth)
        cv2.drawContours(frame, [mouthHull], -1, (0, 255, 0), 1)
        cv2.putText(frame, "MAR: {:.2f}".format(mar), (650, 20),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.7, (0, 0, 255), 2)

        # draw text and raise the alarm if the mouth is open (yawning)
        if mar > MOUTH_AR_THRESH:
            cv2.putText(frame, "Yawning!", (800, 20),
                        cv2.FONT_HERSHEY_SIMPLEX, 0.7, (0, 0, 255), 2)
            sound_alarm()
            print("Yawning Detected... Driver Slept...!")
            time.sleep(0.25)
            arduino_data()
            print("Data Send to Hardware...!")
            cv2.imwrite('drowsy.jpg', frame)
            print("Image Captured and Ready send Mail...")
            time.sleep(0.25)
            # mail_send()
            print("Done...!")
            time.sleep(1)
            # sys.exit()

        # loop over the (x, y)-coordinates for the facial landmarks
        # and draw each of them
        for (i, (x, y)) in enumerate(shape):
            if i == 33:
                # nose tip: save to our key point list, i.e. keypoints = [(i,(x,y))]
                image_points[0] = np.array([x, y], dtype='double')
                # write on frame in Green
                cv2.circle(frame, (x, y), 1, (0, 255, 0), -1)
                cv2.putText(frame, str(i + 1), (x - 10, y - 10),
                            cv2.FONT_HERSHEY_SIMPLEX, 0.35, (0, 255, 0), 1)
            elif i == 8:
                # chin
                image_points[1] = np.array([x, y], dtype='double')
                cv2.circle(frame, (x, y), 1, (0, 255, 0), -1)
                cv2.putText(frame, str(i + 1), (x - 10, y - 10),
                            cv2.FONT_HERSHEY_SIMPLEX, 0.35, (0, 255, 0), 1)
            elif i == 36:
                # left eye left corner
                image_points[2] = np.array([x, y], dtype='double')
                cv2.circle(frame, (x, y), 1, (0, 255, 0), -1)
                cv2.putText(frame, str(i + 1), (x - 10, y - 10),
                            cv2.FONT_HERSHEY_SIMPLEX, 0.35, (0, 255, 0), 1)
            elif i == 45:
                # right eye right corner
                image_points[3] = np.array([x, y], dtype='double')
                cv2.circle(frame, (x, y), 1, (0, 255, 0), -1)
                cv2.putText(frame, str(i + 1), (x - 10, y - 10),
                            cv2.FONT_HERSHEY_SIMPLEX, 0.35, (0, 255, 0), 1)
            elif i == 48:
                # left mouth corner
                image_points[4] = np.array([x, y], dtype='double')
                cv2.circle(frame, (x, y), 1, (0, 255, 0), -1)
                cv2.putText(frame, str(i + 1), (x - 10, y - 10),
                            cv2.FONT_HERSHEY_SIMPLEX, 0.35, (0, 255, 0), 1)
            elif i == 54:
                # right mouth corner
                image_points[5] = np.array([x, y], dtype='double')
                cv2.circle(frame, (x, y), 1, (0, 255, 0), -1)
                cv2.putText(frame, str(i + 1), (x - 10, y - 10),
                            cv2.FONT_HERSHEY_SIMPLEX, 0.35, (0, 255, 0), 1)
            else:
                # all other landmarks: write on frame in Red
                cv2.circle(frame, (x, y), 1, (0, 0, 255), -1)
                cv2.putText(frame, str(i + 1), (x - 10, y - 10),
                            cv2.FONT_HERSHEY_SIMPLEX, 0.35, (0, 0, 255), 1)

        # Draw the determined image points onto the person's face
        for p in image_points:
            cv2.circle(frame, (int(p[0]), int(p[1])), 3, (0, 0, 255), -1)

        # estimate the head tilt from the 2D-3D point correspondences
        (head_tilt_degree, start_point, end_point,
         end_point_alt) = getHeadTiltAndCoords(size, image_points, frame_height)

        cv2.line(frame, start_point, end_point, (255, 0, 0), 2)
        cv2.line(frame, start_point, end_point_alt, (0, 0, 255), 2)

        if head_tilt_degree:
            cv2.putText(frame, 'Head Tilt Degree: ' + str(head_tilt_degree[0]),
                        (170, 20), cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 0, 255), 2)

    # show the frame
    cv2.imshow("Frame", frame)
    key = cv2.waitKey(1) & 0xFF

    # if the `q` key was pressed, break from the loop
    if key == ord("q"):
        break

# do a bit of cleanup
cv2.destroyAllWindows()
vs.stop()

7.2 HEAD POSE

import numpy as np
import math
from cv2 import cv2

# 3D model points.
model_points = np.array([
    (0.0, 0.0, 0.0),           # Nose tip 34
    (0.0, -330.0, -65.0),      # Chin 9
    (-225.0, 170.0, -135.0),   # Left eye left corner 37
    (225.0, 170.0, -135.0),    # Right eye right corner 46
    (-150.0, -150.0, -125.0),  # Left mouth corner 49
    (150.0, -150.0, -125.0)    # Right mouth corner 55
])


# Checks if a matrix is a valid rotation matrix.
def isRotationMatrix(R):
    Rt = np.transpose(R)
    shouldBeIdentity = np.dot(Rt, R)
    I = np.identity(3, dtype=R.dtype)
    n = np.linalg.norm(I - shouldBeIdentity)
    return n < 1e-6


# Calculates rotation matrix to euler angles.
# The result is the same as MATLAB except the order
# of the euler angles (x and z are swapped).
def rotationMatrixToEulerAngles(R):
    assert(isRotationMatrix(R))
    sy = math.sqrt(R[0, 0] * R[0, 0] + R[1, 0] * R[1, 0])
    singular = sy < 1e-6
    if not singular:
        x = math.atan2(R[2, 1], R[2, 2])
        y = math.atan2(-R[2, 0], sy)
        z = math.atan2(R[1, 0], R[0, 0])
    else:
        x = math.atan2(-R[1, 2], R[1, 1])
        y = math.atan2(-R[2, 0], sy)
        z = 0
    return np.array([x, y, z])


def getHeadTiltAndCoords(size, image_points, frame_height):
    focal_length = size[1]
    center = (size[1] / 2, size[0] / 2)
    camera_matrix = np.array([[focal_length, 0, center[0]],
                              [0, focal_length, center[1]],
                              [0, 0, 1]], dtype="double")
    dist_coeffs = np.zeros((4, 1))  # Assuming no lens distortion
    (_, rotation_vector, translation_vector) = cv2.solvePnP(
        model_points, image_points, camera_matrix, dist_coeffs,
        flags=cv2.SOLVEPNP_ITERATIVE)

    # Project a 3D point (0, 0, 1000.0) onto the image plane.
    # We use this to draw a line sticking out of the nose.
    (nose_end_point2D, _) = cv2.projectPoints(
        np.array([(0.0, 0.0, 1000.0)]), rotation_vector, translation_vector,
        camera_matrix, dist_coeffs)

    # Get rotation matrix from the rotation vector
    rotation_matrix, _ = cv2.Rodrigues(rotation_vector)

    # Calculate head tilt angle in degrees
    head_tilt_degree = abs(
        [-180] - np.rad2deg([rotationMatrixToEulerAngles(rotation_matrix)[0]]))

    # Calculate starting and ending points for the two lines for illustration
    starting_point = (int(image_points[0][0]), int(image_points[0][1]))
    ending_point = (int(nose_end_point2D[0][0][0]), int(nose_end_point2D[0][0][1]))
    ending_point_alternate = (ending_point[0], frame_height // 2)

    return head_tilt_degree, starting_point, ending_point, ending_point_alternate

7.3 MOUTH ACCEPTANCE RATIO

from scipy.spatial import distance as dist


def mouth_aspect_ratio(mouth):
    # Compute the euclidean distances between the two sets of
    # vertical mouth landmarks (x, y)-coordinates
    A = dist.euclidean(mouth[2], mouth[10])  # 51, 59
    B = dist.euclidean(mouth[4], mouth[8])   # 53, 57
    # Compute the euclidean distance between the horizontal
    # mouth landmark (x, y)-coordinates
    C = dist.euclidean(mouth[0], mouth[6])   # 49, 55
    # Compute the mouth aspect ratio
    mar = (A + B) / (2.0 * C)
    # Return the mouth aspect ratio
    return mar

7.4 EYE ACCEPTANCE RATIO

from scipy.spatial import distance as dist


def eye_aspect_ratio(eye):
    # Compute the euclidean distances between the two sets of
    # vertical eye landmarks (x, y)-coordinates
    A = dist.euclidean(eye[1], eye[5])
    B = dist.euclidean(eye[2], eye[4])
    # Compute the euclidean distance between the horizontal
    # eye landmark (x, y)-coordinates
    C = dist.euclidean(eye[0], eye[3])
    # Compute the eye aspect ratio
    ear = (A + B) / (2.0 * C)
    # Return the eye aspect ratio
    return ear

CHAPTER 8

RESULT

8.1 INPUT

8.2 OUTPUT

CHAPTER 9

CONCLUSION AND FUTURE ENHANCEMENTS

9.1 CONCLUSION

The system detects driver drowsiness through the condition of the eyes.
It is based on face detection using the well-known Haar cascade; the eyes are
then located by the proposed crop-eye step, which divides the face into
segments to obtain the left and right eyes. Open and closed eye conditions
are determined from intensity values: the distance between the eyebrow and
the eyelash is calculated, and if this distance is greater than a threshold value
the eyes are considered closed, otherwise open. An alarm is triggered if the
eyes are found to be closed for consecutive frames. The proposed method
was tested on video sequences recorded in a vehicle as well as in a lab
environment. The proposed system works in real time with minimal
computational complexity, and is therefore also suitable for deployment in a
surveillance environment. The system produces 90% accurate results across
different faces.

9.2 FUTURE ENHANCEMENTS


 Develop learning methods that adapt to different driving contexts and
individual preferences. By considering factors such as road conditions,
traffic patterns, and driver habits, the system can provide more
personalized and relevant insights, leading to higher acceptance.

 Design learning methods with a focus on the end-users, such as drivers
and fleet managers. Incorporate user feedback loops and usability testing
to ensure that the technology aligns with their needs and preferences,
ultimately increasing acceptance.

 Enhance the robustness and reliability of learning methods to ensure
consistent performance across diverse scenarios. By minimizing false
positives and false negatives, users are more likely to trust the system and
accept its recommendations.

 Emphasize the ethical and transparent use of artificial intelligence (AI)
algorithms in behaviour analysis and decision-making processes. Provide
clear explanations and visualizations of how AI models operate and make
recommendations, empowering users to understand and trust the
technology behind the app's safety features.

By incorporating these future enhancements, an app built for safe driving
and behavior analysis can further improve its acceptance ratio among
users, ultimately contributing to enhanced road safety and driver
satisfaction.

REFERENCES

[1] B. K. Dar, M. A. Shah, S. U. Islam, C. Maple, S. Mussadiq, and S. Khan,
“Delay-aware accident detection and response system using fog
computing,” IEEE Access, 2019.

[2] Y. Ji, S. Wang, Y. Zhao, J. Wei, and Y. Lu, “Fatigue state detection based
on multi-index fusion and state recognition network,” IEEE Access, vol. 7,
pp. 64136–64147, 2019.

[3] J. Hu, L. Xu, X. He, and W. Meng, “Abnormal driving detection based on
normalized driving behavior,” IEEE Trans. Veh. Technol., vol. 66, no. 8,
pp. 6645–6652, Aug. 2017.

[4] K. Fujiwara, E. Abe, K. Kamata, C. Nakayama, Y. Suzuki, T. Yamakawa,
et al., “Heart rate variability-based driver drowsiness detection and its
validation with EEG,” IEEE Trans. Biomed. Eng., vol. 66, no. 6, Jun. 2019.

[5] S. Mühlbacher-Karrer et al., “A driver state detection system—combining
a capacitive hand detection sensor with physiological sensors,” IEEE
Trans. Instrum. Meas., vol. 66, no. 4, pp. 624–636, Apr. 2017.

[6] K. Fujiwara et al., “Epileptic seizure prediction based on multivariate
statistical process control of heart rate variability features,” IEEE Trans.
Biomed. Eng., vol. 63, no. 6, pp. 1321–1332, Jun. 2016.

