CHAPTER 1
INTRODUCTION
1.1 OVERVIEW
Safe drive and deep behavior analysis and detection using learning methods encompass a set of techniques aimed at improving road safety by leveraging advanced technologies, particularly machine learning and deep learning algorithms. Safe drive systems typically involve the integration of sensors, cameras, and other data-gathering devices within vehicles to monitor various aspects of driving behavior and the surrounding environment. These systems often employ real-time analysis to detect potential hazards such as lane departures, collisions, or drowsy driving.
Deep behavior analysis involves the use of deep learning algorithms to analyze and understand complex patterns of human behavior, particularly in the context of driving. By processing large amounts of data collected from sensors and cameras, deep learning models can extract meaningful insights into driver behavior, such as detecting aggressive driving, distracted driving, or fatigue.
Detection using learning methods refers to the application of machine learning and deep learning techniques to identify and classify various driving behaviors and potential safety risks. This includes the development of predictive models that can anticipate dangerous situations on the road based on historical data and real-time inputs.
The acceptance ratio in this context refers to the degree to which these advanced technologies are embraced and adopted by both vehicle manufacturers and consumers. High acceptance ratios indicate widespread recognition of the benefits of safe drive and behavior analysis systems, leading to their integration into vehicles and widespread usage by drivers.
1.2 OBJECTIVES
The specific objectives of a safe drive and deep behavior analysis and detection using learning method acceptance ratio project will vary depending on the specific needs of the organization. However, some common objectives include:
• Develop a real-time system for analyzing driver behavior using deep learning techniques.
• Implement an adaptive learning model that continuously refines behavior analysis based on driving scenarios.
• Enhance road safety by providing timely alerts for potentially unsafe driving behavior and create a user-friendly environment.
CHAPTER 2
LITERATURE SURVEY
An Android application is developed that utilizes smartphone sensors for the detection of incidents. When an accident is detected, a plan of action is devised. Initially, a nearby hospital is located using the Global Positioning System (GPS). The emergency department of the hospital is notified about the accident and directs an ambulance to the accident site. In addition, the family contacts of the victim are also informed about the accident. All the required computation is performed on the nearby available fog nodes. Moreover, the proposed scheme is simulated using iFogSim to evaluate and compare the performance using fog nodes and cloud data centers.
accuracy on an open mouth dataset. As compared with other existing
algorithms, the proposed algorithm has the advantages of high accuracy and
simple implementation.
sleep condition affect the autonomic nervous system and then HRV, which
is defined as an RR interval (RRI) fluctuation on an electrocardiogram trace.
Eight HRV features are monitored for detecting changes in HRV by using
multivariate statistical process control, which is a well-known anomaly
detection method. Result: The performance of the proposed algorithm was
evaluated through an experiment using a driving simulator. In this
experiment, RRI data were measured from 34 participants during driving,
and their sleep onsets were determined based on the EEG data by a sleep
specialist. The validation results of the experimental data with the EEG data
showed that drowsiness was detected in 12 out of 13 pre-N1 episodes prior
to the sleep onsets, and the false positive rate was 1.7 times per hour.
Conclusion: The present work also demonstrates the usefulness of the
framework of HRV-based anomaly detection that was originally proposed
for epileptic seizure prediction. Significance: The proposed method can
contribute to preventing accidents caused by drowsy driving.
representing one of the two no-stress/stress driver states. A “threefold” cross
validation is applied to evaluate our concept. The subject dependence is
considered carefully by separating the training and testing data.
Furthermore, the CNN approach is benchmarked against other state-of-the-
art machine learning techniques. The results show a significant improvement
combining sensor inputs from different driver inherent domains, giving a
total related detection accuracy of 92%. Besides that, this paper shows that
in case of including the capacitive hand detection sensor, the accuracy
increases by 10%. These findings indicate that adding a subject-independent
sensor, such as the proposed capacitive hand detection sensor, can
significantly improve the detection performance.
and its false positive rate was about 0.7 times per hour. Conclusion: This
study proposed a new HRV-based epileptic seizure prediction method, and
the possibility of realizing an HRV-based epileptic seizure prediction system
was shown. Significance: The proposed method can be used in daily life,
because the heart rate can be measured easily by using a wearable sensor.
CHAPTER 3
EXISTING SYSTEM
The system is designed to detect driver drowsiness using Bayesian networks. Features of the interaction between the driver and the vehicle are extracted to obtain reliable symptoms of driver drowsiness. This yields more suitable and accurate strategies for designing a drowsy-driver detection system. Brain and visual activity are used in the drowsiness detection system. An electroencephalographic (EEG) channel is used to monitor brain activity, and diagnostic techniques and fuzzy logic are used in the EEG-based drowsiness detector. Visual activity is monitored using blink detection and characterization, and an electrooculography (EOG) channel is used to extract the blinking features.
DISADVANTAGES
• Research on methods that use a non-intrusive wearable device to classify multiple conditions is limited.
• The system requires the driver to wear an eye-blink sensor frame while driving.
• Notifications are sent only after the accident has occurred.
• The system's implementation offers limited detection and recognition of vehicle collisions.
• There is a delay in the ambulance reaching the location and providing medical service, which puts the victim at high risk.
CHAPTER 4
PROPOSED SYSTEM
ADVANTAGES
4.1 BLOCK DIAGRAM AND EXPLANATION
[Block diagram: PC (USB interface) with classification output and behavior output; power supply; ECG and alcohol sensor; microcontroller; buzzer driver, buzzer and indicator; LCD.]
MICROCONTROLLER
This is the main working unit, which connects all the inputs and outputs to its three ports, namely PB, PC and PD, each of which has pins that act as input and output pins. The main 12V AC/DC power supply is given to this board and is converted to the microcontroller's +5V DC operating voltage. It has a comparatively large RAM and ROM, which helps programs run faster than on other microcontrollers and also provides extra memory storage capacity. The development methods decide a large part of how the final system functions, so care is taken to ensure that the best practices, tools and equipment are used. The system will be developed using the Arduino IDE, with embedded C as the programming language. Certain sections of the system will be ported to an Android application for performance enhancement.
PSU BOARD
BUZZER
The buzzer receives power from this driver, which in turn is powered
from the microcontroller board. The enable pins of this driver should be high
to send the power to the buzzer.
CAMERA
The camera is used to capture continuous images so that traffic signs and signals can be obtained from the real world. The images available from the camera are sent to the Raspberry Pi to perform the car's control actions.
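As a minimal sketch of this capture loop (assuming OpenCV and a USB webcam at index 0; the window name is illustrative):

import cv2

# Open the default USB webcam (index 0); adjust the index for other cameras
cap = cv2.VideoCapture(0)

while True:
    ret, frame = cap.read()        # grab one frame from the camera
    if not ret:
        break                      # stop if no frame could be read
    # ... pass `frame` to the detection pipeline here ...
    cv2.imshow("Camera", frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break                      # quit on 'q'

cap.release()
cv2.destroyAllWindows()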
PYTHON SOFTWARE
OPEN CV
The figure on the right illustrates the four different kinds of features used within the framework. The value of any given feature is the sum of the pixels within the clear rectangles subtracted from the sum of the pixels within the shaded rectangles. Rectangular features of this sort are primitive compared with alternatives such as steerable filters. Although they are sensitive to vertical and horizontal features, their response is considerably coarser.
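As an illustration (not the project's own code), the sketch below uses the haarcascade_frontalface_default.xml classifier bundled with OpenCV to detect faces in a test image; the image file name is assumed:

import cv2

# Load the pre-trained frontal-face Haar cascade shipped with OpenCV
cascade_path = cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
face_cascade = cv2.CascadeClassifier(cascade_path)

image = cv2.imread("driver.jpg")                 # placeholder test image of the driver
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)   # Haar features operate on grayscale

# Detect faces at multiple scales and draw a box around each one
faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
for (x, y, w, h) in faces:
    cv2.rectangle(image, (x, y), (x + w, y + h), (0, 255, 0), 2)

cv2.imwrite("faces.jpg", image)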
EYE DETECTION:
Detecting facial landmarks is therefore a two-step process:
• Localize the face in the image: the face is localized by the Haar feature-based cascade classifiers discussed in the first step of our algorithm, i.e. face detection.
• Detect the key facial structures on the face ROI: there are a variety of facial landmark detectors, but all methods essentially try to localize and label the following facial regions: mouth, right eyebrow, left eyebrow, right eye, left eye, nose.
Given the bounding box of the face, we can apply dlib's facial landmark predictor to obtain 68 salient points that localize the eyes, eyebrows, nose, mouth, and jawline.
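A short sketch of this two-step pipeline with dlib and imutils is shown below; the predictor file path matches the one used in the appendix, while the test image name is assumed:

import cv2
import dlib
from imutils import face_utils

# step 1: face detection; step 2: 68-point landmark prediction
detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("./dlib_shape_predictor/shape_predictor_68_face_landmarks.dat")

image = cv2.imread("driver.jpg")                           # placeholder test image
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
for rect in detector(gray, 0):                             # one rectangle per detected face
    shape = face_utils.shape_to_np(predictor(gray, rect))  # 68 (x, y) coordinates
    (lStart, lEnd) = face_utils.FACIAL_LANDMARKS_IDXS["left_eye"]
    left_eye = shape[lStart:lEnd]                          # the six left-eye landmarks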
Figure: Visualization of the 68 facial landmark coordinates from the iBUG 300-W dataset.
Figure: Top-left: eye landmarks when the eye is open. Top-right: eye landmarks when the eye is closed. Bottom: the eye aspect ratio plotted over time; the dip in the eye aspect ratio indicates a blink.
On the top-left we have an eye that is fully open and the eye facial
landmarks plotted. Then on the top-right we have an eye that is closed. The
bottom then plots the eye aspect ratio over time. As we can see, the eye
aspect ratio is constant (indicating that the eye is open), then rapidly drops
to close to zero, then increases again, indicating a blink has taken place.
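In outline, the blink logic reduces to computing the EAR each frame and counting consecutive frames below a threshold. The helper below is a simplified sketch (thresholds taken from the appendix code; the full loop appears in Chapter 7):

from scipy.spatial import distance as dist

EYE_AR_THRESH = 0.25        # EAR below this value counts as a closed eye
EYE_AR_CONSEC_FRAMES = 3    # consecutive closed-eye frames before flagging

def eye_aspect_ratio(eye):
    # ratio of the vertical eye openings to the horizontal eye width
    A = dist.euclidean(eye[1], eye[5])
    B = dist.euclidean(eye[2], eye[4])
    C = dist.euclidean(eye[0], eye[3])
    return (A + B) / (2.0 * C)

counter = 0
def update_blink_state(left_eye, right_eye):
    # returns True once the eyes have stayed closed long enough
    global counter
    ear = (eye_aspect_ratio(left_eye) + eye_aspect_ratio(right_eye)) / 2.0
    counter = counter + 1 if ear < EYE_AR_THRESH else 0
    return counter >= EYE_AR_CONSEC_FRAMES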
4.2 SYSTEM SPECIFICATION
• Arduino NANO
• PC
• Web Cam
• LCD display
• Bluetooth
• Power Supply
• Arduino IDE
• Python IDE
CHAPTER 5
ALGORITHM
Developing an algorithm for safe driving and deep behavior analysis and
detection using a learning method acceptance ratio involves several steps.
Here's a high-level overview of how you might approach it:
Time and Date: Record timestamps for each GPS data point to
analyse driving patterns over time.
[4] Driver Behaviour Data
Driver Input: Capture driver actions such as use of turn signals,
lane changes, and use of headlights.
Driver State: Monitor driver behaviour indicators like
drowsiness, distraction, or impairment (if possible and ethically
permissible).
[5] Video and Image Data
Camera Streams: Capture in-cabin video of the driver and road-facing video of the surrounding scene for visual analysis.
[6] Telematics Data
Vehicle Signals: Record telematics such as speed, acceleration, braking, and steering inputs reported by the vehicle.
[7] Contextual Data
Context: Record contextual information such as weather conditions, road conditions, traffic density, and time of day.
[8] Safety Critical Events
Events: Log safety-critical events such as harsh braking, lane departures, collisions, and near-collisions.
[9] Ethical and Privacy Considerations
Ensure that data collection methods respect the privacy rights of
drivers and passengers.
Anonymize or pseudonymize personally identifiable information
(PII) to protect individual privacy.
Comply with relevant data protection regulations and obtain
informed consent where necessary.
Extract relevant features from the raw data that can be used to
characterize driving behavior. These features may include statistical
measures, time-series analysis, frequency domain analysis, etc.
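For illustration, a small sketch (function name and sampling rate assumed) of extracting simple statistical, time-series, and frequency-domain features from one raw sensor trace with NumPy:

import numpy as np

def extract_features(signal, sample_rate_hz=10.0):
    # Summarize one raw sensor trace (e.g. longitudinal acceleration)
    signal = np.asarray(signal, dtype=float)
    features = {
        "mean": signal.mean(),                                          # statistical measures
        "std": signal.std(),
        "max": signal.max(),
        "min": signal.min(),
        "jerk_mean": np.abs(np.diff(signal)).mean() * sample_rate_hz,   # time-series measure
    }
    # frequency-domain measure: dominant frequency of the trace
    spectrum = np.abs(np.fft.rfft(signal - signal.mean()))
    freqs = np.fft.rfftfreq(signal.size, d=1.0 / sample_rate_hz)
    features["dominant_freq_hz"] = freqs[np.argmax(spectrum)]
    return features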
Consider incorporating contextual information such as weather
conditions, road conditions, time of day, etc.
5.4 TRAINING
5.5 EVALUATION
Evaluate the trained models on the test set using appropriate metrics
such as accuracy, precision, recall, F1-score, or area under the ROC
curve (AUC).
AUC stands for Area Under the ROC Curve. As its name suggests, AUC measures the two-dimensional area under the entire ROC curve, ranging from (0,0) to (1,1), as shown in the image above. Like the ROC curve itself, AUC evaluates the performance of a binary classifier across different thresholds and provides an aggregate measure.
FPR, or False Positive Rate, is plotted on the x-axis of the ROC curve against the True Positive Rate (TPR) on the y-axis; it is defined as FPR = FP / (FP + TN).
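A minimal illustration of computing these metrics with scikit-learn, using small synthetic labels and scores purely as an example:

from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score, roc_auc_score

# Hypothetical ground-truth labels and classifier outputs
y_true = [0, 0, 1, 1, 1, 0, 1, 0]
y_score = [0.1, 0.4, 0.8, 0.7, 0.9, 0.3, 0.6, 0.2]     # predicted probability of class 1
y_pred = [1 if s >= 0.5 else 0 for s in y_score]       # hard labels at a 0.5 threshold

print("accuracy :", accuracy_score(y_true, y_pred))
print("precision:", precision_score(y_true, y_pred))
print("recall   :", recall_score(y_true, y_pred))
print("F1-score :", f1_score(y_true, y_pred))
# AUC is computed from the scores across all thresholds, not from the hard labels
print("ROC AUC  :", roc_auc_score(y_true, y_score))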
5.6 DEPLOYMENT
Integrate the trained models into a real-time system that can analyze
driving behavior as it occurs.
Continuously collect new data from the deployed system to monitor its
performance in real-world conditions.
5.8 INTERPRETABILITY
5.9 MACHINE LEARNING METHODS
difference between the predicted outputs and the actual labels.
Validation Data:
During the training process, a separate portion of the labeled dataset called
the validation data is used to tune hyperparameters and assess the model's
performance. This helps prevent overfitting by providing an independent
measure of how well the model generalizes to new, unseen data.
Model Evaluation Metrics:
Supervised learning models are evaluated using various metrics depending
on the nature of the problem. For classification tasks, common evaluation
metrics include accuracy, precision, recall, F1-score, and ROC-AUC. For
regression tasks, common evaluation metrics include mean squared error
(MSE), mean absolute error (MAE), and R-squared.
Generalization:
A key goal of supervised learning is to generalize well to new, unseen data.
A model that generalizes well performs accurately on data it hasn't seen
before, indicating that it has learned the underlying patterns in the data rather
than memorizing specific examples.
Model Complexity:
Supervised learning models vary in complexity, ranging from simple linear
models like linear regression to complex nonlinear models like neural
networks. The choice of model depends on the complexity of the problem
and the amount of available data.
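A brief sketch of the train/validation/test workflow described above, using scikit-learn and a synthetic dataset in place of real driving-behavior features:

import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, f1_score

# Synthetic labeled data standing in for extracted driving-behavior features
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 8))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)

# Hold out a test set, then carve a validation set out of the training data
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
X_train, X_val, y_train, y_val = train_test_split(X_train, y_train, test_size=0.25, random_state=0)

model = LogisticRegression().fit(X_train, y_train)

# Validation accuracy guides hyperparameter tuning; test metrics estimate generalization
print("validation accuracy:", accuracy_score(y_val, model.predict(X_val)))
print("test F1-score      :", f1_score(y_test, model.predict(X_test)))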
CHAPTER 6
SOFTWARE REQUIREMENTS
The Arduino Nano can perform many of the same functions as other boards available in the market; however, it is smaller in size and is a good match for projects requiring less memory and fewer GPIO pins.
This unit features 14 digital pins that can be used to connect external components, while 8 analog pins of 10-bit resolution each, 2 reset pins, and 6 power pins are integrated on the board.
Like other Arduino boards, the operating voltage of this device is 5V; the input voltage can range from 6V to 20V, while the recommended input voltage ranges from 7V to 12V.
The clock frequency of this unit is 16MHz, provided by an on-board crystal oscillator.
The board supports a USB interface using a mini-USB port, unlike most Arduino boards, which use the standard USB port. There is no DC power jack included on this unit, i.e. you cannot power the board directly from an external barrel-jack supply.
In addition, this device is breadboard-friendly, which means you can plug it into a breadboard and build a range of electronic projects. The flash memory is used to store the program: the ATmega168 has 16KB of flash (of which 2KB is used by the bootloader) and the ATmega328 has 32KB. Similarly, the EEPROM is 512 bytes and 1KB, and the SRAM is 1KB and 2KB for the ATmega168 and ATmega328 respectively. The Nano board is very similar to the UNO board, but is smaller in size and has no DC power jack.
Features
• DC current per I/O pin: 40 mA
• SRAM: 2 KB
• EEPROM: 1 KB
• Length: 45 mm
• Width: 18 mm
• Mass: 7 g
• DC power jack: No
Communication
Arduino board. The RX and TX LEDs on the board flash when data is being
transmitted via the FTDI chip and the USB connection to the computer (but
not for serial communication on pins 0 and 1).
Just like other Arduino boards, the ATmega328P-based Arduino Nano is programmed using the Arduino IDE. This IDE supports C programming, so the program is written in the C language.
We can type our program in the IDE, modify the code according to requirements, and then burn it into the microcontroller. The code written in the IDE is known as a sketch; it is compiled in the IDE and then transferred to the microcontroller using a USB cable.
The on-board reset button can be used to reset the board and restart the program running on the microcontroller. The Arduino Nano comes with a built-in bootloader, which rules out the use of an external burner for burning code into the microcontroller. This bootloader uses the STK500 protocol for communication.
6.2 SOFTWARE APPLICATION VERSION 2.1.1
The new major release of the Arduino IDE is faster and even more powerful!
In addition to a more modern editor and a more responsive interface it
features autocompletion, code navigation, and even a live debugger.
Code Editor:
The IDE provides a text editor where you write your Arduino code. This
editor typically includes features like syntax highlighting, auto-indentation,
and code completion to assist you in writing your programs.
Sketches and Libraries:
In Arduino, a program is called a "sketch". The IDE supports multiple
sketches within a project. Additionally, it provides a way to manage
libraries, which are pre-written pieces of code that extend the functionality
of the Arduino.
Compiler:
The IDE has a built-in compiler that translates your human-readable code
into machine-readable instructions (known as machine code or binary
code). This process is known as compilation.
Serial Monitor:
Arduino boards often communicate with a computer over a serial
connection. The IDE includes a Serial Monitor tool, which allows you to
send and receive data between the computer and the Arduino board. This
is particularly useful for debugging and testing.
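On the PC side of the project, the same serial link can also be exercised from Python rather than the Serial Monitor. A rough sketch, assuming the pyserial package and a port name such as /dev/ttyUSB0 or COM3:

import serial
import time

# Open the serial port the Arduino is enumerated on (port name depends on the machine)
arduino = serial.Serial('/dev/ttyUSB0', 9600, timeout=1)
time.sleep(2)                       # wait for the board to reset after the port opens

arduino.write(b'1\r\n')             # send the alert flag, as in the appendix code
reply = arduino.readline()          # read back anything the sketch prints
print(reply.decode(errors='ignore'))
arduino.close()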
Upload and Burn Bootloader:
Once you've written and compiled your code, you use the IDE to upload it
to the Arduino board. In some cases, you might also need to burn a
bootloader onto the microcontroller, which is a small piece of software that
enables the microcontroller to communicate with the IDE.
Board Manager:
The IDE allows you to select the type of Arduino board you're using.
Different boards have different specifications and require specific
configurations.
Tools and Options:
The IDE provides various tools and options for configuring settings,
managing libraries, and selecting different types of Arduino-compatible
hardware.
[5] Predictive Analytics
Utilizing predictive analytics techniques to anticipate potential safety
risks and proactively intervene to prevent accidents or mitigate their
severity.
[6] Human-Machine Collaboration
Developing systems that facilitate collaboration between human
drivers and AI-based assistance systems to enhance safety and
improve driving performance.
[7] Explainable AI
Advancing interpretability and explainability techniques to provide
more transparent insights into the decision-making processes of AI
models, fostering trust and user acceptance.
[8] Autonomous Driving Integration
Integrating safe driving and behavior analysis functions into
autonomous driving systems to ensure safe interaction with human-
driven vehicles and pedestrians.
[9] Continuous Learning and Adaptation
Implementing algorithms capable of continuous learning and
adaptation to evolving driving conditions, user preferences, and
regulatory requirements.
[10] Robustness and Security
Enhancing the robustness and security of safe driving systems
against adversarial attacks, cyber threats, and system failures to
ensure reliability and safety.
[11] Personalized Feedback and Coaching
Providing personalized feedback and coaching to drivers based on
their individual driving behaviors and preferences to promote safer
and more efficient driving habits.
[12] Social and Environmental Impact Analysis
Assessing the social and environmental impact of driving behavior
patterns to inform policy decisions and promote sustainable
transportation practices.
CHAPTER 7
APPENDIX
import argparse
import imutils
import time
import dlib
import math
import cv2
import numpy as np
import serial
import yagmail
import playsound
from imutils.video import VideoStream
from imutils import face_utils
from scipy.spatial import distance as dist

# Serial link to the Arduino Nano (the port name depends on the setup)
arduino = serial.Serial('/dev/ttyUSB0', 9600, timeout=1)

def arduino_data():
    # Send the alert flag to the microcontroller over the serial link
    arduino.write(b'1\r\n')
def mail_send():
    # E-mail credentials and recipients for the drowsiness alert
    mail = 'gunabalar.@gmail.com'
    password = 'SDAD@BADlr'
    dest = ["@gmail.com"]
    filename = "drowsy.jpg"
    body = "Drowsiness detected - snapshot attached."
    # Log in via yagmail and send the captured frame as an attachment
    yag = yagmail.SMTP(mail, password)
    yag.send(
        to=dest,
        contents=body,
        attachments=filename,
    )
    time.sleep(1)
def sound_alarm():
    # Play the alarm sound when drowsiness or yawning is detected
    playsound.playsound("./alarm.wav")
# Load dlib's face detector and the 68-point facial landmark predictor
detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor(
    './dlib_shape_predictor/shape_predictor_68_face_landmarks.dat')

# Initialize the video stream and sleep for a bit, allowing the
# camera sensor to warm up
vs = VideoStream(src=0).start()
# vs = VideoStream(usePiCamera=True).start() # Raspberry Pi
time.sleep(2.0)

# 400x225 to 1024x576
frame_width = 1024
frame_height = 576

# 2D image points. If you change the image, you need to change vector
# (placeholders here; the six points are filled in from the detected landmarks)
image_points = np.array([
    (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0)
], dtype="double")
(lStart, lEnd) = face_utils.FACIAL_LANDMARKS_IDXS["left_eye"]
(rStart, rEnd) = face_utils.FACIAL_LANDMARKS_IDXS["right_eye"]
(mStart, mEnd) = face_utils.FACIAL_LANDMARKS_IDXS["mouth"]

# Aspect-ratio thresholds and the closed-eye frame counter
EYE_AR_THRESH = 0.25
MOUTH_AR_THRESH = 0.65
EYE_AR_CONSEC_FRAMES = 3
COUNTER = 0

while True:
    # grab the frame, resize it and convert it to grayscale
    frame = vs.read()
    frame = imutils.resize(frame, width=frame_width)
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    size = gray.shape
    rects = detector(gray, 0)

    # check to see if a face was detected, and if so, process it
    if len(rects) > 0:
        # loop over the face detections
        for rect in rects:
            # determine the facial landmarks for the face region, then
            # convert the landmark (x, y)-coordinates to a NumPy array
            shape = predictor(gray, rect)
            shape = face_utils.shape_to_np(shape)

            # extract the left and right eye coordinates, then use them
            # to compute the eye aspect ratio for both eyes
            leftEye = shape[lStart:lEnd]
            rightEye = shape[rStart:rEnd]
            leftEAR = eye_aspect_ratio(leftEye)
            rightEAR = eye_aspect_ratio(rightEye)
            ear = (leftEAR + rightEAR) / 2.0

            # compute the convex hull for the left and right eye, then
            # visualize each eye on the frame
            leftEyeHull = cv2.convexHull(leftEye)
            rightEyeHull = cv2.convexHull(rightEye)
            cv2.drawContours(frame, [leftEyeHull], -1, (0, 255, 0), 1)
            cv2.drawContours(frame, [rightEyeHull], -1, (0, 255, 0), 1)

            # if the eye aspect ratio is below the blink threshold,
            # increment the closed-eye frame counter
            if ear < EYE_AR_THRESH:
                COUNTER += 1
                # if the eyes were closed for enough consecutive frames,
                # raise the alarm, notify the Arduino and save a snapshot
                if COUNTER >= EYE_AR_CONSEC_FRAMES:
                    sound_alarm()
                    time.sleep(0.25)
                    arduino_data()
                    cv2.imwrite('drowsy.jpg', frame)
                    time.sleep(0.25)
                    #mail_send() ##
                    print("Done...!")
                    time.sleep(1)
            else:
                COUNTER = 0

            # compute the mouth aspect ratio to detect yawning
            mouth = shape[mStart:mEnd]
            mouthMAR = mouth_aspect_ratio(mouth)
            mar = mouthMAR
            mouthHull = cv2.convexHull(mouth)
            cv2.drawContours(frame, [mouthHull], -1, (0, 255, 0), 1)

            # if the mouth is open wide enough, treat it as a yawn
            if mar > MOUTH_AR_THRESH:
                sound_alarm()
                time.sleep(0.25)
                arduino_data()
                cv2.imwrite('drowsy.jpg', frame)
                time.sleep(0.25)
                #mail_send()
                print("Done...!")
                time.sleep(1)
                #sys.exit()

            # collect the six 2D points used for head-pose estimation
            # (nose tip, chin, eye corners and mouth corners)
            for (i, (x, y)) in enumerate(shape):
                if i == 33:
                    image_points[0] = np.array([x, y], dtype='double')
                elif i == 8:
                    image_points[1] = np.array([x, y], dtype='double')
                elif i == 36:
                    image_points[2] = np.array([x, y], dtype='double')
                elif i == 45:
                    image_points[3] = np.array([x, y], dtype='double')
                elif i == 48:
                    image_points[4] = np.array([x, y], dtype='double')
                elif i == 54:
                    # save to our new key point list
                    image_points[5] = np.array([x, y], dtype='double')
                else:
                    cv2.circle(frame, (x, y), 1, (0, 255, 0), -1)
            # head pose from the collected points (helper from Section 7.2)
            head_tilt_degree, start_point, end_point = getHeadTiltAndCoords(
                size, image_points)

            # draw the tracked head-pose points on the frame
            for p in image_points:
                cv2.circle(frame, (int(p[0]), int(p[1])), 3, (0, 0, 255), -1)

            if head_tilt_degree:
                cv2.putText(frame, 'Head Tilt Degree: ' + str(head_tilt_degree[0]), (170,
                            20), cv2.FONT_HERSHEY_SIMPLEX, 0.5, (255, 255, 255), 1)
    cv2.imshow("Frame", frame)
    key = cv2.waitKey(1) & 0xFF
    if key == ord("q"):
        break

cv2.destroyAllWindows()
vs.stop()
7.2 HEAD POSE
import cv2
import numpy as np
import math

# 3D model points of a generic face (nose tip, chin, eye corners, mouth corners)
model_points = np.array([
    (0.0, 0.0, 0.0),           # nose tip
    (0.0, -330.0, -65.0),      # chin
    (-225.0, 170.0, -135.0),   # left eye, left corner
    (225.0, 170.0, -135.0),    # right eye, right corner
    (-150.0, -150.0, -125.0),  # left mouth corner
    (150.0, -150.0, -125.0)    # right mouth corner
])

def isRotationMatrix(R):
    # a valid rotation matrix satisfies R^T R = I
    Rt = np.transpose(R)
    shouldBeIdentity = np.dot(Rt, R)
    I = np.identity(3, dtype=R.dtype)
    n = np.linalg.norm(I - shouldBeIdentity)
    return n < 1e-6
def rotationMatrixToEulerAngles(R):
    assert(isRotationMatrix(R))
    sy = math.sqrt(R[0, 0] * R[0, 0] + R[1, 0] * R[1, 0])
    singular = sy < 1e-6
    if not singular:
        x = math.atan2(R[2, 1], R[2, 2])
        y = math.atan2(-R[2, 0], sy)
        z = math.atan2(R[1, 0], R[0, 0])
    else:
        x = math.atan2(-R[1, 2], R[1, 1])
        y = math.atan2(-R[2, 0], sy)
        z = 0
    return np.array([x, y, z])

def getHeadTiltAndCoords(size, image_points):
    # helper name assumed; estimates the head tilt from the 2D-3D correspondences
    focal_length = size[1]
    center = (size[1] / 2, size[0] / 2)
    camera_matrix = np.array([[focal_length, 0, center[0]], [0, focal_length, center[1]], [0, 0, 1]], dtype="double")
    dist_coeffs = np.zeros((4, 1))  # assume no lens distortion
    (_, rotation_vector, translation_vector) = cv2.solvePnP(
        model_points, image_points, camera_matrix, dist_coeffs, flags=cv2.SOLVEPNP_ITERATIVE)
    # print "Translation Vector:\n {0}".format(translation_vector)
    (nose_end_point2D, _) = cv2.projectPoints(np.array([(0.0, 0.0, 1000.0)]),
        rotation_vector, translation_vector, camera_matrix, dist_coeffs)
    rotation_matrix, _ = cv2.Rodrigues(rotation_vector)
    head_tilt_degree = abs(
        [-180] - np.rad2deg([rotationMatrixToEulerAngles(rotation_matrix)[0]]))
    # Calculate starting and ending points for the two lines for illustration
    starting_point = (int(image_points[0][0]), int(image_points[0][1]))
    ending_point = (int(nose_end_point2D[0][0][0]), int(nose_end_point2D[0][0][1]))
    return head_tilt_degree, starting_point, ending_point
7.3 MOUTH ACCEPTANCE RATIO
def mouth_aspect_ratio(mouth):
    # vertical lip distances and the horizontal mouth width
    # (distance indices follow the common mouth aspect ratio definition)
    A = dist.euclidean(mouth[2], mouth[10])
    B = dist.euclidean(mouth[4], mouth[8])
    C = dist.euclidean(mouth[0], mouth[6])
    mar = (A + B) / (2.0 * C)
    return mar
7.4 EYE ACCEPTANCE RATIO
def eye_aspect_ratio(eye):
    # vertical distances between the two pairs of eye landmarks
    A = dist.euclidean(eye[1], eye[5])
    B = dist.euclidean(eye[2], eye[4])
    # horizontal distance between the eye corners
    C = dist.euclidean(eye[0], eye[3])
    ear = (A + B) / (2.0 * C)
    return ear
CHAPTER 8
RESULT
8.1 INPUT
8.2 OUTPUT
CHAPTER 9
CONCLUSION
AND FUTURE ENHANCEMENTS
9.1 CONCLUSION
consistent performance across diverse scenarios. By minimizing false
positives and false negatives, users are more likely to trust the system and
accept its recommendations.
REFERENCES
[2] Y. Ji, S. Wang, Y. Zhao, J. Wei, and Y. Lu, ‘‘Fatigue state detection based
on multi-index fusion and state recognition network,’’ IEEE Access, vol.
7, pp. 64136–64147, 2019.
[3] K. Fujiwara, E. Abe, K. Kamata, C. Nakayama, Y. Suzuki, and T. Yamakawa, ''Heart rate variability-based driver drowsiness detection and its validation with EEG,'' IEEE Trans. Biomed. Eng., vol. 66, no. 6, Jun. 2019.