
DRIVER DROWSINESS DETECTION USING COMPUTER VISION AND HEART RATE VARIABILITY

PROJECT REPORT

Submitted by

NANDHA KUMAR.T(17BCS210)

NIKHIL.R(17BCS215)
KARTHIK.V(17BCS208)

in partial fulfillment of the requirement for


the award of the degree

of

BACHELOR OF ENGINEERING

in

COMPUTER SCIENCE AND ENGINEERING

Department of Computer Science and Engineering

KUMARAGURU COLLEGE OF TECHNOLOGY,

COIMBATORE -641 049

(An Autonomous Institution Affiliated to Anna University, Chennai)

MAY 2021

KUMARAGURU COLLEGE OF TECHNOLOGY,

COIMBATORE -641 049

(An Autonomous Institution Affiliated to Anna University, Chennai)

BONAFIDE CERTIFICATE

Certified that this project work titled “DRIVER DROWSINESS DETECTION USING COMPUTER VISION AND HEART RATE VARIABILITY” is the bonafide work of NANDHA KUMAR.T (17BCS210), NIKHIL.R (17BCS215) and KARTHIK.V (17BCS208), who carried out the project under my supervision. Certified further that, to the best of my knowledge, the work reported herein does not form part of any other project or dissertation on the basis of which a degree or award was conferred on an earlier occasion on this or any other student.

SIGNATURE
Dr. Devaki P, Ph.D.,
HEAD OF THE DEPARTMENT
Professor and Head,
Department of Computer Science and Engineering,
Kumaraguru College of Technology,
Coimbatore – 641 049

SIGNATURE
Dr. Baskaran K R, Ph.D.,
SUPERVISOR
Professor,
Department of Computer Science and Engineering,
Kumaraguru College of Technology,
Coimbatore – 641 049

The candidates with University register numbers 17BCS210, 17BCS215 and 17BCS208

were examined in the Project Viva-Voce examination held on …………………

Internal Examiner External Examiner

DECLARATION

We hereby declare that the project entitled “DRIVER DROWSINESS DETECTION USING COMPUTER VISION AND HEART RATE VARIABILITY” is a work done by us and that, to the best of our knowledge, no identical work has been submitted to Anna University or any other institution in fulfilment of the requirements of a course of study.

The report is submitted in partial fulfilment of the requirements for the award of the Degree of Bachelor of Engineering in Computer Science and Engineering of Anna University, Chennai.

NANDHA KUMAR.T (17BCS210),

NIKHIL.R(17BCS215),

KARTHIK.V (17BCS208)

ACKNOWLEDGEMENT

We express our profound gratitude to the management of Kumaraguru College of Technology for providing us with the required infrastructure that enabled us to successfully complete the project.

We extend our gratefulness to our Principal Dr. Saravanan D, for providing the
necessary facilities to complete our project.

We would like to make a special acknowledgement and thanks to Dr. Devaki P,


Professor and Head, Department of Computer Science and Engineering for her support
and encouragement throughout the project.

We thank our project coordinator Dr. L. Latha, Ph.D., Professor, Department of Computer Science and Engineering, for her support during the project.

We express our heartfelt thanks to our project guide Dr. Baskaran K R, Ph.D.,
Professor, Department of Computer Science and Engineering, for his whole hearted
support during the project.

We would like to convey our honest thanks to all faculty members and non-teaching staff
members of the department for their support.

ABSTRACT

Drowsy driving is the combination of driving with sleepiness or fatigue, and it is a major cause of road accidents. Fatal car accidents caused by drowsy driving can be mitigated and prevented with driver drowsiness detection. In this proposal, machine learning is employed to detect drowsiness episodes, focusing primarily on the state of the eyes. The alert signal is generated by a combination of devices: a Raspberry Pi teamed up with the Pi Camera module identifies the drowsiness of the driver in real time by tracking eye blinks, and an HRV (Heart Rate Variability) pulse sensor is used to validate the drowsiness of the driver. The HRV pulse sensor measures the heart rate by detecting changes in the light that passes through the tissue as blood is pumped. This work proposes a model to detect and notify drowsy driving and could naturally reduce the number of road accidents.

TABLE OF CONTENTS

CHAPTER NO. TITLE PAGE.NO

ABSTRACT vi

LIST OF TABLES xi
LIST OF FIGURES xii

LIST OF ABBREVIATIONS xiii

1 INTRODUCTION 1

2 LITERATURE REVIEW 3

3 MODULES 10
3.1 RASPBERRY PI 3 MODEL B 10
3.2 PI CAMERA MODULE 13
3.3 HEART RATE SENSOR 15
3.4 MCP3008 16
3.5 BLUETOOTH SPEAKER 18

4 PROPOSED SYSTEM AND ALGORITHMS 19


4.1 INTRODUCTION 19

4.2 CASCADE CLASSIFIER 20


4.3 LBP CASCADE 20
4.4 HAAR CASCADE 22
4.5 BLINK CASCADE 22

5 RESULT 24
5.1 EFFECTS OF LBP CASCADE AND HAAR CASCADE 24

5.2 EFFECT OF BLINK CASCADE 24


5.3 EFFECT OF HEART RATE SENSOR 24
5.4 PERFORMANCE METRICS 25

6 CONCLUSION AND FUTURE WORK 27


7 REFERENCES 28

LIST OF TABLES

TABLE NO. TITLE PAGE NO.

3.1 Raspberry Pi 3 Model B Specifications 10

3.2 Raspberry Pi 3 Model B Pin Details 12

3.3 Pi Camera Module Specifications 14

3.4 Pi Camera Module Pin Details 14

3.5 Heart Rate Sensor Specifications 16

3.6 MCP3008 Specifications 16

3.7 MCP3008 Pin Details 17

4.1 Cascade Specification 23

5.3 The average time taken for alert sound generation 25

5.4 The average heart rate value when drowsy 25

5.5 Accuracy rate of face and eye blink detection 26

5.6 Confusion Matrix Parameters 26

5.7 Confusion Matrix for the model 26

LIST OF FIGURES

FIGURE NO. TITLE PAGE NO.

3.1 Raspberry Pi 3 Model B 11

3.2 Raspberry Pi 3 Model B GPIO Pinout 11

3.3 Pi Camera Module 13

3.4 Pi Camera Module Pinout 13

3.5 Heart Rate Sensor Front 15

3.6 Heart Rate Sensor Rear 15

3.7 MCP3008 Pinout 17

3.8 Bluetooth Speaker 18

4.1 Proposed system Flowchart 19

4.2 Proposed System Circuit Diagram 20

4.3 Euclidean distance formula 21

4.4 Feature Extraction in HAAR Cascade Algorithm 22

5.1 LBP Cascade and HAAR Cascade 24

5.2 Output of Heart Rate Sensor 24

LIST OF ABBREVIATIONS

HRV - Heart Rate Variability


EEG - Electroencephalography
CV - Computer Vision
BVP - Blood Volume Pressure
TEDD - Thoracic Effort Derived Drowsiness index
DCCNN - Deep Cascaded Convolutional Neural Network

1. INTRODUCTION

The car is one of the most widely used vehicles for transportation; people prefer it because it is efficient when pooled for long-distance travel. Driving a car requires undistracted attention and rapid response, and the loss of either attribute may have severe consequences such as accidents, injury and even loss of life. Road accidents claim more lives than deadly diseases such as HIV/AIDS, tuberculosis or diarrhoeal diseases. Road traffic crashes account for 1.35 million deaths each year and are the eighth leading cause of death across all ages and the leading cause of death for children and young adults aged 5-29 years. Car accidents are increasing at an alarming rate every year due to human error.

There are many causes of road accidents, such as lack of attention, disobeying lane rules, fatigue, over-speeding, drunken driving and drowsy driving. Fatigue is estimated to cause around 1,550 deaths and about 71,000 injuries. Drowsy driving contributes substantially to roadway mishaps. It is caused by medication, sleepiness, fatigue or even stress, and shift workers face this issue more often on their return journey. It increases the risk of accidents because the driver is unaware of his surroundings for a few seconds, which can lead to fatalities or injuries. Microsleep episodes can happen at any time while driving: the driver may doze off for a few seconds without realizing it. As per the National Safety Council, losing 3 hours of sleep is equivalent to driving after consuming 3 beers. The maximum safe driving time for a person is 8 hours, with a rest break every 3 hours, yet in India private-sector drivers drive up to 10 hours daily.

Automakers are investing in the prevention of drowsy driving as it is a potential threat to the driver and to others on the road. They combat it with many modern technologies to ensure the early detection and prevention of such incidents.

1.1 CHALLENGES

• Ambient Lighting.

• Detecting performance in Driving.

• Considering Driving Environment.

1.2 OBJECTIVES

• To recognize drowsiness in the motorist and warn him.

• To reduce the number of roadway casualties caused by drowsy driving.

1.3 PROBLEM STATEMENT

To detect whether the driver is drowsy and alert him, using Computer Vision (CV) and Heart Rate Variability (HRV).

2. LITERATURE REVIEW

2.1 DETECTION OF DRIVER FATIGUE

Ji Hyun Yang et al. (2009) revealed the characteristics of drowsy driving through simulator-based human-in-the-loop experiments. They observed that drowsiness has a greater impact on rule-based driving tasks than on skill-based tasks, and they confirmed this by inferring driver alertness using the Bayesian network (BN) paradigm.

2.2 FATIGUE DETECTION USING RASPBERRY PI 3

Akalya Chellappa et al. (2018) measure fatigue through detection of the eyes and face using a Haar cascade classifier; facial landmarks are identified using a shape predictor, and the Eye Aspect Ratio (EAR) is obtained from the Euclidean distance between the eye landmarks. Detecting the eyes and face accurately in each frame helps to determine the degree of drowsiness. The major disadvantage of this system is its high operational complexity. In recent years driving has become an important part of day-to-day life, and especially in urban areas sleepiness-related accidents occur frequently. Road accidents are a global hazard: based on the survey of the National Crime Records Bureau (NCRB), about 1,35,000 traffic-related deaths happen every year in India. These factors led to the development of Intelligent Transportation Systems (ITS). Accidents caused by abnormalities of the driver can be prevented by placing an abnormality-detecting system within the vehicle.

2.3 INTERFACE SYSTEM FOR DROWSINESS DETECTION

Chin-Teng Lin et al. (2010) designed a real-time wireless EEG-based BCI system for drowsiness detection in car applications. It consists of a wireless physiological signal-acquisition module and an embedded signal-processing module. The module is small enough to be embedded into a headband as a wearable EEG device.

2.4 EYE ASPECT RATIO

Sukrit Mehta et al. (2019) propose a system that monitors vehicle drivers and detects loss of attention. The face of the driver is detected by capturing facial landmarks, and a warning is given to the driver to avoid accidents. Non-intrusive methods are preferred over intrusive methods so that the driver is not distracted by sensors attached to his body. Their approach works on two levels. The application is installed on the driver's device running the Android operating system (OS). The process starts with capturing live images from the camera, which are subsequently sent to a local server. At the server side, the Dlib library is employed to detect facial landmarks, and a threshold value is used to decide whether the driver is drowsy. These facial landmarks are used to compute the EAR (Eye Aspect Ratio), which is returned to the driver. The EAR value received at the application end is compared with a threshold value of 0.25; if the EAR value is less than the threshold, this indicates a state of fatigue. In case of drowsiness, the driver and the passengers are alerted by an alarm.
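For illustration, the EAR check described above can be written as a minimal sketch (not the authors' code); the landmark ordering follows dlib's 68-point convention and the 0.25 threshold is the value quoted in the text:

```python
# Minimal sketch of the EAR check (illustrative only).
from math import dist  # Euclidean distance (Python 3.8+)

def eye_aspect_ratio(eye):
    """eye: six (x, y) landmark points of one eye, ordered as in dlib (p1..p6)."""
    a = dist(eye[1], eye[5])   # vertical distance p2-p6
    b = dist(eye[2], eye[4])   # vertical distance p3-p5
    c = dist(eye[0], eye[3])   # horizontal distance p1-p4
    return (a + b) / (2.0 * c)

EAR_THRESHOLD = 0.25           # threshold quoted in the text

def indicates_fatigue(left_eye, right_eye):
    ear = (eye_aspect_ratio(left_eye) + eye_aspect_ratio(right_eye)) / 2.0
    return ear < EAR_THRESHOLD  # below threshold indicates closing/drooping eyes
```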

2.5 DRIVER BEHAVIOR

Hossein Naderi et al. (2018) show that the 12-item Driver Behavior Questionnaire and the 17-item ARDES are two appropriate and validated tools for assessing heavy-vehicle driver characteristics. Some researchers also use GSD, driver exposure, the price of the truck and daily driver fatigue as indices to evaluate heavy vehicles and driver behavior. In this research, a total of 474 professional truck-driver volunteers were interviewed anonymously; all the drivers were male. The drivers had an average age of 44.1 (SD = 9.79) years, with minimum and maximum ages of 18 and 75 years respectively. Five participants reported that they were driving a truck without having a truck driver's license. Seven percent of the participants were aged 18-30 years, 68.6% were aged 31-50 years, and 24.4% were aged 51 years and above. The drivers had an average driving experience of 17.3 (SD = 9.54) years and drove their trucks an average of 3395 km per week (SD = 2051.2).

2.6 SLEEPINESS INDICATORS

David Sandberg et al. (2011) detect driver drowsiness using measures such as indicators of driver sleepiness, rapid steering-wheel movements and a generic variability indicator, applying formulas and algorithms to estimate sleepiness. To estimate a driver's level of sleepiness, indicators may be formed by extracting relevant information from signals such as those aforementioned. The indicators considered are scalar-valued functions that map given segments of signals onto numerical values. Several indicators, based on different signals and of different complexities, have been considered in the literature. Among the indicators based on driving performance, the standard deviation of the lane position and the steering-wheel reversal rate are perhaps the two most commonly studied.

2.7 MIXED-EFFECT MODEL

Xuxin Zhang et al. (2020) used the Tongji University driving simulator combined with an eye-tracking system to determine which individual physiological and behavioral characteristics might be used as drowsiness detectors. They defined three drowsiness levels based on the Karolinska Sleepiness Scale (KSS), and a mixed-effect ordered logit model considering the time cumulative effect of drowsiness (MOL-TCE) was developed. Twenty-seven male drivers aged 24-52 years (mean 33.2, SD 8.0) were recruited for the study. To ensure that the subjects' alertness was not influenced differently by the time of their working day, all were night-shift workers who work 11 hours a day on average; as a result, their bodies are very likely to be in a state of physiological fatigue after work. All subjects were required to have a valid Chinese driver's license, good health, no sleep-related diseases or issues, no drug or medicine use within the month prior to the experiment, no alcohol within 24 h, and no stimulant beverages within 12 h of the experiment. Just prior to the experiment, participants signed an informed-consent form that described the study's requirements, risks and rights. Each participant was paid the equivalent of US$30 at the end of the experimental session.

2.8 NEURAL NETWORK

Maryam Hashemi et al. (2020) describe the algorithm for eye detection and preprocessing, then the data-collection process for driver drowsiness detection, and review another dataset in this field. In their system the image is converted to grayscale, the contrast of the eye region is equalized, and the image is resized to 24x24 pixels. The system detects drowsiness when more than 12 successive images have been recorded, and it alerts the driver with an alarm. Finally, the system framework is presented in detail: three different neural networks are proposed in the work, and the results of these networks are compared with each other on both datasets. The first network is a Fully Designed Neural Network (FD-NN). The authors used a system with an Intel Core i7-6700K CPU @ 4.00 GHz, 16 GB RAM and an NVIDIA GeForce GTX 1070Ti GPU, with Python selected as the programming language on the Anaconda platform.

2.9 THERMAL IMAGING

Serajeddin Ebrahimian et al. (2020) used respiration features and thermal imaging for detecting driver drowsiness; their system achieves 91% efficiency. KNN classifiers were used to classify drivers as awake or sleepy, and data were collected from 30 volunteers to build the model. Several features of the respiratory system change from wakefulness to sleep; the most noticeable changes in the respiration signal are changes in the respiration rhythm. Previous studies showed noticeable changes in the respiration rate during the transition from wakefulness to drowsiness, and the human respiration rate varies in different physiological and mental conditions. Because of the non-stationary nature of the respiration signal, time-frequency analysis can be used to analyze it: a wavelet synchrosqueezed transform is used to extract the instantaneous frequency of the respiration signal. A total of 30 male subjects participated in the tests, and the subjects signed consent forms to fully observe the experimental protocol.

2.10 OPTICAL CORRELATOR BASED ALGORITHM

Elhoussaine Ouabida et al. (2020) proposed eye tracking, eye-state estimation and then a driver drowsiness detection method based on a numerical simulation of the optical Vander Lugt correlator (VLC). By using a specific filter in the Fourier plane of the VLC, the proposed method precisely estimates the location and state of the eyes. The human eye is known to be one of the most significant features of the human face. Characteristics extracted by studying eye movement, texture and gaze have attracted much attention for potential use in expressing a person's needs, mental processes and emotional states. The fact that the human eye is constantly moving is indeed the principle behind the development of robust non-intrusive eye detection and tracking methods. Research into the utilization of eye characteristics, particularly the exploitation of pupil features, is currently advancing at a breathtaking pace. Pupil segmentation and tracking has undergone great development and has found various applications ranging from primary diagnosis of the central nervous system to gaze extraction and fatigue detection.
2.11 VALIDATION WITH EEG

Koichi Fujiwara et al. (2019) developed a driver drowsiness detection system using EEG and heart rate variability and tested it with a driving simulator. The EEG uses sensors fixed on the head to record brain activity, from which drowsiness is detected, while heart rate variability is used as a secondary signal. The proposed algorithm adopts HRV instead of EEG, and EEG-based sleep scoring is used as the reference for the proposed algorithm. Their work explains EEG-based sleep scoring and HRV briefly and proposes an HRV-based drowsiness detection algorithm. Sleep consists of REM (rapid eye movement) sleep and NREM (non-REM) sleep, which is categorized into three levels: N1, N2 and N3. The inclusion criteria for the participants were nonprofessional drivers with a valid driving license; the exclusion criterion was having a chronic illness that may affect HRV, such as cardiovascular disease, arrhythmia, epilepsy or sleep disorders.

2.12 NON-INTRUSIVE METHODS

F. Omidi and G. Nasleseraji (2017) applied various non-intrusive techniques to measure the motorist's drowsiness and discussed their advantages and limitations. Results obtained from a simulated environment will differ considerably from those obtained under real driving conditions. Although there are different methods to measure driver drowsiness, driving is a multi-dimensional task, so a single method alone cannot effectively measure driver drowsiness.

2.13 VISUAL INFORMATION

K C Patel et al. (2018) discuss a non-intrusive system for detecting the motorist's drowsiness based on computer vision, in which the state of the eye is monitored and analyzed; its disadvantage is limited accuracy. Drowsiness is the state of feeling tired or sleepy. We can all become victims of drowsiness while driving, simply after too short a night's sleep, in a tired physical condition or during long journeys. Driver fatigue affects driving ability in three areas: a) it impairs coordination, b) it causes longer reaction times, and c) it impairs judgment. The number of accidents resulting from drowsiness is increasing day by day. Recent statistics estimate that annually 76,000 injuries and 1,200 deaths can be attributed to drowsiness-related crashes. Advancing the technology for detecting driver drowsiness is a noteworthy challenge, as it can help reduce the probability of accidents and thereby the deaths and injuries caused by drowsy driving.

2.14 EAR EEG

M Hwang et al. (2016) detect driver drowsiness using an EEG sensor fixed inside the ear.

2.15 FACIAL FEATURES

W. Deng and R. Wu (2019) use facial expressions to track the sleepiness of the driver. Drowsiness is tracked through the closure of the eyes and mouth of the person driving the vehicle, with a camera used to track the angle of eye opening and the width and height of the mouth. They categorize the related work into three parts: visual object tracking algorithms, facial landmark recognition algorithms and methods of driver-drowsiness detection. They propose a novel system for evaluating the driver's level of fatigue based on face tracking and facial key-point detection.

3. MODULES

3.1 RASPBERRY PI 3 MODEL B

Raspberry Pi is equivalent to a desktop computer which is capable of performing tasks like


playing audio and video files, accessing the internet and other desktop computer-specific
applications. It is a stand-alone computer with Bluetooth compatibility and cable-free LAN.
Raspberry Pi 3 Model B is used in this project which is the initial model of the third- generation
of Raspberry Pi devices. It has several advantages like small in size, less power consumption
and easy portability as a consequence of SD card support. Able to render images and videos
at 1080p resolution, Python is the programming language required whichis easy, stronger, and
quick processor, multitasking available.

Table 3.1 Raspberry Pi 3 Model B Specifications

Component Description
CPU 4× ARM Cortex-A53, 1.2 GHz
GPU Broadcom VideoCore IV
RAM 1 GB LPDDR2 (900 MHz)
Networking 10/100 Ethernet, 2.4 GHz 802.11n wireless
Energy
Storage microSD
GPIO 40-pin header, populated
Ports HDMI, 3.5 mm analogue audio-video jack, 4× USB 2.0, Ethernet, Camera Serial Interface (CSI), Display Serial Interface (DSI)

Figure 3.1 Raspberry Pi 3 Model B

Figure 3.2 Raspberry Pi 3 Model B GPIO Pinout

Table 3.2 Raspberry Pi 3 Model B Pin Details

Pin Number Pin Name Description
6,9,14,20,25,30,34,39 Ground (GND) Connected to the ground of the circuit
1,17 Power pins Connected to 3.3V positive voltage for components
2,4 Power pins Connected to 5V positive voltage for components
27,28 Reserved pins Reserved for communication with the EEPROM
3,5,7,8,10,11,12,13,15,16,18,19,21,22,23,24,26,29,31,32,33,35,36,37,38,40 General Purpose Input/Output (GPIO) Used as input/output for general-purpose I/O devices

3.2 PI CAMERA MODULE
In this project, the Pi Camera module v1.3 (5 MP) is used to capture frames. It is portable and occupies little space. The module is connected to the Raspberry Pi CSI camera port and communicates via the MIPI camera serial interface protocol. The Raspberry Pi also supports USB webcams for capturing frames.
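A minimal sketch of continuous frame capture with the Pi Camera, assuming the picamera Python package; the resolution and frame rate shown are illustrative choices, not values specified in this report:

```python
# Sketch: continuous frame capture from the Pi Camera for OpenCV processing.
# Assumes the `picamera` package; resolution and frame rate are illustrative.
import cv2
from picamera import PiCamera
from picamera.array import PiRGBArray

camera = PiCamera()
camera.resolution = (640, 480)
camera.framerate = 24
raw = PiRGBArray(camera, size=camera.resolution)

for frame in camera.capture_continuous(raw, format="bgr", use_video_port=True):
    image = frame.array                              # NumPy BGR image for OpenCV
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    # ... face and eye detection would run on `gray` here ...
    raw.truncate(0)                                  # clear the stream for the next frame
```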

Figure 3.3 Pi Camera Module

Figure 3.4 Pi Camera Module Pinout

Table 3.3 Pi Camera Module Specifications

Feature Description
Version 1.3
Camera Quality 5MP color camera
Connector MIPI Camera serial interface
Resolution 2592 * 1944. Supports 1080p,
720p and 480p
Weight 3gm
Pi Compatibility Model A and B

Table 3.4 Pi Camera Module Pin Details

Pin Number Pin Name Description


1 Ground System Ground
2,3 CAM1_DN0, CAM1_DP0 MIPI Data Positive and MIPI
Data Negative for data lane 0
4 Ground System Ground
5,6 CAM1_DN1, CAM1_DP1 MIPI Data Positive and MIPI
Data Negative for data lane 1
7 Ground System Ground
8,9 CAM1_CN, CAM1_CP These pins provide the clock
pulses for MIPI data lanes
10 Ground System Ground

11 CAM_GPIO GPIO pin used optionally

12 CAM_CLK Optional clock pin

13,14 SCL0, SDA0 Used for I2C communication

15 +3.3V Power pin

3.3 HEART RATE SENSOR

Heart rate can be measured using various techniques such as photoplethysmography (light transmission and reflection), oscillometry (monitoring blood pressure), phonocardiography (detecting and recording heart sounds) and electrocardiography (recording the electrical signals of the heart).

In this work, heart rate is measured using the photoplethysmography technique, which is non-invasive and economical. The sensor works without any prior configuration. It consists of an LED, an ambient light detector and a noise-elimination circuit. The LED acts as a light source, the ambient light detector records the reflection of the light from the blood, and the noise-elimination circuitry helps remove high-frequency noise signals.

Figure 3.5 Heart Rate Sensor Front

Figure 3.6 Heart Rate Sensor Rear

Table 3.5 Heart Rate Sensor Specifications

Feature Description
Diameter 0.625" (~16mm)
Overall thickness 0.125" (~3mm)
Working Voltage 3V to 5V
Working Current ~4mA at 5V

3.4 MCP3008

The MCP3008 is a 10-bit, 8-channel Analog-to-Digital (A/D) converter. Since the Raspberry Pi cannot read analog inputs directly, this IC is used to gather analog data from sensors such as temperature and light sensors. The converted digital output can be read by connecting the IC to the GPIO (SPI) pins of the Raspberry Pi.
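A minimal sketch of reading one MCP3008 channel from the Raspberry Pi, assuming the spidev Python library; the SPI bus/device numbers and the use of channel 0 for the analog sensor are illustrative assumptions:

```python
# Sketch: read a single-ended MCP3008 channel over SPI using the spidev library.
# Channel 0 and SPI bus 0 / chip-select 0 are assumptions for illustration.
import spidev

spi = spidev.SpiDev()
spi.open(0, 0)                   # SPI bus 0, chip-select 0 (CE0)
spi.max_speed_hz = 1_350_000

def read_adc(channel):
    """Return the 10-bit value (0-1023) of the given MCP3008 channel (0-7)."""
    # Start bit, then single-ended mode + channel select, then a padding byte.
    reply = spi.xfer2([1, (8 + channel) << 4, 0])
    return ((reply[1] & 3) << 8) | reply[2]

raw = read_adc(0)                # e.g. the analog sensor wired to channel 0
voltage = raw * 3.3 / 1023       # assuming a 3.3 V reference on VREF
print(raw, voltage)
```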

Table 3.6 MCP3008 Specifications

Feature Specification
Channels 8
Resolution 10-bit
Communication protocol Serial SPI interface
Operating voltage 2.7V to 5V
ADC method Successive Approximation (SAR)
Sampling Rate 200ksps and 75ksps for 5V and 2.7V resp.
Pins 16

Figure 3.7 MCP3008 pinout

Table 3.7 MCP3008 Pin Details

Pin Number Pin Name Description
1,2,3,4,5,6,7,8 Analog Input Channels The 8 input pins to which the analog voltage to be measured is applied
9 Digital Ground Connected to the ground of the circuit
10 Chip Select / Shutdown (CS/SHDN) Connected to a GPIO pin of the MCU to turn the IC on or off
11 Serial Data In (DIN) Used for SPI communication
12 Serial Data Out (DOUT) Used for SPI communication
13 Serial Clock (CLK) Provides the clock signal for SPI communication
14 Analog Ground Connected to the ground of the reference voltage
15 Reference Voltage (VREF) Connected to the reference voltage for ADC conversion
16 Positive Voltage (VDD) Connected to the positive voltage of the circuit

3.5 BLUETOOTH SPEAKER

Bluetooth is a wireless communication technology used to exchange data between devices. It replaces cables and makes short-range communication more convenient. Bluetooth devices generally operate between 2.402 GHz and 2.480 GHz. A Bluetooth speaker acts as a loudspeaker that uses RF technology to receive audio signals from paired devices, and most Bluetooth speakers have a rechargeable battery to provide long playback time.

Figure 3.8 Bluetooth Speaker

4. PROPOSED SYSTEM AND ALGORITHMS

4.1 INTRODUCTION
This work focuses on eye blink and heart rate variability, using the Raspberry Pi 3 Model B and the Pi Camera module to provide an efficient and economical system for detecting driver drowsiness. An LBP cascade, a HAAR cascade and a custom blink cascade are used to detect the face, eyes and eye blinks of the driver. A Python environment with OpenCV is used to process the frames captured by the Raspberry Pi camera. After an eye blink is detected, drowsiness is validated by checking the heart rate using the HRV sensor; the heart rate when driving sleepy is 81.5 ± 9.2 beats/min. Upon successful validation of drowsiness from both the eye blinks and the heart rate, the driver is alerted via an alert sound on a Bluetooth speaker integrated with the Raspberry Pi. The entire setup can be placed on the dashboard of the car as it occupies minimal space.

Figure 4.1 Proposed system Flowchart
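The flowchart can be summarised with the following simplified sketch of the detection loop. The cascade file names, the consecutive-frame count and the helper stubs are illustrative assumptions; only the 81.5 ± 9.2 beats/min band comes from the figure quoted above.

```python
# Simplified sketch of the detection loop (control flow only).
# File names, CLOSED_FRAMES and the helper stubs are illustrative assumptions.
import cv2

face_cascade = cv2.CascadeClassifier("lbp_face.xml")    # placeholder cascade files
blink_cascade = cv2.CascadeClassifier("blink.xml")

CLOSED_FRAMES = 5            # consecutive closed-eye frames treated as drowsy (assumed)
DROWSY_HR = (72.3, 90.7)     # 81.5 +/- 9.2 beats/min, as quoted above

def read_heart_rate():
    # Placeholder: in the real setup this would read the pulse sensor via the MCP3008.
    return 81.0

def play_alert():
    # Placeholder: in the real setup this would play a tone on the Bluetooth speaker.
    print("ALERT: drowsiness detected")

cap = cv2.VideoCapture(0)    # stand-in for the Pi Camera stream
closed = 0
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in face_cascade.detectMultiScale(gray, 1.1, 5):
        roi = gray[y:y + h, x:x + w]
        eyes_closed = len(blink_cascade.detectMultiScale(roi, 1.1, 5)) > 0
        closed = closed + 1 if eyes_closed else 0
    if closed >= CLOSED_FRAMES:
        bpm = read_heart_rate()
        if DROWSY_HR[0] <= bpm <= DROWSY_HR[1]:
            play_alert()
            closed = 0
```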

Figure 4.2 Proposed System Circuit Diagram

4.2 CASCADE CLASSIFIER

A cascade classifier (a cascade of boosted classifiers working with Haar-like features) is trained with many "positive" sample views of a target object and random "negative" images of the same size. Once training is done, the classifier is applied to an image to search for the trained object by moving a search window across the image; at each location the candidate region either passes every stage of the cascade or is rejected. Cascade classifiers are commonly used in image processing for object detection, face detection and recognition. Training a custom cascade is also possible in OpenCV; in this work the training was carried out with the Cascade Trainer GUI tool.
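For illustration, loading a trained cascade XML file and running the sliding-window search described above takes only a few lines in OpenCV (the file name and parameter values below are placeholders, not the project's exact settings):

```python
# Sketch: applying a trained cascade classifier with OpenCV.
# "face_cascade.xml" and "frame.jpg" are placeholders for illustration.
import cv2

cascade = cv2.CascadeClassifier("face_cascade.xml")
gray = cv2.cvtColor(cv2.imread("frame.jpg"), cv2.COLOR_BGR2GRAY)

# detectMultiScale moves the search window across the image at several scales;
# scaleFactor sets the scale step, minNeighbors how many overlapping detections
# are needed before a candidate is accepted.
objects = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5,
                                   minSize=(30, 30))
for (x, y, w, h) in objects:
    print("object at", x, y, w, h)
```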

4.3 LBP CASCADE

LBP cascades are used to recognize the face. A face is distinguished from a non-face by extracting LBP features into an LBP feature vector. To train the classifier, the Tufts Face Database is used, which provides 500 face images as positive samples; the negative images are random images that do not contain faces. The idea behind the Local Binary Pattern is to divide the complete image into smaller parts and then compare every element with its neighbouring elements, taking the centre element as a threshold. Each smaller part is represented as a 3 x 3 matrix. For every value surrounding the central value (the threshold), a binary value is assigned: "1" for values equal to or greater than the threshold and "0" for values less than the threshold. The matrix then contains only 0s and 1s (neglecting the central value). These binary values are chained together from every coordinate of the matrix, the resulting binary number is converted to a decimal value, and that value is assigned to the middle element of the matrix. At the end of this process (the LBP procedure), we have a replacement image that represents the characteristics of the initial image better.
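The LBP step described above can be illustrated with a short sketch that codes a single 3 x 3 neighbourhood (illustrative only; the sample values are arbitrary):

```python
# Sketch of the LBP step: threshold a 3x3 neighbourhood by its centre pixel and
# chain the resulting bits into a decimal code.
import numpy as np

def lbp_code(patch):
    """patch: 3x3 NumPy array of grey values. Returns the 8-bit LBP code."""
    centre = patch[1, 1]
    # Neighbours taken clockwise starting from the top-left corner.
    neighbours = [patch[0, 0], patch[0, 1], patch[0, 2], patch[1, 2],
                  patch[2, 2], patch[2, 1], patch[2, 0], patch[1, 0]]
    bits = ['1' if v >= centre else '0' for v in neighbours]
    return int(''.join(bits), 2)        # binary string -> decimal LBP value

patch = np.array([[90, 120, 60],
                  [70, 100, 140],
                  [110, 80, 95]])
print(lbp_code(patch))                  # prints the LBP code of this sample patch
```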
Training Parameters

Positive Image : 500


Negative Image : 1000
Sample Width : 30
Sample Height : 30
Dataset : Tufts-Face-Database
Tool Used : Cascade Trainer GUI

Performing the face detection

Several methods can be used to compare the histograms, i.e. to determine the distance between two histograms, for instance the Euclidean distance, the chi-square distance, etc. The Euclidean distance between two histograms H1 and H2 is

D(H1, H2) = sqrt( Σi (H1(i) − H2(i))² )    (1)

Figure 4.3 Euclidean distance formula

4.4 HAAR CASCADE

This procedure is based on Haar wavelets and examines pixels in the image as square feature regions. It uses machine learning to achieve a high degree of accuracy from "training data". It employs the "integral image" concept to calculate the identified "features" efficiently, and it uses the AdaBoost learning algorithm, which selects a small number of important features from a large set to produce an efficient cascade of classifiers.

Figure 4.4 Feature Extraction in HAAR Cascade Algorithm
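A small sketch of the integral-image idea used by this algorithm (the values are illustrative); a two-rectangle Haar-like feature is simply the difference between two such rectangle sums:

```python
# Sketch: rectangle sums via the integral image, the trick Haar cascades use to
# evaluate features in constant time. The 24x24 window and values are illustrative.
import cv2
import numpy as np

img = np.random.randint(0, 256, (24, 24), dtype=np.uint8)   # a 24x24 sample window
ii = cv2.integral(img)          # integral image, shape (25, 25)

def rect_sum(x, y, w, h):
    """Sum of pixels in the rectangle with top-left (x, y) and size w x h."""
    return int(ii[y + h, x + w] - ii[y, x + w] - ii[y + h, x] + ii[y, x])

# A two-rectangle (edge) Haar-like feature: left half minus right half.
feature = rect_sum(0, 0, 12, 24) - rect_sum(12, 0, 12, 24)
print(feature)
```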


Training Parameters:

Positive Image :500


Negative Image: 1000
Sample Width : 30
Sample Height : 30
Dataset : MRL Dataset
Tool Used : Cascade Trainer GUI

Process

The input video frame is converted into grayscale, and the faces in the frame are found using the trained classifier. The localized position of each face is given as rect(x-position, y-position, width, height); this location is then used for eye detection.
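A sketch of this process with placeholder file names (not the exact project script): the face rectangle returned by the cascade restricts the eye search to the face region.

```python
# Sketch: grayscale conversion, face localisation, then eye detection inside the
# face rectangle. Cascade and image file names are placeholders.
import cv2

face_cascade = cv2.CascadeClassifier("face_cascade.xml")
eye_cascade = cv2.CascadeClassifier("haarcascade_eye.xml")

frame = cv2.imread("frame.jpg")
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

for (x, y, w, h) in face_cascade.detectMultiScale(gray, 1.1, 5):
    cv2.rectangle(frame, (x, y), (x + w, y + h), (255, 0, 0), 2)
    face_roi = gray[y:y + h, x:x + w]                 # rect(x, y, w, h) of the face
    for (ex, ey, ew, eh) in eye_cascade.detectMultiScale(face_roi, 1.1, 5):
        cv2.rectangle(frame, (x + ex, y + ey), (x + ex + ew, y + ey + eh),
                      (0, 255, 0), 2)                 # green box around each eye
```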

4.5 BLINK CASCADE


Eye status detection is a fundamental step in driver drowsiness detection. In this module, a custom HAAR cascade determines the status of the eyes. The efficiency of the object detection method depends strongly on the training dataset. The MRL dataset consists of over 15,000 images of eyes; the closed-eye images are taken as the positive images, and the negative images are random images that do not contain eyes. The model is trained on these images, and the output is an XML file that can be used in OpenCV to detect the status of the motorist's eyes.

Training Parameters:
Positive Image : 1000
Negative Image: 2000
Sample Width : 30
Sample Height : 30
Dataset : MRL Dataset
Tool Used : Cascade Trainer GUI

Process

The input is the coordinates of the eyes. The status of each eye is detected, and if the eye is closed the colour of its bounding rectangle changes from green to cyan.
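A sketch of this colour-change logic (illustrative only; blink_cascade stands for the XML file produced by the training above, and eye_boxes for the eye coordinates obtained in the previous step):

```python
# Sketch: within each detected eye region, the custom blink cascade looks for a
# closed eye; the bounding-box colour switches from green to cyan when found.
import cv2

blink_cascade = cv2.CascadeClassifier("blink_cascade.xml")   # trained closed-eye XML

GREEN, CYAN = (0, 255, 0), (255, 255, 0)        # BGR colours

def draw_eye_status(frame, gray, eye_boxes):
    for (x, y, w, h) in eye_boxes:              # eye coordinates from the HAAR step
        eye_roi = gray[y:y + h, x:x + w]
        closed = len(blink_cascade.detectMultiScale(eye_roi, 1.1, 5)) > 0
        colour = CYAN if closed else GREEN
        cv2.rectangle(frame, (x, y), (x + w, y + h), colour, 2)
    return frame
```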

Table 4.5 Cascade Specification

S.No Cascade Name Positive Images Negative Images
1 LBP 500 1000
2 HAAR 500 1000
3 Custom Blink 1000 2000

5. RESULT

5.1 EFFECTS OF LBP CASCADE AND HAAR CASCADE

Input

Image captured from the Pi Camera Module

Output
The image captured from the Pi Camera Module is shown with the detected facial features marked.

Figure 5.1 LBP Cascade and HAAR Cascade

5.2 EFFECT OF BLINK CASCADE


Input

The localization of the eyes detected by HAAR Cascade and LBP Cascade.

Output

Status of the eye.

5.3 EFFECT OF HEART RATE SENSOR

Output

Figure 5.2 Output of Heart Rate Sensor

5.4 PERFORMANCE METRICS

Table 5.3 The average time taken for alert sound generation

Subjects Time taken to alert after eye blink & HRV validation (s)
1 1.7404
2 1.5497
3 1.6689
4 2.0503
5 1.5974
6 1.6927
7 1.6212
8 1.5497
Average response time 1.6837

Table 5.4 The average heart rate value when drowsy

Subjects Heart Rate (bpm)
1 83
2 80
3 81
4 81
5 80
6 82
7 84
8 82
Average heart rate when drowsy 81.625

Table 5.5 Accuracy rate of face and eye blink detection

Gender Total Blinks True Positive (%) False Positive (%) False Negative (%)
Male 1 39 97.43 0.00 2.56
Male 2 44 100 0.00 0.00
Male 3 37 100 0.00 0.00
Male 4 38 97.36 2.63 0.00
Male 5 34 100 0.00 0.00
Female 1 33 100 0.00 0.00
Female 2 41 97.56 2.43 0.00
Female 3 45 95.55 2.22 2.22
Total 311 98.3 0.96 0.64

Table 5.6 Confusion Matrix Parameters

Parameters TP FP TN FN
Driver Blink No Blink No Blink Blink
System Alert Alert No Alert No Alert

Table 5.7 Confusion Matrix for the model

50 (TN) 10 (FP) 60
5 (FN) 95 (TP) 100
55 105 160 (N)

Testing Conditions
Number of test subjects = 8
Number of readings/subject = 20 (10 readings in the morning, 10 readings at night)
Total number of readings = 160

Metrics
Accuracy = (TP + TN) / (TP + TN + FP + FN) = (95 + 50) / 160 = 90.6%
Recall = TP / (TP + FN) = 95 / 100 = 95%
Precision = TP / (TP + FP) = 95 / 105 = 90.4%
F-measure = (2 × Recall × Precision) / (Recall + Precision) = (2 × 0.95 × 0.904) / (0.95 + 0.904) = 1.7176 / 1.854 ≈ 92.6%
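The figures above can be reproduced with a few lines of Python from the confusion-matrix counts in Table 5.7 (small differences from the rounded values in the text are due only to rounding):

```python
# Recompute the metrics above from the confusion-matrix counts in Table 5.7.
TP, TN, FP, FN = 95, 50, 10, 5

accuracy = (TP + TN) / (TP + TN + FP + FN)
recall = TP / (TP + FN)
precision = TP / (TP + FP)
f_measure = 2 * recall * precision / (recall + precision)

print(f"Accuracy  = {accuracy:.1%}")    # 90.6%
print(f"Recall    = {recall:.1%}")      # 95.0%
print(f"Precision = {precision:.1%}")   # 90.5%
print(f"F-measure = {f_measure:.1%}")   # 92.7%
```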
6. CONCLUSION AND FUTURE WORK

Drowsy driving is caused by sleepiness or fatigue, and it contributes to a major share of road accidents. Hence a robust system is required to detect such episodes and notify the driver, in order to avoid incidents and make the roads safer. The proposed system is quick and efficient in detecting and alerting such episodes with a high degree of accuracy; it is also quite inexpensive and easy to install. The system could be improved with the addition of an IR camera, which would enable better accuracy in low-light conditions.
The current system makes use of the vehicle's stereo system via Bluetooth for alerting the driver; future versions could include a dedicated buzzer to alert the driver. The system can be further enhanced with the addition of an SMS module, which could report the driver's status to the authorities if the driver remains unresponsive even after being alerted.

7. REFERENCES

1. J. H. Yang, Z. Mao, L. Tijerina, T. Pilutti, J. F. Coughlin and E. Feron, "Detection of Driver


Fatigue Caused by Sleep Deprivation," in IEEE Transactions on Systems, Man, and Cybernetics
- Part A: Systems and Humans, vol. 39, no. 4, pp. 694-705, July 2009, doi:
10.1109/TSMCA.2009.2018634.

2. A. Chellappa, M. S. Reddy, R. Ezhilarasie, S. Kanimozhi Suguna, A. Umamakeswari, “Fatigue detection using Raspberry Pi 3,” International Journal of Engineering and Technology (UAE), vol. 7, issue 2, pp. 29-32.

3. C.-T. Lin et al., “A real-time wireless brain–computer interface system for drowsiness
detection,” IEEE Trans. Biomed. Circuits Syst., vol. 4, no. 4, pp. 214–222, Aug. 2010.

4. Sukrit Mehta et al., “Real-Time Driver Drowsiness Detection System Using Eye Aspect Ratio and Eye Closure Ratio.”

5. W. Deng and R. Wu, "Real-Time Driver-Drowsiness Detection System Using Facial


Features," in IEEE Access, vol. 7, pp.118727-118738, 2019, doi:
10.1109/ACCESS.2019.2936663.

6. O. Khunpisuth, T. Chotchinasri, V. Koschakosai and N. Hnoohom, "Driver Drowsiness


Detection Using Eye-Closeness Detection," 2016 12th International Conference on Signal-
Image Technology & Internet-Based Systems (SITIS), Naples, 2016, pp. 661-668, doi:
10.1109/SITIS.2016.110.

7. F. You, X. Li, Y. Gong, H. Wang and H. Li, "A Real-time Driving Drowsiness Detection
Algorithm With Individual Differences Consideration," in IEEE Access, vol. 7, pp. 179396-
179408, 2019, doi: 10.1109/ACCESS.2019.2958667.

8. F. Guede-Fernández, M. Fernández-Chimeno, J. Ramos-Castro and M. A. García- González,


"Driver Drowsiness Detection Based on Respiratory Signal Analysis," in IEEE Access, vol.
7, pp. 81826-81838, 2019, doi: 10.1109/ACCESS.2019.2924481.

9. M. A. Tanveer, M. J. Khan, M. J. Qureshi, N. Naseer and K. Hong, "Enhanced Drowsiness


Detection Using Deep Learning: An fNIRS Study," in IEEE Access, vol. 7, pp. 137920-
137929, 2019, doi: 10.1109/ACCESS.2019.2942838.

10. C. Zhang, X. Wu, X. Zheng and S. Yu, "Driver Drowsiness Detection Using Multi- Channel
Second Order Blind Identifications," in IEEE Access, vol. 7, pp. 11829-11843, 2019, doi:
10.1109/ACCESS.2019.2891971.

11. H. Naderi et al., “Assessing the relationship between heavy vehicle driver sleep problems and confirmed driver behavior measurement tools in Iran.”

12. D. Sandberg et al., “Detecting driver sleepiness using optimized nonlinear combinations of
sleepiness indicators,” IEEE Trans. Intell. Transp. Syst., vol. 12, no. 1, pp. 97–108, Mar.
2011.

13. X. Zhang et al., “Driver drowsiness detection using mixed-effect ordered logit model considering time cumulative effect.”

14. Hashemi, M., Mirrashid, A. & Beheshti Shirazi, A., “Driver Safety Development: Real-Time Driver Drowsiness Detection System Based on Convolutional Neural Network,” SN Computer Science, vol. 1, article 289 (2020). https://doi.org/10.1007/s42979-020-00306-9 (accessed 5.4.2021)

15. Kiashari, S.E.H., Nahvi, A., Bakhoda, H. et al. Evaluation of driver drowsiness using
respiration analysis by thermal imaging on a driving simulator. Multimed
Tools Appl 79, 17793–17815 (2020). https://doi.org/10.1007/s11042-020-08696-x
16. The Tufts Face Database, http://tdface.ece.tufts.edu/ (accessed 17.4.2021)

17. MRL Eye Dataset, http://mrl.cs.vsb.cz/eyedataset/ (accessed 20.4.2021)
