Project report on

DROWSINESS DETECTION

CSE3013 – ARTIFICIAL INTELLIGENCE


Slot – B1

SUBMITTED TO
UMADEVI KS

GROUP MEMBERS
NAME                REG NO     EMAIL ID
ANSHUMAN PARASHAR   16BCI0122  anshuman.parashar2016@vitstudent.ac.in
KASHISH DESAI       16BCI0155  Kashish.desai2016@vitstudent.ac.in
ABSTRACT

Many accidents occur because the driver has fallen asleep behind the wheel. Developing technologies that prevent fatigue-related accidents at the wheel is a major challenge in the domain of accident avoidance systems [1]. We have developed a method for preventing drowsiness during driving by monitoring the driver's eyes and sounding an alarm to alert the driver. Our system uses image processing technology [2] to analyse the video stream of the driver's face captured with a video camera. Fatigue or sleepiness is detected from the degree to which the driver's eyes are open or closed; if the eyes remain closed for a certain amount of time, the driver is alerted. This is a non-contact technique for judging various levels of driver alertness, and it facilitates early detection of a decline in alertness during driving.
KEYWORDS – OpenCV, dlib, image processing, facial landmark detection, drowsiness
detection, accident avoidance
INTRODUCTION
The growing number of traffic accident fatalities in India in recent years has become a problem
of serious concern to society. We have worked on active safety systems that are intended to
reduce the number of accidents causing death or injury. The key to road safety and to preventing
accidents before they happen lies with the driver. For this reason, we have tried to eliminate
situations in which the driver is impaired, which is essential for accident prevention.
Accidents caused by drowsiness at the wheel have a high fatality rate because of the marked
reduction in the driver's abilities of perception, cognition and vehicle control when sleepy and
drowsy. Preventing accidents caused by drowsiness requires a technique for detecting sleepiness
in a driver and a technique for rousing the driver from that sleepy condition.
This paper describes a technique that recognizes the open and closed states of the driver's eyes
and uses them to detect drowsiness at the wheel.
LITERATURE SURVEY
• Driver's drowsiness detection in embedded system, by Tianyi Hong and Huabiao Qin
This paper proposes an efficient method for eye-state identification in driver drowsiness
detection on an embedded system, based on image processing techniques. The method breaks
with traditional drowsiness detection in order to run in real time: it uses face detection and
eye detection to initialize the location of the driver's eyes; an object tracking method then
keeps track of the eyes; finally, the driver's drowsiness state is identified with PERCLOS
from the identified eye states. Experimental results show good agreement with the analysis.

• A contextual and temporal algorithm for driver drowsiness detection, by
Anthony D. McDonald, John D. Lee, Chris Schwarz and Timothy L. Brown
This study designs and evaluates a contextual and temporal algorithm for detecting
drowsiness-related lane departures. The algorithm uses steering angle, pedal input, vehicle
speed and acceleration as input. Speed and acceleration are used to develop a real-time
measure of driving context. These measures are integrated with a Dynamic Bayesian Network
that considers the time dependencies in transitions between the drowsy and awake states.

• Facial features monitoring for real-time drowsiness detection, by B. N. Manu
This paper describes an efficient method for drowsiness detection with three well-defined
phases: facial feature detection using Viola-Jones, eye tracking, and yawning detection.
Once the face is detected, the system is made illumination-invariant by segmenting the skin
region alone and considering only the chromatic components, which rejects most non-face
image backgrounds based on skin colour. Eye tracking and yawning detection are done by
correlation-coefficient template matching.
PROBLEM STATEMENT
We built a computer vision system that automatically detects whether the driver is getting
drowsy in a real-time video feed and, if so, plays an alarm. We monitor the person's eyes in
the feed and calculate how long they have been closed. If the eyes have been shut for long
enough, we try to wake the driver by sounding an alarm.

BASIC ALGORITHM

• First, we set up a camera that monitors a stream for faces.
• If a face is found, we apply facial landmark detection and extract the eye regions.
• Now that we have the eye regions, we can compute the eye aspect ratio to determine
whether the eyes are closed.
• If the eye aspect ratio indicates that the eyes have been closed for a sufficiently long
amount of time, we sound an alarm to wake up the driver.
SOLUTION
Facial landmark detection is the detection and localization of certain key points on the face
that matter for subsequent face-focused tasks such as animation, face recognition, gaze
detection, face tracking, expression recognition and gesture understanding.
from scipy.spatial import distance as dist
from imutils.video import VideoStream
from imutils import face_utils
from threading import Thread
import numpy as np
import playsound
import argparse
import imutils
import time
import dlib
import cv2

We need the SciPy package to compute the Euclidean distance between facial landmark points in
the eye aspect ratio calculation. We also need the imutils package, a collection of computer
vision and image processing convenience functions that makes working with OpenCV easier.

def eye_aspect_ratio(eye):
    A = dist.euclidean(eye[1], eye[5])
    B = dist.euclidean(eye[2], eye[4])
    C = dist.euclidean(eye[0], eye[3])
    ear = (A + B) / (2.0 * C)
    return ear

We compute the Euclidean distances between the two sets of vertical eye landmark (x, y)-
coordinates, then the Euclidean distance between the horizontal eye landmark (x, y)-coordinates.
Finally we compute the eye aspect ratio and return it.
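As a sanity check, the formula can be evaluated on made-up landmark coordinates (the six points below are illustrative, not real detector output; math.dist is used in place of SciPy so the snippet stands alone):

```python
import math

def eye_aspect_ratio(eye):
    # Same computation as above, using math.dist for the Euclidean distance
    # so the example runs without SciPy installed.
    A = math.dist(eye[1], eye[5])
    B = math.dist(eye[2], eye[4])
    C = math.dist(eye[0], eye[3])
    return (A + B) / (2.0 * C)

# Hypothetical open-eye landmarks: two vertical gaps of 2, horizontal span of 3.
open_eye = [(0, 0), (1, 1), (2, 1), (3, 0), (2, -1), (1, -1)]
print(eye_aspect_ratio(open_eye))  # (2 + 2) / (2 * 3) = 0.666...
```

As the eyelids close, the two vertical distances shrink while the horizontal distance stays roughly constant, so the ratio falls towards zero.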
EYE_AR_THRESH = 0.3
EYE_AR_CONSEC_FRAMES = 48
COUNTER = 0
ALARM_ON = False

We define two constants: one for the eye aspect ratio threshold that indicates a blink, and a
second for the number of consecutive frames the ratio must stay below that threshold to set off
the alarm. We then initialize the frame counter and a boolean used to indicate whether the alarm
is going off.
EYE_AR_CONSEC_FRAMES is set to 48, meaning that if a person has closed their eyes for 48
consecutive frames (roughly 1.5-2 seconds at a typical 24-30 fps), we play the alarm sound.
COUNTER is the total number of consecutive frames in which the eye aspect ratio is below
EYE_AR_THRESH. If COUNTER exceeds EYE_AR_CONSEC_FRAMES, we set the boolean ALARM_ON.
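The interplay of these variables can be sketched without a camera by feeding a made-up EAR trace through the same counter logic (the frame threshold is shortened here so the toy trace stays small; the values are illustrative):

```python
# Minimal simulation of the counter/alarm state machine described above,
# driven by a made-up sequence of EAR values instead of a live video feed.
EYE_AR_THRESH = 0.3
EYE_AR_CONSEC_FRAMES = 5   # lowered from 48 so the toy trace stays short

def update(ear, counter, alarm_on):
    """Return the new (counter, alarm_on) state after one frame."""
    if ear < EYE_AR_THRESH:
        counter += 1
        if counter >= EYE_AR_CONSEC_FRAMES:
            alarm_on = True   # in the real system this also starts the sound thread
    else:
        counter, alarm_on = 0, False
    return counter, alarm_on

counter, alarm_on = 0, False
trace = [0.35, 0.28, 0.25, 0.24, 0.22, 0.21, 0.20]  # eyes close and stay closed
for ear in trace:
    counter, alarm_on = update(ear, counter, alarm_on)
print(counter, alarm_on)  # 6 True
```

A single blink only drives the counter up for a few frames before an open-eye frame resets it, so the alarm never fires on normal blinking.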

print("[INFO] loading facial landmark predictor...")
detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor(args["shape_predictor"])
We initialize dlib's face detector and then create the facial landmark predictor, which gives us
the coordinates of the facial landmarks.
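The args dictionary used here and below never appears in the listing; it presumably comes from an argparse setup. The following reconstruction is an assumption consistent with the three keys the code indexes (the flag names themselves are hypothetical):

```python
import argparse

# The listing reads args["shape_predictor"], args["alarm"] and args["webcam"]
# but never builds the parser; this sketch reconstructs one with matching keys.
ap = argparse.ArgumentParser(description="driver drowsiness detection")
ap.add_argument("-p", "--shape-predictor", required=True,
                help="path to dlib's facial landmark predictor (.dat file)")
ap.add_argument("-a", "--alarm", type=str, default="",
                help="path to an alarm .wav file (empty string disables sound)")
ap.add_argument("-w", "--webcam", type=int, default=0,
                help="index of the webcam on the system")

# Parsing a sample command line for illustration:
args = vars(ap.parse_args(["-p", "shape_predictor_68_face_landmarks.dat"]))
print(args["webcam"], repr(args["alarm"]))  # 0 ''
```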
(lStart, lEnd) = face_utils.FACIAL_LANDMARKS_IDXS["left_eye"]
(rStart, rEnd) = face_utils.FACIAL_LANDMARKS_IDXS["right_eye"]

We grab the indexes of the facial landmarks for the left and right eye, respectively.
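For the 68-point dlib landmark layout these index ranges are fixed (left eye: points 42-47, right eye: points 36-41), which is what FACIAL_LANDMARKS_IDXS returns for those keys. The slicing can be checked on a dummy landmark list without dlib installed:

```python
# Index ranges for the 68-point dlib landmark layout; these match what
# face_utils.FACIAL_LANDMARKS_IDXS gives for "left_eye" and "right_eye".
(lStart, lEnd) = (42, 48)
(rStart, rEnd) = (36, 42)

shape = [(i, i) for i in range(68)]  # stand-in for a real landmark array
leftEye = shape[lStart:lEnd]
rightEye = shape[rStart:rEnd]
print(len(leftEye), len(rightEye))  # each eye is described by 6 points
```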
print("[INFO] starting video stream thread...")
vs = VideoStream(src=args["webcam"]).start()
time.sleep(1.0)

while True:
    frame = vs.read()
    frame = imutils.resize(frame, width=450)
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    rects = detector(gray, 0)

Here we start the video stream thread. We then loop over frames from the video stream: we grab a
frame from the threaded stream, resize it, convert it to grayscale and detect faces in the
grayscale frame.
for rect in rects:
    shape = predictor(gray, rect)
    shape = face_utils.shape_to_np(shape)

    leftEye = shape[lStart:lEnd]
    rightEye = shape[rStart:rEnd]
    leftEAR = eye_aspect_ratio(leftEye)
    rightEAR = eye_aspect_ratio(rightEye)

    ear = (leftEAR + rightEAR) / 2.0

We determine the facial landmarks for the face region, then convert the facial landmark (x, y)-
coordinates to a NumPy array. We extract the left and right eye coordinates and use them to
compute the eye aspect ratio for each eye, then average the two ratios together.
if ear < EYE_AR_THRESH:
    COUNTER += 1

    if COUNTER >= EYE_AR_CONSEC_FRAMES:
        if not ALARM_ON:
            ALARM_ON = True

            if args["alarm"] != "":
                t = Thread(target=sound_alarm,
                           args=(args["alarm"],))
                t.daemon = True
                t.start()

        cv2.putText(frame, "DROWSINESS ALERT!", (10, 30),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.7, (0, 0, 255), 2)

else:
    COUNTER = 0
    ALARM_ON = False

We check whether the eye aspect ratio is below the blink threshold and, if so, increment the
blink frame counter; if the eyes have been closed for a sufficient number of frames, we sound
the alarm. Otherwise the eye aspect ratio is not below the blink threshold, so we reset the
counter and turn the alarm off.
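The alarm thread above targets a sound_alarm function that the listing never defines. A minimal reconstruction is sketched below; it assumes the playsound package (already imported at the top of the listing), and the player parameter is a hypothetical injection point added here so the function can be exercised without audio hardware:

```python
def sound_alarm(path, player=None):
    # Play the alarm file at `path`. The `player` argument is an assumption
    # for testability; by default the function falls back to playsound.
    if player is None:
        import playsound          # deferred so the import only happens on use
        player = playsound.playsound
    player(path)

# Exercising the function with a fake player instead of real audio output:
played = []
sound_alarm("alarm.wav", player=played.append)
print(played)  # ['alarm.wav']
```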
RESULTS
When the eyes are open, the driver is alert and awake.

When the driver gets drowsy, the eyes close and the EAR drops. When it stays low for long
enough, we play an alarm that alerts the driver.
CONCLUSION

The tests conducted under a drowsy state have made the following points clear.
• Image recognition achieves highly accurate and reliable detection of drowsiness.
• Image recognition offers a non-contact approach to detecting drowsiness without annoyance
or interference.
• A drowsiness detection system built on the principle of image recognition judges the
driver's alertness level on the basis of a continuous time history and provides early
detection of reduced alertness at an initial stage.

There are a number of issues that remain to be addressed in the drowsiness detection system.
These include improvement of its adaptability to changes in ambient brightness, assurance of
reliability and attainment of a more compact system design.
REFERENCES

• Breed, D. S., Johnson, W. C., & DuVall, W. E. (2002). Accident avoidance systems. U.S.
Patent No. 6,405,132. Washington, DC: U.S. Patent and Trademark Office.
• Van der Walt, S., Schönberger, J. L., Nunez-Iglesias, J., Boulogne, F., Warner, J. D.,
Yager, N., ... & Yu, T. (2014). scikit-image: image processing in Python. PeerJ, 2, e453.
• Zhang, Z., Luo, P., Loy, C. C., & Tang, X. (2014, September). Facial landmark detection
by deep multi-task learning. In European conference on computer vision (pp. 94-108).
Springer, Cham.
