MACHINE LEARNING SYSTEMS TO

DETECT DRIVER’S DROWSINESS

Authors: Esra Vural, Mujdat Cetin, Aytul Ercil, Gwen


Littlewort, Marian Bartlett, Javier Movellan

Presentation by: Sushmitha B.

1
INDEX

• Introduction

• Problem statement

• Techniques and concepts

• Procedure

• Results

• Conclusion

• References

2
3
INTRODUCTION

• Driver drowsiness

• Significance of machine learning, deep learning

• Previous approach

• Pre-assumptions

• Expected accuracy
DIFFERENT APPROACHES

• Sensors and vehicle components

• Physiological signals (EEG, heart rate, pulse rate)

• Computer vision
PROBLEM STATEMENT

• Reduce the number of accidents

• Build a model using facial expressions

• Identifying the most useful facial configurations

• Improve accuracy over previous models


TECHNIQUES AND CONCEPTS

• Gabor filter

• Adaboost

• Facial Action Coding System (FACS)

• Accelerometer to measure the head movement

• Ridge regression

7
GABOR FILTER

• Filtering is a technique for modifying and enhancing an image.

• Gabor filters are orientation-sensitive filters, used for edge analysis and texture
analysis.

• They model the response of cells in the mammalian visual cortex.

Image 1: Gabor Filter
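The idea above can be sketched in a few lines: a Gabor kernel is a cosine carrier modulated by a Gaussian envelope, rotated to a chosen orientation. This is a minimal numpy illustration (parameter values and the FFT-based convolution are illustrative choices, not the authors' implementation).

```python
import numpy as np

def gabor_kernel(ksize=21, sigma=4.0, theta=0.0, lambd=10.0, gamma=0.5, psi=0.0):
    """Real part of a Gabor filter: a cosine carrier modulated by a
    Gaussian envelope, rotated to orientation theta (radians)."""
    half = ksize // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    x_t = x * np.cos(theta) + y * np.sin(theta)    # rotate coordinates
    y_t = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(x_t**2 + (gamma * y_t)**2) / (2 * sigma**2))
    carrier = np.cos(2 * np.pi * x_t / lambd + psi)
    return envelope * carrier

# A small bank of 4 orientations applied to a toy image (FFT convolution).
image = np.random.rand(64, 64)
bank = [gabor_kernel(theta=t) for t in (0, np.pi / 4, np.pi / 2, 3 * np.pi / 4)]
responses = [np.real(np.fft.ifft2(np.fft.fft2(image) * np.fft.fft2(k, image.shape)))
             for k in bank]
```

Each response map highlights edges and texture at one orientation; stacking responses over many orientations and scales gives the feature vector fed to the classifier.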


8
ADABOOST

• Adaboost (adaptive boosting) is a machine learning algorithm.

• Adaboost works by choosing and combining weak classifiers together to form one
strong classifier.

Feature extraction → Weak classifier 1 → Weak classifier 2 → … → Weak classifier n → Final output
(a "negative" output at any stage rejects the sample; only "positive" samples pass to the next classifier)

Diagram 1: Overview of Adaboost
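The weak-to-strong combination described above can be sketched as a small pure-numpy boosting loop over decision stumps. The dataset and round count here are toy values for illustration, not the study's training setup.

```python
import numpy as np

def fit_stump(X, y, w):
    """Best single-feature threshold classifier under sample weights w."""
    best = None
    for j in range(X.shape[1]):
        for thr in np.unique(X[:, j]):
            for sign in (1, -1):
                pred = np.where(X[:, j] > thr, sign, -sign)
                err = w[pred != y].sum()
                if best is None or err < best[0]:
                    best = (err, j, thr, sign)
    return best

def adaboost(X, y, rounds=10):
    """Combine weak stumps into one strong weighted-vote classifier."""
    n = len(y)
    w = np.full(n, 1 / n)                       # uniform sample weights
    ensemble = []
    for _ in range(rounds):
        err, j, thr, sign = fit_stump(X, y, w)
        err = max(err, 1e-10)
        alpha = 0.5 * np.log((1 - err) / err)   # vote weight of this stump
        pred = np.where(X[:, j] > thr, sign, -sign)
        w *= np.exp(-alpha * y * pred)          # up-weight misclassified samples
        w /= w.sum()
        ensemble.append((alpha, j, thr, sign))
    return ensemble

def predict(ensemble, X):
    score = sum(a * np.where(X[:, j] > t, s, -s) for a, j, t, s in ensemble)
    return np.sign(score)

# Toy data: label +1 exactly when feature 0 exceeds 0.5.
rng = np.random.default_rng(0)
X = rng.random((200, 3))
y = np.where(X[:, 0] > 0.5, 1, -1)
model = adaboost(X, y, rounds=5)
acc = (predict(model, X) == y).mean()
```

Each round focuses the next weak classifier on the samples the current ensemble gets wrong, which is why the final weighted vote can be far stronger than any single stump.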

9
FACIAL ACTION CODING SYSTEM(FACS)

• The Facial Action Coding System is a system to taxonomise human facial movements,
based on a system originally developed by Carl-Herman Hjortsjö.

• FACS encodes movements of individual facial muscles from slight, instantaneous
changes in facial appearance.

• It is the common standard for systematically categorising the physical expressions
of emotions, and it has proven useful to psychologists and animators alike.

10
FACIAL ACTION CODING SYSTEM(FACS)

Input image → Pre-processing → Facial feature detection (Gabor filter +
Adaboost classifier) → Facial feature points (active shape model) →
Facial activity inference model → Proposed facial expression / Facial AU's →
Measurement output extraction

Diagram 2: Overview of FACS


11
ACCELEROMETER

• A device that measures changes in acceleration (including gravitational
acceleration) in whatever device it is installed in.

• Accelerometers are used to measure acceleration, tilt, and vibration in
numerous devices.

• APPLICATIONS
1) Monitoring devices in biology, engineering, cars, etc.
2) Image orientation in smartphones.
3) Input for smartphones, tablets, and game controllers.
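As a small illustration of how a 3-degree-of-freedom accelerometer can report head orientation, the sketch below converts static readings (assumed to be in units of g) into pitch and roll angles. This is a hypothetical helper, not the paper's measurement pipeline.

```python
import math

def tilt_angles(ax, ay, az):
    """Pitch and roll in degrees from static accelerometer readings (in g).
    Assumes the only acceleration acting on the sensor is gravity."""
    pitch = math.degrees(math.atan2(-ax, math.hypot(ay, az)))
    roll = math.degrees(math.atan2(ay, az))
    return pitch, roll

# Device flat and level: gravity lies entirely on the z axis,
# so both angles are (approximately) zero.
level = tilt_angles(0.0, 0.0, 1.0)

# Device rolled 90 degrees: gravity lies on the y axis.
rolled = tilt_angles(0.0, 1.0, 0.0)
```

During a drive, sudden changes in these angles over time are the kind of head-movement signal the study correlates with drowsiness.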

12
RIDGE REGRESSION

• Ridge regression is a technique specialised to analyse multiple regression
data that suffer from multicollinearity.

• Multicollinearity
The phenomenon in which one predictor in a multiple regression model can be
linearly predicted from the others with a substantial degree of accuracy.
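The closed form behind ridge regression can be sketched in a few lines of numpy. This is a generic illustration (data and penalty value are made up), not the paper's trained model; note how the penalty term keeps the solution stable even when two predictors are nearly identical.

```python
import numpy as np

def ridge_fit(X, y, lam=1.0):
    """Closed-form ridge regression: w = (X'X + lam*I)^-1 X'y.
    The lam*I term regularises the normal equations, which would be
    nearly singular if columns of X are collinear."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

# Two almost perfectly collinear predictors (classic multicollinearity).
rng = np.random.default_rng(1)
x1 = rng.standard_normal(100)
X = np.column_stack([x1, x1 + 1e-6 * rng.standard_normal(100)])
y = 3 * x1

w_ridge = ridge_fit(X, y, lam=1.0)  # finite, shrunken coefficients
```

With ordinary least squares the near-singular X'X would produce huge, unstable coefficients; ridge splits the effect across the correlated columns and shrinks it slightly.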

13
HOW THE MODEL IS BUILT

• Driving simulation task

• Head movement measurement

• Facial action classifiers

14
METHODS

1. Driving simulation task

• Four subjects played a driving video
game in a 3-hour session around
midnight.

• Video of each subject's face was
recorded with a DV camera for the
entire 3-hour session.

Image 2: Driving task

15
METHODS

2. Head movement measures

Head movements were measured using an
accelerometer with 3 degrees of freedom.

Image 3: Head movement

16
METHODS

3. Facial action classifiers

In this method, FACS is used to
code facial expressions.

Image 4: Action units

17
Facial action classifiers

Automatic face and eye detection → Gabor filter → Feature selection
(Adaboost) → Data-driven classifier (SVM) → Facial AU's

Diagram 3: Overview of facial action classifiers

18
AU  Name               AU  Name                        AU  Name             AU  Name

1   Inner Brow Raise   10  Upper Lip Raiser            18  Lip Pucker       27  Mouth Stretch
2   Outer Brow Raise   11  Nasolabial Furrow Deepener  19  Tongue Show      28  Lips Suck
4   Brow Lowerer       12  Lip Corner Puller           20  Lip Stretch      30  Jaw Sideways
5   Upper Lid Raise    13  Sharp Lip Puller            22  Lip Funneller    32  Bite
6   Cheek Raise        14  Dimpler                     23  Lip Tightener    38  Nostril Dilate
7   Lids Tight         15  Lip Corner Depressor        24  Lip Presser      39  Nostril Compress
8   Lip Toward         16  Lower Lip Depress           25  Lips Part        45  Blink
9   Nose Wrinkle       17  Chin Raise                  26  Jaw Drop

Table 1: Facial action units


RESULTS

• Alert and non-alert scenarios

• Facial action signals

• Drowsiness prediction

• Comparison between Adaboost and MLR

• Coupling effect

20
FACIAL ACTION SIGNALS

• Histogram for Action units

• Area under ROC curve

• Most discriminative feature

Figure 1: Histogram of 2 action units
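The A' statistic reported in these results is the area under the ROC curve: the probability that a randomly chosen drowsy frame scores higher than a randomly chosen alert frame. A minimal rank-sum sketch (the example scores are made up for illustration):

```python
def auc(scores_pos, scores_neg):
    """Area under the ROC curve (A'): the probability that a random
    positive example outscores a random negative one, ties counting half."""
    wins = sum((p > n) + 0.5 * (p == n)
               for p in scores_pos for n in scores_neg)
    return wins / (len(scores_pos) * len(scores_neg))

# Hypothetical per-frame scores: drowsy frames mostly score higher.
drowsy = [0.9, 0.8, 0.7, 0.4]
alert = [0.3, 0.2, 0.6, 0.1]
a_prime = auc(drowsy, alert)  # → 0.9375
```

An A' of .5 means the feature carries no information; values near 1, like blink's .94 for subject 1, mark the most discriminative action units.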


21
Sub    AU   Name                 A'

Sub 1  45   Blink                .94
       17   Chin Raise           .85
       30   Jaw Sideways         .84
       7    Lid Tighten          .81
       39   Nostril Compress     .79
Sub 2  2    Outer Brow Raise     .91
       45   Blink                .80
       17   Chin Raise           .76
       15   Lip Corner Depress   .76
       11   Nasolabial Furrow    .76
Sub 3  45   Blink                .86
       9    Nose Wrinkle         .78
       25   Lips Part            .78
       1    Inner Brow Raise     .74
       20   Lip Stretch          .73
Sub 4  45   Blink                .90
       4    Brow Lower           .81
       15   Lip Corner Depress   .81
       7    Lid Tighten          .80
       39   Nostril Compress     .74

Table 2: Top 5 most discriminant action units for each subject

22
DROWSINESS PREDICTION USING ADABOOST

• Selection of Action detectors

• Trained to predict alert and non-alert states from each frame of video.

• Training samples and testing samples.

• 92% accuracy

23
DROWSINESS PREDICTION USING RIDGE
REGRESSION

• Weightage for FAUs

• Training in two phases

• Time dependency

Feature                        A' (ROC)
AU45                           .9493
AU12                           .8765
AU2                            .8133
AU15                           .8035
AU26                           .7778
AU45, AU2                      .9614
AU45, AU2, AU19                .9693
AU45, AU2, AU19, AU26          .9776
AU45, AU2, AU19, AU26, AU15    .9792
All features                   .8954

Table 3: Drowsiness detection performance
of different AU combinations

24
TIME DEPENDENCY

Figure 2: Time dependency

25
COMPARISON BETWEEN THE MODELS

• 85% similarity in choosing Action units

• Accuracy rate

Classifier Percent Correct Hit rate False alarm rate

Adaboost .92 .92 .06

MLR .94 .98 .13

Table 4: Comparison between the models

26
COUPLING EFFECT

• Non alert and alert states

• Coupling of hand and head movement

• Coupling of head and steering movement

Figure 3: Coupling effect of head movement and steering

27
CONCLUSION

• Driver drowsiness prediction using information from a video

• Comparison between previous approach and current approach

• Positive and negative predictors

28
REFERENCE

[1] H. J. Eoh, M. K. Chung, and S. H. Kim, "Electroencephalographic study of
drowsiness in simulated driving with sleep deprivation," International
Journal of Industrial Ergonomics, vol. 35, no. 4, April 2005.
[2] Haisong Gu and Qiang Ji, "An automated face reader for fatigue
detection," Proceedings of the Sixth IEEE International Conference on
Automatic Face and Gesture Recognition, pp. 111, 2004.
[3] https://stats.stackexchange.com/questions/23388/in-boosting
[4] https://mindmajix.com/ridge-regression
[5] https://prateekvjoshi.com/2014/04/26/understanding-gabor-filters
[6] https://www.cs.cmu.edu/~schneide/tut5/node42.html
[7] https://www.youtube.com/watch?v=Q81RR3yKn30

29
Thank you

30
