
2016 International Conference on Electrical Engineering and Automation (ICEEA 2016)

ISBN: 978-1-60595-407-3
 
Side Face Contour Extraction Algorithm for Driving Fatigue Detection
Jian-mei LEI 1,2,*, Lei CHEN 2, Zi ZENG 3, Zhi-da LAI 1, Xin LIU 2, Qing-wen HAN 1,2 and Li JIN 2

1 State Key Laboratory of Vehicle NVH and Safety Technology, Chongqing, China
2 Chongqing University, Chongqing, China
3 Chongqing No.7 High School, Chongqing, China
* Corresponding author

Keywords: Side face profile, Skin color model, Contour extraction, Driving fatigue.

Abstract. Fatigue-driving detection, which employs pattern recognition to discover the driver's fatigue status, is considered a key technique for improving road safety. However, the widely used frontal face recognition systems have some problems, such as low recognition accuracy, poor real-time response, and high algorithmic complexity. As an efficient supplement, this paper proposes a side face contour extraction algorithm composed of basic side face extraction, image correction and side face contour extraction. Experimental results show that the proposed algorithm can effectively extract the nose and chin contour lines from a side face image, which helps detect the nodding posture caused by fatigue and lays a good foundation for real-time fatigue-state tracking applications.

Introduction
Road transportation safety is a ‘grand challenge’ for modern industrial society, with close to a billion vehicles on the road today and a doubling projected over the next 20 years. Traffic accidents take tens of thousands of lives each year, outnumbering many deadly diseases and natural disasters [1]. A number of studies worldwide have shown that bad driving behaviors, including fatigue driving, are among the main causes of traffic accidents. Therefore, researchers are committed to developing new techniques to detect the fatigue-driving state and to propose corresponding alarm methods. It is generally accepted that pattern recognition is an effective measure in the fatigue-driving detection process, and related techniques, such as face location detection [2] and eye detection [3][4], are considered key issues in this field of research.
The mainstream device for monitoring the driver's state is a video camera, and mainstream driving fatigue detection methods focus on eye, mouth and head posture detection in frontal images of the human face [2]. However, due to individual differences between human faces, frontal image analysis always involves a series of correction techniques, which lead to relatively long decision times and reduce the system's real-time performance. Furthermore, ambient illumination is another serious impact factor: the variation in frontal face images caused by ambient illumination can be even larger than the variation between face images of different people [5].
To overcome the shortcomings of fatigue detection methods that use frontal images, this paper proposes a side face recognition method. Unlike frontal image detection, the proposed method focuses on the relationship between the facial profile and the driver's fatigue status, and makes decisions according to the profile changes of the nose, mouth and chin.
The rest of the paper is organized as follows. Section 1 presents the flowchart of the proposed side face recognition method. Section 2 presents the corresponding illumination-adaptive techniques, while Section 3 gives the accuracy enhancement method. The facial contour extraction measure is described in Section 4. Finally, Section 5 draws some conclusions and gives directions for further research.
Flow of Proposed Side Contour Extraction Algorithm
The profile changes of the nose, mouth and chin are considered the decision basis for fatigue detection [6]. Hence, the performance of side face detection is determined by the accuracy of the extracted contour. The proposed algorithm involves three processing steps.
Step 1: Basic side face extraction.
In-car cameras capture a large number of driver images. To obtain the side face contour line, we must extract the side face from varying backgrounds. In this paper, the basic extraction process, whose output is a binary image, is based on skin color detection.
Step 2: Extracted side face correction.
In this step, erosion and dilation operations are applied to correct the binary image of Step 1.
Step 3: Side face contour line extraction.
Based on the corrected side face image, the side face contour line is extracted.
The flow chart of the proposed method is given in Fig. 1.

Figure 1. Flow chart of proposed method.


All three steps above are described in detail in Sections 2, 3 and 4, respectively.

Basic Extraction of Side Face Region


Color model establishment is the key point of the side face extraction process [7]. As described earlier, illumination is a serious impact factor and should be considered in image pre-processing. Here a color correction measure is employed to remove color bias. Then skin color modeling is used to detect the face area and locate it in the image.
Color Correction
To reduce the impact of illumination, a color correction procedure is set as the first step of the proposed method. In this paper, we follow the Gray World color equalization method [8] to remove color bias. The whole process is as follows:
1) Calculate the mean values of the R, G and B components, denoted Raver, Gaver and Baver respectively;
2) Calculate the average gray value Aaver = (Raver + Gaver + Baver) / 3;
3) Reconstruct the RGB components as R′ = R × Aaver / Raver, G′ = G × Aaver / Gaver, B′ = B × Aaver / Baver;
4) Restrict the value ranges of R′, G′ and B′: the value range of the reconstructed RGB components is set to [0, 255];
5) Rebuild the image color.
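The five steps above can be sketched in a few lines of NumPy; this is a minimal sketch of the standard Gray World formulation, in which each channel is scaled by the gain Aaver / Raver (and likewise for G and B):

```python
import numpy as np

def gray_world_correct(img):
    """Gray World color correction: scale each RGB channel so that its
    mean matches the overall gray mean Aaver, then clip to [0, 255].
    img: H x W x 3 uint8 array in RGB order."""
    pixels = img.astype(np.float64)
    channel_means = pixels.reshape(-1, 3).mean(axis=0)   # Raver, Gaver, Baver
    gray_mean = channel_means.mean()                     # Aaver
    gains = gray_mean / channel_means                    # one gain per channel
    corrected = pixels * gains                           # R' = R * Aaver / Raver, etc.
    return np.clip(corrected, 0, 255).astype(np.uint8)   # step 4: restrict to [0, 255]
```

After correction, the three channel means coincide at Aaver, which is what removes a global color cast.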
 
(a) Before correction (b) After correction
Figure 2. Color correction.
The performance of this process is shown in Fig. 2: the correction process leads to a better color balance.
Skin Color Modeling
Skin color modeling is the core step for side face region extraction. As the basis for model
establishment, color space should be selected firstly. In this paper, YUV and YIQ color space [9]
are used. After that, a multiple threshold values criterion is employed to construct skin color model.
The whole process is illustrated as follows.
1) Transform RGB values into YUV space and obtain Y, U and V value [10];
2) Calculate value θ as   tan 1 V /U ;
3) Transform central block from RGB into YIQ space [8];
4) Calculate both mean and standard deviation of θ and I and denote as  min ,  std , I mean , I std ;
5) Obtain 4 threshold values of θ and I as follows [10]:
YIQmax  I mean  3* I std ,YIQmax  I mean  3* I std  max   mean  3* std , min   mean  3* std (1)
6) Normalize R, G and B value and get Rbase ,Gbase , Bbase , where base  R  G  B ;
7) Calculate mean value and variance of base, denote as basemean and basestd ;
8) Calculate decision threshold of RGB space as baselim  45.26  0.79 *(basemean  basestd ) [9];
9) Convert whole image into YUV and YIQ color space and get three components, Up,Vp, Ip of object
pixel point p. Estimate skin color region and calculate the corresponding phase Pθ. Then make decision
if point p is a skin point. Hence the two value image of the skin color region is get according to
YIQmin  I P  YIQmax 0.35  Rbase  0.5
  (2)
IF  min   p   max OR 0.31  Gbase  0.7 THEN P  255 ELSE P  0
 base  base
R  G  B  lim

The performance of this process is shown in Fig.3.



Figure 3. (a) original image; (b) color-corrected image; (c) skin color detection result under the Gaussian model; (d) skin detection result under the elliptical model; (e) YCrCb single-threshold detection result; (f) skin detection result using the proposed algorithm.
As shown in Fig. 3, the skin color model can detect the side face region precisely. However, some skin-like blocks and non-skin blocks can still be observed.
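The combined threshold decision of the skin color model can be sketched per pixel in Python. The YUV/YIQ conversion coefficients below are the standard ones, and the grouping of the AND/OR conditions is an assumption about the intended criterion; the θ and I thresholds are presumed to have been learned from a central face block beforehand:

```python
import numpy as np

def skin_mask(img, theta_rng, i_rng, base_lim):
    """Binary skin decision combining a learned (theta, I) chrominance
    window with fixed normalized-RGB ranges and a brightness floor.
    theta_rng = (theta_min, theta_max), i_rng = (YIQ_min, YIQ_max)."""
    rgb = img.astype(np.float64)
    R, G, B = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    # YUV chrominance (standard coefficients) and phase angle theta
    U = -0.147 * R - 0.289 * G + 0.436 * B
    V = 0.615 * R - 0.515 * G - 0.100 * B
    theta = np.arctan2(V, U)
    # YIQ in-phase component
    I = 0.596 * R - 0.274 * G - 0.322 * B
    # normalized chromaticity, base = R + G + B (epsilon avoids 0/0)
    base = R + G + B + 1e-9
    r, g = R / base, G / base
    cond_yiq = ((i_rng[0] <= I) & (I <= i_rng[1])
                & (theta_rng[0] <= theta) & (theta <= theta_rng[1]))
    cond_rgb = ((0.35 <= r) & (r <= 0.5)
                & (0.31 <= g) & (g <= 0.7)
                & (base >= base_lim))
    # pixel is skin if either criterion holds
    return np.where(cond_yiq | cond_rgb, 255, 0).astype(np.uint8)
```

With real data, `theta_rng`, `i_rng` and `base_lim` would come from steps 4), 5) and 8) above rather than being hand-set.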
Side Face Region Location
To reduce the influence of skin-like blocks and non-skin blocks, erosion and dilation operations are used in this paper.
Erosion, which maps the set X to the subset of points where the structuring element Y fits entirely inside X, shrinks the set, enlarges the internal holes of a connected domain, and eliminates isolated external noise points. On one level, erosion can be considered a morphological filtering operation, which removes from the binary image details smaller than the structuring element. Dilation, on the other hand, expands every point of X by the structuring element Y, making the target "grow" and "thicken"; it is used to fill voids in the target image and connect separated domains.
The whole process is as follows.
1) Repeat the erosion process multiple times, so that small details are removed and the connected domain blocks become clearer;
2) Extract the side face features from the connected-domain image. Calculate the ratios A and B as A = m / n, B = area / areasq;
3) Make a decision following the criterion: IF A > 3 OR A < 0.5 OR B < 0.3 THEN L(i, j) = 0;
4) Calculate the area of each connected region and keep only the largest one.
After multiple dilation and erosion passes, a clean face region image is obtained.
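The cleanup described above (morphological erosion, then dilation, then keeping only the largest connected region) can be sketched with SciPy; the 3×3 structuring element and the iteration count are assumptions, and the aspect-ratio filter of step 3) is omitted for brevity:

```python
import numpy as np
from scipy import ndimage

def clean_face_mask(mask, n_iter=2):
    """Morphological cleanup of a binary skin mask (values 0/255):
    erosion removes isolated skin-like noise, dilation restores the
    face region, and only the largest connected component is kept."""
    binary = mask > 0
    struct = np.ones((3, 3), dtype=bool)  # 8-connected structuring element
    binary = ndimage.binary_erosion(binary, struct, iterations=n_iter)
    binary = ndimage.binary_dilation(binary, struct, iterations=n_iter)
    # label connected regions and keep the one with the largest area
    labels, n = ndimage.label(binary)
    if n == 0:
        return np.zeros_like(mask)
    sizes = ndimage.sum(binary, labels, index=range(1, n + 1))
    largest = 1 + int(np.argmax(sizes))
    return np.where(labels == largest, 255, 0).astype(np.uint8)
```

Because erosion runs first, any noise blob smaller than roughly `n_iter` structuring-element widths disappears entirely and is never restored by the subsequent dilation.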

Facial Contour Extraction


After the side face region has been precisely located, the face contour line extraction process is performed. In this paper, a face contour line extraction method is proposed as follows.
1) Construct a direction offset matrix. In this paper, we employ the matrix Direction = [-1 1; 0 1; 1 1; 1 0; 1 -1; 0 -1; -1 -1; -1 0].
2) Find the position of the first white pixel near the upper-left corner of the middle of the image, and set it as the starting point of the boundary.
3) Search for each contour point from the starting position in a clockwise direction, using the pre-defined 8-neighborhood matrix, until the search returns to the starting point. Record the coordinates of each boundary point found, in order, in a linked list.
4) Take the coordinate information stored in the linked list and connect the points in turn to obtain the profile curve of the target, as in Fig. 4(d).
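Steps 1)–4) can be sketched as a clockwise 8-neighborhood boundary trace. The direction offsets match the Direction matrix given above; the backtracking rule (resume the search just past the direction we came from) is the standard Moore-neighbor convention and is an assumption about the paper's exact search order, as is starting from the first white pixel in raster order rather than near the image middle:

```python
import numpy as np

# clockwise 8-neighborhood offsets (row, col), as in the Direction matrix
DIRECTIONS = [(-1, 1), (0, 1), (1, 1), (1, 0),
              (1, -1), (0, -1), (-1, -1), (-1, 0)]

def trace_contour(binary, max_steps=100000):
    """Trace the outer contour of a binary region clockwise.
    Returns the ordered list of (row, col) boundary coordinates."""
    rows, cols = binary.shape
    # step 2 (simplified): first white pixel in raster order is the start
    start = None
    for r in range(rows):
        for c in range(cols):
            if binary[r, c]:
                start = (r, c)
                break
        if start is not None:
            break
    if start is None:
        return []
    contour = [start]
    cur, prev_dir = start, 0
    for _ in range(max_steps):
        # step 3: search neighbors clockwise, starting just past the
        # direction we arrived from (Moore backtracking rule)
        found = False
        for k in range(8):
            d = (prev_dir + k) % 8
            dr, dc = DIRECTIONS[d]
            r, c = cur[0] + dr, cur[1] + dc
            if 0 <= r < rows and 0 <= c < cols and binary[r, c]:
                cur = (r, c)
                prev_dir = (d + 5) % 8  # one step past the backtrack direction
                found = True
                break
        if not found or cur == start:
            break  # isolated pixel, or the trace closed on itself
        contour.append(cur)
    return contour
```

On a solid 3×3 block this visits the eight border pixels once each and stops on returning to the start, which is the chain recorded in step 3).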
The performance of this process is shown in Fig.4.



Figure 4. (a) contour on the original image; (b) binary profile image; (c) side face region extraction result; (d) contour line extraction result.
As shown in Fig.4, the side face contour line can be fitted well.
Conclusion
Unconscious nodding, whether slight or heavy, is a very typical behavior suggesting fatigue. The nose and chin profiles show obvious changes during nodding, especially from the side view. Inspired by frontal facial feature extraction algorithms, this paper approaches driving fatigue detection from the angle of side face contour detection. The algorithm includes skin chrominance recognition, mathematical morphology filtering and boundary tracing. Experiments show that, even in situations with complex lighting or a small face-area proportion, the algorithm obtains an accurate binary profile image and effectively monitors the contour line of the nose and chin area, which lays a foundation for further study of real-time fatigue-state tracking.

Acknowledgement
This research was supported by open research fund of State Key Laboratory of Vehicle NVH and
Safety Technology, NVHSKL-201511, NVHSKL-201414, and Fundamental Research Funds for
the Central Universities, No. CDJPY12160002.

References
[1] Qingwen Han, Lingqiu Zeng, Le Yang, Yuebo Liu, Experimental analysis of CCA threshold
adjusting for vehicle EWM transmission in V-CPS [J], Int. J. Ad Hoc and Ubiquitous Computing,
21(2016) 1-10.
[2] Tyron Louw, Ruth Madigan, Oliver Carsten, Natasha Merat, Were they in the loop during
automated driving? Link between visual attention and crash potential [J], Injury Prevention, 2016
1-13.
[3] F.S.C. Clement, Aditya Vashistha, Milind E Rane, Driver fatigue detection system [C], 2015
International Conference on Information Processing (ICIP), Dec 16-19, 2015.
[4] Fei Wang, Shaonan Wang, Xihui Wang. Design of Driving Fatigue Detection System Based on
Hybrid Measures Using Wavelet-packets Transform [C]. International Conference on Robotics &
Automation. Hong Kong, China, 2014 4037-4042.
[5] Stan Z. Li, Rufeng Chu, Shengcai Liao, Illumination Invariant Face Recognition Using
Near-Infrared Images [J], IEEE Transactions on Pattern Analysis and Machine Intelligence,
(29)2007 627-639.
[6] Jianmei Lei, Min Chen, Shibiao He, Qian Chen, and Li Jin, P.R. China. Patent
201310090223.7(2013).
[7] Zhao Lihong, Liu Jihong, Xu Xinhe. A Survey of Face Detection Methods [J]. Application
Research of Computers, (9)2004 1-4.
[8] Zhao Xiaohui, Shen Xuanjing. Research on Self-adaptive Chroma Space Model Skin-color
Algorithm Based on Brightness [J]. Chinese Journal of Scientific Instrument, (26)2005 591-594.
[9] Zhang Honggang, Chen Guang, Guo Jun. Image Processing and Recognition. Beijing,
BUPT Press, 2006, pp. 46-52.
[10] Tao Linmi, Peng Zhenyun, Xu Guangyou. Skin Color Characteristics of The Human Body [J].
Journal of Software. (12)2001 1032-1041.
