Driver's Eye State Identification Based On Robust Iris Pair Localization
*Tauseef Ali, **Khalil Ullah
*Myongji Univ., Dept. of Electronics and Communication Engg. (TEL: 010-5814-1333; E-mail: tuaseefcse@yahoo.com)
**Myongji Univ., Dept. of Electronics and Communication Engg. (TEL: 010-8691-8402; E-mail: khalil_dirvi@yahoo.com)

Abstract

In this paper, we propose a novel and robust approach to determine eye state. The method is based on robust iris pair localization. After the iris pair is detected from the image, it is analyzed by comparing its openness with normal images of the person. Our approach has five steps: 1) face detection, 2) eye candidate detection, 3) tuning of candidate points, 4) iris pair selection, and 5) eye analysis. Experimental results for iris pair localization and eye state identification are shown separately. For testing, three public image databases, Yale, BioID, and Bern, are used. Extensive experiments have shown the effectiveness and robustness of the proposed method.

Keywords: Iris pairs, Eye candidate, Eye analysis, Eye state

1. Introduction
Monitoring a driver's visual attention is very important for detecting fatigue, lack of sleep, and drowsiness. By automatically detecting the eye state and drowsiness level of a driver, an alarm can be activated to inform the driver or another authority, which can prevent a large number of road accidents. Robust eye detection is a crucial step for this kind of application. After robust eye detection, information about gaze, eye blinking, and drowsiness can be determined. Some work has been done on this subject, but the problem is still far from being fully solved. Some algorithms for eye detection have obtained good results, such as [1], but they cannot point out the exact center of the iris, which can further be used for drowsiness detection. Generally, eye detection is achieved using active or passive techniques. Active techniques are based on the spectral characteristics of the eye under IR illumination [2]. These techniques are very simple and give good results, but the success of such systems requires stable lighting conditions and a person close to the camera. Passive techniques locate the eyes based on their shape and appearance, which differ from the rest of the face. In these techniques, generally, the face is first detected to extract eye regions, and then the eyes are localized using eye windows. Much work has been done on face detection, and robust algorithms are available [3]. However, robust and precise eye detection is still an open problem. After eye detection, several measures can be used to determine eye state and detect drowsiness. Many efforts have been made to detect drowsiness among drivers [4,5]. Eye blinking is a good measure of the level of drowsiness. PERCLOS (the percentage of time that an eye is closed in a given period) is one of the best ways to measure eye blinking, as high PERCLOS scores are strongly related to drowsiness [6].
However, in this paper, we use a still image: we first detect the centers of the eyes and then determine the eye state by comparing the eye openness in the test image with that of a reference image taken when the subject is normal or alert. Such a system can be used with a camera that periodically inputs a still image to the system; the system determines the eye state in real time, and if the subject's eyes remain closed for more than a few input samples, an alert can be activated to indicate that the driver is drowsy.
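The sampling-and-alert scheme described above can be sketched as follows. This is a minimal illustration: `classify_eye_state`, the window length, and the closed-eye ratio are hypothetical stand-ins for the paper's pipeline and tuning, not values from the paper.

```python
from collections import deque

def drowsiness_monitor(frames, classify_eye_state, window=10, closed_ratio=0.8):
    """Yield an alert flag for each input frame.

    classify_eye_state(frame) -> "open" or "closed" stands in for the
    iris-localization pipeline; window and closed_ratio are illustrative
    parameters only.
    """
    history = deque(maxlen=window)          # most recent eye states
    for frame in frames:
        history.append(classify_eye_state(frame))
        # PERCLOS-style measure: fraction of recent samples with closed eyes
        perclos = history.count("closed") / len(history)
        yield len(history) == window and perclos >= closed_ratio

# Example with a dummy classifier: eyes stay closed from sample 5 onward,
# so the alert fires once 8 of the last 10 samples are "closed"
states = ["open"] * 5 + ["closed"] * 15
alerts = list(drowsiness_monitor(states, lambda s: s))
```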
3. Face Detection
We first detect the face in the input image. The problem is then simplified, because the search is restricted to the face region; this saves search time and improves accuracy. For face detection, Viola's method [3] is used. A robust face classifier is obtained by supervised AdaBoost learning. Given a sample set of training data {xi, yi}, the AdaBoost algorithm selects a set of weak classifiers {hj(x)} from a set of Haar-like rectangle features and combines them into a strong classifier. The strong classifier g(x) is defined as follows:

g(x) = 1 if Σ_{k=1}^{kmax} α_k h_k(x) ≥ θ, and g(x) = 0 otherwise        (1)

where θ is the threshold that is adjusted to meet the detection rate goal. The Haar-like rectangle features are easily computed using the integral image representation, and the cascade method quickly filters out non-face image areas. More details can be found in [3].

The optimal radius R for each candidate is selected from {RL, ..., RU} [9]; these separability and radius values for the new candidate points are used later. Fig. 1(d) shows the tuned candidate points.

Fig. 2: An eye template (R1 is the inside region of the smaller circle and R2 is the region between the two concentric circles).
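As an illustration of Eq. (1), the strong-classifier decision can be sketched with toy weak classifiers standing in for learned Haar-like features; the weights and threshold below are illustrative values, not trained ones.

```python
def strong_classifier(x, weak_classifiers, alphas, theta):
    """AdaBoost-style strong classifier g(x) of Eq. (1): returns 1 when the
    weighted vote of the weak classifiers reaches the threshold theta."""
    score = sum(a * h(x) for h, a in zip(weak_classifiers, alphas))
    return 1 if score >= theta else 0

# Toy weak classifiers standing in for Haar-like rectangle features
weak = [lambda x: 1 if x[0] > 0 else 0,
        lambda x: 1 if x[1] > 0 else 0,
        lambda x: 1 if x[0] + x[1] > 1 else 0]
alphas = [0.5, 0.8, 1.2]   # illustrative AdaBoost weights

label = strong_classifier((1.0, 1.5), weak, alphas, theta=1.25)  # all three fire
```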
Fig. 3: Subregion for the mean crossing function

A subregion of the form shown in Fig. 3 is formed around each candidate point. The subregion is scanned horizontally, and the mean crossing function [10] at pixel (i, j) is computed as follows:
C(i, j) = 1 if I(i, j) ≥ +A and I(i, j+1) ≤ −A
C(i, j) = 1 if I(i, j) ≤ −A and I(i, j+1) ≥ +A
C(i, j) = 0 otherwise        (3)

where I denotes the mean-subtracted intensity of the subregion and A is a constant. The horizontal mean crossing value for the subregion is determined as

C_subregion = Σ_{i=1}^{M} Σ_{j=1}^{N} C(i, j)        (4)
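A minimal sketch of the horizontal mean-crossing computation of Eqs. (3)-(4), assuming intensities are compared after subtracting the subregion mean; the margin A below is an arbitrary illustrative value.

```python
def mean_crossing_count(region, A):
    """Horizontal mean-crossing value of Eqs. (3)-(4): count positions where
    the mean-subtracted intensity jumps from >= +A to <= -A (or vice versa)
    between horizontally adjacent pixels."""
    mean = sum(sum(row) for row in region) / (len(region) * len(region[0]))
    count = 0
    for row in region:
        d = [v - mean for v in row]               # mean-subtracted scanline
        for a, b in zip(d, d[1:]):
            if (a >= A and b <= -A) or (a <= -A and b >= A):
                count += 1
    return count

# A bright-dark-bright scanline (eye-like profile) gives two crossings
region = [[100, 100, 20, 20, 100, 100]]
```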
The separability between the regions R1 and R2 of the eye template is defined as

Separability = B / A        (2)

A = Σ_{i=1}^{N} (I(xi, yi) − Pm)²

B = n1 (P1 − Pm)² + n2 (P2 − Pm)²

where nk (k = 1, 2) is the number of pixels in Rk; N = n1 + n2; Pk (k = 1, 2) is the average intensity in Rk; Pm is the average intensity in the union of R1 and R2; and I(xi, yi) are the intensity values of the pixels (xi, yi) in the union of R1 and R2. Separability values for each point in the neighborhood are determined by varying the radius in a range {RL, ..., RU}. The point in the neighborhood that gives the maximum separability is taken as the new candidate point.
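Eq. (2) can be sketched for a circular template as below. This is a minimal sketch over a grayscale array; it assumes the outer circle of Fig. 2 has twice the inner radius, which the paper does not specify.

```python
import math

def separability(img, cx, cy, r):
    """Fukui-style separability of Eq. (2) for concentric circles of radius
    r and 2r centered at (cx, cy): B/A, where B is the between-region
    scatter of R1 (inner disc) and R2 (ring) and A is the total scatter."""
    r1, r2 = [], []
    for y, row in enumerate(img):
        for x, v in enumerate(row):
            d = math.hypot(x - cx, y - cy)
            if d <= r:
                r1.append(v)          # R1: inside the smaller circle
            elif d <= 2 * r:
                r2.append(v)          # R2: between the two circles
    n1, n2 = len(r1), len(r2)
    union = r1 + r2
    pm = sum(union) / (n1 + n2)
    p1, p2 = sum(r1) / n1, sum(r2) / n2
    A = sum((v - pm) ** 2 for v in union)            # total scatter
    B = n1 * (p1 - pm) ** 2 + n2 * (p2 - pm) ** 2    # between-region scatter
    return B / A if A else 0.0

# A dark disc (iris-like) on a bright background separates perfectly
img = [[0 if math.hypot(x - 5, y - 5) <= 2 else 255 for x in range(11)]
       for y in range(11)]
score = separability(img, 5, 5, 2)
```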
In a similar way, the vertical mean crossing value is evaluated by scanning the subregion vertically. To find the final mean crossing value for the subregion, we linearly add both mean crossing numbers.

Convolution with the edge image of the subregion. First, we find the edge image of the subregion around the candidate point. The size of the subregion is the same as that of the mask in Fig. 4. The edge image of the subregion is convolved with the convolution kernel shown in Fig. 4. The radius of the template is equal to the radius of the candidate determined in section 5. The center of the template is placed on the candidate point and the convolution value is determined. The process is repeated for each candidate. The resultant signal from the convolution is summed up and a single value is obtained.
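The convolution score described above can be sketched as follows; a simple binary ring stands in for the Fig. 4 template, whose exact kernel values are not reproduced here.

```python
import math

def ring_kernel(radius):
    """Binary ring mask approximating the Fig. 4 circular template."""
    size = 2 * radius + 1
    return [[1 if abs(math.hypot(x - radius, y - radius) - radius) < 0.7 else 0
             for x in range(size)] for y in range(size)]

def convolution_score(edges, cx, cy, radius):
    """Sum of edge-image values under the ring template centered on the
    candidate point (cx, cy): large when a circular edge of the given
    radius surrounds the candidate."""
    k = ring_kernel(radius)
    total = 0
    for dy, row in enumerate(k):
        for dx, w in enumerate(row):
            y, x = cy - radius + dy, cx - radius + dx
            if w and 0 <= y < len(edges) and 0 <= x < len(edges[0]):
                total += edges[y][x]
    return total

# An edge map containing a circle of radius 3 scores highest at its center
edges = [[1 if abs(math.hypot(x - 6, y - 6) - 3) < 0.7 else 0
          for x in range(13)] for y in range(13)]
center = convolution_score(edges, 6, 6, 3)
```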
The chosen threshold depends on the resolution of the face image found in section 3, so it is necessary to keep the test-image face size the same as that used for finding the threshold.
8. Experimental Results
Fig. 4: Convolution template

Now we define the fitness of an iris candidate Cx by the following equation:

fitness(Cx) = C_subregion(Cx) / Σ_{i=1}^{N} C_subregion(Ci) + Conv(Cx) / Σ_{i=1}^{N} Conv(Ci) + Separability(Cx) / Σ_{i=1}^{N} Separability(Ci)        (5)

where N is the total number of candidate points, and C_subregion(Cj), Conv(Cj), and Separability(Cj) are, respectively, the mean crossing value, the convolution result, and the separability value (computed in section 5) for candidate Cj. The candidate pair with the maximum fitness is taken as the iris pair according to the following equation:

(Cl, Cr) = arg max_{(Ci, Cj), i ≠ j} [fitness(Ci) + fitness(Cj)]        (6)
8.1 Iris Pair Localization

In this section, the correct iris pair localization results obtained on three popular databases, BioID [11], Yale [12], and Bern [13], are shown. The BioID database contains 1520 images with different lighting conditions, ethnicities, and poses. The Yale database consists of 165 images, while the images selected from the Bern database include all 150 images without spectacles. The Yale and BioID databases also contain images with closed eyes, partially closed eyes, and subjects wearing glasses. The algorithm is tested on all images of the BioID and Yale databases; from the Bern database, only the 150 images without glasses are used. Fig. 5, Fig. 6, and Fig. 7 show some successful iris pair localizations from the BioID, Yale, and Bern databases, respectively. Fig. 8 shows some unsuccessful results, and Table 1 tabulates the results.
Fig. 6: Some successful iris pair localization results using Yale database
Fig. 7: Some successful iris pair localization results using Bern database
Fig. 8: Some unsuccessful iris pair localization results

Table 1: Results of the proposed algorithm

Database   Success Rate
Bern       99.33%
BioID      93.7%
Yale       92.6%
8.2 Eye State Identification

Figs. 9 and 10 show images of the same subjects with varying amounts of eye closure. The detected radius and the state classified by the algorithm are shown. For both subjects, the face image determined in section 3 is in the range of 140 × 140 to 160 × 160 pixels, and the chosen threshold is 3.
(a)
(b)
(c)
Fig. 9: (a) Detected Iris Radius by algorithm = 4, Classified as Open Eyes. (b) Detected Iris Radius by algorithm = 3, Classified as Open Eyes. (c) Detected Iris Radius by algorithm = 2, Classified as Closed Eyes.
(a)
(b)
(c)
Fig. 10: (a) Detected Iris Radius by algorithm = 4, Classified as Open Eyes. (b) Detected Iris Radius by algorithm = 2, Classified as Closed Eyes. (c) Iris pair not detected in section 6, Classified as Closed Eyes.

The leftmost images show the normal state of the eyes. For both subjects, the normal iris radius = 4. For these images, an iris radius of 3 can be taken as a good threshold to classify subjects as drowsy.
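The radius-threshold classification illustrated in Figs. 9 and 10 can be sketched as below; the normal radius of 4 and the threshold of 3 follow the examples above, and treating a failed detection as closed matches Fig. 10(c).

```python
def classify_eye_state(detected_radius, threshold=3):
    """Classify eye state from the detected iris radius: a radius below the
    threshold, or a failed detection (None), is classified as closed."""
    if detected_radius is None or detected_radius < threshold:
        return "closed"
    return "open"

# Radii from Figs. 9-10: 4 -> open, 3 -> open, 2 -> closed, no pair -> closed
states = [classify_eye_state(r) for r in (4, 3, 2, None)]
```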
References

[1] T. D'Orazio, M. Leo, A. Distante, "Eye detection in face images for a driver vigilance system," 2004 IEEE Intelligent Vehicles Symposium.
[2] C.H. Morimoto, D. Koons, A. Amir, M. Flickner, "Pupil detection and tracking using multiple light sources," Image and Vision Computing 18 (2000) 331-335.
[3] P. Viola, M. Jones, "Rapid object detection using a boosted cascade of simple features," in Proc. Computer Vision and Pattern Recognition Conference 2001, Vol. 1 (2001) 511-518.
[4] T. Hamada, T. Ito, K. Adachi, T. Nakano, S. Yamamoto, "Detecting method for drivers' drowsiness applicable to individual features," in Proc. Intelligent Transportation Systems, Vol. 2, 2003, pp. 1405-1410.
[5] Qiang Ji, Xiaojie Yang, "Real-time eye, gaze, and face pose tracking for monitoring driver vigilance," Real-Time Imaging, Vol. 8, Issue 5, 2002, pp. 357-377.
[6] D. Dinges, R. Grace, "PERCLOS: A valid psychophysiological measure of alertness as assessed by psychomotor vigilance," 1998, TechBrief FHWA-MCRT-98-006.
[7] M. Castrillon-Santana, J. Lorenzo-Navarro, O. Deniz-Suarez, J. Isern-Gonzalez, A. Falcon-Martel, "Multiple face detection at different resolutions for perceptual user interfaces," 2nd Iberian Conference on Pattern Recognition and Image Analysis, LNCS 3522, pp. 445-452, 2005.
[8] K. Fukui, O. Yamaguchi, "Facial feature point extraction method based on combination of shape extraction and pattern matching," Trans. IEICE Japan J80-D-II (8) (1997) 2170-2177 (in Japanese).
[9] T. Kawaguchi, M. Rizon, "Iris detection using intensity and edge information," Pattern Recognition 36 (2003) 549-562.
[10] C.-H. Lin, J.-L. Wu, "Automatic facial feature extraction by genetic algorithms," IEEE Trans. Image Processing 8 (6) (1999).
[11] BioID face database: http://www.bioid.com/downloads/facedb/index.php
[12] Yale face database: http://cvc.yale.edu/projects/yalefaces/yalefaces.html
[13] Bern face database: http://iamwww.unibe.ch/~kiwww/staff/achermann.html