Recognizing Faces From The Eyes Only
2 authors, including:
Jørn Wroldsen
Norwegian University of Science and Technology
All content following this page was uploaded by Jørn Wroldsen on 18 July 2013.
Figure 1: Examples of subtracted, histogram-equalized windows from the face image (same subject on each horizontal line).

2 Dataset and Preprocessing

We use the face database from Olivetti Research Lab. This database has been widely used for testing face recognition systems, and the best reported result to our knowledge is 96% correct classification [6] when the database is divided into two equally large sets (for training and testing). The database contains 10 images of 40 subjects, for a total of 400 images. The images are gray-level of size 92×112. We have manually located the center of the eyes in all the images, and then extracted a 25×25 window at this region (for both eyes). If this subwindow includes an area outside the original face image, we pad the image with the mean of the gray-level values in the full face image. The extracted eye-images show considerable variation with respect to open/closed eyes, expression, glasses (reflection in glasses) and gaze, making this a challenging dataset. Some example eye-images are shown in figure 1.

Before feature extraction, all images were histogram equalized. As an additional preprocessing step for the eigenfeature approach, the mean image was computed from the training set and subtracted from all the training and testing images.

3 Feature Extraction

We only briefly outline the techniques we use here, since they are standard computer vision techniques and are more appropriately described in papers such as [11, 12, 5, 13].

3.1 Eigenfeatures

We do a principal component analysis of the matrix of vectorized training samples, separately for the right eye and the left eye. The resulting principal components (eigeneyes) span a subspace [4] which is a good representation of right and left eye images. Examples of these eigeneyes are given in figures 2 and 3, ordered according to eigenvalue. One important note about this technique is that its aim is representation, not discrimination. Thus, we should act with care when selecting the number of eigeneyes to keep for spanning the subspace (eigeneyes with small corresponding eigenvalues might still be important for discrimination).

One limitation in the way we apply eigeneyes in this paper, which also needs to be pointed out, is that the training set is relatively small. Ideally, we would have a large separate set of eyes from which to compute a more general subspace of eigeneyes. We would also expect some degradation in performance due to the large variation in the dataset. We normalize the images by keeping the center point fixed, which gives us some invariance with respect to the positioning of the eye, but a better approach would perhaps be to locate the corners of the eye, so that we could normalize with respect to rotation and size in addition to position.

3.2 Gabor Wavelets

Similar to the system of Lades et al. [5], we apply a wavelet transform based on the Gabor kernel

    \psi_j(\vec{x}) = \frac{k_j^2}{\sigma^2} \, e^{-\frac{k_j^2 \vec{x}^2}{2\sigma^2}} \left[ e^{i \vec{k}_j \cdot \vec{x}} - e^{-\frac{\sigma^2}{2}} \right],    (1)

where

    \vec{k}_j = \begin{pmatrix} k_\nu \cos\varphi_\mu \\ k_\nu \sin\varphi_\mu \end{pmatrix}, \qquad k_\nu = 2^{-\frac{\nu+2}{2}} \pi, \qquad \varphi_\mu = \mu \frac{\pi}{8}.    (2)
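The eigeneye computation of section 3.1 can be sketched as follows. This is a minimal illustration on synthetic stand-in data: the paper uses 25×25 windows cut from the ORL images, and the number of eigeneyes kept (here 40) is an illustrative choice, not a value stated in the text.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for the vectorized 25x25 right-eye training windows
# (synthetic data; the paper uses windows cut from the ORL images).
n_train, dim = 200, 25 * 25
samples = rng.normal(size=(n_train, dim))

# Subtract the training mean, then take principal components.
mean_eye = samples.mean(axis=0)
centered = samples - mean_eye

# The row-space vectors of the SVD are the eigenvectors of the sample
# covariance, ordered by decreasing singular value (i.e. eigenvalue).
_, s, vt = np.linalg.svd(centered, full_matrices=False)

n_eig = 40              # number of eigeneyes kept (a design choice)
eigeneyes = vt[:n_eig]  # each row is one eigeneye, reshapeable to 25x25

# Project a window onto the eigeneye subspace to get its feature vector.
features = (samples[0] - mean_eye) @ eigeneyes.T
print(features.shape)  # (40,)
```

As the section warns, truncating by eigenvalue optimizes representation, not discrimination, so `n_eig` should be validated rather than chosen from the eigenvalue spectrum alone.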
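A minimal sketch of evaluating the kernel of equations (1) and (2) on a discrete grid. The 25×25 window size matches section 2, but the width parameter sigma = 2*pi (the value used by Lades et al. [5]) is an assumption here, since this section does not fix it.

```python
import numpy as np

def gabor_kernel(nu, mu, size=25, sigma=2 * np.pi):
    """Discrete Gabor kernel following Eqs. (1) and (2).

    nu indexes the scale, k_nu = 2**(-(nu + 2) / 2) * pi, and mu the
    orientation, phi_mu = mu * pi / 8.  sigma = 2*pi follows Lades et
    al. [5] and is an assumption, since the text does not fix it.
    """
    k = 2.0 ** (-(nu + 2) / 2.0) * np.pi
    kx, ky = k * np.cos(mu * np.pi / 8), k * np.sin(mu * np.pi / 8)

    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]

    # Gaussian envelope scaled by k^2 / sigma^2, as in Eq. (1).
    envelope = (k**2 / sigma**2) * np.exp(-k**2 * (x**2 + y**2) / (2 * sigma**2))
    # Complex carrier minus the DC term exp(-sigma^2 / 2); the
    # subtraction makes the kernel insensitive to absolute gray level.
    carrier = np.exp(1j * (kx * x + ky * y)) - np.exp(-sigma**2 / 2)
    return envelope * carrier

kernel = gabor_kernel(nu=0, mu=0)
print(kernel.shape)  # (25, 25)
```

In the scheme of Lades et al., features are obtained by convolving the image with a bank of such kernels over several scales nu and orientations mu.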
[Figure 2: eigeneye #1 through eigeneye #6, ordered by eigenvalue.]

4 Experimental Results and Discussion

For classification we apply the nearest-neighbour (NN) classifier and the minimum distance (to class mean) (MD) classifier. As a measure of distance in feature space, a natural