Face Recognition
Young-Ouk Kim†*, Joonki Paik†, Jingu Heo‡, Andreas Koschan‡, Besma Abidi‡, and Mongi Abidi‡
† Image Processing Laboratory, Department of Image Engineering
Graduate School of Advanced Imaging Science, Multimedia, and Film
Chung-Ang University, Seoul, Korea
where I_n^g and I_m^g respectively represent the Gaussian-filtered n-th and m-th image frames, which are converted [...]

[Figure: frame-difference detection results. Caption fragment: "... detection: the top left image represents I_5, top right ... I_23^g − I_25^g."]
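The Gaussian-filtered frame difference described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function name, the sigma, and the binarization threshold are assumptions, and frames are assumed to be grayscale numpy arrays.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def gaussian_difference(frame_n, frame_m, sigma=1.5, threshold=15):
    """Difference of Gaussian-filtered frames, I_n^g - I_m^g.

    frame_n, frame_m: grayscale frames as 2-D uint8 arrays.
    sigma and threshold are illustrative values, not from the paper.
    Returns a binary mask marking pixels that changed between frames.
    """
    # Smooth each frame first so sensor noise does not trigger detection.
    i_n = gaussian_filter(frame_n.astype(np.float32), sigma)
    i_m = gaussian_filter(frame_m.astype(np.float32), sigma)
    diff = np.abs(i_n - i_m)
    return (diff > threshold).astype(np.uint8)
```

Thresholding the absolute difference of the smoothed frames yields the moving-object mask from which the ROI is taken.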
[Figure: hue histogram of the ROI with f(x)Max, f(xi)Low-th, and f(xi)Hi-th marked]
3.2. Skin color segmentation from background

Color information for a moving object is one of the most important features. However, color changes due to illumination changes and reflected light. In this experiment, we applied the HSV color model since it is less sensitive to illumination changes than other color models. In the proposed surveillance system, the skin color of moving objects changes according to the distance between the object and the camera even if lighting conditions are fixed.

Figure 4 presents experimental results of skin color changes according to the distance between the camera and the moving object. In this figure the horizontal axis represents the hue value of the face and the vertical axis [...]

Using the three values f(x)Max, f(xi)Low-th, and f(xi)Hi-th, we can segment the face region within the ROI from the background. The hue index of f(x)Max can be iteratively calculated, and the other variables f(xi)Low-th and f(xi)Hi-th can be formulated as

f(xi)Low-th : f'(xi) f'(xi−1) ≤ 0, and f(xi) < f(x)Max,   (2)

f(xi)Hi-th : f'(xi) f'(xi+1) ≤ 0, and f(xi) > f(x)Max,   (3)

where f' represents the first derivative of f.

Figure 6 shows, respectively, the original input image, the corresponding HSV image, and the face region segmented from the background.

Figure 6: Skin color segmentation result
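Equations (2) and (3) locate the derivative sign changes nearest the histogram peak. The sketch below is one plausible reading of that procedure, assuming the input is an already-smoothed 1-D hue histogram; the function name and the linear scan outward from the peak are assumptions, not the paper's exact iteration.

```python
import numpy as np

def hue_thresholds(hue_hist):
    """Find the hue index of f(x)Max and the nearest first-derivative
    sign changes on either side, per eqs. (2) and (3).

    hue_hist: smoothed 1-D hue histogram f (sequence of counts).
    Returns (low_th, peak, hi_th) as histogram bin indices.
    """
    f = np.asarray(hue_hist, dtype=np.float64)
    peak = int(np.argmax(f))              # hue index of f(x)Max
    d = np.diff(f)                        # discrete first derivative f'

    low_th = peak
    for i in range(peak - 1, 0, -1):      # scan left of the peak
        if d[i] * d[i - 1] <= 0:          # f'(x_i) f'(x_{i-1}) <= 0, eq. (2)
            low_th = i
            break

    hi_th = peak
    for i in range(peak, len(d) - 1):     # scan right of the peak
        if d[i] * d[i + 1] <= 0:          # f'(x_i) f'(x_{i+1}) <= 0, eq. (3)
            hi_th = i + 1
            break

    return low_th, peak, hi_th
```

Pixels whose hue falls in [low_th, hi_th] would then be kept as the face region within the ROI.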
x_c = ( Σ x·H(x, y) ) / E_H ,   y_c = ( Σ y·H(x, y) ) / E_H ,   (4)

[...] candidate area for the moving object having the latest f(xi)Low-th and f(xi)Hi-th values. This dynamic change of the ROI is necessary for correct tracking. This process is shown in Figure 9.
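The centroid of eq. (4) can be sketched directly in numpy. This is a hedged illustration: the function name is assumed, and E_H is read here as the total sum of the skin mask H(x, y), which is one interpretation of the garbled denominator.

```python
import numpy as np

def skin_centroid(mask):
    """Centroid (x_c, y_c) of a binary skin mask H(x, y) per eq. (4):
    x_c = sum(x * H) / E_H,  y_c = sum(y * H) / E_H,
    where E_H is assumed to be sum(H), the total mask energy.
    """
    h = np.asarray(mask, dtype=np.float64)
    e_h = h.sum()
    if e_h == 0:
        return None                       # no skin pixels detected
    ys, xs = np.indices(h.shape)          # per-pixel row (y) and column (x)
    x_c = (xs * h).sum() / e_h
    y_c = (ys * h).sum() / e_h
    return x_c, y_c
```

The tracker can recenter the ROI on (x_c, y_c) at each frame, giving the dynamic ROI update the text describes.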
Figure 13: Example images of the same individual with different poses [23]

Table 3 and Figure 14 show a summary of the pose tests (R: right rotation, L: left rotation). The greater the pose deviation from the frontal view, the less accuracy FaceIt® achieved and the more manual aligning was required.

Table 3: Summary of pose test

Pose (R, L) | 1st Match (%) | 1st 10 Match (%) | Manual Aligning Required (%)
90°L | N/A | N/A | 100.0
60°L | 34.5 | 71.0 | 13.5
40°L | 65.0 | 91.0 | 4.5
25°L | 95.0 | 99.5 | 2.5
15°L | 97.5 | 100.0 | 0.5
0° | 100.0 | 100.0 | 0.0
15°R | 99.0 | 99.5 | 0.0
25°R | 90.5 | 99.5 | 2.0
40°R | 61.5 | 87.5 | 4.5
60°R | 27.5 | 65.0 | 11.0
90°R | N/A | N/A | 100.0

Table 5: Execution time and compatibilities

Feature | Description
Aligning (eye positioning) | In order to create a gallery database, three steps are necessary: auto aligning, template creation, and vector creation (2~3 sec / image).
Matching | In order to match against the database, subjects must be aligned first (1~2 sec) and then matched (2.5~3 sec, depending on the size of the database).
Speed Up | The data can be loaded into RAM to speed up the process.
Ease of Use | Easy to add and delete images regardless of size and image type (drag images from Windows Explorer into the FaceIt® software).

4.2 FaceIt® surveillance accuracy

In this experiment, live face images from real scenes were captured by the FaceIt® software using a small PC camera attached via a USB port. We used randomly captured face images and matched them against the databases used previously in the FaceIt® identification test.

In order to see the effects of variations, we applied different database sizes (the small DB was the IRIS database, which contains 34 faces, while the large DB was 700 faces from FERET plus the IRIS DB) and different
lighting conditions to face images. Since face variations
are hard to measure, we divided variations such as pose,
expressions and age into small and large variations.
Figure 15 shows an example of captured faces used in the
experiment. When we captured the faces, any person with significant variations, such as quick head rotation or continuous and notable expression changes, was considered a large variation, while the others were considered small variations.