Contreras AnImprovedMethod
Abstract
When it is necessary to analyze a person's face,
whether for recognizing pathologies, emotions, or
states of mind, it becomes necessary to extract as much
information as possible from the facial features that best
reflect these aspects: principally the mouth, the eyes,
and the eyebrows. The analytic process, once the face and
the facial feature to be analyzed have been segmented,
begins by applying an edge-detection algorithm to that
feature. The most widely used edge-detection algorithms
are SUSAN, Canny, Sobel, and Roberts, among others. These
algorithms function excellently when the edges are fairly
well defined, but for color images of faces, where the
transitions are not clearly marked and there are many
imperfections and shadows, they generate incomplete
contours in a great number of cases, leading to errors in
high-level analysis.
This article presents a new methodology, based on the
Canny algorithm, that yields edges of the mouth carrying
much more information than those of the algorithms
mentioned above, which makes it better suited to the
originally stated objective. The method has been tested by
detecting the outer contour of the mouth on two databases
of facial images: the MMI Facial Expression Database
compiled by M. Pantic and M.F. Valstar, and "A.M. Martinez
and R. Benavente. The AR Face Database. CVC Technical
Report #24, June 1998".
1. Introduction
When it is necessary to obtain the contour of an object
in an image that has been previously segmented using one
of the existing methods [8], [9], for later description
and analysis, an edge-detection algorithm is generally
employed. Such an algorithm can be based on derivatives,
as in the Canny [1], Sobel [2], and Laplacian [3] methods,
or on the internal area of a nucleus, as in the SUSAN
method [4]. These algorithms function exceptionally well
when the contours are reasonably well defined, i.e., when
the intensity difference between the object and its
background is prominent (Fig. 1).

Figure 2. a) Original image, b) Contour obtained with
the Canny algorithm.
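As an illustration of the derivative-based family mentioned above, the following sketch applies the 3x3 Sobel kernels and thresholds the gradient magnitude; it is a minimal NumPy version for clarity, not the authors' implementation, and the threshold value is an assumption.

```python
import numpy as np

def sobel_edges(gray, threshold=0.25):
    """Derivative-based edge detection: convolve with the 3x3 Sobel
    kernels, take the gradient magnitude, and threshold it."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    h, w = gray.shape
    gx = np.zeros((h, w))
    gy = np.zeros((h, w))
    padded = np.pad(gray.astype(float), 1, mode="edge")
    for i in range(h):
        for j in range(w):
            win = padded[i:i + 3, j:j + 3]
            gx[i, j] = np.sum(win * kx)
            gy[i, j] = np.sum(win * ky)
    mag = np.hypot(gx, gy)              # gradient magnitude
    mag /= mag.max() or 1.0             # normalize to [0, 1]
    return mag > threshold              # boolean edge map
```

On a sharp step between object and background this recovers the boundary cleanly, which is exactly the well-defined-contour case the text describes.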
2. Proposed Methodology
A key aspect of the methodology is the fact that
histogram analysis of diverse, manually segmented mouth
images shows that the relevant information is concentrated
between 60% and 90% of the intensity values.

Figure 3. Average histogram of 20 images, showing that
the relevant information is located between 60% and 90%
of the intensity values.
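The concentration observed in Fig. 3 can be checked numerically. The sketch below (an illustrative helper, not part of the paper's method) accumulates the histogram of a set of 8-bit grayscale images and reports the fraction of pixel mass falling in a given intensity band:

```python
import numpy as np

def intensity_concentration(images, lo=0.60, hi=0.90):
    """Fraction of total pixel mass lying between lo*255 and hi*255
    in the accumulated histogram of 8-bit grayscale images."""
    hist = np.zeros(256)
    for im in images:
        h, _ = np.histogram(im, bins=256, range=(0, 256))
        hist += h
    lo_bin, hi_bin = int(lo * 255), int(hi * 255)
    return hist[lo_bin:hi_bin + 1].sum() / hist.sum()
```

A value close to 1.0 for a set of segmented mouth images would reproduce the paper's observation.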
c). The R, G and B components, ImR, ImG and ImB, are
obtained from the original image, and each one's histogram
is then expanded to 100% of the intensity range, yielding
the images ImRHE, ImGHE and ImBHE.
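The "expansion to 100%" of step c) can be read as a linear contrast stretch of each channel to the full 0..255 range. A minimal sketch, assuming 8-bit channels (the exact stretch used by the authors is not specified):

```python
import numpy as np

def expand_histogram(channel):
    """Linearly stretch one color channel so its histogram spans
    the full 0..255 range (step c's 'expansion to 100%')."""
    c = channel.astype(float)
    lo, hi = c.min(), c.max()
    if hi == lo:                      # flat channel: nothing to stretch
        return channel.copy()
    return np.clip((c - lo) * 255.0 / (hi - lo), 0, 255).astype(np.uint8)

# Applied independently to each component to obtain ImRHE, ImGHE, ImBHE,
# e.g. im_rhe = expand_histogram(im[..., 0]), and likewise for G and B.
```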
d). A double gamma correction is performed on the
ImRHE, ImGHE and ImBHE images, Fig. 4, with parameters
of 0.4 and 0.8, together with a compression of the
histogram into the range of 60% to 80%. The absolute
values to which the histogram is compressed are not
significant; what matters is the 20% width of the range
between them. This compression value was obtained
experimentally.
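Step d) can be sketched as two successive gamma corrections followed by a linear remapping into a 20%-wide intensity band. The composition order and the normalization details below are assumptions, since the text does not fully specify them:

```python
import numpy as np

def gamma(channel, g):
    """Standard gamma correction on an 8-bit-range channel."""
    return 255.0 * (channel / 255.0) ** g

def double_gamma_compress(channel, g1=0.4, g2=0.8, lo=0.60, hi=0.80):
    """Apply the two gamma corrections of step d) and compress the
    result into the [lo*255, hi*255] band (a 20%-wide range)."""
    c = gamma(gamma(channel.astype(float), g1), g2)
    span = c.max() - c.min()
    if span == 0:                     # degenerate flat channel
        return np.full_like(channel, int(lo * 255))
    out = lo * 255 + (c - c.min()) * (hi - lo) * 255 / span
    return out.astype(np.uint8)
```

As the text notes, shifting `lo` and `hi` together should not matter; only the 20% width between them does.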
3. Obtained Results
Figure 8. a) Original object, b) Contour obtained with
Canny, c) Proposed contour algorithm.

Figure 9. a) Original object, b) Contour obtained with
Canny, c) Proposed contour algorithm.
The contours obtained were evaluated with a measure
based on the length of real segments detected, Eq. (1).
4. Conclusions
In certain very specific applications, such as facial
analysis aimed at determining emotions, pathologies or
states of mind, which require analysis of the deformation
of the main facial features, it is particularly important
to obtain the greatest amount of information possible
about their contours.
5. References
[1] J. Canny, "A Computational Approach to Edge Detection",
IEEE Trans. Pattern Analysis and Machine Intelligence, 8(6):679-698, 1986.
[2] I. Sobel and G. Feldman, "A 3x3 Isotropic Gradient Operator
for Image Processing", presented at a talk at the Stanford
Artificial Intelligence Project in 1968; unpublished but often cited,
e.g. in R. Duda and P. Hart, Pattern Classification and Scene
Analysis, John Wiley and Sons, 1973, pp. 271-272.
[3] D. Marr, E. Hildreth, Theory of Edge Detection,
Proceedings of the Royal Society of London. Series B,
Biological Sciences, Vol. 207, No. 1167 (Feb. 29, 1980), pp.
187-217
[4] S.M. Smith and M. Brady, "SUSAN - A New Approach to
Low-Level Image Processing", International Journal of Computer
Vision, Vol. 23(1), pp. 45-78, 1997.
[5] Jeffrey F. Cohn, Takeo Kanade, Use of Automated Facial
Image Analysis for Measurement of Emotion Expression, The
handbook of emotion elicitation and assessment, J. A. Coan & J.
B. Allen (Eds.), Oxford University Press Series in Affective
Science. New York: Oxford.
[6] Carlos Busso, Zhigang Deng , Serdar Yildirim, Murtaza
Bulut, Chul Min Lee, Abe Kazemzadeh, Sungbok Lee, Ulrich
Neumann, Shrikanth Narayanan, Analysis of Emotion
Recognition using Facial Expressions, Speech and Multimodal
Information, Proc. of ACM 6th International Conference on
Multimodal Interfaces (ICMI 2004), State College, PA, Oct.
2004.
[7] M. Pantic, M.F. Valstar, R. Rademaker and L. Maat, "Web-based
Database for Facial Expression Analysis", Proc. IEEE
Int'l Conf. Multimedia and Expo (ICME'05), Amsterdam, The
Netherlands, July 2005.
[8] Dzung L. Pham, Chenyang Xu, and Jerry L. Prince (2000):
Current Methods in Medical Image Segmentation, Annual
Review of Biomedical Engineering, volume 2, pp 315-337.