Facial Expression


INTRODUCTION

A face provides genuine insight into expressions and emotions. The distances between facial
features and their rapid variations provide more reliable cues than verbal communication. Thus, human
communication can be divided into two parts: verbal and nonverbal. Here, nonverbal
communication is described as the process of communicating through facial expressions, eye
contact, and nodding in addition to nonverbal aspects of speech such as tone and volume. Over
80% of all communication is carried out through these nonverbal messages (Argyle et al. 1970).
For the visually impaired, a face provides little to no insight into a person. According to the World
Health Organization, there are about 284 million people in the world who are visually impaired
(WHO, 2009). This visually impaired population is at a disadvantage when it comes to
communicating since its members are not able to interpret most nonverbal messages (Krishna et
al. 2005).

In the last twenty years, the computer-based facial recognition field has expanded rapidly. Several
algorithms have been introduced and improved to the point where computers can rival humans in
accuracy of facial recognition (O'Toole, 2007). In order to develop our product, we need to
understand how humans identify faces, evaluate the different existing facial recognition
algorithms, and examine existing applications of this technology. Sinha et al. (2006)
outline nineteen basic results regarding human facial recognition, including many of the methods
that humans use to identify faces. They show that the study of human processes involved in facial
recognition and the artificial algorithms being used for facial recognition systems are inextricably
linked together. The human brain can recognize faces in 120 milliseconds (ms). In order to
achieve a useful system, the algorithm we choose must provide near real-time feedback. Several
real-time algorithms have been developed in recent years. Beveridge et al. (2005) of Colorado State
University evaluated the efficiency and accuracy of four algorithms: Principal Component Analysis
(PCA), Linear Discriminant Analysis (LDA), Elastic Graph Matching (EGM), and the Bayesian
Intrapersonal/Extrapersonal Image Difference Classifier (BIC). A study by Krishna, Little, Black,
and Panchanathan also evaluated these algorithms with respect to changes in illumination and
pose. The LDA and PCA algorithms were found to be superior: LDA was the fastest, while PCA was
the most accurate (Krishna et al., 2005). We will evaluate the four algorithms (PCA, LDA, EGM,
and BIC) to determine which works best for our purposes, as our conditions may differ slightly
from those of previous studies. Another recently developed system is Luxand's FaceSDK, a facial
recognition engine that supports recognition in both still images and real-time video streams.
It is used mostly in the entertainment industry to create real-time animations.
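
Because PCA (the eigenfaces approach) is one of the baseline algorithms we will evaluate, a minimal sketch of the idea is given below. It assumes pre-aligned, flattened grayscale training faces and simple nearest-neighbour matching in the projected space; it is an illustration of the technique, not our evaluation code.

```python
# Minimal sketch of the eigenfaces (PCA) approach we intend to evaluate.
# Assumes flattened, aligned grayscale face images; the number of
# components and the matching rule are illustrative choices.
import numpy as np

def train_pca(faces, n_components=50):
    """faces: (n_samples, n_pixels) array of flattened training faces."""
    mean_face = faces.mean(axis=0)
    centered = faces - mean_face
    # Economy-size SVD yields the principal axes without forming the
    # full pixel-by-pixel covariance matrix.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    components = vt[:n_components]            # the "eigenfaces"
    projections = centered @ components.T     # training faces in PCA space
    return mean_face, components, projections

def identify(face, mean_face, components, projections, labels):
    """Return the label of the nearest training face in PCA space."""
    query = (face - mean_face) @ components.T
    distances = np.linalg.norm(projections - query, axis=1)
    return labels[int(np.argmin(distances))]
```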

Facial expression recognition is an essential ability for good interpersonal relations (Niedenthal
and Brauer, 2012), and a major subject of study in the fields of human development, psychological
well-being, and social adjustment. In fact, emotion recognition plays a pivotal role in the
experience of empathy (Gery et al., 2009), in the prediction of prosocial behavior (Marsh et al.,
2007), and in the ability model of emotional intelligence (Salovey and Mayer, 1990). Additionally,
the literature demonstrates that impairments in emotional expression recognition are associated
with several negative consequences, such as difficulties in identifying, differentiating, and
describing feelings. For example, several studies have shown an association of deficits in
emotional facial expression processing with psychiatric disorders in both adults (Phillips et al.,
2003) and children (Blair, 2003), alexithymia (Grynberg et al., 2012), and difficulties in social
functioning (Batty and Taylor, 2006).

Numerous studies have identified six basic universally recognized emotions: anger, disgust, fear,
happiness, sadness, and surprise (see, for instance, Ekman, 1992). The ability to recognize basic
emotions emerges very early in life, as infants use emotional expressions as behavioral cues. In
fact, from the first year of life, infants recognize emotions from faces and, therefore, can adjust
their behavior accordingly during social interactions with a caregiver (Hertenstein and Campos,
2004). A more accurate understanding of emotions appears in pre-school age children (Widen and
Russell, 2008), although it may be that 3-year-olds' facial expression recognition skills partly
depend on task specifications. For example, Székely et al. (2011) found different recognition rates
between verbal (emotion-labeling) and nonverbal (emotion-matching) tasks in the case of four
basic emotions (happiness, sadness, anger, and fear), among normally developing 3-year-olds.
Despite this, a progressive improvement occurs during school age through middle childhood, up to
the emergence of mature recognition patterns in adulthood (Herba et al., 2006; Widen, 2013).
However, it seems that recognition ability has different developmental patterns among emotions
(Durand et al., 2007). For example, while happy and sad expressions seem to be accurately
categorized early in life (Gosselin, 1995), there is no clear evidence regarding expressions of anger
and disgust. Despite some inconsistencies, face processing abilities generally increase across
childhood and adolescence (Ewing et al., 2017), and research generally agrees on the improvement
in recognition performance with age (Theurel et al., 2016). Specifically, some studies have
suggested that near-adult levels of recognition are achieved before adolescence (Rodger et al.,
2015). However, it is worth mentioning that developmental researchers have used several methods
to measure how accurately children recognize facial expressions of different emotions (Bruce et
al., 2000), such as the discrimination paradigm, the matching procedure and free labeling. Hence,
the ability to recognize most emotional expressions appears, at least partly, to be dependent on
task demands. Additionally, while recognition of, and reactions to, facial expressions have been
largely examined among children and adults in both natural and clinical settings, fewer studies
have focused on preadolescence, providing mixed and inconsistent results. For example, Lawrence
et al. (2015) tested the hypothesis that an individual's ability to recognize simple emotions through
childhood and adolescence is modulated by pubertal stage. In contrast, Motta-Mena and Scherf
(2017) reported that while the ability to process basic expressions improves with age, puberty per
se does not contribute specifically to this improvement. Indeed, once age is accounted for, there is no
additional influence of pubertal development on the ability to perceive basic expressions in early
adolescence.
In addition, to understand the mechanisms involved in recognition of facial expressions, two
primary dimensions of emotional perception of faces have been identified (Russell,
1980; Schubert, 1999): valence and arousal. Valence represents the pleasantness of a face, and is
measured on a linear scale with pleasant/positive emotions at one end, and unpleasant/negative
emotions at the other. Arousal indicates the degree to which a face brings an observer to a state of
greater alertness. Arousal and valence have long been studied, and research has found broad
similarities between adults and children (Vesker et al., 2017). However, to our knowledge, there is a paucity of
research examining preadolescents' affective ratings of facial expressions elicited by experimental
stimuli. According to Lang et al. (1997), the variance in facial emotional assessments is accounted
for by three major affective dimensions: pleasure, arousal, and dominance. In this study, we are
particularly interested in affective valence (ranging from pleasant to unpleasant) and arousal
(ranging from calm to excited). Affective reactions to facial emotional expressions in
prepubescent boys and girls were examined, and compared to those of adults, by McManis et al.
(2001). These authors demonstrated that children's and adolescents' affective evaluations of
pictures, in terms of pleasure, arousal, and dominance, were similar to those of adults. However,
more studies specifically focused on preadolescent youth are needed.
For the aforementioned reasons, the present study combined the categorical (ability) and
dimensional (valence and arousal) approaches to explore recognition ability and affective ratings
of valence and arousal in preadolescence, which is considered the phase of development ranging
between 11 and 14 years of age (Meschke et al., 2012). 
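
To make the dimensional approach concrete, the sketch below shows how SAM-style valence and arousal ratings could be aggregated per expression. The 9-point scales, column names, and example values are illustrative assumptions rather than our actual data.

```python
# Minimal sketch of the dimensional (valence/arousal) scoring described above.
# Assumes each participant rates each face on 9-point SAM-style scales;
# the example data frame is purely illustrative.
import pandas as pd

ratings = pd.DataFrame({
    "participant": [1, 1, 2, 2],
    "expression":  ["happiness", "fear", "happiness", "fear"],
    "valence":     [8, 2, 7, 3],   # 1 = unpleasant ... 9 = pleasant
    "arousal":     [6, 8, 5, 7],   # 1 = calm ... 9 = excited
})

# Mean valence and arousal per expression: the dependent measures of the
# dimensional approach, alongside categorical recognition accuracy.
summary = ratings.groupby("expression")[["valence", "arousal"]].agg(["mean", "std"])
print(summary)
```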
As outlined by Zhao et al. (2003), these algorithms may be susceptible to several well known
challenges including pose, illumination, and resolution. However, over the past several years,
major improvements have been made to these baseline algorithms. In an experiment by O'Toole
(2007), seven facial recognition algorithms were compared with humans on face-matching tasks.
Out of the seven algorithms, three were better at recognizing faces than humans were. Though
illumination still presented problems, O’Toole’s study shows that current algorithm capabilities
compete favorably with human ability to recognize faces.

Systems for recognizing faces do currently exist. However, many of the existing facial recognition
systems are created for security rather than for the visually impaired (Xiao and Yang, 2009). Even
so, these systems show that it is possible to recognize faces acquired under controlled conditions
at a rate of 99.2% in near real time. One system, developed by Krishna et al., uses a PCA algorithm
and was designed with the visually impaired in mind (Krishna et al., 2005). Krishna's system does
not use stereo cameras, nor was it tested on visually impaired users; it also lacks facial
expression recognition. Our
system will be designed with the advice of the visually impaired to provide a facial recognition
system that would benefit them the most.

In order to use facial expression recognition algorithms to improve social interactions among the
visually impaired, we need to understand the biological underpinnings and universality of facial
expression, as well as research existing algorithms. Ekman and Friesen (1971) identified six
universal facial expressions in an experiment conducted among a group of New Guinea tribe
members who had been isolated from foreign cultures. They found that the subjects were able to
accurately identify certain expressions (happiness, anger, sadness, disgust, surprise, and fear)
depicted in pictures of different facial poses. Their findings support the theory of a universal
correspondence between certain facial behaviors and certain emotions. These expressions are
characterized by the Facial Action Coding System (FACS), which uses 44 facial action units
(AUs), each of which encodes a specific facial muscle movement. Since the expressions stated above
can be recognized cross-culturally, the primary focus of our computer algorithm will be on
detecting these six universal expressions.
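
To illustrate how the six universal expressions relate to FACS action units, the sketch below scores a set of detected AUs against prototype combinations. The AU sets are commonly cited prototypes and are assumptions for illustration, not the definitive FACS coding; exact combinations vary across sources.

```python
# Illustrative mapping from detected FACS action units to the six universal
# expressions. The AU combinations below are commonly cited prototypes and
# are assumptions here, not an authoritative FACS specification.
UNIVERSAL_EXPRESSION_PROTOTYPES = {
    "happiness": {6, 12},            # cheek raiser + lip corner puller
    "sadness":   {1, 4, 15},         # inner brow raiser + brow lowerer + lip corner depressor
    "surprise":  {1, 2, 5, 26},      # brow raisers + upper lid raiser + jaw drop
    "fear":      {1, 2, 4, 5, 20},   # raised/lowered brows + upper lid raiser + lip stretcher
    "anger":     {4, 5, 7, 23},      # brow lowerer + lid tightener + lip tightener
    "disgust":   {9, 15},            # nose wrinkler + lip corner depressor
}

def label_expression(detected_aus):
    """Return the expression whose prototype AUs best overlap the detected set."""
    scores = {
        name: len(prototype & detected_aus) / len(prototype)
        for name, prototype in UNIVERSAL_EXPRESSION_PROTOTYPES.items()
    }
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "neutral"

print(label_expression({6, 12, 25}))  # -> "happiness"
```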

Experiments have also been conducted where motion in the major facial areas (the mouth, eyes,
and eyebrows) was selectively frozen in video sequences. Multiple studies concluded that for some
expressions a single movement could play the decisive role in differentiation, while for other
expressions multiple movements were necessary (Nusseck et al., 2008).

Takeo Kanade, Jeffrey F. Cohn, and Yingli Tian compiled a database of 2105 coded facial
expression images using FACS (Kanade et al., 2000). This database consists of thousands of
different faces, all with various expressions, poses, and illuminations. We will test the chosen
algorithms and final system on this facial database to ensure accuracy and efficiency.
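
A minimal sketch of how accuracy and latency might be measured against such a labeled database is given below. The directory layout, the classifier's predict interface, and the use of OpenCV for image loading are assumptions, not the final test harness.

```python
# Sketch of the accuracy/latency check we plan to run against a labeled
# expression database such as Cohn-Kanade. Paths and the classifier
# interface are assumed for illustration.
import time
from pathlib import Path

import cv2
import numpy as np

def evaluate(classifier, dataset_dir="ck_database"):
    """classifier: object with a predict(gray_image) -> expression-label method."""
    correct, total, latencies = 0, 0, []
    for label_dir in Path(dataset_dir).iterdir():      # one folder per expression
        if not label_dir.is_dir():
            continue
        for image_path in label_dir.glob("*.png"):
            gray = cv2.imread(str(image_path), cv2.IMREAD_GRAYSCALE)
            start = time.perf_counter()
            predicted = classifier.predict(gray)
            latencies.append(time.perf_counter() - start)
            correct += int(predicted == label_dir.name)
            total += 1
    return correct / total, float(np.mean(latencies))
```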

One prominent expression detector is a parametric optical flow-based algorithm that analyzes
facial expressions in real time, developed by Yacoob and Black (1997). This algorithm
employs facial tracking and facial feature tracking, checking changes against thresholds
established by existing databases. Another expression algorithm that is
available in the market is FaceAPI, a face tracking technology developed by Seeing Machines
Limited. FaceAPI provides real-time face recognition and facial feature tracking by tracking X
individual points on the face in three dimensions. The algorithm can detect head position and
tolerates head orientations of +/- 90 degrees, allowing the user more freedom to move around
during a conversation. FaceAPI can also estimate the distance of a person without a dual-camera
(stereo) setup, by measuring the apparent size of the face in the image and inferring distance
from that measurement.
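
The sketch below illustrates the kind of monocular distance estimation the text attributes to FaceAPI, using a simple pinhole-camera approximation. It is our own illustration, not FaceAPI's implementation; the average face width and focal length are assumed calibration values.

```python
# Hedged sketch of monocular distance estimation from apparent face size.
# The constants are assumed calibration values, not FaceAPI internals.
AVERAGE_FACE_WIDTH_CM = 15.0      # assumed real-world face width
FOCAL_LENGTH_PX = 800.0           # assumed, from one-time camera calibration

def estimate_distance_cm(face_width_px):
    """Pinhole model: distance = focal_length * real_width / apparent_width."""
    return FOCAL_LENGTH_PX * AVERAGE_FACE_WIDTH_CM / face_width_px

print(estimate_distance_cm(200))  # a face 200 px wide -> about 60 cm away
```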

Arguably the most important contribution basic science has made to our understanding of emotion
concerns the universality of facial expressions of emotion. Darwin (1872) was the first to suggest
that they were universal; his ideas about emotions were a centerpiece of his theory of evolution,
suggesting that emotions and their expressions were biologically innate and evolutionarily
adaptive, and that similarities in them could be seen phylogenetically. Early research testing
Darwin’s ideas, however, was inconclusive (Ekman, et al. 1972).

Since the original universality studies, more than 30 studies examining judgments of facial
expressions have replicated the universal recognition of emotion in the face (Matsumoto, 2001). In
addition, a meta-analysis of 168 datasets examining judgments of emotion in the face and other
nonverbal stimuli indicated universal emotion recognition well above chance levels (Elfenbein &
Ambady, 2002a). And there have been over 75 studies that have demonstrated that these very
same facial expressions are produced when emotions are elicited spontaneously (Matsumoto, et al.
2008). These findings are impressive given that they have been produced by different researchers
around the world in different laboratories using different methodologies with participants from
many different cultures, all converging on the same set of results. Thus, there is strong evidence
for universal facial expressions of seven emotions: anger, contempt, disgust, fear, joy,
sadness, and surprise.

Having code to detect different expressions is essential to creating a feedback system, which
shares such information with blind users. Rehman, at Umeå University in Sweden, wrote his doctoral
thesis on emotion recognition feedback through vibration (Rehman and Liu, 2008). After focusing on
the importance of recognizing and understanding emotions in everyday conversation, Rehman
proposed a method to communicate such information without vision. He created a chair with what he
calls a matrix of vibration actuators located on its back. Distinct sequences of vibrations
represent different emotions, so by memorizing the vibration patterns the user knows which emotion
their communication partner is exhibiting. Rehman emphasizes the value of tactile perception and
builds his feedback systems mainly around tactile channels such as vibration.
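
The sketch below illustrates how detected expressions might be mapped to vibration sequences in a Rehman-style tactile feedback loop. The pattern encoding and the drive_motor callback are illustrative assumptions, not Rehman's actual actuator layout.

```python
# Hedged sketch of mapping recognized expressions to vibration sequences.
# Pattern timings (on/off milliseconds) and the drive_motor callback are
# illustrative assumptions.
import time

VIBRATION_PATTERNS = {
    "happiness": [(200, 100)] * 2,          # two short pulses
    "sadness":   [(600, 200)],              # one long pulse
    "anger":     [(100, 50)] * 4,           # rapid burst
    "surprise":  [(100, 100), (400, 100)],  # short pulse then long pulse
}

def play_pattern(expression, drive_motor):
    """drive_motor(on: bool) toggles a single actuator; timing is blocking."""
    for on_ms, off_ms in VIBRATION_PATTERNS.get(expression, []):
        drive_motor(True)
        time.sleep(on_ms / 1000)
        drive_motor(False)
        time.sleep(off_ms / 1000)
```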

Another challenge to consider is the movement of visually impaired subjects, which may cause the
acquired faces to be blurred. We need to develop image deblurring tools and feed the restored face
images to the recognition algorithms. Humans rely on visual cues when interacting within a social
context, and for those who cannot see, this reliance interferes with the quality of social
interactions (Krishna et al., 2005). Previous studies have shown that non-verbal communication is
integral to maintaining social interactions. Since the majority of non-verbal communication
consists of facial gestures and expressions, which are visual cues, interpersonal communication
is compromised for the visually impaired (Krishna et al., 2005). This societal disadvantage can
best be addressed through an application of science and technology; we propose the implementation
of computer vision to provide real-time feedback on facial identity and expression for the
visually impaired.
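
As one candidate for the deblurring step mentioned above, the sketch below applies Wiener deconvolution with an assumed motion-blur kernel. The kernel shape and the balance parameter are illustrative guesses that would need tuning on real captures.

```python
# Hedged sketch of a deblurring step before recognition: Wiener deconvolution
# with an assumed horizontal motion-blur kernel. Kernel length and the
# balance parameter are illustrative, not tuned values.
import numpy as np
from skimage import io, restoration

def horizontal_motion_psf(length=9):
    """Simple horizontal motion-blur point spread function (assumed)."""
    psf = np.zeros((length, length))
    psf[length // 2, :] = 1.0
    return psf / psf.sum()

def deblur_face(path="blurred_face.png"):
    gray = io.imread(path, as_gray=True)   # float image in [0, 1]
    # balance trades noise suppression against sharpness; tuned empirically.
    return restoration.wiener(gray, horizontal_motion_psf(), balance=0.1)
```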

Methodology
The study used an experimental design. The participants were psychology students at DMU
University. The sample comprised 50 students who were over 18 years of age and had normal or
corrected-to-normal vision. These 50 students were divided equally into two groups: one with
correct recognition of the facial expressions and the other with alternate recognition.
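
A minimal sketch of the equal random assignment described above is given below; the participant IDs and group labels are placeholders for the actual roster.

```python
# Sketch of splitting 50 participants equally into the two conditions.
# IDs, seed, and group names are placeholders.
import random

participants = list(range(1, 51))        # 50 participant IDs
random.seed(42)                          # fixed seed for a reproducible split
random.shuffle(participants)

groups = {
    "correct_recognition":   sorted(participants[:25]),
    "alternate_recognition": sorted(participants[25:]),
}
print({name: len(ids) for name, ids in groups.items()})  # 25 per group
```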

References

[1] M. Argyle, V. Salter, H. Nicholson, M. Williams, and P. Burgess, “The communication of
inferior and superior attitudes by verbal and non-verbal signals,” British Journal of Social and
Clinical Psychology, no. 9, pp. 222–231, 1970.

[2] “Visual impairment and blindness,” World Health Organization, 2009.

[3] S. Krishna, G. Little, J. Black, and S. Panchanathan, “A wearable face recognition system for
individuals with visual impairments,” pp. 106–113, 2005.

[6] P. Sinha, B. Balas, Y. Ostrovsky, and R. Russell, “Face recognition by humans: Nineteen
results all computer vision researchers should know about,” Proceedings of the IEEE, vol. 94, no.
11, pp. 1948–1962, 2006.

[14] A. J. O'Toole, “Face recognition algorithms surpass humans matching faces over changes in
illumination,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 29, no. 9, p.
1642, 2007.
[15] R. Beveridge, B. Draper, G. Givens, and W. Fisher, “Introduction to the statistical evaluation
of face recognition algorithms,” 2005.

[16] W. Zhao, R. Chellappa, P. J. Phillips, and A. Rosenfeld, “Face recognition: a literature
survey,” ACM Computing Surveys, vol. 35, no. 4, pp. 399–459, 2003.

[17] Q. Xiao and X. D. Yang, “A facial presence monitoring system for information security,”
IEEE Workshop on Computational Intelligence in Biometrics: Theory, Algorithms, and
Applications, pp. 69–76, 2009.

[18] P. Ekman and W. V. Friesen, “Constants across cultures in the face and emotion,” Journal of
Personality and Social Psychology, vol. 17, no. 2, pp. 124–129, 1971.

[19] M. Nusseck, D. W. Cunningham, C. Wallraven, and H. H. Bülthoff, “The contribution of
different facial regions to the recognition of conversational expressions,” Journal of Vision,
vol. 8, no. 8, 2008.

[20] T. Kanade, Y. Tian, and J. Cohn, “Comprehensive database for facial expression analysis,”
2000.

[21] Y. Yacoob and M. Black, “Recognizing facial expressions in image sequences,” 1997.

[22] S. u. Rehman and L. Liu, “Vibrotactile rendering of human emotions on the manifold of
facial expressions,” Journal of Multimedia, vol. 3, no. 3, pp. 18–25, 2008.

Ekman, P., Friesen, W. V., & Ellsworth, P. (1972). Emotion in the human face: guide-lines for
research and an integration of findings. New York: Pergamon Press.

Darwin, C. (1872). The expression of emotion in man and animals. New York: Oxford University
Press.

Matsumoto, D. (2001). Culture and emotion. In D. Matsumoto (Ed.), The handbook of culture
and psychology (pp. 171-194). New York: Oxford University Press.
Elfenbein, H. A., & Ambady, N. (2002a). On the universality and cultural specificity of emotion
recognition: A meta-analysis. Psychological Bulletin, 128(2), 205-235.

Matsumoto, D., Keltner, D., Shiota, M. N., Frank, M. G., & O'Sullivan, M. (2008). What's in a
face? Facial expressions as signals of discrete emotions. In M. Lewis, J. M. Haviland & L.
Feldman Barrett (Eds.), Handbook of emotions (pp. 211-234). New York: Guilford Press.

Niedenthal, P. M., and Brauer, M. (2012). Social Functionality of Human Emotion. Annu. Rev.
Psychol. 63, 259–285. doi: 10.1146/annurev.psych.121208.131605.
Salovey, P., and Mayer, J. D. (1990). Emotional intelligence. Imagin. Cogn. Pers. 9, 185–211. doi:
10.2190/DUGG-P24E-52WK-6CDG.
Phillips, M. L., Drevets, W. C., Rauch, S. L., and Lane, R. (2003). Neurobiology of emotion
perception II: implications for major psychiatric disorders. Biol. Psychiatry 54, 515–528. doi:
10.1016/S0006-3223(03)00171-9.
Blair, R. J. R. (2003). Facial expressions, their communicatory functions and neuro-cognitive
substrates. Philos. Trans. R. Soc. Lond. B Bio. Sci. 358, 561–572. doi: 10.1098/rstb.2002.1220.
Grynberg, D., Chang, B., Corneille, O., Maurage, P., Vermeulen, N., Berthoz, S., et al. (2012).
Alexithymia and the processing of emotional facial expressions (EFEs): systematic review,
unanswered questions and further perspectives. PLoS ONE 7:e42429. doi:
10.1371/journal.pone-.0042429.
Batty, M., and Taylor, M. J. (2006). The development of emotional face processing during
childhood. Dev. Sci. 9, 207–220. doi: 10.1111/j.1467-7687.2006.00480.x.
Marsh, A. A., Kozak, M. N., and Ambady, N. (2007). Accurate identification of fear facial
expressions predicts prosocial behavior. Emotion 7, 239–251. doi: 10.1037/1528-3542.7.2.239.
Gery, I., Miljkovitch, R., Berthoz, S., and Soussignan, R. (2009). Empathy and recognition of
facial expressions of emotion in sex offenders, non-sex offenders and normal controls. Psychiatry
Res. 165, 252–262. doi: 10.1016/j.psychres.2007.11.006.
Ekman, P. (1992). An argument for basic emotions. Cogn. Emot. 6, 169–200. doi:
10.1080/026999-39208411068.
Hertenstein, M. J., and Campos, J. J. (2004). The retention effects of an adult's emotional displays
on infant behavior. Child Dev. 75, 595–613. doi: 10.1111/j.1467-8624.2004.00695.x
Widen, S. C., and Russell, J. A. (2008). Children acquire emotion categories gradually. Cognitive
dev. 23, 291–312. doi: 10.1016/j.cogdev.2008.01.002.
Székely, E., Tiemeier, H., Arends, L. R., Jaddoe, V. W., Hofman, A., Verhulst, F. C. (2011).
Recognition of facial expressions of emotions by 3-year-olds. Emotion 11, 425–435. doi:
10.1037/a0022587.
Herba, C. M., Landau, S., Russell, T., Ecker, C., and Phillips, M. L. (2006). The development of
emotion-processing in children: effects of age, emotion, and intensity. J. Child Psychol.
Psychiatry 47, 1098–1106. doi: 10.1111/j.1469-7610.2006.01652.x.
Widen, S. C. (2013). Children's interpretation of facial expressions: the long path from valence-
based to specific discrete categories. Emot. Rev. 5, 72–77. doi: 10.1177/1754073912451492.
Durand, K., Gallay, M., Seigneuric, A., Robichon, F., and Baudouin, J. Y. (2007). The
development of facial emotion recognition: the role of configural information. J. Exp. Child
Psychol. 97, 14–27. doi: 10.1016/j.jecp.2006.12.001.
Gosselin, P. (1995). The development of the recognition of facial expressions of emotion in
children. Can. J. Behav. Sci. 27, 107–119. doi: 10.1037/008-400X.27.1.107.
Ewing, L., Karmiloff-Smith, A., Farran, E. K., and Smith, M. L. (2017). Distinct profiles of
information-use characterize identity judgments in children and low-expertise adults. J. Exp.
Psychol. Hum. Percept. Perform. 43, 1937–1943. doi: 10.1037/xhp0000455.
Theurel, A., Witt, A., Malsert, J., Lejeune, F., Fiorentini, C., Barisnikov, K., et al. (2016). The
integration of visual context information in facial emotion recognition in 5-to 15-year-olds. J. Exp.
Child Psychol. 150, 252–271. doi: 10.1016/j.jecp.2016.06.004.
Lawrence, K., Campbell, R., and Skuse, D. (2015). Age, gender, and puberty influence the
development of facial emotion recognition. Front. Psychol. 6:761. doi: 10.3389/fpsyg.2015.00761.
Motta-Mena, N. V., and Scherf, K. S. (2017). Pubertal development shapes perception of complex
facial expressions. Dev. Sci. 20:e12451. doi: 10.1111/desc.12451.
Rodger, H., Vizioli, L., Ouyang, X., and Caldara, R. (2015). Mapping the development of facial
expression recognition. Dev. Sci. 18, 926–939. doi: 10.1111/desc.12281.
Bruce, V., Campbell, R. N., Doherty-Sheddon, G., Import, A., Langton, S., McAuley, S., et al.
(2000). Testing face processing skills in children. Br. J. Dev. Psychol. 18, 319–333. doi:
10.1348/026151000165715.
Russell, J. A. (1980). A circumplex model of affect. J. Pers. Soc. Psychol. 39, 1161–1178. doi:
10.1037/h0077714.
Schubert, E. (1999). Measuring emotion continuously: validity and reliability of the two-
dimensional emotion-space. Aust. J. Psychol. 51, 154–165. doi: 10.1080/00049539908255353.
Vesker, M., Bahn, D., Kauschke, C., and Schwarzer, G. (2017). Perceiving arousal and valence in
facial expressions: differences between children and adults, Eur. J. Dev. Psychol. 15, 411–425.
doi: 10.1080/17405629.2017.1287073.
Lang, P. J., Bradley, M. M., and Cuthbert, B. N. (1997). International Affective Picture System
(IAPS): Technical Manual and Affective Ratings. NIMH Center for the Study of Emotion and
Attention, 39–58.
McManis, M. H., Bradley, M. M., Berg, W. K., Cuthbert, B. N., and Lang, P. J. (2001). Emotional
reactions in children: verbal, physiological, and behavioral responses to affective
pictures. Psychophysiol. 38, 222–231. doi: 10.1111/1469-8986.3820222.
Meschke, L. L., Peter, C. R., and Bartholomae, S. (2012). Developmentally appropriate practice to
promote healthy adolescent development: integrating research and practice. Child and Youth Care
Forum 41, 89–108. doi: 10.1007/s10566-011-9153-7
