
2020 6th International Conference on Interactive Digital Media (ICIDM)

Evaluation of Emotional Meaning of Eyelid Position on 3D Animatronic Eyes

Arief Setyo Jatmiko, Cecilia Tities Ginalih, Reza Darmakusuma
School of Electrical Engineering and Informatics, Institut Teknologi Bandung, Bandung, Indonesia
ariefsetyojatmiko@students.itb.ac.id, ceciliatitiesginalih@students.itb.ac.id, reza.darmakusuma@lskk.ee.itb.ac.id

DOI: 10.1109/ICIDM51048.2020.9339629

Abstract—Emotion is a complex phenomenon in humans, and there are many ways to express it, including eye gaze. Ekman et al. described abstractions of the core features of the human face, which Onchi et al. adopted in a single-eyed 2D avatar that moves only its upper and lower eyelids. Adopting this 2D single-eye avatar design, this study evaluates the similarity of human emotions expressed using a 3D two-eye animatronic model with stiff eyelids, shared by Will Cogley. A survey was conducted to evaluate the degree of similarity of the seven types of emotion described by Ekman et al., i.e. neutral, happy, surprised, sad, scared, angry, and disgusted, given to 40 participants with a variety of backgrounds. The result is that the emotion samples displayed had a significant effect on the participants' perceptions; each sample was also able to convey the meaning of an emotion well, because there was a significant effect of the interaction between sample and emotion (p = 5.63 × 10⁻¹⁷). Based on these results, participants perceived eyelids with physical embodiment and with virtual agents in almost the same way, so the two sets of results are mutually reinforcing. Compared with the results of research using Probo, emotions can be expressed better when facial features such as eyebrows, eyelids, and mouth are present. The conclusions of this research are intended as a step toward identifying the function that each facial feature contributes to expressing emotions.

Keywords—emotion, eye gaze, 3D model, animatronic eyes

I. INTRODUCTION

Human-Robot Interaction (HRI) is a continually growing field of science; there are now robots that can perform normal human activities. The focus of HRI is robots that function socially with humans and the environment in everyday life [1]. Robots are fundamentally not as capable as humans, so a problem-based, multidisciplinary field of science is needed, represented among others by engineers, designers, psychologists, anthropologists, philosophers, and sociologists, so that HRI can provide interaction experiences as close as possible to human interaction. Humans have the basic ability to communicate verbally and non-verbally: imagine meeting a stranger with whom we must communicate. Before communicating verbally, we often communicate non-verbally, whether through gaze [2] or gesture [3], which conveys our intentions, emotions, and interest in starting an interaction. Gaze is an important component of social interaction; the eye can carry a variety of different signals depending on the status, disposition, and emotional state of the sender and receiver of the signal [4]. Current approaches to incorporating eye gaze into HRI vary widely: some researchers investigate robots, virtual agents, or psychology, some use robots as stimuli to probe the limits of human perception, and others examine the effects of robot gaze by modifying the robot's physical appearance and behavior and then measuring the effect on human responses [5].

One study related to eye gaze examined how eyes partially closed by the eyelids affect human perception of the emotions displayed by robots, using an avatar in the form of a single eye [6]. The results of that study indicate that changes in the eyelids can signal differences in emotion.

But what happens if the avatar under study is replaced with a 3-dimensional object whose morphology resembles the human eye: can humans still perceive the emotions displayed by the robot? We use Will Cogley's animatronic eye design [7], whose eyelid movement features let it display emotions like the one-eye avatar [6], and test the results by conducting a survey.

The structure of this paper is as follows: section II presents the literature study, section III discusses the basic design, section IV discusses the research methodology, section V presents the results obtained, and section VI presents conclusions and further research.

II. RELATED WORK

Eye gaze can indicate mental states and show the intent of social robots [8]; it can also help another person understand the emotions one is feeling [9]. Various emotions can be expressed even by an artificial agent in the form of a one-eye avatar with a stiff eyelid [6], designed to influence the user's emotional state based on research on the visual design, color, and shape of products [10]; other studies have found that color, sound, and vibration are the main stimuli for expressing emotions [11]. One-eye artificial agent designs are often used in the animation and game industries to convey a variety of emotions to the audience through a character's appearance. Common characteristics of these artificial agents are the absence of eyebrows and an iris-to-sclera ratio that differs between characters and humans [13]. Research using the social robot Probo [12] to express happy, sad, surprised, angry, disgusted, and afraid showed significant results. Probo has 20 Degrees of Freedom (DOF) to express emotions, comprising movements of the head (3 DOF), eyes (3 DOF), eyelids (2 DOF), eyebrows (4 DOF), ears (2 DOF), proboscis (3 DOF), and mouth (3 DOF). The six emotions that Probo can express are shown in figure 1.

III. BASIC DESIGN

We refer to Onchi's one-eye artificial agent design [6], shown in figure 2: for an eye of diameter d, the iris measures 0.7·d, the pupil 0.25·d, and a glossy texture of 0.2·d is placed in the upper right of the eye; to resemble the proportions of the human eye, the upper eyelid covers 0.7·d of the whole eye and the rest is covered by the lower eyelid. We use an open-source animatronic eye created by Will Cogley [7]. The eyeball is ± 3 cm wide and ± 2.75 cm tall, with an iris of ± 1.34 cm and a pupil of ± 0.3 cm. To give a realistic impression, the outer layer of the eye is coated with plastic.
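To make the proportions concrete, the short Python sketch below (our illustration, not code from the paper) scales Onchi's fractional design [6] to an arbitrary eyeball diameter d and compares the result with Cogley's measured dimensions.

```python
# Illustrative sketch (not from the paper): scale Onchi's proportional
# one-eye design [6] to an eyeball of diameter d and compare it with the
# measured dimensions of Will Cogley's animatronic eye [7].

def onchi_proportions(d_cm: float) -> dict:
    """Feature sizes as fractions of the eye diameter d, per [6]."""
    return {
        "iris":      0.70 * d_cm,  # iris diameter = 0.7 * d
        "pupil":     0.25 * d_cm,  # pupil diameter = 0.25 * d
        "gloss":     0.20 * d_cm,  # glossy highlight, upper right of the eye
        "upper_lid": 0.70 * d_cm,  # upper eyelid covers 0.7 * d when closed
    }

# Measured dimensions of the physical eye, in cm, from the paper.
cogley = {"eyeball": 3.0, "height": 2.75, "iris": 1.34, "pupil": 0.3}

scaled = onchi_proportions(cogley["eyeball"])
for feature in ("iris", "pupil"):
    print(f"{feature}: Onchi-scaled {scaled[feature]:.2f} cm "
          f"vs. Cogley {cogley[feature]:.2f} cm")
# iris: Onchi-scaled 2.10 cm vs. Cogley 1.34 cm
# pupil: Onchi-scaled 0.75 cm vs. Cogley 0.30 cm
```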
Based on Ekman's [14] and Faigin's [15] research on facial characteristics, especially the main features of the eye, we replicate the seven basic human emotions displayed in Onchi's research [6] on Will Cogley's open-source animatronic eye, using a total of 4 DOF: upper eyelids (2 DOF) and lower eyelids (2 DOF), as shown in table I.

Fig. 1. Six basic emotions that can be expressed by Probo [12] (happy, surprise, sad, angry, fear, disgust).

Fig. 2. Design and dimensions of the one-eye artificial agent. A fully open eye displays the sclera, iris, pupil, and glossy effect (left); a closed eye shows the position and dimensions of the eyelids.
TABLE I. SET OF EMOTIONS BASED ON HUMAN PHYSICAL CHARACTERISTICS.
(The Expression, Avatar, and 3D Model columns of the original are photographs and are omitted here.)

Emotion    Description
Neutral    Upper eyelid touches the iris; lower eyelid is relaxed.
Happy      Cheeks raise, pushing the lower eyelid; the upper eyelid can be raised.
Surprise   Eyes wide open; sclera fully visible.
Sad        Eyes slightly squinted; upper eyelid drops due to the brows.
Fear       Eyes open and tense; lower eyelid contracted.
Disgust    Eyes squinted due to the wrinkle of the nose.
Anger      Eyes focused and wide open; upper eyelid seems lower due to the brow.
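Read as control targets, the qualitative descriptions in Table I can be mapped onto the four eyelid DOF. The following sketch is a minimal illustration of such a mapping; the numeric openness values and the 90° servo range are our own assumptions for illustration, not values reported in the paper.

```python
# Minimal sketch of driving the 4-DOF eyelids (2 upper, 2 lower) from the
# qualitative descriptions in Table I. The openness values (0.0 = fully
# closed, 1.0 = fully open) are illustrative assumptions, not from the paper.

EYELID_POSES = {
    #            (upper lid, lower lid)
    "neutral":   (0.70, 0.00),  # upper lid touches the iris, lower lid relaxed
    "happy":     (0.80, 0.30),  # cheeks push the lower lid up
    "surprise":  (1.00, 0.00),  # eyes wide open, sclera fully visible
    "sad":       (0.45, 0.00),  # upper lid drops under the brows
    "fear":      (0.95, 0.25),  # eyes open and tense, lower lid contracted
    "disgust":   (0.55, 0.40),  # eyes squinted by the nose wrinkle
    "anger":     (0.85, 0.10),  # focused; upper lid seemingly lowered by the brow
}

def eyelid_servo_angles(emotion: str, lid_range_deg: float = 90.0) -> dict:
    """Map an emotion to servo angles for the four eyelid DOF.

    Both eyes get the same pose here, so the 4 DOF reduce to two symmetric
    targets; a real controller could drive each eye independently.
    """
    upper, lower = EYELID_POSES[emotion]
    return {"upper_left": upper * lid_range_deg,
            "upper_right": upper * lid_range_deg,
            "lower_left": lower * lid_range_deg,
            "lower_right": lower * lid_range_deg}

print(eyelid_servo_angles("surprise"))
```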
IV. RESEARCH METHODOLOGY

The methodology used to determine the relationship between eyelid position and expressed emotion in this study is a survey using a 5-level Likert scale (1-Very dissimilar, 2-Not similar, 3-Doubtful, 4-Similar, 5-Very similar) to measure the degree of similarity to emotions expressed by humans. Initially the research was to be carried out by bringing participants face to face with the 3D model, but to avoid spreading the COVID-19 virus, the 3D model was instead set to display each expression and photographed facing forward, as if face to face with the participant, and the photos were shown to participants in each question. Participants were asked to rate the expression displayed in each photo and respond via a Google Form. Participants were given seven questions, each showing a photo of the 3D model displaying one expression: neutral, anger, surprise, sad, fear, disgust, or happy. To avoid cognitive bias, the questions were presented in random order; the stages of the survey are shown in table II. First, participants were asked to fill in personal information: nationality, gender, and age. Second, all participants were given the same information on how to answer the questions. Third, the seven questions were given in random order using Google's randomization feature; each question had to be answered before the next was displayed, but the time given was not limited. The statistical test used was a two-way repeated-measures ANOVA.

TABLE II. STAGES OF CONDUCTING THE SURVEY ON THE PARTICIPANTS.
(The Display column of the original shows screenshots of the form and is omitted here.)

Stage                     Explanation
Introduction              Introduction of the research.
Demographic Information   The participant fills in demographic information, so that the background of each participant is known.
Instruction               Instructions given to participants for filling in the survey; each participant was given the same instructions.
Survey View               Displays an expression on the 3D model, with fields to measure its level of similarity to the 7 types of expression examined.
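The survey flow above can be expressed compactly in code. The sketch below is a minimal illustration, assuming each participant sees the seven photos in an independent random order and rates each photo against each of the seven emotion terms (this long format matches the sample × emotion ANOVA design in section V); the rating value is a placeholder for the participant's answer.

```python
# Sketch of the survey flow in Section IV: seven expression photos shown in
# a random order per participant, each rated on a 5-point Likert scale
# (1 = very dissimilar ... 5 = very similar) against each emotion term.
import random

EMOTIONS = ["neutral", "happy", "surprise", "sad", "fear", "disgust", "anger"]

def question_order(participant_id: int) -> list[str]:
    """Independent random order per participant to avoid cognitive bias."""
    rng = random.Random(participant_id)  # seeded, so reproducible per participant
    order = EMOTIONS.copy()              # one photo (sample) per emotion
    rng.shuffle(order)
    return order

rows = []  # long-format table: one row per (participant, sample, emotion)
for pid in range(40):                    # 40 participants, as in the paper
    for sample in question_order(pid):
        for emotion in EMOTIONS:
            rating = 3                   # placeholder for the Likert answer
            rows.append({"participant": pid, "sample": sample,
                         "emotion": emotion, "rating": rating})
```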

V. RESULTS

The study was conducted randomly on 40 Indonesian citizens with various backgrounds (min age = 16, max age = 57, range = 41, median age = 17.00, mean age = 24.5, SD = 12.412, variance = 154.051). A two-way repeated-measures ANOVA was used to determine the effect of the given sample and the degree of similarity for each type of emotion tested. The first test performed was Mauchly's test of sphericity, shown in table III; it gave significant results with p < 0.05, which means the variances of the differences are not equal, so a correction of the degrees of freedom using the Greenhouse-Geisser correction was needed, because the estimated epsilon (ε) < 0.75. Table IV shows the statistical results obtained in this study; they indicate a significant interaction between the sample and the type of emotion displayed: F(10.812, 421.662) = 10.557, p = 5.63 × 10⁻¹⁷, ηp² = .213. Participants perceived emotional similarity best in sample 1 (S1) (neutral, 3.007), S3 (surprised, 3.079), and S5 (fear, 3.061), while across samples participants perceived the emotions S3 (surprised, 3.011) and S6 (angry, 3.107) most strongly, based on the estimated marginal means. A plot of the similarity levels is shown in figure 3.

The results of this study are similar to the results obtained with virtual agents [6]; for example, the surprise sample with physical embodiment was perceived as happy (3.025), surprise (4.375), fear (3.375), and angry (3.425), while the surprise sample on the virtual agent was perceived as surprise (3.45), happy (3.06), and neutral (3.29).

Based on the boxplots obtained, there are several emotions that participants could clearly distinguish: S1 (neutral) expressing a neutral emotion was considered similar by participants [Neutral = 4.1], and the surprise emotion (S3) also has a high level of similarity, but it is mixed with other emotions, including fear and anger [Surprise = 4.38, Fear = 3.38, Angry = 3.40]. These results are consistent with Ekman's conclusion that more than one emotional state can appear in a single eye shape. Pairwise comparisons using Bonferroni adjustment showed insignificant differences between samples (S6) and (S3) (p = .865), and between sample (S7) and (S1) (p = .610), (S2) (p = .319), (S4) (p = .675), and (S5) (p = .369). A significant difference was found between (S7) and (S3) (p = .023). Comparisons among the other samples showed no significant difference (p = 1.000). Based on the pairwise comparisons using Bonferroni adjustment for the virtual agent [6], it was easier for participants to distinguish the meaning of the emotions there, because all the emotions displayed differed significantly; for example, the surprise samples, with or without gloss, differed significantly from anger and disgust (p < .001), and the fear sample also differed significantly from the other four emotion samples, while the physical eyelids showed a significant difference only between sample 7 (disgust) and sample 3 (surprise) (p = .023).
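For readers who want to reproduce this analysis pipeline, the sketch below shows an equivalent workflow with the pingouin library: Mauchly's sphericity test, a two-way repeated-measures ANOVA with Greenhouse-Geisser correction and partial eta squared, and Bonferroni-adjusted pairwise comparisons. It assumes a recent pingouin version and uses synthetic stand-in data; the real analysis was run on the collected survey responses.

```python
# Minimal sketch (ours) of the statistical pipeline in Section V.
import numpy as np
import pandas as pd
import pingouin as pg

# Synthetic long-format data standing in for the real survey responses:
# 40 participants x 7 samples x 7 emotion ratings, 1-5 Likert scale.
rng = np.random.default_rng(0)
levels = ["neutral", "happy", "surprise", "sad", "fear", "disgust", "anger"]
df = pd.DataFrame([{"participant": p, "sample": s, "emotion": e,
                    "rating": int(rng.integers(1, 6))}
                   for p in range(40) for s in levels for e in levels])

# Two-way repeated-measures ANOVA; with correction=True pingouin reports
# Greenhouse-Geisser epsilon and corrected p-values (cf. Table IV).
aov = pg.rm_anova(data=df, dv="rating", within=["sample", "emotion"],
                  subject="participant", correction=True, effsize="np2")
print(aov)

# Mauchly's test of sphericity on per-sample means (cf. Table III).
per_sample = df.groupby(["participant", "sample"], as_index=False)["rating"].mean()
print(pg.sphericity(per_sample, dv="rating", subject="participant", within="sample"))

# Bonferroni-adjusted pairwise comparisons between samples.
post = pg.pairwise_tests(data=per_sample, dv="rating", within="sample",
                         subject="participant", padjust="bonf")
print(post)
```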

TABLE III. RESULTS OF MAUCHLY'S TEST OF SPHERICITY.

Source           df    p-value
Sample           20    0.000042
Emotion          20    1.095 × 10⁻¹²
Sample*Emotion   665   9.14 × 10⁻³⁷

TABLE IV. RESULTS OF THE TWO-WAY REPEATED-MEASURES ANOVA WITH GREENHOUSE-GEISSER CORRECTION.

Source           df       F-value   p-value
Sample           4.094    2.940     .021
Emotion          2.897    1.218     .306
Sample*Emotion   10.812   10.557    5.63 × 10⁻¹⁷

Fig. 3. Boxplots of the emotional likelihood of each sample, one panel per sample (S1)-(S7), with the emotions neutral, happy, surprise, sad, fear, angry, and disgust on the horizontal axis.

The results obtained differ from the evaluation using Probo [12]; the results obtained by Probo can be seen in table V. Those results are significant, with a recognition level for each emotion of 83%-84% on the physical robot. The facial features that play an important role in expressing emotions on Probo are the eyebrows, eyelids, and mouth; other features that increase the level of recognition are the ears and proboscis.
TABLE V. EMOTION RECOGNITION RESULTS BY PROBO [12].

% match    happy   sad   disgust   anger   surprise   fear
Happy      100     0     0         0       0          0
Sad        0       87    0         0       0          9
Disgust    0       0     87        4       4          4
Anger      0       9     4         96      0          0
Surprise   0       0     9         0       70         22
Fear       0       4     0         0       26         65
Overall %: 84    Fleiss' Kappa: 0.68
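The Fleiss' kappa value at the bottom of Table V summarizes agreement among raters beyond chance. As an illustration of how such a value is computed, the sketch below applies statsmodels' implementation to a toy subjects × categories count matrix; the matrix is invented for the example and is not Probo's raw data.

```python
# Illustrative computation of Fleiss' kappa, the agreement statistic
# reported at the bottom of Table V. Rows = stimuli, columns = emotion
# categories, entries = how many of the 10 hypothetical raters chose that
# category. This toy matrix is NOT the raw Probo data.
import numpy as np
from statsmodels.stats.inter_rater import fleiss_kappa

counts = np.array([
    [10, 0, 0, 0, 0, 0],   # a stimulus all raters labeled "happy"
    [0, 9, 0, 0, 0, 1],    # mostly "sad", one "fear" vote
    [0, 0, 9, 1, 0, 0],    # mostly "disgust"
    [0, 1, 0, 9, 0, 0],    # mostly "anger"
    [0, 0, 1, 0, 7, 2],    # "surprise" confused with "fear"
    [0, 0, 0, 0, 3, 7],    # "fear" confused with "surprise"
])
print(fleiss_kappa(counts, method="fleiss"))
```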
VI. CONCLUSION

Based on the results of this research, it can be concluded that a 3D two-eye animatronic model with rigid eyelids can express emotions properly; some emotions were recognized well by the participants, including surprise, disgust, and neutral. The results indicate that several types of emotion can appear in a single eyelid position, in accordance with Ekman's description. Participants found it easier to distinguish the meaning of emotions on the virtual agent [6], based on the pairwise comparisons using Bonferroni adjustment, because all the emotions it displayed differed significantly from one another. This difference is probably due to the eyelid movements of the one-eye virtual agent being easier to read and more often seen, for example in anime films, than those of a physical embodiment. Based on these results, participants perceived eyelids with physical embodiment and with virtual agents [6] in almost the same way, so the two sets of results are mutually reinforcing. Compared with the results of the research using Probo, emotions can be expressed better when facial features such as eyebrows, eyelids, and mouth are present. The conclusions of this research are intended as a step toward identifying the function that each facial feature contributes to expressing emotions.

VII. ACKNOWLEDGMENT

This research was carried out with the help of the Institut Teknologi Bandung.
VIII. REFERENCES

[1] C. Bartneck, T. Belpaeme, F. Eyssel, T. Kanda, M. Keijsers, and S. Šabanović, "Human-Robot Interaction: An Introduction," Cambridge University Press, March 2020.
[2] M. Argyle, "Non-verbal communication in human social interaction," 1972.
[3] D. McNeill, "Hand and Mind: What Gestures Reveal about Thought," University of Chicago Press, 1992.
[4] N. J. Emery, "The eyes have it: the neuroethology, function and evolution of social gaze," Neuroscience & Biobehavioral Reviews, vol. 24, no. 6, pp. 581-604, 2000.
[5] H. Admoni and B. Scassellati, "Social eye gaze in human-robot interaction: a review," Journal of Human-Robot Interaction, vol. 6, no. 1, pp. 25-63, 2017.
[6] E. Onchi, D. Saakes, and S. H. Lee, "Emotional meaning of eyelid positions on a one-eyed 2D avatar," in International Symposium on Affective Science and Engineering (ISASE) 2020, Japan Society of Kansei Engineering, pp. 1-4, 2020.
[7] W. Cogley, "Nilheim Mechatronics: Simplified Eye Mechanism." [Online]. Available: http://www.nilheim.co.uk.
[8] T. Fong, I. Nourbakhsh, and K. Dautenhahn, "A survey of socially interactive robots," Robotics and Autonomous Systems, vol. 42, pp. 143-166, March 2003.
[9] D. H. Lee and A. K. Anderson, "Reading what the mind thinks from how the eye sees," Psychological Science, vol. 28, pp. 494-503, April 2017.
[10] P. Desmet, K. Overbeeke, and S. Tax, "Designing products with added emotional value: Development and application of an approach for research through design," The Design Journal, vol. 4, pp. 32-47, March 2001.
[11] S. Song and S. Yamada, "Expressing emotions through color, sound, and vibration with an appearance-constrained social robot," in 2017 12th ACM/IEEE International Conference on Human-Robot Interaction (HRI 2017), pp. 2-11, March 2017.
[12] J. Saldien, K. Goris, B. Vanderborght, J. Vanderfaeillie, and D. Lefeber, "Expressing emotions with the social robot Probo," International Journal of Social Robotics, vol. 2, pp. 377-389, December 2010.
[13] H. Kobayashi and S. Kohshima, "Unique morphology of the human eye and its adaptive meaning: comparative studies on external morphology of the primate eye," Journal of Human Evolution, vol. 40, pp. 419-435, May 2001.
[14] P. Ekman and W. V. Friesen, "Unmasking the Face: A Guide to Recognizing Emotions from Facial Expressions," Malor Books, 2003.
[15] G. Faigin, "The Artist's Complete Guide to Facial Expression," New York, NY: Watson-Guptill, 1990.
