Facial Expressions Thesis

Struggling with writing your thesis on facial expressions? You're not alone.

Crafting a comprehensive
and insightful thesis on such a nuanced topic can be incredibly challenging. From conducting
extensive research to analyzing data and formulating coherent arguments, the process demands time,
dedication, and expertise.

Exploring the intricacies of facial expressions requires a deep understanding of psychology, sociology, neuroscience, and communication studies, among other disciplines. Moreover, interpreting
facial expressions involves navigating a vast array of cultural, social, and individual factors, adding
layers of complexity to the research process.

Whether you're examining the role of facial expressions in interpersonal communication, studying
their significance in emotional expression and recognition, or investigating their implications in
various professional fields, the task can feel overwhelming.

Fortunately, there's a solution. At HelpWriting.net, we specialize in providing top-notch academic assistance to students grappling with complex thesis topics like facial expressions. Our
team of experienced writers and researchers is adept at delving into intricate subjects, conducting
thorough investigations, and delivering well-crafted, original theses that meet the highest academic
standards.

By entrusting your thesis to HelpWriting.net, you can alleviate the stress and uncertainty
associated with the writing process. Our experts will work closely with you to understand your
research objectives, refine your thesis statement, and develop a coherent and compelling argument.
With our assistance, you can confidently present a thoroughly researched and expertly written thesis
that showcases your knowledge and insights.

Don't let the challenges of writing a thesis on facial expressions hold you back. Reach out to HelpWriting.net today and take the first step toward academic success.
Gathering more emotional words and generating more facial expressions can give more accurate results and lead us to different conclusions. There were also many cases where people gave an answer different from the expected one, but the semantics were very similar, so there was no clear separation. c. How to code facial expressions. Additionally, the perception of emotions varies
widely based on experience, culture, age, and many other factors, which makes evaluation difficult (Maria et al., 2019). Lichtenstein et al. (2008) showed that the dimensional approach is more accurate
for self-assessments. Since the ground-truth labeling in both datasets is based on arousal and valence
levels, instead of classifying micro-expressions based on basic emotions, we classified micro-
expressions based on arousal and valence levels. Later, in chapter three, we explain how we gathered
and validated the data for our database. However, later studies showed improvement by combining
EEG and facial expressions. In these cases, the expected answer had been selected by more participants than the other possible words. In these equations, TP is True Positive, which means the number of correctly predicted positive-class instances.
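For reference, here is a minimal sketch of how TP combines with false positives (FP) and false negatives (FN) into the standard precision, recall, and F-score; the counts and helper function below are illustrative, not values or code from the thesis.

```python
# Illustrative only: standard precision/recall/F-score from TP, FP, FN counts.
def precision_recall_f1(tp: int, fp: int, fn: int) -> tuple[float, float, float]:
    precision = tp / (tp + fp)   # fraction of positive predictions that are correct
    recall = tp / (tp + fn)      # fraction of actual positives that are found
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

print(precision_recall_f1(tp=80, fp=20, fn=10))  # (0.8, 0.888..., 0.842...)
```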
Then, the EEG headset was placed on the participant's head, a Shimmer3 wristband was worn on the participant's non-dominant hand, and the Shimmer's PPG and GSR sensors were attached to their three middle fingers. The psychological point of view
should not be overstated; however, some techniques and ideas developed in this area can be useful in more practical fields of interest. Facial expressions play an important role
in our relations. After producing the data, it is very important to validate it. He claimed that people can use the 2D space to analyse the emotion tendency of each word and its expression at a glance. To reach our goal, we designed a user test, which we called the Labeling User Test, because the main idea is that users are given images of facial expressions and a list of emotional words and have to choose the appropriate word for the given image. These units aim at allowing
interpretation of the FAPs on any facial model in a consistent way, producing reasonable results in
terms of expression and speech pronunciation. However, for a better visualization of the process and
better interaction between the user and the system we used the FEM for producing the tested facial
expressions. To extract temporal features from physiological signals and EEG signals, we employed
LSTM.
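As a rough sketch of this step, the snippet below runs a Keras LSTM over windowed multichannel signals; the window length, channel count, layer sizes, and binary labels are assumptions made for the example, not the configuration actually used.

```python
# Sketch of an LSTM over windowed physiological/EEG signals (all sizes assumed).
import numpy as np
from tensorflow.keras import layers, models

n_windows, timesteps, channels = 256, 128, 32          # placeholder dimensions
x = np.random.randn(n_windows, timesteps, channels).astype("float32")
y = np.random.randint(0, 2, size=(n_windows,))          # e.g. low/high arousal labels

model = models.Sequential([
    layers.Input(shape=(timesteps, channels)),
    layers.LSTM(64),                                     # temporal feature extractor
    layers.Dense(32, activation="relu"),
    layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(x, y, epochs=2, batch_size=32, verbose=0)
```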
These features have commonly been used in previous studies (Wagh and Vasanth, 2019). Kreibig (2010)
showed that although EDA signals show changes in emotional arousal, more research is needed to
identify the type of emotion using EDA signals. For the method using a questionnaire, the error in measurement accuracy cannot be ignored, as the answers vary with the subjects' own interpretation. They extracted the power spectral density of the frequency bands and the lateralization for 14 left-right pairs, extracting 230 features from the EEG data.
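A minimal sketch of such band-power features for a single EEG channel, using Welch's power spectral density; the sampling rate and band edges are illustrative assumptions, not the cited study's exact setup.

```python
# Band-power features from one EEG channel via Welch's PSD (parameters assumed).
import numpy as np
from scipy.signal import welch

fs = 128                                  # assumed sampling rate in Hz
eeg = np.random.randn(10 * fs)            # 10 s of a single EEG channel
freqs, psd = welch(eeg, fs=fs, nperseg=2 * fs)
df = freqs[1] - freqs[0]

bands = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30), "gamma": (30, 45)}
band_power = {name: psd[(freqs >= lo) & (freqs < hi)].sum() * df
              for name, (lo, hi) in bands.items()}
print(band_power)
# Lateralization features can then be formed as, e.g., the difference in a
# band's power between a left-right electrode pair (illustrative formulation).
```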
The shape of the wireframe was built on the basis of an existing person's face. They will also learn about different facial expressions that are associated with different
emotions and feelings. For labeling the facial expressions we used emotional words. Most of these
studies focused on traditional facial expression methods and used all recorded video frames to
recognize emotions. The intention of having two choices was that some facial expressions may have an ambiguous meaning according to some users. Besides that, JESS also has the
property of using Java, which in the long run might prove to be a big advantage over CLIPS. The other dimension is arousal or activation; see Figure 6. We describe our models for measuring
arousal and valence levels from a combination of facial micro-expressions, Electroencephalography
(EEG) signals, galvanic skin responses (GSR), and Photoplethysmography (PPG) signals. Thus, 480
trials were performed, divided into three blocks of 160 trials with resting periods between the blocks.
Participants wore the Shimmer sensor as a wristband, with PPG and GSR sensors attached to their
three middle fingers. Also, the F-score of the ROI-based LSTM is relatively close to, or sometimes better than, that of the LSTM applied to the whole data. A Bayesian network (or a belief network) is a
probabilistic model that represents a set of variables and their probabilistic independences. When we
analyzed the results of the unimodal and multimodal classifications, the results of classifying
emotion using only facial expressions showed higher accuracy than classification using multiple data.
Emotions produce changes in parts of our brain that mobilize us to deal with what has set off the
emotion, as well as changes in our autonomic nervous system, which regulates our heart rate,
breathing, sweating, and many other bodily changes, preparing us for different actions. To approach
the naturalness of face-to-face interaction, machines should be able to emulate the way humans
communicate with each other. In the Aristotelian theory of the emotions, Aristotle takes the opportunity to develop his most sustained thoughts on emotions: not only does he define, explicate, compare, and contrast various emotions, but he also characterizes emotions themselves. So, if the formula when
using the basic intensities is the following: ( ). Also the rules we implemented depend on values that
have to be further researched in order to generate more accurate results. Emotional strength can be measured as the distance from the origin to a given point in the activation-evaluation space.
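A small worked example of this distance, assuming the point is given by its evaluation (valence) and activation coordinates:

```python
# Emotional strength as Euclidean distance from the origin of the
# activation-evaluation space; the sample coordinates are illustrative.
import math

def emotional_strength(evaluation: float, activation: float) -> float:
    return math.hypot(evaluation, activation)

print(emotional_strength(0.6, 0.8))           # 1.0
print(math.degrees(math.atan2(0.8, 0.6)))     # ~53.1 degrees: pleasant, highly activated
```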
Besides these basic properties, CLIPS has pattern-matching abilities (the Rete algorithm), extended math
functions, conditional tests, object-oriented programming (COOL: Clips Object-Oriented Language)
with abstraction, inheritance, encapsulation, polymorphism, and dynamic binding. Stress is a response to the past and present, while anxiety differs in that it involves worry about what will happen in the future. After producing the data, it is very important to validate it. There are,
however, some unwanted movements that can affect the results of detecting micro-expressions and
identifying regions of interest. Also, the result of fusion strategies is considerably better than single
modalities. EEG devices differ based on the type and number of electrodes, the position of
electrodes (flexible or fixed position), connection type (wireless or wired), type of amplifier and
filtering steps, the setup, and wearability (Teplan et al., 2002). EEG devices with higher data
quality, like g.tec, Biosemi, or EGI, are usually expensive and bulky and require a time-consuming setup. Several other emotions and many combinations of emotions have been studied but
remain unconfirmed as universally distinguishable. In criticizing the results we also have to take into account the nature of the test, which was more exacting than the usual type of facial recognition user test found in the literature. Hyperparameters applied to the physio-emotion classification model and the facial-emotion classification model. The age distribution of the participants is not normal, due to the fact that most of them are students in TUDelft. For each emotional word we produced the corresponding facial expression
according to images found in the literature and our knowledge and understanding of the expected
appearance for each emotion. What I plan to do afterwards is to send this data to the Bioloid robot.
This shows that exploiting both temporal and spatial features could help detect emotions. If we have to calculate the intensity of an emotion when the intensity level of the individual activated AUs is arbitrary, we have to compute a factor that estimates what percentage of the basic intensity is attained.
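One plausible reading of such a factor, offered only as an assumption since the thesis's exact formula is not reproduced here, is the average fraction of each activated AU's basic intensity that the observed intensity reaches:

```python
# Hypothetical sketch: fraction of the "basic" AU intensities (0-100 scale,
# as in Table 1) attained by the observed intensities. The averaging rule is
# an assumption, not the thesis's published formula.
def intensity_factor(observed: dict, basic: dict) -> float:
    shared = [au for au in basic if au in observed]
    if not shared:
        return 0.0
    return sum(min(observed[au] / basic[au], 1.0) for au in shared) / len(shared)

# e.g. AU4 and AU7 observed at 60 and 30 against basic intensities of 80 and 50
print(intensity_factor({"AU4": 60, "AU7": 30}, {"AU4": 80, "AU7": 50}))  # 0.675
```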
The psychological point of view should not be overstated; however, some techniques and ideas developed in this area can be useful in more practical fields of interest. Facial expressions play an important role in our relations. The accuracy of the model was 81.54%
when physiological signals were used, 99.9% when facial expressions were used, and 86.2% when
both were used. They may occur either as the quick succession of different emotions, the
superposition of emotions, the masking of one emotion by another one, the suppression of an
emotion or the overacting of an emotion. We added the six basic emotions because they are considered to be the keystone of emotional expressions' research.
The 12 combinations are the basic representations of the subset of emotions from the 60 emotions
found in the emotional database. Results of classifying emotion using physiological signal and facial
expression. Finally, we calculated the per-pixel average value for each frame. Figure 3: Masks in ancient Greek theater showing intensely over-exaggerated facial features and expressions. We also see sophisticated theories in the works of philosophers such as Rene Descartes, Baruch Spinoza, and
David Hume. In the case of pure emotions the results are correct, but in most cases facial expressions show blended emotions. Support Vector Machines (SVMs) are a set of related methods used for
classification and regression. The age distribution of the participants is not normal, due to the fact
that most of them are students in TUDelft. Then, we used linear mixed-effect models to analyze the relation between the EDA phasic activity and the minimum comfort distance, according to the model.
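A minimal sketch of such a model with statsmodels, treating the participant as a random effect; the column names, effect structure, and synthetic data are assumptions for illustration only.

```python
# Linear mixed-effects model: comfort distance ~ EDA phasic activity,
# with a random intercept per participant (names and data are placeholders).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "participant": np.repeat(np.arange(20), 10),
    "eda_phasic": rng.normal(size=200),
})
df["comfort_distance"] = 0.5 * df["eda_phasic"] + rng.normal(size=200)

model = smf.mixedlm("comfort_distance ~ eda_phasic", df, groups=df["participant"])
print(model.fit().summary())
```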
This method is focused on the similarities of facial expressions. Besides that, JESS also has the property of using Java, which in the long run might prove to be a big advantage over CLIPS. Our aim is to attach to each face image an emotional word that describes that image. Furthermore, CLIPS has no explicit agenda mechanism
and only forward chaining is available as basic control flow. The system decomposes the face into 46 so-called Action Units (AUs), each of which is anatomically related to the contraction of either a specific facial muscle or a set of facial muscles. In figure 8 we see how Desmet divided the 2D space into 8 octants and how he plotted the 41 non-ambiguous emotional words based on those 8 octants.
Figure 1: Expressing emotions by facial expressions. Real-life emotions are often complex and involve several simultaneous emotions. In micro-expression datasets, participants were asked to inhibit their expressions and keep a poker face while watching the videos to prevent macro-expression contamination (Goh et al., 2020). This condition leads to neutral faces in almost all frames, and only
genuine emotions will leak as micro-expressions. In this paper, we used a simple traditional micro-expression spotting strategy to detect the apex frame.
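As a hypothetical illustration of a simple spotting rule (the thesis's actual strategy is not reproduced here), one could pick the frame whose average per-pixel difference from the onset frame is largest:

```python
# Hypothetical apex-spotting rule: frame with the largest mean absolute
# per-pixel difference from the onset (first) frame of the clip.
import numpy as np

def spot_apex(frames: np.ndarray) -> int:
    """frames: array of shape (n_frames, height, width), grayscale."""
    diffs = np.abs(frames - frames[0]).mean(axis=(1, 2))  # mean |difference| per frame
    return int(diffs.argmax())

clip = np.random.rand(30, 64, 64)      # placeholder 30-frame face clip
print(spot_apex(clip))
```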
In the first place, the developed high-level analysis can benefit from already existing low-level analysis techniques for FDP and FAP extraction, as well as from any advances that will be made in this area in future years. For creating a database of
facial expressions we first have to produce the facial expressions, as we mentioned above. However,
many micro-expressions can be observed in response to these passive tasks. The contribution of this database to the problem stated above is that it can be used by systems in order to recognize emotional facial expressions given one of the database entries, i.e., an action units' combination. Then, the EEG headset was placed on the
participant's head, a Shimmer3 wristband was worn on the participant's non-dominant hand, and the
Shimmer's PPG and GSR sensors were attached to their three middle fingers. In the output of the expert system shown in figure 30, we can see the impact of the activation of these AUs on the 12 emotions. Figure 30: Output of the irritated.clp execution. As we expected, the dominant emotion is
irritated. Classified Emotion: the output of the analyzer, the classified emotion displayed in the system's console, which in our case is the CLIPS console. An interesting implication is that strong
emotions are more sharply distinct from each other than weaker emotions with the same emotional
orientation. Chapter nine includes the evaluation of the expert system and the final results. We
will give our initial hypotheses, the representation of the emotions and the model we used for
calculating the impact of the action units in each emotion. However, these basic expressions represent
only a small set of human facial expressions.
Thus, 480 trials were performed, divided into three blocks of 160 trials with resting periods between the blocks. With this project I hope to both learn some new things and have a fun, interactive project at
the end. The first observation we made was that the y-axis in both plots divides the data almost in
the same way. We would also like to use more complex fusion strategies to exploit multimodal
sensors effectively. To extract temporal features from physiological signals and EEG signals, we
employed LSTM. In table 2 we can see a part of the table containing the basic representation of
emotions in terms of AU intensities. However, for five facial expressions it was confusing to find the corresponding label. This step is implemented in the rule presented below. The first choice
expresses the most suitable label from the list, and the second choice gave participants the option to select another label that they also found relevant to the expression shown. We used Python's scikit-learn library, Keras, and TensorFlow to develop a deep learning model, and 70% of the total data were used as a training set and the remaining 30% as a testing set.
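A minimal sketch of that 70/30 split with scikit-learn; the feature matrix, stratification, and random seed are illustrative assumptions.

```python
# 70/30 train/test split as described above (data and seed are placeholders).
import numpy as np
from sklearn.model_selection import train_test_split

X = np.random.randn(1000, 64)           # placeholder feature matrix
y = np.random.randint(0, 2, 1000)       # placeholder labels

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.30, stratify=y, random_state=42
)
print(X_train.shape, X_test.shape)      # (700, 64) (300, 64)
```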
Table 2 shows the list of movies, their references, and their details. Compared to the previous work reported in Table 1,
the accuracy of the proposed methods is considerably high while considering the subject-
independent approach, which is the most challenging evaluation condition. They achieved the
average accuracies of 97.17% and 97.34% for arousal and valence levels, subject-dependently. As we showed
in table 1, in the example of our dataset, the intensity of an AU can range from 0 to 100. Informed Consent Statement: Informed consent was obtained from all subjects involved in the study. We also
discuss future directions for using facial micro-expressions and physiological signals in emotion
recognition. Also the rules we implemented depend on values that have to be further researched in
order to generate more accurate results. They used a 3D convolutional neural network (CNN) to
extract facial features and classify them, and they also used a 1D CNN to extract EEG features and
classify them. Their support and their contribution during all this time were really meaningful. Then,
the participants started with the reachability judgment task and then performed the interpersonal
comfort task. More recent theories of emotions tend to be informed by advances in empirical
research. Their faces may have been very expressive or very guarded in their facial expressions.
Many datasets have been created, and spotting and recognition methods have developed
significantly. In figure 11 you can see the plot of emotional words in 2D space. The other dimension
is arousal or activation; see Figure 6. A completely different method is the Gabor wavelet, or Gabor filter.
Several frames around the micro-expression were extracted and fed to a 3D convolutional network.
For this, CLIPS uses a LISP-like procedural language. The drawbacks of this procedure were that the photos found in the literature were
sketches and not photos of real people, the movements on the face were not based on the action
units, and that made the simulation in FEM more complicated.
Near-far perceptual space and spatial demonstratives. We applied physiological signals and facial
expressions to the physio-facial-emotion classification model. So, in this case we see that the results of the expert system do not agree with the users' scoring (Happy.clp). Most of the facial
expressions were easily recognized, and for the ones that led to confusing results we still gave some arguments in section 3.4.2. However, what is also clear from the results is that there is no one-to-one correspondence between emotional words and facial expressions. The mapping f indicates a non-metric, monotone transformation of the observed input data (distances).
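For illustration, a minimal non-metric MDS embedding with scikit-learn, where only the rank order of the observed dissimilarities is preserved by such a monotone transformation; the toy dissimilarity matrix is an assumption.

```python
# Non-metric MDS: embed items so distances respect the rank order of the
# observed dissimilarities (toy 3x3 matrix used as a placeholder).
import numpy as np
from sklearn.manifold import MDS

D = np.array([[0.0, 1.0, 3.0],
              [1.0, 0.0, 2.0],
              [3.0, 2.0, 0.0]])   # observed dissimilarities between 3 items

mds = MDS(n_components=2, metric=False, dissimilarity="precomputed", random_state=0)
coords = mds.fit_transform(D)
print(coords)                     # 2D positions respecting the rank order of D
```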
For example, if the robot detects a happy face, it may make a clapping motion, while if it sees a sad face, it will react in a different manner. CLIPS is designed to facilitate the development of software to model human knowledge or
expertise. FACS is the most widely used and versatile method for measuring and describing facial
behaviors. They then trained a linear kernel SVM using these features to calculate expression
percentage features and used this feature vector for emotion classification. Figure 17: Gender of the participants. Figure 18: Age of the participants. Later, in chapter three, we explain how we
gathered and validated the data for our database. Lastly, I want to thank all the people in the lab for making the time of studying more interesting and pleasant. In chapter 1 we also introduced a number of research questions that were established to meet this goal: a. Valence describes the pleasantness
of the stimuli, with positive (or pleasant) on one end, and negative (or unpleasant) on the other. The
five generally accepted features of object-oriented programming are supported: classes, message-
handlers, abstraction, encapsulation, inheritance, and polymorphism. Our aim is to attach to each face image an emotional word that describes that image. The average and standard deviation of the GSR signal and the first- and second-order discrete differences of the GSR signal made up the GSR feature vector.
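A minimal sketch of that GSR feature vector; which statistics are taken over the difference series is an assumption here.

```python
# GSR features: mean and std of the signal plus statistics of its first- and
# second-order discrete differences (the chosen statistics are assumptions).
import numpy as np

def gsr_features(gsr: np.ndarray) -> np.ndarray:
    d1 = np.diff(gsr, n=1)
    d2 = np.diff(gsr, n=2)
    return np.array([
        gsr.mean(), gsr.std(),
        d1.mean(), d1.std(),
        d2.mean(), d2.std(),
    ])

# Example: features for one 10 s window sampled at 128 Hz (illustrative rate).
window = np.random.randn(10 * 128)
print(gsr_features(window))
```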
However, the emotional facial expressions used around the world are far more numerous, and some of them are combinations of more than one. We will give the steps for generating our algorithm and we will explain the concept of the generated system. In this research we will focus on the second
stage, the classification. If we take into account that in the second plot the x-axis represents pleasantness, we can say that the two clusters represent the negative and the positive data. We used cost-sensitive learning (Ling and Sheng, 2008) to handle the imbalanced data.
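A minimal sketch of cost-sensitive learning via class weights in scikit-learn; the classifier and weighting scheme are illustrative assumptions rather than the exact setup used.

```python
# Cost-sensitive learning via class weights: errors on the rare class cost
# more during training (classifier and data are placeholders).
import numpy as np
from sklearn.svm import SVC

X = np.random.randn(200, 10)
y = np.r_[np.zeros(180), np.ones(20)]        # heavily imbalanced labels

# "balanced" re-weights classes inversely to their frequency.
clf = SVC(kernel="linear", class_weight="balanced")
clf.fit(X, y)
```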
Confusion matrix of classifying emotion using physiological signals and facial expressions. I would also like to thank Zhenke Yang for answering my questions about expert systems and spending time to give me advice and ideas. The faces of our parents and other people who took care of us as children were the first
ones we saw. For the evaluation procedure we used as input 12 files of AU combinations. Also,
combining facial micro-expressions with physiological signals will improve the recognition result. The
micro-expression window is used to approximately determine the time of the arising emotions. In chapter 6 we explained that the initial implementation of our system was designed to handle all the possible known combinations of AUs that could be found in the emotional database. In CLIPS, there's only a single level of rule sets, which makes things less clear.
