
Republic of Tunisia (LR-SITI-ENIT)

Ministry of Higher Education, Scientific Research and Information and Communication Technologies

Tunis Manar University (ST-EN07/00)

National School of Engineering of Tunis — Master Project
Serial N°: 2015 / DIMA-033

Master Project
Report
presented at

National School of Engineering of Tunis


(LR-SITI-ENIT)

in order to obtain the

Master's degree in Systems, Science and Data

by

Awatef MESSAOUDI

Defended on 18/12/2020 in front of the committee composed of

Mr Foulen Fouleni President


Ms Foulena Foulenia Supervisor
Mr Foulen Fouleni Reviewer
Dedication

Put your dedication lines here


And try to be expressive ;)
Lorem ipsum dolor sit amet, consectetur adipisicing elit, sed do eiusmod
tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam,
quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo
consequat. Duis aute irure dolor in reprehenderit in voluptate velit esse
cillum dolore eu fugiat nulla pariatur. Excepteur sint occaecat cupidatat non
proident, sunt in culpa qui officia deserunt mollit anim id est laborum

To all of you,
I dedicate this work.

Awatef MESSAOUDI
Thanks

And put your thanks here.

Lorem ipsum dolor sit amet, consectetur adipisicing elit, sed do eiusmod tempor
incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud
exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat. Duis aute irure
dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur.
Excepteur sint occaecat cupidatat non proident, sunt in culpa qui officia deserunt mollit
anim id est laborum. Lorem ipsum dolor sit amet, consectetur adipisicing elit, sed do
eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam,
quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat. Duis
aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla
pariatur. Excepteur sint occaecat cupidatat non proident, sunt in culpa qui officia
deserunt mollit anim id est laborum.
CONTENTS Awatef MESSAOUDI

Contents

Dedication i

Thanks ii

Contents iv

List of Figures v

Acronyms vi

Introduction 1

1 Facial expression recognition : state of the art 2


1.1 Introduction: . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
1.2 Facial expressions and emotions : . . . . . . . . . . . . . . . . . . . . . . . 2
1.2.1 definitions: . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
1.2.2 The universal facial expressions: . . . . . . . . . . . . . . . . . . . . 3
1.2.3 Coding systems: . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
1.2.4 Areas of application of FER: . . . . . . . . . . . . . . . . . . . . . 6
1.3 Architecture of Facial expression recognition: . . . . . . . . . . . . . . . . . 6
1.3.1 Face detection: . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
1.3.2 Feature extraction: . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
1.3.3 Emotion recognition: . . . . . . . . . . . . . . . . . . . . . . . . . . 8
1.3.4 Facial expression databases: . . . . . . . . . . . . . . . . . . . . . . 9
1.3.5 Machine learning: . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
1.3.6 Deep learning: . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11


1.4 Conclusion: . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11

2 Chapter Two 13
2.1 Section One . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
2.1.1 Sub section One . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
2.1.2 Sub section Two . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14

Conclusion 15

Appendix 16

Webography 17

Bibliography 17

LIST OF FIGURES Awatef MESSAOUDI

List of Figures

1 The six universal emotions . . . . . . . . . . . . . . . . . . . . . . . . . . . 3


2 This is a test image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10

3 This is a test image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13


Acronyms

ENIT National School of Engineering of Tunis

INTRODUCTION Awatef MESSAOUDI

Introduction

Welcome to National School of Engineering of Tunis (ENIT).


Again, welcome to ENIT.
Your introduction goes here.

Lorem ipsum dolor sit amet, consectetur adipisicing elit, sed do eiusmod tempor
incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud
exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat. Duis aute irure
dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur.
Excepteur sint occaecat cupidatat non proident, sunt in culpa qui officia deserunt mollit
anim id est laborum. Lorem ipsum dolor sit amet, consectetur adipisicing elit, sed do
eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam,
quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat. Duis
aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla
pariatur. Excepteur sint occaecat cupidatat non proident, sunt in culpa qui officia
deserunt mollit anim id est laborum.

CHAPTER 1. FACIAL EXPRESSION RECOGNITION: STATE OF THE ART — Awatef MESSAOUDI

Chapter 1
Facial expression recognition: state of the art

1.1 Introduction:

Due to the important role of facial expression in human interaction, the ability to
perform facial expression recognition automatically via computer vision enables a range
of applications such as human-computer interaction and data analytics. In this chapter,
we present some notions of emotion and the different coding theories, as well as the
architecture of facial expression recognition systems. We describe approaches that help
us recognize facial expressions, and we end the chapter with the different machine
learning techniques.

1.2 Facial expressions and emotions :

1.2.1 Definitions:

1.2.1.1 Emotions:

Emotion is expressed through many channels, such as body posture, voice and facial
expressions. It is a subjective and private mental and physiological state that involves
many behaviours, actions, thoughts and feelings.


SCHERER proposes the following definition: « Emotion is a set of episodic variations
in several components of the organism, in response to events assessed as important by
the organism. »

1.2.1.2 Facial expressions:

A facial expression is a meaningful configuration of the face. The meaning can be the
expression of an emotion, a semantic index, or an intonation in sign language. The
interpretation of a set of muscle movements as an expression depends on the context of
the application. For example, in a human-machine interaction application where we want
an indication of the emotional state of an individual, we will try to classify the
measurements in terms of emotions.

1.2.2 The universal facial expressions:

Charles DARWIN wrote in his 1872 book « The Expression of the Emotions in Man and
Animals » that facial expressions of emotion are universal, not learned differently in
each culture. Several studies since have attempted to classify human emotions and
demonstrate how the face can give away one's emotional state [2]. In the 1960s, Ekman and
Friesen defined six basic emotions based on a cross-cultural study, which indicated that
humans perceive certain basic emotions in the same way regardless of culture. These
prototypical facial expressions are anger, disgust, fear, happiness, sadness and
surprise [2].

Figure 1. The six universal emotions


1.2.3 Coding systems:

Facial expressions are a consequence of the activity of the facial muscles, also called
mimetic muscles or muscles of facial expression. The study of facial expressions cannot
be done without studying the anatomy of the face and the underlying structure of the
muscles. That is why some researchers focused on coding systems for facial expressions.
Several systems have been proposed, such as Ekman's: in 1978, Ekman developed a tool for
coding facial expressions that is widely used today. We present some of these systems
below.

1.2.3.1 FACS:

The Facial Action Coding System, developed by Ekman and Friesen, is a standard way of
describing facial expressions in both psychology and computer animation. FACS is based
on 44 action units (AUs) that represent facial movements that cannot be decomposed into
smaller ones. FACS is very successful, but it suffers from some drawbacks:

• Complexity: it takes about 100 hours of training to master the main concepts.

• Difficulty of handling by a machine: FACS was created for psychologists; some
measurements remain vague and difficult to assess by a machine.

• Lack of precision: the transition between two states of a muscle is represented in a
linear way, which is an approximation of reality.



1.2.3.2 MPEG4:

The MPEG-4 video coding standard includes a model of the human face developed by the
Face and Body Ad Hoc Group. This is a 3D model built on a set of facial attributes
called Facial Feature Points (FFPs). Measurements are used to describe muscle movements
(Facial Animation Parameters, the equivalents of Ekman's Action Units).

1.2.3.3 Candide:

It is a face model containing 75 vertices and 100 triangles. It is composed of a generic
face model and a set of parameters (shape units). These parameters are used to adapt the
generic model to a particular individual; they represent the differences between
individuals and are 12 in number:

1. head height.

2. vertical position of the eyebrows.

3. vertical eye position.

4. eye width.

5. eye height.

6. eye separation distance.

7. depth of the cheeks.

8. depth of the nose.

9. vertical position of the nose.

10. degree of the curvature of the nose.

11. vertical position of the mouth.

12. width of the mouth.
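For illustration only, the 12 shape units listed above can be collected into a small parameter table. The identifier names below are paraphrased from the list; the real Candide model defines its own identifiers in its model files.

```python
# Hypothetical names for the 12 Candide shape units, paraphrased from the
# list above (the actual Candide distribution uses its own identifiers).
CANDIDE_SHAPE_UNITS = [
    "head_height",
    "eyebrow_vertical_position",
    "eye_vertical_position",
    "eye_width",
    "eye_height",
    "eye_separation_distance",
    "cheek_depth",
    "nose_depth",
    "nose_vertical_position",
    "nose_curvature_degree",
    "mouth_vertical_position",
    "mouth_width",
]

def make_parameter_vector(overrides=None):
    """Adapting the generic model to an individual amounts to choosing one
    deformation coefficient per shape unit (0.0 = generic face)."""
    params = {name: 0.0 for name in CANDIDE_SHAPE_UNITS}
    if overrides:
        params.update(overrides)
    return params

params = make_parameter_vector({"mouth_width": 0.3})
```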


1.2.4 Areas of application of FER:

Automatic facial expression recognition systems have many applications, including human
behavior understanding and the detection of mental disorders [3]. FER has become a
research field involving scientists specializing in different areas such as artificial
intelligence, computer vision, psychology, physiology, education, website customization,
etc.

1.3 Architecture of Facial expression recognition:

A system that performs automatic recognition of facial expressions consists of three
modules. The first detects and registers the face in the input image or image sequence;
it can detect the face in each image, or detect it in the first image only and then track
it through the rest of the video sequence. The second module extracts and represents the
facial changes caused by facial expressions. The last module determines a similarity
between the set of extracted characteristics and a set of reference characteristics.
Other filters or data preprocessing modules can be inserted between these main modules to
improve the results of detection, feature extraction or classification.

1.3.1 Face detection:

Face detection consists of determining the presence or absence of faces in a picture.
This is a preliminary task necessary for most face analysis techniques. The techniques
used come from the field of pattern recognition. There are several techniques for
detecting faces; we mention the most used:

• Automatic face processing: a method that specifies faces by distances and proportions
between particular points around the eyes, nose and corners of the mouth, but it is not
effective when the light is low.


• Eigenface: an effective characterization method in face processing, used for tasks
such as face detection and recognition. It is based on the representation of face
features from grayscale model images.

• LDA (linear discriminant analysis): based on predictive discriminant analysis. It is
about explaining and predicting the membership of an individual in a predefined class
based on characteristics measured using prediction variables.

• LBP (local binary patterns): this technique divides the face into square subregions of
equal size in which LBP features are calculated; the vectors obtained are concatenated
to form the final feature vector.

• Haar filter: this face detection method uses a multiscale Haar filter. The
characteristics of a face are described in an XML file.
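As a sketch of the LBP technique mentioned above, the following minimal NumPy implementation computes the basic 8-neighbour LBP code map and concatenates per-subregion histograms into a feature vector. Real systems typically use optimized library implementations (e.g. scikit-image's `local_binary_pattern`); this version is only illustrative.

```python
import numpy as np

def lbp_image(gray):
    """Basic 8-neighbour LBP: each interior pixel is encoded by comparing
    its 8 neighbours to the centre value (bit set if neighbour >= centre)."""
    g = gray.astype(np.int32)
    c = g[1:-1, 1:-1]
    # offsets of the 8 neighbours, clockwise from top-left
    neighbours = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
                  (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(c)
    for bit, (dy, dx) in enumerate(neighbours):
        n = g[1 + dy:g.shape[0] - 1 + dy, 1 + dx:g.shape[1] - 1 + dx]
        code |= (n >= c).astype(np.int32) << bit
    return code

def lbp_histogram_features(gray, grid=(2, 2)):
    """Divide the LBP map into equal subregions and concatenate the
    per-region 256-bin histograms into one feature vector."""
    codes = lbp_image(gray)
    feats = []
    for rows in np.array_split(codes, grid[0], axis=0):
        for cell in np.array_split(rows, grid[1], axis=1):
            hist, _ = np.histogram(cell, bins=256, range=(0, 256))
            feats.append(hist)
    return np.concatenate(feats)

face = np.random.randint(0, 256, (48, 48), dtype=np.uint8)
v = lbp_histogram_features(face)  # 2*2 regions * 256 bins = 1024 values
```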

1.3.2 Feature extraction:

The characteristics of the face are mainly located around the facial components such as
the eyes, mouth, eyebrows, nose and chin. The detection of characteristic points of the
face is done within a rectangular box returned by a detector that locates the face. The
extraction of geometric features, such as the contours of facial components and facial
distances, provides the location or appearance of characteristics. There are therefore
two types of approaches:

1.3.2.1 Geometric characteristics:

Geometric features represent the shape and location of the components of the face
(including the mouth, eyes, eyebrows and nose). The facial components or facial feature
points are extracted to form a feature vector representing the geometry of the face.
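A minimal sketch of such a geometric feature vector, assuming a hypothetical set of (x, y) landmarks (real detectors return their own, richer landmark schemes): a few inter-landmark distances are concatenated, normalized by the inter-ocular distance to reduce scale effects.

```python
import numpy as np

# Hypothetical landmark layout: (x, y) points for a few facial components.
landmarks = {
    "left_eye":    (15.0, 20.0),
    "right_eye":   (33.0, 20.0),
    "nose_tip":    (24.0, 30.0),
    "mouth_left":  (17.0, 38.0),
    "mouth_right": (31.0, 38.0),
}

def geometric_feature_vector(pts):
    """Concatenate a few inter-landmark distances into one vector,
    normalized by the inter-ocular distance to reduce scale effects."""
    p = {k: np.asarray(v, dtype=float) for k, v in pts.items()}
    iod = np.linalg.norm(p["right_eye"] - p["left_eye"])
    pairs = [("left_eye", "nose_tip"), ("right_eye", "nose_tip"),
             ("mouth_left", "mouth_right"), ("nose_tip", "mouth_left")]
    return np.array([np.linalg.norm(p[a] - p[b]) / iod for a, b in pairs])

feats = geometric_feature_vector(landmarks)
```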

1.3.2.2 Appearance characteristics:

These features represent changes in the appearance of the face, such as wrinkles and
furrows. With these methods, the effects of head rotation and of the different facial
capture scales can be eliminated by a normalization step before feature extraction, or by
a representation of the features before the expression recognition step.

1.3.3 Emotion recognition:

Research in this area can be divided into three families: global approaches, local
approaches, and hybrid approaches. Each has advantages and disadvantages related to
environmental conditions, image orientation, head position, etc.

1.3.3.1 Global approaches:

These approaches are independent of head position (top, bottom) and face image
orientation. These methods are effective, but they require a heavy learning phase and the
result depends on the number of samples used.

1.3.3.2 Local approaches:

These approaches are based on the detection of facial objects and are robust to changes
in luminance. However, the position and orientation of the head can cause some gaps in
the system.

1.3.3.3 Hybrid approach:

The alternative is to combine the two approaches (local and global) in order to take
advantage of both. The recognition phase in such a system is based on machine learning:
a feature vector is formed to describe the facial expression, and the first task of the
classifier is learning. Classifier training consists of labeling the images after
detection; once the classifier is trained, it can recognize input images. Classification
methods can be divided into two groups:

• Recognition based on static data, which concerns only images.

• Recognition based on dynamic data, which concerns image sequences or videos.

Various classifiers have been applied, such as neural networks, Bayesian networks, SVMs,
etc.
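To make the learning/recognition split concrete, here is a toy nearest-centroid classifier on synthetic feature vectors. It is only a stand-in for the SVM or neural network classifiers mentioned above, which follow the same fit-then-predict pattern.

```python
import numpy as np

class NearestCentroidClassifier:
    """Toy stand-in for the classifiers mentioned above: learning computes
    one centroid per expression label; recognition returns the label of
    the nearest centroid to the input feature vector."""

    def fit(self, X, y):
        self.labels_ = sorted(set(y))
        self.centroids_ = np.array(
            [X[np.array(y) == lab].mean(axis=0) for lab in self.labels_])
        return self

    def predict(self, X):
        # distances of each sample to each centroid, shape (n_samples, n_labels)
        d = np.linalg.norm(X[:, None, :] - self.centroids_[None, :, :], axis=2)
        return [self.labels_[i] for i in d.argmin(axis=1)]

rng = np.random.default_rng(0)
# Synthetic, well-separated "feature vectors" for two expressions.
X = np.vstack([rng.normal(0.0, 0.1, (20, 8)), rng.normal(1.0, 0.1, (20, 8))])
y = ["neutral"] * 20 + ["happiness"] * 20
clf = NearestCentroidClassifier().fit(X, y)
pred = clf.predict(X)
```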

1.3.4 Facial expression databases:

Having sufficient labeled training data that includes as many variations of populations
and environments as possible is important for the design of a deep expression recognition
system. We introduce some databases that contain a large number of affective images
collected from the real world to benefit the training of deep neural networks.

1.3.4.1 CK+:

The Extended Cohn-Kanade database is the most extensively used laboratory-controlled
database for evaluating FER systems. CK+ contains 593 video sequences from 123 subjects.
The sequences vary in duration from 10 to 60 frames and show a shift from a neutral
facial expression to the peak expression. Among these videos, 327 sequences from 118
subjects are labeled with seven basic expression labels (anger, contempt, disgust, fear,
happiness, sadness and surprise) based on the Facial Action Coding System (FACS). Because
CK+ does not provide specified training, validation and test sets, the algorithms
evaluated on this database are not uniform.

1.3.4.2 MMI:

This laboratory-controlled database includes 326 sequences from 32 subjects. A total of
213 sequences are labeled with the six basic expressions, and 205 sequences are captured
in frontal view. In contrast to CK+, sequences in MMI are onset-apex-offset labeled: the
sequence begins with a neutral expression, reaches the peak near the middle, and then
returns to the neutral expression.


1.3.4.3 JAFFE:

The Japanese Female Facial Expression database is a laboratory-controlled image database
that contains 213 samples of posed expressions from 10 Japanese women. Each person has
3 to 4 images for each of the six basic facial expressions (anger, disgust, fear,
happiness, sadness and surprise) and one image with a neutral expression. The database is
challenging because it contains few examples per subject/expression.

1.3.4.4 FER-2013:

This database was introduced during the ICML 2013 Challenges in Representation Learning.
FER-2013 is a large-scale, unconstrained database collected automatically with the Google
image search API. All images have been registered and resized to 48x48 pixels after
rejecting wrongly labeled frames and adjusting the cropped region. FER-2013 contains
28,709 training images, 3,589 validation images and 3,589 test images with seven
expression labels (anger, disgust, fear, happiness, sadness, surprise and neutral).

Figure 2. Test Image
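FER-2013 is commonly distributed as a CSV file in which each image is one row: a numeric emotion label, a space-separated string of 2304 pixel values, and a usage split. A minimal parser for one such row might look like this; the row below is synthetic, for illustration only.

```python
import numpy as np

# Label order as used by FER-2013.
EMOTIONS = ["anger", "disgust", "fear", "happiness",
            "sadness", "surprise", "neutral"]

def parse_fer2013_row(row):
    """Turn one 'emotion,pixels,usage' CSV row into a label, the usage
    split, and a 48x48 float image scaled to [0, 1]."""
    emotion, pixels, usage = row.split(",")
    img = np.array([int(p) for p in pixels.split()],
                   dtype=np.uint8).reshape(48, 48)
    return int(emotion), usage, img.astype(np.float32) / 255.0

# Synthetic row in the same layout as fer2013.csv (not real data).
row = "3," + " ".join(str(i % 256) for i in range(48 * 48)) + ",Training"
label, usage, img = parse_fer2013_row(row)
```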

1.3.5 Machine learning:

Machine learning is one of the most exciting areas of technology at the moment. Every day
we see stories that herald new breakthroughs in facial recognition technology,
self-driving cars, or computers that can hold a conversation just like a person. Machine
learning is set to revolutionise almost every area of human life and work. The primary
reason for using machine learning is to automate complex tasks and to analyze the variety
and complexity of data.

1.3.6 Deep learning:

Deep learning, or deep machine learning, is a branch of machine learning that takes data
as input and makes intuitive and intelligent decisions using an artificial neural network
stacked layer-wise. It is applied in various domains for its ability to find patterns in
data, extract features and generate intermediate representations.
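A minimal NumPy sketch of the basic building block stacked layer-wise in such networks: a convolution layer followed by a ReLU non-linearity. Deep networks stack many such layers and learn the kernels from data; here the kernel is hand-set (a horizontal gradient filter) purely for illustration.

```python
import numpy as np

def conv2d_valid(x, k):
    """Naive 'valid' 2-D convolution (really cross-correlation, as in
    most deep learning libraries)."""
    h = x.shape[0] - k.shape[0] + 1
    w = x.shape[1] - k.shape[1] + 1
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = (x[i:i + k.shape[0], j:j + k.shape[1]] * k).sum()
    return out

def relu(x):
    """Element-wise rectified linear unit."""
    return np.maximum(x, 0.0)

# One conv + ReLU layer on a toy 6x6 "image" with constant gradient 1.
img = np.arange(36, dtype=float).reshape(6, 6)
kernel = np.array([[-1.0, 1.0]])          # horizontal gradient filter
feat = relu(conv2d_valid(img, kernel))    # 6x5 feature map, all ones here
```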

1.4 Conclusion:

In this chapter, we presented notions of emotion and facial expression, the main coding
systems (FACS, MPEG-4, Candide), the architecture of a facial expression recognition
system (face detection, feature extraction and emotion recognition), the principal facial
expression databases, and the machine learning and deep learning techniques used to train
such systems.

• Menu Item
Menu Description.
Focus topics: Topic one, topic two, topic three, ...

• Menu Item
Menu Description.
Focus topics: Topic one, topic two, topic three, ...

• Menu Item
Menu Description.
Focus topics: Topic one, topic two, topic three, ...

Also bullets such as:

• One


• Two

• Three

• Four

• …

And for more chapters, just copy the file “004-chapter1.tex”, edit the content, and
then add it to “001-report.tex”.

CHAPTER 2. CHAPTER TWO Awatef MESSAOUDI

Chapter 2
Chapter Two

2.1 Section One

2.1.1 Sub section One

And your chapter one goes here[2].


Lorem ipsum dolor sit amet, consectetur adipisicing elit, sed do eiusmod tempor incididunt
ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation
ullamco laboris nisi ut aliquip ex ea commodo consequat. Duis aute irure dolor in
reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur. Excepteur
sint occaecat cupidatat non proident, sunt in culpa qui officia deserunt mollit anim id est
laborum.

Figure 3. Test Image


2.1.2 Sub section Two

This is a second subsection[1].


Lorem ipsum dolor sit amet, consectetur adipisicing elit, sed do eiusmod tempor incididunt
ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation
ullamco laboris nisi ut aliquip ex ea commodo consequat. Duis aute irure dolor in
reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur. Excepteur
sint occaecat cupidatat non proident, sunt in culpa qui officia deserunt mollit anim id est
laborum.

• Menu Item
Menu Description.
Focus topics: Topic one, topic two, topic three, ...

• Menu Item
Menu Description.
Focus topics: Topic one, topic two, topic three, ...

• Menu Item
Menu Description.
Focus topics: Topic one, topic two, topic three, ...

Also bullets such as:

• One

• Two

• Three

• Four

• …

And for more chapters, just copy the file “004-chapter1.tex”, edit the content, and
then add it to “001-report.tex”.

CONCLUSION Awatef MESSAOUDI

Conclusion

And a very interesting conclusion here.


Lorem ipsum dolor sit amet, consectetur adipisicing elit, sed do eiusmod tempor incididunt
ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation
ullamco laboris nisi ut aliquip ex ea commodo consequat. Duis aute irure dolor in
reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur. Excepteur
sint occaecat cupidatat non proident, sunt in culpa qui officia deserunt mollit anim id
est laborum. Lorem ipsum dolor sit amet, consectetur adipisicing elit, sed do eiusmod
tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis
nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat. Duis aute
irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur.
Excepteur sint occaecat cupidatat non proident, sunt in culpa qui officia deserunt mollit
anim id est laborum.

APPENDIX Awatef MESSAOUDI

Appendix

An appendix, if you need it.

Lorem ipsum dolor sit amet, consectetur adipisicing elit, sed do eiusmod tempor
incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud
exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat. Duis aute irure
dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur.
Excepteur sint occaecat cupidatat non proident, sunt in culpa qui officia deserunt mollit
anim id est laborum. Lorem ipsum dolor sit amet, consectetur adipisicing elit, sed do
eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam,
quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat. Duis
aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla
pariatur. Excepteur sint occaecat cupidatat non proident, sunt in culpa qui officia
deserunt mollit anim id est laborum.

WEBOGRAPHY Awatef MESSAOUDI

Webography

[2] Latex @ Wikipedia. URL: https://www.kairos.com/blog/the-universally-recognized-facial-expressions-of-emotion (visited on 04-2016).

[3] ENIS. URL: http://www.enis.rnu.tn/site/enis_fr/ (visited on 04-2016).

BIBLIOGRAPHY Awatef MESSAOUDI

Bibliography

[1] Charles Bazerman et al. Shaping written knowledge: The genre and activity of the
experimental article in science. Vol. 356. University of Wisconsin Press Madison,
1988.

[4] Ashraf Aboulnaga, Alaa R Alameldeen, and Jeffrey F Naughton. “Estimating the
selectivity of XML path expressions for internet scale applications”. In: VLDB. Vol. 1.
2001, pp. 591–600.

[5] Ashbindu Singh. “Review article digital change detection techniques using
remotely-sensed data”. In: International journal of remote sensing 10.6 (1989),
pp. 989–1003.
