
Pune Vidyarthi Griha's College of Engineering &

S.S.D. Institute of Management, Nashik


A Seminar Presentation on
Facial Emotion Recognition using Convolutional Neural Network

Department of Computer Engineering


Under The Guidance of
Prof. XYZ

Presented by
XYZ (TE IT)
Presentation Outline
1. Introduction (Overview, Need, Problem Statement, Objectives)
2. Literature Survey
3. Proposed System
4. System Design
5. Details of Modules
6. Algorithm
7. Software Tools / Technologies
8. Project Plan
9. References

2
Introduction
Overview:
• Facial expression recognition has attracted much attention in recent years in clinical practice, sociable robotics and education.
• According to diverse research, emotion plays an important role in education.
• Currently, teachers use exams, questionnaires and observations as sources of feedback, but these classical methods often come with low efficiency. Using students' facial expressions, teachers can adjust their strategy and instructional materials to help foster student learning.
Need:
• The proposed system identifies and monitors students' emotions and gives feedback in real time.
• The system uses the movement patterns of the eyes and head to deduce relevant information about students' mood in an e-learning environment.
• Ayvaz et al. developed a Facial Emotion Recognition System (FERS), which recognizes the emotional states and motivation of students in videoconference-type e-learning.

3
Introduction
Problem statement:
• To improve the existing system of emotion recognition.
• To help in managing and improving one's feelings.
• To help improve teaching methodologies and psychiatric sessions.

Objectives:
• The main aim of this project is to capture an image, predict the emotion, and indicate whether the person is in a good mental state.
• To create a better environment for psychiatrists and therapists to conduct their therapy sessions.
• To give teachers better knowledge of students' mood or interest level in lectures or counselling sessions.

4
Literature Survey
1. A Machine Learning Emotion Detection Platform to Support Effective Well Being (2018)
Presented by: Michael Healy, Ryan Donovan, Paul Walsh, Huiru Zheng
Work Description: Functioning emotions help us to perceive, think, and act correctly; their crucial role in general well-being becomes self-evident when they become dysfunctional. The paper describes a new emotion detection system based on a real-time video feed and demonstrates how a bespoke machine learning classifier, a support vector machine (SVM), can be used to provide quick and reliable classification. The features used in the study are 68-point facial landmarks.
Problems Found: The research commonly relies on posed-expression datasets that are not based on authentic emotions.

2. Facial Emotion Recognition of Students Using Convolutional Neural Network (2019)
Presented by: Imane Lasri, Anouar Riad Solh, Mourad EL Belkacemi
Work Description: The proposed system analyzes facial expressions using a Convolutional Neural Network (CNN) architecture. First, the system detects the face in the input image; the detected faces are cropped and normalized to a size of 48×48. These face images are then used as input to the CNN. Finally, the output is the facial expression recognition result (anger, happiness, sadness, disgust, surprise or neutral).
Problems Found: Vulnerable detection; potential privacy issues.
5
3. Facial Emotion Detection Using Deep Learning (2020)
Presented by: Akriti Jaiswal, A. Krishnama Raju, Suman Deb
Work Description: This paper presents the design of an artificial intelligence (AI) system capable of emotion detection through facial expressions. It discusses the procedure of emotion detection, which includes three main steps: face detection, feature extraction, and emotion classification. Emotion is a mental state associated with the nervous system, feelings, perceptions, behavioural reactions, and a degree of gratification or displeasure. One of the current applications of AI using neural networks is the recognition of faces in images and videos.
Problems Found: Because of the subjective nature of emotions, emotional AI is especially prone to bias.

6
Proposed System

The figure above represents our CNN model. It contains 4 convolutional layers and 4 pooling layers to extract features, followed by 2 fully connected layers and a softmax layer with 7 emotion classes. The input is a grayscale face image of size 48×48. Each convolutional layer uses 3×3 filters with stride 2.
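A minimal PyTorch sketch of a model in this spirit is shown below. The slide fixes only the layer counts, the 48×48 grayscale input and the 3×3 filters; the channel widths, the fully connected sizes, and the use of stride-1 convolutions with 2×2 max pooling (so the 48×48 input survives four downsampling stages; the slide's stride-2 setting would shrink it faster) are assumptions made for illustration.

import torch
import torch.nn as nn

class EmotionCNN(nn.Module):
    """Sketch: 4 conv + 4 pooling layers, 2 fully connected layers, 7 classes."""
    def __init__(self, num_classes=7):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),    # 48 -> 24
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 24 -> 12
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 12 -> 6
            nn.Conv2d(128, 256, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2), # 6 -> 3
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(256 * 3 * 3, 256), nn.ReLU(),
            nn.Linear(256, num_classes),  # softmax applied by the loss / at inference
        )

    def forward(self, x):
        return self.classifier(self.features(x))

# Example: one 48x48 grayscale face image as a batch of size 1
model = EmotionCNN()
probs = torch.softmax(model(torch.randn(1, 1, 48, 48)), dim=1)  # 7 emotion probabilities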

7
Yoga Prediction
Yoga is very important nowadays; in these stressful and hectic times people need to focus on themselves. Yoga is often partially understood as being limited to asanas or poses, and its benefits are only perceived to be at the physical level. However, we fail to realize the immense benefits yoga offers in uniting the body, mind, and breath. When you are in harmony, the journey through life is calmer, happier and more fulfilling. So, if you are keen to lose weight, develop a strong and flexible body, or be at peace, then yoga can help you achieve it all.

The diagram shows the flow of the yoga recommendation feature of this system (a minimal mapping sketch follows the list below).

Top 10 benefits of yoga
1. Yoga helps you in all-round fitness
2. Yoga benefits weight loss
3. Yoga is one of the best solutions for stress relief
4. Yoga helps with inner peace
5. Yoga improves immunity
6. Practice of yoga offers greater awareness
7. Yoga improves relationships
8. Yoga increases energy
9. Yoga gives you better flexibility and posture
10. Yoga helps in improving intuition
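The slide does not specify how a detected emotion maps to a yoga suggestion, so the following is a purely hypothetical sketch of such a mapping; the emotion labels come from the CNN's seven classes, while the suggested practices are illustrative placeholders.

# Hypothetical emotion -> yoga suggestion mapping (illustrative only)
YOGA_SUGGESTIONS = {
    "angry":    "Shavasana and slow breathing (pranayama) for calming down",
    "sad":      "Gentle backbends such as Bhujangasana to lift mood",
    "fear":     "Grounding poses such as Tadasana and Balasana",
    "disgust":  "Twisting poses and cleansing breath work",
    "surprise": "Balancing poses such as Vrikshasana to refocus",
    "neutral":  "A general all-round sequence (Surya Namaskar)",
    "happy":    "Keep up the routine; an energizing flow of choice",
}

def recommend_yoga(emotion: str) -> str:
    # Fall back to a general routine if the label is unknown
    return YOGA_SUGGESTIONS.get(emotion, YOGA_SUGGESTIONS["neutral"])

print(recommend_yoga("sad"))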

8
System Design



Details of Module
Android Application
INPUT:
Login/registration credentials,
Image captured from the camera,
Resulting mood from the cloud server

OUTPUT:
User data,
Captured image from the camera,
Push notification if harmful/suspicious activity is detected,
Push notification if user tiredness from long phone usage is detected

Algorithm/Process:
a) Login/Register
b) Access the camera
c) Send the data to the cloud server
d) Get the resulting mood of the user
e) Notification if harmful/suspicious activity is detected
f) Notification if tiredness or long usage is detected
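As a rough illustration of this flow, the sketch below shows how a client could upload a captured frame to the cloud server and read back the predicted mood. The endpoint name, host, and JSON response format are assumptions; the real Android client would do the equivalent in Java/Kotlin rather than Python.

import requests

SERVER_URL = "http://cloud-server.example/predict"  # hypothetical endpoint

def send_frame_and_get_mood(image_path: str) -> str:
    # Upload the captured image and read the predicted mood from the JSON reply
    with open(image_path, "rb") as f:
        response = requests.post(SERVER_URL, files={"image": f}, timeout=10)
    response.raise_for_status()
    return response.json().get("mood", "unknown")

mood = send_frame_and_get_mood("captured_frame.jpg")
if mood in ("angry", "sad"):
    print("Push notification: user may need a break")  # stand-in for the app's notification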
Details of Module
Website
INPUT:
Login/Register,
Resulting mood from the server,
Detection of whether multiple users are present,
Data of these multiple users

OUTPUT:
User data,
Captured image sent to the server,
If multiple persons are detected, separate images of each person sent to the server

Algorithm: CNNs are regularized versions of multilayer perceptrons. Multilayer perceptrons usually mean fully connected networks, that is, each neuron in one layer is connected to all neurons in the next layer. The "full connectivity" of these networks makes them prone to overfitting the data. Typical ways of regularization, or of preventing overfitting, include penalizing parameters during training (such as weight decay) or trimming connectivity (skipped connections, dropout, etc.). CNNs take a different approach towards regularization: they take advantage of the hierarchical pattern in data and assemble patterns of increasing complexity using smaller and simpler patterns embossed in their filters.

Processes:
a) Login/Register
b) Access the camera (external, if added)
c) Send the data to the cloud server
d) Get the resulting mood of the user
e) Session: i) Detect multiple persons.
   ii) Send separated images to the cloud.
   iii) Get the resulting moods.
   iv) Display the average mood of all users.
f) If suspicious behaviour is detected, flag it as lying; otherwise display the mood.
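A rough sketch of the multi-person session step is given below, using OpenCV's bundled Haar cascade to find faces, cropping each one to 48×48, and averaging per-face predictions into a single session mood. The predict_mood helper stands in for the cloud-side CNN call and is an assumption.

import cv2
import numpy as np

# Haar cascade shipped with OpenCV for frontal face detection
cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def session_average_mood(frame, predict_mood):
    # predict_mood(face_img) -> vector of 7 class probabilities (hypothetical helper)
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    scores = []
    for (x, y, w, h) in faces:
        face = cv2.resize(gray[y:y + h, x:x + w], (48, 48))  # crop and normalize to 48x48
        scores.append(predict_mood(face))
    if not scores:
        return None
    return np.mean(scores, axis=0)  # average mood over all detected persons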
12
Details of Module
Cloud Server
INPUT:
Captured image from the application/website.

OUTPUT:
Resulting mood sent back to the user.

Algorithm/Processes:
a) Get the image from the website/application
b) Convert the image to grayscale
c) Apply the CNN algorithm to the image
d) Store the predicted mood in the database
e) Send the resulting mood to the concerned devices
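Since Flask appears in the software tools list, the cloud-server steps could look roughly like the sketch below. The /predict route name, the emotion_model module, the class labels, and the commented database call are all assumptions for illustration, not the project's actual API.

import cv2
import numpy as np
import torch
from flask import Flask, jsonify, request
from emotion_model import EmotionCNN  # the model sketched earlier (hypothetical module)

app = Flask(__name__)
LABELS = ["angry", "disgust", "fear", "happy", "sad", "surprise", "neutral"]
model = EmotionCNN()  # load trained weights in practice
model.eval()

@app.route("/predict", methods=["POST"])
def predict():
    # a) receive image, b) grayscale + resize, c) CNN, d/e) store and return the mood
    data = np.frombuffer(request.files["image"].read(), dtype=np.uint8)
    gray = cv2.imdecode(data, cv2.IMREAD_GRAYSCALE)
    face = cv2.resize(gray, (48, 48)).astype(np.float32) / 255.0
    with torch.no_grad():
        logits = model(torch.from_numpy(face).reshape(1, 1, 48, 48))
    mood = LABELS[int(logits.argmax(dim=1))]
    # store_mood_in_db(mood)  # hypothetical persistence step
    return jsonify({"mood": mood})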
Algorithm

The CNN algorithm uses the following steps during execution:
• Step 1: Convolution Operation
• Step 1(b): ReLU Layer
• Step 2: Pooling
• Step 3: Flattening
• Step 4: Full Connection

14
Algorithm
A Convolutional Neural Networks introduction, so to speak.

Step 1: Convolution Operation
The first building block in our plan of attack is the convolution operation. In this step, we will touch on feature detectors, which basically serve as the neural network's filters. We will also discuss feature maps, learning the parameters of such maps, how patterns are detected, the layers of detection, and how the findings are mapped out.

Step 1(b): ReLU Layer
The second part of this step will involve the Rectified Linear Unit or ReLU. We will cover ReLU layers and explore how linearity functions in the context of Convolutional Neural Networks. Not necessary for understanding CNNs, but there's no harm in a quick lesson to improve your skills.

Step 2: Pooling
In this part, we'll cover pooling and will get to understand exactly how it generally works. Our focus here, however, will be a specific type of pooling: max pooling. We'll cover various approaches, though, including mean (or sum) pooling. This part will end with a demonstration made using a visual interactive tool that will definitely sort the whole concept out for you.

Step 3: Flattening
This will be a brief breakdown of the flattening process and how we move from pooled to flattened layers when working with Convolutional Neural Networks.

Step 4: Full Connection
In this part, everything that we covered throughout the section will be merged together. By learning this, you'll get to envision a fuller picture of how Convolutional Neural Networks operate and how the "neurons" that are finally produced learn the classification of images.

Summary
In the end, we'll wrap everything up and give a quick recap of the concepts covered in the section. If you feel like it will do you any benefit (and it probably will), you should check out the extra tutorial in which Softmax and Cross-Entropy are covered. It's not mandatory for the course, but you will likely come across these concepts when working with Convolutional Neural Networks and it will do you a lot of good to be familiar with them.
15
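To make the five steps concrete, here is a small PyTorch sketch that runs one convolution, ReLU, max pooling, flattening, and a fully connected layer on a random 48×48 input; the filter count and layer sizes are arbitrary choices for illustration, not the project's trained model.

import torch
import torch.nn as nn

x = torch.randn(1, 1, 48, 48)                      # one 48x48 grayscale image

conv = nn.Conv2d(1, 8, kernel_size=3, padding=1)   # Step 1: convolution (feature detectors)
feature_maps = conv(x)                             # -> (1, 8, 48, 48)

activated = torch.relu(feature_maps)               # Step 1(b): ReLU zeroes negative responses

pooled = nn.MaxPool2d(kernel_size=2)(activated)    # Step 2: max pooling -> (1, 8, 24, 24)

flat = pooled.flatten(start_dim=1)                 # Step 3: flattening -> (1, 8*24*24)

fc = nn.Linear(8 * 24 * 24, 7)                     # Step 4: full connection to 7 emotion scores
scores = torch.softmax(fc(flat), dim=1)            # probabilities over the 7 classes
print(scores.shape)                                # torch.Size([1, 7])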
Filters of Convolutional Neural Network

Here is an image of the different types of filters used by the Convolutional Neural Network. We are using grayscale filters here.

import os
import cv2
import numpy as np
from tqdm import tqdm

# TEST_DIR and IMG_SIZE are assumed to be defined elsewhere in the project
def process_test_data():
    # Load each test image, convert it to grayscale, and resize it to IMG_SIZE x IMG_SIZE
    testing_data = []
    for img in tqdm(os.listdir(TEST_DIR)):
        path = os.path.join(TEST_DIR, img)
        img_num = img.split('.')[0]
        img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
        img = cv2.resize(img, (IMG_SIZE, IMG_SIZE))
        testing_data.append([np.array(img), img_num])
    return testing_data
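A typical (assumed) way to use this helper is to build the test set once and reuse it:

test_data = process_test_data()
print(len(test_data), "test images loaded")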

16
Software Tools / Technologies

1] Software Requirements:
• Python 3.9.6
• Flask
• Pip 20.2.2
• Anaconda Individual Edition
• PyCharm (Python IDE), VS Code
• MongoDB
• JavaScript, HTML, CSS
• scikit-learn: machine learning library
• PyTorch: an open-source machine learning framework

2] Hardware Requirements:
• OS: Windows 10 or Ubuntu 20.04.2 LTS
• Processor: Intel Core i3 or AMD Ryzen 3
• Memory: 2 GB
• Raspberry Pi
• Jumpers
• External camera
Advantages
• CNN is more convenient.
• Social acceptability.
• More user-friendly.
• Inexpensive technique of identification.
• Better security.

Disadvantages
• Huge storage requirements
• Vulnerable detection
• Potential privacy issues
18
Applications
• Educational institutions
• Psychiatrist and therapist sessions
• Video gaming
• Commercial and law-enforcement use
• ATMs, identifying duplicate voters, passport and visa verification, driving licence verification, defence, competitive and other exams, and government and private sectors

19
Conclusion
The proposed model includes 4 convolutional layers, 4 max-pooling layers and 2 fully connected layers. The system recognizes faces in students' input images using the CNN and classifies them into seven facial expressions: surprise, fear, disgust, sad, happy, angry and neutral.

20
References

• P. Ekman and W. V. Friesen, "Constants across cultures in the face and emotion," Journal of Personality and Social Psychology, vol. 17, no. 2, pp. 124-129, 1971.
• C. Tang, P. Xu, Z. Luo, G. Zhao, and T. Zou, "Automatic Facial Expression Analysis of Students in Teaching Environments," in Biometric Recognition, vol. 9428, J. Yang, J. Yang, Z. Sun, S. Shan, W. Zheng, and J. Feng, Eds. Cham: Springer International Publishing, 2015, pp. 439-447.
• A. Savva, V. Stylianou, K. Kyriacou, and F. Domenach, "Recognizing student facial expressions: A web application," in 2018 IEEE Global Engineering Education Conference (EDUCON), Tenerife, 2018, pp. 1459-1462.
• J. Whitehill, Z. Serpell, Y.-C. Lin, A. Foster, and J. R. Movellan, "The Faces of Engagement: Automatic Recognition of Student Engagement from Facial Expressions," IEEE Transactions on Affective Computing, vol. 5, no. 1, pp. 86-98, Jan. 2014.
• N. Bosch, S. D'Mello, R. Baker, J. Ocumpaugh, V. Shute, M. Ventura, L. Wang, and W. Zhao, "Automatic Detection of Learning-Centered Affective States in the Wild," in Proceedings of the 20th International Conference on Intelligent User Interfaces - IUI '15, Atlanta, Georgia, USA, 2015, pp. 379-388.
21
THANK YOU

22
