CHAPTER 1
INTRODUCTION
1.1 Introduction
This research paper aims to contribute to this evolving field by proposing a novel
approach to music and movie recommendation systems that integrates real-time facial
emotion detection. By harnessing the capabilities of CNN models trained with
frameworks such as Keras, we seek to develop a lightweight and efficient system
capable of analyzing users' facial expressions and predicting their emotional states with
high accuracy. In this paper, we will discuss the theoretical foundations of our proposed
system, including the underlying principles of facial emotion detection and CNN-based
image processing. We will also outline the methodology employed in the development
and training of our emotion detection model, as well as the integration of this model into
a music and movie recommendation framework. Furthermore, we will present
experimental results and performance evaluations to demonstrate the efficacy and
practicality of our approach.
Department of CSBS 1
Moodify: A Movie & Music Recommendation System Using Emotion Detection
Ultimately, we believe that the integration of facial emotion detection into music
and movie recommendation systems has the potential to transform the way users
engage with digital content, offering a more immersive, personalized, and
emotionally resonant experience. Through this research, we aim to contribute to the
ongoing efforts to harness the power of artificial intelligence for the betterment of
human-computer interaction and content consumption experiences.
CHAPTER 2
LITERATURE SURVEY
2.1. Introduction
Everyone strives for success in today's world; however, many people suffer from
unhappiness as a result of this constant struggle. Facial emotion recognition technology
has attracted considerable attention from society and is being used in many industries,
such as security systems, DVP, and other technical breakthroughs. Mood Indicator was
developed to recommend films and tunes to people based on their emotions.
60% of users say that the number of songs or videos in their collection is so big that
they cannot pick which to play. By creating a recommendation system, we can help a user
decide which entertainment option to choose, which reduces the client's stress levels.
It is time-efficient too.
Once users' emotional states are detected, the system proceeds to recommend
music tracks and movie titles that align with their current moods. Leveraging extensive
databases of music and movies, such as Spotify and IMDb, the system curates tailored
recommendations that resonate with users' emotional states. Whether users are seeking
upbeat music to uplift their spirits or heartwarming movies to evoke nostalgia, the
system provides a diverse selection of content options to suit their preferences.
This paper proposed a new approach for playing music automatically based on
facial emotion. Most existing approaches involve playing music manually, using
wearable computing devices, or classifying songs by audio features. Instead, we
propose an alternative to manual sorting and playing. We employed a Convolutional
Neural Network to detect emotions, and Pygame and Tkinter are used to provide music
recommendations. Our proposed approach reduces the computational time required to
produce results as well as the overall cost of the planned system, boosting the
system's overall accuracy. The system is tested on the FER2013 dataset. The built-in
camera captures facial expressions, and feature extraction is applied to the input face
images to detect emotions including happy, angry, sad, surprise, and neutral. Music
playlists are generated automatically based on the user's current emotion. The system
performs better in terms of computational time than the algorithms in the existing literature.
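The final classification step described above can be sketched in a few lines, assuming the CNN ends in a softmax over the five emotion labels. The function name and the score values below are illustrative assumptions, not the surveyed paper's code:

```python
# Minimal sketch (assumed post-processing step): mapping a CNN's output
# scores to one of the five emotion labels used by the system.
EMOTIONS = ["angry", "happy", "neutral", "sad", "surprise"]

def predict_emotion(scores):
    """Return the emotion whose score is highest.

    `scores` stands in for the CNN's final softmax output; the model
    itself is not reproduced here.
    """
    if len(scores) != len(EMOTIONS):
        raise ValueError("expected one score per emotion")
    best = max(range(len(scores)), key=lambda i: scores[i])
    return EMOTIONS[best]

print(predict_emotion([0.05, 0.70, 0.10, 0.10, 0.05]))  # happy
```

Taking the arg-max of the softmax scores is the standard way a multi-class classifier's output is turned into a single label.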
The paper presents a mobile application, EmoTunes, that detects people's facial
emotions and then plays suitable music. The emotion detection system works for any
skin tone, as the dataset introduced contains a mix of ethnicities. The dataset was
collected by asking friends, family members, and acquaintances for selfies of
themselves expressing Ekman's basic emotions. EmoTunes then used an ensemble of
MobileNetV2 and ResNet50 to detect the 7 Ekman emotions: Angry, Disgust, Fear,
Happy, Neutral, Sad, and Surprise.
The system can detect single-user as well as multi-user emotions. It has two types
of recommendation: recommendation from the server, where it fetches songs from the
database, and recommendation via YouTube, where it takes a single emotion and
fetches a YouTube song that best matches the user's preferences and liking.
EmoTunes uses two methods of discretizing multi-user emotions: either finding the
most common emotion among them, or taking the emotion of the face nearest to the
camera. During real-time emotion detection, the system detects the user's emotion for
15 s and finds the most common emotion expressed within those 15 s to play music
directly from the server. The final model achieved a total validation accuracy of
99.64%, making it one of the most promising models.
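The 15-second majority-vote step can be sketched in a few lines of Python; the function name and the sample window below are illustrative assumptions, not EmoTunes' actual code:

```python
from collections import Counter

def dominant_emotion(frame_emotions):
    """Return the most common emotion in a window of per-frame predictions.

    `frame_emotions` is the list of emotions detected during the
    15-second window described above, one entry per analyzed frame.
    """
    if not frame_emotions:
        raise ValueError("no frames captured")
    return Counter(frame_emotions).most_common(1)[0][0]

window = ["happy", "neutral", "happy", "happy", "sad"]
print(dominant_emotion(window))  # happy
```

Voting over a window smooths out single-frame misclassifications, which is presumably why the system aggregates over 15 s rather than trusting one frame.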
The system can be deployed anywhere in the world, as it adapts to each user's
liking through the YouTube feature. While humans are social creatures, they do not like
to wait for services. EmoTunes can hence be deployed in waiting rooms such as those of
healthcare facilities, at metro/bus stations, or even in buses and trains in
Mauritius or other parts of the world, to provide users with comforting
music based on their current emotions.
Looking ahead, future considerations for EmoTunes may involve expanding and
refining the dataset to encompass an even broader variation of facial expressions and
emotions. This could further enhance the system's accuracy and effectiveness, ensuring
its continued relevance and utility in diverse environments and user scenarios. Overall,
EmoTunes represents a promising advancement in emotion-driven music
recommendation systems, offering both technical sophistication and practical
applicability in real-world settings.
Music and movies play a fundamental role in shaping and elevating one's
emotional state, serving as powerful sources of entertainment and inspiration. Recent
research highlights the profound impact of music and movies on human emotions and
cognitive activity. In the contemporary digital landscape, characterized by an
abundance of content, the demand for personalized recommendations that align with
users' emotional states has taken center stage. This research project introduces an
innovative recommendation system that capitalizes on facial expressions to deliver real-
time, customized movie and music suggestions through emotion analysis. The system
employs facial feature detection techniques, incorporating both the Haar Cascade
algorithm and Convolutional Neural Networks (CNN).
It is often confusing for a person to decide which music to listen to from
a massive collection of existing options. Several recommendation frameworks are
available for domains like music, dining, and shopping depending on the
mood of the user. The main objective of our music recommendation system is to provide
suggestions that fit the user's preferences. Analyzing the user's facial expression
may reveal their current emotional or mental state. Music and videos are one area
with a significant opportunity to offer clients abundant choices based on their
inclinations and historical data. It is well known that humans use facial expressions
to express more clearly what they want to say and the context in which they mean their words.
More than 60 percent of users believe that, at a certain point in time, the
number of songs in their library is so large that they are unable to figure
out which song to play. A recommendation system could assist users in deciding which
music to listen to, helping to reduce their stress levels. The user would not have to
waste any time searching or looking up songs: the best track matching the user's
mood is detected, and songs are shown to the user according to his/her mood. The
user's image is captured with the help of a webcam, and based on the detected
mood/emotion, an appropriate song from the user's playlist is shown matching the
user's requirement.
The analysis of facial expressions and user emotions serves as a valuable tool in
understanding the user's current emotional or mental state. Facial expressions are a
universal language through which individuals convey their feelings and intentions,
providing valuable insights into their mood and emotional well-being. Leveraging this
information, our recommendation system aims to offer personalized music suggestions
that resonate with the user's emotions and preferences.
Music and movie platforms offer abundant choices that can cater to diverse tastes
and preferences. However, navigating this vast sea of content can be overwhelming
for users, particularly when faced with a large and diverse music library. Studies
have shown that a significant portion of users struggle to select the perfect song
from their extensive collections, leading to frustration and stress.
By developing a recommendation system that takes into account the user's mood
and emotions, we aim to alleviate this burden and enhance the user's music listening
experience. Rather than spending time searching for the right song, our system
automates the process by capturing the user's facial image using a webcam and
analyzing their mood/emotion. Based on this analysis, the system selects an appropriate
song from the user's playlist that matches their emotional state and preferences.
This approach not only saves users time and effort but also helps reduce stress
levels by providing them with music that resonates with their current mood. By
seamlessly integrating facial emotion analysis with music recommendation technology,
our system offers a more intuitive and personalized music listening experience,
enhancing user satisfaction and engagement. With the user's emotional well-being at
the forefront, our recommendation system aims to revolutionize the way individuals
discover and enjoy music, providing a seamless and emotionally enriching journey
through their music libraries.
Furthermore, our recommendation system aims to bridge the gap between the
user's emotional state and their music preferences, creating a more seamless and
intuitive music listening experience. By understanding the user's mood and emotions,
the system can offer relevant music suggestions that resonate with their current state
of mind. This not only enhances user satisfaction but also promotes emotional well-
being by providing music that aligns with the user's feelings and helps them navigate
through their emotional landscape.
The proposed model depicts a recommendation system built to detect facial
expressions and automatically suggest movies and music from the sample dataset.
Many existing models recommend music/movies based on genre, but they require
manual sorting. To avoid manual sorting and reduce browsing time, this model,
titled ‘Movie & Music Recommendation System Based on Facial Expressions’, is
proposed. We express our emotions through facial expressions. Whenever we are in a
happy, festive, sad, or gloomy mood, we mostly prefer to listen to songs or sit and
watch movies in our leisure time. But the most common problem we all face is that
whenever we open any movie or music platform, we find tons of options, get stuck
and confused about what to watch, and spend most of our time browsing. Music has
always been known to alter human emotions and enhance mood; in some cases, mood
alteration can help reduce depression and sadness. So, to overcome manual sorting
and save browsing time by playing the music/movie recommended from our expressions,
the ‘Movie & Music Recommendation System Based on Facial Expressions’ model is
proposed. The model is named ‘Moodlift’.
The proposed model captures the image of the user using an inbuilt
webcam. With the help of image segmentation and image processing techniques, it
extracts features from the face of a target individual and tries to detect the emotion. The
project aims to lighten the mood of the user by playing songs that match the user's
requirements. A Convolutional Neural Network is used to detect emotions; it
consists of an input layer, convolutional layers, dense layers, and an output layer. The
CNN extracts features from the image and determines the specific expression. To accurately
detect emotions, we need to detect the face within an image and run the model to detect
expressions. For music recommendation, Deep Learning and the Spotify API will be used,
and for movie recommendation the user is routed to the IMDb site. The system recognizes
facial expressions across five categories: angry, sad, happy, surprise, and neutral. Based
on the captured emotion, it gives the user two choices: suggested movies or songs. The
recommendation system is exhibited on a website created using HTML and CSS with
Flask.
The primary focus of the model was to classify music according to different
emotions. The main aspect of music recommendation was to train the model to take
different features of a song, such as acoustics, timbre, tone, and rhythm, and to
determine whether a song was happy, sad, neutral, or energetic. We found playlists of
songs by genre and mood on Spotify. For happy songs we used pop and EDM, for sad
we used mellow, and for neutral we used lo-fi.
More than 300 songs per genre were used to create a dataset of 1200 songs. For
movie recommendation, we trained the model with FER2013 dataset images to
extract features and determine the genre depending on the five emotions. We used
one-hot encoding to give each song a label to use as training data for the model. With
two data frames containing song links and labels respectively, we then gathered the
song data needed to make classifications. We used the Spotify API to extract song
information and store it in the dataset. To train our classifier we used TensorFlow and
Keras. To make the pandas data frames compatible with the model, we converted them
into NumPy arrays. We then used scikit-learn to split our dataset. The final layer,
created with Keras, contained a single dense layer with the ReLU activation function,
and the model was trained using scikit-learn's Keras classifier wrapper.
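The one-hot labelling step described above can be illustrated as follows. The mood classes match the four named in the text, but the helper itself is a hypothetical sketch rather than the project's code:

```python
# Illustrative one-hot encoder for the four mood labels used as
# training targets (happy, sad, neutral, energetic).
MOODS = ["energetic", "happy", "neutral", "sad"]

def one_hot(mood):
    """Encode a mood label as a one-hot vector, one slot per class."""
    if mood not in MOODS:
        raise ValueError(f"unknown mood: {mood}")
    return [1 if m == mood else 0 for m in MOODS]

print(one_hot("sad"))  # [0, 0, 0, 1]
```

In the actual pipeline this per-song vector would become one row of the NumPy label array fed to the Keras classifier.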
CHAPTER 3
Additionally, the reliance solely on user interaction data such as search history
and ratings may result in recommendations that lack diversity or fail to capture users'
evolving preferences accurately. This limitation highlights the need for more holistic
recommendation systems that consider a broader range of factors, including users'
emotional states and external influences such as trending topics or cultural events. By
incorporating additional sources of data and employing more sophisticated algorithms,
future recommendation systems could overcome these limitations and provide more
comprehensive and personalized recommendations for both music and movies.
Some users may be resistant to the idea of using facial recognition technology
for music recommendations due to concerns about surveillance, security, or simply a
preference for more traditional and less invasive methods. A facial recognition system
relies solely on visual input, neglecting other important contextual factors that
influence musical preferences, such as mood, location, and social context.
Implementing facial recognition technology can be resource-intensive, requiring
significant computational power and storage. This may increase the cost of developing
and maintaining the music recommendation system.
Real-time video stream: The system first captures a video stream from a webcam.
Capture Frame: An individual frame is captured from the video stream.
Detecting face using Haar Cascade Classifier: The Haar cascade classifier is used
to detect faces within the captured frame. This classifier is a machine learning
model that’s trained to identify specific features within an image, such as edges and
lines. By identifying these features, the classifier can recognize the presence of a
face.
Preprocessing and Pixelization of image: The captured frame is then pre-
processed, which involves preparing the image data for further processing. This may
involve resizing the image, converting it to grayscale, or applying other techniques
to improve the image quality for face detection. After preprocessing, the image is
pixelated, which reduces the resolution of the image. This is done to protect the
user’s privacy.
Feature Extraction: Facial features are then extracted from the pre-processed frame.
These features may include the distance between the eyes, the shape of the nose,
and the curvature of the lips.
Emotion Recognition using Trained Model: A trained model is used to recognize
the emotions from the extracted facial features. This model is likely a convolutional
neural network (CNN) that has been trained on a dataset of labelled faces. The CNN
is able to identify patterns in the facial features that correspond to different
emotions.
Emotion Detected: Once the emotion is recognized, the system can take various
actions. In this flowchart, the system is designed to recommend music
and movies based on the user’s emotion. For instance, if the system detects that the
user is happy, it might recommend an upbeat song playlist or a comedy movie.
Recommend Movies & Songs: The system recommends songs from Spotify and
movies & descriptions from IMDB based on the user’s detected emotion.
Play suggested Spotify Playlist: If the user chooses to play the recommended
songs, the system will play the suggested playlist on Spotify.
End: This is the final state of the flowchart.
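The flowchart steps above can be sketched as a single orchestration function. The stubbed detector and classifier below are placeholders for the Haar cascade and the trained CNN, and the catalog contents are illustrative assumptions, not the system's actual data:

```python
def recommend_from_frame(frame, detect_face, classify_emotion, catalog):
    """Walk the flowchart: detect a face, classify its emotion, and
    look up matching recommendations.

    `detect_face` and `classify_emotion` stand in for the Haar-cascade
    detector and the trained CNN; `catalog` maps an emotion to a
    (songs, movies) pair. All four arguments are illustrative.
    """
    face = detect_face(frame)
    if face is None:
        return None  # no face detected: end the flow
    emotion = classify_emotion(face)
    return catalog.get(emotion)

catalog = {"happy": (["upbeat playlist"], ["comedy titles"])}
result = recommend_from_frame(
    "frame-bytes",
    detect_face=lambda f: f,           # stub: always "finds" a face
    classify_emotion=lambda f: "happy",  # stub: fixed prediction
    catalog=catalog,
)
print(result)  # (['upbeat playlist'], ['comedy titles'])
```

Passing the detector and classifier in as callables keeps the control flow testable without a camera or a trained model.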
Once the video stream is captured, the system proceeds to capture individual
frames, each of which contains valuable visual information for facial emotion analysis.
To detect faces within these frames, the system employs the Haar cascade classifier, a
machine learning model trained to identify specific features indicative of facial
structures. This classifier utilizes pattern recognition algorithms to identify edges, lines,
and other distinguishing features, enabling it to accurately recognize the presence of
faces within the captured frames.
With the pre-processed image in hand, the system proceeds to extract facial
features, such as the distance between the eyes, the shape of the nose, and the curvature
of the lips. These features serve as valuable inputs for emotion recognition, allowing
the system to discern users' emotional states from their facial expressions. Employing a
trained model, likely a convolutional neural network (CNN) trained on a dataset of
labelled faces, the system analyses these facial features to identify patterns
indicative of various emotions.
If the user chooses to play the recommended songs, the system initiates
playback of the suggested Spotify playlist, providing users with a seamless and
immersive music listening experience. This comprehensive process culminates in the
system's ability to provide personalized music and movie recommendations based on
the user's emotional state, enriching the entertainment experience and setting a new
standard for content recommendation systems in the digital age.
Upon emotion detection, the system recommends songs from Spotify and
movies from IMDb that align with the user's detected emotion, ensuring a highly
relevant and engaging entertainment selection. Users have the option to explore and
play the recommended content directly from the interface, facilitating a seamless
transition from recommendation to consumption.
o keras
o pandas
o opencv
o statistics
o tensorflow
o numpy
o flask
o matplotlib
A computer with sufficient CPU and memory resources to run Python and the
required libraries efficiently.
Internet connectivity to access the URL posted.
CHAPTER 4
SYSTEM DESIGN
By leveraging the extracted facial features, the system can make inferences
about the person's emotional state. For instance, if the corners of the lips are turned up,
indicating a smile, the person is likely experiencing happiness. Conversely, if the brows
are furrowed and the lips are directed downward, it may suggest feelings of sorrow or
distress.
The ability to capture and analyze facial expressions in this manner opens up a
myriad of applications across various domains, ranging from psychology and human-
computer interaction to marketing and entertainment. By harnessing the capabilities of
OpenCV and facial emotion recognition systems, researchers and developers can gain
valuable insights into human emotions and behaviors, paving the way for innovative
solutions and experiences tailored to individual needs and preferences.
The training phase of a DCNN involves exposing the network to a large dataset
comprising images containing faces. During training, the network adjusts the weights
of its neurons to minimize the difference between the expected outputs (i.e., the
presence or absence of a face) and the actual outputs generated by the network. Through
this iterative process, the DCNN learns to recognize and differentiate between facial
features, enabling it to accurately detect faces in new photos.
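The idea of iteratively adjusting weights to minimize the gap between expected and actual outputs can be shown on a toy single-weight model. This is a deliberately tiny sketch of gradient descent, not the DCNN itself:

```python
# Toy illustration: one weight updated by gradient descent so that the
# squared error between the expected and actual output shrinks.
def train_single_weight(x, target, w=0.0, lr=0.1, steps=50):
    for _ in range(steps):
        pred = w * x
        grad = 2 * (pred - target) * x  # d/dw of (pred - target)^2
        w -= lr * grad                  # step against the gradient
    return w

w = train_single_weight(x=1.0, target=0.8)
print(round(w, 3))  # converges close to 0.8 as the error is minimized
```

A real DCNN repeats this same update over millions of weights, with the gradients propagated backwards through the convolutional layers.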
One of the key advantages of using DCNNs for face detection is their superior
accuracy compared to traditional approaches such as Haar cascades. DCNNs are
capable of detecting faces with high precision across a wide range of lighting
conditions, orientations, and poses. This robustness makes them highly effective in real-
world scenarios where faces may appear in varying contexts and environments.
The ability of DCNNs to accurately and efficiently detect faces in images has
numerous practical applications across various domains, including surveillance,
security, and human-computer interaction. In surveillance systems, for example,
DCNN-based face identification can be used to monitor crowded areas and identify
individuals of interest in real time. Similarly, in security applications, DCNNs can
enhance access control systems by verifying the identities of individuals based on their
facial features.
Feature extraction is the process of detecting and extracting the important facial
features required for emotion recognition. The most prevalent facial emotion
identification features are the position and form of the eyes, brows, mouth, and nose.
Other characteristics, such as skin texture and colour, as well as face shape, may also
be considered. Haar cascades are one of the most often used feature extraction
approaches in facial emotion identification. Haar cascades are essentially a collection
of features used to recognize objects in an image. In the context of facial emotion
identification, these traits are intended to detect the eyes, nose, mouth, and other
significant facial features. Haar cascades work by scanning an image at various scales
and sizes to detect the presence of features. The features are essentially rectangular
sections of the image, containing dark and light areas. Combining numerous
features allows for the detection of more complex objects, such as faces. Once
the features are identified, they can be used to train a machine learning model, such as
a CNN, to distinguish emotions.
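The rectangular dark/light features that Haar cascades scan for can be evaluated in constant time using an integral image (summed-area table). The sketch below is a from-scratch illustration of that idea, not OpenCV's implementation:

```python
def integral_image(img):
    """Summed-area table: ii[y][x] = sum of img over rows<=y, cols<=x."""
    h, w = len(img), len(img[0])
    ii = [[0] * w for _ in range(h)]
    for y in range(h):
        row_sum = 0
        for x in range(w):
            row_sum += img[y][x]
            ii[y][x] = row_sum + (ii[y - 1][x] if y else 0)
    return ii

def rect_sum(ii, top, left, bottom, right):
    """Sum of pixels in the inclusive rectangle, in O(1) via the table."""
    total = ii[bottom][right]
    if top:
        total -= ii[top - 1][right]
    if left:
        total -= ii[bottom][left - 1]
    if top and left:
        total += ii[top - 1][left - 1]
    return total

# A two-rectangle Haar-like feature: dark left half minus light right half.
img = [[1, 1, 9, 9],
       [1, 1, 9, 9]]
ii = integral_image(img)
feature = rect_sum(ii, 0, 0, 1, 1) - rect_sum(ii, 0, 2, 1, 3)
print(feature)  # 4 - 36 = -32
```

The constant-time rectangle sums are what make it feasible to scan thousands of such features over an image at multiple scales.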
One of the most used approaches for feature extraction in facial emotion
recognition is the Haar cascade method. Haar cascades are essentially a collection of
features specifically designed to identify objects within an image. In the context of
facial emotion identification, Haar cascades are utilized to detect important facial
landmarks such as the eyes, nose, mouth, and other prominent features indicative of
emotional expressions.
Once the relevant facial features are identified using Haar cascades, they can be
utilized to train machine learning models, such as Convolutional Neural Networks
(CNNs), to distinguish between different emotional expressions. CNNs are particularly
well-suited for this task, as they can effectively learn and extract intricate patterns and
features from raw image data, enabling them to accurately classify emotions based on
the detected facial features.
One of the key advantages of employing a deep CNN algorithm for emotion
classification lies in its ability to detect both local and global characteristics of facial
expressions. This means that the algorithm can effectively capture subtle nuances in
facial features, such as the precise positioning of the eyes or mouth, while also
recognizing broader patterns in the overall structure of the face. By leveraging these
capabilities, the DCNN can accurately classify the listener's current emotional state
based on the extracted facial features, providing valuable insights into users' emotional
experiences.
Once facial expressions are analyzed and emotions are extracted using advanced computer vision
techniques, the recommendation system employs sophisticated algorithms to map these
emotional cues to appropriate movie genres or musical genres, styles, or specific tracks.
For example, if a user is identified as expressing joy or excitement, the system might
suggest upbeat and cheerful songs or movies with uplifting themes.
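A minimal sketch of such an emotion-to-content mapping might look like the following. Only the happy/sad/neutral music styles echo choices mentioned elsewhere in this report; the remaining entries are purely illustrative assumptions:

```python
# Hedged sketch of the emotion-to-content mapping described above.
# Genre and playlist names are illustrative, not the system's catalog.
EMOTION_TO_CONTENT = {
    "happy":    {"music": "upbeat pop/EDM", "movie_genre": "comedy"},
    "sad":      {"music": "mellow",         "movie_genre": "drama"},
    "neutral":  {"music": "lo-fi",          "movie_genre": "documentary"},
    "angry":    {"music": "calming",        "movie_genre": "feel-good"},
    "surprise": {"music": "eclectic",       "movie_genre": "thriller"},
}

def content_for(emotion):
    """Look up content suggestions for a detected emotion,
    falling back to neutral for anything unrecognized."""
    return EMOTION_TO_CONTENT.get(emotion, EMOTION_TO_CONTENT["neutral"])

print(content_for("happy")["movie_genre"])  # comedy
```

In a fuller system this static table would be replaced by queries against the Spotify and IMDb catalogs, but the lookup structure stays the same.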
In the final phase of the recommendation system's workflow, users are presented
with the curated music tracks sourced from Spotify playlists and movie
recommendations sourced from IMDb. Leveraging the seamless integration with these
platforms, the recommendation system delivers a tailored entertainment experience that
aligns with users' emotional states and preferences.
For music playback, the system accesses the selected tracks from Spotify
playlists directly within the application interface. Users can enjoy uninterrupted
streaming of the recommended songs without the need to switch between multiple apps
or interfaces. By harnessing Spotify's extensive library and personalized playlists, users
are presented with a diverse range of music that complements their current mood and
enhances their listening experience. Simultaneously, users are provided with detailed
information about the recommended movies sourced from IMDb. Leveraging IMDb's
rich database, the recommendation system displays movie titles, synopses, genres,
ratings, cast, and crew information directly within the application interface. Users can
explore additional details about the recommended films and make informed decisions
about their viewing preferences.
Fig 4.8 (a) Describes the Use Case Diagram for movie and music recommendation.
Fig 4.8 (a) describes the use case diagram of the facial emotion-driven
recommendation system: it captures the user's face, analyzes their emotional state, and
recommends content (like music or movies) that aligns with their mood, aiming to
provide a more personalized and emotionally fulfilling user experience.
Class diagrams illustrate the static structure of a system by showing the classes,
attributes, methods, and relationships between them. Classes represent objects in the
system, while attributes and methods represent the properties and behaviors of those
objects. Relationships such as association, aggregation, composition, inheritance,
and dependency depict how classes are connected to each other. Class diagrams
provide a blueprint for designing and understanding the structure of the system's
objects and their relationships.
Fig 4.8 (b) Describes the Class Diagram for movie and music recommendation.
Fig 4.8 (b) depicts a system recommending media content based on a user's
emotion. The user initiates the process by requesting a
recommendation. The system then captures the user's emotion through facial detection
and analysis. Based on the detected emotion (type: String), the system recommends
music or movies. Recommendations include title, artist/genre (for music) and title,
genre, rating (for movies). The user can then choose to play the recommended media.
Fig 4.8 (c) Describes the Activity Diagram for movie and music recommendation.
Fig 4.8 (c) shows the activity diagram of a movie recommendation system
based on facial expressions. It starts by capturing the user's
facial expressions. If a face is detected, the system analyzes the emotions and
recommends movies based on the user's emotional state. If no face is detected or no
emotion is recognized, the system ends the recommendation process. The user can
then select a movie from the recommendations or end the process. If a movie is
selected, the system plays the chosen movie.
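The branching logic of this activity diagram can be sketched as a small driver function; `detect_face`, `analyze_emotion`, and `recommend_movies` are hypothetical stand-ins for the real components:

```python
def recommend_flow(detect_face, analyze_emotion, recommend_movies):
    # returns a list of recommendations, or None if the flow ends early
    face = detect_face()
    if face is None:              # no face detected: end the process
        return None
    emotion = analyze_emotion(face)
    if emotion is None:           # no emotion recognized: end the process
        return None
    return recommend_movies(emotion)

# stubbed components for illustration
recs = recommend_flow(
    detect_face=lambda: "face",
    analyze_emotion=lambda f: "happy",
    recommend_movies=lambda e: [f"{e} movie 1", f"{e} movie 2"],
)
print(recs)  # → ['happy movie 1', 'happy movie 2']
```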
Fig 4.8 (d) Describes the Sequence Diagram for movie and music recommendation.
Fig 4.8 (d) shows the sequence diagram of the facial emotion-driven recommendation system. The user interacts with the system, which captures their facial expression. The system then detects the emotion and retrieves music and movie recommendations based on it. The user can then browse the recommendations and choose to play either the recommended music or movie.
Fig. 4.8 (e) Describes the Collaboration Diagram for movie and music
recommendation.
Fig 4.8 (e) shows the collaboration diagram of the facial emotion-driven recommendation system, which recommends music and movies based on the user's emotional state. The user initiates the process by requesting a recommendation. The facial emotion detection component captures the user's face and analyzes the expression to detect the emotion. The recommendation engine then retrieves the user's emotional profile and queries external APIs (such as Spotify and IMDb) for relevant music tracks or movies based on the detected emotion. Finally, the system presents the recommendations to the user, who can then choose to play the selected music or movie.
Fig. 4.8 (f) Describes the Component Diagram for movie and music
recommendation.
Fig 4.8 (f) shows the component diagram of the facial emotion-driven recommendation system. The user interface allows users to interact with the system by selecting music or movie recommendations. Facial expressions captured through the user interface are sent to the emotion detection component, which analyzes the expression and returns the detected emotion to the recommendation engine. The recommendation engine then retrieves recommendations from external services such as IMDb and Spotify and displays them to the user through the user interface.
Fig 4.8 (g) Describes the Deployment Diagram for movie and music recommendation
Fig 4.8 (g) shows the deployment diagram of the facial emotion-driven recommendation system. The user device, typically running a web browser, sends a request to the recommendation engine. The recommendation engine then communicates with the emotion detection component to analyze the user's facial expressions captured by the user device.
CHAPTER 5
IMPLEMENTATION
# data augmentation for the training images
from tensorflow.keras.preprocessing.image import ImageDataGenerator

train_datagen = ImageDataGenerator(
    zoom_range=0.2,
    shear_range=0.2,
    horizontal_flip=True,
    rescale=1./255
)
train_data = train_datagen.flow_from_directory(
    directory="/content/images/images/train",
    target_size=(224, 224),
    batch_size=32,
)
train_data.class_indices
import matplotlib.pyplot as plt

# helper to preview the first few augmented training images
def plotImages(images, labels):
    count = 0
    for img in images:
        plt.imshow(img)
        plt.axis('off')
        plt.show()
        count += 1
        if count == 10:
            break

# function call to plot the images
t_img, label = train_data.next()
plotImages(t_img, label)
Next, we create the model that learns from the faces in the training dataset and train it:
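The architecture definition itself is not listed at this point in the report; a plausible minimal CNN for 224×224 RGB inputs and the seven emotion classes, matching the generator above, might look like this (layer sizes are illustrative assumptions):

```python
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Input(shape=(224, 224, 3)),
    layers.Conv2D(32, 3, activation='relu'),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation='relu'),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(128, activation='relu'),
    layers.Dropout(0.5),
    layers.Dense(7, activation='softmax'),   # seven emotion classes
])
model.compile(optimizer='adam', loss='categorical_crossentropy',
              metrics=['accuracy'])
```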
hist = model.fit(train_data,
                 steps_per_epoch=10,
                 epochs=100,
                 validation_data=val_data,
                 validation_steps=10)
model.save('final_model.h5')

# sanity-check the saved model on a single validation image
path = "/content/images/images/validation/angry/10052.jpg"
img = load_img(path, target_size=(224, 224))
i = img_to_array(img) / 255
input_arr = np.array([i])      # shape: (1, 224, 224, 3)
pred = np.argmax(model.predict(input_arr))
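Note that `np.argmax` only returns a class index. To recover the emotion name, the generator's `class_indices` mapping can be inverted (a sketch; the dictionary shown is what `flow_from_directory` yields for alphabetically ordered class folders):

```python
import numpy as np

# class_indices as produced by flow_from_directory (alphabetical folder order)
class_indices = {'angry': 0, 'disgust': 1, 'fear': 2, 'happy': 3,
                 'neutral': 4, 'sad': 5, 'surprise': 6}
index_to_label = {v: k for k, v in class_indices.items()}

probs = np.array([0.7, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05])  # mock prediction
pred = int(np.argmax(probs))
print(index_to_label[pred])  # → angry
```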
plt.imshow(input_arr[0])
plt.title("input image")
plt.show()
CHAPTER 6
CODING SPECIFICATIONS
import os
import cv2
import numpy as np
import tensorflow as tf
import statistics as st
from flask import Flask, render_template

app = Flask(__name__)

@app.route("/")
def home():
    return render_template("index1.html")
@app.route("/camera")   # route path assumed
def camera():
    i = 0
    GR_dict = {0: (0, 255, 0), 1: (0, 0, 255)}
    # class order assumed to follow flow_from_directory's alphabetical indexing
    emotions = ('angry', 'disgust', 'fear', 'happy', 'neutral', 'sad', 'surprise')
    model = tf.keras.models.load_model('final_model.h5')
    face_cascade = cv2.CascadeClassifier('haarcascade_frontalface_default.xml')
    output = []
    cap = cv2.VideoCapture(0)
    while i <= 30:
        ret, img = cap.read()               # grab a frame from the webcam
        if not ret:
            break
        faces = face_cascade.detectMultiScale(img, 1.05, 5)
        for (x, y, w, h) in faces:
            face_img = img[y:y+h, x:x+w]
            resized = cv2.resize(face_img, (224, 224))
            reshaped = resized.reshape(1, 224, 224, 3) / 255
            predictions = model.predict(reshaped)
            max_index = np.argmax(predictions[0])
            predicted_emotion = emotions[max_index]
            output.append(predicted_emotion)
            # draw the face box and a filled label strip above it
            cv2.rectangle(img, (x, y), (x+w, y+h), GR_dict[1], 2)
            cv2.rectangle(img, (x, y-40), (x+w, y), GR_dict[1], -1)
        i = i + 1
        cv2.imshow('LIVE', img)
        key = cv2.waitKey(1)
        if key == 27:                       # Esc stops the capture early
            break
    cap.release()
    cv2.destroyAllWindows()
    print(output)
    # the most frequent emotion across the sampled frames is the final result
    final_output1 = st.mode(output)
    return render_template("buttons.html", final_output=final_output1)
# route paths below are assumed to mirror the template names
@app.route("/buttons")
def buttons():
    return render_template("buttons.html")

@app.route("/moviesSurprise")
def moviesSurprise():
    return render_template("moviesSurprise.html")

@app.route("/moviesAngry")
def moviesAngry():
    return render_template("moviesAngry.html")

@app.route("/moviesSad")
def moviesSad():
    return render_template("moviesSad.html")

@app.route("/moviesDisgust")
def moviesDisgust():
    return render_template("moviesDisgust.html")

@app.route("/moviesHappy")
def moviesHappy():
    return render_template("moviesHappy.html")

@app.route("/moviesFear")
def moviesFear():
    return render_template("moviesFear.html")

@app.route("/moviesNeutral")
def moviesNeutral():
    return render_template("moviesNeutral.html")

@app.route("/songsSurprise")
def songsSurprise():
    return render_template("songsSurprise.html")

@app.route("/songsAngry")
def songsAngry():
    return render_template("songsAngry.html")

@app.route("/songsSad")
def songsSad():
    return render_template("songsSad.html")

@app.route("/songsDisgust")
def songsDisgust():
    return render_template("songsDisgust.html")

@app.route("/songsHappy")
def songsHappy():
    return render_template("songsHappy.html")

@app.route("/songsFear")
def songsFear():
    return render_template("songsFear.html")

@app.route("/songsNeutral")
def songsNeutral():
    return render_template("songsNeutral.html")

@app.route("/join")
def join():
    return render_template("join_page.html")

if __name__ == "__main__":
    app.run(debug=True)
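The per-emotion routes above are nearly identical; one way they could be collapsed (a design sketch, not the project's code) is to derive the template name from the detected emotion:

```python
EMOTIONS = ('angry', 'disgust', 'fear', 'happy', 'neutral', 'sad', 'surprise')

def template_for(kind, emotion):
    # kind is "movies" or "songs"; e.g. ("movies", "happy") -> "moviesHappy.html"
    if emotion not in EMOTIONS:
        raise ValueError(f"unknown emotion: {emotion}")
    return f"{kind}{emotion.capitalize()}.html"

print(template_for("songs", "neutral"))  # → songsNeutral.html
```

A single parameterized route using this helper would also avoid bugs such as one emotion's page accidentally pointing at another's template.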
CHAPTER 7
7.1 Testing
functionality of each component and identify any bugs or issues early in the
development process.
usability testing, performance testing, and validation testing, developers can ensure the
robustness, reliability, and usability of the facial emotion-driven music and movie
recommendation system, ultimately delivering a high-quality and engaging user
experience.
7.2 Validation
system's algorithms and user-centric validation of the overall user experience. Here are
the types of validation used:
SCREENSHOTS
Once the link is opened, an interface appears inviting the user to click "Let's Start" to activate the live camera and unlock a world of immersive multimedia experiences. With just a glance, the facial emotion analysis technology detects the user's current mood.
Fig 8.1 (a) shows the landing screen. In the center, a sleek interface displays the title "Find Your Next Vibe." Below it, rows of captivating movie posters and music album covers entice the user to explore. On the left, user profiles allow the experience to be personalized. For the indecisive, a prominent "Let's Start" button beckons the user to discover the perfect movie and music pairing.
The system detects the emotion, as shown below, with the help of the Haar Cascade classifier for frontal faces. This algorithm is a key component of the emotion detection process: it identifies and localizes frontal faces within images or video streams, providing the foundational step for subsequent emotion recognition.
The model was trained on a diverse dataset of facial expressions representing seven primary emotions: anger, sadness, disgust, happiness, surprise, neutrality, and fear. Through meticulous training and validation, the model undergoes rigorous refinement to ensure it can accurately predict users' emotions. This involves iterative adjustments to the model's parameters and architecture, as well as thorough evaluation against validation datasets to assess its performance across different scenarios and variations in facial expressions.
Figure 8.1 (b) illustrates the Emotion Detection interface focused on detecting
sadness. This interface serves as a crucial component within our facial emotion-driven
music and movie recommendation system, enabling real-time analysis of users' facial
expressions to identify the emotion of sadness.
Fig 8.1 (c) illustrates the Emotion Detection interface focused on detecting anger. This interface serves as a crucial component within our facial emotion-driven music and movie recommendation system, enabling real-time analysis of users' facial expressions to identify the emotion of anger.
Figure 8.1 (d) illustrates the Emotion Detection interface focused on detecting
happiness. This interface serves as a crucial component within our facial emotion-
driven music and movie recommendation system, enabling real-time analysis of users'
facial expressions to identify the emotion of happiness.
Upon completion of the training process, the model is equipped with the ability
to analyze facial expressions captured in real-time through the Haar Cascade Classifier
Algorithm. By leveraging the knowledge acquired during training, the model can
accurately identify and classify the emotional states of users based on their facial
expressions.
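In practice, the live pipeline samples a short window of frames and takes the most frequent prediction as the final label, which smooths out single-frame misclassifications. A minimal sketch of that majority vote (the per-frame predictions here are hypothetical):

```python
import statistics as st

# hypothetical per-frame predictions collected over a short capture window
frame_predictions = ["happy", "happy", "neutral", "happy", "surprise", "happy"]

# the most frequent emotion across sampled frames becomes the final label
final_emotion = st.mode(frame_predictions)
print(final_emotion)  # → happy
```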
recommendations sourced from IMDb, each carefully selected to match the user's
detected emotional state. From heartwarming comedies to gripping thrillers, the
interface presents a diverse array of cinematic experiences tailored to the user's mood.
The user interface for both music and movies is designed with simplicity and
ease of use in mind, allowing users to effortlessly navigate through the recommended
content and explore additional options. Interactive features such as search functionality,
genre filters, and personalized recommendations enhance the user experience, enabling
users to discover new and relevant content aligned with their emotional states.
Furthermore, the interface may incorporate user feedback mechanisms to refine
recommendations over time, ensuring that the system continuously adapts to users'
evolving preferences and emotional states. Through this process, users can enjoy a more
immersive and emotionally resonant entertainment experience, enriched by content
recommendations that align with their current moods and preferences.
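One simple way such a feedback loop could be sketched is to accumulate per-genre like/skip signals and re-rank candidates by them; all names here are hypothetical illustrations, not the project's implementation:

```python
from collections import Counter

# accumulated like/skip feedback per genre
feedback = Counter()

def record_feedback(genre, liked):
    # likes boost a genre's weight, skips reduce it
    feedback[genre] += 1 if liked else -1

def rerank(candidates):
    # candidates: list of (title, genre); sort by accumulated feedback
    return sorted(candidates, key=lambda c: feedback[c[1]], reverse=True)

record_feedback("comedy", True)
record_feedback("thriller", False)
print(rerank([("Movie A", "thriller"), ("Movie B", "comedy")]))
# → [('Movie B', 'comedy'), ('Movie A', 'thriller')]
```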
Fig 8.1 (e) Describes the Selection interface for detected emotion.
Fig 8.1 (e) depicts the Selection interface for the detected emotion within our facial emotion-driven music and movie recommendation system. This interface serves as a pivotal point where users are presented with curated recommendations tailored to their detected emotional state. By leveraging advanced emotion detection algorithms and user preferences, the system offers a selection of music tracks and movie titles aligned with the user's current emotional state.
Fig 8.1 (f) shows the interface displaying a comprehensive list of playlists sourced from Spotify, an integral component of our facial emotion-driven music recommendation system. This feature provides users with a diverse selection of curated playlists tailored to their emotional states and preferences.
Fig 8.1 (g) Describes the Spotify interface for playing songs.
Fig 8.1 (g) illustrates the Spotify interface dedicated to playing songs within
our facial emotion-driven music recommendation system. This interface serves as a
pivotal component, seamlessly integrating with Spotify's extensive music library to
extensive databases, the system provides users with a wide range of music and movie
options tailored to their emotional states and preferences. Whether users seek to uplift
their spirits with upbeat music or unwind with a captivating film, the recommendation
system ensures they have access to relevant and engaging content. This seamless
integration of music and movie recommendations enhances user satisfaction and
engagement, fostering a more enjoyable and personalized multimedia consumption
experience.
Conclusion
The project has also tackled various challenges inherent in developing such a
system, including ensuring data quality and diversity, enabling real-time processing,
addressing privacy considerations, and seamlessly integrating with external platforms.
Through interdisciplinary collaboration and innovative solutions, these obstacles have
been overcome, laying the groundwork for a robust and scalable recommendation
system that prioritizes user satisfaction and engagement.
Future Work
Future work for the facial emotion-driven music and movie recommendation project
could involve several avenues of research and development to enhance the system's
capabilities, user experience, and overall effectiveness. Here are some potential areas
for future exploration:
Another avenue for future exploration lies in dynamic content adaptation: developing mechanisms for dynamically adjusting content recommendations in response to real-time changes in users' emotional states.
REFERENCES
[1] Ramya Ramanathan, Radha Kumaran, Ram Rohan R, Rajat Gupta, and Vishalakshi
Prabhu, an intelligent music player based on emotion recognition.
[2] Shlok Gilda, Husain Zafar, Chintan Soni, Kshitija Waghurdekar, Smart music player integrating facial emotion recognition and music mood recommendation, India, (IEEE), 2017.
[3] Deger Ayata, Yusuf Yaslan and Mustafa E. Kamasak, Emotion-based music
recommendation system using wearable physiological sensors, IEEE transactions on
consumer electronics, vol. 14.
[4] Ahlam Alrihail, Alaa Alsaedi, Kholood Albalawi, Liyakathunisa Syed, Music
recommender system for users based on emotion detection through facial features.
[5] Research Prediction Competition, Challenges in representation learning: facial
expression recognition challenges, Learn facial expression from an image,
(KAGGLE).
[6] Preema J.S, Rajashree, Sahana M, Savitri H, Review on facial expression-based music
player, International Journal of Engineering Research & Technology (IJERT), ISSN-
2278-0181, Volume 6, Issue 15, 2018.
[7] AYUSH Guidel, Birat Sapkota, Krishna Sapkota, Music recommendation by facial
analysis, February 17, 2020.
[8] CH. Sadhvika, Gutta. Abigna, P. Srinivasreddy, Emotion-based music recommendation system, (JETIR) Volume 7, Issue 4, April 2020.
[9] Zhuwei Qin, Fuxun Yu, Chenchen Liu, Xiang Chen. A survey of convolutional neural
network visualization methods. Mathematical Foundations of Computing, May 2018.
[10] Ahmed Hamdy AlDeeb, Emotion- Based Music Player Emotion Detection from Live
Camera, ResearchGate, June 2019.
[11] Frans Norden and Filip von Reis Marlevi, A Comparative Analysis of Machine
Learning Algorithms in Binary Facial Expression Recognition, TRITA-EECS-EX-
2019:143.
[12] P. Singhal, P. Singh and A. Vidyarthi (2020) Interpretation and localization of Thorax diseases using DCNN in Chest X-Ray. Journal of Informatics Electrical and Electronics Engineering, 1(1), 1, 1-7.
Department of CSBS 62
Moodify: A Movie & Music Recommendation System Using Emotion Detection
[13] M. Vinny, P. Singh (2020) Review on the Artificial Brain Technology: BlueBrain.
Journal of Informatics Electrical and Electronics Engineering,1(1), 3, 1-11.
[14] N. K. Rao, N. P. Challa, S. S. Chakravarthi and R. Ranjana, "Movie
Recommendation System using Machine Learning," 2022.
[15] Chillara, S., Kavitha, A. S., Neginhal, S. A., Haldia, S., & Vidyullatha, K. S. (2019).
Music genre classification using machine learning algorithms: a comparison.
[16] J. Singh and V. K. Bohat, "Neural Network Model for Recommending Music Based
on Music Genres," 2021 International Conference on Computer Communication and
Informatics (ICCCI), Coimbatore, India, 2021, pp. 1-6, doi:
10.1109/ICCCI50826.2021.9402621.
[17] Reddy, S. R. S., Nalluri, S., Kunisetti, S., Ashok, S., & Venkatesh, B. (2019). Content-based movie recommendation system using genre correlation. In Smart Intelligent Computing and Applications: Proceedings of the Second International Conference on SCI 2018, Volume 2 (pp. 391-397). Springer Singapore.
[18] S. Chawla, S. Gupta and R. Majumdar, "Movie Recommendation Models Using
Machine Learning," 2021 5th International Conference on Information Systems and
Computer Networks (ISCON), Mathura, India, 2021, pp. 1-6, doi:
10.1109/ISCON52037.2021.9702472.
[19] J. Singh and V. K. Bohat, "Neural Network Model for Recommending Music Based on Music Genres," 2021 International Conference on Computer Communication and Informatics (ICCCI), Coimbatore, India, 2021, pp. 1-6, doi: 10.1109/ICCCI50826.2021.9402621.
[20] A. Singh and S. Gupta, "Recommendation System Algorithms For Music Therapy," 2023 13th International Conference on Cloud Computing, Data Science & Engineering (Confluence), Noida, India, 2023, pp. 138-143, doi: 10.1109/Confluence56041.2023.10048894.