Abstract— A user's emotion can be detected from his or her facial expression, which can be derived from the live feed of the system's camera. Machine learning (ML) offers a variety of strategies for detecting human emotions. Our proposed system constructs a music player that analyses emotions in real time and recommends songs based on those emotions. This adds a new feature to the conventional music player apps that come pre-installed on our phones. Customer satisfaction is a significant advantage of emotion detection. This system's goal is to study the user's face, predict their emotion, and play songs that fit that emotion.

Keywords: Machine Learning, Emotion Detection
1. Introduction

With the advent of the information era, machine learning is extensively used in image recognition, image processing, and particularly facial expression recognition. In the area of human-computer interaction, face recognition has emerged as a research hotspot, but there are still restrictions on how image processing results can be used. Image research frequently focuses on enhancing recognition accuracy, and the image data lacks secondary processing, i.e., the image information has not been fully and effectively utilized in real production and daily life. In this work, an expression detection model based on convolutional neural networks is created and trained using a deep learning technique. When the outcomes of image processing are combined with a music recommendation algorithm, the music that best suits the person's mood is suggested. Playlists from major music websites and manually annotated tracks are crawled to generate music data sets. As a result, the outputs of image processing gain a suitably broader range of applications.

2. Background and Motivation

2.1 Background

The muscles of the face can be moved in one or more ways, or held in position, to convey a facial expression. These motions communicate a person's emotional state. Since a person has control over his or her facial expression and can choose how to express it, a facial expression can be regarded as a voluntary action.
The "Emotion Based Music Player" is a tool designed to recognize a person's emotions and play music selections in line with those emotions. The person first displays his feeling through his facial expression. The device then assesses the facial expression and interprets the mood from it. After determining the person's emotion, the music player plays songs that fit the person's current mood. The system only analyzes facial expressions; it does not take head or face movement into account. In this work we first used a CNN to obtain the required trained model and predict the user's emotion. The model is then used on Android, with libraries such as TensorFlow Lite, for prediction [2].
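The prediction step described above reduces to turning the model's output scores into an emotion label and then into a song choice. A minimal sketch in plain Python, where the emotion labels, scores, and song titles are illustrative assumptions rather than the paper's actual data:

```python
import math

# Assumed emotion classes; a real model defines its own label order.
EMOTIONS = ["happy", "sad", "angry", "neutral"]

# Illustrative playlists; real titles would come from the app's music library.
PLAYLISTS = {
    "happy": ["Upbeat Song A", "Upbeat Song B"],
    "sad": ["Mellow Song A"],
    "angry": ["Calming Song A"],
    "neutral": ["Background Song A"],
}

def softmax(scores):
    """Convert raw model outputs (logits) into probabilities."""
    exps = [math.exp(s - max(scores)) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def predict_emotion(scores):
    """Pick the emotion with the highest probability."""
    probs = softmax(scores)
    return EMOTIONS[probs.index(max(probs))]

def recommend(scores):
    """Map the predicted emotion to its playlist."""
    return PLAYLISTS[predict_emotion(scores)]

# Example: logits favouring the first class ("happy").
print(recommend([2.0, 0.1, -1.0, 0.5]))  # ['Upbeat Song A', 'Upbeat Song B']
```

On-device, the scores would come from the TensorFlow Lite interpreter's output tensor; the lookup step is the same.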
2.2 Problem Statement and Motivation

Old-school music devices required the user to actively browse the selection and choose tracks in accordance with his mood. Thanks to ever-increasing developments in multimedia and technology, numerous audio players have since been created with features like fast forward, rewind, variable playing speed, local playback, streaming playback, volume modulation, genre categorization, etc. These features may fulfill the user's fundamental needs, but the user still has to actively browse through the song playlist and choose songs based on their mood and behavior at the time.

The main idea behind this project is to automatically play music depending on the user's emotions. It attempts to play music that the user prefers based on the feelings detected. In the current system, the user must select songs manually, because songs played at random may not suit the user's mood: the user has to categorize the songs into different emotions and then manually select one to play.

An intelligent audio player should respond to the user's tastes. Without requiring much effort in song selection and reorganization, a music device should assist users in instantly organizing and playing the songs. The Emotion-Based Player gives all music fans access to a superior platform by automating song selection and regularly updating playlists. Users can now arrange tracks according to their emotions and play them.

3. Methodology

3.1 Mood Detection Module

● Face Detection

Face detection is an essential feature in many mobile applications, such as social media, security systems, and user authentication. It involves locating and extracting facial features from an image or video frame. We used the OpenCV pre-trained model for face detection and captured images from the mobile device's camera.
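OpenCV's stock pre-trained face detector is a Haar cascade (Viola–Jones), which scores rectangular features in constant time using an integral image. The pure-Python sketch below illustrates only that underlying idea; it is not the paper's implementation, and a real app would simply call the detector through OpenCV's API.

```python
def integral_image(img):
    """Summed-area table: ii[y][x] = sum of img[0..y-1][0..x-1]."""
    h, w = len(img), len(img[0])
    ii = [[0] * (w + 1) for _ in range(h + 1)]
    for y in range(h):
        row_sum = 0
        for x in range(w):
            row_sum += img[y][x]
            ii[y + 1][x + 1] = ii[y][x + 1] + row_sum
    return ii

def rect_sum(ii, x, y, w, h):
    """Sum of pixels in the w x h rectangle with top-left corner (x, y)."""
    return ii[y + h][x + w] - ii[y][x + w] - ii[y + h][x] + ii[y][x]

# A Haar-like feature is the difference between adjacent rectangle sums,
# e.g. a darker eye region contrasted with a brighter cheek region.
img = [[1, 2, 3],
       [4, 5, 6],
       [7, 8, 9]]
ii = integral_image(img)
print(rect_sum(ii, 0, 0, 2, 2))  # 1+2+4+5 = 12
```

Because each rectangle sum needs only four lookups, the cascade can scan every window of the frame quickly, which is why this classical detector remains practical on mobile hardware.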
We chose a pre-trained machine learning model for face detection, such as those shipped with OpenCV or TensorFlow Lite, and integrated it into the Flutter application by adding it as a dependency in the pubspec.yaml file and importing it into the Dart code. The pre-trained model detects faces in the captured image or video frame: the image data is passed through the model, which returns the locations of the detected faces.
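The dependency step above might look as follows in pubspec.yaml. The package names and versions here (`camera`, `tflite_flutter`) are illustrative assumptions, not necessarily the exact ones used in this work:

```yaml
dependencies:
  flutter:
    sdk: flutter
  camera: ^0.10.0         # capture frames from the device camera (assumed package)
  tflite_flutter: ^0.10.0 # run the TensorFlow Lite model on-device (assumed package)

flutter:
  assets:
    - assets/emotion_model.tflite  # hypothetical path to the trained model
```

After `flutter pub get`, the packages can be imported in Dart code and the bundled model loaded from the assets at runtime.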
● Emotion Detection / Model Generation

Face detection and emotion detection are essential features in many mobile applications, such as social media, security systems, and mental health apps.

We created a machine learning model for emotion detection and modified the existing face detection module in the Flutter application to incorporate this additional trained model, importing it and integrating it into the existing code.

3.2 Diagram

● System Architecture
Fig 1. System Architecture Design [1]
● Block Diagram