
Moodify – An Emotion-Based Music Player

Siddiqui Mamoon
Dept. of Computer Engineering
M.H. Saboo Siddik Polytechnic
Byculla, Mumbai, India
siddiquimamoon2004@gmail.com

Siddiqui Saad
Dept. of Computer Engineering
M.H. Saboo Siddik Polytechnic
Byculla, Mumbai, India
siddiquisaad529@gmail.com

Waghoo Rehan
Dept. of Computer Engineering
M.H. Saboo Siddik Polytechnic
Byculla, Mumbai, India
waghoorehan03@gmail.com

Khobragade Ashish
Dept. of Computer Engineering
M.H. Saboo Siddik Polytechnic
Byculla, Mumbai, India
ashishkhobragade450@gmail.com

Zaibunnisa L.H. Malik
Research Scholar, MCA Department
Sardar Patel Institute of Technology
Affiliated to Mumbai University
Mumbai, India
zebamalik@yahoo.com

Salima Khatib
I/C HOD AN
M.H. Saboo Siddik Polytechnic
Byculla, Mumbai, India
khatibsalima@gmail.com

Abstract- A user's emotion can be detected from his or her facial expression, and these expressions can be derived from the live feed of the system's camera. Machine learning (ML) offers a variety of strategies for detecting human emotions. With our proposed system, a music player is constructed that analyses emotions in real time and recommends songs based on those emotions. This adds a new feature to the conventional music player apps that come pre-installed on our phones. Improved user satisfaction is a significant advantage of emotion detection. The system's goal is to study the user's face, predict their emotion, and play songs that fit that emotion.

Keywords: Machine Learning, Emotion Detection

1. Introduction

Machine learning is extensively used in image recognition, image processing, and particularly facial expression recognition with the advent of the information era. In the area of human-computer interaction, face recognition has emerged as a research hotspot, but there are still restrictions on how image processing results can be used. Image research frequently focuses on enhancing recognition accuracy, while the image data lacks secondary processing; that is, the image information has not been fully and effectively utilized in real production and everyday life. In this work, an expression detection model based on a convolutional neural network (CNN) is created and trained using deep learning. When the outcomes of image processing are combined with a music recommendation algorithm, the music that best suits the person's mood is suggested. Playlists from major music websites and manually annotated tracks are crawled to generate music data sets. In this way, the results of image processing gain a suitably broader range of applications.

2. Background and Motivation

2.1 Background

The muscles of the face can be moved in one or more ways, or even held in position, to convey a facial expression. These motions communicate a person's emotional state. Since a person has control over his or her facial expression and can choose how to express it, facial expression can be regarded as a voluntary action.

The "Emotion Based Music Player" is a tool designed to recognize a person's emotions and play music selections in line with those emotions. The person first displays his feeling through his facial expression. The device then assesses the facial expression and interprets the mood from it. After determining the person's emotion, the music player plays songs that fit the person's current mood. The system analyzes only facial expressions; it does not take head or face movement into account. In this work we first used a CNN to obtain the required trained model and predict the user's emotion. The model is then used on Android by including libraries such as TensorFlow Lite for prediction [2].

2.2 Problem Statement and Motivation

Old-school music devices required the user to actively browse the selection and choose tracks in accordance with his mood. Due to the ever-increasing developments in multimedia and technology, numerous audio players have since been created with features like fast forward, rewind, variable playback speed, local playback, streaming playback, volume modulation, genre categorization, and so on. These features may fulfill the user's fundamental needs, but the user still has to actively browse through the song playlist and choose songs based on their mood and behavior at the time.

The main idea behind this project is to automatically play music depending on the user's emotions. It attempts to play music that the user prefers based on the emotions detected. In the current approach, the user must manually select the songs, because songs played at random may not suit the user's mood; the user must first categorize the songs into different emotions and then manually select one to play.

An intelligent audio player should respond to the user's tastes. Without requiring much effort in song selection and reorganization, a music device should assist users in instantly organizing and playing songs. All music fans have access to a superior platform thanks to the Emotion-Based Player, which automates song selection and regularly updates playlists. Users can now arrange tracks according to their emotions and play them.

3. Methodology

3.1 Mood Detection Module

● Face Detection

Face detection is an essential feature in many mobile applications, such as social media, security systems, and user authentication. It involves locating and extracting facial features from an image or video frame. We used an OpenCV pre-trained model for face detection and captured images from the mobile device's camera.

We chose a pre-trained machine learning model for face detection, such as OpenCV or TensorFlow Lite, and integrated it into the Flutter application by adding it as a dependency in the pubspec.yaml file and importing it into the Dart code. The pre-trained model detects faces in the captured image or video frame: the image data is passed through the model, which returns the locations of the detected faces.
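The face-detection step described above can be sketched as follows. This is a minimal sketch, assuming OpenCV's bundled Haar-cascade frontal-face model; the paper names OpenCV but not a specific detector, and the helper names here are illustrative.

```python
# Sketch of the face-detection step, assuming OpenCV's bundled
# Haar-cascade frontal-face model (an assumption; the paper does not
# name a specific detector). OpenCV is guarded so the pure-Python
# helpers below remain usable without it.
try:
    import cv2  # optional runtime dependency for this sketch
except ImportError:
    cv2 = None


def largest_face(boxes):
    """Pick the biggest (x, y, w, h) box - a simple heuristic for the
    'main' face when several faces are detected in one frame."""
    if not boxes:
        return None
    return max(boxes, key=lambda b: b[2] * b[3])


def detect_faces(frame_gray):
    """Return a list of (x, y, w, h) face boxes for a grayscale frame."""
    if cv2 is None:
        raise RuntimeError("OpenCV (cv2) is required for detection")
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    return list(cascade.detectMultiScale(
        frame_gray, scaleFactor=1.1, minNeighbors=5))
```

The selected face crop would then be resized to the emotion model's input size before classification.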
● Emotion Detection / Model Generation

Face detection and emotion detection are essential features in many mobile applications, such as social media, security systems, and mental health apps.

We created a machine learning model for emotion detection and modified the existing face detection module in the Flutter application to incorporate the additional trained model, importing it and integrating it into the existing code.
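The trained model's output must be mapped back to an emotion label before a song can be chosen. The paper does not list its emotion classes, so the seven FER-2013-style labels below, and the `predict_emotion` helper, are assumptions for illustration.

```python
# Post-processing sketch for the emotion model's output. The seven
# FER-2013-style labels and the helper name are assumptions; the paper
# does not list its classes.
EMOTIONS = ["angry", "disgust", "fear", "happy", "sad", "surprise", "neutral"]


def predict_emotion(probabilities):
    """Map the CNN's output probability vector to an emotion label."""
    if len(probabilities) != len(EMOTIONS):
        raise ValueError("expected one probability per emotion class")
    best = max(range(len(probabilities)), key=lambda i: probabilities[i])
    return EMOTIONS[best]
```

On-device, `probabilities` would come from a TensorFlow Lite interpreter's output tensor after invoking the converted model.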
3.2 Diagram

● System Architecture
Fig 1. System Architecture Design [1]
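The architecture in Fig 1 can be read as a four-stage pipeline: capture a frame, detect the face, classify the emotion, and pick a song. A minimal sketch with injected, stubbed components (every name here is illustrative, not from the paper):

```python
# Minimal sketch of the Fig 1 pipeline: capture -> detect face ->
# classify emotion -> recommend a song. Components are injected as plain
# callables so the control flow can be shown without camera or model
# code; all names are illustrative.
def run_pipeline(capture_frame, detect_face, classify_emotion, pick_song):
    frame = capture_frame()
    face = detect_face(frame)
    if face is None:
        return None  # no face in view -> nothing to recommend
    emotion = classify_emotion(face)
    return pick_song(emotion)


# Stub components standing in for the camera, detector, CNN and player.
song = run_pipeline(
    capture_frame=lambda: "frame",
    detect_face=lambda frame: "face-crop",
    classify_emotion=lambda face: "happy",
    pick_song=lambda emotion: f"{emotion}-playlist-track-1",
)
```

Keeping the stages as separate callables mirrors the module split in Section 3.1 and makes each stage replaceable and testable on its own.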
● Block Diagram

Fig 2. Block Diagram

4. Expected Result

Fig 3. Expected Page for User Mood Selection

Fig 4. Expected O/P of Music Player

Fig 5. Expected Result for Mood Selection

5. Future Scope

The project's primary goal is to identify emotions and play the appropriate music in accordance with them. Future milestones include expanding the size of the database, adding additional emotions for user convenience, and improving the accuracy of emotion detection. The short-term objectives are to release updates, improve the UI of the app, make it more user-friendly, and add distinct music for a better user experience.
6. Conclusion

This paper proposes an Android application that plays music depending on emotions. The application's goal is to make music recommendations based on the user's feelings. The Face API is used to evaluate and categorize the emotions [2]. After taking the user's mood into account, the application recommends appropriate songs based on the user's preference mode. The app will suggest upbeat music if the user chooses the positive option; in contrast, because users may want to express their anger, sorrow, or tension, it will suggest tracks with a darker tone for the negative option. We intend to investigate additional methods to remove the dependence on an uncontrolled environment in order to enhance the performance of the application. Additionally, to accommodate the preferences of more people, we intend to expand the music database.
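The mood-to-music mapping the conclusion describes, upbeat tracks for a positive mood and calmer tracks otherwise, can be sketched as a simple lookup. The playlist contents and function name below are illustrative assumptions, not the paper's actual catalogue.

```python
# Sketch of the mood-to-music mapping described in the conclusion.
# Playlist contents and names are illustrative assumptions.
import random

PLAYLISTS = {
    "happy":   ["upbeat_song_1.mp3", "upbeat_song_2.mp3"],
    "sad":     ["mellow_song_1.mp3", "mellow_song_2.mp3"],
    "angry":   ["calming_song_1.mp3"],
    "neutral": ["ambient_song_1.mp3"],
}


def recommend(emotion, rng=random):
    """Return a track for the detected emotion, defaulting to neutral
    when the emotion has no dedicated playlist."""
    tracks = PLAYLISTS.get(emotion, PLAYLISTS["neutral"])
    return rng.choice(tracks)
```

A larger music database, as proposed in the future scope, would replace these in-memory lists with the crawled and annotated data sets mentioned in the introduction.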
7. References

[1] Mahadik, Bharathi, Milgir, Kavathekar, Patel (2021), "Mood based Music Recommendation System", IJERT, IJERTV10IS060253.

[2] Azure.microsoft.com (2019). Face API - Facial Recognition Software, Microsoft Azure. [online] Available at: https://azure.microsoft.com/enus/services/cognitive-services/face/ [Accessed 4 Jan. 2020].
