INTERNATIONAL JOURNAL FOR RESEARCH & DEVELOPMENT IN TECHNOLOGY | Volume-15, Issue-5 (May-21) | ISSN (O): 2349-3585

FACE EMOTION BASED MUSIC PLAYER SYSTEM


__________________________________________________________________________________________
Vishal Savane¹, Rutuja Palamkar², Amir Barmare³, Misba Shaikh⁴
¹,²,³,⁴Department of Computer Engineering, Indira College of Engineering and Management, Pune, India.

ABSTRACT: This paper proposes an intelligent agent that sorts a music collection according to the emotions expressed by each song and then recommends a playlist to the user based on his or her current mood. The user's local music collection is initially grouped by the emotion each album elicits. Whenever the user wants to create a mood-based playlist, they take a photo of themselves at that precise moment. The user's emotion is recognized by applying face detection and emotion recognition techniques to this image, and the user is then given a playlist of music that best suits this emotion. The main goal of this paper is to create a low-cost music player that automatically builds a sentiment-aware playlist based on the user's emotional state. The program is designed to use as few machine resources as possible. The emotion module determines the user's emotion, while the music classification module extracts the relevant and important audio information from each recording.

KEYWORDS: face emotion, pre-processing, classifier algorithm, feature extraction (CNN), OpenCV.

INTRODUCTION
Emotion recognition is a branch of artificial intelligence that is becoming increasingly important for automating processes that are inherently time-consuming to complete manually. Using emotions to determine a person's mental state is an integral part of making efficient automated decisions. We investigate this from the standpoint of making personalized music recommendations based on a person's mood as determined by their facial expressions. To achieve the desired result, the proposed method goes through several steps, and this paper describes the specific steps involved in identifying the user's expression and playing a music track that matches the recognized expression. Most music connoisseurs have large music collections that are frequently organized solely by artist, song, genre, and number of times played. As a result, users are left with the difficult task of creating mood-based playlists by hand. With larger music libraries the job becomes more complicated, and automating the process will save many users the time and effort of doing it manually, while also enhancing their overall experience and enabling them to appreciate the music more. To recommend songs to the user, the recommendation module incorporates the results of the emotion and music classification modules. This system outperforms current systems in terms of accuracy and efficiency.

OBJECTIVE
Emotion recognition is becoming increasingly important for automating processes that are inherently time-consuming to complete manually. Identifying an individual's mental state from their expressed feelings is a vital part of making efficient automatic decisions that are ideally suited to the person in question, and it can be useful in a number of situations.

METHODOLOGY
The process of classifying music starts with gathering all of the music files that are locally accessible on the device through the music database. The database can be queried for the location of all .mp3 files, which can then be passed to Python's glob package, which recursively finds every music file beneath that location (a short Python sketch of this scan appears after survey entry [1] below). The same mechanism can be used to refresh the database automatically when new music is added.

RELATED WORK OR LITERATURE SURVEY
[1] "An Intelligent Music Player based on Emotion Recognition"
Authors: Ramya Ramanathan, Radha Kumaran
This work frames emotion recognition as a branch of artificial intelligence that is becoming increasingly important for automating processes that are inherently time-consuming to complete manually.
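
As a concrete illustration of the file-scanning step described under METHODOLOGY, here is a minimal Python sketch built around the glob package the paper names. The music folder and the printed summary are illustrative assumptions rather than details taken from the paper.

    import glob
    import os

    def scan_music_library(root_dir):
        """Recursively collect the paths of all .mp3 files under root_dir."""
        # '**' together with recursive=True makes glob walk every subdirectory.
        pattern = os.path.join(root_dir, "**", "*.mp3")
        return glob.glob(pattern, recursive=True)

    if __name__ == "__main__":
        # "~/Music" is an illustrative location; any local folder works.
        tracks = scan_music_library(os.path.expanduser("~/Music"))
        print(f"Found {len(tracks)} tracks")
        # Each path found here can be written back to the music database,
        # which is how the library refreshes itself when new songs appear.

Rerunning the scan whenever the player starts is the simplest way to realize the automatic refresh the methodology mentions.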


[2] "Facial Expression Based Music Player"
Authors: Değer Ayata, Yusuf Yaslan and Mustafa E. Kamasak
The expression is determined using the mean of the training dataset's eigenfaces. Happy, sad, fear, surprise, rage, disgust, and neutral expressions are labelled on the training images, which correspond to different distances from the mean picture.

[3] "Smart Music Player Integrating Facial Emotion Recognition and Music Mood Recommendation"
Authors: Shlok Gilda, Husain Zafar
Nowadays, people tend to experience increasing stress because of the poor economy, high living expenses, and so on. Listening to music is a key activity that helps reduce stress. However, it may be unhelpful if the music does not suit the listener's current emotion, and no existing music player selects songs based on the user's emotion. To solve this problem, the paper proposes an emotion-based music player that can suggest songs based on the user's emotion: sad, happy, neutral, or angry.

[4] "An Intelligent Music Player based on Emotion Recognition"
Authors: Ramya Ramanathan, Rohan
The authors acknowledge that there is still space for growth: it will be important to see how the system works when all seven basic emotions are considered, and additional songs from various languages and regions could be included to strengthen the recommendation system.

SYSTEM REQUIREMENTS
DATABASE REQUIREMENTS
MySQL: MySQL is an open-source relational database management system (RDBMS). Its name is a combination of "My", the name of co-founder Michael Widenius's daughter, and "SQL", the abbreviation for Structured Query Language. MySQL is free and open-source software under the terms of the GNU General Public License, and it is also available under a variety of proprietary licenses. MySQL was owned and sponsored by the Swedish company MySQL AB, which was bought by Sun Microsystems (now Oracle Corporation); in 2010, when Oracle acquired Sun, Widenius forked the open-source MySQL project to create MariaDB. MySQL is a component of the LAMP web application software stack (and others), an acronym for Linux, Apache, MySQL, Perl/PHP/Python. MySQL is used by many database-driven web applications, including Drupal, Joomla, phpBB, and WordPress, and by many popular websites, including Facebook, Flickr, MediaWiki, Twitter, and YouTube.

SOFTWARE REQUIREMENTS
Operating system: Windows 7 and above
Coding language: Python
IDE: Sublime Text 3, PyCharm

HARDWARE REQUIREMENTS
System: Intel i3 processor and above
Hard disk: 200 GB
Monitor: 15-inch VGA color
RAM: 4 GB

MATHEMATICAL MODELING
[Model equation appeared as an image in the source and is not recoverable; its symbols are defined below and formalized in the Set Theory subsection.]
Where,
Q = user-entered input
CB = pre-process
C = feature selection
PR = pre-process request evaluation
UB = predict outcome

Set Theory
1) Let S be a system whose input is an image:
S = {In, P, Op, Φ}
2) Identify the input In as
In = {Q}
where Q = the user-entered input image (dataset).
3) Identify the process P as
P = {CB, C, PR}
where CB = pre-process, C = feature selection, and PR = pre-process request evaluation.
4) Identify the output Op as

Op = {UB}
Where,
UB = predict outcome
Φ = failure and success conditions.
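
Read literally, the set-theoretic model above describes a four-stage pipeline from the input image Q to the predicted outcome UB. The following Python sketch is one hypothetical rendering of it; every function body is a placeholder standing in for the corresponding module (CB, C, PR, UB), not an implementation taken from the paper.

    import numpy as np

    def preprocess(q):           # CB: normalize the user-entered image Q
        return q.astype("float32") / 255.0

    def select_features(cb):     # C: keep only the features of interest
        return cb.flatten()      # placeholder; the paper's features come from a CNN

    def evaluate_request(c):     # PR: evaluate the pre-processed request
        return c[np.newaxis, :]  # e.g. batch the features for the classifier

    def predict_outcome(pr):     # UB: predict the emotion label
        return "happy"           # placeholder for the trained classifier

    def run_system(q):
        # S = {In, P, Op, Phi}: apply P = {CB, C, PR} to In = {Q} to obtain Op = {UB}.
        try:
            return predict_outcome(evaluate_request(select_features(preprocess(q))))
        except Exception:        # Phi: the failure condition of the model
            return None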
Space Complexity:
The space complexity depends on the presentation and visualization of the discovered patterns: the more data that must be stored, the greater the space complexity.

Time Complexity:
Let n be the number of patterns available in the datasets. If n > 1, retrieving the information can be time-consuming, so the time complexity of this algorithm is O(n^n).
The above mathematical model is CNN-complete.

Algorithm:
CNN:
A Convolutional Neural Network (ConvNet/CNN) is a deep learning algorithm which can take in an input image, assign importance (learnable weights and biases) to various aspects or objects in the image, and differentiate one from the other. The pre-processing required in a ConvNet is much lower than for other classification algorithms: while in primitive methods filters are hand-engineered, with enough training ConvNets are able to learn these filters and characteristics.

How CNN works (a minimal end-to-end sketch follows below):
• Convolution
• ReLU layer
• Pooling
• Fully connected

The convolution of f and g, written f∗g, is defined as the integral of the product of the two functions after one is reversed and shifted:
(f ∗ g)(t) = \int_{-\infty}^{\infty} f(\tau)\, g(t - \tau)\, d\tau
Convolution is commutative and can be viewed as a weighted-average operation at every moment (for this, the weighting function must be a valid probability density function).

Discrete convolution (one axis):
(f ∗ g)[n] = \sum_{m=-\infty}^{\infty} f[m]\, g[n - m]

Convolution and Cross-Correlation in Images
Convolution operator: G = H ∗ F, i.e.
G[i, j] = \sum_{u} \sum_{v} H[u, v]\, F[i - u, j - v]
where F is the input image, H is the kernel, and G is the filtered output. Cross-correlation is the same sum taken over F[i + u, j + v], i.e. without flipping the kernel.

Figure: Advanced System Architecture (image not reproduced in this text version)

EXISTING SYSTEM AND DISADVANTAGES
The traditional method of playing music based on a person's mood necessitates human interaction. Migrating to computer vision technology would enable such a device to be automated. To accomplish this, an algorithm is used to identify human emotions and play music tracks based on the currently detected emotion.
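
Looking back at the "How CNN works" list, here is a minimal Keras sketch of the convolution, ReLU, pooling, and fully connected stages. The 48×48 grayscale input and the seven output classes are assumptions chosen to match the seven basic emotions discussed in the survey; the paper does not state its layer sizes.

    from tensorflow import keras
    from tensorflow.keras import layers

    NUM_EMOTIONS = 7  # assumed: happy, sad, fear, surprise, rage, disgust, neutral

    model = keras.Sequential([
        layers.Input(shape=(48, 48, 1)),           # assumed grayscale face crop
        layers.Conv2D(32, 3, activation="relu"),   # convolution + ReLU layer
        layers.MaxPooling2D(2),                    # pooling
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(2),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),      # fully connected
        layers.Dense(NUM_EMOTIONS, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="categorical_crossentropy",
                  metrics=["accuracy"])

Each Conv2D layer learns exactly the kind of filters that, as the text notes, hand-engineered methods would otherwise require.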

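The automated flow described under EXISTING SYSTEM AND DISADVANTAGES (detect the face, classify the emotion, play a matching track) can be sketched with OpenCV's stock Haar cascade, in the spirit of the Viola-Jones detector cited as reference [5]. The webcam capture and the predict/play_track helpers are hypothetical placeholders, not code from the paper.

    import cv2

    # Stock frontal-face Haar cascade shipped with OpenCV (a Viola-Jones detector).
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

    def detect_face(frame):
        # Return the first detected face as a 48x48 grayscale crop, or None.
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        faces = cascade.detectMultiScale(gray, scaleFactor=1.3, minNeighbors=5)
        for (x, y, w, h) in faces:
            return cv2.resize(gray[y:y + h, x:x + w], (48, 48))
        return None

    cap = cv2.VideoCapture(0)   # take the user's photo with the webcam
    ok, frame = cap.read()
    cap.release()
    if ok:
        face = detect_face(frame)
        if face is not None:
            print("face crop:", face.shape)
            # Next the crop would go to the trained emotion model, and the
            # predicted label would drive playlist selection, e.g.
            # play_track(predict(face)).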

ADVANCED SYSTEM AND ADVANTAGES
Some of the improvements we expect to deliver soon include the ability to return any number of emotions, regardless of the number of clusters (which may also differ), by plotting the emotion returned from the user's image at its arousal and valence values and finding the cluster whose mean is nearest to that emotion; a sketch of this nearest-mean assignment follows below. Handled this way, the approach imposes no such limitations.
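
Here is a minimal sketch of the nearest-mean assignment just described, assuming each emotion and each cluster mean is a point in (arousal, valence) space. The coordinates below are illustrative values, not numbers from the paper.

    import numpy as np

    # Illustrative cluster means in (arousal, valence) coordinates.
    cluster_means = {
        "happy": np.array([0.7, 0.8]),
        "sad":   np.array([-0.6, -0.7]),
        "angry": np.array([0.8, -0.6]),
        "calm":  np.array([-0.5, 0.6]),
    }

    def nearest_cluster(emotion_point):
        # Pick the cluster whose mean is closest to the detected emotion.
        return min(cluster_means,
                   key=lambda name: np.linalg.norm(cluster_means[name] - emotion_point))

    print(nearest_cluster(np.array([0.6, 0.7])))  # -> happy

Because the assignment only compares distances to cluster means, the number of emotions and the number of clusters are free to differ, which is exactly the flexibility described above.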
CONCLUSION
While this system is fully functional, there is room for
development in the future. Various aspects of the application
can be tweaked to deliver improved results and a more
pleasant overall user experience. Some of these, which we expect to address in the near future, include the ability to return any number of emotions regardless of the number of clusters (which may also differ), by plotting the emotion returned from the user's image at its arousal and valence values and finding the cluster whose mean is nearest to that emotion. The above findings are
extremely encouraging. The application's high accuracy and
fast response time make it ideal for a wide range of practical
applications. The music classification module, in particular,
does exceptionally well; it achieves high accuracy in the
“angry” category, as well as in the “happy” and “calm”
categories. We’re also aware that there's still space for growth.
It will be important to see how the system works when all
seven basic emotions are considered; additional songs from
various languages and regions could also be included to
strengthen the recommendation system.
REFERENCES
[1] I. Cohen, A. Garg, and T. S. Huang, "Emotion Recognition from Facial Expressions using Multilevel HMM," 2015.
[2] Y. Song, S. Dixon, and M. Pearce, "Evaluation of Musical Features for Emotion Classification," in Proceedings of the 13th International Society for Music Information Retrieval Conference, Portugal, pp. 523-528, October 2012.
[3] R. E. Thayer, The Biopsychology of Mood and Arousal. Oxford University Press, USA, 1989.
[4] N. Nikalaou, "Music emotion classification," dissertation for the Diploma of Electronic and Computer Engineer, Technical University of Crete, 2011.
[5] P. Viola and M. Jones, "Rapid object detection using a boosted cascade of simple features," in Proceedings of the 2001 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR 2001), vol. 1, pp. I-511-I-518, IEEE, 2001.

