
Abstract

Music plays a very important role in people's daily lives and in modern advanced technology. Typically, users have to manually scroll through a playlist to select songs. Here we propose an efficient and accurate model that generates playlists based on the user's current emotional state and behavior. Existing methods for automating playlist generation are computationally slow, inaccurate, and sometimes even require additional hardware such as an EEG or other sensors. Speech is the oldest and most natural way of expressing feelings, moods, and emotions, but processing it takes a lot of time, computation, and money. The proposed system automatically generates playlists from real-time facial expression extraction and from audio features extracted from songs to classify specific emotions, so its computational cost is relatively low.
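
The abstract refers to audio feature extraction from songs as one input to emotion classification. The sketch below illustrates that idea only and is not the authors' implementation: it assumes the librosa library is installed, that songs are stored in a local songs/ directory, and that classify_mood() is a hypothetical placeholder for whatever trained mood classifier the system actually uses.

```python
# Sketch: extract coarse audio features from songs and group them by
# mood. librosa is assumed to be installed; classify_mood() is a
# hypothetical stand-in for the system's real classifier.
import glob

import numpy as np
import librosa


def extract_features(path):
    # Load ~30 s of audio and compute descriptors commonly used for
    # mood classification: MFCC means, spectral centroid, and energy.
    y, sr = librosa.load(path, duration=30)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13).mean(axis=1)
    centroid = float(librosa.feature.spectral_centroid(y=y, sr=sr).mean())
    energy = float(np.mean(y ** 2))
    return np.concatenate([mfcc, [centroid, energy]])


def classify_mood(features):
    # Placeholder rule; a real system would use a trained model here.
    return "happy" if features[-1] > 0.01 else "calm"


playlists = {}
for song in glob.glob("songs/*.mp3"):
    mood = classify_mood(extract_features(song))
    playlists.setdefault(mood, []).append(song)
```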

Objectives

The objective of a music recommendation system is to suggest songs based on the user's facial expressions. The main objectives of a machine learning based music recommendation system are:

1. To analyse the interface between music and the machine learning model.
2. To implement the ideas of machine learning and to study facial expressions.
3. To develop an application that selects appropriate music based on the analysis of the user's facial expression.
4. To bridge the gap between growing technologies and music techniques.

Methodology

This work proposes a CNN-based approach to recommending music by analyzing multimodal emotional information captured from facial movements and from semantic analysis of the user's speech and text interactions, so that the system's decision about the recognized emotion is genuinely improved in real time. Traditional methods of playing music based on a person's mood require human interaction; migrating to computer vision technology will automate these systems.
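
As a concrete illustration of this methodology, the sketch below captures a frame from the integrated camera, detects the face, classifies its expression, and picks a song for the detected mood. It assumes OpenCV (cv2) is installed; predict_expression() and PLAYLISTS are hypothetical placeholders for the trained CNN and the song library, neither of which is specified here.

```python
# Sketch: grab one frame from the integrated camera, detect a face,
# classify its expression, and choose a song for that mood.
# OpenCV is assumed; predict_expression() and PLAYLISTS are
# hypothetical placeholders for the trained CNN and the song library.
import random

import cv2

PLAYLISTS = {"happy": ["upbeat.mp3"], "sad": ["mellow.mp3"],
             "angry": ["calming.mp3"], "neutral": ["ambient.mp3"]}

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")


def predict_expression(face_img):
    # Placeholder: the real system would run its trained CNN here.
    return "neutral"


def recommend_from_camera():
    cap = cv2.VideoCapture(0)          # integrated (built-in) camera
    ok, frame = cap.read()
    cap.release()
    if not ok:
        return None
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.3, minNeighbors=5)
    if len(faces) == 0:
        return None
    x, y, w, h = faces[0]
    face = cv2.resize(gray[y:y + h, x:x + w], (48, 48))
    mood = predict_expression(face)
    return random.choice(PLAYLISTS.get(mood, PLAYLISTS["neutral"]))
```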

Result

The results of a music recommendation system are typically used for several purposes, including facial recognition, professional music, and music playlists. Overall, the results are intended to improve the effectiveness of recommending the best music based on the user's facial expression or mood. The system evaluates the facial expression of the user and suggests music accordingly.

Conclusion

The proposed work presents a facial expression recognition system that can play songs based on the detected expression and classify music genres. It uses a CNN to extract features and a Euclidean distance classifier to classify these expressions. In this work, real (user-dependent) images are captured using an integrated camera. Migrating to computer vision technology will automate such systems. To do this, an algorithm classifies the human expression and plays a piece of music that matches the currently detected mood.
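
The conclusion states that a CNN extracts features and a Euclidean distance classifier assigns the expression. The snippet below sketches only the distance-based decision rule (a nearest-class-mean classifier) in plain NumPy; the feature vectors are assumed to come from the trained CNN, which is not shown.

```python
# Sketch: Euclidean-distance (nearest class mean) classification of
# CNN feature vectors. The CNN feature extractor itself is assumed
# to exist elsewhere and is not shown.
import numpy as np


def fit_class_means(features, labels):
    # features: (n_samples, d) CNN feature vectors; labels: emotion names.
    features, labels = np.asarray(features), np.asarray(labels)
    return {lab: features[labels == lab].mean(axis=0)
            for lab in np.unique(labels)}


def classify(feature_vec, class_means):
    # Pick the emotion whose mean feature vector is closest in
    # Euclidean distance to the query vector.
    return min(class_means,
               key=lambda lab: np.linalg.norm(feature_vec - class_means[lab]))
```

In the full system, feature_vec would be produced by the trained CNN for the captured face image before this decision rule is applied.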
