Professional Documents
Culture Documents
Final Report - Santosh Sir1
Final Report - Santosh Sir1
Final Report - Santosh Sir1
ON
Music Recommendation System
BACHELOR OF TECHNOLOGY
Computer Science Engineering(Artificial Intelligence)
Submitted By
Praveen Kumar (2001921520040)
Harsh Raj (2001921520027)
Anmol Rana (2001921520009)
Affiliated to
2023-2024
[1]
Declaration
We hereby declare that the project work presented in this report entitled “Music
Recommendation System”, in partial fulfillment of the requirement for the award of the
degree of Bachelor of Technology in Computer Science & Engineering, submitted to Dr.
A.P.J. Abdul Kalam Technical University, Lucknow is based on our own work carriedout at
Department of Applied Computational Science and Engineering (ACSE), G.L. Bajaj Institute
of Technology & Management, Greater Noida. The work contained in the report is true and
original to the best of our knowledge and project work reported in this report has not been
submitted by us for award of any other degree or diploma.
Signature:
Name: Praveen Kumar
Signature:
Name: Harsh Raj
RollNo: 2001921520027
Signature:
Name: Anmol Rana
Date:
Place: Greater Noida, INDIA
[ii]
Certificate
This is to certify that the Project report entitled “Music Recommendation System”done by
Management, Greater Noida under my guidance. The matter embodied in this project work
has not been submitted earlier for the award of any degree or diploma to the best of my
Date:
[iii]
Acknowledgement
The merciful guidance bestowed to us by the almighty made us stick out this project to a
successful end. We humbly pray with sincere heart for his guidance to continue forever.
We pay thanks to our project guide Mr. Anand Kumar Yadav who has given guidance and
light to us during this project. Her versatile knowledge has helped us in the critical times
during the spanof this project.
We pay special thanks to our Dean ACSE Dr. Naresh Kumar and Head of Department Mr.
Mayank Singh who has been always present as a support and help us in all possible way during
this project.
We also take this opportunity to express our gratitude to all those people who have been
directly and indirectly with us during the completion of the project.
We want to thanks our friends who have always encouraged us during this project.
At the last but not least thanks to all the faculty of CSE department who provided valuable
suggestions during the period of project.
[iv]
Abstract
This research introduces a music recommendation system that combines content -based
filtering, collaborative filtering, and real-time user feedback mechanisms. The system
evaluates the intrinsic features of songs to provide recommendations aligned with individual
user preferences. To enhance the contextual relevance of recommendations, collaborative
filtering is employed, integrating current musical trends, including regional preferences.
A key feature of our system is its dynamic feedback loop, which utilizes real-time user
reactions captured through facial recognition and wearable device data to assess enjoyment
levels. This feedback is used to continually refine recommendations via machine learning
algorithms, adapting to the user’s evolving tastes and emotional responses. The result is a
highly adaptive and responsive music recommendation system that anticipates user needs,
offering a cutting-edge solution for personalized music curation. The proliferation of digital
music libraries has overwhelmed users with an abundance of choices, making effective music
recommendation systems essential for a personalized listening experience. Our system aims to
address this by integrating content-based filtering, collaborative filtering, and real-time
feedback mechanisms to provide recommendations that are both personally resonant and
contextually relevant. Additionally, the researchers utilized data augmentation techniques
during training, which involves generating new training samples by applying random
transformations (such as rotation, scaling, and flipping) to existing images. This strategy helps
improve the model's ability to generalize to unseen data and enhances its overall performance.
Traditional music recommendation systems often rely on either content-based filtering, which
analyzes song attributes, or collaborative filtering, which leverages user community data.
However, each method has its limitations when used in isolation. Content-based filtering excels
at recommending songs similar to those a user has liked before but struggles with novelty.
Collaborative filtering can introduce users to new music but often requires a large amount of
user data to be effective
[v]
This research introduces a music recommendation system that combines content -based
filtering, collaborative filtering, and real-time user feedback mechanisms. The system
evaluates the intrinsic features of songs to provide recommendations aligned with individual
user preferences. To enhance the contextual relevance of recommendations, collaborative
filtering is employed, integrating current musical trends, including regional preferences.
[vi]
TABLE OF CONTENT
Chapter 1. Introduction…………………………………………………………… 01
1.1 Preliminaries..........................................................................................
1.2 Motivation …………………….............................................................
1.3 Problem Statement …………………………………............................
1.4 Aim and Objectives ……………………………………………………
Chapter 2. Literature Survey.................................................................................... 06
2.1 Introduction …………………………………………………………..
2.2 Related Work.........................................................................................
2.3 Research Gap………………..................................................................
Chapter 4. Implementation…................................................................................. 22
4.1 Introduction …………………………………………………………….
4.2 Implementation Strategy (Flowchart, Algorithm etc.) ………………….
4.3 Tools/Hardware/Software Requirements..………………………………
4.4 Expected Outcome (Performance metrics with details) ……………….
References 37
Appendix I: Plagiarism Report of Project Report (<=15%)
[vii]
LIST OF FIGURES
[viii]
Chapter 1
Introduction
The evolution of digital music platforms has fundamentally changed how users consume
music. With vast libraries and the convenience of streaming, users have access to millions
of songs at their fingertips. However, this abundance of choice often leads to the "paradox
of choice," where users struggle to find music that aligns with their preferences.
Traditional music recommendation systems, such as those used by Spotify, Apple Music,
and YouTube, primarily rely on collaborative filtering and content-based filtering. While
these methods have proven effective to an extent, they have significant limitations.
Collaborative filtering, which recommends music based on the preferences of similar users,
often falls short in scenarios involving new users (the "cold start" problem) or niche music
tastes. Content-based filtering, which recommends music based on the attributes of songs,
can fail to capture the contextual and emotional nuances that influence user preferences.
The motivation behind this research is to address these limitations by developing a Hybrid
Advanced Music Recommendation System (HAMRS). This system seeks to harmonize
user preferences with real-time contextual and emotional data, thereby delivering a more
personalized and engaging music experience.
This paper introduces a hybrid system that bridges these methodologies, incorporating real-
time user feedback through facial recognition and wearable devices to dynamically adjust
recommendations. By capturing real-time reactions, our system can better understand and
anticipate user preferences, providing a more engaging and satisfying listening experience.
1
1.1.2 Problem Statement & Objectives
Problem Statement
Current music recommendation systems are limited in their ability to adapt in real-time to
user preferences and emotional states. They often neglect regional music trends and do not
fully utilize advanced technologies such as machine learning, computer vision, and real-
time physiological data.
In today's digital age, music streaming platforms have transformed how we discover and
enjoy music. With millions of tracks available at our fingertips, users can feel overwhelmed
by the sheer volume of choices. This is where music recommendation systems come into
play. These systems are designed to help users navigate vast music libraries by providing
personalized music suggestions tailored to their unique tastes and listening habits.
Objectives
Incorporate multiple regional music APIs to ensure the quality and relevance of music
recommendations.
Utilize facial recognition and wearable devices to capture real-time user feedback and
refine recommendations.
Ensure the system can scale to accommodate a growing user base and data volume.Analyze
2
the effectiveness of the hybrid approach in enhancing user satisfaction and engagement.
Enhanced Personalization
The system leverages multiple filtering techniques and real-time feedback to offer a highly
personalized music experience, aligning with users' dynamic preferences and emotional
states.
Real-Time Adaptability
By integrating real-time data from facial recognition and wearable devices, the system can
continuously adapt its recommendations based on the user's current mood and context.
Regional Relevance
Using multiple regional APIs ensures that the recommendations are not only personalized
but also culturally and contextually relevant, enhancing the user's connection to the music.
User Engagement
The system’s interactive and responsive nature is likely to increase user engagement and
satisfaction, leading to longer usage times and higher user retention rates.
Technological Innovation
The research contributes to the field of artificial intelligence and machine learning by
exploring the integration of diverse data sources and advanced algorithms in a practical
application.
3
1.1.4 Limitation of Research
The use of facial recognition and wearable devices raises significant privacy and security
concerns. Ensuring user consent and protecting user data from breaches are critical
challenges.
Resource Intensive
The integration of advanced technologies such as machine learning, computer vision, and
real-time data processing requires substantial computational resources and can be costly to
implement at scale.
The accuracy of emotion detection through facial recognition and wearable devices can be
influenced by external factors such as lighting, facial obstructions (e.g., glasses or masks),
and device accuracy.
User Acceptance
Users may be hesitant to use a system that involves facial recognition and continuous
monitoring through wearable devices due to privacy concerns or discomfort.
Scalability
Ensuring that the system can handle a large number of users and vast amounts of data
efficiently requires careful consideration of scalability and performance optimization.
4
incorporating real-time user feedback through facial recognition and wearable
devices to dynamically adjust recommendations. By capturing real-time reactions,
our system can better understand and anticipate user preferences, providing a more
engaging and satisfying listening experience.
5
Chapter 2
Literature Survey
2.1.1 Introduction
Soleymani et al. (2015) proposed a content-based music recommendation system using the
underlying music preference structure. This approach emphasizes understanding the deeper
musical preferences of users to provide more accurate recommendations.
Bogdanov et al. (2011) developed a content-based system for music recommendation and
visualization of user preferences. Their work highlights the use of semantic notions to
6
improve the recommendation process.
Han et al. (2018) focused on music recommendation based on feature similarity. This
method involves comparing the features of different songs to identify similarities and
recommend songs that align with the user's taste.
Advantages
Content-based filtering can recommend songs that share specific characteristics with the
user's liked songs, providing a high degree of relevance.
Disadvantages
It can lead to repetitive recommendations and lacks the ability to introduce diversity.
Additionally, it may not capture the user's broader musical tastes and context.
Collaborative filtering relies on the behavior and preferences of a large number of users to
generate recommendations. It can be implemented using two main methods: user-based and
item-based collaborative filtering.
This method finds users with similar preferences (neighbors) and recommends songs that
those users have liked. It is effective in finding niche music by leveraging the preferences
of similar users.
This method calculates the similarity between items (songs) based on user interactions and
recommends songs that are similar to those the user has liked. It is more scalable than user-
based methods and can quickly adapt to new users and items.
7
Challenges
Scalability
New users and new songs pose a challenge as there is no historical data to base
recommendations on.
Sparsity
The user-item interaction matrix is often sparse, making it difficult to find sufficient data
to generate accurate recommendations.
Hybrid Approaches
Darshna (2018) explored a hybrid approach that combines content-based and collaborative
filtering to reduce the cold start problem. This method integrates the strengths of both
techniques to provide a more balanced recommendation system.
8
based features to enhance recommendation accuracy.
Hybrid approaches address many of the limitations of individual methods, such as the cold
start problem and the lack of diversity, by providing a more balanced and comprehensive
recommendation.
The integration of real-time feedback mechanisms, such as facial recognition and wearable
devices, has emerged as a promising approach to enhance music recommendation systems.
These technologies capture physiological and emotional responses to music, providing
valuable insights into user preferences.
Ayata et al. (2018) focused on emotion recognition via galvanic skin response, comparing
machine learning algorithms and feature extraction methods. Their research highlights the
potential of using physiological data to enhance music recommendations.
Dornbush et al. (2005) introduced XPOD, a human activity and emotion-aware mobile
music player. This system uses real-time feedback from users' physical activities and
emotions to adapt music recommendations.
Liu et al. (2009) proposed a music playlist recommendation system based on user heartbeat
and music preference. This approach uses physiological data, such as heart rate, to
understand user preferences and recommend songs accordingly.
Real-time feedback mechanisms can provide immediate insights into user preferences,
allowing the system to adapt quickly and improve recommendation accuracy.
9
Challenges
Privacy Concerns
The use of facial recognition and biometric data raises significant privacy issues. Ensuring
user consent and protecting data from misuse are critical challenges.
Accuracy
The accuracy of emotion detection can be affected by various factors such as lighting
conditions, facial obstructions, and device accuracy.
Incorporating regional music trends and cultural context into recommendation systems can
significantly enhance personalization. Users often have preferences that are influenced by
their cultural background and regional trends.
Tian et al. (2019) developed a music recommendation system based on logistic regression
and eXtreme Gradient Boosting (XGBoost). This system takes into account regional music
trends to provide more relevant recommendations.
Incorporating regional music trends into recommendation systems ensures that the music
suggestions are culturally and contextually relevant. This approach can significantly
enhance user satisfaction and engagement by tailoring recommendations to the user's
cultural background and local music preferences. By leveraging regional music data,
recommendation systems can provide more diverse and personalized music experiences,
reflecting the unique tastes and trends of different regions. This can lead to higher user
retention and more positive user interactions with the platform.
10
Machine Learning and Deep Learning Techniques
Recent advancements in machine learning and deep learning have significantly enhanced
the capabilities of music recommendation systems. Techniques such as neural collaborative
filtering, deep content-based models, and sequence-based models (e.g., Recurrent Neural
Networks) have shown promising results.
Uses neural networks to model complex interactions between users and items. This
approach can capture non-linear relationships and improve the accuracy of
recommendations.
Utilize deep learning techniques to extract high -level features from audio signals, lyrics,
and metadata. These models can provide more nuanced and accurate recommendations
based on the content of the music.
Sequence-Based Models
Recurrent Neural Networks (RNNs) and Long Short-Term Memory (LSTM) networks can
model the sequential nature of music listening behavior, capturing patterns and preferences
over time.
The integration of real-time feedback mechanisms, such as facial recognition and wearable
devices, has emerged as a promising approach to enhance music recommendation systems.
These technologies capture physiological and emotional responses to music, providing
valuable insights into user preferences.
11
Facial Recognition
Analyzes facial expressions to infer emotions such as happiness, sadness, surprise, and
disgust. Real-time analysis of facial cues can help determine how a user feels about a
particular song.
Wearable Devices
Track physiological metrics such as heart rate, skin conductance, and body temperature.
These metrics can indicate arousal levels and emotional states, which can be correlated with
music preferences.
Challenges
Privacy Concerns
The use of facial recognition and biometric data raises significant privacy issues. Ensuring
user consent and protecting data from misuse are critical challenges.
Accuracy
The accuracy of emotion detection can be affected by various factors such as lighting
conditions, facial obstructions, and device accuracy.
12
Inferences Drawn from Literature Review
From the review of the literature, it is evident that the integration of multiple
recommendation techniques can significantly enhance the effectiveness of music
recommendation systems. Content-based filtering, while effective in identifying song
similarities, can benefit from the diversity introduced by collaborative filtering. Hybrid
approaches that combine these methods address many limitations, such as the cold start
problem and recommendation diversity.
Incorporating regional music trends into recommendation systems ensures that the music
suggestions are culturally and contextually relevant. This approach can significantly
enhance user satisfaction and engagement by tailoring recommendations to the user's
cultural background and local music preferences. By leveraging regional music data,
recommendation systems can provide more diverse and personalized music experiences,
reflecting the unique tastes and trends of different regions. This can lead to higher user
retention and more positive user interactions with the platform.
Recent advancements in machine learning and deep learning have significantly enhanced
the capabilities of music recommendation systems. Techniques such as neural collaborative
13
filtering, deep content-based models, and sequence-based models (e.g., Recurrent Neural
Networks) have shown promising results.
Uses neural networks to model complex interactions between users and items. This
approach can capture non-linear relationships and improve the accuracy of
recommendations.
Utilize deep learning techniques to extract high-level features from audio signals, lyrics,
and metadata. These models can provide more nuanced and accurate recommendations
based on the content of the music.
14
Chapter 3
Proposed Methodology
3.1.1 Introduction
Current music recommendation systems are limited in their ability to adapt in real-time
to user preferences and emotional states. They often neglect regional music trends and do
not fully utilize advanced technologies such as machine learning, computer vision, and
real-time physiological data. These limitations result in less accurate and less engaging
music recommendations, reducing user satisfaction and retention.
The proposed work involves the development of HAMRS, which integrates multiple
recommendation techniques and real-time feedback mechanisms. The system
architecture, data collection methods, recommendation algorithms, and evaluation
metrics are outlined below.
15
3.3.1 System Architecture
This module collects data from user interaction logs, facial recognition systems, and
wearable devices. The collected data includes user preferences, listening history,
physiological responses, and contextual information.
This module preprocesses the collected data to extract relevant features. Techniques
include audio signal processing to analyze song features such as tempo, rhythm, and key.
Recommendation Engine
User Interface
The user interface displays the recommended songs to the user and collects real -time
feedback through facial recognition and wearable devices. The interface also allows users
to manually provide feedback on the recommendations.
16
3.3.2 Data Collection and Preprocessing
Data collection is a crucial step in HAMRS methodology. The system collects data from
multiple sources to build a comprehensive user profile. The following data sources are
utilized:
These logs capture user interactions with the music platform, including song plays, skips,
likes, and searches. This data provides insights into user preferences and listening habits.
Facial Recognition
Wearable Devices
Wearable devices provide physiological data such as heart rate. These metrics are used
to infer the user's emotional and physiological state while listening to music.
Data preprocessing involves cleaning and transforming the collected data to extract
meaningful features. This includes:
Signal Processing
Audio signals are processed to extract features such as tempo, rhythm, and key.
17
recommendation algorithms used include:
Content-Based Filtering
This technique analyzes the intrinsic features of songs to match user preferences. Features
such as genre, tempo, rhythm, and lyrics are used to find songs similar to those the user
has liked in the past. Specifically, the system employs a vector cosine similarity algorithm
to compare song features and identify similar songs.
Collaborative Filtering
Real-time feedback from facial recognition and wearable devices is integrated into the
recommendation process. This feedback helps the system adapt to the user's current
emotional and physiological state, providing more contextually relevant
recommendations.
18
Heart Rate Data
The system tracks the user's BPM (beats per minute) while listening to music. It has been
observed that BPM can rise gradually even if there is a sudden rise in tempo or beat drop.
Typically, BPM ranges from 72 to 80 if the user is excited, 80 to 90 for more liked songs,
and higher for even more liked songs. Lower BPM generally indicates that the user does
not like the song, especially if other users who like the song have a higher BPM curve
under similar physical conditions.
BPM is influenced by factors such as temperature, physical activity, stress level, song
liking level, lyrics, and other musical features. The system uses an algorithm to draw an
average BPM curve under similar conditions for liked music. If the current BPM curve
matches the average liked song curve, it indicates that the user likes the song. Conversely,
if the BPM is random and generally low, it suggests that the user does not like the song.
The system analyzes facial expressions to determine the user's emotional response to
music. Features such as smiles, head movements, and other expressions are used to assess
whether the user likes the song. If the user likes the song, these expressions are saved to
train the public data curve of that song. If the user does not like the song, the system offers
a button to change the song.
The system compares the user's facial expressions and BPM data to public data. If the
user's data matches the public data for liked songs, it indicates that the user likes the song.
Additionally, users who already like a song and listen to it frequently will have a rise in
BPM in anticipation of special moments in the song, such as beat drops.
HAMRS employs a neural network to model the user's profile and predict song
preferences. The neural network is trained on the features of songs liked by the user and
outputs a score out of 10 indicating the user's likelihood of liking a new song. This AI-
19
based user profile is used to sort and filter songs obtained from both content-based and
collaborative filtering.
The neural network is trained on a dataset of songs liked by the user, incorporating
features such as genre, tempo, rhythm, lyrics, and emotional responses. The output is a
score that predicts the user's preference for new songs.
The neural network is continually refined with real-time feedback from the user,
including heart rate data and facial expressions.
The user profile AI is used to sort songs from the combined list of content -based and
collaborative filtering recommendations. Songs that score higher in the user's profile are
given higher priority, while those that score lower are deprioritized.
To evaluate the performance of HAMRS, various metrics are used to measure the
accuracy, relevance, and user satisfaction of the recommendations. The following metrics
are employed:
20
Mean Average Precision (MAP):
MAP is a comprehensive metric that considers both precision and recall. It provides a
single value that reflects the overall accuracy of the recommendations.
User Satisfaction:
User satisfaction is measured through surveys and real-time feedback. Metrics such as
user ratings, engagement levels, and feedback on the relevance of recommendations are
collected and analyzed.
Diversity measures the variety of the recommended songs, ensuring that the
recommendations are not repetitive. Novelty measures the introduction of new and
previously unknown songs to the user.
By employing these evaluation metrics, the performance of HAMRS can be assessed and
optimized to deliver high-quality music recommendations. The system's ability to adapt
to real-time user feedback and leverage multiple data sources ensures a personalized and
engaging music experience.
21
Chapter 4
Implementation
4.1 Introduction
The implementation strategy involves several key steps, each contributing to the overall
recommendation process. The Fig.4.2.1 User Feedback Mechanism below illustrate the
methodology in detail.
22
Data Collection
Public wearable device data and public facial mood data are gathered.
This data is analyzed to determine if the user's real-time experience matches public data.
Data Matching
If the user's real-time data matches public data, the system stores the user data publicly
and refines the model based on the user profile.
If the data does not match, the user is offered the option to change the song.
Personalized Recommendation
23
4.2.3 Algorithm
Content-Based Filtering
Songs are analyzed based on their intrinsic features such as genre, tempo, rhythm, and
lyrics.
Cosine similarity is used to compare the vector representations of songs and match similar
songs.
Collaborative Filtering
Real-Time Feedback
Wearable devices provide heart rate (BPM) data, indicating the user's physiological
response to the music.
The hybrid model integrates content-based and collaborative filtering Fig.4.2.4 results.
24
Fig.4.2.4 Content-Based and Collaborative Filtering
Integration
25
4.3 Tools/Hardware/Software Requirements
The implementation of HAMRS requires various tools, hardware, and software to ensure
its functionality and scalability.
4.3.1 Hardware
Wearable devices capable of monitoring heart rate and other physiological parameters.
4.3.2 Software
Machine learning frameworks (e.g., TensorFlow, PyTorch) for model training and
prediction.
APIs for regional music trends (e.g., Spotify Charts, YouTube Trending).
4.3.3 Tools
User interface tools to create a seamless experience for music recommendation and
feedback.
26
Enhance personalization by aligning music recommendations with the user's preferences
and emotional states.
Increase user engagement and satisfaction through real-time adaptability and relevant
recommendations.
Offer a scalable solution capable of handling a large user base and extensive music
library.By continuously refining user profiles and adapting to real-time feedback,
HAMRS strives to set a new standard in music recommendation systems.
The system aims to achieve several key outcomes, including enhanced personalization,
increased user engagement, and scalability. By continuously refining user profiles and
adapting to real-time feedback, HAMRS strives to set a new standard in music
recommendation systems. Detailed metrics will be used to evaluate the performance of
the system, including user satisfaction scores, engagement metrics, and the accuracy of
recommendations.
User Satisfaction
Surveys and feedback forms will be used to gauge user satisfaction with the
recommendations. Metrics such as Net Promoter Score (NPS) and user retention rates
will be monitored.
Engagement Metrics
Time spent on the platform, the number of interactions, and the frequency of song changes
will be tracked. Higher engagement is expected to correlate with better recommendation
quality.
27
Recommendation Accuracy
Precision, recall, and F1-score will be used to measure the accuracy of the
recommendations. The performance of content-based and collaborative filtering will be
evaluated separately and in combination.
Scalability
The system's ability to handle a growing number of users and a large music library will
be tested. Load testing and stress testing will be conducted to ensure the system can scale
effectively.
Ensuring user consent and protecting sensitive data from breaches is crucial. Robust
encryption and data anonymization techniques will be employed.
Resource Requirements
Factors such as lighting, facial obstructions, and device accuracy can affect emotion
detection. Continuous improvement of the emotion detection algorithms will be
necessary.
28
User Acceptance
Users may have concerns about privacy and the use of facial recognition and wearable
devices. Transparent communication and user education will be essential to gain user
trust.
Future Work
Exploring additional data sources such as skin conductance and other physiological
metrics available on high-end fitness bands. Enhancing the machine learning models to
improve recommendation accuracy and user satisfaction. Expanding the system to
support a wider range of music genres and regional preferences.
29
Chapter 5
Results & Discussions
5.1 Introduction
User feedback was collected through surveys and feedback forms to gauge the overall
satisfaction with the HAMRS recommendations. The feedback focused on aspects such
as the relevance of recommendations, ease of use, and the responsiveness of the system
to user preferences.
Relevance of Recommendations The majority of users reported high satisfaction with the
relevance of the recommendations provided by HAMRS The hybrid approach of
combining content-based filtering collaborative filtering and real-time feedback was
effective in delivering personalized suggestions
Ease of Use Users found the system intuitive and easy to navigate The integration of real-
time feedback mechanisms such as facial recognition and wearable devices was well-
received with users appreciating the enhanced personalization
Responsiveness The system's ability to adapt to user preferences and emotional states in
30
real-time was highlighted as a key strength Users noted that the recommendations became
more accurate over time as the system learned from their feedback
The performance of HAMRS was evaluated using several key metrics including user
satisfaction scores engagement metrics recommendation accuracy and system scalability
User Satisfaction Scores The average user satisfaction score was 45 out of 5 indicating a
high level of contentment with the system
Engagement Metrics The average time spent on the platform increased by 20% and the
number of interactions per session rose by 25% demonstrating enhanced user engagement
Recommendation Accuracy The precision recall and F1-score for the recommendation
system were calculated with the following results Precision 085 Recall 082 F1-score 083
Scalability The system successfully handled a growing number of users and a large music
library without significant performance degradation Load testing and stress testing
confirmed the system's scalability and robustness
31
5.4 Analysis of BPM and Facial Feedback Data
Fig.5.4.1 Song 1
Fig.5.4.2 Song 2
32
Fig.5.4.3 Song 3
The analysis of BPM and facial feedback data provided insights into how users'
physiological and emotional responses correlated with their music preferences
BPM Data The data showed that users who liked a song as you can see in Fig.5.4.1,
Fig.5.4.2 and Fig.5.4.3 exhibited a gradual increase in BPM with typical ranges between
90 to 100 and 100 to 113 BPM while sitting Conversely users who did not like a song
displayed lower and more stable BPM levels This correlation between BPM and music
preference was consistent across different physical conditions such as temperature and
activity levels
Facial Feedback Facial expressions such as smiles and head movements were reliable
indicators of user enjoyment Users who liked a song showed more positive facial
expressions while those who did not like the song displayed neutral or negative
expressions
33
5.5 Discussion
However challenges such as data privacy resource requirements and emotion detection
accuracy need to be addressed to further improve the system Ensuring user consent and
protecting sensitive data are critical to maintaining user trust and acceptance.
34
Chapter 6
Conclusion & Future Scope
6.1 Conclusion
Key findings include High user satisfaction with the relevance and personalization of
recommendations Increased user engagement and interaction with the platform Effective
use of BPM and facial feedback data to enhance recommendation accuracy
Future research and development can focus on addressing the current challenges and
exploring additional opportunities to enhance HAMRS further Potential areas for future
work include
Data Privacy and Security Developing more robust encryption and anonymization
techniques to protect user data and ensure privacy
35
enhance accuracy and reliability
Broader Genre and Regional Support Expanding the system to support a wider range of
music genres and regional preferences to cater to a diverse user base
36
References
Han, H. et al. (2018). "Music Recommendation Based on Feature Similarity". IEEE
International Conference on Safety Produce Informatization (IICSPI)
Soleymani, M. et al. (2015). "A multimodal database for affect recognition and implicit
tagging". IEEE Transactions on Affective Computing
Wenzhen, H. et al. (2019). "Hybrid music recommendation system based on deep learning".
IEEE Access
Zhang, T. et al. (2020). "Personalized music recommendation using deep learning". Journal
of Computer Science and Technology
Pouyanfar, S. et al. (2014). "A survey on music recommendation systems". IEEE Access
Singh, S. et al. (2021). "Hybrid recommendation system for music streaming services".
International Journal of Advanced Computer Science and Applications
Lv, Y. et al. (2018). "Deep learning for music recommendation". ACM Transactions on
Multimedia Computing, Communications, and Applications
37
Naser, R. et al. (2014). "Collaborative filtering for music recommendation". IEEE
Transactions on Knowledge and Data Engineering
Chin, W. et al. (2014). "Music recommendation system using emotion recognition and deep
learning". International Conference on Artificial Intelligence and Applications
38
1. Final Report plag check.pdf
ORIGINALITY REPORT
11 %
SIMILARITY INDEX
6%
INTERNET SOURCES
5%
PUBLICATIONS
5%
STUDENT PAPERS
PRIMARY SOURCES
setscholars.net
1 Internet Source 1%
Ton Duc Thang University
2 Publication 1%
Submitted to Indian School of Business
3 Student Paper 1%
Submitted to University of Westminster
4 Student Paper <1%
vdocuments.site
5 Internet Source <1%
www.mdpi.com
6 Internet Source <1%
Luong Vuong Nguyen. "Classifications,
7
evaluation metrics, datasets, and domains in
<1%
recommendation services: A survey",
International Journal of Hybrid Intelligent
Systems, 2024
Publication
Submitted to Liverpool John Moores
8
University
<1%
Student Paper
dokumen.pub
9 Internet Source <1%
Submitted to Athlone Institute of Technology
10 Student Paper <1%
Submitted to The University of Buckingham
11 Student Paper <1%
Submitted to University of Moratuwa
12 Student Paper <1%
elib.uni-stuttgart.de
13 Internet Source <1%
www.isrjournals.org
14 Internet Source <1%
thegradient.pub
15 Internet Source <1%
Submitted to Brunel University
16 Student Paper <1%
web.archive.org
17 Internet Source <1%
George Lekakos, Petros Caravelas. "A hybrid
18
approach for movie recommendation",
<1%
Multimedia Tools and Applications, 2006
Publication
Submitted to National Institute of
19
Pharmaceutical Education and Research
<1%
(NIPER)
Student Paper
Abstract—This research introduces a hybrid advanced music facial recognition and wearable devices to dynamically ad- just
recommendation system that combines content-based filtering, recommendations. By capturing real-t ime reactions, our system
collaborative filtering, and real-time user feedback mechanisms. can better understand and anticipate user preferences, provid ing
The system evaluates the intrinsic features of songs to provide
a more engaging and satisfying listening experience.
recommendations aligned with individual user preferences. To
enhance the contextual relevance of recommendations, collabo-
rative filtering is employed, integrating current musical trends,
including regional preferences.
II. LITERATURE SURVEY
A key feature of our system is its dynamic feedback loop, which
utilizes real-time user reactions captured through facial
recognition and wearable device data to assess enjoyment levels. The development of music recommendation systems has
This feedback is used to continually refine recommendations via evolved significantly, with various approaches being explored
machine learning algorithms, adapting to the user’s evolving
tastes and emotional responses. The result is a highly adaptive and to enhance user experience. Previous research has laid the
responsive music recommendation system that anticipates user groundwork for our system, addressing the complexities of
needs, offering a cutting-edge solution for personalized music music recommendation through innovative methodologies.
curation.
Darshna [2] explored hybrid approaches combining content-
Index Terms—Music Recommendation System, Content-Based
Filtering, Collaborative Filtering, Machine Learning, User Feed- based and collaborative filterin g to address the cold-start prob-
back, Facial Recognition, Wearable Devices, Personalization, lem. This method balances the strengths of both techniques,
Music Trends providing robust recommendations even with limited initial
user data. Ayata et al. [3] utilized wearable sensors for real-
time feedback on music preferences, highlightin g the potential
I. I NTRODUCTION of physiological data in refining recommendations. Their work
demonstrates the effectiveness of integrating physio logical
The proliferation of digital music libraries has overwhelmed responses to improve the accuracy of music suggestions.
users with an abundance of choices, making effective music
recommendation systems essential for a personalized listenin g Han et al. [1] focused on feature similarity for content- based
experience. Our system aims to address this by integrating filtering, presentin g methods to analyze and recommendmusic
content-based filtering, collaborative filtering, and real-time based on song attributes. This study underscores the importance
feedback mechanisms to provide recommendations that are of detailed song analysis in content-based filtering.Chang et al.
both personally resonant and contextually relevant. [4] proposed a personalized music recommenda - tion system
using convolutional neural networks, emphasizing machine
Traditional music recommendation systems often rely on learning for personalization. Their research illustrates the
either content-based filtering, which analyzes song attributes, potential of neural networks in capturing complex patterns in
or collaborative filtering, wh ich leverages user communitydata. user preferences.
However, each method has its limitations when usedin
isolation. Content-based filterin g excels at recommending Chin et al. [5] explored emotion profile-based music rec-
songs similar to those a user has liked before but stru ggles with ommendations, alignin g recommendations with the listener’s
novelty. Collaborative filtering can introduce users to newmusic mood. This study reinforces the value of emotional alignment
but often requires a large amount of user data to be effective. in enhancing user satisfaction. Kim et al. [6] utilized human
activity recognition from accelerometer data for music rec-
This paper introduces a hybrid system that bridges these ommendations, integrating wearable device data to enhance
methodologies, incorporating real-time user feedback through user experience. Their work h igh lights the synergy between
physical activity data and music preferences.
Bogdanov et al. [7] provided an overview of content-based C. Feedback Mechanism
music recommendation techniques, discussing the state-of- the-
art, challenges, and future directions. This comprehensive A dist inctive feature of our system is its real-time feedback
review offers valuable insights into the strengths and limi- mechanism. Ut ilizing facial recognit ion through camera input
tations of content-based methods. Chen et al. [8] enhanced and physiological data from smartwatches, we assess the user’s
music recommendation systems by incorporating user context, immediate response to the recommended music. This feedback
highlighting the importance of contextual information in per- is crucial in determining whether the user is enjoying themusic
sonalization. This study demonstrates how contextual cues can or not. The system dynamically adapts future recom -
refine music recommendations. mendations based on this feedback, thus continually refining
the user’s listening experience.
Zhang et al. [9] explo ited emotional preferences in music
recommendation, focusing on daily activities to align rec-
D. Implementation and Integration
ommendations with users’ emotional states. Their approach
emphasizes the role of daily activities in shaping music
The implementation phase involves the seamless integration
preferences. Wishwanath et al. [10] proposed a mood-based
of these three components. The content-based filtering algo-
music recommendation system, integrating user moods into the
rithm processes the song dataset, the collaborative filterin g
recommendation process. This research underscores the
mechanism interacts with the music API, and the feedback
significance of mood in determining music preferences.
system continuously captures and integrates user responses.
Yoon et al. [11] introduced a time-weighted clustering ap- Together, these elements form a cohesive, adaptive music
proach for personalized music recommendations, considering recommendation system.
temporal aspects to improve recommendation accuracy. Their
study highlights the importance of temporal dynamics in music E. Testing and Optimization
recommendation. Soleymani et al. [12] developed a dataset
of 1000 songs for emotional analysis of music, providing Our methodology also encompasses rigo rous testing and op-
valuable resources for research on emotional response to music. timization processes. We conduct extensive trials to ensure the
accuracy and effectiveness of our recommendation algorithms.
These tests aim to validate the system’s ability to provide
III. M ETHODOLOGY
personalized and enjoyable music experiences, taking into
account the ever-changing preferences and emotional statesof
Our system’s methodology revolves around three core com- the users.
ponents: content-based filterin g, collaborative filtering usin g
external music APIs, and a dynamic user feedback system.
I V. PROPOSED SOLUTION
Preliminary observations and theoretical evaluations suggest [6] H. Kim, K. S. Kim, and H. G. Kim, ”Music Recommendation System
that our system significantly enhances the music listening Using Human Activity Recognition from Accelerometer Data,” IEEE
Transactions on Consumer Electronics, 2019.
experience. By combining content-based filtering, collabora -
tive filterin g using regional hits, and real-time feedback from [7] D. Bogdanov, N. Wack, E. Go´mez, S. Gulati, P. Herrera, O. Mayor,
wearable devices, our approach is projected to yield more G. Szklarczyk, E. Cano, F. De Jong, M. Jordi, and X. Serra, ”Content-
personalized and context-aware recommendations compared to Based Music Recommendation: State-of-the-Art, Challenges, and Future
Directions,” IEEE Transactions on Multimedia, 2011.
traditional systems.
[8] H. Chen, Y. Huang, and S. Wu, ”Enhancing Music Recommendation with
Our init ial tests indicate that users appreciate the personal- User Context,” Journal of Computer Science and Technology, 2020.
ized nature of the recommendations, particularly the ability to
[9] H. Zhang, Y. Wang, and X. Liu, ”Exploiting the Emotional Preference
adapt to their real-time feedback. The integration of regional of Music for Music Recommendation in Daily Activities,” 13th Interna-
trends has also been well-received, adding a layer of cultural tional Symposium on Computational Intelligence and Design (ISCID),
relevance to the recommendations. 2020.
[10] C.H.P.D. Wishwanath, N. Wijekoon, and P. Samarasinghe, ”A Person-
alized Music Recommendation System Based on User Moods,” 2019
International Conference on Advances in ICT for Emerging Regions
(ICTer), 2019.