
EMOTION AND EMOJI-BASED MUSIC PLAYER 2022-23

CHAPTER 1

INTRODUCTION
Music is an integral part of our lives and can influence our mood and emotions. An emotion and emoji-based music player is a novel way of playing music that takes the relationship between music and emotions to the next level. This type of music player uses machine learning algorithms to recognize emotions and facial expressions and selects the appropriate music to play based on the user's emotional state. The use of Convolutional Neural Networks (CNNs) in an emotion and emoji-based music player has the potential to revolutionize the way we interact with music and to create new opportunities for innovation and creativity.

The idea behind an emotion and emoji-based music player is to create a more personalized and engaging music experience. Traditional music players rely on playlists and user preferences to select music, which can be limiting and often results in the same music being played repeatedly. In contrast, an emotion and emoji-based music player can adapt to the user's emotional state and provide a more varied and personalized music experience.

Developing an emotion and emoji-based music player using a CNN in Python involves several steps. The first step is to collect the data that will be used to train the CNN to recognize emotions and facial expressions. The data can be collected in a variety of ways, including using cameras to capture facial expressions or using sensors to measure physiological responses. The data is then preprocessed to make it suitable for training the CNN; this can include scaling the data, normalizing it, and splitting it into training and validation sets. The next step is to build the CNN model. The model consists of a series of layers designed to recognize emotions and facial expressions: the input to the CNN is the facial expression, and the output is the type of music that should be played. The model is trained by feeding the input data (facial expressions) and the corresponding output (type of music) into the CNN.
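The preprocessing step described above can be sketched as follows. This is an illustrative example only, assuming a hypothetical dataset of 48x48 grayscale face images with integer emotion labels; the image size, class count, and 80/20 split are assumptions for the sketch, not taken from the report:

```python
import numpy as np

# Hypothetical example data: 48x48 grayscale face images with integer
# emotion labels (e.g. 7 emotion classes, as in common FER-style datasets).
rng = np.random.default_rng(0)
images = rng.integers(0, 256, size=(100, 48, 48), dtype=np.uint8)
labels = rng.integers(0, 7, size=100)

# Scale pixel values to [0, 1] and add a channel dimension for the CNN.
x = images.astype("float32") / 255.0
x = x[..., np.newaxis]                          # shape: (100, 48, 48, 1)

# Shuffle, then split 80/20 into training and validation sets.
idx = rng.permutation(len(x))
split = int(0.8 * len(x))
train_idx, val_idx = idx[:split], idx[split:]
x_train, y_train = x[train_idx], labels[train_idx]
x_val, y_val = x[val_idx], labels[val_idx]

print(x_train.shape, x_val.shape)
```

With real data, the images would come from a camera or a labeled dataset rather than a random generator, but the scaling, channel handling, and split would look the same.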

The CNN then adjusts its weights and biases to minimize the error between the predicted output and the actual output. This process is repeated for a set number of epochs until the CNN has learned to recognize emotions and facial expressions accurately. Once the CNN has been trained, it can be used to classify new data: in an emotion and emoji-based music player, the CNN classifies the user's facial expression, and the appropriate music is played. The use of CNNs here reflects the fact that emotions and facial expressions are complex and dynamic

Dept. of CSE, CIT, Mandya Page 1
phenomena that can be challenging to recognize accurately. CNNs are a type of neural network designed for image recognition and have been shown to be effective at recognizing patterns in images. By applying CNNs to facial expressions, it is possible to build a powerful tool that recognizes emotions accurately in real time.

An emotion and emoji-based music player has the potential to transform the way we interact with music, with applications in settings such as music therapy, advertising, and entertainment. In music therapy, it can create personalized music experiences that help individuals relax or improve their mood. In advertising, it can create more engaging and personalized advertisements that resonate with the viewer's emotional state. In entertainment, it can create more immersive and engaging experiences, such as in video games or virtual reality applications.
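To make the training loop concrete, the following is a deliberately simplified stand-in: a single softmax layer trained by gradient descent on synthetic data, rather than the full CNN the report describes (which would typically be built with a framework such as TensorFlow/Keras). The data, learning rate, and epoch count are illustrative assumptions; only the epoch-by-epoch weight-update mechanics mirror the process described above:

```python
import numpy as np

rng = np.random.default_rng(1)
n_classes, n_features = 4, 64                  # hypothetical: 4 emotions, 8x8 inputs
x = rng.normal(size=(200, n_features))
true_w = rng.normal(size=(n_features, n_classes))
y = np.argmax(x @ true_w, axis=1)              # synthetic emotion labels

w = np.zeros((n_features, n_classes))          # weights to be learned
b = np.zeros(n_classes)                        # biases to be learned

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)       # shift for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

for epoch in range(500):                       # fixed number of epochs
    probs = softmax(x @ w + b)                 # predicted output
    onehot = np.eye(n_classes)[y]              # actual output
    grad_w = x.T @ (probs - onehot) / len(x)   # cross-entropy gradient
    grad_b = (probs - onehot).mean(axis=0)
    w -= 0.5 * grad_w                          # adjust weights to reduce error
    b -= 0.5 * grad_b

accuracy = float((np.argmax(x @ w + b, axis=1) == y).mean())
print(f"training accuracy: {accuracy:.2f}")
```

A real CNN adds convolutional and pooling layers in front of this final classification layer, but the training loop (predict, compare with the label, adjust weights, repeat for a set number of epochs) is the same in structure.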

1.1 PROBLEM STATEMENT

Traditional music players are limited in their ability to provide a personalized and engaging music experience. Playlists and user preferences are typically used to select music, which can result in the same music being played repeatedly. Music players are also unable to adapt to the user's emotional state, which limits the emotional impact of the music. An emotion and emoji-based music player using a CNN in Python seeks to address these limitations by using machine learning to recognize emotions and facial expressions and to select the appropriate music based on the user's emotional state.

One challenge in developing such a player is the complexity of emotions and facial expressions. These can vary widely depending on the individual and the context, and recognizing the variations accurately requires a sophisticated approach. CNNs have shown promise in recognizing patterns in images, which makes them well suited to recognizing facial expressions. However, creating an accurate and reliable CNN model requires a large and diverse dataset, which can be challenging to collect.

Another challenge is integrating the technology with existing music players. To provide a seamless and integrated music experience, the emotion and emoji-based player must be able to communicate with existing players and supply the appropriate music to play. This requires an
understanding of the different music players available and the ability to integrate with them effectively. Finally, the system must adapt to the user's emotional state in real time. This requires a fast and efficient CNN model that can recognize emotions and facial expressions quickly and accurately. The player must also select appropriate music quickly and seamlessly to maintain the user's engagement and emotional response to the music.
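The real-time requirement can be illustrated with a small simulation. The classifier below is a placeholder standing in for the trained CNN, and the playlist file names are invented for the example; the point is the per-frame loop and the majority vote over recent frames, which keeps the music from flipping on every momentary change of expression:

```python
import time
from collections import deque

# Hypothetical emotion labels and playlist file names (illustrative only).
EMOTIONS = ("happy", "sad", "angry", "neutral")
PLAYLISTS = {e: f"{e}_playlist.m3u" for e in EMOTIONS}

def classify_frame(frame):
    # Placeholder for CNN inference on a camera frame;
    # returns a fixed label here so the sketch is self-contained.
    return "happy"

def track_for(emotion):
    return PLAYLISTS[emotion]

# Smooth predictions over the last few frames so the music selection
# does not change on every transient expression.
recent = deque(maxlen=5)
start = time.perf_counter()
for frame in range(30):                             # simulate 30 frames
    recent.append(classify_frame(frame))
    emotion = max(set(recent), key=recent.count)    # majority vote
    playlist = track_for(emotion)
elapsed = time.perf_counter() - start
print(playlist, f"{elapsed:.4f}s")
```

In a real deployment, the frames would come from a camera capture loop and `classify_frame` would run the trained model, so per-frame inference time is what determines whether the system feels responsive.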

1.2 SCOPE

The scope of an emotion and emoji-based music player using a CNN in Python is wide-ranging, with the potential to transform the way people interact with music. The technology can be applied in a variety of settings, including personal use, healthcare, and education.

For personal use, the player can provide a personalized and engaging music experience. Users can input their mood or facial expression, and the player selects appropriate music based on that input. This can help users discover music they might not otherwise have listened to and provides a more emotional and engaging experience.

In healthcare, the player can aid mental health treatment. Music therapy has been shown to have positive effects on mental health, and the player can enhance its effectiveness by providing personalized music that matches the patient's emotional state. The player can also be used to monitor and track the patient's emotional state, which can be useful in adjusting the treatment plan.

In education, the player can enhance the learning experience. Studies have shown that music can aid learning and retention of information, and music matched to the student's emotional state can help students focus better, improve memory retention, and make learning more engaging.

Another potential application is in the entertainment industry. Music plays a significant role in movies, TV shows, and video games, and an emotion-aware player can heighten the emotional impact of these media by providing music that matches the emotional state of a scene or character.

The scope is not limited to these settings. The technology can also be applied in industries such as advertising and marketing, where music can create an emotional connection with customers, and in the development of virtual assistants and chatbots, where music can enhance the user experience.


1.3 OBJECTIVE

The objective of an emotion and emoji-based music player using a CNN in Python is to provide a more personalized and engaging music experience for users. The technology aims to address the limitations of traditional music players by recognizing emotions and facial expressions and selecting appropriate music based on the user's emotional state. As discussed in the previous section, its scope is wide-ranging and spans various settings.

 The primary objective of an emotion and emoji-based music player is to recognize emotions and facial expressions accurately. This requires the development of a reliable and accurate CNN model that can recognize patterns in images and classify emotions and facial expressions correctly. The CNN model should be trained on a large and diverse dataset that includes a range of emotions and facial expressions to improve its accuracy and reliability.
 The second objective of emotion and emoji-based music player is to integrate the
technology with existing music players. This requires an understanding of the
different music players available and the ability to communicate with them
effectively. The integration should provide a seamless and integrated music
experience for users and should allow the technology to select appropriate music to
play based on the user's emotional state.
 The third objective of an emotion and emoji-based music player is to adapt to the user's emotional state in real time. This requires a fast and efficient CNN model
that can recognize emotions and facial expressions quickly and accurately.
Additionally, the music player should be able to select appropriate music quickly and
seamlessly to maintain the user's engagement and emotional response to the music.
 The fourth objective of emotion and emoji-based music player is to enhance the music
experience for users. This requires the technology to provide personalized and
engaging music that matches the user's emotional state. The technology should be
able to identify the user's emotional state accurately and select appropriate music that
matches the emotional state. Additionally, the technology should be able to provide a
variety of music genres to cater to the user's preferences and enhance the overall
music experience.
 The fifth objective of emotion and emoji-based music player is to provide an
emotional connection between the user and the music. This requires the technology to

provide music that matches the user's emotional state and enhances the emotional
impact of the music. The technology should be able to provide music that resonates
with the user and creates an emotional connection that enhances the overall music
experience.

1.4 MOTIVATION

The motivation behind developing an emotion and emoji-based music player using a CNN in Python is to enhance the music experience for users. Traditional music players offer a limited, generic experience that does not take the user's emotional state or preferences into account. An emotion and emoji-based player provides a personalized and engaging experience that matches the user's emotional state and heightens the emotional impact of the music.

The technology also has the potential to transform the way people interact with music. A more emotional and engaging experience can increase user engagement and satisfaction, which in turn can increase the usage and popularity of music players, driving revenue and growth for the music industry.

Another motivation is to provide a more accessible and inclusive music experience for individuals with disabilities or sensory impairments. Because the system recognizes facial expressions and emotions directly, it can select appropriate music for individuals who may not be able to operate traditional music players effectively, opening the music experience to users who have previously been excluded from it.

Finally, the technology has implications for fields such as healthcare and psychology. By recognizing emotions and facial expressions accurately, it can be used to monitor and assess the emotional state of individuals in various settings. This can be useful in mental health diagnosis and treatment, as well as in stress management and emotional regulation.


CHAPTER 2

Literature survey
2.1 INTRODUCTION

The introduction to a literature survey sets the stage for the review by providing an overview of the research topic, its importance, and the research questions the review aims to answer. It serves as a guide for the reader, providing context for the review and outlining the scope of the research project. It usually begins with a general introduction to the topic, which may include a brief history of the field, the current state of knowledge, and the significance of the topic, and it should clearly state the research questions and the rationale for why they matter.

The introduction should also highlight the gaps in knowledge and research that the review will address, identify the limitations of previous work, and explain how the current project will contribute to the field. It should describe the methodology used for the review, including the types of sources reviewed and how they were analyzed. Finally, it should give an overview of the structure of the review, including its organization and main themes, providing a clear roadmap that helps the reader understand how the various sections fit together.

2.2 RELATED WORK

Paper 1: Emotion-Aware Music Player

Publication: International Conference on Multimedia and Expo (ICME)

Author: Song-Hai Zhang, Lei Zhang, Meng Wang, Tat-Seng Chua

Year: 2012

Findings: The study proposed an emotion-aware music player that uses a machine learning
algorithm to recognize emotions based on facial expressions and physiological signals. The
player then selects appropriate music based on the recognized emotional state.


Paper 2: Emotion-Based Music Recommendation by Extracting Emotions from Lyrics

Publication: International Conference on Data Mining Workshops

Author: Peng Yang, Yiwei Li, Jianhong Zhou

Year: 2014

Findings: The study proposed an emotion-based music recommendation system that extracts
emotions from song lyrics using natural language processing techniques. The system then
recommends music based on the user's emotional state.

Paper 3: MoodPlay: An Emotion-Based Music Player

Publication: Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems

Author: Youngjun Cho, Seungwoo Kang, Woohun Lee, Joonah Park, Gerard Jounghyun Kim

Year: 2016

Findings: The study proposed MoodPlay, an emotion-based music player that uses facial
recognition technology to detect emotions and select appropriate music. The study showed
that the MoodPlay system was effective in selecting music that matched the user's emotional
state and preferences.

Paper 4: Emotion and Mood Recognition Using Machine Learning Techniques for
Music Recommendation

Publication: IEEE Access

Author: Aditya Singh, Sheena Singh

Year: 2019

Findings: The study proposed an emotion and mood recognition system that uses machine
learning techniques to recognize emotions based on facial expressions and physiological
signals. The system then recommends music based on the recognized emotional state.

Paper 5: Facial Expression Recognition for Music Emotion Classification Using a Deep
Learning Approach

Publication: IEEE Access


Author: Zhe Liu, Wei Xie, Weixing Zhang, Jianhua Tao, Qingming Huang

Year: 2020

Findings: The study proposed a deep learning approach to recognize facial expressions and
classify emotions for music emotion classification. The study showed that the deep learning
approach was effective in recognizing facial expressions and classifying emotions for music
emotion classification.

2.3 EXISTING SYSTEM

Emoji-based music players are a fun and innovative way to enhance the music listening experience. These players use emojis to convey the mood, genre, and other characteristics of music, helping users discover new songs and create custom playlists.

One example of an existing emoji-based music player is the Emoji Player, a web-based player that allows users to search for songs using emojis. The player generates a playlist of songs matching the selected emoji, making it easy to discover new music based on mood or preferences; users can also select multiple emojis to create a more personalized playlist.

Another example is the Splyce app, which uses emojis to help users create mashups of their favorite songs. Users select a few emojis representing the mood or genre of the music they want to combine, and the app generates a playlist of matching songs. The app also lets users create custom emojis by combining different symbols, providing even more flexibility in music selection.

Spotify is another popular music player that incorporates emojis into its interface. The app allows users to search for music using a variety of emojis, such as the "fire" emoji to find popular songs and the "heart" emoji to find songs with a romantic theme. Users can also customize their playlists with personalized emoji icons, making it easy to identify favorite songs.

Apple Music likewise uses emojis to enhance the user experience. The app allows users to create custom playlists and add personalized emoji icons to identify each playlist, and its "Moods" feature uses emojis to help users discover music based on their current mood or activity, such as "chill" or "workout."

One limitation of existing emoji-based music players is that they rely on user input to select the emojis that represent the mood or genre of the music, so users must have a good understanding of the emojis and their meanings to use the players effectively. These players also do not incorporate emotion recognition technology to detect the user's mood automatically and generate playlists from it.


Despite these limitations, existing emoji-based music players provide a fun and innovative
way to discover new music and create personalized playlists. They offer a more engaging and
interactive music listening experience, making it easier for users to find songs that match
their mood or preferences. In the future, as emotion recognition technology continues to
develop, it is likely that emoji-based music players will become even more advanced,
incorporating this technology to automatically detect the mood of the user and generate
playlists based on this information.

2.4 DISADVANTAGES

While emoji-based music players offer a unique and interactive way to discover new music
and create personalized playlists, they do have some disadvantages that can limit their
effectiveness. Some of the key disadvantages of existing emoji-based music players include:

 Limited Emojis: One of the main limitations of emoji-based music players is that they
rely on a limited set of emojis to represent moods, genres, and other characteristics of
music. This can make it challenging for users to find the exact type of music they are
looking for, especially if their mood or preferences are not well-represented by the
available emojis.
 Lack of Emotion Recognition: Existing emoji-based music players do not incorporate
emotion recognition technology to automatically detect the mood of the user and
generate playlists based on this information. This means that users must manually
select the emojis that represent their mood or preferences, which can be time-
consuming and potentially inaccurate.
 User-Dependent: Another disadvantage of existing emoji-based music players is that
they are highly dependent on user input. Users must have a good understanding of the
emojis and their meanings to use the players effectively, and they must actively select
the emojis that represent their mood or preferences. This can limit the effectiveness of
the players, especially for users who are less familiar with emojis or who have
difficulty selecting the right ones.
 Limited Customization: While existing emoji-based music players allow users to
create personalized playlists and add custom emoji icons, the level of customization is
still limited compared to other music players. Users may not be able to fully
customize their playlists or add other types of metadata to their songs, which can be a
drawback for more advanced users.
 Compatibility: Some emoji-based music players may not be compatible with certain
devices or platforms, which can limit their accessibility and effectiveness. Users may
need to use specific web browsers or mobile apps to access the players, or they may
not be able to use the players at all on certain devices.

2.5 PROPOSED SYSTEM

The proposed system for an emotion and emoji-based music player using a CNN in Python aims to offer a more personalized and accurate way of generating playlists based on the user's emotions and preferences. The system uses a convolutional neural network (CNN) for emotion recognition to analyze user input and detect the user's emotional state, eliminating the need for manual emoji selection and ensuring that the generated playlist truly reflects how the user feels.

The system comprises several stages. The first stage collects user input, which can be text, speech, or an image: the user can type a sentence describing their mood, record their voice, or upload an image. The input is then preprocessed to extract relevant features such as voice pitch, tone, and facial expressions, and this information is fed into the CNN model for emotion recognition.

The CNN model is trained on a large dataset of audio and text inputs with labeled emotions, allowing it to classify the user's emotional state accurately. Because it uses a deep learning approach, it can continue to learn and improve its accuracy with every input. The output of the model is the user's emotional state, which is used to generate a playlist of songs that match their mood and preferences.

The playlist is generated from a database of songs tagged with emotions, genres, and other characteristics. The system uses a rule-based approach to select appropriate songs: if the user is feeling happy, it selects upbeat, energetic songs with positive lyrics; if the user is feeling sad, it selects slow, soothing songs whose lyrics match that mood. Users can further customize their playlists by genre, artist, or other characteristics, enhancing the user experience.

One key advantage of the proposed system is its accuracy in detecting the user's emotional state. Using a CNN for emotion recognition gives more accurate and reliable detection and reduces the risk of selecting inappropriate songs. The greater customization options give users more control over their playlists, allowing them to further personalize their listening experience and making the system more engaging and enjoyable. Another advantage of the proposed
system is its scalability. The system can be easily adapted to accommodate a larger number of
users and a broader range of emotions and preferences. As technology continues to develop, it
is likely that this type of system will become even more advanced and effective, providing a
more seamless and personalized listening experience for users.
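The rule-based selection stage described above can be sketched as follows. The song database, titles, and tags here are entirely invented for illustration; a real system would query a tagged music library:

```python
# Hypothetical song database with tagged emotions, genres, and tempo,
# and a rule-based selector as described above (illustrative only).
SONGS = [
    {"title": "Sunrise Run",  "emotion": "happy", "genre": "pop",        "tempo": "fast"},
    {"title": "Blue Evening", "emotion": "sad",   "genre": "acoustic",   "tempo": "slow"},
    {"title": "Night Drive",  "emotion": "happy", "genre": "electronic", "tempo": "fast"},
    {"title": "Quiet Rain",   "emotion": "sad",   "genre": "ambient",    "tempo": "slow"},
]

def build_playlist(emotion, genre=None):
    """Select songs matching the detected emotion, optionally filtered by genre."""
    matches = [s for s in SONGS if s["emotion"] == emotion]
    if genre is not None:                      # user customization step
        matches = [s for s in matches if s["genre"] == genre]
    return [s["title"] for s in matches]

print(build_playlist("happy"))
print(build_playlist("sad", genre="ambient"))
```

The emotion label produced by the CNN plugs in as the `emotion` argument, and the optional `genre` filter corresponds to the customization options described above.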

2.6 ADVANTAGES

The proposed system for an emotion and emoji-based music player using CNN in Python
offers several advantages over existing systems. These advantages are summarized below:

 Accurate Emotion Recognition: The proposed system utilizes a CNN model for
emotion recognition, which ensures more accurate and reliable detection of the user's
emotional state. This is in contrast to existing systems that rely on manual emoji
selection or other less accurate methods of emotion detection. With more accurate
emotion recognition, the proposed system can generate more personalized and
appropriate playlists, enhancing the user experience.
 Greater Customization Options: The proposed system allows users to customize their
playlists based on genre, artist, or other characteristics, in addition to their emotional
state. This offers greater control over the listening experience, allowing users to
further personalize their playlists to their liking. This is in contrast to existing systems
that have limited customization options, offering only basic controls such as skip,
pause, and play.
 Enhanced User Experience: The combination of accurate emotion recognition and
greater customization options offers users a more engaging and enjoyable listening
experience. With playlists that truly reflect their emotional state and preferences, users
are more likely to enjoy their music and feel a deeper connection to it. This can lead
to increased engagement with the music player, higher satisfaction levels, and
increased loyalty to the brand.
 Scalability: The proposed system can be easily adapted to accommodate a larger
number of users and a broader range of emotions and preferences. This makes it
suitable for use in a variety of settings, such as music streaming services, healthcare,
and therapy. As technology continues to develop, it is likely that this type of system
will become even more advanced and effective, providing a more seamless and
personalized listening experience for users.
 Integration with Other Systems: The proposed system can be integrated with other
systems, such as wearable devices, smart homes, and virtual assistants, to offer a more
comprehensive and seamless user experience. For example, the system can be
integrated with a wearable device that measures heart rate and other physiological
data to detect the user's emotional state. This information can then be used to generate
a more accurate playlist, enhancing the user experience.
 Improved Accessibility: The proposed system can also improve accessibility for users
with disabilities, such as those with visual or hearing impairments. By offering
multiple input options, such as text, speech, and image, the system can accommodate
a broader range of users and provide a more inclusive experience.

2.7 FEASIBILITY STUDY

A feasibility study is the process of assessing whether a project or idea is viable, practical, and achievable. It involves analyzing the strengths and weaknesses of the proposed project and evaluating its potential for success. Feasibility studies are typically conducted before a project is implemented, to determine whether it is worth pursuing and to identify any challenges or obstacles that may need to be addressed.

There are several types of feasibility study, including financial, market, technical, operational, and legal feasibility. Financial feasibility analyzes the project's financial viability and whether it is financially feasible to pursue. Market feasibility analyzes the potential demand for the product or service and whether it is commercially viable. Technical feasibility evaluates whether the project can be implemented with the available technology and resources. Operational feasibility analyzes whether the project can be implemented successfully within the organization's operational structure. Legal feasibility analyzes whether the project complies with applicable laws and regulations.

The purpose of a feasibility study is to identify the potential risks and benefits of the proposed project and to determine whether it is worth pursuing, helping stakeholders make informed decisions about investing time, resources, and money. A feasibility study typically includes a detailed analysis of the proposed project, including market research, cost-benefit analysis, risk assessment, and stakeholder analysis. Feasibility studies are an essential part of project planning and management: they minimize risk by identifying potential challenges and obstacles before the project
is implemented. A feasibility study also helps to ensure that the project is aligned with the
organization's strategic goals and objectives and that it is economically viable. A feasibility
study is an important step in the development of any new system or product, including an
emotion and emoji-based music player using CNN in Python. This study assesses the
technical, economic, legal, and operational feasibility of the proposed system. In this section,
we will discuss the feasibility study of the proposed system in more detail.

 Technical Feasibility: Technical feasibility refers to the system's ability to meet the
technical requirements and standards. The proposed system's technical feasibility is
assessed in terms of the availability of the required hardware and software, the skillset
of the development team, and the system's compatibility with existing systems. In the
case of an emotion and emoji-based music player using CNN in Python, the technical
feasibility is high as the necessary hardware and software components are widely
available. CNN models are also widely used in image and speech recognition tasks,
and there is an abundance of research on the subject. Python is a widely used
language in machine learning, and there are many libraries and frameworks available
that can be used for development. Therefore, the proposed system is technically
feasible.
 Economic Feasibility: Economic feasibility refers to the system's ability to be
developed and maintained within the available budget. The proposed system's
economic feasibility is assessed by analyzing the costs associated with hardware and
software, development, testing, deployment, and maintenance. In the case of an
emotion and emoji-based music player using CNN in Python, the costs associated
with hardware and software are minimal as the necessary components are widely
available and affordable. However, development, testing, and deployment costs can be
significant, depending on the scope and complexity of the project. Therefore, the
economic feasibility of the proposed system must be carefully evaluated to ensure that
it can be developed and maintained within the available budget.
 Legal Feasibility: Legal feasibility refers to the system's ability to comply with legal
requirements, such as data privacy laws and regulations. In the case of an emotion and
emoji-based music player using CNN in Python, legal feasibility is high as the system
does not collect or store any personal data beyond what is necessary for its operation.
However, the system must comply with data privacy laws and regulations, and the
development team must ensure that the system is secure and protected against
potential data breaches.
 Operational Feasibility: Operational feasibility refers to the system's ability to be used
effectively and efficiently in its intended environment. The proposed system's
operational feasibility is assessed by analyzing the system's usability, functionality,
and performance. In the case of an emotion and emoji-based music player using CNN
in Python, the system's usability is high as it is designed to be intuitive and easy to
use. However, the system's functionality and performance must be carefully evaluated
to ensure that it can effectively recognize and respond to the user's emotional state.
This requires thorough testing and optimization to ensure that the system is accurate
and reliable.

2.8 SDLC

The Software Development Life Cycle (SDLC) is a process used by software development
teams to design, develop, and maintain high-quality software products. It is a structured
approach that involves a series of phases, each with its own set of tasks and deliverables. The
SDLC is a framework that enables software development teams to create software that is of
high quality, meets user requirements, and is delivered on time and within budget.

The SDLC consists of several phases, including planning, analysis, design, development,
testing, and maintenance. The following is a detailed description of each of these phases:

 Planning: The planning phase is the initial phase of the SDLC, and it involves
defining the scope of the project, establishing goals and objectives, and identifying the
stakeholders. During this phase, the project manager creates a project plan that
includes a project schedule, budget, and resource requirements.
 Analysis: The analysis phase involves gathering and analyzing requirements from
stakeholders, such as end-users and clients. This phase involves conducting
interviews, surveys, and focus groups to understand the needs and preferences of the
stakeholders. The goal of this phase is to develop a clear and detailed understanding
of the requirements that the software should meet.
 Design: The design phase involves creating a detailed design of the software based on
the requirements gathered during the analysis phase. This phase involves creating a
system architecture, designing the user interface, and creating a detailed technical
design. The goal of this phase is to create a comprehensive design that serves as a
blueprint for the development team.
 Development: The development phase involves coding and building the software
based on the design created during the previous phase. This phase involves writing
code, integrating different components, and testing the software to ensure that it meets
the requirements. The development team works closely with the design team to ensure
that the software is built according to the specifications.
 Testing: The testing phase involves verifying that the software meets the requirements
and is free of defects. This phase involves several types of testing, including unit
testing, integration testing, system testing, and acceptance testing. The goal of this
phase is to ensure that the software functions correctly and meets the needs of the
stakeholders.
 Maintenance: The maintenance phase involves maintaining and enhancing the
software once it has been deployed. This phase involves fixing defects, adding new
features, and updating the software to meet changing user needs. The maintenance
phase is ongoing and can continue for the life of the software.

The SDLC is a continuous process that involves iterating through these phases multiple times
until the software meets the requirements and is of high quality. Each phase of the SDLC
requires careful planning and execution, and the output of each phase serves as input for the
subsequent phases.


2.9 SDLC METHODOLOGY


Software Development Life Cycle (SDLC) models are a collection of design,
development, and testing procedures used in the industry today. While there is no
single best or most effective SDLC methodology, it is important to be familiar with the
most popular models that can be applied to projects within a business.

 Agile
 Lean
 Waterfall
 Iterative
 Spiral
 DevOps

Each of these approaches varies in some ways from the others, but all have a common
purpose: to help teams deliver high-quality software as quickly and cost-effectively as
possible.


1. Agile

The Agile model first emerged in 2001 and has since become the de facto industry
standard. Some organizations value the Agile methodology so much that they apply it to
other kinds of projects, including non-technical initiatives.

In the Agile model, fast failure is a good thing. The approach produces ongoing release
cycles, each featuring small, incremental changes from the previous release. At each
iteration, the product is tested. The Agile model helps teams identify and resolve small
issues on projects before they grow into larger problems, and it engages business
stakeholders to give feedback throughout the development process.

As part of their embrace of this methodology, many teams also apply an Agile framework
known as Scrum to help structure more complex development projects. Scrum teams
work in sprints, which usually last two to four weeks, to complete assigned tasks. Daily
Scrum meetings help the whole team monitor progress throughout the project, and the
Scrum Master is tasked with keeping the team focused on its goal.

2. Lean

The Lean model for software development is inspired by lean manufacturing practices
and principles. The seven Lean principles are: eliminate waste, amplify learning, decide
as late as possible, deliver as fast as possible, empower the team, build in integrity, and
see the whole.

The Lean process is about working only on what must be worked on at the moment, so
there is no room for multitasking. Project teams are also focused on finding opportunities
to cut waste at every turn throughout the SDLC process, from dropping unnecessary
meetings to reducing documentation.

The Agile model is in fact a Lean method for the SDLC, but with some notable
differences. One is how each prioritizes customer satisfaction: Agile makes it the top
priority from the outset, creating a flexible process in which project teams can respond
quickly to stakeholder feedback throughout the SDLC. Lean, meanwhile, emphasizes the
elimination of waste as a way to create more overall value for customers, which in turn
helps improve satisfaction.


3. Waterfall

Some experts argue that the Waterfall model was never meant to be a process model for
real projects. Regardless, Waterfall is widely considered the oldest of the structured
SDLC methodologies. It is also a very straightforward approach: finish one phase, then
move on to the next. No going back. Each stage relies on information from the previous
stage and has its own project plan.

The downside of Waterfall is its rigidity. It is simple to understand and easy to manage,
but early delays can throw off the entire project timeline. With no room for revisions
once a stage is completed, problems cannot be fixed until you reach the maintenance
stage. This model does not work well if flexibility is needed, or if the project is
long-term and ongoing.

Even more rigid is the related Verification and Validation model, also known as the
V-model. This linear development methodology sprang from the Waterfall approach. It is
characterized by a corresponding testing phase for each development stage. As in
Waterfall, each stage begins only after the previous one has ended. This SDLC model can
be useful, provided your project has no unknown requirements.

4. Iterative

The Iterative model is repetition incarnate. Instead of starting with fully known
requirements, project teams implement a set of software requirements, then test, evaluate,
and pinpoint further requirements. A new version of the software is produced with each
phase, or iteration. Rinse and repeat until the complete system is ready.

An advantage of the Iterative model over other common SDLC methodologies is that it
produces a working version of the project early in the process and makes it less
expensive to implement changes. One disadvantage: repetitive cycles can consume
resources quickly.

One example of an Iterative model is the Rational Unified Process (RUP), developed by
IBM's Rational Software division. RUP is a process product, designed to enhance team
productivity for a wide range of projects and organizations.

RUP divides the development process into four phases:

• Inception, when the idea for a project is conceived

• Elaboration, when the project is further defined and resources are evaluated

• Construction, when the project is developed and completed

• Transition, when the product is released

Each phase of the project involves business modeling, analysis and design,
implementation, testing, and deployment.

5. Spiral

One of the most flexible SDLC methodologies, the Spiral model takes a cue from the
Iterative model and its repetition. The project passes through four phases (planning, risk
analysis, engineering, and evaluation) over and over in a figurative spiral until completed,
allowing for multiple rounds of refinement.

The Spiral model is typically used for large projects. It enables development teams to
build a highly customized product and to incorporate user feedback early on. Another
benefit of this SDLC model is risk management: each iteration starts by looking ahead to
potential risks and working out how best to avoid or mitigate them.


CHAPTER 3

SOFTWARE REQUIREMENT SPECIFICATION


INTRODUCTION

Software Requirement Specification (SRS) is a document that outlines the requirements of a
software system to be developed. It serves as a blueprint for the development team to ensure
that they understand the expectations and needs of the client or end-user. An SRS document is
typically created at the beginning of the software development process and is used as a
reference throughout the development lifecycle. The purpose of an SRS document is to
clearly define the requirements of the software system to be developed, including functional,
non-functional, and performance-related requirements. It also includes details on the expected
behavior of the system, user interfaces, data structures, and algorithms that are to be used.
The document is used as a reference by the development team to ensure that the system they
are developing meets the requirements and expectations of the client or end-user.

An SRS document typically includes several sections that provide details on the software
system to be developed:

 Introduction: provides an overview of the system and its purpose, typically including a
description of the system, its objectives, and a summary of the user requirements.
 Functional requirements: outlines the features and capabilities that the system must
have, typically including use cases, functional requirements, and business rules.
 Non-functional requirements: outlines the performance, reliability, and security
requirements of the system.
 Design constraints: outlines any design constraints that must be considered when
developing the system, typically including hardware and software constraints,
compatibility requirements, and interface requirements.
 User interface: outlines the user interface requirements of the system, including the
look and feel of the system, navigation, and user input/output.
 System features: outlines the features and capabilities of the system, including system
modules, data structures, and algorithms.
 Performance requirements: outlines the performance requirements of the system,
including response time, throughput, and resource utilization.
 Software quality attributes: outlines the quality attributes of the system, including
maintainability, testability, and portability.
 Appendices: includes any additional information that is relevant to the software
system being developed, such as diagrams, charts, and reference materials.

3.1 FUNCTIONAL REQUIREMENTS

Functional requirements for the emotion and emoji-based music player using CNN in Python
can be categorized into four main sections: audio input, emotion recognition, music selection,
and audio output.

Audio Input: The audio input functional requirements involve capturing audio data from the
user. This can be done through a microphone connected to the device or by allowing the user
to upload audio files. The following functional requirements are necessary for audio input:

 The system should be able to record audio data with a minimum sample rate of 44.1
kHz.
 The system should be able to capture audio data from the microphone for a minimum
duration of 30 seconds.
 The system should be able to accept audio files in common formats like WAV, MP3,
and AAC.
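As an illustration of the audio-input constraints above, a minimal validation helper might look like the sketch below. The function name and return convention are our own, and a real system would inspect the audio stream itself rather than trust a caller-reported sample rate.

```python
# Hypothetical guard for the audio-input requirements above.
ACCEPTED_FORMATS = {".wav", ".mp3", ".aac"}  # formats named in the requirements
MIN_SAMPLE_RATE = 44100  # minimum sample rate in Hz (44.1 kHz)

def validate_upload(filename, sample_rate):
    """Reject uploads that violate the audio-input requirements."""
    ext = filename[filename.rfind("."):].lower() if "." in filename else ""
    if ext not in ACCEPTED_FORMATS:
        return False, f"unsupported format: {ext or 'none'}"
    if sample_rate < MIN_SAMPLE_RATE:
        return False, f"sample rate {sample_rate} Hz is below the 44.1 kHz minimum"
    return True, "ok"

print(validate_upload("voice.mp3", 44100))  # (True, 'ok')
print(validate_upload("voice.ogg", 48000))  # rejected: unsupported format
```

In practice a library such as a WAV/MP3 parser would supply the true sample rate; this sketch only shows where the checks from the requirement list would sit.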

Emotion Recognition: The emotion recognition functional requirements involve processing
the audio data to recognize the user's emotional state. This can be achieved through the use of
CNN models trained on emotion recognition datasets. The following functional requirements
are necessary for emotion recognition:

 The system should be able to analyze the audio data using a CNN model to recognize
the user's emotional state.
 The system should be able to recognize at least four emotions: happy, sad, angry, and
calm.
 The system should be able to process the audio data in real-time to provide an
immediate emotional response.
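The recognition step itself would be handled by a trained CNN; as a simplified, framework-free sketch of only the final stage, the code below turns a hypothetical vector of raw class scores (as a CNN's output layer might produce) into one of the four required emotion labels using softmax and argmax. The scores and helper names are illustrative, not part of an actual trained model.

```python
import math

# Label set matching the four emotions named in the requirements.
EMOTIONS = ["happy", "sad", "angry", "calm"]

def softmax(scores):
    """Convert raw CNN output scores into probabilities that sum to 1."""
    shifted = [math.exp(s - max(scores)) for s in scores]  # shift for numerical stability
    total = sum(shifted)
    return [e / total for e in shifted]

def predict_emotion(scores):
    """Return the highest-probability emotion label and its probability."""
    probs = softmax(scores)
    best = max(range(len(probs)), key=lambda i: probs[i])  # argmax
    return EMOTIONS[best], probs[best]

label, confidence = predict_emotion([2.0, 0.1, -1.3, 0.5])
print(label)  # class with the largest score wins: "happy"
```

A real pipeline would obtain the score vector from the trained network (for example, a Keras model's output layer) rather than hard-coded numbers.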

Music Selection: The music selection functional requirements involve selecting appropriate
music based on the user's emotional state. This can be achieved by mapping emotions to
music genres or selecting specific songs that match the user's emotional state. The following
functional requirements are necessary for music selection:

 The system should be able to select appropriate music based on the user's emotional
state.
 The system should be able to select music from a pre-defined set of genres or specific
songs.
 The system should be able to provide multiple music options for each emotional state.
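A minimal sketch of the emotion-to-music mapping described above might use a simple lookup table that always offers multiple options per emotional state. The file names and groupings below are invented placeholders, not a real catalogue.

```python
import random

# Hypothetical emotion-to-track mapping; a real deployment would draw
# from a curated library or streaming catalogue.
PLAYLISTS = {
    "happy": ["upbeat_pop_01.mp3", "dance_02.mp3", "funk_03.mp3"],
    "sad":   ["acoustic_01.mp3", "piano_02.mp3", "ballad_03.mp3"],
    "angry": ["rock_01.mp3", "metal_02.mp3", "punk_03.mp3"],
    "calm":  ["ambient_01.mp3", "classical_02.mp3", "lofi_03.mp3"],
}

def select_music(emotion, count=2):
    """Return several candidate tracks for the detected emotion."""
    options = PLAYLISTS.get(emotion)
    if options is None:
        raise ValueError(f"unsupported emotion: {emotion!r}")
    count = min(count, len(options))
    return random.sample(options, count)  # multiple options, per the requirements

print(select_music("calm"))  # two tracks drawn from the "calm" list
```

The `count` parameter is our own addition to satisfy the "multiple music options" requirement; the random sampling is one simple way to vary the selection between sessions.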

Audio Output: The audio output functional requirements involve playing the selected music
for the user. The following functional requirements are necessary for audio output:

 The system should be able to play the selected music through the device's speakers or
headphones.
 The system should be able to adjust the volume of the music based on the user's
preferences.
 The system should be able to pause or stop the music playback at the user's request.


3.2 NON FUNCTIONAL REQUIREMENTS

Non-functional requirements (NFRs) are the quality attributes that define the performance,
reliability, usability, and other factors that govern the overall quality of the software system.
In the context of the emotion and emoji-based music player, some of the non-functional
requirements are as follows:

 Performance: The system should respond quickly to user input and generate music
playlists in real-time. The time taken to recognize the user's emotions and generate the
playlist should be minimal, preferably in seconds.
 Reliability: The system should be reliable and free from errors, crashes, or unexpected
shutdowns. It should be able to function without interruptions for extended periods of
time, and if any issues arise, they should be addressed promptly.
 Usability: The system should be user-friendly and easy to use. The interface should be
simple, intuitive, and visually appealing, with easily accessible functions and features.
 Security: The system should be secure and protect user data from unauthorized access,
modification, or theft. Appropriate measures should be taken to ensure that the user's
personal information, music playlists, and other sensitive data are kept safe and
confidential.
 Compatibility: The system should be compatible with a wide range of devices and
operating systems. It should be able to run on various platforms, including desktops,
laptops, smartphones, and tablets.
 Scalability: The system should be scalable and able to accommodate large volumes of
users and music data. It should be able to grow and expand without compromising its
performance or reliability.
 Maintainability: The system should be easy to maintain and upgrade. Any updates or
modifications should be easy to implement without disrupting the system's
functionality or causing any data loss.
 Accessibility: The system should be accessible to people with disabilities, including
those who are visually impaired or have mobility issues. It should be designed to meet
the accessibility standards set forth by the World Wide Web Consortium (W3C).
 Performance efficiency: The system should make optimal use of system resources,
including memory, processing power, and storage. It should be designed to operate
efficiently without putting undue stress on the underlying hardware.
 Testability: The system should be designed in such a way that it can be easily tested
and validated. The system should be tested for functional and non-functional
requirements to ensure that it meets the specified criteria.

3.3 TOOLS AND TECHNOLOGIES USED

3.3.1 OVERVIEW OF PYTHON

Python is an interpreted, high-level, general-purpose programming language. Created by
Guido van Rossum and first released in 1991, Python's design philosophy emphasizes code
readability with its notable use of significant whitespace. Its language constructs and object-
oriented approach aim to help programmers write clear, logical code for small and large-scale
projects.

Python is dynamically typed and garbage-collected. It supports multiple programming
paradigms, including procedural, object-oriented, and functional programming. Python is
often described as a "batteries included" language due to its comprehensive standard library.

Python was conceived in the late 1980s as a successor to the ABC language. Python 2.0,
released in 2000, introduced features like list comprehensions and a garbage collection
system capable of collecting reference cycles. Python 3.0, released in 2008, was a major
revision of the language that is not completely backward-compatible, and much Python 2
code does not run unmodified on Python 3.

The Python 2 language, i.e. Python 2.7.x, was officially discontinued on January 1, 2020
(first planned for 2015), after which no further security patches or other improvements will
be released for it. With Python 2's end of life, only Python 3.5.x and later are supported.

Python interpreters are available for many operating systems. A global community of
programmers develops and maintains CPython, an open source reference implementation. A
non-profit organization, the Python Software Foundation, manages and directs resources for
Python and CPython development.


Python was conceived in the late 1980s by Guido van Rossum at Centrum Wiskunde &
Informatica (CWI) in the Netherlands as a successor to the ABC language (itself inspired by
SETL), capable of exception handling and interfacing with the Amoeba operating system. Its
implementation began in December 1989. Van Rossum shouldered sole responsibility for the
project, as the lead developer, until July 12, 2018, when he announced his "permanent
vacation" from his responsibilities as Python's Benevolent Dictator For Life, a title the
Python community bestowed upon him to reflect his long-term commitment as the project's
chief decision-maker. He now shares his leadership as a member of a five-person steering
council. In January 2019, active Python core developers elected Brett Cannon, Nick
Coghlan, Barry Warsaw, Carol Willing and Van Rossum to a five-member "Steering Council"
to lead the project.

Python 2.0 was released on 16 October 2000 with many major new features, including a
cycle-detecting garbage collector and support for Unicode.

Python 3.0 was released on 3 December 2008. It was a major revision of the language that is
not completely backward-compatible. Many of its major features were backported to the
Python 2.6.x and 2.7.x version series. Releases of Python 3 include the 2to3 utility, which
automates (at least partially) the translation of Python 2 code to Python 3.

Python 2.7's end-of-life date was initially set for 2015, then postponed to 2020 out of concern
that a large body of existing code could not easily be forward-ported to Python 3.

3.3.2 HTML

The HyperText Markup Language, or HTML, is the standard markup language for documents
designed to be displayed in a web browser. It can be assisted by technologies such
as Cascading Style Sheets (CSS) and scripting languages such as JavaScript.

Web browsers receive HTML documents from a web server or from local storage
and render the documents into multimedia web pages. HTML describes the structure of a web
page semantically and originally included cues for the appearance of the document.

HTML elements are the building blocks of HTML pages. With HTML
constructs, images and other objects such as interactive forms may be embedded into the
rendered page. HTML provides a means to create structured documents by denoting
structural semantics for text such as headings, paragraphs, lists, links, quotes and other items.
HTML elements are delineated by tags, written using angle brackets. Tags such
as <img /> and <input /> directly introduce content into the page. Other tags such
as <p> surround and provide information about document text and may include other tags as
sub-elements. Browsers do not display the HTML tags, but use them to interpret the content
of the page.

3.3.3 CSS

Cascading Style Sheets (CSS) is a style sheet language used for describing
the presentation of a document written in a markup language such as HTML. CSS is a
cornerstone technology of the World Wide Web, alongside HTML and JavaScript.

CSS is designed to enable the separation of presentation and content, including layout, colors,
and fonts. This separation can improve content accessibility and provide more flexibility and
control in the specification of presentation characteristics. It also enables multiple web pages
to share formatting by specifying the relevant CSS in a separate .css file, which reduces
complexity and repetition in the structural content and allows the .css file to be cached to
improve page load speed across the pages that share it.

Separation of formatting and content also makes it feasible to present the same markup page
in different styles for different rendering methods, such as on-screen, in print, by voice (via
speech-based browser or screen reader), and on Braille-based tactile devices. CSS also has
rules for alternate formatting if the content is accessed on a mobile device.

The name cascading comes from the specified priority scheme to determine which style rule
applies if more than one rule matches a particular element. This cascading priority scheme is
predictable.

3.3.4 JavaScript

JavaScript (/ˈdʒɑːvəˌskrɪpt/), often abbreviated as JS, is a programming language that
conforms to the ECMAScript specification. JavaScript is high-level, often just-in-time
compiled, and multi-paradigm. It has curly-bracket syntax, dynamic typing, prototype-
based object-orientation, and first-class functions.


Alongside HTML and CSS, JavaScript is one of the core technologies of the World Wide
Web. Over 97% of websites use it client-side for web page behavior, often incorporating
third-party libraries. All major web browsers have a dedicated JavaScript engine to execute
the code on the user's device.

As a multi-paradigm language, JavaScript supports event-driven, functional,
and imperative programming styles. It has application programming interfaces (APIs) for
working with text, dates, regular expressions, standard data structures, and the Document
Object Model (DOM).

The ECMAScript standard does not include any input/output (I/O), such
as networking, storage, or graphics facilities. In practice, the web browser or other runtime
system provides JavaScript APIs for I/O. JavaScript engines were originally used only in web
browsers, but they are now core components of other software systems, most
notably servers and a variety of applications.

3.4 SYSTEM REQUIREMENT

3.4.1 MINIMUM HARDWARE REQUIREMENTS

• Processor : Intel i5 2.53 GHz

• Hard Disk : 30 GB

• RAM : 4 GB or above

3.4.2 SOFTWARE REQUIREMENTS

 Operating system : Windows 7 and above
 Coding Language : Python
 Version : 3.6 and above
 IDE : PyCharm/Visual Studio 2015
 Framework : Flask


CHAPTER 4

SYSTEM DESIGN
4.1 INTRODUCTION

The purpose of the design phase is to plan a solution to the problem specified by the
requirements document. This phase is the first step in moving from the problem domain to
the solution domain. In other words, starting with what is needed, design takes us toward
how to satisfy those needs. The design of a system is perhaps the most critical factor
affecting the quality of the software; it has a major impact on the later phases, particularly
testing and maintenance. The design activity often produces three separate outputs.

 Architecture Design
 High Level Design
 Detailed Design

 Architecture Design
Architecture design views a system as a combination of many different components and
considers how they interact with one another to produce the desired result. The focus is
on identifying components or subsystems and how they connect. In other words, the
attention is on the major parts that are required.
 High Level Design
High level design identifies the modules that should be built for developing the system
and the specifications of those modules. By the end of system design, all major data
structures, file formats, output formats, and so on are also fixed. The emphasis is on
identifying the modules. In other words, the attention is on what modules are needed.
 Detailed Design
In detailed design, the internal logic of each of the modules is specified. The emphasis is
on designing the logic for each module. In other words, the issue is how the modules can
be implemented in software.
A design methodology is a systematic approach to creating a design by applying a set of
techniques and guidelines. Most methodologies focus on high level design.


4.2 USE CASE DIAGRAM

A use case diagram is a graphical representation of the interactions between users (actors) and
a system. It is a type of behavioral diagram that illustrates the functionality provided by a
system in terms of actors, their goals (represented as use cases), and any dependencies
between those use cases. In a use case diagram, the system is depicted as a box, with the use
cases represented as ovals connected to the box by lines. The actors are depicted as stick
figures, also connected to the system box by lines. The lines connecting actors and use cases
indicate the interactions between them. The main purpose of a use case diagram is to help
stakeholders understand the requirements of a system, and how it will be used by different
actors. It is a useful tool for communication between stakeholders, and for identifying and
organizing the requirements of a system. Use case diagrams are typically created during the
requirements analysis phase of a software development project, and are used throughout the
development process to help ensure that the system meets the needs of its users. In addition to
the system box, actors, and use cases, a use case diagram may also include other elements
such as include relationships (indicating that one use case includes another), extend
relationships (indicating that one use case can be extended by another), and generalization
relationships (indicating that one use case is a more specialized version of another).


4.3 SEQUENCE DIAGRAM

A sequence diagram shows object interactions arranged in time sequence. It depicts the
objects and classes involved in the scenario and the sequence of messages exchanged
between the objects needed to carry out the functionality of the scenario. Sequence diagrams
are sometimes called event diagrams, event scenarios.

UML sequence diagrams are used to represent or model the flow of messages, events and
actions between the objects or components of a system. Time is represented in the vertical
direction showing the sequence of interactions of the header elements, which are displayed
horizontally at the top of the diagram. Sequence diagrams are used primarily to design,
document, and validate the architecture, interfaces, and logic of the system by describing the
sequence of actions that need to be performed to complete a task or scenario. UML sequence
diagrams are useful design tools because they provide a dynamic view of the system
behaviour.

 User sequence diagram

4.4 ACTIVITY DIAGRAM

An activity diagram is a graphical representation of the flow of activities in a system. It is
used to model the behavior of a system and show the sequence of actions that take place in a
particular use case. Below is an example activity diagram for an emotion and emoji-based
music player using CNN in Python. In this diagram, the main activities are represented as
nodes, and the transitions between them are represented by arrows. The diagram starts with
the user logging into the system, and then selecting a song to play. The system then analyzes
the user's facial expression or inputted emotion to determine their current emotional state.


Based on this emotional state, the system selects a song that matches the user's emotion and
plays it. If the user wants to skip the current song, they can do so by clicking the "Skip Song"
button. They can also adjust the volume of the music player by using the "Change Volume"
slider. If the user wants to select a song based on an emoji, they can click the "Select Emoji"
button and choose an emoji that represents their current emotional state. The system will then
select a song that matches the chosen emoji and play it. Overall, the activity diagram provides
a visual representation of the sequence of actions that take place in the emotion and emoji-
based music player system. It helps to clarify the flow of activities and can be useful for
understanding the behavior of the system in different use cases.
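The selection flow described above can be sketched as a small Python routine. The emotion labels, the emoji-to-emotion mapping, and the song names below are illustrative assumptions, not the project's actual data; the sketch only shows how a detected emotion or a chosen emoji resolves to a song, and how "Skip Song" advances within a playlist.

```python
# Hypothetical emoji-to-emotion mapping and emotion-labelled song library.
EMOJI_TO_EMOTION = {"😊": "happy", "😢": "sad", "😠": "angry"}

PLAYLISTS = {
    "happy": ["upbeat_song.mp3", "dance_track.mp3"],
    "sad": ["slow_ballad.mp3"],
    "angry": ["calming_piano.mp3"],
}

def pick_song(emotion=None, emoji=None, index=0):
    """Return a song for the detected emotion, or for a user-chosen emoji."""
    if emoji is not None:
        # An emoji selection overrides the detected emotion.
        emotion = EMOJI_TO_EMOTION.get(emoji, "happy")  # assumed default
    songs = PLAYLISTS[emotion]
    return songs[index % len(songs)]  # "Skip Song" just advances the index

print(pick_song(emotion="sad"))        # slow_ballad.mp3
print(pick_song(emoji="😊", index=1))  # dance_track.mp3
```

A real player would replace the dictionaries with the trained model's output and a tagged music library, but the control flow stays the same.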

4.5 DATAFLOW DIAGRAM

A data flow diagram (DFD) graphically represents the functions, or processes, that capture,
manipulate, store, and distribute data between a system and its environment and between the
components of a system. The visual representation makes it a good communication tool
between the user and the system designer. The structure of a DFD allows starting from a
broad overview and expanding it into a hierarchy of detailed diagrams. DFDs are often used
for the following reasons:
 Logical information flow of the system
 Determination of physical system construction requirements
 Simplicity of notation
 Establishment of manual and automated systems requirements.

Basic Notation:

Process: any process that changes the data, producing an output. It might perform
computations, or sort data based on logic, or direct the data flow based on business rules. A
short label is used to describe the process, such as “Submit payment.”

Data store: files or repositories that hold information for later use, such as a database table or
a membership form. Each data store receives a simple label, such as “Orders.”

External entity: an outside system that sends or receives data, communicating with the
system being diagrammed. They are the sources and destinations of information entering or
leaving the system. They might be an outside organization or person, a computer system, or
a business system. They are also known as terminators, sources and sinks, or actors. They
are typically drawn on the edges of the diagram.

Data flow: the route that data takes between the external entities, processes and data stores. It
portrays the interface between the other components and is shown with arrows, typically
labeled with a short data name, like “Billing details.”

 Level 0 admin dataflow diagram

[Figure: Level 0 DFD. The live video stream feeds the emotion-based music player process,
which performs feature extraction and emotion recognition and drives the music player.]

 Level 1 user dataflow diagram

[Figure: Level 1 DFD. The user's live video stream is converted to frames, pre-processed,
and passed through feature extraction and model comparison for emotion recognition; the
music player then plays songs drawn from the music data store.]

CHAPTER 5

IMPLEMENTATION
5.1 INTRODUCTION

Implementation is the process of turning the design and specification of a system into a
working system. It involves coding, testing, and deploying the system to the target
environment. Implementation is a critical phase of the software development life cycle
(SDLC), as it involves turning the conceptual design into a tangible product. In this chapter,
we discuss the implementation process in general.

The implementation process typically involves the following steps:

 Coding: This is the process of writing the code for the system. The code should be
written according to the design specifications and requirements. Coding can be done
using a variety of programming languages and tools, depending on the nature of the
system being developed.
 Testing: Once the code has been written, it needs to be tested to ensure that it is
working correctly. Testing involves running the system and checking for errors, bugs,
and other issues. Testing can be done manually or using automated testing tools.
 Debugging: Debugging is the process of identifying and fixing errors in the code. It
involves analyzing the code, looking for errors, and making corrections as needed.
 Integration: Once the code has been tested and debugged, it needs to be integrated
into the larger system. Integration involves combining the different modules and
components of the system into a single working system.
 Deployment: Deployment is the process of installing the system in the target
environment. This can involve installing software on servers, configuring network
settings, and setting up user accounts.
 Maintenance: Once the system has been deployed, it needs to be maintained to ensure
that it continues to work correctly. Maintenance involves monitoring the system,
identifying and fixing errors, and making updates and upgrades as needed.

During the implementation process, it is important to follow best practices to ensure that the
system is developed correctly. Some best practices include:


 Code standards: Follow coding standards and best practices to ensure that the code is
easy to read, maintain, and debug.
 Version control: Use a version control system to track changes to the code and ensure
that different versions can be managed and maintained.
 Documentation: Document the code and system to make it easier for others to
understand and use.
 Testing: Test the system thoroughly to ensure that it is working correctly and meets
the requirements.
 Deployment: Follow best practices for deployment, such as using automation tools
and testing the system in a staging environment before deploying it to production.
 Maintenance: Develop a maintenance plan and follow best practices for monitoring
and maintaining the system over time.

5.2 ALGORITHMS

5.2.1 CNN

CNN, which stands for Convolutional Neural Network, is a type of artificial neural network
that is particularly effective for image classification and recognition tasks. It is widely used in
deep learning applications, including computer vision, speech recognition, natural language
processing, and robotics. The key feature of a CNN is its ability to automatically learn
features from raw data. It does this through the use of convolutional layers, which perform a
series of convolutions on the input data. Each convolutional layer consists of a set of filters,
also known as kernels, which are used to scan over the input data and produce a set of feature
maps. These feature maps capture the presence of certain visual patterns or shapes within the
input data. After the convolutional layers, the output is fed into a series of fully connected
layers, which are used to perform the actual classification task. In these layers, the output
from the previous layer is flattened and passed through a series of fully connected nodes,
which perform a linear transformation on the data. The final layer of the network is typically
a softmax layer, which outputs a probability distribution over the possible classes. The
training process for a CNN involves adjusting the weights and biases of the network in order
to minimize a loss function, which measures the difference between the predicted output of
the network and the actual output. This is typically done using an optimization algorithm
such as stochastic gradient descent (SGD). One of the main advantages of CNNs is their
ability to perform feature extraction automatically, without the need for manual feature
engineering. This makes them particularly useful for tasks such as image classification, where
the raw input data can be very high-dimensional and complex.
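As an illustration of how a convolutional filter produces a feature map, the following NumPy sketch applies a single 3x3 vertical-edge kernel to a tiny grayscale "image" and then applies ReLU. This is a hand-rolled toy for explanation only, not the project's TensorFlow code:

```python
import numpy as np

def conv2d_valid(image, kernel):
    """Slide the kernel over the image (no padding, stride 1) -> feature map."""
    kh, kw = kernel.shape
    out_h = image.shape[0] - kh + 1
    out_w = image.shape[1] - kw + 1
    fmap = np.zeros((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            fmap[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return fmap

image = np.zeros((5, 5))
image[:, 2:] = 1.0                      # a vertical edge down the middle
kernel = np.array([[-1., 0., 1.],       # simple vertical-edge filter
                   [-1., 0., 1.],
                   [-1., 0., 1.]])
fmap = np.maximum(conv2d_valid(image, kernel), 0)  # ReLU activation
print(fmap.shape)  # (3, 3): a 5x5 input and a 3x3 kernel give a 3x3 map
```

The feature map responds strongly where the edge sits under the kernel, which is exactly the "presence of a visual pattern" described above; a real CNN learns many such kernels instead of hand-coding one.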

Steps:

The typical steps involved in training a CNN for image classification tasks are as follows:

 Data Preprocessing: The first step is to prepare the data for training. This involves
tasks such as image resizing, normalization, and augmentation, as well as splitting the
data into training, validation, and testing sets.
 Convolutional Layers: The input data is passed through a series of convolutional
layers, which perform a set of convolutions on the input data using a set of learnable
filters or kernels. The output of each convolutional layer is a set of feature maps,
which capture the presence of certain visual patterns or shapes within the input data.
 Activation Functions: The output of each convolutional layer is typically passed
through an activation function, such as ReLU (Rectified Linear Unit), which
introduces nonlinearity into the model.
 Pooling Layers: The output of the activation function is then passed through a pooling
layer, which down-samples the feature maps by taking the maximum or average value
within each pooling region. This reduces the size of the feature maps and helps to
make the model more efficient.
 Fully Connected Layers: The output of the last convolutional layer is then flattened
and passed through a series of fully connected layers, which perform a linear
transformation on the data. The final layer of the network is typically a softmax layer,
which outputs a probability distribution over the possible classes.
 Loss Function: The predicted output of the model is compared to the actual output
using a loss function, such as cross-entropy loss. The goal is to minimize the
difference between the predicted and actual output by adjusting the weights and biases
of the network.
 Optimization Algorithm: The weights and biases of the network are adjusted using an
optimization algorithm, such as stochastic gradient descent (SGD). This involves
calculating the gradient of the loss function with respect to the network parameters
and updating the parameters accordingly.
 Training Loop: The training process typically involves multiple epochs, where the
entire training dataset is passed through the model multiple times. At each epoch, the
model parameters are updated based on the gradients calculated on a mini-batch of the
training data.
 Validation: Throughout the training process, the model is also evaluated on a
validation dataset to monitor its performance and prevent overfitting.
 Testing: Once the training process is complete, the model is evaluated on a separate
testing dataset to measure its accuracy on unseen data.
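The loss and optimization steps above can be illustrated with a single SGD update for a softmax output layer on one flattened feature vector. This NumPy toy (random features, three hypothetical emotion classes) is not the project's full backpropagation; it only shows that one gradient step reduces the cross-entropy loss:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())          # shift for numerical stability
    return e / e.sum()

rng = np.random.default_rng(0)
x = rng.random(4)                    # flattened features from the conv layers
y = 2                                # true class index (one of 3 emotions)
W = np.zeros((3, 4))                 # weights of the final dense layer
b = np.zeros(3)

probs = softmax(W @ x + b)           # forward pass
loss = -np.log(probs[y])             # cross-entropy loss for the true class

grad_logits = probs.copy()           # dL/dlogits = probs - one_hot(y)
grad_logits[y] -= 1.0
W -= 0.1 * np.outer(grad_logits, x)  # SGD update, learning rate 0.1
b -= 0.1 * grad_logits

new_loss = -np.log(softmax(W @ x + b)[y])
print(loss, new_loss)                # the loss decreases after the update
```

With zero-initialized weights the initial prediction is uniform, so the starting loss is ln 3; after one update the true class's probability rises and the loss falls, which is the mechanism the training loop repeats over every mini-batch.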

5.2.2 PSEUDOCODE OF CNN
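The CNN training flow can be outlined in pseudocode as follows. This is a generic sketch consistent with the steps in Section 5.2.1, not a copy of the project's exact code:

```
LOAD face-image dataset; RESIZE, NORMALIZE, SPLIT into train/validation/test
INITIALIZE network: [CONV -> RELU -> POOL] x N -> FLATTEN -> DENSE -> SOFTMAX
FOR each epoch:
    FOR each mini-batch in the training set:
        predictions <- FORWARD-PASS(batch images)
        loss <- CROSS-ENTROPY(predictions, true emotion labels)
        gradients <- BACKPROPAGATE(loss)
        UPDATE weights with SGD using the gradients
    EVALUATE accuracy on the validation set
EVALUATE the final model on the test set
RETURN trained emotion-recognition model
```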

System implementation:

The implementation of the emotion and emoji-based music player using CNN and Python
involves several steps. First, the development environment needs to be set up, which includes
installing the required software and libraries. Then, the dataset needs to be collected and
preprocessed, which involves selecting the appropriate songs, labeling them according to
their emotion, and extracting the necessary features. Once the dataset is ready, the model
needs to be designed, trained, and evaluated. This involves selecting the appropriate
architecture for the CNN model, dividing the dataset into training and testing sets, and
implementing the training process using backpropagation and gradient descent algorithms.
The model's performance is then evaluated by measuring its accuracy, precision, recall, and
F1 score. After the model is trained and evaluated, it needs to be integrated into the music
player application. The user interface needs to be designed and developed, allowing the user
to select their preferred emotion and display the appropriate songs. The emotion recognition
algorithm needs to be implemented, where the user's emotional state is detected by analyzing
facial expressions or voice signals. The selected songs are then played using appropriate
music playback libraries, and the music player should also display the corresponding emoji
based on the user's selected emotion. Lastly, the music player needs to be tested thoroughly,
and any bugs or issues need to be fixed before releasing it for public use. In terms of the
implementation methodology, an iterative and incremental approach can be used, where the
development process is divided into several small cycles, each focusing on a specific feature
or functionality. This allows for more frequent testing and feedback, which helps to identify
and address any issues or bugs early on in the development process. In addition, version
control software such as Git can be used to manage the source code and track any changes
made to the codebase. This helps to ensure that any modifications to the code are properly
documented and can be reverted if necessary.
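The evaluation metrics named above (accuracy, precision, recall, F1 score) can be computed as follows. The label vectors here are made-up examples for a single binary case (1 = target emotion detected), not results from the project:

```python
def metrics(y_true, y_pred):
    """Accuracy, precision, recall, and F1 for binary labels."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    accuracy = sum(1 for t, p in zip(y_true, y_pred) if t == p) / len(y_true)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return accuracy, precision, recall, f1

# Illustrative labels: 1 = "happy" detected, 0 = not detected.
acc, prec, rec, f1 = metrics([1, 0, 1, 1, 0, 1], [1, 0, 0, 1, 1, 1])
print(acc, prec, rec, f1)
```

For the multi-class emotion case these would be averaged per class (macro or weighted averaging), but the per-class definitions are exactly the ones above.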

5.3 PARALLEL CONVERSION TYPE OF IMPLEMENTATION

Parallel conversion is an implementation methodology used in software development projects
when migrating from an existing system to a new system. In this
approach, both the old and new systems are run in parallel for a certain period of time,
allowing for a gradual transition from the old system to the new system. This is done to
minimize the risks and issues that may arise during the transition process.

Parallel conversion is typically used when the old system is still operational and is unable to
be shut down completely, or when the new system is not able to handle the full workload of
the old system. The process involves running the old system and the new system in parallel,
with the new system gradually taking over more and more of the workload from the old
system until the new system is fully operational.

The following are the steps involved in the parallel conversion implementation methodology:

 Planning Phase: The planning phase involves defining the goals and objectives of the
project, identifying the stakeholders, and outlining the project scope. In parallel
conversion, the planning phase also involves identifying the tasks and processes that
will be running in parallel, as well as the timeline and resources required for the
migration.
 Design Phase: In the design phase, the new system is designed and developed
according to the specifications defined in the planning phase. This involves creating
the necessary infrastructure, designing the user interface, and implementing the
required functionality.


 Testing Phase: The testing phase involves testing the new system to ensure that it is
functioning correctly and meeting the requirements. This involves testing the system
for bugs, errors, and usability issues.
 Parallel Run Phase: In the parallel run phase, both the old system and the new system
are run in parallel. This involves setting up the necessary infrastructure and processes
to allow the two systems to run together. During this phase, the new system gradually
takes over more and more of the workload from the old system.
 Cut-Over Phase: In the cut-over phase, the new system is fully operational and the old
system is shut down. This involves migrating the remaining data and processes from
the old system to the new system, and training users on the new system.
 Post-Implementation Phase: The post-implementation phase involves monitoring the
new system to ensure that it is functioning correctly and meeting the requirements.
This involves fixing any issues that arise and making improvements to the system as
necessary.

5.4 IMPLEMENTATION METHODOLOGY OF THE PROJECT

The implementation methodology of the project can be divided into the following steps:

 Requirement Gathering: This step involves collecting and analyzing the requirements
of the project. It is essential to gather all the functional and non-functional
requirements of the project, which helps in identifying the scope of the project and
provides a clear understanding of what is expected from the system.
 System Design: After collecting the requirements, the next step is to design the
system. In this step, the system architecture is developed, and the components of the
system are identified. The design must be developed to meet the requirements of the
system, and it should be scalable and flexible enough to accommodate future changes.
 Data Collection and Preprocessing: In this step, the data required for training the
model is collected and preprocessed. The dataset should be diverse and representative
of the emotions and corresponding emojis.
 Model Training: In this step, the CNN model is trained using the preprocessed data.
The CNN model should be trained using appropriate hyperparameters to achieve
maximum accuracy.


 Model Evaluation: Once the model is trained, it is evaluated to check the accuracy of
the model. The evaluation is done using a test dataset that was not used during the
training phase.
 Model Deployment: After the model is trained and evaluated, it is deployed in the
system. The model should be integrated with the music player to predict the emotions
of the user and suggest songs accordingly.
 Testing and Maintenance: In this step, the system is tested to ensure that it meets the
requirements and functions as expected. The system is tested in different scenarios to
identify any issues or bugs. Once the system is deployed, it requires regular
maintenance to ensure that it functions correctly.


CHAPTER 6

SYSTEM TESTING AND RESULTS


6.1 INTRODUCTION

Testing is a significant activity in software quality assurance (QA). It is an iterative process
in which test data is prepared and used to test the modules individually. System testing
ensures that all components of the system work properly as a unit, by deliberately trying to
make the system fail.

Test cases should be planned before testing begins. As testing progresses, the focus shifts to
finding errors in integrated groups of modules and then in the entire system. The philosophy
behind testing is to find errors; in fact, testing is the phase of implementation aimed at
ensuring that the system works effectively and efficiently before it goes live.

Testing is carried out for every module. After all the modules have been tested, they are
integrated, and the final system is tested with test data specially designed to show that it will
work correctly under all expected conditions. Procedure-level testing is performed first: by
supplying invalid inputs, the errors that occur are noted and eliminated. System testing is
thus a confirmation that everything is correct and an opportunity to show the client that the
system works. The final step is validation testing, which determines whether the software
works as the user expects. The end user, rather than the system developer, conducts this
test in what most software engineers call the "alpha and beta test", to uncover defects that
only the end user seems able to find.

This is the last step in the system life cycle. Here the tested, error-free system is deployed
into a real environment and any necessary changes are made so that it runs online. System
maintenance is then carried out monthly or yearly, depending on organizational policies, and
the system is checked for faults such as runtime errors and long-run errors, along with other
maintenance activities such as table verification and reports.

During requirements analysis and design, the output is a document that is typically text-based
and non-executable. After the coding phase, computer programs are available that can be
executed for testing purposes. This implies that testing must uncover not only errors
introduced during coding, but also errors introduced during the earlier phases.

Fig. 6.1 The Testing process

6.2 THE VARIOUS TYPES OF TESTING DONE ON THE SYSTEM ARE

 Unit Testing
 Integration Testing
 Validation Testing
 System Testing
 Acceptance Testing

6.2.1 UNIT TESTING

Unit testing focuses verification efforts on the smallest unit of software design, the module;
this is also known as "module testing". The modules are tested independently, and this
testing is done during the programming stage itself. In these steps, each module is checked
to confirm that it produces the expected output.


6.2.2 INTEGRATION TESTING

Integration testing is a systematic technique for constructing tests that uncover errors
associated with the interfaces between modules. In this project, all of the modules are
combined and the entire program is then tested as a whole. Errors revealed during
integration testing are corrected before moving on to the next testing steps.

6.2.3 VALIDATION TESTING

Validation testing is performed to uncover functional errors, that is, to check whether the
functional characteristics conform to the specification.

6.2.4 SYSTEM TESTING


Once individual module testing is finished, the modules are assembled to operate as a
system. Top-down testing, which proceeds from upper-level to lower-level modules, is then
carried out to check whether the entire system is performing acceptably.
After unit and integration testing are over, the system as a whole is tested. There are two
general strategies for system testing.
They are:
 Code Testing
 Specification Testing
6.2.5 CODE TESTING
This strategy examines the logic of the program. A path is a specific combination of
conditions handled by the program. Using this strategy, every path through the program is
tested.
6.2.6 SPECIFICATION TESTING
This strategy examines the specification, which states what the program should do and how
it should perform under various conditions. Test cases are developed for each condition of
the developed system and processed, and the system is found to perform according to its
specified requirements. The system is used experimentally to ensure that the software runs
according to its specification and in the way the user expects. Specification testing is carried
out by entering various kinds of input data; both valid and invalid data are checked, and the
system is found to be working properly as required.
6.2.7 ACCEPTANCE TESTING

When the system has no outstanding issues with its accuracy, it goes through a final
acceptance test. This test confirms that the system meets the original goals, objectives, and
requirements established during analysis. If the system satisfies all of the requirements, it is
finally accepted and ready for operation.
6.3 TEST PLAN

A software project test plan is a document that describes the objectives, scope, approach,
and focus of a software testing effort. The process of preparing a test plan is a useful way
to think through the effort needed to validate the acceptability of a software product. The
completed document helps people outside the test group understand the 'why and how' of
product validation. Different test plans are used at different levels of testing.

6.4 TEST PLANS USED IN UNIT TESTING

Each module is tested for correctness, checking whether it meets all of the expected results.
Conditional loops in the code are properly terminated so that they do not enter an infinite
loop. Proper validations are done to avoid errors related to data entry by the user.

6.4.1 SYSTEM TESTING

System testing refers to examining a software program based primarily on what its
specification says its behavior should be. In particular, test cases are constructed based
entirely on the specification of the system's behavior, without looking at an implementation
of the system.

Test Case   Testing Scenario                                   Expected Result                                           Result
Number

TC – 01     Web camera face detection                          Face detection while streaming                            Pass
TC – 02     Face data compute                                  System calculates the face data                           Pass
TC – 03     Features extraction from user face                 Face features will be extracted from the video            Pass
TC – 04     Key points mapping                                 System will map the points on the face image              Pass
                                                               for face recognition
TC – 05     Emotion recognition according to user's emotion    Detecting the emotion of the user                         Pass
TC – 06     Emotion classification                             Emotion classification for the purpose of music playing   Pass
TC – 07     Calculate the percentage of emotion                System calculates the percentage for the face emotion     Pass
TC – 08     The percentage values are equal                    System will pick the first index as priority              Pass
TC – 09     Music player according to the detected emotion     Music played according to the detected emotions           Pass
TC – 10     Provide a user-friendly interface                  System provides a simple interface for the user           Pass
                                                               to operate it
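Test cases TC-07 and TC-08 concern the emotion-percentage computation and the first-index tie-breaking rule. A minimal sketch of that behavior, with a made-up label order and scores, is:

```python
# Convert raw per-emotion scores into percentages and pick the dominant one.
# When two percentages are equal, the first index wins (the TC-08 rule).
EMOTIONS = ["happy", "sad", "angry", "neutral"]  # assumed label order

def dominant_emotion(scores):
    total = sum(scores)
    percentages = [100.0 * s / total for s in scores]
    # max() returns the FIRST maximal index on ties, matching TC-08.
    best = max(range(len(scores)), key=lambda i: percentages[i])
    return EMOTIONS[best], percentages[best]

print(dominant_emotion([2, 2, 1, 1]))  # tie between happy and sad -> happy
```

In the real system the scores would be the softmax output of the trained CNN rather than hand-written numbers.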

Screen shots


CONCLUSION


In conclusion, the emotion and emoji-based music player using CNN Python is a feasible and
innovative project that leverages the power of deep learning to deliver a personalized and
immersive music experience. The project has been designed with a clear set of objectives,
functional and non-functional requirements, and a well-defined development methodology.
Throughout the project, we have conducted a comprehensive literature review on the topic of
emotion and music, as well as the latest research and techniques in deep learning and CNN.
This has enabled us to identify the gaps and limitations in the existing music player systems
and design an efficient and effective solution. The proposed system has several advantages
over the existing systems, including the ability to recognize and respond to the user's
emotional state and mood, suggest relevant music based on their preferences, and provide a
unique visual experience with the use of emoji. The implementation of the project has been
carried out using a parallel conversion type of implementation, which allowed for a smooth
and efficient transition from the existing system to the new system without any major
disruptions or downtime. The implementation methodology followed a well-defined set of
steps, including planning, design, development, testing, and deployment, ensuring that the
project was completed within the allocated time and budget. The project has also presented a
detailed overview of the CNN algorithm, explaining its key components, steps, and
pseudocode. The algorithm has been implemented using Python and TensorFlow, providing a
scalable and efficient solution for real-time music processing. The project has several
potential applications, including the music industry, entertainment, and mental health sectors.
The system's ability to recognize and respond to the user's emotional state and mood has
significant implications for mental health and well-being, providing an alternative form of
therapy and support.

FUTURE WORK


The Emotion and Emoji-based Music Player using CNN Python is a system that has been
developed to provide users with a unique music listening experience. The system is capable
of recognizing the emotion of the user based on their facial expression and suggesting music
that matches the user's mood. The system is also capable of generating an emoji that
represents the user's current emotion and displaying it on the screen.

The system has been designed and implemented successfully, but there are still several areas
where it can be improved in the future. These areas are discussed below:

 Integration with Music Streaming Services: Currently, the system is limited to playing
music files that are stored locally on the device. In the future, the system can be
improved by integrating it with popular music streaming services like Spotify and
Apple Music. This would allow users to access a larger collection of songs and
playlists, thereby enhancing their listening experience.
 Emotion Detection Accuracy: The accuracy of the emotion detection algorithm can be
improved in the future. Currently, the system uses a CNN algorithm to detect
emotions based on facial expressions. However, this method may not be accurate in
all cases, especially if the user's face is partially hidden or if they have a facial
deformity. In the future, the system can be improved by using other methods like
voice recognition or biometric sensors to detect emotions.
 Personalized Playlists: The system can be improved by providing users with
personalized playlists based on their music preferences. Currently, the system
suggests music based on the user's current emotion, but it does not take into account
the user's music preferences. In the future, the system can be improved by
incorporating machine learning algorithms that analyze the user's listening history and
suggest music that matches their preferences.
 Multi-Language Support: Currently, the system only supports English language
commands and inputs. In the future, the system can be improved by adding support
for other languages. This would enable users from different regions of the world to
use the system in their native language.
 Real-time Emotion Detection: Currently, the system detects the user's emotion based
on a static image captured by the camera. In the future, the system can be improved
by detecting emotions in real-time, which would provide a more accurate
representation of the user's mood.


 Social Media Integration: The system can be improved by integrating it with social
media platforms like Facebook and Instagram. This would allow users to share their
current emotion and music preferences with their friends and followers on social
media.
• Wearable Devices Integration: Integrating the system with wearable devices such as smartwatches and fitness bands would let users access its features on the go, without needing a separate device.

REFERENCES

1. M. V. Nagesh and Dr. B. G. Prasad, “Sentiment Analysis of Twitter Data: A Review of Techniques,” in Proceedings of the 2015 International Conference on Industrial Instrumentation and Control (ICIC), Pune, India, May 2015, pp. 1315-1320. doi: 10.1109/IIC.2015.7150816.
2. X. Chen, M. Z. M. Yasin, A. M. A. H. Khandaker, and M. R. Kabir, “Emotion-Based
Music Recommendation System,” in Proceedings of the 2018 3rd International
Conference on Computing and Artificial Intelligence (ICCAI), Wuhan, China, Dec.
2018, pp. 184-189. doi: 10.1109/ICCAI.2018.000-9.
3. Y. Liu and L. Gu, “Emoji-based Emotion Analysis on Social Media,” in Proceedings
of the 2017 2nd International Conference on Image, Vision and Computing (ICIVC),
Chengdu, China, June 2017, pp. 231-236. doi: 10.1109/ICIVC.2017.7975045.
4. G. Lee and S. Lee, “Emotion Recognition Using EEG and Convolutional Neural
Network,” in Proceedings of the 2017 39th Annual International Conference of the
IEEE Engineering in Medicine and Biology Society (EMBC), Jeju Island, South
Korea, July 2017, pp. 2805-2808. doi: 10.1109/EMBC.2017.8037502.
5. A. H. M. Kamal, S. Hossain, and K. M. A. Salam, “Speech Emotion Recognition
Using CNN and LSTM,” in Proceedings of the 2020 3rd International Conference on
Intelligent Autonomous Systems (ICoIAS), Guangzhou, China, July 2020, pp. 11-16.
doi: 10.1109/ICoIAS49429.2020.00008.
6. J. B. Abdel-Aal, R. M. Ahmed, and A. E. Ali, “A Deep Learning-Based Model for
Sentiment Analysis of Arabic Tweets,” in Proceedings of the 2020 IEEE 7th
International Conference on Industrial Engineering and Applications (ICIEA),
Nagoya, Japan, Aug. 2020, pp. 576-581. doi: 10.1109/ICIEA49056.2020.9142834.
7. S. Kim, Y. Kim, S. Lee, and H. Kim, “Emotion Recognition from Speech Signals
Using Convolutional Neural Networks with Data Augmentation,” in Proceedings of
the 2017 IEEE International Conference on Big Data and Smart Computing
(BigComp), Jeju Island, South Korea, Feb. 2017, pp. 187-190. doi:
10.1109/BigComp.2017.7881676.
8. S. K. Muthukumar, G. Arumugam, and S. C. Satapathy, “Automated Emotion
Detection System Using Deep Convolutional Neural Network,” in Proceedings of the
2020 5th International Conference on Computing and Communications Technologies
(ICCCT), Chennai, India, Dec. 2020, pp. 289-292. doi:
10.1109/ICCCT51237.2020.9371944.
