
MINI PROJECT REPORT ON

Music Application (Mufecs)

(KCA-353)
Session-2023-2024

Master of Computer Applications (MCA)

Submitted to: Mr. Gaurav Bhatia
Submitted by:


Name: Amit Kumar Rai
Roll No.- 2201920140018
Class-MCA (3rd)
Section-A1

G.L. Bajaj Institute of Management and Research


Plot No 2, APJ Abdul Kalam Rd, Knowledge Park III,
Greater Noida, Uttar Pradesh
ACKNOWLEDGEMENT

The satisfaction that accompanies the successful completion of any task would be incomplete without mentioning the people whose ceaseless cooperation made it possible, and whose constant guidance and encouragement crown all efforts with success. Our project, titled "Mufecs - Listen Your Mood", was developed as a Mini Project in partial fulfillment of the MCA degree course.

We are grateful to our project mentor, Mr. Gaurav Bhatia, for the guidance, inspiration and constructive suggestions that helped us in the preparation of this project.

Date: 28/02/2024 With due regards,

Amit Kumar Rai

(2201920140018)
CERTIFICATE OF ORIGINALITY

I hereby declare that my project titled "Mufecs - Listen Your Mood", submitted to Dr. APJ ABDUL KALAM TECHNICAL UNIVERSITY, Lucknow in partial fulfillment of the degree of Master of Computer Applications, Session 2023-2024, from GL Bajaj College of Technology and Management, Greater Noida, has not previously formed the basis for the award of any other degree, diploma or other title.

Place: Greater Noida Signature

Name of Student: Amit Kumar Rai

Date: 28 February, 2024


CERTIFICATE OF ACCEPTANCE

This is to certify that the project entitled "Mufecs - Listen Your Mood", submitted by Amit Kumar Rai, a bonafide student of GL Bajaj College of Technology and Management, Greater Noida, is in partial fulfillment of the award of Master of Computer Applications, affiliated to Dr. APJ ABDUL KALAM TECHNICAL UNIVERSITY, LUCKNOW, during the year 2023-24. It is certified that all corrections and suggestions indicated during Internal Assessment have been incorporated in the project. To the best of our knowledge, the work embodied in this report is original and has not been submitted for any other degree or discipline. The project report has been approved as it satisfies the academic requirements in respect of the project work prescribed for the said degree.

[Sign and Name of Internal Guide] [Sign of External Examiner]

Mr. Gaurav Bhatia

(MCA Department)

HOD

Department of Computer Applications


TABLE OF CONTENTS

1. Certificate

2. Executive Summary

3. Key words

4. Chapter 1: Introduction & Aim of Project

5. Chapter 2: Background Study & Research Gap

6. Chapter 3: Proposed work & Methodology

7. Chapter 4: Results & Testing

8. Chapter 5: Conclusion & Future Scope of the Project

9. List of Tables

10. ER Diagram

11. Data Flow Diagram

12. References

Executive Summary

We propose a novel method that uses facial expressions to trigger automatic music playback. The majority of current methods involve manually playing music, using wearable computers, or categorizing audio attributes. Instead, we propose automating the manual sorting and playing of music. Convolutional neural networks (CNNs) are employed to detect emotions, and the Flask framework, front-end technologies (HTML, CSS, Bootstrap) and TensorFlow are used to suggest music.

Our suggested approach tends to decrease the total cost of the planned system as well as the computational time required to obtain the results, improving the system's overall accuracy. The system is tested on the FER2013 dataset. Facial expressions are recorded by means of an integrated camera. Using the input face photos, feature extraction is used to identify emotions including happiness, anger, sadness, surprise and neutral. A music playlist is automatically created by determining the user's current emotional state. Compared to the approaches in the current literature, it produces better results in terms of computational time.

Keywords: TensorFlow, Flask, music player, camera, convolutional neural network, emotion detection, face recognition, feature extraction.



Chapter 1:

Introduction & Aim of Project

Mufecs is a web-based application developed with the Flask framework. The project has been developed to help address loneliness by recommending music to users based on their moods. The application supports seven facial emotions, including happy, sad, angry, neutral and disgust. The recommendation feature is built on a convolutional neural network (CNN).

The aim of the project is to increase people's productivity, which contributes to the growth of both the individual and the country. The application also analyses the moods of the user, which gives insight into an individual's mental health.

Mufecs includes user registration, storage of each user's music collection, the ability for users to add their own music, and a search facility for listening to music online.

A user can log in using a username and password, and a new user can create an account.

This web-based application runs smoothly. It is developed using Flask, Python, HTML, Jupyter, CSS, JavaScript, Bootstrap and MySQL.



Recent research has shown that people respond and react to music, and that music has a significant impact on brain activity. Researchers looking into the reasons individuals listen to music found that music plays an important role in relating arousal and mood. Music serves two primary purposes for listeners: first, it can help them feel happy and improve their mood; second, it can increase self-awareness. It has been shown that emotions and personality factors are strongly correlated with musical choices [1]. The brain regions that influence emotions and mood are also responsible for controlling the meter, timbre, rhythm and pitch of music [2]. Interpersonal interaction is one of the main components of everyday life. It reveals a wealth of human information, including emotions, body language, voice and facial expressions [3]. Emotion detection is regarded as a crucial technique in many applications today, including smart card applications, surveillance, image database investigation, criminal and civilian applications, video indexing, security, and adaptive human-computer interfaces in multimedia environments.

Automated emotion detection in multimedia content such as music or movies is expanding quickly thanks to advancements in digital signal processing and other efficient feature extraction algorithms. Such a system has many potential uses, including music entertainment and human-computer interaction systems. We present a recommender system based on facial expression recognition that can identify the user's mood and provide a list of suitable music [13-24]. The suggested method is able to identify an individual's emotions; if the user is experiencing negative emotions, a playlist featuring the most upbeat music genres will be displayed, and if the emotion is pleasant, a playlist featuring various kinds of songs that reinforce positive feelings will be shown [4].

We utilized the Kaggle Facial Expression Recognition dataset [5] to detect emotions. Bollywood Hindi music was used to construct the music player's dataset. Convolutional neural networks are used to implement facial emotion detection, and the accuracy rate is about 95.14% [2].



Chapter 2:

Background Study & Research Gap

2.1 Background Study

The purpose of the review is to gain insight into existing processes and identify any shortcomings that may be addressed. A literature review, also known as a literature survey, is a piece of academic work that summarizes the state of knowledge in the field, highlights noteworthy discoveries, and makes theoretical and methodological contributions to a certain subject. Humans have innate abilities that can contribute to any system in a variety of ways, which has drawn the attention of students, scientists, engineers and other professionals from around the globe.

Facial expressions convey a person's present mental state. In most interpersonal communication, we convey our emotions through nonverbal cues including hand movements, facial expressions and voice tonality. Preema et al. [6] claimed that making and maintaining a large playlist takes a lot of effort and time. According to the publication, the music player chooses a tune based on the user's present mood. The program creates mood-based playlists by scanning and categorizing audio files according to their audio attributes. The Viola-Jones method is used by the program for face detection and facial emotion extraction. Anger, joy, surprise, sadness and disgust are the five main universal emotions identified from the classification of features.

In research published in 2013, Yusuf Yaslan et al. developed an emotion-based music recommendation system that determines the user's mood from signals received from wearable computers equipped with physiological sensors such as photoplethysmography (PPG) and galvanic skin response (GSR) [3]. Humans are fundamentally emotional beings, and emotions are essential to everything in life. This work treats the problem of emotion recognition as the prediction of arousal and valence from multi-channel physiological information. Ayush Guidel et al. claimed in [7] that facial expressions are a simple way to read a person's mental state and present emotional mood. Their system took basic emotions (content, upset, furious, ecstatic, shocked, disgusted, afraid and indifferent) into account. In that research, face detection was implemented with a convolutional neural network. All around the world, music is typically described as a "language of emotions".

Emotion identification was used by Ramya Ramanathan et al. [1] to demonstrate their intelligent music player. Emotion is a fundamental aspect of human nature and has the most significant impact on life; human emotions are meant to be shared and understood by one another. Initially, the user's local music collection is categorized according to the mood the album expresses, which is frequently computed with the lyrics of the song in mind. In particular, the paper highlights the specialization of existing methods for identifying human emotions in order to produce emotion-based music players, the approach a music player follows to identify human emotions, and how best to apply the proposed system for emotion detection. It also provides a quick overview of how such systems operate, how playlists are made, and how emotions are categorized. Manually sorting through a playlist and annotating music based on a user's emotional state is a labor-intensive and time-consuming operation, according to CH Radhika et al. [8]. Several algorithms have been suggested to automate this process.

Unfortunately, the current algorithms are quite inaccurate, slow, and require additional hardware (such as sensors and EEG setups), which raises the system's total cost. In that research, an algorithm is presented that automatically creates an audio playlist from an individual's facial expressions, in order to save the time and labor of carrying out this task by hand. The algorithm aims to lower the cost of the planned system as well as the total computational time, and it also seeks to increase the accuracy of the system design. The facial expression recognition module of the system is verified against both user-dependent and user-independent datasets.

2.2 Problem Statement

Create a system that uses a webcam and machine learning algorithms to deliver a cross-

platform music player that makes recommendations for music based on the user's

current mood.
Chapter 3:

Proposed work & Methodology

3.1 Proposed work

The suggested system benefits from the interaction between the user and the music player. The system's objective is to adequately capture the face using the camera. The captured photos are fed to the convolutional neural network to predict the emotion, and a music playlist is then generated using the emotion contained in the image. Our suggested system's primary objective is to automatically produce a playlist of songs to alter the user's mood, which might range from surprised to joyful to sad to neutral. The suggested system is able to identify emotions; if a negative emotion is detected, a playlist featuring music genres that would uplift the listener's spirits will be played. The facial emotion recognition-based music recommendation system is made up of four modules:

• Real-Time Recording: In this module, the system accurately records the user's face.

• Face Recognition: The captured face image is passed to the convolutional neural network, which assesses the features of the user image.

• Emotion Detection: The features of the user image are extracted in order to identify the emotion, and the system then generates a caption based on the detected emotion.

• Music Recommendation: Based on the user's emotion and the mood type of the music, the recommendation module makes a song recommendation (a minimal end-to-end sketch of these four steps follows below).
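The following is a minimal, hypothetical sketch of how these four modules could be chained together in Python; the model file name, the emotion label order and the playlist folder layout are illustrative assumptions, not the exact artefacts of this project.

# Minimal end-to-end sketch (file names and folder layout are assumed).
import cv2
import numpy as np
from tensorflow.keras.models import load_model

EMOTIONS = ['angry', 'happy', 'neutral', 'sad', 'surprise']   # assumed label order
model = load_model('emotion_model.h5')                        # assumed trained CNN file
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + 'haarcascade_frontalface_default.xml')

cap = cv2.VideoCapture(0)                                     # 1. real-time recording
ret, frame = cap.read()
cap.release()

gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
faces = cascade.detectMultiScale(gray, 1.3, 5)                # 2. face recognition
for (x, y, w, h) in faces:
    roi = cv2.resize(gray[y:y + h, x:x + w], (48, 48)) / 255.0
    probs = model.predict(roi.reshape(1, 48, 48, 1))          # 3. emotion detection
    label = EMOTIONS[int(np.argmax(probs))]
    print('Detected emotion:', label)                         # 4. music recommendation
    # a playlist folder named after the label would be opened here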



Figure: Block diagram of the suggested system.

3.2 Methodology

3.2.1 Overview of the Database

We used Kaggle datasets to construct the convolutional neural network model. The database, called FER2013, is divided into training and testing datasets: 24,176 images make up the training set, whereas 6,043 images make up the testing set. The collection contains 48x48 pixel grayscale pictures of faces. Five emotions are assigned to each image in FER-2013: joyful, sad, furious, surprised and neutral. The faces are automatically registered so that they occupy roughly the same amount of area and are roughly centered in each shot. Grayscale 48x48 pixel head-shots, both posed and unposed, are included in the FER-2013 collection of photos.

The FER-2013 dataset was produced by compiling the outcomes of a Google image search for each emotion together with its synonyms. When trained on an unbalanced dataset, FER systems may exhibit strong performance on dominant emotions such as happiness and sadness, but perform poorly on under-represented ones such as disgust and fear, as well as on angry, indifferent and startled faces.

This issue is typically addressed with the weighted softmax loss strategy, which weights the loss term for each emotion class based on how much of the training set that class represents. The softmax loss function, however, is the foundation of this weighted-loss method and is said to readily drive features of different classes apart without taking intra-class compactness into consideration. Using an auxiliary loss to train the neural network is one practical method for handling this softmax loss issue. We have employed the categorical cross-entropy loss function to handle missing and outlier values; the chosen loss function is used at every iteration to estimate the error value.

Figure: FER2013 dataset samples.
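As a rough illustration of the loss handling described above, the sketch below computes per-class weights from integer emotion labels and compiles a small Keras model with categorical cross-entropy; the label array and the tiny model are placeholders, not the project's actual network.

# Sketch: class weighting plus categorical cross-entropy (placeholder data and model).
import numpy as np
from sklearn.utils.class_weight import compute_class_weight
from tensorflow.keras import layers, models

y_train = np.array([0, 0, 0, 1, 2, 3, 4])            # placeholder integer emotion labels
weights = compute_class_weight('balanced', classes=np.unique(y_train), y=y_train)
class_weight = dict(enumerate(weights))               # rarer classes receive larger weights

model = models.Sequential([layers.Flatten(input_shape=(48, 48, 1)),
                           layers.Dense(5, activation='softmax')])
model.compile(optimizer='adam',
              loss='categorical_crossentropy',        # loss function used in this work
              metrics=['accuracy'])
# model.fit(x_train, y_train_onehot, class_weight=class_weight, ...)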

3.2.2 Emotion Detection Module

3.2.2.1 Face Detection

A face detection application falls within the category of computer vision technology. To accurately locate faces or objects in photos, detection algorithms must first be created and trained; this can be applied to real-time detection from an image or a video frame. Face identification uses classifiers, which are algorithms that decide whether an image contains a face (1) or not (0). To increase accuracy, classifiers are trained to recognize faces across a large number of photos. OpenCV provides two types of classifiers: LBP (Local Binary Pattern) and Haar cascades. For face identification, a Haar classifier is employed. It is trained using pre-defined, varied face data, allowing it to recognize different faces accurately. Face detection's primary goal is to identify the face inside the frame while minimizing background noise and other distractions. In this machine learning-based method, the cascade function is trained using a collection of input files. The Haar wavelet approach supports pixel-level investigation of the image by dividing it into squares [9]. This makes use of "training data" and machine learning techniques to achieve a high degree of accuracy.
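A small OpenCV sketch of the Haar cascade detection step described above; the webcam index, scale factor and output file name are illustrative choices.

# Sketch: Haar cascade face detection on a single webcam frame.
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + 'haarcascade_frontalface_default.xml')

cap = cv2.VideoCapture(0)                  # built-in camera
ok, frame = cap.read()
cap.release()

gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
faces = cascade.detectMultiScale(gray, scaleFactor=1.3, minNeighbors=5)
for (x, y, w, h) in faces:
    cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)   # mark each detected face
cv2.imwrite('detected.png', frame)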

Figure 3: Recognition of faces.

3.2.2.2 Extraction of Features

When carrying out feature extraction, we use the pre-trained network, which is a sequential model, as an arbitrary feature extractor: the input image is propagated until it reaches a pre-designated layer, at which point that layer's outputs are taken and used as our features. The initial layers of a convolutional network use only a few filters and extract low-level characteristics from the captured image. As deeper layers are created, we raise the number of filters to two or three times the number in the previous layer. Although they require more computation, filters in the deeper layers capture additional features.

Figure 4: Visualization of feature maps.

To accomplish this, we made use of the strong, discriminative characteristics that the convolutional neural network learnt [10]. The model provides feature maps as its outputs, which are a kind of intermediate representation for every layer after the first. To see which features were significant enough to classify the image, the input image to be examined is loaded and its feature maps are generated. Feature maps are obtained by applying filters or feature detectors to the input image or to the feature map output of the previous layers. Visualizing the feature maps sheds light on the internal representations of a particular input for every convolutional layer in the model.
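One way to obtain such feature maps, assuming a trained Keras sequential model is available, is to build a truncated model that ends at the layer of interest; the model file name and layer index below are arbitrary examples.

# Sketch: use an intermediate convolutional layer as a feature extractor.
import numpy as np
from tensorflow.keras import Model
from tensorflow.keras.models import load_model

cnn = load_model('emotion_model.h5')                      # assumed trained model file
feature_extractor = Model(inputs=cnn.inputs,
                          outputs=cnn.layers[2].output)   # arbitrary early layer

image = np.random.rand(1, 48, 48, 1)                      # stand-in for a preprocessed face
feature_maps = feature_extractor.predict(image)
print(feature_maps.shape)                                 # (1, height, width, n_filters)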



3.2.2.3 Sensitivity Assessment

Figure 5: Architecture of a convolutional neural network.

Using the ReLU activation function, the convolutional neural network applies filters or feature detectors to the input picture to obtain feature maps or activation maps [11]. Curves, vertical and horizontal lines, edges, and other characteristics present in the image can all be identified with the help of these feature detectors or filters. Subsequently, the feature maps undergo pooling to ensure translation invariance. Pooling is predicated on the idea that the pooled outputs remain constant when we make slight changes to the input; min, average or max pooling can be used, but max pooling generally outperforms min or average pooling. Finally, the outputs are flattened before being fed into a deep neural network, which produces outputs for the class of the object.
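The convolution, pooling, flatten and dense flow described above could look roughly like the Keras sketch below; the filter counts and depth are illustrative and do not reproduce the exact 28-layer network reported later.

# Sketch: conv -> ReLU -> max-pool -> flatten -> dense softmax classifier.
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Conv2D(32, (3, 3), activation='relu', input_shape=(48, 48, 1)),
    layers.MaxPooling2D((2, 2)),                   # pooling for translation invariance
    layers.Conv2D(64, (3, 3), activation='relu'),  # deeper layer, more filters
    layers.MaxPooling2D((2, 2)),
    layers.Flatten(),                              # flatten before the dense classifier
    layers.Dense(128, activation='relu'),
    layers.Dense(5, activation='softmax'),         # one output per emotion class
])
model.summary()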



Figure 6: Convolutional Neural Network Feature Extraction per Layer.

The image's class can be either binary or multi-class, as in digit identification or object classification. The learnt features in a neural network are not directly interpretable, which makes neural networks akin to a black box: the CNN model receives an input image and outputs the results [10]. The CNN model trained with these weights is loaded in order to detect emotions. When a user submits a real-time photograph, it is sent to the pre-trained CNN model, which predicts the emotion and attaches a label.

3.2.2.4 Module for Recommending Music

3.2.2.4.1 Songs Database

We assembled a collection of Bollywood Hindi songs, with between 100 and 150 tracks for each emotion. As is well known, music definitely contributes to elevating our mood. If a user is sad, for example, the system will automatically suggest a playlist of music that will cheer them up and make them feel better.

3.2.2.5 Suggested Music Playlist

Real-time user emotion is recognized by the emotion module, resulting in labels such as Happy, Sad, Angry, Surprise and Neutral. We linked these labels to the folders in our music database using Python's os module. Table 1 displays the song list.

The os.listdir() method retrieves a list of every file in the specified directory, for example:

if label == 'happy':
    os.chdir(r"C:\Users\Amit Rai\OneDrive\Desktop\EmoPlayer\happy")
    self.mood.set("I'm playing a song for you because you seem happy")
    # Obtain the music tracks in the current directory
    songtracks = os.listdir()

Table 1: Song Database

Emotion     Songs
Happy       Track 1: "Aaj Mai Upar"; Track 2: "Ilahi"; Track 3: "Neeche Pholo ki dukaan"; Track 4: "Pyar K kisti m"
Sad         Track 1: "Apna Time Aayega"; Track 2: "Ruk Jana Nahi"; Track 3: "All is Well"
Angry       Track 1: "Dushman Na Kare Dost Ne"; Track 2: "Thukra Ke Mera Pyaar"; Track 3: "Khalbali"
Surprise    Track 1: "Zindagi Kaisi Hai Paheli"; Track 2: "Aao Milon Chalen"; Track 3: "Jaane Kyun"
Neutral     Track 1: "Buddhu Sa Mann"; Track 2: "Matargashti"; Track 3: "Dildara"

As a result, the GUI of the music player will suggest a playlist for the user, displaying captions based on the emotions that have been identified. To play the audio, we utilized the Pygame library, which can play a variety of multimedia formats, including audio and video. The play, pause, resume and stop functions of this library are used to interact with the music player. The names of all the songs, the status of the song that is presently playing, and the main GUI window are stored in variables called playlist, song status and root, respectively. HTML and CSS were utilized in the GUI development process.
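A hedged sketch of the Pygame playback calls mentioned above; the playlist folder path is a placeholder.

# Sketch: basic playback control with pygame.mixer (placeholder folder path).
import os
import pygame

pygame.mixer.init()

songtracks = os.listdir('music/happy')                         # assumed playlist folder
pygame.mixer.music.load(os.path.join('music/happy', songtracks[0]))

pygame.mixer.music.play()                                      # play
pygame.mixer.music.pause()                                     # pause
pygame.mixer.music.unpause()                                   # resume
pygame.mixer.music.stop()                                      # stop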

These are the outcomes:



Figure 7 shows the front page's GUI.



Figure 8: Emotion Detection


Chapter 4

Results & Testing

We assessed several studies that make use of convolutional neural networks, extreme learning machines (ELM) and support vector machines (SVM) [12]. A comparison of comparable algorithms is presented in Table 2; for every study, the corresponding algorithm and accuracy values are provided. Convolutional neural networks are used to increase the accuracy and efficiency of emotion detection.

Table 2: Validation and testing accuracy of the three methods on the FER2013 dataset

Algorithm               SVM     ELM     CNN
Validation accuracy     0.66    0.62    0.95
Testing accuracy        0.66    0.63    0.71

The trained CNN network's hyperparameters are displayed in Table 3. The learning rate controls the size of the weight update at the end of each batch. During training, the network is run through the training dataset for many epochs. The batch size is the number of patterns shown to the network before the weights are updated. Activation functions allow the model to learn nonlinear prediction boundaries. Adam can be used in place of stochastic gradient descent as the optimization algorithm when training deep learning models. In single-label, multi-class classification problems, deep learning model error is usually quantified with the categorical cross-entropy loss function.

Table 3: CNN network hyperparameters used for training

Hyperparameter          Value
Batch size              64
No. of classes          7
Optimizer               Adam
Learning rate           0.001
Epochs                  21
No. of layers           28
Activation functions    ReLU, Softmax
Loss function           Categorical cross-entropy
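Assuming FER2013 images and one-hot labels have been loaded, these hyperparameters map onto a Keras training run roughly as follows; the data arrays and the small model below are placeholders, not the actual 28-layer network.

# Sketch: wiring the Table 3 hyperparameters into a Keras training run.
import numpy as np
from tensorflow.keras import layers, models
from tensorflow.keras.optimizers import Adam

# Placeholder arrays shaped like FER2013 samples (real data would come from the dataset).
x_train = np.random.rand(64, 48, 48, 1).astype('float32')
y_train = np.eye(7)[np.random.randint(0, 7, 64)]            # one-hot labels, 7 classes

model = models.Sequential([
    layers.Conv2D(32, (3, 3), activation='relu', input_shape=(48, 48, 1)),
    layers.MaxPooling2D((2, 2)),
    layers.Flatten(),
    layers.Dense(7, activation='softmax'),
])
model.compile(optimizer=Adam(learning_rate=0.001),           # learning rate 0.001
              loss='categorical_crossentropy',               # categorical cross-entropy loss
              metrics=['accuracy'])
model.fit(x_train, y_train, batch_size=64, epochs=21)        # batch size 64, 21 epochs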

4.1 Final Thoughts

A comprehensive analysis of the literature reveals that there are numerous ways to put a music recommender system into practice. A review of approaches put forth by earlier researchers and developers was conducted, and the findings shaped our system's aims. Since AI-powered applications are becoming more and more popular, our project makes use of cutting-edge, in-demand technology. We give a summary of the system's features, including how music can lift users' spirits and how appropriate songs are selected. The developed system can detect the emotions of the user; the emotions it was able to identify were neutral, surprised, joyful, sad and angry. The suggested system identified the user's emotion and then showed them a playlist with music selections that matched their mood. Processing a large dataset requires a lot of memory and processing power, which makes development both more appealing and more challenging. The goal is to develop this application with the least amount of resources and on a standardized platform. Our facial emotion recognition-based music recommendation system will lessen users' work in developing and maintaining playlists.

Chapter 5:

Conclusion & Future Scope of the Project

Even though this system is fully operational, there is always room for improvement. The program can be adjusted in a number of ways to improve the overall user experience and yield better results. One such improvement is an alternative approach that also covers feelings such as fear and disgust, which are not included in our system, and extends the automated playing of music to them. The system's future scope includes the creation of a mechanism that could aid music therapists in treating patients with mental stress, anxiety, acute depression and trauma. There is also an opportunity to add functionality in the future, because the existing system performs poorly in very low light and with low camera resolution.

Building a mini project requires the assistance and direction of multiple individuals. It is therefore our first priority to express our gratitude to everyone who supported us throughout this endeavor.

We express our gratitude to Dr. Madhu Gaur, our Head of Department, for his inspiration and for providing the materials necessary for us to work on this project. Additionally, we are grateful for the gracious cooperation of Mrs. Manju Verma, Head of the Mini-Project.

We take great pleasure in expressing our gratitude to Mr. Gaurav Bhatia, our project guide, for his ongoing guidance and support during the project planning process, as well as for his encouragement and useful insights.

Last but not least, we would like to express our gratitude to our friends and the teaching and non-teaching staff members whose support and recommendations enabled us to improve our mini project. We are also appreciative of our parents' unwavering encouragement and good wishes.



List of Tables

1. Table 1: Song Database

2. Table 2: Validation and testing accuracy of the three methods on the FER2013 dataset

3. Table 3: CNN network hyperparameters used for training



ER Diagram

Data Flow Diagram

Level 0

Level 1

Level 2

References

[1] Python Documentation: Official documentation for the Python programming language, providing syntax, library and module references. (https://docs.python.org/)

[2] MySQL Documentation: Official documentation for MySQL database setup, data modeling and querying. (https://mysql.com)

[3] Front-end Documentation: Official documentation for HTML, CSS and JavaScript, used to structure, style and add functionality to web pages. (https://developer.mozilla.org/en-US/docs/Web/HTML)

[4] Flask Documentation: Official documentation for the Python framework used to create robust and scalable web applications. (https://flask.palletsprojects.com)

[5] Ramya Ramanathan, Radha Kumaran, Ram Rohan R, Rajat Gupta and Vishalakshi Prabhu, "An intelligent music player based on emotion identification," 2nd IEEE International Conference on Computational Systems and Information Technology for Sustainable Solutions, 2017. DOI: 10.1109/CSITSS.2017.8447743.

[6] Shlok Gilda, Husain Zafar, Chintan Soni and Kshitija Waghurdekar, "Smart music player integrating facial emotion recognition and music mood recommendation," Department of Computer Engineering, Pune Institute of Computer Technology, Pune, India, 2017. DOI: 10.1109/WiSPNET.2017.8299738.

[7] Deger Ayata, Yusuf Yaslan and Mustafa E. Kamasak, "Emotion-based music recommendation system using wearable physiological sensors," IEEE Transactions on Consumer Electronics, vol. 14, no. 8, May 2018. DOI: 10.1109/TCE.2018.2844736.

[8] Alaa Alsaedi, Kholood Albalawi, Ahlam Alrihail and Liyakathunisa Syed, "Users' music recommendation system based on facial feature-based emotion recognition," Department of Computer Science, Taibah University (DeSE), 2019. DOI: 10.1109/DeSE.2019.00188.

[9] Research Prediction Competition, "Challenges in Representation Learning: Facial Expression Recognition Challenge," Kaggle.

[10] Preema J.S., Rajashree, Sahana M. and Savitri H., "Review on music player based on facial expressions," International Journal of Engineering Research & Technology (IJERT), vol. 6, issue 15, 2018, ISSN 2278-0181.

[11] Ayush Guidel, Krishna Sapkota and Birat Sapkota, "Music recommendation based on facial analysis," February 17, 2020.

[12] CH. Sadhvika, Gutta Abigna and P. Srinivas Reddy, "Emotion-based music recommendation system," Sreenidhi Institute of Science and Technology, Yamnampet, Hyderabad; International Journal of Emerging Technologies and Innovative Research (JETIR), vol. 7, issue 4, April 2020.

[13] Vincent Tabora, "Face identification with Haar Cascade Classifiers in OpenCV," Becominghuman.ai, 2019.

[14] Zhuwei Qin, Fuxun Yu, Chenchen Liu and Xiang Chen, "How convolutional neural networks see the world: a survey of convolutional neural network visualization methods," Mathematical Foundations of Computing, May 2018.

[15] Ahmed Hamdy AlDeeb, "Emotion-Based Music Player: Emotion Detection from Live Camera," ResearchGate, June 2019.

[16] Filip von Reis Marlevi and Frans Norden, "A Comparison of Machine Learning Algorithms in Binary Facial Expression Recognition," TRITA-EECS-EX-2019:143.

[17] Singhal, Singh and Vidyarthi (2020), "Thorax disease interpretation and localization using DCNN in chest X-rays," Journal of Electronics, Information, and Control Engineering, 1(1), 1-7.

[18] M. Vinny and P. Singh (2020), "BlueBrain: A Review of Artificial Brain Technology," Journal of Electrical and Electronic Engineering in Informatics, 1(1), 1-11.

[19] A. Singh and P. Singh (2021), "Object Detection," Journal of Management and Service Science, 1(2), 1-20.

[20] A. Singh and P. Singh (2020), "A Survey on Image Classification," Journal of Electrical and Electronics Engineering in Informatics, 1(2), 1-9.

[21] A. Singh and P. Singh (2021), "Identification of License Plates," Journal of Management and Service Science, 1(2), 1-14.
