
Volume 8, Issue 11, November – 2023 International Journal of Innovative Science and Research Technology

ISSN No:-2456-2165

EEG Based Emotion Classification Using Deep Learning Models

Mohammed Afthab N, Dhanush B, and Sri Vignesh C
Students, Dept. of Computer Science & Engineering,
SRM Institute of Science and Technology, Chennai, India

Under the Guidance of


P. Deepika, M.E.
Assistant Professor, Department of Computer Science and Engineering,
SRM Institute of Science and Technology, India - 600089

Abstract:- In the area of sentiment analysis using deep learning models, this study attempts to thoroughly analyze and assess three deep learning models: a Feedforward Deep Neural Network (DNN), a Gated Recurrent Unit (GRU), and Long Short-Term Memory (LSTM). The goal is to determine which model performs best, offering insightful information for choosing models for sentiment analysis tasks and for their practical implementation in real-world scenarios. This work also advances the field of sentiment analysis and natural language processing (NLP) by providing methodological insights into the selection of deep learning models and evaluating their capacity for generalization.

Keywords:- Deep Learning Models, Long Short-Term Memory (LSTM), Sentiment Analysis, DNN, GRU, Accuracy, Efficiency.

I. INTRODUCTION

A lot of emphasis has been placed on accurately interpreting human sentiment in textual data in the constantly changing fields of Natural Language Processing (NLP) and sentiment analysis. Sentiment analysis and classification of text, including reviews, comments, and postings on social media, has significant applications in marketing, customer service, social sciences, and public opinion research, among other fields. Sentiment analysis models are essential for interpreting the subjective information and emotional content buried in large amounts of textual data. This helps decision-makers gauge public opinion, spot new trends, and make well-informed decisions. This work explores the fundamentals of this important field of study, with particular attention to the use of deep learning models such as Long Short-Term Memory (LSTM), Gated Recurrent Unit (GRU), and feedforward deep neural networks (DNN). We seek to offer important insights into the relative performance of these models in terms of precision, effectiveness, and capacity for generalization through a thorough comparative study. This work matters because it fills a gap in the knowledge about deep learning applications in natural language processing and responds to the increasing need for more advanced and precise sentiment analysis tools. The purpose of this study is to shed light on the possible effects of this technology and on how it can influence future policy development and decision-making processes in a variety of fields and businesses.

II. LITERATURE REVIEW

This section presents the issues and problems surrounding sentiment analysis, identifies the gaps in the field's current knowledge, and describes the goal and importance of our proposed system in filling these gaps. It gives our study, and the reasoning behind our work, a solid basis.

A. Challenges and concerns:
The rapid expansion of textual data on the internet and the growing need to interpret human sentiment for a range of applications have driven remarkable advancements in the field of sentiment analysis in recent years. Although there have been some encouraging developments, a number of important issues and concerns remain. First, contextual ambiguity presents a challenge to sentiment analysis, since the same words can imply different meanings depending on the context in which they are used. Secondly, since sarcasm and irony frequently reverse the intended sentiment, sentiment analysis needs to be adept at detecting these subtleties. The field has also become more complex due to issues with managing noisy data, domain-specific sentiment lexicons, and the impact of cultural variables on sentiment expression.

B. Research Gaps:
Despite these obstacles, it is clear that there are important research gaps that need to be filled. While somewhat successful, the sentiment analysis models currently in use frequently lack robustness and context-adaptability. Studies comparing the effectiveness of various deep learning models, such as Feedforward Deep Neural Networks (DNN), Gated Recurrent Units (GRU), and Long Short-Term Memory (LSTM), in the context of sentiment
analysis are also noticeably lacking. These kinds of
comparative studies are essential for determining which
model architecture is best, especially in light of the
constantly changing nature of natural language processing.
Research on the practical application of these models in
real-world settings, offering useful insights for professionals
in the industry, is conspicuously lacking.

III. PROPOSED SYSTEM

This paper presents a thorough analysis and comparison of LSTM, GRU, and DNN models for sentiment analysis on real-world datasets, filling in these research gaps. Our proposed system seeks to close the knowledge gap between sentiment analysis theory and real-world, industry-relevant applications. Through the use of deep learning methods and a comprehensive empirical assessment, our goal is to provide a useful framework for choosing the best model according to particular application needs. We also stress the need for continued research into sentiment analysis models that are flexible and sensitive to context, in order to keep up with how human language is changing.

Fig 1: System Architecture

IV. METHODOLOGY

In this study, we compare three deep learning models, a feedforward deep neural network (DNN), a gated recurrent unit (GRU), and long short-term memory (LSTM), in the context of sentiment analysis using EEG data. Our method follows a methodical sequence of steps. Initially, we gather social media information from websites such as Kaggle, making sure that text content and the related sentiment labels are included. To improve data quality, thorough data preprocessing is then carried out, and the preprocessed data is transformed into a numerical format suitable for the deep learning models. Each of the three deep learning models has a customized architecture in place for model selection. We train thoroughly, adjusting the hyperparameters as necessary. The evaluation process covers several sentiment analysis metrics, such as accuracy and precision.
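The transformation into a numerical format can be illustrated with a short Python sketch. This is a minimal example under stated assumptions: the file name emotions.csv, the label column name, and the 80/20 split are illustrative placeholders rather than the exact configuration used in this work.

import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import LabelEncoder, StandardScaler

# Load the labelled dataset (file and column names are assumptions for illustration).
df = pd.read_csv("emotions.csv")

# Separate the numeric feature columns from the categorical emotion label,
# mapping the string labels to integer class ids.
X = df.drop(columns=["label"]).values
y = LabelEncoder().fit_transform(df["label"])

# Standardize the features so every input dimension is on a comparable scale.
X = StandardScaler().fit_transform(X)

# Hold out a stratified test split for the final evaluation of each model.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42)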

Fig 2: Methodology

A. Feature enhancement:
To improve the quality of the EEG dataset used for sentiment analysis, the project starts with data preparation and exploratory data analysis (EDA). This first step is very important because it guarantees the accuracy and dependability of the analysis that follows. To learn more about the dataset, class distribution analyses and visualizations are carried out.
B. Testing and evaluation:
The chosen deep learning models are put through rigorous testing and evaluation as part of the research. Every model is built methodically, and callbacks, monitoring metrics, and suitable optimizers are used during training. The training history shows the convergence and learning behavior of each model. The performance of each model can only be understood by following these evaluation steps.
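As a hedged illustration of this shared training procedure, the Keras-style sketch below compiles a generic model with the Adam optimizer and trains it with early-stopping and checkpoint callbacks. The patience value, batch size, epoch count, and checkpoint file name are assumptions, not the exact settings used in this study.

from tensorflow.keras.callbacks import EarlyStopping, ModelCheckpoint

# `model` is any one of the architectures defined in the sketches that follow
# (LSTM, GRU, or DNN); X_train, y_train, X_test, y_test come from the
# preprocessing sketch above. For the recurrent models, the reshaped 3-D
# arrays (X_train_seq, X_test_seq) would be passed instead.
callbacks = [
    # Stop training when validation accuracy stops improving.
    EarlyStopping(monitor="val_accuracy", patience=5, restore_best_weights=True),
    # Keep the best weights observed during training.
    ModelCheckpoint("best_model.keras", monitor="val_accuracy", save_best_only=True),
]

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

history = model.fit(X_train, y_train,
                    validation_split=0.1,
                    epochs=50,
                    batch_size=32,
                    callbacks=callbacks)

test_loss, test_accuracy = model.evaluate(X_test, y_test)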
LSTM Model:

- Architecture: LSTM layers with additional Dense layers, trained with the Adam optimizer.
- Training: Training stops early if no discernible gains are made; both training and validation accuracy rise over time.
- Evaluation: The model obtains a high test accuracy of about 97.5%.
- Confusion Matrix: This figure illustrates how well the model performs in identifying the different emotions.
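A minimal Keras sketch of such an LSTM architecture is given below. It assumes the tabular EEG features from the preprocessing example, treated as a single timestep per sample, and three emotion classes; the layer sizes are illustrative rather than the exact configuration reported here.

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense

# Recurrent layers expect 3-D input: (samples, timesteps, features).
X_train_seq = X_train.reshape((X_train.shape[0], 1, X_train.shape[1]))
X_test_seq = X_test.reshape((X_test.shape[0], 1, X_test.shape[1]))

lstm_model = Sequential([
    LSTM(128, input_shape=(1, X_train.shape[1])),  # recurrent feature extractor
    Dense(64, activation="relu"),                  # additional Dense layer
    Dense(3, activation="softmax"),                # three emotion classes assumed
])
lstm_model.compile(optimizer="adam",
                   loss="sparse_categorical_crossentropy",
                   metrics=["accuracy"])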

Fig 3: LSTM confusion matrix

GRU Model:

- Architecture: A GRU layer with additional layers for classification, trained with the Adam optimizer.
- Training: The training and validation metrics show that the model reaches a performance plateau, which triggers early stopping.
- Assessment: The GRU model achieves approximately 97.188% test accuracy.
- Confusion Matrix: The confusion matrix and classification report give a thorough summary of the model's classification performance.
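A corresponding hedged Keras sketch of the GRU architecture follows; it reuses the reshaped sequences from the LSTM example, and the layer sizes are again illustrative assumptions.

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import GRU, Dense

gru_model = Sequential([
    GRU(128, input_shape=(1, X_train.shape[1])),  # single GRU layer
    Dense(64, activation="relu"),                 # additional layer for classification
    Dense(3, activation="softmax"),               # three emotion classes assumed
])
gru_model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])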
Fig 4: GRU confusion matrix

DNN Model:

- Architecture: Uses batch normalization, dropout layers, and numerous dense layers for regularization.
- Training: Validation accuracy peaks at roughly 98%, and training accuracy rises over the epochs.
- Evaluation: With a test accuracy of roughly 98.438%, the DNN model performs well.
- Conclusion: The DNN model is more appropriate for the sentiment analysis task than the GRU model, as evidenced by its superior accuracy and validation performance.

Fig 5: DNN confusion matrix
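A hedged Keras sketch of such a feedforward DNN is shown below; the layer widths and dropout rates are assumptions chosen only to illustrate the use of dense layers with batch normalization and dropout.

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, BatchNormalization, Dropout

dnn_model = Sequential([
    Dense(256, activation="relu", input_shape=(X_train.shape[1],)),
    BatchNormalization(),  # stabilizes and speeds up training
    Dropout(0.3),          # regularization against overfitting
    Dense(128, activation="relu"),
    BatchNormalization(),
    Dropout(0.3),
    Dense(3, activation="softmax"),  # three emotion classes assumed
])
dnn_model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])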

C. Documentation and reporting:
Our research places great importance on the thorough documentation of model architectures, hyperparameters, and data preprocessing. This documentation is a key component of reproducibility, guaranteeing that other researchers can follow our research procedure in an open and transparent manner. Concurrently, the reporting section explains the evaluation outcomes by displaying the models' classification reports, confusion matrices, and accuracy metrics. This transparency makes it possible to understand the models' advantages and disadvantages, and it also makes it easier to conduct additional research and apply sentiment analysis in new ways.

Furthermore, the reporting section is an equally important part of our study. In it, we provide a detailed explanation of the findings from our model assessments. This in-depth reporting covers a wide range of performance metrics, with accuracy measurements, thorough classification reports, and detailed confusion matrices explaining the classification outcomes for every model. This comprehensive reporting strategy improves the transparency of the research findings, provides readers with an in-depth comprehension of our methods, and makes it easier to compare the three deep learning models we used for our sentiment analysis study.
Because it adheres to these stringent documentation and reporting standards, our research exemplifies accountability and rigor in deep learning research.
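The reported metrics can be produced with the short Python sketch below. It assumes one of the trained Keras models from the earlier sketches and the held-out test split from the preprocessing example; for the recurrent models, the reshaped X_test_seq array would be passed instead of X_test.

import numpy as np
from sklearn.metrics import accuracy_score, classification_report, confusion_matrix

# Convert per-class probabilities into predicted class ids.
y_pred = np.argmax(dnn_model.predict(X_test), axis=1)

# Accuracy, per-class precision/recall/F1, and the confusion matrix.
print("Test accuracy:", accuracy_score(y_test, y_pred))
print(classification_report(y_test, y_pred))
print(confusion_matrix(y_test, y_pred))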
V. CONCLUSION

In conclusion, our study emphasizes how crucial it is to choose the best deep learning model when performing sentiment analysis on social media data. The DNN model is the best performer, with a high accuracy rate, but the LSTM and GRU models also have advantages that make them good choices for particular use cases. Future research can build on this work by refining and optimizing these models and applying them to different social media datasets. Our observations add to the evolving field of sentiment analysis and its potential for gleaning important information from the deluge of social media posts.

