Final Review
Madhumitha M 311520104026
Tejashri J 311520104026
OVERVIEW
1. Introduction
2. Abstract
3. Problem Statement
4. Literature Survey
5. Existing System
6. Proposed System
7. System Architecture
8. Module Description
9. System Requirements
10. Use Cases
11. Implementation
12. Conclusion
13. Future Enhancements
14. References
Abstract
❖ Transformer Models: Utilizing mBERT, ALBERT, and XLM-RoBERTa for Malayalam fake news
detection.
❖ Performance: mBERT and XLM-RoBERTa both achieved strong macro F1 scores of 0.84 in binary
classification, while ALBERT reached a more modest 0.56.
❖ Deeper Analysis: XLM-RoBERTa led Task 2 with a macro F1 score of 0.21, followed by mBERT at
0.16 and BERT at 0.11.
❖ Implications: Findings offer insights for constructing tools to safeguard information credibility,
crucial for combating the dissemination of fake news.
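The macro F1 scores quoted above average the per-class F1 values, so rare classes count as much as common ones. A minimal pure-Python sketch of the metric (the label lists below are made-up illustrative data, not the project's dataset):

```python
def macro_f1(y_true, y_pred):
    """Macro-averaged F1: compute F1 for each class, then take the unweighted mean."""
    classes = sorted(set(y_true) | set(y_pred))
    f1s = []
    for c in classes:
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != c and p == c)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == c and p != c)
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        f1s.append(2 * precision * recall / (precision + recall) if precision + recall else 0.0)
    return sum(f1s) / len(f1s)

# Toy binary example, "fake" vs "original" (hypothetical labels)
y_true = ["fake", "fake", "original", "original", "fake", "original"]
y_pred = ["fake", "original", "original", "original", "fake", "fake"]
print(round(macro_f1(y_true, y_pred), 3))  # → 0.667
```

In practice a library implementation such as scikit-learn's `f1_score(..., average="macro")` computes the same quantity.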
Introduction
❖ Detection methods closely analyze social media text to distinguish
original content from misleading content, helping curb the spread
of fake news.
Literature survey

❖ Fake News Detection based on News Content and Social Contexts: a Transformer-based Approach,
by Shaina Raza and Chen Ding (2022).
Salient features: Novel transformer-based framework integrating news content and social contexts
for efficient processing and early detection; encoder-decoder architecture and effective labeling
enhance model performance and accuracy.
Limitations: Real-world data quality affects performance; scalability concerns for large datasets;
limited generalization; interpretability challenges; resource-intensive training; labeling technique
effectiveness varies and may need validation across diverse datasets.

❖ COVID-19 Fake News Detection Using Bidirectional Encoder Representations from Transformers
Based Models, by Yuxiang Wang, Yongheng Zhang, Xuebo Li, and Xinyao Yu (2021).
Salient features: Utilizes BERT for fake news detection, incorporates BiLSTM and CNN layers,
achieves state-of-the-art performance, explores keyword evaluation, and offers adaptable
parameter-freezing methods for optimization.
Limitations: Increased model complexity; varying performance across fake news types; reliance on
keywords; interpretability challenges; dependence on training-data quality and quantity for
generalizability.

❖ Automatic Fake News Detection Model Based on Bidirectional Encoder Representations from
Transformers (BERT), by Heejung Jwa, Dongsuk Oh, Kinam Park, Jang Mook Kang, and
Heuiseok Lim (2019).
Salient features: Leverages BERT to detect fake news automatically, prioritizes data-driven
approaches, pre-trains with extra data, and surpasses previous model performance.
Limitations: Challenges in interpreting results; demanding training process; dependence on data
quality; potential limits in contextual understanding.

❖ Detection of Fake News Using Transformer Model (iCoMET), by Momina Qazi,
Muhammad U. S. Khan, and Mazhar Ali (2020).
Salient features: Attention-based transformer model for fake news detection on a publicly
available dataset; compares against state-of-the-art algorithms, showing a 15% accuracy
improvement over Hybrid CNN.
Limitations: Potential reliance on dataset quality; scope limits in contextual understanding;
interpretability challenges with transformer model predictions.

❖ BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding (CoRR),
by Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova (2018).
Salient features: Introduces BERT, pre-trained on unlabeled text with deep bidirectional context
conditioning in all layers; achieves state-of-the-art results on eleven NLP tasks.
Limitations: May require substantial computational resources for pre-training and fine-tuning;
evaluation focuses on task performance without thorough analysis of failure cases.
Existing System
❖ Current fake news detection systems employ preprocessing and feature engineering but struggle with
nuanced characteristics and evolving tactics due to imbalanced datasets and temporal dynamics.
❖ Advanced frameworks integrating deep learning architectures are vital for enhanced accuracy, scalability,
and early detection capabilities.
❖ These solutions must adeptly utilize contextual cues and temporal dynamics to combat misinformation
effectively in the digital landscape.
❖ Addressing the pressing need for sophisticated detection methods, these frameworks offer promising
avenues for mitigating the impact of fake news and safeguarding information integrity in the evolving
online environment.
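One common mitigation for the imbalanced datasets mentioned above is inverse-frequency class weighting, which scales each class's contribution to the training loss by how rare it is. A minimal sketch of computing such weights (the skewed label distribution is a made-up example, not the project's corpus):

```python
from collections import Counter

def inverse_frequency_weights(labels):
    """Weight each class by n_samples / (n_classes * class_count),
    so rare classes contribute proportionally more to the loss."""
    counts = Counter(labels)
    n, k = len(labels), len(counts)
    return {c: n / (k * cnt) for c, cnt in counts.items()}

# Hypothetical skewed corpus: far more genuine posts than fake ones
labels = ["original"] * 90 + ["fake"] * 10
weights = inverse_frequency_weights(labels)
print(weights)  # the rare "fake" class gets a much larger weight
```

These weights can then be passed to a weighted loss function during training so the model is not dominated by the majority class.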
Proposed System
❖ Incorporation of Advanced Transformer Models: Implement state-of-the-art transformer models such as
mBERT, ALBERT, and XLM-RoBERTa to broaden the system's language coverage and understanding,
with particular focus on Dravidian languages such as Malayalam.
❖ Dynamic Feature Adaptation: Introduce dynamic feature sets that adapt to evolving misinformation
patterns, keeping the system effective at identifying and categorizing fake news in the rapidly
changing landscape of social media.
❖ Scalable Architecture: Design a scalable, efficient architecture capable of handling large volumes of
social media data, enabling real-time or near-real-time processing for timely and effective fake news detection.
❖ Utilization of Transfer Learning: Apply transfer learning techniques to leverage pre-trained models
and strengthen the system's contextual understanding, enabling more accurate classification of fake news by
considering the broader linguistic and cultural context.
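Transfer learning as proposed above typically freezes the pre-trained encoder and trains only a small classification head on top of its features. The toy sketch below mimics that pattern: a fixed stand-in "encoder" emits features, and only a logistic-regression head is updated by gradient descent. All functions, features, and example texts are illustrative assumptions, not the actual mBERT/XLM-RoBERTa pipeline:

```python
import math

def frozen_encoder(text):
    """Stand-in for a pre-trained encoder: its parameters are fixed (frozen).
    Here it emits two crude surface features instead of contextual embeddings."""
    return [len(text) / 100.0, text.count("!") / 5.0]

def train_head(data, lr=1.0, epochs=200):
    """Train only the classification head (logistic regression) on frozen features."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for text, label in data:
            x = frozen_encoder(text)
            z = sum(wi * xi for wi, xi in zip(w, x)) + b
            p = 1.0 / (1.0 + math.exp(-z))
            g = p - label  # gradient of log-loss w.r.t. the logit z
            w = [wi - lr * g * xi for wi, xi in zip(w, x)]
            b -= lr * g
    return w, b

def predict(w, b, text):
    x = frozen_encoder(text)
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0  # 1 = fake

# Hypothetical training pairs (1 = fake, 0 = original)
data = [("SHOCKING!!! miracle cure found!!!", 1),
        ("Council approves new bus route.", 0),
        ("You won't BELIEVE this!!!!!", 1),
        ("Rainfall expected over the weekend.", 0)]
w, b = train_head(data)
print([predict(w, b, t) for t, _ in data])
```

In the real system the frozen component would be a pre-trained transformer (with its parameters' gradients disabled) and the head a dense layer, but the division of labor is the same.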
System Architecture
Module Description
System Requirements
RAM: 4 GB / 4 GB
SSD: 1 TB / 500 GB
References
2. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. BERT: Pre-training of Deep Bidirectional
Transformers for Language Understanding. CoRR, abs/1810.04805.
3. Quanliang Jing, Di Yao, Xinxin Fan, Baoli Wang, Haining Tan, Xiangpeng Bu, and Jingping Bi. 2021. TRANSFAKE:
Multi-Task Transformer for Multimodal Enhanced Fake News Detection. In 2021 International Joint Conference on Neural
Networks (IJCNN), pages 1–8. IEEE.
4. Heejung Jwa, Dongsuk Oh, Kinam Park, Jang Mook Kang, and Heuiseok Lim. 2019. exBAKE: Automatic Fake News
Detection Model based on Bidirectional Encoder Representations from Transformers (BERT). Applied Sciences, 9(19):4062.
5. Sebastian Kula, Rafał Kozik, Michał Choraś, and Michał Woźniak. 2021. Transformer Based Models in Fake News
Detection. In International Conference on Computational Science, pages 28–38. Springer.
6. Ashok Kumar, Tina Esther Trueman, and Erik Cambria. 2021. Fake news detection using XLNet fine-tuning model. In 2021
International Conference on Computational Intelligence and Computing Applications (ICCICA), pages 1–4. IEEE.
Publication
❖ TechWhiz@DravidianLangTech 2024: Fake News Detection Using Deep Learning Models, by Madhumitha M,
Kunguma Akshatra M, Tejashri, and Jerin Mahibha C, published in Proceedings of the Fourth Workshop on
Speech, Vision, and Language Technologies for Dravidian Languages, Association for Computational
Linguistics.
Thank you