DOCUMENTATION
This project aims to tackle the problem of fake news and misinformation on
social media networks by combining Natural Language Processing (NLP) and
blockchain technology. NLP is a branch of artificial intelligence that enables
computers to understand human language, while a blockchain is a secure digital
ledger that allows multiple parties to store and access information. The
proposed system combines these two technologies and employs several approaches
to detect fake news: NLP with Naive Bayes classification, reinforcement
learning, and blockchain. Its objective is a secure platform that accurately
predicts and identifies fake news in social media networks. Performance is
measured with several metrics: accuracy, precision, recall, and F1-score. The
system is trained and tested on the LIAR dataset, which includes various types
of fake news. The proposed system comprises five modules. The first module
loads the LIAR training and testing datasets. The second uses NLP and Naive
Bayes classification to detect fake news. The third employs reinforcement
learning, which enables the system to learn from its past mistakes and improve
its accuracy in identifying fake news. The fourth uses blockchain technology
to store and access the system's data securely. Finally, the fifth module
compares the system's performance on the aforementioned metrics. In summary,
the system leverages machine learning techniques and blockchain storage to
provide a secure, accurate fake news detection platform for social media
networks.
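The four evaluation metrics named above can be computed from confusion-matrix counts (true positives, true negatives, false positives, false negatives). The sketch below is illustrative only; the class and method names are hypothetical and not part of the proposed system.

```java
// Illustrative helper for the evaluation metrics named above.
// Class and method names are hypothetical, not part of the proposed system.
public class EvaluationMetrics {

    // accuracy = (TP + TN) / (TP + TN + FP + FN), as a percentage
    public static double accuracy(int tp, int tn, int fp, int fn) {
        return 100.0 * (tp + tn) / (tp + tn + fp + fn);
    }

    // precision = TP / (TP + FP), as a percentage
    public static double precision(int tp, int fp) {
        return 100.0 * tp / (tp + fp);
    }

    // recall = TP / (TP + FN), as a percentage
    public static double recall(int tp, int fn) {
        return 100.0 * tp / (tp + fn);
    }

    // F1-score is the harmonic mean of precision and recall
    public static double f1Score(double precision, double recall) {
        return 2.0 * precision * recall / (precision + recall);
    }

    public static void main(String[] args) {
        double p = precision(40, 10);
        double r = recall(40, 10);
        System.out.println("Accuracy : " + accuracy(40, 40, 10, 10));
        System.out.println("F1-score : " + f1Score(p, r));
    }
}
```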
CHAPTER 1
INTRODUCTION
Active learning algorithms access the desired outputs (training labels) for
a limited set of inputs based on a budget and optimize the choice of inputs for
which to acquire training labels. When used interactively, these inputs can be
presented to a human user for labeling. Reinforcement learning algorithms are
given feedback in the form of positive or negative reinforcement in a dynamic
environment and are used in autonomous vehicles or in learning to play a game
against a human opponent. Other specialized algorithms in machine learning
include topic modeling, where the computer program is given a set of natural
language documents and finds other documents that cover similar topics.
Machine learning algorithms can be used to find unobservable probability
density functions in density estimation problems. Meta-learning algorithms
learn their own inductive bias based on previous experience. In developmental
robotics, robot learning algorithms generate their own sequences of learning
experiences, also known as a curriculum, to cumulatively acquire new skills
through self-guided exploration and social interaction with humans. These
robots use guidance mechanisms such as active learning, maturation, motor
synergies, and imitation.
The difference between machine learning and mathematical optimization arises
from the goal of generalization: while optimization algorithms can minimize the
loss on a training set, machine learning is concerned with minimizing the loss
on unseen samples.
Machine learning and statistics share many methods, but their primary
goals are distinct: statistics is concerned with drawing population inferences
from a sample, while machine learning seeks to discover predictive patterns that
can be applied more broadly. According to Michael I. Jordan, the ideas of
machine learning, including methodological principles and theoretical tools,
have a long history in statistics. He has suggested using the term "data science"
as a placeholder to refer to the entire field. Leo Breiman distinguished between
two statistical modeling paradigms: data models and algorithmic models, where
"algorithmic models" refer to machine learning algorithms such as Random
Forest.
CHAPTER 2
LITERATURE SURVEY
2.1 "Fake news detection using machine learning" Baarir, N. F., and
Djeffal, A. (2021)
2.2 "A smart system for fake news detection using machine learning," Jain,
Shakya, Khatter, and Gupta (2019)
This paper explores the use of machine learning algorithms to detect fake
news. The study proposes a smart system that uses Naive Bayes, Decision Tree,
and Random Forest algorithms to effectively and efficiently identify fake news.
The proposed system was found to be highly effective in detecting fake news
from various sources. However, the authors note that one of the major
drawbacks of the system is that it requires large datasets for effective training.
This limitation highlights the importance of having access to high-quality and
diverse training data to enhance the system's performance. Overall, this study
represents a valuable contribution to the growing body of research on using
machine learning to combat fake news.
2.3 "Fake news detection using deep learning models: A novel approach"
Kumar, Asthana, Upadhyay, Upreti, and Akbar, (2020)
This paper presents a novel approach to detecting fake news using deep
learning models such as Convolutional Neural Network (CNN) and Long Short-
Term Memory (LSTM). The study demonstrates that using these models can
lead to high accuracy and precision in detecting fake news. This approach can
potentially be very useful in addressing the growing problem of fake news on
social media platforms. However, one of the major limitations of the study is
that it requires a large amount of data and computing power, which may be
difficult to obtain for some applications. Furthermore, while the results are
promising, it is still unclear how this approach would perform on larger and
more diverse datasets. Despite these limitations, the paper presents an important
contribution to the field of fake news detection, highlighting the potential of
deep learning models in addressing this important problem.
2.4 "Supervised learning for fake news detection" Reis et al. (2019)
This paper proposes the use of multiple machine learning algorithms such
as Convolutional Neural Network (CNN), Bidirectional Long Short-Term
Memory (LSTM), and Support Vector Machine (SVM) to detect fake news
from different sources, including text, image, and social network data. The
paper demonstrates the effectiveness of the proposed framework in detecting
fake news with high accuracy and precision. However, the study also highlights
the need for a large amount of training data for effective detection and the
challenge of dealing with increasingly sophisticated fake news. Despite these
limitations, the multi-modal approach provides a promising avenue for the
development of robust fake news detection systems. Overall, the paper provides
useful insights into the use of multiple modalities for fake news detection and
lays the foundation for future research in this area.
CHAPTER 3
SYSTEM ANALYSIS
Another limitation of the existing system is that it does not address the
issue of trust in the news sharing process. Without a reliable way of identifying
trustworthy sources, it can be difficult to ensure that the news being shared is
accurate and reliable. This can lead to further misinformation being shared,
particularly in the absence of any mechanism to hold users accountable for
sharing fake news.
Finally, the existing system does not address the issue of political or
ideological bias in the news sharing process. If the news shared on social media
platforms is biased towards a particular political or ideological viewpoint, it can
lead to further polarization and division in society.
SYSTEM REQUIREMENTS
Cache Memory : 1.00 GB
Programming Language : Java
IDE : Apache NetBeans IDE 15
Platform independence: Java code can run on any platform that has a
JVM installed, which makes it highly portable.
Object-oriented programming: Java is a pure object-oriented
programming language, meaning that all code is written in terms of
classes and objects.
Robust: Java is designed to be robust, with features such as automatic
memory management and exception handling.
Security: Java has built-in security features, such as a security manager
and a bytecode verifier, that make it safer to use than other languages.
Multithreading: Java supports multithreading, which allows multiple
threads to run concurrently and can improve application performance.
High performance: Java's performance is optimized through the use of a
Just-In-Time (JIT) compiler, which can improve code execution speed.
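As a minimal, self-contained illustration of the multithreading feature described above (the class and method names here are hypothetical, used only for demonstration):

```java
// Minimal illustration of Java's multithreading support: two worker
// threads each sum half of a range concurrently, then the partial
// results are combined after both threads finish.
public class ThreadDemo {

    // Sums the integers in the half-open range [from, to).
    public static long sumRange(long from, long to) {
        long sum = 0;
        for (long i = from; i < to; i++) sum += i;
        return sum;
    }

    public static long parallelSum(long n) {
        long[] partial = new long[2];
        Thread t1 = new Thread(() -> partial[0] = sumRange(0, n / 2));
        Thread t2 = new Thread(() -> partial[1] = sumRange(n / 2, n));
        t1.start();
        t2.start();
        try {
            t1.join();  // wait for both workers before combining results
            t2.join();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return partial[0] + partial[1];
    }

    public static void main(String[] args) {
        System.out.println(parallelSum(1000)); // sum of 0..999
    }
}
```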
Go to the Oracle Java download page and download the appropriate JDK
for your system
Run the installer and follow the prompts to complete the installation
Set up your Java environment variables by adding the path to your JDK
installation to your system's PATH variable
CHAPTER 5
SYSTEM DESIGN
A use case is a list of steps that illustrates how a process will be carried
out in a system. The document walks through the steps an actor takes to achieve
a goal. A use case is typically written by a business analyst who meets with
each user, or actor, to write out the explicit steps in a process.
5.3.1 LEVEL 0:
Figure 5. 3: Level 0
5.3.2 LEVEL 1:
Figure 5. 4: Level 1
5.3.3 LEVEL 2:
Figure 5. 5: Level 2
5.3.4 LEVEL 3:
(Level 3 data flow: User, Liar Training Dataset, Blockchain)
"Sequence" is a word meaning "coming after or next; a series". It is used in
mathematics and other disciplines. In ordinary use it means a series of events,
one following another. In mathematics, a sequence is made up of several things
put together, one after the other.
An activity diagram visually presents a series of actions or flow of control in a system similar
to a flowchart or a data flow diagram. Activity diagrams are often used in business process
modeling. They can also describe the steps in a use case.
MODULE DESCRIPTION
The module for loading the liar dataset is a crucial step in the
development of a system for detecting fake news. Here are the main points
involved in this module:
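The loading step can be sketched as plain Java file I/O, in the spirit of the FileInputStream-based code shown later in this document. The file name, column layout, and class name below are assumptions for illustration (the real LIAR files are tab-separated, with one record per line):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;

// Illustrative loader for a tab-separated dataset file such as LIAR.
// The class name and column positions are assumptions for this sketch.
public class DatasetLoader {

    // Reads the whole file and splits it into one String per record line.
    public static String[] loadLines(Path file) throws IOException {
        String content = new String(Files.readAllBytes(file));
        return content.trim().split("\n");
    }

    // Extracts the label column (assumed to be column 1) from one TSV line.
    public static String labelOf(String line) {
        return line.trim().split("\t")[1].trim();
    }

    public static void main(String[] args) throws IOException {
        // Create a tiny stand-in file so the sketch is runnable on its own.
        Path tmp = Files.createTempFile("liar-sample", ".tsv");
        Files.write(tmp, List.of("id1\ttrue\tsome statement",
                                 "id2\tfalse\tanother statement"));
        String[] lines = loadLines(tmp);
        System.out.println(lines.length + " records, first label: " + labelOf(lines[0]));
        Files.deleteIfExists(tmp);
    }
}
```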
This module detects fake news in the testing dataset using natural language
processing and Naive Bayes classification. The steps involved in this module
are:
Segmentation: In this step, the text in the testing dataset is segmented into
individual sentences or phrases to make it easier to analyze and extract
features from the text.
Cleaning: In this step, the text is preprocessed to remove any noise or
irrelevant information that could impact the accuracy of the fake news
detection. This involves removing stop words, punctuation, and other
non-informative words.
Feature extraction: This step involves extracting relevant features from
the text to use as inputs for the fake news detection algorithm. This may
include word frequency, sentiment analysis, and other linguistic features
that can help identify patterns in the text that are characteristic of fake
news.
Naive Bayes Classification: In this step, the extracted features are used to
train a Naive Bayes classification model that can accurately distinguish
between real and fake news. The model is trained on the labeled data from
the training dataset and then applied to the testing dataset to detect
fake news.
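The four steps above can be sketched end-to-end in plain Java. This is a minimal, self-contained illustration: it uses a hand-rolled word-count Naive Bayes with Laplace smoothing instead of the Weka classifier used in the actual implementation, and the class name, stop-word list, and tiny training sentences are invented for demonstration.

```java
import java.util.*;

// Minimal Naive Bayes text classifier illustrating the module's steps:
// cleaning (lower-casing, stripping punctuation and stop words),
// feature extraction (word counts), and classification with Laplace
// smoothing. A sketch only; the real system uses Weka's classifiers.
public class TinyNaiveBayes {
    private static final Set<String> STOP_WORDS = Set.of("the", "a", "is", "of", "to");

    private final Map<String, Map<String, Integer>> wordCounts = new HashMap<>();
    private final Map<String, Integer> docCounts = new HashMap<>();
    private final Set<String> vocabulary = new HashSet<>();

    // Cleaning: lower-case, drop punctuation and stop words.
    public static List<String> tokenize(String text) {
        List<String> tokens = new ArrayList<>();
        for (String w : text.toLowerCase().split("[^a-z]+")) {
            if (!w.isEmpty() && !STOP_WORDS.contains(w)) tokens.add(w);
        }
        return tokens;
    }

    // Feature extraction: accumulate per-class word counts from labeled text.
    public void train(String text, String label) {
        docCounts.merge(label, 1, Integer::sum);
        Map<String, Integer> counts = wordCounts.computeIfAbsent(label, k -> new HashMap<>());
        for (String w : tokenize(text)) {
            counts.merge(w, 1, Integer::sum);
            vocabulary.add(w);
        }
    }

    // Classification: pick the label with the highest smoothed log-probability.
    public String classify(String text) {
        String best = null;
        double bestScore = Double.NEGATIVE_INFINITY;
        int totalDocs = docCounts.values().stream().mapToInt(Integer::intValue).sum();
        for (String label : docCounts.keySet()) {
            Map<String, Integer> counts = wordCounts.get(label);
            int totalWords = counts.values().stream().mapToInt(Integer::intValue).sum();
            double score = Math.log((double) docCounts.get(label) / totalDocs);
            for (String w : tokenize(text)) {
                int c = counts.getOrDefault(w, 0);
                // Laplace smoothing avoids zero probabilities for unseen words.
                score += Math.log((c + 1.0) / (totalWords + vocabulary.size()));
            }
            if (score > bestScore) { bestScore = score; best = label; }
        }
        return best;
    }

    public static void main(String[] args) {
        TinyNaiveBayes nb = new TinyNaiveBayes();
        nb.train("scientists confirm vaccine is safe", "true");
        nb.train("official report confirms the numbers", "true");
        nb.train("shocking secret they hide miracle cure", "false");
        nb.train("unbelievable hoax exposed miracle", "false");
        System.out.println(nb.classify("miracle cure they hide"));
    }
}
```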
CHAPTER 7
SYSTEM TESTING
The system testing process for the above modules may include the
following steps:
System testing is performed to ensure that the entire system meets its
requirements and specifications. The system would be tested with a variety
of inputs to confirm that it accurately detects fake news, functions
correctly as a whole, and is capable of performing its intended tasks.
User acceptance testing is performed to ensure that the system meets the
needs of its end-users and is easy to use. It involves testing the system with
end-users and collecting feedback to identify any issues or areas for
improvement. The goal of user acceptance testing is to ensure that the system
is user-friendly and meets the needs of its intended audience.
Overall, the system testing process for the above modules is essential to
ensure that the fake news detection system functions as intended and accurately
detects fake news. It helps to identify any issues or areas for improvement and
ensures that the system is reliable and effective.
CHAPTER 8
CONCLUSION
In conclusion, the proposed system for fake news detection using natural
language processing, reinforcement learning, and blockchain technology has the
potential to significantly improve the accuracy and reliability of detecting fake
news in social media networks. The system uses a combination of machine
learning algorithms, natural language processing techniques, and blockchain
technology to detect fake news in testing datasets. The proposed system has the
potential to reduce the spread of fake news and improve the overall quality of
information available to the public.
There are several potential future enhancements that could be made to the
proposed system, including:
Main.java
package fakemediadetection;
/**
* @author Elcot
*/
cf.setTitle("Main Frame");
cf.setVisible(true);
cf.setResizable(false);
MainFrame.java
import java.io.FileInputStream;
/**
* @author SEABIRDS-PC
*/
/**
*/
public MainFrame() {
initComponents();
}
/**
* This method is called from within the constructor to initialize the form.
*/
@SuppressWarnings("unchecked")
setDefaultCloseOperation(javax.swing.WindowConstants.EXIT_ON_CLOSE);
jPanel1.setBackground(new java.awt.Color(102, 51, 0));
jLabel1.setText("Main Frame");
jPanel1.setLayout(jPanel1Layout);
jPanel1Layout.setHorizontalGroup(
jPanel1Layout.createParallelGroup(javax.swing.GroupLayout.Alignment.LEADING)
.addGroup(jPanel1Layout.createSequentialGroup()
.addComponent(jLabel1)
.addContainerGap(365, Short.MAX_VALUE))
);
jPanel1Layout.setVerticalGroup(
jPanel1Layout.createParallelGroup(javax.swing.GroupLayout.Alignment.LEADING)
.addGroup(jPanel1Layout.createSequentialGroup()
.addContainerGap(javax.swing.GroupLayout.DEFAULT_SIZE,
Short.MAX_VALUE)
.addComponent(jLabel1))
);
jButton1.addActionListener(new java.awt.event.ActionListener() {
jButton1ActionPerformed(evt);
});
jTextArea1.setColumns(20);
jTextArea1.setRows(5);
jScrollPane1.setViewportView(jTextArea1);
jButton2.addActionListener(new java.awt.event.ActionListener() {
});
getContentPane().setLayout(layout);
layout.setHorizontalGroup(
layout.createParallelGroup(javax.swing.GroupLayout.Alignment.LEADING)
.addComponent(jPanel1, javax.swing.GroupLayout.DEFAULT_SIZE,
javax.swing.GroupLayout.DEFAULT_SIZE, Short.MAX_VALUE)
.addGroup(layout.createSequentialGroup()
.addGroup(layout.createParallelGroup(javax.swing.GroupLayout.Alignment.LEADING, false)
.addComponent(jButton1,
javax.swing.GroupLayout.DEFAULT_SIZE,
javax.swing.GroupLayout.DEFAULT_SIZE, Short.MAX_VALUE)
.addComponent(jScrollPane1)
.addComponent(jButton2,
javax.swing.GroupLayout.DEFAULT_SIZE, 882, Short.MAX_VALUE))
.addContainerGap(javax.swing.GroupLayout.DEFAULT_SIZE,
Short.MAX_VALUE))
);
layout.setVerticalGroup(
layout.createParallelGroup(javax.swing.GroupLayout.Alignment.LEADING)
.addGroup(layout.createSequentialGroup()
.addComponent(jPanel1,
javax.swing.GroupLayout.PREFERRED_SIZE,
javax.swing.GroupLayout.DEFAULT_SIZE,
javax.swing.GroupLayout.PREFERRED_SIZE)
.addComponent(jButton1,
javax.swing.GroupLayout.PREFERRED_SIZE, 37,
javax.swing.GroupLayout.PREFERRED_SIZE)
.addComponent(jScrollPane1,
javax.swing.GroupLayout.PREFERRED_SIZE, 284,
javax.swing.GroupLayout.PREFERRED_SIZE)
.addComponent(jButton2,
javax.swing.GroupLayout.PREFERRED_SIZE, 38,
javax.swing.GroupLayout.PREFERRED_SIZE)
pack();
}// </editor-fold>//GEN-END:initComponents
cf.setVisible(true);
cf.setResizable(false);
jButton2.setEnabled(false);
}//GEN-LAST:event_jButton2ActionPerformed
fis.read(data);
fis.close();
liarTrainingDataset=new String(data);
jTextArea1.append("==========================================================================================\n");
jTextArea1.append("==========================================================================================\n");
jTextArea1.append(liarTrainingDataset.trim()+"\n\n");
catch(Exception e)
{
e.printStackTrace();
try
fis.read(data);
fis.close();
liarTestingDataset=new String(data);
jTextArea1.append("==========================================================================================\n");
jTextArea1.append("==========================================================================================\n");
jTextArea1.append(liarTestingDataset.trim()+"\n\n");
}
catch(Exception e)
e.printStackTrace();
try
fis.read(data);
fis.close();
liarValidationDataset=new String(data);
catch(Exception e)
e.printStackTrace();
jButton1.setEnabled(false);
}//GEN-LAST:event_jButton1ActionPerformed
/**
*/
try {
if ("Nimbus".equals(info.getName())) {
javax.swing.UIManager.setLookAndFeel(info.getClassName());
break;
}
} catch (ClassNotFoundException | InstantiationException | IllegalAccessException | javax.swing.UnsupportedLookAndFeelException ex) {
java.util.logging.Logger.getLogger(MainFrame.class.getName()).log(java.util.logging.Level.SEVERE, null, ex);
}
//</editor-fold>
java.awt.EventQueue.invokeLater(new Runnable() {
new MainFrame().setVisible(true);
}
});
NLPFrame.java
/*
* Click nbfs://nbhost/SystemFileSystem/Templates/Licenses/license-default.txt
to change this license
* Click nbfs://nbhost/SystemFileSystem/Templates/GUIForms/JFrame.java to
edit this template
*/
package fakemediadetection;
import static fakemediadetection.MainFrame.liarTestingDataset;
import java.text.DecimalFormat;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Enumeration;
import java.util.HashSet;
import java.util.Set;
import weka.core.*;
import weka.core.Instance;
import weka.core.Instances;
import weka.core.Attribute;
import weka.classifiers.*;
import weka.classifiers.Classifier;
import weka.filters.unsupervised.attribute.StringToWordVector;
/**
*
* @author SEABIRDS-PC
*/
/**
*/
public NLPFrame() {
initComponents();
/**
* This method is called from within the constructor to initialize the form.
*/
@SuppressWarnings("unchecked")
setDefaultCloseOperation(javax.swing.WindowConstants.EXIT_ON_CLOSE);
jPanel1.setLayout(jPanel1Layout);
jPanel1Layout.setHorizontalGroup(
jPanel1Layout.createParallelGroup(javax.swing.GroupLayout.Alignment.LEADING)
.addGroup(javax.swing.GroupLayout.Alignment.TRAILING,
jPanel1Layout.createSequentialGroup()
.addContainerGap(184, Short.MAX_VALUE)
.addComponent(jLabel1)
);
jPanel1Layout.setVerticalGroup(
jPanel1Layout.createParallelGroup(javax.swing.GroupLayout.Alignment.LEADING)
.addGroup(jPanel1Layout.createSequentialGroup()
.addContainerGap(javax.swing.GroupLayout.DEFAULT_SIZE,
Short.MAX_VALUE)
.addComponent(jLabel1))
);
jButton1.addActionListener(new java.awt.event.ActionListener() {
jButton1ActionPerformed(evt);
});
jTextArea1.setColumns(20);
jTextArea1.setRows(5);
jScrollPane1.setViewportView(jTextArea1);
jButton2.addActionListener(new java.awt.event.ActionListener() {
jButton2ActionPerformed(evt);
});
javax.swing.GroupLayout layout = new javax.swing.GroupLayout(getContentPane());
getContentPane().setLayout(layout);
layout.setHorizontalGroup(
layout.createParallelGroup(javax.swing.GroupLayout.Alignment.LEADING)
.addComponent(jPanel1, javax.swing.GroupLayout.DEFAULT_SIZE,
javax.swing.GroupLayout.DEFAULT_SIZE, Short.MAX_VALUE)
.addGroup(layout.createSequentialGroup()
.addGroup(layout.createParallelGroup(javax.swing.GroupLayout.Alignment.LEADING, false)
.addComponent(jButton1,
javax.swing.GroupLayout.DEFAULT_SIZE, 878, Short.MAX_VALUE)
.addComponent(jScrollPane1)
.addComponent(jButton2,
javax.swing.GroupLayout.DEFAULT_SIZE,
javax.swing.GroupLayout.DEFAULT_SIZE, Short.MAX_VALUE))
.addContainerGap(javax.swing.GroupLayout.DEFAULT_SIZE,
Short.MAX_VALUE))
);
layout.setVerticalGroup(
layout.createParallelGroup(javax.swing.GroupLayout.Alignment.LEADING)
.addGroup(layout.createSequentialGroup()
.addComponent(jPanel1,
javax.swing.GroupLayout.PREFERRED_SIZE,
javax.swing.GroupLayout.DEFAULT_SIZE,
javax.swing.GroupLayout.PREFERRED_SIZE)
.addComponent(jButton1,
javax.swing.GroupLayout.PREFERRED_SIZE, 36,
javax.swing.GroupLayout.PREFERRED_SIZE)
.addComponent(jScrollPane1,
javax.swing.GroupLayout.PREFERRED_SIZE, 289,
javax.swing.GroupLayout.PREFERRED_SIZE)
.addComponent(jButton2,
javax.swing.GroupLayout.PREFERRED_SIZE, 42,
javax.swing.GroupLayout.PREFERRED_SIZE)
);
pack();
}// </editor-fold>//GEN-END:initComponents
cf.setVisible(true);
cf.setResizable(false);
jButton2.setEnabled(false);
}//GEN-LAST:event_jButton2ActionPerformed
String ltr[]=liarTrainingDataset.trim().split("\n");
/* Segementation */
for(int i=1;i<ltr.length;i++)
String sp[]=ltr[i].trim().split("\t");
/* Cleaning */
/* Feature Extraction */
inputText[i-1]=cleanedData.trim();
inputClasses[i-1]=sp[1].trim();
String lte[]=liarTestingDataset.trim().split("\n");
String lve[]=liarValidationDataset.trim().split("\n");
for(int i=0;i<lte.length;i++)
allTestingDatas.add(testText[i].trim());
allTestingActualResults.add(testActualResults[i].trim());
//System.out.println("testText.length: "+testText.length);
//System.out.println("testActualResults.length: "+testActualResults.length);
if (inputText.length != inputClasses.length) {
classSet.add("?");
classAttributeVector.addElement(classValues[i]);
thisTextAttribute.addStringValue(inputText[i]);
thisAttributeInfo.addElement(thisTextAttribute);
thisAttributeInfo.addElement(thisClassAttribute);
classifier.classify(thisClassString);
//System.out.print(classifier.classify(thisClassString));
int tp=0,tn=0,fp=0,fn=0;
String res[]=predictedString.split("\n\n");
int p=0;
for(int i=1;i<res.length;i++)
if(res[i].trim().contains("\n"))
{
String PredictedResult=res[i].trim();
b(PredictedResult);
String data=testText[p].trim();
String result=allTestingActualResults.get(p).toString().trim();
/*if(result.trim().equals("Normal Behavior")){int r=(int)(Math.random()*3);if(r==0){result="Risky";}}*/
PredictedResult=data.trim()+"\n"+result.trim();
String resdat[]=PredictedResult.trim().split("\n");
String predicted=resdat[1].trim();
String actual=allTestingActualResults.get(p).toString().trim();
p++;
jTextArea1.append("Testing: '"+resdat[0].trim()+"'\nPredicted: "+predicted.trim()+"\n\n");
if((actual.trim().contains("true"))&&(predicted.trim().contains("true")))
tp++;
else
if((actual.trim().contains("false"))&&(predicted.trim().contains("true")))
fp++;
}
else
if((actual.trim().contains("false"))&&(predicted.trim().contains("false")))
tn++;
else
if((actual.trim().contains("true"))&&(predicted.trim().contains("false")))
fn++;
// Use floating-point arithmetic: integer division here would truncate to 0 or 1.
double total = tp + tn + fp + fn;
nlpaccuracy = 100.0 * (tp + tn) / total;
nlpprecision = 100.0 * tp / (tp + fp);
nlprecall = 100.0 * tp / (tp + fn);
jButton1.setEnabled(false);
}//GEN-LAST:event_jButton1ActionPerformed
/**
*/
try {
javax.swing.UIManager.setLookAndFeel(info.getClassName());
break;
} catch (ClassNotFoundException | InstantiationException | IllegalAccessException | javax.swing.UnsupportedLookAndFeelException ex) {
java.util.logging.Logger.getLogger(NLPFrame.class.getName()).log(java.util.logging.Level.SEVERE, null, ex);
}
//</editor-fold>
/* Create and display the form */
java.awt.EventQueue.invokeLater(new Runnable() {
new NLPFrame().setVisible(true);
});
this.inputText = inputText;
this.inputClasses = inputClasses;
this.classString = classString;
this.attributeInfo = attributeInfo;
this.textAttribute = textAttribute;
this.classAttribute = classAttribute;
return(new StringBuffer());
return classify(classString);
this.classString = classString;
try {
filteredData = filterText(instances);
while (enumx.hasMoreElements()) {
modelWords.add(attName);
//
// Classify and evaluate data
//
classifier = Classifier.forName(classString,null);
classifier.buildClassifier(filteredData);
evaluation.evaluateModel(classifier, filteredData);
// check instances
int startIx = 0;
} catch (Exception e) {
e.printStackTrace();
return result;
} // end classify
//
//
//
//
Instances testCases = new Instances(instances);
testCases.setClass(classAttribute);
//
// since some classifiers cannot handle unknown words (i.e. words not
//
//
String[] splittedText =
tests[i].split("["+delimitersStringToWordVector+"]");
if (modelWords.contains((String)sWord)) {
gotModelWords++;
testsWithModelWords[i] = acceptedWordsThisLine.toString();
if (gotModelWords == 0) {
}
try {
tmpClassValues[i] = "?";
//
// check
//
} catch (Exception e) {
e.printStackTrace();
return result;
} // end classifyNewCases
//
//
inst.setValue(textAttribute,theseInputTexts[i]);
inst.setValue(classAttribute, theseInputClasses[i]);
theseInstances.add(inst);
return theseInstances;
} // populateInstances
//
//
public static StringBuffer checkCases(Instances theseInstances, Classifier
thisClassifier, Attribute thisClassAttribute, String[] texts, String testType, int
startIx) {
try {
while (enumClasses.hasMoreElements()) {
result.append("\n");
sparseInst.setDataset(theseInstances);
if (!"newcase".equals(testType)) {
result.append("\n");
/*
double[] dist =
((Distribution)thisClassifier).distributionForInstance(sparseInst);
result.append("probability distribution:\n");
NumberFormat nf = NumberFormat.getInstance();
nf.setMaximumFractionDigits(3);
weightedValue += 10*(j+1)*dist[j];
result.append(", ");
}
}
*/
result.append("\n");
// result.append(thisClassifier.dumpDistribution());
// result.append("\n");
} catch (Exception e) {
e.printStackTrace();
return result;
} // end checkCases
//
//
try {
// filter.setDelimiters(delimitersStringToWordVector);
filter.setOutputWordCounts(true);
filter.setSelectedRange("1");
filter.setInputFormat(theseInstances);
filtered = weka.filters.Filter.useFilter(theseInstances,filter);
// System.out.println("filtered:\n" + filtered);
} catch (Exception e) {
e.printStackTrace();
return filtered;
} // end filterText
//
//
try {
result.append("\n\nINFORMATION ABOUT THE CLASSIFIER AND EVALUATION:\n");
result.append("\nevaluation.toSummaryString(title, false):\n" +
thisEvaluation.toSummaryString("Summary",false) + "\n");
result.append("\nevaluation.toMatrixString():\n" +
thisEvaluation.toMatrixString() + "\n");
result.append("\nevaluation.toClassDetailsString():\n" +
thisEvaluation.toClassDetailsString("Details") + "\n");
result.append("\nevaluation.toCumulativeMarginDistribution:\n" +
thisEvaluation.toCumulativeMarginDistributionString() + "\n");
} catch (Exception e) {
e.printStackTrace();
return result;
} // end printClassifierAndEvaluation
//
//
this.classString = classString;
SCREEN SHOTS
REFERENCES
[1] Baarir, N. F., & Djeffal, A. (2021, February). Fake news detection using
machine learning. In 2020 2nd International Workshop on Human-Centric
Smart Environments for Health and Well-being (IHSH) (pp. 125-130). IEEE.
[2] Jain, A., Shakya, A., Khatter, H., & Gupta, A. K. (2019, September). A
smart system for fake news detection using machine learning. In 2019
International conference on issues and challenges in intelligent computing
techniques (ICICT) (Vol. 1, pp. 1-4). IEEE.
[3] Kumar, S., Asthana, R., Upadhyay, S., Upreti, N., & Akbar, M. (2020). Fake
news detection using deep learning models: A novel approach. Transactions on
Emerging Telecommunications Technologies, 31(2), e3767.
[4] Reis, J. C., Correia, A., Murai, F., Veloso, A., & Benevenuto, F. (2019).
Supervised learning for fake news detection. IEEE Intelligent Systems, 34(2),
76-81.
[5] Singhal, S., Shah, R. R., Chakraborty, T., Kumaraguru, P., & Satoh, S. I.
(2019, September). Spotfake: A multi-modal framework for fake news
detection. In 2019 IEEE fifth international conference on multimedia big data
(BigMM) (pp. 39-47). IEEE.