
A Survey on Detection of Fake and Biased News Using Machine Learning Techniques


Dr. Deepak N R, Professor, Dept. of CSE, HKBKCE, Bangalore, India (deepakn.cs@hkbk.edu.in)
Tabassum Ara, Assistant Professor, Dept. of CSE, HKBKCE, Bangalore, India (tabuara@gmail.com)
Akshaya R, Student, Dept. of CSE, HKBKCE, Bangalore, India (akshayaravi2000@gmail.com)
Anju Krishnan R, Student, Dept. of CSE, HKBKCE, Bangalore, India (anju2807r@gmail.com)
Archana Krishnan R, Student, Dept. of CSE, HKBKCE, Bangalore, India (archana2807r@gmail.com)
Arya Prasad, Student, Dept. of CSE, HKBKCE, Bangalore, India (aryaprasad321@gmail.com)

Abstract— Fake news has grown exponentially owing to the popularity and ease of putting forward one's thoughts and opinions online. Such content is easily believed by a large mass of people, and circulating it causes chaos and unwanted disputes. With fierce competition among news outlets and on social media, it has become a threat to consumers. For months, the coronavirus and its associated impact have given rise to a great deal of fake news worldwide, including in India. During the initial stages of the pandemic, fake news spread recklessly, particularly on social media: content about the origin, cause, symptoms and cure of the virus, along with conspiracy theories, filled news feeds. This brings in the need to help the masses avoid falling prey to fake or biased news using ML algorithms. The use of ensemble machine learning algorithms and vectorizers is the main theme of this paper.

Keywords— Fake news, Social awareness, Panic, Covid-19, News bias, Misleading information, Bias identification, Machine learning.

I. INTRODUCTION

Counterfeit news is not new; it existed long before digital media. However, with the increase in the usage of social platforms and digital media, fake news is spreading at an uncontrollable rate.

Fake news is something that everyone has been dealing with for many years now. In the past it was spread through yellow journalism, rumors and excessive gossip articles; today it has become an even more problematic topic as the use of social media has increased exponentially. Any individual with access to the internet can spread fake news, which can be merely silly or can lead to panic among the public. This has become a problem because people cannot distinguish between fake and real news with 100% accuracy, and they may therefore make decisions based on the news they read or heard. This is also one of the causes of psychological warfare, misinformation, economic damage and distrust, and such news has triggered huge conflicts on social media platforms like Twitter and Facebook.

There are many definitions of fake news. It can be described as made-up content about a real event or about an event that never happened. Fake news can also be satirical, drawing humor from real news while departing from it. Nowadays, the spread of fake news has increased because some social media users want to boost their views, readership or likes. Even the media sometimes spreads satirical news to raise TRP ratings, and the news spread can also be biased towards a specific person, a specific group or a specific political party.

Currently, if anything is spreading faster than Covid-19, it is the fake news related to Covid-19: how the virus spreads, lockdowns, the second wave, vaccines and so on. This has caused unnecessary panic among the public about what to believe. To build awareness, social media platforms such as Instagram, Twitter and Facebook have taken certain measures and adopted various approaches to provide the public with the right information, and have also issued quick guidelines on how to distinguish fake news from genuine news [4]. Numerous researchers are additionally working on this issue by utilizing various methods to detect counterfeit news.

This review paper centers on the machine learning techniques that can be utilized to detect fake news on social media platforms. Section II gives a brief overview of the papers referred to, and Section III describes the techniques and datasets used by the different authors to detect fake news.

II. LITERATURE SURVEY

Web-based platforms like Facebook and Twitter are an enormous and persistent supply of news. Most of their contents and articles do not come from professional sources, and a large share of them are either fake or biased. This leads the crowd to wrong conclusions and intentionally fuels political and other kinds of social issues. This paper therefore sets out to locate a dependable and accurate model that classifies fake news and articles from real ones using machine learning algorithms.

All the papers referred to here propose approaches for detecting misleading information circulated on social media and other platforms using machine learning algorithms.
Granik et al. [1] discuss the implementation of a very simple model using an artificial intelligence algorithm, the Naïve Bayes classifier, to detect fake news, since such algorithms had become increasingly good at solving classification problems. The datasets collected by the authors of [1] were drawn from Facebook posts of news channels such as Politico, ABC News and CNN.

The authors in [2] proposed a model for detecting fake news that utilized a technique called "N-gram Analysis" together with other machine learning techniques and, unlike [1], also applied feature extraction to the datasets. The dataset used for this experiment was mostly collected from "Kaggle" and "Reuters" for fake and real news respectively. The proposed model was also tested against publicly available datasets such as the "Horne and Adali dataset" and "Burfoot and Baldwin's satire dataset".
The authors in [3] collected around thirty-three thousand Tweets as their dataset through a Twitter developer account. To recognize fake news extracted from social media, the same authors proposed a system based on "stylistic-computational analysis" built on Natural Language Processing, using an algorithm such as the "one-class Support Vector Machine".

Differently from the previous papers, [5] identifies the user profiles that spread fake news on online media platforms. This is done by using both an automatic and a human point of view to identify certain features of a particular user profile and of the news content shared by it. Instead of checking for fake news, the authors try to identify unreliable user profiles that spread fake news on social platforms through offline and online analysis. The datasets required for this purpose were collected from Twitter. (i) Offline analysis was done using Deep Neural Networks, and (ii) online analysis was done by providing a questionnaire to real users.

The authors in paper [6] exploited the features of "Google BERT (Bidirectional Encoder Representations from Transformers)" and implemented a model for distinguishing fake news. The types of data in the dataset used are multimedia, news and social network data from Twitter. They used two famous publicly available datasets, the "LIAR dataset" and the "FakeNewsNet dataset", for testing the performance. The model designed in this paper also categorizes fake news in multimedia content.

Elhadad et al. [7] discuss the detection of misleading information, particularly on Covid-19. They collected data from reputed international organizations such as WHO, UNICEF and the UN, and from fact-checking websites, to validate their data on Covid-19. The authors created a model for detecting misleading information by utilizing a number of machine learning algorithms and feature extraction techniques.

[8] describes the detection of fake news through approaches that use only the textual features of the news. The authors brought out the power of "ensemble methods" and proposed a model that uses "stylometric features" and a "word vector" representation of the dataset to detect fake news on social media, which is further elaborated in Section III. The results of both the "stylometric" and the "word vector" features were improved by using "ensemble methods".

Ning Xin Nyow and Hui Na Chua, in [9], derived and transformed Twitter data in order to spot further significant attributes that influence the accuracy with which ML techniques categorize a news item as real or fake, making use of data-mining approaches. The authors also shed light on the characteristics, the numerous attributes of a Tweet, and the design of an application that can reliably flag the distinction for online news.

The authors in [10] begin by putting forward an outline of well-known information-spread models, including the well-known "Maki-Thompson model". Using these as the foundation, they designed what they describe as "context-aware modeling frameworks capable of capturing specific eventualities in online social media information spread". They proposed four models capable of capturing the elements of information spread for a particular context and additionally provided stochastic versions of those models.

Utilizing a multinomial voting algorithm [11], the paper mainly focuses on a combined method for detecting counterfeit data. "Naïve Bayes", "Random Forest", "Decision Tree", "Support Vector Machine" and "k-Nearest Neighbours" are some of the other machine learning techniques implemented in this paper. The training data used to train the algorithms is obtained from the "Bag of Words" model. Further verification and validation of the data is carried out in Python, and Tableau, a visualization tool, is used in this paper. The implementation is carried out with default algorithm parameter values.

Because of the negative influence on society of the widespread circulation of fake news via social media and news outlets, [13] argues for an automated fake news detection tool. To address this problem, a combination of neural network architectures, CNN and LSTM, is utilized, and two distinct dimensionality reduction approaches, Chi-Square and Principal Component Analysis, are used. This model improved results by 20% in terms of F1 score and 4% in terms of accuracy.

As mentioned in the papers above, two of the most commonly used vectorizers are the TF-IDF and count vectorizers [14]. In addition, to improve the scores, the authors include English stop words, and the paper uses various classifiers such as SVM, Naive Bayes, DT and logistic regression.

[15] discusses the presidential election that took place in the US in 2016 and how counterfeit news resulted in debates and baseless accusations. The authors of the paper created a dataset consisting of two hundred tweets on "Hillary Clinton" and assessed them. They explore techniques for extracting and classifying news and perform linguistic analysis on the tweets after text normalization.

III. FAKE NEWS DETECTION

In this section we briefly describe the various methodologies mentioned in Section II for detecting fake news.

The upcoming generation is more inclined to rely on social media for news updates than on traditional news outlets. The omnipresence of online life has a strong negative impact on individual consumers: having neither the means nor the time to recheck facts, they tend to believe and spread what they read. [12] uses Twitter data and proposes a system that identifies fake news using the SVM algorithm.

[1] used the simplest approach, classifying news as real or fake with a Naïve Bayes classifier, as mentioned above. The collected datasets were manually labelled as "true", "false", "mostly true", "mixture of both", and so on. The accuracy achieved by this method was 74%, which is a decent result given that no text pre-processing or feature extraction was performed on the collected datasets. The datasets used by [1] contained only 2000 articles, which is small. The authors also suggested various ways to improve the accuracy of the model, such as using larger datasets and longer articles, removing stop words, and stemming, and noted that accuracy could be improved further by using more complex algorithms.
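As an illustration of the idea (not the implementation used in [1]), a minimal Naïve Bayes text classifier can be sketched as follows; the training headlines, labels and test sentence are invented placeholders.

```python
# Minimal sketch of a Naive Bayes fake-news classifier in the spirit of [1].
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

train_texts = [
    "stocks rise after quarterly earnings report",
    "aliens endorse presidential candidate in secret meeting",
]
train_labels = [1, 0]   # 1 = real, 0 = fake (toy labels)

# Bag-of-words counts feed a multinomial Naive Bayes model.
model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(train_texts, train_labels)

print(model.predict(["earnings report lifts stocks"]))   # predicted label for a new headline
```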
As cited above in Section II, Ahmed et al. [2] built a model for detecting fake news using N-gram analysis and machine learning algorithms. The datasets were pre-processed by performing lower-casing, stop-word removal, sentence segmentation, tokenization and punctuation removal. The authors used two different feature extraction techniques, Term Frequency (TF) and Term Frequency - Inverse Document Frequency (TF-IDF), to extract features from the text. Finally, six different algorithms were used for classification, including Support Vector Machines (SVM), Stochastic Gradient Descent (SGD), Decision Trees (DT), Linear Support Vector Machines (LSVM) and k-Nearest Neighbour (kNN), and the Python Natural Language Toolkit (NLTK) was used in implementing these classifiers. The best accuracy, 92%, was achieved using LSVM, and the lowest, 47.2%, was obtained with k-NN and SVM. After testing this model against the "Horne and Adali dataset" and "Burfoot and Baldwin's satire dataset", they achieved an accuracy of 87%, which was significantly higher than what the authors of those datasets had achieved.
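A hedged sketch of this kind of classifier comparison is given below. It is not the authors' code: LinearSVC stands in for the LSVM classifier, the other members are a subset of the list above, and load_corpus() is an assumed helper returning lists of texts and 0/1 labels.

```python
# Illustrative comparison of several classifiers over TF-IDF n-gram features.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import SGDClassifier
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC
from sklearn.tree import DecisionTreeClassifier

texts, labels = load_corpus()   # assumed helper: list of documents and 0/1 labels

candidates = {
    "LSVM": LinearSVC(),
    "SGD": SGDClassifier(),
    "DT": DecisionTreeClassifier(),
    "kNN": KNeighborsClassifier(),
}

for name, clf in candidates.items():
    # Each candidate sees the same unigram+bigram TF-IDF representation.
    pipe = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), clf)
    scores = cross_val_score(pipe, texts, labels, cv=5, scoring="accuracy")
    print(name, round(scores.mean(), 3))
```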
[3] uses an unsupervised machine learning algorithm, the one-class SVM, and, in addition to the feature extraction methods employed, the authors also apply dimensionality reduction techniques such as Latent Semantic Analysis (LSA) to reduce the dimensionality of the datasets while maintaining high precision. They propose the following three methodologies for classifying news as legitimate or not: (i) the Reduction Methodology with Training, (ii) the Matrix Transformation Methodology, and (iii) the Radial Limit Methodology. The first two methods use a combination of machine learning algorithms for unsupervised clustering and classification of news after training the model using only real news, while the third method is used to extend the model's detection to a statistical scenario in which it evaluates the type of news. The highest accuracy and precision obtained by the authors after assessing these methodologies are 86% and 94% respectively.
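The combination of LSA-style reduction with a one-class model trained only on legitimate news can be sketched as follows. This is an assumption-laden illustration rather than the authors' pipeline; load_real_news() and load_unlabeled_news() are hypothetical helpers, and the component count and nu value are arbitrary.

```python
# Sketch: TF-IDF -> truncated SVD (LSA) -> one-class SVM trained on real news only.
from sklearn.decomposition import TruncatedSVD
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import OneClassSVM

real_news = load_real_news()           # assumed helper: list of legitimate articles
unknown_news = load_unlabeled_news()   # assumed helper: articles to screen

detector = make_pipeline(
    TfidfVectorizer(),
    TruncatedSVD(n_components=100),    # LSA-style dimensionality reduction
    OneClassSVM(kernel="rbf", nu=0.1), # boundary around the "legitimate" region
)
detector.fit(real_news)                # trained on real news only, no fake labels needed

flags = detector.predict(unknown_news) # +1 = looks legitimate, -1 = suspicious
```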
As discussed in Section II, the authors of [5] proposed a model that predicts unreliable users who spread fake news on social media. The offline analysis is performed on already labelled datasets, which are classified using classifiers, while the online analysis is performed on data evaluated by real users. They perform "News Content Classification" and "User Profile Classification" to detect fake user profiles by combining the social information of the user with the news content. News content is classified using a hybrid approach that combines Long Short-Term Memory (LSTM) neural network properties with Convolutional Neural Network properties, whereas user profiles are classified using Deep Neural Networks, and the results are compared with three other classifiers: a Support Vector Machine classifier optimized by Stochastic Gradient Descent (SVM-SGD), a Linear Support Vector Machine classifier (SVC), and a k-Nearest Neighbour classifier (kNN). The authors achieved an accuracy of 90% in content classification and 92% in predicting whether a user profile is fake or real, which is only partially reflected in the online analysis, with an average accuracy of 54%.
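A rough Keras sketch of a combined CNN + LSTM text classifier of the kind used for news-content classification in [5] is shown below; the vocabulary size, sequence length and layer widths are illustrative guesses, not the authors' settings.

```python
# Illustrative CNN + LSTM hybrid for binary fake/real news-content classification.
from tensorflow.keras.layers import Conv1D, Dense, Embedding, Input, LSTM, MaxPooling1D
from tensorflow.keras.models import Sequential

vocab_size, seq_len = 20000, 300        # assumed tokenizer settings

model = Sequential([
    Input(shape=(seq_len,)),            # integer-encoded token ids
    Embedding(vocab_size, 128),
    Conv1D(64, kernel_size=5, activation="relu"),   # local n-gram patterns
    MaxPooling1D(pool_size=4),
    LSTM(64),                                       # longer-range ordering of the text
    Dense(1, activation="sigmoid"),                 # fake vs. real probability
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()
```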
Similar to the papers above, the datasets used in [6] underwent text pre-processing and feature extraction using TF-IDF, while preserving certain stop words for proper context analysis. The proposed framework uses multiple Python libraries for detecting fake news. The Tweepy library was used to download data from Twitter, Apache Kafka was used to aggregate and filter the data obtained, and a data crawler based on the Newspaper library in Python was used to extract articles. Apache Cassandra DB was used to store the data. To perform text classification, the data is retrieved from Cassandra DB and processed with machine learning algorithms from the Scikit-Learn library as well as deep learning algorithms from the Keras library. To detect fake news images, the multimedia deep learning module in Keras is used. The overall model was implemented on the Google Colab platform by uploading the dataset to Google Drive, from which it was loaded into Colab's Jupyter notebook as a Pandas dataframe. Compared with the other machine learning algorithms, the Google BERT model gave the best results, and it was also tested on the LIAR and PolitiFact datasets.
The model proposed by Elhadad et al. [7] for detecting misleading information on Covid-19 has four main stages: Information Fusion, Information Filtering, Model Building and Detection. They used 10 machine learning algorithms: Decision Tree (DT), k-Nearest Neighbour (kNN), Logistic Regression (LR), Linear Support Vector Machines (LSVM), Multinomial Naïve Bayes (MNB), Bernoulli Naïve Bayes (BNB), Perceptron, Neural Network (NN), Ensemble Random Forest (ERF) and Extreme Gradient Boosting (XGBoost). The datasets obtained after filtering, text pre-processing and feature extraction were fed to these commonly used machine learning algorithms, executed with the well-known Python machine learning library Scikit-Learn. The authors used the 10 machine learning algorithms and 7 feature extraction techniques in a voting ensemble to determine whether the data is real or misleading, and also performed 5 cross-validation checks to verify the validity of the data and the performance metrics.
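A voting ensemble of this kind can be sketched with scikit-learn as follows. This is only an illustration in the spirit of [7]: the member list is shortened to three classifiers, and load_covid_claims() is an assumed placeholder that returns claim texts and 0/1 labels.

```python
# Illustrative hard-voting ensemble over TF-IDF features.
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

texts, labels = load_covid_claims()   # assumed helper: claim texts and 0/1 labels

ensemble = make_pipeline(
    TfidfVectorizer(),
    VotingClassifier(
        estimators=[
            ("lr", LogisticRegression(max_iter=1000)),
            ("mnb", MultinomialNB()),
            ("rf", RandomForestClassifier(n_estimators=200)),
        ],
        voting="hard",                # each member votes; the majority label wins
    ),
)
ensemble.fit(texts, labels)
print(ensemble.predict(["garlic water cures the virus overnight"]))
```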
Fig. 1. Generalized flowchart for Fake News Detection

The article [8] mainly focuses on improving methods of detecting fake news on social networks using word vector representations of the textual content together with stylometric features, as mentioned in Section II. The authors blended the FakeNewsNet dataset and the McIntire dataset, which together provide a balanced set of fake news samples (50.1%) and real news samples (49.9%) and additionally include political news from all sources in connection with the elections. Stylometric features are computed to confirm the authorship of a text based on the semantics of the writing style, and word vector features are computed to convert the raw collection of text data into fixed-length vectors using a bag-of-words (BOW) tool. For classification they used "Naive Bayes (NB) (Gaussian and multinomial)", "Support Vector Machine (SVM)", "KNN", "Logistic Regression (LR)", "Random Forest (RF)", "bagging" with the standard bagging classifier and the Extra Trees classifier, and boosting with "AdaBoost" and "Stochastic Gradient Boosting". They found that the ensemble methods "boosting", "bagging" and "voting" helped to enhance the outcomes with both the stylometric and the word vector features, even with a moderately sized training dataset; through "boosting", fake news could be predicted with an accuracy of up to 95.49%.
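As an illustration of the boosting setup, and not the authors' actual experiment, the sketch below applies AdaBoost over bag-of-words vectors; load_merged_dataset() is an assumed helper standing in for a FakeNewsNet/McIntire-style corpus, and all hyperparameters are arbitrary.

```python
# Illustrative boosting over fixed-length bag-of-words vectors.
from sklearn.ensemble import AdaBoostClassifier
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

texts, labels = load_merged_dataset()   # assumed helper: articles and 0/1 labels

X_train, X_test, y_train, y_test = train_test_split(texts, labels, test_size=0.2)

booster = make_pipeline(
    CountVectorizer(max_features=5000),  # bag-of-words vectors of a fixed length
    AdaBoostClassifier(n_estimators=300),  # boosted ensemble of weak learners
)
booster.fit(X_train, y_train)
print("held-out accuracy:", booster.score(X_test, y_test))
```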
The authors of [9], with the intention of picking out more significant attributes that affect the accuracy of machine learning techniques in classifying real and fake information, derived and transformed social media Twitter data using a data mining approach. They centered on the stages of data understanding, data transformation, data modelling and data evaluation. URL, id, tweet_ids and title, the four attributes of the FakeNewsNet repository, were analyzed for data understanding, and these four attributes can be acquired through the relevant Twitter API, driven by a Tweet's properties, for data crawling. An improved dataset capturing Tweet attributes was developed during the pre-processing and transformation process. The transformed attribute values are passed to a data transformer to prepare them for cross-validation and training, and before the data is sent for prediction, the data preparation tasks are performed by the data transformer on the input records. The predicted result is then sent back to the user interface via the application server's main algorithm. The model is assessed using several machine learning algorithms, including Naïve Bayes (NB), Decision Tree, Logistic Regression, Random Forest (RF) and SVM, and during training the authors also experimented with different parameter settings. Based on the cross-validation outcomes, with a stratified data partition ratio of 60:40 on 23,206 records used for cross-validation and training on the news dataset, the Random Forest (RF) and Naïve Bayes (NB) strategies perform the best.
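The stratified 60:40 evaluation described above can be sketched as follows; build_tweet_feature_matrix() is an assumed helper that abstracts the Tweet-attribute feature engineering into a numeric matrix, so the snippet only illustrates the evaluation protocol, not the paper's feature set.

```python
# Illustrative stratified 60:40 split plus cross-validation comparing RF and NB.
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score, train_test_split
from sklearn.naive_bayes import GaussianNB

X, y = build_tweet_feature_matrix()   # assumed helper: dense numeric features, labels

X_train, X_test, y_train, y_test = train_test_split(
    X, y, train_size=0.6, stratify=y, random_state=42   # stratified 60:40 partition
)

for name, clf in [("RF", RandomForestClassifier()), ("NB", GaussianNB())]:
    scores = cross_val_score(clf, X_train, y_train, cv=5)
    print(name, "cross-validation accuracy:", round(scores.mean(), 3))
```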
The paper [10] examines social media structures through a variety of microscopic information-spread models: the ISR, ISI and IS models, presented in the context of information spread and tailored towards social media usage. To explain interactions between spreading and counter-spreading classes, an ISCR model was proposed, and an ISSRR model for contentious information was presented to deal with people holding competing viewpoints in social networks. Finally, to demonstrate how real-world stochastic systems of social media information spread may be approached, stochastic versions of the models were introduced. The authors also carried out case studies related to the 2018 Golden Globe Awards and a viral internet debate, along with an algorithm to assemble Ignorant-Spreader-Recovered groups (ISR groups) from real Twitter data, to validate the proposed models.
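To give a feel for this family of models, the toy simulation below tracks an Ignorant-Spreader-Recovered population in discrete time. The update rule and rates are our own simplifying assumptions for illustration, not the equations from [10].

```python
# Toy discrete-time ISR (Ignorant-Spreader-Recovered) rumor-spread simulation.
def simulate_isr(population=10_000, spreaders=10, contact_rate=0.3,
                 recovery_rate=0.1, steps=50):
    ignorant = population - spreaders
    spreader = float(spreaders)
    recovered = 0.0
    history = []
    for _ in range(steps):
        # Ignorant users become spreaders in proportion to contact with spreaders.
        new_spreaders = contact_rate * ignorant * spreader / population
        # Spreaders lose interest (or are corrected) and stop spreading.
        new_recovered = recovery_rate * spreader
        ignorant -= new_spreaders
        spreader += new_spreaders - new_recovered
        recovered += new_recovered
        history.append((ignorant, spreader, recovered))
    return history

for step, (i, s, r) in enumerate(simulate_isr()[::10]):
    print(step * 10, round(i), round(s), round(r))
```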
Fake News: A Systematic Literature Review”, International
Conference on Integrated Science, ICIS 2020: Integrated
Science in Digital Age 2020, pp. 13-22. doi:
10.1007/978-3-030- 49264-9_2
[5] Giuseppe Sansonetti, Fabio Gasparetti, Giuseppe D’aniello,
(Member IEEE), and Alessandro Micarelli, “Unreliable Users
Detection in Social Media: Deep Learning Techniques for
Automatic Detection”, IEEE Access (Volume: 8) 2020, pp.
213154-213167. doi: 10.1109/ACCESS.2020.3040604
[6] Elio Masciari, Vincenzo Moscato, Antonio Picariello, and
Giancarlo Sperli, “A Deep Learning Approach to Fake News
Detection”, Springer Nature Switzerland AG 2020, ISMIS 2020,
pp. 113-122, 2020. Doi: 10.1007/978-3-030-59491-6_11
In the experimental results of [13], PCA surpasses Chi-square and the other methods with 97.8% accuracy. In the final set of experiments, the proposed ensemble CNN-LSTM model is trained on 49,972 samples and tested on 25,413 articles and headlines. Initially, the non-reduced feature set was processed with and without the neural network, and the dimensionality reduction techniques were applied afterwards. To summarize these experimental results, noisy, irrelevant and redundant features are removed from the feature vector by PCA, yielding 97.8% accuracy.
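The PCA reduction step can be sketched as below. This is only an illustration of the dimensionality-reduction idea: a simple logistic regression stands in for the CNN-LSTM used in [13], load_dense_news_features() is an assumed helper, and the component count is arbitrary.

```python
# Illustrative PCA dimensionality reduction before classification.
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_dense_news_features()    # assumed helper: dense feature matrix, labels

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3)

clf = make_pipeline(
    StandardScaler(),
    PCA(n_components=100),           # keep only the directions of largest variance
    LogisticRegression(max_iter=1000),
)
clf.fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))
```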
As a result of a smaller sample dataset, the accuracy of the experiment in [14] is low. The authors hypothesize that the algorithms would perform well and yield higher scores and accuracy if the dataset were larger. In this paper they lean towards SVM along with the TF-IDF vectorizer for higher accuracy. Both SVM and the logistic regression model gave better scores for larger datasets, so it is difficult to forecast which of the two is better; but Naive Bayes and decision tree clearly improve the scores.
Alongside linguistic analysis, [15] also uses a bag-of-words model to extract and find noticeable patterns. An algorithm for classifying polarized news is built by applying the nearest-neighbour method. They also designed a model that helps humans make better decisions when detecting deceptive news. The grammar in the tweets was deconstructed for in-depth analysis, and the BoW model was applied to the categorized, labelled tweets.

Fig. 2. Positive, Negative and Neutral sentiments in malicious tweets

Fig. 3. Positive, Negative and Neutral sentiments in credible tweets
IV. CONCLUSION

This paper gave a brief overview of the different methodologies used by various authors to detect fake news on social networking sites. The referred authors used machine learning algorithms such as the Naïve Bayes classifier, SVM, k-NN, LSVM, DT, RF and LR to detect real and fake news. Most of the authors agree that ensemble models built from different combinations of the above-mentioned algorithms are capable of providing higher accuracy than single algorithms.

REFERENCES

[1] Mykhailo Granik, Volodymyr Mesyura, "Fake News Detection Using Naïve Bayes Classifier", 2017 IEEE First Ukraine Conference on Electrical and Computer Engineering (UKRCON), 2017, pp. 900-903. doi: 10.1109/UKRCON.2017.8100379
[2] Hadeer Ahmed, Issa Traore, and Sherif Saad, "Detection of Online Fake News Using N-Gram Analysis and Machine Learning Techniques", Springer International Publishing AG, 2017, pp. 127-138. doi: 10.1007/978-3-319-69155-8_9
[3] Nicollas R. de Oliveira, Dianne S. V. Medeiros and Diogo M. F. Mattos, "A Sensitive Stylistic Approach to Identify Fake News on Social Networking", IEEE Signal Processing Letters (Volume 27), 2020, pp. 1250-1254. doi: 10.1109/LSP.2020.3008087
[4] Dylan de Beer and Machdel Matthee, "Approaches to Identify Fake News: A Systematic Literature Review", International Conference on Integrated Science, ICIS 2020: Integrated Science in Digital Age 2020, pp. 13-22. doi: 10.1007/978-3-030-49264-9_2
[5] Giuseppe Sansonetti, Fabio Gasparetti, Giuseppe D'aniello, and Alessandro Micarelli, "Unreliable Users Detection in Social Media: Deep Learning Techniques for Automatic Detection", IEEE Access (Volume 8), 2020, pp. 213154-213167. doi: 10.1109/ACCESS.2020.3040604
[6] Elio Masciari, Vincenzo Moscato, Antonio Picariello, and Giancarlo Sperli, "A Deep Learning Approach to Fake News Detection", Springer Nature Switzerland AG, ISMIS 2020, pp. 113-122, 2020. doi: 10.1007/978-3-030-59491-6_11
[7] Mohamed K. Elhadad, Kin Fun Li, and Fayez Gebali, "Detecting Misleading Information on COVID-19", IEEE Access (Volume 8), 2020, pp. 165201-165215. doi: 10.1109/ACCESS.2020.3022867
[8] Harita Reddy, Namratha Raj, Manali Gala, Annappa Basava, "Text-mining-based Fake News Detection Using Ensemble Methods", Int. J. Autom. Comput. 17, pp. 210-221, 2020. doi: 10.1007/s11633-019-1216-5
[9] Ning Xin Nyow and Hui Na Chua, "Detecting Fake News with Tweets' Properties", 2019 IEEE Conference on Application, Information and Network Security (AINS), Pulau Pinang, Malaysia, 2019, pp. 24-29. doi: 10.1109/AINS47559.2019.8968706
[10] Michael Muhlmeyer, Shaurya Agarwal and Jiheng Huang, "Modeling Social Contagion and Information Diffusion in Complex Socio-Technical Systems", IEEE Systems Journal, vol. 14, no. 4, pp. 5187-5198, Dec. 2020. doi: 10.1109/JSYST.2020.2993542
[11] Palagati Bhanu Prakash Reddy, Mandi Pavan Kumar Reddy, Ganjikunta Venkata Manaswini Reddy and K. M. Mehata, "Fake Data Analysis and Detection Using Ensembled Hybrid Algorithm", 2019 3rd International Conference on Computing Methodologies and Communication (ICCMC), Erode, India, 2019, pp. 890-897. doi: 10.1109/ICCMC.2019.8819741
[12] Shivani Suresh Nikam, Rupali Dalvi, "Fake News Detection on Social Media using Machine Learning Techniques", International Journal of Innovative Technology and Exploring Engineering (IJITEE), ISSN: 2278-3075, Volume-9, Issue-7, May 2020, pp. 940-943. doi: 10.35940/ijitee.G5428.059720
[13] Muhammad Umer, Zainab Imtiaz, Saleem Ullah, Arif Mehmood, Gyu Sang Choi, and Byung-Won On, "Fake News Stance Detection Using Deep Learning Architecture (CNN-LSTM)", IEEE Access (Volume 8), 2020, pp. 156695-156706. doi: 10.1109/ACCESS.2020.3019735
[14] Karishnu Poddar, Geraldine Bessie Amali D, Umadevi K S, "Comparison of Various Machine Learning Models for Accurate Detection of Fake News", 2019 Innovations in Power and Advanced Computing Technologies (i-PACT), 2019. doi: 10.1109/i-PACT44901.2019.8960044
[15] Amitabha Dey, Rafsan Zani Rafi, Shahriar Hasan Parash, Sauvik Kundu Arko and Amitabha Chakrabarty, "Fake News Pattern Recognition using Linguistic Analysis", 2018 Joint 7th International Conference on Informatics, Electronics & Vision (ICIEV) and 2018 2nd International Conference on Imaging, Vision & Pattern Recognition (icIVPR). doi: 10.1109/ICIEV.2018.8641018
