
THE C & C
(THE CLASSIFICATION AND CLUSTERING PROJECT)

THIS PROJECT IS A DEMONSTRATION OF BOTH CLASSIFICATION AND CLUSTERING CONCEPTS OF DATA WAREHOUSING AND DATA MINING.

17BCP006 APOORVA PANCHAL
17BCP020 FORAM PARIKH
THE ELEPHANT IN THE ROOM (THE PROBLEM STATEMENT)

• STUDY OF SPAM DETECTION IN TEXTS USING DIFFERENT CLASSIFIERS
• IMPLEMENTATION OF CLUSTERING USING DIFFERENT ALGORITHMS
MODUS OPERANDI (METHODOLOGY USED)

• IN SPAM DETECTION (PYTHON), THE USER IS GIVEN A CHOICE OF USING ANY OF THE FOUR CLASSIFIERS ON THE DATASET SPAM.CSV: MULTINOMIALNB (NAÏVE BAYES), RANDOM FOREST, KNN OR LOGISTIC REGRESSION.
• THE TRAINING AND TEST SET ACCURACY OF THE SELECTED CLASSIFIER IS EXHIBITED.
• ALONG WITH THE ACCURACY, THE FEATURES OF THE DATASET ARE DEMONSTRATED USING A WORD CLOUD. THE WORDS WITH THE HIGHEST WEIGHT ACCORDING TO THE TF-IDF VECTORIZER ARE DISPLAYED LARGEST IN SIZE. (A SKETCH OF THIS PIPELINE IS SHOWN BELOW.)
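Below is a minimal sketch of the spam-detection flow just described, assuming scikit-learn and the wordcloud package are available. The v1/v2 column names follow the dataset slide; the split ratio, random_state and classifier hyperparameters are illustrative assumptions, not the project's exact settings.

```python
# Hypothetical sketch of the classifier-choice pipeline (not the project's exact code).
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.linear_model import LogisticRegression
from wordcloud import WordCloud

# Load spam.csv: v1 holds the label (ham/spam), v2 holds the raw text.
data = pd.read_csv("spam.csv", encoding="latin-1")[["v1", "v2"]]
data["label"] = (data["v1"] == "spam").astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    data["v2"], data["label"], test_size=0.25, random_state=42)

# TF-IDF features for training and test messages.
vectorizer = TfidfVectorizer(stop_words="english")
X_train_tfidf = vectorizer.fit_transform(X_train)
X_test_tfidf = vectorizer.transform(X_test)

# The user picks one of the four classifiers.
classifiers = {
    "multinomialnb": MultinomialNB(),
    "random forest": RandomForestClassifier(n_estimators=100, random_state=42),
    "knn": KNeighborsClassifier(n_neighbors=5),
    "logistic regression": LogisticRegression(max_iter=1000),
}
choice = "multinomialnb"          # e.g. read from user input
model = classifiers[choice]
model.fit(X_train_tfidf, y_train)

print("Training set accuracy:", model.score(X_train_tfidf, y_train))
print("Test set accuracy:    ", model.score(X_test_tfidf, y_test))

# Word cloud in which the aggregate TF-IDF weight drives the word size.
weights = dict(zip(vectorizer.get_feature_names_out(),
                   X_train_tfidf.sum(axis=0).A1))
WordCloud(width=800, height=400).generate_from_frequencies(weights).to_file("wordcloud.png")
```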
• CLUSTERING ALGORITHMS, NAMELY K-MEANS AND AGGLOMERATIVE HIERARCHICAL CLUSTERING, HAVE BEEN IMPLEMENTED IN PYTHON FROM SCRATCH USING SELF-GENERATED DATASETS.
• IN THE K-MEANS ALGORITHM, THE PARAMETER ‘k’ HAS BEEN TWEAKED TO THE FOLLOWING VALUES: k = 2, 3, 4, 5, AND THE RESULTS HAVE BEEN NOTED ACCORDINGLY. (A FROM-SCRATCH SKETCH IS SHOWN AFTER THIS LIST.)
• IN AGGLOMERATIVE CLUSTERING, THE INPUTS HAVE BEEN TWEAKED AND THE RESULTS NOTED.
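The K-means bullet above refers to a from-scratch implementation; the following is a minimal sketch of that idea (random centroid initialisation, nearest-centroid assignment, centroid updates). The file name points.csv and its x/y headers are placeholders matching the dataset description on the next slide; this is an illustration of how such code could look, not the project's exact implementation.

```python
# Hypothetical from-scratch K-means sketch (not the project's exact code).
import numpy as np
import pandas as pd

def kmeans(points, k, max_iters=100):
    # Pick k distinct rows at random as the initial centroids.
    rng = np.random.default_rng(0)
    centroids = points[rng.choice(len(points), size=k, replace=False)]
    for iteration in range(max_iters):
        # Assign every point to its nearest centroid (Euclidean distance).
        distances = np.linalg.norm(points[:, None, :] - centroids[None, :, :], axis=2)
        labels = distances.argmin(axis=1)
        # Recompute each centroid as the mean of its assigned points.
        new_centroids = np.array([
            points[labels == c].mean(axis=0) if np.any(labels == c) else centroids[c]
            for c in range(k)
        ])
        if np.allclose(new_centroids, centroids):
            break
        centroids = new_centroids
    return centroids, labels, iteration + 1

# Load the self-generated dataset of random (x, y) points and try several values of k.
points = pd.read_csv("points.csv")[["x", "y"]].to_numpy()
for k in (2, 3, 4, 5):
    centroids, labels, iters = kmeans(points, k)
    print(f"k={k}: converged after {iters} iterations, centroids:\n{centroids}")
```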
TREASURE TROVE (DATASETS)

• FOR SPAM DETECTION, A KAGGLE DATASET CALLED SPAM.CSV IS USED.
• THE SMS SPAM COLLECTION IS A SET OF SMS MESSAGES TAGGED AND COLLECTED FOR SMS SPAM RESEARCH. IT CONTAINS 5,574 SMS MESSAGES IN ENGLISH, TAGGED ACCORDING TO WHETHER THEY ARE HAM (LEGITIMATE) OR SPAM.
• THE FILE CONTAINS ONE MESSAGE PER LINE. EACH LINE IS COMPOSED OF TWO COLUMNS: V1 CONTAINS THE LABEL (HAM OR SPAM) AND V2 CONTAINS THE RAW TEXT.
• THE DATASET USED FOR THE K-MEANS ALGORITHM HAS BEEN CREATED BY GENERATING RANDOM PLOTTING POINTS AS X, Y IN A CSV FILE.
• THE INPUT FOR AGGLOMERATIVE CLUSTERING HAS BEEN TAKEN AS A NUMPY ARRAY OF POINTS. (A SKETCH IS SHOWN BELOW.)
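Since the agglomerative input is a NumPy array of points, here is a minimal single-linkage sketch of bottom-up merging under that assumption; the example array and the choice of single linkage are illustrative, and the project's from-scratch version may differ.

```python
# Hypothetical agglomerative (bottom-up) clustering sketch with single linkage.
import numpy as np

def agglomerative(points, n_clusters):
    # Start with every point in its own cluster.
    clusters = [[i] for i in range(len(points))]
    while len(clusters) > n_clusters:
        best = None
        # Find the pair of clusters whose closest points are nearest to each other.
        for a in range(len(clusters)):
            for b in range(a + 1, len(clusters)):
                d = min(np.linalg.norm(points[i] - points[j])
                        for i in clusters[a] for j in clusters[b])
                if best is None or d < best[0]:
                    best = (d, a, b)
        _, a, b = best
        # Merge the closest pair and print the sample after this step.
        clusters[a].extend(clusters[b])
        del clusters[b]
        print("Clusters after merge:", clusters)
    return clusters

# Example input matrix as a NumPy array of points.
X = np.array([[1, 2], [1, 4], [1, 0], [10, 2], [10, 4], [10, 0]])
print("Final clusters:", agglomerative(X, n_clusters=2))
```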
PROOF OF BLOOD, SWEAT AND TEARS (SCREENSHOTS OF OUTPUTS)

• SPAM DETECTION: FOR EACH CHOSEN OPTION (MULTINOMIALNB, RANDOM FOREST, LOGISTIC REGRESSION, K NEAREST NEIGHBOURS), THE SCREENSHOTS SHOW:
  - TRAIN AND TEST SCORE.
  - WORD CLOUD VISUALISATION OF WEIGHTED WORDS.
  - MESSAGES THAT SHOULD / SHOULDN’T HAVE BEEN SPAM, RESPECTIVELY.
• CLUSTERING (K-MEANS): FOR K = 3, 4 AND 5, THE SCREENSHOTS SHOW:
  - THE CENTROIDS.
  - THE FIRST 5 ROWS OF THE DATASET.
  - THE UPDATED CENTROIDS AND ITERATIONS.
• CLUSTERING (AGGLOMERATIVE): THE SCREENSHOTS SHOW:
  - THE INPUT MATRIX.
  - THE SAMPLE AFTER EACH MERGE STEP.
FRUIT OF THE LABOUR (OUTCOME)

1) SPAM DETECTION

CLASSIFIER             TEST SET ACCURACY    TRAINING SET ACCURACY
MULTINOMIAL NB         95.16%               96.81%
RANDOM FOREST          97.28%               100%
LOGISTIC REGRESSION    96.41%               97.40%
K NEAREST NEIGHBOURS   92.22%               95.07%
2) CLUSTERING: K-MEANS

FROM THE EVALUATION OF THE COST FUNCTION AT DIFFERENT VALUES OF ‘k’, THE ELBOW POINT WAS FOUND AT k = 4. THAT IS, THE OPTIMAL SOLUTION IS FOUND AT k = 4. (A SKETCH OF THE ELBOW EVALUATION IS SHOWN BELOW.)
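A minimal sketch of how this elbow evaluation could look, reusing the kmeans routine and the points array from the K-means sketch in the methodology section: the within-cluster sum of squares (the cost) is computed for each k and plotted, and the bend in the curve is read off visually.

```python
# Hypothetical elbow-method sketch: cost (within-cluster sum of squares) versus k.
# Reuses kmeans() and points from the earlier from-scratch K-means sketch.
import numpy as np
import matplotlib.pyplot as plt

def wcss(points, centroids, labels):
    # Sum of squared distances from each point to its assigned centroid.
    return sum(np.sum((points[labels == c] - centroids[c]) ** 2)
               for c in range(len(centroids)))

ks = [2, 3, 4, 5]
costs = []
for k in ks:
    centroids, labels, _ = kmeans(points, k)
    costs.append(wcss(points, centroids, labels))

plt.plot(ks, costs, marker="o")
plt.xlabel("k")
plt.ylabel("Within-cluster sum of squares")
plt.title("Elbow evaluation")
plt.show()
```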

3) CLUSTERING: AGGLOMERATIVE

DIFFERENT OUTCOMES WERE OBSERVED BY FEEDING IN DIFFERENT SETS OF POINTS.
THE FINAL PLATTER (CONCLUSION)

1) SPAM DETECTION

AS OBSERVED FROM THE TABLE IN THE OUTCOMES SECTION, WE CAN INFER AND CONCLUDE THAT THE HIGHEST ACCURACY WAS SEEN IN THE CASE OF THE RANDOM FOREST CLASSIFIER FOR BOTH TEST AND TRAINING DATA.
THE REASONS HAVE BEEN STATED CLASSIFIER-WISE (EACH CLASSIFIER COMPARED WITH THE OTHER THREE):
LOGISTIC REGRESSION

VS MULTINOMIAL NAÏVE BAYES:
- Naive Bayes is a generative model whereas LR is a discriminative model.
- Naive Bayes functions admirably with small datasets, though LR + regularization can accomplish comparable performance.
- LR performs better than Naive Bayes under collinearity, as Naive Bayes expects all features to be independent.

VS RANDOM FOREST:
- The random forest (RF) is an “ensemble learning” technique consisting of the aggregation of a large number of decision trees, resulting in a reduction of variance.
- LR is comparatively faster than RF.

VS KNN:
- KNN is a non-parametric model, whereas LR is a parametric model.
- KNN is comparatively slower than Logistic Regression.
- KNN supports non-linear solutions whereas LR supports only linear solutions.
NAÏVE BAYES

VS LOGISTIC REGRESSION:
- Naive Bayes is a generative model whereas LR is a discriminative model.
- Naive Bayes functions admirably with small datasets, though LR + regularization can accomplish comparable performance.
- LR performs better than Naive Bayes under collinearity, as Naive Bayes expects all features to be independent.

VS RANDOM FOREST:
- Random Forest is a complex and large model whereas Naive Bayes is a relatively smaller model.
- Naive Bayes performs better with small training data, whereas RF needs a larger set of training data.

VS KNN:
- KNN is a non-parametric model, whereas NB is a parametric model.
- Naive Bayes is much faster than KNN due to KNN’s real-time execution.
KNN

VS LOGISTIC REGRESSION:
- KNN is a non-parametric model, whereas LR is a parametric model.
- KNN is comparatively slower than Logistic Regression.
- KNN supports non-linear solutions whereas LR supports only linear solutions.

VS RANDOM FOREST:
- Random Forest is a complex and large model whereas KNN is a relatively smaller model.
- Both need large training sets.

VS NAÏVE BAYES:
- KNN is a non-parametric model, whereas NB is a parametric model.
- Naive Bayes is much faster than KNN due to KNN’s real-time execution.
RANDOM FOREST

VS LOGISTIC REGRESSION:
- The random forest (RF) is an “ensemble learning” technique consisting of the aggregation of a large number of decision trees, resulting in a reduction of variance.
- LR is comparatively faster than RF.

VS KNN:
- Random Forest is a complex and large model whereas KNN is a relatively smaller model.
- Both need large training sets.

VS NAÏVE BAYES:
- Random Forest is a complex and large model whereas Naive Bayes is a relatively smaller model.
- Naive Bayes performs better with small training data, whereas RF needs a larger set of training data.
THE HELPING HAND (REFERENCES)

• https://www.kaggle.com/uciml/sms-spam-collection-dataset
• https://medium.com/@dannymvarghese/comparative-study-on-classic-machine-learning-algorithms-part-2-5ab58b683ec0
• https://www.edureka.co/blog/k-nearest-neighbors-algorithm/
• https://matplotlib.org/api/_
THANK YOU!
