
Classification Methods

Definition of Classification

Classification, or more specifically, statistical
classification, is a problem of identifying to
which of a set of categories (sub-populations)
a new observation belongs, on the basis of a
training set of data containing observations (or
instances) whose category membership is
known.
The Importance of Classification

The most straightforward way for a computer
program to emulate human decision-making.

A fundamental way for machine intelligence to
model the world in terms of true (1) or false (0)
decisions.
Types of Classification Methods

Unsupervised learning: grouping a set of
objects in such a way that objects in the same
group (called a cluster) are more similar (in
some sense or another) to each other than to
those in other groups (clusters).

Supervised learning: (Next Slide)

Hybrid methods that combine elements of both
(e.g., semi-supervised learning).
Supervised Learning: Definition

Given a collection of records (training set)

Each record contains a set of attributes; one of the attributes is
the class.

Find a model for the class attribute as a function
of the values of the other attributes.

Goal: previously unseen records should be
assigned a class as accurately as possible.

A test set is used to determine the accuracy of the model.
Usually, the given data set is divided into training and test
sets, with training set used to build the model and test set used
to validate it.
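As a concrete illustration of this build-then-validate workflow, here is a minimal sketch using scikit-learn (an assumed tooling choice; the iris data stands in for the records):

```python
# Minimal sketch of the supervised learning workflow described above,
# assuming scikit-learn is installed; iris stands in for the records.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
# Divide the given data set into training and test sets.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)
model = DecisionTreeClassifier().fit(X_train, y_train)  # build the model
print(model.score(X_test, y_test))  # accuracy on previously unseen records
```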
Illustrating Supervised Learning

Training Set:

Tid  Attrib1  Attrib2  Attrib3  Class
1    Yes      Large    125K     No
2    No       Medium   100K     No
3    No       Small    70K      No
4    Yes      Medium   120K     No
5    No       Large    95K      Yes
6    No       Medium   60K      No
7    Yes      Large    220K     No
8    No       Small    85K      Yes
9    No       Medium   75K      No
10   No       Small    90K      Yes

Test Set:

Tid  Attrib1  Attrib2  Attrib3  Class
11   No       Small    55K      ?
12   Yes      Medium   80K      ?
13   Yes      Large    110K     ?
14   No       Small    95K      ?
15   No       Large    67K      ?

Flow: the Training Set feeds a Learning algorithm, which learns a Model (induction); the Model is then applied to the Test Set to assign the missing class labels (deduction).
An example of learned model
(Tree figures omitted.)
Let’s choose income as the initial condition.
An example application

An emergency room in a hospital measures 17 variables
(e.g., blood pressure, age, etc.) of newly admitted
patients.

A decision is needed: whether to put a new patient in an
intensive-care unit.

Due to the high cost of ICU, those patients who may
survive less than a month are given higher priority.

Problem: predict high-risk patients and discriminate
them from low-risk patients.
Another application

A credit card company receives thousands of applications
for new cards. Each application contains information
about an applicant:

age

marital status

annual salary

outstanding debts

credit rating

etc.

Problem: decide whether an application should be
approved, i.e., classify applications into two categories,
approved and not approved.
Machine learning and our focus

Like human learning from past experiences.

A computer does not have “experiences”.

A computer system learns from data, which represent
some “past experiences” of an application domain.

Our focus: learn a target function that can be used to
predict the values of a discrete class attribute, e.g.,
approved or not-approved, and high-risk or low-risk.

The task is commonly called: supervised learning,
classification, or inductive learning.
The data and the goal

Data: A set of data records (also called
examples, instances or cases) described by

k attributes: A1, A2, … Ak.

a class: Each example is labelled with a pre-
defined class.

Goal: To learn a classification model from the
data that can be used to predict the classes of
new (future, or test) cases/instances.

An example: data (loan application)
Class attribute: Approved or not. (Data table omitted; it is
reproduced later as “the loan data”.)
An example: the learning task

Learn a classification model from the data

Use the model to classify future loan applications into

Yes (approved) and

No (not approved)

What is the class for the following case/instance?
Supervised vs. unsupervised
Learning

Supervised learning: classification is seen as supervised
learning from examples.

Supervision: The data (observations, measurements, etc.)
are labeled with pre-defined classes, as if a
“teacher” had given the classes (supervision).

Test data are classified into these classes too.

Unsupervised learning (clustering)

Class labels of the data are unknown

Given a set of data, the task is to establish the existence of
classes or clusters in the data

Supervised learning process: two steps

Learning (training): learn a model using the
training data.

Testing: test the model using unseen test
data to assess the model accuracy:

$$\text{Accuracy} = \frac{\text{Number of correct classifications}}{\text{Total number of test cases}}$$
What do we mean by learning?

Given

a data set D,

a task T, and

a performance measure M,
a computer system is said to learn from D to
perform the task T if, after learning, the system’s
performance on T improves as measured by M.

In other words, the learned model helps the
system to perform T better as compared to no
learning.

An example

Data: Loan application data

Task: Predict whether a loan should be approved or
not.

Performance measure: accuracy.

No learning: classify all future applications (test data) to
the majority class (i.e., Yes):

Accuracy = 9/15 = 60%.

We can do better than 60% with learning.
Fundamental assumption of
learning
Assumption: The distribution of training examples is
identical to the distribution of test examples
(including future unseen examples).

In practice, this assumption is often violated to a
certain degree.

Strong violations will clearly result in poor
classification accuracy.

To achieve good accuracy on the test data, training
examples must be sufficiently representative of the
test data.
Supervised Learning Methods

Classification

Frequency Table: Bayesian Methods, Decision Trees

Covariance Matrix: Linear Discriminant Analysis, Logistic Regression

Similarity Function: K Nearest Neighbor

Others: Neural Network, Support Vector Machine
Bayesian Classification Methods

The Bayesian Classification represents a
supervised learning method.


It assumes an underlying probabilistic model,
which allows us to capture uncertainty about
the model in a principled way by determining
probabilities of the outcomes. It can solve
diagnostic and predictive problems.
Bayesian Classification Methods

This classification is named after Thomas Bayes
(1702-1761), who proposed Bayes’ theorem.

Bayesian classification provides practical learning
algorithms in which prior knowledge and observed data
can be combined. It provides a useful perspective for
understanding and evaluating many learning algorithms,
calculates explicit probabilities for hypotheses, and is
robust to noise in input data.
Bayes’ Rule
Understanding Bayes’ rule (d = data, h = hypothesis):

$$P(h \mid d) = \frac{P(d \mid h)\,P(h)}{P(d)}$$

Proof: just rearrange the two ways of writing the same joint probability, $P(h \mid d)\,P(d) = P(d \mid h)\,P(h) = P(d, h)$.

Who is who in Bayes’ rule:

$P(h)$: prior belief (probability of hypothesis h before seeing any data)

$P(d \mid h)$: likelihood (probability of the data if the hypothesis h is true)

$P(d) = \sum_h P(d \mid h)\,P(h)$: data evidence (marginal probability of the data)

$P(h \mid d)$: posterior (probability of hypothesis h after having seen the data d)
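To make the pieces concrete, a tiny numeric illustration with made-up numbers (hypothetical prior and likelihoods; h = "patient has the disease", d = "test is positive"):

```python
# Hypothetical numbers, only to illustrate Bayes' rule.
P_h = 0.01                 # prior P(h)
P_d_given_h = 0.95         # likelihood P(d | h)
P_d_given_not_h = 0.05     # likelihood P(d | not h)

# Evidence P(d) = sum over hypotheses of P(d | h) P(h).
P_d = P_d_given_h * P_h + P_d_given_not_h * (1 - P_h)
posterior = P_d_given_h * P_h / P_d    # P(h | d) by Bayes' rule
print(round(posterior, 3))             # ~0.161
```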


Naïve Bayesian Classifier: Example 1

The evidence relates all attributes, without exceptions. Evidence E for a new day:

Outlook = Sunny, Temp = Cool, Humidity = High, Windy = True, Play = ?

Probability of class “yes”:

$$\Pr[yes \mid E] = \frac{\Pr[\text{Outlook}=\text{Sunny} \mid yes] \times \Pr[\text{Temp}=\text{Cool} \mid yes] \times \Pr[\text{Humidity}=\text{High} \mid yes] \times \Pr[\text{Windy}=\text{True} \mid yes] \times \Pr[yes]}{\Pr[E]} = \frac{\frac{2}{9} \times \frac{3}{9} \times \frac{3}{9} \times \frac{3}{9} \times \frac{9}{14}}{\Pr[E]}$$
Counts (Yes / No):
Outlook:     Sunny 2/3, Overcast 4/0, Rainy 3/2
Temperature: Hot 2/2, Mild 4/2, Cool 3/1
Humidity:    High 3/4, Normal 6/1
Windy:       False 6/2, True 3/3
Play:        9 / 5

Conditional probabilities (Yes / No):
Outlook:     Sunny 2/9, 3/5; Overcast 4/9, 0/5; Rainy 3/9, 2/5
Temperature: Hot 2/9, 2/5; Mild 4/9, 2/5; Cool 3/9, 1/5
Humidity:    High 3/9, 4/5; Normal 6/9, 1/5
Windy:       False 6/9, 2/5; True 3/9, 3/5
Play:        9/14, 5/14

The 14 training examples:

Outlook   Temp  Humidity  Windy  Play
Sunny     Hot   High      False  No
Sunny     Hot   High      True   No
Overcast  Hot   High      False  Yes
Rainy     Mild  High      False  Yes
Rainy     Cool  Normal    False  Yes
Rainy     Cool  Normal    True   No
Overcast  Cool  Normal    True   Yes
Sunny     Mild  High      False  No
Sunny     Cool  Normal    False  Yes
Rainy     Mild  Normal    False  Yes
Sunny     Mild  Normal    True   Yes
Overcast  Mild  High      True   Yes
Overcast  Hot   Normal    False  Yes
Rainy     Mild  High      True   No
Compute Prediction For New Day

Using the probabilities above, compute the prediction for a new day:

Outlook  Temp  Humidity  Windy  Play
Sunny    Cool  High      True   ?

Likelihood of the two classes:

For “yes” = 2/9 × 3/9 × 3/9 × 3/9 × 9/14 = 0.0053
For “no” = 3/5 × 1/5 × 4/5 × 3/5 × 5/14 = 0.0206

Conversion into a probability by normalization:

P(“yes”) = 0.0053 / (0.0053 + 0.0206) = 0.205
P(“no”) = 0.0206 / (0.0053 + 0.0206) = 0.795
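The same numbers can be reproduced in code. A minimal from-scratch Naïve Bayes sketch over the 14 training examples above (no smoothing, matching the hand calculation):

```python
# From-scratch Naive Bayes on the weather data above.
from collections import Counter

data = [  # ((Outlook, Temp, Humidity, Windy), Play)
    (("Sunny", "Hot", "High", False), "No"),
    (("Sunny", "Hot", "High", True), "No"),
    (("Overcast", "Hot", "High", False), "Yes"),
    (("Rainy", "Mild", "High", False), "Yes"),
    (("Rainy", "Cool", "Normal", False), "Yes"),
    (("Rainy", "Cool", "Normal", True), "No"),
    (("Overcast", "Cool", "Normal", True), "Yes"),
    (("Sunny", "Mild", "High", False), "No"),
    (("Sunny", "Cool", "Normal", False), "Yes"),
    (("Rainy", "Mild", "Normal", False), "Yes"),
    (("Sunny", "Mild", "Normal", True), "Yes"),
    (("Overcast", "Mild", "High", True), "Yes"),
    (("Overcast", "Hot", "Normal", False), "Yes"),
    (("Rainy", "Mild", "High", True), "No"),
]

class_counts = Counter(label for _, label in data)
cond_counts = Counter()          # (attribute index, value, label) -> count
for features, label in data:
    for i, v in enumerate(features):
        cond_counts[(i, v, label)] += 1

def score(features, label):
    """Unnormalized P(label) * product over i of P(feature_i | label)."""
    s = class_counts[label] / len(data)
    for i, v in enumerate(features):
        s *= cond_counts[(i, v, label)] / class_counts[label]
    return s

new_day = ("Sunny", "Cool", "High", True)
scores = {c: score(new_day, c) for c in class_counts}
total = sum(scores.values())
for c, s in scores.items():
    print(c, round(s / total, 3))   # Yes ~0.205, No ~0.795
```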
Naïve Bayesian Classifier: Example 2

Classes: C1: buys_computer = ‘yes’; C2: buys_computer = ‘no’
Data sample: X = (age <= 30, income = medium, student = yes, credit_rating = fair)

Training dataset:

age    income  student  credit_rating  buys_computer
<=30   high    no       fair           no
<=30   high    no       excellent      no
31…40  high    no       fair           yes
>40    medium  no       fair           yes
>40    low     yes      fair           yes
>40    low     yes      excellent      no
31…40  low     yes      excellent      yes
<=30   medium  no       fair           no
<=30   low     yes      fair           yes
>40    medium  yes      fair           yes
<=30   medium  yes      excellent      yes
31…40  medium  no       excellent      yes
31…40  high    yes      fair           yes
>40    medium  no       excellent      no
Naïve Bayesian Classifier: Example 2

Compute P(X|Ci) for each class:

P(age = “<=30” | buys_computer = “yes”) = 2/9 = 0.222
P(age = “<=30” | buys_computer = “no”) = 3/5 = 0.6
P(income = “medium” | buys_computer = “yes”) = 4/9 = 0.444
P(income = “medium” | buys_computer = “no”) = 2/5 = 0.4
P(student = “yes” | buys_computer = “yes”) = 6/9 = 0.667
P(student = “yes” | buys_computer = “no”) = 1/5 = 0.2
P(credit_rating = “fair” | buys_computer = “yes”) = 6/9 = 0.667
P(credit_rating = “fair” | buys_computer = “no”) = 2/5 = 0.4

X = (age <= 30, income = medium, student = yes, credit_rating = fair)

P(X|Ci):
P(X | buys_computer = “yes”) = 0.222 × 0.444 × 0.667 × 0.667 = 0.044
P(X | buys_computer = “no”) = 0.6 × 0.4 × 0.2 × 0.4 = 0.019

P(X|Ci) × P(Ci):
P(X | buys_computer = “yes”) × P(buys_computer = “yes”) = 0.028
P(X | buys_computer = “no”) × P(buys_computer = “no”) = 0.007

X belongs to class “buys_computer = yes”.
Naïve Bayesian Classifier:
Advantages and Disadvantages

Advantages:

Easy to implement.

Good results obtained in most of the cases.

Disadvantages:

Assumption: class conditional independence; therefore, loss of accuracy.

Practically, dependencies exist among variables.

E.g., hospital patients’ profile: age, family history, etc.;
symptoms: fever, cough, etc.; disease: lung cancer, diabetes, etc.

Dependencies among these cannot be modeled by the Naïve Bayesian
Classifier.

How to deal with these dependencies?

Bayesian Belief Networks.
Supervised Learning Methods

Classification

Frequency Table: Bayesian Methods, Decision Trees

Covariance Matrix: Linear Discriminant Analysis, Logistic Regression

Similarity Function: K Nearest Neighbor

Others: Neural Network, Support Vector Machine
Example of a Decision Tree

Training data (Refund and Marital Status are categorical, Taxable Income is continuous, Cheat is the class):

Tid  Refund  Marital Status  Taxable Income  Cheat
1    Yes     Single          125K            No
2    No      Married         100K            No
3    No      Single          70K             No
4    Yes     Married         120K            No
5    No      Divorced        95K             Yes
6    No      Married         60K             No
7    Yes     Divorced        220K            No
8    No      Single          85K             Yes
9    No      Married         75K             No
10   No      Single          90K             Yes

Model: decision tree (splitting attributes in parentheses):

Refund?
  Yes -> NO
  No  -> MarSt?
           Married          -> NO
           Single, Divorced -> TaxInc?
                                 < 80K  -> NO
                                 >= 80K -> YES


Another Example of Decision Tree

Built from the same training data:

MarSt?
  Married          -> NO
  Single, Divorced -> Refund?
                        Yes -> NO
                        No  -> TaxInc?
                                 < 80K  -> NO
                                 >= 80K -> YES

There could be more than one tree that fits the same data!
Decision Tree Classification Task

The Training Set (Tid 1–10, as in the earlier illustration) is fed to a
Tree Induction algorithm, which learns a Model: a Decision Tree
(induction). The Model is then applied to the Test Set (Tid 11–15) to
assign class labels (deduction).
Apply Model to Test Data

Test data: Refund = No, Marital Status = Married, Taxable Income = 80K, Cheat = ?

Start from the root of the tree and follow the branches that match the test record:

Refund? = No      -> go to MarSt?
MarSt?  = Married -> reach leaf NO

Assign Cheat to “No”.
Decision Tree Induction

Many Algorithms:

Hunt’s Algorithm (one of the earliest)

CART

ID3, C4.5

SLIQ, SPRINT
Hunt’s Algorithm

(Illustrated on the Tid 1–10 loan data above.)

Let Dt be the set of training records that reach a node t.

General Procedure:

If Dt contains records that all belong to the same class yt, then t is a
leaf node labeled as yt.

If Dt is an empty set, then t is a leaf node labeled by the default
class, yd.

If Dt contains records that belong to more than one class, use an
attribute test to split the data into smaller subsets; recursively apply
the procedure to each subset.
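A minimal sketch of this recursive procedure, assuming categorical attributes and records given as (features, label) pairs; the attribute choice here is naive (first available) rather than the impurity-based choice real algorithms make:

```python
# Sketch of Hunt's recursive procedure (categorical attributes only).
from collections import Counter

def hunt(records, attributes, default_class):
    if not records:                      # empty Dt -> leaf with default class
        return default_class
    labels = [label for _, label in records]
    if len(set(labels)) == 1:            # pure node -> leaf labeled y_t
        return labels[0]
    if not attributes:                   # nothing left to split on -> majority
        return Counter(labels).most_common(1)[0][0]
    attr = attributes[0]                 # naive choice; real algorithms pick
                                         # the best split (e.g., by info gain)
    majority = Counter(labels).most_common(1)[0][0]
    tree = {attr: {}}
    for v in {feats[attr] for feats, _ in records}:
        subset = [(f, l) for f, l in records if f[attr] == v]
        tree[attr][v] = hunt(subset, attributes[1:], majority)
    return tree
```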
Hunt’s Algorithm (tree growth on the loan data)

The figure shows the tree being grown in stages:

(a) A single leaf: Don’t Cheat.
(b) Split on Refund: Yes -> Don’t Cheat; No -> Don’t Cheat.
(c) The No branch is split on Marital Status: Single, Divorced -> Cheat; Married -> Don’t Cheat.
(d) The Single, Divorced branch is refined on Taxable Income: < 80K -> Don’t Cheat; >= 80K -> Cheat.
Tree Induction

Greedy strategy:

Split the records based on an attribute test that
optimizes a certain criterion.

Issues:

Determine how to split the records.

How to specify the attribute test condition?

How to determine the best split?

Determine when to stop splitting.
How to Specify Test Condition?

Depends on attribute types

Nominal

Ordinal

Continuous


Depends on number of ways to split

2-way split

Multi-way split
Splitting Based on Nominal Attributes

Multi-way split: use as many partitions as distinct values, e.g.:

CarType? -> Family | Sports | Luxury

Binary split: divides values into two subsets; need to find the
optimal partitioning, e.g.:

CarType? -> {Sports, Luxury} | {Family}    OR    CarType? -> {Family, Luxury} | {Sports}
Splitting Based on Continuous
Attributes

Different ways of handling

Discretization to form an ordinal categorical attribute

Static – discretize once at the beginning

Dynamic – ranges can be found by equal interval
bucketing, equal frequency bucketing
(percentiles), or clustering.


Binary decision: (A < v) or (A >= v)

Consider all possible splits and find the best cut.

Can be more compute intensive.
Splitting Based on Continuous Attributes

(i) Binary split: Taxable Income > 80K? -> Yes | No

(ii) Multi-way split: Taxable Income? -> < 10K | [10K, 25K) | [25K, 50K) | [50K, 80K) | > 80K
How to determine the Best Split

Before splitting: 10 records of class 0, 10 records of class 1.
Three candidate test conditions:

Own Car?     Yes: C0: 6, C1: 4       No: C0: 4, C1: 6
Car Type?    Family: C0: 1, C1: 3    Sports: C0: 8, C1: 0    Luxury: C0: 1, C1: 7
Student ID?  c1 … c20: each node contains a single record (C0: 1, C1: 0 or C0: 0, C1: 1)

Which test condition is the best?
The loan data (reproduced)
Class attribute: Approved or not. (Data table omitted.)
A decision tree from the loan data
 Decision nodes and leaf nodes (classes). (Tree figure omitted.)
Use the decision tree

Following the tree, the new case is classified as: No. (Figure omitted.)
Is the decision tree unique?
 No. Here is a simpler tree.
 We want a smaller and more accurate tree:
easier to understand, and it often performs better.

 Finding the best tree is NP-hard.
 All current tree-building algorithms are
heuristic algorithms.
From a decision tree to a set of rules
 A decision tree can be converted to a set of rules.
 Each path from the root to a leaf is a rule.
Algorithm for decision tree
learning

Basic algorithm (a greedy divide-and-conquer algorithm)

Assume attributes are categorical now (continuous attributes can be
handled too)

Tree is constructed in a top-down recursive manner

At start, all the training examples are at the root

Examples are partitioned recursively based on selected attributes

Attributes are selected on the basis of an impurity function (e.g.,
information gain)

Conditions for stopping partitioning

All examples for a given node belong to the same class

There are no remaining attributes for further partitioning – majority
class is the leaf

There are no examples left
Decision tree learning algorithm
(Pseudo-code figure omitted.)
Choose an attribute to partition
data

The key to building a decision tree - which
attribute to choose in order to branch.

The objective is to reduce impurity or
uncertainty in data as much as possible.

A subset of data is pure if all instances belong to the
same class.

The heuristic in C4.5 is to choose the attribute
with the maximum Information Gain or Gain
Ratio based on information theory.
The loan data (reproduced)
Class attribute: Approved or not. (Data table omitted.)
Two possible roots, which is better?

 Fig. (B) seems to be better. (Figure omitted.)
Information theory

Information theory provides a mathematical basis for
measuring the information content.

To understand the notion of information, think about
it as providing the answer to a question, for example,
whether a coin will come up heads.

If one already has a good guess about the answer, then the
actual answer is less informative.

If one already knows that the coin is rigged so that it will
come up heads with probability 0.99, then a message
(advance information) about the actual outcome of a flip
is worth less than it would be for an honest coin (50-50).
Information theory (cont …)

For a fair (honest) coin, you have no information,
and you are willing to pay more (say, in terms of
$) for advance information: the less you know, the
more valuable the information.

Information theory uses this same intuition, but
instead of measuring the value of information in
dollars, it measures information content in bits.

One bit of information is enough to answer a
yes/no question about which one has no idea, such
as the flip of a fair coin.
Information theory: Entropy measure

The entropy formula:

$$\text{entropy}(D) = -\sum_{j=1}^{|C|} \Pr(c_j) \log_2 \Pr(c_j), \qquad \sum_{j=1}^{|C|} \Pr(c_j) = 1$$

Pr(cj) is the probability of class cj in data set D.

We use entropy as a measure of impurity or disorder of
data set D. (Or, a measure of information in a tree.)
Entropy measure: let us get a feeling

 As the data become purer and purer, the entropy value
becomes smaller and smaller. This is useful to us!
(Worked examples omitted.)
Information gain

Given a set of examples D, we first compute its entropy.

If we make attribute Ai, with v values, the root of the
current tree, this will partition D into v subsets D1, D2, …,
Dv. The expected entropy if Ai is used as the current root:

$$\text{entropy}_{A_i}(D) = \sum_{j=1}^{v} \frac{|D_j|}{|D|} \times \text{entropy}(D_j)$$
Information gain (cont …)

Information gained by selecting attribute Ai to branch
or to partition the data is

$$\text{gain}(D, A_i) = \text{entropy}(D) - \text{entropy}_{A_i}(D)$$

We choose the attribute with the highest gain to
branch/split the current tree.
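These two formulas translate directly into code. A minimal sketch, for records given as (features, label) pairs with categorical attributes:

```python
from collections import Counter
from math import log2

def entropy(records):
    """entropy(D) = -sum_j Pr(c_j) * log2(Pr(c_j))."""
    labels = [label for _, label in records]
    n = len(labels)
    return -sum((c / n) * log2(c / n) for c in Counter(labels).values())

def info_gain(records, attr):
    """gain(D, Ai) = entropy(D) - expected entropy after splitting on attr."""
    n = len(records)
    subsets = {}
    for feats, label in records:
        subsets.setdefault(feats[attr], []).append((feats, label))
    expected = sum(len(s) / n * entropy(s) for s in subsets.values())
    return entropy(records) - expected
```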
An example (the loan data):

$$\text{entropy}(D) = -\frac{6}{15}\log_2\frac{6}{15} - \frac{9}{15}\log_2\frac{9}{15} = 0.971$$

$$\text{entropy}_{Own\_house}(D) = \frac{6}{15}\,\text{entropy}(D_1) + \frac{9}{15}\,\text{entropy}(D_2) = \frac{6}{15}\times 0 + \frac{9}{15}\times 0.918 = 0.551$$

$$\text{entropy}_{Age}(D) = \frac{5}{15}\,\text{entropy}(D_1) + \frac{5}{15}\,\text{entropy}(D_2) + \frac{5}{15}\,\text{entropy}(D_3) = \frac{5}{15}\times 0.971 + \frac{5}{15}\times 0.971 + \frac{5}{15}\times 0.722 = 0.888$$

Age     Yes  No  entropy(Di)
young   2    3   0.971
middle  3    2   0.971
old     4    1   0.722

 Own_house is the best choice for the root.
We build the final tree

 We can use the information gain ratio to evaluate the
impurity as well (see the handout). (Tree figure omitted.)
QUIZ

1. Naive Bayes Method

2. Decision Tree Method
Handling continuous attributes

Handle a continuous attribute by splitting it into two
intervals (can be more) at each node.

How to find the best threshold to divide?

Use information gain or gain ratio again.

Sort all the values of a continuous attribute in
increasing order {v1, v2, …, vr}.

One possible threshold lies between each pair of adjacent
values vi and vi+1. Try all possible thresholds and
find the one that maximizes the gain (or gain
ratio), as sketched below.
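A minimal sketch of this threshold search, reusing the entropy() helper from the earlier sketch (a candidate threshold is placed midway between each pair of distinct adjacent values):

```python
def best_threshold(records, attr):
    """Return (threshold, gain) maximizing information gain for attr."""
    values = sorted(feats[attr] for feats, _ in records)
    n = len(records)
    base = entropy(records)
    best = (None, -1.0)
    for i in range(n - 1):
        if values[i] == values[i + 1]:
            continue                       # no threshold between equal values
        t = (values[i] + values[i + 1]) / 2
        left = [(f, l) for f, l in records if f[attr] <= t]
        right = [(f, l) for f, l in records if f[attr] > t]
        gain = base - (len(left) / n) * entropy(left) \
                    - (len(right) / n) * entropy(right)
        if gain > best[1]:
            best = (t, gain)
    return best
```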
An example in a continuous space
(Figure omitted.)
Avoid overfitting in classification

Overfitting: a tree may overfit the training data.

Good accuracy on training data but poor accuracy on test data.

Symptoms: tree too deep with too many branches, some of which may reflect
anomalies due to noise or outliers.

Two approaches to avoid overfitting:

Pre-pruning: halt tree construction early.

Difficult to decide, because we do not know what may happen
subsequently if we keep growing the tree.

Post-pruning: remove branches or sub-trees from a “fully grown” tree.

This method is commonly used. C4.5 uses a statistical method to
estimate the errors at each node for pruning.

A validation set may be used for pruning as well.
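C4.5's error-based pruning is not in scikit-learn, but the post-pruning idea can be illustrated with CART's cost-complexity pruning, choosing the pruning strength on a validation set (a sketch under those assumptions; synthetic data stands in for a real training set):

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=500, random_state=0)  # stand-in data
X_train, X_val, y_train, y_val = train_test_split(
    X, y, test_size=0.3, random_state=0)

# Candidate pruning strengths for the fully grown tree.
alphas = DecisionTreeClassifier(random_state=0) \
    .cost_complexity_pruning_path(X_train, y_train).ccp_alphas

# Keep the pruned tree that does best on the validation set.
best = max(
    (DecisionTreeClassifier(random_state=0, ccp_alpha=a).fit(X_train, y_train)
     for a in alphas),
    key=lambda m: m.score(X_val, y_val),
)
print(best.get_depth(), best.score(X_val, y_val))
```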
An example: a tree likely to overfit the data
(Figure omitted.)
Underfitting and Overfitting (Example)

500 circular and 500 triangular data points.

Circular points: $0.5 \le \sqrt{x_1^2 + x_2^2} \le 1$

Triangular points: $\sqrt{x_1^2 + x_2^2} > 1$ or $\sqrt{x_1^2 + x_2^2} < 0.5$
Underfitting and Overfitting

(Training/test error vs. model complexity plot omitted; the region of
rising test error is labeled “Overfitting”.)

Underfitting: when the model is too simple, both training and test errors are large.
Overfitting due to Noise

The decision boundary is distorted by the noise point. (Figure omitted.)
Overfitting due to Insufficient Examples

Lack of data points in the lower half of the diagram makes it difficult
to predict correctly the class labels of that region:

an insufficient number of training records in the region causes the
decision tree to predict the test examples using other training
records that are irrelevant to the classification task.
Notes on Overfitting

Overfitting results in decision trees that are
more complex than necessary


Training error no longer provides a good
estimate of how well the tree will perform on
previously unseen records


Need new ways for estimating errors
Evaluating classification methods

Predictive accuracy.

Efficiency:

time to construct the model

time to use the model

Robustness: handling noise and missing values.

Scalability: efficiency in disk-resident databases.

Interpretability: understandability of, and insight provided by, the model.

Compactness of the model: size of the tree, or the number of rules.


Evaluation methods

Holdout set: the available data set D is divided into two
disjoint subsets:

the training set Dtrain (for learning a model)

the test set Dtest (for testing the model)

Important: the training set should not be used in testing, and
the test set should not be used in learning.

The unseen test set provides an unbiased estimate of accuracy.

The test set is also called the holdout set. (The examples in
the original data set D are all labeled with classes.)

This method is mainly used when the data set D is large.


Evaluation methods (cont…)

n-fold cross-validation: the available data is partitioned
into n equal-size disjoint subsets.

Use each subset as the test set and combine the remaining n-1
subsets as the training set to learn a classifier.

The procedure is run n times, which gives n accuracies.

The final estimated accuracy of learning is the average of
the n accuracies.

10-fold and 5-fold cross-validation are commonly used.

This method is used when the available data is not large.
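A minimal n-fold cross-validation sketch with scikit-learn (n = 10 here; the iris data stands in for the available data):

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
# Each of the 10 folds is used once as the test set.
scores = cross_val_score(DecisionTreeClassifier(random_state=0), X, y, cv=10)
print(scores.mean())   # final estimated accuracy = average of the n accuracies
```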


Evaluation methods (cont…)

Leave-one-out cross-validation: this method
is used when the data set is very small.

It is a special case of cross-validation.

Each fold of the cross-validation has only a
single test example, and all the rest of the data
is used in training.

If the original data has m examples, this is
m-fold cross-validation.


Evaluation methods (cont…)

Validation set: the available data is divided into three
subsets:

a training set,

a validation set, and

a test set.

A validation set is used frequently for estimating
parameters in learning algorithms.

In such cases, the values that give the best accuracy on
the validation set are used as the final parameter values.

Cross-validation can be used for parameter estimation as
well.


Classification measures

Accuracy is only one measure (error = 1 − accuracy).

Accuracy is not suitable in some applications:

In text mining, we may only be interested in the documents of
a particular topic, which are only a small portion of a big
document collection.

In classification involving skewed or highly imbalanced data,
e.g., network intrusion and financial fraud detection, we are
interested only in the minority class.

High accuracy does not mean any intrusion is detected:
e.g., with 1% intrusion, one can achieve 99% accuracy by doing
nothing.

The class of interest is commonly called the positive class,
and the rest are negative classes.
Precision and recall measures

Used in information retrieval and text classification.

We use a confusion matrix to introduce them:

                  Classified positive   Classified negative
Actual positive   TP                    FN
Actual negative   FP                    TN

Precision and recall measures (cont…)

$$p = \frac{TP}{TP + FP}, \qquad r = \frac{TP}{TP + FN}$$

 Precision p is the number of correctly classified
positive examples divided by the total number of
examples that are classified as positive.
 Recall r is the number of correctly classified positive
examples divided by the total number of actual
positive examples in the test set.
An example

This confusion matrix gives

precision p = 100% and

recall r = 1%,

because we only classified one positive example correctly and
no negative examples wrongly.

Note: precision and recall only measure classification on
the positive class.


F1-value (also called F1-score)

It is hard to compare two classifiers using two measures. The F1-score
combines precision and recall into one measure:

$$F_1 = \frac{2pr}{p + r}$$

The harmonic mean of two numbers tends to be closer to the
smaller of the two.

For the F1-value to be large, both p and r must be large.
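A small sketch computing p, r, and F1 from confusion-matrix counts (hypothetical counts chosen to match the earlier 100%-precision, 1%-recall example):

```python
TP, FP, FN = 1, 0, 99      # one true positive, no false positives, 99 misses
p = TP / (TP + FP)         # precision = 1.0 (100%)
r = TP / (TP + FN)         # recall = 0.01 (1%)
f1 = 2 * p * r / (p + r)   # harmonic mean, dragged toward the smaller value
print(p, r, round(f1, 3))  # 1.0 0.01 0.02
```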


Supervised Learning Methods

Classification

Frequency Table: Bayesian Methods, Decision Trees

Covariance Matrix: Linear Discriminant Analysis, Logistic Regression

Similarity Function: K Nearest Neighbor

Others: Neural Network, Support Vector Machine
K-Nearest-Neighbors Algorithm
and Its Application
K-Nearest-Neighbors Algorithm

K nearest neighbors (KNN) is a simple
algorithm that stores all available cases and
classifies new cases based on a similarity
measure (distance function).

KNN has been used in statistical estimation
and pattern recognition since the 1970s.
K-Nearest-Neighbors Algorithm

A case is classified by a majority vote of its
neighbors: the case is assigned to the class most
common among its K nearest neighbors, as
measured by a distance function.

If K = 1, then the case is simply assigned to the
class of its nearest neighbor.
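A minimal sketch of the algorithm: Euclidean distance plus a majority vote among the K nearest training cases (the helper name is my own):

```python
from collections import Counter
from math import dist  # Euclidean distance (Python 3.8+)

def knn_classify(train, query, k=3):
    """train: list of (point, label) pairs; query: a point (tuple of numbers)."""
    neighbors = sorted(train, key=lambda case: dist(case[0], query))[:k]
    return Counter(label for _, label in neighbors).most_common(1)[0][0]

# Example: three training cases, K = 3.
print(knn_classify([((0, 0), "a"), ((1, 0), "a"), ((5, 5), "o")], (0.5, 0.2)))
```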
Distance Function Measurements
Hamming Distance

For categorical variables, the Hamming distance can
be used.
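A one-line sketch: the Hamming distance counts the positions at which two categorical feature vectors differ:

```python
def hamming(x, y):
    """Number of attribute positions where the two cases differ."""
    return sum(a != b for a, b in zip(x, y))

print(hamming(("Sunny", "Cool", "High"), ("Sunny", "Mild", "High")))  # 1
```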
K-Nearest-Neighbors
What is the most likely label for c?

Solution: look for the K nearest neighbors of c and
take the majority label as c’s label.

Suppose K = 3: the 3 nearest points to c are a, a and o;
therefore, the most likely label for c is a.
(Scatter-plot figures omitted.)
Voronoi Diagram
(Figures omitted.)
Remarks
Choosing the most suitable K
Remarks

Choosing the most suitable K. (Figure omitted.)

Normalization
(Worked figures omitted.)
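Because KNN is distance-based, attributes on large numeric scales can dominate the distance, which is why normalization matters. A minimal min-max sketch rescaling each attribute to [0, 1] (one common choice; the original slides' figures are not recoverable):

```python
def min_max_normalize(points):
    """Rescale each attribute (column) of a list of numeric tuples to [0, 1]."""
    lows = [min(col) for col in zip(*points)]
    highs = [max(col) for col in zip(*points)]
    return [
        tuple((v - lo) / (hi - lo) if hi > lo else 0.0
              for v, lo, hi in zip(p, lows, highs))
        for p in points
    ]

print(min_max_normalize([(1, 100), (2, 300), (3, 200)]))
# [(0.0, 0.0), (0.5, 1.0), (1.0, 0.5)]
```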
