Naïve Bayes Classifier

Ke Chen

http://intranet.cs.man.ac.uk/mlo/comp20411/

Extended by Longin Jan Latecki


latecki@temple.edu

COMP20411 Machine Learning


Outline
• Background
• Probability Basics
• Probabilistic Classification
• Naïve Bayes
• Example: Play Tennis
• Relevant Issues
• Conclusions



Background
• There are three approaches to building a classifier
a) Model a classification rule directly
   Examples: k-NN, decision trees, perceptron, SVM
b) Model the probability of class membership given the input data
   Example: multi-layer perceptron with the cross-entropy cost
c) Build a probabilistic model of the data within each class
   Examples: naïve Bayes, model-based classifiers
• a) and b) are examples of discriminative classification
• c) is an example of generative classification
• b) and c) are both examples of probabilistic classification



Probability Basics
• Prior, conditional and joint probability
  – Prior probability: P(X)
  – Conditional probability: P(X1|X2), P(X2|X1)
  – Joint probability: X = (X1, X2), P(X) = P(X1, X2)
  – Relationship: P(X1, X2) = P(X2|X1)P(X1) = P(X1|X2)P(X2)
  – Independence: P(X2|X1) = P(X2), P(X1|X2) = P(X1), P(X1, X2) = P(X1)P(X2)
• Bayesian Rule

  P(C|X) = P(X|C)P(C) / P(X),  i.e.  Posterior = (Likelihood × Prior) / Evidence
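As a quick numeric illustration of the Bayesian rule (a minimal sketch with made-up numbers, not taken from the slides):

    # Bayes rule: P(C|X) = P(X|C) * P(C) / P(X), for two classes c1 and c2.
    prior = {"c1": 0.3, "c2": 0.7}        # P(C), illustrative values
    likelihood = {"c1": 0.8, "c2": 0.1}   # P(X = x | C), illustrative values

    # Evidence P(X = x) by total probability: sum over classes of P(x|c) P(c)
    evidence = sum(likelihood[c] * prior[c] for c in prior)

    posterior = {c: likelihood[c] * prior[c] / evidence for c in prior}
    print(posterior)  # {'c1': 0.774..., 'c2': 0.225...}; the posteriors sum to 1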


[Example slides by Dieter Fox omitted]
Probabilistic Classification
• Establishing a probabilistic model for classification
  – Discriminative model: P(C|X), C = c1, ..., cL, X = (X1, ..., Xn)
  – Generative model: P(X|C), C = c1, ..., cL, X = (X1, ..., Xn)
• MAP classification rule
  – MAP: Maximum A Posteriori
  – Assign x to c* if P(C = c*|X = x) > P(C = c|X = x) for all c ≠ c*, c = c1, ..., cL
• Generative classification with the MAP rule
  – Apply the Bayesian rule to convert: P(C|X) = P(X|C)P(C) / P(X) ∝ P(X|C)P(C)
Feature Histograms

[Figure: class-conditional feature histograms P(x) for two classes C1 and C2 along a feature x]

Slide by Stephen Marsland

Posterior Probability

[Figure: the resulting posterior probability P(C|x) as a function of x]

Slide by Stephen Marsland
Naïve Bayes
• Bayes classification
  P(C|X) ∝ P(X|C)P(C) = P(X1, ..., Xn|C)P(C)
  Difficulty: learning the joint probability P(X1, ..., Xn|C)
• Naïve Bayes classification
  – Assumption: all input attributes are conditionally independent given the class
    P(X1, X2, ..., Xn|C) = P(X1|X2, ..., Xn; C) P(X2, ..., Xn|C)
                         = P(X1|C) P(X2, ..., Xn|C)
                         = P(X1|C) P(X2|C) ... P(Xn|C)
  – MAP classification rule: assign x = (x1, ..., xn) to c* if
    [P(x1|c*) ... P(xn|c*)] P(c*) > [P(x1|c) ... P(xn|c)] P(c) for all c ≠ c*, c = c1, ..., cL
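One practical aside (an addition, not from the slides): the product of many per-attribute probabilities can underflow floating-point arithmetic, so implementations usually compare sums of log-probabilities instead, which is equivalent because log is monotonic:

    import math

    # Comparing sum(log p) is equivalent to comparing the product of the p's,
    # but avoids underflow when there are many attributes.
    probs = [2/9, 3/9, 3/9, 3/9, 9/14]           # illustrative per-attribute terms
    log_score = sum(math.log(p) for p in probs)  # ≈ log(0.0053)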


Naïve Bayes
• Naïve Bayes Algorithm (for discrete input attributes)
  – Learning Phase: Given a training set S,
      For each target value ci (ci = c1, ..., cL):
        P̂(C = ci) ← estimate P(C = ci) from the examples in S
      For every attribute value ajk of each attribute xj (j = 1, ..., n; k = 1, ..., Nj):
        P̂(Xj = ajk|C = ci) ← estimate P(Xj = ajk|C = ci) from the examples in S
      Output: conditional probability tables; for each xj, Nj × L entries
  – Test Phase: Given an unknown instance X’ = (a’1, ..., a’n),
      look up the tables to assign the label c* to X’ if
      [P̂(a’1|c*) ... P̂(a’n|c*)] P̂(c*) > [P̂(a’1|c) ... P̂(a’n|c)] P̂(c) for all c ≠ c*, c = c1, ..., cL
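A minimal sketch of these two phases in Python (the function names, the dict-based data layout, and the lack of smoothing are illustrative choices, not part of the slides):

    from collections import Counter, defaultdict

    def learn(examples):
        """Learning phase. examples: list of (attributes_dict, label) pairs.
        Returns priors P(C=ci) and conditional tables P(Xj=ajk|C=ci)."""
        class_counts = Counter(label for _, label in examples)
        prior = {c: n / len(examples) for c, n in class_counts.items()}

        pair_counts = Counter()
        for attrs, label in examples:
            for attr, value in attrs.items():
                pair_counts[(attr, value, label)] += 1

        cond = defaultdict(float)  # (attribute, value, class) -> estimate
        for (attr, value, label), n in pair_counts.items():
            cond[(attr, value, label)] = n / class_counts[label]
        return prior, cond

    def classify(x, prior, cond):
        """Test phase: pick the class maximizing P(a1|c)...P(an|c)P(c)."""
        def score(c):
            s = prior[c]
            for attr, value in x.items():
                s *= cond[(attr, value, c)]  # 0.0 for a never-seen pair
            return s
        return max(prior, key=score)

Applied to the Play-Tennis data on the next slides, this reproduces the conditional probability tables and the final "No" decision shown there.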


Example
• Example: Play Tennis

[Table: the 14-example Play-Tennis training set (Outlook, Temperature, Humidity, Wind; 9 Yes / 5 No) omitted]


Example
• Learning Phase

  Outlook    Play=Yes  Play=No        Temperature  Play=Yes  Play=No
  Sunny        2/9       3/5          Hot            2/9       2/5
  Overcast     4/9       0/5          Mild           4/9       2/5
  Rain         3/9       2/5          Cool           3/9       1/5

  Humidity   Play=Yes  Play=No        Wind         Play=Yes  Play=No
  High         3/9       4/5          Strong         3/9       3/5
  Normal       6/9       1/5          Weak           6/9       2/5

  P(Play=Yes) = 9/14    P(Play=No) = 5/14


Example
• Test Phase
  – Given a new instance,
    x’ = (Outlook=Sunny, Temperature=Cool, Humidity=High, Wind=Strong)
  – Look up the tables
    P(Outlook=Sunny|Play=Yes) = 2/9        P(Outlook=Sunny|Play=No) = 3/5
    P(Temperature=Cool|Play=Yes) = 3/9     P(Temperature=Cool|Play=No) = 1/5
    P(Humidity=High|Play=Yes) = 3/9        P(Humidity=High|Play=No) = 4/5
    P(Wind=Strong|Play=Yes) = 3/9          P(Wind=Strong|Play=No) = 3/5
    P(Play=Yes) = 9/14                     P(Play=No) = 5/14
  – MAP rule
    P(Yes|x’) ∝ [P(Sunny|Yes)P(Cool|Yes)P(High|Yes)P(Strong|Yes)]P(Play=Yes) ≈ 0.0053
    P(No|x’) ∝ [P(Sunny|No)P(Cool|No)P(High|No)P(Strong|No)]P(Play=No) ≈ 0.0206

  Since P(Yes|x’) < P(No|x’), we label x’ as “No”.
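As a quick numeric check of this slide (a sketch, using the table values above):

    # Unnormalized class scores for x' = (Sunny, Cool, High, Strong)
    yes = (2/9) * (3/9) * (3/9) * (3/9) * (9/14)   # ≈ 0.0053
    no  = (3/5) * (1/5) * (4/5) * (3/5) * (5/14)   # ≈ 0.0206
    print("Yes" if yes > no else "No")             # -> "No"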


Example (2)
• Suppose we want to determine whether an object falls into the category "selected for housing" or not, using the Naive Bayes Classifier algorithm. To decide whether an area will be chosen as a site for building housing, 10 rules have been collected.
• Four attributes are used:
  – land price per square meter (C1),
  – distance of the area from the city center (C2),
  – availability of public transport in the area (C3), and
  – the decision to select the area as a housing site (C4).
Example (2)
• Land Site Selection

[Table: the 10 training records omitted]


Example (2)
• Learning Phase

[Tables omitted: the probability of occurrence of each value of the Land Price attribute (C1), the Distance from the City Center attribute (C2), the Public Transport Available attribute (C3), and the Selected for Housing attribute (C4)]


Example (2)
• Test Phase
  – Given a new instance,
    x’ = (Harga=Mahal, Jarak=Sedang, Angkutan=Ada)
    (i.e., Price=Expensive, Distance=Medium, Public Transport=Available)
  – Look up the tables
    P(Harga=Mahal|Beli=Yes) = 1/5       P(Harga=Mahal|Beli=No) = 3/5
    P(Jarak=Sedang|Beli=Yes) = 2/5      P(Jarak=Sedang|Beli=No) = 1/5
    P(Angkutan=Ada|Beli=Yes) = 1/5      P(Angkutan=Ada|Beli=No) = 3/5
    P(Beli=Yes) = 5/10                  P(Beli=No) = 5/10
  – MAP rule
    P(Yes|x’) ∝ [P(Mahal|Yes)P(Sedang|Yes)P(Ada|Yes)]P(Beli=Yes) = 1/5 × 2/5 × 1/5 × 5/10 = 1/125 = 0.008
    P(No|x’) ∝ [P(Mahal|No)P(Sedang|No)P(Ada|No)]P(Beli=No) = 3/5 × 1/5 × 3/5 × 5/10 = 9/250 = 0.036

  Since P(Yes|x’) < P(No|x’), we label x’ as “No”.
Example (2)
• Test Phase (continued)
  – The MAP scores above (0.008 and 0.036) are unnormalized likelihood × prior values. Proper posterior probabilities can be computed by normalizing these likelihoods so that the values obtained sum to 1:
    P(Yes|x’) = 0.008 / (0.008 + 0.036) ≈ 0.18
    P(No|x’) = 0.036 / (0.008 + 0.036) ≈ 0.82
Relevant Issues
• Violation of the Independence Assumption
  – For many real-world tasks, P(X1, ..., Xn|C) ≠ P(X1|C) ... P(Xn|C)
  – Nevertheless, naïve Bayes works surprisingly well anyway!
• Zero Conditional Probability Problem
  – If no training example contains the attribute value Xj = ajk, then P̂(Xj = ajk|C = ci) = 0
  – In this circumstance, P̂(x1|ci) ... P̂(ajk|ci) ... P̂(xn|ci) = 0 at test time
  – As a remedy, conditional probabilities are estimated with the m-estimate:
    P̂(Xj = ajk|C = ci) = (nc + m·p) / (n + m)
    nc: number of training examples for which Xj = ajk and C = ci
    n:  number of training examples for which C = ci
    p:  prior estimate (usually, p = 1/t for t possible values of Xj)
    m:  weight given to the prior (number of "virtual" examples, m ≥ 1)
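A sketch of this m-estimate in Python (the helper name and defaults are illustrative):

    def m_estimate(n_c, n, t, m=1.0):
        """Smoothed P(Xj = ajk | C = ci) = (nc + m*p) / (n + m), with p = 1/t.

        n_c: count of examples with Xj = ajk and C = ci
        n:   count of examples with C = ci
        t:   number of possible values of attribute Xj
        m:   weight of the prior (number of "virtual" examples)"""
        p = 1.0 / t
        return (n_c + m * p) / (n + m)

    # An unseen value (n_c = 0) no longer gets probability exactly 0:
    print(m_estimate(0, 9, 3))  # 0.0333... instead of 0.0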
Relevant Issues
• Continuous-valued Input Attributes
  – An attribute can take infinitely many values
  – The conditional probability is modeled with the normal distribution:
    P̂(Xj|C = ci) = 1 / (√(2π) σji) · exp( −(Xj − μji)² / (2σji²) )
    μji: mean (average) of the attribute values Xj over the examples for which C = ci
    σji: standard deviation of the attribute values Xj over the examples for which C = ci
  – Learning Phase: for X = (X1, ..., Xn), C = c1, ..., cL
    Output: n × L normal distributions and P(C = ci), i = 1, ..., L
  – Test Phase: for X’ = (X’1, ..., X’n)
    • Calculate the conditional probabilities with all the normal distributions
    • Apply the MAP rule to make a decision
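A minimal sketch of this Gaussian class-conditional density (standalone, illustrative naming):

    import math

    def gaussian_likelihood(x, mu, sigma):
        """P(Xj = x | C = ci) under a normal distribution with the
        per-class mean mu and standard deviation sigma."""
        z = (x - mu) / sigma
        return math.exp(-0.5 * z * z) / (math.sqrt(2 * math.pi) * sigma)

    # The learning phase reduces to estimating mu and sigma per (attribute, class);
    # the test phase plugs these densities into the MAP rule as before.
    print(gaussian_likelihood(1.0, 0.0, 1.0))  # ≈ 0.242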


Conclusions
• Naïve Bayes is based on the independence assumption
  – Training is very easy and fast; it only requires considering each attribute in each class separately
  – Testing is straightforward; it only involves looking up tables or calculating conditional probabilities with normal distributions
• A popular generative model
  – Performance is competitive with most state-of-the-art classifiers, even when the independence assumption is violated
  – Many successful applications, e.g., spam mail filtering
  – Apart from classification, naïve Bayes can do more…


Exercise
• From data obtained by the Bikini Bottom police precinct (Polsek Bikini Bottom), the following records of stolen cars were compiled:

[Table: the stolen-car training records omitted]

• Using that table, determine whether a car that is red, of type SUV, and of domestic origin would be stolen or not.
