Pattern Recognition Task Report


Pattern Recognition

Report on Computer Exercises (1, 2, 5)

Introduced by :
FATMA GAMAL ABDEL_AZIZ
Under supervision of:
DR. MOUMEN EL-MELEGY
COMPUTER EXERCISE 1:
In Exercise 1, it is required to implement some algorithms as functions to be used later in the following exercises, so there are no results or comments on it. These functions can be found in the folder named “EX_1”.

COMPUTER EXERCISE 2:
Here it is required to classify 10 given samples, deciding which class each sample belongs to, given the prior probability of each class. The steps to obtain the results are:
▪ Get the mean of each class, where the element mue1(1) is the mean of feature x1 given class w1, and so on.
▪ Get the covariance matrix named “sigma” of each class from the three features x1, x2, x3, with the help of the cov() function (a short sketch of these two steps follows this list).
▪ Get the required results in three cases:
o Only one feature exists
o Two features exist
o Three features exist
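The following is a minimal MATLAB sketch of the first two steps, using placeholder sample matrices (one row per sample, columns x1, x2, x3); the data values here are illustrative, not the actual samples given in the exercise:

% Placeholder data: 10 samples per class with features x1, x2, x3
% (replace with the actual samples given in the exercise)
w1_samples = randn(10, 3);
w2_samples = randn(10, 3) + 2;

mue1 = mean(w1_samples);    % mue1(1) is the mean of x1 given class w1, and so on
mue2 = mean(w2_samples);

sigma1 = cov(w1_samples);   % 3x3 covariance matrix of class w1
sigma2 = cov(w2_samples);   % 3x3 covariance matrix of class w2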
Now let's discuss our results in each case:

A. Only one feature exists :


First, we plotted the conditional probability of feature x1 given class w1 and the conditional probability of feature x1 given class w2 on the same figure, as shown in the figure below, to get an expectation about the classification process (a sketch of how this plot is produced follows the figure), and we found that:
o Each of P(x1|w1) and P(x1|w2) is Gaussian distributed.
o P(x1|w1) is larger than P(x1|w2) over the range of roughly -5 < x1 < 4.5.
o So our classifier will assign class 1 wherever P(x1|w1) exceeds P(x1|w2), i.e. roughly -5 < x1 < 4.5, and class 2 for any value x1 > 4.5 or x1 < -5, regardless of the actual class.

Figure(1)
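The figure can be produced along the following lines, continuing the placeholder variables from the previous sketch and assuming univariate Gaussian class-conditional densities with the estimated means mue1(1), mue2(1) and variances sigma1(1,1), sigma2(1,1):

x  = linspace(-10, 10, 1000);

m1 = mue1(1);  v1 = sigma1(1,1);    % mean and variance of x1 given w1
m2 = mue2(1);  v2 = sigma2(1,1);    % mean and variance of x1 given w2

% Univariate Gaussian class-conditional densities
p_x1_w1 = exp(-(x - m1).^2 ./ (2*v1)) ./ sqrt(2*pi*v1);
p_x1_w2 = exp(-(x - m2).^2 ./ (2*v2)) ./ sqrt(2*pi*v2);

figure;
plot(x, p_x1_w1, 'b', x, p_x1_w2, 'r');
legend('P(x1|w1)', 'P(x1|w2)');
xlabel('x1'); ylabel('class-conditional density');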
So we calculated the empirical error of our discriminant function under the following conditions:
✓ Perform classification on a data set of n samples.
✓ All these samples are random samples of feature x1 generated from a Gaussian distribution with the mean (mue1) and sigma (sigma1) corresponding to the second class.
✓ So the correct class is class 2; we test the output class from the discriminant function and compute the empirical error under these conditions for different numbers of samples, as shown in figure 2 (a sketch of this computation follows the results below). We found that at:

➢ N = 10: The empirical error is large and larger than the Bhattacharyya bound.
➢ N = 1000: The empirical error is large but smaller than the Bhattacharyya bound.
➢ N = 10^6: The empirical error is small and smaller than the Bhattacharyya bound.
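A sketch of how this empirical error and the Bhattacharyya bound can be computed in the one-feature case, continuing the variables above; equal priors are assumed here only for illustration, test samples of x1 are drawn from the class-2 density, and the fraction of them assigned to class 1 by the discriminant is taken as the empirical error:

P1 = 0.5;  P2 = 0.5;                 % priors (assumed equal for this sketch)
N  = 1e6;                            % number of test samples (10, 1000, 1e6)

% Test samples of x1 drawn from the class-2 density
xt = m2 + sqrt(v2) * randn(N, 1);

% One-dimensional quadratic discriminants (log form)
g1 = -0.5 * (xt - m1).^2 ./ v1 - 0.5*log(v1) + log(P1);
g2 = -0.5 * (xt - m2).^2 ./ v2 - 0.5*log(v2) + log(P2);

emp_error = mean(g1 > g2);           % fraction misclassified as class 1

% Bhattacharyya bound: P(error) <= sqrt(P1*P2) * exp(-k)
k = (1/8) * (m2 - m1)^2 / ((v1 + v2)/2) ...
  + 0.5 * log( ((v1 + v2)/2) / sqrt(v1*v2) );
bhatt_bound = sqrt(P1*P2) * exp(-k);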

B. Two features exist:


We repeat the same steps but using two features for each class and find that:

➢ N = 10: The empirical error is large and larger than the Bhattacharyya bound.
➢ N = 1000: The empirical error is large but smaller than the Bhattacharyya bound.
➢ N = 10^6: The empirical error is small and smaller than the Bhattacharyya bound.
C. Three features exist:
We repeat the same steps but using three features for each class (a multivariate sketch follows these results) and find that:

➢ N = 10: The empirical error is large and larger than the Bhattacharyya bound.
➢ N = 1000: The empirical error is small and smaller than the Bhattacharyya bound.
➢ N = 10^6: The empirical error is very small and smaller than the Bhattacharyya bound.
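For the two- and three-feature cases the same computation extends to the full mean vectors and covariance matrices; below is a sketch using the first d features, continuing the placeholder variables and priors from the earlier sketches:

d  = 3;                                   % number of features used (1, 2, or 3)
M1 = mue1(1:d);    S1 = sigma1(1:d, 1:d);
M2 = mue2(1:d);    S2 = sigma2(1:d, 1:d);

N  = 1e6;
Xt = randn(N, d) * chol(S2) + M2;         % test samples drawn from class 2

% Quadratic discriminants g1, g2 evaluated for every test sample
D1 = Xt - M1;    D2 = Xt - M2;
g1 = -0.5 * sum((D1 / S1) .* D1, 2) - 0.5*log(det(S1)) + log(P1);
g2 = -0.5 * sum((D2 / S2) .* D2, 2) - 0.5*log(det(S2)) + log(P2);
emp_error = mean(g1 > g2);

% Multivariate Bhattacharyya bound
Sm = (S1 + S2) / 2;
k  = (1/8) * (M2 - M1) * (Sm \ (M2 - M1)') ...
   + 0.5 * log( det(Sm) / sqrt(det(S1)*det(S2)) );
bhatt_bound = sqrt(P1*P2) * exp(-k);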
From these results we can say that:
1- For the same finite number of samples, increasing the number of features decreases the empirical error.
2- For the same number of features, increasing the number of test samples decreases the empirical error.
3- So when we increase the number of features, our classifier (discriminant function) gives more efficient performance.
COMPUTER EXERCISE 5:
Here we have three classes and three features, and it is required to classify some test points according to two different cases of the prior probability of each class.
1- The results of the Mahalanobis distance for each point with each class are:

“We cannot classify the points correctly according to the smallest Mahalanobis distance, as it indeed gives wrong classes.” (A sketch of this computation is given at the end of this exercise.)
2- We classify them using the discriminant function in the following two cases and get that:
A. Equal prior probability P(w1) = P(w2) = P(w3) = 1/3

I do not know what the correct classes are, so I cannot form an opinion about whether these results are correct or not.

B. Different prior probability P(w1) = 0.8, P(w2) = P(w3) = 0.1

This result is wrong but somewhat logical, as class 1 has the highest prior probability.
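Both steps of this exercise can be sketched in MATLAB as follows, using placeholder training data and a placeholder test point (the actual samples and test points from the exercise should be substituted); a point goes to the class with the smallest Mahalanobis distance in step 1, and to the class with the largest discriminant value g_i in step 2:

% Placeholder training data for the three classes (replace with the actual samples)
data   = {randn(10,3), randn(10,3) + 2, randn(10,3) - 2};
priors = [0.8, 0.1, 0.1];            % case B; use [1/3, 1/3, 1/3] for case A

xt = [0.5, 1.0, 0.0];                % one test point (illustrative values)

d_mahal = zeros(1, 3);
g       = zeros(1, 3);
for i = 1:3
    mu = mean(data{i});              % estimated mean vector of class i
    S  = cov(data{i});               % estimated 3x3 covariance matrix of class i
    d  = xt - mu;

    % Step 1: Mahalanobis distance from xt to class i
    d_mahal(i) = sqrt(d * (S \ d'));

    % Step 2: quadratic discriminant
    % g_i(x) = -1/2 (x - mu)' * inv(S) * (x - mu) - 1/2 * ln|S| + ln P(w_i)
    g(i) = -0.5 * (d * (S \ d')) - 0.5 * log(det(S)) + log(priors(i));
end

[~, class_by_distance]     = min(d_mahal);   % smallest Mahalanobis distance
[~, class_by_discriminant] = max(g);         % largest discriminant value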
