KMPA Assignment-II
Group No. 11
Vindhani Mohsin (ED11B041), Rohit R. Salunke (ME11B119), Pritesh Jain (ME11B116)
April 16, 2016
1 Task 1

1.1 Linearly Separable

1.1.1 C-SVM
Gaussian Kernel
Polynomial Kernel
1.1.2 Nu-SVM
Gaussian Kernel
Figure 42: Confusion Matrix with Nu=0.00001
Figure 43: Decision boundaries with Nu=0.00001

Polynomial Kernel

Figure 53: Confusion Matrix with Nu=0.00001
Figure 54: Decision boundaries with Nu=0.00001
1.1.3 Nonlinearly Separable

1.1.4 C-SVM
Gaussian Kernel
Polynomial Kernel
Figure 66: Kernel Gram Matrix with order 1
Figure 67: Kernel Gram Matrix with order 2
1.1.5 Nu-SVM
Gaussian Kernel
Figure 83: Confusion Matrix with Nu=0.00001
Figure 84: Decision boundaries with Nu=0.00001
Polynomial Kernel
1.1.6 Overlapping Data

1.1.7 C-SVM
Gaussian Kernel
Polynomial Kernel
1.1.8 Nu-SVM
Gaussian Kernel
Inferences
As we decreased the C parameter in C-SVM, the width of the margin increased, which also increased the total number of support vectors and of bounded support vectors. The same happened when the Nu parameter was increased in Nu-SVM.
Beyond a certain value, the Nu parameter becomes infeasible. This depends on the size of the dataset: the larger the dataset, the higher the values Nu can take. This appears to be because, as Nu increases, the number of support vectors increases, and that number is bounded by the sample size.
The Kernel Gram matrix plots for the Gaussian kernel resembled a block-diagonal matrix more closely than those for the polynomial kernel, because Gaussian kernel values fall off exponentially with distance. The blocks correspond to data points of the same class.
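These trends can be reproduced in a small sketch using scikit-learn's SVC and NuSVC on toy Gaussian blobs (illustrative data, not the assignment's dataset; the parameter values below are arbitrary):

```python
import numpy as np
from sklearn.svm import SVC, NuSVC

rng = np.random.default_rng(0)
# Two mildly overlapping 2-D Gaussian blobs as stand-in data.
X = np.vstack([rng.normal(0, 1, (100, 2)), rng.normal(3, 1, (100, 2))])
y = np.array([0] * 100 + [1] * 100)

# Smaller C -> softer (wider) margin -> more support vectors.
n_sv = {C: SVC(kernel="rbf", C=C).fit(X, y).support_.size
        for C in (0.01, 100)}

# Larger nu -> more support vectors (nu lower-bounds their fraction).
n_sv_nu = {nu: NuSVC(kernel="rbf", nu=nu).fit(X, y).support_.size
           for nu in (0.05, 0.8)}

print(n_sv, n_sv_nu)
```

The counts confirm the inference: the support-vector count rises as C shrinks in C-SVM and as nu grows in Nu-SVM.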
2 Task 2
2.1
The top 130 principal components explain over 95 percent of the variance in the data. We selected the top 150 principal components.
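The component count comes from the cumulative explained-variance ratio of the principal components. A minimal numpy sketch on toy data (the real computation uses the assignment's dataset, whose dimensions differ):

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy correlated data standing in for the image features.
X = rng.normal(size=(400, 50)) @ rng.normal(size=(50, 50))
Xc = X - X.mean(axis=0)          # center before PCA

# Squared singular values are proportional to per-component variance.
s = np.linalg.svd(Xc, compute_uv=False)
var_ratio = s**2 / np.sum(s**2)
cum = np.cumsum(var_ratio)

# Smallest number of components explaining at least 95% of the variance.
k95 = int(np.searchsorted(cum, 0.95)) + 1
print(k95)
```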
2.2
The autoencoder has 350 neurons in the hidden layer and 150 neurons in the bottleneck layer.
The error used for training is MSE, with standard backpropagation.
The activation in the hidden layer is sigmoid.
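A minimal numpy sketch of one MSE backpropagation step for such an autoencoder (sigmoid hidden layer, linear reconstruction; the layer sizes and learning rate here are toy values, not the report's 350/150 configuration):

```python
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

n_in, n_hid = 8, 4                      # illustrative sizes only
W1 = rng.normal(scale=0.1, size=(n_in, n_hid))
W2 = rng.normal(scale=0.1, size=(n_hid, n_in))
X = rng.random((32, n_in))              # a toy mini-batch

def mse_and_grads(X, W1, W2):
    H = sigmoid(X @ W1)                 # sigmoid hidden layer
    R = H @ W2                          # linear reconstruction
    err = R - X
    mse = np.mean(err**2)
    dR = 2 * err / err.size             # d(MSE)/dR
    gW2 = H.T @ dR
    dH = (dR @ W2.T) * H * (1 - H)      # backprop through the sigmoid
    gW1 = X.T @ dH
    return mse, gW1, gW2

lr = 0.5
before, gW1, gW2 = mse_and_grads(X, W1, W2)
W1 -= lr * gW1
W2 -= lr * gW2
after, _, _ = mse_and_grads(X, W1, W2)
```

One gradient step lowers the reconstruction MSE, which is the quantity the training loop drives down over epochs.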
2.3
Figure 124: Confusion Matrix for the stacked autoencoder after dimensions were reduced to 100. Optimal results were achieved for Nu=0.003 and C=0.5.
3 Task 3

Convolutional Neural Network
The image is resized to 32 × 32.
Input: 3 feature maps (RGB).
After the first convolution layer, the image size is 28 × 28; 16 feature maps are generated.
Subsampling with 2 × 2 max pooling reduces it to 14 × 14.
After the second convolution layer, the image size is 10 × 10; 32 maps are generated.
Max pooling (2 × 2) then reduces it to 5 × 5.
After the third convolution, the output is 1 × 1; 300 features are generated from this.
The first MLP layer takes the 300 convolution features as input and has 150 outputs.
The second MLP layer then has 150 inputs and 5 outputs for the 5 classes.
A softmax function is used for the output layer, followed by cross-entropy error for classification.
Training is done using mini-batch stochastic gradient descent with learning rate 0.1.
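The size arithmetic above can be checked directly. Assuming stride-1 "valid" convolutions, the 32 → 28 step implies 5 × 5 kernels (32 − 5 + 1 = 28), and the same kernel size is consistent with every later step:

```python
def conv_out(size, kernel):
    """Stride-1 'valid' convolution: output = input - kernel + 1."""
    return size - kernel + 1

def pool_out(size, window):
    """Non-overlapping max pooling shrinks the side by the window size."""
    return size // window

sizes = [32]                          # input resized to 32 x 32
sizes.append(conv_out(sizes[-1], 5))  # conv1 -> 28
sizes.append(pool_out(sizes[-1], 2))  # pool  -> 14
sizes.append(conv_out(sizes[-1], 5))  # conv2 -> 10
sizes.append(pool_out(sizes[-1], 2))  # pool  -> 5
sizes.append(conv_out(sizes[-1], 5))  # conv3 -> 1
print(sizes)  # [32, 28, 14, 10, 5, 1]
```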
The confusion matrix for CNN is as follows:
154  81  43  31  32
114 144  25  36  41
 21  11  39   4   3
 34  37   7  41  26
 32  26   7  15  36
4 Task 4
The activations of the CNN layer before the classification layer are used as input to the SVM. The number of features is thus 150.
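A sketch of this pipeline, with the trained CNN's penultimate layer stubbed out by a random projection (shapes only, not real learned features) and scikit-learn's NuSVC standing in for the SVM:

```python
import numpy as np
from sklearn.svm import NuSVC

rng = np.random.default_rng(0)

def penultimate_features(images):
    """Stand-in for the trained CNN up to its 150-unit layer.

    In the real pipeline this is the CNN forward pass with the final
    softmax layer removed; here a fixed random projection is used so the
    sketch runs without trained weights."""
    proj = rng.normal(size=(images.shape[1], 150))
    return np.tanh(images @ proj)

images = rng.random((60, 32 * 32 * 3))     # flattened toy images
labels = rng.integers(0, 5, size=60)       # 5 classes, toy labels
feats = penultimate_features(images)       # shape (60, 150)
clf = NuSVC(nu=0.003).fit(feats, labels)   # SVM on the 150-d features
```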
Figure 125: Confusion Matrix with 150 dimensions. Results achieved for Nu=0.003 and C=15.
5 Task 5

Deep Boltzmann Machine
The task is performed for 5 classes, numbered 13, 16, 48, 56, 93.
New labels 1, 2, 3, 4, 5 are assigned correspondingly.
A total of 513 images are available for the five classes combined.
The data is split 0.75/0.25 for training and testing.
A 1-of-k representation is used for classification.
There are 256 input features and 5 output classes.
The DBM is built with a layer configuration of (256, 100, 50, 30, 5), including the input layer, three hidden layers, and the output layer.
The visible and hidden layers use sigmoid units, and the output uses a softmax unit.
Pretraining is done with 15 epochs.
Fine-tuning uses backpropagation with 50 epochs.
k-step contrastive divergence is used with k = 1.
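A minimal numpy sketch of the k = 1 contrastive-divergence update used in pretraining one binary RBM layer (toy layer sizes, not the report's 256 → 100; the learning rate is illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_step(v0, W, b_vis, b_hid, lr=0.1):
    """One CD-1 update for a binary RBM, modifying W and biases in place."""
    # Positive phase: hidden activations driven by the data.
    h0_prob = sigmoid(v0 @ W + b_hid)
    h0 = (rng.random(h0_prob.shape) < h0_prob).astype(float)
    # Negative phase: one Gibbs step (k = 1) back to visible and up again.
    v1_prob = sigmoid(h0 @ W.T + b_vis)
    h1_prob = sigmoid(v1_prob @ W + b_hid)
    n = v0.shape[0]
    W += lr * (v0.T @ h0_prob - v1_prob.T @ h1_prob) / n
    b_vis += lr * (v0 - v1_prob).mean(axis=0)
    b_hid += lr * (h0_prob - h1_prob).mean(axis=0)

# Toy sizes standing in for the first 256 -> 100 RBM.
n_vis, n_hid = 16, 8
W = rng.normal(scale=0.01, size=(n_vis, n_hid))
b_vis = np.zeros(n_vis)
b_hid = np.zeros(n_hid)
v_batch = (rng.random((20, n_vis)) < 0.5).astype(float)
for _ in range(15):              # 15 pretraining epochs, as in the report
    cd1_step(v_batch, W, b_vis, b_hid)
```

Each hidden layer is pretrained this way in turn, using the previous layer's hidden probabilities as its visible data, before backpropagation fine-tunes the whole stack.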
Confusion matrix for test data
31  0  2  0  1
 1 15  0  0  0
 2  0 23  1  3
 1  8  1 19  1
 2  0  1  0 26
A second matrix (apparently the training-data confusion matrix) is incomplete in the source; the surviving values are: 1 64 0 4 0 4 1 65 1 0 30 1 12 0 71 0 1 0 1 1 63.