AIP A1 Report


E9 246 AIP Assignment 1

Name: Pagadala Krishna Murthy, SR NO: 19217, MTech AI (Artificial Intelligence), EE.

Q1: Scale-space extrema detection (SIFT):

Keypoint counts (keypoint images omitted in this text version):

First image:
Original: 699 keypoints detected
Scaled: 170
Noisy: 519
Blurred: 864
Rotated: 693

Second image:
Original: 261
Scaled: 102
Noisy: 255
Blurred: 991
Rotated: 271

Most of the keypoints detected in the original image reappear in each transformed version (scaled, noisy, blurred, and rotated), so the detected features can be considered invariant to these transformations.

First image size: 500 × 375; sigma = 1.6; images per octave: 5; number of octaves: 8.

Second image size: 400 × 500.


Q2_PartA: Using a pre-trained deep neural network for image classification:

Loaded the built-in VGG16 model, extracted features from the FC2 layer for each image, and formed x_train and x_test.

For y_train and y_test, the following label was assigned to each class of image in the dataset:

Albatross: 0, American_Goldfinch: 1, anthurium: 2, frangipani: 3, Marigold: 4, Red_headed_Woodpecker: 5

Classified each feature vector using the KNN algorithm, where n is the number of neighbours used.

For n = 3, accuracy is 95%

For n = 5, accuracy is 98.33%

For n = 7, accuracy is 96.66%

The accuracy is highest for n = 5 on the given data.
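The Part A pipeline can be sketched as below. The bird/flower images are not bundled here, so clustered random vectors stand in for the 4096-dimensional FC2 features; the real feature-extraction step is shown in comments. The class count, label mapping, and neighbour values come from the report, while the 80/20 split and the synthetic-feature parameters are assumptions.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import accuracy_score

# In the real pipeline the features come from VGG16's fc2 layer, e.g.:
#   from keras.applications.vgg16 import VGG16
#   from keras.models import Model
#   base = VGG16(weights="imagenet")
#   extractor = Model(base.input, base.get_layer("fc2").output)
#   feats = extractor.predict(preprocessed_images)   # shape (N, 4096)

rng = np.random.default_rng(0)
n_classes, per_class = 6, 50  # labels 0..5 as in the report's mapping

# Clustered random features stand in for FC2 outputs so KNN has structure.
centers = rng.normal(0, 5, (n_classes, 4096))
x = np.vstack([c + rng.normal(0, 1, (per_class, 4096)) for c in centers])
y = np.repeat(np.arange(n_classes), per_class)

# 80/20 split into x_train/x_test (split ratio assumed).
idx = rng.permutation(len(y))
split = int(0.8 * len(y))
tr, te = idx[:split], idx[split:]

results = {}
for n in (3, 5, 7):  # the neighbour counts tried in the report
    knn = KNeighborsClassifier(n_neighbors=n).fit(x[tr], y[tr])
    results[n] = accuracy_score(y[te], knn.predict(x[te]))
    print(f"n={n}: accuracy {results[n]:.2%}")
```

On the synthetic clusters all three settings score near 100%; on real FC2 features the accuracies vary with n, as the report's 95% / 98.33% / 96.66% figures show.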

Q2_PartB: Fine-tuning the classification layer

Folder preparation for train and test dataset creation (screenshot omitted in this text version):


The prediction layer (last layer) in the VGG16 model has 1000 classes. Since our task is to classify 6 classes, we need to modify this layer.

Since I was not able to replace the layer in the existing VGG16 model, I created a new, empty Sequential model.

Copied all the layers except the prediction layer into the Sequential model.

Added a 6-class Dense layer with a softmax activation function.

Compiled the model using the Adam optimizer. Initially I tried to train the entire model, but it was taking a very long time because all the layers were being trained, so I froze every layer except the last one, making only the final layer trainable.
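The layer swap and freezing described above can be sketched as follows, assuming tf.keras. `weights=None` keeps the sketch lightweight; the report's setup uses `weights="imagenet"` so the copied layers carry the pre-trained filters.

```python
from tensorflow.keras import Input
from tensorflow.keras.applications import VGG16
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense

# weights=None avoids the ImageNet weight download for this sketch;
# the report uses weights="imagenet".
base = VGG16(weights=None)

model = Sequential()
model.add(Input(shape=(224, 224, 3)))
for layer in base.layers[1:-1]:      # skip the input layer and the
    layer.trainable = False          # 1000-way prediction layer; freeze
    model.add(layer)                 # every layer we copy over

# New 6-class softmax head for the 6-class dataset.
model.add(Dense(6, activation="softmax"))

model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])
```

With the copied layers frozen, only the final Dense layer's 4096 × 6 + 6 weights are updated during training, which is why training becomes fast.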

The results are as follows:

Accuracy on training set: 99.12%

Accuracy on test set: 93.33%

Colab code link:
https://colab.research.google.com/drive/1AtDQLnc9eZuZRqTo5u2eEl_bQSZfNrFF#scrollTo=oxi573ZndISA

Q2_Optional: Simple CNN from scratch with CE (cross-entropy) loss:

Implemented a 3-layer CNN from scratch with convolution, flatten, and dense layers.

Compiled the model with the Adam optimizer (learning rate = 0.0001) and cross-entropy loss.
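A minimal sketch of this architecture, assuming tf.keras. The input size (224 × 224 × 3) and filter count are assumptions; the report specifies only the layer types, the optimizer, and the learning rate.

```python
from tensorflow.keras import Input, Sequential
from tensorflow.keras.layers import Conv2D, Flatten, Dense
from tensorflow.keras.optimizers import Adam

model = Sequential([
    Input(shape=(224, 224, 3)),             # input size assumed
    Conv2D(32, (3, 3), activation="relu"),  # convolution layer
    Flatten(),                              # flatten layer
    Dense(6, activation="softmax"),         # dense output, 6 classes
])
model.compile(optimizer=Adam(learning_rate=0.0001),
              loss="categorical_crossentropy",
              metrics=["accuracy"])
```

Unlike the VGG16 variant in Part B, every layer here is trainable, but the model is small enough that full training is still quick.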

When I trained the model for the first time, I got 94.72% accuracy on the training data; when I executed the cell again, I got 99.72%. I assume this was due to the network's tendency to overfit when trained repeatedly on the same training data.

Later I restarted the program and trained the model on the training data only once.

The accuracy on training data is 94.72%

The accuracy on test data is 99.17%

Colab code link: https://colab.research.google.com/drive/1s2UzRs1nH5pOIcFIq959HwX_4ZqGOYJD

Comparison between the different parts of Q2:

In Part A we use only the features produced by the fc1/fc2 layer of the VGG16 model and apply a separate classifier algorithm (KNN).

In Part B we use VGG16 directly for classification, fine-tuning its classification layer to predict only 6 classes.

In the optional part we built a simple 3-layer CNN from scratch; the process is the same as in Part B, except that here all the layers can be trained in less time.
