Internship Report


Project Report

• Previous Understandings

Up to now I have read and tried to understand all the papers you have provided. Though I am not yet confident about all of them, I have gained enough knowledge to get started.

I have executed the full code in DLND and understood it up to the rdv.py part. I have a couple of doubts about process.py.

However, I am having a very hard time figuring out the con_net_sentence.py code. It seems very obfuscated to me, and I am still not able to understand it fully.

• The New Architecture Part

As my task was to design a new architecture, I have researched a couple of architectures for predicting the novelty score.

• 1st Architecture:

I will use nearly the same architecture as you have used for novelty detection, but after the fully connected layer, instead of using softmax, I will add another layer of a single neuron with linear activation.

Pseudocode

# Previous model (existing CNN layers, unchanged)
.
.
.
model.add(Dense(..))             # fully connected layer
model.add(Activation('relu'))
model.add(Dense(1))              # single output neuron for the novelty score
model.add(Activation('linear'))  # linear activation instead of softmax

Cost function: In my opinion this problem is more or less a regression problem, therefore I am planning to use the L2 (mean squared error) loss function.
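As a concrete sketch, the full model for this architecture might look like the following in Keras; the input shape and layer sizes here are assumed placeholders, not the actual DLND configuration.

# Minimal Keras sketch of Architecture 1; shapes and sizes are assumptions
from keras.models import Sequential
from keras.layers import Conv1D, GlobalMaxPooling1D, Dense, Activation

model = Sequential()
# stand-in for the existing CNN front end (hypothetical sizes)
model.add(Conv1D(64, 3, activation='relu', input_shape=(100, 300)))
model.add(GlobalMaxPooling1D())
model.add(Dense(128))                # fully connected layer
model.add(Activation('relu'))
model.add(Dense(1))                  # single-neuron regression head
model.add(Activation('linear'))      # linear activation replaces softmax
# the L2 loss corresponds to mean squared error in Keras
model.compile(loss='mean_squared_error', optimizer='adagrad')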

• 2nd Architecture:

I have researched a bit about LSTM neural networks and I am planning to combine a CNN and an LSTM.
The model will be like this:
RDV -> CNN -> LSTM -> FULLY CONNECTED -> OUTPUT

After the convolutional layers, we will add two LSTM layers and one dense layer of 1 neuron, and again we will use linear activation for the final layer. (Since LSTM layers expect sequence input, the convolutional output is passed on as a sequence rather than fully flattened.)

Pseudocode

# CNN model (existing convolutional layers)
.
.
.
# LSTM model: LSTM layers expect a 3D sequence input, so the
# convolutional output is passed on as a sequence rather than flattened
model.add(LSTM(.., return_sequences=True))  # first LSTM returns the sequence
model.add(LSTM(..))                         # second LSTM returns a vector
model.add(Dense(..))                        # fully connected layer
model.add(Activation('relu'))
model.add(Dense(1))                         # single output neuron
model.add(Activation('linear'))             # linear activation for regression

Cost function: Again, we will use the mean squared error loss function.
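As a concrete sketch, this CNN-LSTM model might be wired up in Keras as follows; again, the input shape and layer sizes are assumed placeholders.

# Minimal Keras sketch of Architecture 2; shapes and sizes are assumptions
from keras.models import Sequential
from keras.layers import Conv1D, MaxPooling1D, LSTM, Dense, Activation

model = Sequential()
# CNN front end over the RDV sequence (hypothetical sizes)
model.add(Conv1D(64, 3, activation='relu', input_shape=(100, 300)))
model.add(MaxPooling1D(pool_size=2))
# the pooled output is still a sequence, which the LSTMs consume
model.add(LSTM(64, return_sequences=True))  # first LSTM returns the sequence
model.add(LSTM(32))                         # second LSTM returns a vector
model.add(Dense(64))                        # fully connected layer
model.add(Activation('relu'))
model.add(Dense(1))                         # single-neuron regression head
model.add(Activation('linear'))
model.compile(loss='mean_squared_error', optimizer='adagrad')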

Note: I am planning to use the Adagrad optimiser for both architectures, as it converges faster.
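For reference, the optimiser can also be passed as an explicit instance, which makes the learning rate adjustable; the value below is just an assumed starting point, not a tuned one.

from keras.optimizers import Adagrad

model.compile(loss='mean_squared_error', optimizer=Adagrad(lr=0.01))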
