
Panipat Institute of Engineering & Technology

Samalkha
Computer Science & Engineering Department

Practical File of Neural Networks & Deep Learning


Code: PE-CS-D411AL

Submitted to: Dr. Saurabh Goyal, Associate Professor, CSE Department
Submitted by: Vikrant (2819044), B.Tech. CSE VII Sem, C1

Affiliated to

Kurukshetra University Kurukshetra, India


TABLE OF CONTENTS
S. No.  Experiment                                                                      Date        Remarks
1       Write a Program to implement Perceptron.                                        03/09/2022
2       Write a Program to implement AND OR Gates using Perceptron.                     17/09/2022
3       To implement Crab Classification using a patternnet object.                    24/09/2022
4       To write a program to implement Wine Classification using Back propagation.    15/10/2022
5       Write a MATLAB Script containing four functions: Addition, Subtraction,
        Multiplication, Division.                                                      29/10/2022
6       Write a program to implement classification of linearly separable Data
        with a perceptron.                                                             05/11/2022
7       To study ImageNet, GoogLeNet, ResNet Convolutional Neural Networks.            19/11/2022
8       To study Convolutional Neural Network and Recurrent Neural Network.            03/12/2022

Experiment #1

Objective: Write a Program to implement Perceptron.


Code:
1. File 1: Example1.m

% Four 2-D input samples (one per column) and their targets
p = [2 1 -2 -1; 2 -2 2 1]
t = [0 1 0 1]
net = newp(p, t)          % create a perceptron sized to the data
net = init(net)
W = [0 0]                 % start from zero weights
p1 = p(:, 1)
t1 = t(1)
a = sim(net, p1)          % response to the first sample
e = t1 - a
for i = 1:4
    p1 = p(:, i)
    t1 = t(i)
    a = sim(net, p1)
    e = t1 - a;
    for j = 1:10
        net.IW{1,1} = W
        e = t1 - a
        dw = learnp(W, p1, [], [], [], [], e, [], [], [], [], []);
        W = W + dw        % apply the perceptron weight update
        net.IW{1,1} = W
        a = sim(net, p1)
    end
    a = sim(net, p1)
end
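
For reference, learnp implements the classic perceptron learning rule, dW = e * p', so the inner update above is equivalent to this hand-written version (a minimal sketch):

% Hand-written equivalent of the learnp update
e  = t1 - a;       % error for the current sample
dw = e * p1';      % perceptron rule: dW = e * p'
W  = W + dw;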


2. File 2: Example2.m

clear all
p = [2 1 -2 -1; 2 -1 2 1]
t = [0 1 0 1]
net = newp(p, t)
net.IW{1,1} = [0 0]            % zero initial weights
net.b{1} = 0                   % zero initial bias
net.trainParam.epochs = 20
net = train(net, p, t)         % built-in perceptron training


OUTPUT:

Epoch Graph:


Experiment #2

Objective: Write a Program to implement AND OR Gates using Perceptron.
Code:
a) AND Gate
% Perceptron for AND function
clear;
clc;
x = [1 1 -1 -1; 1 -1 1 -1];   % bipolar inputs, one sample per column
t = [1 -1 -1 -1];             % bipolar AND targets
w = [0 0];
b = 0;
alpha = input('Enter Learning rate = ');
theta = input('Enter Threshold value = ');
con = 1;
epoch = 0;
while con
    con = 0;
    for i = 1:4
        yin = b + x(1, i)*w(1) + x(2, i)*w(2);   % net input for sample i
        if yin > theta
            y = 1;
        end
        if yin <= theta && yin >= -theta
            y = 0;
        end
        if yin < -theta
            y = -1;
        end
        if y - t(i)               % nonzero error: update and keep looping
            con = 1;
            for j = 1:2
                w(j) = w(j) + alpha*t(i)*x(j, i);
            end
            b = b + alpha*t(i);
        end
    end
    epoch = epoch + 1;
end
disp(' Perceptron for AND function ');
disp(' Final Weight matrix ');
disp(w);
disp(' Final Bias ');
disp(b);

OUTPUT:


b) OR Gate
% Perceptron for OR function
clear;
clc;
x = [1 1 -1 -1; 1 -1 1 -1];   % bipolar inputs, one sample per column
t = [1 1 1 -1];               % bipolar OR targets (only (-1,-1) maps to -1)
w = [0 0];
b = 0;
alpha = input('Enter Learning rate = ');
theta = input('Enter Threshold value = ');
con = 1;
epoch = 0;
while con
    con = 0;
    for i = 1:4
        yin = b + x(1, i)*w(1) + x(2, i)*w(2);   % net input for sample i
        if yin > theta
            y = 1;
        end
        if yin <= theta && yin >= -theta
            y = 0;
        end
        if yin < -theta
            y = -1;
        end
        if y - t(i)               % nonzero error: update and keep looping
            con = 1;
            for j = 1:2
                w(j) = w(j) + alpha*t(i)*x(j, i);
            end
            b = b + alpha*t(i);
        end
    end
    epoch = epoch + 1;
end
disp(' Perceptron for OR function ');
disp(' Final Weight matrix ');
disp(w);
disp(' Final Bias ');
disp(b);

OUTPUT:


Experiment #3

Objective: Write a MATLAB Script containing four functions: Addition, Subtraction, Multiplication, Division.

Code:
p = input('ENTER NUM1 ');
q = input('ENTER NUM2 ');
fprintf("SUM OF TWO NUMBERS IS %d \n", ADD(p, q));
fprintf("SUBTRACTION OF TWO NUMBERS IS %d \n", SUB(p, q));
fprintf("DIVISION OF TWO NUMBERS IS %g \n", DIVIDE(p, q));   % %g, since a/b may be fractional
fprintf("PRODUCT OF TWO NUMBERS IS %d \n", MULTIPLY(p, q));

% Local functions must appear after all other script code (MATLAB R2016b+)
function add = ADD(a, b)
add = a + b;
end
function sub = SUB(a, b)
sub = a - b;
end
function multiply = MULTIPLY(a, b)
multiply = a * b;
end
function divide = DIVIDE(a, b)
divide = a / b;
end

OUTPUT:


Experiment #4

Objective: To implement Crab Classification using a patternnet object.

Dataset Sample:

Code:

x = x';                  % transpose so that samples are columns
t = t';                  % (needed when the imported data has samples in rows)
net = patternnet(10)     % pattern-recognition network with 10 hidden neurons
net = train(net, x, t)   % train on inputs x and targets t
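
The script assumes x and t already exist in the workspace, imported from the dataset sample shown above. If the toolbox's built-in sample data is used instead (an assumption, since the file only shows a dataset screenshot), it can be loaded directly, already arranged features-by-samples, so no transpose is needed:

[x, t] = crab_dataset;   % 6x200 physical measurements, 2x200 class targets
net = patternnet(10);
net = train(net, x, t);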

OUTPUT:


Experiment #5

Objective: To write a program to implement Wine Classification using Backpropagation.

It is a GUI process built into MATLAB; it can be started by typing nnstart in the Command Window.
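
The nnstart wizard ultimately generates an equivalent script. A minimal command-line sketch of the same experiment (assuming the toolbox's built-in wine data; the split ratios are illustrative):

% Wine classification with a pattern-recognition network
[x, t] = wine_dataset;               % 13 chemical attributes, 3 wine cultivars
net = patternnet(10);                % one hidden layer with 10 neurons
net.divideParam.trainRatio = 0.70;   % illustrative data division
net.divideParam.valRatio   = 0.15;
net.divideParam.testRatio  = 0.15;
[net, tr] = train(net, x, t);        % backpropagation-based training (trainscg by default)
plotconfusion(t, net(x))             % visualize classification accuracy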

15
Vikrant 2819044

16
Vikrant 2819044

17
Vikrant 2819044

Output:


Experiment #6

Objective: Write a program to implement classification of linearly separable Data with a perceptron.
Code:

close all; clear all; clc;
warning off;
format compact

% Number of samples of each class

N=20;

% define inputs and outputs


offset=5;
x=[randn(2,N) randn(2,N)+offset];
y=[zeros(1,N) ones(1,N)];

figure(1)
plotpv(x,y)

% Calculate and Train Perceptron

net=perceptron;
net=train(net,x,y);
view(net)


% Plot decision boundary


figure(1)
plotpc(net.IW{1}, net.b{1});   % IW and b are cell arrays, so use curly braces

Output:


Experiment #7

Objective: To study ImageNet, GoogLeNet, and ResNet Convolutional Neural Networks.

Dataset:-

Code:-

Dataset = imageDatastore('Dataset', 'IncludeSubfolders', true, ...
    'LabelSource', 'foldernames');

[Training_Dataset, Validation_Dataset] = splitEachLabel(Dataset, 0.7);  % 70/30 split per label

net = googlenet;
analyzeNetwork(net)

Input_Layer_Size = net.Layers(1).InputSize(1:2);
Resized_Training_Image = augmentedImageDatastore(Input_Layer_Size, Training_Dataset);
Resized_Validation_Image = augmentedImageDatastore(Input_Layer_Size, Validation_Dataset);

Number_of_Classes = numel(categories(Training_Dataset.Labels));

% Layers to be replaced (assumed: GoogLeNet's final fully connected and
% classification output layers, per the standard transfer-learning workflow)
Feature_Learner = net.Layers(142);       % 'loss3-classifier'
Output_Classifier = net.Layers(144);     % 'output'

New_Feature_Learner = fullyConnectedLayer(Number_of_Classes, ...
    'Name', 'Fruit Feature Learner', ...
    'WeightLearnRateFactor', 10);
New_Classifier_Layer = classificationLayer('Name', 'Fruit Classifier');

Layer_Graph = layerGraph(net);
New_Layer_Graph = replaceLayer(Layer_Graph, Feature_Learner.Name, New_Feature_Learner);
New_Layer_Graph = replaceLayer(New_Layer_Graph, Output_Classifier.Name, New_Classifier_Layer);

analyzeNetwork(New_Layer_Graph)
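
The script stops at inspecting the new layer graph; training would be the natural next step. A minimal sketch of that step (all option values are illustrative assumptions):

Training_Options = trainingOptions('sgdm', ...
    'MiniBatchSize', 32, ...
    'MaxEpochs', 5, ...
    'InitialLearnRate', 1e-4, ...
    'ValidationData', Resized_Validation_Image, ...
    'Plots', 'training-progress');
Trained_Network = trainNetwork(Resized_Training_Image, New_Layer_Graph, Training_Options);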

Output


Experiment #8

Objective: To study Convolutional Neural Network and Recurrent Neural Network.
CNN Theory:
It is assumed that the reader knows the concept of Neural Networks.
When it comes to Machine Learning, Artificial Neural Networks perform really well. Artificial Neural Networks are used in various classification tasks involving images, audio, and words. Different types of Neural Networks are used for different purposes: for example, for predicting a sequence of words we use Recurrent Neural Networks (more precisely, an LSTM); similarly, for image classification we use Convolutional Neural Networks. Here we build up the basic building blocks of a CNN.
Before diving into the Convolutional Neural Network, let us first revisit some concepts of Neural Networks.
In a regular Neural Network there are three types of layers:

1. Input Layer: the layer in which we give input to our model. The number of neurons in this layer equals the total number of features in our data (the number of pixels in the case of an image).
2. Hidden Layer: the input from the input layer is fed into the hidden layer. There can be many hidden layers, depending upon our model and data size. Each hidden layer can have a different number of neurons, generally greater than the number of features. The output of each layer is computed by matrix multiplication of the previous layer's output with that layer's learnable weights, followed by the addition of learnable biases and an activation function, which makes the network nonlinear.
3. Output Layer: the output from the hidden layers is fed into a logistic function like sigmoid or softmax, which converts the score of each class into its probability.
The data is fed into the model and the output of each layer is obtained; this step is called feedforward. We then calculate the error using an error function (common error functions are cross-entropy, squared loss, etc.). After that, we backpropagate through the model by calculating the derivatives; this step, called backpropagation, is used to minimize the loss.
Here is basic code for a neural network with random inputs and two hidden layers.

W1, W2, W3, b1, b2, b3 are the learnable parameters of the model.
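
A minimal sketch in MATLAB (layer sizes and activations chosen arbitrarily for illustration):

% Forward pass of a network with two hidden layers.
% W1, W2, W3, b1, b2, b3 are the learnable parameters.
rng(0);
x  = randn(4, 1);                   % random input with 4 features
W1 = randn(5, 4); b1 = zeros(5, 1);
W2 = randn(3, 5); b2 = zeros(3, 1);
W3 = randn(1, 3); b3 = 0;
sigmoid = @(z) 1 ./ (1 + exp(-z));  % element-wise logistic activation
h1 = sigmoid(W1*x  + b1);           % hidden layer 1
h2 = sigmoid(W2*h1 + b2);           % hidden layer 2
y  = sigmoid(W3*h2 + b3)            % output score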


Convolutional Neural Networks, or covnets, are neural networks that share their parameters. Imagine you have an image. It can be represented as a cuboid having a length and width (the dimensions of the image) and a height (as images generally have red, green, and blue channels).

Now imagine taking a small patch of this image and running a small neural network on it, with, say, k outputs, represented vertically. Now slide that neural network across the whole image; as a result, we get another image with different width, height, and depth. Instead of just the R, G, and B channels we now have more channels, but smaller width and height. This operation is called convolution. If the patch size were the same as that of the image, it would be a regular neural network. Because of this small patch, we have fewer weights.

Now let's talk about a bit of the mathematics involved in the whole convolution process.


• Convolution layers consist of a set of learnable filters (a patch in the above image). Every filter has a small width and height and the same depth as the input volume (3 if the input is an image).
• For example, if we have to run convolution on an image of dimension 34 x 34 x 3, the possible filter sizes are a x a x 3, where 'a' can be 3, 5, 7, etc., but small compared to the image dimension.
• During the forward pass, we slide each filter across the whole input volume step by step, where each step is called a stride (which can have a value of 2, 3, or even 4 for high-dimensional images), and compute the dot product between the filter weights and the patch from the input volume (see the sketch below).
• As we slide our filters we get a 2-D output for each filter; we stack them together and, as a result, get an output volume with depth equal to the number of filters. The network learns all the filters.
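
As a quick check on the arithmetic, the spatial output size of a convolution is (W - F + 2P)/S + 1 for input width W, filter size F, zero-padding P, and stride S. A minimal sketch of one filter position on the 34 x 34 x 3 example:

W = 34; F = 3; P = 0; S = 1;
out = (W - F + 2*P)/S + 1;          % 32: each 3x3x3 filter yields a 32x32 map
I     = randn(34, 34, 3);           % input volume
filt  = randn(F, F, 3);             % one learnable filter, depth matches input
patch = I(1:F, 1:F, :);             % top-left patch of the input
resp  = sum(filt(:) .* patch(:))    % one value of the 2-D activation map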

Layers used to build ConvNets:

A covnets is a sequence of layers, and every layer transforms one volume to another through a differentiable function.
Types of layers:
Let's take an example by running a covnets on an image of dimension 32 x 32 x 3.

1. Input Layer: this layer holds the raw input of the image, with width 32, height 32, and depth 3.
2. Convolution Layer: this layer computes the output volume by computing the dot product between all filters and image patches. If we use a total of 12 filters for this layer, we get an output volume of dimension 32 x 32 x 12.
3. Activation Function Layer: this layer applies an element-wise activation function to the output of the convolution layer. Common activation functions are ReLU: max(0, x), Sigmoid: 1/(1+e^-x), Tanh, Leaky ReLU, etc. The volume is unchanged, so the output volume has dimension 32 x 32 x 12.
4. Pool Layer: this layer is periodically inserted in the covnets; its main function is to reduce the size of the volume, which makes computation fast, reduces memory, and also prevents overfitting. Two common types of pooling layers are max pooling and average pooling. If we use a max pool with 2 x 2 filters and stride 2, the resultant volume is of dimension 16 x 16 x 12.


5. Fully-Connected Layer: this layer is a regular neural network layer that takes input from the previous layer and computes the class scores, outputting a 1-D array of size equal to the number of classes. (The whole stack is sketched in code below.)
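
The five layers above map directly onto MATLAB's layer objects. A minimal sketch of the 32 x 32 x 3 example (12 filters, 2 x 2 max pooling; the 10-class output is an assumption for illustration):

layers = [
    imageInputLayer([32 32 3])                    % raw 32x32x3 input
    convolution2dLayer(3, 12, 'Padding', 'same')  % 12 filters -> 32x32x12
    reluLayer                                     % element-wise max(0, x)
    maxPooling2dLayer(2, 'Stride', 2)             % 32x32x12 -> 16x16x12
    fullyConnectedLayer(10)                       % class scores (10 classes assumed)
    softmaxLayer                                  % class probabilities
    classificationLayer];                         % cross-entropy output
analyzeNetwork(layers)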

RNN Theory:-

Recurrent Neural Networks (RNNs) are a type of Neural Network where the output from the previous step is fed as input to the current step. In traditional neural networks, all the inputs and outputs are independent of each other, but in cases like predicting the next word of a sentence, the previous words are required, and hence there is a need to remember them. Thus the RNN came into existence, which solved this issue with the help of a hidden layer. The main and most important feature of an RNN is its hidden state, which remembers some information about a sequence.

An RNN has a "memory" which remembers all information about what has been calculated. It uses the same parameters for each input, as it performs the same task on all the inputs or hidden layers to produce the output. This reduces the complexity of the parameters, unlike other neural networks.

How RNN works

The working of an RNN can be understood with the help of the following example.
Example: Suppose there is a deeper network with one input layer, three hidden layers, and one output layer. Then, like other neural networks, each hidden layer has its own set of weights and biases: say (w1, b1) for hidden layer 1, (w2, b2) for the second hidden layer, and (w3, b3) for the third hidden layer. This means that each of these layers is independent of the others, i.e. they do not memorize the previous outputs.


Now the RNN will do the following:

• The RNN converts the independent activations into dependent activations by providing the same weights and biases to all the layers, thus reducing the complexity of increasing parameters, and memorizes each previous output by giving it as input to the next hidden layer.


• Hence these three layers can be joined together into a single recurrent layer, such that the weights and biases of all the hidden layers are the same.

• Formula for calculating the current state:

  h_t = f(h_(t-1), x_t)

  h_t -> current state
  h_(t-1) -> previous state
  x_t -> input state

• Formula for applying the activation function (tanh):

  h_t = tanh(w_hh * h_(t-1) + w_xh * x_t)

  w_hh -> weight at recurrent neuron
  w_xh -> weight at input neuron

• Formula for calculating the output:

  y_t = w_hy * h_t

  y_t -> output
  w_hy -> weight at output layer

Training through RNN

1. A single time step of the input is provided to the network.
2. Its current state is calculated using the current input and the previous state.
3. The current h_t becomes h_(t-1) for the next time step.
4. One can go as many time steps as the problem requires and join the information from all the previous states.
5. Once all the time steps are completed, the final current state is used to calculate the output.
6. The output is then compared to the actual output, i.e. the target output, and the error is generated.
7. The error is then backpropagated to the network to update the weights, and hence the network (RNN) is trained (a sketch follows below).
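
Steps 1-7 can be condensed into a few lines. A minimal sketch on an assumed toy task (the RNN learns to output the sum of a short input sequence, with the loss computed from the final state only, as in step 5):

% Tiny RNN trained with backpropagation through time (BPTT)
rng(0);
T = 5; H = 8;                          % time steps, hidden units
Wxh = 0.1*randn(H, 1);                 % input weights (w_xh)
Whh = 0.1*randn(H, H);                 % recurrent weights (w_hh)
Why = 0.1*randn(1, H);                 % output weights (w_hy)
lr = 0.01;                             % learning rate
for iter = 1:2000
    x = rand(T, 1); target = sum(x);   % one training sequence
    h = zeros(H, T+1);                 % h(:,1) is the initial state h0 = 0
    for t = 1:T                        % forward pass (steps 1-5)
        h(:, t+1) = tanh(Wxh*x(t) + Whh*h(:, t));
    end
    y = Why*h(:, T+1);                 % output from the final state
    e = y - target;                    % error (step 6), loss = e^2/2
    dWhy = e*h(:, T+1)';               % backpropagate through time (step 7)
    dh = Why'*e;
    dWxh = zeros(size(Wxh)); dWhh = zeros(size(Whh));
    for t = T:-1:1
        dz = dh .* (1 - h(:, t+1).^2); % through the tanh nonlinearity
        dWxh = dWxh + dz*x(t);
        dWhh = dWhh + dz*h(:, t)';
        dh = Whh'*dz;                  % pass the gradient one step back
    end
    Wxh = Wxh - lr*dWxh; Whh = Whh - lr*dWhh; Why = Why - lr*dWhy;
end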

Advantages of Recurrent Neural Networks

1. An RNN remembers every piece of information through time. This ability to remember previous inputs is what makes it useful for time-series prediction. This is called Long Short Term Memory.
2. Recurrent neural networks are even used with convolutional layers to extend the effective pixel neighborhood.

Disadvantages of Recurrent Neural Networks

1. Gradient vanishing and exploding problems.
2. Training an RNN is a very difficult task.
3. It cannot process very long sequences when using tanh or ReLU as an activation function.

