Week 6_Lab

Introduction to Fuzzy/Neural Systems

Module 6 Exp 6: Perceptron Neural Network Learning Rules

Course Learning Outcomes:


C4. Conduct experiments using advanced tools related to fuzzy and neural
systems.
C5. Extend existing knowledge in designing a feedback fuzzy controller and
single/multilayer neural networks based on the given industrial
requirements.
C7. Interpret and evaluate through written and oral forms.
C8. Lead multiple groups and projects with decision making responsibilities.
Topics
Exp 6: Perceptron Neural Network Learning Rules

1. Aim/Objective:
State the objective of this experiment.
2. Theory:
Provide an explanation of the perceptron learning rule.
3. Procedure:
Give a step-by-step implementation of the experiment.
4. Program:
Python code to implement the perceptron learning rule.
5. Result:
State your inputs and their corresponding outputs, with figures.
6. Conclusion:
Summarize what you have learned from the experiment.
Perceptron Learning Rule
1. Initialize weights at random.
2. For each training pair/pattern (x, y_target):
- Compute the output y
- Compute the error, δ = (y_target – y)
- Use the error to update the weights as follows:
∆w = w_new – w_old = η · δ · x
or
w_new = w_old + η · δ · x
where η is called the learning rate or step size; it determines how quickly the learning
process proceeds.
3. Repeat step 2 until convergence (i.e. the error δ is zero).
The Perceptron Learning Rule is then given by
w_new = w_old + η · δ · x, where δ = (y_target – y)
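
As an illustrative sketch of this rule (not part of the module's program listing), the following NumPy snippet trains a single perceptron. The step activation, AND-gate training data, and learning rate η = 0.1 are assumptions chosen for demonstration:

import numpy as np

# Illustrative sketch of the perceptron learning rule.
# Assumed example: step activation, AND-gate data, eta = 0.1.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])   # input patterns x
y_target = np.array([0, 0, 0, 1])                # desired outputs (AND gate)

np.random.seed(0)
w = np.random.randn(2)   # 1. initialize weights at random
b = 0.0
eta = 0.1                # learning rate / step size

for epoch in range(100):                          # 3. repeat until convergence
    errors = 0
    for x, t in zip(X, y_target):
        y = 1 if np.dot(x, w) + b > 0 else 0      # compute output y
        delta = t - y                             # error: delta = y_target - y
        w = w + eta * delta * x                   # w_new = w_old + eta * delta * x
        b = b + eta * delta
        errors += abs(delta)
    if errors == 0:
        break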
Program to implement a single layer perceptron using different activation functions
Now that we have a high-level view of what is happening in the net, we can write some
code. Just a few variable notations up front:
- w: weights
- b: biases
- z: output of a neuron: x · w + b
- a: activation of z: f(z)
Activation functions
Activation functions

import numpy as np

class Relu:
    @staticmethod
    def activation(z):
        # Rectified linear unit: f(z) = max(0, z).
        # Note: this zeroes the negative entries of z in place.
        z[z < 0] = 0
        return z

class Sigmoid:
    @staticmethod
    def activation(z):
        # Logistic sigmoid: f(z) = 1 / (1 + e^(-z)), output in (0, 1).
        return 1 / (1 + np.exp(-z))
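
To see what these classes do, here is a brief illustrative usage (the sample values are assumptions, not part of the original listing):

# Illustrative usage of the activation classes (sample values assumed)
z = np.array([-1.0, 0.5, 2.0])
print(Relu.activation(z.copy()))   # [0.  0.5 2. ]  -- use a copy, since activation mutates its input
print(Sigmoid.activation(z))       # approximately [0.2689 0.6225 0.8808]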

Network class
Next is the container that will keep the hidden state and perform all the logic for the neural
network: the Network class.

class Network:
    def __init__(self, dimensions, activations):
        """
        :param dimensions: (tpl/ list) Dimensions of the neural net. (input, hidden layer, output)
        :param activations: (tpl/ list) Activation functions.
        """
        self.n_layers = len(dimensions)
        self.loss = None
        self.learning_rate = None

        # Weights and biases are initiated by index. For a one hidden layer net you will have w[1] and w[2].
        self.w = {}
        self.b = {}

        # Activations are also initiated by index. For the example we will have activations[2] and activations[3].
        self.activations = {}

        for i in range(len(dimensions) - 1):
            self.w[i + 1] = np.random.randn(dimensions[i], dimensions[i + 1]) / np.sqrt(dimensions[i])
            self.b[i + 1] = np.zeros(dimensions[i + 1])
            self.activations[i + 2] = activations[i]

np.random.seed(1)
nn = Network((2, 3, 1), (Relu, Sigmoid))
If we print the weights and biases we will receive the following dictionaries. As noted
above, every input neuron is connected to all the neurons of the next layer,
resulting in 2 * 3 = 6 weights in the first layer and 3 * 1 = 3 weights in the second
layer. Because we’ve chosen to add the biases after the summation has taken place, we only
need as many biases as there are nodes in the next layer.
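
For instance, the state below can be produced with an illustrative print (not in the original listing):

# Print the initialized parameters (illustrative; not in the original listing)
print('Weights:', nn.w)
print('Biases:', nn.b)
print('Activation classes:', nn.activations)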
The following snippet shows the internal state of the network. The keys
represent the layers; the values represent the weights, biases and activation classes.

Weights:
{
    1: array([[ 1.14858562, -0.43257711, -0.37347383],
              [-0.75870339,  0.6119356 , -1.62743362]]),
    2: array([[ 1.00736754],
              [-0.43948301],
              [ 0.18419731]])
}

Biases:
{
    1: array([ 0.,  0.,  0.]),
    2: array([ 0.])
}

Activation classes:
{
    2: <class '__main__.Relu'>,
    3: <class '__main__.Sigmoid'>
}

Feeding forward
Now we are going to implement the forward pass. The forward pass is how the network
generates output. The inputs that are fed into the network are multiplied with the weights
and shifted along the biases of every layer (passing through the Relu function in the hidden
layer) and finally they pass through the sigmoid function, resulting in an output between 0
and 1. The mathematics of the forward pass are the same for every layer. The only variable
is the activation function f(z).

a_(i-1) · w_i + b_i = z_i    (1.3.0)
f(z_i) = a_i                 (1.3.1)

However, algorithmically we have abstracted the activation function with activation classes.
All the activation classes have the .activation() method. This means that we can loop
over all the layers doing the same mathematical operation, and finally call the varying
activation function via the .activation() method. We add a ._feed_forward() method to
the Network class.
    def _feed_forward(self, x):
        """
        Execute a forward feed through the network.

        :param x: (array) Batch of input data vectors.
        :return: (tpl) Node outputs and activations per layer.
                 The numbering of the output is equivalent to the layer numbers.
        """
        # w(x) + b
        z = {}

        # activations: f(z)
        a = {1: x}  # First layer has no activations as input. The input x is the input.

        for i in range(1, self.n_layers):
            # current layer = i
            # activation layer = i + 1
            z[i + 1] = np.dot(a[i], self.w[i]) + self.b[i]
            a[i + 1] = self.activations[i + 1].activation(z[i + 1])

        return z, a
We create two new dictionaries in the function, z and a. In these dictionaries we store
the outputs of every layer, so the keys of the dictionaries again map to the layers of the
neural network. Note that the first layer has no ‘real’ activations as it is the input layer;
here we consider the inputs x as the activations of the previous layer.
The dictionaries' structure looks like this.
a:
{
    1: "inputs x",
    2: "activations of the relu function in the hidden layer",
    3: "activations of the sigmoid function in the output layer"
}

z:
{
    2: "z values of the hidden layer",
    3: "z values of the output layer"
}

The last thing we need is a .predict() method.

    def predict(self, x):
        """
        :param x: (array) Batch of input data vectors.
        :return: (array) A 2D array of shape (n_cases, n_classes).
        """
        _, a = self._feed_forward(x)
        return a[self.n_layers]
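
As an illustration (a hypothetical call; the input values are assumptions), two input vectors can be fed through the (2, 3, 1) network defined above:

# Hypothetical usage of the network defined above (input values assumed)
x = np.array([[0.5, 0.1],
              [0.9, 0.8]])
print(nn.predict(x))   # a (2, 1) array of sigmoid outputs between 0 and 1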

Exercise 1.
Lab Activity: Simulation
Design and develop the neural network system for the following experiment (a starting
sketch is given after the list).
Experiment 1: Perceptron learning
1. Design and train a neural network system which can perform the NAND operation
using the logistic sigmoidal activation function.
2. Tune the neural network model, minimize the error by updating the
weights, and perform the testing.
3. Run the simulation in groups and explain the working principles of the
algorithm.
4. Interpret the output of the designed neural network system by varying the
inputs.
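
As a starting point for Experiment 1, here is a minimal sketch, not a complete solution: a single sigmoid neuron trained by gradient descent on the NAND truth table. The learning rate (0.5) and epoch count (5000) are assumed values to be tuned in step 2:

# Minimal sketch for Experiment 1: a single sigmoid neuron learning NAND.
# The learning rate (0.5) and epoch count (5000) are assumed values.
import numpy as np

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y_target = np.array([1, 1, 1, 0])   # NAND truth table

np.random.seed(1)
w = np.random.randn(2)
b = 0.0
eta = 0.5

for epoch in range(5000):
    z = X.dot(w) + b
    y = 1 / (1 + np.exp(-z))        # logistic sigmoid activation
    delta = y_target - y            # error per pattern
    # Gradient of the squared error through the sigmoid
    grad = delta * y * (1 - y)
    w += eta * X.T.dot(grad)        # update weights
    b += eta * grad.sum()           # update bias

print(np.round(y, 3))   # should approach [1, 1, 1, 0]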

References and Supplementary Materials


Books and Journals

1. Van Rossum, G. (2007). Python Programming Language. In USENIX Annual Technical
Conference (Vol. 41, p. 36).
2. SN, S. (2003). Introduction to Artificial Neural Networks.
3. Rashid, T. (2016). Make Your Own Neural Network. CreateSpace Independent Publishing
Platform.

Online Supplementary Reading Materials

1. Chaitanya Singh. (n.d.). How to Install Python. Retrieved 14 May 2020, from
https://beginnersbook.com/2018/01/python-installation/