Lecture 7: Single Layer Neural Networks

Artificial Neural Networks and Linear Problems


Introduction
Artificial neural networks are represented by a set of
nodes, often arranged in layers, and a set of weighted
directed links connecting them.
Networks vary widely depending on:
• the nature of the information processing carried out at individual nodes,
• the topology of the links, and
• the algorithm for adapting the link weights.
“The arrangement of neurons into layers and the
connection pattern within and between layers is known
as the network architecture.”
Layers of Artificial Neural Network

Input layer: the neurons in this layer receive the external
input signals and perform no computation; they simply
pass the input signals on to the neurons of the next layer.
Output layer: receives signals from the neurons of the
input or hidden layer.
Hidden layer: a layer of neurons connected between the
input and output layers; these are the processing units.
Neural networks are usually classified into single-layer
and multi-layer networks.
Single Layer Neural Network - McCulloch-Pitts
Model
Warren McCulloch and Walter Pitts presented the first
mathematical model of a single idealized biological neuron
in 1943. It is a single-layer neural network model that
requires no learning or adaptation.
A single-layer network consists of one layer of connection
weights. The net contains a layer of units, called the output
layer, from which the response of the net can be obtained.

The neurons have binary activations: if a neuron fires, its
activation is 1; otherwise its activation is 0.
[Figure: McCulloch-Pitts neuron K — inputs X1, X2, X3, …, Xn,
weighted by Wk1, Wk2, Wk3, …, Wkn (plus bias bk), are summed
to give Vk, which passes through the threshold function F(.)
to produce the output Yk.]
McCulloch-Pitts Model
Binary Threshold / Activation Function

Hard-limited:
If Vk >= 0 then Yk = 1, otherwise Yk = 0.

[Figure: step function — Yk plotted against Vk; Yk is 0 for
Vk < 0 and jumps to 1 at Vk = 0, staying at 1 as Vk → +∞.]
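The hard-limited activation above can be sketched in a few lines of Python (the function name is my own, not from the slides):

```python
def hard_limiter(vk):
    """Hard-limited (binary step) activation: the neuron fires (Yk = 1)
    when the net input Vk reaches the threshold 0, and stays silent
    (Yk = 0) otherwise."""
    return 1 if vk >= 0 else 0

print(hard_limiter(-0.5))  # 0 (below threshold, neuron does not fire)
print(hard_limiter(0.0))   # 1 (at threshold, neuron fires)
print(hard_limiter(3.0))   # 1 (above threshold, neuron fires)
```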
Example

[Figure: an input layer I with units 1, 2, 3 fully connected
to an output layer O with units 1, 2.]
Complete connection: a bipartite graph.

Example

[Figure: the same network with input values 0, 1, 1 applied
to input units 1, 2, 3; output unit 1 has threshold T1 = 1
and its output O1 = ? is to be determined.]
Complete connection: a bipartite graph.
Specification of previous example

If we use a threshold function, the output is high when the
weighted sum is greater than or equal to 0, and low otherwise.
So if we want the net output to be 0, the total weighted input
must fall below the threshold.
To achieve this, we reduce the weights of the two edges whose
inputs are 1, so that the total weighted sum decreases and we
move closer to the goal.
Limitation

The McCulloch-Pitts model can only solve problems that are
linearly separable.

Linearly separable?
Linearly separable functions are those for which the positive
and negative cases can be separated by a straight line (or
hyperplane).

Example 1: let our function be logical AND

I1  I2  O
0   0   0
0   1   0
1   0   0
1   1   1

[Figure: the four input points plotted in the (I1, I2) plane;
a single straight line separates the positive case (1,1) from
the three negative cases.]
This is linearly separable because a line (plane) can separate
the negative and positive values.
Implementation of AND gate using the
McCulloch-Pitts Model

[Figure: neuron K with inputs I1 and I2, each carrying
weight 1, summed to give Vk.]
The output of neuron K can be written as:
If Vk >= 2 then Yk = 1, otherwise Yk = 0.

a) I1 = 1, I2 = 1:
   Vk = I1 * w1 + I2 * w2 = 2, so Yk = 1
b) I1 = 1, I2 = 0:
   Vk = I1 * w1 + I2 * w2 = 1, so Yk = 0
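The AND-gate computation above can be checked with a small Python sketch of a McCulloch-Pitts neuron (the function name is my own):

```python
def mp_neuron(inputs, weights, threshold):
    """McCulloch-Pitts neuron: weighted sum Vk, then a hard threshold."""
    vk = sum(i * w for i, w in zip(inputs, weights))
    return 1 if vk >= threshold else 0

# AND gate: weights w1 = w2 = 1, threshold 2, as on the slide.
for i1 in (0, 1):
    for i2 in (0, 1):
        print(i1, i2, mp_neuron([i1, i2], [1, 1], 2))
# Only the input (1, 1) reaches Vk = 2 and fires the neuron.
```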
Example 2: let our function be logical OR

I1  I2  O
0   0   0
0   1   1
1   0   1
1   1   1

[Figure: the four input points plotted in the (I1, I2) plane;
a single straight line separates the negative case (0,0) from
the three positive cases.]
This is linearly separable because a line (plane) can separate
the negative and positive values.
Implementation of OR gate using the McCulloch-
Pitts Model

[Figure: neuron K with inputs I1 and I2, each carrying
weight 2, summed to give Vk.]
The output of neuron K can be written as:
If Vk >= 2 then Yk = 1, otherwise Yk = 0.

a) I1 = 1, I2 = 0:
   Vk = I1 * w1 + I2 * w2 = 2, so Yk = 1
b) I1 = 1, I2 = 1:
   Vk = I1 * w1 + I2 * w2 = 4, so Yk = 1
c) I1 = 0, I2 = 0:
   Vk = I1 * w1 + I2 * w2 = 0, so Yk = 0
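The same neuron sketch reproduces the OR gate with the weights from the slide (again, `mp_neuron` is my own name for the model):

```python
def mp_neuron(inputs, weights, threshold):
    """McCulloch-Pitts neuron: weighted sum Vk, then a hard threshold."""
    vk = sum(i * w for i, w in zip(inputs, weights))
    return 1 if vk >= threshold else 0

# OR gate: weights w1 = w2 = 2, threshold 2, as on the slide.
truth = [(i1, i2, mp_neuron([i1, i2], [2, 2], 2))
         for i1 in (0, 1) for i2 in (0, 1)]
print(truth)  # [(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 1)]
```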
Example 3: let our function be logical XOR

I1  I2  O
0   0   0
0   1   1
1   0   1
1   1   0

[Figure: the four input points plotted in the (I1, I2) plane;
no single straight line can separate the positive cases (0,1)
and (1,0) from the negative cases (0,0) and (1,1).]
This is not linearly separable because no single line (plane)
can separate the negative and positive values.
Comparison of linearly separable and non-
linearly separable data
A single-layer neural network is not able to compute
XOR.

This failure of the single-layer model cast doubt on
neural networks as a whole.
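The XOR limitation can be demonstrated by brute force: searching over small integer weights and thresholds (a sketch with my own function name; the range of weights searched is an illustrative assumption), no single McCulloch-Pitts neuron reproduces the XOR truth table.

```python
def mp_neuron(inputs, weights, threshold):
    """McCulloch-Pitts neuron: weighted sum Vk, then a hard threshold."""
    vk = sum(i * w for i, w in zip(inputs, weights))
    return 1 if vk >= threshold else 0

xor_table = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0}

# Exhaustively try small integer weights and thresholds.
found = False
for w1 in range(-3, 4):
    for w2 in range(-3, 4):
        for t in range(-3, 4):
            if all(mp_neuron([i1, i2], [w1, w2], t) == o
                   for (i1, i2), o in xor_table.items()):
                found = True
print(found)  # False: no single neuron computes XOR
```

This is exactly what linear non-separability means: no choice of one weight vector and one threshold can place the two positive points on one side of a line and the two negative points on the other.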