Neural Network 1
Niranjan Panda
29.10.09
Before we start…
[Figure: a biological neuron; dendrites carry the inputs, the axon carries the output]
1. Neurons (nodes)
The basic computational unit.
2. Synapses (weights)
Connection links, each characterized by a weight known as the synaptic weight.
Terminology: the biological Neuron corresponds to the artificial Node, and the biological Synapse corresponds to the Weight.
Application Areas
Neural networks are used in many areas, including:
Aerospace
Automotive
Banking
Defense
Electronics
Entertainment
Finance & Insurance
Manufacturing
Medical Applications
Oil and Gas
Robotics
Speech Processing
Telecommunication
Transportation
Example Applications
NETtalk (Sejnowski and Rosenberg, 1987)
Maps character strings (written English text) into phonemes for speech synthesis.
[Figure: nonlinear model of a neuron. Input signals x1 … xm are multiplied by synaptic weights wi1 … wim; a summing function, together with the bias b, produces the local field v; an activation function φ(·) then yields the output y.]
Contd..
The neuron is the basic information processing unit of a NN. It consists of:
1. A set of synapses, or connecting links, each link characterized by its own weight
2. An adder (summing function) that combines the weighted input signals into u
3. An activation function φ that produces the output:

y = φ(u + b)
Contd..
The bias b (a weight w0 with a fixed input of x0 = +1) has the effect of applying an affine transformation to u, i.e. it serves to increase or decrease the net input of the activation function depending on whether it is positive or negative:

u = Σ wij xj  (sum over j = 1, …, m)
v = u + b

v is called the induced local field of the neuron.
Contd..
The bias is an external parameter of the neuron. It can be modeled by adding an extra input:

x0 = +1, w0 = b
v = Σ wj xj  (sum over j = 0, …, m)

[Figure: model of a neuron with the bias folded in. The fixed input x0 = +1 carries weight w0 = b; input signals x1, x2, …, xm carry synaptic weights w1, w2, …, wm; the summing function produces v, and the activation function φ(·) yields the output y.]
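As a minimal sketch of the two equivalent formulations above (plain Python; the function names and numbers are illustrative, not from the slides):

```python
# Induced local field v of a single neuron, computed two ways:
# explicitly as v = u + b, and with the bias folded in as an
# extra weight w0 = b acting on a fixed input x0 = +1.

def induced_field(weights, inputs, bias):
    # v = sum_j(w_j * x_j) + b
    u = sum(w * x for w, x in zip(weights, inputs))
    return u + bias

def induced_field_folded(weights, inputs, bias):
    # Prepend x0 = +1 and w0 = b, then take one plain weighted sum.
    w = [bias] + list(weights)
    x = [1.0] + list(inputs)
    return sum(wj * xj for wj, xj in zip(w, x))

v1 = induced_field([0.5, -0.2], [1.0, 2.0], bias=0.3)
v2 = induced_field_folded([0.5, -0.2], [1.0, 2.0], bias=0.3)
# Both formulations give the same v = 0.5 - 0.4 + 0.3 = 0.4
# (up to floating-point rounding).
```

Folding the bias into the weight vector is what lets later learning rules treat b as just another weight to adjust.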
Activation Functions
Threshold Function:
φ(v) = 1, v ≥ 0
     = 0, v < 0

[Plot: φ(v) against v; the output steps from 0 to 1 at v = 0]
Contd..
Signum Function:
φ(v) = +1, v > θ
     = -1, v ≤ θ

[Plot: φ(v) against v; the output jumps from -1 to +1 at v = θ]
Contd..
Piecewise Linear Function:
φ(v) = 1,        v ≥ +1/2
     = v + 1/2,  -1/2 < v < +1/2  (so φ(0) = 1/2)
     = 0,        v ≤ -1/2

[Plot: φ(v) against v; the output ramps linearly from 0 to 1 between v = -1/2 and v = +1/2]
Contd..
Hyperbolic Tangent Function:
φ(v) = tanh(v)
φ(v) → +1 as v → +∞
φ(v) → -1 as v → -∞
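The four activation functions above can be sketched in plain Python (the function names are illustrative; θ defaults to 0):

```python
import math

def threshold(v):
    # φ(v) = 1 for v >= 0, else 0
    return 1 if v >= 0 else 0

def signum(v, theta=0.0):
    # φ(v) = +1 for v > θ, else -1
    return 1 if v > theta else -1

def piecewise_linear(v):
    # φ(v) = 1 above +1/2, 0 below -1/2, and a linear ramp in between
    if v >= 0.5:
        return 1.0
    if v <= -0.5:
        return 0.0
    return v + 0.5

def hyperbolic_tangent(v):
    # φ(v) = tanh(v), saturating toward +1 and -1
    return math.tanh(v)
```

For example, threshold(-0.1) is 0, while piecewise_linear(0.0) is 0.5, the midpoint of the ramp.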
Neural Network Architectures
An ANN's structure can be represented by a digraph, i.e. a graph G(V, E) with a set of vertices V and a set of edges E, where each edge is assigned an orientation.
Neural networks are classified into many types according to their learning mechanisms. In general, however, we consider the following three fundamental types of networks:
> Single Layer Feedforward Network
> Multilayer feedforward Network
> Recurrent Networks
Single Layer Feedforward Network
The nodes on the left are in the so-called input layer. The input layer neurons only pass and distribute the inputs and perform no computation. Thus, the only true layer of neurons is the one on the right.
Each of the inputs x1, x2, …, xN is connected to every artificial neuron in the output layer through a connection weight.
Since every output y1, y2, …, yN is calculated from the same set of input values, each output is varied based on its connection weights.
[Figure: input layer fully connected to the output layer]
Although the presented network is fully connected, a true biological neural network may not have all possible connections; a weight value of zero can be assigned to represent a missing connection.
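A sketch of the single-layer forward pass, with one weight row per output neuron (plain Python; the threshold activation and the sample numbers are illustrative, not from the slides):

```python
def single_layer_forward(W, b, x, phi):
    # One output neuron per row of W: y_i = phi(sum_j W[i][j] * x[j] + b[i]).
    # A zero entry in W stands for a missing connection.
    outputs = []
    for w_row, b_i in zip(W, b):
        v = sum(w * xj for w, xj in zip(w_row, x)) + b_i
        outputs.append(phi(v))
    return outputs

# Two inputs fully connected to two output neurons.
W = [[0.5, -1.0],
     [1.0,  1.0]]
b = [0.0, -1.5]
y = single_layer_forward(W, b, [1.0, 1.0], phi=lambda v: 1 if v >= 0 else 0)
# Neuron 0: v = 0.5 - 1.0 = -0.5 -> 0; neuron 1: v = 2.0 - 1.5 = 0.5 -> 1
```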
Multilayer Feedforward Network
To achieve a higher level of computational capability, a more complex neural network structure is required. The figure shows a multilayer neural network, which distinguishes itself from the single-layer network by having one or more hidden layers. In this multilayer structure, the input nodes pass the information to the units in the first hidden layer, the outputs from the first hidden layer are passed to the next layer, and so on.
A multilayer network can also be viewed as a cascade of single-layer networks.
The level of computational complexity can be seen from the fact that many single-layer networks are combined into this multilayer network.
[Figure: input layer, hidden layer, output layer]
The designer of an artificial neural network should consider how many hidden layers are required.
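Treating the multilayer network as a cascade of single-layer networks, the forward pass is just repeated application of one layer. A sketch (plain Python; the XOR weights are a hand-picked illustrative example, not from the slides):

```python
def layer_forward(W, b, x, phi):
    # One single-layer pass: y_i = phi(sum_j W[i][j] * x[j] + b[i]).
    return [phi(sum(w * xj for w, xj in zip(row, x)) + bi)
            for row, bi in zip(W, b)]

def multilayer_forward(layers, x, phi):
    # 'layers' is a list of (W, b) pairs; the output of each layer
    # becomes the input of the next, i.e. cascaded single-layer networks.
    for W, b in layers:
        x = layer_forward(W, b, x, phi)
    return x

def step(v):
    return 1 if v >= 0 else 0

# XOR requires a hidden layer; no single-layer network can compute it.
xor_net = [
    ([[1.0, 1.0], [1.0, 1.0]], [-0.5, -1.5]),  # hidden layer: OR and AND
    ([[1.0, -1.0]], [-0.5]),                   # output: OR and not AND
]
y = multilayer_forward(xor_net, [1.0, 0.0], step)  # XOR(1, 0) -> [1]
```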
Recurrent Networks
In this type of network there exists at least one feedback loop: the output layer feeds its outputs back as inputs to the same network (to arbitrary neurons, typically in the input layer).
There may also be neurons with self-feedback links.
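A minimal sketch of one recurrent time step, assuming the previous outputs are fed back as extra inputs at the next step (plain Python; the weights and the tanh activation are illustrative, not from the slides):

```python
import math

def recurrent_step(W_in, W_fb, x, y_prev):
    # Each neuron sees the external inputs x plus the outputs y_prev
    # fed back from the previous time step through weights W_fb.
    y = []
    for w_in, w_fb in zip(W_in, W_fb):
        v = (sum(w * xj for w, xj in zip(w_in, x)) +
             sum(w * yj for w, yj in zip(w_fb, y_prev)))
        y.append(math.tanh(v))
    return y

# A single neuron with a self-feedback link: its own previous
# output is one of its inputs at the next step.
y = [0.0]
for _ in range(3):
    y = recurrent_step(W_in=[[1.0]], W_fb=[[0.5]], x=[1.0], y_prev=y)
```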
Learning Algorithm
There are many learning algorithms, classified as supervised learning and unsupervised learning.
Supervised Learning – uses a set of inputs for which the appropriate (desired) outputs are known. The computed output and the correct output are compared to determine the error.
Unsupervised Learning – only input stimuli are shown to the network. The network is self-organizing: the system learns on its own by discovering and adapting to structural features in the input patterns.
2 Main Types of ANN
Supervised, e.g.:
- Adaline
- Perceptron
- MLP
- RBF
- Fuzzy ARTMAP
- etc.
Unsupervised, e.g.:
- Competitive learning networks
  - SOM
  - ART families
  - Neocognitron
- etc.
Supervised Network
[Figure: a teacher supplies the desired output; it is compared (+/-) with the ANN's output, and the resulting error signal is fed back to adjust the ANN]

Unsupervised ANN
[Figure: the ANN receives only inputs; there is no teacher and no error signal]
How does an ANN learn
[Figure: input signals enter the input layer, pass through the middle layer, and leave the output layer as output signals]
Neurons are connected by links, and each link has a numerical weight.
Weights are the basic means of long-term memory in ANNs: they express the strength of each input.
An ANN learns through repeated adjustments of these weights.
Learning Process of ANN
Learn from experience:
- Learning algorithms
- Recognize patterns of activities
Involves 3 tasks:
- Compute outputs
- Compare outputs with desired targets
- Adjust the weights and repeat the process
[Flowchart: Compute output → Is the desired output achieved? If no, adjust the weights and recompute; if yes, stop]
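The three tasks above can be sketched as a perceptron-style error-correction loop, one simple instance of supervised learning (plain Python; the learning rate, epoch count, and AND data set are illustrative, not from the slides):

```python
def train(samples, n_inputs, lr=1, epochs=10):
    # samples: list of (inputs, desired_output) pairs -- the "teacher".
    w = [0] * n_inputs
    b = 0
    for _ in range(epochs):
        for x, desired in samples:
            # Task 1: compute the output.
            y = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b >= 0 else 0
            # Task 2: compare with the desired target.
            error = desired - y
            # Task 3: adjust the weights, then repeat.
            w = [wi + lr * error * xi for wi, xi in zip(w, x)]
            b += lr * error
    return w, b

def predict(w, b, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b >= 0 else 0

# Learn logical AND (linearly separable, so one true layer suffices).
data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
w, b = train(data, n_inputs=2)
# After training, predict(w, b, x) matches the desired output for every x.
```

When the error is zero for every sample, no weight changes occur and the loop has effectively reached the "stop" box of the flowchart.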
Learning Algorithm Classification
NN learning algorithms:
- Error Correction
  - Gradient descent
    - Least Mean Square
    - Backpropagation
- Hebbian
- Competitive
- Stochastic