Neural Network 1&2
Intelligent control systems excel in domains that are highly non-linear, where
classical control systems fail, or where a model of the system is difficult or impossible to
obtain.
Static and dynamical systems
• Fault tolerance
• Adapting to circumstances
Neural networks to the rescue
(ii) NNs learn from examples. NN architectures can be 'trained' with known examples of a
problem before they are tested for their capability on unknown instances of the
problem.
(iii) NNs possess the capability to generalize. They can predict new outcomes from past
trends.
(iv) NNs are robust systems and are fault tolerant. They can, therefore, recall complete
patterns from incomplete, partial, or noisy inputs.
(v) NNs can process information in parallel, at high speed, and in a distributed manner.
ADVANTAGES
1. ANNs are not programmed; they learn from examples, with no prior knowledge required.
2. ANNs are very robust in nature and can operate even if portions of the input data
are incorrect.
3. The network can see through noise and distortion to obtain the true essence of the
real-world environment being viewed.
4. ANNs can solve problems involving the mapping of input-output data.
DISADVANTAGES
1. Learning takes time. Depending on the complexity of the ANN and the quantity of
classification data (i.e., training sets), the learning process can take hours, days, or even
weeks.
2. Even though the ANNs can see through noise and distortions most of the time, there
will be cases where the network is tricked or sees an "optical illusion".
3. Being good at making generalizations and reaching new conclusions
does not lead to being precise or logical. Consider the following problem:
2.14325 + 3.25617 = 5.39942
The ANN would conclude that adding the two numbers together will result in a
number that is probably very close to 5.4. Now looking at a logical problem:
IF a = b AND b = c THEN a = c
The neural network would conclude that a is equal to a term that is probably very
close to c.
Where are NNs used?
• Recognition
– Pattern recognition: SNOOPE (bomb detector in
U.S. airports)
– Character recognition
– Handwriting: processing checks
• Data association
– Not only identify the characters that were scanned,
but also detect when the scanner is not working
properly
Applications
• Data Conceptualization
– infer grouping relationships
e.g. extract from a database the names of those most
likely to buy a particular product.
• Data Filtering
e.g. take the noise out of a telephone signal, signal
smoothing
• Planning
– Unknown environments
– Sensor data is noisy
– Fairly new approach to planning
• Modelling and Identification
• Direct and Indirect Adaptive Control
Strengths of a Neural Network
2. Stochastic Learning:
In this method, weights are adjusted in a probabilistic fashion. An example is
simulated annealing, the learning mechanism employed by Boltzmann and
Cauchy machines, which are a kind of NN system.
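The annealing idea can be sketched on a toy problem (a pure-Python sketch; the quadratic error surface, cooling schedule, and Gaussian step distribution are illustrative assumptions, not a Boltzmann machine):

```python
import math
import random

# Stochastic weight adjustment by simulated annealing on a toy 1-D error
# surface. Worse moves are accepted with probability exp(-delta / temp),
# which shrinks as the temperature cools.

def anneal(error, w=0.0, temp=10.0, cooling=0.95, steps=200, seed=0):
    rng = random.Random(seed)
    for _ in range(steps):
        candidate = w + rng.gauss(0, 1)       # random weight perturbation
        delta = error(candidate) - error(w)
        if delta < 0 or rng.random() < math.exp(-delta / temp):
            w = candidate                     # probabilistic acceptance
        temp *= cooling                       # cool the temperature
    return w

w_star = anneal(lambda w: (w - 3.0) ** 2)     # error minimum at w = 3
print(w_star)
```

Early on, high temperature lets the search escape poor regions; as the temperature drops, the process becomes greedy and settles near the minimum.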
Neural Network Learning Process
Unsupervised Learning
It uses no external teacher and is based upon only local information. It is also
referred to as self-organization, in the sense that it self-organizes data presented to the
network and detects their emergent collective properties. Paradigms of unsupervised
learning are Hebbian learning and competitive learning. A neural network learns off-line
if the learning phase and the operation phase are distinct. A neural network learns on-line
if it learns and operates at the same time. Usually, supervised learning is performed
off-line, whereas unsupervised learning is performed on-line.
Hebbian Learning Rule
For the Hebbian learning rule the learning signal is equal simply to the
neuron's output. We have
r = f(wᵀx)
[Figure: a single neuron with inputs x1, x2, x3, x4, weights w1, w2, w3, w4, and output o.]
The initial weights are
w = [1, -1, 0, 0.5]ᵀ
This example illustrates Hebbian learning with binary and continuous
activation functions in a very simple network. Assume the network shown in the figure above, with
the initial weight vector
w = [1, -1, 0, 0.5]ᵀ
and an arbitrary choice of learning constant c = 1. Since the initial weights are of nonzero
value, the network has apparently been trained beforehand. Assume that f(net) = sgn(net).
Step 1: Input x1 = [1, -2, 1.5, 0]ᵀ applied to the network results in activation net1 as below:
net1 = wᵀx1 = (1)(1) + (-1)(-2) + (0)(1.5) + (0.5)(0) = 3
The updated weights are
wnew = wold + sgn(net1) x1 = w + x1 = [1, -1, 0, 0.5]ᵀ + [1, -2, 1.5, 0]ᵀ = [2, -3, 1.5, 0.5]ᵀ
Step 2: This learning step is with x2 = [1, -0.5, -2, -1.5]ᵀ as input:
net2 = wᵀx2 = (2)(1) + (-3)(-0.5) + (1.5)(-2) + (0.5)(-1.5) = -0.25
The updated weights are
wnew = wold + sgn(net2) x2 = w - x2
and plugging in numerical values we obtain
wnew = [2, -3, 1.5, 0.5]ᵀ - [1, -0.5, -2, -1.5]ᵀ = [1, -2.5, 3.5, 2]ᵀ
Step 3: This learning step is with x3 = [0, 1, -1, 1.5]ᵀ as input:
net3 = wᵀx3 = (1)(0) + (-2.5)(1) + (3.5)(-1) + (2)(1.5) = -3
The updated weights are
wnew = wold + sgn(net3) x3 = w - x3 = [1, -2.5, 3.5, 2]ᵀ - [0, 1, -1, 1.5]ᵀ = [1, -3.5, 4.5, 0.5]ᵀ
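The three steps above can be checked with a short script (a NumPy sketch of the update w ← w + c sgn(wᵀx) x; treating sgn(0) as +1 is an assumption, though no net value here is zero):

```python
import numpy as np

# Hebbian learning with a bipolar sign activation, reproducing the worked
# example: w <- w + c * sgn(w.T x) * x, with learning constant c = 1.

def sgn(net):
    return 1.0 if net >= 0 else -1.0

w = np.array([1.0, -1.0, 0.0, 0.5])           # initial weight vector
inputs = [
    np.array([1.0, -2.0,  1.5,  0.0]),        # x1
    np.array([1.0, -0.5, -2.0, -1.5]),        # x2
    np.array([0.0,  1.0, -1.0,  1.5]),        # x3
]
c = 1.0                                       # learning constant

for i, x in enumerate(inputs, start=1):
    net = float(w @ x)                        # neuron activation
    w = w + c * sgn(net) * x                  # Hebbian weight update
    print(f"step {i}: net = {net:g}, w = {w}")
```

The printed activations are 3, -0.25, and -3, and the final weight vector matches Step 3: [1, -3.5, 4.5, 0.5]ᵀ.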
Competitive Learning
In this method, those neurons which respond strongly to the input stimuli have their
weights updated. When an input pattern is presented, all neurons in the layer compete,
and the winning neuron undergoes weight adjustment. Hence, it is a winner-takes-all
strategy. Only a single output neuron is active at any one time. Let wji
denote the synaptic weight connecting input node i to neuron j.
Each neuron is allotted a fixed amount of weight, which is distributed among its input
nodes:
Σi wji = 1
A neuron learns by shifting synaptic weights from its inactive to its active input nodes. If a
neuron does not respond to a particular input pattern, no learning takes place in that
neuron. If a particular neuron wins the competition, then each input node of that neuron
relinquishes some proportion of its synaptic weight, and the weight relinquished is then
distributed equally among the active input nodes.
According to this rule:
Δwji = α (xi / m - wji) if neuron j wins the competition, and Δwji = 0 otherwise,
where xi is the binary input at node i, m is the number of active input nodes, and α is the learning rate.
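The shift of weight toward the active input nodes can be sketched as follows (a NumPy sketch; the learning rate alpha, the random initialization, and the binary input pattern are illustrative assumptions):

```python
import numpy as np

# Winner-take-all competitive learning step: the winning neuron moves weight
# from its inactive input nodes to its active ones, keeping its total weight
# fixed at 1.

def competitive_step(W, x, alpha=0.1):
    """W: (neurons, inputs) weights with each row summing to 1; x: binary pattern."""
    winner = int(np.argmax(W @ x))            # neuron responding most strongly
    m = x.sum()                               # number of active input nodes
    # Each weight gives up a fraction alpha; the relinquished amount (alpha * 1)
    # is redistributed equally among the m active input nodes.
    W[winner] += alpha * (x / m - W[winner])
    return winner

rng = np.random.default_rng(0)
W = rng.random((3, 4))
W /= W.sum(axis=1, keepdims=True)             # each neuron starts with total weight 1
x = np.array([1.0, 0.0, 1.0, 0.0])            # two active input nodes
winner = competitive_step(W, x)
print(winner, W.sum(axis=1))                  # row sums remain 1 after learning
```

Because the amount relinquished equals the amount redistributed, each neuron's allotted total weight is conserved across updates.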
Pattern Recognition Example
Each fruit is represented by a three-dimensional vector:
p = [shape, texture, weight]ᵀ
Prototype orange is represented by:
p1 = [1, -1, -1]ᵀ
Prototype apple is represented by:
p2 = [1, 1, -1]ᵀ
The neural network receives one three-dimensional input vector for each fruit on
the conveyor and must make a decision.
Solution
[Figure: the network's decision for each prototype input, labelled (Orange) and (Apple).]
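A minimal way to make the decision is to assign the input to the prototype with the largest inner product (a sketch; this nearest-prototype rule stands in for the trained network, and the noisy test vector is an illustrative assumption):

```python
import numpy as np

# Nearest-prototype classifier for the fruit example: each fruit is a vector
# [shape, texture, weight] with entries in {-1, +1}; the input is assigned to
# the prototype with the largest inner product (fewest disagreeing entries).

prototypes = {
    "orange": np.array([1, -1, -1]),   # p1
    "apple":  np.array([1,  1, -1]),   # p2
}

def classify(p):
    return max(prototypes, key=lambda name: int(prototypes[name] @ p))

print(classify(np.array([1, -1, -1])))   # an ideal orange -> orange
print(classify(np.array([1,  1,  1])))   # apple with its weight entry corrupted -> apple
```

The corrupted apple still matches p2 better than p1, illustrating the robustness to noisy inputs claimed earlier.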
Memory Networks
• These kinds of neural networks work on the basis of pattern
association, which means they can store different patterns,
and at the time of giving an output they can produce one of
the stored patterns by matching it with the given input
pattern. These types of memories are also
called Content-Addressable Memory (CAM). An associative
memory performs a parallel search over the stored patterns.
Following are the two types of associative memory we can
observe:
• Auto Associative Memory
• Hetero Associative memory
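A minimal sketch of content-addressable recall using a Hebbian outer-product store (the bipolar patterns, chosen orthogonal so recall is exact, and the thresholded readout are illustrative assumptions):

```python
import numpy as np

# Auto-associative memory: bipolar patterns are stored with a Hebbian
# outer-product rule, W = sum of p p^T, and recalled by thresholding W x.

patterns = np.array([
    [1, -1,  1, -1, 1, -1],
    [1,  1, -1, -1, 1,  1],
])

W = sum(np.outer(p, p) for p in patterns)     # storage (one outer product per pattern)

def recall(x):
    return np.sign(W @ x)                     # threshold back to {-1, +1}

noisy = patterns[0].copy()
noisy[0] = -noisy[0]                          # corrupt one component of pattern 0
print(recall(noisy))                          # the stored pattern is recovered
```

Presenting a corrupted version of a stored pattern pulls the output back to the clean pattern, which is exactly the content-addressable behavior described above.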
Auto Associative Memory
2. Testing Algorithm