Intro Neuro Project Final


INTRODUCTION

HEBB NETWORK
The Hebb rule, proposed by Donald Hebb in 1949, was one of the first neural network learning
laws. Linear algebra concepts can be used to explain why Hebbian learning works. The Hebb rule
can be used to train neural networks for pattern recognition, and we take advantage of this
feature to develop our project.
Hebbian learning
Hebb's rule states:
"When neuron A repeatedly and persistently takes part in exciting neuron B, the synaptic
connection from A to B will be strengthened."
From the point of view of artificial neural networks, Hebb's principle can be described as a method
of determining how to alter the weights between model neurons: the weight between two
neurons increases if the two neurons activate simultaneously and decreases if they activate
separately.
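As a rough illustration of this principle (not part of the original text), the following Python sketch updates a single weight between two model neurons; the bipolar activity values and the learning rate are arbitrary assumptions.

# Hebbian update for one weight w from neuron A to neuron B.
w = 0.0
lr = 0.5  # learning rate (arbitrary assumption)

# Bipolar activities (+1 = active, -1 = inactive): the product rule strengthens
# w when A and B fire together and weakens it when only one of them fires.
for a_act, b_act in [(+1, +1), (+1, +1), (+1, -1)]:
    w += lr * a_act * b_act

print(w)  # 0.5: two co-activations strengthen w, one mismatch weakens it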
Linear Associator
[Figure: Linear associator network. Input p (R x 1) is multiplied by the weight matrix W (S x R) to give the net input n = Wp and the output a (both S x 1).]

The linear associator is a type of neural network called an associative memory. The task of an
associative memory is to learn Q pairs of prototype input/output vectors:

$\{p_1, t_1\},\ \{p_2, t_2\},\ \ldots,\ \{p_Q, t_Q\}$.
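As a minimal sketch (with assumed dimensions and weight values), the forward computation of the linear associator in NumPy is just a matrix-vector product:

import numpy as np

# Assumed dimensions: R = 3 inputs, S = 2 outputs; arbitrary weight values.
W = np.array([[0.5, 0.0, 1.0],
              [0.0, 2.0, 0.0]])   # S x R weight matrix
p = np.array([1.0, 1.0, 0.0])     # R x 1 input prototype

a = W @ p                         # linear associator output a = Wp
print(a)                          # [0.5 2. ]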
Unsupervised learning rule

$W^{new} = W^{old} + a\,p^{T}$

Supervised learning rule

$W^{new} = W^{old} + t\,p^{T}$
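A small NumPy sketch of both updates under assumed toy vectors (a single presentation of one input/output pair):

import numpy as np

p = np.array([1.0, -1.0, 1.0])   # input vector (assumed)
t = np.array([1.0, 1.0])         # target vector for the supervised case (assumed)

W = np.zeros((t.size, p.size))

# Supervised Hebb rule: W_new = W_old + t p^T
W = W + np.outer(t, p)

# Unsupervised Hebb rule uses the network's own output a = Wp instead of t.
a = W @ p
W = W + np.outer(a, p)

print(W)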
Supervised Hebb Rule
Assume that the weight matrix is initialized to zero and that each of the Q input/output pairs is
applied once with the supervised Hebb rule. The resulting weight matrix is

$W = t_1 p_1^{T} + t_2 p_2^{T} + \cdots + t_Q p_Q^{T} = \sum_{q=1}^{Q} t_q p_q^{T}$,

which can be written in matrix form as

$W = T P^{T}$, where $T = [\,t_1\ t_2\ \cdots\ t_Q\,]$ and $P = [\,p_1\ p_2\ \cdots\ p_Q\,]$.
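A short NumPy check, with assumed prototype pairs, that the sum of outer products equals the matrix form T P^T:

import numpy as np

# Assumed prototype pairs stored as columns: P is R x Q, T is S x Q.
P = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [0.0, 0.0]])
T = np.array([[1.0, -1.0],
              [0.0,  1.0]])

# Sum of outer products t_q p_q^T, starting from W = 0.
W_sum = sum(np.outer(T[:, q], P[:, q]) for q in range(P.shape[1]))

# Matrix form of the same rule.
W = T @ P.T

print(np.allclose(W, W_sum))     # True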
There are two cases:
- CASE 1: The prototype vectors are orthonormal
Assume that the $p_q$ vectors are orthonormal (orthogonal and of unit length), so that

$p_q^{T} p_k = \begin{cases} 1, & q = k \\ 0, & q \neq k. \end{cases}$

If $p_k$ is input to the network, the network output can be computed as

$a = W p_k = \left( \sum_{q=1}^{Q} t_q p_q^{T} \right) p_k = \sum_{q=1}^{Q} t_q \,(p_q^{T} p_k) = t_k$.

If the input prototype vectors are orthonormal, the Hebb rule produces the correct output for
each input (see the numerical sketch after the two cases).
- CASE 2: The prototype vectors are not orthonormal
Assume that each $p_q$ vector is of unit length but that the vectors are not orthogonal. Then

$a = W p_k = t_k + \sum_{q \neq k} t_q \,(p_q^{T} p_k)$,

where the summation is the error term. The magnitude of the error will depend on the amount of
correlation between the prototype input patterns.
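The sketch below (with assumed toy vectors) illustrates both cases: orthonormal prototypes are recalled exactly, while unit-length but correlated prototypes produce the crosstalk error term.

import numpy as np

T = np.array([[1.0, -1.0],
              [1.0,  1.0]])                  # targets t_1, t_2 as columns (assumed)

# Case 1: orthonormal prototypes (standard basis vectors).
P1 = np.array([[1.0, 0.0],
               [0.0, 1.0],
               [0.0, 0.0]])
W1 = T @ P1.T
print(np.allclose(W1 @ P1[:, 0], T[:, 0]))   # True: exact recall of t_1

# Case 2: unit-length but correlated prototypes (p_1^T p_2 = 0.8).
P2 = np.array([[1.0, 0.8],
               [0.0, 0.6],
               [0.0, 0.0]])
W2 = T @ P2.T
error = W2 @ P2[:, 0] - T[:, 0]              # crosstalk term
print(error)                                 # equals (p_2^T p_1) * t_2 = 0.8 * t_2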
Pseudoinverse Rule
When the input vectors are not orthogonal and we use the Hebb rule, the performance index

$F(W) = \sum_{q=1}^{Q} \| t_q - W p_q \|^2$

will not be zero, and it is not clear that F(W) will be minimized.

If the P matrix has an inverse, the solution is

$W = T P^{-1}$.
However, the P matrix has an inverse only if it is square. Normally the $p_q$ vectors (the columns
of P) will be independent, but R (the dimension of $p_q$, i.e. the number of rows of P) will be
larger than Q (the number of $p_q$ vectors, i.e. the number of columns of P), so P has no inverse.
The weight matrix W that minimizes the performance index is given by the pseudoinverse rule

$W = T P^{+}$,

where $P^{+}$ is the Moore-Penrose pseudoinverse.
The pseudoinverse of a real matrix P is the unique matrix that satisfies

$P P^{+} P = P,\; P^{+} P P^{+} = P^{+},\; (P^{+} P)^{T} = P^{+} P,\; (P P^{+})^{T} = P P^{+}$.
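A brief sketch comparing the two rules on the same correlated prototypes (assumed values); numpy.linalg.pinv computes the Moore-Penrose pseudoinverse:

import numpy as np

# Unit-length but non-orthogonal prototypes (assumed), R = 3, Q = 2.
P = np.array([[1.0, 0.8],
              [0.0, 0.6],
              [0.0, 0.0]])
T = np.array([[1.0, -1.0],
              [1.0,  1.0]])

W_hebb = T @ P.T                      # Hebb rule
W_pinv = T @ np.linalg.pinv(P)        # pseudoinverse rule: W = T P^+

print(np.allclose(W_hebb @ P, T))     # False: crosstalk error remains
print(np.allclose(W_pinv @ P, T))     # True: every prototype is reproduced exactly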
Autoassociative Memory
The linear associator using the Hebb rule is a type of associative memory ($t_q \neq p_q$). In an
autoassociative memory the desired output vector is equal to the input vector ($t_q = p_q$).
An autoassociative memory can be used to store a set of patterns and then to recall these
patterns, even when corrupted patterns are provided as input.
[Figure: Autoassociative network with input p, weight matrix W, net input n, and output a.]
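A small sketch (with assumed bipolar patterns and a hard-limit output, both assumptions here) of storing two patterns with the Hebb rule and recalling one of them from a corrupted input:

import numpy as np

# Two bipolar prototype patterns stored as the columns of P (assumed values).
P = np.array([[ 1, -1],
              [ 1,  1],
              [-1,  1],
              [ 1, -1],
              [-1, -1],
              [ 1,  1]], dtype=float)

W = P @ P.T                            # autoassociative Hebb rule: t_q = p_q

corrupted = P[:, 0].copy()
corrupted[0] = -corrupted[0]           # flip one element of the first pattern

recalled = np.sign(W @ corrupted)      # hard-limit output recovers the pattern
print(np.array_equal(recalled, P[:, 0]))   # True for this example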
