Neural Networks
Lecture 7: Self Organizing Maps
H. A. Talebi
Farzaneh Abdollahi
Winter 2011
Outline
- Hamming Net and Maxnet
- Unsupervised Learning of Clusters
- Self Organizing Feature Map
- LVQ
- CPN
- ART
Rare inputs have less impact on learning than frequently occurring ones.
Only the winning output node receives the reward during learning.
Sometimes the reward is also extended to the nodes in the winner's neighborhood.
Hamming Network
If the input is x = s^(m), the scalar products with the prototypes are s^(m)T s^(1), s^(m)T s^(2), ..., s^(m)T s^(m), ..., s^(m)T s^(p); only the mth product equals n, and the others are less than n.
In total we can say: the weight matrix W_H of the Hamming network is defined such that its rows are the prototypes (the scaling by 1/2 is for convenience):

W_H = 1/2 [ s_1^(1)  s_2^(1)  ...  s_n^(1)
            s_1^(2)  s_2^(2)  ...  s_n^(2)
              ...      ...    ...    ...
            s_1^(p)  s_2^(p)  ...  s_n^(p) ]
net_m = 1/2 s^(m)T x + n/2,   for m = 1, 2, ..., p

Since 0 ≤ net_m ≤ n, the scaled response satisfies 0 ≤ f(net_m) ≤ 1.
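This matching-score computation can be sketched in Python; a minimal illustration in which the function name and the tiny prototype matrix are illustrative, not from the lecture:

```python
import numpy as np

def hamming_scores(prototypes, x):
    """Matching scores net_m = (1/2) s(m)^T x + n/2 for bipolar vectors.

    Each score equals n minus the Hamming distance between x and s(m),
    so the closest prototype receives the highest score."""
    S = np.asarray(prototypes, dtype=float)  # p x n matrix, entries in {-1, +1}
    x = np.asarray(x, dtype=float)           # length-n bipolar input
    n = S.shape[1]
    return 0.5 * S @ x + n / 2

# A perfect match scores n; a complete mismatch scores 0.
S = np.array([[1, 1, -1, 1],
              [-1, -1, 1, -1]])
print(hamming_scores(S, [1, 1, -1, 1]))  # [4. 0.]
```

The scores fall in [0, n], so dividing by n gives the outputs 0 ≤ f(net_m) ≤ 1 used above.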
The MAXNET weight matrix is

W_M = [  1   -ε  ...  -ε
        -ε    1  ...  -ε
        ...  ...  ...  ...
        -ε   -ε  ...   1 ],   with 0 < ε < 1/p
Γ[·] is a nonlinear diagonal matrix operator with entries

f(net) = { 0,     net < 0
           net,   net ≥ 0
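A minimal MAXNET iteration under these definitions might look as follows; the particular ε and the stopping rule are assumptions consistent with 0 < ε < 1/p:

```python
import numpy as np

def maxnet(y0, eps=0.2, max_iter=1000):
    """Iterate y <- f(W_M y) until one positive entry (the winner) remains.

    W_M has 1 on the diagonal and -eps elsewhere; f clips negatives to 0,
    matching f(net) = 0 for net < 0 and f(net) = net otherwise."""
    y = np.asarray(y0, dtype=float)
    p = len(y)
    W = (1 + eps) * np.eye(p) - eps * np.ones((p, p))  # diag 1, off-diag -eps
    for _ in range(max_iter):
        y_new = np.maximum(W @ y, 0.0)                 # apply f elementwise
        if np.count_nonzero(y_new > 0) <= 1:
            return y_new
        y = y_new
    return y

winner = maxnet([1.217, 1.101, 0.5])  # eps = 0.2 < 1/p = 1/3
print(np.argmax(winner))              # 0
```

Each step suppresses every node by ε times the sum of its competitors' outputs, so only the largest initial response survives.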
Example: Let us design an HD (Hamming distance) classifier for the three characters C, I, and T.
So, with rows the bipolar prototypes of C, I, and T,

W_H = 1/2 [  1  1  1  1 -1 -1  1  1  1
            -1  1 -1 -1  1 -1 -1  1 -1
             1  1  1 -1  1 -1 -1  1 -1 ]

net = W_H x + [9/2, 9/2, 9/2]^T

For the MAXNET, take ε = 0.2 < 1/3.
The similarity measure is the cosine of the angle between x and x_i:

cos(x, x_i) = x^T x_i / (‖x‖ ‖x_i‖)
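A hedged one-liner for this cosine similarity measure (the function name is illustrative):

```python
import numpy as np

def cosine_similarity(x, xi):
    """cos(x, xi) = x^T xi / (||x|| ||xi||): 1 for parallel vectors,
    0 for orthogonal ones, independent of vector length."""
    return float(x @ xi / (np.linalg.norm(x) * np.linalg.norm(xi)))

print(cosine_similarity(np.array([1.0, 0.0]), np.array([0.0, 2.0])))  # 0.0
```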
The layer response is y = Γ[Wx].
If the weights are not normalized, the above statement does not always hold.
w_m^(k+1) = w_m^k + α(x − w_m^k)
w_i^(k+1) = w_i^k,   for i ≠ m
The rule should increase the mth neuron's chance of winning if the same input pattern is repeated with the updated weights.
The adjusted weight vector w′_m has length below unity, so it should be renormalized before the next step.
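One winner-take-all step, including the renormalization just described, can be sketched as follows (the function name and the activation-based winner selection are my phrasing of the rule):

```python
import numpy as np

def wta_step(W, x, alpha=0.5):
    """One winner-take-all step on unit-length weight rows.

    Only the winner (largest activation w_i^T x) moves toward x; its
    length drops below unity after the move, so it is renormalized."""
    m = int(np.argmax(W @ x))           # winner: largest response to x
    W = W.copy()
    W[m] = W[m] + alpha * (x - W[m])    # move winner toward the input
    W[m] /= np.linalg.norm(W[m])        # renormalize for the next step
    return W, m
```

With W = I and x = [1, 0], neuron 0 wins and its weight is left unchanged at [1, 0], as expected for a perfectly matched unit input.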
Both the winners' and the losers' weights are adjusted in proportion to their level of response.
The layer now performs as a filter of the input vectors, with the largest output found as y_m = max(y_1, ..., y_p).
Weight Initialization
After the weights have been trained to provide coarse clustering, the learning constant should be reduced.
Example
Consider for training the normalized patterns:
{x1, x2, x3, x4, x5} = { [0.8, 0.6]^T, [0.1736, 0.9848]^T, [0.707, 0.707]^T, [0.342, 0.9397]^T, [0.6, 0.8]^T }
Number of clusters: 2, and α = 1/2
The normalized initial weights: w1 = [1, 0]^T, w2 = [−1, 0]^T
The inputs are submitted in ascending sequence and recycled.
The first neuron won all of the first six competitions in a row, so only the first weight is updated.
After 20 steps the weights are adjusted properly and index the center of the clusters.
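The example above can be replayed numerically; a sketch in which the sign of the second initial weight is my reconstruction (the minus signs were lost in the slide text):

```python
import numpy as np

# The five unit-length training patterns from the example.
X = np.array([[0.8, 0.6], [0.1736, 0.9848], [0.707, 0.707],
              [0.342, 0.9397], [0.6, 0.8]])
W = np.array([[1.0, 0.0], [-1.0, 0.0]])  # initial weights (sign of w2 assumed)
alpha = 0.5

winners = []
for k in range(20):                      # submit x1..x5 in order and recycle
    x = X[k % len(X)]
    m = int(np.argmax(W @ x))            # winner by largest activation
    winners.append(m)
    W[m] += alpha * (x - W[m])           # move winner toward the input
    W[m] /= np.linalg.norm(W[m])         # renormalize to unit length

print(winners[:6])  # [0, 0, 0, 0, 0, 0] -- neuron 1 wins the first six
```

Every pattern lies in the first quadrant, so w2 = [−1, 0]^T never produces the largest activation and only the first weight moves, matching the slide's observation.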
Example Cont'd
In the first type, the radius of the neighborhood set decreases over time.
Example

NEURON   PATTERNS
3        C1, C2, C3
13       B1, B3, D1, D3, E1, K1, K3, E3
16       A1, A2, A3
18       J1, J2, J3
24       B2, D2, E2, K2
Example
2. Linear structure (R = 1)

NEURON   PATTERNS
6        K2
10       J1, J2, J3
14       E1, E3
16       K1, K3
18       B1, B3, D1, D3
20       C1, C2, C3
22       D2
23       B2, E2
25       A1, A2, A3
Example
3. Diamond structure (grid positions i, j = 1, ..., 5)
The clusters formed on the grid: {J1, J2, J3}, {D2}, {D1, D3}, {K2}, {E1, E3, B3}, {C1, C2, C3}, {B1}, {K1, K3}, {B2, E2}, {A3}, {A1, A2}
SOM
1. Initialization: Choose random values for the initial weight vectors Wj(0). Keep the magnitudes of the weights small and different from each other.
2. Sampling: Draw a sample x from the input space and apply it to the lattice.
3. Similarity matching: Find the winning neuron m based on the Euclidean distance criterion.
4. Updating: Adjust the weights of the winner and its neighbors based on
   Δwi(t) = α(Ni, t)[x(t) − wi(t)],   for i ∈ Nm(t)
5. Continuation: Continue with step 2 until no noticeable change in the feature map is observed.
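Steps 1-5 can be sketched as a minimal 1-D SOM; the lattice size, decay schedules, and Gaussian neighborhood shape are assumptions, since the slides do not fix them:

```python
import numpy as np

rng = np.random.default_rng(0)

def train_som(X, n_units=10, epochs=50, alpha0=0.5, radius0=3.0):
    """Minimal 1-D SOM: small random init, sample, match, update, repeat."""
    W = rng.uniform(-0.1, 0.1, size=(n_units, X.shape[1]))  # 1. small random init
    for t in range(epochs):
        alpha = alpha0 * (1 - t / epochs)              # decaying learning rate
        radius = max(radius0 * (1 - t / epochs), 0.5)  # shrinking neighborhood
        for x in rng.permutation(X):                   # 2. sampling
            m = int(np.argmin(np.linalg.norm(W - x, axis=1)))  # 3. matching
            d = np.arange(n_units) - m                 # lattice distance to winner
            h = np.exp(-(d ** 2) / (2 * radius ** 2))  # neighborhood function
            W += alpha * h[:, None] * (x - W)          # 4. update winner + neighbors
    return W                                           # 5. stop after fixed epochs
```

Because neighbors move with the winner, nearby lattice units end up representing nearby regions of the input space, which is the topology-preserving property of the feature map.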
Example
x: training vector; T: correct class of x; Cj: class represented by the winning neuron j
If T = Cj:  Wj(k+1) = Wj(k) + α(x − Wj(k))
If T ≠ Cj:  Wj(k+1) = Wj(k) − α(x − Wj(k))
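The two LVQ update cases translate directly into a single step function; a sketch with an assumed Euclidean winner selection and illustrative names:

```python
import numpy as np

def lvq_step(W, labels, x, target, alpha=0.1):
    """One LVQ step: the winning prototype moves toward x if its class
    matches the target T, and away from x otherwise."""
    j = int(np.argmin(np.linalg.norm(W - x, axis=1)))  # winning prototype
    W = W.copy()
    if labels[j] == target:       # T = Cj: reinforce the match
        W[j] += alpha * (x - W[j])
    else:                         # T != Cj: push the prototype away
        W[j] -= alpha * (x - W[j])
    return W
```

For example, with prototypes [0, 0] (class 'a') and [1, 1] (class 'b') and a sample x = [0.1, 0] of class 'a', the first prototype wins and moves to [0.01, 0]; had the sample been class 'b', it would move to [−0.01, 0].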
Consider the r pairs of training data (x1, z1^d), ..., (xr, zr^d).
The first layer is a Kohonen layer; its output is the one-hot vector of the winning neuron:

y = [0 0 ... 1 ... 0]^T   (1)
After training the first layer, p clusters are formed among the sample inputs x; each wk, k = 1, ..., p (p: number of first-layer neurons) is a representative (average) of a cluster.
<x> is the mean of all training samples that make the kth neuron win.
The network then uses the cluster region as an index to look up the function value in a table.
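The cluster-then-lookup behavior can be sketched as a forward-only CPN pass; the table values and cluster representatives below are hypothetical:

```python
import numpy as np

def cpn_forward(W_kohonen, lookup, x):
    """Forward-only CPN sketch: the Kohonen layer picks the cluster whose
    representative is closest to x; that index reads the stored function
    value out of a lookup table (one entry per cluster)."""
    k = int(np.argmin(np.linalg.norm(W_kohonen - x, axis=1)))
    return lookup[k]

# Hypothetical two-cluster table approximating some z = f(x).
W = np.array([[0.0, 0.0], [1.0, 1.0]])    # cluster representatives <x>_k
table = np.array([-1.0, 2.0])             # stored output z_k per cluster
print(cpn_forward(W, table, [0.9, 1.1]))  # 2.0
```

This makes the CPN a piecewise-constant approximator: every input in a cluster's region maps to that cluster's single stored value.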
Full CPN
Training is non-incremental: adding new samples often requires re-training the network with the enlarged training set until a new stable state is reached.
ART
ART Architecture
x: Input vector
ART1
1. Bottom-up processing
ART1
2. Top-down processing
ART1 Algorithm
1. Set vigilance threshold 0 < < 1; for n-tuple input vectors, and
(m = 1)-tuple output vectors. Input x = [xi ] is assumed binary
unipolar.
2. Initialize the Bottom-top weight matrix, Wnm , and Top-bottom
1
, vji = 1
weight matrix, Vnm : wij = 1+n
3. Hamming net-MAXNET
I
I
Pn
Find all matching score: yr0 = i=1 wir xi for r = 1, ..., m
Then find the best matching cluster, j: yj0 = maxr =1,...,m (yr0 )
1
kxk
Pn
i=1 vij xi
> ;
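A compact sketch of this loop is given below. It uses fast learning and recruits clusters on demand rather than preallocating them, and the normalization w = v/(0.5 + Σv) is an assumption consistent with the 2/11 and 2/19 values in the worked example; treat it as an illustration, not the lecture's exact procedure:

```python
import numpy as np

def art1(patterns, rho):
    """Minimal ART1 sketch: bottom-up scores pick the best cluster, the
    vigilance test accepts or disables it, and an unmatched input
    recruits a new cluster."""
    W, V, assignment = [], [], []        # bottom-up rows, top-down templates
    for x in patterns:
        x = np.asarray(x)
        disabled = set()
        while True:
            scores = [w @ x if j not in disabled else -1.0
                      for j, w in enumerate(W)]
            if not W or max(scores, default=-1.0) < 0:
                j = len(W)               # all candidates exhausted: new cluster
                V.append(x.copy())
                W.append(x / (0.5 + x.sum()))
                break
            j = int(np.argmax(scores))   # best matching cluster
            match = (V[j] * x).sum() / x.sum()  # vigilance ratio
            if match > rho:              # passes: update the templates
                V[j] = V[j] * x          # logical AND for binary x
                W[j] = V[j] / (0.5 + V[j].sum())
                break
            disabled.add(j)              # fails: disable node, search on
        assignment.append(j)
    return assignment, W, V
```

With ρ = 0.7, the patterns [1,0,1,0], [1,1,1,1], [1,0,1,0] land in clusters 0, 1, 0: the second pattern wins cluster 0's score test but fails vigilance (overlap 2/4 = 0.5), so a new cluster is recruited.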
ART1 Algorithm
If ρ is close to 1, the similarity required to join a cluster is high, so finer clusters (more of them) are formed.
Example
Present pattern A:
  Initially wij = 1/26 and vij = 1, for i = 1, ..., 25, j = 1.
  After learning: w1,1 = w7,1 = w13,1 = w19,1 = w25,1 = 2/11, other wi1 = 0
  v1,1 = v7,1 = v13,1 = v19,1 = v25,1 = 1, other vi1 = 0
Example Cont'd
Present pattern B:
  A second cluster is formed; its 9 active pixels receive wi2 = 2/19.
Present pattern C:
  y'1 = 5(2/11) + 8(1/26) = 1.217,  y'2 = 9(2/19) + 4(1/26) = 1.101
  Neuron 1 is the winner
  The vigilance test: 5/13 < 0.7, so it fails
  Node 1 is disabled; only one neuron remains
  The vigilance test for neuron 2: 9/13 < 0.7, so it fails
  Add a new neuron and update the weights accordingly
Example Cont'd
Present pattern D
Repeating this example with ρ = 0.3 classifies the patterns into only two classes.
Example Cont'd