
Clustering and competitive learning
Sebastian Seung
Competitive learning

$$y_a = \begin{cases} 1, & a = \operatorname{argmin}_b \, |x - w_b| \\ 0, & \text{otherwise} \end{cases}$$

$$\Delta w_a = \eta \, y_a \, (x - w_a)$$
Move the closest weight
vector to the input vector

(Figure: weight vectors w1 and w2 competing for an input; the winner moves toward it.)
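A minimal NumPy sketch of one competitive-learning step; the 2-D toy setup and the learning rate `eta` are illustrative assumptions, not values from the slides:

```python
import numpy as np

rng = np.random.default_rng(0)

w = rng.normal(size=(2, 2))   # one row per weight vector (illustrative)
eta = 0.1                     # learning rate (illustrative)

def competitive_step(w, x, eta):
    """Find the winner a = argmin_b |x - w_b| and move it toward x."""
    a = int(np.argmin(np.linalg.norm(x - w, axis=1)))
    w[a] += eta * (x - w[a])   # only the winner updates: Δw_a = η (x - w_a)
    return a

# Stream of random inputs; each step moves only the closest weight vector.
for _ in range(100):
    competitive_step(w, rng.normal(size=2), eta)
```

Only the winning row of `w` changes per input, which is exactly the winner-take-all rule above.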
Average velocity
approximation

$$\langle \Delta w_a \rangle \propto \langle y_a x \rangle - \langle y_a \rangle \, w_a$$

Steady state:

$$w_a = \frac{\langle y_a x \rangle}{\langle y_a \rangle}$$
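A quick numerical check of the steady state, under simplifying assumptions (a single unit that always wins, so $y \equiv 1$, and an illustrative input distribution): the online rule settles near the input mean.

```python
import numpy as np

rng = np.random.default_rng(1)

# With y = 1 always, Δw = η (x - w) should converge to <x>,
# matching w = <y x>/<y>. Mean [3, -1], noise, and eta are assumptions.
w = np.zeros(2)
eta = 0.01
for x in rng.normal(loc=[3.0, -1.0], scale=0.1, size=(5000, 2)):
    w += eta * (x - w)
# w is now close to the input mean [3.0, -1.0]
```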
Clustering
Divide data vectors into clusters
Summarize each cluster by a single
prototype.
A single prototype
Summarize all data with the sample
mean.
$$\mu = \frac{1}{m} \sum_{a=1}^{m} x_a$$
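In NumPy the single-prototype summary is one line (toy data chosen for illustration):

```python
import numpy as np

# Three 2-D data vectors; the sample mean is the single prototype.
x = np.array([[0.0, 0.0], [2.0, 0.0], [1.0, 3.0]])
mu = x.mean(axis=0)   # (1/m) * sum_a x_a → [1., 1.]
```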

Multiple prototypes
Each prototype is the mean of a subset
of the data.
Divide data into k clusters.
One prototype for each cluster.
Vector quantization
Many telecom applications
Codebook of prototypes
Send index of prototype rather than
whole vector
Lossy encoding
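A sketch of the encode/decode round trip with a hypothetical 4-entry codebook (the codebook contents are made up for illustration):

```python
import numpy as np

# Hypothetical codebook of prototypes; the sender transmits only an index.
codebook = np.array([[0.0, 0.0], [0.0, 1.0], [1.0, 0.0], [1.0, 1.0]])

def encode(x, codebook):
    """Index of the nearest prototype (what gets transmitted)."""
    return int(np.argmin(np.linalg.norm(x - codebook, axis=1)))

def decode(index, codebook):
    """Reconstruct the vector from its codebook index."""
    return codebook[index]

x = np.array([0.9, 0.1])
x_hat = decode(encode(x, codebook), codebook)   # lossy reconstruction [1., 0.]
```

The reconstruction error `x - x_hat` is the price of sending an index instead of the whole vector.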
Assignment matrix
(rows indexed by cluster $\alpha$, columns by data vector $a$)

$$Y_{\alpha a} = \begin{cases} 1, & x_a \in \text{cluster } \alpha \\ 0, & \text{otherwise} \end{cases}$$
Data structure for cluster memberships.
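A sketch of building the assignment matrix from data and prototypes (nearest-prototype assignment assumed, as in the k-means step that follows):

```python
import numpy as np

def assignment_matrix(x, w):
    """Y[alpha, a] = 1 iff prototype alpha is nearest to data vector x_a."""
    m, k = x.shape[0], w.shape[0]
    # index of the nearest prototype for each data vector
    nearest = np.argmin(np.linalg.norm(x[:, None, :] - w[None, :, :], axis=2), axis=1)
    Y = np.zeros((k, m))
    Y[nearest, np.arange(m)] = 1.0
    return Y
```

Each column has exactly one 1: every data vector belongs to exactly one cluster.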
k-means algorithm
Alternate between computing means
and computing assignments.

$$w_\alpha = \frac{\sum_{a=1}^{m} Y_{\alpha a} \, x_a}{\sum_{b=1}^{m} Y_{\alpha b}} \qquad Y_{\alpha a} = \begin{cases} 1, & \alpha = \operatorname{argmin}_\beta \, |x_a - w_\beta| \\ 0, & \text{otherwise} \end{cases}$$
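A minimal sketch of the alternation; the fixed iteration count and the empty-cluster handling (keep the old prototype) are assumptions, not details from the slides:

```python
import numpy as np

def kmeans(x, w, iters=20):
    """Alternate assignments and means, starting from initial prototypes w."""
    for _ in range(iters):
        # Assignment step: alpha = argmin_beta |x_a - w_beta|
        nearest = np.argmin(np.linalg.norm(x[:, None, :] - w[None, :, :], axis=2), axis=1)
        # Mean step: each prototype becomes the mean of its cluster
        for alpha in range(w.shape[0]):
            members = x[nearest == alpha]
            if len(members):              # empty cluster: keep old prototype
                w[alpha] = members.mean(axis=0)
    return w, nearest
```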
Objective function for k-means
$$E(Y, w) = \frac{1}{2} \sum_{a=1}^{m} \sum_{\alpha=1}^{k} Y_{\alpha a} \, |x_a - w_\alpha|^2$$

(Figure: data points with prototypes w1 and w2.)
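Evaluating the objective is a direct translation of the formula (a sketch; `Y` is the assignment matrix and `w` the prototypes):

```python
import numpy as np

def kmeans_objective(Y, x, w):
    """E(Y, w) = (1/2) * sum_a sum_alpha Y[alpha, a] * |x_a - w_alpha|^2"""
    d2 = np.linalg.norm(x[None, :, :] - w[:, None, :], axis=2) ** 2   # shape (k, m)
    return 0.5 * float(np.sum(Y * d2))
```

Both k-means steps can only decrease this quantity, which is why the alternation converges (possibly to a local minimum).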
Avoiding local minima
Good initialization
Splitting
Annealing
Model selection
How to choose the number of clusters?
Tradeoff between model complexity and
objective function.
