AI 2021 Lecture 2
• There are also areas performing specialized tasks, e.g. speech control and
analysis of sensory signals (visual, auditory, somatosensory, etc.)
What is Self-Organization?
The process of self-organization can also be found in many other fields, such as
economics, sociology, medicine, and technology.
Motivation: The Problem Statement
[Figure: input patterns (Input Pattern 2, Input Pattern 3, ...) mapped onto a
semantic map of topics (Topic 1, Topic 2, Topic 3); example application: the
phonetic typewriter]
• The SOM algorithm is based on unsupervised, competitive learning.
• It provides a topology preserving mapping from the high dimensional space to map
units.
• Map units, or neurons, usually form a two-dimensional lattice and thus the mapping is
a mapping from high dimensional space onto a plane.
• The property of topology preserving means that the mapping preserves the relative
distance between the points. Points that are near each other in the input space are
mapped to nearby map units in the SOM.
• The SOM can thus serve as a cluster-analysis tool for high-dimensional data.
• Also, the SOM has the capability to generalize. Generalization capability means that
the network can recognize or characterize inputs it has never encountered before. A
new input is assimilated with the map unit it is mapped to.
(http://users.ics.aalto.fi/jhollmen/dippa/node9.html)
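The behaviour described in the bullets above (competitive learning, a 2-D lattice of units, topology preservation) can be sketched as a short NumPy training loop. The map size, the linear decay schedules, and the Gaussian neighborhood function below are illustrative choices, not values fixed by the slides:

```python
import numpy as np

def best_matching_unit(weights, x):
    # Euclidean distance from input x to every unit's weight vector;
    # the winner is the unit with the smallest distance.
    dists = np.linalg.norm(weights - x, axis=-1)
    return np.unravel_index(np.argmin(dists), dists.shape)

def train_som(data, rows=5, cols=5, epochs=100, lr0=0.5, radius0=2.5, seed=0):
    rng = np.random.default_rng(seed)
    dim = data.shape[1]
    weights = rng.random((rows, cols, dim))
    # Lattice coordinates of every unit, used by the neighborhood function.
    coords = np.stack(
        np.meshgrid(np.arange(rows), np.arange(cols), indexing="ij"), axis=-1
    )
    for t in range(epochs):
        frac = t / epochs
        lr = lr0 * (1.0 - frac)                    # decaying learning rate
        radius = max(radius0 * (1.0 - frac), 0.5)  # decaying neighborhood radius
        for x in data:
            bmu = best_matching_unit(weights, x)
            # Distance of every unit from the winner on the 2-D lattice.
            lattice_dist = np.linalg.norm(coords - np.array(bmu), axis=-1)
            # Gaussian neighborhood: units near the BMU move most.
            h = np.exp(-lattice_dist**2 / (2 * radius**2))
            weights += lr * h[..., None] * (x - weights)
    return weights
```

Because neighboring units are pulled toward the same inputs, nearby points in the input space end up mapped to nearby lattice units, which is the topology-preserving property described above.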
SOM videos:
https://www.youtube.com/watch?v=g8O6e9C_CfY
SOM algorithm – weight adaptation
16 September 2021
Self-Organizing Maps
A Text Example:
Self-organizing maps (SOMs) are a data visualization technique invented by
Professor Teuvo Kohonen which reduces the dimensions of data through the use
of self-organizing neural networks. The problem that data visualization
attempts to solve is that humans simply cannot visualize high-dimensional
data as is, so techniques are created to help us understand this
high-dimensional data.

Word frequencies in the text:
Self-organizing  2
maps             1
data             4
visualization    2
technique        2
Professor        1
invented         1
Teuvo Kohonen    1
dimensions       1
...
Zebra            0
(iii) Find the best-matching neuron w(x), usually the neuron whose weight vector
has the smallest Euclidean distance from the input vector x
The winning node is the one that is, in some sense, 'closest' to the input vector
'Euclidean distance' is the straight-line distance between the data points, as if
they were plotted on a (multi-dimensional) graph
Euclidean distance between two vectors a = (a1, a2, ..., an) and
b = (b1, b2, ..., bn) is calculated as:
d(a, b) = sqrt( Σ_i (a_i − b_i)^2 )
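The distance computation and the winner search can be sketched in a few lines. The unit names and coordinates below are borrowed from the worked example on the slides; the query point is illustrative:

```python
import math

def euclidean(a, b):
    # Square root of the sum of squared coordinate differences.
    return math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a, b)))

# Coordinates from the slide's example (Mouse, Lion, Dove, ...).
mouse, lion, dove = (0, 0), (1, 0), (0, 2)
print(euclidean(mouse, lion))   # 1.0
print(euclidean(mouse, dove))   # 2.0

def winner(units, x):
    # Best-matching unit: the one with the smallest Euclidean distance to x.
    return min(units, key=lambda name: euclidean(units[name], x))

units = {"Mouse": mouse, "Lion": lion, "Dove": dove}
print(winner(units, (0.4, 0.1)))  # Mouse
```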
Euclidean distance
[Figure: worked example of weight adaptation — map units for Mouse (0/0),
Lion (1/0), Dove (0/2), Shark (2/1), and Horse (2/0); successive slides show
the units' weight vectors being pulled step by step toward the input vectors,
e.g. via intermediate positions such as (1.5/1.5), (1/0.75), and (1.25/0.5).]
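The adaptation step underlying the figure moves the winning unit's weight vector a fraction of the way toward the input, w ← w + α(x − w). A one-step sketch (the vectors and the learning rate α = 0.5 are illustrative, not taken from the slides):

```python
input_x = [2.0, 0.0]   # input vector (e.g. the "Horse" pattern above)
w = [1.0, 1.0]         # current weight vector of the winning unit
lr = 0.5               # learning rate (illustrative choice)

# Standard SOM update: pull the weight vector toward the input.
w_new = [wi + lr * (xi - wi) for wi, xi in zip(w, input_x)]
print(w_new)           # [1.5, 0.5] -- halfway toward the input
```

In the full algorithm the same update, scaled by the neighborhood function, is also applied to the units surrounding the winner, which is what makes neighboring units come to represent similar inputs.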
[Table: the animal example — binary attribute vectors (Feathers, Hunt,
peaceful, likes to Run / Fly / Swim, ...) for 16 animals. Only three attribute
rows are cleanly recoverable from the slide:

likes to Run   0 0 0 0 0 0 0 0 1 1 0 1 1 1 1 0
likes to Fly   1 0 0 1 1 1 1 0 0 0 0 0 0 0 0 0
likes to Swim  0 0 1 1 0 0 0 0 0 0 0 0 0 0 0 0

On the trained map the animals group into regions labelled birds, hunters,
and land animals.]
[Teuvo Kohonen 2001] Self-Organizing Maps. Springer.