Unsupervised Learning
&
Self Organizing Maps
Unsupervised Competitive Learning
• In Hebbian networks, all neurons can fire at
the same time
• Competitive learning means that only a
single neuron from each group fires at each
time step
• Output units compete with one another.
• These are Winner-Takes-All units
(“grandmother cells”)
Unsupervised Competitive Learning, Cntd
• Such networks cluster the data points
• The number of clusters is not predefined but
is limited to the number of output units
• Applications include vector quantization (VQ), medical
diagnosis, document classification and more
Simple Competitive Learning
N input units
P output neurons
P x N weights
[Figure: fully connected network with input units x_1 ... x_N, output units Y_1 ... Y_P, and weights W_ij]

h_i = Σ_{j=1}^{N} W_ij x_j ,   i = 1, 2, ..., P
Y_i = 1 or 0
Simple Model, Cntd
• All weights are positive and normalized
• Inputs and outputs are binary
• Only one unit fires in response to an input

h_i = Σ_j W_ij x_j = W_i · x

i* = argmax_i (h_i)
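As a minimal illustration (not part of the original slides), the field computation and winner selection can be written in a few lines of NumPy; the array names and values below are assumptions for this sketch:

```python
import numpy as np

# Field computation and winner selection (W and x are assumed names:
# W is the (P, N) weight matrix, x an N-dimensional binary input).
def find_winner(W, x):
    h = W @ x                   # fields h_i = sum_j W_ij x_j = W_i . x
    return int(np.argmax(h))    # index i* of the winning (firing) unit

W = np.array([[0.2, 0.3, 0.5],
              [0.6, 0.2, 0.2]])     # P = 2 units, N = 3 inputs, rows normalized
x = np.array([1, 0, 1])
print(find_winner(W, x))            # -> 1, since h = [0.7, 0.8]
```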
Network Activation
• The unit with the highest field h_i fires
• i* is the winner unit
• Geometrically, W_i* is the weight vector closest to the current input vector
• The winning unit’s weight vector is updated
to be even closer to the current input vector
• Possible variation: adding lateral inhibition
Learning
Starting with small random weights, at
each step:
1. a new input vector is presented to the
network
2. all fields are calculated to find a winner
3. W_i* is updated to be closer to the input
Learning Rule
• Standard Competitive Learning
ΔW_i*j = η (x_j − W_i*j)

• Can be formulated as a Hebbian rule:

ΔW_ij = η O_i (x_j − W_ij)

where O_i = 1 for the winning unit and 0 for all others
Result
• Each output unit moves to the center of
mass of a cluster of input vectors →
clustering
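Putting the learning rule and this clustering result together, a compact sketch of the whole procedure (initialization, winner selection, update) might look as follows; the function name, learning rate and epoch count are illustrative assumptions, not values from the slides:

```python
import numpy as np

# Sketch of simple competitive learning under the assumptions above.
def competitive_learning(X, P, eta=0.1, epochs=50, seed=0):
    """X: (num_samples, N) binary input vectors; P: number of output units."""
    rng = np.random.default_rng(seed)
    N = X.shape[1]
    W = rng.uniform(0.0, 0.1, size=(P, N))      # small random weights break the symmetry
    W /= W.sum(axis=1, keepdims=True)           # positive, normalized weight vectors
    for _ in range(epochs):
        for x in X:
            h = W @ x                           # fields h_i = W_i . x
            i_star = np.argmax(h)               # winner-takes-all
            W[i_star] += eta * (x - W[i_star])  # move the winner toward the input
    return W

# Example: two well-separated clusters of 4-dimensional binary inputs
X = np.array([[1, 1, 0, 0]] * 5 + [[0, 0, 1, 1]] * 5, dtype=float)
W = competitive_learning(X, P=2)
print(np.round(W, 2))   # typically, each row ends up near the center of one cluster
```

With the toy data above, each weight vector typically settles at the center of mass of one cluster, matching the result described on this slide.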
Competitive Learning, Cntd
• It is important to break the symmetry in the
initial random weights
• Final configuration depends on initialization
– A winning unit has more chances of winning
the next time a similar input is seen
– Some outputs may never fire
– This can be compensated for by also updating the non-winning
units, with a smaller learning rate (see the sketch below)
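One way such compensation could be realized (a sketch of a simple "leaky" rule; the rate names are assumptions, not from the slides) is to give non-winning units a much smaller step toward the input:

```python
import numpy as np

# Leaky competitive update (sketch): the winner gets the full step eta,
# every non-winning unit gets a much smaller step eta_loser, so units
# that rarely win still drift toward the data.
def leaky_update(W, x, i_star, eta=0.1, eta_loser=0.01):
    for i in range(W.shape[0]):
        rate = eta if i == i_star else eta_loser
        W[i] += rate * (x - W[i])   # move toward the input
    return W

# Example: 3 output units, 4 inputs
W = np.random.default_rng(0).uniform(size=(3, 4))
x = np.array([1.0, 0.0, 1.0, 0.0])
W = leaky_update(W, x, i_star=int(np.argmax(W @ x)))
```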
Model: Horizontal & Vertical lines
Rumelhart & Zipser, 1985
• Problem – identify vertical or horizontal
signals
• Inputs are 6 x 6 arrays
• Intermediate layer with 8 WTA units
• Output layer with 2 WTA units
• Cannot work with one layer
Rumelhart & Zipser, Cntd
[Figure: examples of horizontal (H) and vertical (V) line stimuli on the 6 x 6 input array]
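As a rough illustration of the input side of this model (the generator below is an assumption chosen for illustration, not Rumelhart & Zipser's actual procedure), each stimulus can be built as a 6 x 6 binary array containing a single horizontal or vertical line, flattened into a 36-dimensional vector:

```python
import numpy as np

# Hypothetical generator for 6 x 6 line stimuli.
def make_line_pattern(rng):
    grid = np.zeros((6, 6), dtype=int)
    idx = rng.integers(6)            # which row/column carries the line
    if rng.random() < 0.5:
        grid[idx, :] = 1             # horizontal line
        label = "H"
    else:
        grid[:, idx] = 1             # vertical line
        label = "V"
    return grid.ravel(), label       # 36-dimensional binary input plus its class

rng = np.random.default_rng(0)
x, label = make_line_pattern(rng)
print(label)
print(x.reshape(6, 6))
```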
Geometrical Interpretation
• So far the ordering of the output units themselves
was not necessarily informative
• The location of the winning unit can give us
information regarding similarities in the data
• We are looking for an input-output mapping that
conserves the topological properties of the inputs
→ feature mapping
• Given any two spaces, it is not guaranteed that
such a mapping exists!
Biological Motivation
• In the brain, sensory inputs are represented
by topologically ordered computational
maps
– Tactile inputs
– Visual inputs (center-surround, ocular
dominance, orientation selectivity)
– Acoustic inputs
Biological Motivation, Cntd
• Computational maps are a basic building
block of sensory information processing
• A computational map is an array of neurons
representing slightly differently tuned
processors (filters) that operate in parallel
on sensory signals
• These neurons transform the input signals into a
place-coded structure
Self Organizing (Kohonen) Maps
• Competitive networks (WTA neurons)
• Output neurons are placed on a lattice, usually 2-dimensional
• Neurons become selectively tuned to various input
patterns (stimuli)
• The locations of the tuned (winning) neurons
become ordered in such a way that a
meaningful coordinate system for the different input
features is created →
a topographic map of the input patterns is formed
SOMs, Cntd
• Spatial locations of the neurons in the map
are indicative of statistical features that are
present in the inputs (stimuli) →
self-organization
Kohonen Maps
• Simple case: 2-d input and 2-d output layer
• No lateral connections
• Weight update is done for the winning
neuron and its surrounding neighborhood

ΔW_ij = η F(i, i*) (x_j − W_ij)
Neighborhood Function
• F is maximal for i* and drops to zero far from i*,
for example a Gaussian on the output lattice:

F(i, i*) = exp( −‖r_i − r_i*‖² / (2σ²) )

where r_i is the position of unit i on the lattice
• The update “pulls” the winning unit’s weight
vector closer to the input, and also drags the
close neighbors of this unit along
• The output layer is a sort of elastic net
that wants to come as close as possible to
the inputs
• The output map conserves the topological
relationships of the inputs
• Both η and σ can be changed (usually decreased)
during learning (see the sketch below)
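A minimal sketch tying the last few slides together (winner search, the Gaussian neighborhood F, and decaying η and σ); the grid size, decay schedules and parameter names are assumptions chosen for illustration:

```python
import numpy as np

# Minimal 2-d Kohonen map sketch under the assumptions stated above.
def train_som(X, grid_shape=(10, 10), epochs=100, eta0=0.5, sigma0=3.0, seed=0):
    """X: (num_samples, D) inputs; returns weights of shape (rows, cols, D)."""
    rng = np.random.default_rng(seed)
    rows, cols = grid_shape
    D = X.shape[1]
    W = rng.uniform(X.min(), X.max(), size=(rows, cols, D))
    # lattice coordinates r_i of every output unit
    coords = np.stack(np.meshgrid(np.arange(rows), np.arange(cols),
                                  indexing="ij"), axis=-1).astype(float)
    for t in range(epochs):
        eta = eta0 * np.exp(-t / epochs)      # eta and sigma shrink over time
        sigma = sigma0 * np.exp(-t / epochs)
        for x in X:
            # winner i*: the unit whose weight vector is closest to the input
            dist2 = ((W - x) ** 2).sum(axis=-1)
            i_star = np.unravel_index(np.argmin(dist2), dist2.shape)
            # Gaussian neighborhood F(i, i*) measured on the lattice
            F = np.exp(-((coords - coords[i_star]) ** 2).sum(axis=-1)
                       / (2.0 * sigma ** 2))
            # pull every unit toward the input, weighted by the neighborhood
            W += eta * F[..., None] * (x - W)
    return W

# Example: inputs drawn uniformly from the unit square
X = np.random.default_rng(1).uniform(size=(500, 2))
W = train_som(X, epochs=30)
```

For 2-d inputs, plotting W[..., 0] against W[..., 1] during training produces unfolding grids like the weight-vector plots on the next slide.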
Feature Mapping
[Figure: “Weight Vectors” plots of W(i,2) versus W(i,1) at two stages of training, showing the map unfolding to cover the input distribution]
Topographic Maps in the Brain
• Examples of topology-conserving mappings
between input and output spaces
– Retinotopic mapping between the retina and
the cortex
– Ocular dominance
– Somatosensory mapping (the homunculus)
Models
Goodhill (1993) proposed a model for the
development of retinotopy and ocular dominance,
based on Kohonen Maps
– Two retinas project to a single layer of cortical neurons
– Retinal inputs were modeled by random-dot patterns
– Correlations between the two eyes were added to the inputs
– The result is an ocular dominance map as well as a retinotopic
map
Models, Cntd
Farah (1998) proposed an explanation for
the spatial ordering of the homunculus using
a simple SOM.
– In the womb, the fetus lies with its hands close
to its face, and its feet close to its genitals
– This would explain the ordering of the
somatosensory areas in the homunculus
Other Models
• Semantic self organizing maps to model
language acquisition
• Kohonen feature mapping to model layered
organization in the LGN
• Combination of unsupervised and
supervised learning to model complex
computations in the visual cortex
Examples of Applications
• Kohonen (1984) – speech recognition: a map
of phonemes in the Finnish language
• Optical character recognition - clustering of
letters of different fonts
• Angéniol et al. (1988) – the travelling salesman
problem (an optimization problem)
• Kohonen (1990) – learning vector quantization
(pattern classification problem)
• Ritter & Kohonen (1989) – semantic maps
Summary
• Unsupervised learning is very common
• Unsupervised learning requires redundancy in the stimuli
• Self organization is a basic property of the brain’s
computational structure
• SOMs are based on
– competition (WTA units)
– cooperation
– synaptic adaptation
• SOMs conserve topological relationships between
the stimuli
• Artificial SOMs have many applications in
computational neuroscience