9.8. What do we gain from competition?

(Translation by Weronika Łabaj, weronika.labaj@googlemail.com)
Any type of neural network can be trained by self-learning, but the most interesting results are obtained when the self-learning process is enriched with competition. Competition between neurons should not be new to you. In section 4.8 I already described (do you remember? If not, go back to the application Example 02) how networks in which neurons „compete” with each other are built and how they work. As you probably remember, in such competitive networks all neurons receive the input signals (generally the same signals, since such networks are usually one-layered), and each neuron calculates the sum of those signals (multiplied, of course, by weights that differ from neuron to neuron). Then the values calculated by the individual neurons are compared and the „winner” is found – the neuron that produced the strongest output value for the given input.
As you probably remember, the output value is higher the better the input signal matches the internal pattern of the neuron. Therefore, if you know the weights of the neurons, you can predict which of them will win when samples from particular regions of the input signal space are shown. The prediction is easy, because only the neuron whose internal knowledge matches the current input signal will win the competition, and only its output signal will be passed to the output of the whole network; the outputs of all the other neurons will be ignored. Of course, such a „success” is short-lived, because a moment later new input data arrives and some other neuron „wins” the competition. There is nothing surprising in this: the arrangement of the weight values determines which neuron will be the winner for any given input signal – it is always the neuron whose weight vector is most similar to the vector representing the input signal.
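To make this mechanism concrete, here is a minimal sketch of the winner selection. It is not the code of the example applications (which this chapter does not list); the use of NumPy, the array names and the numerical values are my own assumptions:

    import numpy as np

    rng = np.random.default_rng(seed=1)

    n_neurons, n_inputs = 4, 2
    weights = rng.uniform(-1.0, 1.0, size=(n_neurons, n_inputs))  # one weight vector per neuron

    x = np.array([0.6, -0.2])             # the input signal, shown to every neuron

    stimulation = weights @ x             # weighted sum computed by each neuron
    winner = int(np.argmax(stimulation))  # the strongest response wins the competition
    print(f"neuron {winner} wins with stimulation {stimulation[winner]:.3f}")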
Winning the competition has several consequences for a neuron. Firstly, in most networks of this type only one neuron has a non-zero output signal (usually with the value 1), while the output signals of all the other neurons are zeroed. This rule is known as WTA (Winner Takes All). Furthermore, the self-learning process usually concerns only the „winner”. Its (and only its!) weight values are altered in such a way that the next time the same input signal is presented, the „winning” neuron will produce an even more „convincing” output (a higher output value).
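In the same sketchy spirit, the WTA rule and the winner-only update might look as follows. The update formula – pulling the winner's weights a small step towards the input – is the standard competitive learning rule, and the learning rate eta is an assumed value:

    import numpy as np

    rng = np.random.default_rng(seed=2)

    n_neurons, n_inputs, eta = 4, 2, 0.1   # eta: learning rate (assumed value)
    weights = rng.uniform(-1.0, 1.0, size=(n_neurons, n_inputs))
    x = np.array([0.6, -0.2])

    winner = int(np.argmax(weights @ x))   # competition, as in the previous sketch

    outputs = np.zeros(n_neurons)
    outputs[winner] = 1.0                  # Winner Takes All: all other outputs stay zero

    # Only the winner learns: its weights are pulled towards the input signal,
    # so the same signal will stimulate it even more strongly next time.
    weights[winner] += eta * (x - weights[winner])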
Why is that?
To answer this question, let us examine carefully what exactly happens in a self-learning network with competition. At the input we have an object represented by its input signals. Those signals are propagated to all neurons, in each of which they produce a „combined stimulation”. In the simplest case this is just the sum of the input signals multiplied by the weight values, but the same rule can be applied to neurons with non-linear characteristics. The more similar the weight values of a neuron are to the input signal, the stronger the „combined stimulation” at the output of that neuron. We have already said that the set of weight values can be treated as a „pattern” of input signals to which a given neuron is particularly sensitive. Therefore, the more similar the input signal is to the pattern stored in a neuron, the stronger the output the neuron produces for that signal. So when one of the neurons becomes the „winner”, it means that of all the neurons its „internal pattern” is the most similar to this particular input signal.
But why is it similar?
In the beginning it may simply be the result of the random initialization of the weight values. In every network the initial weight values are random, and those randomly assigned values are more or less similar to the input signals used during the learning process. Some neurons therefore have – accidentally – an „innate bias” towards recognizing certain objects and – also accidentally – an „aversion” towards others. Later on, with each step of learning, the learning process forces the internal patterns to become more and more similar to certain kinds of objects. The randomness disappears and the neurons specialize in recognizing particular classes of objects.
At this stage, if a neuron „won” during the recognition of a letter A, it is even more likely to win again when a letter A is presented at the input, even one slightly different from the previous sample, for example written by another person. In the beginning we always start from randomness – the neurons themselves decide which of them should recognize the letter A, which the letter B, and which should signal that a particular character is not a letter at all but, for example, a fingerprint. The self-learning process only reinforces and polishes this natural bias (which, again, was assigned randomly when the initial values were generated).
This happens in every self-learning network, so what exactly does competition add?
Thanks to competition, the self-learning process can be more effective and efficient.
Since the initial weight values are random, it may happen that several neurons are „biased” towards the same class of objects. An ordinary learning process, lacking competition, will strengthen those „biases” in all of these neurons simultaneously. Eventually there will be no variety in the behavior of the different parts of the network (that is, of particular neurons); quite the contrary – those parts will become more and more similar. You saw exactly this phenomenon during the experiments with the Application Example 10b.
However, when we introduce competition, the situation changes completely. Each time there will be some neuron at least slightly better suited to recognizing the currently shown object than its „competitors”. The natural consequence is that the neuron whose weight values are (accidentally) most similar to the currently presented object becomes the „winner”. If this neuron (and only this one) „learns” in this particular step, its „inborn bias” will be further developed and strengthened during the learning process, while the „competitors” will stay behind and will compete only for recognizing other classes of objects.
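Before you reach for the application, it may help to see the whole mechanism in one place. The sketch below is my own reconstruction of the idea, not the source code of Example 10c: four clusters of random points stand in for four classes of objects, the winner is the neuron whose weight vector lies closest to the presented sample (Euclidean distance here plays the role of the similarity measure discussed above), and only the winner moves:

    import numpy as np

    rng = np.random.default_rng(seed=3)

    # Four classes of objects, one cluster of points per class (toy data, my assumption).
    centers = np.array([[3.0, 3.0], [3.0, -3.0], [-3.0, 3.0], [-3.0, -3.0]])

    n_neurons, eta = 4, 0.1
    weights = rng.uniform(-1.0, 1.0, size=(n_neurons, 2))   # random "innate biases"

    for step in range(2000):
        cls = rng.integers(len(centers))
        x = centers[cls] + rng.normal(scale=0.3, size=2)    # a noisy sample of one class

        # Competition: the neuron closest to the sample wins...
        winner = int(np.argmin(np.linalg.norm(weights - x, axis=1)))

        # ...and only the winner learns, so its accidental bias is strengthened
        # while the losers remain available for the other classes.
        weights[winner] += eta * (x - weights[winner])

    print(np.round(weights, 2))   # each row should settle near a different cluster center

With few neurons and well-separated clusters, each neuron almost always captures a different class; with many neurons, most of them never win and therefore never move – which is exactly the local character of the process you will see in the experiments below.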
You can observe the self-learning process with competition using the Application Example 10c. In this application I used competitive learning in a task similar to the previous ones from this chapter (recognizing Martian males, females, children and animals). However, this time the learning principle is different, so you will observe completely new behaviors.
When started, the application displays a window with parameters in which you can (among other things) specify the number of neurons in the network. I recommend manipulating only this parameter at first, because if you change many parameters at once it is easy to get lost.
The rule for choosing the number of neurons is simple: the fewer neurons (for example 30), the more spectacular the self-learning with competition – you can easily observe that all neurons except the winner „stay still” and only the winner learns. I enabled tracing the changes of location of the learning neurons, so you can observe their „trajectories”. In the pictures generated by the Application Example 10c a neuron is shown as a big red square when it reaches its final location. You can treat this as a sign that the learning process for the particular class is complete (Fig. 9.37).
Fig. 9.37. Parameters window and visualization of the self-learning process in a network with competition – before and after the final success. Application Example 10c.
When the self-learning process starts, you will see that only one neuron is „attracted” to each point at which objects belonging to a particular class appear. This neuron eventually becomes a perfect detector of objects belonging to that class (you will see a big red square at the place where objects of that class typically appeared during the learning process). Clicking the Start button again activates the „step by step” feature, just as in the previous application (if you click the Start button and hold it, the self-learning process becomes automatic). Observing the trajectories of the moving weight vectors of particular neurons (and reading the messages shown in each quarter that tell you which neuron „wins” in each step), you will notice that in each quarter a single neuron is chosen that wins every time samples from that quarter are presented. Moreover, only this neuron changes its location, moving towards the presented pattern. Eventually it reaches its final location and stops moving (Fig. 9.38).
Fig. 9.38. Self-learning process in a network with competition
When the number of neurons is large, it is more difficult to observe the self-learning process with competition, because (in contrast to classic self-learning in big networks) it has a very local character (Fig. 9.39). This stems from the fact that with many neurons, the winning neuron usually starts out very close to the presented pattern, so the trajectory of the „winner” is hardly visible.
Fig. 9.39. Self-learning process with competition in a large network
On the other hand, with a small number of neurons (say 5) the trajectories are long and spectacular. Moreover, you can see that thanks to competition even very weak initial biases towards recognizing certain classes of objects can be detected and strengthened during the learning process – provided that the „competitors” have even weaker biases towards recognizing objects of that particular class (Fig. 9.40). Because the sequence of changing locations is clearly visible, you will also notice one more quite interesting and general characteristic of neural networks: learning is fastest and the changes of location are biggest at the beginning of the process. This is natural, since each learning step moves the winner by a fraction of its distance from the presented pattern, and that distance is largest at the start.
Fig. 9.40. Self-learning process with competition in a network with a low number of neurons