ABC Optimized SOM Algorithm



C. C. Hung, H. Ijaz, E. Jung, and B.-C. Kuo#
School of Computing and Software Engineering
Southern Polytechnic State University, Marietta, Georgia USA
#Graduate Institute of Educational Measurement and Statistics, National Taichung University of Education,
Taichung, Taiwan, R. O. C.
• Introduction
• Self-Organizing Maps (SOM)
• Artificial Bee Colony (ABC) Algorithm
• Combining SOM and ABC
• Experimental Results
• Conclusions & Future Work
• Self-Organizing Map (SOM) has been used for image classification.
• Similar to the K-means algorithm, the local minimum problem is inevitable in a complex problem domain.
• Several solutions have been proposed for optimizing the SOM in remote sensing applications, for example, simulated annealing.
• Can we use bee algorithms (BA) to achieve more robust classification?
• Many bee algorithms have been developed. The artificial bee colony (ABC) was used with SOM in this study.

The self-organizing map (SOM) is a method for unsupervised
learning, based on a grid of artificial neurons whose weights
are adapted to match input vectors in a training set.

It was first described by the Finnish professor Teuvo Kohonen
and is thus sometimes referred to as a Kohonen map.

SOM is one of the most popular neural computation methods in use, and several thousand scientific articles have been written about it. SOM is especially good at producing visualizations of high-dimensional data.
Figure 1: Each Xi represents one component of the pixel vector for the multispectral bands, and L denotes the number of bands used. Each neuron in the output layer corresponds to one spectral class, and the spectral means of that class are stored in the connections between the inputs and the output neurons.
Figure 2: SOM neural network.
Each SOM neuron can be seen as representing a cluster
containing all the input examples which are mapped to that
neuron.

For a given input, the output of SOM is the neuron with weight
vector most similar (with respect to Euclidean distance) to that
input.
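
As a minimal illustration (assuming the neuron weight vectors are stored as rows of a NumPy array, which the slides do not specify), winner selection reduces to an argmin over Euclidean distances:

```python
import numpy as np

def winning_node(x, weights):
    # weights: (number_of_output_nodes, L) array, one weight vector per neuron.
    # Returns the index of the neuron whose weights are closest to x
    # in Euclidean distance.
    return int(np.argmin(np.linalg.norm(weights - x, axis=1)))
```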

The “trained” classes are represented by the output nodes and
the center of each class is stored in the connection weights
between input and output nodes.

Step 4: Update the weights of the winning node j* using

w_{ij^*}(t+1) = w_{ij^*}(t) + \Delta(t)\,\bigl(x_i - w_{ij^*}(t)\bigr), \qquad i = 1, \ldots, L, \quad 1 \le j^* \le N \times N,

where Δ(t) is a monotonically, slowly decreasing function of t (i.e., the learning rate) whose value lies between 0 and 1.
The basic process can be summarized by the following steps:
Step 1: Initialize all nodes to small random values.
Step 2: Choose a random data point.
Step 3: Calculate the winning node.
Step 4: Update the winning node and neighbors.
Step 5: Repeat steps 2 to 4 for given number of iterations.
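
A minimal Python sketch of these five steps is given below. The linear learning-rate decay, the shrinking grid neighborhood, and all names are illustrative assumptions; the slides do not prescribe them.

```python
import numpy as np

def train_som(data, N, iterations, lr0=0.5):
    """Basic SOM loop on an N x N output grid; data has shape (n_samples, L)."""
    n_samples, L = data.shape
    rng = np.random.default_rng(0)
    # Step 1: initialize all weights to small random values
    weights = rng.uniform(0.0, 0.1, size=(N * N, L))
    # Grid coordinates of each output node, used to define the neighborhood.
    coords = np.array([(r, c) for r in range(N) for c in range(N)])
    for t in range(iterations):
        lr = lr0 * (1.0 - t / iterations)               # Delta(t): slowly decreasing, in (0, 1)
        radius = max(1.0, (N / 2) * (1.0 - t / iterations))
        # Step 2: choose a random data point
        x = data[rng.integers(n_samples)]
        # Step 3: winning node = smallest Euclidean distance to x
        j_star = int(np.argmin(np.linalg.norm(weights - x, axis=1)))
        # Step 4: move the winner and its grid neighbors toward x
        grid_dist = np.linalg.norm(coords - coords[j_star], axis=1)
        for j in np.where(grid_dist <= radius)[0]:
            weights[j] += lr * (x - weights[j])
        # Step 5: repeat for the given number of iterations
    return weights
```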
The Bees Algorithm is a population-based search algorithm, inspired by the natural foraging behaviour of honey bees, for finding the optimal solution.
The algorithm performs a kind of neighbourhood search
combined with random search.
Scout bees search randomly from one flower patch to another.
After depositing their nectar or pollen, they go to the “dance floor” to perform a “waggle dance”.
Bees communicate through the waggle dance, which conveys the following information:
1. The direction of the flower patch (the angle between the sun and the patch)
2. The distance from the hive (the duration of the dance)
3. The quality rating, or fitness (the frequency of the dance)
Three types of bees in ABC:
1) Employed bees
2) Onlooker bees, and
3) Scouts.
Employed and onlooker bees perform the exploitation search.
Scouts carry out the exploration search.
ABC employs four different selection processes:
1) a global selection process used by onlookers,
2) a local selection process carried out in a region by employed
and onlooker bees,
3) a greedy selection process used by all bees, and
4) a random selection process used by scouts.
The ABC algorithm consists of the following steps:
Step 1: Initialize by picking k random Employed bees from data.
Step 2: Send Scout bees and test against Employed bees (replace
if better than Employed is found).
Step 3: Send Onlooker bees to Employed.
Step 4: Test Onlooker bees against Employed (replace if better
than Employed is found).
Step 5: Reduce the radius of Onlooker bees.
Step 6: Repeat steps 2 to 5 for a given number of iterations.

The algorithm requires these parameters to be set:
1) Number of clusters (k),
2) Number of bees, including Employed, Onlookers, and Scouts (B),
3) Number of iterations (iter), and
4) Initial radius of Onlookers (ir).
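
A rough sketch of this clustering variant of ABC, using the parameters k, B, iter, and ir listed above, might look as follows. The fitness measure (total distance of the data points to their nearest center), the use of B as the number of onlooker trials per iteration, and the radius decay factor are assumptions made for illustration:

```python
import numpy as np

def abc_cluster(data, k, B, iterations, ir, seed=0):
    """ABC-style clustering sketch; data has shape (n_samples, n_features)."""
    rng = np.random.default_rng(seed)
    n = len(data)

    def fitness(centers):
        # Lower is better: total distance of every point to its nearest center.
        d = np.linalg.norm(data[:, None, :] - centers[None, :, :], axis=2)
        return d.min(axis=1).sum()

    # Step 1: employed bees = k random data points used as cluster centers
    employed = data[rng.choice(n, size=k, replace=False)]
    radius = ir
    for _ in range(iterations):
        # Step 2: scouts explore random positions; replace employed if better
        scouts = data[rng.choice(n, size=k, replace=False)]
        if fitness(scouts) < fitness(employed):
            employed = scouts
        # Steps 3-4: onlookers search locally around the employed bees;
        # greedy selection keeps an onlooker only if it improves the fitness
        for _ in range(B):
            onlooker = employed + rng.uniform(-radius, radius, size=employed.shape)
            if fitness(onlooker) < fitness(employed):
                employed = onlooker
        # Step 5: reduce the onlookers' search radius
        radius *= 0.95
        # Step 6: repeat for the given number of iterations
    return employed
```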
The proposed algorithm uses ABC to select some of the neighboring nodes in the SOM for the weight update.

The basic process can be summarized by the following steps:
Step 1: Initialize all weights to small random values.
Step 2: Choose a random data point.
Step 3: Calculate the winning node.
Step 4: Use ABC to select neighboring nodes.
Step 5: Update the winning node and selected neighboring nodes.
Step 6: Repeat steps 2 to 5 for a given number of iterations.
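
A hedged sketch of the combined loop could look like the following. The ABC-style rule used here to accept or reject candidate neighbors (onlooker samples near the winner whose weights are already reasonably close to the input) is an assumption, since the slides only state that ABC selects the neighboring nodes:

```python
import numpy as np

def train_abc_som(data, N, iterations, B, ir, lr0=0.5):
    """SOM competition step with ABC-style selection of the neighbors to update."""
    n_samples, L = data.shape
    rng = np.random.default_rng(0)
    # Step 1: initialize all weights to small random values
    weights = rng.uniform(0.0, 0.1, size=(N * N, L))
    coords = np.array([(r, c) for r in range(N) for c in range(N)])
    radius = ir
    for t in range(iterations):
        lr = lr0 * (1.0 - t / iterations)
        # Step 2: choose a random data point
        x = data[rng.integers(n_samples)]
        # Step 3: calculate the winning node
        j_star = int(np.argmin(np.linalg.norm(weights - x, axis=1)))
        # Step 4: ABC-style selection of neighboring nodes: onlooker "bees"
        # sample nodes within the current radius of the winner on the grid
        # and keep those whose weights are not much farther from x than
        # the winner's (greedy acceptance).
        grid_dist = np.linalg.norm(coords - coords[j_star], axis=1)
        candidates = np.where((grid_dist > 0) & (grid_dist <= radius))[0]
        selected = []
        if len(candidates) > 0:
            picks = rng.choice(candidates, size=min(B, len(candidates)), replace=False)
            d_star = np.linalg.norm(weights[j_star] - x)
            selected = [j for j in picks if np.linalg.norm(weights[j] - x) < 2.0 * d_star]
        # Step 5: update the winning node and the selected neighboring nodes
        for j in [j_star] + list(selected):
            weights[j] += lr * (x - weights[j])
        radius = max(1.0, radius * 0.99)               # shrink the search radius
        # Step 6: repeat for the given number of iterations
    return weights
```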

Illustration:
• Some of the results on the Iris and Glass data sets are shown in Tables 1 to 3. The Max, Mean, and Variance columns give the maximum accuracy achieved, the mean accuracy over the runs, and the standard deviation, respectively. The accuracy distribution is also given.

Table 1: Results and accuracy distribution of 100 runs on Iris data.

                          BEE                      SOM                       BEE+SOM
                          max    mean   std        max    mean    std        max    mean   std
Iris (avg of 100 runs)    94     69.15  0.3334     92.67  85.43   0.0824     93.33  90.53  0.014

Table 2: Results and accuracy distribution of 500 random experiments on Iris data.

Algorithms   Max      Mean     Var.     Accur. [0.9, 1]   Accur. [0.85, 0.9)   Accur. [0, 0.85)
ABC          93.33%   89.20%   0.0557   325               155                  20
SOM          93.33%   86.35%   0.0304   97                229                  174
ABC + SOM    93.33%   90.60%   0.0117   390               110                  0

Table 3: Results and accuracy distribution of 500 random experiments on Glass data.

Algorithms   Max      Mean     Var.     Accur. [0.55, 1]   Accur. [0.50, 0.55)   Accur. [0, 0.50)
ABC          55.14%   52.29%   0.0133   1                  493                   6
SOM          62.15%   48.70%   0.0323   10                 157                   342
ABC + SOM    56.07%   52.31%   0.0305   95                 286                   119
Figure 3: (a) An original image; (b), (c), and (d) the results of applying the ABC, SOM, and ABC + SOM algorithms, respectively.

From the results we can see that all three algorithms achieved the same maximum accuracy on the Iris data, but the proposed algorithm is more stable than either of the other two. Furthermore, the algorithm can be effective with almost any parameter settings if it is allowed to run long enough.

The proposed algorithm (i.e., ABC + SOM) is more time efficient than ABC alone. The ratio of computation time for SOM, ABC + SOM, and ABC is about 1:3:20.

This is a very preliminary experiment. Further study and comparison with other similar methods still need to be done.

The robustness of the algorithm can be improved by refining
the bee model.