A Fuzzy Logic Controller for Ant Algorithms

Cherry Amir
Amr Badr
Ibrahim Farag
ruaab@rusys.eg.net
Faculty of Computers and Information, Department of Computer Science, Cairo University
Abstract
Parameter settings strongly affect the performance of Ant Colony Optimization (ACO) algorithms, which inspired us to develop a module, added to the standard Ant Colony System (ACS) algorithm, that performs the parameter setting automatically. The added module is a Fuzzy Logic Controller (FLC) that tunes the parameters according to robust performance measures of the algorithm. The tuning is performed while the algorithm runs, which allows the parameters to be set dynamically based on the algorithm's current performance. The adaptive ACS algorithm was tested on many TSP instances of different sizes and the results were compared to those of the standard algorithm. The comparison showed that the adaptive ACS outperformed the standard algorithm, reaching near-optimal solutions that the standard algorithm could not find, and converging to its solutions faster than the standard algorithm.
Keywords: Ant Colony Optimization, Parameter Tuning, Genetic Algorithms, Fuzzy Controllers.
1 Introduction
Ant colonies [Dor04], and more generally social insect
societies, are distributed systems that in spite of the
simplicity of their component individuals present a highly
structured social organization. As a result of this
organization, ant colonies can accomplish astonishingly
complex tasks that could never be performed by a single
ant. Ants coordinate their activities via stigmergy, a form
of indirect communication mediated by modifications of
the environment. For example, a foraging ant deposits a
chemical substance on the ground which increases the
probability that other ants will follow the same path.
The first ACO algorithm was the Ant System (AS), and the Traveling Salesman Problem (TSP) was the first problem it was applied to. The ACS algorithm was later introduced to improve the performance of AS and was also successfully applied to the TSP.
There have been many studies [Bot98, Cas00, Dor97(a)] that analyzed the performance of ACO algorithms under parameter tuning and the effect of different parameter values on that performance.
The parameters of the ACS algorithm have previously been set manually [Dor97(a), Dor97(b)], using values known to give reasonable performance; moreover, the parameters kept fixed values throughout the entire run. The best parameter values vary from one application to another and from one problem size to another, so there are no fixed magical values that make the algorithm perform at its best in all situations. What we propose here is to set these values automatically and to change them throughout the run according to certain performance measures, leading to better performance and faster convergence to optimal solutions. The adaptive ACS algorithm that we implemented was applied to various TSP problems.
We implemented the ACS-TSP algorithm with an added Fuzzy Controller module that evolves the parameters automatically as the algorithm runs. The rule base of the fuzzy controller holds the fuzzy rules that describe the performance of the ACS algorithm in response to changes in the parameter values. The fuzzy rules were induced by a genetic algorithm working on a data set. The adaptive ACS algorithm has proven to obtain better results than the traditional ACS for larger TSP instances. We applied the modified ACS to a number of TSP instances of different sizes; the algorithm yielded better solutions in some cases, especially those with a large number of cities, and faster convergence to good solutions in most cases. In the following section we describe the standard ACS algorithm in more detail.
2 Traveling Salesman Problem (TSP)
The first application of an ant colony optimization
algorithm was done using the traveling salesman problem
(TSP) as a test problem. The main reasons the TSP was chosen are that it is a shortest-path problem to which the ant colony metaphor is easily adapted, and that it is a very easy application to understand, so explanations of the algorithm's behavior are not obscured by too many technicalities.
A general definition of the traveling salesman problem is the following. Consider a set N of nodes, representing cities, and a set E of arcs fully connecting the nodes in N. Let d_ij be the length of the arc (i, j) ∈ E, that is, the distance between cities i and j, with i, j ∈ N. The TSP is the problem of finding a minimal-length Hamiltonian circuit on the graph G = (N, E), where a Hamiltonian circuit of G is a closed tour visiting each of the n = |N| nodes of G exactly once, and its length is given by the sum of the lengths of all the arcs of which it is composed [Dor99].
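The definition above can be made concrete in a few lines. The sketch below computes the length of a closed Hamiltonian circuit on a small instance; the five city coordinates are a made-up example, not one of the TSPLIB instances used later in the paper.

```python
import math

# Hypothetical 5-city instance: Euclidean coordinates indexed by node.
cities = {0: (0, 0), 1: (3, 4), 2: (6, 0), 3: (3, -4), 4: (1, 1)}

def d(i, j):
    """Arc length d_ij: Euclidean distance between cities i and j."""
    (xi, yi), (xj, yj) = cities[i], cities[j]
    return math.hypot(xi - xj, yi - yj)

def tour_length(tour):
    """Length of a closed Hamiltonian circuit: the sum of all its arcs,
    including the arc returning from the last city to the first."""
    return sum(d(tour[k], tour[(k + 1) % len(tour)]) for k in range(len(tour)))

length = tour_length([0, 1, 2, 3, 4])
```

A tour is simply a permutation of the node set; the `% len(tour)` index closes the circuit.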
3 Applying ACO algorithms to the TSP
In ACO algorithms ants are simple agents which, in the
TSP case, construct tours by moving from city to city on
the problem graph. The ants' solution construction is
guided by artificial pheromone trails and an a priori
available heuristic information. Each ant has a tabu list
which contains the cities that have not yet been visited by
this ant.
3.1 The ACS Algorithm

Ant Colony System (ACS) [Dor97(a), Gam96] has many characteristics that distinguish it from the rest of the ACO algorithms. First, it uses an aggressive action choice rule. Second, pheromone is added only to arcs belonging to the global-best solution. Third, each time an ant uses an arc (i, j) to move from city i to city j, it removes some pheromone from the arc. In the following we present these modifications in more detail.

Tour construction: In ACS, ants choose the next city using the pseudorandom-proportional action choice rule. Let q be a random variable uniformly distributed over [0, 1], and q0 ∈ [0, 1] a tunable parameter. According to Equation 2, when located at city i, ant k moves with probability q0 to the city l for which \tau_{il}(t) \cdot [\eta_{il}]^{\beta} is maximal; that is, with probability q0 the best possible move as indicated by the learned pheromone trails and the heuristic information is made (exploitation of learned knowledge). With probability (1 - q0), an ant performs a biased exploration of the arcs according to Equation 1:

p_{ij}^{k}(t) = \frac{\tau_{ij}(t) \cdot [\eta_{ij}]^{\beta}}{\sum_{l \in N_i^k} \tau_{il}(t) \cdot [\eta_{il}]^{\beta}}, \quad \text{if } j \in N_i^k    (1)

j = \begin{cases} \arg\max_{l \in N_i^k} \{\tau_{il}(t) \cdot [\eta_{il}]^{\beta}\}, & \text{if } q \le q_0; \\ J, & \text{otherwise;} \end{cases}    (2)

where \eta_{ij} = 1/d_{ij} is an a priori available heuristic value, d_{ij} is the distance between cities i and j, \tau_{ij}(t) is the pheromone trail on arc (i, j), \beta is a parameter which determines the relative influence of the pheromone trail and the heuristic information, and N_i^k is the feasible neighborhood of ant k, that is, the set of cities which ant k has not yet visited. J is a city drawn at random according to the probability distribution p_{ij}^{k}(t) of Equation 1.

Global pheromone trail update: In ACS, after each iteration the shortest tour (global-best tour) is determined, and arcs belonging to this tour receive extra pheromone, so only the global-best ant is allowed to add pheromone after each iteration, as shown in Equation 3:

\tau_{ij}(t+1) = (1 - \rho) \cdot \tau_{ij}(t) + \rho \cdot \Delta\tau_{ij}^{gb}(t), \quad \forall (i, j) \in \text{global-best tour}    (3)

where \Delta\tau_{ij}^{gb}(t) = 1/L_{gb}, and L_{gb} is the length of the global-best tour. It is important to note that the trail update only applies to the arcs of the global-best tour, not to all the arcs as in AS. The parameter \rho again represents the pheromone evaporation. In ACS only the global-best solution receives feedback. Initially, using the iteration-best solution for the pheromone update was also considered. Although for smaller TSP instances the difference in solution quality between using the global-best solution and the iteration-best solution is minimal, for larger instances the use of the global-best tour gives by far better results.

Local pheromone trail update: In addition to the global updating rule, in ACS the ants use a local update rule that they apply immediately after having crossed an arc during the tour construction:

\tau_{ij} = (1 - \xi) \cdot \tau_{ij} + \xi \cdot \tau_0    (4)

where \xi, 0 < \xi < 1, and \tau_0 are two parameters of the ACS algorithm. In this way the exploration of not-yet-visited arcs is increased. The value of \tau_0 is set to the same value as the initial pheromone trails. Experimentally, a good value for \xi was found to be 0.1, while a good value for \tau_0 was found to be 1/(n \cdot L_{nn}), where n is the number of cities in the TSP instance and L_{nn} is the length of the nearest-neighbor tour. The effect of the local updating rule is that each time an ant uses an arc (i, j), its pheromone trail \tau_{ij} is reduced, so that the arc becomes less desirable for the following ants. In other words, this increases the exploration of arcs that have not been visited yet and, in practice, has the effect that the algorithm does not show stagnation behavior. Stagnation occurs when pheromone accumulates on a certain path, usually a locally optimal solution, and more ants keep choosing this path over and over until eventually all ants choose it and the algorithm prematurely converges to this locally optimal solution.
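The decision and update rules above can be sketched compactly. In the block below, the two-arc pheromone/heuristic tables and all parameter values are illustrative assumptions, not values from the paper's experiments:

```python
import random

beta, q0, xi, rho = 2.0, 0.5, 0.1, 0.2
tau0 = 0.01                               # initial pheromone level
tau = {(0, 1): 0.05, (0, 2): 0.01}        # pheromone trail per arc
eta = {(0, 1): 1 / 5.0, (0, 2): 1 / 2.0}  # heuristic value 1/d_ij per arc

def choose_next_city(i, candidates, q_0=q0):
    """Pseudorandom-proportional rule: exploit the best arc with
    probability q_0 (Eq. 2), otherwise pick among the candidates with
    the biased probabilities of Eq. 1."""
    weights = {l: tau[(i, l)] * eta[(i, l)] ** beta for l in candidates}
    if random.random() <= q_0:                       # exploitation
        return max(weights, key=weights.get)
    r, acc = random.random() * sum(weights.values()), 0.0
    for l, w in weights.items():                     # biased exploration
        acc += w
        if acc >= r:
            return l
    return l                                         # guard against rounding

def local_update(i, j):
    """Eq. 4: evaporate the crossed arc's pheromone toward tau0."""
    tau[(i, j)] = (1 - xi) * tau[(i, j)] + xi * tau0

def global_update(best_tour_arcs, L_gb):
    """Eq. 3: only arcs of the global-best tour (length L_gb) get deposit."""
    for arc in best_tour_arcs:
        tau[arc] = (1 - rho) * tau[arc] + rho * (1.0 / L_gb)
```

Passing `q_0=1.0` makes the rule purely greedy, which is handy when checking that the argmax branch agrees with Equation 2.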
Hand-Tuned Parameter Setting for ACO Algorithms:
Experimental study of the various ACO algorithms for the
TSP has identified parameter settings that result in good
performance [Dor04]. It should be clear that in individual
instances, different settings may result in much better
performance. However, the stated values of the parameters
were found to yield reasonable performance over a
significant set of TSP instances. The performance of the ant algorithms with hand-tuned parameters was not always good enough, which is why several studies [Pil02] have tried to automate the parameter-setting process for ACO algorithms in order to improve performance.
3.2 Adaptive ACS Algorithm

We propose a modified ACS algorithm with an added fuzzy controller module that tunes the ACS parameters β and q0 automatically according to performance measures. We chose these two parameters because of the strong relationship we observed between their values and the algorithm's performance, which we noticed after running the algorithm thousands of times while monitoring its performance in response to changes in the parameter values. For example, considering the q0 parameter: if the variance throughout the population is high, a variety of different paths is currently being explored by the ants, so no more exploration is required; instead we need to exploit the learned information (the pheromone trails and the heuristic information), so we set q0 to a higher value to increase the probability that an ant makes the best possible move as indicated by the learned pheromone and the heuristic information. Considering the β parameter: if the variance throughout the population is low, the ants could be stuck on a locally optimal path (the stagnation behavior explained in Section 3.1) because of the pheromone accumulated on that path, so we need to increase the value of β such that the weight of the heuristic information (the length of the path) is higher than the weight of the pheromone trail on that path.

There are two steps involved in the automatic parameter setting. The first is:

1. Fuzzy Rule Induction:
For the fuzzy controller to output the best parameter values for the running algorithm, it needs a rule base that best describes the performance of the ACS algorithm in response to changes in the values of the parameters β and q0. A genetic algorithm was used for the fuzzy-rule induction process.
To use the genetic algorithm we first needed to go through a number of steps:
• Determine the Inputs and Outputs of the Fuzzy Rules: the inputs are the error and the variance, which are the performance measures, while the outputs are the parameters β and q0.
• Define the Membership Functions: we used triangular membership functions for each of the inputs and outputs.
• Prepare the Data Set File: this file contains data samples of a running ACS algorithm. We created it by varying each of the parameters so that the file shows the performance of the algorithm under different parameter values. The larger the data set file, the better the chance of producing rules that best describe the effect of the algorithm's parameters on its performance.
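The triangular membership functions mentioned above can be sketched as follows; the breakpoints for the Low/Medium/High partition of a normalized input are illustrative assumptions, not the values used in our membership functions file:

```python
def tri(x, a, b, c):
    """Membership degree in a triangular fuzzy set with feet a, c and peak b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def left_tri(x, b, c):
    """LeftTriangle: fully 'Low' at or below the peak b."""
    return 1.0 if x <= b else tri(x, 2 * b - c, b, c)

def right_tri(x, a, b):
    """RightTriangle: fully 'High' at or above the peak b."""
    return 1.0 if x >= b else tri(x, a, b, 2 * b - a)

low    = lambda x: left_tri(x, 0.0, 0.5)    # LeftTriangle  -> Low
medium = lambda x: tri(x, 0.0, 0.5, 1.0)    # Triangle      -> Medium
high   = lambda x: right_tri(x, 0.5, 1.0)   # RightTriangle -> High
```

The three set shapes correspond to the LeftTriangle, Triangle, and RightTriangle ranges described in the encoding section below.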
Design of the Genetic Algorithm:
 Individual Representation:
In conventional GAs each individual corresponds to a
candidate solution to a given problem. In our case the
problem is to discover a set of prediction rules so we have
the option of representing an individual as a set of rules or
as a single rule. We chose to represent each individual of
the GA population as a set of prediction rules, i.e., an
entire candidate solution. This is called the Pittsburgh
approach.
The Pittsburgh approach has the advantage that the
individual represents the whole set of rules so the fitness
function measures the fitness of the set of rules as one
entity.
 Encoding Rule Antecedents:
Each parameter has three fuzzy sets. The fuzzy sets
are:
 LeftTriangle which determines the range for the
low value of the parameter.
 Triangle which determines the range of the
medium value of the parameter.
 RightTriangle which determines the range for the
high value of the parameter.
So the possible fuzzy values for any of the parameters we
have are: Low, Medium, High, Not Low, Not Medium, and
Not High.
The values “Low”, “Medium”, and “High” are stored as
the values “1”, “2”, and “3“ respectively. The “Not” can be
represented as a negative value so “Not Low” value would
map to “-1”.
For example, if we want to represent the rule antecedent:
IF error is high AND variance is not medium
the output string would be: 3 -2
• Encoding a Rule Consequent:
Encoding the rule consequent is no different from encoding the rule antecedent. For example, if we want to represent the rule:
IF variance is low THEN beta is low AND q0 is medium
the output string would be:

Attr1  Attr2  Attr3  Attr4
  0      1      1      2

Figure 1: fixed-length encoded string
• Encoding an Individual:
When encoding an individual one has to ask how many rules each individual should have. It is better if each individual has a different number of rules, to explore all the possibilities. A maximum number of rules per set is given as an input to the system, and each individual can have any number of rules less than or equal to that maximum. The number of rules in a rule set is decided by a random number generated at run time, ranging from 1 to the maximum number of allowed rules. Each individual therefore stores an integer representing the number of rules in its rule set and an array of integers representing the whole rule set. The length of the array is:
Len = (NOI + NOO) * NOR + 1
where NOI is the number of inputs, NOO the number of outputs, and NOR the number of rules.
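The encoding above can be sketched in a few lines. The attribute alphabet follows the text (1/2/3 = Low/Medium/High, negative = "Not"); treating 0 as "attribute unused" and MAX_RULES = 5 are our assumptions for illustration:

```python
import random

MAX_RULES = 5     # maximum number of rules per rule set (a system input)
NOI, NOO = 2, 2   # inputs: error, variance; outputs: beta, q0

def random_individual():
    """A Pittsburgh-style individual: [NOR, attr, attr, ...], i.e. the rule
    count followed by (NOI + NOO) encoded attributes per rule."""
    nor = random.randint(1, MAX_RULES)
    genes = [random.choice([0, 1, 2, 3, -1, -2, -3])
             for _ in range(nor * (NOI + NOO))]
    return [nor] + genes

ind = random_individual()
# len(ind) matches the formula Len = (NOI + NOO) * NOR + 1 from the text.
```

Because the rule count is drawn per individual, the population naturally mixes short and long rule sets.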
• Fitness Evaluation of an Individual:
The fitness of an individual in the genetic algorithm's population is measured by matching the given data instances against the individual's set of rules. For each individual, we go through each entry in the data set file; if no rule in the individual's rule set matches the entry, it is counted as a misclassification. The calculated fitness of each individual is then:
Fitness = (number of data instances - misclassifications) / number of data instances
The genetic algorithm then outputs a set of fuzzy rules that best fits the data set. An example of a rule that could be part of the output of the genetic algorithm is:
IF error is low AND variance is medium THEN beta is low.
where error is the difference between the TSP's optimal solution (the best-known solution [TSPLIB]) and the found solution, variance is the variance among the solutions found by the population of ants, and beta is the relative influence of the pheromone trail versus the heuristic information.
2. Operation of the Fuzzy Controller over the ACS Algorithm:
A Fuzzy Logic Controller (FLC) is composed of a
knowledge base, that includes the information given by the
expert in the form of linguistic control rules, a fuzzification
interface, which has the effect of transforming crisp data
into fuzzy sets, an Inference System, that uses them
together with the knowledge base to make inference by
means of a reasoning method, and a defuzzification
interface, that translates the fuzzy control action thus
obtained to a real control action using a defuzzification
method [Her96(a)]. The structure of an FLC is shown in
Figure 2.
[Figure: the knowledge base feeds an inference system placed between a fuzzification interface (receiving the state variables) and a defuzzification interface (emitting the control variables to the controlled system).]
Figure 2: Structure of a Fuzzy Logic Controller [Her96(a)].
 The knowledge base is composed of two components, a
data base, containing the definitions of the fuzzy
control rules linguistic labels, i.e., the membership
functions of the fuzzy sets specifying the meaning of
the linguistic terms, and a rule base, constituted by the
collection of fuzzy control rules representing the
expert’s knowledge.
o The data base is constituted of a membership
functions file that we prepared. Each of the inputs
and outputs of the fuzzy rules has associated
membership functions. We used triangular
membership functions for their simplicity.
o The rule base comes from the genetic system using
rule induction. We should bear in mind that our
fuzzy controller tunes the parameters of the ACS
algorithm depending on the fuzzy rule base, but the
fuzzy rules we got from the genetic algorithm just
monitor the performance of the ACS algorithm and
its variance with the change in the ACS parameters.
What we need is to adapt these fuzzy rules to match
our requirement to adjust the performance of the
ACS algorithm and form an adaptive ACS
algorithm. For example, the generated fuzzy rule "if error is high then beta is low" puts the observed ACS performance into words: when the error was high, beta was found to have a low value. If we want to adjust the ACS performance by minimizing the error in the population, we should therefore avoid setting beta to a low value; a medium or higher value may be more suitable, and so on.
 Fuzzification is the process of converting a "crisp"
input value (in our case the crisp values are the values
of the error and variance) into a corresponding degree
of membership in a membership function (region).
 The job of the inference engine is to check the fired
rules and combine their outputs to one single output
representing the set of values for each variable.
 In the final step known as defuzzification, the combined
or aggregate output of all the rules which "fired" is
determined, and the result is used to calculate a "crisp"
(non-fuzzy) output result representing values of the
parameters β and q0.
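One pass through the fuzzification–inference–defuzzification pipeline can be sketched as a minimal Mamdani-style controller. The rule list, set breakpoints, and output centers below are illustrative assumptions, not the rule base induced by our genetic algorithm, and the weighted-average defuzzification is one common choice among several:

```python
SETS = {1: (-0.5, 0.0, 0.5),   # Low
        2: (0.0, 0.5, 1.0),    # Medium
        3: (0.5, 1.0, 1.5)}    # High
CENTERS = {1: 0.2, 2: 0.5, 3: 0.8}   # crisp centers of the output sets

def degree(label, x):
    """Fuzzification: membership degree of x in the set for this label."""
    a, b, c = SETS[label]
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# Each rule: ((error_label, variance_label), (beta_label, q0_label)).
RULES = [((1, 2), (1, 2)),   # IF error low  AND variance medium THEN beta low,  q0 medium
         ((3, 1), (3, 3))]   # IF error high AND variance low    THEN beta high, q0 high

def controller(error, variance):
    """Fire each rule (min AND), then defuzzify both outputs by the
    weighted average of the consequent set centers."""
    num_beta = num_q0 = den = 0.0
    for (e_l, v_l), (b_l, q_l) in RULES:
        w = min(degree(e_l, error), degree(v_l, variance))  # firing strength
        num_beta += w * CENTERS[b_l]
        num_q0 += w * CENTERS[q_l]
        den += w
    if den == 0.0:
        return None                     # no rule fired
    return num_beta / den, num_q0 / den
```

With a low error and medium variance only the first rule fires, so the crisp outputs collapse to that rule's consequent centers.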
Structure of the Adaptive ACS Algorithm:

Algorithm: Adaptive ACS
Input:
  TSP Problem   //the TSP instance file name
  NumberOfAnts  //the number of ants to be used in the algorithm
  Beta (β)      //weight of heuristic information to pheromone
  ρ             //global decay
  ξ             //local decay
  q0            //exploration versus exploitation
  N             //number of iterations
Output: bestTour //the shortest path found by the algorithm
BEGIN
  Initialize the Controller
  For each Iteration
    o Place each ant in a randomly chosen city;
    o For each ant i in the population
        IF q < q0 THEN //with probability q0
          Choose the city J from the tabu list to step to, for which the
          pheromone and heuristic information is maximal
        ELSE //with probability (1-q0)
          Perform a biased exploration from the tabu list and choose
          city J to step to.
        Update pheromone levels using the ACS local rule;
    o Until no more cities to visit;
    o Compute the length of the Tour found by each ant;
    o Update pheromone level using the ACS global rule for the best tour;
    o Calculate Variance;
    o Calculate the Error of the best-so-far tour;
    o [β, q0] = Controller [Variance, Error];
  Until no more iterations
END
Figure 3: Algorithm of the Adaptive ACS Algorithm

Algorithm: Controller
Input:
  Variance
  Error //the error of the best-so-far tour compared to the
        //best-known tour of the TSP problem
Output: β, q0
BEGIN
  Read the rule base file
  For each rule i in the rule base
    o Fuzzify the Variance and Error parameters.
    o Check if the rule is active for the Variance and Error values.
    o IF the rule is active THEN
        Output the fuzzy values of the β and q0 parameters
  Add the produced fuzzy values of each output
  Defuzzify the result and output the crisp values for the β and q0 parameters
END
4 The Results of the Adaptive ACS Algorithm

4.1 Sample Runs
We have tested our adaptive ACS algorithm as opposed to
the standard ACS algorithm on different TSP instances.
Unless specified, the parameters of both algorithms were
set to the best known values which are:
m (number of ants) = n (number of cities)
β (pheromone/distance) = 2
ρ (global decay) = 0.2
ξ (local decay) = 0.2
q0 (exploration vs. exploitation) = 0.5
n (number of cycles) = 500
Sample run 1 (eil51.tsp):
The following chart shows the result of running both algorithms on eil51.tsp. The x-axis represents the cycle number, while the y-axis represents the tour length. The adaptive ACS found a tour of length 431, while the standard ACS algorithm found a tour of length 452.
[Chart: tour length per cycle with 51 ants, adaptive ACS vs. standard ACS.]
Chart 1: Sample run on eil51.tsp.

Sample run 2 (pr144.tsp):
The following chart shows the result of running both algorithms on pr144.tsp with number of ants = 144. The adaptive ACS found a tour of length 59710, while the standard ACS algorithm found a tour of length 61314.
[Chart: tour length (thousands) per cycle with 144 ants, adaptive ACS vs. standard ACS.]
Chart 2: Sample run on pr144.tsp.

Sample run 3 (pr264.tsp):
The following chart shows the result of running both algorithms on pr264.tsp with number of ants = 100. The number of ants was decreased to 100 instead of 264 because of limited resources: the larger the TSP instance, the longer it takes to solve, so we decreased the number of ants to reduce the solving time. The adaptive ACS found a tour of length 53429, while the standard ACS algorithm found a tour of length 56131.
[Chart: tour length (thousands) per cycle with 100 ants, adaptive ACS vs. standard ACS.]
Chart 3: Sample run on pr264.tsp.

4.2 Simplified Results of the Adaptive ACS versus the Standard ACS:

Both the adaptive and the standard ACS algorithms were run for a limited number of iterations, namely 500. We found that when the number of iterations is greater than 500 neither algorithm's performance improves; the performance curves stabilize before iteration 500.
The Fuzzy Logic Controller module has a negligible computation overhead, so for the same number of iterations we can say that the adaptive and the standard algorithm have the same run time.
The following table shows the results of the runs on each of the TSP instances. The numbers following each TSP instance represent:
1. The best tour found by the standard ACS algorithm with number of ants (m) equal to 10.
2. The best tour found by the adaptive ACS algorithm with number of ants (m) equal to 10.
3. The best tour found by the standard ACS algorithm with number of ants (m) equal to the number of cities (n).
4. The best tour found by the adaptive ACS algorithm with number of ants (m) equal to the number of cities (n).
5. The best-known solution to this TSP instance.
The number between square brackets is the iteration in which the algorithm found the best tour.
TSP Problem | ACS (m=10)  | Adaptive ACS (m=10) | ACS (m=n)             | Adaptive ACS (m=n)    | Best Known
Eil51       | 474.1 [289] | 454.7 [199]         | 452.1 [283]           | 431.9 [133]           | 426
St70        | 824.1 [326] | 751.8 [143]         | 695.4 [54]            | 691 [66]              | 675
Eil76       | 675.6 [205] | 607.6 [95]          | 562.6 [452]           | 556 [479]             | 538
Rat99       | N/A         | N/A                 | 1270.2 [314]          | 1234.2 [315]          | 1211
Eil101      | N/A         | N/A                 | 672.1 [63]            | 655.6 [218]           | 629
Lin105      | N/A         | N/A                 | 14665.4 [236]         | 14543.1 [27]          | 14379
Pr107       | N/A         | N/A                 | 46347 [312]           | 45757.9 [329]         | 44303
Pr124       | N/A         | N/A                 | 60559.6 [287]         | 60203.3 [322]         | 59030
Rat195      | N/A         | N/A                 | 2449.1 [392]          | 2401.9 [65]           | 2323
Pr226       | N/A         | N/A                 | 83521.5 [106]         | 83141.5 [258]         | 80369
Pr264       | N/A         | N/A                 | 56131.6 [237] (m=100) | 53429.6 [78] (m=100)  | 49135

Table 1: Results of the adaptive ACS and the standard ACS algorithms run on a number of TSP instances.
As we can see from the table, both algorithms reached only near-optimal rather than optimal solutions; this is due to limited resources, since the TSP is a computation-intensive problem that demands a powerful machine. In most cases these near-optimal solutions are considered good enough. For the first three TSP instances, the gap between the adaptive ACS and the standard ACS is smaller when m is set to n than when m is set to 10, while for larger TSP instances the difference in performance is significant whatever the value of m.
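A quick way to read Table 1 is the percentage gap between a found tour and the best-known solution. The helper below, applied to the Eil51 row (m = n) as an example, is our own illustration, not a metric used in the paper:

```python
def gap_percent(found, best_known):
    """Percentage excess of a found tour length over the best-known length."""
    return 100.0 * (found - best_known) / best_known

adaptive_gap = gap_percent(431.9, 426)   # adaptive ACS, m = n, Eil51
standard_gap = gap_percent(452.1, 426)   # standard ACS, m = n, Eil51
```

For Eil51 the adaptive ACS lands within roughly 1.4% of the best-known solution, versus roughly 6.1% for the standard ACS.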
5 Conclusions
In this paper, we developed a Fuzzy Logic Controller
(FLC) module to be added to the Ant Colony System
(ACS) algorithm for the sake of tuning its parameters
automatically when applied to the TSP. The ACS parameters used to be hand-tuned, which requires human experience, and the parameter settings used to remain static throughout the entire run regardless of the algorithm's performance. Our FLC takes robust performance measures of the ACS algorithm (such as the fitness and the variance) and evolves the parameters at run time, supplying the algorithm with suitable parameter values based on its current performance.
The adaptive ACS algorithm tunes two parameters of
the ACS algorithm, which are the beta (β) parameter and
the q0 parameter. The β parameter is the weight of the
heuristic information to the pheromone trail, while the q0 is
the parameter that controls the exploration versus the
exploitation in the ants’ decision. The values of these two
parameters are very important since they are considered as
the controlling parameters of the ACS algorithm.
The Fuzzy Logic Controller’s main component is the
rule base because the rule base contains the fuzzy rules
that govern the ACS algorithm. It controls the algorithm
with sentences rather than equations. The rule base usually
comes from a domain’s expert who formulates his
knowledge into fuzzy rules, but in our case we used a
ready made genetic algorithm that generates fuzzy rules
automatically given a large data set and the membership
functions of each input and output.
The adaptive ACS algorithm was tested on many
different TSP instances to compare its performance with
the standard ACS algorithm. The result was that the
adaptive ACS showed faster convergence to the optimal solutions than the standard ACS, which can be an advantage for time-constrained problems. For larger TSP instances, the adaptive ACS algorithm yielded near-optimal solutions that the standard ACS algorithm could not find, which is valuable for problems that require accurate solutions.
Another interesting aspect about our adaptive ACS
algorithm is how it behaves under the condition of bad
parameter-setting. For example, the optimal value for the
number of ants is known to be equal to the number of
cities of the TSP instance, but when set to a number less
than that, our algorithm outperformed the standard ACS
algorithm by far. So it can be used in time-constrained
environments since decreasing number of ants decreases
the time required to solve the problem significantly.
References
[Bon00] E. Bonabeau, M. Dorigo, G. Theraulaz (2000) “Inspiration for optimization from social insect behavior”, no. 406, pages 39-42.
[Bot98] Hozefa M. Botee and Eric Bonabeau (1998) “Evolving Ant
Colony optimization”, Adv. Complex Systems, 1, page 149-159.
[Bra03] J. Branke and M. Guntsch (2003) “New Ideas for Applying Ant
Colony Optimization to the Probabilistic TSP”, Proceedings of Evo
Workshops, vol. 2611, pages 165-175.
[Cas00] Jorge Casillas, Oscar Cordon, and Francisco Herrera (2000)
“Learning Fuzzy Rules using Ant Colony Optimization Algorithms”,
Proceedings of the 2nd International Workshop on Ant Algorithms (ANTS
2000), page13-21.
[Col92] Alberto Colorni, Marco Dorigo, and Vittotio Maniezzo (1992)
“An Investigation of some Properties of an Ant Algorithm”, Proceedings
of the Parallel Problem Solving from Nature Conference: PPSN, pages
509-520.
[Cor01] O. Cordon, F. Herrera, F. Gomide, F. Hoffmann, and L.
Magdalena (2001) “Ten Years of Genetic Fuzzy Systems: Current
Framework and New Trends”, IFSA/NAFIPS.
[Dor04] Marco Dorigo and Thomas Stützle (2004) Ant Colony
Optimization, MIT Press.
[Dor01] Marco Dorigo (2001) “Ant Algorithms Solve Difficult
Optimization Problems”, Proceedings of the Sixth European Conference
on Artificial Life, vol. 2159, pages 11-22.
[Dor00(a)] Marco Dorigo, Eric Bonabeau, Guy Theraulaz (2000) “Ant algorithms and stigmergy”, Future Generation Computer Systems 16 (FGCS), pages 851-871.
[Dor00(b)] Marco Dorigo, Gianni Di Caro, Thomas Stützle (2000)
“Special Issue on Ant Algorithms”, selection of the papers presented at
ANTS’98- From Ant Colonies to Artificial Ants: First International
Workshop on Ant Colony Optimization 1998.
[Dor99] Marco Dorigo, Gianni Di Caro, and Luca M.Gambardella
(1999) “Ant Algorithms for Discrete Optimization”, Artificial Life , vol.
5, no. 3,page 137-172.
[Dor97(a)] Marco Dorigo and Luca Maria Gambardella (1996) “Ant
Colony System: A Cooperative Learning Approach to the Traveling
Salesman Problem”, IEEE Transactions on Evolutionary Computation,
vol. 1, no. 1 (Also Technical Report TR/IRIDIA/1996-5, IRIDIA,
Université Libre de Bruxelles.)
[Dor97(b)] Marco Dorigo and Luca Maria Gambardella (1997) “Ant
colonies for the traveling salesman problem”, BioSystems, vol. 43, page
73-81 (Also Technical Report TR/IRIDIA/1996-3, IRIDIA, Université
Libre de Bruxelles.)
[Dor96] Marco Dorigo, Vittorio Maniezzo, and A. Colorni (1996) “The
Ant System: Optimization by a Colony of Cooperating Agents”, IEEE
Transactions on Systems, Man,and Cybernetics - Part B, vol.26,
no.1,page 29-41.
[Dor91] Marco Dorigo, Vittorio Maniezzo, and A. Colorni (1991)
“Positive Feedback as a Search Strategy”, Technical Report 91-016, Dip.
Elettronica, Politecnico di Milano, Italy.
[Dou02] George Dounias, Athanasios Tsakonas, Jan Jantzen, Hubertus
Axer, Beth Bjerregaard, Diedrich Graf von Keyserlingk (2002) “Genetic
Programming for the Generation of Crisp and Fuzzy Rule Bases in
Classification and Diagnosis of Medical Data”, Proceedings of First
International NAISO Congress on Neuro Fuzzy Technologies.
[Ebe96] Russell C. Eberhart , Roy Dobbins , and Patrick K. Simpson
(1996), Computational Intelligence PC Tools, Morgan Kaufmann Pub.
[Fre02] Alex A. Freitas (2002), Data Mining and Knowledge Discovery
with Evolutionary Algorithms, Springer.
[Gae05] Dorian Gaertner and Keith Clark (2005) “On Optimal
Parameters for Ant Colony Optimization Algorithms”, In Proceedings of the
International Conference on Artificial Intelligence (ICAI'05).
[Gam96] Luca Maria Gambardella and Marco Dorigo (1996) “Solving
Symmetric and Asymmetric TSPs by Ant Colonies”, In Proceedings of
the IEEE International Conference on Evolutionary Computation
(ICEC'96), pages 622–627, IEEE Press.
[Gom04] Osvaldo Gómez and Benjamin Barán (2004) “Relationship
between Genetic Algorithms and Ant Colony Optimization Algorithms”,
30ma Conferencia Latinoamericana de Informática (CLEI2004), pages
766–776.
[Gon96] Antonio Gonzalez and Francisco Herrera (1996) “Multi-stage
Genetic Fuzzy Systems based on the Iterative Rule Learning Approach”,
CICYT, Technical Report #DECSAI-96128.
[Her96(a)] Francisco Herrera and Manuel Lozano (1996) “Adaptation of
Genetic Algorithm Parameters Based on Fuzzy Logic Controllers”,
Genetic Algorithms and Soft Computing, Physica-Verlag, pages 95–125.
[Her96(b)] Francisco Herrera, Manuel Lozano, and José Luis Verdegay
(1996) “Dynamic and Heuristic Fuzzy Connectives-Based Crossover
Operators for Controlling the Diversity and Convergence of Real Coded
Genetic Algorithms”, Int. Journal of Intelligent Systems, vol. 11, pages
1013–1041.
[Her95] Francisco Herrera and Luis Magdalena (1995) “Genetic Fuzzy
Systems: a Tutorial”, CICYT.
[Her94] Francisco Herrera, Manuel Lozano, and José Luis Verdegay
(1994) “Generating Fuzzy Rules from Examples using Genetic
Algorithms”, Proceedings of the Fifth International Conference on Information
Processing and Management of Uncertainty in Knowledge-based
Systems, pages 675–680.
[Jan98] Jan Jantzen (1998) “Design of Fuzzy Controllers”, Technical
University of Denmark, Department of Automation, Bldg 326, DK-2800
Lyngby, DENMARK. Tech. report no 98-E 864 (design).
[Kin94] J. Kinzel, F. Klawonn, and R. Kruse (1994) “Modifications of
Genetic Algorithms for Designing and Optimizing Fuzzy Controllers”,
Proceedings of the IEEE Conference on Evolutionary Computation, pages 28–33.
[Law85] E. L. Lawler, J. K. Lenstra, A. H. G. Rinnooy-Kan, and D. B. Shmoys,
eds. (1985) The Travelling Salesman Problem, New York: Wiley.
[Man04] Vittorio Maniezzo, Luca Maria Gambardella, and Fabio de
Luigi (2004) “Ant Colony Optimization”, New Optimization Techniques
in Engineering, edited by G. C. Onwubolu and B. V. Babu, Springer-Verlag
Berlin Heidelberg, pages 101–117.
[Pil02] Marcin L. Pilat and Tony White (2002) “Using Genetic
Algorithms to Optimize ACS-TSP”, Proceedings of the Third
International Workshop on Ant Algorithms, vol. 2463, pages 282–287.
[Rou00] Hans Roubos, Magne Setnes, and Janos Abonyi (2000)
“Learning Fuzzy Classification Rules from Data”, Recent advances in
soft computing (RASC2000).
[Sol02] Christine Solnon (2002) “Boosting ACO with a Preprocessing
Step”, EvoWorkshops 2002, pages 163–172.
[Stü99] Thomas Stützle and Marco Dorigo (1999) “ACO algorithms for
the traveling salesman problem”, Evolutionary algorithms in engineering
and computer science: Recent advances in genetic algorithms, evolution
strategies, evolutionary programming, genetic programming and
industrial applications.
[Sur02] Hartmut Surmann and Alexander Selenschtschikow (2002)
“Automatic Generation of Fuzzy Logic Rule Bases: Examples I”,
Proceedings of the NF2002: First International ICSC Conference on
Neuro-Fuzzy Technologies, page 75.
[Sur00] Hartmut Surmann (2000) “Learning a Fuzzy Rule Based
Knowledge Representation”, Proceedings of the 2nd ICSC Symposium on Neural
Computation, NC'2000, pages 349–355.
[TSPLIB] A library of sample instances for the TSP along with their best
known solutions, http://elib.zib.de/pub/mp-testdata/tsp/tsplib/tsplib.html
[Wal02] Walter J. Gutjahr (2002) “ACO Algorithms with Guaranteed
Convergence to the Optimal Solution”, Information Processing Letters,
vol. 82, no 1.
[Whi03] Tony White, Simon Kaegi, and Terri Oda (2003) “Revisiting
Elitism in Ant Colony Optimization”, Genetic and Evolutionary
Computation Conference (GECCO), vol. 2723.