Variable Objective Search
Sergiy Butenko∗, Oleksandra Yezerska∗,
and Balabhaskar Balasundaram†
Abstract
This paper introduces the variable objective search framework for combinatorial optimization. The method exploits the different objective functions arising in alternative mathematical programming formulations of the same combinatorial optimization problem in an attempt to improve on the solutions obtained using each of these formulations individually. The proposed technique is illustrated using alternative quadratic unconstrained binary formulations of the classical maximum independent set problem in graphs.
Keywords. Variable objective search, Binary quadratic programming, Maximum independent set
1 Introduction
Due to the inherent computational complexity of most important combinatorial optimization problems, one cannot expect to be able to solve very
large-scale instances to optimality, and (meta) heuristics are usually applied
in practice. The simplicity and effectiveness of metaheuristic approaches
earned them considerable popularity in the optimization community [5].
In their now classical paper, Mladenović and Hansen [8] (see also [6]) proposed to take advantage of different neighborhood structures for the same
combinatorial optimization problem by systematically changing neighborhoods within the search. The resulting variable neighborhood search (VNS)
∗Industrial and Systems Engineering, Texas A&M University, College Station, TX 77843-3131, {butenko,yaleksa}@tamu.edu
†Industrial Engineering & Management, Oklahoma State University, Stillwater, OK 74078, baski@okstate.edu
is based on the observation that a local optimum with respect to one neighborhood may not be a local optimum with respect to another neighborhood,
whereas a global optimum is a local optimum with respect to any neighborhood structure. Therefore, if we find a solution which is locally optimal with
respect to several neighborhood structures rather than just one, intuitively
such a solution stands a higher chance of being a global optimum. VNS
has been successfully applied to many different combinatorial optimization
problems. Inspired by this success and motivated by the observation that
the properties similar to those stated for neighborhoods also hold for alternative objective functions for the same problem, in this paper we propose a
new metaheuristic framework, variable objective search (VOS). It is in the
spirit of VNS, however, instead of varying the neighborhood structures we
propose to consider different equivalent formulations of the combinatorial
optimization problem solved. Given two or more equivalent formulations of
the same combinatorial optimization problem, and assuming for simplicity
that all formulations share the same feasible region but have different objective functions, we can impose a neighborhood structure for our problem
that will be the same for all the considered formulations. Then the following
properties hold:
(i) a local optimum for one formulation may not correspond to a local
optimum with respect to another formulation;
(ii) a global optimal solution of the considered combinatorial optimization
problem should correspond to a global optimum for any formulation of
this problem.
Therefore, it seems natural to use the local maximum for one of the formulations as a starting point of search for another formulation. The proposed
variable objective search framework is based on these properties and can
be applied to almost any optimization problem allowing multiple equivalent
formulations (discrete or continuous). In particular, many classical combinatorial optimization problems, such as the quadratic assignment problem,
the max-cut problem, and the maximum clique problem can be expressed
using unconstrained binary quadratic formulations, and therefore allow for
multiple equivalent binary quadratic reformulations [7]. One way to obtain
such reformulations is as follows. Assume that one has the problem in the
form
max_{x∈{0,1}^n} f(x) = x^T Q x + c^T x,

where Q is a real symmetric n × n matrix. Given any real vector q ∈ R^n, let Q̃ = Q + diag(q), where diag(q) is the n × n diagonal matrix having q as its diagonal, and let c̃ = c − q. Then, since f̃(x) = f(x) for any x ∈ {0, 1}^n, the above 0–1 quadratic problem is equivalent to the problem:

max_{x∈{0,1}^n} f̃(x) = x^T Q̃ x + c̃^T x.
The above modification does not have any impact on a local search based
on any binary neighborhood, however, it does change the curvature of the
quadratic function and can be useful for continuous local search. If the vector
q above is selected so that the resulting matrix Q̃ either has zero diagonal or is positive definite, then we are guaranteed that the continuous problem max_{x∈[0,1]^n} f̃(x) has a binary global maximum; thus we obtain two equivalent continuous reformulations of the original binary quadratic problem. Applying
classical nonlinear programming algorithms to one of these reformulations
will yield a stationary point x(0) that is not necessarily a stationary point for
the other reformulation and does not have to be a local maximum with respect to a binary neighborhood structure. Thus, local search procedures can
be applied for the latter two cases using x(0) as initial guess. In addition, selecting the vector q above that does not result in an equivalent reformulation
but provides a continuous relaxation for the binary problem (e.g., by choosing q ≤ 0) may also be of value, as it may provide good search directions.
To illustrate this point, consider the following simple example:
Q = [ 1  −3 ; −3  2 ],   c = [ 1 ; 2 ].
There are four feasible points for the corresponding binary quadratic maximization problem, x(1) = [0, 0]T , x(2) = [0, 1]T , x(3) = [1, 0]T , x(4) = [1, 1]T
with f (x(1) ) = f (x(4) ) = 0, f (x(2) ) = 4, and f (x(3) ) = 2. Hence, x(3) is a
local maximum of f (x) with respect to the binary one-flip neighborhood as
well as a local maximum for the corresponding continuous relaxation (see
Fig. 1). Note that all feasible directions at x(3) are descent directions for
f(x). However, applying the above transformation with q = [−4, −5]^T, we obtain f̃(x) ≥ f(x) ∀x ∈ [0, 1]^2, with the gradient ∇f̃(x(3)) = [−1, 1]^T, meaning that [−1, 1]^T is a feasible ascent direction for f̃(x) at x(3). Using
this direction for line search with either zero-diagonal or a positive definite
continuous reformulation of the considered binary problem, we immediately
obtain the global maximum given by x(2) .
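The example can be verified numerically; the following short sketch (our code, not part of the paper) reproduces the objective values at the four binary points, the gradient of f̃ at x(3), and the line-search step that reaches x(2):

```python
import numpy as np

Q = np.array([[1., -3.], [-3., 2.]])
c = np.array([1., 2.])
f = lambda x: x @ Q @ x + c @ x

# Objective values at the four binary feasible points.
pts = [np.array(p, dtype=float) for p in [(0, 0), (0, 1), (1, 0), (1, 1)]]
print([f(x) for x in pts])               # [0.0, 4.0, 2.0, 0.0]

# Dominating function with q = [-4, -5]^T.
q = np.array([-4., -5.])
Qt, ct = Q + np.diag(q), c - q
grad_ft = lambda x: 2 * Qt @ x + ct      # gradient of the shifted quadratic

x3 = np.array([1., 0.])
print(grad_ft(x3))                       # [-1.  1.], a feasible ascent direction
# A unit step from x3 along [-1, 1] lands exactly on the global maximizer x2.
print(f(x3 + grad_ft(x3)))               # 4.0
```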
Figure 1: Plots of x3 = f(x) and x3 = f̃(x) considered in the example.
To illustrate the proposed VOS approach on a specific problem, we will use
alternative quadratic formulations of the maximum independent set problem
in graphs, therefore, some definitions from graph theory are in order. Given
a simple undirected graph G = (V, E), let AG denote the adjacency matrix of
G. For a vertex v ∈ V let N(v) = {u : (u, v) ∈ E} denote the neighborhood of
v. The complement graph Ḡ of G is given by Ḡ = (V, Ē), where Ē = {(u, v) ∈ V × V \ E : u ≠ v}. A subset of vertices I ⊂ V is called an independent (stable) set if (u, v) ∉ E for any pair of vertices u, v ∈ I. The maximum
independent set problem is to find an independent set of the largest size in
G. The cardinality of the largest independent set is denoted by α(G) and
is called the independence (stability) number. This classical combinatorial
optimization problem is of great theoretical importance and has numerous
practical applications of diverse origins, therefore it has been well studied
in the literature. See [3] for an in-depth survey of the maximum clique
problem, which is equivalent to the maximum independent set problem in
the complement graph.
The remainder of this paper is organized as follows. Section 2 describes
the proposed methodology in detail. The approach is illustrated numerically
on the maximum independent set problem in Section 3. Finally, Section 4
concludes the paper.
2 The method
A combinatorial optimization problem P can be generally defined as a set of
problem instances and a minimization or maximization objective, where an
instance of P is a pair (S, f ), with S being the set of feasible solutions and
f : S → ℜ being the objective function [1]. Without loss of generality, we
will assume that P has the maximization objective. The problem consists in
finding a globally optimal solution, which is a feasible solution s∗ ∈ S such
that f(s*) ≥ f(s) for all s ∈ S. Let us denote by

f* = f(s*) = max_{s∈S} f(s)

the optimal objective function value, and by

S* = {s ∈ S : f(s) = f*} = arg max_{s∈S} f(s)
the set of optimal solutions of the considered instance of the problem P. For
simplicity, in this paper we will assume that we are dealing with a specific
instance of the problem P, therefore “problem P” will refer to the given
instance of P.
Many combinatorial optimization problems can be equivalently modeled
as mathematical programs of different types, e.g., as (mixed) integer programming problems or continuous nonconvex optimization problems. The
equivalence is in the sense that every global maximum of a mathematical programming formulation corresponds, in a straightforward fashion, to a global
maximum of the combinatorial optimization problem that this formulation
models.
Assume that our problem P allows the following p alternative equivalent
formulations:
f* = max_{x∈Xi} fi(x),   i = 1, . . . , p,

where Xi, i = 1, . . . , p, is a feasible set that may be either discrete or continuous. If we denote X0 = S and f0 = f, then the original combinatorial formulation of the problem P can be written as

f* = max_{x∈X0} f0(x).

Consider the set of all optimal solutions for the ith formulation: Xi* = arg max_{x∈Xi} fi(x). Before we proceed with the formal description of the variable
objective framework, we need to make the following important assumptions.
Assumption 1. There are mappings hij : Xi → Xj , i, j = 0, . . . , p (where hii , i = 0, . . . , p, are the identity maps) between the feasible sets of the ith and
j th formulations such that any global optimal solution of the ith formulation
is mapped to a global optimum of the j th formulation, i.e., hij (Xi∗ ) ⊆ Xj∗ . In
some cases the mappings may be straightforward, while in others they may
need to be calculated using efficient algorithms. Given x(i) , one would want
to have f0 (hi0 (x(i) )) ≥ fi (x(i) ), however this may not be the case in general.
Assumption 2. We assume that the same neighborhood structure N can
be imposed on each feasible region Xi , i = 1, . . . , p. This neighborhood structure will be used for local search applied to each formulation within the
proposed variable objective framework. Note that if we deal with alternative
continuous formulations of P, the continuous ε-neighborhood definition based
on the Euclidean norm is the natural choice of the neighborhood structure.
Assumption 3. We assume that the mappings hij preserve the neighborhood structure N , i.e., for any x ∈ Xi we have hij (N (x)) = N (hij (x)).
This assumption will allow us to perform a local search for all problems by
essentially concentrating on one of Xi ’s.
The last two assumptions are not critical and are used to simplify the presentation. They can be relaxed resulting in the generalized variable objective
search or variable neighborhood/objective search, which naturally generalizes
both VOS and VNS.
2.1 Basic variable objective search
In the basic variable objective search (BVOS), we start with some feasible
solution for the first formulation and perform a local search in the neighborhood N yielding a locally optimal solution x̄(1) for max_{x∈X1} f1(x). This solution is then transformed into a feasible solution h12(x̄(1)) of the second formulation, which is then used as a starting point of a local search for the second
formulation. Because of Assumption 3 above, the local search for the second
formulation is performed by searching the neighborhood of x̄(1) in X1 and
mapping the neighbors to the corresponding points of X2 , and the resulting local maximum for the second formulation is given by h12 (x̄(2) ), where
x̄(2) ∈ X1 . Next, h13 (x̄(2) ) is used as the starting point of local search with
the third formulation, etc. To avoid cycling, local search moves for the ith
Algorithm 1 BVOS with best improvement move
Require: fi, Xi, i = 0, . . . , p; N;
Ensure: a feasible solution s of P;
1: Find an initial solution x ∈ X1;
2: x̄(i) = x, i = 1, . . . , p;
3: f̄i = fi(h1i(x̄(i))), i = 1, . . . , p;
4: repeat
5:   x̂ = x;
6:   for i = 1, . . . , p do
7:     for each y ∈ N(x) do
8:       if fi(h1i(y)) > f̄i then
9:         x = y;
10:        x̄(i) = x;
11:        f̄i = fi(h1i(x̄(i)));
12:      end if
13:    end for
14:  end for
15: until x̂ = x;
16: X̃0 = {h10(x̄(i)) : i = 1, . . . , p};
17: X̄0 = arg max{f0(s) : s ∈ X̃0};
18: f̄0 = f0(s), s ∈ X̄0;
19: return s ∈ X̄0, f̄0;
formulation are performed only if there is an improvement compared to the
best previously computed solution x̄(i) . The corresponding pseudocodes are
given in Algorithms 1 and 2, which describe BVOS with best improvement
and first improvement local search strategies, respectively.
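Under Assumptions 1–3, the BVOS scheme with best improvement move can be condensed into a short Python sketch; the function names, the generic interface, and the toy usage below are our own and only illustrate the control flow, not the paper's implementation:

```python
def bvos_best_improvement(x0, objectives, maps, neighborhood, f0, h10):
    """Sketch of BVOS with best improvement move.

    objectives[i] and maps[i] play the roles of f_{i+1} and h_{1,i+1};
    all candidates live in X_1 and neighbors are mapped before evaluation.
    """
    p = len(objectives)
    x = x0
    best = [x] * p                                  # x-bar^(i), kept in X_1
    vals = [objectives[i](maps[i](x)) for i in range(p)]
    while True:
        x_hat = x
        for i in range(p):
            for y in neighborhood(x):               # N(x) for the current x
                fy = objectives[i](maps[i](y))
                if fy > vals[i]:                    # improves best known value
                    x, best[i], vals[i] = y, y, fy
        if x_hat == x:                              # full pass, no move: stop
            break
    # Report the candidate that is best for the original objective f_0.
    return max((h10(b) for b in best), key=f0)

# Toy usage with p = 1: maximize the number of ones in a 3-bit string
# under the one-flip neighborhood (illustrative only).
neigh = lambda x: [x[:i] + (1 - x[i],) + x[i + 1:] for i in range(len(x))]
ident = lambda x: x
count = lambda x: sum(x)
print(bvos_best_improvement((0, 0, 0), [count], [ident], neigh, count, ident))
# (1, 1, 1)
```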
2.2 Uniform variable objective search
The uniform variable objective search (UVOS) is similar to BVOS, however,
the local search moves are performed based on analyzing the objective function values for all p formulations simultaneously rather than one at a time.
Like in BVOS, we start with a feasible solution x of the first formulation.
Throughout the execution of the algorithm, we store the best computed solutions with respect to each of the p formulations, where, as before, x̄(i) ∈ X1
is the best current solution with respect to the ith formulation such that
Algorithm 2 BVOS with first improvement move
Require: fi, Xi, i = 0, . . . , p; N;
Ensure: a feasible solution s of P;
1: Find an initial solution x ∈ X1;
2: x̄(i) = x, i = 1, . . . , p;
3: f̄i = fi(h1i(x̄(i))), i = 1, . . . , p;
4: repeat
5:   x̂ = x;
6:   for i = 1, . . . , p do
7:     for each y ∈ N(x) do
8:       if fi(h1i(y)) > f̄i then
9:         x = y;
10:        x̄(i) = x;
11:        f̄i = fi(h1i(x̄(i)));
12:        go to line 7;
13:      end if
14:    end for
15:  end for
16: until x̂ = x;
17: X̃0 = {h10(x̄(i)) : i = 1, . . . , p};
18: X̄0 = arg max{f0(s) : s ∈ X̃0};
19: f̄0 = f0(s), s ∈ X̄0;
20: return s ∈ X̄0, f̄0;
h1i (x̄(i) ) ∈ Xi is the corresponding local maximum. The local search proceeds as follows: for each y in the neighborhood of the current solution x, for
each i = 1, . . . , p we check whether y improves the previously best objective
value f¯i , and if it does, we update x̄(i) and f¯i and either continue searching
the same neighborhood until all neighbors of x have been explored (the best
improvement strategy), or, after updating all f¯i , i = 1, . . . , p that improved,
move to y and start exploring the neighborhood of y (the first improvement
strategy). The pseudocodes given in Algorithms 3 and 4 describe UVOS with
best improvement and first improvement local search strategies, respectively.
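The UVOS sweep with best improvement move admits a similarly compact sketch; as before, the names and the toy usage are ours, and the only structural difference from the BVOS sketch is the loop order (neighbors outside, objectives inside):

```python
def uvos_best_improvement(x0, objectives, maps, neighborhood, f0, h10):
    """Sketch of UVOS with best improvement move: each neighbor is scored
    against all p formulations in a single sweep of the neighborhood."""
    p = len(objectives)
    x = x0
    best = [x] * p                                  # best solution per objective
    vals = [objectives[i](maps[i](x)) for i in range(p)]
    while True:
        x_hat = x
        for y in neighborhood(x):                   # one pass over N(x)...
            for i in range(p):                      # ...checking every objective
                fy = objectives[i](maps[i](y))
                if fy > vals[i]:
                    x, best[i], vals[i] = y, y, fy
        if x_hat == x:
            break
    return max((h10(b) for b in best), key=f0)

# Same toy usage as for BVOS: maximize the number of ones in a 3-bit string.
neigh = lambda x: [x[:i] + (1 - x[i],) + x[i + 1:] for i in range(len(x))]
ident = lambda x: x
count = lambda x: sum(x)
print(uvos_best_improvement((0, 0, 0), [count], [ident], neigh, count, ident))
# (1, 1, 1)
```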
Algorithm 3 UVOS with best improvement move
Require: fi, Xi, i = 0, . . . , p; N;
Ensure: a feasible solution s of P;
1: Find an initial solution x ∈ X1;
2: x̄(i) = x, i = 1, . . . , p;
3: f̄i = fi(h1i(x̄(i))), i = 1, . . . , p;
4: repeat
5:   x̂ = x;
6:   for each y ∈ N(x) do
7:     for i = 1, . . . , p do
8:       if fi(h1i(y)) > f̄i then
9:         x = y;
10:        x̄(i) = x;
11:        f̄i = fi(h1i(x̄(i)));
12:      end if
13:    end for
14:  end for
15: until x̂ = x;
16: X̃0 = {h10(x̄(i)) : i = 1, . . . , p};
17: X̄0 = arg max{f0(s) : s ∈ X̃0};
18: f̄0 = f0(s), s ∈ X̄0;
19: return s ∈ X̄0, f̄0;

3 Experiments with the maximum independent set problem
In this section, we evaluate and compare the performance of the proposed
algorithms using the maximum independent set problem as the testbed. The
main objective of this exercise is to explore how much the proposed approaches improve the solutions found using similar local search strategies
with different formulations individually.
The maximum independent set problem can be equivalently formulated
using the following continuous nonconvex programs (see, e.g., [2]):
α(G) = max_{x∈[0,1]^n} e^T x − (1/2) x^T AG x.    (1)
Algorithm 4 UVOS with first improvement move
Require: fi, Xi, i = 0, . . . , p; N;
Ensure: a feasible solution s of P;
1: Find an initial solution x ∈ X1;
2: x̄(i) = x, i = 1, . . . , p;
3: f̄i = fi(h1i(x̄(i))), i = 1, . . . , p;
4: repeat
5:   x̂ = x;
6:   for each y ∈ N(x) do
7:     improvement = 0;
8:     for i = 1, . . . , p do
9:       if fi(h1i(y)) > f̄i then
10:        x = y;
11:        x̄(i) = x;
12:        f̄i = fi(h1i(x̄(i)));
13:        improvement = 1;
14:      end if
15:    end for
16:    if improvement == 1 then
17:      go to line 5;
18:    end if
19:  end for
20: until x̂ = x;
21: X̃0 = {h10(x̄(i)) : i = 1, . . . , p};
22: X̄0 = arg max{f0(s) : s ∈ X̃0};
23: f̄0 = f0(s), s ∈ X̄0;
24: return s ∈ X̄0, f̄0;
1 − 1/α(G) = max_{x∈S} x^T AḠ x,    (2)

where S = {x ∈ ℜ^|V| : e^T x = 1, x ≥ 0} and e is the vector with all components equal to 1.
In formulation (1), the continuous feasible region [0, 1]n can be replaced
with the discrete one, {0, 1}n without altering the optimal objective function
value. Also, the feasible region of the Motzkin-Straus [9] formulation (2) can
be discretized by considering S ′ = {y = x/(eT x) : x ∈ {0, 1}n , x 6= 0} in
place of S, since there is always a global optimal solution x∗ of (2) in the
form of the characteristic vector of a maximum independent set I divided by
the independence number:
xi* = 1/|I| if i ∈ I;   xi* = 0 if i ∉ I.
Thus, we obtain the following discrete formulations:

α(G) = max_{x∈{0,1}^n} e^T x − (1/2) x^T AG x,    (3)

1 − 1/α(G) = max_{x∈S′} x^T AḠ x,    (4)

where S′ = {y = x/(e^T x) : x ∈ {0, 1}^n, x ≠ 0}.
We will use these two formulations in our experiments, with

f1(x) = e^T x − (1/2) x^T AG x;   f2(x) = x^T AḠ x.
Then the obvious choice for the transformation h12(x) is

h12(x) = 0 if x = 0, and h12(x) = x/(e^T x) if x ≠ 0.
As for the choice of the neighborhood, it should be noted that we are not
interested in the most efficient neighborhood that would yield an optimal
solution for most test instances. In fact, in order to be able to make a fair
comparison, we are more interested in a neighborhood structure that would
leave some space for improvement after finding an initial local optimum with
respect to one of the formulations used. Hence, we select the standard one-flip
neighborhood, which is commonly used in heuristics for unconstrained optimization of functions of binary variables. Namely, given a vector x ∈ {0, 1}n ,
its neighborhood consists of all y ∈ {0, 1}n that are Hamming distance 1 away
from x (i.e., y differs from x in exactly one component).
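The two objectives, the mapping h12, and the one-flip neighborhood can be written down directly; the sketch below (our code, not the experimental implementation) evaluates them on the 5-cycle C5, for which α = 2:

```python
import numpy as np

def f1(x, A):
    """Formulation (3): e^T x - (1/2) x^T A_G x on 0/1 vectors."""
    return x.sum() - 0.5 * (x @ A @ x)

def f2(x, A_comp):
    """Formulation (4): x^T A_Gbar x over the discretized simplex S'."""
    return x @ A_comp @ x

def h12(x):
    """Map a nonzero 0/1 vector to x / (e^T x); keep the zero vector as is."""
    s = x.sum()
    return x if s == 0 else x / s

def one_flip_neighbors(x):
    """All 0/1 vectors at Hamming distance 1 from x."""
    for i in range(len(x)):
        y = x.copy()
        y[i] = 1 - y[i]
        yield y

# The 5-cycle C5 (alpha = 2) and its complement.
n = 5
A = np.zeros((n, n))
for u, v in [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]:
    A[u, v] = A[v, u] = 1
A_comp = np.ones((n, n)) - A - np.eye(n)

x = np.array([1., 0., 1., 0., 0.])       # characteristic vector of I = {0, 2}
print(f1(x, A))                          # 2.0 = |I|, since I is independent
print(f2(h12(x), A_comp))                # 0.5 = 1 - 1/alpha(C5)
print(len(list(one_flip_neighbors(x))))  # 5 neighbors
```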
3.1 Results of experiments
To test the proposed algorithms, complements of selected DIMACS clique
benchmark graphs [4] were used. The number of vertices of tested graphs
ranged from 64 to 496. A set of 100 runs of each algorithm was performed
for every instance. The initial solution for each run was generated randomly
and was the same for all algorithms.
Table 1 presents the results of executing the BVOS method. The columns in each row of this table contain the graph instance name ("Instance") followed by its number of vertices ("|V|"). The next four columns contain the following information: the mean (over the total number of runs) of the values of the first found local optimal solutions for formulation (3) ("f̄1^0"), the mean of the maximum values of the objective function (3) found ("f̄1*"), the maximum (over the 100 trials) percentage improvement from the first found local optimal solution to the best found solution ("Best %"), and the best overall value found for the objective function (3) ("f1*") obtained under the BVOS method with the best improvement move. The last four columns contain the same information for the first improvement move strategy.

Table 1: Results for BVOS

                         BVOS (best improvement)       BVOS (first improvement)
Instance        |V|    f̄1^0    f̄1*    Best %   f1*    f̄1^0    f̄1*    Best %    f1*
brock200_1      200   14.53   15.72    45.45    18    13.31   14.24    40        17
brock200_2      200    7.58    8.04    33.33    10     7.04    7.39   100        10
brock200_3      200    9.81   10.5     37.50    12     8.88    9.58    66.67     12
brock200_4      200   10.95   11.93    44.44    14     9.79   11.11    71.43     13
c-fat200-1      200   11.31   11.31     0       12     8.94   12     1100        12
c-fat200-2      200   22.35   22.35     0       24    14.94   23.13  2300        24
c-fat200-5      200   57.26   57.26     0       58    47      57.58    96.55     58
hamming6-2       64   22.39   24.58    60       32    19.61   20.89    41.18     32
hamming8-2      256   80.29   87.42    33.33   128    67.52   70.86    29.17    112
hamming8-4      256   11.79   13.32    60       16     8.49   10.88   100        16
johnson8-2-4     28    3.41    3.41     0        4     3.26    3.26     0         4
johnson8-4-4     70    9.98   11.08    75       14     8.98   10       66.67     14
johnson16-2-4   120    6.89    7.01    33.33     8     7.43    7.43     0         8
johnson32-2-4   496   13.8    15.3     20       16    15.4    15.4      0        16
keller4         171    7.44    7.57    28.57     9     7.93    8.38    50        11
mann_a9          45   13.64   14.58    25       16    13.27   14.54    25        16
p_hat300-1      300    5.79    6.12    40        8     5.18    5.62   133.33      7
p_hat300-2      300   17.97   19.3     46.67    25    15.76   16.5     45.45     20
p_hat300-3      300   26.13   28.03    33.33    33    22.16   23.31    41.18     29
san200_0.9_1    200   45.03   45.03     0       46    37.77   41.11    60.71     55
san200_0.9_2    200   33.12   34.01    19.23    46    28.06   29.59    30.43     35
san200_0.9_3    200   25.85   27.22    32       34    25.67   27.5     25        31
sanr200_0.9     200   29.81   32.39    34.62    40    28.49   30.26    30.43     34

Table 2: Results for UVOS

                        UVOS (best improvement)     UVOS (first improvement)
Instance        |V|     f̄1*    f1*    f̄2*    f2*      f̄1*    f1*    f̄2*    f2*
brock200_1      200    16.36    20   16.23    20     15.47    18   15.40    18
brock200_2      200     8.12    10    7.97    10      7.89    10    7.80    10
brock200_3      200    10.78    13   10.58    13     10.2     13   10.13    13
brock200_4      200    12.39    15   12.24    15     11.61    14   11.52    14
c-fat200-1      200    11.77    12   11.77    12      6.78    12   10.03    12
c-fat200-2      200    22.38    24   22.38    24     16.43    24   19.94    24
c-fat200-5      200    57.31    58   57.31    58     57.61    58   57.61    58
hamming6-2       64    25.17    32   25.09    32     27.43    32   27.35    32
hamming8-2      256    88.98   128   88.95   128    116.64   128  116.57   128
hamming8-4      256    12.78    16   13.28    16     11.55    16   11.50    16
johnson8-2-4     28    -1.45     4    3        4      2.07     4    3.02     4
johnson8-4-4     70    12.24    14   12.40    14     11.49    14   11.47    14
johnson16-2-4   120  -131.24     8    5.02     8    -27.55     8    5.68     8
johnson32-2-4   496  -2052.6    16    8.6     16    -85.9     16   15.2     16
keller4         171     6.09     9    7.20     9      8.40    11    8.05    11
mann_a9          45    14.52    16   13.92    16     14.43    16   13.61    16
p_hat300-1      300     6.44     8    6.45     8      5.71     7    5.69     7
p_hat300-2      300    20.77    25   20.56    25     19.02    23   18.96    23
p_hat300-3      300    29.35    33   29.24    33     28.02    32   27.97    32
san200_0.9_1    200    45.04    46   44.99   45.02   47.41    68   44.88    68
san200_0.9_2    200    34.67    47   34.25    47     34.08    51   32.86    51
san200_0.9_3    200    26.2     35   26.20    35     28.9     35   28.10    35
sanr200_0.9     200    33.44    38   33.31    38     33.82    38   33.82    38
The results of applying the UVOS algorithm to the same graph instances are shown in Table 2. Similar to Table 1, the first two columns represent the graph instance name and its number of vertices, while the other two groups of four columns contain the mean ("f̄1*") and the maximum ("f1*") of objective values for formulation (3), and the mean ("f̄2*") and the maximum ("f2*") of objective values for formulation (4), obtained using the best and first improvement approaches, respectively.
The results of the experiments indicate that the overall performance of VOS on the given set of instances was rather encouraging. As can be seen from Table 1, in many cases it yielded a considerable (more than 50%) improvement in the value of the objective function under the BVOS with best improvement, with first improvement, or both. Only one instance showed no improvement under either BVOS strategy. On the other hand, there were three instances with a very large percentage improvement, indicating that the first obtained objective value was far from optimal and was improved considerably by BVOS with the first improvement move. In general, though, the better maximum objective values were generated by BVOS with the best improvement move. We also observed a tendency (about 70% of all cases) to obtain the same maximum objective values under both the BVOS and UVOS methods. The majority of the remaining 30% of the instances were solved better by the UVOS algorithm. We observed no significant difference in the performance of UVOS between the best improvement and first improvement approaches; in most cases both strategies showed similar results.
4 Conclusion
We introduced the general framework of the variable objective search for
combinatorial optimization, which is based on a simple observation that systematically combining different mathematical programming formulations of
the same combinatorial optimization problem may lead to better heuristic
solutions than those obtained by solving any of the considered formulations
individually. We proposed two different variations of the method: the basic variable objective search (BVOS), in which local search is performed sequentially with respect to the considered formulations, and the uniform variable
objective search (UVOS), which runs one local search that tries to look for
better solutions with respect to each formulation simultaneously. In the latter part of the paper, we demonstrate the merit of BVOS and UVOS by
applying them to two different formulations of the maximum independent
set problem in graphs.
While in this paper we restricted the discussion to two versions of VOS,
many other variations of the method can be developed, similarly to how numerous versions of the VNS have been proposed in the literature [6]. Since
the main objective of this paper is to introduce the general idea of this new
framework, we leave the exploration of its various potentially useful variations and evaluation of their practical performance on large-scale instances of
different combinatorial optimization problems for future research. In particular, the case of alternative continuous nonconvex formulations of the same
problem, where the neighborhood of a feasible point is naturally defined as
the intersection of the feasible region with an ε-ball centered at this point, is
an interesting direction to investigate.
References
[1] E. Aarts and J. K. Lenstra, editors. Local Search in Combinatorial Optimization. John Wiley & Sons, Chichester, 2003.
[2] J. Abello, S. Butenko, P. Pardalos, and M. Resende. Finding independent
sets in a graph using continuous multivariable polynomial formulations.
Journal of Global Optimization, 21:111–137, 2001.
[3] I. M. Bomze, M. Budinich, P. M. Pardalos, and M. Pelillo. The maximum
clique problem. In D.-Z. Du and P. M. Pardalos, editors, Handbook of
Combinatorial Optimization, pages 1–74. Kluwer Academic Publishers,
Dordrecht, The Netherlands, 1999.
[4] DIMACS. Cliques, Coloring, and Satisfiability: Second DIMACS Implementation Challenge. http://dimacs.rutgers.edu/Challenges/, 1995.
Accessed April 2010.
[5] F. Glover and G. Kochenberger, editors. Handbook Of Metaheuristics.
Springer, London, 2002.
[6] P. Hansen and N. Mladenović. Variable neighborhood search: Principles
and applications. European Journal of Operational Research, 130:449–
467, 2001.
[7] R. Horst, P. M. Pardalos, and N. V. Thoai. Introduction to Global Optimization. Kluwer Academic Publishers, Dordrecht, The Netherlands, 2nd edition, 2000.
[8] N. Mladenović and P. Hansen. Variable neighborhood search. Computers
and Operations Research, 24:1097–1100, 1997.
[9] T. S. Motzkin and E. G. Straus. Maxima for graphs and a new proof of
a theorem of Turán. Canad. J. Math., 17:533–540, 1965.