Transposition Tables for Constraint Satisfaction

Christophe Lecoutre, Lakhdar Sais, Sébastien Tabary and Vincent Vidal
CRIL-CNRS FRE 2499
Université d’Artois
Lens, France
{lecoutre, sais, tabary, vidal}@cril.univ-artois.fr

Copyright © 2007, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.

Abstract

In this paper, a state-based approach for the Constraint Satisfaction Problem (CSP) is proposed. The key novelty is an original use of state memorization during search to prevent the exploration of similar subnetworks. Classical techniques to avoid the resurgence of previously encountered conflicts involve recording conflict sets. This contrasts with our state-based approach, which records subnetworks – a snapshot of some selected domains – already explored. This knowledge is later used either to prune inconsistent states or to avoid recomputing the solutions of these subnetworks. Interestingly enough, the two approaches present some complementarity: different states can be pruned from the same partial instantiation or conflict set, whereas different partial instantiations can lead to the same state, which needs to be explored only once. Also, our proposed approach is able to dynamically break some kinds of symmetries (e.g. neighborhood interchangeability). The obtained experimental results demonstrate the promising prospects of state-based search.

Introduction

In classical heuristic search algorithms (A*, IDA*, ...) and game search algorithms (α-β, SSS*, ...), where nodes in the search tree represent world states and transitions represent moves, many states may be encountered several times, possibly at different depths. This is due to the fact that different sequences of moves from the initial state of the problem can yield identical situations of the world. Moreover, a state S in a node at depth i of the search tree cannot lead to a better solution than a node containing the same state S previously encountered at a depth j < i. As a consequence, some portions of the search space may be unnecessarily evaluated and explored several times, which may be costly.

The phenomenon of revisiting identical states reached from different sequences of transitions, better known as transpositions, was identified very early in the context of chess software (Greenblatt, Eastlake, & Crocker 1967; Slate & Atkin 1977; Marsland 1992). One solution to this problem is to store the encountered nodes, plus some related information (e.g. depth, heuristic evaluation), in a so-called transposition table. The data structure used to implement such a transposition table is classically a hash table whose key is computed from the description of the state, such as the key based on the logical XOR operator for chess (Zobrist 1970). The amount of memory in a machine being limited, the number of entries in a transposition table may be bounded. This technique has been adapted to heuristic search algorithms such as IDA* (Reinefeld & Marsland 1994), and is also successfully employed in modern automated STRIPS planners such as FF (Hoffmann & Nebel 2001) and YAHSP (Vidal 2004).

A direct use of transposition tables in complete CSP backtracking search algorithms is clearly useless, one property of this kind of algorithm being that the state of a constraint network (variables, constraints and reduced domains) cannot be encountered twice. Indeed, with a binary branching scheme, for example, once it has been proven that a positive decision X = v leads to a contradiction, the opposite decision X ≠ v is immediately taken in the other branch. In other words, in the first branch the domain of X is reduced to the singleton {v}, while in the second branch v is removed from the domain of X: obviously, no state where X = v has been asserted can be identical to a state where X ≠ v is true.

However, we show in this paper that two different sequences of decisions performed during the resolution of a constraint network P can lead to two networks P1 and P2 that can be reduced to the same subnetwork. We exhibit three reduction operators which, in that case, satisfy the following property: P1 is satisfiable if and only if P2 is satisfiable. These operators remove some selected variables and the constraints involving them. For example, depending on the operator, removed variables belong to the scope of universal constraints or have an associated domain that has not been filtered yet. In practice, once a subnetwork has been identified, it can simply be stored in a transposition table that is checked before expanding each node. A lookup in the table then avoids exploring a network that can be reduced to a subnetwork already encountered.

Technical Background

A Constraint Network (CN) P is a pair (X , C) where X is a finite set of n variables and C a finite set of e constraints. Each variable X ∈ X has an associated domain, denoted dom^P(X) or simply dom(X), which contains the set of values allowed for X. The set of variables of P will be denoted by vars(P). An instantiation t of a set {X1, ..., Xq} of variables is a set {(Xi, vi) | i ∈ [1, q] and vi ∈ dom(Xi)}.
value vi assigned to Xi in t will be denoted by t[Xi ]. Each
constraint C ∈ C involves a subset of variables of X , called
scope and denoted scp(C), and has an associated relation,
denoted rel(C), which is the set of instantiations allowed
for the variables of its scope. A solution to P is an instantiation of vars(P ) such that all the constraints are satisfied.
The set of all the solutions of P is denoted sol(P), and P is satisfiable if sol(P) ≠ ∅.
The Constraint Satisfaction Problem (CSP) is the NP-complete task of determining whether or not a given CN is
satisfiable. A CSP instance is then defined by a CN, and
solving it involves either finding one (or more) solution or
determining its unsatisfiability. To solve a CSP instance,
one can modify the CN by using inference or search methods. Usually, domains of variables are reduced by removing inconsistent values, i.e. values that cannot occur in any
solution. Indeed, it is possible to filter domains by considering some properties of constraint networks. Generalized
Arc Consistency (GAC) remains the central one (e.g. see
(Bessiere 2006)). It is for example maintained during search
by the algorithm MGAC, called MAC in the binary case.
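As an illustration (not part of the authors' implementation), the binary case of this filtering, arc consistency, can be sketched as follows. The dictionary-based encoding of domains and the representation of constraints as predicates over directed arcs are assumptions of the sketch:

```python
from collections import deque

def revise(domains, constraints, x, y):
    """Remove the values of x that have no support on y; report any change.
    constraints[(x, y)] is a compatibility predicate (an assumed encoding)."""
    check = constraints[(x, y)]
    supported = [vx for vx in domains[x]
                 if any(check(vx, vy) for vy in domains[y])]
    changed = len(supported) < len(domains[x])
    domains[x] = supported
    return changed

def ac3(domains, constraints):
    """Enforce arc consistency; return False iff some domain is wiped out."""
    queue = deque(constraints)              # all directed arcs (x, y)
    while queue:
        x, y = queue.popleft()
        if revise(domains, constraints, x, y):
            if not domains[x]:
                return False                # corresponds to phi(P) = ⊥
            # the domain of x shrank: revisit the arcs pointing at x
            queue.extend((z, w) for (z, w) in constraints if w == x and z != y)
    return True

# Two pigeons, two holes (P0 != P1): arc consistent, nothing pruned.
doms = {"P0": [0, 1], "P1": [0, 1]}
cons = {("P0", "P1"): lambda a, b: a != b,
        ("P1", "P0"): lambda a, b: a != b}
assert ac3(doms, cons)
```

The inference operator φ used in the rest of the paper can be thought of as this kind of fixpoint propagation.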
From now on, we will consider an inference operator φ
that enforces a domain filtering consistency (Debruyne &
Bessiere 2001) and can be employed at any step of a tree
search. For a constraint network P and a set of decisions
Δ, P |Δ is the CN derived from P such that, for any positive
decision (X = v) ∈ Δ, dom(X) is restricted to {v}, and,
for any negative decision (X = v) ∈ Δ, v is removed from
dom(X). φ(P) is the CN obtained from P by applying the inference operator φ. If there exists a variable with an empty domain in φ(P), then P is clearly unsatisfiable, denoted φ(P) = ⊥.
A constraint is universal if every valid instantiation built
from the current domains of its variables satisfies it.
Definition 1 Let P = (X , C ) be a CN . A constraint
C ∈ C with scp(C) = {X1 , . . . , Xr } is universal if
∀v1 ∈ dom(X1 ), . . . , ∀vr ∈ dom(Xr ), ∃t ∈ rel(C) such
that t[X1 ] = v1 , . . . , t[Xr ] = vr .
A constraint subnetwork can be obtained from a CN by
removing a subset of its variables and the constraints involving them.
Definition 2 Let P = (X , C) be a CN and S ⊆ X . The constraint subnetwork P \ S is the CN (X ′, C ′) such that X ′ = X \ S and C ′ = {C ∈ C | scp(C) ∩ S = ∅}.
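Definitions 1 and 2 can be made concrete with a small sketch, assuming extensional constraints given as (scope, relation) pairs; this encoding is illustrative, not the paper's data structures:

```python
from itertools import product

def is_universal(scope, rel, domains):
    """Definition 1: every valid tuple over the current domains is allowed."""
    return all(t in rel for t in product(*(domains[x] for x in scope)))

def subnetwork(variables, constraints, domains, S):
    """Definition 2: drop the variables of S and every constraint touching S."""
    keep_vars = [x for x in variables if x not in S]
    keep_cons = [(scope, rel) for (scope, rel) in constraints
                 if not set(scope) & set(S)]
    return keep_vars, keep_cons, {x: domains[x] for x in keep_vars}

# X != Y with dom(X) = {0} and dom(Y) = {1}: the constraint is universal,
# so removing {X, Y} leaves only the isolated variable Z.
doms = {"X": [0], "Y": [1], "Z": [2, 3]}
neq = (("X", "Y"), {(0, 1)})
assert is_universal(*neq, doms)
vs, cs, ds = subnetwork(["X", "Y", "Z"], [neq], doms, {"X", "Y"})
assert vs == ["Z"] and cs == []
```

Note that, as the paper remarks later, this brute-force universality test enumerates all valid tuples and thus grows exponentially with the arity of the constraint.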
Illustration

Let us now illustrate our purpose with the classical pigeon holes problem with five pigeons. This problem involves five variables P0, ..., P4 that represent the pigeons, whose domains, initially equal to {0, ..., 3}, represent the holes. The constraints state that two pigeons cannot be in the same hole, making this problem unsatisfiable as there are five pigeons for only four holes. They may be expressed with a clique of binary constraints: P0 ≠ P1, P0 ≠ P2, ..., P1 ≠ P2, ... Figure 1 depicts a partial view of a search tree for this problem, built by MAC with a binary branching scheme, and the state of the domains at each node.

Figure 1: Pigeon holes: partial search tree

We can first remark that the six nodes n1, ..., n6 represent networks that are all different, as the domains of the variables differ by at least one value. However, let us focus on nodes n3 and n6. The only difference lies in the domains of P0 and P1, which are respectively reduced to the singletons {0} and {1} in n3, and {1} and {0} in n6. The domains of the other variables P2, P3 and P4 are all equal to {2, 3}. The networks associated with nodes n3 and n6 are represented in Figure 2, including all instantiations allowed for each constraint. We can see that the structure of these networks is very similar, the only difference being the inversion of the values 0 and 1 between P0 and P1.

Figure 2: Pigeon holes: two similar subnetworks

The two crucial points about n3 and n6 are the following: (1) neither P0 nor P1 will play any role in subsequent search, and (2) checking the satisfiability of (the network attached to node) n3 is equivalent to checking the satisfiability of n6. Point (1) is easy to see: as arc consistency (AC) is maintained, all constraints involving P0 and P1 are universal: whatever values are assigned to the other variables, these constraints will be satisfied. Variables P0 and P1 can thus be disconnected from the constraint networks in n3 and n6. As a consequence, we can immediately see that point (2) is true: the constraint subnetworks consisting of the remaining variables P2, P3 and P4 and the constraints involving them are equal, and consequently n3 is satisfiable if and only if n6 is satisfiable. Then, if we had stored that subnetwork in a transposition table after proving the unsatisfiability of n3, we could have avoided expanding n6 by a simple lookup in the table.

Identifying Constraint Subnetworks

Storing complete states in a transposition table is not relevant, as they cannot be encountered twice by a complete CSP search algorithm such as MGAC. We therefore have to address the problem of identifying constraint subnetworks that may be reached several times during a tree search. Such subnetworks should ideally be as general as possible while requiring only a small amount of memory. We define in this section three reduction operators whose objective is essentially to minimize the number of recorded variables. Intuitively, the fewer variables are recorded, the higher the pruning capability. This also contributes to reducing memory consumption.

Preserving Solutions (operator ρsol)

The first operator, denoted ρsol, preserves the set of solutions of a given CN. Applied to a network, it consists in removing so-called s-eliminable variables, which have a singleton domain and only appear in universal constraints.

Definition 3 Let P = (X , C) be a CN. A variable X ∈ X is s-eliminable if |dom(X)| = 1 and ∀C ∈ C | X ∈ scp(C), C is universal. The set of s-eliminable variables of P is denoted by Selim(P).

The operator ρsol removes all s-eliminable variables from a CN as well as all constraints involving at least one of them.

Definition 4 Let P be a CN. ρsol(P) = P \ Selim(P).

The solutions of a network P can be enumerated by expanding the solutions of the subnetwork ρsol(P) with the interpretation built from the s-eliminable variables. Indeed, domains of removed variables are singletons, and removed constraints have no more impact on the network.

Proposition 1 Let P be a CN and t = {(X, v) | X ∈ Selim(P) ∧ dom(X) = {v}}. Then sol(P) = {s ∪ t | s ∈ sol(ρsol(P))}.

Proof. Obviously, any solution of P also satisfies all constraints of ρsol(P). The converse is immediate: any solution s of ρsol(P) can be extended to a unique solution of P since eliminated variables (of Selim(P)) have a singleton domain and eliminated constraints are universal. □

Preserving Satisfiability (operator ρuni)

The second operator, denoted ρuni, preserves the satisfiability of a given CN (but not necessarily all its solutions). Applied to a network, it removes so-called u-eliminable variables. Such variables only appear in universal constraints.

Definition 5 Let P = (X , C) be a CN. A variable X ∈ X is u-eliminable if ∀C ∈ C | X ∈ scp(C), C is universal. The set of u-eliminable variables of P is denoted by Uelim(P).

The operator ρuni removes u-eliminable variables from a CN as well as constraints involving at least one of them.

Definition 6 Let P be a CN. ρuni(P) = P \ Uelim(P).

Proposition 2 A constraint network P is satisfiable if and only if ρuni(P) is satisfiable.

Proof. Removing any universal constraint does not change the satisfiability of a network. Here, we only remove universal constraints and variables that become disconnected from the network. □

Satisfiability is also preserved by ρsol since eliminated constraints have no more impact on the network (as they are universal). It is clear that for any CN P, ρuni(P) is a subnetwork of ρsol(P) since s-eliminable variables are also u-eliminable. Hence, ρsol can be considered as a special case of ρuni.

Reducing Subnetworks (operator ρred)

The third operator, denoted ρred, extracts a reduced subnetwork from a current network by removing both u-eliminable variables and so-called r-eliminable variables. The latter correspond to variables whose domain remains unchanged after taking a set of decisions and applying an inference operator.

Definition 7 Let P = (X , C) be a CN, φ an inference operator, Δ a set of decisions and P′ = φ(P|Δ). A variable X ∈ X is r-eliminable in P′ w.r.t. P if dom^P′(X) = dom^P(X). The set of r-eliminable variables of P′ is denoted by Relim^P(P′).

Definition 8 Let P = (X , C) be a CN, φ an inference operator, Δ a set of decisions and P′ = φ(P|Δ). ρred(P′) = ρuni(P′) \ Relim^P(P′).

The main result of this paper states that, when two networks derived from a given network can be reduced to the same subnetwork, the satisfiability of one determines the satisfiability of the other.

Proposition 3 Let P be a CN, φ an inference operator, and Δ1, Δ2 two sets of decisions. Let P1 = φ(P|Δ1) and P2 = φ(P|Δ2). If ρred(P1) = ρred(P2), then P1 is satisfiable if and only if P2 is satisfiable.

Proof. Let us show that sol(P1) ≠ ∅ ⇒ sol(P2) ≠ ∅. From any solution s ∈ sol(P1), it is possible to build a solution s′ ∈ sol(P2) as follows: if s[X] ∈ dom^P2(X) then s′[X] = s[X], otherwise s′[X] = a where a is any value taken in dom^P2(X). For any constraint C of P, we know that s satisfies C. Let us show now that s′ also satisfies C. It is clear that this is the case if ∀X ∈ scp(C), s[X] = s′[X]. Next, if ∃X ∈ scp(C) | s[X] ≠ s′[X], we can show that C is universal, and consequently, C is satisfied by s′. Indeed, we show below that s′[X] ≠ s[X] implies X ∈ Uelim(P2), from which we can deduce that any constraint involving X is universal (by definition of Uelim). Let us suppose that X ∈ vars(ρuni(P2)) (and then, X ∉ Uelim(P2)). If X ∈ Relim^P(P2), then it is immediate that s[X] ∈ dom^P2(X), since dom^P2(X) = dom^P(X) ⊇ dom^P1(X) (and so, s′[X] ≠ s[X] is impossible). Otherwise, it means that X belongs to ρred(P2), and consequently also belongs to ρred(P1). As the domains of the variables of these two subnetworks are equal (by hypothesis), by construction of s′, we cannot have s′[X] ≠ s[X]. So, we can conclude that X ∈ Uelim(P2). Finally, we can apply the same reasoning to show that sol(P2) ≠ ∅ ⇒ sol(P1) ≠ ∅. □

Proposition 4 Let P be a binary CN, Δ a set of decisions, and P′ = (X ′, C ′) = ρred(AC(P|Δ)). We then have: ∀X ∈ X ′, 1 < |dom^P′(X)| < |dom^P(X)|.

Proof. A variable X with a singleton domain is removed since all constraints involving X are universal. Indeed, as arc consistency (AC) is enforced, all values for any variable Y connected to X (constraints being binary) are compatible with the single value of X. By definition of ρred, a variable X such that |dom^P′(X)| = |dom^P(X)| is removed. □

Pruning Capability of Reduction Operators

In the context of checking the satisfiability of a CSP instance, we now discuss the pruning capability of the operators we defined. Clearly, the more variables are removed from a network P1 to produce a subnetwork P1′, the more networks will be discarded by P1′. Indeed, if a network P2 can be discarded because it can be reduced to P1′, then the domains of the variables of P2 that do not appear in P1′ can be in any state: they either contain all the values w.r.t. the initial problem, or are reduced in such a way that they belong to universal constraints only. It is easy to see that, by definition of the reduction operators, vars(ρred(P)) ⊆ vars(ρuni(P)) ⊆ vars(ρsol(P)). We can then expect ρred to have a better (or at least equal) pruning capability than ρsol and ρuni.

This is illustrated by the networks depicted in Figure 3. On the first one, the assignment Y = 1 on an initial network P (where all domains are {1, 2, 3}) leaves the domain of W unchanged and eliminates the value 2 from both dom(X) and dom(Z), yielding the derived network P1 = AC(P|Y=1). On the second one, the assignment W = 1 on P leaves the domain of Y unchanged and eliminates the value 2 from both dom(X) and dom(Z), yielding the derived network P2 = AC(P|W=1). Whereas ρuni produces two different subnetworks from P1 and P2, ρred applied to them leads to the same reduced subnetwork, whatever the reason for variable elimination (u-eliminable or r-eliminable). As a consequence, we know from Proposition 3 that, once P1 or P2 has been explored, the exploration of the other one is useless to determine its satisfiability.

Figure 3: Network reduction by ρuni and ρred

Search Algorithm Exploiting ρred

In this section, we succinctly present an algorithm that performs a depth-first search, maintains a domain filtering consistency φ (at least checking that constraints involving only singleton-domain variables are satisfied) and prunes inconsistent states using ρred. The main idea is to record in a transposition table all reduced subnetworks extracted from nodes that have been proven inconsistent. Then, nodes whose extracted reduced subnetwork already belongs to the transposition table can be safely pruned.

The recursive function solve determines the satisfiability of a network P (see Algorithm 1). At a given stage, if the current network (after enforcing φ) is inconsistent, false is returned (line 2), whereas if all variables have been assigned, true is returned (line 3). Otherwise, we check if the current node can be pruned thanks to the transposition table (line 4). If search continues, we select a pair (X, a) and recursively call solve by considering two branches, one labelled with X = a and the other with X ≠ a (lines 5 and 6). If a solution is found, true is returned. Otherwise, the current network has been proven inconsistent and its reduced subnetwork is added to the transposition table (line 7) before returning false.

Algorithm 1 solve(Pinit = (X , C) : CN) : Boolean
1: P ← φ(Pinit)
2: if P = ⊥ then return false
3: if ∀X ∈ X , |dom(X)| = 1 then return true
4: if ρred(P) ∈ transposition table then return false
5: select a pair (X, a) with |dom(X)| > 1 ∧ a ∈ dom(X)
6: if solve(P|X=a) or solve(P|X≠a) then return true
7: add ρred(P) to transposition table
8: return false

This algorithm could be slightly modified to enumerate all the solutions of a network by using ρsol. To do that, the transposition table should store all encountered subnetworks (not only the unsatisfiable ones), along with an additional piece of information: their solution set. When a network P is such that ρsol(P) already belongs to the table, the solutions of P can be expanded from the solutions of ρsol(P) stored in the table, with the interpretation built from the s-eliminable variables of P (cf. Proposition 1). Similarly, to count the number of solutions, one can associate with each reduced subnetwork stored in the table the number of its solutions.
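The control flow of Algorithm 1 can be rendered compactly as follows. This is a sketch only: the dictionary encoding, the toy all-different propagator, and the full-state table key are illustrative stand-ins (the paper's ρred stores far less than the full state).

```python
def solve(P, phi, rho, table):
    """Sketch of Algorithm 1. P maps variables to lists of values; phi is the
    inference operator (returns None on domain wipeout); rho builds the key
    stored in the transposition table (a stand-in for rho_red here)."""
    P = phi(P)
    if P is None:                                    # line 2: phi(P) = ⊥
        return False
    if all(len(d) == 1 for d in P.values()):         # line 3: all assigned
        return True
    key = rho(P)
    if key in table:                                 # line 4: transposition hit
        return False
    # line 5: pick a pair (X, a), smallest-domain first
    x = min((v for v in P if len(P[v]) > 1), key=lambda v: len(P[v]))
    a = P[x][0]
    left = dict(P, **{x: [a]})                            # branch X = a
    right = dict(P, **{x: [v for v in P[x] if v != a]})   # branch X != a
    if solve(left, phi, rho, table) or solve(right, phi, rho, table):
        return True                                  # line 6
    table.add(key)                                   # line 7: record refuted state
    return False

def phi_alldiff(P):
    """Toy inference operator for a clique of disequalities (pigeon holes):
    prune the values taken by singleton-domain variables, to fixpoint."""
    P = {v: list(d) for v, d in P.items()}
    while True:
        singles = [d[0] for d in P.values() if len(d) == 1]
        if len(singles) != len(set(singles)):
            return None                              # two pigeons, one hole
        changed = False
        for v, d in P.items():
            if len(d) > 1:
                nd = [x for x in d if x not in singles]
                if not nd:
                    return None
                if len(nd) < len(d):
                    P[v], changed = nd, True
        if not changed:
            return P

full_state = lambda P: frozenset((v, tuple(d)) for v, d in P.items())
# Three pigeons, two holes: unsatisfiable.
assert not solve({p: [0, 1] for p in "ABC"}, phi_alldiff, full_state, set())
```

With a real ρred as rho, distinct nodes that reduce to the same subnetwork would share a single table entry, which is precisely where the pruning comes from.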
                        brelaz                dom/wdeg
                     ¬SBS      SBS         ¬SBS      SBS
Pigeons-11  cpu     265.48      2.33      272.73      6.35
            nodes   4,421K     5,065      4,441K    61,010
            hits         0     4,008           0    40,014
Pigeons-13  cpu    timeout      4.57     timeout     26.44
            nodes        −    24,498           −      327K
            hits         −    20,350           −      245K
Pigeons-15  cpu    timeout     12.66     timeout     81.58
            nodes        −      115K           −      900K
            hits         −    98,124           −      728K
Pigeons-18  cpu    timeout    116.19     timeout   timeout
            nodes        −    1,114K           −         −
            hits         −      983K           −         −

Table 1: Cost of running MAC without and with SBS on Pigeon Hole instances

                       dom/wdeg
                     ¬SBS      SBS
scen11-f8   cpu      14.84     15.74
            nodes   15,045    13,858
            hits         0       370
scen11-f7   cpu      57.68     15.36
            nodes     113K    14,265
            hits         0       919
scen11-f6   cpu     110.18     18.67
            nodes     217K    18,938
            hits         0     1,252
scen11-f5   cpu     550.55    162.32
            nodes   1,147K      257K
            hits         0    17,265

Table 2: Cost of running MAC without and with SBS on hard RLFAP instances

State-Based Search: Scope and Relationships

The scope of our approach is related to several key issues in constraint programming. Indeed, state-based search is able to automatically eliminate some kinds of symmetries during search, and presents strong complementarity with nogood
Neighborhood interchangeability is a weak form of (full)
interchangeability (Freuder 1991) that can be exploited in
practice to reduce the search space. Given a variable X, two
values a and b in dom(X) are neighborhood interchangeable
if for any constraint C involving X, the set of supports of a
for X in C is equal to the set of supports of b for X in C.
We can observe that our state-based approach discards redundant states coming from interchangeable values. Indeed,
if P is a network such that values a and b for a variable X
of P are interchangeable, it clearly appears that the subnetworks extracted from P |X=a and P |X=b are identical after
applying any ρ operator.
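The neighborhood interchangeability test just described can be sketched for binary extensional constraints as follows; the (scope, relation) encoding is an assumption of the sketch:

```python
def supports(rel, scope, domains, X, a):
    """Supports of (X, a) on the other variable of a binary constraint
    given as (scope, rel), with rel a set of allowed tuples."""
    pos = scope.index(X)
    other = scope[1 - pos]
    return {t[1 - pos] for t in rel
            if t[pos] == a and t[1 - pos] in domains[other]}

def neighborhood_interchangeable(constraints, domains, X, a, b):
    """a and b are neighborhood interchangeable for X iff their support
    sets coincide in every (binary) constraint involving X."""
    return all(supports(rel, scope, domains, X, a) ==
               supports(rel, scope, domains, X, b)
               for scope, rel in constraints if X in scope)

# Values 0 and 1 of X are both supported only by 5 on Y: interchangeable.
doms = {"X": [0, 1, 2], "Y": [5, 6]}
cons = [(("X", "Y"), {(0, 5), (1, 5), (2, 6)})]
assert neighborhood_interchangeable(cons, doms, "X", 0, 1)
assert not neighborhood_interchangeable(cons, doms, "X", 0, 2)
```

In such a situation, the subnetworks extracted from P|X=0 and P|X=1 coincide, so the second branch is pruned by the table without any explicit symmetry detection.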
Interchangeability is related to symmetry (Cohen et al.
2006) whose objective is to discard parts of the search tree
that are symmetrical to already explored subtrees. This can
lead to a dramatic reduction of the search effort required to
solve a constraint network. To reach this goal, one has first
to identify symmetries and then, to exploit them. Different
approaches have been proposed to exploit symmetries; the
most related one being symmetry breaking via dominance
detection (SBDD).
The principle of SBDD is the following: every time
the search algorithm reaches a new node, one just checks
whether this node is equivalent to or dominated by a node
that has already been expanded earlier. This approach requires (1) the memorization of information about nodes explored during search (2) the exploitation of this information
by considering part or all of the symmetries from the symmetry group associated with the initial network. The information stored for a node can be the current domains of all
variables at the node, called Global Cut Seed in (Focacci &
Milano 2001) and pattern in (Fahle, Schamberger, & Sellman 2001). But it can also be reduced to the set of decisions
labelling the path from the root to the node (Puget 2005). In
our case, we only store the current domains of a subset of
variables (for ρred , those that are neither u-eliminable nor
r-eliminable) of the initial network, which allows us to automatically break some kinds of local symmetry. Interestingly,
one can imagine combining the two approaches, using the general “nogoods” extracted by our method with dominance detection via a set of symmetries.
Finally, we discuss the complementarity between state
based search and nogood recording (e.g. see (Dechter
1990)). On the one hand, a given state corresponding to
a subnetwork already shown unsatisfiable represents in a
compact way an exponential number of nogoods. On the
other hand, a given (minimal) nogood represents an exponential number of instantiations. The complementarity of these two paradigms appears in their ability to avoid redundant search. Indeed, a given nogood avoids (or cuts) several states, whereas a given state cuts several partial instantiations, i.e. instantiations leading to the same state.
Experiments

In order to show the practical interest of state-based search, we have conducted an experimentation on benchmarks from the second CSP solver competition (http://cpai.ucc.ie/06/Competition.html) on a PC Pentium IV 2.4GHz, 1,024 MB, under Linux. We have used the algorithm MGAC, and studied the impact of state-based search, denoted SBS, with various variable ordering heuristics. Performance is measured in terms of the number of visited nodes (nodes), cpu time in seconds (cpu) and the number of nodes discarded by SBS (hits). Remark that SBS can be applied to constraints defined in extension or in intension (but, according to the selected reduction operator, dealing with global constraints may involve some specific treatment).
We have implemented Algorithm 1, but have considered a subset of Uelim, as the cost of determining the u-eliminable variables involved in non-binary constraints grows exponentially with the arity of the constraints. More precisely, in our implementation, the ρuni operator (called by ρred) only removes the variables with a singleton domain that are involved in constraints binding at most one non-singleton-domain variable. Computing this restricted set can be done in linear time. For binary networks, any variable with a singleton domain is automatically removed by our operator (see Proposition 4). The
transposition table used to store the subnetworks is implemented as a hash table whose key is the concatenation of
couples (id, dom) where id is a unique integer associated
with each variable and dom the domain of the variable itself
represented as a bit vector. To search one solution only, no
additional data needs to be stored in an entry of the table, as
the presence of a key is sufficient to discard a node.
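The key construction just described can be sketched as follows; the function name and argument layout are illustrative, only the (id, bit-vector) scheme comes from the text:

```python
def state_key(ids, domains, initial_domains, kept_vars):
    """Transposition-table key: concatenated (id, bit-vector) pairs for the
    variables kept by the reduction operator (a sketch of the described scheme)."""
    parts = []
    for x in sorted(kept_vars, key=lambda v: ids[v]):
        bits = 0
        for i, v in enumerate(initial_domains[x]):
            if v in domains[x]:
                bits |= 1 << i          # bit i set iff the i-th value remains
        parts.append((ids[x], bits))
    return tuple(parts)                 # hashable: usable directly as a set key

ids = {"X": 0, "Y": 1}
init = {"X": [1, 2, 3], "Y": [1, 2, 3]}
doms = {"X": [1, 3], "Y": [2]}
# X keeps values 1 and 3 -> 0b101 = 5; Y keeps value 2 -> 0b010 = 2
assert state_key(ids, doms, init, ["Y", "X"]) == ((0, 5), (1, 2))
```

Sorting by variable id makes the key canonical, so two nodes reducing to the same subnetwork always hash to the same entry.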
Table 1 presents results obtained on some pigeon hole instances. One can observe the interest of SBS on this problem since many nodes can be discarded. Here, one can note that it is more interesting to use the heuristic brelaz (identical results are obtained with dom/ddeg (Bessiere & Régin 1996)) than dom/wdeg (Boussemart et al. 2004). This can be explained by the fact that the former is closer to the lexicographic order, which is well adapted for this problem.

In Table 2, we focus on some difficult real-world instances of the Radio Link Frequency Assignment Problem (RLFAP). Even with SBS, these instances cannot be solved within 1,200 seconds when using brelaz or dom/ddeg, so we only present the results obtained with dom/wdeg. We can see about a 4-fold improvement when using SBS. In Table 3, we can see the results obtained for some binary and non-binary instances (dubois and pret instances involve ternary constraints) for which our approach is effective.

                                         brelaz                        dom/ddeg                      dom/wdeg
Instances                            ¬SBS        SBS              ¬SBS        SBS              ¬SBS        SBS
composed-25-10-20-4-ext  cpu        179.84       4.27             2.82        2.6              2.7         2.57
              nodes (hits)          1,771K       9,944 (2,609)    1,644       784 (24)         262         255 (5)
composed-25-10-20-9-ext  cpu        857.07       3.73            12.13       10.25             2.58        2.66
              nodes (hits)             10M       7,935 (1,738)   75,589      54,245 (2,486)    323         323 (0)
dubois-21-ext            cpu       timeout     480.98           timeout     384.84           911.51      295.81
              nodes (hits)               −       6,292K (2,097K)      −       4,194K (2,097K)  16M         3,496K (1,573K)
dubois-22-ext            cpu       timeout    timeout           timeout    timeout           timeout     527.19
              nodes (hits)               −          −                 −          −                −        6,641K (3,146K)
pret-60-25-ext           cpu        687.31       5.65           471.16        5.82           416.49        2.27
              nodes (hits)             12M      55,842 (13,188)  7,822K      47,890 (13,188)  7,752K       4,080 (1,384)
pret-150-25-ext          cpu       timeout    timeout           timeout    timeout           timeout      10.34
              nodes (hits)               −          −                 −          −                −       97,967 (37,457)

Table 3: Cost of running MGAC without and with SBS on structured instances

We can summarize the results of our experimentation as follows. When SBS is ineffective, the solver is slowed down by approximately 15%. Yet, by analysing the behaviour of SBS, one can decide at any time to stop using it and free memory: the cpu time lost by the solver is then bounded by the time allotted to the analysis. When SBS is effective, and this is the case on some series, the improvement can be very significant in cpu time and number of solved instances.

One can wonder about the amount of memory required to store the different states. Interestingly, as u-eliminable and r-eliminable variables are removed, a lot of space can be saved, in particular on sparse constraint graphs. For instance, only 265 MiB were necessary to record the 50,273 different subnetworks when solving (in 162 seconds) the large instance scen11-f5, which involves 680 variables with domains of up to 39 values.

Conclusion

In this paper, we provided a proof of concept of the exploitation, for constraint satisfaction, of a well-known technique widely used in search: pruning from transpositions. This had not been addressed so far since, in CSP, contrary to classical search, two branches of a search tree cannot lead to the same state. This led us to define some reduction operators that keep partial information from a node, sufficient to detect constraint networks that do not need to be explored.

We addressed the theoretical and practical aspects of how to exploit these operators in terms of equivalence between nodes. Two immediate prospects of this work concern the definition of more powerful reduction operators and the exploitation of dominance properties between nodes. Also, many links with the concept of symmetry remain to be investigated, and we can expect a cross-fertilization between state-based search and symmetry-breaking methods.

Acknowledgments

This paper has been supported by the CNRS and the ANR “Planevo” project no JC05 41940.

References

Bessiere, C., and Régin, J. 1996. MAC and combined heuristics: two reasons to forsake FC (and CBJ?) on hard problems. In Proceedings of CP’96, 61–75.
Bessiere, C. 2006. Constraint propagation. In Handbook of Constraint Programming. Elsevier. Chapter 3.
Boussemart, F.; Hemery, F.; Lecoutre, C.; and Sais, L. 2004. Boosting systematic search by weighting constraints. In Proceedings of ECAI’04, 146–150.
Cohen, D.; Jeavons, P.; Jefferson, C.; Petrie, K.; and Smith, B. 2006. Symmetry definitions for constraint satisfaction problems. Constraints 11(2-3):115–137.
Debruyne, R., and Bessiere, C. 2001. Domain filtering consistencies. Journal of Artificial Intelligence Research 14:205–230.
Dechter, R. 1990. Enhancement schemes for constraint processing: backjumping, learning and cutset decomposition. Artificial Intelligence 41:273–312.
Fahle, T.; Schamberger, S.; and Sellman, M. 2001. Symmetry breaking. In Proceedings of CP’01, 93–107.
Focacci, F., and Milano, M. 2001. Global cut framework for removing symmetries. In Proceedings of CP’01, 77–92.
Freuder, E. 1991. Eliminating interchangeable values in constraint satisfaction problems. In Proceedings of AAAI’91, 227–233.
Greenblatt, R.; Eastlake, D.; and Crocker, S. 1967. The Greenblatt chess program. In Proceedings of the Fall Joint Computer Conference, 801–810.
Hoffmann, J., and Nebel, B. 2001. The FF planning system: Fast plan generation through heuristic search. Journal of Artificial Intelligence Research 14:253–302.
Marsland, T. 1992. Computer chess and search. In Encyclopedia of Artificial Intelligence. J. Wiley & Sons. 224–241.
Puget, J. 2005. Symmetry breaking revisited. Constraints 10(1):23–46.
Reinefeld, A., and Marsland, T. A. 1994. Enhanced iterative-deepening search. IEEE Transactions on Pattern Analysis and Machine Intelligence 16(7):701–710.
Slate, D., and Atkin, L. 1977. Chess 4.5: The Northwestern University chess program. In Chess Skill in Man and Machine. Springer-Verlag. 82–118.
Vidal, V. 2004. A lookahead strategy for heuristic search planning. In Proceedings of ICAPS’04, 150–159.
Zobrist, A. L. 1970. A new hashing method with applications for game playing. Technical Report 88, Computer Sciences Dept., Univ. of Wisconsin.