A Multistart Local Search Heuristic for Knapsack Problem
Geng Lin
Department of Mathematics, Minjiang University, Fuzhou, China
(lingeng413@163.com)
Abstract - The knapsack problem is one of the classical
combinatorial optimization problems and has many
applications. It is known to be NP-hard. In this paper we
propose a multistart local search heuristic for solving the
knapsack problem. First, the knapsack problem is converted
into an unconstrained integer program by a penalty
method. Then an iterative local search method is presented
to solve the resulting unconstrained integer program.
Computational results on three benchmarks show that
the proposed algorithm finds high-quality solutions in an
effective manner.
Keywords - knapsack problem, local search, heuristic
I. INTRODUCTION
We are given n items to pack in a knapsack of capacity c.
Each item i is associated with a weight w_i and a profit p_i.
The objective of the knapsack problem is to maximize the
profit sum without letting the weight sum exceed c.
The problem can be formulated mathematically as
follows [1][2]:
(KP)    max f(x) = \sum_{i=1}^{n} p_i x_i
        s.t.  \sum_{i=1}^{n} w_i x_i \le c,
              x \in S,
where S  {0,1}n and xi takes a value of 1 if item i is to
be included in the knapsack, and 0 otherwise. Without
loss of generality, we assume that wi  c , for i  1, , n
to ensure that each item considered fits into the knapsack,
n
and that
w
i 1
i
 c to avoid trivial solutions.
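With this formulation, evaluating a candidate solution amounts to two sums: one for the capacity constraint and one for the objective. The following is a minimal sketch with hypothetical toy data (not from the paper's benchmarks):

```python
# Toy instance (hypothetical data, for illustration only).
weights = [4, 3, 2, 5]
profits = [10, 7, 4, 9]
c = 8

def feasible(x, weights, c):
    """Check the knapsack constraint: sum(w_i * x_i) <= c."""
    return sum(w * xi for w, xi in zip(weights, x)) <= c

def profit(x, profits):
    """Objective f(x) = sum(p_i * x_i)."""
    return sum(p * xi for p, xi in zip(profits, x))

x = [1, 1, 0, 0]           # pack items 1 and 2
assert feasible(x, weights, c)
print(profit(x, profits))  # -> 17
```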
KP is one of the classical problems in combinatorial
optimization and has many applications [3]
in production, logistics, material cutting, and financial
problems. In solving large combinatorial optimization
problems, KP also arises as a sub-problem. It has been
widely studied over the last few decades
due to its theoretical interest and its wide applicability;
see Kellerer et al. [4] and the references therein. KP is known
to be NP-hard, so polynomial-time exact algorithms
can exist only if P = NP. It can be
solved in pseudo-polynomial time by dynamic
programming [5]. Many heuristic algorithms have been
proposed for approximately solving the knapsack
problem, such as tabu search [6], genetic algorithms [7,8,9],
the artificial fish school algorithm [10], and the ant colony algorithm [11].
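The pseudo-polynomial dynamic program mentioned above can be sketched as follows. This is the textbook recurrence over capacities with hypothetical toy data, not code from [5]:

```python
def knapsack_dp(weights, profits, c):
    """Pseudo-polynomial DP for 0-1 knapsack, O(n*c) time.
    dp[j] holds the best profit achievable with capacity j."""
    dp = [0] * (c + 1)
    for w, p in zip(weights, profits):
        # Iterate capacities downward so each item is used at most once.
        for j in range(c, w - 1, -1):
            dp[j] = max(dp[j], dp[j - w] + p)
    return dp[c]

# Toy instance (hypothetical data).
print(knapsack_dp([4, 3, 2, 5], [10, 7, 4, 9], 8))  # -> 17
```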
Local search algorithms are widely applied to
numerous hard optimization problems, including
problems from mathematics, operations research, and
engineering. A local search algorithm starts from an initial
solution and iteratively moves to a neighboring solution.
Every solution has more than one neighbor, and the
choice of which one to move to is made using only
information about the neighborhood of the current solution.
When no improving solution is present in the
neighborhood, local search is trapped in a local optimum.
In this paper, a new local search method is proposed
for the knapsack problem. When the local search is stuck at a
locally optimal solution, we restart the local search
procedure from a new initial solution.
The remainder of the paper is arranged as follows. In
Section II, some definitions and local search methods
that have been used in the literature are introduced.
Section III presents a new multistart local search method
for the knapsack problem. Computational results and
comparisons on several benchmarks are presented in
Section IV, and concluding remarks are given in Section V.
II. METHODOLOGY
Local search is a well-known approach for solving
many combinatorial optimization problems. When applying a
local search procedure to a given instance of an
optimization problem, we need to define a "neighborhood",
which is a subset of the solution set, for each solution.
A local search algorithm begins with an initial solution and
searches its neighborhood, then moves from solution to
solution in the neighborhood by applying local changes,
until the current solution is at least as good as all of its neighbors.
Two neighborhood structures have been
considered for the knapsack problem: the 1-flip and the
1-flip-exchange neighborhoods. Two solutions are 1-flip
neighbors if they differ in exactly one assignment.

Definition 1. For any x \in S, the 1-flip neighborhood
N_f(x) of x is defined by N_f(x) = \{ y \in S : \|x - y\|_1 \le 1 \}.

Every solution in N_f(x) can be reached from x by adding
or removing at most one item; hence |N_f(x)| = n + 1.

Two solutions are 1-flip-exchange neighbors if one
can be obtained from the other by exchanging two items.
This neighborhood is an extension of the 1-flip neighborhood.
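In code, the 1-flip neighborhood can be enumerated as follows (a minimal sketch; the n solutions at Hamming distance exactly 1, together with x itself, give |N_f(x)| = n + 1):

```python
def one_flip_neighbors(x):
    """Yield all solutions at Hamming distance exactly 1 from x."""
    for i in range(len(x)):
        y = list(x)
        y[i] = 1 - y[i]  # add item i if absent, remove it if present
        yield y

x = [1, 0, 1]
print(list(one_flip_neighbors(x)))  # -> [[0, 0, 1], [1, 1, 1], [1, 0, 0]]
```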
Many algorithms for the knapsack problem use the
above two neighborhood structures. They start from an initial
solution and iteratively move to the best solution in the
neighborhood, until the current solution is at least as good as
all of its neighbors. These local search methods are greedy
in nature and are easily trapped in local optima.
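As a concrete illustration of such a greedy method, the following sketch (with hypothetical toy data) repeatedly moves to the best feasible 1-flip neighbor and stops at the first local optimum:

```python
# Toy instance (hypothetical data, for illustration only).
weights = [4, 3, 2, 5]
profits = [10, 7, 4, 9]
c = 8

def greedy_one_flip(x):
    """Greedy 1-flip local search: move to the best feasible 1-flip
    neighbor until no neighbor improves the profit."""
    while True:
        best, best_f = None, sum(p * xi for p, xi in zip(profits, x))
        for i in range(len(x)):
            y = x[:i] + [1 - x[i]] + x[i + 1:]
            if sum(w * yi for w, yi in zip(weights, y)) <= c:
                f = sum(p * yi for p, yi in zip(profits, y))
                if f > best_f:
                    best, best_f = y, f
        if best is None:       # no improving neighbor: local optimum
            return x
        x = best

print(greedy_one_flip([0, 0, 0, 0]))  # -> [1, 1, 0, 0]
```

On this instance the greedy search happens to reach the optimum, but in general it stops at whatever local optimum its start leads to, which motivates the multistart strategy of Section III.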
III. THE PROPOSED LOCAL SEARCH METHOD
In this section, we first convert the knapsack problem
into an equivalent unconstrained integer program.
Then a new local search method for the resulting
unconstrained integer program is proposed.
A. Equivalent unconstrained integer programming
We use a penalty method to transform the knapsack
problem into an unconstrained integer program.
We construct the following unconstrained integer
program:
(NKP)   max g(x) = \sum_{i=1}^{n} p_i x_i - k h(x)
        s.t.  x \in S,

where k > 0 is a penalty parameter and
h(x) = \max\{ \sum_{i=1}^{n} w_i x_i - c, 0 \}.
Lemma 1. If k > p_max, where p_max = max{p_1, ..., p_n},
then the problems (KP) and (NKP) have the same optimal
solution and optimal value.

B. Local search method

Many of the local search methods used in existing
algorithms for the knapsack problem are based on the greedy
method and are easily trapped in local optima. We present an
iterative local search method for the knapsack problem. The
main idea of the algorithm is to flip one bit at a time in an
attempt to maximize the profit sum without letting the
weight sum exceed c. Define the gain gain(i, x) of item i
as the amount by which the objective value of problem (NKP)
would increase if the i-th bit were flipped:

gain(i, x) = g(x_1, ..., x_{i-1}, 1 - x_i, x_{i+1}, ..., x_n) - g(x).

Note that an item's gain may be negative. For each item
i, the local search algorithm computes gain(i, x). It
starts with a random solution in the solution space
S and changes the solution by a sequence of 1-flip
operations, which are organized into passes. At the
beginning of a pass, each item is free, meaning that it may
be flipped; after a bit is flipped, it becomes unfree,
i.e., the bit is not allowed to be flipped again during that
pass. The algorithm iteratively selects a free item to flip.
When an item is flipped, it becomes unfree and the gains of
the remaining free items are updated. After each flip, the
algorithm records the objective value of (NKP) achieved
at that point. When no free items remain, the pass
stops. The algorithm then checks the recorded objective
values, selects the point at which the maximum
objective value was achieved, and flips back all items that
were flipped after that point. Another pass is then executed
using this solution as its starting solution. The local search
terminates when a pass fails to find a solution
with a better objective value of (NKP).

When the local search is trapped in a local
optimum, we restart it from a new random solution.

Let V be the set of items that are free to flip during a pass.
The multistart local search algorithm can be stated as
follows:

Step 0. Choose a positive integer maxiter as the
tolerance parameter for terminating the algorithm. Set
N = 0 and x^global = 0.

Step 1. Generate a solution x = (x_1, ..., x_n) randomly.

Step 2. Set V = {1, ..., n}, t = 1, and x^0 = x. Calculate
gain(i, x) for all i in V.

Step 3. Let j satisfy gain(j, x) = max{gain(i, x) : i in V}. Set
x^t = (x_1, ..., 1 - x_j, ..., x_n), V = V \ {j}, x = x^t, and
t = t + 1.

Step 4. If V is not empty, calculate gain(i, x) for all i in V
and go to Step 3. Otherwise go to Step 5.

Step 5. Let x^max be the solution with the largest objective
value among x^1, ..., x^n. If g(x^max) > g(x^0), set x = x^max
and go to Step 2. Otherwise, if g(x^global) < g(x^max), set
x^global = x^max; go to Step 6.

Step 6. If N < maxiter, set N = N + 1 and go to Step 1.
Otherwise output x^global.
IV. NUMERICAL EXPERIMENT
In this section, we test the proposed multistart local
search algorithm. The experiments were performed on a
personal computer with a 2.11 GHz processor and 1.0 GB
of RAM. We employ the following three benchmark
instances, which were also used to test the genetic
algorithm for the knapsack problem in [9].
Problem 1. (w_1, ..., w_20) = (92, 4, 43, 83, 84, 68, 92,
82, 6, 44, 32, 18, 56, 83, 25, 96, 70, 48, 14, 58),
(p_1, ..., p_20) = (44, 46, 90, 72, 91, 40, 75, 35, 8, 54, 78, 40,
77, 15, 61, 17, 75, 29, 75, 63), c = 878.
Problem 2. (w_1, ..., w_50) = (220, 208, 198, 192, 180,
180, 165, 162, 160, 158, 155, 130, 125, 122, 120, 118,
115, 110, 105, 101, 100, 100, 98, 96, 95, 90, 88, 82, 80,
77, 75, 73, 70, 69, 66, 65, 63, 60, 58, 56, 50, 30, 20, 15,
10, 8, 5, 3, 1, 1), (p_1, ..., p_50) = (80, 82, 85, 70, 72, 70, 66,
50, 55, 25, 50, 55, 40, 48, 50, 32, 22, 60, 30, 32, 40, 38,
35, 32, 25, 28, 30, 22, 50, 30, 45, 30, 60, 50, 20, 65, 20,
25, 30, 10, 20, 25, 15, 10, 10, 10, 4, 4, 2, 1), c = 1000.
Problem 3. (w_1, ..., w_100) = (54, 183, 106, 82, 30, 58,
71, 166, 117, 190, 90, 191, 205, 128, 110, 89, 63, 6, 140,
86, 30, 91, 156, 31, 70, 199, 142, 98, 178, 16, 140, 31, 24,
197, 101, 73, 169, 73, 92, 159, 71, 102, 144, 151, 27, 131,
209, 164, 177, 177, 129, 146, 17, 53, 164, 146, 43, 170,
180, 171, 130, 183, 5, 113, 207, 57, 13, 163, 20, 63, 12,
24, 9, 42, 6, 109, 170, 108, 46, 69, 43, 175, 81, 5, 34, 146,
148, 114, 160, 174, 156, 82, 47, 126, 102, 83, 58, 34, 21,
14), (p_1, ..., p_100) = (597, 596, 593, 586, 581, 568, 567,
560, 549, 548, 547, 529, 529, 527, 520, 491, 482, 478,
475, 475, 466, 462, 459, 458, 454, 451, 449, 443, 442,
421, 410, 409, 395, 394, 390, 377, 375, 366, 361, 347,
334, 322, 315, 313, 311, 309, 296, 295, 294, 289, 285,
279, 277, 276, 272, 248, 246, 245, 238, 237, 232, 231,
230, 225, 192, 184, 183, 176, 174, 171, 169, 165, 165,
154, 153, 150, 149, 147, 143, 140, 138, 134, 132, 127,
124, 123, 114, 111, 104, 89, 74, 63, 62, 58, 55, 48, 27, 22,
12, 6), c = 6718.
The proposed algorithm uses a parameter maxiter as
a termination parameter. In the experiments, we
set maxiter = 30. We ran the proposed algorithm 10 times
on each of the above three benchmarks. The test results are
given in Table I. In order to compare with the genetic
algorithm proposed in [9], the results of the greedy
algorithm, the basic genetic algorithm, and the hybrid
genetic algorithm [9] are also listed in Table I; these
results are quoted directly from [9]. Table I gives the best
solutions found by each algorithm. P and W denote
the profit sum and the weight sum, respectively, and
g means that the algorithm found its best solution within
g generations.
TABLE I
EXPERIMENT RESULTS

Problem | Greedy algorithm [9] (P/W) | Basic genetic algorithm [9] (P/W/g) | Hybrid genetic algorithm [9] (P/W/g) | Proposed algorithm (P/W)
1       | 1023/825                   | 1024/878/29                         | 1024/878/12                          | 1024/878
2       | 3095/996                   | 3077/1000/192                       | 3103/1000/50                         | 3103/1000
3       | 26380/6591                 | 25848/6716/319                      | 26559/6717/147                       | 26559/6717
The following observations can be made based on the
experimental results in Table I.
(1) The proposed algorithm found better solutions
than the greedy algorithm and the basic genetic
algorithm.
(2) The proposed algorithm and the hybrid genetic
algorithm found the same best objective values.
(3) Note that the proposed algorithm used only 30 initial
solutions. This shows that the proposed method reduces the
chance that the local search process becomes trapped in local
optima.
V. CONCLUSION
A multistart local search algorithm is proposed to find
approximate solutions of knapsack problems. A penalty
method is used to transform the knapsack problem into an
unconstrained integer program, and an iterative local
search method is presented to solve the resulting
unconstrained integer program. The multistart strategy
reduces the chance of getting trapped in local optima.
Experiments were carried out on three benchmarks from the
literature. Compared with some existing algorithms, the
results show that the proposed algorithm is effective.
ACKNOWLEDGMENT
This research is supported by the Science and
Technology Project of the Education Bureau of Fujian,
China, under Grant JA11201.
REFERENCES
[1] S. Martello, D. Pisinger, and P. Toth, "New trends in exact
algorithms for the 0-1 knapsack problem," European Journal
of Operational Research, vol. 123, pp. 325-332, 2000.
[2] D. Pisinger, "An expanding-core algorithm for the exact 0-1
knapsack problem," European Journal of Operational
Research, vol. 87, pp. 175-187, 1995.
[3] M. F. Gorman and S. Ahire, "A major appliance
manufacturer rethinks its inventory policies for service
vehicles," Interfaces, vol. 36, pp. 407-419, 2006.
[4] H. Kellerer, U. Pferschy, and D. Pisinger, Knapsack
Problems, Springer, Berlin, 2004.
[5] C. H. Papadimitriou, "On the complexity of integer
programming," Journal of the ACM, vol. 28, pp. 765-768,
1981.
[6] S. Hanafi and A. Freville, "An efficient tabu search
approach for the 0-1 multidimensional knapsack problem,"
European Journal of Operational Research, vol. 106, pp.
663-679, 1998.
[7] X. C. Zhao, Y. Han, and W. B. Ai, "Improved genetic
algorithm for knapsack problem," Computer Engineering
and Applications, vol. 47, pp. 34-36, 2011.
[8] J. L. Tian and X. P. Chao, "Novel chaos genetic algorithm
for solving 0-1 knapsack problem," Application Research
of Computers, vol. 28, pp. 2838-2839, 2011.
[9] X. J. Shan and S. P. Wu, "Solving 0-1 knapsack problems
with genetic algorithm based on greedy strategy," Computer
Applications and Software, vol. 27, pp. 238-239, 2010.
[10] K. S. Li, Y. Z. Jia, and W. S. Zhang, "Genetic algorithm
with schema replaced for solving 0-1 knapsack problem,"
Application Research of Computers, vol. 26, pp. 470-471,
2009.
[11] C. X. Liao, X. S. Li, P. Zhang, and Y. Zhang, "Improved
ant colony algorithm based on normal distribution for
knapsack problem," Journal of System Simulation,
vol. 23, pp. 1156-1160, 2011.