AN APPROACH TO THE DOMINATING SUBSET WITH THE MINIMAL WEIGHT PROBLEM

NURIYEV Urfat & ORDIN Burak
Ege University, Faculty of Science, Department of Mathematics
35100, Bornova, Izmir
TURKEY
Abstract: In this work, we investigate a combinatorial problem called the “Dominating Subset with the Minimal Weight” (DSMW) problem, which is equivalent to the Auxiliary problem arising in the solution of Global Optimization problems and can be expressed as a kind of Assignment problem. The mathematical model and the economical interpretation of the problem are given, and its properties are investigated. Using these properties, we offer a new greedy-type algorithm for solving the problem and develop a program in C++. Finally, the results of computational experiments are given.
Keywords: Dominating Subset with the Minimal Weight, Global Optimization, Assignment Problem, Weighted Set-Covering Problem, Combinatorial Optimization, Cutting Angle Method.
1. Introduction
Recently a new method, called the Cutting Angle Method (CAM), has been developed for solving a broad class of Global Optimization problems [1, 8]. It was developed as a generalization of the Cutting Plane Method of convex minimization, and it is an iterative method. In each iteration of the CAM an Auxiliary problem has to be solved, which is, in turn, generally a global optimization problem itself.
This Auxiliary problem, posed on the set $S = \{x \mid \sum_{i=1}^{n} x_i = 1,\ x_i \ge 0,\ i = 1, \dots, n\}$, where $x = (x_1, \dots, x_n)$, is the minimization of Increasing Positively Homogeneous (IPH) functions of degree one. The method used for solving the Auxiliary problem in the CAM is very important for the overall efficiency of solving the global optimization problem [8].
The “Dominating Subset with the Minimal Weight” (DSMW) problem is a combinatorial optimization problem with Boolean variables, and it has been investigated as equivalent to the Auxiliary problem which arises in solving Global Optimization problems [2-6]. In [5], it is shown that the DSMW problem is NP-hard, and different methods for solving the DSMW problem are developed in [3, 4].
In this work, it is shown that the Auxiliary problem arising in solving Global Optimization problems transforms into the DSMW problem. Then, the mathematical model and the economical interpretation of the problem are given. Besides, an algorithm for solving the problem is offered, which uses the properties established below. The algorithm is implemented in C++ and computational experiments are carried out on problems of different dimensions.
The paper consists of 9 sections. In Section 2, the formulation of the Auxiliary problem is presented. In the next section, we transform the Auxiliary problem (1)-(2) into a problem with Boolean variables, (3)-(9), and show that this problem is equivalent to the DSMW problem. In Section 4, the economical interpretation of the DSMW problem is given. In the following section, the main facts and definitions which will be used are presented. In Section 6, some properties of the DSMW problem are investigated, and in Section 7 an algorithm using these properties is offered for solving the problem. In the next section, the program developed in C++ and the results of the computational experiments are given. In the last section, we discuss how the offered algorithm could be used in solving Global Optimization problems.
2. Formulation of the Auxiliary Problem

Let $(l_i^k)$ be an $(m \times n)$ matrix, $m > n$, with $m$ rows, $k = 1, \dots, m$, and $n$ columns, $i = 1, \dots, n$. If $k \le n$ and $k \ne i$ then $l_i^k = 0$; otherwise $l_i^k > 0$. (Namely, the first $n$ rows of the matrix $(l_i^k)$ form a diagonal matrix.)

Introduce the function

$h(x) = \max_k \min_{i \in I(l^k)} l_i^k x_i$, where $I(l^k) = \{i : l_i^k > 0\}$.

The problem considered in this paper is formulated as follows:
The Auxiliary Problem:

$h = \min_x h(x)$,  (1)

subject to

$x \in S = \{x \mid \sum_{i=1}^{n} x_i = 1,\ x_i \ge 0,\ i = 1, \dots, n\}$.  (2)

3. Transformation of the Auxiliary Problem to an Equivalent Problem

We will use the following notation for simplicity:

$p = m - n$, $u_i^j = \frac{1}{l_i^{j+n}} - \frac{1}{l_i^i}$, $i = 1, \dots, n$; $j = 1, \dots, p$.

Clearly, $u_i^j$ is the increment of the denominator of the fraction that expresses the function $h$ under the substitution $l_i^i \to l_i^{j+n}$. Let us define the following function:

$Sg(x) = \begin{cases} 1, & \text{if } x \ge 0, \\ 0, & \text{if } x < 0, \end{cases}$

and consider the variables $x_i^j$, $i = 1, 2, \dots, n$; $j = 1, 2, \dots, p$:

$x_i^j = \begin{cases} 1, & \text{if the substitution } l_i^i \to l_i^{j+n} \text{ is accomplished}, \\ 0, & \text{otherwise}. \end{cases}$

So the Auxiliary problem (1)-(2) is transformed into the following Boolean (0-1) programming problem:

$\sum_{i=1}^{n} \sum_{j=1}^{p} u_i^j x_i^j \to \min$,  (3)

$\sum_{i=1}^{n} x_i^j \le 1$, $j = 1, 2, \dots, p$,  (4)

$\sum_{j=1}^{p} x_i^j \le 1$, $i = 1, 2, \dots, n$,  (5)

$\sum_{i=1}^{n} y_i^j \ge 1$, $j = 1, 2, \dots, p$,  (6)

$x_i^j \in \{0, 1\}$, $i = 1, 2, \dots, n$; $j = 1, 2, \dots, p$,  (7)

$y_i^j \in \{0, 1\}$, $i = 1, 2, \dots, n$; $j = 1, 2, \dots, p$,  (8)

$y_i^j = Sg\big(\max_{k=\overline{1,p}} u_i^k x_i^k - u_i^j\big)$, $i = 1, \dots, n$; $j = 1, \dots, p$.  (9)

In the paper [6], the following theorem was proved:

Theorem 1. The Auxiliary problem (1)-(2) and problem (3)-(9) are equivalent.
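To make the transformation concrete, here is a minimal C++ sketch (our own illustration, not from the paper; the name buildU and the row-major layout are assumptions, and the formula follows our reading of the notation above):

    #include <vector>

    // Sketch: build the (p x n) matrix u from the (m x n) matrix l, assuming
    // l[i][i] > 0 (diagonal head) and l[k][i] > 0 for k >= n, and following
    // our reading u_i^j = 1/l_i^{j+n} - 1/l_i^i.
    std::vector<std::vector<double>>
    buildU(const std::vector<std::vector<double>>& l, int n) {
        const int m = static_cast<int>(l.size());
        const int p = m - n;                     // number of rows of u
        std::vector<std::vector<double>> u(p, std::vector<double>(n));
        for (int j = 0; j < p; ++j)              // row j of u comes from row j+n of l
            for (int i = 0; i < n; ++i)
                u[j][i] = 1.0 / l[j + n][i] - 1.0 / l[i][i];
        return u;
    }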
4. Dominating Subset with the Minimal Weight Problem

Let us call problem (3)-(9) the “Dominating Subset with the Minimal Weight” problem. We can interpret this problem as follows.

Let $(u_i^j)$ be a $(p \times n)$ matrix ($j = 1, 2, \dots, p$ and $i = 1, 2, \dots, n$) with $u_i^j \ge 0$ for all $i, j$. The task is to choose some elements of the matrix such that:

i) the sum of the chosen elements is minimal;

ii) each row contains a chosen element, or contains some element which is less than some chosen element located in its column.
We can give the following economical interpretation of this problem.

A task consisting of $p$ operations ($j = 1, 2, \dots, p$) can be accomplished by $n$ processors ($i = 1, 2, \dots, n$). Suppose that the matrix $(u_i^j)$ gives the time necessary for the accomplishment of the task as follows: if

$u_i^{j_1} \le u_i^{j_2} \le \dots \le u_i^{j_p}$  (10)

for column $i$, then $u_i^{j_1}$ is the time (or cost) for the accomplishment of operation $j_1$ by processor $i$, $u_i^{j_2}$ is the time for the accomplishment of operations $j_1$ and $j_2$ by processor $i$, and so on; finally, $u_i^{j_p}$ is the time for the accomplishment of all operations $(j_1, j_2, \dots, j_p)$ by processor $i$. The problem is to distribute the operations among the processors so as to minimize the total time (or the total cost) required for the accomplishment of all operations. Clearly, this problem is a generalization of the Assignment problem [7].

It is known that the Assignment problem can be solved by the Hungarian method with complexity $O(r^3)$, $r = \max\{p, n\}$, but we note that the DSMW problem is NP-hard [5].
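As an illustration of condition (ii), the following C++ sketch (our own; the names dominatesAllRows and choice are hypothetical, with choice[i] holding the row index chosen in column i, or -1 if column i is unused) checks whether a set of chosen elements dominates every row; domination is taken non-strictly, as in Definition 1 below:

    #include <vector>

    // Sketch: condition (ii) check. Row j is covered if some column i has a
    // chosen element u[choice[i]][i] with u[choice[i]][i] >= u[j][i]
    // (domination via column i), in particular if j itself is chosen in i.
    bool dominatesAllRows(const std::vector<std::vector<double>>& u,
                          const std::vector<int>& choice) {
        const int p = static_cast<int>(u.size());      // rows (operations)
        const int n = static_cast<int>(u[0].size());   // columns (processors)
        for (int j = 0; j < p; ++j) {
            bool covered = false;
            for (int i = 0; i < n && !covered; ++i)
                if (choice[i] >= 0 && u[choice[i]][i] >= u[j][i])
                    covered = true;
            if (!covered) return false;
        }
        return true;
    }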
5. Some Notations and Definitions

Some definitions and notations are given in order to facilitate our presentation.

Definition 1. Let us take an element $u_i^j$. If there are $t(u_i^j)$ rows $j_1(u_i^j), j_2(u_i^j), \dots, j_{t(u_i^j)}(u_i^j)$ such that $u_i^j \ge u_i^{j_1(u_i^j)},\ u_i^j \ge u_i^{j_2(u_i^j)},\ \dots,\ u_i^j \ge u_i^{j_{t(u_i^j)}(u_i^j)}$, then the element $u_i^j$ is dominant of the elements $u_i^{j_1(u_i^j)}, u_i^{j_2(u_i^j)}, \dots, u_i^{j_{t(u_i^j)}(u_i^j)}$ (or dominant according to the $i$-th column). Besides, row $j$ is dominant of the rows $j_1(u_i^j), j_2(u_i^j), \dots, j_{t(u_i^j)}(u_i^j)$ according to column $i$.

Definition 2. The dominance concept expresses that the element $u_i^j$ covers the rows $j, j_1(u_i^j), j_2(u_i^j), \dots, j_{t(u_i^j)}(u_i^j)$. A subset of dominant elements which covers all the rows of the matrix $(u_i^j)$ is called a dominant subset.

Definition 3. We say that the value of each element of the matrix is its weight, and the sum of the weights of the elements of a subset is the weight of the subset. Then, the DSMW problem is to find the dominant subset which has minimal weight.

Definition 4. Each matrix $(x_i^j)$ which satisfies conditions (3)-(8) (or condition (ii)) is called a feasible solution. We denote a feasible solution by $\hat{X}$ and the value of the corresponding goal function by $\hat{U}$.

Definition 5. If a feasible solution also satisfies condition (i) (namely, the sum of the chosen elements is minimal), then this solution is optimal. We denote an optimal solution by $X^*$ and the corresponding value of the goal function by $U^*$. Clearly, a dominant subset corresponds to each feasible solution, and a Dominating Subset with Minimal Weight corresponds to an optimal solution.

Definition 6. Clearly, each element is dominant of itself and of the row containing it; $u_i^j$ is dominant of $t(u_i^j)$ rows besides its own. So, the number of rows which are covered by the element $u_i^j$ is $t(u_i^j) + 1$. This number is called the degree of the dominant of the element $u_i^j$, and the proportion $u_i^j / (t(u_i^j) + 1)$ is called the dominant weight proportion and is denoted by $R(u_i^j)$.

Definition 7. Suppose that the element $u_{\bar{i}}^{\bar{j}}$ is dominant (or side dominant) of the element $u_i^j$. If the dominant weight proportion of the element $u_{\bar{i}}^{\bar{j}}$ is not greater than that of the element $u_i^j$ (namely, $R(u_{\bar{i}}^{\bar{j}}) \le R(u_i^j)$, i.e., $\frac{u_{\bar{i}}^{\bar{j}}}{t(u_{\bar{i}}^{\bar{j}}) + 1} \le \frac{u_i^j}{t(u_i^j) + 1}$), then the element $u_{\bar{i}}^{\bar{j}}$ is an appropriate dominant according to the element $u_i^j$.

Definition 8. For every row $j$, we determine the element $u^j$ as follows:

$u^j = \min_{i=\overline{1,n}} \{u_i^j\}$ and $i_j = \arg\min_{i=\overline{1,n}} \{u_i^j\}$, $j = 1, \dots, p$.

It is obvious that $u^j = u_{i_j}^j$. Let us call $u_{i_j}^j$ the critical element for row $j$.

Definition 9. For every column $i$, we determine the element $\tilde{u}_i$ as follows:

$\tilde{u}_i = \begin{cases} 0, & \text{if there is no critical element in column } i, \\ \max_{j=\overline{1,p}} \{u^j \mid u^j = u_i^j\}, & \text{otherwise}. \end{cases}$

Suppose that there is a critical element in column $i$; then $\tilde{u}_i \ne 0$ and

$\tilde{u}_i = \max_{j=\overline{1,p}} \{u^j \mid u^j = u_i^j\} = \max_{j=\overline{1,p}} \{u_{i_j}^j \mid i_j = i\} = u_i^{j_i}$, where $j_i = \arg\max_{j=\overline{1,p}} \{u_{i_j}^j \mid i_j = i\}$.

We call the element $u_i^{j_i}$ the dominant critical element for column $i$. A column which has no dominant critical element is called an independent column.

Definition 10. We set up a data structure for every critical element $u_i^j$ as follows:

$V(u_i^j) = (u_i^j;\ j, j_1(u_i^j), j_2(u_i^j), \dots, j_{t(u_i^j)}(u_i^j))$.

Here, $j$ is the number of the row of the element $u_i^j$, and $j_1(u_i^j), j_2(u_i^j), \dots, j_{t(u_i^j)}(u_i^j)$ are the numbers of the rows which are dominated according to the $i$-th column. The vector $V(u_i^j)$ is called the dominance vector of the element $u_i^j$. Clearly, its number of coordinates is $t(u_i^j) + 1$ (the row $j$ together with the $t(u_i^j)$ dominated rows).

Definition 11. If the value of the goal function of the solution obtained by replacing $u_i^j$ by $u_{\bar{i}}^{\bar{j}}$ is better, then we call $u_{\bar{i}}^{\bar{j}}$ an improved appropriate dominant.

Let us accept the following notations:

$U_i = \max_{j=\overline{1,p}} \{u_i^j\}$, $\bar{U} = \min_{i=\overline{1,n}} \{U_i\}$, $\tilde{U} = \max_{i=\overline{1,n}} \{\tilde{u}_i\}$.

6. The Properties of the DSMW Problem

The following properties of the DSMW problem are known [3, 5]:

P1. The set of the dominant critical elements of the problem gives a better feasible solution than the set of the critical elements. Namely, $\sum_{i=1}^{n} u_i^{j_i} \le \sum_{j=1}^{p} u_{i_j}^j$.

P2. The number of dominant critical elements ($q$) is not greater than the number of the columns ($n$).

P3. In each feasible solution there is an element which is not smaller than the biggest of the dominant critical elements. In other words, if $\hat{U} = \sum_{i=1}^{n} \sum_{j=1}^{p} u_i^j \hat{x}_i^j$, then $\hat{u}_i^j \ge \tilde{U}$ for at least one chosen element $\hat{u}_i^j$.

P4. In each feasible solution, the value of the goal function is not smaller than the biggest of the dominant critical elements. Namely, $\hat{U} \ge \tilde{U}$.

P5. The biggest element $U_i$ ($i = \overline{1,n}$) of every column of the matrix $(u_i^j)$ gives a feasible solution of the DSMW problem. Namely, condition (ii) (or conditions (3)-(8)) is satisfied for every $U_i$.

P6. For the optimal solution of the DSMW problem, the value of the goal function is not greater than the smallest of the biggest elements of the columns of the matrix $(u_i^j)$. Namely, $U^* \le \bar{U}$.

P7. We get the following bounds for the optimal value of the goal function: $\tilde{U} \le U^* \le \bar{U}$.

P8. Suppose that $\bar{U} = u_c^d$. If $u_c^d = \min_{i=\overline{1,n}} \{u_i^d\}$, then $x_c^d = 1$ is the optimal solution for the DSMW problem.

P9. Suppose that $\tilde{U} = u_m^s$. If $u_m^s = \max_{j=\overline{1,p}} \{u_m^j\}$, then $x_m^s = 1$ is the optimal solution for the DSMW problem.

P10. If $\tilde{U} = \bar{U}$ then $U^* = \tilde{U} = \bar{U}$.
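The bounds of P7 are cheap to compute. The following C++ sketch (our own, under our reading of Definitions 8-9; the names Bounds and bounds are hypothetical) computes $\tilde{U}$ and $\bar{U}$ for a matrix stored row-major as u[j][i]:

    #include <vector>
    #include <algorithm>

    struct Bounds { double Utilde; double Ubar; };  // Utilde <= U* <= Ubar (P7)

    Bounds bounds(const std::vector<std::vector<double>>& u) {
        const int p = static_cast<int>(u.size());
        const int n = static_cast<int>(u[0].size());
        // Dominant critical element of each column: the largest critical
        // element (row minimum) hosted in that column; 0 if none (Definition 9).
        std::vector<double> utilde(n, 0.0);
        for (int j = 0; j < p; ++j) {
            int ij = 0;
            for (int i = 1; i < n; ++i)
                if (u[j][i] < u[j][ij]) ij = i;     // i_j = argmin of row j
            utilde[ij] = std::max(utilde[ij], u[j][ij]);
        }
        double Utilde = *std::max_element(utilde.begin(), utilde.end());
        // Ubar = smallest of the column maxima U_i.
        double Ubar = 0.0;
        for (int i = 0; i < n; ++i) {
            double Ui = u[0][i];
            for (int j = 1; j < p; ++j) Ui = std::max(Ui, u[j][i]);
            Ubar = (i == 0) ? Ui : std::min(Ubar, Ui);
        }
        return {Utilde, Ubar};
    }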
7. A Solution Algorithm for the DSMW Problem

The above theoretical results are not only of theoretical interest: using them, an algorithm composed of two algorithms is offered for solving the problem.

In Level 1 (Steps A1-A2), the biggest dominant element of every column of the matrix $(u_i^j)$ is found ($U_i$, $i = 1, 2, \dots, n$; according to property P5, every one of them is a feasible solution), the smallest of them ($\bar{U}$) is chosen, and it is accepted as the initial solution ($\hat{U} = \bar{U}$). If this solution $\hat{U}$ is equal to the biggest of the dominant critical elements, then this solution is optimal according to property P9 and the algorithm stops (Step A6).

In Level 2 (Steps A3-A9), the critical elements for every row and the dominant critical elements for every column are determined. Then the dominant critical elements and the critical elements are sorted in descending order, provided that the dominant ones are at the head. (Note: this ordering, (12) below, is used in the following steps as the greedy criterion.) The biggest of the dominant critical elements is chosen and a feasible solution is found according to this ordering. This solution is compared with the old one, and if it is better, it is taken as the new solution.

In the other levels (Steps A10-A30), the found solution is improved, where possible, according to properties P2-P7; procedures based on the dominants are used to improve it.
7.1. Algorithm AA

Algorithm AA is the main algorithm, and the levels described above are carried out by it. Besides, Algorithm B is used as an auxiliary algorithm. Here $i(u_k)$ and $j(u_k)$ denote the column and row numbers of the element $u_k$. Also, $kg$ is the position, in ordering (12), of the element examined for the current solution.

A0: $U = 0$, $X = (x_i^j) = 0$ ($i = 1, 2, \dots, n$; $j = 1, 2, \dots, p$); $P = \{1, 2, \dots, p\}$.

A1: Find $U_i = \max_{j=\overline{1,p}} \{u_i^j\}$ and $j^i = \arg\max_{j=\overline{1,p}} \{u_i^j\}$, $i = 1, 2, \dots, n$. It is clear that $U_i = u_i^{j^i}$.

A2: Find the smallest of them ($\bar{U}$). Namely, the best of the feasible solutions found so far is determined. The best solution and the value of its goal function are denoted $\hat{X}$, $\hat{U}$. In other words, $\hat{U} = \min_{i=\overline{1,n}} \{U_i\}$ and $i^g = \arg\min_{i=\overline{1,n}} \{U_i\}$. It is true that $\hat{U} = u_{i^g}^{j^{i^g}}$, and we set $ig = i^g$, $jg = j^{i^g}$, $kg = 0$.

A3: Determine the critical elements for each row. Namely, $u^j = \min_{i=\overline{1,n}} \{u_i^j\}$, $i_j = \arg\min_{i=\overline{1,n}} \{u_i^j\}$, $j = 1, 2, \dots, p$. A dominance vector $V(u^j)$ is set up for each of these elements.

A4: Find the dominant critical elements for each column ($\tilde{u}_i$, $i = 1, \dots, n$). If no critical element exists in column $i$ (namely, it is an independent column), $\tilde{u}_i$ is assumed to be 0 for this column.

A5: Sort the critical elements in descending order, provided that the dominant ones are at the head:

$\tilde{u}_{i_1} \ge \tilde{u}_{i_2} \ge \dots \ge \tilde{u}_{i_q}$, $u^{j_{q+1}} \ge \dots \ge u^{j_p}$.  (11)

In (11), the leftmost $q$ elements are the dominant critical elements, and the remaining elements are the critical elements of the other rows (according to P2, $q \le n$). For easier use we rename them:

$v_1 \ge v_2 \ge \dots \ge v_q$, $v_{q+1} \ge \dots \ge v_p$,  (12)

namely, $v_1 = \tilde{u}_{i_1}$, $v_2 = \tilde{u}_{i_2}, \dots, v_p = u^{j_p}$.

A6: If $v_1 = \hat{U}$ then $\hat{X}$ is an optimal solution (according to P9). Namely, $x_{ig}^{jg} = 1$, $UG = \hat{U}$, $U = \hat{U}$, and go to A30.

A7: Accept that $PK = P$, $U = v_1$, $P = P - V(v_1)$, and find a feasible solution according to the current ordering using Algorithm B.

A8: For the newly obtained solution, if the value of the goal function $U \ge \hat{U}$, then go to A10.

A9: Accept that $\hat{U} = U$, $ig = i(v_1)$, $jg = j(v_1)$, $kg = ks$.

A10: $UG = 0$, $uy = \hat{U} - v_p$, $k = 1$, $pk = p$.

A11: $s = 1$.

A12: $j = PK(s)$.

A13: $i = 1$.

A14: If $u_i^j \ge uy$ or $u_i^j \le v_k$, then go to A21.

A15: Accept that $U = u_i^j$, $P = PK - V(u_i^j)$.

A16: If $P = \emptyset$ then $ks = 0$ and go to A18.

A17: Find a feasible solution according to the current ordering using Algorithm B.

A18: If $U \ge \hat{U}$ then go to A21.

A19: $\hat{U} = U$, $ig = i$, $jg = j$, $kg = ks$.

A20: If $k = 1$ then $uy = \hat{U} - v_p$.

A21: $i = i + 1$.

A22: If $i \le n$ then go to A14.

A23: $s = s + 1$.

A24: If $s \le pk$ then go to A12.

A25: $UG = UG + u_{ig}^{jg}$, $x_{ig}^{jg} = 1$.

A26: If $kg = 0$ then go to A29.

A27: Accept that $PK = PK - V(u_{ig}^{jg})$, $pk = |PK|$, $uy = u_{ig}^{jg}$, $\hat{U} = \hat{U} - uy$.

A28: $k = kg$, $ig = i(v_k)$.

A29: Print $X$, $UG$.

A30: STOP.
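As a sketch of Steps A3-A5 (our own reading; the struct Item and the function greedyOrder are hypothetical names), the greedy ordering (12) can be built by finding each row's critical element, marking the dominant critical elements, and sorting with the dominant ones at the head:

    #include <vector>
    #include <algorithm>

    struct Item { double val; int row, col; bool dom; };

    std::vector<Item> greedyOrder(const std::vector<std::vector<double>>& u) {
        const int p = static_cast<int>(u.size());
        const int n = static_cast<int>(u[0].size());
        std::vector<Item> items;                // one critical element per row
        std::vector<int> best(n, -1);           // row of column i's largest critical element
        for (int j = 0; j < p; ++j) {           // A3: critical element of row j
            int ij = 0;
            for (int i = 1; i < n; ++i)
                if (u[j][i] < u[j][ij]) ij = i;
            items.push_back({u[j][ij], j, ij, false});
            if (best[ij] < 0 || u[j][ij] > u[best[ij]][ij]) best[ij] = j;
        }
        for (int i = 0; i < n; ++i)             // A4: mark dominant critical elements
            if (best[i] >= 0)
                for (Item& it : items)
                    if (it.col == i && it.row == best[i]) it.dom = true;
        std::stable_sort(items.begin(), items.end(),
                         [](const Item& a, const Item& b) {
                             if (a.dom != b.dom) return a.dom; // A5: dominants first
                             return a.val > b.val;             // descending in each group
                         });
        return items;
    }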
7.2. Algorithm B

B0: The parameters $U$, $P$ are determined by the calling program.

B1: $k = 1$, $ks = 0$.

B2: If $j(v_k) \notin P$ then go to B7.

B3: If $ks = 0$ then $ks = k$.

B4: $U = U + v_k$.

B5: $P = P \setminus V(v_k)$.

B6: If $P = \emptyset$ then go to B9.

B7: $k = k + 1$.

B8: If $k \le p$ then go to B2.

B9: End the algorithm with the outputs $U$, $ks$. Here, $j(v_k)$ is the row number of the element $v_k$.
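Algorithm B admits a compact C++ rendering (a sketch under our reading of Steps B0-B9; v holds ordering (12), row[k] = $j(v_k)$, cover[k] = $V(v_k)$, and P is the set of still-uncovered rows):

    #include <vector>
    #include <set>

    // Sketch of Algorithm B: scan the descending order v_1 >= ... >= v_p and
    // greedily take every element whose own row is still uncovered, removing
    // all rows it dominates, until no uncovered row remains. Returns the
    // accumulated weight; ks reports the first index taken (0 if none).
    double algorithmB(const std::vector<double>& v,
                      const std::vector<int>& row,
                      const std::vector<std::vector<int>>& cover,
                      std::set<int>& P,
                      int& ks) {
        double U = 0.0;
        ks = 0;
        for (std::size_t k = 0; k < v.size() && !P.empty(); ++k) {
            if (P.count(row[k]) == 0) continue;        // B2: row already covered
            if (ks == 0) ks = static_cast<int>(k) + 1; // B3: remember first pick
            U += v[k];                                 // B4
            for (int r : cover[k]) P.erase(r);         // B5
        }
        return U;                                      // B9
    }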
8. Computational Experiments

To see the practical effectiveness of the proposed algorithm, computational experiments with code written in C++ have been carried out on an IBM Pentium-S 166 MHz CPU.

In these experiments, randomly generated input matrices $(u_i^j)$ of different sizes are used. For the purpose of imitating inputs of a general nature, the matrices are generated under the conditions stated below:

1) No row majorizes any other row; that is, there are no rows $j_1$, $j_2$ such that $u_i^{j_1} \ge u_i^{j_2}$ for every $i$.

2) $0 \le u_i^j \le 10{,}000$, $i = \overline{1,n}$, $j = \overline{1,p}$, and the elements $u_i^j$ are integers.

The experiments are done in three stages. In the first stage, small scale matrices are chosen, and the solutions are found both by the described algorithm and by a Branch & Bound algorithm, in order to see how close the solutions found by the algorithm are to the optimal ones. Relative errors are estimated by the ratio

$\varepsilon = \frac{U - U^*}{U^*}$.

In this stage, matrices with dimensions $p \times n$ are investigated, and the values are chosen as $n = 5, 10$ and $p = 5, 10, 15, 20$ (Tables 1-2). The results are given in the following tables.
Table 1. Results for small scale matrices (n = 5). B&B and Heuristic give the value of the main function; ε is the relative error.

n = 5, p = 5
No   B&B    Heuristic   ε
1    7652   7652        0.00
2    6341   6341        0.00
3    4410   4410        0.00
4    7493   7493        0.00
5    2559   2559        0.00
6    5711   5711        0.00
7    3523   3523        0.00
8    2652   2652        0.00
9    8713   8713        0.00
10   4572   4572        0.00

n = 5, p = 10
No   B&B    Heuristic   ε
1    3767   3767        0.00
2    6914   6914        0.00
3    2534   2534        0.00
4    8663   8663        0.00
5    8682   8682        0.00
6    1842   1842        0.00
7    2784   2791        0.00
8    7655   7655        0.00
9    5797   5797        0.00
10   4822   4822        0.00

n = 5, p = 15
No   B&B    Heuristic   ε
1    2564   2564        0.00
2    5627   5627        0.00
3    3727   3727        0.00
4    9547   9547        0.00
5    5753   5753        0.00
6    5915   5915        0.00
7    3514   3514        0.00
8    8521   8521        0.00
9    7257   7732        0.01
10   1642   1642        0.00

n = 5, p = 20
No   B&B    Heuristic   ε
1    3782   3782        0.00
2    4667   4667        0.00
3    3932   3932        0.00
4    7563   7563        0.00
5    5932   5947        0.00
6    4775   4775        0.00
7    3617   3617        0.00
8    6923   6935        0.01
9    5678   5678        0.00
10   3718   3718        0.00
Table 2. Results for small scale matrices (n = 10). B&B and Heuristic give the value of the main function; ε is the relative error.

n = 10, p = 5
No   B&B    Heuristic   ε
1    9435   9435        0.00
2    4322   4322        0.00
3    3253   3253        0.00
4    7132   7132        0.00
5    8442   8442        0.00
6    9663   9663        0.00
7    2458   2458        0.00
8    7344   7344        0.00
9    4362   4362        0.00
10   4523   4523        0.00

n = 10, p = 10
No   B&B    Heuristic   ε
1    8421   8421        0.00
2    4463   4528        0.01
3    4617   4617        0.00
4    5584   5584        0.00
5    9338   9338        0.00
6    8694   8694        0.00
7    4642   4642        0.00
8    7599   7599        0.00
9    6508   6508        0.00
10   3502   3502        0.00

n = 10, p = 15
No   B&B    Heuristic   ε
1    5737   5737        0.00
2    7513   7589        0.01
3    3456   3456        0.00
4    9663   9663        0.00
5    4763   4763        0.00
6    7514   7557        0.01
7    8503   8503        0.00
8    3776   3779        0.00
9    6476   6476        0.00
10   5738   5738        0.00

n = 10, p = 20
No   B&B    Heuristic   ε
1    6843   6843        0.00
2    5668   5668        0.00
3    6828   6833        0.00
4    7724   7724        0.00
5    8553   8553        0.00
6    3872   3880        0.00
7    5753   5761        0.00
8    4851   4851        0.00
9    5560   5560        0.00
10   8775   8775        0.00
In the second stage, the dimensions of the matrices are taken as n = 6, p = 60, and the solutions are found both by the described algorithm and by the Branch & Bound algorithm; the solution times are given for comparison (Table 3).
Table 3. Results for matrices (n = 6, p = 60).

No   B&B     B&B Time     Heuristic   Heur. Time   ε
1    9968    77.47 sec.   9968        0.00 sec.    0.00
2    12977   134.1 sec.   12977       0.00 sec.    0.00
3    8962    93.62 sec.   8962        0.00 sec.    0.00
4    15861   92.21 sec.   15861       0.00 sec.    0.00
5    8950    123.1 sec.   8950        0.01 sec.    0.00
6    2978    116.2 sec.   2978        0.01 sec.    0.00
7    6882    84.69 sec.   6882        0.00 sec.    0.00
8    5959    103.3 sec.   5959        0.00 sec.    0.00
9    9951    97.41 sec.   9951        0.00 sec.    0.00
10   11953   124.5 sec.   11953       0.01 sec.    0.00
In the third stage, the purpose is to evaluate the time required by the algorithm for the solution of problems with larger scale input matrices. The values are chosen as p = 1000 and n = 5, 15, 25, 35, 45, 55, 65, 75, 85, 95 (Table 4).
Table 4. Results for large scale matrices.

No   n    p      Time
1    5    1000   0.05 sec.
2    15   1000   0.06 sec.
3    25   1000   0.14 sec.
4    35   1000   0.13 sec.
5    45   1000   0.18 sec.
6    55   1000   0.17 sec.
7    65   1000   0.24 sec.
8    75   1000   0.21 sec.
9    85   1000   0.25 sec.
10   95   1000   0.33 sec.
As seen from the experiments, the presented algorithm usually gives the optimal solution. In the other cases, the relative errors are rather small; namely, the found solutions are very close to the optimal ones. According to Tables 3 and 4, the algorithm is fast even for large scale input matrices.
9. Conclusion

In this paper, a heuristic algorithm is proposed which uses the properties of the DSMW problem with Boolean variables, a problem equivalent to the Auxiliary problem arising in solving Global Optimization problems by the Cutting Angle Method. The bound for the number of iterations of the algorithm is $O(n \cdot p \cdot \log_2 p)$. Experiments with the proposed C++ program show that the effectiveness of the algorithm is high. We intend to use this algorithm within the CAM for solving Global Optimization problems.
References

[1] M.Yu. Andramonov, A.M. Rubinov and B.M. Glover, Cutting Angle Methods in Global Optimization, Applied Mathematics Letters, Vol. 12, 1999, pp. 95-100.
[2] U.G. Nuriyev, On Transformation of Global Optimization Auxiliary Problem, In Proc. of the 3rd Joint Seminar on Applied Mathematics, Baku State University & Zanjan University, Baku, 2002, p. 96.
[3] U.G. Nuriyev, On the Solving of a Global Optimization Auxiliary Problem, Proceedings of the XXIII National Meeting on Operational Research and Industrial Engineering, Istanbul, 2002, p. 33.
[4] U.G. Nuriyev & O. Şen, Dominating Subset with Minimal Weight Problem, Proceedings of the Third International Conference on Mathematical & Computational Applications (ICMCA'2002), Konya, 2002, pp. 54-60.
[5] U.G. Nuriyev & B. Ordin, Dominating Subset with Minimal Weight Problem and the Survey of Its Complexity, Proceedings of the XXIII National Meeting on Operational Research and Industrial Engineering, Istanbul, 2002, p. 33.
[6] U.G. Nuriyev, On Complexity of a Global Optimization Problem, Mathematical & Computational Applications, Vol. 8, No. 1, 2003, pp. 27-34.
[7] C.H. Papadimitriou, K. Steiglitz, Combinatorial Optimization: Algorithms and Complexity, Prentice-Hall, 1982.
[8] A.M. Rubinov, Abstract Convexity and Global Optimization, Kluwer Academic Publishers, Dordrecht, 2000.