Hindawi Publishing Corporation
Mathematical Problems in Engineering
Volume 2012, Article ID 482193, 28 pages
doi:10.1155/2012/482193
Research Article
A Two-Phase Support Method for Solving Linear
Programs: Numerical Experiments
Mohand Bentobache¹,² and Mohand Ouamer Bibi²
¹ Department of Technology, University of Laghouat, 03000, Algeria
² Laboratory of Modelling and Optimization of Systems (LAMOS), University of Bejaia, 06000, Algeria
Correspondence should be addressed to Mohand Bentobache, mbentobache@yahoo.com
Received 6 September 2011; Accepted 7 February 2012
Academic Editor: J. Jiang
Copyright © 2012 M. Bentobache and M. O. Bibi. This is an open access article distributed under
the Creative Commons Attribution License, which permits unrestricted use, distribution, and
reproduction in any medium, provided the original work is properly cited.
We develop a single artificial variable technique to initialize the primal support method for solving linear programs with bounded variables. We first recall the full artificial basis technique; then we present the proposed algorithm. In order to study the performance of the suggested algorithm, an implementation under the MATLAB programming language has been developed. Finally, we carry out an experimental study on CPU time and number of iterations over a large set of NETLIB test problems. These test problems are practical linear programs modelling various real-life problems arising from several fields such as oil refinery, audit staff scheduling, airline scheduling, industrial production and allocation, image restoration, multisector economic planning, and data fitting. It is shown that our approach is competitive with our implementation of the primal simplex method and with the primal simplex algorithm implemented in the well-known open-source LP solver LP SOLVE.
1. Introduction

Linear programming (LP) is a mathematical discipline which deals with optimizing a linear function over a domain delimited by a set of linear equations or inequalities. The first formulation of an economic problem as a linear programming problem was given by Kantorovich (1939) [1], and the general formulation was given later by Dantzig in his work [2]. LP is considered the most important technique in operations research. Indeed, it is widely used in practice, and most optimization techniques are based on LP ones. That is why many researchers have taken a great interest in finding efficient methods to solve LP problems. Although some methods existed before 1947 [1], they were restricted to particular forms of the LP problem. Inspired by the work of Fourier on linear inequalities, Dantzig (1947) [3] developed the simplex method, which is
known to be very efficient for solving practical linear programs. However, in 1972, Klee and Minty [4] exhibited an example on which the simplex method takes exponential time.
In 1977, Gabasov and Kirillova [5] generalized the simplex method and developed the primal support method, which can start from any basis and any feasible solution and can move to the optimal solution through interior points or boundary points. The latter was adapted by Radjef and Bibi to solve LPs containing two types of variables: bounded and nonnegative variables [6]. Later, Gabasov et al. developed the adaptive method to solve, in particular, linear optimal control problems [7]. This method was extended to solve general linear and convex quadratic problems [8–18]. In 1979, Khachian developed the first polynomial algorithm, an interior point one, to solve LP problems [19], but it is not efficient in practice. In 1984, Karmarkar presented for the first time an interior point algorithm competitive with the simplex method on large-scale problems [20].
The efficiency of the simplex method and its generalizations depends enormously on the initial point used for their initialization. That is why many researchers have taken a new interest in developing initialization techniques. These techniques aim to find a good initial basis and a good initial point, and to use a minimum number of artificial variables, in order to reduce memory space and CPU time. The first technique used to find an initial basic feasible solution for the simplex method is the full artificial basis technique [3]. In [21, 22], the authors developed a technique using only one artificial variable to initialize the simplex method. In his experimental study, Millham [23] shows that when the initial basis is available in advance, the single artificial variable technique can be competitive with the full artificial basis one. Wolfe [24] suggested a technique which consists of solving a new linear programming problem with a piecewise linear objective function (minimization of the sum of infeasibilities). In [25–31], crash procedures are developed to find a good initial basis.
In [32], a two-phase support method with one artificial variable for solving linear programming problems was developed. This method consists of two phases, and its general principle is the following: in the first phase, we start by searching an initial support with the Gauss-Jordan elimination method; then we proceed to the search of an initial feasible solution by solving an auxiliary problem having one artificial variable and an obvious feasible solution. This obvious feasible solution can be an interior point of the feasible region. After that, in the second phase, we solve the original problem with the primal support method [5].

In [33, 34], we suggested two approaches to initialize the primal support method with nonnegative variables and bounded variables: the first approach consists of applying the Gauss elimination method with partial pivoting to the system of linear equations corresponding to the main constraints, and the second consists of transforming the equality constraints into inequality constraints. After finding the initial support, we search a feasible solution by adding only one artificial variable to the original problem; thus we get an auxiliary problem with an evident support feasible solution. An experimental study has been carried out on some NETLIB test problems. The results of the numerical comparison revealed that finding the initial support by the Gauss elimination method consumes much time, and that transforming the equality constraints into inequality ones increases the dimension of the problem. Hence, the proposed approaches are competitive with the full artificial basis simplex method for solving small problems, but they are not efficient for solving large ones.
In this work, we first extend the full artificial basis technique presented in [7] to solve problems in general form; then we combine a crash procedure with a single artificial variable technique in order to find an initial support feasible solution for the initialization of the support method. This technique is efficient for solving practical problems. Indeed, it takes advantage of sparsity and adds a reduced number of artificial variables to the original problem. Finally, we show the efficiency of our approach by carrying out an experimental study on some NETLIB test problems.
The paper is organized as follows: in Section 2, the primal support method for solving linear programming problems with bounded variables is reviewed. In Section 3, the different techniques to initialize the support method are presented: the full artificial basis technique and the single artificial variable one. Although the support method with full artificial basis is described in [7], it has never been tested on the NETLIB test problems. In Section 4, experimental results are presented. Finally, Section 5 is devoted to the conclusion.
2. Primal Support Method with Bounded Variables
2.1. State of the Problem and Definitions
The linear programming problem with bounded variables is presented in the following standard form:

$$\max\; z = c^T x, \quad (2.1)$$
$$\text{s.t.}\; Ax = b, \quad (2.2)$$
$$l \le x \le u, \quad (2.3)$$

where c and x are n-vectors, b is an m-vector, A is an m × n matrix with rank(A) = m < n, and l and u are n-vectors. In the following sections, we will assume that the bounds l and u are finite. We define the following index sets:

$$I = \{1, 2, \ldots, m\}, \quad J = \{1, 2, \ldots, n\}, \quad J = J_N \cup J_B, \quad J_N \cap J_B = \emptyset, \quad |J_B| = m, \quad |J_N| = n - m.$$
So we can write and partition the vectors and the matrix A as follows:

$$x = x(J) = (x_j,\, j \in J), \quad x = \binom{x_N}{x_B}, \quad x_N = x(J_N) = (x_j,\, j \in J_N), \quad x_B = x(J_B) = (x_j,\, j \in J_B),$$
$$c = c(J) = (c_j,\, j \in J), \quad c = \binom{c_N}{c_B}, \quad c_N = c(J_N) = (c_j,\, j \in J_N), \quad c_B = c(J_B) = (c_j,\, j \in J_B),$$
$$l = l(J) = (l_j,\, j \in J), \quad u = u(J) = (u_j,\, j \in J),$$
$$A = A(I, J) = (a_{ij},\, i \in I,\, j \in J) = (a_1, \ldots, a_j, \ldots, a_n) = \begin{pmatrix} A_1^T \\ \vdots \\ A_i^T \\ \vdots \\ A_m^T \end{pmatrix}, \quad (2.4)$$
$$a_j = \begin{pmatrix} a_{1j} \\ a_{2j} \\ \vdots \\ a_{mj} \end{pmatrix},\; j = 1, \ldots, n; \qquad A_i^T = (a_{i1}, a_{i2}, \ldots, a_{in}),\; i = 1, \ldots, m; \qquad A = (A_N \mid A_B), \quad A_N = A(I, J_N), \quad A_B = A(I, J_B). \quad (2.5)$$
(i) A vector x verifying constraints (2.2) and (2.3) is called a feasible solution of the problem (2.1)–(2.3).

(ii) A feasible solution x⁰ is called optimal if $z(x^0) = c^T x^0 = \max c^T x$, where x ranges over the set of all feasible solutions of the problem (2.1)–(2.3).

(iii) A feasible solution x is said to be ε-optimal or suboptimal if

$$z(x^0) - z(x) = c^T x^0 - c^T x \le \epsilon, \quad (2.6)$$

where x⁰ is an optimal solution of the problem (2.1)–(2.3), and ε is a positive number chosen beforehand.

(iv) We consider a set of indices $J_B \subset J$ such that $|J_B| = |I| = m$. Then $J_B$ is called a support if $\det(A_B) = \det A(I, J_B) \ne 0$.

(v) The pair {x, J_B} comprising a feasible solution x and a support J_B is called a support feasible solution (SFS).

(vi) An SFS is called nondegenerate if $l_j < x_j < u_j$, $j \in J_B$.

Remark 2.1. An SFS is a more general concept than the basic feasible solution (BFS). Indeed, the nonsupport components of an SFS are not restricted to their bounds. Therefore, an SFS may be an interior point, a boundary point, or an extreme point, whereas a BFS is always an extreme point. That is why we can classify the primal support method in the class of interior search methods within the simplex framework [35].
We define the Lagrange multipliers vector π and the reduced costs vector Δ as follows:

$$\pi^T = c_B^T A_B^{-1}, \qquad \Delta^T = \Delta^T(J) = \pi^T A - c^T = (\Delta_N^T, \Delta_B^T), \quad (2.7)$$

where $\Delta_N^T = c_B^T A_B^{-1} A_N - c_N^T$ and $\Delta_B^T = c_B^T A_B^{-1} A_B - c_B^T = 0$.
Theorem 2.2 (the optimality criterion [5]). Let {x, J_B} be an SFS of the problem (2.1)–(2.3). Then the relations

$$\Delta_j \ge 0 \;\text{ for }\; x_j = l_j, \qquad \Delta_j \le 0 \;\text{ for }\; x_j = u_j, \qquad \Delta_j = 0 \;\text{ for }\; l_j < x_j < u_j, \quad j \in J_N \quad (2.8)$$

are sufficient and, in the case of nondegeneracy of the SFS {x, J_B}, also necessary for the optimality of the feasible solution x.
The quantity β(x, J_B) defined by

$$\beta(x, J_B) = \sum_{\Delta_j > 0,\; j \in J_N} \Delta_j \left( x_j - l_j \right) + \sum_{\Delta_j < 0,\; j \in J_N} \Delta_j \left( x_j - u_j \right) \quad (2.9)$$

is called the suboptimality estimate. Thus, we have the following theorem [5].

Theorem 2.3 (sufficient condition for suboptimality). Let {x, J_B} be an SFS of the problem (2.1)–(2.3) and ε an arbitrary positive number. If β(x, J_B) ≤ ε, then the feasible solution x is ε-optimal.
2.2. The Primal Support Method

Let {x, J_B} be an initial SFS and ε an arbitrary positive number. The scheme of the primal support method is described in the following steps:

(1) Compute $\pi^T = c_B^T A_B^{-1}$ and $\Delta_j = \pi^T a_j - c_j$, $j \in J_N$.
(2) Compute β(x, J_B) with the formula (2.9).
(3) If β(x, J_B) = 0, then the algorithm stops with {x, J_B} an optimal SFS.
(4) If β(x, J_B) ≤ ε, then the algorithm stops with {x, J_B} an ε-optimal SFS.
(5) Determine the nonoptimal index set:

$$J_{NNO} = \left\{ j \in J_N : \Delta_j < 0,\, x_j < u_j \;\text{ or }\; \Delta_j > 0,\, x_j > l_j \right\}. \quad (2.10)$$

(6) Choose an index $j_0$ from $J_{NNO}$ such that $|\Delta_{j_0}| = \max_{j \in J_{NNO}} |\Delta_j|$.
(7) Compute the search direction d using the relations:

$$d_{j_0} = -\operatorname{sign} \Delta_{j_0}, \qquad d_j = 0, \; j \ne j_0, \; j \in J_N, \qquad d(J_B) = -A_B^{-1} A_N d(J_N) = -A_B^{-1} a_{j_0} d_{j_0}. \quad (2.11)$$

(8) Compute $\theta_{j_1} = \min_{j \in J_B} \theta_j$, where $\theta_j$ is determined by the formula

$$\theta_j = \begin{cases} \dfrac{u_j - x_j}{d_j}, & \text{if } d_j > 0; \\[4pt] \dfrac{l_j - x_j}{d_j}, & \text{if } d_j < 0; \\[4pt] \infty, & \text{if } d_j = 0. \end{cases} \quad (2.12)$$

(9) Compute $\theta_{j_0}$ using the formula

$$\theta_{j_0} = \begin{cases} x_{j_0} - l_{j_0}, & \text{if } \Delta_{j_0} > 0; \\ u_{j_0} - x_{j_0}, & \text{if } \Delta_{j_0} < 0. \end{cases} \quad (2.13)$$

(10) Compute $\theta^0 = \min\{\theta_{j_1}, \theta_{j_0}\}$.
(11) Compute $\bar{x} = x + \theta^0 d$, $\bar{z} = z + \theta^0 |\Delta_{j_0}|$.
(12) Compute $\beta(\bar{x}, J_B) = \beta(x, J_B) - \theta^0 |\Delta_{j_0}|$.
(13) If $\beta(\bar{x}, J_B) = 0$, then the algorithm stops with $\{\bar{x}, J_B\}$ an optimal SFS.
(14) If $\beta(\bar{x}, J_B) \le \epsilon$, then the algorithm stops with $\{\bar{x}, J_B\}$ an ε-optimal SFS.
(15) If $\theta^0 = \theta_{j_0}$, then we put $\bar{J}_B = J_B$.
(16) If $\theta^0 = \theta_{j_1}$, then we put $\bar{J}_B = (J_B \setminus \{j_1\}) \cup \{j_0\}$.
(17) We put $x = \bar{x}$ and $J_B = \bar{J}_B$. Go to step (1).
3. Finding an Initial SFS for the Primal Support Method
Consider the linear programming problem written in the following general form:

$$\max\; z = c^T x, \qquad \text{s.t.}\; A^1 x \le b^1, \quad A^2 x = b^2, \quad l \le x \le u, \quad (3.1)$$

where $c = (c_1, c_2, \ldots, c_p)^T$, $x = (x_1, x_2, \ldots, x_p)^T$, $l = (l_1, l_2, \ldots, l_p)^T$, and $u = (u_1, u_2, \ldots, u_p)^T$ are vectors in $\mathbb{R}^p$; $A^1$ is a matrix of dimension $m_1 \times p$, $A^2$ is a matrix of dimension $m_2 \times p$, $b^1 \in \mathbb{R}^{m_1}$, and $b^2 \in \mathbb{R}^{m_2}$. We assume that the bounds l and u are finite.

Let $m = m_1 + m_2$ be the number of constraints of the problem (3.1), and let $I = \{1, 2, \ldots, m\}$ and $J_0 = \{1, 2, \ldots, p\}$ be, respectively, the constraint index set and the original variable index set of the problem (3.1). We partition the set I as follows: $I = I_1 \cup I_2$, where $I_1$ and $I_2$ represent, respectively, the index sets of the inequality and equality constraints. We denote by $e_j$ the j-vector of ones, that is, $e_j = (1, 1, \ldots, 1)^T \in \mathbb{R}^j$, and by $e^i$ the ith column of the identity matrix $I_m$ of order m.
After adding $m_1 = |I_1|$ slack variables to the problem (3.1), we get the following problem in standard form:

$$\max\; z = c^T x, \quad (3.2)$$
$$\text{s.t.}\; Ax + H^e x^e = b, \quad (3.3)$$
$$l \le x \le u, \quad (3.4)$$
$$0 \le x^e \le u^e, \quad (3.5)$$

where $A = (a_{ij},\, i \in I,\, j \in J_0) = \binom{A^1}{A^2}$, $b = (b_i,\, i \in I) = \binom{b^1}{b^2}$, $H^e = \binom{I_{m_1}}{0_{m_2 \times m_1}} = (e^1, e^2, \ldots, e^{m_1})$, $x^e = (x_{p+i},\, i \in I_1) = (x_{p+1}, x_{p+2}, \ldots, x_{p+m_1})^T$ is the vector of the added slack variables, and $u^e = (u_{p+i},\, i \in I_1) = (u_{p+1}, u_{p+2}, \ldots, u_{p+m_1})^T$ is its upper bound vector. In order to work with finite bounds, we set $u_{p+i} = M$, $i \in I_1$, that is, $u^e = M e_{m_1}$, where M is a finite, positive, and big real number chosen carefully.
Remark 3.1. The upper bounds $u_{p+i}$, $i \in I_1$, of the added slack variables can also be deduced as follows: $u_{p+i} = b_i - A_i^T h^i$, $i \in I_1$, where $h^i$ is a p-vector computed with the formula

$$h_j^i = \begin{cases} l_j, & \text{if } a_{ij} > 0; \\ u_j, & \text{if } a_{ij} < 0; \\ 0, & \text{if } a_{ij} = 0. \end{cases} \quad (3.6)$$

Indeed, from the system (3.3), we have $x_{p+i} = b_i - \sum_{j=1}^p a_{ij} x_j$, $i \in I_1$. By using the bound constraints (3.4), we get $x_{p+i} \le b_i - \sum_{j=1}^p a_{ij} h_j^i = b_i - A_i^T h^i$, $i \in I_1$. However, the experimental study shows that it is more efficient to set $u_{p+i}$, $i \in I_1$, to a given finite big value, because for large-scale problems the deduction formula (3.6) above takes much CPU time to compute bounds for the slack variables.
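For concreteness, here is a small MATLAB sketch of this deduction, under the assumption that A1 and b1 hold the inequality rows and their right-hand sides (the variable names are ours):

```matlab
% Deduced upper bounds for the slack variables, following (3.6):
% u_{p+i} = b_i - A_i' * h^i, where h^i picks l_j, u_j, or 0 by sign(a_ij).
[m1, p] = size(A1);
ue = zeros(m1, 1);
for i = 1:m1
    hi  = zeros(p, 1);
    row = A1(i, :)';
    hi(row > 0) = l(row > 0);
    hi(row < 0) = u(row < 0);               % a_ij = 0 leaves h_j^i = 0
    ue(i) = b1(i) - row' * hi;
end
```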
The initialization of the primal support method consists of finding an initial support feasible solution for the problem (3.2)–(3.5). In this section, inspired by the technique used to initialize interior point methods [36] and taking into account the fact that the support method can start with a feasible point and a support which are independent, we suggest a single artificial variable technique to find an initial SFS. Before presenting the suggested technique, we first extend the full artificial basis technique, originally presented in [7] for the standard form, to solve linear programs presented in the general form (3.1).
3.1. The Full Artificial Basis Technique
Let $\tilde{x}$ be a p-vector chosen between l and u, and let w be the m-vector defined by $w = b - A\tilde{x}$. We consider the following subsets of $I_1$: $I_1^+ = \{i \in I_1 : w_i \ge 0\}$ and $I_1^- = \{i \in I_1 : w_i < 0\}$, and we assume without loss of generality that $I_1^+ = \{1, 2, \ldots, k\}$, with $k \le m_1$, and $I_1^- = \{k+1, k+2, \ldots, m_1\}$. Remark that $I_1^+$ and $I_1^-$ form a partition of $I_1$; $|I_1^+| = k$ and $|I_1^-| = m_1 - k$.
We make the following partitions of $x^e$, $u^e$, and $H^e$: $x^e = \binom{x^{e+}}{x^{e-}}$, where

$$x^{e+} = (x_{p+i},\, i \in I_1^+) = (x_{p+1}, x_{p+2}, \ldots, x_{p+k})^T, \qquad x^{e-} = (x_{p+i},\, i \in I_1^-) = (x_{p+k+1}, x_{p+k+2}, \ldots, x_{p+m_1})^T; \quad (3.7)$$

$u^e = \binom{u^{e+}}{u^{e-}}$, where

$$u^{e+} = (u_{p+i},\, i \in I_1^+) = (u_{p+1}, u_{p+2}, \ldots, u_{p+k})^T, \qquad u^{e-} = (u_{p+i},\, i \in I_1^-) = (u_{p+k+1}, u_{p+k+2}, \ldots, u_{p+m_1})^T; \quad (3.8)$$

and $H^e = (H^{e+}, H^{e-})$, where

$$H^{e+} = (e^i,\, i \in I_1^+) = I_m(I, I_1^+) = (e^1, e^2, \ldots, e^k), \qquad H^{e-} = (e^i,\, i \in I_1^-) = I_m(I, I_1^-) = (e^{k+1}, e^{k+2}, \ldots, e^{m_1}). \quad (3.9)$$
Hence, the problem (3.2)–(3.5) becomes

$$\max\; z = c^T x, \quad (3.10)$$
$$\text{s.t.}\; Ax + H^{e-} x^{e-} + H^{e+} x^{e+} = b, \quad (3.11)$$
$$l \le x \le u, \quad (3.12)$$
$$0 \le x^{e+} \le u^{e+}, \qquad 0 \le x^{e-} \le u^{e-}. \quad (3.13)$$
3.13
After adding s m − k artificial variables to the equations k 1, k 2, . . . , m of the
system 3.11, where s is the number of elements of the artificial index set I a I1− ∪ I2 {k 1, k 2, . . . , m1 , m1 1, . . . , m}, we get the following auxiliary problem:
max
s.t.
ψ −esT xa ,
Ax H e− xe− H e
xe
H a xa b,
3.14
l ≤ x ≤ u,
0≤x
e
≤u ,
e
0≤x
e−
≤u ,
e−
0≤x ≤u ,
a
a
where
T
xa xp
m1 i , i 1, s xp
m1 1 , . . . , xp
m1 s
3.15
represents the artificial variables vector,
H a signwi ei , i ∈ I a signwk
1 ek
1 , signwk
2 ek
2 , . . . , signwm em ,
ua |wI a | δes |wi | δ, i ∈ I a ,
3.16
where δ is a real nonnegative number chosen in
advance.
If we put $x_N = \binom{x}{x^{e-}} \in \mathbb{R}^{p+m_1-k}$, $x_B = \binom{x^{e+}}{x^a} \in \mathbb{R}^m$, $A_N = (A, H^{e-})$, $A_B = (H^{e+}, H^a)$, $l_N = \binom{l}{0_{\mathbb{R}^{m_1-k}}}$, $u_N = \binom{u}{u^{e-}}$, $l_B = \binom{0_{\mathbb{R}^k}}{0_{\mathbb{R}^s}} = 0_{\mathbb{R}^m}$, $u_B = \binom{u^{e+}}{u^a}$, and $c_B = \binom{0_{\mathbb{R}^k}}{-e_s}$, then we get the following auxiliary problem:

$$\max\; \psi = c_B^T x_B, \quad (3.17)$$
$$\text{s.t.}\; A_N x_N + A_B x_B = b, \quad (3.18)$$
$$l_N \le x_N \le u_N, \qquad l_B \le x_B \le u_B. \quad (3.19)$$
The variable index set of the auxiliary problem (3.17)–(3.19) is

$$J = J_0 \cup \{p+i,\, i \in I_1\} \cup \{p+m_1+i,\, i = 1, \ldots, s\} = \{1, \ldots, p, p+1, \ldots, p+m_1, p+m_1+1, \ldots, p+m_1+s\}. \quad (3.20)$$

Let us partition this set as follows: $J = J_N \cup J_B$, where

$$J_N = J_0 \cup \{p+i,\, i \in I_1^-\} = \{1, \ldots, p, p+k+1, \ldots, p+m_1\}, \qquad J_B = \{p+i,\, i \in I_1^+\} \cup \{p+m_1+i,\, i = 1, \ldots, s\} = \{p+1, \ldots, p+k, p+m_1+1, \ldots, p+m_1+s\}. \quad (3.21)$$
Remark that the pair {y, J_B}, where

$$y = \binom{y_N}{y_B} = \binom{x_N}{x_B} = \begin{pmatrix} \tilde{x} \\ 0_{\mathbb{R}^{m_1-k}} \\ w(I_1^+) \\ |w(I^a)| \end{pmatrix}, \quad (3.22)$$

with $w(I_1^+) = (w_i,\, i \in I_1^+)$ and $w(I^a) = (w_i,\, i \in I^a)$, is a support feasible solution (SFS) for the auxiliary problem (3.17)–(3.19). Indeed, y lies between its lower and upper bounds: for δ ≥ 0 we have

$$\begin{pmatrix} l \\ 0_{\mathbb{R}^{m_1-k}} \\ 0_{\mathbb{R}^k} \\ 0_{\mathbb{R}^s} \end{pmatrix} \le y = \begin{pmatrix} \tilde{x} \\ 0_{\mathbb{R}^{m_1-k}} \\ w(I_1^+) \\ |w(I^a)| \end{pmatrix} \le \begin{pmatrix} u \\ M e_{m_1-k} \\ M e_k \\ |w(I^a)| + \delta e_s \end{pmatrix}. \quad (3.23)$$
Furthermore, the m × m matrix

$$A_B = (e^1, e^2, \ldots, e^k, \operatorname{sign}(w_{k+1})\, e^{k+1}, \operatorname{sign}(w_{k+2})\, e^{k+2}, \ldots, \operatorname{sign}(w_m)\, e^m) \quad (3.24)$$

is invertible because

$$\det(A_B) = \prod_{i \in I^a} \operatorname{sign}(w_i) = \prod_{i=k+1}^m \operatorname{sign}(w_i) = \pm 1, \quad (3.25)$$
and y verifies the main constraints: by replacing x, $x^{e-}$, $x^{e+}$, and $x^a$ with their values in the system (3.18), we get

$$A_N x_N + A_B x_B = (A, H^{e-}) \binom{\tilde{x}}{0_{\mathbb{R}^{m_1-k}}} + (H^{e+}, H^a) \binom{w(I_1^+)}{|w(I^a)|} = A\tilde{x} + \sum_{i=1}^k w_i e^i + \sum_{i=k+1}^m w_i e^i = A\tilde{x} + I_m w = A\tilde{x} + w = A\tilde{x} + b - A\tilde{x} = b. \quad (3.26)$$
Therefore, the primal support method can be initialized with the SFS {y, J_B} to solve the auxiliary problem. Let {y*, J_B*} be the optimal SFS obtained after the application of the primal support method to the auxiliary problem (3.17)–(3.19), where

$$y^* = \begin{pmatrix} x^* \\ x^{e-*} \\ x^{e+*} \\ x^{a*} \end{pmatrix}, \qquad \psi^* = -e_s^T x^{a*}. \quad (3.27)$$

If ψ* < 0, then the original problem (3.2)–(3.5) is infeasible. Else, when J_B* does not contain any artificial index, $\{\binom{x^*}{x^{e*}}, J_B^*\}$ will be an SFS for the original problem (3.2)–(3.5). Otherwise, we delete the artificial indices from the support J_B* and replace them with appropriate original or slack indices, following the algebraic rule used in the simplex method.
In order to initialize the primal support method for solving linear programming problems with bounded variables written in the standard form, Gabasov et al. [7] add m artificial variables, where m represents the number of constraints. In this work, we are interested in solving the problem written in the general form (3.1), so we have added artificial variables only for the equality constraints and for the inequality constraints with negative components of the vector w.
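The construction of this section can be sketched in a few lines of MATLAB. The sketch below is ours: the logical vector isineq marking the inequality rows is an assumed input, we take sign(0) = 1 for zero residuals on artificial rows so that $A_B$ stays invertible (a detail the text leaves implicit), and the without-loss-of-generality reordering of $I_1^+$ is not enforced.

```matlab
% Full artificial basis set-up of Section 3.1 (our sketch; xtilde is any
% p-vector with l <= xtilde <= u, and delta >= 0 is chosen in advance).
w   = b - A * xtilde;                       % residual m-vector
I1p = find(isineq & w >= 0);                % I_1^+
Iat = find(~isineq | w < 0);                % I^a = I_1^- U I_2
sg  = sign(w(Iat));  sg(sg == 0) = 1;       % assumption: keep A_B invertible
Im  = eye(numel(b));
Ha  = Im(:, Iat) * diag(sg);                % columns sign(w_i) e^i, see (3.16)
ua  = abs(w(Iat)) + delta;                  % upper bounds of x^a, see (3.16)
ABs = [Im(:, I1p), Ha];                     % support matrix (3.24), det = +-1
yB  = [w(I1p); abs(w(Iat))];                % support components of y in (3.22)
```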
Remark 3.2. We have $I = I_1 \cup I_2 = I_1^+ \cup I_1^- \cup I_2 = I_1^+ \cup I^a$. Since in the relationship (3.22) we have $y_B = x_B = \binom{w(I_1^+)}{|w(I^a)|}$ and $w(I_1^+) \ge 0$, we get $y_B = |w(I_1^+ \cup I^a)| = |w(I)| = |w|$.

Remark 3.3. If we choose $\tilde{x}$ such that $\tilde{x} = l$ or $\tilde{x} = u$, then the vector y given by the relationship (3.22) will be a BFS. Hence, the simplex algorithm can be initialized with this point.
Remark 3.4. If $b^1 \ge 0_{\mathbb{R}^{m_1}}$, $l \le 0_{\mathbb{R}^p}$, and $u \ge 0_{\mathbb{R}^p}$, two cases can occur.

Case 1. If $b^2 \ge 0_{\mathbb{R}^{m_2}}$, then we put $\tilde{x} = 0_{\mathbb{R}^p}$.

Case 2. If $b^2 < 0_{\mathbb{R}^{m_2}}$, then we put $A^2 = -A^2$, $b^2 = -b^2$, and $\tilde{x} = 0_{\mathbb{R}^p}$.

In the two cases, we get $b \ge 0_{\mathbb{R}^m}$ and $w = b - A\tilde{x} = b \ge 0$; therefore $I_1^- = \emptyset$. Hence $I^a = I_1^- \cup I_2 = I_2$, so we add only $m_2$ artificial variables, for the equality constraints.
3.2. The Single Artificial Variable Technique

In order to initialize the primal support method using this technique, we first start by searching an initial support; then we proceed to the search of an initial feasible solution for the original problem.

The application of the Gauss elimination method with partial pivoting to the system of equations (3.3) can give us a support $J_B = \{j_1, j_2, \ldots, j_r\}$, where r ≤ m. However, the experimental study realized in [33, 34] reveals that this approach takes much time in searching the initial support; that is why it is important to take into account the sparsity property of practical problems and apply some procedure to find a triangular basis among the columns corresponding to the original variables, that is, the columns of the matrix A. In this work, on the basis of the crash procedures presented in [26, 28–30], we present a procedure to find an initial support for the problem (3.2)–(3.5).
Procedure 1 (searching an initial support).

(1) We sort the columns of the m × p matrix A according to the increasing order of their number of nonzero elements. Let L be the list of the sorted columns of A.

(2) Let $a_{j_0} = L(r)$ (the rth column of L) be the first column of the list L verifying: there exists $i_0 \in I$ such that

$$|a_{i_0 j_0}| = \max_{i \in I} |a_{i j_0}|, \qquad |a_{i_0 j_0}| > \text{pivtol}, \quad (3.28)$$

where pivtol is a given tolerance. Hence, we put $J_B = \{j_0\}$, $I_{piv} = \{i_0\}$, $k = 1$.

(3) Let $j_k$ be the index corresponding to the column r + k of the list L, that is, $a_{j_k} = L(r+k)$. If $a_{j_k}$ has zero elements in all the rows having indices in $I_{piv}$, and if there exists $i_k \in I \setminus I_{piv}$ such that

$$|a_{i_k j_k}| = \max_{i \in I \setminus I_{piv}} |a_{i j_k}|, \qquad |a_{i_k j_k}| > \text{pivtol}, \quad (3.29)$$

then we put $J_B = J_B \cup \{j_k\}$, $I_{piv} = I_{piv} \cup \{i_k\}$.

(4) We put k = k + 1. If r + k ≤ p, then go to step (3); else go to step (5).

(5) We put s = 0, $I^a = \emptyset$, $J^a = \emptyset$.

(6) For all $i \in I \setminus I_{piv}$: if the ith constraint is originally an inequality constraint, then we add to $J_B$ the index corresponding to its slack variable, p + i, that is, $J_B = J_B \cup \{p+i\}$. If the ith constraint is originally an equality constraint, then we put s = s + 1 and add to the latter an artificial variable $x_{p+m_1+s}$, for which we set the lower bound to zero and the upper bound to a big and well-chosen value M. Thus, we put $J_B = J_B \cup \{p+m_1+s\}$, $J^a = J^a \cup \{p+m_1+s\}$, and $I^a = I^a \cup \{i\}$.
Remark 3.5. The inputs of Procedure 1 are:

(i) the m × p constraint matrix A;
(ii) a pivoting tolerance fixed beforehand, pivtol;
(iii) a big and well-chosen value M.

The outputs of Procedure 1 are:

(i) $J_B = \{j_0, j_1, \ldots, j_{m-1}\}$: the initial support for the problem (3.2)–(3.5);
(ii) $I^a$: the indices of the constraints for which we have added artificial variables;
(iii) $J^a$: the indices of the artificial variables added to the problem (3.2)–(3.5);
(iv) $s = |I^a| = |J^a|$: the number of artificial variables added to the problem (3.2)–(3.5).
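A compact MATLAB reading of Procedure 1 might look as follows (a sketch under our naming conventions, with I1 the inequality row indices; it returns the support and the artificial index sets):

```matlab
function [JB, Ia, Ja, s] = crash_support(A, I1, m1, pivtol)
% Sketch of Procedure 1: build a triangular support among the columns of A.
[m, ~] = size(A);
[~, L] = sort(sum(A ~= 0, 1));              % step (1): columns by nonzeros
JB = []; Ipiv = []; Ia = []; Ja = []; s = 0;
p = size(A, 2);
for j = L                                   % steps (2)-(4): scan sorted columns
    if ~isempty(Ipiv) && any(A(Ipiv, j)), continue; end
    rows = setdiff(1:m, Ipiv);
    if isempty(rows), break; end
    [amax, kk] = max(abs(A(rows, j)));
    if amax > pivtol
        JB(end+1) = j;  Ipiv(end+1) = rows(kk);
    end
end
for i = setdiff(1:m, Ipiv)                  % steps (5)-(6): cover the rest
    if ismember(i, I1)
        JB(end+1) = p + i;                  % slack index of inequality row i
    else
        s = s + 1;                          % artificial for an equality row
        JB(end+1) = p + m1 + s;
        Ja(end+1) = p + m1 + s;  Ia(end+1) = i;
    end
end
end
```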
After the application of the procedure explained above (Procedure 1) to the problem (3.2)–(3.5), we get the following linear programming problem:

$$\max\; z = c^T x, \qquad \text{s.t.}\; Ax + H^e x^e + H^a x^a = b, \qquad l \le x \le u, \qquad 0 \le x^e \le M e_{m_1}, \qquad 0 \le x^a \le M e_s, \quad (3.30)$$

where $x^a = (x_{p+m_1+1}, x_{p+m_1+2}, \ldots, x_n)^T$ is the s-vector of the artificial variables added during the application of Procedure 1, with $n = p + m_1 + s$, and $H^a = (e^i,\, i \in I^a)$ is an m × s matrix.

Since we have got an initial support, we can start the procedure of finding a feasible solution for the original problem.
Procedure 2 (searching an initial feasible solution). Consider the following auxiliary problem:

$$\max\; \psi = -x_{n+1}, \qquad \text{s.t.}\; Ax + H^e x^e + H^a x^a + \rho x_{n+1} = b, \qquad l \le x \le u, \qquad 0 \le x^e \le M e_{m_1}, \qquad 0 \le x^a \le M e_s, \qquad 0 \le x_{n+1} \le 1, \quad (3.31)$$

where $x_{n+1}$ is an artificial variable, $\rho = b - A\tilde{x}$, and $\tilde{x}$ is a p-vector chosen between l and u. We remark that

$$y = \begin{pmatrix} x \\ x^e \\ x^a \\ x_{n+1} \end{pmatrix} = \begin{pmatrix} \tilde{x} \\ 0_{\mathbb{R}^{m_1}} \\ 0_{\mathbb{R}^s} \\ 1 \end{pmatrix} \quad (3.32)$$

is an obvious feasible solution for the auxiliary problem. Indeed,

$$A\tilde{x} + H^e 0_{\mathbb{R}^{m_1}} + H^a 0_{\mathbb{R}^s} + (b - A\tilde{x}) \cdot 1 = b. \quad (3.33)$$
Hence, we can apply the primal support method to solve the auxiliary problem (3.31) by starting with the initial SFS {y, J_B}, where $J_B = \{j_0, j_1, \ldots, j_{m-1}\}$ is the support obtained with Procedure 1. Let

$$y^* = \begin{pmatrix} x^* \\ x^{e*} \\ x^{a*} \\ x_{n+1}^* \end{pmatrix}, \qquad J_B^*, \qquad \psi^* \quad (3.34)$$

be, respectively, the optimal solution, the optimal support, and the optimal objective value of the auxiliary problem (3.31).

If ψ* < 0, then the original problem (3.2)–(3.5) is infeasible. Else, when J_B* does not contain any artificial index, $\{\binom{x^*}{x^{e*}}, J_B^*\}$ will be an SFS for the original problem (3.2)–(3.5). Otherwise, we delete the artificial indices from the support J_B* and replace them with appropriate original or slack indices, following the algebraic rule used in the simplex method.
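The following MATLAB fragment sketches how the data of the auxiliary problem (3.31) can be assembled from the outputs of Procedure 1 (our names; He and Ha denote the slack and artificial blocks, with s = size(Ha, 2)):

```matlab
% Assembling the auxiliary problem (3.31) of Procedure 2 (a sketch;
% xtilde is any p-vector with l <= xtilde <= u, M is the big bound).
rho  = b - A * xtilde;                      % column of the single artificial
Aaux = [A, He, Ha, rho];                    % constraint matrix of (3.31)
caux = [zeros(p + m1 + s, 1); -1];          % objective: max psi = -x_{n+1}
laux = [l; zeros(m1 + s + 1, 1)];           % lower bounds
uaux = [u; M * ones(m1 + s, 1); 1];         % upper bounds
y0   = [xtilde; zeros(m1 + s, 1); 1];       % obvious feasible point (3.32)
% The primal support method is then run on (Aaux, b, caux, laux, uaux)
% from the SFS {y0, JB}; psi* = 0 yields an SFS of the original problem,
% while psi* < 0 certifies infeasibility.
```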
Remark 3.6. The number of artificial variables $n_a = s + 1$ of the auxiliary problem (3.31) verifies the inequality $1 \le n_a \le m_2 + 1$.

The auxiliary problem will have only one artificial variable, that is, $n_a = 1$, when the initial support $J_B$ found by Procedure 1 is constituted only by original and slack variable indices ($J^a = \emptyset$), and this holds in two cases.

Case 1. When all the constraints are inequalities, that is, $I_2 = \emptyset$.

Case 2. When $I_2 \ne \emptyset$ and step (4) of Procedure 1 ends with $I_2 \subseteq I_{piv}$.

The case $n_a = m_2 + 1$ holds when step (4) of Procedure 1 stops with $I_{piv} \subseteq I_1$.
Remark 3.7. Let us choose a p-vector $\tilde{x}$ between l and u and two nonnegative vectors $v^e \in \mathbb{R}^{m_1}$ and $v^a \in \mathbb{R}^s$. If we put, in the auxiliary problem (3.31), $\rho = b - v - A\tilde{x}$, with $v = H^e v^e + H^a v^a$, then the vector

$$y = \begin{pmatrix} x \\ x^e \\ x^a \\ x_{n+1} \end{pmatrix} = \begin{pmatrix} \tilde{x} \\ v^e \\ v^a \\ 1 \end{pmatrix} \quad (3.35)$$

is a feasible solution for the auxiliary problem. Indeed,

$$Ax + H^e x^e + H^a x^a + \rho x_{n+1} = A\tilde{x} + H^e v^e + H^a v^a + b - H^e v^e - H^a v^a - A\tilde{x} = b. \quad (3.36)$$
We remark that if we put $v^e = 0_{\mathbb{R}^{m_1}}$ and $v^a = 0_{\mathbb{R}^s}$, we get $v = 0_{\mathbb{R}^m}$; then $\rho = b - A\tilde{x}$ and we obtain the evident feasible point that we have used in our numerical experiments, that is,

$$y = \begin{pmatrix} x \\ x^e \\ x^a \\ x_{n+1} \end{pmatrix} = \begin{pmatrix} \tilde{x} \\ 0_{\mathbb{R}^{m_1}} \\ 0_{\mathbb{R}^s} \\ 1 \end{pmatrix}. \quad (3.37)$$

It is important here to cite two other special cases.

Case 1. If we choose the nonbasic components of y equal to their bounds, then we obtain a BFS for the auxiliary problem; therefore we can initialize the simplex algorithm with it.

Case 2. If we put $v^e = e_{m_1}$ and $v^a = e_s$, then $v = H^e e_{m_1} + H^a e_s = \sum_{i \in I_1} e^i + \sum_{i \in I^a} e^i$. Hence,

$$y = \begin{pmatrix} x \\ x^e \\ x^a \\ x_{n+1} \end{pmatrix} = \begin{pmatrix} \tilde{x} \\ e_{m_1} \\ e_s \\ 1 \end{pmatrix} \quad (3.38)$$

is a feasible solution for the auxiliary problem (3.31), with $\rho = b - v - A\tilde{x}$.
Numerical Example

Consider the following LP problem:

$$\max\; c^T x, \qquad \text{s.t.}\; A^1 x \le b^1, \quad A^2 x = b^2, \quad l \le x \le u, \quad (3.39)$$

where

$$A^1 = \begin{pmatrix} 0 & 0 & 1 & 2 \\ 2 & 0 & 0 & -3 \end{pmatrix}, \quad b^1 = \binom{3}{3}, \quad A^2 = \begin{pmatrix} 0 & 3 & 0 & 2 \\ 1 & 0 & 2 & 3 \end{pmatrix}, \quad b^2 = \binom{2}{2},$$
$$c = (2, -3, -1, 1)^T, \quad l = 0_{\mathbb{R}^4}, \quad u = (10, 10, 10, 10)^T, \quad x = (x_1, x_2, x_3, x_4)^T. \quad (3.40)$$

We put $M = 10^{10}$, pivtol $= 10^{-6}$, and $\tilde{x} = 0_{\mathbb{R}^4}$, and we apply the two-phase primal support method using the single artificial variable technique to solve the problem (3.39).
Phase 1. After adding the slack variables $x_5$ and $x_6$ to the problem (3.39), we get the following problem in standard form:

$$\max\; z = c^T x, \qquad \text{s.t.}\; Ax + H^e x^e = b, \qquad l \le x \le u, \qquad 0_{\mathbb{R}^2} \le x^e \le u^e, \quad (3.41)$$

where

$$A = \begin{pmatrix} 0 & 0 & 1 & 2 \\ 2 & 0 & 0 & -3 \\ 0 & 3 & 0 & 2 \\ 1 & 0 & 2 & 3 \end{pmatrix}, \quad H^e = \begin{pmatrix} 1 & 0 \\ 0 & 1 \\ 0 & 0 \\ 0 & 0 \end{pmatrix}, \quad b = \begin{pmatrix} 3 \\ 3 \\ 2 \\ 2 \end{pmatrix}, \quad x^e = \binom{x_5}{x_6}, \quad u^e = \binom{M}{M}. \quad (3.42)$$
The application of Procedure 1 to the problem (3.41) gives us the following initial support: $J_B = \{2, 1, 3, 5\}$. In order to find an initial feasible solution to the original problem, we add an artificial variable $x_7$ to the problem (3.41), and we compute the vector ρ: $\rho = b - A\tilde{x} = b$. Thus, we obtain the following auxiliary problem:

$$\max\; \psi = -x_7, \qquad \text{s.t.}\; Ax + H^e x^e + \rho x_7 = b, \qquad l \le x \le u, \qquad 0_{\mathbb{R}^2} \le x^e \le u^e, \qquad 0 \le x_7 \le 1. \quad (3.43)$$

Remark that {y, J_B}, where $y = (x_1, x_2, x_3, x_4, x_5, x_6, x_7)^T = (0, 0, 0, 0, 0, 0, 1)^T$ and $J_B = \{2, 1, 3, 5\}$, is an obvious support feasible solution for the auxiliary problem. Hence, the primal support method can be applied to solve the problem (3.43) starting with the SFS {y, J_B}.

Iteration 1. The initial support is $J_B = \{2, 1, 3, 5\}$, $J_N = \{4, 6, 7\}$; the initial feasible solution and the corresponding objective value are $y = (0, 0, 0, 0, 0, 0, 1)^T$ and ψ = −1.

The vector of multipliers is $\pi^T = (0, 0, 0, 0)$, and $\Delta_N^T = (0, 0, 1)$. Hence, the reduced costs vector is $\Delta = (0, 0, 0, 0, 0, 0, 1)^T$.

The suboptimality estimate is $\beta(x, J_B) = \Delta_7 (x_7 - l_7) = 1 > 0$, so the current solution is not optimal. The set of nonoptimal indices is $J_{NNO} = \{7\}$; hence $j_0 = 7$.

In order to improve the objective function, we compute the search direction d: we have $d_7 = -\operatorname{sign} \Delta_7 = -1$, so $d_N = (d_4, d_6, d_7)^T = (0, 0, -1)^T$ and $d_B = -A_B^{-1} A_N d_N = (2/3, 3/2, 1/4, 11/4)^T$. Hence, the search direction is $d = (3/2, 2/3, 1/4, 0, 11/4, 0, -1)^T$.

Since $d_j > 0$ for all $j \in J_B$, we have $\theta_j = (u_j - x_j)/d_j$ for all $j \in J_B$. So $\theta_2 = 15$, $\theta_1 = 20/3$, $\theta_3 = 40$, and $\theta_5 = 4M/11$; hence $\theta_{j_1} = \theta_1 = 20/3$, and $\theta_{j_0} = \theta_7 = x_7 - l_7 = 1$. The primal step length is thus $\theta^0 = \min\{\theta_1, \theta_7\} = 1 = \theta_7$. The new solution is $y = y + \theta^0 d = (3/2, 2/3, 1/4, 0, 11/4, 0, 0)^T$, and the new objective value is $\psi = \psi + \theta^0 |\Delta_7| = 0$; hence y is optimal for the auxiliary problem. Therefore, the pair {x, J_B}, where $x = (3/2, 2/3, 1/4, 0, 11/4, 0)^T$ and $J_B = \{2, 1, 3, 5\}$, is an SFS for the problem (3.41).
Phase 2.

Iteration 1. The initial support is $J_B = \{2, 1, 3, 5\}$, and $J_N = \{4, 6\}$. The initial feasible solution is $x = (3/2, 2/3, 1/4, 0, 11/4, 0)^T$, and the objective value is z = 3/4.

The multipliers vector and the reduced costs vector are $\pi^T = (0, 5/4, -1, -1/2)$ and $\Delta = (0, 0, 0, -33/4, 0, 5/4)^T$. The suboptimality estimate is $\beta(x, J_B) = \Delta_4 (x_4 - u_4) + \Delta_6 (x_6 - l_6) = 165/2 > 0$, so the initial solution is not optimal.

The set of nonoptimal indices is $J_{NNO} = \{4\}$, so the entering index is $j_0 = 4$. The search direction is $d = (3/2, -2/3, -9/4, 1, 1/4, 0)^T$. The primal step length is $\theta^0 = \min\{\theta_3, \theta_4\} = \min\{1/9, 10\} = 1/9 = \theta_3$, so the leaving index is $j_1 = 3$. The new solution is $x = x + \theta^0 d = (5/3, 16/27, 0, 1/9, 25/9, 0)^T$, and $z = z + \theta^0 |\Delta_4| = 5/3$.

Iteration 2. The current support is $J_B = \{2, 1, 4, 5\}$, and $J_N = \{3, 6\}$. The current feasible solution is $x = (5/3, 16/27, 0, 1/9, 25/9, 0)^T$, and the objective value is z = 5/3.

The multipliers vector and the reduced costs vector are $\pi^T = (0, 1/3, -1, 4/3)$ and $\Delta = (0, 0, 11/3, 0, 0, 1/3)^T$. The suboptimality estimate is $\beta(x, J_B) = \Delta_3 (x_3 - l_3) + \Delta_6 (x_6 - l_6) = 0$.
Therefore, the optimal solution and the optimal objective value of the original problem (3.39) are

$$x^* = \left( \frac{5}{3}, \frac{16}{27}, 0, \frac{1}{9} \right)^T, \qquad z^* = \frac{5}{3}. \quad (3.44)$$
4. Experimental Results

In order to perform a numerical comparison between the simplex method and the different variants of the support method, we have programmed them under the MATLAB programming language, version 7.4.0 (R2007a).

Since the test problems available in the NETLIB library present bounds on the variables which can be infinite, we have built a sample of 68 test problems having finite bounds and the same optimal objective value as those of NETLIB. The principle of the building process is as follows: let $x^l$ and $x^u$ be the bounds obtained after the conversion of a given test problem from the "mps" format to the "mat" format, and let $x^*$ be the optimal solution. We put $x_{\min} = \min\{x_j^*,\; j = 1, \ldots, p\}$ and $x_{\max} = \max\{x_j^*,\; j = 1, \ldots, p\}$. Thus, the constraint matrix and the objective coefficients vector of the new problem remain the same as those of NETLIB, but the new lower bounds $\bar{l}$ and upper bounds $\bar{u}$ are changed as follows:

$$\bar{l}_j = \begin{cases} x_{\min}, & \text{if } x_j^l = -\infty; \\ x_j^l, & \text{if } x_j^l > -\infty, \end{cases} \qquad \bar{u}_j = \begin{cases} x_{\max}, & \text{if } x_j^u = +\infty; \\ x_j^u, & \text{if } x_j^u < +\infty. \end{cases} \quad (4.1)$$
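In MATLAB, the rule (4.1) amounts to two masked assignments; the sketch below assumes xl and xu are the converted bounds and xstar the known optimal solution (our variable names):

```matlab
% Finitization of the NETLIB bounds, following (4.1).
xmin = min(xstar);  xmax = max(xstar);
lbar = xl;  lbar(xl == -Inf) = xmin;        % finite replacement of -Inf
ubar = xu;  ubar(xu == +Inf) = xmax;        % finite replacement of +Inf
```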
Table 1 lists the main characteristics of the 68 test problems considered, where NC, NV, NEC, and D represent, respectively, the number of constraints, the number of variables, the number of equality constraints, and the density of the constraint matrix (the ratio between the number of nonzero elements and the total number of elements of the constraint matrix, multiplied by 100).
There exist many LP solvers which include a primal simplex algorithm package for solving LP problems; we cite commercial solvers such as CPLEX [37] and MINOS [28], and open-source solvers such as LP SOLVE [38], GLPK [39], and LPAKO [30]. The LP SOLVE solver is well documented and widely used in applications and numerical experiments [40, 41]. Moreover, the latter has a mex interface called mxlpsolve which can easily be installed and used with the MATLAB language. That is why we compare our algorithm with the primal simplex algorithm implemented in LP SOLVE. However, MATLAB is an interpreted language, so it takes much more time in solving problems than the native language C used for the implementation of LP SOLVE. For this reason, the numerical comparison carried out between our method and LP SOLVE concerns only the number of iterations. In order to compare our algorithm with the primal simplex algorithm in terms of CPU time, we have developed our own simplex implementation. The latter is based on the one given in [42].

In order to compare the different solvers, we have given them the following names.
Table 1: Characteristics of the test problems.

    LP          NC    NV   NEC   D(%)       LP           NC     NV   NEC   D(%)
 1  afiro       27    32     8   9.61   35  fffff800    524    854   350   1.39
 2  kb2         43    41    16  16.20   36  grow22      440    946   440   1.98
 3  sc50a       50    48    20   5.42   37  standata    359   1075   160   0.79
 4  sc50b       50    48    20   4.92   38  scsd6       147   1350   147   2.17
 5  adlittle    56    97    15   7.05   39  standmps    467   1075   268   0.73
 6  blend       74    83    43   7.99   40  standgub    361   1184   162   0.73
 7  share2b     96    79    13   9.15   41  scfxm2      660    914   374   0.86
 8  sc105      105   103    45   2.59   42  scrs8       490   1169   384   0.56
 9  stocfor1   117   111    63   3.44   43  gfrd-pnc    616   1092   548   0.35
10  scagr7     129   140    84   2.33   44  bnl1        643   1175   232   0.68
11  israel     174   142     0   9.18   45  ship04s     402   1458   354   0.74
12  share1b    117   225    89   4.37   46  fit1p       627   1677   627   0.94
13  vtpbase    198   203    55   2.26   47  modszk1     687   1620   687   0.29
14  sc205      205   203    91   1.32   48  shell       536   1775   534   0.37
15  beaconfd   173   262   140   7.45   49  scfxm3      990   1371   561   0.57
16  grow7      140   301   140   6.20   50  25fv47      821   1571   516   0.81
17  lotfi      153   308    95   2.29   51  ship04l     402   2118   354   0.74
18  brandy     220   249   166   3.92   52  wood1p      244   2594   243  11.10
19  e226       223   282    33   4.10   53  sctap2     1090   1880   470   0.33
20  bore3d     233   315   214   1.95   54  ganges     1309   1681  1284   0.31
21  capri      271   353   142   1.85   55  scsd8       397   2750   397   0.79
22  agg        488   163    36   3.03   56  ship08s     778   2387   698   0.38
23  scorpion   388   358   280   1.03   57  ship12s    1151   2763  1045   0.26
24  bandm      305   472   305   1.73   58  sctap3     1480   2480   620   0.24
25  sctap1     300   480   120   1.18   59  stocfor2   2157   2031  1143   0.19
26  scfxm1     330   457   187   1.72   60  czprob      929   3523   890   0.33
27  agg2       516   302    60   2.75   61  ship08l     778   4283   698   0.38
28  agg3       516   302    60   2.76   62  bnl2       2324   3489  1327   0.17
29  stair      356   467   209   2.32   63  ship12l    1151   5427  1045   0.26
30  scsd1       77   760    77   4.08   64  d2q06c     2171   5167  1507   0.29
31  grow15     300   645   300   2.90   65  woodw      1098   8405  1085   0.41
32  scagr25    471   500   300   0.66   66  truss      1000   8806  1000   0.32
33  fit1d       24  1026     1  54.40   67  fit2d        25  10500     1  49.10
34  finnis     497   614    47   0.76   68  maros-r7   3136   9408  3136   0.49
Solver 1: SupportSav. The two-phase support method, where we use the suggested single artificial variable technique in the first phase; that is, the initial support is found by applying Procedure 1, and the feasible solution is found by applying Procedure 2.

Solver 2: SupportFab. The two-phase support method, where we use the full artificial basis technique presented above in the first phase.
Table 2: CPU time and iterations number for Solver 1 (SupportSav).

LP          cput1    cput   nit1    nit   LP          cput1    cput   nit1    nit
afiro        0.00    0.02     15     18   fffff800     0.14    4.75   1339   1707
kb2          0.00    0.05     29     95   grow22       0.42    2.70     18    917
sc50a        0.00    0.02     27     38   standata     0.11    2.19    635    806
sc50b        0.03    0.05     19     33   scsd6        0.05    1.17     87    425
adlittle     0.02    0.08     25    118   standmps     0.16    3.17    998   1064
blend        0.00    0.05     59     81   standgub     0.11    2.09    616    741
share2b      0.00    0.11    134    170   scfxm2       0.27    4.02    990   1344
sc105        0.02    0.06     61     84   scrs8        0.44    2.27    173    707
stocfor1     0.02    0.06     56     74   gfrd-pnc     0.66    2.92    510    929
scagr7       0.02    0.11    114    138   bnl1         0.31   13.78   3541   3902
israel       0.02    0.59    272    589   ship04s      0.05    1.44    324    455
share1b      0.02    0.38    300    450   fit1p        0.88    3.36      4    673
vtpbase      0.03    1.03    942    989   modszk1      0.91    6.83    322   1759
sc205        0.08    0.23    128    169   shell        0.47    2.20    280    525
beaconfd     0.06    0.23    133    161   scfxm3       0.61    9.20   1546   2137
grow7        0.05    0.27     18    229   25fv47       0.33   62.47   5376  12998
lotfi        0.03    0.27     79    228   ship04l      0.08    2.20    324    521
brandy       0.03    0.58    419    461   wood1p       0.05    3.17    383    450
e226         0.05    0.89    165    690   sctap2       1.48    6.09    220   1015
bore3d       0.06    0.23     97    140   ganges       1.83    8.66   1311   1645
capri        0.06    1.05    636    741   scsd8        0.34    3.50    212    643
agg          0.05    0.22     72    127   ship08s      0.17    4.42    656    848
scorpion     0.09    0.64    288    316   ship12s      0.31    8.30    968   1309
bandm        0.13    1.03    360    630   sctap3       2.73   10.08    251   1237
sctap1       0.13    0.61    129    340   stocfor2     4.08   17.31   1325   2061
scfxm1       0.09    1.05    455    620   czprob       0.13   23.55   2477   3373
agg2         0.09    0.38     40    169   ship08l      0.17    8.23    656   1026
agg3         0.09    0.41     36    178   bnl2         4.19   88.17   3256   8840
stair        0.13    1.33    452    701   ship12l      0.38   17.36    979   1679
scsd1        0.02    0.31     48    188   d2q06c       5.14  643.03   6625  46311
grow15       0.20    1.23     18    573   woodw        1.31   49.59   2107   3003
scagr25      0.19    0.89    309    414   truss        2.25  153.02    448   9241
fit1d        0.03    3.36     14   1427   fit2d        0.16  290.91     11  13593
finnis       0.17    2.41    343    994   maros-r7    21.30  100.34   1757   3480
Solver 3: SimplexFab. The two-phase primal simplex method, where we use the full artificial basis technique presented above in the first phase (we use the BFS presented in Remark 3.3, with $\tilde{x} = l$).

Solver 4: LP SOLVE PSA. The primal simplex method implemented in the open-source LP solver LP SOLVE. The different options are: Primal-Primal, PRICER_DANTZIG, No Scaling, No Presolving. For setting these options, the following MATLAB commands are used.
Table 3: CPU time and iterations number for Solver 2 (SupportFab).

LP           cput   nit1    nit   LP            cput   nit1    nit
afiro        0.02     10     18   fffff800      4.48   1334   1659
kb2          0.06     84    143   grow22        3.59    442   1427
sc50a        0.02     26     49   standata      1.22    427    479
sc50b        0.02     26     52   scsd6         1.50    203    554
adlittle     0.09     35    127   standmps      2.00    667    718
blend        0.06     92    118   standgub      1.28    419    470
share2b      0.08     96    129   scfxm2        5.16   1384   1772
sc105        0.06     56    112   scrs8         3.09    620   1112
stocfor1     0.05     64     82   gfrd-pnc      3.11    719   1095
scagr7       0.19    242    285   bnl1         11.23   2752   3235
israel       0.36      9    346   ship04s       1.67    479    535
share1b      0.33    241    394   fit1p        12.47   1625   3067
vtpbase      1.02    951   1001   modszk1       8.59   1038   2325
sc205        0.22    112    232   shell         2.78    568    757
beaconfd     0.17    138    170   scfxm3       10.39   1853   2464
grow7        0.36    142    337   25fv47       50.19   3562  10694
lotfi        0.28    150    261   ship04l       3.27    706    789
brandy       0.66    520    566   wood1p        3.06    351    426
e226         0.94    160    757   sctap2        7.78    896   1452
bore3d       0.33    226    282   ganges        9.58   1637   1979
capri        0.91    527    678   scsd8         7.48    556   1432
agg          0.19     77    118   ship08s       5.73    913   1125
scorpion     0.73    378    423   ship12s       9.48   1334   1478
bandm        2.22   1095   1339   sctap3       13.50   1183   1902
sctap1       0.59    292    375   stocfor2     11.23   1144   1655
scfxm1       1.17    552    723   czprob       17.91   1647   2544
agg2         0.28     78    169   ship08l      14.63   1404   1806
agg3         0.33     80    181   bnl2        123.72   7209  12614
stair        2.17    860   1128   ship12l      28.77   2463   2719
scsd1        0.38     81    226   d2q06c      644.20   9240  45670
grow15       1.53    301    843   woodw        45.80   1916   2752
scagr25      4.05   1834   2129   truss       180.47   1320  11069
fit1d        3.52     10   1469   fit2d       297.03     11  13574
finnis       2.41    395   1083   maros-r7    137.19   4519   5612
(i) mxlpsolve('set_simplextype', lp, 5); % Phase 1: primal, Phase 2: primal
(ii) mxlpsolve('set_pivoting', lp, 1); % Dantzig's rule
(iii) mxlpsolve('set_scaling', lp, 0); % no scaling
The Lagrange multipliers vector and the basic components of the search direction in the first three solvers are computed by solving the two systems of linear equations $A_B^T \pi = c_B$ and $A_B d_B = -a_{j_0} d_{j_0}$, using the LU factorization of $A_B$ [43]. The updating of the L and U factors is done, in each iteration, using the Sherman-Morrison-Woodbury formula [42].

In the different implementations, we have used Dantzig's rule to choose the entering variable. To prevent cycling, the EXPAND procedure [44] is used.
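For reference, a minimal version of these two solves with a fresh LU factorization reads as follows (our sketch; the actual implementations reuse and update the factors rather than refactorizing at every iteration):

```matlab
% The two linear solves of each iteration via LU factorization.
[Lf, Uf, Pf] = lu(AB);                      % Pf * AB = Lf * Uf
pi0 = Pf' * (Lf' \ (Uf' \ cB));             % solves AB' * pi  = c_B
dB  = Uf \ (Lf \ (Pf * (-aj0 * dj0)));      % solves AB  * d_B = -a_{j0} d_{j0}
```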
Table 4: CPU time and iterations number for Solver 3 (SimplexFab).

LP           cput   nit1    nit   LP            cput   nit1    nit
afiro        0.02     10     18   fffff800      5.38   1513   1920
kb2          0.05     68    125   grow22        3.38    441   1366
sc50a        0.02     26     51   standata      5.45   2609   2646
sc50b        0.03     26     52   scsd6         1.31    203    524
adlittle     0.06     33    123   standmps      7.19   3217   3232
blend        0.06     91    117   standgub      5.81   2610   2627
share2b      0.08     96    131   scfxm2        4.88   1245   1602
sc105        0.08     56    112   scrs8         2.69    529    975
stocfor1     0.05     64     82   gfrd-pnc      3.05    672   1035
scagr7       0.23    276    318   bnl1         11.06   2718   3100
israel       0.34      9    348   ship04s       1.69    479    534
share1b      0.33    234    383   fit1p        11.98   1716   3001
vtpbase      3.03   3921   4145   modszk1       8.47   1100   2275
sc205        0.22    111    224   shell         4.72    646   1507
beaconfd     0.17    138    170   scfxm3       10.55   1901   2420
grow7        0.34    141    317   25fv47       41.23   3350   8974
lotfi        0.22    155    235   ship04l       3.23    709    784
brandy       0.75    501    569   wood1p        3.11    355    426
e226         0.89    156    713   sctap2        7.88    894   1494
bore3d       0.52    431    486   ganges        9.95   1657   1991
capri        4.19   3769   3917   scsd8         6.27    536   1276
agg          0.20     88    133   ship08s       5.77    912   1121
scorpion     0.73    373    417   ship12s       9.66   1333   1477
bandm        2.13   1034   1251   sctap3       13.22   1176   1879
sctap1       0.63    295    374   stocfor2     11.55   1144   1635
scfxm1       1.08    513    656   czprob       20.03   2164   3027
agg2         0.30     76    167   ship08l      14.36   1404   1800
agg3         0.30     79    179   bnl2        131.38   6782  12918
stair        2.78   1216   1475   ship12l      28.69   2462   2715
scsd1        0.34     81    236   d2q06c      539.45   8708  39227
grow15       1.47    301    816   woodw        41.14   1844   2550
scagr25      3.08   1299   1563   truss       150.53   1359   9845
fit1d        3.06     10   1420   fit2d       275.03     11  13535
finnis       2.53    520   1190   maros-r7    132.11   4434   5477
We have solved the 68 NETLIB test problems listed in Table 1 with the solvers mentioned above on a personal computer with an Intel(R) Core(TM)2 Quad CPU Q6600 (2.4 GHz) and 4 GB of RAM, working under the Windows XP SP2 operating system, by setting the different tolerances appropriately. We recall that the initial point $\tilde{x}$, needed in the three methods (Solvers 1, 2, and 3), must be located between its lower and upper bounds; after giving it some values and observing the variation of the CPU time, we have concluded that it is more efficient to set $\tilde{x} = l$. The upper bounds for the slack and artificial variables added for the methods SupportSav, SupportFab, and SimplexFab are put to $M = 10^{10}$.

Numerical results are reported in Tables 2, 3, 4, and 5, where nit1, nit, and cput represent, respectively, the phase-one iterations number, the total iterations number, and the CPU time in seconds of each method necessary to find the optimal objective values presented in [25] or [45]; cput1, shown in columns 2 and 7 of Table 2, represents the CPU time necessary to find the initial support with Procedure 1. The number of artificial variables added to the original problem in our implementations is listed in Table 6.
Table 5: Iterations number for Solver 4 (LP SOLVE PSA).

LP           nit   LP          nit   LP           nit   LP           nit
afiro         17   brandy     1215   fffff800    1771   wood1p       352
kb2           96   e226        909   grow22      1369   sctap2      1465
sc50a         50   bore3d      193   standata     221   ganges      1893
sc50b         51   capri       835   scsd6        594   scsd8       6165
adlittle     172   agg         583   standmps     454   ship08s      702
blend        129   scorpion    386   standgub     210   ship12s     1249
share2b      185   bandm      1040   scfxm2      1272   sctap3      1864
sc105        105   sctap1      359   scrs8       1170   stocfor2    3317
stocfor1     124   scfxm1      592   gfrd-pnc     895   czprob      3422
scagr7       184   agg2        597   bnl1        3810   ship08l     1153
israel       606   agg3        634   ship04s      485   bnl2        9579
share1b      379   stair       936   fit1p       2873   ship12l     2197
vtpbase      782   scsd1       249   modszk1     1736   d2q06c     39828
sc205        250   grow15      987   shell        754   woodw       2179
beaconfd     110   scagr25     898   scfxm3      1962   truss      14046
grow7        328   fit1d      2196   25fv47     12767   fit2d      14249
lotfi        296   finnis      999   ship04l      733   maros-r7    4474
In order to compare the different solvers, we have ordered the problems according to the increasing order of the sum of the numbers of constraints and variables (NC + NV), as they are presented in the different tables, and we have computed the CPU time ratio (Table 7) and the iteration ratio (Table 8) of the different solvers over the solver SupportSav (Solver 1), where for each test problem we have

$$\text{cputr1}j = \frac{\text{cput}(\text{Solver } j)}{\text{cput}(\text{Solver 1})},\; j = 2, 3, \qquad \text{nitr1}j = \frac{\text{nit}(\text{Solver } j)}{\text{nit}(\text{Solver 1})},\; j = 2, 3, 4. \quad (4.2)$$

The above ratios (see [46]) indicate how many times SupportSav is better than the other solvers. Ratios greater than one mean that our method is more efficient for the considered problem: let S be one of the Solvers 2, 3, and 4. If cputr1S ≥ 1 (resp., nitr1S ≥ 1), then our solver (Solver 1) is competitive with Solver S in terms of CPU time (resp., in terms of iterations number). Ratios greater than or equal to one in Tables 7 and 8 thus mark the problems on which SupportSav wins.
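Assuming the timings and iteration counts are collected in 68-by-4 arrays (one column per solver, Solver 1 first), the ratios (4.2) can be tabulated in two lines of MATLAB (our sketch):

```matlab
% Performance ratios (4.2) over the 68 problems.
cputr = cput(:, [2 3])   ./ repmat(cput(:, 1), 1, 2);   % cputr12, cputr13
nitr  = nit(:,  [2 3 4]) ./ repmat(nit(:, 1),  1, 3);   % nitr12, nitr13, nitr14
```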
We plot the CPU time ratios of the solvers SupportFab and SimplexFab over SupportSav (Figures 1(a) and 1(c)) and the iteration ratios of each solver over SupportSav (Figures 1(b), 1(d), and 1(e)). Ratios greater than one correspond to the points which lie above the line y = 1 in the graphs of Figure 1.
Table 6: The number of artificial variables added in phase one of the three solvers.

LP         Solver1  Solver2  Solver3   LP         Solver1  Solver2  Solver3
afiro            7        8        8   fffff800       304      384      384
kb2              7       16       16   grow22          17      440      440
sc50a           19       20       20   standata       121      171      171
sc50b           15       20       20   scsd6           20      147      147
adlittle         9       16       16   standmps       229      279      279
blend           33       43       43   standgub       122      173      173
share2b         10       13       13   scfxm2         247      374      374
sc105           41       45       45   scrs8           65      386      386
stocfor1        50       63       63   gfrd-pnc        89      548      548
scagr7          47       91       91   bnl1           171      239      239
israel           1        8        8   ship04s        355      354      354
share1b         62       89       89   fit1p            1      627      627
vtpbase         40       81       81   modszk1        112      687      687
sc205           82       91       91   shell          151      534      534
beaconfd       100      140      140   scfxm3         370      561      561
grow7           17      140      140   25fv47         417      520      520
lotfi           34       95       95   ship04l        355      354      354
brandy         127      166      166   wood1p         234      244      244
e226            28       46       46   sctap2           1      521      521
bore3d         103      214      214   ganges         678     1284     1284
capri          125      170      170   scsd8           45      397      397
agg              1       63       63   ship08s        699      698      698
scorpion       245      333      333   ship12s       1046     1045     1045
bandm          138      305      305   sctap3           1      682      682
sctap1           1      154      154   stocfor2       889     1143     1143
scfxm1         124      187      187   czprob         870      890      890
agg2             1       60       60   ship08l        699      698      698
agg3             3       60       60   bnl2           864     1335     1335
stair           86      301      301   ship12l       1046     1045     1045
scsd1           13       77       77   d2q06c         558     1533     1533
grow15          17      300      300   woodw          608     1089     1089
scagr25        173      325      325   truss           62     1000     1000
fit1d            2        1        1   fit2d            2        1        1
finnis          24      129      129   maros-r7         1     3136     3136
                                       Sum          13234    27389    27389
The analysis of the different tables and graphs leads us to make the following observations.

(i) In terms of iterations number, SupportSav is competitive with SupportFab in solving 74% of the test problems, with an average iteration ratio of 1.33. In terms of CPU time, SupportSav is competitive with SupportFab in solving 66% of the test problems, with an average CPU time ratio of 1.20. In particular, for the LP "scagr25" (problem number 32), SupportSav is 5.14 times faster than SupportFab in terms of iterations number and solves the problem 4.55 times faster than SupportFab in terms of CPU time.
Table 7: CPU time ratio of the different solvers over SupportSav.

LP         cputr12  cputr13   LP         cputr12  cputr13
afiro        1.000    1.000   fffff800     0.943    1.133
kb2          1.200    1.000   grow22       1.330    1.252
sc50a        1.000    1.000   standata     0.557    2.489
sc50b        0.400    0.600   scsd6        1.282    1.120
adlittle     1.125    0.750   standmps     0.631    2.268
blend        1.200    1.200   standgub     0.612    2.780
share2b      0.727    0.727   scfxm2       1.284    1.214
sc105        1.000    1.333   scrs8        1.361    1.185
stocfor1     0.833    0.833   gfrd-pnc     1.065    1.045
scagr7       1.727    2.091   bnl1         0.815    0.803
israel       0.610    0.576   ship04s      1.160    1.174
share1b      0.868    0.868   fit1p        3.711    3.565
vtpbase      0.990    2.942   modszk1      1.258    1.240
sc205        0.957    0.957   shell        1.264    2.145
beaconfd     0.739    0.739   scfxm3       1.129    1.147
grow7        1.333    1.259   25fv47       0.803    0.660
lotfi        1.037    0.815   ship04l      1.486    1.468
brandy       1.138    1.293   wood1p       0.965    0.981
e226         1.056    1.000   sctap2       1.278    1.294
bore3d       1.435    2.261   ganges       1.106    1.149
capri        0.867    3.990   scsd8        2.137    1.791
agg          0.864    0.909   ship08s      1.296    1.305
scorpion     1.141    1.141   ship12s      1.142    1.164
bandm        2.155    2.068   sctap3       1.339    1.312
sctap1       0.967    1.033   stocfor2     0.649    0.667
scfxm1       1.114    1.029   czprob       0.761    0.851
agg2         0.737    0.789   ship08l      1.778    1.745
agg3         0.805    0.732   bnl2         1.403    1.490
stair        1.632    2.090   ship12l      1.657    1.653
scsd1        1.226    1.097   d2q06c       1.002    0.839
grow15       1.244    1.195   woodw        0.924    0.830
scagr25      4.551    3.461   truss        1.179    0.984
fit1d        1.048    0.911   fit2d        1.021    0.945
finnis       1.000    1.050   maros-r7     1.367    1.317
                              Mean          1.20     1.35
(ii) In terms of iterations number, SupportSav is competitive with SimplexFab in solving 79% of the test problems, with an average iteration ratio of 1.57. In terms of CPU time, SupportSav is competitive with SimplexFab in solving 68% of the test problems, with an average CPU time ratio of 1.35. The peaks of the ratio graphs correspond to the problems on which our approach is 2 up to 5 times faster than SimplexFab. In particular, for the LP "capri" (problem number 21), SupportSav is 5.29 times faster than SimplexFab in terms of iterations number and solves the problem 3.99 times faster than SimplexFab in terms of CPU time.
Table 8: Iteration ratio of the different solvers over SupportSav.

LP         nitr12   nitr13   nitr14   LP         nitr12   nitr13   nitr14
afiro       1.000    1.000    0.944   fffff800    0.972    1.125    1.037
kb2         1.505    1.316    1.011   grow22      1.556    1.490    1.493
sc50a       1.289    1.342    1.316   standata    0.594    3.283    0.274
sc50b       1.576    1.576    1.545   scsd6       1.304    1.233    1.398
adlittle    1.076    1.042    1.458   standmps    0.675    3.038    0.427
blend       1.457    1.444    1.593   standgub    0.634    3.545    0.283
share2b     0.759    0.771    1.088   scfxm2      1.318    1.192    0.946
sc105       1.333    1.333    1.250   scrs8       1.573    1.379    1.655
stocfor1    1.108    1.108    1.676   gfrd-pnc    1.179    1.114    0.963
scagr7      2.065    2.304    1.333   bnl1        0.829    0.794    0.976
israel      0.587    0.591    1.029   ship04s     1.176    1.174    1.066
share1b     0.876    0.851    0.842   fit1p       4.557    4.459    4.269
vtpbase     1.012    4.191    0.791   modszk1     1.322    1.293    0.987
sc205       1.373    1.325    1.479   shell       1.442    2.870    1.436
beaconfd    1.056    1.056    0.683   scfxm3      1.153    1.132    0.918
grow7       1.472    1.384    1.432   25fv47      0.823    0.690    0.982
lotfi       1.145    1.031    1.298   ship04l     1.514    1.505    1.407
brandy      1.228    1.234    2.636   wood1p      0.947    0.947    0.782
e226        1.097    1.033    1.317   sctap2      1.431    1.472    1.443
bore3d      2.014    3.471    1.379   ganges      1.203    1.210    1.151
capri       0.915    5.286    1.127   scsd8       2.227    1.984    9.588
agg         0.929    1.047    4.591   ship08s     1.327    1.322    0.828
scorpion    1.339    1.320    1.222   ship12s     1.129    1.128    0.954
bandm       2.125    1.986    1.651   sctap3      1.538    1.519    1.507
sctap1      1.103    1.100    1.056   stocfor2    0.803    0.793    1.609
scfxm1      1.166    1.058    0.955   czprob      0.754    0.897    1.015
agg2        1.000    0.988    3.533   ship08l     1.760    1.754    1.124
agg3        1.017    1.006    3.562   bnl2        1.427    1.461    1.084
stair       1.609    2.104    1.335   ship12l     1.619    1.617    1.309
scsd1       1.202    1.255    1.324   d2q06c      0.986    0.847    0.860
grow15      1.471    1.424    1.723   woodw       0.916    0.849    0.726
scagr25     5.143    3.775    2.169   truss       1.198    1.065    1.520
fit1d       1.029    0.995    1.539   fit2d       0.999    0.996    1.048
finnis      1.090    1.197    1.005   maros-r7    1.613    1.574    1.286
                                      Mean         1.33     1.57     1.49
(iii) In terms of iterations number, SupportSav is competitive with LP SOLVE PSA in solving 72% of the test problems, with an average iteration ratio of 1.49 (the majority of the iteration ratios are above the line y = 1 in Figure 1(e)). In particular, for the LP "scsd8" (problem number 55), our method is 9.58 times faster than the primal simplex implementation of the open-source solver LP SOLVE.

(iv) We remark from the last row of Table 6 that the total number of artificial variables added in order to find the initial support for SupportSav (13234) is considerably less than the total number of artificial variables added for SimplexFab (27389) and SupportFab (27389).
[Figure 1: Ratios of the different solvers over SupportSav, plotted against the problem number: (a) CPU time ratio of SupportFab over SupportSav (cputr12); (b) iteration ratio of SupportFab over SupportSav (nitr12); (c) CPU time ratio of SimplexFab over SupportSav (cputr13); (d) iteration ratio of SimplexFab over SupportSav (nitr13); (e) iteration ratio of LP SOLVE PSA over SupportSav (nitr14).]
(v) If we compute the total number of iterations necessary to solve the 68 LPs for each solver, we find 143737 for SupportSav, 159306 for SupportFab, 163428 for SimplexFab, and 158682 for LP SOLVE PSA. Therefore, the total number of iterations is the smallest for our solver.
5. Conclusion

In this work, we have proposed a single artificial variable technique to initialize the primal support method with bounded variables. An implementation under the MATLAB environment has been developed. In the implementation, we have used the LU factorization of the basic matrix to solve the linear systems and the Sherman-Morrison-Woodbury formula to update the LU factors. After that, we have compared our approach (SupportSav) to the full artificial basis support method (SupportFab), the full artificial basis simplex method (SimplexFab), and the primal simplex implementation of the open-source solver LP SOLVE (LP SOLVE PSA). The obtained numerical results are encouraging. Indeed, the suggested method (SupportSav) is competitive with SupportFab, SimplexFab, and LP SOLVE PSA.

Note that during our experiments, we have remarked that the variation of the initial support and of the initial point $\tilde{x}$ affects the performance (CPU time and iterations number) of the single artificial variable variant of the support method. Thus, how can we judiciously choose the initial point and the initial support in order to improve the efficiency of the support method? In future works, we will try to apply some crash procedures, like those proposed in [25, 27], in order to initialize the support method with a good initial support. Furthermore, we will try to implement some modern sparse algebra techniques to update the LU factors [47].
References

[1] L. V. Kantorovich, Mathematical Methods in the Organization and Planning of Production, Publication House of the Leningrad State University, 1939; translated in Management Science, vol. 6, pp. 366–422, 1960.
[2] G. B. Dantzig, "Maximization of a linear function of variables subject to linear inequalities," in Activity Analysis of Production and Allocation, T. C. Koopmans, Ed., pp. 339–347, Wiley, New York, NY, USA, 1951.
[3] G. B. Dantzig, Linear Programming and Extensions, Princeton University Press, Princeton, NJ, USA, 1963.
[4] V. Klee and G. J. Minty, "How good is the simplex algorithm?" in Inequalities III, O. Shisha, Ed., pp. 159–175, Academic Press, New York, NY, USA, 1972.
[5] R. Gabasov and F. M. Kirillova, Methods of Linear Programming, vol. 1–3, The Minsk University, 1977, 1978, and 1980.
[6] S. Radjef and M. O. Bibi, "An effective generalization of the direct support method," Mathematical Problems in Engineering, vol. 2011, Article ID 374390, 18 pages, 2011.
[7] R. Gabasov, F. M. Kirillova, and S. V. Prischepova, Optimal Feedback Control, Springer, London, UK, 1995.
[8] R. Gabasov, F. M. Kirillova, and O. I. Kostyukova, "Solution of linear quadratic extremal problems," Soviet Mathematics Doklady, vol. 31, pp. 99–103, 1985.
[9] R. Gabasov, F. M. Kirillova, and O. I. Kostyukova, "A method of solving general linear programming problems," Doklady AN BSSR, vol. 23, no. 3, pp. 197–200, 1979 (in Russian).
[10] R. Gabasov, F. M. Kirillova, and A. I. Tyatyushkin, Constructive Methods of Optimization. Part 1: Linear Problems, Universitetskoje, Minsk, Russia, 1984.
[11] E. A. Kostina and O. I. Kostyukova, "An algorithm for solving quadratic programming problems with linear equality and inequality constraints," Computational Mathematics and Mathematical Physics, vol. 41, no. 7, pp. 960–973, 2001.
[12] E. Kostina, "The long step rule in the bounded-variable dual simplex method: numerical experiments," Mathematical Methods of Operations Research, vol. 55, no. 3, pp. 413–429, 2002.
[13] M. O. Bibi, "Support method for solving a linear-quadratic problem with polyhedral constraints on control," Optimization, vol. 37, no. 2, pp. 139–147, 1996.
[14] B. Brahmi and M. O. Bibi, "Dual support method for solving convex quadratic programs," Optimization, vol. 59, no. 6, pp. 851–872, 2010.
[15] M. Bentobache and M. O. Bibi, "Two-phase adaptive method for solving linear programming problems with bounded variables," in Proceedings of YoungOR 17, pp. 50–51, University of Nottingham, UK, 2011.
[16] M. Bentobache and M. O. Bibi, "Adaptive method with hybrid direction: theory and numerical experiments," in Proceedings of Optimization 2011, pp. 24–27, Universidade Nova de Lisboa, Portugal, 2011.
[17] M. O. Bibi and M. Bentobache, "The adaptive method with hybrid direction for solving linear programming problems with bounded variables," in Proceedings of the Colloque International sur l'Optimisation et les Systèmes d'Information (COSI'11), pp. 80–91, University of Guelma, Algeria, 2011.
[18] M. O. Bibi and M. Bentobache, "A hybrid direction algorithm for solving linear programs," in Proceedings of the International Conference on Discrete Mathematics & Computer Science (DIMACOS'11), pp. 28–30, University of Mohammedia, Morocco, 2011.
[19] L. G. Khachian, "A polynomial algorithm for linear programming," Soviet Mathematics Doklady, vol. 20, pp. 191–194, 1979.
[20] N. Karmarkar, "A new polynomial-time algorithm for linear programming," Combinatorica, vol. 4, no. 4, pp. 373–395, 1984.
[21] S. I. Gass, Linear Programming: Methods and Applications, McGraw-Hill, New York, NY, USA, 1964.
[22] C. M. Shetty, "A simplified procedure for quadratic programming," Operations Research, vol. 11, pp. 248–260, 1963.
[23] C. B. Millham, "Fast feasibility methods for linear programming," Opsearch, vol. 13, pp. 198–204, 1976.
[24] P. Wolfe, "The composite simplex algorithm," SIAM Review, vol. 7, no. 1, pp. 42–54, 1965.
[25] R. E. Bixby, "Implementing the simplex method: the initial basis," ORSA Journal on Computing, vol. 4, no. 3, pp. 1–18, 1992.
[26] N. I. M. Gould and J. K. Reid, "New crash procedures for large systems of linear constraints," Mathematical Programming, vol. 45, no. 1–3, pp. 475–501, 1989.
[27] I. Maros and G. Mitra, "Strategies for creating advanced bases for large-scale linear programming problems," INFORMS Journal on Computing, vol. 10, no. 2, pp. 248–260, 1998.
[28] B. A. Murtagh and M. A. Saunders, "MINOS 5.5 user's guide," Tech. Rep. SOL 83-20, Systems Optimization Laboratory, Stanford University, Stanford, Calif, USA, 1998.
[29] W. Orchard-Hays, Advanced Linear-Programming Computing Techniques, McGraw-Hill, New York, NY, USA, 1968.
[30] S. Lim and S. Park, "LPAKO: a simplex-based linear programming program," Optimization Methods and Software, vol. 17, no. 4, pp. 717–745, 2002.
[31] H. D. Sherali, A. L. Soyster, and S. G. Baines, "Nonadjacent extreme point methods for solving linear programs," Naval Research Logistics Quarterly, vol. 30, no. 1, pp. 145–161, 1983.
[32] M. Bentobache, A New Method for Solving Linear Programming Problems in Canonical Form and with Bounded Variables, M.S. thesis, University of Bejaia, Algeria, 2005.
[33] M. Bentobache and M. O. Bibi, "Two-phase support method for solving linear programming problems with nonnegative variables: numerical experiments," in Proceedings of the Colloque International sur l'Optimisation et les Systèmes d'Information (COSI'08), pp. 314–325, University of Tizi Ouzou, Algeria, 2008.
[34] M. Bentobache and M. O. Bibi, "Two-phase support method for solving linear programming problems with bounded variables: numerical experiments," in Proceedings of the Colloque International sur l'Optimisation et les Systèmes d'Information (COSI'09), pp. 109–120, University of Annaba, Algeria, 2009.
[35] G. Mitra, M. Tamiz, and J. Yadegar, "Experimental investigation of an interior search method within a simplex framework," Communications of the ACM, vol. 31, no. 12, pp. 1474–1482, 1988.
[36] R. J. Vanderbei, Linear Programming: Foundations and Extensions, Kluwer Academic Publishers, Princeton University, 2001.
[37] ILOG CPLEX 9.0 User's Manual, 2003, http://www.ilog.com.
[38] LP SOLVE, http://sourceforge.net/projects/lpsolve/files/lpsolve.
[39] A. Makhorin, "GNU Linear Programming Kit," Reference Manual Version 4.9, Draft Edition, 2006, http://www.gnu.org/software/glpk/glpk.html.
[40] P. G. García and Á. Santos-Palomo, "A deficient-basis dual counterpart of Paparrizos, Samaras and Stephanides' primal-dual simplex-type algorithm," in Proceedings of the 2nd Conference on Optimization Methods & Software and 6th EUROPT Workshop on Advances in Continuous Optimization, Prague, Czech Republic, 2007.
[41] S. R. Thorncraft, H. R. Outhred, and D. J. Clements, "Evaluation of open-source LP optimization codes in solving electricity spot market optimization problems," in Proceedings of the 19th Mini-EURO Conference on Operations Research Models and Methods in the Energy Sector, Coimbra, Portugal, 2006.
[42] M. C. Ferris, O. L. Mangasarian, and S. J. Wright, Linear Programming with MATLAB, MPS-SIAM Series on Optimization, 2007.
[43] R. H. Bartels and G. H. Golub, "The simplex method of linear programming using LU decomposition," Communications of the ACM, vol. 12, no. 5, pp. 266–268, 1969.
[44] P. E. Gill, W. Murray, M. A. Saunders, and M. H. Wright, "A practical anti-cycling procedure for linearly constrained optimization," Mathematical Programming, vol. 45, no. 1–3, pp. 437–474, 1989.
[45] M. Gay, "Electronic mail distribution of linear programming test problems," Mathematical Programming Society COAL Bulletin, no. 13, 1985, http://www.netlib.org/lp/data.
[46] K. Paparrizos, N. Samaras, and G. Stephanides, "An efficient simplex type algorithm for sparse and dense linear programs," European Journal of Operational Research, vol. 148, no. 2, pp. 323–334, 2003.
[47] J. J. H. Forrest and J. A. Tomlin, "Updated triangular factors of the basis to maintain sparsity in the product form simplex method," Mathematical Programming, vol. 2, no. 1, pp. 263–278, 1972.