Hindawi Publishing Corporation
Journal of Applied Mathematics
Volume 2013, Article ID 735916, 8 pages
http://dx.doi.org/10.1155/2013/735916
Research Article
Smoothing Techniques and Augmented Lagrangian Method for
Recourse Problem of Two-Stage Stochastic Linear Programming
Saeed Ketabchi and Malihe Behboodi-Kahoo
Department of Applied Mathematics, Faculty of Mathematical Sciences, University of Guilan, P.O. Box 416351914 Rasht, Iran
Correspondence should be addressed to Malihe Behboodi-Kahoo; behboodi.m@gmail.com
Received 1 February 2013; Accepted 22 April 2013
Academic Editor: Neal N. Xiong
Copyright © 2013 S. Ketabchi and M. Behboodi-Kahoo. This is an open access article distributed under the Creative Commons
Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is
properly cited.
The augmented Lagrangian method can be used to solve the recourse problems arising in two-stage stochastic linear programming and to obtain their normal solutions. The augmented Lagrangian objective function of a stochastic linear problem is not twice differentiable, which precludes the use of a Newton method. In this paper, we apply smoothing techniques and a fast Newton-Armijo algorithm to an unconstrained smooth reformulation of this problem. Computational results and comparisons are given to show the effectiveness and speed of the algorithm.
1. Introduction

In stochastic programming, some of the data are random variables with a specified probability distribution [1]. The field was first introduced by Dantzig, the designer of linear programming, in [2].

In this paper, we consider the following two-stage stochastic linear program (SLP) with recourse, which involves the calculation of an expectation over a discrete set of scenarios:

  min_{x∈X} f(x) = c^T x + Q(x),   X = {x ∈ R^n : Ax = b, x ≥ 0},   (1)

where

  Q(x) = E(Q(x, ω)) = Σ_{i=1}^{N} Q(x, ω_i) p(ω_i)   (2)

and E denotes the expectation of the function Q(x, ω), which depends on the random variable ω. The function Q is defined as follows:

  Q(x, ω) = min_{y∈R^{m2}} { q(ω)^T y | W(ω)^T y ≥ h(ω) − T(ω)x },   (3)

where A ∈ R^{m×n}, c ∈ R^n, and b ∈ R^m. Also, in problem (3), the vector of coefficients q(ω) ∈ R^{m2}, the matrix of coefficients W(ω)^T ∈ R^{n2×m2}, the demand vector h(ω) ∈ R^{n2}, and the matrix T(ω) ∈ R^{n2×n} depend on the random vector ω with support space Ω. Problems (1) and (3) are called the master and recourse problems of stochastic programming, respectively. We assume that problem (3) has a solution for each x ∈ X and ω ∈ Ω.

In general, the recourse function Q(x) is not differentiable everywhere; therefore, traditional methods use nonsmooth optimization techniques [3–5]. In the last decade, however, smoothing methods have been proposed for the recourse function in the standard form of the recourse problem [6–11]. In this paper, we apply a smooth approximation technique to the recourse function in the case where the recourse problem has linear inequality constraints; for more explanation, see Section 2. The approximated problem is based on the least 2-norm solution of the recourse problem. This paper uses the augmented Lagrangian method to obtain the least 2-norm solution (Section 3). For convenience, the Euclidean least 2-norm solution of a linear programming problem is called the normal solution. This effective method requires solving an unconstrained quadratic problem whose objective function is not twice differentiable. To apply a fast Newton method, we use a smoothing technique and replace the plus function by an accurate smooth approximation [12, 13]. In Section 4, the smoothing algorithm and the numerical results are presented. Concluding remarks are given in Section 5.
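As a concrete illustration of (1)–(3), the sketch below evaluates f(x) = c^T x + Σ_i Q(x, ω_i) p(ω_i) for a fixed first-stage decision by solving each scenario's recourse LP with SciPy's linprog. All data (c, T, W, q, h, and the scenario probabilities) are small made-up values for illustration, not taken from this paper:

```python
# Evaluate the two-stage objective f(x) = c^T x + sum_i p_i Q(x, w_i) of (1)-(3).
# Each scenario's recourse LP (3): Q(x, w) = min_y q^T y  s.t.  W^T y >= h - T x.
import numpy as np
from scipy.optimize import linprog

def recourse_value(q, WT, h, T, x):
    """Solve one recourse LP (y is free) and return its optimal value Q(x, w)."""
    rhs = h - T @ x
    res = linprog(c=q, A_ub=-WT, b_ub=-rhs,
                  bounds=[(None, None)] * len(q), method="highs")
    return res.fun

c = np.array([1.0])                  # first-stage cost vector
x = np.array([1.0])                  # a fixed first-stage decision
T = np.array([[1.0], [2.0]])         # technology matrix T(w), shared here
WT = np.array([[1.0], [1.0]])        # W(w)^T: recourse LP is simply y >= rhs_i
q = np.array([2.0])                  # recourse cost q(w)
scenarios = [np.array([1.3, 2.7]), np.array([0.5, 2.2])]   # demand vectors h(w_i)
probs = [0.5, 0.5]

f_x = c @ x + sum(p * recourse_value(q, WT, h, T, x)
                  for p, h in zip(probs, scenarios))
print(f_x)
```

Here each recourse LP reduces to y ≥ max_i (h − Tx)_i, so Q(x, ω_i) can be checked by hand (2·0.7 and 2·0.2 for the two scenarios).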
We now describe our notation. Let a = [a_i] be a vector in R^n. By a_+ we mean the vector in R^n whose ith entry is 0 if a_i < 0 and equals a_i if a_i ≥ 0. By A^T we mean the transpose of the matrix A, and ∇f(x_0) is the gradient of f at x_0. For x ∈ R^n, ‖x‖ and ‖x‖_∞ denote the 2-norm and the infinity norm, respectively.

2. Approximation of the Recourse Function

As mentioned, the objective function of (1) is nondifferentiable, and this disadvantage originates in the recourse function. In this section, we approximate the recourse function by a differentiable one.

Using the dual of problem (3), the function Q(x, ω) can be written as follows:

  Q(x, ω) = max_{z∈R^{n2}} (h(ω) − T(ω)x)^T z
            s.t. W(ω)z = q(ω), z ≥ 0.   (4)

Unlike the linear recourse function, the quadratic recourse function is differentiable. Thus, in this paper, the approximation is based on the following quadratic problem with helpful properties:

  Q_ε(x, ω) = max_{z∈R^{n2}} (h(ω) − T(ω)x)^T z − (ε/2)‖z‖²
              s.t. W(ω)z = q(ω), z ≥ 0.   (5)

The next theorem shows that, for sufficiently small ε > 0, the solution of this problem is the normal solution of problem (4).

Theorem 1. For the functions Q(x, ω) and Q_ε(x, ω) introduced in (4) and (5), the following hold:

(a) There exists ε̄ > 0 such that, for each ε ∈ (0, ε̄], the solution of problem (5) is the normal solution of problem (4).
(b) For each ε > 0, the function Q_ε(x, ω) is differentiable with respect to x.
(c) The gradient of the function Q_ε(x, ω) at the point x is

  ∇Q_ε(x, ω) = −T^T(ω) z*_ε(x, ω),   (6)

in which z*_ε(x, ω) is the solution of problem (5).

Proof. To prove (a), refer to [14, 15]. Parts (b) and (c) can be easily proved by considering that the function Q_ε(x, ω) is the conjugate of the function

  ψ(z) = (ε/2)‖z‖²  if z ∈ Z,   ψ(z) = +∞  if z ∉ Z,   (7)

where Z is defined as follows:

  Z = {z ∈ R^{n2} : W(ω)z = q(ω), z ≥ 0},   (8)

and Theorems 26.3 and 23.5 of [16].

Using the approximated recourse function Q_ε(x, ω), we can define a differentiable approximation of the objective function of (1):

  f_ε(x) = c^T x + Σ_{i=1}^{N} Q_ε(x, ω_i) p(ω_i).   (9)

By (6), the gradient of this function exists and is given by

  ∇f_ε(x) = c + Σ_{i=1}^{N} ∇Q_ε(x, ω_i) p(ω_i) = c − Σ_{i=1}^{N} T^T(ω_i) z*_ε(x, ω_i) p(ω_i).   (10)

This approximation paves the way to using an optimization algorithm for the master problem (1) in which the objective function is substituted by f_ε(x):

  min_{x∈X} f_ε(x).   (11)

In [7], an SLP problem with inequality constraints in the master problem and equality constraints in the recourse problem is considered. In Theorem 2.3 of [7], it is shown that a solution of the approximated problem is a good approximation to a solution of the master problem. Here we can state a similar theorem for problem (1), using the technique of the proof of Theorem 2.3 in [7].

Theorem 2. Consider problem (1). Then, for any x ∈ X, there exists an ε(x) > 0 such that, for any ε ∈ (0, ε(x)],

  |f(x) − f_ε(x)| ≤ (ε/2) σ,   (12)

where σ is defined as follows:

  σ = max_{i=1,...,N} ‖z*_ε(x, ω_i)‖².   (13)

Let x* be a solution of (1) and x*_ε a solution of (11). Then there exists an ε̄ > 0 such that, for any 0 < ε ≤ ε̄,

  max{f(x*_ε) − f(x*), f_ε(x*) − f_ε(x*_ε)} ≤ (ε/2) σ.   (14)

Further, assume that f or f_ε is strongly convex on X with modulus κ > 0. Then

  ‖x* − x*_ε‖² ≤ εσ/κ.   (15)

According to Theorem 1, to obtain the gradient of the function f_ε(x) at each iteration, we need the normal solutions of the N linear programming problems (4). In this paper, the augmented Lagrangian method [17] is used for this purpose.
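The effect of the regularization in (5) can be checked numerically on a tiny instance. The sketch below (made-up data W, q, d, not from the paper) solves the quadratic problem (5) with SciPy's SLSQP for a small ε. The underlying LP (4) here has a whole face of optimal solutions {z1 + z2 = 1, z3 = 0}, and the regularized problem picks out its least 2-norm point (0.5, 0.5, 0), as Theorem 1(a) asserts:

```python
# Regularized dual recourse problem (5) on a toy instance:
#   max d^T z - (eps/2)||z||^2  s.t.  W z = q, z >= 0.
# For small eps its solution is the normal (least 2-norm) solution of the LP (4).
import numpy as np
from scipy.optimize import minimize

W = np.array([[1.0, 1.0, 1.0]])     # illustrative data, not from the paper
q = np.array([1.0])
d = np.array([1.0, 1.0, 0.0])       # plays the role of h(w) - T(w)x
eps = 1e-3

obj = lambda z: -(d @ z) + 0.5 * eps * (z @ z)   # negate to minimize
res = minimize(obj, x0=np.full(3, 1.0 / 3), method="SLSQP",
               bounds=[(0, None)] * 3,
               constraints=[{"type": "eq", "fun": lambda z: W @ z - q}])
z_eps = res.x
print(z_eps)
```

The LP optimal value 1 is preserved while the regularization resolves the degeneracy in favor of the minimum-norm optimal point.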
3. Smooth Approximation and Augmented Lagrangian Method

In the augmented Lagrangian method, an unconstrained maximization problem is solved which gives the projection of a point onto the solution set of problem (4). Assume that ẑ is an arbitrary vector, and consider the problem of finding the least 2-norm projection ẑ* of ẑ onto the solution set Z* of problem (4):

  (1/2)‖ẑ* − ẑ‖² = min_{z∈Z*} (1/2)‖z − ẑ‖²,
  Z* = {z ∈ R^{n2} : W(ω)z = q(ω), d^T z = Q(x, ω), z ≥ 0}.   (16)

In this problem, the vector x and the random variable ω are constants; therefore, for simplicity, we set d = h(ω) − T(ω)x and define Q̃(ω) = Q(x, ω).

Considering that the objective function of problem (16) is strictly convex, its solution is unique. Let us introduce the Lagrangian function for problem (16) as follows:

  L(z, u, β, ẑ, x, ω) = (1/2)‖z − ẑ‖² − u^T (W(ω)z − q(ω)) − β (d^T z − Q̃(ω)),   (17)

where u ∈ R^{m2} and β ∈ R are Lagrange multipliers and d, ẑ are constant. Therefore, the dual problem of (16) becomes

  max_{β∈R} max_{u∈R^{m2}} min_{z∈R^{n2}_+} L(z, u, β, ẑ, x, ω).   (18)

By solving the inner minimization of problem (18), the dual of problem (16) is obtained:

  max_{β∈R} max_{u∈R^{m2}} L̃(u, β, ẑ, d),   (19)

where the dual function is

  L̃(u, β, ẑ, d) = q(ω)^T u − (1/2)‖(ẑ + W^T(ω)u + βd)_+‖² + β Q̃(ω) + (1/2)‖ẑ‖².   (20)

The following theorem states that if β is sufficiently large, solving the inner maximization of (19) gives the solution of problem (16).

Theorem 3 (see [17]). Consider the following maximization problem:

  max_{u∈R^{m2}} S(u, β, ẑ, x, d),   (21)

in which β, ẑ, and d are constants, and the function S(u, β, ẑ, x, d) is introduced as follows:

  S(u, β, ẑ, x, d) = q(ω)^T u − (1/2)‖(ẑ + W^T(ω)u + βd)_+‖².   (22)

Also, assume that the set Z* is nonempty and that the rank of the submatrix W_B of W corresponding to the nonzero components of ẑ* is m2. In such a case, there is β* such that, for all β ≥ β*, ẑ* = (ẑ + W^T(ω)u(β) + βd)_+ is the unique and exact solution of problem (16), where u(β) is the point obtained from solving problem (21).

Also, under special conditions, the solution of problem (3) can be obtained as well; the following theorem expresses this issue.

Theorem 4 (see [17]). Assume that the solution set Z* is nonempty. For each β > 0 and ẑ ∈ Z*, y* = u(β)/β is an exact solution of the linear programming problem (3), where u(β) is the solution of problem (21).

According to the theorems mentioned above, the augmented Lagrangian method leads to the following iterative process for solving problem (16):

  u^{k+1} ∈ arg max_{u∈R^{m2}} { q(ω)^T u − (1/2)‖(z^k + W^T(ω)u + βd)_+‖² },
  z^{k+1} = (z^k + W^T(ω)u^{k+1} + βd)_+,   (23)

where z^0 is an arbitrary vector; here we can use the zero vector as the initial vector to obtain the normal solution of problem (4).

We note that problem (23) is a concave problem whose objective function is piecewise quadratic and not twice differentiable. Applying smoothing techniques [18, 19] and replacing x_+ by a smooth approximation, we transform this problem into a twice continuously differentiable problem.

Chen and Mangasarian [19] introduced a family of smoothing functions, which is built as follows. Let ρ : R → [0, ∞) be a piecewise continuous density function satisfying

  ∫_{−∞}^{+∞} ρ(s) ds = 1,   ∫_{−∞}^{+∞} |s| ρ(s) ds < ∞.   (24)

It is obvious that the derivative of the plus function is the step function; that is, (x)_+ = ∫_{−∞}^{x} δ(t) dt, where the step function δ(x) is defined to be 1 if x > 0 and 0 if x ≤ 0. Therefore, a smoothing approximation of the plus function is defined by

  p(x, α) = ∫_{−∞}^{x} s(t, α) dt,   (25)

where s(x, α) is a smoothing approximation of the step function and is defined as

  s(x, α) = ∫_{−∞}^{x} α ρ(αt) dt.   (26)

By choosing

  ρ(s) = e^{−s} / (1 + e^{−s})²,   (27)
specific cases of these approximations are obtained as follows:

  s(x, α) = 1 / (1 + e^{−αx}) ≈ δ(x),
  p(x, α) = x + (1/α) log(1 + e^{−αx}) ≈ (x)_+.   (28)

The function p with a smoothing parameter α is used here to replace the plus function of (22) to obtain a smooth reformulation of the function (22):

  S̃(u, β, ẑ, x, d, α) := q(ω)^T u − (1/2)‖p(ẑ + W^T(ω)u + βd, α)‖².   (29)

Therefore, we have the following iterative process instead of (23):

  u^{k+1} ∈ arg max_{u∈R^{m2}} { q(ω)^T u − (1/2)‖p(z^k + W^T(ω)u + βd, α)‖² },
  z^{k+1} = p(z^k + W^T(ω)u^{k+1} + βd, α).   (30)

It can be shown that, as the smoothing parameter α approaches infinity, any solution of the smooth problem (29) approaches the solution of the equivalent problem (22) (see [19]).

We begin with a simple lemma that bounds the squared difference between the plus function x_+ and its smooth approximation p(x, α).

Lemma 5 (see [13]). For x ∈ R and |x| < ρ,

  p²(x, α) − ((x)_+)² ≤ (log 2 / α)² + (2ρ/α) log 2.   (31)

Theorem 6. Consider the problems (21) and

  max_{u∈R^{m2}} S̃(u, β, ẑ, x, d, α),   (32)

where p(x, α) is the p function of (28) with smoothing parameter α > 0. Then, for any u ∈ R^{m2} and α > 0,

  |S(u, β, ẑ, x, d) − S̃(u, β, ẑ, x, d, α)| ≤ (log 2 / α)² + 2ρ n2 (log 2 / α),   (33)

where ρ is defined as follows:

  ρ = max_{1≤i≤n2} |ẑ_i + W_i^T(ω)u + βd_i|.   (34)

Let u* be a solution of (21) and u*_α a solution of (32). Then

  max{ S(u*, β, ẑ, x, d) − S(u*_α, β, ẑ, x, d), S̃(u*_α, β, ẑ, x, d, α) − S̃(u*, β, ẑ, x, d, α) }
  ≤ (log 2 / α)² + 2ρ n2 (log 2 / α).   (35)

Further, assume that W_B is a full-rank matrix. Then

  ‖u* − u*_α‖² ≤ (1/κ) ( (2 log 2 / α)² + 8ρ n2 (log 2 / α) ).   (36)

Proof. For any α > 0 and u ∈ R^{m2},

  p(ẑ_i + W_i^T(ω)u + βd_i, α) ≥ (ẑ_i + W_i^T(ω)u + βd_i)_+,   i = 1, ..., n2.   (37)

Hence

  |S(u, β, ẑ, x, d) − S̃(u, β, ẑ, x, d, α)| = S(u, β, ẑ, x, d) − S̃(u, β, ẑ, x, d, α)
  = (1/2)‖p(ẑ + W^T(ω)u + βd, α)‖² − (1/2)‖(ẑ + W^T(ω)u + βd)_+‖²
  = (1/2) Σ_{i=1}^{n2} ( p²(ẑ_i + W_i^T(ω)u + βd_i, α) − ((ẑ_i + W_i^T(ω)u + βd_i)_+)² ).   (38)

By using Lemma 5 componentwise, with ρ as in (34), we get

  |S(u, β, ẑ, x, d) − S̃(u, β, ẑ, x, d, α)| ≤ (log 2 / α)² + 2ρ n2 (log 2 / α).   (39)

From the above inequality, we have

  S̃(u, β, ẑ, x, d, α) ≤ S(u, β, ẑ, x, d) ≤ S̃(u, β, ẑ, x, d, α) + (log 2 / α)² + 2ρ n2 (log 2 / α).   (40)
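The specific smoothing function p of (28) and the bound of Lemma 5 are easy to verify numerically. The short check below evaluates p in a numerically stable way (via log-sum-exp) and confirms both p ≥ (·)_+ and the bound (31) on a grid; the values of α and ρ are arbitrary choices:

```python
# Numerical check of the Chen-Mangasarian smoothing (28) and Lemma 5's bound (31).
import numpy as np

def p(x, a):
    # p(x, alpha) = x + (1/alpha) log(1 + exp(-alpha x)), computed stably
    return x + np.logaddexp(0.0, -a * x) / a

alpha, rho = 10.0, 2.0
xs = np.linspace(-rho, rho, 1001)
gap = p(xs, alpha) ** 2 - np.maximum(xs, 0.0) ** 2
bound = (np.log(2) / alpha) ** 2 + (2 * rho / alpha) * np.log(2)

ok_over = bool(np.all(p(xs, alpha) >= np.maximum(xs, 0.0) - 1e-12))
ok_bound = bool(np.all(gap <= bound + 1e-12))
gap0 = p(0.0, alpha)        # largest overestimate of x_+, equal to log(2)/alpha
```

The worst-case overestimate p(x, α) − (x)_+ occurs at x = 0, where it equals log 2 / α, which is the source of the log 2 / α terms in (31)–(36).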
Table 1: Comparison between the smooth augmented Lagrangian Newton method (SALN) and the CPLEX solver.

N.P   Recourse problem (m2 × n2 × d)   Solver   ‖Wz − q‖_∞    |Q̃(ω) − d^T z|   ‖z‖        Time (second)
P1    50 × 50 × 0.68                   SALN     9.9135e-011   7.2760e-012      41.0362    0.3292
                                       CPLEX    1.0717e-007   5.2589e-007      41.0362    0.3182
P2    100 × 105 × 0.4                  SALN     1.8622e-010   4.3074e-009      57.1793    0.1275
                                       CPLEX    6.5591e-008   1.6105e-006      57.1826    0.1585
P3    150 × 150 × 0.5                  SALN     3.1559e-010   5.6361e-009      74.9098    0.6593
                                       CPLEX    5.9572e-010   1.2014e-009      74.9098    0.1605
P4    200 × 200 × 0.5                  SALN     5.3819e-010   1.0506e-008      85.6646    0.2530
                                       CPLEX    1.1734e-007   1.3964e-006      85.6646    0.1820
P5    300 × 300 × 0.5                  SALN     9.4178e-010   2.7951e-008      102.4325   2.1356
                                       CPLEX    4.1638e-010   4.4456e-009      102.4325   0.1830
P6    350 × 350 × 0.5                  SALN     1.2787e-009   2.6226e-008      110.4189   3.2102
                                       CPLEX    7.7398e-010   3.4452e-009      110.4189   0.2116
P7    450 × 500 × 0.05                 SALN     1.6564e-010   4.7094e-009      124.1204   0.7807
                                       CPLEX    9.0949e-013   1.6371e-011      124.1205   0.2606
P8    500 × 550 × 0.04                 SALN     1.0425e-010   1.0241e-009      134.3999   0.7567
                                       CPLEX    6.1618e-011   1.9081e-009      134.3999   0.2660
P9    700 × 800 × 0.6                  SALN     4.6139e-009   2.3908e-007      153.9782   7.1364
                                       CPLEX    9.1022e-009   1.3768e-007      153.9906   0.8435
P10   900 × 1100 × 0.4                 SALN     5.3396e-009   1.5207e-007      178.1151   7.9383
                                       CPLEX    4.5020e-011   2.0373e-010      178.1163   1.3643
P11   1500 × 2000 × 0.1                SALN     1.1289e-008   2.0696e-007      231.5284   13.2493
                                       CPLEX    7.5886e-011   1.1059e-009      231.5286   2.1343
P12   1000 × 2000 × 0.01               SALN     3.6398e-010   7.9162e-009      198.4905   5.2620
                                       CPLEX    6.8212e-013   1.1642e-010      198.4922   0.6752
P13   1000 × 5000 × 0.001              SALN     2.7853e-010   1.9281e-010      190.0141   0.2709
                                       CPLEX    4.8203e-010   1.5425e-009      190.0142   0.2162
P14   1000 × 10000 × 0.001             SALN     1.1221e-010   1.4988e-009      212.2416   0.4867
                                       CPLEX    7.9094e-009   2.6691e-007      212.3453   0.3353
P15   1000 × 1e5 × 0.001               SALN     9.7702e-010   2.3283e-009      231.7930   3.7511
                                       CPLEX    2.7285e-012   1.8044e-009      231.8763   1.2472
P16   1000 × 1e6 × 0.0002              SALN     9.9432e-013   2.0464e-012      121.3937   9.0948
                                       CPLEX    9.9098e-006   1.1089e-004      121.3940   3.7848
P17   100 × 1e6 × 0.001                SALN     3.9563e-010   5.6607e-009      73.8493    6.8537
                                       CPLEX    2.0082e-003   1.1777e-002      73.9412    2.5582
P18   10 × 1e4 × 0.001                 SALN     2.2737e-013   7.9581e-013      19.5735    0.0166
                                       CPLEX    2.2737e-013   1.1369e-013      19.5739    0.1386
P19   10 × 1e6 × 0.001                 SALN     1.2478e-009   1.0710e-008      18.8192    5.6399
                                       CPLEX    5.9615e-004   6.4811e-005      18.8863    2.5623
P20   10 × 5e6 × 0.01                  SALN     2.0425e-008   1.1816e-008      20.8470    28.7339
                                       CPLEX    4.9966e+004   3.3500e+005      0.0000     1.9482
P21   100 × 1e6 × 0.01                 SALN     3.8654e-012   7.7875e-012      42.0698    7.8895
                                       CPLEX    1.7994e-004   6.2712e-005      42.0931    8.8324
P22   100 × 1e4 × 0.1                  SALN     1.1084e-012   2.1600e-012      39.4563    0.1440
                                       CPLEX    1.2518e-005   1.3174e-005      39.4563    0.3099
P23   100 × 1e5 × 0.05                 SALN     1.7053e-012   6.3121e-012      43.5944    1.1379
                                       CPLEX    3.9827e-006   2.8309e-006      43.5944    1.5729
P24   1000 × 5e4 × 0.1                 SALN     1.3074e-012   2.4102e-011      117.4693   15.1065
                                       CPLEX    6.7455e-007   6.7379e-006      117.4699   20.3967
P25   1000 × 1e5 × 0.08                SALN     2.5011e-012   1.0516e-011      116.6964   23.2540
                                       CPLEX    9.4298e-006   5.5861e-004      116.6964   33.4319
Therefore,

  S(u*, β, ẑ, x, d) − S(u*_α, β, ẑ, x, d)
  ≤ S̃(u*, β, ẑ, x, d, α) − S̃(u*_α, β, ẑ, x, d, α) + (log 2 / α)² + 2ρ n2 (log 2 / α)
  ≤ (log 2 / α)² + 2ρ n2 (log 2 / α),

  S̃(u*_α, β, ẑ, x, d, α) − S̃(u*, β, ẑ, x, d, α)
  ≤ S(u*_α, β, ẑ, x, d) − S(u*, β, ẑ, x, d) + (log 2 / α)² + 2ρ n2 (log 2 / α)
  ≤ (log 2 / α)² + 2ρ n2 (log 2 / α),   (41)

which proves (35). Now suppose that W_B is of full rank. Then the Hessian of S̃ is negative definite, and S̃ is strongly concave on bounded sets. By the definition of strong concavity, for any γ ∈ (0, 1),

  S̃(γ u*_α + (1 − γ) u*, β, ẑ, x, d, α) − γ S̃(u*_α, β, ẑ, x, d, α) − (1 − γ) S̃(u*, β, ẑ, x, d, α)
  ≥ (κ/2) γ (1 − γ) ‖u*_α − u*‖².   (42)

Let γ = 1/2; then

  (κ/8) ‖u*_α − u*‖² ≤ S̃((u*_α + u*)/2, β, ẑ, x, d, α) − (1/2)(S̃(u*_α, β, ẑ, x, d, α) + S̃(u*, β, ẑ, x, d, α))
  ≤ S̃(u*_α, β, ẑ, x, d, α) − (1/2)(S̃(u*_α, β, ẑ, x, d, α) + S̃(u*, β, ẑ, x, d, α))
  = (1/2)(S̃(u*_α, β, ẑ, x, d, α) − S̃(u*, β, ẑ, x, d, α))
  ≤ (1/2)(log 2 / α)² + ρ n2 (log 2 / α),   (43)

which gives (36).

4. Numerical Results and Algorithm

In each iteration of the process (30), one concave unconstrained maximization problem is solved; a fast Newton method can be used for this purpose. The twice differentiability of the objective function of problem (32) allows us to use a quadratically convergent Newton algorithm with an Armijo stepsize [20], which makes the algorithm globally convergent.

In the algorithm, the Hessian matrix may be singular; thus, we use a modified Newton method. The direction at each iteration for solving (30) is obtained through the following relation:

  d^j = −(∇²_u S̃(u^j, β, ẑ, x, d, α) − δ I_{m2})^{−1} ∇_u S̃(u^j, β, ẑ, x, d, α),
  u^{j+1} = u^j + λ_j d^j,   (44)

where δ is a small positive number, I_{m2} is the identity matrix of order m2, and λ_j is a suitable step length determined by the Armijo rule (see Algorithm 1).

The proposed algorithm was applied to solve a set of recourse problems. Table 1 compares it with the CPLEX v12.1 solver on the quadratic convex programming problems (5). As is evident from Table 1, most of the recourse problems are solved more successfully by the algorithm based on the smooth augmented Lagrangian Newton method (SALN) than by the CPLEX package (see, for illustration, problems P21–P25 in Table 1). The algorithm gives high accuracy and the minimum-norm solution in suitable time (see the last column of Table 1). We can also see that CPLEX outperforms the proposed algorithm on some recourse problems whose matrices are approximately square (e.g., problems P5–P12).

The test generator produces recourse problems; they are generated using the MATLAB code shown in Algorithm 2. The algorithm was run on a computer with a 2.5 GHz dual-core CPU and 4 GB of memory in the MATLAB 7.8 programming environment. In the generated problems, the recourse matrix W is a sparse m2 × n2 matrix with density d. The constants β and δ of (44) were set to 1 and 10^{-8}, respectively.

In Table 1, the second column indicates the size and density of the matrix W, the fourth column indicates the feasibility error of the primal problem (4), and the next column indicates the optimality error of this problem (the MATLAB code of this paper is available from the authors upon request).
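The modified direction in (44) stays an ascent direction even when the Hessian of S̃ is singular, since −(H − δI)^{−1} is positive definite whenever H is negative semidefinite and δ > 0. A minimal check with a made-up singular Hessian and gradient:

```python
# Modified Newton direction d = -(H - delta*I)^{-1} g from (44).
# H below is a made-up negative semidefinite, singular Hessian; the shift by
# delta*I makes it invertible while preserving the ascent property g^T d > 0.
import numpy as np

H = np.array([[-2.0, 0.0], [0.0, 0.0]])   # singular: plain Newton would fail
g = np.array([1.0, -3.0])                 # gradient at the current iterate
delta = 1e-8
d = -np.linalg.solve(H - delta * np.eye(2), g)
ascent = g @ d                            # positive for any nonzero g
```

Note that a very small δ can produce a huge step along nearly-null directions of H, which is why the Armijo step length λ_j in (44) is still needed.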
Choose z^0 ∈ R^{n2}, α > 1, η > 1, an error tolerance τ > 0, and a small positive number δ.
k = 0;
while ‖z^k − z^{k−1}‖_∞ ≥ τ
    Choose u^0 ∈ R^{m2} and set j = 0.
    while ‖∇_u S̃(u^j, β, z^k, x, d, α)‖_∞ ≥ τ
        Choose λ_j = max{s, sν, sν², ...} such that
            S̃(u^j + λ_j d^j, β, z^k, x, d, α) − S̃(u^j, β, z^k, x, d, α) ≥ λ_j μ ∇_u S̃(u^j, β, z^k, x, d, α)^T d^j,
        where
            d^j = −(∇²_u S̃(u^j, β, z^k, x, d, α) − δ I_{m2})^{−1} ∇_u S̃(u^j, β, z^k, x, d, α),
        s > 0 is a constant, ν ∈ (0, 1), and μ ∈ (0, 1).
        Put u^{j+1} = u^j + λ_j d^j, j = j + 1, and α = ηα.
    end
    Set u^{k+1} = u^j, z^{k+1} = p(z^k + W^T(ω)u^{k+1} + βd, α), and k = k + 1.
end

Algorithm 1: Newton method with the Armijo rule.
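A compact Python sketch of the inner Newton-Armijo loop of Algorithm 1 on the smoothed function S̃ of (29) is given below. The data W, q, d, ẑ and the parameter values are illustrative, not from the paper; the derivatives of p are written out directly (p' is the sigmoid of αx, p'' = α s(1 − s)):

```python
# Newton-Armijo maximization of the smoothed dual S~ of (29), toy instance.
import numpy as np

W = np.array([[1.0, 1.0, 1.0]]); q = np.array([1.0])   # made-up data
d = np.array([1.0, 1.0, 0.0]); z_hat = np.zeros(3)
beta, alpha, delta = 1.0, 50.0, 1e-8

def p(x, a):   return x + np.logaddexp(0.0, -a * x) / a    # smoothed plus
def sgm(x, a): return 1.0 / (1.0 + np.exp(-a * x))         # p' (sigmoid)

def S_tilde(u):
    v = z_hat + W.T @ u + beta * d
    return q @ u - 0.5 * np.sum(p(v, alpha) ** 2)

def grad(u):
    v = z_hat + W.T @ u + beta * d
    return q - W @ (p(v, alpha) * sgm(v, alpha))

def hess(u):
    v = z_hat + W.T @ u + beta * d
    s = sgm(v, alpha)
    w = s * s + p(v, alpha) * alpha * s * (1 - s)   # d/dv of p*p', per component
    return -(W * w) @ W.T                           # negative semidefinite

u = np.zeros(1)
for _ in range(100):
    g = grad(u)
    if np.linalg.norm(g) < 1e-8:
        break
    step = -np.linalg.solve(hess(u) - delta * np.eye(len(u)), g)  # as in (44)
    lam, mu = 1.0, 1e-4
    while lam > 1e-12 and S_tilde(u + lam * step) < S_tilde(u) + mu * lam * (g @ step):
        lam *= 0.5                                   # Armijo backtracking
    u = u + lam * step

g_norm = np.linalg.norm(grad(u))
z_smooth = p(z_hat + W.T @ u + beta * d, alpha)      # smoothed z-update of (30)
```

For this instance the smoothed update lands close to the normal solution (0.5, 0.5, 0) of the underlying LP, with an error on the order of log 2 / α.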
%Sgen: Generate random solvable recourse problems:
%Input: m, n, d(ensity); Output: W, q, dd;
m=input('Enter m2:')
n=input('Enter n2:')
d=input('Enter d:')
pl=inline('(abs(x)+x)/2')
W=sprand(m,n,d); W=100*(W-0.5*spones(W));
z=sparse(10*pl(rand(n,1)-rand(n,1)));
q=W*z;
y=spdiags((sign(pl(rand(m,1)-rand(m,1)))),0,m,m)*5*((rand(m,1)-rand(m,1)));
dd=W'*y-10*spdiags((ones(n,1)-sign(z)),0,n,n)*ones(n,1);
format short e; nnz(W)/prod(size(W))

Algorithm 2
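For readers working in Python, the generator of Algorithm 2 can be mirrored roughly as follows (a sketch, not the authors' code; variable names follow the MATLAB version, with dd standing for the objective vector to avoid clashing with the density d). By construction q = Wz with z ≥ 0, so the generated problem max{dd^T z : Wz = q, z ≥ 0} is always feasible:

```python
# Rough NumPy/SciPy analogue of the Algorithm 2 generator (illustrative sketch).
import numpy as np
from scipy.sparse import random as sprandom

rng = np.random.default_rng(0)
m2, n2, dens = 20, 50, 0.3
pl = lambda x: (np.abs(x) + x) / 2                 # the plus function

W = sprandom(m2, n2, density=dens, random_state=0, format="csr")
W = 100.0 * (W - 0.5 * (W != 0))                   # center nonzero entries around 0
z = 10 * pl(rng.random(n2) - rng.random(n2))       # planted solution, z >= 0 with zeros
q = W @ z                                          # makes W z = q feasible by design
y = np.sign(pl(rng.random(m2) - rng.random(m2))) * 5 * (rng.random(m2) - rng.random(m2))
dd = W.T @ y - 10 * (1 - np.sign(z))               # objective vector of the dual LP
```

The last line enforces complementary slackness (W^T y − dd is 10 exactly on the zero components of z), so the planted z is an optimal solution of the generated problem.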
5. Conclusion

In this paper, a smooth reformulation process based on the augmented Lagrangian algorithm was proposed for obtaining the normal solution of the recourse problem of a two-stage stochastic linear program. This smooth iterative process allows us to use a quadratically convergent Newton algorithm, which accelerates obtaining the normal solution.

Table 1 shows that the proposed algorithm has appropriate speed on most of the problems. This can be observed, in particular, on recourse problems whose coefficient matrices have noticeably more constraints than variables. Solving problems whose coefficient matrix is nearly square (the numbers of constraints and variables close to each other) is more challenging, and the algorithm needs more time to solve them.
Acknowledgment
The authors would like to thank the reviewers for their helpful
comments.
References
[1] J. R. Birge and F. Louveaux, Introduction to Stochastic Programming, Springer Series in Operations Research, Springer, New
York, NY, USA, 1997.
[2] G. B. Dantzig, “Linear programming under uncertainty,” Management Science, vol. 1, pp. 197–206, 1955.
[3] J. L. Higle and S. Sen, “Stochastic decomposition: an algorithm
for two-stage linear programs with recourse,” Mathematics of
Operations Research, vol. 16, no. 3, pp. 650–669, 1991.
[4] P. Kall and S. W. Wallace, Stochastic Programming, John Wiley
& Sons, Chichester, UK, 1994.
[5] S. A. Tarim and I. A. Miguel, “A hybrid benders’ decomposition
method for solving stochastic constraint programs with linear
recourse,” in Recent Advances in Constraints, vol. 3978 of
Lecture Notes in Computer Science, pp. 133–148, Springer, Berlin,
Germany, 2006.
[6] J. R. Birge, X. Chen, L. Qi, and Z. Wei, “A stochastic Newton
method for stochastic quadratic programs with recourse,” Tech.
Rep., Department of Industrial and Operations Engineering,
University of Michigan, Ann Arbor, Mich, USA, 1994.
[7] X. Chen, “A parallel BFGS-SQP method for stochastic linear programs,” pp. 67–74, World Scientific, River Edge, NJ, USA, 1995.
[8] X. Chen, “Newton-type methods for stochastic programming,”
Mathematical and Computer Modelling, vol. 31, no. 10–12, pp.
89–98, 2000.
[9] X. J. Chen, L. Q. Qi, and R. S. Womersley, “Newton’s method for
quadratic stochastic programs with recourse,” Journal of Computational and Applied Mathematics, vol. 60, no. 1-2, pp. 29–46,
1995.
[10] X. Chen and R. S. Womersley, “A parallel inexact Newton
method for stochastic programs with recourse,” Annals of
Operations Research, vol. 64, pp. 113–141, 1996.
[11] X. Chen and R. S. Womersley, “Random test problems and parallel methods for quadratic programs and quadratic stochastic
programs,” Optimization Methods and Software, vol. 13, no. 4,
pp. 275–306, 2000.
[12] X. Chen and Y. Ye, “On homotopy-smoothing methods for boxconstrained variational inequalities,” SIAM Journal on Control
and Optimization, vol. 37, no. 2, pp. 589–616, 1999.
[13] Y.-J. Lee and O. L. Mangasarian, “SSVM: a smooth support vector machine for classification,” Computational Optimization and Applications, vol. 20, no. 1, pp. 5–22, 2001.
[14] C. Kanzow, H. Qi, and L. Qi, “On the minimum norm
solution of linear programs,” Journal of Optimization Theory and
Applications, vol. 116, no. 2, pp. 333–345, 2003.
[15] O. L. Mangasarian, “A Newton method for linear programming,” Journal of Optimization Theory and Applications, vol. 121,
no. 1, pp. 1–18, 2004.
[16] R. T. Rockafellar, Convex Analysis, Princeton University Press,
Princeton, NJ, USA, 1970.
[17] Yu. G. Evtushenko, A. I. Golikov, and N. Mollaverdy, “Augmented Lagrangian method for large-scale linear programming
problems,” Optimization Methods & Software, vol. 20, no. 4-5,
pp. 515–524, 2005.
[18] C. H. Chen and O. L. Mangasarian, “Smoothing methods
for convex inequalities and linear complementarity problems,”
Mathematical Programming, vol. 71, no. 1, pp. 51–69, 1995.
[19] C. Chen and O. L. Mangasarian, “A class of smoothing functions for nonlinear and mixed complementarity problems,” Computational Optimization and Applications, vol. 5, no. 2, pp. 97–138, 1996.
[20] X. Chen, L. Qi, and D. Sun, “Global and superlinear convergence
of the smoothing Newton method and its application to
general box constrained variational inequalities,” Mathematics
of Computation, vol. 67, no. 222, pp. 519–540, 1998.