Hindawi Publishing Corporation
Journal of Applied Mathematics
Volume 2013, Article ID 295147, 7 pages
http://dx.doi.org/10.1155/2013/295147
Research Article

Parallel Variable Distribution Algorithm for Constrained Optimization with Nonmonotone Technique
Congying Han,1,2 Tingting Feng,2 Guoping He,2 and Tiande Guo1

1 School of Mathematical Sciences, University of the Chinese Academy of Sciences, No. 19(A), Yuquan Road, Shijingshan District, Beijing 100049, China
2 College of Information Science and Engineering, Shandong University of Science and Technology, Qingdao 266510, China

Correspondence should be addressed to Congying Han; hancy@ucas.ac.cn
Received 6 September 2012; Accepted 28 February 2013
Academic Editor: Ching-Jong Liao
Copyright © 2013 Congying Han et al. This is an open access article distributed under the Creative Commons Attribution License,
which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
A modified parallel variable distribution (PVD) algorithm for solving large-scale constrained optimization problems is developed, which modifies the quadratic subproblem QP_l at each iteration instead of the QP_l^0 of the SQP-type PVD algorithm proposed by C. A. Sagastizábal and M. V. Solodov in 2002. The algorithm can circumvent the difficulties associated with the possible inconsistency of the QP_l^0 subproblem of the original SQP method. Moreover, we introduce a nonmonotone technique instead of the penalty function to carry out the line search procedure more flexibly. Under appropriate conditions, the global convergence of the method is established. In the final part, parallel numerical experiments are implemented on CUDA based on the GPU (Graphics Processing Unit).
1. Introduction
In this paper, we consider the following nonlinear programming problem:
    min  f(x)
    s.t. c(x) ≤ 0,   (1)
where 𝑓 : 𝑅𝑛 → 𝑅 and 𝑐(π‘₯) : 𝑅𝑛 → π‘…π‘š are all continuously
differentiable. Suppose the feasible set X in (1) is described by a system of inequality constraints:

    X = {x ∈ R^n | c_i(x) ≤ 0, i ∈ I = {1, 2, ..., m}}.   (2)
In this paper, we give a new algorithm based on the method
in [1], which partitions the problem variables π‘₯ ∈ 𝑅𝑛 into 𝑝
blocks π‘₯1 , . . . , π‘₯𝑝 , such that
    x = (x_1, ..., x_p),   x_l ∈ R^{n_l},   l = 1, ..., p,   ∑_{l=1}^p n_l = n.   (3)
We assume that X has block-separable structure; specifically,

    c(x) = (c_1(x_1), c_2(x_2), ..., c_p(x_p)),   c_l(x_l) : R^{n_l} → R^{m_l},   l ∈ {1, 2, ..., p},   ∑_{l=1}^p m_l = m.   (4)
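To make the block-partitioned, block-separable setting of (3) and (4) concrete, the following minimal sketch (in Python purely for illustration; the implementation reported in Section 4 is in C with CUDA) splits x into blocks and assembles c(x) from per-block constraint functions. The block sizes and the sample constraint are hypothetical.

```python
import numpy as np

def split_blocks(x, sizes):
    """Partition x in R^n into blocks x_1, ..., x_p with sum(sizes) == n, as in (3)."""
    return np.split(x, np.cumsum(sizes)[:-1])

def c_block(x_l):
    """A hypothetical block constraint c_l(x_l) <= 0; here c_l(x_l) = ||x_l||^2 - 1."""
    return np.array([x_l @ x_l - 1.0])

x = np.arange(6, dtype=float)                        # n = 6
blocks = split_blocks(x, [2, 2, 2])                  # p = 3 blocks, each n_l = 2
c = np.concatenate([c_block(xl) for xl in blocks])   # c(x) = (c_1(x_1), ..., c_p(x_p)) as in (4)
```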
Parallel variable distribution (PVD) algorithm for solving
optimization problems was first proposed by Ferris and
Mangasarian [2], in 1994, in which the variables are distributed among 𝑝 processors. Each processor has the primary responsibility for updating its block of variables while
allowing the remaining secondary variables to change in a
restricted fashion along some easily computable directions.
The distinctive novel feature of this algorithm is the presence
of the “forget-me-not” term which allows for a change
in “secondary” variables. This makes PVD fundamentally
different from the block Jacobi [3] and coordinate descent [4]
methods. The forget-me-not approach improves robustness
and accelerates convergence of the algorithm and is the key
to its success. In 1997, Solodov [5] proposed useful generalizations that consist, for the general unconstrained case,
of replacing exact global solution of the subproblems by
a certain natural sufficient descent condition, and, for the
convex case, of inexact subproblem solution in the PVD
algorithm. Solodov [6] proposed an algorithm that applies the PVD approach directly to problems with general convex constraints, using the projected gradient residual function as the search direction, and showed that the algorithm converges, provided that
certain conditions are imposed on the change of secondary
variables. In 2002, Sagastizábal and Solodov [1] proposed
two new variants of PVD for the constrained case. Without
assuming convexity of constraints, but assuming block-separable structure, they showed that PVD subproblems can
be solved inexactly by solving their quadratic programming
approximations. This extends PVD to nonconvex (separable)
feasible sets and provides a constructive, practical way of
solving the parallel subproblems. For inseparable constraints,
but assuming convexity, they developed a PVD method based
on suitable approximate projected gradient directions. The
approximation criterion was based on a certain error bound
result, and it was readily implementable. In 2011, Zheng
et al. [7] gave a parallel SSLE algorithm, in which the PVD
subproblems are solved inexactly by serial sequential linear
equations, for solving large-scale constrained optimization
with block-separable structure. Without assuming the convexity of constraints, the algorithm is proved to be globally
convergent to a KKT point. In 2009, Han et al. [8] proposed an asynchronous PVT algorithm for solving large-scale linearly constrained convex minimization problems, following the idea of [9] that a constrained optimization problem is equivalent to a differentiable unconstrained optimization problem obtained by introducing the Fischer function. In particular, different from [9], the linear rate of convergence does not depend on the number of processors.
In this paper, we use [1] as our main reference on the SQP-type PVD method for problems (1) and (4). First, we briefly review the algorithm in [1]. The original problem is distributed into p parallel subproblems, which are treated by p parallel processors. In the algorithm of [1], the linear constraints of the quadratic programming subproblems may be inconsistent, or the solution of the quadratic programming subproblems may be unbounded, so the algorithm may fail. This drawback has been overcome by many researchers, for example in [10]. In [1], an exact penalty function is used as the merit function to carry out the line search. As pointed out by Fletcher and Leyffer [11], the biggest drawback of the penalty function is that the penalty parameter estimates can be problematic to obtain. To overcome this drawback, we use a nonmonotone technique instead of a penalty function to carry out the line search.
Recent research [12–14] indicates that the monotone line
search technique may have some drawbacks. In particular,
enforcing monotonicity may considerably reduce the rate
of convergence when the iteration is trapped near a narrow curved valley, which can result in very short steps or
zigzagging. Therefore, it might be advantageous to allow
the iterative sequence to occasionally generate points with
nonmonotone objective values. Grippo et al. [12] generalized
the Armijo rule and proposed a nonmonotone line search
technique for Newton’s method which permits increase in
function value, while retaining global convergence of the
minimization algorithm. In [15], Sun et al. give several
nonmonotone line search techniques, such as nonmonotone
Armijo rule, nonmonotone Wolfe rule, nonmonotone F-rule,
and so on. Several numerical tests show that the nonmonotone line search technique for unconstrained optimization
and constrained optimization is efficient and competitive [12–
14, 16]. Recently, [17] gave a method that overcomes the drawback of zigzagging by using a nonmonotone line search technique to determine the step length, which makes the algorithm more flexible.
In this paper, we combine [1] with the ideas of [10, 17] and
propose an infeasible SQP-type Parallel Variable Distribution
algorithm for constrained optimization with nonmonotone
technique.
The paper is organized as follows. The algorithm is
presented in Section 2. In Section 3, under mild assumptions,
some global convergence results are proved. The numerical
results are shown in Section 4. And conclusions are given in
the last section.
2. A Modified SQP-Type PVD Method with
Nonmonotone Technique
To describe our algorithm, we first give the modified quadratic subproblems of the SQP method. Given x^k ∈ R^n, we distribute it into p blocks, with iterates x_l^k ∈ R^{n_l}, l = 1, ..., p; H_l^k ∈ R^{n_l×n_l}, l = 1, 2, ..., p, denotes an approximate Hessian for block l.
We use z_l ∈ R to perturb the subproblem QP_l^0 of [1] and pose a new subproblem QP_l in its place. Consider
QP_l(x_l^k, H_l^k) =

    min_{(d_l, z_l) ∈ R^{n_l+1}}   z_l + (1/2) d_l^T H_l^k d_l
    s.t.   (g_l^k)^T d_l ≤ z_l,
           c_i(x_l^k) + ∇c_i(x_l^k)^T d_l ≤ z_l,   i ∈ I_l = {1, ..., m_l}.   (5)
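Subproblem (5) is a smooth convex QP in the variables (d_l, z_l), so any QP or NLP solver applies blockwise. The sketch below solves one block's subproblem with SciPy's general-purpose SLSQP routine; the block data H, g, c_val, and c_jac are hypothetical placeholders, and the modified interior-point solver actually used in Section 4 would take this role in practice.

```python
import numpy as np
from scipy.optimize import minimize

def solve_qp_l(H, g, c_val, c_jac):
    """Solve subproblem (5) for one block:
       min over (d, z) of z + 0.5 * d^T H d
       s.t. g^T d <= z and c_i + (grad c_i)^T d <= z for i in I_l."""
    n = len(g)

    def obj(v):
        d, z = v[:n], v[n]
        return z + 0.5 * d @ H @ d

    cons = [{"type": "ineq", "fun": lambda v: v[n] - g @ v[:n]}]
    for ci, ai in zip(c_val, c_jac):
        cons.append({"type": "ineq",
                     "fun": lambda v, ci=ci, ai=ai: v[n] - ci - ai @ v[:n]})

    # (d, z) = (0, max_i {c_i, 0}) is always feasible, so start there.
    v0 = np.zeros(n + 1)
    v0[n] = max(float(c_val.max()), 0.0)
    res = minimize(obj, v0, method="SLSQP", constraints=cons)
    return res.x[:n], res.x[n]                       # (d_l^k, z_l^k)

# Hypothetical data for one block with n_l = 2 and m_l = 1.
H = np.eye(2); g = np.array([1.0, -2.0])
c_val = np.array([0.3]); c_jac = np.array([[1.0, 1.0]])
d_l, z_l = solve_qp_l(H, g, c_val, c_jac)
```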
For convenience, given π‘₯, the 𝑙th block of ∇𝑓(π‘₯) will be
denoted by
    g_l = (I_{n_l×n_l}  0_{n_l×n_l}) ∇f(x).   (6)
In (5), π‘”π‘™π‘˜ is 𝑔𝑙 in step π‘˜. Note that (5) is always feasible for
𝑑𝑙 = 0, 𝑧𝑙 = max𝑖∈𝐼𝑙 {𝑐𝑖 (π‘₯π‘™π‘˜ ), 0}. To test whether constraints
are satisfied or not, we denote the violation function β„Ž(π‘₯) as
follows:
    h(x) = ‖c(x)^+‖,   (7)
where 𝑐𝑖 (π‘₯)+ = max{𝑐𝑖 (π‘₯), 0}, 𝑖 ∈ 𝐼, β€– ⋅ β€– denotes the Euclidean
norm on π‘…π‘š . It is easy to see that β„Ž(π‘₯) = 0 if and only if π‘₯ is a
feasible point (β„Ž(π‘₯) > 0 if and only if π‘₯ is infeasible).
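In code, the violation function (7) is a one-line reduction; a minimal sketch:

```python
import numpy as np

def h(c_val):
    """Violation function (7): Euclidean norm of the positive part c(x)^+."""
    return np.linalg.norm(np.maximum(c_val, 0.0))

# h(c_val) == 0 exactly when every c_i(x) <= 0, i.e., when x is feasible.
```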
We describe our modified PVD algorithm as follows.
Algorithm A. Consider the following.
Step 0. Start with any x^0 ∈ R^n. Choose parameters μ ∈ (0, 1/2), β ∈ (1/2, 1), γ ∈ (0, 1), a positive integer M > 0, and positive definite matrices H_l^0 ∈ R^{n_l×n_l}, l ∈ {1, 2, ..., p}. Set k = 0, m(k) = 0.
Having π‘₯π‘˜ , check a stopping criterion. If it is not satisfied,
compute π‘₯π‘˜+1 as follows.
Step 1. Parallelization. For each processor l ∈ {1, 2, ..., p}, solve subproblem (5) to get (d_l^k, z_l^k). If d_l^k = 0, l = 1, ..., p, stop; otherwise go to Step 2.

Step 2. Synchronization. Set

    d^k = (d_1^k, ..., d_p^k),   g_k^T = ((g_1^k)^T, ..., (g_p^k)^T),   H^k = diag(H_1^k, ..., H_p^k).   (8)

If g_k^T d^k ≤ −(1/2)(d^k)^T H^k d^k, go to Step 3; otherwise, set λ_k = 1 and go to Step 4.

Step 3. Choose λ_k as the largest value in the sequence {1, γ, γ^2, ...} satisfying

    f(x^k + λ d^k) ≤ max_{0 ≤ j ≤ m(k)} [f(x^{k−j})] − λ μ (d^k)^T H^k d^k.   (9)

Step 4. Set x^{k+1} = x^k + λ_k d^k. Let h̄ = max_{0 ≤ j ≤ m(k)} [h(x^{k−j})]. If h(x^{k+1}) ≤ β h̄, then set m(k+1) = min{m(k)+1, M}, update H_l^k to H_l^{k+1}, l = 1, ..., p, set k = k+1, and go to Step 1. Otherwise, let k = k+1, call the Restoration Algorithm of [17] to obtain x_r^k, let x^k = x_r^k, m(k) = m(i), and go to Step 1.

Remark 1. In Step 3 of Algorithm A, the nonmonotone parameter m(k) satisfies

    m(0) = 0,   0 ≤ m(k) ≤ min{m(k−1)+1, M},   k ≥ 1.   (10)

For convenience, we denote f(x^{l(k)}) = max_{0 ≤ j ≤ m(k)} [f(x^{k−j})], where k − m(k) ≤ l(k) ≤ k.

In the restoration algorithm we aim to decrease the value of h(x). More precisely, we use a trust-region-type method, with the help of the nonmonotone technique, to obtain h(x^k) → 0 as k → ∞. Let

    w_k^i(d) = h(x_k^i) − h(x_k^i + d),
    Ψ_k^i(d) = h(x_k^i) − ‖(c(x_k^i) + ∇c(x_k^i)^T d)^+‖,   (11)

and r_k^i = w_k^i(d) / Ψ_k^i(d).

Algorithm B is similar to the restoration phase given by Su and Yu [17]; we describe this Restoration Algorithm as follows.

Algorithm B. Consider the following.

Step 0. Set x_k^0 = x^k, Δ^0 > 0, i = 0, η ∈ (0, 1), m(i) = 0.

Step 1. If h(x_k^i) ≤ β h̄, then set x_r^k = x_k^i and stop.

Step 2. Compute

    max  Ψ_k^i(d)   s.t. ‖d‖ ≤ Δ^i   (12)

to get d_k^i. Calculate r_k^i.

Step 3. If r_k^i ≤ η, then let x_k^{i+1} = x_k^i, Δ^{i+1} = Δ^i/2, i = i+1, m(i) = min{m(i−1)+1, M}, and go to Step 2.

Step 4. If r_k^i > η, then let x_k^{i+1} = x_k^i + d_k^i, Δ^{i+1} = 2Δ^i, i = i+1, m(i) = min{m(i−1)+1, M}, and go to Step 1.
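The following sketch illustrates the backtracking of Step 3 of Algorithm A: it returns the largest λ in {1, γ, γ^2, ...} satisfying the nonmonotone condition (9), where f_hist holds the last m(k)+1 objective values. The cap max_back and the fallback return are assumptions of this sketch, not part of Algorithm A, which instead relies on Lemma 6 below for well-definedness.

```python
import numpy as np

def nonmonotone_step(f, x, d, dHd, f_hist, mu=0.1, gamma=0.5, max_back=30):
    """Step 3 of Algorithm A (a sketch): largest lam in {1, gamma, gamma^2, ...}
       with f(x + lam*d) <= max(f_hist) - lam*mu*(d^T H d), as in (9)."""
    f_ref = max(f_hist)   # nonmonotone reference value over the last m(k)+1 iterates
    lam = 1.0
    for _ in range(max_back):
        if f(x + lam * d) <= f_ref - lam * mu * dHd:
            return lam
        lam *= gamma
    return lam            # assumed fallback for the sketch only

# Toy usage with a quadratic; elsewhere the history is capped at M+1 entries per (10).
f = lambda x: x @ x
x = np.array([1.0, 1.0]); d = -x; dHd = d @ d
lam = nonmonotone_step(f, x, d, dHd, f_hist=[f(x)])
```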
3. The Convergence Properties

To prove the global convergence of Algorithm A, we make the following assumptions.

Assumptions

(A1) The iterates {x^k} remain in a compact subset S ⊂ R^n.

(A2) The objective function f and the constraint functions c_j (j ∈ I) are twice continuously differentiable on R^n.

(A3) For all x_l ∈ R^{n_l}, the set {∇c_i(x_l) : i ∈ I_l(x_l)} is linearly independent, where I_l(x_l) = {i ∈ I_l : c_i(x_l) = Φ_l(x_l)}, Φ_l(x_l) = max_{i∈I_l}{c_i(x_l), 0}.

(A4) There exist two constants 0 < a ≤ b such that a‖d_l‖^2 ≤ d_l^T H_l^k d_l ≤ b‖d_l‖^2, l = 1, ..., p, for all iterations k and all d_l ∈ R^{n_l}.

(A5) The solution of problem (12) satisfies

    Ψ_k^i(d) = h(x_k^i) − ‖(c(x_k^i) + ∇c(x_k^i)^T d)^+‖ ≥ β_2 Δ^i min{h(x_k^i), Δ^i},   (13)

where β_2 is a constant.
Remark 2. Assumptions (A1) and (A2) are the standard
assumptions. (A3) is the LICQ constraint qualification. (A4)
plays an important role in obtaining the convergence results.
(A5) is the sufficient reduction condition which guarantees
the global convergence in a trust region method. Under the
assumptions, 𝑓 is bounded below and the gradient function
𝑔𝑙 , 𝑙 = 1, . . . , 𝑝 is uniformly continuous in 𝑆𝑙 , where 𝑆 =
(𝑆1 , . . . , 𝑆𝑝 ).
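As an illustration of the trust-region mechanism behind Algorithm B that (A5) refers to, the sketch below performs one ratio test r_k^i = w_k^i(d)/Ψ_k^i(d) from (11) and the radius update of Steps 3 and 4. The inner solver for (12) is left as an abstract callback; solve_model is a hypothetical name.

```python
def restoration_step(h, x, solve_model, Delta, eta=0.25):
    """One iteration of a restoration phase in the spirit of Algorithm B.
       solve_model(x, Delta) is assumed to return (d, Psi) with ||d|| <= Delta,
       where Psi = Psi_k^i(d) > 0 is the predicted reduction from (11)/(12)."""
    d, Psi = solve_model(x, Delta)
    w = h(x) - h(x + d)              # actual reduction w_k^i(d)
    r = w / Psi                      # ratio r_k^i
    if r > eta:
        return x + d, 2.0 * Delta    # Step 4: accept the step, enlarge the radius
    return x, 0.5 * Delta            # Step 3: reject the step, shrink the radius
```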
Remark 3. We can use quasi-Newton BFGS methods to update H_l^k; differently from [18], we can use a small modification of y_l^k to keep H_l^k positive definite, according to formula (33) of [17].
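Formula (33) of [17] is not reproduced here; as an illustration of the idea in Remark 3, the sketch below applies Powell's damped BFGS update, a standard modification of y_l^k that keeps the updated block matrix positive definite whenever H is.

```python
import numpy as np

def damped_bfgs(H, s, y, theta_min=0.2):
    """BFGS update of a block Hessian approximation H with Powell's damping:
       replace y by y_bar = theta*y + (1 - theta)*H s so that s^T y_bar > 0,
       which preserves positive definiteness of the update."""
    Hs = H @ s
    sHs = s @ Hs
    sy = s @ y
    theta = 1.0 if sy >= theta_min * sHs else (1.0 - theta_min) * sHs / (sHs - sy)
    y_bar = theta * y + (1.0 - theta) * Hs
    return H - np.outer(Hs, Hs) / sHs + np.outer(y_bar, y_bar) / (s @ y_bar)
```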
Similar to Lemma 1 in [19], we can get the following conclusions.

Lemma 4. Suppose (A1)–(A4) hold and H_l^k, l = 1, ..., p, is symmetric positive definite; then d_l^k is well defined and (d_l^k, z_l^k) is the unique KKT point of (5). Furthermore, d_l^k is bounded over compact subsets of S_l × P_l, where P_l is the set of symmetric positive definite n_l × n_l matrices.

Proof. First note that the feasible set for (5) is nonempty, since (d_l, z_l) = (0, max_{i∈I_l}{c_i(x_l), 0}) is always feasible. It is clear that (d_l^k, z_l^k) is a solution to (5) if and only if d_l^k solves the unconstrained problem

    min_{d_l ∈ R^{n_l}}   (1/2) d_l^T H_l^k d_l + max{(g_l^k)^T d_l, max_{i∈I_l}{c_i(x_l^k) + ∇c_i(x_l^k)^T d_l}},   (14)

with

    z_l = max{(g_l^k)^T d_l, max_{i∈I_l}{c_i(x_l^k) + ∇c_i(x_l^k)^T d_l}}.   (15)

Since the function being minimized in (14) is strictly convex and radially unbounded, it follows that d_l^k is well defined and unique as a global minimizer for the convex problem (5) and thus unique as a KKT point for that problem. So we have

    H_l^k d_l^k + v g_l^k + ∇c_l(x_l^k)^T μ_l^k = 0,   (16)

    1 − v − ∑_{i=1}^{m_l} u_i = 0.   (17)

Due to (17) and u_i ≥ 0, v ≥ 0, we have that u_i, i ∈ I_l, is bounded. Since f and c_i are twice continuously differentiable and Assumption (A1) holds, for all x_l ∈ S_l, ‖−(v g_l + ∇c_l(x_l^k)^T μ_l^k)‖ is bounded; denote its maximum by M. From (16), ‖H_l^k d_l^k‖ ≤ M, and combining this with Assumption (A4), we have a‖d_l^k‖^2 ≤ (d_l^k)^T H_l^k d_l^k ≤ ‖d_l^k‖ ‖H_l^k d_l^k‖ ≤ M‖d_l^k‖, so that ‖d_l^k‖ ≤ M/a; therefore d_l^k is bounded on S_l × P_l.

Lemma 5. Suppose that x_l^k ∈ R^{n_l}, l = 1, ..., p, H_l^k is a positive definite matrix, and (d_l^k, z_l^k) is the solution of QP_l. Then we have the results (B1) and (B2).

(B1) The following inequality holds:

    z_l^k ≤ Φ_l(x_l^k) − (1/2)(d_l^k)^T H_l^k d_l^k,   l = 1, ..., p,   (18)

where Φ_l(x_l) = max_{i∈I_l}{c_i(x_l), 0}.

(B2) If d_l^k = 0 then z_l^k = 0, l = 1, ..., p, and x^k = (x_1^k, ..., x_p^k) is a KKT point of problem (1).

Proof. (B1) Since (d_l, z_l) = (0, max_{i∈I_l}{c_i(x_l), 0}) is a feasible point of QP_l, from the optimality of (d_l^k, z_l^k) we have

    z_l^k + (1/2)(d_l^k)^T H_l^k d_l^k ≤ z_l + 0 = max_{i∈I_l}{c_i(x_l^k), 0}.   (19)

Together with the definition of Φ_l(x_l^k), this implies that (18) holds.

(B2) Since (d_l^k, z_l^k) is the solution of (5), there exist v ∈ R and μ_l^k ∈ R^{m_l} satisfying the KKT conditions of (5); when d_l^k = 0, we have

    v g_l^k + ∇c_l(x_l^k)^T μ_l^k = 0,   (20)

    1 − v − ∑_{i=1}^{m_l} u_i^k = 0,   (21)

    u_i^k [c_i(x_l^k) − z_l^k] = 0,   u_i^k ≥ 0,   i ∈ I_l,   (22)

    v z_l^k = 0,   v ≥ 0,   (23)

    −z_l^k ≤ 0,   c_i(x_l^k) − z_l^k ≤ 0,   i ∈ I_l.   (24)

By the definition of Φ_l(x_l^k) and (22), Φ_l(x_l^k) ≤ z_l^k; then, from (18) and d_l^k = 0, Φ_l(x_l^k) ≥ z_l^k. From (24), it follows that

    u_i^k = 0,   ∀ i ∉ I_l^0 = {i ∈ I_l : c_i(x_l^k) = Φ_l(x_l^k)}.   (25)

Hence, it follows from Assumption (A3), together with (20) and (21), that v > 0. From (23) we have z_l^k = 0.

From (22) and (24), we have

    u_i^k c_i(x_l^k) = 0,   u_i^k ≥ 0,   c_i(x_l^k) ≤ 0,   i ∈ I_l;   (26)

due to (20), setting ρ_i^k = u_i^k / v, we have

    g_l^k + ∇c_l(x_l^k)^T ρ_l^k = 0.   (27)

Taking into account separability of the constraints c(x), and letting I = ∪_{l=1}^p I_l, then

    ρ_i^k c_i(x^k) = 0,   ρ_i^k ≥ 0,   c_i(x^k) ≤ 0,   i ∈ I,
    g_k + ∇c(x^k)^T ρ^k = 0;   (28)

that is, x^k is a KKT point of (1).
Lemma 6 (see [17, Lemma 3]). In Step 3, the line search procedure is well defined.
Lemma 7 (see [17, Lemma 4]). Under Assumption (A5), the
Restoration Algorithm (Algorithm B) terminates finitely.
From Lemmas 6 and 7, we know that Algorithm A and Algorithm B are well defined and can be carried out.
The following lemma implies that every cluster point of {x^k} generated by Algorithm A is a feasible point of problem (1).

Lemma 8. Let {x^k} be an infinite sequence generated by Algorithm A; then h(x^k) → 0 (k → ∞).
Theorem 9. Suppose {x^k} is an infinite sequence generated by Algorithm A, d_l^k, l = 1, ..., p, is the solution of (5), and d^k = (d_1^k, ..., d_p^k); then lim_{k→∞} ‖d_l^k‖ = 0, l = 1, ..., p.

Proof. By Assumption (A1), there exists a point x* such that x^k → x* for k ∈ K, where K is an infinite index set. By Algorithm A and Lemma 8, we consider the following two possible cases.

Case I. K_0 = {k ∈ K | g_k^T d^k ≤ −(1/2)(d^k)^T H^k d^k} is an infinite index set. In this case, we have

    f(x^k + λ_k d^k) ≤ f(x^{l(k)}) − λ_k μ (d^k)^T H^k d^k = f(x^{l(k)}) − λ_k μ ∑_{l=1}^p (d_l^k)^T H_l^k d_l^k.   (29)

Since m(k+1) ≤ m(k) + 1, we obtain

    f(x^{l(k+1)}) = max_{0≤j≤m(k+1)} [f(x^{k+1−j})] ≤ max_{0≤j≤m(k)+1} [f(x^{k+1−j})] = max{f(x^{l(k)}), f(x^{k+1})} = f(x^{l(k)}).   (30)

Hence for m(k) ≤ M, it holds that

    f(x^{l(k)}) ≤ f(x^{l(l(k)−1)}) − λ_{l(k)−1} μ (d^{l(k)−1})^T H^{l(k)−1} d^{l(k)−1}
               = f(x^{l(l(k)−1)}) − λ_{l(k)−1} μ ∑_{l=1}^p (d_l^{l(k)−1})^T H_l^{l(k)−1} d_l^{l(k)−1}.   (31)

Since f is bounded below, {f(x^{l(k)})} converges. Due to (31), we can obtain

    lim_{k→∞} λ_{l(k)−1} μ ∑_{l=1}^p (d_l^{l(k)−1})^T H_l^{l(k)−1} d_l^{l(k)−1} = 0.   (32)

By Lemma 6, there exists λ̄ > 0 such that λ_{l(k)−1} ≥ λ̄. By Assumption (A4), we obtain

    lim_{k→∞} ‖d_l^{l(k)−1}‖ = 0,   l = 1, ..., p.   (33)

From the uniform continuity of f(x), this implies that

    lim_{k→∞} f(x^{l(k)−1}) = lim_{k→∞} f(x^{l(k)}).   (34)

Let l̂(k) = l(k + M + 2). For any given j ≥ 1, we can have

    lim_{k→∞} ‖d_l^{l̂(k)−j}‖ = 0,   l = 1, ..., p,   (35)

    lim_{k→∞} f(x^{l̂(k)−j}) = lim_{k→∞} f(x^{l(k)}).   (36)

If j = 1, since {l̂(k)} ⊂ {l(k)}, (35) and (36) follow from (33) and (34). For a given number j > 1, we have

    f(x^{l̂(k)−j}) ≤ f(x^{l(l̂(k)−j−1)}) − λ_{l̂(k)−j−1} μ (d^{l̂(k)−j−1})^T H^{l̂(k)−j−1} d^{l̂(k)−j−1}.   (37)

Using the same arguments as above, we obtain

    lim_{k→∞} ‖d_l^{l̂(k)−j−1}‖ = 0,   l = 1, ..., p,   (38)

    lim_{k→∞} f(x^{l̂(k)−j−1}) = lim_{k→∞} f(x^{l(k)}).   (39)

Therefore (35) and (36) hold for any given j ≥ 1.

Now for any k, x^{k+1} = x^{l̂(k)} − ∑_{j=1}^{l̂(k)−k−1} λ_{l̂(k)−j} d^{l̂(k)−j}. From Remark 1, we obtain l̂(k) = l(k + M + 2) < k + M + 2, that is, l̂(k) − k − 1 ≤ M + 1. Since d^{l̂(k)−j} = (d_1^{l̂(k)−j}, ..., d_p^{l̂(k)−j}), together with (35) we can obtain lim_{k→∞} ‖d^{l̂(k)−j}‖ = 0, 1 ≤ j ≤ l̂(k) − k − 1. Therefore,

    lim_{k→∞} ‖x^{k+1} − x^{l̂(k)}‖ = 0.   (40)

Since {f(x^{l(k)})} admits a limit, by the uniform continuity of f on S, it holds that

    lim_{k→∞} f(x^k) = lim_{k→∞} f(x^{l̂(k)−1}) = lim_{k→∞} f(x^{l(k)}).

Then by the relation (29), we have

    lim_{k→∞} ∑_{l=1}^p λ_k μ (d_l^k)^T H_l^k d_l^k = 0.   (41)

Using the same arguments for deriving (33), we obtain

    lim_{k→∞} ‖d_l^k‖ = 0,   l = 1, ..., p.   (42)

Case II. K_0 is a finite index set, which implies that K_1 = {k ∈ K | g_k^T d^k > −(1/2)(d^k)^T H^k d^k} is an infinite index set.

If lim_{k→∞} ‖d_l^k‖ = 0, l = 1, ..., p, does not hold, then there exist a positive number ε > 0 and an infinite index set K_2 such that ‖d_l^k‖ > ε for all k ∈ K_2 ⊂ K_1. Since (d_l^k, z_l^k) is the solution of QP_l, by the KKT conditions of (5) we have

    H_l^k d_l^k + v g_l^k + ∇c_l(x_l^k)^T μ_l^k = 0,   (43)

    1 − v − ∑_{i=1}^{m_l} u_i = 0,   (44)

    (g_l^k)^T d_l^k − z_l^k ≤ 0,   (45)

    u_i [c_i(x_l^k) + ∇c_i(x_l^k)^T d_l^k − z_l^k] = 0,   u_i ≥ 0,   i ∈ I_l.   (46)

Since v ≥ 0, u_i ≥ 0, and (44), we obtain 0 ≤ u_i ≤ 1; therefore there exists M_l such that ‖u_l^k‖ ≤ M_l.
By Lemma 8, h(x^k) → 0 implies h_l(x_l^k) → 0. Hence there exists k_0 such that for k > k_0, k ∈ K_2, it holds that

    h_l(x_l^k) ≤ a ε^2 / (2 M_l) ≤ a ‖d_l^k‖^2 / (2 M_l) ≤ (d_l^k)^T H_l^k d_l^k / (2 M_l),   l = 1, ..., p.   (47)

Consequently, from (43), we deduce

    v (g_l^k)^T d_l^k = −(d_l^k)^T H_l^k d_l^k − (μ_l^k)^T ∇c_l(x_l^k) d_l^k
                      = −(d_l^k)^T H_l^k d_l^k + ∑_{i=1}^{m_l} u_i (c_i(x_l^k) − z_l^k)
                      ≤ −(d_l^k)^T H_l^k d_l^k + M_l h_l(x_l^k) − ∑_{i=1}^{m_l} u_i z_l^k   (48)
                      ≤ −(d_l^k)^T H_l^k d_l^k + M_l h_l(x_l^k) − ∑_{i=1}^{m_l} u_i (g_l^k)^T d_l^k.

Together with (44) and (47), transposing gives

    (g_l^k)^T d_l^k = (∑_{i=1}^{m_l} u_i + v) (g_l^k)^T d_l^k ≤ −(d_l^k)^T H_l^k d_l^k + M_l h_l(x_l^k)
                    ≤ −(d_l^k)^T H_l^k d_l^k + (1/2)(d_l^k)^T H_l^k d_l^k = −(1/2)(d_l^k)^T H_l^k d_l^k.   (49)

Taking into account separability of the constraints c(x), then

    g_k^T d^k = ∑_{l=1}^p (g_l^k)^T d_l^k ≤ −∑_{l=1}^p (1/2)(d_l^k)^T H_l^k d_l^k = −(1/2)(d^k)^T H^k d^k,   (50)

which contradicts the definition of K_1. The proof is complete.

Theorem 10. Suppose {x^k} is an infinite sequence generated by Algorithm A and the assumptions in Theorem 9 hold. Then every cluster point of {x^k} is a KKT point of problem (1).

Proof. By Assumption (A2), there exists x* such that x^k → x*, k ∈ K. From Lemma 8, we have h(x^k) → 0, k ∈ K, which means that x* is a feasible point. From Theorem 9, we have lim_{k→∞} ‖d_l^k‖ = 0, l = 1, ..., p. By Lemma 5, d_l^k → 0 implies that z_l^k → 0, l = 1, ..., p; that is, d_l* = 0 and z_l* = 0, l = 1, ..., p.

Compare the KKT conditions of (5) with the KKT conditions of problem (1), and let ρ* = (ρ_1*, ..., ρ_p*), ρ_l* = (u_i*/v, i ∈ I_l), I = ∪_{l=1}^p I_l. Taking into account separability of the constraints c(x) and of x, we conclude that

    g* + ∇c(x*)^T ρ* = 0,   c(x*) ≤ 0,   ρ_i* c_i(x*) = 0,   ρ_i* ≥ 0,   i ∈ I.   (51)

Therefore x* is a KKT point of problem (1).

4. Numerical Results

To get some insight into the computational properties of our approach in Section 2, we considered the same test problems taken from [1]. Choose Problem 1 of [1] as follows:

    min  ∑_{i=1}^{p−1} ∑_{j=i+1}^{p} ∑_{t=1}^{ν} x_{(i−1)ν+t} x_{(j−1)ν+t}
    s.t. ∑_{t=1}^{ν} x_{(i−1)ν+t}^2 − 1 = 0,   i = 1, ..., p.   (52)

In our test, we only verify our convergence theory without comparing with serial algorithms. In the future, we will study the convergence rate of the parallel algorithm.

Not having access to a parallel computer, we carried out the computations on a Graphics Processing Unit (GPU) and implemented the algorithm in the Compute Unified Device Architecture (CUDA) [20]. The input and output, however, are handled by the CPU. All the subproblems are solved on blocks, each of which is organized into many threads; having p blocks on the GPU is equivalent to having p processors in a parallel computer, and the many threads of a block can perform a large number of vector calculations within that block.

We have implemented Algorithm A in the C language. All codes were run under Visual Studio 2008 and CUDA 4.0 on a DELL Precision T7400. In Table 1, we report the number of iterations and the running times for Algorithm A on Problem (52). The results in Table 1 confirm that the proposed approach certainly makes sense. We solve the subproblem (5), which is a positive semidefinite quadratic program, by a modified interior-point method, and the Restoration Algorithm of [17] is solved by a trust-region approach combined with local pattern search methods. All algorithms are coded in C without using any function from an optimization toolbox; hence the results are not the best possible.

Table 1: Results for Algorithm A on Problem 1 (ν = 3).

    n      P     Iteration number    Iteration number of QP_l subproblem    Running times
    6      2     5                   52                                     0.5
    12     4     10                  66                                     0.5
    24     8     15                  78                                     1
    48     16    9                   46                                     2
    96     32    6                   50                                     9
    192    64    7                   24                                     13

In Table 1, n denotes the dimension of the test problem and ν denotes the number of variables in one block. The number of iterations of the QP_l subproblem decreases as more blocks are used on the high-dimensional problems, which shows that the algorithm is reliable.
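For reference, a sketch of Problem (52) in code: the objective couples the blocks through pairwise inner products, while each block carries a single separable constraint, which is what makes the problem a natural PVD test case. This re-statement is illustrative only and is not the paper's C/CUDA implementation.

```python
import numpy as np

def problem52(x, p, nu):
    """Objective and constraints of test problem (52):
       f(x) = sum over i < j of <x_i, x_j> on blocks of size nu,
       c_i(x) = ||x_i||^2 - 1 = 0 for each block i = 1, ..., p."""
    X = x.reshape(p, nu)                             # row i is block x_i
    s = X.sum(axis=0)
    f = 0.5 * (s @ s - np.einsum("ij,ij->", X, X))   # identity: sum_{i<j} x_i . x_j
    c = np.einsum("ij,ij->i", X, X) - 1.0            # one equality constraint per block
    return f, c

# Example matching the first row of Table 1: n = 6, nu = 3, p = 2.
f_val, c_val = problem52(np.ones(6), p=2, nu=3)
```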
5. Conclusion

The PVD algorithm proposed in 1994 is used to solve unconstrained optimization problems or the special case of convex block-separable constraints. In 1997, M. V. Solodov proposed useful generalizations that consist, for the general unconstrained case, of replacing the exact global solution of the subproblems by a certain natural sufficient descent condition and, for the convex case, of inexact subproblem solution in the PVD algorithm. In 1998, M. V. Solodov proposed a PVD approach which applies directly to problems with general convex constraints and showed that the algorithm converges. In 2002, C. A. Sagastizábal et al. proposed two variants of PVD for the constrained case: one without assuming convexity of constraints but assuming block-separable structure, and one for inseparable constraints but assuming convexity. In this paper, we propose a modification of the algorithm of [1] for general constraints with block-separable structure. In a word, all of the algorithms above require a special structure of the objective function or constraints. In the future, we will study parallel algorithms with general inseparable constraints.
Acknowledgments
This work was supported by the National Natural Science Foundation of China (11101420, 71271204) and other foundations (2010BSE06047). The authors would like to thank Dr. Jiang Zhipeng for his valuable work on the numerical experiments.
References

[1] C. A. Sagastizábal and M. V. Solodov, “Parallel variable distribution for constrained optimization,” Computational Optimization and Applications, vol. 22, no. 1, pp. 111–131, 2002.
[2] M. C. Ferris and O. L. Mangasarian, “Parallel constraint distribution,” SIAM Journal on Optimization, vol. 1, no. 1, pp. 487–500, 1994.
[3] D. P. Bertsekas and J. N. Tsitsiklis, Parallel and Distributed Computation, Prentice Hall, Englewood Cliffs, NJ, USA, 1989.
[4] P. Tseng, “Dual coordinate ascent methods for non-strictly convex minimization,” Mathematical Programming, vol. 59, no. 2, pp. 231–247, 1993.
[5] M. V. Solodov, “New inexact parallel variable distribution algorithms,” Computational Optimization and Applications, vol. 7, no. 2, pp. 165–182, 1997.
[6] M. V. Solodov, “On the convergence of constrained parallel variable distribution algorithms,” SIAM Journal on Optimization, vol. 8, no. 1, pp. 187–196, 1998.
[7] F. Zheng, C. Han, and Y. Wang, “Parallel SSLE algorithm for large scale constrained optimization,” Applied Mathematics and Computation, vol. 217, no. 12, pp. 5377–5384, 2011.
[8] C. Han, Y. Wang, and G. He, “On the convergence of asynchronous parallel algorithm for large-scale linearly constrained minimization problem,” Applied Mathematics and Computation, vol. 211, no. 2, pp. 434–441, 2009.
[9] M. Fukushima, “Parallel variable transformation in unconstrained optimization,” SIAM Journal on Optimization, vol. 8, no. 3, pp. 658–672, 1998.
[10] J. Mo, K. Zhang, and Z. Wei, “A variant of SQP method for inequality constrained optimization and its global convergence,” Journal of Computational and Applied Mathematics, vol. 197, no. 1, pp. 270–281, 2006.
[11] R. Fletcher and S. Leyffer, “Nonlinear programming without a penalty function,” Mathematical Programming, vol. 91, pp. 239–269, 2002.
[12] L. Grippo, F. Lampariello, and S. Lucidi, “A nonmonotone line search technique for Newton’s method,” SIAM Journal on Numerical Analysis, vol. 23, no. 4, pp. 707–716, 1986.
[13] L. Grippo, F. Lampariello, and S. Lucidi, “A truncated Newton method with nonmonotone line search for unconstrained optimization,” Journal of Optimization Theory and Applications, vol. 60, no. 3, pp. 401–419, 1989.
[14] G. Liu, J. Han, and D. Sun, “Global convergence of the BFGS algorithm with nonmonotone linesearch,” Optimization, vol. 34, no. 2, pp. 147–159, 1995.
[15] W. Sun, J. Han, and J. Sun, “Global convergence of nonmonotone descent methods for unconstrained optimization problems,” Journal of Computational and Applied Mathematics, vol. 146, no. 1, pp. 89–98, 2002.
[16] P. L. Toint, “An assessment of nonmonotone linesearch techniques for unconstrained optimization,” SIAM Journal on Scientific Computing, vol. 17, no. 3, pp. 725–739, 1996.
[17] K. Su and Z. Yu, “A modified SQP method with nonmonotone technique and its global convergence,” Computers & Mathematics with Applications, vol. 57, no. 2, pp. 240–247, 2009.
[18] Y.-X. Yuan and W. Sun, Optimization Theory and Methods, Springer, New York, NY, USA, 2006.
[19] C. T. Lawrence and A. L. Tits, “A computationally efficient feasible sequential quadratic programming algorithm,” SIAM Journal on Optimization, vol. 11, no. 4, pp. 1092–1118, 2001.
[20] I. Buck, Parallel Programming with CUDA, 2008, http://gpgpu.org/sc2008.