Hindawi Publishing Corporation
Abstract and Applied Analysis
Volume 2013, Article ID 697474, 7 pages
http://dx.doi.org/10.1155/2013/697474

Research Article
An Approximate Quasi-Newton Bundle-Type Method for Nonsmooth Optimization

Jie Shen,1 Li-Ping Pang,2 and Dan Li2
1 School of Mathematics, Liaoning Normal University, Dalian 116029, China
2 School of Mathematical Sciences, Dalian University of Technology, Dalian 116024, China

Correspondence should be addressed to Jie Shen; tt010725@163.com

Received 22 January 2013; Revised 31 March 2013; Accepted 1 April 2013

Academic Editor: Gue Lee

Copyright © 2013 Jie Shen et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

An implementable algorithm for solving a nonsmooth convex optimization problem is proposed by combining Moreau-Yosida regularization with bundle and quasi-Newton ideas. In contrast with the quasi-Newton bundle-type methods of Mifflin et al. (1998), we only assume that the values of the objective function and its subgradients can be evaluated approximately, which makes the method easier to implement. Under some reasonable assumptions, the proposed method is shown to have a Q-superlinear rate of convergence.

1. Introduction

In this paper we are concerned with the unconstrained minimization of a real-valued convex function $f : R^n \to R$, namely,

  $\min\; f(x)$  s.t. $x \in R^n$,  (1)

where in general $f$ is nondifferentiable. A number of attempts have been made to obtain convergent algorithms for solving (1). Fukushima and Qi [1] propose an algorithm for solving (1) under semismoothness and regularity assumptions; their algorithm is shown to have a Q-superlinear rate of convergence. An implementable BFGS method for general nonsmooth problems is presented by Rauf and Fukushima [2], and global convergence is obtained under the assumption of strong convexity. A superlinearly convergent method for (1) is proposed by Qi and Chen [3], but it requires a semismoothness condition. He [4] obtains a globally convergent algorithm for convex constrained minimization problems under certain regularity and uniform continuity assumptions.

Among methods for nonsmooth optimization problems, some have a superlinear rate of convergence; see, for instance, Mifflin and Sagastizábal [5] and Lemaréchal et al. [6]. They propose two conceptual algorithms with superlinear convergence for minimizing a class of convex functions; the latter demands that the objective function $f$ be differentiable along a certain subspace $U$ (the subspace along which $\partial f(p)$ has zero breadth at a given point $p$), but sometimes it is difficult to decompose the space. Besides the methods mentioned above, there is the quasi-Newton bundle-type method proposed by Mifflin et al. [7]; it has a superlinear rate of convergence, but the exact values of the objective function and its subgradients are required. In this paper, we present an implementable algorithm by using bundle and quasi-Newton ideas together with Moreau-Yosida regularization, and the proposed algorithm can be shown to have a superlinear rate of convergence. An obvious advantage of the proposed algorithm lies in the fact that we only need approximate values of the objective function and its subgradients.
It is well known that (1) can be solved by means of the Moreau-Yosida regularization $F : R^n \to R$ of $f$, which is defined by

  $F(x) = \min_{z \in R^n} \{ f(z) + (2\lambda)^{-1} \|z - x\|^2 \}$,  (2)

where $\lambda$ is a fixed positive parameter and $\|\cdot\|$ denotes the Euclidean norm or its induced matrix norm on $R^{n \times n}$. The problem of minimizing $F(x)$, that is,

  $\min\; F(x)$  s.t. $x \in R^n$,  (3)

is equivalent to (1) in the sense that $x \in R^n$ solves (1) if and only if it solves (3); see Hiriart-Urruty and Lemaréchal [8]. Problem (3) has the remarkable feature that the objective function $F$ is a differentiable convex function, even though $f$ is nondifferentiable. Moreover, $F$ has the Lipschitz continuous gradient

  $G(x) = \lambda^{-1}(x - p(x)) \in \partial f(p(x))$,  (4)

where $p(x)$ is the unique minimizer of (2) and $\partial f$ is the subdifferential mapping of $f$. Hence, by Rademacher's theorem, $G$ is differentiable almost everywhere and the set

  $\partial_B G(x) = \{ D \in R^{n \times n} \mid D = \lim_{x^i \to x} \nabla G(x^i), \text{ where } G \text{ is differentiable at } x^i \}$  (5)

is nonempty and bounded for each $x$. We say $G$ is BD-regular at $x$ if all matrices $D \in \partial_B G(x)$ are nonsingular. It is reasonable to pay more attention to problem (3) since $F$ has such good properties. However, because the Moreau-Yosida regularization itself is defined through a minimization problem involving $f$, the exact values of $F$ and its gradient $G$ at an arbitrary point $x$ are difficult or even impossible to compute in general. Therefore, we attempt to explore the possibility of utilizing approximations of these values.

Several attempts have been made to combine the quasi-Newton idea with Moreau-Yosida regularization to solve (1). For related works on this subject, see Chen and Fukushima [9] and Mifflin [10]. In particular, Mifflin et al. [7] consider using bundle ideas to approximate $f$ linearly in order to approximate $F$, in which the exact values of $f$ and one of its subgradients $g$ at certain points are needed. In this paper we assume that, for given $x \in R^n$ and $\epsilon \ge 0$, we can find some $\tilde f \in R$ and $g_a(x, \epsilon) \in R^n$ such that

  $f(x) \ge \tilde f \ge f(x) - \epsilon$,
  $f(z) \ge \tilde f + \langle g_a(x, \epsilon), z - x \rangle$, $\forall z \in R^n$,  (6)

which means that $g_a(x, \epsilon) \in \partial_\epsilon f(x)$. This setting is realistic in many applications; see Kiwiel [11]. Let us see some examples. Assume that $f$ is strongly convex with modulus $\mu > 0$, that is,

  $f(z) \ge f(x) + g(x)^T(z - x) + \frac{\mu}{2}\|z - x\|^2$, $\forall z, x \in R^n$, $g(x) \in \partial f(x)$,  (7)

and that $f(x) = w(v(x))$ with $v : R^n \to R^m$ continuously differentiable and $w : R^m \to R$ convex. By the chain rule we have $\partial f(x) = \{ \sum_{i=1}^m \pi_i \nabla v_i(x) \mid \pi = (\pi_1, \pi_2, \ldots, \pi_m) \in \partial w(v(x)) \}$. Now assume that we have an approximation $\nabla_h v(x)$ of $\nabla v(x)$ such that $\|\nabla_h v(x) - \nabla v(x)\| \le \rho(h)$, $h > 0$. Such an approximation may be obtained by using finite differences; in this case, typically $\rho(h) \to 0$ as $h \to 0$. Let $g_h(x) = \sum_{i=1}^m \pi_i \nabla_h v_i(x)$, $\pi \in \partial w(v(x))$. Then we have

  $f(x) + g_h(x)^T(z - x) \le f(x) + g(x)^T(z - x) + \|\pi\|\,\|\nabla_h v(x) - \nabla v(x)\|\,\|z - x\|$
    $\le f(z) - \frac{\mu}{2}\|z - x\|^2 + \rho(h)\|\pi\|\,\|z - x\|$  (8)

for all $x, z \in R^n$ and $g(x) = \sum_{i=1}^m \pi_i \nabla v_i(x) \in \partial f(x)$. Some simple manipulations show that

  $-\frac{\mu}{2}\|z - x\|^2 + \rho(h)\|\pi\|\,\|z - x\| \le \frac{1}{2\mu}\|\pi\|^2 \rho(h)^2 =: \epsilon_h$, $\forall x, z \in R^n$.  (9)

Since the bound $\epsilon_h$ depends on $x$ through $\pi \in \partial w(v(x))$, we obtain

  $f(x) + g_h(x)^T(z - x) \le f(z) + \epsilon_h$, $\forall z \in R^n$.  (10)

From the local boundedness of $\partial w(v(x))$, we infer that $\epsilon_h > 0$ is locally bounded. Thus $g_h(x)$ is an $\epsilon_h$-subgradient of $f$ at $x$; see Hintermüller [12].
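As a small numerical illustration of the construction above, the following Python sketch builds $g_h(x)$ from forward-difference gradients of $v$ and checks inequality (10) empirically; the particular $w$, $v$, sample points, and step size are example choices of ours and are not data from the paper. The observed violation is of the order of $\epsilon_h$.

```python
import numpy as np

# Illustration only: f(x) = w(v(x)) with w(y) = max_i y_i and convex quadratic v_i,
# so f is convex; g_h(x) is built from forward-difference gradients of v.

def v(x):
    return np.array([x[0]**2 + x[1]**2,
                     (x[0] - 1.0)**2 + 0.5 * (x[1] + 1.0)**2,
                     0.5 * x[0]**2 + 2.0 * x[1]**2 + x[0]])

def f(x):
    return np.max(v(x))                       # f = w(v(x)) with w = max

def jac_v_fd(x, h):
    # forward-difference Jacobian of v; the per-entry error plays the role of rho(h)
    m, n = v(x).size, x.size
    J = np.zeros((m, n))
    for j in range(n):
        e = np.zeros(n); e[j] = h
        J[:, j] = (v(x + e) - v(x)) / h
    return J

def g_h(x, h):
    # pi = e_{i*} in the subdifferential of w at v(x), i* a maximizing index,
    # so g_h(x) = sum_i pi_i * (finite-difference gradient of v_i at x)
    i_star = int(np.argmax(v(x)))
    return jac_v_fd(x, h)[i_star]

rng = np.random.default_rng(0)
x, h = np.array([0.3, -0.7]), 1e-4
g = g_h(x, h)
# largest violation of f(x) + g^T (z - x) <= f(z) over sampled z; it should be of order eps_h
violation = max(f(x) + g @ (z - x) - f(z) for z in rng.normal(size=(1000, 2)))
print("largest violation of (10):", violation)
```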
As for approximate function values, if $f$ is a max-type function of the form

  $f(x) = \sup \{ f_u(x) \mid u \in U \}$, $\forall x \in R^n$,  (11)

where each $f_u : R^n \to R$ is convex and $U$ is an infinite set, then it may be impossible to calculate $f(x)$ exactly. However, for any positive $\epsilon$ one can usually find in finite time an $\epsilon$-solution to the maximization problem (11), that is, an element $u_\epsilon \in U$ satisfying $f_{u_\epsilon}(x) \ge f(x) - \epsilon$. Then one may set $f_\epsilon(x) = f_{u_\epsilon}(x)$. On the other hand, in some applications, calculating $u_\epsilon$ for a prescribed $\epsilon \ge 0$ may require much less work than computing $u_0$. This is, for instance, the case when the maximization problem (11) involves solving a linear or discrete programming problem by the methods of Gabasov and Kirillova [13].

Some authors have tried to solve (1) by assuming that the values of the objective function and its subgradients can only be computed approximately. For example, Solodov [14] considers the proximal form of a bundle algorithm for (1), assuming the values of the function and its subgradients are evaluated approximately, and it is shown how these approximations should be controlled in order to satisfy the desired optimality tolerance. Kiwiel [15] proposes an algorithm for (1) that utilizes approximate evaluations of the objective function and its subgradients; global convergence of the method is obtained. Kiwiel [11] introduces another method for (1); it requires only approximate evaluations of $f$ and its $\epsilon$-subgradients, and this method converges globally. It is evident that bundle methods with superlinear convergence for solving (1) by using approximate values of the objective and its subgradients are seldom obtained. Compared with the methods mentioned above, the method proposed in this paper is not only implementable but also has a superlinear rate of convergence under some additional assumptions, and it should be noted that we only use approximate values of the objective function and its subgradients, which makes the algorithm easier to implement.

Some notations used in presenting the algorithm are listed below.

(i) $\partial f(x) = \{ g \in R^n \mid f(z) \ge f(x) + g^T(z - x), \forall z \in R^n \}$, the subdifferential of $f$ at $x$; each such $g$ is called a subgradient of $f$ at $x$.

(ii) $\partial_\epsilon f(x) = \{ g \in R^n \mid f(z) \ge f(x) + g^T(z - x) - \epsilon, \forall z \in R^n \}$, the $\epsilon$-subdifferential of $f$ at $x$; each such $g$ is called an $\epsilon$-subgradient of $f$ at $x$.

(iii) $p(x) = \arg\min_{z \in R^n} \{ f(z) + (2\lambda)^{-1}\|z - x\|^2 \}$, the unique minimizer of (2).

(iv) $G(x) = \lambda^{-1}(x - p(x))$, the gradient of $F$ at $x$.

This paper is organized as follows: in Section 2, to approximate the unique minimizer $p(x)$ of (2), we introduce the bundle idea, which uses approximate values of the objective function and its subgradients. The approximate quasi-Newton bundle-type algorithm is presented in Section 3. In the last section, we prove the global convergence and, under additional assumptions, the Q-superlinear convergence of the proposed algorithm.

2. The Approximation of p(x)

Let $x = x^k$ and $s = z - x^k$, where $x^k$ is the current iterate point of the AQNBT algorithm presented in Section 3. Then the regularization (2) takes the form

  $F(x^k) = \min_{s \in R^n} \{ f(x^k + s) + (2\lambda)^{-1}\|s\|^2 \}$.  (12)

Now we consider approximating $f(x^k + s)$ by using the bundle idea. Suppose we have a bundle $J_k$ generated sequentially, starting from $x^k$ and possibly a subset of the previous set used to generate $x^k$. The bundle includes the data $(z_j, \tilde f_j, g_a(z_j, \epsilon_j))$, $j \in J_k$, where $z_j \in R^n$, $\tilde f_j \in R$, and $g_a(z_j, \epsilon_j) \in R^n$ satisfy

  $f(z_j) \ge \tilde f_j \ge f(z_j) - \epsilon_j$,
  $f(z) \ge \tilde f_j + \langle g_a(z_j, \epsilon_j), z - z_j \rangle$, $\forall z \in R^n$.  (13)

Suppose that the elements in $J_k$ can be arranged according to the order of their entering the bundle; without loss of generality we may suppose $J_k = \{1, \ldots, l\}$. The parameter $\epsilon_j$ is updated by the rule $\epsilon_{j+1} = \gamma \epsilon_j$, $0 < \gamma < 1$, $j \in J_k$. Condition (13) means that $g_a(z_j, \epsilon_j) \in \partial_{\epsilon_j} f(z_j)$, $j \in J_k$.
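To fix ideas, the following sketch shows one way an approximate oracle of the type required in (6) and (13) might look for a max-type function (11). The particular family $f_u$, the grid search over $u$, and the function name `approx_oracle` are illustrative assumptions of ours, not constructions from the paper; the point is only that the returned pair certifies both inequalities in (6).

```python
import numpy as np

# Hypothetical oracle in the spirit of (6)/(13): f(x) = sup_{u in [0,1]} f_u(x) with
# f_u(x) = u*x[0] + sin(pi*u)*x[1] (affine in x for each u, hence f is convex).
# A grid search over u gives a certified eps-solution: f(x) >= f_tilde >= f(x) - eps,
# and a subgradient of the selected affine piece satisfies f(z) >= f_tilde + <g_a, z - x>.

def approx_oracle(x, eps):
    # |d/du f_u(x)| <= |x[0]| + pi*|x[1]| =: L, so a grid of spacing d misses at most L*d/2
    L = abs(x[0]) + np.pi * abs(x[1]) + 1e-12
    n_grid = int(np.ceil(L / (2.0 * eps))) + 2
    us = np.linspace(0.0, 1.0, n_grid)
    vals = us * x[0] + np.sin(np.pi * us) * x[1]
    i = int(np.argmax(vals))
    u_eps = us[i]
    f_tilde = vals[i]                               # f(x) >= f_tilde >= f(x) - eps
    g_a = np.array([u_eps, np.sin(np.pi * u_eps)])  # exact subgradient of f_{u_eps} at x
    return f_tilde, g_a

f_tilde, g_a = approx_oracle(np.array([1.0, -0.5]), eps=1e-3)
print(f_tilde, g_a)
```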
By using the data in the bundle we construct a polyhedral function $\varphi_k(x^k + s)$ defined by

  $\varphi_k(x^k + s) = \max_{j \in J_k} \{ \tilde f_j + g_a(z_j, \epsilon_j)^T (x^k + s - z_j) \}$.  (14)

Obviously $\varphi_k(x^k + s)$ is a lower approximation of $f(x^k + s)$, so $\varphi_k(x^k + s) \le f(x^k + s)$. We define the linearization errors

  $\alpha(x^k, z_j, \epsilon_j) = \tilde f_{x^k} - \tilde f_j - g_a(z_j, \epsilon_j)^T(x^k - z_j)$, $j \in J_k$,  (15)

where $\tilde f_{x^k} \in R$ satisfies

  $f(x^k) \ge \tilde f_{x^k} \ge f(x^k) - \epsilon_{x^k}$  (16)

for a given $\epsilon_{x^k} \ge 0$. Then $\varphi_k(x^k + s)$ can be written as

  $\varphi_k(x^k + s) = \tilde f_{x^k} + \max_{j \in J_k} \{ g_a(z_j, \epsilon_j)^T s - \alpha(x^k, z_j, \epsilon_j) \}$.  (17)

Let

  $F_k^L(x^k) = \min_{s \in R^n} \{ \varphi_k(x^k + s) + (2\lambda)^{-1}\|s\|^2 \}$
    $= \tilde f_{x^k} + \min_{s \in R^n} \{ \max_{j \in J_k} \{ g_a(z_j, \epsilon_j)^T s - \alpha(x^k, z_j, \epsilon_j) \} + (2\lambda)^{-1} s^T s \}$.  (18)

The problem (18) can be dealt with by solving the following quadratic program:

  $\min\; v + (2\lambda)^{-1} s^T s$
  s.t. $g_a(z_j, \epsilon_j)^T s - \alpha(x^k, z_j, \epsilon_j) \le v$, $\forall j \in J_k$.  (19)

As the iterations go along, the number of elements in the bundle $J_k$ increases. When the size of the bundle becomes too big, it may cause serious computational difficulties in the form of unbounded storage requirements. To overcome these difficulties, it is necessary to compress the bundle and clean the model. Wolfe [16] and Lemaréchal [17] first introduced the aggregation strategy, which requires storing only a limited number of subgradients; see also Kiwiel and Mifflin [18–20]. The aggregation strategy is a synthesis mechanism that condenses the essential information of the bundle into one single couple $(\hat g_k, \hat\alpha_k)$ (defined below). The corresponding affine function, inserted into the model when there is compression, is called the aggregate linearization (defined below). This function summarizes all the information generated up to iteration $k$. Suppose $J_{\max}$ is an upper bound on the number of elements in $J_k$, $k = 1, 2, \ldots$. If $|J_k|$ reaches the prescribed $J_{\max}$, two or more of its elements are deleted from the bundle $J_k$; that is, two or more linear pieces in the constraints of (19) are discarded (notice that different selections of the discarded linear pieces may result in different speeds of convergence), and the aggregate linearization, associated with the aggregate $\epsilon$-subgradient and linearization error, is introduced into the bundle. Define the aggregate linearization as

  $\hat\varphi_k(x^k + s) = \tilde f_{x^k} + \langle \hat g_k, s \rangle - \hat\alpha_k$,  (20)

where $\hat g_k = \sum_{j \in J_k} \mu_j g_a(z_j, \epsilon_j)$ and $\hat\alpha_k = \sum_{j \in J_k} \mu_j \alpha(x^k, z_j, \epsilon_j)$, and the multiplier vector $\mu = (\mu_j)_{j \in J_k}$ is an optimal solution of the dual problem of (19); see Solodov [14]. By doing so, the surrogate aggregate linearization retains the information of the deleted linear pieces, and at the same time problem (19) remains manageable since the number of elements in $J_k$ is limited.

Suppose $s(x^k)$ solves problem (19), let $p_k(x^k) = x^k + s(x^k)$ be an approximation of $p(x^k)$, and set $\epsilon_{p_k(x^k)} = \epsilon_{l+1} = \gamma \epsilon_l$.
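For illustration, subproblem (19) can equivalently be attacked through its dual over the unit simplex, whose optimal multipliers are exactly the weights appearing in the aggregate pair $(\hat g_k, \hat\alpha_k)$ of (20). The sketch below is only one possible rendering; the SLSQP solver and the toy bundle data are our choices, not the paper's.

```python
import numpy as np
from scipy.optimize import minimize

# Dual view of (19) (illustrative):
#   min_{mu in unit simplex}  (lam/2) * || sum_j mu_j g_j ||^2 + sum_j mu_j alpha_j,
# whose solution yields s(x^k) = -lam * g_hat and the aggregate pair (g_hat, alpha_hat) of (20).

def solve_dual_and_aggregate(G, alpha, lam):
    m = len(alpha)
    def dual(mu):
        g = G.T @ mu
        return 0.5 * lam * (g @ g) + alpha @ mu
    res = minimize(dual, np.full(m, 1.0 / m), method="SLSQP",
                   bounds=[(0.0, 1.0)] * m,
                   constraints=[{"type": "eq", "fun": lambda mu: mu.sum() - 1.0}])
    mu = res.x
    g_hat = G.T @ mu                       # aggregate eps-subgradient
    alpha_hat = float(alpha @ mu)          # aggregate linearization error
    s = -lam * g_hat                       # s(x^k), the minimizer in (18)/(19)
    return s, g_hat, alpha_hat, mu

G = np.array([[1.0, 2.0], [-1.0, 0.5], [0.0, -1.0]])   # rows: bundle subgradients g_a(z_j, eps_j)
alpha = np.array([0.0, 0.3, 0.1])                      # linearization errors alpha(x^k, z_j, eps_j)
s, g_hat, alpha_hat, mu = solve_dual_and_aggregate(G, alpha, lam=1.0)
print("s(x^k) =", s, " g_hat =", g_hat, " alpha_hat =", alpha_hat)
```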
Let

  $F_k^U(x^k) = \tilde f^{p_k(x^k)} + \epsilon_{p_k(x^k)} + (2\lambda)^{-1} s(x^k)^T s(x^k)$,  (21)

where $\tilde f^{p_k(x^k)} \in R$ is chosen to satisfy

  $f(p_k(x^k)) \ge \tilde f^{p_k(x^k)} \ge f(p_k(x^k)) - \epsilon_{p_k(x^k)}$.  (22)

The results stated below are fundamental and useful in the subsequent discussions.

(P1) $F_k^L(x^k) \le F(x^k) \le F_k^U(x^k)$.

(P2) $F_k^U(x^k) = F(x^k)$ if and only if $p_k(x^k) = p(x^k)$ and $\tilde f^{p_k(x^k)} + \epsilon_{p_k(x^k)} = f(p(x^k))$.

Note that $p(x^k)$ is the unique minimizer of (2); (P1) and (P2) follow from the definitions of $F_k^L(x^k)$, $F_k^U(x^k)$, and $F(x^k)$.

(P3) (i) If we define $F_k^e(x^k) = \min_{s \in R^n} \{ \max_{j \in J_k} \{ f(z_j) + g(z_j)^T(x^k + s - z_j) \} + (2\lambda)^{-1} s^T s \}$, where $g(z_j) \in \partial f(z_j)$, then $F_k^L(x^k) \to F_k^e(x^k)$ as new points $z_{l+1} = x^k + s(x^k)$ are appended to the bundle $J_k$ infinitely. Indeed, because $\epsilon_j \to 0$ by the update rule $\epsilon_{j+1} = \gamma \epsilon_j$, $0 < \gamma < 1$, we have $g_a(z_j, \epsilon_j) \to g(z_j)$ and $\tilde f_j \to f(z_j)$; thus $\varphi_k(x^k + s) \to \max_{j \in J_k} \{ f(z_j) + g(z_j)^T(x^k + s - z_j) \}$, and so $F_k^L(x^k) \to F_k^e(x^k)$.
(ii) Let $\epsilon = \max_{j \in J_k}\{\epsilon_j\}$. If $g_a(z_j, \epsilon_j) = g(z_j) \in \partial f(z_j)$, then $F_k^L(x^k) \ge F_k^e(x^k) - \epsilon$. Indeed, it is easy to see that $\varphi_k(x^k + s) = \max_{j \in J_k} \{\tilde f_j + g_a(z_j, \epsilon_j)^T(x^k + s - z_j)\} \ge \max_{j \in J_k} \{f(z_j) + g_a(z_j, \epsilon_j)^T(x^k + s - z_j) - \epsilon_j\} \ge \max_{j \in J_k} \{f(z_j) + g(z_j)^T(x^k + s - z_j)\} - \epsilon$; therefore $F_k^L(x^k) = \min_{s \in R^n} \{\varphi_k(x^k + s) + (2\lambda)^{-1}\|s\|^2\} \ge F_k^e(x^k) - \epsilon$.

Let

  $\delta(x^k) = F_k^U(x^k) - F_k^L(x^k)$.  (23)

We accept $p_k(x^k)$ as an approximation of $p(x^k)$ based on the following rule:

  $\delta(x^k) < \epsilon(x^k) \min \{ \lambda^{-2} s(x^k)^T s(x^k), \bar\delta \}$,  (24)

where $\epsilon(x^k)$ and $\bar\delta$ are given positive numbers and $\epsilon(x^k)$ is fixed during one bundling process; that is, $\epsilon(x^k)$ depends on $x^k$ (in the algorithm of Section 3 it is taken from the sequence $\{\eta_k\}$; see Step 1). If (24) is not satisfied, we let $z_{l+1} = x^k + s(x^k)$ and $\epsilon_{l+1} = \gamma \epsilon_l$, $0 < \gamma < 1$, take $\tilde f_{l+1} = \tilde f^{p_k(x^k)}$ and $g_a(z_{l+1}, \epsilon_{l+1}) \in R^n$ satisfying

  $f(z_{l+1}) \ge \tilde f_{l+1} \ge f(z_{l+1}) - \epsilon_{l+1}$,
  $f(z) \ge \tilde f_{l+1} + \langle g_a(z_{l+1}, \epsilon_{l+1}), z - z_{l+1} \rangle$, $\forall z \in R^n$,  (25)

append the new piece $\tilde f_{l+1} + g_a(z_{l+1}, \epsilon_{l+1})^T(x^k + s - z_{l+1})$ to (14), replace $l$ by $l+1$, and solve (19) to find a new $s(x^k)$ and $\delta(x^k)$ to be tested in (24). If this bundling process never terminates, we have the following conclusion.

(P4) Suppose that $x^k$ is not a minimizer of $f$. If (24) is never satisfied, then $\delta(x^k) \to 0$ as new points $z_{l+1}$ are appended to the bundle $J_k$ infinitely.

Suppose that $|J_k| = |\{1, 2, \ldots, l\}| = l < J_{\max}$. Define the functions $\psi$ and $\psi_{l+1}$, $l = 1, 2, \ldots$, by

  $\psi(z) = f(z) + (2\lambda)^{-1}\|z - x^k\|^2$,
  $\psi_{l+1}(z) = \max_{j \in J_k = \{1, 2, \ldots, l\}} \{ \tilde f_j + g_a(z_j, \epsilon_j)^T(z - z_j) \} + (2\lambda)^{-1}\|z - x^k\|^2$.  (26)

Let $z_{l+1}$ be the unique minimizer of $\min_{z \in R^n} \psi_{l+1}(z)$, and let $z_{l+2}$ be the unique minimizer of $\min_{z \in R^n} \psi_{l+2}(z)$, where $\psi_{l+2}(z) = \max_{j \in J_{l+1}} \{ \tilde f_j + g_a(z_j, \epsilon_j)^T(z - z_j) \} + (2\lambda)^{-1}\|z - x^k\|^2$. Note that if $|\{1, 2, \ldots, l+1\}| = l + 1 < J_{\max}$, then we let $J_{l+1} = \{1, 2, \ldots, l+1\}$, so $\psi_{l+1}(z_{l+1}) \le \psi_{l+2}(z_{l+2})$; if $|\{1, 2, \ldots, l+1\}| = l + 1 = J_{\max}$, we delete at least two elements from $\{1, 2, \ldots, l+1\}$, say $j_1$ and $j_2$ with $j_1 \ne l+1$ and $j_2 \ne l+1$, and the order of the other elements in $\{1, 2, \ldots, l+1\}$ is left intact. We introduce an additional index $\hat j$, associated with the aggregate $\epsilon$-subgradient and linearization error, into $J_{l+1}$ and let $J_{l+1} = \{1, 2, \ldots, j_1 - 1, j_1 + 1, \ldots, j_2 - 1, j_2 + 1, \ldots, \hat j, l + 1\}$, so that $|J_{l+1}| < J_{\max}$. By adjusting $\lambda$ appropriately, we can make sure that $z_{l+1}$ and $z_{l+2}$ are not far away from $x^k$. According to the proof of Proposition 3 in Fukushima [21], we find that $\psi(z_l)$ has a limit, say $\psi^*$, and $\psi_{l+1}(z_{l+1})$ also converges to $\psi^*$ as $l \to \infty$. By the definitions of $F(x^k)$, $F_k^L(x^k)$, and $F_k^U(x^k)$, we then have $F_k^L(x^k) \to F(x^k)$ and $F_k^U(x^k) \to F(x^k)$ as $l \to \infty$, so $\delta(x^k) \to 0$ as $l \to \infty$.
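The whole inner bundling process, including the acceptance test (24), can be summarized by the following sketch. For brevity it uses a toy piecewise-linear $f$, feeds the bundle exact values and subgradients (which are admissible data for (13) with any $\epsilon_j \ge 0$), drops the $\epsilon$-term of (21), and relies on a general-purpose solver for (19); it is a schematic rendering of the subalgorithm under these simplifications, not the implementation analyzed in the paper.

```python
import numpy as np
from scipy.optimize import minimize

# Toy convex piecewise-linear objective: f(x) = max_j (A[j] @ x + b[j]).
A = np.array([[1.0, 2.0], [-1.0, 0.5], [0.0, -1.0]])
b = np.array([0.0, 1.0, 0.5])
f = lambda x: np.max(A @ x + b)
subgrad = lambda x: A[int(np.argmax(A @ x + b))]

def bundle_step(xk, lam=1.0, eps_xk=1e-3, delta_bar=1.0, max_iter=50):
    # eps_xk plays the role of eps(x^k) in (24); delta_bar is the cap in (24)
    zs = [xk]                                     # bundle points z_j
    f_xk = f(xk)
    for _ in range(max_iter):
        Gm = np.array([subgrad(z) for z in zs])
        alpha = np.array([f_xk - f(z) - Gm[j] @ (xk - z) for j, z in enumerate(zs)])
        # subproblem (19): min v + (1/(2 lam)) s's  s.t.  Gm[j] @ s - alpha[j] <= v
        cons = [{"type": "ineq", "fun": lambda y, j=j: y[0] - Gm[j] @ y[1:] + alpha[j]}
                for j in range(len(zs))]
        y = minimize(lambda y: y[0] + y[1:] @ y[1:] / (2 * lam),
                     np.zeros(xk.size + 1), method="SLSQP", constraints=cons).x
        v, s = y[0], y[1:]
        pk = xk + s                               # candidate p_k(x^k)
        F_L = f_xk + v + s @ s / (2 * lam)        # lower estimate, as in (18)
        F_U = f(pk) + s @ s / (2 * lam)           # upper estimate, (21) with exact value
        delta = F_U - F_L                         # gap, as in (23)
        if delta < eps_xk * min(s @ s / lam**2, delta_bar):   # acceptance test (24)
            return pk, -s / lam                   # p_k(x^k) and G_a(x^k), as in (27)
        zs.append(pk)                             # otherwise enrich the bundle and repeat
    return pk, -s / lam

pk, Ga = bundle_step(np.array([2.0, -1.0]))
print(pk, Ga)
```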
In the next part we give the definition of $G_a(x^k)$, the approximation of $G(x^k)$,

  $G_a(x^k) = \lambda^{-1}(x^k - p_k(x^k)) = -\lambda^{-1} s(x^k)$,  (27)

and discuss some of its properties. It is easy to see that the accuracy of the approximation of $G(x^k)$ is governed by $\delta(x^k)$:

(P5) $\|G(x^k) - G_a(x^k)\| = \|\lambda^{-1}(p(x^k) - p_k(x^k))\| \le \sqrt{2\delta(x^k)/\lambda}$.

Indeed, by the strong convexity of $\psi(z)$ we have $\psi(p_k(x^k)) \ge \psi(p(x^k)) + (2\lambda)^{-1}\|p(x^k) - p_k(x^k)\|^2$. From the definitions of $F_k^U(x^k)$ and $\psi$, we obtain $F_k^U(x^k) = \tilde f^{p_k(x^k)} + \epsilon_{p_k(x^k)} + (2\lambda)^{-1}\|p_k(x^k) - x^k\|^2 \ge f(p_k(x^k)) + (2\lambda)^{-1}\|p_k(x^k) - x^k\|^2 = \psi(p_k(x^k)) \ge \psi(p(x^k)) + (2\lambda)^{-1}\|p(x^k) - p_k(x^k)\|^2 = F(x^k) + (2\lambda)^{-1}\|p(x^k) - p_k(x^k)\|^2$. Combining this with (P1), (P5) holds.

By (P4) and (P5), we have the following (P6). In fact, (P6) says that the bundle subalgorithm for finding $s(x^k)$ terminates in finitely many steps.

(P6) If $x^k$ does not minimize $f$, then we can find a solution $s(x^k)$ of (18) such that (24) holds.

3. Approximate Quasi-Newton Bundle-Type Algorithm

For presenting the algorithm, we use the following notation: $\delta_k = \delta(x^k)$, $s^k = s(x^k)$, and $\epsilon_k = \epsilon(x^k)$. Given are positive numbers $\delta$, $\nu$, $\gamma$, and $\bar\delta$ such that $0 < \delta < 1$, $0 < \nu < 1$, $0 < \gamma < 1$, and a symmetric $n \times n$ positive definite matrix $M$.

Approximate Quasi-Newton Bundle-Type Algorithm (AQNBT Alg):

Step 1 (initialization). Let $x^1$ be a starting point, and let $B_1$ be an $n \times n$ symmetric positive definite matrix. Let $\epsilon_1$ and $\lambda$ be positive numbers. Choose a sequence of positive numbers $\{\eta_k\}_{k=1}^\infty$ such that $\sum_{k=1}^\infty \eta_k < \infty$. Set $k = 1$. Find $s^1 \in R^n$ and $\delta_1$ such that

  $\delta_1 \le \eta_1 \min \{ \lambda^{-2} (s^1)^T s^1, \bar\delta \}$.  (28)

Let $G_a(x^1) = -\lambda^{-1} s^1$, $z_1 = x^1$, and $l = 1$, where $l$ is the running index of the bundle subalgorithm.

Step 2 (finding a search direction). If $\|G_a(x^k)\| = 0$, stop with $x^k$ optimal. Otherwise compute

  $d^k = -B_k^{-1} G_a(x^k)$.  (29)

Step 3 (line search). Starting from $u = 0$, let $u_k$ be the smallest nonnegative integer $u$ such that

  $F_k^L(x^k + \nu^u d^k) \le F_k^U(x^k) + \delta \nu^u (d^k)^T G_a(x^k)$,  (30)

where $\epsilon_{u+1} = \gamma \epsilon_u$ corresponds to the approximations $F_k^L(x^k + \nu^u d^k)$ and $F_k^U(x^k + \nu^u d^k)$ of $F$ at $x^k + \nu^u d^k$; here $F_k^U(x^k + \nu^u d^k)$ satisfies

  $F_k^U(x^k + \nu^u d^k) - F_k^L(x^k + \nu^u d^k) \le \eta_{k+1} \min \{ \lambda^{-2} s(x^k + \nu^u d^k)^T s(x^k + \nu^u d^k), \bar\delta \}$,  (31)

$s(x^k + \nu^u d^k)$ is the solution of (19) in which $x^k$ is replaced by $x^k + \nu^u d^k$, and the expression of $F_k^U(x^k + \nu^u d^k)$ is similar to (21) with $x^k$ replaced by $x^k + \nu^u d^k$. Set $t_k = \nu^{u_k}$ and $x^{k+1} = x^k + t_k d^k$.

Step 4 (computing the approximate gradient). Compute $G_a(x^{k+1}) = -\lambda^{-1} s^{k+1}$.

Step 5 (updating $B_k$). Let $\Delta x^k = x^{k+1} - x^k$ and $\Delta g^k = G_a(x^{k+1}) - G_a(x^k)$. Set

  $B_{k+1} = \begin{cases} M, & \text{if } (\Delta x^k)^T \Delta g^k \le 0, \\ \text{a symmetric positive definite matrix satisfying } B_{k+1}\Delta x^k = \Delta g^k, & \text{otherwise}. \end{cases}$  (32)

Set $k = k + 1$ and go to Step 2. End of the AQNBT algorithm.
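Step 5 leaves the choice of $B_{k+1}$ open as long as it is symmetric positive definite and satisfies the secant equation $B_{k+1}\Delta x^k = \Delta g^k$ when $(\Delta x^k)^T \Delta g^k > 0$. One standard choice satisfying both requirements is the BFGS update, sketched below; this particular update is our example and is not prescribed by the paper.

```python
import numpy as np

# BFGS update as one concrete realization of Step 5 / (32): it satisfies the secant
# equation B_{k+1} dx = dg and preserves symmetry and positive definiteness when dx'dg > 0.

def update_B(B, dx, dg, M):
    if dx @ dg <= 0.0:
        return M.copy()                      # fall back to the fixed SPD matrix M, as in (32)
    Bdx = B @ dx
    return (B - np.outer(Bdx, Bdx) / (dx @ Bdx)
              + np.outer(dg, dg) / (dg @ dx))

n = 3
B, M = np.eye(n), np.eye(n)
dx = np.array([0.1, -0.2, 0.05])
dg = np.array([0.08, -0.15, 0.1])
B_new = update_B(B, dx, dg, M)
print(np.allclose(B_new @ dx, dg))           # secant equation B_{k+1} dx = dg holds
print(np.all(np.linalg.eigvalsh(B_new) > 0)) # and B_{k+1} stays positive definite
```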
4. Convergence Analysis

In this section we prove the global convergence of the algorithm described in Section 3; furthermore, under the assumptions of semismoothness and regularity, we show that the proposed algorithm has a Q-superlinear convergence rate. Following the proof of Theorem 3 in Mifflin et al. [7], one can show that, at each iteration $k$, $u_k$ is well defined, and hence the stepsize $t_k > 0$ can be determined finitely in Step 3. We assume the proposed algorithm does not terminate in finitely many steps, so the sequence $\{x^k\}_{k=1}^\infty$ is infinite. Since the sequence $\{\eta_k\}_{k=1}^\infty$ satisfies $\sum_{k=1}^\infty \eta_k < \infty$, there exists a constant $\eta$ such that $\sum_{k=1}^\infty \eta_k \le \eta$. Let $D_f = \{ x \in R^n \mid F(x) \le F(x^1) + 2\bar\delta\eta \}$. By making a slight change in the proof of Lemma 1 in Mifflin et al. [7], we have the following lemma.

Lemma 1. $F(x^{k+1}) \le F(x^k) + \bar\delta(\eta_k + \eta_{k+1})$ for all $k \ge 1$, and $x^k \in D_f$.

Theorem 2. Suppose $f$ is bounded below and there exists a constant $\beta$ such that

  $\langle B_k q, q \rangle \ge \beta \|q\|^2$, $\forall q \in R^n$, $\forall k$.  (33)

Then any accumulation point of $\{x^k\}$ is an optimal solution of problem (1).

Proof. According to the first part of the proof of Theorem 3 in Mifflin et al. [7], we have $\lim_{k \to \infty} F(x^k) = F^*$. Since $\eta_k \to 0$, from (P1) we obtain $\delta_k \to 0$ as $k \to \infty$, and $\lim_{k \to \infty} F_k^L(x^k) = \lim_{k \to \infty} F_k^U(x^k) = F^*$. Thus

  $\lim_{k \to \infty} t_k (d^k)^T G_a(x^k) = 0$.  (34)

Let $\bar x$ be an arbitrary accumulation point of $\{x^k\}$, and let $\{x^k\}_{k \in K}$ be a subsequence converging to $\bar x$. By (P5) we have

  $\lim_{k \in K, k \to \infty} G_a(x^k) = G(\bar x)$.  (35)

Since $\{B_k^{-1}\}$ is bounded, we may suppose

  $\lim_{k \in K, k \to \infty} d^k = \bar d$  (36)

for some $\bar d \in R^n$. Moreover, we have

  $\lim_{k \in K, k \to \infty} \langle G_a(x^k), d^k \rangle = \langle G(\bar x), \bar d \rangle \le -\beta \|\bar d\|^2$.  (37)

If $\liminf_{k \to \infty} t_k > 0$, then $\bar d = 0$. Otherwise, if $\liminf_{k \to \infty} t_k = 0$, by taking a further subsequence if necessary we may assume $t_k \to 0$ for $k \in K$. The definition of $u_k$ in the line search rule gives

  $F_k^L(x^k + \nu^{u_k - 1} d^k) > F_k^U(x^k) + \delta \nu^{u_k - 1} (d^k)^T G_a(x^k)$,  (38)

where $\nu^{u_k - 1} = t_k/\nu$. So by (P1) we obtain

  $\dfrac{F(x^k + \nu^{u_k - 1} d^k) - F(x^k)}{\nu^{u_k - 1}} > \delta (d^k)^T G_a(x^k)$.  (39)

By taking the limit in (39) on the subsequence $k \in K$, we have

  $G(\bar x)^T \bar d \ge \delta\, G(\bar x)^T \bar d$.  (40)

In view of (37), the last inequality also gives $\bar d = 0$. Since $G_a(x^k) = -B_k d^k$ and $\{B_k\}$ is bounded, it follows from $\bar d = 0$ that

  $\lim_{k \to \infty, k \in K} G_a(x^k) = G(\bar x) = 0$.  (41)

Therefore, $\bar x$ is an optimal solution of problem (1).

In the next part, we focus our attention on establishing the Q-superlinear convergence of the proposed algorithm.

Theorem 3. Suppose that the conditions of Theorem 2 hold and $\bar x$ is an optimal solution of (1). Assume that $G$ is BD-regular at $\bar x$. Then $\bar x$ is the unique optimal solution of (1) and the entire sequence $\{x^k\}$ converges to $\bar x$.

Proof. By the convexity and BD-regularity of $G$ at $\bar x$, $\bar x$ is the unique optimal solution of (3); for the proof, see Qi and Womersley [22]. So $\bar x$ is also the unique optimal solution of (1). This implies that both $f$ and $F$ must have compact level sets. By Lemma 1, $\{x^k\}$ has at least one accumulation point, and from Theorem 2 we know this accumulation point must be $\bar x$ since $\bar x$ is the unique solution of (1). Following the proof of Theorem 5.1 in Fukushima and Qi [1], we can then prove that the entire sequence $\{x^k\}$ converges to $\bar x$.

The condition that the Lipschitz continuous gradient $G$ of $F$ be semismooth at the unique optimal solution of (1) is required in the next theorem. This condition is satisfied, for instance, if $f$ is the maximum of several affine functions or if $f$ satisfies the constant rank constraint qualification.
Theorem 4. Suppose that the conditions of Theorem 3 hold and $G$ is semismooth at the unique optimal solution $\bar x$ of (1). Suppose further that

(i) $\delta_k = o(\|G(x^k)\|^2)$,
(ii) $\lim_{k \to \infty} \operatorname{dist}(B_k, \partial_B G(x^k)) = 0$,
(iii) $t_k \equiv 1$ for all large $k$.

Then $\{x^k\}$ converges to $\bar x$ Q-superlinearly.

Proof. First, $\{x^k\}$ converges to $\bar x$ by Theorem 3. Then, by condition (i) and (P5), we have

  $\|G_a(x^k) - G(x^k)\| = O(\sqrt{\delta_k}) = o(\|G(x^k)\|) = o(\|x^k - \bar x\|)$.  (42)

By condition (ii), there is a $V_k \in \partial_B G(x^k)$ such that

  $\|B_k - V_k\| = o(1)$.  (43)

Since $G$ is semismooth at $\bar x$, we have, according to Qi and Sun,

  $\|G(x^k) - G(\bar x) - V_k(x^k - \bar x)\| = o(\|x^k - \bar x\|)$.  (44)

Noticing that $\|B_k^{-1}\| = O(1)$ and using (42)–(44) together with condition (iii), for all large $k$ we have

  $\|x^{k+1} - \bar x\| = \|x^k - \bar x - B_k^{-1} G_a(x^k)\|$
    $= \| B_k^{-1} [ G_a(x^k) - G(x^k) + G(x^k) - G(\bar x) - V_k(x^k - \bar x) + (V_k - B_k)(x^k - \bar x) ] \|$
    $\le \|B_k^{-1}\| \, [ \|G_a(x^k) - G(x^k)\| + \|G(x^k) - G(\bar x) - V_k(x^k - \bar x)\| + \|B_k - V_k\| \, \|x^k - \bar x\| ]$
    $= o(\|x^k - \bar x\|)$.  (45)

This establishes the Q-superlinear convergence of $\{x^k\}$ to $\bar x$.

Condition (i) can be replaced by the more realistic condition $\delta_k = o(\|G(x^{k-1})\|^2)$ without impairing the convergence result, since $\eta_k$ is chosen before $x^k$ is generated. For condition (ii), Fukushima and Qi [1] suggest one possible choice of $B_k$; we may expect $B_k$ to provide a reasonable approximation to an element of $\partial_B G(x^k)$, but it may be far from what we should approximate. There are some approaches to overcoming this phenomenon; see Mifflin [10] and Qi and Chen [3]. For condition (iii), one can show that if the conditions of Theorem 4, except (iii), hold and $0 < \delta < 1/2$, then condition (iii) holds automatically.

Acknowledgment

This research was partially supported by the National Natural Science Foundation of China (Grants no. 11171049 and no. 11171138).

References

[1] M. Fukushima and L. Qi, "A globally and superlinearly convergent algorithm for nonsmooth convex minimization," SIAM Journal on Optimization, vol. 6, no. 4, pp. 1106–1120, 1996.
[2] A. I. Rauf and M. Fukushima, "Globally convergent BFGS method for nonsmooth convex optimization," Journal of Optimization Theory and Applications, vol. 104, no. 3, pp. 539–558, 2000.
[3] L. Qi and X. Chen, "A preconditioning proximal Newton method for nondifferentiable convex optimization," Mathematical Programming, vol. 76, no. 3, pp. 411–429, 1997.
[4] Y. R. He, "Minimizing and stationary sequences of convex constrained minimization problems," Journal of Optimization Theory and Applications, vol. 111, no. 1, pp. 137–153, 2001.
[5] R. Mifflin and C. Sagastizábal, "A VU-proximal point algorithm for minimization," in Numerical Optimization, Universitext, Springer, Berlin, Germany, 2002.
[6] C. Lemaréchal, F. Oustry, and C. Sagastizábal, "The U-Lagrangian of a convex function," Transactions of the American Mathematical Society, vol. 352, no. 2, pp. 711–729, 2000.
[7] R. Mifflin, D. Sun, and L. Qi, "Quasi-Newton bundle-type methods for nondifferentiable convex optimization," SIAM Journal on Optimization, vol. 8, no. 2, pp. 583–603, 1998.
[8] J. B. Hiriart-Urruty and C. Lemaréchal, Convex Analysis and Minimization Algorithms, Springer, Berlin, Germany, 1993.
[9] X. Chen and M. Fukushima, "Proximal quasi-Newton methods for nondifferentiable convex optimization," Applied Mathematics Report 95/32, School of Mathematics, The University of New South Wales, Sydney, Australia, 1995.
[10] R. Mifflin, "A quasi-second-order proximal bundle algorithm," Mathematical Programming, vol. 73, no. 1, pp. 51–72, 1996.
[11] K. C. Kiwiel, "Approximations in proximal bundle methods and decomposition of convex programs," Journal of Optimization Theory and Applications, vol. 84, no. 3, pp. 529–548, 1995.
[12] M. Hintermüller, "A proximal bundle method based on approximate subgradients," Computational Optimization and Applications, vol. 20, no. 3, pp. 245–266, 2001.
[13] R. Gabasov and F. M. Kirillova, Methods of Linear Programming, Part 3, Special Problems, Izdatel'stvo BGU, Minsk, Belarus, 1980 (Russian).
[14] M. V. Solodov, "On approximations with finite precision in bundle methods for nonsmooth optimization," Journal of Optimization Theory and Applications, vol. 119, no. 1, pp. 151–165, 2003.
[15] K. C. Kiwiel, "An algorithm for nonsmooth convex minimization with errors," Mathematics of Computation, vol. 45, no. 171, pp. 173–180, 1985.
[16] P. Wolfe, "A method of conjugate subgradients for minimizing nondifferentiable functions," Mathematical Programming Study, vol. 3, pp. 145–173, 1975.
[17] C. Lemaréchal, "An extension of Davidon methods to nondifferentiable problems," Mathematical Programming Study, vol. 3, pp. 95–109, 1975.
[18] K. C. Kiwiel, A Variable Metric Method of Centres for Nonsmooth Minimization, International Institute for Applied Systems Analysis, Laxenburg, Austria, 1981.
[19] K. C. Kiwiel, Efficient Algorithms for Nonsmooth Optimization and Their Applications [Ph.D. thesis], Department of Electronics, Technical University of Warsaw, Warsaw, Poland, 1982.
[20] R. Mifflin, "A modification and extension of Lemarechal's algorithm for nonsmooth minimization," Mathematical Programming Study, vol. 17, pp. 77–90, 1982.
[21] M. Fukushima, "A descent algorithm for nonsmooth convex optimization," Mathematical Programming, vol. 30, no. 2, pp. 163–175, 1984.
[22] L. Q. Qi and R. S. Womersley, "An SQP algorithm for extended linear-quadratic problems in stochastic programming," Annals of Operations Research, vol. 56, pp. 251–285, 1995.