Hindawi Publishing Corporation
Abstract and Applied Analysis
Volume 2013, Article ID 780107, 5 pages
http://dx.doi.org/10.1155/2013/780107

Research Article

A New Smoothing Nonlinear Conjugate Gradient Method for Nonsmooth Equations with Finitely Many Maximum Functions

Yuan-yuan Chen^{1,2} and Shou-qiang Du^1

^1 College of Mathematics, Qingdao University, Qingdao 266071, China
^2 School of Management, University of Shanghai for Science and Technology, Shanghai 200093, China

Correspondence should be addressed to Yuan-yuan Chen; usstchenyuanyuan@163.com

Received 8 March 2013; Accepted 25 March 2013

Academic Editor: Yisheng Song

Copyright © 2013 Y.-y. Chen and S.-q. Du. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

The nonlinear conjugate gradient method is of particular importance for solving unconstrained optimization problems. Nonsmooth equations with finitely many maximum functions form a very useful class of problems, arising in the study of complementarity problems, constrained nonlinear programming problems, and many problems in engineering and mechanics. Smoothing methods for solving nonsmooth equations, complementarity problems, and stochastic complementarity problems have been studied for decades. In this paper, we present a new smoothing nonlinear conjugate gradient method for nonsmooth equations with finitely many maximum functions. The new method guarantees that any accumulation point of the sequence of iterates it generates is a Clarke stationary point of the merit function for nonsmooth equations with finitely many maximum functions.

1. Introduction

In this paper, we consider the following nonsmooth equations with finitely many maximum functions:

    \max_{j \in J_1} \{ h_{1j}(x) \} = 0,
        \vdots                                                    (1)
    \max_{j \in J_m} \{ h_{mj}(x) \} = 0,

where h_{ij} : R^n \to R, for i = 1, \dots, m and j \in J_i, are continuously differentiable functions and the J_i, i = 1, \dots, m, are finite index sets. Denote

    h_i(x) = \max_{j \in J_i} \{ h_{ij}(x) \}, \quad i = 1, \dots, m,        (2)

    H(x) = (h_1(x), \dots, h_m(x))^T.

Then (1) can be reformulated as

    H(x) = 0,                                                     (3)

where H : R^n \to R^m is a nonsmooth map, and the active index sets are J_i(x) = \{ j_i \in J_i \mid h_{ij_i}(x) = h_i(x) \}, i = 1, \dots, m.

Nonsmooth equations have been studied for decades; they arise in the study of optimal control, variational inequality and complementarity problems, equilibrium problems, and engineering mechanics [1-3], because many practical problems, such as stochastic complementarity problems, variational inequality problems, KKT systems of constrained nonlinear programming problems, and many equilibrium problems, can be reformulated as (1). In the past few years, there has been growing interest in the study of (1) (see, e.g., [4, 5]). Owing to their simplicity and global convergence properties, iterative methods such as the nonsmooth Levenberg-Marquardt method, the generalized Newton method, and smoothing methods for solving nonsmooth equations have been widely studied [4-11].

In this paper, we give a new smoothing nonlinear conjugate gradient method for (1). In the following section, we recall some definitions and some background on nonlinear conjugate gradient methods, and we give the new smoothing nonlinear conjugate gradient method for (1), which guarantees that any accumulation point of the sequence of iterates is a Clarke stationary point of the merit function for (1). In the last section, some discussions are given.
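To make the reformulation (1)-(3) concrete, the short Python sketch below evaluates H(x) and the natural residual (1/2)||H(x)||^2, which is used as the merit function in Section 2. The two-maximum-function toy instance and all names in it are our own illustration, not data from the paper.

```python
import numpy as np

# Toy instance of (1) with n = 2, m = 2 (illustrative only):
#   h_1(x) = max{ x1 + x2 - 1, x1 - x2 }      (J_1 has two pieces)
#   h_2(x) = max{ x2, x1*x2 - 2, -x1 }        (J_2 has three pieces)
h_pieces = [
    [lambda x: x[0] + x[1] - 1.0, lambda x: x[0] - x[1]],
    [lambda x: x[1], lambda x: x[0] * x[1] - 2.0, lambda x: -x[0]],
]

def H(x):
    """H(x) = (h_1(x), ..., h_m(x))^T from (2)-(3)."""
    return np.array([max(h(x) for h in pieces) for pieces in h_pieces])

def merit(x):
    """Natural residual 1/2 * ||H(x)||^2, the merit function (6) of Section 2."""
    r = H(x)
    return 0.5 * float(r @ r)

x = np.array([0.5, 0.5])
print(H(x), merit(x))
```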
Notation. In the following, a quantity with a subscript k denotes that quantity evaluated at x_k, \|\cdot\| is the Euclidean 2-norm, and R_{++} = \{ t \in R \mid t > 0 \}.

2. Preliminaries and New Method

In this section, we first give some definitions and some background on the nonlinear conjugate gradient method. We then propose the new methods for (1) and give the convergence analysis. We begin with two definitions, which will be used throughout this paper.

Definition 1. Let F : R^n \to R be a locally Lipschitz function. Then F is almost everywhere differentiable. Denote by D_F the set of points at which F is differentiable; then the generalized gradient of F at x in the sense of Clarke is

    \partial F(x) = \mathrm{conv} \{ \lim_{x_k \to x, \, x_k \in D_F} \nabla F(x_k) \},        (4)

where conv denotes the convex hull.

Definition 2. Let f : R^n \to R be a locally Lipschitz continuous function. We call \tilde{f} : R^n \times R_{++} \to R a smoothing function of f if \tilde{f}(\cdot, \mu) is continuously differentiable for any fixed \mu \in R_{++} and

    \lim_{\tilde{x} \to x, \, \mu \downarrow 0} \tilde{f}(\tilde{x}, \mu) = f(x).        (5)

In the following, we give the new method for (1). In order to describe the method clearly, we divide this section into two parts. In Case 1, we give the new nonlinear conjugate gradient method for a smooth objective function; in Case 2, we give the new smoothing nonlinear conjugate gradient method for a nonsmooth objective function. Denote

    f(x) = \frac{1}{2} \| H(x) \|^2,        (6)

where H is defined in (3). Then (1) is equivalent to the following unconstrained minimization problem with zero optimal value:

    \min_{x \in R^n} f(x).        (7)
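As a concrete illustration of Definition 2 (our own example; the paper itself only points to the smoothing functions of [9, 10]), a standard choice for a finite maximum such as (2) is the exponential (log-sum-exp) smoothing \tilde{h}_i(x, \mu) = \mu \ln \sum_{j \in J_i} \exp(h_{ij}(x)/\mu). It is continuously differentiable in x for fixed \mu > 0 and satisfies h_i(x) \le \tilde{h}_i(x, \mu) \le h_i(x) + \mu \ln |J_i|, so condition (5) holds. The sketch below checks this bound numerically.

```python
import numpy as np

def smooth_max(vals, mu):
    """Exponential smoothing of max(vals): mu * ln(sum_j exp(vals_j / mu)).
    Shifted by max(vals) for numerical stability; the value is unchanged."""
    vals = np.asarray(vals, dtype=float)
    m = vals.max()
    return m + mu * np.log(np.sum(np.exp((vals - m) / mu)))

vals = [0.3, -1.0, 0.29]
for mu in (1.0, 0.1, 0.01):
    approx = smooth_max(vals, mu)
    # uniform bound: 0 <= approx - max(vals) <= mu * ln(len(vals))
    print(mu, approx - max(vals), mu * np.log(len(vals)))
```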
Case 1. In this case, we assume that f is a continuously differentiable function, so (7) is a standard unconstrained optimization problem. There are many methods for solving unconstrained optimization problems, such as the Newton method, nonlinear conjugate gradient methods, and quasi-Newton methods [12-16]. Here, based on [13, 14], we give a new nonlinear conjugate gradient method to solve (7). The iterates for solving (7) are given by

    x_{k+1} = x_k + \alpha_k d_k,        (8)

where d_k is the search direction and \alpha_k > 0 is a step size. In this paper, we use the Wolfe-type line search [14]: compute \alpha_k > 0 such that

    f(x_k + \alpha_k d_k) - f(x_k) \le -\delta \alpha_k^2 \| d_k \|^2,
    g(x_k + \alpha_k d_k)^T d_k \ge -2\sigma \alpha_k \| d_k \|^2,        (9)

where \delta, \sigma \in (0, 1), \delta < \sigma, and g is the gradient of f. The direction is defined by

    d_k = -g_k  if k = 1;    d_k = -g_k + \beta_k d_{k-1}  if k \ge 2,        (10)

where

    \beta_k = \frac{g_k^T y_{k-1}}{d_{k-1}^T z_{k-1}} - L \frac{\| y_{k-1} \|^2 \, g_k^T d_{k-1}}{(d_{k-1}^T z_{k-1})^2},        (11)

with z_{k-1} = y_{k-1} + t_{k-1} s_{k-1}, t_{k-1} = t_0 + \max\{ 0, -s_{k-1}^T y_{k-1} / s_{k-1}^T s_{k-1} \}, t_0 > 0, L > 1/4, y_{k-1} = g_k - g_{k-1}, and s_{k-1} = x_k - x_{k-1}.

Now we give the new method for (7) as follows.

Method 1. Consider the following steps.

Step 0. Given x_0 \in R^n, set d_0 = -g_0 and k := 0; if \| g_0 \| = 0, then stop.

Step 1. Find \alpha_k > 0 satisfying (9), and let x_{k+1} be given by (8).

Step 2. Compute d_k by (10). Set k := k + 1, and go to Step 1.

We also need the following assumption.

Assumption 3. Consider the following:

(i) the level set \Omega = \{ x \in R^n \mid f(x) \le f(x_0) \} is bounded;
(ii) in a neighborhood N of \Omega, there exists \ell > 0 such that

    \| g(x) - g(y) \| \le \ell \| x - y \|  for all x, y \in N.        (12)

In the following, we give the global convergence analysis of Method 1. First, we give some lemmas.

Lemma 4. Let \{x_k\} be generated by Method 1; then

    g_k^T d_k \le -\left( 1 - \frac{1}{4L} \right) \| g_k \|^2,        (13)

where L > 1/4. Lemma 4 follows from [13, Theorem 2.1].

Lemma 5 (see [14]). Suppose that Assumption 3 holds and \alpha_k is computed by (9). Then

    \sum_{k=1}^{\infty} \frac{(g_k^T d_k)^2}{\| d_k \|^2} < +\infty.        (14)

By Lemmas 4 and 5, we have Lemma 6.

Lemma 6. Suppose that Assumption 3 holds and \alpha_k is determined by (9). Then

    \sum_{k=1}^{\infty} \frac{\| g_k \|^4}{\| d_k \|^2} < +\infty.        (15)

Now we give the global convergence theorem for Method 1.

Theorem 7. Suppose that Assumption 3 holds and \{x_k\} is generated by Method 1. Then

    \liminf_{k \to \infty} \| g_k \| = 0.        (16)

Proof. Suppose, by contradiction, that there exists \epsilon > 0 such that

    \| g_k \| \ge \epsilon        (17)

holds for all k \ge 1. By

    d_{k-1}^T z_{k-1} \ge t_0 \, s_{k-1}^T s_{k-1}        (18)

and (11), we have

    | \beta_k | \le \frac{\ell t_0 + L \ell^2}{t_0^2} \cdot \frac{\| g_k \|}{\| d_{k-1} \|},
    \| d_k \| \le \| g_k \| + \frac{\ell t_0 + L \ell^2}{t_0^2} \| g_k \|.        (19)

Denoting M = 1 + (\ell t_0 + L \ell^2)/t_0^2, we get \| d_k \|^2 \le M^2 \| g_k \|^2. Then we have

    \sum_{k=1}^{\infty} \frac{\| g_k \|^4}{\| d_k \|^2} \ge \sum_{k=1}^{\infty} \frac{\epsilon^2}{M^2} = +\infty,        (20)

which contradicts (15). So we get the theorem.
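A minimal Python sketch of Method 1 with the direction update (10)-(11) follows. For brevity, the line search only backtracks on the first (sufficient-decrease) inequality of (9); a complete implementation would also enforce the second, curvature-type condition. The parameter defaults and the quadratic test problem are our own choices.

```python
import numpy as np

def method1(f, grad, x0, t0=1.0, L=0.5, delta=1e-4, tol=1e-6, max_iter=500):
    """Sketch of Method 1 (direction (10)-(11) with simplified line search)."""
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g                                    # Step 0: d_0 = -g_0
    for _ in range(max_iter):
        if np.linalg.norm(g) <= tol:          # stop at approximate stationarity
            break
        alpha, fx = 1.0, f(x)
        while (f(x + alpha * d) - fx > -delta * alpha ** 2 * (d @ d)
               and alpha > 1e-12):            # first inequality of (9)
            alpha *= 0.5
        x_new = x + alpha * d                 # Step 1: iterate (8)
        g_new = grad(x_new)
        s = x_new - x                         # s_{k-1} = x_k - x_{k-1}
        y = g_new - g                         # y_{k-1} = g_k - g_{k-1}
        t = t0 + max(0.0, -(s @ y) / (s @ s))
        z = y + t * s                         # z_{k-1}; d^T z > 0 by the choice of t
        dz = d @ z
        beta = (g_new @ y) / dz - L * (y @ y) * (g_new @ d) / dz ** 2   # (11)
        d = -g_new + beta * d                 # Step 2: direction (10)
        x, g = x_new, g_new
    return x

# usage on a smooth convex quadratic (our test problem, not from the paper)
f = lambda x: (x[0] - 1.0) ** 2 + 10.0 * (x[1] + 2.0) ** 2
grad = lambda x: np.array([2.0 * (x[0] - 1.0), 20.0 * (x[1] + 2.0)])
print(method1(f, grad, np.array([5.0, 5.0])))   # should approach (1, -2)
```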
Case 2. In this case, we assume that f is a nonsmooth function, so (7) is a nonsmooth unconstrained optimization problem. Denote by \tilde{f}(x, \mu) a smoothing function of f(x) and set \tilde{g}_k = \nabla_x \tilde{f}(x_k, \mu_k). Then \alpha_k > 0 is computed by

    \tilde{f}(x_k + \alpha_k d_k, \mu_k) \le \tilde{f}(x_k, \mu_k) - \delta \alpha_k^2 \| d_k \|^2,
    \tilde{g}_{k+1}^T d_k \ge -2\sigma \alpha_k \| d_k \|^2,        (21)

where 0 < \delta < \sigma < 1. The direction is defined by

    d_k = -\tilde{g}_k  if k = 1;    d_k = -\tilde{g}_k + \beta_k d_{k-1}  if k \ge 2,        (22)

where

    \beta_k = \frac{\tilde{g}_k^T \tilde{y}_{k-1}}{d_{k-1}^T \tilde{z}_{k-1}} - L \frac{\| \tilde{y}_{k-1} \|^2 \, \tilde{g}_k^T d_{k-1}}{(d_{k-1}^T \tilde{z}_{k-1})^2},        (23)

with \tilde{z}_{k-1} = \tilde{y}_{k-1} + t_{k-1} s_{k-1}, t_{k-1} = t_0 + \max\{ 0, -s_{k-1}^T \tilde{y}_{k-1} / s_{k-1}^T s_{k-1} \}, t_0 > 0, L > 1/4, \tilde{y}_{k-1} = \tilde{g}_k - \tilde{g}_{k-1}, and s_{k-1} = x_k - x_{k-1}.

We give the following smoothing nonlinear conjugate gradient method.

Method 2. Given x_0 \in R^n, \mu_0 > 0, \gamma > 0, and \gamma_1 \in (0, 1).

Step 1. Find \alpha_k > 0 satisfying (21), and let x_{k+1} be given by (8).

Step 2. If \| \nabla_x \tilde{f}(x_{k+1}, \mu_k) \| \ge \gamma \mu_k, set \mu_{k+1} = \mu_k; otherwise, let \mu_{k+1} \le \gamma_1 \mu_k.

Step 3. Compute d_k by (22). Set k := k + 1, and go to Step 1.

Theorem 8. Suppose that \tilde{f}(\cdot, \mu) satisfies Assumption 3 for each fixed \mu > 0. Then the sequence \{x_k\} generated by Method 2 satisfies

    \lim_{k \to \infty} \mu_k = 0,  \qquad  \liminf_{k \to \infty} \| \nabla_x \tilde{f}(x_k, \mu_{k-1}) \| = 0.        (24)

Proof. If the set \{ k \mid \mu_{k+1} \le \gamma_1 \mu_k \} is finite, then there is an index \bar{k} such that \| \nabla_x \tilde{f}(x_k, \mu_{k-1}) \| \ge \gamma \mu_{k-1} for all k > \bar{k}, and \mu_k is constant for k > \bar{k}; denote this common value by \mu. Because \tilde{f}(\cdot, \mu) is a smooth function, Method 2 then reduces to Method 1 with f(x) = \tilde{f}(x, \mu). By Theorem 7, we get

    \liminf_{k \to \infty} \| \nabla_x \tilde{f}(x_k, \mu) \| = 0,        (25)

so \| \nabla_x \tilde{f}(x_k, \mu_{k-1}) \| \ge \gamma \mu_{k-1} for all k > \bar{k} is impossible. Thus the set \{ k \mid \mu_{k+1} \le \gamma_1 \mu_k \} is infinite, and hence

    \lim_{k \to \infty} \mu_k = 0.        (26)

Since the set \{ k \mid \mu_{k+1} \le \gamma_1 \mu_k \} is infinite, we may write it as \{ k_0, k_1, \dots \} with k_0 < k_1 < \cdots. By Step 2 of Method 2, we then have

    \liminf_{i \to \infty} \| \nabla_x \tilde{f}(x_{k_i + 1}, \mu_{k_i}) \| \le \gamma \lim_{i \to \infty} \mu_{k_i}.        (27)

By

    \lim_{i \to \infty} \mu_{k_i} = 0,        (28)

we have

    \liminf_{k \to \infty} \| \nabla_x \tilde{f}(x_k, \mu_{k-1}) \| = 0.        (29)

So we complete the proof.

Remark 9. From the result of Theorem 8, we know that for some kinds of smoothing functions [9, 10], any accumulation point x^* of \{x_k\} generated by Method 2 is a Clarke stationary point of f; that is, 0 \in \partial f(x^*).
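A minimal sketch of Method 2 follows; it wraps the simplified Method 1 step with the \mu-update of Step 2. The stopping rule, the simplified backtracking, and the l1-type test problem with the smoothing sqrt(x_i^2 + mu^2) of |x_i| are our own illustrative choices, not prescriptions of the paper.

```python
import numpy as np

def method2(ft, gt, x0, mu0=1.0, gamma=1.0, gamma1=0.5,
            t0=1.0, L=0.5, delta=1e-4, mu_tol=1e-8, max_iter=2000):
    """Sketch of Method 2: ft(x, mu) is a smoothing function of f (Definition 2)
    and gt(x, mu) its gradient in x. The line search is the same simplified
    backtracking used in the Method 1 sketch."""
    x, mu = np.asarray(x0, dtype=float), mu0
    g = gt(x, mu)
    d = -g
    for _ in range(max_iter):
        if mu <= mu_tol:
            break
        alpha, fx = 1.0, ft(x, mu)
        while (ft(x + alpha * d, mu) - fx > -delta * alpha ** 2 * (d @ d)
               and alpha > 1e-12):
            alpha *= 0.5
        x_new = x + alpha * d                    # Step 1: iterate (8), search (21)
        g_new = gt(x_new, mu)
        if np.linalg.norm(g_new) < gamma * mu:   # Step 2: smoothed gradient small,
            mu *= gamma1                         # so shrink mu: mu_{k+1} = gamma1*mu_k
            g_new = gt(x_new, mu)
        s, y = x_new - x, g_new - g              # Step 3: direction update (22)-(23)
        t = t0 + max(0.0, -(s @ y) / (s @ s))
        dz = d @ (y + t * s)
        beta = (g_new @ y) / dz - L * (y @ y) * (g_new @ d) / dz ** 2
        d = -g_new + beta * d
        x, g = x_new, g_new
    return x, mu

# usage: f(x) = |x_1| + |x_2| smoothed componentwise by sqrt(x_i^2 + mu^2)
ft = lambda x, mu: float(np.sum(np.sqrt(x ** 2 + mu ** 2)))
gt = lambda x, mu: x / np.sqrt(x ** 2 + mu ** 2)
print(method2(ft, gt, np.array([3.0, -2.0])))    # x should approach the minimizer 0
```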
3. Some Discussions

In this paper, we have given a new smoothing nonlinear conjugate gradient method for (1). The new method guarantees that any accumulation point is a Clarke stationary point of the merit function for (1).

Discussion 1. In Methods 1 and 2, we can use any line search that is well defined under the condition that the search directions are descent directions.

Discussion 2. There are several kinds of smoothing functions that satisfy Assumption 3 for fixed \mu > 0 (see [9, 10]). When the smoothing function of f has the gradient consistency property, any accumulation point of the sequence \{x_k\} generated by Method 2 is a Clarke stationary point. Under some assumptions, we can also use the methods in [15] to solve the nonsmooth equations with finitely many maximum functions (1).

Discussion 3. The new method can also be used for solving the nonlinear complementarity problem (NCP): find x \in R^n such that x \ge 0, F(x) \ge 0, and x^T F(x) = 0. By the Fischer-Burmeister (F-B) function

    \varphi(a, b) = \sqrt{a^2 + b^2} - (a + b),        (30)

the NCP is equivalent to \Phi(x) = 0 with \Phi_i(x) = \varphi(x_i, F_i(x)); its natural merit function \frac{1}{2}\|\Phi(x)\|^2 is continuously differentiable, so we can use the smooth version, Method 1, to solve it (see the sketch at the end of this section). Method 2 can also be used to solve the vertical complementarity problems

    H_i(x) \ge 0, \quad i = 1, \dots, m,  \qquad  \prod_{i=1}^{m} (H_i(x))_j = 0, \quad j = 1, \dots, n,        (31)

where the H_i : R^n \to R^n are continuously differentiable functions.

Discussion 4. The new method can also be used for solving Hamilton-Jacobi-Bellman (HJB) equations (see [17]). The HJB equation is to find x such that

    \max_{1 \le i \le m} \{ L_i x - f_i \} = 0  in \Theta,  \qquad  x = 0  on \partial\Theta,        (32)

where \Theta is a bounded domain in R^n and the L_i, i = 1, \dots, m, are second-order elliptic operators. HJB equations arise in stochastic control problems and are often used to solve finance and control problems. By the finite element method, we can obtain the following discrete HJB equation: find x \in R^n such that

    \max_{1 \le i \le m} \{ A_i x - f_i \} = 0,        (33)

where A_i \in R^{n \times n} and f_i \in R^n, i = 1, \dots, m.

Discussion 5. We can also consider using the new method to solve the general variational inequality problem (see [18]): compute x \in R^n such that

    F(x) \in \Omega,  \qquad  (v - F(x))^T G(x) \ge 0  for any v \in \Omega,        (34)

where F, G : R^n \to R^n are two continuously differentiable functions and \Omega \subseteq R^n is a closed convex set. Problem (34) is denoted by GVI(F, G, \Omega) in [18]; it is a generalization of complementarity problems, nonlinear variational inequality problems, and general nonlinear complementarity problems. The variational inequality problem is to compute x \in \Omega such that

    (y - x)^T F(x) \ge 0  for any y \in \Omega.        (35)

The general nonlinear complementarity problem is to compute x \in R^n such that

    F(x) \ge 0,  \qquad  G(x) \ge 0,  \qquad  F(x)^T G(x) = 0.        (36)

We can rewrite (34) as the following nonsmooth equation:

    F(x) - P_\Omega [ F(x) - G(x) ] = 0,        (37)

where P_\Omega is the projection operator onto \Omega under the Euclidean norm. General nonlinear complementarity problems are widely used in engineering and economic problems; for example, under some conditions, the n-person noncooperative game problem can be reformulated as (37) (see [19]). Therefore, how to use our new method to solve the general nonlinear complementarity problems and the n-person noncooperative game problem would be an interesting topic and deserves further investigation.
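To illustrate Discussion 3 concretely (the linear complementarity instance and all names below are ours, not from the paper), the following sketch builds the componentwise F-B reformulation \Phi(x) = 0 of the NCP and checks it at a known solution.

```python
import numpy as np

def fb(a, b):
    """Fischer-Burmeister function (30): sqrt(a^2 + b^2) - (a + b)."""
    return np.sqrt(a * a + b * b) - (a + b)

def Phi(x, F):
    """Componentwise F-B reformulation: Phi(x) = 0 iff x solves the NCP
    x >= 0, F(x) >= 0, x^T F(x) = 0."""
    Fx = F(x)
    return np.array([fb(x[i], Fx[i]) for i in range(len(x))])

# toy linear complementarity instance: F(x) = M x + q
M = np.array([[2.0, 1.0], [1.0, 2.0]])
q = np.array([-1.0, -1.0])
F = lambda x: M @ x + q

x_star = np.array([1.0 / 3.0, 1.0 / 3.0])   # here x_star > 0 and F(x_star) = 0
print(Phi(x_star, F))                        # ~ [0, 0]: the residual vanishes
```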
Acknowledgments

This work was supported by the National Science Foundation of China (11101231, 70971070), a Project of Shandong Province Higher Educational Science and Technology Program (J10LA05), and the International Cooperation Program for Excellent Lecturers of 2011 by the Shandong Provincial Education Department.

References

[1] L. Qi and P. Tseng, “On almost smooth functions and piecewise smooth functions,” Nonlinear Analysis, vol. 67, no. 3, pp. 773-794, 2007.
[2] F. Facchinei and J.-S. Pang, Finite-Dimensional Variational Inequalities and Complementarity Problems, vol. 1, Springer, New York, NY, USA, 2003.
[3] D. Sun and J. Han, “Newton and quasi-Newton methods for a class of nonsmooth equations and related problems,” SIAM Journal on Optimization, vol. 7, no. 2, pp. 463-480, 1997.
[4] Y. Gao, “Newton methods for solving two classes of nonsmooth equations,” Applications of Mathematics, vol. 46, no. 3, pp. 215-229, 2001.
[5] S.-Q. Du and Y. Gao, “A parametrized Newton method for nonsmooth equations with finitely many maximum functions,” Applications of Mathematics, vol. 54, no. 5, pp. 381-390, 2009.
[6] N. Yamashita and M. Fukushima, “Modified Newton methods for solving a semismooth reformulation of monotone complementarity problems,” Mathematical Programming B, vol. 76, no. 3, pp. 469-491, 1997.
[7] L. Q. Qi and J. Sun, “A nonsmooth version of Newton’s method,” Mathematical Programming A, vol. 58, no. 3, pp. 353-367, 1993.
[8] Y. Elfoutayeni and M. Khaladi, “Using vector divisions in solving the linear complementarity problem,” Journal of Computational and Applied Mathematics, vol. 236, no. 7, pp. 1919-1925, 2012.
[9] X. Chen and W. Zhou, “Smoothing nonlinear conjugate gradient method for image restoration using nonsmooth nonconvex minimization,” SIAM Journal on Imaging Sciences, vol. 3, no. 4, pp. 765-790, 2010.
[10] X. Chen, “Smoothing methods for nonsmooth, nonconvex minimization,” Mathematical Programming B, vol. 134, no. 1, pp. 71-99, 2012.
[11] M. C. Ferris, O. L. Mangasarian, and J. S. Pang, Complementarity: Applications, Algorithms and Extensions, Kluwer Academic, Dordrecht, The Netherlands, 2001.
[12] Y. H. Dai and Y. Yuan, “A nonlinear conjugate gradient method with a strong global convergence property,” SIAM Journal on Optimization, vol. 10, no. 1, pp. 177-182, 1999.
[13] Z. Dai and F. Wen, “Global convergence of a modified Hestenes-Stiefel nonlinear conjugate gradient method with Armijo line search,” Numerical Algorithms, vol. 59, no. 1, pp. 79-93, 2012.
[14] C.-Y. Wang, Y.-Y. Chen, and S.-Q. Du, “Further insight into the Shamanskii modification of Newton method,” Applied Mathematics and Computation, vol. 180, no. 1, pp. 46-52, 2006.
[15] Y. Y. Chen and S. Q. Du, “Nonlinear conjugate gradient methods with Wolfe type line search,” Abstract and Applied Analysis, vol. 2013, Article ID 742815, 5 pages, 2013.
[16] Y. Xiao, H. Song, and Z. Wang, “A modified conjugate gradient algorithm with cyclic Barzilai-Borwein steplength for unconstrained optimization,” Journal of Computational and Applied Mathematics, vol. 236, no. 13, pp. 3101-3110, 2012.
[17] S. Zhou and Z. Zou, “A relaxation scheme for Hamilton-Jacobi-Bellman equations,” Applied Mathematics and Computation, vol. 186, no. 1, pp. 806-813, 2007.
[18] X. Chen, L. Qi, and D. Sun, “Global and superlinear convergence of the smoothing Newton method and its application to general box constrained variational inequalities,” Mathematics of Computation, vol. 67, no. 222, pp. 519-540, 1998.
[19] M. C. Ferris and J. S. Pang, “Engineering and economic applications of complementarity problems,” SIAM Review, vol. 39, no. 4, pp. 669-713, 1997.