Hindawi Publishing Corporation
Abstract and Applied Analysis
Volume 2013, Article ID 742815, 5 pages
http://dx.doi.org/10.1155/2013/742815

Research Article
Nonlinear Conjugate Gradient Methods with Wolfe Type Line Search

Yuan-Yuan Chen and Shou-Qiang Du
College of Mathematics, Qingdao University, Qingdao 266071, China

Correspondence should be addressed to Yuan-Yuan Chen; usstchenyuanyuan@163.com

Received 20 January 2013; Accepted 6 February 2013

Academic Editor: Yisheng Song

Copyright © 2013 Y.-Y. Chen and S.-Q. Du. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

The nonlinear conjugate gradient method is one of the useful methods for unconstrained optimization problems. In this paper, we consider three kinds of nonlinear conjugate gradient methods with a Wolfe type line search for unconstrained optimization problems. Under some mild assumptions, global convergence results for the given methods are established. Numerical results show that the nonlinear conjugate gradient methods with the Wolfe type line search are efficient for some unconstrained optimization problems.

1. Introduction

In this paper, we focus our attention on the global convergence of nonlinear conjugate gradient methods with a Wolfe type line search. We consider the following unconstrained optimization problem:

\[
\min_{x \in R^n} f(x). \tag{1}
\]

In (1), $f$ is a continuously differentiable function, and its gradient is denoted by $g(x) = \nabla f(x)$. Iterative methods are commonly used for (1). The iterative formula is given by

\[
x_{k+1} = x_k + \alpha_k d_k, \tag{2}
\]

where $x_k, x_{k+1} \in R^n$ are the $k$th and $(k+1)$th iterates, $\alpha_k$ is a step size, and $d_k$ is a search direction. In the following, we define the search direction by

\[
d_k =
\begin{cases}
-g_k, & \text{if } k = 1,\\
-g_k + \beta_k d_{k-1}, & \text{if } k \ge 2.
\end{cases} \tag{3}
\]

In (3), $\beta_k$ is a conjugate gradient scalar, and the well-known formulas are $\beta_k^{FR}$, $\beta_k^{PRP}$, $\beta_k^{HS}$, and $\beta_k^{DY}$ (see [1–6]). Recently, some new kinds of nonlinear conjugate gradient methods have been given in [7–11]. Based on these methods, we give some new kinds of nonlinear conjugate gradient methods and analyze their global convergence with a Wolfe type line search.

The rest of the paper is organized as follows. In Section 2, we give the methods and their global convergence results. In the last section, numerical results and some discussions are given.

2. The Methods and Their Global Convergence Results

Firstly, we give the Wolfe type line search, which will be used in our new nonlinear conjugate gradient methods. In the remainder of this paper, $\|\cdot\|$ stands for the 2-norm. We use the Wolfe type line search of [12]: compute $\alpha_k > 0$ such that

\[
f(x_k + \alpha_k d_k) \le f(x_k) - \delta \alpha_k^2 \|d_k\|^2, \tag{4}
\]
\[
g(x_k + \alpha_k d_k)^T d_k \ge -2\sigma \alpha_k \|d_k\|^2, \tag{5}
\]

where $\delta, \sigma \in (0, 1)$ and $\delta < \sigma$.

Now, we present the nonlinear conjugate gradient methods as follows.

Algorithm 1. We have the following steps.

Step 0. Given $x_0 \in R^n$, set $d_0 = -g_0$, $k := 0$. If $\|g_0\| = 0$, then stop.

Step 1. Find $\alpha_k > 0$ satisfying (4) and (5), and obtain $x_{k+1}$ by (2). If $\|g_{k+1}\| = 0$, then stop.

Step 2. Compute $d_k$ by the following equation:

\[
d_k = \beta_k d_{k-1} - \left(1 + \beta_k \frac{g_k^T d_{k-1}}{\|g_k\|^2}\right) g_k, \tag{6}
\]

in which $\beta_k = -\|g_k\|^2 / (d_{k-1}^T g_{k-1})$. Set $k := k + 1$, and go to Step 1.
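To make the scheme above concrete, the following minimal Python sketch implements Algorithm 1 together with one possible realization of the Wolfe type line search (4)–(5). The function names, the geometric trial grid, and the parameter values (`delta`, `sigma`, the shrink factor) are our own illustrative choices, not part of the paper; acceptable step sizes exist (see Lemma 2 below), but a simple grid may miss them, so the routine falls back to a step satisfying only the descent condition (4).

```python
import numpy as np

def wolfe_type_step(f, grad, x, d, delta=1e-4, sigma=0.1,
                    alpha0=1.0, shrink=0.5, max_trials=60):
    """Try a geometric grid of step sizes and return the first one that
    satisfies both Wolfe type conditions (4) and (5).  If none on the grid
    does, fall back to the largest trial step satisfying condition (4)."""
    fx = f(x)
    dd = float(d @ d)
    alpha, fallback = alpha0, None
    for _ in range(max_trials):
        x_new = x + alpha * d
        ok4 = f(x_new) <= fx - delta * alpha ** 2 * dd        # condition (4)
        ok5 = grad(x_new) @ d >= -2.0 * sigma * alpha * dd    # condition (5)
        if ok4 and ok5:
            return alpha
        if ok4 and fallback is None:
            fallback = alpha
        alpha *= shrink
    return fallback if fallback is not None else alpha

def cg_algorithm_1(f, grad, x0, tol=1e-6, max_iter=500):
    """Sketch of Algorithm 1: direction (6), with the scalar
    beta = -||g_new||^2 / (d^T g_old) formed from the new gradient."""
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g                                          # Step 0
    for _ in range(max_iter):
        if np.linalg.norm(g) <= tol:
            break
        alpha = wolfe_type_step(f, grad, x, d)      # Step 1
        x = x + alpha * d
        g_new = grad(x)
        if np.linalg.norm(g_new) <= tol:            # stopping test of Step 1
            g = g_new
            break
        beta = -(g_new @ g_new) / (d @ g)           # Step 2: beta of Algorithm 1
        d = beta * d - (1.0 + beta * (g_new @ d) / (g_new @ g_new)) * g_new   # (6)
        g = g_new
    return x

if __name__ == "__main__":
    # Smoke test on a strictly convex quadratic (not a problem from [13]).
    A = np.diag(np.arange(1.0, 6.0))
    b = np.ones(5)
    f = lambda x: 0.5 * x @ A @ x - b @ x
    grad = lambda x: A @ x - b
    x_star = cg_algorithm_1(f, grad, np.zeros(5))
    print(np.linalg.norm(grad(x_star)))             # should be small
```

In the sketch the new direction is formed from the freshly computed gradient and the previous pair $(d_k, g_k)$, which is how we read Steps 1 and 2 of Algorithm 1.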
Before giving the global convergence theorem, we need the following assumptions.

Assumption 1. (A1) The level set $L_0 = \{x \in R^n \mid f(x) \le f(x_0)\}$ is bounded.
(A2) In a neighborhood $N$ of $L_0$, $f$ is continuously differentiable and its gradient is Lipschitz continuous; namely, for $x, y \in N$, there exists $L > 0$ such that

\[
\|g(x) - g(y)\| \le L \|x - y\|. \tag{7}
\]

In order to establish the global convergence of Algorithm 1, we also need the following lemmas.

Lemma 2. Suppose that Assumption 1 holds; then (4) and (5) are well defined.

The proof is essentially the same as that of Lemma 1 of [12]; hence, we do not repeat it here.

Lemma 3. Suppose that the direction $d_k$ is given by (6); then one has

\[
g_k^T d_k = -\|g_k\|^2 \le 0 \tag{8}
\]

for all $k \ge 0$. So, one knows that $d_k$ is a descent search direction.

Proof. This follows directly from the definitions of $d_k$ and $\beta_k$.

Lemma 4. Suppose that Assumption 1 holds and $\alpha_k$ is determined by (4) and (5); one has

\[
\sum_{k=1}^{\infty} \frac{(g_k^T d_k)^2}{\|d_k\|^2} < +\infty. \tag{9}
\]

Proof. By (4), (5), Lemma 3, and Assumption 1, we can get

\[
-(2\sigma + L)\,\alpha_k \|d_k\|^2 \le g_k^T d_k. \tag{10}
\]

Then, we know that

\[
(2\sigma + L)\,\alpha_k \|d_k\| \ge -\frac{g_k^T d_k}{\|d_k\|}. \tag{11}
\]

By squaring both sides of the previous inequality, we get

\[
(2\sigma + L)^2 \alpha_k^2 \|d_k\|^2 \ge \frac{(g_k^T d_k)^2}{\|d_k\|^2}. \tag{12}
\]

By (4), we know that

\[
\sum_{k=1}^{\infty} \frac{(g_k^T d_k)^2}{\|d_k\|^2}
\le \sum_{k=1}^{\infty} (2\sigma + L)^2 \alpha_k^2 \|d_k\|^2
\le \frac{(2\sigma + L)^2}{\delta} \sum_{k=1}^{\infty} \{f(x_k) - f(x_{k+1})\}
< +\infty. \tag{13}
\]

So, we get (9), and this completes the proof of the lemma.

Lemma 5. Suppose that Assumption 1 holds, $d_k$ is computed by (6), and $\alpha_k$ is determined by (4) and (5); one has

\[
\sum_{k \ge 0} \frac{\|g_k\|^4}{\|d_k\|^2} < +\infty. \tag{14}
\]

Proof. From Lemmas 3 and 4, we can obtain (14).

Theorem 6. Consider Algorithm 1, and suppose that Assumption 1 holds. Then, one has

\[
\liminf_{k \to \infty} \|g_k\| = 0. \tag{15}
\]

Proof. Suppose by contradiction that there exists $\varepsilon > 0$ such that

\[
\|g_k\| \ge \varepsilon \tag{16}
\]

holds for all $k \ge 0$. From (6) and Lemma 3, we get

\[
\|d_k\|^2 = \beta_k^2 \|d_{k-1}\|^2
- \left(1 + \beta_k \frac{g_k^T d_{k-1}}{\|g_k\|^2}\right)^2 \|g_k\|^2
- 2\left(1 + \beta_k \frac{g_k^T d_{k-1}}{\|g_k\|^2}\right) g_k^T d_k.
\]

Dividing this equality by $(g_k^T d_k)^2 = \|g_k\|^4$ and noting that, by Lemma 3, $\beta_k = \|g_k\|^2/\|g_{k-1}\|^2$, we get

\[
\frac{\|d_k\|^2}{\|g_k\|^4}
= \frac{\|d_{k-1}\|^2}{\|g_{k-1}\|^4} + \frac{1}{\|g_k\|^2}
- \frac{\beta_k^2 (g_k^T d_{k-1})^2}{\|g_k\|^6}
\le \frac{\|d_{k-1}\|^2}{\|g_{k-1}\|^4} + \frac{1}{\|g_k\|^2}. \tag{17}
\]

Applying (17) recursively and using $\|d_0\|^2/\|g_0\|^4 = 1/\|g_0\|^2$ and (16), we obtain

\[
\frac{\|d_k\|^2}{\|g_k\|^4} \le \sum_{j=0}^{k} \frac{1}{\|g_j\|^2} \le \frac{k+1}{\varepsilon^2}. \tag{18}
\]

So, we obtain

\[
\sum_{k \ge 1} \frac{\|g_k\|^4}{\|d_k\|^2} \ge \sum_{k \ge 1} \frac{\varepsilon^2}{k+1} = +\infty, \tag{19}
\]

which contradicts (14). Hence, we get this theorem.

Remark 7. In Algorithm 1, we can also use the following equations to compute $d_k$:

\[
d_k = \beta_k d_{k-1} - \beta_k \frac{g_k^T d_{k-1}}{\|g_k\|^2}\, g_k - g_k, \tag{20}
\]

where $\beta_k = \max\{\min\{\beta_k^{FR}, \beta_k^{PRP}\}, 0\}$;

\[
d_k = \beta_k d_{k-1} - \beta_k \frac{g_k^T d_{k-1}}{\|g_k\|^2}\, g_k - g_k, \tag{21}
\]

where $\beta_k = \max\{\min\{\beta_k^{LS}, \beta_k^{CD}\}, 0\}$.

Algorithm 8. We have the following steps.

Step 0. Given $x_0 \in R^n$, set $d_0 = -g_0$, $k := 0$. If $\|g_0\| = 0$, then stop.

Step 1. Find $\alpha_k > 0$ satisfying (4) and (5), and obtain $x_{k+1}$ by (2). If $\|g_{k+1}\| = 0$, then stop.

Step 2. Compute $\beta_k$ by the formula

\[
\beta_k =
\begin{cases}
\beta_k^{DY} = \dfrac{g_k^T g_k}{(g_k - g_{k-1})^T d_{k-1}}, & \|g_{k-1}\|^2 \le g_k^T g_{k-1},\\[2mm]
\beta_k^{FR} = \dfrac{\|g_k\|^2}{\|g_{k-1}\|^2}, & \|g_{k-1}\|^2 > g_k^T g_{k-1},
\end{cases} \tag{22}
\]

and compute $d_{k+1}$ by (3). Set $k := k + 1$, and go to Step 1.

Lemma 9. Suppose that Assumption 1 holds, and $\beta_k$ is computed by (22); if $\|g_k\| \ne 0$, then one gets $g_k^T d_k < 0$ for all $k \ge 2$ and $\beta_k^{FR} \ge |\beta_k|$ (see [9]).

Lemma 10. Suppose that $t > 0$ and $\nu$ is a constant. If the positive sequence $\{a_k\}$ satisfies

\[
\sum_{i=1}^{k} a_i \ge t k + \nu, \tag{23}
\]

then one has

\[
\sum_{k \ge 1} \frac{a_k^2}{\sum_{i=1}^{k} a_i} = +\infty. \tag{24}
\]

From the previous analysis, we can get the following global convergence result for Algorithm 8.

Theorem 11. Suppose that Assumption 1 holds, and $\|g(x)\|^2 \le \gamma$, where $\gamma$ is a constant. Then, one has

\[
\liminf_{k \to \infty} \|g_k\| = 0. \tag{25}
\]

Proof. Suppose by contradiction that there exists $\varepsilon > 0$ such that

\[
\|g_k\|^2 \ge \varepsilon \tag{26}
\]

holds for all $k$. From (3), we have

\[
d_k = \beta_k d_{k-1} - g_k. \tag{27}
\]

Squaring both sides of the previous equation, we get

\[
\|d_k\|^2 = \beta_k^2 \|d_{k-1}\|^2 - \|g_k\|^2 - 2\, g_k^T d_k. \tag{28}
\]

Let $t_k = \|d_k\|^2/\|g_k\|^4$ and $s_k = -g_k^T d_k/\|g_k\|^2$; dividing (28) by $\|g_k\|^4$ and using (22) and Lemma 9, we have

\[
t_k \le t_{k-1} + \frac{2 s_k - 1}{\|g_k\|^2}. \tag{29}
\]

By $t_1 = 1/\|g_1\|^2$, $s_1 = 1$, (26), and $\|g_k\|^2 \le \gamma$, we know that

\[
t_k \le \frac{2}{\varepsilon} \sum_{i=1}^{k} |s_i| - \frac{k}{\gamma}. \tag{30}
\]

By (30) and $t_k > 0$, we get

\[
\sum_{i=1}^{k} |s_i| \ge \frac{\varepsilon k}{2\gamma}. \tag{31}
\]

From (30), (31), and Lemma 10 applied with $a_k = |s_k|$, and since $(g_k^T d_k)^2/\|d_k\|^2 = s_k^2/t_k \ge (\varepsilon/2)\, s_k^2 / \sum_{i=1}^{k} |s_i|$, we have

\[
\sum_{k \ge 1} \frac{(g_k^T d_k)^2}{\|d_k\|^2} = +\infty, \tag{32}
\]

which contradicts Lemma 4. Therefore, we get this theorem.
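The scalars named in Remark 7 and in (22) above are not written out in the paper; the sketch below spells them out using their standard definitions from the literature (see [1–6]) and shows how the hybrid choices of (20), (21), and (22) could be computed. The function names are ours, and the snippet is only an illustration, not the authors' code.

```python
import numpy as np

# Standard conjugate gradient scalars (see [1-6]); the unused d_prev
# argument is kept so all helpers share one signature.
def beta_fr(g, g_prev, d_prev):
    return (g @ g) / (g_prev @ g_prev)

def beta_prp(g, g_prev, d_prev):
    return g @ (g - g_prev) / (g_prev @ g_prev)

def beta_ls(g, g_prev, d_prev):
    return g @ (g - g_prev) / (-(d_prev @ g_prev))

def beta_cd(g, g_prev, d_prev):
    return (g @ g) / (-(d_prev @ g_prev))

def beta_dy(g, g_prev, d_prev):
    return (g @ g) / (d_prev @ (g - g_prev))

def beta_remark7_fr_prp(g, g_prev, d_prev):
    """beta_k used with (20): max{min{FR, PRP}, 0}."""
    return max(min(beta_fr(g, g_prev, d_prev), beta_prp(g, g_prev, d_prev)), 0.0)

def beta_remark7_ls_cd(g, g_prev, d_prev):
    """beta_k used with (21): max{min{LS, CD}, 0}."""
    return max(min(beta_ls(g, g_prev, d_prev), beta_cd(g, g_prev, d_prev)), 0.0)

def beta_algorithm_8(g, g_prev, d_prev):
    """beta_k of (22): DY when ||g_{k-1}||^2 <= g_k^T g_{k-1}, FR otherwise."""
    if g_prev @ g_prev <= g @ g_prev:
        return beta_dy(g, g_prev, d_prev)
    return beta_fr(g, g_prev, d_prev)
```

With any of these scalars, Algorithm 8 simply forms the next direction from (3) under the Wolfe type line search (4)–(5).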
Algorithm 12. We have the following steps.

Step 0. Given $x_0 \in R^n$ and $\mu > 1/4$, set $d_0 = -g_0$, $k := 0$. If $\|g_0\| = 0$, then stop.

Step 1. Find $\alpha_k > 0$ satisfying (4) and (5), and obtain $x_{k+1}$ by (2). If $\|g_{k+1}\| = 0$, then stop.

Step 2. Compute $d_k$ by

\[
d_k =
\begin{cases}
-g_k, & k = 0,\\[1mm]
\beta_k d_{k-1} - \left(1 + \beta_k \dfrac{g_k^T d_{k-1}}{\|g_k\|^2}\right) g_k, & k \ge 1,
\end{cases} \tag{33}
\]

where

\[
\beta_k = \frac{g_k^T (g_k - g_{k-1})}{\|g_{k-1}\|^2}
- \mu\, \frac{\|g_k - g_{k-1}\|^2\, g_k^T d_{k-1}}{\|g_{k-1}\|^4}. \tag{34}
\]

Set $k := k + 1$, and go to Step 1.

Lemma 13. Suppose that the direction $d_k$ is given by (33) and (34); then one has

\[
g_k^T d_k = -\|g_k\|^2 \le 0 \tag{35}
\]

for any $k \ge 0$.

Lemma 14. Suppose that Assumption 1 holds, $d_k$ is generated by (33) and (34), and $\alpha_k$ is determined by (4) and (5); one has

\[
\sum_{k \ge 0} \frac{\|g_k\|^4}{\|d_k\|^2} < +\infty. \tag{36}
\]

Proof. From Lemma 4 and Lemma 13, we obtain (36).

Lemma 15. Suppose that $f$ is convex, that is, $p^T \nabla^2 f(x)\, p \ge 0$ for all $p \in R^n$, where $\nabla^2 f(x)$ is the Hessian matrix of $f$. Let $\{x_k\}$ and $\{d_k\}$ be generated by Algorithm 12; one has

\[
\delta \alpha_k \|d_k\|^2 \le -g_k^T d_k. \tag{37}
\]

Proof. By Taylor's theorem, we can get

\[
f(x_{k+1}) = f(x_k) + g_k^T s_k + \frac{1}{2}\, s_k^T G_k s_k, \tag{38}
\]

where $s_k = x_{k+1} - x_k$ and $G_k = \int_0^1 \nabla^2 f(x_k + \tau s_k)\, d\tau$. By Assumption 1, (4), (38), and the convexity of $f$, we get

\[
-\delta \alpha_k^2 \|d_k\|^2 \ge f(x_{k+1}) - f(x_k) \ge g_k^T s_k = \alpha_k\, g_k^T d_k. \tag{39}
\]

So, we get (37).

Theorem 16. Consider Algorithm 12, and suppose that Assumption 1 and the assumption of Lemma 15 hold. Then, one has

\[
\liminf_{k \to \infty} \|g_k\| = 0. \tag{40}
\]

Proof. We suppose that the conclusion is not true, that is, there exists $\varepsilon > 0$ such that

\[
\|g_k\| \ge \varepsilon \tag{41}
\]

holds for all $k \ge 0$. By Lemmas 13 and 15, we have

\[
\delta \alpha_k \|d_k\|^2 \le \|g_k\|^2. \tag{42}
\]

From Assumption 1, Lemma 15, and (42), we know that

\[
|\beta_k| \le \left(\frac{\mu L^2 + \delta L}{\delta^2}\right) \frac{\|g_k\|}{\|d_{k-1}\|}. \tag{43}
\]

Therefore, by (33), we get

\[
\|d_k\| \le \|g_k\| + 2\left(\frac{\mu L^2 + \delta L}{\delta^2}\right) \|g_k\|
= \left(1 + \frac{2L}{\delta} + \frac{2\mu L^2}{\delta^2}\right) \|g_k\|. \tag{44}
\]

We obtain

\[
\sum_{k \ge 1} \frac{\|g_k\|^4}{\|d_k\|^2}
\ge \sum_{k \ge 1} \frac{\varepsilon^2}{\left(1 + 2L/\delta + 2\mu L^2/\delta^2\right)^2} = +\infty, \tag{45}
\]

which contradicts (36). Therefore, we have

\[
\liminf_{k \to \infty} \|g_k\| = 0. \tag{46}
\]

So, we complete the proof of this theorem.
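As a small illustration of Step 2 of Algorithm 12, the following sketch evaluates the scalar (34) and the direction (33). The function name and the choice mu = 0.3 (any value larger than 1/4 is admissible) are ours; the same Wolfe type step routine sketched after Algorithm 1 could be used for Step 1.

```python
import numpy as np

def direction_algorithm_12(g, g_prev, d_prev, mu=0.3):
    """Step 2 of Algorithm 12: scalar (34) and direction (33).
    The parameter mu must satisfy mu > 1/4; mu = 0.3 is illustrative."""
    y = g - g_prev
    gp2 = g_prev @ g_prev
    beta = (g @ y) / gp2 - mu * (y @ y) * (g @ d_prev) / gp2 ** 2      # (34)
    # direction (33); identical in form to (6) but with the beta above
    return beta * d_prev - (1.0 + beta * (g @ d_prev) / (g @ g)) * g
```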
Remark 17. In Algorithm 12, $\beta_k$ can also be computed by the following formula:

\[
\beta_k = \frac{g_k^T (g_k - g_{k-1})}{\|g_{k-1}\|^2}
- \min\left\{\frac{g_k^T (g_k - g_{k-1})}{\|g_{k-1}\|^2},\;
\mu\, \frac{\|g_k - g_{k-1}\|^2\, g_k^T d_{k-1}}{\|g_{k-1}\|^4}\right\}, \tag{47}
\]

where $\mu > 1/4$.

3. Numerical Experiments and Discussions

In this section, we give some numerical experiments for the previous new nonlinear conjugate gradient methods with the Wolfe type line search, together with some discussions. The test problems are from [13]. We use the condition $\|g_{k+1}\| \le 10^{-6}$ as the stopping criterion and MATLAB 7.0 to test the chosen problems. We give the numerical results of Algorithms 1 and 12 to show that the methods are efficient for unconstrained optimization problems. The results are listed in Tables 1 and 2.

Table 1: Test results for Algorithm 1.

Name     Dim    NI    NF    NG
GULF     3      2     52    3
VARDIM   2      3     54    6
LIN      50     1     3     3
LIN      2      1     3     3
LIN      1000   1     3     3
LIN1     2      1     51    2
LIN1     10     1     3     3
LIN0     4      1     3     3

Table 2: Test results for Algorithm 12.

Name     Dim    NI    NF    NG
GULF     3      2     52    3
BIGGS    6      1000  1149  1095
IE       50     1000  1053  1003
IE       3      1000  1101  1052
TRIG     1000   1103  1052  3
BV       10     5     203   55
BV       3      6     109   9

Name: the test problem name; Dim: the problem dimension; NI: the number of iterations; NF: the number of function evaluations; NG: the number of gradient evaluations.

Discussion 1. From the analysis of the global convergence of Algorithm 1, we can see that if $d_k$ satisfies the property of an efficient descent search direction, then we can get the global convergence of the corresponding nonlinear conjugate gradient method with the Wolfe type line search without other assumptions.

Discussion 2. In Algorithm 8, we use a Wolfe type line search. A nonmonotone line search (see [14]) can also be used in our algorithms.

Discussion 3. From the analysis of the global convergence of Algorithm 12, we can see that when $d_k$ is an efficient descent search direction, we can get the global convergence of the corresponding conjugate gradient method with the Wolfe type line search without requiring $f$ to be uniformly convex.
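The experiments reported above were run in MATLAB 7.0 on problems from [13]; we do not have that code. Purely as a hypothetical illustration of the reported setup, the driver below feeds a chained Rosenbrock function (a stand-in, not one of the problems in the tables) to the `cg_algorithm_1` sketch from Section 2 with the stopping tolerance $10^{-6}$. A crude line search like the one sketched there may, of course, stop short of this tolerance on a hard problem.

```python
import numpy as np

def chained_rosenbrock(x):
    """A chained Rosenbrock function, used here only as a stand-in test
    problem; the paper's experiments use the collection of [13]."""
    return np.sum(100.0 * (x[1:] - x[:-1] ** 2) ** 2 + (1.0 - x[:-1]) ** 2)

def chained_rosenbrock_grad(x):
    g = np.zeros_like(x)
    g[:-1] = -400.0 * x[:-1] * (x[1:] - x[:-1] ** 2) - 2.0 * (1.0 - x[:-1])
    g[1:] += 200.0 * (x[1:] - x[:-1] ** 2)
    return g

# Reuses the cg_algorithm_1 sketch given after Algorithm 1 in Section 2,
# with the stopping criterion ||g_{k+1}|| <= 1e-6 reported in this section.
x_final = cg_algorithm_1(chained_rosenbrock, chained_rosenbrock_grad,
                         np.full(10, -1.2), tol=1e-6, max_iter=5000)
print(np.linalg.norm(chained_rosenbrock_grad(x_final)))   # final gradient norm
```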
Acknowledgments

This work is supported by the National Science Foundation of China (11101231 and 10971118), the Project of Shandong Province Higher Educational Science and Technology Program (J10LA05), and the International Cooperation Program for Excellent Lecturers of 2011 by the Shandong Provincial Education Department.

References

[1] B. T. Polyak, "The conjugate gradient method in extremal problems," USSR Computational Mathematics and Mathematical Physics, vol. 9, no. 4, pp. 94–112, 1969.
[2] R. Fletcher and C. M. Reeves, "Function minimization by conjugate gradients," The Computer Journal, vol. 7, pp. 149–154, 1964.
[3] M. R. Hestenes and E. Stiefel, "Methods of conjugate gradients for solving linear systems," Journal of Research of the National Bureau of Standards, vol. 49, pp. 409–436, 1952.
[4] Y. H. Dai and Y. Yuan, "A nonlinear conjugate gradient method with a strong global convergence property," SIAM Journal on Optimization, vol. 10, no. 1, pp. 177–182, 1999.
[5] R. Fletcher, Practical Methods of Optimization, Unconstrained Optimization, Wiley, New York, NY, USA, 2nd edition, 1987.
[6] M. Raydan, "The Barzilai and Borwein gradient method for the large scale unconstrained minimization problem," SIAM Journal on Optimization, vol. 7, no. 1, pp. 26–33, 1997.
[7] L. Zhang and W. Zhou, "Two descent hybrid conjugate gradient methods for optimization," Journal of Computational and Applied Mathematics, vol. 216, no. 1, pp. 251–264, 2008.
[8] A. Zhou, Z. Zhu, H. Fan, and Q. Qing, "Three new hybrid conjugate gradient methods for optimization," Applied Mathematics, vol. 2, no. 3, pp. 303–308, 2011.
[9] B. C. Jiao, L. P. Chen, and C. Y. Pan, "Global convergence of a hybrid conjugate gradient method with Goldstein line search," Mathematica Numerica Sinica, vol. 29, no. 2, pp. 137–146, 2007.
[10] G. Yuan, "Modified nonlinear conjugate gradient methods with sufficient descent property for large-scale optimization problems," Optimization Letters, vol. 3, no. 1, pp. 11–21, 2009.
[11] Z. Dai and B. Tian, "Global convergence of some modified PRP nonlinear conjugate gradient methods," Optimization Letters, vol. 5, no. 4, pp. 615–630, 2011.
[12] C. Y. Wang, Y. Y. Chen, and S. Q. Du, "Further insight into the Shamanskii modification of Newton method," Applied Mathematics and Computation, vol. 180, no. 1, pp. 46–52, 2006.
[13] J. J. Moré, B. S. Garbow, and K. E. Hillstrom, "Testing unconstrained optimization software," ACM Transactions on Mathematical Software, vol. 7, no. 1, pp. 17–41, 1981.
[14] L. Grippo, F. Lampariello, and S. Lucidi, "A nonmonotone line search technique for Newton's method," SIAM Journal on Numerical Analysis, vol. 23, no. 4, pp. 707–716, 1986.