Hindawi Publishing Corporation
Journal of Applied Mathematics
Volume 2013, Article ID 875935, 6 pages
http://dx.doi.org/10.1155/2013/875935

Research Article
An Improved Diagonal Jacobian Approximation via a New Quasi-Cauchy Condition for Solving Large-Scale Systems of Nonlinear Equations

Mohammed Yusuf Waziri (1,2) and Zanariah Abdul Majid (1,3)

1 Department of Mathematics, Faculty of Science, Universiti Putra Malaysia, 43400 Serdang, Selangor, Malaysia
2 Department of Mathematical Sciences, Faculty of Science, Bayero University, Kano PMB 3011, Nigeria
3 Institute for Mathematical Research, Universiti Putra Malaysia, 43400 Serdang, Selangor, Malaysia

Correspondence should be addressed to Mohammed Yusuf Waziri; mywaziri@gmail.com

Received 8 August 2012; Revised 14 December 2012; Accepted 15 December 2012

Academic Editor: Turgut Öziş

Copyright © 2013 M. Y. Waziri and Z. Abdul Majid. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

We present a new diagonal quasi-Newton update with an improved diagonal Jacobian approximation for solving large-scale systems of nonlinear equations. In this approach, the Jacobian approximation is derived from the quasi-Cauchy condition. The aim is to further improve the performance of diagonal updating by modifying the quasi-Cauchy relation so that it carries some additional information from the functions. The effectiveness of the proposed scheme is appraised through numerical comparison with some well-known Newton-like methods.

1. Introduction

Consider the system of nonlinear equations

F(x) = 0,   (1)

where $F : \mathbb{R}^n \to \mathbb{R}^n$ is a nonlinear mapping. The mapping $F$ is often assumed to satisfy the following assumptions:

(A1) there exists $x^* \in \mathbb{R}^n$ such that $F(x^*) = 0$;
(A2) $F$ is continuously differentiable in a neighborhood of $x^*$;
(A3) $F'(x^*)$ is invertible.

The best-known method for finding a solution of (1) is the classical Newton's method, which generates a sequence of iterates $\{x_k\}$ from a given initial point $x_0$ via

x_{k+1} = x_k - (F'(x_k))^{-1} F(x_k),  k = 0, 1, 2, ....   (2)

The attractive features of this method are its rapid convergence and its ease of implementation. Nevertheless, Newton's method requires computing the matrix of first-order derivatives of the system at every iteration. In practice, computing some of these derivatives is quite costly, and sometimes they are unavailable or cannot be obtained precisely; in such cases Newton's method cannot be applied directly. Numerous researchers have therefore made substantial efforts to eliminate the well-known shortcomings of Newton's method for solving systems of nonlinear equations, particularly large-scale systems (see, e.g., [1, 2]). Notwithstanding, most of these modifications still share some of the shortfalls of their Newton counterpart. For example, Broyden's method and the Chord Newton method each need to store an $n \times n$ matrix, and their floating-point cost is $O(n^2)$ per iteration. To tackle these disadvantages, a diagonal quasi-Newton method was suggested by Leong et al. [3], who showed that their updating formula is significantly cheaper than Newton's method and some of its variants.
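To make the cost comparison concrete, the following is a minimal sketch (ours, not from the paper; the function names and tolerances are illustrative assumptions) of iteration (2) in Python. The dense linear solve is the $O(n^3)$ step, and storing the Jacobian is the $O(n^2)$ memory cost, that the diagonal updating studied below avoids.

```python
import numpy as np

def newton(F, J, x0, tol=1e-8, max_iter=200):
    """Classical Newton iteration x_{k+1} = x_k - J(x_k)^{-1} F(x_k) for F(x) = 0.

    Each step solves an n-by-n linear system (O(n^3) flops) and requires the
    full Jacobian J(x_k) (O(n^2) storage)."""
    x = np.asarray(x0, dtype=float)
    for k in range(max_iter):
        Fx = F(x)
        if np.linalg.norm(Fx) <= tol:
            return x, k
        x = x - np.linalg.solve(J(x), Fx)   # solve J(x_k) d = F(x_k)
    return x, max_iter

# Example: the separable system f_i(x) = x_i^2 - 1 (cf. Problem 1 in Section 4).
n = 50
F = lambda x: x**2 - 1.0
J = lambda x: np.diag(2.0 * x)
x_star, iters = newton(F, J, 5.0 * np.ones(n))
```

From $x_0 = (5, \ldots, 5)$ this takes about seven iterations, consistent with the NM iteration counts reported for Problem 1 in the tables of Section 4.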
Based on this fact, it is appealing to present an approach that further improves the diagonal Jacobian approximation while reducing the computational cost, the floating-point operations, and the number of iterations. This is the idea behind this paper. The aim is to further improve the performance of diagonal updating by modifying the quasi-Cauchy relation so that it carries some additional information from the functions. The paper is organized as follows. In the next section, we present the details of the proposed method. Convergence results are presented in Section 3, numerical results are reported in Section 4, and conclusions are made in Section 5.

2. Derivation Process

This section presents a new diagonal quasi-Newton-like method for solving large-scale systems of nonlinear equations. A quasi-Newton method is an iterative method that generates a sequence of points $\{x_k\}$ from a given initial guess $x_0$ via

x_{k+1} = x_k - \alpha_k B_k F(x_k),  k = 0, 1, 2, ...,   (3)

where $\alpha_k$ is a step length and $B_k$ is an approximation to the Jacobian inverse, updated at each iteration. With $s_k = x_{k+1} - x_k$ and $y_k = F(x_{k+1}) - F(x_k)$, the updated matrix $B_{k+1}$ is chosen so that it satisfies the secant equation, that is,

B_{k+1} y_k = s_k.   (4)

It is clear that the only Jacobian information we have is contained in $y_k$, and this information is only approximate. To this end, we incorporate additional information from $s_k$ and $F_k$ into $y_k$ in order to obtain a better approximation to the Jacobian. We consider the modification of $y_k$ presented by Li and Fukushima [4]:

\tilde{y}_k = y_k + v_k \|F_k\| s_k,   (5)

where $v_k = 1 + \max\{-s_k^T y_k / \|s_k\|^2, 0\}$.

Our aim here is to build a square matrix $B$, using a diagonal updating scheme, which approximates the Jacobian inverse; we let $B_{k+1}$ satisfy the quasi-Cauchy equation, that is,

\tilde{y}_k^T s_k = \tilde{y}_k^T B_{k+1} \tilde{y}_k.   (6)

In addition, the deviation between $B_{k+1}$ and $B_k$ is minimized in the Frobenius norm; hence, in the following theorem, we state the resulting update formula for $B_k$.

Theorem 1. Let $B_{k+1}$ be the diagonal update of a diagonal matrix $B_k$, and denote the deviation between $B_k$ and $B_{k+1}$ by $\Psi_k = B_{k+1} - B_k$. Suppose that $\tilde{y}_k \neq 0$, where $\tilde{y}_k$ is defined by (5). Consider the following problem:

min (1/2) \|\Psi_k\|_F^2   s.t.   \tilde{y}_k^T (B_k + \Psi_k) \tilde{y}_k = \tilde{y}_k^T s_k,   (7)

where $\|\cdot\|_F$ denotes the Frobenius norm. Then the optimal solution of (7) is given by

\Psi_k = \frac{\tilde{y}_k^T s_k - \tilde{y}_k^T B_k \tilde{y}_k}{\mathrm{tr}(G_k^2)} G_k,   (8)

where $G_k = \mathrm{diag}((\tilde{y}_k^{(1)})^2, (\tilde{y}_k^{(2)})^2, \ldots, (\tilde{y}_k^{(n)})^2)$, $\sum_{i=1}^{n} (\tilde{y}_k^{(i)})^4 = \mathrm{tr}(G_k^2)$, and $\mathrm{tr}$ is the trace operator.

Proof. Consider the Lagrangian function of (7):

L(\Psi_k, \lambda) = \frac{1}{2}\|\Psi_k\|_F^2 + \lambda\,(\tilde{y}_k^T \Psi_k \tilde{y}_k - \tilde{y}_k^T s_k + \tilde{y}_k^T B_k \tilde{y}_k),   (9)

where $\lambda$ is the corresponding Lagrange multiplier. Differentiating $L$ with respect to each diagonal component $\Psi_k^{(1)}, \Psi_k^{(2)}, \ldots, \Psi_k^{(n)}$ and setting the derivatives equal to zero, we obtain

\Psi_k^{(i)} = -\lambda (\tilde{y}_k^{(i)})^2  for every i = 1, 2, ..., n.   (10)

Multiplying both sides of (10) by $(\tilde{y}_k^{(i)})^2$ and summing over $i$ gives

\sum_{i=1}^{n} (\tilde{y}_k^{(i)})^2 \Psi_k^{(i)} = -\lambda \sum_{i=1}^{n} (\tilde{y}_k^{(i)})^4.   (11)

Differentiating $L$ with respect to $\lambda$, and since $\tilde{y}_k^T \Psi_k \tilde{y}_k = \sum_{i=1}^{n} (\tilde{y}_k^{(i)})^2 \Psi_k^{(i)}$, we have

\sum_{i=1}^{n} (\tilde{y}_k^{(i)})^2 \Psi_k^{(i)} = \tilde{y}_k^T s_k - \tilde{y}_k^T B_k \tilde{y}_k.   (12)

Equating (11) and (12) and substituting the resulting multiplier into (10), we finally have

\Psi_k^{(i)} = \frac{\tilde{y}_k^T s_k - \tilde{y}_k^T B_k \tilde{y}_k}{\sum_{i=1}^{n} (\tilde{y}_k^{(i)})^4} (\tilde{y}_k^{(i)})^2,  \forall i = 1, 2, ..., n.   (13)

Since $B_k^{(i)}$ is the $i$th diagonal component of $B_k$ and $\tilde{y}_k^{(i)}$ is the $i$th component of the vector $\tilde{y}_k$, we have $G_k = \mathrm{diag}((\tilde{y}_k^{(1)})^2, \ldots, (\tilde{y}_k^{(n)})^2)$ and $\sum_{i=1}^{n} (\tilde{y}_k^{(i)})^4 = \mathrm{tr}(G_k^2)$. We can therefore rewrite (13) in matrix form as

\Psi_k = \frac{\tilde{y}_k^T s_k - \tilde{y}_k^T B_k \tilde{y}_k}{\mathrm{tr}(G_k^2)} G_k,   (14)

which completes the proof.

Hence, the best possible updating formula for the diagonal matrix $B_{k+1}$ is given by

B_{k+1} = B_k + \frac{\tilde{y}_k^T s_k - \tilde{y}_k^T B_k \tilde{y}_k}{\mathrm{tr}(G_k^2)} G_k.   (15)
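Because (15) touches only the diagonal, both the storage and the arithmetic of the update are $O(n)$. The following Python sketch (ours, for illustration; the helper names are not from the paper, and the diagonal of $B_k$ is stored as a vector) implements the modified vector (5) and the update (15):

```python
import numpy as np

def modified_y(s, y, Fk):
    """Li-Fukushima modification (5): y~ = y + v * ||F_k|| * s,
    with v = 1 + max(-s^T y / ||s||^2, 0)."""
    v = 1.0 + max(-np.dot(s, y) / np.dot(s, s), 0.0)
    return y + v * np.linalg.norm(Fk) * s

def diagonal_update(B_diag, s, y_tilde):
    """Update (15): B_{k+1} = B_k + ((y~^T s - y~^T B_k y~) / tr(G_k^2)) G_k,
    where G_k = diag((y~^(i))^2) is kept as the vector g. Cost: O(n)."""
    g = y_tilde ** 2                           # diagonal of G_k
    trace_G2 = np.dot(g, g)                    # tr(G_k^2) = sum_i (y~^(i))^4
    num = np.dot(y_tilde, s) - np.dot(y_tilde * B_diag, y_tilde)
    return B_diag + (num / trace_G2) * g
```

One can verify the quasi-Cauchy condition (6) directly: for B_new = diagonal_update(B, s, y_t), the products np.dot(y_t, s) and np.dot(y_t * B_new, y_t) agree up to rounding error.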
Now we can describe the algorithm for our proposed method as follows.

Algorithm IDJA

Step 1. Choose an initial guess $x_0$, $\sigma \in (0, 1)$, $\gamma > 1$, $B_0 = I_n$, $\alpha_0 > 0$, and let $k := 0$.

Step 2. Compute $F(x_k)$; if $\|F(x_k)\| \le 10^{-8}$, stop.

Step 3. Compute $d_k = -B_k F(x_k)$.

Step 4. If $\|F(x_k + \alpha_k d_k)\| \le \sigma \|F(x_k)\|$, retain $\alpha_k$ and go to Step 5. Otherwise set $\alpha_k := \alpha_k / 2$ and repeat Step 4.

Step 5. If $\|F(x_k + \alpha_k d_k) - F(x_k)\| \ge \|F(x_k + \alpha_k d_k)\| - \|F(x_k)\|$, retain $\alpha_k$ and go to Step 6. Otherwise set $\alpha_k := \gamma \alpha_k$ and repeat Step 5.

Step 6. Let $x_{k+1} = x_k + \alpha_k d_k$.

Step 7. If $\|x_{k+1} - x_k\|^2 + \|F(x_k)\|^2 \le 10^{-8}$, stop; otherwise go to Step 8.

Step 8. If $\|\Delta F_k\|^2 \ge \epsilon_1$, where $\epsilon_1 = 10^{-4}$, compute $B_{k+1}$ as defined by (15); otherwise set $B_{k+1} = B_k$.

Step 9. Set $k := k + 1$ and go to Step 2.
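Taken together, Steps 1-9 amount to only a few lines of code. The sketch below is our illustrative Python rendering, not the authors' MATLAB implementation; it reuses modified_y and diagonal_update from the previous sketch, and the default values of sigma, gamma, and alpha0 are our own choices within the ranges Step 1 allows:

```python
import numpy as np

def idja(F, x0, sigma=0.5, gamma=2.0, alpha0=1.0,
         eps=1e-8, eps1=1e-4, max_iter=200):
    """Sketch of Algorithm IDJA. The diagonal of B_k is stored as a vector,
    so each iteration costs O(n) flops plus function evaluations."""
    x = np.asarray(x0, dtype=float)
    B = np.ones_like(x)                        # Step 1: B_0 = I_n (its diagonal)
    alpha = alpha0
    for k in range(max_iter):
        Fx = F(x)
        nFx = np.linalg.norm(Fx)
        if nFx <= eps:                         # Step 2
            return x, k
        d = -B * Fx                            # Step 3: d_k = -B_k F(x_k)
        while np.linalg.norm(F(x + alpha * d)) > sigma * nFx and alpha > 1e-12:
            alpha /= 2.0                       # Step 4: halve the step length
        # Step 5's test holds automatically (triangle inequality), so alpha is retained.
        x_new = x + alpha * d                  # Step 6
        s = x_new - x
        if np.dot(s, s) + nFx**2 <= eps:       # Step 7
            return x_new, k + 1
        y = F(x_new) - Fx                      # Delta F_k
        if np.dot(y, y) >= eps1:               # Step 8: update only if ||Delta F_k||^2 >= eps1
            y_t = modified_y(s, y, Fx)         # (5)
            B = diagonal_update(B, s, y_t)     # (15)
        x = x_new                              # Step 9
    return x, max_iter
```

Note that the test in Step 5 is always satisfied by the triangle inequality, so this sketch never rescales $\alpha$ by $\gamma$; the halving loop of Step 4 does all the step-length control.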
3. Convergence Result

This section presents a local convergence result for the IDJA method. To analyze convergence, we make the following assumptions on the nonlinear system $F$.

Assumption 2. (i) $F$ is differentiable in an open convex set $E$ in $\mathbb{R}^n$.
(ii) There exists $x^* \in E$ such that $F(x^*) = 0$, and $F'(x)$ is continuous for all $x$.
(iii) $F'$ satisfies a Lipschitz condition of order one; that is, there exists a positive constant $L$ such that

\|F'(x) - F'(y)\| \le L \|x - y\|   (16)

for all $x, y \in \mathbb{R}^n$.
(iv) There exist constants $c_1 \le c_2$ such that $c_1 \|z\|^2 \le z^T F'(x) z \le c_2 \|z\|^2$ for all $x \in E$ and $z \in \mathbb{R}^n$.

We can state the following result on the boundedness of $\{\|\Psi_k\|_F\}$ by assuming, without loss of generality, that the updating formula (15) is always used.

Theorem 3. Suppose that $\{x_k\}$ is generated by Algorithm IDJA, where $B_k$ is defined by (15), and that Assumption 2 holds. Then there exist $\beta > 0$, $\delta > 0$, $\alpha > 0$, and $\gamma > 0$ such that if $x_0 \in E$ and $B_0$ satisfies $\|I - B_0 F'(x^*)\|_F < \delta$, then for all $x_k \in E$

\|I - B_k F'(x^*)\|_F < \delta_k   (17)

for some constant $\delta_k > 0$, $k \ge 0$.

Proof. Since $\|B_{k+1}\|_F = \|B_k + \Psi_k\|_F$, it follows that

\|B_{k+1}\|_F \le \|B_k\|_F + \|\Psi_k\|_F.   (18)

For $k = 0$, assuming $B_0 = I$, we have

|\Psi_0^{(i)}| = \left| \frac{\tilde{y}_0^T s_0 - \tilde{y}_0^T B_0 \tilde{y}_0}{\mathrm{tr}(G_0^2)} (\tilde{y}_0^{(i)})^2 \right| \le \frac{|\tilde{y}_0^T s_0 - \tilde{y}_0^T B_0 \tilde{y}_0|}{\mathrm{tr}(G_0^2)} (\tilde{y}_0^{(\max)})^2,   (19)

where $(\tilde{y}_0^{(\max)})^2$ is the largest element among $(\tilde{y}_0^{(i)})^2$, $i = 1, 2, \ldots, n$. After multiplying (19) by $(\tilde{y}_0^{(\max)})^2 / (\tilde{y}_0^{(\max)})^2$ and substituting $\mathrm{tr}(G_0^2) = \sum_{i=1}^{n} (\tilde{y}_0^{(i)})^4$, we have

|\Psi_0^{(i)}| \le \frac{|\tilde{y}_0^T s_0 - \tilde{y}_0^T B_0 \tilde{y}_0|}{(\tilde{y}_0^{(\max)})^2} \cdot \frac{(\tilde{y}_0^{(\max)})^4}{\sum_{i=1}^{n} (\tilde{y}_0^{(i)})^4}.   (20)

Since $(\tilde{y}_0^{(\max)})^4 / \sum_{i=1}^{n} (\tilde{y}_0^{(i)})^4 \le 1$, (20) turns into

|\Psi_0^{(i)}| \le \frac{|\tilde{y}_0^T F'(x)\,\tilde{y}_0 - \tilde{y}_0^T B_0 \tilde{y}_0|}{(\tilde{y}_0^{(\max)})^2}.   (21)

From Assumption 2 and $B_0 = I$, (21) becomes

|\Psi_0^{(i)}| \le \frac{|c - 1|\,(\tilde{y}_0^T \tilde{y}_0)}{(\tilde{y}_0^{(\max)})^2},   (22)

where $c = \max\{|c_1|, |c_2|\}$. Since $(\tilde{y}_0^{(i)})^2 \le (\tilde{y}_0^{(\max)})^2$ for $i = 1, \ldots, n$, it follows that

|\Psi_0^{(i)}| \le \frac{n\,|c - 1|\,(\tilde{y}_0^{(\max)})^2}{(\tilde{y}_0^{(\max)})^2} = n\,|c - 1|.   (23)

Hence, we obtain

\|\Psi_0\|_F \le n^{3/2} |c - 1|.   (24)

Setting $\alpha = n^{3/2} |c - 1|$, we have

\|\Psi_0\|_F \le \alpha.   (25)

From the fact that $\|B_0\|_F = \sqrt{n}$, it follows that

\|B_1\|_F \le \beta,   (26)

where $\beta = \sqrt{n} + \alpha > 0$. Therefore, if we assume that $\|I - B_0 F'(x^*)\|_F < \delta$, then

\|I - B_1 F'(x^*)\|_F = \|I - (B_0 + \Psi_0) F'(x^*)\|_F \le \|I - B_0 F'(x^*)\|_F + \|\Psi_0 F'(x^*)\|_F \le \|I - B_0 F'(x^*)\|_F + \|\Psi_0\|_F \|F'(x^*)\|_F;   (27)

therefore, $\|I - B_1 F'(x^*)\|_F < \delta + \alpha\gamma = \delta_1$, with $\gamma = \|F'(x^*)\|_F$. Hence, by induction, $\|I - B_k F'(x^*)\|_F < \delta_k$ for all $k$.

4. Numerical Results

In this section, the performance of the IDJA method is compared with Broyden's method (BM), the Chord Newton method (CN), Newton's method (NM), and the DQNM method proposed in [3]. The codes are written in MATLAB 7.4 in double precision; the stopping condition used is

\|s_k\| + \|F(x_k)\| \le 10^{-8}.   (28)

The identity matrix is chosen as the initial approximate Jacobian inverse. We further design the codes to terminate whenever one of the following happens:

(i) the number of iterations reaches 200 without a point $x_k$ satisfying (28) being obtained;
(ii) the CPU time in seconds reaches 200;
(iii) there is insufficient memory to initiate the run.
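For illustration, a driver of the following shape (ours, not the authors' MATLAB code) produces NI/CPU measurements of the kind reported in Tables 1-5 below, here for Problem 1 from the list that follows, solved with the idja sketch given earlier:

```python
import time
import numpy as np

def run_benchmark(solver, F, x0):
    """Time one solve and report (iterations, CPU seconds, final residual)."""
    t0 = time.process_time()
    x, ni = solver(F, x0)
    cpu = time.process_time() - t0
    return ni, cpu, np.linalg.norm(F(x))

for n in (50, 100, 250, 500, 1000):
    F = lambda x: x**2 - 1.0              # Problem 1: f_i(x) = x_i^2 - 1
    ni, cpu, res = run_benchmark(idja, F, 5.0 * np.ones(n))
    print(f"n = {n:5d}  NI = {ni:3d}  CPU = {cpu:.3f}s  ||F|| = {res:.1e}")
```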
The performance of the methods is compared in terms of the number of iterations (NI) and the CPU time in seconds (CPU). In the following, some details of the benchmark test problems are presented.

Problem 1. Sparse 1 function of Shin et al. [5]:

f_i(x) = x_i^2 - 1,  i = 1, 2, ..., n,  x_0 = (5, 5, ..., 5)^T.   (29)

Problem 2. Trigonometric function of Spedicato [6]:

f_i(x) = n - \sum_{j=1}^{n} \cos x_j + i(1 - \cos x_i) - \sin x_i,  i = 1, ..., n,  x_0 = (1/n, 1/n, ..., 1/n)^T.   (30)

Problem 3. System of n nonlinear equations:

f_i(x) = \sin(1 - x_i) \sum_{j=1}^{n} x_j^2 + 2x_{i-1} - 3x_{i-2} - \frac{1}{2}x_{i-4} + \frac{1}{2}x_{i-5} - x_i \ln(9 + x_i) - 9\exp(1 - x_i) + 2,  i = 1, 2, ..., n,  x_0 = (0, 0, ..., 0)^T.   (31)

Problem 4. System of n nonlinear equations:

f_i(x) = x_i^2 - 4\exp(\sin(4 - x_i^2)) + \sin(4 - x_i) + 2n - \sum_{j=1}^{n} x_j \cos x_j,  i = 1, 2, ..., n,  x_0 = (2.8, 2.8, ..., 2.8)^T.   (32)

Problem 5. System of n nonlinear equations:

f_i(x) = \Bigl(\sum_{j=1}^{n} x_j\Bigr)(x_i - 2) + (\cos x_i - 2) - 1,  i = 1, ..., n,  x_0 = (1, 1, 1, ..., 1)^T.   (33)

Problem 6. System of n nonlinear equations:

f_i(x) = \sum_{j=1}^{n} x_j^2 - (\sin x_i - x_i^4 + \sin x_i^2),  i = 1, ..., n,  x_0 = (0.5, 0.5, ..., 0.5)^T.   (34)

Problem 7. System of n nonlinear equations:

f_i(x) = \Bigl(\sum_{j=1}^{n} x_j^2 - 1\Bigr)(x_i - 1) + x_i \sum_{j=1}^{n} (x_j - 1) - n + 1,
f_i(x) = \Bigl(\sum_{j=1}^{n} x_j^2 - 1\Bigr)(x_i - 1) + (\cos x_i - 1) - 1,  i = 2, ..., n - 1,  x_0 = (0.5, 0.5, 0.5, ...)^T.   (35)

Problem 8. System of n nonlinear equations:

f_i(x) = (1 - x_i^2) + x_i + x_i^2 x_{i-2} x_{i-1} x_i - 2,  i = 1, 2, ..., n,  x_0 = (0.5, 0.5, ..., 0.5)^T.   (36)

Table 1: Numerical results of NM, CN, BM, DQNM, and IDJA methods.

prob  Dim |  NM NI  NM CPU |  CN NI  CN CPU |  BM NI  BM CPU | DQNM NI  DQNM CPU | IDJA NI  IDJA CPU
1     50  |   7     0.046  |   55    0.031  |   15    0.031  |   14     0.016    |   2      0.011
2     50  |   9     0.078  |   344   0.062  |   15    0.031  |   15     0.031    |   13     0.031
3     50  |   10    0.062  |   —     —      |   —     —      |   20     0.016    |   10     0.016
4     50  |   —     —      |   —     —      |   —     —      |   19     0.031    |   9      0.031
5     50  |   12    0.078  |   —     —      |   42    0.031  |   16     0.016    |   8      0.015
6     50  |   8     0.064  |   —     —      |   16    0.032  |   14     0.031    |   7      0.014
7     50  |   8     0.094  |   —     —      |   —     —      |   25     0.031    |   14     0.010
8     50  |   11    0.064  |   —     —      |   11    0.0312 |   11     0.016    |   9      0.016

Table 2: Numerical results of NM, CN, BM, DQNM, and IDJA methods.

prob  Dim |  NM NI  NM CPU |  CN NI  CN CPU |  BM NI  BM CPU | DQNM NI  DQNM CPU | IDJA NI  IDJA CPU
1     100 |   7     0.156  |   98    0.094  |   15    0.043  |   14     0.016    |   2      0.011
2     100 |   10    0.187  |   —     —      |   18    0.062  |   16     0.032    |   13     0.032
3     100 |   7     0.203  |   —     —      |   24    0.140  |   15     0.031    |   7      0.015
4     100 |   —     —      |   —     —      |   —     —      |   13     0.031    |   10     0.030
5     100 |   13    0.265  |   —     —      |   53    0.109  |   17     0.031    |   12     0.031
6     100 |   8     0.203  |   —     —      |   16    0.047  |   14     0.031    |   7      0.017
7     100 |   8     0.185  |   —     —      |   —     —      |   26     0.031    |   16     0.030
8     100 |   11    0.234  |   —     —      |   11    0.094  |   11     0.032    |   10     0.016

Table 3: Numerical results of NM, CN, BM, DQNM, and IDJA methods.

prob  Dim |  NM NI  NM CPU |  CN NI  CN CPU |  BM NI  BM CPU | DQNM NI  DQNM CPU | IDJA NI  IDJA CPU
1     250 |   7     0.359  |   100   0.109  |   15    0.101  |   14     0.034    |   2      0.032
2     250 |   11    0.640  |   —     —      |   21    0.218  |   18     0.032    |   8      0.031
3     250 |   8     0.499  |   —     —      |   29    0.250  |   16     0.016    |   9      0.016
4     250 |   —     —      |   —     —      |   —     —      |   15     0.031    |   10     0.032
5     250 |   14    0.827  |   —     —      |   —     —      |   19     0.031    |   8      0.016
6     250 |   8     0.686  |   —     —      |   24    0.250  |   14     0.031    |   10     0.031
7     250 |   8     0.499  |   —     —      |   —     —      |   27     0.031    |   14     0.031
8     250 |   11    0.484  |   —     —      |   11    0.125  |   11     0.031    |   10     0.016

Table 4: Numerical results of NM, CN, BM, DQNM, and IDJA methods.

prob  Dim |  NM NI  NM CPU |  CN NI  CN CPU |  BM NI  BM CPU | DQNM NI  DQNM CPU | IDJA NI  IDJA CPU
1     500 |   7     0.796  |   101   0.702  |   15    0.671  |   14     0.016    |   2      0.011
2     500 |   13    1.997  |   —     —      |   23    0.972  |   19     0.031    |   9      0.032
3     500 |   7     1.4352 |   —     —      |   —     —      |   17     0.031    |   9      0.031
4     500 |   —     —      |   —     —      |   —     —      |   12     0.030    |   10     0.031
5     500 |   15    2.449  |   —     —      |   —     —      |   21     0.031    |   9      0.031
6     500 |   8     2.184  |   —     —      |   23    0.998  |   14     0.032    |   10     0.045
7     500 |   8     1.498  |   —     —      |   —     —      |   32     0.047    |   15     0.047
8     500 |   11    1.451  |   —     —      |   11    0.515  |   11     0.031    |   9      0.031

Table 5: Numerical results of NM, CN, BM, DQNM, and IDJA methods.

prob  Dim  |  NM NI  NM CPU |  CN NI  CN CPU |  BM NI  BM CPU | DQNM NI  DQNM CPU | IDJA NI  IDJA CPU
1     1000 |   7     2.730  |   103   3.167  |   38    9.438  |   14     0.016    |   2      0.011
2     1000 |   —     —      |   —     —      |   31    7.722  |   20     0.032    |   8      0.043
3     1000 |   9     5.819  |   —     —      |   —     —      |   17     0.031    |   9      0.031
4     1000 |   —     —      |   —     —      |   —     —      |   11     0.064    |   10     0.064
5     1000 |   16    8.705  |   —     —      |   —     —      |   22     0.031    |   10     0.031
6     1000 |   8     6.474  |   —     —      |   —     —      |   14     0.062    |   11     0.061
7     1000 |   8     4.321  |   —     —      |   —     —      |   38     0.062    |   31     0.047
8     1000 |   11    4.882  |   —     —      |   11    2.418  |   11     0.032    |   10     0.031

The numerical results presented in Tables 1, 2, 3, 4, and 5 clearly demonstrate that the proposed method (IDJA) shows good improvement over NM, CN, BM, and DQNM, respectively. In addition, it is worth mentioning that the IDJA method does not require more storage locations than classic diagonal quasi-Newton methods. One can observe from the tables that the proposed method (IDJA) requires fewer iterations than the DQNM method and less time to solve the problems than the other Newton-like methods, while keeping the memory requirement and the floating-point cost per iteration to only $O(n)$.
5. Conclusions

In this paper, we have presented an improved diagonal quasi-Newton update via a new quasi-Cauchy condition for solving large-scale systems of nonlinear equations (IDJA). The Jacobian inverse approximation is derived from the quasi-Cauchy condition. The aim has been to further improve the diagonal Jacobian approximation by modifying the quasi-Cauchy relation so that it carries some additional information from the functions. It is also worth mentioning that the method significantly reduces the execution time (CPU time) compared with the NM, CN, BM, and DQNM methods, while maintaining good accuracy of the numerical solution to some extent. Another fact that makes the IDJA method appealing is that, throughout the numerical experiments, it never failed to converge. Hence, we can claim that our method (IDJA) is a good alternative to Newton-type methods for solving large-scale systems of nonlinear equations.

References

[1] J. E. Dennis, Jr. and R. B. Schnabel, Numerical Methods for Unconstrained Optimization and Nonlinear Equations, Prentice Hall, Englewood Cliffs, NJ, USA, 1983.
[2] C. T. Kelley, Iterative Methods for Linear and Nonlinear Equations, vol. 16, SIAM, Philadelphia, Pa, USA, 1995.
[3] W. J. Leong, M. A. Hassan, and M. Waziri Yusuf, "A matrix-free quasi-Newton method for solving large-scale nonlinear systems," Computers & Mathematics with Applications, vol. 62, no. 5, pp. 2354–2363, 2011.
[4] D.-H. Li and M. Fukushima, "A modified BFGS method and its global convergence in nonconvex minimization," Journal of Computational and Applied Mathematics, vol. 129, no. 1-2, pp. 15–35, 2001.
[5] B.-C. Shin, M. T. Darvishi, and C.-H. Kim, "A comparison of the Newton–Krylov method with high order Newton-like methods to solve nonlinear systems," Applied Mathematics and Computation, vol. 217, no. 7, pp. 3190–3198, 2010.
[6] E. Spedicato, "Computational experience with quasi-Newton algorithms for minimization problems of moderately large size," Tech. Rep. CISE-N-175 3, pp. 10–41, 1975.