Hindawi Publishing Corporation
Abstract and Applied Analysis
Volume 2013, Article ID 540951, 7 pages
http://dx.doi.org/10.1155/2013/540951

Research Article
Novel Global Exponential Stability Criterion for Recurrent Neural Networks with Time-Varying Delay

Wenguang Luo,1,2 Xiuling Wang,1 Yonghua Liu,1 and Hongli Lan3
1 School of Electrical and Information Engineering, Guangxi University of Science and Technology, Liuzhou 545006, China
2 Guangxi Key Laboratory of Automobile Components and Vehicle Technology, Guangxi University of Science and Technology, Liuzhou 545006, China
3 School of Computer Engineering, Guangxi University of Science and Technology, Liuzhou 545006, China

Correspondence should be addressed to Wenguang Luo; wenguang.luo@163.com

Received 15 October 2012; Accepted 3 January 2013

Academic Editor: Massimo Furi

Copyright © 2013 Wenguang Luo et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

The problem of global exponential stability for recurrent neural networks with time-varying delay is investigated. By dividing the time delay interval $[0, \tau(t)]$ into $K+1$ dynamical subintervals, a new Lyapunov-Krasovskii functional is introduced; then, a novel linear-matrix-inequality- (LMI-) based delay-dependent exponential stability criterion is derived, which is less conservative than some previous results (Zhang et al., 2005; He et al., 2006; Wu et al., 2008). An illustrative example is finally provided to show the effectiveness and the advantage of the proposed result.

1. Introduction

In the past decades, recurrent neural networks (RNNs) have been extensively investigated because of their applications, such as combinatorial optimization [1, 2], associative memories [3–5], signal processing [6], image processing [7], pattern recognition [8, 9], and so forth. Some of these applications require that the equilibrium points of the designed networks be stable. Meanwhile, in the hardware implementation of RNNs, time delay commonly occurs due to the finite switching speed of amplifiers or the finite speed of signal processing, and its existence is often a source of oscillation, divergence, and instability in neural networks. Therefore, the stability of RNNs with time delay has received much attention, and a large number of results have been proposed to ensure the asymptotic or exponential stability of delayed neural networks [10–21].

So far, the main means of handling the stability of delayed neural networks has been the free-weighting-matrix approach [22–26]. Recently, a novel method was proposed for Hopfield neural networks with constant delay in [27], which introduces more free-weighting matrices by dividing the constant delay interval $[0, h]$ equally into several subintervals. Furthermore, by dividing the time delay interval $[0, \tau(t)]$ into $K+1$ dynamical subintervals, Zhang et al. [28] generalized this method to study the global asymptotic stability of RNNs with time-varying delay. This method mainly exploits the information contained in the delay interval $[0, \tau(t)]$, which brings more degrees of freedom and can reduce conservatism.

Motivated by the above discussion, in this paper we consider the global exponential stability of RNNs with time-varying delay.
By dividing the time delay interval $[0, \tau(t)]$ into $K+1$ dynamical subintervals, we construct a new Lyapunov-Krasovskii functional (LKF) and derive a novel sufficient condition, which is presented in terms of a linear matrix inequality (LMI). The obtained stability result is less conservative than some existing results [22, 23, 29]. Finally, an illustrative example is given to verify the effectiveness and the advantage of the proposed result.

The rest of this paper is organized as follows. In Section 2, the problem of exponential stability analysis for RNNs with time-varying delay is formulated. Section 3 presents our main results. An illustrative example is provided in Section 4. The conclusion is stated in Section 5.

Throughout this paper, $C = [c_{ij}]_{n\times n}$ denotes an $n \times n$ real matrix. $C^T$, $\|C\|$, $\lambda_{\min}(C)$, and $\lambda_{\max}(C)$ represent the transpose, the Euclidean norm, the minimum eigenvalue, and the maximum eigenvalue of the matrix $C$, respectively. $C > 0$ ($C < 0$) denotes that $C$ is a positive (negative) definite matrix. $I$ denotes an identity matrix with compatible dimensions, and $*$ denotes the symmetric terms in a symmetric matrix.

2. Problem Formulation

Consider the following RNN with time-varying delay:
$$\dot{z}(t) = -Dz(t) + Af(z(t)) + Bf(z(t-\tau(t))) + J, \qquad z(t) = \phi(t), \quad \forall t \in [-h, 0], \quad (1)$$
where $z(\cdot) = [z_1(\cdot), z_2(\cdot), \ldots, z_n(\cdot)]^T$ is the state vector, $f(z(\cdot)) = [f_1(z_1(\cdot)), f_2(z_2(\cdot)), \ldots, f_n(z_n(\cdot))]^T$ denotes the neuron activation function, and $J$ is a bias value vector. $D = \mathrm{diag}(d_i)$ is a diagonal matrix with $d_i > 0$, $i = 1, 2, \ldots, n$. $A$ and $B$ are the connection weight matrix and the delayed connection weight matrix, respectively. The initial condition $\phi(t)$ is a continuous and differentiable vector-valued function for $t \in [-h, 0]$. The time delay $\tau(t)$ is a differentiable function that satisfies $0 \le \tau(t) \le h$ and $\dot{\tau}(t) \le \mu$, where $h > 0$ and $\mu \ge 0$.

To obtain the proposed result, we assume that each $f_i$ is bounded and satisfies
$$l_i^- \le \frac{f_i(u) - f_i(v)}{u - v} \le l_i^+, \quad \forall u, v \in \mathbb{R},\ u \ne v, \quad (2)$$
where $l_i^-$ and $l_i^+$ are constants, $i = 1, 2, \ldots, n$.

Let $g_i(z_i(t)) = f_i(z_i(t)) - l_i^- z_i(t)$ and $l_i = l_i^+ - l_i^-$; then system (1) is equivalent to the following form:
$$\dot{z}(t) = -(D - AL_0)z(t) + Ag(z(t)) + Bg(z(t-\tau(t))) + BL_0 z(t-\tau(t)) + J, \quad (3)$$
where $L_0 = \mathrm{diag}(l_1^-, l_2^-, \ldots, l_n^-)$, $L_1 = \mathrm{diag}(l_1^+, l_2^+, \ldots, l_n^+)$, and $L = \mathrm{diag}(l_1, l_2, \ldots, l_n) = L_1 - L_0$. Noting assumption (2), we have
$$0 \le \frac{g_i(u) - g_i(v)}{u - v} \le l_i, \quad \forall u, v \in \mathbb{R},\ u \ne v,\quad l_i = l_i^+ - l_i^- \ (i = 1, 2, \ldots, n). \quad (4)$$

Assume that system (3) has an equilibrium point $z^* = (z_1^*, z_2^*, \ldots, z_n^*)^T$. Let $x(t) = (x_1(t), x_2(t), \ldots, x_n(t))^T$ with $x_i(t) = z_i(t) - z_i^*$; then system (3) is transformed into the following form:
$$\dot{x}(t) = -(D - AL_0)x(t) + Ag(x(t)) + Bg(x(t-\tau(t))) + BL_0 x(t-\tau(t)), \quad (5)$$
where $g_i(x_i(t)) = g_i(x_i(t) + z_i^*) - g_i(z_i^*)$ with $g_i(0) = 0$ and $g_i(x_i(t-\tau(t))) = g_i(x_i(t-\tau(t)) + z_i^*) - g_i(z_i^*)$.

In the derivation of the main results, we need the following definition and lemmas.

Definition 1 (global exponential stability). System (5) is said to be globally exponentially stable with convergence rate $\varepsilon$ if there exist constants $\varepsilon > 0$ and $M \ge 1$ such that
$$\|x(t)\| \le M\sigma e^{-\varepsilon t}, \quad \forall t \ge 0, \quad (6)$$
where $\sigma = \sup_{-h \le \theta \le 0} \|x(\theta)\|$.
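To make the sector reformulation in (2)–(4) concrete, the short Python sketch below (not part of the original paper) checks numerically that the shifted function $g(u) = f(u) - l^- u$ stays in the sector $[0, l^+ - l^-]$ for a sample scalar activation; the activation $f(u) = \tanh(u) + 0.2u$ and the bounds $l^- = 0.2$, $l^+ = 1.2$ are illustrative assumptions only.

```python
import numpy as np

# Sector reformulation of (2)-(4): if f satisfies
#   l_minus <= (f(u)-f(v))/(u-v) <= l_plus,
# then g(u) = f(u) - l_minus*u satisfies 0 <= (g(u)-g(v))/(u-v) <= l_plus - l_minus.
# The activation below is an arbitrary example, not taken from the paper.
l_minus, l_plus = 0.2, 1.2
f = lambda u: np.tanh(u) + 0.2 * u
g = lambda u: f(u) - l_minus * u

rng = np.random.default_rng(0)
u, v = rng.uniform(-5, 5, 10_000), rng.uniform(-5, 5, 10_000)
mask = np.abs(u - v) > 1e-9                       # avoid division by (near-)zero
slopes = (g(u[mask]) - g(v[mask])) / (u[mask] - v[mask])
print(slopes.min() >= -1e-9, slopes.max() <= (l_plus - l_minus) + 1e-9)
```

Both checks should print True; this is exactly the property (4) that is used later to bound the nonlinear terms in the stability analysis.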
Lemma 2. Let $x(t) \in \mathbb{R}^n$ be a vector-valued function with first-order continuous-derivative entries. Then the following integral inequality holds for any matrix $X = X^T > 0$, any matrices $M_1$, $M_2$, and any two scalar functions $h_1(t)$, $h_2(t)$ with $h_2(t) \ge h_1(t) \ge 0$:
$$-\int_{t-h_2(t)}^{t-h_1(t)} \dot{x}^T(s) X \dot{x}(s)\, ds \le \zeta^T(t) \begin{bmatrix} M_1^T + M_1 & -M_1^T + M_2 \\ * & -M_2^T - M_2 \end{bmatrix} \zeta(t) + (h_2(t) - h_1(t))\, \zeta^T(t) H^T X^{-1} H \zeta(t), \quad (7)$$
where $H = [M_1, M_2] \in \mathbb{R}^{n \times 2n}$ and $\zeta(t) = [x^T(t - h_1(t)), x^T(t - h_2(t))]^T$.

Proof. The proof can be completed in a manner similar to [30].

Lemma 3 (see [31]). For any two vectors $x, y \in \mathbb{R}^n$, any matrix $A$, any positive definite symmetric matrix $B$ with the same dimensions, and any two positive constants $m$, $n$, the following inequality holds:
$$-n\, x^T B x + 2m\, x^T A y \le m^2\, y^T A^T (nB)^{-1} A y. \quad (8)$$
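As a quick sanity check (not part of the original paper), the following Python snippet evaluates both sides of inequality (8) for randomly generated data; the dimensions and scalars are arbitrary choices.

```python
import numpy as np

# Numerical spot-check of Lemma 3, inequality (8), on random data.
rng = np.random.default_rng(1)
dim = 4
A = rng.standard_normal((dim, dim))                 # any matrix A
B = rng.standard_normal((dim, dim))
B = B @ B.T + dim * np.eye(dim)                     # symmetric positive definite B
x, y = rng.standard_normal(dim), rng.standard_normal(dim)
m, n = 0.7, 2.3                                     # arbitrary positive constants

lhs = -n * x @ B @ x + 2 * m * x @ A @ y
rhs = m**2 * y @ A.T @ np.linalg.solve(n * B, A @ y)
print(lhs <= rhs + 1e-9)                            # should print True
```

The inequality follows from completing the square in $\sqrt{n}\,B^{1/2}x - (m/\sqrt{n})\,B^{-1/2}Ay$, so the check passes for any admissible data, not just this sample.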
3. Main Results

In this section, we consider the delay interval $[0, \tau(t)]$, which is divided into $K+1$ dynamical subintervals, namely, $[0, \delta_1\tau(t)], \ldots, [\delta_K\tau(t), \tau(t)]$, where $\delta_1 < \cdots < \delta_K$. That is to say, there is a parameter sequence $(\delta_1, \ldots, \delta_K)$ that satisfies the following conditions:
$$0 < \delta_1\tau(t) < \delta_2\tau(t) < \cdots < \delta_K\tau(t) < \tau(t), \qquad 0 \le \delta_i\dot{\tau}(t) \le \delta_i\mu, \quad (9)$$
where $\delta_i \in (0, 1)$, $i = 1, 2, \ldots, K$, and $K$ is a positive integer. Utilizing the useful information of the $K+1$ dynamical subintervals, a novel LKF is constructed, and then a new LMI-based delay-dependent sufficient condition can be proposed to guarantee the global exponential stability of RNNs with time-varying delay.

Theorem 4. The equilibrium point of system (5) with $\mu < 1$ is globally exponentially stable with convergence rate $\varepsilon > 0$ if there exist parameters $\delta_i$ satisfying $0 < \delta_1 < \cdots < \delta_K < 1$ ($i = 1, 2, \ldots, K$, $K$ a positive integer), positive definite symmetric matrices $P$, $R_1$, $R_2$, $R_3$, $Q_j$ ($j = 1, 2, \ldots, K$), $W$, positive definite diagonal matrices $\Lambda$, $S_1$, $S_2$, $T_1$, $T_2$, and matrices $N_j$ ($j = 1, 2, \ldots, 2K+4$) such that the following LMI has a feasible solution:
$$\begin{bmatrix}
\Sigma & \Psi_0 & \Psi_1 & \cdots & \Psi_K \\
* & -\dfrac{W}{\delta_1} & 0 & \cdots & 0 \\
* & * & -\dfrac{W}{\delta_2 - \delta_1} & \cdots & 0 \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
* & * & * & \cdots & -\dfrac{W}{1 - \delta_K}
\end{bmatrix} < 0, \quad (10)$$
where $\Psi_j = h e^{-\varepsilon h} H_j^T$, $H_j = [N_{2j+1}, N_{2j+2}] \in \mathbb{R}^{n \times 2n}$, $j = 0, 1, 2, \ldots, K$, and $\Sigma = [\Sigma_{i,j}]_{(K+5)\times(K+5)}$ is the symmetric block matrix whose nonzero blocks in the upper triangle are
$\Sigma_{1,1} = 2\varepsilon P - P(D-AL_0) - (D-AL_0)^T P + R_1 + R_3 + h^2 (D-AL_0)^T W (D-AL_0) + \sum_{j=1}^{K} Q_j + L S_1 L + h e^{-2\varepsilon h}(N_1^T + N_1)$,
$\Sigma_{1,2} = h e^{-2\varepsilon h}(-N_1^T + N_2)$,
$\Sigma_{1,K+2} = P B L_0 - h^2 (D-AL_0)^T W B L_0$,
$\Sigma_{1,K+4} = P A + 2\varepsilon\Lambda - (D-AL_0)^T \Lambda - h^2 (D-AL_0)^T W A + L S_2$,
$\Sigma_{1,K+5} = P B - h^2 (D-AL_0)^T W B$,
$\Sigma_{j+1,j+1} = -e^{-2\varepsilon\delta_j h} Q_j (1 - \delta_j\mu) + h e^{-2\varepsilon h}(-N_{2j}^T - N_{2j}) + h e^{-2\varepsilon h}(N_{2j+1}^T + N_{2j+1})$, $\ j = 1, \ldots, K$,
$\Sigma_{j+1,j+2} = h e^{-2\varepsilon h}(-N_{2j+1}^T + N_{2j+2})$, $\ j = 1, \ldots, K$,
$\Sigma_{K+2,K+2} = -e^{-2\varepsilon h} R_1 (1 - \mu) + h^2 L_0 B^T W B L_0 + L T_1 L + h e^{-2\varepsilon h}(-N_{2K+2}^T - N_{2K+2}) + h e^{-2\varepsilon h}(N_{2K+3}^T + N_{2K+3})$,
$\Sigma_{K+2,K+3} = h e^{-2\varepsilon h}(-N_{2K+3}^T + N_{2K+4})$,
$\Sigma_{K+2,K+4} = L_0 B^T \Lambda + h^2 L_0 B^T W A$,
$\Sigma_{K+2,K+5} = h^2 L_0 B^T W B + L T_2$,
$\Sigma_{K+3,K+3} = -e^{-2\varepsilon h} R_3 + h e^{-2\varepsilon h}(-N_{2K+4}^T - N_{2K+4})$,
$\Sigma_{K+4,K+4} = \Lambda A + A^T \Lambda + R_2 + h^2 A^T W A - S_1 - 2S_2$,
$\Sigma_{K+4,K+5} = \Lambda B + h^2 A^T W B$,
$\Sigma_{K+5,K+5} = -e^{-2\varepsilon h} R_2 (1 - \mu) + h^2 B^T W B - T_1 - 2T_2$, $\quad (11)$
with all unlisted blocks of $\Sigma$ equal to zero, and
$$L_0 = \mathrm{diag}(l_i^-), \quad L_1 = \mathrm{diag}(l_i^+), \quad L = \mathrm{diag}(l_i), \quad D = \mathrm{diag}(d_i), \quad A = [a_{ij}]_{n\times n}, \quad B = [b_{ij}]_{n\times n}. \quad (12)$$
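Conditions like (10) are checked numerically with a semidefinite-programming solver. The sketch below (not from the original paper) shows the general workflow in Python with cvxpy; to keep it short it poses a much simpler classical delay-independent LMI for a linear delay system $\dot{x}(t) = A_0 x(t) + A_1 x(t-\tau)$ rather than the full condition of Theorem 4, and the matrices $A_0$, $A_1$ are hypothetical placeholders.

```python
import cvxpy as cp
import numpy as np

# Workflow illustration: pose an LMI feasibility problem and solve it with an
# SDP solver.  The condition encoded here is the classical delay-independent
# criterion  [[A0'P + P A0 + Q, P A1], [A1'P, -Q]] < 0  with P > 0, Q > 0,
# NOT the LMI (10) of Theorem 4.  A0 and A1 are hypothetical data.
A0 = np.array([[-2.0, 0.5], [0.5, -3.5]])
A1 = np.array([[-0.5, 0.5], [0.5, 0.5]])
n = A0.shape[0]

P = cp.Variable((n, n), symmetric=True)
Q = cp.Variable((n, n), symmetric=True)
lmi = cp.bmat([[A0.T @ P + P @ A0 + Q, P @ A1],
               [A1.T @ P,              -Q     ]])
lmi = 0.5 * (lmi + lmi.T)                       # enforce exact symmetry
eps = 1e-6
constraints = [P >> eps * np.eye(n),
               Q >> eps * np.eye(n),
               lmi << -eps * np.eye(2 * n)]
prob = cp.Problem(cp.Minimize(0), constraints)
prob.solve(solver=cp.SCS)
print("delay-independent LMI feasible:", prob.status == cp.OPTIMAL)
```

For Theorem 4 itself one would assemble the block matrix (10)-(11) in the same way, with $\delta_i$, $h$, $\mu$, and $\varepsilon$ fixed, and sweep $\varepsilon$ (or $h$) to obtain results like Tables 1 and 2 below.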
Proof. Construct the following Lyapunov-Krasovskii functional candidate:
$$V(x(t)) = V_1(x(t)) + V_2(x(t)) + V_3(x(t)) + V_4(x(t)) + V_5(x(t)), \quad (13)$$
where
$$V_1(x(t)) = e^{2\varepsilon t} x^T(t) P x(t), \qquad V_2(x(t)) = 2 e^{2\varepsilon t} \sum_{i=1}^{n} \lambda_i \int_0^{x_i(t)} g_i(s)\, ds,$$
$$V_3(x(t)) = \int_{t-\tau(t)}^{t} e^{2\varepsilon s} x^T(s) R_1 x(s)\, ds + \int_{t-\tau(t)}^{t} e^{2\varepsilon s} g^T(x(s)) R_2 g(x(s))\, ds + \int_{t-h}^{t} e^{2\varepsilon s} x^T(s) R_3 x(s)\, ds,$$
$$V_4(x(t)) = \sum_{j=1}^{K} \int_{t-\delta_j\tau(t)}^{t} e^{2\varepsilon s} x^T(s) Q_j x(s)\, ds, \qquad V_5(x(t)) = h \int_{-h}^{0} \int_{t+\theta}^{t} e^{2\varepsilon s} \dot{x}^T(s) W \dot{x}(s)\, ds\, d\theta, \quad (14)$$
where $P = P^T > 0$, $\Lambda = \mathrm{diag}(\lambda_i) > 0$, $R_1 = R_1^T > 0$, $R_2 = R_2^T > 0$, $R_3 = R_3^T > 0$, $Q_j = Q_j^T > 0$ ($j = 1, 2, \ldots, K$), and $W = W^T > 0$.

Let the parameters $\delta_0 = 0$ and $\delta_{K+1} = 1$. Calculating the time derivatives of $V_i$ ($i = 1, 2, 3, 4, 5$) along the trajectories of system (5) yields
$$\dot{V}_1(x(t)) = 2\varepsilon e^{2\varepsilon t} x^T(t) P x(t) + 2 e^{2\varepsilon t} x^T(t) P \dot{x}(t),$$
$$\dot{V}_2(x(t)) = 4\varepsilon e^{2\varepsilon t} \sum_{i=1}^{n} \lambda_i \int_0^{x_i(t)} g_i(s)\, ds + 2 e^{2\varepsilon t} g^T(x(t)) \Lambda \dot{x}(t),$$
$$\dot{V}_3(x(t)) = e^{2\varepsilon t} \Big[ x^T(t) R_1 x(t) - e^{-2\varepsilon\tau(t)} x^T(t-\tau(t)) R_1 x(t-\tau(t)) (1 - \dot{\tau}(t)) + g^T(x(t)) R_2 g(x(t)) - e^{-2\varepsilon\tau(t)} g^T(x(t-\tau(t))) R_2 g(x(t-\tau(t))) (1 - \dot{\tau}(t)) + x^T(t) R_3 x(t) - e^{-2\varepsilon h} x^T(t-h) R_3 x(t-h) \Big],$$
$$\dot{V}_4(x(t)) = \sum_{j=1}^{K} e^{2\varepsilon t} \Big[ x^T(t) Q_j x(t) - e^{-2\varepsilon\delta_j\tau(t)} x^T(t-\delta_j\tau(t)) Q_j x(t-\delta_j\tau(t)) (1 - \delta_j\dot{\tau}(t)) \Big],$$
$$\dot{V}_5(x(t)) = h^2 e^{2\varepsilon t} \dot{x}^T(t) W \dot{x}(t) - h \int_{t-h}^{t} e^{2\varepsilon s} \dot{x}^T(s) W \dot{x}(s)\, ds. \quad (15)$$

It is clear that the following inequality is true:
$$\dot{x}^T(t) W \dot{x}(t) \le \big[ -(D-AL_0) x(t) + A g(x(t)) + B g(x(t-\tau(t))) + B L_0 x(t-\tau(t)) \big]^T W \big[ -(D-AL_0) x(t) + A g(x(t)) + B g(x(t-\tau(t))) + B L_0 x(t-\tau(t)) \big]. \quad (16)$$

According to (4), for some diagonal matrices $S_1 > 0$, $S_2 > 0$, $T_1 > 0$, $T_2 > 0$, we have
$$g^T(x(t)) S_1 g(x(t)) \le x^T(t) L S_1 L x(t), \qquad g^T(x(t)) S_2 g(x(t)) \le x^T(t) L S_2 g(x(t)),$$
$$g^T(x(t-\tau(t))) T_1 g(x(t-\tau(t))) \le x^T(t-\tau(t)) L T_1 L x(t-\tau(t)), \qquad g^T(x(t-\tau(t))) T_2 g(x(t-\tau(t))) \le x^T(t-\tau(t)) L T_2 g(x(t-\tau(t))). \quad (17)$$

Using Lemma 2, for $j = 0, 1, \ldots, K$ we have
$$-\int_{t-\delta_{j+1}\tau(t)}^{t-\delta_j\tau(t)} \dot{x}^T(s) W \dot{x}(s)\, ds \le \zeta_j^T(t) \begin{bmatrix} N_{2j+1}^T + N_{2j+1} & -N_{2j+1}^T + N_{2j+2} \\ * & -N_{2j+2}^T - N_{2j+2} \end{bmatrix} \zeta_j(t) + (\delta_{j+1} - \delta_j)\tau(t)\, \zeta_j^T(t) H_j^T W^{-1} H_j \zeta_j(t),$$
and, for the remaining piece of the delay interval,
$$-\int_{t-h}^{t-\tau(t)} \dot{x}^T(s) W \dot{x}(s)\, ds \le \zeta_{K+1}^T(t) \begin{bmatrix} N_{2K+3}^T + N_{2K+3} & -N_{2K+3}^T + N_{2K+4} \\ * & -N_{2K+4}^T - N_{2K+4} \end{bmatrix} \zeta_{K+1}(t), \quad (18)$$
where $H_j = [N_{2j+1}, N_{2j+2}] \in \mathbb{R}^{n \times 2n}$, $\zeta_j(t) = [x^T(t-\delta_j\tau(t)), x^T(t-\delta_{j+1}\tau(t))]^T$ for $j = 0, 1, \ldots, K$, and $\zeta_{K+1}(t) = [x^T(t-\tau(t)), x^T(t-h)]^T$.

From (15)–(17) and using (18), we finally get
$$\dot{V}(x(t)) \le e^{2\varepsilon t}\, \Omega, \quad (19)$$
where $\Omega = \zeta^T(t)\Sigma\zeta(t) + \Omega_0 + \Omega_1 + \cdots + \Omega_K$, $\Sigma$ is defined in (11),
$$\Omega_j = (\delta_{j+1} - \delta_j)\, h^2 e^{-2\varepsilon h}\, \zeta_j^T(t) H_j^T W^{-1} H_j \zeta_j(t), \quad j = 0, 1, \ldots, K, \quad (20)$$
and
$$\zeta(t) = \big[ x^T(t),\ x^T(t-\delta_1\tau(t)),\ \ldots,\ x^T(t-\delta_K\tau(t)),\ x^T(t-\tau(t)),\ x^T(t-h),\ g^T(x(t)),\ g^T(x(t-\tau(t))) \big]^T. \quad (21)$$

Obviously, if $\Omega < 0$, then $\dot{V}(x(t)) < 0$ for any $\zeta(t) \ne 0$; by the Schur complement, the LMI (10) guarantees $\Omega < 0$, and hence $V(x(t)) \le V(x(0))$ for all $t \ge 0$. Moreover,
$$V(x(0)) = x^T(0) P x(0) + 2 \sum_{i=1}^{n} \lambda_i \int_0^{x_i(0)} g_i(s)\, ds + \int_{-\tau(0)}^{0} e^{2\varepsilon s} x^T(s) R_1 x(s)\, ds + \int_{-\tau(0)}^{0} e^{2\varepsilon s} g^T(x(s)) R_2 g(x(s))\, ds + \int_{-h}^{0} e^{2\varepsilon s} x^T(s) R_3 x(s)\, ds + \sum_{j=1}^{K} \int_{-\delta_j\tau(0)}^{0} e^{2\varepsilon s} x^T(s) Q_j x(s)\, ds + h \int_{-h}^{0} \int_{\theta}^{0} e^{2\varepsilon s} \dot{x}^T(s) W \dot{x}(s)\, ds\, d\theta.$$

On the other hand, the following inequality holds:
$$\dot{x}^T(s)\dot{x}(s) \le \big[ -(D-AL_0) x(s) + A g(x(s)) + B g(x(s-\tau(s))) + B L_0 x(s-\tau(s)) \big]^T \big[ -(D-AL_0) x(s) + A g(x(s)) + B g(x(s-\tau(s))) + B L_0 x(s-\tau(s)) \big]. \quad (22)$$
Combining this with Lemma 3, for $s \in [-h, 0]$ we have
$$\dot{x}^T(s)\dot{x}(s) \le 4 \big[ \lambda_{\max}\big((D-AL_0)^T(D-AL_0)\big) + \lambda_{\max}(L^2)\lambda_{\max}(A^T A) + \lambda_{\max}(L^2)\lambda_{\max}(B^T B) + \lambda_{\max}(L_0 B^T B L_0) \big] \|\phi\|^2, \quad (23)$$
where $\|\phi\| = \sup_{-h \le s \le 0} \|\phi(s)\|$.

Thus,
$$V(x(0)) \le \Delta \|\phi\|^2, \quad (24)$$
where
$$\Delta = \lambda_{\max}(P) + \lambda_{\max}(L^2) + \lambda_{\max}(\Lambda^T\Lambda) + h\lambda_{\max}(R_1) + h\lambda_{\max}(R_2)\lambda_{\max}(L^2) + h\lambda_{\max}(R_3) + h\sum_{j=1}^{K}\delta_j\lambda_{\max}(Q_j) + 2h^3\lambda_{\max}(W)\big[ \lambda_{\max}\big((D-AL_0)^T(D-AL_0)\big) + \lambda_{\max}(L^2)\lambda_{\max}(A^T A) + \lambda_{\max}(L^2)\lambda_{\max}(B^T B) + \lambda_{\max}(L_0 B^T B L_0) \big]. \quad (25)$$

On the other hand, we have
$$V(x(t)) \ge e^{2\varepsilon t}\lambda_{\min}(P)\|x(t)\|^2. \quad (26)$$

Therefore,
$$\|x(t)\| \le e^{-\varepsilon t}\sqrt{\frac{\Delta}{\lambda_{\min}(P)}}\,\|\phi\|. \quad (27)$$

Thus, according to Definition 1, we can conclude that the equilibrium point $x^*$ of system (5) is globally exponentially stable. This completes the proof.
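Since inequality (7) of Lemma 2 does much of the work in the proof above, a small numerical check (again not part of the original paper) may be helpful; the cubic test trajectory, the dimensions, and the delays $h_1$, $h_2$ below are arbitrary illustrative choices.

```python
import numpy as np

# Numerical spot-check of the integral inequality (7) (Lemma 2) for a smooth
# cubic test trajectory x(s) and random X > 0, M1, M2.  All data are arbitrary.
rng = np.random.default_rng(2)
n = 3
X = rng.standard_normal((n, n)); X = X @ X.T + n * np.eye(n)    # X = X^T > 0
M1, M2 = rng.standard_normal((n, n)), rng.standard_normal((n, n))
H = np.hstack([M1, M2])
C = rng.standard_normal((n, 4))                   # coefficients of a cubic x(s)

x  = lambda s: C @ np.array([1.0, s, s**2, s**3])
xd = lambda s: C @ np.array([0.0, 1.0, 2.0 * s, 3.0 * s**2])    # dx/ds

t, h1, h2 = 0.0, 0.3, 1.1
s = np.linspace(t - h2, t - h1, 20001)
vals = np.array([xd(si) @ X @ xd(si) for si in s])
lhs = -np.sum(0.5 * (vals[1:] + vals[:-1]) * np.diff(s))        # -integral (trapezoid)

zeta = np.concatenate([x(t - h1), x(t - h2)])
blk = np.block([[M1.T + M1,        -M1.T + M2],
                [(-M1.T + M2).T,   -(M2.T + M2)]])
rhs = zeta @ blk @ zeta + (h2 - h1) * (zeta @ H.T @ np.linalg.solve(X, H @ zeta))
print(lhs <= rhs + 1e-6)                          # should print True
```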
Remark 5. Different from the results in [22, 23, 29], we divide the time delay interval $[0, \tau(t)]$ into $K+1$ dynamical subintervals, and a novel Lyapunov-Krasovskii functional is introduced. This brings more degrees of freedom to ensure global exponential stability. Therefore, Theorem 4 is less conservative than some previous results.

Remark 6. In Theorem 4, by setting $R_1 = R_2 = Q_j = 0$ ($j = 1, 2, \ldots, K$) and arguing as in the proof of Theorem 4, we can derive a criterion that guarantees the global exponential stability of RNNs with time-varying delay when $\dot{\tau}(t)$ is unknown or $\tau(t)$ is not differentiable.

4. Illustrating Example

In this section, an illustrating example is given to verify the effectiveness and advantage of the criterion proposed in this paper.

Example 7. Consider system (5) with the following parameters:
$$D = \begin{bmatrix} 2 & 0 \\ 0 & 3.5 \end{bmatrix}, \quad A = \begin{bmatrix} -1 & 0.5 \\ 0.5 & -1 \end{bmatrix}, \quad B = \begin{bmatrix} -0.5 & 0.5 \\ 0.5 & 0.5 \end{bmatrix}, \quad L = \begin{bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix}. \quad (28)$$

At first, we suppose that the time delay interval $[0, \tau(t)]$ is divided into two ($K = 1$) subintervals, with $\delta_1 = 0.1$. For the upper bound $h = 1$, the exponential convergence rates for various $\mu$ obtained from Theorem 4 and from [22, 23, 29] are listed in Table 1. In addition, for the exponential convergence rate $\varepsilon = 0.8$, the upper bounds of $h$ for various $\mu$ from Theorem 4 and from [22, 23, 29] are listed in Table 2. Thus, from Tables 1 and 2, we can see that the result in this paper is more effective and less conservative than those in [22, 23, 29]. Figure 1 shows the state response of Example 7 with constant delay $\tau = 1$ when the initial value is $[0.8, -0.6]^T$.

Table 1: Allowable exponential convergence rate $\varepsilon$ for variable $\mu$ and $h = 1$.

  $\mu$    | [22]  | [23]   | [29]   | Theorem 4 ($K = 1$)
  0        | 0.25  | 1.15   | 1.15   | 1.15
  0.5      | Fail  | 0.7538 | 0.8643 | 0.9105
  0.9      | Fail  | 0.6106 | 0.8344 | 0.9105
  Unknown  | Fail  | 0.3391 | 0.8169 | 0.8608

Table 2: Allowable upper bound of $h$ for variable $\mu$ and $\varepsilon = 0.8$.

  $\mu$    | [22]  | [23]   | [29]   | Theorem 4 ($K = 1$)
  0.5      | Fail  | 1.2606 | 1.2787 | 1.3022
  0.8      | Fail  | 0.9442 | 1.0819 | 1.1646
  Unknown  | Fail  | 0.8310 | 1.0366 | 1.0817

[Figure 1: State response curves ($x_1$ and $x_2$ versus time (s)) with $h = 1$ and $\mu = 0$ for Example 7.]
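A state response like the one reported in Figure 1 can be reproduced approximately with the short simulation below (not part of the original paper). Since the example only specifies $L = \mathrm{diag}(1, 0)$, the concrete activations used here, $f_1 = \tanh$ and $f_2 \equiv 0$ (one sector-consistent choice with $L_0 = 0$), together with the Euler step size and the constant delay $\tau = 1$, are illustrative assumptions.

```python
import numpy as np

# Euler simulation of system (1) with the Example 7 matrices, constant delay
# tau = 1, J = 0, and initial value [0.8, -0.6].  The activations f1 = tanh,
# f2 = 0 are one choice consistent with L = diag(1, 0); they are assumptions,
# not data from the paper.
D = np.diag([2.0, 3.5])
A = np.array([[-1.0, 0.5], [0.5, -1.0]])
B = np.array([[-0.5, 0.5], [0.5, 0.5]])
f = lambda z: np.array([np.tanh(z[0]), 0.0])

dt, tau, T = 1e-3, 1.0, 10.0
d_steps = int(round(tau / dt))
hist = np.tile([0.8, -0.6], (d_steps + 1, 1))    # constant initial function
traj = []
for _ in range(int(T / dt)):
    z_now, z_del = hist[-1], hist[0]
    dz = -D @ z_now + A @ f(z_now) + B @ f(z_del)
    hist = np.vstack([hist[1:], z_now + dt * dz]) # slide the delay buffer
    traj.append(hist[-1])
traj = np.array(traj)

# Both states should decay toward the equilibrium at the origin.
print("||z(10)|| =", np.linalg.norm(traj[-1]))
```

Plotting `traj` against time with matplotlib should give decaying curves of the same qualitative shape as Figure 1.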
5. Conclusion

In this paper, we have considered the global exponential stability of RNNs with time-varying delay. By dividing the time delay interval $[0, \tau(t)]$ into $K+1$ dynamical subintervals, a novel Lyapunov-Krasovskii functional is introduced, and a less conservative LMI-based delay-dependent stability criterion is derived based on Lyapunov stability theory. Furthermore, an illustrating example is given to show the effectiveness of the proposed result.

Acknowledgments

This work is supported by Guangxi Science Foundation Grant (0832067), the Foundation Grant of Guangxi Key Laboratory of Automobile Components and Vehicle Technology (13-A03-01), and the Opening Project of Guangxi Key Laboratory of Automobile Components and Vehicle Technology (2012KFZD03).

References

[1] Y. H. Chen and S. C. Fang, "Solving convex programming problems with equality constraints by neural networks," Computers and Mathematics with Applications, vol. 36, no. 7, pp. 41–68, 1998.
[2] Y. H. Chen and S. C. Fang, "Neurocomputing with time delay analysis for solving convex quadratic programming problems," IEEE Transactions on Neural Networks, vol. 11, no. 1, pp. 230–240, 2000.
[3] J. A. Farrell and A. N. Michel, "A synthesis procedure for Hopfield's continuous-time associative memory," IEEE Transactions on Circuits and Systems, vol. 37, no. 7, pp. 877–884, 1990.
[4] A. N. Michel, J. A. Farrell, and H. F. Sun, "Analysis and synthesis techniques for Hopfield type synchronous discrete time neural networks with application to associative memory," IEEE Transactions on Circuits and Systems, vol. 37, no. 11, pp. 1356–1366, 1990.
[5] T. Nishikawa, Y. C. Lai, and F. C. Hoppensteadt, "Capacity of oscillatory associative-memory networks with error-free retrieval," Physical Review Letters, vol. 92, no. 10, Article ID 108101, 4 pages, 2004.
[6] L. O. Chua and L. Yang, "Cellular neural networks: applications," IEEE Transactions on Circuits and Systems, vol. 35, no. 10, pp. 1273–1290, 1988.
[7] R. Cancelliere, M. Gai, and A. Slavova, "Application of polynomial cellular neural networks in diagnosis of astrometric chromaticity," Applied Mathematical Modelling, vol. 34, no. 12, pp. 4243–4252, 2010.
[8] S. S. Young, P. D. Scott, and N. M. Nasrabadi, "Object recognition using multilayer Hopfield neural network," IEEE Transactions on Image Processing, vol. 6, no. 3, pp. 357–372, 1997.
[9] L. Chen, W. Xue, and N. Tokuda, "Classification of 2-dimensional array patterns: assembling many small neural networks is better than using a large one," Neural Networks, vol. 23, no. 6, pp. 770–781, 2010.
[10] L. Olien and J. Bélair, "Bifurcations, stability, and monotonicity properties of a delayed neural network model," Physica D, vol. 102, no. 3-4, pp. 349–363, 1997.
[11] S. Arik, "An analysis of global asymptotic stability of delayed cellular neural networks," IEEE Transactions on Neural Networks, vol. 13, no. 5, pp. 1239–1242, 2002.
[12] J. Cao and D. Zhou, "Stability analysis of delayed cellular neural networks," Neural Networks, vol. 11, no. 9, pp. 1601–1605, 1998.
[13] J. Wei and S. Ruan, "Stability and bifurcation in a neural network model with two delays," Physica D, vol. 130, no. 3-4, pp. 255–272, 1999.
[14] X. Liao, G. Chen, and E. N. Sanchez, "Delay-dependent exponential stability analysis of delayed neural networks: an LMI approach," Neural Networks, vol. 15, no. 7, pp. 855–866, 2002.
[15] H. Huang, J. Cao, and J. Wang, "Global exponential stability and periodic solutions of recurrent neural networks with delays," Physics Letters A, vol. 298, no. 5-6, pp. 393–404, 2002.
[16] B. Chen and J. Wang, "Global exponential periodicity and global exponential stability of a class of recurrent neural networks with various activation functions and time-varying delays," Neural Networks, vol. 20, no. 10, pp. 1067–1080, 2007.
[17] H. Jiang and Z. Teng, "Global exponential stability of cellular neural networks with time-varying coefficients and delays," Neural Networks, vol. 17, no. 10, pp. 1415–1425, 2004.
[18] S. Senan and S. Arik, "New results for exponential stability of delayed cellular neural networks," IEEE Transactions on Circuits and Systems II, vol. 52, no. 3, pp. 154–158, 2005.
[19] Q. Zhang, X. Wei, and J. Xu, "An analysis on the global asymptotic stability for neural networks with variable delays," Physics Letters A, vol. 328, no. 2-3, pp. 163–169, 2004.
[20] H. Zhao and G. Wang, "Delay-independent exponential stability of recurrent neural networks," Physics Letters A, vol. 333, no. 5-6, pp. 399–407, 2004.
[21] T. L. Liao, J. J. Yan, C. J. Cheng, and C. C. Hwang, "Globally exponential stability condition of a class of neural networks with time-varying delays," Physics Letters A, vol. 339, no. 3-5, pp. 333–342, 2005.
[22] Q. Zhang, X. Wei, and J. Xu, "Delay-dependent exponential stability of cellular neural networks with time-varying delays," Chaos, Solitons & Fractals, vol. 23, no. 4, pp. 1363–1369, 2005.
[23] Y. He, M. Wu, and J. H. She, "Delay-dependent exponential stability of delayed neural networks with time-varying delay," IEEE Transactions on Circuits and Systems II, vol. 53, no. 7, pp. 553–557, 2006.
[24] Y. He, G. P. Liu, D. Rees, and M. Wu, "Stability analysis for neural networks with time-varying interval delay," IEEE Transactions on Neural Networks, vol. 18, no. 6, pp. 1850–1854, 2007.
[25] T. Li, L. Guo, C. Sun, and C. Lin, "Further results on delay-dependent stability criteria of neural networks with time-varying delays," IEEE Transactions on Neural Networks, vol. 19, no. 4, pp. 726–730, 2008.
[26] C. D. Zheng, H. G. Zhang, and Z. S. Wang, "Improved robust stability criteria for delayed cellular neural networks via the LMI approach," IEEE Transactions on Circuits and Systems II, vol. 57, no. 1, pp. 41–45, 2010.
[27] S. Mou, H. Gao, J. Lam, and W. Qiang, "A new criterion of delay-dependent asymptotic stability for Hopfield neural networks with time delay," IEEE Transactions on Neural Networks, vol. 19, no. 3, pp. 532–535, 2008.
[28] H. Zhang, Z. Liu, G. B. Huang, and Z. Wang, "Novel weighting-delay-based stability criteria for recurrent neural networks with time-varying delay," IEEE Transactions on Neural Networks, vol. 21, no. 1, pp. 91–106, 2010.
[29] M. Wu, F. Liu, P. Shi, Y. He, and R. Yokoyama, "Exponential stability analysis for neural networks with time-varying delay," IEEE Transactions on Systems, Man, and Cybernetics B, vol. 38, no. 4, pp. 1152–1156, 2008.
[30] X. M. Zhang, M. Wu, J. H. She, and Y. He, "Delay-dependent stabilization of linear systems with time-varying state and input delays," Automatica, vol. 41, no. 8, pp. 1405–1412, 2005.
[31] H. Zhang, Z. Wang, and D. Liu, "Robust exponential stability of recurrent neural networks with multiple time-varying delays," IEEE Transactions on Circuits and Systems II, vol. 54, no. 8, pp. 730–734, 2007.