Hindawi Publishing Corporation, Abstract and Applied Analysis, Volume 2013, Article ID 270791, 9 pages, http://dx.doi.org/10.1155/2013/270791

Research Article

Asymptotic Behavior of Switched Stochastic Delayed Cellular Neural Networks via the Average Dwell Time Method

Hanfeng Kuang,1 Jinbo Liu,1 Xi Chen,2 Jie Mao,1 and Linjie He3

1 College of Mathematics and Computing Science, Changsha University of Science and Technology, Changsha, Hunan 410114, China
2 Hunan Provincial Center for Disease Control and Prevention, Changsha, Hunan 410005, China
3 School of Business, Hunan Normal University, Changsha, Hunan 410081, China

Correspondence should be addressed to Hanfeng Kuang; kuanghanfeng@126.com

Received 28 January 2013; Accepted 31 March 2013

Academic Editor: Zhichun Yang

Copyright © 2013 Hanfeng Kuang et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

The asymptotic behavior of a class of switched stochastic cellular neural networks (CNNs) with mixed delays (discrete time-varying delays and distributed time-varying delays) is investigated in this paper. Employing the average dwell time (ADT) approach, stochastic analysis techniques, and the linear matrix inequality (LMI) technique, some novel sufficient conditions on the asymptotic behavior (mean-square ultimate boundedness, the existence of an attractor, and mean-square exponential stability) are established. A numerical example is provided to illustrate the effectiveness of the proposed results.

1. Introduction

Since Chua and Yang's seminal work on cellular neural networks (CNNs) in 1988 [1, 2], CNNs have been successfully applied in various areas such as signal processing, pattern recognition, associative memory, and optimization problems (see, e.g., [3–5]).
From a practical point of view, in both biological and man-made neural networks, the processing of moving images and pattern recognition problems require the introduction of delays in the signals transmitted among the cells [6, 7]. After the wide use of discrete delays, distributed delays arose because neural networks usually have a spatial extent due to the presence of a multitude of parallel pathways with a variety of axon sizes and lengths. The mathematical model can be described by the following differential equations:

$$\dot{x}_i(t) = -d_i x_i(t) + \sum_{j=1}^{n} a_{ij} f_j(x_j(t)) + \sum_{j=1}^{n} b_{ij} f_j(x_j(t-\tau_j(t))) + \sum_{j=1}^{n} c_{ij} \int_{t-h_j(t)}^{t} f_j(x_j(s))\,\mathrm{d}s + J_i, \quad i = 1, \ldots, n, \tag{1}$$

where $t \ge 0$; $n \ge 2$ corresponds to the number of units in the neural network; $x_i(t)$ denotes the potential (or voltage) of cell $i$ at time $t$; $f_j(\cdot)$ denotes a nonlinear output function; $d_i > 0$ denotes the rate with which cell $i$ resets its potential to the resting state when isolated from other cells and external inputs; $a_{ij}$, $b_{ij}$, $c_{ij}$ denote the strengths of connectivity between cells $i$ and $j$; $J_i$ is the external input; and $\tau_j(t)$ and $h_j(t)$ correspond to the discrete time-varying delays and distributed time-varying delays, respectively.

Neural networks are nonlinear; in the real world, nonlinear problems are not exceptional but regular phenomena, and nonlinearity is intrinsic to matter and its development [8, 9]. Although discrete delays combined with distributed delays can usually provide a good approximation of the original model, most real models are affected by many external perturbations of great uncertainty. For instance, in electronic implementations, stochastic disturbances are mostly inevitable owing to thermal noise. As Haykin [10] points out, in real nervous systems synaptic transmission is a noisy process brought on by random fluctuations in the release of neurotransmitters and other probabilistic causes.
Consequently, noise is unavoidable and should be taken into consideration in modeling. Moreover, it is well recognized that a CNN can be stabilized or destabilized by certain stochastic inputs. It is therefore of significant importance to consider stochastic effects in delayed neural networks; one approach to incorporating such effects mathematically is to use probabilistic threshold models. However, the previous literature focuses almost exclusively on the stability of stochastic neural networks with delays [11–14]. Actually, the study of dynamical systems involves not only the stability property but also other dynamic behaviors, such as ultimate boundedness and the existence of an attractor, and there are very few results on these questions for stochastic neural networks [15–17]. Hence, discussing the asymptotic behavior of neural networks with mixed delays is valuable and meaningful.

On the other hand, neural networks often exhibit the special characteristic of network mode switching; that is, a neural network sometimes has finitely many modes that switch from one to another at different times according to a switching law generated by a switching logic. As an important class of hybrid systems, switched systems arise in many practical processes, and their analysis has drawn considerable attention owing to numerous applications in the control of mechanical systems, computer communications, the automotive industry, electric power systems, and many other fields [18–22]. Most recently, the stability analysis of switched neural systems has been further investigated, mainly based on Lyapunov functions [23, 24]. It is worth noting that the average dwell time (ADT) approach is an effective method for switched systems: it avoids the need for a common Lyapunov function and can be adopted to obtain less conservative stability conditions.
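As a minimal, hedged illustration of the dwell-time bookkeeping behind the ADT approach (the function names and the sample switching instants below are our own, not from the paper), one can count the switchings $N_\sigma(T_1, T_2)$ of a piecewise-constant signal and test the bound $N_\sigma(T_1, T_2) \le N_0 + (T_2 - T_1)/T_a$:

```python
# Hypothetical sketch: counting the switchings of a piecewise-constant
# switching signal and checking the average dwell time (ADT) bound
#   N_sigma(T1, T2) <= N0 + (T2 - T1) / T_a.
def num_switchings(switch_times, t1, t2):
    """Number of switching instants strictly inside (t1, t2)."""
    return sum(1 for t in switch_times if t1 < t < t2)

def satisfies_adt(switch_times, t1, t2, n0, t_a):
    """Check the ADT bound on the interval (t1, t2)."""
    return num_switchings(switch_times, t1, t2) <= n0 + (t2 - t1) / t_a

# Example: a signal that switches every 1.5 time units.
times = [1.5 * k for k in range(1, 10)]   # 1.5, 3.0, ..., 13.5
print(num_switchings(times, 0.0, 10.0))   # 6 switchings in (0, 10)
print(satisfies_adt(times, 0.0, 10.0, n0=1, t_a=1.3445))
```

A signal with dwell time 1.5 between consecutive switches clearly respects any ADT threshold $T_a \le 1.5$; the value 1.3445 is used here only because it reappears as $T_a^*$ in the numerical example of Section 4.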
For instance, based on the average dwell time method, stability has been discussed for uncertain switched Cohen-Grossberg neural networks with interval time-varying delay and distributed time-varying delay in [25]. In [26], the average dwell time method was utilized to obtain sufficient conditions for the exponential stability and the weighted $L_2$ gain of a class of switched systems. However, it is worth emphasizing that when the activation functions are unbounded, as in some special applications, the existence of an equilibrium point cannot be guaranteed [27]. In these circumstances, discussing the stability of an equilibrium point of switched neural networks becomes unreachable, which motivated us to consider ultimate boundedness and attractors for switched neural networks. Unfortunately, as far as we know, the asymptotic behavior of switched systems with mixed time delays has not been investigated yet, let alone that of switched stochastic systems. Such research is challenging and interesting, since it integrates switched hybrid systems with stochastic systems, and is thus theoretically and practically significant; the asymptotic behavior of switched stochastic neural networks with mixed delays deserves intensive study.

Motivated by the above analysis, the main purpose of this paper is to derive sufficient conditions for the asymptotic behavior (mean-square ultimate boundedness, the existence of an attractor, and mean-square exponential stability) of the switched stochastic system. This paper is organized as follows. In Section 2, the considered model of a switched stochastic CNN with mixed delays is presented, together with the necessary assumptions, definitions, and lemmas. In Section 3, mean-square ultimate boundedness and the attractor for the proposed model are studied.
A numerical example demonstrating the effectiveness of the theoretical results is arranged in Section 4, and we conclude this paper in Section 5.

2. Problem Formulation

In general, a stochastic cellular neural network with mixed delays can be described as follows:

$$\mathrm{d}x(t) = \Big[-Dx(t) + AF(x(t)) + BF(x(t-\tau(t))) + C\int_{t-h(t)}^{t} F(x(s))\,\mathrm{d}s + J\Big]\,\mathrm{d}t + G(x(t), x(t-\tau(t)))\,\mathrm{d}w(t), \tag{2}$$

where $x(t) = (x_1(t), \ldots, x_n(t))^T \in R^n$, $F(x(t)) = (f_1(x_1(t)), \ldots, f_n(x_n(t)))^T$, $D = \mathrm{diag}(d_1, \ldots, d_n)$, $A = (a_{ij})_{n\times n}$, $B = (b_{ij})_{n\times n}$, $C = (c_{ij})_{n\times n}$, $J = (J_1, \ldots, J_n)^T$, $\tau(t) = (\tau_1(t), \ldots, \tau_n(t))^T$, $h(t) = (h_1(t), \ldots, h_n(t))^T$, $G(\cdot,\cdot)$ is an $n \times n$ matrix-valued function, and $w(t) = (w_1(t), \ldots, w_n(t))^T$ is an $n$-dimensional Brownian motion defined on a complete probability space $(\Omega, \mathcal{F}, P)$ with a natural filtration $\{\mathcal{F}_t\}_{t\ge 0}$ (i.e., $\mathcal{F}_t = \sigma\{w(s) : 0 \le s \le t\}$).

By introducing a switching signal into system (2) and taking a set of neural networks as the individual subsystems, the switched system is obtained:

$$\mathrm{d}x(t) = \Big[-D_{\sigma(t)}x(t) + A_{\sigma(t)}F(x(t)) + B_{\sigma(t)}F(x(t-\tau(t))) + C_{\sigma(t)}\int_{t-h(t)}^{t} F(x(s))\,\mathrm{d}s + J\Big]\,\mathrm{d}t + G_{\sigma(t)}(x(t), x(t-\tau(t)))\,\mathrm{d}w(t), \tag{3}$$

where $\sigma(t) : [0, +\infty) \to \Sigma = \{1, 2, \ldots, N\}$ is the switching signal. At each time instant $t$, the index $\sigma(t) \in \Sigma$ (i.e., $\sigma(t) = i$) means that the $i$th subsystem is active.

For the convenience of discussion, some notation is introduced. $R^n$ denotes the $n$-dimensional Euclidean space. $P \le Q$ ($P < Q$) means that each pair of corresponding elements of $P$ and $Q$ satisfies the inequality "$\le$" ("$<$"); in particular, $P$ is called a positive (negative) matrix if $P > 0$ ($< 0$). $P^T$ denotes the transpose of a square matrix $P$, and the symbol "$*$" within a matrix represents the symmetric term of the matrix.
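As a hedged numerical illustration of system (2) (the matrices, delays, activation, and the noise map `G` below are illustrative assumptions, not the paper's example; `G` is taken diagonal for simplicity), an Euler-Maruyama scheme can approximate the distributed-delay integral by a Riemann sum over the stored history:

```python
import numpy as np

# Hypothetical Euler-Maruyama sketch for system (2); all numerical values
# here are illustrative assumptions, not taken from the paper.
def simulate(D, A, B, C, J, tau, h, G, x0, T=2.0, dt=1e-3, seed=0):
    rng = np.random.default_rng(seed)
    n = len(x0)
    ntau, nh = round(tau / dt), round(h / dt)  # delays measured in steps
    lag, steps = max(ntau, nh), round(T / dt)
    x = np.tile(np.asarray(x0, dtype=float), (lag + steps + 1, 1))
    F = np.tanh                                # activation satisfying (H2)
    for k in range(lag, lag + steps):
        xd = x[k - ntau]                       # discretely delayed state
        # Riemann-sum approximation of the distributed-delay integral
        integral = F(x[k - nh: k + 1]).sum(axis=0) * dt
        drift = -D @ x[k] + A @ F(x[k]) + B @ F(xd) + C @ integral + J
        dw = rng.normal(0.0, np.sqrt(dt), n)   # Brownian increments
        x[k + 1] = x[k] + drift * dt + G(x[k], xd) * dw
    return x[lag:]

# Tiny two-neuron run with constant delays and diagonal noise intensity.
D = np.diag([1.0, 1.0]); A = 0.1 * np.eye(2); B = 0.1 * np.eye(2)
C = 0.1 * np.eye(2); J = np.zeros(2)
G = lambda x, xd: 0.1 * (x + xd)               # linear growth, type (H3)
path = simulate(D, A, B, C, J, tau=0.25, h=0.3, G=G, x0=[0.5, -0.5])
print(path.shape)
```

With a stabilizing $-Dx$ term dominating the small couplings, the sample paths stay bounded, which is the qualitative behavior the mean-square ultimate boundedness results below formalize.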
$\lambda_{\min}(P)$ and $\lambda_{\max}(P)$ denote the minimum and maximum eigenvalues of a matrix $P$, and $I$ denotes the identity matrix. Let $\mathcal{C}([-\tau^*, 0], R^n)$ denote the Banach space of continuous functions mapping $[-\tau^*, 0]$ into $R^n$ with the topology of uniform convergence; for any $\varphi \in \mathcal{C}([-\tau^*, 0], R^n)$, define $\|\varphi\| = \max_{1\le i\le n} \sup_{t-\tau^* < s \le t} |\varphi_i(s)|$. The initial condition for system (3) is given in the form

$$x(t) = \varphi, \quad \varphi \in \mathcal{C}_{\mathcal{F}_0}([-\tau^*, 0], R^n), \tag{4}$$

where $\mathcal{C}_{\mathcal{F}_0}([-\tau^*, 0], R^n)$ is the family of all $\mathcal{F}_0$-measurable bounded $\mathcal{C}([-\tau^*, 0], R^n)$-valued random variables.

Throughout this paper, the following assumptions are assumed to hold.

$(H_1)$ The discrete time-varying delay $\tau(t)$ and the distributed time-varying delay $h(t)$ satisfy

$$0 \le \tau_i(t) \le \tau, \quad 0 \le h_i(t) \le h, \quad \tau^* = \max\{\tau, h\}, \quad 1 \le i \le n, \tag{5}$$

where $\tau$, $h$, $\tau^*$ are scalars.

$(H_2)$ There exist constants $m_i$ and $L_i$, $i = 1, 2, \ldots, n$, such that

$$m_i \le \frac{f_i(x) - f_i(y)}{x - y} \le L_i, \quad \forall x, y \in R,\; x \ne y. \tag{6}$$

Moreover, we define

$$\Sigma_1 = \mathrm{diag}\{m_1 L_1, m_2 L_2, \ldots, m_n L_n\}, \qquad \Sigma_2 = \mathrm{diag}\{m_1 + L_1, m_2 + L_2, \ldots, m_n + L_n\}. \tag{7}$$

$(H_3)$ The map $G(t, x, y) : R^+ \times R^n \times R^n \to R^{n\times n}$ is locally Lipschitz continuous and satisfies the trace condition (8) below.

Definition 2 (see [28]). For any switching signal $\sigma(t)$ with switching sequence $\{(\sigma(t_0), t_0), \ldots, (\sigma(t_k), t_k), \ldots \mid k = 0, 1, \ldots\}$, where $(\sigma(t_k), t_k)$ means that the $\sigma(t_k)$th subsystem is activated during $t \in [t_k, t_{k+1})$ and $k$ denotes the switching ordinal number, let $N_\sigma(T_1, T_2)$ denote the number of discontinuities of $\sigma(t)$ over a time interval $(T_1, T_2)$ with $T_2 > T_1 \ge 0$. If $N_\sigma(T_1, T_2) \le N_0 + (T_2 - T_1)/T_a$ holds for some $T_a > 0$ and $N_0 > 0$, then $T_a$ is called the average dwell time and $N_0$ the chatter bound.

Lemma 3. Let $x$ and $y$ be any $n$-dimensional real vectors, $P$ a positive semidefinite matrix, and $\varepsilon > 0$ a scalar.
Then the following inequality holds:

$$2x^T P y \le \varepsilon x^T P x + \varepsilon^{-1} y^T P y. \tag{10}$$

Lemma 4 (see [29]). For any positive definite constant matrix $M \in R^{n\times n}$ and a scalar $r > 0$, if there exists a vector function $g : [0, r] \to R^n$ such that the integrals $\int_0^r g^T(s) M g(s)\,\mathrm{d}s$ and $\int_0^r g(s)\,\mathrm{d}s$ are well defined, then

$$r \int_0^r g^T(s) M g(s)\,\mathrm{d}s \ge \Big(\int_0^r g(s)\,\mathrm{d}s\Big)^T M \Big(\int_0^r g(s)\,\mathrm{d}s\Big). \tag{11}$$

The condition in Assumption $(H_3)$ reads

$$\mathrm{trace}\big[G^T(t, x, y)\, G(t, x, y)\big] \le x^T M_1^T M_1 x + y^T M_2^T M_2 y + 2x^T M_1^T M_2 y, \tag{8}$$

where $M_1 > 0$, $M_2 > 0$ are constant matrices with appropriate dimensions.

Definition 1 (see [15]). System (2) is called mean-square ultimately bounded if there exists a constant vector $\tilde{B} > 0$ such that, for any initial value $\varphi \in \mathcal{C}_{\mathcal{F}_0}$, there is a $t' = t'(\varphi) > 0$ such that, for all $t \ge t'$, the solution $x(t, \varphi)$ of system (2) satisfies

$$E\|x(t, \varphi)\|^2 \le \tilde{B}. \tag{9}$$

In this case, the set $\mathcal{A} = \{\varphi \in \mathcal{C}_{\mathcal{F}_0} \mid E\|\varphi(s)\|^2 \le \tilde{B}\}$ is said to be an attractor of system (2) in the mean-square sense. Clearly, the statement above is equivalent to $\limsup_{t\to\infty} E\|x(t)\|^2 \le \tilde{B}$.

Let $\mathcal{C}^{2,1}(R^n \times R^+; R^+)$ denote the family of all nonnegative functions $V(t, x)$ on $R^n \times R^+$ which are continuously twice differentiable in $x$ and once differentiable in $t$. For $V \in \mathcal{C}^{2,1}(R^n \times R^+; R^+)$, define an operator $\mathcal{L}V$ associated with the general stochastic system $\mathrm{d}x(t) = f(x(t), t)\,\mathrm{d}t + G(x(t), x(t-\tau(t)))\,\mathrm{d}w(t)$ as

$$\mathcal{L}V(t, x) = V_t(t, x) + V_x(t, x) f(x(t), t) + \tfrac{1}{2}\,\mathrm{trace}\big\{G^T(x(t), x(t-\tau(t)))\, V_{xx}(t, x)\, G(x(t), x(t-\tau(t)))\big\}, \tag{12}$$

where

$$V_t(t, x) = \frac{\partial V(t, x)}{\partial t}, \quad V_x(t, x) = \Big(\frac{\partial V(t, x)}{\partial x_1}, \ldots, \frac{\partial V(t, x)}{\partial x_n}\Big), \quad V_{xx}(t, x) = \Big(\frac{\partial^2 V(t, x)}{\partial x_i\, \partial x_j}\Big)_{n\times n}. \tag{13}$$

3. Main Results

Theorem 5. Suppose there are constants $\mu$, $\nu$ such that $\dot\tau(t) \le \mu$ and $\dot h(t) \le \nu$, and denote

$$\eta(\mu) = \begin{cases} (1-\mu)e^{-\alpha\tau}, & \mu \le 1; \\ 1-\mu, & \mu \ge 1, \end{cases} \qquad \eta(\nu) = \begin{cases} (1-\nu)e^{-\alpha h}, & \nu \le 1; \\ 1-\nu, & \nu \ge 1. \end{cases} \tag{14}$$
For a given constant $\alpha > 0$, if there exist positive definite matrices $P = \mathrm{diag}(p_1, p_2, \ldots, p_n)$, $Q$, $S$, $W$, $R$, $M_1$, $M_2$, and $Y_k = \mathrm{diag}(y_{k1}, y_{k2}, \ldots, y_{kn})$, $k = 1, 2$, such that the following condition holds:

$$\Delta_1 = \begin{pmatrix} \Phi_{11} & \Phi_{12} & \Phi_{13} & \Phi_{14} & 0 & \Phi_{16} \\ * & \Phi_{22} & 0 & \Phi_{24} & 0 & 0 \\ * & * & \Phi_{33} & 0 & 0 & 0 \\ * & * & * & \Phi_{44} & 0 & 0 \\ * & * & * & * & \Phi_{55} & 0 \\ * & * & * & * & * & \Phi_{66} \end{pmatrix} < 0, \tag{15}$$

where

$$\begin{aligned} \Phi_{11} &= 2\alpha P - 2DP + Q + \tau^2 W - 2\Sigma_1 Y_1 + \alpha I + M_1^T P M_1, \\ \Phi_{12} &= M_1^T P M_2, \quad \Phi_{13} = PA + \Sigma_2 Y_1, \quad \Phi_{14} = PB, \quad \Phi_{16} = PC, \\ \Phi_{22} &= -\eta(\mu) Q - 2\Sigma_1 Y_2 + \alpha I + M_2^T P M_2, \quad \Phi_{24} = \Sigma_2 Y_2, \\ \Phi_{33} &= S + h^2 R - 2Y_1 + \alpha I, \quad \Phi_{44} = -\eta(\nu) S - 2Y_2 + \alpha I, \\ \Phi_{55} &= -\eta(\mu) W, \quad \Phi_{66} = -\eta(\nu) R, \end{aligned}$$

then system (2) is mean-square ultimately bounded.

Proof. Consider the positive definite Lyapunov functional

$$V(t) = V_1(t) + V_2(t) + V_3(t) + V_4(t) + V_5(t), \tag{16}$$

where

$$\begin{aligned} V_1(t) &= e^{\alpha t} x^T(t) P x(t), \\ V_2(t) &= \int_{t-\tau(t)}^{t} x^T(s)\, Q e^{\alpha s}\, x(s)\,\mathrm{d}s, \\ V_3(t) &= \int_{t-h(t)}^{t} F^T(x(s))\, S e^{\alpha s}\, F(x(s))\,\mathrm{d}s, \\ V_4(t) &= \tau \int_{-\tau(t)}^{0}\!\int_{t+\theta}^{t} x^T(s)\, W e^{\alpha s}\, x(s)\,\mathrm{d}s\,\mathrm{d}\theta, \\ V_5(t) &= h \int_{-h(t)}^{0}\!\int_{t+\theta}^{t} F^T(x(s))\, R e^{\alpha s}\, F(x(s))\,\mathrm{d}s\,\mathrm{d}\theta. \end{aligned} \tag{17}$$

Applying the operator $\mathcal{L}$ in (12) to $V_1$ along the trajectory of system (2) gives

$$\begin{aligned} \mathcal{L}V_1(t) = {}& \alpha e^{\alpha t} x^T(t) P x(t) + 2 e^{\alpha t} x^T(t) P \Big[-Dx(t) + AF(x(t)) + BF(x(t-\tau(t))) + C\int_{t-h(t)}^{t} F(x(s))\,\mathrm{d}s + J\Big] \\ &+ e^{\alpha t}\,\mathrm{trace}\big[G^T(x(t), x(t-\tau(t)))\, P\, G(x(t), x(t-\tau(t)))\big]. \end{aligned} \tag{19}$$

From Assumption $(H_3)$, Lemma 3, and (19), we get

$$\begin{aligned} \mathcal{L}V_1(t) \le {}& 2\alpha e^{\alpha t} x^T(t) P x(t) + 2 e^{\alpha t} x^T(t) P \Big[-Dx(t) + AF(x(t)) + BF(x(t-\tau(t))) + C\int_{t-h(t)}^{t} F(x(s))\,\mathrm{d}s\Big] + e^{\alpha t}\alpha^{-1} J^T P J \\ &+ e^{\alpha t}\big[x^T(t) M_1^T P M_1 x(t) + x^T(t-\tau(t)) M_2^T P M_2 x(t-\tau(t)) + 2 x^T(t) M_1^T P M_2 x(t-\tau(t))\big]. \end{aligned} \tag{20}$$
Then, by Itô's formula, the stochastic differential of $V(x, t)$ is

$$\mathrm{d}V(x, t) = \mathcal{L}V(x, t)\,\mathrm{d}t + V_x(x, t)\, G(x(t), x(t-\tau(t)))\,\mathrm{d}w(t). \tag{18}$$

Similarly, calculating the operator $\mathcal{L}V_i$ ($i = 2, 3, 4, 5$) along the trajectory of system (2), one gets

$$\begin{aligned} \mathcal{L}V_2 &= e^{\alpha t} x^T(t) Q x(t) - (1 - \dot\tau(t))\, e^{\alpha(t-\tau(t))}\, x^T(t-\tau(t)) Q x(t-\tau(t)) \\ &\le e^{\alpha t} x^T(t) Q x(t) - \eta(\mu)\, e^{\alpha t}\, x^T(t-\tau(t)) Q x(t-\tau(t)), \\ \mathcal{L}V_3 &\le e^{\alpha t} F^T(x(t)) S F(x(t)) - \eta(\nu)\, e^{\alpha t}\, F^T(x(t-\tau(t))) S F(x(t-\tau(t))), \\ \mathcal{L}V_4 &\le \tau^2 e^{\alpha t} x^T(t) W x(t) - \tau\,\eta(\mu)\, e^{\alpha t} \int_{t-\tau(t)}^{t} x^T(s) W x(s)\,\mathrm{d}s, \\ \mathcal{L}V_5 &\le h^2 e^{\alpha t} F^T(x(t)) R F(x(t)) - h\,\eta(\nu)\, e^{\alpha t} \int_{t-h(t)}^{t} F^T(x(s)) R F(x(s))\,\mathrm{d}s. \end{aligned} \tag{21}$$

According to Lemma 4, the following inequalities hold:

$$\int_{t-\tau(t)}^{t} x^T(s) W x(s)\,\mathrm{d}s \ge \frac{1}{\tau}\Big(\int_{t-\tau(t)}^{t} x(s)\,\mathrm{d}s\Big)^T W \Big(\int_{t-\tau(t)}^{t} x(s)\,\mathrm{d}s\Big), \qquad \int_{t-h(t)}^{t} F^T(x(s)) R F(x(s))\,\mathrm{d}s \ge \frac{1}{h}\Big(\int_{t-h(t)}^{t} F(x(s))\,\mathrm{d}s\Big)^T R \Big(\int_{t-h(t)}^{t} F(x(s))\,\mathrm{d}s\Big). \tag{22}$$

Then we can get

$$\mathcal{L}V_4 \le \tau^2 e^{\alpha t} x^T(t) W x(t) - \eta(\mu)\, e^{\alpha t} \Big(\int_{t-\tau(t)}^{t} x(s)\,\mathrm{d}s\Big)^T W \Big(\int_{t-\tau(t)}^{t} x(s)\,\mathrm{d}s\Big), \qquad \mathcal{L}V_5 \le h^2 e^{\alpha t} F^T(x(t)) R F(x(t)) - \eta(\nu)\, e^{\alpha t} \Big(\int_{t-h(t)}^{t} F(x(s))\,\mathrm{d}s\Big)^T R \Big(\int_{t-h(t)}^{t} F(x(s))\,\mathrm{d}s\Big). \tag{23}$$

On the other hand, it follows from Assumption $(H_2)$ that

$$\big[f_i(x_i(t)) - f_i(0) - L_i x_i(t)\big]\big[f_i(x_i(t)) - f_i(0) - m_i x_i(t)\big] \le 0, \qquad \big[f_i(x_i(t-\tau(t))) - f_i(0) - L_i x_i(t-\tau(t))\big]\big[f_i(x_i(t-\tau(t))) - f_i(0) - m_i x_i(t-\tau(t))\big] \le 0, \quad i = 1, 2, \ldots, n. \tag{24}$$

Then we obtain

$$0 \le \mathcal{L}_1 = -2\sum_{i=1}^{n} y_{1i}\big[f_i(x_i(t)) - f_i(0) - L_i x_i(t)\big]\big[f_i(x_i(t)) - f_i(0) - m_i x_i(t)\big], \qquad 0 \le \mathcal{L}_2 = -2\sum_{i=1}^{n} y_{2i}\big[f_i(x_i(t-\tau(t))) - f_i(0) - L_i x_i(t-\tau(t))\big]\big[f_i(x_i(t-\tau(t))) - f_i(0) - m_i x_i(t-\tau(t))\big].$$

Expanding $\mathcal{L}_1$ and applying Lemma 3 termwise gives

$$\mathcal{L}_1 \le -2\sum_{i=1}^{n} y_{1i}\big[f_i(x_i(t)) - L_i x_i(t)\big]\big[f_i(x_i(t)) - m_i x_i(t)\big] + \sum_{i=1}^{n}\big[\alpha f_i^2(x_i(t)) + 4\alpha^{-1} f_i^2(0)\, y_{1i}^2 + \alpha x_i^2(t) + \alpha^{-1} f_i^2(0)\, y_{1i}^2 (L_i + m_i)^2\big]. \tag{25}$$

Similarly, one can get

$$\mathcal{L}_2 \le -2\sum_{i=1}^{n} y_{2i}\big[f_i(x_i(t-\tau(t))) - L_i x_i(t-\tau(t))\big]\big[f_i(x_i(t-\tau(t))) - m_i x_i(t-\tau(t))\big] + \sum_{i=1}^{n}\big[\alpha f_i^2(x_i(t-\tau(t))) + 4\alpha^{-1} f_i^2(0)\, y_{2i}^2 + \alpha x_i^2(t-\tau(t)) + \alpha^{-1} f_i^2(0)\, y_{2i}^2 (L_i + m_i)^2\big]. \tag{26}$$

Denote

$$\zeta(t) = \Big[x^T(t),\; x^T(t-\tau(t)),\; F^T(x(t)),\; F^T(x(t-\tau(t))),\; \Big(\int_{t-\tau(t)}^{t} x(s)\,\mathrm{d}s\Big)^T,\; \Big(\int_{t-h(t)}^{t} F(x(s))\,\mathrm{d}s\Big)^T\Big]^T. \tag{27}$$

Combining (16)-(26), we get

$$\mathrm{d}V = \sum_{i=1}^{5}\mathcal{L}V_i\,\mathrm{d}t + 2e^{\alpha t} x^T(t) P\, G(x(t), x(t-\tau(t)))\,\mathrm{d}w(t) \le e^{\alpha t} \zeta^T(t)\, \Delta_1\, \zeta(t)\,\mathrm{d}t + e^{\alpha t}\mathcal{N}_1\,\mathrm{d}t + 2e^{\alpha t} x^T(t) P\, G(x(t), x(t-\tau(t)))\,\mathrm{d}w(t), \tag{28}$$

where

$$\mathcal{N}_1 = \alpha^{-1} J^T P J + \sum_{i=1}^{n}\big[4\alpha^{-1} f_i^2(0)\, y_{1i}^2 + \alpha^{-1} f_i^2(0)\, y_{1i}^2 (L_i + m_i)^2 + 4\alpha^{-1} f_i^2(0)\, y_{2i}^2 + \alpha^{-1} f_i^2(0)\, y_{2i}^2 (L_i + m_i)^2\big]. \tag{29}$$

Integrating both sides of (28) over $[t_0, t]$ and using $\Delta_1 < 0$ yields

$$\gamma e^{\alpha t}\|x(t)\|^2 \le V(x(t)) \le V(x(t_0)) + \alpha^{-1} e^{\alpha t}\mathcal{N}_1 + \int_{t_0}^{t} 2 e^{\alpha s} x^T(s) P\, G(x(s), x(s-\tau(s)))\,\mathrm{d}w(s), \tag{30}$$

where $\gamma = \lambda_{\min}(P)$. Taking expectations, one obtains

$$E\{V(x(t))\} \le E\{V(x(t_0))\} + E\{\alpha^{-1} e^{\alpha t}\mathcal{N}_1\}, \tag{31}$$

which implies

$$E\|x(t)\|^2 \le \frac{e^{-\alpha t} E\{V(x(t_0))\} + \alpha^{-1}\mathcal{N}_1}{\gamma}. \tag{32}$$

If one chooses $\tilde{B} = (1 + \alpha^{-1}\mathcal{N}_1)/\gamma > 0$, then, for any initial value $\varphi \in \mathcal{C}_{\mathcal{F}_0}$, there is $t' = t'(\varphi) > 0$ such that $e^{-\alpha t} E\{V(x(t_0))\} \le 1$ for all $t \ge t'$. According to Definition 1, $E\|x(t, \varphi)\|^2 \le \tilde{B}$ for all $t \ge t'$; that is, system (2) is mean-square ultimately bounded. This completes the proof.

Theorem 6. If all of the conditions of Theorem 5 hold, then there exists an attractor $\mathcal{A}_{\tilde{B}} = \{\varphi \in \mathcal{C}_{\mathcal{F}_0} \mid E\|\varphi(s)\|^2 \le \tilde{B}\}$ for the solutions of system (2).

Proof. Choose $\tilde{B} = (1 + \alpha^{-1}\mathcal{N}_1)/\gamma > 0$. Theorem 5 shows that, for any $\varphi$, there is $t' > 0$ such that $E\|x(t, \varphi)\|^2 \le \tilde{B}$ for all $t \ge t'$. Let $\mathcal{A}_{\tilde{B}} = \{\varphi \in \mathcal{C}_{\mathcal{F}_0} \mid E\|\varphi(s)\|^2 \le \tilde{B}\}$. Clearly, $\mathcal{A}_{\tilde{B}}$ is closed, bounded, and invariant. Furthermore, $\limsup_{t\to\infty} \inf_{y \in \mathcal{A}_{\tilde{B}}} \|x(t, \varphi) - y\| = 0$. Therefore, $\mathcal{A}_{\tilde{B}}$ is an attractor for the solutions of system (2). This completes the proof.

Corollary 7. In addition to all of the conditions of Theorem 5, if $J = 0$, $G(t, 0, 0) = 0$, and $f_i(0) = 0$ for all $i = 1, 2, \ldots, n$, then system (2) has the trivial solution $x(t) \equiv 0$, and the trivial solution of system (2) is mean-square exponentially stable.

Proof. If $J = 0$ and $f_i(0) = 0$ ($i = 1, 2, \ldots, n$), then $\mathcal{N}_1 = 0$, and it is obvious that system (2) has the trivial solution $x(t) \equiv 0$. From Theorem 5, one has

$$E\|x(t, \varphi)\|^2 \le K^* e^{-\alpha t}, \quad \forall \varphi, \tag{33}$$

where $K^* = E\{V(x(t_0))\}/\gamma$. Therefore, the trivial solution of system (2) is mean-square exponentially stable. This completes the proof.

Based on Theorems 5 and 6 and Corollary 7, we now present conditions for mean-square ultimate boundedness of the switched system (3) by applying the average dwell time method.

Theorem 8. Suppose there are constants $\mu$, $\nu$ such that $\dot\tau(t) \le \mu$ and $\dot h(t) \le \nu$, and denote

$$\eta(\mu) = \begin{cases} (1-\mu)e^{-\alpha\tau}, & \mu \le 1; \\ 1-\mu, & \mu \ge 1, \end{cases} \qquad \eta(\nu) = \begin{cases} (1-\nu)e^{-\alpha h}, & \nu \le 1; \\ 1-\nu, & \nu \ge 1. \end{cases} \tag{34}$$

For a given constant $\alpha > 0$, suppose there exist positive definite matrices $P_i$, $Q_i$, $S_i$, $W_i$, $R_i$, $M_{1i}$, $M_{2i}$, and $Y_k = \mathrm{diag}(y_{k1}, y_{k2}, \ldots, y_{kn})$, $k = 1, 2$, such that the following condition holds:

$$\Delta_{i1} = \begin{pmatrix} \Phi_{i11} & \Phi_{i12} & \Phi_{i13} & \Phi_{i14} & 0 & \Phi_{i16} \\ * & \Phi_{i22} & 0 & \Phi_{i24} & 0 & 0 \\ * & * & \Phi_{i33} & 0 & 0 & 0 \\ * & * & * & \Phi_{i44} & 0 & 0 \\ * & * & * & * & \Phi_{i55} & 0 \\ * & * & * & * & * & \Phi_{i66} \end{pmatrix} < 0, \tag{35}$$

where

$$\begin{aligned} \Phi_{i11} &= 2\alpha P_i - 2D_i P_i + Q_i + \tau^2 W_i - 2\Sigma_1 Y_1 + \alpha I + M_{1i}^T P_i M_{1i}, \\ \Phi_{i12} &= M_{1i}^T P_i M_{2i}, \quad \Phi_{i13} = P_i A_i + \Sigma_2 Y_1, \quad \Phi_{i14} = P_i B_i, \quad \Phi_{i16} = P_i C_i, \\ \Phi_{i22} &= -\eta(\mu) Q_i - 2\Sigma_1 Y_2 + \alpha I + M_{2i}^T P_i M_{2i}, \quad \Phi_{i24} = \Sigma_2 Y_2, \\ \Phi_{i33} &= S_i + h^2 R_i - 2Y_1 + \alpha I, \quad \Phi_{i44} = -\eta(\nu) S_i - 2Y_2 + \alpha I, \\ \Phi_{i55} &= -\eta(\mu) W_i, \quad \Phi_{i66} = -\eta(\nu) R_i. \end{aligned} \tag{36}$$

Then system (3) is mean-square ultimately bounded for any switching signal with average dwell time satisfying

$$T_a > T_a^* = \frac{\ln \mathcal{R}_{\max}}{\alpha}, \tag{37}$$

where $\mathcal{R}_{\max} = \max_{i_k}\{\mathcal{R}_{i_k}\}$.

Proof. Define the Lyapunov functional candidate

$$\begin{aligned} V_{\sigma(t)} = {}& e^{\alpha t} x^T(t) P_{\sigma(t)} x(t) + \int_{t-\tau(t)}^{t} x^T(s)\, Q_{\sigma(t)} e^{\alpha s}\, x(s)\,\mathrm{d}s + \int_{t-h(t)}^{t} F^T(x(s))\, S_{\sigma(t)} e^{\alpha s}\, F(x(s))\,\mathrm{d}s \\ &+ \tau \int_{-\tau(t)}^{0}\!\int_{t+\theta}^{t} x^T(s)\, W_{\sigma(t)} e^{\alpha s}\, x(s)\,\mathrm{d}s\,\mathrm{d}\theta + h \int_{-h(t)}^{0}\!\int_{t+\theta}^{t} F^T(x(s))\, R_{\sigma(t)} e^{\alpha s}\, F(x(s))\,\mathrm{d}s\,\mathrm{d}\theta. \end{aligned} \tag{38}$$

From (16) and (32), we have

$$E\|x(t)\|^2 \le \frac{\mathcal{R}_0\, E\|x(t_0)\|^2\, e^{-\alpha(t-t_0)}}{\gamma} + \frac{\Lambda}{\gamma}, \tag{39}$$

where $\Lambda = \alpha^{-1}\mathcal{N}_1$ and $\mathcal{R}_0$ is a positive constant. When $t \in [t_k, t_{k+1})$, the $i_k$th subsystem is active; from (39) and Theorem 5 we get

$$E\|x(t)\|^2 \le \frac{\mathcal{R}_{i_k}\, E\|x(t_k)\|^2\, e^{-\alpha(t-t_k)}}{\gamma_{i_k}} + \frac{\Lambda}{\gamma_{i_k}} = H_{i_k}\, E\|x(t_k)\|^2\, e^{-\alpha(t-t_k)} + \beta_{i_k}, \tag{40}$$

where $\mathcal{R}_{i_k}$ is a positive constant, $\gamma_{i_k} = \lambda_{\min}(P_{i_k})$, $H_{i_k} = \mathcal{R}_{i_k}/\gamma_{i_k}$, and $\beta_{i_k} = \Lambda/\gamma_{i_k}$. Since the system state is continuous, iterating (40) over the switching instants $t_1 < t_2 < \cdots < t_k$ and using $N_\sigma(t_0, t) \le N_0 + (t - t_0)/T_a$ gives

$$\begin{aligned} E\|x(t)\|^2 &\le H_{\max}\, e^{N_\sigma(t_0, t)\ln H_{\max} - \alpha(t-t_0)}\, E\|x(t_0)\|^2 + \big[H_{\max}^{k}\beta_{\max} + H_{\max}^{k-1}\beta_{\max} + \cdots + H_{\max}\beta_{\max} + \beta_{\max}\big] \\ &\le \frac{\mathcal{R}_{\max}\, e^{N_0 \ln \mathcal{R}_{\max}}}{\gamma_{\min}^{k+1}}\, e^{-(\alpha - \ln \mathcal{R}_{\max}/T_a)(t-t_0)}\, E\|x(t_0)\|^2 + \frac{\Lambda\big[(\mathcal{R}_{\max}^{k+1}/\gamma_{\min}^{k+1}) - 1\big]}{\mathcal{R}_{\max} - \gamma_{\min}}, \end{aligned} \tag{41}$$

where $\gamma_{\min} = \min_{i_k}\{\gamma_{i_k}\}$, $H_{\max} = \max_{i_k}\{H_{i_k}\}$, and $\beta_{\max} = \max_{i_k}\{\beta_{i_k}\}$.

If one chooses $\tilde{B} = (1/\gamma_{\min}) + \Lambda[(\mathcal{R}_{\max}^{k+1}/\gamma_{\min}^{k+1}) - 1]/(\mathcal{R}_{\max} - \gamma_{\min}) > 0$, then, for any initial value $\varphi \in \mathcal{C}_{\mathcal{F}_0}$, there is $t' = t'(\varphi) > 0$ such that $\mathcal{R}_{\max}\, e^{N_0 \ln \mathcal{R}_{\max}}\, e^{-(\alpha - \ln \mathcal{R}_{\max}/T_a)(t-t_0)}\, E\|x(t_0)\|^2 \le 1$ for all $t \ge t'$. According to Definition 1, $E\|x(t, \varphi)\|^2 \le \tilde{B}$ for all $t \ge t'$; that is, system (3) is mean-square ultimately bounded. This completes the proof.

Remark 9. In this paper, we construct the two piecewise functions $\eta(\mu)$, $\eta(\nu)$ to remove the restrictive conditions $\mu < 1$ and $\nu < 1$, which reduces the conservatism of the obtained results and also avoids computational complexity.

Remark 10.
The condition (35) is given in the form of linear matrix inequalities, which is more relaxed than an algebraic formulation. Furthermore, by using the MATLAB LMI toolbox, we can check the feasibility of (35) straightforwardly without tuning any parameters.

Theorem 11. If all of the conditions of Theorem 8 hold, then there exists an attractor $\mathcal{A}'_{\tilde{B}} = \{\varphi \in \mathcal{C}_{\mathcal{F}_0} \mid E\|\varphi(s)\|^2 \le \tilde{B}\}$ for the solutions of system (3).

Proof. Choose $\tilde{B} = (1/\gamma_{\min}) + \Lambda[(\mathcal{R}_{\max}^{k+1}/\gamma_{\min}^{k+1}) - 1]/(\mathcal{R}_{\max} - \gamma_{\min}) > 0$. Theorem 8 shows that, for any $\varphi$, there is $t' > 0$ such that $E\|x(t, \varphi)\|^2 \le \tilde{B}$ for all $t \ge t'$. Let $\mathcal{A}'_{\tilde{B}} = \{\varphi \in \mathcal{C}_{\mathcal{F}_0} \mid E\|\varphi(s)\|^2 \le \tilde{B}\}$. Clearly, $\mathcal{A}'_{\tilde{B}}$ is closed, bounded, and invariant. Furthermore, $\limsup_{t\to\infty} \inf_{y \in \mathcal{A}'_{\tilde{B}}} \|x(t, \varphi) - y\| = 0$. Therefore, $\mathcal{A}'_{\tilde{B}}$ is an attractor for the solutions of system (3). This completes the proof.

Corollary 12. In addition to all of the conditions of Theorem 8, if $J = 0$, $G(t, 0, 0) = 0$, and $f_i(0) = 0$ for all $i = 1, 2, \ldots, n$, then system (3) has the trivial solution $x(t) \equiv 0$, and the trivial solution of system (3) is mean-square exponentially stable.

Proof. If $J = 0$ and $f_i(0) = 0$ for all $i = 1, 2, \ldots, n$, then it is obvious that system (3) has the trivial solution $x(t) \equiv 0$. From Theorem 8, one has

$$E\|x(t, \varphi)\|^2 \le \tilde{K}^* e^{-\alpha t}, \quad \forall \varphi, \tag{42}$$

where $\tilde{K}^* = \mathcal{R}_{\max}\, e^{N_0 \ln \mathcal{R}_{\max}}\, e^{-(\alpha - \ln \mathcal{R}_{\max}/T_a)(t-t_0)}\, E\|x(t_0)\|^2/\gamma_{\min}^{k+1}$. Therefore, the trivial solution of system (3) is mean-square exponentially stable. This completes the proof.

Remark 13. Assumption $(H_3)$ is less conservative than that in [17], since the constants $m_i$ and $L_i$ are allowed to be positive, negative, or zero. Hence, the resulting activation functions $f(\cdot)$ may be nonmonotonic and are more general than the usual forms $|f_i(u)| \le K_i|u|$, $K_i > 0$, $i = 1, 2, \ldots, n$.
Moreover, unlike the bounded case, there may be no equilibrium point for the switched system (3) under Assumption $(H_3)$. For this reason, investigating the asymptotic behavior (ultimate boundedness and the existence of an attractor) of a switched system containing mixed delays is more complex and challenging.

Remark 14. In this paper, the chatter bound $N_0$ is a positive integer, which is more practically significant and includes the model with $N_0 = 0$ in [16, 25, 26] as a special case.

Remark 15. If $\Sigma = 0$, the switched delay system (3) reduces to the usual stochastic CNN with delays; in this case, the attractor and ultimate boundedness are discussed in [17]. When $M_1 = M_2 = 0$, the model in our paper becomes a switched CNN with mixed delays; to the best of our knowledge, there are no published results in this respect yet. Thus, the main results of this paper are novel. Moreover, when uncertainties appear in the switched stochastic CNN system (3), the corresponding results can be obtained by applying a method similar to that in [25].

4. Illustrative Examples

In this section, we give a numerical example to demonstrate the validity and effectiveness of our results. Consider the switched stochastic cellular neural network system (3) with two subsystems, $f_i(x_i(t)) = 0.5\tanh(x_i(t))$, $f_i(0) = 0$ ($i = 1, 2$), $\tau(t) = 0.25\sin^2(t)$, $h(t) = 0.3\sin^2(t)$, and the connection weight matrices

$$A_1 = \begin{pmatrix} 0.3 & 0.1 \\ 0.2 & 0.2 \end{pmatrix}, \quad B_1 = \begin{pmatrix} 0.2 & 0 \\ 0.3 & 0.5 \end{pmatrix}, \quad C_1 = \begin{pmatrix} 0.2 & -0.1 \\ 0.3 & 0.1 \end{pmatrix}, \quad M_{11} = \begin{pmatrix} 0.1 & 0 \\ -0.1 & 0.2 \end{pmatrix}, \quad M_{21} = \begin{pmatrix} 0.2 & 0.1 \\ 0 & 0.1 \end{pmatrix},$$

$$A_2 = \begin{pmatrix} 0.2 & 0.4 \\ 0.1 & 0.3 \end{pmatrix}, \quad B_2 = \begin{pmatrix} 0.1 & 0 \\ -0.1 & 0.2 \end{pmatrix}, \quad C_2 = \begin{pmatrix} 0.3 & 0.2 \\ 0.1 & 0.2 \end{pmatrix}, \quad M_{12} = \begin{pmatrix} 0.2 & 0.1 \\ 0 & 0.3 \end{pmatrix}, \quad M_{22} = \begin{pmatrix} 0.1 & 0 \\ 0.2 & 0.1 \end{pmatrix}. \tag{43}$$

From Assumptions $(H_1)$-$(H_3)$, we obtain $d_i = 1$, $m_i = 0$, $L_i = 0.5$ ($i = 1, 2$), $\tau = 0.25$, $h = 0.3$, and $\mu = 0.5$, $\nu = 0.6$.
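As a hedged aside on verifying conditions of the form (35): an LMI feasibility problem is normally solved with a dedicated solver (the paper uses the MATLAB LMI toolbox), but once candidate decision variables are in hand, checking that a symmetric block matrix satisfies $\Delta_{i1} < 0$ reduces to an eigenvalue test. The sketch below (with an illustrative toy matrix, not the paper's $\Delta_1$) shows that check only:

```python
import numpy as np

# Hedged sketch: a condition like Delta_i1 < 0 in (35) asks whether a
# symmetric matrix assembled from candidate decision variables is negative
# definite; for given numbers this is just an eigenvalue test.
def is_negative_definite(M, tol=1e-9):
    """True if the symmetric matrix M satisfies M < 0."""
    M = np.asarray(M, dtype=float)
    assert np.allclose(M, M.T), "LMI blocks are symmetric by construction"
    return float(np.max(np.linalg.eigvalsh(M))) < -tol

# Toy 2x2 examples (illustrative values only).
print(is_negative_definite([[-2.0, 0.5], [0.5, -1.0]]))   # True
print(is_negative_definite([[1.0, 0.0], [0.0, -1.0]]))    # False
```

This only certifies a given candidate; finding feasible $P_i, Q_i, S_i, W_i, R_i, Y_k$ in the first place requires a semidefinite programming solver.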
Therefore, for $\alpha = 0.5$, by solving the LMIs (35) we get

$$P_1 = \begin{pmatrix} 1.4968 & 0 \\ 0 & 1.4851 \end{pmatrix}, \quad Q_1 = \begin{pmatrix} 1.6073 & -0.0528 \\ -0.0528 & 1.4567 \end{pmatrix}, \quad S_1 = \begin{pmatrix} 1.8642 & 0.4698 \\ 0.4698 & 1.5241 \end{pmatrix}, \quad W_1 = \begin{pmatrix} 2.7467 & 0.0225 \\ 0.0225 & 1.9941 \end{pmatrix}, \quad R_1 = \begin{pmatrix} 5.4373 & 0.0644 \\ 0.0644 & 4.5969 \end{pmatrix},$$

$$P_2 = \begin{pmatrix} 1.4316 & 0 \\ 0 & 1.4528 \end{pmatrix}, \quad Q_2 = \begin{pmatrix} 1.6541 & 0.0229 \\ 0.0229 & 1.8391 \end{pmatrix}, \quad S_2 = \begin{pmatrix} 1.0837 & 0.4540 \\ 0.4540 & 1.2710 \end{pmatrix}, \quad W_2 = \begin{pmatrix} 1.6888 & 0.4356 \\ 0.4356 & 1.6165 \end{pmatrix}, \quad R_2 = \begin{pmatrix} 4.5736 & 0.5698 \\ 0.5698 & 4.4524 \end{pmatrix}. \tag{44}$$

Using (37), we obtain the average dwell time $T_a^* = 1.3445$.

5. Conclusions

In this paper, we studied switched stochastic cellular neural networks with discrete time-varying delays and distributed time-varying delays. With the help of the average dwell time approach, novel multiple Lyapunov-Krasovskii functionals, and some inequality techniques, we obtained new sufficient conditions guaranteeing mean-square ultimate boundedness, the existence of an attractor, and mean-square exponential stability. A numerical example was given to demonstrate the results. Furthermore, the derived conditions are presented in the form of LMIs, which is more relaxed than an algebraic formulation and can easily be checked in practice with the LMI toolbox in MATLAB.

Acknowledgments

The authors are extremely grateful to Prof. Zhichun Yang and the anonymous reviewers for their constructive and valuable comments, which have contributed much to the improvement of this paper. This work was jointly supported by the National Natural Science Foundation of China under Grant no. 11101053, the Key Project of the Chinese Ministry of Education under Grant no. 211118, the Excellent Youth Foundation of the Educational Committee of Hunan Province no. 10B002, the Hunan Provincial NSF no. 11JJ1001, the National Science and Technology Major Projects of China no. 2012ZX10001001-006, and the Scientific Research Funds of the Hunan Provincial Science and Technology Department of China no. 2012SK3096.

References

[1] L. O. Chua and L.
Yang, "Cellular neural networks: theory," IEEE Transactions on Circuits and Systems, vol. 35, no. 10, pp. 1257–1272, 1988.
[2] L. O. Chua and L. Yang, "Cellular neural networks: applications," IEEE Transactions on Circuits and Systems, vol. 35, no. 10, pp. 1273–1290, 1988.
[3] D. Liu and A. Michel, "Cellular neural networks for associative memories," IEEE Transactions on Circuits and Systems II, vol. 40, pp. 119–121, 1993.
[4] H.-C. Chen, Y.-C. Hung, C.-K. Chen, T.-L. Liao, and C.-K. Chen, "Image-processing algorithms realized by discrete-time cellular neural networks and their circuit implementations," Chaos, Solitons & Fractals, vol. 29, no. 5, pp. 1100–1108, 2006.
[5] P. Venetianer and T. Roska, "Image compression by delayed CNNs," IEEE Transactions on Circuits and Systems I, vol. 45, pp. 205–215, 1998.
[6] T. Roska, T. Boros, P. Thiran, and L. Chua, "Detecting simple motion using cellular neural networks," in Proceedings of the International Workshop on Cellular Neural Networks and Their Applications, pp. 127–138, 1990.
[7] T. Roska and L. Chua, "Cellular neural networks with nonlinear and delay-type template," International Journal of Circuit Theory and Applications, vol. 20, pp. 469–481, 1992.
[8] F. Wen and X. Yang, "Skewness of return distribution and coefficient of risk premium," Journal of Systems Science & Complexity, vol. 22, no. 3, pp. 360–371, 2009.
[9] F. Wen, Z. Li, C. Xie, and S. David, "Study on the fractal and chaotic features of the Shanghai composite index," Fractals - Complex Geometry, Patterns and Scaling in Nature and Society, vol. 20, no. 2, pp. 133–140, 2012.
[10] S. Haykin, Neural Networks, Prentice-Hall, Englewood Cliffs, NJ, USA, 1994.
[11] C. Huang and J. Cao, "Almost sure exponential stability of stochastic cellular neural networks with unbounded distributed delays," Neurocomputing, vol. 72, pp. 3352–3356, 2009.
[12] C. Huang and J.
Cao, “On pth moment exponential stability of stochastic Cohen-Grossberg neural networks with time-varying delays,” Neurocomputing, vol. 73, pp. 986–990, 2010.
[13] R. Rakkiyappan and P. Balasubramaniam, “Delay-dependent asymptotic stability for stochastic delayed recurrent neural networks with time varying delays,” Applied Mathematics and Computation, vol. 198, no. 2, pp. 526–533, 2008.
[14] H. Zhao, N. Ding, and L. Chen, “Almost sure exponential stability of stochastic fuzzy cellular neural networks with delays,” Chaos, Solitons & Fractals, vol. 40, no. 4, pp. 1653–1659, 2009.
[15] B. Li and D. Xu, “Mean square asymptotic behavior of stochastic neural networks with infinitely distributed delay,” Neurocomputing, vol. 72, pp. 3311–3317, 2009.
[16] L. Wan, Q. Zhou, P. Wang, and J. Li, “Ultimate boundedness and an attractor for stochastic Hopfield neural networks with time-varying delays,” Nonlinear Analysis: Real World Applications, vol. 13, no. 2, pp. 953–958, 2012.
[17] L. Wan and Q. Zhou, “Attractor and ultimate boundedness for stochastic cellular neural networks with delays,” Nonlinear Analysis: Real World Applications, vol. 12, no. 5, pp. 2561–2566, 2011.
[18] W. Yu, J. Cao, and W. Lu, “Synchronization control of switched linearly coupled neural networks with delay,” Neurocomputing, vol. 73, pp. 858–866, 2010.
[19] L. Wu, Z. Feng, and W. Zheng, “Exponential stability analysis for delayed neural networks with switching parameters: average dwell time approach,” IEEE Transactions on Neural Networks, vol. 21, pp. 1396–1407, 2010.
[20] C. W. Wu and L. O. Chua, “A simple way to synchronize chaotic systems with applications to secure communication systems,” International Journal of Bifurcation and Chaos, vol. 3, pp. 1619–1627, 1993.
[21] W. Yu, J. Cao, and K. Yuan, “Synchronization of switched system and application in communication,” Physics Letters A, vol. 372, pp. 4438–4445, 2008.
[22] C. Maia and M.
Goncalves, “Application of switched adaptive system to load forecasting,” Electric Power Systems Research, vol. 78, pp. 721–727, 2008.
[23] H. Wu, X. Liao, W. Feng, S. Guo, and W. Zhang, “Robust stability analysis of uncertain systems with two additive time-varying delay components,” Applied Mathematical Modelling, vol. 33, no. 12, pp. 4345–4353, 2009.
[24] X. Yang, C. Huang, and Q. Zhu, “Synchronization of switched neural networks with mixed delays via impulsive control,” Chaos, Solitons & Fractals, vol. 44, no. 10, pp. 817–826, 2011.
[25] J. Lian and K. Zhang, “Exponential stability for switched Cohen-Grossberg neural networks with average dwell time,” Nonlinear Dynamics, vol. 63, no. 3, pp. 331–343, 2011.
[26] T.-F. Li, J. Zhao, and G. M. Dimirovski, “Stability and $L_2$-gain analysis for switched neutral systems with mixed time-varying delays,” Journal of the Franklin Institute, vol. 348, no. 9, pp. 2237–2256, 2011.
[27] C. Huang and J. Cao, “Convergence dynamics of stochastic Cohen-Grossberg neural networks with unbounded distributed delays,” IEEE Transactions on Neural Networks, vol. 22, pp. 561–572, 2011.
[28] J. Hespanha and A. Morse, “Stability of switched systems with average dwell time,” in Proceedings of the 38th IEEE Conference on Decision and Control, pp. 2655–2660, 1999.
[29] K. Gu, “An integral inequality in the stability problem of time-delay systems,” in Proceedings of the 39th IEEE Conference on Decision and Control, pp. 2805–2810, 2000.
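As a complementary check on the numerical example, the feasibility of the matrices reported in (44) can be verified independently: every solution of a strict LMI must be symmetric and positive definite. The following is a minimal sketch in Python/NumPy rather than the MATLAB LMI toolbox used in the paper; the labels P1 through T2 are assumed names for the solved matrices, since the original symbols do not survive extraction.

```python
import numpy as np

# Solutions of LMIs (35) for alpha = 0.5, values copied from (44).
# The names P/Q/R/S/T per subsystem are assumptions, not the paper's notation.
solutions = {
    "P1": [[1.4968, 0.0], [0.0, 1.4851]],
    "Q1": [[1.6073, -0.0528], [-0.0528, 1.4567]],
    "R1": [[1.8642, 0.4698], [0.4698, 1.5241]],
    "S1": [[2.7467, 0.0225], [0.0225, 1.9941]],
    "T1": [[5.4373, 0.0644], [0.0644, 4.5969]],
    "P2": [[1.4316, 0.0], [0.0, 1.4528]],
    "Q2": [[1.6541, 0.0229], [0.0229, 1.8391]],
    "R2": [[1.0837, 0.4540], [0.4540, 1.2710]],
    "S2": [[1.6888, 0.4356], [0.4356, 1.6165]],
    "T2": [[4.5736, 0.5698], [0.5698, 4.4524]],
}

for name, rows in solutions.items():
    m = np.array(rows)
    # A feasible strict-LMI solution must be symmetric ...
    assert np.allclose(m, m.T), f"{name} is not symmetric"
    # ... and positive definite: smallest eigenvalue strictly positive.
    assert np.linalg.eigvalsh(m).min() > 0, f"{name} is not positive definite"

print("all LMI solutions are symmetric positive definite")
```

For 2x2 matrices the same conclusion follows by hand from Sylvester's criterion (positive trace and determinant), but the eigenvalue check above generalizes to the higher-dimensional LMIs that arise for larger networks.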