Hindawi Publishing Corporation
Abstract and Applied Analysis
Volume 2013, Article ID 270791, 9 pages
http://dx.doi.org/10.1155/2013/270791
Research Article
Asymptotic Behavior of Switched Stochastic Delayed Cellular
Neural Networks via Average Dwell Time Method
Hanfeng Kuang,1 Jinbo Liu,1 Xi Chen,2 Jie Mao,1 and Linjie He3
1 College of Mathematics and Computing Science, Changsha University of Science and Technology, Changsha, Hunan 410114, China
2 Hunan Provincial Center for Disease Control and Prevention, Changsha, Hunan 410005, China
3 School of Business, Hunan Normal University, Changsha, Hunan 410081, China
Correspondence should be addressed to Hanfeng Kuang; kuanghanfeng@126.com
Received 28 January 2013; Accepted 31 March 2013
Academic Editor: Zhichun Yang
Copyright © 2013 Hanfeng Kuang et al. This is an open access article distributed under the Creative Commons Attribution License,
which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
The asymptotic behavior of a class of switched stochastic cellular neural networks (CNNs) with mixed delays (discrete time-varying delays and distributed time-varying delays) is investigated in this paper. Employing the average dwell time (ADT) approach, stochastic analysis techniques, and the linear matrix inequality (LMI) technique, some novel sufficient conditions for the asymptotic behavior (mean-square ultimate boundedness, the existence of an attractor, and mean-square exponential stability) are established. A numerical example is provided to illustrate the effectiveness of the proposed results.
1. Introduction
Since Chua and Yang's seminal work on cellular neural networks (CNNs) in 1988 [1, 2], CNNs have been successfully applied in various areas such as signal processing, pattern recognition, associative memory, and optimization problems (see, e.g., [3–5]). From a practical point of view, in both biological and man-made neural networks, the processing of moving images and pattern recognition problems require the introduction of delays in the signals transmitted among the cells [6, 7]. Beyond the widely used discrete delays, distributed delays arise because neural networks usually have a spatial extent due to the presence of a multitude of parallel pathways with a variety of axon sizes and lengths. The mathematical model can be described by the following differential equations:
\dot{x}_i(t) = -d_i x_i(t) + \sum_{j=1}^{n} a_{ij} f_j(x_j(t)) + \sum_{j=1}^{n} b_{ij} f_j(x_j(t - \tau_i(t))) + \sum_{j=1}^{n} c_{ij} \int_{t-h_i(t)}^{t} f_j(x_j(s)) \, ds + J_i, \quad i = 1, \ldots, n, (1)
where t ≥ 0 and n ≥ 2 is the number of units in the neural network; x_i(t) denotes the potential (or voltage) of cell i at time t; f_j(·) denotes a nonlinear output function; d_i > 0 denotes the rate with which cell i resets its potential to the resting state when isolated from other cells and external inputs; a_{ij}, b_{ij}, c_{ij} denote the strengths of connectivity between cells i and j, respectively; τ_i(t) and h_i(t) are the discrete time-varying delays and distributed time-varying delays, respectively.
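To make model (1) concrete, it can be integrated numerically with a forward-Euler scheme. In the sketch below, the weight matrices echo subsystem 1 of the example in Section 4, while the external inputs J and the constant initial history are illustrative assumptions.

```python
import math

# Forward-Euler sketch of model (1) for n = 2. Weights loosely follow the
# Section 4 example; the inputs J and the initial history are assumptions.
n, dt, steps = 2, 0.005, 2000
d = [1.0, 1.0]
A = [[0.3, 0.1], [0.2, 0.2]]
B = [[0.2, 0.0], [0.3, 0.5]]
C = [[0.2, -0.1], [0.3, 0.1]]
J = [0.1, -0.1]
f = lambda u: 0.5 * math.tanh(u)
tau = lambda t: 0.25 * math.sin(t) ** 2  # discrete delay tau_i(t)
h = lambda t: 0.3 * math.sin(t) ** 2     # distributed delay h_i(t)

hist = [[0.5, -0.5]]                     # constant history x(s) for s <= 0
for step in range(steps):
    t = step * dt
    x = hist[-1]
    xd = hist[max(0, step - int(tau(t) / dt))]  # state at t - tau(t)
    m = max(1, int(h(t) / dt))
    # Riemann sum approximating the distributed-delay integral of f_j
    dist = [sum(f(hist[max(0, step - v)][j]) for v in range(m)) * dt
            for j in range(n)]
    hist.append([x[i] + dt * (-d[i] * x[i]
                              + sum(A[i][j] * f(x[j]) for j in range(n))
                              + sum(B[i][j] * f(xd[j]) for j in range(n))
                              + sum(C[i][j] * dist[j] for j in range(n))
                              + J[i])
                 for i in range(n)])
print(hist[-1])  # the trajectory stays bounded
```

With these small gains and d_i = 1, the simulated trajectory settles into a bounded region, which is exactly the kind of behavior formalized later as mean-square ultimate boundedness.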
Neural networks are inherently nonlinear; in the real world, nonlinear problems are the rule rather than the exception, since nonlinearity is in the nature of matter and its development [8, 9]. Although discrete delays combined with distributed delays can usually provide a good approximation of the primary model, most real models are also affected by external perturbations of great uncertainty. For
instance, in electronic implementations, it has been realized that stochastic disturbances are mostly inevitable owing to thermal noise. As Haykin [10] points out, in real nervous systems synaptic transmission is a noisy process brought on by random fluctuations in the release of neurotransmitters and other probabilistic causes. Consequently, noise is unavoidable and should be taken into consideration in modeling. Moreover, it has been well recognized that a CNN can be stabilized or destabilized by certain stochastic inputs. Therefore, it is of significant importance to consider stochastic effects in delayed neural networks. One approach to the mathematical incorporation of such effects is to use probabilistic threshold models. However, the previous literature focuses almost exclusively on the stability of stochastic neural networks with delays [11–14]. Actually, studies on dynamical systems involve not only a discussion of the stability property but also other dynamic behaviors such as ultimate boundedness and attractors, yet there are very few results on the ultimate boundedness and attractors of stochastic neural networks [15–17]. Hence, discussing the asymptotic behavior of neural networks with mixed delays is valuable and meaningful.
On the other hand, neural networks often exhibit the special characteristic of network mode switching; that is, a neural network sometimes has finitely many modes that switch from one to another at different times according to a switching law generated by some switching logic. As an important class of hybrid systems, switched systems arise in many practical processes. The analysis of switched systems has drawn considerable attention since they have numerous applications in the control of mechanical systems, computer communities, the automotive industry, electric power systems, and many other fields [18–22]. Most recently, the stability analysis of switched neural systems has been further investigated, mainly based on Lyapunov functions [23, 24]. It is worth noting that the average dwell time (ADT) approach is an effective method for switched systems, which avoids requiring a common Lyapunov function and can be adopted to obtain less conservative stability conditions. For instance, based on the average dwell time method, stability has been discussed for uncertain switched Cohen-Grossberg neural networks with interval time-varying delay and distributed time-varying delay in [25]. In [26], the average dwell time method has been utilized to obtain sufficient conditions for the exponential stability and the weighted L_2 gain of a class of switched systems.
However, it is worth emphasizing that when the activation functions are unbounded, as in some special applications, the existence of an equilibrium point cannot be guaranteed [27]. In these circumstances, discussing the stability of an equilibrium point of switched neural networks becomes infeasible, which motivates us to consider the ultimate boundedness and attractor of switched neural networks. Unfortunately, as far as we know, the asymptotic behavior of switched systems with mixed time delays has not been investigated yet, let alone the asymptotic behavior of switched stochastic systems. This research is challenging and interesting since it integrates switched hybrid systems with stochastic systems, and it is thus theoretically and practically significant. Hence, the asymptotic behavior of switched stochastic neural networks with mixed delays deserves intensive study.
Motivated by the above analysis, the main purpose of this paper is to establish sufficient conditions for the asymptotic behavior (mean-square ultimate boundedness, the existence of an attractor, and mean-square exponential stability) of the switched stochastic system. This paper is organized as follows. In Section 2, the considered model of a switched stochastic CNN with mixed delays is presented, together with the necessary assumptions, definitions, and lemmas. In Section 3, mean-square ultimate boundedness and the attractor of the proposed model are studied. A numerical example demonstrating the effectiveness of the theoretical results is given in Section 4, and we conclude the paper in Section 5.
2. Problem Formulation
In general, a stochastic cellular neural network with mixed
delays can be described as follows:
𝑑π‘₯ (𝑑) = [ − 𝐷π‘₯ (𝑑) + 𝐴𝐹 (π‘₯ (𝑑)) + 𝐡𝐹 (π‘₯ (𝑑 − 𝜏 (𝑑)))
𝑑
+𝐢 ∫
𝑑−β„Ž(𝑑)
𝐹 (π‘₯ (𝑠)) d𝑠 + 𝐽] 𝑑𝑑
(2)
+ 𝐺 (π‘₯ (𝑑) , π‘₯ (𝑑 − 𝜏 (𝑑))) 𝑑𝑀 (𝑑) ,
where π‘₯(𝑑) = (π‘₯1 (𝑑), . . . , π‘₯𝑛 (𝑑))𝑇 ∈ 𝑅𝑛 , 𝐹(π‘₯(𝑑)) = (𝑓1 (π‘₯1 (𝑑)),
. . . , 𝑓𝑛 (π‘₯𝑛 (𝑑)))𝑇 , 𝐷 = diag(𝑑1 , . . . , 𝑑𝑛 ), 𝐴 = (π‘Žπ‘–π‘— )𝑛×𝑛 , 𝐡 =
(𝑏𝑖𝑗 )𝑛×𝑛 , 𝐢 = (𝑐𝑖𝑗 )𝑛×𝑛 , 𝐽 = (𝐽1 , . . . , 𝐽𝑛 )𝑇 , 𝜏(𝑑) = (𝜏1 (𝑑), . . .,
πœπ‘› (𝑑))𝑇 , β„Ž(𝑑) = (β„Ž1 (𝑑), . . . , β„Žπ‘› (𝑑))𝑇 , 𝐺(⋅, ⋅) is a 𝑛 × π‘› matrix
valued function, and 𝑀(𝑑) = (𝑀1 (𝑑), . . . , 𝑀𝑛 (𝑑))𝑇 is an 𝑛-dimensional Brownian motion defined on a complete probability
space (Ω, F, P) with a natural filtration {F𝑑 }𝑑≥0 (i.e., F𝑑 =
𝜎{𝑀(𝑠) : 0 ≤ 𝑠 ≤ 𝑑}).
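A sample path of the stochastic model (2) can be sketched with an Euler–Maruyama discretization. In the sketch below, all parameter values are illustrative assumptions, the distributed-delay term is omitted for brevity, and the diagonal diffusion entries are a simple placeholder of the linear-growth type bounded as in assumption (H3).

```python
import math, random

# Euler-Maruyama sketch of the stochastic model (2) for n = 2. Parameters and
# the diffusion G are illustrative assumptions; the distributed-delay term
# C * integral(F) is omitted to keep the sketch short.
random.seed(0)
n, dt, steps = 2, 0.005, 2000
D = [1.0, 1.0]
A = [[0.3, 0.1], [0.2, 0.2]]
B = [[0.2, 0.0], [0.3, 0.5]]
J = [0.1, -0.1]
f = lambda u: 0.5 * math.tanh(u)
tau = lambda t: 0.25 * math.sin(t) ** 2

hist = [[0.5, -0.5]]
for step in range(steps):
    t = step * dt
    x = hist[-1]
    xd = hist[max(0, step - int(tau(t) / dt))]  # state at t - tau(t)
    new = []
    for i in range(n):
        drift = (-D[i] * x[i]
                 + sum(A[i][j] * f(x[j]) for j in range(n))
                 + sum(B[i][j] * f(xd[j]) for j in range(n))
                 + J[i])
        g_i = 0.1 * (x[i] + xd[i])  # placeholder diagonal diffusion entry
        new.append(x[i] + drift * dt + g_i * random.gauss(0.0, math.sqrt(dt)))
    hist.append(new)
print(hist[-1])  # one sample path; bounded in this realization
```

Each step adds a Gaussian increment with variance dt, so the simulated path fluctuates around the deterministic trajectory while remaining bounded, in line with the mean-square bounds established below.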
By introducing a switching signal into system (2) and taking a set of neural networks as the individual subsystems, the switched system is obtained, described as
𝑑π‘₯ (𝑑) = [ − 𝐷𝜎(𝑑) π‘₯ (𝑑) + 𝐴 𝜎(𝑑) 𝐹 (π‘₯ (𝑑)) + 𝐡𝜎(𝑑) 𝐹 (π‘₯ (𝑑 − 𝜏 (𝑑)))
+𝐢𝜎(𝑑) ∫
𝑑
𝑑−β„Ž(𝑑)
𝐹 (π‘₯ (𝑠)) d𝑠 + 𝐽] 𝑑𝑑
+ 𝐺𝜎(𝑑) (π‘₯ (𝑑) , π‘₯ (𝑑 − 𝜏 (𝑑))) 𝑑𝑀 (𝑑) ,
(3)
where σ(t) : [0, +∞) → Σ = {1, 2, ..., m} is the switching signal. At each time instant t, the index σ(t) = i ∈ Σ means that the ith subsystem is activated.
For the convenience of discussion, it is necessary to introduce some notations. R^n denotes the n-dimensional Euclidean space. X ≤ Y (X < Y) means that each pair of corresponding elements of X and Y satisfies the inequality "≤" ("<"). X is called a positive (negative) matrix if X > 0 (X < 0). X^T denotes the transpose of a square matrix X, and the symbol "∗" within a matrix represents the symmetric term of the matrix. λ_min(X) and λ_max(X) denote the minimum and maximum eigenvalues of a matrix X, respectively. I denotes the identity matrix.
Let C([−τ*, 0], R^n) denote the Banach space of continuous functions mapping [−τ*, 0] into R^n with the topology of uniform convergence. For any φ ∈ C([−τ*, 0], R^n), we define ‖φ‖ = max_{1≤i≤n} sup_{−τ*≤s≤0} |φ_i(s)|.
The initial condition for system (3) is given in the form
x(t) = φ, φ ∈ C_{F_0}([−τ*, 0], R^n), (4)
where C_{F_0}([−τ*, 0], R^n) is the family of all F_0-measurable bounded C([−τ*, 0], R^n)-valued random variables. Throughout this paper, we assume the following assumptions are always satisfied.
(H1) The discrete time-varying delay τ(t) and the distributed time-varying delay h(t) satisfy
0 ≤ τ(t) ≤ τ, 0 ≤ h(t) ≤ h, τ* = max{τ, h}, (5)
where τ, h, τ* are scalars.
(H2) There exist constants l_j and L_j, j = 1, 2, ..., n, such that
l_j ≤ (f_j(x) − f_j(y))/(x − y) ≤ L_j, ∀x, y ∈ R, x ≠ y. (6)
Moreover, we define
Σ_1 = diag{l_1 L_1, l_2 L_2, ..., l_n L_n}, Σ_2 = diag{l_1 + L_1, l_2 + L_2, ..., l_n + L_n}. (7)
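As a quick illustration of (H2): the activation used in the example of Section 4, f(u) = 0.5 tanh(u), satisfies the sector condition with l_j = 0 and L_j = 0.5, which can be checked numerically on a grid (a sanity check, not part of any proof).

```python
import math
import itertools

# Numerically verify assumption (H2) for f(u) = 0.5*tanh(u): every difference
# quotient (f(x) - f(y))/(x - y) lies in the sector [l, L] = [0, 0.5].
f = lambda u: 0.5 * math.tanh(u)
pts = [v / 10.0 for v in range(-50, 51)]
quotients = [(f(x) - f(y)) / (x - y)
             for x, y in itertools.product(pts, pts) if x != y]
print(0.0 <= min(quotients) and max(quotients) <= 0.5)  # True
```

Since tanh is strictly increasing with derivative at most 1, every quotient is strictly positive and bounded by 0.5, so both sector constants are attained only in the limit.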
(H3) We assume that G(t, x, y) : R_+ × R^n × R^n → R^{n×m} is locally Lipschitz continuous and satisfies the following condition:
trace[G^T(t, x, y) G(t, x, y)] ≤ x^T U_1^T U_1 x + y^T U_2^T U_2 y + 2 x^T U_1^T U_2 y, (8)
where U_1 > 0, U_2 > 0 are constant matrices with appropriate dimensions.

Some definitions and lemmas are introduced as follows.

Definition 1 (see [15]). System (2) is said to be mean-square ultimately bounded if there exists a constant B̃ > 0 such that, for any initial value φ ∈ C_{F_0}, there is a t′ = t′(φ) > 0 such that, for all t ≥ t′, the solution x(t, φ) of system (2) satisfies
E‖x(t, φ)‖² ≤ B̃. (9)
In this case, the set A = {φ ∈ C_{F_0} | E‖φ(s)‖² ≤ B̃} is said to be an attractor of system (2) in the mean-square sense. Clearly, the proposition above is equivalent to lim sup_{t→∞} E‖x(t)‖² ≤ B̃.

Definition 2 (see [28]). For any switching signal σ(t), there is a corresponding switching sequence {(σ(t_0), t_0), ..., (σ(t_k), t_k), ... | k = 0, 1, ...}, where (σ(t_k), t_k) means that the σ(t_k)th subsystem is activated during t ∈ [t_k, t_{k+1}), and k denotes the switching ordinal number. For any finite constants T_1, T_2 satisfying T_2 > T_1 ≥ 0, denote the number of discontinuities of the switching signal σ(t) over the time interval (T_1, T_2) by N_σ(T_1, T_2). If N_σ(T_1, T_2) ≤ N_0 + (T_2 − T_1)/T_α holds for T_α > 0, N_0 > 0, then T_α is called the average dwell time and N_0 the chatter bound.

Lemma 3. Let X and Y be any n-dimensional real vectors, P a positive semidefinite matrix, and ε > 0 a scalar. Then the following inequality holds:
2 X^T P Y ≤ ε X^T P X + ε^{−1} Y^T P Y. (10)

Lemma 4 (see [29]). For any positive definite constant matrix M ∈ R^{n×n} and a scalar r > 0, if there exists a vector function η : [0, r] → R^n such that the integrals ∫_0^r η^T(s) M η(s) ds and ∫_0^r η(s) ds are well defined, then
∫_0^r η^T(s) M η(s) ds ≥ (1/r) (∫_0^r η(s) ds)^T M (∫_0^r η(s) ds). (11)

Let C^{2,1}(R^n × R_+; R_+) denote the family of all nonnegative functions V(t, x) on R^n × R_+ that are twice continuously differentiable in x and once differentiable in t. If V ∈ C^{2,1}(R^n × R_+; R_+), define an operator LV associated with the general stochastic system dx(t) = f(x(t), t) dt + G(x(t), x(t − τ(t))) dw(t) as
LV(t, x) = V_t(t, x) + V_x(t, x) f(x(t), t) + (1/2) trace{G^T(x(t), x(t − τ(t))) V_{xx}(t, x) G(x(t), x(t − τ(t)))}, (12)
where
V_t(t, x) = ∂V(t, x)/∂t, V_x(t, x) = (∂V(t, x)/∂x_1, ..., ∂V(t, x)/∂x_n), V_{xx}(t, x) = (∂²V(t, x)/∂x_i ∂x_j)_{n×n}. (13)

3. Main Results
Theorem 5. If there are constants μ, ν such that \dot{τ}(t) ≤ μ and \dot{h}(t) ≤ ν, we denote g(μ), k(ν) as
g(μ) = (1 − μ) e^{−ατ} if μ ≤ 1, and g(μ) = 1 − μ if μ ≥ 1;
k(ν) = (1 − ν) e^{−αh} if ν ≤ 1, and k(ν) = 1 − ν if ν ≥ 1. (14)
For a given constant α > 0, if there exist positive definite matrices P = diag(p_1, p_2, ..., p_n), Q, R, S, Z, U_1, U_2, Y_i = diag(Y_{i1}, Y_{i2}, ..., Y_{in}), i = 1, 2, such that the following condition holds:

Δ_1 =
[Φ_11 Φ_12 Φ_13 Φ_14 0    Φ_16
 ∗    Φ_22 0    Φ_24 0    0
 ∗    ∗    Φ_33 0    0    0
 ∗    ∗    ∗    Φ_44 0    0
 ∗    ∗    ∗    ∗    Φ_55 0
 ∗    ∗    ∗    ∗    ∗    Φ_66] < 0, (15)

where
Φ_11 = 2αP − 2DP + Q + τ²S − 2Σ_1 Y_1 + αI + U_1^T P U_1,
Φ_12 = U_1^T P U_2, Φ_13 = PA + Σ_2 Y_1, Φ_14 = PB, Φ_16 = PC,
Φ_22 = −g(μ)Q − 2Σ_1 Y_2 + αI + U_2^T P U_2, Φ_24 = Σ_2 Y_2,
Φ_33 = R + h²Z − 2Y_1 + αI, Φ_44 = −k(ν)R − 2Y_2 + αI,
Φ_55 = −g(μ)S, Φ_66 = −k(ν)Z,

then system (2) is mean-square ultimately bounded.

Proof. Consider the positive definite Lyapunov functional
V(t) = V_1(t) + V_2(t) + V_3(t) + V_4(t) + V_5(t), (16)
where
V_1(t) = e^{αt} x^T(t) P x(t),
V_2(t) = ∫_{t−τ(t)}^{t} x^T(s) Q e^{αs} x(s) ds,
V_3(t) = ∫_{t−h(t)}^{t} F^T(x(s)) R e^{αs} F(x(s)) ds,
V_4(t) = τ ∫_{−τ(t)}^{0} ∫_{t+θ}^{t} x^T(s) S e^{αs} x(s) ds dθ,
V_5(t) = h ∫_{−h(t)}^{0} ∫_{t+θ}^{t} F^T(x(s)) Z e^{αs} F(x(s)) ds dθ. (17)

Then, by Itô's formula, the stochastic derivative of V(x, t) is
dV(x, t) = LV(x, t) dt + V_x(x, t) G(x(t), x(t − τ(t))) dw(t), (18)
and the operator LV_1 along the trajectory of system (2) can be obtained as
LV_1(t) = ∂V_1(x(t), t)/∂t + ∂V_1(x(t), t)/∂x × [−Dx(t) + AF(x(t)) + BF(x(t − τ(t))) + C ∫_{t−h(t)}^{t} F(x(s)) ds + J] + (1/2) trace[G^T(x(t), x(t − τ(t))) (∂²V_1(x(t), t)/∂x²) G(x(t), x(t − τ(t)))]
= α e^{αt} x^T(t) P x(t) + 2 e^{αt} x^T(t) P [−Dx(t) + AF(x(t)) + BF(x(t − τ(t))) + C ∫_{t−h(t)}^{t} F(x(s)) ds + J] + e^{αt} trace[G^T(x(t), x(t − τ(t))) P G(x(t), x(t − τ(t)))]. (19)

From Assumption (H3), Lemma 3, and (19), we can get
LV_1(t) ≤ 2α e^{αt} x^T(t) P x(t) + 2 e^{αt} x^T(t) P [−Dx(t) + AF(x(t)) + BF(x(t − τ(t))) + C ∫_{t−h(t)}^{t} F(x(s)) ds] + e^{αt} α^{−1} J^T P J + e^{αt} [x^T(t) U_1^T P U_1 x(t) + x^T(t − τ(t)) U_2^T P U_2 x(t − τ(t)) + 2 x^T(t) U_1^T P U_2 x(t − τ(t))]. (20)
Similarly, calculating the operators LV_i (i = 2, 3, 4, 5) along the trajectory of system (2), one can get
LV_2 = e^{αt} x^T(t) Q x(t) − (1 − \dot{τ}(t)) e^{α(t−τ(t))} x^T(t − τ(t)) Q x(t − τ(t))
≤ e^{αt} x^T(t) Q x(t) − (1 − μ) e^{α(t−τ)} x^T(t − τ(t)) Q x(t − τ(t))
≤ e^{αt} x^T(t) Q x(t) − g(μ) e^{αt} x^T(t − τ(t)) Q x(t − τ(t)),

LV_3 ≤ e^{αt} F^T(x(t)) R F(x(t)) − k(ν) e^{αt} F^T(x(t − τ(t))) R F(x(t − τ(t))),

LV_4 = τ [τ(t) e^{αt} x^T(t) S x(t) − (1 − \dot{τ}(t)) e^{α(t−τ(t))} ∫_{t−τ(t)}^{t} x^T(s) S x(s) ds]
≤ τ² e^{αt} x^T(t) S x(t) − τ g(μ) e^{αt} ∫_{t−τ(t)}^{t} x^T(s) S x(s) ds,

LV_5 ≤ h² e^{αt} F^T(x(t)) Z F(x(t)) − h k(ν) e^{αt} ∫_{t−h(t)}^{t} F^T(x(s)) Z F(x(s)) ds. (21)

According to Lemma 4, the following inequalities can be obtained:
∫_{t−τ(t)}^{t} x^T(s) S x(s) ds ≥ (1/τ) (∫_{t−τ(t)}^{t} x(s) ds)^T S (∫_{t−τ(t)}^{t} x(s) ds),
∫_{t−h(t)}^{t} F^T(x(s)) Z F(x(s)) ds ≥ (1/h) (∫_{t−h(t)}^{t} F(x(s)) ds)^T Z (∫_{t−h(t)}^{t} F(x(s)) ds). (22)

Then, we can get
LV_4 ≤ τ² e^{αt} x^T(t) S x(t) − g(μ) e^{αt} (∫_{t−τ(t)}^{t} x(s) ds)^T S (∫_{t−τ(t)}^{t} x(s) ds),
LV_5 ≤ h² e^{αt} F^T(x(t)) Z F(x(t)) − k(ν) e^{αt} (∫_{t−h(t)}^{t} F(x(s)) ds)^T Z (∫_{t−h(t)}^{t} F(x(s)) ds). (23)

On the other hand, it follows from Assumption (H2) that
[f_i(x_i(t)) − f_i(0) − L_i x_i(t)] [f_i(x_i(t)) − f_i(0) − l_i x_i(t)] ≤ 0,
[f_i(x_i(t − τ(t))) − f_i(0) − L_i x_i(t − τ(t))] [f_i(x_i(t − τ(t))) − f_i(0) − l_i x_i(t − τ(t))] ≤ 0,
i = 1, 2, ..., n. (24)

Then we obtain
0 ≤ δ_1 = −2 Σ_{i=1}^{n} y_{1i} [f_i(x_i(t)) − f_i(0) − L_i x_i(t)] [f_i(x_i(t)) − f_i(0) − l_i x_i(t)],
0 ≤ δ_2 = −2 Σ_{i=1}^{n} y_{2i} [f_i(x_i(t − τ(t))) − f_i(0) − L_i x_i(t − τ(t))] [f_i(x_i(t − τ(t))) − f_i(0) − l_i x_i(t − τ(t))],

and, expanding δ_1,
δ_1 = −2 Σ_{i=1}^{n} y_{1i} [f_i(x_i(t)) − L_i x_i(t)] [f_i(x_i(t)) − l_i x_i(t)] + 2 Σ_{i=1}^{n} y_{1i} f_i(0) [2 f_i(x_i(t)) − (L_i + l_i) x_i(t)] − 2 Σ_{i=1}^{n} y_{1i} f_i²(0)
≤ −2 Σ_{i=1}^{n} y_{1i} [f_i(x_i(t)) − L_i x_i(t)] [f_i(x_i(t)) − l_i x_i(t)] + Σ_{i=1}^{n} [α f_i²(x_i(t)) + 4α^{−1} f_i²(0) y_{1i}² + α x_i²(t) + α^{−1} f_i²(0) y_{1i}² (L_i + l_i)²]. (25)

Similarly, one can get
δ_2 ≤ −2 Σ_{i=1}^{n} y_{2i} [f_i(x_i(t − τ(t))) − L_i x_i(t − τ(t))] [f_i(x_i(t − τ(t))) − l_i x_i(t − τ(t))] + Σ_{i=1}^{n} [α f_i²(x_i(t − τ(t))) + 4α^{−1} f_i²(0) y_{2i}² + α x_i²(t − τ(t)) + α^{−1} f_i²(0) y_{2i}² (L_i + l_i)²]. (26)
Denote
ζ(t) = [x^T(t), x^T(t − τ(t)), F^T(x(t)), F^T(x(t − τ(t))), (∫_{t−τ(t)}^{t} x(s) ds)^T, (∫_{t−h(t)}^{t} F(x(s)) ds)^T]^T, (27)
and, combining (16)–(26), we can get
dV = LV_1 dt + LV_2 dt + LV_3 dt + LV_4 dt + LV_5 dt + 2 P e^{αt} x^T(t) G(x(t), x(t − τ(t))) dw(t)
≤ e^{αt} ζ^T(t) Δ_1 ζ(t) dt + e^{αt} N_1 dt + 2 P e^{αt} x^T(t) G(x(t), x(t − τ(t))) dw(t), (28)
where
N_1 = α^{−1} J^T P J + Σ_{i=1}^{n} [4α^{−1} f_i²(0) y_{1i}² + α^{−1} f_i²(0) y_{1i}² (L_i + l_i)² + 4α^{−1} f_i²(0) y_{2i}² + α^{−1} f_i²(0) y_{2i}² (L_i + l_i)²]. (29)

Integrating both sides of (28) over the interval [t_0, t] and then taking expectations results in
K e^{αt} ‖x(t)‖² ≤ V(x(t)) ≤ V(x(t_0)) + α^{−1} e^{αt} N_1 + ∫_{t_0}^{t} 2 P e^{αs} x(s) G(x(s), x(s − τ(s))) dw(s), (30)
where K = λ_min(P). Therefore, one obtains
E{V(x(t))} ≤ E{V(x(t_0))} + E{α^{−1} e^{αt} N_1}, (31)
which implies
E‖x(t)‖² ≤ [e^{−αt} E{V(x(t_0))} + α^{−1} N_1] / K. (32)
If one chooses B̃ = (1 + α^{−1} N_1)/K > 0, then, for any initial value φ ∈ C_{F_0}, there is t′ = t′(φ) > 0 such that e^{−αt} E{V(x(t_0))} ≤ 1 for all t ≥ t′. According to Definition 1, we have E‖x(t, φ)‖² ≤ B̃ for all t ≥ t′. That is to say, system (2) is mean-square ultimately bounded. This completes the proof.

Theorem 6. If all of the conditions of Theorem 5 hold, then there exists an attractor A_B̃ = {φ ∈ C_{F_0} | E‖φ(s)‖² ≤ B̃} for the solutions of system (2).

Proof. If one chooses B̃ = (1 + α^{−1} N_1)/K > 0, Theorem 5 shows that, for any φ, there is t′ > 0 such that E‖x(t, φ)‖² ≤ B̃ for all t ≥ t′. Let A_B̃ = {φ ∈ C_{F_0} | E‖φ(s)‖² ≤ B̃}. Clearly, A_B̃ is closed, bounded, and invariant. Furthermore, lim_{t→∞} sup inf_{y∈A_B̃} ‖x(t, φ) − y‖ = 0. Therefore, A_B̃ is an attractor for the solutions of system (2). This completes the proof.

Corollary 7. In addition to all of the conditions of Theorem 5, if J = 0, G(t, 0, 0) = 0, and f_i(0) = 0 for all i = 1, 2, ..., n, then system (2) has a trivial solution x(t) ≡ 0, and the trivial solution of system (2) is mean-square exponentially stable.

Proof. If J = 0 and f_i(0) = 0 (i = 1, 2, ..., n), then N_1 = 0, and it is obvious that system (2) has a trivial solution x(t) ≡ 0. From Theorem 5, one has
E‖x(t, φ)‖² ≤ K* e^{−αt}, ∀φ, (33)
where K* = E{V(x(t_0))}/K. Therefore, the trivial solution of system (2) is mean-square exponentially stable. This completes the proof.

Based on Theorem 5–Corollary 7, we now present conditions for the mean-square ultimate boundedness of the switched system (3) by applying the average dwell time method.

Theorem 8. If there are constants μ, ν such that \dot{τ}(t) ≤ μ and \dot{h}(t) ≤ ν, we denote g(μ), k(ν) as
g(μ) = (1 − μ) e^{−ατ} if μ ≤ 1, and g(μ) = 1 − μ if μ ≥ 1;
k(ν) = (1 − ν) e^{−αh} if ν ≤ 1, and k(ν) = 1 − ν if ν ≥ 1. (34)
For a given constant α > 0, if there exist positive definite matrices Q_i, R_i, S_i, Z_i, U_{1i}, U_{2i}, P_i = diag(p_{i1}, p_{i2}, ..., p_{in}), Y_i = diag(Y_{i1}, Y_{i2}, ..., Y_{in}), i = 1, 2, such that the following condition holds:
Δ_{i1} =
[Φ_{i11} Φ_{i12} Φ_{i13} Φ_{i14} 0       Φ_{i16}
 ∗       Φ_{i22} 0       Φ_{i24} 0       0
 ∗       ∗       Φ_{i33} 0       0       0
 ∗       ∗       ∗       Φ_{i44} 0       0
 ∗       ∗       ∗       ∗       Φ_{i55} 0
 ∗       ∗       ∗       ∗       ∗       Φ_{i66}] < 0, (35)
where
Φ_{i11} = 2αP_i − 2DP_i + Q_i + τ²S_i − 2Σ_1 Y_1 + αI + U_{1i}^T P_i U_{1i},
Φ_{i12} = U_{1i}^T P_i U_{2i}, Φ_{i13} = P_i A_i + Σ_2 Y_1, Φ_{i14} = P_i B_i, Φ_{i16} = P_i C_i,
Φ_{i22} = −g(μ)Q_i − 2Σ_1 Y_2 + αI + U_{2i}^T P_i U_{2i}, Φ_{i24} = Σ_2 Y_2,
Φ_{i33} = R_i + h²Z_i − 2Y_1 + αI, Φ_{i44} = −k(ν)R_i − 2Y_2 + αI,
Φ_{i55} = −g(μ)S_i, Φ_{i66} = −k(ν)Z_i, (36)

then system (3) is mean-square ultimately bounded for any switching signal with average dwell time satisfying
T_α > T_α* = ln R_max / α, (37)
where R_max = max_{k∈Σ, 1≤i≤n} {R_{i_k}}.

Proof. Define the Lyapunov functional candidate
V_{σ(t)} = e^{αt} x^T(t) P_{σ(t)} x(t) + ∫_{t−τ(t)}^{t} x^T(s) Q_{σ(t)} e^{αs} x(s) ds + ∫_{t−h(t)}^{t} F^T(x(s)) R_{σ(t)} e^{αs} F(x(s)) ds + τ ∫_{−τ(t)}^{0} ∫_{t+θ}^{t} x^T(s) S_{σ(t)} e^{αs} x(s) ds dθ + h ∫_{−h(t)}^{0} ∫_{t+θ}^{t} F^T(x(s)) Z_{σ(t)} e^{αs} F(x(s)) ds dθ. (38)

From (16) and (32), we have the following result:
E‖x(t)‖² ≤ R_0 E‖x(t_0)‖² e^{−α(t−t_0)} / K + Λ/K, (39)
where Λ = α^{−1} N_1 and R_0 is a positive constant.

When t ∈ [t_k, t_{k+1}), the i_k th subsystem is activated; from (39) and Theorem 5, we can get
E‖x(t)‖² ≤ R_{i_k} E‖x(t_k)‖² e^{−α(t−t_k)} / K_{i_k} + Λ/K_{i_k} = H_{i_k} E‖x(t_k)‖² e^{−α(t−t_k)} + J_{i_k}, (40)
where R_{i_k} is a positive constant, K_{i_k} = λ_min(P_i), H_{i_k} = R_{i_k}/K_{i_k}, and J_{i_k} = Λ/K_{i_k}.

Since the system state is continuous, it follows from (40) that
E‖x(t)‖² ≤ H_{i_k} E‖x(t_k)‖² e^{−α(t−t_k)} + J_{i_k} ≤ ⋯
≤ e^{Σ_{v=0}^{k} ln H_{i_v} − α(t−t_0)} E‖x(t_0)‖² + [H_{i_k} e^{−α(t−t_k)} J_{i_k} + H_{i_k} H_{i_{k−1}} e^{−α(t−t_{k−1})} J_{i_{k−1}} + H_{i_k} H_{i_{k−1}} H_{i_{k−2}} e^{−α(t−t_{k−2})} J_{i_{k−2}} + ⋯ + H_{i_k} H_{i_{k−1}} ⋯ H_{i_1} e^{−α(t−t_1)} J_{i_1} + J_{i_k}]
≤ e^{(k+1) ln H_max − α(t−t_0)} E‖x(t_0)‖² + [H_max^{k} J_max + H_max^{k−1} J_max + H_max^{k−2} J_max + ⋯ + H_max² J_max + H_max J_max + J_max]
≤ H_max e^{k ln H_max − α(t−t_0)} E‖x(t_0)‖² + J_max (H_max^{k+1} − 1)/(H_max − 1)
≤ H_max e^{ln H_max · N_σ(t_0, t) − α(t−t_0)} E‖x(t_0)‖² + J_max (H_max^{k+1} − 1)/(H_max − 1)
≤ (R_max / K_min^{k+1}) e^{N_0 ln R_max − (α − (ln R_max/T_α))(t−t_0)} E‖x(t_0)‖² + Λ[(R_max^{n+1}/K_min^{n+1}) − 1]/(R_max − K_min), (41)
where K_min = min_{i_k}{K_{i_k}} and H_max = max_{i_k}{H_{i_k}}.

If one chooses B̃ = (1/K_min) + Λ[(R_max^{n+1}/K_min^{n+1}) − 1]/(R_max − K_min) > 0, then, for any initial value φ ∈ C_{F_0}, there is t′ = t′(φ) > 0 such that R_max e^{N_0 ln R_max − (α − (ln R_max/T_α))(t−t_0)} E‖x(t_0)‖² ≤ 1 for all t ≥ t′. According to Definition 1, we have E‖x(t, φ)‖² ≤ B̃ for all t ≥ t′. That is to say, system (3) is mean-square ultimately bounded, and the proof is completed.
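The step in the chain of inequalities above that collapses H_max^k J_max + ⋯ + H_max J_max + J_max into J_max (H_max^{k+1} − 1)/(H_max − 1) is just the geometric series; a quick numeric check with arbitrary illustrative values:

```python
# Geometric-series identity used in the proof of Theorem 8: for Hmax != 1,
# sum_{v=0}^{k} Hmax^v * Jmax = Jmax * (Hmax^(k+1) - 1) / (Hmax - 1).
# Hmax, Jmax, k are arbitrary illustrative values.
Hmax, Jmax, k = 1.7, 0.3, 12
lhs = sum(Hmax ** v * Jmax for v in range(k + 1))
rhs = Jmax * (Hmax ** (k + 1) - 1) / (Hmax - 1)
print(abs(lhs - rhs) < 1e-9)  # True
```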
Remark 9. In this paper, we construct two piecewise functions g(μ), k(ν) to remove the restrictive conditions μ < 1 and ν < 1 from the results, which reduces the conservatism of the obtained results and also avoids computational complexity.

Remark 10. Condition (35) is given in the form of linear matrix inequalities, which are more relaxed than an algebraic formulation. Furthermore, by using the MATLAB LMI toolbox, we can check the feasibility of (35) straightforwardly without tuning any parameters.
Theorem 11. If all of the conditions of Theorem 8 hold, then there exists an attractor A′_B̃ = {φ ∈ C_{F_0} | E‖φ(s)‖² ≤ B̃} for the solutions of system (3).

Proof. If one chooses B̃ = (1/K_min) + Λ[(R_max^{n+1}/K_min^{n+1}) − 1]/(R_max − K_min) > 0, Theorem 8 shows that, for any φ, there is t′ > 0 such that E‖x(t, φ)‖² ≤ B̃ for all t ≥ t′. Let A′_B̃ = {φ ∈ C_{F_0} | E‖φ(s)‖² ≤ B̃}. Clearly, A′_B̃ is closed, bounded, and invariant. Furthermore, lim_{t→∞} sup inf_{y∈A′_B̃} ‖x(t, φ) − y‖ = 0. Therefore, A′_B̃ is an attractor for the solutions of system (3). This completes the proof.
Corollary 12. In addition to all of the conditions of Theorem 8, if J = 0, G(t, 0, 0) = 0, and f_i(0) = 0 for all i = 1, 2, ..., n, then system (3) has a trivial solution x(t) ≡ 0, and the trivial solution of system (3) is mean-square exponentially stable.

Proof. If J = 0 and f_i(0) = 0 for all i = 1, 2, ..., n, then it is obvious that system (3) has a trivial solution x(t) ≡ 0. From Theorem 8, one has
E‖x(t, φ)‖² ≤ K̃* e^{−αt}, ∀φ, (42)
where K̃* = R_max e^{N_0 ln R_max − (α − (ln R_max/T_α))(t−t_0)} E‖x(t_0)‖² / K_min^{k+1}. Therefore, the trivial solution of system (3) is mean-square exponentially stable. This completes the proof.
Remark 13. Assumption (H3) is less conservative than that in [17] since the constants l_j and L_j are allowed to be positive, negative, or zero. Hence, the resulting activation functions f(·) may be nonmonotonic and are more general than the usual forms |f_j(u)| ≤ K_j|u|, K_j > 0, j = 1, 2, ..., n. Moreover, unlike the bounded case, there may be no equilibrium point for the switched system (3) under assumption (H3). For this reason, investigating the asymptotic behavior (the ultimate boundedness and the existence of an attractor) of a switched system containing mixed delays is more complex and challenging.
Remark 14. In this paper, the chatter bound N_0 is a positive integer, which is of more practical significance and includes the model with N_0 = 0 in [16, 25, 26] as a special case.
Remark 15. If Σ = 0, the switched delay system (3) reduces to the usual stochastic CNN with delays; in this case, the attractor and ultimate boundedness are discussed in [17]. When U_1 = U_2 = 0, the model in our paper turns out to be a switched CNN with mixed delays; to the best of our knowledge, there are no published results on this aspect yet. Thus, the main results of this paper are novel. Moreover, when uncertainties appear in the switched stochastic CNN system (3), the corresponding results can be obtained by applying a method similar to that in [25].
4. Illustrative Examples
In this section, we give a numerical example to demonstrate the validity and effectiveness of our results. Consider the switched stochastic cellular neural network (3) with two subsystems, where f_i(x_i(t)) = 0.5 tanh(x_i(t)), f_i(0) = 0 (i = 1, 2), τ(t) = 0.25 sin²(t), h(t) = 0.3 sin²(t), and the connection weight matrices are as follows:
A_1 = [0.3 0.1; 0.2 0.2], B_1 = [0.2 0; 0.3 0.5], C_1 = [0.2 −0.1; 0.3 0.1],
U_{11} = [0.1 0; −0.1 0.2], U_{21} = [0.2 0.1; 0 0.1],
A_2 = [0.2 0.4; 0.1 0.3], B_2 = [0.1 0; −0.1 0.2], C_2 = [0.3 0.2; 0.1 0.2],
U_{12} = [0.2 0.1; 0 0.3], U_{22} = [0.1 0; 0.2 0.1]. (43)
From assumptions (H1)–(H3), we obtain d_i = 1, l_i = 0, L_i = 0.5 (i = 1, 2), τ = 0.25, h = 0.3, and μ = 0.5, ν = 0.6.
Therefore, for α = 0.5, by solving the LMIs (35), we get
P_1 = [1.4968 0; 0 1.4851], Q_1 = [1.6073 −0.0528; −0.0528 1.4567],
R_1 = [1.8642 0.4698; 0.4698 1.5241], S_1 = [2.7467 0.0225; 0.0225 1.9941],
Z_1 = [5.4373 0.0644; 0.0644 4.5969],
P_2 = [1.4316 0; 0 1.4528], Q_2 = [1.6541 0.0229; 0.0229 1.8391],
R_2 = [1.0837 0.4540; 0.4540 1.2710], S_2 = [1.6888 0.4356; 0.4356 1.6165],
Z_2 = [4.5736 0.5698; 0.5698 4.4524]. (44)
Using (37), we can get the average dwell time T_α* = 1.3445.
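As a lightweight sanity check on the LMI solutions in (44), each 2 × 2 matrix can be confirmed positive definite via Sylvester's criterion (both leading principal minors positive); this is a verification aid, not part of the original computation:

```python
# Check that each matrix in (44) is positive definite using Sylvester's
# criterion for 2x2 symmetric matrices: m[0][0] > 0 and det(m) > 0.
mats = {
    "P1": [[1.4968, 0.0], [0.0, 1.4851]],
    "Q1": [[1.6073, -0.0528], [-0.0528, 1.4567]],
    "R1": [[1.8642, 0.4698], [0.4698, 1.5241]],
    "S1": [[2.7467, 0.0225], [0.0225, 1.9941]],
    "Z1": [[5.4373, 0.0644], [0.0644, 4.5969]],
    "P2": [[1.4316, 0.0], [0.0, 1.4528]],
    "Q2": [[1.6541, 0.0229], [0.0229, 1.8391]],
    "R2": [[1.0837, 0.4540], [0.4540, 1.2710]],
    "S2": [[1.6888, 0.4356], [0.4356, 1.6165]],
    "Z2": [[4.5736, 0.5698], [0.5698, 4.4524]],
}

def is_pd(m):
    # leading principal minors positive
    return m[0][0] > 0 and m[0][0] * m[1][1] - m[0][1] * m[1][0] > 0

print(all(is_pd(m) for m in mats.values()))  # True
```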
5. Conclusions
In this paper, we studied switched stochastic cellular neural networks with discrete time-varying delays and distributed time-varying delays. With the help of the average dwell time approach, novel multiple Lyapunov-Krasovskii functionals, and some inequality techniques, we obtained new sufficient conditions guaranteeing mean-square ultimate boundedness, the existence of an attractor, and mean-square exponential stability. A numerical example was also given to demonstrate our results. Furthermore, the derived conditions are presented in the form of LMIs, which are more relaxed than an algebraic formulation and can be easily checked in practice by the effective LMI toolbox in MATLAB.
Acknowledgments
The authors are extremely grateful to Prof. Zhichun Yang
and the anonymous reviewers for their constructive and
valuable comments, which have contributed much to the
improvement of this paper. This work was jointly supported
by the National Natural Science Foundation of China under
Grant no. 11101053, the Key Project of Chinese Ministry
of Education under Grant no. 211118, the Excellent Youth
Foundation of Educational Committee of Hunan Provincial no. 10B002, the Hunan Provincial NSF no. 11JJ1001,
National Science and technology Major Projects of China no.
2012ZX10001001-006, the Scientific Research Funds of Hunan
Provincial Science and Technology Department of China no.
2012SK3096.
References
[1] L. O. Chua and L. Yang, “Cellular neural networks: theory,” IEEE
Transactions on Circuits and Systems, vol. 35, no. 10, pp. 1257–
1272, 1988.
[2] L. O. Chua and L. Yang, “Cellular neural networks: applications,” IEEE Transactions on Circuits and Systems, vol. 35, no.
10, pp. 1273–1290, 1988.
[3] D. Liu and A. Michel, “Cellular neural networks for associative
memories,” IEEE Transactions on Circuits and Systems II, vol.
40, pp. 119–121, 1993.
[4] H.-C. Chen, Y.-C. Hung, C.-K. Chen, T.-L. Liao, and C.-K.
Chen, “Image-processing algorithms realized by discrete-time
cellular neural networks and their circuit implementations,”
Chaos, Solitons & Fractals, vol. 29, no. 5, pp. 1100–1108, 2006.
[5] P. Venetianer and T. Roska, “Image compression by delayed
CNNs,” IEEE Transactions on Circuits and Systems I, vol. 45, pp.
205–215, 1998.
[6] T. Roska, T. Boros, P. Thiran, and L. Chua, “Detecting simple
motion using cellular neural networks,” in Proceedings of the
International Workshop Cellular Neural Networks Application,
pp. 127–138, 1990.
[7] T. Roska and L. Chua, “Cellular neural networks with nonlinear
and delay-type template,” International Journal of Circuit Theory
and Applications, vol. 20, pp. 469–481, 1992.
[8] F. Wen and X. Yang, “Skewness of return distribution and
coefficient of risk premium,” Journal of Systems Science &
Complexity, vol. 22, no. 3, pp. 360–371, 2009.
[9] F. Wen, Z. Li, C. Xie, and S. David, “Study on the fractal and
chaotic features of the Shanghai composite index,” Fractals—
Complex Geometry Patterns and Scaling in Nature and Society,
vol. 20, no. 2, pp. 133–140, 2012.
[10] S. Haykin, Neural Networks, Prentice-Hall, Englewood Cliffs,
NJ, USA, 1994.
[11] C. Huang and J. Cao, “Almost sure exponential stability of
stochastic cellular neural networks with unbounded distributed
delays,” Neurocomputing, vol. 72, pp. 3352–3356, 2009.
[12] C. Huang and J. Cao, “On pth moment exponential stability
of stochastic Cohen-Grossberg neural networks with time-varying delays,” Neurocomputing, vol. 73, pp. 986–990, 2010.
[13] R. Rakkiyappan and P. Balasubramaniam, “Delay-dependent
asymptotic stability for stochastic delayed recurrent neural
networks with time varying delays,” Applied Mathematics and
Computation, vol. 198, no. 2, pp. 526–533, 2008.
[14] H. Zhao, N. Ding, and L. Chen, “Almost sure exponential stability of stochastic fuzzy cellular neural networks with delays,”
Chaos, Solitons & Fractals, vol. 40, no. 4, pp. 1653–1659, 2009.
[15] B. Li and D. Xu, “Mean square asymptotic behavior of stochastic
neural networks with infinitely distributed delay,” Neurocomputing, vol. 72, pp. 3311–3317, 2009.
[16] L. Wan, Q. Zhou, P. Wang, and J. Li, “Ultimate boundedness and
an attractor for stochastic Hopfield neural networks with time-varying delays,” Nonlinear Analysis: Real World Applications,
vol. 13, no. 2, pp. 953–958, 2012.
[17] L. Wan and Q. Zhou, “Attractor and ultimate boundedness
for stochastic cellular neural networks with delays,” Nonlinear
Analysis: Real World Applications, vol. 12, no. 5, pp. 2561–2566,
2011.
[18] W. Yu, J. Cao, and W. Lu, “Synchronization control of switched
linearly coupled neural networks with delay,” Neurocomputing,
vol. 73, pp. 858–866, 2010.
[19] L. Wu, Z. Feng, and W. Zheng, “Exponential stability analysis
for delayed neural networks with switching parameters: average
dwell time approach,” IEEE Transactions on Neural Networks,
vol. 21, pp. 1396–1407, 2010.
[20] C. W. Wu and L. O. Chua, “A simple way to synchronize chaotic
systems with applications to secure communication systems,”
International Journal of Bifurcation and Chaos, vol. 3, pp. 1619–
1627, 1993.
[21] W. Yu, J. Cao, and K. Yuan, “Synchronization of switched system
and application in communication,” Physics Letters A, vol. 372,
pp. 4438–4445, 2008.
[22] C. Maia and M. Goncalves, “Application of switched adaptive
system to load forecasting,” Electric Power Systems Research, vol.
78, pp. 721–727, 2008.
[23] H. Wu, X. Liao, W. Feng, S. Guo, and W. Zhang, “Robust stability
analysis of uncertain systems with two additive time-varying
delay components,” Applied Mathematical Modelling, vol. 33, no.
12, pp. 4345–4353, 2009.
[24] X. Yang, C. Huang, and Q. Zhu, “Synchronization of switched
neural networks with mixed delays via impulsive control,”
Chaos, Solitons & Fractals, vol. 44, no. 10, pp. 817–826, 2011.
[25] J. Lian and K. Zhang, “Exponential stability for switched Cohen-Grossberg neural networks with average dwell time,” Nonlinear
Dynamics, vol. 63, no. 3, pp. 331–343, 2011.
[26] T.-F. Li, J. Zhao, and G. M. Dimirovski, “Stability and 𝐿 2 -gain
analysis for switched neutral systems with mixed time-varying
delays,” Journal of the Franklin Institute, vol. 348, no. 9, pp. 2237–
2256, 2011.
[27] C. Huang and J. Cao, “Convergence dynamics of stochastic
Cohen-Grossberg neural networks with unbounded distributed
delays,” IEEE Transactions on Neural Networks, vol. 22, pp. 561–
572, 2011.
[28] J. Hespanha and A. Morse, “Stability of switched systems with
average dwell time,” in Proceedings of the 38th IEEE Conference
on Decision and Control, pp. 2655–2660, 1999.
[29] K. Gu, “An integral inequality in the stability problem of time-delay systems,” in Proceedings of the 39th IEEE Conference on
Decision and Control, pp. 2805–2810, 2000.