Hindawi Publishing Corporation
Abstract and Applied Analysis
Volume 2013, Article ID 631734, 9 pages
http://dx.doi.org/10.1155/2013/631734
Research Article
Dynamical Behaviors of Stochastic Hopfield Neural Networks
with Both Time-Varying and Continuously Distributed Delays
Qinghua Zhou,1 Penglin Zhang,2 and Li Wan3
1 Department of Mathematics, Zhaoqing University, Zhaoqing 526061, China
2 School of Remote Sensing and Information Engineering, Wuhan University, Wuhan 430072, China
3 School of Mathematics and Physics, Wuhan Textile University, Wuhan 430073, China
Correspondence should be addressed to Penglin Zhang; zplcy@263.com
Received 24 December 2012; Revised 19 March 2013; Accepted 20 March 2013
Academic Editor: Chuangxia Huang
Copyright © 2013 Qinghua Zhou et al. This is an open access article distributed under the Creative Commons Attribution License,
which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
This paper investigates dynamical behaviors of stochastic Hopfield neural networks with both time-varying and continuously
distributed delays. By employing the Lyapunov functional theory and linear matrix inequality, some novel criteria on asymptotic
stability, ultimate boundedness, and weak attractor are derived. Finally, an example is given to illustrate the correctness and
effectiveness of our theoretical results.
1. Introduction
Hopfield neural networks [1] have been extensively studied
in the past years and have found many applications in different areas such as pattern recognition, associative memory,
and combinatorial optimization. Such applications heavily
depend on dynamical behaviors such as stability, uniform
boundedness, ultimate boundedness, attractors, bifurcation,
and chaos. As is well known, time delays are unavoidably
encountered in the implementation of neural networks. Since
time delays, which arise from the finite speed of information
processing, are a source of instability and poor performance
in many neural networks, the stability analysis of
delayed neural networks has received considerable attention.
However, in these recent publications, most research on
delayed neural networks has been restricted to the simple case
of discrete delays. Since a neural network usually has a
spatial nature due to the presence of a multitude of parallel
pathways with a variety of axon sizes and lengths, it is desirable
to model it by introducing distributed delays. Therefore,
both discrete and distributed delays should be taken into
account when modeling realistic neural networks [2, 3].
On the other hand, it has now been well recognized
that stochastic disturbances are also ubiquitous owing to
thermal noise in electronic implementations. Therefore, it
is important to understand how these disturbances affect
the networks. Many results on stochastic neural networks
have been reported in [4–24]. Some sufficient criteria on the
stability of uncertain stochastic neural networks were derived
in [4–7]. Almost sure exponential stability of stochastic neural networks was studied in [8–10]. In [11–16], mean square
exponential stability and 𝑝th moment exponential stability
of stochastic neural networks were discussed. The stability
of stochastic impulsive neural networks was discussed in
[17–19]. The stability of stochastic neural networks with the
Markovian jumping parameters was investigated in [20–22].
The passivity analysis for stochastic neural networks was
discussed in [23, 24]. These references mainly considered the
stability of the equilibrium point of stochastic delayed neural
networks. How can the asymptotic behaviors be understood
when an equilibrium point does not exist?
Besides stability, boundedness and attractors are also fundamental concepts of dynamical systems. They
play an important role in investigating the uniqueness of
equilibria, global asymptotic stability, global exponential
stability, the existence of periodic solutions, control,
and synchronization [25]. Recently, the ultimate boundedness
and attractors of several classes of neural networks with
time delays have been reported in [26–33]. Some sufficient
criteria were derived in [26, 27], but these results hold only
under constant delays. Subsequently, in [28], the globally robust
ultimate boundedness of integrodifferential neural networks
with uncertainties and varying delays was studied. After that,
some sufficient criteria on the ultimate boundedness of neural
networks with both varying and unbounded delays were
derived in [29], but the concerned systems are deterministic
ones. In [30, 31], a series of criteria on the boundedness,
global exponential stability, and the existence of periodic
solutions for nonautonomous recurrent neural networks were
established. In [32, 33], the ultimate boundedness and weak
attractor of stochastic neural networks with time-varying
delays were discussed. To the best of our knowledge, for
stochastic neural networks with mixed time delays, there are
few published results on the ultimate boundedness and weak
attractor. Therefore, the arising questions about the ultimate
boundedness, weak attractor, and asymptotic stability of the
stochastic Hopfield neural networks with mixed time delays
are important and meaningful.
The rest of this paper is organized as follows. Some preliminaries
are in Section 2, main results are presented in Section 3, a
numerical example is given in Section 4, and conclusions are
drawn in Section 5.
2. Preliminaries
Consider the following stochastic Hopfield neural network with both time-varying and continuously distributed delays:

$$dx(t) = \Big[-Cx(t) + Af(x(t)) + Bf(x(t-\tau(t))) + D\int_{-\infty}^{t} K(t-s)g(x(s))\,ds + J\Big]dt + \big[\sigma_1 x(t) + \sigma_2 x(t-\tau(t))\big]\,dw(t), \quad (1)$$

in which $x = (x_1, \dots, x_n)^T$ is the state vector associated with the neurons; $C = \operatorname{diag}(c_1, \dots, c_n)$ with $c_i > 0$ represents the rate with which the $i$th unit resets its potential to the resting state in isolation when disconnected from the network and the external stochastic perturbation; $A = (a_{ij})_{n\times n}$, $B = (b_{ij})_{n\times n}$, and $D = (d_{ij})_{n\times n}$ represent the connection weight matrix, the delayed connection weight matrix, and the distributively delayed connection weight matrix, respectively; $J = (J_1, \dots, J_n)^T$, where $J_i$ denotes the external bias on the $i$th unit; $f(x(t)) = (f_1(x_1(t)), \dots, f_n(x_n(t)))^T$ and $g(x(t)) = (g_1(x_1(t)), \dots, g_n(x_n(t)))^T$, where $f_j$ and $g_j$ denote activation functions; $K(t) = \operatorname{diag}(k_1(t), \dots, k_n(t))$, where each delay kernel $k_j(t)$ is a real-valued nonnegative continuous function defined on $[0, \infty)$; $\sigma_1$ and $\sigma_2$ are diffusion coefficient matrices; $w(t)$ is a one-dimensional Brownian motion (Wiener process) defined on a complete probability space $(\Omega, \mathcal{F}, P)$ with a natural filtration $\{\mathcal{F}_t\}_{t\geq 0}$ generated by $\{w(s) : 0 \leq s \leq t\}$; and $\tau(t)$ is the transmission delay. The initial condition associated with system (1) is of the form $x(t) = \xi(t)$, $-\infty < t \leq 0$, where $\xi(\cdot)$ is an $\mathcal{F}_0$-measurable, bounded, and continuous $R^n$-valued random variable defined on $(-\infty, 0]$.

Throughout this paper, one always supposes that the following condition holds.

(A1) For $f(x(t))$ and $g(x(t))$ in (1), there are constants $l_i^-$, $l_i^+$, $m_i^-$, and $m_i^+$ such that

$$l_i^- \leq \frac{f_i(x) - f_i(y)}{x - y} \leq l_i^+, \qquad m_i^- \leq \frac{g_i(x) - g_i(y)}{x - y} \leq m_i^+, \qquad \forall x, y \in R. \quad (2)$$

Moreover, there exist a constant $\nu > 0$ and a matrix $K(\nu) = \operatorname{diag}(k_1(\nu), \dots, k_n(\nu)) > 0$ such that

$$\int_0^{\infty} k_j(\theta)\,d\theta = 1, \qquad \int_0^{\infty} k_j(\theta)e^{\nu\theta}\,d\theta = k_j(\nu) < \infty. \quad (3)$$

In the following, $A > 0$ (resp., $A \geq 0$) means that the matrix $A$ is symmetric positive definite (resp., positive semidefinite); $A^T$ and $A^{-1}$ denote the transpose and the inverse of the matrix $A$; and $\lambda_{\max}(A)$ and $\lambda_{\min}(A)$ represent the maximum and minimum eigenvalues of the matrix $A$, respectively.

Definition 1. System (1) is said to be stochastically ultimately bounded if, for any $\varepsilon \in (0, 1)$, there is a positive constant $C = C(\varepsilon)$ such that the solution $x(t)$ of system (1) satisfies

$$\limsup_{t\to\infty} P\{\|x(t)\| \leq C\} \geq 1 - \varepsilon. \quad (4)$$

Lemma 2 (see [34]). Let $Q(x) = Q^T(x)$, $R(x) = R^T(x)$, and $S(x)$ depend affinely on $x$. Then the linear matrix inequality

$$\begin{pmatrix} Q(x) & S(x) \\ S^T(x) & R(x) \end{pmatrix} > 0 \quad (5)$$

is equivalent to either of the following:

(1) $R(x) > 0$, $Q(x) - S(x)R^{-1}(x)S^T(x) > 0$;
(2) $Q(x) > 0$, $R(x) - S^T(x)Q^{-1}(x)S(x) > 0$.
3. Main Results
Theorem 3. System (1) is stochastically ultimately bounded provided that $\tau(t)$ satisfies $0 \leq \tau(t) \leq \tau$ and $\dot{\tau}(t) \leq \mu \leq 1$, and there exist some matrices $P > 0$, $Q_i \geq 0$, $U_i = \operatorname{diag}(u_{i1}, \dots, u_{in}) \geq 0$ ($i = 1, 2, 3, 4$) such that the following linear matrix inequality holds:

$$\Sigma = \begin{pmatrix} \Sigma_{11} & 0 & PA + L_2U_1 & PB & M_2U_4 & PD & \sigma_1^TP \\ * & \Sigma_{22} & 0 & L_2U_2 & 0 & 0 & \sigma_2^TP \\ * & * & \Sigma_{33} & 0 & 0 & 0 & 0 \\ * & * & * & \Sigma_{44} & 0 & 0 & 0 \\ * & * & * & * & \Sigma_{55} & 0 & 0 \\ * & * & * & * & * & -U_3 & 0 \\ * & * & * & * & * & * & -P \end{pmatrix} < 0, \quad (6)$$

where $*$ denotes the corresponding symmetric terms and

$$\begin{gathered} \Sigma_{11} = Q_1 + \tau Q_2 - PC - CP - 2L_1U_1 - 2M_1U_4, \qquad \Sigma_{22} = -(1-\mu)Q_1 - 2L_1U_2, \\ \Sigma_{33} = Q_3 + \tau Q_4 - 2U_1, \qquad \Sigma_{44} = -(1-\mu)Q_3 - 2U_2, \qquad \Sigma_{55} = U_3K(\nu) - 2U_4, \\ L_1 = \operatorname{diag}(l_1^-l_1^+, \dots, l_n^-l_n^+), \qquad L_2 = \operatorname{diag}(l_1^- + l_1^+, \dots, l_n^- + l_n^+), \\ M_1 = \operatorname{diag}(m_1^-m_1^+, \dots, m_n^-m_n^+), \qquad M_2 = \operatorname{diag}(m_1^- + m_1^+, \dots, m_n^- + m_n^+). \end{gathered} \quad (7)$$
Proof. The key of the proof is to show that there exists a positive constant $C^*$, which is independent of the initial data, such that

$$\limsup_{t\to\infty} E\|x(t)\|^2 \leq C^*. \quad (8)$$

If (8) holds, then it follows from Chebyshev's inequality that, for any $\varepsilon > 0$ and $C = \sqrt{C^*/\varepsilon}$,

$$\limsup_{t\to\infty} P\{\|x(t)\| > C\} \leq \frac{\limsup_{t\to\infty} E\|x(t)\|^2}{C^2} = \varepsilon, \quad (9)$$

which implies that (4) holds. Now we begin to prove that (8) holds.

From $\Sigma < 0$ and Lemma 2, one may obtain

$$\begin{pmatrix} \Sigma_{11} & 0 & PA + L_2U_1 & PB & M_2U_4 & PD \\ * & \Sigma_{22} & 0 & L_2U_2 & 0 & 0 \\ * & * & \Sigma_{33} & 0 & 0 & 0 \\ * & * & * & \Sigma_{44} & 0 & 0 \\ * & * & * & * & \Sigma_{55} & 0 \\ * & * & * & * & * & -U_3 \end{pmatrix} + \begin{pmatrix} \sigma_1^TP \\ \sigma_2^TP \\ 0 \\ 0 \\ 0 \\ 0 \end{pmatrix} P^{-1} \begin{pmatrix} \sigma_1^TP \\ \sigma_2^TP \\ 0 \\ 0 \\ 0 \\ 0 \end{pmatrix}^T < 0. \quad (10)$$

Hence, there exists a sufficiently small $\lambda \in (0, \nu)$ such that

$$\Gamma + \begin{pmatrix} \sigma_1^TP \\ \sigma_2^TP \\ 0 \\ 0 \\ 0 \\ 0 \end{pmatrix} P^{-1} \begin{pmatrix} \sigma_1^TP \\ \sigma_2^TP \\ 0 \\ 0 \\ 0 \\ 0 \end{pmatrix}^T < 0, \qquad \Gamma = \begin{pmatrix} \Gamma_{11} & 0 & PA + L_2U_1 & PB & M_2U_4 & PD \\ * & \Gamma_{22} & 0 & L_2U_2 & 0 & 0 \\ * & * & \Gamma_{33} & 0 & 0 & 0 \\ * & * & * & \Gamma_{44} & 0 & 0 \\ * & * & * & * & \Gamma_{55} & 0 \\ * & * & * & * & * & -U_3 \end{pmatrix}, \quad (11)$$

where $I$ is the identity matrix and

$$\begin{gathered} \Gamma_{11} = e^{\lambda\tau}Q_1 + \tau Q_2 - PC - CP - 2L_1U_1 - 2M_1U_4 + 2\lambda P + 2\lambda I, \qquad \Gamma_{22} = \Sigma_{22} + \lambda I, \\ \Gamma_{33} = \lambda I + e^{\lambda\tau}Q_3 + \tau Q_4 - 2U_1, \qquad \Gamma_{44} = \Sigma_{44} + \lambda I, \qquad \Gamma_{55} = \Sigma_{55} + \lambda I. \end{gathered} \quad (12)$$

Consider the following Lyapunov–Krasovskii functional:

$$V(x(t), t) = V_1(x(t), t) + V_2(x(t), t) + V_3(x(t), t) + V_4(x(t), t), \quad (13)$$

where

$$\begin{aligned} V_1(x(t), t) &= e^{\lambda t} x^T(t) P x(t), \\ V_2(x(t), t) &= \sum_{j=1}^{n} u_{3j} \int_0^{\infty} k_j(\theta) \int_{t-\theta}^{t} e^{\lambda(s+\theta)} g_j^2(x_j(s))\,ds\,d\theta, \\ V_3(x(t), t) &= \int_{t-\tau(t)}^{t} e^{\lambda(s+\tau)} \big[x^T(s)Q_1x(s) + f^T(x(s))Q_3f(x(s))\big]\,ds, \\ V_4(x(t), t) &= \int_{t-\tau(t)}^{t}\int_{s}^{t} e^{\lambda\theta} \big[x^T(\theta)Q_2x(\theta) + f^T(x(\theta))Q_4f(x(\theta))\big]\,d\theta\,ds. \end{aligned} \quad (14)$$

Then it can be obtained by Itô's formula in [35] that

$$dV(x(t), t) = e^{\lambda t} 2x^T(t)P\big[\sigma_1x(t) + \sigma_2x(t-\tau(t))\big]\,dw(t) + \Big\{\mathcal{L}V_1(x(t), t) + \frac{\partial\big[V_2(x(t), t) + V_3(x(t), t) + V_4(x(t), t)\big]}{\partial t}\Big\}\,dt, \quad (15)$$

where

$$\begin{aligned} \frac{\partial V_2(x(t), t)}{\partial t} &= \sum_{j=1}^{n} u_{3j}\int_0^{\infty} k_j(\theta)\big[e^{\lambda(t+\theta)}g_j^2(x_j(t)) - e^{\lambda t}g_j^2(x_j(t-\theta))\big]\,d\theta \\ &\leq e^{\lambda t}\sum_{j=1}^{n} u_{3j}k_j(\nu)g_j^2(x_j(t)) - e^{\lambda t}\sum_{j=1}^{n} u_{3j}\Big[\int_0^{\infty} k_j(\theta)g_j(x_j(t-\theta))\,d\theta\Big]^2 \\ &= e^{\lambda t} g^T(x(t))U_3K(\nu)g(x(t)) - e^{\lambda t}\Big(\int_{-\infty}^{t} K(t-s)g(x(s))\,ds\Big)^T U_3 \Big(\int_{-\infty}^{t} K(t-s)g(x(s))\,ds\Big), \\ \frac{\partial V_3(x(t), t)}{\partial t} &= e^{\lambda(t+\tau)}\big[x^T(t)Q_1x(t) + f^T(x(t))Q_3f(x(t))\big] \\ &\quad - (1 - \dot{\tau}(t))e^{\lambda(t-\tau(t)+\tau)}\big[x^T(t-\tau(t))Q_1x(t-\tau(t)) + f^T(x(t-\tau(t)))Q_3f(x(t-\tau(t)))\big] \\ &\leq e^{\lambda(t+\tau)}\big[x^T(t)Q_1x(t) + f^T(x(t))Q_3f(x(t))\big] \\ &\quad - (1-\mu)e^{\lambda t}\big[x^T(t-\tau(t))Q_1x(t-\tau(t)) + f^T(x(t-\tau(t)))Q_3f(x(t-\tau(t)))\big], \\ \frac{\partial V_4(x(t), t)}{\partial t} &= e^{\lambda t}\tau(t)\big[x^T(t)Q_2x(t) + f^T(x(t))Q_4f(x(t))\big] \\ &\quad - (1-\dot{\tau}(t))\int_{t-\tau(t)}^{t} e^{\lambda s}\big[x^T(s)Q_2x(s) + f^T(x(s))Q_4f(x(s))\big]\,ds \\ &\leq e^{\lambda t}\tau\big[x^T(t)Q_2x(t) + f^T(x(t))Q_4f(x(t))\big], \end{aligned} \quad (16)$$

$$\begin{aligned} \mathcal{L}V_1(x(t), t) &= \frac{\partial V_1(x(t), t)}{\partial t} + \frac{\partial V_1(x(t), t)}{\partial x}\Big[-Cx(t) + Af(x(t)) + Bf(x(t-\tau(t))) + D\int_{-\infty}^{t} K(t-s)g(x(s))\,ds + J\Big] \\ &\quad + \frac{1}{2}\operatorname{trace}\Big(\big[\sigma_1x(t) + \sigma_2x(t-\tau(t))\big]^T \frac{\partial^2 V_1(x(t), t)}{\partial x^2}\big[\sigma_1x(t) + \sigma_2x(t-\tau(t))\big]\Big) \\ &= \lambda e^{\lambda t}x^T(t)Px(t) + e^{\lambda t}2x^T(t)P\Big[-Cx(t) + Af(x(t)) + Bf(x(t-\tau(t))) + D\int_{-\infty}^{t} K(t-s)g(x(s))\,ds + J\Big] \\ &\quad + e^{\lambda t}\big[\sigma_1x(t) + \sigma_2x(t-\tau(t))\big]^T P\big[\sigma_1x(t) + \sigma_2x(t-\tau(t))\big] \\ &\leq \lambda e^{\lambda t}x^T(t)Px(t) + e^{\lambda t}2x^T(t)P\Big[-Cx(t) + Af(x(t)) + Bf(x(t-\tau(t))) + D\int_{-\infty}^{t} K(t-s)g(x(s))\,ds\Big] \\ &\quad + e^{\lambda t}\big[\lambda x^T(t)Px(t) + \lambda^{-1}J^TPJ\big] + e^{\lambda t}\big[\sigma_1x(t) + \sigma_2x(t-\tau(t))\big]^T P\big[\sigma_1x(t) + \sigma_2x(t-\tau(t))\big], \end{aligned} \quad (17)$$

in which the following inequality is used:

$$2x^TPJ \leq \lambda x^TPx + \lambda^{-1}J^TP^TP^{-1}PJ = \lambda x^TPx + \lambda^{-1}J^TPJ \quad (18)$$

for $P > 0$, $\lambda > 0$.

On the other hand, it follows from (A1) that, for $i = 1, \dots, n$,

$$\begin{aligned} \big[f_i(x_i(t)) - f_i(0) - l_i^+x_i(t)\big]\big[f_i(x_i(t)) - f_i(0) - l_i^-x_i(t)\big] &\leq 0, \\ \big[f_i(x_i(t-\tau(t))) - f_i(0) - l_i^+x_i(t-\tau(t))\big]\big[f_i(x_i(t-\tau(t))) - f_i(0) - l_i^-x_i(t-\tau(t))\big] &\leq 0, \end{aligned} \quad (19)$$

so that

$$\begin{aligned} 0 &\leq e^{\lambda t}\Big\{-2\sum_{i=1}^{n} u_{1i}\big[f_i(x_i(t)) - f_i(0) - l_i^+x_i(t)\big]\big[f_i(x_i(t)) - f_i(0) - l_i^-x_i(t)\big] \\ &\qquad - 2\sum_{i=1}^{n} u_{2i}\big[f_i(x_i(t-\tau(t))) - f_i(0) - l_i^+x_i(t-\tau(t))\big]\big[f_i(x_i(t-\tau(t))) - f_i(0) - l_i^-x_i(t-\tau(t))\big]\Big\} \\ &= e^{\lambda t}\Big\{-2\sum_{i=1}^{n} u_{1i}\big[f_i(x_i(t)) - l_i^+x_i(t)\big]\big[f_i(x_i(t)) - l_i^-x_i(t)\big] \\ &\qquad - 2\sum_{i=1}^{n} u_{2i}\big[f_i(x_i(t-\tau(t))) - l_i^+x_i(t-\tau(t))\big]\big[f_i(x_i(t-\tau(t))) - l_i^-x_i(t-\tau(t))\big] \\ &\qquad - 2\sum_{i=1}^{n} u_{1i}f_i^2(0) + 2\sum_{i=1}^{n} u_{1i}f_i(0)\big[2f_i(x_i(t)) - (l_i^+ + l_i^-)x_i(t)\big] \\ &\qquad - 2\sum_{i=1}^{n} u_{2i}f_i^2(0) + 2\sum_{i=1}^{n} u_{2i}f_i(0)\big[2f_i(x_i(t-\tau(t))) - (l_i^+ + l_i^-)x_i(t-\tau(t))\big]\Big\} \\ &\leq e^{\lambda t}\Big\{-2\sum_{i=1}^{n} u_{1i}\big[f_i(x_i(t)) - l_i^+x_i(t)\big]\big[f_i(x_i(t)) - l_i^-x_i(t)\big] \\ &\qquad - 2\sum_{i=1}^{n} u_{2i}\big[f_i(x_i(t-\tau(t))) - l_i^+x_i(t-\tau(t))\big]\big[f_i(x_i(t-\tau(t))) - l_i^-x_i(t-\tau(t))\big] \\ &\qquad + \sum_{i=1}^{n}\big[\big|4u_{1i}f_i(0)f_i(x_i(t))\big| + \big|2u_{1i}f_i(0)(l_i^+ + l_i^-)x_i(t)\big|\big] \\ &\qquad + \sum_{i=1}^{n}\big[\big|4u_{2i}f_i(0)f_i(x_i(t-\tau(t)))\big| + \big|2u_{2i}f_i(0)(l_i^+ + l_i^-)x_i(t-\tau(t))\big|\big]\Big\} \\ &\leq e^{\lambda t}\Big\{-2\sum_{i=1}^{n} u_{1i}\big[f_i(x_i(t)) - l_i^+x_i(t)\big]\big[f_i(x_i(t)) - l_i^-x_i(t)\big] \\ &\qquad - 2\sum_{i=1}^{n} u_{2i}\big[f_i(x_i(t-\tau(t))) - l_i^+x_i(t-\tau(t))\big]\big[f_i(x_i(t-\tau(t))) - l_i^-x_i(t-\tau(t))\big] \\ &\qquad + \sum_{i=1}^{n}\big[\lambda f_i^2(x_i(t)) + 4\lambda^{-1}f_i^2(0)u_{1i}^2 + \lambda x_i^2(t) + \lambda^{-1}f_i^2(0)u_{1i}^2(l_i^+ + l_i^-)^2\big] \\ &\qquad + \sum_{i=1}^{n}\big[\lambda f_i^2(x_i(t-\tau(t))) + 4\lambda^{-1}f_i^2(0)u_{2i}^2 + \lambda x_i^2(t-\tau(t)) + \lambda^{-1}f_i^2(0)u_{2i}^2(l_i^+ + l_i^-)^2\big]\Big\}. \end{aligned} \quad (20)$$

Similarly, one may obtain

$$\begin{aligned} 0 &\leq e^{\lambda t}\Big\{-2\sum_{i=1}^{n} u_{4i}\big[g_i(x_i(t)) - g_i(0) - m_i^+x_i(t)\big]\big[g_i(x_i(t)) - g_i(0) - m_i^-x_i(t)\big]\Big\} \\ &\leq e^{\lambda t}\Big\{-2\sum_{i=1}^{n} u_{4i}\big[g_i(x_i(t)) - m_i^+x_i(t)\big]\big[g_i(x_i(t)) - m_i^-x_i(t)\big] \\ &\qquad + \sum_{i=1}^{n}\big[\lambda g_i^2(x_i(t)) + 4\lambda^{-1}g_i^2(0)u_{4i}^2 + \lambda x_i^2(t) + \lambda^{-1}g_i^2(0)u_{4i}^2(m_i^+ + m_i^-)^2\big]\Big\}. \end{aligned} \quad (21)$$

Therefore, from (15)–(21), it follows that

$$\begin{aligned} dV(x(t), t) &\leq e^{\lambda t}2x^T(t)P\big[\sigma_1x(t) + \sigma_2x(t-\tau(t))\big]\,dw(t) + e^{\lambda t}\eta^T(t)\Gamma\eta(t)\,dt + e^{\lambda t}C_1\,dt \\ &\leq e^{\lambda t}2x^T(t)P\big[\sigma_1x(t) + \sigma_2x(t-\tau(t))\big]\,dw(t) + e^{\lambda t}C_1\,dt, \end{aligned} \quad (22)$$

where

$$\eta(t) = \Big(x^T(t),\ x^T(t-\tau(t)),\ f^T(x(t)),\ f^T(x(t-\tau(t))),\ g^T(x(t)),\ \Big(\int_{-\infty}^{t} K(t-s)g(x(s))\,ds\Big)^T\Big)^T, \quad (23)$$
$$C_1 = \lambda^{-1}J^TPJ + \sum_{i=1}^{n}\big[4\lambda^{-1}f_i^2(0)u_{1i}^2 + \lambda^{-1}f_i^2(0)u_{1i}^2(l_i^+ + l_i^-)^2 + 4\lambda^{-1}f_i^2(0)u_{2i}^2 + \lambda^{-1}f_i^2(0)u_{2i}^2(l_i^+ + l_i^-)^2 + 4\lambda^{-1}g_i^2(0)u_{4i}^2 + \lambda^{-1}g_i^2(0)u_{4i}^2(m_i^+ + m_i^-)^2\big]. \quad (24)$$

Thus, one may obtain that

$$V(x(t), t) \leq V(x(0), 0) + e^{\lambda t}\lambda^{-1}C_1 + \int_0^t e^{\lambda s}2x^T(s)P\big[\sigma_1x(s) + \sigma_2x(s-\tau(s))\big]\,dw(s), \quad (25)$$

$$E\|x(t)\|^2 \leq \frac{e^{-\lambda t}EV(x(0), 0) + \lambda^{-1}C_1}{\lambda_{\min}(P)} = \frac{e^{-\lambda t}EV(x(0), 0)}{\lambda_{\min}(P)} + C^*, \quad (26)$$

where $C^* = \lambda^{-1}C_1/\lambda_{\min}(P)$. Equation (26) implies that (8) holds. The proof is completed.

Theorem 4. Suppose that all conditions of Theorem 3 hold; then there exists a weak attractor $B_C$ for the solutions of system (1).

Theorem 3 shows that there exists $t_0 > 0$ such that, for any $t \geq t_0$, $P\{\|x(t)\| \leq C\} \geq 1 - \delta$. Let $B_C$ be denoted by

$$B_C = \{x \in R^n \mid \|x(t)\| \leq C,\ t \geq t_0\}. \quad (27)$$

Clearly, $B_C$ is closed, bounded, and invariant. Moreover,

$$\limsup_{t\to\infty}\ \inf_{y\in B_C}\|x(t) - y\| = 0 \quad (28)$$

with probability no less than $1 - \delta$, which means that $B_C$ attracts the solutions infinitely many times with probability no less than $1 - \delta$; so we may say that $B_C$ is a weak attractor for the solutions.

Theorem 5. Suppose that all conditions of Theorem 3 hold and $f(0) = g(0) = J = 0$; then the zero solution of system (1) is mean square exponentially stable and almost surely exponentially stable.

Proof. If $f(0) = g(0) = J = 0$, then $C_1 = C^* = 0$. By (25) and the semimartingale convergence theorem used in [35], the zero solution of system (1) is almost surely exponentially stable. It follows from (26) that the zero solution of system (1) is mean square exponentially stable.

Remark 6. If one takes $Q_2 = Q_4 = 0$ in Theorem 3, then it is not required that $\mu \leq 1$. Furthermore, if one takes $Q_1 = Q_2 = Q_3 = Q_4 = 0$, then $\tau(t)$ may be nondifferentiable, or the bound of $\dot{\tau}(t)$ may be unknown.

Remark 7. Assumption (A1) is less conservative than that in [32] since the constants $l_i^-$, $l_i^+$, $m_i^-$, and $m_i^+$ are allowed to be positive, negative, or zero. System (1) includes mixed time delays, which makes it more complex than the system in [33]. The systems concerned in [26–31] are deterministic, so the stochastic system studied in this paper is more complex and realistic.

When $\sigma_1 = \sigma_2 = 0$, system (1) becomes the following deterministic system:

$$\frac{dx(t)}{dt} = -Cx(t) + Af(x(t)) + Bf(x(t-\tau(t))) + D\int_{-\infty}^{t} K(t-s)g(x(s))\,ds + J. \quad (29)$$

Definition 8. System (29) is said to be uniformly bounded if, for each $H > 0$, there exists a constant $M = M(H) > 0$ such that $t_0 \in R$, $\phi \in C((-\infty, 0])$, $\|\phi\| \leq H$, and $t > t_0$ imply $\|x(t, t_0, \phi)\| \leq M$, where $\|\phi\| = \sup_{t\leq 0}\|\phi(t)\|$.
Theorem 9. System (29) is uniformly bounded provided that $\tau(t)$ satisfies $0 \leq \tau(t) \leq \tau$ and $\dot{\tau}(t) \leq \mu \leq 1$, and there exist some matrices $P > 0$, $Q_i \geq 0$, and $U_i = \operatorname{diag}(u_{i1}, \dots, u_{in}) \geq 0$ ($i = 1, 2, 3, 4$) such that the following linear matrix inequality holds:

$$\Sigma = \begin{pmatrix} \Sigma_{11} & 0 & PA + L_2U_1 & PB & M_2U_4 & PD \\ * & \Sigma_{22} & 0 & L_2U_2 & 0 & 0 \\ * & * & \Sigma_{33} & 0 & 0 & 0 \\ * & * & * & \Sigma_{44} & 0 & 0 \\ * & * & * & * & \Sigma_{55} & 0 \\ * & * & * & * & * & -U_3 \end{pmatrix} < 0, \quad (30)$$

where $*$ denotes the corresponding symmetric terms and $\Sigma_{ii}$ ($i = 1, 2, 3, 4, 5$), $L_1$, $L_2$, $M_1$, $M_2$ are the same as in Theorem 3.
Proof. From $\Sigma < 0$, there exists a sufficiently small $\lambda \in (0, \nu)$ such that

$$\Gamma = \begin{pmatrix} \Gamma_{11} & 0 & PA + L_2U_1 & PB & M_2U_4 & PD \\ * & \Gamma_{22} & 0 & L_2U_2 & 0 & 0 \\ * & * & \Gamma_{33} & 0 & 0 & 0 \\ * & * & * & \Gamma_{44} & 0 & 0 \\ * & * & * & * & \Gamma_{55} & 0 \\ * & * & * & * & * & -U_3 \end{pmatrix} < 0, \quad (31)$$

where $I$ is the identity matrix, $\Gamma_{11} = e^{\lambda\tau}Q_1 + \tau Q_2 - PC - CP - 2L_1U_1 - 2M_1U_4 + 2\lambda P + 2\lambda I$, $\Gamma_{22} = \Sigma_{22} + \lambda I$, $\Gamma_{33} = \lambda I + e^{\lambda\tau}Q_3 + \tau Q_4 - 2U_1$, $\Gamma_{44} = \Sigma_{44} + \lambda I$, and $\Gamma_{55} = \Sigma_{55} + \lambda I$.

We still consider the Lyapunov–Krasovskii functional $V(x(t), t)$ in (13). From (16)–(21), one may obtain

$$\begin{aligned} \|x(t)\|^2 &\leq \lambda_{\min}^{-1}(P)\big(e^{-\lambda t}V(x(0), 0) + \lambda^{-1}C_1\big) \\ &\leq \lambda_{\min}^{-1}(P)\big(V(x(0), 0) + \lambda^{-1}C_1\big) \\ &\leq \lambda_{\min}^{-1}(P)\Big[\lambda_{\max}(P)\|\xi\|^2 + \sum_{j=1}^{n} u_{3j}\int_0^{\infty} k_j(\theta)\int_{-\theta}^{0} e^{\lambda(s+\theta)}g_j^2(x_j(s))\,ds\,d\theta \\ &\qquad + \int_{-\tau}^{0} e^{\lambda(s+\tau)}\big[x^T(s)Q_1x(s) + f^T(x(s))Q_3f(x(s))\big]\,ds \\ &\qquad + \int_{-\tau}^{0}\int_{s}^{0} e^{\lambda\theta}\big[x^T(\theta)Q_2x(\theta) + f^T(x(\theta))Q_4f(x(\theta))\big]\,d\theta\,ds + \lambda^{-1}C_1\Big], \end{aligned} \quad (32)$$

where $\|\xi\|^2 = \sup_{t\leq 0}\|x(t)\|^2$ and $C_1$ is the same as in (24). Note that

$$\begin{aligned} \sum_{j=1}^{n} u_{3j}\int_0^{\infty} k_j(\theta)\int_{-\theta}^{0} e^{\lambda(s+\theta)}g_j^2(x_j(s))\,ds\,d\theta &\leq \sum_{j=1}^{n} 2u_{3j}\int_0^{\infty} k_j(\theta)\int_{-\theta}^{0} e^{\lambda(s+\theta)}\Big[\max_{1\leq j\leq n}\big\{(m_j^-)^2, (m_j^+)^2\big\}x_j^2(s) + g_j^2(0)\Big]\,ds\,d\theta \\ &\leq \Big(\sum_{j=1}^{n} 2u_{3j}\lambda^{-1}k_j(\nu)\Big)\Big[\max_{1\leq j\leq n}\big\{(m_j^-)^2, (m_j^+)^2\big\}\|\xi\|^2 + \|g(0)\|^2\Big], \\ \int_{-\tau}^{0} e^{\lambda(s+\tau)}\big[x^T(s)Q_1x(s) + f^T(x(s))Q_3f(x(s))\big]\,ds &\leq \int_{-\tau}^{0} e^{\lambda(s+\tau)}\Big[\lambda_{\max}(Q_1)\|\xi\|^2 + 2\lambda_{\max}(Q_3)\Big(\|f(0)\|^2 + \max_{1\leq i\leq n}\big\{(l_i^-)^2, (l_i^+)^2\big\}\|\xi\|^2\Big)\Big]\,ds \\ &\leq \lambda^{-1}e^{\lambda\tau}\Big[\lambda_{\max}(Q_1)\|\xi\|^2 + 2\lambda_{\max}(Q_3)\Big(\|f(0)\|^2 + \max_{1\leq i\leq n}\big\{(l_i^-)^2, (l_i^+)^2\big\}\|\xi\|^2\Big)\Big], \\ \int_{-\tau}^{0}\int_{s}^{0} e^{\lambda\theta}\big[x^T(\theta)Q_2x(\theta) + f^T(x(\theta))Q_4f(x(\theta))\big]\,d\theta\,ds &\leq \int_{-\tau}^{0}\int_{s}^{0} e^{\lambda\theta}\Big[\lambda_{\max}(Q_2)\|\xi\|^2 + 2\lambda_{\max}(Q_4)\Big(\|f(0)\|^2 + \max_{1\leq i\leq n}\big\{(l_i^-)^2, (l_i^+)^2\big\}\|\xi\|^2\Big)\Big]\,d\theta\,ds \\ &\leq \tau\lambda^{-1}\Big[\lambda_{\max}(Q_2)\|\xi\|^2 + 2\lambda_{\max}(Q_4)\Big(\|f(0)\|^2 + \max_{1\leq i\leq n}\big\{(l_i^-)^2, (l_i^+)^2\big\}\|\xi\|^2\Big)\Big]. \end{aligned} \quad (33)$$

So, system (29) is uniformly bounded.
Figure 1: Time trajectories $x_1(t)$, $x_2(t)$ (a) as well as the phase portrait in the $(x_1, x_2)$-plane (b) for the system in Example 1.
4. One Example
Example 1. Consider system (1) with $J = (0, 1)^T$, $K(t) = \operatorname{diag}(e^{-t}, 2e^{-2t})$, $\nu = 0.5$, and

$$A = \begin{pmatrix} -0.1 & 0.4 \\ 0.2 & -0.5 \end{pmatrix}, \quad B = \begin{pmatrix} 0.1 & -1 \\ -1.4 & 0.4 \end{pmatrix}, \quad C = \begin{pmatrix} 1.4 & 0 \\ 0 & 1.65 \end{pmatrix}, \quad D = \begin{pmatrix} -0.6 & 0.7 \\ 1 & 1.15 \end{pmatrix}, \quad \sigma_1 = \begin{pmatrix} 0.23 & 0.1 \\ 0.3 & 0.2 \end{pmatrix}, \quad \sigma_2 = \begin{pmatrix} 0.1 & -0.2 \\ 0.2 & 0.3 \end{pmatrix}. \quad (34)$$

The activation functions $f_i(x_i) = x_i + \sin(x_i)$ and $g_i(x_i) = \tanh(x_i)$ ($i = 1, 2$) satisfy (A1) with $l_i^- = m_i^- = 0$ and $l_i^+ = m_i^+ = 1$. Then one computes $L_1 = M_1 = 0$, $L_2 = M_2 = \operatorname{diag}(1, 1)$, and $K(0.5) = \operatorname{diag}(2, 4/3)$. By using MATLAB's LMI Control Toolbox [34], for $\mu = 0.0035$ and $\tau = 1$, based on Theorem 3, such a system is stochastically ultimately bounded when $P$, $U_i$, and $Q_i$ ($i = 1, 2, 3, 4$) satisfy

$$\begin{gathered} P = \begin{pmatrix} 1.7748 & 0.2342 \\ 0.2342 & 1.9398 \end{pmatrix}, \quad U_1 = \begin{pmatrix} 1.3091 & 0 \\ 0 & 2.1602 \end{pmatrix}, \quad U_2 = \begin{pmatrix} 218.3215 & 0 \\ 0 & 274.1607 \end{pmatrix}, \quad U_3 = \begin{pmatrix} 1.0825 & 0 \\ 0 & 1.4021 \end{pmatrix}, \\ U_4 = \begin{pmatrix} 2.1620 & 0 \\ 0 & 1.8493 \end{pmatrix}, \quad Q_1 = \begin{pmatrix} 275.3435 & 104.0781 \\ 104.0781 & 397.1452 \end{pmatrix}, \quad Q_2 = \begin{pmatrix} 31.3129 & 31.4377 \\ 31.4377 & 74.4756 \end{pmatrix}, \\ Q_3 = \begin{pmatrix} 1.0857 & -1.2164 \\ -1.2164 & 2.9090 \end{pmatrix}, \quad Q_4 = \begin{pmatrix} 189.3661 & 16.3951 \\ 16.3951 & 92.8987 \end{pmatrix}. \end{gathered} \quad (35)$$
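For readers without MATLAB, the feasibility of LMI (6) can also be checked with open-source tools. The following is a minimal sketch using Python's CVXPY package, which is not used in the paper; the small margin eps that enforces the strict inequalities and the way the decision variables are parameterized are illustrative assumptions.

```python
import numpy as np
import cvxpy as cp

n, tau, mu = 2, 1.0, 0.0035
A = np.array([[-0.1, 0.4], [0.2, -0.5]]); B = np.array([[0.1, -1.0], [-1.4, 0.4]])
C = np.array([[1.4, 0.0], [0.0, 1.65]]);  D = np.array([[-0.6, 0.7], [1.0, 1.15]])
s1 = np.array([[0.23, 0.1], [0.3, 0.2]]); s2 = np.array([[0.1, -0.2], [0.2, 0.3]])
L1 = M1 = np.zeros((n, n)); L2 = M2 = np.eye(n)   # from l- = m- = 0, l+ = m+ = 1
Knu = np.diag([2.0, 4.0 / 3.0])                   # K(0.5) for the two kernels

P = cp.Variable((n, n), symmetric=True)
Q1, Q2, Q3, Q4 = (cp.Variable((n, n), symmetric=True) for _ in range(4))
U1, U2, U3, U4 = (cp.diag(cp.Variable(n, nonneg=True)) for _ in range(4))

S11 = Q1 + tau * Q2 - P @ C - C @ P - 2 * L1 @ U1 - 2 * M1 @ U4
S22 = -(1 - mu) * Q1 - 2 * L1 @ U2
S33 = Q3 + tau * Q4 - 2 * U1
S44 = -(1 - mu) * Q3 - 2 * U2
S55 = U3 @ Knu - 2 * U4
Z = np.zeros((n, n))

# Upper-triangular blocks of the 7x7 block matrix in (6); the lower part mirrors them.
upper = {(0, 0): S11, (0, 2): P @ A + L2 @ U1, (0, 3): P @ B, (0, 4): M2 @ U4,
         (0, 5): P @ D, (0, 6): s1.T @ P, (1, 1): S22, (1, 3): L2 @ U2,
         (1, 6): s2.T @ P, (2, 2): S33, (3, 3): S44, (4, 4): S55,
         (5, 5): -U3, (6, 6): -P}
rows = [[None] * 7 for _ in range(7)]
for i in range(7):
    for j in range(7):
        blk = upper.get((i, j), upper.get((j, i), Z))
        rows[i][j] = blk if j >= i else blk.T
Sigma = cp.bmat(rows)

eps = 1e-6  # margin turning the strict LMIs into solver-friendly constraints
cons = [(Sigma + Sigma.T) / 2 << -eps * np.eye(7 * n),   # symmetrization is a formality
        P >> eps * np.eye(n), Q1 >> 0, Q2 >> 0, Q3 >> 0, Q4 >> 0]
prob = cp.Problem(cp.Minimize(0), cons)
prob.solve()                # requires an installed conic solver, e.g., SCS
print(prob.status)          # "optimal" indicates that LMI (6) is feasible
```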
From Figure 1, it is easy to see that $x(t)$ is stochastically ultimately bounded.
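For reproducibility, a sketch (not the authors' code) of how trajectories like those in Figure 1 can be generated with an Euler–Maruyama scheme is given below. Because both kernels in $K(t)$ are exponential, the distributed-delay term $D\int_{-\infty}^{t}K(t-s)g(x(s))\,ds$ reduces exactly to $Dz(t)$ with $\dot{z}_j = a_j(g_j(x_j) - z_j)$, $a = (1, 2)$; the constant delay $\tau(t) \equiv 1$, the flat initial history, the step size, and the horizon are assumptions made for the illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Data of Example 1; f and g as stated there.
A = np.array([[-0.1, 0.4], [0.2, -0.5]]); B = np.array([[0.1, -1.0], [-1.4, 0.4]])
C = np.array([[1.4, 0.0], [0.0, 1.65]]);  D = np.array([[-0.6, 0.7], [1.0, 1.15]])
s1 = np.array([[0.23, 0.1], [0.3, 0.2]]); s2 = np.array([[0.1, -0.2], [0.2, 0.3]])
J = np.array([0.0, 1.0])
f = lambda x: x + np.sin(x)
g = np.tanh
a = np.array([1.0, 2.0])          # kernel rates: k_j(t) = a_j * exp(-a_j * t)

dt, T, tau = 1e-3, 50.0, 1.0      # step size, horizon, constant delay (assumed)
N, lag = int(T / dt), int(tau / dt)
x = np.zeros((N + 1, 2))
x[0] = rng.standard_normal(2)     # random start; flat history x(t) = x(0) for t <= 0
z = g(x[0])                       # distributed-delay state implied by the flat history

for k in range(N):
    xd = x[max(k - lag, 0)]                       # delayed state x(t - tau)
    drift = -C @ x[k] + A @ f(x[k]) + B @ f(xd) + D @ z + J
    dW = np.sqrt(dt) * rng.standard_normal()      # scalar Wiener increment
    x[k + 1] = x[k] + drift * dt + (s1 @ x[k] + s2 @ xd) * dW
    z += a * (g(x[k]) - z) * dt                   # exact ODE form of exponential kernels

print(np.abs(x[N // 2 :]).max())  # trajectories stay bounded, consistent with Figure 1
```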
5. Conclusions
A proper Lyapunov functional and linear matrix inequalities
are employed to investigate the ultimate boundedness, stability, and weak attractor of stochastic Hopfield neural networks
with both time-varying and continuously distributed delays.
New results and sufficient criteria are derived after extensive
deductions. From the proposed sufficient conditions, one can
easily prove that the zero solution of such a network is mean square
exponentially stable and almost surely exponentially stable by
applying the semimartingale convergence theorem.
Acknowledgments
The authors thank the editor and the reviewers for their
insightful comments and valuable suggestions. This work was
supported by the National Natural Science Foundation of
China (nos. 10801109, 11271295, and 11047114), Science and
Technology Research Projects of Hubei Provincial Department of Education (D20131602), and Young Talent Cultivation Projects of Guangdong (LYM09134).
References
[1] C. Bai, “Stability analysis of Cohen-Grossberg BAM neural
networks with delays and impulses,” Chaos, Solitons & Fractals,
vol. 35, no. 2, pp. 263–267, 2008.
[2] T. Lv and P. Yan, “Dynamical behaviors of reaction-diffusion
fuzzy neural networks with mixed delays and general boundary
conditions,” Communications in Nonlinear Science and Numerical Simulation, vol. 16, no. 2, pp. 993–1001, 2011.
[3] J. Liu, X. Liu, and W.-C. Xie, “Global convergence of neural
networks with mixed time-varying delays and discontinuous
neuron activations,” Information Sciences, vol. 183, pp. 92–105,
2012.
Abstract and Applied Analysis
[4] H. Huang and G. Feng, “Delay-dependent stability for uncertain
stochastic neural networks with time-varying delay,” Physica A,
vol. 381, no. 1-2, pp. 93–103, 2007.
[5] W.-H. Chen and X. M. Lu, “Mean square exponential stability
of uncertain stochastic delayed neural networks,” Physics Letters
A, vol. 372, no. 7, pp. 1061–1069, 2008.
[6] M. Hua, X. Liu, F. Deng, and J. Fei, “New results on robust
exponential stability of uncertain stochastic neural networks
with mixed time-varying delays,” Neural Processing Letters, vol.
32, no. 3, pp. 219–233, 2010.
[7] P. Balasubramaniam, S. Lakshmanan, and R. Rakkiyappan,
“LMI optimization problem of delay-dependent robust stability
criteria for stochastic systems with polytopic and linear fractional uncertainties,” International Journal of Applied Mathematics and Computer Science, vol. 22, no. 2, pp. 339–351, 2012.
[8] C. Huang and J. Cao, “Almost sure exponential stability of
stochastic cellular neural networks with unbounded distributed
delays,” Neurocomputing, vol. 72, no. 13–15, pp. 3352–3356, 2009.
[9] H. Zhao, N. Ding, and L. Chen, “Almost sure exponential stability of stochastic fuzzy cellular neural networks with delays,”
Chaos, Solitons & Fractals, vol. 40, no. 4, pp. 1653–1659, 2009.
[10] C. Huang, P. Chen, Y. He, L. Huang, and W. Tan, “Almost
sure exponential stability of delayed Hopfield neural networks,”
Applied Mathematics Letters, vol. 21, no. 7, pp. 701–705, 2008.
[11] C. Huang and J. Cao, “On pth moment exponential stability
of stochastic Cohen-Grossberg neural networks with time-varying delays,” Neurocomputing, vol. 73, no. 4–6, pp. 986–990,
2010.
[12] C. X. Huang, Y. G. He, L. H. Huang, and W. J. Zhu, “𝑝th moment
stability analysis of stochastic recurrent neural networks with
time-varying delays,” Information Sciences, vol. 178, no. 9, pp.
2194–2203, 2008.
[13] C. Huang, Y. He, and H. Wang, “Mean square exponential
stability of stochastic recurrent neural networks with time-varying delays,” Computers and Mathematics with Applications,
vol. 56, no. 7, pp. 1773–1778, 2008.
[14] R. Rakkiyappan and P. Balasubramaniam, “Delay-dependent
asymptotic stability for stochastic delayed recurrent neural
networks with time varying delays,” Applied Mathematics and
Computation, vol. 198, no. 2, pp. 526–533, 2008.
[15] S. Lakshmanan and P. Balasubramaniam, “Linear matrix
inequality approach for robust stability analysis for stochastic
neural networks with time-varying delay,” Chinese Physics B,
vol. 20, no. 4, Article ID 040204, 2011.
[16] S. Lakshmanan, A. Manivannan, and P. Balasubramaniam,
“Delay-distribution-dependent stability criteria for neural networks with time-varying delays,” Dynamics of Continuous,
Discrete & Impulsive Systems A, vol. 19, no. 1, pp. 1–14, 2012.
[17] Q. Song and Z. Wang, “Stability analysis of impulsive stochastic
Cohen-Grossberg neural networks with mixed time delays,”
Physica A, vol. 387, no. 13, pp. 3314–3326, 2008.
[18] X. Li, “Existence and global exponential stability of periodic
solution for delayed neural networks with impulsive and
stochastic effects,” Neurocomputing, vol. 73, no. 4–6, pp. 749–
758, 2010.
[19] C. H. Wang, Y. G. Kao, and G. W. Yang, “Exponential stability of
impulsive stochastic fuzzy reaction-diffusion Cohen-Grossberg
neural networks with mixed delays,” Neurocomputing, vol. 89,
pp. 55–63, 2012.
[20] Q. Zhu, C. Huang, and X. Yang, “Exponential stability for stochastic jumping BAM neural networks with time-varying and distributed delays,” Nonlinear Analysis, vol. 5, no. 1, pp. 52–77, 2011.
[21] P. Balasubramaniam and M. Syed Ali, “Stochastic stability of uncertain fuzzy recurrent neural networks with Markovian jumping parameters,” International Journal of Computer Mathematics, vol. 88, no. 5, pp. 892–904, 2011.
[22] Y. F. Wang, P. Lin, and L. S. Wang, “Exponential stability of reaction-diffusion high-order Markovian jump Hopfield neural networks with time-varying delays,” Nonlinear Analysis, vol. 13, no. 3, pp. 1353–1361, 2012.
[23] P. Balasubramaniam and G. Nagamani, “Global robust passivity analysis for stochastic interval neural networks with interval time-varying delays and Markovian jumping parameters,” Journal of Optimization Theory and Applications, vol. 149, no. 1, pp. 197–215, 2011.
[24] P. Balasubramaniam and G. Nagamani, “Global robust passivity analysis for stochastic fuzzy interval neural networks with time-varying delays,” Expert Systems with Applications, vol. 39, pp. 732–742, 2012.
[25] P. Wang, D. Li, and Q. Hu, “Bounds of the hyper-chaotic Lorenz-Stenflo system,” Communications in Nonlinear Science and Numerical Simulation, vol. 15, no. 9, pp. 2514–2520, 2010.
[26] X. Liao and J. Wang, “Global dissipativity of continuous-time recurrent neural networks with time delay,” Physical Review E, vol. 68, no. 1, Article ID 016118, 7 pages, 2003.
[27] S. Arik, “On the global dissipativity of dynamical neural networks with time delays,” Physics Letters A, vol. 326, no. 1-2, pp. 126–132, 2004.
[28] X. Y. Lou and B. T. Cui, “Global robust dissipativity for integrodifferential systems modeling neural networks with delays,” Chaos, Solitons & Fractals, vol. 36, no. 2, pp. 469–478, 2008.
[29] Q. Song and Z. Zhao, “Global dissipativity of neural networks with both variable and unbounded delays,” Chaos, Solitons & Fractals, vol. 25, no. 2, pp. 393–401, 2005.
[30] H. Jiang and Z. Teng, “Global exponential stability of cellular neural networks with time-varying coefficients and delays,” Neural Networks, vol. 17, no. 10, pp. 1415–1425, 2004.
[31] H. Jiang and Z. Teng, “Boundedness, periodic solutions and global stability for cellular neural networks with variable coefficients and infinite delays,” Neurocomputing, vol. 72, no. 10–12, pp. 2455–2463, 2009.
[32] L. Wan and Q. H. Zhou, “Attractor and ultimate boundedness for stochastic cellular neural networks with delays,” Nonlinear Analysis, vol. 12, no. 5, pp. 2561–2566, 2011.
[33] L. Wan, Q. Zhou, P. Wang, and J. Li, “Ultimate boundedness and an attractor for stochastic Hopfield neural networks with time-varying delays,” Nonlinear Analysis, vol. 13, no. 2, pp. 953–958, 2012.
[34] S. Boyd, L. El Ghaoui, E. Feron, and V. Balakrishnan, Linear Matrix Inequalities in System and Control Theory, SIAM, Philadelphia, Pa, USA, 1994.
[35] X. Mao, Stochastic Differential Equations and Applications, Horwood Publishing, 1997.