Hindawi Publishing Corporation
Abstract and Applied Analysis
Volume 2013, Article ID 746241, 12 pages
http://dx.doi.org/10.1155/2013/746241
Research Article
Global ΞΌ-Stability Analysis for Impulsive Stochastic Neural Networks with Unbounded Mixed Delays

Lizi Yin(1) and Xinchun Wang(2)

(1) School of Mathematical Sciences, University of Jinan, Jinan 250022, China
(2) Computer Department, Jinan Vocational College, Jinan 250103, China

Correspondence should be addressed to Lizi Yin; ss_yinlz@ujn.edu.cn
Received 19 October 2012; Accepted 14 December 2012
Academic Editor: Chuandong Li
Copyright © 2013 L. Yin and X. Wang. This is an open access article distributed under the Creative Commons Attribution License,
which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
We investigate the global ΞΌ-stability in the mean square of impulsive stochastic neural networks with unbounded time-varying delays and continuously distributed delays. By choosing an appropriate Lyapunov-Krasovskii functional, a novel robust stability condition, in the form of linear matrix inequalities, is derived. These sufficient conditions can be tested with MATLAB LMI software packages. The results extend and improve earlier publications. Two numerical examples are provided to illustrate the effectiveness of the obtained theoretical results.
1. Introduction
Since 1982, when Hopfield, a physicist at the California Institute of Technology, proposed the Hopfield neural network models [1, 2], the theory and applications of artificial neural networks have attracted increasing attention. Artificial neural networks are applied very widely; they can process a variety of information in areas such as artificial intelligence, secure communications, network optimization, military information, and pattern recognition. Information processing in neural networks depends greatly on the system's dynamic characteristics, and stability is one of the most important of these characteristics.
External perturbations can affect the stability of a real system, so it is necessary to consider stochastic effects when studying the stability of neural networks. Recently, some results guaranteeing global asymptotic stability or exponential stability for stochastic neural networks have been obtained; see [3–8]. For example, in [4], Blythe et al. investigated the stochastic stability of neural networks with constant delay. In [6], Wang et al. analysed the stability in the mean square of stochastic Cohen-Grossberg neural networks with time-varying delays and continuously distributed delays.
As is well known, artificial neural networks are often subject to impulsive perturbations, which can affect a system's stability; see [9–11]. Impulsive perturbations often make stable systems unstable. Therefore, we should consider system stability under impulsive effects. Very recently, some results on the stability of stochastic neural networks with impulses have been proposed and studied; see [12, 13].
A neural network may also be disturbed by environmental noises, which cause uncertainty in the connection weights of the neurons and affect the dynamical behavior of the system. Therefore, parameter uncertainties should be taken into account in the system model. There are some global stability results for neural networks under the influence of parameter uncertainties [14–20]. In [14], the authors studied robust exponential stability for uncertain stochastic neural networks with discrete interval and distributed time-varying delays by the Lyapunov-Krasovskii functional method and stochastic analysis theory.
Time delays occur frequently in neurotransmission, and neural networks with time delays have attracted a great deal of interest. Many sufficient conditions have been proposed to guarantee the stability of neural networks with various types of time delays [21–26]. For example, in [22], Lou and Cui investigated the global asymptotic stability of Hopfield neural networks with bounded time delay by constructing a suitable Lyapunov-Krasovskii functional and employing the LMI method. In [24], Raja et al. further obtained some results for stochastic neural networks with mixed time delays by using Lyapunov stability theory and LMI technology. However, most of these results have focused only on bounded delays. As we know, in many engineering applications time delays depend heavily on the histories and may be unbounded [27, 28]. In this case, the delay conditions of those existing results are too restrictive.
Recently, Chen and Wang [29] and Liu and Chen [30] proposed a new concept of ΞΌ-stability, which can be applied to neural networks with unbounded time-varying delay. However, few results have been reported in the literature concerning the problem of ΞΌ-stability for impulsive stochastic neural networks with unbounded time-varying delays and continuously distributed delays, which inspires our interest.
This paper is concerned with the global stability analysis in the mean square of impulsive stochastic neural networks with unbounded time-varying delays and continuously distributed delays. By choosing an appropriate Lyapunov-Krasovskii functional, a novel robust stability condition, in the form of linear matrix inequalities, is derived. These sufficient conditions can be tested with MATLAB LMI software packages. Our results extend and improve some results of [29, 30].
The paper is organized as follows. In Section 2, the basic definitions and assumptions are given together with the statement of the problem. In Section 3, sufficient conditions for ΞΌ-stability in the mean square of impulsive stochastic neural networks with unbounded time-varying delays and continuously distributed delays are obtained. In Section 4, global robust ΞΌ-stability criteria in the mean square are derived for an uncertain neural network model. Then, two numerical examples are provided to demonstrate the effectiveness of our results in Section 5. Finally, concluding remarks are given in Section 6.
Notations. Throughout this paper, $\mathbb{R}$ and $\mathbb{R}^n$ denote, respectively, the set of real numbers and the $n$-dimensional real space equipped with the Euclidean norm $|\cdot|$; $\mathbb{Z}^+$ denotes the set of positive integers; $\mathbb{R}^{n \times m}$ denotes the set of all $n \times m$ real matrices. Let $A \ge 0$ ($A \le 0$) denote that the matrix $A$ is symmetric positive semidefinite (negative semidefinite); $A^T$ denotes the transpose of the matrix $A$; $\lambda_{\max}(A)$ ($\lambda_{\min}(A)$) denotes the maximum (minimum) eigenvalue of a matrix $A$. $\mathcal{L}^2[0, \infty)$ is the space of square-integrable vector functions. Moreover, let $(\Omega, \mathcal{F}, (\mathcal{F}_t)_{t \ge 0}, \mathbb{P})$ be a probability space with a filtration $(\mathcal{F}_t)_{t \ge 0}$ satisfying the usual conditions (it is right continuous, and $\mathcal{F}_0$ contains all $\mathbb{P}$-null sets); $L^2_{\mathcal{F}_0}((-\infty, 0]; \mathbb{R}^n)$ denotes the family of all $\mathcal{F}_0$-measurable, $C((-\infty, 0]; \mathbb{R}^n)$-valued random variables. $E(\cdot)$ stands for the mathematical expectation operator with respect to the given probability measure $\mathbb{P}$.
2. Model Description and Some Assumptions

Consider the following neural network model:
$$\begin{aligned} \dot{x}(t) ={}& -A_0 x(t) + A_1 f(x(t)) + A_2 f(x(t-\tau(t))) + A_3 \int_0^{\infty} h(s)\, f(x(t-s))\,ds + J, && t \neq t_k,\ t > 0, \\ \Delta x(t_k) ={}& x(t_k) - x(t_k^-) = J_k(x(t_k^-)), && t = t_k,\ k \in \mathbb{Z}^+, \end{aligned} \tag{1}$$
where $x(t) = (x_1(t), x_2(t), \ldots, x_n(t))^T$ is the neuron state vector of the neural network; $A_0 = \operatorname{diag}(a_1^{(0)}, a_2^{(0)}, \ldots, a_n^{(0)})$, with $a_i^{(0)} > 0$, $i = 1, 2, \ldots, n$, is the decay rate matrix; $A_1 = (a_{ij}^{(1)})_{n \times n}$, $A_2 = (a_{ij}^{(2)})_{n \times n}$, and $A_3 = (a_{ij}^{(3)})_{n \times n}$ are the connection weight matrices; $f(\cdot) = (f_1(\cdot), f_2(\cdot), \ldots, f_n(\cdot))^T$ is the neuron activation function; $\tau(t)$ represents the transmission delay of the neural network; $h(\cdot) = \operatorname{diag}(h_1(\cdot), h_2(\cdot), \ldots, h_n(\cdot))$ is the delay kernel function; $J$ is a constant input, and $J_k$ is the impulsive function.
Throughout the paper, we make the following assumptions.
Assumption 1. The neuron activation functions $f_j(\cdot)$, $j = 1, 2, \ldots, n$, are bounded, continuously differentiable, and satisfy
$$0 \le \frac{f_j(u) - f_j(v)}{u - v} \le l_j, \quad u \neq v,\ u, v \in \mathbb{R}, \tag{2}$$
where $l_j$, $j = 1, 2, \ldots, n$, are positive constants.
Assumption 2. $\tau(t)$ is a nonnegative, continuously differentiable time-varying delay satisfying $\dot{\tau}(t) \le \rho < 1$, where $\rho$ is a positive constant.
Assumption 3. The delay kernels $h_i$, $i = 1, 2, \ldots, n$, are real nonnegative continuous functions defined on $[0, \infty)$ and satisfy
$$\int_0^{\infty} h_i(s)\,ds = 1, \quad i = 1, 2, \ldots, n. \tag{3}$$
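For instance, the delay kernel used later in Section 5 satisfies (3):
$$h_i(s) = e^{-s}: \quad \int_0^{\infty} e^{-s}\,ds = 1.$$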
Assumption 4. The impulse times $t_k$ satisfy $0 = t_0 < t_1 < \cdots < t_k < \cdots$ and $\lim_{k \to \infty} t_k = +\infty$.
If the functions $f_j(\cdot)$, $j = 1, 2, \ldots, n$, satisfy Assumption 1, then system (1) has an equilibrium point; see [14]. Assume that $x^* = (x_1^*, x_2^*, \ldots, x_n^*)^T$ is an equilibrium point of system (1) and that the impulsive function of system (1) is characterized by $J_k(x(t_k^-)) = -M_k(x(t_k^-) - x^*)$, where $J_k(x(t_k^-)) = (J_{1k}(x(t_k^-)), J_{2k}(x(t_k^-)), \ldots, J_{nk}(x(t_k^-)))^T$ and $M_k$ is a real matrix.

Letting $y_i = x_i - x_i^*$, one can shift the equilibrium point of system (1) to the origin and obtain
$$\begin{aligned} \dot{y}(t) ={}& -A_0 y(t) + A_1 g(y(t)) + A_2 g(y(t-\tau(t))) + A_3 \int_0^{\infty} h(s)\, g(y(t-s))\,ds, && t \neq t_k,\ t > 0, \\ \Delta y(t_k) ={}& y(t_k) - y(t_k^-) = -M_k(y(t_k^-)), && t = t_k,\ k \in \mathbb{Z}^+, \end{aligned} \tag{4}$$
where $y(t) = (y_1(t), y_2(t), \ldots, y_n(t))^T$, $g(y(s)) = (g_1(y_1(s)), g_2(y_2(s)), \ldots, g_n(y_n(s)))^T$, and $g_i(y_i(s)) = f_i(y_i(s) + x_i^*) - f_i(x_i^*)$. By the definition of $g_i(\cdot)$, $i = 1, 2, \ldots, n$, and Assumption 1, $g_i$ satisfies the sector condition
$$0 \le \frac{g_i(u)}{u} \le l_i, \quad u \in \mathbb{R},\ u \neq 0, \tag{5}$$
and $g_i(0) = 0$, $i = 1, 2, \ldots, n$, where the $l_i$ are the positive constants of Assumption 1.
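For example, the activation function used in Section 5 satisfies (5) with unit sector bounds:
$$g_i(u) = \tanh(u): \quad \tanh(0) = 0, \quad 0 \le \frac{\tanh(u)}{u} \le 1 \ (u \neq 0), \quad \text{so } l_i = 1.$$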
In practical applications, a neural system is affected by external perturbations, so it is necessary to consider the effect of randomness on neural networks. We thus obtain the following stochastic impulsive neural network with delays:
$$\begin{aligned} dy(t) ={}& \Big[-A_0 y(t) + A_1 g(y(t)) + A_2 g(y(t-\tau(t))) + A_3 \int_0^{\infty} h(s)\, g(y(t-s))\,ds\Big]\,dt \\ & + \sigma(t, y(t), y(t-\tau(t)))\,d\omega(t), && t \neq t_k,\ t > 0, \\ \Delta y(t_k) ={}& y(t_k) - y(t_k^-) = -M_k(y(t_k^-)), && t = t_k,\ k \in \mathbb{Z}^+, \end{aligned} \tag{6}$$
where $\omega(t) = (\omega_1(t), \omega_2(t), \ldots, \omega_n(t))^T$ is an $n$-dimensional Brownian motion defined on a complete probability space $(\Omega, \mathcal{F}, (\mathcal{F}_t)_{t \ge 0}, \mathbb{P})$.
Assumption 5. Assume that $\sigma : \mathbb{R}^+ \times \mathbb{R}^n \times \mathbb{R}^n \to \mathbb{R}^{n \times n}$ is locally Lipschitz continuous and satisfies the linear growth condition, with $\sigma(t, 0, 0) = 0$ for all $t \in \mathbb{R}^+$. Moreover, there exist $n \times n$ matrices $\Gamma_0 > 0$ and $\Gamma_1 > 0$ such that
$$\operatorname{trace}\left[\sigma(t, y(t), y(t-\tau(t)))^T\, \sigma(t, y(t), y(t-\tau(t)))\right] \le y^T(t)\,\Gamma_0\, y(t) + y^T(t-\tau(t))\,\Gamma_1\, y(t-\tau(t)). \tag{7}$$
The initial conditions for system (6) are $y(s) = \varphi(s)$, $s \in (-\infty, 0]$, $\varphi \in C^2_{\mathcal{F}_0}((-\infty, 0], \mathbb{R}^n)$, where $C^2_{\mathcal{F}_0}$ denotes the family of all bounded, $\mathcal{F}_0$-measurable, $C((-\infty, 0], \mathbb{R}^n)$-valued random variables satisfying $\|\varphi\| = \sup_{s \le 0} E|\varphi(s)|^2 < \infty$.
For completeness, we first give the following definition
and lemmas.
Definition 1 (see [29]). Suppose that $\mu(t)$ is a nonnegative continuous function satisfying $\lim_{t \to \infty} \mu(t) = \infty$. If there exists a scalar $M > 0$ such that
$$E\|y(t)\|^2 \le \frac{M}{\mu(t)}, \quad t \ge 0, \tag{8}$$
then system (6) is said to be globally stochastically ΞΌ-stable in the mean square.
Obviously, the definition of global stochastic ΞΌ-stability in the mean square includes global stochastic asymptotic stability and exponential stability in the mean square as special cases.
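For instance, reading (8) directly (and consistently with the special cases discussed in [29, 30]):
$$\mu(t) = e^{\varepsilon t}\ (\varepsilon > 0) \ \Rightarrow\ E\|y(t)\|^2 \le M e^{-\varepsilon t} \quad \text{(exponential stability in the mean square)},$$
$$\mu(t) = t^{p}\ (p > 0) \ \Rightarrow\ E\|y(t)\|^2 \le M t^{-p} \quad \text{(power-rate stability in the mean square)}.$$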
Lemma 2 (see [31]). Let $W_1$, $W_2$, and $Q = Q^T > 0$ be real matrices of appropriate dimensions, and let $\varepsilon > 0$ be a scalar; then
$$W_1^T W_2 + W_2^T W_1 \le \varepsilon W_1^T Q W_1 + \varepsilon^{-1} W_2^T Q^{-1} W_2. \tag{9}$$

Lemma 3 (see [32]). Let $U$, $V$, $W$, and $F(t)$ be real matrices of appropriate dimensions with $W$ satisfying $W^T = W$; then
$$W + U F(t) V + V^T F^T(t) U^T < 0, \quad \forall F^T(t) F(t) \le I, \tag{10}$$
if and only if there exists a scalar $\varepsilon > 0$ such that
$$W + \varepsilon U U^T + \varepsilon^{-1} V^T V < 0. \tag{11}$$

Lemma 4 (see [32], the Schur complement). For a given matrix
$$S = \begin{pmatrix} S_{11} & S_{12} \\ S_{21} & S_{22} \end{pmatrix} > 0, \tag{12}$$
where $S_{11}^T = S_{11}$ and $S_{22}^T = S_{22}$, the condition $S > 0$ is equivalent to any one of the following conditions:
(1) $S_{22} > 0$, $S_{11} - S_{12} S_{22}^{-1} S_{12}^T > 0$;
(2) $S_{11} > 0$, $S_{22} - S_{12}^T S_{11}^{-1} S_{12} > 0$.

3. Stochastic Stability Analysis of Neural Networks

Let $C^{2,1}(\mathbb{R}^n \times \mathbb{R}^+ \to \mathbb{R}^+)$ denote the family of all nonnegative functions $V(y, t)$ on $\mathbb{R}^n \times \mathbb{R}^+$ that are once continuously differentiable in $t$ and twice differentiable in $y$. For every such $V$, we define an operator $LV$ associated with system (6) by
$$\begin{aligned} LV ={}& V_t(y, t) + V_y(y, t)\Big[-A_0 y(t) + A_1 g(y(t)) + A_2 g(y(t-\tau(t))) + A_3 \int_0^{\infty} h(s)\, g(y(t-s))\,ds\Big] \\ & + \frac{1}{2}\operatorname{trace}\left[\sigma^T V_{yy}(y, t)\,\sigma\right], \end{aligned} \tag{13}$$
where
$$V_t(y, t) = \frac{\partial V(y, t)}{\partial t}, \qquad V_y(y, t) = \left(\frac{\partial V(y, t)}{\partial y_1}, \ldots, \frac{\partial V(y, t)}{\partial y_n}\right)^T, \qquad V_{yy}(y, t) = \left(\frac{\partial^2 V(y, t)}{\partial y_i\, \partial y_j}\right)_{n \times n}, \quad i, j = 1, 2, \ldots, n. \tag{14}$$
Theorem 5. Assume that Assumptions 1–5 hold. Then, the zero solution of system (6) is globally stochastically ΞΌ-stable in the mean square if there exist diagonal matrices $P > 0$, $Q_i > 0$, $i = 1, 2$, and $Q_3 = \operatorname{diag}(q_1^{(3)}, \ldots, q_n^{(3)})$, some constants $\alpha_1 \ge 0$, $\alpha_2 > 0$, $\alpha_3 > 0$, $\delta_j > 0$, $j = 0, 1, 2$, $m_{ik} \in [0, 2]$, $i = 1, 2, \ldots, n$, $k \in \mathbb{Z}^+$, a nonnegative continuously differentiable function $\mu(t)$ defined on $[0, \infty)$, and a constant $T > 0$ such that, for $t \ge T$,
$$\frac{\dot{\mu}(t)}{\mu(t)} \le \alpha_1, \qquad \frac{\mu(t-\tau(t))}{\mu(t)} \ge \alpha_2, \qquad \frac{\int_0^{\infty} h_i(s)\,\mu(t+s)\,ds}{\mu(t)} \le \alpha_3, \quad i = 1, 2, \ldots, n, \tag{15}$$
and the following LMI holds:
$$\Xi^* = \begin{pmatrix} \Pi_0 & \sqrt{\delta_0}\,P A_1 & \sqrt{\dfrac{\delta_1}{1-\rho}}\,P A_2 & \sqrt{\delta_2}\,P A_3 \\ * & -Q_1 & 0 & 0 \\ * & * & -Q_2 & 0 \\ * & * & * & -Q_3 \end{pmatrix} < 0, \tag{16}$$
where $\Pi_0 = \alpha_1 P - P A_0 - A_0^T P + \lambda \Gamma_0 + \frac{1}{1-\rho}\,\alpha_2^{-1} \lambda \Gamma_1 + L\left[\delta_0^{-1} Q_1 + \delta_1^{-1} \alpha_2^{-1} Q_2 + \delta_2^{-1} \alpha_3 Q_3\right] L$, $L = \operatorname{diag}(l_1, l_2, \ldots, l_n)$, $\lambda = \lambda_{\max}(P)$, and the impulsive operator is $M_k = \operatorname{diag}(m_{1k}, \ldots, m_{nk})$.
Proof. By Lemma 4, (16) is equivalent to the following inequality:
$$\begin{aligned} & \alpha_1 P - P A_0 - A_0^T P + \delta_0 P A_1 Q_1^{-1} A_1^T P + \frac{\delta_1}{1-\rho}\, P A_2 Q_2^{-1} A_2^T P + \delta_2 P A_3 Q_3^{-1} A_3^T P \\ & \quad + \lambda \Gamma_0 + \frac{1}{1-\rho}\,\alpha_2^{-1} \lambda \Gamma_1 + L\left[\delta_0^{-1} Q_1 + \delta_1^{-1} \alpha_2^{-1} Q_2 + \delta_2^{-1} \alpha_3 Q_3\right] L < 0. \end{aligned} \tag{17}$$

Based on system (6), we construct the following Lyapunov-Krasovskii functional:
$$V(y(t), t) = V_1(y(t), t) + V_2(y(t), t) + V_3(y(t), t), \tag{18}$$
where
$$\begin{aligned} V_1(y(t), t) ={}& \mu(t)\, y^T(t) P y(t), \\ V_2(y(t), t) ={}& \frac{1}{1-\rho}\,\alpha_2^{-1} \lambda \int_{t-\tau(t)}^{t} \mu(s)\, y^T(s) \Gamma_1 y(s)\,ds + \delta_1^{-1} \alpha_2^{-1} \int_{t-\tau(t)}^{t} \mu(s)\, g^T(y(s))\, Q_2\, g(y(s))\,ds, \\ V_3(y(t), t) ={}& \sum_{i=1}^{n} q_i^{(3)} \delta_2^{-1} \int_0^{\infty} h_i(\sigma) \int_{t-\sigma}^{t} \mu(s+\sigma)\, g_i^2(y_i(s))\,ds\,d\sigma. \end{aligned} \tag{19}$$

By ItΓ΄'s formula, we can obtain the following stochastic differential:
$$dV(y(t), t) = LV(y(t), t)\,dt + V_y(y, t)\,\sigma(t, y(t), y(t-\tau(t)))\,d\omega(t), \tag{20}$$
where
$$LV(y(t), t) = LV_1(y(t), t) + LV_2(y(t), t) + LV_3(y(t), t). \tag{21}$$

Along the trajectories of system (6), we have
$$\begin{aligned} LV_1(y(t), t) ={}& \dot{\mu}(t)\, y^T(t) P y(t) + 2\mu(t)\, y^T(t) P \Big[-A_0 y(t) + A_1 g(y(t)) + A_2 g(y(t-\tau(t))) \\ & + A_3 \int_0^{\infty} h(s)\, g(y(t-s))\,ds\Big] + \frac{1}{2}\operatorname{trace}\left[\sigma^T V_{yy}(y, t)\,\sigma\right]. \end{aligned} \tag{22}$$

By Lemma 2, we get
$$\begin{aligned} 2 y^T(t) P A_1 g(y(t)) \le{}& \delta_0\, y^T(t) P A_1 Q_1^{-1} A_1^T P y(t) + \delta_0^{-1}\, g^T(y(t))\, Q_1\, g(y(t)), \\ 2 y^T(t) P A_2 g(y(t-\tau(t))) \le{}& \delta_1 (1-\rho)^{-1}\, y^T(t) P A_2 Q_2^{-1} A_2^T P y(t) + \delta_1^{-1} (1-\rho)\, g^T(y(t-\tau(t)))\, Q_2\, g(y(t-\tau(t))), \\ 2 y^T(t) P A_3 \int_0^{\infty} h(s)\, g(y(t-s))\,ds \le{}& \delta_2\, y^T(t) P A_3 Q_3^{-1} A_3^T P y(t) \\ & + \delta_2^{-1} \left[\int_0^{\infty} h(s)\, g(y(t-s))\,ds\right]^T Q_3 \left[\int_0^{\infty} h(s)\, g(y(t-s))\,ds\right]. \end{aligned} \tag{23}$$

From Assumption 5, we obtain
$$\frac{1}{2}\operatorname{trace}\left[\sigma^T V_{yy}(y, t)\,\sigma\right] = \mu(t)\operatorname{trace}\left[\sigma^T P \sigma\right] \le \mu(t)\,\lambda_{\max}(P)\operatorname{trace}\left[\sigma^T \sigma\right] \le \mu(t)\,\lambda\left[y^T(t) \Gamma_0 y(t) + y^T(t-\tau(t)) \Gamma_1 y(t-\tau(t))\right]. \tag{24}$$

Therefore,
$$\begin{aligned} LV_1(y(t), t) \le{}& \mu(t)\, y^T(t)\left[\alpha_1 P - P A_0 - A_0^T P + \delta_0 P A_1 Q_1^{-1} A_1^T P + \delta_1 (1-\rho)^{-1} P A_2 Q_2^{-1} A_2^T P + \delta_2 P A_3 Q_3^{-1} A_3^T P + \lambda \Gamma_0\right] y(t) \\ & + \mu(t)\,\delta_0^{-1}\, g^T(y(t))\, Q_1\, g(y(t)) + \mu(t)\,\lambda\, y^T(t-\tau(t)) \Gamma_1 y(t-\tau(t)) \\ & + \mu(t)\,\delta_1^{-1} (1-\rho)\, g^T(y(t-\tau(t)))\, Q_2\, g(y(t-\tau(t))) \\ & + \mu(t)\,\delta_2^{-1} \left[\int_0^{\infty} h(s)\, g(y(t-s))\,ds\right]^T Q_3 \left[\int_0^{\infty} h(s)\, g(y(t-s))\,ds\right], \\ LV_2(y(t), t) \le{}& \frac{1}{1-\rho}\,\alpha_2^{-1} \lambda \left[\mu(t)\, y^T(t) \Gamma_1 y(t) - \mu(t-\tau(t))\, y^T(t-\tau(t)) \Gamma_1 y(t-\tau(t))\,(1 - \dot{\tau}(t))\right] \\ & + \delta_1^{-1} \alpha_2^{-1} \left[\mu(t)\, g^T(y(t))\, Q_2\, g(y(t)) - \mu(t-\tau(t))\, g^T(y(t-\tau(t)))\, Q_2\, g(y(t-\tau(t)))\,(1 - \dot{\tau}(t))\right] \\ \le{}& \frac{1}{1-\rho}\,\alpha_2^{-1} \lambda\,\mu(t)\, y^T(t) \Gamma_1 y(t) - \alpha_2^{-1} \lambda\,\mu(t-\tau(t))\, y^T(t-\tau(t)) \Gamma_1 y(t-\tau(t)) \\ & + \delta_1^{-1} \alpha_2^{-1} \left[\mu(t)\, g^T(y(t))\, Q_2\, g(y(t)) - \mu(t-\tau(t))\, g^T(y(t-\tau(t)))\, Q_2\, g(y(t-\tau(t)))\,(1-\rho)\right]. \end{aligned} \tag{25}$$

From (15), we have
$$\begin{aligned} LV_2(y(t), t) \le{}& \frac{1}{1-\rho}\,\alpha_2^{-1} \lambda\,\mu(t)\, y^T(t) \Gamma_1 y(t) - \lambda\,\mu(t)\, y^T(t-\tau(t)) \Gamma_1 y(t-\tau(t)) \\ & + \delta_1^{-1} \alpha_2^{-1} \mu(t)\, g^T(y(t))\, Q_2\, g(y(t)) - \delta_1^{-1} \alpha_2^{-1} \mu(t-\tau(t))\, g^T(y(t-\tau(t)))\, Q_2\, g(y(t-\tau(t)))\,(1-\rho), \end{aligned}$$
and
$$\begin{aligned} LV_3(y(t), t) ={}& \delta_2^{-1} \sum_{i=1}^{n} q_i^{(3)} g_i^2(y_i(t)) \int_0^{\infty} h_i(\sigma)\,\mu(t+\sigma)\,d\sigma - \mu(t) \sum_{i=1}^{n} q_i^{(3)} \delta_2^{-1} \int_0^{\infty} h_i(\sigma)\, g_i^2(y_i(t-\sigma))\,d\sigma \\ \le{}& \delta_2^{-1} \mu(t)\, \frac{\int_0^{\infty} h_i(\sigma)\,\mu(t+\sigma)\,d\sigma}{\mu(t)} \sum_{i=1}^{n} q_i^{(3)} g_i^2(y_i(t)) - \mu(t) \sum_{i=1}^{n} q_i^{(3)} \delta_2^{-1} \int_0^{\infty} h_i(\sigma)\, g_i^2(y_i(t-\sigma))\,d\sigma \\ \le{}& \delta_2^{-1} \mu(t)\,\alpha_3 \sum_{i=1}^{n} q_i^{(3)} g_i^2(y_i(t)) - \mu(t) \sum_{i=1}^{n} q_i^{(3)} \delta_2^{-1} \int_0^{\infty} h_i(\sigma)\, g_i^2(y_i(t-\sigma))\,d\sigma. \end{aligned} \tag{26}$$

We use Cauchy's inequality $\left(\int p(s)\, q(s)\,ds\right)^2 \le \left(\int p^2(s)\,ds\right)\left(\int q^2(s)\,ds\right)$ and get
$$\mu(t) \sum_{i=1}^{n} q_i^{(3)} \delta_2^{-1} \int_0^{\infty} h_i(\sigma)\,d\sigma \int_0^{\infty} h_i(\sigma)\, g_i^2(y_i(t-\sigma))\,d\sigma \ge \mu(t)\,\delta_2^{-1} \left[\int_0^{\infty} h(\sigma)\, g(y(t-\sigma))\,d\sigma\right]^T Q_3 \left[\int_0^{\infty} h(\sigma)\, g(y(t-\sigma))\,d\sigma\right]. \tag{27}$$

Thus,
$$\begin{aligned} LV_3(y(t), t) \le{}& \delta_2^{-1} \mu(t)\,\alpha_3\, g^T(y(t))\, Q_3\, g(y(t)) \\ & - \mu(t)\,\delta_2^{-1} \left[\int_0^{\infty} h(\sigma)\, g(y(t-\sigma))\,d\sigma\right]^T Q_3 \left[\int_0^{\infty} h(\sigma)\, g(y(t-\sigma))\,d\sigma\right]. \end{aligned} \tag{28}$$

Substituting (25), (26), and (28) into (21) and using the sector condition (5), we get
$$\begin{aligned} LV(y(t), t) \le{}& \mu(t)\, y^T(t)\left[\alpha_1 P - P A_0 - A_0^T P + \delta_0 P A_1 Q_1^{-1} A_1^T P + \delta_1 (1-\rho)^{-1} P A_2 Q_2^{-1} A_2^T P + \delta_2 P A_3 Q_3^{-1} A_3^T P \right. \\ & \left. + \lambda \Gamma_0 + (1-\rho)^{-1} \alpha_2^{-1} \lambda \Gamma_1 \right] y(t) + \mu(t)\, g^T(y(t))\left[\delta_0^{-1} Q_1 + \delta_1^{-1} \alpha_2^{-1} Q_2 + \delta_2^{-1} \alpha_3 Q_3\right] g(y(t)) \\ \le{}& \mu(t)\, y^T(t)\Big[\alpha_1 P - P A_0 - A_0^T P + \delta_0 P A_1 Q_1^{-1} A_1^T P + \delta_1 (1-\rho)^{-1} P A_2 Q_2^{-1} A_2^T P + \delta_2 P A_3 Q_3^{-1} A_3^T P \\ & + \lambda \Gamma_0 + (1-\rho)^{-1} \alpha_2^{-1} \lambda \Gamma_1 + L\left[\delta_0^{-1} Q_1 + \delta_1^{-1} \alpha_2^{-1} Q_2 + \delta_2^{-1} \alpha_3 Q_3\right] L\Big] y(t) \\ ={}& \mu(t)\, y^T(t)\,\Xi\, y(t), \end{aligned} \tag{29}$$
where
$$\begin{aligned} \Xi ={}& \alpha_1 P - P A_0 - A_0^T P + \delta_0 P A_1 Q_1^{-1} A_1^T P + \delta_1 (1-\rho)^{-1} P A_2 Q_2^{-1} A_2^T P + \delta_2 P A_3 Q_3^{-1} A_3^T P \\ & + \lambda \Gamma_0 + (1-\rho)^{-1} \alpha_2^{-1} \lambda \Gamma_1 + L\left[\delta_0^{-1} Q_1 + \delta_1^{-1} \alpha_2^{-1} Q_2 + \delta_2^{-1} \alpha_3 Q_3\right] L. \end{aligned} \tag{30}$$

So we have
$$dV(y(t), t) \le \mu(t)\, y^T(t)\,\Xi\, y(t)\,dt + V_y(y(t), t)\,\sigma(t, y(t), y(t-\tau(t)))\,d\omega(t). \tag{31}$$

Taking the mathematical expectation, we get
$$\frac{d E V(y(t), t)}{dt} \le \mu(t)\, E\left[y^T(t)\,\Xi\, y(t)\right]. \tag{32}$$

By $\Xi < 0$, we get
$$\frac{d E V(y(t), t)}{dt} \le 0, \quad t \in [t_{k-1}, t_k) \cap [T, +\infty),\ k \in \mathbb{Z}^+. \tag{33}$$

In addition,
$$\begin{aligned} V_1(t_k) ={}& \mu(t_k)\, y^T(t_k) P y(t_k) = \mu(t_k^-)\left[y(t_k^-) - M_k y(t_k^-)\right]^T P \left[y(t_k^-) - M_k y(t_k^-)\right] \\ ={}& \mu(t_k^-)\left[(I - M_k)\, y(t_k^-)\right]^T P \left[(I - M_k)\, y(t_k^-)\right] = \mu(t_k^-)\, y^T(t_k^-)\,(I - M_k)^T P (I - M_k)\, y(t_k^-). \end{aligned} \tag{34}$$

Noting that $M_k = \operatorname{diag}(m_{1k}, m_{2k}, \ldots, m_{nk})$ with $m_{ik} \in [0, 2]$, $i = 1, 2, \ldots, n$, $k \in \mathbb{Z}^+$ (so that $(1 - m_{ik})^2 \le 1$), we have
$$V_1(t_k) \le \mu(t_k^-)\, y^T(t_k^-) P y(t_k^-) = V_1(t_k^-). \tag{35}$$

It is obvious that $V_2(t_k) = V_2(t_k^-)$ and $V_3(t_k) = V_3(t_k^-)$; therefore,
$$V(t_k) \le V(t_k^-), \quad k \in \mathbb{Z}^+. \tag{36}$$

By (33) and (36), we know that $EV$ is monotonically nonincreasing for $t \in [T, \infty)$, which implies that
$$E V(y(t), t) \le E V(y(T), T), \quad t \ge T. \tag{37}$$

From the definition of $V$, we can deduce that
$$\mu(t)\,\lambda_{\min}(P)\, E\|y(t)\|^2 \le \mu(t)\, E\left[y^T(t) P y(t)\right] \le E V(y(t), t) \le M < \infty, \quad t \ge 0, \tag{38}$$
where $M = \max_{0 \le s \le T} E V(y, s)$. It implies that
$$E\|y(t)\|^2 \le \frac{M}{\mu(t)\,\lambda_{\min}(P)}, \quad t \ge 0. \tag{39}$$

This completes the proof of Theorem 5.
Remark 6. Theorem 5 provides a global stochastic ΞΌ-stability criterion in the mean square for the impulsive stochastic differential system (6). It should be noted that our results depend on the upper bound of the derivative of the time-varying delay, the influence of randomness, and the continuously distributed delay kernels, but are independent of the range of the time-varying delay. Therefore, our results can be applied to impulsive stochastic neural networks with unbounded time-varying delay and continuously distributed delay.
Remark 7. In [29, 30], different approaches were used to study the ΞΌ-stability of neural networks with unbounded time-varying delays. However, impulsive effects and unbounded continuously distributed delays are not considered in those models. It is known that neural networks with unbounded continuously distributed delays under impulsive effects are of great importance in many practical problems. Hence, our results in this paper are more general than those reported in [29, 30].
If the diffusion coefficient $\sigma(t, y(t), y(t-\tau(t))) = 0$, model (6) reduces to model (4), and the following result can be obtained.
Corollary 8. Assume that Assumptions 1–4 hold. Then, the zero solution of system (4) is globally ΞΌ-stable if there exist diagonal matrices $P > 0$, $Q_i > 0$, $i = 1, 2$, and $Q_3 = \operatorname{diag}(q_1^{(3)}, \ldots, q_n^{(3)})$, some constants $\alpha_1 \ge 0$, $\alpha_2 > 0$, $\alpha_3 > 0$, $\delta_j > 0$, $j = 0, 1, 2$, $m_{ik} \in [0, 2]$, $i = 1, 2, \ldots, n$, $k \in \mathbb{Z}^+$, a nonnegative continuously differentiable function $\mu(t)$ defined on $[0, \infty)$, and a constant $T > 0$ such that, for $t \ge T$,
$$\frac{\dot{\mu}(t)}{\mu(t)} \le \alpha_1, \qquad \frac{\mu(t-\tau(t))}{\mu(t)} \ge \alpha_2, \qquad \frac{\int_0^{\infty} h_i(s)\,\mu(t+s)\,ds}{\mu(t)} \le \alpha_3, \quad i = 1, 2, \ldots, n, \tag{40}$$
and the following LMI holds:
$$\Xi^* = \begin{pmatrix} \Pi_0 & \sqrt{\delta_0}\,P A_1 & \sqrt{\dfrac{\delta_1}{1-\rho}}\,P A_2 & \sqrt{\delta_2}\,P A_3 \\ * & -Q_1 & 0 & 0 \\ * & * & -Q_2 & 0 \\ * & * & * & -Q_3 \end{pmatrix} < 0, \tag{41}$$
where $\Pi_0 = \alpha_1 P - P A_0 - A_0^T P + L\left[\delta_0^{-1} Q_1 + \delta_1^{-1} \alpha_2^{-1} Q_2 + \delta_2^{-1} \alpha_3 Q_3\right] L$, $L = \operatorname{diag}(l_1, l_2, \ldots, l_n)$, and the impulsive operator is $M_k = \operatorname{diag}(m_{1k}, \ldots, m_{nk})$.
If we take 𝑃 = 𝑄1 = 𝑄2 = 𝑄3 = 𝐼 in Theorem 5, then we
obtain the following testable condition.
Corollary 9. Assume that Assumptions 1–5 hold. Then, the zero solution of system (6) is globally stochastically ΞΌ-stable in the mean square if there exist some constants $\alpha_1 \ge 0$, $\alpha_2 > 0$, $\alpha_3 > 0$, $\delta_j > 0$, $j = 0, 1, 2$, $m_{ik} \in [0, 2]$, $i = 1, 2, \ldots, n$, $k \in \mathbb{Z}^+$, a nonnegative continuously differentiable function $\mu(t)$ defined on $[0, \infty)$, and a constant $T > 0$ such that, for $t \ge T$,
$$\frac{\dot{\mu}(t)}{\mu(t)} \le \alpha_1, \qquad \frac{\mu(t-\tau(t))}{\mu(t)} \ge \alpha_2, \qquad \frac{\int_0^{\infty} h_i(s)\,\mu(t+s)\,ds}{\mu(t)} \le \alpha_3, \quad i = 1, 2, \ldots, n, \tag{42}$$
and the following LMI holds:
and the following LMI hold:
𝛿
[Π0 √𝛿0 𝐴 1 √ 1 𝐴 2 √𝛿2 𝐴 3 ]
[
]
1
−𝜌
[
]
Ξ∗ = [ ∗
< 0,
−𝐼
0
0 ]
[
]
[∗
∗
−𝐼
0 ]
∗
∗
−𝐼 ]
[∗
(43)
where Π0 = 𝛼1 − 𝐴 0 − 𝐴𝑇0 + πœ†Γ0 + (1/(1 − 𝜌))𝛼2−1 πœ†Γ1 + 𝐿[𝛿0−1 +
𝛿1−1 𝛼2−1 + 𝛿2−1 𝛼3 ]𝐿, 𝐿 = diag(𝑙1 , 𝑙2 , . . . , 𝑙𝑛 ), πœ† = πœ† max (𝑃), and
impulsive operator π‘€π‘˜ = diag(π‘š1π‘˜ , . . . , π‘šπ‘›π‘˜ ).
4. Robust Stochastic Stability Analysis of Neural Networks

Consider the following uncertain stochastic neural network:
$$\begin{aligned} dy(t) ={}& \Big[-A_0(t)\, y(t) + A_1(t)\, g(y(t)) + A_2(t)\, g(y(t-\tau(t))) + A_3(t) \int_0^{\infty} h(s)\, g(y(t-s))\,ds\Big]\,dt \\ & + \sigma(t, y(t), y(t-\tau(t)))\,d\omega(t), && t \neq t_k,\ t > 0, \\ \Delta y(t_k) ={}& y(t_k) - y(t_k^-) = -M_k(y(t_k^-)), && t = t_k,\ k \in \mathbb{Z}^+, \end{aligned} \tag{44}$$
where some parameters and variables were introduced in Section 2, and the uncertainties are described as follows:
$$A_i(t) = A_i + \Delta A_i(t), \quad i = 0, 1, 2, 3, \tag{45}$$
where the $\Delta A_i(t)$, $i = 0, 1, 2, 3$, are perturbation matrices satisfying
$$\left[\Delta A_0(t), \Delta A_1(t), \Delta A_2(t), \Delta A_3(t)\right] = H F(t) \left[T_0, T_1, T_2, T_3\right], \tag{46}$$
where $H$ and $T_i$, $i = 0, 1, 2, 3$, are constant matrices of appropriate dimensions, and $F(t)$ is an unknown real time-varying matrix of appropriate dimensions satisfying
$$F^T(t)\, F(t) \le I, \tag{47}$$
where $I$ is the unit matrix of appropriate dimensions.
Theorem 10. Assume that Assumptions 1–5 hold. Then, the zero solution of system (44) is globally robustly stochastically ΞΌ-stable in the mean square if there exist diagonal matrices $P > 0$, $Q_i > 0$, $i = 1, 2$, and $Q_3 = \operatorname{diag}(q_1^{(3)}, \ldots, q_n^{(3)})$, some constants $\alpha_1 \ge 0$, $\alpha_2 > 0$, $\alpha_3 > 0$, $\delta_j > 0$, $j = 0, 1, 2$, $\varepsilon_j > 0$, $j = 0, 1, 2, 3$, $m_{ik} \in [0, 2]$, $i = 1, 2, \ldots, n$, $k \in \mathbb{Z}^+$, a nonnegative continuously differentiable function $\mu(t)$ defined on $[0, \infty)$, and a constant $T > 0$ such that, for $t \ge T$,
$$\frac{\dot{\mu}(t)}{\mu(t)} \le \alpha_1, \qquad \frac{\mu(t-\tau(t))}{\mu(t)} \ge \alpha_2, \qquad \frac{\int_0^{\infty} h_i(s)\,\mu(t+s)\,ds}{\mu(t)} \le \alpha_3, \quad i = 1, 2, \ldots, n, \tag{48}$$
and the following LMI holds:
$$\Xi^{**} = \begin{pmatrix} \Pi_1 & \sqrt{\delta_0}\,P A_1 & \sqrt{\dfrac{\delta_1}{1-\rho}}\,P A_2 & \sqrt{\delta_2}\,P A_3 \\ * & R_1 & 0 & 0 \\ * & * & R_2 & 0 \\ * & * & * & R_3 \end{pmatrix} < 0, \tag{49}$$
where $\Pi_1 = \alpha_1 P - P A_0 - A_0^T P + \lambda \Gamma_0 + \frac{1}{1-\rho}\,\alpha_2^{-1} \lambda \Gamma_1 + L\left[\delta_0^{-1} Q_1 + \delta_1^{-1} \alpha_2^{-1} Q_2 + \delta_2^{-1} \alpha_3 Q_3\right] L + \varepsilon_0^{-1} P H H^T P + \varepsilon_0 T_0^T T_0 + \varepsilon_1 \delta_0 T_1^T T_1 + \varepsilon_2 \frac{\delta_1}{1-\rho}\, T_2^T T_2 + \varepsilon_3 \delta_2 T_3^T T_3$, $L = \operatorname{diag}(l_1, l_2, \ldots, l_n)$, $\lambda = \lambda_{\max}(P)$, $R_1 = -Q_1 + \varepsilon_1^{-1} P H H^T P$, $R_2 = -Q_2 + \varepsilon_2^{-1} P H H^T P$, $R_3 = -Q_3 + \varepsilon_3^{-1} P H H^T P$, and the impulsive operator is $M_k = \operatorname{diag}(m_{1k}, \ldots, m_{nk})$.
Proof. Let the Lyapunov-Krasovskii functional be as defined in the proof of Theorem 5; then we have
$$\begin{aligned} \Xi^{**} ={}& \Xi^* + \begin{pmatrix} -P \Delta A_0(t) - \Delta A_0^T(t) P & \sqrt{\delta_0}\,P \Delta A_1(t) & \sqrt{\dfrac{\delta_1}{1-\rho}}\,P \Delta A_2(t) & \sqrt{\delta_2}\,P \Delta A_3(t) \\ * & 0 & 0 & 0 \\ * & * & 0 & 0 \\ * & * & * & 0 \end{pmatrix} \\ ={}& \Xi^* + \begin{pmatrix} -P H F(t) T_0 - T_0^T F^T(t) H^T P & \sqrt{\delta_0}\,P H F(t) T_1 & \sqrt{\dfrac{\delta_1}{1-\rho}}\,P H F(t) T_2 & \sqrt{\delta_2}\,P H F(t) T_3 \\ * & 0 & 0 & 0 \\ * & * & 0 & 0 \\ * & * & * & 0 \end{pmatrix} \\ ={}& \Xi^* + \begin{pmatrix} -T_0^T \\ 0 \\ 0 \\ 0 \end{pmatrix} F^T(t) \begin{pmatrix} H^T P & 0 & 0 & 0 \end{pmatrix} + \begin{pmatrix} H^T P & 0 & 0 & 0 \end{pmatrix}^T F(t) \begin{pmatrix} -T_0 & 0 & 0 & 0 \end{pmatrix} \\ & + \begin{pmatrix} \sqrt{\delta_0}\,T_1^T \\ 0 \\ 0 \\ 0 \end{pmatrix} F^T(t) \begin{pmatrix} 0 & H^T P & 0 & 0 \end{pmatrix} + \begin{pmatrix} 0 & H^T P & 0 & 0 \end{pmatrix}^T F(t) \begin{pmatrix} \sqrt{\delta_0}\,T_1 & 0 & 0 & 0 \end{pmatrix} \\ & + \begin{pmatrix} \sqrt{\dfrac{\delta_1}{1-\rho}}\,T_2^T \\ 0 \\ 0 \\ 0 \end{pmatrix} F^T(t) \begin{pmatrix} 0 & 0 & H^T P & 0 \end{pmatrix} + \begin{pmatrix} 0 & 0 & H^T P & 0 \end{pmatrix}^T F(t) \begin{pmatrix} \sqrt{\dfrac{\delta_1}{1-\rho}}\,T_2 & 0 & 0 & 0 \end{pmatrix} \\ & + \begin{pmatrix} \sqrt{\delta_2}\,T_3^T \\ 0 \\ 0 \\ 0 \end{pmatrix} F^T(t) \begin{pmatrix} 0 & 0 & 0 & H^T P \end{pmatrix} + \begin{pmatrix} 0 & 0 & 0 & H^T P \end{pmatrix}^T F(t) \begin{pmatrix} \sqrt{\delta_2}\,T_3 & 0 & 0 & 0 \end{pmatrix}. \end{aligned} \tag{50}$$
By Lemma 3, we get
$$\begin{aligned} \Xi^{**} \le{}& \Xi^* + \varepsilon_0^{-1} \begin{pmatrix} P H \\ 0 \\ 0 \\ 0 \end{pmatrix} \begin{pmatrix} H^T P & 0 & 0 & 0 \end{pmatrix} + \varepsilon_0 \begin{pmatrix} -T_0^T \\ 0 \\ 0 \\ 0 \end{pmatrix} \begin{pmatrix} -T_0 & 0 & 0 & 0 \end{pmatrix} \\ & + \varepsilon_1^{-1} \begin{pmatrix} 0 \\ P H \\ 0 \\ 0 \end{pmatrix} \begin{pmatrix} 0 & H^T P & 0 & 0 \end{pmatrix} + \varepsilon_1 \begin{pmatrix} \sqrt{\delta_0}\,T_1^T \\ 0 \\ 0 \\ 0 \end{pmatrix} \begin{pmatrix} \sqrt{\delta_0}\,T_1 & 0 & 0 & 0 \end{pmatrix} \\ & + \varepsilon_2^{-1} \begin{pmatrix} 0 \\ 0 \\ P H \\ 0 \end{pmatrix} \begin{pmatrix} 0 & 0 & H^T P & 0 \end{pmatrix} + \varepsilon_2 \begin{pmatrix} \sqrt{\dfrac{\delta_1}{1-\rho}}\,T_2^T \\ 0 \\ 0 \\ 0 \end{pmatrix} \begin{pmatrix} \sqrt{\dfrac{\delta_1}{1-\rho}}\,T_2 & 0 & 0 & 0 \end{pmatrix} \\ & + \varepsilon_3^{-1} \begin{pmatrix} 0 \\ 0 \\ 0 \\ P H \end{pmatrix} \begin{pmatrix} 0 & 0 & 0 & H^T P \end{pmatrix} + \varepsilon_3 \begin{pmatrix} \sqrt{\delta_2}\,T_3^T \\ 0 \\ 0 \\ 0 \end{pmatrix} \begin{pmatrix} \sqrt{\delta_2}\,T_3 & 0 & 0 & 0 \end{pmatrix} \\ ={}& \Xi^* + \begin{pmatrix} \Pi_2 & 0 & 0 & 0 \\ * & \varepsilon_1^{-1} P H H^T P & 0 & 0 \\ * & * & \varepsilon_2^{-1} P H H^T P & 0 \\ * & * & * & \varepsilon_3^{-1} P H H^T P \end{pmatrix}, \end{aligned} \tag{51}$$
where $\Pi_2 = \varepsilon_0^{-1} P H H^T P + \varepsilon_0 T_0^T T_0 + \varepsilon_1 \delta_0 T_1^T T_1 + \varepsilon_2 \frac{\delta_1}{1-\rho}\, T_2^T T_2 + \varepsilon_3 \delta_2 T_3^T T_3$. The right-hand side of (51) is exactly the matrix in (49), so (49) implies $\Xi^{**} < 0$, and the remainder of the proof is the same as that of Theorem 5.
This completes the proof of Theorem 10.
Remark 11. A neural network can be disturbed by environmental noises, which cause uncertainty in the system parameters and can lead to system instability. To the best of our knowledge, no results have been published on the robust ΞΌ-stability of uncertain neural networks with unbounded continuously distributed delays under impulsive perturbations.

If there exist only parameter uncertainties and no stochastic perturbations, the uncertain neural network can be described as
$$\begin{aligned} \dot{y}(t) ={}& -A_0(t)\, y(t) + A_1(t)\, g(y(t)) + A_2(t)\, g(y(t-\tau(t))) + A_3(t) \int_0^{\infty} h(s)\, g(y(t-s))\,ds, && t \neq t_k,\ t > 0, \\ \Delta y(t_k) ={}& y(t_k) - y(t_k^-) = -M_k(y(t_k^-)), && t = t_k,\ k \in \mathbb{Z}^+. \end{aligned} \tag{52}$$
Corollary 12. Assume that Assumptions 1–5 hold. Then, the zero solution of system (52) is globally robustly ΞΌ-stable if there exist diagonal matrices $P > 0$, $Q_i > 0$, $i = 1, 2$, and $Q_3 = \operatorname{diag}(q_1^{(3)}, \ldots, q_n^{(3)})$, some constants $\alpha_1 \ge 0$, $\alpha_2 > 0$, $\alpha_3 > 0$, $\delta_j > 0$, $j = 0, 1, 2$, $\varepsilon_j > 0$, $j = 0, 1, 2, 3$, $m_{ik} \in [0, 2]$, $i = 1, 2, \ldots, n$, $k \in \mathbb{Z}^+$, a nonnegative continuously differentiable function $\mu(t)$ defined on $[0, \infty)$, and a constant $T > 0$ such that, for $t \ge T$,
$$\frac{\dot{\mu}(t)}{\mu(t)} \le \alpha_1, \qquad \frac{\mu(t-\tau(t))}{\mu(t)} \ge \alpha_2, \qquad \frac{\int_0^{\infty} h_i(s)\,\mu(t+s)\,ds}{\mu(t)} \le \alpha_3, \quad i = 1, 2, \ldots, n, \tag{53}$$
and the following LMI holds:
$$\Xi^{**} = \begin{pmatrix} \Pi_1 & \sqrt{\delta_0}\,P A_1 & \sqrt{\dfrac{\delta_1}{1-\rho}}\,P A_2 & \sqrt{\delta_2}\,P A_3 \\ * & R_1 & 0 & 0 \\ * & * & R_2 & 0 \\ * & * & * & R_3 \end{pmatrix} < 0, \tag{54}$$
where $\Pi_1 = \alpha_1 P - P A_0 - A_0^T P + L\left[\delta_0^{-1} Q_1 + \delta_1^{-1} \alpha_2^{-1} Q_2 + \delta_2^{-1} \alpha_3 Q_3\right] L + \varepsilon_0^{-1} P H H^T P + \varepsilon_0 T_0^T T_0 + \varepsilon_1 \delta_0 T_1^T T_1 + \varepsilon_2 \frac{\delta_1}{1-\rho}\, T_2^T T_2 + \varepsilon_3 \delta_2 T_3^T T_3$, $L = \operatorname{diag}(l_1, l_2, \ldots, l_n)$, $R_1 = -Q_1 + \varepsilon_1^{-1} P H H^T P$, $R_2 = -Q_2 + \varepsilon_2^{-1} P H H^T P$, $R_3 = -Q_3 + \varepsilon_3^{-1} P H H^T P$, and the impulsive operator is $M_k = \operatorname{diag}(m_{1k}, \ldots, m_{nk})$.
5. Illustrative Examples

In this section, we give two examples to show the validity of the obtained results.

Example 13. Consider the stochastic impulsive neural network with two neurons:
$$\begin{aligned} dx(t) ={}& \Big[-A_0 x(t) + A_1 g(x(t)) + A_2 g(x(t-\tau(t))) + A_3 \int_0^{\infty} h(s)\, g(x(t-s))\,ds\Big]\,dt \\ & + \sigma(t, x(t), x(t-\tau(t)))\,d\omega(t), && t \neq t_k,\ t > 0, \\ \Delta x(t_k) ={}& x(t_k) - x(t_k^-) = -M_k(x(t_k^-)), && t = t_k,\ k \in \mathbb{Z}^+, \end{aligned} \tag{55}$$
where the activation function is described by $g_i(s) = \tanh(s)$, $i = 1, 2$, $\tau(t) = 0.4t$, and $h_i(s) = e^{-s}$. It is obvious that $(0, 0)^T$ is an equilibrium point of system (55). Let $\mu(t) = t$ and choose $\alpha_1 = 0.1$, $\alpha_2 = 0.6$, $\alpha_3 = 1.2$, $M_k = \operatorname{diag}(0.5, 0.5)$; the parameter matrices are, respectively, given by
$$A_0 = \begin{pmatrix} 2.8 & 0 \\ 0 & 3.2 \end{pmatrix}, \quad A_1 = \begin{pmatrix} 0.2 & -0.1 \\ 0.1 & 0.1 \end{pmatrix}, \quad A_2 = \begin{pmatrix} 0.06 & 0.07 \\ -0.07 & 0.03 \end{pmatrix}, \quad A_3 = \begin{pmatrix} 0.02 & -0.03 \\ 0.01 & 0.01 \end{pmatrix}, \quad \Gamma_0 = \Gamma_1 = \begin{pmatrix} 1.1 & 0 \\ 0 & 1.1 \end{pmatrix}. \tag{56}$$
In this case, we get $L = \operatorname{diag}(1, 1)$ and $\rho = 0.4$. Let $Q_1 = Q_2 = Q_3 = I$, $\delta_0 = \delta_1 = \delta_2 = 1$, and $P = \operatorname{diag}(10, 9)$. We can get $\lambda_{\max}(\Xi) = -0.9590 < 0$ via the MATLAB LMI toolbox. By Theorem 5, the equilibrium point of model (55) with unbounded time-varying delay and continuously distributed delay is globally stochastically ΞΌ-stable in the mean square. The numerical simulation is shown in Figure 1.
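The feasibility test is straightforward to reproduce. The following sketch (ours, written in Python with NumPy rather than the MATLAB LMI toolbox used above; all variable names are ours) assembles the block matrix $\Xi^*$ of (16) from the data of this example and checks that its largest eigenvalue is negative. It verifies the sign of the eigenvalue; the exact value depends on which test matrix the solver reports and need not coincide with the number quoted above. The same pattern adapts directly to $\Xi^{**}$ of (49) for Example 14.

```python
import numpy as np

# Data from Example 13.
A0 = np.diag([2.8, 3.2])
A1 = np.array([[0.2, -0.1], [0.1, 0.1]])
A2 = np.array([[0.06, 0.07], [-0.07, 0.03]])
A3 = np.array([[0.02, -0.03], [0.01, 0.01]])
G0 = G1 = 1.1 * np.eye(2)            # Gamma_0, Gamma_1
L = np.eye(2)                        # l_i = 1 for tanh
P = np.diag([10.0, 9.0])
Q1 = Q2 = Q3 = np.eye(2)
a1, a2, a3 = 0.1, 0.6, 1.2           # alpha_1, alpha_2, alpha_3
d0 = d1 = d2 = 1.0                   # delta_0, delta_1, delta_2
rho = 0.4
lam = np.max(np.linalg.eigvalsh(P))  # lambda = lambda_max(P)

# Pi_0 as defined after (16).
Pi0 = (a1 * P - P @ A0 - A0.T @ P + lam * G0
       + lam / ((1.0 - rho) * a2) * G1
       + L @ (Q1 / d0 + Q2 / (d1 * a2) + a3 * Q3 / d2) @ L)

# Assemble the symmetric block matrix Xi* of (16).
B1 = np.sqrt(d0) * P @ A1
B2 = np.sqrt(d1 / (1.0 - rho)) * P @ A2
B3 = np.sqrt(d2) * P @ A3
Z = np.zeros((2, 2))
Xi = np.block([
    [Pi0,  B1,   B2,   B3],
    [B1.T, -Q1,  Z,    Z],
    [B2.T, Z,    -Q2,  Z],
    [B3.T, Z,    Z,    -Q3],
])

# A negative largest eigenvalue means LMI (16) holds.
print(np.max(np.linalg.eigvalsh(Xi)))
```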
Figure 1: (a) Time series of $E(x_1)^2$ of model (55) without impulsive effects for $t \in [0, 5]$. (b) Time series of $E(x_2)^2$ of model (55) without impulsive effects for $t \in [0, 5]$. (c) Time series of $E(x_1)^2$ of model (55) with impulsive effects for $t \in [0, 5]$. (d) Time series of $E(x_2)^2$ of model (55) with impulsive effects for $t \in [0, 5]$.

Example 14. Consider the uncertain stochastic impulsive neural network with two neurons:
𝑑π‘₯ (𝑑) = [−𝐴 0 (𝑑) π‘₯ (𝑑) + 𝐴 1 𝑔 (π‘₯ (𝑑))
+ 𝐴 2 (𝑑) 𝑔 (π‘₯ (𝑑 − 𝜏 (𝑑)))
∞
+𝐴 3 (𝑑) ∫ β„Ž (𝑠) 𝑔 (π‘₯ (𝑑 − 𝑠)) 𝑑𝑠] 𝑑𝑑
0
+ 𝜎 (𝑑, π‘₯ (𝑑) , π‘₯ (𝑑 − 𝜏 (𝑑))) π‘‘πœ” (𝑑) ,
𝑑 =ΜΈ π‘‘π‘˜ , 𝑑 > 0,
Δπ‘₯ (π‘‘π‘˜ ) = π‘₯ (π‘‘π‘˜ ) − π‘₯ (π‘‘π‘˜− )
= − π‘€π‘˜ (π‘₯ (π‘‘π‘˜− )) ,
𝑑 = π‘‘π‘˜ , π‘˜ ∈ 𝑍+ ,
(57)
where the activation function is described by $g_i(s) = \tanh(s)$, $i = 1, 2$, $\tau(t) = 0.5t$, and $h_i(s) = e^{-s}$. It is obvious that $(0, 0)^T$ is an equilibrium point of system (57). Let $\mu(t) = t$
and choose $\alpha_1 = 0.1$, $\alpha_2 = 0.5$, $\alpha_3 = 1.2$, $M_k = \operatorname{diag}(1.5, 1.5)$; the parameter matrices are, respectively, given by
$$A_0 = \begin{pmatrix} 9.8 & 0 \\ 0 & 11.2 \end{pmatrix}, \quad A_1 = \begin{pmatrix} 0.2 & -0.2 \\ 0.4 & 0.3 \end{pmatrix}, \quad A_2 = \begin{pmatrix} 0.6 & 0.7 \\ -0.1 & 0.3 \end{pmatrix}, \quad A_3 = \begin{pmatrix} 0.12 & 0 \\ 0 & 0.21 \end{pmatrix}, \quad \Gamma_0 = \Gamma_1 = \begin{pmatrix} 0.5 & 0 \\ 0 & 0.5 \end{pmatrix}. \tag{58}$$
In this case, we get $L = \operatorname{diag}(1, 1)$ and $\rho = 0.5$. Let $H = T_0 = T_1 = T_2 = T_3 = 0.2 I$ and $\delta_0 = \delta_1 = \delta_2 = \varepsilon_0 = \varepsilon_1 = \varepsilon_2 = \varepsilon_3 = 1$; we can get the following feasible solution via the MATLAB LMI toolbox:
$$P = \begin{pmatrix} 0.0863 & 0 \\ 0 & 0.0863 \end{pmatrix}, \quad Q_1 = \begin{pmatrix} 0.1416 & 0 \\ 0 & 0.1416 \end{pmatrix}, \quad Q_2 = \begin{pmatrix} 0.0527 & 0 \\ 0 & 0.0527 \end{pmatrix}, \quad Q_3 = \begin{pmatrix} 0.1461 & 0 \\ 0 & 0.1461 \end{pmatrix}, \tag{59}$$
and we obtain $\lambda_{\max}(\Xi^{**}) = -1.0865 < 0$. By Theorem 10, the equilibrium point of model (57) is globally robustly stochastically ΞΌ-stable in the mean square.
6. Concluding Remarks

This paper first investigated the global stochastic ΞΌ-stability in the mean square of a class of impulsive stochastic neural networks with unbounded mixed delays. We obtained some sufficient conditions by using the Lyapunov-Krasovskii functional method, stochastic analysis theory, and the linear matrix inequality (LMI) technique. Second, we studied global robust ΞΌ-stability in the mean square for a class of uncertain impulsive neural networks with unbounded mixed delays. Our results improve and generalize some earlier works reported in the literature. Finally, two examples and their numerical simulations were given to illustrate the effectiveness of the obtained results.
Acknowledgments
This work was supported by the National Natural Science Foundation of China (61174217), the Natural Science
Foundation of Shandong Province of China (ZR2010AL016
and ZR2011AL007), and the Doctoral Foundation of University of Jinan (XBS1244).
References
[1] J. J. Hopfield and D. W. Tank, “‘Neural’ computation of decisions
in optimization problems,” Biological Cybernetics, vol. 52, no. 3,
pp. 141–152, 1985.
[2] J. J. Hopfield and D. W. Tank, “Computing with neural circuits:
a model,” Science, vol. 233, no. 4764, pp. 625–633, 1986.
[3] Z. Wang, Y. Liu, K. Fraser, and X. Liu, “Stochastic stability
of uncertain Hopfield neural networks with discrete and distributed delays,” Physics Letters A, vol. 354, no. 4, pp. 288–297,
2006.
[4] S. Blythe, X. Mao, and X. Liao, “Stability of stochastic delay
neural networks,” Journal of the Franklin Institute, vol. 338, no.
4, pp. 481–495, 2001.
[5] C. Huang, Y. He, and H. Wang, β€œMean square exponential stability of stochastic recurrent neural networks with time-varying delays,” Computers & Mathematics with Applications, vol. 56, no. 7, pp. 1773–1778, 2008.
[6] Z. Wang, Y. Liu, M. Li, and X. Liu, “Stability analysis for
stochastic Cohen-Grossberg neural networks with mixed time
delays,” IEEE Transactions on Neural Networks, vol. 17, no. 3, pp.
814–820, 2006.
[7] W. Su and Y. Chen, “Global robust stability criteria of stochastic Cohen-Grossberg neural networks with discrete and distributed time-varying delays,” Communications in Nonlinear
Science and Numerical Simulation, vol. 14, no. 2, pp. 520–528,
2009.
[8] L. Wan and Q. Zhou, “Exponential stability of stochastic reaction-diffusion Cohen-Grossberg neural networks with
delays,” Applied Mathematics and Computation, vol. 206, no. 2,
pp. 818–824, 2008.
[9] S. Mohamad, K. Gopalsamy, and H. AkΓ§a, β€œExponential stability
of artificial neural networks with distributed delays and large
impulses,” Nonlinear Analysis: Real World Applications, vol. 9,
no. 3, pp. 872–888, 2008.
[10] J. Zhou, L. Xiang, and Z. Liu, “Synchronization in complex
delayed dynamical networks with impulsive effects,” Physica A,
vol. 384, no. 2, pp. 684–692, 2007.
[11] H. Gu, H. Jiang, and Z. Teng, “Existence and globally exponential stability of periodic solution of BAM neural networks with
impulses and recent-history distributed delays,” Neurocomputing, vol. 71, no. 4–6, pp. 813–822, 2008.
[12] H. Zhang, M. Dong, Y. Wang, and N. Sun, “Stochastic stability
analysis of neutral-type impulsive neural networks with mixed
time-varying delays and Markovian jumping,” Neurocomputing,
vol. 73, no. 13–15, pp. 2689–2695, 2010.
[13] L. Chen, R. Wu, and D. Pan, “Mean square exponential stability
of impulsive stochastic fuzzy cellular neural networks with
distributed delays,” Expert Systems with Applications, vol. 38, no.
5, pp. 6294–6299, 2011.
[14] R. Rakkiyappan, P. Balasubramaniam, and S. Lakshmanan,
“Robust stability results for uncertain stochastic neural networks with discrete interval and distributed time-varying
delays,” Physics Letters A, vol. 372, no. 32, pp. 5290–5298, 2008.
[15] W. Zhou and M. Li, “Mixed time-delays dependent exponential
stability for uncertain stochastic high-order neural networks,”
Applied Mathematics and Computation, vol. 215, no. 2, pp. 503–
513, 2009.
[16] H. Li, C. Wang, P. Shi, and H. Gao, “New passivity results for
uncertain discrete-time stochastic neural networks with mixed
time delays,” Neurocomputing, vol. 73, no. 16-18, pp. 3291–3299,
2010.
[17] O. M. Kwon, S. M. Lee, and J. H. Park, β€œImproved delay-dependent exponential stability for uncertain stochastic neural networks with time-varying delays,” Physics Letters A, vol. 374, no. 10, pp. 1232–1241, 2010.
[18] Z. Wang, Y. Liu, X. Liu, and Y. Shi, “Robust state estimation
for discrete-time stochastic neural networks with probabilistic
measurement delays,” Neurocomputing, vol. 74, no. 1-3, pp. 256–
264, 2010.
[19] C. Li, J. Shi, and J. Sun, “Stability of impulsive stochastic differential delay systems and its application to impulsive stochastic
neural networks,” Nonlinear Analysis, Theory, Methods and
Applications, vol. 74, no. 10, pp. 3099–3111, 2011.
[20] P. Balasubramaniam and M. S. Ali, “Robust exponential stability
of uncertain fuzzy Cohen-Grossberg neural networks with
time-varying delays,” Fuzzy Sets and Systems, vol. 161, no. 4, pp.
608–618, 2010.
[21] X. Liao and C. Li, “An LMI approach to asymptotical stability of
multi-delayed neural networks,” Physica D, vol. 200, no. 1-2, pp.
139–155, 2005.
[22] X. Y. Lou and B. Cui, “New LMI conditions for delay-dependent
asymptotic stability of delayed Hopfield neural networks,”
Neurocomputing, vol. 69, no. 16-18, pp. 2374–2378, 2006.
[23] Q. Zhang and X. W. J. Xu, “Delay-dependent global stability
results for delayed Hopfield neural networks,” Chaos, Solitons
and Fractals, vol. 34, no. 2, pp. 662–668, 2007.
[24] R. Raja, R. Sakthivel, and S. M. Anthoni, “Stability analysis
for discrete-time stochastic neural networks with mixed time
delays and impulsive effects,” Canadian Journal of Physics, vol.
88, no. 12, pp. 885–898, 2010.
[25] R. Sakthivel, R. Samidurai, and S. M. Anthoni, “New exponential stability criteria for stochastic BAM neural networks with
impulses,” Physica Scripta, vol. 82, no. 4, Article ID 045802, 2010.
[26] R. Sakthivel, R. Samidurai, and S. M. Anthoni, “Asymptotic
stability of stochastic delayed recurrent neural networks with
impulsive effects,” Journal of Optimization Theory and Applications, vol. 147, no. 3, pp. 583–596, 2010.
[27] S.-I. Niculescu, Delay Effects on Stability: A Robust Control
Approach, vol. 269 of Lecture Notes in Control and Information
Sciences, Springer, London, UK, 2001.
[28] V. B. Kolmanovskii and V. R. Nosov, Stability of Functional-Differential Equations, vol. 180 of Mathematics in Science and Engineering, Academic Press, London, UK, 1986.
[29] T. Chen and L. Wang, “Global πœ‡-stability of delayed neural networks with unbounded time-varying delays,” IEEE Transactions
on Neural Networks, vol. 18, no. 6, pp. 1836–1840, 2007.
[30] X. Liu and T. Chen, “Robust πœ‡-stability for uncertain stochastic
neural networks with unbounded time-varying delays,” Physica
A, vol. 387, no. 12, pp. 2952–2962, 2008.
[31] J. Cao, K. Yuan, and H. X. Li, “Global asymptotical stability
of recurrent neural networks with multiple discrete delays and
distributed delays,” IEEE Transactions on Neural Networks, vol.
17, no. 6, pp. 1646–1651, 2006.
[32] S. Boyd, L. El Ghaoui, E. Feron, and V. Balakrishnan, Linear
Matrix Inequalities in System and Control Theory, vol. 15 of SIAM
Studies in Applied Mathematics, SIAM, Philadelphia, Pa, USA,
1994.