
Hindawi Publishing Corporation
Discrete Dynamics in Nature and Society
Volume 2009, Article ID 430158, 14 pages
doi:10.1155/2009/430158
Research Article
Delay-Range-Dependent Global Robust Passivity
Analysis of Discrete-Time Uncertain Recurrent
Neural Networks with Interval Time-Varying Delay
Chien-Yu Lu,¹ Chin-Wen Liao,¹ and Hsun-Heng Tsai²
¹ Department of Industrial Education and Technology, National Changhua University of Education, Changhua 500, Taiwan
² Department of Biomechatronics Engineering, National Pingtung University of Science & Technology, Pingtung 912, Taiwan
Correspondence should be addressed to Chien-Yu Lu, lcy@cc.ncue.edu.tw
Received 11 March 2009; Accepted 25 August 2009
Recommended by Guang Zhang
This paper examines passivity analysis for a class of discrete-time recurrent neural networks (DRNNs) with norm-bounded time-varying parameter uncertainties and interval time-varying delay. The activation functions are assumed to be globally Lipschitz continuous. Based on an appropriate type of Lyapunov functional, sufficient passivity conditions for the DRNNs are derived in terms of a family of linear matrix inequalities (LMIs). Two numerical examples are given to illustrate the effectiveness and applicability of the results.
Copyright © 2009 Chien-Yu Lu et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
1. Introduction
Recurrent neural networks have been extensively studied in the past decades. Two popular examples are Hopfield neural networks and cellular neural networks. Increasing attention has been drawn to the potential applications of recurrent neural networks (RNNs) in information processing systems such as signal processing, model identification, optimization, pattern recognition, and associative memory. However, these successful applications depend greatly on the dynamic behavior of RNNs. On the other hand, time delay is inevitably encountered in RNNs, since the interactions between different neurons are asynchronous. Generally, time delays, both constant and time varying, are often encountered in various engineering, biological, and economic systems due to the finite switching speed of amplifiers in electronic networks or to the finite signal propagation time in biological networks [1]. The existence of time delay can make delayed RNNs unstable or degrade their performance. Therefore, much research interest has been attracted to the
stability analysis of delayed RNNs, and many results on this issue have been reported [2–6].
The theory of passivity plays an important role in analyzing the stability of nonlinear systems [7] and has received much attention from the control community since the 1970s [8–14]. It is well known that passivity theory is important in both electrical networks and nonlinear control systems, provides a useful tool for analyzing the stability of systems [15], and has found applications in diverse areas such as signal processing, chaos control and synchronization, and fuzzy control. A passivity condition for delayed neural networks with or without time-varying parametric uncertainties, expressed in terms of LMIs [16], has been proposed in [17]. By constructing proper Lyapunov functionals and using some analytic techniques, sufficient conditions are given in [18] to ensure the passivity of integro-differential neural networks with time-varying delays. In [19, 20], the authors studied delay-dependent robust passivity criteria for delayed cellular neural networks and delayed recurrent neural networks. It should be pointed out that the aforementioned results concern continuous-time neural networks.
Recently, the stability analysis problem for discrete-time neural networks with time delay has received considerable research interest. For instance, in [21], global exponential stability of a class of discrete-time Hopfield neural networks with variable delays is considered; by making use of a difference inequality, a new global exponential stability result is provided. Under different assumptions on the activation functions, a unified linear matrix inequality (LMI) approach has been developed in [22] to establish sufficient conditions for discrete-time recurrent neural networks with time-varying delays to be globally exponentially stable. Delay-dependent results on the global exponential stability problem for discrete-time neural networks with time-varying delays were presented in [23, 24], respectively. However, no delay-range-dependent passivity conditions for discrete-time uncertain recurrent neural networks with interval time-varying delay are available in the literature, and the problem remains essentially open. The objective of this paper is to address this unsolved problem.
The purpose of this paper is to deal with the problem of passivity conditions for discrete-time uncertain recurrent neural networks with interval time-varying delay. The interval time-varying delay includes both lower and upper bounds on the delay, and the parameter uncertainties are assumed to be time varying but norm bounded and to appear in all the matrices of the state equation. It is then established that the resulting passivity condition can be cast in a linear matrix inequality format, which can be conveniently solved by the numerically efficient Matlab LMI Toolbox. In particular, when the interval time-delay factor is known, the delay-range-dependent passivity condition yields more general and practical results. Finally, two numerical examples are given to demonstrate the effectiveness of the proposed approach.
Throughout this paper, the notation X ≥ Y (resp., X > Y) for symmetric matrices X and Y indicates that the matrix X − Y is positive semidefinite (resp., positive definite); Z^T represents the transpose of the matrix Z.
2. Preliminaries
Consider a discrete-time recurrent neural network with interval time-varying delay
described by
x(k + 1) = (A + ΔA(k))x(k) + (W + ΔW(k))f(x(k)) + (W1 + ΔW1(k))f(x(k − τ(k))) + u(k),   (2.1)

where x(k) = [x1(k), x2(k), ..., xn(k)]^T is the state vector, A = diag(a1, a2, ..., an) with |ai| < 1, i = 1, 2, ..., n, is the state feedback coefficient matrix, W ∈ R^{n×n} and W1 ∈ R^{n×n} are the interconnection matrices representing the weighting coefficients of the neurons, f(x(k)) = [f1(x1(k)), ..., fn(xn(k))]^T ∈ R^n is the neuron activation function with f(0) = 0, and τ(k) is the time-varying delay of the system satisfying

τ1 ≤ τ(k) ≤ τ2,   k ∈ N,   (2.2)

where 0 ≤ τ1 ≤ τ2 are known integers. Let y(k) = f(x(k)) be the output of the neural network and u(k) be the input vector. ΔA(k), ΔW(k), and ΔW1(k) are unknown matrices representing time-varying parameter uncertainties, which are assumed to be of the form

[ΔA(k)  ΔW(k)  ΔW1(k)] = M F(k) [N1  N2  N3],   (2.3)

where M, N1, N2, and N3 are known real constant matrices and F(·) : R → R^{k×l} is an unknown time-varying matrix function satisfying

F^T(k)F(k) ≤ I,   k ∈ N.   (2.4)

The uncertain matrices ΔA(k), ΔW(k), and ΔW1(k) are said to be admissible if both (2.3) and (2.4) hold.
In order to obtain our main results, the activation functions in (2.1) are assumed to be bounded and to satisfy the following assumption.

Assumption 1. The activation functions fi(·), i = 1, 2, ..., n, are globally Lipschitz and monotone nondecreasing; that is, there exist constant scalars αi such that for any ς1, ς2 ∈ R with ς1 ≠ ς2,

0 ≤ (fi(ς1) − fi(ς2)) / (ς1 − ς2) ≤ αi,   i = 1, 2, ..., n.   (2.5)
Definition 2.1 (see [25]). System (2.1) is called passive if there exists a scalar β ≥ 0 such that

2 Σ_{k=0}^{k_p} y^T(k)u(k) ≥ −β Σ_{k=0}^{k_p} u^T(k)u(k)   (2.6)

for all k_p ≥ 0 and for all solutions of (2.1) with x(0) = 0.
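To make the model (2.1), the uncertainty structure (2.3)–(2.4), and the two supply-rate sums in Definition 2.1 concrete, a minimal Python sketch is given below. All numerical values, the scaled-tanh activation, and the random input sequence are illustrative assumptions and are not taken from the paper; the run simply evaluates both sums in (2.6) along one trajectory.

import numpy as np

# Illustrative 2-neuron instance of (2.1); every value here is an assumption.
n = 2
A  = np.diag([0.3, -0.2])                      # |a_i| < 1
W  = np.array([[0.1, 0.2], [0.0, 0.1]])
W1 = np.array([[0.05, 0.1], [0.1, 0.05]])
M  = 0.1 * np.eye(n)
N1 = N2 = N3 = 0.1 * np.eye(n)
alpha = np.array([0.5, 0.4])
f = lambda x: alpha * np.tanh(x)               # slope of f_i lies in [0, alpha_i], f(0) = 0, so (2.5) holds

def admissible_F(rng, shape):
    # Random F(k) scaled so that F(k)^T F(k) <= I, as required by (2.4).
    F = rng.standard_normal(shape)
    return F / max(1.0, np.linalg.norm(F, 2))

tau1, tau2, kp = 1, 4, 100
rng = np.random.default_rng(0)
tau = rng.integers(tau1, tau2 + 1, size=kp + 1)   # interval time-varying delay (2.2)
u = 0.1 * rng.standard_normal((kp + 1, n))        # illustrative input sequence

x = np.zeros((kp + 2, n))                         # x(0) = 0 and zero pre-history, as in Definition 2.1
lhs = rhs = 0.0
for k in range(kp + 1):
    F = admissible_F(rng, (n, n))
    dA, dW, dW1 = M @ F @ N1, M @ F @ N2, M @ F @ N3          # structure (2.3)
    xd = x[k - tau[k]] if k - tau[k] >= 0 else np.zeros(n)
    x[k + 1] = (A + dA) @ x[k] + (W + dW) @ f(x[k]) + (W1 + dW1) @ f(xd) + u[k]
    lhs += 2.0 * f(x[k]) @ u[k]                   # y(k) = f(x(k))
    rhs += u[k] @ u[k]

print("2*sum y'u =", lhs, "  -beta*sum u'u for beta = 1:", -rhs)

Such a run only illustrates the inequality (2.6) empirically for one input and one admissible uncertainty; it is not a substitute for the analytical conditions derived below.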
3. Mathematical Formulation of the Proposed Approach
This section explores globally robust delay-range-dependent passivity conditions for the discrete-time uncertain recurrent neural network with interval time-varying delay given in (2.1). Specifically, an LMI approach is employed to derive a robust delay-range-dependent passivity condition under which the system (2.1) is globally asymptotically stable and satisfies (2.6) for all admissible uncertainties ΔA(k), ΔW(k), and ΔW1(k). The analysis commences by using the LMI approach to develop some results which are essential to introduce the following Lemma 3.1 for the development of our main theorem.
Lemma 3.1. Let A, D, S, F, and P be real matrices of appropriate dimensions with P > 0 and F satisfying F^T(k)F(k) ≤ I. Then the following statements hold.

(a) For any ε > 0 and vectors x, y ∈ R^n,

2x^T DFSy ≤ ε^{-1} x^T DD^T x + ε y^T S^T Sy.   (3.1)

(b) For vectors x, y ∈ R^n,

2x^T DSy ≤ x^T DPD^T x + y^T S^T P^{-1} Sy.   (3.2)
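As a quick numerical sanity check of part (a) (not part of the original paper), the following Python sketch samples random matrices with the spectral norm of F at most one, so that F^T F ≤ I, and verifies the inequality; the dimensions and sampling distributions are arbitrary assumptions.

import numpy as np

rng = np.random.default_rng(1)
n, p, q = 4, 3, 5                              # arbitrary dimensions (assumption)
for trial in range(1000):
    x, y = rng.standard_normal(n), rng.standard_normal(n)
    D = rng.standard_normal((n, p))
    S = rng.standard_normal((q, n))
    F = rng.standard_normal((p, q))
    F = F / max(1.0, np.linalg.norm(F, 2))     # enforce F^T F <= I (spectral norm at most one)
    eps = rng.uniform(0.1, 10.0)
    lhs = 2.0 * x @ D @ F @ S @ y
    rhs = (x @ D @ D.T @ x) / eps + eps * (y @ S.T @ S @ y)
    assert lhs <= rhs + 1e-9, "Lemma 3.1(a) inequality violated"
print("Lemma 3.1(a) held on all random trials")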
For any matrices Ei, Si, Ti, and Hi (i = 1, 2, ..., 8) of appropriate dimensions, the following null equations hold:

Φ1 = 2[x^T(k)E1 + x^T(k − τ(k))E2 + x^T(k − τ2)E3 + x^T(k − τ1)E4 + f^T(x(k))E5 + f^T(x(k − τ(k)))E6 + x^T(k + 1)E7 + u^T(k)E8] × [x(k) − x(k − τ(k)) − Σ_{j=k−τ(k)+1}^{k} (x(j) − x(j − 1))] = 0,   (3.3)

Φ2 = 2[x^T(k)S1 + x^T(k − τ(k))S2 + x^T(k − τ2)S3 + x^T(k − τ1)S4 + f^T(x(k))S5 + f^T(x(k − τ(k)))S6 + x^T(k + 1)S7 + u^T(k)S8] × [x(k − τ(k)) − x(k − τ2) − Σ_{j=k−τ2+1}^{k−τ(k)} (x(j) − x(j − 1))] = 0,   (3.4)

Φ3 = −2[x^T(k)T1 + x^T(k − τ(k))T2 + x^T(k − τ2)T3 + x^T(k − τ1)T4 + f^T(x(k))T5 + f^T(x(k − τ(k)))T6 + x^T(k + 1)T7 + u^T(k)T8] × [x(k − τ1) − x(k − τ(k)) − Σ_{j=k−τ(k)+1}^{k−τ1} (x(j) − x(j − 1))] = 0,   (3.5)

Φ4 = −2[x^T(k)H1 + x^T(k − τ(k))H2 + x^T(k − τ2)H3 + x^T(k − τ1)H4 + f^T(x(k))H5 + f^T(x(k − τ(k)))H6 + x^T(k + 1)H7 + u^T(k)H8] × [x(k + 1) − (A + ΔA(k))x(k) − (W + ΔW(k))f(x(k)) − (W1 + ΔW1(k))f(x(k − τ(k))) − u(k)] = 0,   (3.6)

Φ5 = 2f^T(x(k))R1 f(x(k)) − 2f^T(x(k))R1 f(x(k)) + 2f^T(x(k − τ(k)))R2 x(k − τ(k)) − 2f^T(x(k − τ(k)))R2 x(k − τ(k)) = 0.   (3.7)
To study the globally robust delay-range-dependent passivity conditions of the
discrete-time uncertain recurrent neural network with interval time-varying delay, the
following theorem reveals that such conditions can be expressed in terms of LMIs.
Theorem 3.2. Under Assumption 1, given scalars 0 ≤ τ1 < τ2, the system (2.1) with interval time-varying delay τ(k) satisfying (2.2) is globally asymptotically robustly stable if there exist matrices P > 0, Q1 > 0, Q2 > 0, Z1 > 0, Z2 > 0, diagonal matrices R1 > 0, R2 > 0, matrices Ei, Si, Ti, and Hi (i = 1, 2, ..., 8) of appropriate dimensions, a positive scalar ε > 0, and a scalar β ≥ 0 such that the following LMI holds:

[ Ω          τ2 E      τ21 S            τ21 T      HM     εN
  τ2 E^T    −τ2 Z1     0                0          0      0
  τ21 S^T    0        −τ21(Z1 + Z2)     0          0      0
  τ21 T^T    0         0               −τ21 Z2     0      0
  M^T H^T    0         0                0         −εI     0
  εN^T       0         0                0          0     −εI ]  < 0,   (3.8)

where Ω = [Ωij] (i, j = 1, 2, ..., 8) is the symmetric matrix with Ωji = Ωij^T for i < j and

Ω11 = −P + (τ21 + 1)Q1 + τ2 Z1 + τ21 Z2 + E1 + E1^T + H1 A + A^T H1^T + Q2,
Ω12 = E2^T − E1 + S1 + T1 + A^T H2^T,   Ω13 = E3^T − S1 + A^T H3^T,
Ω14 = E4^T − T1 + A^T H4^T,   Ω15 = E5^T + H1 W + A^T H5^T + Γ^T R1,
Ω16 = E6^T + H1 W1 + A^T H6^T,   Ω17 = E7^T − τ2 Z1 − τ21 Z2 − H1 + A^T H7^T,
Ω18 = E8^T + H1 + A^T H8^T,   Ω22 = −Q1 − E2 − E2^T + S2 + S2^T + T2 + T2^T,
Ω23 = −E3^T − S2 + S3^T + T3^T,   Ω24 = −E4^T − T2 + S4^T + T4,
Ω25 = −E5^T + S5^T + T5^T + H2 W,   Ω26 = −E6^T + S6^T + T6^T + H2 W1 + R2^T,
Ω27 = −E7^T + S7^T + T7^T − H2,   Ω28 = −E8^T + S8^T + T8^T + H2,
Ω33 = −Q2 − S3 − S3^T,   Ω34 = −S4^T − T3,   Ω35 = −S5^T + H3 W,
Ω36 = −S6^T + H3 W1,   Ω37 = −S7^T − H3,   Ω38 = −S8^T + H3,
Ω44 = −T4 − T4^T,   Ω45 = T5^T + H4 W,   Ω46 = −T6^T + H4 W1,
Ω47 = −T7^T − H4,   Ω48 = −T8^T + H4,
Ω55 = −R1 − R1^T + H5 W + W^T H5^T,   Ω56 = H5 W1 + W^T H6^T,
Ω57 = −H5 + W^T H7^T,   Ω58 = H5 + W^T H8^T − I,
Ω66 = −R2 Γ^{-1} − (R2 Γ^{-1})^T + H6 W1 + W1^T H6^T,   Ω67 = −H6 + W1^T H7^T,
Ω68 = H6 + W1^T H8^T,   Ω77 = −H7 − H7^T + P + τ2 Z1 + τ21 Z2,
Ω78 = H7 − H8,   Ω88 = H8 + H8^T − βI,

E = [E1^T E2^T E3^T E4^T E5^T E6^T E7^T E8^T]^T,   S = [S1^T S2^T S3^T S4^T S5^T S6^T S7^T S8^T]^T,
T = [T1^T T2^T T3^T T4^T T5^T T6^T T7^T T8^T]^T,   H = [H1^T H2^T H3^T H4^T H5^T H6^T H7^T H8^T]^T,
N = [N1^T 0 0 0 N2^T N3^T 0 0]^T,   (3.9)

in which Γ = diag(α1, α2, ..., αn) and τ21 = τ2 − τ1. Then the system (2.1) satisfying (3.8) with interval time-varying delay is robustly delay-range-dependent passive in the sense of Definition 2.1.
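Condition (3.8) is a feasibility problem in the decision variables P, Q1, Q2, Z1, Z2, R1, R2, Ei, Si, Ti, Hi, ε, and β, and it can be handled by any semidefinite programming solver, for example the Matlab LMI Toolbox used later in the examples. Implementing (3.8) in full is lengthy, so the short Python/CVXPY sketch below only illustrates the general workflow on a much smaller stand-in LMI, namely the classical delay-independent stability condition for a linear comparison system x(k + 1) = A x(k) + Ad x(k − τ) with constant delay; the matrices and the stand-in condition are assumptions for illustration and are not the condition of Theorem 3.2.

import numpy as np
import cvxpy as cp

# Stand-in LMI (assumption, NOT (3.8)): find P > 0, Q > 0 such that
#   [[A'P A - P + Q,  A'P Ad     ],
#    [Ad'P A,         Ad'P Ad - Q]] < 0,
# which guarantees asymptotic stability of x(k+1) = A x(k) + Ad x(k - tau) for
# a constant delay tau. The same posing/solving workflow applies to (3.8).
A  = np.diag([0.2, 0.1])                       # illustrative matrices
Ad = np.array([[0.1, 0.05], [0.0, 0.1]])
n  = A.shape[0]

P = cp.Variable((n, n), symmetric=True)
Q = cp.Variable((n, n), symmetric=True)
margin = 1e-6                                  # turns strict inequalities into solvable ones
lmi = cp.bmat([[A.T @ P @ A - P + Q, A.T @ P @ Ad],
               [Ad.T @ P @ A,        Ad.T @ P @ Ad - Q]])
constraints = [P >> margin * np.eye(n),
               Q >> margin * np.eye(n),
               lmi << -margin * np.eye(2 * n)]
prob = cp.Problem(cp.Minimize(0), constraints)
prob.solve()                                   # any installed SDP solver (e.g., SCS) will do
print("feasibility status:", prob.status)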
Proof. Choose the Lyapunov-Krasovskii functional candidate for the system (2.1) as

V(k) = V1(k) + V2(k) + V3(k) + V4(k),

where

V1(k) = x^T(k)P x(k),
V2(k) = Σ_{i=−τ2}^{−1} Σ_{j=k+i+1}^{k} [x(j) − x(j − 1)]^T Z1 [x(j) − x(j − 1)],
V3(k) = Σ_{j=k−τ(k)}^{k−1} x^T(j)Q1 x(j) + Σ_{i=−τ2+1}^{−τ1} Σ_{j=k+i}^{k−1} x^T(j)Q1 x(j) + Σ_{j=k−τ2}^{k−1} x^T(j)Q2 x(j),
V4(k) = Σ_{i=−τ2}^{−τ1−1} Σ_{j=k+i+1}^{k} [x(j) − x(j − 1)]^T Z2 [x(j) − x(j − 1)].   (3.10)

Then the difference ΔV(k) = V(k + 1) − V(k) of V(k) along the solution of (2.1) gives

ΔV1(k) = x^T(k + 1)P x(k + 1) − x^T(k)P x(k),

ΔV2(k) = Σ_{i=−τ2}^{−1} Σ_{j=k+i+2}^{k+1} [x(j) − x(j − 1)]^T Z1 [x(j) − x(j − 1)] − Σ_{i=−τ2}^{−1} Σ_{j=k+i+1}^{k} [x(j) − x(j − 1)]^T Z1 [x(j) − x(j − 1)]
  ≤ τ2 [x(k + 1) − x(k)]^T Z1 [x(k + 1) − x(k)] − Σ_{j=k−τ2+1}^{k−τ(k)} [x(j) − x(j − 1)]^T Z1 [x(j) − x(j − 1)] − Σ_{j=k−τ(k)+1}^{k} [x(j) − x(j − 1)]^T Z1 [x(j) − x(j − 1)],

ΔV3(k) ≤ x^T(k){Q2 + (τ2 − τ1 + 1)Q1}x(k) − x^T(k − τ(k))Q1 x(k − τ(k)) − x^T(k − τ2)Q2 x(k − τ2),

ΔV4(k) = (τ2 − τ1)[x(k + 1) − x(k)]^T Z2 [x(k + 1) − x(k)] − Σ_{j=k−τ2+1}^{k−τ(k)} [x(j) − x(j − 1)]^T Z2 [x(j) − x(j − 1)] − Σ_{j=k−τ(k)+1}^{k−τ1} [x(j) − x(j − 1)]^T Z2 [x(j) − x(j − 1)].   (3.11)
Defining the following new variables:

η(k) = [x^T(k)  x^T(k − τ(k))  x^T(k − τ2)  x^T(k − τ1)  f^T(x(k))  f^T(x(k − τ(k)))  x^T(k + 1)  u^T(k)]^T,
E = [E1^T E2^T E3^T E4^T E5^T E6^T E7^T E8^T]^T,   S = [S1^T S2^T S3^T S4^T S5^T S6^T S7^T S8^T]^T,
T = [T1^T T2^T T3^T T4^T T5^T T6^T T7^T T8^T]^T,   H = [H1^T H2^T H3^T H4^T H5^T H6^T H7^T H8^T]^T,
N = [N1^T 0 0 0 N2^T N3^T 0 0]^T,   (3.12)
and combining the null equations (3.3)–(3.7), it yields

ΔV(k) − 2y^T(k)u(k) − βu^T(k)u(k)
  = ΔV1(k) + ΔV2(k) + ΔV3(k) + ΔV4(k) − 2y^T(k)u(k) − βu^T(k)u(k) + Φ1 + Φ2 + Φ3 + Φ4 + Φ5
  ≤ x^T(k + 1)P x(k + 1) − x^T(k)P x(k) + τ2 [x(k + 1) − x(k)]^T Z1 [x(k + 1) − x(k)]
    − Σ_{j=k−τ2+1}^{k−τ(k)} [x(j) − x(j − 1)]^T Z1 [x(j) − x(j − 1)] − Σ_{j=k−τ(k)+1}^{k} [x(j) − x(j − 1)]^T Z1 [x(j) − x(j − 1)]
    + x^T(k){Q2 + (τ2 − τ1 + 1)Q1}x(k) − x^T(k − τ(k))Q1 x(k − τ(k)) − x^T(k − τ2)Q2 x(k − τ2)
    + (τ2 − τ1)[x(k + 1) − x(k)]^T Z2 [x(k + 1) − x(k)]
    − Σ_{j=k−τ2+1}^{k−τ(k)} [x(j) − x(j − 1)]^T Z2 [x(j) − x(j − 1)] − Σ_{j=k−τ(k)+1}^{k−τ1} [x(j) − x(j − 1)]^T Z2 [x(j) − x(j − 1)]
    + 2η^T(k)E[x(k) − x(k − τ(k)) − Σ_{j=k−τ(k)+1}^{k} (x(j) − x(j − 1))]
    + 2η^T(k)S[x(k − τ(k)) − x(k − τ2) − Σ_{j=k−τ2+1}^{k−τ(k)} (x(j) − x(j − 1))]
    − 2η^T(k)T[x(k − τ1) − x(k − τ(k)) − Σ_{j=k−τ(k)+1}^{k−τ1} (x(j) − x(j − 1))]
    − 2η^T(k)H[x(k + 1) − (A + ΔA(k))x(k) − (W + ΔW(k))f(x(k)) − (W1 + ΔW1(k))f(x(k − τ(k))) − u(k)]
    + 2f^T(x(k))R1 f(x(k)) − 2f^T(x(k))R1 f(x(k)) + 2f^T(x(k − τ(k)))R2 x(k − τ(k)) − 2f^T(x(k − τ(k)))R2 x(k − τ(k))
    − 2y^T(k)u(k) − βu^T(k)u(k).   (3.13)
Moreover,

−2η^T(k)E Σ_{j=k−τ(k)+1}^{k} (x(j) − x(j − 1)) ≤ τ2 η^T(k)E Z1^{-1} E^T η(k) + Σ_{j=k−τ(k)+1}^{k} [x(j) − x(j − 1)]^T Z1 [x(j) − x(j − 1)],   (3.14)

−2η^T(k)S Σ_{j=k−τ2+1}^{k−τ(k)} (x(j) − x(j − 1)) ≤ (τ2 − τ1) η^T(k)S(Z1 + Z2)^{-1} S^T η(k) + Σ_{j=k−τ2+1}^{k−τ(k)} [x(j) − x(j − 1)]^T (Z1 + Z2)[x(j) − x(j − 1)],   (3.15)

2η^T(k)T Σ_{j=k−τ(k)+1}^{k−τ1} (x(j) − x(j − 1)) ≤ (τ2 − τ1) η^T(k)T Z2^{-1} T^T η(k) + Σ_{j=k−τ(k)+1}^{k−τ1} [x(j) − x(j − 1)]^T Z2 [x(j) − x(j − 1)].   (3.16)
Using Assumption 1 and noting that R1 > 0 and R2 > 0 are diagonal matrices, one has

2f^T(x(k))R1 f(x(k)) ≤ 2f^T(x(k))R1 Γ x(k),   (3.17)
−2f^T(x(k − τ(k)))R2 x(k − τ(k)) ≤ −2f^T(x(k − τ(k)))R2 Γ^{-1} f(x(k − τ(k))),   (3.18)

where Γ = diag(α1, α2, ..., αn). It then follows from Lemma 3.1(a) that

2η^T(k)H[ΔA(k)x(k) + ΔW(k)f(x(k)) + ΔW1(k)f(x(k − τ(k)))] = 2η^T(k)HMF(k)N^T η(k) ≤ ε^{-1} η^T(k)HMM^T H^T η(k) + ε η^T(k)NN^T η(k).   (3.19)
Substituting (3.14)–(3.19) into (3.13), it is not difficult to deduce that

ΔV(k) − 2y^T(k)u(k) − βu^T(k)u(k) ≤ η^T(k)[Ω + τ2 E Z1^{-1} E^T + τ21 S(Z1 + Z2)^{-1} S^T + τ21 T Z2^{-1} T^T + ε^{-1} HMM^T H^T + εNN^T]η(k).   (3.20)

Applying the Schur complement to (3.20), and in view of LMI (3.8), it follows that

ΔV(k) − 2y^T(k)u(k) − βu^T(k)u(k) < 0.   (3.21)

Summing (3.21) from k = 0 to k = k_p gives

2 Σ_{k=0}^{k_p} y^T(k)u(k) ≥ V(x(k_p)) − V(x(0)) − β Σ_{k=0}^{k_p} u^T(k)u(k).   (3.22)

For x(0) = 0 one has V(x(0)) = 0, and V(x(k_p)) ≥ 0, so (2.6) holds; hence the system is robustly delay-range-dependent passive in the sense of Definition 2.1. This completes the proof of Theorem 3.2.
Remark 3.3. Theorem 3.2 provides a sufficient condition for the globally robust passivity of the discrete-time uncertain recurrent neural network with interval time-varying delay given in (2.1) and proposes a delay-range-dependent criterion. Even for τ1 = 0, the result in Theorem 3.2 leads to a delay-dependent criterion. In fact, if Z2 = ε1 I with ε1 > 0 a sufficiently small scalar, Ti = 0 (i = 1, 2, ..., 8), E4 = 0, S4 = 0, and H4 = 0, then Theorem 3.2 yields the following delay-dependent passivity criterion.
Corollary 3.4. Under Assumption 1, given scalars τ2 > 0 and τ1 = 0, the system (2.1) with time-varying delay satisfying (2.2) is globally asymptotically robustly stable if there exist matrices P > 0, Q1 > 0, Q2 > 0, Z1 > 0, diagonal matrices R1 > 0, R2 > 0, matrices Ei, Si, and Hi (i = 1, 2, ..., 8) of appropriate dimensions with E4 = S4 = H4 = 0, a positive scalar ε > 0, and a scalar β ≥ 0 such that the following LMI holds:

[ Ω          τ2 E      τ2 S      HM     εN
  τ2 E^T    −τ2 Z1     0         0      0
  τ2 S^T     0        −τ2 Z1     0      0
  M^T H^T    0         0        −εI     0
  εN^T       0         0         0     −εI ]  < 0,   (3.23)

where Ω = [Ωij] is the symmetric matrix indexed by i, j ∈ {1, 2, 3, 5, 6, 7, 8} with Ωji = Ωij^T for i < j and

Ω11 = −P + (τ2 + 1)Q1 + τ2 Z1 + E1 + E1^T + H1 A + A^T H1^T + Q2,
Ω12 = E2^T − E1 + S1 + A^T H2^T,   Ω13 = E3^T − S1 + A^T H3^T,
Ω15 = E5^T + H1 W + A^T H5^T + Γ^T R1,   Ω16 = E6^T + H1 W1 + A^T H6^T,
Ω17 = E7^T − τ2 Z1 − H1 + A^T H7^T,   Ω18 = E8^T + H1 + A^T H8^T,
Ω22 = −Q1 − E2 − E2^T + S2 + S2^T,   Ω23 = −E3^T − S2 + S3^T,
Ω25 = −E5^T + S5^T + H2 W,   Ω26 = −E6^T + S6^T + H2 W1 + R2^T,
Ω27 = −E7^T + S7^T − H2,   Ω28 = −E8^T + S8^T + H2,
Ω33 = −Q2 − S3 − S3^T,   Ω35 = −S5^T + H3 W,   Ω36 = −S6^T + H3 W1,
Ω37 = −S7^T − H3,   Ω38 = −S8^T + H3,
Ω55 = −R1 − R1^T + H5 W + W^T H5^T,   Ω56 = H5 W1 + W^T H6^T,
Ω57 = −H5 + W^T H7^T,   Ω58 = H5 + W^T H8^T − I,
Ω66 = −R2 Γ^{-1} − (R2 Γ^{-1})^T + H6 W1 + W1^T H6^T,   Ω67 = −H6 + W1^T H7^T,
Ω68 = H6 + W1^T H8^T,   Ω77 = −H7 − H7^T + P + τ2 Z1,
Ω78 = H7 − H8,   Ω88 = H8 + H8^T − βI,

E = [E1^T E2^T E3^T E5^T E6^T E7^T E8^T]^T,   S = [S1^T S2^T S3^T S5^T S6^T S7^T S8^T]^T,
H = [H1^T H2^T H3^T H5^T H6^T H7^T H8^T]^T,   N = [N1^T 0 0 N2^T N3^T 0 0]^T.   (3.24)

Then the discrete-time recurrent neural network with time-varying delay in (2.1) (i.e., with lower bound τ1 = 0 and the given upper bound τ2) is globally robustly delay-dependent passive in the sense of Definition 2.1.
Remark 3.5. In the stochastic context, robust delay-dependent passivity conditions are studied
in [26] for discrete-time stochastic neural networks with time-varying delays. In this paper,
however, robust delay-range-dependent passivity conditions are studied in the deterministic
context. It should be noted that deterministic systems and stochastic systems have different
properties and need to be dealt with separately. The results given in Theorem 3.2 provide
an LMI approach to the robust delay-range-dependent passivity conditions for deterministic
discrete-time recurrent neural networks with interval time-varying delay, which is new and
represents a contribution to the study of recurrent neural network systems.
Two numerical examples are now presented to demonstrate the usefulness of the
proposed approach.
4. Examples
Example 4.1. Consider the following discrete-time uncertain recurrent neural network:

x(k + 1) = (A + ΔA(k))x(k) + (W + ΔW(k))f(x(k)) + (W1 + ΔW1(k))f(x(k − τ(k))) + u(k),   (4.1)

where

A = [0.2 0.1; 0.1 0.2],   W = [0.2 0.25; 0.25 0.1],   W1 = [0.3 0.1; 0 0.1],
M = [0.2 0; 0 0.14],   N1 = [0.1 0.2; 0.1 0.2],   N2 = [−0.2 0.1; 0.1 0.3],   N3 = [0.2 0.1; 0.1 0.1].   (4.2)

The activation functions in this example are assumed to satisfy Assumption 1 with α1 = 0.5633 and α2 = 0.0478. For the interval time-varying delay, the best achievable values of β obtained from Theorem 3.2 with the Matlab LMI Control Toolbox, for the given upper bound τ2 = 10 and various lower bounds τ1, are listed in Table 1. Therefore, by Theorem 3.2, the discrete-time uncertain recurrent neural network with interval time-varying delay (4.1) satisfies the robust delay-range-dependent passivity condition in the sense of Definition 2.1 for the corresponding β levels.
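As a complementary and purely empirical illustration (not part of the original analysis), the short Python sketch below simulates the nominal Example 4.1 network (uncertainties set to zero) with a scaled-tanh activation consistent with α1 = 0.5633 and α2 = 0.0478, a random delay in [0, 10], and a random input, and then evaluates the passivity inequality (2.6) using β = 5.9189 from Table 1 (the τ1 = 0 case); the activation and input choices are assumptions.

import numpy as np

# Nominal Example 4.1 data; activation and input are illustrative assumptions.
A  = np.array([[0.2, 0.1], [0.1, 0.2]])
W  = np.array([[0.2, 0.25], [0.25, 0.1]])
W1 = np.array([[0.3, 0.1], [0.0, 0.1]])
alpha = np.array([0.5633, 0.0478])
f = lambda x: alpha * np.tanh(x)               # slope of f_i lies in [0, alpha_i]

tau1, tau2, kp, beta = 0, 10, 500, 5.9189      # beta taken from Table 1 (tau1 = 0 case)
rng = np.random.default_rng(2)
tau = rng.integers(tau1, tau2 + 1, size=kp + 1)
u = 0.2 * rng.standard_normal((kp + 1, 2))

x = np.zeros((kp + 2, 2))                      # x(0) = 0 as in Definition 2.1
lhs = rhs = 0.0
for k in range(kp + 1):
    xd = x[k - tau[k]] if k - tau[k] >= 0 else np.zeros(2)
    x[k + 1] = A @ x[k] + W @ f(x[k]) + W1 @ f(xd) + u[k]
    lhs += 2.0 * f(x[k]) @ u[k]                # y(k) = f(x(k))
    rhs += u[k] @ u[k]

print("2*sum y'u =", round(lhs, 4), ">= -beta*sum u'u =", round(-beta * rhs, 4),
      ":", lhs >= -beta * rhs)

Such a single-trajectory check cannot replace the LMI-based guarantee, which covers all admissible uncertainties and inputs, but it is a quick consistency test of the reported β level.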
Example 4.2. Consider the discrete-time uncertain recurrent neural network with the following parameters:

A = [0.1 0 0; 0 0.1 0; 0 0 0.2],   W = [0.1 −0.1 0.2; 0.1 −0.2 0.3; −0.1 0 −0.1],   W1 = [−0.1 0.2 0.1; −0.1 0.4 0.2; 0.2 −0.1 0.4],
M = [0.1 0 0.1; 0.1 0.3 0; 0.2 0.1 0],   N1 = [0 0.02 0.02; −0.1 0.1 0.1; −0.1 0.1 0.1],
N2 = [0 −0.02 0.02; 0.1 −0.1 0; 0.1 0 −0.1],   N3 = [0.01 0.02 0; −0.1 0.3 0.1; 0.1 0 0.2].   (4.3)

The activation functions in this example are assumed to satisfy Assumption 1 with α1 = 0.034, α2 = 0.429, and α3 = 0.508. By the Matlab LMI Control Toolbox, it can be verified that the LMI of Corollary 3.4 is feasible for all delays 0 ≤ τ(k) ≤ 15 (i.e., the lower bound τ1 = 0 and the upper bound τ2 = 15), with a feasible solution given by:
P = [2.3189 −0.2657 −0.0023; −0.2657 2.2981 −0.0574; −0.0023 −0.0574 2.3612],
Q1 = [0.0565 −0.0056 0.0005; −0.0056 0.0679 −0.0052; 0.0005 −0.0052 0.0657],
Z1 = [0.0199 −0.0038 0.0008; −0.0038 0.0150 0.0002; 0.0008 0.0002 0.0144],
Z2 = [0.0215 −0.0042 0.0009; −0.0042 0.0158 0.0002; 0.0009 0.0002 0.0153],
R1 = diag(1.8998, 1.6298, 1.8378),   R2 = diag(1.2991, 0.2980, 0.2635),
ε = 3.4160,   β = 5.0169.   (4.4)

Thus, by Corollary 3.4, the discrete-time uncertain recurrent neural network with time-varying delay in (2.1) (i.e., with lower bound τ1 = 0 and the given upper bound τ2 = 15) is globally robustly delay-dependent passive in the sense of Definition 2.1.
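The reported matrices in (4.4) can be checked against the sign requirements of Corollary 3.4 (P, Q1, Z1, Z2 positive definite, R1, R2 positive diagonal, ε > 0, β ≥ 0) with a few lines of Python; this only verifies the printed data above, not the feasibility of the full LMI (3.23).

import numpy as np

P = np.array([[ 2.3189, -0.2657, -0.0023],
              [-0.2657,  2.2981, -0.0574],
              [-0.0023, -0.0574,  2.3612]])
Q1 = np.array([[ 0.0565, -0.0056,  0.0005],
               [-0.0056,  0.0679, -0.0052],
               [ 0.0005, -0.0052,  0.0657]])
Z1 = np.array([[ 0.0199, -0.0038,  0.0008],
               [-0.0038,  0.0150,  0.0002],
               [ 0.0008,  0.0002,  0.0144]])
Z2 = np.array([[ 0.0215, -0.0042,  0.0009],
               [-0.0042,  0.0158,  0.0002],
               [ 0.0009,  0.0002,  0.0153]])
R1 = np.diag([1.8998, 1.6298, 1.8378])
R2 = np.diag([1.2991, 0.2980, 0.2635])
eps, beta = 3.4160, 5.0169

# Positive definiteness of each symmetric matrix via its smallest eigenvalue.
for name, X in [("P", P), ("Q1", Q1), ("Z1", Z1), ("Z2", Z2), ("R1", R1), ("R2", R2)]:
    smallest = np.linalg.eigvalsh(X).min()
    print(f"{name}: min eigenvalue = {smallest:.4f} (> 0: {smallest > 0})")
print("eps > 0:", eps > 0, "  beta >= 0:", beta >= 0)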
Table 1: The various lower bounds τ1 and the corresponding β levels for the given upper bound τ2 = 10.

τ1:  0       1       2       3       4       5       6       7       8       9
β:   5.9189  5.9228  5.8145  5.9832  5.8014  5.9522  5.9744  6.0126  6.0186  6.1851
5. Conclusions
This study has investigated the problem of globally robust passivity conditions for a discrete-time uncertain recurrent neural network with interval time-varying delay. A sufficient condition for the solvability of this problem, which takes into account the range of the time delay, has been established, and it has been shown that the passivity condition can be cast in a linear matrix inequality format. The bound on the time-varying delay range that ensures that the discrete-time uncertain recurrent neural network with interval time-varying delay satisfies the globally robust passivity condition can therefore be obtained by solving a convex optimization problem. Two numerical examples have been presented to demonstrate the effectiveness of the proposed approach.
References
[1] S. Arik, “Global asymptotic stability of a larger class of neural networks with constant time delay,” Physics Letters A, vol. 311, no. 6, pp. 504–511, 2003.
[2] I. Győri and F. Hartung, “Stability analysis of a single neuron model with delay,” Journal of Computational and Applied Mathematics, vol. 157, no. 1, pp. 73–92, 2003.
[3] E. Kaszkurewicz and A. Bhaya, “On a class of globally stable neural circuits,” IEEE Transactions on Circuits and Systems I, vol. 41, no. 2, pp. 171–174, 1994.
[4] J. Cao and J. Wang, “Global asymptotic stability of a general class of recurrent neural networks with time-varying delays,” IEEE Transactions on Circuits and Systems I, vol. 50, no. 1, pp. 34–44, 2003.
[5] M. Joy, “Results concerning the absolute stability of delayed neural networks,” Neural Networks, vol. 13, no. 6, pp. 613–616, 2000.
[6] S. Xu, Y. Chu, and J. Lu, “New results on global exponential stability of recurrent neural networks with time-varying delays,” Physics Letters A, vol. 352, no. 4-5, pp. 371–379, 2006.
[7] L. O. Chua, “Passivity and complexity,” IEEE Transactions on Circuits and Systems I, vol. 46, no. 1, pp. 71–82, 1999.
[8] L. Xie, M. Fu, and H. Li, “Passivity analysis for uncertain signal processing systems,” in Proceedings of the IEEE International Conference on Signal Processing, vol. 46, pp. 2394–2403, 1998.
[9] M. S. Mahmoud and A. Ismail, “Passivity and passification of time-delay systems,” Journal of Mathematical Analysis and Applications, vol. 292, no. 1, pp. 247–258, 2004.
[10] E. Fridman and U. Shaked, “On delay-dependent passivity,” IEEE Transactions on Automatic Control, vol. 47, no. 4, pp. 664–669, 2002.
[11] T. Hayakawa, W. M. Haddad, J. M. Bailey, and N. Hovakimyan, “Passivity-based neural network adaptive output feedback control for nonlinear nonnegative dynamical systems,” IEEE Transactions on Neural Networks, vol. 16, no. 2, pp. 387–398, 2005.
[12] S.-I. Niculescu and R. Lozano, “On the passivity of linear delay systems,” IEEE Transactions on Automatic Control, vol. 46, no. 3, pp. 460–464, 2001.
[13] J. C. Travieso-Torres, M. A. Duarte-Mermoud, and J. L. Estrada, “Tracking control of cascade systems based on passivity: the non-adaptive and adaptive cases,” ISA Transactions, vol. 45, no. 3, pp. 435–445, 2006.
[14] L. Keviczky and Cs. Bányász, “Robust stability and performance of time-delay control systems,” ISA Transactions, vol. 46, no. 2, pp. 233–237, 2007.
[15] R. Lozano, B. Brogliato, O. Egeland, and B. Maschke, Dissipative Systems Analysis and Control: Theory and Application, Communications and Control Engineering Series, Springer, London, UK, 2000.
[16] S. Boyd, L. El Ghaoui, E. Feron, and V. Balakrishnan, Linear Matrix Inequalities in System and Control Theory, vol. 15 of SIAM Studies in Applied Mathematics, SIAM, Philadelphia, Pa, USA, 1994.
[17] C. Li and X. Liao, “Passivity analysis of neural networks with time delay,” IEEE Transactions on Circuits and Systems II, vol. 52, no. 8, pp. 471–475, 2005.
[18] X. Lou and B. Cui, “Passivity analysis of integro-differential neural networks with time-varying delays,” Neurocomputing, vol. 70, no. 4–6, pp. 1071–1078, 2007.
[19] J. H. Park, “Further results on passivity analysis of delayed cellular neural networks,” Chaos, Solitons & Fractals, vol. 34, no. 5, pp. 1546–1551, 2007.
[20] Q. Song and Z. Wang, “New results on passivity analysis of uncertain neural networks with time-varying delays,” International Journal of Computer Mathematics. In press.
[21] Q. Zhang, X. Wei, and J. Xu, “On global exponential stability of discrete-time Hopfield neural networks with variable delays,” Discrete Dynamics in Nature and Society, vol. 2007, Article ID 67675, 9 pages, 2007.
[22] Y. Liu, Z. Wang, A. Serrano, and X. Liu, “Discrete-time recurrent neural networks with time-varying delays: exponential stability analysis,” Physics Letters A, vol. 362, no. 5-6, pp. 480–488, 2007.
[23] W.-H. Chen, X. Lu, and D.-Y. Liang, “Global exponential stability for discrete-time neural networks with variable delays,” Physics Letters A, vol. 358, no. 3, pp. 186–198, 2006.
[24] J. Liang, J. Cao, and D. W. C. Ho, “Discrete-time bidirectional associative memory neural networks with variable delays,” Physics Letters A, vol. 335, no. 2-3, pp. 226–234, 2005.
[25] M. S. Mahmoud, Robust Control and Filtering for Time-Delay Systems, vol. 5 of Control Engineering, Marcel Dekker, New York, NY, USA, 2000.
[26] Q. Song, J. Liang, and Z. Wang, “Passivity analysis of discrete-time stochastic neural networks with time-varying delays,” Neurocomputing, vol. 72, no. 7–9, pp. 1782–1788, 2009.