Hindawi Publishing Corporation
Discrete Dynamics in Nature and Society
Volume 2008, Article ID 421614, 14 pages
doi:10.1155/2008/421614
Research Article
Delay-Dependent Exponential Stability
for Discrete-Time BAM Neural Networks
with Time-Varying Delays
Yonggang Chen,1 Weiping Bi,2 and Yuanyuan Wu3

1 Department of Mathematics, Henan Institute of Science and Technology, Xinxiang 453003, China
2 College of Mathematics and Information Science, Henan Normal University, Xinxiang 453007, China
3 Key Laboratory of Measurement and Control of Complex Systems of Engineering, Ministry of Education, School of Automation, Southeast University, Nanjing 210096, China
Correspondence should be addressed to Yonggang Chen, happycygzmd@tom.com
Received 18 May 2008; Revised 5 August 2008; Accepted 10 September 2008
Recommended by Yong Zhou
This paper considers the delay-dependent exponential stability of discrete-time BAM neural
networks with time-varying delays. By constructing a new Lyapunov functional, an improved
delay-dependent exponential stability criterion is derived in terms of a linear matrix inequality
(LMI). Moreover, in order to reduce conservatism, some slack matrices are introduced.
Two numerical examples are presented to show the effectiveness and reduced conservatism
of the proposed method.
Copyright © 2008 Yonggang Chen et al. This is an open access article distributed under the
Creative Commons Attribution License, which permits unrestricted use, distribution, and
reproduction in any medium, provided the original work is properly cited.
1. Introduction
Bidirectional associative memory (BAM) neural networks were first introduced by Kosko [1,
2]. They generalize the single-layer autoassociative Hebbian correlator to a two-layer pattern-matched heteroassociative circuit. This class of neural networks has been successfully
applied to pattern recognition and artificial intelligence. In those applications, it is very
important to guarantee that the designed neural networks are stable. On the other hand,
time delays are unavoidably encountered in the implementation of neural networks, and
their existence can lead to instability, oscillation, and poor performance.
Therefore, the asymptotic or exponential stability analysis of BAM neural networks with
time delays has received great attention during the past years; see, for example, [3-15]. The
obtained results fall into two categories: delay-independent results [3-5, 7, 10] and
delay-dependent results [6, 9, 11, 12, 14-16]. Generally speaking, delay-dependent results
are less conservative than delay-independent ones, especially when the size of the time delay
is small.
It should be pointed out that all of the above-mentioned references are concerned with
continuous-time BAM neural networks. However, when implementing continuous-time
neural networks for experimental or computational purposes, it
is essential to formulate a discrete-time system that is an analogue of the continuous-time
recurrent neural network [17]. Generally speaking, the stability analysis of continuous-time
neural networks is not applicable to the discrete version. Therefore, the stability analysis of
various discrete-time neural networks with time delays has been widely studied in recent years
[17-28]. Using the M-matrix method, Liang and Cao [25] studied the exponential stability of
continuous-time BAM neural networks with constant delays and their discrete analogues. Liang
et al. [26] also studied the dynamics of discrete-time BAM neural networks with variable
time delays, but based on unreasonably severe constraints on the delay functions. By using the
Lyapunov functional method and the linear matrix inequality technique, the exponential stability
and robust exponential stability of discrete-time BAM neural networks with variable delays
were considered in [27, 28], respectively. However, the results presented in [27, 28] are
still conservative, since the Lyapunov functionals constructed there are rather simple.
Therefore, there is much room to improve the results in [27, 28].
In this paper, we present a new exponential stability criterion for discrete-time BAM
neural networks with time-varying delays. Compared with existing methods, the main
contributions of this paper are as follows. Firstly, a more general Lyapunov functional is
employed to obtain an improved exponential stability criterion; secondly, in order to reduce
conservatism, some slack matrices, which bring much flexibility in solving the LMI, are
introduced. The proposed exponential stability criterion is expressed as an LMI,
which can be efficiently solved by the LMI toolbox in Matlab. Finally, two numerical examples
are presented to show that our result is less conservative than some existing ones [27, 28].
The organization of this paper is as follows. Section 2 presents the problem formulation of
discrete-time BAM neural networks with time-varying delays. In Section 3, our main result
is established and some remarks are given. In Section 4, numerical examples are
given to demonstrate the proposed method. Finally, the conclusion is drawn in Section 5.
Notation
Throughout this paper, the superscript "T" stands for the transpose of a matrix. R^n and R^{n×n}
denote the n-dimensional Euclidean space and the set of all n × n real matrices, respectively. For a
real symmetric matrix P, P > 0 (P ≥ 0) means that P is positive definite (positive semidefinite).
I denotes an identity matrix of appropriate dimension. For integers a, b with
a < b, N[a, b] denotes the discrete interval N[a, b] = {a, a + 1, . . . , b − 1, b}. λ_M(X) and λ_m(X)
stand for the maximum and minimum eigenvalues of the symmetric matrix X, respectively.
Matrices, if not explicitly stated, are assumed to have compatible dimensions. The symmetric
terms in a symmetric matrix are denoted by ∗.
2. Problem formulation
Consider the following discrete-time BAM neural network with time-varying delays:
\[
\begin{aligned}
u_i(k+1) &= a_i u_i(k) + \sum_{j=1}^{m} w_{ij} f_j\big(v_j(k-\tau(k))\big) + I_i, \quad i = 1, 2, \ldots, n,\\
v_j(k+1) &= b_j v_j(k) + \sum_{i=1}^{n} v_{ji} g_i\big(u_i(k-h(k))\big) + J_j, \quad j = 1, 2, \ldots, m,
\end{aligned}
\tag{2.1}
\]
where u_i(k) and v_j(k) are the states of the ith neuron from the neural field F_X and the
jth neuron from the neural field F_Y at time k, respectively; a_i, b_j ∈ (0, 1) describe the
stability of the internal neuron processes on the X-layer and the Y-layer, respectively; w_{ij}, v_{ji} are
real constants and denote the synaptic connection weights; f_j(·) and g_i(·) denote the activation
functions of the jth neuron from the neural field F_Y and the ith neuron from the neural
field F_X, respectively; I_i and J_j denote the external constant inputs from outside the network
acting on the ith neuron from the neural field F_X and the jth neuron from the neural field F_Y,
respectively. τ(k) and h(k) represent time-varying interval delays satisfying
\[
\tau_m \le \tau(k) \le \tau_M, \qquad h_m \le h(k) \le h_M,
\tag{2.2}
\]
where τ_m, τ_M, h_m, and h_M are positive integers.
Throughout this paper, we make the following assumptions on the activation functions
f_j(·), g_i(·).

(A1) The activation functions f_j(·), g_i(·) (i = 1, 2, . . . , n, j = 1, 2, . . . , m) are bounded on R.

(A2) For any ζ_1, ζ_2 ∈ R, there exist positive scalars l_{1j}, l_{2i} such that
\[
\begin{aligned}
0 &\le |f_j(\zeta_1) - f_j(\zeta_2)| \le l_{1j} |\zeta_1 - \zeta_2|, \quad j = 1, 2, \ldots, m,\\
0 &\le |g_i(\zeta_1) - g_i(\zeta_2)| \le l_{2i} |\zeta_1 - \zeta_2|, \quad i = 1, 2, \ldots, n.
\end{aligned}
\tag{2.3}
\]
It is clear that under Assumptions (A1) and (A2), system (2.1) has at least one
equilibrium. In order to simplify our proof, we shift the equilibrium point u* = (u*_1, u*_2,
. . . , u*_n)^T, v* = (v*_1, v*_2, . . . , v*_m)^T of system (2.1) to the origin. Let x_i(k) = u_i(k) − u*_i, y_j(k) =
v_j(k) − v*_j, f_j(y_j(k)) = f_j(v_j(k)) − f_j(v*_j), g_i(x_i(k)) = g_i(u_i(k)) − g_i(u*_i); then system (2.1) can
be transformed into
\[
\begin{aligned}
x_i(k+1) &= a_i x_i(k) + \sum_{j=1}^{m} w_{ij} f_j\big(y_j(k-\tau(k))\big), \quad i = 1, 2, \ldots, n,\\
y_j(k+1) &= b_j y_j(k) + \sum_{i=1}^{n} v_{ji} g_i\big(x_i(k-h(k))\big), \quad j = 1, 2, \ldots, m.
\end{aligned}
\tag{2.4}
\]
Obviously, the activation functions f_j(·), g_i(·) satisfy the following condition.

(A3) For any ζ ∈ R, there exist positive scalars l_{1j}, l_{2i} such that
\[
0 \le |f_j(\zeta)| \le l_{1j} |\zeta|, \quad j = 1, 2, \ldots, m, \qquad
0 \le |g_i(\zeta)| \le l_{2i} |\zeta|, \quad i = 1, 2, \ldots, n.
\tag{2.5}
\]

Also, we assume that system (2.4) has the initial values
\[
x_i(s) = \phi_i(s), \qquad y_j(s) = \psi_j(s), \qquad
\forall s \in N[-\tau^*, 0],\ i = 1, 2, \ldots, n,\ j = 1, 2, \ldots, m,
\tag{2.6}
\]
where τ* = max{τ_M, h_M}.
For convenience, we rewrite system (2.4) in the form
\[
\begin{aligned}
x(k+1) &= A x(k) + W f\big(y(k-\tau(k))\big),\\
y(k+1) &= B y(k) + V g\big(x(k-h(k))\big),\\
x(s) &= \phi(s), \qquad y(s) = \psi(s), \qquad \forall s \in N[-\tau^*, 0],
\end{aligned}
\tag{2.7}
\]
where x(k) = (x_1(k), x_2(k), . . . , x_n(k))^T, y(k) = (y_1(k), y_2(k), . . . , y_m(k))^T, f(y(k)) =
(f_1(y_1(k)), f_2(y_2(k)), . . . , f_m(y_m(k)))^T, g(x(k)) = (g_1(x_1(k)), g_2(x_2(k)), . . . , g_n(x_n(k)))^T,
A = diag{a_1, a_2, . . . , a_n}, B = diag{b_1, b_2, . . . , b_m}, W = (w_{ij})_{n×m}, V = (v_{ji})_{m×n},
φ(s) = (φ_1(s), φ_2(s), . . . , φ_n(s))^T, ψ(s) = (ψ_1(s), ψ_2(s), . . . , ψ_m(s))^T.
From the above analysis, we can see that the exponential stability problem of system (2.1)
at the equilibrium (u*, v*) is changed into the stability problem of the zero solution of system (2.7). Therefore, in the
following, we will be devoted to the exponential stability analysis of system (2.7).
Before giving the main result, we first introduce the following definition and
lemma.
Definition 2.1. The trivial solution of BAM neural network (2.7) is said to be globally
exponentially stable if there exist scalars r > 1 and M > 1 such that
\[
\|x(k)\|^2 + \|y(k)\|^2 \le M \Big( \sup_{s \in N[-h_M,\,0]} \|\phi(s)\|^2 + \sup_{s \in N[-\tau_M,\,0]} \|\psi(s)\|^2 \Big) r^{-k}.
\tag{2.8}
\]
Lemma 2.2 (see [29]). For any real vectors a, b and any matrix Q > 0 with appropriate dimensions,
it follows that
\[
2 a^T b \le a^T Q a + b^T Q^{-1} b.
\tag{2.9}
\]
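Lemma 2.2 follows from expanding the square (Q^{1/2}a − Q^{−1/2}b)^T(Q^{1/2}a − Q^{−1/2}b) ≥ 0. A quick numerical spot-check of the inequality (a Python/NumPy sketch added for illustration, not part of the paper) over random vectors and random positive definite Q:

```python
import numpy as np

# Spot-check Lemma 2.2:  2 a^T b <= a^T Q a + b^T Q^{-1} b
# for arbitrary real vectors a, b and symmetric positive definite Q.
rng = np.random.default_rng(0)
n = 4
for _ in range(1000):
    a = rng.standard_normal(n)
    b = rng.standard_normal(n)
    M = rng.standard_normal((n, n))
    Q = M @ M.T + n * np.eye(n)        # symmetric positive definite by construction
    lhs = 2 * a @ b
    rhs = a @ Q @ a + b @ np.linalg.solve(Q, b)   # b^T Q^{-1} b without forming Q^{-1}
    assert lhs <= rhs + 1e-9
print("Lemma 2.2 verified on 1000 random instances")
```

The same bound, applied with the stacked slack matrices L, M, N, T in place of Q^{-1}-weighted cross terms, is what produces inequalities (3.10)-(3.13) in the proof below.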
3. Main result

In this section, we present the global exponential stability criterion for
system (2.7).

Theorem 3.1. Let L_1 = diag{l_{11}, l_{12}, . . . , l_{1m}} and L_2 = diag{l_{21}, l_{22}, . . . , l_{2n}} be the given diagonal matrices from (2.3). Under Assumptions (A1) and (A2), system (2.7) is globally exponentially stable
if there exist matrices P_i > 0, Q_i > 0, R_i > 0, Z_i > 0 (i = 1, 2),
L_j, M_j, N_j, T_j (j = 1, 2, . . . , 8), and diagonal matrices D_1 = diag{d_{11}, d_{12}, . . . , d_{1m}} > 0, D_2 = diag{d_{21}, d_{22}, . . . , d_{2n}} > 0, such that the following LMI holds:
\[
\begin{bmatrix}
\Omega & \sqrt{h_M}\, L & \sqrt{h_M - h_m}\, M & \sqrt{\tau_M}\, N & \sqrt{\tau_M - \tau_m}\, T\\
\ast & -Z_1 & 0 & 0 & 0\\
\ast & \ast & -Z_1 & 0 & 0\\
\ast & \ast & \ast & -Z_2 & 0\\
\ast & \ast & \ast & \ast & -Z_2
\end{bmatrix} < 0,
\tag{3.1}
\]
where
\[
\begin{aligned}
L^T &= \big[L_1^T\ L_2^T\ L_3^T\ L_4^T\ L_5^T\ L_6^T\ L_7^T\ L_8^T\big], &
M^T &= \big[M_1^T\ M_2^T\ M_3^T\ M_4^T\ M_5^T\ M_6^T\ M_7^T\ M_8^T\big],\\
N^T &= \big[N_1^T\ N_2^T\ N_3^T\ N_4^T\ N_5^T\ N_6^T\ N_7^T\ N_8^T\big], &
T^T &= \big[T_1^T\ T_2^T\ T_3^T\ T_4^T\ T_5^T\ T_6^T\ T_7^T\ T_8^T\big],
\end{aligned}
\]
and Ω = (ω_{ij})_{8×8} is the symmetric matrix with blocks
\[
\begin{aligned}
\omega_{11} &= r A^T P_1 A + r^{h_M} h_M (A - I)^T Z_1 (A - I) - P_1 + (h_M - h_m + 1) Q_1 + R_1 + L_1 + L_1^T,\\
\omega_{12} &= L_2^T + T_1 - N_1, \qquad \omega_{13} = L_3^T - T_1,\\
\omega_{14} &= r A^T P_1 W + r^{h_M} h_M (A - I)^T Z_1 W + L_4^T,\\
\omega_{15} &= L_5^T + N_1, \qquad \omega_{16} = -L_1 + L_6^T + M_1, \qquad \omega_{17} = L_7^T - M_1, \qquad \omega_{18} = L_8^T,\\
\omega_{22} &= -r^{-\tau_M} Q_2 + T_2 + T_2^T + D_1 - N_2 - N_2^T,\\
\omega_{23} &= -T_2 + T_3^T - N_3^T, \qquad \omega_{24} = T_4^T - N_4^T, \qquad \omega_{25} = T_5^T + N_2 - N_5^T,\\
\omega_{26} &= T_6^T + M_2 - L_2 - N_6^T, \qquad \omega_{27} = -M_2 + T_7^T - N_7^T, \qquad \omega_{28} = T_8^T - N_8^T,\\
\omega_{33} &= -r^{-\tau_M} R_2 - T_3 - T_3^T, \qquad \omega_{34} = -T_4^T, \qquad \omega_{35} = N_3 - T_5^T,\\
\omega_{36} &= M_3 - L_3 - T_6^T, \qquad \omega_{37} = -M_3 - T_7^T, \qquad \omega_{38} = -T_8^T,\\
\omega_{44} &= r W^T P_1 W + r^{h_M} h_M W^T Z_1 W - L_1^{-1} D_1 L_1^{-1},\\
\omega_{45} &= N_4, \qquad \omega_{46} = M_4 - L_4, \qquad \omega_{47} = -M_4, \qquad \omega_{48} = 0,\\
\omega_{55} &= r B^T P_2 B + r^{\tau_M} \tau_M (B - I)^T Z_2 (B - I) - P_2 + (\tau_M - \tau_m + 1) Q_2 + R_2 + N_5 + N_5^T,\\
\omega_{56} &= M_5 - L_5 + N_6^T, \qquad \omega_{57} = -M_5 + N_7^T,\\
\omega_{58} &= r B^T P_2 V + r^{\tau_M} \tau_M (B - I)^T Z_2 V + N_8^T,\\
\omega_{66} &= -r^{-h_M} Q_1 - L_6 - L_6^T + D_2 + M_6 + M_6^T,\\
\omega_{67} &= -L_7^T - M_6 + M_7^T, \qquad \omega_{68} = -L_8^T + M_8^T,\\
\omega_{77} &= -r^{-h_M} R_1 - M_7 - M_7^T, \qquad \omega_{78} = -M_8^T,\\
\omega_{88} &= r V^T P_2 V + r^{\tau_M} \tau_M V^T Z_2 V - L_2^{-1} D_2 L_2^{-1}.
\end{aligned}
\tag{3.2}
\]
Proof. Choose the following Lyapunov-Krasovskii functional candidate for system (2.7):
\[
V(k) = V_1(k) + V_2(k) + V_3(k) + V_4(k) + V_5(k),
\tag{3.3}
\]
where
\[
\begin{aligned}
V_1(k) &= r^k x^T(k) P_1 x(k) + r^k y^T(k) P_2 y(k),\\
V_2(k) &= \sum_{l=k-h(k)}^{k-1} r^l x^T(l) Q_1 x(l) + \sum_{l=k-\tau(k)}^{k-1} r^l y^T(l) Q_2 y(l),\\
V_3(k) &= \sum_{\theta=-h_M+1}^{-h_m} \sum_{l=k+\theta}^{k-1} r^l x^T(l) Q_1 x(l) + \sum_{\theta=-\tau_M+1}^{-\tau_m} \sum_{l=k+\theta}^{k-1} r^l y^T(l) Q_2 y(l),\\
V_4(k) &= \sum_{l=k-h_M}^{k-1} r^l x^T(l) R_1 x(l) + \sum_{l=k-\tau_M}^{k-1} r^l y^T(l) R_2 y(l),\\
V_5(k) &= r^{h_M} \sum_{\theta=-h_M}^{-1} \sum_{l=k+\theta}^{k-1} r^l \eta_1^T(l) Z_1 \eta_1(l) + r^{\tau_M} \sum_{\theta=-\tau_M}^{-1} \sum_{l=k+\theta}^{k-1} r^l \eta_2^T(l) Z_2 \eta_2(l),
\end{aligned}
\tag{3.4}
\]
and η_1(l) = x(l+1) − x(l), η_2(l) = y(l+1) − y(l).
Calculating the difference of V(k) along the trajectories of system (2.7), we obtain
\[
\begin{aligned}
\Delta V_1(k) &= r^{k+1} \big[A x(k) + W f(y(k-\tau(k)))\big]^T P_1 \big[A x(k) + W f(y(k-\tau(k)))\big]\\
&\quad + r^{k+1} \big[B y(k) + V g(x(k-h(k)))\big]^T P_2 \big[B y(k) + V g(x(k-h(k)))\big]\\
&\quad - r^k x^T(k) P_1 x(k) - r^k y^T(k) P_2 y(k)\\
&= r^k \big[ x^T(k) \big(r A^T P_1 A - P_1\big) x(k) + 2 r x^T(k) A^T P_1 W f(y(k-\tau(k)))\\
&\quad + r f^T(y(k-\tau(k))) W^T P_1 W f(y(k-\tau(k))) + y^T(k) \big(r B^T P_2 B - P_2\big) y(k)\\
&\quad + 2 r y^T(k) B^T P_2 V g(x(k-h(k))) + r g^T(x(k-h(k))) V^T P_2 V g(x(k-h(k))) \big],
\end{aligned}
\tag{3.5}
\]
\[
\begin{aligned}
\Delta V_2(k) &\le r^k x^T(k) Q_1 x(k) - r^{k-h(k)} x^T(k-h(k)) Q_1 x(k-h(k))\\
&\quad + r^k y^T(k) Q_2 y(k) - r^{k-\tau(k)} y^T(k-\tau(k)) Q_2 y(k-\tau(k))\\
&\quad + \sum_{l=k+1-h(k+1)}^{k-h_m} r^l x^T(l) Q_1 x(l) + \sum_{l=k+1-\tau(k+1)}^{k-\tau_m} r^l y^T(l) Q_2 y(l)\\
&\le r^k x^T(k) Q_1 x(k) - r^{k-h_M} x^T(k-h(k)) Q_1 x(k-h(k))\\
&\quad + r^k y^T(k) Q_2 y(k) - r^{k-\tau_M} y^T(k-\tau(k)) Q_2 y(k-\tau(k))\\
&\quad + \sum_{l=k+1-h_M}^{k-h_m} r^l x^T(l) Q_1 x(l) + \sum_{l=k+1-\tau_M}^{k-\tau_m} r^l y^T(l) Q_2 y(l),
\end{aligned}
\tag{3.6}
\]
\[
\begin{aligned}
\Delta V_3(k) &= (h_M - h_m)\, r^k x^T(k) Q_1 x(k) + (\tau_M - \tau_m)\, r^k y^T(k) Q_2 y(k)\\
&\quad - \sum_{l=k+1-h_M}^{k-h_m} r^l x^T(l) Q_1 x(l) - \sum_{l=k+1-\tau_M}^{k-\tau_m} r^l y^T(l) Q_2 y(l),
\end{aligned}
\tag{3.7}
\]
\[
\begin{aligned}
\Delta V_4(k) &= r^k x^T(k) R_1 x(k) - r^{k-h_M} x^T(k-h_M) R_1 x(k-h_M)\\
&\quad + r^k y^T(k) R_2 y(k) - r^{k-\tau_M} y^T(k-\tau_M) R_2 y(k-\tau_M),
\end{aligned}
\tag{3.8}
\]
\[
\begin{aligned}
\Delta V_5(k) &= r^k r^{h_M} h_M\, \eta_1^T(k) Z_1 \eta_1(k) - r^{h_M} \sum_{l=k-h_M}^{k-1} r^l \eta_1^T(l) Z_1 \eta_1(l)\\
&\quad + r^k r^{\tau_M} \tau_M\, \eta_2^T(k) Z_2 \eta_2(k) - r^{\tau_M} \sum_{l=k-\tau_M}^{k-1} r^l \eta_2^T(l) Z_2 \eta_2(l)\\
&\le r^k r^{h_M} h_M \big[(A - I) x(k) + W f(y(k-\tau(k)))\big]^T Z_1 \big[(A - I) x(k) + W f(y(k-\tau(k)))\big]\\
&\quad + r^k r^{\tau_M} \tau_M \big[(B - I) y(k) + V g(x(k-h(k)))\big]^T Z_2 \big[(B - I) y(k) + V g(x(k-h(k)))\big]\\
&\quad - r^k \sum_{l=k-h(k)}^{k-1} \eta_1^T(l) Z_1 \eta_1(l) - r^k \sum_{l=k-h_M}^{k-h(k)-1} \eta_1^T(l) Z_1 \eta_1(l)\\
&\quad - r^k \sum_{l=k-\tau(k)}^{k-1} \eta_2^T(l) Z_2 \eta_2(l) - r^k \sum_{l=k-\tau_M}^{k-\tau(k)-1} \eta_2^T(l) Z_2 \eta_2(l).
\end{aligned}
\tag{3.9}
\]
By Lemma 2.2, we have
\[
\begin{aligned}
-\sum_{l=k-h(k)}^{k-1} \eta_1^T(l) Z_1 \eta_1(l)
&\le 2 \sum_{l=k-h(k)}^{k-1} \eta_1^T(l) L^T \xi(k) + \sum_{l=k-h(k)}^{k-1} \xi^T(k) L Z_1^{-1} L^T \xi(k)\\
&= 2 \big[x(k) - x(k-h(k))\big]^T L^T \xi(k) + h(k)\, \xi^T(k) L Z_1^{-1} L^T \xi(k)\\
&\le \xi^T(k) \big(\Psi_1 L^T + L \Psi_1^T\big) \xi(k) + h_M\, \xi^T(k) L Z_1^{-1} L^T \xi(k),
\end{aligned}
\tag{3.10}
\]
\[
\begin{aligned}
-\sum_{l=k-h_M}^{k-h(k)-1} \eta_1^T(l) Z_1 \eta_1(l)
&\le 2 \sum_{l=k-h_M}^{k-h(k)-1} \eta_1^T(l) M^T \xi(k) + \sum_{l=k-h_M}^{k-h(k)-1} \xi^T(k) M Z_1^{-1} M^T \xi(k)\\
&= 2 \big[x(k-h(k)) - x(k-h_M)\big]^T M^T \xi(k) + \big(h_M - h(k)\big)\, \xi^T(k) M Z_1^{-1} M^T \xi(k)\\
&\le \xi^T(k) \big(\Psi_2 M^T + M \Psi_2^T\big) \xi(k) + (h_M - h_m)\, \xi^T(k) M Z_1^{-1} M^T \xi(k),
\end{aligned}
\tag{3.11}
\]
\[
\begin{aligned}
-\sum_{l=k-\tau(k)}^{k-1} \eta_2^T(l) Z_2 \eta_2(l)
&\le 2 \sum_{l=k-\tau(k)}^{k-1} \eta_2^T(l) N^T \xi(k) + \sum_{l=k-\tau(k)}^{k-1} \xi^T(k) N Z_2^{-1} N^T \xi(k)\\
&= 2 \big[y(k) - y(k-\tau(k))\big]^T N^T \xi(k) + \tau(k)\, \xi^T(k) N Z_2^{-1} N^T \xi(k)\\
&\le \xi^T(k) \big(\Psi_3 N^T + N \Psi_3^T\big) \xi(k) + \tau_M\, \xi^T(k) N Z_2^{-1} N^T \xi(k),
\end{aligned}
\tag{3.12}
\]
\[
\begin{aligned}
-\sum_{l=k-\tau_M}^{k-\tau(k)-1} \eta_2^T(l) Z_2 \eta_2(l)
&\le 2 \sum_{l=k-\tau_M}^{k-\tau(k)-1} \eta_2^T(l) T^T \xi(k) + \sum_{l=k-\tau_M}^{k-\tau(k)-1} \xi^T(k) T Z_2^{-1} T^T \xi(k)\\
&= 2 \big[y(k-\tau(k)) - y(k-\tau_M)\big]^T T^T \xi(k) + \big(\tau_M - \tau(k)\big)\, \xi^T(k) T Z_2^{-1} T^T \xi(k)\\
&\le \xi^T(k) \big(\Psi_4 T^T + T \Psi_4^T\big) \xi(k) + (\tau_M - \tau_m)\, \xi^T(k) T Z_2^{-1} T^T \xi(k),
\end{aligned}
\tag{3.13}
\]
where L, M, N, and T are the stacked slack matrices defined in Theorem 3.1 and
\[
\begin{aligned}
\Psi_1^T &= \big[I\ \ 0\ \ 0\ \ 0\ \ 0\ \ {-I}\ \ 0\ \ 0\big], &
\Psi_2^T &= \big[0\ \ 0\ \ 0\ \ 0\ \ 0\ \ I\ \ {-I}\ \ 0\big],\\
\Psi_3^T &= \big[0\ \ {-I}\ \ 0\ \ 0\ \ I\ \ 0\ \ 0\ \ 0\big], &
\Psi_4^T &= \big[0\ \ I\ \ {-I}\ \ 0\ \ 0\ \ 0\ \ 0\ \ 0\big],
\end{aligned}
\tag{3.14}
\]
and ξ(k) = [x^T(k), y^T(k−τ(k)), y^T(k−τ_M), f^T(y(k−τ(k))), y^T(k), x^T(k−h(k)), x^T(k−h_M),
g^T(x(k−h(k)))]^T. By inequalities (2.5), for any positive diagonal matrices
D_i > 0 (i = 1, 2) the following inequalities hold:
\[
\begin{aligned}
r^k \big[ y^T(k-\tau(k))\, D_1\, y(k-\tau(k)) - f^T(y(k-\tau(k)))\, L_1^{-1} D_1 L_1^{-1}\, f(y(k-\tau(k))) \big] &\ge 0,\\
r^k \big[ x^T(k-h(k))\, D_2\, x(k-h(k)) - g^T(x(k-h(k)))\, L_2^{-1} D_2 L_2^{-1}\, g(x(k-h(k))) \big] &\ge 0,
\end{aligned}
\tag{3.15}
\]
where L_1 = diag{l_{11}, l_{12}, . . . , l_{1m}}, L_2 = diag{l_{21}, l_{22}, . . . , l_{2n}}.
Substituting (3.10)-(3.13) into (3.9), and combining (3.5)-(3.8) and (3.15), we have
\[
\begin{aligned}
\Delta V(k) &\le \Delta V_1(k) + \Delta V_2(k) + \Delta V_3(k) + \Delta V_4(k) + \Delta V_5(k)\\
&\quad + r^k \big[ y^T(k-\tau(k))\, D_1\, y(k-\tau(k)) - f^T(y(k-\tau(k)))\, L_1^{-1} D_1 L_1^{-1}\, f(y(k-\tau(k))) \big]\\
&\quad + r^k \big[ x^T(k-h(k))\, D_2\, x(k-h(k)) - g^T(x(k-h(k)))\, L_2^{-1} D_2 L_2^{-1}\, g(x(k-h(k))) \big]\\
&\le r^k \xi^T(k) \big[ \Omega + h_M\, L Z_1^{-1} L^T + (h_M - h_m)\, M Z_1^{-1} M^T + \tau_M\, N Z_2^{-1} N^T + (\tau_M - \tau_m)\, T Z_2^{-1} T^T \big] \xi(k),
\end{aligned}
\tag{3.16}
\]
where Ω, L, M, N, and T are defined in Theorem 3.1. Thus, if LMI (3.1) holds, it can be
concluded by the Schur complement that ΔV(k) < 0, which implies that V(k) ≤ V(0) for all k ≥ 0.
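The Schur-complement step invoked here says that, for Z > 0, the condition Ω̃ + S Z^{-1} S^T < 0 is equivalent to the augmented block matrix [[Ω̃, S], [S^T, −Z]] being negative definite; (3.1) is exactly this form with four augmented blocks. A small numerical illustration of the equivalence (a Python/NumPy sketch with generic placeholder matrices, not the actual LMI data of the paper):

```python
import numpy as np

# Schur-complement equivalence used in the proof: for Z > 0,
#   Omega + S Z^{-1} S^T < 0   <=>   [[Omega, S], [S^T, -Z]] < 0.
rng = np.random.default_rng(0)
n, m = 3, 2
Omega = -5.0 * np.eye(n)                 # placeholder negative definite block
S = rng.standard_normal((n, m))          # placeholder off-diagonal block
Z = 2.0 * np.eye(m)                      # positive definite weight

lhs = Omega + S @ np.linalg.inv(Z) @ S.T
big = np.block([[Omega, S], [S.T, -Z]])

def is_neg_def(X):
    # A symmetric matrix is negative definite iff all eigenvalues are < 0.
    return bool(np.all(np.linalg.eigvalsh(X) < 0))

# The two tests must always agree, whatever S is.
assert is_neg_def(lhs) == is_neg_def(big)
print("Schur complement equivalence holds; negative definite:", is_neg_def(lhs))
```

In practice one never forms Z^{-1}: the augmented form is linear in all decision variables, which is what makes (3.1) solvable by semidefinite programming.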
Note that
\[
\begin{aligned}
V_1(0) &= x^T(0) P_1 x(0) + y^T(0) P_2 y(0)\\
&\le \lambda_M(P_1) \sup_{s \in N[-h_M,\,0]} \|\phi(s)\|^2 + \lambda_M(P_2) \sup_{s \in N[-\tau_M,\,0]} \|\psi(s)\|^2,
\end{aligned}
\tag{3.17}
\]
\[
\begin{aligned}
V_2(0) &= \sum_{l=-h(0)}^{-1} r^l x^T(l) Q_1 x(l) + \sum_{l=-\tau(0)}^{-1} r^l y^T(l) Q_2 y(l)\\
&\le \lambda_M(Q_1) \frac{1 - r^{-h_M}}{r - 1} \sup_{s \in N[-h_M,\,0]} \|\phi(s)\|^2 + \lambda_M(Q_2) \frac{1 - r^{-\tau_M}}{r - 1} \sup_{s \in N[-\tau_M,\,0]} \|\psi(s)\|^2,
\end{aligned}
\tag{3.18}
\]
\[
\begin{aligned}
V_3(0) &= \sum_{\theta=-h_M+1}^{-h_m} \sum_{l=\theta}^{-1} r^l x^T(l) Q_1 x(l) + \sum_{\theta=-\tau_M+1}^{-\tau_m} \sum_{l=\theta}^{-1} r^l y^T(l) Q_2 y(l)\\
&\le (h_M - h_m) \sum_{l=-h_M}^{-1} r^l x^T(l) Q_1 x(l) + (\tau_M - \tau_m) \sum_{l=-\tau_M}^{-1} r^l y^T(l) Q_2 y(l)\\
&\le (h_M - h_m)\, \lambda_M(Q_1) \frac{1 - r^{-h_M}}{r - 1} \sup_{s \in N[-h_M,\,0]} \|\phi(s)\|^2\\
&\quad + (\tau_M - \tau_m)\, \lambda_M(Q_2) \frac{1 - r^{-\tau_M}}{r - 1} \sup_{s \in N[-\tau_M,\,0]} \|\psi(s)\|^2,
\end{aligned}
\tag{3.19}
\]
\[
\begin{aligned}
V_4(0) &= \sum_{l=-h_M}^{-1} r^l x^T(l) R_1 x(l) + \sum_{l=-\tau_M}^{-1} r^l y^T(l) R_2 y(l)\\
&\le \lambda_M(R_1) \frac{1 - r^{-h_M}}{r - 1} \sup_{s \in N[-h_M,\,0]} \|\phi(s)\|^2 + \lambda_M(R_2) \frac{1 - r^{-\tau_M}}{r - 1} \sup_{s \in N[-\tau_M,\,0]} \|\psi(s)\|^2,
\end{aligned}
\tag{3.20}
\]
\[
\begin{aligned}
V_5(0) &= r^{h_M} \sum_{\theta=-h_M}^{-1} \sum_{l=\theta}^{-1} r^l \eta_1^T(l) Z_1 \eta_1(l) + r^{\tau_M} \sum_{\theta=-\tau_M}^{-1} \sum_{l=\theta}^{-1} r^l \eta_2^T(l) Z_2 \eta_2(l)\\
&\le r^{h_M} h_M\, \lambda_M(Z_1) \sum_{l=-h_M}^{-1} r^l \eta_1^T(l) \eta_1(l) + r^{\tau_M} \tau_M\, \lambda_M(Z_2) \sum_{l=-\tau_M}^{-1} r^l \eta_2^T(l) \eta_2(l)\\
&\le r^{h_M} h_M\, \lambda_M(Z_1) \frac{1 - r^{-h_M}}{r - 1} \sup_{s \in N[-h_M,\,-1]} \|\eta_1(s)\|^2 + r^{\tau_M} \tau_M\, \lambda_M(Z_2) \frac{1 - r^{-\tau_M}}{r - 1} \sup_{s \in N[-\tau_M,\,-1]} \|\eta_2(s)\|^2,
\end{aligned}
\tag{3.21}
\]
\[
\begin{aligned}
\sup_{s \in N[-h_M,\,-1]} \|\eta_1(s)\|^2 &= \sup_{s \in N[-h_M,\,-1]} \|x(s+1) - x(s)\|^2
\le 2 \sup_{s \in N[-h_M,\,-1]} \big( \|x(s+1)\|^2 + \|x(s)\|^2 \big)\\
&\le 4 \sup_{s \in N[-h_M,\,0]} \|\phi(s)\|^2,
\end{aligned}
\tag{3.22}
\]
\[
\begin{aligned}
\sup_{s \in N[-\tau_M,\,-1]} \|\eta_2(s)\|^2 &= \sup_{s \in N[-\tau_M,\,-1]} \|y(s+1) - y(s)\|^2
\le 2 \sup_{s \in N[-\tau_M,\,-1]} \big( \|y(s+1)\|^2 + \|y(s)\|^2 \big)\\
&\le 4 \sup_{s \in N[-\tau_M,\,0]} \|\psi(s)\|^2.
\end{aligned}
\tag{3.23}
\]
Substituting (3.22) and (3.23) into (3.21), and combining (3.17)-(3.21), we have
\[
V(0) = \sum_{i=1}^{5} V_i(0) \le \rho_1 \sup_{s \in N[-h_M,\,0]} \|\phi(s)\|^2 + \rho_2 \sup_{s \in N[-\tau_M,\,0]} \|\psi(s)\|^2,
\tag{3.24}
\]
where
\[
\begin{aligned}
\rho_1 &= \lambda_M(P_1) + \big[ (h_M - h_m + 1)\, \lambda_M(Q_1) + \lambda_M(R_1) + 4 r^{h_M} h_M\, \lambda_M(Z_1) \big] \frac{1 - r^{-h_M}}{r - 1},\\
\rho_2 &= \lambda_M(P_2) + \big[ (\tau_M - \tau_m + 1)\, \lambda_M(Q_2) + \lambda_M(R_2) + 4 r^{\tau_M} \tau_M\, \lambda_M(Z_2) \big] \frac{1 - r^{-\tau_M}}{r - 1}.
\end{aligned}
\tag{3.25}
\]
On the other hand,
\[
V(k) \ge r^k x^T(k) P_1 x(k) + r^k y^T(k) P_2 y(k) \ge r^k \big( \lambda_m(P_1) \|x(k)\|^2 + \lambda_m(P_2) \|y(k)\|^2 \big).
\tag{3.26}
\]
Therefore, we can obtain
\[
\|x(k)\|^2 + \|y(k)\|^2 \le \frac{\alpha}{\beta} \Big( \sup_{s \in N[-h_M,\,0]} \|\phi(s)\|^2 + \sup_{s \in N[-\tau_M,\,0]} \|\psi(s)\|^2 \Big) r^{-k},
\tag{3.27}
\]
where α = max{ρ_1, ρ_2} and β = min{λ_m(P_1), λ_m(P_2)}. Obviously, α/β > 1; by Definition 2.1,
system (2.7), and hence (2.1), is globally exponentially stable.
Remark 3.2. Based on the linear matrix inequality (LMI) technique and Lyapunov stability
theory, the exponential stability of the discrete-time delayed BAM neural network (2.1) was
investigated in [26-28]. However, it should be noted that the results in [26] rely on the
unreasonably severe constraints τ(k+1) < 1 + τ(k), h(k+1) < 1 + h(k) on the delay functions,
and the results in [27, 28] are based on simple Lyapunov functionals. In this paper, in
order to obtain a less conservative stability criterion, the novel Lyapunov functional V(k) is
employed, which contains V_4(k), V_5(k) and is more general than the traditional ones [26-28].
Moreover, some slack matrices, which bring much flexibility in solving the LMI, are
introduced in this paper.

Remark 3.3. By setting r = 1 in Theorem 3.1, we can obtain a global asymptotic stability
criterion for the discrete-time BAM neural network (2.7).

Remark 3.4. Theorem 3.1 in this paper depends on both the delay upper bounds τ_M, h_M and
the delay intervals τ_M − τ_m and h_M − h_m.

Remark 3.5. The proposed method in this paper can be generalized to more complex neural
networks, such as delayed discrete-time BAM neural networks with parameter uncertainties
[30, 31] and stochastic perturbations [30-32], delayed interval discrete-time BAM neural
networks [28], and discrete-time analogues of BAM neural networks with mixed delays
[32, 33].
4. Numerical examples

Example 4.1. Consider the delayed discrete-time BAM neural network (2.7) with the
following parameters:
\[
A = \begin{bmatrix} 0.8 & 0 \\ 0 & 0.9 \end{bmatrix}, \quad
B = \begin{bmatrix} 0.5 & 0 \\ 0 & 0.4 \end{bmatrix}, \quad
W = \begin{bmatrix} 0.1 & -0.01 \\ -0.2 & -0.1 \end{bmatrix}, \quad
V = \begin{bmatrix} 0.15 & 0 \\ -0.2 & 0.1 \end{bmatrix}.
\tag{4.1}
\]
The activation functions satisfy Assumptions (A1) and (A2) with
\[
L_1 = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}, \qquad
L_2 = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}.
\tag{4.2}
\]
Table 1: Maximum allowable delay bounds for Example 4.1.

τ_m = h_m    τ_M = h_M by [27, 28]    τ_M = h_M by Theorem 3.1
2            6                        11
4            8                        12
6            10                       13
8            12                       15
10           14                       16
15           19                       21
20           24                       25
25           29                       30
For this example, by Theorem 3.1 with r = 1, we can obtain the
maximum allowable delay bounds guaranteeing the asymptotic stability of this system.
Table 1 compares them with the bounds obtained by [27, Theorem 1] and [28, Corollary 1]. For this
example, it is obvious that our result is less conservative than those in [27, 28].
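The stability certified by Theorem 3.1 for these parameters can also be observed by direct simulation. The sketch below (Python/NumPy; the random delay sequences within the bounds, the tanh activations, and the random initial history are illustrative assumptions, since the example only fixes the delay bounds and L_1 = L_2 = I) iterates system (2.7) and checks that the state norm decays toward the origin:

```python
import numpy as np

# Simulate system (2.7) with the parameters of Example 4.1.
A = np.diag([0.8, 0.9])
B = np.diag([0.5, 0.4])
W = np.array([[0.1, -0.01], [-0.2, -0.1]])
V = np.array([[0.15, 0.0], [-0.2, 0.1]])

tau_m, tau_M = 2, 11        # one of the delay pairs admitted by Theorem 3.1 (Table 1)
h_m, h_M = 2, 11
f = g = np.tanh             # satisfies (A1), (A2) with L1 = L2 = I (illustrative choice)

rng = np.random.default_rng(0)
hist = max(tau_M, h_M)
K = 200
x = np.zeros((K + hist + 1, 2))
y = np.zeros((K + hist + 1, 2))
x[:hist + 1] = rng.uniform(-1, 1, (hist + 1, 2))   # initial history phi
y[:hist + 1] = rng.uniform(-1, 1, (hist + 1, 2))   # initial history psi

for k in range(hist, hist + K):
    tau = rng.integers(tau_m, tau_M + 1)           # arbitrary delay within [tau_m, tau_M]
    h = rng.integers(h_m, h_M + 1)                 # arbitrary delay within [h_m, h_M]
    x[k + 1] = A @ x[k] + W @ f(y[k - tau])
    y[k + 1] = B @ y[k] + V @ g(x[k - h])

final = np.linalg.norm(x[-1]) + np.linalg.norm(y[-1])
start = np.linalg.norm(x[hist]) + np.linalg.norm(y[hist])
assert final < 1e-6 * max(start, 1.0)              # the state has decayed to the origin
print("final state norm:", final)
```

Of course, a decaying simulation for one delay sequence is only a sanity check; the LMI of Theorem 3.1 certifies stability for all delay sequences within the bounds.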
Example 4.2 (see [27]). Consider the delayed discrete-time BAM neural network (2.7) with the following
parameters:
\[
A = \begin{bmatrix} \dfrac{1}{5} & 0 \\[4pt] 0 & \dfrac{1}{5} \end{bmatrix}, \quad
B = \begin{bmatrix} \dfrac{1}{10} & 0 \\[4pt] 0 & \dfrac{1}{10} \end{bmatrix}, \quad
W = \begin{bmatrix} \dfrac{1}{8} & 0 \\[4pt] 0 & \dfrac{1}{8} \end{bmatrix}, \quad
V = \begin{bmatrix} -\dfrac{1}{20} & 0 \\[4pt] 0 & -\dfrac{1}{20} \end{bmatrix}.
\tag{4.3}
\]
The activation functions satisfy Assumptions (A1) and (A2) with
\[
L_1 = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}, \qquad
L_2 = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}.
\tag{4.4}
\]
Now, we assume τ(k) = h(k) = 4 − 2 sin((π/2)k) and r = 1.55. It is obvious that
2 ≤ τ(k) = h(k) ≤ 6, that is, h_m = τ_m = 2, h_M = τ_M = 6. In this case, it is found that the
exponential stability conditions proposed in [27, 28] are not satisfied. However, it can be concluded
that this system is globally exponentially stable by using Theorem 3.1 of this paper. If we instead
assume h_m = τ_m = τ_M = 2 and r = 2.7, the maximum allowable delays for guaranteeing
global exponential stability achieved using [27, Theorem 1] and [28, Corollary 1]
are h_M = 3 and h_M = 3, respectively. However, we obtain the delay bound h_M = 4 by using
Theorem 3.1 of this paper. For this example, it is seen that our result improves some existing
results [27, 28].
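The exponential decay guaranteed by Theorem 3.1 for this example can be cross-checked by simulating (2.7) with the periodic delay τ(k) = h(k) = 4 − 2 sin((π/2)k), which takes values in {2, 4, 6} for integer k. In the sketch below (Python/NumPy; the tanh activations and the constant initial history are illustrative assumptions), the squared state norm is compared against the exponential envelope r^{−k} for the certified rate r = 1.55:

```python
import numpy as np

# Simulate Example 4.2 with the periodic delay tau(k) = h(k) = 4 - 2*sin(pi*k/2).
A = np.eye(2) / 5
B = np.eye(2) / 10
W = np.eye(2) / 8
V = -np.eye(2) / 20

hist, K = 6, 60
x = np.zeros((K + hist + 1, 2))
y = np.zeros((K + hist + 1, 2))
x[:hist + 1] = 1.0          # illustrative initial history
y[:hist + 1] = -1.0

for k in range(hist, hist + K):
    d = int(round(4 - 2 * np.sin(np.pi * (k - hist) / 2)))   # delay in {2, 4, 6}
    x[k + 1] = A @ x[k] + W @ np.tanh(y[k - d])
    y[k + 1] = B @ y[k] + V @ np.tanh(x[k - d])

# Squared state norm after K steps versus the exponential envelope r^{-k}, r = 1.55.
sq_norm = np.linalg.norm(x[-1]) ** 2 + np.linalg.norm(y[-1]) ** 2
envelope = 1.55 ** (-K)
assert sq_norm < envelope
print("||state||^2 =", sq_norm, " envelope r^-k =", envelope)
```

For these strongly contractive parameters the trajectory actually decays much faster than the envelope, which is consistent with Definition 2.1 (the definition only requires decay at least as fast as M·r^{−k}).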
5. Conclusion

In this paper, we have considered the delay-dependent exponential stability of discrete-time BAM
neural networks with time-varying delays. Based on Lyapunov stability theory and the
linear matrix inequality technique, a novel delay-dependent stability criterion has been obtained.
Two numerical examples show that the proposed stability condition is less
conservative than previously established ones.

Acknowledgments

The authors would like to thank the Associate Editor and the anonymous reviewers for
their constructive comments and suggestions to improve the quality of the paper. This work
was supported by the National Natural Science Foundation of China (no. 60643003) and the
Natural Science Foundation of Henan Education Department (no. 2007120005).
References

[1] B. Kosko, "Adaptive bidirectional associative memories," Applied Optics, vol. 26, no. 23, pp. 4947–4960, 1987.
[2] B. Kosko, "Bidirectional associative memories," IEEE Transactions on Systems, Man and Cybernetics, vol. 18, no. 1, pp. 49–60, 1988.
[3] J. Cao and L. Wang, "Exponential stability and periodic oscillatory solution in BAM networks with delays," IEEE Transactions on Neural Networks, vol. 13, no. 2, pp. 457–463, 2002.
[4] J. Cao and M. Dong, "Exponential stability of delayed bi-directional associative memory networks," Applied Mathematics and Computation, vol. 135, no. 1, pp. 105–112, 2003.
[5] Y. Li, "Existence and stability of periodic solution for BAM neural networks with distributed delays," Applied Mathematics and Computation, vol. 159, no. 3, pp. 847–862, 2004.
[6] X. Huang, J. Cao, and D.-S. Huang, "LMI-based approach for delay-dependent exponential stability analysis of BAM neural networks," Chaos, Solitons & Fractals, vol. 24, no. 3, pp. 885–898, 2005.
[7] S. Arik, "Global asymptotic stability analysis of bidirectional associative memory neural networks with time delays," IEEE Transactions on Neural Networks, vol. 16, no. 3, pp. 580–586, 2005.
[8] M. Jiang, Y. Shen, and X. Liao, "Global stability of periodic solution for bidirectional associative memory neural networks with varying-time delay," Applied Mathematics and Computation, vol. 182, no. 1, pp. 509–520, 2006.
[9] J. H. Park, "A novel criterion for global asymptotic stability of BAM neural networks with time delays," Chaos, Solitons & Fractals, vol. 29, no. 2, pp. 446–453, 2006.
[10] X. Lou and B. Cui, "On the global robust asymptotic stability of BAM neural networks with time-varying delays," Neurocomputing, vol. 70, no. 1–3, pp. 273–279, 2006.
[11] J. H. Park, "Robust stability of bidirectional associative memory neural networks with time delays," Physics Letters A, vol. 349, no. 6, pp. 494–499, 2006.
[12] J. Cao, D. W. C. Ho, and X. Huang, "LMI-based criteria for global robust stability of bidirectional associative memory networks with time delay," Nonlinear Analysis: Theory, Methods & Applications, vol. 66, no. 7, pp. 1558–1572, 2007.
[13] D. W. C. Ho, J. Liang, and J. Lam, "Global exponential stability of impulsive high-order BAM neural networks with time-varying delays," Neural Networks, vol. 19, no. 10, pp. 1581–1590, 2006.
[14] L. Sheng and H. Yang, "Novel global robust exponential stability criterion for uncertain BAM neural networks with time-varying delays," Chaos, Solitons & Fractals. In press.
[15] J. H. Park, S. M. Lee, and O. M. Kwon, "On exponential stability of bidirectional associative memory neural networks with time-varying delays," Chaos, Solitons & Fractals. In press.
[16] Y. Wang, "Global exponential stability analysis of bidirectional associative memory neural networks with time-varying delays," Nonlinear Analysis: Real World Applications. In press.
[17] Y. Liu, Z. Wang, A. Serrano, and X. Liu, "Discrete-time recurrent neural networks with time-varying delays: exponential stability analysis," Physics Letters A, vol. 362, no. 5-6, pp. 480–488, 2007.
[18] S. Hu and J. Wang, "Global stability of a class of discrete-time recurrent neural networks," IEEE Transactions on Circuits and Systems I, vol. 49, no. 8, pp. 1104–1117, 2002.
[19] S. Mohamad and K. Gopalsamy, "Exponential stability of continuous-time and discrete-time cellular neural networks with delays," Applied Mathematics and Computation, vol. 135, no. 1, pp. 17–38, 2003.
[20] W. Xiong and J. Cao, "Global exponential stability of discrete-time Cohen-Grossberg neural networks," Neurocomputing, vol. 64, pp. 433–446, 2005.
[21] L. Wang and Z. Xu, "Sufficient and necessary conditions for global exponential stability of discrete-time recurrent neural networks," IEEE Transactions on Circuits and Systems I, vol. 53, no. 6, pp. 1373–1380, 2006.
[22] W.-H. Chen, X. Lu, and D.-Y. Liang, "Global exponential stability for discrete-time neural networks with variable delays," Physics Letters A, vol. 358, no. 3, pp. 186–198, 2006.
[23] Q. Song and Z. Wang, "A delay-dependent LMI approach to dynamics analysis of discrete-time recurrent neural networks with time-varying delays," Physics Letters A, vol. 368, no. 1-2, pp. 134–145, 2007.
[24] B. Zhang, S. Xu, and Y. Zou, "Improved delay-dependent exponential stability criteria for discrete-time recurrent neural networks with time-varying delays," Neurocomputing. In press.
[25] J. Liang and J. Cao, "Exponential stability of continuous-time and discrete-time bidirectional associative memory networks with delays," Chaos, Solitons & Fractals, vol. 22, no. 4, pp. 773–785, 2004.
[26] J. Liang, J. Cao, and D. W. C. Ho, "Discrete-time bidirectional associative memory neural networks with variable delays," Physics Letters A, vol. 335, no. 2-3, pp. 226–234, 2005.
[27] X.-G. Liu, M.-L. Tang, R. Martin, and X.-B. Liu, "Discrete-time BAM neural networks with variable delays," Physics Letters A, vol. 367, no. 4-5, pp. 322–330, 2007.
[28] M. Gao and B. Cui, "Global robust exponential stability of discrete-time interval BAM neural networks with time-varying delays," Applied Mathematical Modelling. In press.
[29] M. S. Mahmoud, Resilient Control of Uncertain Dynamical Systems, vol. 303 of Lecture Notes in Control and Information Sciences, Springer, Berlin, Germany, 2004.
[30] Z. Wang, S. Lauria, J. Fang, and X. Liu, "Exponential stability of uncertain stochastic neural networks with mixed time-delays," Chaos, Solitons & Fractals, vol. 32, no. 1, pp. 62–72, 2007.
[31] Z. Wang, H. Shu, J. Fang, and X. Liu, "Robust stability for stochastic Hopfield neural networks with time delays," Nonlinear Analysis: Real World Applications, vol. 7, no. 5, pp. 1119–1128, 2006.
[32] Z. Wang, Y. Liu, M. Li, and X. Liu, "Stability analysis for stochastic Cohen-Grossberg neural networks with mixed time delays," IEEE Transactions on Neural Networks, vol. 17, no. 3, pp. 814–820, 2006.
[33] Z. Wang, Y. Liu, and X. Liu, "On global asymptotic stability of neural networks with discrete and distributed delays," Physics Letters A, vol. 345, no. 4–6, pp. 299–308, 2005.