Hindawi Publishing Corporation
Advances in Difference Equations
Volume 2007, Article ID 78160, 18 pages
doi:10.1155/2007/78160
Research Article
Exponential Stability for Impulsive BAM Neural Networks with
Time-Varying Delays and Reaction-Diffusion Terms
Qiankun Song and Jinde Cao
Received 9 March 2007; Accepted 16 May 2007
Recommended by Ulrich Krause
An impulsive bidirectional associative memory (BAM) neural network model with time-varying delays and reaction-diffusion terms is considered. Several sufficient conditions ensuring the existence, uniqueness, and global exponential stability of the equilibrium point of the addressed neural network are derived by M-matrix theory, analytic methods, and inequality techniques. Moreover, the exponential convergence rate index, which depends on the system parameters, is estimated. The results obtained in this paper are less restrictive than previously known criteria. Two examples are given to show the effectiveness of the obtained results.
Copyright © 2007 Q. Song and J. Cao. This is an open access article distributed under
the Creative Commons Attribution License, which permits unrestricted use, distribution,
and reproduction in any medium, provided the original work is properly cited.
1. Introduction
The bidirectional associative memory (BAM) neural network model was first introduced by Kosko [1]. This class of neural networks has been successfully applied to pattern recognition, signal and image processing, and artificial intelligence, owing to its generalization of the single-layer auto-associative Hebbian correlation to a two-layer pattern-matched heteroassociative circuit. Some of these applications require that the designed network have a unique stable equilibrium point.
In hardware implementation, time delays occur due to the finite switching speed of the amplifiers and communication time [2]. Time delays may affect the stability of designed neural networks and can lead to complex dynamic behaviors such as periodic oscillation, bifurcation, or chaos [3]. Therefore, the study of neural dynamics that takes the delays into account is essential for manufacturing high-quality neural networks. Some results concerning the dynamical behavior of BAM neural networks with delays have been reported; see, for example, [2–12] and the references therein. The circuit diagram and connection pattern implementing the delayed BAM neural networks can be found in [8].
Most widely studied and used neural networks can be classified as either continuous or discrete. Recently, a somewhat new category has emerged: neural networks that are neither purely continuous-time nor purely discrete-time, called impulsive neural networks. This third category displays a combination of characteristics of both continuous-time and discrete systems [13]. Impulses can make unstable systems stable, so they have been widely used in many fields such as physics, chemistry, biology, population dynamics, and industrial robotics. Some results for impulsive neural networks have been given; see, for example, [13–22] and the references therein.
It is well known that the diffusion effect cannot be avoided in neural networks when electrons move in asymmetric electromagnetic fields [23], so we must allow for the activations to vary in space as well as in time. Some works have been devoted to the investigation of the stability of neural networks with reaction-diffusion terms, which are expressed by partial differential equations; see, for example, [23–26] and the references therein. To the best of our knowledge, few authors have studied the stability of impulsive BAM neural network models with both time-varying delays and reaction-diffusion terms.
Motivated by the above discussion, the objective of this paper is to give sufficient conditions ensuring the existence, uniqueness, and global exponential stability of the equilibrium point for impulsive BAM neural networks with time-varying delays and reaction-diffusion terms, without assuming boundedness, monotonicity, or differentiability of the activation functions. Our methods, which do not make use of a Lyapunov functional, are simple and valid for the stability analysis of impulsive BAM neural networks with time-varying or constant delays.
2. Model description and preliminaries
In this paper, we consider the following model:
\[
\frac{\partial u_i(t,x)}{\partial t} = \sum_{k=1}^{l}\frac{\partial}{\partial x_k}\Bigl(D_{ik}\frac{\partial u_i(t,x)}{\partial x_k}\Bigr) - a_i u_i(t,x) + \sum_{j=1}^{m} c_{ij} f_j\bigl(v_j(t-\tau_{ij}(t),x)\bigr) + \alpha_i,\quad t\neq t_k,\ i=1,\dots,n,
\]
\[
\Delta u_i(t_k,x) = I_k\bigl(u_i(t_k,x)\bigr),\quad i=1,\dots,n,\ k=1,2,\dots,
\]
\[
\frac{\partial v_j(t,x)}{\partial t} = \sum_{k=1}^{l}\frac{\partial}{\partial x_k}\Bigl(D^{*}_{jk}\frac{\partial v_j(t,x)}{\partial x_k}\Bigr) - b_j v_j(t,x) + \sum_{i=1}^{n} d_{ji} g_i\bigl(u_i(t-\sigma_{ji}(t),x)\bigr) + \beta_j,\quad t\neq t_k,\ j=1,\dots,m,
\]
\[
\Delta v_j(t_k,x) = J_k\bigl(v_j(t_k,x)\bigr),\quad j=1,\dots,m,\ k=1,2,\dots
\tag{2.1}
\]
for t > 0, where x = (x_1, x_2, …, x_l)^T ∈ Ω ⊂ R^l, Ω is a bounded compact set with smooth boundary ∂Ω and mes Ω > 0 in the space R^l; u = (u_1, u_2, …, u_n)^T ∈ R^n, v = (v_1, v_2, …, v_m)^T ∈ R^m; u_i(t,x) and v_j(t,x) are the states of the ith neuron from the neural field F_U and the jth neuron from the neural field F_V at time t and in space x, respectively; f_j and g_i denote the activation functions of the jth neuron from F_V and the ith neuron from F_U at time t and in space x, respectively; α_i and β_j are constants denoting the external inputs on the ith neuron from F_U and the jth neuron from F_V, respectively; τ_{ij}(t) and σ_{ji}(t) correspond to the transmission delays and satisfy 0 ≤ τ_{ij}(t) ≤ τ_{ij} and 0 ≤ σ_{ji}(t) ≤ σ_{ji} (τ_{ij} and σ_{ji} are constants); a_i and b_j are positive constants denoting the rates with which the ith neuron from F_U and the jth neuron from F_V reset their potentials to the resting state in isolation when disconnected from the network and external inputs, respectively; c_{ij} and d_{ji} are constants denoting the connection strengths; the smooth functions D_{ik} = D_{ik}(t,x) ≥ 0 and D^*_{jk} = D^*_{jk}(t,x) ≥ 0 correspond to the transmission diffusion operators along the ith neuron from F_U and the jth neuron from F_V, respectively. Δu_i(t_k,x) = u_i(t_k^+,x) − u_i(t_k^−,x) and Δv_j(t_k,x) = v_j(t_k^+,x) − v_j(t_k^−,x) are the impulses at moments t_k and in space x, and t_1 < t_2 < ⋯ is a strictly increasing sequence such that lim_{k→∞} t_k = +∞. The boundary conditions and initial conditions are given by
\[
\frac{\partial u_i}{\partial n} := \Bigl(\frac{\partial u_i}{\partial x_1}, \frac{\partial u_i}{\partial x_2}, \dots, \frac{\partial u_i}{\partial x_l}\Bigr)^{T} = 0,\quad i=1,2,\dots,n,\qquad
\frac{\partial v_j}{\partial n} := \Bigl(\frac{\partial v_j}{\partial x_1}, \frac{\partial v_j}{\partial x_2}, \dots, \frac{\partial v_j}{\partial x_l}\Bigr)^{T} = 0,\quad j=1,2,\dots,m,
\tag{2.2}
\]
\[
u_i(s,x) = \varphi_{u_i}(s,x),\quad s\in[-\sigma,0],\ i=1,2,\dots,n,\qquad
v_j(s,x) = \varphi_{v_j}(s,x),\quad s\in[-\tau,0],\ j=1,2,\dots,m,
\tag{2.3}
\]
\[
\sigma = \max_{1\le i\le n,\ 1\le j\le m}\sigma_{ji},\qquad \tau = \max_{1\le i\le n,\ 1\le j\le m}\tau_{ij},
\]
where φ_{u_i}(s,x), φ_{v_j}(s,x) (i = 1,2,…,n, j = 1,2,…,m) denote real-valued continuous functions defined on [−σ,0] × Ω and [−τ,0] × Ω, respectively.
Since the solution (u_1(t,x),…,u_n(t,x),v_1(t,x),…,v_m(t,x))^T of model (2.1) is discontinuous at the points t_k, by the theory of impulsive differential equations we assume that (u_1(t_k,x),…,u_n(t_k,x),v_1(t_k,x),…,v_m(t_k,x))^T ≡ (u_1(t_k − 0,x),…,u_n(t_k − 0,x),v_1(t_k − 0,x),…,v_m(t_k − 0,x))^T. It is clear that, in general, the partial derivatives ∂u_i(t_k,x)/∂t and ∂v_j(t_k,x)/∂t do not exist. On the other hand, according to the first and third equations of model (2.1), the limits ∂u_i(t_k ∓ 0,x)/∂t and ∂v_j(t_k ∓ 0,x)/∂t do exist. According to the above convention, we assume ∂u_i(t_k,x)/∂t = ∂u_i(t_k − 0,x)/∂t and ∂v_j(t_k,x)/∂t = ∂v_j(t_k − 0,x)/∂t.
Throughout this paper, we make the following assumption.
(H) There exist two positive diagonal matrices G = diag(G_1, G_2, …, G_n) and F = diag(F_1, F_2, …, F_m) such that
\[
\bigl|g_i(u_1) - g_i(u_2)\bigr| \le G_i\bigl|u_1-u_2\bigr|,\qquad \bigl|f_j(v_1) - f_j(v_2)\bigr| \le F_j\bigl|v_1-v_2\bigr|
\tag{2.4}
\]
for all u_1, u_2, v_1, v_2 ∈ R, i = 1,2,…,n, j = 1,2,…,m.
For convenience, we introduce two notations. For any u(t,x) = (u_1(t,x), u_2(t,x), …, u_k(t,x))^T ∈ R^k, define
\[
\bigl\|u_i(t,x)\bigr\|_2 = \Bigl(\int_{\Omega}\bigl|u_i(t,x)\bigr|^{2}\,dx\Bigr)^{1/2},\quad i=1,2,\dots,k.
\tag{2.5}
\]
For any u(t) = (u_1(t), u_2(t), …, u_k(t))^T ∈ R^k, define \(\|u(t)\| = \bigl[\sum_{i=1}^{k}|u_i(t)|^{r}\bigr]^{1/r}\), r > 1.
Definition 2.1. A constant vector (u_1^*, …, u_n^*, v_1^*, …, v_m^*)^T is said to be an equilibrium of model (2.1) if
\[
-a_i u_i^{*} + \sum_{j=1}^{m} c_{ij} f_j\bigl(v_j^{*}\bigr) + \alpha_i = 0,\quad i=1,2,\dots,n,\qquad
I_k\bigl(u_i^{*}\bigr) = 0,\quad i=1,2,\dots,n,\ k\in Z_+,
\]
\[
-b_j v_j^{*} + \sum_{i=1}^{n} d_{ji} g_i\bigl(u_i^{*}\bigr) + \beta_j = 0,\quad j=1,2,\dots,m,\qquad
J_k\bigl(v_j^{*}\bigr) = 0,\quad j=1,2,\dots,m,\ k\in Z_+,
\tag{2.6}
\]
where Z_+ denotes the set of all positive integers.
Definition 2.2 (see [3]). A real matrix A = (a_{ij})_{n×n} is said to be an M-matrix if a_{ij} ≤ 0 (i, j = 1,2,…,n, i ≠ j) and all successive principal minors of A are positive.
Definition 2.3 (see [27]). A map H : R^n → R^n is a homeomorphism of R^n onto itself if H ∈ C^0, H is one-to-one, H is onto, and the inverse map H^{-1} ∈ C^0.
To prove our result, the following four lemmas are necessary.
Lemma 2.4 (see [3]). Let Q be an n × n matrix with nonpositive off-diagonal elements. Then Q is an M-matrix if and only if one of the following conditions holds:
(i) there exists a vector ξ > 0 such that Qξ > 0;
(ii) there exists a vector ξ > 0 such that ξ^T Q > 0.
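As an illustration (not part of the original argument), Lemma 2.4(i) can be checked numerically: for an M-matrix Q, solving Qξ = e with e = (1,…,1)^T yields a positive vector ξ, since Q^{-1} is then nonnegative. The matrix Q below is a hypothetical example chosen only for illustration.

```python
# Sketch: checking the M-matrix characterizations of Lemma 2.4 on a small
# hypothetical example matrix (illustration only).

def det(M):
    """Determinant by Laplace expansion along the first row (fine for small M)."""
    n = len(M)
    if n == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(n))

def is_m_matrix(Q):
    """Definition 2.2: nonpositive off-diagonal entries and all successive
    (leading) principal minors positive."""
    n = len(Q)
    offdiag_ok = all(Q[i][j] <= 0 for i in range(n) for j in range(n) if i != j)
    minors_ok = all(det([row[:k] for row in Q[:k]]) > 0 for k in range(1, n + 1))
    return offdiag_ok and minors_ok

def solve(Q, b):
    """Gaussian elimination with partial pivoting; returns x with Qx = b."""
    n = len(Q)
    A = [row[:] + [b[i]] for i, row in enumerate(Q)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(A[r][c]))
        A[c], A[p] = A[p], A[c]
        for r in range(c + 1, n):
            f = A[r][c] / A[c][c]
            for k in range(c, n + 1):
                A[r][k] -= f * A[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (A[r][n] - sum(A[r][k] * x[k] for k in range(r + 1, n))) / A[r][r]
    return x

Q = [[4.0, -1.0, -1.0],
     [-2.0, 5.0, -1.0],
     [-1.0, -1.0, 3.0]]  # hypothetical example

print(is_m_matrix(Q))           # True: off-diagonals <= 0, minors 4, 18, 42 > 0
xi = solve(Q, [1.0, 1.0, 1.0])  # xi = (4/7, 4/7, 5/7)
print(all(v > 0 for v in xi))   # True: a positive vector with Q.xi > 0, Lemma 2.4(i)
```

The positive vector ξ produced this way is exactly the kind of vector used later in the proofs of Theorems 3.1 and 4.1.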
Lemma 2.5 (see [27]). If H(x) ∈ C^0 satisfies the following conditions:
(i) H(x) is injective on R^n,
(ii) ‖H(x)‖ → +∞ as ‖x‖ → +∞,
then H(x) is a homeomorphism of R^n.
Lemma 2.6 (see [28]). Let a, b ≥ 0 and p > 1. Then
\[
a^{p-1}b \le \frac{p-1}{p}a^{p} + \frac{1}{p}b^{p}.
\tag{2.7}
\]
Lemma 2.7 (see [29]) (C_p inequality). Let a ≥ 0, b ≥ 0, p > 1. Then
\[
(a+b)^{1/p} \le a^{1/p} + b^{1/p}.
\tag{2.8}
\]
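As a quick sanity check (an illustration, not part of the original text), both inequalities can be verified numerically on a grid of sample values:

```python
# Spot-check Lemma 2.6 (a Young-type inequality) and Lemma 2.7 (the C_p
# inequality) on a grid of nonnegative samples; illustration only.

samples = [0.0, 0.3, 1.0, 2.5, 7.0]
ps = [1.5, 2.0, 3.0]

# Lemma 2.6: a^{p-1} b <= ((p-1)/p) a^p + (1/p) b^p
ok_26 = all(a ** (p - 1) * b <= (p - 1) / p * a ** p + b ** p / p + 1e-12
            for a in samples for b in samples for p in ps)

# Lemma 2.7: (a+b)^{1/p} <= a^{1/p} + b^{1/p}
ok_27 = all((a + b) ** (1 / p) <= a ** (1 / p) + b ** (1 / p) + 1e-12
            for a in samples for b in samples for p in ps)

print(ok_26, ok_27)  # both inequalities hold on every sample
```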
3. Existence and uniqueness of equilibria
Theorem 3.1. Under assumption (H), if there exist real constants α_{ij}, β_{ij}, α*_{ji}, β*_{ji} (i = 1,2,…,n, j = 1,2,…,m) and r > 1 such that
\[
W = \begin{pmatrix} A-\widetilde{C} & -C^{*}\\ -D^{*} & B-\widetilde{D} \end{pmatrix}
\tag{3.1}
\]
is an M-matrix, and
\[
I_k\bigl(u_i^{*}\bigr) = 0,\quad i=1,2,\dots,n,\ k\in Z_+,\qquad
J_k\bigl(v_j^{*}\bigr) = 0,\quad j=1,2,\dots,m,\ k\in Z_+,
\tag{3.2}
\]
then model (2.1) has a unique equilibrium point (u_1^*, …, u_n^*, v_1^*, …, v_m^*)^T, where
\[
A = \operatorname{diag}\bigl(a_1, a_2, \dots, a_n\bigr),\qquad B = \operatorname{diag}\bigl(b_1, b_2, \dots, b_m\bigr),
\]
\[
\widetilde{C} = \operatorname{diag}\bigl(\tilde c_1, \dots, \tilde c_n\bigr)\quad\text{with}\quad \tilde c_i = \frac{r-1}{r}\sum_{j=1}^{m}\bigl|c_{ij}\bigr|^{(r-\alpha_{ij})/(r-1)} F_j^{(r-\beta_{ij})/(r-1)},
\]
\[
\widetilde{D} = \operatorname{diag}\bigl(\tilde d_1, \dots, \tilde d_m\bigr)\quad\text{with}\quad \tilde d_j = \frac{r-1}{r}\sum_{i=1}^{n}\bigl|d_{ji}\bigr|^{(r-\alpha^{*}_{ji})/(r-1)} G_i^{(r-\beta^{*}_{ji})/(r-1)},
\]
\[
C^{*} = \bigl(c^{*}_{ij}\bigr)_{n\times m}\quad\text{with}\quad c^{*}_{ij} = \frac{1}{r}\bigl|c_{ij}\bigr|^{\alpha_{ij}} F_j^{\beta_{ij}},\qquad
D^{*} = \bigl(d^{*}_{ji}\bigr)_{m\times n}\quad\text{with}\quad d^{*}_{ji} = \frac{1}{r}\bigl|d_{ji}\bigr|^{\alpha^{*}_{ji}} G_i^{\beta^{*}_{ji}}.
\tag{3.3}
\]
Proof. Define the following map associated with model (2.1):
\[
H(x,y) = \begin{pmatrix} -A & 0\\ 0 & -B \end{pmatrix}\begin{pmatrix} x\\ y \end{pmatrix} + \begin{pmatrix} 0 & C\\ D & 0 \end{pmatrix}\begin{pmatrix} g(x)\\ f(y) \end{pmatrix} + \begin{pmatrix} \alpha\\ \beta \end{pmatrix},
\tag{3.4}
\]
where
\[
C = \bigl(c_{ij}\bigr)_{n\times m},\qquad D = \bigl(d_{ji}\bigr)_{m\times n},\qquad
g(x) = \bigl(g_1(x_1), g_2(x_2), \dots, g_n(x_n)\bigr)^{T},\qquad
f(y) = \bigl(f_1(y_1), f_2(y_2), \dots, f_m(y_m)\bigr)^{T},
\]
\[
\alpha = \bigl(\alpha_1, \alpha_2, \dots, \alpha_n\bigr)^{T},\qquad \beta = \bigl(\beta_1, \beta_2, \dots, \beta_m\bigr)^{T}.
\tag{3.5}
\]
In the following, we will prove that H(x,y) is a homeomorphism.
First, we prove that H(x,y) is an injective map on R^{n+m}.
In fact, if there exist \((x,y)^T, (\bar x,\bar y)^T \in R^{n+m}\) with \((x,y)^T \neq (\bar x,\bar y)^T\) such that \(H(x,y) = H(\bar x,\bar y)\), then
\[
a_i\bigl(x_i-\bar x_i\bigr) = \sum_{j=1}^{m} c_{ij}\bigl(f_j(y_j) - f_j(\bar y_j)\bigr),\quad i=1,2,\dots,n,
\tag{3.6}
\]
\[
b_j\bigl(y_j-\bar y_j\bigr) = \sum_{i=1}^{n} d_{ji}\bigl(g_i(x_i) - g_i(\bar x_i)\bigr),\quad j=1,2,\dots,m.
\tag{3.7}
\]
Multiplying both sides of (3.6) by \(|x_i-\bar x_i|^{r-1}\), it follows from assumption (H) and Lemma 2.6 that
\[
a_i\bigl|x_i-\bar x_i\bigr|^{r} \le \sum_{j=1}^{m}\bigl|c_{ij}\bigr| F_j\bigl|x_i-\bar x_i\bigr|^{r-1}\bigl|y_j-\bar y_j\bigr|
\le \frac{r-1}{r}\sum_{j=1}^{m}\bigl|c_{ij}\bigr|^{(r-\alpha_{ij})/(r-1)} F_j^{(r-\beta_{ij})/(r-1)}\bigl|x_i-\bar x_i\bigr|^{r}
+ \frac{1}{r}\sum_{j=1}^{m}\bigl|c_{ij}\bigr|^{\alpha_{ij}} F_j^{\beta_{ij}}\bigl|y_j-\bar y_j\bigr|^{r}.
\tag{3.8}
\]
Similarly, we have
\[
b_j\bigl|y_j-\bar y_j\bigr|^{r} \le \frac{r-1}{r}\sum_{i=1}^{n}\bigl|d_{ji}\bigr|^{(r-\alpha^{*}_{ji})/(r-1)} G_i^{(r-\beta^{*}_{ji})/(r-1)}\bigl|y_j-\bar y_j\bigr|^{r}
+ \frac{1}{r}\sum_{i=1}^{n}\bigl|d_{ji}\bigr|^{\alpha^{*}_{ji}} G_i^{\beta^{*}_{ji}}\bigl|x_i-\bar x_i\bigr|^{r}.
\tag{3.9}
\]
From (3.8) and (3.9) we get
\[
W\Bigl(\bigl|x_1-\bar x_1\bigr|^{r},\dots,\bigl|x_n-\bar x_n\bigr|^{r},\bigl|y_1-\bar y_1\bigr|^{r},\dots,\bigl|y_m-\bar y_m\bigr|^{r}\Bigr)^{T} \le 0.
\tag{3.10}
\]
Since W is an M-matrix, we get \(x_i = \bar x_i\), \(y_j = \bar y_j\), i = 1,2,…,n, j = 1,2,…,m, which is a contradiction. So H(x,y) is an injective map on R^{n+m}.
Second, we prove that ‖H(x,y)‖ → +∞ as ‖(x,y)^T‖ → +∞.
Since W is an M-matrix, from Lemma 2.4 we know that there exists a vector γ = (λ_1,…,λ_n,λ_{n+1},…,λ_{n+m})^T > 0 such that γ^T W > 0, that is,
\[
\lambda_i\bigl(a_i-\tilde c_i\bigr) - \sum_{j=1}^{m}\lambda_{n+j} d^{*}_{ji} > 0,\quad i=1,2,\dots,n,\qquad
\lambda_{n+j}\bigl(b_j-\tilde d_j\bigr) - \sum_{i=1}^{n}\lambda_i c^{*}_{ij} > 0,\quad j=1,2,\dots,m.
\tag{3.11}
\]
We can choose a small number δ > 0 such that
\[
\lambda_i\bigl(a_i-\tilde c_i\bigr) - \sum_{j=1}^{m}\lambda_{n+j} d^{*}_{ji} \ge \delta > 0,\quad i=1,2,\dots,n,\qquad
\lambda_{n+j}\bigl(b_j-\tilde d_j\bigr) - \sum_{i=1}^{n}\lambda_i c^{*}_{ij} \ge \delta > 0,\quad j=1,2,\dots,m.
\tag{3.12}
\]
Let \(\widetilde H(x,y) = H(x,y) - H(0,0)\), and let sgn(θ) be the signum function, equal to 1 if θ > 0, 0 if θ = 0, and −1 if θ < 0. From assumption (H), Lemma 2.6, and (3.12) we have
\[
\sum_{i=1}^{n}\lambda_i\bigl|x_i\bigr|^{r-1}\operatorname{sgn}(x_i)\widetilde H_i(x,y) + \sum_{j=1}^{m}\lambda_{n+j}\bigl|y_j\bigr|^{r-1}\operatorname{sgn}(y_j)\widetilde H_{n+j}(x,y)
\]
\[
\le -\sum_{i=1}^{n}\lambda_i a_i\bigl|x_i\bigr|^{r} + \sum_{i=1}^{n}\lambda_i\sum_{j=1}^{m}\bigl|c_{ij}\bigr| F_j\bigl|y_j\bigr|\bigl|x_i\bigr|^{r-1}
- \sum_{j=1}^{m}\lambda_{n+j} b_j\bigl|y_j\bigr|^{r} + \sum_{j=1}^{m}\lambda_{n+j}\sum_{i=1}^{n}\bigl|d_{ji}\bigr| G_i\bigl|x_i\bigr|\bigl|y_j\bigr|^{r-1}
\]
\[
\le \sum_{i=1}^{n}\lambda_i\Bigl(-a_i + \frac{r-1}{r}\sum_{j=1}^{m}\bigl|c_{ij}\bigr|^{(r-\alpha_{ij})/(r-1)} F_j^{(r-\beta_{ij})/(r-1)}\Bigr)\bigl|x_i\bigr|^{r}
+ \frac{1}{r}\sum_{i=1}^{n}\lambda_i\sum_{j=1}^{m}\bigl|c_{ij}\bigr|^{\alpha_{ij}} F_j^{\beta_{ij}}\bigl|y_j\bigr|^{r}
\]
\[
\quad + \sum_{j=1}^{m}\lambda_{n+j}\Bigl(-b_j + \frac{r-1}{r}\sum_{i=1}^{n}\bigl|d_{ji}\bigr|^{(r-\alpha^{*}_{ji})/(r-1)} G_i^{(r-\beta^{*}_{ji})/(r-1)}\Bigr)\bigl|y_j\bigr|^{r}
+ \frac{1}{r}\sum_{j=1}^{m}\lambda_{n+j}\sum_{i=1}^{n}\bigl|d_{ji}\bigr|^{\alpha^{*}_{ji}} G_i^{\beta^{*}_{ji}}\bigl|x_i\bigr|^{r}
\]
\[
= -\sum_{i=1}^{n}\Bigl(\lambda_i\bigl(a_i-\tilde c_i\bigr) - \sum_{j=1}^{m}\lambda_{n+j} d^{*}_{ji}\Bigr)\bigl|x_i\bigr|^{r}
- \sum_{j=1}^{m}\Bigl(\lambda_{n+j}\bigl(b_j-\tilde d_j\bigr) - \sum_{i=1}^{n}\lambda_i c^{*}_{ij}\Bigr)\bigl|y_j\bigr|^{r}
\le -\delta\bigl\|(x,y)^{T}\bigr\|_{r}^{r}.
\tag{3.13}
\]
From (3.13) we have
\[
\delta\bigl\|(x,y)^{T}\bigr\|_{r}^{r} \le -\sum_{i=1}^{n}\lambda_i\bigl|x_i\bigr|^{r-1}\operatorname{sgn}(x_i)\widetilde H_i(x,y) - \sum_{j=1}^{m}\lambda_{n+j}\bigl|y_j\bigr|^{r-1}\operatorname{sgn}(y_j)\widetilde H_{n+j}(x,y)
\le \max_{1\le i\le n+m}\{\lambda_i\}\Bigl(\sum_{i=1}^{n}\bigl|x_i\bigr|^{r-1}\bigl|\widetilde H_i(x,y)\bigr| + \sum_{j=1}^{m}\bigl|y_j\bigr|^{r-1}\bigl|\widetilde H_{n+j}(x,y)\bigr|\Bigr).
\tag{3.14}
\]
By using the Hölder inequality we get
\[
\bigl\|(x,y)^{T}\bigr\|_{r}^{r} \le \frac{\max_{1\le i\le n+m}\{\lambda_i\}}{\delta}\Bigl(\sum_{i=1}^{n}\bigl|x_i\bigr|^{r} + \sum_{j=1}^{m}\bigl|y_j\bigr|^{r}\Bigr)^{(r-1)/r}\Bigl(\sum_{i=1}^{n}\bigl|\widetilde H_i(x,y)\bigr|^{r} + \sum_{j=1}^{m}\bigl|\widetilde H_{n+j}(x,y)\bigr|^{r}\Bigr)^{1/r},
\tag{3.15}
\]
that is,
\[
\bigl\|(x,y)^{T}\bigr\| \le \frac{\max_{1\le i\le n+m}\{\lambda_i\}}{\delta}\bigl\|\widetilde H(x,y)\bigr\|.
\tag{3.16}
\]
Therefore, ‖H̃(x,y)‖ → +∞ as ‖(x,y)^T‖ → +∞, which directly implies that ‖H(x,y)‖ → +∞ as ‖(x,y)^T‖ → +∞. From Lemma 2.5 we know that H(x,y) is a homeomorphism on R^{n+m}. Thus, the equation
\[
-a_i u_i + \sum_{j=1}^{m} c_{ij} f_j\bigl(v_j\bigr) + \alpha_i = 0,\quad i=1,2,\dots,n,\qquad
-b_j v_j + \sum_{i=1}^{n} d_{ji} g_i\bigl(u_i\bigr) + \beta_j = 0,\quad j=1,2,\dots,m
\tag{3.17}
\]
has a unique solution (u_1^*, …, u_n^*, v_1^*, …, v_m^*)^T, which is the unique equilibrium point of model (2.1). The proof is completed.
4. Global exponential stability
Theorem 4.1. Under assumption (H), if W in Theorem 3.1 is an M-matrix, and I_k(u_i(t_k,x)) and J_k(v_j(t_k,x)) satisfy
\[
I_k\bigl(u_i(t_k,x)\bigr) = -\gamma_{ik}\bigl(u_i(t_k,x) - u_i^{*}\bigr),\quad 0<\gamma_{ik}<2,\ i=1,2,\dots,n,\ k\in Z_+,
\]
\[
J_k\bigl(v_j(t_k,x)\bigr) = -\delta_{jk}\bigl(v_j(t_k,x) - v_j^{*}\bigr),\quad 0<\delta_{jk}<2,\ j=1,2,\dots,m,\ k\in Z_+,
\tag{4.1}
\]
then model (2.1) has a unique equilibrium point (u_1^*, …, u_n^*, v_1^*, …, v_m^*)^T, which is globally exponentially stable.
Proof. From (4.1) we know that I_k(u_i^*) = 0 and J_k(v_j^*) = 0 (i = 1,2,…,n, j = 1,2,…,m, k ∈ Z_+), so the existence and uniqueness of the equilibrium point of (2.1) follow from Theorem 3.1.
Let (u_1(t,x),…,u_n(t,x),v_1(t,x),…,v_m(t,x))^T be any solution of model (2.1). Then
\[
\frac{\partial\bigl(u_i(t,x)-u_i^{*}\bigr)}{\partial t} = \sum_{k=1}^{l}\frac{\partial}{\partial x_k}\Bigl(D_{ik}\frac{\partial\bigl(u_i(t,x)-u_i^{*}\bigr)}{\partial x_k}\Bigr) - a_i\bigl(u_i(t,x)-u_i^{*}\bigr) + \sum_{j=1}^{m} c_{ij}\Bigl(f_j\bigl(v_j(t-\tau_{ij}(t),x)\bigr) - f_j\bigl(v_j^{*}\bigr)\Bigr)
\tag{4.2}
\]
for t > 0, t ≠ t_k, i = 1,…,n, k ∈ Z_+, and
\[
\frac{\partial\bigl(v_j(t,x)-v_j^{*}\bigr)}{\partial t} = \sum_{k=1}^{l}\frac{\partial}{\partial x_k}\Bigl(D^{*}_{jk}\frac{\partial\bigl(v_j(t,x)-v_j^{*}\bigr)}{\partial x_k}\Bigr) - b_j\bigl(v_j(t,x)-v_j^{*}\bigr) + \sum_{i=1}^{n} d_{ji}\Bigl(g_i\bigl(u_i(t-\sigma_{ji}(t),x)\bigr) - g_i\bigl(u_i^{*}\bigr)\Bigr)
\tag{4.3}
\]
for t > 0, t ≠ t_k, j = 1,…,m, k ∈ Z_+.
Multiplying both sides of (4.2) by u_i(t,x) − u_i^* and integrating, we have
\[
\frac{1}{2}\frac{d}{dt}\int_{\Omega}\bigl(u_i(t,x)-u_i^{*}\bigr)^{2}dx = \sum_{k=1}^{l}\int_{\Omega}\bigl(u_i(t,x)-u_i^{*}\bigr)\frac{\partial}{\partial x_k}\Bigl(D_{ik}\frac{\partial\bigl(u_i(t,x)-u_i^{*}\bigr)}{\partial x_k}\Bigr)dx
- a_i\int_{\Omega}\bigl(u_i(t,x)-u_i^{*}\bigr)^{2}dx
+ \sum_{j=1}^{m} c_{ij}\int_{\Omega}\bigl(u_i(t,x)-u_i^{*}\bigr)\Bigl(f_j\bigl(v_j(t-\tau_{ij}(t),x)\bigr) - f_j\bigl(v_j^{*}\bigr)\Bigr)dx.
\tag{4.4}
\]
From the boundary condition (2.2) and the proof of [22, Theorem 1] we get
\[
\sum_{k=1}^{l}\int_{\Omega}\bigl(u_i(t,x)-u_i^{*}\bigr)\frac{\partial}{\partial x_k}\Bigl(D_{ik}\frac{\partial\bigl(u_i(t,x)-u_i^{*}\bigr)}{\partial x_k}\Bigr)dx = -\sum_{k=1}^{l}\int_{\Omega} D_{ik}\Bigl(\frac{\partial\bigl(u_i(t,x)-u_i^{*}\bigr)}{\partial x_k}\Bigr)^{2}dx.
\tag{4.5}
\]
From (4.4), (4.5), assumption (H), and the Cauchy–Schwarz inequality we have
\[
\frac{d\bigl\|u_i(t,x)-u_i^{*}\bigr\|_{2}^{2}}{dt} \le -2a_i\bigl\|u_i(t,x)-u_i^{*}\bigr\|_{2}^{2} + 2\sum_{j=1}^{m}\bigl|c_{ij}\bigr| F_j\bigl\|u_i(t,x)-u_i^{*}\bigr\|_{2}\bigl\|v_j(t-\tau_{ij}(t),x)-v_j^{*}\bigr\|_{2}.
\tag{4.6}
\]
Thus
\[
D^{+}\bigl\|u_i(t,x)-u_i^{*}\bigr\|_{2} \le -a_i\bigl\|u_i(t,x)-u_i^{*}\bigr\|_{2} + \sum_{j=1}^{m}\bigl|c_{ij}\bigr| F_j\bigl\|v_j(t-\tau_{ij}(t),x)-v_j^{*}\bigr\|_{2}
\tag{4.7}
\]
for t > 0, t ≠ t_k, i = 1,…,n, k ∈ Z_+.
Multiplying both sides of (4.3) by v_j(t,x) − v_j^*, we can similarly get
\[
D^{+}\bigl\|v_j(t,x)-v_j^{*}\bigr\|_{2} \le -b_j\bigl\|v_j(t,x)-v_j^{*}\bigr\|_{2} + \sum_{i=1}^{n}\bigl|d_{ji}\bigr| G_i\bigl\|u_i(t-\sigma_{ji}(t),x)-u_i^{*}\bigr\|_{2}
\tag{4.8}
\]
for t > 0, t ≠ t_k, j = 1,…,m, k ∈ Z_+.
It follows from (4.1) that
\[
\bigl\|u_i(t_k+0,x)-u_i^{*}\bigr\|_{2} = \bigl|1-\gamma_{ik}\bigr|\,\bigl\|u_i(t_k,x)-u_i^{*}\bigr\|_{2},\quad i=1,\dots,n,\ k\in Z_+,
\]
\[
\bigl\|v_j(t_k+0,x)-v_j^{*}\bigr\|_{2} = \bigl|1-\delta_{jk}\bigr|\,\bigl\|v_j(t_k,x)-v_j^{*}\bigr\|_{2},\quad j=1,\dots,m,\ k\in Z_+.
\tag{4.9}
\]
Let us consider the functions
\[
\rho_i(\theta) = \lambda_i\Bigl(\frac{\theta}{r} - a_i + \tilde c_i\Bigr) + \sum_{j=1}^{m}\lambda_{n+j} c^{*}_{ij} e^{\tau\theta},\quad i=1,2,\dots,n,
\]
\[
\chi_j(\theta) = \lambda_{n+j}\Bigl(\frac{\theta}{r} - b_j + \tilde d_j\Bigr) + \sum_{i=1}^{n}\lambda_i d^{*}_{ji} e^{\sigma\theta},\quad j=1,2,\dots,m.
\tag{4.10}
\]
Since W is an M-matrix, from Lemma 2.4 we know that there exists a vector γ = (λ_1,…,λ_n,λ_{n+1},…,λ_{n+m})^T > 0 such that Wγ > 0, that is,
\[
\lambda_i\bigl(a_i-\tilde c_i\bigr) - \sum_{j=1}^{m}\lambda_{n+j} c^{*}_{ij} > 0,\quad i=1,2,\dots,n,\qquad
\lambda_{n+j}\bigl(b_j-\tilde d_j\bigr) - \sum_{i=1}^{n}\lambda_i d^{*}_{ji} > 0,\quad j=1,2,\dots,m.
\tag{4.11}
\]
From (4.11) and (4.10) we know that ρ_i(0) < 0, χ_j(0) < 0, and ρ_i(θ) and χ_j(θ) are continuous for θ ∈ [0,+∞). Moreover, ρ_i(θ), χ_j(θ) → +∞ as θ → +∞. Since dρ_i(θ)/dθ > 0 and dχ_j(θ)/dθ > 0, ρ_i(θ) and χ_j(θ) are strictly monotone increasing functions on [0,+∞). Thus, there exist constants z_i^*, z̃_j^* ∈ (0,+∞) such that
\[
\rho_i\bigl(z_i^{*}\bigr) = \lambda_i\Bigl(\frac{z_i^{*}}{r} - a_i + \tilde c_i\Bigr) + \sum_{j=1}^{m}\lambda_{n+j} c^{*}_{ij} e^{z_i^{*}\tau} = 0,\quad i=1,2,\dots,n,
\]
\[
\chi_j\bigl(\tilde z_j^{*}\bigr) = \lambda_{n+j}\Bigl(\frac{\tilde z_j^{*}}{r} - b_j + \tilde d_j\Bigr) + \sum_{i=1}^{n}\lambda_i d^{*}_{ji} e^{\tilde z_j^{*}\sigma} = 0,\quad j=1,2,\dots,m.
\tag{4.12}
\]
Choosing 0 < ε < min{z_1^*, …, z_n^*, z̃_1^*, …, z̃_m^*}, we then have
\[
\lambda_i\Bigl(\frac{\varepsilon}{r} - a_i + \tilde c_i\Bigr) + \sum_{j=1}^{m}\lambda_{n+j} c^{*}_{ij} e^{\varepsilon\tau} < 0,\quad i=1,2,\dots,n,\qquad
\lambda_{n+j}\Bigl(\frac{\varepsilon}{r} - b_j + \tilde d_j\Bigr) + \sum_{i=1}^{n}\lambda_i d^{*}_{ji} e^{\varepsilon\sigma} < 0,\quad j=1,2,\dots,m.
\tag{4.13}
\]
Let
\[
U_i(t) = e^{\varepsilon t}\bigl\|u_i(t,x)-u_i^{*}\bigr\|_{2}^{r},\quad i=1,2,\dots,n,\qquad
V_j(t) = e^{\varepsilon t}\bigl\|v_j(t,x)-v_j^{*}\bigr\|_{2}^{r},\quad j=1,2,\dots,m.
\tag{4.14}
\]
Then it follows from (4.7), (4.8), and (4.14) that
\[
D^{+}U_i(t) \le r\Bigl(\Bigl(\frac{\varepsilon}{r} - a_i + \tilde c_i\Bigr)U_i(t) + \sum_{j=1}^{m} c^{*}_{ij} e^{\varepsilon\tau} V_j\bigl(t-\tau_{ij}(t)\bigr)\Bigr),\quad t>0,\ t\neq t_k,\ k\in Z_+,\ i=1,2,\dots,n,
\]
\[
D^{+}V_j(t) \le r\Bigl(\Bigl(\frac{\varepsilon}{r} - b_j + \tilde d_j\Bigr)V_j(t) + \sum_{i=1}^{n} d^{*}_{ji} e^{\varepsilon\sigma} U_i\bigl(t-\sigma_{ji}(t)\bigr)\Bigr),\quad t>0,\ t\neq t_k,\ k\in Z_+,\ j=1,2,\dots,m.
\tag{4.15}
\]
From (4.9) and (4.1) we get
\[
U_i(t_k+0) = \bigl|1-\gamma_{ik}\bigr|^{r} U_i(t_k) \le U_i(t_k),\quad k\in Z_+,\ i=1,2,\dots,n,
\]
\[
V_j(t_k+0) = \bigl|1-\delta_{jk}\bigr|^{r} V_j(t_k) \le V_j(t_k),\quad k\in Z_+,\ j=1,2,\dots,m.
\tag{4.16}
\]
Let
\[
l_0 = (1+\delta)\Bigl(\sup_{s\in[-\sigma,0]}\sum_{i=1}^{n}\bigl\|\varphi_{u_i}(s,x)-u_i^{*}\bigr\|_{2}^{r} + \sup_{s\in[-\tau,0]}\sum_{j=1}^{m}\bigl\|\varphi_{v_j}(s,x)-v_j^{*}\bigr\|_{2}^{r}\Bigr)\Big/\min_{1\le i\le n+m}\{\lambda_i\}
\]
(δ is a positive constant). Then
\[
U_i(s) = e^{\varepsilon s}\bigl\|u_i(s,x)-u_i^{*}\bigr\|_{2}^{r} \le \bigl\|u_i(s,x)-u_i^{*}\bigr\|_{2}^{r} = \bigl\|\varphi_{u_i}(s,x)-u_i^{*}\bigr\|_{2}^{r} < \lambda_i l_0,\quad -\sigma\le s\le 0,
\]
\[
V_j(s) = e^{\varepsilon s}\bigl\|v_j(s,x)-v_j^{*}\bigr\|_{2}^{r} \le \bigl\|v_j(s,x)-v_j^{*}\bigr\|_{2}^{r} = \bigl\|\varphi_{v_j}(s,x)-v_j^{*}\bigr\|_{2}^{r} < \lambda_{n+j} l_0,\quad -\tau\le s\le 0.
\tag{4.17}
\]
In the following, we will prove that
\[
U_i(t) < \lambda_i l_0,\qquad V_j(t) < \lambda_{n+j} l_0,\qquad 0\le t<t_1,\ i=1,2,\dots,n,\ j=1,2,\dots,m.
\tag{4.18}
\]
If (4.18) were not true, then, without loss of generality, there would exist some i_0 and t^* ∈ [0,t_1) such that
\[
U_{i_0}\bigl(t^{*}\bigr) = \lambda_{i_0} l_0,\qquad D^{+}U_{i_0}\bigl(t^{*}\bigr) \ge 0,
\]
\[
U_i(t) \le \lambda_i l_0,\quad -\sigma\le t\le t^{*},\ i=1,2,\dots,n,\qquad
V_j(t) \le \lambda_{n+j} l_0,\quad -\tau\le t\le t^{*},\ j=1,2,\dots,m.
\tag{4.19}
\]
However, from (4.15) and (4.13) we get
\[
D^{+}U_{i_0}\bigl(t^{*}\bigr) \le r\Bigl(\Bigl(\frac{\varepsilon}{r} - a_{i_0} + \tilde c_{i_0}\Bigr)U_{i_0}\bigl(t^{*}\bigr) + \sum_{j=1}^{m} c^{*}_{i_0 j} e^{\varepsilon\tau} V_j\bigl(t^{*}-\tau_{i_0 j}(t^{*})\bigr)\Bigr)
\le r\Bigl(\Bigl(\frac{\varepsilon}{r} - a_{i_0} + \tilde c_{i_0}\Bigr)\lambda_{i_0} l_0 + \sum_{j=1}^{m} c^{*}_{i_0 j} e^{\varepsilon\tau}\lambda_{n+j} l_0\Bigr) < 0,
\tag{4.20}
\]
which is a contradiction; so (4.18) holds.
Suppose that for all k = 1,2,…,N the inequalities
\[
U_i(t) < \lambda_i l_0,\qquad V_j(t) < \lambda_{n+j} l_0,\qquad t_{k-1}\le t<t_k,\ i=1,2,\dots,n,\ j=1,2,\dots,m
\tag{4.21}
\]
hold (with t_0 = 0). Then from (4.16) and (4.21) we get
\[
U_i(t_k+0) \le U_i(t_k) < \lambda_i l_0,\quad i=1,2,\dots,n,\qquad
V_j(t_k+0) \le V_j(t_k) < \lambda_{n+j} l_0,\quad j=1,2,\dots,m.
\tag{4.22}
\]
This, together with (4.21), leads to
\[
U_i(t) < \lambda_i l_0,\quad t_N-\sigma\le t\le t_N,\ i=1,2,\dots,n,\qquad
V_j(t) < \lambda_{n+j} l_0,\quad t_N-\tau\le t\le t_N,\ j=1,2,\dots,m.
\tag{4.23}
\]
In the following, we will prove that
\[
U_i(t) < \lambda_i l_0,\qquad V_j(t) < \lambda_{n+j} l_0,\qquad t_N\le t<t_{N+1},\ i=1,2,\dots,n,\ j=1,2,\dots,m.
\tag{4.24}
\]
If (4.24) were not true, then, without loss of generality, there would exist some i_1 and t^{**} ∈ [t_N,t_{N+1}) such that
\[
U_{i_1}\bigl(t^{**}\bigr) = \lambda_{i_1} l_0,\qquad D^{+}U_{i_1}\bigl(t^{**}\bigr) \ge 0,
\]
\[
U_i(t) \le \lambda_i l_0,\quad t_N-\sigma\le t\le t^{**},\ i=1,2,\dots,n,\qquad
V_j(t) \le \lambda_{n+j} l_0,\quad t_N-\tau\le t\le t^{**},\ j=1,2,\dots,m.
\tag{4.25}
\]
However, from (4.15), (4.23), and (4.13) we get
\[
D^{+}U_{i_1}\bigl(t^{**}\bigr) \le r\Bigl(\Bigl(\frac{\varepsilon}{r} - a_{i_1} + \tilde c_{i_1}\Bigr)U_{i_1}\bigl(t^{**}\bigr) + \sum_{j=1}^{m} c^{*}_{i_1 j} e^{\varepsilon\tau} V_j\bigl(t^{**}-\tau_{i_1 j}(t^{**})\bigr)\Bigr)
\le r\Bigl(\Bigl(\frac{\varepsilon}{r} - a_{i_1} + \tilde c_{i_1}\Bigr)\lambda_{i_1} l_0 + \sum_{j=1}^{m} c^{*}_{i_1 j} e^{\varepsilon\tau}\lambda_{n+j} l_0\Bigr) < 0,
\tag{4.26}
\]
which is a contradiction; so (4.24) holds.
By mathematical induction, we conclude that
\[
U_i(t) < \lambda_i l_0,\qquad V_j(t) < \lambda_{n+j} l_0,\qquad t_{N-1}\le t<t_N,\ N=1,2,\dots,\ i=1,2,\dots,n,\ j=1,2,\dots,m
\tag{4.27}
\]
(with t_0 = 0). This implies that
\[
U_i(t) < \lambda_i l_0,\quad i=1,2,\dots,n,\qquad V_j(t) < \lambda_{n+j} l_0,\quad j=1,2,\dots,m
\tag{4.28}
\]
for any t > 0. That is,
\[
e^{\varepsilon t}\bigl\|u_i(t,x)-u_i^{*}\bigr\|_{2}^{r} \le \sup_{s\in[-\sigma,0]}\sum_{i=1}^{n}\bigl\|\varphi_{u_i}(s,x)-u_i^{*}\bigr\|_{2}^{r} + \sup_{s\in[-\tau,0]}\sum_{j=1}^{m}\bigl\|\varphi_{v_j}(s,x)-v_j^{*}\bigr\|_{2}^{r},\quad i=1,2,\dots,n,
\]
\[
e^{\varepsilon t}\bigl\|v_j(t,x)-v_j^{*}\bigr\|_{2}^{r} \le \sup_{s\in[-\sigma,0]}\sum_{i=1}^{n}\bigl\|\varphi_{u_i}(s,x)-u_i^{*}\bigr\|_{2}^{r} + \sup_{s\in[-\tau,0]}\sum_{j=1}^{m}\bigl\|\varphi_{v_j}(s,x)-v_j^{*}\bigr\|_{2}^{r},\quad j=1,2,\dots,m
\tag{4.29}
\]
for any t > 0. Let M = n^{1/r} + m^{1/r}. From (4.29) and Lemma 2.7 we get
\[
\Bigl(\sum_{i=1}^{n}\bigl\|u_i(t,x)-u_i^{*}\bigr\|_{2}^{r}\Bigr)^{1/r} + \Bigl(\sum_{j=1}^{m}\bigl\|v_j(t,x)-v_j^{*}\bigr\|_{2}^{r}\Bigr)^{1/r}
\le M\Bigl(\sup_{s\in[-\sigma,0]}\sum_{i=1}^{n}\bigl\|\varphi_{u_i}(s,x)-u_i^{*}\bigr\|_{2}^{r} + \sup_{s\in[-\tau,0]}\sum_{j=1}^{m}\bigl\|\varphi_{v_j}(s,x)-v_j^{*}\bigr\|_{2}^{r}\Bigr)^{1/r} e^{(-\varepsilon/r)t}
\tag{4.30}
\]
for all t > 0. Therefore, the unique equilibrium point of model (2.1) is globally exponentially stable, and the exponential convergence rate index ε/r can be obtained from (4.12). The proof is completed.
Corollary 4.2. Under assumption (H) and condition (4.1), if
\[
W_1 = \begin{pmatrix} A & -C^{*}\\ -D^{*} & B \end{pmatrix}
\tag{4.31}
\]
is an M-matrix, then model (2.1) has a unique equilibrium point (u_1^*, …, u_n^*, v_1^*, …, v_m^*)^T, which is globally exponentially stable, where A = diag(a_1, a_2, …, a_n), B = diag(b_1, b_2, …, b_m), C* = (F_j|c_{ij}|)_{n×m}, D* = (G_i|d_{ji}|)_{m×n}.
Proof. Take α_{ij} = β_{ij} = α*_{ji} = β*_{ji} = 1 and let r → 1^+; then W turns into W_1. The proof is completed.
Remark 4.3. When the diffusion operators D_{ik} = 0, D*_{jk} = 0 (i = 1,2,…,n, j = 1,2,…,m, k = 1,2,…,l), model (2.1) becomes the following impulsive BAM neural network with time-varying delays:
\[
\frac{du_i(t)}{dt} = -a_i u_i(t) + \sum_{j=1}^{m} c_{ij} f_j\bigl(v_j(t-\tau_{ij}(t))\bigr) + \alpha_i,\quad t>0,\ t\neq t_k,\ i=1,\dots,n,
\]
\[
\Delta u_i(t_k) = I_k\bigl(u_i(t_k)\bigr),\quad i=1,\dots,n,\ k=1,2,\dots,
\]
\[
\frac{dv_j(t)}{dt} = -b_j v_j(t) + \sum_{i=1}^{n} d_{ji} g_i\bigl(u_i(t-\sigma_{ji}(t))\bigr) + \beta_j,\quad t>0,\ t\neq t_k,\ j=1,\dots,m,
\]
\[
\Delta v_j(t_k) = J_k\bigl(v_j(t_k)\bigr),\quad j=1,\dots,m,\ k=1,2,\dots
\tag{4.32}
\]
For this model, we have the following results.
Corollary 4.4. Under assumption (H), if W in Theorem 3.1 is an M-matrix, and the impulsive operators I_k(u_i(t_k)) and J_k(v_j(t_k)) satisfy
\[
I_k\bigl(u_i(t_k)\bigr) = -\gamma_{ik}\bigl(u_i(t_k) - u_i^{*}\bigr),\quad 0<\gamma_{ik}<2,\ i=1,2,\dots,n,\ k\in Z_+,
\]
\[
J_k\bigl(v_j(t_k)\bigr) = -\delta_{jk}\bigl(v_j(t_k) - v_j^{*}\bigr),\quad 0<\delta_{jk}<2,\ j=1,2,\dots,m,\ k\in Z_+,
\tag{4.33}
\]
then model (4.32) has a unique equilibrium point (u_1^*, …, u_n^*, v_1^*, …, v_m^*)^T, which is globally exponentially stable.
Corollary 4.5 (see [18]). Under assumption (H) and condition (4.33), when τ_{ij}(t), σ_{ji}(t) (i = 1,2,…,n, j = 1,2,…,m) are constants, model (4.32) has a unique equilibrium point (u_1^*, …, u_n^*, v_1^*, …, v_m^*)^T, which is globally exponentially stable, if
\[
a_i > G_i\sum_{j=1}^{m}\bigl|d_{ji}\bigr|,\qquad b_j > F_j\sum_{i=1}^{n}\bigl|c_{ij}\bigr|,\qquad i=1,2,\dots,n,\ j=1,2,\dots,m.
\tag{4.34}
\]
Proof. If condition (4.34) holds, then matrix W1 in Corollary 4.2 is column diagonally
dominant, so W1 is an M-matrix. The proof is completed.
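The diagonal-dominance argument can be illustrated numerically (a sketch with hypothetical parameters, not taken from the paper): when each column of W_1 is strictly diagonally dominant, the vector ξ = (1,…,1)^T satisfies ξ^T W_1 > 0, so W_1 is an M-matrix by Lemma 2.4(ii).

```python
# Sketch: column diagonal dominance of W1 implies the M-matrix property
# (Lemma 2.4(ii) with xi = (1,...,1)^T). The parameters a, b, c, d, F, G
# below are hypothetical values chosen to satisfy condition (4.34).

a = [5.0, 6.0]                  # a_i
b = [7.0, 8.0]                  # b_j
c = [[1.0, 2.0], [1.5, 0.5]]    # c_ij (n x m)
d = [[1.0, 1.0], [0.5, 2.0]]    # d_ji (m x n)
F = [1.0, 1.0]                  # Lipschitz constants F_j
G = [1.0, 1.0]                  # Lipschitz constants G_i
n, m = 2, 2

# column-dominance form of condition (4.34)
cond = all(a[i] > G[i] * sum(abs(d[j][i]) for j in range(m)) for i in range(n)) \
   and all(b[j] > F[j] * sum(abs(c[i][j]) for i in range(n)) for j in range(m))

# build W1 = [[A, -C*], [-D*, B]] with C* = (F_j|c_ij|), D* = (G_i|d_ji|)
W1 = [[0.0] * (n + m) for _ in range(n + m)]
for i in range(n):
    W1[i][i] = a[i]
    for j in range(m):
        W1[i][n + j] = -F[j] * abs(c[i][j])
for j in range(m):
    W1[n + j][n + j] = b[j]
    for i in range(n):
        W1[n + j][i] = -G[i] * abs(d[j][i])

# column sums equal xi^T W1 with xi = ones; all positive under (4.34)
col_sums = [sum(W1[r][col] for r in range(n + m)) for col in range(n + m)]
print(cond, all(s > 0 for s in col_sums))
```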
Remark 4.6. In [18, 21], the global exponential stability of impulsive BAM neural networks with constant delays was investigated by constructing a suitable Lyapunov functional. In [20], the authors considered impulsive BAM neural networks with distributed delays, and several sufficient criteria for global exponential stability were obtained by constructing a suitable Lyapunov functional. It should be noted that our methods, which do not make use of a Lyapunov functional, are simple and valid for the stability analysis of impulsive BAM neural networks with constant, time-varying, or distributed delays. It may be difficult to apply the Lyapunov approach of [18, 20, 21] to discuss the exponential stability of models (4.32) and (2.1).
Remark 4.7. In [3, 5, 8–10, 12, 15], boundedness of the activation functions was required. In [4, 6, 7, 10, 26], monotonicity of the activation functions was needed. Both of these restrictions on the activation functions have been removed in this paper.
5. Examples
Example 5.1. Consider the following impulsive BAM neural network with fixed delays:
\[
\frac{du(t)}{dt} = -5u(t) + 6f\bigl(v(t-3)\bigr) + 11,\quad t>0,\ t\neq t_k,
\]
\[
\Delta u(t_k) = -\gamma_{1k}\bigl(u(t_k) - 1\bigr),\quad k=1,2,\dots,
\]
\[
\frac{dv(t)}{dt} = -3v(t) - g\bigl(u(t-1)\bigr) + 2,\quad t>0,\ t\neq t_k,
\]
\[
\Delta v(t_k) = -\delta_{1k}\bigl(v(t_k) - 1\bigr),\quad k=1,2,\dots,
\tag{5.1}
\]
where f(y) = g(y) = −|y|, t_1 < t_2 < ⋯ is a strictly increasing sequence such that lim_{k→∞} t_k = +∞, γ_{1k} = 1 + (1/2)sin(2+k), and δ_{1k} = 1 + (6/7)cos(9+k²), k ∈ Z_+.
Since b_1 = 3 < |c_{11}| = 6, condition (4.34) is not satisfied, which means that the theorem in [18] is not applicable to ascertain the stability of the neural network (5.1). However, it is easy to check that (5.1) satisfies all conditions of Corollary 4.4 of this paper. Hence, model (5.1) has a unique equilibrium point, which is globally exponentially stable. In fact, the unique equilibrium (1,1)^T is the unique stable equilibrium point. From (4.12), the exponential convergence rate index can be estimated as 0.2008.
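As an independent numerical check (not part of the original text), we can verify that the matrix W_1 of Corollary 4.2 is an M-matrix for (5.1) (with Lipschitz constants F_1 = G_1 = 1), and integrate the impulsive delay system by forward Euler; the trajectory should converge to the equilibrium (1,1)^T. The step size, horizon, and zero initial history below are arbitrary choices made for illustration.

```python
import math

# W1 for (5.1): A = [5], B = [3], C* = [F|c11|] = [6], D* = [G|d11|] = [1].
W1 = [[5.0, -6.0], [-1.0, 3.0]]
minors = [W1[0][0], W1[0][0] * W1[1][1] - W1[0][1] * W1[1][0]]
print(minors)  # [5.0, 9.0]: both positive, so W1 is an M-matrix

# Forward Euler for (5.1) with f(y) = g(y) = -|y|, delays 3 and 1,
# impulses at t_k = k, and zero initial history (an arbitrary choice).
dt, T = 0.01, 60.0
steps = int(T / dt)
d3, d1 = int(3 / dt), int(1 / dt)        # delay lengths in steps
us, vs = [0.0], [0.0]
for n in range(steps):
    u, v = us[-1], vs[-1]
    v_del = vs[max(0, n - d3)]           # v(t - 3), zero pre-history
    u_del = us[max(0, n - d1)]           # u(t - 1)
    u_new = u + dt * (-5 * u - 6 * abs(v_del) + 11)
    v_new = v + dt * (-3 * v + abs(u_del) + 2)
    t_next = (n + 1) * dt
    k = round(t_next)
    if k >= 1 and abs(t_next - k) < dt / 2:   # impulse instants t_k = k
        u_new -= (1 + 0.5 * math.sin(2 + k)) * (u_new - 1)
        v_new -= (1 + 6 / 7 * math.cos(9 + k * k)) * (v_new - 1)
    us.append(u_new)
    vs.append(v_new)

print(abs(us[-1] - 1) < 0.05, abs(vs[-1] - 1) < 0.05)  # near the equilibrium (1, 1)
```

Since the jump sizes |1 − γ_{1k}| ≤ 1/2 and |1 − δ_{1k}| ≤ 6/7 are both below 1, the impulses themselves also pull the state toward the equilibrium, consistent with condition (4.1).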
Example 5.2. Consider the following impulsive BAM neural network with both time-varying delays and reaction-diffusion terms:
\[
\frac{\partial u_i(t,x)}{\partial t} = \frac{\partial}{\partial x_k}\Bigl(t^{2}x^{6}\frac{\partial u_i(t,x)}{\partial x_k}\Bigr) - a_i u_i(t,x) + \sum_{j=1}^{2} c_{ij} f_j\bigl(v_j(t-\tau_{ij}(t),x)\bigr) + \alpha_i,\quad t\neq t_k,
\]
\[
\Delta u_i(t_k,x) = I_k\bigl(u_i(t_k,x)\bigr),
\]
\[
\frac{\partial v_j(t,x)}{\partial t} = \frac{\partial}{\partial x_k}\Bigl(t^{4}x^{2}\frac{\partial v_j(t,x)}{\partial x_k}\Bigr) - b_j v_j(t,x) + \sum_{i=1}^{2} d_{ji} g_i\bigl(u_i(t-\sigma_{ji}(t),x)\bigr) + \beta_j,\quad t\neq t_k,
\]
\[
\Delta v_j(t_k,x) = J_k\bigl(v_j(t_k,x)\bigr)
\tag{5.2}
\]
for i, j = 1,2, k ∈ Z_+, where
\[
f_i(r) = g_j(r) = |r+1| + |r-1|,\qquad \tau_{ij}(t) = \sigma_{ji}(t) = \bigl|\sin\bigl((i+j)t\bigr)\bigr|,\quad i,j=1,2,
\]
\[
\Delta u_1(t_k,x) = -\Bigl(1 + \frac{1}{2}\sin\bigl(7+k^{2}\bigr)\Bigr)u_1(t_k,x),\qquad
\Delta u_2(t_k,x) = -\bigl(1 + \cos(3-k)\bigr)\bigl(u_2(t_k,x) - 1\bigr),
\]
\[
\Delta v_1(t_k,x) = -2\bigl|\sin(1-5k)\bigr|\bigl(v_1(t_k,x) - 1\bigr),\qquad
\Delta v_2(t_k,x) = -\Bigl(1 - \frac{1}{3}\cos(11k)\Bigr)\bigl(v_2(t_k,x) - 2\bigr),
\]
\[
a_1 = 7,\quad a_2 = 12,\quad b_1 = 4,\quad b_2 = 36,\qquad
c_{11} = 2,\quad c_{12} = 3,\quad c_{21} = 1.5,\quad c_{22} = 2.5,
\]
\[
d_{11} = 2,\quad d_{12} = 2,\quad d_{21} = 2,\quad d_{22} = 3.5,\qquad
\alpha_1 = -16,\quad \alpha_2 = -1,\quad \beta_1 = -4,\quad \beta_2 = 61.
\tag{5.3}
\]
Since f_j and g_i are not monotone increasing functions, the conditions of the two theorems in [26] are not satisfied, which means that the theorems in [26] are not applicable to ascertain the stability of the neural network (5.2). However, it is easy to check that (5.2) satisfies all conditions of Corollary 4.2 of this paper. Hence, model (5.2) has a unique equilibrium point, which is globally exponentially stable. In fact, the unique equilibrium (0,1,1,2)^T is the unique stable equilibrium point. From (4.12), the exponential convergence rate index can be estimated as 0.1374.
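The Corollary 4.2 condition for (5.2) can be verified numerically (an illustration, not part of the original text): reading the activations as f_i(r) = g_j(r) = |r+1| + |r−1|, the Lipschitz constants are F_j = G_i = 2, and all leading principal minors of the resulting W_1 come out positive, so W_1 is an M-matrix.

```python
# Check that W1 = [[A, -C*], [-D*, B]] for Example 5.2 is an M-matrix,
# assuming Lipschitz constants F_j = G_i = 2 for |r+1| + |r-1|.

def det(M):
    """Determinant by Laplace expansion along the first row."""
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(len(M)))

F = G = 2.0
a1, a2, b1, b2 = 7.0, 12.0, 4.0, 36.0
c = [[2.0, 3.0], [1.5, 2.5]]      # c_ij
d = [[2.0, 2.0], [2.0, 3.5]]      # d_ji

W1 = [[a1, 0.0, -F * c[0][0], -F * c[0][1]],
      [0.0, a2, -F * c[1][0], -F * c[1][1]],
      [-G * d[0][0], -G * d[0][1], b1, 0.0],
      [-G * d[1][0], -G * d[1][1], 0.0, b2]]

minors = [det([row[:k] for row in W1[:k]]) for k in range(1, 5)]
print(minors)                        # [7.0, 84.0, 60.0, 52.0]
print(all(mn > 0 for mn in minors))  # True: W1 is an M-matrix
```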
6. Conclusions
In this paper, several easily checked sufficient criteria ensuring the existence, uniqueness, and global exponential stability of the equilibrium point have been given for impulsive bidirectional associative memory neural networks with both time-varying delays and reaction-diffusion terms. In particular, an estimate of the exponential convergence rate index has also been provided. Some existing results are improved and extended. Two examples have been given to show that the obtained results are less restrictive than previously known criteria. The method is simple and effective for the stability analysis of neural networks with time-varying delays.
Acknowledgments
The authors would like to thank the reviewers and the editor for their valuable suggestions and comments, which have led to a much improved paper. This work was supported by the Scientific Research Fund of Chongqing Municipal Education Commission under Grant KJ070401, the National Natural Science Foundation of China under Grant 60574043, and the Natural Science Foundation of the Chongqing Science and Technology Committee.
References
[1] B. Kosko, “Bidirectional associative memories,” IEEE Transactions on Systems, Man, and Cybernetics, vol. 18, no. 1, pp. 49–60, 1988.
[2] J. Cao and Q. Song, “Stability in Cohen-Grossberg-type bidirectional associative memory neural
networks with time-varying delays,” Nonlinearity, vol. 19, no. 7, pp. 1601–1617, 2006.
[3] S. Arik, “Global asymptotic stability analysis of bidirectional associative memory neural networks with time delays,” IEEE Transactions on Neural Networks, vol. 16, no. 3, pp. 580–586,
2005.
[4] K. Gopalsamy and X.-Z. He, “Delay-independent stability in bidirectional associative memory
networks,” IEEE Transactions on Neural Networks, vol. 5, no. 6, pp. 998–1002, 1994.
[5] X. Liao, K.-W. Wong, and S. Yang, “Convergence dynamics of hybrid bidirectional associative
memory neural networks with distributed delays,” Physics Letters A, vol. 316, no. 1-2, pp. 55–64,
2003.
[6] V. S. H. Rao and Bh. R. M. Phaneendra, “Global dynamics of bidirectional associative memory
neural networks involving transmission delays and dead zones,” Neural Networks, vol. 12, no. 3,
pp. 455–465, 1999.
[7] S. Mohamad, “Global exponential stability in continuous-time and discrete-time delayed bidirectional neural networks,” Physica D, vol. 159, no. 3-4, pp. 233–251, 2001.
[8] J. Cao and L. Wang, “Exponential stability and periodic oscillatory solution in BAM networks
with delays,” IEEE Transactions on Neural Networks, vol. 13, no. 2, pp. 457–463, 2002.
[9] J. Cao, J. Liang, and J. Lam, “Exponential stability of high-order bidirectional associative memory neural networks with time delays,” Physica D, vol. 199, no. 3-4, pp. 425–436, 2004.
[10] S. Xu and J. Lam, “A new approach to exponential stability analysis of neural networks with
time-varying delays,” Neural Networks, vol. 19, no. 1, pp. 76–83, 2006.
[11] S. Guo, L. Huang, B. Dai, and Z. Zhang, “Global existence of periodic solutions of BAM neural
networks with variable coefficients,” Physics Letters A, vol. 317, no. 1-2, pp. 97–106, 2003.
[12] J. H. Park, “A novel criterion for global asymptotic stability of BAM neural networks with time
delays,” Chaos, Solitons and Fractals, vol. 29, no. 2, pp. 446–453, 2006.
[13] Z.-H. Guan and G. Chen, “On delayed impulsive Hopfield neural networks,” Neural Networks,
vol. 12, no. 2, pp. 273–280, 1999.
[14] Z. Yang and D. Xu, “Impulsive effects on stability of Cohen-Grossberg neural networks with
variable delays,” Applied Mathematics and Computation, vol. 177, no. 1, pp. 63–78, 2006.
[15] D. W. C. Ho, J. Liang, and J. Lam, “Global exponential stability of impulsive high-order BAM
neural networks with time-varying delays,” Neural Networks, vol. 19, no. 10, pp. 1581–1590,
2006.
[16] H. Akça, R. Alassar, V. Covachev, Z. Covacheva, and E. Al-Zahrani, “Continuous-time additive
Hopfield-type neural networks with impulses,” Journal of Mathematical Analysis and Applications, vol. 290, no. 2, pp. 436–451, 2004.
[17] D. Xu and Z. Yang, “Impulsive delay differential inequality and stability of neural networks,”
Journal of Mathematical Analysis and Applications, vol. 305, no. 1, pp. 107–120, 2005.
[18] Y. Li, “Global exponential stability of BAM neural networks with delays and impulses,” Chaos,
Solitons and Fractals, vol. 24, no. 1, pp. 279–285, 2005.
[19] Y. Zhang and J. Sun, “Stability of impulsive neural networks with time delays,” Physics Letters A,
vol. 348, no. 1-2, pp. 44–50, 2005.
[20] Y.-T. Li and C.-B. Yang, “Global exponential stability analysis on impulsive BAM neural networks with distributed delays,” Journal of Mathematical Analysis and Applications, vol. 324, no. 2,
pp. 1125–1139, 2006.
[21] F. Yang, C. Zhang, and D. Wu, “Global stability analysis of impulsive BAM type Cohen-Grossberg neural networks with delays,” Applied Mathematics and Computation, vol. 186, no. 1, pp. 932–940, 2007.
[22] G. T. Stamov and I. M. Stamova, “Almost periodic solutions for impulsive neural networks with
delay,” Applied Mathematical Modelling, vol. 31, no. 7, pp. 1263–1270, 2007.
[23] X. X. Liao, S. Z. Yang, S. J. Chen, and Y. L. Fu, “Stability of general neural networks with reaction-diffusion,” Science in China. Series F, vol. 44, no. 5, pp. 389–395, 2001.
[24] L. Wang and D. Xu, “Global exponential stability of Hopfield reaction-diffusion neural networks
with time-varying delays,” Science in China. Series F, vol. 46, no. 6, pp. 466–474, 2003.
[25] Q. Song, Z. Zhao, and Y. Li, “Global exponential stability of BAM neural networks with distributed delays and reaction-diffusion terms,” Physics Letters A, vol. 335, no. 2-3, pp. 213–225,
2005.
[26] J. Qiu, “Exponential stability of impulsive neural networks with time-varying delays and
reaction-diffusion terms,” Neurocomputing, vol. 70, no. 4–6, pp. 1102–1108, 2007.
[27] J. Cao and J. Wang, “Global exponential stability and periodicity of recurrent neural networks
with time delays,” IEEE Transactions on Circuits and Systems I, vol. 52, no. 5, pp. 920–931, 2005.
[28] Q. Zhang, X. Wei, and J. Xu, “New stability conditions for neural networks with constant and
variable delays,” Chaos, Solitons and Fractals, vol. 26, no. 5, pp. 1391–1398, 2005.
[29] Q. Song and J. Cao, “Stability analysis of Cohen-Grossberg neural network with both time-varying and continuously distributed delays,” Journal of Computational and Applied Mathematics, vol. 197, no. 1, pp. 188–203, 2006.
Qiankun Song: Department of Mathematics, Chongqing Jiaotong University,
Chongqing 400074, China
Email address: sqk@hutc.zj.cn
Jinde Cao: Department of Mathematics, Southeast University, Nanjing 210096, China
Email address: jdcao@seu.edu.cn