Hindawi Publishing Corporation
Abstract and Applied Analysis
Volume 2012, Article ID 481582, 16 pages
doi:10.1155/2012/481582
Research Article
Synchronization between
Bidirectional Coupled Nonautonomous Delayed
Cohen-Grossberg Neural Networks
Qiming Liu and Rui Xu
Institute of Applied Mathematics, Shijiazhuang Mechanical Engineering College,
Shijiazhuang 050003, China
Correspondence should be addressed to Qiming Liu, lqmmath@yahoo.com.cn
Received 15 May 2012; Accepted 3 September 2012
Academic Editor: Wing-Sum Cheung
Copyright © 2012 Q. Liu and R. Xu. This is an open access article distributed under the Creative
Commons Attribution License, which permits unrestricted use, distribution, and reproduction in
any medium, provided the original work is properly cited.
By using a suitable Lyapunov function and the properties of M-matrices, sufficient conditions for complete synchronization of bidirectionally coupled nonautonomous Cohen-Grossberg neural networks are obtained. The proposed approach avoids the complicated error system of Cohen-Grossberg neural networks. Two numerical examples are given to show the effectiveness of the proposed synchronization method.
1. Introduction
It is well known that chaos synchronization has gained considerable attention due to the fact that it offers many benefits in various engineering fields such as secure communication, image processing, and harmless oscillation generation. Since the pioneering work of Pecora and Carroll [1], chaos synchronization has been widely investigated, and many effective methods such as feedback control, impulsive control, and adaptive control, together with important results, have been presented in [2–10] and the references cited therein. Synchronization of coupled chaotic systems has also received considerable attention [11–13].
Since Cohen and Grossberg proposed the Cohen-Grossberg neural networks (CGNNs) in 1983 [14], these networks have been the subject of active research due to their many applications in various engineering and scientific areas such as pattern recognition, associative memory, and combinatorial optimization, and many dynamic analyses of equilibria or periodic solutions of delayed CGNNs have been carried out [15–20]. Unfortunately, so far only a few works have addressed the synchronization of CGNNs, which remains challenging. In 2010, Li et al. investigated the synchronization of discrete-time CGNNs with delays [6], and Chen discussed the synchronization of impulsive delayed CGNNs under noise perturbation [7]; in 2010, Zhu and Cao discussed adaptive synchronization of delayed CGNNs [21]; in 2012, Gan also discussed adaptive synchronization of delayed CGNNs [22]; and in 2011, Yu et al. discussed synchronization of delayed CGNNs via periodically intermittent control [23].
Note that the previous methods in [6, 7, 21–23] first obtain an error dynamical system and then analyze it. However, compared with Hopfield neural networks, the error dynamical system of a Cohen-Grossberg neural network becomes very complex because of the amplification functions of the network, and hence the resulting criteria are quite complex as well. Compared with the foregoing works, the objective of this paper is to study the complete synchronization of nonautonomous CGNNs with mixed time delays by using a suitable Lyapunov function and the properties of M-matrices; our method avoids writing the explicit, complex error system, which would otherwise lead to complicated synchronization conditions.
The rest of this paper is organized as follows. The model, some definitions, and assumptions are presented in Section 2. Sufficient conditions for complete synchronization of bidirectionally coupled CGNNs are obtained in Section 3. Two examples are given in Section 4 to demonstrate the main results.
2. Model Description and Preliminaries
Consider the following bidirectionally coupled nonautonomous CGNNs with time delays:
$$
\begin{aligned}
\dot{x}_i(t) ={}& -a_i(x_i(t))\Bigg[\, b_i(t, x_i(t)) - \sum_{j=1}^{n} u_{ij}(t) f_j(x_j(t)) - \sum_{j=1}^{n} v_{ij}(t) f_j\big(x_j(t-\tau_{ij}(t))\big) \\
& - \sum_{j=1}^{n} w_{ij}(t) \int_{0}^{\infty} k_{ij}(s) f_j(x_j(t-s))\,ds - I_i(t) \Bigg]
+ \sum_{j=1}^{n} \alpha_{ij}(t)\big(y_j(t)-x_j(t)\big),
\end{aligned} \tag{2.1}
$$
$$
\begin{aligned}
\dot{y}_i(t) ={}& -a_i(y_i(t))\Bigg[\, b_i(t, y_i(t)) - \sum_{j=1}^{n} u_{ij}(t) f_j(y_j(t)) - \sum_{j=1}^{n} v_{ij}(t) f_j\big(y_j(t-\tau_{ij}(t))\big) \\
& - \sum_{j=1}^{n} w_{ij}(t) \int_{0}^{\infty} k_{ij}(s) f_j(y_j(t-s))\,ds - I_i(t) \Bigg]
+ \sum_{j=1}^{n} \beta_{ij}(t)\big(x_j(t)-y_j(t)\big)
\end{aligned} \tag{2.2}
$$
for $1 \le i \le n$, $t > 0$, where $x_i(t)$ and $y_i(t)$ denote the state variables of the $i$th neuron in (2.1) and (2.2); $f_j(\cdot)$ denote the signal functions of the $j$th neuron at time $t$; $I_i(t)$ denote the inputs of the $i$th neuron at time $t$; $a_i(\cdot)$ represent the amplification functions; $b_i(t,\cdot)$ are appropriately behaved functions; $u_{ij}(t)$, $v_{ij}(t)$, and $w_{ij}(t)$ are bounded connection weights of the neural networks, respectively; $\tau_{ij}(t)$ correspond to the finite speed of the axonal signal transmission, and there exist positive constants $\tau_{ij}$ and $\dot{\tau}_{ij}$ such that $0 \le \tau_{ij}(t) \le \tau_{ij}$ and $\dot{\tau}_{ij}(t) \le \dot{\tau}_{ij} < 1$; $k_{ij}(\cdot)$ are the delay kernel functions; the coupling matrices are $(\alpha_{ij}(t))_{n\times n}$ and $(\beta_{ij}(t))_{n\times n}$, in which $\alpha_{ii}(t) \ge 0$, $\beta_{ii}(t) \ge 0$, and $\alpha_{ij}(t)$, $\beta_{ij}(t)$ are bounded on $[0,\infty)$.
Throughout this paper, we assume for systems (2.1) and (2.2) that

(H1) the amplification functions $a_i(\cdot)$ are continuous and there exist constants $\underline{a}_i$, $\overline{a}_i$ such that $0 < \underline{a}_i \le a_i(\cdot) \le \overline{a}_i$ for $1 \le i \le n$;

(H2) there exist positive continuous and bounded functions $b_i(t)$ such that
$$
\frac{b_i(t,x) - b_i(t,y)}{x-y} \ge b_i(t) > 0 \tag{2.3}
$$
for all $x \ne y$, $1 \le i \le n$;

(H3) for the activation functions $f_j(\cdot)$, there exist positive constants $L_j$ such that
$$
L_j = \sup_{x \ne y}\left|\frac{f_j(x)-f_j(y)}{x-y}\right| \tag{2.4}
$$
for $1 \le j \le n$;

(H4) the kernel functions $k_{ij}(s)$ are nonnegative continuous functions on $[0,\infty)$ which satisfy $\int_0^\infty s\, e^{\lambda s} k_{ij}(s)\,ds < \infty$, and
$$
K_{ij}(\lambda) = \int_0^\infty e^{\lambda s} k_{ij}(s)\,ds \tag{2.5}
$$
are differentiable functions for $\lambda \in [0, r_{ij})$, $0 < r_{ij} < \infty$, with $K_{ij}(0) = 1$ and $\lim_{\lambda \to r_{ij}^-} K_{ij}(\lambda) = \infty$.
Remark 2.1. A typical example of a kernel function is given by $k_{ij}(s) = \dfrac{s^{r}}{r!}\, r_{ij}^{\,r+1} e^{-r_{ij}s}$ for $s \in [0,\infty)$, where $r_{ij} \in (0,\infty)$ and $r \in \{0, 1, \ldots, n\}$. These kernel functions are called the gamma memory filter [24] and satisfy condition (H4).
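The normalization $K_{ij}(0) = \int_0^\infty k_{ij}(s)\,ds = 1$ required by (H4) can be confirmed numerically for the gamma memory filter. The following short sketch is an illustration added here, not part of the original analysis; the parameter pairs are chosen arbitrarily, and the integral is approximated on a truncated grid.

```python
import math
import numpy as np

def gamma_kernel(s, r, r_ij):
    """Gamma memory kernel k_ij(s) = s^r / r! * r_ij^(r+1) * exp(-r_ij * s)."""
    return s**r / math.factorial(r) * r_ij**(r + 1) * np.exp(-r_ij * s)

# Truncated numerical approximation of K_ij(0) = \int_0^infty k_ij(s) ds.
s = np.linspace(0.0, 50.0, 200001)               # [0, 50] is wide enough for the rates used below
for r, r_ij in [(0, 1.0), (1, 1.0), (2, 0.5)]:   # arbitrary illustrative parameters
    K0 = np.trapz(gamma_kernel(s, r, r_ij), s)
    print(f"r = {r}, r_ij = {r_ij}: K_ij(0) ~= {K0:.6f}")   # each value is close to 1
```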
For any bounded function $s(t)$ on $[0,\infty)$, $\underline{s}$ and $\overline{s}$ denote $\inf_{t\in[0,\infty)}\{|s(t)|\}$ and $\sup_{t\in[0,\infty)}\{|s(t)|\}$, respectively. For any $x(t) = (x_1(t), x_2(t), \ldots, x_k(t))^T \in \mathbb{R}^k$, $t > 0$, define $\|x(t)\|_1 = \sum_{i=1}^{k}|x_i(t)|$, and for any $\phi(s) = (\phi_1(s), \phi_2(s), \ldots, \phi_k(s))^T \in \mathbb{R}^k$, $s \in (-\infty, 0]$, define $\|\phi\| = \sup_{s\in(-\infty,0]}\sum_{i=1}^{k}|\phi_i(s)|$.
Denote
$$
C\big((-\infty,0],\mathbb{R}^k\big) = \Big\{\psi : (-\infty,0] \to \mathbb{R}^k \;\Big|\; \psi(s)\ \text{is bounded and continuous for}\ s \in (-\infty,0]\Big\}. \tag{2.6}
$$
Then $C((-\infty,0],\mathbb{R}^k)$ is a Banach space with respect to $\|\cdot\|$.
The initial conditions of systems (2.1) and (2.2) are given by
$$
x_i(s) = \varphi_i(s), \qquad y_i(s) = \varsigma_i(s), \qquad -\infty < s \le 0,\ 1 \le i \le n. \tag{2.7}
$$
Definition 2.2. Systems (2.1) and (2.2) are said to achieve global exponential complete synchronization if, for any solution $x(t,\psi_1)$ of system (2.1) and any solution $y(t,\psi_2)$ of system (2.2), there exist positive constants $\lambda$ and $M$ such that
$$
\big\|y(t,\psi_2) - x(t,\psi_1)\big\|_1 \le M\|\psi_2-\psi_1\|\, e^{-\lambda t}, \qquad t \ge 0, \tag{2.8}
$$
where $\psi_2(s) = (\varsigma_1(s), \varsigma_2(s), \ldots, \varsigma_n(s))^T$ and $\psi_1(s) = (\varphi_1(s), \varphi_2(s), \ldots, \varphi_n(s))^T \in C((-\infty,0],\mathbb{R}^n)$.
Definition 2.3. A real matrix $A = (a_{ij})_{n\times n}$ is said to be a nonsingular M-matrix if $a_{ij} \le 0$ ($i,j = 1,2,\ldots,n$, $i \ne j$) and all successive principal minors of $A$ are positive.
Lemma 2.4 (see [25]). A matrix $A = (a_{ij})_{n\times n}$ with nonpositive off-diagonal elements is a nonsingular M-matrix if and only if there exists a vector $p = (p_i)_{1\times n} > 0$ such that $pA > 0$ or $Ap^T > 0$ holds.
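As a computational aside (not part of the original paper), both characterizations above can be checked mechanically for a concrete numerical matrix. A minimal sketch, assuming the matrix already has nonpositive off-diagonal entries of the kind produced by conditions (H5) and (H6) below:

```python
import numpy as np

def is_nonsingular_M_matrix(A, tol=1e-12):
    """Definition 2.3: nonpositive off-diagonal entries and positive
    successive (leading) principal minors."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    if np.any(A - np.diag(np.diag(A)) > tol):       # a positive off-diagonal entry was found
        return False
    return all(np.linalg.det(A[:k, :k]) > tol for k in range(1, n + 1))

def positive_vector_p(A):
    """Lemma 2.4: try to exhibit p > 0 with A p^T > 0 by solving A p = (1,...,1)^T.
    For a nonsingular M-matrix, A^{-1} is entrywise nonnegative, so this p is positive."""
    A = np.asarray(A, dtype=float)
    p = np.linalg.solve(A, np.ones(A.shape[0]))
    return p if np.all(p > 0) else None

print(is_nonsingular_M_matrix([[5.0, -2.0], [-2.0, 1.0]]))   # True
print(positive_vector_p([[5.0, -2.0], [-2.0, 1.0]]))          # [3. 7.]
```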
Lemma 2.5 (see [26]). Let $a < b \le +\infty$. Suppose that $v(t) = (v_1(t), v_2(t), \ldots, v_n(t))^T$ satisfies the following differential inequality:
$$
\begin{gathered}
D^+ v(t) \le P v(t) + \big[Q \otimes \overline{v}(t)\big] e_n + \int_0^\infty \big[R \otimes K(s)\big] v(t-s)\,ds, \qquad t \in [a,b), \\
v(a+s) \in C\big((-\infty,0],\mathbb{R}^n\big), \qquad s \in (-\infty,0],
\end{gathered} \tag{2.9}
$$
where
$$
\begin{gathered}
P = (p_{ij})_{n\times n},\ p_{ij} \ge 0\ \text{for}\ i \ne j, \qquad Q = (q_{ij})_{n\times n},\ q_{ij} \ge 0, \qquad R = (r_{ij})_{n\times n},\ r_{ij} \ge 0, \\
e_n = (1,1,\ldots,1)^T \in \mathbb{R}^n, \qquad K(s) = \big(k_{ij}(s)\big)_{n\times n}, \qquad \overline{v}(t) = \big(v_j(t-\tau_{ij}(t))\big)_{n\times n},
\end{gathered} \tag{2.10}
$$
and $\otimes$ denotes the Hadamard product. If the initial conditions satisfy
$$
v(t) \le \kappa \xi e^{-\lambda(t-a)}, \qquad \kappa \ge 0,\ t \in (-\infty, a], \tag{2.11}
$$
where $\xi = (\xi_1, \xi_2, \ldots, \xi_n)^T > 0$ and the positive number $\lambda$ is determined by the following inequality:
$$
\big[\lambda E + P + Q \otimes \varepsilon(\lambda) + R \otimes K(\lambda)\big]\xi < 0, \tag{2.12}
$$
in which
$$
\varepsilon(\lambda) = \big(e^{\lambda \tau_{ij}}\big)_{n\times n}, \qquad K(\lambda) = \big(K_{ij}(\lambda)\big)_{n\times n}, \qquad K_{ij}(\lambda) = \int_0^\infty e^{\lambda s} k_{ij}(s)\,ds, \tag{2.13}
$$
and $E$ is an identity matrix, then $v(t) \le \kappa \xi e^{-\lambda(t-a)}$ for $t \in [a,b)$.
3. Main Results
Theorem 3.1. Under assumptions (H1)–(H4), system (2.1) and system (2.2) will achieve global exponential synchronization if the following condition holds:

(H5) $M_1 = A_1 - C_1$ is a nonsingular M-matrix, where
$$
A_1 = \mathrm{diag}\!\left(\underline{b}_1 + \frac{\underline{\gamma}_{11}}{\overline{a}_1},\ \underline{b}_2 + \frac{\underline{\gamma}_{22}}{\overline{a}_2},\ \ldots,\ \underline{b}_n + \frac{\underline{\gamma}_{nn}}{\overline{a}_n}\right), \qquad
C_1 = (c_{ij})_{n\times n}, \quad c_{ij} = \left(\overline{u}_{ij} + \frac{1}{1-\dot{\tau}_{ij}}\overline{v}_{ij} + \overline{w}_{ij}\right) L_j + q_{ij}, \tag{3.1}
$$
where $q_{ij} = \overline{\gamma}_{ij}/\underline{a}_i$ for $j \ne i$, $q_{ii} = 0$, and $\gamma_{ij}(t) = \alpha_{ij}(t) + \beta_{ij}(t)$, $i,j = 1,2,\ldots,n$.
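As an illustration of how condition (H5) can be checked in practice, the matrix $M_1 = A_1 - C_1$ can be assembled numerically from the bounds on the network data and tested against Definition 2.3. The sketch below is an aid added here, not part of the original paper; all inputs are user-supplied bounds for a concrete network, following the formulas in (3.1) as reconstructed above.

```python
import numpy as np

def build_M1(b_low, a_up, a_low, gamma_low_diag, gamma_up, u_up, v_up, w_up, L, tau_dot_up):
    """Assemble M1 = A1 - C1 from the bounds appearing in condition (H5).
    b_low, a_up, a_low, gamma_low_diag, L: length-n arrays of bounds;
    gamma_up, u_up, v_up, w_up, tau_dot_up: n-by-n arrays of bounds."""
    b_low, a_up, a_low, L = map(np.asarray, (b_low, a_up, a_low, L))
    gamma_low_diag = np.asarray(gamma_low_diag)
    gamma_up, u_up, v_up, w_up, tau_dot_up = map(np.asarray, (gamma_up, u_up, v_up, w_up, tau_dot_up))
    n = len(b_low)
    A1 = np.diag(b_low + gamma_low_diag / a_up)
    C1 = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            q_ij = 0.0 if i == j else gamma_up[i, j] / a_low[i]
            C1[i, j] = (u_up[i, j] + v_up[i, j] / (1.0 - tau_dot_up[i, j]) + w_up[i, j]) * L[j] + q_ij
    return A1 - C1

def satisfies_H5(M1, tol=1e-12):
    """Definition 2.3: all successive principal minors of M1 positive
    (the off-diagonal entries of M1 are nonpositive by construction)."""
    return all(np.linalg.det(M1[:k, :k]) > tol for k in range(1, M1.shape[0] + 1))
```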
Proof. Let $x(t)$, $y(t)$ be two solutions of system (2.1) with initial value $\psi_1 = (\varphi_1, \varphi_2, \ldots, \varphi_n)$ and of system (2.2) with initial value $\psi_2 = (\varsigma_1, \varsigma_2, \ldots, \varsigma_n) \in C((-\infty,0],\mathbb{R}^n)$, respectively. Denote $e_i(t) = y_i(t) - x_i(t)$.

Note that condition (H5), namely that $M_1$ is a nonsingular M-matrix, implies that $M_1^T$ is a nonsingular M-matrix. From Lemma 2.4, we know that there exists a vector $p = (p_1, p_2, \ldots, p_n)^T$ such that $M_1^T p > 0$, that is,
$$
p_i\!\left(\underline{b}_i + \frac{\underline{\gamma}_{ii}}{\overline{a}_i}\right)
- \sum_{j=1,\, j\ne i}^{n} p_j \frac{\overline{\gamma}_{ji}}{\underline{a}_j}
- \sum_{j=1}^{n} p_j\!\left(\overline{u}_{ji} + \frac{1}{1-\dot{\tau}_{ji}}\overline{v}_{ji} + \overline{w}_{ji}\right) L_i > 0 \tag{3.2}
$$
for $1 \le i \le n$.
Denote
$$
F_i(\theta) = p_i\!\left(\underline{b}_i + \frac{\underline{\gamma}_{ii}}{\overline{a}_i} - \frac{\theta}{\underline{a}_i}\right)
- \sum_{j=1,\, j\ne i}^{n} p_j \frac{\overline{\gamma}_{ji}}{\underline{a}_j}
- \sum_{j=1}^{n} p_j\!\left(\overline{u}_{ji} L_i + \frac{e^{\theta\tau_{ji}}}{1-\dot{\tau}_{ji}}\overline{v}_{ji} L_i + \overline{w}_{ji} L_i \int_0^\infty k_{ji}(s)e^{\theta s}\,ds\right) \tag{3.3}
$$
for $1 \le i \le n$, which indicates $F_i(0) > 0$. Since the $F_i(\theta)$ are continuous and differentiable on $[0, r_{ji})$, in which the $r_{ji}$ are some positive constants, $\lim_{\theta \to r_{ji}^-} F_i(\theta) = -\infty$ according to condition (H4), and furthermore $F_i'(\theta) < 0$ for $\theta \in [0, r_{ji})$, there exist constants $\theta_i$ such that $F_i(\theta_i) = 0$ for $i = 1,2,\ldots,n$. So we can choose
$$
0 < \lambda \le \min_{1\le i\le n}\{\theta_i\} \tag{3.4}
$$
such that
$$
F_i(\lambda) \ge 0. \tag{3.5}
$$
Let
$$
V_i(t) = e^{\lambda t}\,\mathrm{sign}(e_i(t)) \int_{x_i(t)}^{y_i(t)} \frac{1}{a_i(s)}\,ds \qquad \text{for } i = 1,2,\ldots,n, \tag{3.6}
$$
where $e_i(t) = y_i(t) - x_i(t)$.
Calculating the upper right derivative of $V_i(t)$ along the solutions of (2.1) and (2.2), we get
$$
\begin{aligned}
D^+ V_i(t) \le{}& e^{\lambda t}\Bigg\{\frac{\lambda}{\underline{a}_i}|e_i(t)| - \mathrm{sign}(e_i(t))\big[b_i(t,y_i(t)) - b_i(t,x_i(t))\big]
+ \sum_{j=1}^{n}\big|u_{ij}(t)\big|\,\big|f_j(y_j(t)) - f_j(x_j(t))\big| \\
&+ \sum_{j=1}^{n}\big|v_{ij}(t)\big|\,\big|f_j\big(y_j(t-\tau_{ij}(t))\big) - f_j\big(x_j(t-\tau_{ij}(t))\big)\big|
+ \sum_{j=1}^{n}\big|w_{ij}(t)\big|\left|\int_0^\infty k_{ij}(s) f_j(y_j(t-s))\,ds - \int_0^\infty k_{ij}(s) f_j(x_j(t-s))\,ds\right| \\
&+ \sum_{j=1,\, j\ne i}^{n}\frac{\gamma_{ij}(t)}{\underline{a}_i}\big|e_j(t)\big| - \frac{\gamma_{ii}(t)}{\overline{a}_i}|e_i(t)|\Bigg\} \\
\le{}& e^{\lambda t}\Bigg\{\left(\frac{\lambda}{\underline{a}_i} - \underline{b}_i - \frac{\underline{\gamma}_{ii}}{\overline{a}_i}\right)|e_i(t)|
+ \sum_{j=1}^{n}\overline{u}_{ij} L_j \big|e_j(t)\big| + \sum_{j=1}^{n}\overline{v}_{ij} L_j \big|e_j(t-\tau_{ij}(t))\big|
+ \sum_{j=1}^{n}\overline{w}_{ij} L_j \int_0^\infty k_{ij}(s)\big|e_j(t-s)\big|\,ds
+ \sum_{j=1,\, j\ne i}^{n}\frac{\overline{\gamma}_{ij}}{\underline{a}_i}\big|e_j(t)\big|\Bigg\},
\end{aligned} \tag{3.7}
$$
where $\gamma_{ij}(t) = \alpha_{ij}(t) + \beta_{ij}(t)$.
Now we define a Lyapunov function $V(t)$ by
$$
V(t) = \sum_{i=1}^{n} p_i\Bigg\{ V_i(t)
+ \sum_{j=1}^{n}\overline{v}_{ij} L_j \frac{e^{\lambda\tau_{ij}}}{1-\dot{\tau}_{ij}} \int_{t-\tau_{ij}(t)}^{t} \big|e_j(s)\big|\, e^{\lambda s}\,ds
+ \sum_{j=1}^{n}\overline{w}_{ij} L_j \int_0^\infty k_{ij}(s) \int_{t-s}^{t} \big|e_j(\mu)\big|\, e^{\lambda(s+\mu)}\,d\mu\, ds \Bigg\}. \tag{3.8}
$$
We can obtain that
$$
\begin{aligned}
D^+ V(t) &\le e^{\lambda t} \sum_{i=1}^{n} p_i \Bigg\{ \left(\frac{\lambda}{\underline{a}_i} - \underline{b}_i - \frac{\underline{\gamma}_{ii}}{\overline{a}_i}\right)|e_i(t)|
+ \sum_{j=1}^{n}\left(\overline{u}_{ij} + \frac{e^{\lambda\tau_{ij}}}{1-\dot{\tau}_{ij}}\overline{v}_{ij} + \overline{w}_{ij}\int_0^\infty k_{ij}(s)e^{\lambda s}\,ds\right) L_j \big|e_j(t)\big|
+ \sum_{j=1,\, j\ne i}^{n}\frac{\overline{\gamma}_{ij}}{\underline{a}_i}\big|e_j(t)\big|\Bigg\} \\
&= -e^{\lambda t}\sum_{i=1}^{n} F_i(\lambda)\,|e_i(t)| \le 0,
\end{aligned} \tag{3.9}
$$
which, together with (3.6) and (3.8), leads to
$$
m_0\, e^{\lambda t}\sum_{i=1}^{n}|e_i(t)| \le V(t) \le V(0) \le M_0\|\psi_2-\psi_1\|, \tag{3.10}
$$
where
$$
\begin{gathered}
m_0 = \min_{1\le i\le n}\left\{\frac{p_i}{\overline{a}_i}\right\}, \qquad M_0 = \max\{M_1, M_2, M_3\}, \qquad
M_1 = \max_{1\le i\le n}\left\{\frac{p_i}{\underline{a}_i}\right\}, \\
M_2 = \max_{1\le i\le n}\left\{\sum_{j=1}^{n} p_j \overline{v}_{ji}\, \frac{\big(e^{\lambda\tau_{ji}} - 1\big) L_i}{\lambda\,\big(1-\dot{\tau}_{ji}\big)}\right\}, \qquad
M_3 = \sum_{j=1}^{n} p_j \max_{1\le i\le n}\big\{\overline{w}_{ji} L_i\big\} \int_0^\infty s\, e^{\lambda s}\max_{1\le i\le n}\{k_{ji}(s)\}\,ds.
\end{gathered} \tag{3.11}
$$
Hence, we obtain that the following inequality holds:
$$
\sum_{i=1}^{n}|e_i(t)| \le \frac{M_0}{m_0}\|\psi_2-\psi_1\|\, e^{-\lambda t}, \qquad t \ge 0, \tag{3.12}
$$
that is,
$$
\|y(t) - x(t)\|_1 \le \frac{M_0}{m_0}\|\psi_2-\psi_1\|\, e^{-\lambda t}, \qquad t \ge 0. \tag{3.13}
$$
This completes the proof.
Theorem 3.2. Under assumptions (H1)–(H4), system (2.2) and system (2.1) will achieve global exponential synchronization if the following condition holds:

(H6) $M_2 = A_2 - C_2$ is a nonsingular M-matrix, where
$$
A_2 = \mathrm{diag}\!\left(\underline{a}_1\underline{b}_1 + \frac{\underline{\gamma}_{11}\underline{a}_1}{\overline{a}_1},\ \underline{a}_2\underline{b}_2 + \frac{\underline{\gamma}_{22}\underline{a}_2}{\overline{a}_2},\ \ldots,\ \underline{a}_n\underline{b}_n + \frac{\underline{\gamma}_{nn}\underline{a}_n}{\overline{a}_n}\right), \qquad
C_2 = (c_{ij})_{n\times n}, \quad c_{ij} = \overline{a}_j\big(\overline{u}_{ij} + \overline{v}_{ij} + \overline{w}_{ij}\big) L_j + q_{ij}, \tag{3.14}
$$
in which $q_{ij} = \overline{a}_j\overline{\gamma}_{ij}/\underline{a}_i$ for $j \ne i$, $q_{ii} = 0$, and $\gamma_{ij}(t) = \alpha_{ij}(t) + \beta_{ij}(t)$, $i,j = 1,2,\ldots,n$.
Proof. Let $x(t)$, $y(t)$ be two solutions of system (2.1) with initial value $\psi_1 = (\varphi_1, \varphi_2, \ldots, \varphi_n)$ and of system (2.2) with initial value $\psi_2 = (\varsigma_1, \varsigma_2, \ldots, \varsigma_n) \in C((-\infty,0],\mathbb{R}^n)$, respectively. Denote $e_i(t) = y_i(t) - x_i(t)$. Let
$$
V_i(t) = \mathrm{sign}(e_i(t)) \int_{x_i(t)}^{y_i(t)} \frac{1}{a_i(s)}\,ds \qquad \text{for } i = 1,2,\ldots,n \tag{3.15}
$$
and note that
$$
\frac{1}{\overline{a}_i}|e_i(t)| \le V_i(t) \le \frac{1}{\underline{a}_i}|e_i(t)|. \tag{3.16}
$$
Calculating the upper right derivative of $V_i(t)$ along the solutions of (2.1) and (2.2), we can get
$$
\begin{aligned}
D^+ V_i(t) \le{}& -\underline{b}_i|e_i(t)|
+ \sum_{j=1}^{n}\overline{u}_{ij}L_j\big|e_j(t)\big| + \sum_{j=1}^{n}\overline{v}_{ij}L_j\big|e_j(t-\tau_{ij}(t))\big|
+ \sum_{j=1}^{n}\overline{w}_{ij}L_j\int_0^\infty k_{ij}(s)\big|e_j(t-s)\big|\,ds
- \frac{\gamma_{ii}(t)}{\overline{a}_i}|e_i(t)| + \sum_{j=1,\,j\ne i}^{n}\frac{\gamma_{ij}(t)}{\underline{a}_i}\big|e_j(t)\big| \\
\le{}& -\underline{b}_i\underline{a}_i V_i(t)
+ \sum_{j=1}^{n}\overline{u}_{ij}L_j\overline{a}_j V_j(t) + \sum_{j=1}^{n}\overline{v}_{ij}L_j\overline{a}_j V_j(t-\tau_{ij}(t))
+ \sum_{j=1}^{n}\overline{w}_{ij}L_j\overline{a}_j\int_0^\infty k_{ij}(s)V_j(t-s)\,ds
- \frac{\underline{\gamma}_{ii}\underline{a}_i}{\overline{a}_i} V_i(t) + \sum_{j=1,\,j\ne i}^{n}\frac{\overline{\gamma}_{ij}\overline{a}_j}{\underline{a}_i} V_j(t),
\end{aligned} \tag{3.17}
$$
that is,
$$
D^+ V(t) \le A V(t) + \big[C \otimes \overline{V}(t)\big] E_n + \int_0^\infty \big[D \otimes K(s)\big] V(t-s)\,ds, \tag{3.18}
$$
where
$$
\begin{gathered}
V(t) = \big(V_1(t), V_2(t), \ldots, V_n(t)\big)^T, \qquad \overline{V}(t) = \big(V_j(t-\tau_{ij}(t))\big)_{n\times n}, \\
A = \mathrm{diag}\!\left(-\underline{a}_1\underline{b}_1 - \frac{\underline{\gamma}_{11}\underline{a}_1}{\overline{a}_1},\ -\underline{a}_2\underline{b}_2 - \frac{\underline{\gamma}_{22}\underline{a}_2}{\overline{a}_2},\ \ldots,\ -\underline{a}_n\underline{b}_n - \frac{\underline{\gamma}_{nn}\underline{a}_n}{\overline{a}_n}\right) + H, \\
H = (h_{ij})_{n\times n}, \quad h_{ij} = \overline{a}_j\overline{u}_{ij}L_j + q_{ij}, \qquad q_{ij} = \frac{\overline{a}_j\overline{\gamma}_{ij}}{\underline{a}_i},\ j \ne i, \quad q_{ii} = 0, \quad i,j = 1,2,\ldots,n, \\
C = (c_{ij})_{n\times n}, \quad c_{ij} = \overline{a}_j\overline{v}_{ij}L_j, \qquad
D = (d_{ij})_{n\times n}, \quad d_{ij} = \overline{a}_j\overline{w}_{ij}L_j, \\
E_n = (1,1,\ldots,1)^T \in \mathbb{R}^n, \qquad K(s) = \big(k_{ij}(s)\big)_{n\times n}.
\end{gathered} \tag{3.19}
$$
We know from (H6) that $M_2$ is a nonsingular M-matrix, which implies that $-(A + D + C)$ is also a nonsingular M-matrix. From Lemma 2.4, there exists a vector $\xi = (\xi_1, \xi_2, \ldots, \xi_n)^T > 0$ such that $-(A + D + C)\xi > 0$; consequently,
$$
\xi_i\!\left(\underline{a}_i\underline{b}_i + \frac{\underline{\gamma}_{ii}\underline{a}_i}{\overline{a}_i}\right)
- \sum_{j=1,\,j\ne i}^{n}\xi_j\frac{\overline{a}_j\overline{\gamma}_{ij}}{\underline{a}_i}
- \sum_{j=1}^{n}\xi_j\overline{a}_j\big(\overline{u}_{ij}+\overline{v}_{ij}+\overline{w}_{ij}\big)L_j > 0. \tag{3.20}
$$
Consider the function
$$
F_i(\theta) = \xi_i\!\left(\theta - \underline{a}_i\underline{b}_i - \frac{\underline{\gamma}_{ii}\underline{a}_i}{\overline{a}_i}\right)
+ \sum_{j=1,\,j\ne i}^{n}\xi_j\frac{\overline{a}_j\overline{\gamma}_{ij}}{\underline{a}_i}
+ \sum_{j=1}^{n}\xi_j\overline{a}_j\Big(\overline{u}_{ij} + \overline{v}_{ij}e^{\theta\tau_{ij}} + \overline{w}_{ij}K_{ij}(\theta)\Big)L_j, \tag{3.21}
$$
in which $K_{ij}(\theta) = \int_0^\infty e^{\theta s} k_{ij}(s)\,ds$.
We obtain from condition (H4) that $K_{ij}(0) = 1$, which, together with (3.20), leads to $F_i(0) < 0$; similar to the proof of Theorem 3.1 above, there exists
$$
0 < \lambda < \min_{1\le i\le n}\{\theta_i\} \tag{3.22}
$$
such that
$$
F_i(\lambda) < 0, \tag{3.23}
$$
that is,
$$
\Big[\lambda E + A + C \otimes \big(e^{\lambda\tau_{ij}}\big)_{n\times n} + D \otimes K(\lambda)\Big]\xi < 0, \tag{3.24}
$$
where $K(\lambda) = (K_{ij}(\lambda))_{n\times n}$, in which $K_{ij}(\lambda)$ is as shown in (3.21).
Note that $V_i(s) \le \frac{1}{\underline{a}_i}\|\psi_2-\psi_1\|$, and denote $\kappa = \|\psi_2-\psi_1\|/\min_{1\le i\le n}\{\underline{a}_i\xi_i\}$; we have
$$
V(t) \le \kappa\xi e^{-\lambda t}, \qquad t \in (-\infty, 0]. \tag{3.25}
$$
We have from Lemma 2.5, (3.18), (3.24), and (3.25) that
$$
V(t) \le \kappa\xi e^{-\lambda t}, \qquad t \ge 0. \tag{3.26}
$$
It follows from (3.16) that
$$
|e_i(t)| \le \overline{a}_i V_i(t) \le \overline{a}_i\,\hat{\kappa}\,\xi_i\|\psi_2-\psi_1\|\, e^{-\lambda t}, \tag{3.27}
$$
in which $\hat{\kappa} = 1/\min_{1\le i\le n}\{\underline{a}_i\xi_i\}$. Hence
$$
\|y(t) - x(t)\|_1 \le M\|\psi_2-\psi_1\|\, e^{-\lambda t}, \qquad t \ge 0, \tag{3.28}
$$
where $M = \sum_{i=1}^{n}\overline{a}_i\xi_i/\min_{1\le i\le n}\{\underline{a}_i\xi_i\}$. This completes the proof.
Corollary 3.3. Under assumptions (H1)–(H4), the response system (2.2) will be globally exponentially synchronized with the master system (2.1) with $\alpha_{ij} = 0$, $i,j = 1,2,\ldots,n$, if $M_1$ in Theorem 3.1 and $M_2$ in Theorem 3.2 are nonsingular M-matrices with $\gamma_{ij}(t) = \beta_{ij}(t)$.
Remark 3.4. If $a_i(\cdot) \equiv 1$, $b_i(t, x_i(t)) = b_i(t)x_i(t)$, and $b_i(t, y_i(t)) = b_i(t)y_i(t)$, systems (2.1) and (2.2) reduce to coupled Hopfield neural networks, and both Theorem 3.1 and Theorem 3.2 reduce to the simpler case in which $\underline{a}_i = \overline{a}_i = 1$.
Remark 3.5. If $\alpha_{ij}(t) = \beta_{ij}(t) = 0$ and $M_1$ in Theorem 3.1 and $M_2$ in Theorem 3.2 are still nonsingular M-matrices, both theorems imply the global asymptotic stability of the solutions of system (2.1); consequently, systems (2.2) and (2.1) achieve complete synchronization for sure. In addition, if systems (2.1) and (2.2) reduce to autonomous systems, the results in both theorems above still hold.
4. Two Simple Examples
Example 4.1. Consider the following coupled CGNNs:
$$
\dot{x}(t) = -a(x(t))\left[A x(t) - B\int_0^\infty K(s)\tanh(x(t-s))\,ds - I(t)\right] + K_1(t)\big(y(t)-x(t)\big), \tag{4.1}
$$
$$
\dot{y}(t) = -a(y(t))\left[A y(t) - B\int_0^\infty K(s)\tanh(y(t-s))\,ds - I(t)\right] + K_2(t)\big(x(t)-y(t)\big), \tag{4.2}
$$
where systems (4.1) and (4.2) are written in vector-matrix form and
$$
\begin{gathered}
x(t) = (x_1(t), x_2(t))^T, \qquad y(t) = (y_1(t), y_2(t))^T, \qquad I(t) = (2, 2)^T, \\
\tanh(x(t)) = \big(\tanh(x_1(t)), \tanh(x_2(t))\big)^T, \qquad \tanh(y(t)) = \big(\tanh(y_1(t)), \tanh(y_2(t))\big)^T, \\
a(x(t)) = \mathrm{diag}\big(2+\sin(x_1(t)),\ 2+\sin(x_2(t))\big), \qquad a(y(t)) = \mathrm{diag}\big(2+\sin(y_1(t)),\ 2+\sin(y_2(t))\big), \\
A = \begin{pmatrix} 1 & 0 \\ 0 & 2 \end{pmatrix}, \qquad B = \begin{pmatrix} 0 & -2\cos t \\ 2 & 0 \end{pmatrix}, \qquad K(s) = \begin{pmatrix} e^{-s} & 0 \\ 0 & e^{-s} \end{pmatrix}, \\
K_1(t) = \begin{pmatrix} \alpha_1(t) & 0 \\ 0 & \alpha_2(t) \end{pmatrix}, \qquad K_2(t) = \begin{pmatrix} \beta_1(t) & 0 \\ 0 & \beta_2(t) \end{pmatrix}.
\end{gathered} \tag{4.3}
$$
Let $K_1(t) = 0$. Figure 1 shows the dynamical behaviors of (4.1) with $K_1(t) = 0$.
Figure 1: Time response of the states $x_1$, $x_2$ of system (4.1) with initial value $(0.1, 0.1)$ and $K_1 = 0$.
Let $K_1(t) = \mathrm{diag}(1, 1)$ and $K_2(t) = \mathrm{diag}(5, 0)$; it is easy to see that $M_1 = \begin{pmatrix} 5 & -2 \\ -2 & 1 \end{pmatrix}$ is still a nonsingular M-matrix. From Theorem 3.1, we know that the coupled systems (4.1) and (4.2) achieve complete synchronization, and Figure 2 shows the dynamical behaviors of complete synchronization with initial conditions $(y_1(s), y_2(s)) = (2, 2)$ and $(x_1(s), x_2(s)) = (0.1, 0.1)$.
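The M-matrix property of $M_1$ here can be verified directly from Definition 2.3: its off-diagonal entries are nonpositive, and its successive principal minors are $5 > 0$ and $\det M_1 = 5 \cdot 1 - (-2)(-2) = 1 > 0$.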
Example 4.2. Consider the following Cohen-Grossberg neural network with discrete delay:
$$
\begin{aligned}
\dot{x}_1(t) &= -3\big(x_1(t) - 3.5\sin(x_2(t-0.4))\big), \\
\dot{x}_2(t) &= -2\big(2x_2(t) - 3\sin(x_1(t-0.4))\big).
\end{aligned} \tag{4.4}
$$
System (4.4) is a chaotic system. Figure 3 shows the chaotic behaviors of system (4.4) with initial condition $(0.2, 0.2)$.

For the driving system (4.4), we construct the response system as follows:
$$
\begin{aligned}
\dot{y}_1(t) &= -3\big(y_1(t) - 3.5\sin(y_2(t-0.4))\big) + \alpha_{11}\big(x_1(t) - y_1(t)\big), \\
\dot{y}_2(t) &= -2\big(2y_2(t) - 3\sin(y_1(t-0.4))\big) + \alpha_{22}\big(x_2(t) - y_2(t)\big),
\end{aligned} \tag{4.5}
$$
where $\alpha_{11} > 0$, $\alpha_{22} > 0$.
Let $\alpha_{11} = 9$, $\alpha_{22} = 0$. It is easy to see that
$$
M_1 = \begin{pmatrix} 3 & -3.5 \\ -3 & 4 \end{pmatrix} \tag{4.6}
$$
is a nonsingular M-matrix. From Corollary 3.3, we know that the response system (4.5) achieves complete synchronization with system (4.4), and Figure 4 shows the dynamical behaviors of complete synchronization with initial conditions $(y_1(s), y_2(s)) = (2, 2)$ and $(x_1(s), x_2(s)) = (0.2, 0.2)$.
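For readers who wish to reproduce the drive-response behavior, the following minimal sketch integrates (4.4) and (4.5) as reconstructed above (a discrete delay of 0.4 inside the sine terms) with a simple fixed-step Euler scheme and constant pre-history. It is an illustration added here, not code from the paper; in particular, the coupling gains below are hypothetical values, and the step size and horizon are arbitrary.

```python
import numpy as np

TAU = 0.4                        # discrete delay in the sine terms
H = 1e-3                         # Euler step size (arbitrary)
T = 10.0                         # integration horizon (arbitrary)
ALPHA11, ALPHA22 = 6.0, 4.0      # hypothetical coupling gains, not taken from the paper
D = int(round(TAU / H))          # number of steps spanning one delay

steps = int(T / H)
x = np.zeros((steps + 1, 2))     # drive system (4.4)
y = np.zeros((steps + 1, 2))     # response system (4.5)
x[0] = [0.2, 0.2]                # drive initial condition from Example 4.2
y[0] = [2.0, 2.0]                # response initial condition from Example 4.2

def delayed(z, k):
    """State one delay in the past, with constant history before t = 0."""
    return z[max(k - D, 0)]

for k in range(steps):
    xd, yd = delayed(x, k), delayed(y, k)
    dx = np.array([-3.0 * (x[k, 0] - 3.5 * np.sin(xd[1])),
                   -2.0 * (2.0 * x[k, 1] - 3.0 * np.sin(xd[0]))])
    dy = np.array([-3.0 * (y[k, 0] - 3.5 * np.sin(yd[1])) + ALPHA11 * (x[k, 0] - y[k, 0]),
                   -2.0 * (2.0 * y[k, 1] - 3.0 * np.sin(yd[0])) + ALPHA22 * (x[k, 1] - y[k, 1])])
    x[k + 1] = x[k] + H * dx
    y[k + 1] = y[k] + H * dy

# When the gains make the corollary's M-matrix condition hold, |y - x| is expected to decay toward zero.
print("final synchronization error:", np.abs(y[-1] - x[-1]))
```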
Figure 2: Complete synchronization of the coupled systems (4.1) and (4.2): (a) $x_1$ and $y_1$; (b) $x_2$ and $y_2$.
Figure 3: (a) Phase plot of system (4.4) with initial condition $(0.2, 0.2)$. (b) Power spectra ($\log_{10}$ of power versus frequency $\omega$) of the time series of $x_1$ and $x_2$ for system (4.4).
Figure 4: Complete synchronization of the response system (4.5) and the driving system (4.4): (a) $x_1$ and $y_1$; (b) $x_2$ and $y_2$; (c) phase plot of $y_2$ versus $y_1$.
5. Conclusions
By using a suitable Lyapunov function and the properties of M-matrices, sufficient conditions for complete synchronization of bidirectionally coupled CGNNs are obtained directly, without writing the explicit error system. Two examples show the effectiveness of the proposed method. Note that $M_1$ and $M_2$ are guaranteed to be M-matrices if $\alpha_{ii}(t) + \beta_{ii}(t)$, that is, $\gamma_{ii}(t)$, is taken large enough that $M_1$ and $M_2$ become strictly diagonally dominant matrices; this shows that our criteria are easy to verify and are useful for the synchronization of CGNNs.
Acknowledgments
The authors are grateful to the editor and two reviewers for their careful reading of this paper,
helpful comments, and valuable suggestions. This research was supported by the National
Natural Science Foundation of China under Grant no. 11071254 and the Hebei Provincial
Natural Science Foundation of China under Grant no. A2012205028.
References
[1] L. M. Pecora and T. L. Carroll, “Synchronization in chaotic systems,” Physical Review Letters, vol. 64, no. 8, pp. 821–824, 1990.
[2] Y. Sun and J. Cao, “Adaptive lag synchronization of unknown chaotic delayed neural networks with noise perturbation,” Physics Letters A, vol. 364, no. 3-4, pp. 277–285, 2007.
[3] K. Wang, Z. Teng, and H. Jiang, “Adaptive synchronization of neural networks with time-varying delay and distributed delay,” Physica A, vol. 387, no. 2-3, pp. 631–642, 2008.
[4] B. Cui and X. Lou, “Synchronization of chaotic recurrent neural networks with time-varying delays using nonlinear feedback control,” Chaos, Solitons and Fractals, vol. 39, no. 1, pp. 288–294, 2009.
[5] H. Huang and G. Feng, “Synchronization of nonidentical chaotic neural networks with time delays,” Neural Networks, vol. 22, no. 7, pp. 869–874, 2009.
[6] T. Li, A. Song, and S. Fei, “Synchronization control for arrays of coupled discrete-time delayed Cohen-Grossberg neural networks,” Neurocomputing, vol. 74, no. 1–3, pp. 197–204, 2010.
[7] Z. Chen, “Complete synchronization for impulsive Cohen-Grossberg neural networks with delay under noise perturbation,” Chaos, Solitons and Fractals, vol. 42, no. 3, pp. 1664–1669, 2009.
[8] J. Cao, G. Chen, and P. Li, “Global synchronization in an array of delayed neural networks with hybrid coupling,” IEEE Transactions on Systems, Man, and Cybernetics B, vol. 38, no. 2, pp. 488–498, 2008.
[9] J. Cao, Z. Wang, and Y. Sun, “Synchronization in an array of linearly stochastically coupled networks with time delays,” Physica A, vol. 385, no. 2, pp. 718–728, 2007.
[10] X. Yang, J. Cao, and J. Lu, “Stochastic synchronization of complex networks with nonidentical nodes via hybrid adaptive and impulsive control,” IEEE Transactions on Circuits and Systems I, vol. 59, no. 2, pp. 371–384, 2012.
[11] C. H. Chiu, W. W. Lin, and C. C. Peng, “Asymptotic synchronization in lattices of coupled nonidentical Lorenz equations,” International Journal of Bifurcation and Chaos in Applied Sciences and Engineering, vol. 10, no. 12, pp. 2717–2728, 2000.
[12] J. Lu, T. Zhou, and S. Zhang, “Chaos synchronization between linearly coupled chaotic systems,” Chaos, Solitons and Fractals, vol. 14, no. 4, pp. 529–541, 2002.
[13] Z. M. Gu, M. Zhao, T. Zhou, C. P. Zhu, and B. H. Wang, “Phase synchronization of non-Abelian oscillators on small-world networks,” Physics Letters A, vol. 362, no. 2-3, pp. 115–119, 2007.
[14] M. A. Cohen and S. Grossberg, “Absolute stability of global pattern formation and parallel memory storage by competitive neural networks,” IEEE Transactions on Systems, Man, and Cybernetics, vol. 13, no. 5, pp. 815–826, 1983.
[15] H. Zhao and L. Wang, “Hopf bifurcation in Cohen-Grossberg neural network with distributed delays,” Nonlinear Analysis, vol. 8, no. 1, pp. 73–89, 2007.
[16] C. H. Li and S. Y. Yang, “Existence and attractivity of periodic solutions to non-autonomous Cohen-Grossberg neural networks with time delays,” Chaos, Solitons and Fractals, vol. 41, no. 3, pp. 1235–1244, 2009.
[17] Q. Zhou, L. Wan, and J. Sun, “Exponential stability of reaction-diffusion generalized Cohen-Grossberg neural networks with time-varying delays,” Chaos, Solitons and Fractals, vol. 32, no. 5, pp. 1713–1719, 2007.
[18] Z. Li and K. Li, “Stability analysis of impulsive Cohen-Grossberg neural networks with distributed delays and reaction-diffusion terms,” Applied Mathematical Modelling, vol. 33, no. 3, pp. 1337–1348, 2009.
[19] J. Cao and Q. Song, “Stability in Cohen-Grossberg-type bidirectional associative memory neural networks with time-varying delays,” Nonlinearity, vol. 19, no. 7, pp. 1601–1617, 2006.
[20] J. Cao and X. Li, “Stability in delayed Cohen-Grossberg neural networks: LMI optimization approach,” Physica D, vol. 212, no. 1-2, pp. 54–65, 2005.
[21] Q. Zhu and J. Cao, “Adaptive synchronization of chaotic Cohen-Crossberg neural networks with mixed time delays,” Nonlinear Dynamics, vol. 61, no. 3, pp. 517–534, 2010.
[22] Q. Gan, “Adaptive synchronization of Cohen-Grossberg neural networks with unknown parameters and mixed time-varying delays,” Communications in Nonlinear Science and Numerical Simulation, vol. 17, no. 7, pp. 3040–3049, 2012.
[23] J. Yu, C. Hu, H. Jiang, and Z. Teng, “Exponential synchronization of Cohen-Grossberg neural networks via periodically intermittent control,” Neurocomputing, vol. 74, no. 10, pp. 1776–1782, 2011.
[24] J. C. Principe, J. M. Kuo, and S. Celebi, “An analysis of the gamma memory in dynamic neural networks,” IEEE Transactions on Neural Networks, vol. 5, no. 2, pp. 331–337, 1994.
[25] R. S. Varga, Matrix Iterative Analysis, vol. 27 of Springer Series in Computational Mathematics, Springer, Berlin, Germany, 2000.
[26] K. Li, “Stability analysis for impulsive Cohen-Grossberg neural networks with time-varying delays and distributed delays,” Nonlinear Analysis, vol. 10, no. 5, pp. 2784–2798, 2009.