Hindawi Publishing Corporation
Abstract and Applied Analysis
Volume 2012, Article ID 587426, 16 pages
doi:10.1155/2012/587426
Research Article
Delay-Dependent Guaranteed Cost
Controller Design for Uncertain Neural Networks
with Interval Time-Varying Delay
M. Rajchakit,¹ P. Niamsup,² T. Rojsiraphisal,² and G. Rajchakit¹
¹Major of Mathematics and Statistics, Faculty of Science, Maejo University, Chiang Mai 50290, Thailand
²Department of Mathematics, Faculty of Science, Chiang Mai University, Chiang Mai 50000, Thailand
Correspondence should be addressed to G. Rajchakit, mrajchakit@yahoo.com
Received 3 August 2012; Revised 24 September 2012; Accepted 25 September 2012
Academic Editor: Xiaodi Li
Copyright © 2012 M. Rajchakit et al. This is an open access article distributed under the Creative
Commons Attribution License, which permits unrestricted use, distribution, and reproduction in
any medium, provided the original work is properly cited.
This paper studies the problem of guaranteed cost control for a class of uncertain delayed neural networks. The time delay is a continuous function belonging to a given interval but is not necessarily differentiable. A cost function is considered as a nonlinear performance measure for the closed-loop system. The stabilizing controllers to be designed must satisfy some exponential stability constraints on the closed-loop poles. By constructing a set of augmented Lyapunov-Krasovskii functionals combined with the Newton-Leibniz formula, a guaranteed cost controller is designed via memoryless state feedback control, and new sufficient conditions for the existence of the guaranteed cost state feedback for the system are given in terms of linear matrix inequalities (LMIs). Numerical examples are given to illustrate the effectiveness of the obtained result.
1. Introduction
The last few decades have witnessed the use of artificial neural networks (ANNs) in many real-world applications, and ANNs have offered an attractive paradigm for a broad range of adaptive complex systems. In recent years, ANNs have enjoyed a great deal of success and have proven useful in a wide variety of pattern recognition and feature-extraction tasks. Examples include optical character recognition, speech recognition, and adaptive control, to name a few. To keep pace with the huge demand in diversified application areas, many different kinds of ANN architectures and learning types have been proposed to meet varying needs such as robustness and stability. Stability and control of neural networks with time delay have attracted considerable
attention in recent years [1–8]. In many practical systems, it is desirable to design neural networks which are not only asymptotically or exponentially stable but can also guarantee an adequate level of system performance. In the areas of control, signal processing, pattern recognition, and image processing, delayed neural networks have many useful applications. Some of these applications require that the equilibrium points of the designed network be stable. In both biological and artificial neural systems, time delays due to integration and communication are ubiquitous and often become a source of instability. The time delays in electronic neural networks are usually time varying, and sometimes vary violently with respect to time, due to the finite switching speed of amplifiers and faults in the electrical circuitry. The guaranteed cost control problem [9–12] has the advantage of providing an upper bound on a given system performance index, and thus the system performance degradation incurred by the uncertainties or time delays is guaranteed to be less than this bound. The Lyapunov-Krasovskii functional technique has been among the most popular and effective tools in the design of guaranteed cost controls for neural networks with time delay. Nevertheless, despite such diversity of available results, most existing work assumes that the time delays are either constant or differentiable [13–16]. Although, in some cases, delay-dependent guaranteed cost control for systems with time-varying delays was considered in [12, 13, 15], the approach used there cannot be applied to systems with interval, nondifferentiable time-varying delays. To the best of our knowledge, the guaranteed cost control and state feedback stabilization for uncertain neural networks with interval, nondifferentiable time-varying delays have not been fully studied yet (see, e.g., [4–26] and the references therein), although they are important in both theory and applications. This motivates our research.
In this paper, we investigate the guaranteed cost control problem for uncertain delayed neural networks. The novel features here are that the delayed neural network under consideration has various globally Lipschitz continuous activation functions and that the time-varying delay function is interval and non-differentiable. A nonlinear cost function is considered as a performance measure for the closed-loop system. The stabilizing controllers to be designed must satisfy some exponential stability constraints on the closed-loop poles. Based on constructing a set of augmented Lyapunov-Krasovskii functionals combined with the Newton-Leibniz formula, new delay-dependent criteria for guaranteed cost control via memoryless feedback control are established in terms of LMIs, which allow simultaneous computation of two bounds that characterize the exponential stability rate of the solution and can be easily determined by utilizing Matlab's LMI control toolbox.
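As a numerical aside not contained in the paper, the flavor of such quadratic Lyapunov certificates can be illustrated without an LMI solver. The sketch below (Python with NumPy/SciPy; all names are ours) certifies exponential stability of the delay-free, nominal part $\dot{x} = -Ax$ of the network by solving a Lyapunov equation; the full delay-dependent conditions (3.2)-(3.4) of Theorem 3.1 would instead require a semidefinite programming solver such as Matlab's LMI toolbox.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Illustrative sketch: for the delay-free nominal part dx/dt = -A x,
# a matrix P > 0 solving (-A) P + P (-A)^T = -I is a quadratic Lyapunov
# certificate of exponential stability.  A is the self-feedback matrix
# used in Example 3.3 below; the delay terms are deliberately ignored.
A = np.diag([0.1, 0.3])                     # diag(a_1, a_2), a_i > 0
P = solve_continuous_lyapunov(-A, -np.eye(2))
print(np.linalg.eigvalsh(P))                # both eigenvalues positive
```

Because $A$ is diagonal, the solution is simply $P = \operatorname{diag}(1/(2a_1),\ 1/(2a_2))$, confirming the certificate exists whenever all $a_i > 0$.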
The outline of the paper is as follows. Section 2 presents definitions and some well-known technical propositions needed for the proof of the main result. LMI delay-dependent criteria for guaranteed cost control and numerical examples showing the effectiveness of the result are presented in Section 3. The paper ends with conclusions and cited references.
2. Preliminaries
The following notation will be used in this paper. $\mathbb{R}^+$ denotes the set of all real nonnegative numbers; $\mathbb{R}^n$ denotes the $n$-dimensional space with the scalar product $\langle x, y \rangle$ or $x^T y$ of two vectors $x, y$, and the vector norm $\|\cdot\|$; $M^{n \times r}$ denotes the space of all matrices of $(n \times r)$-dimensions. $A^T$ denotes the transpose of matrix $A$; $A$ is symmetric if $A = A^T$; $I$ denotes the identity matrix; $\lambda(A)$ denotes the set of all eigenvalues of $A$; $\lambda_{\max}(A) = \max\{\operatorname{Re}\lambda : \lambda \in \lambda(A)\}$. $x_t := \{x(t+s) : s \in [-h, 0]\}$, $\|x_t\| = \sup_{s \in [-h, 0]}\|x(t+s)\|$; $C^1([0, t], \mathbb{R}^n)$ denotes the set of all $\mathbb{R}^n$-valued continuously differentiable functions on $[0, t]$; $L_2([0, t], \mathbb{R}^m)$ denotes the set of all $\mathbb{R}^m$-valued square-integrable functions on $[0, t]$.
Matrix $A$ is called semipositive definite ($A \ge 0$) if $\langle Ax, x \rangle \ge 0$ for all $x \in \mathbb{R}^n$; $A$ is positive definite ($A > 0$) if $\langle Ax, x \rangle > 0$ for all $x \ne 0$; $A > B$ means $A - B > 0$. The notation $\operatorname{diag}\{\cdots\}$ stands for a block-diagonal matrix. The symmetric term in a matrix is denoted by $*$.
Consider the following uncertain neural networks with interval time-varying delay:
$$\dot{x}(t) = -(A + \Delta A(t))x(t) + (W_0 + \Delta W_0(t))f(x(t)) + (W_1 + \Delta W_1(t))g(x(t-h(t))) + Bu(t), \quad t \ge 0,$$
$$x(t) = \phi(t), \quad t \in [-h_1, 0], \tag{2.1}$$
where $x(t) = (x_1(t), x_2(t), \ldots, x_n(t))^T \in \mathbb{R}^n$ is the state of the neural network; $u(\cdot) \in L_2([0, t], \mathbb{R}^m)$ is the control; $n$ is the number of neurons, and
$$f(x(t)) = \big(f_1(x_1(t)), f_2(x_2(t)), \ldots, f_n(x_n(t))\big)^T,$$
$$g(x(t-h(t))) = \big(g_1(x_1(t-h(t))), g_2(x_2(t-h(t))), \ldots, g_n(x_n(t-h(t)))\big)^T \tag{2.2}$$
are the activation functions; $A = \operatorname{diag}(a_1, a_2, \ldots, a_n)$, $a_i > 0$, represents the self-feedback term; $B \in \mathbb{R}^{n \times m}$ is the control input matrix; $W_0$ and $W_1$ denote the connection weights and the delayed connection weights, respectively; the time-varying uncertain matrices $\Delta A(t)$, $\Delta W_0(t)$, and $\Delta W_1(t)$ are defined by
$$\Delta A(t) = E_a F_a(t) H_a, \quad \Delta W_0(t) = E_{w0} F_{w0}(t) H_{w0}, \quad \Delta W_1(t) = E_{w1} F_{w1}(t) H_{w1}, \tag{2.3}$$
where $E_a, E_{w0}, E_{w1}, H_a, H_{w0}$, and $H_{w1}$ are known constant real matrices with appropriate dimensions, and $F_a(t), F_{w0}(t)$, and $F_{w1}(t)$ are unknown uncertain matrices satisfying
$$F_a^T(t)F_a(t) \le I, \quad F_{w0}^T(t)F_{w0}(t) \le I, \quad F_{w1}^T(t)F_{w1}(t) \le I, \quad t \in \mathbb{R}^+. \tag{2.4}$$
The time-varying delay function $h(t)$ satisfies the condition
$$0 \le h_0 \le h(t) \le h_1. \tag{2.5}$$
The initial function $\phi(t) \in C^1([-h_1, 0], \mathbb{R}^n)$, with the norm
$$\|\phi\| = \sup_{t \in [-h_1, 0]} \sqrt{\|\phi(t)\|^2 + \|\dot{\phi}(t)\|^2}. \tag{2.6}$$
In this paper we consider various activation functions and assume that the activation functions $f(\cdot), g(\cdot)$ are Lipschitzian with Lipschitz constants $f_i, e_i > 0$:
$$|f_i(\xi_1) - f_i(\xi_2)| \le f_i|\xi_1 - \xi_2|, \quad i = 1, 2, \ldots, n, \ \forall \xi_1, \xi_2 \in \mathbb{R},$$
$$|g_i(\xi_1) - g_i(\xi_2)| \le e_i|\xi_1 - \xi_2|, \quad i = 1, 2, \ldots, n, \ \forall \xi_1, \xi_2 \in \mathbb{R}. \tag{2.7}$$
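For instance (an illustration of ours, not from the paper), the common activation $\tanh$ satisfies condition (2.7) with Lipschitz constant $1$, since $|\tanh'| \le 1$; this can be spot-checked numerically:

```python
import numpy as np

# Illustration (not from the paper): tanh is globally Lipschitz with
# constant 1, so it satisfies the activation condition with f_i = 1.
rng = np.random.default_rng(0)
xi1 = rng.uniform(-5.0, 5.0, 10_000)
xi2 = rng.uniform(-5.0, 5.0, 10_000)
gap = np.abs(np.tanh(xi1) - np.tanh(xi2))
assert np.all(gap <= np.abs(xi1 - xi2) + 1e-12)
```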
The performance index associated with the system (2.1) is the following function:
$$J = \int_0^\infty f^0(t, x(t), x(t-h(t)), u(t))\,dt, \tag{2.8}$$
where $f^0(t, x(t), x(t-h(t)), u(t)) : \mathbb{R}^+ \times \mathbb{R}^n \times \mathbb{R}^n \times \mathbb{R}^m \to \mathbb{R}^+$ is a nonlinear cost function that satisfies
$$\exists\, Q_1, Q_2, R : \quad f^0(t, x, y, u) \le \langle Q_1 x, x\rangle + \langle Q_2 y, y\rangle + \langle Ru, u\rangle, \tag{2.9}$$
for all $(t, x, y, u) \in \mathbb{R}^+ \times \mathbb{R}^n \times \mathbb{R}^n \times \mathbb{R}^m$, where $Q_1, Q_2 \in \mathbb{R}^{n \times n}$ and $R \in \mathbb{R}^{m \times m}$ are given symmetric positive definite matrices. The objective of this paper is to design a memoryless state feedback controller $u(t) = Kx(t)$ for system (2.1) and the cost function (2.8) such that the resulting closed-loop system
$$\dot{x}(t) = -\big[(A + E_aF_a(t)H_a) - BK\big]x(t) + (W_0 + E_{w0}F_{w0}(t)H_{w0})f(x(t)) + (W_1 + E_{w1}F_{w1}(t)H_{w1})g(x(t-h(t))) \tag{2.10}$$
is exponentially stable and the closed-loop value of the cost function (2.8) is minimized.
Definition 2.1. Given $\alpha > 0$, the zero solution of the closed-loop system (2.10) is $\alpha$-exponentially stabilizable if there exists a positive number $N > 0$ such that every solution $x(t, \phi)$ satisfies the following condition:
$$\|x(t, \phi)\| \le Ne^{-\alpha t}\|\phi\|, \quad \forall t \ge 0. \tag{2.11}$$
Definition 2.2. Consider the control system (2.1). If there exist a memoryless state feedback control law $u^*(t) = Kx(t)$ and a positive number $J^*$ such that the zero solution of the closed-loop system (2.10) is exponentially stable and the cost function (2.8) satisfies $J \le J^*$, then the value $J^*$ is a guaranteed cost and $u^*(t)$ is a guaranteed cost control law of the system and its corresponding cost function.
We introduce the following well-known technical propositions, which will be used in the proof of our results.
Proposition 2.3 (Schur complement lemma [27]). Given constant matrices $X, Y$, and $Z$ with appropriate dimensions satisfying $X = X^T$ and $Y = Y^T > 0$, then $X + Z^TY^{-1}Z < 0$ if and only if
$$\begin{pmatrix} X & Z^T \\ Z & -Y \end{pmatrix} < 0. \tag{2.12}$$
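A quick numerical illustration of Proposition 2.3 (ours, with arbitrarily chosen matrices): both negative-definiteness tests must agree.

```python
import numpy as np

# Numerical illustration (not from the paper) of the Schur complement:
# X + Z^T Y^{-1} Z < 0  iff  [[X, Z^T], [Z, -Y]] < 0, for Y > 0.
X = np.array([[-3.0, 0.5], [0.5, -2.0]])    # X = X^T
Y = np.array([[2.0, 0.3], [0.3, 1.0]])      # Y = Y^T > 0
Z = np.array([[0.4, -0.2], [0.1, 0.3]])

lhs = X + Z.T @ np.linalg.inv(Y) @ Z
block = np.block([[X, Z.T], [Z, -Y]])

neg_def = lambda M: np.all(np.linalg.eigvalsh(M) < 0)
assert neg_def(lhs) == neg_def(block)       # equivalence of the two tests
```

Here the particular numbers make both sides negative definite; choosing a less negative $X$ flips both verdicts together, never one alone.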
Proposition 2.4 (integral matrix inequality [28]). For any symmetric positive definite matrix $M > 0$, scalar $\gamma > 0$, and vector function $\omega : [0, \gamma] \to \mathbb{R}^n$ such that the integrations concerned are well defined, the following inequality holds:
$$\left(\int_0^\gamma \omega(s)\,ds\right)^T M \left(\int_0^\gamma \omega(s)\,ds\right) \le \gamma \int_0^\gamma \omega^T(s)M\omega(s)\,ds. \tag{2.13}$$
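Proposition 2.4 can likewise be spot-checked with a discretized integral (an illustration of ours; the choices of $\omega$ and $M$ are arbitrary):

```python
import numpy as np

# Spot-check (not from the paper) of the integral matrix inequality
# with a Riemann-sum discretization:
#   (int w)^T M (int w)  <=  gamma * int w^T M w,   for M > 0.
M = np.array([[2.0, 0.4], [0.4, 1.0]])       # symmetric positive definite
gamma = 1.5
s = np.linspace(0.0, gamma, 2001)
ds = s[1] - s[0]
omega = np.stack([np.sin(3 * s), np.cos(2 * s)])   # w : [0, gamma] -> R^2

I = (omega * ds).sum(axis=1)                 # approximates int_0^gamma w(s) ds
lhs = I @ M @ I
rhs = gamma * (ds * np.einsum('it,ij,jt->t', omega, M, omega)).sum()
assert lhs <= rhs + 1e-9
```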
3. Design of Guaranteed Cost Controller
In this section, we give a design of a memoryless guaranteed cost feedback control for the uncertain neural networks (2.1). Let us set
$$\begin{aligned}
W_{11} ={}& -PA^T - AP - 2\alpha P + 0.25BRB^T + \sum_{i=0}^{1} G_i + 2\epsilon_1 E_a^TE_a + 6\epsilon_1^{-1}PH_a^TH_aP \\
& + 4\epsilon_2^{-1}PFH_{w0}^TH_{w0}FP - e^{-2\alpha h_0}H_0, \\
W_{12} ={}& P + AP - 0.5BB^T, \\
W_{13} ={}& e^{-2\alpha h_0}H_0 + 0.5BB^T + AP, \\
W_{14} ={}& e^{-2\alpha h_1}H_1 + 0.5BB^T + AP, \\
W_{15} ={}& P + 0.5BB^T + AP, \\
W_{22} ={}& \sum_{i=0}^{1} W_iD_iW_i^T + \sum_{i=0}^{1} h_i^2H_i + (h_1-h_0)^2U - 2P - BB^T + \epsilon_1 E_a^TE_a + \epsilon_2 E_{w0}^TE_{w0} + \epsilon_3 E_{w1}^TE_{w1}, \\
W_{23} ={}& P, \qquad W_{24} = P, \qquad W_{25} = P, \\
W_{33} ={}& -e^{-2\alpha h_0}G_0 - e^{-2\alpha h_0}H_0 - e^{-2\alpha h_1}U + \sum_{i=0}^{1} W_iD_iW_i^T + \epsilon_1 E_a^TE_a + \epsilon_2 E_{w0}^TE_{w0} + \epsilon_3 E_{w1}^TE_{w1}, \\
W_{34} ={}& 0, \qquad W_{35} = e^{-2\alpha h_1}U, \qquad W_{45} = e^{-2\alpha h_1}U, \\
W_{44} ={}& \sum_{i=0}^{1} W_iD_iW_i^T - e^{-2\alpha h_1}U - e^{-2\alpha h_1}G_1 - e^{-2\alpha h_1}H_1 + \epsilon_1 E_a^TE_a + \epsilon_2 E_{w0}^TE_{w0} + \epsilon_3 E_{w1}^TE_{w1}, \\
W_{55} ={}& -e^{-2\alpha h_1}U + W_0D_0W_0^T + 4\epsilon_3^{-1}PEH_{w1}^TH_{w1}EP + \epsilon_1 E_a^TE_a + \epsilon_2 E_{w0}^TE_{w0} + \epsilon_3 E_{w1}^TE_{w1}, \\
F ={}& \operatorname{diag}\{f_i,\ i = 1, \ldots, n\}, \qquad E = \operatorname{diag}\{e_i,\ i = 1, \ldots, n\}, \\
\lambda_1 ={}& \lambda_{\min}\big(P^{-1}\big), \\
\lambda_2 ={}& \lambda_{\max}\big(P^{-1}\big) + h_0\,\lambda_{\max}\Big(\sum_{i=0}^{1}P^{-1}G_iP^{-1}\Big) + h_1^2\,\lambda_{\max}\Big(\sum_{i=0}^{1}P^{-1}H_iP^{-1}\Big) + (h_1-h_0)\,\lambda_{\max}\big(P^{-1}UP^{-1}\big).
\end{aligned} \tag{3.1}$$
Theorem 3.1. Consider control system (2.1) and the cost function (2.8). If there exist symmetric positive definite matrices $P, U, G_0, G_1, H_0$, and $H_1$, diagonal positive definite matrices $D_i$, $i = 0, 1$, and constants $\epsilon_i > 0$, $i = 1, 2, 3$, satisfying the following LMIs:
$$\mathcal{E} = \begin{bmatrix} W_{11} & W_{12} & W_{13} & W_{14} & W_{15} \\ * & W_{22} & W_{23} & W_{24} & W_{25} \\ * & * & W_{33} & W_{34} & W_{35} \\ * & * & * & W_{44} & W_{45} \\ * & * & * & * & W_{55} \end{bmatrix} < 0, \tag{3.2}$$
$$S_1 = \begin{bmatrix} -PA - A^TP - \displaystyle\sum_{i=0}^{1} e^{-2\alpha h_i}H_i & 2PF & PQ_1 \\ * & -D_0 & 0 \\ * & * & -Q_1^{-1} \end{bmatrix} < 0, \tag{3.3}$$
$$S_2 = \begin{bmatrix} W_1D_1W_1^T - e^{-2\alpha h_1}U & 2PE & PQ_2 \\ * & -D_1 & 0 \\ * & * & -Q_2^{-1} \end{bmatrix} < 0, \tag{3.4}$$
then
$$u(t) = -\tfrac{1}{2}B^TP^{-1}x(t), \quad t \ge 0, \tag{3.5}$$
is a guaranteed cost control law and the guaranteed cost value is given by
$$J^* = \lambda_2\|\phi\|^2. \tag{3.6}$$
Moreover, the solution $x(t, \phi)$ of the system satisfies
$$\|x(t, \phi)\| \le \sqrt{\frac{\lambda_2}{\lambda_1}}\,e^{-\alpha t}\|\phi\|, \quad \forall t \ge 0. \tag{3.7}$$
Proof. Let $Y = P^{-1}$ and $y(t) = Yx(t)$. Using the feedback control (3.5), we consider the following Lyapunov-Krasovskii functional:
$$V(t, x_t) = \sum_{i=1}^{6} V_i(t, x_t),$$
$$\begin{aligned}
V_1 &= x^T(t)Yx(t), \\
V_2 &= \int_{t-h_0}^{t} e^{2\alpha(s-t)}x^T(s)YG_0Yx(s)\,ds, \\
V_3 &= \int_{t-h_1}^{t} e^{2\alpha(s-t)}x^T(s)YG_1Yx(s)\,ds, \\
V_4 &= h_0\int_{-h_0}^{0}\int_{t+s}^{t} e^{2\alpha(\tau-t)}\dot{x}^T(\tau)YH_0Y\dot{x}(\tau)\,d\tau\,ds, \\
V_5 &= h_1\int_{-h_1}^{0}\int_{t+s}^{t} e^{2\alpha(\tau-t)}\dot{x}^T(\tau)YH_1Y\dot{x}(\tau)\,d\tau\,ds, \\
V_6 &= (h_1-h_0)\int_{t-h_1}^{t-h_0}\int_{t+s}^{t} e^{2\alpha(\tau-t)}\dot{x}^T(\tau)YUY\dot{x}(\tau)\,d\tau\,ds.
\end{aligned} \tag{3.8}$$
It is easy to check that
$$\lambda_1\|x(t)\|^2 \le V(t, x_t) \le \lambda_2\|x_t\|^2, \quad \forall t \ge 0. \tag{3.9}$$
Taking the derivatives of $V_i$, we have
$$\begin{aligned}
\dot{V}_1 &= 2x^T(t)Y\dot{x}(t) \\
&= y^T(t)\big[{-P}(A + E_aF_a(t)H_a)^T - (A + E_aF_a(t)H_a)P\big]y(t) - y^T(t)BB^Ty(t) \\
&\quad + 2y^T(t)(W_0 + E_{w0}F_{w0}(t)H_{w0})f(\cdot) + 2y^T(t)(W_1 + E_{w1}F_{w1}(t)H_{w1})g(\cdot), \\
\dot{V}_2 &= y^T(t)G_0y(t) - e^{-2\alpha h_0}y^T(t-h_0)G_0y(t-h_0) - 2\alpha V_2, \\
\dot{V}_3 &= y^T(t)G_1y(t) - e^{-2\alpha h_1}y^T(t-h_1)G_1y(t-h_1) - 2\alpha V_3, \\
\dot{V}_4 &\le h_0^2\dot{y}^T(t)H_0\dot{y}(t) - h_0e^{-2\alpha h_0}\int_{t-h_0}^{t}\dot{y}^T(s)H_0\dot{y}(s)\,ds - 2\alpha V_4, \\
\dot{V}_5 &\le h_1^2\dot{y}^T(t)H_1\dot{y}(t) - h_1e^{-2\alpha h_1}\int_{t-h_1}^{t}\dot{y}^T(s)H_1\dot{y}(s)\,ds - 2\alpha V_5, \\
\dot{V}_6 &\le (h_1-h_0)^2\dot{y}^T(t)U\dot{y}(t) - (h_1-h_0)e^{-2\alpha h_1}\int_{t-h_1}^{t-h_0}\dot{y}^T(s)U\dot{y}(s)\,ds - 2\alpha V_6.
\end{aligned} \tag{3.10}$$
Applying Proposition 2.4 and the Newton-Leibniz formula
$$\int_s^t \dot{y}(\tau)\,d\tau = y(t) - y(s), \tag{3.11}$$
we have, for $i = 0, 1$,
$$\begin{aligned}
-h_i\int_{t-h_i}^{t}\dot{y}^T(s)H_i\dot{y}(s)\,ds &\le -\left(\int_{t-h_i}^{t}\dot{y}(s)\,ds\right)^T H_i \left(\int_{t-h_i}^{t}\dot{y}(s)\,ds\right) \\
&= -\big[y(t) - y(t-h_i)\big]^TH_i\big[y(t) - y(t-h_i)\big] \\
&= -y^T(t)H_iy(t) + 2y^T(t)H_iy(t-h_i) - y^T(t-h_i)H_iy(t-h_i).
\end{aligned} \tag{3.12}$$
Note that
$$\int_{t-h_1}^{t-h_0}\dot{y}^T(s)U\dot{y}(s)\,ds = \int_{t-h_1}^{t-h(t)}\dot{y}^T(s)U\dot{y}(s)\,ds + \int_{t-h(t)}^{t-h_0}\dot{y}^T(s)U\dot{y}(s)\,ds. \tag{3.13}$$
Applying Proposition 2.4 gives
$$\begin{aligned}
\big(h_1 - h(t)\big)\int_{t-h_1}^{t-h(t)}\dot{y}^T(s)U\dot{y}(s)\,ds &\ge \left(\int_{t-h_1}^{t-h(t)}\dot{y}(s)\,ds\right)^T U \left(\int_{t-h_1}^{t-h(t)}\dot{y}(s)\,ds\right) \\
&= \big[y(t-h(t)) - y(t-h_1)\big]^TU\big[y(t-h(t)) - y(t-h_1)\big].
\end{aligned} \tag{3.14}$$
Since $h_1 - h(t) \le h_1 - h_0$, we have
$$(h_1-h_0)\int_{t-h_1}^{t-h(t)}\dot{y}^T(s)U\dot{y}(s)\,ds \ge \big[y(t-h(t)) - y(t-h_1)\big]^TU\big[y(t-h(t)) - y(t-h_1)\big], \tag{3.15}$$
and then
$$-(h_1-h_0)\int_{t-h_1}^{t-h(t)}\dot{y}^T(s)U\dot{y}(s)\,ds \le -\big[y(t-h(t)) - y(t-h_1)\big]^TU\big[y(t-h(t)) - y(t-h_1)\big]. \tag{3.16}$$
Similarly, we have
$$-(h_1-h_0)\int_{t-h(t)}^{t-h_0}\dot{y}^T(s)U\dot{y}(s)\,ds \le -\big[y(t-h_0) - y(t-h(t))\big]^TU\big[y(t-h_0) - y(t-h(t))\big]. \tag{3.17}$$
Then, we have
$$\begin{aligned}
\dot{V}(\cdot) + 2\alpha V(\cdot) \le{}& y^T(t)\big[{-P}(A + E_aF_a(t)H_a)^T - (A + E_aF_a(t)H_a)P\big]y(t) - y^T(t)BB^Ty(t) \\
& + 2y^T(t)(W_0 + E_{w0}F_{w0}(t)H_{w0})f(\cdot) + 2y^T(t)(W_1 + E_{w1}F_{w1}(t)H_{w1})g(\cdot) \\
& + y^T(t)\Big[\sum_{i=0}^{1}G_i\Big]y(t) + 2\alpha\langle Py(t), y(t)\rangle + \dot{y}^T(t)\Big[\sum_{i=0}^{1}h_i^2H_i\Big]\dot{y}(t) + (h_1-h_0)^2\dot{y}^T(t)U\dot{y}(t) \\
& - \sum_{i=0}^{1} e^{-2\alpha h_i}y^T(t-h_i)G_iy(t-h_i) \\
& - e^{-2\alpha h_0}\big[y(t) - y(t-h_0)\big]^TH_0\big[y(t) - y(t-h_0)\big] \\
& - e^{-2\alpha h_1}\big[y(t) - y(t-h_1)\big]^TH_1\big[y(t) - y(t-h_1)\big] \\
& - e^{-2\alpha h_1}\big[y(t-h(t)) - y(t-h_1)\big]^TU\big[y(t-h(t)) - y(t-h_1)\big] \\
& - e^{-2\alpha h_1}\big[y(t-h_0) - y(t-h(t))\big]^TU\big[y(t-h_0) - y(t-h(t))\big].
\end{aligned} \tag{3.18}$$
From the closed-loop system (2.10) with the feedback control (3.5), we have
$$P\dot{y}(t) + (A + E_aF_a(t)H_a)Py(t) - (W_0 + E_{w0}F_{w0}(t)H_{w0})f(\cdot) - (W_1 + E_{w1}F_{w1}(t)H_{w1})g(\cdot) + 0.5BB^Ty(t) = 0, \tag{3.19}$$
and multiplying both sides by $\big(2y(t), -2\dot{y}(t), 2y(t-h_0), 2y(t-h_1), 2y(t-h(t))\big)^T$, we have
$$\begin{aligned}
&2y^T(t)P\dot{y}(t) + 2y^T(t)(A + E_aF_a(t)H_a)Py(t) - 2y^T(t)(W_0 + E_{w0}F_{w0}(t)H_{w0})f(\cdot) \\
&\qquad - 2y^T(t)(W_1 + E_{w1}F_{w1}(t)H_{w1})g(\cdot) + y^T(t)BB^Ty(t) = 0, \\
&{-2}\dot{y}^T(t)P\dot{y}(t) - 2\dot{y}^T(t)(A + E_aF_a(t)H_a)Py(t) + 2\dot{y}^T(t)(W_0 + E_{w0}F_{w0}(t)H_{w0})f(\cdot) \\
&\qquad + 2\dot{y}^T(t)(W_1 + E_{w1}F_{w1}(t)H_{w1})g(\cdot) - \dot{y}^T(t)BB^Ty(t) = 0, \\
&2y^T(t-h_0)P\dot{y}(t) + 2y^T(t-h_0)(A + E_aF_a(t)H_a)Py(t) - 2y^T(t-h_0)(W_0 + E_{w0}F_{w0}(t)H_{w0})f(\cdot) \\
&\qquad - 2y^T(t-h_0)(W_1 + E_{w1}F_{w1}(t)H_{w1})g(\cdot) + y^T(t-h_0)BB^Ty(t) = 0, \\
&2y^T(t-h_1)P\dot{y}(t) + 2y^T(t-h_1)(A + E_aF_a(t)H_a)Py(t) - 2y^T(t-h_1)(W_0 + E_{w0}F_{w0}(t)H_{w0})f(\cdot) \\
&\qquad - 2y^T(t-h_1)(W_1 + E_{w1}F_{w1}(t)H_{w1})g(\cdot) + y^T(t-h_1)BB^Ty(t) = 0, \\
&2y^T(t-h(t))P\dot{y}(t) + 2y^T(t-h(t))(A + E_aF_a(t)H_a)Py(t) - 2y^T(t-h(t))(W_0 + E_{w0}F_{w0}(t)H_{w0})f(\cdot) \\
&\qquad - 2y^T(t-h(t))(W_1 + E_{w1}F_{w1}(t)H_{w1})g(\cdot) + y^T(t-h(t))BB^Ty(t) = 0.
\end{aligned} \tag{3.20}$$
Adding all the zero terms of (3.20) and $f^0(t, x(t), x(t-h(t)), u(t)) - f^0(t, x(t), x(t-h(t)), u(t)) = 0$ into (3.18), and using condition (2.7) for the following estimations:
$$\begin{aligned}
f^0(t, x(t), x(t-h(t)), u(t)) &\le \langle Q_1x(t), x(t)\rangle + \langle Q_2x(t-h(t)), x(t-h(t))\rangle + \langle Ru(t), u(t)\rangle \\
&= \langle PQ_1Py(t), y(t)\rangle + \langle PQ_2Py(t-h(t)), y(t-h(t))\rangle + 0.25\langle BRB^Ty(t), y(t)\rangle, \\
2\langle W_0f(x), y\rangle &\le \langle W_0D_0W_0^Ty, y\rangle + \langle D_0^{-1}f(x), f(x)\rangle, \\
2\langle W_1g(z), y\rangle &\le \langle W_1D_1W_1^Ty, y\rangle + \langle D_1^{-1}g(z), g(z)\rangle, \\
\langle D_0^{-1}f(x), f(x)\rangle &\le \langle FD_0^{-1}Fx, x\rangle, \\
\langle D_1^{-1}g(z), g(z)\rangle &\le \langle ED_1^{-1}Ez, z\rangle, \\
2\langle E_aF_a(t)H_aPy, y\rangle &\le \epsilon_1\langle E_a^TE_ay, y\rangle + \epsilon_1^{-1}\langle PH_a^TH_aPy, y\rangle, \quad \epsilon_1 > 0, \\
2\langle E_{w0}F_{w0}(t)H_{w0}f(x), y\rangle &\le \epsilon_2\langle E_{w0}^TE_{w0}y, y\rangle + \epsilon_2^{-1}\langle PD_0H_{w0}^TH_{w0}D_0Py, y\rangle, \quad \epsilon_2 > 0, \\
2\langle E_{w1}F_{w1}(t)H_{w1}g(z), y\rangle &\le \epsilon_3\langle E_{w1}^TE_{w1}y, y\rangle + \epsilon_3^{-1}\langle PD_1H_{w1}^TH_{w1}D_1Pz, z\rangle, \quad \epsilon_3 > 0,
\end{aligned} \tag{3.21}$$
we obtain
$$\dot{V}(\cdot) + 2\alpha V(\cdot) \le \zeta^T(t)\mathcal{E}\zeta(t) + y^T(t)S_1y(t) + y^T(t-h(t))S_2y(t-h(t)) - f^0(t, x(t), x(t-h(t)), u(t)), \tag{3.22}$$
where $\zeta(t) = \big(y(t), \dot{y}(t), y(t-h_0), y(t-h_1), y(t-h(t))\big)$ and
$$\mathcal{E} = \begin{bmatrix} W_{11} & W_{12} & W_{13} & W_{14} & W_{15} \\ * & W_{22} & W_{23} & W_{24} & W_{25} \\ * & * & W_{33} & W_{34} & W_{35} \\ * & * & * & W_{44} & W_{45} \\ * & * & * & * & W_{55} \end{bmatrix},$$
$$S_1 = -PA - A^TP - \sum_{i=0}^{1} e^{-2\alpha h_i}H_i + 4PFD_0^{-1}FP + PQ_1P,$$
$$S_2 = W_1D_1W_1^T - e^{-2\alpha h_1}U + 4PED_1^{-1}EP + PQ_2P. \tag{3.23}$$
Note that by the Schur complement lemma (Proposition 2.3), the conditions $S_1 < 0$ and $S_2 < 0$ are equivalent to conditions (3.3) and (3.4), respectively. Therefore, by conditions (3.2), (3.3), and (3.4), we obtain from (3.22) that
$$\dot{V}(t, x_t) \le -2\alpha V(t, x_t), \quad \forall t \ge 0. \tag{3.24}$$
Integrating both sides of (3.24) from $0$ to $t$, we obtain
$$V(t, x_t) \le V(\phi)e^{-2\alpha t}, \quad \forall t \ge 0. \tag{3.25}$$
Furthermore, taking condition (3.9) into account, we have
$$\lambda_1\|x(t, \phi)\|^2 \le V(x_t) \le V(\phi)e^{-2\alpha t} \le \lambda_2e^{-2\alpha t}\|\phi\|^2, \tag{3.26}$$
and then
$$\|x(t, \phi)\| \le \sqrt{\frac{\lambda_2}{\lambda_1}}\,e^{-\alpha t}\|\phi\|, \quad t \ge 0, \tag{3.27}$$
which establishes the exponential stability of the closed-loop system (2.10). To prove the guaranteed cost level of the cost function (2.8), we derive from (3.22) and (3.2)-(3.4) that
$$\dot{V}(t, z_t) \le -f^0(t, x(t), x(t-h(t)), u(t)), \quad t \ge 0. \tag{3.28}$$
Integrating both sides of (3.28) from $0$ to $t$ leads to
$$\int_0^t f^0(s, x(s), x(s-h(s)), u(s))\,ds \le V(0, z_0) - V(t, z_t) \le V(0, z_0), \tag{3.29}$$
due to $V(t, z_t) \ge 0$. Hence, letting $t \to \infty$, we have
$$J = \int_0^\infty f^0(t, x(t), x(t-h(t)), u(t))\,dt \le V(0, z_0) \le \lambda_2\|\phi\|^2 = J^*. \tag{3.30}$$
This completes the proof of the theorem.
Remark 3.2. Note that $h(t)$ is a non-differentiable, interval time-varying delay; therefore, the stability criteria proposed in [5–8, 12, 15–26] are not applicable to this system.
Example 3.3. Consider the uncertain neural networks with interval time-varying delays (2.1), where
$$A = \begin{bmatrix} 0.1 & 0 \\ 0 & 0.3 \end{bmatrix}, \quad W_0 = \begin{bmatrix} 0.1 & 0.1 \\ 0.2 & 0.3 \end{bmatrix}, \quad W_1 = \begin{bmatrix} 0.2 & 0.2 \\ 0.1 & 0.4 \end{bmatrix}, \quad B = \begin{bmatrix} 0.1 \\ 0.2 \end{bmatrix},$$
$$E = \begin{bmatrix} 0.3 & 0 \\ 0 & 0.4 \end{bmatrix}, \quad F = \begin{bmatrix} 0.2 & 0 \\ 0 & 0.3 \end{bmatrix}, \quad Q_1 = \begin{bmatrix} 0.2 & 0.1 \\ 0.1 & 0.4 \end{bmatrix}, \quad Q_2 = \begin{bmatrix} 0.3 & 0.2 \\ 0.2 & 0.5 \end{bmatrix},$$
$$R = \begin{bmatrix} 0.1 & 0.1 \\ 0.1 & 0.3 \end{bmatrix}, \quad E_a = \begin{bmatrix} 0.1 & 0.1 \\ 0.1 & 0.3 \end{bmatrix}, \quad E_{w0} = \begin{bmatrix} 0.1 & 0.1 \\ 0.1 & 0.2 \end{bmatrix}, \quad E_{w1} = \begin{bmatrix} 0.2 & 0.1 \\ 0.1 & 0.3 \end{bmatrix},$$
$$H_a = \begin{bmatrix} 0.3 & 0.2 \\ 0.2 & 0.2 \end{bmatrix}, \quad H_{w0} = \begin{bmatrix} 0.2 & 0.1 \\ 0.1 & 0.2 \end{bmatrix}, \quad H_{w1} = \begin{bmatrix} 0.3 & 0.1 \\ 0.1 & 0.3 \end{bmatrix},$$
$$h(t) = 0.1 + 1.3\sin^2 t \ \text{ if } t \in I = \bigcup_{k \ge 0}\big[2k\pi, (2k+1)\pi\big], \qquad h(t) = 0 \ \text{ if } t \in \mathbb{R}^+ \setminus I. \tag{3.31}$$
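As a quick sanity check of ours (not part of the paper's computation), the delay law above can be verified numerically to respect the interval bounds $h_0 = 0.1$ and $h_1 = 1.4$ used in this example on the intervals where it is active:

```python
import numpy as np

# Sanity check (ours, not from the paper): on I = union of [2k*pi, (2k+1)*pi]
# the delay h(t) = 0.1 + 1.3 sin^2(t) stays within [h0, h1] = [0.1, 1.4].
t = np.linspace(0.0, np.pi, 100_001)    # one interval of I suffices by periodicity
h = 0.1 + 1.3 * np.sin(t) ** 2
assert h.min() >= 0.1 - 1e-12           # lower bound h0
assert np.all(h <= 1.4 + 1e-12)         # upper bound h1
```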
Note that $h(t)$ is non-differentiable; therefore, the stability criteria proposed in [4–8, 12, 15–26] are not applicable to this system. Given $\alpha = 0.1$, $h_0 = 0.1$, and $h_1 = 1.4$, by using the Matlab LMI toolbox, we can solve for $P, U, G_0, G_1, H_0, H_1, D_0$, and $D_1$ which satisfy conditions (3.2)-(3.4) in Theorem 3.1. A set of solutions is $\epsilon_1 = 0.0017$, $\epsilon_2 = 0.0013$, $\epsilon_3 = 0.0012$,
$$P = \begin{bmatrix} 1.1578 & -0.1128 \\ -0.1128 & 1.0597 \end{bmatrix}, \quad U = \begin{bmatrix} 2.3269 & -0.3820 \\ -0.3820 & 2.6681 \end{bmatrix},$$
$$G_0 = \begin{bmatrix} 1.4596 & 0.1397 \\ 0.1397 & 1.2369 \end{bmatrix}, \quad G_1 = \begin{bmatrix} 2.2694 & 0.8114 \\ 0.8114 & 1.0125 \end{bmatrix},$$
$$H_0 = \begin{bmatrix} 0.6455 & 0.0452 \\ 0.0452 & 0.5104 \end{bmatrix}, \quad H_1 = \begin{bmatrix} 0.3005 & 0.0233 \\ 0.0233 & 0.2306 \end{bmatrix},$$
$$D_0 = \begin{bmatrix} 0.0011 & 0 \\ 0 & 0.0011 \end{bmatrix}, \quad D_1 = \begin{bmatrix} 0.7809 & 0 \\ 0 & 0.7809 \end{bmatrix}. \tag{3.32}$$
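Independently of the LMI solver, the reported solution matrices can be spot-checked for symmetry and positive definiteness, which Theorem 3.1 requires; a minimal sketch of ours (not from the paper):

```python
import numpy as np

# Cross-check (ours): every solution matrix reported in (3.32) must be
# symmetric positive definite for Theorem 3.1 to apply.
solutions = {
    "P":  np.array([[1.1578, -0.1128], [-0.1128, 1.0597]]),
    "U":  np.array([[2.3269, -0.3820], [-0.3820, 2.6681]]),
    "G0": np.array([[1.4596, 0.1397], [0.1397, 1.2369]]),
    "G1": np.array([[2.2694, 0.8114], [0.8114, 1.0125]]),
    "H0": np.array([[0.6455, 0.0452], [0.0452, 0.5104]]),
    "H1": np.array([[0.3005, 0.0233], [0.0233, 0.2306]]),
    "D0": np.diag([0.0011, 0.0011]),
    "D1": np.diag([0.7809, 0.7809]),
}
for name, M in solutions.items():
    assert np.allclose(M, M.T), name                    # symmetry
    assert np.all(np.linalg.eigvalsh(M) > 0), name      # positive definiteness
```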
Then
$$u(t) = -0.2292x_1(t) - 0.1816x_2(t), \quad t \ge 0, \tag{3.33}$$
Figure 1: The simulation of the solutions $x_1(t)$ and $x_2(t)$ with the initial condition $\phi(t) = [10\ 5]^T$, $t \in [-0.4, 0]$.
is a guaranteed cost control law and the guaranteed cost value is given by
$$J^* = 5.4631\|\phi\|^2. \tag{3.34}$$
Moreover, the solution $x(t, \phi)$ of the system satisfies
$$\|x(t, \phi)\| \le 3.6984e^{-0.1t}\|\phi\|, \quad \forall t \ge 0. \tag{3.35}$$
The exponential convergence dynamics of the network (2.1) are shown in Figure 1.
Example 3.4. Consider the uncertain neural networks with interval time-varying delays (2.1), where
$$A = \begin{bmatrix} 1 & 0 \\ 0 & 2 \end{bmatrix}, \quad W_0 = \begin{bmatrix} 1 & 1 \\ 2 & 3 \end{bmatrix}, \quad W_1 = \begin{bmatrix} 2 & 2 \\ 1 & 4 \end{bmatrix}, \quad B = \begin{bmatrix} 1 \\ 2 \end{bmatrix},$$
$$Q_1 = \begin{bmatrix} 2 & 1 \\ 1 & 4 \end{bmatrix}, \quad Q_2 = \begin{bmatrix} 3 & 2 \\ 2 & 5 \end{bmatrix}, \quad E = \begin{bmatrix} 3 & 0 \\ 0 & 4 \end{bmatrix}, \quad F = \begin{bmatrix} 2 & 0 \\ 0 & 3 \end{bmatrix},$$
$$R = \begin{bmatrix} 1 & 1 \\ 1 & 3 \end{bmatrix}, \quad E_a = \begin{bmatrix} 1 & 1 \\ 1 & 3 \end{bmatrix}, \quad E_{w0} = \begin{bmatrix} 1 & 1 \\ 1 & 2 \end{bmatrix}, \quad E_{w1} = \begin{bmatrix} 2 & 1 \\ 1 & 3 \end{bmatrix},$$
$$H_a = \begin{bmatrix} 3 & 2 \\ 2 & 2 \end{bmatrix}, \quad H_{w0} = \begin{bmatrix} 2 & 1 \\ 1 & 2 \end{bmatrix}, \quad H_{w1} = \begin{bmatrix} 3 & 1 \\ 1 & 3 \end{bmatrix},$$
$$h(t) = 0.1 + 0.7\sin^2 t \ \text{ if } t \in I = \bigcup_{k \ge 0}\big[2k\pi, (2k+1)\pi\big], \qquad h(t) = 0 \ \text{ if } t \in \mathbb{R}^+ \setminus I. \tag{3.36}$$
Note that $h(t)$ is non-differentiable; therefore, the stability criteria proposed in [5–8, 12, 15–26] are not applicable to this system. Given $\alpha = 0.3$, $h_0 = 0.1$, and $h_1 = 0.8$, by using the Matlab LMI toolbox, we can solve for $P, U, G_0, G_1, H_0, H_1, D_0$, and $D_1$ which satisfy conditions (3.2)-(3.4) in Theorem 3.1. A set of solutions is $\epsilon_1 = 0.9$, $\epsilon_2 = 0.8$, $\epsilon_3 = 0.7$,
$$P = \begin{bmatrix} 0.7832 & -0.0213 \\ -0.0213 & 0.0011 \end{bmatrix}, \quad U = \begin{bmatrix} 0.1297 & -0.0019 \\ -0.0019 & 0.0197 \end{bmatrix},$$
$$G_0 = \begin{bmatrix} 0.1795 & 0.0137 \\ 0.0137 & 0.2211 \end{bmatrix}, \quad G_1 = \begin{bmatrix} 1.2197 & 0.9648 \\ 0.9648 & 0.7391 \end{bmatrix},$$
$$H_0 = \begin{bmatrix} 0.8931 & 0.1183 \\ 0.1183 & 0.7197 \end{bmatrix}, \quad H_1 = \begin{bmatrix} 0.6851 & 0.1297 \\ 0.1297 & 0.5726 \end{bmatrix},$$
$$D_0 = \begin{bmatrix} 0.1397 & 0 \\ 0 & 0.2278 \end{bmatrix}, \quad D_1 = \begin{bmatrix} 0.6812 & 0 \\ 0 & 0.6813 \end{bmatrix}. \tag{3.37}$$
Then
$$u(t) = -0.7314x_1(t) - 0.0196x_2(t), \quad t \ge 0, \tag{3.38}$$
is a guaranteed cost control law and the guaranteed cost value is given by
$$J^* = 24.3219\|\phi\|^2. \tag{3.39}$$
Moreover, the solution $x(t, \phi)$ of the system satisfies
$$\|x(t, \phi)\| \le 12.3690e^{-0.3t}\|\phi\|, \quad \forall t \ge 0. \tag{3.40}$$
The exponential convergence dynamics of the network (2.1) are shown in Figure 2.
4. Conclusions
In this paper, the problem of guaranteed cost control for uncertain neural networks with interval nondifferentiable time-varying delay has been studied. A nonlinear quadratic cost function is considered as a performance measure for the closed-loop system. The stabilizing controllers to be designed must satisfy some exponential stability constraints on the closed-loop poles. By constructing a set of time-varying Lyapunov-Krasovskii functionals combined with the Newton-Leibniz formula, a memoryless state feedback guaranteed cost controller design has been presented, and sufficient conditions for the existence of the guaranteed cost state feedback for the system have been derived in terms of LMIs.
Figure 2: The simulation of the solutions $x_1(t)$ and $x_2(t)$ with the initial condition $\phi(t) = [1\ {-1}]^T$, $t \in [-1, 0.8]$.
Acknowledgments
This work was supported by the Thai Research Fund Grant, the Higher Education
Commission, and Faculty of Science, Maejo University, Thailand. The second author is
supported by the Center of Excellence in Mathematics, Thailand, and Commission for Higher
Education, Thailand. The authors thank anonymous reviewers for valuable comments and
suggestions, which allowed us to improve the paper.
References
[1] J. J. Hopfield, "Neural networks and physical systems with emergent collective computational abilities," Proceedings of the National Academy of Sciences of the United States of America, vol. 79, no. 8, pp. 2554–2558, 1982.
[2] G. Kevin, An Introduction to Neural Networks, CRC Press, 1997.
[3] M. Wu, Y. He, and J.-H. She, Stability Analysis and Robust Control of Time-Delay Systems, Springer, 2010.
[4] S. Arik, "An improved global stability result for delayed cellular neural networks," IEEE Transactions on Circuits and Systems I, vol. 49, no. 8, pp. 1211–1214, 2002.
[5] K. Ratchagit, "Asymptotic stability of delay-difference system of Hopfield neural networks via matrix inequalities and application," International Journal of Neural Systems, vol. 17, pp. 425–430, 2007.
[6] Y. He, Q.-G. Wang, and M. Wu, "LMI-based stability criteria for neural networks with multiple time-varying delays," Physica D, vol. 212, no. 1-2, pp. 126–136, 2005.
[7] O. M. Kwon and J. H. Park, "Exponential stability analysis for uncertain neural networks with interval time-varying delays," Applied Mathematics and Computation, vol. 212, no. 2, pp. 530–541, 2009.
[8] V. N. Phat and H. Trinh, "Exponential stabilization of neural networks with various activation functions and mixed time-varying delays," IEEE Transactions on Neural Networks, vol. 21, pp. 1180–1185, 2010.
[9] W.-H. Chen, Z.-H. Guan, and X. Lu, "Delay-dependent output feedback guaranteed cost control for uncertain time-delay systems," Automatica, vol. 40, no. 7, pp. 1263–1268, 2004.
[10] M. N. Parlakçı, "Robust delay-dependent guaranteed cost controller design for uncertain neutral systems," Applied Mathematics and Computation, vol. 215, no. 8, pp. 2936–2949, 2009.
[11] J. H. Park and O. Kwon, "On guaranteed cost control of neutral systems by retarded integral state feedback," Applied Mathematics and Computation, vol. 165, no. 2, pp. 393–404, 2005.
[12] J. H. Park and K. Choi, "Guaranteed cost control for uncertain nonlinear neutral systems via memory state feedback," Chaos, Solitons and Fractals, vol. 24, no. 1, pp. 183–190, 2005.
[13] J. H. Park and O. M. Kwon, "Guaranteed cost control of time-delay chaotic systems," Chaos, Solitons and Fractals, vol. 27, no. 4, pp. 1011–1018, 2006.
[14] J. H. Park, "Dynamic output guaranteed cost controller for neutral systems with input delay," Chaos, Solitons and Fractals, vol. 23, no. 5, pp. 1819–1828, 2005.
[15] J. H. Park, "Delay-dependent criterion for guaranteed cost control of neutral delay systems," Journal of Optimization Theory and Applications, vol. 124, no. 2, pp. 491–502, 2005.
[16] J. H. Park, "A novel criterion for global asymptotic stability of BAM neural networks with time delays," Chaos, Solitons and Fractals, vol. 29, no. 2, pp. 446–453, 2006.
[17] J. H. Park, "On global stability criterion for neural networks with discrete and distributed delays," Chaos, Solitons and Fractals, vol. 30, no. 4, pp. 897–902, 2006.
[18] H. He, L. Yan, and J. Tu, "Guaranteed cost stabilization of time-varying delay cellular neural networks via Riccati inequality approach," Neural Processing Letters, vol. 35, pp. 151–158, 2012.
[19] J. Tu and H. He, "Guaranteed cost synchronization of chaotic cellular neural networks with time-varying delay," Neural Computation, vol. 24, no. 1, pp. 217–233, 2012.
[20] J. Tu, H. He, and P. Xiong, "Guaranteed cost synchronous control of time-varying delay cellular neural networks," Neural Computing and Applications.
[21] H. He and J. Tu, "Algebraic condition of synchronization for multiple time-delayed chaotic Hopfield neural networks," Neural Computing and Applications, vol. 19, pp. 543–548, 2010.
[22] H. He, J. Tu, and P. Xiong, "Lr-synchronization and adaptive synchronization of a class of chaotic Lurie systems under perturbations," Journal of the Franklin Institute, vol. 348, no. 9, pp. 2257–2269, 2011.
[23] E. Fridman and Y. Orlov, "Exponential stability of linear distributed parameter systems with time-varying delays," Automatica, vol. 45, no. 1, pp. 194–201, 2009.
[24] S. Xu and J. Lam, "A survey of linear matrix inequality techniques in stability analysis of delay systems," International Journal of Systems Science, vol. 39, no. 12, pp. 1095–1113, 2008.
[25] J.-S. Xie, B.-Q. Fan, Y. S. Lee, and J. Yang, "Guaranteed cost controller design of networked control systems with state delay," Acta Automatica Sinica, vol. 33, no. 2, pp. 170–174, 2007.
[26] L. Yu and F. Gao, "Optimal guaranteed cost control of discrete-time uncertain systems with both state and input delays," Journal of the Franklin Institute, vol. 338, no. 1, pp. 101–110, 2001.
[27] S. Boyd, L. El Ghaoui, E. Feron, and V. Balakrishnan, Linear Matrix Inequalities in System and Control Theory, vol. 15, SIAM, Philadelphia, Pa, USA, 1994.
[28] K. Gu, V. Kharitonov, and J. Chen, Stability of Time-Delay Systems, Birkhauser, Berlin, Germany, 2003.