LIAPUNOV’S EXPONENTS AND CELLULAR NEURAL NETWORKS
DAŇO Ivan
FEI KM TU Košice, B. Němcovej 32, 042 00 Košice, e-mail:
Ivan.Dano@tuke.sk
Abstract: In the present paper we give some relationships between Liapunov exponents and a delayed cellular neural network model described by a stochastic differential equation.
Keywords: neural networks, central Liapunov exponent, delay-differential equations.
INTRODUCTION
The stability of a nonlinear dynamical system is a difficult issue to
deal with. When we speak of stability in the context of a nonlinear
dynamical system, we usually mean stability in the sense of Liapunov.
A.M. Liapunov (see [1]) presented the fundamental concepts of the
stability theory known as the first method of Liapunov. This method is
widely used for the stability analysis of linear and nonlinear systems, both
time-invariant and time-varying. As such it is directly applicable to the
stability analysis of neural networks. The study of neurodynamics may
follow one of two routes, depending on the application of interest:
a) Deterministic neurodynamics, in which the neural network model has a deterministic behavior. In mathematical terms, it is described by a set of nonlinear delay-differential equations that define the exact evolution of the model as a function of time.
b) Statistical neurodynamics, in which the neural network model is perturbed by the presence of noise. In this case, we have to deal with stochastic nonlinear differential equations, expressing the solution in probabilistic terms. The combination of stochasticity and nonlinearity makes the subject more difficult to handle.
In this paper we consider another important class of neural networks, namely those with a recurrent structure, whose development is inspired by ideas from statistical physics.
1 DEFINITIONS OF STABILITY
In order to proceed with the study of neurodynamics, we need a mathematical model for describing the dynamics of a nonlinear system. A model most naturally suited for this purpose is the so-called state-space model. According to this model, we think in terms of a set of state variables whose values are supposed to contain sufficient information to predict the future evolution of the system. Let $x_1(t), x_2(t), \dots, x_n(t)$ denote the state variables of a nonlinear dynamical system, where continuous time $t$ is the independent variable and $n$ is the order of the system. The dynamics of a large class of nonlinear dynamical systems may then be cast in the form of a system of first-order differential equations written as follows:
$$\frac{d}{dt}\,x_j(t) = F_j\bigl(x_j(t)\bigr), \qquad j = 1, 2, \dots, n \tag{1}$$
where the function $F_j$ is, in general, a nonlinear function of its argument.
We may cast this system of equations in a compact form by using vector notation, as shown by
$$\frac{d}{dt}\,\vec{x}(t) = \vec{F}\bigl(\vec{x}(t)\bigr) \tag{2}$$
where the nonlinear function $\vec{F}$ operates on each element of the state vector
$$\vec{x}(t) = \bigl(x_1(t), x_2(t), \dots, x_n(t)\bigr)^{T}.$$
A nonlinear dynamical system for which the vector function $\vec{F}\bigl(\vec{x}(t)\bigr)$ does not depend explicitly on time $t$ is said to be autonomous; otherwise, it is nonautonomous.
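As a concrete illustration of the state-space form (2), the following short Python sketch integrates a simple autonomous system numerically; the particular right-hand side $F$ and all parameter values are hypothetical choices made only for this example.

```python
# Minimal sketch: integrating an autonomous state-space model dx/dt = F(x(t)).
# The chosen F (a damped pendulum-like system) is purely illustrative.
import numpy as np
from scipy.integrate import solve_ivp

def F(t, x):
    # F does not depend explicitly on t, so the system is autonomous.
    x1, x2 = x
    return [x2, -np.sin(x1) - 0.5 * x2]

x0 = [1.0, 0.0]                      # initial state x(t0)
sol = solve_ivp(F, (0.0, 50.0), x0)  # trajectory on [0, 50]
print(sol.y[:, -1])                  # final state, close to the equilibrium (0, 0)
```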
A nonautonomous dynamical system is defined by the state equation
$$\frac{d}{dt}\,\vec{x}(t) = \vec{F}\bigl(\vec{x}(t), t\bigr) \tag{3}$$
with the initial condition $\vec{x}(t_0) = \vec{x}_0$. For a nonautonomous system the vector field $\vec{F}(\vec{x}(t), t)$ depends on time $t$. Consider an autonomous dynamical system described by the state-space Equation (2). A constant vector $\bar{x} \in M$ is said to be an equilibrium state of the system if the following condition is satisfied:
$$\vec{F}(\bar{x}) = \vec{0}$$
where $\vec{0}$ is the null vector. Clearly, the velocity vector $\dfrac{d\vec{x}}{dt}$ vanishes at the equilibrium state $\bar{x}$, and therefore the constant function $\vec{x}(t) = \bar{x}$ is a solution of Equation (2). Furthermore, because of the uniqueness property of solutions, no other solution curve can pass through the equilibrium state $\bar{x}$. In the context of an autonomous nonlinear dynamical system with equilibrium state $\bar{x}$, the definitions of stability and convergence are as follows.
Definition 1.1. The equilibrium state $\bar{x}$ is said to be uniformly stable if for any given positive $\varepsilon$ there exists a positive $\delta$ such that the condition
$$\|\vec{x}(0) - \bar{x}\| < \delta$$
implies
$$\|\vec{x}(t) - \bar{x}\| < \varepsilon$$
for all $t > 0$.
This definition states that a trajectory of the system can be made to stay within a small neighborhood of the equilibrium state $\bar{x}$ if the initial state $\vec{x}(0)$ is close to $\bar{x}$.
Definition 1.2. The equilibrium state $\bar{x}$ is said to be convergent if there exists a positive $\delta$ such that the condition
$$\|\vec{x}(0) - \bar{x}\| < \delta$$
implies that
$$\vec{x}(t) \to \bar{x} \quad \text{as } t \to \infty.$$
The meaning of this second definition is that if the initial state $\vec{x}(0)$ of a trajectory is close enough to the equilibrium state $\bar{x}$, then the trajectory described by the state vector $\vec{x}(t)$ will approach $\bar{x}$ as time $t$ approaches infinity.
Definition 1.3. The equilibrium state $\bar{x}$ is said to be asymptotically stable
if it is both stable and convergent.
Consider a nonautonomous dynamical system described by the state Equation (3). Let $\vec{x}_0(t)$ be a solution of Equation (3), defined on the interval $[t_0, \infty)$.
Definition 1.4. The solution $\vec{x}_0(t)$ is said to be uniformly stable if for any given positive $\varepsilon$ there exists a positive $\delta$ such that the condition
$$\|\vec{x}(t_0) - \vec{x}_0(t_0)\| < \delta$$
implies
$$\|\vec{x}(t) - \vec{x}_0(t)\| < \varepsilon$$
for all $t > t_0$.
Definition 1.5. The solution $\vec{x}_0(t)$ is said to be asymptotically stable if it is stable and, in addition, satisfies the condition
$$\|\vec{x}(t) - \vec{x}_0(t)\| \to 0 \quad \text{as } t \to \infty.$$
2 LIAPUNOV’S EXPONENTS
Consider the following system of nonlinear delay-differential equations of the form
$$\frac{d}{dt}\,x_i(t) = \sum_{j=1}^{n} a_{ij}(t)\,x_j(t) + \sum_{j=1}^{n} b_{ij}(t)\,x_j(t-\tau) + \sum_{j=1}^{n} c_{ij}(t)\,g\bigl(x_j(t)\bigr) + I_i(t), \qquad i = 1, 2, \dots, n \tag{4}$$
where all $a_{ij}(t), b_{ij}(t), c_{ij}(t), I_i(t)$ are continuous functions, $a_{ii}(t) < 0$, $\tau > 0$, $|a_{ij}(t)| \le a$, $|b_{ij}(t)| \le a$, $|c_{ij}(t)| \le a$, and $g : \mathbb{R} \to \mathbb{R}$ is given by
$$g(u) = \frac{1}{2}\bigl( |u + 1| - |u - 1| \bigr).$$
This system of delay-differential equations can be used to model neural networks.
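To make the model concrete, here is a small Python sketch of the piecewise-linear output function $g$ and of the right-hand side of system (4); the coefficient matrices, the bias term, and the delayed state used below are hypothetical placeholders, not values from the paper.

```python
# Sketch of the cellular-neural-network nonlinearity and the right-hand side of (4).
import numpy as np

def g(u):
    # g(u) = ( |u + 1| - |u - 1| ) / 2, the piecewise-linear CNN output function.
    return 0.5 * (np.abs(u + 1.0) - np.abs(u - 1.0))

def cnn_rhs(x, x_delayed, A, B, C, I):
    # d/dt x_i = sum_j a_ij x_j(t) + sum_j b_ij x_j(t - tau) + sum_j c_ij g(x_j(t)) + I_i
    return A @ x + B @ x_delayed + C @ g(x) + I

# Illustrative 2-neuron example with constant (hypothetical) coefficients.
A = np.array([[-2.0, 0.1],
              [0.1, -2.0]])   # a_ii < 0 and |a_ij| bounded, as required
B = 0.2 * np.eye(2)           # delayed-feedback coefficients b_ij
C = 0.5 * np.eye(2)           # output-feedback coefficients c_ij
I = np.zeros(2)               # external inputs I_i
print(cnn_rhs(np.array([0.3, -0.7]), np.array([0.1, 0.2]), A, B, C, I))
```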
The initial value problem (IVP) for (4) is defined as follows. On the initial set
$$E_{t_0} = \{\, t - \tau : t - \tau < t_0,\ t \ge t_0 \,\} \cup \{ t_0 \}$$
let a continuous initial vector “white noise process”
$$\varphi(t) = \bigl(\varphi_0(t), \varphi_1(t), \dots, \varphi_{n-1}(t)\bigr), \qquad \varphi_k(t) = x_{k+1}(t_0)\bigl(\omega_k(t) + 1\bigr), \qquad k = 0, 1, \dots, n-1$$
be given. Suppose that the process $\omega(t) = \bigl(\omega_0(t), \omega_1(t), \dots, \omega_{n-1}(t)\bigr)$ satisfies the following conditions:
1. $\omega_k(t_0) = 0$, $k = 0, 1, \dots, n-1$;
2. $\omega_k(t)$ has independent increments, i.e. $\omega_k(t_1) - \omega_k(t_0)$, $\omega_k(t_2) - \omega_k(t_1)$, \dots, $\omega_k(t_m) - \omega_k(t_{m-1})$ are independent for all $0 \le t_0 \le t_1 \le t_2 \le \dots \le t_m$;
3. the random variables $\omega_k(t) - \omega_k(s)$, $0 \le s \le t$, have a normal distribution with parameters $\bigl(0, \sigma_k (t - s)\bigr)$, $\sigma_k \ge 0$;
4. $\sum_{k=0}^{n-1} \sigma_k \ne 0$;
5. $\omega_k(t)$ has a continuous version.
We have to find the solution $\vec{x}(t) \in C^{n}[t_0, \infty)$ of (4), $\vec{x}(t) = \bigl(x_1(t), x_2(t), \dots, x_n(t)\bigr)$, satisfying
$$x_{j+1}(t) = \varphi_j(t) \quad \text{if } t_0 - \tau \le t \le t_0, \qquad j = 0, 1, 2, \dots, n-1. \tag{5}$$
Under the above assumptions, the initial value problem (4), (5) has exactly one solution on the interval $[t_0, \infty)$, where
$$\varphi_j(t) = x_{j+1,0}\,\psi_j(t), \qquad x_{j+1}(t_0) = x_{j+1,0}, \qquad \psi_j(t) = \omega_j(t) + 1, \qquad \psi_j(t_0) = 1, \qquad j = 0, 1, \dots, n-1.$$
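One way to visualize this construction is to sample the processes numerically. The Python sketch below simulates each $\omega_k$ as a Brownian motion with variance parameter $\sigma_k$ on a grid over $[t_0 - \tau, t_0]$, re-anchored so that $\omega_k(t_0) = 0$, and then forms $\varphi_k(t) = x_{k+1,0}\,(\omega_k(t) + 1)$; the grid, the values of $\sigma_k$ and the initial point $x_0$ are hypothetical choices for illustration only.

```python
# Hedged sketch: sampling the initial "white noise" vector process on [t0 - tau, t0].
import numpy as np

rng = np.random.default_rng(0)
t0, tau, m = 0.0, 1.0, 200                    # grid of m + 1 points on [t0 - tau, t0]
t_grid = np.linspace(t0 - tau, t0, m + 1)
dt = tau / m

sigma = np.array([1.0, 0.5])                  # sigma_k >= 0, not all zero (condition 4)
x0 = np.array([0.8, -0.3])                    # x_{k+1}(t0) = x_{k+1,0}

# Gaussian independent increments (conditions 2 and 3), then re-anchor so omega_k(t0) = 0.
incr = rng.normal(0.0, np.sqrt(sigma[:, None] * dt), size=(2, m))
omega = np.concatenate([np.zeros((2, 1)), np.cumsum(incr, axis=1)], axis=1)
omega -= omega[:, -1:]                        # condition 1: omega_k(t0) = 0

phi = x0[:, None] * (omega + 1.0)             # phi_k(t) = x_{k+1,0} * (omega_k(t) + 1)
print(t_grid[-1], phi[:, -1])                 # at t = t0 the process equals x0, consistent with (5)
```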
We will consider a system of linear differential equations of the form
$$\frac{d}{dt}\,x_i(t) = \sum_{j=1}^{n} a_{ij}(t)\,x_j(t), \qquad i = 1, 2, \dots, n \tag{6}$$
and a system of nonlinear delay-differential equations
$$\frac{d}{dt}\,x_i(t) = \sum_{j=1}^{n} a_{ij}(t)\,x_j(t) + \sum_{j=1}^{n} b_{ij}(t)\,x_j(t-\tau) + \sum_{j=1}^{n} c_{ij}(t)\,g\bigl(x_j(t)\bigr), \qquad i = 1, 2, \dots, n. \tag{7}$$
Definition 2.1. The superior Liapunov exponent of a vector function $\vec{x}(t)$ is the real number $\lambda_1$ defined by
$$\lambda_1[\vec{x}(\cdot)] = \limsup_{t \to \infty} \frac{1}{t}\,\ln \|\vec{x}(t)\|.$$
Definition 2.2. The inferior Liapunov exponent of a vector function $\vec{x}(t)$ is the real number $\lambda_2$ defined by
$$\lambda_2[\vec{x}(\cdot)] = \liminf_{t \to \infty} \frac{1}{t}\,\ln \|\vec{x}(t)\|,$$
where
$$\|x\| = \sqrt{(x, x)} \qquad \text{and} \qquad (x, y) = \sum_{i=1}^{n} x_i y_i.$$
Definition 2.3. The superior central exponent of the Cauchy matrix of system (6) is the real number $\Omega$ defined by
$$\Omega = \inf_{T > 0} \left( \limsup_{k \to \infty} \frac{1}{kT} \sum_{i=1}^{k} \ln \bigl\| X\bigl(iT, (i-1)T\bigr) \bigr\| \right) = \lim_{T \to \infty} \left( \limsup_{k \to \infty} \frac{1}{kT} \sum_{i=1}^{k} \ln \bigl\| X\bigl(iT, (i-1)T\bigr) \bigr\| \right).$$
Definition 2.4. The inferior central exponent of the Cauchy matrix of system (6) is the real number $\omega$ defined by
$$\omega = \inf_{T > 0} \left( \limsup_{k \to \infty} \frac{1}{kT} \sum_{i=1}^{k} \ln \frac{1}{\bigl\| X^{-1}\bigl(iT, (i-1)T\bigr) \bigr\|} \right) = \lim_{T \to \infty} \left( \limsup_{k \to \infty} \frac{1}{kT} \sum_{i=1}^{k} \ln \frac{1}{\bigl\| X^{-1}\bigl(iT, (i-1)T\bigr) \bigr\|} \right).$$
We find the norm of the Cauchy matrix of system (6) by using the following formula
$$\|X(t, s)\| = \max_{x} \frac{\|x(t)\|}{\|x(s)\|},$$
where the maximum is taken over the set of all nontrivial solutions of system (6).
We may cast this system (7) of delay-differential equations in a compact form by using vector notation:
$$\frac{d}{dt}\,\vec{x}(t) = A(t)\,\vec{x}(t) + B(t)\,\vec{x}(t-\tau) + \vec{f}\bigl(t, \vec{x}(t)\bigr). \tag{8}$$
Theorem 2.1. Let the Cauchy matrix of system (6) be uniformly bounded above by
$$\|X(t, s)\| \le D\,e^{-\alpha (t - s)}, \qquad s \le t, \qquad \alpha > 0,$$
let $b_{ij}(t) \equiv 0$, $i, j = 1, 2, \dots, n$, and let the vector function $\vec{f}(t, \vec{x})$ satisfy the inequality
$$\|\vec{f}(t, \vec{x})\| \le \delta \|\vec{x}\|.$$
Then all solutions of system (8) are uniformly bounded above by
$$\|\vec{x}(t)\| \le \|\vec{x}(t_0)\|\, D\, e^{(-\alpha + D\delta)(t - t_0)},$$
where $D$ is a constant depending on $\alpha$.
Proof: see [9].
Let 1   2  ...   n - superior Liapunov’s exponents of a system
differential equations 6  ,
Bd  diag 1 , 2 , .... , n  ,
t
lim sup
t 
1 n
 aii s ds     ,
t 0 i 1
X t ,0 - Cauchy’s matrix of a system 6  , then we put
Lt   X t ,0 exp  tBd  .
Lemma. Let Lt   X t ,0 exp  tBd  , then 1 Lt   0 and
1 L1 t      , where    i .
n
i 1
Proof: ( see [8] ).
Theorem 2.2. Let $x(t) = L(t)\,y(t)$ and
$$\frac{d}{dt}\,y(t) = B_d\,y(t) + L^{-1}(t)\,\vec{f}\bigl(t, L(t)\,y(t)\bigr);$$
then $\lambda_1[x(t)] \le \lambda_1[y(t)]$ and $\Sigma - \sigma \ge 0$.
Proof: see [8].
3 CONCLUSION
Analog circuits have played a very important role in the development of modern electronic technology. Conventional digital computation methods have run into a serious speed bottleneck due to their serial nature. To overcome this problem a new computation model, called “neural networks”, has been proposed, which is based on some aspects of neurobiology and adapted to integrated circuits [2]-[9].
REFERENCES
1. ARNOLD, L.: Stochastische Differentialgleichungen. Theorie und Anwendung, Oldenbourg Verlag, 1973.
2. SZATHMÁRY, P. – KOLCUN, M.: Predikcia denných diagramov zaťaženia v ES s využitím umelých neurónových sietí, elfa s.r.o., Košice, 2001.
3. BUŠA, J. – PIRČ, V.: Trajectory Bounds of Solutions of Certain Systems of Nonlinear Differential Equations, Bull. Appl. Math., Baia Mare, Romania, (2001), 77-84.
4. HAŠČÁK, A.: Disconjugacy and Multipoint Boundary Value Problems for Linear Differential Equations with Delay, Czechoslovak Mathematical Journal, 39 (114), 1989, 70-77.
5. HOPFIELD, J. J. – TANK, D. W.: Neural computation of decisions in optimization problems, Biol. Cybern., 52 (1985), 141-152.
6. SINČÁK, P. – ANDREJKOVÁ, G.: Neurónové siete. Inžiniersky prístup, Elfa s.r.o., Košice, ISBN 80-88786-42-8, 63 str., II. diel.
7. DAŇO, I.: The Central Exponent, Zborník vedeckých prác Vysokej školy technickej v Košiciach, 1991, 37-45.
8. DAŇO, I.: Pokazateli Liapunova, Zborník 9. letnej školy z diferenciálnych rovníc, Poprad, 1986, 21-24.
9. DAŇO, I.: Stability Theory by Liapunov’s First Method and Neural Networks, (to appear).