IJCNN2003 Tutorial

Linear Hopfield Networks for solving linear equations and optimization problems

Tutorial by Evgeny E. Dudnikov, Russia

Solving linear equations is one of the basic problems widely encountered in many real applications (signal processing, robot control, automatic control of industrial processes, and so on). Moreover, some optimization problems may be converted into the solution of linear equations, which widely extends this field of applications. It is well known that neural networks can be used to solve different systems of linear equations. The most important advantages of neural networks are massive parallel processing and fast convergence; therefore, neural-network algorithms have many computational advantages over traditional algorithms implemented on ordinary digital computers. This is especially notable when a real-time solution of linear equations is needed. Neural networks have shown great potential to solve these problems efficiently, which is why methods for solving linear equations with neural networks remain highly relevant. In this Tutorial we present a survey of such methods based on the application of linear Hopfield neural networks.

Assume that we need to solve the following system of linear algebraic equations:

(1) Ax - I = 0,

where A = (a_ij) is a square matrix of order n, x is the vector of variables, and I is the vector of constant values.

For the solution we apply the linear Hopfield neural network described by the following model, shown here in matrix form:

(2) dx/dt = Ax - I,

where n is the total number of neurons in the network; the square matrix A = (a_ij) represents the matrix of connections between these neurons; x is the vector of the outputs of the neurons, and I is the vector of the external signals supplied to the inputs of the neurons.

The unique equilibrium point of the system (2) is defined by the equation

Ax - I = 0,

i.e. by the solution of the initial system (1).

This method can be called the direct method. The Hopfield network (2) is a neural network with feedbacks, so, as for any dynamical system, one of the main problems of its realization is stability.

We can use the direct method only if the matrix A has special features, because otherwise the network (2) becomes unstable. This restricts the application of this method in many practical cases.
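
The direct method can be sketched numerically. The following is a minimal illustration (not from the Tutorial's materials), assuming the direct model has the form dx/dt = Ax - I and using plain Euler integration; the example matrix is deliberately chosen with all eigenvalues in the left half-plane so that the network is stable:

```python
import numpy as np

def simulate_direct(A, I, x0, dt=0.01, steps=5000):
    """Euler integration of the direct model dx/dt = A x - I."""
    x = x0.astype(float).copy()
    for _ in range(steps):
        x += dt * (A @ x - I)
    return x

# Stable case: all eigenvalues of A have negative real parts (here -3 and -2).
A = np.array([[-3.0, 1.0],
              [0.0, -2.0]])
I = np.array([1.0, 2.0])
x = simulate_direct(A, I, x0=np.zeros(2))
print(np.allclose(A @ x, I, atol=1e-6))  # the equilibrium satisfies A x = I
```

If A is replaced by a matrix with an eigenvalue in the right half-plane, the same iteration diverges, which illustrates why the direct method needs A to have special features.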

To stabilize the network (2), we suggest using the following model:

(3) dx/dt = B(Ax - I),

where B is a special matrix that is selected to stabilize the initial system (2).

According to (3), this group of methods can be called the methods with multiplication of matrices. For example, if we select B = -A^T, where A^T denotes the transpose of A, we obtain the realization of the well-known gradient method:

(4) dx/dt = -A^T (Ax - I).

The equilibrium point of the system (4) is defined by the system of equations

A^T (Ax - I) = 0

and coincides with the solution of the initial system (1). The system (4) is stable, and its stability does not depend on the characteristics of the matrix A.
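
A minimal numerical sketch of the gradient method (again assuming the form dx/dt = -A^T (Ax - I) and Euler integration; the matrix and vector are arbitrary illustrative values):

```python
import numpy as np

def gradient_network(A, I, x0, dt=0.001, steps=20000):
    """Euler integration of the gradient model dx/dt = -A^T (A x - I)."""
    x = x0.astype(float).copy()
    for _ in range(steps):
        x -= dt * (A.T @ (A @ x - I))
    return x

# The gradient flow minimizes ||A x - I||^2 / 2, so it converges for any
# nonsingular A, without special requirements on the eigenvalues of A itself.
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
I = np.array([5.0, 10.0])
x = gradient_network(A, I, x0=np.zeros(2))
print(np.allclose(A @ x, I, atol=1e-5))  # x is close to the solution (1, 3)
```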

During the Tutorial we will also discuss a group of new methods for the solution of linear equations.

In these methods we replace the system (2) by the following system:

(5) dx/dt = -A^T y,
    dy/dt = -y + (Ax - I),

where y is a new variable. Other parameters and variables are the same as in (2).

The equilibrium point of the system (5) is defined by the system of equations

A^T y = 0,
-y + (Ax - I) = 0.

The equilibrium of (5) in the variable x coincides with the solution of the initial system (1). The system (5) is stable, and its stability does not depend on the characteristics of the matrix A. Moreover, in comparison with the gradient system (4) it demonstrates better convergence to the solution. We call this group of methods the methods with supplementary variables.
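
The method with supplementary variables can be sketched the same way. This is an illustrative Euler simulation (not from the Tutorial's materials), assuming the reconstructed form dx/dt = -A^T y, dy/dt = -y + (Ax - I); at equilibrium y settles to zero while x solves Ax = I:

```python
import numpy as np

def supplementary_network(A, I, dt=0.001, steps=50000):
    """Euler integration of dx/dt = -A^T y, dy/dt = -y + (A x - I)."""
    n = A.shape[0]
    x, y = np.zeros(n), np.zeros(n)
    for _ in range(steps):
        dx = -A.T @ y                 # x-layer driven by the new variable y
        dy = -y + (A @ x - I)         # y-layer tracks the residual A x - I
        x, y = x + dt * dx, y + dt * dy
    return x, y

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
I = np.array([5.0, 10.0])
x, y = supplementary_network(A, I)
print(np.allclose(A @ x, I, atol=1e-5), np.allclose(y, 0.0, atol=1e-5))
```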

For all groups we consider the circuit implementations of the discussed methods and present a comparative analysis. At the end of the Tutorial we analyze an application to optimal adaptive filter design, where it is necessary to realize a tracking context to distinguish targets from background and noise. This problem is formulated as a quadratic programming problem with equality constraints. We reduce it to solving a system of linear equations with an ill-conditioned matrix, and for the solution of this system we use the method with supplementary variables realized on the basis of the linear Hopfield neural network.
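
To illustrate how an equality-constrained quadratic program reduces to a linear system, here is a generic textbook construction (not the Tutorial's specific filter problem; all matrices are hypothetical values): the KKT conditions of minimizing (1/2) x^T Q x - c^T x subject to Cx = d collapse into one linear system in (x, lambda).

```python
import numpy as np

# Hypothetical QP data: minimize (1/2) x^T Q x - c^T x  subject to  C x = d.
Q = np.array([[4.0, 1.0],
              [1.0, 3.0]])
c = np.array([1.0, 2.0])
C = np.array([[1.0, 1.0]])   # one equality constraint: x1 + x2 = 1
d = np.array([1.0])

# KKT conditions as a single linear system:
#   [ Q   C^T ] [x  ]   [c]
#   [ C   0   ] [lam] = [d]
n, m = Q.shape[0], C.shape[0]
K = np.block([[Q, C.T],
              [C, np.zeros((m, m))]])
rhs = np.concatenate([c, d])
z = np.linalg.solve(K, rhs)
x, lam = z[:n], z[n:]
print(np.allclose(C @ x, d))  # the constraint is satisfied
```

The KKT matrix K is indefinite and can be poorly conditioned, which is exactly the situation where a stabilized network such as the method with supplementary variables is useful.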

The Tutorial is self-contained and is suitable for researchers who use, or would like to use, neural networks with feedbacks as a tool for different applications. The material presented in the Tutorial does not require special knowledge in the field of neural networks or a sophisticated mathematical background from the participants.

References:

1. Cichocki A. and Unbehauen R., Neural Networks for Optimization and Signal Processing. New York: John Wiley and Sons, 1993. (A basic book in the field.)

2. Lendaris G. G., Mathia K., and Saeks R., Linear Hopfield Networks and Constrained Optimization. IEEE Trans. Syst., Man, Cybern., Part B, vol. 29, pp. 114-118, Feb. 1999.

3. Dudnikov E., Two-layer stabilization of continuous neural networks with feedbacks. Cybernetics and Systems: An International Journal, vol. 33, pp. 326-340, May 2002. (Method with supplementary variables.)

The Instructor

The instructor, Evgeny Dudnikov, is a leading senior scientist and head of the Research Department of Computer Application and Mathematical Methods at the International Research Institute for Management Sciences (IRIMS) of the Russian Academy of Sciences in Moscow, Russia.

E. Dudnikov received the degree of Doctor of Sciences in Cybernetics and Theory of Information from the Institute of Control Problems in Moscow, USSR, in 1976. He has extensive experience in applied mathematics and the application of control systems. His main research interests are in the fields of automatic control, optimization problems, and parallel computations. He has published several monographs and many research articles on these subjects. For the last ten years he has also had a deep interest in problems of the theory and application of artificial neural networks with feedbacks.

His latest publications in this field:

1. Dudnikov E., Rybashov M., Stability of Neural Networks with Nonlinear Feedbacks. Proceedings of the 1996 World Congress of Neural Networks - WSNN'96, San Diego, USA, 1996.

2. Dudnikov E., Rybashov M., Artificial Neural Networks with Nonlinear Feedbacks. Fifth International Conference on Artificial Neural Networks (July 1997), IEE Conference Publication No. 440, University of Cambridge, UK, 1997.

3. Dudnikov E., Rybashov M., Stabilization of single-layer neural network with feedbacks. Proceedings of the International ICSC/IFAC Symposium on Neural Computation - NC'98, Vienna, Austria, 1998, pp. 954-959. Canada/Switzerland: ICSC Academic Press, 1998.

4. Dudnikov E., Rybashov M., Absolute stability of the neural networks with feedbacks. Automation and Remote Control, No. 12, pp. 33-40, 1999.

5. Dudnikov E., Structural Stabilization of Cellular Neural Networks. Proceedings of the Fourth International Symposium on Soft Computing and Intelligent Systems for Industry, Paisley, Scotland, UK, June 2001. Canada/Switzerland: ICSC Academic Press, 2001.

6. Dudnikov E., Rybashov M., Single-layer neural networks with various feedbacks. Neural, Parallel & Scientific Computations, vol. 9, No. 1, pp. 29-48, March 2001.

7. Dudnikov E., Two-layer stabilization of continuous neural networks with feedbacks. Cybernetics and Systems: An International Journal, vol. 33, pp. 326-340, May 2002.

CURRICULUM VITAE

Evgeny Evgenievich DUDNIKOV

Evgeny Evgenievich DUDNIKOV is a leading Russian scientist, head of the Research Department of Computer Application and Mathematical Methods at the International Research Institute for Management Sciences (IRIMS) of the Russian Academy of Sciences in Moscow, Russia.

After graduating from the Moscow Power Institute in 1959, E. Dudnikov began to work at the Institute of Control Problems in Moscow, USSR, as a postgraduate student, then junior researcher, senior researcher, and head of laboratory. From this Institute he received his Master's degree (Candidate of Sciences) in Control Problems in 1965 and his Doctor of Sciences in Cybernetics and Theory of Information in 1976.

He has extensive experience in applied mathematics and the application of control systems. His main research interests are in the fields of automatic control, optimization problems, and parallel computations.

Since 1974 Dr. Dudnikov has been with the International Research Institute for Management Sciences.

In the middle of the eighties he participated in several research projects concerning parallel computations and the application of transputers. During the last ten years he has also had a great interest in problems of the theory and applications of artificial neural networks with feedbacks. He investigated problems of stability for neural networks with different feedbacks using the direct Lyapunov method, formulated a stabilization problem for this class of neural networks, and studied its features. Two of his research projects were supported by grants of the Russian Foundation for Basic Research (from 1994 to 1996 and from 1998 up to now).

In 1999 he received a fellowship grant from the Italian Ministry of Foreign Affairs and spent three months at the Instituto di Elaborazione della Informazione in Pisa (Italy), where he worked on problems of neural networks, optimal control, and system analysis.

As a Visiting Professor he presented several courses of lectures at the Moscow Power Institute and at the Moscow Technical University of Electronic Machine-tool Industry.

In May 2000 he was invited to the Instituto di Elaborazione della Informazione in Pisa, where he presented the courses of lectures "Large scale systems and system analysis" and "Neural networks with feedbacks" for Italian postgraduates in the framework of the "MURST" project.

The scientific biography of Dr. Dudnikov was included in the prestigious edition of Who's Who in Science and Engineering published in the USA in 2001.

Dr. Dudnikov has published over 100 scientific papers, including 8 monographs.

Contact information

Address: International Research Institute for Management Sciences

9, Prospect 60-let Octyabria, 117312 Moscow, Russia

E-mail: eed@isa.ru

Fax: (007) 095 135 2449

Closest technical area

Constrained Optimization, Signal Processing.
