FRACTALS: Chaos in Neural Networks and Other Applications

A. Bacopoulos and Mina Kribeni
Department of Applied Mathematics and Physical Sciences
National Technical University of Athens
Zografou 15773, Athens, Greece
abacop@math.ntua.gr, yantho@netonline.gr
I. INTRODUCTION

We examine the fractal behaviour of certain computer neural networks. Chaotic behaviour in neural networks is a useful property in random number generation. Generally, chaos seems to be a necessary ingredient for "imagination"-type machine intelligence. Some aspects of the above are studied.

In addition, we discuss here three more fractal applications: Voice Identification, Fractal Image Compression, and Fractal Encryption (Encrypting Chaos).

A framework of multicriteria optimization is developed in which the above problems are posed. Some of our progress is presented, and open questions that arise naturally are posed.

Specifically, we investigate an aspect of these problems by introducing 1 or 2 additional attributes to the existing objective, i.e. error minimization between the model and the "ideal" which the model simulates. Thus, the new aim becomes to optimize simultaneously with respect to multiple criteria (e.g. minimization of error, stability, efficiency, reliability, etc.). This is done both in the theoretical modelizations themselves and in their corresponding computational aspects. A partial order of the above objectives is introduced and optimal trade-offs are presented.

The main contribution so far in this work in progress lies more in the framework of multiobjective optimization than in finding optimal single-criterion solutions.
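Since chaos is invoked above as the working ingredient for random number generation, the following minimal Python sketch illustrates the general idea, with the logistic map standing in for the chaotic dynamics of a network; the function name, seed and threshold are our own illustrative choices, not a construction from the paper.

    # Minimal sketch (not the paper's construction): pseudo-random bits
    # extracted from a chaotic map. The logistic map x -> r*x*(1-x) is
    # chaotic at r = 4.0, the property exploited for random number generation.

    def chaotic_bits(seed: float, n: int, r: float = 4.0):
        """Yield n pseudo-random bits by thresholding logistic-map iterates."""
        x = seed
        for _ in range(n):
            x = r * x * (1.0 - x)          # one chaotic iteration
            yield 1 if x > 0.5 else 0      # threshold the state to a bit

    # Sensitive dependence on initial conditions: two nearby seeds quickly
    # produce different bit streams.
    print(list(chaotic_bits(0.123456, 16)))
    print(list(chaotic_bits(0.123457, 16)))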
II. THEORY AND EXAMPLES

We introduce 1 or 2 additional criteria of proximity (e.g. minimization of error, stability, efficiency, reliability, etc.; the criterion of error minimization should always be included) and assign a grade to each attribute, say between 1 and 10. Here the lower numbers are the better ones, suggestive of a small error at each cycle of iteration k, small instability, etc. Thus 1 is the best we can do at each cycle and 10 the worst.
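The paper leaves open how a raw measurement is converted to a grade. The sketch below (Python) shows one plausible convention, a clipped linear binning onto $\{1, \ldots, 10\}$; the mapping is our assumption, not part of the paper.

    # Illustrative sketch only: mapping a raw criterion value (e.g. the error
    # at iteration k) onto the integer grade scale 1..10, where 1 is best and
    # 10 is worst. The clipped linear binning is our assumption.

    def grade(raw: float, worst: float) -> int:
        """Grade a nonnegative raw value: 0 -> 1 (best); >= worst -> 10."""
        frac = min(max(raw / worst, 0.0), 1.0)   # clip to [0, 1]
        return 1 + min(9, int(frac * 10))        # bin onto {1, ..., 10}

    print(grade(0.0, worst=1.0))   # 1  (no error: best grade)
    print(grade(0.5, worst=1.0))   # 6  (mid-range)
    print(grade(2.0, worst=1.0))   # 10 (saturates at the worst grade)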
For simplicity we consider a total of 2 criteria $C_1$ and $C_2$ which assume integer values according to their states $S_k$ (at each iteration $k$), i.e.

$C_i(S_k) = n$, $i = 1, 2$, $n \in \{1, 2, \ldots, 10\}$,

where $k$ is the iteration index.

Three models of the above are introduced: the sum, the max and the vec:

$\sigma_k = C_1(S_k) + C_2(S_k)$ for each $k$,

$\mu_k = \max\{C_1(S_k), C_2(S_k)\}$,

$V_k = (C_1(S_k), C_2(S_k))$,

where the ordered pair values of $V_k$ in $\mathbb{R}^2$ are partially ordered by

$(C_1(\alpha, S_k), C_2(\alpha, S_k)) \le (C_1(\beta, S_k), C_2(\beta, S_k)) \iff C_1(\alpha, S_k) \le C_1(\beta, S_k)$ and $C_2(\alpha, S_k) \le C_2(\beta, S_k)$,

where $\alpha, \beta \in A$, the space of parameters. We say

$(C_1(\alpha), C_2(\alpha)) < (C_1(\beta), C_2(\beta)) \iff (C_1(\alpha), C_2(\alpha)) \le (C_1(\beta), C_2(\beta))$ and $(C_1(\alpha), C_2(\alpha)) \ne (C_1(\beta), C_2(\beta))$.

Thus we are led to defining the vec minimum set.

Definition: With $V_k(\alpha) = (C_1(\alpha, S_k), C_2(\alpha, S_k))$,

$\mathrm{VecMin} = \{\alpha^* \in A : \text{there is no } \alpha \in A \text{ with } V_k(\alpha) < V_k(\alpha^*)\}$.

Similarly,

$\mathrm{SumMin} = \min_{\alpha \in A} \{C_1(\alpha) + C_2(\alpha)\}$

and

$\mathrm{MaxMin} = \min_{\alpha \in A} \max\{C_1(\alpha), C_2(\alpha)\}$.
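To make the three models concrete, the following Python sketch computes the sum, max and vec minima over a small finite parameter space with hypothetical grades (the data and names are ours), and checks that the sum- and max-minimizers it finds are also vec-minimal, in line with Theorems 3 and 4 below.

    # Sketch (ours): the sum, max and vec models over a finite parameter
    # space A. Grades are hypothetical integers in 1..10; lower is better.

    grades = {                  # alpha -> (C1(alpha), C2(alpha))
        "a": (2, 9),
        "b": (4, 4),
        "c": (3, 6),
        "d": (5, 3),
        "e": (6, 6),            # dominated by "b": (4, 4) <= (6, 6), unequal
    }

    def dominates(u, v):
        """u < v in the partial order: u <= v componentwise and u != v."""
        return all(ui <= vi for ui, vi in zip(u, v)) and u != v

    # VecMin: parameters whose grade vector is not strictly dominated.
    vec_min = {a for a, va in grades.items()
               if not any(dominates(vb, va) for vb in grades.values())}

    sum_min = min(grades, key=lambda a: sum(grades[a]))   # a sum-minimizer
    max_min = min(grades, key=lambda a: max(grades[a]))   # a max-minimizer

    print(sorted(vec_min))    # ['a', 'b', 'c', 'd'] -- 'e' is dominated
    print(sum_min, max_min)   # 'b' and 'b' for this data

    # The minimizers found are vec-minimal (cf. Theorems 3 and 4 below).
    assert sum_min in vec_min and max_min in vec_min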
The multiple criteria are a way of studying trade-offs, such as adaptivity versus robustness in neural networks. The models we use are simple in each of the above well-studied examples of fractal simulation. It should be mentioned that this is work in progress, with the emphasis on this multicriteria approach. The major obstacle in the neural research is, as expected, the non-convexity of the minimizations involved. However, here again we made use of the inherent non-uniqueness of the minimal set (even in the case of non-convexity) to incorporate the fault tolerance of the neural network.
Symbols

$H = \{V_k(\alpha) : \alpha \in A\} \subset \mathbb{R}^2$,

$\mu = \{V_k(\alpha) : \alpha \in \mathrm{VecMin}\} \subset H$.

Theorem 1. For "large" neural networks the set $H$ is a discrete effectively convex set.

Theorem 2. For "large" neural networks the set $\mu$ is a discrete effectively convex arc.

Theorem 3. $\mathrm{SumMin} \subset \mathrm{VecMin}$.

Theorem 4. $\mathrm{MaxMin} \subset \mathrm{VecMin}$.
Theorem 5. The max and the vec formulations of minimization are equivalent to the standard neuron architecture, where the activation function is chosen appropriately.

Remarks

As mentioned in all the above, we show the equivalence of the standard linear combiner with the max and vec formulations. Specifically, an activation function exists which provides the equivalence.

However, the vectorial formulation shows certain trade-offs involved, e.g., adaptivity vs robustness and the stability vs plasticity dilemma. The main difficulty that remains, besides of course the scale of the problems, is that the optimizations involved are non-convex. Here again, we have some ideas that are best expressed in the above vectorial context.
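Theorem 5 is stated without exhibiting the activation function. For flavour, the elementary identity $\max(a, b) = (a + b)/2 + |a - b|/2$ shows how a max criterion can be realized by linear combiners plus an absolute-value activation; the Python sketch below is our illustration, not necessarily the authors' construction.

    # Illustration (ours): the max of two criteria realized by linear
    # combiners plus an absolute-value activation, via the identity
    # max(a, b) = (a + b)/2 + |a - b|/2.

    def neuron_max(a: float, b: float) -> float:
        s = abs(a - b)                       # hidden unit: weights (+1, -1), activation |.|
        return 0.5 * a + 0.5 * b + 0.5 * s   # output unit: a pure linear combiner

    for a, b in [(3.0, 7.0), (7.0, 3.0), (5.0, 5.0), (-2.0, 4.0)]:
        assert neuron_max(a, b) == max(a, b)
    print("max criterion realized by a linear combiner with |.| activation")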
Examples

Neural networks, random number generators, chaos in neural networks, voice identification, fractal image compression, fractal encryption. The framework we have developed is presented in all the above.

REFERENCES

[1] S.L. Anderson, "Random Number Generators on Vector Supercomputers and Other Advanced Architectures", SIAM Review, Vol. 32, No. 2, pp. 221-251, June 1990.

[2] A. Bacopoulos and B.L. Chalmers, "Vectorially minimal projections", in Approximation Theory VIII, Vol. 1: Approximation and Interpolation, C.K. Chui and L.L. Schumaker, eds., World Scientific, 1995, pp. 15-22.

[3] A. Bacopoulos, "Topology of a General Approximation System and Applications", Journal of Approximation Theory, Vol. 4, pp. 147-158, 1971.

[4] N. Bourbakis and C. Alexopoulos, "A Fractal-Based Image Processing Language: Formal Modeling", Pattern Recognition 32 (1999), pp. 317-338.

[5] B. Brosowski and A. Da Silva, "Scalarization of vectorial relations applied to certain optimization problems", Note Mat. 11 (1991), pp. 69-91.

[6] P.L. Butzer and H. Berens, Semi-Groups of Operators and Approximation, Springer-Verlag, New York, 1967.

[7] S.K. Ghosh, J. Mukherjee and P.P. Das, "Fractal Image Compression: A Randomized Approach", Pattern Recognition Letters 25 (2004), pp. 1013-1024.

[8] S. Haykin, Neural Networks: A Comprehensive Foundation, Macmillan College Publishing Company, Inc., New York, 1994.

[9] T. Kwok and K.A. Smith, "Experimental Analysis of Chaotic Neural Network Models for Combinatorial Optimization under a Unifying Framework", Neural Networks 13 (2000), pp. 731-744.

[10] H. Lu, "Chaotic Attractors in Delayed Neural Networks", Physics Letters A 298 (2002), pp. 109-116.

[11] P. Maragos and F.-K. Sun, "Measuring the Fractal Dimension of Signals: Morphological Covers and Iterative Optimization", IEEE Transactions on Signal Processing, Vol. 41, No. 1, January 1993.

[12] A. Potapov and M.K. Ali, "Robust Chaos in Neural Networks", Physics Letters A 277 (2000), pp. 310-322.

[13] R. Schmitz, "Use of Chaotic Dynamical Systems in Cryptography", Journal of the Franklin Institute 338 (2001), pp. 429-441.