Chapter-4
Chaotic Behavior of Nonlinear Dynamical Systems
Definition of Chaos
No definition of the term chaos is universally accepted yet, but almost everyone would agree on the three ingredients used in the following working definition:
"Chaos is aperiodic long-term behavior in a deterministic system that exhibits sensitive dependence on initial conditions."
1. "Aperiodic long-term behavior" means that there are trajectories which do not settle down to fixed points, periodic orbits, or quasiperiodic orbits as t → ∞. For practical reasons, we should require that such trajectories are not too rare. For instance, we could insist that there be an open set of initial conditions leading to aperiodic trajectories, or perhaps that such trajectories should occur with nonzero probability, given a random initial condition.
2. "Deterministic" means that the system has no random or noisy inputs or parameters. The irregular behavior arises from the system's nonlinearity, rather than from noisy driving forces.
3. "Sensitive dependence on initial conditions" means that nearby trajectories separate exponentially fast, i.e., the system has a positive Lyapunov exponent.
LORENZ EQUATIONS:
We begin our study of chaos with the Lorenz equations:
ẋ = σ(y − x)
ẏ = rx − y − xz        (*)
ż = xy − bz
Here σ, r, b > 0 are parameters. Ed Lorenz (1963) derived this three-dimensional system from a drastically simplified model of convection rolls in the atmosphere. The same equations also arise in models of lasers and dynamos, and they exactly describe the motion of a certain waterwheel. Lorenz discovered that this simple-looking deterministic system could have extremely erratic dynamics: over a wide range of parameters, the solutions oscillate irregularly, never exactly repeating but always remaining in a bounded region of phase space. When he plotted the trajectories in three dimensions, he discovered that they settled onto a complicated set, now called a strange attractor. Unlike stable fixed points and limit cycles, the strange attractor is not a point or a curve or even a surface—it's a fractal, with a fractional dimension between 2 and 3. In this chapter we'll follow the beautiful chain of reasoning that led Lorenz to his discoveries. Our goal is to get a feel for his strange attractor and the chaotic motion that occurs on it. Lorenz's paper (Lorenz 1963) is deep, prescient, and surprisingly readable—look it up! It is also reprinted in Cvitanović (1989a) and Hao (1990). For a captivating history of Lorenz's work and that of other heroes of chaos, see Gleick (1987).
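To get a first feel for this sensitive dependence, the following MATLAB sketch (an illustration added here, not part of Lorenz's paper) integrates the system (*) twice, from initial conditions differing by 10⁻⁸, at the classic parameter values σ = 10, b = 8/3, r = 28, and plots the separation of the two trajectories on a logarithmic scale. The roughly straight rising segment of the curve is the signature of exponential divergence.

% Sensitive dependence: two Lorenz trajectories from nearly identical initial conditions
sigma = 10; b = 8/3; r = 28;                      % classic parameter values
f = @(t, u) [sigma*(u(2)-u(1)); r*u(1)-u(2)-u(1)*u(3); u(1)*u(2)-b*u(3)];
opts = odeset('reltol', 1e-10, 'abstol', 1e-12);  % tight tolerances
tspan = linspace(0, 30, 3000);
[~, U1] = ode45(f, tspan, [1; 1; 1], opts);
[~, U2] = ode45(f, tspan, [1+1e-8; 1; 1], opts);  % x perturbed by 1e-8
d = sqrt(sum((U1 - U2).^2, 2));                   % Euclidean separation over time
semilogy(tspan, d), xlabel('t'), ylabel('|delta(t)|')
title('Exponential divergence of nearby Lorenz trajectories')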
Deterministic Chaos Properties of Dynamical Systems
The term deterministic chaos and its properties are tightly connected with dynamical systems theory. There are several approaches within modern dynamical systems theory, e.g. ergodic theory, topological dynamics, the theory of smooth dynamical systems, and Hamiltonian dynamics.
In the scope of this book we consider smooth dynamical systems. A dynamical system includes three components:
• phase space
• time
• time evolution law
The phase space represents all possible states of the system. Let us assume it is a compact Riemannian manifold M, i.e. a compact manifold with a Riemannian metric and a sufficiently differentiable structure, which can be represented as a bounded subset of Rⁿ. A state of the system at a given time, denoted X(t), is a single point in the manifold M:

X(t) = (x₁, x₂, . . . , xₙ)        (3.1)
where X(t) ∈ M. By time we mean a real value (t ∈ R) for continuous-time dynamical systems or an integer (t ∈ Z) for discrete-time systems. The time evolution law describes how a particular point in the manifold M moves within this manifold as time passes. It is a rule which describes where the system state is in M after time t. This rule can be described, e.g., by flows or maps.
A continuous-time dynamical system is called a flow. Its evolution in time is usually described by a differential equation:

dX(t)/dt = F(X(t))        (3.2)

where X(t) is a state vector and F: M → M is a vector field differentiable as often as required.
With the vector field F defined as in Eq. 3.2, one may see that the time variable t does not appear explicitly. Systems of this kind, not explicitly dependent on time, are called autonomous, and they are the subject of this research. The analysis and embedding of forced systems is a subject of the related work.
The dynamical evolution of the system is an initial value problem, whose resolution determines what happens to the initial state X(0) after time t:

F: X(0) → X(X(0), t)        (3.3)

Given the time evolution law F and the initial condition X(0), one may resolve the state of the system at a future time t > 0:

X(t) = F(X(0), t)        (3.4)
During the evolution, the state of the system draws a path in the phase space called a trajectory or orbit. An example trajectory is presented in Fig. 3.1.
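As a concrete illustration of Eqs. 3.2-3.4, the following MATLAB sketch (our own example; the damped pendulum is chosen arbitrarily as the vector field F) resolves the state X(t) from X(0) with ode45 and draws the resulting trajectory in phase space, of the kind sketched in Fig. 3.1.

% Flow of an autonomous system dX/dt = F(X): damped pendulum as an example
F = @(t, X) [X(2); -0.25*X(2) - sin(X(1))];   % F does not depend explicitly on t
X0 = [2.5; 0];                                % initial condition X(0)
[t, X] = ode45(F, [0 40], X0);                % resolves X(t) = F(X(0), t), Eq. 3.4
plot(X(:,1), X(:,2))                          % trajectory (orbit) in phase space
xlabel('x_1'), ylabel('x_2'), title('Sample trajectory spiraling onto a fixed point attractor')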
For an autonomous dynamical system, the solution of the initial value problem of Eq. 3.3 exists and is unique once the vector field F is Lipschitz continuous.
Lipschitz continuity can be defined as follows. Let U be an open set in Rⁿ. A vector field F(X, t) on Rⁿ is said to be Lipschitz on U if there exists a constant L such that

‖F(X₁, t) − F(X₂, t)‖ ≤ L ‖X₁ − X₂‖        (3.5)

where X₁, X₂ ∈ U. The constant L is called a Lipschitz constant for F. If such a constant L exists, the vector field F is called Lipschitz-continuous on U [4]. In the scope of this work we consider dynamical systems described by Lipschitz vector fields and Lipschitz maps.

Fig. 3.1 The sample dynamical system trajectory
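The inequality in Eq. 3.5 can be probed numerically: the sketch below (a crude illustration of ours) samples random pairs of points in a box U and records the largest observed ratio ‖F(X₁) − F(X₂)‖ / ‖X₁ − X₂‖, giving an empirical lower bound on a Lipschitz constant for the pendulum field used above.

% Empirical lower bound on a Lipschitz constant L on U = [-3,3]^2 (Eq. 3.5)
F = @(X) [X(2); -0.25*X(2) - sin(X(1))];          % sample autonomous vector field
rng(0); L = 0;
for k = 1:1e5
    X1 = 6*rand(2,1) - 3;  X2 = 6*rand(2,1) - 3;  % random pair of points in U
    L = max(L, norm(F(X1) - F(X2)) / norm(X1 - X2));
end
fprintf('Empirical lower bound on the Lipschitz constant: %.3f\n', L)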
In the case of discrete time (valued by integers), systems are called maps and the evolution law is given by a map M(Xₙ), which maps vectors in M to other vectors in M:

Xₙ₊₁ = M(Xₙ)        (3.6)

where n stands for the discrete time moments (0, 1, 2, . . .) and

Xₙ = (xₙ¹, xₙ², xₙ³, . . . , xₙᵈ)        (3.7)

is a state vector.
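Eqs. 3.6-3.7 translate directly into code. The sketch below (our example) iterates the Hénon map, a two-dimensional discrete-time system that reappears later in this chapter, with its standard parameters a = 1.4 and b = 0.3.

% Discrete-time system X_{n+1} = M(X_n): the Henon map
a = 1.4; b = 0.3;
M = @(X) [1 - a*X(1)^2 + X(2); b*X(1)];   % the evolution law M: R^2 -> R^2
N = 5000;  X = zeros(2, N);               % X(:,n) is the state X_n
for n = 1:N-1
    X(:, n+1) = M(X(:, n));               % one step of the evolution law
end
plot(X(1,:), X(2,:), '.', 'markersize', 2)
xlabel('x_n'), ylabel('y_n'), title('Orbit of the Henon map')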
The behavior of the trajectory of a dynamical system's evolution depends both on the form of F or M and on the initial condition. It is worth emphasizing that there exist sets of initial conditions which lead to the same asymptotic behavior of trajectories. Such a set of initial conditions is called a basin of attraction. The attracting subset of the phase space to which the trajectories tend is called an attractor [72]. Examples of attractors are the fixed point attractor and the limit cycle attractor presented in Fig. 3.2.
The dynamical systems can be divided into two subgroups:
• Linear dynamical systems
• Nonlinear dynamical systems
Linear dynamical systems satisfy the superposition principle and can be described by equations using linear operators. Nonlinear dynamical systems do not satisfy the superposition principle, and nonlinear operators are used to describe them.
It is worth underlining that in practical applications a continuous-time dynamical system can often be replaced by a discrete-time system by sampling at discrete moments in time. This can also be done using the Poincaré section method, which reduces the dimensionality of the system from N to N − 1.
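The simplest version of this replacement is stroboscopic sampling: record the flow at fixed intervals Δt and treat the samples as a map. The sketch below (our illustration; the interval Δt = 0.1 is arbitrary) does this for the Lorenz system. A true Poincaré section would instead record crossings of a chosen hypersurface, e.g. with ode45 event detection via odeset('Events', ...), which is what reduces the dimension from N to N − 1.

% Stroboscopic sampling: turning a continuous-time flow into a discrete-time map
sigma = 10; b = 8/3; r = 28;
f = @(t, u) [sigma*(u(2)-u(1)); r*u(1)-u(2)-u(1)*u(3); u(1)*u(2)-b*u(3)];
dt = 0.1;                                  % sampling interval (arbitrary choice)
[~, U] = ode45(f, 0:dt:100, [1; 1; 1]);    % rows of U are the discrete states X_n
plot3(U(:,1), U(:,2), U(:,3), '.'), grid on
title('Lorenz flow sampled at discrete moments in time')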
Fig. 3.2 Examples of dynamical system attractors. Left: fixed point attractor. Right: limit cycle attractor
Chaos Properties
A dynamical system can be called chaotic if it possesses a certain group of properties:
• positive metric and topological entropies
• strong sensitivity to initial conditions
• a strange (fractal) attractor in the phase space
• exponentially fast mixing of orbits
The presence of the deterministic chaos property depends on the dimension of the system. It can be observed for nonlinear dynamical systems whose dimensionality reaches a certain level. This required level of dimensionality is due to the fact that the complexity of the state space trajectory of the evolving system can be greater for larger system dimensions. It is worth underlining the minimal dimension of nonlinear dynamical systems for which deterministic chaos can occur. As mentioned in [101], for continuous systems the minimal dimension of the state space vector is:

N_continuous ≥ 3        (3.8)
In the case of discrete systems described by maps, the condition is divided into two sub-conditions. The minimal dimensionality for invertible maps is:

N_discrete ≥ 2        (3.9)

When the map is non-invertible, the occurrence of chaos is possible even in one-dimensional maps (e.g. the logistic map). In the case of linear systems, chaos can be observed only for infinite-dimensional systems [12].
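The logistic map provides a minimal demonstration of the last point. The sketch below (our example) iterates xₙ₊₁ = a xₙ(1 − xₙ) at a = 4, where the map is non-invertible (each value has two preimages) and the orbit is chaotic.

% Chaos in a one-dimensional non-invertible map: the logistic map at a = 4
a = 4;  N = 200;  x = zeros(1, N);  x(1) = 0.2;
for n = 1:N-1
    x(n+1) = a*x(n)*(1 - x(n));       % non-invertible: two preimages per value
end
plot(1:N, x, '.-'), xlabel('n'), ylabel('x_n')
title('Aperiodic orbit of the logistic map, a = 4')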
All of the mentioned properties of deterministic chaos will be explored more deeply in the forthcoming subsections.
Positive Entropies
Positive metric and topological entropies are, along with exponential sensitivity to initial conditions, quantifiers of deterministic chaos. The foundation of the metric and topological entropies is Shannon's information theory. The basic concept here is that the observation of a dynamical system is treated as a source of information.
Let us suppose we have a partition β of the phase space of the dynamical system described by Eq. 3.2. The partition is a collection of nonempty and non-intersecting measurable sets Bᵢ:

β = {Bᵢ}        (3.10)
The partition is created with a given scale of resolution ε. With a partition of perfect resolution (ε → 0), each element of the partition corresponds to a point in the phase space of the system. In this consideration we assume the motion in the phase space is bounded, the effects of the initial condition have died out, and hence the system's state is close to its attractor. In consequence, the probability of finding the state in an element of the partition which does not contain part of the attractor is very small. By making repeated measurements of the system's state at random time intervals, we can estimate a histogram of the frequency of occurrence Pᵢ of the ith element Bᵢ of the partition. As a result we obtain the set of probabilities {Pᵢ}, which is called the coarse-grained asymptotic probability distribution.
When we take the limit as the resolution scale goes to zero and the number of samples goes to infinity, for any fixed set C on the attractor, the sum of the probabilities Pᵢ over elements of the partition covering C gives a number µ̄(C). This is a measure µ̄ on the set C with a corresponding probability density P(x):

µ̄(C) = ∫_C P(x) dx        (3.11)

Once the measure µ̄ is defined, the probability that the system state will be in element Bᵢ of the partition is given as:

Pᵢ = µ̄(Bᵢ)        (3.12)
As already mentioned, the observed dynamical system is described by Eq. 3.2. We call the measure µ̄ an invariant measure if it is preserved by the time evolution F, i.e. for every measurable set C and every time t:

µ̄(F⁻ᵗ(C)) = µ̄(C)        (3.13)
In the scope of this book we consider dynamical systems preserving an invariant measure. Let us now suppose that the observer measuring the state of the system is given the coarse-grained asymptotic probability distribution Pᵢ a priori and has good knowledge of the measure µ̄. The information obtained by making an isolated measurement is given as:

I(ε) = Σᵢ Pᵢ log Pᵢ        (3.14)

where the sum runs over the n(ε) cells with nonzero probability, and an isolated measurement means that no other measurements have been made recently and the observer knows only that the state of the system is near the attractor. Finally, the entropy arising from the measurement is given by:

H(ε) = −I(ε)        (3.15)
As the resolution scale decreases, the information gained from a measurement increases. In the limit of small ε, the slope of the graph of H(ε) versus |log ε| is called the information dimension:

D₁ = lim_{ε→0} H(ε)/|log ε|        (3.16)
Let us now move from the single isolated measurement to the accumulation of measurements. Suppose the observer is watching the dynamical system continuously and acquiring snapshots of measurements. What is the information acquisition rate of such an observer? The upper bound on the information acquisition rate is the metric entropy, or Kolmogorov-Sinai entropy.
We are still considering the case where the phase space of the evolving system is partitioned into n elements {βᵢ}. To each element βᵢ we assign the symbol sᵢ. The state of our source of information is denoted Sₘ and is described using the m − 1 previously occurred symbols as well:

Sₘ = (s₁, s₂, . . . , sₘ)        (3.17)

The probability that the source of information is in state Sₘ is denoted P(Sₘ):

P(Sₘ) = P(s₁, s₂, . . . , sₘ)        (3.18)

Note that the state of the information source is not exactly the state of the system. Once the source is in state Sₘ, the conditional probability that the next symbol will be sₘ₊₁ is P(Sₘ|sₘ₊₁), and the new information acquired in the transition to state m + 1 is given as:

I = −log P(Sₘ|sₘ₊₁)        (3.19)

The average information gained per symbol is obtained by averaging over all possible transitions from Sₘ to sₘ₊₁ and over all possible states Sₘ:

⟨I⟩ = −Σ P(Sₘ) P(Sₘ|sₘ₊₁) log P(Sₘ|sₘ₊₁)        (3.20)

where the sum runs over all states Sₘ and all symbols sₘ₊₁.
The joint probability of Sₘ and sₘ₊₁ occurring in succession is denoted as

P(Sₘ, sₘ₊₁) = P(Sₘ) P(Sₘ|sₘ₊₁)        (3.21)

The system conserves this probability in the transition to the new state:

P(Sₘ₊₁) = P(Sₘ, sₘ₊₁)        (3.22)
Utilizing the above two Eqs. 3.21 and 3.22, we can define Iₘ, the block entropy of the m-symbol states, as

Iₘ = −Σ P(Sₘ) log P(Sₘ)        (3.23)

where the sum runs over all states Sₘ. In consequence, the average amount of newly gained information per symbol emitted by the source can be rewritten as

ΔIₘ = Iₘ₊₁ − Iₘ        (3.24)
Given the sequence of observed symbols generated by the information source with time interval Δt, the information rate per unit time for this sequence of symbols is defined as

h(β, Δt) = (1/Δt) lim_{m→∞} ΔIₘ        (3.25)

Finally, the metric entropy can be defined as the maximum information rate over all choices of partition and sampling rate:

h_KS = sup_{β, Δt} h(β, Δt)        (3.26)
The topological entropy is also an upper bound on the information acquisition rate for a given symbol sequence, but without the probabilities being given. Since the information contained in a refined partition βᵐ is log Nₘ, where Nₘ is the total number of elements of βᵐ, the topological entropy is described by

h_T = lim_{m→∞} (1/m) log Nₘ        (3.27)
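The construction of Eqs. 3.17-3.26 can be tried directly on a symbol sequence. The sketch below (our illustration) generates symbols from the logistic map at a = 4 using the binary partition at x = 1/2, computes the block entropies Iₘ, and prints the increments ΔIₘ, which for this map approach the metric entropy log 2 ≈ 0.693.

% Block entropies I_m (Eq. 3.23) and increments dI_m (Eq. 3.24) for the logistic map
a = 4;  N = 2e5;  x = 0.3;  s = zeros(1, N);
for n = 1:N
    x = a*x*(1 - x);
    s(n) = x >= 0.5;                          % symbol from the binary partition
end
I = zeros(1, 6);
for m = 1:6
    blocks = zeros(N - m + 1, m);
    for j = 1:m, blocks(:, j) = s(j:N-m+j)'; end
    words = blocks * (2.^(m-1:-1:0))';        % encode each m-symbol block as an integer
    P = histcounts(words, -0.5:1:2^m-0.5) / numel(words);
    P = P(P > 0);
    I(m) = -sum(P .* log(P));                 % block entropy I_m
end
disp(diff(I))                                 % dI_m, approaching log 2 = 0.6931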
The average information gained per unit time for chaotic systems is positive: each new measurement brings new information, and the metric entropy is positive. For predictable systems new measurements do not provide any new information, and the entropy is zero. Apart from indicating chaos in the observed system, the value of the entropy answers the question of how chaotic the observed system is. There is also a relation between entropy and the Lyapunov exponents: the metric entropy, and likewise the information dimension of an attractor, can be expressed in terms of the spectrum of Lyapunov exponents. The metric entropy can be calculated from the positive Lyapunov exponents as follows:

h_KS = Σ_{λᵢ > 0} λᵢ

In the light of this relation, a system with at least one positive Lyapunov exponent has positive metric entropy; it can be called chaotic, and it has a chaotic (strange) attractor.
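For a one-dimensional map this relation is easy to check numerically, since there is a single Lyapunov exponent λ = lim (1/N) Σₙ log|M′(xₙ)|, and the metric entropy equals λ when λ > 0. The sketch below (our illustration) computes λ for the logistic map at a = 4; the known value is log 2, matching the metric entropy found from the block entropies above.

% Metric entropy of the logistic map at a = 4 via its single Lyapunov exponent
a = 4;  N = 1e6;  x = 0.3;  s = 0;
for n = 1:N
    s = s + log(abs(a*(1 - 2*x)));   % log|M'(x_n)| with M'(x) = a(1 - 2x)
    x = a*x*(1 - x);                 % iterate the map
end
lambda = s / N;                      % Lyapunov exponent; here h_KS = lambda
fprintf('lambda = %.4f, log(2) = %.4f\n', lambda, log(2))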
3.2.3 Strange Attractor
The third property possessed by chaotic systems is a strange attractor. The term was first introduced by Ruelle and Takens during their investigation of the nature of turbulence. Very good works on strange attractors have been published by Grassberger and Procaccia. In Fig. 3.2 two kinds of attractors were presented: the fixed point attractor and the limit cycle attractor. With the investigations of Takens and Ruelle another kind of attractor was defined: the strange attractor.
Let us consider the system described by Eq. 3.2 and its attractor. We say a bounded set A in the d-dimensional manifold M is a strange attractor for the flow F(X(0), t) if there is a set U with the following properties:
• The set U is a d-dimensional neighborhood of A, which means that for each point X ∈ A there is a small ball centered at X which belongs to U. In particular, A is contained in U.
• For each initial condition X(0) in U, the point X(t) after evolution for positive t still remains in U. For large t, X(t) stays arbitrarily close to A. This means A is attracting.
• The system exhibits sensitive dependence on initial conditions when X(0) ∈ U.
• One can select two initial conditions X₁(0) ∈ A and X₂(0) ∈ A such that, after evolution for a positive time t, the points X₁(t) and X₂(t) stay in A. This means the attracting set A cannot be split into two different attractors (the indecomposability condition).
Examples of chaotic systems with strange attractors are the Lorenz, Hénon, and Rössler systems. Their attractors are presented in Fig. 3.3. With the definition of the strange attractor there arose the question of how to measure this strangeness of the attractor. Peter Grassberger and Itamar Procaccia in their works proposed using a correlation integral.
Let us consider a set of points {Xᵢ} on the attractor, where i = 1, . . . , N. If the system exhibits exponential divergence of trajectories, most pairs of points (Xᵢ, Xⱼ) on the attractor (where i ≠ j) will be dynamically uncorrelated, but since they lie on the same attractor they will be spatially correlated. The spatial correlation can be measured with the correlation integral:

C(r) = lim_{N→∞} (1/N²) Σ_{i≠j} θ(r − ‖Xᵢ − Xⱼ‖)        (3.40)
where θ(x) is the Heaviside function and r is the radius. Strange attractors are also interesting because of their geometrical properties. They are also called fractal attractors and can be described by the techniques of fractal geometry. For small radii the correlation integral grows like a power law with exponent ν:

C(r) ∼ r^ν        (3.41)

The exponent ν is called the correlation exponent or correlation dimension and can be considered a very useful measure of the structure of the strange attractor. The correlation dimension is treated as a type of fractal dimension. Fractal geometry provides very useful tools for measuring the structure of strange attractors.

Fig. 3.3 Strange attractors. Upper left: Lorenz attractor. Upper right: Rössler attractor. Bottom center: Hénon attractor
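A crude numerical estimate of ν follows directly from Eqs. 3.40-3.41: compute C(r) for a range of radii and fit the slope of log C(r) against log r. The sketch below (our illustration) does this for the Hénon attractor, whose correlation dimension is known to be about 1.2; the chosen radius range and orbit length are arbitrary.

% Grassberger-Procaccia estimate of the correlation exponent nu (Eqs. 3.40-3.41)
a = 1.4; b = 0.3;  N = 2050;  X = zeros(N, 2);
for n = 1:N-1
    X(n+1,:) = [1 - a*X(n,1)^2 + X(n,2), b*X(n,1)];       % Henon map orbit
end
X = X(51:end, :);  N = size(X, 1);                        % drop the transient
D = sqrt(sum((reshape(X, N, 1, 2) - reshape(X, 1, N, 2)).^2, 3));  % pairwise distances
r = logspace(-2.5, -0.5, 15);
C = arrayfun(@(rr) (nnz(D < rr) - N) / (N*(N - 1)), r);   % C(r), excluding i = j
p = polyfit(log(r), log(C), 1);                           % slope of log C vs log r
fprintf('Estimated correlation exponent nu = %.2f\n', p(1))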
Dynamics and Numerical Solution of the Lorenz System:
Phase portraits of the Lorenz system on different planes, produced with MATLAB code, are given as follows:

Fig. 1(e): Phase portrait of the Lorenz system, x vs y
Fig. 1(f): Phase portrait of the Lorenz system, y vs z

The numerical solution of the Lorenz system (*) is computed with MATLAB's ode45 solver (an adaptive Runge-Kutta method). To show how the behavior of the solution changes with the three parameters, results are presented in the following figures. The MATLAB program is:
sigma=10; beta=8/3; ro=28; % parameter values sigma, b, r
ICs=[5, 5, 5]; % initial conditions [x0, y0, z0]
t=[0, 20]; % time span
OPTs = odeset('reltol', 1e-6, 'abstol', 1e-8);
% Right-hand side of (*): x' = sigma*(y-x), y' = ro*x - y - x*z, z' = x*y - beta*z
[time, fOUT]=ode45(@(t, x)([sigma*(x(2)-x(1)); ro*x(1)-x(2)-x(1).*x(3); x(1).*x(2)-beta*x(3)]), t, ICs, OPTs);
close all
figure
plot3(fOUT(:,1), fOUT(:,2), fOUT(:,3)), grid
xlabel('x(t)'), ylabel('y(t)'), zlabel('z(t)')
title('LORENZ functions x(t) vs. y(t) vs. z(t)')
axis tight
figure
comet3(fOUT(:,1), fOUT(:,2), fOUT(:,3))
figure %3
subplot(311)
plot(time, fOUT(:,1), 'b','linewidth', 3), grid minor
title 'LORENZ functions x(t), y(t), z(t)', xlabel 'time', ylabel 'x(t)'
subplot(312)
plot( time', fOUT(:,2), 'r', 'linewidth', 2 ), grid minor
xlabel 'time', ylabel 'y(t)'
subplot(313)
plot(time, fOUT(:,3),'k', 'linewidth', 2), grid minor, xlabel 'time', ylabel 'z(t)'
figure
plot(fOUT(:,1), fOUT(:,2), 'b', 'linewidth',1.5)
grid minor, title('LORENZ functions'), xlabel('x(t)'), ylabel 'y(t)'
axis square
figure
plot(fOUT(:,1), fOUT(:,3), 'k', 'linewidth', 1.5)
grid minor, title('LORENZ functions'), xlabel('x(t)'), ylabel 'z(t)'
axis square
figure
plot(fOUT(:,2), fOUT(:,3), 'm', 'linewidth', 1.5)
grid minor, title('LORENZ functions'), xlabel('y(t)'), ylabel 'z(t)'
axis square
Solution 1: The amplitudes and oscillating behavior of x, y, and z against time are shown in Fig. 1 for parameters sigma=10, beta=8/3, ro=28.
Figure 1
Solution 2: The amplitudes and oscillating behavior of x, y, and z against time are shown in Fig. 2 for parameters sigma=10, beta=8/3, ro=28.
Figure 2
Solution 3: The amplitudes and oscillating behavior of x, y, and z against time are shown in Fig. 3(a) for parameters sigma=10, beta=8, ro=28.
Figure 3(a)
The amplitudes and oscillating behavior of x, y, and z against time are shown in Fig. 3(b) for parameters sigma=3, beta=1, ro=78.
Figure 3(b)
The amplitudes and oscillating behavior of x, y, and z against time are shown in Fig. 3(c) for parameters sigma=3, beta=1, ro=8.
Figure 3(c)
The amplitudes and oscillating behavior of x, y, and z against time are shown in Fig. 3(d) for parameters sigma=10, beta=8/3, ro=28.
Figure 3(d)
The amplitudes and oscillating behavior of x, y, and z against time are shown in Fig. 3(e) for parameters sigma=10, beta=8, ro=78.
Figure 3(e)
Solution 4: The amplitudes and oscillating behavior of x, y, and z against time are shown in Fig. 4 for parameters sigma=10, beta=8/3, ro=28.
Figure 4
Solution 5: The amplitudes and oscillating behavior of x, y, and z against time are shown in Fig. 5 for parameters sigma=10, beta=8/3, ro=28.
Figure 5
Solution 6: The amplitudes and oscillating behavior of x, y, and z against time are shown in Fig. 6 for parameters sigma=10, beta=8/3, ro=28.
Figure 6