Chaos and Time-series Analysis - Lecture Notes (Sprott).

Introduction and Overview
Chaos And Time-Series Analysis
9/5/00 Lecture #1 in Physics 505
Biography of the Instructor (Clint Sprott)
Born and raised in Memphis, Tennessee
BS (1964) in physics from MIT
Graduate thesis in experimental plasma physics under Donald Kerst
Ph.D. (1969) from UW
1 year postdoc at UW
2 years at Oak Ridge National Lab
UW faculty since 1973
Interest in chaos began in 1988 (mentored by George Rowlands)
Current research: plasma physics, chaos (numerical), complex systems
Discussion of Course Description and Syllabus
Bibliography of Books on Chaos and Related Topics
Comments on Computers and Programming
Student Questionnaire
Chaos and Complex Systems Seminar (Tuesday noon, 4274 Chamberlin)
Chaos Software of use in the Course
Chaos Demonstrations (Sprott and Rowlands - commercial product)
Chaos Data Analyzer (Sprott and Rowlands - commercial product)
Fractint (Stone Soup Group - freeware)
Fractal eXtreme (Cygnus Software - 30-day trialware)
Physics Journals with Chaos and Related Papers
Relevant USENET discussion groups
sci.nonlinear (FAQ) - very active, good discussion
sci.fractals (FAQ) - also very good
alt.binaries.pictures.fractals - lots of nice pictures
comp.theory.dynamic-sys - not many posts
Examples of Dynamical Systems
The Solar System
The atmosphere (the weather)
The economy (stock market)
The human body (heart, brain, lungs, ...)
Ecology (plant and animal populations)
Cancer growth
Spread of epidemics
Chemical reactions
The electrical power grid
Astrophysical dynamo
The Internet
Lecture Demos
Computer noise demo
Three-body problem
Metronome
Magnetic pendulum
Driven pendulum demo and simulation
Balls in troughs
Aquarium with water and dye
Chaotic double pendulum
Chaotic toy
Ball on oscillating floor
Firehose instability
Chaotic water bucket
Dripping faucet
Logistic equation
Inductor-diode circuit
Most of these demonstrations are included on the videotape, The Wonders of Physics 1990: Chaos and Randomness, available from the UW Physics Library (QC21.2 W66 1990) or for purchase ($25).
Computer simulations are from the program Chaos Demonstrations, available from the UW Physics Library or Engineering Library (QC172.5 C45 S67 1992) or on display in the UW Physics Museum (1323 Sterling).
One-Dimensional Maps
Chaos And Time-Series Analysis
9/12/00 Lecture #2 in Physics 505
Review (last week)



Dynamical systems
o Random (stochastic) versus deterministic
o Linear versus nonlinear
o Simple (few variables) versus complex (many variables)
o Examples (solar system, stock market, ecology, ...)
Some Properties of chaotic dynamical systems
o Deterministic, nonlinear dynamics (necessary but not sufficient)
o Aperiodic behavior (never repeats - infinite period)
o Sensitive dependence on initial conditions (exponential)
o Dependence on a control parameter (bifurcation, phase transition)
o Period-doubling route to chaos (common, but not universal)
Demonstrations
o Computer animations (3-body problem, driven pendulum)
o Chaotic pendulums
o Ball on oscillating floor
o Falling leaf (or piece of paper)
o Fluids (mixing, air hose, dripping faucet)
o Chaotic water bucket
o Chaotic electrical circuits
Logistic Equation - Motivation
Exhibits many aspects of chaotic systems (prototype)
Mathematically simple
o Involves only a single variable
o Doesn't require calculus
o Exact solutions can be obtained
Can model many different phenomena
o Ecology
o Cancer growth
o Finance
o Etc...
Can be understood graphically
Exponential Growth (Discrete Time)


Xn+1 = AXn (example: compound interest)
Example of linear deterministic dynamics
Example of an iterated map (involves feedback)
Exhibits stretching (A > 1) or shrinking (A < 1)
Attracts to X = 0 (for A < 1) or X = infinity (for A > 1)
Solution is Xn = X0A^n (exponential growth or decay)
A is the control parameter (the "knob")
A = 1 is a bifurcation point.
Logistic Equation
Xn+1 = AXn(1 - Xn)
Quadratic nonlinearity (X²)
Graph of Xn+1 versus Xn is a parabola
Equivalent form: Yn+1 = B - Yn² (quadratic map)
o Y = A(X - 0.5)
o B = A²/4 - A/2
Solutions: X* = 0, 1 - 1/A (fixed point)
Graphical solution (reflection from 45° line - "cobweb diagram")
Computer simulation of logistic map
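
As a concrete illustration, here is a minimal Python sketch (not from the original notes; A and X0 are arbitrary choices) that iterates the logistic map:

    # Iterate the logistic map Xn+1 = A*Xn*(1 - Xn) and print the orbit.
    A = 4.0    # control parameter (illustrative choice)
    x = 0.1    # initial condition X0 (illustrative choice)
    for n in range(20):
        print(n, x)
        x = A * x * (1 - x)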
Bifurcations
0 < A < 1 Case:
o Only non-negative solution is X* = 0
o All X0 in the interval 0 < X0 < 1 attract to X*
o They lie in the basin of attraction
o The nonlinearity doesn't matter
1 < A < 3 Case:
o Solution at X* = 0 becomes a repellor
o Solution at X* = 1 - 1/A appears
o It is a point attractor (also called "period-1 cycle")
o Basin of attraction is 0 < X0 < 1
3 < A < 3.449... Case:
o Attractor at 1 - 1/A becomes unstable (repellor)
o This happens when df/dX < -1 (==> A > 3)
o This bifurcation is called a flip
o Growing oscillation occurs
o Oscillation nonlinearly saturates (period-2 cycle)
o Xn+2 = f(f(Xn)) = f^(2)(Xn) = Xn
o Quartic equation has four roots
 Two are the original unstable fixed points
 The other two are the new 2-cycle
3.449... < A < 3.5699... Case:
o Period-2 becomes unstable when df^(2)(X)/dX < -1
o At this value (A = 3.449...) a stable period-4 cycle is born
o The process continues with successive period doublings
o Infinite period is reached at A = 3.5699... (Feigenbaum point)
o This is the period-doubling route to chaos
o Bifurcation plot is self-similar (a fractal)
o Feigenvalues: delta = 4.6692..., alpha = 2.5029...
o Feigenvalues are universal (for all smooth 1-D unimodal maps)
3.5699... < A < 4 Case:
o Most values of A in this range produce chaos (infinite period)
o There are infinitely many periodic windows
o Each periodic window displays period doubling
o All periods are present somewhere for 3 < A < 4
A = 4 Case:
o This value of A is special
o It maps the interval 0 < X < 1 back onto itself (endomorphism)
o Notice the fold at Xn = 0.5
o Thus we have stretching and folding (silly putty demo)
o Stretching is not uniform (cf: tent map)
o Each Xn+1 has two possible values of Xn (preimages)
o Error in initial condition doubles (on average) with each iteration
o We lose 1 bit of precision with each time step
A > 4 Case:
o Transient chaos for A slightly above 4 for most X0
o Orbit eventually escapes to infinity for most X0
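
The bifurcation sequence just described is easy to visualize; here is a minimal sketch (Python, assuming numpy and matplotlib are available; the parameter range and iteration counts are arbitrary choices):

    # Bifurcation diagram of the logistic map: for each A, discard a
    # transient, then plot the iterates that remain on the attractor.
    import numpy as np
    import matplotlib.pyplot as plt

    for A in np.linspace(2.8, 4.0, 600):
        x = 0.5
        for _ in range(300):        # discard transient
            x = A * x * (1 - x)
        xs = []
        for _ in range(100):        # record the attractor
            x = A * x * (1 - x)
            xs.append(x)
        plt.plot([A] * len(xs), xs, ',k')
    plt.xlabel('A')
    plt.ylabel('X')
    plt.show()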
Other Properties of the Logistic Map (A = 4)



Eventually fixed points
o X0 = 0 and X0 = 1 - 1/A = 0.75 are (unstable) fixed points
o X0 = 0.5 --> 1 --> 0 is an eventually fixed point
o There are infinitely many such eventually fixed points
o Each fixed point has two preimages, etc..., all eventually fixed
o Although infinite in number they are a set of measure zero
o They constitute a Cantor set (Georg Cantor)
o Compare with rational and irrational numbers
Eventually periodic points
o If Xn+2 = Xn orbit is (unstable) period-2 cycle
o Solution (A = 4): X* = 0, 0.345491, 0.75, 0.904508
o 0 and 0.75 are (unstable) fixed points (as above)
o 0.345491 and 0.904508 are (unstable) period-2 cycle
o All periods are present and all are unstable
o (Unstable) period-3 orbit implies chaos (Li and Yorke)
o Each period has infinitely many preimages
o Still, most points are aperiodic (100%)
o Periodic orbits are dense on the set
Probability density (also called invariant measure)
o Many Xn values map to Xn+1 close to 1.0
o These in turn map to Xn+2 close to 0.0
o Thus the probability density peaks at 0 and 1
Actual form: P = 1/π[X(1 - X)]^1/2
Ergodic hypothesis: the average over all starting points is the same as the
average over time for a single starting point
Nonrecursive representation
o Xn = (1 - cos(2^n cos^-1(1 - 2X0)))/2
o Ref: H. G. Schuster, Deterministic Chaos, (VCH, Weinheim, 1989)

Other One-Dimensional Maps
Sine map
o Xn+1 = A sin(πXn)
o Properties similar to logistic map (except A = 1 corresponds to A = 4)
Tent map
o Xn+1 = A min(Xn, 1 - Xn)
o Piecewise linear
o Uniform stretching
o All orbits become unstable at A = 1
o Uniform (constant) probability density at A = 2
o Numerical difficulties
General symmetric map
o Xn+1 = A(1 - |2Xn - 1|^α)
o α = 1 gives the tent map
o α = 2 gives the logistic map
o α is a measure of the smoothness of the map
Binary shift map
o Xn+1 = 2Xn (mod 1)
o Stretching, cutting, and reattaching
o Resembles tent map
o Chaotic only for irrational initial conditions
o Can be used to generate pseudo-random numbers
Nonchaotic Multidimensional Flows
Chaos and Time-Series Analysis
9/19/00 Lecture #3 in Physics 505
Comments on Homework #1 (The Logistic Equation)



Some people didn't note sensitive dependence on initial conditions
Please write a short summary of your results for each question
Course grades are available on the WWW (check for accuracy)
Review (last week) - One Dimensional Maps (Logistic Map)
Xn+1 = AXn(1 - Xn)
Control parameters and bifurcations
Stable and unstable fixed points (attractors and repellors)
Basin of attraction
Periodic cycles (periodic attractors and repellors)
Period-doubling route to chaos
Bifurcation diagram (self-similar fractal)
Feigenbaum numbers ("Feigenvalues")
Chaos and periodic windows
Stretching and folding (sensitive dependence)
Transient chaos
Eventually fixed points (preimages)
Cantor set (infinite set of measure zero)
Eventually periodic points
Dense unstable periodic orbits
Period-3 implies chaos (Li and Yorke)
Numerical errors in logistic map calculation
Probability density: P = 1/π[X(1 - X)]^1/2
Ergodic hypothesis
Solution: Xn = (1 - cos(2^n cos^-1(1 - 2X0)))/2
Can be used to make fractal music
(see also The Music of José Oscar Marques)
Maps versus Flows
Maps:
o Discrete time
o Variables change abruptly
o Described by algebraic equations
o Complicated dynamics (in 1-D)
o Xn+1 = f(Xn)
o Example: Xn+1 = AXn
o Solution: Xn = A^n X0
o Growth for A > 1, decay for A < 1
o We call this an "orbit"
o n --> t, A --> e^a
Flows:
o Continuous time
o Variables change smoothly
o Described by differential equations
o Simple dynamics (in 1-D)
o dx/dt = f(x)
o Example: dx/dt = ax
o Solution: x = x0e^(at)
o Growth for a > 0, decay for a < 0
o We call this a "trajectory"
o t --> n, a --> log_e A
Logistic Differential Equation (1-D nonlinear flow)

dx/dt = ax(1 - x) [= f(x)]
[Graph of f(x) versus x]

Used by Verhulst to model population growth (1838)
Equilibria (fixed points): f(x) = 0 ==> x* = 0, 1
Stable if df/dx < 0, unstable if df/dx > 0 (evaluated at x*)
All initial conditions in (0, 1) attract to x* = 1 for all a > 0
Oscillations and chaos are impossible
Only point attractors (and repellors) are possible
There can be many equilibria in nonlinear systems
Circular Motion (2-D linear flow)
Ball on string, planetary motion, gyrating electron, etc.
dx/dt = y, dy/dt = -x, y0 = 0
Solution: x = x0 cos t (or x0 sin t)
Note: x² + y² = x0² = constant (a circle of radius x0)
Equilibrium point at (0, 0) is called a center

A center is neutrally stable (neither attracts nor repels)
Mass on a Spring (frictionless)
Newton's second law: F = ma (a = dv/dt)
Hooke's law (spring): F = -kx
Equation of motion: mdv/dt = -kx
This is a prototypical (linear) harmonic oscillator
Let m = k = 1 (or define new variables)
Then: dv/dt = -x, dx/dt = v
Motion is a circle in phase space (x, v)
2 phase-space variables for each real variable
This is because F = ma is a second order ODE
Phase-space orbit is elliptical because energy is constant
E = mv²/2 + kx²/2 ==> x² + v² = constant (for m = k = 1)
There is no attractor for the motion (only a center)
Nonautonomous Equations
These have t on the right-hand side
Example: driven mass on a spring
md2x/dt2 = -kx + A sin t
Let m = k = 1 (for simplicity)
dv/dt = -x + A sin t
dx/dt = v
Let t = 
dx/dt = v
dv/dt = -x + A sin 
d/dt = 
Can always eliminate t by adding a variable
The  direction is periodic (period 2)
The motion is on a torus (a doughnut)
Damped Harmonic Oscillator
Add friction to harmonic oscillator (i.e., F = -bdx/dt)
dx/dt = v
dv/dt = -x - bv
Equilibrium: x* = 0, v* = 0 (stable)
Mechanical energy is not conserved (dissipation)
Best understood from phase portrait
Types of equilibria for damped oscillator:
o Spiral point (focus) if b < 2 (underdamped)
o Radial point (node) if b > 2 (overdamped)
Attractor --> sink, repellor --> source
Trajectory cannot intersect itself
Poincaré-Bendixson theorem (no chaos in 2-D)
Chaos in flows requires at least 3 variables and a nonlinearity (next time)
Van der Pol Equation (2-D nonlinear ODE)

Model for electrical oscillator, heart, Cepheids
dx/dt = y
dy/dt = b(1 - x2)y - x
Unstable equilibrium point (for b > 0): x* = 0, y* = 0
Growth for small x, y; decay for large x, y
Solution attracts to a stable limit cycle

Basin of attraction is the whole x-y plane
Numerical Methods for solving ODEs

dx/dt = f(x, y), dy/dt = g(x, y)
Let h be a small interval of time (delta t)
This converts the flow to a map
Euler method:
o xn+1 = xn + hf(xn, yn)
o yn+1 = yn + hg(xn, yn)
o x and y are advanced simultaneously
o This is a first-order method
o It works poorly (orbit spirals out)
Leap-frog method:
o Good if f = f(y) and g = g(x)
o xn+2 = xn + 2hf(yn+1)
o yn+3 = yn+1 + 2hg(xn+2)
o x and y are advanced sequentially
o This is a second-order method
o Periodic orbit closes exactly
o Can be modified for f(xn, yn) and g(xn, yn)
Second order Runge-Kutta method:
o Trial step: kx = hf(xn, yn), ky = hg(xn, yn)
o xn+1 = xn + hf(xn + kx/2, yn + ky/2)
o yn+1 = yn + hg(xn + kx/2, yn + ky/2)
o Can be extended to higher order (see HW #3)
o Fourth order is a good compromise
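
To see concretely why the Euler method "works poorly" (the orbit spirals out) while Runge-Kutta does much better, here is a minimal sketch comparing the two on the frictionless oscillator dx/dt = v, dv/dt = -x, whose exact orbit is a circle of radius 1 (Python; the step size h is an arbitrary choice):

    import math

    def f(x, v): return v      # dx/dt
    def g(x, v): return -x     # dv/dt

    h = 0.1
    steps = int(2 * math.pi / h)   # roughly one period

    x, v = 1.0, 0.0                # Euler: radius grows every step
    for _ in range(steps):
        x, v = x + h * f(x, v), v + h * g(x, v)
    print("Euler radius:", math.hypot(x, v))

    x, v = 1.0, 0.0                # 2nd-order Runge-Kutta (midpoint)
    for _ in range(steps):
        kx, kv = h * f(x, v), h * g(x, v)
        x, v = (x + h * f(x + kx/2, v + kv/2),
                v + h * g(x + kx/2, v + kv/2))
    print("RK2 radius:", math.hypot(x, v))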
Dynamical Systems Theory
Chaos and Time-Series Analysis
9/26/00 Lecture #4 in Physics 505
Comments on Homework #2 (Bifurcation Diagrams)



Most everyone did fine
Number of periodic windows increases with period
o 1 period-3
o 2 period-4
o 3 period-5
o 5 period-6, etc.
Scaling of number and window width with period is an open question
Review (last week) - Nonchaotic Multidimensional Flows
Maps versus flows (discrete versus continuous time)
Exponential growth or decay (1-D linear)
o Equation: dx/dt = ax
o Solution: x = x0eat
Logistic differential equation (1-D nonlinear)
o Equation: dx/dt = ax(1 - x)
o Equilibrium points at x* = 0 and x* = 1
o Stability determined by sign of df/dx
o Stable equilibrium point is approached asymptotically
o Oscillations and chaos are not possible
Circular motion (2-D linear)
o Equations: dx/dt = y, dy/dt = -x
o Solution: x = x0 cos t, y = -x0 sin t
o Solutions are circles around a center at (0, 0)
o Center is neutrally stable (neither attracts nor repels)
Mass on a spring (2-D linear)
o Equations: dx/dt = v, dv/dt = -x
o Same as circular motion above
o (x, v) are phase-space variables
o Trajectory forms a phase portrait
Mass on a spring with friction (dissipative 2-D linear)
o Equations: dx/dt = v, dv/dt = -x -bv
o Solution attracts to a stable equilibrium point at (0, 0)
Nonautonomous systems
o These systems have an explicit time dependence
o Can remove time by defining a new variable
o Dimension increases by 1 (circle becomes a torus)
Limit cycles (van der Pol equation)
Numerical methods for solving ODEs
o Suppose dx/dt = f(x, y) and dy/dt = g(x, y)
o Let h be a small increment of time
o Euler method
o Leap-frog method
o Second order Runge-Kutta
o Fourth order Runge-Kutta
General 2-D Linear Flows
Equilibrium: x* = 0, y* = 0 (stable)
Mechanical energy is not conserved (dissipation)
Best understood from phase portrait
Types of equilibrium points in linear 2-D systems:
o Spiral point (focus) if b < 2 (underdamped)
o Radial point (node) if b > 2 (overdamped)
o Saddle point (hyperbolic point)
Attractor --> sink, repellor --> source
Trajectory cannot intersect itself
Poincaré-Bendixson theorem (no chaos in 2-D)
Chaos in flows requires at least 3 variables and a nonlinearity
Other limit cycle examples (2-D nonlinear)
Circular limit cycle
o dx/dt = y
o dy/dt = (1 - x² - y²)y - x
o Unstable equilibrium point: x* = 0, y* = 0
o Solution: x² + y² = 1 (a circle)
o Bad numerical method (Euler) can give chaos
Lorenz example (in The Essence of Chaos)
o dx/dt = x - y - x³
o dy/dt = x - x²y
o Unstable equilibrium point: x* = 0, y* = 0
o Limit cycle is approximately a circle of radius 1
o Bad numerical method (Euler) can give chaos
Stability of Equilibria in 2 Dimensions
Recall 1-D result:
o dx/dt = f(x)
o Equilibrium: f(x) = 0 ==> x*
o Stability is determined by sign of df/dx at x*
o Solution near equilibrium is x = x* + (x0 - x*)e^(λt)
o where λ = df/dx (at x = x*) is the growth rate
dx/dt = f(x, y)
dy/dt = g(x, y)
Equilibrium points: f = g = 0 ==> x*, y*
Calculate the Jacobian matrix J at x*, y*
Let fx be the partial derivative of f with respect to x, etc.
The eigenvalues λ of J satisfy: (fx - λ)(gy - λ) = fygx
This is the characteristic equation (quadratic in 2-D, etc.)
Solutions are of the form: x = x0e^(λt), y = y0e^(λt)
There are 2 solutions for λ (real or complex conjugates)
Example: Damped Harmonic Oscillator (2-D linear)
dx/dt = y (velocity)
dy/dt = -x - by (force)
fx = 0, fy = 1, gx = -1, gy = -b
Characteristic equation: λ² + bλ + 1 = 0
Solutions: λ = -b/2 ± (b² - 4)^1/2/2 (eigenvalues)
First case: b > 2
o Overdamped
o Two negative real eigenvalues
o This gives a radial point (node)



Second case: b = 2
o Critical damping
o Two negative equal eigenvalues
Third case: 0 < b < 2
o Underdamped
o Two complex-conjugate eigenvalues with negative real parts
o This gives a spiral point (focus)
Fourth case: b < 0 (negative damping)
o Exponential growth
o Eigenvalues with positive real parts
o Attractors become repellors
Saddle Points (or hyperbolic points)


Example (note similarity to harmonic oscillator)
o dx/dt = y
o dy/dt = x
Eigenvalues are 1 = 1, 2 = -1

The flow is of the form:
Unstable manifold (outset): λ > 0
Stable manifold (inset): λ < 0
Also called separatrices (trajectories can't cross)
Separatrices given by the eigenvectors of JR = λR
Consult any book on linear algebra
We won't be using the eigenvectors
The separatrices organize the phase space
The eigenvalues allow prediction of bifurcations
Area Contraction (or expansion) in 2-D
λ1λ2 = det J = fxgy - fygx (determinant of J)
λ1 + λ2 = trace J = fx + gy (trace of J)
Expanding direction: dE/dt = λ1E (λ1 > 0)
Contracting direction: dC/dt = λ2C (λ2 < 0)
Phase space area: A = CE sin θ
dA/dt = C(dE/dt) sin θ + E(dC/dt) sin θ = CE(λ1 + λ2) sin θ
(dA/dt)/A = λ1 + λ2 (fractional rate of expansion)
In higher dimensions: (dV/dt)/V = λ1 + λ2 + ... (sum of eigenvalues)
V is the phase-space volume of initial conditions
Sum of the eigenvalues must be negative for an attractor
Flows in 3 Dimensions

Types of equilibria
o Cubic characteristic equation
o Three eigenvalues (3 real, or 1 real and a complex-conjugate pair)
o Attracting equilibrium points (2 types)
o Repelling equilibrium points (2 types)
o Saddle points (4 types)
o Index: number of eigenvalues with Re(λ) > 0 [or dimension of the unstable manifold]
o Chaos occurs with 2 or more unstable equilibria
Attractors in 3-D flows
o Equilibrium point (as in 1 and 2 dimensions)
o Limit cycle (as in 2 dimensions)
o Torus (quasiperiodic - 2 incommensurate frequencies)
o Strange (chaotic) attractors
Examples of chaotic dissipative flows in 3-D:
o Driven pendulum
 dx/dt = v
 dv/dt = -sin x - bv + A sin wt
 A = 0.6, b = 0.05, w = 0.7
o Driven nonlinear oscillator (Ueda)
 dx/dt = v
 dv/dt = -x³ - bv + A sin wt
 A = 2.5, b = 0.05, w = 0.7
o Driven Duffing oscillator
 dx/dt = v
 dv/dt = x - x³ - bv + A sin wt
 A = 0.7, b = 0.05, w = 0.7
o Driven Van der Pol oscillator
 dx/dt = v
 dv/dt = -x + b(1 - x²)v + A sin wt
 A = 0.61, b = 1, w = 1.1 (a torus)
o Lorenz attractor
 dx/dt = p(y - x)
 dy/dt = -xz + rx - y
 dz/dt = xy - bz
 p = 10, r = 28, b = 8/3
o Rössler attractor
 dx/dt = -y - z
 dy/dt = x + ay
 dz/dt = b + z(x - c)
 a = b = 0.2, c = 5.7
o Simplest dissipative chaotic flow
 dx/dt = y
 dy/dt = z
 dz/dt = -x + y² - Az
 A = 2.107
Other simple chaotic flows
Lyapunov Exponents
Chaos and Time-Series Analysis
10/3/00 Lecture #5 in Physics 505
Comments on Homework #3 (Van der Pol Equation)
Some people only took initial conditions inside the attractor
For b < 0 the attractor becomes a repellor (time reverses)
The driven system can give limit cycles and toruses but not chaos (?)
Can get chaos if you drive the dx/dt equation instead of dy/dt
Review (last week) - Dynamical Systems Theory

Types of attractors/repellors:
o Equilibrium points (radial, spiral, saddle) 0-D
o Limit cycles (closed loops) 1-D
o 2-Toruses (quasiperiodic surfaces) 2-D
o N-Toruses (hypersurfaces) N-D
o Strange attractors (fractal) Non-integer D
(Attractor dimension < system dimension)


Stability of equilibrium points:
o Find equilibrium point: f(x) = 0 ==> x*, etc.
o Calculate partial derivatives fx etc. at equilibrium
o Construct the Jacobian matrix J
o Find the characteristic equation: det(J - λI) = 0
o Solve for the D eigenvalues: λ1, λ2, ..., λD
o Find the eigenvectors (if needed) from JR = λR
 Stable and unstable manifold (inset & outset)
 Organize the phase space
o Plot position of eigenvalues in the complex plane
o If any have Re(λ) > 0, point is unstable
o Index is number of eigenvalues with Re(λ) > 0
o Dimension of outset = index
o Volume expansion: (dV/dt)/V = λ1 + λ2 + λ3 + ...
o By convention, λ1 > λ2 > λ3 > ...
o An attractor has dV/dt < 0
o Different rules for stability of fixed points for maps
 In 1-D, X = X0λ^n is stable if |λ| < 1
 In 2-D and higher, stable if all λ are inside the unit circle
 Bifurcations occur when λ touches the unit circle
Examples of chaotic dissipative flows in 3-D:
o Driven pendulum
 dx/dt = v
 dv/dt = -sin x - bv + A sin wt
 A = 0.6, b = 0.05, w = 0.7
o Driven nonlinear oscillator (Ueda)
 dx/dt = v
 dv/dt = -x³ - bv + A sin wt
 A = 2.5, b = 0.05, w = 0.7
o Driven Duffing oscillator
 dx/dt = v
 dv/dt = x - x³ - bv + A sin wt
 A = 0.7, b = 0.05, w = 0.7
o Driven Van der Pol oscillator
 dx/dt = v
 dv/dt = -x + b(1 - x²)v + A sin wt
 A = 0.61, b = 1, w = 1.1 (a torus)
 Can get chaos with drive in dx/dt equation
o Lorenz attractor
 dx/dt = p(y - x)
 dy/dt = -xz + rx - y
 dz/dt = xy - bz
 p = 10, r = 28, b = 8/3
o Rössler attractor
 dx/dt = -y - z
 dy/dt = x + ay
 dz/dt = b + z(x - c)
 a = b = 0.2, c = 5.7
o Simplest dissipative chaotic flow
 dx/dt = y
 dy/dt = z
 dz/dt = -x + y² - Az
 A = 2.107
o Other simple chaotic flows
General Properties of Lyapunov Exponents
A measure of chaos (how sensitive to initial conditions?)
Lyapunov exponent is a generalization of an eigenvalue
Average the phase-space volume expansion along trajectory
2-D example:
o Circle of initial conditions evolves into an ellipse
o Area of ellipse: A = πd1d2/4
o where d1 = d0e^(λ1t) is the major axis
o and d2 = d0e^(λ2t) is the minor axis
o Magnitude and direction continually change
o We must average along the trajectory
o As with eigenvalues, (dA/dt)/A = λ1 + λ2
Note: λ is always real (sometimes base-2, not base-e)
For chaos we require λ1 > 0 (at least one positive LE)
By convention, LEs are ordered from largest to smallest
In general for any dimension:
o (hyper)sphere evolves into (hyper)ellipsoid
o One Lyapunov exponent per dimension
Units of Lyapunov exponent:
o Units of λ are inverse seconds for flows
o Or inverse iterations for maps
o Alternate units: bits/second or bits/iteration
Caution: False indications of chaos
o Unbounded orbits can have 1 > 0
o Orbits can separate but not exponentially
o Can have transient chaos
Lyapunov Exponent for 1-D Maps
Suppose Xn+1 = f(Xn)
Consider a nearby point Xn + ΔXn
Taylor expand: ΔXn+1 = (df/dX)ΔXn + ...
Define e^λ = |ΔXn+1/ΔXn| = |df/dX| (local Lyapunov number)
Local Lyapunov exponent: λ = log |df/dX|
Can use any base such as loge (ln) or log2
Since df/dX is usually not constant over the orbit,
We average <log |df/dX|> over many iterations
For example, logistic map:
o df/dX = A(1 - 2X), and
o log |df/dX| is minus infinity at X = 1/2
o λ(A) has a complicated shape
o There are infinitely many negative spikes
o A = 4 gives λ = ln(2) (or 1 bit per iteration)
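
This average is easy to compute numerically; a minimal sketch (Python; the transient length and iteration count are arbitrary choices):

    # Lyapunov exponent of the logistic map from lambda = <ln |A(1 - 2X)|>.
    import math

    A = 4.0
    x = 0.1
    for _ in range(1000):          # discard transient
        x = A * x * (1 - x)
    total, N = 0.0, 100000
    for _ in range(N):
        total += math.log(abs(A * (1 - 2 * x)))
        x = A * x * (1 - x)
    print("lambda =", total / N, "(expect ln 2 =", math.log(2), ")")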
Lyapunov Exponents for 2-D Maps
Suppose Xn+1 = f(Xn, Yn), Yn+1 = g(Xn, Yn)
Area expansion: An+1 = Ane^(λ1+λ2) (as with eigenvalues)
λ1 + λ2 = <log (An+1/An)> = <log |det J|> = <log |fxgy - fygx|>
For example, Hénon map:
o Xn+1 = 1 - CXn² + BYn [= f(X, Y)]
o Yn+1 = Xn [= g(X, Y)]
o Alternate representation: Xn+1 = 1 - CXn² + BXn-1
o Note: This reduces to the quadratic map for B = 0
o Usual parameters for chaos: B = 0.3, C = 1.4
o λ1 + λ2 = <log |fxgy - fygx|> = log |-B| = -1.204 (base-e)
(or -1.737 bits per iteration in base-2)
o Contraction is the same everywhere (unusual)
o Numerical calculation gives λ1 = 0.419 (base-e)
(or 0.605 bits per iteration in base-2)
o Hence λ2 = -1.204 - 0.419 = -1.623 (base-e)
(or -2.342 bits per iteration in base-2)
Lyapunov Exponents for 3-D Flows


Sum of LEs: λ1 + λ2 + λ3 = <trace J> = <fx + gy + hz>
o Must be negative for an attractor (dissipative system)
o This is the divergence of the flow
o It is the fractional rate of volume expansion (or contraction)
o For a conservative (Hamiltonian) system, sum is zero
For non-point attractors, one exponent must be 0 [corresponding to the direction of the flow]

For a chaotic system, one exponent must be positive
Numerical Calculation of Largest Lyapunov Exponent
1. Start with any initial condition in the basin of attraction
2. Iterate until the orbit is on the attractor
3. Select (almost any) nearby point (separated by d0)
4. Advance both orbits one iteration and calculate new separation d1
5. Evaluate log |d1/d0| in any convenient base
6. Readjust one orbit so its separation is d0 in same direction as d1
7. Repeat steps 4-6 many times and calculate average of step 5
8. The largest Lyapunov exponent is λ1 = <log |d1/d0|>
9. If the map approximates an ODE, then λ1 = <log |d1/d0|> / h
10. A positive value of λ1 indicates chaos
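
A sketch of these steps applied to the Hénon map of the previous section (Python; d0 and the iteration counts are arbitrary choices):

    import math

    def henon(x, y, B=0.3, C=1.4):
        return 1 - C * x * x + B * y, x

    x, y = 0.1, 0.1
    for _ in range(1000):              # steps 1-2: get onto the attractor
        x, y = henon(x, y)
    d0 = 1e-8
    xa, ya = x + d0, y                 # step 3: a nearby point
    total, N = 0.0, 100000
    for _ in range(N):                 # step 7: repeat many times
        x, y = henon(x, y)             # step 4: advance both orbits
        xa, ya = henon(xa, ya)
        d1 = math.hypot(xa - x, ya - y)
        total += math.log(d1 / d0)     # step 5
        xa = x + d0 * (xa - x) / d1    # step 6: readjust separation to d0
        ya = y + d0 * (ya - y) / d1
    print("lambda1 =", total / N)      # step 8: expect about 0.419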
General character of exponents in 3-D flows:
(λ1, λ2, λ3) Attractor
(-, -, -) equilibrium point
(0, -, -) limit cycle
(0, 0, -) 2-torus
(+, 0, -) strange (chaotic)
For flows in dimension higher than 3:
o (0, 0, 0, -, ...) 3-torus, etc.
o (+, +, 0, -, ...) hyperchaos, etc.
Kaplan-Yorke (Lyapunov) Dimension
Attractor dimension is a geometrical measure of complexity
Random noise is infinite dimensional (infinitely complex)
How do we calculate the dimension of an attractor? (many ways)
Suppose system has dimension N (hence N Lyapunov exponents)
Suppose the first D of these sum to zero
Then the attractor would have dimension D
(in D dimensions there would be neither expansion nor contraction)

In general, find the largest D for which λ1 + λ2 + ... + λD > 0
(The integer D is sometimes called the topological dimension)


The attractor dimension would be between D and D + 1
However, we can do better by interpolating:
DKY = D + (1 + 2 + ... + D) / |D+1|
The Kaplan-Yorke conjecture is that DKY agrees with other methods
Multipoint interpolation doesn't work
2-D Map Example: Hénon map (B = 0.3, C = 1.4)
o 1 = 0.419 and 2 = -1.623
o D = 1 and DKY = 1 + 1 / |2| = 1 + 0.419 / 1.623 = 1.258
o Agrees with intuition and other calculations
3-D Flow Example: Lorenz Attractor (p = 10, r = 28, b = 8/3)
o Numerical calculation gives 1 = 0.906
o Since it is a flow, 2 = 0
o 1 + 2 + 3 = <fx + gy + hz> = -p - 1 - b = -13.667
o Therefore,  = -14.572
o D = 2 and DKY = 2 + 1 / |3| = 2 + 0.906 / 14.572 = 2.062
o Chaotic flows always have DKY > 2
[Chaotic maps can have any dimension]
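
The interpolation formula above is a one-liner to implement; a minimal sketch (Python; the function name kaplan_yorke is ours, not from the notes):

    # Kaplan-Yorke dimension from Lyapunov exponents sorted largest first.
    def kaplan_yorke(exponents):
        s, D = 0.0, 0
        for lam in exponents:
            if s + lam < 0:              # lam is the first exponent that
                return D + s / abs(lam)  # makes the running sum negative
            s += lam
            D += 1
        return float(D)                  # sum never went negative

    print(kaplan_yorke([0.419, -1.623]))        # Henon map: 1.258
    print(kaplan_yorke([0.906, 0, -14.572]))    # Lorenz attractor: 2.062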
Precautions
Be sure orbit is bounded and looks chaotic
Be sure orbit has adequately sampled the attractor
Watch for contraction to zero within machine precision
Test with different initial conditions, step size, etc.
Supplement with other tests (Poincaré section, Power spectrum, etc.)
Strange Attractors
Chaos and Time-Series Analysis
10/10/00 Lecture #6 in Physics 505
Comments on Homework #4 (Lorenz Attractor)


Everyone did a good job
To get smooth graphs, make h smaller or connect the dots
Review (last week) - Lyapunov Exponents
Lyapunov Exponents are a dynamical measure of chaos
There are as many exponents as the system has dimensions
(dV/dt)/V = λ1 + λ2 + λ3 + ...
o = <log |det J|> for maps
o = <trace J> = <fx + gy + hz + ...> for flows
o Where J is the Jacobian matrix
o The sum must be negative for an attractor
o The sum must be zero for a conservative (Hamiltonian) system
For chaos we require λ1 > 0 (at least one positive LE)
For 1-D Maps, λ = <log |df/dX|>
2-D example, Hénon map:
o Xn+1 = 1 - CXn² + BYn [= f(X, Y)]
o Yn+1 = Xn [= g(X, Y)]
o Usual parameters for chaos: B = 0.3, C = 1.4
o λ1 + λ2 = <log |fxgy - fygx|> = log |-B| = -1.204 (base-e)
o Numerical calculation gives λ1 = 0.419 (base-e)
o Hence λ2 = -1.204 - 0.419 = -1.623 (base-e)
o Fixed points at x* = y* = -1.1313445 and x* = y* = 0.63133545
General character of Lyapunov exponents in flows:
o (-, -, -, -, ...) fixed point (0-D)
o (0, -, -, -, ...) limit cycle (1-D)
o (0,0, -, -, ...) 2-torus (2-D)
o (0, 0, 0, -, ...) 3-torus, etc. (3-D, etc.)
o (+, 0, -, -, ...) strange (chaotic) (2+-D)
o (+, +, 0, -, ...) hyperchaos, etc. (3+-D)
Numerical Calculation of Largest Lyapunov Exponent
o Start with any initial condition in the basin of attraction
o Iterate until the orbit is on the attractor
o Select (almost any) nearby point (separated by d0)
o Advance both orbits one iteration and calculate new separation d1
o Evaluate log |d1/d0| in any convenient base
o Readjust one orbit so its separation is d0 in same direction as d1
o Repeat steps 4-6 many times and calculate average of step 5
o The largest Lyapunov exponent is λ1 = <log |d1/d0|>
o If the map approximates an ODE, then λ1 = <log |d1/d0|> / h
o A positive value of λ1 indicates chaos
Shadowing lemma: The computed orbit shadows some possible orbit
Kaplan-Yorke (Lyapunov) Dimension
o Attractor dimension is a geometrical measure of complexity
o Random noise is infinite dimensional (infinitely complex)
o How do we calculate the dimension of an attractor? (many ways)
o Suppose system has dimension N (hence N Lyapunov exponents)
o Suppose the first D of these sum to zero
o Then the attractor would have dimension D
o (in D dimensions there would be neither expansion nor contraction)
o In general, find the largest D for which λ1 + λ2 + ... + λD > 0
o (The integer D is sometimes called the topological dimension)
o The attractor dimension would be between D and D + 1
o However, we can do better by interpolating:
o DKY = D + (1 + 2 + ... + D) / |D+1|
o The Kaplan-Yorke conjecture is that DKY agrees with other methods
o 2-D Map Example: Hénon map (B = 0.3, C = 1.4)
 1 = 0.419 and 2 = -1.623
 D = 1 and DKY = 1 + 1 / |2| = 1 + 0.419 / 1.623 = 1.258
 Agrees with intuition and other calculations
o 3-D Flow Example: Lorenz Attractor (p = 10, r = 28, b = 8/3)
o Numerical calculation gives 1 = 0.906
o Since it is a flow, 2 = 0
o 1 + 2 + 3 = <fx + gy + hz> = -p - 1 - b = -13.667
o Therefore,  = -14.572
o D = 2 and DKY = 2 + 1 / |3| = 2 + 0.906 / 14.572 = 2.062
o Chaotic flows always have DKY > 2
o Chaotic maps always have DKY > 1
o Higher order interpolations are possible
Precautions
o Be sure orbit is bounded and looks chaotic
o Be sure orbit has adequately sampled the attractor
o Watch for contraction to zero within machine precision
o Test with different initial conditions, step size, etc.
o Supplement with other tests (Poincaré section, Power spectrum, etc.)
Strange Attractors
This is a "lecture within a lecture"
Sample strange attractors and their properties will be discussed.
Typical experimental data often lacks obvious structure.
One goal is to find evidence of determinism in such data.
An Example is a 2-D quadratic map.
Models developed to fit chaotic data are seldom chaotic.
This raises the question of how common is chaos?
The Hénon map is about 6% chaotic over its bounded range.
The Hénon map is a strange attractor with fractal structure.
The Mandelbrot set is chaotic almost nowhere.
It has a very complicated basin boundary.
General 2-D quadratic maps are 10-20% chaotic.
Maps become less and flows more chaotic as dimension increases.
Artificial neural networks are another kind of iterated map.
Neural networks become more chaotic as dimension increases.
There are many types of attractors.
Strange attractors have many interesting properties:
o Limit set as t --> infinity
o Set of measure zero
o Basin of attraction
o Fractal structure
 non-integer dimension
 self-similarity
 infinite detail
o Chaotic dynamics
 sensitivity to initial conditions
 topological transitivity
 dense periodic orbits
o Aesthetic appeal
Strange attractors are produced by stretching and folding.
Attractor dimension increases with system dimension.
Lyapunov exponent decreases with system dimension.
Attractor search turned up the simplest chaotic flow.
Simplest flow has a strange attractor that is a Möbius strip.
There are also conservative chaotic systems, but they have no attractors.
Strange attractors can be used as general approximators.
They can also be used to generate computer art.
Bifurcations
Chaos and Time-Series Analysis
10/17/00 Lecture #7 in Physics 505
Comments on Homework #5 (Hénon Map)



Everyone did fine
Many noted the large number of iterates required when zooming in on the attractor
Only a couple of people had a good plot of the basin of attraction
Review (last week) - Strange Attractors
Kaplan-Yorke (Lyapunov) Dimension
o DKY = D - (1 + 2 + ... + D) / D+1
o where D is the largest integer for which 1 + 2 + ... + D > 0
o DKY = 1.258 for Hénon map (B = 0.3, C = 1.4)
o DKY = 2.062 for Lorenz attractor (p = 10, r = 28, b = 8/3)
o Chaotic flows always have DKY > 2
o Hence we need visualization techniques
o Chaotic maps always have DKY > 1
o Why not use a multipoint interpolation?
Is chaos the rule or the exception?
o Polynomial maps and flows
o Artificial neural networks
Examples of strange attractors
Properties of strange attractors
Dimension of strange attractors
Strange attractors as general approximators
Strange attractors as objects of art
What is the "Most Chaotic" 2-D Quadratic Map?



This work is unpublished
Use genetic algorithm to maximize 1 for 12 parameters
o Mate 2 chaotic cases (arbitrarily chosen)
o Kill inferior offspring (eugenics)
o Introduce occasional mutations
o Replace parents with superior children
The answer? - 2 decoupled logistic maps!
o Xn+1 = 4Xn(1 - Xn)
o Yn+1 = 4Yn(1 - Yn)
o This system has 1 =  = log(2) as expected
o It is area expanding but folded in both directions
o Its Kaplan-Yorke dimension is DKY = 2


Largest Lyapunov exponent generally decreases with D
Implications
o Complex systems evolve at the "edge of chaos"
o Allows exploration of new regions of state space
o But retains good short-term memory
Shift Map (1-D Nonlinear)



Start with logistic map: Xn+1 = 4Xn(1 - Xn)
Let X = sin2 Y
Then sin2 Yn+1 = 4 sin2 Yn (1 - sin2 Yn)
= 4 sin2 Yn cos2 Yn





But 2 sin  cos  = sin 2 (from trigonometry)
Hence sin2 Yn+1 = sin2 2Yn
Thus Yn+1 = 2Yn (mod 1) (shift map)
"mod 1" means take only the fractional part of 2Yn
Caution: mod only works for integers on some compilers
In this case, use instead:
IF X >= 1 THEN X = X - INT(X)
IF X < 0 THEN X = INT(X) - X

Shift map is conjugate to the logistic map
(equivalent except for a nonlinear change of variables)


More specifically, this is a piecewise linear map
Maps the unit interval (0, 1) back onto itself twice:



Involves a stretching and tearing
Lyapunov exponent:  = log(2)
Invariant measure (probability density) is uniform
Generates apparently random numbers in (0, 1)
But these numbers are strongly correlated (obviously)
Solution: Yn = 2^nY0 mod 1
Why is it called a "shift map"?
o Represent initial condition in binary: 0.1011010011...
o Or in (left/right) symbols: RLRRLRLLRR...
o Each iteration left-shifts by 1: 1.011010011...
o Mod 1 discards the leading 1: 0.011010011...
o The sequence is determined by the initial condition
o Only irrational initial conditions give chaos
o Any sequence of RL's can be generated
o Computer hint: Use Xn+1 = 1.999999Xn mod 1
Dynamics are similar in the tent map:
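
The computer hint above is easy to demonstrate: in binary floating point every stored number is a finite binary fraction, so the exact doubling map shifts away all of its bits and the orbit collapses to zero (a minimal Python sketch):

    x = 0.1
    for _ in range(60):
        x = (2.0 * x) % 1.0
    print("multiplier 2.0:     ", x)    # collapses to 0.0

    x = 0.1
    for _ in range(60):
        x = (1.999999 * x) % 1.0
    print("multiplier 1.999999:", x)    # still wandering in (0, 1)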
Computer Random Number Generators
A generalization of the shift map: Yn+1 = (AYn + B) mod C
A, B, and C must be chosen optimally (large integers)
o A is the number of cycles
o B is the "phase" (horizontal shift)
o C is the number of distinct values
o Example: A = 1366, B = 150889, C = 714025
o There are many other choices (see Knuth)
Must also choose an initial "seed" Y0
This is called a "linear congruential generator"
Lyapunov exponent:  = log(A) (very large)
Numbers produced this way are pseudo-random
The sequence will repeat after at most C steps
In QuickBASIC the numbers repeat after 16,777,216 = 2^24 steps


The repetition time is much longer in PowerBASIC
Cycle time can be increased with shuffling
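
A minimal sketch of such a generator with the example constants quoted above (Python; the seed is an arbitrary choice):

    # Linear congruential generator Yn+1 = (A*Yn + B) mod C.
    A, B, C = 1366, 150889, 714025

    def lcg(seed, n):
        y, out = seed, []
        for _ in range(n):
            y = (A * y + B) % C
            out.append(y / C)      # scale to (0, 1)
        return out

    print(lcg(12345, 5))           # repeats after at most C = 714025 steps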
Intermittency - Logistic Map at A = 3.8284



This is just to the left of the period-3 window
Dynamics change abruptly from period-3 to chaos
Time series (Xn versus n):
This is a result of a tangent bifurcation (Xn+3 versus Xn)
Can be understood by the cobweb diagram
Orbit gets caught for many iterations in a narrow channel
This is the intermittency route to chaos (cf: transient chaos)
Bifurcations - General
A qualitative change in behavior at a critical parameter value
Observation of a bifurcation verifies determinism
Flows are often analyzed using their maps (Poincaré section)
Classifications:
o Local - involves one or more equilibrium points
o Global - equilibrium points appear or vanish
o Continuous (subtle) - eigenvalues cross unit circle
o Discontinuous (catastrophic) - eigenvalues appear or vanish
o Explosive - like catastrophic but no hysteresis
(occur when attractor touches the basin boundary)
There are dozens of bifurcations, many not discovered
Terminology is not precise or universal (still evolving)
Transcritical Bifurcation
o A simple form where a stable fixed point becomes unstable
o Or an unstable point becomes stable
o Example: Fixed points of logistic map
 X* = 0, 1 - 1/A
 At A = 1, stability of points switch
o Exchange of stability between two fixed points
Pitchfork Bifurcation
o This is a local bifurcation
o Stable branch becomes unstable
o Two new stable branches are born
o Happens when eigenvalue of fixed point reaches +1
o This usually occurs when there is a symmetry in the problem
Flip Bifurcation
o As above but solution oscillates between the branches
o This is the common period-doubling route to chaos
o As occurs in the logistic map at 3 < A < 3.5699
o Happens when eigenvalue of fixed point reaches -1
o Can double and then halve without reaching chaos
o Can occur only in maps (not flows)
Tangent (or Saddle-Node or Blue Sky) Bifurcation
o This was previously discussed under intermittency
o Provides a new route to chaos
o This is also a local bifurcation
o It is sometimes called an interior crisis
o Basic mechanism for creating and destroying fixed points
Catastrophe (1-D example)
o Cubic map: Xn+1 = AXn(1 - Xn²)
o Anti-symmetric about Xn = 0 (allows negative solutions)
o Catastrophe occurs at A = 27^1/2/2 = 2.59807... where attractors collide
o This system exhibits hysteresis (decrease in A can leave X < 0)
o Also occurs in two back-to-back logistic maps
o Can also have infinitely many attractors
 Process equation: Xn+1 = Xn + A sin(Xn)
 Fixed points: X* = nπ for n = 0, ±1, ±2, ...
 Attractors collide at A = 4.669201...
 Orbits diffuse in X for A > 4.669201...
Hopf Bifurcation
o A stable focus becomes unstable and a limit cycle is born
o Example: Van der Pol equation at b = 0
o This bifurcation is local and continuous
o It occurs when complex eigenvalues touch the unit circle
Niemark (or Secondary Hopf) Bifurcation
o A stable limit cycle becomes unstable and a 2-torus is born
o The Poincaré section exhibits a Hopf bifurcation
o Main sequence (quasi-periodic route to chaos)
 fixed point --> limit cycle --> 2-torus --> chaos
 N-torus with N > 2 not usually seen (Peixoto's theorem)
(3-torus and higher are structurally unstable)
o This contradicts the Landau theory of turbulence
(turbulence is a sum of very many periodic modes)
o Also called the Newhouse-Ruelle-Takens route
o Probably the most common route to chaos at high-D
Hamiltonian Chaos
Chaos and Time-Series Analysis
10/24/00 Lecture #8 in Physics 505
Warning: This is probably the most technically difficult lecture of the course.
Comments on Homework #6 (Lyapunov Exponent)



Not everyone had a good graph of LE versus C for B = 0.3
Some had numerical troubles with unbounded orbits (C > 1.42)
BASIC code for doing part 3 has been put on the WWW
Review (last week) - Bifurcations
Bifurcation is a qualitative change in behavior at a critical parameter value
Observation of a bifurcation verifies determinism
Flows are often analyzed using their maps (Poincaré section)
Classifications:
o Local - involves single equilibrium points
o Global - equilibrium points appear or vanish
o Continuous (subtle) - eigenvalues cross unit circle
o Discontinuous (catastrophic) - eigenvalues appear or vanish
o Explosive - like catastrophic but no hysteresis
(occur when attractor touches the basin boundary)
There are dozens of bifurcations, many not discovered
Terminology is not precise or universal (still evolving)
Transcritical Bifurcation
Pitchfork Bifurcation
Flip Bifurcation
Tangent (or Saddle-Node or Blue Sky) Bifurcation
Catastrophe (1-D example)
Hopf Bifurcation
Niemark (or Secondary Hopf) Bifurcation
o A stable limit cycle becomes unstable and a 2-torus is born
o The Poincaré section exhibits a Hopf bifurcation
o Main sequence (quasi-periodic route to chaos)
 fixed point --> limit cycle --> 2-torus --> chaos
 N-torus with N > 2 not usually seen (Peixoto's theorem)
(3-torus and higher are structurally unstable)
o This contradicts the Landau theory of turbulence
(turbulence is a sum of very many periodic modes)
o Also called the Newhouse-Ruelle-Takens route
o Probably the most common route to chaos at high-D
Hamiltonian Systems - Introduction and Motivation
These are systems that conserve mechanical energy
They have no dissipation (frictionless)
They are of historical interest and importance
Examples (all from physics):
o Planetary motion (recall 3-body problem)
o Charged particles in magnetic fields
o Incompressible fluid flows (liquids)
o Trajectories of magnetic field lines
o Quantum mechanics
o Statistical mechanics
A Case Study - Mass on a Spring (frictionless)
dx/dt = v
dv/dt = -(k/m)x
This system has 1 spatial dimension (1 degree of freedom)
It has a 2-D phase space however
Solution: kx² + mv² = constant (conservation of energy)
Hamiltonian: H = kx²/2 + mv²/2 (total energy)
o kx²/2 is the potential energy (stored in spring)
o mv²/2 is the kinetic energy (energy of motion)
Let k = m = 1 for simplicity
Given the Hamiltonian, we can get the equations of motion:
o dx/dt = ∂H/∂v = v
o dv/dt = -∂H/∂x = -x
o where ∂ denotes a partial derivative
The motion occurs along a 1-D curve in 2-D space
This curve is not a limit cycle (it is a center)

Such a system cannot exhibit chaos (even if driven)
Hamilton's Equations (N Dimensions)
Generalize the above ideas to dimensions N > 1:
o dqi/dt = ∂H/∂pi (q is a generalized coordinate)
o dpi/dt = -∂H/∂qi (p is a generalized momentum = mv)
p and q constitute the phase space for the dynamics
N-dimensional dynamics have a 2N-dimensional phase space
pi and qi (for i = 1 to N) are the phase space variables
Note: dH/dt = (∂H/∂p)(dp/dt) + (∂H/∂q)(dq/dt) = 0
H is a constant of the motion (Hamiltonian)
There may be other constants (say a total of k)
The dynamics are constrained to a 2N - k dimensional space
Hamilton's equations are just another dynamical ODE system:
o dq/dt = f(p, q)
o dp/dt = g(p, q)
o ...
Note: (dV/dt)/V = trace J = fq + gp + ... = ∂/∂q[∂H/∂p] + ∂/∂p[-∂H/∂q] + ... = 0

Phase-space volume is conserved
Properties of Hamiltonian Systems
They have no dissipation (frictionless)
There are one or more conserved quantities (energy, ...)
They are described by a Hamiltonian function H
There are 2N dimensions for N degrees of freedom
Motion is on a 2N - k dimensional (hyper)surface
k + 1 Lyapunov exponents are equal to zero
There are no attractors (or attractor = basin)
Transients don't die away (no need to wait)
Equations are time-reversible
Orbit returns arbitrarily close to the initial condition
Phase-space volume is conserved (Liouville's theorem)
The flow is incompressible (like water)
The Lyapunov exponents sum to zero
Chaos can occur only for N > 1 (at least 2 degrees of freedom)
The dynamics occur in a space of integer dimension
This space may be a (fat) fractal however (many holes)
2-D Symplectic (Area-Preserving) Maps
Xn+1 = f(Xn, Yn)
Yn+1 = g(Xn, Yn)
An+1/An = |det J| = |fxgy - fygx| = 1
Example: Hénon map with B = 1
o Xn+1 = 1 - CXn² + Yn
o Yn+1 = Xn
o An+1/An = |0 - (1)(1)| = 1
o Computer demo (C = 0.3)
More general polynomial symplectic map:
o Xn+1 = A + Yn + F(Xn)
o Yn+1 = B - Xn
o One choice of F is C1 + C2X + C3X² + ...
o Verify that this has An+1/An = |det J| = 1
o Slide show from Strange Attractors book
Stochastic web maps:
o These occur for charged particle in EM wave
o Xn+1 = a1 + [Xn + a2sin(a3Yn + a4)]cos α + Ynsin α
o Yn+1 = a5 - [Xn + a2sin(a3Yn + a4)]sin α + Yncos α
o where α = 2π/N (N is an integer)
o Verify that this has An+1/An = |det J| = 1
o 1 is positive but small
o Exhibit minimal chaos or Arnol'd diffusion
o Examples: case 1 (N = 9) / case 2 (N = 5)
Simple Pendulum (2-D Conservative Flow)



dx/dt = v (v is really an angular velocity)
dv/dt = -sin x (x is really an angle)
For x << 1, sin x --> x and orbits are circles around a center:

More generally equilibria are at v* = 0, x* = Nπ
(where N is an integer, N = 0, ±1, ±2, ±3, ...)

Phase space trajectories:


O-points (centers) and X-points (saddle points)
Separatrix (homoclinic orbit) separates trapped (elliptic) and passing (hyperbolic) orbits
Homoclinic orbits are sensitive to perturbations

Chirikov (Standard) Map
Start with the pendulum equations:
o dx/dt = v
o dv/dt = -sin x
Solve by the leap-frog method:
o vn+1 = vn - h1sin xn
o xn+1 = xn + h2vn+1
Leap frog is symplectic if fx = gv = 0
Let θ = x/2π, r = v/2π, h1 = K, h2 = 1:
o 2πrn+1 = 2πrn - K sin(2πθn)
o 2πθn+1 = 2πθn + 2πrn+1
rn+1 = [rn - (K/2π) sin(2πθn)] mod 1
θn+1 = [θn + rn+1] mod 1
K is the nonlinearity parameter
This system also models ball bouncing on vibrating floor
Animation of Chirikov map
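
For experimentation, here is a minimal sketch of the map in the (r, θ) form just derived (Python; K and the initial conditions are arbitrary choices):

    import math

    def chirikov(r, theta, K):
        r = (r - K / (2 * math.pi) * math.sin(2 * math.pi * theta)) % 1.0
        theta = (theta + r) % 1.0
        return r, theta

    K = 1.0                            # nonlinearity parameter
    for i in range(20):                # a line of initial conditions
        r, theta = 0.05 * i, 0.3
        for _ in range(500):
            r, theta = chirikov(r, theta, K)
            # store or plot (theta, r) here to build the phase portrait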
Time-Series Properties
Chaos and Time-Series Analysis
10/31/00 Lecture #9 in Physics 505
Comments on Homework #7 (Poincaré Sections)



Some people's Poincaré sections were obviously not correct
Increasing the damping generally decreases the attractor dimension, eventually leading to a limit cycle.
Sample results for 0 < b < 0.4
Review (last week) - Hamiltonian Chaos

Properties of Hamiltonian (Conservative) Systems
o They have no dissipation (frictionless)
o There are one or more (k) conserved quantities (energy, ...)
o They are described by a Hamiltonian function H
whose partial derivatives give the dynamical equations:
dx/dt = ∂H/∂v
dv/dt = -∂H/∂x
o There are 2N dimensions for N degrees of freedom
o Motion is on a 2N - k dimensional (hyper)surface
o k + 1 Lyapunov exponents are equal to zero
o There are no attractors (or attractor = basin)
o Transients don't die away (no need to wait)
o Equations are time-reversible
o Trajectory returns arbitrarily close to the initial condition
o Phase-space volume is conserved (Liouville's theorem)
o The flow is incompressible (like water)
o The Lyapunov exponents sum to zero
o Chaos can occur only for N > 1 (at least 2 degrees of freedom)
o The dynamics occur in a space of integer dimension
o This space may be a (fat) fractal however (infinitely many holes)
Example - Chirikov (Standard) Map
o rn+1 = [rn - (K/2) sin(2n)] (mod 1)
o n+1 = [n + rn+1] (mod 1)
o K is the nonlinearity parameter
o This system also models ball bouncing on vibrating floor
o Animation of Chirikov map
Example - Simplest conservative chaotic flow
o dx/dt = y
o dy/dt = z
o dz/dt = x² - y - B
o For B less than about 0.05
o Poincaré section for B = 0.01
Time-Series Analysis - Introduction
This is the second major part of the course
Previously shown: simple equations often have complex behavior
This suggests: complex behavior may have a simple cause
We move from a theoretical to an experimental viewpoint
Applications of time-series analysis:
o Prediction, forecasting (economy, weather, gambling)
o Noise reduction, encryption (communications, espionage)
o Insight, understanding, control (butterfly effect)
Time-series analysis is not new
Some things are new:
o Better understanding of nonlinear dynamics
o New analysis techniques
o Better and more plentiful computers
Precautions:
o Time-series analysis is more art than science
o There are few sure-fire methods
o We generally need a battery of tests
o It's easy to fool yourself
o The literature is full of false claims of chaos
o New algorithms are constantly being developed
o "Is is chaos?" might not be the right question
Hierarchy of Dynamical Behavior
(adapted from F. C. Moon)


Regular predictable behavior (planets, clocks, tides)
Regular unpredictable behavior (tossing a coin)
Transient chaos (pinball machine)
Intermittent chaos (logistic equation)
Narrow-band (almost periodic) chaos (Rössler attractor)
Broad-band low-dimensional chaos (Lorenz attractor)
Broad-band high-dimensional chaos (Mackey-Glass system)
Correlated (colored) noise (random walk)
Pseudo-randomness (computer RND function)
Random (non-deterministic) white noise (radio static)
Superposition of several of the above (weather, stock market)
Examples of Experimental Time Series
Xn iterates from an iterated map (e.g., the logistic equation)
x(t) sampled at regular intervals for a flow (e.g., the Lorenz attractor)
Population growth (plants, animals)
Meteorological data (temperature, etc.)
El Niño (Pacific ocean temperature)
Seismic waves (earthquakes)
Tidal levels (good example of N-torus)
Astrophysical data (sunspots, Cepheids, etc.)
Fluid fluctuations / turbulence (plasmas)
Financial data (stock market, etc.)
Physiological data (EEG, EKG, etc.)
Epidemiological data (diseases)
Music and speech
Geological core samples
Sequence of ASCII codes (written text)
Sequence of bases in DNA molecule
Many others ...
o Center of mass of standing human
o Interval between footsteps
o Reaction time intervals
o Necker cube flips
o Eye movements
o Human metronome (tap your foot)
o Attempted human randomness
 Imitate radioactive decay (Geiger counter)
 Write a list of "random numbers"
 Generate a random sequence of bits (0, 1)
 Click mouse at random points on a line, or in a circle, or within a square

The independent variable may not be time, but space, frequency, ...
Practical Considerations

You may not know the dynamical variables
(or even how many of them there are)

You may not have experimental access to them
You may only have a short time record
The record is usually sampled at discrete times
The sample rate may not be chosen optimally
The sample time may be non-uniform
(or some data samples may be missing)
The data are subject to measurement and rounding errors
The system may be contaminated by noise
The signal may be filtered by the detector
The system may not be stationary (bull market)
Case Study


Two similar signals (one random, one chaotic)
Random signal (Gaussian white noise)
o Add N pseudorandom numbers uniform in 0 to 1
(called "uniform deviates")
o Subtract their average (N/2)
o For large N, the result is a Gaussian (normal) distribution with a standard deviation of (N/12)^1/2
o For many purposes N = 6 suffices, but the maximum value is then only 3
Chaotic signal (logit transform of logistic map)
o Generate sequence of iterates from Xn+1 = 4Xn(1 - Xn)
o Transform each iterate by loge[X/(1 - X)]
o Result approximates a Gaussian distribution
o But it is obviously chaotic (1-dimensional), since it came from the logistic map
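
Both signals are easy to generate; a sketch of the two constructions (Python; record lengths and the initial condition are arbitrary choices, and the function names are ours):

    import math, random

    def gaussian_deviate(N=12):
        # Sum N uniform deviates and subtract their average N/2;
        # N = 12 gives standard deviation (N/12)^(1/2) = 1.
        return sum(random.random() for _ in range(N)) - N / 2

    def logit_logistic(n, x=0.1):
        # Logit transform log(X/(1 - X)) of logistic-map iterates.
        out = []
        for _ in range(n):
            x = 4 * x * (1 - x)
            out.append(math.log(x / (1 - x)))
        return out

    noise = [gaussian_deviate() for _ in range(2000)]
    chaos = logit_logistic(2000)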

Conventional linear analysis
o Assume signal is sum of sine waves (Fourier modes)
o Example: looking for "cycles" in stock prices
o Look at power spectrum P(f)
o Highest f is the Nyquist frequency: fmax = 1/(2Δt)
(Δt is the time interval between data samples)
o If Δt is too large, aliasing can occur
o Lowest f is approximately fmin = 1/(NΔt)
(N is the number of data points)
o If N is too small, data may not be stationary
o White noise has P(f) = constant
o Chaos (i.e., logistic map) can also have P(f) = constant
o Hence, this is a bad method for detecting chaos
o It works well for limit cycles (like van der Pol case)
and for an N-torus (2 sine waves or 3 sine waves, etc.),
which can be hard to distinguish from chaos
o Instead, look at the return maps (Xn+1 versus Xn)
Autocorrelation Function

Calculating power spectrum is difficult
(Use canned FFT or MEM - see Numerical Recipes)
Autocorrelation function is easier and equivalent
Autocorrelation function is Fourier transform of power spectrum
Let g() = <x(t)x(t+)> (< ... > denotes time average)
Note: g(0) = <x(t)2> is the mean-square value of x
Normalize: g() = <x(t)x(t-)> / <x(t)2>
For discrete data: g(n) = XiXi+n / Xi2
Two problems:
o i + n cannot exceed N (number of data points)
o Spurious correlation if Xav = <X> is not zero
Use: g(n) = (Xi - Xav)(Xi+n - Xav) / (Xi - Xav)2
Do the sums above from i = 1 to N - n
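
A direct transcription of this formula (Python; the function name is ours):

    # Autocorrelation g(n) with the mean removed, summed over
    # i = 0 to N - n - 1 (0-based); nmax must be less than len(X).
    def autocorrelation(X, nmax):
        N = len(X)
        xav = sum(X) / N
        d = [x - xav for x in X]
        denom = sum(v * v for v in d)
        return [sum(d[i] * d[i + n] for i in range(N - n)) / denom
                for n in range(nmax + 1)]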
Examples (data records of 2000 points):
o Gaussian white noise
o Logit transform of logistic equation
o Hénon map
o Sine wave
o Lorenz attractor (x variable, step size 0.05)
A broad power spectrum gives a narrow correlation function and vice versa
Colored (correlated) noise is indistinguishable from chaos
Correlation time is width of g() function (call it tau)
It's hard to define a unique value of this width
This curve is really symmetric about tau = 0 (hence width is 2 tau)
0.5/tau is sometimes called a "poor-man's Lyapunov exponent"
o Noise: LE = infinity ==> tau = 0
o Logistic map: LE = loge(2) ==> tau = 0.72
o Hénon map: LE = 0.418 ==> tau = 1.20
o Sine wave: LE = 0 ==> tau = infinity
o Lorenz attractor: LE = 1.50/sec = 0.075/step ==> tau = 6.67
This really only works for tau > 1
Testing this would make a good student project
The correlation time is a measure of how much "memory" the system has
From the correlation function g(n), the power spectrum P(f) can be found:
P(f) = 2 Σ g(n) cos(2πfnΔt) Δt (ref: Tsonis)
Time-Delayed Embeddings
How do you know what variable to measure in an experiment?
How many variables do you have to measure?
The wonderful answer is that (usually) it doesn't matter!
Example (Lorenz attractor):
o Plot of y versus x
o Plot of dx/dt versus x
o Plot of x(t) versus x(t-0.1)
These look like 3 views of the same object
They are "diffeomorphisms"
They have same topological properties (dimension, etc.)
Whitney's embedding theorem says this result is general
Takens has shown that DE = 2m + 1
o m is the smallest dimension that contains the attractor (3 for Lorenz)
o DE is the maximum time-delay embedding dimension (7 for Lorenz)
o This guarantees a smooth embedding (no intersections)
o This is the price we pay for choosing an arbitrary variable
o Removal of all intersections may be unnecessary
o Recent work has shown that 2m may be sufficient (6 for Lorenz)
o In practice m often seems to suffice
Example (Hénon viewed in various ways):
There is obvious folding, but topology is preserved
How do we choose an appropriate DE (embedding dimension)?
o Increase DE until the topology of the attractor (dimension) stops changing
o This may require more data than you have to do properly
o Saturation of the attractor dimension is usually not clear-cut
o Example: 3-torus (attractor dimension versus DE, 1000 points)
o Can also use the method of false nearest neighbors (see the sketch below):
 Find the nearest neighbor to each point in embedding DE
 Increase DE by 1 and see how many former nearest neighbors are no longer nearest
 When the fraction of these false neighbors falls to nearly zero, we have found the correct embedding
How do we choose an appropriate Δt for sampling a flow?
o In principle, it should not matter
o In practice there is an optimum
o Rule of thumb: Δt ~ tau / DE
o Vary Δt until tau is about DE samples (3 to 7 for Lorenz)
o A better method is to use minimum mutual information
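Not part of the original notes: a Python sketch of delay-vector construction and the false-nearest-neighbors test described above; the distance-ratio threshold rtol is an illustrative choice, not from the notes:

    import numpy as np

    def delay_embed(x, DE, lag=1):
        """Rows are delay vectors (x[i], x[i+lag], ..., x[i+(DE-1)*lag])."""
        x = np.asarray(x, dtype=float)
        n = len(x) - (DE - 1) * lag
        return np.column_stack([x[j * lag : j * lag + n] for j in range(DE)])

    def false_neighbor_fraction(x, DE, lag=1, rtol=10.0):
        """Fraction of nearest neighbors in DE dimensions that fly apart
        when the embedding is increased to DE + 1."""
        a = delay_embed(x, DE, lag)
        b = delay_embed(x, DE + 1, lag)
        n = len(b)                    # b is shorter; use the common points
        a = a[:n]
        false = 0
        for i in range(n):
            d = np.linalg.norm(a - a[i], axis=1)
            d[i] = np.inf             # exclude the point itself
            j = int(np.argmin(d))     # nearest neighbor in DE dimensions
            if abs(b[i, -1] - b[j, -1]) > rtol * d[j]:
                false += 1
        return false / n              # increase DE until this is near zero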
J. C. Sprott | Physics 505 Home Page | Previous Lecture | Next Lecture
Nonlinear Prediction and Noise Reduction
Chaos and Time-Series Analysis
11/7/00 Lecture #10 in Physics 505
Comments on Homework #8 (Chirikov Map)
Everyone had something that resembles the Chirikov map
The map is not normally plotted in polar coordinates despite the variables r and θ
Here are the first few iterations for a square of initial conditions:
Hard to verify area conservation numerically, but the stretching is very evident
Easy to verify area conservation analytically:
o rn+1 = [rn - (K/2π) sin(2πθn)] mod 1 = f(r, θ)
o θn+1 = [θn + rn+1] mod 1 = [θn + rn - (K/2π) sin(2πθn)] mod 1 = g(r, θ)
o An+1/An = |det J| = |fr gθ - fθ gr| = 1
Review (last week) - Time-Series Properties
Introductory comments
o Time-series analysis more art than science
o Allows prediction, noise reduction, physical insight
o Easy to get wrong answers
Hierarchy of Dynamical Behavior
o Regular
o Quasiperiodic
o Chaotic
o Pseudo-random
o Random
Examples of Experimental Time Series
o Numerical experiments
o Real moving systems
o Abstract dynamical systems
o Non-temporal sequences
Practical Considerations
o What variable(s) do you measure?
o How much data do you need?
o How accurately must it be measured?
o What is the effect of filtering?
o What if the system is not stationary?
Case Study
o Two similar signals (one random, one chaotic)
o Both have Gaussian distributions
o Both have flat power spectra (white)
o Hence, this is a bad method for detecting chaos
o It works well for limit cycles (like van der Pol case)
and for N-torus (2 sine waves or 3 sine waves, etc.)
which can be hard to distinguish from chaos
o Instead, look at the return maps (Xn+1 versus Xn)
Autocorrelation Function
Calculating the power spectrum is difficult
(use a canned FFT or MEM routine - see Numerical Recipes)
Autocorrelation function is easier and equivalent
Autocorrelation function is the Fourier transform of the power spectrum
Let g(τ) = <x(t)x(t+τ)> (< ... > denotes a time average)
Note: g(0) = <x(t)²> is the mean-square value of x
Normalize: g(τ) = <x(t)x(t+τ)> / <x(t)²>
For discrete data: g(n) = Σ XiXi+n / Σ Xi²
Two problems:
o i + n cannot exceed N (the number of data points)
o Spurious correlation if Xav = <X> is not zero
Use: g(n) = Σ (Xi - Xav)(Xi+n - Xav) / Σ (Xi - Xav)²
Do the sums above from i = 1 to N - n
Examples (data records of 2000 points):
o Gaussian white noise:
o Logit transform of logistic equation:
o Hénon map:
o Sine wave:
o Lorenz attractor (x variable, step size 0.05):
A broad power spectrum gives a narrow correlation function and vice versa
[cf: the Uncertainty Principle (Δf Δτ ~ 1)]
Colored (correlated) noise is indistinguishable from chaos
Correlation time is the width of the g(τ) function (call it tau)
It's hard to define a unique value of this width
This curve is really symmetric about tau = 0 (hence the width is 2 tau)
0.5/tau is sometimes called a "poor-man's Lyapunov exponent":
o Noise: LE = infinity ==> tau = 0
o Logistic map: LE = ln(2) ==> tau = 0.72
o Hénon map: LE = 0.419 ==> tau = 1.19
o Sine wave: LE = 0 ==> tau = infinity
o Lorenz attractor: LE = 0.906/sec = 0.060/step ==> tau = 8.38
This really only works for tau > 1
Testing this would make a good student project
The correlation time is a measure of how much "memory" the system has
g(1) is the linear correlation coefficient (or "Pearson's r")
It measures correlation with the preceding point:
o If g(1) = 0 then there is no correlation [example: white (f^0) noise]
o If g(1) ~ 1 then there is strong correlation [example: pink (1/f) noise]
o If g(1) < 0 then there is anti-correlation [example: blue (f^2) noise,
produced here by taking the time derivative of white noise]
From the correlation function g(n), the power spectrum P(f) can be found:
P(f) = 2 Σn g(n) cos(2πfnΔt) Δt (ref: Tsonis)
Time-Delayed Embeddings
How do you know what variable to measure in an experiment?
How many variables do you have to measure?
The wonderful answer is that (usually) it doesn't matter!
Example (Lorenz attractor - HW #10):
o Plot of y versus x:
o Plot of dx/dt versus x:
o Plot of x(t) versus x(t-0.1):
o These look like 3 views of the same object
o They are "diffeomorphisms"
o They have the same topological properties (dimension, etc.)
Whitney's embedding theorem says this result is general
Takens has shown that DE = 2m + 1
o m is the smallest dimension that contains the attractor (3 for Lorenz)
o DE is the maximum time-delay embedding dimension needed (7 for Lorenz)
o This guarantees a smooth embedding (no intersections)
o This is the price we pay for choosing an arbitrary variable
o Removal of all intersections may be unnecessary
o Recent work has shown that 2m may be sufficient (6 for Lorenz)
o In practice m often seems to suffice
Example (Hénon map viewed in various ways):
o There is obvious folding, but the topology is preserved
How do we choose an appropriate DE (embedding dimension)?
o Increase DE until the topology of the attractor (dimension) stops changing
o This may require more data than you have to do properly
o Saturation of the attractor dimension is usually not clear-cut
o Example: 3-torus (attractor dimension versus DE, 1000 points)
o Can also use the method of false nearest neighbors:
 Find the nearest neighbor to each point in embedding DE
 Increase DE by 1 and see how many former nearest neighbors are no longer nearest
 When the fraction of these false neighbors falls to nearly zero, we have found the correct embedding
 Example: 3-torus
How do we choose an appropriate Δt for sampling a flow?
o In principle, it should not matter
o In practice there is an optimum
o Rule of thumb: Δt ~ tau / DE
o Vary Δt until tau is about DE samples (3 to 7 for Lorenz)
o A better method is to use minimum mutual information (see Abarbanel)
Summary of Important Dimensions
Configuration space (number of independent dynamical variables)
Solution manifold (the space in which the solution "lives" - an integer)
Attractor dimension (fractional if it's a strange attractor)
o Kaplan-Yorke (Lyapunov) dimension
o Hausdorff dimension
o Capacity dimension (see below)
o Information dimension
o Correlation dimension (next week)
o ... (infinitely many more)
Observable (1-D for a univariate time series: Xi)
Reconstructed (time-delayed) state space (can be chosen arbitrarily)
Time-delayed embedding (the minimum time-delayed state space that preserves the
topology of the solution)
Nonlinear Prediction
There are many forecasting (prediction) methods:
o Extrapolation (fitting data to a function)
o Moving average (MA) methods
o Linear autoregression (ARMA)
o State-space averaging (see below)
o Principal component analysis (PCA)
(also called singular value decomposition - SVD)
o Machine learning / AI (neural nets, genetic algorithms, etc.)
Conventional linear prediction methods apply in the time domain
o Fit the data to a mathematical function (polynomial, sine, etc.)
o The function is usually not linear, but assume that the equations governing the
dynamics are (hence no chaos)
o Evaluate the function at some future time
o This works well for smooth and quasi-periodic data
o It (usually) fails badly for chaotic data
Nonlinear methods usually apply in state space
Lorenz proposed this method for predicting the weather
Example (predicting next term in Hénon map - HW #11):
o We know Xn+1 = 1 - CXn² + BXn-1
o In a 2-D embedding, the next value is unique
o Find the M nearest points in Xn-Xn-1 space
o Calculate their average displacement: ΔX = <Xn+1 - Xn>
o Use ΔX to predict the next value in the time series
o Repeat as necessary to get future time steps
Sensitive dependence will eventually spoil the method
Growth of prediction error crudely gives the Lyapunov exponent
Example (Hénon map average error):
o If LE = 0.604 bits/iteration, the error should double every 1.7 iterations
o Saturation occurs after the error grows sufficiently
The method can also remove some noise
Predict all points, not just the next point
Need to choose DE and M optimally
An alternate, related method is to construct f(Xn, Xn-1, ...)
This improves noise reduction but is less accurate
The best method is to make a local function approximation
Usually linear or quadratic functions are used
This offers the best of both worlds but is hard to implement and slow
(A sketch of the nearest-neighbor predictor follows)
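Not part of the original notes: a Python sketch of the state-space-averaging predictor described above for a scalar time series (DE and M are illustrative choices):

    import numpy as np

    def predict_next(x, DE=2, M=10):
        """Average the jumps X(n+1) - X(n) of the M past states nearest
        (in delay space) to the current state, and add that to X(N)."""
        x = np.asarray(x, dtype=float)
        N = len(x)
        # row k is the delay vector (x[k+DE-1], ..., x[k]); the last row is
        # the current state, and every earlier row has a known successor
        V = np.column_stack([x[DE - 1 - j : N - j] for j in range(DE)])
        current, past = V[-1], V[:-1]
        d = np.linalg.norm(past - current, axis=1)
        nearest = np.argsort(d)[:M]            # M nearest past states
        n_end = nearest + DE - 1               # time index of each state
        dX = x[n_end + 1] - x[n_end]           # their observed jumps
        return x[-1] + dX.mean()

    # Iterating (append the prediction, then predict again) gives future
    # steps, until sensitive dependence spoils the forecast.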
Lyapunov Exponent of Experimental Data
We previously calculated the largest LE from known equations
Getting the LE from experimental data is much more difficult
(canned routines are recommended - see Wolf)
Finding a value for LE may not be very useful
o Noise and chaos both have positive LEs (LE = infinity for white noise)
o Quasiperiodic dynamics have zero LE, but there are better ways to detect it
(look for a discrete power spectrum)
o The value obtained is usually not very accurate
Conceptually, it's easy to see what to do (see the sketch below):
o Find two nearby points in an embedding DE
o Follow them a while and calculate <log(rate of separation)>
o Repeat with other nearby points until you get a good average
There are many practical difficulties:
o How close do the points have to be?
o What if they are spuriously close because of noise?
o What if they are not oriented in the right direction?
o How far can they safely separate?
o What do you do when they get too far apart?
o How many pairs must be followed?
o How do you choose the proper embedding?
It's especially hard to get exponents other than the largest
The sum of the positive exponents is called the entropy
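Not part of the original notes: a crude Python sketch of the conceptual recipe above (canned routines such as Wolf's are still recommended); eps, the pair-following time, and the temporal exclusion window are illustrative choices:

    import numpy as np

    def largest_le(x, DE=3, lag=1, follow=5, eps=0.05, exclude=10):
        """Average log(separation growth) over pairs of initially close
        delay vectors, giving a rough largest LE in nats per step."""
        n0 = len(x) - (DE - 1) * lag
        V = np.column_stack([x[j * lag : j * lag + n0] for j in range(DE)])
        n = len(V) - follow
        rates = []
        for i in range(n):
            d0 = np.linalg.norm(V[:n] - V[i], axis=1)
            d0[max(0, i - exclude) : i + exclude] = np.inf   # skip temporal neighbors
            j = int(np.argmin(d0))
            if 0 < d0[j] < eps:                 # spuriously close pairs are a risk
                d1 = np.linalg.norm(V[i + follow] - V[j + follow])
                if d1 > 0:
                    rates.append(np.log(d1 / d0[j]) / follow)
        return np.mean(rates) if rates else None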
Capacity Dimension
The most direct indication of chaos is a strange attractor
Strange attractors will generally have a low, non-integer dimension
There are many ways to define and calculate the dimension
We already encountered the Kaplan-Yorke dimension, but it requires knowledge of all the Lyapunov exponents
A more direct method is to calculate the capacity dimension (D0)
Capacity dimension is closely related to the Hausdorff dimension
It is also sometimes called the "cover dimension"
Consider data representing a line and a surface embedded in 2-D:
o The number of squares N of size d required to cover the line (1-D) is proportional to 1/d
o The number of squares N of size d required to cover the surface (2-D) is proportional to 1/d²
o The number of squares N of size d required to cover a fractal (dimension D0) is proportional to 1/d^D0
Hence the fractal dimension is given by D0 = d log(N) / d log(1/d)
This is equivalent to D0 = -d log(N) / d log(d)
Plot log(N) versus log(d) and take the slope to get D0 (a sketch follows)
Example (2000 data points from Hénon map with DE = 2)
This derivative should be taken in the limit d --> 0
The idea can be generalized to DE > 2 using (hyper)cubes
Many data points are required to get a good result
The number required increases exponentially with D0
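Not part of the original notes: a Python sketch of the box-counting estimate of D0 (the Hénon parameters and box sizes are illustrative):

    import numpy as np

    def capacity_dimension(points, sizes):
        """Slope of log N(d) versus log(1/d), where N(d) is the number of
        boxes of size d occupied by at least one point."""
        pts = np.asarray(points, dtype=float)
        logN, loginv = [], []
        for d in sizes:
            boxes = {tuple(c) for c in np.floor(pts / d).astype(int)}
            logN.append(np.log(len(boxes)))
            loginv.append(np.log(1.0 / d))
        return np.polyfit(loginv, logN, 1)[0]

    # Example: points on the Hénon attractor (a = 1.4, b = 0.3)
    x, y, pts = 0.1, 0.1, []
    for i in range(3000):
        x, y = 1.0 - 1.4 * x * x + y, 0.3 * x
        if i > 1000:                  # discard the transient
            pts.append((x, y))
    D0 = capacity_dimension(pts, sizes=[0.2, 0.1, 0.05, 0.025, 0.0125])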
J. C. Sprott | Physics 505 Home Page | Previous Lecture | Next Lecture
Fractals
Chaos and Time-Series Analysis
11/14/00 Lecture #11 in Physics 505
Comments on Homework #9 (Autocorrelation Function)
There was a certain confusion about the notation
The autocorrelation time of the Lorenz data is ~ 6 × 0.05 = 0.3 seconds
Review (last week) - Nonlinear Prediction and Noise Reduction
Autocorrelation Function
o g(n) = Σ (Xi - Xav)(Xi+n - Xav) / Σ (Xi - Xav)²
o Correlation time is the width of the g(n) function (call it tau)
o tau can be taken as the value of n for which g(n) = 1/e ≈ 37%
o 0.5/tau is sometimes called a "poor-man's Lyapunov exponent"
o From the correlation function g(n), the power spectrum P(f) can be found:
P(f) = 2 Σn g(n) cos(2πfnΔt) Δt (ref: Tsonis)
Time-Delayed Embeddings
o (Almost) any variable(s) can be analyzed
o Create multi-dimensional data by taking time lags
o May need up to 2m + 1 time lags to avoid intersections,
where m is the dimension of the solution manifold
o Must choose an appropriate DE (embedding dimension)
 Increase DE until the topology of the attractor (dimension) stops changing
 Or use the method of false nearest neighbors
o Must choose an appropriate Δt for sampling a flow
 Rule of thumb: Δt ~ tau / DE
 A better method is to use minimum mutual information (see Abarbanel)
Summary of Important Dimensions
o Configuration (or state) space (number of independent dynamical variables)
o Solution manifold (the space in which the solution "lives" - an integer)
o Attractor dimension (fractional if it's a strange attractor)
 Kaplan-Yorke (Lyapunov) dimension
 Hausdorff dimension
 Cover dimension
 Similarity dimension (see below)
 Capacity dimension (see below)
 Information dimension
 Correlation dimension (next week)
 ... (infinitely many more)
o Observable (1-D for a univariate (scalar) time series: Xi)
o Reconstructed (time-delayed) state space (can be chosen arbitrarily)
o Time-delayed embedding (the minimum time-delayed state space that preserves
the topology of the solution)
Nonlinear Prediction
There are many forecasting (prediction) methods
Conventional linear prediction methods apply in the time domain
o Fit the data to some function of time and evaluate it
o The function of time may be nonlinear
o The dynamics are usually assumed to be linear
o Linear equations can have oscillatory or exponential solutions
Nonlinear methods usually apply in state space
Example (predicting next term in Hénon map - HW # 11):
o We know Xn+1 = 1 - CXn² + BXn-1
o In a 2-D embedding, the next value is unique
o Find the M nearest points in Xn-Xn-1 space
o Calculate their average displacement: ΔX = <Xn+1 - Xn>
o Use ΔX to predict the next value in the time series
o Repeat as necessary to get future time steps
Sensitive dependence will eventually spoil the method
The method does not necessarily keep you on the attractor (but could be modified to do so)
Growth of prediction error crudely gives the Lyapunov exponent
Example (Hénon map average error):
o If LE = 0.604 bits/iteration, the error should double every 1.7 iterations
o Saturation occurs after the error grows sufficiently
Prediction methods can also remove some noise
o Predict all points not just next point
o Can be used to produce an arbitrarily long time series
o This is useful for calculating LE, dimension, etc.
o Gives an accurate answer to an approximate model
o Example: Hénon map with noise, removed using SVD
Need to choose DE and M optimally
An alternate, related method is to construct f(Xn, Xn-1, ...)
o This improves noise reduction but is less accurate
o The solution can eventually walk off the attractor
Best method is to make a local function approximation
o Usually linear or quadratic functions are used
o This offers best of both worlds but is hard to implement and slow
Case study - 20% drop in S&P500 on 10/19/87 ("Black Monday")
o Could this drop have been predicted?
o Consider 3000 previous trading days (~ 15 years of data)
o Note that the 20% drop was unprecedented
o Three predictors:
 Linear (ARMA)
 Principal component analysis (PCA or SVD)
 Artificial neural net (essentially state-space averaging)
o The method above predicts a slight rise (not shown)
o The stock market has little if any determinism
Lyapunov Exponent of Experimental Data
We previously calculated the largest LE from known equations
Getting the LE from experimental data is much more difficult
(canned routines are recommended - see Wolf)
Finding a value for LE may not be very useful
o Noise and chaos both have positive LEs (LE = infinity for white noise)
o Quasiperiodic dynamics have zero LE, but there are better ways to detect it
(look for a discrete power spectrum)
o Impossible to distinguish zero LE from a very small positive LE
o The value obtained is usually not very accurate
Conceptually, it's easy to see what to do:
o Find two nearby points in a suitably chosen embedding DE
o Follow them a while and calculate <log(rate of separation)>
o Repeat with other nearby points until you get a good average
There are many practical difficulties:
o How close do the points have to be?
o What if they are spuriously close because of noise?
o What if they are not oriented in the right direction?
o How far can they safely separate?
o What do you do when they get too far apart?
o How many pairs must be followed?
o How do you choose the proper embedding?
It's especially hard to get exponents other than the largest
The sum of the positive exponents is called the entropy
Hurst Exponent (skip this if time is short)
Consider a 1-D random walk
o Start at X0 = 0
o Repeatedly flip a (2-sided) coin (N times)
o Move 1 step East on heads
o Move 1 step West on tails
o <X> = 0, <X²> = N after N steps of size 1
o Proof:
 N = 1: E = 1, W = 1, <X²> = 1
 N = 2: EE = WW = 2, EW = WE = 0, <X²> = 2
 N = 3: EEE = WWW = 3, other 6 = 1, <X²> = 3
 etc... <X²> = N
o Numerically generated random walk (2000 coin flips):
Extend to 2-D random walk:
o Start at R0 = 0 (X0 = Y0 = 0)
o Repeatedly flip a 4-sided coin (N times)
o Move 1 step N, S, E, or W respectively
o <R> = 0, <R²> = N after N steps of size 1
o Animation shows Rrms = <R²>^(1/2) = (ΔR) t^(1/2)
Result is general
o Applies to any dimension
o Applies for any number of directions (if isotropic)
o Applies for any step size (even a distribution of sizes)
o However, coin flips must be uncorrelated ("white")
o H = 1/2 is the Hurst exponent for this uncorrelated random walk
o H > 1/2 means positive correlation of coin flips (persistence)
o H < 1/2 means negative correlation of coin flips (anti-persistence)
o The time series Xn has persistence for H > 0
Note ambiguity in definition of Hurst exponent
o The steps are uncorrelated (white)
o The path is highly correlated (Brownian motion)
o Can get from one to the other by integrating or differentiating
o I prefer to say the Hurst exponent of white noise is zero,
and of brown noise (1/f²) is 0.5, but others disagree
o With this convention, H = α/4 for 1/f^α noise
Hurst exponent has same information as power law coefficient
If power spectrum is not a power law, no unique exponent
Calculation of the Hurst exponent from experimental data is easy (see the sketch below):
o Choice of embedding not critical (1-D usually OK)
o Use each point in the time series as an initial condition
o Calculate the average distance d (= |X - X0|) versus t
o Plot log(d) versus log(t) and take the slope
o Example #1 (Hurst exponent of brown noise):
o Example #2 (Hurst exponent of Lorenz x(t) data)
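Not part of the original notes: a Python sketch of the log(d) versus log(t) procedure above; with the convention used here, brown noise should give H ~ 0.5 and white noise H ~ 0:

    import numpy as np

    def hurst_exponent(x, tmax=100):
        """Slope of log <|X(t0+t) - X(t0)|> versus log t, using every
        point of the series as an initial condition."""
        x = np.asarray(x, dtype=float)
        ts = np.arange(1, tmax + 1)
        d = [np.mean(np.abs(x[t:] - x[:-t])) for t in ts]
        return np.polyfit(np.log(ts), np.log(d), 1)[0]

    # Example #1: brown noise (integrated white noise)
    rng = np.random.default_rng(1)
    H = hurst_exponent(np.cumsum(rng.normal(size=2000)))   # ~ 0.5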
Capacity Dimension
The most direct indication of chaos is a strange attractor
Strange attractors will generally have a low, non-integer dimension
There are many ways to define and calculate the dimension
We already encountered the Kaplan-Yorke dimension, but it requires knowledge of all
the Lyapunov exponents
Most calculations depend on the fact that the amount of "stuff" M scales as d^D, where d is the linear size of a "box"
Hence D = d log(M) / d log(d) [i.e., D is the slope of log(M) versus log(d)]
One example is the capacity dimension (D0)
Closely related to the Hausdorff dimension or "cover dimension"
Consider data representing a line and a surface embedded in 2-D
o The number of squares N of size d required to cover the line (1-D) is proportional to 1/d
o The number of squares N of size d required to cover the surface (2-D) is proportional to 1/d²
o The number of squares N of size d required to cover a fractal (dimension D0) is proportional to 1/d^D0
Hence the fractal dimension is given by D0 = d log(N) / d log(1/d)
This is equivalent to D0 = -d log(N) / d log(d)
Plot log(N) versus log(d) and take the (negative) slope to get D0
More typically D0 is calculated using a grid of fixed squares
Example (2000 data points from Hénon map with DE = 2)
This derivative should be taken in the limit d --> 0
The idea can be generalized to DE > 2 using (hyper)cubes
Many data points are required to get a good result
The number required increases exponentially with D0
If 10 points are needed to define a line (1-D),
then 100 points are needed to define a surface (2-D),
and 1000 points are needed to define a volume (3-D), etc.
Examples of Fractals
There are many other ways to make fractals besides chaotic dynamics
They are worthy of study in their own right
They provide a new way of viewing the world
Fractal slide show (another "lecture within a lecture")
o Nature's Fractals
o Snowflake
o Cantor set
o Cantor curtains
o Devil's staircase
o Hilbert curve
o Von Koch island
o Flat Fournier universe
o Weierstrass function ( x(t) = Σn cos(2ⁿt) / 2^((2-D)n) )
o Fractal word
o Sierpinski triangle
o Sierpinski carpet
o Cantor square
o Cantor maze
o Twin dragon
o Julia dendrite (Zn+1 = Zn² + i)
o Fractal fern
o Maple leaf
o Peitgen tree
o Fractal tree
o Diffusion-limited aggregation
o Iterated function systems
o Cellular automata
o Game of life
o Strange attractors
o Mandelbrot set (Zn+1 = Zn² + C)
Some of these cases will be studied more later in the semester
Similarity Dimension
Easy to calculate the dimension for exactly self-similar fractals
Example #1 (Sierpinski carpet):
o Consists of 9 squares in a 3 x 3 array
o Eight squares are self-similar and one is empty
o Each time the linear scale is increased 3x, the "stuff" increases 8 times
o Hence, D = log(8) / log(3) = 1.892789261...
o Note: Any base can be used for the log since it involves a ratio
Example #2 (Koch curve):
o Consists of 4 line segments: _/\_
o Each factor of 3 increase in length adds 4x the "stuff"
o Hence, D = log(4) / log(3) = 1.261859507...
Example #3 (Triadic Cantor set):
o Consists of three line segments _____ _____ _____
o Two segments are self-similar and one is empty
o Each factor of 3 increase in length adds 2x the "stuff"
o Hence, D = log(2) / log(3) = 0.630929753...
J. C. Sprott | Physics 505 Home Page | Previous Lecture | Next Lecture
Calculation of Fractal Dimension
Chaos and Time-Series Analysis
11/21/00 Lecture #12 in Physics 505
Comments on Homework #10 (Time-Delay Reconstruction)
Optimum n is about 2 (delay of 2 x 0.05 = 0.1 seconds)
This is about equal to autocorrelation time (0.3 seconds) / DE (3)
It's very hard to see fractal structure in return map (D ~ 1.05)
Your graphs should look as follows:
Review (last week) - Fractals
Nonlinear prediction
o methods usually apply in state space
o methods can also remove some noise
o Can be used to produce an arbitrarily long time series
o Gives an accurate answer to an approximate model
Lyapunov exponent of experimental data
o Getting the LE from experimental data is much more difficult (canned routines
are recommended - See Wolf)
o Finding a value for LE may not be very useful
o Noise and chaos both have positive LEs (LE = infinity for white noise)
o Quasiperiodic dynamics have zero LE, but there are better ways to detect it
(look for a discrete power spectrum)
Hurst exponent (omitted)
o Deviation from starting point: Xrms = N^H
o Hence H is the slope of log(Xrms) versus N (or t)
o One convention is for white noise to have H = 0
o And brown noise (1/f²) to have H = 0.5
o With this convention, H = α/4 for 1/f^α noise
o Hurst exponent has the same information as the power-law coefficient
Capacity Dimension
The most direct indication of chaos is a strange attractor
Strange attractors will generally have a low, non-integer dimension
There are many ways to define and calculate the dimension
We already encountered the Kaplan-Yorke dimension, but it requires knowledge of all
the Lyapunov exponents
Most calculations depend on the fact that the amount of "stuff" M scales as d^D
Hence D = d log(M) / d log(d) [i.e., D is the slope of log(M) versus log(d)]
One example is the capacity dimension (D0)
Closely related to the Hausdorff dimension or "cover dimension"
Consider data representing a line and a surface embedded in 2-D
o The number of squares N of size d required to cover the line (1-D) is proportional to 1/d
o The number of squares N of size d required to cover the surface (2-D) is proportional to 1/d²
o The number of squares N of size d required to cover a fractal (dimension D0) is proportional to 1/d^D0
Hence the fractal dimension is given by D0 = d log(N) / d log(1/d)
This is equivalent to D0 = -d log(N) / d log(d)
Plot log(N) versus log(d) and take the (negative) slope to get D0
More typically D0 is calculated using a grid of fixed squares
Example (2000 data points from Hénon map with DE = 2)
This derivative should be taken in the limit d --> 0
The idea can be generalized to DE > 2 using (hyper)cubes
Many data points are required to get a good result
The number required increases exponentially with D0
If 10 points are needed to define a line (1-D),
then 100 points are needed to define a surface (2-D),
and 1000 points are needed to define a volume (3-D), etc.
Examples of Fractals
There are many other ways to make fractals besides chaotic dynamics
They are worthy of study in their own right
They provide a new way of viewing the world
Fractal slide show (another "lecture within a lecture")
o Nature's Fractals
o Snowflake
o Cantor set
o Cantor curtains
o Devil's staircase
o Hilbert curve
o Von Koch island
o Flat Fournier universe
o Weierstrass function ( x(t) = Σn cos(2ⁿt) / 2^((2-D)n) )
o Fractal word
o Sierpinski triangle
o Sierpinski carpet
o Cantor square
o Cantor maze
o Twin dragon
o Julia dendrite (Zn+1 = Zn² + i)
o Fractal fern
o Maple leaf
o Peitgen tree
o Fractal tree
o Diffusion-limited aggregation
o Iterated function systems
o Cellular automata
o Game of life
o Strange attractors
o Mandelbrot set (Zn+1 = Zn² + C)
Some of these cases will be studied more in the next few weeks
Some ways to produce fractals by computer:
o Chaotic dynamical orbits (strange attractors)
o Recursive programming
o Iterated function systems (the chaos game)
o Infinite mathematical sums (Weierstrass function, etc.)
o Boundaries of basins of attraction
o Escape-time plots (Mandelbrot, Julia sets, etc.)
o Random walks (diffusion)
o Diffusion limited aggregation
o Cellular automata (game of life, etc.)
o Percolation (fluid seeping through porous medium)
o Self-organized criticality (avalanches in sand piles)
Similarity Dimension
Easy to calculate the dimension for exactly self-similar fractals
Example #1 (Sierpinski carpet):
o Consists of 9 squares in a 3 x 3 array
o Eight squares are self-similar and one is empty
o Each time the linear scale is increased 3x, the "stuff" increases 8 times
o Hence, D = log(8) / log(3) = 1.892789261...
o Note: Any base can be used for the log since it involves a ratio
Example #2 (Koch curve):
o Consists of 4 line segments: _/\_
o Each factor of 3 increase in length adds 4x the "stuff"
o Hence, D = log(4) / log(3) = 1.261859507...
Example #3 (Triadic Cantor set):
o Consists of three line segments _____ _____ _____
o Two segments are self-similar and one is empty
o Each factor of 3 increase in length adds 2x the "stuff"
o Hence, D = log(2) / log(3) = 0.630929753...
Correlation Dimension
Capacity dimension does not give accurate results for experimental data
Similarity dimension is hard to apply to experimental data
The best method is the (two-point) correlation dimension (D2)
This method opened the floodgates for identifying chaos in experiments
Next homework asks you to calculate D2 for the Hénon map
Original (Grassberger and Procaccia) paper included with HW #12
Illustration for 1-D and 2-D data embedded in 2-D
Procedure for calculating the correlation dimension:
o Choose an appropriate embedding dimension DE
o Choose a small value of r (say 0.001 x size of attractor)
o Count the number of pairs of points C(r) with separation rij < r
 rij = [(Xi - Xj)² + (Xi-1 - Xj-1)² + ...]^(1/2)
 Note: this requires a double sum over (i, j) ==> 10^6 calculations for 1000 data points
 Actually, this double counts; can sum j from i+1 to N
 In any case, don't include the point with i = j
o Increase r by some factor (say 2)
o Repeat the count of pairs with rij < r
o Graph log C(r) versus log r (can use any base)
o Fit the curve to a straight line
o The slope of that line is D2
Think of C(r) as the probability that 2 random points on the attractor are separated by < r
Example: C(r) versus r for Hénon map with N = 1000 and DE = 2
o Result: D2 = 1.223 ± 0.097
o Compare: DGP = 1.21 ± 0.01 (Original paper, N = 15,000)
o Compare: DKY = 1.2583 (from Lyapunov exponents)
o Compare: D0 = 1.26 (published result for capacity dimension)
o See also my calculations with N = 3 × 10^6
Generally D2 < DKY < D0 (but they are often close)
Sometimes the convergence is very slow as r --> 0
Tips for speeding up the calculation (in order of difficulty):
o Avoid double counting by summing j from i+1 to N
o Collect all the r values at once by binning the values of r
o Avoid taking square roots by binning r² and using log x² = 2 log x
o Avoid calculating log by using exponent of floating point variable
o Collect data for all embeddings at once
o Sort the data first so you can quit testing when rij exceeds r
o Can also use other norms, but accuracy suffers
Number of data points needed to get valid correlation dimension
o Need a range of r values over which slope is constant (scaling region)
o Limited at large r by the size of the attractor (D2 = 0 for r > attractor size)
o Limited at small r by statistics (need many points in each bin)
o Various criteria, all predicting that N increases exponentially with D2
o Tsonis criterion: N ~ 10^(2 + 0.4D) (the D to use is probably D2)
D          N
1          250
2          630
3          1600
4          4000
5          10,000
6          25,000
7          63,000
8          158,000
9          400,000
10         1,000,000
(A sketch of the whole D2 calculation follows)
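Not part of the original notes: a Python sketch of the Grassberger-Procaccia procedure above (vectorized rather than binned, so it is simple but memory-hungry; the choice of r values is illustrative):

    import numpy as np

    def correlation_dimension(x, DE=2, nr=20):
        """C(r) = fraction of pairs with separation rij < r; D2 is the
        slope of log C(r) versus log r over the scaling region."""
        x = np.asarray(x, dtype=float)
        V = np.column_stack([x[j : j + len(x) - DE + 1] for j in range(DE)])
        N = len(V)
        d = np.sqrt(((V[:, None, :] - V[None, :, :]) ** 2).sum(-1))
        d = d[np.triu_indices(N, k=1)]   # j from i+1 to N: no double count, no i = j
        rs = np.logspace(np.log10(d[d > 0].min()), np.log10(d.max()), nr)[3:-3]
        C = np.array([np.mean(d < r) for r in rs])
        D2 = np.polyfit(np.log(rs), np.log(C), 1)[0]
        return rs, C, D2

    # With ~1000 points of the Hénon map and DE = 2 this gives D2 near 1.2,
    # but the straight-line fit should really be restricted to the scaling region.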
J. C. Sprott | Physics 505 Home Page | Previous Lecture | Next Lecture
Multifractals
Chaos and Time-Series Analysis
11/28/00 Lecture #13 in Physics 505
Comments on Homework #11 (Nonlinear Prediction)
Results varied from almost too good to rather poor
Hard to diagnose the poor results when not enough detail is given
You could try several data sets to see how general your results are
Here are sample results (averaged over 1000 realizations of the Hénon map):
Review (last week) - Calculation of Fractal Dimension
Cover dimension
o Find the number N of (hyper)cubes of size d required to cover the object
o D0 = -d log(N) / d log(d)
Capacity dimension
o Similar to cover dimension but cubes are fixed in space
o D0 = -d log(N) / d log(d)
o This derivative should be taken in the limit d --> 0
o Many data points are required to get a good result
o The number required increases exponentially with D0
Examples of fractals
o Nature's Fractals
o Deterministic Fractals
o Random Fractals
o Diffusion-limited aggregation
o Iterated function systems
o Cellular automata
o Strange attractors
o Mandelbrot set (Zn+1 = Zn² + C)
Some ways to produce fractals by computer:
o Chaotic dynamical orbits (strange attractors)
o Recursive programming
o Iterated function systems (the chaos game)
o Infinite mathematical sums (Weierstrass function, etc.)
o Boundaries of basins of attraction
o Escape-time plots (Mandelbrot, Julia sets, etc.)
o Random walks (diffusion)
o Diffusion limited aggregation
o Cellular automata (game of life, etc.)
o Percolation (fluid seeping through porous medium)
o Self-organized criticality (avalanches in sand piles)
Similarity dimension
o Example #1 (Sierpinski carpet):
D = log(8) / log(3) = 1.892789261...
o Example #2 (Koch curve):
D = log(4) / log(3) = 1.261859507...
o Example #3 (Triadic Cantor set):
D = log(2) / log(3) = 0.630929753...
Correlation Dimension
Capacity dimension does not give accurate results for experimental data
Similarity dimension is hard to apply to experimental data
The best method is the (two-point) correlation dimension (D2)
This method opened the floodgates for identifying chaos in experiments
Homework this week asked you to calculate D2 for the Hénon map
Original (Grassberger and Procaccia) paper included with HW #12
Illustration for 1-D and 2-D data embedded in 2-D
Procedure for calculating the correlation dimension:
o Choose an appropriate embedding dimension DE
o Choose a small value of r (say 0.001 x size of attractor)
o Count the number of pairs of points C(r) with separation rij < r
 rij = [(Xi - Xj)² + (Xi-1 - Xj-1)² + ...]^(1/2)
 Note: this requires a double sum over (i, j) ==> 10^6 calculations for 1000 data points
 Actually, this double counts; can sum j from i+1 to N
 In any case, don't include the point with i = j
o Increase r by some factor (say 2)
o Repeat the count of pairs with rij < r
o Graph log C(r) versus log r (can use any base)
o Fit curve to a straight line
o Slope of that line is D2
Think of C(r) as the probability that 2 random points on the attractor are separated by < r
The correlation dimension emphasizes the regions of space that the orbit visits most often
Example: C(r) versus r for Hénon map with N = 1000 and DE = 2
o Result: D2 = 1.223 ± 0.097
o Compare: DGP = 1.21 ± 0.01 (Original paper, N = 15,000)
o See also my calculations (D2 = 1.220 ± 0.036) with N = 3 × 10^6
o Compare: DKY = 1.2583 (from Lyapunov exponents)
o Compare: D0 = 1.26 (published result for capacity dimension)
Generally D2 < DKY < D0 (but they are often close)
Practical Considerations


Tips for speeding up the calculation (roughly in order of increasing difficulty and
decreasing value):
o Avoid double counting by summing j from i+1 to N
o Collect all the r values at once by binning the values of r
o Avoid taking square roots by binning r² and using log x² = 2 log x
o Avoid calculating log by using exponent of floating point variable
o Collect data for all embeddings at once
o Sort the data first so you can quit testing when rij exceeds r
o Can also use other norms, but accuracy suffers
o Discard a few temporally adjacent points if sample time is small for flows
Number of data points needed to get valid correlation dimension
o Need a range of r values over which slope is constant (scaling region)
o Limited at large r by the size of the attractor (D2 = 0 for r > attractor size)
o Limited at small r by statistics (need many points in each bin)
o Various criteria, all predicting that N increases exponentially with D2
o Tsonis criterion: N ~ 10^(2 + 0.4D) (the D to use is probably D2)
D          N
1          250
2          630
3          1600
4          4000
5          10,000
6          25,000
7          63,000
8          158,000
9          400,000
10         1,000,000
Effect of round-off errors
o Round-off errors discretize the state space
o Causes the measured dimension to approach zero at small r
o This limits the useful scaling region of log C(r) versus log r
o Easy to test for this in low D since points will be identical
Effect of superimposed noise
o Noise fuzzes out the attractor on small scales
o Causes the measured dimension to approach infinity at small r
o This limits the useful scaling region of log C(r) versus log r
o Hard to test for this especially if noise is correlated
o There may be a knee in the C(r) curve if noise is white
o Example: C(r) for Hénon map with Gaussian white noise
Distinguishing chaos from noise
o Chaos has dimension less than infinity
o It may be unmeasurably high, however
o White noise has infinite dimension
o Colored (correlated) noise can appear low dimensional
o Example: 1/f² (brown) noise gives C(r) versus r with no real scaling region
o Dimension is low at large r, high at small r
o Osborne & Provenzale (Physica D 35, 357, 1989) show that 1/f^α noise tends to give D2 ~ 2 / (α - 1) for 1 < α < 3
Conjecture: It's possible to get any C(r) with appropriately colored noise
 Start with an attractor of a dynamical system (or IFS)
 Example: Triadic Cantor set (D = 0.631)
 Visit the points on the attractor in random order
 Geometry is accurately fractal, dynamics is random
 Hence the dimension can be low and well defined, but no chaos
 This would presumably fail at a sufficiently high embedding
 A good demonstration of this would be a publishable student project
K-S entropy (Kolmogorov-Sinai)
o Entropy is the sum of the positive Lyapunov exponents (Pesin Identity)
o A periodic orbit (zero LE) has 0 entropy (no spreading)
o A random orbit (infinite LE) has infinite entropy (maximal disorder)
o This is related to but different from the thermodynamic entropy
o It is actually a rate of change of the usual entropy
o Can be estimated from C(r, DE) in different embeddings
o Formula: K = d log C(r)/dDE in the limit of infinite DE
o Hence, entropy can be obtained for free when calculating D2
Multivariate data
o Suppose you have measured 2 (or more) simultaneous dynamical variables (X
and Y)
o If they are independent, they reduce the required number of time delays
o Construct embedding from Xi, Yi, Xi-1, Yi-1, etc...
o Trick: Make single time series by intercalation
 Xi, Yi, Xi+1, Yi+1, ... and use existing D2 algorithm
 Probably good to rescale variables to the same range
 Get twice as much data this way in the same time
 I'm not aware of this being discussed in literature but it works
 This could also be a publishable student project
Filtered data
o What happens if you measure dX/dt or integral of X?
o This has been examined theoretically and numerically
o Differentiating is usually OK but enhances noise and lowers correlation time
o Integrating suppresses noise and increases correlation time - hard to get a good
plateau in C(r)
o These operations can raise the dimension by 1 (adds an equation)
o Other filtering methods have not been extensively studied (another possible
student project)
Missing data
o What if some data points are missing or clearly in error?
o Honest method:
 Restart the time series
 You lose DE - 1 points on each restart
 Don't calculate across the gap
 But you can combine the segments into one C(r)
o Other options (if gap is of short duration):
 Ignore the gap (probably a bad idea)
 Set data to zero or to average (also a bad idea)
 Interpolate or curve fit (OK if data is from a flow)
 Use a nonlinear predictor to estimate the missing points (best idea)
 None of these work if gap is of long duration
Nonuniformly sampled data
o If sampling is deterministically nonuniform:
 All is probably OK
 Dimension may increase since additional equations come into play
o If sampling is random:
 This will give infinite dimension if randomness is white
 If data is from a flow, and sample intervals are known, you can try to
construct a data set with uniform intervals by interpolation or curve
fitting
o How accurate does sample time have to be? (good student project)
Lack of stationarity
o dx/dt = F(x, y)
o dy/dt = G(x, y, t)
o dz/dt = 1 (non-autonomous slowly growing term)
o Increases system dimension by 1
o Increases attractor dimension by < 1
o If t is periodic, attractor projects onto a torus
o Can try to detrend the data
 This is problematic
 How best to detrend? (polynomial fit, sine wave, etc.)
 What is interesting dynamics and what is uninteresting trend?
o Take log first differences: Yn = log(Xn) - log(Xn-1) = log(Xn/Xn-1)
Surrogate data (see the sketch below)
o Generate data with the same power spectrum but no determinism
o This is colored noise
o Take the Fourier transform, randomize the phases, inverse Fourier transform
o Compare C(r), predictability, etc.
o Many surrogate data sets allow you to specify a confidence level
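Not part of the original notes: a Python sketch of the phase-randomization recipe above (keeping the DC and Nyquist components real so the output stays real):

    import numpy as np

    def surrogate(x, rng=None):
        """Same power spectrum as x, random phases, hence no determinism."""
        rng = np.random.default_rng() if rng is None else rng
        x = np.asarray(x, dtype=float)
        X = np.fft.rfft(x)
        phases = rng.uniform(0.0, 2.0 * np.pi, len(X))
        phases[0] = 0.0                       # keep the mean unchanged
        if len(x) % 2 == 0:
            phases[-1] = 0.0                  # keep the Nyquist term real
        return np.fft.irfft(np.abs(X) * np.exp(1j * phases), n=len(x))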
Multifractals
Most attractors are not uniformly dense
Orbit visits some portions more often than others
Local fractal dimension may vary over the attractor
Capacity dimension (D0) weights all portions equally
Correlation dimension (D2) emphasizes dense regions
q = 0 and 2 are only two possible weightings
Let Cq(r) = Σi [ Σj≠i Θ(r - rij) / (N - 1) ]^(q-1) / N, where Θ is the Heaviside step function
Then Dq = [d log Cq(r) / d log r] / (q - 1)
Note: for q = 2 this is just the correlation dimension
q = 0 is the capacity dimension
q = 1 is the information dimension
Other values of q don't have names (so far as I know)
q = infinity is dimension of densest part of attractor
q = -infinity is dimension of sparsest part of attractor
In general Dq < Dq-1
The K-S entropy can also be generalized:
Kq = -log( Σi pi^q ) / [(q - 1) N]
Summary of Time-Series Analysis
Verify integrity of data
o Graph X(t)
o Correct bad or missing data
Establish stationarity
o Observe trends in X(t)
o Compare first and second half of data set
o Detrend the data (if necessary)
 Take first differences
 Fit to low-order polynomial
 Fit to superposition of sine waves
Examine data plots
o Xi versus Xi-1
o Phase space plots (dX/dt versus X)
o Return maps (max X versus previous max X, etc.)
o Poincaré sections
Determine correlation time or minimum of mutual information
Look for periodicities (if correlation time decays slowly)
o Use FFT to get power spectrum
o Use Maximum entropy method (MEM) to get dominant frequencies
Find optimal embedding
o False nearest neighbors
o Saturation in correlation dimension
Determine correlation dimension
o Make sure log C(r) versus log r has scaling (linear) region
o Make sure result is insensitive to embedding
o Make sure you have sufficient data points (Tsonis)
Determine largest Lyapunov exponent and entropy (if chaotic)
Determine growth of unpredictability
Try to remove noise (if dimension is too high)
o Integrate data
o Use nonlinear predictor
o Use principal component analysis (PCA)
Construct model equations
Make short-term predictions
Compare with surrogate data sets
Time-Series Analysis Tutorial (using CDA)
Sine wave
Two incommensurate sine waves
Logistic map
Hénon map
Lorenz attractor
White noise
Mean daily temperatures
Standard & Poor's Index of 500 common stocks
J. C. Sprott | Physics 505 Home Page | Previous Lecture | Next Lecture
Other Fractal Sets
Chaos and Time-Series Analysis
12/5/00 Lecture #14 in Physics 505
Note: All assignments are due by 3:30 pm on Tuesday, December 19th in my office or mailbox.
Comments on Homework #12 (Correlation Dimension)
This was one of the harder assignments but most useful
Most people got a reasonable value of D2 = 1.21 ± 0.1
A few people got D2 < 1 (perhaps embedded in 1-D?)
Your Hénon C(r) should look like this with 1000 data points
The D2 versus log r plot should approach this with many data points
See also a detailed discussion of this problem
Review (last week) - Multifractals
Tips for speeding up the D2 calculation
Number of data points needed is N ~ 10^(2 + 0.4 D2) (Tsonis criterion)
Round-off errors discretize the state space and narrow the scaling region
Superimposed noise makes the dimension high at small r
Colored noise may be impossible to distinguish from chaos (conjecture)
Kolmogorov-Sinai (K-S) entropy
o Sum of the positive Lyapunov exponents (Pesin Identity)
o It is actually a rate of change of the usual entropy
o Estimate: K = d log C(r)/dDE in the limit of infinite DE
Multivariate data can be combined by intercalation
Filtering data should be harmless but often isn't
Missing data can be reconstructed but should not be ignored
Nonuniform sampling is OK if the nonuniformity is deterministic
Lack of stationarity
o dx/dt = F(x, y)
o dy/dt = G(x, y, t)
o dz/dt = 1 (non-autonomous slowly growing term)
o Increases system dimension by 1
o Increases attractor dimension by < 1
o If t is periodic, attractor projects onto a torus
o Can try to detrend the data
 This is problematic
 How best to detrend? (polynomial fit, sine wave, etc.)
 What is interesting dynamics and what is uninteresting trend?
o Take log first differences: Yn = log(Xn) - log(Xn-1) = log(Xn/Xn-1)
Surrogate data
o Generate data with same power spectrum but no determinism
o This is colored noise
o Take the Fourier transform, randomize the phases, inverse Fourier transform
o Compare C(r), predictability, etc.
o Many surrogate data sets allow you to specify a confidence level
Multifractals
Most attractors are not uniformly dense
Orbit visits some portions more often than others
Local fractal dimension may vary over the attractor
Capacity dimension (D0) weights all portions equally
Correlation dimension (D2) emphasizes dense regions
q = 0 and 2 are only two possible weightings
Let Cq(r) = Σi [ Σj≠i Θ(r - rij) / (N - 1) ]^(q-1) / N, where Θ is the Heaviside step function
Then Dq = [d log Cq(r) / d log r] / (q - 1)
Note: for q = 2 this is just the correlation dimension
q = 0 is the capacity dimension
q = 1 is the information dimension
Other values of q don't have names (so far as I know)
q can be negative (or non-integer)
There are (multiply) infinitely many dimensions
q = infinity is dimension of densest part of attractor
q = -infinity is dimension of sparsest part of attractor
All dimensions are the same if the attractor is uniformly dense
Otherwise, we call the object a multifractal
In general, dDq/dq < 0:
In general, dDq/dq < 0
The K-S entropy can also be generalized:
Kq = -log( Σi pi^q ) / [(q - 1) N]
Summary of Time-Series Analysis
Verify integrity of data
o Graph X(t)
o Correct bad or missing data
Establish stationarity
o Observe trends in X(t)
o Compare first and second half of data set
o Detrend the data
 Take (log) first differences
 Fit to low-order polynomial
 Fit to superposition of sine waves
Examine data plots
o Xi versus Xi-1
o Phase space plots (dX/dt versus X)
o Return maps (max X versus previous max X, etc.)
o Poincaré sections
Determine correlation time or minimum of mutual information
Look for periodicities (if correlation time decays slowly)
o Use FFT to get power spectrum
o Use Maximum entropy method (MEM) to get dominant frequencies
Find optimal embedding
o False nearest neighbors
o Saturation in correlation dimension
Determine correlation dimension
o Make sure log C(r) versus log r has scaling (linear) region
o Make sure result is insensitive to embedding
o Make sure you have sufficient data points (Tsonis)
Determine largest Lyapunov exponent and entropy (if chaotic)
Determine growth of unpredictability
Try to remove noise if dimension is too high
o Integrate data
o Use nonlinear predictor
o Use principal component analysis (PCA)
Construct model equations
Make short-term predictions
Compare with surrogate data sets
Time-Series Analysis Tutorial (using CDA)
Sine wave
Two incommensurate sine waves
Logistic map
Hénon map
Lorenz attractor
White noise
Mean daily temperatures
Standard & Poor's Index of 500 common stocks
Iterated Function Systems
2-D Linear affine transformation
o Xn+1 = aXn + bYn + e
o Yn+1 = cXn + dYn + f
Area expansion: An+1/An = det J = ad - bc
Contraction: |ad - bc| < 1
Translation: e, f ≠ 0
Rotation: a = d = r cos θ, b = -c = -r sin θ
Shear: bd ≠ -ac
Reflection: ad - bc < 0
Such transformations can be extended to 3-D and higher
To make an IFS fractal (see the sketch below):
o Specify two or more affine transformations
o Choose a random sequence of the transformations
o Apply the transformations in sequence
o Repeat many times
o Helps to weight the probabilities proportional to |det J|
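Not part of the original notes: a Python sketch of the random iteration algorithm above, using three equal-weight affine maps whose attractor is the Sierpinski triangle (the three maps here have equal |det J|, so equal probabilities suffice):

    import random

    # Each map contracts by 1/2 toward one vertex of a triangle
    MAPS = [
        lambda x, y: (0.5 * x,         0.5 * y),          # toward (0, 0)
        lambda x, y: (0.5 * x + 0.5,   0.5 * y),          # toward (1, 0)
        lambda x, y: (0.5 * x + 0.25,  0.5 * y + 0.5),    # toward (0.5, 1)
    ]

    def chaos_game(n=20000):
        x, y, pts = 0.0, 0.0, []
        for i in range(n):
            x, y = random.choice(MAPS)(x, y)   # random sequence of transformations
            if i > 20:                         # discard the brief transient
                pts.append((x, y))
        return pts                             # plot with small dots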
Examples of IFS fractals produced this way
o These were produced with two 2-D transformations
o Can also use two 3-D transformations and color the third D
o Aesthetic preferences are for high LE and high D2
o Note that LE is actually negative (all directions contract)
o Can also colorize by the number of successive applications of each transform
IFS compression
o With enough transformations, any image can be replicated
o Method pioneered by Barnsley & Hurd
o Barnsley started company, Iterated Systems, to commercialize this
o Used to produce images in Microsoft Encarta (CD-ROM encyclopedia)
o Uses the collage theorem to find optimal transformations
o Compression is lossy and slow (proprietary)
o 10 - 100 x compressions are typical
o Decompression is fast
o Provides unlimited resolution (but fake)
IFS clumpiness test
o Use time-series data instead of random numbers
o Play the chaos game, for example with a square
o Divide the range of data into 4 quartiles
o Random data (white noise) fills the square uniformly
o Chaotic data (e.g., the logistic map) produces a pattern
o The eye is very sensitive to patterns of this sort
o This has been done with the sequence of 4 bases in the DNA molecule
o It can also be done with speech or music
o Caution - colored noise (e.g., 1/f) also makes patterns
Mandelbrot and Julia Sets
Non-Attracting Chaotic Sets
o These sets are NOT attracting
o They are generally only transiently chaotic
Derivation from the logistic equation
o Start with the logistic equation: Xn+1 = AXn(1 - Xn)
o Define a new variable: Z = A(1/2 - X)
o Solve for X(Z, A) to get: X = 1/2 - Z/A
o Substitute into the logistic equation: Zn+1 = Zn² + c
o Where c = A/2 - A²/4
o Range (1 < A < 4) ==> -2 < c < 1/4
o Zn+1 = Zn² + c is equivalent to the logistic map
o Generalize the above to complex values of Z and c
Review of complex numbers
o Z = X + iY, where i = (-1)^(1/2)
o Z² = X² + 2iXY - Y²
o Separate real and imaginary parts
 Xn+1 = Xn² - Yn² + a
 Yn+1 = 2XnYn + b
 where a = Re(c) and b = Im(c)
o This is just another 2-D quadratic map
o X, Y, a, and b are real variables
o Orbits are either bounded or unbounded
Mandelbrot (M) set
o Region of a-b space with bounded orbits for X0 = Y0 = 0
o Orbit escapes to infinity if X² + Y² > 4 (circle of radius 2)
o It's sometimes defined as the complement of this
o There is only one Mandelbrot set
o The "buds" in the M-set correspond to different periodicities
o Usually plotted are escape-time contours in colors (see the sketch below)
o Each point in the M-set has a corresponding Julia set
o The M-set is everywhere connected
o Boundary of the M-set is fractal with dimension = 2 (proved)
o Area of the set is ~ π/2
o Points along the real axis replicate the logistic map and exhibit chaos
o Points just outside the boundary exhibit transient chaos
o The chaotic region appears to be a set of measure zero (not proved)
o Boundary of the M-set is a repellor
o With deep zoom, M-set and J-set are identical
o People have zoomed in by factors as large as 10^1600
o Miniature M-sets are found at deep zooms
o See the Mandelbrot Java applet written by Andrew R. Cavender
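Not part of the original notes: a Python sketch of the escape-time calculation above, iterating the real/imaginary form of Z --> Z² + c with the radius-2 escape test (grid and iteration limit are illustrative):

    def escape_time(a, b, max_iter=100):
        """Iterations until X^2 + Y^2 > 4 for Z0 = 0, c = a + ib;
        returns max_iter if the orbit stays bounded (point in the M-set)."""
        x = y = 0.0
        for n in range(max_iter):
            x, y = x * x - y * y + a, 2.0 * x * y + b   # real and imaginary parts
            if x * x + y * y > 4.0:
                return n
        return max_iter

    # Coarse escape-time contours over -2 < a < 1, -1.5 < b < 1.5
    grid = [[escape_time(-2.0 + 3.0 * i / 79, -1.5 + 3.0 * j / 39)
             for i in range(80)] for j in range(40)]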
Julia (J) sets
o Region of X0-Y0 space with bounded orbits for given a, b
o Orbit escapes to infinity if X² + Y² > 4 (circle of radius 2)
o This is sometimes called the "filled-in" Julia set
o There are infinitely many J-sets
o Usually plotted are escape-time contours in colors
o The J-sets correspond to points on the Mandelbrot set
o J-sets from inside the M-set are connected
o J-sets from outside the M-set are "dusts"
o Boundary of J-set is a repellor
o With deep zoom, J-set and M-set are identical
Fixed points of Julia sets
o Z = Z² + c ==> Z = 1/2 ± (1 - 4c)^(1/2)/2
o These fixed points are unstable (repellors)
o They can be found by backward iteration: Zn = ±(Zn+1 - c)^(1/2)
o There are two roots (pre-images), each with two roots, etc.
o Find them with the random iteration algorithm (cf: IFS)
o The repelling boundary of the J-set thus becomes an attractor
o An example is the Julia dendrite (c = i)
Generalized Julia sets
o Other complex functions Zn+1 = F(Zn) have interesting boundaries
o No good answer to what's special about these functions
o General 2-D quadratic maps sometimes have interesting basin boundaries
o Fractal eXtreme has a plug-in for calculating these
Applications of M-set and J-sets
o None known except computer art
o High traction shoe tread?
J. C. Sprott | Physics 505 Home Page | Previous Lecture | Next Lecture
Spatiotemporal Chaos and Complexity
Chaos and Time-Series Analysis
12/12/00 Lecture #15 in Physics 505
Announcements
Please verify grade records (on WWW)
o Some people have already earned an A (>25 points)
o You may wish to omit the last assignment(s)
Homework #15 is due at 3:30 pm on 12/19 in my office or mailbox
o Intended to be fun; may not affect your grade
o Note that HW #15 is graded differently from the others
All late assignments are due by 3:30 pm on 12/19 in my office or mailbox
Please fill out and return the teaching evaluation
o What topics did you enjoy most and least?
o Would you have preferred more depth and less breadth?
o How useful was it to have the lecture notes on the WWW?
o Was the one long lecture per week the best format for the course?
o Was the emphasis on computer experimentation appropriate?
Comments on Homework #13 (Iterated Function Systems)
Everyone had good plots of Sierpinski triangle and fern
Should use small dots and many iterations
Use non-uniform probabilities (P ~ |det J| = |ad - bc|)
IFS patterns are attractors and fractals, but not usually called "strange attractors"
Bounded, linear IFS's are usually contracting in every direction, hence not chaotic
Only one person calculated capacity dimension (1.585)
Review (last week) - Non-Attracting Chaotic Sets
Multifractals
o Fractals that are not uniformly covered
o Spectrum of generalized dimensions, Dq
o And generalized entropies, Kq
o Where -infinity < q < +infinity
Summary of Time-Series Analysis
Time-Series Analysis Tutorial (using CDA)
Iterated Function Systems
o Multiple affine transformations
o Random iteration algorithm
o Collage theorem
o Image compression
o IFS clumpiness test
Mandelbrot and Julia Sets
Non-Attracting Chaotic Sets
o These sets are NOT attracting
o They are generally only transiently chaotic
Derivation from logistic equation
o Start with logistic equation: Xn+1 = AXn(1 - Xn)
o Define a new variable: Z = A(1/2 - X)
o Solve for X(Z, A) to get: X = 1/2 - Z/A
o Substitute into the logistic equation: Zn+1 = Zn² + c
o Where c = A/2 - A²/4
o Range (1 < A < 4) ==> -2 < c < 1/4
o Zn+1 = Zn² + c is equivalent to the logistic map
o Generalize the above to complex values of Z and c
Review of complex numbers
o Z = X + iY, where i = (-1)^(1/2)
o Z² = X² + 2iXY - Y²
o Separate real and imaginary parts
 Xn+1 = Xn² - Yn² + a
 Yn+1 = 2XnYn + b
 where a = Re(c) and b = Im(c)
o This is just another 2-D quadratic map
o X, Y, a, and b are real variables
o Orbits are either bounded or unbounded
Mandelbrot (M) set
o Region of a-b space with bounded orbits for X0 = Y0 = 0
o Orbit escapes to infinity if X² + Y² > 4 (circle of radius 2)
o It's sometimes defined as the complement of this
o There is only one Mandelbrot set
o The "buds" in the M-set correspond to different periodicities
o Usually plotted are escape-time contours in colors
o Each point in the M-set has a corresponding Julia set
o The M-set is everywhere connected
o Boundary of the M-set is fractal with dimension = 2 (proved)
o Area of the set is ~ π/2
o Points along the real axis replicate logistic map and exhibit chaos
o Points just outside the boundary exhibit transient chaos
o The chaotic region appears to be a set of measure zero (not proved)
o Boundary of M-set is a repellor
o With deep zoom, M-set and J-set are identical
o People have zoomed in by factors as large as 10^1600
o Miniature M-sets are found at deep zooms
o See the Mandelbrot Java applet written by Andrew R. Cavender
Julia (J) sets
o Region of X0-Y0 space with bounded orbits for given a, b
o Orbit escapes to infinity if X² + Y² > 4 (circle of radius 2)
o This is sometimes called the "filled-in" Julia set
o There are infinitely many J-sets
o Usually plotted are escape-time contours in colors
o The J-sets correspond to points on the Mandelbrot set
o J-sets from inside the M-set are connected
o J-sets from outside the M-set are "dusts"
o Boundary of the J-set is a repellor
o With deep zoom, J-set and M-set are identical
Fixed points of Julia sets
o Z = Z² + c ==> Z = 1/2 ± (1 - 4c)^(1/2)/2
o These fixed points are unstable (repellors)
o They can be found by backward iteration: Zn = ±(Zn+1 - c)^(1/2)
o There are two roots (pre-images) each with two roots, etc.
o Find them with the random iteration algorithm (cf: IFS)
o The repelling boundary of J-set thus becomes an attractor
o An example is the Julia dendrite (c = i)
Generalized Julia sets
o Other complex functions Zn+1 = F(Zn) have interesting boundaries
o No good answer to what's special about these functions
o General 2-D quadratic maps sometimes have interesting basin boundaries
o Fractal eXtreme has a plug-in for calculating these
Applications of M-set and J-sets
o None known except computer art
o High traction shoe tread?
Spatiotemporal Chaos (Complexity)
Examples of spatiotemporal (infinite-dimensional) dynamics:
o Turbulent fluids
o Plasmas (ionized gases)
o The weather
o Molecular diffusion
o The brain
o Any process governed by partial differential equations (PDEs)
Two coupled logistic maps
o Xn+1 = (1 - ε) A1Xn(1 - Xn) + ε A2Yn(1 - Yn)
o Yn+1 = (1 - ε) A2Yn(1 - Yn) + ε A1Xn(1 - Xn)
o Each map has a different A but the same coupling 0 < ε < 1
o If one map is periodic and one chaotic, which wins?
o Can do the same with other maps and flows (Lorenz, etc.)
Coupled-map lattices (CMLs)
o Consider a 1-D lattice of logistic maps
o Coupling can be to all others (GCMs) or N neighbors
o Dynamics exhibit transient chaos, waves, diffusion, damping, etc.
o Such systems have not been extensively studied
o Could use a distribution of ε values
o Could extend the calculations to 2 or more dimensions (see the sketch below)
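Not part of the original notes: a Python sketch of a 1-D coupled-map lattice of logistic maps with diffusive nearest-neighbor coupling and periodic boundaries (A and ε are illustrative choices):

    import numpy as np

    def cml_step(x, A=4.0, eps=0.3):
        """One lattice update: local logistic map, then nearest-neighbor mixing."""
        f = A * x * (1.0 - x)
        return (1.0 - eps) * f + 0.5 * eps * (np.roll(f, 1) + np.roll(f, -1))

    rng = np.random.default_rng(2)
    x = rng.uniform(0.0, 1.0, 100)        # random initial lattice
    for _ in range(500):                  # watch waves, damping, transient chaos
        x = cml_step(x)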
Cellular automata
o 1-D example with periodic boundary conditions
o 2-D example (Game of Life)
o Many other CA rules and models are possible
o Local rules give rise to global behavior
o Visit the Primordial Soup Kitchen (Prof. David Griffeath)
Langton's ants
o Example of simple computer automaton
o Lots of ways to extend these models
Summary of spatiotemporal (complex) models
o Models can be discrete (D) or continuous (C) in space, time, and value:
space D, time D, value D ==> cellular automata
space D, time D, value C ==> coupled map lattices
space D, time C, value D ==> not studied
space D, time C, value C ==> coupled flow lattices
space C, time D, value D ==> not studied
space C, time D, value C ==> not studied
space C, time C, value D ==> not studied
space C, time C, value C ==> partial differential eqns
Concluding Remarks
Nature is nonlinear and often chaotic
Nature is complex, but simple models may suffice
These models preclude prediction but invite control
Remember the butterfly!
Please contact me if you want to do additional work in this area.
Best wishes!
J. C. Sprott | Physics 505 Home Page | Previous Lecture | Next Lecture