Implementation of a Chaotic Electromechanical Oscillator Described by a Hybrid
Differential Equation
A Thesis
Presented to
The Division of Mathematics and Natural Sciences
Reed College
In Partial Fulfillment
of the Requirements for the Degree
Bachelor of Arts
Xueping Long
May 2013
Approved for the Division
(Physics)
Professor Lucas Illing
Acknowledgments
I would like to thank the following people: Professor Lucas Illing for being a great
thesis advisor, Gregory Eibel, Robert Ormond, Jay Ewing, Cristian Panda, Yudan
Guo and Kuai Yu for the help they gave me during my thesis project, and my parents for always supporting me. I also want to thank the Reed College Science Research
Fellowship for funding my experimental setup.
Table of Contents
Introduction
Chapter 1: System Equation and Experimental Setup
1.1 System Equation
1.2 Magnetic Coil Design
1.3 Primary Coil Circuitry
1.4 Secondary Coil Circuitry
Chapter 2: Introduction to the Hybrid System
2.1 A Brief Introduction to Dynamics and Chaos
2.1.1 Dynamics and Dynamical Systems
2.1.2 Differential Equations and Iterated Maps
2.1.3 Fixed Points, Periodic Orbits and Quasiperiodic Orbits
2.1.4 Chaos and Lyapunov Exponent
2.1.5 Poincaré Section and Poincaré Return Map
2.1.6 Symbol Sequences, Shift Maps and Symbolic Dynamics
2.2 Solution to the Hybrid Differential Equation
2.2.1 Discussion of the Solution for β = ln 2
Chapter 3: Experimental Data and Analysis
Chapter 4: Further Discussion on the Hybrid System
Conclusion
Bibliography
List of Figures
1.1 Schematic setup of the mechanical oscillator
1.2 Picture of the mechanical oscillator
1.3 Primary coil circuit
1.4 Calculation of emf using Amperian loops and a Gaussian surface
1.5 Equivalent circuit for the primary coil circuit
1.6 First subcircuit of the secondary coil circuit
1.7 Second subcircuit of the secondary coil circuit
1.8 Last subcircuit of the secondary coil circuit
2.1 Typical waveform for u(t)
2.2 Phase space projection for β = ln 2
2.3 Phase portrait of the phase space of 1-D spring
2.4 Separation of nearby trajectories
2.5 Example of a Poincaré section in 2-D phase space
2.6 Example of a Poincaré section in 3-D phase space
2.7 Determination of symbol sequence representation of 0.3
2.8 Bernoulli Shift Map
2.9 un+1 versus un for β = ln 2
3.1 Position vs. time measurement
3.2 Phase space diagram of experiment data
3.3 Poincaré return map for experiment data
3.4 Poincaré return map in the form of Bernoulli shift map
3.5 Comparison of analytic solution to measured experimental data
4.1 Waveforms for numerical solution and its corresponding u, v and s
Abstract
This thesis describes an electromechanical oscillator whose governing equation of motion is an exactly solvable differential equation. The differential equation is a hybrid system that takes the form of a force-driven harmonic oscillator with a negative damping coefficient and a discrete switch condition that sets two different equilibrium positions. Very strong agreement is found between the waveforms produced by the oscillator and the waveforms predicted by the analytic solutions. An extension to the theory of hybrid systems is also proposed, which potentially allows the hybrid system to be perceived as the limiting case of an ordinary differential equation.
Introduction
The dynamics of chaotic systems is often described by nonlinear differential equations,
which generally cannot be exactly solved using analytic methods. Recently a class of
exactly solvable chaotic differential equations was discovered [1–4]. These differential
equations are interesting not only because they are exactly solvable and may provide
theoretical insights that would be otherwise unachievable, but also because they may
allow us to investigate the properties of chaos and to explore possible applications of
chaos.
One potential application of these novel solutions would be matched filters for
chaos communication [1]. It is known that matched filters are optimal receivers
in the presence of noise in conventional communication protocols, where a small
number of basis waveforms are used to transmit information. However, matched
filtering for chaos communication is often thought to be impossible to design since
chaotic waveforms, which are non-repeating, are used to transmit information in chaos
communication. The class of novel solutions discussed in this thesis have enabled the
design of matched filters [1], thus making chaos communication a viable option.
Although research is in progress to realize these ideas at technologically relevant microwave/radar frequencies, there remain many unresolved questions about these systems. An electromechanical oscillator that implements such an exactly solvable chaotic system is therefore of great interest.
This thesis describes an electromechanical oscillator whose governing equation of motion belongs to a novel class of exactly solvable differential equations. The differential equation is a hybrid system which takes the form of a force-driven harmonic oscillator with a negative damping coefficient and a discrete switch condition that selects between two oscillatory fixed points. Very strong agreement is found between waveforms produced by the oscillator and waveforms predicted by analytic solutions. An extension to the theory of hybrid systems is also proposed, which potentially allows the hybrid system to be perceived as the limiting case of an ordinary differential equation.
Chapter 1
System Equation and
Experimental Setup
Recently, an exactly solvable hybrid differential equation was discovered [1]. I am interested in implementing an electromechanical oscillator described by this differential
equation, which is
ü − 2βu̇ + (ω² + β²)(u − s) = 0,    (1.1)
where u(t) ∈ R is a continuous state, s(t) ∈ {±1} is a discrete state, and ω and β are
fixed parameters satisfying ω = 2π and 0 < β ≤ ln 2. Transitions in s(t) are made
when u̇ satisfies the guard condition
u̇ = 0 ⇒ s(t) = sgn(u(t)),    (1.2)

where the signum function is defined as

sgn(u) = { +1, u ≥ 0,
         { −1, u < 0.    (1.3)
This means that when the derivative of the continuous state hits zero, s(t) will be
set to sgn(u(t)) and maintain this value until the same guard condition is met next.
Note that sgn(0) = +1 is arbitrarily chosen for definiteness.
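The guard logic is simple enough to state in a few lines of code. The following Python sketch (the function names are mine, purely for illustration, and are not part of the experimental apparatus) mirrors Eqs. (1.2) and (1.3):

```python
def sgn(u):
    """Signum with the convention sgn(0) = +1, as in Eq. (1.3)."""
    return 1.0 if u >= 0 else -1.0

def update_state(u, udot, s):
    """Guard condition of Eq. (1.2): when udot hits zero, the discrete
    state s is reset to sgn(u); otherwise the previous s is held."""
    return sgn(u) if udot == 0 else s

# s switches only at turning points of u:
assert update_state(0.7, 0.0, -1.0) == 1.0    # guard met, s follows sgn(u)
assert update_state(0.7, 2.0, -1.0) == -1.0   # guard not met, s is held
```

Between guard events, `update_state` simply holds s, exactly as described above.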
Moving s to the right hand side in Eq. (1.1) yields
ü − 2βu̇ + (ω² + β²)u = (ω² + β²)s.    (1.4)
For comparison, the equation of a damped driven harmonic oscillator is
ẍ + 2ζω₀ẋ + ω₀²x = F/m,    (1.5)
where ω0 is the natural frequency, ζ is the damping ratio, m is mass, and F is the
driving force. It is easy to see that Eq. (1.1) is the equation for a damped driven
harmonic oscillator with negative damping.
A setup of an electromechanical oscillator described by Eq. (1.1) has been proposed
by Owens et al. [5]. Following their setup, an oscillator system composed of mechanical, electric and electromagnetic parts is constructed, and various improvements to
Figure 1.1: Schematic diagram of the mechanical oscillator. The coils are placed right below the
spring sets. A set of cylindrical neodymium magnets is placed in the center of each coil, aligned
along the vertical symmetry axis, and is attached to the spring set above with a fishing line. The
magnet sets are also connected to one another using a fishing line that passes through the two super
pulleys. A position sensor is placed below the primary coil to detect position of the magnets.
the mechanical setup and circuit design are made. As schematically demonstrated in
Figure 1.1, two 0.45 m metal bars are attached to a heavy, stable rectangular iron
frame 0.56 m wide and 1.8 m high. A wooden board 0.45 m long and 0.36 m wide
is placed horizontally in the middle of the metal frame. Two copper coils wound
around PVC pipes are placed on top of the wooden board. Two sets of springs
(McMaster-Carr Steel Extension Spring 9654K616) are hung down from the top bar,
and two pulleys (Pasco Super Pulley ME09450A) are placed on top of the bottom bar.
Each set of springs consists of four springs placed in parallel, and two aluminum set
screws are used to hold the ends of the springs in place. Two sets of three cylindrical
neodymium magnets (K&J Magnetics DX08B-N52), joined end to end, are placed in
the hollow region inside the coils and are attached to the springs with fishing lines.
The two sets of magnets are also connected to one another using a fishing line that
passes through the two pulleys. The fishing lines are connected in specially designed
knots such that the lengths of the fishing lines can be adjusted or fixed at will. A
position sensor (Measurement Specialties DC-EC Series) is placed below the primary
coil and is secured to the side of the metal frame using a combination of clamps.
Figure 1.2 is a photo showing the real physical setup of the oscillator.
1.1 System Equation
In order to derive the equation of motion for the system, a few assumptions are made
to simplify the system. The pulleys are assumed to be both massless and frictionless,
and the fishing lines connecting the magnets and springs are assumed to share equal
and constant tension throughout the line and to remain taut at all times. It follows from the
assumptions that the magnets are always moving along the central vertical axis of the
copper coil, and the two sets of magnets have the same oscillatory motion (although
it would look like one set of magnets is moving up if the other set is moving down).
Electric circuits are coupled to the mechanical oscillator via electromagnetic interaction of the magnetic masses and the coils. The primary coil is connected to a circuit
that acts as a negative resistor and provides a positive feedback. The secondary coil
is connected to a circuit that switches between constant offset states based on the
position and velocity information it receives. This hybrid system can therefore be
viewed as a damped harmonic oscillator system with a driving force that depends on
the displacement and velocity of the test mass. After carefully tuning various parameters in the circuits, the predicted chaotic behavior, as will be derived in Chap. 2,
can be obtained for the oscillator. The chaotic oscillator can thus be implemented
successfully.
In addition, it is assumed that the current applied to the coil varies sufficiently
slowly in time such that the magnetic field Bcoil generated inside the coil can be
treated as a steady state field. The coils are designed to display cylindrical symmetry
such that when a current I is applied to the coil, the magnetic field generated by the
coils, Bcoil , is in the vertical direction ŷ. Since each set of magnets can be treated as a
single cylindrical magnet with three times the length of the original magnet, and the
magnets only oscillate in the vertical ŷ direction along the central axis of the copper
Figure 1.2: Picture of the mechanical oscillator.
coil, when the magnets interact with the magnetic field Bcoil , the resultant force Fmag
is vertical:
Fmag = Fmag ŷ = µ (∂B/∂y) ŷ,    (1.6)
where µ is the magnitude of the magnetic dipole moment generated by the cylindrical
magnets. Since the system can be viewed as a force-driven damped harmonic system,
one can write down the equation of motion describing the system:
mÿ + cẏ + ky = Fmag ,
(1.7)
where Fmag is the effective magnetic force, k is the effective spring constant of the
system, and c is the lumped viscous damping coefficient.
The magnetic force in this system can be written as
Fmag = Fpc + Fsc ,
(1.8)
where Fpc is the magnetic force due to the primary coil and Fsc is the magnetic
force due to the secondary coil. As will be shown in Sec. (1.2) and Sec. (1.3), Fpc is
directly proportional to the velocity of the oscillating magnets inside the primary coil
and represents a negative viscous friction term, whereas Fsc represents an offset term
that is switched between two constant values as a function of position and velocity of
the magnets. It will be shown in Sec. (1.3) that Fpc can be written in the form
Fpc = (βc²/Reff) ẏ,    (1.9)
where βc is a constant that relates the current running in the coil to the magnetic
force on the cylindrical magnet due to the coil (an effective negative viscous friction
coefficient), and Reff is the magnitude of the negative resistor that will be discussed
in Sec. (1.3). Therefore Fmag can be written as
Fmag = (βc²/Reff) ẏ + Fsc(y, ẏ).    (1.10)
Rewriting Eq. (1.7) using Eq. (1.10) gives
mÿ + cẏ + k(y − y0) = (βc²/Reff) ẏ + F̃sc(y, ẏ),    (1.11)
(1.11)
where y0 is the position of the top of the coil such that it effectively allows the origin
of the coordinate system to be chosen at will, and F̃sc = Fsc − ky0 is the effective
magnetic force due to the secondary coil.
To nondimensionalize Eq. (1.11), a coordinate transformation is performed:
y = uY + Y0,    t = τT,    (1.12)
where Y , Y0 and T are constants to be determined later, and u and τ are dimensionless. Then Eq. (1.11) can be written as
(mY/T²) u″ − (βc²/Reff − c)(Y/T) u′ + kY (u + Y0/Y − y0/Y) = F̃sc,    (1.13)

where u′ is the τ-derivative of u.
To simplify Eq. (1.13), T is chosen such that
β² + 4π² = (k/m) T²,    (1.14)

where

β = (T/2m)(βc²/Reff − c).    (1.15)
Solving Eq. (1.14) and Eq. (1.15) yields
T = 4πm / √(4mk − (βc²/Reff − c)²),    β = 2π(βc²/Reff − c) / √(4mk − (βc²/Reff − c)²).    (1.16)
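As a quick sanity check of the algebra, one can pick illustrative (non-physical) parameter values and verify numerically that the expressions for T and β satisfy Eq. (1.14) and Eq. (1.15):

```python
import numpy as np

# Illustrative parameter values, chosen only to exercise the algebra.
m, k, c = 0.5, 80.0, 0.02       # mass, spring constant, damping
beta_c, R_eff = 1.5, 30.0       # coil constant and negative-resistance magnitude

a = beta_c**2 / R_eff - c       # net negative-damping coefficient
T = 4 * np.pi * m / np.sqrt(4 * m * k - a**2)      # Eq. (1.16)
beta = 2 * np.pi * a / np.sqrt(4 * m * k - a**2)   # Eq. (1.16)

assert np.isclose(beta**2 + 4 * np.pi**2, (k / m) * T**2)  # Eq. (1.14)
assert np.isclose(beta, T / (2 * m) * a)                   # Eq. (1.15)
```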
Then Eq. (1.13) becomes
u″ − 2βu′ + (β² + 4π²)(u + Y0/Y − y0/Y) = F̃sc T²/(mY),
or
u″ − 2βu′ + (β² + ω²)(u − s) = 0,    (1.17)
where ω = 2π, and
s = (1/Y) [ y0 − Y0 + F̃sc T²/((β² + 4π²)m) ]
  = (1/Y) [ y0 − Y0 + F̃sc T²/((k/m)T² · m) ]
  = (1/Y) [ y0 − Y0 + F̃sc/k ]
  = (1/Y) [ Fsc/k − Y0 ].    (1.18)
To find appropriate values for Y0 and Y , we demand
s = { +1 when Fsc = βc I^max,
    { −1 when Fsc = βc I^min.    (1.19)
Solving Eq. (1.18) with the definitions given in Eq. (1.19) yields
Y0 = (βc/2k)(I^max + I^min).    (1.20)
Plugging Eq. (1.20) into Eq. (1.18) for s = 1 yields
Y = (βc/2k)(I^max − I^min).    (1.21)
One can check that Eq. (1.20) and Eq. (1.21) together satisfy Eq. (1.18) for s = −1.
Therefore Eq. (1.17) is the nondimensionalized differential equation that governs the
electromechanical system. Note that Eq. (1.17) is exactly the same as the hybrid
differential equation I am interested in studying, Eq. (1.1).
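This check is easy to carry out numerically. With illustrative values (not the experimental parameters) for βc, k, I^max and I^min, the choices in Eq. (1.20) and Eq. (1.21) indeed map Fsc = βc I^max to s = +1 and Fsc = βc I^min to s = −1:

```python
# Illustrative values, not the experimental parameters.
beta_c, k = 1.5, 80.0         # coil constant (N/A) and spring constant (N/m)
I_max, I_min = 0.8, -0.6      # current extremes (A)

Y0 = beta_c / (2 * k) * (I_max + I_min)   # Eq. (1.20)
Y = beta_c / (2 * k) * (I_max - I_min)    # Eq. (1.21)

def s_of(F_sc):
    """Dimensionless offset state s = (Fsc/k - Y0)/Y from Eq. (1.18)."""
    return (F_sc / k - Y0) / Y

assert abs(s_of(beta_c * I_max) - 1.0) < 1e-9   # Eq. (1.19), upper branch
assert abs(s_of(beta_c * I_min) + 1.0) < 1e-9   # Eq. (1.19), lower branch
```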
The conditions given in Eq. (1.19) are fulfilled by controlling the electromagnetic
force in the secondary coil through the attached circuitry and position sensor, which
recognizes the guard condition and outputs the desired voltage accordingly. The
theory and development of the magnetic coils are described in the next section.
1.2 Magnetic Coil Design
As has been discussed in Eq. (1.6), the magnetic force inside a coil is in the ŷ direction
and is proportional to ∂B/∂y:
Fmag = µ (∂B/∂y) ŷ.    (1.22)
In order to achieve a constant offset for the secondary coil, a magnetic field that is
linear in y for some region of the coil is needed.
The copper coils were made in the machine shop in the subbasement of the Reed
College Physics Department. Copper wires of American wire gauge 24 are wound
onto two PVC pipes, each having a length of 40.6 cm, an outer diameter of 4.7 cm
and an inner diameter of 4.0 cm. For each coil one long continuous wire is wound in
layers; each successive layer is 12.7 cm shorter than the previous layer. The first layer
has a length of 30.5 cm, and a total of 24 layers are on a coil. A theoretical calculation
predicts the resistance of coils to be 103 Ω. The measured resistance for both coils is
104.5 Ω, in good agreement with the theoretical prediction. The measured inductance
is 0.567 H for both coils.
With such winding, the number of turns of copper wire per length, n(y), increases
approximately linearly with distance from the top of the coil:
n(y) = cn y,    y ∈ [0, Lc],    (1.23)
where the origin of the coordinate system is chosen to be the top of the coil and the
y-axis to point downward. Lc is the length of the coil and cn = 2Nturns/Lc² ≈ 14 cm⁻²
is a constant. It will be demonstrated next that this design indeed gives a region of
constant magnetic field gradient to a good approximation.
Because of its dense helical windings, the coil can be modeled as a continuum of
current-carrying circular loops. By Newton’s third law, each loop exerts a vertical
force on the magnet that is equal in magnitude but opposite in direction to the force
the magnet exerts on the loop. Therefore, the total force the coil exerts on the magnet
is equal in magnitude to the total force the magnet exerts on the coil, which is the
sum of the forces the magnet exerts on all the loops. Assuming that each loop has
the same radius (which is to say that the wire is thin), when the geometric center of
the magnet is at height y, by the Lorentz force law, in the continuous limit,
Fmag = − ∫ Jcoil × Bmag(rc, ȳ − y) dV
     = ∫ Jcoil Bρ,mag(rc, ȳ − y) ŷ dV
     = ∫0^Lc Bρ,mag(rc, ȳ − y) · 2πrc n(ȳ) I dȳ ŷ
     = 2πrc I ∫0^Lc n(ȳ) Bρ,mag(rc, ȳ − y) dȳ ŷ,    (1.24)
where Jcoil is the current density running in the coil, Bmag (rc , ȳ − y) is the magnetic
field felt by the wire at height ȳ due to the magnet, Bρ,mag is the radial component
of Bmag (rc , ȳ − y), rc is the coil radius, and I is the current running in the coil-wire.
The direction of the current is chosen such that Bmag , Jcoil and ŷ form a right-handed
system. The magnetic force Fmag depends on the geometry of the coil through n(ȳ)
and the geometry of the magnet through Bρ,mag . These dependencies can be put into
a single term βc , where
βc(y) = 2πrc ∫0^Lc n(ȳ) Bρ,mag(rc, ȳ − y) dȳ,    (1.25)
which allows the magnetic force to be expressed as
Fmag = βc(y) I.    (1.26)
Therefore, if one can show that βc (y) is independent of y, a constant current I would
result in a constant force Fmag and subsequently a constant magnetic field gradient
as desired.
Evaluating the value of βc for a dipole magnet using Eq. (1.25) with the definition
given in Eq. (1.24) yields
βc(y) = (µµ0 cn/2) [ y/√(y² + rc²) + ((Lc − y)³ − y rc²)/(rc² + (Lc − y)²)^(3/2) ].    (1.27)
Assuming Lc ≫ rc (long coil) and Lc > y ≫ rc (the magnet is away from the coil
edges), Eq. (1.27) can be simplified to
βc(y) ≈ µµ0 cn,    (1.28)
which is independent of y as desired.
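The flatness of βc(y) away from the coil edges can also be illustrated numerically. The sketch below evaluates Eq. (1.27) with illustrative coil dimensions (roughly the scale of the coils described in this section, not the exact values), lumping µµ0 cn into a single unit constant:

```python
import numpy as np

mu_mu0_cn = 1.0           # mu * mu0 * c_n lumped into a single unit constant
r_c, L_c = 0.024, 0.305   # illustrative coil radius and winding length (m)

def beta_c(y):
    """Force coefficient of Eq. (1.27) for a point-dipole magnet."""
    term1 = y / np.sqrt(y**2 + r_c**2)
    term2 = ((L_c - y)**3 - y * r_c**2) / (r_c**2 + (L_c - y)**2)**1.5
    return mu_mu0_cn / 2 * (term1 + term2)

# Away from the edges (L_c > y >> r_c) the coefficient is nearly flat:
ys = np.linspace(0.08, 0.15, 5)
vals = beta_c(ys)
assert np.allclose(vals, mu_mu0_cn, rtol=0.05)   # within 5% of mu*mu0*c_n
assert np.ptp(vals) < 0.01                       # and nearly constant
```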
1.3 Primary Coil Circuitry
The function of the primary coil circuit is to provide the negative damping term −2β u̇
in Eq. (1.1) to the hybrid system. As shown in Figure 1.3, the primary coil circuit
has three substructures placed in series: the coil, the negative resistor, and the active
inductor. From the perspective of circuit design, the primary coil can be treated as
an in-series combination of a voltage source, a resistor, and an inductor. That the
coil acts as a voltage source follows from Faraday’s law, which implies that the time
varying displacement of the magnets inside the coil induces an emf. The goal is to
provide positive feedback that is proportional to the emf, and hence a primary circuit
that compensates for the coil’s self-inductance and overcompensates the coil’s internal
resistance needs to be constructed.
The negative damping force provided by the primary coil is the force exerted on
the magnet due to the primary coil. As derived in Eq.(1.26), Fpc is
Fpc = βc Ipc .
(1.29)
I will derive an expression for Ipc by analyzing the emf ε that the moving magnet
induces across the coil.
Consider a horizontal Amperian loop of radius rc located at a height of ȳ, as shown
in Figure 1.4. By Faraday’s law, the induced emf ε is the negative time derivative of
the magnetic flux through the loop
Figure 1.3: The primary coil circuit. It is composed of the primary coil, the negative resistor and
the active inductor placed in series.
Figure 1.4: Calculation of emf using Amperian loops and a Gaussian surface. The dotted loops
are the Amperian loops, and the cylinder formed by connecting the two Amperian loops provides a
Gaussian surface one can work with. By Gauss’s law, the difference of the magnetic flux through
the top and bottom Amperian loop is exactly canceled by the magnetic flux through the sidewall of
the cylinder, as no magnetic monopole exists in nature.
ε = −dΦ(ȳ − y)/dt = (∂Φ(ȳ − y)/∂ȳ) ẏ,    (1.30)
where y is the position of the magnet. Consider a second Amperian loop that is dȳ
below the first Amperian loop. Together the two Amperian loops form a cylinder
whose surface can be considered as a Gaussian surface. By Gauss's law the total
magnetic flux through the Gaussian surface is equal to the magnetic “charge” enclosed
by the Gaussian surface, and, as there is no magnetic monopole, the difference dΦ of
the magnetic flux through the top and bottom Amperian loop is exactly canceled by
the magnetic flux through the sidewall of the cylinder: 2πrc Bρ,mag (rc , ȳ − y)dȳ. Then
∂Φ(ȳ − y)/∂ȳ = −2πrc Bρ,mag(rc, ȳ − y).    (1.31)
Since there are n(ȳ) loops of wire at position ȳ per unit length, the total induced
emf is
ε = −2πrc ∫0^Lc n(ȳ) Bρ,mag(rc, ȳ − y) dȳ ẏ = −βc ẏ.    (1.32)
If the effective resistance of the attached circuit is a negative resistor −Reff , then the
current in the primary coil is
Ipc = −ε/Reff = (βc/Reff) ẏ,    (1.33)

and the force due to the primary coil would be

Fpc = βc Ipc = (βc²/Reff) ẏ,    (1.34)
as has been claimed in Eq. (1.9).
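The flux relation Eq. (1.31), on which this chain rests, can be verified numerically for the special case of a point dipole, using the standard expressions for the on-axis dipole flux through a coaxial loop and the radial component of the dipole field (illustrative radius; all units lumped into a single constant):

```python
import numpy as np

mu0_mu = 1.0       # mu0 * mu lumped into a single unit constant
r_c = 0.024        # loop (coil) radius in meters, illustrative

def flux(z):
    """On-axis dipole flux through a coaxial loop of radius r_c at distance z."""
    return mu0_mu * r_c**2 / (2 * (r_c**2 + z**2)**1.5)

def B_rho(z):
    """Radial component of the dipole field at the loop, B_rho(r_c, z)."""
    return (3 * mu0_mu / (4 * np.pi)) * r_c * z / (r_c**2 + z**2)**2.5

# Central-difference check of Eq. (1.31): dPhi/dz = -2 pi r_c B_rho
z, h = 0.05, 1e-6
dPhi = (flux(z + h) - flux(z - h)) / (2 * h)
assert np.isclose(dPhi, -2 * np.pi * r_c * B_rho(z), rtol=1e-6)
```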
The effective resistance of the primary coil circuitry can be derived from its equivalent circuit, as shown in Figure 1.5 (the 0.1 µF capacitor, code 104, used to eliminate high-frequency
noise is ignored because it has a negligible effect on the circuit). The value of Lv is
chosen deliberately such that
R1/R2 = Lc/Lv.    (1.35)
Since V1 = V2,

I1/I2 = [(V1 − V)/R1] / [(V2 − V)/R2] = R2/R1.    (1.36)
Applying Kirchhoff’s voltage law to the circuit in Figure 1.5 yields
V2 + I2Rv + Lv (dI2/dt) = 0    (1.37)
Figure 1.5: Equivalent circuit for the primary coil circuit. V0 is the induced emf as a result of
moving magnets inside the primary coil, Lc and Rc are the effective inductance and resistance of the
primary coil respectively, Rv is a variable resistor, Lv is an active inductor whose value is controlled
by choosing appropriate components, and R1 = 100Ω, R2 = 1kΩ.
and

V0 − Lc (dI1/dt) − I1Rc = V1.    (1.38)

Solving Eq. (1.36), (1.37) and (1.38) for V0 yields

V0 = −I1 (R1Rv/R2 − Rc),    (1.39)

or

Reff = V0/I1 = −(R1Rv/R2 − Rc).    (1.40)
The variable resistor is tuned to keep Reff negative ( (R1 /R2 )Rv −Rc = Rv /10−Rc > 0,
or Rv > 10Rc ) while maximizing gain. As a consequence, the primary coil acts as a
source that provides positive feedback (negative damping) proportional to the velocity
of the oscillating magnet. In other words, the amplifier provides the power that is
required to sustain the mechanical oscillations in the system.
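Numerically, the negative-resistance condition is easy to illustrate. Using the resistor values from Figure 1.5, the measured coil resistance from Sec. 1.2, and an illustrative setting of the variable resistor (the actual tuned value is not recorded here), Eq. (1.40) gives a negative Reff:

```python
# Resistor values from Figure 1.5; Rv is an illustrative setting.
R1, R2 = 100.0, 1000.0        # ohms
R_coil = 104.5                # measured primary-coil resistance (Sec. 1.2)
Rv = 2000.0                   # variable resistor, set above 10 * R_coil

R_eff = -(R1 * Rv / R2 - R_coil)   # Eq. (1.40)
assert Rv > 10 * R_coil            # negative-resistance condition
assert R_eff < 0                   # the circuit acts as a negative resistor
```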
1.4 Secondary Coil Circuitry
The secondary circuit provides the constant offset force on the right hand side of
Eq. (1.1). It shifts the oscillation fixed point between two levels based on the switch
condition s defined by Eq. (1.2). It is clear from the switch condition that to produce
the desired driving force, two pieces of information are required: u(t) and u̇(t). Therefore the secondary circuit needs to have three subcircuits: one that collects position
Figure 1.6: The first subcircuit of the secondary coil circuit. This circuit takes information from
the position sensor and derives information on position and velocity, which are then used to output
the desired offset voltage.
and velocity information from the data gathered by the position sensor, one logic
circuit that implements the desired switch condition, and one that outputs current
to the secondary coil.
The first subcircuit, which collects position and velocity information, is shown
in Figure 1.6. Position is determined by a Measurement Specialties DC-EC Series
position sensor, which relates position linearly to a voltage output. As the position
sensor puts out voltage from -2V to 8V while the displacement of the system is relatively symmetric, a zero-displacement position corresponding to approximately 4V
is set. However, it is necessary to acquire the exact zero-displacement position (voltage) with the help of a LabVIEW program before actually running the experiment,
because the chaotic system is extremely sensitive to this parameter. I denote this
zero-displacement voltage by Vg .
In Figure 1.6, the voltage signal Vin collected by the position sensor is buffered,
the zero-displacement voltage Vg is subtracted from Vin using an AD620 instrumentation amplifier, and the difference is compared to ground using an LM339 comparator. The
digital output of the LM339 comparator, called x, encodes the relative position (higher
or lower) with respect to the zero-displacement point.
Vin is also used to derive the velocity information. Since velocity is the time
derivative of relative position, and Vin is directly proportional to relative position,
Vin is converted to velocity information by being sent into an R-C differentiator
(the 100kΩ variable resistor in front of the differentiator is used to suppress high
frequency noise). Velocity information is then compared to ground, yielding a HIGH
or LOW voltage (±14V depending on the sign of the input, and converted to a suitable
amplitude for digital signals). The resulting signal encodes the sign of the velocity.
A final buffer is used to isolate the voltage divider from the logic components in the
second subcircuit.
Signals encoding position and velocity information are next sent into the second
Figure 1.7: The second subcircuit of the secondary coil circuit. This circuit takes position and
velocity information from the first subcircuit and outputs the correct waveform for the offset signal.
Figure 1.8: The last subcircuit of the secondary coil circuit. This circuit scales down the signal
from Vdout by a factor of 1/6, which is then suitable as an input to a LM675 high-output current
precision amplifier that generates the current driving the secondary coil.
subcircuit. As the offset voltage corresponds to position and updates its value only when the velocity vanishes (in other words, when the velocity changes sign, since it is extremely unlikely for the velocity to stay at zero for a considerable period of time in a real experiment), one can build a circuit that determines the position and sets the output (HIGH or LOW) accordingly at every change of sign of the velocity.
In Figure 1.7, the DM74123N retriggerable one-shot circuit is connected in series
with an OR gate such that it outputs a HIGH every time the velocity changes sign.
This information, together with position information, is then sent into an SN7475
bistable latch, such that every time C is HIGH, Q will be set to the value of D
(position). Finally, an LM339 comparator is used to set a LOW value, since a LOW output from the SN7475 latch is normally greater than 0V. The comparator LOW level
is set to 0.6V. The output signal from the digital circuit is then sent into the third
subcircuit, which generates the desired signal to the coil.
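The behavior of this digital subcircuit can be emulated in a few lines of Python. The sketch below is an idealized model (my own, not a gate-level simulation): it latches the sign of the position every time the velocity signal changes sign, which is the essential function of the one-shot, OR-gate and latch combination:

```python
def sgn(x):
    """Thesis sign convention: sgn(0) = +1."""
    return 1 if x >= 0 else -1

def run_latch(samples):
    """Idealized second subcircuit: each sign change of the velocity signal v
    (the one-shot firing) clocks the latch, which captures the current
    position sign x as the new offset state s."""
    out, s, prev_v = [], 1, None
    for x, v in samples:
        if prev_v is not None and sgn(v) != sgn(prev_v):
            s = sgn(x)
        out.append(s)
        prev_v = v
    return out

# The offset switches only at the third sample, where v changes sign:
trace = [(0.5, 1.0), (0.2, 0.5), (-0.1, -0.5), (-0.4, -1.0)]
assert run_latch(trace) == [1, 1, -1, -1]
```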
The third subcircuit is shown in Figure 1.8. The input signal to this circuit, i.e.
Vdout , is scaled by a factor of 1/6 and is sent to a LM675 high-output-current precision
amplifier that generates the current driving the secondary coil.
Chapter 2
Introduction to the Hybrid System
The governing equation for the hybrid system is
ü − 2βu̇ + (ω² + β²)(u − s) = 0,    (2.1)

where u(t) ∈ R is a continuous state, s(t) ∈ {±1} is a discrete state, and ω and β are fixed parameters satisfying ω = 2π and 0 < β ≤ ln 2. Transitions in s(t) are made when u̇ satisfies the guard condition

u̇ = 0 ⇒ s(t) = sgn(u(t)),    (2.2)

where the signum function is defined as

sgn(u) = { +1, u ≥ 0,
         { −1, u < 0.    (2.3)
Using an adaptive step size Runge-Kutta integrator (MATLAB's ode45) to integrate
the ordinary differential equation and implementing the switching condition as a
detectable event in the integrator, a typical waveform for the hybrid system (2.1) is
obtained using numerical integration for β = ln 2 and is shown in Figure 2.1 (source
code for the program was provided by Professor Lucas Illing). The corresponding
phase-space projection is shown in Figure 2.2. The solution obtained by numerical
integration appears to be chaotic. A brief introduction to dynamics and a discussion
on the meaning of chaos will be given in Sec. 2.1. An exact solution to Eq. (2.1) will
also be derived, which demonstrates that the solution obtained is indeed chaotic. The
textbooks consulted in Sec. (2.1) are Ott [6], Hilborn [7], Strogatz [8] and Zheng [9].
The method for solving Eq. (2.1) in Sec. (2.2) is from the paper by Corron, Blakely
and Stahl [1].
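A piecewise integration scheme equivalent to the event-detecting ode45 setup can be sketched with SciPy's solve_ivp (this is my own illustrative reimplementation, not Professor Illing's program): integrate Eq. (2.1) with s held fixed, stop at each zero crossing of u̇, update s according to the guard condition, and restart. The tiny nudge after each event keeps the integrator from retriggering on the guard surface it just left.

```python
import numpy as np
from scipy.integrate import solve_ivp

beta, omega = np.log(2), 2 * np.pi

def rhs(t, y, s):
    """Right-hand side of Eq. (2.1) with the discrete state s held fixed."""
    u, v = y
    return np.array([v, 2 * beta * v - (omega**2 + beta**2) * (u - s)])

def guard(t, y, s):
    """Guard condition: a zero crossing of udot triggers a switch."""
    return y[1]
guard.terminal = True
guard.direction = 0

def integrate(u0, v0, t_end=20.0):
    t, y = 0.0, np.array([u0, v0])
    s = 1.0 if u0 >= 0 else -1.0
    ts, us = [t], [y[0]]
    while t < t_end - 1e-9:
        sol = solve_ivp(rhs, (t, t_end), y, args=(s,), events=guard,
                        rtol=1e-9, atol=1e-9, max_step=0.01)
        ts.extend(sol.t[1:]); us.extend(sol.y[0, 1:])
        t, y = sol.t[-1], sol.y[:, -1].copy()
        if sol.status == 1:              # guard fired: reset s = sgn(u) ...
            s = 1.0 if y[0] >= 0 else -1.0
            y += 1e-8 * rhs(t, y, s)     # ... and nudge off the guard surface
            t += 1e-8
        else:
            break
    return np.array(ts), np.array(us)

ts, us = integrate(1.2, 0.5)             # a bounded, chaotic-looking waveform
```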
2.1 A Brief Introduction to Dynamics and Chaos

2.1.1 Dynamics and Dynamical Systems
Dynamics is a subject that studies how a given system evolves with time. In particular, people are interested in questions such as whether the system will eventually
Figure 2.1: Typical waveform for u(t). The waveform is obtained from numerical integration of
the hybrid system for β = ln 2.
Figure 2.2: Phase space projection from numerical integration of the hybrid system for β = ln 2.
settle to an equilibrium state, demonstrate periodic behavior, or do something
more complicated.

Figure 2.3: Phase portrait of a 1-D spring. The spring oscillates harmonically about its equilibrium position. Every ellipse is a possible trajectory for a point on the spring. The origin is special: its trajectory remains a single fixed point for all time.
In order to discuss dynamics, it is convenient to introduce the concept of phase
space. In general, the phase space of a system is a space in which all possible states
of the system are represented. For this reason, the phase space is also called the state
space. Every possible state of the given system corresponds to a point in the phase
space uniquely. In the study of moving particles, the phase space often consists of all
possible values of position and velocity variables. Figure 2.3 shows the phase space of a 1-D spring oscillating about its equilibrium position.
Each ellipse in Figure 2.3 represents a possible time evolution for a given initial state, and thus is a possible trajectory in the phase space. The origin alone constitutes another kind of trajectory: it represents a spring in its equilibrium position, where it remains fixed for all future time. Such a plot, which includes a collection of several different trajectories originating from different initial conditions, is called a phase portrait for the system. As each point in phase space can be considered an initial condition, the phase space is completely filled with trajectories.
Finally, a deterministic dynamical system consists of a phase space and a fixed
rule that determines the time evolution of any possible starting state of the system.
One key thing to note about deterministic systems is that for a fixed time interval,
only one future state can result from a given current state.
2.1.2 Differential Equations and Iterated Maps
When considering dynamical systems one can distinguish dynamical systems governed
by differential equations and dynamical systems governed by iterated maps. The
choice of the description of a dynamical system depends on the nature of time that
describes the system: differential equations are used to describe dynamical systems
in which time is a continuous variable, while iterated maps are used to describe
dynamical systems in which time is discrete.
If a system can be described solely by some variables and their time derivatives,
it is possible to convert the system into a system of first-order, autonomous ordinary
differential equations,
$$\begin{aligned} \dot{x}^{(1)} &= f_1(x^{(1)}, \ldots, x^{(N)}),\\ &\;\;\vdots\\ \dot{x}^{(N)} &= f_N(x^{(1)}, \ldots, x^{(N)}), \end{aligned} \tag{2.4}$$
where ẋ(i) ≡ dx(i) /dt, and the expressions for f1 , ..., fN will depend on the actual
system one is trying to describe. Eq.(2.4) can also be written in vector form as
$$\dot{\mathbf{x}}(t) = \mathbf{f}[\mathbf{x}(t)], \tag{2.5}$$
where x is an N -dimensional vector.
The differential equation system given in Eq. (2.4) is said to be autonomous because it has no explicit time dependence. In the case of non-autonomous systems, in
which the differential equations explicitly depend on time, one more equation can be
added to Eq. (2.4) by setting x(0) = t:
$$\begin{aligned} \dot{x}^{(0)} &= 1,\\ \dot{x}^{(1)} &= f_1(x^{(0)}, \ldots, x^{(N)}),\\ &\;\;\vdots\\ \dot{x}^{(N)} &= f_N(x^{(0)}, \ldots, x^{(N)}). \end{aligned} \tag{2.6}$$
The advantage of writing the non-autonomous system in the autonomous form (2.6)
is that non-autonomous systems and autonomous systems can be treated on an equal
footing, but the advantage gained does come with a price: there is one more dimension
in phase space.
Sometimes information about a system can only be obtained at discrete, integer-valued times. For example, in my experiment the position is recorded every 7 ms. In
such cases, it is useful to describe the system using a mapping:
$$\mathbf{x}_{n+1} = \mathbf{M}(\mathbf{x}_n), \tag{2.7}$$
where $\mathbf{x}_n$ is an $N$-dimensional vector $\mathbf{x}_n = (x_n^{(1)}, x_n^{(2)}, \ldots, x_n^{(N)})$ representing the state
of the system after n time intervals, and M represents a fixed rule that governs
how the current state maps to the next state. Again this is a dynamical system,
because given any initial state x0 , the mapping M can be applied n times to get the
uniquely determined state at t = n∆t, where ∆t is the time interval between adjacent
measurements. Systems described by Eq. (2.7) are known as iterated maps. They
are also referred to as difference equations, recursion relations or simply maps.
It should be noted that continuous time systems can be turned into discrete time
systems by sampling at regular time intervals or using the Poincaré section method.
Sampling at a regular time interval T is also known as the time T map, in which
a continuous time trajectory x(t) is evaluated at discrete times tn = t0 + nT (n =
0, 1, 2 . . .). In this way, a continuous time trajectory x(t) yields a discrete time orbit
xn ≡ x(tn ). The Poincaré section method will be discussed in Sec. (2.1.5). Of course,
there are other types of maps that are not derived from a discretization of continuous-time ordinary differential equations. For more information on iterated maps, refer to
Ott [6] and Hilborn [7].
2.1.3 Fixed Points, Periodic Orbits and Quasiperiodic Orbits
Often people are interested in the time evolution of systems. While generally points
in the phase space of a dynamical system follow predestined trajectories as dictated
by the deterministic rule, some special points never change: the fixed points.
If a dynamical system is described by a set of autonomous, first-order differential
equations, as in Eq. (2.4), a point in the phase space of the system for which the time
derivatives of the phase space variables are 0 is called a fixed point for the system.
That is, a fixed point is a point for which
$$\frac{dx^{(i)}}{dt} = 0, \tag{2.8}$$
where i = 1, ..., N , and N is the dimension of the system. Equivalently, in vector
form fixed points must satisfy
ẋ = 0.
(2.9)
Fixed points for systems of differential equations are also referred to as equilibrium
points, or critical points, or singular points.
If a dynamical system is described by an iterated map, as in Eq. (2.7), a fixed
point would be one whose next iteration is the same as the current point:
x∗ = M(x∗ ).
(2.10)
While the fixed points stay at their values and never change, some trajectories
return to their previous value after some fixed time interval. These trajectories are
called periodic orbits. For a dynamical system described by differential equations,
such periodic behavior is characterized by
x(t + T ) = x(t),
(2.11)
where T is the period, while for a dynamical system described by iterated maps, the
periodic behavior is characterized by
xn+P = xn ,
(2.12)
xn+P = MP (xn ),
(2.13)
or equivalently,
where P is the period. Periodic orbits and fixed points for a dynamical system described by differential equations are shown in Figure 2.3. The ellipses are periodic
orbits while the origin is a fixed point.
In addition, sometimes one encounters another type of trajectory, called a quasiperiodic orbit (or almost periodic orbit), which can be described by a quasiperiodic function
F (t) = f (ω1 t, . . . , ωm t)
(2.14)
for some continuous function f (ϕ1 , . . . , ϕm ) of m variables (m ≥ 2), periodic in
ϕ1 , . . . , ϕm with period 2π, and some set of positive frequencies ω1 , . . . , ωm . The
frequencies are rationally independent (linearly independent over the rationals), meaning that
$$\sum_{i=1}^{m} k_i \omega_i \neq 0 \tag{2.15}$$
for all non-zero integer valued vectors k = (k1 , . . . , km ) [10]. This thesis will not
discuss quasiperiodic orbits in detail.
2.1.4 Chaos and Lyapunov Exponent
Besides fixed points, periodic orbits and quasiperiodic orbits, there is another possible
type of trajectories: trajectories that are said to be chaotic. A chaotic system must
satisfy the following criteria:
1. the system demonstrates aperiodic long-term behavior on a nontrivial open set
in phase space,
2. the system must be deterministic,
3. the system shows sensitivity to initial conditions.
Aperiodic long-term behavior means that there are trajectories in the system that
do not eventually settle to fixed points, periodic orbits or quasiperiodic orbits as
t → ∞. The requirement of a nontrivial open set ensures that such trajectories are not exceptional. A deterministic system is one in which knowledge of the initial state gives complete knowledge of all future states. In other words, the system is not
subject to random or noisy inputs or parameters. Sensitivity to initial conditions
roughly means that neighboring trajectories separate exponentially fast.
Here a more quantitative description of sensitivity to initial conditions is developed. For a dynamical system described by differential equations, suppose two nearby
points start off with a separation vector d0 , and after time t the separation vector
becomes dt , as shown in Figure 2.4. If neighboring trajectories separate exponentially
fast, then
|dt | ≈ |d0 |eλt ,
(2.16)
where |d| is the length of the vector d, and λ is a positive number.
This idea can be generalized by introducing the Lyapunov exponent, which
characterizes the rate of separation between infinitesimally close trajectories. Suppose
two trajectories with an initially infinitesimally small separation δx(0) diverge such
that at time t the separation δx(t) satisfies
|δx(t)| ≈ |δx(0)|eλt .
(2.17)
λ is called the Lyapunov exponent. If the Lyapunov exponent is positive, then
separation between nearby trajectories grows exponentially fast. The system is extremely sensitive to small changes in the initial conditions.
Figure 2.4: Separation of nearby trajectories. Initially two points, x0 and x0 + d0 , are separated
by a vector d0 . After time t, x0 travels to xt while x0 + d0 travels to xt + dt , and the separation
becomes dt . For the system to be “sensitive to initial conditions”, the separation needs to grow
exponentially fast: |dt | = eλt |d0 | for some positive coefficient λ.
The definition of Lyapunov exponent can be generalized to iterated maps. Given
the initial infinitesimal separation δx0 and the separation after n iterations δxn , if
|δxn | ≈ |δx0 |enλ ,
(2.18)
then λ is called the Lyapunov exponent.
Note that if the phase space has N dimensions, N Lyapunov exponents can be
defined, corresponding to the N dimensions. A dynamical system may be chaotic as
long as the largest Lyapunov exponent is positive.
For a one-dimensional iterated map
xn+1 = M (xn ),
(2.19)
an explicit formula for its Lyapunov exponent can be found. Suppose two neighboring
trajectories start off at x0 and x0 + δ0 , where δ0 → 0. Then the separation after n
iterations is $\delta_n = M^n(x_0 + \delta_0) - M^n(x_0)$. By the definition of the Lyapunov exponent,
$$|\delta_n| \approx |\delta_0|\,e^{n\lambda}. \tag{2.20}$$
Dividing both sides by $|\delta_0|$ and taking the logarithm yields
$$\ln\left|\frac{\delta_n}{\delta_0}\right| \approx n\lambda, \tag{2.21}$$
so that
$$\lambda \approx \frac{1}{n}\ln\left|\frac{\delta_n}{\delta_0}\right| = \frac{1}{n}\ln\left|\frac{M^n(x_0 + \delta_0) - M^n(x_0)}{\delta_0}\right| = \frac{1}{n}\ln\left|(M^n)'(x_0)\right|. \tag{2.22}$$
The last equality follows from the definition of the derivative. Eq. (2.22) can be
simplified further by applying the chain rule:
$$\begin{aligned} (M^n)'(x_0) &= \left(M^{n-1} \circ M\right)'(x_0)\\ &= \left(M^{n-1}\right)'(M(x_0)) \cdot M'(x_0)\\ &= \left(M^{n-1}\right)'(x_1) \cdot M'(x_0)\\ &\;\;\vdots\\ &= M'(x_{n-1}) \cdot M'(x_{n-2}) \cdots M'(x_0)\\ &= \prod_{i=0}^{n-1} M'(x_i). \end{aligned} \tag{2.23}$$
Then Eq. (2.22) becomes
$$\lambda \approx \frac{1}{n}\ln\left|\prod_{i=0}^{n-1} M'(x_i)\right| = \frac{1}{n}\sum_{i=0}^{n-1}\ln|M'(x_i)|. \tag{2.24}$$
More formally, for a trajectory starting at $x_0$ in a one-dimensional iterated map, the Lyapunov exponent can be calculated using
$$\lambda = \lim_{n\to\infty}\frac{1}{n}\sum_{i=0}^{n-1}\ln|M'(x_i)|. \tag{2.25}$$
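Eq. (2.25) is easy to check numerically. The sketch below estimates the Lyapunov exponent of the logistic map x → 4x(1 − x), a standard example whose Lyapunov exponent is known to be ln 2 ≈ 0.693; the iteration counts are illustrative choices.

```python
import math

def lyapunov(f, df, x0, n=100_000, transient=1_000):
    """Estimate the Lyapunov exponent of a 1-D map via Eq. (2.25)."""
    x = x0
    for _ in range(transient):            # discard transient behavior
        x = f(x)
    total = 0.0
    for _ in range(n):
        total += math.log(abs(df(x)))     # accumulate ln|M'(x_i)|
        x = f(x)
    return total / n

# logistic map at r = 4; its Lyapunov exponent is known to equal ln 2
f = lambda x: 4.0 * x * (1.0 - x)
df = lambda x: 4.0 - 8.0 * x
lam = lyapunov(f, df, 0.123)              # close to 0.693
```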
If an iterated map is obtained from a time T map of a well-behaved continuous time
system, the Lyapunov exponent can be used to determine the nature of the continuous
time dynamics. Suppose the original continuous time system is sampled at a regular
time interval ∆t such that an iterated map with a positive largest Lyapunov exponent
λ is obtained, and within each time interval the system does not behave crazily (such
as performing wild oscillations). Consider two nearby trajectories of initial separation
|δ0 | ≡ |δ(0)|. The separation after n iterations is
|δn | ≡ |δ(n∆t)| ≈ |δ(0)|eλn∆t = |δ0 |en(λ∆t) .
(2.26)
Thus, as long as λ > 0, nearby trajectories separate exponentially fast at the sampling
points. Since the continuous time system is well-behaved, the trajectories separate
exponentially fast. Therefore a continuous time system is chaotic if the iterated map
obtained by the time T map is chaotic. This fact will be used in Sec. (2.2) to verify
that the continuous time hybrid differential equation is chaotic.
2.1.5 Poincaré Section and Poincaré Return Map
As has been mentioned in Sec. (2.1.2), a continuous time system can be turned into a discrete time system via the Poincaré section method. A Poincaré section of an N-dimensional phase space is an (N − 1)-dimensional subspace of the original phase space such that the trajectories of the dynamical system intersect the subspace transversely, which means that trajectories are not parallel to the Poincaré section. The Poincaré section of a 2-D system may be a straight line, while it may be a plane for a 3-D system. Examples of Poincaré sections are shown in Figures 2.5 and 2.6.

Figure 2.5: An example of a Poincaré section in 2-D phase space. In the x-v phase plane, a Poincaré section could be a straight line passing through the origin, such that the trajectories intersect the Poincaré section transversely, as shown in the figure. Given a particular trajectory and a point x0 where the trajectory crosses the Poincaré line, the next crossing point x1 can be found by following the trajectory, and similarly all the future crossings x2, x3, etc. can be found. The mapping P that carries one crossing point to the next, P(xn) = xn+1, is known as the Poincaré map, and x1, x2, etc. are the returns of the Poincaré map.
Now given a particular trajectory of the continuous time system and a crossing point x0 on the Poincaré section, one can treat the crossing point as an initial condition and integrate over time, until the trajectory next crosses the Poincaré section at x1. This process can be continued and a series of crossing points x2, x3, etc. can be obtained. The Poincaré map P is the mapping that maps a crossing point xn to the next crossing point xn+1:
P (xn ) = xn+1 .
(2.27)
The crossing points xn are sometimes called the returns of the Poincaré map. As the returns are confined to an (N − 1)-dimensional Poincaré section, the Poincaré map simplifies the geometric description of the dynamics by removing one of the phase space dimensions. Nevertheless, it retains the essential information about the dynamical system, such as periodicity, quasi-periodicity and chaoticity. For more discussion, see Ott [6] and Hilborn [7].
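For sampled data, the returns of a section such as v = 0 can be picked out by locating sign changes of v and interpolating. The sketch below is an illustration on a synthetic damped-oscillator trajectory, not the thesis's analysis code.

```python
import numpy as np

def poincare_returns(t, x, v):
    """Interpolated values of x at crossings of the section v = 0.

    t, x, v are equal-length arrays sampling a continuous-time trajectory;
    crossings are located by sign changes of v and refined by linear
    interpolation between the two bracketing samples."""
    out = []
    for i in range(len(v) - 1):
        if v[i] == 0.0:
            out.append(x[i])
        elif v[i] * v[i + 1] < 0.0:          # sign change brackets a crossing
            frac = v[i] / (v[i] - v[i + 1])
            out.append(x[i] + frac * (x[i + 1] - x[i]))
    return np.array(out)

# test trajectory: a damped harmonic oscillator, x(t) = exp(-0.1 t) cos(t)
t = np.linspace(0.0, 20.0, 4001)
x = np.exp(-0.1 * t) * np.cos(t)
v = np.exp(-0.1 * t) * (-0.1 * np.cos(t) - np.sin(t))
r = poincare_returns(t, x, v)   # one return per half period, shrinking in size
```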
2.1.6 Symbol Sequences, Shift Maps and Symbolic Dynamics
Suppose the phase space of a system is the interval between 0 and 1 (including 0 but
excluding 1) on the real axis. Every point in the interval represents a possible state
of the system. Then the system has infinitely many possible states, corresponding to
the uncountably many points in the interval [0, 1). The goal is to use some sequence
of symbols to represent each state.
Figure 2.6: An example of a Poincaré section in 3-D phase space. In the xyz phase space, a
Poincaré section could be a plane passing through the origin, such that the trajectories intersect the
Poincaré section transversely, as shown in the figure. x1 , x2 and x3 are the returns to the Poincaré
map defined by the shown trajectory and initial crossing point x0 .
Figure 2.7: Determination of symbol sequence representation of 0.3. As 0.3 is in the left half
interval of [0, 1), the first symbol for 0.3 is L. Next as 0.3 is in the right half interval of [0, 1/2)
(0.3 ∈ [1/4, 1/2)), its second symbol is R. Hence the first two symbols of the symbol sequence
representation of 0.3 are LR.
It is clear that every point in [0, 1) must be either in [0, 1/2) or [1/2, 1). If the
point is in [0, 1/2), write down the first symbol as L, otherwise write down R. Visually
it makes sense: if the point is in the left half of the interval it can be represented
as LEFT, or L, and if it is in the right half of the interval it can be represented as
RIGHT, or R.
Once the first symbol is determined, a similar procedure can be carried out to
determine the second symbol. For instance, if the point is in [0, 1/2), then it must
be in either [0, 1/4) or [1/4, 1/2). If it is in [0, 1/4), write down the second symbol L,
corresponding to it being in the left half interval of [0, 1/2). Otherwise write down the
second symbol as R. If the point is in [1/2, 1) instead, the interval can be partitioned
in halves and the second symbol can be written down. By this procedure, 0.3 would
have a LR representation for the first two symbols, as shown in Figure 2.7.
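The bisection procedure above can be sketched directly:

```python
def symbol_sequence(x, n):
    """First n symbols ('L'/'R') of a point x in [0, 1), obtained by
    repeatedly bisecting the subinterval that contains x."""
    lo, hi = 0.0, 1.0
    out = []
    for _ in range(n):
        mid = 0.5 * (lo + hi)
        if x < mid:
            out.append('L')     # x lies in the left half of [lo, hi)
            hi = mid
        else:
            out.append('R')     # x lies in the right half of [lo, hi)
            lo = mid
    return ''.join(out)

symbol_sequence(0.3, 2)   # 'LR', as in Figure 2.7
```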
This procedure can be continued infinitely many times for any point in [0, 1),
and an infinite sequence of symbols for each point can be obtained. For example, 0
will be represented as LLL . . ., 1/3 will be represented as LRLR . . ., and 1/2 will be
represented as RLLL . . .. Two different points will always have different symbolic
representations: it can be easily shown that two points that are δ apart will have different symbols in their sequences after at most ⌊− log₂ δ⌋ symbols, where ⌊x⌋ denotes the integer part of x. It can also be shown that each sequence will uniquely determine
a point in the interval (the sequence RRR . . . will not be a valid representation under
the definition above, since it will represent the point 1, but 1 is not in the interval).
This means that there is a bijective mapping between each point in the interval and
each different sequence of symbols. As each point corresponds to a possible state
of the system, there is a bijective relation between states of the system and symbol
sequences.
Next the shift map operation S that acts on the symbol sequences will be defined.
When S acts on a particular symbol sequence, it removes the first symbol in the
sequence and shifts all other symbols one position to the left. For example, when the
shift map is applied to RLLL . . ., LLLL . . . is obtained: S(RLLL . . .) = LLLL . . ..
As RLLL . . . = 0.5 and LLL . . . = 0, the shift map has effectively mapped the symbol
sequence for 0.5 to the symbol sequence for 0. This particular shift map is known as
the Bernoulli shift map. It can be proved mathematically that the shift map is
equivalent to the mapping
S(x) ≡ 2x (mod 1),
(2.28)
which multiplies the original number by two and keeps only the fractional part of the
product. As the shift map (in this case, the Bernoulli shift map) maps a point in
the phase space to another point in the phase space, the system is closed, and by
tracing out the points determined by the shift map, a trajectory for any initial point
x ∈ [0, 1) can be obtained. Given the phase space ([0, 1)) and time evolution rule
(the shift map), the system is effectively a dynamical system. The dynamics of such
dynamical systems are known as symbolic dynamics. It should be noted that symbolic dynamics is completely deterministic, as the symbol sequence is completely fixed once the point is specified.
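The equivalence between the shift map on symbol sequences and the Bernoulli shift map S(x) = 2x (mod 1) can be checked numerically; the sketch below uses the symbols '0' and '1' in place of L and R.

```python
def symbols(x, n):
    """First n symbols of x in [0, 1): '0' for the left half, '1' for the right."""
    out = []
    for _ in range(n):
        x *= 2.0
        d = int(x)              # the next binary digit of x
        x -= d
        out.append(str(d))
    return ''.join(out)

def bernoulli_shift(x):
    return (2.0 * x) % 1.0      # Eq. (2.28)

# applying the Bernoulli shift map to the point has the same effect as
# applying the shift map S to the point's symbol sequence
x = 0.3
shifted_symbols = symbols(x, 8)[1:]
symbols_of_image = symbols(bernoulli_shift(x), 7)
```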
I will demonstrate how the symbolic dynamics works with a few points. Firstly,
starting with 0 (or LLL . . .), it is clear that no matter how many times the Bernoulli
shift map is applied, LLL . . . (or 0) is always obtained. This means that it is a fixed
point for the system. Next, starting with 0.25 (or LRLLL . . .), S(LRLLL . . .) =
RLLL . . . (or S(0.25) = 0.5), S(RLLL . . .) = LLL . . . (or S(0.5) = 0). This means
that it is a trajectory that eventually settles down to a fixed point.
More complicated trajectories can be obtained as well. Starting with 2/7, or
LRLLRLLRLLR . . ., it is easy to see that after applying S 3 times the original
point is obtained. This means that it is a periodic orbit of period 3. It is possible
that a symbol sequence does not demonstrate any periodicity. In fact, any symbol
sequence obtained from an irrational number is aperiodic, and the resulting trajectory
demonstrates aperiodicity.
It should be noted that L and R are purely symbols, and hence they can be
replaced with any other symbols. If instead of L and R, “0” and “1” are used for
symbols, then interestingly, the symbol sequence of a point in the interval [0, 1) will be
exactly the same as the digits after the decimal point in its binary representation (of
Figure 2.8: Bernoulli Shift Map.
course infinitely many 0’s have to be inserted at the end of the binary representation).
For example, in binary representation 0.5(10) = 0.1000 . . .(2) , and 0.5 in symbolic
representation is 1000 . . .. The decimal representation of a number 0 ≤ x < 1 is
related to its binary representation 0.s1 s2 ... via:
$$x = \sum_{i=1}^{\infty} \frac{s_i}{2^i}. \tag{2.29}$$
A plot of the Bernoulli shift map is shown in Figure 2.8. It is composed of two
straight lines of slope 2. It is clear from the plot that the Bernoulli shift map is not invertible, as every horizontal line intersects the graph in two points.
2.2 Solution to the Hybrid Differential Equation
In this section a solution to the hybrid differential equation Eq. (2.1) will be derived.
Consider the initial conditions u(0) = u0 , u̇(0) = 0, and s(0) = s0 , where |u0 | ≤ 1.
For |u0 | = 1, it is not hard to see that u(t) = u0 , s(t) = s0 is a solution to Eq. (2.1)
for all t > 0. In this case, the solution is a fixed point.
For |u0 | < 1, let ũ ≡ u − s0 . Then the differential equation is equivalent to
$$\ddot{\tilde{u}} - 2\beta\dot{\tilde{u}} + (\omega^2 + \beta^2)\,\tilde{u} = 0, \tag{2.30}$$
which has the solution
$$\tilde{u}(t) = \tilde{u}(0)\,e^{\beta t}\left(\cos\omega t - \frac{\beta}{\omega}\sin\omega t\right), \tag{2.31}$$
or
$$u(t) = s_0 + (u_0 - s_0)\,e^{\beta t}\left(\cos\omega t - \frac{\beta}{\omega}\sin\omega t\right). \tag{2.32}$$
Note that the solution given in Eq. (2.32) subsumes the fixed point solution u0 = ±1,
and hence Eq. (2.32) is valid for all |u0 | ≤ 1.
The next step is to find when the guard condition is met, that is, when u̇ = 0. To
do this, differentiate Eq. (2.32) with respect to t:
$$\dot{u}(t) = \beta(u_0 - s_0)e^{\beta t}\left(\cos\omega t - \frac{\beta}{\omega}\sin\omega t\right) + (u_0 - s_0)e^{\beta t}\left(-\omega\sin\omega t - \beta\cos\omega t\right) = -\frac{\omega^2 + \beta^2}{\omega}(u_0 - s_0)e^{\beta t}\sin\omega t. \tag{2.33}$$
Since ω = 2π, the guard condition u̇ = 0 is first met when t = 1/2. Thus the next
initial condition is
$$u\!\left(\tfrac{1}{2}\right) = s_0 - (u_0 - s_0)\,e^{\beta/2} = s_0\left[1 + (1 - u_0/s_0)\,e^{\beta/2}\right] = s_0\left[1 + (1 - |u_0|)\,e^{\beta/2}\right]. \tag{2.34}$$
The last equality follows from the fact that |s0| = 1 and u0 and s0 have the same
sign. Because |u0| ≤ 1, it follows that u(1/2) and s0 have the same sign as well:
$$\operatorname{sgn}\!\left[u\!\left(\tfrac{1}{2}\right)\right] = s_0. \tag{2.35}$$
Therefore the discrete state s(t) remains unchanged by this initial trigger of the guard
condition and the solution given by Eq.(2.32) is valid until at least the second trigger
of the guard condition.
The same process is repeated to find the time the guard condition is next met:
t = 1. At this time,
$$u_1 \equiv u(1) = e^{\beta}u_0 - (e^{\beta} - 1)s_0. \tag{2.36}$$
Now I will show |u1 | ≤ 1. Since 0 < β ≤ ln 2, 1 < eβ ≤ 2. Assume that 0 ≤ u0 ≤
s0 = 1. Then
$$u_1 = u_0 e^{\beta} - (e^{\beta} - 1)s_0 = s_0 - (s_0 - u_0)e^{\beta} \le s_0 - (s_0 - u_0) = u_0 \le 1, \tag{2.37}$$
and
$$u_1 = u_0 e^{\beta} - (e^{\beta} - 1)s_0 = s_0 - (s_0 - u_0)e^{\beta} \ge s_0 - 2(s_0 - u_0) = 2u_0 - s_0 \ge -s_0 = -1. \tag{2.38}$$
Thus under the assumption that 0 ≤ u0 ≤ s0 = 1, by Eq. (2.37) and Eq. (2.38),
|u1 | ≤ 1. Similarly the same result is obtained for −1 = s0 ≤ u0 < 0. Therefore
|u1 | ≤ 1.
Note that
$$s_1 = \operatorname{sgn}(u_1) = \operatorname{sgn}\!\left[e^{\beta}u_0 - (e^{\beta} - 1)s_0\right], \tag{2.39}$$
explicitly depends on the value of u0 , thus a transition in the discrete state s can only
occur at t = 1. In the case that the value of s does change at t = 1, Eq. (2.30) is no
longer valid for t > 1, and thus the solution given in Eq. (2.32) is not valid for t ≥ 1.
It is also helpful to note that a Poincaré section defined by u̇(t) = 0 can be chosen.
Then transitions can only occur at the returns of this Poincaré map, the points in
phase space where the trajectory crosses the Poincaré section.
To continue to solve Eq. (2.1) for t > 1, note that the initial condition now becomes
u(1) = u1 , u̇(1) = 0 and s(1) = s1 , where |u1 | ≤ 1 and s1 = sgn(u1 ). Comparing this
set of initial conditions to the set of initial conditions specified at the beginning of this
section, it is easy to see that this problem is equivalent to the original initial value
problem if a unit time translation and an increment of the subscripts are applied,
which allows the solution for 1 ≤ t < 2 to be written as
$$u(t) = s_1 + (u_1 - s_1)\,e^{\beta(t-1)}\left(\cos\omega t - \frac{\beta}{\omega}\sin\omega t\right). \tag{2.40}$$
Repeating the process above extends the solution to a general one that is valid for
n ≤ t < n + 1, where n ∈ Z is a non-negative integer:
$$u(t) = s_n + (u_n - s_n)\,e^{\beta(t-n)}\left(\cos\omega t - \frac{\beta}{\omega}\sin\omega t\right), \tag{2.41}$$
where the returns at the transition times satisfy the recurrence relation
$$u_{n+1} \equiv u(n+1) = e^{\beta}u_n - (e^{\beta} - 1)s_n, \tag{2.42}$$
and $s_{n+1}$ is defined as
$$s_{n+1} = \operatorname{sgn}(u_{n+1}). \tag{2.43}$$
In principle, the hybrid system can be solved exactly using this method for all t ≥ 0,
given the initial condition (u0 , s0 ).
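The recurrence (2.42)–(2.43) can be iterated directly; a minimal sketch (for β = ln 2 it reduces to u_{k+1} = 2u_k − s_k):

```python
import math

def return_map(u0, beta, n):
    """Iterate Eqs. (2.42)-(2.43): u_{k+1} = e^beta u_k - (e^beta - 1) s_k,
    with s_k = sgn(u_k)."""
    e = math.exp(beta)
    us, ss = [u0], []
    u = u0
    for _ in range(n):
        s = 1.0 if u >= 0 else -1.0
        ss.append(s)
        u = e * u - (e - 1.0) * s
        us.append(u)
    return us, ss

us, ss = return_map(0.3, math.log(2.0), 3)
# for beta = ln 2 this reduces to u_{k+1} = 2 u_k - s_k:
# 0.3 -> -0.4 -> 0.2 -> -0.6
```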
Defining a mapping M as
$$M(u_n) = e^{\beta}u_n - (e^{\beta} - 1)s_n, \tag{2.44}$$
it is easy to see that Eq. (2.42) is an iterated map. As
$$M'(u_n) = e^{\beta} \tag{2.45}$$
is a constant for all un , use Eq. (2.25) to obtain the Lyapunov exponent λ of the
iterated map:
$$\lambda = \lim_{m\to\infty}\frac{1}{m}\sum_{n=0}^{m-1}\ln|M'(u_n)| = \lim_{m\to\infty}\frac{1}{m}\sum_{n=0}^{m-1}\ln e^{\beta} = \beta. \tag{2.46}$$
Since β > 0, the iterated map is chaotic. As the iterated map comprises returns
at regular time intervals of a continuous time system, by the discussion in the last
paragraph of Sec. (2.1.4), the continuous time system is also chaotic.
Next an expression of u(t) using only si ’s will be derived. Changing the indices
in Eq. (2.42) and multiplying by a suitable factor of eβ the following set of equations
is obtained
$$\begin{aligned} u_n &= e^{\beta}u_{n-1} - (e^{\beta} - 1)s_{n-1},\\ e^{\beta}u_{n-1} &= e^{2\beta}u_{n-2} - e^{\beta}(e^{\beta} - 1)s_{n-2},\\ &\;\;\vdots\\ e^{(n-1)\beta}u_1 &= e^{n\beta}u_0 - e^{(n-1)\beta}(e^{\beta} - 1)s_0. \end{aligned} \tag{2.47}$$
Adding up all the equations in Eq. (2.47) and canceling repeated terms yields
$$u_n = e^{n\beta}u_0 - (e^{\beta} - 1)\sum_{i=0}^{n-1} e^{(n-1-i)\beta}s_i = e^{n\beta}\left\{u_0 - (1 - e^{-\beta})\sum_{i=0}^{n-1} s_i e^{-i\beta}\right\}. \tag{2.48}$$
Rearranging Eq. (2.48) yields
$$u_0 = e^{-n\beta}u_n + (1 - e^{-\beta})\sum_{i=0}^{n-1} s_i e^{-i\beta}. \tag{2.49}$$
Since |un | ≤ 1 is bounded and e−nβ decays exponentially with increasing n, taking
the limit of Eq. (2.49) as n → ∞ yields
$$u_0 = \lim_{n\to\infty}\left\{e^{-n\beta}u_n + (1 - e^{-\beta})\sum_{i=0}^{n-1} s_i e^{-i\beta}\right\} = (1 - e^{-\beta})\sum_{i=0}^{\infty} s_i e^{-i\beta}. \tag{2.50}$$
In Eq. (2.50), the initial condition u0 has been expressed exclusively in terms of
current and future si ’s for 0 ≤ i < ∞. Thus the si ’s can be “read” by resolving the
initial condition u0 . Changing the indices in Eq. (2.50) yields
$$u_n = (1 - e^{-\beta})\sum_{i=0}^{\infty} s_{i+n} e^{-i\beta}, \tag{2.51}$$
which represents future returns purely in terms of current and future si ’s.
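Eq. (2.51) can be checked numerically by truncating the sum. The sketch below uses an assumed illustrative symbol sequence +1, −1, +1, −1, … and verifies consistency with the recurrence (2.42).

```python
import math

BETA = math.log(2.0)

def u_from_symbols(s, n, terms=60):
    """u_n from Eq. (2.51), u_n = (1 - e^-beta) * sum_i s_{n+i} e^{-i beta},
    truncated after `terms` terms; s maps an index to +1 or -1."""
    return (1.0 - math.exp(-BETA)) * sum(
        s(n + i) * math.exp(-i * BETA) for i in range(terms))

# assumed illustrative symbol sequence: +1, -1, +1, -1, ...
s = lambda k: 1.0 if k % 2 == 0 else -1.0
u0 = u_from_symbols(s, 0)          # the geometric sum gives 1/3
u1 = u_from_symbols(s, 1)          # and -1/3
# consistency with Eq. (2.42): u1 = e^beta * u0 - (e^beta - 1) * s(0)
```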
Note that si takes only the values +1 and −1. If one views the possible values for si
as symbols (as has been discussed in Sec. (2.1.6)), and forms symbol sequences Si as
Si = si si+1 si+2 . . . ,
(2.52)
then it is easy to see that each ui corresponds to a symbol sequence Si . The next
return ui+1 is related to the current return ui via the shift map, as the shift map
shifts the symbol sequence Si to Si+1 . Thus the symbols and the shift map form a
symbolic dynamics for the chaotic iterated map.
Now u(t) can be written purely in terms of current and future symbols. Plugging
Eq. (2.51) into Eq. (2.32) gives
$$u(t) = s_n + \left\{-s_n + (1 - e^{-\beta})\sum_{i=0}^{\infty} s_{i+n} e^{-i\beta}\right\} e^{\beta(t-n)}\left(\cos\omega t - \frac{\beta}{\omega}\sin\omega t\right), \tag{2.53}$$
where n = ⌊t⌋ is the largest integer not exceeding t. As one can get the
current and future symbols by resolving the initial condition u0 , the hybrid differential
equation (2.1) has been solved exactly.
2.2.1 Discussion of the Solution for β = ln 2
Plugging β = ln 2 into the solution for the returns to the Poincaré section Eq. (2.51)
yields
$$u_n = \frac{1}{2}\sum_{i=0}^{\infty}\frac{s_{n+i}}{2^{i}} = \sum_{i=0}^{\infty}\frac{s_{n+i}}{2^{i+1}}, \tag{2.54}$$
and the relation between successive returns is
un+1 = 2un − sn ,
(2.55)
which is found by plugging β = ln 2 into Eq. (2.42). Figure 2.9 shows the plot of
un+1 versus un . It is not hard to see that Figure 2.9 resembles Figure 2.8, except
that the domain and range are slightly different: the domain and range are [−1, 1)
instead of [0, 1). It is not surprising, because the solution Eq.(2.54) has the same
form as Eq. (2.29), which is the binary representation of a real number between 0 and
1, except that in Eq.(2.54) the symbols are −1 and +1 instead of 0 and 1. In fact,
making the substitution $x_n = (u_n + 1)/2$ yields
$$x_n = \frac{u_n + 1}{2} = \frac{1}{2}\left(\sum_{i=0}^{\infty}\frac{s_{n+i}}{2^{i+1}} + \sum_{i=0}^{\infty}\frac{1}{2^{i+1}}\right)\cdot 2 \cdot \frac{1}{2} = \sum_{i=0}^{\infty}\frac{1}{2^{i+1}}\cdot\frac{s_{n+i}+1}{2} = \sum_{i=0}^{\infty}\frac{s'_{n+i}}{2^{i+1}}, \tag{2.56}$$
where
$$s'_{n+i} = \frac{s_{n+i}+1}{2} = \begin{cases} +1, & s_{n+i} = +1\\ 0, & s_{n+i} = -1, \end{cases} \tag{2.57}$$
which is exactly the same as Eq. (2.29). This means that the solution for the returns of the Poincaré map for β = ln 2 is equivalent to a Bernoulli shift map under a suitable change of coordinates. Since the Bernoulli shift map is known to be chaotic, the discrete mapping for β = ln 2 is chaotic, and since the points in the mapping are sampled at regular intervals, the continuous time system is also chaotic for β = ln 2.

Figure 2.9: un+1 versus un for β = ln 2. This mapping looks just like the Bernoulli shift map (see Figure 2.8), except that the domain and range of the mapping are [−1, 1) instead of [0, 1).
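The change of coordinates x_n = (u_n + 1)/2 can be verified numerically; a brief sketch:

```python
def hybrid_return(u):
    """Return map for beta = ln 2, Eq. (2.55): u_{n+1} = 2 u_n - sgn(u_n)."""
    return 2.0 * u - (1.0 if u >= 0 else -1.0)

def bernoulli_shift(x):
    return (2.0 * x) % 1.0

# the change of coordinates x = (u + 1)/2 carries Eq. (2.55) into the
# Bernoulli shift map on [0, 1)
u = -0.312
lhs = (hybrid_return(u) + 1.0) / 2.0
rhs = bernoulli_shift((u + 1.0) / 2.0)
# lhs and rhs agree up to rounding
```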
In Chap. 3, the returns to the Poincaré section u̇(t) = 0, |u(t)| < 1 will be determined and the returns un+1 versus un will be plotted, as has been done in this section. The data points will then be fitted using linear regression, and the slope of the mapping, k, will be determined from the slopes of the fitted lines. Subsequently the slope can be used to determine β = ln k.
Chapter 3
Experimental Data and Analysis
Experiments using the setup described in Chap. 1 are carried out to test the theoretical solutions to the hybrid differential equation Eq. (2.1). The position of the cylindrical
magnets is measured every 7 ms and is recorded as a voltage. Figure 3.1 shows a typical
waveform obtained, which demonstrates the aperiodic behavior of the oscillator.
Next, the position x is normalized to set the two offset states to be +1 and −1,
and is then Fourier transformed and filtered to eliminate high frequency noise. The
velocity v is calculated as the time derivative of position x. The resulting phase space
plot is shown in Figure 3.2.
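The smoothing-and-differentiation step can be sketched as follows; the cutoff frequency and the synthetic test signal are illustrative assumptions, not the values used in the experiment.

```python
import numpy as np

def lowpass_and_velocity(x, dt, f_cut):
    """FFT low-pass filter a uniformly sampled position record, then
    differentiate it to obtain the velocity."""
    X = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(len(x), d=dt)
    X[freqs > f_cut] = 0.0              # eliminate high-frequency noise
    x_smooth = np.fft.irfft(X, n=len(x))
    v = np.gradient(x_smooth, dt)       # central-difference time derivative
    return x_smooth, v

# synthetic stand-in for the measurement: a 1 Hz signal plus 50 Hz noise,
# sampled every 7 ms as in the experiment
dt = 0.007
t = np.arange(0.0, 3.0, dt)
x = np.sin(2 * np.pi * t) + 0.05 * np.sin(2 * np.pi * 50.0 * t)
xs, v = lowpass_and_velocity(x, dt, f_cut=5.0)
```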
A Poincaré section, defined as the points where v = 0 and |x| ≤ 1, is then taken
and the returns to this Poincaré section are picked out and plotted, giving rise to the
Poincaré map shown in Figure 3.3.
Figure 3.3 has two notable features. Firstly, the end points (points for which un < −0.8 or un > 0.8) seem to behave differently from points in the middle region, possibly as a result of the friction inherent in the mechanical system. They are therefore excluded from the data analysis. Secondly, recognizing that Figure 3.3 resembles a Bernoulli shift map, the region [−0.8, 0.8] × [−0.8, 0.8] can be mapped to [0, 1] × [0, 1] by a linear change of coordinates. Figure 3.4 is then obtained.
The left and right stripes are then separately fitted to linear functions. The left
stripe yields a linear fit of
$$u'_{n+1} = 1.9736\,u'_n + 0.0057, \tag{3.1}$$
while the right stripe yields a linear fit of
$$u'_{n+1} = 2.0027\,u'_n - 0.9603. \tag{3.2}$$
Therefore the Poincaré returns form a Bernoulli shift map to a good approximation
under a suitable change of coordinates, and the system is indeed chaotic.
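The branch-fitting procedure can be sketched on surrogate data. The noise level, seed, and orbit length below are illustrative assumptions; a little dynamical noise is added so the floating-point orbit of the ideal β = ln 2 map does not collapse onto a short binary orbit.

```python
import numpy as np

rng = np.random.default_rng(1)

# surrogate Poincare returns from the ideal beta = ln 2 map, Eq. (2.55),
# u_{n+1} = 2 u_n - s_n, with small dynamical noise keeping it aperiodic
u, us = 0.2137, []
for _ in range(400):
    us.append(u)
    s = 1.0 if u >= 0 else -1.0
    u = 2.0 * u - s + rng.normal(0.0, 1e-3)
    u = min(max(u, -1.0), 1.0)
u_n, u_n1 = np.array(us[:-1]), np.array(us[1:])

# separate the two branches by the sign of u_n and fit a line to each
left = u_n < 0
k_left, b_left = np.polyfit(u_n[left], u_n1[left], 1)
k_right, b_right = np.polyfit(u_n[~left], u_n1[~left], 1)
beta_est = np.log(0.5 * (k_left + k_right))   # should be close to ln 2
```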
Since the slopes of the linear fits approximate 2 to a good degree, the slope in
Eq. (2.44) is eβ = 2. Therefore β = ln 2. All the symbols in the symbol sequence
representation of the solution can be obtained by reading the sign for each u(t) where
u̇(t) = 0, and the solutions to the hybrid differential equation Eq. (2.1) can be ob-
4.5
zsmoothed (V)
4
3.5
3
2.5
2
1.5
6.5
7
7.5
8
8.5
9
9.5
10
Time (s)
Figure 3.1: Position measured in Volts v.s. time measured in seconds. The diagram shows a
typical waveform obtained from a 7501-point measurement of the position of the cylindrical magnets.
Measurements are taken every 7ms.
80
60
40
20
v
0
−20
−40
−60
−80
−100
−3
−2
−1
0
u
1
Figure 3.2: Phase space diagram of experiment data.
2
3
37
1
0.8
0.6
0.4
un+1
0.2
0
−0.2
−0.4
−0.6
−0.8
−1
−1
−0.5
0
un
0.5
1
Figure 3.3: Poincaré return map for experiment data.
1
0.9
0.8
0.7
un+1
0.6
0.5
0.4
0.3
0.2
0.1
0
0
0.2
0.4
un
0.6
0.8
1
Figure 3.4: Poincaré return map in the form of Bernoulli shift map. The Poincaré returns in the
middle region [−0.8, 0.8] × [−0.8, 0.8] are selected and mapped to [0, 1] × [0, 1]. It is clear from the
plot that they resemble a Bernoulli shift map. The left and right stripes are then separately fitted to
linear functions. The slope for the left partition is calculated as 1.9736, while for the right partition
is 2.0027.
38
Chapter 3. Experimental Data and Analysis
2
usmoothed
1
0
−1
−2
21.5
22
22.5
23
Time (s)
23.5
24
Figure 3.5: Comparison of analytic solution to measured experimental data. The analytic solution
(black), calculated from Eq. (2.53) with β = ln 2, demonstrates a strong correlation with the measured data (blue). Also shown in the figure is the symbol sequence as obtained from experimental
data (red) and Poincaré returns (square).
tained from Eq. (2.53):
u(t) = sn + 2
t−n
−sn +
∞
X
si+n
i=1
2i
!
β
cos ωt − sin ωt ,
ω
where n = btc. Then the waveform produced by the electromechanical oscillator can
be “predicted” using the symbols as initial condition by plotting the analytic solution
given in Eq.(3.3). A typical waveform in the resultant analytic solution is shown in
black in Figure 3.5. Also shown in the same plot is the measured experimental data,
which is painted in blue.
As one can tell from Figure 3.5, the agreement is not perfect, which is possibly due to the following factors:

1. Amplitude noise. The mechanical system suffers from friction: in addition to the friction forces due to the rotation of the pulleys and the movement of the springs, the magnets also experience horizontal disturbances, since the dipole moments of the magnets are not perfectly aligned in the vertical direction.
2. Delays in the offset voltage signal. In the real experiment, transitions in the discrete state s in Eq. (2.1) cannot occur instantaneously. In addition, the RC differentiator used in the secondary coil circuit, discussed in Sec. 1.4, inevitably introduces another delay in the form of a small phase shift. The outcome of this delay is evident in Figure 3.2, where "overshooting" at v = 0 occurs.
3. Timing jitter. As a result of the aforementioned "overshooting" problem, the time intervals between returns to the Poincaré section are not regular. These irregular intervals complicate the numerical analysis; for this thesis, the average of the time intervals is taken and used as the unit-time standard. The time unit used in this thesis is 204 data points, which corresponds to 1.428 s.
Despite all these difficulties, the experimental data and the analytic solution show a very strong correlation. Considering that this is a chaotic system, extremely sensitive to any small difference in initial conditions, the agreement is remarkable. The agreement between experiment and analytic solution demonstrates that it is possible to reproduce a chaotic wave signal from its symbol sequence representation. This may have potential applications in data storage: to recover a chaotic wave signal, it suffices to store only its symbol sequence representation, which greatly compresses the information needed to store the signal.
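As an illustration of this compression idea, Eq. (3.3) can be evaluated directly from a stored symbol sequence. The sketch below assumes β = ln 2 and ω = 2π (one oscillation per unit time), truncates the infinite sum at the end of the available symbols, and uses a hypothetical helper name; it is a sketch of the reconstruction, not the thesis's actual code.

```python
import numpy as np

def reconstruct_u(symbols, beta=np.log(2), omega=2 * np.pi, pts_per_unit=100):
    """Rebuild the waveform u(t) of Eq. (3.3) from its symbol sequence
    s_n in {+1, -1}.  The infinite tail sum is truncated at the end of
    the available symbols, so accuracy degrades near the final symbol."""
    N = len(symbols)
    t = np.arange(0.0, N - 1, 1.0 / pts_per_unit)
    u = np.empty_like(t)
    for j, tj in enumerate(t):
        n = int(np.floor(tj))
        # truncated tail sum:  sum_{i >= 1} s_{n+i} / 2**i
        tail = sum(symbols[n + i] / 2.0**i for i in range(1, N - n))
        u[j] = symbols[n] + 2.0**(tj - n) * (-symbols[n] + tail) * (
            np.cos(omega * tj) - (beta / omega) * np.sin(omega * tj))
    return t, u
```

For the constant sequence s_n = +1 the exact solution is u ≡ 1, which the truncated reconstruction reproduces closely away from the end of the sequence.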
Chapter 4
Further Discussion on the Hybrid
System
Although the hybrid differential equation was solved in Chap. 2 and the implementation of an electromechanical oscillator described by this equation was discussed in Chap. 1 and Chap. 3, many open questions remain. A few notable ones include:
1. The hybrid system contains discrete states and is therefore discontinuous. Is there a limit in which an ordinary differential equation reduces to the hybrid system?
2. Hybrid systems are not ordinary differential equations, and hence the notion
of phase space is complicated. There are basically two independent differential
equations, and the two corresponding phase spaces, each being a 2D sheet, are
combined to form a 2 × 2D phase space. Is there a well-defined embedding of
the 2 × 2D sheets in a higher dimensional phase space?
3. Currently, the electric circuit that outputs the offset states to the mechanical oscillator calculates the velocity using an RC differentiator, which inevitably introduces a small delay. While this delay is necessary for the design of the circuit¹, it changes the hybrid differential equation. Is it possible to reduce or even eliminate the delay while still implementing the discrete state signal?
It is not certain whether these questions are answerable. Here, to conclude this thesis, the first question is explored. The goal is to write the discrete state s as a limit of continuous functions. One way to do this requires restating the guard condition for transitions in s, given by Eq. (2.2) and Eq. (2.3):
u̇ = 0 ⇒ s(t) = sgn(u(t)),        (4.1)

where

sgn(u) = { +1,  u ≥ 0,        (4.2)
         { −1,  u < 0.
¹ In Figure 1.7, a DM74123N latch is used to detect a change in sign of v. The delay allows the oscillator to overshoot, causing a sign change in v; without it, a change in the discrete state would not be possible. In the setup proposed by Owen et al. [5], a similar mechanism, which requires a small delay, is also used to output the discrete state signal.
In Figure 4.1, the waveform of a numerical solution to Eq. (2.1) is plotted together with its corresponding waveforms for sgn(u), sgn(v) (v being the time derivative of u) and s. The following observation is made: s follows the shape of u but is delayed; the transition in s occurs when v = 0 is next met. Based on this observation, it is proposed that the modified continuous "discrete" state s should account for the following two events:
1. A transition in the value of sgn(u).
2. The next occurrence of v = 0.
To account for the occurrence of the above two events, two Dirac delta functions are intuitively needed. After trying out a few different combinations, the following expression is obtained:
s(u, v, t) = sgn(u(t)) − 2 [ ∫_{t−∆t}^{t} sgn(u(t′)) δ(u(t′)) dt′ ] [ 1 − ∫_{t−∆t}^{t} δ(v(t′)) dt′ ],        (4.3)
where ∆t is a maximum time window between the transition in u and the occurrence of v = 0.² In the expression, the first term demonstrates that s basically follows the waveform of sgn(u); the term

∫_{t−∆t}^{t} sgn(u(t′)) δ(u(t′)) dt′

accounts for the occurrence and direction of the transition in u, and the term

1 − ∫_{t−∆t}^{t} δ(v(t′)) dt′

accounts for the occurrence of v = 0.
At this point, the signum function can be approximated using an error function, and the Dirac delta function using a normal distribution:

sgn(x) = lim_{k→∞} erf(kx),        δ(x) = lim_{k′→∞} N(k′x),        (4.4)
² One might doubt whether such a ∆t can be found at all: if ∆t is too large, multiple events might be included in the integrals and the evaluation of s will be inaccurate. However, Figure 4.1 suggests that the time interval between a transition in u and a v = 0 event is less than 0.25. To see this, consider the transition from positive to negative. The oscillator travels from the previous positive v = 0 point, where u > 1, to u = 0, then to the negative v = 0 point, where 0 > u > −1. The length of travel from the positive v = 0 point to u = 0 is larger than 1, while the length of travel from u = 0 to the negative v = 0 point is less than 1. The time interval between the transition in u and the next v = 0 event is therefore expected to be less than the time interval between the previous v = 0 event and the transition in u. Since the total time interval between successive v = 0 events is 0.5, the time interval between a transition in u and a v = 0 event is expected to be less than 0.25. Choosing ∆t = 0.25 should suffice for the purposes of this thesis, although a more careful proof should be given.
[Figure 4.1: Waveforms for the numerical solution and its corresponding u, v and s.]
and v can be substituted with u̇. The continuous form of the state function s is then obtained:

s(u, u̇, t) = erf(ku(t)) − 2 [ ∫_{t−∆t}^{t} erf(ku(t′)) N(k′u(t′)) dt′ ] [ 1 − ∫_{t−∆t}^{t} N(k′u̇(t′)) dt′ ],        (4.5)
where k and k′ are large constants chosen to approximate the signum function and the Dirac delta function, respectively. Comparing the continuous definition of s (Eq. (4.5)) to its original discrete definition (Eq. (4.1)), note that s is now explicitly dependent on u̇. This information, however, is not new, since the direction of the transition in sgn(u) in the discrete definition (from −1 to +1 or from +1 to −1) implicitly determines the sign of u̇.
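The limits in Eq. (4.4) are straightforward to realize numerically. In the sketch below the Gaussian carries an explicit factor k′ in front so that its area remains 1 for every k′; whether the thesis's N(k′x) includes this normalization is an assumption, and the helper names are illustrative.

```python
import math

def smooth_sgn(x, k=50.0):
    """erf(k*x): a smooth approximation to sgn(x) that sharpens as k grows."""
    return math.erf(k * x)

def smooth_delta(x, kp=50.0):
    """kp * N(kp * x) with N the standard normal density: a smooth
    approximation to delta(x) whose total area is 1 for every kp."""
    return kp * math.exp(-0.5 * (kp * x) ** 2) / math.sqrt(2.0 * math.pi)
```

Increasing k and k′ makes the approximations sharper at the cost of stiffer numerics in any subsequent integration.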
Eq. (4.5) allows the hybrid differential equation Eq. (2.1) to be written in terms of continuous functions of u and its time derivatives. These continuous functions, however, contain integrals, which are inconvenient for numerical integration. The next goal is to convert Eq. (2.1) into a system of first-order, autonomous ordinary differential equations free from integrals, as in Eq. (2.4). To do so, denote
σ ≡ −(1/2) [ s − erf(ku(t)) ],

x ≡ ∫_{t−∆t}^{t} erf(ku(t′)) N(k′u(t′)) dt′,        (4.6)

y ≡ 1 − ∫_{t−∆t}^{t} N(k′u̇(t′)) dt′,
then σ = xy. By the fundamental theorem of calculus,

ẋ = erf(ku(t)) N(k′u(t)) − erf(ku(t − ∆t)) N(k′u(t − ∆t))

and

ẏ = −N(k′u̇(t)) + N(k′u̇(t − ∆t)),
which are free from integrals. If some time derivative of σ, or a linear combination of such derivatives, can be written as a function of the time derivatives of x and y alone (not of x and y themselves), then a system of differential equations free from integrals can be obtained.
Start from σ = xy. Differentiating both sides with respect to time yields

σ̇ = ẋy + xẏ.        (4.7)

Rearranging Eq. (4.7) to solve for x,

x = (1/ẏ)(σ̇ − ẋy).        (4.8)
Differentiating both sides of Eq. (4.7) with respect to time yields

σ̈ = ẍy + 2ẋẏ + xÿ.        (4.9)

Plugging Eq. (4.8) into Eq. (4.9) gives

σ̈ = ẍy + 2ẋẏ + (1/ẏ)(σ̇ − ẋy)ÿ = 2ẋẏ + (σ̇/ẏ)ÿ + [ẍ − (ẋ/ẏ)ÿ] y.        (4.10)

Rearranging Eq. (4.10) to solve for y,

y = [σ̈ − 2ẋẏ − (σ̇/ẏ)ÿ] / [ẍ − (ẋ/ẏ)ÿ] = (σ̈ẏ − 2ẋẏ² − σ̇ÿ) / (ẍẏ − ẋÿ).        (4.11)
Differentiating both sides of Eq. (4.9) with respect to time and plugging in Eq. (4.8) and Eq. (4.11) yields

σ^(3) = x^(3)y + ẍẏ + 2ẍẏ + 2ẋÿ + ẋÿ + xy^(3)

      ⋮ (after some algebra)

      = 3(ẍẏ + ẋÿ) + [(x^(3)ẏ − ẋy^(3)) / (ẍẏ − ẋÿ)] σ̈ + [(ẍy^(3) − x^(3)ÿ) / (ẍẏ − ẋÿ)] σ̇ + 2ẋẏ(ẋy^(3) − x^(3)ẏ) / (ẍẏ − ẋÿ).        (4.12)

At this point, a linear combination of time derivatives of σ has been successfully written as a function of the time derivatives of x and y. Note that σ^(3) is symmetric under the exchange of x and y, which is expected since σ = xy is itself symmetric under this exchange.
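The rearrangement leading to Eq. (4.12) is easy to get a sign wrong in, so it is worth spot-checking numerically. The sketch below verifies the identity at a few points, with the standalone term carrying the numerator ẋy^(3) − x^(3)ẏ (the sign for which the check passes), using polynomial test functions whose derivatives are written out by hand; the function names are illustrative.

```python
def check_identity(xf, yf, t):
    """Return |lhs - rhs| for the rearranged sigma^(3) identity at time t,
    where sigma = x*y and xf(t), yf(t) give (value, d1, d2, d3)."""
    x, xd, xdd, x3 = xf(t)
    y, yd, ydd, y3 = yf(t)
    s1 = xd * y + x * yd                                  # sigma-dot
    s2 = xdd * y + 2 * xd * yd + x * ydd                  # sigma-double-dot
    s3 = x3 * y + 3 * xdd * yd + 3 * xd * ydd + x * y3    # sigma^(3) (lhs)
    D = xdd * yd - xd * ydd                               # common denominator
    X1 = (x3 * yd - xd * y3) / D
    X2 = (xdd * y3 - x3 * ydd) / D
    rhs = (3 * (xdd * yd + xd * ydd) + X1 * s2 + X2 * s1
           + 2 * xd * yd * (xd * y3 - x3 * yd) / D)       # note the sign
    return abs(s3 - rhs)

# polynomial test functions with all derivatives supplied analytically
xf = lambda t: (t**3 + t, 3 * t**2 + 1, 6 * t, 6.0)
yf = lambda t: (t**4 - 2 * t, 4 * t**3 - 2, 12 * t**2, 24 * t)
```

Evaluating `check_identity` at several generic times (avoiding zeros of the denominator) gives a residual at machine-precision level.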
Define a ≡ −2β and b ≡ ω² + β². Then the hybrid differential equation Eq. (2.1) can be written as

ü + au̇ + bu = bs = b[erf(ku) − 2σ] = −2bσ + b erf(ku).        (4.13)

Denote α ≡ −2b and β ≡ b erf(ku) (an overloaded symbol, not to be confused with the damping parameter β). Eq. (4.13) can then be written as

ü + au̇ + bu = ασ + β.        (4.14)
Differentiating both sides of Eq. (4.14) multiple times yields

u^(3) + aü + bu̇ = ασ̇ + β̇,
u^(4) + au^(3) + bü = ασ̈ + β̈,        (4.15)
u^(5) + au^(4) + bu^(3) = ασ^(3) + β^(3).

Rearranging Eq. (4.15) to solve for σ̇, σ̈ and σ^(3),

σ̇ = (1/α)[u^(3) + aü + bu̇ − β̇],
σ̈ = (1/α)[u^(4) + au^(3) + bü − β̈],        (4.16)
σ^(3) = (1/α)[u^(5) + au^(4) + bu^(3) − β^(3)].
Plugging Eq. (4.16) into Eq. (4.12) yields

(1/α)[u^(5) + au^(4) + bu^(3) − β^(3)] = (1/α) [(x^(3)ẏ − ẋy^(3)) / (ẍẏ − ẋÿ)] [u^(4) + au^(3) + bü − β̈]
    + (1/α) [(ẍy^(3) − x^(3)ÿ) / (ẍẏ − ẋÿ)] [u^(3) + aü + bu̇ − β̇]
    + 2ẋẏ(ẋy^(3) − x^(3)ẏ) / (ẍẏ − ẋÿ) + 3(ẍẏ + ẋÿ).        (4.17)

Let

X₁ = (x^(3)ẏ − ẋy^(3)) / (ẍẏ − ẋÿ)    and    X₂ = (ẍy^(3) − x^(3)ÿ) / (ẍẏ − ẋÿ),

then rearranging Eq. (4.17) to solve for u^(5) yields

u^(5) = (X₁ − a)u^(4) + (aX₁ + X₂ − b)u^(3) + (bX₁ + aX₂)ü + bX₂u̇
    + β^(3) − X₁β̈ − X₂β̇ + [2ẋẏ(ẋy^(3) − x^(3)ẏ) / (ẍẏ − ẋÿ)] α + 3(ẍẏ + ẋÿ)α.        (4.18)
Now, Eq. (2.1) can be converted into a system of first-order, autonomous ordinary differential equations that are free from integrals:

d/dt (u)     = u̇,
d/dt (u̇)     = ü,
d/dt (ü)     = u^(3),        (4.19)
d/dt (u^(3)) = u^(4),
d/dt (u^(4)) = u^(5),

where u^(5), given by Eq. (4.18), is a function of u̇, ü, u^(3) and u^(4). Numerical integration can then be carried out for Eq. (4.19), and the result can be compared to the numerical solution of the original hybrid differential equation. If the two numerical integrations agree, it may suggest that the original hybrid differential equation with its discrete offset states is in fact a limiting case of a continuous differential equation, and more interesting theoretical problems can be explored for this differential equation.
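For the comparison just described, the original hybrid equation can be integrated directly with an event check at each step. The sketch below uses Eq. (2.1) with β = ln 2 and assumes ω = 2π (one oscillation per unit time); the fixed-step Euler scheme, the step size, and the initial conditions are illustrative choices, not values from the thesis.

```python
import math

def integrate_hybrid(u0=0.5, v0=1.0, beta=math.log(2.0),
                     omega=2.0 * math.pi, dt=1e-4, t_end=10.0):
    """Fixed-step Euler integration of the hybrid equation
        u'' - 2*beta*u' + (omega**2 + beta**2)*(u - s) = 0,
    applying the guard  u' = 0  =>  s = sgn(u)  whenever u' changes sign."""
    b = omega ** 2 + beta ** 2
    u, v = u0, v0
    s = 1.0 if u >= 0.0 else -1.0
    us, ss = [], []
    for _ in range(int(t_end / dt)):
        acc = 2.0 * beta * v - b * (u - s)   # acceleration from Eq. (2.1)
        u_next = u + v * dt
        v_next = v + acc * dt
        if v * v_next <= 0.0:                # u' crossed zero: apply guard
            s = 1.0 if u_next >= 0.0 else -1.0
        u, v = u_next, v_next
        us.append(u)
        ss.append(s)
    return us, ss
```

The guard keeps the negatively damped oscillation bounded, and the recorded sequence of s values switching between ±1 is the symbol sequence of the trajectory.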
Conclusion
This thesis has demonstrated the implementation of a chaotic electromechanical oscillator described by a hybrid differential equation. The hybrid differential equation
is
ü − 2β u̇ + (ω 2 + β 2 )(u − s) = 0,
(5.20)
where u(t) ∈ R is a continuous state, ω and β are fixed parameters, s(t) ∈ {±1} is a discrete state, and transitions in s(t) are made when u̇ satisfies the guard condition
u̇ = 0 ⇒ s(t) = sgn(u(t)).
(5.21)
Following the paper by Corron et al. [1], an analytic solution to the hybrid differential equation is obtained. An electromechanical oscillator described by the hybrid differential equation is built, based on the setup proposed by Owen et al. [5], with multiple improvements in the mechanical and circuit design. The waveform produced by the electromechanical oscillator closely matches the analytic solution (as shown in Figure 3.5). In light of all the implementation difficulties (discussed in Chap. 3), as well as the fact that the system is chaotic, the agreement between the two curves is remarkable. An extension to the theory of hybrid systems is also proposed, which potentially allows the hybrid system to be written as a limiting case of an ordinary differential equation.
Therefore I conclude that the chaotic electromechanical oscillator, described by a hybrid differential equation, has been successfully implemented. The success in recovering the original waveform from its symbol sequence alone demonstrates that this is a powerful and promising technique for data storage.
Bibliography
[1] N. J. Corron, J. N. Blakely, and M. T. Stahl, "A matched filter for chaos," Chaos 20, 023123 (2010).
[2] N. J. Corron, J. N. Blakely, S. T. Hayes, and S. D. Pethel, “Determinism in
synthesized chaotic waveforms,” Phys. Rev. E 77, 037201 (2008).
[3] N. J. Corron, S. T. Hayes, S. D. Pethel, and J. N. Blakely, “Chaos without
Nonlinear Dynamics,” Phys. Rev. Lett. 97, 024101 (2006).
[4] N. J. Corron, S. T. Hayes, S. D. Pethel, and J. N. Blakely, “Synthesizing folded
band chaos,” Phys. Rev. E 75, 045201 (2007).
[5] A. M. B. Owen, N. J. Corron, M. T. Stahl, J. N. Blakely, and L. Illing, “Exactly
solvable chaos in an electromechanical oscillator,” (submitted).
[6] E. Ott, Chaos in dynamical systems (Cambridge University Press, New York,
NY, 2002).
[7] R. Hilborn, Chaos and nonlinear dynamics: an introduction for scientists and
engineers (Oxford University Press, New York, NY, 1994).
[8] S. H. Strogatz, Nonlinear dynamics and chaos: with applications to physics, biology, chemistry and engineering (Addison-Wesley Publishing Company, Reading, MA, 1994).
[9] W. Zheng and B. Hao, Practical symbolic dynamics (Shanghai Technology and
Education Press, Shanghai, China, 1994).
[10] A. M. Samoilenko, “Quasiperiodic oscillations,” Scholarpedia 2, 1783 (2007).