Project 2: Exploring Chaos
Work on one of the three problems below. You have free choice; if you are particularly
eager or cannot decide which one to choose, you may also work on more than one problem.
These problems address different aspects of chaotic dynamics.
1. Lorenz System and Lorenz Map
In his pioneering 1963 paper, Lorenz not only demonstrated the possibility of chaotic
dynamics, but also showed how a continuous chaotic system can be reduced to a lower
dimensional map by exploiting a time series. Later, in the 1980s, Packard, Takens and
Sauer elaborated on this idea and developed a systematic approach to analyzing
computer generated as well as measured time varying data. The buzzword for this approach is “Takens’ embedding theorem”, but it was Lorenz who first came up with the
basic idea.
In this problem you investigate the Lorenz system and generate the associated Lorenz
map. Lorenz introduced this map to obtain a simplified description of the chaotic dynamics he observed in simulations of his continuous three dimensional system (the celebrated
Lorenz system), governed by (dots denote time derivatives)
ẋ = σ(y − x), ẏ = rx − y − xz, ż = xy − bz. (1)
Choose the same parameter values as Lorenz: σ = 10, b = 8/3, r = 28. For these values
the solutions of (1) approach a chaotic attractor.
(a) To become familiar with the Lorenz system, plot a solution curve in the (x, y)–plane
(and/or three–dimensionally in the (x, y, z)–space). Also show the time series x(t), z(t).
Ignore transient behaviour, i.e. plot only the sustained long time behaviour.
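If you prefer Python to Matlab for orientation, part (a) can be sketched as follows (a fixed-step RK4 integrator stands in for ode45; the step size dt = 0.01 and the transient cutoff t < 20 are ad-hoc choices, not prescribed by the problem):

```python
import numpy as np

def lorenz(v, sigma=10.0, b=8.0/3.0, r=28.0):
    """Right-hand side of the Lorenz system (1)."""
    x, y, z = v
    return np.array([sigma*(y - x), r*x - y - x*z, x*y - b*z])

def rk4(f, v0, dt, nsteps):
    """Classical fixed-step 4th-order Runge-Kutta (stand-in for ode45)."""
    traj = np.empty((nsteps + 1, len(v0)))
    traj[0] = v = np.asarray(v0, float)
    for i in range(nsteps):
        k1 = f(v)
        k2 = f(v + 0.5*dt*k1)
        k3 = f(v + 0.5*dt*k2)
        k4 = f(v + dt*k3)
        v = v + (dt/6.0)*(k1 + 2*k2 + 2*k3 + k4)
        traj[i + 1] = v
    return traj

dt = 0.01
traj = rk4(lorenz, [1.0, 1.0, 1.0], dt, 10000)   # 0 <= t <= 100
traj = traj[2000:]                               # keep only t >= 20 (transient gone)
x, y, z = traj[:, 0], traj[:, 1], traj[:, 2]
t = dt*np.arange(2000, 10001)
# with matplotlib: plt.plot(x, y) for the attractor; plt.plot(t, x), plt.plot(t, z)
```

The trajectory should wind around the two wings of the attractor, with |x| staying below roughly 20 and z between roughly 5 and 48.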
(b) To illustrate the divergence of trajectories, choose two nearby initial conditions
x01 = (x0, y0, z01),  x02 = (x0, y0, z02),
where |z01 − z02 | is small (10−5 or smaller). Plot the x-components x1 (t), x2 (t) of the two
solutions as functions of t in one plot on top of each other (use subplot). You should
see that for t not too large the two functions remain close, but for larger t they behave
completely differently.
Quantify this divergence by estimating the largest Liapunov exponent λ. To this end
compute first a solution over a sufficiently large time range, say 0 ≤ t ≤ tf = 300, for a
generic initial condition. Use x01 = x(tf) as a new initial condition for a solution x1(t), this
time over a shorter time range, say 0 ≤ t ≤ 50. Choose a nearby initial condition x02 to
compute another solution x2(t), where the distance ‖x02 − x01‖ is sufficiently small, say
≈ 10⁻⁵. Make sure that the two solutions are computed over the same time spans. You
can achieve this by defining a prescribed time span, e.g. tspan=linspace(0,50,200), and
replacing [t0 tf] by tspan when you call ode45. Then compute
d(t) = ‖x2(t) − x1(t)‖,
and plot the natural log of d, ln(d(t)), as a function of t. In a certain initial range of t,
ln(d(t)) appears to fluctuate about a straight line. The slope of this line provides an
estimate for λ. Thus match a straight line to ln(d(t)) in the appropriate time range and
determine its slope. Matlab commands which make it easy to fit a straight line to a
set of points in a plane by the method of least squares will be introduced in the Review.
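A Python sketch of this procedure (fixed-step RK4 instead of ode45; the offset 10⁻⁸ and the fit window 3 ≤ t ≤ 15 are ad-hoc choices):

```python
import numpy as np

def lorenz(v, sigma=10.0, b=8.0/3.0, r=28.0):
    x, y, z = v
    return np.array([sigma*(y - x), r*x - y - x*z, x*y - b*z])

def rk4(f, v0, dt, nsteps):
    traj = np.empty((nsteps + 1, 3))
    traj[0] = v = np.asarray(v0, float)
    for i in range(nsteps):
        k1 = f(v); k2 = f(v + 0.5*dt*k1)
        k3 = f(v + 0.5*dt*k2); k4 = f(v + dt*k3)
        v = v + (dt/6.0)*(k1 + 2*k2 + 2*k3 + k4)
        traj[i + 1] = v
    return traj

dt = 0.01
# integrate a generic initial condition to t = 300 to land on the attractor
x01 = rk4(lorenz, [1.0, 1.0, 1.0], dt, 30000)[-1]
x02 = x01 + np.array([0.0, 0.0, 1e-8])           # nearby initial condition
t = dt*np.arange(5001)                           # common time span 0..50
d = np.linalg.norm(rk4(lorenz, x02, dt, 5000) - rk4(lorenz, x01, dt, 5000), axis=1)
logd = np.log(d)
# fit a straight line in the initially linear range (here 3 <= t <= 15)
mask = (t >= 3) & (t <= 15)
lam, _ = np.polyfit(t[mask], logd[mask], 1)
print("largest Liapunov exponent ~", lam)
```

The slope should come out near 0.9, the commonly quoted value of the largest Liapunov exponent on the Lorenz attractor.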
(c) Create a numerical solution of (1) over a sufficiently large time range, and extract the
maxima of z(t) after the transients have died out. You obtain a long sequence of data
points z1 , z2 , z3 , . . .. Plot all pairs (zi , zi+1 ) as points in a plane. Your plot should indicate
that these points lie on a nice continuous curve with a cusp. The meaning of this curve is
that the sequence of successive maxima of z(t) is governed by iteration of a map, called
the Lorenz map. Create this map by interpolating the data points. Linear interpolation
works best here – you may check what comes out if you use a spline interpolation.
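In Python, the maxima can be extracted with a simple three-point test (fixed-step RK4 again; run lengths are ad-hoc choices):

```python
import numpy as np

def lorenz(v, sigma=10.0, b=8.0/3.0, r=28.0):
    x, y, z = v
    return np.array([sigma*(y - x), r*x - y - x*z, x*y - b*z])

def rk4_z(v0, dt, nsteps):
    """Integrate and return the z component plus the final state."""
    v = np.asarray(v0, float)
    zs = np.empty(nsteps + 1); zs[0] = v[2]
    for i in range(nsteps):
        k1 = lorenz(v); k2 = lorenz(v + 0.5*dt*k1)
        k3 = lorenz(v + 0.5*dt*k2); k4 = lorenz(v + dt*k3)
        v = v + (dt/6.0)*(k1 + 2*k2 + 2*k3 + k4)
        zs[i + 1] = v[2]
    return zs, v

dt = 0.01
_, v0 = rk4_z([1.0, 1.0, 1.0], dt, 5000)     # let transients die out
z, _ = rk4_z(v0, dt, 50000)                  # long run on the attractor
# local maxima: z[i-1] < z[i] > z[i+1]
peaks = z[1:-1][(z[1:-1] > z[:-2]) & (z[1:-1] > z[2:])]
zi, zi1 = peaks[:-1], peaks[1:]
# plot zi1 against zi as points; a cusp-shaped curve should appear
```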
(d) Having created the Lorenz map, the dynamics can now be described by iteration.
Choose an initial condition z0 and display the iterates in the usual way as cobweb diagram
and time plot. For comparison with the original system, you may generate again a solution
of (1), let transients die out, and extract the maxima of z(t). Then use one of these maxima
as initial point for the Lorenz map and compare the iterates with the subsequent maxima
of z(t).
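A sketch of (d) in Python: the data pairs are sorted and fed to a linear interpolant (np.interp), which is then iterated from a recorded maximum; drawing the cobweb itself is left to the plotting library.

```python
import numpy as np

def lorenz(v, sigma=10.0, b=8.0/3.0, r=28.0):
    x, y, z = v
    return np.array([sigma*(y - x), r*x - y - x*z, x*y - b*z])

def run_z(v0, dt, nsteps):
    v = np.asarray(v0, float)
    zs = np.empty(nsteps + 1); zs[0] = v[2]
    for i in range(nsteps):
        k1 = lorenz(v); k2 = lorenz(v + 0.5*dt*k1)
        k3 = lorenz(v + 0.5*dt*k2); k4 = lorenz(v + dt*k3)
        v = v + (dt/6.0)*(k1 + 2*k2 + 2*k3 + k4)
        zs[i + 1] = v[2]
    return zs, v

dt = 0.01
_, v0 = run_z([1.0, 1.0, 1.0], dt, 5000)          # discard transient
z, _ = run_z(v0, dt, 50000)
peaks = z[1:-1][(z[1:-1] > z[:-2]) & (z[1:-1] > z[2:])]

# linear interpolation of the pairs (z_i, z_{i+1}) defines the Lorenz map
order = np.argsort(peaks[:-1])
znodes, fnodes = peaks[:-1][order], peaks[1:][order]
lorenz_map = lambda w: np.interp(w, znodes, fnodes)

# iterate the map from the first recorded maximum and compare the iterates
# with the actual subsequent maxima of z(t)
w = peaks[0]
iterates = [w]
for _ in range(5):
    w = float(lorenz_map(w))
    iterates.append(w)
# cobweb diagram: plot the segments (w_t, w_t) -> (w_t, w_{t+1})
```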
(e) Proceed as in (b) to estimate the Liapunov exponent of the Lorenz map. That is,
match a straight line to ln(dt), where dt = |zt − z′t| is the difference of two sequences of
iterates computed for nearby initial conditions z0, z′0. Compare the result to the number
obtained in (b).
(f) Consider the Mackey–Glass equation,
ċ(t) = λa^m c(t − T)/(a^m + c^m(t − T)) − g c(t).
For λ = T = 2, a = g = 1, m = 11 the solutions are chaotic and show irregular oscillations. Thus it is tempting to try the same as for the Lorenz system. Check whether you
also find a nice curve when successive maxima of c(t) are plotted. For the initial condition
you may simply choose a constant.
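A delay differential equation needs the whole history c(t) on [−T, 0] as initial data; a minimal forward-Euler sketch in Python with a constant history (the step size, the history value 0.5 and the run length are ad-hoc choices):

```python
import numpy as np

lam_p, T, a, g, m = 2.0, 2.0, 1.0, 1.0, 11.0   # parameters from the problem
dt = 0.01
delay = int(round(T/dt))                        # number of steps in one delay T
nsteps = 40000                                  # integrate up to t = 400

c = np.empty(nsteps + delay + 1)
c[:delay + 1] = 0.5                             # constant history on [-T, 0]
for i in range(delay, delay + nsteps):
    cd = c[i - delay]                           # delayed value c(t - T)
    c[i + 1] = c[i] + dt*(lam_p*a**m*cd/(a**m + cd**m) - g*c[i])

sol = c[delay + 10000:]                         # discard history and transient
# successive maxima, as for the Lorenz system:
peaks = sol[1:-1][(sol[1:-1] > sol[:-2]) & (sol[1:-1] > sol[2:])]
# plot the pairs (peaks[:-1], peaks[1:]) and compare with part (c)
```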
2. Liapunov Exponents, Chaos and Noise
Consider the logistic map
xt+1 = rxt (1 − xt ).
(a) Create a bifurcation diagram when r varies from 2.7 to 4. A reasonable value for the
increment ∆r of r is ∆r = .05. For each value of r iterate long enough (say 500 iterations)
until the transients have died out. Then plot the next, say 200 iterates.
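A vectorized Python sketch of the bifurcation diagram (one orbit per r value; the initial point 0.4 is an arbitrary choice):

```python
import numpy as np

rs = np.linspace(2.7, 4.0, 27)        # increment 0.05 as suggested in the text
x = np.full(rs.shape, 0.4)            # one orbit per parameter value
for _ in range(500):                  # let transients die out
    x = rs*x*(1.0 - x)
X = np.empty((200, rs.size))          # record the next 200 iterates
for k in range(200):
    x = rs*x*(1.0 - x)
    X[k] = x
# bifurcation diagram: for each j, plot the column X[:, j] against rs[j]
```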
(b) (See Review: Section I.2.) Compute and plot the Liapunov exponent λ as a function
of r. Use the same increment as in (a). Here you have to compute more points, say
n = 10,000, after the transients have died out. The Liapunov exponent is approximated by
λ ≈ (1/n) [ ln |f′(x1, r)| + · · · + ln |f′(xn, r)| ],
where f′(x, r) = r − 2rx. In the computation you only need to store the current values
of xt and λt, not all the previous iterates. Just update the value of λ at each iteration
according to
λt+1 = (1/(t + 1)) (t λt + ln |f′(xt+1, r)|).
Warning: At superstable cycles (xt = 1/2 for some iterate t), λ = −∞ because f′(xt, r) =
0. You need to give this fact special consideration.
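The running update, with a simple guard against ln 0, might look like this in Python (the floor 1e-300 and the transient length are ad-hoc choices):

```python
import numpy as np

def liapunov(r, n=100000, transient=1000):
    """Running-average estimate of the Liapunov exponent of the logistic map."""
    x = 0.4
    for _ in range(transient):
        x = r*x*(1.0 - x)
    lam = 0.0
    for t in range(1, n + 1):
        x = r*x*(1.0 - x)
        deriv = abs(r - 2.0*r*x)
        # guard: at superstable points deriv = 0 and ln|f'| = -inf
        lam = ((t - 1)*lam + np.log(max(deriv, 1e-300)))/t
    return lam
```

At r = 4 the estimate should land near ln 2 ≈ 0.693; inside the period-2 window (e.g. r = 3.2) it is negative.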
(c) Do the same as in (a),(b) for the noisy logistic map
xt+1 = rxt (1 − xt ) + ηt .
Here ηt is chosen randomly from an interval −ηmax ≤ η ≤ ηmax . The bound ηmax is the
noise level. Investigate what happens to λ(r) and the bifurcation diagram if the noise
level increases. You may start with, say, ηmax = 10⁻³. Describe your observations.
Specifically, address the question of the noise level beyond which you can no longer
identify sharp bifurcations as in the noiseless case.
3. Challenging Problem: Computing Feigenbaum’s Function and Feigenbaum
Numbers (Review: Section II)
(a) Compute numerical approximations of the solution (a, g(x)) of Feigenbaum’s renormalization equation
g(x) = −ag(g(−x/a)),
and of the largest eigenvalue δ (δ > 4) of the associated linear eigenvalue problem
−a g′(g(−x/a)) h(−x/a) − a h(g(−x/a)) = δ h(x),
as described in the Review, Sections II.4–5. Both a and δ should be accurate to 10 digits.
Question to the math grads: Inspection of the other eigenvalues of the linearized operator
indicates that there is an eigenvalue 1. Can you explain the presence of this eigenvalue?
(b) Graph some iterates g^n(x) of the solution g(x) found in (a) in the interval −1 ≤ x ≤ 1.
You should observe that the iterates become very flat around x = 0 when n increases.
(c) Use equation (13) to approximate the value r∞ of r for which the period doubling
sequence of the logistic map accumulates. You need two starting values for rn . The first
5 period doublings occur at
r1 = 3, r2 = 3.449, r3 = 3.54409, r4 = 3.5644, r5 = 3.568759.
If you want to be more accurate, compute rn up to, say n = 8 (the computation becomes
very delicate for larger n).
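Since the gaps rn+1 − rn shrink by a factor ≈ δ per doubling, the remaining gaps beyond rn+1 form an approximately geometric series, giving r∞ ≈ rn+1 + (rn+1 − rn)/(δ − 1). A quick Python check using the values above:

```python
# r-values of the first period doublings (from the problem statement)
r = [3.0, 3.449, 3.54409, 3.5644, 3.568759]

# estimate delta from (13) with the last three values
delta = (r[-2] - r[-3])/(r[-1] - r[-2])

# geometric extrapolation of the remaining gaps
r_inf = r[-1] + (r[-1] - r[-2])/(delta - 1.0)
print(delta, r_inf)   # delta ~ 4.66, r_inf ~ 3.5700
```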
I. Liapunov Exponents
I.1 Qualitative Description
Let x, y, . . . be phase space points (vectors) of a discrete or continuous dynamical system
and let t be time (integers in the discrete case). A basic property of chaotic systems is
the inherent unpredictability. That is, given two different initial conditions x0 , y0 with
associated orbits (solutions) x(t), y(t), the distance
d(t) = ‖x(t) − y(t)‖
will eventually reach finite values, no matter how small the initial distance d0 = ‖x0 − y0‖
is. This implies unpredictability, because it is never possible to fix an initial condition
exactly (we cannot control infinitely many digits). The evolution of d(t) towards finite
values is called “divergence of nearby trajectories”. The degree of divergence can be made
quantitative by the largest¹ Liapunov exponent, commonly denoted by λ, as
d(t) ≈ d0 e^(λt). (3)
If the system is truly chaotic, λ is positive; for nonchaotic deterministic systems we have
λ ≤ 0.
The relation (3) is not a rigorous relation. In practice it holds only on average and over
a certain initial time range. For larger values of t there is a saturation due to the finite
size of the system. For t in an initial range, d(t) typically fluctuates about d0 exp(λt).
Nevertheless, in this range (3) can be used to estimate λ. Taking natural logs of both
sides of (3) we find
ln d(t) ≈ ln d0 + λt,
which defines a straight line with slope λ. Thus, given a set of computed (or measured)
values ti and d(ti), λ can be estimated by matching a straight line to the set of points
(ti, ln d(ti)). The slope of this line yields the desired estimate of λ.
For Problem 1 you may use this (unsophisticated) approach to estimating Liapunov exponents. The initially linear range is simply found by inspecting the graph of ln d(t) versus
t. Numerical example: In the case of the Lorenz map, the linear range extends approximately
from t = 2 or 3 to about 20 if d0 ≈ 10⁻⁵.
I.2 Definition For One Dimensional Maps
To motivate the definition given below, consider first a one dimensional continuous system
ẋ = f(x). Assume there is a fixed point x∗, f(x∗) = 0. The linearized ODE about x∗
is obtained by setting x = x∗ + ξ, Taylor expanding f about x∗, and retaining only the
linear term, which results in the linear ODE
ξ̇ = λξ,  λ ≡ f′(x∗).
¹ In dimensions greater than one there are several Liapunov exponents, but it is only the largest
Liapunov exponent that quantifies the distance between nearby trajectories.
If λ ≠ 0 the behaviour of solutions close to x∗ can be approximated by x(t) ≈ x∗ + ξ(t).
Since ξ(t) = ξ0 e^(λt), the distance d(t) = |x(t) − x∗| ≈ |ξ(t)| evolves as
d(t) ≈ d(0) e^(λt).
If λ < 0, x∗ is asymptotically stable and d(t) → 0 for t → ∞. If λ > 0, x∗ is unstable and
d(t) is increasing. As long as d(t) is still small, this growth is exponential with exponent λ.
Next consider a one dimensional map xt+1 = f (xt ) and assume again there is a fixed
point x∗ , x∗ = f (x∗ ). As before we set xt = x∗ + ξt and Taylor expand f (x∗ + ξt ) in the
iteration prescription. Retaining only the linear term yields the linearized map
ξt+1 = µξt,  µ ≡ f′(x∗).
If |µ| ≠ 1, the iterates close to x∗ can be approximated by xt ≈ x∗ + ξt. Since ξt = µ^t ξ0,
the distance dt = |xt − x∗| ≈ |ξt| evolves as
dt ≈ d0 |µ|^t.
The quantity |µ| is called a multiplier. If |µ| < 1, x∗ is asymptotically stable and dt → 0
for t → ∞. If |µ| > 1, dt is growing. Setting
λ = ln |µ| = ln |f′(x∗)|, (5)
we can rewrite the evolution of dt as
dt ≈ d0 e^(λt),
and quantify the rate of growth or decay by an exponent λ as in the continuous case.
Assume now the map has an m–cycle (x∗0 , x∗1 , . . . , x∗m−1 ),
x∗1 = f (x∗0 ), x∗2 = f (x∗1 ), . . . , x∗m−1 = f (x∗m−2 ), x∗0 = f (x∗m−1 ).
Each point of this cycle is a fixed point of the mth iterate f^m(x) of f (f²(x) = f(f(x)),
f³(x) = f(f²(x)), etc.). The stability of the cycle is determined by the stability of any cycle
point viewed as a fixed point of f^m. Thus we can repeat the above stability consideration
with f replaced by f^m. If m = 2 we find
(d/dx) f²(x∗0) = f′(f(x∗0)) f′(x∗0) = f′(x∗1) f′(x∗0),
which generalizes to arbitrary m as
(d/dx) f^m(x∗0) = f′(x∗m−1) f′(x∗m−2) · · · f′(x∗1) f′(x∗0) ≡ ρ.
Consider now an initial condition x0 = x∗0 + ξ0 close to x∗0 (|ξ0| small). Set xt = x∗t + ξt
with x∗km+t = x∗t for k ≥ 1, taking the periodic repetition of the cycle into account. Then,
ξ1 = f(x0) − x∗1 = f(x∗0 + ξ0) − f(x∗0) ≈ f′(x∗0)ξ0
ξ2 = f(x1) − x∗2 = f(x∗1 + ξ1) − f(x∗1) ≈ f′(x∗1)ξ1 = f′(x∗1)f′(x∗0)ξ0
...
ξm = f(xm−1) − x∗m = f(x∗m−1 + ξm−1) − f(x∗m−1) ≈ f′(x∗m−1)ξm−1 = ρξ0.
Traversing the cycle k times yields ξkm ≈ ρ^k ξ0, and taking absolute values leads to
dkm ≈ d0 |ρ|^k = d0 µ^(km),  µ ≡ |ρ|^(1/m),
where dt = |xt −x∗t | ≈ |ξt | as before. Thus ρ is the cumulative multiplier over m iterations,
and its mth root can be considered as the average multiplier at a single iteration. On
average we may write
dt ≈ d0 µ^t = d0 e^(λt),  λ = ln µ,
where again the exponent λ has been associated with the multiplier µ. The explicit
formula for λ reads
λ = (1/m) [ ln |f′(x∗0)| + ln |f′(x∗1)| + · · · + ln |f′(x∗m−1)| ], (8)
and can be interpreted as the average of the ‘local’ exponents ln |f′(x∗i)| associated with each
cycle element. The general Liapunov exponent is defined by the extension of this formula
to arbitrary orbits (not necessarily cycles).
Definition. Let (xt), 0 ≤ t < ∞, be a sequence of iterates of a one dimensional map f. The
Liapunov exponent, λ, associated with (xt) is defined by
λ = lim(T→∞) (1/T) [ ln |f′(x1)| + · · · + ln |f′(xT)| ].
Because of the average over all iterates, the Liapunov exponent of any orbit is determined
by its long time behaviour, i.e. by the attractor the iterates approach for t → ∞. If
they approach a fixed point λ is given by (5), if they approach a cycle λ is given by (8).
In these cases generically λ ≤ 0: λ = 0 at bifurcations, otherwise λ < 0, with λ = −∞
if the fixed point or cycle is superstable². If the iterates approach a chaotic attractor,
λ is positive and quantifies the divergence of nearby iterates on the attractor, i.e. the
sensitivity to initial conditions. It is important to note that λ is the same for all initial
conditions whose iterates approach the chaotic attractor.
² f′(x∗) = 0, or f′(x∗i) = 0 for one of the cycle elements.
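As a sanity check of this definition, the long-time average for an orbit attracted to a fixed point must reproduce λ = ln |f′(x∗)|. For instance, for the logistic map at r = 2.5 (fixed point x∗ = 1 − 1/r = 0.6, f′(x∗) = −0.5):

```python
import math

# logistic map at r = 2.5: attracting fixed point x* = 0.6, f'(x*) = -0.5,
# hence the Liapunov exponent is ln 0.5 ~ -0.6931
r, x = 2.5, 0.3
for _ in range(1000):          # transient: the orbit settles onto x*
    x = r*x*(1.0 - x)
total, n = 0.0, 10000
for _ in range(n):
    x = r*x*(1.0 - x)
    total += math.log(abs(r - 2.0*r*x))
lam = total/n
print(lam, math.log(0.5))      # both ~ -0.6931
```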
II. Feigenbaum Function and Feigenbaum Numbers
II.1 Doubling Operator and First Feigenbaum Number
A key to understanding period doubling sequences such as in the logistic map is provided
by the so called doubling operator T . This operator acts on maps as follows. Let f (x)
be a map defined on an interval −α ≤ x ≤ α (α > 0 arbitrary). Assume that f(x) has a
nondegenerate maximum or minimum at x = 0: f′(0) = 0, f″(0) ≠ 0. Then (T[f])(x) is
the function defined by
(T[f])(x) = −a f(f(−x/a)), (10)
where a > 1 is a parameter. Note that the r.h.s. of (10) can be written as −a f²(−x/a),
where f² is the second iterate of f. Geometrically the doubling operation can be described
as follows:
(i) The original interval −α ≤ x ≤ α is rescaled to the smaller interval −α/a ≤ x ≤ α/a.
(ii) f 2 is evaluated on the smaller interval with inverted signs.
(iii) The rescaled interval and the negative of the function resulting from step (ii) are
blown up with the inverse scale factor a of step (i).
The three steps are illustrated in Figure 1. Let’s explicitly calculate T[f] for the parabola
f (x) = f0 − f1 x2 :
(T[f])(x) = −a{f0 − f1[f0 − f1(x/a)²]²}
= (−af0 + af1f0²) − (2f0f1²/a)x² + (f1³/a³)x⁴.
Observe that the doubling operation produces a polynomial of degree four from the
parabola. When T is iterated,
f → T[f] → T[T[f]] ≡ T²[f] → T³[f] → · · · ,
one zooms (and blows up) progressively towards the extreme point at x = 0 and generates
higher and higher powers of x.
As in the case of map iterations, one can ask for “fixed points” (which are
functions) g(x) of T. The condition T[g] = g yields
g(x) = −a g(g(−x/a)), (11)
which is Feigenbaum’s celebrated renormalization equation, a functional equation for g(x).
Assume that g(x) is a solution of (11) and consider the rescaled function gc (x) = g(cx)/c.
Applying T to gc yields
(T[gc])(x) = −a gc(gc(−x/a)) = −(a/c) g(g(−cx/a)) = (1/c) g(cx) = gc(x)
because g satisfies (11). Thus gc(x) is also a solution of (11), i.e. if one solution of (11)
is found we automatically obtain a one parameter family of solutions. This degree of
freedom can be used to normalize the solutions of (11). Given a solution with g(0) ≠ 0,
set c = g(0) and g∗(x) = g(cx)/c. Then g∗(x) is also a solution satisfying g∗(0) = 1.
Figure 1: Application of T to f (x) = 1.5(1 − x2 ) − 1 for α = 1, a = 2. This function is
symmetric, hence the x–reflection has no effect.
Thus w.l.o.g. we can consider (11) as an equation for functions satisfying g′(0) = 0 and
g(0) = 1.
When g is normalized to g(0) = 1, we lose one degree of freedom. This loss is
compensated by the presence of the parameter a in (11), which has been considered
arbitrary so far: (11) is not an equation for g(x) alone, but an equation for g(x) and a.
Advanced mathematics shows that equation (11), with the restrictions
g(0) = 1, g′(0) = 0, has a unique solution g(x) and a. Moreover, g(x) is symmetric,
g(−x) = g(x). The function g(x) is referred to as Feigenbaum’s function and a is called
the first Feigenbaum number. The numerical value for a given by Feigenbaum in 1976 is
a = 2.50290785 · · · .
II.2 Linearized Doubling Operator and Second Feigenbaum Number
The second Feigenbaum number quantifies how a perturbation of g(x) grows under iteration of T . This is similar to the growth of perturbations of an unstable fixed point of
a map. If xt+1 = f(xt) is a higher dimensional map with fixed point x∗, the growth of
ξt = xt − x∗ is, for small ‖ξt‖, quantified by the largest eigenvalue of the Jacobian of f
evaluated at x∗. In the case of our doubling operator we have the same situation. Let
εh(x) be a perturbation of g(x). The “Jacobian” of T , denoted by L, is a linear operator
acting on functions h(x). The result of this operation, denoted by (Lh)(x), is defined by
(Lh)(x) = (d/dε) (T[g + εh])(x)|ε=0.
Compute first T [g + εh],
(T [g + εh])(x) = −a[g(y) + εh(y)]|y=g(−x/a)+εh(−x/a)
= −ag(g(−x/a) + εh(−x/a)) − aεh(g(−x/a) + εh(−x/a)),
differentiate this w.r.t. ε and then set ε = 0 to obtain
(Lh)(x) = −a g′(g(−x/a)) h(−x/a) − a h(g(−x/a)). (12)
Given (a, g(x)), the r.h.s. of (12) defines a linear operation on h(x), which is the action
of the operator L. Just as matrices acting on vectors have eigenvalues and
eigenvectors, linear operators acting on functions have eigenvalues and eigenfunctions. It
can be shown that L has exactly one eigenvalue δ > 1 with a single, even eigenfunction
hδ (x), whereas all other eigenvalues are ≤ 1. The eigenvalue δ is the celebrated second
Feigenbaum number. The numerical value given by Feigenbaum is
δ = 4.669201609 · · · .
II.3 Universality of the Feigenbaum Numbers
Just as π is a universal number characteristic of circles, the two Feigenbaum
numbers are universal numbers characteristic of period doubling sequences leading to
chaos. Given any one dimensional, one parameter family of unimodal³ maps f(x, r) that
undergoes an accumulating period doubling sequence, e.g. the logistic map, the scaling
relations in the bifurcation diagram are always quantifiable by the Feigenbaum numbers
a and δ.
The second Feigenbaum number δ characterizes the scalings in the r–direction. Let rn
be the value of r at which the nth period doubling occurs, i.e. a 2^n cycle bifurcates from
a 2^(n−1) cycle. Then, for sufficiently large n⁴,
(rn+1 − rn)/(rn+2 − rn+1) ≈ δ. (13)
The first Feigenbaum number a characterizes the scaling in the x–direction as follows.
In the r–range rn < r < rn+1 we find a stable 2^n cycle (x1, x2, . . . , x2^n). Let
µ(r) = f′(x1, r) f′(x2, r) · · · f′(x2^n, r)
be the multiplier of this cycle. It turns out that µ(r) decreases monotonically from
µ(rn) = 1 (birth of the 2^n cycle) to µ(rn+1) = −1 (flip bifurcation of the 2^n cycle →
birth of the 2^(n+1) cycle). Between rn and rn+1 there must be a value r = Rn for which
µ(Rn) = 0. Cycles with vanishing multipliers are called superstable. Since µ(r) is the
product of the f′(xi, r), one of the cycle elements must be at the maximum (or minimum)
of f (f′ = 0). Let dn be the distance of the maximum/minimum from its (2^(n−1) + 1)st
iterate. The quantity dn is a characteristic number for the splitting of the 2^(n−1) cycle
into a 2^n cycle. When n varies, the dn alternate in sign (whence the minus signs in the
definition of the doubling operator) and tend to zero when n → ∞. Figure 2 shows d1
and d2 for the logistic map (maximum at 1/2). The meaning of a is that the ratio dn/dn+1
approaches −a when n → ∞, i.e. for sufficiently large n,
dn/dn+1 ≈ −a. (14)
³ f(x, r) has exactly one nondegenerate maximum or minimum.
⁴ More precisely, δ is the limit of the ratio on the l.h.s. of (13) when n → ∞, but it often turns out
that δ is a good approximation for this ratio already for, say, n = 5.
Figure 2: d1 and d2 for the logistic map
Although mathematically the universality of the numbers a, δ has been proven “only”
for one dimensional unimodal maps, the relations (13),(14) have been confirmed numerically for many higher dimensional continuous and discrete systems, as well as in a variety
of physical experiments.
II.4 Numerical Computation of g(x) and a
In numerical computations g(x) has to be represented by a finite set of parameters. It
turns out that g(x) can be very well approximated by polynomials, thus we can use the
coefficients of approximating polynomials as parameters. Since it is known that g(x) is
even and g(0) = 1, the polynomials have the form
g(x) = 1 + g1x² + g2x⁴ + · · · + gnx^(2n). (15)
The basic idea is to move progressively to higher integers n.
As a first approximation we take n = 1, i.e. g(x) = 1 + g1 x2 . Substitute this in (11),
expand, and ignore powers of x higher than two. The term on the r.h.s. of (11) becomes
−a g(g(−x/a)) = −a[1 + g1(1 + g1x²/a²)²] = −a(1 + g1) − (2g1²/a)x² + O(x⁴).
The last expression with O(x⁴) ignored should coincide with g(x). Equating the constant term (x⁰–coefficient) and the x²–coefficient leads to two equations:
1 = −a(1 + g1),
g1 = −2g1²/a.
Substituting g1 = −a/2 from the second equation into the first equation yields a quadratic
equation for a. The (positive) solution and the resulting g1–value are
a = 1 + √3 ≈ 2.732,  g1 = −1.366. (16)
It should become apparent that the loss of one degree of freedom due to fixing g(0) = 1
requires treating a as an unknown.
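The n = 1 computation is short enough to verify directly in Python:

```python
import math

# n = 1 truncation: g(x) = 1 + g1*x^2.  From g1 = -a/2 and 1 = -a*(1 + g1)
# one obtains a^2 - 2a - 2 = 0, with positive root a = 1 + sqrt(3).
a = 1.0 + math.sqrt(3.0)
g1 = -a/2.0
print(a, g1)   # 2.732..., -1.366...
```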
Let’s try an expansion of degree 4:
g(x) = 1 + g1 x2 + g2 x4 .
Substituting this into the equation
0 = g(x) + a g(g(−x/a)), (17)
ignoring terms of order six and higher, and setting the coefficients of x0 , x2 and x4 equal
to zero results in the following system of three equations:
x⁰: 0 = 1 + a(1 + g1 + g2)
x²: 0 = g1[1 + a⁻¹(2g1 + 4g2)]
x⁴: 0 = g2 + a⁻³(g1³ + 6g1²g2) + g2a⁻³(2g1 + 4g2).
The x²–equation yields 2g1 + 4g2 = −a. Substituting this into the x⁴–equation yields
0 = g2 + a⁻³(g1³ + 6g1²g2) − a⁻²g2.
This equation is linear in g2, which allows us to express g2 in terms of a and g1:
g2 = g1³/(a − a³ − 6g1²). (18)
Now we are left with two equations,
0 = 1 + a(1 + g1 + g2), (19)
0 = a + 2g1 + 4g2, (20)
with g2 considered as function of (a, g1 ) according to (18). The equations (19),(20) have
to be solved numerically. As starting values for the numerical search one can use the
previously computed values (16) resulting from the n = 1 approximation. After solving
for (a, g1 ) we find g2 from (18). The result of this computation is
a = 2.534, g1 = −1.522, g2 = 0.128.
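A sketch of the n = 2 computation in Python, using a plain Newton iteration with a finite-difference Jacobian in place of a library solver (the step h and the iteration count are ad-hoc choices):

```python
import math

def g2_of(a, g1):
    # equation (18): g2 expressed through a and g1
    return g1**3/(a - a**3 - 6.0*g1**2)

def F(v):
    a, g1 = v
    g2 = g2_of(a, g1)
    return [1.0 + a*(1.0 + g1 + g2),      # (19)
            a + 2.0*g1 + 4.0*g2]          # (20)

# Newton iteration with a finite-difference Jacobian, started from
# the n = 1 values a = 2.732, g1 = -1.366
v = [1.0 + math.sqrt(3.0), -(1.0 + math.sqrt(3.0))/2.0]
for _ in range(50):
    f0 = F(v)
    h = 1e-7
    J = [[0.0, 0.0], [0.0, 0.0]]
    for j in range(2):
        vp = list(v); vp[j] += h
        fp = F(vp)
        for i in range(2):
            J[i][j] = (fp[i] - f0[i])/h
    det = J[0][0]*J[1][1] - J[0][1]*J[1][0]
    dv0 = (f0[0]*J[1][1] - f0[1]*J[0][1])/det
    dv1 = (f0[1]*J[0][0] - f0[0]*J[1][0])/det
    v = [v[0] - dv0, v[1] - dv1]

a, g1 = v
g2 = g2_of(a, g1)
print(a, g1, g2)    # ~ 2.534, -1.522, 0.128
```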
This procedure can be iterated, stepping progressively to higher values of n in (15).
Substituting (15) into (17) yields n + 1 equations for (a, g1 , g2 , . . . , gn ) resulting from
comparing coefficients. The first equation is
x⁰: 0 = 1 + a(1 + g1 + g2 + . . . + gn),
and the second equation reduces, after division by g1, to
x²: 0 = a + 2g1 + 4g2 + . . . + 2ngn.
The last equation has the form
x^(2n): 0 = gn + a^(1−2n)[A + Bgn + gn(2g1 + 4g2 + . . . + 2ngn)],
where A and B depend only on (a, g1, g2, . . . , gn−1). You may convince yourself that A, B
are given as follows. Set y = x²/a² and let g̃(y) be the (n − 1)th degree polynomial
g̃(y) = 1 + g1y + g2y² + . . . + gn−1y^(n−1).
Then
A = coefficient of y^n in g̃([g̃(y)]²),
B = coefficient of y^n in (g̃(y))^(2n).
Substituting
2g1 + 4g2 + . . . + 2ngn = −a
from the x²–equation into the x^(2n)–equation then leaves us with a linear equation for gn,
with solution
gn = A/(a − a^(2n−1) − B). (22)
Now, since gn is represented in terms of (a, g1, . . . , gn−1) by virtue of (22), there remain n
equations,
0 = F1(a, g1, . . . , gn−1), 0 = F2(a, g1, . . . , gn−1), . . . , 0 = Fn(a, g1, . . . , gn−1).
These equations are to be solved for (a, g1 , g2 , . . . , gn−1 ). As starting value for the numerical search one can use the values of (a, g1 , . . . , gn−1 ) obtained in the previous computation
(n replaced by n − 1 in (15)). The coefficient gn is then found from (22).
Matlab Implementation. Write a function which receives as input a column vector
[gn−1 ; gn−2 ; . . . ; g1 ; a]. The function computes first A, B and then gn according to (22).
Then create the values of F1 , F2 , . . . , Fn from g1 , g2 , . . . , gn . When applying the solver
(mmfsolve), the function has to be coded to generate as output a column vector whose
entries are F1 , . . . , Fn . When the solution has been found, you retrieve in a separate
function call the value of gn by inputting the computed values of (a, g1 , . . . , gn−1 ). The
two outputs may be distinguished, e.g., by an additional flag in the input arguments. In
essence the function has to determine coefficients in compositions and products of polynomials.
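The kind of coefficient extraction required can be sketched in Python with np.convolve for polynomial products (shown here for the n = 2 case, where the x⁴–equation identifies the two coefficients as g1³ and 6g1²):

```python
import numpy as np

# coefficient extraction for n = 2, with ascending coefficient lists
# and np.convolve for polynomial products: gt(y) = 1 + g1*y
g1 = -1.522                                  # any test value works

gt = np.array([1.0, g1])                     # ascending: [c0, c1]
gt_sq = np.convolve(gt, gt)                  # [gt(y)]^2
comp = np.zeros(len(gt_sq)); comp[0] = 1.0
comp = comp + g1*gt_sq                       # composition gt([gt(y)]^2) = 1 + g1*u
gt_4 = np.convolve(gt_sq, gt_sq)             # (gt(y))^4 = (gt(y))^(2n) for n = 2

A = comp[2]                                  # coefficient of y^2
B = gt_4[2]
print(A, B)   # g1**3 and 6*g1**2, matching the x^4-equation for n = 2
```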
II.5 Numerical Computation of δ
The computation of δ amounts to solving the eigenvalue problem associated with the
linear operator L defined in (12). In finite term approximations linear operators become
matrices. Thus, if an approximate solution (a, g1 , . . . , gn ) of (17) for a and g(x) has been
found, one has to set up the matrix A representing L in the 2nth degree polynomial
approximation. Given a polynomial
h(x) = h0 + h1x² + . . . + hnx^(2n),
operating with L on h(x) yields a polynomial
(Lh)(x) = H0 + H1x² + . . . + Hnx^(2n) + . . .
of degree ≥ 2n. Truncating this polynomial at degree 2n yields the desired image polynomial H(x). The coefficients Hm (0 ≤ m ≤ n) are related to the hm by linear relations
H0 = A00h0 + A01h1 + . . . + A0nhn
H1 = A10h0 + A11h1 + . . . + A1nhn
...
Hn = An0h0 + An1h1 + . . . + Annhn
The matrix A is the desired matrix representation of L. The matrix elements are nonlinear
functions of (a, g1, . . . , gn). You need to inspect thoroughly how the Aij are defined, and
use this to write a Matlab function that accepts (a, g1, . . . , gn) as input and produces
A as output. Once A is generated, the eigenvalues are simply found by applying the
eig command of Matlab: D=eig(A), or [V,D]=eig(A) if you are also interested in the
eigenvectors (representing “eigenpolynomials”).
II.6 Polynomials in Matlab
Representation: Polynomials are represented in Matlab by vectors of coefficients. The
(n − 1)th degree polynomial
P (y) = p(1)y n−1 + p(2)y n−2 + . . . + p(n − 1)y + p(n)
corresponds to the row vector p=[p(1),p(2),. . .,p(n)] (column vectors work as well). In
front of the highest power coefficient there may be an arbitrary number of zeros; the
vector [0,. . . ,0,p] represents the same polynomial. In this representation truncation of
polynomials at degree m ≤ n − 1, i.e.
Ptrun (y) = p(n − m)y m + . . . + p(n − 1)y + p(n)
corresponds to extracting p(end-m:end) or, if it is necessary to retain the length of the
vector, to replacing the coefficients occurring in front of p(n-m) by zeros.
Polynomial Operations: Matlab offers two basic commands for operations on polynomials
represented by vectors p.
(i) q=polyder(p) returns the vector representing the derivative of the polynomial represented by p.
(ii) r=conv(p,q) returns the vector representing the product of the polynomials represented by p and q, with length(r)=length(p)+length(q)-1.
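If you work in Python instead, numpy offers the same two operations with the same descending-order coefficient convention:

```python
import numpy as np

p = np.array([3.0, 0.0, 2.0, 1.0])   # 3y^3 + 2y + 1, descending like Matlab
q = np.polyder(p)                    # derivative: 9y^2 + 2
r = np.polymul(p, p)                 # product p*p, length 2*len(p) - 1
print(q)        # [9. 0. 2.]
print(len(r))   # 7
```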