Theory of non-linear dynamical systems
D. Gonze & M. Kaufman
November 4, 2015
Master en Bioinformatique et Modélisation
Contents

1 Introduction
1.1 Evolution equations
1.2 An example
2 Definitions
2.1 System of differential equations
2.2 Steady state
2.3 Phase space, trajectories and nullclines
2.4 Cauchy-Lipschitz theorem
3 Linear stability analysis
3.1 Case of a 1-variable system
3.2 Case of a 2-variable system
3.3 Case of an N-variable system
4 Dynamical behaviors
4.1 Single steady state
4.2 Bistability
4.3 Limit-cycle oscillations
4.4 Chaos and other complex behaviors
5 Hysteresis, bifurcations, and stability diagram
5.1 Parameter space and stability diagram
5.2 Hysteresis and bifurcations
6 Applications
6.1 One-variable system: synthesis and degradation of one compound
6.2 One-variable system: logistic synthesis and linear degradation
6.3 Two-variable system: two mutually activated compounds (bistability)
6.4 Two-variable system: Brusselator (limit-cycle oscillations)
7 Appendix
7.1 Detailed study of the perturbation evolution in a 2-variable system
7.2 Examples of trajectories around the steady state for a 3-variable system
7.3 Stability of the steady state for an N-variable system: Routh-Hurwitz criteria
7.4 Cusp and other co-dimension 2 bifurcations
7.5 Poincaré-Bendixson theorem
7.6 Characterization of complex behaviors
8 References
1 Introduction

1.1 Evolution equations
When we model the dynamics of cellular phenomena, we are interested in the evolution of the concentrations of the products and substrates as a function of time. This variation is described by a time derivative. Such an evolution equation typically involves synthesis, degradation and transport terms:

dX/dt = synthesis of X − degradation of X + transport of X   (1)

Generally, the transport and degradation of X are functions of X but also of other variables of the system. When we model a biological system, we are usually faced with a system of several coupled equations. These equations are called differential equations because they involve derivatives. Since the laws of chemical interaction and enzyme kinetics are generally non-linear (e.g. Michaelis-Menten, Hill), these equations are said to be non-linear differential equations.
1.2 An example
As an example, here is a model proposed to study the dynamics of calcium oscillations
observed in cells. The model is based on the Calcium-induced-Calcium release mechanism
through the inositol-trisphosphate receptor (fig. 1) and is used to describe signal-induced
Calcium oscillations.
Figure 1: Scheme of the model proposed for the dynamics of calcium (Dupont G, Berridge
MJ, Goldbeter A, 1991, Signal-induced Ca2+ oscillations: properties of a model based on
Ca2+ -induced Ca2+ release, Cell Calcium 12, 73-85).
Evolution equations for the cytosolic Calcium (variable Z) and the intravesicular Calcium
(variable Y ) are given by:
dZ/dt = v0 + v1 β − vm2 Z^n/(K2^n + Z^n) + vm3 [Y^m/(KR^m + Y^m)] [Z^p/(KA^p + Z^p)] + kf Y − k Z
dY/dt = vm2 Z^n/(K2^n + Z^n) − vm3 [Y^m/(KR^m + Y^m)] [Z^p/(KA^p + Z^p)] − kf Y
(2)
The analysis of these equations shows that the model can account for the occurrence of self-sustained oscillations of Calcium and makes it possible to study the effect of various parameters, such as the stimulus strength, on the period of these oscillations (fig. 2).
Figure 2: Oscillations of cytosolic Calcium (Z) and intravesicular Calcium (Y ) obtained
by numerical simulation of equations (2), for two different levels of stimulation, measured
by the parameter β: (A) β low, (B) β high (Dupont et al, 1991).
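To make the example concrete, here is a minimal numerical sketch of how equations (2) can be integrated with Python/scipy. The parameter values below are illustrative assumptions (not necessarily those used by Dupont et al., 1991); the variable and parameter names simply mirror equation (2).

```python
# Minimal sketch (not the authors' original code): numerical integration of equations (2).
import numpy as np
from scipy.integrate import solve_ivp

v0, v1, beta = 1.0, 7.3, 0.3          # basal influx, stimulated influx, stimulus level (assumed)
vm2, vm3 = 65.0, 500.0                # maximal pumping and release rates (assumed)
K2, Kr, Ka = 1.0, 2.0, 0.9            # threshold constants (assumed)
kf, k = 1.0, 10.0                     # leak and efflux rate constants (assumed)
n, m, p = 2, 2, 4                     # Hill coefficients (assumed)

def calcium(t, u):
    Z, Y = u                                                            # cytosolic / intravesicular Ca2+
    pump = vm2 * Z**n / (K2**n + Z**n)                                  # pumping into the store
    release = vm3 * (Y**m / (Kr**m + Y**m)) * (Z**p / (Ka**p + Z**p))   # Ca2+-induced Ca2+ release
    dZ = v0 + v1*beta - pump + release + kf*Y - k*Z
    dY = pump - release - kf*Y
    return [dZ, dY]

sol = solve_ivp(calcium, (0, 10), [0.1, 1.0], t_eval=np.linspace(0, 10, 2000), rtol=1e-8)
print(sol.y[0].min(), sol.y[0].max())   # Z oscillates between a low and a high value
```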
Overview of the course
In this summary we will present some notions of the theory of dynamical systems which are necessary for the analysis of models involving several non-linear differential equations (such as eq. (2)). The theory is illustrated by biological applications and some exercises are proposed.
2 Definitions

2.1 System of differential equations
The evolution of two interacting variables (concentrations of the species) can be described
by a system of two differential equations:
dX/dt = f(X, Y, δ, µ)
dY/dt = g(X, Y, δ, µ)
(3)
The functions f and g usually depend on the variables (concentrations X and Y of compounds, mRNAs, proteins,...) and on some kinetic parameters (Michaelis-Menten constant, maximum rate, degree of cooperativity,..., denoted here by δ and µ). Because of their high degree of non-linearity (due to the presence of terms such as X², XY, Michaelian or Hill functions), finding an analytical solution of this system is rarely feasible. Consequently, other approaches have to be developed to solve such systems.
2.2 Steady state
First, we can see that if we have

dX/dt = 0
dY/dt = 0
(4)
the concentrations of X and Y will not evolve: X and Y are constant. The system (4) is an algebraic system of equations and we denote the solution (XS, YS) (where the subscript S stands for steady state):

f(X, Y, δ, µ) = 0
g(X, Y, δ, µ) = 0
(5)
The steady state is reached when the synthesis (production) and the degradation (consumption) terms compensate each other (see eq. (1)). Note that the equilibrium state is a particular steady state which is specific to equilibrium reactions.
If the initial conditions do not correspond to the steady state (X0 ≠ XS, Y0 ≠ YS), we expect that the system will evolve until it reaches a steady state satisfying condition (4) (or, equivalently, (5)), as illustrated in figure 3.
But is it always the case? Will the system always evolve monotonously towards a steady state? Before treating this question, we will first define the notions of phase space, trajectories and nullclines, which are very convenient for our discussion.
Figure 3: Evolution of variable X towards a steady state XS as a function of time.
2.3 Phase space, trajectories and nullclines
An alternative to the temporal representation of the dynamical behavior is the representation in the phase space. This is a multidimensional space whose coordinates are the variables (concentrations) of the system.
Figure 4 exemplifies the representation in the phase space (X, Y) of the evolution toward the steady state. The starting point, represented in white, is called the initial condition (X0, Y0), i.e. the concentrations of X and Y at time t = 0. The black point is the steady state (XS, YS) and the curve is the trajectory of the system. This trajectory is built by calculating at each time step the values of X and Y.
Figure 4: Phase space (variable X, variable Y), showing the initial condition (X0, Y0) and the steady state (XS, YS).
When the system contains two variables, it is useful to resort to the nullcline representation. These curves are defined by dX/dt = 0 on the one hand and dY/dt = 0 on the other hand. Thus, the intersection of these curves corresponds to the steady state, for which we have simultaneously dX/dt = 0 and dY/dt = 0. In the different regions delimited by these nullclines, it is possible to determine the direction of the evolution of the system by studying the sign of dX/dt and of dY/dt (fig. 5).
Figure 5: Nullclines (left panel) and vector field (right panel) in the phase plane (variable X, variable Y).
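As an illustration of this sign-based reasoning, here is a minimal sketch that evaluates the signs of dX/dt and dY/dt on a coarse grid of the phase plane; the functions f and g below are hypothetical examples, not models taken from the text.

```python
# Minimal sketch: direction of the flow in each region of the phase plane, read off from
# the signs of dX/dt and dY/dt (f and g are hypothetical example functions).
import numpy as np

f = lambda X, Y: Y - X            # example dX/dt
g = lambda X, Y: X*(1 - X) - Y    # example dY/dt

for X in np.linspace(0.1, 1.0, 4):
    for Y in np.linspace(0.1, 1.0, 4):
        arrow = ("right" if f(X, Y) > 0 else "left",
                 "up" if g(X, Y) > 0 else "down")
        print(f"X={X:.2f} Y={Y:.2f} -> {arrow}")
```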
2.4 Cauchy-Lipschitz theorem
Under some continuity and regularity hypotheses (usually satisfied in biological ODE models), the Cauchy-Lipschitz theorem states that, from a given initial condition, a system of differential equations has one and only one solution, i.e. there is a single trajectory that the system can follow. A direct consequence of this property is that a trajectory cannot cross itself or another one. This also implies that, once the initial condition is specified, the behaviour of the system is completely determined. The behaviour is said to be deterministic.
Figure 6: Cauchy-Lipschitz theorem and determinism.
3 Linear stability analysis
To study the behavior of the system around the steady state, we resort to the linearized stability principle. We describe this method first for a one-variable and subsequently for a two-variable system. The theory can be generalized to an N-variable system.
3.1 Case of a 1-variable system
We can write the evolution equation of X as:

dX/dt = f(X)   (6)

where f(X) is a function taking into account both the synthesis and the degradation kinetics of X.
By definition, the steady state satisfies the condition:

dXS/dt = f(XS) = 0   (7)
Now, we apply a small perturbation x (with x ≪ XS) to this steady state:

X = XS + x   (8)
Combining equations (6)-(8), we have:

dX/dt = dXS/dt + dx/dt = dx/dt = f(X)   (9)
Since the evolution function f(X) is non-linear, we proceed to a linearization using a Taylor expansion around XS. The fact that x is small allows us to neglect the terms of order two or higher (x² is very small, x³ is very very small, etc):

f(X) = f(XS) + (df/dX)_S x + (1/2)(d²f/dX²)_S x² + ... = (df/dX)_S x   (10)
Using (10) for the right-hand side, we see that the equation for the evolution of the perturbation x becomes:

dx/dt = f(X) = (df/dX)_S x   (11)

If we name λ the derivative (df/dX)_S, the solution of equation (11) can be written¹

x = x̃ e^(λt)   (12)

with:

λ = (df/dX)_S  and  x̃ = x(0)   (13)

¹ Let us recall the resolution of a first-order differential equation: dx/dt = λx ⇒ ∫ (1/x) dx = ∫ λ dt ⇒ ln(x) = λt + C ⇒ x = x̃ e^(λt).
Equation (12) describes the (exponential) evolution with respect to time of the perturbation x: if λ > 0, x will increase and the system will leave its steady state, which is said to be unstable. Conversely, if λ < 0, the perturbation will be damped and the system will return to its steady state, which is said to be stable.
Figure 7: Stability of the steady state.
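A minimal numerical sketch of this one-variable stability test is given below; the evolution function f used here is a hypothetical example (constant synthesis and linear degradation), and λ is estimated with a finite difference.

```python
# Minimal sketch: numerical linear stability test for a 1-variable system dX/dt = f(X).
# The function f below is a hypothetical example, not a model from the text.
import numpy as np
from scipy.optimize import brentq

ks, kd = 1.0, 0.5
f = lambda X: ks - kd * X              # example evolution function

Xs = brentq(f, 0.0, 10.0)              # steady state: f(Xs) = 0
h = 1e-6
lam = (f(Xs + h) - f(Xs - h)) / (2*h)  # lambda = df/dX evaluated at Xs
print(Xs, lam, "stable" if lam < 0 else "unstable")
```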
3.2 Case of a 2-variable system
Consider now a system of two variables described by the following coupled equations:

dX/dt = f(X, Y)
dY/dt = g(X, Y)
(14)
The steady state (XS, YS) satisfies:

dXS/dt = f(XS, YS) = 0
dYS/dt = g(XS, YS) = 0
(15)

We apply a small perturbation (with x ≪ XS and y ≪ YS):

X = XS + x
Y = YS + y
(16)
Combining Eqs. (14)-(16), we have:

dX/dt = dXS/dt + dx/dt = dx/dt = f(X, Y)
dY/dt = dYS/dt + dy/dt = dy/dt = g(X, Y)
(17)
Since the system (14) is non-linear, we proceed to a Taylor expansion (we neglect the terms of order two or higher):

f(X, Y) = f(XS, YS) + (∂f/∂X)_S x + (∂f/∂Y)_S y + ... = (∂f/∂X)_S x + (∂f/∂Y)_S y
g(X, Y) = g(XS, YS) + (∂g/∂X)_S x + (∂g/∂Y)_S y + ... = (∂g/∂X)_S x + (∂g/∂Y)_S y
(18)
We call the Jacobian matrix at the steady state (XS, YS) the matrix:

J = ( a11  a12 )
    ( a21  a22 )
(19)

where

a11 = (∂f/∂X)_S,  a12 = (∂f/∂Y)_S,  a21 = (∂g/∂X)_S,  a22 = (∂g/∂Y)_S   (20)

Combining (17) and (18), we obtain the system of equations:

dx/dt = a11 x + a12 y
dy/dt = a21 x + a22 y
(21)
whose solution can be written as:

x = x̃ e^(λt)
y = ỹ e^(λt)
(22)

Replacing (22) into (21), we obtain:

(a11 − λ) x̃ + a12 ỹ = 0
a21 x̃ + (a22 − λ) ỹ = 0
(23)

This system has a non-trivial solution if:

Det ( a11 − λ   a12     )  = 0
    ( a21       a22 − λ )
(24)
i.e.

λ² − (a11 + a22) λ + (a11 a22 − a12 a21) = 0   (25)

We note that a11 + a22 = T = trace of the Jacobian matrix and a11 a22 − a12 a21 = ∆ = determinant of the Jacobian matrix.
Equation (25) has in general two roots:

λ± = (1/2) [T ± √(T² − 4∆)]   (26)
By replacing (26) in (22), we obtain the general form of the solution of the system (21):

x(t) = x̃1 e^(λ+ t) + x̃2 e^(λ− t)
y(t) = ỹ1 e^(λ+ t) + ỹ2 e^(λ− t)
(27)

The values of x̃1, x̃2, ỹ1 and ỹ2 can be found given the initial conditions x(0) and y(0).
The presence of the square root in (26) implies that the λ± (called the eigenvalues) can be complex. We discuss the two cases separately:
• If the eigenvalues are real and distinct:
In this case, three situations are possible:
– If λ+ < 0 and λ− < 0, the perturbation will decrease with respect to time and the system will come back to its steady state. In this case, the steady state is stable. Because the approach is exponential, this type of steady state is called a node.
– If λ+ > 0 and λ− > 0, the perturbation will increase with respect to time and the system will leave its steady state. In this case, the steady state is unstable. Here also, the steady state is a node.
– If λ+ > 0 and λ− < 0 (or the opposite), the perturbation will either increase immediately or decrease at first, but after a while it will increase and the system will leave its steady state. Here, the steady state is unstable and is called a saddle.
• If the eigenvalues are complex conjugates:
We can write these eigenvalues as λ± = p ± iq (with p = Re λ and q = Im λ).
It can be shown that the solution (27) can be written as (see Appendix):

x(t) = A e^(pt) cos qt + B e^(pt) sin qt
y(t) = A′ e^(pt) cos qt + B′ e^(pt) sin qt
(28)

This is the parametric equation of a spiral. Indeed this system can be rewritten as:

x(t) = κ e^(pt) cos(qt + φ)
y(t) = κ′ e^(pt) cos(qt + φ′)
(29)

where κ, κ′, φ, and φ′ are functions of A, B, A′ and B′.
The imaginary part (q) is responsible for the oscillations around the steady state. In this case, the steady state is called a focus. If p < 0 the oscillations will be damped, while if p > 0 the oscillations will be amplified. Thus, if p < 0, the steady state is stable, while if p > 0, the steady state is unstable.
In summary...
In summary, the stability describes the way in which the system reacts to a small perturbation that moves it slightly away from the steady state. If the system is exactly at its steady state, it will not evolve; X and Y are constant and equal to XS and YS respectively. If we apply a small perturbation to these values, two situations can arise: either the system comes back to its steady state, or it definitively leaves this state. In the first case, the steady state is stable; in the second case, it is unstable.
Figure 8: Stability of the steady state: a small perturbation of variable X is damped over time when the steady state is stable and amplified when it is unstable.
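The classification summarized above can be automated. The sketch below, assuming a numerically evaluated Jacobian matrix, computes T, ∆ and T² − 4∆ and returns the type of steady state; the matrix used at the end is a hypothetical example.

```python
# Minimal sketch: classify a steady state of a 2-variable system from its Jacobian,
# using the trace T, determinant D and discriminant T^2 - 4D (eqs. 25-26).
import numpy as np

def classify(J):
    T = np.trace(J)
    D = np.linalg.det(J)
    disc = T**2 - 4*D
    if D < 0:
        return "saddle (unstable)"
    kind = "node" if disc >= 0 else "focus"
    if T < 0:
        return f"stable {kind}"
    if T > 0:
        return f"unstable {kind}"
    return "center (marginal stability)"

# Hypothetical numerical Jacobian evaluated at a steady state
J = np.array([[-1.0, 2.0],
              [-2.0, -1.0]])
print(classify(J), np.linalg.eigvals(J))
```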
As we have seen, according to the position of the eigenvalues λ in the plane (p, q) (also called the Gauss plane (Re λ, Im λ)), we can obtain different approaches to the steady state. The table and the figure below summarize the different possible situations.
3.3 Case of an N-variable system
More generally, for a system involving N variables, the characteristic equation is of degree N and must often be solved numerically. Except in some particular cases, such an equation has N distinct roots that can be real or complex. These values are the eigenvalues of the N x N Jacobian matrix. The general rule is that the steady state is stable if no eigenvalue has a positive real part. It is sufficient that one eigenvalue has a positive real part for the steady state to be unstable.
Remark: Very often, because of the complexity of the model (several coupled and non-linear differential equations), the analytical solution (for the trajectories far from the steady state, the limit cycles or chaotic attractors) cannot be obtained. So, we resort to numerical simulations, using a computer.
| T              | ∆     | T² − 4∆     | Steady state                |
| T < 0          | ∆ > 0 | T² − 4∆ > 0 | stable node                 |
| T > 0          | ∆ > 0 | T² − 4∆ > 0 | unstable node               |
| T > 0 or T < 0 | ∆ < 0 | T² − 4∆ > 0 | saddle (unstable)           |
| T < 0          | ∆ > 0 | T² − 4∆ < 0 | stable focus                |
| T > 0          | ∆ > 0 | T² − 4∆ < 0 | unstable focus              |
| T = 0          | ∆ > 0 | T² − 4∆ < 0 | center (marginal stability) |

Figure 9: According to the nature of the eigenvalues (their position in the (Re λ, Im λ) plane), several approaches to the steady state (phase plots in the (x, y) plane) can be observed.
Figure 10: Depending on the sign of the trace T and of the determinant ∆ of the Jacobian
matrix and on the sign of T 2 −4∆, several approaches to the steady state can be observed.
4 Dynamical behaviors
What happens if the steady state is unstable? How does the system behave? Where do
the trajectories go in the phase plane? Several behaviors are possible (see more examples
in section 6). We describe here some of them, which have important consequences in
biology.
4.1 Single steady state
If there is a single stable steady state and if the system can not explode, the only possible
behavior of the system is to evolve towards this steady state, whatever the initial condition
(provided that it is not on an unstable steady state) (fig. 11).
Figure 11: Evolution towards a steady state (time evolution of X and Y, and trajectory in the phase plane).
4.2 Bistability
The system can have several stable steady states. Then, depending on the initial condition, the system will evolve towards one or the other steady state (see section 6.3). If two stable steady states coexist, this is a particular case of multistability, referred to as bistability (fig. 12).
Figure 12: Bistability: starting from the initial condition (X01, Y01) or (X02, Y02), the system evolves towards one or the other stable steady state (time evolution of X and Y, and trajectories in the phase plane).
Bistability plays a crucial role in biology (cell differentiation, genetic switch, memory
effect,...).
4.3 Limit-cycle oscillations
The system can exhibit a new type of behavior, such as sustained oscillations. In the phase space, these oscillations correspond to a closed curve called a limit cycle (fig. 13).
Figure 13: Limit cycle oscillations (time evolution of X and Y, and trajectory in the phase plane).
Limit cycle oscillations play a crucial role in biology. Indeed, many biological rhythms can be described by a limit cycle (Calcium oscillations, circadian rhythms, cell cycle, glycolytic oscillations, etc).
4.4 Chaos and other complex behaviors
The system can exhibit complex (but still deterministic) behaviors, such as chaos (fig.
14). Such complex behaviors require a phase space with a dimension >2 (i.e. more than
2 variables) and multiple instability mechanisms (obtained via multiple feedback loops in
regulatory systems).
Figure 14: Chaos (time evolution of X and trajectory in the phase space).
It is not clear whether chaos plays a major role in biology, but several studies show that it might have a role in the heart beat and other constructive effects in genetic systems.
Other complex behaviors include quasi-periodic oscillations (evolution on a torus in the phase space), bursting (alternation between oscillatory and non-oscillatory phases), synchronization (important in coupled oscillators), or excitability.
5 Hysteresis, bifurcations, and stability diagram

5.1 Parameter space and stability diagram
For a given system, the linear stability analysis allows us to determine the regions in the parameter space where the steady state is stable or unstable. We can represent these domains graphically in the parameter space (such a plot is also called a two-parameter bifurcation diagram):
Figure 15: Stability diagram in a two-parameter space (parameter δ, parameter µ), showing a region with a stable steady state, a region with a stable limit cycle (oscillations), and a region of coexistence between two stable steady states (bistability).
5.2 Hysteresis and bifurcations
A bifurcation diagram shows how a steady state, and more generally the behaviour of the system, changes as a control parameter varies. The following figures illustrate different types of bifurcations that can be encountered.
A transcritical bifurcation (fig. 16) is characterised by the passage from a stable steady state to an unstable steady state. At the same point, another steady state switches from unstable to stable.
Figure 16: Transcritical bifurcation.
As a function of a certain parameter λ, the curve of the steady state can display an S shape, delimited by two limit points. These points are called saddle-node bifurcation points (fig. 17). Between these two critical values, three steady states are present: two are stable and one is unstable. This is a typical case of bistability.
Figure 17: Saddle-node bifurcation.
If the control parameter λ progressively increases, at the second critical value the system switches from one stable steady state (X high) to the other stable steady state (X low). When λ progressively decreases, it is at the first critical point that the system switches from the stable steady state with X low to the stable steady state with X high. This behavior is referred to as hysteresis (fig. 18, left). In some particular cases, sometimes due to some biological constraints, only one limit point is present and the transition is irreversible (fig. 18, right).
Figure 18: Hysteresis with reversible (left) and irreversible (right) transitions.
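The hysteresis loop can be reproduced numerically by ramping the control parameter slowly up and then down and letting the system relax at each step. The sketch below does this for a hypothetical bistable one-variable model, which is not one of the models discussed in this course.

```python
# Minimal sketch: hysteresis obtained by slowly ramping a control parameter up and down.
# The model dX/dt = lam + X^2/(1+X^2) - 0.4*X is a hypothetical bistable example.
import numpy as np
from scipy.integrate import solve_ivp

f = lambda t, X, lam: lam + X**2/(1 + X**2) - 0.4*X
lams = np.linspace(0.0, 0.1, 21)

X, up, down = 0.0, [], []
for lam in lams:                                    # increasing lam
    X = solve_ivp(f, (0, 500), [X], args=(lam,)).y[0, -1]
    up.append(X)
for lam in lams[::-1]:                              # decreasing lam
    X = solve_ivp(f, (0, 500), [X], args=(lam,)).y[0, -1]
    down.append(X)
down = down[::-1]

# On the way up the system stays on the lower branch until the limit point,
# on the way down it stays on the upper branch: up and down differ in between.
for lam, xu, xd in zip(lams, up, down):
    print(f"lam={lam:.3f}  up={xu:.3f}  down={xd:.3f}")
```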
A pitchfork bifurcation is characterised by the passage from one stable steady state to an unstable steady state and the simultaneous appearance of two stable steady states (fig. 19). This kind of bifurcation is encountered in particular systems, usually composed of two coupled subsystems. This bifurcation is not very robust: a slight asymmetry between the subsystems leads to a symmetry breaking: the pitchfork bifurcation is lost and a saddle node (limit point) appears (fig. 20).
Figure 19: Pitchfork bifurcation.
Figure 20: Pitchfork destruction (symmetry breaking).
The most common bifurcation leading to limit cycle oscillations is the Hopf bifurcation (fig. 21). In this case, we plot the maximum Xmax (and, optionally, the minimum Xmin) that the variable X takes when running around the limit cycle. The difference Xmax − Xmin then gives the amplitude of the oscillations. Sometimes it is the period of the oscillations which is given as a function of the control parameter. The Hopf bifurcation is characterised by the passage of the real part of the complex conjugate eigenvalues from negative to positive values.
Figure 21: Hopf bifurcation.
It can happen that the branch emerging at the Hopf bifurcation lies in the domain where the steady state is stable. In this case this branch is unstable. Such a bifurcation is said to be sub-critical (fig. 22, left), while in the other case, the bifurcation is said to be super-critical (fig. 22, right).
Figure 22: Sub-critical (left) and super-critical (right) Hopf bifurcation.
When a Hopf bifurcation is sub-critical, the unstable branch often reaches a limit point where it turns back towards the values of the parameter for which the steady state is unstable. In a small region, delimited by the limit point and by the Hopf bifurcation point, a stable steady state and a stable limit cycle coexist, separated by an unstable limit cycle. This situation is called hard excitation (fig. 23). This term stands for the fact that a sufficiently large perturbation has to be given to the system to make it switch from the stable steady state to the stable limit cycle (or the opposite).
Figure 23: Hard excitation.
Finally, note that chaos can be obtained after a cascade of period doublings (Feigenbaum sequence) (fig. 24). The successive limit cycles are sometimes referred to as limit cycle, period-2 limit cycle, period-4 limit cycle, period-8 limit cycle,... (fig. 25).
Figure 24: Period doubling cascade leading to chaos.
Figure 25: Period doubling cascade leading to chaos, trajectory in the phase space.
6 Applications

6.1 One-variable system: synthesis and degradation of one compound
Consider one compound, X, synthesized with a rate constant ks and degraded at rate kd:

--ks-->  X  --kd-->
Evolution equation: The evolution equation of the concentration X is:

dX/dt = F(X) = ks − kd X   (30)

Note that this is a linear system.

Steady state:

dX/dt = 0 ⇒ XS = ks/kd   (31)

Linear stability analysis: The evolution of the perturbation x is given by:

dx/dt = (dF/dX)_XS x = −kd x   (32)

The solution of this differential equation is:

x = x(0) e^(−kd t)   (33)

This means that the perturbation will be damped, in an exponential way, with respect to time. The steady state is therefore stable.

Analytical solution: Note that this system is relatively simple and the solution of equation (30) can be found analytically (recall that ∫ dX/(p + qX) = (1/q) ln(p + qX)):

X = X0 e^(−kd t) + (ks/kd) (1 − e^(−kd t))   (34)

where X0 = X(0).
We can check that

lim (t→∞) X(t) = ks/kd = XS   (35)
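As a quick check, the sketch below compares the analytical solution (34) with a direct numerical integration of equation (30), for arbitrary illustrative values of ks, kd and X0.

```python
# Minimal sketch: compare the analytical solution (34) with a numerical integration of (30).
import numpy as np
from scipy.integrate import solve_ivp

ks, kd, X0 = 2.0, 0.5, 0.1              # illustrative parameter values
t = np.linspace(0, 20, 200)

num = solve_ivp(lambda t, X: ks - kd*X, (0, 20), [X0], t_eval=t)
ana = X0*np.exp(-kd*t) + (ks/kd)*(1 - np.exp(-kd*t))

print(np.max(np.abs(num.y[0] - ana)))   # small numerical error
print(ana[-1], ks/kd)                   # X approaches the steady state ks/kd
```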
6.2 One-variable system: logistic synthesis and linear degradation
Consider now one compound, X, synthesized with a logistic synthesis rate and linearly degraded:

dX/dt = F(X) = ks X (1 − X/N) − kd X   (36)
Steady states:

Xs1 = 0   (37)
Xs2 = N (ks − kd)/ks   (exists if ks > kd)   (38)
Linear stability analysis:

F = ks X − (ks/N) X² − kd X   (39)
F′ = ks − 2 (ks/N) X − kd   (40)
F′(Xs1) = ks − kd   (41)
F′(Xs2) = kd − ks   (42)
The following table summarizes the stability conditions for each steady state:

|     | ks < kd          | ks > kd  |
| Xs1 | stable           | unstable |
| Xs2 | (does not exist) | stable   |
Graphical analysis:
Figure 26: Graphical analysis and bifurcation diagram (transcritical bifurcation).
Analytical solution: Note that Eq. (36) can be rewritten as a standard logistic equation
and can be solved analytically.
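The transcritical scenario can also be verified numerically: the sketch below evaluates the steady states (37)-(38) and the slopes (41)-(42) on either side of the critical point ks = kd (the parameter values are illustrative).

```python
# Minimal sketch: linear stability of the two steady states of eq. (36) as ks crosses kd.
import numpy as np

def steady_states_and_slopes(ks, kd, N=1.0):
    Xs1 = 0.0
    Fp1 = ks - kd                      # F'(Xs1), eq. (41)
    out = [(Xs1, Fp1)]
    if ks > kd:                        # Xs2 exists only if ks > kd, eq. (38)
        Xs2 = N*(ks - kd)/ks
        Fp2 = kd - ks                  # F'(Xs2), eq. (42)
        out.append((Xs2, Fp2))
    return out

for ks in (0.5, 2.0):                  # below and above kd = 1
    print(ks, steady_states_and_slopes(ks, kd=1.0))
```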
6.3 Two-variable system: two mutually activated compounds (bistability)
Consider a system involving two compounds which activate each other according to Hill-type kinetics:
Evolution equations:

dX/dt = v1 Y^n/(θ^n + Y^n) − k1 X
dY/dt = v2 X^n/(θ^n + X^n) − k2 Y
(43)

To simplify, we assume that v1 = v2 = 1 and k1 = k2 = 1. The equations become:

dX/dt = Y^n/(θ^n + Y^n) − X
dY/dt = X^n/(θ^n + X^n) − Y
(44)
Steady state:
Let us take θ = 1/2. We can check that XS = YS = 1/2 is a steady state. We also see that XS = YS = 0 is a steady state. Note that those two states may not be the only solutions...
Representation in the phase plane:
We can represent the nullclines in the phase space. The nullclines are defined by:

dX/dt = 0 ⇒ Y^n/(θ^n + Y^n) = X   (45)
dY/dt = 0 ⇒ X^n/(θ^n + X^n) = Y   (46)
Linear stability analysis:
The Jacobian matrix for the steady state (1/2, 1/2) is:

J = ( ∂Ẋ/∂X  ∂Ẋ/∂Y )
    ( ∂Ẏ/∂X  ∂Ẏ/∂Y )
(47)
Figure 27: Phase space and nullclines corresponding to the system (44) for n = 1 (left
panel) and n = 4 (right panel).
Recall that:

∂/∂X [X^n/(θ^n + X^n)] = [(X^n)′ (θ^n + X^n) − X^n (θ^n + X^n)′] / (θ^n + X^n)² = n θ^n X^(n−1) / (θ^n + X^n)²   (48)

If θ = 1/2:

∂/∂X [X^n/(θ^n + X^n)] = n (1/2)^n X^(n−1) / [(1/2)^n + X^n]²   (49)

At the steady state, XS = 1/2:

(∂/∂X [X^n/(θ^n + X^n)])_S = n (1/2)^n (1/2)^(n−1) / [(1/2)^n + (1/2)^n]² = n/2   (50)
The trace T and the determinant ∆ of the Jacobian matrix are: T = −2 and ∆ = 1 − n²/4, and T² − 4∆ = 4 − 4(1 − n²/4) = n².
We can see that the trace T is always negative and that T² − 4∆ is always positive.
The sign of ∆ depends on n:
• if n < 2, we have ∆ > 0 ⇒ the steady state (1/2, 1/2) is thus a stable node.
• if n > 2, we have ∆ < 0 ⇒ the steady state (1/2, 1/2) is thus an (unstable) saddle point.
Bifurcation diagram
In the previous analysis, we assumed v1 = v2. We can now sketch the behaviour of the system when v2 is varied. Fig. 28 shows how the nullclines are affected when v2 is decreased. In particular we see that bistability is lost when v2 is below a critical value, around v2 = 0.8. Fig. 29 shows the steady state of X as a function of v2 (obtained by numerical simulation). When v2 > 0.8, the system displays bistability: two stable nodes are separated by an (unstable) saddle point. When v2 < 0.8, the single (stable) steady state is the trivial state (0,0). The bifurcation point, at which the upper stable state and the middle saddle point coalesce and disappear, is called a "Saddle-Node" (SN) bifurcation. Through numerical simulation it is also possible to follow the SN bifurcation point in the (v1, v2) plane (fig. 30). Such a stability diagram shows the region of bistability as a function of these two parameters.
Figure 28: Nullclines for 3 different values of v2 (v2 = 1, v2 = 0.85, v2 = 0.7), with v1 = 1 and n = 4.
Figure 29: Bifurcation diagram of the steady state XS as a function of v2 (v1 = 1, n = 4): two branches of stable nodes are separated by a branch of unstable saddle points, which end at the Saddle-Node bifurcation.
Figure 30: Stability diagram as a function of v1 and v2 (n = 4): the SN bifurcation curve separates the region of bistability from the region with a single steady state (XS = 0, YS = 0).
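A minimal numerical illustration of this bistability is sketched below: system (44) with θ = 1/2 and n = 4 is integrated from two different initial conditions, which converge to two different stable steady states (the numerical values quoted in the comments are approximate).

```python
# Minimal sketch: bistability in system (44) (theta = 1/2, n = 4): two different initial
# conditions reach two different stable steady states.
import numpy as np
from scipy.integrate import solve_ivp

theta, n = 0.5, 4
def rhs(t, u):
    X, Y = u
    return [Y**n/(theta**n + Y**n) - X,
            X**n/(theta**n + X**n) - Y]

for u0 in ([0.2, 0.1], [0.8, 0.9]):            # low and high initial conditions
    sol = solve_ivp(rhs, (0, 50), u0, rtol=1e-8)
    print(u0, "->", np.round(sol.y[:, -1], 3)) # converges to (~0, ~0) or (~0.92, ~0.92)
```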
6.4 Two-variable system: Brusselator (limit-cycle oscillations)
Consider the following system of 4 chemical reactions:

A --k1--> X
B + X --k2--> Y + C
2X + Y --k3--> 3X
X --k4--> D
Evolution equations
The concentrations of A and B are supposed to be constant and noted a and b, respectively. The evolution equations of the concentrations X and Y are:

dX/dt = k1 a − k2 b X + k3 X² Y − k4 X
dY/dt = k2 b X − k3 X² Y
(51)

To simplify, we will consider that k1 = k2 = k3 = k4 = 1. The system then reduces to:

dX/dt = a − b X + X² Y − X
dY/dt = b X − X² Y
(52)
Steady state:
The steady state of the system is:

XS = a
YS = b/a
(53)
Linear stability analysis:
The Jacobian matrix is:

J = ( ∂Ẋ/∂X  ∂Ẋ/∂Y )   ( −b + 2XY − 1    X²  )
    ( ∂Ẏ/∂X  ∂Ẏ/∂Y ) = ( b − 2XY        −X²  )
(54)

At the steady state, X = XS = a and Y = YS = b/a, and thus:

J = ( b − 1    a²  )
    ( −b      −a²  )
(55)
The trace T and the determinant ∆ of the Jacobian matrix are: T = b − 1 − a² and ∆ = a².
The characteristic equation is:

λ² − T λ + ∆ = 0   (56)

We have to study the sign of T² − 4∆:

T² − 4∆ = (b − 1 − a²)² − 4a²
        = (b − 1 − a² − 2a)(b − 1 − a² + 2a)
        = (b − (a + 1)²)(b − (a − 1)²)
(57)

The determinant ∆ is always positive. The trace T is positive if b > a² + 1 and negative otherwise. T² − 4∆ is negative if (a − 1)² < b < (a + 1)² and positive otherwise.
The following table summarizes the different possible behaviors as a function of the parameters a and b:

| b                     | T | ∆ | T² − 4∆ | type of steady state |
| b < (a − 1)²          | − | + | +       | stable node          |
| b = (a − 1)²          | − | + | 0       |                      |
| (a − 1)² < b < a² + 1 | − | + | −       | stable focus         |
| b = a² + 1            | 0 | + | −       |                      |
| a² + 1 < b < (a + 1)² | + | + | −       | unstable focus       |
| b = (a + 1)²          | + | + | 0       |                      |
| b > (a + 1)²          | + | + | +       | unstable node        |
When parameter b increases, the steady state turns from a stable node into a stable focus; then it loses its stability, and the steady state turns from an unstable focus into an unstable node. When the steady state is unstable, the system evolves towards a limit cycle. These behaviours are shown in figure 31 (results obtained by numerical integration of the differential equations; see also the sketch below):
• For a = 2 and b = 0.5: the system evolves towards a steady state (stable node)
• For a = 2 and b = 4: the system evolves towards a steady state (stable focus)
• For a = 2 and b = 6: the system leaves its steady state (unstable focus) to reach a limit cycle (sustained oscillations)
• For a = 2 and b = 12: the system leaves its steady state (unstable node) to reach a limit cycle (sustained oscillations)
The stability diagram showing the stable and unstable regions as a function of a and b is given in figure 32. The solid curve, satisfying b = a² + 1, delimits the stability region. The dotted curves, corresponding to b = (a + 1)² and b = (a − 1)², separate the node from the focus in the unstable and stable regions, respectively.
Figure 33 is a bifurcation diagram: it shows how the steady state of X changes as a function of the parameter b (for a fixed at 2). Figure 34 shows how the period varies in the oscillatory domain. These diagrams have been obtained numerically.
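A minimal sketch of such a numerical integration is given below for a = 2 and b = 6; the amplitude and the period of the limit cycle are estimated roughly from the successive maxima of X.

```python
# Minimal sketch: numerical integration of the Brusselator (52) for a = 2, b = 6
# (unstable focus), with a rough estimate of the amplitude and period of the limit cycle.
import numpy as np
from scipy.integrate import solve_ivp

a, b = 2.0, 6.0
rhs = lambda t, u: [a - b*u[0] + u[0]**2*u[1] - u[0],
                    b*u[0] - u[0]**2*u[1]]

t = np.linspace(0, 100, 20000)
sol = solve_ivp(rhs, (0, 100), [1.0, 1.0], t_eval=t, rtol=1e-8)
X = sol.y[0]

peaks = [i for i in range(10000, len(X) - 1) if X[i-1] < X[i] > X[i+1]]  # maxima after transient
print("amplitude ~", X[peaks].max() - X[10000:].min())
print("period    ~", np.mean(np.diff(t[peaks])))
```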
Figure 31: Different kinds of behaviors obtained for the Brusselator model with different parameter (a and b) values: (a = 2, b = 0.5), (a = 2, b = 4), (a = 2, b = 6), (a = 2, b = 12). Left panels: time evolution. Right panels: phase space.
Note that the period at the bifurcation can be calculated. We know that the frequency is given by the imaginary part of the eigenvalues. At the bifurcation point, T = 0 and the eigenvalues are purely imaginary: λ± = ±i√∆ = ±ia. The period is therefore equal to 2π/a. For a = 2 and b = 5, the period is thus equal to π ≈ 3.14.
Figure 32: Stability diagram for the Brusselator in the (a, b) parameter plane, showing the regions corresponding to a stable node, stable focus, unstable focus, and unstable node.
Figure 33: Bifurcation diagram for the Brusselator as a function of parameter b (with a = 2): steady state XS (stable, then unstable) and maximum and minimum of X over the oscillations. The steady state loses its stability and limit cycle oscillations emerge at b = 5 (Hopf bifurcation).
Figure 34: Period of the oscillations for the Brusselator as a function of parameter b (with a = 2). At the bifurcation point (b = 5), the period is π ≈ 3.14.
Nullclines and direction field:
The nullclines, defined by:

dX/dt = 0 ⇒ Y = (1 + b)/X − a/X²   (X-nullcline)   (58)
dY/dt = 0 ⇒ Y = b/X   (Y-nullcline)

are shown in figure 35. They delimit regions in the phase space where the vector field has a particular direction (figure 36).
Figure 35: Brusselator: X- and Y-nullclines (for a = 2 and b = 6).
Figure 36: Brusselator: direction field (for a = 2 and b = 6).
7 Appendix

7.1 Detailed study of the perturbation evolution in a 2-variable system
The evolution of the perturbation around the steady state in the case of a 2-variable system is given by equation (27):

x(t) = x̃1 e^(λ+ t) + x̃2 e^(λ− t)
y(t) = ỹ1 e^(λ+ t) + ỹ2 e^(λ− t)
(59)

Note that λ+ and λ− are related to the trace and the determinant of the Jacobian matrix (see Fig. 9).
As explained in the text, two cases should be distinguished: (a) the eigenvalues λ+ and λ− are real, or (b) they are complex conjugates. We describe the evolution of the perturbation in both cases in a bit more detail.

(a) λ+ and λ− are real
In order to sketch the trajectories of x and y in the plane (x, y), it is convenient to calculate the ratio dy/dx:

dy/dx = (ỹ1 λ+ e^(λ+ t) + ỹ2 λ− e^(λ− t)) / (x̃1 λ+ e^(λ+ t) + x̃2 λ− e^(λ− t))   (60)

We can rearrange eq. (60) as

dy/dx = (ỹ1 λ+ + ỹ2 λ− e^((λ− − λ+) t)) / (x̃1 λ+ + x̃2 λ− e^((λ− − λ+) t))   (61)
If we assume that λ− < λ+ < 0, then the limit

lim (t→∞) dy/dx = ỹ1/x̃1   (62)
This is the equation of a straight line crossing the origin (0,0) (see line d1 on fig. 37) that
describes the asymptotic approach of the evolution of the perturbation.
We can also calculate the limit

lim (t→−∞) dy/dx = ỹ2/x̃2   (63)
This is the equation of a straight line crossing the origin (see line d2 on fig. 37) that describes the initial slope of the trajectories of the perturbation.
Similarly, if λ− > λ+ > 0, the trajectories will leave the steady state, initially being asymptotic to the line d1: dy/dx = ỹ1/x̃1.
When λ− and λ+ have opposite signs, the trajectory approaches the point (0,0) along the line d2 but then goes away following the line d1 (see fig. 38).
Figure 37: Evolution of the perturbation in the case λ+ < 0 and λ− < 0 (stable node).
Figure 38: Evolution of the perturbation in the case λ+ > 0 and λ− < 0 (saddle).
(b) λ+ and λ− are complex conjugates
When the eigenvalues λ+ and λ− are complex conjugates, say λ± = p ± iq, equations (59) can be rearranged as

x(t) = A e^(pt) cos qt + B e^(pt) sin qt
y(t) = A′ e^(pt) cos qt + B′ e^(pt) sin qt
(64)

which can then be rewritten as

x(t) = K e^(pt) cos(qt + φ)
y(t) = K′ e^(pt) cos(qt + φ′)
(65)

Indeed, recalling that

cos(a + b) = cos a cos b − sin a sin b   (66)

we can show the equivalence between equations (64) and (65):

x = K e^(pt) (cos qt cos φ − sin qt sin φ)   (67)
  = K cos φ · e^(pt) · cos qt − K sin φ · e^(pt) · sin qt   (68)

Defining A = K cos φ and B = −K sin φ, we get
tan φ = −B/A ⇒ φ = arctan(−B/A)   (69)

A² + B² = K² ⇒ K = √(A² + B²)   (70)
Similarly, for y, we can show that:

φ′ = arctan(−B′/A′)  and  K′ = √(A′² + B′²)   (71)
Equation (65) is the equation of a spiral in the plane (x, y). If p < 0, the spiral converges to the origin (0,0) and the trajectory corresponds to damped oscillations of x and y with time (fig. 39). If p > 0, the spiral diverges from the origin (0,0) and the trajectory corresponds to amplified oscillations of x and y (not shown).
If λ+ and λ− are purely imaginary (p = 0), then the oscillations are neither damped nor amplified. Their amplitude remains constant and depends on the perturbation applied to the system. In this particular case, Eq. (64) becomes

x(t) = A cos(qt) + B sin(qt)
y(t) = A′ cos(qt) + B′ sin(qt)
(72)

which can be written as:

x(t) = K cos(qt + φ)
y(t) = K′ cos(qt + φ′)
(73)

Equation (73) is the equation of a closed curve in the plane (x, y) and corresponds to self-sustained oscillations of x and y. Note that their amplitudes and phases, respectively governed by K, K′ and φ, φ′, depend on A, B, A′ and B′, i.e. on the initial conditions of the perturbation x(0) and y(0).
Figure 39: Evolution of the perturbation when λ+ and λ− are complex conjugates (λ± = p ± iq) with real part p < 0 (stable focus).
Figure 40: Evolution of the perturbation when λ+ and λ− are purely imaginary (center,
marginal stability).
7.2 Examples of trajectories around the steady state for a 3-variable system
A 3-variable system has three eigenvalues. Again, the type of behavior can be characterized as a function of the position of these eigenvalues in the Re/Im plane. Five non-degenerate cases can be distinguished: (1) the three eigenvalues are real and negative (stable steady state); (2) the three eigenvalues are real and two of them are negative (unstable steady state); (3-4) two eigenvalues are complex conjugates with a negative real part and the third one is real and negative (stable steady state; two cases can be distinguished depending on the relative values of the real part of the complex eigenvalues and of the real one); and (5) two eigenvalues are complex conjugates with a negative real part and the third one is real and positive (unstable steady state). The cases (3) and (5) are illustrated on fig. 41.
Figure 41: Evolution of the perturbation for a 3-variable system (a) when two eigen values
are complex conjugates with a negative real part and the third one is real and negative
(stable steady state) (left panel) and (b) when two eigen values are complex conjugates
with a negative real part and the third one is real and positive (unstable steady state)
(right panel).
7.3 Stability of the steady state for an N-variable system: Routh-Hurwitz criteria
Let us consider the case of an N-variable system:

dX1/dt = f1(X1, X2, ..., XN)
dX2/dt = f2(X1, X2, ..., XN)
...
dXN/dt = fN(X1, X2, ..., XN)
(74)
This can be rewritten in a more compact form:

dXi/dt = fi(X1, X2, ..., XN)  with i = 1, ..., N   (75)

or in vectorial form:

dX/dt = f(X)   (76)
By definition, the steady state XS must satisfy:

(dX/dt)_(X=XS) = f(XS) = 0   (77)
Linearization around the steady state leads us to consider the Jacobian:

J = (∂f/∂X)_(X=XS)   (78)

i.e.

J = ( ∂f1/∂X1  ∂f1/∂X2  ...  ∂f1/∂XN )
    ( ∂f2/∂X1  ∂f2/∂X2  ...  ∂f2/∂XN )
    ( ...                            )
    ( ∂fN/∂X1  ∂fN/∂X2  ...  ∂fN/∂XN )   evaluated at X = XS
(79)

This is an N x N matrix.
The eigenvalues are obtained by solving the following equation:

det(J − λI) = 0   (80)

The characteristic equation is a polynomial of degree N:

λ^N + a1 λ^(N−1) + a2 λ^(N−2) + ... + aN = 0   (81)
If N = 2, we obtain the quadratic equation discussed in detail for the case of a 2-variable system (remember that in this case, a1 = −Trace(J) and a2 = Det(J)). When N > 2, it is difficult (usually impossible) to find all the roots of equation (81). We can nevertheless obtain information about their signs (and hence about the stability of the steady state).
Suppose λ1, λ2, ..., λN are the (known) eigenvalues of the linearized system:

dX/dt = JX   (82)

The solution of this equation can be written:

Xi = XiS + x̃1 e^(λ1 t) + x̃2 e^(λ2 t) + ... + x̃N e^(λN t)   (83)

If one eigenvalue λ has a real part > 0, the perturbation will ultimately increase and the steady state is thus unstable. How can we determine whether all the eigenvalues have a negative real part? This can be done by checking some conditions, known as the Routh-Hurwitz criteria.
We can define N matrices as follows:

H1 = ( a1 )

H2 = ( a1  1  )
     ( a3  a2 )

H3 = ( a1  1   0  )
     ( a3  a2  a1 )
     ( a5  a4  a3 )

...

Hj = ( a1       1        0        0        ...  0  )
     ( a3       a2       a1       1        ...  0  )
     ( a5       a4       a3       a2       ...  0  )
     ( ...                                         )
     ( a(2j−1)  a(2j−2)  a(2j−3)  a(2j−4)  ...  aj )

...

HN = ( a1  1   0   ...  0  )
     ( a3  a2  a1  ...  0  )
     ( ...                 )
     ( 0   0   0   ...  aN )
(84)

where the (l, m) element of the matrix Hj is
a(2l−m)  for 0 < 2l − m < N,
1        for 2l = m,
0        for 2l < m or 2l > N + m.
Then, all eigenvalues have negative real parts (steady state stable) if and only if the determinants of all Hurwitz matrices are positive:

det(Hj) > 0  ∀j   (85)
Robert May (1973) summarized the stability criteria in the cases N = 2, N = 3, and N = 4 as follows:

| N     | Conditions for stability                            |
| N = 2 | a1 > 0, a2 > 0                                      |
| N = 3 | a1 > 0, a3 > 0, a1 a2 > a3                          |
| N = 4 | a1 > 0, a1 a2 > a3, a4 > 0, a1 a2 a3 > a3² + a1² a4 |
An application of the Routh-Hurwitz criteria to a 3-variable system is given in Edelstein-Keshet, pp. 234-235.
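The criteria can also be checked numerically from the coefficients of the characteristic equation (81). The sketch below builds the Hurwitz matrices according to (84) and tests the sign of their determinants; the two polynomials used as examples are arbitrary.

```python
# Minimal sketch: Routh-Hurwitz test from the coefficients of the characteristic
# polynomial lambda^N + a1*lambda^(N-1) + ... + aN (eq. 81).
import numpy as np

def hurwitz_stable(a):
    """a = [a1, a2, ..., aN]; returns True if all roots have negative real parts."""
    N = len(a)
    c = [1.0] + list(a)                       # c[k] = a_k, with a_0 = 1
    def coef(k):
        return c[k] if 0 <= k <= N else 0.0
    for j in range(1, N + 1):                 # check det(Hj) > 0 for j = 1..N, eq. (85)
        H = np.array([[coef(2*l - m) for m in range(1, j + 1)]
                      for l in range(1, j + 1)])
        if np.linalg.det(H) <= 0:
            return False
    return True

# N = 3 examples: (lambda+1)^3 has all roots at -1 -> stable
print(hurwitz_stable([3.0, 3.0, 1.0]))        # True
print(hurwitz_stable([1.0, -1.0, 1.0]))       # False (some root has a positive real part)
```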
7.4 Cusp and other co-dimension 2 bifurcations
The cusp bifurcation (also called cusp catastrophe) is a bifurcation, observed in a 2-parameter space, at which two branches of saddle-node bifurcations meet and vanish. The cusp bifurcation is associated with the presence of bistability and with the hysteresis phenomenon.
Because the cusp bifurcation can only be seen in a 2-parameter plane, it is called a co-dimension 2 bifurcation. Another example of a co-dimension 2 bifurcation is the degenerate Hopf bifurcation, at which a super-critical Hopf bifurcation becomes sub-critical. In a way, these co-dimension 2 bifurcations show how a bifurcation changes when a second parameter is changed. Further examples of co-dimension 2 bifurcations are described in Guckenheimer & Holmes (1983) Nonlinear Oscillations, Dynamical Systems and Bifurcations of Vector Fields, Springer, and briefly reviewed in the appendix of the paper by Tyson et al, Biophys J (2006).
Figure 42: Cusp bifurcation
7.5 Poincaré-Bendixson theorem
It is difficult (usually impossible) to prove the existence of a limit cycle for a general N-variable non-linear dynamical system. In the simple case of a 2-variable system, however, the Poincaré-Bendixson theorem, based on the topology of the phase space, provides a way to determine whether a limit cycle is present. The Bendixson criterion can sometimes also be applied to rule out the existence of a limit cycle.
Poincaré-Bendixson theorem
If, for t ≥ t0 , a trajectory is bounded and does not approach any singular point (steady
state), then the trajectory is either a closed periodic orbit or approaches a closed periodic
orbit, i.e. a limit cycle, for t → ∞ (Fig. (43)).
Figure 43: Illustration of the Poincaré-Bendixson theorem
We give below an application of this theorem. Another application to a simple model
(the Schnakenberg model) can be found in Edelstein-Keshet (pp. 357-360).
Application of the Poincaré-Bendixson theorem
The Poincaré-Bendixson theorem is here illustrated on the Selkov model for glycolysis (cf Strogatz's book, p. 203):

dx/dt = ẋ = −x + ay + x²y
dy/dt = ẏ = b − ay − x²y
(86)
The procedure is in two steps. First we need to find a trapping region for the system.
Then we need to find conditions on parameters a and b under which the steady state is
unstable.
Finding a trapping region requires an analysis in the phase space. First we find the nullclines:

ẋ = 0 ⇔ y = x/(a + x²)   (87)
ẏ = 0 ⇔ y = b/(a + x²)   (88)
These nullclines are sketched in Fig. 44. They delimit various regions of the phase space
distinguished by the direction of the vectors.
Figure 44: Selkov model: nullclines
Now consider the region bounded by the dashed line shown in Fig. 45. Let us demonstrate that this is a trapping region. To this end we need to show that all the vectors on the boundary point into the box. On the vertical and horizontal axes there is no problem: the system is such that, starting from positive values of x and y, these variables cannot become negative (i.e. when x = 0, ẋ ≥ 0 and when y = 0, ẏ ≥ 0). The tricky part of the construction is the diagonal line of slope −1 extending from the point (b, b/a) to the nullcline (1).
Figure 45: Selkov model: trapping region
To get the right intuition, consider ẋ and ẏ in the limit of very large x. Then

ẋ ≈ x²y   (89)
ẏ ≈ −x²y   (90)

so,

dy/dx = ẏ/ẋ ≈ −1   (91)

along trajectories.
Hence the vector field is roughly parallel to the diagonal line. This is confirmed by a more rigorous calculation, which consists of comparing the sizes of ẋ and −ẏ:

ẋ − (−ẏ) = (−x + ay + x²y) + (b − ay − x²y) = b − x   (92)

Said otherwise,

−ẏ > ẋ if x > b   (93)

This inequality implies that the vector field points inward on the diagonal line, as illustrated in Fig. 45, because dy/dx is more negative than −1, and therefore the vectors are steeper than the diagonal line. Thus the region is a trapping region.
Figure 46: Selkov model: trapping region + unstable steady state
Now we have to calculate the steady state and determine the conditions under which it is unstable (Fig. 46). The steady state is found by solving

dx/dt = −x + ay + x²y = 0
dy/dt = b − ay − x²y = 0
(94)

We then find the steady state (i.e. the intersection between the two nullclines):

xs = b   (95)
ys = b/(a + b²)   (96)
The Jacobian matrix is

J = ( −1 + 2xy    a + x²    )
    ( −2xy       −(a + x²)  )
(97)

The determinant is ∆ = a + b² > 0 and the trace is

T = −[b⁴ + (2a − 1)b² + (a + a²)] / (a + b²)

Hence the fixed point is unstable if T > 0; the boundary T = 0 corresponds to b² = (1/2)(1 − 2a ± √(1 − 8a)). This defines a curve in the parameter space (a, b), as shown in Fig. 47.
Figure 47: Selkov model: stability diagram
Thus, for parameter values satisfying T > 0, we have guaranteed that the system has a closed orbit (a limit cycle in this case). This is confirmed by numerical simulation of the system for the case a = 0.08, b = 0.6 (see Fig. 48).
Figure 48: Selkov model: numerical analysis
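Such a numerical check can be reproduced with a few lines of code; the sketch below integrates the Selkov model (86) for a = 0.08, b = 0.6 and verifies that, after the transient, x keeps oscillating with a finite amplitude.

```python
# Minimal sketch: numerical check that the Selkov model (86) with a = 0.08, b = 0.6
# (T > 0 region) settles onto a limit cycle rather than onto a fixed point.
import numpy as np
from scipy.integrate import solve_ivp

a, b = 0.08, 0.6
rhs = lambda t, u: [-u[0] + a*u[1] + u[0]**2*u[1],
                    b - a*u[1] - u[0]**2*u[1]]

sol = solve_ivp(rhs, (0, 500), [1.0, 1.0], t_eval=np.linspace(0, 500, 50000), rtol=1e-9)
x = sol.y[0][25000:]                       # discard the transient
print("x oscillates between", x.min(), "and", x.max())   # non-zero amplitude -> closed orbit
```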
Remark: The Poincaré-Bendixson theorem is one of the central results in nonlinear dynamics. It says that the dynamical possibilities in the phase plane are very limited: if a trajectory is confined to a closed, bounded region that contains no fixed points, then the trajectory must eventually approach a closed orbit. Nothing more complicated is possible. This result depends crucially on the 2-dimensionality of the plane. In higher dimensions, the Poincaré-Bendixson theorem no longer applies. Something new can happen: trajectories may wander around forever in a bounded region without settling down to a fixed point or a closed orbit. They may converge to more complex attractors (e.g. chaos or a quasi-periodic torus).
Bendixson criterion:
Let us call dx/dt = F and dy/dt = G. If ∂F/∂x + ∂G/∂y ≠ 0 for all x and y in a region D (i.e. it does not vanish and does not change sign in D), there is no closed orbit (no limit cycle) in D.
The proof - which requires advanced mathematics - can be found in Edelstein-Keshet p.
379-380.
7.6 Characterization of complex behaviors
There exist several methods to characterize complex behaviors, in particular to differentiate between chaos and quasi-periodicity. These methods include Poincaré maps, Fourier transforms, and Lyapunov exponents.
We describe here the principle of the Poincaré map. The idea is to "cut" the attractor with a plane and report the intersections between the trajectory and the plane, i.e. the points where the trajectory crosses the plane (selecting only one crossing direction). A simple limit cycle would thus give a Poincaré section with a single point, a chaotic attractor would give a Poincaré section with an open line of points (fig. 49), and a quasi-periodic attractor would give a Poincaré section with a closed line of points (fig. 50). Note that in high dimension, the situation is often more complex...
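A minimal sketch of how a Poincaré section can be computed numerically is given below. The 3-variable system used here (the Rössler system, with classical chaotic parameter values) is only an illustration and is not one of the models discussed in this text; the section is taken on the plane y = 0, keeping only upward crossings.

```python
# Minimal sketch: numerical Poincare section of a chaotic attractor (Rossler system,
# used here purely as an example); we record the crossings of the plane y = 0 with dy/dt > 0.
import numpy as np
from scipy.integrate import solve_ivp

a, b, c = 0.2, 0.2, 5.7                      # classical chaotic parameter values
rhs = lambda t, u: [-u[1] - u[2], u[0] + a*u[1], b + u[2]*(u[0] - c)]

sol = solve_ivp(rhs, (0, 1000), [1.0, 1.0, 0.0],
                t_eval=np.linspace(0, 1000, 200000), rtol=1e-8)
t, (x, y, z) = sol.t, sol.y

section = []
for i in range(100000, len(t) - 1):          # skip the transient
    if y[i] < 0 <= y[i+1]:                   # upward crossing of the plane y = 0
        w = -y[i] / (y[i+1] - y[i])          # linear interpolation of the crossing point
        section.append((x[i] + w*(x[i+1]-x[i]), z[i] + w*(z[i+1]-z[i])))
print(len(section), "points in the Poincare section")
```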
Figure 49: Poincaré map for chaos.
Figure 50: Poincaré map for quasi-periodicity.
8 References
Enzyme kinetics theory:
• Cornish-Bowden A (1995) Fundamentals of Enzyme Kinetics, Portland Press, London.
• Engel P (1981) Enzyme Kinetics. The steady state approach, Chapman & Hall.
• Segel I.H. (1976) Biochemical Calculations: How to Solve Mathematical Problems
in General Biochemistry, Wiley.
Theory of non-linear systems:
• Bergé P, Pomeau Y & Vidal C (1984) L'Ordre dans le Chaos, Hermann, Paris.
• Nicolis G (1995) Introduction to Non Linear Science, Cambridge Univ. Press.
• Nicolis G & Prigogine I (1977) Self Organization in Non-Equilibrium Systems, Wiley,
New York.
• Strogatz S (1994) Nonlinear Dynamics and Chaos: with applications to physics,
biology, chemistry, and engineering, Westview Press.
Applications in biology:
• Edelstein-Keshet, L (2005; originally 1988) Mathematical Models in Biology, SIAM
Editions.
• Glass L & Mackey MC (1988) From Clocks to Chaos. Princeton Univ. Press.
• Goldbeter A (1996) Biochemical Oscillations and Cellular Rhythms. The Molecular
Bases of Periodic and Chaotic Behavior, Cambridge Univ. Press, Cambridge, U.K.
• Murray JD (1989) Mathematical Biology, Springer, Berlin.
• Segel LA (1984) Modeling Dynamic Phenomena in Molecular and Cellular Biology.
Cambridge Univ. Press, Cambridge, U.K.
• Thomas & D’Ari (1990) Biological Feedback, CRC Press.
• Winfree AT (1980) The Geometry of Biological Time, Springer-Verlag, New-York.