Time delays and the control of biological systems: An overview ⋆
John G. Milton ∗
∗
W. M. Keck Science Center, The Claremont Colleges, Claremont,
CA 91711 USA (e-mail: jmilton@kecksci.claremont.edu).
Abstract: This special session provides an introduction for engineers to the applications of
delay differential equations to biological control. Topics include the regulation of populations
of organisms and neurons, the stabilization of unstable states and the design of drug delivery
systems.
Keywords: Bio-control, delay, feedback
1. INTRODUCTION
One of the most important regulatory mechanisms in
biology is time-delayed feedback. The seminal work of
Lawrence Stark on the pupil light reflex sparked the
interest of biologists to use engineering techniques to
investigate the feedback control of biological systems. See
Stark (1968). However, despite over 50 years of progress, careful analyses of the properties and dynamics of feedback systems still receive scant attention from biologists.
There are two barriers to overcome in order to facilitate
the study of feedback by biologists. For many years it was
argued that the analysis of delay, or functional, differential
equations (DDE) was an advanced topic in biomathematics and hence beyond the scope of most biologists.
However, this barrier is rapidly being broken down by
the increasing number of textbooks on DDEs written for
science undergraduates. See, for example, Erneux (2009),
MacDonald (1989), Milton and Ohira (2014), Smith (2011)
and Stépán (1989). Moreover, easy-to-use software packages that numerically integrate DDEs are now available, such as XPPAUT and dde23. See Ermentrout
(2002) and Shampine and Thompson (2001). The second
barrier has been the lack of forums, such as this multidisciplinary IFAC workshop on time delay systems, that
bring together mathematical biologists and engineers so
that ideas and experiences concerning feedback control can
be exchanged in a mutually beneficial manner.
This special session provides an introduction for engineers
into three common types of control problems that arise in
biology: 1) the regulation of the dynamics of populations of
organisms (MAGPANTAY) and neurons (CAMPBELL);
2) the delay-dependent stabilization of unstable equilibria
(INSPERGER); and 3) the development of drug delivery
systems (BELAIR). Traditionally engineers have analyzed
control in the frequency domain and have focussed on the
use of time-delayed feedback to stabilize time-independent
states, e.g. equilibria. Important exceptions arise in the
setting of machine tool vibrations. See Insperger and
Stépán (2011) and Stépán (2001). On the other hand,
⋆ JM acknowledges support from the William R. Kenan, Jr. Charitable Trust.
Fig. 1. Block diagram of a time-delayed negative feedback
system. The ’-’ sign indicates that the feedback signal
acts to decrease the value of the controlled variable
and e(t) is the error signal.
mathematical biologists analyze feedback in the time domain in order to gain insights into the nature of the
dynamical phenomena which arise when equilibria become
unstable. Most often their focus is on the stable, time-dependent behaviors that can arise, including limit cycles, chaos and other complex behaviors. Consequently, mathematical biologists are often unaware of the advantages
of analyzing linear dynamical systems in the laboratory
setting using, for example, the Laplace and Fourier integral transforms, whereas engineers are less familiar with
techniques involved in the analysis of nonlinear dynamical
systems. See Milton and Ohira (2014). This review discusses three topics related to considerations of time-delays
and feedback in biology: 1) the nature of the time delays,
2) the identification of the feedback and 3) the role that
the initial function plays in shaping the dynamics of these
control systems.
2. FEEDBACK CONTROL
A schematic representation of a time-delayed feedback
control dynamical system is shown in Fig. 1. Such systems
have a looped structure in which the output is fed back
after a time delay, τ , to influence future outputs. Consequently the governing equations take the form of delay
differential equations (DDE); an example is the first-order
DDE
ẋ(t) + kx(t) = F (x(t − τ )) ,   (1)
where k is a constant, x = x(t) is the state variable, x(t−τ )
is its value at time t−τ , τ is the time delay, and ẋ := dx/dt.
Examples of biological systems modeled by (1) include the
control of the pupil light reflex, blood cell populations,
gene regulation and population growth. See Longtin and
Milton (1989), Mackey and Glass (1977) and Santillan and
Mackey (2001). Typically, F is a nonlinear function of x, for example
F (x(t − τ )) = K^n / (K^n + x^n(t − τ )) ,   (2)
where n, K are positive constants.
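As a concrete illustration, (1) with the feedback (2) can be integrated by a simple fixed-step Euler scheme that carries the solution over the interval [t − τ, t] in a history buffer. The sketch below is illustrative only; the parameter values (k = 1, K = 1, n = 2, τ = 0.1) are assumptions, not values from the text.

```python
from collections import deque

def simulate_hill_dde(k=1.0, K=1.0, n=2.0, tau=0.1, x0=0.2,
                      t_end=50.0, dt=0.001):
    """Forward-Euler integration of the first-order DDE
        x'(t) + k*x(t) = K**n / (K**n + x(t - tau)**n)
    with the constant initial function x(s) = x0 on [-tau, 0]."""
    lag = int(round(tau / dt))
    hist = deque([x0] * (lag + 1))   # hist[0] = x(t - tau), hist[-1] = x(t)
    x = x0
    for _ in range(int(t_end / dt)):
        x_delayed = hist[0]
        F = K**n / (K**n + x_delayed**n)
        x += dt * (F - k * x)
        hist.popleft()
        hist.append(x)
    return x
```

For these parameters the solution settles onto the fixed point x∗ defined by kx∗ = K^n/(K^n + x∗^n); with larger τ and steeper n the same scheme exhibits the oscillatory and complex behaviors discussed below.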
The plant in Fig. 1, or system to be controlled, corresponds
to the left-hand side of (1). The error signal, e(t), is equal
to the difference between the observed x(t) and the desired
outcome, xdes (t), namely, e(t) = x(t) − xdes (t). Typically
xdes (t) is chosen to correspond to the fixed point of (1), x∗, where x∗ is determined by setting ẋ(t) = 0 and assuming that x(t) = x(t − τ ) = x∗. In this case the determination of the stability of the feedback control in the time domain corresponds to the linear stability analysis of the fixed point. The linearized equation for the error is
ė(t) + ke(t) = kP e(t − τ )   (3)
where kP is the proportional gain and is equal to the slope of F evaluated at x∗. When F is given by (2), kP < 0; hence the terminology ‘negative feedback’.
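For (3) the stability boundary can be located explicitly: substituting λ = iω into the characteristic equation λ + k = kP e^{−λτ} gives ω = √(kP² − k²) and τ = arccos(k/kP)/ω when kP < −k < 0. A small numerical check (the values k = 1, kP = −2 are illustrative):

```python
import cmath
import math

def critical_delay(k, kP):
    """Smallest delay at which (3), e'(t) + k*e(t) = kP*e(t - tau),
    loses stability, for negative feedback with kP < -k < 0.
    Setting lambda = i*omega in  lambda + k = kP*exp(-lambda*tau)  gives
        omega = sqrt(kP**2 - k**2),  tau = acos(k/kP) / omega."""
    if not (kP < -k < 0):
        raise ValueError("requires kP < -k < 0")
    omega = math.sqrt(kP**2 - k**2)
    tau = math.acos(k / kP) / omega
    return tau, omega

# verify that lambda = i*omega is indeed a characteristic root at tau
tau, omega = critical_delay(k=1.0, kP=-2.0)
residual = 1j * omega + 1.0 - (-2.0) * cmath.exp(-1j * omega * tau)
```

For |kP| < k the fixed point is stable for every delay (delay-independent stability), one reason small-gain negative feedback loops in biology tolerate long and variable delays.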
3. TIME DELAYS
Time delays typically reflect a combination of transmission
times, namely the time taken to detect the error signal, and
processing times, the time then taken to act upon it. In a
spatially distributed feedback control system, transmission
times primarily reflect the time taken for the signal to
travel between the various components of the feedback
loop. Examples of transmission times are the circulation
time and the axonal conduction time. Alternatively, the time delay can be due to the time it takes to act on the error signal. Examples of processing times include the time it takes to synthesize a protein (≈ 1–2 min), for red blood cells to mature in the bone marrow before their release into the circulation (≈ 7 days) and, in animal population models,
gestation times (weeks to months) and the time it takes
animals to mature to reproductive age (months to years).
Often in biology the contribution of the processing times to
τ is much larger than the contribution due to transmission.
For example, in the pupil light reflex only about 20 ms of the 300 ms delay can be accounted for by axonal conduction
times. Similarly in the glucose-insulin feedback loop, the
delay is of the order of minutes compared to a circulation
time of a few seconds. See Makroglou et al. (2006).
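The decomposition of τ into transmission and processing components can be made concrete with back-of-the-envelope numbers. The path length and conduction velocity below are illustrative assumptions, chosen to reproduce the ≈ 20 ms conduction contribution quoted above:

```python
def delay_budget(total_delay_s, path_length_m, conduction_velocity_m_s):
    """Split a measured loop delay into a transmission component
    (path length / conduction velocity) and a processing remainder."""
    transmission = path_length_m / conduction_velocity_m_s
    processing = total_delay_s - transmission
    return transmission, processing

# pupil light reflex: ~0.3 s total; assume ~0.3 m of pathway at ~15 m/s
trans, proc = delay_budget(0.3, 0.3, 15.0)
```

With these assumed figures the transmission component is 0.02 s, so processing accounts for over 90% of the loop delay.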
One of the main reasons that DDEs have been so successful as models of biological control stems from the fact that the details of the mechanisms that underlie the transmission and processing times are hidden from the control process. In other words, all of this biology is replaced by a time delay which can be measured experimentally.
Fig. 2. The time delay (≈ 0.3 s) for the pupil light reflex can be measured as the time between (a) the onset of a light pulse and (b) the onset of the transient decrease in pupil area. In these measurements the feedback loop has been opened by focussing a light beam whose diameter is smaller than the smallest pupil diameter onto the center of the pupil (“Maxwellian view”). Under these conditions movements of the iris do not affect retinal illumination. See Milton and Ohira (2014).
A variety of methods have been used to measure time delays in biology.
In some cases time delays have been measured as the time
between when a stimulus is applied and when a response
is first detected (see Fig. 2). Examples of biological time
delays measured in this way include gestation and maturation times, certain neural reflexes, voluntary movement
reaction times and the times to make a protein. Although
there have been some attempts to measure the delay using cross-correlation techniques, such measurements have
been considered less reliable. See van der Kooij et al.
(2005). In the nervous system transmission delays are most
commonly estimated indirectly from measurements of the
axonal conduction velocity. Such estimations of the time
delay are limited by the fact that the distance between,
for example, two neurons is seldom known with precision.
Conduction velocities, v, can be measured using electrophysiological techniques. Practical problems arise when
the diameter of the axon becomes smaller than that of
the recording electrode. See Miller (1994). Conduction
velocities can also be estimated from the diameter, d, of the axon and its myelination: for an unmyelinated axon v ≈ √d, whereas for a myelinated axon v ≈ dg√(ln g), where g is the ratio of axon diameter to overall fiber diameter. See Waxman and Bennett (1972).
Many of the first applications of DDEs to problems related
to biological control assumed a single discrete delay; however, a few studies included a second discrete delay. See
Atay (1999) and Bélair et al. (1995). Careful experimental
observations often indicate that biological delays can be
distributed. For example, in the nervous system there is
a distribution of neuronal axon diameters implying that
there is a distribution of axonal conduction velocities,
and hence transmission delays. See MacDonald (1989) and
Miller (1994). Thus (3) becomes
ė(t) + ke(t) = k1 ∫_{0}^{∞} G(s) e(t − s) ds ,   (4)
where the kernel G(τ ) describes the distribution of delays and k1 is a constant. For special choices of
G(τ ) it has been shown that a distributed delay has a
stabilizing effect on negative feedback control in both
neural and ecological dynamical systems. See Eurich et al.
(2002), Meyer et al. (2008) and Eurich et al. (2005).
However, it must be kept in mind that G(τ ) has not been
well characterized experimentally and hence the relative
contributions of a distribution of transmission times versus
a distribution of processing times on control have not yet
been investigated.
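A common concrete choice for the kernel in (4) is the gamma density G(s) = a^p s^{p−1} e^{−as}/(p − 1)!, for which the distributed-delay term can be generated by a chain of p ordinary differential equations (the "linear chain trick"). A minimal sketch, with illustrative parameters rather than experimentally characterized ones:

```python
import math

def gamma_kernel(s, a, p):
    """Gamma kernel with mean delay p/a; integrates to 1 over [0, inf)."""
    return a**p * s**(p - 1) * math.exp(-a * s) / math.factorial(p - 1)

def chain_output(u, dt, a, p):
    """Linear chain trick: y_1' = a*(u - y_1), y_j' = a*(y_{j-1} - y_j).
    y_p(t) equals the convolution of the input u with the gamma kernel,
    so the distributed delay never has to be evaluated as an integral."""
    y = [0.0] * p
    out = []
    for u_t in u:
        prev = u_t
        for j in range(p):
            y[j] += dt * a * (prev - y[j])   # simple Euler update per stage
            prev = y[j]
        out.append(y[-1])
    return out
```

For a fixed mean delay p/a, increasing p concentrates G around the mean (recovering the discrete delay as p → ∞), while small p gives the broad distributions whose stabilizing effect is noted above.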
An important observation is that τ can be state-dependent. For example, time delays for insect, animal and even blood cell maturation can be a function of population density. See Aiello et al. (1992) and Mahaffy et al. (1998).
State-dependent delays also arise very frequently in the
nervous system. For example, in the pupil light reflex τ is
a function of the light intensity: the shortest τ is observed
for the most intense light stimulus. See Loewenfeld (1993).
A much more common manifestation of a neural state-dependent delay is the observation that central processing times increase as the complexity of a voluntary task increases. See Hick (1952). Thus (3) becomes
ė(t) + ke(t) = k2 e(t − τ (e(t))) ,   (5)
where k2 is a constant and the notation τ (e(t)) indicates
that the delay is a function of the error. State-dependent
delays arise in a number of engineering applications including milling, cooling systems, and in spatially extended
networks. See Insperger et al. (2007) and Bekiaris-Liberis
et al. (2012).
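Equation (5) can be simulated in the same way as a constant-delay DDE, except that the lookup position in the history buffer is recomputed from the current state at every step. A sketch, with an illustrative delay law τ(e) = τ0 + c|e| and gains chosen so that |k2| < k (delay-independent stability):

```python
def simulate_state_dependent(k=1.0, k2=-0.5, tau0=0.5, c=1.0, e0=1.0,
                             t_end=20.0, dt=0.001, tau_max=2.0):
    """Forward-Euler integration of  e'(t) + k*e(t) = k2*e(t - tau(e(t)))
    with tau(e) = tau0 + c*|e| (clipped at tau_max) and constant history e0."""
    n_hist = int(tau_max / dt)
    hist = [e0] * (n_hist + 1)          # hist[-1] = e(t)
    e = e0
    for _ in range(int(t_end / dt)):
        tau = min(tau0 + c * abs(e), tau_max)
        lag = int(round(tau / dt))      # state-dependent lookup depth
        e_delayed = hist[-1 - lag]
        e += dt * (k2 * e_delayed - k * e)
        hist.append(e)
        hist.pop(0)
    return e
```

As the error decays, τ shrinks toward τ0, so the effective delay is itself a transient of the dynamics; this is the qualitative feature that distinguishes (5) from the constant-delay case.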
4. IDENTIFICATION OF FEEDBACK
Fig. 3. Schematic representation of prediction schemes for
time-delayed feedback. The exact motion is given by
the solid line and the desired motion by the dashed
line. State-dependent controllers use the most reliable
available data, i.e. x, ẋ, ẍ, measured at t−τ to predict
the correction (•) made at time t. Model predictive
controllers use information obtained from the known
time history to develop an inverse model to predict
the correction made at time t.
Up to this point we have considered first-order DDEs in which the feedback is proportional to the magnitude of the state variable. However, real biological systems are typically quite sensitive to sudden changes, suggesting that F may contain terms related to the time-dependent changes in x(t − τ ). Important examples arise in the neural control of movement and balance. The human sensory nervous system can detect changes in the position, speed and even the acceleration of the body and its joints. Most commonly the governing equations take the form of the second-order DDE
ẍ(t) + k1 ẋ(t) + k2 ωn² x(t) = F (x(t − τ ), ẋ(t − τ )) ,   (6)
where ωn is the natural angular frequency. See Maurer and Peterka (2005) and Kiemel et al. (2011).
There are two classifications that are used to discuss (6). The first classification compares the highest derivative in the plant (left-hand side of (6)) to that contained in the feedback (right-hand side). When the highest derivative in the feedback is less than that in the plant, (6) is referred to as a retarded DDE. When the highest derivative in the feedback equals that in the plant, (6) is referred to as a neutral DDE. The distinction between retarded and neutral DDEs has important consequences for stability. See Milton et al. (2015). For a retarded DDE, instability can be associated with only a finite number of eigenvalues with positive real part. In contrast, for a neutral DDE instability can be associated with an infinite number of eigenvalues with positive real part. See Hale and Lunel (1993) and Kolmanovskii and Nosov (1986). Moreover, the characteristic equation for a neutral DDE can have roots which bifurcate from infinity. See Kuang (1993). Thus the loss of stability in a neutral DDE is not necessarily entirely described by the characteristic equation having a pure imaginary root (possibly infinitely many).
Perhaps the earliest application of a neutral DDE to biology is the neutral delayed logistic equation for population growth
ẋ(t) = r (1 − (x(t − τ ) + k2 ẋ(t − τ ))/K) x(t) ,
where r is a growth rate constant and K is a constant equal to the carrying capacity of the environment. This equation was developed to explain the effects on growth rate of the consumption of a resource which takes a time τ to recover. For a review see Kuang (1993). It is interesting to note that present-day mathematical biologists prefer to model this phenomenon using a DDE with a state-dependent delay. Neutral delays also arise in the setting of the control of movement when acceleration terms are included in the feedback. See Insperger et al. (2013), Sieber and Krauskopf (2005) and Lockhart and Ting (2007).
A second classification considers the nature of the feedback. In order to classify the different kinds of feedback
that arise in second-order DDEs it is useful to consider that
all forms of time-delayed feedback control can be considered as a type of predictive control in which observations
made in the past are used to predict corrective actions
taken in the present. From this perspective two general
classes of delayed feedback controllers can be distinguished
(Fig. 3). State-dependent predictive feedback uses information obtained at time t − τ to predict the corrections made
at time t. The type of information available for feedback
control depends on the existence of biological sensors to
detect the magnitude and time-dependent changes in the
controlled variable. The feedback can be proportional (P)
fP = kP e(t − τ ) ,   (7)
proportional-derivative (PD)
fPD = kP e(t − τ ) + kD ė(t − τ ) ,   (8)
or proportional-derivative-accelerative (PDA)
fPDA = kP e(t − τ ) + kD ė(t − τ ) + kA ë(t − τ ) ,   (9)
where kD and kA are, respectively, the differential and accelerative gains and f denotes the feedback F linearized about xdes.
In contrast, model-dependent predictive feedback uses the
information available up to t − τ to develop an internal
model upon which to predict corrective actions taken at
time t. The corresponding control force is
fMP = kP ep (t) ,   (10)
where ep (t) is the prediction of the actual error, which is determined based on an internal model, namely
ep (t) = exp (−kτ ) ep (t − τ ) + ∫_{−τ}^{0} exp (ks) F (s + t) ds .   (11)
Optimal control can be obtained by solving the equations
over the delay period. See Kleinman (1969) and Manitius
and Olbrot (1979). When the parameters of the internal
model precisely match those of the actual system then the
prediction gives the exact state. In this case the feedback
eliminates, or compensates, for the delay in the feedback
loop.
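The claim that an exact internal model compensates for the delay can be checked numerically for the scalar plant ė(t) + ke(t) = F(t): reconstructing e(t) from data available at t − τ via e_p(t) = e^{−kτ} e(t − τ) + ∫_{−τ}^{0} e^{ks} F(t + s) ds reproduces the actual state. A sketch with an illustrative forcing F(t) = sin t (the parameter values are assumptions):

```python
import math

def predictor_check(k=1.0, tau=0.5, t_end=5.0, dt=0.001):
    """Simulate e'(t) = -k*e(t) + F(t) with F(t) = sin(t), then reconstruct
    e(t_end) from data available at t_end - tau via the model prediction
        e_p(t) = exp(-k*tau)*e(t - tau) + int_{-tau}^{0} exp(k*s)*F(t+s) ds.
    With an exact internal model the prediction equals the actual state."""
    F = math.sin
    n = int(round(t_end / dt))
    e = [0.0]
    for i in range(n):                              # forward-Euler plant
        e.append(e[-1] + dt * (-k * e[-1] + F(i * dt)))
    lag = int(round(tau / dt))
    # Riemann sum for the prediction integral over [t - tau, t]
    integral = sum(math.exp(k * (j * dt - tau)) * F(t_end - tau + j * dt) * dt
                   for j in range(lag))
    e_pred = math.exp(-k * tau) * e[-1 - lag] + integral
    return e[-1], e_pred
```

When the model parameters (here k and F) are mismatched, the prediction error grows with τ, which is the sense in which model predictive control is limited by sensory uncertainty.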
5. THE INVERTED PENDULUM
Studies of the inverted pendulum
θ̈(t) − ωn² θ(t) = f (θ(t − τ ), θ̇(t − τ ), θ̈(t − τ )) ,   (12)
where θ is the vertical displacement angle, provide insights into how time-delayed feedback can stabilize unstable dynamical systems. Biological interest in (12) is motivated by
its applications to understanding how bipedal organisms
and robots are balanced and how a stick can be balanced
at the fingertip. It has been suggested that the control of
human stick balancing at the fingertip may involve model
predictive feedback. See Insperger and Milton (2014) and
Mehta and Schaal (2002). In this case the control force
becomes
fMP = kP θp (t) + kD θ̇p (t) ,   (13)
where θp (t) and θ̇p (t) are the predictions of the actual errors in θ(t) and θ̇(t), which are determined based on an internal model:
[θp (t), θ̇p (t)]ᵀ = exp (Aτ ) [θp (t − τ ), θ̇p (t − τ )]ᵀ + ∫_{−τ}^{0} exp (−As) F (s + t) ds ,   (14)
where A is the 2 × 2 matrix with first row (0, 1) and second row (−k2 ωn², −k1).
The stabilization of an inverted pendulum is also important as a benchmark for comparing different time-delayed
feedback control strategies: the shorter the critical length,
`crit , that can be stabilized, the more robust the control.
It has been shown that, for a given τ,
ℓ_crit^PD > ℓ_crit^PDA > ℓ_crit^MP .
See Insperger et al. (2013), Insperger and Milton (2014)
and Sieber and Krauskopf (2005). It should be noted
that an inverted pendulum cannot be stabilized using P-type delayed feedback unless damping and/or friction is
present. See Cabrera and Milton (2002) and Campbell
et al. (2008). Although this comparison suggests that
model predictive controllers are the most robust, it must
be kept in mind that the effectiveness of model predictive
control is ultimately limited by the detrimental effects of
sensory uncertainties on the specification of the internal
model. See Insperger and Milton (2014) and Mehta and
Schaal (2002).
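The stabilizing effect of delayed PD feedback on (12) is easy to reproduce numerically. The sketch below uses f = −kP θ(t − τ) − kD θ̇(t − τ) with illustrative gains and a delay well inside the stability region (ωn = 1, τ = 0.1); these values are assumptions, not fits to balancing data:

```python
def pendulum_pd(kP=2.0, kD=1.0, tau=0.1, wn=1.0, theta0=0.1,
                t_end=40.0, dt=0.001):
    """Forward-Euler integration of the delayed PD-controlled pendulum
        theta''(t) - wn**2*theta(t) = -kP*theta(t-tau) - kD*theta'(t-tau)
    with constant initial function theta = theta0, theta' = 0.
    Returns |theta(t_end)|."""
    lag = int(round(tau / dt))
    th = [theta0] * (lag + 1)   # th[-1] = theta(t)
    om = [0.0] * (lag + 1)      # om[-1] = theta'(t)
    for _ in range(int(t_end / dt)):
        acc = wn**2 * th[-1] - kP * th[-1 - lag] - kD * om[-1 - lag]
        th.append(th[-1] + dt * om[-1])
        om.append(om[-1] + dt * acc)
    return abs(th[-1])
```

Raising ωn (a shorter pendulum) or lengthening τ in this sketch eventually makes stabilization impossible for any gains, which is the basis of the ℓcrit benchmark above.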
The above considerations assume that the feedback evolves
in continuous time. However, the quantized nature of slow
voluntary movements, the intermittent character of corrective movements and the presence of central refractory
times all reinforce the concept that the nature of neural control
is likely to be quite complex. See Cabrera and Milton
(2002), Cabrera and Milton (2012), Loram et al. (2005),
van der Kamp et al. (2013) and Vallbo and Wessberg
(1993). One explanation for this complexity is the pulse-coupled nature of neural feedback control loops. See Foss
and Milton (2000) and Judd and Aihara (1993). The
pulse-coupling arises from the fact that when neurons are physically separated by more than ≈ 0.01 m, the only interactions between them depend on the effects of discrete action potentials. If the feedback has features that
resemble those of a digital controller, then we could expect
that the dynamics of the neural control of movement would
have similarities to those experienced by engineers when a
digital processor attempts to control a mechanical system.
However, the post-synaptic potentials triggered in a neuron by the arrival of action potentials are clearly continuous in
time. This means that the nature of neural feedback must
lie somewhere between the two extremes of a digital and a
continuous feedback controlled dynamical system. These
observations have sparked interest in intermittent types
of feedback control such as act-and-wait and intermittent
predictive controllers. See Gawthrop and Wang (2007),
Gawthrop et al. (2013), Insperger et al. (2007) and Milton
et al. (2015). The semi-discretization methods for DDEs discussed by INSPERGER in this session provide a very useful way to assess the digital nature of neural feedback. See also Insperger and Stépán (2011).
feedback. See also Insperger and Stépán (2011).
6. INITIAL FUNCTIONS
In order to solve a DDE it is necessary to specify an initial function for the state variable(s) and its time-dependent changes. For example, for (1) the initial function is x(s) := φ(s), s ∈ [−τ, 0], denoted φ0. There are two situations in which the initial function, φ0, plays an especially
important role. The first situation arises because there
is an initial discontinuity in the derivative of a solution
of a DDE that occurs at t0 . This initial discontinuity is
propagated as a second degree discontinuity at t0 + τ , as a
third degree discontinuity at t0 +2τ , and so on. The effects
of this propagating discontinuity depend on the highest
derivative in the plant and the feedback. For a retarded
DDE the solution is progressively smoothed as a function
of time. See El’sgol’ts and Norkin (1973). However, for a
neutral DDE the solution is not smoothed as a function of
time. See Hale and Lunel (1993).
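The progressive smoothing of a retarded DDE can be seen directly. For ẋ(t) = −x(t − τ) with a constant initial function, ẋ jumps at t = 0; one delay later ẋ is already continuous and the jump has moved into ẍ. A numerical sketch (the equation and constants are illustrative):

```python
def smoothing_demo(tau=1.0, dt=1e-4):
    """Integrate x'(t) = -x(t - tau) with constant initial function x = 1
    on [-tau, 0], then estimate one-sided derivatives at t = tau.
    The jump in x' created at t = 0 does not reappear in x' at t = tau;
    it shows up one order higher, in x''."""
    lag = int(round(tau / dt))
    x = [1.0] * (lag + 1)            # x[lag] corresponds to t = 0
    for _ in range(2 * lag):         # integrate out to t = 2*tau
        x.append(x[-1] - dt * x[-1 - lag])
    i_tau = 2 * lag                  # index of t = tau
    d = lambda i: (x[i + 1] - x[i]) / dt          # forward-difference slope
    d1_left, d1_right = d(i_tau - 2), d(i_tau + 1)
    d2_left = (d(i_tau - 2) - d(i_tau - 3)) / dt
    d2_right = (d(i_tau + 2) - d(i_tau + 1)) / dt
    return d1_left, d1_right, d2_left, d2_right
```

For a neutral DDE the analogous experiment shows the derivative jump recurring undiminished at every multiple of τ, consistent with the lack of smoothing noted above.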
The second situation occurs when a time-delayed dynamical system is placed close to the boundary (separatrix) that
separates two or more basins of attraction. In this situation
it is possible that φ0 can span the separatrix, namely,
portions of φ0 can lie on either side of the separatrix.
When this occurs there can be transient stabilizations of
unstable states and even the transient appearance of limit
cycle oscillations. See Milton et al. (2008), Pakdaman et al.
(1998) and Quan et al. (2011).
7. CONCLUSION
The continuing development of technologies to explore
biological feedback mechanisms in great detail is likely
to uncover novel and unexpected strategies for control.
Thus it can be anticipated that collaborations between
biologists, engineers and mathematicians will benefit all
involved.
ACKNOWLEDGEMENTS
The author has benefited from useful discussions with J.
Bélair, S. A. Campbell and T. Insperger.
REFERENCES
Aiello, W.G., Freedman, H.I., and Wu, J. (1992). Analysis
of a model representing stage-structured population
growth with state-dependent time delay. SIAM J. Appl.
Math., 52, 855–869.
Atay, F.M. (1999). Balancing the inverted pendulum with
position feedback. Appl. Math. Lett., 12, 51–56.
Bekiaris-Liberis, N., Jankovic, M., and Krstic, M. (2012).
Compensation of state-dependent state delay for nonlinear systems. Systems & Control Lett., 61, 849–856.
Bélair, J., Mackey, M.C., and Mahaffy, J. (1995). Age-structured and two delay models for erythropoiesis.
Math. Biosci., 128, 317–346.
Cabrera, J.L. and Milton, J.G. (2002). On-off intermittency in a human balancing task. Phys. Rev. Lett., 89,
158702.
Cabrera, J.L. and Milton, J.G. (2012). Stick balancing,
falls and Dragon Kings. Eur. Phys. J. Spec. Topics,
205, 231–241.
Campbell, S.A., Crawford, S., and Morris, K. (2008). Friction and the inverted pendulum stabilization problem.
ASME J. Dyn. Syst. Meas. Control, 130, 054502.
El’sgol’ts, L.E. and Norkin, S.B. (1973). Introduction to
the theory and application of differential equations with
deviating arguments. Academic Press, New York.
Ermentrout, B. (2002). Simulating, analyzing, and animating dynamical systems: A guide to XPPAUT for
researchers and students. SIAM, Philadelphia.
Erneux, T. (2009). Applied delay differential equations.
Springer, New York.
Eurich, C.W., Mackey, M.C., and Schwegler, H. (2002).
Recurrent inhibitory dynamics: the role of state-dependent distribution of conduction delay times. J.
Theoret. Biol., 216, 31–50.
Eurich, C.W., Thiel, A., and Fahse, L. (2005). Distributed
delays stabilize ecological feedback systems. Phys. Rev.
Lett., 94, 158104.
Foss, J. and Milton, J. (2000). Multistability in recurrent
neural loops arising from delay. J. Neurophysiol., 84,
975–985.
Gawthrop, P., Lee, K.Y., Halaki, M., and O’Dwyer, N.
(2013). Human stick balancing: an intermittent control
explanation. Biol. Cybern., 107, 637–652.
Gawthrop, P.J. and Wang, L. (2007). Intermittent model
predictive control. Proc. Inst. Mech. Eng., 221, 1007–
1018.
Hale, J.K. and Lunel, S.M.V. (1993). Introduction to
functional differential equations. Springer, New York.
Hick, W.E. (1952). On the rate of gain of information.
Quart. J. Exp. Psych., 4, 11–26.
Insperger, T. and Milton, J. (2014). Sensory uncertainty
and stick balancing at the fingertip. Biol. Cybern., 108,
85–101.
Insperger, T., Milton, J., and Stépán, G. (2013). Acceleration feedback improves balancing against reflex delay.
J. R. Soc. Interface, 10, 20120763.
Insperger, T. and Stépán, G. (2011). Semi-discretization
for time-delay systems: Stability and engineering applications. Springer, New York.
Insperger, T., Stépán, G., and Turi, J. (2007). Statedependent delay in regenerative turning processes. Nonlinear Dynam., 47, 275–283.
Judd, K.T. and Aihara, K. (1993). Pulse propagation
networks: a neural network model that uses temporal
coding by action potentials. Neural Networks, 6, 203–
215.
Kiemel, T., Zhang, Y., and Jeka, J.J. (2011). Identification
of neural feedback for upright stance in humans: Stabilization rather than sway minimization. J. Neurosci.,
31, 15144–15153.
Kleinman, D.L. (1969). Optimal control of linear systems
with time-delay and observation noise. IEEE Trans.
Automat. Contr., 14, 524–527.
Kolmanovskii, V.B. and Nosov, V.R. (1986). Stability of
functional differential equations. Academic Press, New
York.
Kuang, Y. (1993). Delay differential equations with applications to population dynamics. Academic Press, New
York.
Lockhart, D.B. and Ting, L.H. (2007). Optimal sensorimotor transformations for balance. Nature Neurosci.,
10, 1329–1336.
Loewenfeld, I.E. (1993). The pupil: Anatomy, physiology
and clinical applications. Iowa State University Press,
Ames, IA.
Longtin, A. and Milton, J.G. (1989). Modeling autonomous oscillations in the human pupil light reflex
using nonlinear delay-differential equations. Bull. Math.
Biol., 51, 605–624.
Loram, I.D., Maganaris, C.N., and Lakie, M. (2005).
Human postural sway results from frequent, ballistic
bias impulses by soleus and gastrocnemius. J. App.
Physiol., 100, 295–311.
MacDonald, N. (1989). Biological delay systems: linear
stability theory. Cambridge University Press, Cambridge.
Mackey, M.C. and Glass, L. (1977). Oscillation and chaos
in physiological control systems. Science, 197, 287–289.
Mahaffy, J.M., Bélair, J., and Mackey, M.C. (1998).
Hematopoietic model with moving boundary condition
and state dependent delay. J. Theoret. Biol., 190, 135–
146.
Makroglou, A., Li, J., and Kuang, Y. (2006). Mathematical models and software tools for the glucose-insulin
regulatory system and diabetes: an overview. Appl.
Num. Math., 56, 559–573.
Manitius, A.Z. and Olbrot, A.W. (1979). Finite spectrum
assignment problem for systems with delays. IEEE
Trans. Autom. Contr., AC-24, 541–553.
Maurer, C. and Peterka, R.J. (2005). A new interpretation
of spontaneous sway measurements based on a simple
model of human postural control. J. Neurophysiol., 93,
189–200.
Mehta, B. and Schaal, S. (2002). Forward models in
visuomotor control. J. Neurophysiol., 88, 942–953.
Meyer, U., Shao, J., Chakrabarty, S., Brandt, S.E., Luksch,
H., and Wessel, R. (2008). Distributed delays stabilize
neural feedback. Biol. Cybern., 99, 79–87.
Miller, R. (1994). What is the contribution of axonal
conduction delay to temporal structure of brain dynamics? In C. Pantev (ed.), Oscillatory event-related brain
dynamics, 53–57. Academic Press.
Milton, J., Insperger, T., and Stépán, G. (2015). Human
balance control: Dead zones, intermittency and microchaos. In T. Ohira and T. Ozawa (eds.), Mathematical
approaches to biological systems: Networks, oscillations
and collective phenomena, 1–28. Springer, New York.
Milton, J. and Ohira, T. (2014). Mathematics as a
laboratory tool: Dynamics, delays and noise. Springer,
New York.
Milton, J.G., Cabrera, J.L., and Ohira, T. (2008). Unstable dynamical systems: Delays, noise and control.
Europhys. Lett., 83, 48001.
Pakdaman, K., Grotta-Ragazzo, C., and Malta, C.P.
(1998). Transient regime dynamics of continuous-time
neural networks with delay. Phys. Rev. E, 58, 3623–
3627.
Quan, A., Osorio, I., Ohira, T., and Milton, J. (2011).
Vulnerability to paroxysmal oscillations in delay neural
networks: A basis for nocturnal frontal lobe epilepsy?
Chaos, 21, 047512.
Santillan, M. and Mackey, M.C. (2001). Dynamic regulation of the tryptophan operon: A modeling study and
comparison with experimental data. Proc. Natl. Acad.
Sci. USA, 98, 1364–1369.
Shampine, L.F. and Thompson, S. (2001). Solving DDEs
in MATLAB. Appl. Num. Math., 37, 441–458.
Sieber, J. and Krauskopf, B. (2005). Extending the permissible control loop latency for the controlled inverted
pendulum. Dyn. Sys., 20, 189–199.
Smith, H. (2011). An introduction to delay differential
equations with applications to the life sciences. Springer,
New York.
Stark, L. (1968). Neurological control systems: Studies in
bioengineering. Plenum Press, New York.
Stépán, G. (1989). Retarded dynamical systems. Longman,
Harlow.
Stépán, G. (2001). Vibrations of machines subjected to
digital force. Int. J. Solids Struct., 38, 2149–2159.
Vallbo, A.B. and Wessberg, J. (1993). Organization of
motor output in slow finger movements in man. J.
Physiol. (Lond.), 469, 673–691.
van der Kamp, C., Gawthrop, P.J., Gollee, H., and Loram,
I.D. (2013). Refractoriness in sustained visuo-manual
control: Is the refractory duration intrinsic or does it
depend on external systems properties? PLoS Comp.
Biol., 9, e1002843.
van der Kooij, H., van Asseldonk, E., and van der Helm, F.C.T. (2005). Comparison of different methods to
identify and quantify balance control. J. Biomech., 145,
175–203.
Waxman, S.G. and Bennett, M.V.L. (1972). Relative
conduction velocities of small myelinated and nonmyelinated fibres in the central nervous system. Nature
New Biology, 238, 217–219.