Multiscale analysis of hybrid processes and reduction of stochastic neuron models.

Gilles Wainrib
joint work with:
Khashayar Pakdaman and Michèle Thieullen
Institut J. Monod (CNRS, Univ. Paris 6, Paris 7) - Labo. Probabilités et Modèles Aléatoires (Univ. Paris 6, Paris 7, CNRS)
CREA Ecole polytechnique
January, 2010
Part I : Introduction
Deterministic neuron model
Hodgkin-Huxley (HH) model (Hodgkin & Huxley, J. Physiol. 1952):

Cm dV/dt = I − gL (V − VL) − gNa m^3 h (V − VNa) − gK n^4 (V − VK)
dm/dt = τm(V)^{-1} (m∞(V) − m)
dh/dt = τh(V)^{-1} (h∞(V) − h)
dn/dt = τn(V)^{-1} (n∞(V) − n)
→ Conductance-based neuron model
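As an illustration, the conductance-based system above can be integrated with a minimal forward-Euler scheme. The rate functions and parameter values below are the standard Hodgkin-Huxley (1952) ones, not taken from these slides, so treat this as a sketch under those assumptions:

```python
import numpy as np

# Standard HH rate functions (assumed 1952 parametrization, rest near -65 mV),
# with guards at the removable singularities of a_m and a_n.
def a_m(V):
    x = V + 40.0
    return 1.0 if x == 0 else 0.1 * x / (1.0 - np.exp(-x / 10.0))
def b_m(V): return 4.0 * np.exp(-(V + 65.0) / 18.0)
def a_h(V): return 0.07 * np.exp(-(V + 65.0) / 20.0)
def b_h(V): return 1.0 / (1.0 + np.exp(-(V + 35.0) / 10.0))
def a_n(V):
    x = V + 55.0
    return 0.1 if x == 0 else 0.01 * x / (1.0 - np.exp(-x / 10.0))
def b_n(V): return 0.125 * np.exp(-(V + 65.0) / 80.0)

# Conductances (mS/cm^2), reversal potentials (mV), capacitance (uF/cm^2)
Cm, gL, gNa, gK = 1.0, 0.3, 120.0, 36.0
VL, VNa, VK = -54.4, 50.0, -77.0

def simulate(I=10.0, T=50.0, dt=0.01):
    """Forward-Euler integration of the 4D HH system; returns the V trace."""
    V, m, h, n = -65.0, 0.05, 0.6, 0.32
    trace = np.empty(int(T / dt))
    for k in range(trace.size):
        dV = (I - gL*(V - VL) - gNa*m**3*h*(V - VNa) - gK*n**4*(V - VK)) / Cm
        dm = a_m(V)*(1 - m) - b_m(V)*m   # = tau_m(V)^{-1} (m_inf(V) - m)
        dh = a_h(V)*(1 - h) - b_h(V)*h
        dn = a_n(V)*(1 - n) - b_n(V)*n
        V, m, h, n = V + dt*dV, m + dt*dm, h + dt*dh, n + dt*dn
        trace[k] = V
    return trace
```

With a suprathreshold current such as I = 10, the trace shows repetitive spiking.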
Time-scale separation and reduction
Sodium activation dynamics are faster than the other variables: τm → 0, hence m = m∞(V).
Three-dimensional reduced system:

Cm dV/dt = I − gL (V − VL) − gNa m∞(V)^3 h (V − VNa) − gK n^4 (V − VK)
dh/dt = τh(V)^{-1} (h∞(V) − h)
dn/dt = τn(V)^{-1} (n∞(V) − n)
Reduction of neuron models : key step in theoretical (singular perturbations) and
numerical analysis
Rinzel 1985, Kepler et al. 1992, Meunier 1992, Suckley et al. 2003, Rubin et al. 2007, ...
Modelling neurons with stochastic ion channels
Single ion channel stochasticity:
• Macromolecular devices: they open and close through voltage-induced conformational changes (e.g. the potassium channel)
• Stochasticity due to thermal noise
Channel noise: finite-size effects responsible for intrinsic variability and noise-induced phenomena (spontaneous activity, signal detection enhancement, ...)
Modelling neurons with stochastic ion channels
Deterministic model X = (V, u)

dV/dt = F(V, u)
du/dt = (1 − u) α(V) − u β(V) = τu(V)^{-1} (u∞(V) − u)
Modelling neurons with stochastic ion channels
Stochastic model XN = (VN, uN)
• Single ion channel i ∈ {1, ..., N} with voltage-dependent transition rates: independent jump Markov process ci(t)
• Proportion of open ion channels (empirical measure):
uN(t) = (1/N) Σ_{i=1}^N ci(t)
• Between the jumps, voltage dynamics:
dVN/dt = F(VN, uN)
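A direct way to see this hybrid dynamics is a fixed-step simulation where, on each small step, every closed channel opens with probability α(V)dt and every open one closes with probability β(V)dt, while V follows the deterministic flow. The kinetics α, β and the drift F below are placeholders chosen for illustration, not the talk's model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative two-state kinetics and voltage drift; alpha, beta, F and all
# parameter values are placeholders, not the transition rates of the talk.
def alpha(V): return np.exp(0.05 * V)    # closed -> open rate
def beta(V):  return np.exp(-0.05 * V)   # open -> closed rate
def F(V, u):  return -V + 20.0 * u       # dV/dt between jumps

def simulate(N=100, T=10.0, dt=1e-3, V0=0.0):
    """Fixed-step approximation of the hybrid (V_N, u_N) dynamics."""
    V, n_open = V0, N // 2
    vs = np.empty(int(T / dt))
    us = np.empty(int(T / dt))
    for k in range(vs.size):
        # Each closed channel opens w.p. alpha(V)dt; each open one closes w.p. beta(V)dt
        n_open += rng.binomial(N - n_open, min(1.0, alpha(V) * dt))
        n_open -= rng.binomial(n_open, min(1.0, beta(V) * dt))
        u = n_open / N                    # empirical measure u_N(t)
        V += dt * F(V, u)                 # deterministic flow between jumps
        vs[k], us[k] = V, u
    return vs, us
```

Exact jump times could instead be drawn Gillespie-style; the fixed-step version keeps the sketch short.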
Modelling neurons with stochastic ion channels
• Modelling framework:
Neuron ⇐⇒ population of globally coupled independent ion channels
• Mathematical framework:
Piecewise-deterministic Markov process at the fluid limit
(Kurtz, 1971; Davis, 1984)
Limit Theorems : Law of large numbers
Theorem. When N → ∞, XN converges to X in probability over finite time intervals [0, T].
For ∆ > 0, define
PN(T, ∆) := P[ sup_{t ∈ [0,T]} |XN(t) − X(t)|² > ∆ ]
Then
lim_{N→∞} PN(T, ∆) = 0
More precisely, there exist constants B, C > 0 such that:
lim sup_{N→∞} (1/N) log PN(T, ∆) ≤ − ∆ e^{−BT} / (C T²)
Pakdaman, Thieullen, W., "Fluid limit theorems for stochastic hybrid systems with application to neuron models" (2009), arXiv:1001.2474
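The law of large numbers can be checked numerically in the simplest setting: N independent two-state channels with constant rates (voltage coupling dropped), whose empirical proportion uN should track the deterministic relaxation u(t) = u∞ + (u0 − u∞) e^{−t/τ}. All parameter values below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

# Constant opening/closing rates (illustrative values, voltage coupling dropped)
a, b = 1.0, 2.0
u_inf, tau = a / (a + b), 1.0 / (a + b)

def u_det(t, u0=0.0):
    """Deterministic limit of du/dt = (1 - u) a - u b."""
    return u_inf + (u0 - u_inf) * np.exp(-t / tau)

def sup_error(N, T=5.0, dt=1e-3, u0=0.0):
    """Sup-norm distance between the empirical proportion and its fluid limit."""
    n_open, t, err = int(N * u0), 0.0, 0.0
    for _ in range(int(T / dt)):
        n_open += rng.binomial(N - n_open, a * dt) - rng.binomial(n_open, b * dt)
        t += dt
        err = max(err, abs(n_open / N - u_det(t, u0)))
    return err

# Mean sup-norm error should shrink roughly like 1/sqrt(N)
e_small = np.mean([sup_error(50) for _ in range(20)])
e_large = np.mean([sup_error(5000) for _ in range(20)])
```

The factor-100 increase in N should reduce the mean sup-norm error by roughly a factor of 10, consistent with the fluctuation scaling behind the theorem.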
Limit Theorems : Central limit
Theorem. Let
RN(t) := √N ( XN(t) − ∫_0^t F(XN(s)) ds )
When N → ∞, RN converges in law to a diffusion process
R(t) = ∫_0^t Σ(X(s)) dWs
Langevin approximation X̃N = (ṼN(t), ũN(t)):
dṼN(t) = F(ṼN(t), ũN(t)) dt
dũN(t) = b(ṼN(t), ũN(t)) dt + (1/√N) Σ(ṼN(t), ũN(t)) dWt
Further developments: strong approximation (pathwise CLT), Markov vs. Langevin, large deviations
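An Euler-Maruyama discretization of the Langevin approximation is straightforward. For two-state channels the drift is b(V, u) = (1 − u)α(V) − uβ(V), and the natural diffusion coefficient is Σ(V, u) = sqrt((1 − u)α(V) + uβ(V)); the specific rate functions and drift F below are illustrative assumptions, not the talk's model:

```python
import numpy as np

rng = np.random.default_rng(2)

# Illustrative two-state kinetics and voltage drift (placeholder choices).
def alpha(V): return 1.0 + 0.1 * np.tanh(V)
def beta(V):  return 1.0 - 0.1 * np.tanh(V)
def F(V, u):  return -V + 10.0 * (u - 0.5)

def euler_maruyama(N=1000, T=10.0, dt=1e-3, V0=0.0, u0=0.5):
    """One Euler-Maruyama path of the Langevin approximation (V~N, u~N)."""
    V, u = V0, u0
    for _ in range(int(T / dt)):
        drift = (1 - u) * alpha(V) - u * beta(V)
        sigma = np.sqrt(max((1 - u) * alpha(V) + u * beta(V), 0.0))
        V += dt * F(V, u)
        u += dt * drift + np.sqrt(dt / N) * sigma * rng.standard_normal()
        u = min(max(u, 0.0), 1.0)   # clip: the Gaussian approximation can leave [0, 1]
    return V, u

V, u = euler_maruyama()
```

The clipping step is one visible way the Langevin scheme differs from the Markov jump model, whose empirical measure stays in [0, 1] by construction.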
Stochastic reduction ?
Part II : Mathematical analysis
Singular perturbations for jump Markov processes
Figure: Multiscale four-state model. Horizontal transitions are fast, whereas vertical transitions are
slow.
Singular perturbations for jump Markov processes : general setting
Yin, Zhang, "Continuous-time Markov Chains and Applications: a singular perturbation approach", 1998
Assumption. There exist n subsets of fast transitions:
E = E1 ∪ E2 ∪ ... ∪ En
• if i, j ∈ Ek, then αi,j is of order O(1/ε),
• otherwise, if i ∈ Ek and j ∈ El with k ≠ l, then αi,j is of order O(1).
Singular perturbations for jump Markov processes : general setting
Constructing a reduced process:
• quasi-stationary distributions (ρi^k)_{i ∈ Ek} within the fast subsets Ek, for k ∈ {1, ..., n}.
• aggregated process (X̄) on the state space Ē = {1, ..., n}, with transition rates:
ᾱk,l = Σ_{i ∈ Ek} Σ_{j ∈ El} ρi^k αi,j   for k, l ∈ Ē
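A tiny numerical instance of the aggregation formula, for a four-state chain with two fast blocks {x1, x2} and {x3, x4}; all rate values below are made up for illustration:

```python
import numpy as np

# Four-state example: states 0,1 form block E1, states 2,3 form block E2.
# Fast (within-block) and slow (between-block) rates are illustrative.
fast = {(0, 1): 3.0, (1, 0): 1.0, (2, 3): 2.0, (3, 2): 2.0}
slow = {(0, 2): 0.5, (1, 3): 1.5, (2, 0): 1.0, (3, 1): 1.0}
blocks = [[0, 1], [2, 3]]

def quasi_stationary(block):
    """Stationary law of the two-state fast chain i <-> j inside one block."""
    i, j = block
    a, b = fast[(i, j)], fast[(j, i)]
    return {i: b / (a + b), j: a / (a + b)}

def aggregated_rate(k, l):
    """alpha_bar_{k,l} = sum over i in Ek, j in El of rho_i^k * alpha_{i,j}."""
    rho = quasi_stationary(blocks[k])
    return sum(rho[i] * slow.get((i, j), 0.0)
               for i in blocks[k] for j in blocks[l])

A = np.array([[0.0, aggregated_rate(0, 1)],
              [aggregated_rate(1, 0), 0.0]])
```

Here the quasi-stationary law of block E1 is (1/4, 3/4), so the aggregated rate out of it weights the two slow exits accordingly.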
Singular perturbations for jump Markov processes : first-order
Theorem
• all-fast case: For all t > 0, the probability Pi(t) = P[X^ε_t = xi] converges as ε → 0 to the stationary distribution ρi, for all i ∈ E.
• multiscale case: As ε → 0, the process (X^ε) is close to the reduced process (X̄). More precisely:
1. E[( ∫_0^T ( 1_{X^ε(t) = xi^k} − ρi^k 1_{X̄(t) = k} ) Φ(xi^k) dt )²] = O(ε), for any function Φ : E → R, with k ∈ {1, ..., n} and i ∈ Ek.
2. The aggregated process X̄^ε converges in law to X̄.
Singular perturbations for jump Markov processes : second-order
Rescaled process (componentwise, for i ∈ E):
n_i^ε(t) = (1/√ε) ∫_0^t ( 1_{X^ε(s) = xi} − ρi ) Φ(xi, s) ds
Theorem. The rescaled process n^ε(t) converges in law to the switching diffusion
n(t) = ∫_0^t σ(s) dWs
where W is a standard n-dimensional Brownian motion. The diffusion matrix A = σ(s) σ'(s) is given by:
A_ij(s) = Φ(xi, s) Φ(xj, s) [ ρi R(i, j) + ρj R(j, i) ]
where
R(i, j) = ∫_0^∞ ( P(i, j, t) − ρj ) dt
Multiscale analysis of stochastic neuron models
Full model: XN^ε = (VN^ε, uN^ε) with
• uN^ε empirical measure for a population of multiscale jump processes
• V̇N^ε = F(VN^ε, uN^ε)
Requires two extensions:
1. Population of jump processes
2. Piecewise deterministic Markov process
Stationary distribution for populations of multiscale jump processes
Stationary distributions for the empirical measure → multinomial distributions
Ex: two-state model
ρ^(N)(k/N) = C_N^k u∞^k (1 − u∞)^(N−k)
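A quick sanity check of this binomial stationary law: it sums to one and its mean is u∞ (exactly, since the mean of Bin(N, u∞)/N is u∞). A minimal sketch with illustrative values:

```python
from math import comb

def rho_N(N, u_inf):
    """Binomial stationary law of the open proportion k/N for a two-state channel."""
    return [comb(N, k) * u_inf**k * (1 - u_inf)**(N - k) for k in range(N + 1)]

N, u_inf = 40, 0.3
p = rho_N(N, u_inf)
total = sum(p)                                   # should be 1
mean = sum((k / N) * p[k] for k in range(N + 1)) # should be u_inf
```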
Averaging method for PDMP
Ex (all-fast):
VN^ε(t) = ∫_0^t F(VN^ε(s), uN^ε(s)) ds, with uN^ε fast
→ F̄N(V̄N) := ∫ F(V̄N, u) ρ_stat^(N)(du)   (ergodic convergence)
• Theorem (general case). When ε → 0, the process (VN^ε, uN^ε) converges in law towards a coarse-grained hybrid process:
dV̄N/dt = F̄N(V̄N, ūN)
with ūN a reduced jump process with averaged transition rates, functions of V̄N.
Faggionato, Gabrielli, Ribezzi-Crivellari 2009
• Central limit theorem (ongoing work) → diffusion approximation:
dṼN = F̄N(ṼN, ũN) dt + √ε σN(ṼN, ũN) dWt
Part III : Application to Hodgkin-Huxley model
Application Hodgkin-Huxley model : reduced model (two-state)
Averaging "m^3" with respect to the binomial stationary distribution
ρ_m^(N)(k/N) = C_N^k m∞^k (1 − m∞)^(N−k) yields:
Cm dV/dt = I − gL (V − VL) − gNa m∞(V)^3 h (V − VNa) − gK n^4 (V − VK) − gNa h (V − VNa) KN(V)   (supplementary terms)
with
KN(V) = (3/N) m∞(V)^2 (1 − m∞(V)) + (1/N^2) m∞(V) (1 − m∞(V)) (1 − 2 m∞(V))
Important remark: the noise strength η := 1/N appears as a bifurcation parameter.
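KN(V) can be checked against the exact third moment of the binomial law: for X ~ Bin(N, p), E[(X/N)^3] = p^3 + (3/N) p^2 (1 − p) + (1/N^2) p (1 − p)(1 − 2p), so the supplementary term is exactly the finite-size correction to m∞^3. A short verification (the KN formula is as reconstructed here, not copied verbatim from the slide):

```python
from math import comb

def third_moment(N, p):
    """Exact E[(X/N)^3] for X ~ Binomial(N, p)."""
    return sum((k / N) ** 3 * comb(N, k) * p**k * (1 - p) ** (N - k)
               for k in range(N + 1))

def K_N(N, p):
    """Finite-size correction: E[(X/N)^3] = p^3 + K_N."""
    return 3.0 / N * p**2 * (1 - p) + 1.0 / N**2 * p * (1 - p) * (1 - 2 * p)

N, p = 50, 0.4
correction = third_moment(N, p) - p**3   # should equal K_N(N, p)
```

As N → ∞ the correction vanishes and the deterministic reduced HH model is recovered.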
Application Hodgkin-Huxley model : bifurcations of the reduced model
Figure: Bifurcation diagram of system (HH_TS^N) with η as parameter, for I = 0.
Application Hodgkin-Huxley model : bifurcations of the reduced model
Figure: Two-parameter bifurcation diagram of system (HH_TS^N) with I and η as parameters.
Application Hodgkin-Huxley model : bifurcations of the reduced model
1. Below the double-cycle curve lies a region with a unique stable equilibrium point: the ISI distribution should be approximately exponential, since a spike corresponds to a threshold crossing.
2. Between the double-cycle and Hopf curves lies a bistable region: the ISI distribution should be bimodal, with one peak corresponding to escape from the stable equilibrium and the other to fluctuations around the limit cycle.
3. Above the Hopf curve lies a region with a stable limit cycle and an unstable equilibrium point: the ISI distribution should be centered around the period of the limit cycle.
Application Hodgkin-Huxley model : stochastic simulations
Figure: A. With N = 30 (zone 3): noisy periodic trajectory. B. With N = 70 (zone 2): bimodality of the ISI distribution. C. With N = 120: ISI statistics are closer to Poissonian behavior.
Application Hodgkin-Huxley model : stochastic simulations
Figure: Interspike Interval (ISI) distributions
Conclusions and perspectives
• Systematic method for reducing a large class of stochastic neuron models
• Based on recent mathematical developments of the averaging method
• Illustration on HH : enables a bifurcation analysis with noise strength as
parameter
• Other applications in neuroscience (synaptic models, networks, biochemical
reactions)
• Open mathematical questions (link with stochastic bifurcations, scaling in the double limit N → ∞, ε → 0)
Singular perturbations for jump Markov processes : heuristics
Law evolution:
dP^ε/dt = ( Qs(t) + (1/ε) Qf(t) ) P^ε, with initial condition P^ε(0) = p^0.
We look for an expansion of P^ε(t) of the form:
P_r^ε(t) = Σ_{i=0}^r ε^i φi(t) + Σ_{i=0}^r ε^i ψi(t/ε)
Singular perturbations for jump Markov processes : heuristics
Identifying powers of ε:
Qf(t) φ0(t) = 0
Qf(t) φ1(t) = dφ0(t)/dt − φ0(t) Qs(t)
...
Qf(t) φi(t) = dφ_{i−1}(t)/dt − φ_{i−1}(t) Qs(t)
Error control:
1. |P^ε(t) − P_r^ε(t)| = O(ε^{r+1}) uniformly in t ∈ [0, T]
2. there exist K, k0 > 0 such that |ψi(t)| ≤ K e^{−k0 t}
Multiscale analysis of stochastic neuron models : summary
Second-order approximation for PDMP
Central limit theorem:
(1/√ε) ( V^ε_t − ∫_0^t F(V^ε_s) ds ) → ∫_0^t σF(V̄s) dWs