Séminaire de Probabilités et Théorie Ergodique, LMPT Tours

Spike distributions for stochastic neuron models
and continuous-space Markov chains

Nils Berglund
MAPMO, Université d'Orléans

Tours, 5 December 2014

With Barbara Gentz (Bielefeld), Christian Kuehn (Vienna) and Damien Landon (Dijon)

nils.berglund@univ-orleans.fr
http://www.univ-orleans.fr/mapmo/membres/berglund/
Neurons and action potentials

Action potential [Dickson 00]

- Neurons communicate via patterns of spikes in action potentials
- Question: effect of noise on interspike interval statistics?
- Poisson hypothesis: exponential distribution ⇒ Markov property

Stochastic neuron models and continuous-space Markov chains, 5 December 2014
Conduction-based models for the action potential

Hodgkin–Huxley model (1952):

C dV/dt = −g_K n⁴ (V − V_K) − g_Na m³ h (V − V_Na) − g_L (V − V_L) + I
dn/dt = α_n(V)(1 − n) − β_n(V) n
dm/dt = α_m(V)(1 − m) − β_m(V) m
dh/dt = α_h(V)(1 − h) − β_h(V) h

FitzHugh–Nagumo model (1962):

(C/g) dV/dt = V − V³ + w
τ dw/dt = α − βV − γw

Morris–Lecar model (1982): 2d, more realistic equation for dV/dt

Koper model (1995): 3d, generalizes FitzHugh–Nagumo
Deterministic FitzHugh–Nagumo (FHN) model

Consider the FHN equations in the form

ε ẋ = x − x³ + y
ẏ = a − x − b y

- x ∝ membrane potential of the neuron
- y ∝ proportion of open ion channels (recovery variable)
- ε ≪ 1 ⇒ fast–slow system
- b = 0 in the following for simplicity (but the results are more general)

Stationary point: P = (a, a³ − a)
The linearisation has eigenvalues (−δ ± √(δ² − ε))/ε, where δ = (3a² − 1)/2

- δ > 0: stable node (δ > √ε) or focus (0 < δ < √ε)
- δ = 0: singular Hopf bifurcation [Erneux & Mandel '86]
- δ < 0: unstable focus (−√ε < δ < 0) or node (δ < −√ε)
Deterministic FitzHugh–Nagumo (FHN) model

δ > 0:
- P is asymptotically stable
- the system is excitable
- one can define a separatrix

δ < 0:
- P is unstable
- there exists an asymptotically stable periodic orbit

Sensitive dependence on δ: canard (duck) phenomenon
[Callot, Diener, Diener '78, Benoît '81, ...]
Stochastic FHN equation

dx_t = (1/ε)[x_t − x_t³ + y_t] dt + (σ₁/√ε) dW_t⁽¹⁾
dy_t = [a − x_t − b y_t] dt + σ₂ dW_t⁽²⁾

- Again b = 0 for simplicity in this talk
- W_t⁽¹⁾, W_t⁽²⁾: independent Wiener processes (white noise)
- 0 < σ₁, σ₂ ≪ 1, σ = √(σ₁² + σ₂²)

[Figure: sample path for ε = 0.1, δ = 0.02, σ₁ = σ₂ = 0.03]
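Sample paths of this system are straightforward to generate with an Euler–Maruyama scheme. The sketch below uses the figure's parameters ε = 0.1, δ = 0.02, σ₁ = σ₂ = 0.03 (with b = 0, and a recovered from δ = (3a² − 1)/2); step size, horizon and starting point are illustrative choices, not from the talk.

```python
import numpy as np

def simulate_fhn(eps=0.1, delta=0.02, sigma1=0.03, sigma2=0.03,
                 dt=1e-4, n_steps=50_000, seed=0):
    """Euler-Maruyama discretisation of the stochastic FHN system with b = 0.
    The parameter a is recovered from delta via delta = (3a^2 - 1)/2."""
    a = np.sqrt((1 + 2 * delta) / 3)
    rng = np.random.default_rng(seed)
    x, y = a, a**3 - a                      # start at the stationary point P
    xs = np.empty(n_steps)
    sq_dt = np.sqrt(dt)
    for n in range(n_steps):
        dW1 = sq_dt * rng.normal()
        dW2 = sq_dt * rng.normal()
        # simultaneous update of the fast and slow variables
        x, y = (x + (x - x**3 + y) / eps * dt + sigma1 / np.sqrt(eps) * dW1,
                y + (a - x) * dt + sigma2 * dW2)
        xs[n] = x
    return xs

path = simulate_fhn()
print(path.shape, np.isfinite(path).all())
```

Plotting `path` against time reproduces, qualitatively, the noisy spiking behaviour shown on the slide.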
Mixed-mode oscillations (MMOs)

[Figure: time series t ↦ −x_t for ε = 0.01, δ = 3·10⁻³ and σ = 1.46·10⁻⁴, ..., 3.65·10⁻⁴]
Random Poincaré map

[Figure: nullclines and separatrix in the (x, y) plane, with domain D, stationary point P, section Σ and successive intersections Y₀, Y₁]

Y₀, Y₁, ...: substochastic Markov chain describing the process killed on ∂D
Number of small oscillations: N = inf{n ≥ 1 : Y_n ∉ Σ}
Law of N?
Random Poincaré maps

In appropriate coordinates,

dφ_t = f(φ_t, x_t) dt + σ F(φ_t, x_t) dW_t    φ ∈ ℝ (or ℝ/ℤ)
dx_t = g(φ_t, x_t) dt + σ G(φ_t, x_t) dW_t    x ∈ E ⊂ Σ

- all functions periodic in φ (say of period 1)
- f ≥ c > 0 and σ small ⇒ φ_t is likely to increase
- the process may be killed when x leaves E

[Figure: path x_t crossing the sections φ = 1, 2, 3, 4 at X₁ = x_{τ₁}, X₂, X₃, starting from X₀ ∈ E]

X₀, X₁, ... form a (substochastic) Markov chain
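A toy instance of such a random Poincaré map can be simulated directly: record x each time φ crosses an integer level, and kill the chain when x leaves E. The drift and diffusion below (constant f ≡ 1, mean-reverting g) are illustrative stand-ins for f, F, g, G, not the coefficients of any neuron model.

```python
import numpy as np

def poincare_chain(sigma=0.2, dt=1e-3, n_max=30, x_max=1.0, seed=1):
    """Record X_n = value of x at the n-th integer crossing of phi.
    The chain is substochastic: it is killed when x leaves E = (-x_max, x_max)."""
    rng = np.random.default_rng(seed)
    phi, x = 0.0, 0.0
    X = [x]
    next_level = 1.0
    sq_dt = np.sqrt(dt)
    while len(X) <= n_max:
        phi += dt + sigma * sq_dt * rng.normal()      # f = 1 > 0: phi increases
        x += -x * dt + sigma * sq_dt * rng.normal()   # mean-reverting stand-in for g
        if abs(x) >= x_max:            # killed on leaving E
            break
        if phi >= next_level:          # crossing phi = n: record X_n
            X.append(x)
            next_level += 1.0
    return np.array(X)

X = poincare_chain()
print(len(X), np.abs(X).max())
```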
Harmonic measure

[Figure: domain D = (−M, 1) × E, with a path x_t from X₀ to X₁ = x_τ on the boundary φ = 1]

τ: first-exit time of z_t = (φ_t, x_t) from D = (−M, 1) × E
For A ⊂ ∂D: μ^z(A) = P^z{z_τ ∈ A} is the harmonic measure (w.r.t. the generator L)
[Ben Arous, Kusuoka, Stroock '84]: under a hypoellipticity condition,
μ^z admits a (smooth) density h(z, y) w.r.t. arclength on ∂D
Remark: L_z h(z, y) = 0 (the kernel is harmonic)

For a Borel set B ⊂ E:
P^{X₀}{X₁ ∈ B} = K(X₀, B) := ∫_B K(X₀, dy)
where K(x, dy) = h((0, x), (1, y)) dy =: k(x, y) dy
Fredholm theory

Consider the integral operator K acting
- on L^∞ via f ↦ (Kf)(x) = ∫_E k(x, y) f(y) dy = E^x[f(X₁)]
- on L¹ via m ↦ (mK)(y) = ∫_E m(x) k(x, y) dx = P^μ{X₁ ∈ dy}

Thm [Fredholm 1903]:
If k ∈ L², then K has eigenvalues λ_n of finite multiplicity.
Right/left eigenfunctions: K h_n = λ_n h_n, h_n* K = λ_n h_n*, forming a complete orthonormal basis

Thm [Perron, Frobenius, Jentzsch 1912, Krein–Rutman '50, Birkhoff '57]:
The principal eigenvalue λ₀ is real and simple, |λ_n| < λ₀ for all n ≥ 1, and h₀, h₀* > 0

Spectral decomposition: k^n(x, y) = λ₀^n h₀(x) h₀*(y) + λ₁^n h₁(x) h₁*(y) + ...
⇒ P^x{X_n ∈ dy | X_n ∈ E} = π₀(dy) + O((|λ₁|/λ₀)^n)
where π₀ = h₀*/∫_E h₀* is the quasistationary distribution (QSD)
[Yaglom '47, Bartlett '57, Vere-Jones '62, ...]
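These spectral facts are easy to explore numerically once the kernel k is replaced by a small substochastic matrix. The 5-state kernel below is a random toy example, not the FHN kernel: its rows sum to 0.9, so 10% of the mass is killed at each step.

```python
import numpy as np

rng = np.random.default_rng(0)
K = rng.uniform(0.1, 1.0, size=(5, 5))
K *= 0.9 / K.sum(axis=1, keepdims=True)   # substochastic: every row sums to 0.9

# Principal eigenvalue lambda_0 (real, simple) and left eigenfunction h_0^*
lam0 = np.max(np.abs(np.linalg.eigvals(K)))
evalsT, left = np.linalg.eig(K.T)
h0_star = np.abs(left[:, np.argmax(np.abs(evalsT))].real)
pi0 = h0_star / h0_star.sum()             # quasistationary distribution

# Conditioned on survival, the law of X_n approaches pi0 at rate (|lambda_1|/lambda_0)^n
mu = np.full(5, 0.2)
for _ in range(50):
    mu = mu @ K
mu_cond = mu / mu.sum()
print(lam0, np.max(np.abs(mu_cond - pi0)))
```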
Consequence for the FitzHugh–Nagumo model

Theorem 1 [B & Landon, Nonlinearity 2012]
N is asymptotically geometric: lim_{n→∞} P{N = n+1 | N > n} = 1 − λ₀,
where λ₀ ∈ (0, 1) if σ > 0 is the principal eigenvalue of the chain.

Proof:
- Let x* = argmax h₀. Then
  λ₀ = ∫_E k(x*, y) h₀(y)/h₀(x*) dy ≤ K(x*, E) < 1
  by ellipticity (k is bounded below).
- P^{μ₀}{N > n} = P^{μ₀}{X_n ∈ E} = ∫_E μ₀(dx) K^n(x, E)
  = λ₀^n ⟨μ₀, h₀⟩ ‖h₀*‖₁ [1 + O((|λ₁|/λ₀)^n)]
- P^{μ₀}{N = n+1} = ∫_E ∫_E μ₀(dx) K^n(x, dy) [1 − K(y, E)]
  = λ₀^n (1 − λ₀) ⟨μ₀, h₀⟩ ‖h₀*‖₁ [1 + O((|λ₁|/λ₀)^n)]
- The existence of a spectral gap follows from a positivity condition [Birkhoff '57]
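The mechanism of Theorem 1 can be verified directly on a matrix toy model: for a substochastic kernel, the hazard rate P{N = n+1 | N > n} of the killing time N converges to 1 − λ₀. The 5-state kernel is again a random illustrative example.

```python
import numpy as np

rng = np.random.default_rng(1)
K = rng.uniform(0.1, 1.0, size=(5, 5))
K *= 0.9 / K.sum(axis=1, keepdims=True)   # substochastic toy kernel
lam0 = np.max(np.abs(np.linalg.eigvals(K)))

mu = np.full(5, 0.2)                      # initial distribution mu_0
surv = [1.0]                              # surv[n] = P{N > n} = P{X_n in E}
for n in range(60):
    mu = mu @ K
    surv.append(mu.sum())

hazard = 1.0 - surv[-1] / surv[-2]        # P{N = n+1 | N > n} for n = 59
print(hazard, 1.0 - lam0)                 # the two values agree
```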
Histograms of the distribution of N (1000 spikes)

[Figure: four histograms of N for σ = ε = 10⁻⁴ and δ = 1.3·10⁻³, 6·10⁻⁴, 2·10⁻⁴, −8·10⁻⁴; the range of observed values of N shrinks as δ decreases, from roughly 0–4000 for δ = 1.3·10⁻³ down to 0–8 for δ = −8·10⁻⁴]
Weak-noise regime

Theorem 2 [B & Landon, Nonlinearity 2012]
Assume ε and δ/√ε are sufficiently small.
There exists κ > 0 such that for σ² ≤ (ε^{1/4} δ)²/log(√ε/δ):
- Principal eigenvalue: 1 − λ₀ ≤ exp{−κ (ε^{1/4} δ)²/σ²}
- Expected number of small oscillations:
  E^{μ₀}[N] ≥ C(μ₀) exp{κ (ε^{1/4} δ)²/σ²}
where C(μ₀) is the probability of starting on Σ above the separatrix.

Proof:
- Construct A ⊂ Σ such that K(x, A) is exponentially close to 1 for all x ∈ A
- λ₀ ∫_A h₀*(y) dy = ∫_E h₀*(x) K(x, A) dx ≥ inf_{x∈A} K(x, A) ∫_A h₀*(y) dy
Dynamics near the separatrix

Change of variables:
- translate to the Hopf bifurcation point
- scale space and time
- straighten the nullcline ẋ = 0
⇒ variables (ξ, z), in which the nullcline is {z = 1/2}

dξ_t = (1/2 − z_t − (√ε/3) ξ_t³) dt + σ̃₁ dW_t⁽¹⁾
dz_t = (μ̃ + 2 ξ_t z_t + (2√ε/3) ξ_t⁴) dt − 2 σ̃₁ ξ_t dW_t⁽¹⁾ + σ̃₂ dW_t⁽²⁾

where
μ̃ = δ/√ε − σ̃₁²,   σ̃₁ = −√3 σ₁/ε^{3/4},   σ̃₂ = √3 σ₂/ε^{3/4}

The upward drift dominates if μ̃² ≫ σ̃₁² + σ̃₂², i.e. (ε^{1/4} δ)² ≫ σ₁² + σ₂²
Rotation around P: use that 2z e^{−2z−2ξ²+1} is constant for μ̃ = ε = 0
From below to above threshold

Linear approximation:
dz⁰_t = (μ̃ + t z⁰_t) dt − σ̃₁ t dW_t⁽¹⁾ + σ̃₂ dW_t⁽²⁾

⇒ P{no small oscillation} ≃ Φ(−π^{1/4} μ̃/√(σ̃₁² + σ̃₂²))
where Φ(x) = ∫_{−∞}^x e^{−y²/2}/√(2π) dy

[Figure: ∗: P{no small osc}, +: 1/E[N], ◦: 1 − λ₀, and the curve x ↦ Φ(π^{1/4} x),
plotted against x = −μ̃/√(σ̃₁² + σ̃₂²) = −ε^{1/4}(δ − σ₁²/ε)/√(σ₁² + σ₂²)]
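The prediction of the linear approximation is a single evaluation of the standard normal distribution function. A small helper (the argument values below are arbitrary):

```python
from math import erf, pi, sqrt

def p_no_small_osc(mu_t, s1_t, s2_t):
    """Linear-approximation prediction
    P{no small oscillation} ~ Phi(-pi^{1/4} mu_t / sqrt(s1_t^2 + s2_t^2)),
    where Phi is the standard normal distribution function."""
    x = -pi**0.25 * mu_t / sqrt(s1_t**2 + s2_t**2)
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

# exactly at threshold (mu_t = 0) the prediction is 1/2,
# and it decreases as the effective drift mu_t grows
print(p_no_small_osc(0.0, 0.1, 0.1))
print(p_no_small_osc(0.3, 0.1, 0.1))
```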
Summary: parameter regimes

For σ₁ = σ₂:  P{N = 1} ≃ Φ(−(πε)^{1/4}(δ − σ²/ε)/σ)

[Figure: regimes I, II, III in the (δ, σ) plane, separated by the curves σ = δε^{1/4},
σ = (δε)^{1/2} and σ = ε^{3/4}; see also [Muratov & Vanden-Eijnden '08]]

Regime I: rare isolated spikes
- Theorem 2 applies (δ ≪ ε^{1/2})
- interspike interval ≃ exponential

Regime II: clusters of spikes
- the number of interspike oscillations is asymptotically geometric
- for σ = (δε)^{1/2}: geom(1/2)

Regime III: repeated spikes
- P{N = 1} ≃ 1
- interspike interval ≃ constant
The Koper model

ε dx_t = [y_t − x_t³ + 3x_t] dt + √ε σ F(x_t, y_t, z_t) dW_t
dy_t = [k x_t − 2(y_t + λ) + z_t] dt + σ G₁(x_t, y_t, z_t) dW_t + σ′ G₂(x_t, y_t, z_t) dW_t
dz_t = ρ(λ + y_t − z_t) dt

[Figure: critical manifold with attracting sheets M_a^±, repelling sheet M_r, fold lines L^±, folded-node singularity P* and section Σ]

The folded-node singularity at P* induces mixed-mode oscillations
[Benoît, Lobry '82, Szmolyan, Wechselberger '01, ...]
The Poincaré map Π : Σ → Σ is almost one-dimensional, due to the contraction in the x-direction
Poincaré map z_n ↦ z_{n+1}

[Figure sequence: the almost one-dimensional map z_n ↦ z_{n+1} on [−9.3, −8.3], for k = −10, λ = −7.6, ρ = 0.7, ε = 0.01 and noise intensities σ = σ′ = 0, 2·10⁻⁷, 2·10⁻⁶, 2·10⁻⁵, 2·10⁻⁴, 2·10⁻³ and 10⁻²; cf. [Guckenheimer, Chaos, 2008]]
Size of fluctuations

[Figure: the regular fold and the folded node, with sections Σ₁, Σ₁′, Σ₁″, Σ₂, ..., Σ₆ along the MMO cycle, attracting sheets C₀^{a±} and repelling sheet C₀^r; µ ≪ 1 is the eigenvalue ratio at the folded node]

[Table: size of fluctuations (∆x, ∆y, ∆z) for each transition Σ₂ → Σ₃, Σ₃ → Σ₄, Σ₄ → Σ₄′, Σ₄′ → Σ₅, Σ₅ → Σ₆, Σ₆ → Σ₁, Σ₁ → Σ₁′, Σ₁′ → Σ₁″ (if z = O(√µ)) and Σ₁″ → Σ₂; the entries combine the noise intensities with powers of ε and µ, e.g. σ + σ′, σ/ε^{1/6} + σ′/ε^{1/3}, σ√ε + σ′, σ√(ε|log ε|) + σ′, σ√ε + σ′ε^{1/6}, (σ + σ′)ε^{1/4}, (σ + σ′)(ε/µ)^{1/4} and σ′(ε/µ)^{1/4}]
Main results
[B, Gentz, Kuehn, JDE 2012 & arXiv:1312.6353, to appear in J Dynam Diff Eq]

[Figure: canards in the (z, x) plane, at mutual distances of order √ε e^{−cµ}, √ε e^{−c(2k+1)²µ} and kµ√ε]

Theorem 1: canard spacing
At z = 0, the k-th canard lies at distance √ε e^{−c(2k+1)²µ} from the primary canard

Theorem 2: size of fluctuations
- (σ + σ′)(ε/µ)^{1/4} up to z = √(εµ)
- (σ + σ′)(ε/µ)^{1/4} e^{z²/(εµ)} for z > √(εµ)

Theorem 3: early escape
If P₀ ∈ Σ₁ lies in a sector with k > 1/√µ, then the first hitting of Σ₂ occurs at a point P₂ such that
P^{P₀}{z₂ > z} ≤ C |log(σ + σ′)|^γ e^{−κz²/(εµ|log(σ+σ′)|)}

- A saturation effect occurs at k_c ≃ √(|log(σ + σ′)|/µ)
- For k > k_c, the behaviour is independent of k and ∆z ≤ O(√(εµ|log(σ + σ′)|))
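The spacing formula of Theorem 1 and the saturation index k_c can be evaluated directly. The constant c is unspecified in the statement and set to 1 here; all parameter values are illustrative.

```python
import math

def canard_distance(k, eps, mu, c=1.0):
    """Distance of the k-th canard from the primary canard at z = 0:
    sqrt(eps) * exp(-c (2k+1)^2 mu)."""
    return math.sqrt(eps) * math.exp(-c * (2 * k + 1) ** 2 * mu)

def k_saturation(sigma, sigma_p, mu):
    """k_c ~ sqrt(|log(sigma + sigma')| / mu): beyond this index the spacing
    falls below the noise level and individual canards are no longer resolved."""
    return math.sqrt(abs(math.log(sigma + sigma_p)) / mu)

eps, mu = 1e-4, 0.05
spacings = [canard_distance(k, eps, mu) for k in range(5)]
print(spacings)                        # decreasing very fast in k
print(k_saturation(1e-6, 1e-6, mu))    # saturation index k_c
```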
Concluding remarks

- Noise can induce spikes that may have non-Poisson interval statistics
- Noise can increase the number of small-amplitude oscillations
- Important tools: random Poincaré maps and quasistationary distributions
- Future work: more quantitative analysis of oscillation patterns, using singularly perturbed Markov chains and spectral theory

References

- N. B. and Damien Landon, Mixed-mode oscillations and interspike interval statistics in the stochastic FitzHugh–Nagumo model, Nonlinearity 25, 2303–2335 (2012)
- N. B., Barbara Gentz and Christian Kuehn, Hunting French ducks in a noisy environment, J. Differential Equations 252, 4786–4841 (2012)
- N. B., Barbara Gentz and Christian Kuehn, From random Poincaré maps to stochastic mixed-mode-oscillation patterns, preprint arXiv:1312.6353 (2013), to appear in J. Dynam. Diff. Eq.
- N. B. and Barbara Gentz, Stochastic dynamic bifurcations and excitability, in C. Laing and G. Lord (Eds.), Stochastic Methods in Neuroscience, pp. 65–93, Oxford University Press (2009)
- N. B. and Barbara Gentz, On the noise-induced passage through an unstable periodic orbit II: General case, SIAM J. Math. Anal. 46, 310–352 (2014)
How to estimate the spectral gap

Various approaches: coupling, Poincaré/log-Sobolev inequalities, Lyapunov functions, Laplace transforms + Donsker–Varadhan, ...

Thm [Garrett Birkhoff '57]: under the uniform positivity condition
s(x) ν(A) ≤ K(x, A) ≤ L s(x) ν(A)   ∀x ∈ E, ∀A ⊂ E
one has |λ₁|/λ₀ ≤ 1 − L⁻²

Localised version: assume there exist A ⊂ E and m : A → ℝ₊* such that
m(y) ≤ k(x, y) ≤ L m(y)   ∀x, y ∈ A   (1)
Then
|λ₁| ≤ L − 1 + O(sup_{x∈E} K(x, E \ A)) + O(sup_{x∈A}[1 − K(x, E)])

To prove the restricted positivity condition (1):
- show that |Y_n − X_n| is likely to decrease exponentially for X₀, Y₀ ∈ A
- use Harnack inequalities once |Y_n − X_n| = O(σ²)
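Birkhoff's uniform-positivity bound can be sanity-checked numerically: for a positive matrix kernel, take m(y) = min_x k(x, y), the smallest admissible L, and compare the spectral gap with 1 − L⁻². The random 6-state kernel below is purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
k = rng.uniform(0.5, 1.5, size=(6, 6))    # positive kernel k(x, y)

m = k.min(axis=0)                         # m(y) <= k(x, y) for all x
L = np.max(k / m)                         # smallest L with k(x, y) <= L m(y)

ev = np.sort(np.abs(np.linalg.eigvals(k)))[::-1]
gap_ratio = ev[1] / ev[0]                 # |lambda_1| / lambda_0
print(gap_ratio, 1.0 - 1.0 / L**2)        # the ratio stays below Birkhoff's bound
```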
Estimating noise-induced fluctuations

[Figure: deterministic orbit decomposed into segments γ_ε^s, γ_ε^1, ..., γ_ε^5, γ_ε^w, shown in two projections of (x, y, z) space]
Estimating noise-induced fluctuations

Let ζ_t = (x_t, y_t, z_t) − (x_t^det, y_t^det, z_t^det). Then

dζ_t = (1/ε) A(t) ζ_t dt + (σ/√ε) F(ζ_t, t) dW_t + (1/ε) b(ζ_t, t) dt,   where b(ζ_t, t) = O(‖ζ_t‖²)

ζ_t = (σ/√ε) ∫₀^t U(t, s) F(ζ_s, s) dW_s + (1/ε) ∫₀^t U(t, s) b(ζ_s, s) ds

where U(t, s) is the principal solution of ε ζ̇ = A(t) ζ.

Lemma (Bernstein-type estimate):
P{ sup_{0≤s≤t} ‖∫₀^s G(ζ_u, u) dW_u‖ ≥ h } ≤ 2n exp(−h²/(2V(t)))
where ∫₀^s G(ζ_u, u) G(ζ_u, u)^T du ≤ V(s) a.s. and n = 3 is the space dimension

Remark: more precise results are obtained using an ODE for the covariance matrix of
ζ_t⁰ = (σ/√ε) ∫₀^t U(t, s) F(0, s) dW_s
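In the scalar case G ≡ 1 the stochastic integral is just a Brownian motion and V(t) = t, so the Bernstein-type bound reads P{sup_{s≤t} W_s ≥ h} ≤ 2 exp(−h²/(2t)). A quick Monte Carlo check (discretisation and sample sizes are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(3)
t, n_steps, n_paths, h = 1.0, 500, 10_000, 2.0
dt = t / n_steps

# simulate n_paths discretised Brownian paths on [0, t]
W = np.cumsum(rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps)), axis=1)
p_emp = np.mean(W.max(axis=1) >= h)        # empirical P{sup W_s >= h}
bound = 2.0 * np.exp(-h**2 / (2.0 * t))    # Bernstein-type bound (n = 1)
print(p_emp, bound)                        # the empirical value lies below the bound
```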
Example: analysis near the regular fold

[Figure: dynamics near the regular fold, crossed at (x₀, y₀, z₀): the path tracks the attracting sheet C₀^{a−} up to distance ε^{2/3} from the fold, then jumps and hits Σ₅ near (δ₀, y*, z*); sections Σ_n*, Σ_{n+1}* and Σ₄′ are used in the analysis]

Proposition: for h₁ = O(ε^{2/3}),
P{‖(y_{τ_{Σ₅}}, z_{τ_{Σ₅}}) − (y*, z*)‖ ≥ h₁}
  ≤ C |log ε| [exp(−κ h₁²/(σ² + (σ′)² ε)) + exp(−κ ε/(σ² ε + (σ′)² ε^{1/3}))]

Useful if σ, σ′ ≪ √ε
Further ways to analyse random Poincaré maps

- Theory of singularly perturbed Markov chains

[Figure: five-state chain in which within-cycle transitions have probability 1 − ε, transitions between cycles probability ε or ε², and one state has loop probability 1 − 2ε − ε²]

- For coexisting stable periodic orbits:
  spectral-theoretic description of metastable transitions
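A minimal instance of a singularly perturbed chain (two metastable groups, rather than the five-state chain sketched above) shows the characteristic cluster of eigenvalues near 1:

```python
import numpy as np

# Two groups of states that mix quickly internally and switch between
# groups with probability eps (illustrative structure). Two eigenvalues
# lie within O(eps) of 1; the others are bounded away from 1.
eps = 1e-3
a, b = (1 - eps) / 2, eps / 2
P = np.array([
    [a, a, b, b],
    [a, a, b, b],
    [b, b, a, a],
    [b, b, a, a],
])
ev = np.sort(np.abs(np.linalg.eigvals(P)))[::-1]
print(ev)   # approximately [1, 1 - 2*eps, 0, 0]
```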
Laplace transforms

{X_n}_{n≥0}: Markov chain on E, cemetery state ∆, kernel K
Given A ⊂ E, B ⊂ E ∪ {∆} with A ∩ B = ∅, x ∈ E and u ∈ ℂ, define
τ_A = inf{n ≥ 1 : X_n ∈ A},   σ_A = inf{n ≥ 0 : X_n ∈ A}
G^u_{A,B}(x) = E^x[e^{uτ_A} 1_{τ_A<τ_B}],   H^u_{A,B}(x) = E^x[e^{uσ_A} 1_{σ_A<σ_B}]

- G^u_{A,B}(x) is analytic for |e^u| < (sup_{x∈(A∪B)^c} K(x, (A∪B)^c))^{−1}
- G^u_{A,B} = H^u_{A,B} in (A∪B)^c, H^u_{A,B} = 1 in A and H^u_{A,B} = 0 in B

Lemma (Feynman–Kac-type relation): K H^u_{A,B} = e^{−u} G^u_{A,B}

Proof:
(K H^u_{A,B})(x) = E^x[E^{X₁}[e^{uσ_A} 1_{σ_A<σ_B}]]
= E^x[1_{X₁∈A} E^{X₁}[e^{uσ_A} 1_{σ_A<σ_B}]] + E^x[1_{X₁∈A^c} E^{X₁}[e^{uσ_A} 1_{σ_A<σ_B}]]
= E^x[1_{1=τ_A<τ_B}] + E^x[e^{u(τ_A−1)} 1_{1<τ_A<τ_B}]
= E^x[e^{u(τ_A−1)} 1_{τ_A<τ_B}] = e^{−u} G^u_{A,B}(x)

⇒ if G^u_{A,B} varies little in A ∪ B, it is close to an eigenfunction
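On a finite chain these quantities are computable: H^u solves the boundary-value problem H = 1 on A, H = 0 on B and H = e^u KH elsewhere, and the solution can be cross-checked by Monte Carlo. The 5-state kernel and the choices of u, A, B below are arbitrary illustrations.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 5
K = rng.uniform(size=(n, n))
K /= K.sum(axis=1, keepdims=True)     # stochastic kernel on {0, ..., 4}
u, A, B = 0.05, 0, 4                  # target set A = {0}, forbidden set B = {4}

# solve H = 1 on A, H = 0 on B, H(x) = e^u (K H)(x) on the interior
M = np.eye(n)
rhs = np.zeros(n)
rhs[A] = 1.0
for x in (1, 2, 3):                   # interior states
    M[x] = np.eye(n)[x] - np.exp(u) * K[x]
H = np.linalg.solve(M, rhs)

def mc_H(x0, n_samples=10_000):
    """Monte Carlo estimate of E^x0[e^{u sigma_A} 1_{sigma_A < sigma_B}]."""
    total = 0.0
    for _ in range(n_samples):
        x, steps = x0, 0
        while x != A and x != B:
            x = rng.choice(n, p=K[x])
            steps += 1
        if x == A:
            total += np.exp(u * steps)
    return total / n_samples

print(H[2], mc_H(2))   # the two values agree up to Monte Carlo error
```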
Small eigenvalues: heuristics
(inspired by [Bovier, Eckhoff, Gayrard, Klein '04])

Stable periodic orbits in x₁, ..., x_N
- B_i: small ball around x_i, B = ∪_{i=1}^N B_i
- Eigenvalue equation: (Kh)(x) = e^{−u} h(x)
- Assume h(x) ≃ h_i in B_i

Ansatz: h(x) = ∑_{j=1}^N h_j H^u_{B_j, B∖B_j}(x) + r(x)

- x ∈ B_i: h(x) = h_i + r(x)
- x ∈ B^c: the eigenvalue equation is satisfied by h − r (Feynman–Kac)
- x = x_i: the eigenvalue equation yields, by Feynman–Kac,
  h_i = ∑_{j=1}^N h_j M_ij(u),   M_ij(u) = G^u_{B_j, B∖B_j}(x_i) = E^{x_i}[e^{uτ_B} 1_{τ_B=τ_{B_j}}]

⇒ the condition det(M − 𝟙) = 0 yields N eigenvalues exponentially close to 1
If P{τ_B > 1} ≪ 1, then M_ij(u) ≃ e^u P^{x_i}{τ_B = τ_{B_j}} =: e^u P_ij and Ph ≃ e^{−u} h
Control of the error term

The error term satisfies the boundary value problem
(Kr)(x) = e^{−u} r(x)   for x ∈ B^c
r(x) = h(x) − h_i       for x ∈ B_i

Lemma: for u such that G^u_{B,E^c} exists, the unique solution of
(Kψ)(x) = e^{−u} ψ(x)   for x ∈ B^c
ψ(x) = θ(x)             for x ∈ B
is given by ψ(x) = E^x[e^{uτ_B} θ(X_{τ_B})]

⇒ r(x) = E^x[e^{uτ_B} θ(X_{τ_B})], where θ(x) = ∑_j [h(x) − h_j] 1_{x∈B_j}
To show that h(x) − h_j is small in B_j: use Harnack inequalities

Consequence: reduction to an N-state process, in the sense that
P^x{X_n ∈ B_i} = ∑_{j=1}^N λ_j^n h_j(x) h_j*(B_i) + O(|λ_{N+1}|^n)