ANALYSIS OF A FITZHUGH–NAGUMO–RALL MODEL

advertisement
ANALYSIS OF A FITZHUGH–NAGUMO–RALL MODEL OF A
NEURONAL NETWORK
STEFANO CARDANOBILE AND DELIO MUGNOLO
Abstract. Pursuing an investigation started in [25], we consider a generalization of the FitzHugh–Nagumo model for the propagation of impulses in a
network of nerve fibers. To this aim, we consider a whole neuronal network
that includes models for axons, somata, dendrites, and synapses (of both inhibitory and excitatory type). We investigate separately the linear part by
means of sesquilinear forms, in order to obtain well-posedness and some qualitative properties. Once they are obtained, we perturb the linear problem
by a nonlinear term and we prove existence of local solutions. Qualitative
properties with biological meaning are also investigated.
1. Introduction
While mathematically modelling of an individual nerve fiber was attempted already at the beginning of the last century, the first rigorous attempt to describe a
larger, ramified part of a whole neuronal network goes probably back to the pioneering studies of W. Rall at the end of the 1950s. Rall considered in [28] a lumped
model of a dendritcal network ending with a soma, which is today usually called
after his name. In [12] and a series of subsequent articles, J.D. Evans, G. Major
et al. extended Rall’s ideas to the case of a more extended and heterogeneous
branched dendritical network that does not fit Rall’s framework and thus cannot
be represented as a single equivalent cylinder. They have thus faced the problem of formulating correct nodes conditions in those ramification points (typically,
synapses) where two different dendritical trees are touching: see also Rall’s own
review [29] of these papers, or [21]–[22] for more details.
Rall supposed the conduction in dendritical fibers to be passive and thus modelled it by a linear cable equation. There are however many possible choices for
conduction laws in active fibers, which lead to nonlinear equations. We only recall
the Hodgkin–Huxley and the FitzHugh–Nagumo models, and refer to [23] for possible alternatives. For transmission conditions in the nodes (synapses and soma,
essentially) there is a similar manifold of proposed models: a review of some possible
ones can be found in [16].
In this paper, we schematize larger neuronal networks by considering
– a FitzHugh–Nagumo (nonlinear) system on the axons, coupled with
– a (linear) Rall model for the dendritical trees and somata, complemented with
– Kirchhoff-type rule in axonal or dendritical ramification points.
2000 Mathematics Subject Classification. 47N60; 34B45; 92C20.
Key words and phrases. FitzHughNagumo system and Rall’s lumped soma model and Semigroups of operators and Semilinear evolution equations on networks.
1
2
STEFANO CARDANOBILE AND DELIO MUGNOLO
We generalize the standard Kirchhoff rule in order to allow for excitatory effects
in inactive nodes, and briefly discuss in Remark 4.5 a possible extension to more
realistic balanced systems that also include nodes with inhibitory effects.
Our efforts essentially continue the investigations started in [25] by Silvia Romanelli and the second author, where only dendritical networks complying with
the Rall model have been considered. While our techniques are similar to those
used in [25], the essentially more complex dynamics considered here (mostly due to
the coupling of the Rall model with a FitzHugh–Nagumo system) accounts for new
phenomena, happening not only in the nonlinear setting, as one may expect, but
already in a simplified linear version of the problem. Throughout the paper, we
have deduced abstract properties in mathematically formulated propositions and
theorems, and then regularly proposed tentative interpretations, in the light of facts
that are well-known in the neurophysics.
As shown in [25], network models based on diffusion processes complemented
with active nodes fit particularly well the abstract mathematical theory of parabolic
equations with dynamical boundary conditions, and in case of purely excitatory
nodes it can be discussed in an efficient way by means of sesquilinear forms. In fact,
variational/functional analytical methods based on the application of the spectral
theorem or, more recently, on non-symmetric sesquilinear forms have been already
used for several years to discuss (parabolic) network equations (see, e.g., [7] and
[18]). Such methods ensure necessary flexibility and allow to obtain a number of
qualitative properties (like smoothness or positivity of solutions, or suitable energy
estimates). Our model is somewhat more involved than usual diffusion problems: in
fact, even when considering a single neuron the linearized problem is not associated
with an elliptic operator – thus, the problem is not, strictly speaking, a parabolic
one. We emphasize that our model turns out to be non-selfadjoint.
We essentially follow the methods introduced in [25] in order to deal with network
equations with dynamical boundary conditions. We introduce a sesquilinear, nonsymmetric form and can thus prove several well-posedness and qualitative results
in a suitable weighted product space. We can thus discuss the behaviour of a whole
neuronal network whose dynamics is described by a semilinear diffusion system. In
this way we take into account the nonlinear conduction taking place on the axons,
whereas the model discussed in [25] was based on a simpler linear cable equation
on all (passive) network edges. Another feature of our method, as opposed to
standard FitzHugh–Nagumo and Rall theory, is that we allow variable coefficients
for the conduction laws: this generalization is important because, as pointed out
by Yuste–Tank in [35], “the basic cable properties of dendrites are unknown, and
it is even possible that they may not be constant throughout the dendritic tree”.
The plan of our investigation is the following one: We introduce the precise network semilinear diffusion problem based on the FitzHugh–Nagumo and Rall models
in Sections 2–3. Due to the presence of inhomogeneous terms, such a system cannot
be easily linearized around a stationary solution; to this aim, we rather follow an
approach based on semilinear perturbation theory for generators of analytic semigroups. Thus, we temporarily drop the nonlinear terms and discuss the associated
linear problem. More precisely, in Section 4 we introduce a weak formulation of our
problem and treat it with the method of sesquilinear forms. By standard methods
we then prove that the related linear abstract Cauchy problem is well-posed in
FITZHUGH–NAGUMO–RALL MODEL OF A NEURONAL NETWORK
3
a suitable Hilbert space by showing the generation of an analytic semigroup. In
Section 5 we obtain further qualitative properties (like smoothness, non-positivity,
non-L∞ -contractivity) for the solutions to the linear problem. In Section 6 we are
able to develop a spectral theory that in turn allows us to study the complete nonlinear problem and to prove its local well-posedness as well as certain qualitative
properties. Although an explicit formula for the solution cannot be proven by our
methods, we emphasize that our techniques can be applied to a quite general class
of problems, as we do impose only weak assumptions on smoothness of coefficients
or initial data. Finally, in Section 7 we summarize some of the results we have
obtained and exhibit again a strong interplay between mathematical linear theory
and biological nonlinear properties.
Our investigation is mainly based on methods from the mathematical theories of
sesquilinear forms and of nonlinear perturbations of sectorial operators: for details
we refer to [26] as well as [4], and to [20], respectively. For the sake of completeness we briefly summarize in an Appendix many known results we have applied
throughout in the paper.
2. FitzHugh–Nagumo–Rall model of a neuron
To begin with, we introduce the standard FitzHugh–Nagumo differential model
of an excitable nerve fiber. It is a simplification of the celebrated (and more involved) Hodgkin–Huxley model (see [31, Chapt. 2]), a thorough discussion of which
goes beyond the aim of this work. In the equations of the FitzHugh–Nagumo model,
v denotes a transmembrane voltage-like variable. On the other hand, R is an ad
hoc recovery function, whose rôle is to approximate and sum up the activity of ion
pumps that appear in the Hodgkin–Huxley model while keeping the phase space
a simple as possible. Such a model was first introduced by R. FitzHugh at the
beginning of the 1960s in [13] and in its basic form reads
∂
∂2
vFN (t, x) = C1 2 vFN (t, x) − P1 vFN (t, x) − Θ(vFN (t, x)) − R(t, x),
∂t
∂x
∂
R(t, x) = vFN (t, x) − R(t, x) + ζ.
∂t
Observe that the equations are coupled, i.e., regenerative self-excitation of v is
allowed via a positive feedback, while R provides a slower negative feedback. Here,
the function Θ : R → R is given by
Θ(t) :=
t(t − ξ1 )(t − ξ2 )
,
ξ2 (ξ2 − ξ1 )
and the parameters C1 , P1 , ξ1 , ξ2 , ζ have to be chosen to fit the experimental data.
While fibers of (semi-)infinite length are usually considered in the literature, we
want to model an axon of length `1 , i.e., the space variable x in the above equations
ranges in an interval (0, `1 ), where the soma (the cell body) is identified with the
point 0. Standard geometrical arguments (see e.g. [2, Chapter 16]) show that the
solutions of the ordinary differential equation
(2.1)
u0 = −
u(u − ξ1 )(u − ξ2 )
= −Θ(u),
ξ2 (ξ2 − ξ1 )
map initial values in the interval [a, b] into values in the same interval, whenever
a ≤ 0 and b ≥ ξ2 . Since the flow associated with (2.1) leaves invariant any interval
4
STEFANO CARDANOBILE AND DELIO MUGNOLO
[a, b], the equation is globally well-posed – i.e., solutions do not blow up in finite
time. Moreover the right hand-side of the equation is C ∞ and negative for u > ξ2
and for 0 < u < ξ1 . This implies that the only stable stationary points of (2.1) are 0
and ξ2 , while the stationary point ξ1 is unstable. The neurophysical interpretation
of this may be that ξ1 is the voltage threshold for spike initiation while ξ2 is the
asymptotic voltage of action potentials transmitted along an axon.
While the above FitzHugh–Nagumo model only accounts for the axon, our aim is
to describe a whole neuron. In order to pursue this target, we also have to model the
dendritical tree. In the literature, this is usually desribed as a mesh of non-excitable
fibers. A now standard approach (so-called lumped soma model ) has been developed
in the 1950s by Rall in [28]. Rall’s idea was to consider a simpler, concentrated
“equivalent cylinder” (of finite length `2 ) that schematizes a dendritical tree, under
certain (quite restrictive) geometrical assumptions on the network. He showed
that a linear cable equation fits experimental data on dendritical trees quite well,
provided that it is complemented by a suitable dynamical conditions imposed in
the interval end corresponding to the soma. Rall’s system is given by
∂2
∂
vR (t, x) = C2 2 vR (t, x) − P2 u(t, x),
∂t
∂x
∂
∂
vR (t, `2 ) = − vR (t, `2 ),
∂t
∂x
∂
vR (t, 0) = 0,
∂x
where again the parameters C2 , P2 ought to be chosen to fit the experimental data.
In order to describe the behaviour of both axon and dendritical tree, we glue
them at the soma and couple the FitzHugh–Nagumo system with a Rall model. It
seems reasonable to assume the soma to be isopotential, and therefore the voltage
variables vFN (as in the FitzHugh–Nagumo system) and vR (as in the Rall model)
to attain a common value in the contact point, We thus set
dv (t) := vR (t, `2 ) = vFN (t, 0),
t ≥ 0.
A similar, unified approach is quite standard in the investigation of compartmental
models, (see e.g. [10] or [33]). It has also already been considered in the investigation of differential models as in [6], where the cable equation is coupled with a
space-clamped FitzHugh–Nagumo equation in the soma. In this scheme, we replace
the outgoing potential at `2 in the dynamic boundary condition of Rall’s model by
a Kirchhoff-like term representing the sum of potentials flowing from the dendritical tree into the axon. If we finally close the system by assuming also the axon to
terminate with a sealed end, we can write our complete system as
∂
v1 (t, x)
∂t
∂
R(t, x)
∂t
∂
v2 (t, x)
∂t
= C1
∂2
v1 (t, x) − P1 v1 (t, x) − Θ(v1 (t, x)) − R(t, x)
∂x2
= v1 (t, x) − R(t, x) + ζ,
= C2
∂2
v2 (t, x) − P2 v2 (t, x),
∂x2
FITZHUGH–NAGUMO–RALL MODEL OF A NEURONAL NETWORK
5
for t > 0 and x ∈ (0, `1 ) and x ∈ (0, `2 ) in the first two and in the last equations,
respectively, complemented with the node conditions
v1 (t, `2 ) = v2 (t, 0) =: dv (t),
∂ v
∂
∂
d (t) = −
v1 (t, `2 ) −
v2 (t, 0) ,
∂t
∂x
∂x
∂
∂
v1 (t, 0) = 0,
v2 (t, `1 ) = 0.
∂x
∂x
Here we have set v1 := vR and v2 := vFN .
Observe that ordinary dendritical trees do not satisfy Rall’s geometrical assumptions. This phenomenon, which is in neurobiology is known as “tapering”
(see e.g. [27]–[34]–[11]), make it necessary to allow for more general coefficients
that describe the anisotropic properties of biological fibers. At a mathematical
level, this amounts to replace parameters C1 , C2 , P1 , P2 by more general functions
c1 , c2 , p1 , p2 . As we will see, this is no big trouble in our mathematical approach.
Even more generality in the biological model can be obtained if we assume the
dendritical tree to consist of smaller, geometrically homogeneous subtrees – each
one modelled in Rall’s fashion. This approach, thoroughly developed in [21] and
subsequent papers, will be considered in the remainder of our paper.
3. The FitzHugh–Nagumo–Rall model of a neuronal network
Beyond the toy model presented in the previous section, one could discuss the
behaviour of a whole neuronal network of m edges and n nodes, that possibly
schematizes larger regions of a brain. A complete model should take into account
both active and non-active vertices, i.e., somata as well as synapses and ramifications of the axonal and dendrytical trees. In the following, we continue the
investigation initiated in [18]–[24]–[25]. We borrow from these papers most of our
notation and mathematical tools. In these more theoretical papers, however, the
recovery feature typical of the FitzHugh–Nagumo model has been neglected, so that
unrealistic properties were deduced1.
In the following, we identify the network and the underlying graph G. Furthermore, without loss of generality we normalize and parametrize the edges as
[0, 1]-intervals: this is legitimate as long as we allow variable coefficients for the
diffusion operators. We denote by Φ+ and Φ− the incoming and the outgoing
incidence matrices, i.e.,
1, if ej (0) = vi ,
1, if ej (1) = vi ,
+
−
φij :
and
φij :
0, otherwise,
0, otherwise,
so that the incidence matrix of the graph Φ is given by Φ = Φ+ − Φ− .
We want to the distinguish between active nodes, which model the somata (and,
according to some neurobiologists, also the synapses, cf. [23]), and inactive nodes,
which model the axonal and dendritical ramification. For the sake of simplicity we
will consider a model of inhibitory type only: we thus have n0 axons (and hence n0
1For instance, in all the above articles it was shown that, under reasonable locality assumptions,
positive initial data imply positive solutions: this is in sharp contrast to experimental observation
of hyperpolarization following a spike – i.e., of fall in voltage under initial level, which always
follows the transmission of an action potential.
6
STEFANO CARDANOBILE AND DELIO MUGNOLO
active, inhibitory nodes, too) and n − n0 inactive nodes. In this setting we define
the modified incidence matrix
0, if 1 ≤ i ≤ n0 ,
0, if 1 ≤ i ≤ n0 ,
−
e
and
φ
:
φe+
:
−
ij
ij
φ+
,
otherwise,
φ
ij
ij , otherwise.
We impose a (generalized) Kirchhoff law in the inactive nodes (modelling absorption as well as transmission of electrical potential) and a suitable, physically
motivated dynamic condition in the active nodes. Furthermore, we impose continuity conditions in all nodes, so that we can denote by dui the common value of
functions uj on all edges ej incident to the node vi : this is the usual approach in
the literature. Each soma (i.e., each active node) is the end of an axon: hence, our
model will consist of n0 edges representing axons (denoted by e1 , . . . , en0 ) and m−n0
edges representing dendritical trees (denoted by em−n0 , . . . , en ). The dynamics on
axonal and dendritical edges will be modeled by a nonlinear FitzHugh–Nagumo
systen and by a linear cable equation, respectively. We will always assume that
n0 < n and n0 < m, i.e., the system does contain axons and somata.
We now allow all parameters and all coefficients to be dependent on the individual
edge ej where the equation takes place. Summing up, we finally formulate on the
whole network the differential system
(3.1)
 ∂
0 0

∂t uj (t, x) = (cj uj ) (t, x) − pj (x)uj (t, x)



−Θj (uj (t, x)) − Rj (t, x)
x ∈ (0, 1), j = 1, . . . , n0 ,



∂

R
(t,
x)
=
α
(x)u
(t,
x)
−
β
(x)R
(t,
x)
+
ζ
(t)
x
∈ (0, 1), j = 1, . . . , n0 ,

j
j
j
j
j
j
∂t

 ∂
0 0
)
(t,
x)
−
p
(x)u
(t,
x)
x
∈ (0, 1), j = m − n0 , . . . , m,
u
(t,
x)
=
(c
u
j
j
j
j
j
∂t
u
(t),
j,
` ∈ Γ(vi ), i = 1, . . . , n,
u
(t,
v
)
=
u
(t,
v
)
=:
d

j
i
`
i
i

Pm

∂ u
0

(t,
v
)
φ
µ
c
(v
)u
(t)
=
−ν
d

i
ij
j
j
i
i
j
i
j=1

∂t



−γi dui (t)
i = 1, . . . , n0 ,

P

m
i = n0 + 1, . . . , n,
γi dui (t) = − j=1 φij µj cj (vi )u0j (t, vi ),
which we investigate for positive time
conditions
uj (0, x) = u0j (x),
Rj (0, x) = r0j (x),
dui (0) = q0i ,
t ≥ 0 and complete by the set of initial
x ∈ (0, 1), j = 1, . . . , m,
x ∈ (0, 1), j = 1, . . . , n0 ,
i = 1, . . . , n0 .
The nonlinear functions Θj are here defined by
(3.2)
Θj (z) :=
z(z − ξ1j )(z − ξ2j )
ξ2j (ξ2j − ξ1j )
z ∈ C,
for some constants ξ2j > ξ1j > 0, j = 1, . . . , n0 .
Mathematical and biological considerations motivate us to impose the following.
Assumptions 3.1. Throughout this paper we assume the coefficients to satisfy
• cj ∈ C 1 [0, 1], j = 1, . . . , m;
• pj , αj , βj ∈ L∞ (0, 1), j = 1, . . . , m;
• cj (x) ≥ c > 0, pj (x) ≥ p > 0, αj (x) ≥ α > 0, and βj (x) ≥ β > 0 for a.e.
x ∈ (0, 1), j = 1, . . . , m;
• µj , j = 1, . . . , m, and νi , γi , i = 1, . . . , n0 , are constants;
• µj > 0, j = 1, . . . , m;
• νi > 0 and γi ≥ 0, i = 1, . . . , n0 .
FITZHUGH–NAGUMO–RALL MODEL OF A NEURONAL NETWORK
7
Tentative interpretation 3.2. Such weak regularity assumptions on the diffusion
coefficients are motivated by concrete biological models. In the continuum limit for
saltatory conduction in myelinated fibers or in compartmental dendritical models
(see [31, Chapt. 7 and Chapt. 9]), for example, one assumes the diffusion coefficients
within the same dendrite, or axon, to be piecewise constant, but to have jumps at
both ends of each compartment. However, occasionally in this paper (in particular
in section 6) the above assumptions will be strenghtened, in order to achieve more
satisfactory regularity results for the solution to the above problem. As already
remarked, this is a mathematical technical condition, but also reflects the real picture
of some concrete systems (e.g., in non-myelinated fibers).
Observe that the condition νi > 0, i = 1, . . . , n0 , is consistent with a model
of purely excitatory node conditions, i.e., a model of a neuronal tissue where all
synapses depolarize the post-synaptic cell. We will briefly discuss in Remarks 4.5–
5.2 below the case of a neuronal network model consisting of both excitatory and
inhibitory synapses – i.e., where synapses can either depolarize or hyperpolarize
the post-synaptic cells.
+
We now introduce the weighted modified incidence matrices Φ+
w := (ωij ) and
−
−
Φw := (ωij ) defined by
+
ωij
:
µj νi cj (vi ), if φe+
ij = 1,
0,
otherwise,
and
−
ωij
:
µj νi cj (vi ), if φe−
ij = 1,
0,
otherwise.
With this notation the third and fifth conditions in (3.1) on the solution u of the
system can be compactly reformulated by imposing that for all t ≥ 0 to u(t) there
exists du (t) ∈ Cn such that
>
(3.3)
>
(Φ+ ) du (t) = u(t, 0), (Φ− ) du (t) = u(t, 1), and
0
− 0
u
Φ+
w u (t, 0) − Φw u (t, 1) = Dd (t).
Here we denote by D the n × n diagonal matrix diag(0, . . . , 0, −γn0 +1 , . . . , −γn ).
We start by considering weighted product spaces
Q n0 2
Qm
X 2 := X12 × X22 := j=1
L (0, 1; µj dx) × j=n0 L2 (0, 1; µj dx),
Qm
Q n0
(3.4)
µj
2
Y 2 :=
Z := j=1
C; ν1i ,
j=1 L (0, 1; αj dx),
which are Hilbert spaces with the natural inner product, since the coefficients
µ1 , . . . , µm , α1 , . . . , αm , and ν1 , . . . , νn0 are strictly positive. We then introduce
the product Hilbert space
X 2 := X 2 × Y 2 × Z
endowed with the natural inner product
   
n0 Z 1
n0
m Z 1
f
g
X
X
X
Rj (x)Sj (x)
ψi χi
R | S  :=
fj (x)gj (x)µj dx +
dx +
.
α
(x)
νi
j
j=1 0
j=1 0
i=1
ψ
χ
2
X
for all f, g ∈ X12 × X22 , R, S ∈ Y 2 , and ψ, χ ∈ Z.
We define a linear operator A by
8
STEFANO CARDANOBILE AND DELIO MUGNOLO










 

f


A R := 

ψ










d
d
dx (c1 dx f1 )

− p1 f1 − R1
..
.
d
d
dx (cn0 dx fn0 ) − pn0 fn0 − Rn0
d
d
dx (cm−n0 dx hm−n0 ) − pm−n0 fm−n0
..
.
d
d
dx (cm dx fm ) − pm fm
α1 f1 − β1 R1
..
.
α
f
−
βn0 Rn0
n
n
0
0
Pm
−ν1 j=1 φ1j µ1 c1 (v1 )f10 (t, v1 ) − γ1 ψ1
..
.
Pm
−νn0 j=1 φ1j µn0 cn0 (vn0 )fn0 0 (t, vn0 ) − γn0 ψn0












,











with domain

 
∃df ∈ Cn s.t.


 f
(Φ+ )> df = f (0), (Φ− )> df = f (1),
D(A) := R ∈ X 2 : f ∈ (H 2 (0, 1))m and e + 0
0
f
e−
Φw f (0) − Φ

w f (1) = Dd ,

 ψ
f
(d1 , . . . , dfn0 )> = ψ.
acting on the space X 2 . We define also a nonlinear function
: C2m+n
0







→ C by
(z) := (Θ1 (z11 ), . . . , Θn (z1n ), 0 . . . , 0, ζ1 , . . . , ζm , 0, . . . , 0) ,
where z = (z1 , z2 , z3 )> , z1 , z2 ∈ Cm , z3 ∈ Cn . Denoting by F : X 2 → X 2 the
Nemitsky operator associated to , the well-posedness of (3.1) is equivalent to the
>
0
0
0
well-posedness of the abstract Cauchy problem
u̇(t) = Au(t) + F(u(t)),
(3.5)
u(0) = u0 ,
t ≥ 0,
on the Hilbert space X 2 . Observe that, a biological level, the value fj (t, x) represents the voltage at time t on the point x of the axon ej (resp., of the dendritical
tree ej ) if j = 1, . . . , n0 (resp., if j = n0 , . . . , m). Similarly, Rj (t, x) is the value
of the FitzHugh–Nagumo recovery term at time t and point x on the axon ej ,
j = 1, . . . , n0 , and finally ψi (t) is the voltage of the soma vi at time t, i = 1, . . . , n0 .
4. Auxiliary linear results
With the aim of later applying standard semilinear perturbation theory, we consider in this section the linear abstract Cauchy problem
u̇(t) = Au(t), t ≥ 0,
(4.1)
u(0) = u0 .
In order to show its well-posedness we want to use a variational approach based on
sesquilinear forms. In order to do that, define a linear supspace V of X 2 by
 

∃df ∈ Cn s.t.
 f

V := R ∈ X 2 : f ∈ (H 1 (0, 1))m and (Φ+ )> df = f (0), (Φ− )> df = f (1),
.


ψ
(df1 , . . . , dfn0 )> = ψ.
This will be our form domain. We emphasize that V is not a product space.
,
FITZHUGH–NAGUMO–RALL MODEL OF A NEURONAL NETWORK
9
Lemma 4.1. The subspace V is dense in X 2 . It is a Hilbert space with respect to
the scalar product defined by
(f | g)V :=
m Z
X
1
n0 Z
X
fj0 (x)gj0 (x) + fj (x)gj (x) µj dx +
0
j=1
j=1
0
1
Rj (x)Sj (x)
µj dx.
αj (x)
Proof. In order to prove the density of V in X 2 we first set
(4.2) 

∃ df ∈ Cn s.t.
 
f
V0 :=
,
∈ X 2 × Z : f ∈ (H 1 (0, 1))m and (Φ+ )> df = f (0), (Φ− )> df = f (1),
 ψ

(df1 , . . . , dfn0 )T = ψ.
and observe that V is isomorphic to V0 × Y 2 , since there is no condition on R in the
definition of V. In [25, Lemma 3.2] it has been proved that V0 is dense in X 2 × Z,
hence the density of V in X 2 follows at once.
Moreover, V is a Hilbert space for the scalar product
(f | g) :=
m Z
X
j=1
1
n0 Z
X
fj0 (x)gj0 (x) + fj (x)gj (x) dx +
0
j=1
1
Rj (x)Sj (x)dx +
0
n0
X
dfi dgi ,
i=1
2
since V is a closed subspace of (H 1 (0, 1))m × (LP
(0, 1))m × Cn0 . Like in the proof
m
f
of [25, Lemma 3.3], we conclude that |d | ≤
j=1 kfj kH 1 (0,1) . Because of the
positivity of the αi , µi , νi , we see that (· | ·) is equivalent to (· | ·)V .
From now on, we thus consider the Hilbert space V equipped with the scalar
product (·|·)V . We introduce on X 2 the sesquilinear form a : V × V → C defined by
a(f, g)
:=
m Z
X
1
cj (x)fj0 (x)gj0 (x) + pj (x)fj (x)gj (x) µj dx
j=1 0
n0 Z 1
X
+
j=1
+
fj (x)Sj (x) − Rj (x)gj (x) +
0
n0
X
γi
i=1
νi
dfi dgi +
n
X
βj (x)
Rj (x)Sj (x) µj dx
αj (x)
γi dfi dgi .
i=n0 +1
We stress that the form a is not symmetric, as it can be seen by setting f = (1, 1, 1)>
and g = 2f and computing a(f, g), a(g, f).
With the aim of applying the results summarized in the Appendix, we first
establish the correspondence between a and the operator that naturally arises from
the problem (3.1).
Lemma 4.2. The operator associated with the form a is (A, D(A)) defined in
Section 3.
10
STEFANO CARDANOBILE AND DELIO MUGNOLO
Proof. We first show that A ⊂ B, where B denotes the linear operator associated
to a defined as in Definition A.1. Let to this aim f ∈ D(A) and g ∈ V and compute
(Af | g)X 2
=
m Z
X
1
(cj fj0 )0 (x)gj (x)µj dx
0
j=1
m Z 1
X
−
+
−
j=1 0
n0 Z 1
X
j=1 0
n0 X
m
X
pj (x)fj (x)gj (x)µj dx −
n0 Z
X
Rj (x)gj (x)µj dx.
0
j=1
αj (x)fj (x)Sj (x)
1
n0 Z 1
X
µj
µj
dx −
dx
βj (x)Rj (x)Sj (x)
αj (x)
α
(x)
j
j=1 0
cj (vi )µj φij fj0 (vi )dgi −
n0
X
γi
i=1
i=1 j=1
νi
dfi dgi .
We now apply the definition of the incidence matrix Φ = Φ+ − Φ− and observe that
n0 X
n
m
m
m
X
X
X
1 X
0
g
g
0
di
cj (vi )φij f (vi )µj di =
cj (vi )µj φij f 0 (vi ).
cj fj gj µj 0 −
j=1
i=n0 +1
i=1 j=1
j=1
Summing up, integrating by parts we obtain that
(Af | g)X 2
m
m Z
X
0
1 X
cj fj gj µj 0 −
=
j=1
m Z 1
X
−
+
−
j=1 0
n0 Z 1
X
j=1 0
n0 X
m
X
j=1
1
cj (x)fj0 (x)gj0 (x)µj dx
0
pj (x)fj (x)gj (x)µj dx −
j=1
fj (x)Sj (x)µj dx −
n0 Z
X
cj (vi )µj φij fj0 (vi )dgi −
= −
−
+
+
1
Rj (x)gj (x)µj dx
0
1
βj (x)Rj (x)Sj (x)
0
j=1
i=1 j=1
m Z
X
n0 Z
X
n0
X
γi
νi
i=1
µj
dx
αj (x)
dfi dgi
1
cj (x)fj0 (x)gj0 (x)µj dx
j=1 0
m Z 1
X
j=1 0
n0 Z 1
X
j=1 0
n
X
i=n0 +1
pj (x)fj (x)gj (x)µj dx −
n0 Z
X
j=1
fj (x)Sj (x)µj dx −
n0 Z
X
j=1
dgi
m
X
j=1
1
Rj (x)gj (x)µj dx
0
1
βj (x)Rj (x)Sj (x)
0
cj (vi )µj φij fj0 (vi ) −
n0
X
γi
i=1
νi
dfi dgi .
µj
dx
αj (x)
FITZHUGH–NAGUMO–RALL MODEL OF A NEURONAL NETWORK
11
Since f ∈ D(A), the generalized Kirchhoff condition (3.3) holds for i = n0 +
1, · · · , n and summing up we obtain
(Af | g)X 2
= −
−
−
m Z
X
1
j=1 0
n0 Z 1
X
j=1
n0
X
i=1
cj (x)fj0 (x)gj0 (x) + pj (x)fj (x)gj (x) µj dx
0
βj (x)
Rj (x)Sj (x) µj dx
Rj (x)gj (x) − fj Sj (x) +
αj (x)
n
X
γi f g
di di −
γi dfi dgi = a(f, g),
νi
i=n +1
0
for all f ∈ D(A), g ∈ V.
Mutatis mutandis, one can show as in the proof of [24, Lemma 4.6] that the
converse inclusion, i.e., B ⊂ A, holds, too.
We are now in the position to apply the theory exposed in the Appendix.
Theorem 4.3. The operator matrix A generates a contractive, analytic, uniformly
exponentially stable semigroup (T (t))t≥0 on X 2 .
Proof. We are going to show that the densely defined sesquilinear form a is coercive
and continuous, as the assertion then follows directly from Proposition A.2.
To begin with, let f ∈ V and observe that
Rea(f, f)
m Z
X
=
1
cj (x)|fj0 (x)|2 + pj (x)|fj (x)|2 µj dx
j=1 0
n0 Z 1
X
+
j=1
0
n0
n
X
X
γi f 2
βj (x)
|Rj (x)|2 µj dx +
|di | +
γi |dfi |2 .
αj (x)
ν
i=1 i
i=n +1
0
Thus, with the notation of Assumptions 3.1 and letting C := min {c, p, β} > 0, we
can estimate
Rea(f, f) ≥
m Z
X
j=1
1
n0 Z
X
c|fj0 (x)|2 + p|fj (x)|2 µj dx+
0
j=1
0
1
βj (x)|Rj (x)|2
µj dx ≥ Ckfk2V .
αj (x)
This proves the coercivity of a.
In order to check the continuity, let f, g ∈ V. Let further M ≥ 0 be a constant
such that
n0
X
γi
i=1
νi
|dfi |2 +
n
X
i=n0 +1
γi |dfi |2 ≤ M
m Z
X
j=1
1
(|fj0 (x)|2 + |fj (x)|2 )dx.
0
n
o
kβ k
Then for K := max1≤j≤m µj kpj k∞ , µj kcj k∞ , µj jα ∞ and using the Cauchy–
Schwartz inequality applied to the Hilbert spaces H := (L2 (0, 1))m and V0 defined
12
STEFANO CARDANOBILE AND DELIO MUGNOLO
in (4.2), we obtain that
|a(f, g)|
≤ K
m Z
X
1
0
fj (x)gj0 (x) + fj (x)gj (x)
j=1 0
n0 Z 1
X
+
+
j=1
n0
X
i=1
0
fj (x)Sj (x) − Rj (x)gj (x) + Rj (x)Sj (x) dx
n
X
γi f g
γi |dfi dgi |
|di di | +
νi
i=n +1
0
≤ K (kf kV0 kgkV0 + kf kH kSkH + kRkH kgkH + kRkH kSkH ) + M (kf kV0 kgkV0 ).
As in the proof of Lemma 4.1 one sees that (kf kV0 + kRkH ) ≤ CkfkV for some
constant C > 0. Hence, we finally obtain that |a(f, g)| ≤ C(K + M )kfkV kgkV . This
concludes the proof.
Remark 4.4. Observe that considering the weighted spaces X 2 , Y 2 , Z introduced
in (3.4), instead of (L2 (0, 1))m , (L2 (0, 1))n0 , Cn0 , has been crucial in order to define
the form and obtain its coercivity. In fact, although X 2 is isomorphic to X 2 :=
(L2 (0, 1))m × (L2 (0, 1))n0 × Cn0 , without weighting the factor spaces the semigroup
associated to a is in general not even contractive on X 2 . However, the semigroup on
X 2 is similar to that acting on X 2 , hence it is bounded and uniformly exponentially
stable, too.
Remark 4.5. Neuronal networks are usually balanced, i.e., they include synapses
of both inhibitory and excitatory type. In mathematical language, this amounts
to generalize Assumptions 3.1 by also allowing some coefficients νi to be negative,
which accounts for inhibitory synapses.
Let us show that in fact A generates an analytic semigroup on X 2 regardless of
the sign of the coefficients νi , i = 1, . . . , n0 . To this aim, we write Af = A0 f + Kf,
where

d
d
dx (c1 dx f1 )
− p1 f1 − R1
..
.







d
d


(c
f
)
−
p
f
−
R
n
n
n
n
n
0
0
0
0
 d dx 0 dx

 (cm−n0 d hm−n0 ) − pm−n0 fm−n0 
dx
 dx



..


 
.


f
d
d



dx (cm dx fm ) − pm fm
A0 R := 


α1 f1 − β1 R1


ψ


.
..






αn0 fn0 − βn0 Rn0




−γ1 ψ1




..


.
−γn0 ψn0
FITZHUGH–NAGUMO–RALL MODEL OF A NEURONAL NETWORK
13
and
0
..
.










 

f



K R := 


ψ





















.











0
0
..
.
0
0
..
.
0
φ1j µ1 c1 (v1 )f10 (t, v1 )
..
.
Pm
−νn0 j=1 φ1j µn0 cn0 (vn0 )fn0 0 (t, vn0 )
−ν1
Pm
j=1
By Theorem 4.3, A0 is the generator of an analytic semigroup on X 2 . Since the
range of K is a finite-dimensional subspace of X 2 , K is a compact operator from
D(A) to X 2 regardless of the sign of the coefficients νi . By Proposition A.3 we
deduce that also A generates an analytic semigroup on X 2 . We thus conclude that
the system (3.1) is well-posed even in this more general case. However, contractivity
and stability properties fail in general to hold.
5. Qualitative properties
Typically, Cauchy problems governed by an analytic semigroup enjoy some kind
of smoothing property for the solutions. In our case, this is not exactly true, since
A is not a differential operator. However, we do have regularization of the first coordinate of the solution, provided that some addition assumption on the coefficients
of the cable equations are imposed. Such a regularity is, e.g., satisfied whenever the
nerve fibers are homogeneous enough – like in the case of non-myelinated fibers.
Corollary 5.1. Let the coefficients cj , pj be of class C ∞ [0, 1]. Then the solution u
to the problem (3.1) is of class C ∞ .
k
Proof. Since the semigroup is analytic, it maps X 2 into ∩∞
k=1 D(A ). It now suffices
∞
k
∞
m
2
m
n0
to check that ∩k=1 D(A ) ⊂ (C [0, 1]) × (L (0, 1)) × C . This holds due to
the continuity of the Sobolev embedding H k (0, 1) ,→ C k−1 [0, 1], k ≥ 1.
Remark 5.2. Since the above proof only relies upon analyticity of the semigroup
generated by the operator matrix A, in view of Remark 4.5 the solution u to (4.5)
enjoys a regularizing effect also in the general case of coefficients νi that are not
necessarily positive.
In the following, we apply the standard semigroup theory and discuss how certain
properties of the solution (u, R) to the problem (3.1) are inherited from analogous
properties of the initial conditions f0 , R0 , q0 .
In the following we consider the weighted L∞ -spaces
X ∞ :=
m
Y
j=1
L∞ (0, 1; µj dx)
and
Y ∞ :=
n0
Y
j=1
L∞ (0, 1;
µj
dx)
αj
14
STEFANO CARDANOBILE AND DELIO MUGNOLO
endowed with
kf kX ∞ := max kfj kL∞ (0,1;µj dx) := max ess sup{µj |fj (x)| : 0 < x < 1},
1≤j≤m
1≤j≤m
and likewise
kRkY ∞ := max kRj k
1≤j≤n0
µ
L∞ (0,1; αj
j
dx)
|Rj (x)|
:= max ess sup µj
:0<x<1 .
1≤j≤n0
αj
A similar weighted `∞ -norm, denoted by k · kZ ∞ , can also endow the space Z.
Proposition 5.3. 1) Let the functions f0 , R0 be real-valued, and let q0 be a vector
of real numbers. Then the solution (u, R, q) is real-valued.
2) There exist some positive initial condition (f0 , R0 , q0 )> such that the solution
uj0 (t0 , x) < 0 or Rj0 (t0 , x) < 0 for some time t0 , some edge j0 , and all x in a set
of non-zero Lebesgue measure.
3) There exist initial conditions f0 , R0 , q0 such that kf0 kX ∞ ≤ 1, kR0 kY ∞ ≤ 1,
and kq0 kZ ∞ ≤ 1, but such that max{ku(t)kX ∞ , kR(t)kY ∞ , kq(t)kZ ∞ } > 1 at a
certain time t0 .
Proof. The statement can be reformulated in terms of abstract semigroup theory by
saying that (T (t))t≥0 is real, but it is neither positive, nor L∞ -contractive. Thus,
in order to prove the claim we check the criteria listed in Proposition A.6.
More precisely, let f ∈ V. Then Ref = (Ref, ReR, Redf )> ∈ V, and moreover
a(Ref, Imf) ∈ R. Thus, the reality of the semigroup follows by Proposition A.6.(i).
In order to prove that the semigroup is not positive, we are going to apply
Proposition A.6.(ii). Since the semigroup is real, it is sufficient to exhibit a realvalued function f ∈ V such that f+ ∈ V and a(f+ , f− ) > 0. To this aim, consider
f = (1, −1, 1)> .
Finally, we show that the semigroup is not L∞ -contractive. Take f ∈ V. Then
it can be checked as in the proof of [25, Prop. 4.3] that (1 ∧ |f|)signf ∈ V. Observe
0
0
that ((1 ∧ |f |)signf ) = 1{|f |≤1} f 0 and ((|f | − 1)+ signf ) = 1{|f |≥1} f 0 for all f ∈
H 1 (0, 1), so that for f = 2 · 1 and R = 1, hence the criterion in A.6.(iii) is not
verified. This concludes the proof.
Tentative interpretation 5.4. As we have seen, our system is governed by a
semigroup (T (t))t≥0 which is not positive. However, as a direct application of
Proposition A.7, we deduce that such a semigroup admits a modulus semigroup
(T ] (t))t≥0 . The linearized problem governed by (T ] (t))t≥0 is in fact such that it
coincides with (4.1), up to replacing the usual recovery variable R by a new term
R] and the first n0 equations by
∂uj
(t, x) = (cj u0j )0 (t, x)−pj (x)uj (t, x)+Rj] (t, x),
t ≥ 0, x ∈ (0, 1), j = 1, . . . , n0 .
∂t
The behaviour of such a hypothetical, non-biological system can be described as
follows: After the transmission of an action potential, the axon is not pulled by R]
toward its resting potential, i.e., R] represents a “fatiguing” variable that forces the
axon to remain in its depolarized state.
Proposition 5.5. Let (u0 , R0 , q0 )> be an initial condition such that u0 = R0 ≡ 0
on some edge ej of the graph. Then there exists t0 > 0 such that the solution
(u, R, q)> to (4.1) satisfies u(t0 , x) 6= 0 or R(t0 , x) 6= 0 for all x in a set of non-zero
Lebesgue measure on ej .
FITZHUGH–NAGUMO–RALL MODEL OF A NEURONAL NETWORK
15
Proof. Without loss of generality, assume that u0 = R0 ≡ 0 on the n0 − th edge of
the graph (it is not relevant to the proof whether the edge incides on nodes where
a generalized Kirchhoff, or rather a dynamical node condition holds). As pointed
out at the beginning of Section 3, we have bijectively associated every edge ej of
the graph to the j-th factor of the product spaces X 2 and Y 2 . Thus, our goal is
to prove that the semigroup (T (t))t≥0 governing the problem, i.e., the semigroup
generated by the operator associated to the form a, does not leave C invariant. Here
C is the closed subspace of X 2 given by
nY
nY
nY
n0 0 −1
0 +1
0 −1
Y
µj
1
C :=
L2 (0, 1; µj dx)×{0}×
L2 (0, 1; µj dx)×
L2 (0, 1; dx)×{0}×
C;
.
αj
νi
j=1
j=1
j=1
j=1
To this aim, we are going to use the criterion stated in Proposition A.6.(v).
More precisely, denote by P the projection of X 2 onto C, which is given by
Pu := (u1 , . . . , un0 −1 , 0, un0 +1 , . . . , fm , R1 , . . . , Rn0 −1 , 0, ψ1 , . . . , ψn0 )>
where u := (u, R, ψ)> . If the semigroup (T (t))t≥0 leaves C invariant, then necessarily P(V) ⊂ V, where as usual V denotes the domain of a defined in Section 4. To
see that this cannot hold, one can use similar arguments to those considered in [25,
Lemma 3.9].
With similar arguments one can show an analogous result if instead we consider
initial data satisfying Dirichlet boundary conditions on the nodes incident to a given
edge ej . Of course, one can obtain similar results when considering initial data that,
more generally, vanish on k edges, k = 1, . . . , m − 1, or in h nodes, h = 1, . . . , n0 .
Tentative interpretation 5.6. In Proposition 5.3 we have shown that the semigroup governing the problem is not positive, thus we cannot speak of its irreducibility
in a rigorous sense. Nevertheless, Proposition 5.5 states that no invariant subgraphs
can be invariant under the action of our semigroup. In other words, even if the initial conditions are compactly supported in a some cortical region, in the linear model
every neuron will still show electrical activity after a certain time.
In the same spirit, we prove in the following that the recovery term alone is
already sufficient to drive the membrane voltage away from 0.
Proposition 5.7. Consider an initial condition such that u0 = q0 = 0 6= R0 . Then
for a certain time t0 > 0 the solution (u, R, q)> satisfies u(t0 , x) 6= 0 for all x in a
set of non-zero Lebes gue measure. Similarly, if R0 = 0 6= f0 , then at a certain time
t0 the solution satisfies R(t0 ) 6= 0 for all x in a set of non-zero Lebesgue measure.
Proof. The statement simpy says that neither {0} × Y 2 × {0}, nor X 2 × {0} × Z
are invariant under the semigroup (T (t))t≥0 generated by the operator associated
to a. This can be shown again by applying Proposition A.6.(v).
Let us show that C := {0} × Y 2 × {0} is not invariant under (T (t))t≥0 . Consider
the projection P of X 2 onto C, which is of course given by P(f, R, ψ)> := (0, R, 0)> .
Observe that P(V) ⊂ V. By Proposition A.6 the invariance of C under (T (t))t≥0
would imply that Rea(Pf, f − Pf) ≥ 0 for all f ∈ V. This is however not true, since
n0 Z 1
X
a(Pf, f − Pf) := −
Rj (x)fj (x)µj dx.
j=1
0
The second assertion can be proven likewise.
16
STEFANO CARDANOBILE AND DELIO MUGNOLO
Let us now briefly consider the system obtained by dropping the dependence of R
and u in the first and second equations of (3.1), respectively. Such a system consists
of two, uncoupled differential equations: the first one is a boundary value problem of
parabolic type, thoroughly studied in [25], while the second one is a linear ordinary
differential equation. Both Cauchy problems are well-posed, and more precisely
each of them is governed by a positive semigroup: we denote by (ũ, q̃)> and R̃
the respective solutions. One could wonder what is the relation between (u, R, q)>
and (ũ, R̃, q̃)> , since for all positive initial conditions (ũ(t), R̃(t), q̃(t))> is a positive
vector, while (u(t), R(t), q(t))> need not be, cf. Proposition 5.3.
Proposition 5.8. There exist an initial condition (u0 , R0 , q0 )> and a certain time
t0 such that the solution (u(t0 ), R(t0 ), q(t0 ))> is not dominated by (ũ(t0 ), R̃(t0 ), q̃(t0 ))> ,
i.e., such that u(t0 , x) ≥ ũ(t0 , x) or R(t0 , x) ≥ R̃(t0 , x) for all x in a set of non-zero
Lebesgue measure.
Proof. The above statement can be reformulated by saying that the semigroup
(T (t))t≥0 governing (3.1) is not dominated (in the sense of positive semigroups) by
the semigroup (T̃ (t))t≥0 governing the uncoupled system. The latter semigroup is
in fact generated by the operator associated to the sesquilinear form ã : V × V → C
defined by
m Z 1
X
ã(f, g) :=
cj (x)fj0 (x)gj0 (x) + pj (x)fj (x)gj (x) µj dx
j=1 0
m Z 1
X
βj (x)
Rj (x)Sj (x) µj dx
+
αj (x)
j=1 0
n0
n
X
X
γi f g
di di +
γi dfi dgi .
+
ν
i
i=1
i=n +1
0
It is is easy to see that this form is in fact continuous and coercive, and moreover
it is symmetric and the semigroup (T̃ (t))t≥0 generated by the associated operator
is positive. Thus, we are in the position to apply Proposition A.6.(iv) in order to
disprove the domination of (T (t))t≥0 by (T̃ (t))t≥0 . In fact, letting f = S = 1 and
R = g ≡ 0, one has that fg ≡ 0, with
m Z 1
m
X
X
a(f, g) = −
Rj (x)fj (x)µj dx = −
µj < 0 = ã(f, g).
j=1
This concludes the proof.
0
j=1
Tentative interpretation 5.9. Since the semigroup (T̃ (t))t≥0 governing the uncoupled system is uniformly exponentially stable, the above result can be interpreted
in the following way: Both the uncoupled system (which is nothing but a Rall’s
model for passive fibers extended to the whole network) and our one tend toward
an equilibrium where no electrical potential is spreading through the neuronal network: however, for large initial data our coupled model begins its converging phase
only after a spike-like, non L∞ -contractive event has happened. Instead, in Rall’s
recovery-less model no such local potential increase is allowed and the convergence
is smoother.
FITZHUGH–NAGUMO–RALL MODEL OF A NEURONAL NETWORK
17
Remark 5.10. It is possible to generalize the system we have discussed so far and
consider a non-autonomous version of (3.1), i.e., we can allow more general, timedependent coefficients in the linear diffusion as well as in the recovery equations
and in the node conditions. In Remark 6.7 we also discuss the possibility of timedependent coefficients in the nonlinear terms.
This time dependence may be the mathematical counterpart of the experimental
observation that synaptical weights may (and do) self-adapt – in analogy with the
Hebbian rules for discrete neural networks usually used in neuroinformatics to model
learning phases, see [17, Chapt. 13]. One may also wish to allow the topology
of the network to change as the time goes by, i.e., to allow the synapses to be
“switched on and off”. In fact, we can describe both phenomena in the same way:
instead of letting the network structure evolve in time, we “switch all synapses
on” and consider a problem on an ideal brain where all connections are active;
however, to come back to a realistic system we modulate the diffusion of electric
potential through the network by a time-dependent coefficient that – coherently with
electrodynamical principles – cannot vanish completely. In other words, we allow the
parameters cj (·, x), pj (·, x), bj (·, x), j = 1, . . . , m, as well as excitation coefficients
in ramification points γi (·), i = 1, . . . , n, to be measurable functions with respect to
the time variable, and consider the non-autonomous abstract Cauchy problem
u̇(t) = A(t)u(t),
t ≥ s,
u(s) = us .
Without going into in details we just mention that well-posedness (in a suitably weak
sense) is an easy consequence of well-known results that go back to J.L. Lions. We
emphasize that, even if the above coefficients can change in time, the domain V(t)
of a(t) does in fact not depend on t, i.e., V = V(t) for all t ≥ 0, and we can apply
the theory developed in [19].
6. Spectral estimates and nonlinear theory
The aim of this section is to gather some spectral information about the operator
A and then to apply it to the discussion of our original nonlinear system.
We already know that A is not self-adjoint, this one cannot expect its spectrum
to be real. According to our form approach, our aim is to describe the numerical
range of a. We show that the numerical range contains a strip of semi-infinite length
and non-zero thickness: in other words, W (a) contains non-real numbers. We recall
that, by Assumptions 3.1, all the coefficients appearing in our system are real.
Proposition 6.1. The following properties hold.
(1) The numerical range of a is symmetric with respect to the real line, i.e.
z ∈ W (a) if and only if z̄ ∈ W (a).
(2) The numerical range of a is contained in a strip of semi-infinite length;
more precisely, there exists r > 0 such that
W (a) ⊂ {z ∈ C : Rez ≥ r, |Imz| ≤ α} ,
where α := max1≤j≤m kαj k∞ .
(3) There exists positive real numbers r1 , r2 , s, r1 < r2 , such that the inclusion
Tr1 ,r2 ,s := co {r1 , r2 + is, r2 − is} ∪ {z ∈ C : Rez ≥ r2 , |Imz| ≤ s} ⊂ W (a),
holds, where co denotes the convex hull of a set.
18
STEFANO CARDANOBILE AND DELIO MUGNOLO
Proof. 1) First observe that
m Z 1
X
a(f, f) =
cj (x)|fj0 (x)|2 + pj (x)|fj (x)|2 µj dx
j=1 0
n0 Z 1
X
+
+
j=1
n0
X
i=1
0
βj (x)
2
|Rj (x)| µj dx
2iIm(fj (x)Rj (x)) +
αj (x)
n
X
γi f 2
|di | +
γi |dfi |2 .
νi
i=n +1
0
Let f = (f, R, df )> ∈ V, such that kfkX 2 = 1. Consider g := (f, −R, df )> . It is
clear that g ∈ V. Since Rea(f, f) = Rea(g, g) and
n0 Z 1
X
Ima(f, f) = 2
Im(fj (x)Rj (x))µj dx = −Ima(g, g),
j=1
0
the first assertion follows.
2) We have already showed in the proof of Theorem 4.3 that the form a associated
to the operator A is coercive with constant C = min{c, p, β} (with the notation of
Assumptions 3.1). Let us denote E the imbedding constant such that k · kX 2 ≤
Ek · kV . Compute now
Rea(f, f) ≥ Ckfk2V ≥
min{c, p, β} 2
kfkX 2 .
E2
This shows that the Rez ≥ min{c,p,β}
for each z ∈ W (a). Moreover, let z = a(f, f) ∈
E2
W (a). Then by the Cauchy–Schwartz inequality
n0 Z 1
X
|Imz| = 2 Im
fj (x)Rj (x)µj dx ≤ 2αkf kX 2 kRkY 2 ≤ α(kf k2X 2 +kRk2Y 2 ) = α.
j=1 0
Pm R 1
2
0
3) Letting f = (f, 0, 0)> , one obtains a(f, f) =
j=1 0 cj (x)|fj (x)| dx for all
1
m
f ∈ (H0 (0, 1)) . Thus, [c, ∞) ⊂ W (a).
r R 1 dx
R1
−2
Let now ρ > 0 such that ρ > µj 0 αj (x) and set F := 2 µ1j − ρ2 0 αjdx
(x) .
Let j0 ∈ {1, . . . , n0 } and take f := (f, R, 0), where


 
0
0

 .. 

..
.


.


 
th
th



f (x) := iF sin(πx) ← j0 row,
R(x) := 
iρ ← j0 row,




..
.
 .. 


.
0
0
for some constants F, ρ > 0 to be chosen later. Observe now that
2
Z 1
Z 1 2
Z 1
ρ µj
dx
F
kfk2 =
F 2 sin2 (πx)µj dx +
dx = µj
+ ρ2
= 1.
2
0
0 αj (x)
0 αj (x)
Since all coefficients are assumed to be real,
Z 1
2ρF
Ima(f, f) = −ρF
sin(πx)dx = −
< 0.
π
0
FITZHUGH–NAGUMO–RALL MODEL OF A NEURONAL NETWORK
19
Moreover,
Rea(f, f)
=
m Z
X
j=1
≥
1
n0
X
F π cj (x) cos πx + pj (x) sin πx µj dx + ρ2
2 2
2
2
0
j=1
F 2 π2 c + p
2
m
X
j=1
µj + ρ2 β
n0
X
j=1
Z
1
µj
0
Z
0
1
βj (x)
µj dx
αj (x)
dx
.
αj (x)
Then, due to the convexity of W (a), and since we have already proved that it
contains a real interval of semi-infinite length, we conclude that Tr1 ,r2 ,s ⊂ W (a) for
some r1 , r2 , s.
We have shown that the numerical range of a is contained in a strip of semiinfinite length, and in particular in a parabola. By Proposition A.5 we promptly
obtain the following.
Corollary 6.2. The analytic semigroup generated by A has analyticity angle
1
The form domain V is is isometric to the fractional power domain D(−A) 2 .
π
2.
Proof. The analyticity angle can be deduced from Proposition A.5.(i). Moreover,
because W (a) is contained in a parabola, it follows by Proposition A.5.(ii) that the
1
injective, sectorial operator A has the square root property, i.e., V ∼
= D(−A) 2 . Our next goal is to prove that the original nonlinear problem introduced in
Section 3 is well-posed. We want to apply the techniques presented in [20, Chapt. 7].
To this aim, we need to prove that a suitable Lipschitz condition is satisfied by the
Nemitsky operator F with respect to the norm of the interpolation space V.
Lemma 6.3. For each r > 0 there exists L ≥ 0 such that
kF(f) − F(g)kX 2 ≤ Lkf − gkV
for all f, g ∈ V such that kfkV , kgkV ≤ r.
Proof. Let r > 0, j = 1, . . . , m, and f, g ∈ V such that kfkV , kgkV ≤ r. With the
notation in (3.2), due to the local Lipschitz continuity condition satisfied by the
polynomial Θj there exists Lj > 0 such that
|Θj (fj (x)) − Θj (gj (x))| ≤ Lj |(fj − gj )(x)|,
x ∈ (0, 1),
Thus, by the Jensen inequality and since H 1 (0, 1) is continuously imbedded in
C[0, 1] there exists L̃j > 0 such that
kΘj (fj ) − Θj (gj )k22 ≤ kΘj (fj ) − Θj (gj )k2∞ ≤ L2j kfj − gj k2∞ ≤ L̃2j kfj − gj k2H 1 (0,1) .
For the second component, we only observe that the nonlinear term is constant,
hence globally Lipschitz continuous. This shows that
kF(f) − F(g)k2X 2 ≤ max L̃2j kf − gk2V
1≤j≤m
whenever kfkV , kgkV ≤ r, which completes the proof.
We finally deduce the following local well-posedness result.
20
STEFANO CARDANOBILE AND DELIO MUGNOLO
Theorem 6.4. Let ũ ∈ V. Then there exist δ, r > 0 such that the problem (3.5)
has a unique classical solution u(·; u0 ) : [0, δ) → X 2 whenever ku0 − ũkV < r. In
addition, u depends in a locally continuous way on the initial data, i.e., for each
u0 , u˜0 ∈ V such that ku0 − ũkV < r and ku˜0 − ũkV < r there exists K > 0 such that
ku(t; u0 ) − u(t; u1 )kV ≤ Kku0 − u1 kV ,
0 ≤ t ≤ δ.
If moreover u0 ∈ D(A), then the solution is differentiable with respect to t up to 0.
Proof. By Lemma 6.3, [20, Thm. 7.1.2] applies. Hence, we obtain the existence of
a mild solution of class L∞ (0, δ; V ) ∩ C([0, δ], X) for some δ ≥ 0 and small initial
data. The continuous dependence on the initial data follows from [20, Thm 7.1.2],
too. Moreover, it follows from [20, Prop. 7.1.10] that the solution is classical, since
the nonlinear operator F is time-independent, and that it is strict if u0 ∈ D(A). Remark 6.5. It is clear that Proposition 6.3 and thus Theorem 6.4 apply in fact in
the general case of Nemitsky operator F associated to a polynomial. In particular,
we can promptly obtain well-posedness of several models involving nonlinear conditions in the active nodes that have been formulated in recent years: we refer to [16]
for a survey of such general conditions. Likewise, we may also consider alternative models for the nonlinear transmission in the axons: variants of the FitzHugh–
Nagumo model like those proposed by Hindmarsch–Rose, Roger–MacCulloch, and
Aliev–Panfilov (cf. [15]–[30]–[1]) all fit our framework – and so does even the original Hodgkin–Huxley model, up to enlarging the state space X 2 . Finally, several
authors have speculated that dendritic fibers may behave as active fibers, rather
than as purely passive ones, see e.g. [17, Chapter 19]. Observe that such models
can also be described by our methods.
Remark 6.6. Global well-posedness of (3.5) can be likely proved by showing that
A + F is, up to a globally Lipschitz continuous perturbation, a maximal monotone
operator: this would then allow to apply the classical theory developed in [8]. However, we omit such a proof: this would be lenghty, technical, and would offer no new
insight in the motivating biological problem.
Remark 6.7. It seems reasonable to conjecture that thresholds ξ1 , ξ2 need not be
constant, but rather evolve in time – in order to take account of the threshold variations during the refractory and enhancement periods following the transmission of
action potentials, cf. [31, § 4.7]. In fact, if we replace constants ξi by continuous
functions ξij = ξij (t), for i = 1, 2, j = 1, . . . , n0 , and t > 0, then [20, Thm. 7.1.2]
still applies and the problem remains well-posed. If furthermore the dependence of
ξi on time is regular enough, say ξi are locally Hölder continuous, i = 1, 2, then
also the regularity property holds.
We prove some qualitative properties of the solutions to the nonlinear system.
Proposition 6.8. Assume that ζ(·) 6≡ 0. Then
(1) 0 is not a stationary solution to the nonlinear system (3.5);
(2) there exist positive initial data u0 and time t0 such that u(t0 ; u0 ) is not a
positive function.
Proof. We first let K ∈ R and observe that u0 := (0, . . . , 0, K, . . . , K, 0, . . . , 0)> ,
lies in D(A). Thus, by Theorem 6.4 it is possible to differentiate the solution u in
a classical senso also in the origin.
FITZHUGH–NAGUMO–RALL MODEL OF A NEURONAL NETWORK
21
1
1) Let K = 0. Since u satisfies (3.5), we have ∂R
∂t (0; u0 ) = ζ1 6= 0. This shows
that 0 is not a stationary point.
2) Let now K ≥ β −1 kζ1 k∞ > 0, with β as in the Assumptions 3.1. Then
∂u1
∂t (0; u0 ) = −K < 0. This completes the proof.
We next state the equivalent of the Proposition 5.3.3), i.e., we show the unit ball
of X ∞ is not invariant under the flow associated to our nonlinear system.
Proposition 6.9. There exist initial data u0 = (u0 , R0 , ψ0 )> , with u0 6= 0, such
that the solution u to (3.1) satisfies
|u(t0 , x; u0 )| > |u0 (x)|
for all x in a set of non-zero Lesbegue measure and some time t0 .
Proof. Let us construct initial data u0 that satisfy the claim. Let ξ ∈ (ξ11 , ξ12 ) and
define u0 = (u0 , R0 , ψ0 )> defined as follows:
– u0j (x) ≡ 0 for all j = 2, . . . , m and R0j (x) ≡ 0 for all j = 2, . . . , n0 ;
– u01 (x) is of class Cc∞ (0, 1) and satisfies 0 ≤ u01 (x) ≤ ξ, x ∈ [0, 1], with
u01 (x) ≡ ξ for some 0 < a < b < 1 and all x ∈ (a, b);
– R01 (x) := p1 (x)ξ for all x ∈ [0, 1].
By construction, u0 is in D(A) and by Theorem 6.4 it is possible to differentiate the solution u in a classical senso also in the origin. By computing the first
component of the time derivative u̇(0) = Au0 + F(u(0)) = Au0 + F(u0 ), one obtains
ξ(ξ − ξ11 )(ξ − ξ12 )
∂u1
(0, x; u0 ) = −
> 0,
∂t
ξ12 (ξ12 − ξ11 )
This completes the proof.
x ∈ [a, b].
Tentative interpretation 6.10. In order to clarify the neurophysical counterpart
of the above results, we recall that all experimental observations corroborate the
following description of transmembrane voltage’s behaviour in excitable fibers during
transmission of an action potential:
• before an action potential initiates, the transmembrane voltage is observed
to cross the threshold value of approx. −55mV , which up to translation
corresponds to ξ1 ;
• the voltage quickly rises to approx. +40mV , which up to translation corresponds to the asymptotic signal amplitude ξ2 : depolarization occurs;
• afterwards, voltage suddenly sinks to an undershoot value of approx. −80mV ,
i.e., hyperpolarization occurs;
• finally, voltage reaches its resting value of approx. −70mV .
Proposition 6.9 thus has a direct neurophysical interpretation. In fact, it is proved
that there are solutions which are not L∞ -contractive, that is, depolarization may
occur in our system. Actually, we prove that it occurs for any overthreshold initial
data.
Furthermore, we have proved in Proposition 6.8 that negative solutions may arise
from positive initial data. In fact, since in our exposition the membrane resting
potential is shifted to 0, this proves that also hyperpolarization appears.
This is in sharp constrast with the results obtained in [25], where a dendritical tree
only was considered according to Rall’s lumped soma model. There, the potential
22
STEFANO CARDANOBILE AND DELIO MUGNOLO
diffusion in such a dendritical tree was shown to be governed by a positive and L∞ contractive semigroup. This did exclude de- or hyperpolarization effects, as it may
indeed be expected from a purely passive fiber.
7. Conclusions
We finally summarize some of the aforecollected mathematical results that are
relevant to a biological interpretation.
• The linear system (4.1) is governed by a semigroup that converges towards
a zero-solution, whereas the constant 0 function is not a solution to (3.1).
A fortiori, the nonlinear system will not converge to 0.
• As shown in the Propositions 6.8 and 6.9, (3.1) is governed by a nonlinear
flow that is neither positive nor L∞ -contractive, i.e., hyper- and depolarization may occur for suitable initial data. These properties are shared by
the linear problem (4.1) that is naturally associated to (3.1).
• Further properties are common to the linear problem and to the nonlinear one. This is hardly suprising, as (4.1) can in fact be considered the
linearization of (3.5) around the 0-solution, up to neglecting the external
recovery force ζ in the FitzHugh–Nagumo system.
• The operator associated to the linear problem is not self-adjoint. In fact,
arguments from geometrical theory of nonlinear flows suggest that the solutions to (3.1) are bounded but do not converge to an equilibrium point.
One may conjecture that solutions are asymptotically almost periodic or, more
generally, that they are in some sense “asymptotically oscillating”. Making this
intuition more precise may lead to a deeper understanding of spontaneuos activity
as well as of regular firing patterns of neurons. We thus address the following.
Open problem. Are there initial data for the nonlinear problem such that the
corresponding solution is periodic?
Appendix A. A brief reminder of sesquilinear form methods
For the sake of completeness, we collect in this section some known results on
sesquilinear forms and associated semigroups we have needed in our proofs. However, the assumptions under which most of the results below are formulated are by
no means sharp: we refer to [26] or [3] the reader interested in this beautiful theory.
We first consider a σ-finite measure space (X, µ) and the Hilbert space H :=
L2 (X). We also consider another Hilbert space V such that V is densely and
continuously imbedded in H. We further consider a sesquilinear mapping a : V ×
V → C. We will call a a sesquilinear form throughout this paper. We emphasize
that a is not assumed to be symmetric.
Definition A.1. The form a is called coercive and continuous if there exist α > 0 and M ≥ 0 such that for all f, g ∈ V
• Re a(f, f) ≥ α‖f‖²_V,
• |a(f, g)| ≤ M ‖f‖_V ‖g‖_V,
respectively. We call
D(A) := {f ∈ V : ∃h ∈ H s.t. a(f, g) = (h | g)_H for all g ∈ V}, Af := −h,
the operator associated with a.
Due to the density of V in H, one sees that the operator associated with a is uniquely determined. Observe that A is self-adjoint if and only if a is symmetric, i.e., if and only if a(f, g) coincides with the complex conjugate of a(g, f) for all f, g ∈ V.
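A prototypical example, to which we will return several times, is the classical Dirichlet form: take X := (0, 1) with the Lebesgue measure, H := L²(0, 1), V := H¹₀(0, 1), and a(f, g) := (f′ | g′)_H for f, g ∈ V. Then a is symmetric, continuous with M = 1, and coercive by the Poincaré inequality. Integrating by parts one sees that the associated operator is the second derivative with Dirichlet boundary conditions, i.e., D(A) = H²(0, 1) ∩ H¹₀(0, 1) and Af = f′′, whose semigroup is the Dirichlet heat semigroup.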
The following assertion is a direct consequence of [26, Prop. 1.51 and Thm. 1.52].
Proposition A.2. Let a : V × V → C be a coercive and continuous sesquilinear form. Then the associated operator A generates an analytic semigroup (T(t))t≥0 on H. Moreover, ‖T(t)f‖_H ≤ e^{−εt}‖f‖_H for some ε > 0 and all t ≥ 0.
In many concrete applications it is crucial to check whether the sum A + B of
two operators generates a semigroup. The following criterion, due to Desch and
Schappacher (cf. [5, Thm. 3.7.25]), is often quite useful.
Proposition A.3. Assume that an operator A generates an analytic semigroup (T(t))t≥0 on a Banach space X, and let a linear operator B be compact from D(A) to X. Then A + B generates an analytic semigroup on X with the same angle of analyticity.
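As a simple illustration, consider again the Dirichlet example above and set X := L²(0, 1), A := d²/dx² with D(A) = H²(0, 1) ∩ H¹₀(0, 1), and Bf := f′. Since the graph norm of A is equivalent to the H²-norm and the embedding of H²(0, 1) into H¹(0, 1) is compact, B is compact from D(A) to X; hence the perturbed operator A + B, a diffusion with an additional drift term, again generates an analytic semigroup of angle π/2.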
Much information about a is given by some subset of C, which we introduce below.
Definition A.4. The numerical range of a sesquilinear form a : V × V → C is defined as
W(a) := {a(f, f) : f ∈ V, ‖f‖_H = 1}.
The numerical range of a form plays a rôle similar to that of the spectrum of an
operator, cf. [26, § 1.2] or [14, § C.3]. Indeed, it is known that the numerical range
of a is closed, convex, and −σ(A) ⊂ W (a), where σ(A) denotes the spectrum of the
operator A associated with a. There is a rich theory for forms whose numerical range is contained in a parabola around the real axis. We limit ourselves to recalling the following recent result due to Crouzeix [9], which, combined with [5, Thm. 3.14.17], also yields the analyticity angle of the semigroup generated by A, as well as a known result due to McIntosh concerning domains of fractional powers, cf. [3, § 5.6.6]. Observe that a coercive continuous sesquilinear form is associated with an invertible sectorial operator, hence we may consider the square root (−A)^{1/2}.
Proposition A.5. Let a : V × V → C be a coercive, continuous sesquilinear form. Assume its numerical range to be contained in a parabola. Then the following assertions hold.
(i) The associated operator generates a cosine operator function. In particular, the angle of analyticity of the semigroup associated with a is π/2.
(ii) The operator A associated with a has the Kato square-root property, i.e., V ≅ D((−A)^{1/2}).
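In the Dirichlet example above one has W(a) = {‖f′‖²_{L²} : f ∈ H¹₀(0, 1), ‖f‖_{L²} = 1} ⊂ [π², ∞), a subset of the real axis that is trivially contained in a parabola. Accordingly, the second derivative with Dirichlet boundary conditions generates a cosine operator function (governing the wave equation with Dirichlet boundary conditions), and the Kato property D((−A)^{1/2}) = H¹₀(0, 1) = V holds. For symmetric forms both assertions are of course classical; the point of Proposition A.5 is that they persist for non-symmetric forms with numerical range in a parabola.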
Using form methods allows one to deduce simple, almost algebraic criteria characterizing crucial properties of semigroups, and thus of solutions to Cauchy problems. The following results all come from [26, § 2]. In the following we consider the space L∞(X, µ), defined as usual as the Banach space of all µ-essentially bounded functions from X to C. Moreover, the mapping sign f is defined by sign f(x) := f(x)/|f(x)| if f(x) ≠ 0 and sign f(x) := 0 otherwise, for f ∈ H and x ∈ X.
Proposition A.6. Let a : V × V → C be a coercive, continuous sesquilinear form,
and denote by (T (t))t≥0 the associated semigroup on H, cf. Proposition 4.3. Then
the following assertions hold.
(i) (T(t))t≥0 is real (i.e., T(t)f is real-valued for all t ≥ 0 and all real-valued f) if and only if for all f ∈ V one has Re f ∈ V and a(Re f, Im f) ∈ R.
(ii) (T(t))t≥0 is positive (i.e., T(t)f is positive for all t ≥ 0 and all positive f) if and only if for all f ∈ V one has Re f⁺ ∈ V and a(Re f⁺, Re f⁻) ≤ 0.
(iii) Let b : V × V → C be another coercive and continuous sesquilinear form, and denote by (S(t))t≥0 the associated semigroup. Assume (T(t))t≥0 to be positive. Then (T(t))t≥0 dominates (S(t))t≥0 in the sense of positive semigroups (i.e., |S(t)f| ≤ T(t)|f| for all t ≥ 0 and f ∈ H) if and only if for all f, g ∈ V such that f ḡ ≥ 0 one has a(|f|, |g|) ≤ Re b(f, g).
(iv) (T(t))t≥0 is L∞-contractive (i.e., T(t) maps functions in the unit ball of L∞(X, µ) to functions in the unit ball of L∞(X, µ) for all t ≥ 0) if and only if for all f ∈ V one has (1 ∧ |f|) sign f ∈ V and Re a((1 ∧ |f|) sign f, (|f| − 1)⁺ sign f) ≥ 0.
(v) Let P be the orthogonal projection of H onto a closed subspace Y of H. Then the semigroup (T(t))t≥0 leaves Y invariant (i.e., T(t)Y ⊂ Y for all t ≥ 0) if and only if for all f ∈ V, g ∈ V ∩ Y, h ∈ V ∩ Y⊥ one has Pf ∈ V and a(g, h) = 0.
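In the Dirichlet example above these criteria are easily checked. For instance, if f ∈ H¹₀(0, 1) is real-valued, then f⁺ ∈ H¹₀(0, 1) and (f⁺)′ = f′1_{f>0}, (f⁻)′ = −f′1_{f<0} almost everywhere; since the two level sets are disjoint, a(f⁺, f⁻) = 0 ≤ 0, so that the Dirichlet heat semigroup is positive by (ii). A similar, slightly longer computation based on (iv) shows that it is also L∞-contractive, in accordance with the parabolic maximum principle.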
Assume that for a given non-positive semigroup there exist positive semigroups
dominating it. It is natural to wonder whether there exists a “minimal” one, in some
sense. Indeed, one defines the modulus semigroup of (T (t))t≥0 as the unique positive
semigroup (T♯(t))t≥0 that dominates (T(t))t≥0 and is dominated by all positive
semigroups that also dominate (T (t))t≥0 . In general, a non-positive semigroup has
no modulus semigroup: however, a very special class of semigroups admitting a
modulus has been exhibited in [32], where the following has been proved.
Proposition A.7. Let (X1, µ1), (X2, µ2) be σ-finite measure spaces. Let A, D be generators of positive semigroups on H1 := L²(X1) and H2 := L²(X2), respectively. Assume B, C to be bounded, positive operators from H2 to H1 and from H1 to H2, respectively. Let β, γ ∈ C. Then the operator matrix 𝒜 defined by
𝒜(u1, u2)ᵀ := (Au1 + βBu2, γCu1 + Du2)ᵀ, (u1, u2) ∈ H1 × H2,
generates on H1 × H2 a semigroup which admits a modulus semigroup. This modulus semigroup is generated by the operator 𝒜♯ defined by
𝒜♯(u1, u2)ᵀ := (Au1 + |β|Bu2, |γ|Cu1 + Du2)ᵀ.
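In the simplest case in which X1 and X2 are one-point spaces, so that H1 = H2 = C, A = a and D = d are real numbers, and B = C = 1, Proposition A.7 asserts that the semigroup generated by the 2 × 2 matrix 𝒜(u1, u2)ᵀ = (au1 + βu2, γu1 + du2)ᵀ satisfies |e^{t𝒜}u| ≤ e^{t𝒜♯}|u| entrywise for all u ∈ C² and t ≥ 0, where 𝒜♯(u1, u2)ᵀ = (au1 + |β|u2, |γ|u1 + du2)ᵀ; this is precisely the matrix situation studied in [32].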
Acknowledgment. We are most grateful to the anonymous referee; his suggestions
and criticisms greatly helped us to revise our manuscript and to focus on many
biological issues that were not yet properly discussed in the first draft. We are
also indebted to Prof. Günther Palm of the Department of Neural Information
Processing at the University of Ulm for motivating discussions and many insights
into the neuroscientific literature.
References
[1] Aliev RR, Panfilov AV. A simple two-variable model of cardiac excitation. Chaos Solitons and
Fractals 1996; 7:293–301.
[2] Amann H. Gewöhnliche Differentialgleichungen. de Gruyter: New York, 1995.
[3] Arendt W. Semigroups and Evolution Equations: Functional Calculus, Regularity and Kernel Estimates. In: Dafermos CM and Feireisl E (eds.) Handbook of Differential Equations: Evolutionary Equations – Vol. 1. North Holland: Amsterdam, 2004.
[4] Arendt W. Heat Kernels, manuscript of the 9th Internet Seminar, available at
https://tulka.mathematik.uni-ulm.de/index.php/Lectures.
[5] Arendt W, Batty CJK, Hieber M, Neubrander F. Vector-valued Laplace Transforms and
Cauchy Problems. Monographs in Mathematics 96, Birkhäuser: Basel, 2001.
[6] Baer SM, Tier C. An analysis of a dendritic neuron model with an active membrane site. J.
Math. Biol. 1986; 23:137–161.
[7] von Below J. A characteristic equation associated with an eigenvalue problem on C²-networks.
Lin. Algebra Appl. 1985; 71:309–325.
[8] Brézis H. Opérateurs Maximaux Monotones et Semi-Groupes de Contractions dans les Espaces
de Hilbert. North-Holland, Amsterdam, 1973.
[9] Crouzeix M. Operators with numerical range in a parabola. Arch. Math. 2004; 82:517–527.
[10] Dodge FA, Cooley JW. Action Potential of the Motorneuron. IBM J. Res. Develop. 1973;
17:219–229.
[11] Evans JD. Analysis of a multiple equivalent cylinder model with generalized taper. IMA J.
Math. Appl. Medic. Biol. 2000; 17:347-377.
[12] Evans JD, Kember GC, Major G. Techniques for obtaining analytical solutions to the multi-cylinder somatic shunt cable model for passive neurones. Biophys. J. 1992; 63:350–365.
[13] FitzHugh R. Impulses and physiological states in theoretical models of nerve membrane.
Biophysics. J. 1961; 1:445–466.
[14] Haase M. The Functional Calculus for Sectorial Operators, Operator Theory: Advances and
Applications 169, Birkhäuser: Basel, 2006.
[15] Hindmarsh JL, Rose RM. A model of the nerve impulse using two first-order differential
equations. Nature 1982; 296:162–164.
[16] Izhikevich E. Which Model to Use for Cortical Spiking Neurons? IEEE Trans. Neural Networks 2004; 15:1063–1070.
[17] Koch C. Biophysics of computation: information processing in single neurons. Oxford University Press: New York, 1999.
[18] Kramar Fijavž M, Mugnolo D, Sikolya E. Variational and semigroup methods for waves and
diffusion in networks. Appl. Math. Optim., in press.
[19] Lions JL. Equations Différentielles Opérationelles et Problèmes aux Limites. Springer-Verlag:
Berlin, 1961.
[20] Lunardi A. Analytic Semigroups and Optimal Regularity in Parabolic Problems. Progress in
Nonlinear Differential Equations and their Applications 16, Birkhäuser: Basel, 1995.
[21] Major G, Evans JD, Jack JJ. Solutions for transients in arbitrarily branching cables: I. Voltage recording with a somatic shunt. Biophys. J. 1993; 65:423–449.
[22] Major G, Evans JD, Jack JJ. Solutions for transients in arbitrarily branching cables: II. Voltage clamp theory. Biophys. J. 1993; 65:450–468.
[23] Meunier C, Segev I. Neurons as Physical Objects: Structure, Dynamics and Function. In:
Moss F and Gielen S (eds.): Neuro-Informatics and Neuronal Modelling. North-Holland: Amsterdam, 2001:353–467.
[24] Mugnolo D. Gaussian estimates for a heat equation on a network. Networks and Heterogeneous
Media 2007; 2:55–79.
[25] Mugnolo D, Romanelli S. Dynamic and generalized Wentzell node conditions for network
equations. Math. Methods Appl. Sciences 2007; 30:681–706.
[26] Ouhabaz E.M. Analysis of Heat Equations on Domains. LMS Monograph Series 30, Princeton University Press: Princeton, 2004.
[27] Poznanski RR. Membrane Voltage Changes in Passive Dendritic Trees: A Tapering Equivalent Cylinder Model. IMA J. Math. Appl. Med. Biol. 1990; 5:113–145.
[28] Rall W. Branching dendritic trees and motoneurone membrane resistivity. Exp. Neurol. 1959;
1:491–527.
[29] Rall W. Transients in neuron with arbitrary dendritic branching and shunted soma. Biophys.
J. 1993; 65:15–16.
[30] Roger JM, McCulloch AD. A collocation Galerkin finite element model of cardiac action
potential propagation. IEEE Trans. Biomed. Engin. 1994; 41:743–757.
[31] Scott A. Neuroscience. A Mathematical Primer. Springer-Verlag: New York, 2002.
[32] Stein M, Voigt J. The modulus of a matrix semigroup. Arch. Math. 2004; 82:311–316.
[33] Traub R. Motorneurons of Different Geometry and the Size Principle. Biol. Cybernetics. 1977;
25:163–176.
[34] Van Pelt J, Uylings HBM. Natural Variability in the Geometry of Dendritic Branching Patterns. In: Poznanski RR (ed.), Modelling in the Neurosciences: From Ionic Channels to Neural
Networks. Harwood Academic Publishers: Amsterdam, 1999.
[35] Yuste R, Tank DW. Dendritic integration in mammalian neurons, a century after Cajal,
Neuron 1996; 16:701–716.
Institut für Angewandte Analysis, Helmholtzstraße 18, D-89081 Ulm, Germany
E-mail address: stefano.cardanobile@uni-ulm.de, delio.mugnolo@uni-ulm.de