J Comput Neurosci (2013) 34:231–243
DOI 10.1007/s10827-012-0417-5
Frequency separation by an excitatory-inhibitory network
Alla Borisyuk · Janet Best · David Terman
Received: 19 March 2012 / Revised: 29 May 2012 / Accepted: 13 July 2012 / Published online: 3 August 2012
© Springer Science+Business Media, LLC 2012
Abstract We consider a situation in which individual
features of the input are represented in the neural system by different frequencies of periodic firings. Thus,
if two of the features are presented concurrently, the
input to the system will consist of a superposition
of two periodic trains. In this paper we present an
algorithm that is capable of extracting the individual
features from the composite signal by separating the
signal into periodic spike trains with different frequencies. We show that the algorithm can be implemented
in a biophysically based excitatory-inhibitory network
model. The frequency separation process works over
a range of frequencies determined by time constants
of the model’s intrinsic variables. It does not rely on a
“resonance” phenomenon and is not tuned to a discrete
set of frequencies. The frequency separation is still
reliable when the timing of incoming spikes is noisy.
Keywords Excitatory-inhibitory networks ·
Decorrelation · Oscillations
Action Editor: Carson C. Chow
A. Borisyuk (B)
Department of Mathematics, University of Utah,
155 S 1400 E, Salt Lake City, UT 84112, USA
e-mail: borisyuk@math.utah.edu
J. Best
Department of Mathematics,
Mathematical Biosciences Institute, Ohio State University,
231 W 18th Ave., Columbus, OH 43210, USA
e-mail: jbest@math.ohio-state.edu
D. Terman
Department of Mathematics, Ohio State University,
231 W 18th Ave., Columbus, OH 43210, USA
e-mail: terman@math.ohio-state.edu
1 Introduction
Oscillatory or near-oscillatory activity is a ubiquitous
feature of neuronal networks. Examples range from auditory nerve responses to pure tones (Rose et al. 1967)
to the theta-rhythm in the hippocampus (Green and Arduini
1954), and to odor-evoked oscillations in the mammalian olfactory bulb (Adrian 1950). The exact role of these
oscillations in neural coding remains elusive. However,
in many systems the rate of near-periodic firing is well
correlated with features of the stimulus, such as orientation tuning in primary visual cortex (Arieli et al. 1995),
coding of head direction (Taube et al. 1990), air current
direction coding in cricket cercal system (Landolfa and
Miller 1995), and so on.
It has been suggested in numerous earlier studies
that the natural frequency of oscillators may be used
to represent stimulus features, and adaptation of oscillator frequencies can be used as a mechanism of
learning and memory (e.g. Torras 1986; Niebur et al.
2002; Kuramoto 1991; Kazanovich and Borisyuk 2006).
However, because sensory input typically encodes
multiple features, there must be some mechanism by
which the brain extracts individual features from composite signals.
It has also been suggested that the recurrent activity
in excitatory-inhibitory networks may serve to decorrelate (reduce the correlations in) the spiking activity
(Ecker et al. 2010; Renart et al. 2010; Tetzlaff et al.
2010). This type of network has been studied in a
variety of different contexts, including models for sleep
rhythms, Parkinsonian rhythms, olfaction and working
memory. In some sense, the present paper is motivated
by work presented in (Bar-Gad et al. 2000; Bar-Gad
and Bergman 2001), where it is suggested that the neuronal
activity within the basal ganglia serves to reduce the dimensionality and decorrelate information coming from
cortical areas. Here, we address the problem of frequency separation as a well-formulated particular first
step in studying the “detangling” of input information.
The basic setup for the problem we consider is shown
in Fig. 1(a). Two stimuli, represented by different frequencies of firing at lower levels of processing, are
presented simultaneously to the neuronal network (top
panel in Fig. 1(a)). Thus the incoming signal is a superposition of two periodic pulse trains. The main task of
the system is to produce two outputs: one at each of the
original frequencies making up the input (lowest panel
in Fig. 1(a)).
First, we present an algorithm for frequency separation. We formulate a set of rules and prove that
most frequencies can be successfully separated. Next,
we build a biophysical model (based on Terman et al.
2002 with some modifications) that implements the algorithm and demonstrate its functionality in numerical
simulations. Even though the implementation of the
algorithm is not exact, we show in numerical simulations that the frequency separation works for a range
of frequencies. The mechanism does not rely on precise
tuning to a predetermined set of frequencies. We also
show that errors in the frequency separation can be
avoided by changing the relative phase of the inputs or
the initial condition of the model elements, and that the
frequency separation is still successful when the times
of incoming pulses are perturbed by random amounts.
2 Frequency separation algorithm
2.1 Setup
The frequency separator in the algorithmic form consists of two response units, each represented by a pair
of variables (xi , yi ), i = 1, 2 (Fig. 1(a)). Both x and y
evolve according to the rules below. The input arrives
at both cells at times (tj) obtained by the superposition of two
pulse trains with interpulse intervals T1 and T2 (without
loss of generality T1 < T2 ). At each input pulse, one
of the units responds by resetting its x and y values
according to the rules described in the next section
and the response is recorded. The variable xi tracks
the time since the most recent response of unit i, and
the variable yi represents the anticipated time until
the next response (the difference between the previous
inter-response time and the time since the most recent
response). We use the notation f(t−) and f(t+) for the left
and right limits of a function f at t.
2.2 Rules
The algorithm can be summarized as follows: Cell 1 always responds if it is expecting a pulse (i.e., y1 ≤ 0). The
first time a pulse arrives earlier than cell 1 anticipates
(i.e. while y1 > 0), cell 2 responds. Thereafter, cell 1
also responds to an unexpected pulse if cell 1 is less
surprised than cell 2 (i.e., y2 ≥ y1 > 0).
Fig. 1 Frequency separation algorithm. (a) Schematic. The input
is a superposition of two periodic pulse trains (black and grey);
the outputs are the individual periodic trains. The algorithm is
represented by the quantities xi, yi, i = 1, 2. (b) Example of
frequency separation. The input pulse trains (bottom panel)
have interpulse intervals of 115 and 200 ms. The dynamics of the xi
variables (top) and yi variables (middle) vary according to the
algorithm rules (see text). The rule used at each incoming pulse is
indicated by a letter between the top two panels. The numbers above
the bottom panel indicate which unit responded to each incoming
pulse. Note that after about 400 ms all black incoming pulses are
picked out by cell 1 and all grey ones by cell 2 (the frequencies have
been separated)
Formally:

– Dynamics: xi = yi = 0 at time 0; dxi/dt = 1 and dyi/dt = −1;
– Response at pulse time tj:
  – If y1 ≤ 0, cell 1 responds. (rule A)
  – If cell 2 has never responded, it sets y2(tj+) = x2(tj−), x2(tj+) = x1(tj−). (rule A1)
  – If y2 ≥ y1 > 0, cell 1 responds, unless cell 2 has never responded, in which case cell 2 responds. (rule B)
  – If y1 > 0 and y1 > y2, cell 2 responds. (rule C)
  – If two input pulses coincide, both cells respond. (rule D)
– Reset: when cell i responds at time t, it resets yi(t+) = xi(t−), xi(t+) = 0; when cell 1 responds and cell 2 has never responded, rule A1 is used.
An example of the algorithm application is shown
in Fig. 1(b). The top two panels track the evolution of
x1,2 and y1,2 with time. The bottom panel shows the
incoming pulse train and the number over each pulse
indicates which of the two cells responded. Two periodic trains have different shades (black and grey) for
ease of viewing, but this information is not available to
the model system. One can see that after about 400 ms
cell 1 responds at all black and cell 2 at all grey pulses,
demonstrating the success of frequency separation. To
see how the rules are applied, let us consider the input
pulse near the 400 ms mark. At that time y2 > y1 > 0, so
rule B applies: cell 1 responds and x1 and y1
are reset. At the previous input pulse y1 < 0, so rule
A applied and cell 1 responded then as well.
2.3 Validity of the algorithm
We will now formally show that for any input frequencies the algorithm works for selected initial conditions.
First we introduce notation and definitions; then
we formulate and prove the main result about the algorithm.
Let Ti (i = 1, 2) be the periods of the original periodic input pulse trains. We assume that Ti are integers
(in ms), and, without loss of generality, that T1 < T2
and
T2 = mT1 + R
where m is an integer and R < T1 .
We will use the notation t_i^j for the time of the jth spike from
the train with period Ti, while xk and yk (k = 1, 2) will
be the algorithm variables, as above.
Definition We say cell k remembers period Ti at time
t if its last response at or before time t happened at
the last encountered Ti pulse (say, at time t_i^j ≤ t), and
xk((t_i^j)−) = Ti.

We note that the event “cell k remembers Ti” has to
occur first at the time of a Ti pulse. If it happens at time t_i^j
(both k and i being 1 or 2), then the cell will continue
to remember Ti for any time t > t_i^j until one of the
following events occurs: cell k does not respond to a
Ti pulse; cell k responds to a Ti pulse, but just before
the response xk ≠ Ti; or cell k responds to a pulse that
does not belong to the Ti train.
Definition We say “cell 1 remembers period Ti and
cell 2 remembers period Tk ” at time t if at time t each
cell remembers the corresponding period according to
the above definition.
Definition We say that the algorithm separates frequencies if there exists N such that for all input pulse
times tn, n > N, cell 1 responds to every Ti pulse
and cell 2 responds to every Tk pulse (i ≠ k).
Definition We say that initial configuration of the
inputs is an increasing interval initial condition if the
time to the first input pulse and 3 subsequent interpulse
intervals form a strictly increasing sequence.
The increasing-interval initial condition occurs frequently. It holds, for instance, for m > 1 when the
first spike from the T1 train occurs early and the one from the
T2 train follows soon after, namely: t_1^1 < T1/2 and t_1^1 < t_2^1 <
t_1^1 + T1/2.
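For concreteness, the increasing-interval condition is easy to check programmatically (a hypothetical helper of ours, not part of the paper):

```python
def is_increasing_interval(pulse_times):
    """True if the time to the first pulse and the next three interpulse
    intervals form a strictly increasing sequence (needs >= 4 pulses)."""
    gaps = [pulse_times[0]] + [b - a for a, b in zip(pulse_times, pulse_times[1:4])]
    return len(gaps) == 4 and all(g1 < g2 for g1, g2 in zip(gaps, gaps[1:]))
```

For example, the merged train starting 20, 60, 130, 230 has gaps 20 < 40 < 70 < 100 and satisfies the condition.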
Lemma 1 If cell 1 remembers Ti and cell 2 remembers
Tk (i ≠ k) at some time t, then the frequencies are
separated.
Proof If the event “cell 1 remembers Ti and cell 2
remembers Tk ” is true at some time t between the
incoming pulses, it will persist until the next incoming
pulse. Thus, we need to show that if “cell 1 remembers
Ti and cell 2 remembers Tk” occurs at one of the incoming pulses, then it will also occur at the subsequent
pulse. Then, by induction, it will occur indefinitely,
meaning that cell 1 will respond to every input from the
Ti sequence, and cell 2 will respond to every input from the
Tk sequence.
1. Suppose “cell 1 remembers Ti and cell 2 remembers
Tk” occurs at some t∗ = t_k^j (which may also be coincident with one of the Ti pulses). At t∗−, (x1, y1, x2, y2) =
(a, Ti − a, Tk, y2∗), where a is the time from the last Ti
input, and y2∗ is an arbitrary value. Since cell 2
responds, we have at t∗+: (x1, y1, x2, y2) = (a, Ti − a, 0, Tk).
(If t_k^j was coincident with one of the Ti pulses,
then both cells respond by rule D and the values
after reset are (0, Ti, 0, Tk).)

If the next incoming input at time t∗∗ is from the Ti sequence, we will have x1(t∗∗−) = Ti and y1(t∗∗−) = 0. Thus,
cell 1 will respond by rule A, and we still have “cell
1 remembers Ti and cell 2 remembers Tk”.

Conversely, if the next incoming input at time
t∗∗ is from the Tk sequence (which also implies Ti >
Tk), then at t∗∗−: (x1, y1, x2, y2) = (b, Ti − b, Tk, 0),
where b is the time since the last Ti spike. We have
y1(t∗∗−) > 0 = y2(t∗∗−), thus cell 2 will respond by rule
C and “cell 1 remembers Ti and cell 2 remembers
Tk” persists.

Finally, if the next incoming input at time t∗∗ is coincident, then the values before reset are (Ti, 0, Tk, 0);
both cells respond by rule D and “cell 1 remembers
Ti and cell 2 remembers Tk” remains.

2. If “cell 1 remembers Ti and cell 2 remembers Tk”
occurs at some t_i^j, we can similarly show that for
any subsequent pulse (Ti, Tk, or coincident) “cell
1 remembers Ti and cell 2 remembers Tk” will
remain true.

As a result, “cell 1 remembers Ti and cell 2 remembers Tk” will hold indefinitely, meaning that cell
1 will respond to every input from the Ti sequence,
and cell 2 will respond to every input from the Tk
sequence.
Theorem 1 Suppose T1 , T2 are positive integers with
T1 < T2 . Under the setup described above, for any pair
T1 , T2 , there exist initial conditions such that frequencies
will be separated. In particular, it will happen in the
following situations:
1. for initial conditions in which the first two input pulses
belong to the T1 train;
2. for increasing-interval initial conditions with m > 1
(where m is such that T2 = mT1 + R, see above).
Proof
Case 1 Suppose that the first two input pulses belong to the T1
train, and there are no coincident inputs up to t_2^2. Let us
say the first pulse arrives at time t = t_1^1 = a < T1; then
at t = a−, (x1, y1, x2, y2) = (a, −a, a, −a). Here y1 < 0,
so cell 1 responds by rule A, and at t = a+ the values of
(x1, y1, x2, y2) become (0, a, a, a) by rule A1.
The next pulse is again a T1 pulse, at t = t_1^2. The values of
(x1, y1, x2, y2) at t− are (T1, a − T1, a + T1, a − T1). By
rule A cell 1 responds and cell 1 remembers T1 . The
values are reset to (0, T1 , T1 , a + T1 ).
Now, as long as T1 pulses continue to arrive, cell
1 will respond by rule A and the reset values each
time will be (0, T1, T1, 2T1), and cell 1 will continue to
remember T1 . At some point we will get a T2 pulse.
Let’s say it arrives at time b after the previous T1 pulse.
Just before the first T2 input the values are (b, T1 −
b, T1 + b, 2T1 − b), or (b, T1 − b, T1 + b, a + T1 − b)
if the T2 pulse arrives right away, as the third pulse of the
joint train. In either case we have y1 > 0 and cell 2 has
never responded, so cell 2 responds by rule B and the reset
values are (b, T1 − b, 0, T1 + b). Next, as long as T1
inputs are arriving, we will have y1 = 0, cell 1 responding, and cell 1 still remembering T1. Finally, when t_2^2
arrives (some time c after the previous T1 pulse), we
will have (x1, y1, x2, y2) = (c, T1 − c, T2, T1 + b − T2).
Here y1 > 0, and y2 = T1 + b − T2 . Also, we must have
T1 + b < T2 (even for m = 1), otherwise we would
have had a T2 pulse earlier. Thus, cell 2 responds again
by rule C and cell 2 remembers T2 . Frequencies are
separated by Lemma 1.
Now consider the case when the first T2 pulse is coincident with one of the T1 pulses. Note that because of
the theorem assumption it can be coincident only with t_1^3
or later, which also implies m ≥ 2 and T2 > 2T1. Just
before the first T2 pulse, similarly to the above, the values of
(x1, y1, x2, y2) are (T1, 0, 2T1, T1) or (T1, 0, 2T1, a) (the
latter happens if t_2^1 = t_1^3). In both cases both cell 1
and cell 2 respond (by rule D), after the reset the
values become (0, T1, 0, 2T1), and cell 1 remembers T1.
Then, as long as T1 pulses continue arriving, cell 1 responds by rule A and remembers T1. When the second
T2 pulse arrives, it could again be coincident with one
of the T1 pulses, in which case the values just before t_2^2 will
be (T1, 0, T2, 2T1 − T2); both cells respond by rule D,
and cell 2 remembers T2 while cell 1 remembers T1. If
instead the second T2 pulse arrives a time c after the preceding
T1 pulse (0 < c < T1), then the values just before it are
(c, T1 − c, T2, 2T1 − T2), cell 2 responds by rule C, and
cell 2 remembers T2. Frequencies are separated by
Lemma 1.
Similarly, if the first T2 pulse is not coincident with
a T1 pulse (it occurs at time b after the preceding T1 pulse),
but the second one is, then just before t_2^2 the values
of (x1, y1, x2, y2) are (T1, 0, T2, T1 + b − T2); both cells
respond by rule D, cell 1 remembers T1 and cell 2
remembers T2, and Lemma 1 applies.
Case 2 Now consider increasing interval initial condition with m > 1. First, we will show that this condition
implies that second incoming pulse is from T2 train and
that the interval between the third and fourth pulses is
equal to T1 .
Let us say the first input pulse occurs at time a, and the subsequent interpulse intervals are b, c, and d. The increasing
interval condition means that a < b < c < d. We also
know that d ≤ T1, as T1 is the largest possible interpulse
interval. This implies that there are no two consecutive
T1 pulses among the first three, so the second pulse is from the T2 train.
Moreover, since m > 1, the next T2 spike can only occur
as the fifth pulse or later. This means that the third and
fourth pulses are both T1 pulses and d = T1.
Next, by following the rules of the algorithm, one
can verify that cell 1 responds to each of the first
four pulses. After the fourth one, the values are reset to
(0, d, d, c + d) = (0, T1, T1, c + T1) and cell 1 remembers T1. This is exactly the same situation as we found
after the second input pulse in the first part of Case 1 of this
proof, and the rest follows.
Remark We believe that the frequencies will also be
separated for most initial configurations as long as T1
does not divide T2 (i.e., R ≠ 0). To show this, all initial
configurations must be considered. In some cases an
incorrect pattern is initially remembered and it takes
a while for the algorithm to find the correct solution.
Moreover, for smaller m, cell 1 can remember either
T1 or T2, depending on the initial conditions, while for m
large enough (m > 3) cell 1 always ends up remembering T1 for any initial conditions. The proofs of these
statements are very technical, and we do not present
them in this paper. Instead, in Fig. 2 we show results of
numerical iteration of the algorithm.
To illustrate the remark, we ran the algorithm 10^6
times. In each run we chose T1 and T2 randomly (uniformly) between 1 and 100 (in the numerical simulations we
did not follow the T1 < T2 convention), and also chose
the time of the first Ti pulse uniformly between 1 and
Ti − 1, i = 1, 2. Each run proceeded until separation of
frequencies occurred, or until time reached 15,000, whichever
came earlier. The results are illustrated in Fig. 2.
Overall, the frequencies were not separated (time ran to the maximum
of 15,000) in 0.1 % of the cases. Figure 2 includes those (few) cases where
separation of frequencies would simply take even longer, and
those where a non-separated periodic pattern is reached
(separation will never occur). Figure 2(a) shows the values
of T1 and T2 in all of these unsuccessful cases. Most
of them lie on the T2 = T1, T2 = 3T1, or T2 = T1/3 lines
(solid lines), but there are a few other points as well.
Note that each point in this figure corresponds
to multiple failed cases, i.e., multiple sets of initial conditions.
Next, we look at the distribution of times at which
frequency separation occurs (Fig. 2(b)). Most cases
(99.6 %) were successfully separated by t = 600, but
there is a long tail extending (and slowly decaying) to
the right. The median separation time is 131.
If we eliminate the special resetting at the beginning of
the separation (i.e., eliminate rule A1), the percentage of
failed separations grows to 1.2 % and the corresponding
values of T1 and T2 now cover a broad range (Fig. 2(c)).

Fig. 2 Numerical iteration of the algorithm. (a) Black dots show
randomly chosen (T1, T2) pairs that were not successfully separated for at least one set of initial conditions in a run of length 15,000
(see text). Solid lines show T2 = T1, T2 = 3T1, and T2 = T1/3.
(b) Distribution of times taken to reach frequency separation in
10^6 iterations of the algorithm with randomly chosen periods and
initial conditions (see text). The horizontal axis is cut
at 2,000 for ease of viewing. (c) Same results as in panel (a), with
rule A1 removed from the algorithm
3 Biophysical implementation

3.1 Frequency separator unit

As explained in the introduction, the original motivation for this work came from experimental findings
in the basal ganglia (Bar-Gad et al. 2000; Bar-Gad and
Bergman 2001). Thus, we chose to implement the algorithm in a modification of the model of Terman et al.
(2002) of the basal ganglia circuit, consisting of the external segment of the globus pallidus (GPe; inhibitory
cells) and the subthalamic nucleus (STN; excitatory
cells). The models for individual cells are quite minimal
(reduced Hodgkin-Huxley type models (Hodgkin and
Huxley 1952), with 2 variables for the inhibitory cells
and 3 for the excitatory cells) but include ionic currents
that are known to be present in real cells and have
been shown to be important for their function, such
as the low-threshold calcium current in the E cell. It is
possible that a more generic model could implement the frequency separation algorithm as
well, but exploring this is beyond the scope of this paper.

The main unit of the model is a network of two
excitatory (E) and two inhibitory (I) cells (shaded box
in Fig. 3(a)). We will call it the frequency-separator unit.
Each cell is described by a biophysically based model
(Hodgkin-Huxley type (Hodgkin and Huxley 1952)),
in which the membrane potential (voltage) V = V(t)
satisfies the current-balance equation:

Cm dV/dt = −Iion(V) − Isyn + Iinput.

Here Cm is the membrane capacitance, Iion represents
the cell's intrinsic ionic currents, the synaptic current
Isyn is the current due to the activity of other cells in
the network, and Iinput is the incoming mixed-frequency signal. The details of the model, together
with parameter values and units of the various variables, are
given below.
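For concreteness, the current-balance equation can be stepped with forward Euler. This sketch keeps only a leak current in Iion and no synapses, just to show the shape of the integration loop; gL and VL are from Table 1, while Cm = 1 and the step size are our illustrative assumptions:

```python
def euler_voltage(Iinput, Cm=1.0, gL=0.05, VL=-70.0, dt=0.05, t_end=100.0):
    """Integrate Cm dV/dt = -Iion(V) - Isyn + Iinput with Iion = gL(V - VL)
    and Isyn = 0 (a lone, passive cell).  Returns the final voltage."""
    V, t = VL, 0.0
    while t < t_end:
        Iion = gL * (V - VL)             # leak only; the full model adds
        dV = (-Iion + Iinput(t)) / Cm    # Na, K and (for E cells) T currents
        V += dt * dV
        t += dt
    return V
```

With a constant input I0 the voltage relaxes toward VL + I0/gL with time constant Cm/gL.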
The frequency-separator unit is the minimal network
of excitatory-inhibitory connections to allow the two
inhibitory cells to compete and to allow the E cells to
serve as read-outs of the network output. Addition of
another such unit, as in Fig. 3(a) and in simulations,
provides additional lateral inhibition, making the competition between I cells more efficient. In addition, as explained
in the Discussion, if the two blocks start from different initial
conditions, or receive inputs at different initial phase
shifts (due to delay lines), then the second unit may be
able to separate frequencies correctly even if the first
one fails.
The model has several features that need to be
pointed out. First, the input to the system is inhibitory.
So the cells in the first, inhibitory, layer respond to the
Fig. 3 Biophysical model. (a) Schematic of the wiring of the
model excitatory (E) and inhibitory (I) cells, and the inputs.
Arrows represent excitatory connections, f illed circles represent
inhibitory connections, and open circles represent y-dependent
inhibitory connections (see text). Basic unit of the network is a set
of 4 cells (shaded box). Two such units are shown. (b) Example
of the response of the biophysical model. Top panel shows the
inputs (black periodic pulse train and grey periodic pulse train
are combined), middle panels show voltages of the 4 cells from
the shaded box in panel (a). Inhibitory cells respond to inputs
by ceasing their firing, and excitatory cells respond by spiking.
Lower panel summarizes responses of the output (excitatory)
cells by showing their interspike intervals. Dotted lines show the
frequencies of the input pulse trains
input by temporarily stopping their firing (for a period
of time longer than the typical interspike interval). The
cells of the second layer (excitatory cells) are usually
suppressed and when they respond, it is by emitting a
spike (or a short burst). Therefore when we refer to
a cell “responding” it can mean “stopping to fire” in
the case of inhibitory cells or “firing” in the case of
excitatory cells. This arrangement is not a requirement
of the model: the frequency separation would work just as
well if the roles of excitation and inhibition were reversed.
Second, a special feature of the inhibitory cells is the
presence of x and y variables, analogous to xi and yi
in the algorithm above. The quantities x and y can be
thought of as fractions of some substances X and Y
in the active state, affecting both the incoming and the
outgoing synapses of the cells. We discuss the roles and
possible implementations and interpretations of x and
y in Section 3.4 below. Apart from these special features, the implementation of the algorithm does not depend
on the details of the particular model.
Equations for an I cell  The intrinsic current Iion consists of the leak, sodium, and potassium currents, and
the bias current I:

Iion = gL (V − VL) + gNa (m∞(V))^3 h (V − VNa) + gK (1 − h)^4 (V − VK) − I,

dh/dt = (h∞(V) − h)/τh(V).

The functions and parameters are given in the
Appendix.

The synaptic current Isyn consists of the contribution
from E cells (subscript 'IE'; this current depends
on the synaptic conductance sE of the appropriate E
cell, as shown in the wiring diagram in Fig. 3(a); for the definition of
sE see the E-cell equations below) and the input from
neighboring I cells (subscript 'II'; the summation is
over the two neighboring I cells with periodic boundary
conditions; the connection between cells 1 and 4 is not
shown in the figure):

Isyn = IIE + III.

Currents III have the same parameters whether or
not the two I cells belong to the same block. The synaptic
currents are described in detail in Sections 3.2–3.4.

The inhibitory cell also produces a gating variable sI
to be used as an input in the E equations below:

dsI/dt = αI (1 − sI) s∞,I(V) − βI sI,    s∞,I(V) = 1/(1 + exp(−(V + 45))).

Equations for an E cell  The intrinsic current Iion consists of the leak, sodium, and potassium currents, and
the T-type current (a low-threshold inward current de-inactivated by hyperpolarization):

Iion = gL (V − VL) + gNa (m∞(V))^3 h (V − VNa) + gK (1 − h)^4 (V − VK) + gT (m∞,T)^2 hT V,

dh/dt = (h∞(V) − h)/τh(V),

dhT/dt = (h∞,T(V) − hT)/τh,T(V),

where the functions and the parameter values are given
in the Appendix.

The external input to the E cell is zero (Iinput = 0).
The E cell also produces a gating variable sE to be
used as an input in the I equations above:

dsE/dt = αE (1 − sE) s∞,E(V) − βE sE,    s∞,E(V) = 1/(1 + exp(−(V + 35))).

3.2 Excitatory connections

The excitatory current received by the cell with voltage
V is given by

IIE = gIE sE (V − VIE).
3.3 y-independent inhibitory connections
The synaptic current Isyn received by the E cell with
voltage V comes from a neighboring I cell according to
the wiring diagram in Fig. 3(a) and depends on the I
cell activity through the s I variable:
I EI = g EI s I (V − V EI ).
3.4 y-dependent inhibitory connections
For each inhibitory cell the external input it feels (Iinput )
and the current it sends to its neighbors (I I I ) are
influenced by the cell’s y variable.
The dynamics of y  The variable y is governed by

dy/dt = −βy.
238
J Comput Neurosci (2013) 34:231–243
In addition, at the start of a response (at time tr when
the cell I stops firing) y is reset to a value determined
by an auxiliary variable x:
y(tr+) = x(tr−),    dx/dt = αx,    x(tr+) = 0.
Both x and y stay constant for the duration of the
response.
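Between responses the pair (x, y) thus behaves exactly like its algorithmic counterpart, up to the rate constants. A minimal sketch (αx and βy from Table 1; the linear decay of y is our reading of dy/dt = −βy):

```python
ALPHA_X, BETA_Y = 0.005, 0.005       # rates from Table 1

def evolve(x, y, dt):
    """Between responses: x accumulates at rate alpha_x, y decays at beta_y."""
    return x + ALPHA_X * dt, y - BETA_Y * dt

def respond(x, y):
    """At the start of a response: y inherits x, and x is cleared."""
    return 0.0, x
```

Because αx = βy, y crosses zero exactly when the time since the last response equals the previous inter-response interval, mirroring the algorithm's yi.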
In general, x and y can be thought of as fractions of
substances X and Y in the active state. During the firing
of the I cell, x is accumulating and y is being removed
(Fig. 4). Once the firing stops and V is shifted to a relatively more hyperpolarized level, y accumulates quickly
to the extent that x is available and x is quickly removed
after a short delay. As in the algorithmic toy model, x
keeps track of time since the last event, and y compares
the time spent since last event with the previous ISI.
The cell is ready to fire if the current time since last
event is long enough (y is low enough).
Similar dynamics has been used in models of synaptic
depression in Bose et al. (2001) and Matveev et al.
(2007). There the depression variable d was increased
at every spike by a multiplicative factor and decayed
exponentially between spikes. At the same time the
synaptic variable s was increased by the spike to the
value of d (the amount of available synaptic resources)
and decayed between spikes at its own time scale.
The amount of substance y in an I cell affects the
amount of inhibition that this cell is sending to other
I cells (presynaptic effect on I-I inhibition), and also
how responsive the cell is to the external input (postsynaptic effect on external input)—shown in Fig. 3(a).
Right after the response the high concentration of y
makes the external input less efficient. At the same time
the higher value of y facilitates the I-I synapses, preparing the cell's neighbors to respond more easily to expected external (inhibitory) input. Note that the higher
value of y has the same effect on both neighboring I
cells. As a result, neighboring cells will tend to pick out
different frequencies from the original train, and every
other cell will tend to pick out the same frequency. As
the cell keeps on firing, its y value decreases, attenuating
the efficacy of the I-I inhibition and increasing the efficacy
of the external input.

Fig. 4 Schemes for activation and inactivation of substances
X (left) and Y (right). They can switch between the active
state (x/y) and the inactive state (xi/yi) with voltage-dependent rates
αx(V), βx(V), αy(V), βy(V), as shown in the figure. Activation of Y is also
affected by the amount of activated X and is drawn
from a large pool (grey)
Both effects of y in the network (decreasing the sensitivity to incoming input and potentiating the lateral
inhibitory connections) contribute to the biophysical implementation of the main idea behind the separation
algorithm: the cell with the lower y responds. The third
component with a similar effect is the net-inhibitory
I-E-I connection. It is worth noting that the finer points
of the algorithm, such as the advantage of cell 1 (it always
responds if y1 ≤ 0, by rule A), the setting of a special initial
configuration (rules A1 and B), and the linear relationship
of x and y with time, are all lost. This makes
frequency separation by the biophysical model considerably less successful
than by the algorithm.
The time constants for x and y restrict the range
of frequencies over which the frequency separation is
successful.
Synaptic currents  The I-I synaptic current received by
the cell is given by

III = gII Σj H(yj − θ1),

where H(y) is the smoothed Heaviside function:

H(y) = 1/(1 + exp(−y/0.02)),

θ1 is the threshold value of y for this connection, and the
summation is over the two neighboring I cells with periodic
boundary conditions.
Table 1 Model parameters

Parameter   Value
gL          0.05
gNa         3
gK          5
gT          1
VL          −70
VNa         50
VK          −90
βy          0.005
αx          0.005
θ1          0.1
θ2          2/3
dp          10
The external input current has as its gating variable
sinput —a modified, more realistic form of the original
mixed-frequency pulse train F(t):
Iinput = ginput sinput max(1 − y/θ2, 0)(V − Vinput),

dsinput/dt = αinp (1 − sinput) H(F(t) − 0.5) − βinp (1 − H(F(t) − 0.5)) sinput,

F(t) = H(10 sin(2πt/T1)) (1 − H(10 sin(2π(t + dp)/T1)))
     + H(10 sin(2π(t + ϕ)/T2)) (1 − H(10 sin(2π(t + dp + ϕ)/T2))),
where ϕ is the phase shift between the pulse trains, and
d p is the duration of the pulse.
Notice that the current is zero if the postsynaptic
cell is not ready to accept it (y > θ2 ). For values of all
parameters see Table 1.
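For reference, the pulse train F(t) can be evaluated directly from its definition. The default periods below (T1 = 210, T2 = 270 ms) are the example used in the simulations, dp = 10 is from Table 1, and ϕ = 0 as in Fig. 6.

```python
import math

def H(y, scale=0.02):
    """Smoothed Heaviside function."""
    return 1.0 / (1.0 + math.exp(-y / scale))

def F(t, T1=210.0, T2=270.0, phi=0.0, dp=10.0):
    """Mixed-frequency input: superposition of two periodic pulse trains
    with periods T1 and T2, pulse duration dp, and phase shift phi."""
    train1 = (H(10 * math.sin(2 * math.pi * t / T1))
              * (1 - H(10 * math.sin(2 * math.pi * (t + dp) / T1))))
    train2 = (H(10 * math.sin(2 * math.pi * (t + phi) / T2))
              * (1 - H(10 * math.sin(2 * math.pi * (t + dp + phi) / T2))))
    return train1 + train2
```

Each train contributes one pulse of width dp per period, so F(t) is close to 1 during a pulse of either train and close to 0 between pulses.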
If we consider a single I cell, it can be switched from
firing continuously to quiescence and back by varying
the parameter k = sinput max(1 − y/θ2, 0). In particular,
for an isolated cell k = 0 (sinput = 0) and the cell is in
an oscillatory state (firing continuously). For some larger
value k = k∗ a steady state voltage solution stabilizes (at a hyperpolarized V value) and the firing stops
(Fig. 5(a)). In terms of sinput and y this means that for
y large enough the oscillatory solution persists for any
value of the input sinput. For small y, on the other hand, the
input can stop the firing if sinput ≥ k∗ θ2/(θ2 − y)
(Fig. 5(b)).
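The switching condition can be written as a small predicate. Here θ2 = 2/3 comes from Table 1, while the bifurcation value k∗ is a hypothetical placeholder that would in practice be read off the bifurcation diagram of Fig. 5(a).

```python
THETA2 = 2.0 / 3.0   # from Table 1
K_STAR = 0.05        # hypothetical bifurcation value (cf. Fig. 5(a))

def effective_drive(s_input, y, theta2=THETA2):
    """The bifurcation parameter k = s_input * max(1 - y/theta2, 0)."""
    return s_input * max(1.0 - y / theta2, 0.0)

def firing_stops(s_input, y, k_star=K_STAR, theta2=THETA2):
    """True when the input stabilizes the quiescent state, i.e. k >= k*.
    For y >= theta2 the drive is zero, so firing can never be stopped."""
    return effective_drive(s_input, y, theta2) >= k_star
```

For y < θ2 this is equivalent to the closed-form threshold sinput ≥ k∗ θ2/(θ2 − y).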
4 Simulation results
All results below were obtained with a block of two
frequency separators (2 E and 2 I cells each), shown
schematically in Fig. 3(a). Input is inhibitory to I cells
and the output is the spikes of the E cells.
Single cells Both E and I cells want to fire continuously until the firing is stopped by inhibitory input,
either from the external input (for I cells) or from
the I population (for E cells). Cells respond to
release from inhibition by firing. This is accomplished
because each cell in isolation has a high firing rate,
and is aided by the presence of the low-threshold Ca2+
current, which is transiently activated as the cell is released
from hyperpolarization, resulting in fast depolarization
and spiking.
Frequency separation Figure 3(b) shows a typical example of a numerical experiment with the model.
The top panel shows the input pulse train, which is
constructed as a superposition of two periodic pulse
trains, in this case with interpulse intervals of 210 and
270 ms. For clarity, we have colored the two original
trains in different shades. The next four panels (labeled
I1 , I2 , E1 and E2 ) show the voltage time courses of
the four cells of the first separator unit (shaded box in
Fig. 3(a)).
Fig. 5 Bifurcation diagram of the inhibitory cell. Left panel
shows stable (solid) and unstable (dashed) steady states as a
function of the bifurcation parameter k (see text). The maximum
and minimum of the family of periodic orbits are also shown
(thick solid curves). As k crosses k∗ from right to left, the system
dynamics changes from quiescence to spiking. The transition
information is replotted in the right panel in the two-parameter
(sinput, y) space (see text). Solid line marks the bifurcation value
k = k∗ and the dotted line is the asymptote of this curve. For
y > θ2 there is no quiescent regime for any value of sinput
Let us consider what happens when an input pulse
arrives shortly before the 1,000 ms mark (black arrow).
Both cells I1 and I2 receive it as an inhibitory input.
At that time cell I2 has the lower value of y; thus, as
explained in Section 3.4, it is more susceptible to the
external input and it also receives more inhibition from
its neighbors, with their higher y values. As a result, it
is only cell I2 that terminates its firing. This, in essence,
implements the essential part of rules A, B and C of
the algorithm: the cell with the lower y value will be the
one to respond. Next, the excitatory cells act as readouts of
these responses. Due to the pause in I2 firing, E2 receives
less inhibition and is able to fire a spike, aided by the
presence of the low-threshold calcium current. This,
in turn, provides excitation to I1 , which fires faster,
further reducing the chances that E1 fires. Cessation of
firing in I2 also causes the reset of its x and y variables,
as explained in Section 3.4. The lower panel of Fig. 3(b)
shows the interspike intervals of the output cells E1 and E2 .
One can see that each of the cells settles to firing with
approximately one of the original input frequencies.
Figure 6 shows an example of the network performance for a range of input frequency values. Each
point in Fig. 6 corresponds to a simulation in which
the model was presented with a mixture of pulses of
two given frequencies for 5,000 ms. The periods of the
input trains were each chosen randomly by drawing
once from a uniform distribution on every 10-by-10 ms
square. The phase shift of the inputs ϕ is fixed at 0, and
the initial conditions of the ODE system were the same
in every trial. A trial is successful (filled circle) if for
each input period Ti there was at least one output (E)
cell with an average inter-spike interval (ISI) within 5
% of Ti for 2,000 ms. Otherwise the trial is labeled
unsuccessful (open circle).
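The success criterion can be made precise with a short checker. The sliding-window logic below is one reasonable reading of "maintained the correct average ISI for 2,000 ms"; it is a sketch, not the authors' actual scoring code.

```python
def mean_isi(spikes):
    """Average inter-spike interval of a sorted list of spike times."""
    return (spikes[-1] - spikes[0]) / (len(spikes) - 1)

def matches_period(spikes, T, tol=0.05, window=2000.0):
    """True if some stretch of the train lasting at least `window` ms
    has a mean ISI within a relative tolerance tol of the period T."""
    for i in range(len(spikes)):
        for j in range(i + 1, len(spikes)):
            if spikes[j] - spikes[i] >= window:
                if abs(mean_isi(spikes[i:j + 1]) - T) / T <= tol:
                    return True
                break  # move to the next starting spike
    return False

def trial_successful(output_trains, input_periods):
    """A trial succeeds if every input period is matched by some E cell."""
    return all(any(matches_period(s, T) for s in output_trains)
               for T in input_periods)
```

For example, two output cells firing with ISIs of 210 and 270 ms over 5,000 ms would count as a successful separation of the (210, 270) input pair.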
Note that success is not automatic even when T1
and T2 lie within 5 % of each other. For example,
one of the cells can take over and respond nearly every
time, while the other one responds only infrequently
or not at all. This can also be seen from the open
circles that occur in the figure even near the diagonal.
The overall rate of success was 65 %. Each panel on
the right highlights a section of the main figure and lists
the percentage of frequency pairs in that section that were
successfully decorrelated. The success rate is high in
the optimal range of frequencies (81 %, upper right),
but starts to decline if one of the input frequencies
becomes too high or too low (58 %, lower right). The range
of successfully separated frequencies can be varied by
adjusting parameters of the model.
Most of the errors in the model’s performance can
be corrected by using a different phase-shift of inputs
(ϕ) or the initial conditions of the network. Figure 7(a)
shows an example in which we vary the input phase shift
ϕ with fixed T1 = 232 and several different values of
T2 . For each frequency combination (and fixed initial
Fig. 6 Examples of frequency separation with a network from
Fig. 2(a). Left Each of the input periods is chosen as T + 10r,
where r is a random number between 0 and 1 and T is varied
from 100 to 600 ms in 10 ms steps. Each period pair is presented
for 5,000 ms and the separation is judged successful if for each
input train there was an output cell that maintained the correct
interspike interval (within 5 % of the input period) for 2,000
ms. Successful trials are marked with filled circles and occurred
in 65 % of the trials; unsuccessful trials are marked with open
circles. Right When the range of presented periods is restricted
(indicated by a frame), the success rate may increase or decrease
(indicated by the percent of successful trials). Simulation data same
as in the left panel
Fig. 7 Error correction by the model. (a) Errors can be corrected
by changing the phase shift between inputs. The interpulse interval
of the first input train is shown on the horizontal axis; the interpulse interval of the second input train is marked by the square.
Different rows correspond to different phase shifts between the
inputs, as indicated. Successful separation (by the same criterion
as in Fig. 6) is shown with a filled circle, unsuccessful with
a cross. (b) Results of frequency separation in the situation
when the time of each incoming spike is modified by adding a
random number uniformly distributed in the range [−20, 20] ms. In
the example on top, black bars correspond to the original pulses,
and thin lines indicate the ±20 ms range around each black pulse,
from which the perturbed pulse time is drawn. White bars are the
perturbed pulses. Notation on the grid is the same as in (a)
conditions, same as in Fig. 6) there are only a few phase
shifts (if any) at which the trial is unsuccessful. A similar
result is achieved if the phase is fixed but the initial
condition is allowed to vary (not shown).
The performance of the model is robust to jitter in
the input spike trains. Figure 7(b) shows an example
in which the time of arrival of each input pulse was
modified by a random amount (uniform distribution
from −20 to 20 ms). We tested pairs of input frequencies in the range of 200–400 ms, with 10 ms increments
and fixed initial conditions. We found that 90 % of
frequency pairs were successfully separated (Fig. 7(b)).
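The jitter used in this robustness test is straightforward to reproduce: each nominal pulse time is shifted by an independent uniform draw. The function below is a sketch; the seed value is arbitrary.

```python
import random

def jitter_train(pulse_times, jitter=20.0, seed=1):
    """Perturb each pulse time by an independent uniform draw from
    [-jitter, +jitter] ms, as in the test shown in Fig. 7(b)."""
    rng = random.Random(seed)
    return sorted(t + rng.uniform(-jitter, jitter) for t in pulse_times)
```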
5 Discussion

We described an algorithmic model that can pick out
individual periodic trains from the superposition of
two such trains, together with its biophysical implementation.
We believe that this algorithm can be implemented in
hardware as well.

It should be noted that this biophysical implementation is not unique, and it does not follow the algorithm
exactly. Our results show that even with an imperfect implementation the frequency separation procedure works. It also suggests another possible role for
excitatory-inhibitory networks.

One of the specialized features of the proposed biophysical network is the presence of the variables x and y. We do
not presently have specific agents in mind that could
be represented by this model, but we will speculate
about possibilities. As we indicated above in Section
3.4, they could be thought of as concentrations of some
substances X and Y. It is conceivable that Ca2+ can
play the role of X, as it is accumulated during spiking.
The substance Y needs to be related to the efficacy of
synaptic function, and its activation should be affected by X.
For example, it could represent a part of a metabotropic
receptor pathway that naturally weakens during spiking, potentiating the synapse. Once the spiking stops,
it quickly uses calcium to reactivate again. We repeat
once more that this is pure speculation, and more work
needs to be done to identify specific biophysical identities for x and y.

One of the strengths of the proposed algorithm is
that the frequency separation works for a whole range
of frequencies and does not rely on a resonance to a
discrete set of preferred frequencies.

We have shown in numerical simulations (Fig. 7(a))
that the errors of computation can be corrected by
changing the phase shift between inputs or the network's initial conditions. This property of the model
can be exploited in a larger network, in which each of
the separating units starts independently at a different
time, thus effectively being at a different initial condition at the time of the input's arrival. Then the units
with the most consistent output ISIs (the successful units)
can be rewarded and reinforced.

As a possible functional role for such a network,
we envision a situation where there is a large class of
possible features (attributes) that the system needs to
recognize. For example, the object can be red, blue,
square, triangular, etc. Suppose that each of the possible attributes is represented in the system by a different
frequency (Torras 1986; Niebur et al. 2002; Kazanovich
and Borisyuk 2006; Kuramoto 1991). Assume additionally that at every object presentation only two features
(from a large pool of possibilities) are presented (say,
a red octagon). The system needs to recognize which
features it is confronted with (i.e., to detect the individual frequencies in the mixed signal) and then transfer this
information to higher processing areas. For example,
the detected frequencies can be compared to a memory-stored database and the appropriate action retrieved.
This work is also related to studies in pattern identification, in which the system looks for a certain
sequence of neuronal firings (ISIs). Several different
strategies have been described, such as matching to a
template (Dayhoff and Gerstein 1983; Tetko and Villa
2001) and a method based on the correlation integral
(Christen et al. 2004). In contrast to these studies,
our work focuses on identification of only the periodic
sequences of ISIs. On the other hand, it is more flexible:
it does not require the presence of a template, and the detecting network produces the detected patterns as its outputs,
ready for transmission and further use.
Acknowledgements This work was supported by the Mathematical Biosciences Institute and the National Science Foundation under grant DMS 0931642, NSF grant DMS-1022945 (AB),
NSF CAREER Award DMS-0956057 (JB), and an Alfred P. Sloan Research Foundation Fellowship (JB).
Appendix

Equations for an E cell

Cm dV/dt = −Iion (V) − Isyn + Iinput .

The intrinsic current Iion consists of the leak, sodium,
and potassium currents, and the low-threshold T-type current,
which is transiently activated as the cell is released from
hyperpolarization:

Iion = gL (V − VL ) + gNa (m∞ (V))3 h(V − VNa )
     + gK (1 − h)4 (V − VK ) + gT (m∞,T (V))2 hT V,

dh/dt = (h∞ (V) − h)/τh (V),
dhT /dt = (h∞,T (V) − hT )/τh,T (V),

where

m∞ (V) = 1/(1 + exp(−(V + 37)/7)),
m∞,T (V) = 1/(1 + exp(−(V + 60)/6.2)),
h∞ (V) = 1/(1 + exp((V + 41)/4)),
h∞,T (V) = 1/(1 + exp((V + 84)/4)),
τh (V) = 0.83/(αh (V) + βh (V)),
τh,T (V) = 28 − exp((V + 25)/10.5),
αh (V) = 0.128 exp(−(46 + V)/18),
βh (V) = 4/(1 + exp(−(23 + V)/5)).

External input to the E cell is equal to zero (Iinput = 0).
Parameter values are given in Table 1.
Equations for an I cell

Cm dV/dt = −Iion (V) − Isyn + Iinput .

The intrinsic current Iion consists of the leak, sodium,
and potassium currents, and the bias current I:

Iion = gL (V − VL ) + gNa (m∞ (V))3 h(V − VNa )
     + gK (1 − h)4 (V − VK ) − I,

dh/dt = (h∞ (V) − h)/τh (V),

where

m∞ (V) = 1/(1 + exp(−(V + 37)/7)),
h∞ (V) = 1/(1 + exp((V + 41)/4)),
τh (V) = 0.69/(αh (V) + βh (V)),
αh (V) = 0.128 exp(−(46 + V)/18),
βh (V) = 4/(1 + exp(−(23 + V)/5)).

The parameters are given in Table 1.
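As a sanity check, the isolated I-cell equations can be integrated with forward Euler. Here Cm = 1 and the bias current value are assumptions (neither appears in Table 1); the conductances, reversal potentials, and rate functions follow the appendix.

```python
import math

def simulate_I_cell(I_bias=1.0, T=200.0, dt=0.01):
    """Forward-Euler integration of the isolated I cell
    (Isyn = Iinput = 0). Cm = 1 and I_bias are assumptions;
    the remaining parameters are from Table 1 and the appendix."""
    gL, gNa, gK = 0.05, 3.0, 5.0
    VL, VNa, VK = -70.0, 50.0, -90.0
    m_inf = lambda V: 1.0 / (1.0 + math.exp(-(V + 37.0) / 7.0))
    h_inf = lambda V: 1.0 / (1.0 + math.exp((V + 41.0) / 4.0))
    a_h = lambda V: 0.128 * math.exp(-(46.0 + V) / 18.0)
    b_h = lambda V: 4.0 / (1.0 + math.exp(-(23.0 + V) / 5.0))
    tau_h = lambda V: 0.69 / (a_h(V) + b_h(V))

    V, h = -65.0, 0.9
    trace = []
    for _ in range(int(round(T / dt))):
        Iion = (gL * (V - VL) + gNa * m_inf(V) ** 3 * h * (V - VNa)
                + gK * (1.0 - h) ** 4 * (V - VK) - I_bias)
        V += dt * (-Iion)                       # Cm = 1 assumed
        h += dt * (h_inf(V) - h) / tau_h(V)
        trace.append(V)
    return trace
```

With sufficient bias the cell should fire continuously, as described in Section 4; the check below only verifies that the integration stays within a physiological voltage range.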
References
Adrian, E.D. (1950). The electrical activity of the mammalian
olfactory bulb. Electroencephalography and Clinical Neurophysiology, 2, 377–388.
Arieli, A., Shoham, D., Hildesheim, R., Grinvald, A. (1995). Coherent spatiotemporal patterns of ongoing activity revealed
by real-time optical imaging coupled with single-unit recording in the cat visual cortex. Journal of Neurophysiology, 73,
2072–2093.
Bar-Gad, I., & Bergman, H. (2001). Stepping out of the box:
information processing in the neural networks of the basal
ganglia. Current Opinion in Neurobiology, 11, 689–695.
Bar-Gad, I., Havazelet, H.G., Goldberg, J.A., Ruppin, E.,
Bergman, H. (2000). Reinforcement driven dimensionality
reduction—a model for information processing in the basal
ganglia. Journal of Basic & Clinical Physiology & Pharmacology, 11, 305–320.
Bose, A., Manor, Y., Nadim, F. (2001). Bistable oscillations
arising from synaptic depression. SIAM Journal of Applied
Mathematics, 62, 706–727.
Christen, M., Kern, A., Nikitchenko, A., Steeb, W.-W., Stoop,
R. (2004). Fast spike pattern detection using the correlation
integral. Physical Review E, 70, 011901.
Dayhoff, J.E., & Gerstein, G.L. (1983). Favored patterns in spike
trains. I. Detection. Journal of Neurophysiology, 49, 1334–1348.
Ecker, A.S., Berens, P., Keliris, G.A., Bethge, M., Logothetis,
N., Tolias, A. (2010). Decorrelated neuronal firing in cortical
microcircuits. Science, 327, 584–587.
Green, J.D., & Arduini, A. (1954). Hippocampal activity in
arousal. Journal of Neurophysiology, 17, 533–557.
Hodgkin, A., & Huxley, A. (1952). A quantitative description
of membrane current and its application to conduction and
excitation in nerve. Journal of Physiology, 117, 500–544.
Kazanovich, Ya., & Borisyuk, R. (2006). An oscillatory neural
model of multiple object tracking. Neural Computation, 18,
1413–1440.
Kuramoto, Y. (1991). Collective synchronization of pulse-coupled oscillators and excitable units. Physica D, 50, 15–30.
Landolfa, M.A., & Miller, J.P. (1995). Stimulus-response properties of cricket cercal filiform receptors. Journal of Comparative Physiology, 177, 745–757.
Matveev, V., Bose, A., Nadim, F. (2007). Capturing the bursting dynamics of a two-cell inhibitory network using a one-dimensional map. Journal of Computational Neuroscience,
23, 169–187.
Niebur, E., Hsiao, S.S., Johnson, K.O. (2002). Synchrony: a neuronal mechanism for attentional selection? Current Opinion
in Neurobiology, 12, 190–194.
Renart, A., de la Rocha, J., Bartho, P., Hollender, L., Parga,
N., Reyes, A., Harris, K. (2010). The asynchronous state in
cortical circuits. Science, 327, 587–590.
Rose, J.E., Brugge, J.F., Anderson, D.J., Hind, J.E. (1967). Phase-locked response to low-frequency tones in single auditory
nerve fibers of the squirrel monkey. Journal of Neurophysiology, 30, 769–793.
Terman, D., Rubin, J.E., Yew, A.C., Wilson, C.J. (2002). Activity patterns in a model for the subthalamopallidal network of the basal ganglia. Journal of Neuroscience, 22, 2963–
2976.
Taube, J.S., Muller, R.U., Ranck, J.B. Jr. (1990). Head-direction
cells recorded from the postsubiculum in freely moving rats.
I. Description and quantitative analysis. Journal of Neuroscience, 10, 420–435.
Tetko, I.V., & Villa, A.E.P. (2001). A pattern grouping algorithm
for analysis of spatiotemporal patterns in neuronal spike
trains. 1. Detection of repeated patterns. Journal of Neuroscience Methods, 105, 1–14.
Tetzlaff, T., Helias, M., Einevoll, G., Diesmann, M. (2010).
Decorrelation of low-frequency neural activity. BMC Neuroscience, 11, Suppl. 1, 011.
Torras, C. (1986). Neural network model with rhythm assimilation capacity. IEEE Transactions on Systems, Man and
Cybernetics, 16, 680–693.