Mimicking synaptic plasticity in memristive neuromorphic systems

S.W. Keemink 1*
1. Life Sciences Graduate School, University of Utrecht, Utrecht, Netherlands
Correspondence:
Amorijstraat 6 6815GJ Arnhem Netherlands
sanderkeemink@gmail.com
Date: 7th of August 2012
Abstract
Development of artificial intelligence has been disappointing in many aspects, and has been
severely limited by the basic architecture of computers. The new field of neuromorphic
engineering tries to tackle this problem by basing circuit design on brain architecture. There are
two features of the brain that researchers especially try to implement: massive parallelism and
plasticity. Synapse implementations, however, have proven difficult due to a lack of inherently
plastic circuit elements, leading to the need for overly complex circuits to mimic any kind of
plasticity. Recent developments in nanotechnology provide us with an exciting new
opportunity: the memristor. The memristor is basically a resistor whose resistance depends on
the amount of current that passed through it: effectively it is a plastic resistor. This is the first
element of its kind and could potentially revolutionize the field of neuromorphic engineering.
This paper will study the viability of the memristor as a plastic synapse by reviewing the recent
developments in memristive technologies separately and in combination with known theories
of plasticity in the brain. Memristors turn out to be very powerful for mimicking synaptic
plasticity, but current research has focused too much on spike-based learning mechanisms
and not enough experimental work has been done. It also seems that memristor-based learning
rules could potentially improve our understanding of the underlying neuroscience, but little
work has been done on this. Finally, despite promises of memristor-based circuitry being able
to match the complexity and scale of the brain, current memristors would use too much
energy. Future research should focus on these three issues.
Keywords: Neuromorphic engineering, memristance, synapses, plasticity
Table of Contents
Abstract
1. Introduction
2. Synaptic Plasticity
3. Memristance
4. Synaptic Plasticity and Memristance
5. Applying memristive synaptic plasticity
6. Discussion
7. Bibliography
1. Introduction
During the ascent of silicon computers, wild predictions were made: artificial intelligence
would soon surpass the capabilities of our own brain, presenting superior intelligence.
Now, several decades later, this has not yet been realized. The basic architecture all
silicon computers used turned out to have fundamental restrictions. Albeit powerful at
sequential computing, and in that aspect certainly surpassing humans, the von Neumann
architecture was always limited by having to route everything through a central processor: the
von Neumann bottleneck (Backus, 1978). This architecture is substantially different from what
we understand of the workings of the brain: it is non-parallel and has no inherent plasticity. For
this reason much effort has been going into developing alternative technologies, most
notably into neuromorphic engineering: circuitry with architecture inspired by and mimicking
that of the brain.
Mimicking the brain is a daunting task. It is, after all, arguably the most complex system in
existence, while also being extremely efficient. Its storage and computational abilities are
simply astounding. The brain contains billions of neurons working in parallel, and the number of
plastic synapses connecting them is several orders of magnitude larger. With over 10^10 synapses
per cm^3, it is a daunting task to manufacture any circuit system that works similarly at the same
scale and efficiency. This is the challenge the field of neuromorphic engineering tries to tackle.
Neuromorphic engineering is interesting for two important reasons. First, by designing circuits
using knowledge about brain architecture and function, it is possible to develop genuinely new
kinds of computers. Second, it might actually serve as a tool to gain more understanding of that
very brain architecture it tries to mimic. By using an engineering approach to build structures
similar to those of the brain, we might learn more about its restrictions and advantages in ways
impossible to study with classic neuroscience. One might argue that this kind of approach is
already accomplished by analog modeling of the brain, in which brain structures are modeled
on existing computer architectures. However, actually building the circuits instead has at least
two advantages. First, models will never be able to replicate all the complexities of the real
world; all the imperfections of physical circuitry might actually be important (after all, a given
biological neuron is also not a perfect cell, but the brain still manages). Second, implementing
models using actual circuits is more energy efficient than analog modeling (Poon & Zhou, 2011;
Snider et al., 2011).
One of the defining features of the nervous system is the plasticity of the synapses, the inner
workings of which we’ve only recently begun to truly understand. It not only underlies long and
short term memory (Abbott & Nelson, 2000) but is also extremely important for computation
(Abbott & Regehr, 2004). The functional and molecular underpinnings are only beginning to be
understood, but we need only concern ourselves with the former. When trying to make circuitry
mimicking brain function, the goal is not to reproduce an exact neural synapse with all its
molecular complexity, but to reproduce the functional workings. The most well known
functional mechanism of synaptic plasticity is known as Hebbian plasticity (Hebb, 1949), which
is often summarized as “Cells that fire together, wire together” (attributed to Carla Shatz). Since
the formulation of Hebbian plasticity both computational and experimental neuroscience have
provided us with many more specific learning mechanisms, which can roughly be categorized as
either spike based learning or firing rate based learning. Neuromorphic engineering aims to
apply this knowledge to actual circuits.
A problem in reproducing the functional plasticity rules in circuit based synapses has been that
the basic elements used in electronic circuits are not inherently (or at least not controllably)
plastic. The only basic element capable of inherent plasticity, the memristor, long existed only
in theory (Chua, 1971). However, this element has finally been realized in practice in
recent years by several labs (Jo et al., 2010; Strukov, Snider, Stewart, & Williams, 2008). The
memristor is basically a resistor, whose resistance depends on how much current has passed
through the element in the past. As such it has memory and is plastic. It is an obvious question
whether or not this element can be used to mimic plastic synapses on a circuit level, and how
this can be done. The science of using memristors as synapses is an emerging and exciting field,
and in this paper I will be reviewing the advances and studying if the memristor really is as
promising as it sounds.
First I will summarize our current understanding of synaptic plasticity, by explaining two of the
best known and successful learning rules: one which changes the connection strength between
two neurons based on their relative firing rate, known as the BCM model (Bienenstock, Cooper,
& Munro, 1982), and one which depends on the relative timing of fired spikes, known as the
Spike-Timing-Dependent-Plasticity (STDP) rule (Gerstner, Ritz, & van Hemmen, 1993). Next I will
discuss the basic memristor workings: how the memristor was theorized (Chua, 1971) and how
it was finally realized (Jo et al., 2010; Strukov et al., 2008).
Then, having discussed some of the basics of both synaptic plasticity and memristance, I will
show how the two are related. Soon after the realization of the memristor several groups
began researching the possibilities of using memristors as synapses. Using memristors in
surprisingly simple circuits automatically leads to associative memory (Pershin & Di Ventra,
2010). It was quickly realized that memristors can be implemented en masse in cross-bar
circuits (Borghetti et al., 2009; Jo, Kim, & Lu, 2009). Most importantly it was found that if you
assume two spiking neurons with specific kinds of spike shapes, connected by a memristor,
various kinds of STDP automatically follow (Linares-Barranco & Serrano-Gotarredona, 2009a,
2009b).
Having considered the basic theory of using memristors as plastic synapses, I will summarize
various more complex and applied examples such as maze solving (Di Ventra & Pershin, 2011)
and modeling part of the visual cortex (Zamarreño-Ramos et al., 2011). I will finally discuss
some of the current shortcomings: the extreme focus on using neuron-like spiking elements,
the relative lack of realized circuits, the problems with trying to mimic a structure we don’t fully
understand yet, the energy use of currently proposed circuits and the lack of learning rules
other than STDP.
Nevertheless, considering the relative youth of the field, incredible progress has already been
made, and memristors are definitely more than just a promising technology: there is real
evidence that they could act as plastic synapses and could potentially revolutionize
neuromorphic engineering and improve both the way we understand the role of plasticity in
the brain, and how we could apply this knowledge to actual circuits.
2. Synaptic Plasticity
Here I will briefly discuss the current state of knowledge about plasticity mechanisms. I will
focus on the functional plasticity rules between pairs of neurons, rather than the molecular
underpinnings, as we are not trying to build actual biological synapses. On the single synapse
level I will divide the learning rules into two classes: firing rate based learning and spike based
learning. For an excellent and more extensive recap of plasticity rules see Abbott & Nelson
(2000).
Although the theory of synaptic plasticity has become more and more detailed, the basics
remain much the same as first postulated by Hebb:
“Let us assume that the persistence or repetition of a reverberatory activity (or "trace") tends to
induce lasting cellular changes that add to its stability.… When an axon of cell A is near enough
to excite a cell B and repeatedly or persistently takes part in firing it, some growth process or
metabolic change takes place in one or both cells such that A's efficiency, as one of the cells
firing B, is increased.” (Hebb, 1949)
Or more simply and intuitively stated by Carla Shatz: “Cells that fire together, wire together”
(Markram, Gerstner & Sjöström, 2011).
Figure 1 Example figure of two connected cells. The pre-synaptic cell sends an axon to the post-synaptic cell and connects to
one or more of its dendrites. This connection is a synapse, and is thought to be plastic according to a Hebbian rule.
This learning principle is called Hebbian learning, and is at the root of most plasticity theories.
Hebbian learning requires some kind of plasticity of the connection between two cells, which
depends on the activity of both cells. Before we get into details a few definitions need to be
made: For every connected cell pair there is a pre-synaptic cell and a post-synaptic cell. The
pre-synaptic cell sends an axon to the post-synaptic cell’s dendrites, and is connected by a
synapse (or several synapses). The effectiveness of a particular synapse we will refer to as the
“weight” of the synapse. How this weight changes is the kind of plasticity studied in this paper.
See Figure 1 for an illustration. Several molecular mechanisms have been suggested such as
changes in the numbers of messenger molecules in a single synapse or increasing the number
of synapses. However in this report we are concerned with replicating synaptic plasticity in
computer chips, and as such will primarily deal with the functional rules of learning, rather than
with the underlying mechanisms. For a more in depth explanation of the neuroscience and
molecular underpinnings see an up to date neuroscience text book (e.g. Purves et al., 2012).
There are of course a large number of possible functional learning rules that we can define, so
we should first ask the question: what requirements should these rules meet in order to lead to
networks capable of memory formation and/or computation? It is hard to define
what these are, but in this review we will consider the following. In a network of connected cells
two prime features are thought to be necessary (Dayan & Abbott, 2003). First, a Hebbian type
of learning rule. If two connected cells activate together, their connection should grow
stronger. Second, there has to be some kind of competition between connections. If several
pre-synaptic cells are connected to a post-synaptic cell, and a particular pre-synaptic cell’s
connection grows stronger, future activity of the other pre-synaptic cells should influence their
connection less. Now, what learning rules have been proposed and how do they follow these
requirements?
I will first consider firing rate based learning. The assumption here is that the individual spikes
don’t matter so much, but rather the sustained activity over time. A big advantage is that there
is no need to model a spiking mechanism, only the firing rate is needed. In most firing rate
models, how the post-synaptic cell’s activity depends on the pre-synaptic activity, and how the
connection weights depend on pre- and post-synaptic activity, can be written in its most
general form as:
π‘‘π‘£π‘π‘œπ‘ π‘‘
= 𝑓(π’˜, 𝒗𝒑𝒓𝒆 , π‘£π‘π‘œπ‘ π‘‘ )
𝑑𝑑
π‘‘π’˜(𝑑)
= 𝑔(π’˜, 𝒗𝒑𝒓𝒆 , π‘£π‘π‘œπ‘ π‘‘ )
𝑑𝑑
Here 𝒗𝒑𝒓𝒆 is a vector of pre-synaptic firing rates, π‘£π‘π‘œπ‘ π‘‘ the post-synaptic firing rate and π’˜ a
vector with all synaptic weights. The function 𝑓 describes how the post-synaptic activity
depends on this activity itself, the pre-synaptic firing rates and the synaptic weights. In this
paper we will not concern ourselves too much with this function, and assume that the postsynaptic firing rate is directly dependent on the pre-synaptic firing rate as follows:
π‘£π‘π‘œπ‘ π‘‘ = π’˜ βˆ™ 𝒗𝒑𝒓𝒆
This is simply the sum of all pre-synaptic firing rates multiplied by their weights. The function
g describes how the synaptic weights change over time based on the current weights and the
firing rates of the pre- and post-synaptic neurons. This is the function which could inform how
memristors could act as plastic synapses. To implement Hebbian learning, the first and simplest
assumption one can make is the following function:
π‘‘π’˜(𝑑)
= 𝒗𝒑𝒓𝒆 π‘£π‘π‘œπ‘ π‘‘
𝑑𝑑
This simple rule satisfies Hebbian learning, as the connection between two cells grows stronger
the more the two cells fire together. However, this system of learning is unstable, as all weights
grow stronger independently of other connection strengths. In fact, the higher the post-synaptic
firing rate, the stronger all the weights get.
Several adjustments have been made to this simple model, the details of which I will skip, but
they have ultimately led to the BCM model (Bienenstock et al., 1982). For a detailed treatise
see also Dayan & Abbott (2003). First of all, to prevent excessive growth of the weights, a threshold
mechanism is added as follows:

$$\frac{d\mathbf{w}(t)}{dt} = (v_{post} - \theta)\,\mathbf{v}_{pre}\, v_{post}$$
Here we’ve added the term (v_post − θ). This decreases the rate of change of the weights as the
post-synaptic activity comes closer to some threshold. For activity smaller than θ synapses are
weakened; for activity larger than θ synapses are strengthened. The threshold itself also
evolves according to:

$$\tau \frac{d\theta}{dt} = v_{post}^2 - \theta$$
Where τ is some time constant. As v_post grows, this also results in the threshold shifting. As
long as the threshold grows faster than v_post, the system remains stable (i.e. the weight values
don’t explode). The sliding threshold also results in competition between synapses. When
particular pre-synaptic cells contribute most to the post-synaptic firing rate, the threshold slides
up accordingly. When less contributing pre-synaptic cells are active, however, the post-synaptic
firing rate is lower. The heightened threshold then results in their weights being decreased.
BCM learning has been shown to lead to several features of learning observed in the brain, such
as orientation selectivity (Blais & Cooper, 2008).
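To make the behaviour of these two rules concrete, the following minimal Python sketch (my own illustration, not taken from the cited papers) simulates two synapses onto a single rate-based neuron, once under the plain Hebbian rule and once under BCM. All rates, time constants and learning rates are arbitrary illustrative values.

```python
import numpy as np

def simulate(rule, steps=5000, dt=0.1, eta=0.001, tau_theta=50.0, seed=0):
    """Simulate two plastic synapses onto one rate-based neuron.

    rule: 'hebb' uses dw/dt = v_pre * v_post (grows without bound);
          'bcm'  uses dw/dt = (v_post - theta) * v_pre * v_post with the
                 sliding threshold tau_theta * dtheta/dt = v_post**2 - theta.
    All parameter values are arbitrary, purely illustrative choices.
    """
    rng = np.random.default_rng(seed)
    w = np.array([0.5, 0.5])          # synaptic weights
    theta = 1.0                       # sliding threshold (used by BCM only)
    for _ in range(steps):
        v_pre = np.clip(rng.normal([2.0, 1.0], 0.5), 0.0, None)  # noisy input rates
        v_post = w @ v_pre            # v_post = w . v_pre
        if rule == "hebb":
            dw = v_pre * v_post
        else:
            dw = (v_post - theta) * v_pre * v_post
            theta += dt * (v_post**2 - theta) / tau_theta
        w = np.clip(w + eta * dt * dw, 0.0, None)
    return w

print("plain Hebbian weights:", simulate("hebb"))  # grow large and keep growing
print("BCM weights:          ", simulate("bcm"))   # stay bounded thanks to the sliding threshold
```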
Firing rate models, albeit very powerful, miss a potentially crucial feature of actual neuronal
systems: a spiking mechanism. In the last decade experimental work (Bi & Poo, 1998, 2001) and
theoretical work (Gerstner et al., 1993) have suggested a strong dependence of synaptic
plasticity on the relative spike timing of the pre- and post-synaptic cells. The resulting learning
mechanism is called Spike Timing Dependent Plasticity (STDP). In this model each spike pair
fired at both cells changes the synaptic weight between the cells. The timing of the spikes is
what specifies how the weight changes exactly. Several known forms of this dependence have
been measured in different places in the brain, but for now I will only consider the most studied
and well known form, which is shown in Figure 2. In this particular STDP learning function, small
timing differences result in large changes in synaptic weight, and vice versa for large timing differences.
Furthermore, the order of the spikes is extremely important. If the pre-synaptic cell fires first, a
positive change in weight results. However, if the post-synaptic cell fires first, the weight
is decreased. This reflects the fact that post-synaptic spikes fired before the pre-synaptic spike
could never have been caused by that pre-synaptic activity.
This particular learning rule is often approximated as:

$$\frac{dw}{dt} = \begin{cases} a_+ \exp\!\left(\frac{t_{pre} - t_{post}}{\tau_+}\right), & t_{pre} - t_{post} < 0 \\ a_- \exp\!\left(-\frac{t_{pre} - t_{post}}{\tau_-}\right), & t_{pre} - t_{post} \ge 0 \end{cases}$$
Where π‘‘π‘π‘Ÿπ‘’ and π‘‘π‘π‘œπ‘ π‘‘ describe the timing a given post-synaptic spike, π‘Ž+ and π‘Ž− describe the
heights and 𝜏 + and 𝜏 − describe the time constants. These functions describe the change in
weight due to a positive timing difference and a negative timing difference respectively, and are
plotted in Figure 2.
Figure 2 Experimental and theoretical STDP curves. On the x-axis the timing difference between a pre and a post-synaptic spike
is shown. On the y-axis the resulting change in synaptic weight is shown. The dots are experimental data points, the lines are
exponential fitted curves. ΔT = t_pre − t_post. Reproduced from Zamarreño-Ramos et al. (2011).
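For concreteness, the pair-based rule can be written as a small Python function; the amplitudes and time constants below are placeholder values and are not fitted to the experimental data of Figure 2.

```python
import numpy as np

def stdp_dw(delta_t, a_plus=1.0, a_minus=-0.5, tau_plus=20.0, tau_minus=20.0):
    """Pair-based STDP weight change for delta_t = t_pre - t_post (in ms).

    Pre-before-post (delta_t < 0) potentiates, post-before-pre (delta_t >= 0)
    depresses; both effects decay exponentially with the timing difference.
    Amplitudes and time constants are illustrative values only.
    """
    dt = np.asarray(delta_t, dtype=float)
    return np.where(dt < 0,
                    a_plus * np.exp(dt / tau_plus),     # potentiation branch
                    a_minus * np.exp(-dt / tau_minus))  # depression branch

# A pre-spike 10 ms before the post-spike strengthens the synapse;
# a pre-spike 10 ms after it weakens the synapse.
print(stdp_dw([-10.0, 10.0]))
```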
The powerful thing about STDP is that this simple mechanism automatically results in both
competition and stable Hebbian learning (Song, Miller, & Abbott, 2000; van Rossum, Bi, &
Turrigiano, 2000), as well as various specific types of learning and computation in networks
such as timing/coincidence detection (Gerstner et al., 1993), sequence learning (Abbott & Blum,
1996; Minai & Levy, 1993; Roberts, 1999), path learning/navigation (Blum & Abbott, 1996;
Mehta, Quirk, & Wilson, 2000) and direction selectivity in visual responses (Mehta et al., 2000;
Rao & Sejnowski, 2000). If STDP could be replicated using memristors, all these things could
theoretically also quite easily be achieved by circuits using memristors.
3. Memristance
This section will briefly recap how memristors were theorized, and how they have been
realized. I will briefly go into the theoretical proof, show what the basic learning rules for
memristors are and how they have been realized.
Electrical circuits contain three passive basic elements: resistors, capacitors and inductors.
These elements have fixed values, meaning they’re non-plastic. However, Leon Chua
theorized an extra element with plasticity in 1971, the memristor (Chua, 1971). The memristor
acted like a resistor, by relating the voltage over the element and the current through it as
follows:
$$v = M(w)\, i$$
The memristance M thus acts the same as a resistance, except that it depends on a parameter
w, which in Chua’s derivations was either the charge q or the flux φ. In what follows I will
consider the charge case. Since the charge and current are related as follows:

$$\frac{dq}{dt} = i$$
w depends on the complete history of current passing through the element, which makes the
memristor act like a resistor with memory for current. Chua later showed that memristors are
part of a broader class of systems called memristive systems (Chua & Kang, 1976) described by:
$$v = M(w, i)\, i$$

$$\frac{dw}{dt} = f(w, i)$$
Where w can be any controllable property, and f is some function. The function f can be called
the equivalent learning rule of the memristor, analogous to the learning rules in the synapse
models discussed earlier.
Figure 3 Illustration of the basic mechanism for the memristor realizations. When current flows through the device, the
boundary between the regions of different resistivities shifts, changing the overall resistance. Reproduced from Zamarreño-Ramos et al. (2011).
The memristor has only been realized as an actual circuit element during the last few years.
One of the reasons it took this long is that it only works at the nanoscale (Strukov et
al., 2008). The technique involves building an element with two regions with different
resistances, R_on and R_off. If constructed correctly, the boundary between the regions will shift
due to applied voltages or currents, resulting in a net change of resistance. See Figure 3 for an
illustration. In these systems the memristance is described by:
$$M(w) = w R_{on} + (1 - w) R_{off}$$
The learning rule for w is given, to a linear approximation, by:
$$\frac{dw}{dt} = a\, i(t)$$
Where π‘Ž is some constant dependent on device properties. How the memristor works is
illustrated in Figure 4. Here a sinusoidal voltage is applied (blue), resulting in a current passing
through the memristor (green). Initial applied voltage results in a low current but also a shift in
w. The shift in w changes the memristance, and during the next voltage period the current is
larger. This is illustrated by the labels 1, 2 and 3 in the top and bottom graphs. And vice versa
for negative voltages, as illustrated by labels 4, 5 and 6.
Figure 4 Memristor workings illustrated. Top: applied sinusoidal voltage (blue) and resulting current (green) plotted over time.
Middle: plastic parameter w plotted over time. D is a scaling factor dependent on the device length. Bottom: voltage vs current
plot with hysteresis loops. The numbers in the top plot correspond to specific loops in the bottom plot. As positive voltage is
applied, w shifts, changing the memristance and resulting in later applied voltages having a larger effect. Negative voltages then
have the opposite effect. Reproduced from Strukov et al. (2008).
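A minimal numerical sketch of the ideal device described above (with invented parameter values) shows the same behaviour: integrating dw/dt = a·i(t) under a sinusoidal drive makes the same applied voltage produce different currents on the rising and falling edges, which is what generates the pinched hysteresis loops of Figure 4.

```python
import numpy as np

# Illustrative device parameters (not taken from any measured memristor)
R_on, R_off = 100.0, 16e3       # bounding resistances (ohm)
a = 5e3                         # constant in the ideal rule dw/dt = a * i(t)
w = 0.1                         # initial state variable, bounded to [0, 1]

t = np.linspace(0.0, 2.0, 20000)        # two periods of a 1 Hz drive (s)
dt = t[1] - t[0]
v = np.sin(2 * np.pi * t)               # applied voltage (V)

current = np.empty_like(t)
for k, vk in enumerate(v):
    M = w * R_on + (1.0 - w) * R_off    # memristance M(w)
    current[k] = vk / M                 # instantaneous current through the device
    w = np.clip(w + a * current[k] * dt, 0.0, 1.0)   # integrate dw/dt = a * i

# Same applied voltage (+0.5 V), different current: on the falling edge w has
# already grown, the memristance is lower, and the current is therefore larger.
i_rising  = current[np.argmin(np.abs(t - 1.0 / 12.0))]   # v = +0.5 V, rising
i_falling = current[np.argmin(np.abs(t - 5.0 / 12.0))]   # v = +0.5 V, falling
print(f"i at +0.5 V (rising) : {i_rising:.2e} A")
print(f"i at +0.5 V (falling): {i_falling:.2e} A")
```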
Two labs have actually realized an element like this; using a region with oxygen vacancies which
move due to an applied electric field (Strukov et al., 2008), and using Ag-rich versus Ag-poor
regions which shift in much the same way (Jo et al., 2010). These elements behave according to the ideal
equations written above in a linear region of operation, and as memristive systems outside this
region. Furthermore it was found that a certain threshold voltage was needed before any
change in memristance occurred.
A theoretical simplification of the whole memristor system implementing the threshold and
nonlinear region was proposed by Linares-Barranco & Serrano-Gotarredona (2009b). In this
model there is a dead zone where nothing changes, while w changes exponentially outside this
region:
π‘£π‘‘β„Ž
𝑣
𝑑𝑀
𝑣
π‘œ
= πΌπ‘œ 𝑠𝑖𝑔𝑛(𝑣)[ 𝑒 − 𝑒 π‘£π‘œ ] if |v| > π‘£π‘‘β„Ž
𝑑𝑑
Where π‘£π‘‘β„Ž describes the functioning threshold, and πΌπ‘œ and 𝑣0 are parameters determining
slope. This learning rule is illustrated in Figure 5.
Figure 5 Learning rule for the proposed memristors. Reproduced from Zamarreño-Ramos et al. (2011).
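A direct transcription of this thresholded rule into code, with made-up values for I_o, v_o and v_th, makes the dead zone explicit:

```python
import numpy as np

def dwdt(v, i0=1e-4, v0=0.2, v_th=1.0):
    """Thresholded memristor learning rule of the form used by
    Linares-Barranco & Serrano-Gotarredona (2009b): nothing changes inside
    the dead zone |v| <= v_th, and |dw/dt| grows exponentially outside it.
    The parameter values here are invented for illustration."""
    v = np.asarray(v, dtype=float)
    rate = i0 * np.sign(v) * (np.exp(np.abs(v) / v0) - np.exp(v_th / v0))
    return np.where(np.abs(v) > v_th, rate, 0.0)

for volt in (-1.5, -0.5, 0.0, 0.5, 1.2, 1.5):
    print(f"v = {volt:+.1f} V  ->  dw/dt = {float(dwdt(volt)):+.3e}")
```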
The memristor offers many new advantages. Among other things, it allows for analog data storage
rather than just 0’s and 1’s. Furthermore, it can easily be implemented in existing circuits
based on the basic elements. But how relevant is it for neuromorphic engineering, especially as a
synaptic device? It definitely has inherent plasticity, with clear learning rules based on the
current that has passed through the device. Connecting two nodes in a network will result in
plasticity automatically. But how do memristors compare to plastic synapses found in the brain?
Would the plasticity mechanism meet the Hebbian learning and competition requirements?
4. Synaptic Plasticity and Memristance
In this section I will explore how we can combine synaptic plasticity and memristance on the
lowest level. That is, given two connected nodes, how can we relate the two? Research into this
has focused on reproducing the STDP mechanism, which I will recap here.
Soon after the development of actual memristors several labs started looking into using them in
neuromorphic systems, focusing on spike-based systems. The most central result was achieved
by Linares-Barranco and Serrano-Gotarredona (Linares-Barranco & Serrano-Gotarredona,
2009a, 2009b). They showed that if you connect two spiking neuron-like systems with a
memristor, and assume a few basic things, STDP-based plasticity automatically follows. The
basic system is illustrated in Figure 6.
Figure 6 Two spiking neurons connected by a memristor.
Vpre refers to the pre-synaptic voltage, and Vpost to the post-synaptic voltage. For the purpose
of the proof no exact spiking mechanism is needed, only a specific spike shape. If a spike
happens at either node, the voltage at that node follows the spike shape. The spike shape is
assumed to take a specific parameterized form (Linares-Barranco & Serrano-Gotarredona, 2009b), in which
A_mp^{+/−} describes the positive and negative heights respectively, and τ_+ and τ_− the exponential
rise and decay time scales. The shape is illustrated in Figure 7 and replicates the main features of
actual spikes: a sharp initial peak, followed by a slow recovery to equilibrium.
Figure 7 Basic spike shape assumed for the STDP memristor analysis. Reproduced from Linares-Barranco & Serrano-Gotarredona (2009b).
Furthermore, they used a learning rule as in Figure 5 for their theoretical memristor. If these
equations are used to find the voltage over the memristor for different spike timing
combinations, the memristance (or “synaptic weight”) changes for these combinations can be
found. The voltage over the memristor in Figure 6 is given by:
$$v(t) = v_{post}(t) - v_{pre}(t)$$
Where π‘£π‘π‘œπ‘ π‘‘ and π‘£π‘π‘Ÿπ‘’ are the post and pre-synaptic voltage respectively. Now given a known
learning rule:
$$\frac{dw}{dt} = f(v)$$
where f(v) is as illustrated in Figure 5, it is possible to find the weight change due to a spike
time difference:
π›₯𝑀(π›₯𝑇) = ∫ 𝑓(𝑣(π›₯𝑇, 𝑑))𝑑𝑑
where π›₯𝑇 is the spike timing difference. By using a range of different spike time combinations,
this function can be found. This process is illustrated in Figure 8. The red colored area shows
what voltages are above threshold and contribute to 𝑓(𝑣), and thus the change in w. For positive
π›₯𝑇, w increases, and for negative π›₯𝑇 w decreases. For spikes too far apart or for spikes at the
same time nothing changes. Since 𝑓(𝑣) is an exponential function the increase in w will be
bigger the smaller π›₯𝑇 is (unless it’s 0).
Figure 8 Finding the weight change function for different spike pairs, illustrated. Reproduced from Linares-Barranco & Serrano-Gotarredona (2009b).
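This procedure is easy to reproduce numerically. The sketch below assumes a simple double-exponential spike waveform (sharp positive peak, slow negative tail) and the thresholded rule of Figure 5; the waveform and all parameter values are my own illustrative choices rather than those of the cited papers, so only the qualitative STDP shape should be expected. The sign convention is chosen so that positive ΔT corresponds to the pre-spike arriving before the post-spike, as in the text above.

```python
import numpy as np

def spike(t, a_pos=1.0, a_neg=0.25, tau_pos=1.0, tau_neg=20.0):
    """Assumed spike waveform (t in ms): a sharp positive peak followed by a
    slow negative tail. This is only an illustrative stand-in for the shapes
    of Figure 7, not the exact waveform used in the cited papers."""
    s = a_pos * np.exp(-t / tau_pos) - a_neg * np.exp(-t / tau_neg)
    return np.where(t >= 0.0, s, 0.0)

def f(v, i0=1e-3, v0=0.2, v_th=0.8):
    """Thresholded memristor learning rule dw/dt = f(v), as in Figure 5."""
    rate = i0 * np.sign(v) * (np.exp(np.abs(v) / v0) - np.exp(v_th / v0))
    return np.where(np.abs(v) > v_th, rate, 0.0)

t = np.arange(-100.0, 100.0, 0.01)                 # ms
for dT in (-40, -20, -5, 5, 20, 40):               # here dT = t_post - t_pre
    v_mem = spike(t - dT) - spike(t)               # v = v_post - v_pre (pre spikes at t = 0)
    dw = np.sum(f(v_mem)) * 0.01                   # integrate f(v(t)) over time
    # potentiation for dT > 0, depression for dT < 0, nothing for pairs too far apart
    print(f"dT = {dT:+4d} ms  ->  dw = {dw:+.4f}")
```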
In Figure 9 we can see the resulting function Δw(ΔT). The curve is equivalent to the original
STDP curve, showing that the memristor learning rule, given two connected spiking mechanisms,
automatically leads to an STDP-like learning rule. It should be noted that the “weight” w is not
actually what is used as the synaptic weight in the above model; the conductance is. The latter
is a function of w. So rather than a learning function for w, one related to the
memristance M should be used. This results in multiplicative STDP of the form (Zamarreño-Ramos et al., 2011):
π›₯𝐺(π›₯𝑇) ∝ −𝐺 2 π›₯𝑀(π›₯𝑇)
1
Where 𝐺 is the memristive conductance . This is a different kind of learning where the change
𝑀
in synaptic strength is dependent on the current synaptic strength as well.
Figure 9 Original STDP function (left) and memristance-derived STDP function (right). Reproduced from Linares-Barranco &
Serrano-Gotarredona (2009a).
Since STDP-like learning automatically follows from the described setup, one could thus expect
many of the learning and computational advantages STDP learning offers in a network of
memristively connected spiking nodes, including competition between synapses and Hebbian
learning.
5. Applying memristive synaptic plasticity
Until now we’ve mostly discussed the theoretical side of synaptic plasticity and memristance,
and how they’re related. In this section we will go more into applying the technologies
discussed so far in two ways: theoretical ideas of how to build large circuits for specific
purposes, and various actually realized circuits. I will start with the demonstration of some
simple operations and learning in simple circuits, followed by more large scale network learning
and finally two proposed circuits for actual tasks: maze solving and a visual cortex inspired
imaging network.
Simple arithmetic operations
The possible computational power of circuits using memristors was demonstrated by
Merrikh-Bayat & Shouraki (2011a). They showed how simple circuits are capable of basic
arithmetic operations using just a few elements. In conventional circuits the voltages and
currents are used for the calculation, but they instead proposed to use the memristance value
itself, resulting in simpler and faster operations using less chip area, in particular for
multiplication and division. Their proposed circuits are shown in Figure 10. While not directly a
neuromorphic application, the simplicity of these circuits could allow them to be combined with
neuromorphic structures and they are worth mentioning.
Figure 10. Simple memristor-based circuit example capable of basic arithmetic operations. Reproduced from Merrikh-Bayat & Shouraki (2011b)
Associative learning
Memristor circuits have successfully been used to model many forms of learning including
experimentally found features of amoeba learning (Pershin, Fontaine, & Di Ventra, 2009).
Surprisingly small circuits of spiking neurons can perform associative memory, as shown in
Pershin & Di Ventra (2010). The proposed circuit is shown in Figure 11A. The circuit was actually
built, but with memristors constructed from regular components and digital controllers. In this
setup, circuits with a spiking mechanism representing pre-synaptic neurons (N1 and N2) receive
different kinds of inputs (in this case “sight of food” and “sound”).
These neurons are connected to a post-synaptic neuron N3 by memristors S1 and S2. S1 has
low resistance initially and S2 high, so that initially only the “sight of food” (N1) signal is passed
successfully, leading to N3 spiking.
Figure 11 A. Simple associative memory circuit N1, N2 and N3 are spiking neuron circuits, connected by memristors S1 and S2.
B. Associative memory demonstrated. At first the output neuron only fires if there is a “sight of food” signal, not if there is only
a “sound” signal. However after getting both the “sight of food” signal and the “sound” signal at the same time the system
learned to associate the two, and the output neuron fires both when there is a “sound” signal and a “sight of food” signal.
Reproduced from Pershin & Di Ventra (2010).
The learning mechanism is similar to that explained in Section 4. When N2 fires no current
flows between N2 and N3 due to the high memristance of S2. If N3 fires spikes at the same
time, however, the memristance is lowered, resulting in N2 also being more strongly connected
to N3. N3 only fires if N1 is firing, resulting in the connection only strengthening if N1 and N2
fire at the same time. As such, the circuit portrays associative learning.
How well the circuit works is illustrated in Figure 11B. Here the two input signals are shown
(green and red curves) and the output spikes (black). Before a learning period during which N1
and N2 spike at the same time, only N1 spikes result in N3 spiking. N2 spikes have no effect.
After a learning period during which N1 and N2 spike at the same time, either N1 or N2 spikes
result in N3 spiking.
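The logic of the circuit can be captured in a few lines of rate-style simulation; the numbers below are invented, and the real experiment of course uses spiking units and memristor dynamics rather than this simple threshold-and-increment rule.

```python
# Minimal sketch of the associative-memory idea of Pershin & Di Ventra (2010):
# two input units drive one output through adjustable "memristive" weights.
# All values are illustrative assumptions, not measured circuit parameters.

def output_fires(sight, sound, w):
    return sight * w["sight"] + sound * w["sound"] > 0.5   # firing threshold

def present(sight, sound, w, lr=0.5):
    fired = output_fires(sight, sound, w)
    if fired:   # Hebbian update: inputs active while the output fires strengthen
        for name, x in (("sight", sight), ("sound", sound)):
            w[name] = min(1.0, w[name] + lr * x)
    return fired

w = {"sight": 0.8, "sound": 0.1}          # S1 initially conducting, S2 initially weak
print("sound alone, before:", present(0, 1, w))   # False: no association yet
print("sight alone:        ", present(1, 0, w))   # True: unconditioned response
print("sight + sound:      ", present(1, 1, w))   # True: and the sound synapse strengthens
print("sound alone, after: ", present(0, 1, w))   # True: the association has been learned
```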
Network learning
Learning on the level of just a few neurons is conceptually and computationally simple, but
does this principle easily translate to larger networks? After all, we originally set out to find out
if memristors can be used to reproduce brain-like structures, which involves numbers like
10^10 synapses per cm^3. Can we build large networks of neuron-like structures connected by
memristors that are still tractable and useful?
Theoretically several networks have been proposed, but they all have the cross-bar architecture
in common (e.g. Jo et al., 2010, 2009; Zamarreño-Ramos et al., 2011), which is illustrated in
Figure 12. The proposed structure consists of pre- and post-synaptic neurons, each with their
own electrode. These electrodes are arranged in a cross-bar structure, as illustrated in Figure
12A, where every electrode is connected by a memristor synapse, as illustrated in Figure 12B.
All pre-neurons are thus connected to all post neurons. The idea is to have the memristors
encode weights between the pre- and post-neurons, performing some transformation on input
to the pre-neurons which is read out by the post neurons.
Figure 12 Example cross-bar circuit illustrated, as proposed in Jo et al. (2010) A. The proposed cross-bar circuit. Every pre- and
post-neuron has its own electrode, and every electrode is connected to every other electrode, creating an all-to-all connectivity
scheme between the pre- and post-neurons which maximizes connection density. B. A single synapse illustrated. Two electrodes
are connected by a memristive connection, in this case one based on an Ag-rich and an Ag-poor region (see the Memristance section).
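Functionally, such a crossbar computes a matrix–vector product in one step: each post-electrode collects the sum of the pre-electrode voltages weighted by the conductances at the crosspoints. A schematic sketch (idealized: it ignores wire resistance, sneak paths and the read-out circuitry, and uses arbitrary values):

```python
import numpy as np

# Idealized N_pre x N_post crossbar: G[i, j] is the conductance of the memristor
# joining pre-electrode i to post-electrode j (values are arbitrary).
rng = np.random.default_rng(1)
G = rng.uniform(1e-6, 1e-4, size=(4, 3))      # siemens

v_pre = np.array([0.3, 0.0, 0.5, 0.1])        # voltages applied by the pre-neurons (V)

# Kirchhoff's current law at each (grounded) post-electrode: the column current
# is the weighted sum of the input voltages -- a matrix-vector product "for free".
i_post = G.T @ v_pre
print("post-synaptic currents (A):", i_post)
```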
Jo et al. (2009) actually built a simple memory crossbar circuit, capable of storing information
by changing the connection weights between electrodes, the letter-encoding capabilities of which
are illustrated in Figure 13. This particular circuit, however, is an explicit memory
encoding and read-out example. Another limitation of the circuit is that it is based on on/off switching,
while one of the main theoretical advantages of memristors is analog storage (Strukov et al.,
2008). Nevertheless, for an initial circuit using a new technology these are hopeful results.
Figure 13 A simple crossbar realization, with memory storage and retrieval capabilities. A. The word “crossbar” stored and
retrieved in the circuit. B. SEM image of the simple 16×16 circuit. Reproduced from Jo et al. (2009).
One group realized an early exploration of the more plastic capabilities of memristor circuits
in the form of a self-programming circuit (Borghetti et al., 2009). This circuit was capable of
AND/OR logic operations after being self-programmed. An example run can be seen in Figure
14. This is an example of self-programmed AND logic, where only overlapping inputs elicit a
post-synaptic response.
Figure 14 AND/OR logic illustrated for a crossbar circuit after self-programming. The red lines were input voltages, the blue
curve the measured output voltage. A. Non-overlapping inputs show very small output. B. Overlapping inputs show large output,
indicating an AND operation. Reproduced from Borghetti et al. (2009).
Even further theoretical advances have been achieved by Howard et al. They developed a
memristor based circuit capable of learning using evolutionary rules (Howard, Gale, Bull, de
Lacy Costello, & Adamatzky, 2011). They showed that memristive properties of the network
increase efficiency of the learning compared to conventional evolving circuitry. They compared
several types of theoretical memristors and showed how they impacted performance.
More complex tasks
So far I’ve mostly discussed circuits tested on very simple tasks; they mostly served as proofs of concept. Several circuits have in fact been proposed with more useful tasks in mind, two of
which I will highlight next.
The first is maze solving. Mazes, and more generally graph problems, can be hard to solve
computationally. Shortest-path algorithms exist but for large mazes they can be quite slow.
Using memristors Di Ventra and Pershin proposed a circuit that drastically improves on this (Di
Ventra & Pershin, 2011). The proposed circuit is shown in Figure 15. Every node is connected by
a memristor and a switch. If that particular path is open, the switch is turned on and vice-versa.
Figure 15. Basic maze solving circuit. The nodes (path crossings) of the maze are mapped to a circuit of memristors in series
with switches. If a path is closed, the switch is turned off, and vice versa for open paths. Reproduced from Di Ventra & Pershin
(2011).
The circuit is initialized by applying a voltage over the start- and endpoints of the maze. This will
result in a current flow through the “open” paths, changing resistance values everywhere along
the path. These changed resistances can be read out afterwards to find the possible paths
through the maze. The resulting path-solving capabilities are shown in Figure 16, including the
time needed to solve the maze. The mechanism works much faster than any existing
mechanism, due to massive parallelism: all units are simultaneously involved in the computation. This
circuit, although very specific to this task, is a good example of how circuit theory and brain-inspired
architecture can work together to produce new and well-performing circuitry.
Figure 16. Solutions to a multiple solutions maze using the proposed circuit, encoded in the resistance of the memristors
(Figure 15). The colors of the dots indicate the resistance of each memristor; a major resistance difference can be seen at the
path-split, corresponding to paths of different length. This solution can be found after only one iteration of the network, in 0.047
seconds, vastly outperforming any other method of solving this type of problem, thanks to the simultaneous cooperation of the
small units. Reproduced from Di Ventra & Pershin (2011).
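The principle can be sketched on a small graph: represent every open corridor as a resistor (memristor), apply a voltage between entry and exit, and the corridors that carry current are the ones whose memristance would change and therefore mark the solution paths. The sketch below solves only the static Kirchhoff equations and omits the memristance update of the actual circuit; the maze, resistance value and voltages are arbitrary.

```python
import numpy as np

# Tiny maze as a graph: nodes are path crossings, edges are open corridors,
# each modelled here as a fixed 1 kOhm resistor (the memristance update step
# of the real circuit is omitted in this static sketch).
#
#   0 -- 1 -- 2
#        |    |
#   6 -- 4 -- 5 -- 3      entry = node 0, exit = node 3, node 6 is a dead end
edges = [(0, 1), (1, 2), (1, 4), (2, 5), (4, 5), (5, 3), (4, 6)]
n, R = 7, 1e3
entry, exit_node, v_in = 0, 3, 1.0

# Conductance (Laplacian) matrix of the resistor network
L = np.zeros((n, n))
for a, b in edges:
    g = 1.0 / R
    L[a, a] += g; L[b, b] += g
    L[a, b] -= g; L[b, a] -= g

# Kirchhoff's current law with the entry held at v_in and the exit at 0 V
A, rhs = L.copy(), np.zeros(n)
for node, volt in ((entry, v_in), (exit_node, 0.0)):
    A[node, :] = 0.0
    A[node, node] = 1.0
    rhs[node] = volt
v = np.linalg.solve(A, rhs)

# Corridors carrying current are the ones whose memristance would change:
# reading them out afterwards traces the path(s); the dead end carries none.
for a, b in edges:
    print(f"edge {a}-{b}: |current| = {abs(v[a] - v[b]) / R * 1e3:.3f} mA")
```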
V1 imitation
Finally I will consider a brain structure-informed self-learning circuit reproducing feature
recognizing properties of the V1 area in the brain’s visual cortex, proposed in Zamarreño-Ramos et al. (2011). Neurons in this area have two properties that need to be reproduced. They
typically have specific receptive fields and portray orientation selectivity; specific kinds of edges
are detected within the receptive fields (Hubel & Wiesel, 1959). Zamarreño-Ramos et al. based
their network on known V1-architecture using receptive fields. They started the network with
random memristance values and trained it with realistic spiking data, corresponding to signals
coming from the retina. The resulting images produced with this network can be seen in Figure
17. The evolution of the weights in the receptive fields can be found in Figure 18, showing how
orientation selectivity arises during training. This not only shows that memristor networks work
well as computational circuits, but also reconfirms that memristor-like structures result in
brain-like learning automatically when using STDP. A similar circuit was proposed by Snider
(2008).
Figure 17 A. Illustrating the effect of a rotating dot and a physical scene on the V1-simulating self-learning network. The 3D graph plots edge detections in space and time. Blue dots represent dark to light changes, red dots represent light to dark
changes. B. Network reproduction of natural scene, events collected during a 20ms video of two people walking. White and
black dots correspond to the blue and red dots in A respectively. For details see image source (Zamarreño-Ramos et al., 2011).
Figure 18 Receptive field training resulting in orientation selectivity illustrated. Reproduced from Zamarreño-Ramos et al.
(2011).
6. Discussion
In this paper I reviewed the current knowledge on synaptic plasticity in the brain and
memristive theory and realization. Next I showed how memristors can be used as plastic
synapses that portray Spike Timing Dependent Plasticity (STDP), and finally I showed how
various authors have proposed or already realized circuitry based on this, able to perform basic
arithmetic operations, associative learning, simple logic operations, maze solving and V1-like
edge detection. These are all very promising results, but there are still various points of
discussion.
STDP
The most important result for using memristors as plastic synapses is without a doubt the
STDP results. Networks consisting of spiking neuron-like structures connected by memristors
are almost automatically capable of timing and coincidence detection, sequence learning, path
learning and navigation, direction selectivity in visual responses and the emergence of
orientation selectivity, many of which have already been demonstrated in memristive circuits. A
big weakness of the presented STDP theory is the dependence on a specific learning rule of the
memristor: an exponential learning rule (see Figure 5). This is a very idealized function, and it is
not clear if this shape can be represented well by a physical device. It is likely that when
implementing STDP in actual circuits results will not be as clean as presented in the theoretical
papers, and the STDP function might be different. Another point of concern is the fact that even
with spiking structures, memristors don’t portray pure STDP, but multiplicative STDP, a less well
studied form. The effects of this definitely need to be studied more.
Different spike shapes
One weakness when wanting to implement the specific STDP learning function presented earlier is
the specific spike shapes necessary. If you change the spike shape, the learning function also
changes. Some preliminary studies of this have already been done, and are shown in Figure 19
(Zamarreño-Ramos et al., 2011). This fact could actually be used as an advantage, as the
different STDP functions allow for a higher variety of learning and thus applications. In fact,
many of the learning functions shown in Figure 19 bear resemblance to learning functions
which are also found in some brain areas, some examples of which can be found in Figure 20.
Perhaps in the experimental cases there is also a relationship between spike shape and learning
function, which would offer strong support for a memristive theory of plasticity in the brain.
Even if they don’t, however, at least their functional role can be replicated in circuitry by
choosing the spike shapes well.
Figure 19 Influence of different spike shapes on learning functions. Reproduced from Zamarreño-Ramos et al. (2011).
Figure 20 Different experimentally found STDP functions. Reproduced from Abbott & Nelson (2000).
Firing rate based learning
A big gap in the current explorations of applying synaptic plasticity to memristance structures is
the reliance on spiking. This puts a circuit designer at a disadvantage, as circuits mimicking
spiking behavior need to be implemented, which requires extra space and energy. As discussed
earlier in this review, there is another class of functional learning theory, which is based on
firing rates. This learning has no need for spiking, and still portrays many properties of
learning observed in the brain. No memristor studies, to my knowledge, have explored the
possibilities of applying that theory to memristor based synapses. This is surprising, as even the
basic memristor function is very similar to firing rate based learning. To reiterate the basic firing
rate learning equations:
$$\frac{dv}{dt} = f(w, v, u)$$

$$\frac{dw}{dt} = g(w, v, u)$$
Where v and u correspond to post-synaptic and pre-synaptic firing rates, and w to the
connection weight. Very similarly, a basic memristor is described by:
$$\frac{dv}{dt} = f(w, v, i)$$

$$\frac{dw}{dt} = g(w, i)$$
Where in this case v and i correspond to the voltage over and the current through the
memristor, and w to some memristor property governing the memristor’s resistance. It is clear
that these are similar function classes, with the voltage and current being analogous to the
post- and pre-synaptic firing rates. If we can build a circuit in which the memristor functions
imitate known firing rate functions, this circuit can be used to employ learning as used in firing
rate models. Below I will show that the simplest firing rate learning functions and memristor
functions are already extremely similar. To reiterate the simplest firing rate-based functions:
$$v = w u$$

$$\frac{dw(t)}{dt} = u v$$
In this setup the post-synaptic firing rate depends on the pre-synaptic firing rate, and the
connection weight gets stronger depending on both the pre- and post-synaptic firing rates.
The simplest theoretical memristor functions, meanwhile, are:
𝑣 = π‘Ÿ(𝑀)𝑖
𝑑𝑀
=𝑖
𝑑𝑑
Where the resistance as a function of w is given by:
π‘Ÿ(𝑀) = 𝑀 π‘…π‘œπ‘› + (1 − 𝑀)π‘…π‘œπ‘“π‘“
The time derivative of which is:
π‘‘π‘Ÿ π‘‘π‘Ÿ 𝑑𝑀
=
= (π‘…π‘œπ‘› − π‘…π‘œπ‘“π‘“ )𝑖
𝑑𝑑 𝑑𝑀 𝑑𝑑
If v and i are analogous to the post- and pre-synaptic firing rates respectively, r would
correspond to the connection weight. So for a simple memristor with a specifiable input
current, the voltage depends on the amount of input current, while the “connection
weight” r changes in proportion to that input current. This is not equivalent to the
firing rate learning model (the weight change does not depend on the post-synaptic activity),
but it is already very close, and relatively simple circuits could probably be
built to mimic the firing rate functions more closely, or even more complicated learning functions
like the BCM rule (see the Synaptic Plasticity section of this review); future researchers
would do well to explore these possibilities. Perhaps by combining this with the circuits designed to do
arithmetic operations, something like BCM learning can be achieved (Merrikh-Bayat & Shouraki,
2011a).
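A tiny numerical comparison, with made-up units, makes the remaining gap explicit: the memristor "weight" follows the input (pre-synaptic) current alone, whereas the Hebbian rate rule also requires the post-synaptic activity.

```python
import numpy as np

R_on, R_off = 1.0, 10.0     # arbitrary units
dt, steps = 0.01, 300
u = 2.0                     # constant "pre-synaptic rate", fed in as the current i

# Memristor analogue: dw/dt = i depends on the pre-synaptic side alone.
w = 0.2
for _ in range(steps):
    w = np.clip(w + dt * u, 0.0, 1.0)
r = w * R_on + (1.0 - w) * R_off        # the "connection weight" r(w)

# Simplest Hebbian rate rule for comparison: dw/dt = u * v with v = w * u,
# so the weight change needs the post-synaptic activity as well.
w_hebb = 0.2
for _ in range(steps):
    v_post = w_hebb * u
    w_hebb += dt * u * v_post

print("memristor: w =", w, " r(w) =", r)   # driven by pre alone, saturates at w = 1
print("Hebbian  : w =", w_hebb)            # needs v_post, and grows without bound
```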
Other types of plasticity
This paper focused mostly on Hebbian and more specifically STDP-type learning. However, these
are not the only kinds of learning theories, nor are they the only mechanisms observed in the
brain. In fact, several proposed network models of learning combine various types of plasticity
in order to work correctly (e.g. Lazar, Pipa, & Triesch, 2009; Tetzlaff, Kolodziejski, Timme, & Wörgötter,
2011). I will briefly discuss the most important ones and their impact on memristor based
synaptic plasticity.
The STDP learning we discussed in this paper was pair-based: it described how pairs of spikes
influenced the synaptic strength between two neurons. However, experimental evidence
(Froemke & Dan, 2002) suggests that triplets of spikes also have a separate influence on the
plasticity, which is not explained by the pair-based STDP learning rule. A computational model
subsequently showed how a triplet-based STDP learning rule could explain the experiments (Pfister & Gerstner,
2006). In fact, a memristive model for triplet STDP learning has already been proposed (Cai,
Tetzlaff, & Ellinger, 2011).
Although STDP is a very powerful learning strategy, it does not guarantee stable circuits in a
realistic setting (Watt & Desai, 2010). Complementary learning methods have been suggested
to constrain firing rates and synaptic weights in a network; synaptic scaling and intrinsic
plasticity being the most well known methods. Synaptic scaling basically puts a limit on the sum
of all weights. Intrinsic plasticity meanwhile changes a single cell’s physiology and thus the way
it fires based on long-term global network activity. Implementing these kinds of processes in
parallel with STDP might turn out to be necessary to build viable and effective circuits. An
example where synaptic scaling and intrinsic plasticity are already applied is the BCM rule, a
firing rate based model. If a memristive BCM model could be designed and combined with an
STDP rule, synaptic scaling and intrinsic plasticity could potentially be applied in circuitry.
Plasticity
Much of the research presented here tried to apply knowledge obtained from the brain to
circuitry. In particular, theories about the role of synaptic plasticity were applied to learning
networks. A major potential problem is that there is actually very little understanding of the link
between synaptic plasticity and memory or computation in the brain. Some basic functional
mechanisms of functional learning are known, and some theoretical advantages of these
mechanisms have been studied, but on a network level we know very little. Is it actually
justified to start building circuits based on all this? First of all, we know enough of the low-level
mechanisms to start applying them to actual circuits. Secondly, trying to implement synaptic
activity in circuit goes both ways. It doesn’t only lead to better and new circuit architectures
(e.g. for extremely quick maze solving), but also to a potentially better understanding of the
very mechanisms we are trying to mimic.
Integrated circuitry
One of the many cited advantages of the memristor technology is that it is completely
compatible with existing circuitry. It can be used to make networks with plastic synapses,
connected to conventional circuits (e.g. Snider et al., 2011). However, when claiming that
this technology could lead to structures with brain-level complexity and computing power there
is a possible point of discussion. Although not much is known about exact computation
mechanisms in the brain, one could argue that its strength is the fact that it consists only of
massively parallel connected plastic units, without a specific “interpreter” or some such
structure. When trying to mimic brain-like processes, it might not be easy to actually combine
these structures with existing circuitry. How do you read out thousands to millions of
independent processes, and make sense of them while passing them through a single processor,
without running into the same problem as before? For example, the maze solving
circuit solves the maze very quickly to a human’s eye, but how would a conventional
computer read out and use the solution? Again, large scale computations are not well
understood in the brain, and by building and testing memristor circuits we could actually begin
to study some of these questions.
Ideal modeling problems
A major problem with the current state of the memristor field is that most major results have
only been achieved in simulated networks, and even then often rely on very simplified
memristors. Real life circuits will be noisier, and the memristors will not be as ideal as
simulated. It is not clear yet how well the theoretical results will carry over to actual circuitry. It
is possible that actual circuitry might perform better than the simulated circuits, or at least
close to a biology-like system, especially if something like evolution-based training is used. For
example, an increasing body of literature is suggesting that the noisiness of the brain could
actually be one of its major strengths, for example by encoding the reliability of sensory
evidence (Knill & Pouget, 2004). For memristors to really prove their worth as plastic synapses,
more experimental results are absolutely vital.
Energy use of circuits
Related to the absence of experimental results is the energy use necessary for proposed
circuitry. If one wanted to match the complexity of the brain, about 10^3-10^4 synapses per
neuron would be needed. With current-day technology it is supposedly possible to maintain
that ratio and have 10^6 simulated neurons/cm^2, which is quite substantial. However, energy-wise
there is still a major problem, following the calculations from Zamarreño-Ramos et al.
(2011). Current memristor resistance values range from the kOhm to the MOhm scale. Zamarreño-Ramos et al. assumed 1 MOhm memristors and neurons providing enough current to maintain a
1 V potential over the memristors. If the neurons were to fire spikes at a 10 Hz average, the
memristors alone would dissipate 2 kW/cm^2. This amount of power dissipation is unrealistic and
would melt any real circuitry. The only way to bring down these power requirements is by
increasing the maximum resistance values by about 100-fold. Before circuits of brain-like
complexity can be built this milestone will have to be reached. Progress is already being made
here; the most recent memristor-like elements are operable with currents as low as 2 μA, as
opposed to the 10 mA needed for the theoretical memristors assumed above (Mehonic et al.,
2012).
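A back-of-the-envelope version of this estimate, in which the synapse density and the effective spike duty cycle are explicit assumptions (only the 1 MOhm, 1 V and 10 Hz figures come from the text above), lands in the same kW/cm^2 range:

```python
# Rough power estimate in the spirit of Zamarreño-Ramos et al. (2011).
# The synapse density and effective pulse width below are assumptions made
# for illustration; only the 1 MOhm / 1 V / 10 Hz figures come from the text.
neurons_per_cm2 = 1e6
synapses_per_neuron = 1e4
R = 1e6               # memristor resistance (ohm)
V = 1.0               # voltage across a memristor while a spike is present (V)
rate = 10.0           # average firing rate (Hz)
pulse_width = 20e-3   # assumed effective spike duration (s)

synapses_per_cm2 = neurons_per_cm2 * synapses_per_neuron   # 1e10
p_per_synapse = V**2 / R                                    # 1 microwatt while active
duty_cycle = rate * pulse_width                             # fraction of time active
p_total = synapses_per_cm2 * p_per_synapse * duty_cycle
print(f"dissipation ~ {p_total/1e3:.1f} kW per cm^2")       # on the order of 2 kW/cm^2
```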
Other small devices with similar properties
The memristor is not the only circuit element capable of plasticity. There are also capacitor and
inductor alternatives: memcapacitors and meminductors (Di Ventra, Pershin, & Chua, 2009).
These allow for even more complicated circuitry portraying plasticity related memory and
computations. Future work should incorporate these circuit elements in parallel or series with
the memristor. With these elements it might be especially easy to mimic firing rate based
plasticity rules. Moving away from memristance based circuitry, nano-scale transistor devices
have also been shown to portray plasticity and spiking-like mechanisms (e.g. Alibart et al.,
2010). Understanding how these devices relate to memristors and synaptic plasticity could lead
to better understanding of both, and potentially lead to combined circuitry.
Final remarks
Memristors have been realized only very recently, and considering the youth of the field
extraordinary advances have already been made. Memristors are more than a promising
technology: they could potentially revolutionize neuromorphic engineering and pioneer the use
of plasticity in actual circuits. The focus is currently mostly on using spiking structures
connected by memristors. These networks can be very powerful, since STDP is almost
automatically portrayed. However, there are various problems with the current approach, as
described above. I would suggest three central future research directions.
The similarity of STDP rules seen in memristors and between actual neurons is extremely
interesting, and possibly points towards a memristive explanation of STDP in the brain (Linares-Barranco & Serrano-Gotarredona, 2009a). For memristive models of STDP, spike shapes are
extremely important for the learning mechanism, but no work has been done on relating
this to neuroscience. Future work in this direction could potentially increase our understanding
of the underlying neuroscience as well.
Although the current focus is on spiking structures, this high degree of biological realism is not
necessary. Firing rate based models exist which can portray complicated learning mechanisms,
and at a first glance these could possibly be reproduced in memristor based circuits. By
developing circuits like this the need for spiking structures could potentially be removed, or
more complicated learning rules than STDP could be implemented by combining spiking and
firing rate based learning rules.
Finally, despite the claims that memristor based circuits could potentially rival the complexity of
the brain, energy dissipation with current technology does not allow for it. Before circuits
operating on a scale similar to the brain could be developed, this bottleneck needs to be
addressed.
7. Bibliography
Abbott, L. F., & Blum, K. I. (1996). Functional significance of long-term potentiation for
sequence learning and prediction. Cerebral Cortex, 6(3), 406-16.
Retrieved from http://www.ncbi.nlm.nih.gov/pubmed/8670667
Abbott, L. F., & Nelson, S. B. (2000). Synaptic plasticity: taming the beast. Nature neuroscience,
3 Suppl(november), 1178-83. doi:10.1038/81453
Abbott, L. F., & Regehr, W. G. (2004). Synaptic computation. Nature, 431(7010), 796-803.
doi:10.1038/nature03010
Alibart, F., Pleutin, S., Guérin, D., Novembre, C., Lenfant, S., Lmimouni, K., Gamrat, C., et al.
(2010). An Organic Nanoparticle Transistor Behaving as a Biological Spiking Synapse.
Advanced Functional Materials, 20(2), 330-337. doi:10.1002/adfm.200901335
Backus, J. (1978). Can programming be liberated from the von Neumann style?: a functional
style and its algebra of programs. Communications of the ACM, 21(8), 613-641.
doi:10.1145/359576.359579
Bi, G., & Poo, M. (1998). Synaptic modifications in cultured hippocampal neurons: dependence
on spike timing, synaptic strength, and postsynaptic cell type. The Journal of Neuroscience,
18(24), 10464-10472. Retrieved from http://www.jneurosci.org/content/18/24/10464.short
Bi, G., & Poo, M. (2001). Synaptic modification by correlated activity: Hebb’s postulate
revisited. Annual review of neuroscience, 24, 139-166. Retrieved from
http://www.annualreviews.org/doi/pdf/10.1146/annurev.neuro.24.1.139
Bienenstock, E. L., Cooper, L. N., & Munro, P. W. (1982). Theory for the development of
neuron selectivity: orientation specificity and binocular interaction in visual cortex. The
Journal of Neuroscience, 2(1), 32-48. Retrieved from
http://www.ncbi.nlm.nih.gov/pubmed/7054394
Blais, B. S., & Cooper, L. (2008). BCM theory. Scholarpedia. Retrieved from
http://www.scholarpedia.org/article/BCM_theory#BCM_and_scaling
Blum, K. I., & Abbott, L. F. (1996). A model of spatial map formation in the hippocampus of the
rat. Neural computation, 8(1), 85-93. Retrieved from
http://www.ncbi.nlm.nih.gov/pubmed/8564805
Borghetti, J., Li, Z., Straznicky, J., Li, X., Ohlberg, D. A. A., Wu, W., Stewart, D. R., et al. (2009).
A hybrid nanomemristor/transistor logic circuit capable of self-programming. Proceedings
of the National Academy of Sciences of the United States of America, 106(6), 1699-703.
doi:10.1073/pnas.0806642106
Cai, W., Tetzlaff, R., & Ellinger, F. (2011). A Memristive Model Compatible with Triplet Rule
for Spike-Timing-Dependent-Plasticity. Arxiv preprint arXiv:1108.4299, 1-6. Retrieved
from http://arxiv.org/abs/1108.4299
Chua, L. (1971). Memristor-the missing circuit element. IEEE Transactions on Circuit Theory,
18(5), 507-519. Retrieved from http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=1083337
Chua, L. O., & Kang, S. M. (1976). Memristive devices and systems. Proceedings of the IEEE, 64(2).
Retrieved from http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=1454361
Dayan, P., & Abbott, L. F. (2003). Theoretical Neuroscience (1st ed.). MIT Press.
Di Ventra, M., Pershin, Y., & Chua, L. (2009). Circuit elements with memory: memristors,
memcapacitors, and meminductors. Proceedings of the IEEE, (1), 1-6. Retrieved from
http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=5247127
Froemke, R. C., & Dan, Y. (2002). Spike-timing-dependent synaptic modification induced by
natural spike trains. Nature, 416(6879), 433-8. doi:10.1038/416433a
Gerstner, W., Ritz, R., & van Hemmen, J. L. (1993). Why spikes? Hebbian learning and retrieval
of time-resolved excitation patterns. Biological cybernetics, 69(5-6), 503-15. Retrieved
from http://www.ncbi.nlm.nih.gov/pubmed/7903867
Hebb, D. O. (1949). The Organization of Behavior. New York: Wiley & Sons.
Howard, G., Gale, E., Bull, L., de Lacy Costello, B., & Adamatzky, A. (2011). Towards evolving
spiking networks with memristive synapses. 2011 IEEE Symposium on Artificial Life
(ALIFE), 14-21. doi:10.1109/ALIFE.2011.5954655
Hubel, D., & Wiesel, T. (1959). Receptive fields of single neurones in the cat’s striate cortex.
The Journal of physiology, 148, 574-591. Retrieved from
http://jp.physoc.org/content/148/3/574.full.pdf
Jo, S. H., Chang, T., Ebong, I., Bhadviya, B. B., Mazumder, P., & Lu, W. (2010). Nanoscale
memristor device as synapse in neuromorphic systems. Nano letters, 10(4), 1297-301.
doi:10.1021/nl904092h
Jo, S. H., Kim, K.-H., & Lu, W. (2009). High-density crossbar arrays based on a Si memristive
system. Nano letters, 9(2), 870-4. doi:10.1021/nl8037689
Knill, D. C., & Pouget, A. (2004). The Bayesian brain: the role of uncertainty in neural coding
and computation. Trends in neurosciences, 27(12), 712-9. doi:10.1016/j.tins.2004.10.007
Lazar, A., Pipa, G., & Triesch, J. (2009). SORN: a self-organizing recurrent neural network.
Frontiers in computational neuroscience, 3(October), 23. doi:10.3389/neuro.10.023.2009
Linares-Barranco, B., & Serrano-Gotarredona, T. (2009a). Exploiting memristance in adaptive
asynchronous spiking neuromorphic nanotechnology systems. IEEE Nanotechnology 2009, 8,
601-604. Retrieved from
http://onlinelibrary.wiley.com/doi/10.1002/cbdv.200490137/abstract
Linares-Barranco, B., & Serrano-Gotarredona, T. (2009b). Memristance can explain spike-time-
dependent-plasticity in neural synapses. Nature Precedings, (1), 2-5. Retrieved from
http://ini.ethz.ch/capo/raw-attachment/wiki/2010/memris10/npre20093010-1.pdf
Markram, H., Gerstner, W., & Sjöström, P. J. (2011). A history of spike-timing-dependent
plasticity. Frontiers in Synaptic Neuroscience, 3(4), doi: 10.3389/fnsyn.2011.00004.
Mehonic, A., Cueff, S., Wojdak, M., Hudziak, S., Jambois, O., Labbe, C., Garrido, B., et al.
(2012). Resistive switching in silicon suboxide films. Journal of Applied Physics, 111(7),
074507-074509.
Mehta, M. R., Quirk, M. C., & Wilson, M. A. (2000). Experience-dependent asymmetric shape of
hippocampal receptive fields. Neuron, 25(3), 707-15. Retrieved from
http://www.ncbi.nlm.nih.gov/pubmed/10774737
Merrikh-Bayat, F., & Shouraki, S. B. (2011a). Memristor-based circuits for performing basic
arithmetic operations. Procedia Computer Science, 3, 128-132.
doi:10.1016/j.procs.2010.12.022
Merrikh-Bayat, F., & Shouraki, S. B. (2011b). Memristor-based circuits for performing basic
arithmetic operations. Procedia Computer Science, 3, 128-132.
doi:10.1016/j.procs.2010.12.022
Minai, A., & Levy, W. (1993). Sequence learning in a single trial. INNS World Congr. Neural
Netw. Retrieved from http://secs.ceas.uc.edu/~aminai/papers/minai_wcnn93.pdf
Pershin, Y., Fontaine, S. L., & Di Ventra, M. (2009). Memristive model of amoeba learning.
Physical Review E, 80(2), 021926. Retrieved from http://pre.aps.org/abstract/PRE/v80/i2/e021926
Pershin, Y. V., & Di Ventra, M. (2010). Experimental demonstration of associative memory with
memristive neural networks. Neural Networks, 23(7), 881-886. doi:10.1016/j.neunet.2010.05.001
Pfister, J.-P., & Gerstner, W. (2006). Triplets of spikes in a model of spike timing-dependent
plasticity. The Journal of Neuroscience, 26(38), 9673-9682. doi:10.1523/JNEUROSCI.1425-06.2006
Poon, C.-S., & Zhou, K. (2011). Neuromorphic silicon neurons and large-scale neural networks:
challenges and opportunities. Frontiers in neuroscience, 5(September), 108.
doi:10.3389/fnins.2011.00108
Purves, D., Augustine, G. J., Fitzpatrick, D., Hall, W. C., Lamantia, A., & White, L. E. (2012).
Neuroscience (5th ed.). Sinauer.
Rao, R., & Sejnowski, T. (2000). Predictive sequence learning in recurrent neocortical circuits.
Advances in Neural Information Processing Systems, 12, 164-170. Retrieved from
http://books.google.com/books?hl=en&lr=&id=A6J8kzUhCcAC&oi=fnd&pg=PA164&dq=Predictive+Sequence+Learning+in+Recurrent+Neocortical+Circuits&ots=K2iY6-hA5n&sig=_osJaEfTvYaOahS7u0Z3K2_DmPY
Roberts, P. D. (1999). Computational consequences of temporally asymmetric learning rules: I.
Differential Hebbian learning. Journal of Computational Neuroscience, 7(3), 235-246.
Retrieved from http://www.ncbi.nlm.nih.gov/pubmed/10596835
Snider, G., Amerson, R., Carter, D., Abdalla, H., Qureshi, M. S., Leveille, A., Versace, M., et
al. (2011). From synapses to circuitry: Using memristive memory to explore the electronic
brain. IEEE Computer Society, (February), 21-28. Retrieved from
http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=5713299
Snider, G. S. (2008). Spike-timing-dependent learning in memristive nanodevices. Nanoscale
Architectures, 2008. NANOARCH, 85-92. Retrieved from
http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=4585796
Song, S., Miller, K. D., & Abbott, L. F. (2000). Competitive Hebbian learning through spike-
timing-dependent synaptic plasticity. Nature Neuroscience, 3(9), 919-926. doi:10.1038/78829
Strukov, D. B., Snider, G. S., Stewart, D. R., & Williams, R. S. (2008). The missing memristor
found. Nature, 453(7191), 80-3. doi:10.1038/nature06932
Tetzlaff, C., Kolodziejski, C., Timme, M., & Wörgötter, F. (2011). Synaptic scaling in
combination with many generic plasticity mechanisms stabilizes circuit connectivity.
Frontiers in computational neuroscience, 5(November), 47. doi:10.3389/fncom.2011.00047
Di Ventra, M., & Pershin, Y. V. (2011). Biologically-Inspired Electronics with Memory Circuit
Elements. Arxiv preprint arXiv:1112.4987. Retrieved from http://arxiv.org/abs/1112.4987
Watt, A. J., & Desai, N. S. (2010). Homeostatic Plasticity and STDP: Keeping a Neuron’s Cool
in a Fluctuating World. Frontiers in synaptic neuroscience, 2(June), 5.
doi:10.3389/fnsyn.2010.00005
Zamarreño-Ramos, C., Camuñas-Mesa, L. A., Pérez-Carrasco, J. A., Masquelier, T., Serrano-
Gotarredona, T., & Linares-Barranco, B. (2011). On spike-timing-dependent-plasticity,
memristive devices, and building a self-learning visual cortex. Frontiers in Neuroscience,
5(March), 26. doi:10.3389/fnins.2011.00026
van Rossum, M. C., Bi, G. Q., & Turrigiano, G. G. (2000). Stable Hebbian learning from spike
timing-dependent plasticity. The Journal of Neuroscience, 20(23), 8812-8821. Retrieved from
http://www.ncbi.nlm.nih.gov/pubmed/16711840