
Psych 261 Test 3

Week 7: Video 1
Plasticity in the Adult Brain
There is mounting evidence that the brain does not become a static structure after early
development, but that it continues to change throughout adulthood, though to a substantially lesser
extent. Modification of the tissues of the brain throughout the lifespan is known as neuroplasticity,
and perhaps the most notable type of neuroplasticity is neurogenesis, which is the formation of new
neurons in the nervous system.
Strikingly, there is now compelling evidence supporting the view that new neurons are formed in
the adult mammalian brain. As we talk about plasticity in the adult brain, it’s important to keep in
mind that the mechanisms of neuroplasticity (and particularly neurogenesis) in the adult brain are
often studied in non-human mammals, and typically in rodents. Nevertheless, similar mechanisms
are now being documented in the human adult brain.i
It turns out that there are several places in the adult mammalian brain that continue to produce new
neurons throughout adult life. The chief areas are the subventricular zone by the lateral ventricles,
and the dentate gyrus of the hippocampus.ii New neurons have also been detected in the
striatum,iii though the origin of these neurons might be traced to the adjacent subventricular zone.
This is a schematic of the process of neurogenesis in the dentate gyrus of the hippocampus.vi The
neurogenic process starts with Type 1 Radial Stem cells. Surprisingly, Type 1 cells are actually
astrocytes, specifically referred to as radial astrocytes.vii So, neurogenesis in the adult
hippocampus starts with glial cells. Type 1 cells can divide, forming Type 2 Non-Radial Stem
cells. Type 2 cells multiply in number and also give rise to immature neurons, which differentiate to
form mature neurons that are integrated into the neural architecture of the dentate gyrus of the hippocampus.
So, how many neurons are generated in the human adult hippocampus? According to one group of
authors: “The median turnover rate of neurons within the renewing subpopulation is 1.75% per year
during adulthood, corresponding to approximately 700 new neurons per day in each hippocampus
or 0.004% of the dentate gyrus neurons per day in the human hippocampus.”viii
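As a rough cross-check, the quoted figures can be combined with simple arithmetic. The implied total neuron count below is an inference from the quoted numbers, not a figure stated in these notes:

```python
# Rough arithmetic cross-check of the quoted neurogenesis figures:
# 700 new neurons per day is said to equal 0.004% of dentate gyrus neurons per day.

new_per_day = 700
daily_fraction = 0.004 / 100  # 0.004% expressed as a proportion

# Implied total dentate gyrus neurons per hippocampus (an inference, not a quoted figure)
implied_dg_neurons = new_per_day / daily_fraction
print(f"Implied dentate gyrus neurons: {implied_dg_neurons:,.0f}")  # 17,500,000

# Yearly addition implied by the daily rate
new_per_year = new_per_day * 365
print(f"New neurons per year: {new_per_year:,}")  # 255,500
```

The two quoted figures are thus mutually consistent with a dentate gyrus of roughly 17.5 million neurons per hippocampus.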
As we mentioned before, adult neurogenesis in mammals can also occur in the subventricular zone
(SVZ), which is located by the lateral ventricles. As you can see in this figureix, the ventricles are
lined with ependymal cells, which form the ventricular zone (VZ). Beside the ependymal cells,
there are three different types of cells, conveniently known as Type A, Type B and Type C cells.
Type B cells again are a type of astrocyte and these can self-renew and also give rise to Type C cells,
which are proliferative precursors that ultimately form Type A cells which are new neurons. In
mammals such as rodents, the new neurons, ensheathed by astrocytes, then migrate to the
olfactory bulb via the rostral migratory stream, where they are integrated into the olfactory system.
Continued renewal of neurons in the olfactory system would be very functional for mammals that
rely heavily on their sense of smell to survive.
The role of neurogenesis in the subventricular zone of the adult human brain is a lot more
controversial. Nevertheless, there is some evidence that it does occur. The authors of one study
concluded that in humans,
“. . . SVZ [subventricular zone] astrocytes with the characteristics of neural stem cells were
identified in vitro.”x
They further note that “ . . . the SVZ maintains the ability to produce neuroblasts [that is,
immature neurons] in the adult human brain.”xi
However, the neurons produced in the subventricular zone of humans might not primarily move to
the olfactory bulb. Instead, they might make their way to areas close to the subventricular zone,
such as the striatum. One group of researchers who examined the formation of new neurons in the
striatum noted that “[i]t appears likely that the neuroblasts and new neurons in the adult human
striatum derive from the subventricular zone, although we cannot exclude other origins.”xii
Plasticity and Exercise
You might be wondering whether there are any ways to increase neurogenesis, especially in older
adulthood, when cognitive performance starts to decline. The answer to this question is “yes,” and
one of the key ways to increase neurogenesis in adulthood appears to be quite simple: exercise! The
fact that exercise in older adulthood triggers neurogenesis has been well documented in animal
modelsxiii and there is evidence that it also occurs in humans.xiv
Exercise not only increases neurogenesis, but it also increases the growth and complexity of existing neurons
by triggering the release of brain-derived neurotrophic factor (BDNF). You can think of BDNF
as a major neuron growth molecule—it supports the growth of neurons and synapses. The top
image illustrates that in neurons [and I’m quoting from the source paper] “BDNF is transported
retrogradely and anterogradely to synapses, where it potentiates synaptic transmission, participates in
gene transcription, modifies synaptic morphology, and enhances neuronal resilience.”xv The lower
image shows that “[r]eleased BDNF binds to its receptor (TrkB) presynaptically to modify
transmitter release and postsynaptically to modify postsynaptic sensitivity, for example, via
interaction with NMDA receptors”, increasing their excitatory effects.
Exercise has been shown to increase the production of BDNF in the hippocampus. Shown here are
the results of a study in which rats were either assigned to an exercise group or sedentary groupxvi.
The exercise group was given a running wheel to run in while the sedentary group was caged
without a running wheel. The images shown here depict the amount of BDNF mRNA present in
the hippocampi of the two groups. The more red and yellow, the more BDNF mRNA.
As you can see, the rats in the exercise group expressed more BDNF mRNA than did those in the
sedentary group. Actual measures of the BDNF protein are shown here, and they confirm that there
was more BDNF in the hippocampi of the exercise group than in the hippocampi of the sedentary
group. Finally, this graph shows that the amount of BDNF present in the hippocampi systematically
increased with the distance that the rats ran during the night (in other words, the amount of exercise
they received).
In humans, exercise is known to improve cognitive performance, particularly in older adults. This
figure shows the results of a meta-analysisxvii of studies in which older adults (those 55 years
old and older) were assigned either to a non-exercise control condition or an aerobic exercise
condition. Participants in the studies were asked to complete at least one cognitive task. The
cognitive tasks measured executive control, controlled processing, visuo-spatial ability, or response speed.
Performance measures were taken both before and after the intervention. This figure shows the
magnitudes of the changes in performance between the initial testing session and the postintervention testing session, using a metric of effect size. As you can see, those who exercised
improved much more across testing sessions than did those who didn’t exercise, and this was the
case for all of the types of tasks included in the analysis, though the effects were largest for the
executive control tasks. So, if you want to optimize your cognitive performance as you age, get out
there and exercise!
Plasticity and Diet
Now, in addition to exercise, you can also increase neuronal growth by restricting your energy
intake. In one study of dietary restriction in mice, the mice were separated into two groupsxviii. The
mice in one group were allowed to eat whenever they wanted, or ad libitum; and the mice in the
other group were only fed every other day, so they were on a dietary restriction. Levels of mRNA
for the growth factor BDNF in the hippocampi of the mice are shown in this pair of images, with
higher levels of BDNF mRNA being depicted in blue and purple. The results for the ad-libitum
group are shown on the left and the results for the dietary restriction group are shown on the right.
Clearly, the mice with restricted diets had higher levels of BDNF mRNA in some parts of their
hippocampi than did the mice who ate at will. What’s more, the dieting mice showed higher levels
of neurogenesis than did the free eaters. Other studies have shown that rats on a restricted diet like
this one lived longer than did rats that were allowed to eat whenever they wanted.xix
In humans, restricting caloric intake by 30% over 3 months has been shown to lead to better
memory performance in older adults relative to those who ate their regular amount or those who ate
a diet high in unsaturated fatty acids, which is known to improve performance in some cases.xx The
figure shown here depicts participants’ percentage memory improvement from an initial baseline
measure to their performance after the intervention. Notice that only the group undergoing caloric
restriction showed an improvement over baseline levels.
Reducing food intake has been shown to have many benefits on brain physiology. According to a
recent review of the literaturexxi, dietary restriction increases neurotrophins, antioxidants, and the
removal of damaged molecules, and it reduces inflammation and oxidative stress. And this is not an
exhaustive list of the effects.
Dietary restriction has these wide ranging benefits because it involves
“the general biological phenomenon of ‘hormesis’ or ‘preconditioning’, in which exposure of
cells and organisms to a mild stress results in adaptive responses that protect against more severe
stress.” (Mattson et al., 2014, p. 16649)
Having learned about the benefits of energy restriction, you might be wondering what sorts of eating
regimes are followed during dietary restriction. Here are some examplesxxii. The rings shown here
depict different patterns of food intake. Each ring depicts a 24-hr day, with the dark region roughly
corresponding to nighttime and the light region corresponding to daytime. On the rings, each meal
is shown as a circle, with large circles depicting regular meals, medium circles depicting small meals
and small circles depicting snacks. The eating schedule corresponding to the common Western diet
is shown on the far left. As you can see, the common Western diet involves three large meals a day
together with snacks interspersed throughout the waking hours. Mattson and colleagues consider
this to be the unhealthy diet.
Shown to the right of the common western diet are various dietary protocols. One type of dietary
protocol is called Caloric Restriction (CR), and it involves eating smaller meals, typically cutting
caloric intake by roughly 30% relative to normal levels. Another type of dietary restriction is known as Time
Restricted Feeding (TRF) and it typically involves eating a normal amount, but restricting that to
a 4 to 8-hour periodxxiii, and going the rest of the time without food. While you don’t have to reduce
your caloric intake on this diet, people often do, because they eliminate evening snacking and
sometimes skip breakfast. A third protocol is Intermittent Energy Restriction (IER), which
involves interspersing days of normal eating with days of energy restriction. Interestingly, the
evidence suggests that intermittent energy restriction has beneficial effects for the brain even when
the overall number of calories consumed remains the same as on a normal diet.xxiv This means that
the effects of intermittent energy restriction are to some extent independent of caloric intake. It
seems to be the case that including periods of fasting in the regime might have greater benefits than
simply reducing caloric intake.xxv
Not surprisingly, in addition to dietary restriction regimes, what you eat can also influence brain
plasticity in adulthood. For example, a study conducted with rats compared the effect on brain
plasticity of a diet high in saturated fats and refined sugar (HFS), which is the typical western diet,
and a low-fat, complex carbohydrate (LFCC) dietxxvi. The results showed that rats fed the low-fat,
complex carbohydrate diet had higher levels of BDNF in their hippocampi than did rats fed the
typical western diet containing lots of saturated fats and refined sugar. This result is shown in the
bar graph on the left as well as in the images of the hippocampi shown on the right in which greater
levels of BDNF are depicted by increased darker speckling. The study also showed that other
markers of neuronal plasticity were similarly affected. Performance was also affected, with rats fed
the low-fat, complex carbohydrate diet learning to swim in a water maze more effectively than the
rats raised on a diet of saturated fats and refined sugar.
Plasticity and Learning
Neuronal plasticity throughout adulthood is also evidenced by brain changes that result from
learning and experience. Studies of motor learning in the adult squirrel monkeyxxvii, for instance,
show that as the monkeys learn a grasping task that requires increased manual dexterity, areas in
motor cortex corresponding to finger control increase in size. The area of primary motor cortex
corresponding to hand and finger control is shown in red, and as you can see, this area increases in
size from pre-training to post-training. In this case, the changes in the brain occurred in just over 12
days of training.
Training also induces neural plasticity in human adults. In one study,xxviii people who had no prior
experience with juggling were divided into two groups: One group learned to juggle for 3 months
while the other group did not and served as a control group. Measures of cortical grey matter
revealed that the juggling group showed increases in the gray matter volume in the left posterior
intraparietal sulcus, and the mid-temporal area, which is an area responsible for processing visual
motion. The control group did not show such changes.
As another example, playing the violin for an extended period of time appears to lead to changes in
the right somatosensory cortex.xxix Recall that the right somatosensory cortex responds to
stimulation of the left hand – the hand that is used by violinists to depress the strings on the
fingerboard of the violin. To quote the authors of the study:
“. . . the cerebral cortices of string players are different from the cortices of controls in that the
representation of the digits of the left hand is substantially enlarged in the cortices of string players.”
(p. 305).
So, learning and training change the adult brain.
Concluding Statement
The fact that there is plasticity (and particularly neurogenesis) in the adult brain, has made some
researchers wonder whether the amount of plasticity can be increased, perhaps as a treatment for
brain damage or various neurodegenerative diseases.
Week 7: Video 2
Brain Injury
Brain tissue can be damaged in many different ways. Let’s consider some of the most common
causes of brain injury.
One cause of brain injury, which is more common in older adults than in young people, is referred
to as a cerebrovascular accident or a stroke. Broadly defined, a stroke refers to a disruption of
normal blood delivery to brain tissue. There are two types of strokes: Ischemic and hemorrhagic.
Ischemic strokes occur when arteries or capillaries are blocked either by floating debris, or a
buildup of material inside the blood vessels, or a drastic constriction of the blood vessels. The
blockage prevents nutrients from getting to brain cells, and this ultimately results in the death of the
cells in the region supplied by the blocked blood vessels. The majority of strokes are ischemic in nature.
A hemorrhagic stroke (also known as an intracerebral hemorrhage: ICH) involves the rupture
of a blood vessel in the brain. This can happen either by a traumatic brain injury, or by other
factors, the most common of which is high blood pressurexxx. Roughly 10-20% of all strokes are
hemorrhagic.xxxi Whether a hemorrhagic stroke leads to death depends largely on the amount of
blood lost during the hemorrhage. Apparently, if more than 150 mL of blood drains from your
arteries, the blood pressure in the cerebral vessels drops to the point that there is insufficient blood
reaching the remainder of the brain; and this leads to deathxxxii. A large bleed also creates a large
blood mass in the brain – known as a hematoma – which can put pressure on critical parts of the
brain; this can also lead to death. Roughly 40% of hemorrhagic strokes result in death soon after the bleed.
Surprisingly, cell death caused by an ischemic stroke (and to some extent a hemorrhagic stroke) is
actually due to an over-activity of the nutrient-starved cells, a process known as excitotoxicity.
How does this work?
Cytotoxic Edema
Well, when cerebral blood flow drops below a critical level, the energy demands of the cell outstrip
the nutrient availability in the dwindled blood supply.xxxiv This means that energy-dependent ion
transporters, such as the sodium-potassium pump, work less efficiently. This leads to a buildup of
sodium in the cell, which depolarizes the cell, leading to an increase in the release of
neurotransmitters into the synaptic cleft. The main neurotransmitter of interest in stroke pathology
is the excitatory neurotransmitter glutamate.
At the tripartite synapse, it is the astrocyte that is responsible for removing the majority of the
glutamate from the synaptic cleft through transporter proteins; some transporters are also present
on the neuron membrane. But as synaptic glutamate levels increase, the astrocyte transporters can’t
keep up. At the same time the transporters begin to malfunction because they too require energy to
operate, which, at this point, is in short supplyxxxv. As they malfunction, the transporters can even
reverse, ejecting more glutamate into the already glutamate-filled synapsexxxvi.
The increase in glutamate in the synaptic cleft, which can reach levels that are even 100 times higher
than normalxxxvii, leads to an influx of even more positive ions into the local cells, thus further
exacerbating the problem, triggering more glutamate release.
In addition to letting sodium ions enter the cell, one of the glutamate receptors (the NMDA
receptor) also allows calcium (Ca2+) to enter into the cell, and because of the continued stimulation
of the NMDA receptors by the overabundant glutamate, calcium enters the cell in large quantities.
The influx of calcium impairs mitochondrial function, and disrupts production of ATPxxxviii, thus
further compromising energy supply to active mechanisms like the sodium-potassium pump.
As calcium accumulates in the cell, and the mitochondria begin to malfunction, a series of chemical
reactions occur that lead to the production of a large amount of dangerous free radicalsxxxix such as
nitric oxide (NO), which cause damage to the nearby cells.
In fact, during excitotoxicity, nitric oxide plays at least three problematic rolesxl: First, together with
other free radicals, it begins to destroy the organelles inside the cell. According to one group of
authors, “[o]xygen radicals generated as a result of the excitotoxic insult can attack proteins, nucleic
acids, and lipid membranes, thereby disrupting cellular functions and integrity”xli. Second, nitric
oxide is involved in the process of triggering more glutamate release. And third, it is a precursor to a
molecule that is involved in triggering apoptosis.
As positive ions accumulate in the cells [both neurons and glia], water is osmotically drawn into the
cells from the extracellular regionsxlii,xliii. This increase in intracellular water content is referred to as
cytotoxic edemaxliv. Cytotoxic edema further disrupts membrane protein function and physically
stresses the cells. Cytotoxic edema not only occurs in neurons, but also in glia, which are struggling
with similar ion imbalances as they try to compensate for neuronal dysfunction. In fact, astrocytes
[and I’m quoting here] “can swell five times their normal size”xlv and so they are the main players in
brain swelling, or in other words, edemaxlvi.
The cumulative result of the reduced energy supply, the increase in free radicals, the cytotoxic
edema, and the toxicity created by calcium buildup in the cell, is that the ischemic cells ultimately
either rupture, spilling their contents into the intercellular space – a type of cell death referred to as
necrosis – or initiate apoptosis, which leads to a dissolution of the contents of the cellxlvii. Necrosis
seems to occur first, with apoptosis occurring during later stages of ischemia.
Additional Mechanisms of Hemorrhagic Stroke
A hemorrhagic stroke involves a few additional mechanisms. While not all of the mechanisms are
yet fully understood, here are some key events that unfold during (and following) an intracerebral hemorrhage:
1. As the blood pools outside of the ruptured blood vessel, it creates a hematoma which
mechanically puts pressure on surrounding cells, thus disrupting proper membrane function.
2. Some have argued that, counterintuitively, during a hemorrhagic stroke, there is lower blood
flow to surrounding tissue, partly because of lower blood volume in the surviving arteries,
but also because vascular constriction can occur in the affected region, which can lead to
ischemic events we described in the context of ischemic strokesxlix.
3. Damage caused by pressure from a hematoma as well as ischemia caused by vasoconstriction
can lead to cytotoxic edemal.
4. As soon as a hemorrhage occurs, the surrounding cells produce thrombin, an enzyme that
is critical to the clotting process.
5. While thrombin helps to stop bleeding by triggering coagulation, in high concentrations
thrombin makes NMDA receptors more responsive to glutamate, thus increasing excitotoxicity.li
Ultimately, thrombin can lead to apoptosis in glia and neurons.
6. In addition, thrombin triggers inflammatory responses from microglia, which release proinflammatory markers and free radicals. Microglia also ramp up their immune response
because during a hemorrhage, the contents of the blood spill into the surrounding tissue and
microglia consider these to be invaders that have to be destroyed. However, this immune
response also inadvertently damages local brain cells.
7. Another source of cell damage is the iron in the hemoglobin of red blood cells. Iron is
released from hemoglobin and then creates free radicals that cause oxidative stress,
damaging the surrounding tissue.
8. Finally, plasma from the blood stream flows into the interstitial space, causing vasogenic
edema. This places further pressure on the cells, thus further disrupting cell function.
As you can see, these events associated with a hemorrhagic stroke can quickly get out of control,
leading to considerable damage to the brain, or even death.
Traumatic Brain Injury
A type of brain injury that’s more common in young people than older adults is traumatic brain
injury (TBI).
According to one set of authors, Traumatic Brain Injury (or TBI) “ . . . is a heterogenous
disorder with different forms of presentation. The unifying factor is that brain damage results from
external forces, as a consequence of direct impact, rapid acceleration or deceleration, a penetrating
object (e.g., gunshot), or blast waves from an explosion. The nature, intensity, direction, and
duration of these forces determines the pattern and extent of damage.”lii
Particularly common among young people who play sports are impact deceleration injuries, which
typically occur when the head is moving and comes into contact with a stationary object (for
instance, when you fall and your head hits the ground). Such impacts can result in coup and
contrecoup areas of primary damage. The coup area of damage is the site directly adjacent to the
point of impact, where the skull is driven against the brain. The contrecoup area is on the opposite
side of the head, where the brain and skull again make contact on rebound.liii
An impact to the head can lead to a) the formation of a hematoma either in the cerebrum or
around the meninges, b) a contusion, which is a local area of bruising in the brain involving small
tears in blood vessels, and c) diffuse axonal injury from general axonal shearing as the brain moves
within the skull case. These primary types of damage can occur together or separately and they can
each lead to various degrees of ischemia because of impaired blood flow caused by the injury.
In addition, the primary damage and ischemia can contribute to some degree of swelling (that is
edema) in the brain. This edema can be cytotoxic edema and/or vasogenic edemaliv. Edema can
occur locally or throughout the brain. When it occurs throughout the brain, it’s called diffuse
traumatic brain edemalv. The swelling can in turn increase intracranial pressure and decrease
cerebral perfusion pressure. These factors can increase the likelihood of ischemia [see Maas et al.,
2008 Figure 1].
Together, ischemia, swelling and bleeding can lead to damage through the various mechanisms we
discussed in the context of strokes, namely inflammation, excitotoxicity, and oxidative damage.
The brain damage associated with an acute traumatic brain injury is sometimes referred to as a
concussionlvi. Concussions can be recognized by their symptoms—which typically include a
headache, nausea, dizziness and memory and attention problems. In many cases, a concussion can
resolve within a week or two,lvii though it can take longer.
Particularly problematic are repeated impacts to the head, as might happen to boxers or to football
players. Repeated impacts are problematic because they lead to chronic traumatic brain injurylviii.
Chronic traumatic brain injury has been associated with a number of brain pathologies and
concerning symptoms, including Alzheimer’s symptoms, Parkinsonian symptoms, various problems
with motor movement, mood disturbances (such as impulsivity and aggression), and impaired
cognition (such as poor memory and attention). Specific neural damage can include disruption of
the cytoarchitecture of neurons, cerebral volume loss and an atrophy of white matter tracts.
A recent meta-analysis of a group of studies found that athletes who suffered sports-related
concussions had “significant cognitive deficits in verbal memory, delayed recall, and attention”
compared to their non-concussed counterparts.lix
So, if you like to play rugby, American football, or soccer; or you like to box and engage in martial
arts, perhaps it’s time to save your brain and instead take up a more brain-friendly activity, such as
learning more about physiological psychology!
Week 7: Video 3
The Healing Brain
After a brain injury, the brain begins a process of recovery that involves a surprising level of
plasticity. For the sake of simplicity, we’ll focus on the events that might unfold following a stroke;
let’s say an ischemic stroke.
Following focal damage caused by an ischemic stroke, the necrotic (that is, dead) tissue in the main
area of damage is surrounded by a penumbra of live, but struggling cells. In the hours and days that
follow the stroke, some cells in the penumbra that were connected to now dead cells experience loss
of axonal input, which is sometimes referred to as denervation. The dendrites of the surviving cells
experience spine collapse,
which is a shrinkage of dendritic spines. The surviving cells experience decreased activity62 and
axons that are damaged by the stroke are prevented from re-growing by inhibitory signals63. At this
point, things don’t look too good.
Spontaneous Recovery: Initial Events
But by one to four weeks after the stroke, the brain initiates a series of adaptive plasticity
mechanisms. First, dendritic spines begin to reappear64, though they might appear in different
locations and have different shapes than they did before the injury.65 Local axons begin to sprout
axon collaterals that connect to denervated dendritic regions and even long range axons find their
way to the vacant dendritic locations66. Thus, even one week after a stroke, the penumbral region
enjoys a considerable amount of synaptogenesis67. This is partly because production of growth
factors such as brain derived neurotrophic factor (or BDNF) is upregulated. In addition, inhibitory
GABAA receptor activity is decreased, while excitatory NMDA receptor activity is increased.68
Spontaneous Recovery: Network Reorganization
By the fourth to eighth week after the stroke, the reorganization of cortical networks is already
underway. The process or cortical network reorganization can be seen in this figure as it
develops over the first eight weeks after a stroke in a rodent model69. The panel on the far left shows
a schematic depiction of a healthy segment of a rodent somatosensory cortex before a stroke. The
red region represents the area of cortex that is responsive to the hindlimb and the green region
represents the area of cortex that is responsive to the forelimb. The double headed arrow indicates
connections between the two regions of somatosensory cortex, and the incoming single headed
arrows indicate inputs to the somatosensory cortex from the thalamus.
The second panel from the left shows what happens in the hours and the days that follow a stroke
to a part of the somatosensory cortex that previously represented the forelimb. The area damaged
by the stroke is shown in gray. During this time, some of the neurons representing the forelimb die
and as we have noted a moment ago, the surviving tissue is affected by dendritic spine collapse and
less efficient neuronal functioning. Importantly, the yellow region depicts the beginnings of cortical
reorganization, with neurons in the region now becoming responsive to both the hindlimb and the forelimb.
In the third panel from the left, we see a larger yellow region showing that one to four weeks after
the stroke, more neurons are responsive to both fore- and hind-limbs. At this point there is an
increase in growth factors, the neurons are more active and there is dendritic spine remodeling and
axonal sprouting. The increase in double headed arrows depicts an increase in connections between
the various limb regions of the somatosensory cortex.
Finally, in the right-most panel, we now see a drop in the number of neurons responding to both
limbs, which is depicted by a smaller yellow area. This is because the region of cortex responsive to
the forelimb has now expanded and taken over some of the territory formerly responsive to the
hindlimb. The end result, then, is a topographic re-organization of the somatosensory cortex after a stroke.
Network reorganization can occur on a much larger scale as well, even crossing the hemispheres.
For example, people who experience a stroke to their subcortical motor pathway in one hemisphere
might end up with increased connectivity between the ipsilesional motor areas and the motor areas in
contralesional hemisphere. At the same time they might show reduced connectivity between the
thalamus of the ipsilesional hemisphere and motor areas in the contralesional hemisphere.70
The reduced connectivity among areas distant from a stroke (that is beyond the penumbra) is
sometimes referred to as connectional diaschisis.71 This is shown schematically on the left side of
this slide: The connectivity between several areas before the stroke is shown by the blue arrows.
After the stroke, some of the connections might be strengthened (shown in green) and other
connections might be diminished (shown in red). The diminished connections in red are examples
of connectional diaschisis. The right side of the slide shows a slightly different type of diaschisis
known as functional diaschisis.72 In this form of diaschisis, some brain areas further away from the
stroke focal point might experience less activity in response to a stimulus or when they are involved
in a given behavior. This is shown here by depicting pre-stroke activations in yellow and post-stroke
reductions in activation in red. The figure also indicates that some regions might increase in
activation in response to stimulation after the stroke, which is depicted here in terms of larger green
regions. In general, the term diaschisis is used to denote reduced activity in or reduced connectivity
with areas that are relatively distant from the focal lesion, that is, those that are beyond the
penumbral region.
Functional reorganization in the cortex can also occur after somatosensory and motor areas are
deprived of their regular input because of the loss of a limb. In some cases, the loss of the limb can
be accompanied by the experience of continued sensations coming from the lost limb, a condition
referred to as phantom limb syndrome. While phantom limb sensations that feel as though the
amputated limb is being touched are not very common, experiencing pain coming from the
phantom limb, known as phantom limb pain, occurs for many amputees.73
Phantom limb pain can involve several peripheral mechanisms, such as activity in the injured nerve
endings in the stump as well as altered activity in the dorsal root ganglion, which contains the
sensory nerve cell bodies just outside the spinal cordlxxiv. However, phantom limb pain can also be due to
reorganization of the sensory and motor areas of cortex.
Shown on the left is brain activity from amputees with phantom limb pain who were asked to move
their lipslxxv. You can see that there is lateral activation of motor and somatosensory cortices
corresponding to the face area in the motor and somatosensory homunculi. However, there is also
activation of the area that would have previously been responsive to the now amputated limb, in this
case, the hand. So, the area responsible for the lips has expanded and now activates the previous,
but now denervated hand area. This additional activation caused by cortical reorganization is
thought to lead to phantom pain because the brain doesn’t know that the sensations are now coming
from the face and not the hand. Notice that in the middle image amputees without phantom limb
pain do not show activation of the hand area when moving their lips. And neither do people who
are not amputees, whose brain activations to lip movements are shown on the right.
Spontaneous Recovery: Neurogenesis & Gliogenesis
In addition to network reorganization, the mammalian brain attempts to recover function after a
brain injury by increasing the production of new cellslxxvi; in rodent models, this occurs in the
subventricular zonelxxvii and it has been documented in the hippocampuslxxviii. The production of new
cells includes neurogenesis (which is the production of new neurons) and gliogenesis (which is the
production of new glial cells)lxxix.
While much of the evidence for increased cell genesis after brain injury comes from mouse and rat
models, neurogenesis following stroke has also been shown in the adult human brain. However, the
source of these new neurons is unclear. The authors of one paper on this topic write the following:
“In rodent models of stroke, the source of new neurons that migrate to ischemic brain areas
appears to be the subventricular zone. In human biopsy specimens, the source of these cells is
unclear, but their perivascular location may provide clues. One possibility is that newborn
neurons migrate from elsewhere (such as the subventricular zone), using blood vessels as
scaffolds for migration or as destination markers. Alternatively, these neurons could arise locally,
although whether neurogenesis can occur in situ in primate cerebral neocortex is
Neurogenesis has also been found after traumatic brain injury. The authors of one paper
documenting this phenomenon write:
“We found that neurogenesis mainly occurs in the peri-damaged brain regions after TBI in
humans. However, it remains unclear whether these cells are born locally or from neurogenic
That neurogenesis following brain injury occurs is certainly intriguing. However, it’s important to
note that at present it’s unclear to what extent the newly generated cells integrate into existing
networks and whether and how they might support growth of other cells by releasing various
growth-promoting substances.lxxxii Right now, this is a hot area of research as investigators continue to
make exciting discoveries about the plasticity of the human brain following injury.
Drug Therapies
As researchers have learned more about the physiological mechanisms involved in brain injury and
in recovery from brain injury, there has been a continued effort to develop drug therapies that might
mitigate the damage associated with brain injury and to speed recovery.
To be effective, any such drug therapy would have to consider the timeline of brain injury and
spontaneous recovery that we’ve discussed so far. Specifically, as we noted, during and right after a
brain injury, the tissue damage involves an overactivity of neurons—a process we called
excitotoxicity. Thus, during and right after the injury, it would be helpful if the patient could take a
drug that reduces neuronal overactivity. In contrast, after the initial damage, the problem is one of
underactivity of the remaining neurons. Thus, some time after the injury, it would be useful to have
a drug that increased the activity of underactive neurons.
While a number of drugs have shown promise in rodent models, when applied to humans, these
drug therapies have not been very successful. Those drugs that might have the intended effects, also
have very problematic side-effects, rendering them unusable as go-to therapies for brain injury.
The one drug therapy that has been shown to be useful is one used as an early intervention during
ischemic strokes. According to the American Heart and Stroke Association, at present
“[t]he only FDA approved treatment for ischemic strokes is tissue plasminogen activator (tPA) .
. . tPA works by dissolving the clot and improving blood flow to the part of the brain being
deprived of blood flow. If administered within 3 hours (and up to 4.5 hours in certain eligible
patients), tPA may improve the chances of recovering from a stroke.”lxxxiii
Training-Induced Recovery
The good news is that deficits associated with brain injury can be ameliorated for up to a year
post-injury by applying rehabilitative training techniques, which lead to training-induced recovery.lxxxiv
A good example of the sorts of neural changes that can accompany rehabilitative training is provided
by a studylxxxv in which some rats were given an ischemic lesion to the middle cerebral artery, which
impaired function in one of their forelimbs. Some of the lesioned rats then recovered in an enriched
environment in which they had contact with four to five other rats and access to a variety of toys.
This enriched environment is shown in the top left photo. The photo below shows an additional
rehabilitation reaching task completed by the rats in the enriched condition. It involved using the
affected limb to reach for M&M’s, which rats find very tasty.
In contrast, the other lesioned rats were provided standard recovery conditions, being housed in
standard small cages by themselves, though they were allowed out to perform various tests that all
rats had to complete. The striking result was that, relative to the standard group, the neurons of the
motor cortex in the spared hemisphere of the enriched group had much more dendritic arborisation.
Not surprisingly, the enriched group also performed better on motor tasks involving the impaired
forelimb than did the standard group. So, the enriched environment and training induced dendritic
growth in the spared tissue.
Stem Cell Therapy
One intervention researchers are pursuing as a way to improve post-stroke recovery is stem cell
therapy, which involves the introduction of stem cells into the post-stroke brain. This form of
therapy has been shown to improve behavior and brain function in animal models. In one study,
stem cells from monkey embryos were transplanted into a mouse brain after the mouse received an
ischemic lesion.lxxxvi The evidence showed that the transplant was successful and that new neurons
began to emerge in the mouse brain. It has also been shown that intravenous injection of bone
marrow stem cells into rats that were previously given an ischemic lesion led to the formation of
new glialxxxvii and neuronslxxxviii in the injured rats’ brains. The process might work in humans too. It
has been shown that intravenous injection of people’s own bone marrow stem cells (harvested under
local anesthesia) led to better recovery after ischemic strokes when compared to control stroke
patients who did not receive the treatment.lxxxix,xc While promising, the research into stem-cell
therapy is still in its infancy and it will take substantially more research before such therapy can be
considered a go-to therapy for stroke patients.xci
Brain Stimulation
Another therapy for improving post-stroke outcomes that is being explored is brain stimulation.
It has been shown, for example, that applying anodal transcranial direct current stimulation (tDCS)
over the stroke-damaged motor cortex leads to improved motor function.xcii,xciii The assumption is
that anodal tDCS over the lesion increases neural activity in the regions surrounding the lesion and
that this triggers neural growth.
In addition, either repetitive transcranial magnetic stimulation (rTMS)xciv or cathodal tDCSxcv over
the contralesional motor cortex can improve motor recovery after stroke. Why is this so? Well, it’s
believed that a stroke to one hemisphere causes an imbalance in the amount of inhibition and
excitation across the hemispheres, such that the motor cortex in the spared hemisphere actually
inhibits activity in the lesioned hemisphere. Applying rTMS and cathodal tDCS to the
contralesional (spared) cortex is thought to reduce activity in that cortex, and thus reduce inhibition
of the lesioned hemisphere. This allows the lesioned hemisphere to increase its activity, leading to
performance improvements.
Concluding Statement
It’s exciting to see just how much the brain heals itself after an injury, and how far we have come
along with treatments that boost the brain’s self-healing ability.
Week 8: Video 1
Light and the Eye
Successful performance in the world requires that we extract relevant information about the stimuli
in our environment and use it effectively to guide our actions. Contact with the stimuli in our
environment occurs at our sensory organs and the interpretation of the stimuli involves complex
processes carried out by the sensory organs and various parts of the central nervous system. Here
we’re going to focus on the sense of vision, which is perhaps one of the better understood senses.
Let’s start with a consideration of the stimulus involved in vision.
The Stimulus
The stimulus for our sense of vision is light. We can think of light energy in terms of discrete
packets of energy, called photons, and also in terms of a traveling wave. This dual conceptualization
of light is called the wave-particle duality of light. When construed as a wave, light energy can be
described in terms of a particular wavelength, which is the distance between two successive peaks
(or valleys) of the wave. The light that we see is only a small part of the overall electromagnetic
spectrum (see adjacent image), which includes wavelengths that range from extremely short (cosmic
rays at ~10⁻¹⁵ m) to quite long (radio waves at about 10³ m). The human visible range includes light
at wavelengths ranging from roughly 400 nm to roughly 750 nm. With some qualifications that we
will mention later, the different wavelengths of light correspond (roughly) to our experience of
different colours, with our experience of the colour ‘blue’ generally corresponding to shorter
wavelengths in the visible range and our experience of ‘red’ corresponding to the longer
wavelengths.
Visible light is emitted from an energy source, most prominently our sun, but fire and lightbulbs will
also do the trick. Light emanating from the energy source hits various surfaces, and depending on
the nature of the surface, some of the light energy is reflected and travels to our eyes. Light can also
be refracted and partially absorbed as it travels through some substances, such as glass and water.
Conveniently, light travels quickly (speed of light = 299,792,458 meters per second) in mostly
straight lines, and so light can give us information about the momentary location of an object in
space, its shape and its size. In addition, because different surfaces reflect different wavelengths of
light, the light hitting our eyes can tell us something about the surface properties of objects
(including their colour), even though the objects might be far away from us.
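The rough correspondence between wavelength and experienced colour can be sketched in a few lines of code. This is purely an illustration, not part of the lecture: the band edges below are invented round numbers, and real perceptual colour boundaries are fuzzy and observer-dependent.

```python
# A rough sketch (not a perceptual standard): mapping visible wavelengths (nm)
# to the colour names they *approximately* evoke. Band edges are illustrative
# assumptions only.

def approximate_colour(wavelength_nm: float) -> str:
    """Return the colour name roughly associated with a visible wavelength."""
    if wavelength_nm < 400 or wavelength_nm > 750:
        return "outside the visible range"
    # Hypothetical band edges (nm); real boundaries blend into one another.
    bands = [(450, "violet/blue"), (495, "blue"), (570, "green"),
             (590, "yellow"), (620, "orange"), (750, "red")]
    for upper_edge, name in bands:
        if wavelength_nm <= upper_edge:
            return name
    return "red"

print(approximate_colour(470))  # shorter visible wavelength -> blue
print(approximate_colour(700))  # longer visible wavelength -> red
```

The key point the sketch captures is simply that ‘blue’ sits at the short end of the 400–750 nm window and ‘red’ at the long end, with everything outside that window invisible to us.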
The Eye
The organ devoted to light reception is obviously the eye. Light impinging on the eye first passes
through the cornea, which consists of transparent cells that slightly refract (i.e., alter the direction
of) the light passing through. If you wear contact lenses, these sit on the corneas of your eyes and if
you have had laser eye surgery, you likely had a layer of cells burned off your corneas to change the
amount they refract light (thus improving your vision).
After passing through the cornea, the light makes its way through the aperture formed by the iris,
which is called the pupil. The iris is the wonderfully coloured part of the eye and it controls the
size of the pupil, constricting the pupil to reduce the amount of light coming into the eye (e.g., under
bright lighting conditions) and dilating the pupil to increase the amount of light entering the eye
(e.g., under low light conditions).
After passing through the pupil, the light passes through the lens. Like the cornea, the lens is
composed of mostly transparent cells and it also refracts the light that passes through it. What
makes the lens unique, though, is that its oval shape can be narrowed or widened by the ciliary
muscles that surround the lens; changes in the shape of the lens change the degree to which light is
refracted as it passes through the lens. The function of the lens is to focus the light optimally on the
retina, which is at the back of the eye and which contains the light receptive cells called
photoreceptors. A small area of the retina called the fovea is particularly densely packed with
photoreceptors that are optimally responsive to daylight conditions. Before it hits the retina,
though, light passes through a clear fluid, called the vitreous humour, which fills the chamber of
the eye and gives the eye its firmness. This figure of the eye also shows that blood vessels and axons
from retinal cells exit the eye through the optic disc located at the back of the eye. The outer white
part of the eye is called the sclera.
Let’s consider what happens when your eye is hit with light coming from an object, like this letter F.
The letter is illuminated by a light source, and it reflects some of the light towards your eye. Light
rays coming from a point in space—such as those that are reflected from the top of the letter—
diverge, and then converge at the retina because of refraction by the cornea and the lens. The same
process occurs for light coming from every point, including light reflecting off the bottom of the
letter. The resulting image at the retina consists of a topographic representation of the external
world—that is, adjacent points in space are represented by adjacent cells on the retina—but one
which is vertically inverted and horizontally backwards relative to the external world. At the level of
the retina, three-dimensional visual space is represented on a two-dimensional plane.
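The inverted and reversed retinal image can be captured with very simple projection geometry. The sketch below is an illustration only: it treats the eye's optics as a single pinhole (real eyes refract light through a cornea and lens), and the ~17 mm retina distance is a commonly used approximation, not a value from the lecture.

```python
# A minimal geometric sketch of retinal image formation. Rays from a scene
# point cross at the optical center, so both axes flip on the retina, and
# adjacent scene points land at adjacent retinal points (topography).

def project_to_retina(x: float, y: float, z: float,
                      retina_depth: float = 0.017) -> tuple:
    """Project scene point (x, y) at distance z (metres) through a pinhole
    at the origin onto a retina retina_depth metres behind it."""
    scale = retina_depth / z
    # Rays cross at the pinhole: up/down and left/right are both reversed.
    return (-x * scale, -y * scale)

# A point up and to the right in the scene lands down and to the left
# on the retina, and far-away objects project to smaller images.
rx, ry = project_to_retina(x=1.0, y=2.0, z=10.0)
print(rx < 0 and ry < 0)  # True
```

Note also what the sketch makes explicit: the depth coordinate `z` only sets the scale of the image, which is one way of seeing how three-dimensional space collapses onto a two-dimensional retinal plane.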
For vision to be clear and sharp, the light has to converge precisely on the back of the retina. This
convergence is partly influenced by the shape and thickness of the lens, which, as we noted, are adjusted
by the ciliary muscles, under the control of the autonomic system. The ability to properly focus
light on the retina is also influenced by the shape of the eyeball. If the eyeball is shaped such that
the distance between the lens and the retina is shorter than it ought to be, you will have trouble
obtaining a sharp image of near objects, that is, you will have hyperopia, or farsightedness. In
contrast, if the distance between the lens and the retina is longer than it ought to be, you will not be
able to see far objects sharply; such a condition is called myopia or nearsightedness.
Strikingly, myopia in young people has been drastically increasing in recent years. According to one
article published in Nature in 2015 titled “The Myopia Boom”:
“East Asia has been gripped by an unprecedented rise in myopia, also known as short-sightedness. Sixty years ago, 10–20% of the Chinese population was short-sighted. Today, up
to 90% of teenagers and young adults are. In Seoul, a whopping 96.5% of 19-year-old men
are short-sighted. Other parts of the world have also seen a dramatic increase in the
condition, which now affects around half of young adults in the United States and Europe
— double the prevalence of half a century ago.”
What is the explanation for this catastrophic breakdown in vision? Well, there are evidently genetic
contributions to visual acuity, and some of the vision decline could be perhaps explained by recent
increases in the amount of book/screen time, which forces the eyes to focus on a single plane for an
extended period of time. However, there is emerging evidence that the recent rise in myopia might
be primarily caused by a reduced exposure to bright light—children are simply spending less time
outside in the bright sun. It turns out that bright light influences biochemical processes involved in
the development of the eye, and if children are not exposed to enough light, their eyes become
misshapen, leading to myopia.
“Based on epidemiological studies, Ian Morgan, a myopia researcher at the Australian
National University in Canberra, estimates that children need to spend around three hours
per day under light levels of at least 10,000 lux to be protected against myopia. This is about
the level experienced by someone under a shady tree, wearing sunglasses, on a bright
summer day.”97
So, for the sake of your eyes, you should get outside and soak up the sun!
The Retina
This is a photograph of the retina at the back of the eye. The meandering red lines are the blood
vessels that feed the retina, and the point at which they converge is the optic disc. Because the
blood vessels and axons from the neurons in the retina leave the eye through the optic disc, this area
has no photoreceptors and so the eye is effectively blind to the information that is projected onto
the optic disc. The slightly darker area to the left of the optic disc in this photo is the fovea, which
has a plethora of photoreceptors that operate best under daytime lighting conditions. The fovea is
actually only roughly 1.5 mm in diameter.98 Visual acuity is best in the fovea, and it progressively
decreases at greater eccentricities from the fovea.
The retina consists of several layers of cells, which are shown here. The layer shown at the
bottom—called the pigmented epithelium—is the layer against the outer most part of the eye.
Going inwards, into the eye, next is the photoreceptor layer, followed by the outer nuclear layer,
the outer plexiform layer, the inner nuclear layer, the inner plexiform layer and then the
ganglion cell layer.
A schematic depiction of the layers, and the types of cells in the retina, is shown here.
Shown at the bottom are the cells that sit against the back of the eye, called pigment epithelial
cells. Abutting the pigment epithelial cells are two kinds of photoreceptors: the rods and the
cones. The rods and cones are connected by horizontal cells and there are also bipolar cells that
connect the photoreceptors with ganglion cells, which are neurons that send their axons out of the
eye through the optic disc and to several areas of the central nervous system. Ganglion cells are also
connected to each other by amacrine cells. The figure also shows the presence of glia, such as
astrocytes, and a fascinating cell called a Muller cell (we’ll return to that one in a moment).
Importantly, although the various cells in the retina communicate via neurotransmitters, only the
ganglion cells produce action potentials down their axons; the rest of the cells only generate
graded potentials, and the amount of neurotransmitters they release depends on the degree to which
they are depolarized or hyperpolarized.
The two types of photoreceptors—the rods and the cones—differ in terms of their response
characteristics, their distributions in the retina and their morphologies.
In terms of their response characteristics, the rods and cones respond optimally to different
wavelengths of light and to different intensities of light. There are three different types of cones;
one type is responsive optimally to short wavelengths (about 420 nm); another to medium
wavelengths (about 530 nm), and a third that responds optimally to longer wavelengths (about 560
nm).99 In contrast, rods seem to respond most optimally to light within roughly the 500-nm range.100
Additionally, the cones respond most optimally in bright light while rods respond most optimally in
dim light. This is illustrated in the figure on this slide, which benchmarks luminance in terms of the
intensity of light reflected from white paper under different intensities of light illumination. Given
these characteristics, it will come as no surprise that it is the cones rather than the rods that are
critical for representing colour; our experiences of colour would require cells that respond to
different wavelengths of light—which is true of cones—and cells that respond better at higher light
intensities, since we view colours best under those conditions—conveniently, this is also true of
cones. In contrast, rods, which respond to dim light, carry achromatic information.
Rods and cones also differ in terms of their distributions throughout the retina. Shown here is a
graph depicting receptor density on the y-axis and eccentricity from the center of the fovea on the
x-axis; the center of the fovea is at zero degrees eccentricity. Notice the presence of the optic disc at
just under 20 degrees of eccentricity on the nasal side of the fovea; recall that this area has no
photoreceptors. More importantly, the figure shows that cones are densely populated in the fovea and
that their density drastically drops off at greater eccentricities. Nearly the opposite is true of rods:
rods are more densely distributed in the periphery of the retina than in the fovea. This means that
under normal daytime light levels, we have the greatest visual acuity and colour perception for
objects that reflect light into the fovea. Under low light conditions, such as at dusk, foveal vision
becomes less useful, and we might be able to take advantage of the more sensitive rods in the
periphery of vision.
Rods and cones also differ with regard to their morphologies, though they do have some
morphological commonalities. Schematics of a rod and a cone are shown here. Notice that both
types of photoreceptors have an outer segment that sits against the pigment epithelium, an inner
segment, which projects into the interior of the eyeball, then a cell body that contains a nucleus,
and then an extension that releases neurotransmitters (or a synaptic terminal), called a rod spherule
in the case of the rod, and a cone pedicle in the case of the cone. The outer segments contain
lamellae, which serve as the site at which light energy is changed into chemical and then electrical
activity. The lamellae are slightly different across rods and cones; the rods contain discs that are
inside the rods, whereas the cones have lamellae that are connected to the outer membrane of the
cones. The lamellae are continually turning over and this requires a continuous supply of nutrients
which is provided by the pigment epithelial cells.
The Inverted Retina?
Now, some have noted that the wiring of the retina seems to be inverted relative to what it should
be. Specifically, notice that the photoreceptors are at the back of the eye and that there are layers of
cells that sit between the light rays as they enter the eye and the photoreceptors. Wouldn’t that
architecture lead to poor vision, since the light would be absorbed or scattered by the various layers
of cells before it hits the receptors?
Well, it turns out that there are several design features of the eye that minimize this concern. First,
while the intervening retinal cells do scatter some light before it gets to the photoreceptors, the
retinal cells are quite transparent; in fact, if you were to place them under a light microscope you
would have a hard time seeing the details of the cells without an appropriate stain to block some of
the light. Second, at the fovea, which is the area that confers the best visual acuity, there is a parting
of the retinal cells such that in that region, light has more direct access to the photoreceptors. Third,
and this is where things get really interesting, the retina contains Muller cells, which stretch from the
inner surface of the retina to the photoreceptors, and serve to funnel light right to the
photoreceptors.101 These Muller cells are actually radial glia, and they act as highly effective optical fibers.
Writing in the prestigious journal Proceedings of the National Academy of Sciences, a group of
authors put it this way:103
“ . . . the optical properties and geometry of Muller cells are consistent with those of optical
fibers so that they serve as low-scattering conduits for light through the retina. The low
scattering is likely due to their peculiar ultrastructure because highly scattering objects, such
as mitochondria, are rare, or even absent, whereas abundant long thin filaments are oriented
along the cell axis . . .”
“The endfeet of Muller cells cover the entire inner retinal surface and have a low refractive
index, allowing a highly efficient entry of light from the vitreous into the Muller cells. . . The
collective parallel arrangement of Muller cells in the retina resembles that of optical fibers in
fiberoptic plates, which are used to transfer images between spatially separate planes with
low loss and low distortion.” (Franz et al., 2007, p. 8290)
So, it seems that the eye has multiple ways to maximize the conveyance of light to the photoreceptors.
While there is still much more to learn about vision, what we have covered so far already shows how
amazing the architecture of the human visual system really is.
Week 8: Video 2
Phototransduction
The process of changing light energy into electrical signaling is called phototransduction. The left
side of this schematic shows a rod, and the right side illustrates the process of phototransduction as
it occurs in the rod. In the rod, phototransduction occurs in the lamellae (or discs) located in the
outer segment of the rod. The phospholipid bilayer oval depicts the membrane of a disc and the
phospholipid bilayer on the right depicts the outer membrane of the rod. The top half of the figure
illustrates the events that occur when light shines on the disc, and the bottom half of the figure
shows the process that unfolds when the disc is in darkness.104
Let’s introduce some of the main characters in this process. Embedded in the membrane of the
disc is rhodopsin, which includes an opsin protein, and the light sensitive molecule retinal.
Rhodopsin is connected to a G protein called transducin. Also important is an enzyme called
phosphodiesterase, which breaks down cyclic guanosine monophosphate (cGMP) into
guanosine monophosphate (GMP). cGMP is important because it binds to, and opens, ion
channels. And we should mention guanylate cyclase which is an enzyme that creates cGMP from
guanosine triphosphate (GTP).
Let’s consider the state of affairs while the rod disc is in darkness. In the darkness, the retinal that’s
part of the rhodopsin molecule is in its inactive 11-cis configuration (those of you who know
chemistry will understand what “cis” refers to). In addition, phosphodiesterase remains inactive,
and this allows there to be high levels of cGMP. The cGMP attaches to ion channels and opens
them. The opening of the ion channels allows positive ions to flow into the rod, thus depolarizing
the rod. This influx of positive ions into the rod when it’s in darkness is called a dark current.105
There are other channels in the membrane that allow positive ions to leave the cell, which prevents a
catastrophic buildup of positive ions in the rod.106
Now, let’s consider what happens when light hits the disc. When the disc is exposed to light, the
photons change retinal from the 11-cis form to the all-trans form, thus changing rhodopsin into
metarhodopsin II. This leads to the release of the attached G-protein subunits. Part of the G-protein attaches to phosphodiesterase, making it active. Phosphodiesterase then changes cGMP
into GMP, thus reducing the amount of cGMP available to attach to ion channels. As cGMP levels
drop, cGMP detaches from ion channels, which leads to the closing of the channels. The end result is
a reduction in the influx of positive ions, causing the inside of the rod to be hyperpolarized. The ion
flow across the rod membrane as a result of light exposure is called a photocurrent.107 As the rod
hyperpolarizes, guanylate cyclase is activated, which changes guanosine triphosphate (GTP) to
cGMP, thus replenishing levels of cGMP.
At their connections with bipolar cells, photoreceptors release glutamate as the main
neurotransmitter. When the photoreceptor is depolarized, which is in darkness for the rod, the
release of glutamate is increased, whereas when the photoreceptor is hyperpolarized, the release of
glutamate is decreased.
Transduction is very counterintuitive. You might think that shining light on a photoreceptor should
depolarize the photoreceptor and lead to an increase in neurotransmitter release. But actually, the
opposite is true. Shining light on a rod hyperpolarizes the rod, and this leads to a reduction in the
release of the neurotransmitter glutamate.
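The counterintuitive chain just described can be summarized as a small qualitative model. This is a toy sketch, not from the lecture: it ignores all kinetics and concentrations and only encodes the direction of each step in the rod cascade.

```python
# A highly simplified sketch of rod phototransduction, reduced to
# qualitative states: each line mirrors one step described in the text.

def rod_response(light_present: bool) -> dict:
    """Trace the qualitative state of the rod cascade in light vs darkness."""
    if light_present:
        retinal_form = "all-trans"   # photons isomerize 11-cis retinal
        pde_active = True            # transducin subunit activates phosphodiesterase
    else:
        retinal_form = "11-cis"
        pde_active = False
    cgmp_high = not pde_active       # active PDE breaks cGMP down into GMP
    channels_open = cgmp_high        # cGMP holds cation channels open
    membrane = "depolarized" if channels_open else "hyperpolarized"
    glutamate = "high" if membrane == "depolarized" else "low"
    return {"retinal": retinal_form, "membrane": membrane, "glutamate": glutamate}

print(rod_response(light_present=False))  # dark: depolarized, glutamate release high
print(rod_response(light_present=True))   # light: hyperpolarized, glutamate release low
```

Tracing the function by hand reproduces the punchline of the video: light closes channels, hyperpolarizes the rod, and *reduces* glutamate release.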
Week 8: Video 3
Receptive Fields of Retinal Cells
The cells in the retina are responsive to light coming from specific areas of space. The area of space
to which a cell responds is called its receptive field. A single photoreceptor is activated by a small
area of space, so it has a small receptive field. Multiple photoreceptors can be connected to a
single bipolar cell, so the receptive field of that bipolar cell is a combination of the receptive fields of
the connected photoreceptors. Moving further down the wiring diagram, there might be a ganglion
cell that’s connected to multiple bipolar cells, and so the receptive field of that ganglion cell is a
combination of the receptive fields of the contributing bipolar cells. Thus, as we move further
down the visual processing stream, the receptive fields of the cells typically become larger; in other
words, the downstream cells respond to stimulation coming from larger areas of space.
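The idea that receptive fields grow by pooling can be made concrete with a toy model. Nothing here comes from the lecture: the cell counts and wiring pattern are invented, and a receptive field is modeled simply as the set of photoreceptor positions that feed a cell.

```python
# Toy sketch of receptive-field growth by convergence. Each cell's receptive
# field is the set of photoreceptor positions that (directly or indirectly)
# feed it; the wiring below is purely illustrative.

# Ten photoreceptors, each "seeing" one point of space.
photoreceptors = [{i} for i in range(10)]

# Each hypothetical bipolar cell pools two adjacent photoreceptors, so its
# receptive field is the union of theirs.
bipolars = [photoreceptors[i] | photoreceptors[i + 1] for i in range(0, 10, 2)]

# A hypothetical ganglion cell pools three bipolar cells.
ganglion = set().union(*bipolars[:3])

print(len(photoreceptors[0]))  # 1 -> smallest receptive field
print(len(bipolars[0]))        # 2 -> larger
print(len(ganglion))           # 6 -> larger still
```

The sketch shows the general principle: each convergence step takes the union of the upstream receptive fields, so cells further down the stream respond to progressively larger regions of space.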
Bipolar cells and ganglion cells have complex receptive fields and so it’s perhaps best if we slowly
work our way up to a description of their full glory.
ON and OFF Bipolar Cells
Let’s start with a consideration of how bipolar cells respond to input from the photoreceptors to
which they are connected.108 In this regard, there are two types of bipolar cells: ON bipolar cells and
OFF bipolar cells.109
ON bipolar cells are depolarized—turned on, in other words—when the photoreceptors that
connect with them are illuminated by light. The process here is counterintuitive, so make sure you
pay close attention.
Let’s start with what happens when the photoreceptor is in darkness:
 Recall that in darkness, a photoreceptor is depolarized and it releases glutamate at its
synapse with a bipolar cell. You might think that since glutamate is an excitatory
neurotransmitter, this should lead to a depolarization of the bipolar cell. But this is not
always the case. In this unusual situation, the ON bipolar cell has metabotropic glutamate
receptors that, when activated by glutamate, trigger a closing of cation channels, which
prevents positive ions from entering the bipolar cell. This means that in the dark, the
bipolar cell is hyperpolarized.
 Now, let’s shine a light on the photoreceptor. When the photoreceptor that synapses with
the bipolar cell is exposed to light, the photoreceptor is hyperpolarized, and this leads to a
reduction in the release of glutamate. Because there is less glutamate at the synapse, the
metabotropic receptor on the bipolar cell is stimulated less, and so it closes fewer cation
channels. This means that positive ions can flow into the bipolar cell, causing the bipolar
cell to be depolarized. Thus, shining a light on the photoreceptor excites the
corresponding ON bipolar cell.
A different process unfolds for OFF bipolar cells, which are hyperpolarized—or turned off—when
the photoreceptors connected to them are exposed to light.110 A notable characteristic of these OFF
bipolar cells is that they have ionotropic glutamate receptors that open cation channels when they
are stimulated.
 In darkness, the photoreceptors are depolarized and release glutamate into the synapse.
This time however, the glutamate attaches to ionotropic receptors that open cation
channels, which allows positive ions to flow into the bipolar cell, thus depolarizing it.
 When illuminated, the photoreceptors are hyperpolarized and release less glutamate. This
means less stimulation of the ionotropic receptors, fewer open cation channels, and thus
less depolarization of the bipolar cell. So, in this case, exposing the receptor to light
deactivates the connecting OFF bipolar cell.
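The chain of sign flips above is easy to lose track of, so here is a toy Python sketch of the logic. The function names and the “high/low” glutamate coding are simplifications made up for this example; this is bookkeeping of the signs described above, not a biophysical model.

```python
# Illustrative sign-logic sketch of ON vs OFF bipolar cell responses.
# "high"/"low" glutamate and the string return values are simplifications
# for this example, not a biophysical model.

def photoreceptor_glutamate(light_on: bool) -> str:
    """Photoreceptors hyperpolarize in light and release LESS glutamate."""
    return "low" if light_on else "high"

def bipolar_response(cell_type: str, light_on: bool) -> str:
    glutamate = photoreceptor_glutamate(light_on)
    if cell_type == "ON":
        # Metabotropic receptors: glutamate CLOSES cation channels,
        # so less glutamate -> more open channels -> depolarized.
        return "depolarized" if glutamate == "low" else "hyperpolarized"
    elif cell_type == "OFF":
        # Ionotropic receptors: glutamate OPENS cation channels,
        # so less glutamate -> fewer open channels -> hyperpolarized.
        return "hyperpolarized" if glutamate == "low" else "depolarized"
    raise ValueError(f"unknown cell type: {cell_type}")

print(bipolar_response("ON", light_on=True))    # depolarized
print(bipolar_response("OFF", light_on=True))   # hyperpolarized
```

Notice that the only difference between the two cell types in this sketch is the receptor rule, which is exactly the point: the same drop in glutamate excites an ON cell and silences an OFF cell.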
Now, if this wasn’t confusing enough, it turns out that the activity of a bipolar cell is not only
influenced by the photoreceptors to which it is directly connected, but that it is also influenced by
the activity of neighboring photoreceptors. This occurs indirectly through horizontal cells. These
are activated by the surrounding photoreceptors, and then inhibit the photoreceptors directly
connected to the bipolar cell in question.
In this example, we have included one photoreceptor that is directly connected to a bipolar cell (the
one in the center) and two photoreceptors that surround it. Importantly, notice that a horizontal cell
connects the photoreceptors, including the central one, which is the one connected to the bipolar cell.
Because of this organization, illumination of the central photoreceptor can have a different effect on
the bipolar cell than illumination of the surrounding photoreceptors. Thus, the bipolar cell has a
center-surround receptive field.
We should also add a ganglion cell to our schematic, because ganglion cells also have center-surround receptive fields.
Now, let’s see how this center-surround receptive field arrangement plays out for ON bipolar cells;
these have ON-center/OFF-surround receptive fields.
We can imagine a situation in which a beam of light hits only the central receptors (in this case we
only have one central receptor), leaving the surrounding receptors in darkness.
 Repeating what we’ve already discussed, when light hits the central photoreceptor, the
photoreceptor is hyperpolarized and so it releases less glutamate at the synapse with the
bipolar cell.
 Further reviewing what we just learned, because we are dealing with an ON bipolar cell, less
glutamate at the synapse means that the bipolar cell is depolarized. This is reviewed in the
box on the right of this slide.
 Here is the new part: Simultaneously, the horizontal cell is stimulated by the surrounding
photoreceptors (because they are active since they are in darkness), and as a result, the
horizontal cell inhibits the central photoreceptor, thus further reducing its glutamate release.
The net effect is a large depolarization of the bipolar cell.
This leads to the excitation of the connected ganglion cell, which fires more vigorously. So,
when the central photoreceptor is illuminated and the surrounding ones are in the dark, both
the bipolar cell and the ganglion cell are turned ON.
So that explains the ON-center, but what about the OFF surround?
Well, here is a case in which light illuminates only the surrounding photoreceptors and not the
central photoreceptor, which remains in darkness.
 In this case, since we are dealing with an ON bipolar cell, whose connected central
photoreceptor is in darkness, the bipolar cell is hyperpolarized by the central
photoreceptor—the details are repeated in the box on the right.
 However, in this case, the peripheral photoreceptors are hyperpolarized by light, and this
leads to a reduction in the extent to which the horizontal cell inhibits the central
photoreceptor. Accordingly, the central photoreceptor releases more glutamate and so
further hyperpolarizes the bipolar cell.
 Thus, overall, the bipolar cell is hyperpolarized.
 And the firing rate of the attached ganglion cell is correspondingly reduced.
 So, shining light on the surround of the bipolar cell’s receptive field turns the bipolar cell,
and the ganglion cell, OFF.
Similar principles apply to OFF bipolar cells, though these have OFF-center/ON-surround
receptive fields.
When the central receptor is illuminated:
 The central receptor is again hyperpolarized and releases less glutamate. In the case of these
OFF-center bipolar cells, however, a reduction in glutamate at the synapse leads to a
hyperpolarization of the bipolar cell.
 Simultaneously, the horizontal cells are stimulated by the peripheral photoreceptors, which
are in the dark, and so the horizontal cells further inhibit the central photoreceptors.
 Overall, the net result is a hyperpolarized bipolar cell, and a less active ganglion cell.
 So, shining light on the center of the receptive fields of these cells, turns them OFF.
In contrast, when the surrounding receptors are illuminated and the central receptor is in darkness:
 The central photoreceptor is depolarized and so releases more glutamate, which leads to a
depolarization of the OFF bipolar cell.
 Since the peripheral photoreceptors are illuminated, they release less glutamate, and this
means that the horizontal cells provide little inhibition of the central photoreceptor, thus
allowing it to further depolarize the bipolar cell.
 Overall then, the bipolar cell is depolarized, and the firing rate of the corresponding ganglion
cell is increased.
Now, the obvious question you might have is: What is the purpose of these center-surround
receptive fields? The answer is that they provide information about subtle differences in visual
stimulation—which is necessary for object perception.
For example, shown here are three different scenarios wherein an object casts light on a segment of
the retina containing a ganglion cell with an ON-center/OFF-surround receptive field. Since we are
talking about a ganglion cell, keep in mind that it has action potentials, and so we can represent
these as vertical lines on a timeline. Notice that when the receptive field is in darkness, the ganglion
cell fires moderately—which is its resting firing rate.
In contrast, when an object casts a beam of light on the receptive field of the cell, hitting the
surround and not the center, the ganglion cell is silenced. And, when the light coming from the
object hits the central portion of the receptive field, the ganglion cell fires more vigorously. So, the
donut-shaped receptive field helps the ganglion cell “know” (to anthropomorphise the cell)
something about the characteristics of the object in its receptive field.
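The three scenarios can be mimicked with a toy weight grid: an excitatory center surrounded by inhibitory weights. The grid size, weights, and firing-rate constants below are arbitrary choices for illustration, not physiological values.

```python
import numpy as np

# Toy ON-center/OFF-surround receptive field as a tiny weight grid:
# excitatory center, inhibitory surround. The weights are arbitrary
# choices for illustration, not measured values.
rf = np.full((3, 3), -0.125)   # surround: 8 cells, total weight -1
rf[1, 1] = 1.0                 # center: weight +1

baseline = 10.0  # resting firing rate, arbitrary units

def ganglion_rate(image_patch: np.ndarray) -> float:
    """Firing rate = baseline + weighted sum of light in the receptive field."""
    drive = float(np.sum(rf * image_patch))
    return max(0.0, baseline + 20.0 * drive)

dark = np.zeros((3, 3))                          # whole field in darkness
surround_lit = np.ones((3, 3)); surround_lit[1, 1] = 0.0   # light on surround only
center_lit = np.zeros((3, 3)); center_lit[1, 1] = 1.0      # light on center only

print(ganglion_rate(dark))          # 10.0 -> moderate resting rate
print(ganglion_rate(surround_lit))  # 0.0  -> silenced by the surround
print(ganglion_rate(center_lit))    # 30.0 -> fires vigorously
```

Even this crude sketch reproduces the qualitative pattern in the slide: resting rate in darkness, silence when only the surround is lit, and vigorous firing when only the center is lit.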
More about Ganglion Cell Receptive Fields
We should say a little more about the receptive fields of ganglion cells.
There are roughly 17 (possibly more) types of ganglion cells and their receptive fields cover the
entire visual field.
The most common and the most well studied of the ganglion cells are the parasol, the midget, and
the bistratified cells. This table shows the various response characteristics of these three different
cells. Of the three types, the parasol and midget cells are the most well understood.
Compared to the parasol cells, the midget cells have smaller receptive fields, have slower axon
conductance, are less sensitive to low-contrast information, and are responsive to higher spatial
frequencies and to lower temporal frequencies. Parasol cells seem to be particularly sensitive to
motion in the visual field. Midget cells also convey red-green colour information while parasol cells
carry achromatic information. The bistratified cells carry blue and yellow colour information111, but
otherwise have been difficult to fully characterize.112 In terms of their characteristics, we could
speculate that the bistratified cells are similar to the midget cells. In general, it seems that midget
and bistratified cells are most suited for conveying information about the form and colour of
objects, while parasol cells are most suited for conveying information about object movement.
This slide shows how a visual scene is simultaneously represented by two different types of ganglion
cells. Each oval represents a receptive field of a ganglion cell. The red ovals depict the receptive
fields of midget cells, which represent the amount of red-green in each region of the scene. The
larger black ovals are the receptive fields of parasol cells, which respond to motion in the visual field.
Notice that these two types of cells provide separate but parallel representations of the visual field,
each extracting different information from the visual field.
Concluding Comments:
Thus, already at the level of the retina, different visual characteristics are coded in parallel by
different cells. As we discuss vision further, we’ll see that this parallel nature of visual coding
continues throughout the visual system.
Week 8: Video 4
Parallel Visual Processing Streams
There are several distinct visual pathways that project from the retina of the eyes to various areas of
the central nervous system.
Visual Pathways
The primary visual pathway, which is called the geniculostriate pathway, is shown here. As you
can see, retinal ganglion cells send their axons from the eyes via the optic nerve, through the optic
chiasm and then to the lateral geniculate nucleus of the thalamus (the LGN). Cells in the
LGN then send axons via the optic radiations to the primary visual cortex, which is also called
striate cortex or V1 (which stands for visual area 1). Notice that the information in the left visual
fields of both eyes projects to the right sides of the retinas of the eyes, and this information
propagates to the right primary visual cortex. The opposite is true for information in the right visual
fields of both eyes, which projects to the left sides of the retinas, and is then propagated to the
primary visual cortex in the left hemisphere. This is because the axons from the retinal ganglion
cells on the nasal sides of the retinas cross over at the optic chiasm and project to the contralateral
hemispheres. In contrast, the axons of the ganglion cells on the lateral sides of the retinas project to
the ipsilateral hemisphere. The geniculostriate pathway conveys the bulk of the information used for
conscious visual object recognition and for visually guided action.
There is another, more minor, visual pathway, called the retinocollicular pathway. This pathway
involves retinal ganglion cell axons that project to the superior colliculi of the midbrain, which then
send their outputs to the pulvinar nuclei of the thalamus. This pathway is involved in visual
orienting, which includes directing eye movements, and it’s also involved in the perception of object
position and motion.
There is also a third visual pathway that we’ll encounter later when we discuss sleep and circadian rhythms.
The Geniculostriate Pathway
Let’s focus a little more on the primary geniculostriate pathway. The geniculostriate pathway
actually consists of multiple parallel visual processing streams, each of which codes for, processes,
and conveys different information in the visual array. These distinct parallel processing streams
have their origins in the different response characteristics of different types of ganglion cells.
The main types of ganglion cells include parasol, midget and bistratified cells—these are
schematically shown on the right side of this slide. Recall that parasol cells have large receptive
fields, have fast conducting axons, are sensitive to low contrast information, are sensitive to motion
and carry achromatic information. Midget and bistratified cells have small receptive fields, have
slower conducting axons, are sensitive at higher levels of contrast, are not particularly sensitive to
motion and carry colour information (red-green for midget cells and blue-yellow for bistratified
cells). A useful way to summarize this information is to say that midget and bistratified cells code
for visual form and colour, while parasol cells code for object movement. So, these three different
types of ganglion cells convey different information about the visual scene.
These three types of cells form separate projections to distinct layers of the lateral geniculate nucleus
(or LGN). Specifically, parasol cells connect to the magnocellular layers of the LGN, midget cells
connect to the parvocellular layers, and bistratified cells connect to the koniocellular layers.
Because of these connections, parasol ganglion cells are sometimes called M-cells (M for their
magnocellular target), midget cells are sometimes called P-cells (P for their parvocellular target) and
bistratified cells are called K-cells (K for their koniocellular target). Each eye sends input to distinct
layers of the LGN; layers 1, 4 and 6 receive input from the contralateral eye113. A retinotopic map of
the visual field is maintained in the LGN.
Cells in the LGN then send their axons to the primary visual cortex. The primary visual cortex has
six layers. The koniocellular pathway projects to the outer layers of primary visual cortex and the
magnocellular and parvocellular pathways project to different sublayers of layer 4 of the primary
visual cortex. Thus, the separation between pathways is maintained all the way to the primary visual cortex.
The primary visual cortex contains an ordered retinotopic map of visual space. That is, adjacent
points in external space are represented by adjacent cells in the retina, and correspondingly by
adjacent cells in the LGN and also adjacent cells in the primary visual cortex. This retinotopic
organization of a macaque monkey’s visual cortex is shown here.114 The visual stimulus that was
presented to the monkey is shown on the top left, and the resulting activation of the monkey’s
primary visual cortex (as measured using the 2-deoxy-D-glucose method) is shown on the bottom
left. The corresponding images on the right included colours added after the fact to better illustrate
the visual field-cortical mapping.
Notice that the image presented to one of the monkey’s visual fields is topographically duplicated in the
primary visual cortex—this would be the visual cortex of the contralateral hemisphere. The foveal
region is represented by tissue in the most posterior aspect of the primary visual cortex, and the
peripheral region of the visual field is represented by more anterior segments of the primary visual
cortex. Notice also that the visual map is somewhat distorted, with the foveal regions being
allocated disproportionately more cortical tissue compared to the peripheral regions of the retina.
This is called “cortical magnification”. Also, the lower visual field is represented by superior
regions of the primary visual cortex, and conversely, the upper visual field is represented by inferior
regions of the visual cortex.
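Cortical magnification is often approximated with a logarithmic mapping from eccentricity (distance from the fovea, in degrees) to cortical distance. The log form and the constants in this sketch are illustrative assumptions, not values fitted to human or macaque data.

```python
import math

# Rough sketch of cortical magnification: cortical distance grows roughly
# with the log of eccentricity, so each foveal degree gets far more cortex
# than each peripheral degree. The constants k and e0 are illustrative
# assumptions, not fitted values.
def cortical_distance_mm(eccentricity_deg: float,
                         k: float = 15.0, e0: float = 0.75) -> float:
    return k * math.log(1.0 + eccentricity_deg / e0)

# Cortex devoted to one degree of visual field at the fovea vs the periphery.
fovea_span = cortical_distance_mm(1.0) - cortical_distance_mm(0.0)
periphery_span = cortical_distance_mm(31.0) - cortical_distance_mm(30.0)

print(round(fovea_span, 2))      # cortex for the 0-1 degree strip
print(round(periphery_span, 2))  # cortex for the 30-31 degree strip (much less)
```

Under these assumed constants, the foveal degree receives over twenty times as much cortical distance as the peripheral degree, which is the qualitative distortion visible in the macaque activation map.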
Feature Detectors in the Primary Visual Cortex
Cells in the primary visual cortex appear to be responsive to basic visual features. This was initially
demonstrated by David Hubel and Torsten Wiesel in their studies in which they recorded the
activity of single cells in the visual cortices of cats and monkeys. In their studies, Hubel and Wiesel
mapped out the characteristics of the receptive fields of various cells in the primary visual cortices of
the animals they studied. They identified several different kinds of cells.
One type, which they called simple cells, had receptive fields with distinct excitatory ‘on’ regions
and inhibitory ‘off’ regions. Typically, these ‘on’ and ‘off’ regions were elongated, such that the cell
responded maximally when a rectangle (or a ‘slit’ as they called it) matching the width of the ‘on’
region was positioned in the ‘on’ region and aligned with the orientation of the elongated ‘on’
region. The schematic shown here depicts the firing of the cells as vertical lines on a horizontal
timeline. Evidently, [and I’m quoting their paper here] “changing the orientation by more than 5-10°
was usually enough to reduce a response greatly or even abolish it.”115 And, even if the bar was
suitably oriented, if it deviated in position from the ‘on’ region, the cell was also silenced. So, simple
cells have receptive fields with specific excitatory and inhibitory areas, they respond maximally to
stimuli that fall on specific locations of their receptive fields, and they are sensitive to stimuli of a
particular orientation. This means that simple cells could effectively serve as feature detectors.
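A simple cell’s joint selectivity for orientation and position can be sketched as a weighted template: an elongated excitatory ‘slit’ flanked by inhibitory regions. The grid size, weights, and rectified sum below are illustrative assumptions, not a model fitted to Hubel and Wiesel’s recordings.

```python
import numpy as np

# Toy "simple cell": an elongated excitatory 'on' region flanked by
# inhibitory 'off' regions, tuned to a vertical slit. Weights are
# illustrative assumptions, not fitted to real data.
rf = np.full((5, 5), -0.25)
rf[:, 2] = 1.0   # vertical excitatory 'slit' down the middle column

def simple_cell_response(stimulus: np.ndarray) -> float:
    """Rectified weighted sum of the stimulus over the receptive field."""
    return max(0.0, float(np.sum(rf * stimulus)))

vertical_bar = np.zeros((5, 5)); vertical_bar[:, 2] = 1.0    # optimal stimulus
horizontal_bar = np.zeros((5, 5)); horizontal_bar[2, :] = 1.0  # wrong orientation
offset_bar = np.zeros((5, 5)); offset_bar[:, 0] = 1.0        # right orientation, wrong place

print(simple_cell_response(vertical_bar))    # 5.0 -> strong response
print(simple_cell_response(horizontal_bar))  # 0.0 -> silenced
print(simple_cell_response(offset_bar))      # 0.0 -> silenced
```

The sketch captures the two failure modes Hubel and Wiesel described: a bar of the wrong orientation falls mostly on the inhibitory flanks, and a correctly oriented bar in the wrong position misses the excitatory region entirely.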
Hubel and Wiesel also identified what they called ‘complex cells’ in the primary visual cortex of
their animal ‘participants’. According to Hubel and Wiesel, compared to simple cells, these cells had
[quote]“far more intricate and elaborate properties”. Importantly, complex cells lacked clearly
identifiable excitatory and inhibitory regions. One type of complex cell—which is depicted here—
responded optimally to rectangles of specific orientations, regardless of where they were positioned
in the receptive field. When the orientation of the rectangle deviated from the optimal orientation,
the cell responded less vigorously.
A third type of cell identified by Hubel and Wiesel was a hypercomplex cell.116 As one example of
a hypercomplex cell, Hubel and Wiesel described a cell that had a central excitation region that was
surrounded by inhibitory regions. When a stimulus fell within the bounds of the central region, the
cell responded vigorously, but when it extended beyond the bounds of the central region, the cell’s
activity was reduced. These cells are thought to be useful for detecting locations at which objects end, such as edges and corners.
In addition to discovering the various response characteristics of the cells in the primary visual
cortex, Hubel and Wiesel also found that the cells in the primary visual cortex were systematically
organized according to their response characteristics. A simplified depiction of the organization is
shown here—this graphic is directly from one of Hubel and Wiesel’s papers. Hubel and Wiesel
identified ‘slabs’ or columns of cells responsive to a particular stimulus orientation with adjacent
columns responsive to different but similar orientations. Hubel and Wiesel noted that as they
moved their electrode “as little as 25 or 50 micrometers (thousandths of a millimeter) the optimal
orientation changed by a small step, about 10 degrees on the average; the steps continued in the
same direction, clockwise or counter-clockwise, through a total angle of anywhere from 90 to 270
degrees.” Some of these columns were responsive to input from only one of the eyes, forming what are now
called ocular-dominance columns. Within these, there are areas that respond to colour, which
show up using a cytochrome oxidase stain, and these were aptly called “blobs”; the regions between
the blobs, which do not respond to colour, were called “interblob regions”.118,119
So, Hubel and Wiesel found that distinct visual characteristics were still processed separately even in
the primary visual cortex.
The Primary Visual Cortex and Visual Awareness
Activity in the primary visual cortex is strongly related to visual awareness. Interestingly, even the
conscious act of visual imagery (which occurs in the absence of external visual stimulation) activates
corresponding areas of primary visual cortex.120 Damaging a section of primary visual cortex leads to
a lack of visual awareness of information that falls within the receptive fields of the damaged cells.
So, for instance, if the primary visual cortex in your left hemisphere was completely damaged, you
would lose your ability to consciously perceive objects in the right visual field. This is called a
hemianopia. Interestingly, however, such damage can result in a condition known as blindsight.
An extensive case study of a patient named D.B., who presented with blindsight, was reported in a
now classic book by Lawrence Weiskrantz. The book is titled simply “Blindsight: A case study and
implications”. D.B. had malformed blood vessels in the region of the right occipital lobe. He
underwent surgery to correct the problem, which resulted in the removal of a part of his right
occipital cortex. Weiskrantz writes:
“From the surgical notes it was estimated that the excision extended from the occipital pole
forward by approximately 6 cm and was thought to include the major portion of the
calcarine cortex—in which the striate cortex is situated—on the medial surface of the right
hemisphere.” Weiskrantz (1986, p. 21)
As one would expect, when tested 8 months after his operation, DB was blind to information
presented in most of his left visual field. So, he had nearly complete hemianopia. However, his
doctors noticed something interesting about DB’s vision. Quoting Weiskrantz again:
“The ophthalmic surgeon at the National Hospital, Michael D. Sanders, noticed that D.B.
appeared to be able to locate objects in his supposed blind field much more skillfully than
one might have expected. For example, even though D.B. could not see one’s outstretched
hand, he seemed to be able to reach for it accurately. We put movable markers on the wall
to the left of his fixation, and again he seemed to be able to point to them, although he said
he had not actually seen them.” Weiskrantz (1986, p. 23-24)
So, while D.B. was not able to consciously ‘see’ objects presented in the blind field, he was able to
reach for them correctly and even guess at the orientation of objects with an impressive degree of
accuracy. Weiskrantz writes: “This apparent dissociation between D.B.’s capacity and awareness of
it we dubbed ‘blindsight’.” Weiskrantz (1986, p. 24)
What brain mechanisms are responsible for the spared visual capabilities characteristic of blindsight?
Well, this is where the more minor visual pathway through the superior colliculi comes into play.121
It turns out that the superior colliculi send their outputs to the thalamic pulvinar nuclei, and this
region in turn sends axons that bypass the primary visual cortex and connect to other cortical areas
that process visual position and motion.122
This brings us to the next main topic, which is visual processing that happens downstream of the
primary visual cortex.
Week 9: Video 1
Beyond the Primary Visual Cortex
As we’ve already discussed, axons from retinal ganglion cells project to several areas of the central
nervous system. One of the main targets of these projections is the lateral geniculate nucleus of the
thalamus. From there, thalamic cells send axons to the primary visual cortex (also known as striate
cortex or V1) where basic visual processing occurs. We referred to this as the geniculostriate
pathway. But what happens to visual information once it’s been processed by the primary visual cortex?
Right next to the primary visual cortex is the secondary visual cortex, also known as V2 (for visual
area 2). You can see area V2 in these images depicting the medial aspect of the brain. V1, or
primary visual cortex, is the prominent yellow region. Next to V1, you can see V2 depicted in blue.
There are other visual areas that are also labeled, but we can ignore those for now. V2, or the
secondary visual cortex, has bidirectional connections with primary visual cortex and elaborates on
the processing that occurs in the primary visual cortex.
From these early processing areas, information is passed on to other cortical areas along two general
streams or pathways, which are shown here. The dorsal stream, which includes the middle and
superior temporal areas, as well as parietal areas, is primarily responsible for processing object
movement and object location, and also for visually guided action. This stream has been called the
“where” pathway. In contrast, the ventral stream, which includes the inferior temporal cortex, is
primarily involved in processing object form and colour. This stream has been dubbed the “what”
pathway, as it is heavily involved in object recognition.
The Ventral Stream
We’ll discuss the ventral stream first. Along the ventral pathway, there is an area –called area V4
(visual area 4)--that has been implicated in colour perception. This is a figure from an early study
which used PET to localize the cortical areas responsive to colour. The researchers compared brain
activation obtained during moments when participants viewed coloured patches and during
moments when they viewed gray patches123. A comparison of these conditions revealed a unique
area of activation responsible for colour perception—area V4—which is located on the ventral
aspect of the brain, in the ventral visual pathway.
Damage to this area can lead to a condition known as cerebral achromatopsia.124 Cerebral
achromatopsia is characterized by impaired colour perception even though there is no damage to the
colour processing machinery in the retina. In some cases, when the damage extends to the regions
surrounding V4, there can be a complete loss of colour vision.
This figure shows a lesion overlay plot of patients with brain damage who experienced
achromatopsia.125 Notice that the region of greatest damage overlap maps roughly onto the location
of V4.
Further down the ventral stream, there are areas, particularly the inferior temporal cortex, that
contain cells which respond to specific categories of stimuli, such as faces and places. Shown here is
the ventral aspect of the temporal cortex, which contains the fusiform face area (the FFA), which
is specifically responsive to faces; and the parahippocampal place area (the PPA), which contains
cells specifically responsive to places.126 Notice also that there are some regions that respond more
to animate objects and other regions that respond more to inanimate objects.
This figure shows how the receptive fields of cells differ between areas along the ventral stream of
processing. The receptive fields become progressively larger along the ventral pathway, whereby the
receptive fields of cells in the inferior temporal cortex are much larger than those in V1 and V2.
The cells also respond to progressively more complex information as we move along the ventral
stream to the inferior temporal cortex. Cells in the inferior temporal cortex respond to objects
regardless of their retinal size or their orientation; thus unlike cells in the primary visual cortex, those
in the inferior temporal cortex respond to the identity of objects, and not just to basic visual features.
Damage to parts of the ventral stream can lead to visual agnosia, which is a deficit of visual object
recognition. Patients with visual agnosia can recognize objects by touch or by sound; they’re just
unable to recognize objects by vision alone. Some patients, who have a subtype of visual agnosia
called apperceptive agnosia (or visual form agnosia), have problems even putting together the
basic features of objects, and so they have trouble copying objects. The rather poor attempts at
copying objects of an apperceptive agnosic are shown here. Other patients, who have a subtype of
visual agnosia called associative agnosia, can put the features of objects together, but are unable to
link the resulting visual forms with meaning. These patients can copy objects just fine, but even as
they copy them, they cannot identify them. The patient whose picture copies are shown here
consistently failed to identify the copied objects; the patient identified the keys as a violin, for example.
Some patients have a specific problem with visual facial recognition, a condition known as
prosopagnosia. Prosopagnosics can make out the various features of a face, but they cannot put
the features together to arrive at a percept of the whole face.
One example of a patient with visual form agnosia is a woman who suffered brain damage as a result
of carbon monoxide poisoning. According to the case description:
“Three to six months after her accident, neuropsychological and psychophysical testing
revealed the presence of a profound 'visual form agnosia'. D.F. showed poor perception of
shape or orientation, whether this information was conveyed by colour, intensity, stereopsis,
motion, proximity, continuity or similarity. Extensive visual testing indicated that the
patient's visual form agnosia could not be reduced to a simple sensory deficit.”127 (p. 154)
As you can see here, D.F. sustained damage to her ventral visual pathway.
To demonstrate her visual agnosia, a team of researchers from Western University in Ontario, led by
Mel Goodale, presented D.F. with a slot (like one you would find in a mailbox) roughly half a meter
in front of her. The researchers varied the orientation of the slot and D.F. was asked to hold a card
close to her body and orient the card such that it aligned with the orientation of the slot.
D.F.’s card orientation performance is shown here, together with the performance of two control
participants, identified as C.G. and C.J. The little triangle indicates the actual orientation of the
slot. The lines indicate the orientations of the card each time the participant attempted to manually
match the slot orientation. Notice that the performance of the control participants was nearly
perfect. In contrast, D.F.’s responses were highly variable; clearly, she had trouble visually estimating
the orientation of the slot.
However, it turns out that D.F. was able to complete certain visual tasks without much of a
problem. For example, in another task, D.F. and the control participants were required to place the
card through the slot as if they were posting a letter. Strikingly, when reaching towards the slot,
D.F. oriented the card with a high degree of accuracy, much like the control participants. According
to the researchers:
“ . . . analysis of video records of each reaching movement revealed that, like the controls,
D.F. began to orient the card correctly even as her hand was being raised from the start
position . . .”128 (p. 155)
So, while D.F. showed impairments in visual object identification, she did not have problems with
visually guided action. This is likely because she suffered damage to the ventral stream of
processing, but had spared dorsal stream processing, which supports vision used for the purpose of
guiding action.
The Dorsal Stream
Let’s turn to the dorsal stream of processing. As we’ve just mentioned, the dorsal stream is involved
in supporting visually guided action.
Damage to the dorsal stream can lead to a condition known as optic ataxia, which is [and I’m
quoting here] “a failure to point or reach accurately towards objects presented visually.”129 The
deficit is particularly noticeable when patients have to reach for objects placed in the periphery of
vision. Despite their reaching problem, patients with optic ataxia can visually identify objects just
fine, which makes sense because their ventral processing stream remains intact. They also don’t
have an underlying motor problem; their problem is specific to visually guided reaching movements.
The dorsal pathway is also involved in processing visual motion. Two areas involved in visual
motion perception are the middle-temporal cortex (MT or V5) and the medial superior
temporal cortex (MST), which both have a retinotopic map, though MST has a coarser one.130
These areas are shown here in relation to the superior temporal sulcus (STS) and the inferior
temporal sulcus (ITS).
Area MT has cells with small receptive fields that code for direction of movements, specific speeds
of object movement, changes in object speeds, and object movements relative to their backgrounds.
In contrast, MST has large receptive fields with the dorsal part coding for expansion/contraction
and rotation of objects, and the lateral ventral part coding for object movement relative to the
background.
Damage to motion processing areas can lead to a condition known as akinetopsia, which is
characterized by a specific inability to visually perceive object motion.131 Here is one intriguing case:
“The visual disorder complained of by the patient was a loss of movement vision in all three
dimensions. She had difficulty, for example, in pouring tea or coffee into a cup because the fluid
appeared to be frozen, like a glacier. In addition, she could not stop pouring at the right time
since she was unable to perceive the movement in the cup (or a pot) when the fluid rose.
Furthermore the patient complained of difficulties in following a dialogue because she could not
see the movements of the face and, especially, the mouth of the speaker. In a room where more
than two other people were walking she felt very insecure and unwell, and usually left the room
immediately, because 'people were suddenly here or there but I have not seen them moving'. The
patient experienced the same problem but to an even more marked extent in crowded streets or
places, which she therefore avoided as much as possible. She could not cross the street because
of her inability to judge the speed of a car, but she could identify the car itself without difficulty.
'When I'm looking at the car first, it seems far away. But then, when I want to cross the road,
suddenly the car is very near.' She gradually learned to 'estimate' the distance of moving vehicles
by means of the sound becoming louder.”132
So, the patient’s deficit was quite specific, involving primarily the visual perception of object
motion—she was able to see and visually recognize objects; she was able to judge movement by
sound cues; but she could not visually perceive object movement. According to available CT scans
of the patient’s brain, the investigators concluded that the patient’s deficits in motion perception
were likely caused by damage to her middle temporal gyrus—so, area MT.
Concluding Statement
In conclusion, let’s review some of the main themes we’ve discussed so far:
 First, there is segregation and parallel processing of different types of visual information, and
this parallel processing starts at the retina and is propagated throughout the visual system.
 Second, the main difference between processing streams seems to involve a distinction
between visual object recognition and the perception of object motion and visually guided
action.
 And third, there seem to be multiple modules in the brain that process different aspects of
the visual array, such as motion and colour. And, damage to these modules results in highly
specific visual deficits.
While researchers have learned a lot about the visual system, at this point we still don’t know how
the brain areas involved in vision ultimately give rise to the sorts of unified, holistic percepts that
characterize our day-to-day conscious visual experiences. Maybe you’ll be the one who cracks that
problem.
Week 9: Video 2
Sound Waves to Neural Activity
As with other sensory systems, audition—or the sense of hearing—involves a stimulus, a sensory
organ, and brain areas that process auditory information. Let’s start with the stimulus.
The Stimulus
The auditory stimulus consists of changes in pressure that propagate through a medium, which
for human audition is most often air. Sources of sound create changes in air pressure by moving
against the air particles that surround them. For example, the woofer on a speaker moves in and out,
and as it does so it intermittently pushes against air particles, thus creating systematic patterns of
high density and low density regions of air, which propagate outwards. The propagation occurs by
adjacent particles pushing against each other. This systematic variation in air particle density can be
described in terms of a wave. The high density regions (or areas of high air pressure) form the
peaks in the wave, while the low density regions (or areas of low pressure) form the valleys of the
wave.
We can plot the sound wave tracking the changes in air pressure over time within one unit of space.
In its simplest form, the sound wave is a sinusoidal wave, such as the one shown here.
Importantly, different properties of the sound wave correlate with different aspects of conscious
auditory perception. Specifically, the amplitude of the wave correlates with the subjective
experience of loudness; low amplitude waves are experienced as quiet sounds while high amplitude
waves are experienced as loud sounds. The frequency of the sound wave correlates with the
experience of pitch. Frequency refers to the number of times the wave completes a repeatable
pattern within a given period of time. Low frequency oscillations correspond to lower pitch sounds,
such as those produced by a cello; and higher frequency oscillations correspond to high pitch
sounds, such as those produced by a violin.
While simple sounds can be described by a simple sinusoidal wave, more complex sounds involve a
combination of sinusoidal waves that form a complex wave. In this example, we have combined a
low amplitude high frequency wave (shown by the broken purple line), with a high amplitude low
frequency wave (shown by the broken blue line), to form a complex wave shown by the orange line.
The complexity of the wave corresponds to our experience of sound timbre. As an example, a
note of the same pitch can be played by two different instruments, but these can have different
timbre because of differences in the strengths of the underlying pure tones contributing to the
complex sound of each instrument. Complex sounds can be decomposed into their constituent
pure sinusoidal tones by a process called Fourier analysis.
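To make this concrete, here is a minimal sketch of building a complex wave from two pure tones and then recovering them with a discrete Fourier transform. The specific frequencies (100 Hz and 700 Hz) and amplitudes are made-up illustrative values, not the ones from the slide.

```python
import numpy as np

fs = 10_000                      # sampling rate in Hz (an assumed value for this sketch)
t = np.arange(0, 1, 1 / fs)      # one second of time points

# A high-amplitude low-frequency wave plus a low-amplitude high-frequency wave,
# loosely mirroring the blue and purple waves described in the lecture.
low = 1.0 * np.sin(2 * np.pi * 100 * t)
high = 0.3 * np.sin(2 * np.pi * 700 * t)
complex_wave = low + high        # the combined ("orange") complex wave

# Fourier analysis decomposes the complex wave back into its constituent pure tones.
spectrum = np.abs(np.fft.rfft(complex_wave))
freqs = np.fft.rfftfreq(len(complex_wave), 1 / fs)
peaks = freqs[spectrum > 100]    # frequencies carrying substantial energy
print(peaks)                     # → [100. 700.]
```

The two component frequencies fall out of the spectrum exactly because each tone completes a whole number of cycles in the sampled window.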
The Sensory Organ
Sound waves traveling through the medium are detected by components of the ear. The ear includes
the outer ear, which consists of the pinna (the floppy part of the ear) and the auditory canal,
which terminates at the tympanic membrane (colloquially known as the ear drum). The tympanic
membrane is at the junction of the outer ear and the middle ear. The middle ear includes three
ossicles. These are the malleus (or hammer), which sits against the tympanic membrane; the
incus (or anvil), which is connected to the hammer; and the stapes (or stirrup), which is connected
to the anvil. The space surrounding the ossicles is continuous with the Eustachian tube. The
stirrup makes contact with the oval window, which forms the boundary between the middle ear and
the inner ear. Amongst other things, the inner ear includes the cochlea, which is curled into a snail-shell-like shape. From the cochlea emerge the axons that form part of the auditory nerve.
This is a closeup view of the cochlea taken from an early version of the now classic Gray’s Anatomy
textbook. You can see the oval window, to which the stirrup is normally connected; and also shown
is the round window, the function of which will become clearer in a moment.
Shown on the left side of this slide is a cross section of the cochlea. Notice that the cochlea includes
three fluid filled chambers that run the length of the cochlea. These are the vestibular canal, the
cochlear duct, and the tympanic canal. The vestibular canal and the tympanic canal contain a fluid
called perilymph, and the cochlear duct contains a fluid called endolymph. As we’ll see in a
moment, at the tip of cochlea, the vestibular canal is continuous with the tympanic canal. The tissue
separating the tympanic canal from the cochlear duct is called the basilar membrane. Sitting atop
the basilar membrane is the organ of Corti, which is shown on the right of this slide. This organ
contains hair cells, which have cilia that protrude into the endolymph. There are inner hair cells
as well as outer hair cells. Nerve fibers run along the basilar membrane, and the ends of these
nerve fibers form synapses with hair cells. Sitting above the cilia is the tectorial membrane.
Movement of the basilar membrane causes back and forth movement of the cilia against the
endolymph and the tectorial membrane, ultimately triggering action potentials in the auditory nerve.
This diagram gives an overview of the main steps involved in the conversion of sound waves into
neural signals. The cochlea is unrolled to better illustrate the continuity of the vestibular canal and
the tympanic canal. The process begins with sound waves entering the auditory canal and vibrating
the tympanic membrane. The tympanic membrane can also be vibrated by internal sounds coming
from the person’s vocal cords. The vibration of the tympanic membrane leads to the vibration of
the ossicles in the middle ear, which in turn vibrate the oval window. The movement of the oval
window produces waves in the perilymph of the vestibular canal, which are propagated down the
vestibular canal and then through the perilymph in the connected tympanic canal. The waves in the
perilymph cause the basilar membrane to move. This movement leads to back and forth movement
of the cilia on the hair cells, which leads to a triggering of action potentials in the auditory nerve.
The round window of the tympanic canal moves to accommodate the pressure changes caused by
the moving oval window. The axons of the auditory nerve send auditory signals to the central
nervous system.
How does movement of the cilia lead to action potentials in the auditory nerve? This figure shows
closeup photographs of cilia at different levels of magnification.133 The middle panel shows a group
of cilia emerging from a hair cell. Notice how close the adjacent cilia are to each other and that some
cilia are shorter than others. These turn out to be important design features for the processes of
auditory transduction.
Critically, the tips of cilia of varying heights are connected by protein strands called tip links. These
are identified by the arrows in this image of a group of cilia from a chick.134
Here is a simplified schematic of two cilia attached to a hair cell, which is adjacent to a spiral
ganglion cell.135,136,137,138 The spiral ganglion cell sends its axon to the central nervous system as part
of the auditory nerve. For the purpose of this schematic, the sizes of the cilia have been exaggerated.
In this instance, the cilia of the hair cell are in their upright positions, and they are joined by a tip
link. Notice that the tip link is connected to (or perhaps connected right next to) a cation channel
that allows potassium and calcium ions to flow into the cell. In prior cases we’ve seen potassium
flowing out of the cell, but in this case the endolymph has a high concentration of potassium, so
when the ion channel opens, potassium flows into the hair cell. When the cilia are in their upright
position as they are here, some of the cation channels are open, allowing some cations to flow into
the cell. This moderately depolarizes the cell, opening some voltage-gated ion channels that allow
calcium to flow into the cell, thus triggering the release of glutamate. The end result is a moderate
firing rate in auditory nerve cells. It’s important to keep in mind that hair cells do not have action
potentials, but only graded potentials, and so the amount of glutamate released from the hair cell
depends on the amount the hair cell is depolarized. In contrast, the spiral ganglion cells send action
potentials down their axons.
Now, when sound stimulation causes the basilar membrane to move, the cilia may be deflected by
the endolymph in a direction towards the longer cilium. When this occurs, the distance between
the tips of the cilia is increased, and this puts tension on the tip link. Placing tension on the tip link
mechanically opens ion channels, either by pulling the channels open directly, or by pulling on the
membrane beside the channels, which indirectly pulls the channels open. So, these ion channels are
mechanically-gated ion channels. Mechanically opening the ion channels causes large quantities
of cations to flow into the hair cells. This depolarizes the hair cells, thus opening more voltage-gated
ion channels, which in turn allows more calcium to flow into the cell to trigger the release of more
neurotransmitters. The end result is increased firing rate of the spiral ganglion cells.
Movement of the basilar membrane can also cause the cilia to be moved by the endolymph such that
they are deflected towards the shorter cilium. In this case, the distance between the tips of the cilia
is decreased and so the tension on the tip link is reduced. This causes more cation channels to
mechanically close, leading to hyperpolarization of the hair cells, a corresponding reduction in
neurotransmitter release, and thus decreased firing rate in the spiral ganglion cells.
Concluding Comments
Perhaps the most interesting fact about audition that we’ve learned so far is that auditory
transduction occurs through a mechanical opening of ion channels. We can now add mechanically-gated ion channels to our growing list of the various types of ion channels.
Week 9: Video 3
The Primary Auditory Pathway
Sound waves come into the ears and are transformed into electrical signals by the hair cells in the
cochleae of the inner ears. The electrical signals are then propagated through the auditory nerve to the
brainstem, and then to the cortex.
From the Cochlea to the Cortex
This figure shows a schematic depiction of the main components of the primary ascending auditory
pathway. As you can see, cochlear hair cells cause action potentials in the axons of spiral ganglion
cells, which form the auditory nerve that connects to the cochlear nucleus of the medulla. The
cochlear nucleus on each side of the midline receives input from the ipsilateral ear.
Here we’ll track the propagation of information coming into the left ear; and you can imagine the
opposite occurring for information coming into the right ear.
Some of the axons emerging from the cochlear nucleus cross the midline and connect to cells in the
contralateral superior olive, and the contralateral inferior colliculus. The axons that have
ipsilateral projections connect primarily to the superior olive. From the superior olive,
information is fed to the inferior colliculus of the midbrain. The inferior colliculus sends
projections to the medial geniculate nucleus of the thalamus, and the cells of the medial
geniculate nucleus then send axons to the primary auditory cortex in the superior temporal lobe.
The end result is that each cortical hemisphere receives information from both ears.
The primary auditory cortex (also known as A1—for auditory area 1) is located in the superior and
medial aspects of the temporal lobe, along Heschl’s gyrus. A1 is shown in this 3-D reconstruction
of the brain, and on the adjacent coronal and horizontal slices.
The various cells along this main auditory pathway code for important psychoacoustic aspects of
sound, such as pitch and the location of sound sources. Let’s take a closer look at how pitch and
sound localization are processed by the various components of the auditory system.
When you’re listening to the melody line of your favourite song, your auditory system is encoding
the pitch of the sound. Recall that the experience of pitch corresponds to the frequencies of the
sound waves entering the ears. Most people can detect changes in frequencies ranging from roughly
30 Hz to about 5,000 Hz, though we can perceive frequencies well beyond these bounds—from
roughly 20 Hz to about 20,000 Hz.139 Pitch is partly coded in the
cochlea, and there are several theories that have been proposed to explain how this occurs.
Place Theories hold that each location along the basilar membrane of the cochlea is maximally
responsive to a particular frequency, known as its characteristic frequency.140 This theoretical view
is depicted here. In this graphic the cochlea has been unrolled to better illustrate the systematic link
between sound frequency and the response of the basilar membrane. The part of the basilar
membrane closest to the oval window is the base, and the part furthest away is called the apex.
Notice that the region at the base of the cochlea is most responsive to high frequency sounds
(around 20,000 Hz) and that the optimal response frequencies progressively decrease towards the
apex. Thus, the cochlea shows a systematic tonotopic organization. The general idea is that a
particular frequency of sound creates a specific wave in the perilymph, which vibrates a specific part
of the basilar membrane, which in turn activates hair cells in the corresponding location of the
basilar membrane. So, when a particular group of hair cells is active, the brain “knows” [in air
quotes] that a particular frequency was present in the incoming sound.
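This position-to-frequency relationship has a standard empirical description known as the Greenwood function. The sketch below uses Greenwood’s published constants for the human cochlea; treat it as an illustration of the tonotopic gradient rather than numbers taken from the lecture slides.

```python
# Greenwood function for the human cochlea (A, a, k are Greenwood's published
# human constants; an assumption of this sketch, not figures from the lecture).
def characteristic_frequency(x):
    """x = fractional distance along the basilar membrane, apex (0.0) to base (1.0)."""
    A, a, k = 165.4, 2.1, 0.88
    return A * (10 ** (a * x) - k)

print(round(characteristic_frequency(0.0)))   # apex: roughly 20 Hz (low frequencies)
print(round(characteristic_frequency(1.0)))   # base: roughly 20,000 Hz (high frequencies)
```

Notice that the formula reproduces the tonotopic layout described above: the base of the cochlea responds to the highest audible frequencies, with characteristic frequency falling off smoothly towards the apex.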
How does a particular frequency trigger activity in a particular location of the basilar membrane?
While the precise mechanism is not fully known, there are at least a couple of contributing factors.
First, the basilar membrane progressively changes in its stiffness and its mass down its length, and
these characteristics determine which frequencies will optimally vibrate specific locations of the
basilar membrane. Second, the outer hair cells at different locations of the basilar membrane have
particular tuning characteristics that make them maximally responsive to specific frequency-related
movements of the basilar membrane.
Another class of theories—sometimes called Timing Theories—holds that pitch is coded by the
firing rate of spiral ganglion cells, whereby the action potentials of spiral ganglion cells are phase-locked to the frequency of the sound. So, for instance, every time a sound wave reaches its peak, the
spiral ganglion cells would fire. High frequency waves lead to fast firing rates and low frequency
waves lead to slow firing rates. Phase locking does have some frequency limitations, however. Based
on the available evidence—which admittedly is mostly from studies of small mammals—it seems
that phase locking is useful for distinguishing pitch in music up to about 4-5 kHz, which,
conveniently, is roughly the point in the frequency spectrum at which pitch perception starts to
degrade.141
So, which of these theories is correct? Well, it’s possible that both theories could be at least partly
correct and that they could each explain different aspects of pitch perception. In fact, various
combinations of place and timing theories have been proposed.
Tonotopy in Primary Auditory Cortex: Intriguingly, like the basilar membrane the primary
auditory cortex also contains a tonotopic map, which is shown on the inflated brain in this image.
The different letters and colours on the diagram identify regions that contain cells that are optimally
responsive to a particular frequency of sound. The frequencies to which the cells in each region
respond maximally are shown in Panel B, and the frequency response curves for the cells in a given
region are shown in Panel C.
Notice that the blue regions, labeled ‘a’ and ‘f’, show greatest activity to frequencies of about 3,000
Hz; the green regions are most active to frequencies around 2,000 Hz; the yellow regions (labeled b
and d) are maximally active to frequencies of about 1,000 Hz and that the orange regions respond
most vigorously to frequencies of about 400 Hz. Also notice that adjacent areas of the cortex
respond to similar frequencies and that the tonotopic map has a symmetrical organization, with the
center of symmetry being roughly at the region labeled ‘c’.
As you consider the tonotopic auditory map in A1, you might have been reminded of the
retinotopic visual map in V1.
Sound Localization
Locating sound sources is particularly challenging for the auditory system. This is because the
system has to translate information from the movement of the tympanic membrane (which boils
down to frequency and amplitude information) into a three-dimensional representation of space. To
achieve this, the auditory system uses a combination of cues present in the sound waves that enter
the ears.
Interaural Time Difference. One of these cues emerges when the auditory system combines the
input from the two ears. The spatial separation of the two ears is quite useful for sound localization.
Consider the case in which the sound source is slightly to the right of the individual. In this
situation, the sound source is closer to the right ear than to the left ear, and so the sound hits the
tympanic membrane of the right ear moments before it hits the tympanic membrane of the left ear.
This difference in time is referred to as the Interaural Time Difference (ITD).142 The ITD
systematically increases as a sound source moves from a position directly in front (or behind) a
person to a position directly beside the person, at which point the ITD is the largest. When the
sound source crosses the midline, the ear that receives the sound first changes. So, when the sound
source is on the left side of the individual, the sound reaches the left ear sooner than the right ear.
To detect the ITD for continuous sounds, the auditory system can compare the phases of the sound
waves coming into the ears to assess how delayed one wave is relative to the other. ITD is an
optimal localization cue for low frequency sounds (less than about 2,000 Hz) because timing
differences in the phases of sound waves can be most easily discerned at those frequencies.
Interaural Level Difference. To localize sound, the auditory system also makes use of a cue called
the interaural level difference (ILD).143 When the sound source is towards the side of the
individual, the sound hitting the closer ear is slightly louder than the sound hitting the more distant
ear. This is because the head absorbs and reflects some of the sound waves, which creates a sound
shadow on the side of the head opposite of the sound source. For example, when the sound is
coming from the right, it is louder in the right ear than the left because the left ear is in the sound
shadow. Conversely, when the sound is coming from the left, it’s louder in the left ear compared to
the right ear. ILD is a better cue for the localization of higher frequency sounds (those that are
greater than 2,000 Hz).
Both the interaural time difference and interaural level difference are binaural cues—they require a
comparison of information across the two ears. So, the fact that you have two ears, and that they
are on opposite sides of the head, is very useful for sound localization. The interaural time and level
differences are detected by cells in the superior olive. There are cells in the superior olive that
receive input from the cochlear nucleus associated with each ear, and these cells are specially tuned
to maximally respond to particular differences in sound timing and loudness between the two ears.
While they are very useful, interaural time and level differences have some notable limitations: they
are not very useful for precisely localizing sounds that come from regions close to the midline in
front or behind the individual, and they are limited in their ability to assess the vertical position of
the sound source. In these situations, the sounds impinging on the two ears arrive at roughly the
same time and are of roughly the same intensity.
Spectral Notches. Fortunately, to address these limitations, there are also monaural cues (which
require only one ear) that can be used to localize sound.144 For instance, when the sound hits the
outer ear (the pinna), the different folds of the outer ear reflect sound in a way that introduces
frequency signatures into the sound. These signatures, known as spectral notches, are unique to
sounds coming from specific vertical and front-back locations. Shown here are the spectral
notches created by sounds hitting the ear from three different vertical positions; the spectral notches
are identified by the arrows. Spectral notches are extracted in the cochlear nucleus of the medulla.
There are also areas in the cerebral cortex that are specifically responsive to sound location
information. In fact, like the visual system, the auditory system can be divided into “what” and
“where” pathways. As we’ve already noted, pitch (which we can think of as the auditory “what”) is
processed primarily in the auditory cortices of the temporal lobes. As shown here, pitch
information is also processed in the inferior frontal regions of the cortex. In contrast, sound location
(the auditory “where”) is processed primarily in the superior parietal and superior frontal regions.
Auditory Deficits
Damage to parts of the ear and the primary auditory pathway can result in a variety of auditory
deficits. One of the most common deficits involves damage to the cochlea which leads to cochlear
hearing loss. Cochlear hearing loss can be caused by impaired function of the hair cells. In some
cases the cilia can be damaged, and in other cases the hair cells can be entirely destroyed.145 Damage
to the hair cells can result in a loss of sensitivity to sound, a reduction in the dynamic range and
imprecise responses to specific frequencies.146 Hair cells can be damaged by exposure to loud
noise, which causes excessive mechanical action that destroys the cells.
Although temporary reductions in hearing sensitivity after listening to loud music are quite common,
it turns out that even in such cases there can be substantial loss of synapses between hair cells and
the spiral ganglion cells. Such loss can last for some time even after hearing has apparently
recovered—this has been referred to as hidden hearing loss.147 Hair cell damage can also be
caused by drugs (such as some antibiotics and anti-cancer drugs), viral infections, autoimmune
disorders and aging.148,149 Aging often results in hearing loss at particular frequencies,
typically in the higher range of the frequency spectrum—this condition is known as presbycusis.
While the loss of sensitivity can be improved by the use of hearing aids, the loss of dynamic range
and proper frequency tuning are harder to correct.
Cochlear hearing loss can be alleviated by the use of cochlear implants. These have an auditory
sensor to pick up sound, and electrodes that send electrical signals into the cochlea, tonotopically
stimulating different parts of the cochlea. An example of an implant is shown here.
Hearing loss can also occur because of damage to the auditory nerve and the primary auditory
pathway, including the auditory cortex. This class of hearing impairments is referred to as
retrocochlear hearing loss.150 Retrocochlear hearing loss can be caused by tumours or traumatic
brain injuries.
A curious auditory deficit that involves damage to the cochlea and changes in the auditory cortex is
tinnitus. Tinnitus is characterized by the experience of a sound—typically a buzzing or a high-pitched ringing—in the absence of an external stimulus that normally would trigger such an
experience. Tinnitus may be caused by damage to the hair cells in the cochlea, which leads to a loss
of input to the auditory cortex. As a result of this loss of input, the auditory cortex reorganizes,
much like it does in phantom limb syndrome.151 That is, the location of the auditory cortex normally
activated by the damaged hair cells is activated by other means, thus eliciting a phantom sound.
Consistent with this view are the results of a study shown here, in which tinnitus patients showed an
abnormal tonotopic map with the tinnitus frequency located off of the axis of the normal tonal
progression shown by healthy individuals.152
Finally, we should also mention that auditory deficits can also arise because of damage to the middle
ear, which restricts the conductance of vibrations to the inner ear. This is referred to as conductive
deafness. Conductive deafness can be caused by too much wax in your ear, a stiffening of the
ossicles, damage to the tympanic membrane or fluid build-up in the cavity of the middle ear.153
So, if you value your hearing, it’s time to clean out your ears, turn down your music, take off those
headphones, and stay away from infections—particularly COVID-19, which has already been linked
to hearing loss.154
Week 9: Video 4
Touch, Temperature and Pain
Think of the last time you held a paper cup of coffee or tea in your hand. What did it feel like? You
probably remember feeling the texture of the surface of the cup or the cardboard sleeve. The cup
likely felt warm in your hands and when you first received the full cup, it might have felt hot and you
might have registered a little pain. These sensations of touch, temperature and pain are all
components of the sensory system that involve receptors located in the skin. The receptors in the
skin serve an exteroceptive function155, which means that they are responsible for providing
information about the external world that impinges on the skin.
The exteroceptive function falls under the broader umbrella of somatosensation, which also
includes interoception (the sensation of information coming from the internal organs), and
proprioception (the sensation of information coming from muscles and tendons that conveys body
position information).156
We’ll begin by discussing the receptors involved in the sense of touch, which are shown in this
diagram. The diagram shows a section of hairless, or glabrous, skin, which is located on your hands
(palms and fingers) and the bottoms of your feet. The lips and parts of the genitalia also have
hairless skin. The outer layer of skin is called the epidermis, the middle layer is called the dermis,
and the inner, or deepest layer is called the subcutis,157 or the subcutaneous layer. Embedded in
these layers are four basic types of touch receptors that fall in the general category of low-threshold
mechanoreceptors (LTMRs). LTMRs are receptors that code for stimuli that are gentle enough
that they do not cause pain. The four LTMRs involved in sensing touch are Merkel discs, Ruffini
endings, Meissner corpuscles, and Pacinian corpuscles. The receptors contain the endings of
axons, known as Alpha-Beta Fibers, which are able to conduct action potentials very quickly, in
the range of 20 to 100 meters per second.
All of these receptors involve not only axon endings, but also various types of cells that intimately
interact with the axons, influencing responses of the axon endings. For example, here is a schematic
and a transmission electron micrograph of a Meissner corpuscle. Notice how the Meissner
corpuscle consists of a multilayered sandwich of axon endings and lamellar cells (which are actually
Schwann cells), all surrounded by collagen fibers. The collagen fibers connect the corpuscle to the
epidermal cells.
Each of the four types of LTMRs has distinct characteristics, which are shown in this table. The first
two columns just show the four types of receptors. The third column identifies the type of skin in
which the receptors are present. So far we have focused on just glabrous or hairless skin, which
contains all four of the receptors. Notice that all of the receptors shown, except for the Meissner
corpuscle, are also present in hairy skin.
The fourth column from the left shows the physiological subtypes of each of the receptors. Notice
that the receptors can be divided into slowly adapting and rapidly adapting (or fast adapting)
receptors, with Merkel discs and Ruffini endings being slowly adapting, and Meissner and Pacinian
corpuscles being fast adapting. Each of the slowly and rapidly adapting receptor groups can be
further subdivided into Type I and Type II endings, allowing for a unique identification of each type
of receptor. So, Merkel disks, for instance, are slowly adapting type I receptors.
What is meant by the terms “slowly” and “rapidly adapting” is depicted in the fifth column from the
left. In the graphics shown in that column, the horizontal lines depict the passage of time and the
horizontal rectangles depict the time and duration of stimulus presentation (or in other words, the
moment of touch). The vertical lines each depict a firing of the sensory neuron. As you can see, the
slowly adapting receptors lead to continuous firing of sensory neurons throughout the duration of
the stimulus presentation. This means they adapt slowly to the change in stimulation. In contrast,
the rapidly adapting receptors trigger neuronal firing mostly at the onset and offset of the stimulus,
which means that they adapt to the change in stimulation very quickly.
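The contrast between these two firing patterns can be sketched as a toy simulation; all time steps and spike rules here are invented for illustration, not measured data:

```python
# Toy sketch of slowly vs. rapidly adapting receptor responses.
# A stimulus is "on" from t = 20 to t = 80 (arbitrary time steps).

def spikes(adapting, t_on=20, t_off=80, t_max=100, edge=5):
    """Return the time steps at which a model receptor fires."""
    times = []
    for t in range(t_max):
        if not (t_on <= t < t_off):
            continue  # neither type fires without the stimulus
        if adapting == "slow":
            times.append(t)  # fires for the whole stimulus duration
        elif adapting == "rapid":
            # fires only within `edge` steps of stimulus onset or offset
            if t < t_on + edge or t >= t_off - edge:
                times.append(t)
    return times

slow = spikes("slow")
rapid = spikes("rapid")
print(len(slow), len(rapid))   # the slow receptor fires far more often
print(min(rapid), max(rapid))  # rapid spikes cluster at onset and offset
```

Running the sketch shows the slowly adapting model firing continuously through the stimulus, while the rapidly adapting model fires only near the onset and offset, mirroring the raster plots in the table's fifth column.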
The rate of adaptation determines in part what type of stimulation the nerve ending will be sensitive
to. The optimal stimulus sensitivities of each type of receptor are shown in the second to last
column of the table.158 The slowly adapting nature of the Merkel disks means that they are optimally
sensitive to continuous pressure and very low-frequency vibration. When you run your finger over a
textured surface, the texture manifests as a vibration on the skin. So, sensitivity to low-frequency
vibration means that these cells are able to detect very coarse textures. Ruffini endings, which are also
slowly adapting, are sensitive to continuous pressure and are responsible for detecting the stretch of
the skin. Interestingly, the ability to detect skin stretch contributes to proprioception, because the
stretch of the skin directly varies with angle of joints.159 For instance, when you bend your arm, the
skin on the back of your arm stretches in direct proportion to the arm bend. Meissner corpuscles
are optimally sensitive to changes in pressure on the skin that occur with a frequency of about 5 to
40 Hz, which means they are sensitive to low-frequency vibrations, allowing them to detect
medium-coarse textures. Finally, Pacinian corpuscles detect high-frequency vibration, responding to roughly 40 to
400 Hz fluctuations in pressure.
All of the receptors also contribute to sensing the deformation of the skin as the hand takes on
different configurations160, which is important for providing feedback while you grasp for an object
or open your hand to wave, or when you make a fist, for example. The pattern of activation of
these receptors, particularly the SAI Merkel disks161, also provides the system with information about the
shape of objects that make contact with the skin.
The final column in the table simply shows that each of these nerve endings is serviced by alpha-beta fibers, which are the axons of the sensory neurons in this case.
The four mechanoreceptors we have been discussing so far also vary in terms of their density in and
distribution over the skin as well as in terms of the nature of their receptive fields. Receptor
densities in different areas of the hand are shown in this image162. The schematics of the hand show
high receptor densities in dark red with a progression to very sparse receptor density shown in pale
pink. As you can see, the slowly adapting Merkel disks and the rapidly adapting Meissner
corpuscles are most dense in the finger tips, less densely distributed in the shafts of the fingers and
even less densely distributed in the palm. This means that the finger tips are more sensitive to low
frequency vibration than is the palm. Ruffini endings are more densely distributed in the palm than
in the finger tips, making the palm more sensitive than the finger tips to skin stretch. Pacinian
corpuscles are generally sparsely distributed, though more densely distributed in the finger tips
compared to the rest of the hand; so the finger tips are slightly more sensitive to high frequency
vibrations than the rest of the hand.
The characteristics of the receptive fields for each type of receptor are shown in the bottom row of
the figure.163 Type I receptors (Merkel disks and Meissner corpuscles) have sharp borders and
small receptive fields, with median values around 13 mm2 for Merkel cells and 11 mm2 for
Meissner cells164. In contrast, Type II receptors (Ruffini endings and Pacinian corpuscles) have
relatively large receptive fields, with median values of 59 mm2 for Ruffini endings and about 101
mm2 for Pacinian corpuscles.165 Putting these numbers into perspective, the receptive fields for these
receptors can easily correspond to a half or a quarter of the palm of your hand. The borders of
the receptive fields of the Type II receptors are more diffuse and graded relative to those of the
Type I receptors.
Moving beyond the hand, we can look at the sensitivity to touch across the whole body. Sensitivity
to touch varies with the density and distribution of various receptors in the skin. One technique
used to assess sensitivity is the simultaneous two-point discrimination task. In this task,
participants are presented with a single stimulation to the skin on some trials (for example 25% of
the trials) and two stimuli on the remainder of the trials (e.g., 75% of the trials).166 In these touch
discrimination tasks, the stimuli are very thin filaments, roughly half a millimeter in
diameter,167 pressed lightly against the skin. Critically, the distance between the two filaments is
varied across trials on which two stimuli are presented. The participants’ task on each trial is to
report whether they felt one or two points of contact (without the use of their vision, of course).
The main measure derived from participant responses is referred to as the discrimination
threshold,168 which is the distance between two stimuli at which people are just able to tell that they
are being touched by two stimuli rather than just one stimulus.
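As a rough illustration of how a discrimination threshold might be estimated from such a task, here is a minimal Python sketch; the response proportions and the 50% criterion are invented for the example, not taken from any study:

```python
# Hypothetical sketch of estimating a two-point discrimination threshold.
# `data` maps filament separation (mm) to the proportion of "two points"
# responses -- the numbers here are invented purely for illustration.
data = {5: 0.10, 10: 0.25, 20: 0.45, 30: 0.65, 40: 0.85}

def threshold(data, criterion=0.5):
    """Linearly interpolate the separation at which P("two") = criterion."""
    pts = sorted(data.items())
    for (d0, p0), (d1, p1) in zip(pts, pts[1:]):
        if p0 <= criterion <= p1:
            # linear interpolation between the two bracketing points
            return d0 + (criterion - p0) * (d1 - d0) / (p1 - p0)
    return None  # criterion not bracketed by the data

print(round(threshold(data), 2))  # threshold in mm for this toy data set
```

Real psychophysics would typically fit a psychometric function rather than interpolate, but the idea is the same: the threshold is the separation at which "two points" responses cross a chosen criterion.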
This figure shows the two-point discrimination thresholds corresponding to different body parts.
Notice that the fingertips and palms are the most sensitive to touch, and that the shoulders and
calves are the least sensitive.169 Strikingly, when touching the calves, people can just barely tell they
are being touched with two stimuli rather than one when the two stimuli are roughly 3.7 cm apart;
that’s a pretty low level of sensitivity.
In addition to touch receptors, hairy skin contains low-threshold mechanoreceptors that surround
the hair follicles. Receptors associated with hair have mostly been studied in rodents, and in mice
more specifically. This work has revealed that mice have three different types of hair170, which are
each connected to a slightly different assortment of hair follicle receptors. Shown here is one hair
type known as the zigzag hair; roughly 80% of a rodent’s hairs are zigzag hairs171. Zigzag hairs are
innervated by two different types of fibers172: Alpha-delta fibers, which are myelinated and conduct
action potentials at roughly 5 to 30 meters per second; and C-fibers, which are unmyelinated,
conducting the electrical signal at roughly 0.2 to 2 meters per second.173 Both of these fibers end
with longitudinal lanceolate endings, 174 which include axon fibers surrounded by Schwann cells175
that protrude parallel to the shaft of the hair.
The left side of this image shows a close-up view of lanceolate endings surrounding two hairs, with
the lower lanceolate endings magnified in the image on the right. Notice how the lanceolate endings
of the C-fibers (shown in red) and the alpha-delta fibers (shown in green) intermingle and that they
both comingle with Schwann cells (shown in light blue). Notice also how the nerve fibers encircle
the hair and how they include projections that protrude parallel to the shaft of the hair.
The main characteristics of the longitudinal lanceolate endings contained in hairy skin and associated
with alpha-delta and C-fibers are shown in this table. Notice that fast conducting alpha-delta
LTMRs respond primarily to the onset and offset of stimulation, while the slowly conducting C-LTMRs fire continuously during a moment of stimulation.176 The main function of these LTMRs is
to detect hair follicle deflection.177 In their review of touch receptors published in Science, one group
of authors make the point that hair allows “the sense of touch to extend beyond the skin surface.”178
As an example, you can notice a gentle breeze partly because of the subtle movement it induces in
the hair on your skin. Zimmerman and colleagues further note that “[a]t least one type of
unmyelinated, slowly-conducting LTMR has endings localized solely to hairy skin and is implicated
in pleasurable touch sensation in humans.”179 So, if you want to feel maximal pleasure from touch,
perhaps you ought to stop shaving?
High-Threshold Mechanoreceptors
Let’s now return to our schematic of a segment of skin. This version of the schematic is meant to
show that the skin (both hairy and glabrous) is also innervated by axons that are unmodified by other
cells; these are referred to as free nerve endings. Free nerve endings occur at the end of all three
types of axons we have discussed so far: Alpha-beta, Alpha-delta and C-fibers. Free nerve endings
fall in the class of high-threshold mechanoreceptors (HTMRs). In contrast to LTMRs, which we
discussed earlier and which respond to gentle touch, HTMRs respond to temperature and noxious stimuli.
For the sake of completeness, the characteristics of various free nerve endings are shown in this
table. Notice that the free nerve endings respond continuously during stimulation.180 The primary
difference between these subtypes is the speed of conduction of the nerve signal, which is slowest for
C-fibers, moderate for alpha-delta fibers, and fastest for alpha-beta fibers.
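These conduction speeds translate into very different arrival times. As a rough worked example, assuming a 1-meter path (roughly foot to spinal cord) and representative speeds drawn from the ranges quoted earlier, the delays differ by almost two orders of magnitude:

```python
# Rough arithmetic on conduction delays over an assumed 1 m path.
# Speeds are representative values picked from the ranges in the text:
# alpha-beta 20-100 m/s, alpha-delta 5-30 m/s, C-fibers 0.2-2 m/s.
distance_m = 1.0
speeds = {"alpha-beta": 60.0, "alpha-delta": 15.0, "C-fiber": 1.0}

for fiber, v in speeds.items():
    delay_ms = distance_m / v * 1000  # seconds converted to milliseconds
    print(f"{fiber}: {delay_ms:.0f} ms")
```

With these assumed values, an alpha-beta signal arrives in roughly 17 ms, an alpha-delta signal in about 67 ms, and a C-fiber signal only after a full second, which is one way to appreciate why sharp "first pain" precedes dull "second pain."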
The distribution of pain receptors throughout the body closely follows the distribution of touch
receptors.181 Here is the graph we encountered earlier, but now including the two-point
discrimination thresholds for pain corresponding to different body areas. Notice that the fingers and
palm have the lowest threshold, while the calf and the top of the foot have the highest
discrimination threshold.
Mechanical Transduction
While many aspects of mechanical transduction, that is, the translation of touch to neural signals,
remain a mystery, some candidate models have been proposed.182 One way that ion channels might
open in response to touch is through a simple stretch mechanism, whereby the channel is pulled
open as the membrane molecules stretch apart when they are contacted during a moment of touch.
We can refer to this mechanism, which is shown in the top set of images, as stretch activation.
Another possible mechanism, shown in the middle set of images, involves protein tethers that
attach to the ion channel and to either the extracellular matrix, or the cytoskeleton inside the cell,
or both. The idea is that touch creates mechanical movement of the extracellular matrix or
cytoskeleton relative to the ion channel, and this pulls on the tether, thus opening the ion channel.
We can refer to this as activation by tether. Finally, the bottom pair images show a third
possibility, which is that tethers activate a protein that is separate from the ion channel, which then
opens the ion channel via a messenger of some sort. This is referred to as indirect activation.
While there is some evidence supporting the role of these mechanisms in touch perception, more
research is needed to fully confirm their roles and to work out the details.
Let’s talk a little about how the body senses temperature.
Your ability to sense temperature depends in part on a series of temperature sensitive ion channels
that are present on Alpha-delta and C-fibers. These temperature sensitive ion channels fall in the
general family of channels called Transient Receptor Potential (or TRP) channels.183 Generally,
when they are open, these channels allow positive ions to flow into the cell, thus depolarizing the
cell and bringing it closer to the threshold of activation. One example of a TRP channel sensitive to
heat is the TRP Vanilloid 1 (TRPV1) channel shown in this image.184 When this channel is
exposed to higher temperatures, it physically changes shape and opens, allowing positive ions to
flow into the cell. Interestingly, researchers have also discovered that TRPV1 has a receptor site
for the chemical capsaicin,185 which is a chemical found in spicy peppers. Eating spicy peppers
makes it feel like your mouth is burning because capsaicin opens TRPV1 ion channels in
temperature sensitive fibers and the brain interprets the activity coming from those fibers in terms
of temperature, in this case, heat.
In an article published in Nature, one group of authors explains why eating a lot of spicy chilli
peppers over time makes you less sensitive to the spice: The “ . . . phenomenon of nociceptor
desensitization underlies the seemingly paradoxical use of capsaicin as an analgesic agent in the
treatment of painful disorders ranging from viral and diabetic neuropathies to rheumatoid arthritis.
Some of this decreased sensitivity to noxious stimuli may result from reversible changes in the
nociceptor, but the long-term loss of responsiveness can be explained by death of the nociceptor or
destruction of its peripheral terminals following exposure to capsaicin.”186 So, eating too much
capsaicin can damage the nerve endings of your pain carrying axons.
We can add temperature activation to our growing list of ways that ion channels can be controlled.
Recall that we’ve already encountered voltage-activated, ligand-activated, light-activated, and
mechanically-activated channels. TRPV1 channels are also an excellent example of how some
channels are polymodal,187 in that they can be controlled by multiple activation mechanisms, in this
case, being activated by heat and by a ligand (or chemical).
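The polymodal behavior of TRPV1 can be summarized in a tiny sketch: the channel opens if either activation condition is met. The ~43 °C heat threshold used here is an approximate, commonly cited value and should be treated as illustrative:

```python
# Minimal sketch of polymodal gating: the channel opens if EITHER the
# temperature exceeds its heat threshold OR its ligand is bound.
# The ~43 C threshold is approximate and used here only for illustration.
def trpv1_open(temp_c, capsaicin_bound):
    HEAT_THRESHOLD_C = 43.0
    return temp_c >= HEAT_THRESHOLD_C or capsaicin_bound

print(trpv1_open(37.0, False))  # False: body temperature, no ligand
print(trpv1_open(45.0, False))  # True: noxious heat alone opens it
print(trpv1_open(37.0, True))   # True: capsaicin alone opens it
```

The third case is exactly why chili peppers feel hot: the ligand route opens the same channel the heat route does, so downstream neurons cannot tell the difference.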
There are other ion channels in the TRP family that are more responsive to colder temperatures.
For example, the TRP Member 8 (or TRPM8) channel opens and allows ions to flow into the
axon terminal when it is exposed to colder temperatures, below normal skin temperature.188
Strikingly, TRPM8 also has a receptor site for the chemical menthol, which has led one team to call
the ion channel the Cold- and Menthol-Sensitive Receptor 1 (or CMR1).189 Again, this is another
example of a polymodal ion channel, which is both temperature-activated and ligand-activated.
This figure shows the temperature specificity of various temperature sensitive channels.190 We have
already discussed the TRPM8 and the TRPV1 channels. Added to this image is another heat
sensitive ion channel called TRPV2. As you can see, each channel is maximally responsive to a
different temperature. In fact, there is a whole group of related channels that cover more of the
temperature range, which are not shown here. Collectively, these channels allow you to discriminate
between various temperatures to which your skin is exposed.
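One way to picture how a set of channels with different temperature preferences could support discrimination is a toy population-coding sketch; the preferred temperatures and Gaussian tuning curves used here are invented for illustration and are not the channels' measured response profiles:

```python
import math

# Toy population-coding sketch: each channel responds maximally near an
# assumed "preferred" temperature, and a stimulus temperature is read
# out by comparing activity across the population of channels.
channels = {"TRPM8": 25.0, "TRPV1": 45.0, "TRPV2": 55.0}  # invented values

def activation(preferred_c, temp_c, width=8.0):
    """Gaussian tuning curve around the channel's preferred temperature."""
    return math.exp(-((temp_c - preferred_c) ** 2) / (2 * width ** 2))

def most_active(temp_c):
    """Return the channel most strongly driven by this temperature."""
    return max(channels, key=lambda ch: activation(channels[ch], temp_c))

print(most_active(20.0))  # TRPM8 dominates for cool stimuli
print(most_active(46.0))  # TRPV1 dominates for painful heat
```

The point of the sketch is the comparison step: no single channel needs to encode the whole range, because the relative activity across channels carries the temperature information.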
Somatosensory Pathways
Sensory information related to touch, temperature, and pain is primarily conveyed to the spinal cord
through the dorsal root associated with each spinal nerve. The alpha-beta fibers from the
mechanoreceptors responsible for touch enter the spinal cord and travel up the ipsilateral side of the
spinal cord. And the fibers carrying pain and temperature information enter the spinal cord and
cross to the contralateral side of the spinal cord.
Each spinal nerve carries sensory information from a circumscribed area of the body, referred to as
a dermatome. The dermatomes associated with the spinal nerves connected to the major divisions
of the spinal cord are shown here. In the graphic on the right, each band on the body delineated by
gray lines depicts a single dermatome.
Shown here are the main pathways through which somatosensory information is conveyed to the
cortex. Touch information is conveyed through the dorsal-column medial-lemniscus pathway.
This pathway derives its name from the fact that touch information is propagated through the
dorsal column of the spinal cord ipsilateral to the point of entry, and then is shuttled over to the
contralateral side at the level of the medulla, at which point it travels through the medial
lemniscus. The fibers carrying pain and temperature information cross to the contralateral side of
the spinal cord at the level of the spinal cord where the spinal nerve is connected, and then ascend
through the spinothalamic tract.
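The crossing points described above can be captured in a small lookup, which is simply a restatement of the text rather than an anatomical model:

```python
# Where each pathway crosses the midline, per the description above:
# touch (dorsal-column medial-lemniscus) crosses at the medulla, while
# pain and temperature (spinothalamic tract) cross at the spinal level
# where the spinal nerve enters.
CROSSING = {
    "touch": "medulla",
    "pain": "spinal cord (level of entry)",
    "temperature": "spinal cord (level of entry)",
}

def crossing_level(modality):
    return CROSSING[modality]

print(crossing_level("touch"))
print(crossing_level("pain"))
```

Either way, all three modalities end up represented in the hemisphere contralateral to the stimulated side of the body; the pathways differ only in where the crossing happens.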
Somatosensory information then travels to the thalamus, where it connects primarily (though not
exclusively) to the ventral posterior nucleus (VPN). From there, the main projection is to the
primary somatosensory cortex in the parietal lobe.
As we’ve already discussed, the activation of the somatosensory cortex is associated with the
conscious experience of touch, temperature and pain. However, it’s important to note that other
brain areas make important contributions to the quality of the experiences we have.
Wang, C., Liu, F., Liu, Y. Y., Zhao, C. H., You, Y., Wang, L., ... & Zhang, Y. (2011). Identification
and characterization of neuroblasts in the subventricular zone and rostral migratory stream of the
adult human brain. Cell research, 21(11), 1534.
Braun, S. M. G., & Jessberger, S. (2014). Adult neurogenesis and its role in neuropsychiatric disease,
brain repair and normal brain function. Neuropathology and applied neurobiology, 40(1), 3-12.
Ernst, A., Alkass, K., Bernard, S., Salehpour, M., Perl, S., Tisdale, J., ... & Frisén, J. (2014).
Neurogenesis in the striatum of the adult human brain. Cell, 156(5), 1072-1083.
Wang, C., Liu, F., Liu, Y. Y., Zhao, C. H., You, Y., Wang, L., ... & Zhang, Y. (2011). Identification
and characterization of neuroblasts in the subventricular zone and rostral migratory stream of the
adult human brain. Cell research, 21(11), 1534.
Ernst et al. (2014).
Braun, S. M. G., & Jessberger, S. (2014). Adult neurogenesis and its role in neuropsychiatric disease,
brain repair and normal brain function. Neuropathology and applied neurobiology, 40(1), 3-12.
Kriegstein, A., & Alvarez-Buylla, A. (2009). The glial nature of embryonic and adult neural stem
cells. Annual review of neuroscience, 32, 149-184.
Spalding, K. L., Bergmann, O., Alkass, K., Bernard, S., Salehpour, M., Huttner, H. B., ... & Possnert,
G. (2013). Dynamics of hippocampal neurogenesis in adult humans. Cell, 153(6), 1219-1227. p. 1222
Alvarez-Buylla, A., & García-Verdugo, J. M. (2002). Neurogenesis in adult subventricular zone.
Journal of Neuroscience, 22(3), 629-634. Part of Figure 3, p. 632
Wang, C., Liu, F., Liu, Y. Y., Zhao, C. H., You, Y., Wang, L., ... & Zhang, Y. (2011). Identification
and characterization of neuroblasts in the subventricular zone and rostral migratory stream of the
adult human brain. Cell research, 21(11), 1534. p. 1535.
Wang, C., Liu, F., Liu, Y. Y., Zhao, C. H., You, Y., Wang, L., ... & Zhang, Y. (2011). Identification
and characterization of neuroblasts in the subventricular zone and rostral migratory stream of the
adult human brain. Cell research, 21(11), 1534. p. 1544.
Ernst, A., Alkass, K., Bernard, S., Salehpour, M., Perl, S., Tisdale, J., ... & Frisén, J. (2014).
Neurogenesis in the striatum of the adult human brain. Cell, 156(5), 1072-1083. p. 1078.
Van Praag, H., Shubert, T., Zhao, C., & Gage, F. H. (2005). Exercise enhances learning and
hippocampal neurogenesis in aged mice. Journal of Neuroscience, 25(38), 8680-8685.
Pereira, A. C., Huddleston, D. E., Brickman, A. M., Sosunov, A. A., Hen, R., McKhann, G. M., ...
& Small, S. A. (2007). An in vivo correlate of exercise-induced neurogenesis in the adult dentate
gyrus. Proceedings of the National Academy of Sciences, 104(13), 5638-5643.
Cotman, C. W., & Berchtold, N. C. (2002). Exercise: a behavioral intervention to enhance brain
health and plasticity. Trends in neurosciences, 25(6), 295-301. From the caption of Fig 1, p. 296.
Cotman, C. W., & Berchtold, N. C. (2002). Exercise: a behavioral intervention to enhance brain
health and plasticity. Trends in neurosciences, 25(6), 295-301. Fig 2, p. 297.
Colcombe, S., & Kramer, A. F. (2003). Fitness effects on the cognitive function of older adults: a
meta-analytic study. Psychological science, 14(2), 125-130.
Lee, J., Seroogy, K. B., & Mattson, M. P. (2002). Dietary restriction enhances neurotrophin
expression and neurogenesis in the hippocampus of adult mice. Journal of neurochemistry, 80(3),
Goodrick, C. L., Ingram, D. K., Reynolds, M. A., Freeman, J. R., & Cider, N. L. (1983).
Differential effects of intermittent feeding and voluntary exercise on body weight and lifespan in
adult rats. Journal of gerontology, 38(1), 36-45.
Witte, A. V., Fobker, M., Gellner, R., Knecht, S., & Flöel, A. (2009). Caloric restriction improves
memory in elderly humans. Proceedings of the National Academy of Sciences, 106(4), 1255-1260.
Mattson, M. P., Allison, D. B., Fontana, L., Harvie, M., Longo, V. D., Malaisse, W. J., ... & Seyfried,
T. N. (2014). Meal frequency and timing in health and disease. Proceedings of the National Academy
of Sciences, 111(47), 16647-16653. p. 16649
Mattson, M. P., Allison, D. B., Fontana, L., Harvie, M., Longo, V. D., Malaisse, W. J., ... &
Seyfried, T. N. (2014). Meal frequency and timing in health and disease. Proceedings of the National
Academy of Sciences, 111(47), 16647-16653.
Chaix, A., Zarrinpar, A., Miu, P., & Panda, S. (2014). Time-restricted feeding is a preventative and
therapeutic intervention against diverse nutritional challenges. Cell metabolism, 20(6), 991-1005.
Anson, R. M., Guo, Z., de Cabo, R., Iyun, T., Rios, M., Hagepanos, A., ... & Mattson, M. P.
(2003). Intermittent fasting dissociates beneficial effects of dietary restriction on glucose metabolism
and neuronal resistance to injury from calorie intake. Proceedings of the National Academy of Sciences,
100(10), 6216-6220.
Rothman, S. M., & Mattson, M. P. (2013). Activity-dependent, stress-responsive BDNF signaling
and the quest for optimal brain health and resilience throughout the lifespan. Neuroscience, 239, 228-240. p. 233.
Molteni, R., Barnard, R. J., Ying, Z., Roberts, C. K., & Gomez-Pinilla, F. (2002). A high-fat,
refined sugar diet reduces hippocampal brain-derived neurotrophic factor, neuronal plasticity, and
learning. Neuroscience, 112(4), 803-814.
Nudo, R. J., Milliken, G. W., Jenkins, W. M., & Merzenich, M. M. (1996). Use-dependent
alterations of movement representations in primary motor cortex of adult squirrel monkeys. Journal
of Neuroscience, 16(2), 785-807.
Draganski, B., Gaser, C., Busch, V., Schuierer, G., Bogdahn, U., & May, A. (2004).
Neuroplasticity: changes in grey matter induced by training. Nature, 427(6972), 311.
Elbert, T., Pantev, C., Wienbruch, C., Rockstroh, B., & Taub, E. (1995). Increased cortical
representation of the fingers of the left hand in string players. Science, 270(5234), 305-307.
Xi, G., Keep, R. F., & Hoff, J. T. (2006). Mechanisms of brain injury after intracerebral
haemorrhage. The Lancet Neurology, 5(1), 53-63.
An, S. J., Kim, T. J., & Yoon, B. W. (2017). Epidemiology, risk factors, and clinical features of
intracerebral hemorrhage: an update. Journal of stroke, 19(1), 3.
Xi et al., (2006)
An, S. J., Kim, T. J., & Yoon, B. W. (2017). Epidemiology, risk factors, and clinical features of
intracerebral hemorrhage: an update. Journal of stroke, 19(1), 3.
Yi, J. H., & Hazell, A. S. (2006). Excitotoxic mechanisms and the role of astrocytic glutamate
transporters in traumatic brain injury. Neurochemistry international, 48(5), 394-403. P. 399
Salińska, E., Danysz, W., & Łazarewicz, J. W. (2005). The role of excitotoxicity in
neurodegeneration. Folia neuropathologica, 43(4), 322-339. p. 325, par 2.
Salinska et al., (2005). p. 325, par 2.
Koroshetz, W. J., & Moskowitz, M.A. (1996). Emerging treatments for stroke in humans. P.
230, Box 1.
Salińska, E., Danysz, W., & Łazarewicz, J. W. (2005). The role of excitotoxicity in
neurodegeneration. Folia neuropathologica, 43(4), 322-339. p. 329.
Koroshetz, W. J., & Moskowitz, M.A. (1996). Emerging treatments for stroke in humans. P. 230,
Box 1.
Salińska, E., Danysz, W., & Łazarewicz, J. W. (2005). The role of excitotoxicity in
neurodegeneration. Folia neuropathologica, 43(4), 322-339. p. 328, par 3-5.
Salińska, E., Danysz, W., & Łazarewicz, J. W. (2005). The role of excitotoxicity in
neurodegeneration. Folia neuropathologica, 43(4), 322-339. p. 328, par 3.
Salinska et al., (2005). p. 325, par 3.
Unterberg, A. W., Stover, J., Kress, B., & Kiening, K. L. (2004). Edema and brain trauma.
Neuroscience, 129(4), 1019-1027. p.1022.
Unterberg, A. W., Stover, J., Kress, B., & Kiening, K. L. (2004). Edema and brain trauma.
Neuroscience, 129(4), 1019-1027. p.1022.
Unterberg et al. (2004). p. 1022, bottom right of page.
Unterberg et al. (2004). P. 1022, bottom right of page.
Salińska, E., Danysz, W., & Łazarewicz, J. W. (2005). The role of excitotoxicity in
neurodegeneration. Folia neuropathologica, 43(4), 322-339. p. 329, par 4.
For a review see: Xi, G., Keep, R. F., & Hoff, J. T. (2006). Mechanisms of brain injury after
intracerebral haemorrhage. The Lancet Neurology, 5(1), 53-63
Yi, J. H., & Hazell, A. S. (2006). Excitotoxic mechanisms and the role of astrocytic glutamate
transporters in traumatic brain injury. Neurochemistry international, 48(5), 394-403. p. 399, par 2. But
also see Xi et al., (2006) who argue that post hemorrhagic ischemia is not common in humans.
Unterberg et al. (2004). P. 1023, bottom right of page.
Xi et al., (2006), p. 56, par 2.
Maas, A. I., Stocchetti, N., & Bullock, R. (2008). Moderate and severe traumatic brain injury in
adults. The Lancet Neurology, 7(8), 728-741. p. 729.
El Sayed, T., Mota, A., Fraternali, F.,&Ortiz, M. (2008). Biomechanics of traumatic brain injury.
Computer Methods in Applied Mechanics and Engineering, 197 (51-52), 4692-4701.
Unterberg et al. (2004). P. 1021, bottom right of page.
Unterberg et al. (2004). P. 1021, bottom right of page.
Jordan, B. D. (2013). The clinical spectrum of sport-related traumatic brain injury. Nature Reviews Neurology, 9(4),
222–230. P. 2
Jordan, B. D. (2013). The clinical spectrum of sport-related traumatic brain injury. Nature Reviews Neurology, 9(4),
222–230. P. 2
Jordan (2013)
Zhang, Y., Ma, Y., Chen, S., Liu, X., Kang, H. J., Nelson, S., & Bell, S. (2019). Long-Term Cognitive
Performance of Retired Athletes with Sport-Related Concussion: A Systematic Review and Meta-Analysis. Brain Sciences, 9(8), 199.
Murphy, T. H., & Corbett, D. (2009). Plasticity during stroke recovery: from synapse to behaviour.
Nature Reviews Neuroscience, 10(12), 861 - 872. See Fig 3 caption, p. 867.
Wieloch, T., & Nikolich, K. (2006). Mechanisms of neural plasticity following brain injury. Current
opinion in neurobiology, 16(3), 258-264.
Murphy, T. H., & Corbett, D. (2009). Plasticity during stroke recovery: from synapse to
behaviour. Nature Reviews Neuroscience, 10(12), 861 - 872. See Fig 3 caption, p. 867.
Wieloch, T., & Nikolich, K. (2006). Mechanisms of neural plasticity following brain injury. Current
opinion in neurobiology, 16(3), 258-264.
Benowitz, L. I., & Carmichael, S. T. (2010). Promoting axonal rewiring to improve outcome after
stroke. Neurobiology of disease, 37(2), 259-266. p.3
Wieloch, T., & Nikolich, K. (2006). Mechanisms of neural plasticity following brain injury. Current
opinion in neurobiology, 16(3), 258-264. p. 2.
Wieloch, T., & Nikolich, K. (2006). Mechanisms of neural plasticity following brain injury. Current
opinion in neurobiology, 16(3), 258-264. p. 3
Benowitz, L. I., & Carmichael, S. T. (2010). Promoting axonal rewiring to improve outcome after
stroke. Neurobiology of disease, 37(2), 259-266. p. 3.
Wieloch, T., & Nikolich, K. (2006). p. 3.
Murphy, T. H., & Corbett, D. (2009). Plasticity during stroke recovery: from synapse to behaviour.
Nature Reviews Neuroscience, 10(12), 861 - 872. See Fig 3 caption, p. 867.
Wang, L., Yu, C., Chen, H., Qin, W., He, Y., Fan, F., ... & Woodward, T. S. (2010). Dynamic
functional reorganization of the motor execution network after stroke. Brain, 133(4), 1224-1238.
Carrera, E., & Tononi, G. (2014). Diaschisis: past, present, future. Brain, 137(9), 2408-2422.
Carrera, E., & Tononi, G. (2014). Diaschisis: past, present, future. Brain, 137(9), 2408-2422.
Flor, H., Nikolajsen, L., & Jensen, T. S. (2006). Phantom limb pain: a case of maladaptive CNS
plasticity?. Nature Reviews Neuroscience, 7(11), 873-881. p. 876.
Flor, H., Nikolajsen, L., & Jensen, T. S. (2006). Phantom limb pain: a case of maladaptive CNS
plasticity?. Nature Reviews Neuroscience, 7(11), 873-881. p. 874.
Lotze, M., Flor, H., Grodd, W., Larbig, W., & Birbaumer, N. (2001). Phantom movements and
pain: An fMRI study in upper limb amputees. Brain, 124(11), 2268-2277. Figure 1, p. 2270.
Wieloch, T., & Nikolich, K. (2006). p. 3.
Yamashita, T., Ninomiya, M., Acosta, P. H., García-Verdugo, J. M., Sunabori, T., Sakaguchi, M.,
... & Araki, N. (2006). Subventricular zone-derived neuroblasts migrate and differentiate into mature
neurons in the post-stroke adult striatum. Journal of Neuroscience, 26(24), 6627-6636.
Yoshimura, S., Takagi, Y., Harada, J., Teramoto, T., Thomas, S. S., Waeber, C., Bakowska,
J. C., Breakefield, X. O., & Moskowitz, M. A. (2001). FGF-2 regulation of neurogenesis in adult
hippocampus after brain injury. Proceedings of the National Academy of Sciences of the United
States of America, 98(10), 5874–5879. https://doi.org/10.1073/pnas.101034998
Wieloch, T., & Nikolich, K. (2006). p. 3.
Jin, K., Wang, X., Xie, L., Mao, X. O., Zhu, W., Wang, Y., ... & Greenberg, D. A. (2006). Evidence
for stroke-induced neurogenesis in the human brain. Proceedings of the National Academy of
Sciences, 103(35), 13198-13202. p. 13201.
Zheng, W., ZhuGe, Q., Zhong, M., Chen, G., Shao, B., Wang, H., ... & Jin, K. (2013).
Neurogenesis in adult human brain after traumatic brain injury. Journal of neurotrauma, 30(22), 1872-1880. p. 1877
Wieloch, T., & Nikolich, K. (2006). p. 3.
Chen, H., Epstein, J., & Stern, E. (2010). Neural plasticity after acquired brain injury: evidence
from functional neuroimaging. PM&R, 2(12), S306-S312.
Biernaskie, J., & Corbett, D. (2001). Enriched rehabilitative training promotes improved forelimb
motor function and enhanced dendritic growth after focal ischemic injury. Journal of
Neuroscience, 21(14), 5272-5280.
Hayashi, J., Takagi, Y., Fukuda, H., Imazato, T., Nishimura, M., Fujimoto, M., ... & Nozaki, K.
(2006). Primate embryonic stem cell-derived neuronal progenitors transplanted into ischemic brain.
Journal of Cerebral Blood Flow & Metabolism, 26(7), 906-914.
Li, Y. I., Chen, J., Zhang, C. L., Wang, L., Lu, D., Katakowski, M., ... & Chopp, M. (2005).
Gliosis and brain remodeling after treatment of stroke in rats with marrow stromal cells. Glia, 49(3),
Bao, X., Wei, J., Feng, M., Lu, S., Li, G., Dou, W., ... & Zhao, R. C. (2011). Transplantation of
human bone marrow-derived mesenchymal stem cells promotes behavioral recovery and
endogenous neurogenesis after cerebral ischemia in rats. Brain research, 1367, 103-113.
Bang, O. Y., Lee, J. S., Lee, P. H., & Lee, G. (2005). Autologous mesenchymal stem cell
transplantation in stroke patients. Annals of neurology, 57(6), 874-882.
Lee, J. S., Hong, J. M., Moon, G. J., Lee, P. H., Ahn, Y. H., & Bang, O. Y. (2010). A long‐term
follow‐up study of intravenous autologous mesenchymal stem cell transplantation in patients with
ischemic stroke. Stem cells, 28(6), 1099-1106.
Wieloch, T., & Nikolich, K. (2006).
Boggio, P. S., Nunes, A., Rigonatti, S. P., Nitsche, M. A., Pascual-Leone, A., & Fregni, F. (2007).
Repeated sessions of noninvasive brain DC stimulation is associated with motor function
improvement in stroke patients. Restorative neurology and neuroscience, 25(2), 123-129.
Fregni, F., Boggio, P. S., Mansur, C. G., Wagner, T., Ferreira, M. J., Lima, M. C., ... & Pascual-Leone, A. (2005). Transcranial direct current stimulation of the unaffected hemisphere in stroke
patients. Neuroreport, 16(14), 1551-1555.
Takeuchi, N., Chuma, T., Matsuo, Y., Watanabe, I., & Ikoma, K. (2005). Repetitive transcranial
magnetic stimulation of contralesional primary motor cortex improves hand function after
stroke. Stroke, 36(12), 2681-2686.
Fregni, F., Boggio, P. S., Mansur, C. G., Wagner, T., Ferreira, M. J., Lima, M. C., ... & Pascual-Leone, A. (2005). Transcranial direct current stimulation of the unaffected hemisphere in stroke
patients. Neuroreport, 16(14), 1551-1555.
Purves and Lotto (2003). Why we see what we do: An empirical theory of vision. p. 24-25.
Schnapf, J. L., Kraft, T. W., & Baylor, D. A. (1987). Spectral sensitivity of human cone
photoreceptors. Nature, 325(6103), 439-441.
Purves and Lotto (2003) Why we see what we do: An empirical theory of vision.
Labin, A. M., & Ribak, E. N. (2010). Retinal glial cells enhance human vision acuity. Physical review
letters, 104(15), 158102.
Franze, K., Grosche, J., Skatchkov, S. N., Schinkinger, S., Foja, C., Schild, D., ... & Guck, J.
(2007). Müller cells are living optical fibers in the vertebrate retina. Proceedings of the National Academy
of Sciences, 104(20), 8287-8292.
Franze, K., Grosche, J., Skatchkov, S. N., Schinkinger, S., Foja, C., Schild, D., ... & Guck, J.
(2007). Müller cells are living optical fibers in the vertebrate retina. Proceedings of the National Academy
of Sciences, 104(20), 8287-8292.
Calvert, P. D., Strissel, K. J., Schiesser, W. E., Pugh Jr, E. N., & Arshavsky, V. Y. (2006). Light-driven translocation of signaling proteins in vertebrate photoreceptors. Trends in cell biology, 16(11),
560-568. Fig 1, p. 561. Koutalos, Y., & Yau, K. W. (1996). Regulation of sensitivity in vertebrate rod
photoreceptors by calcium. Trends in neurosciences, 19(2), 73-81.
Hagins, W. A., Penn, R. D., & Yoshikami, S. (1970). Dark current and photocurrent in retinal
rods. Biophysical journal, 10(5), 380–412. https://doi.org/10.1016/S0006-3495(70)86308-1
Hagins, W. A., Penn, R. D., & Yoshikami, S. (1970). Dark current and photocurrent in retinal
rods. Biophysical journal, 10(5), 380–412. https://doi.org/10.1016/S0006-3495(70)86308-1
Hagins, W. A., Penn, R. D., & Yoshikami, S. (1970). Dark current and photocurrent in retinal
rods. Biophysical journal, 10(5), 380–412. https://doi.org/10.1016/S0006-3495(70)86308-1
Euler, T., Haverkamp, S., Schubert, T., & Baden, T. (2014). Retinal bipolar cells: elementary
building blocks of vision. Nature Reviews Neuroscience, 15(8), 507–519.
Masland, R. H. (2012). The Neuronal Organization of the Retina. Neuron, 76(2), 266–280.
Puller, C., Ivanova, E., Euler, T., Haverkamp, S., & Schubert, T. (2013). OFF bipolar cells express
distinct types of dendritic glutamate receptors in the mouse retina. Neuroscience, 243, 136–148.
Field, G. D., Sher, A., Gauthier, J. L., Greschner, M., Shlens, J., Litke, A. M., & Chichilnisky,
E. J. (2007). Spatial properties and functional organization of small bistratified ganglion cells in
primate retina. Journal of Neuroscience, 27(48), 13261-13272.
Nassi, J. J., & Callaway, E. M. (2009). Parallel processing strategies of the primate visual
system. Nature Reviews Neuroscience, 10(5), 360–372.
Hubel & Wiesel (1979). Brain Mechanisms of Vision. Scientific American, 241, 150-163. p. 159.
Tootell, R. B., Silverman, M. S., Switkes, E., & De Valois, R. L. (1982). Deoxyglucose
analysis of retinotopic organization in primate striate cortex. Science, 218(4575), 902-904.
Hubel & Wiesel (1962, p. 111)
Hubel & Wiesel (1968, p. 222)
Hubel & Wiesel (1979). Brain Mechanisms of Vision. Scientific American, 241, 150-163. p. 159.
Livingstone, M. S., & Hubel, D. H. (1984). Anatomy and physiology of a color system in the
primate visual cortex. Journal of Neuroscience, 4(1), 309-356.
Murphy, K. M., Jones, D. G., & Van Sluyters, R. C. (1995). Cytochrome-oxidase blobs in cat
primary visual cortex. Journal of Neuroscience, 15(6), 4196-4208.
Kosslyn, S. M., Thompson, W. L., Kim, I. J., & Alpert, N. M. (1995). Topographical
representations of mental images in primary visual cortex. Nature, 378(6556), 496-498.
Kinoshita, M., Kato, R., Isa, K., Kobayashi, K., Kobayashi, K., Onoe, H., & Isa, T. (2019).
Dissecting the circuit for blindsight to reveal the critical role of pulvinar and superior
colliculus. Nature Communications, 10(1), 135.
Lyon, D. C., Nassi, J. J., & Callaway, E. M. (2010). A Disynaptic Relay from Superior Colliculus to
Dorsal Stream Visual Cortex in Macaque Monkey. Neuron, 65(2), 270–279.
Zeki, S., Watson, J., Lueck, C., Friston, K., Kennard, C., & Frackowiak, R. (1991). A direct
demonstration of functional specialization in human visual cortex. Journal of Neuroscience, 11(3).
Zeki, S. (1990). A century of cerebral achromatopsia. Brain, 113(6), 1721–1777.
Bouvier, S. E., & Engel, S. A. (2006). Behavioral Deficits and Cortical Damage Loci in Cerebral
Achromatopsia. Cerebral Cortex, 16(2), 183–191.
Grill-Spector, K., & Weiner, K. S. (2014). The functional architecture of the ventral temporal cortex and its role in
categorization. Nature Reviews Neuroscience, 15(8), 536-548. Figure 4a
Goodale, M. A., Milner, A. D., Jakobson, L. S., & Carey, D. P. (1991). A neurological dissociation
between perceiving objects and grasping them. Nature, 349(6305), 154–156.
Goodale, M. A., Milner, A. D., Jakobson, L. S., & Carey, D. P. (1991). A neurological dissociation
between perceiving objects and grasping them. Nature, 349(6305), 154–156.
Milner, A. D., Dijkerman, H. C., McIntosh, R. D., Rossetti, Y., & Pisella, L. (2003). Progress in
Brain Research, 142, 225–242. p. 225.
Huk, A. C., Dougherty, R. F., & Heeger, D. J. (2002). Retinotopy and functional subdivision of human areas MT
and MST. The Journal of Neuroscience, 22(16), 7195-7205.
Zihl, J., Von Cramon, D., & Mai, N. (1983). Selective disturbance of movement vision after
bilateral brain damage. Brain, 106(2), 313-340. p. 315.
Rask‐Andersen, H., Liu, W., Erixon, E., Kinnefors, A., Pfaller, K., Schrott‐Fischer, A., & Glueckert, R. (2012).
Human cochlea: anatomical characteristics and their relevance for cochlear implantation. The Anatomical Record,
295(11), 1791-1811.
Pickles, J. O., & Corey, D. P. (1992). Mechanoelectrical transduction by hair cells. Trends in neurosciences, 15(7), 254-259. Figure 2, p. 255.
Goutman, J. D., Elgoyhen, A. B., & Gómez-Casati, M. E. (2015). Cochlear hair cells: the sound-sensing machines. FEBS letters, 589(22), 3354-3361.
Kazmierczak, P., & Müller, U. (2012). Sensing sound: molecules that orchestrate
mechanotransduction by hair cells. Trends in neurosciences, 35(4), 220-229.
Gillespie, P. G., & Müller, U. (2009). Mechanotransduction by hair cells: models, molecules, and
mechanisms. Cell, 139(1), 33-44.
Ricci, A. J., Kachar, B., Gale, J., & Netten, S. M. V. (2006). Mechano-electrical Transduction: New Insights into
Old Ideas. The Journal of Membrane Biology, 209(2–3), 71–88
Oxenham A. J. (2018). How We Hear: The Perception and Neural Coding of Sound. Annual review
of psychology, 69, 27–50. https://doi.org/10.1146/annurev-psych-122216-011635
Oxenham A. J. (2018). How We Hear: The Perception and Neural Coding of Sound. Annual
review of psychology, 69, 27–50. https://doi.org/10.1146/annurev-psych-122216-011635
Oxenham A. J. (2018). How We Hear: The Perception and Neural Coding of Sound. Annual
review of psychology, 69, 27–50. p. 32.
Grothe, B., Pecka, M., & McAlpine, D. (2010). Mechanisms of sound localization in mammals.
Physiological reviews, 90(3), 983-1012. Figure 2 p. 985
Grothe, B., Pecka, M., & McAlpine, D. (2010). Mechanisms of sound localization in mammals.
Physiological reviews, 90(3), 983-1012. Figure 2 p. 985
Grothe, B., Pecka, M., & McAlpine, D. (2010). Mechanisms of sound localization in mammals.
Physiological reviews, 90(3), 983-1012. Figure 2 p. 985
Moore, B. C. (2007). Cochlear hearing loss: physiological, psychological and technical issues. John Wiley &
Sons. p. 29.
Oxenham, A. J. (2018). How We Hear: The Perception and Neural Coding of Sound. Annual
Review of Psychology, 69(1), 27–50. p. 41.
Oxenham, A. J. (2018). How We Hear: The Perception and Neural Coding of Sound. Annual
Review of Psychology, 69(1), 27–50. p. 41.
Moore, B. C. (2007). Cochlear hearing loss: physiological, psychological and technical issues. John Wiley &
Sons. p. 28
Furness, D. N. (2015) Inner ear. In Standring, S. (Ed.). Gray's anatomy e-book: the anatomical basis of
clinical practice. Elsevier Health Sciences. (p. 641-657).
Moore, B. C. (2007). Cochlear hearing loss: physiological, psychological and technical issues. John Wiley &
Sons. p. 28
Weisz, N., Hartmann, T., Dohrmann, K., Schlee, W., & Norena, A. (2006). High-frequency
tinnitus without hearing loss does not mean absence of deafferentation. Hearing research, 222(1-2).
Mühlnickel, W., Elbert, T., Taub, E., & Flor, H. (1998). Reorganization of auditory cortex in
tinnitus. Proceedings of the National Academy of Sciences, 95(17), 10340-10343.
Moore, B. C. (2007). Cochlear hearing loss: physiological, psychological and technical issues. John Wiley & Sons.
Mustafa, M. W. M. (2020). Audiological profile of asymptomatic Covid-19 PCR-positive cases.
American Journal of Otolaryngology, 102483.
Abraira, V. E., & Ginty, D. D. (2013). The sensory neurons of touch. Neuron, 79(4), 618-639.
Abraira, V. E., & Ginty, D. D. (2013). p. 618.
Vallbo, A. B., & Johansson, R. S. (1984). Properties of cutaneous mechanoreceptors in the human
hand related to touch sensation. Hum Neurobiol, 3(1), 3-14.
Lederman, S. J., & Klatzky, R. L. (2009). Haptic perception: A tutorial. Attention, Perception, &
Psychophysics, 71(7), 1439-1459. See Table 1B, p. 1441.
Abraira, V. E., & Ginty, D. D. (2013). Table 1, p. 619.
Johansson, R. S., & Flanagan, J. R. (2009). Coding and use of tactile signals from the fingertips in
object manipulation tasks. Nature Reviews Neuroscience, 10(5), 345. See Box 1, p. 347.
Lederman, S. J., & Klatzky, R. L. (2009). See Table 1B, p. 1441.
Lederman, S. J., & Klatzky, R. L. (2009). See Table 1B, p. 1441.
Johansson, R. S., & Flanagan, J. R. (2009). Table 1, p. 346; see also Vallbo, A. B., & Johansson, R.
S. (1984). Figs 2 & 3, p. 6 & 8.
Johansson, R. S., & Flanagan, J. R. (2009). Table 1, p. 346; see also Vallbo, A. B., & Johansson, R.
S. (1984). Figs 2 & 3, p. 6 & 8.
Vallbo, A. B., & Johansson, R. S. (1984). p. 5, paragraph 4.
Vallbo, A. B., & Johansson, R. S. (1984). p. 7, paragraph 1.
Mancini, F., Bauleo, A., Cole, J., Lui, F., Porro, C. A., Haggard, P., & Iannetti, G. D. (2014).
Whole‐body mapping of spatial acuity for pain and touch. Annals of neurology, 75(6), 917-924.
Mancini et al. (2014).
Mancini et al. (2014).
Mancini et al. (2014).
Li, L., & Ginty, D. D. (2014). The structure and organization of lanceolate mechanosensory
complexes at mouse hair follicles. Elife, 3.
Li, L., & Ginty, D. D. (2014). p. 2, paragraph 1.
Li, L., & Ginty, D. D. (2014). p. 2, paragraph 1.
See Abraira, V. E., & Ginty, D. D. (2013). Table 1, p. 619 for transmission speeds.
Li, L., & Ginty, D. D. (2014). p. 2, paragraph 1.
Takahashi‐Iwanaga, H. (2000). Three‐dimensional microanatomy of longitudinal lanceolate
endings in rat vibrissae. Journal of Comparative Neurology, 426(2), 259-269.
Abraira, V. E., & Ginty, D. D. (2013). Table 1, p. 619.
Abraira, V. E., & Ginty, D. D. (2013). Table 1, p. 619.
Zimmerman, A., Bai, L., & Ginty, D. D. (2014). The gentle touch receptors of mammalian skin.
Science, 346(6212), 950-954. p. 951, paragraph 2.
Zimmerman, A., Bai, L., & Ginty, D. D. (2014). p. 951, paragraph 2.
Abraira, V. E., & Ginty, D. D. (2013). Table 1, p. 619.
Mancini et al. (2014).
Lumpkin, E. A., & Caterina, M. J. (2007). Mechanisms of sensory transduction in the skin.
Nature, 445(7130), 858. Fig 2., p. 861.
Eijkelkamp, N., Quick, K., & Wood, J. N. (2013). Transient receptor potential channels and
mechanosensation. Annual review of neuroscience, 36, 519-546. Fig 8., p 536.
Lumpkin, E. A., & Caterina, M. J. (2007). Mechanisms of sensory transduction in the skin. Nature,
445(7130), 858. p. 858, paragraph 3.
Lumpkin, E. A., & Caterina, M. J. (2007).
Caterina, M. J., Schumacher, M. A., Tominaga, M., Rosen, T. A., Levine, J. D., & Julius, D.
(1997). The capsaicin receptor: a heat-activated ion channel in the pain pathway. Nature, 389(6653),
816. [Note: Caterina et al. refer to TRPV1 as the Vanilloid Receptor 1 (VR1).]
Caterina et al. (1997). p. 816.
Lumpkin, E. A., & Caterina, M. J. (2007). p. 859, paragraph 13.
Lumpkin, E. A., & Caterina, M. J. (2007). p. 859, paragraph 4.
Peier, A. M., Moqrich, A., Hergarden, A. C., Reeve, A. J., Andersson, D. A., Story, G. M., ... &
Patapoutian, A. (2002). A TRP channel that senses cold stimuli and menthol. Cell, 108(5), 705-715.
McKemy, D. D., Neuhausser, W. M., & Julius, D. (2002). Identification of a cold receptor reveals
a general role for TRP channels in thermosensation. Nature, 416(6876), 52.
Patapoutian, A., Peier, A. M., Story, G. M., & Viswanath, V. (2003). ThermoTRP channels and
beyond: mechanisms of temperature sensation. Nature Reviews Neuroscience, 4(7), 529. Fig 3.
McKemy, D. D., Neuhausser, W. M., & Julius, D. (2002). Identification of a cold receptor reveals a
general role for TRP channels in thermosensation. Nature, 416(6876), 52. Fig. 7a, p. 57.