
Advanced Biological Psychology, 2004
“Neuron and Cognition”
Perception:
Parietal activity and the perceived direction of ambiguous apparent motion
Ziv M. Williams, John C. Elfar, Emad N. Eskandar, Louis J. Toth & John A. Assad
Nature Neuroscience, June 2003 Volume 6 Number 6 pp 616 - 623
We recorded from parietal neurons in monkeys (Macaca mulatta) trained to report the direction of an
apparent motion stimulus consisting of regularly spaced columns of dots surrounded by an aperture.
Displacing the dots by half their inter-column spacing produced vivid apparent motion that could be
perceived in either the preferred or anti-preferred direction for each neuron. Many neurons in the lateral
intraparietal area (LIP) responded more strongly on trials in which the animals reported perceiving the
neurons' preferred direction, independent of the hand movement used to report their percept. This
selectivity was less common in the medial superior temporal area (MST) and virtually absent in the
middle temporal area (MT). Variations in activity of LIP and MST neurons just before motion onset were
also predictive of the animals' subsequent perceived direction. These data suggest a hierarchy of
representation in parietal cortex, whereby neuronal responses become more aligned with subjective
perception in higher parietal areas.
Unconscious Orientation Processing
Reza Rajimehr
Neuron, Vol 41, 663-673, 19 February 2004
Recent findings have shown that certain attributes of visual stimuli, like orientation, are registered in
cortical areas when the stimulus is unresolvable or perceptually invisible; however, there is no evidence
to show that complex forms of orientation processing (e.g., modulatory effects of orientation on the
processing of other features) could occur in the absence of awareness. To address this question,
different psychophysical paradigms were designed in six experiments to probe unconscious orientation
processing. First we demonstrated orientation-selective adaptation and color-contingent orientation
adaptation for peripheral unresolvable Gabor patches. The next experiments showed the modulatory
effects of perceptually indiscriminable orientations on apparent motion processing and attentional
mechanisms. Finally we investigated disappearance patterns of unresolvable Gabor stimuli during
motion-induced blindness (MIB). Abrupt changes in local unresolvable orientations truncated MIB;
however, orientation-based grouping failed to affect the MIB pattern when the orientations were
unresolvable. Overall results revealed that unresolvable orientations substantially influence perception at
multiple levels.
Review
View from the Top: Hierarchies and Reverse Hierarchies in the Visual System
Shaul Hochstein and Merav Ahissar
Neuron, Vol 36, 791-804, 5 December 2002
We propose that explicit vision advances in reverse hierarchical direction, as shown for perceptual
learning. Processing along the feedforward hierarchy of areas, leading to increasingly complex
representations, is automatic and implicit, while conscious perception begins at the hierarchy's top,
gradually returning downward as needed. Thus, our initial conscious percept—vision at a glance—
matches a high-level, generalized, categorical scene interpretation, identifying “forest before trees.” For
later vision with scrutiny, reverse hierarchy routines focus attention to specific, active, low-level units,
incorporating into conscious perception detailed information available there. Reverse Hierarchy Theory
dissociates between early explicit perception and implicit low-level vision, explaining a variety of
phenomena. Feature search “pop-out” is attributed to high areas, where large receptive fields underlie
spread attention detecting categorical differences. Search for conjunctions or fine discriminations depends
on reentry to low-level specific receptive fields using serial focused attention, consistent with recently
reported primary visual cortex effects.
BOLD Activity during Mental Rotation and Viewpoint-Dependent Object Recognition
Isabel Gauthier, William G. Hayward, Michael J. Tarr, Adam W. Anderson, Pawel Skudlarski, and John C. Gore
Neuron, Vol 34, 161-171, 28 March 2002
We measured brain activity during mental rotation and object recognition with objects rotated around
three different axes. Activity in the superior parietal lobe (SPL) increased proportionally to viewpoint
disparity during mental rotation, but not during object recognition. In contrast, the fusiform gyrus was
preferentially recruited in a viewpoint-dependent manner in recognition as compared to mental rotation.
In addition, independent of the effect of viewpoint, object recognition was associated with ventral areas
and mental rotation with dorsal areas. These results indicate that the similar behavioral effects of
viewpoint obtained in these two tasks are based on different neural substrates. Such findings call into
question the hypothesis that mental rotation is used to compensate for changes in viewpoint during object
recognition.
Representation of Space:
A statistical explanation of visual space
Zhiyong Yang & Dale Purves
Nature Neuroscience, June 2003 Volume 6 Number 6 pp 632 – 640
The subjective visual space perceived by humans does not reflect a simple transformation of objective
physical space; rather, perceived space has an idiosyncratic relationship with the real world. To date, there
is no consensus about either the genesis of perceived visual space or the implications of its peculiar
characteristics for visually guided behavior. Here we used laser range scanning to measure the actual
distances from the image plane of all unoccluded points in a series of natural scenes. We then asked
whether the differences between real and apparent distances could be explained by the statistical
relationship of scene geometry and the observer. We were able to predict perceived distances in a variety
of circumstances from the probability distribution of physical distances. This finding lends support to the
idea that the characteristics of human visual space are determined probabilistically.
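The abstract does not spell out the estimation procedure, but the core statistical idea, that a target with impoverished depth cues should appear near the most probable physical distance of surfaces in natural scenes, can be sketched as follows. The synthetic "range scan" sample and the use of the distribution's mode are illustrative assumptions, not the authors' data or method.

```python
import numpy as np

# A minimal sketch on synthetic data: build an empirical distribution of physical
# distances (standing in for laser range scans of natural scenes) and read off its
# most probable value. The lognormal sample is a placeholder, not real scan data.
rng = np.random.default_rng(0)
scan_distances_m = rng.lognormal(mean=1.2, sigma=0.7, size=200_000)

# Empirical probability distribution of physical distances from the image plane.
counts, edges = np.histogram(scan_distances_m, bins=200, range=(0.0, 30.0), density=True)
centers = 0.5 * (edges[:-1] + edges[1:])

most_probable_distance = centers[np.argmax(counts)]
print(f"mode of the scene-distance distribution ~ {most_probable_distance:.1f} m")
# On the probabilistic account, an otherwise cue-poor target should appear to lie
# near this mode, in the spirit of the classical specific-distance tendency.
```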
Spatial Updating in Human Parietal Cortex
Elisha P. Merriam, Christopher R. Genovese, and Carol L. Colby
Neuron, Vol 39, 361-373, 17 July 2003
Single neurons in monkey parietal cortex update visual information in conjunction with eye movements.
This remapping of stimulus representations is thought to contribute to spatial constancy. We hypothesized
that a similar process occurs in human parietal cortex and that we could visualize it with functional MRI.
We scanned subjects during a task that involved remapping of visual signals across hemifields. We
observed an initial response in the hemisphere contralateral to the visual stimulus, followed by a
remapped response in the hemisphere ipsilateral to the stimulus. We ruled out the possibility that this
remapped response resulted from either eye movements or visual stimuli alone. Our results demonstrate
that updating of visual information occurs in human parietal cortex.
Representation of Quantity:
Graded persistent activity in entorhinal cortex neurons
Alexei V. Egorov, Bassam N. Hamam, Erik Fransen, Michael E. Hasselmo & Angel A. Alonso
Nature 420, 173-178 (2002)
A Supramodal Number Representation in Human Intraparietal Cortex
Evelyn Eger, Philipp Sterzer, Michael O. Russ, Anne-Lise Giraud, and Andreas Kleinschmidt
Neuron, Vol 37, 719-725, 20 February 2003
The triple-code theory of numerical processing postulates an abstract-semantic “number sense.”
Neuropsychology points to intraparietal cortex as a potential substrate, but previous functional
neuroimaging studies did not dissociate the representation of numerical magnitude from task-driven
effects on intraparietal activation. In an event-related fMRI study, we presented numbers, letters, and
colors in the visual and auditory modality, asking subjects to respond to target items within each category.
In the absence of explicit magnitude processing, numbers compared with letters and colors across
modalities activated a bilateral region in the horizontal intraparietal sulcus. This stimulus-driven number-specific intraparietal response supports the idea of a supramodal number representation that is
automatically accessed by presentation of numbers and may code magnitude information.
Visual Motion:
Attention-Grabbing Motion in the Human Brain
Jody Culham
Visual motion signals can be derived either through a lower-order mechanism in which motion detectors
register changes in luminance over space and time or through a higher-order mechanism that tracks
salient features as they change position. A recent fMRI study by Claeys and colleagues (this issue of
Neuron) reports a new area of the human brain that responds to the motion of salient features and to
apparent motion.
A Higher Order Motion Region in Human Inferior Parietal Lobule: Evidence from fMRI
Kristl G. Claeys, Delwin T. Lindsey, Erik De Schutter, and Guy A. Orban
Neuron, Vol 40, 631-642, 30 October 2003
The proposal that motion is processed by multiple mechanisms in the human brain has received little
anatomical support so far. Here, we compared higher- and lower-level motion processing in the human
brain using functional magnetic resonance imaging. We observed activation of an inferior parietal lobule
(IPL) motion region by isoluminant red-green gratings when saliency of one color was increased and by
long-range apparent motion at 7 Hz but not 2 Hz. This higher order motion region represents the entire
visual field, while traditional motion regions predominantly process contralateral motion. Our results
suggest that there are two motion-processing systems in the human brain: a contralateral lower-level
luminance-based system, extending from hMT/V5+ into dorsal IPS and STS, and a bilateral higher-level
saliency-based system in IPL.
End-Stopping and the Aperture Problem: Two-Dimensional Motion Signals in Macaque V1
Christopher C. Pack, Margaret S. Livingstone, Kevin R. Duffy, and Richard T. Born
Neuron, Vol 39, 671-680, 14 August 2003
Our perception of fine visual detail relies on small receptive fields at early stages of visual processing.
However, small receptive fields tend to confound the orientation and velocity of moving edges, leading to
ambiguous or inaccurate motion measurements (the aperture problem). Thus, it is often assumed that
neurons in primary visual cortex (V1) carry only ambiguous motion information. Here we show that a
subpopulation of V1 neurons is capable of signaling motion direction in a manner that is independent of
contour orientation. Specifically, end-stopped V1 neurons obtain accurate motion measurements by
responding only to the endpoints of long contours, a strategy which renders them largely immune to the
aperture problem. Furthermore, the time course of end-stopping is similar to the time course of motion
integration by MT neurons. These results suggest that cortical neurons might represent object motion by
responding selectively to two-dimensional discontinuities in the visual scene.
Neuronal Adaptation to Visual Motion in Area MT of the Macaque
Adam Kohn and J. Anthony Movshon
Neuron, Vol 39, 681-691, 14 August 2003
The responsivity of primary sensory cortical neurons is reduced following prolonged adaptation, but such
adaptation has been little studied in higher sensory areas. Adaptation to visual motion has strong
perceptual effects, so we studied the effect of prolonged stimulation on neuronal responsivity in the
macaque's area MT, a cortical area whose importance to visual motion perception is well established. We
adapted MT neurons with sinusoidal gratings drifting in the preferred or null direction. Preferred
adaptation reduced the responsiveness of MT cells, primarily by changing their contrast gain, and this
effect was spatially specific within the receptive field. Null adaptation reduced the ability of null gratings
to inhibit the response to a simultaneously presented preferred stimulus. While both preferred and null
adaptation alter MT responses, these effects probably do not occur in MT neurons but are likely to reflect
adaptation-induced changes in contrast gain earlier in the visual pathway.
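As a point of reference for "changing their contrast gain": responses of this kind are commonly summarized with a hyperbolic-ratio contrast-response function. The notation below is generic and not necessarily the exact function fitted in the study.

```latex
% Hyperbolic-ratio (Naka-Rushton) contrast-response function, generic notation:
R(c) \;=\; R_{0} \;+\; R_{\max}\,\frac{c^{\,n}}{c^{\,n} + c_{50}^{\,n}}
```

In this description, a contrast-gain change corresponds to a shift of the semisaturation contrast c50, as if stimulus contrast had been reduced, whereas a response-gain change would scale Rmax.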
Interaction of Retinal Image and Eye Velocity in Motion Perception
Herbert C. Goltz, Joseph F.X. DeSouza, Ravi S. Menon, Douglas B. Tweed, and Tutis Vilis
Neuron, Vol 39, 569-576, 31 July 2003
When we move our eyes, why does the world look stable even as its image flows across our retinas, and
why do afterimages, which are stationary on the retinas, appear to move? Current theories say this is
because we perceive motion by summation: if an object slips across the retina at r°/s while the eye turns
at e°/s, the object's perceived velocity in space should be r + e. We show that activity in MT+, the visual-motion complex in human cortex, does reflect a mix of r and e rather than r alone. But we show also that,
for optimal perception, r and e should not summate; rather, the signals coding e interact multiplicatively
with the spatial gradient of illumination.
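In the notation of the abstract, the two accounts can be written compactly. The multiplicative form below is a schematic rendering of an eye-velocity signal interacting with the spatial gradient of illumination, not the authors' exact model.

```latex
% Classical summation account:
\hat{v}_{\mathrm{perceived}} \;=\; r + e
% Alternative: the eye-velocity signal enters through a term of the form
e \cdot \frac{\partial I(x)}{\partial x}
% i.e., multiplying the spatial gradient of the luminance profile I(x)
% rather than simply adding to the retinal slip r.
```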
Directional Anisotropies Reveal a Functional Segregation of Visual Motion Processing for Perception and
Action
Anne K. Churchland, Justin L. Gardner, I-han Chou, Nicholas J. Priebe, and Stephen G. Lisberger
Neuron, Vol 37, 1001-1011, 27 March 2003
Humans exhibit an anisotropy in direction perception: discrimination is superior when motion is around
horizontal or vertical rather than diagonal axes. In contrast to the consistent directional anisotropy in
perception, we found only small idiosyncratic anisotropies in smooth pursuit eye movements, a motor
action requiring accurate discrimination of visual motion direction. Both pursuit and perceptual direction
discrimination rely on signals from the middle temporal visual area (MT), yet analysis of multiple
measures of MT neuronal responses in the macaque failed to provide evidence of a directional anisotropy.
We conclude that MT represents different motion directions uniformly, and subsequent processing creates
a directional anisotropy in pathways unique to perception. Our data support the hypothesis that, at least
for visual motion, perception and action are guided by inputs from separate sensory streams. The
directional anisotropy of perception appears to originate after the two streams have segregated and
downstream from area MT.
Parallel Visual Motion Processing Streams for Manipulable Objects and Human Movements
Michael S. Beauchamp, Kathryn E. Lee, James V. Haxby, and Alex Martin
Neuron, Vol 34, 149-159, 28 March 2002
We tested the hypothesis that different regions of lateral temporal cortex are specialized for processing
different types of visual motion by studying the cortical responses to moving gratings and to humans and
manipulable objects (tools and utensils) that were either stationary or moving with natural or artificially
generated motions. Segregated responses to human and tool stimuli were observed in both ventral and
lateral regions of posterior temporal cortex. Relative to ventral cortex, lateral temporal cortex showed a
larger response for moving compared with static humans and tools. Superior temporal cortex preferred
human motion, and middle temporal gyrus preferred tool motion. A greater response was observed in STS
to articulated compared with unarticulated human motion. Specificity for different types of complex
motion (in combination with visual form) may be an organizing principle in lateral temporal cortex.
Attention:
The Neural Fate of Consciously Perceived and Missed Events in the Attentional Blink
René Marois, Do-Joon Yi, and Marvin M. Chun
Neuron, Vol 41, 465-472, 5 February 2004
Cognitive models of attention propose that visual perception is a product of two stages of visual
processing: early operations permit rapid initial categorization of the visual world, while later attention-demanding capacity-limited stages are necessary for the conscious report of the stimuli. Here we used the
attentional blink paradigm and fMRI to neurally distinguish these two stages of vision. Subjects detected
a face target and a scene target presented rapidly among distractors at fixation. Although the second,
scene target frequently went undetected by the subjects, it nonetheless activated regions of the medial
temporal cortex involved in high-level scene representations, the parahippocampal place area (PPA). This
PPA activation was amplified when the stimulus was consciously perceived. By contrast, the frontal
cortex was activated only when scenes were successfully reported. These results suggest that medial
temporal cortex permits rapid categorization of the visual input, while the frontal cortex is part of a
capacity-limited attentional bottleneck to conscious report.
Review
Visuomotor Origins of Covert Spatial Attention
Tirin Moore, Katherine M. Armstrong, and Mazyar Fallah
Neuron, Vol 40, 671-683, 13 November 2003
Covert spatial attention produces biases in perceptual performance and neural processing of behaviorally
relevant stimuli in the absence of overt orienting movements. The neural mechanism that gives rise to
these effects is poorly understood. This paper surveys past evidence of a relationship between oculomotor
control and visual spatial attention and more recent evidence of a causal link between the control of
saccadic eye movements by frontal cortex and covert visual selection. Both suggest that the mechanism of
covert spatial attention emerges as a consequence of the reciprocal interactions between neural circuits
primarily involved in specifying the visual properties of potential targets and those involved in specifying
the movements needed to fixate them.
Neuron-specific contribution of the superior colliculus to overt and covert shifts of attention
Alla Ignashchenkova, Peter W. Dicke, Thomas Haarmeier & Peter Thier
Nature Neuroscience, January 2004 Volume 7 Number 1 pp 56 – 64
The analysis of a peripheral visual location can be improved in two ways: either by orienting one's gaze
(usually by making a foveating saccade) or by 'covertly' shifting one's attention to the peripheral location
without making an eye movement. The premotor theory of attention holds that saccades and spatial shifts
of attention share a common functional module with a distinct neuronal basis. Using single-unit recording
from the brains of trained rhesus monkeys, we investigated whether the superior colliculus, the major
subcortical center for the control of saccades, is part of this shared network for attention and saccades.
Here we show that a distinct type of neuron in the intermediate layer of the superior colliculus, the
visuomotor neuron, which is known to be centrally involved in the preparation of saccades, is also active
during covert shifts of attention.
Attention alters appearance
Marisa Carrasco, Sam Ling & Sarah Read
Nature Neuroscience March 2004 Volume 7 Number 3 pp 308 – 313
Does attention alter appearance? This critical issue, debated for over a century, remains unsettled. From
psychophysical evidence that covert attention affects early vision—it enhances contrast sensitivity and
spatial resolution—and from neurophysiological evidence that attention increases the neuronal contrast
sensitivity (contrast gain), one could infer that attention changes stimulus appearance. Surprisingly, few
studies have directly investigated this issue. Here we developed a psychophysical method to directly
assess the phenomenological correlates of attention in humans. We show that attention alters appearance;
it boosts the apparent stimulus contrast. These behavioral results are consistent with neurophysiological
findings suggesting that attention changes the strength of a stimulus by increasing its 'effective contrast'
or salience.
Eye Movements:
Gaze bias both reflects and influences preference
Shinsuke Shimojo, Claudiu Simion, Eiko Shimojo & Christian Scheier
Nature Neuroscience, December 2003 Volume 6 Number 12 pp 1317 – 1322
Emotions operate along the dimension of approach and aversion, and it is reasonable to assume that
orienting behavior is intrinsically linked to emotionally involved processes such as preference decisions.
Here we describe a gaze 'cascade effect' that was present when human observers were shown pairs of
human faces and instructed to decide which face was more attractive. Their gaze was initially distributed
evenly between the two stimuli, but then gradually shifted toward the face that they eventually chose.
Gaze bias was significantly weaker in a face shape discrimination task. In a second series of experiments,
manipulation of gaze duration, but not exposure duration alone, biased observers' preference decisions.
We thus conclude that gaze is actively involved in preference formation. The gaze cascade effect was also
present when participants compared abstract, unfamiliar shapes for attractiveness, suggesting that
orienting and preference for objects in general are intrinsically linked in a positive feedback loop leading
to the conscious choice.
Saccades actively maintain perceptual continuity
John Ross & Anna Ma-Wyatt
Nature Neuroscience, January 2004 Volume 7 Number 1 pp 65 – 69
People make saccades—rapid eye movements to a new fixation—approximately three times per second.
This would seemingly disrupt perceptual continuity, yet our brains construct a coherent, stable view of the
world from these successive fixations. There is conflicting evidence regarding the effects of saccades on
perceptual continuity: some studies report that they are disruptive, with little information carryover
between saccades; others report that carryover is substantial. Here we show that saccades actively
contribute to perceptual continuity in humans in two different ways. When bistable stimuli are presented
intermittently, saccades executed during the blank interval shorten the duration of states of ambiguous
figures, indicating that saccades can erase immediately past perceptual states. On the other hand, they
prolong the McCollough effect, indicating that saccades strengthen learned contingencies. Our results
indicate that saccades help, rather than hinder, perceptual continuity.
Neural Mechanisms of Saccadic Suppression
A. Thiele, P. Henning, M. Kubischik, and K.-P. Hoffmann
Science 2002 March 29; 295: 2460-2462
In normal vision our gaze leaps from detail to detail, resulting in rapid image motion across the retina.
Yet we are unaware of such motion, a phenomenon known as saccadic suppression. We recorded neural
activity in the middle temporal and middle superior temporal cortical areas during saccades and identical
image motion under passive viewing conditions. Some neurons were selectively silenced during saccadic
image motion, but responded well to identical external image motion. In addition, a subpopulation of
neurons reversed their preferred direction of motion during saccades. Consequently, oppositely directed
motion signals annul one another, and motion percepts are suppressed.
Spatiotopic temporal integration of visual motion across saccadic eye movements
David Melcher & M. Concetta Morrone
Nature Neuroscience, August 2003 Volume 6 Number 8 pp 877 – 881
Saccadic eye movements pose many challenges for stable and continuous vision, such as how information
from successive fixations is amalgamated into a single percept. Here we show in humans that motion
signals are temporally integrated across separate fixations, but only when the motion stimulus falls either
on the same retinal region (retinotopic integration) or on different retinal positions that correspond to the
same external spatial coordinates (spatiotopic integration). We used individual motion signals that were
below detection threshold, implicating spatiotopic trans-saccadic integration in relatively early stages of
visual processing such as the middle temporal area (MT) or V5 of visual cortex. The trans-saccadic
buildup of important congruent visual information while irrelevant non-congruent information fades
could provide a simple and robust strategy to stabilize perception during eye movements.
A Neural Correlate of Oculomotor Sequences in Supplementary Eye Field
Xiaofeng Lu, Masako Matsuzawa, and Okihide Hikosaka
Neuron, Vol 34, 317-325, 11 April 2002
Complex learned motor sequences can be composed of a combination of a small number of elementary
actions. To investigate how the brain represents such sequences, we devised an oculomotor sequence task
in which the monkey had to choose the target solely by the sequential context, not by the current stimulus
combination. We found that many neurons in the supplementary eye field (SEF) became active with a
specific target direction (D neuron) or a specific target/distractor combination (C neuron). Furthermore,
such activity was often selective for one among several sequences that included the combination (S
neuron). These results suggest that the SEF contributes to the generation of saccades in many learned
sequences.
The Effect of Gaze Angle and Fixation Distance on the Responses of Neurons in V1, V2, and V4
David Rosenbluth and John M. Allman
Neuron, Vol 33, 143-149, 3 January 2002
What we see depends on where we look. This paper characterizes the modulatory effects of point of
regard in three-dimensional space on responsiveness of visual cortical neurons in areas V1, V2, and V4.
Such modulatory effects are both common, affecting 85% of cells, and strong, frequently producing
changes of mean firing rate by a factor of 10. The prevalence of neurons in area V4 showing a preference
for near distances may be indicative of the involvement of this area in close scrutiny during object
recognition. We propose that eye-position signals can be exploited by visual cortex as classical
conditioning stimuli, enabling the perceptual learning of systematic relationships between point of regard
and the structure of the visual environment.
Saccadic Eye Movements Modulate Visual Responses in the Lateral Geniculate Nucleus
John B. Reppas, W. Martin Usrey, and R. Clay Reid
Neuron, Vol 35, 961-974, 29 August 2002
We studied the effects of saccadic eye movements on visual signaling in the primate lateral geniculate
nucleus (LGN), the earliest stage of central visual processing. Visual responses were probed with
spatially uniform flickering stimuli, so that retinal processing was uninfluenced by eye movements.
Nonetheless, saccades had diverse effects, altering not only response strength but also the temporal and
chromatic properties of the receptive field. Of these changes, the most prominent was a biphasic
modulation of response strength, weak suppression followed by strong enhancement. Saccadic
modulation was widespread, and affected both of the major processing streams in the LGN. Our results
demonstrate that during natural viewing, thalamic response properties can vary dramatically, even over
the course of a single fixation.
Sensory-motor transformation:
Self-control during response conflict by human supplementary eye field
Masud Husain, Andrew Parton, Timothy L. Hodgson, Dominic Mort & Geraint Rees
Nature Neuroscience, February 2003 Volume 6 Number 2 pp 117 – 118
Although medial frontal cortex is considered to have an important role in planning behavior and
monitoring errors, the specific contributions of regions within it are poorly understood 1-3. Here we report
that a patient with a highly selective lesion of a medial frontal motor area—the supplementary eye field
(SEF)—lacked control in changing the direction of his eye movement from either a previous intention or
behavioral 'set'; however, he monitored his errors well and corrected them quickly. The results indicate a
key new role for the SEF and show that medial frontal mechanisms for self-control of action may be
highly specific, with the SEF critically involved in implementing oculomotor control during response
conflict, but not in error monitoring.
Somatotopic Representation of Action Words in Human Motor and Premotor Cortex
Olaf Hauk, Ingrid Johnsrude, and Friedemann Pulvermüller
Neuron, Vol 41, 301-307, 22 January 2004
Since the early days of research into language and the brain, word meaning was assumed to be processed
in specific brain regions, which most modern neuroscientists localize to the left temporal lobe. Here we
use event-related fMRI to show that action words referring to face, arm, or leg actions (e.g., to lick, pick,
or kick), when presented in a passive reading task, differentially activated areas along the motor strip that
either were directly adjacent to or overlapped with areas activated by actual movement of the tongue,
fingers, or feet. These results demonstrate that the referential meaning of action words has a correlate in
the somatotopic activation of motor and premotor cortex. This rules out a unified “meaning center” in the
human brain and supports a dynamic view according to which words are processed by distributed
neuronal assemblies with cortical topographies that reflect word semantics.
Direct visuomotor transformations for reaching
Christopher A. Buneo, Murray R. Jarvis, Aaron P. Batista, Richard A. Andersen
Nature 416, 632 - 636 (11 Apr 2002)
I Feel My Hand Moving: A New Role of the Primary Motor Cortex in Somatic Perception of Limb
Movement
Eiichi Naito, Per E. Roland, and H. Henrik Ehrsson
Neuron, Vol 36, 979-988, 5 December 2002
The primary motor cortex (MI) is regarded as the site for motor control. Occasional reports that MI
neurons react to sensory stimuli have either been ignored or attributed to guidance of voluntary
movements. Here, we show that MI activation is necessary for the somatic perception of movement of our
limbs. We made use of an illusion: when the wrist tendon of one hand is vibrated, it is perceived as the
hand moving. If the vibrated hand has skin contact with the other hand, it is perceived as both hands
bending. Using fMRI and TMS, we show that the activation in MI controlling the nonvibrated hand is
compulsory for the somatic perception of the hand movement. This novel function of MI contrasts with
its traditional role as the executive locus of voluntary limb movement.
Active Vision and Visual Activation in Area V4
Charles E. Connor
Neuron, Vol 40, 1056-1058, 18 December 2003
During normal vision, the focus of gaze continually jumps from one important image feature to the next.
In this issue of Neuron, Mazer and Gallant analyze neural activity in higher-level visual cortex during this
kind of active visual exploration, and they demonstrate a localized enhancement of visual responses that
predicts the target of the upcoming eye movement.
Goal-Related Activity in V4 during Free Viewing Visual Search: Evidence for a Ventral Stream Visual
Salience Map
James A. Mazer and Jack L. Gallant
Neuron, Vol 40, 1241-1250, 18 December 2003
Natural exploration of complex visual scenes depends on saccadic eye movements toward important
locations. Saccade targeting is thought to be mediated by a retinotopic map that represents the locations of
salient features. In this report, we demonstrate that extrastriate ventral area V4 contains a retinotopic
salience map that guides exploratory eye movements during a naturalistic free viewing visual search task.
In more than half of recorded cells, visually driven activity is enhanced prior to saccades that move the
fovea toward the location previously occupied by a neuron's spatial receptive field. This correlation
suggests that bottom-up processing in V4 influences the oculomotor planning process. Half of the neurons
also exhibit top-down modulation of visual responses that depends on search target identity but not visual
stimulation. Convergence of bottom-up and top-down processing streams in area V4 results in an adaptive,
dynamic map of salience that guides oculomotor planning during natural vision.
Learning:
Single Neurons in the Monkey Hippocampus and Learning of New Associations
Sylvia Wirth, Marianna Yanike, Loren M. Frank, Anne C. Smith, Emery N. Brown, and Wendy A. Suzuki
Science 2003 June 6; 300: 1578-1581
The medial temporal lobe is crucial for the ability to learn and retain new declarative memories. This form
of memory includes the ability to quickly establish novel associations between unrelated items. To better
understand the patterns of neural activity during associative memory formation, we recorded the activity
of hippocampal neurons of macaque monkeys as they learned new associations. Hippocampal neurons
signaled learning by changing their stimulus-selective response properties. This change in the pattern of
selective neural activity occurred before, at the same time as, or after learning, which suggests that these
neurons are involved in the initial formation of new associative memories.
Decision:
Temporal Evolution of a Decision-Making Process in Medial Premotor Cortex
Adrián Hernández, Antonio Zainos, and Ranulfo Romo
Neuron, Vol 33, 959-972, 14 March 2002
The events linking sensory discrimination to motor action remain unclear. It is not known, for example,
whether the motor areas of the frontal lobe receive the result of the discrimination process from other
areas or whether they actively participate in it. To investigate this, we trained monkeys to discriminate
between two mechanical vibrations applied sequentially to the fingertips; here subjects had to recall the
first vibration, compare it to the second one, and indicate with a hand/arm movement which of the two
vibrations had the higher frequency. We recorded the activity of single neurons in medial premotor cortex
(MPC) and found that their responses correlate with the diverse stages of the discrimination process. Thus,
activity in MPC reflects the temporal evolution of the decision-making process leading to action selection
during this perceptual task.
Neuronal Correlates of a Perceptual Decision in Ventral Premotor Cortex
Ranulfo Romo, Adrián Hernández, and Antonio Zainos
Neuron, Vol 41, 165-173, 8 January 2004
The ventral premotor cortex (VPC) is involved in the transformation of sensory information into action,
although the exact neuronal operation is not known. We addressed this problem by recording from single
neurons in VPC while trained monkeys report a decision based on the comparison of two mechanical
vibrations applied sequentially to the fingertips. Here we report that the activity of VPC neurons reflects
current and remembered sensory inputs, their comparison, and motor commands expressing the result;
that is, the entire processing cascade linking the evaluation of sensory stimuli with a motor report. These
findings provide a fairly complete panorama of the neural dynamics that underlies the transformation of
sensory information into an action and emphasize the role of VPC in perceptual decisions.
Microstimulation of visual cortex affects the speed of perceptual decisions
Jochen Ditterich, Mark E Mazurek & Michael N Shadlen
Nature Neuroscience, August 2003 Volume 6 Number 8 pp 891 – 898
Direction-selective neurons in the middle temporal visual area (MT) are crucially involved in motion
perception, although it is not known exactly how the activity of these neurons is interpreted by the rest of
the brain. Here we report that in a two-alternative task, the activity of MT neurons is interpreted as
evidence for one direction and against the other. We measured the speed and accuracy of decisions as
rhesus monkeys performed a direction-discrimination task. On half of the trials, we stimulated direction-selective neurons in area MT, thereby causing the monkeys to choose the neurons' preferred direction
more often. Microstimulation quickened decisions in favor of the preferred direction and slowed
decisions in favor of the opposite direction. Even on trials in which microstimulation did not induce a
preferred direction choice, it still affected response times. Our findings suggest that during the formation
of a decision, sensory evidence for competing propositions is compared and accumulates to a decision-making threshold.
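The final sentence describes accumulation of a comparison signal to a threshold. Below is a toy drift-diffusion sketch of that idea; the model form, the parameter values, and the treatment of microstimulation as an added drift are illustrative assumptions, not the model fitted in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def diffusion_trial(drift, bound=1.0, dt=0.001, noise=1.0):
    """Accumulate the momentary evidence difference (preferred minus opposite)
    until it crosses +bound (choose preferred) or -bound (choose opposite).
    Parameters are arbitrary illustrations, not fitted values."""
    x, t = 0.0, 0.0
    while abs(x) < bound:
        x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
        t += dt
    return ("preferred" if x > 0 else "opposite"), t

def summarize(drift, label, n=1000):
    trials = [diffusion_trial(drift) for _ in range(n)]
    for choice in ("preferred", "opposite"):
        rts = [t for c, t in trials if c == choice]
        if rts:
            print(f"{label}: {choice:9s} P={len(rts)/n:.2f}  mean RT={np.mean(rts):.2f} s")

summarize(drift=0.5, label="no stim  ")
summarize(drift=1.0, label="with stim")
# Extra drift toward 'preferred' raises the proportion of preferred choices and
# shortens their decision times; capturing the slowing of opposite-direction
# decisions reported in the paper would require a richer model than this sketch.
```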
Computation:
The Information Content of Receptive Fields
Thomas L. Adelman, William Bialek, and Robert M. Olberg
Neuron, Vol 40, 823-833, 13 November 2003
The nervous system must observe a complex world and produce appropriate, sometimes complex,
behavioral responses. In contrast to this complexity, neural responses are often characterized through very
simple descriptions such as receptive fields or tuning curves. Do these characterizations adequately reflect
the true dimensionality reduction that takes place in the nervous system, or are they merely convenient
oversimplifications? Here we address this question for the target-selective descending neurons (TSDNs)
of the dragonfly. Using extracellular multielectrode recordings of a population of TSDNs, we quantify the
completeness of the receptive field description of these cells and conclude that the information in
independent instantaneous position and velocity receptive fields accounts for 70%–90% of the total
information in single spikes. Thus, we demonstrate that this simple receptive field model is close to a
complete description of the features in the stimulus that evoke TSDN response.
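A standard way to quantify the information carried by single spikes, and the fraction captured by a receptive-field model, is the single-spike information rate. The abstract does not state the exact estimator used, so the formula below should be read as the conventional definition rather than as the paper's method.

```latex
% Single-spike information rate (conventional definition):
I_{\mathrm{spike}} \;=\; \frac{1}{T}\int_{0}^{T}\frac{r(t)}{\bar{r}}\,
    \log_{2}\!\frac{r(t)}{\bar{r}}\,dt \qquad \text{(bits per spike)}
% r(t) is the stimulus-locked firing rate and \bar{r} its time average. Computing
% the analogous quantity from the rate predicted by the position-and-velocity
% receptive-field model and dividing by I_spike gives a fraction of the total
% single-spike information, the quantity reported as 70%-90% above.
```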
Multineuronal Firing Patterns in the Signal from Eye to Brain
Mark J. Schnitzer and Markus Meister
Neuron, Vol 37, 499-511, 6 February 2003
Population codes in the brain have generally been characterized by recording responses from one neuron
at a time. This approach will miss codes that rely on concerted patterns of action potentials from many
cells. Here we analyze visual signaling in populations of ganglion cells recorded from the isolated
salamander retina. These neurons tend to fire synchronously far more frequently than expected by chance.
We present an efficient algorithm to identify what groups of cells cooperate in this way. Such groups can
include up to seven or more neurons and may account for more than 50% of all the spikes recorded from
the retina. These firing patterns represent specific messages about the visual stimulus that differ
significantly from what one would derive by single-cell analysis.
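The group-finding algorithm itself is not described in the abstract. The sketch below only illustrates the underlying notion of synchrony in excess of chance, comparing observed joint firing against a shift-shuffled independence baseline on made-up spike trains; it is not the paper's algorithm.

```python
import numpy as np

rng = np.random.default_rng(2)

def excess_synchrony(spike_matrix, group, n_shuffles=200):
    """Count bins in which all cells in `group` spike together, and compare with
    the count expected if the cells fired independently (estimated by circularly
    shifting each train, which preserves single-cell rates)."""
    n_cells, n_bins = spike_matrix.shape
    observed = np.all(spike_matrix[list(group)] > 0, axis=0).sum()
    null_counts = []
    for _ in range(n_shuffles):
        shifted = np.stack([np.roll(spike_matrix[c], rng.integers(n_bins)) for c in group])
        null_counts.append(np.all(shifted > 0, axis=0).sum())
    return int(observed), float(np.mean(null_counts))

# Fake data: three cells sharing a common drive plus one independent cell (10 ms bins).
n_bins = 20_000
common = rng.random(n_bins) < 0.02
spikes = np.stack(
    [((rng.random(n_bins) < 0.01) | (common & (rng.random(n_bins) < 0.8))).astype(int)
     for _ in range(3)]
    + [(rng.random(n_bins) < 0.02).astype(int)]
)

for group in [(0, 1, 2), (0, 3)]:
    obs, exp = excess_synchrony(spikes, group)
    print(f"cells {group}: observed joint bins = {obs}, expected if independent ~ {exp:.1f}")
```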
Coding of Natural Scenes in Primary Visual Cortex
Michael Weliky, József Fiser, Ruskin H. Hunt, and David N. Wagner
Neuron, Vol 37, 703-718, 20 February 2003
Natural scene coding in ferret visual cortex was investigated using a new technique for multi-site
recording of neuronal activity from the cortical surface. Surface recordings accurately reflected radially
aligned layer 2/3 activity. At individual sites, evoked activity to natural scenes was weakly correlated
with the local image contrast structure falling within the cells' classical receptive field. However, a
population code, derived from activity integrated across cortical sites having retinotopically overlapping
receptive fields, correlated strongly with the local image contrast structure. Cell responses demonstrated
high lifetime sparseness, population sparseness, and high dispersal values, implying efficient neural
coding in terms of information processing. These results indicate that while cells at an individual cortical
site do not provide a reliable estimate of the local contrast structure in natural scenes, cell activity
integrated across distributed cortical sites is closely related to this structure in the form of a sparse and
dispersed code.
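The lifetime and population sparseness values mentioned above are usually computed with a ratio-based index. The definition below, in the Treves-Rolls/Vinje-Gallant style, is a common choice, though the abstract does not say which exact measure was used.

```python
import numpy as np

def sparseness(responses):
    """Ratio-based sparseness index in [0, 1]: near 0 for a flat response profile,
    approaching 1 when the cell responds strongly to only a few stimuli."""
    r = np.asarray(responses, dtype=float)
    n = r.size
    activity_ratio = (r.mean() ** 2) / np.mean(r ** 2)
    return (1.0 - activity_ratio) / (1.0 - 1.0 / n)

dense_cell = [5, 6, 5, 4, 6, 5, 5, 4]     # responds similarly to every scene
sparse_cell = [0, 0, 12, 0, 1, 0, 0, 0]   # responds strongly to a single scene
print(f"lifetime sparseness, dense cell:  {sparseness(dense_cell):.2f}")
print(f"lifetime sparseness, sparse cell: {sparseness(sparse_cell):.2f}")
# Applying the same formula across cells for one image gives population sparseness.
```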
Socio-neurobiology:
Review
Cognitive Neuroscience of Human Social Behaviour
Ralph Adolphs
Nature Reviews Neuroscience 4, 165-178 (2003)
A system in the human brain for predicting the actions of others
Narender Ramnani & R. Christopher Miall
Nature Neuroscience, January 2004 Volume 7 Number 1 pp 85 – 90
The ability to attribute mental states to others, and therefore to predict others' behavior, is particularly
advanced in humans. A controversial but untested idea is that this is achieved by simulating the other
person's mental processes in one's own mind. If this is the case, then the same neural systems activated by
a mental function should re-activate when one thinks about that function performed by another. Here,
using functional magnetic resonance imaging (fMRI), we tested whether the neural processes involved in
preparing one's own actions are also used for predicting the future actions of others. We provide
compelling evidence that areas within the action control system of the human brain are indeed activated
when predicting others' actions, but a different action sub-system is activated when preparing one's own
actions.
An fMRI investigation of the impact of interracial contact on executive function
Jennifer A. Richeson, Abigail A. Baird, Heather L. Gordon, Todd F. Heatherton, Carrie L. Wyland, Sophie Trawalter & J. Nicole Shelton
Nature Neuroscience, December 2003 Volume 6 Number 12 pp 1323 – 1328
We investigated whether individual differences in racial bias among white participants predict the
recruitment, and potential depletion, of executive attentional resources during contact with black
individuals. White individuals completed an unobtrusive measure of racial bias, then interacted with a
black individual, and finally completed an ostensibly unrelated Stroop color-naming test. In a separate
functional magnetic resonance imaging (fMRI) session, subjects were presented with unfamiliar black
male faces, and the activity of brain regions thought to be critical to executive control was assessed. We
found that racial bias predicted activity in right dorsolateral prefrontal cortex (DLPFC) in response to
black faces. Furthermore, activity in this region predicted Stroop interference after an actual interracial
interaction, and it statistically mediated the relation between racial bias and Stroop interference. These
results are consistent with a resource depletion account of the temporary executive dysfunction seen in
racially biased individuals after interracial contact.
A Neural Basis for Social Cooperation
James K. Rilling, David A. Gutman, Thorsten R. Zeh, Giuseppe Pagnoni, Gregory S. Berns, and Clinton D. Kilts
Neuron, Vol 35, 395-405, 18 July 2002
Cooperation based on reciprocal altruism has evolved in only a small number of species, yet it constitutes
the core behavioral principle of human social life. The iterated Prisoner's Dilemma Game has been used
to model this form of cooperation. We used fMRI to scan 36 women as they played an iterated Prisoner's
Dilemma Game with another woman to investigate the neurobiological basis of cooperative social
behavior. Mutual cooperation was associated with consistent activation in brain areas that have been
linked with reward processing: nucleus accumbens, the caudate nucleus, ventromedial
frontal/orbitofrontal cortex, and rostral anterior cingulate cortex. We propose that activation of this neural
network positively reinforces reciprocal altruism, thereby motivating subjects to resist the temptation to
selfishly accept but not reciprocate favors.
Context-dependent processing:
Cortical Analysis of Visual Context
Moshe Bar and Elissa Aminoff
Neuron, Vol 38, 347-358, 24 April 2003
Objects in our environment tend to be grouped in typical contexts. How does the human brain analyze
such associations between visual objects and their specific context? We addressed this question in four
functional neuroimaging experiments and revealed the cortical mechanisms that are uniquely activated
when people recognize highly contextual objects (e.g., a traffic light). Our findings indicate that a region
in the parahippocampal cortex and a region in the retrosplenial cortex together comprise a system that
mediates both spatial and nonspatial contextual processing. Interestingly, each of these regions has been
identified in the past with two functions: the processing of spatial information and episodic memory.
Attributing contextual analysis to these two areas, instead, provides a framework for bridging between
previous reports.
Temporal specificity in the cortical plasticity of visual space representation
Fu YX, Djupsund K, Gao H, Hayden B, Shen K, and Dan Y
Science 2002 June 14; 296: 1999-2003
Representation of color stimuli in awake macaque primary visual cortex
Wachtler T, Sejnowski TJ, and Albright TD
Neuron, Vol 37, 681-691, 20 February 2003
Synaptic Integration by V1 Neurons Depends on Location within the Orientation Map
James Schummers, Jorge Mariño, and Mriganka Sur
Neuron, Vol 36, 969-978, 5 December 2002
Neurons in the primary visual cortex (V1) are organized into an orientation map consisting of orientation
domains arranged radially around “pinwheel centers” at which the representations of all orientations
converge. We have combined optical imaging of intrinsic signals with intracellular recordings to estimate
the subthreshold inputs and spike outputs of neurons located near pinwheel centers or in orientation
domains. We find that neurons near pinwheel centers have subthreshold responses to all stimulus
orientations but spike responses to only a narrow range of orientations. Across the map, the selectivity of
inputs covaries with the selectivity of orientations in the local cortical network, while the selectivity of
spike outputs does not. Thus, the input-output transformation performed by V1 neurons is powerfully
influenced by the local structure of the orientation map.
Dynamic Modification of Cortical Orientation Tuning Mediated by Recurrent Connections
Gidon Felsen, Yao-song Shen, Haishan Yao, Gareth Spor, Chaoyi Li, and Yang Dan
Neuron, Vol 36, 945-954, 5 December 2002
Receptive field properties of visual cortical neurons depend on the spatiotemporal context within which
the stimuli are presented. We have examined the temporal context dependence of cortical orientation
tuning using dynamic visual stimuli with rapidly changing orientations. We found that tuning to the
orientation of the test stimulus depended on a briefly presented preceding stimulus, with the preferred
orientation shifting away from the preceding orientation. Analyses of the spatial-phase dependence of the
shift showed that the effect cannot be explained by purely feedforward mechanisms, but can be accounted
for by activity-dependent changes in the recurrent interactions between different orientation columns.
Thus, short-term plasticity of the intracortical circuit can mediate dynamic modification of orientation
tuning, which may be important for efficient visual coding.
Lateral Connectivity and Contextual Interactions in Macaque Primary Visual Cortex
Dan D. Stettler, Aniruddha Das, Jean Bennett, and Charles D. Gilbert
Neuron, Vol 36, 739-750, 14 November 2002
Two components of cortical circuits could mediate contour integration in primary visual cortex (V1):
intrinsic horizontal connections and feedback from higher cortical areas. To distinguish between these, we
combined functional mapping with a new technique for labeling axons, a recombinant adenovirus bearing
the gene for green fluorescent protein (GFP), to determine the extent, density, and orientation specificity
of V1 intrinsic connections and V2 to V1 feedback. Both connections cover portions of V1 representing
regions of visual space up to eight times larger than receptive fields as classically defined, though the
intrinsic connections are an order of magnitude denser than the feedback. Whereas the intrinsic
connections link similarly oriented domains in V1, V2 to V1 feedback displays no such specificity. These
findings suggest that V1 intrinsic horizontal connections provide a more likely substrate for contour
integration.
Reward-Dependent Gain and Bias of Visual Responses in Primate Superior Colliculus
Takuro Ikeda and Okihide Hikosaka
Neuron, Vol 39, 693-700, 14 August 2003
Eye movements are often influenced by expectation of reward. Using a memory-guided saccade task with
an asymmetric reward schedule, we show that visual responses of monkey SC neurons increase when the
visual stimulus indicates an upcoming reward. The increase occurred in two distinct manners: (1)
reactively, as an increase in the gain of the visual response when the stimulus indicated an upcoming
reward; (2) proactively, as an increase in anticipatory activity when reward was expected in the neuron's
response field. These effects were observed mostly in saccade-related SC neurons in the deeper layer
which would receive inputs from the cortical eye fields and the basal ganglia. These results, together with
recent findings, suggest that the gain modulation may be determined by the inputs from both the cortical
eye fields and the basal ganglia, whereas the anticipatory bias may be derived mainly from the basal
ganglia.
Far side:
Neuronal synchrony does not correlate with motion coherence in cortical area MT
Alexander Thiele, Gene Stoner
Nature 421, 366 - 370 (23 Jan 2003)
Natural visual scenes are cluttered with multiple objects whose individual features must somehow be
selectively linked (or 'bound') if perception is to coincide with reality. Recent neurophysiological
evidence supports a 'binding-by-synchrony' hypothesis: neurons excited by features of the same object
fire synchronously, while neurons excited by features of different objects do not. Moving plaid patterns
offer a straightforward means to test this idea. By appropriate manipulations of apparent transparency, the
component gratings of a plaid pattern can be seen as parts of a single coherently moving surface or as two
non-coherently moving surfaces. We examined directional tuning and synchrony of area-MT neurons in
awake, fixating primates in response to perceptually coherent and non-coherent plaid patterns. Here we
show that directional tuning correlated highly with perceptual coherence, which is consistent with an
earlier study. Although we found stimulus-dependent synchrony, coherent plaids elicited significantly less
synchrony than did non-coherent plaids. Our data therefore do not support the binding-by-synchrony
hypothesis as applied to this class of motion stimuli in area MT.
Dissociation between hand motion and population vectors from neural activity in motor cortex
Stephen H. Scott, Paul L. Gribble, Kirsten M. Graham, D. William Cabel
Nature 413, 161 - 165 (13 Sep 2001)
The population vector hypothesis was introduced almost twenty years ago to illustrate that a population
vector constructed from neural activity in primary motor cortex (MI) of non-human primates could
predict the direction of hand movement during reaching. Alternative explanations for this population
signal have been suggested but could not be tested experimentally owing to movement complexity in the
standard reaching model. We re-examined this issue by recording the activity of neurons in contralateral
MI of monkeys while they made reaching movements with their right arms oriented in the horizontal
plane—where the mechanics of limb motion are measurable and anisotropic. Here we found systematic
biases between the population vector and the direction of hand movement. These errors were attributed to
a non-uniform distribution of preferred directions of neurons and the non-uniformity covaried with peak
joint power at the shoulder and elbow. These observations contradict the population vector hypothesis and
show that non-human primates are capable of generating reaching movements to spatial targets even
though population vectors based on MI activity do not point in the direction of hand motion.
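For reference, the classical population-vector construction that the study re-examines can be sketched as follows. Cosine tuning, the parameter values, and the simulated distributions of preferred directions are illustrative assumptions, not the recorded data.

```python
import numpy as np

rng = np.random.default_rng(3)

def population_vector(move_dir, preferred_dirs, baseline=20.0, depth=10.0):
    """Classical population vector: each cell contributes a unit vector along its
    preferred direction, weighted by its cosine-tuned rate minus baseline."""
    rates = baseline + depth * np.cos(preferred_dirs - move_dir)
    weights = rates - baseline
    x = np.sum(weights * np.cos(preferred_dirs))
    y = np.sum(weights * np.sin(preferred_dirs))
    return np.arctan2(y, x)

move = np.deg2rad(30.0)
uniform_pds = np.linspace(0.0, 2.0 * np.pi, 400, endpoint=False)      # uniform preferred directions
skewed_pds = rng.vonmises(mu=np.deg2rad(135.0), kappa=2.0, size=400)  # over-representation near 135 deg

print(f"hand movement direction:        {np.rad2deg(move):6.1f} deg")
print(f"PV, uniform preferred dirs:     {np.rad2deg(population_vector(move, uniform_pds)):6.1f} deg")
print(f"PV, non-uniform preferred dirs: {np.rad2deg(population_vector(move, skewed_pds)):6.1f} deg")
# A non-uniform distribution of preferred directions biases the population vector
# away from the hand's direction, the kind of dissociation the study reports.
```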