Linking Brain Response and Behavior to Reveal Top-Down
(and Inside-Out) Influences on Processing in Face Perception
Heather A. Wild
Thomas A. Busey
Department of Psychology
Indiana University
Bloomington, IN 47405
Please send correspondence to:
Heather Wild (hwild@indiana.edu)
or
Tom Busey (busey@indiana.edu)
Abstract
In two electrophysiological experiments we investigated contextual influences on
face and word recognition. An event-related potential component previously identified
with face processing (the N170) was shown to be modulated by task differences. Subjects
viewed faces and words embedded in fixed visual noise, and produced a larger N170 to
noise-alone trials when they expected a face. In a second experiment we found a larger
N170 on noise-alone trials when observers thought they saw a face. The results
demonstrate that the neurons responsible for the N170 are affected by a wider range of
influences than previously thought. In addition, the size of the N170 is related to the
behavioral response to an otherwise ambiguous stimulus, even in the absence of face-like
features. The results point to the intriguing suggestion that the illusion of a face in an
ambiguous display may result from greater activity in the temporal lobe face region.
Converging evidence from a variety of sources demonstrates that humans (and
other primates) have a special facility for face recognition, perhaps supported by
specialized areas in the inferotemporal cortex (Kanwisher, McDermott, & Chun, 1997;
Tanaka & Farah, 1993). Research on neural correlates of face processing
suggests that areas of the inferotemporal (IT) cortex respond selectively to faces and
other complex visual stimuli with which the observer has extensive experience and
expertise (Gauthier, Tarr, Anderson, Skudlarski, & Gore, 1999). Single-cell recording in
nonhuman primates also shows face selectivity in IT neurons (Perrett, Rolls, & Cann,
1979; Young & Yamane, 1992), which are at the end of a series of processing stages that
extend along a pathway down the temporal lobe from earlier visual areas. In humans,
there is evidence for slight right-hemisphere dominance (Watanabe, Kakigi, Koyama, &
Kirino, 1999). In addition, prosopagnosic individuals show evidence of brain lesions in
these areas, and several show evidence of right-hemispheric specialization (Eimer &
McCarthy, 1999; Farah, Rabinowitz, Quinn, & Liu, 2000).
While much of the work has addressed the nature of the visual information that
results in activity in area IT, relatively little research has focused on other sources of
influence. In the present work we address the nature of contextual influences that are
brought to bear by the perceiver during face and word recognition, and examine the degree
to which these influences affect activity occurring within 150-200 milliseconds after
stimulus onset.
Electrophysiological (EEG) studies demonstrate that when a face stimulus is
visually presented to observers during recording, a negative-going potential occurs
around 170 ms after stimulus onset (Bentin, 1998; Bentin, Allison, Puce, Perez, &
McCarthy, 1996; Olivares & Iglesias, 2000). This downward deflection (which need not
extend below zero) is known in the EEG/ERP literature as the N170. The N170 is largest
at recording sites T5 and T6, roughly corresponding to the left and right temporal lobe
respectively (Bentin et al., 1996). Intracranial EEG recordings in epileptic human
observers reveal a face-specific negative-going potential at 170-200 ms in IT and the
fusiform gyrus (Allison, Puce, Spencer, & McCarthy, 1999; McCarthy, Puce, Belger, &
Allison, 1999; Puce, Allison, & McCarthy, 1999). In addition, using a combination of
MEG, EEG and source localization (BESA), Watanabe et al. (1999) have localized the
N170 response to regions of the inferotemporal cortex, around the fusiform gyrus. Thus
the N170 component appears to be generated by neurons in the inferotemporal cortex.
The presentation of virtually any face or face-like stimulus results in an N170
component in the ERP. This is regardless of sex, age, emotional expression, or pose of
the face (Allison et al., 1999; McCarthy et al., 1999; Puce et al., 1999). Inverted (upside-down)
faces also yield an N170, although the onset is slightly delayed and the negative
deflection is slightly stronger (Bentin et al., 1996). Parts of faces (i.e., features alone) and
cartoon and animal faces also yield an N170, although it is sometimes attenuated and the
results are not entirely consistent (Allison et al., 1999; Bentin et al., 1996). Stimuli which
elicit an attenuated or nonexistent N170 include other highly homogenous classes of
symmetric complex visual stimuli such as flowers, cars, or butterflies; nonface body parts
(e.g., hands; McCarthy et al., 1999); and pixelwise scrambled faces and various
sinusoidal gratings (Allison et al., 1999). Components in the range of 100-150 ms have
been shown to be modulated by spatial attention (Hillyard & Anllo-Vento, 1998), but
these studies seem to indicate that modulation of these components by non-spatial
features such as color, motion, or shape depends on spatial attention. Thus selecting a class
of features (such as those in faces or words) without first selecting a spatial location may
not be sufficient to modulate the N170.
Researchers have examined whether there are top-down influences other than
spatial attention on the N170 for faces and, until recently, have found none. There
appears to be no modulation of the N170 by task demands, such as whether the face is a
target stimulus or not (Cauquil, Edmonds, & Taylor, 2000), and no effect of familiarity of
the face (Bentin & Deouell, 2000). This suggests that the N170 indexes purely perceptual
or bottom-up processes. Additionally, fMRI studies reveal that face-specific brain areas
are active any time there is a face in a scene, regardless of whether the face is relevant to
the task at hand (Gauthier et al., 1999). Thus, the neural response of cells in IT appears to
arise from feed-forward perceptual processing of faces. In support of this, Bentin et al.
(1996) suggested that the N170 indexes an automatically activated neural face detection
mechanism.
However, a recent study by Bentin and colleagues (Bentin, Sagiv, Mecklinger,
Friederici, & von Cramon, 2002) has altered this view. Subjects were first shown pairs of
dots, plus signs, or other simple shapes. These failed to elicit a strong N170. The pairs of
dots were then placed into a face context, where the dots became the eyes of a schematic
face. When the observers were again shown just the pairs of shapes, they reported
interpreting the shapes as eyes, and more importantly, the shapes and the complete faces
produced equivalent N170s. These results suggest that the N170 can be modulated by
contextual information.
Bentin et al. (2002) took advantage of the fact that pairs of dots can sometimes be
interpreted as eyes. In the present work we address whether the N170 can be modulated
in the absence of any face-like stimulus, even in the absence of any task demands or prior
experience with the stimuli. If so, this would re-define the response properties of the
neurons responsible for the N170. To accomplish this, we embedded faces and words in
visual noise, and varied the task or other factors.
The critical conditions derive from trials in which only noise is presented to the
subject. Our noise is not resampled on each trial, and thus we hold the physical stimulus
constant across the tasks for the noise-alone conditions. In Experiment 1 we ask whether
subjects produce a larger N170 to the noise-alone trials when they are looking for
faces than when looking for words. In Experiment 2 we ask whether subjects produce a
larger N170 to the noise-alone trials when they thought they saw a face, compared to a
word. The results indicate that the N170 is not just the signature of a feed-forward face
detector, but is affected by contextual information even in the absence of face-like
features. Surprisingly, on noise-alone trials there appears to be a direct relation between
the size of the N170 and whether the observer reports that a face or a word was presented.
Experiment 1
In Experiment 1 observers made face or word judgments to faces and words
embedded in noise. The trials were blocked by task. The critical question is whether
observers will produce a larger N170 to the noise-alone trials when looking for faces than
when looking for words.
Methods
Participants
Nine observers, eight of whom were right-handed, participated in the study. These
observers were research assistants from our lab and/or students at IU whose participation
comprised part of their labwork or coursework. All observers were naïve as to the
purpose of the study.
Apparatus and EEG recording parameters
The EEG was sampled at 1000 Hz, amplified by a factor of 20,000 (Grass Model P511K
amplifiers), and band-pass filtered at 0.1-100 Hz (notch at 60 Hz). Signals were
recorded from sites F3, F4, Cz, T5, and T6, with a nose reference and forehead ground;
all channel impedances were below 5 kOhm. Recording was done inside a Faraday cage.
Eyeblink trials were identified from channels F3 and F4 and removed from the analysis
with the help of blink calibration trials. Images were shown on a 21 inch (53.34 cm)
Macintosh grayscale monitor approximately 44 inches (112 cm) from participants. Data
were collected by a PowerMac 7100. These details were identical for Experiment 2.
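Although no analysis code accompanied the original report, the epoching described above (1000-Hz sampling, epochs from 100 ms before to 1100 ms after stimulus onset, blink detection on the frontal channels) can be sketched as follows. This is a minimal illustration in Python; the array shapes, variable names, and the fixed peak-to-peak rejection threshold are assumptions of ours, since the study itself used blink calibration trials rather than a threshold.

```python
# Minimal sketch (not the authors' pipeline) of epoching and blink rejection.
import numpy as np

FS = 1000                      # sampling rate in Hz (as reported)
PRE_MS, POST_MS = 100, 1100    # epoch window relative to stimulus onset

def epoch(raw, onsets):
    """raw: (n_channels, n_samples) continuous EEG.
    onsets: sample indices of stimulus onsets.
    Returns (n_trials, n_channels, n_times) baseline-corrected epochs."""
    pre = int(PRE_MS * FS / 1000)
    post = int(POST_MS * FS / 1000)
    epochs = np.stack([raw[:, t - pre:t + post] for t in onsets])
    baseline = epochs[:, :, :pre].mean(axis=2, keepdims=True)
    return epochs - baseline   # subtract the 100 ms pre-stimulus baseline

def reject_blinks(epochs, frontal_idx=(0, 1), threshold_uv=75.0):
    """Crude peak-to-peak rejection on frontal channels (F3/F4 assumed to be
    channels 0 and 1); the threshold is an illustrative assumption."""
    frontal = epochs[:, frontal_idx, :]
    p2p = frontal.max(axis=2) - frontal.min(axis=2)
    keep = (p2p < threshold_uv).all(axis=1)
    return epochs[keep], keep
```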
Stimuli
The stimuli for both experiments appear in Figure 1. Face stimuli consisted of
frontal views of one male and one female face with neutral expressions, generated using
the PoserTM application (Metacreations). Faces subtended a visual angle of 2.1 x 2.8
degrees. Two low-imagery words were chosen for the second task (“Honesty” and
“Trust”). Words subtended a visual angle of 1.1 x 0.37 degrees. All stimuli were
embedded in white noise (4.33 x 4.33 degrees of visual angle) that was fixed (i.e., not
resampled) on all trials. The faces and words were presented at three contrast levels
(high, low, and zero). The zero contrast condition contained just noise and therefore had
no correct answer. The identical stimuli were used in Experiment 2.
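The fixed-noise construction is simple to illustrate: a single noise field is generated once, and the face or word is added to it at the desired contrast, so the zero-contrast (noise-alone) display is physically identical on every trial. The sketch below is our own illustration; the image size, noise parameters, and contrast values are assumptions, not the values used in the experiments.

```python
# Minimal sketch of embedding a stimulus in a fixed sample of white noise.
import numpy as np

rng = np.random.default_rng(0)
H, W = 256, 256                                    # illustrative image size
fixed_noise = rng.normal(0.5, 0.15, size=(H, W))   # generated once, reused on every trial

def embed(signal, contrast):
    """signal: (H, W) grayscale image in [0, 1] with mean ~0.5.
    contrast = 0 gives the noise-alone display; the noise itself never changes."""
    return np.clip(fixed_noise + contrast * (signal - 0.5), 0.0, 1.0)

# e.g. high-, low-, and zero-contrast versions of the same face image:
# high, low, noise_only = embed(face, 0.8), embed(face, 0.2), embed(face, 0.0)
```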
Procedure
Participants completed the face task in the first half of the experiment and the
word task in the second half. In the face task, observers made male/female judgments on
100 trials per contrast level that were presented in random order within the face block.
The word task involved trust/honesty judgments, also with 100 trials per contrast level.
Observers were told that there was a stimulus present on every trial, but that it might be
difficult to see because of low contrast levels. Stimuli were presented for 1000 ms. EEG
was recorded from 100 ms prior to stimulus onset to 1100 ms post-stimulus onset. The
subject responded after the stimulus disappeared.
Results and Discussion - Experiment 1
EEG signals were averaged across trials for each subject within condition, such
that there were six ERP traces (high, low, and zero contrast for faces and for words).
As expected, the high-contrast face produced a much larger N170 than the high-contrast
word, and had a latency of about 170 ms. The critical trials of the experiment are those
where only noise was presented, because in these trials the physical stimulus is held
constant and only the subjects' expectations vary (i.e. looking for a face or looking for a
word). The grand average ERPs collapsed across subjects for these conditions are
presented in Figure 2 (sites T5 and T6). Our central question is whether we see a larger
N170 in the noise-only trials when subjects expect a face than when they expect a word.
At both T6 and T5 there is a greater N170 when subjects are looking for a face. To assess
this difference statistically, we computed the average amplitude for each subject in the
time window from 140-200 ms for each condition. Across subjects, there was a
significantly greater negative potential for the face condition for T5 (two-tailed t(8) =
2.62, p = .031) and T6 (two-tailed t(8) = 2.35, p = .047). We also analyzed the P100 and
P300 components by averaging the amplitudes in the 80-130 ms and 260-340 ms
windows and found no significant differences.
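The windowed-amplitude analysis reported above (mean amplitude from 140-200 ms per subject and condition, followed by a paired two-tailed t-test across subjects) could be implemented as in the sketch below. The array names are illustrative assumptions of ours.

```python
# Minimal sketch of the mean-amplitude analysis in the 140-200 ms window.
import numpy as np
from scipy import stats

FS = 1000
PRE_MS = 100       # epochs begin 100 ms before stimulus onset

def mean_amplitude(erp, t_start=140, t_end=200):
    """erp: (n_times,) subject-average ERP for one condition at one channel,
    with a 100 ms pre-stimulus baseline period at the start of the trace."""
    i0 = int((PRE_MS + t_start) * FS / 1000)
    i1 = int((PRE_MS + t_end) * FS / 1000)
    return erp[i0:i1].mean()

# face_erps, word_erps: (n_subjects, n_times) noise-alone ERPs at T5 or T6
# face_amp = np.array([mean_amplitude(e) for e in face_erps])
# word_amp = np.array([mean_amplitude(e) for e in word_erps])
# t, p = stats.ttest_rel(face_amp, word_amp)   # paired, two-tailed
```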
We found a greater N170 on noise-alone trials where observers were expecting a
face rather than a word, despite the fact that the physical stimulus was identical on these
trials. These results demonstrate that top-down contextual factors can modulate the N170,
and are consistent with the results of Bentin et al. (2002). The current results extend previous
results by demonstrating that the N170 can be modulated by contextual influences even
in the absence of face-like features.
Experiment 2
The contextual influences in Experiment 1 take the form of what might be thought
of as perceptual set. During a face block observers are looking for facial features, and
perhaps this involves attention to specific spatial frequency bands or features that fit the
response properties of the N170 neurons. This attentional tuning could occur well before
the start of a trial and be held constant throughout a block of trials.
To remove this pre-trial contextual information, in Experiment 2 we switched to a
mixed design. Subjects made a face/word discrimination on each trial, and again we
included the noise-only trials. The critical question is whether the N170 is larger on those
trials in which subjects think they see a face in a noise-only display (relative to trials in
which they think they see a word). If so, this would relate the N170 activity directly to
the response and indicate how activity in this region might influence aspects of conscious
behavior such as perception and overt responding.
Methods
Participants
Ten right-handed observers participated in the study. These observers were
research assistants from our lab and/or students at IU whose participation comprised part
of their labwork or coursework. All observers were naïve as to the purpose of the study.
Procedure
The procedure was similar to that of Experiment 1, except that face and word
trials were intermixed and the task was to indicate whether a face or a word was
embedded in the noise. Observers responded via a joystick using a single finger, and were
asked to make speeded responses. This change was made to eliminate additional guessing
strategies not tied to the initial perceptual processing of the stimulus. The same stimuli
were presented as in Experiment 1 (high, low, or zero contrast faces and words embedded
in noise), and observers were told that there was a stimulus present on every trial, despite
the fact that one-third of the trials were noise-alone. Observers were also told that faces
and words appeared equally often. There was a total of 720 trials.
Results and Discussion - Experiment 2
EEG signals were averaged across trials for each subject based on the stimulus
category for high- and low-contrast words and faces, and the noise-only trials were
binned according to the subject's response (either 'face' or 'word'). Again we found a large
N170 for the high contrast face, while the high contrast word showed a much later onset
and a reduced amplitude. There was a slight bias to say “face” such that observers made
this response on 62% of the noise-alone trials. As shown in Figure 3, on these noise-alone
trials, ‘face’ responses are associated with a greater N170 than ‘word’ responses in the
right temporal channel (T6) (two-tailed, t(9) = 2.74, p = .023), but not for the left
temporal channel (T5) (t(9) = 1.54, p = .157, ns). We also analyzed the P100 and P300 by
averaging the amplitudes in the 80-130 ms and 260-340 ms windows and found no
significant differences for either channel. Thus the differences in the ERPs between
‘face’ and ‘word’ responses are confined to the right temporal lobe at about 170 ms after
stimulus onset.
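The response-conditioned averaging used here differs from Experiment 1 only in how the noise-alone epochs are binned before averaging. A minimal sketch, with variable names of our own choosing, follows; the 140-200 ms mean-amplitude comparison then proceeds exactly as in Experiment 1.

```python
# Minimal sketch of binning noise-alone epochs by the observer's response.
import numpy as np

def average_by_response(epochs, responses, channel):
    """epochs: (n_trials, n_channels, n_times) noise-alone epochs;
    responses: per-trial 'face' / 'word' labels.
    Returns the two response-conditioned ERPs at the given channel."""
    responses = np.asarray(responses)
    face_erp = epochs[responses == 'face', channel, :].mean(axis=0)
    word_erp = epochs[responses == 'word', channel, :].mean(axis=0)
    return face_erp, word_erp

# e.g. at T6 (right temporal), then compare 140-200 ms mean amplitudes across
# subjects with a paired t-test as in Experiment 1.
```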
The results of Experiment 2 demonstrate that the N170 is related to the response
given by subjects to an ambiguous stimulus (the noise-alone image). That the differences
were confined to the right temporal lobe at about 170 ms after stimulus onset puts strong
constraints on the nature of the processing that causes the differences in the EEG. Below
we discuss possible explanations for these differences at the N170 for the two responses.
General Discussion
In Experiment 1, we demonstrated that modulation of the EEG signal at 170 ms
can result from contextual influences in the absence of face-like features. Our noise-alone
display bears no resemblance to the perceptual stimuli that are typically reported in the
literature as generating an N170. First, there are no clearly identifiable face-like features.
Second, the display has a flat frequency spectrum, whereas faces have approximately a
1/f frequency spectrum. Third, pixelwise scrambled faces (which produce displays
similar to our white noise) do not yield an appreciable N170 (McCarthy et al., 1999).
Thus, we feel it is reasonable to conclude that we have excluded possible face-like
features from our noise-alone stimuli. The greater N170 on noise-alone trials in the face
block appears to be driven by expectations or differences in the nature of the perceptual
information acquired by the subject, rather than by bottom-up perceptual differences,
since the physical stimulus was constant across the two blocks.
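The spectral claim can be checked directly on the stimuli: a radially averaged amplitude spectrum of the white-noise field is roughly flat, whereas that of a face image falls off approximately as 1/f. The sketch below is our own illustration of such a check, not part of the original analysis.

```python
# Illustrative check of the amplitude spectra of the noise and face images.
import numpy as np

def radial_amplitude_spectrum(img, n_bins=32):
    """Radially averaged amplitude spectrum of a square grayscale image."""
    f = np.abs(np.fft.fftshift(np.fft.fft2(img - img.mean())))
    h, w = img.shape
    y, x = np.indices((h, w))
    r = np.hypot(y - h // 2, x - w // 2)
    bins = np.linspace(0, r.max(), n_bins + 1)
    idx = np.digitize(r.ravel(), bins)
    return np.array([f.ravel()[idx == i].mean() for i in range(1, n_bins + 1)])

# noise_spec = radial_amplitude_spectrum(fixed_noise)   # approximately flat
# face_spec  = radial_amplitude_spectrum(face)          # approximately 1/f fall-off
```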
In Experiment 2 we found that observers show a greater N170 when they think
they see a face in the noise rather than a word, an effect which is localized to the right
temporal lobe. These results establish a link between the magnitude of the neural
response at 170 ms and the behavioral response (i.e., whether observers say they saw a
face). This is the result of neither bottom-up nor top-down influences, but rather
demonstrates what might be thought of as “inside-out” processing. Under this account,
the internal response of N170 neurons varies from trial to trial due to internal noise or
other stochastic processes. When faced with a noise-only trial that occurs in conjunction
with a greater activity level in the N170 neurons, the observer may experience the
illusion of the presence of a face and respond accordingly. In support of this hypothesis,
several studies have found that observers report face and face-like percepts following
intracranial stimulation of face-specific areas of the cortex (Puce et al., 1999; Vignal,
Chauvel, & Halgren, 2000). In our experiments this illusion may be quite faint, but
sufficient to bias the subject's response on noise-alone trials.
Before accepting this candidate hypothesis, we must first rule out several other
plausible explanations for this effect. First, the activity at 170 ms occurs too early to
simply be a signature of the observer’s response after it had been executed, since the
median reaction times were around 600 ms for both conditions. Thus while it is possible
(even likely) that the N170 neurons influence the decision, it is unlikely that the subject’s
decision influences the electrophysiological response at 170 ms.
The inside-out account assumes that the decision process starts once the trial
begins. One candidate mechanism that would allow the process to start earlier would be
priming from the previous trial. We explored this possibility by examining whether the
presentation of a face on the previous trial resulted in a larger N170 on the current
noise-alone trials. As shown in the left panel of Figure 4, the size of the N170 to noise-alone
trials has only a slight dependence on whether a face or a word was presented on the
previous trial. Noise-alone trials preceded by a face produced a reduced N170, which
contradicts the priming hypothesis. This difference is significant (t(9) = 2.43, p = .038),
mainly due to differences that occur late in the window. However, as shown in the right
panel of Figure 4, this effect is due entirely to differences on trials in which observers
responded ‘word’ to the noise-alone stimulus. For 'face' responses to noise-alone trials,
there were no differences in the EEG depending on whether the prior trial had been a face
or a word (t(9) = .04, p = .96). However, there was a slightly smaller N170 for word-response
noise-alone trials that were preceded by a face stimulus (t(9) = 3.24, p = .01),
which again contradicts the priming hypothesis. One possible explanation is negative
priming from the stimulus on the prior trial, but it is not clear why the effects of prior-trial
priming should be restricted to noise-alone trials in which the observer responded
‘word’. In addition, this effect is only a small part of the large difference seen in Figure 3
(right panel). This can be observed in the right panel of Figure 4 by collapsing the dark
lines together and the light lines together to recover the original effect in Figure 3. This
shows that the overall differences between the two responses are much larger than the
differences conditioned on a 'word' response. Thus we rule out the prior-trial priming
explanation as a major influence on our results in Experiment 2.¹
¹ Of course, we could also look at whether the response on the previous trial is correlated with the
magnitude of the N170 on the present trial. However, as observers had extremely high accuracy levels, the
responses and stimuli on previous trials were strongly correlated and thus the analysis would yield the same
conclusions.
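The prior-trial analysis amounts to splitting the noise-alone epochs by the stimulus shown on the preceding trial (and, for the right panel of Figure 4, also by the current response). A minimal sketch, with labels and array shapes assumed by us, follows.

```python
# Minimal sketch of conditioning noise-alone ERPs on the previous trial.
import numpy as np

def condition_on_prior(epochs, stimuli, responses, channel):
    """epochs: (n_trials, n_channels, n_times) epochs for all trials in order;
    stimuli: per-trial labels ('face', 'word', 'noise');
    responses: per-trial responses ('face' or 'word')."""
    stimuli = np.asarray(stimuli)
    responses = np.asarray(responses)
    out = {}
    for prior in ('face', 'word'):
        for resp in ('face', 'word'):
            # select current noise-alone trials, split by prior stimulus and
            # current response; the first trial of the session is skipped
            sel = ((stimuli[1:] == 'noise') & (stimuli[:-1] == prior)
                   & (responses[1:] == resp))
            out[(prior, resp)] = epochs[1:][sel, channel, :].mean(axis=0)
    return out
```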
A second possible explanation for the results of Experiment 2 is that observers
attend to face-like features or face-specific spatial frequency bands of the noise on some
trials, which provides a stronger input to the N170 neurons, leading to a larger N170 and
an overt face response. This explanation is of interest because it still ties the N170
activity to the behavioral response, although it doesn't explain why subjects should decide
to look for a face on a particular trial. However, Gold, Bennett, & Sekuler (1998) have
shown that faces and words are processed by humans using similar spatial frequency
filters despite their vastly different spatial frequency content. Thus attention to a particular
spatial frequency band by the observer in order to detect one type of stimulus may not
provide greater input to the N170 neurons. In addition, Puce et al. (1999) showed that
band-pass filtered faces, which include just high or just low spatial frequencies, give
N170 responses as strong as the N170 to the unfiltered face. Thus changing the spatial
frequency content of the input to the N170 neurons via selective attention to different
spatial frequency bands may not be sufficient to modulate the N170.
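For concreteness, band-pass filtering of the kind used by Puce et al. (1999) can be sketched as keeping only the spatial frequencies inside an annulus in the Fourier domain; the cutoff values below are illustrative assumptions, not those of the original study.

```python
# Illustrative sketch of band-pass filtering an image in the Fourier domain.
import numpy as np

def bandpass_image(img, low_cpi, high_cpi):
    """Keep spatial frequencies between low_cpi and high_cpi cycles per image."""
    h, w = img.shape
    fy = np.fft.fftfreq(h) * h            # frequencies in cycles per image
    fx = np.fft.fftfreq(w) * w
    radius = np.hypot(*np.meshgrid(fy, fx, indexing='ij'))
    mask = (radius >= low_cpi) & (radius <= high_cpi)
    spectrum = np.fft.fft2(img - img.mean()) * mask
    return np.real(np.fft.ifft2(spectrum)) + img.mean()

# low_sf_face  = bandpass_image(face, 0, 8)     # coarse structure only
# high_sf_face = bandpass_image(face, 24, 128)  # fine detail only
```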
Having at least tentatively eliminated explanations based on prior-trial priming
and attention to different features in the noise, we are left with the intriguing possibility
that the behavioral response is directly related to the activity in the N170 neurons. That
is, when presented with an ambiguous stimulus, observers may have a tendency to think
they saw a face when the activity of the N170 neurons is higher.
The data from both experiments lead to the view that the processing of faces, as
indexed by the N170, is much more flexible and amenable to contextual influences than
has previously been reported in the literature. The N170 does not seem to simply reflect a
feedforward face detector, but instead responds to the demands of the task and perhaps
internal influences. In Experiment 1, observers performing face or word tasks may tune
the behavior of the N170 neurons through top-down connections. In Experiment 2, the
differences may come from internal sources at the N170 neurons, but still influence
behavior. Thus we propose that, given no other information, internal activity may bias an
overt response, through processes we term inside-out influences. The fact that we can
link the activity in the N170 neurons to the behavioral response suggests that the output
of these neurons eventually becomes available to conscious awareness. The source of this
internal activity may be viewed as a form of internal noise that varies from trial to trial,
influencing the processing of perceptual input.
One way to investigate the nature of internal noise sources is to add external noise
to the stimulus, an approach that has recently grown in popularity as a method for measuring
internal noise and perceptual templates (Ahumada, 1987; Dosher & Lu, 1999; Gold, Bennett, & Sekuler,
1999a, 1999b; Gold, Murray, Bennett, & Sekuler, 2000; Skoczenski & Norcia, 1998; see
Pelli & Farell, 1999, for a review of methods). The current study used external noise
simply to create an ambiguous stimulus. However, the results point to a type of internal
noise in the face processing neurons that ultimately influences behavior, rather than a
noise source that simply limits performance. It seems likely that EEG recording may help
define the nature of the internal noise, and when used in conjunction with modeling, may
help resolve current debates on the nature of internal noise (i.e., additive or
multiplicative noise; e.g., Dosher & Lu, 1999). By relying on the time course of the
signal, the nature of the internal noise at different stages of processing might be revealed.
We are currently investigating how EEG data can be used with resampled noise to
compute classification images (Gold et al., 2000). This is the EEG analog of the reverse
correlation technique in single-cell recording that has been used to map out the receptive
fields of single neurons (e.g. DeAngelis, Ohzawa & Freeman, 1995). The hope is that the
EEG signal will reveal, at a gross level, the response properties of the N170 neurons.
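As an illustration of the classification-image idea, the purely behavioral version averages the resampled noise fields separately by response and takes the difference; the EEG analog we are pursuing would instead weight each noise field by a single-trial measure of the N170. The sketch below shows only the behavioral version, with names and shapes assumed by us, and is not the authors' method.

```python
# Minimal sketch of a behavioral classification image (reverse correlation).
import numpy as np

def classification_image(noise_fields, responses):
    """noise_fields: (n_trials, H, W) resampled noise shown on each trial;
    responses: per-trial 'face' / 'word' labels.
    Pixels that push observers toward 'face' come out positive."""
    responses = np.asarray(responses)
    face_mean = noise_fields[responses == 'face'].mean(axis=0)
    word_mean = noise_fields[responses == 'word'].mean(axis=0)
    return face_mean - word_mean

# An EEG analog might replace the response labels with a single-trial N170
# measure (e.g., mean amplitude at T6 in the 140-200 ms window) as the weight.
```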
The strength of the current approach is that it holds the physical stimulus constant
in order to remove explanations based on different attributes of the stimuli. This allows
us to tie the response of the neurons in area IT to the behavioral response and reveal
evidence of non-perceptual influences. Similar techniques could be applied to other
domains such as spatial attention and object recognition to disambiguate bottom-up, top-down, and inside-out processes.
References
Ahumada, A. J. (1987). Putting the visual system noise back in the picture.
Journal of the Optical Society of America A, 4, 2372-2378.
Allison, T., Puce, A., Spencer, D. D., & McCarthy, G. (1999).
Electrophysiological studies of human face perception. I: Potentials generated in
occipitotemporal cortex by face and non-face stimuli. Cerebral Cortex, 9(5), 415-430.
Bentin, S. (1998). Separate modules for face perception and face recognition:
Electrophysiological evidence. Journal of Psychophysiology, 12(1), 81-81.
Bentin, S., Allison, T., Puce, A., Perez, E., & McCarthy, G. (1996).
Electrophysiological studies of face perception in humans. Journal of Cognitive
Neuroscience, 8(6), 551-565.
Bentin, S., & Deouell, L. Y. (2000). Structural encoding and identification in face
processing: ERP evidence for separate mechanisms. Cognitive Neuropsychology, 17, 35-54.
Bentin, S., Sagiv, N., Mecklinger, A., Friederici, A., & von Cramon, Y. D.
(2002). Priming visual face-processing mechanisms: Electrophysiological evidence.
Psychological Science, 13(2), 190-193.
Cauquil, A. S., Edmonds, G. E., & Taylor, M. J. (2000). Is the face-sensitive
N170 the only ERP not affected by selective attention? NeuroReport, 11, 2167-2171.
DeAngelis, G.C., Ohzawa, I. & Freeman, R. D. (1995). Receptive field dynamics
in the central visual pathways. Trends in Neuroscience, 18, 451-458.
Dosher, B. A., & Lu, Z.-L. (1999). Mechanisms of perceptual learning. Vision
Research, 39, 3197-3221.
Eimer, M., & McCarthy, R. A. (1999). Prosopagnosia and structural encoding of
faces: Evidence from event-related potentials. NeuroReport, 10, 255-259.
Farah, M. J., Rabinowitz, C., Quinn, G. E., & Liu, G. T. (2000). Early
commitment of neural substrates for face recognition. Cognitive Neuropsychology, 17(1-3), 117-123.
Gauthier, I., Tarr, M. J., Anderson, A. W., Skudlarski, P., & Gore, J. C. (1999).
Activation of the middle fusiform "face area" increases with expertise in recognizing
novel objects. Nature Neuroscience, 2, 568-573.
Gold, J., Bennett, P. J., & Sekuler, A. B. (1998). The visual filters for letter and
face identification. Paper presented at the ARVO annual convention, Fort Lauderdale,
FL.
Gold, J., Bennett, P. J., & Sekuler, A. B. (1999a). Identification of band-pass
filtered letters and faces by human and ideal observers. Vision Research, 39, 3537-3560.
Gold, J., Bennett, P. J., & Sekuler, A. B. (1999b). Signal but not noise changes
with perceptual learning. Nature, 402, 176-178.
Gold, J., Murray, R. F., Bennett, P. J., & Sekuler, A. B. (2000). Deriving
behavioural receptive fields for visually completed contours. Current Biology, 10, 663-666.
Hillyard, S. A., & Anllo-Vento, L. (1998). Event-related brain potentials in the study
of visual selective attention. Proceedings of the National Academy of Sciences, USA, 95, 781-787.
Kanwisher, N., McDermott, J., Chun, M. M. (1997). The fusiform face area: A
module in human extrastriate cortex specialized for face perception. Journal of
Neuroscience, 17(11), 4302-4311.
McCarthy, G., Puce, A., Belger, A., & Allison, T. (1999). Electrophysiological
studies of human face perception. II: Response properties of face-specific potentials
generated in occipitotemporal cortex. Cerebral Cortex, 9(5), 431-444.
Olivares, E. I., & Iglesias, J. (2000). Neural bases of perception and recognition of
faces. Revista De Neurologia, 30(10), 946-952.
Pelli, D. G., & Farell, B. (1999). Why use noise? Journal of the Optical Society of
America A, 16, 647-653.
Perrett, D. I., Rolls, E. T., & Cann, W. (1979). Temporal lobe cells of the monkey
with visual responses selective for faces. Neuroscience Letters, Suppl. 2, 340.
Puce, A., Allison, T., & McCarthy, G. (1999). Electrophysiological studies of
human face perception. III: Effects of top-down processing on face-specific potentials.
Cerebral Cortex, 9(5), 445-458.
Skoczenski, A. M., & Norcia, A. M. (1998). Neural noise limitations on infant
visual sensitivity. Nature, 391, 697-700.
Tanaka, J. W., & Farah, M. J. (1993). Parts and wholes in face recognition.
Quarterly Journal of Experimental Psychology: Human Experimental Psychology, 46A,
225-245.
Vignal, J. P., Chauvel, P., & Halgren, P. (2000). Localised face processing by the
human prefrontal cortex: Stimulation-evoked hallucinations of faces. Cognitive
Neuropsychology, 17, 281-291.
Watanabe, S., Kakigi, R., Koyama, S., & Kirino, E. (1999). Human face perception traced
by magneto- and electro-encephalography. Cognitive Brain Research, 8, 125-142.
Young, M. P., & Yamane, S. (1992). Sparse population coding of faces in the
inferotemporal cortex. Science, 256, 1327-1331.
Figure 1. Stimuli for Experiments 1 and 2. From left to right: high- and low-contrast
female and male faces; noise alone; and low- and high-contrast words (“Honesty” and
“Trust”). Note that the noise is identical for all stimuli.
Figure 2. Experiment 1. Event-related potentials (ERPs) elicited at temporal lobe sites T5 (left panel, left
hemisphere) and T6 (right panel, right hemisphere). Solid lines indicate noise-alone trials where observers are
expecting a face; dashed lines indicate noise-alone trials where observers are expecting a word. The two asterisks
indicate significant effects at the N170 component.
Figure 3. Experiment 2. ERPs elicited at temporal lobe sites T5 (left panel, left hemisphere) and T6 (right
panel, right hemisphere). Solid lines indicate noise-alone trials where observers thought they saw a face;
dashed lines indicate noise-alone trials where observers thought they saw a word. The asterisk in the right
panel indicates a significant difference between the two responses at the N170 component.
Figure 4. Data from Experiment 2, channel T6 (right hemisphere). ERP traces from noise-alone trials
conditioned on the stimulus presented on the prior trial (left panel) and on prior trial and response (right
panel). Note that the time scale differs from that in prior figures to emphasize effects at the N170. The
asterisk in the left panel indicates a significant difference between the two conditions at the N170, due
mainly to the differences late in the averaging window.