The neural systems underlying the recognition of linguistic and emotional facial expressions
Stephen McCullough
The University of San Diego, California
Karen Emmorey
The Salk Institute for Biological Studies
In American Sign Language (ASL) the face conveys both emotional and linguistic information.
Recognizing that certain facial expressions mark grammatical contrasts is unique to users of
signed languages, whereas recognizing emotional facial expressions is universal to humans. We
used functional magnetic resonance imaging (fMRI) to investigate the neural systems underlying
the recognition of such functionally distinct expressions. Recently, it has been argued that
cognitively distinct aspects of face perception are mediated by distinct neural representations.
We hypothesized that the laterality of these representations can be influenced by the function of
the expression conveyed by the face. Linguistic facial expressions were predicted to robustly
engage left hemisphere structures only for ASL signers, whereas perception of emotional
expressions was predicted to be lateralized to the right hemisphere for both signers and
nonsigners.
Three participant groups were studied: native Deaf ASL signers, hearing native ASL signers,
and hearing nonsigners. Subjects viewed static facial expressions produced by different models.
The expressions were either emotional (happy, sad, angry, disgusted, surprised, or fearful) or
linguistic (adverbial markers glossed as MM (meaning “effortlessly”), CS (“recently”), TH
(“carelessly”), INTENSE, PUFF (“a great deal” or “a large amount”), and PS (“smoothly”)).
Facial expressions were presented in two conditions: a) a “face only” condition that showed just
the face (as in previous studies of facial expression recognition) and b) a “face with verb”
condition that consisted of still images of signers producing a manual verb (e.g., WRITE,
DRIVE) with either a facial adverb or an emotional facial expression. For both conditions,
participants made same/different judgments to two sequentially presented facial expressions,
blocked by expression type. This target task alternated with a control task in which subjects
made same/different judgments regarding gender. For the control task, the models produced
neutral expressions, either with or without verbs.
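To make the block structure concrete, the short Python sketch below assembles same/different trial pairs for each expression type and condition, alternating with the gender control task. The block length, number of trials per block, and the roughly 50% rate of “same” pairs are illustrative assumptions, not parameters reported here.

import random

# Hypothetical sketch of the blocked same/different design described above.
# Expression labels come from the stimulus sets in the text; block length,
# trial counts, and the 50% "same" rate are assumptions, not reported parameters.
EMOTIONAL = ["happy", "sad", "angry", "disgusted", "surprised", "fearful"]
LINGUISTIC = ["MM", "CS", "TH", "INTENSE", "PUFF", "PS"]

def make_block(expressions, condition, n_trials=8):
    """Build one block of sequentially presented same/different expression pairs."""
    trials = []
    for _ in range(n_trials):
        first = random.choice(expressions)
        # Roughly half the trials repeat the expression ("same" trials).
        second = first if random.random() < 0.5 else random.choice(expressions)
        trials.append({
            "condition": condition,  # "face only" or "face with verb"
            "pair": (first, second),
            "correct_response": "same" if first == second else "different",
        })
    return trials

# Target blocks (expression matching) alternate with gender-judgment control
# blocks in which the models hold neutral expressions.
run = []
for condition in ("face only", "face with verb"):
    for label, expressions in (("emotional", EMOTIONAL), ("linguistic", LINGUISTIC)):
        run.append({"task": f"{label} expression matching",
                    "trials": make_block(expressions, condition)})
        run.append({"task": "gender control (neutral faces)", "condition": condition})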
Both structural MRI and fMRI scans were performed using a 1.5 Tesla Siemens MRI scanner
with a whole head coil. Two structural MR images were acquired for each subject (T1-weighted
MPRAGE with TR = 11.4 ms, TE = 4.4 ms, FOV = 256 mm, and flip angle = 10°; voxel dimensions:
1 × 1 × 1 mm). For all functional scans, T2*-weighted interleaved multislice gradient-echo echo-planar
imaging (TR = 4 s, TE = 44 ms, FOV = 192 mm, flip angle = 90°, 64 × 64 matrix) was used to acquire
24 contiguous, 5 mm thick coronal slices extending from the occipital lobe to the mid-frontal lobe,
with a voxel dimension of 3 × 3 × 5 mm. fMRI time-series data were pre-processed and analyzed with
AFNI in several steps to extract voxel counts within our regions of interest (ROIs) for analysis
(a sketch of this ROI-based analysis appears below). ROIs were the bilateral superior temporal
sulcus and fusiform gyrus. Recent
neuroimaging results with hearing subjects indicate that the superior temporal sulcus (STS) is
critically involved in processing changeable aspects of the face, such as eye gaze, mouth
configuration, and facial expression. Furthermore, attention to emotional facial expression can
modulate activity within the right superior temporal sulcus. In addition, the fusiform gyrus (FG)
has been identified as crucial to the perception of faces and as particularly critical to perceiving
invariant properties of faces, such as gender or identity. For hearing subjects, activation within
FG in response to faces may be bilateral but is often lateralized to the right hemisphere.
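As an illustration of how voxel counts within the ROIs can be turned into a lateralization measure, the sketch below computes a conventional laterality index, LI = (L - R) / (L + R), from suprathreshold voxel counts in left- and right-hemisphere masks. The threshold value, array shapes, and the LI formula itself are assumptions for illustration; the study states only that voxel numbers within the STS and FG ROIs were extracted with AFNI.

import numpy as np

def laterality_index(left_voxels: int, right_voxels: int) -> float:
    """Conventional LI: +1 = fully left-lateralized, -1 = fully right-lateralized."""
    total = left_voxels + right_voxels
    return (left_voxels - right_voxels) / total if total else 0.0

def count_active_voxels(stat_map, roi_mask, threshold=3.0):
    """Count suprathreshold voxels inside an ROI mask (threshold is an assumed example)."""
    return int(np.sum((stat_map > threshold) & roi_mask))

# Hypothetical inputs: a statistical activation map and left/right STS masks,
# e.g. exported from AFNI as arrays; shapes match a 64 x 64 x 24 acquisition grid.
stat_map = np.random.randn(64, 64, 24)          # placeholder activation statistics
left_sts = np.zeros(stat_map.shape, dtype=bool)
right_sts = np.zeros(stat_map.shape, dtype=bool)
left_sts[10:20, 30:40, 5:15] = True             # placeholder ROI locations
right_sts[44:54, 30:40, 5:15] = True

li = laterality_index(count_active_voxels(stat_map, left_sts),
                      count_active_voxels(stat_map, right_sts))
print(f"STS laterality index: {li:+.2f}")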
For deaf ASL signers and hearing nonsigners, neural activation was observed in the superior
temporal sulcus and the fusiform gyrus. The data from hearing native ASL signers are currently
being collected and analyzed. Within the STS and FG, the lateralization of activation was
dependent upon subject group (deaf signers or hearing nonsigners) and facial expression type
(linguistic or emotional). Within the STS, activation for emotional expressions was lateralized to
the right hemisphere for both subject groups, whereas activation for linguistic facial expressions
was significantly left lateralized only for the deaf ASL signers, and only when linguistic facial
expressions co-occurred with verbs. This leftward asymmetry for facial expressions presented
within a linguistic context indicates strong left-hemisphere dominance for language processing,
regardless of the form in which that linguistic information is encoded.
Within the fusiform gyrus, activation was significantly left lateralized for both expression types
for the deaf ASL signers, whereas activation was slightly right lateralized for both expression
types for the hearing nonsigners. One possible explanation for the (somewhat surprising)
finding that the deaf signers exhibited left-lateralized activation within FG for both linguistic and
emotional expressions is that the nature of the processing required for linguistic facial
expressions shifts the activation to the left hemisphere. Specifically, we hypothesize that this left
lateralization arises because of the need to rapidly analyze local facial features during on-line
sign language processing. Several studies indicate that local and global face processing are
mediated by the left and right fusiform gyri, respectively. Recognition of linguistic facial
expressions requires identification of local facial features (McCullough & Emmorey, 1997), and
we hypothesize that left lateralization may emerge from the life-long and constant neural activity
associated with local facial feature processing for signers. The data from hearing native ASL
signers now being analyzed will help to confirm or disconfirm this hypothesis.
Overall, our study has shown that the neural regions associated with general facial expression
recognition are engaged in a consistent, predictable, and robust manner. Similar neural activation was consistently
found in the STS and FG regions across both deaf and hearing subjects. The results support the
hypothesis that the right hemisphere is dominant for recognition of emotional facial expressions.
The results also show that the neural representation underlying facial expression recognition can
be modulated by the function of the expression and by the perceiver's language experience. We conclude that function in part drives the lateralization of neural systems that
process human facial expressions.
References:
McCullough, S., & Emmorey, K. (1997). Face processing by deaf ASL signers: Evidence for expertise in distinguishing local features. Journal of Deaf Studies and Deaf Education, 2(4), 212–222.