Signing in the Visual World: The time course of lexical recognition in American Sign
Language
Presenting Language: English
Individuals perceiving language must process input dynamically as it unfolds in real time.
Research on spoken word recognition has revealed that listeners continually evaluate the
unfolding speech input against a set of activated potential lexical candidates. The specific lexical
items activated include those sharing both phonological (Allopenna, Magnuson, & Tanenhaus,
1998) and semantic (Huettig & Altmann, 2005) features with the target. However, little is
known about the nature of online processing during sign recognition (Morford & Carlson, 2011;
Thompson et al., 2010). In particular, the fact that signs are produced manually and contain both
sequentially and simultaneously organized sub-lexical features could lead to qualitatively
different patterns of online recognition. Understanding the nature of online sign processing by
native-signing adults provides insights into the modality-specific versus supramodal aspects of
word recognition, and establishes baseline measures for future comparisons with signers from
diverse backgrounds.
The current study investigates online sign processing in native-signing deaf adults
(n=17) using an adaptation of the visual world paradigm (Tanenhaus et al., 1995), which has been
widely used in the study of spoken language processing. This paradigm takes advantage of the
fact that individuals naturally direct gaze to a named object. In our adapted paradigm,
participants first see four pictures of familiar objects on a computer monitor. Participants then
direct gaze to a central fixation cross, after which a sign appears that names one of the pictured
objects. Using an automated eye tracker, we measure the time
course and accuracy of lexical recognition in four conditions in which the semantic and
phonological relationships between the target and competitor items are systematically
manipulated.
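For concreteness, the logic of a single trial can be sketched as follows. This is a minimal illustration only: the condition labels, function names, and the assumption of a 2x2 design (with a fourth cell combining both competitor types) are ours for exposition, not the study's actual implementation.

    # Illustrative sketch of one trial in the adapted visual world paradigm (Python).
    # Condition labels and the assumed 2x2 design are for exposition only.
    from dataclasses import dataclass
    from enum import Enum

    class Condition(Enum):
        NO_COMPETITOR = "three unrelated distractors"
        PHONOLOGICAL = "competitor shares sub-lexical form features"
        SEMANTIC = "competitor shares semantic features"
        BOTH = "competitor shares form and meaning"  # assumed fourth cell

    @dataclass
    class Trial:
        target: str             # object named by the ASL sign
        distractors: list[str]  # remaining three pictured objects
        condition: Condition

    # Stub display/recording routines so the sketch runs end to end.
    def show_pictures(pics): print("display four pictures:", pics)
    def await_fixation(): print("wait for gaze on central fixation cross")
    def play_sign(target): print("present ASL sign naming:", target)
    def record_gaze(): print("log fixations from sign onset onward")

    def run_trial(trial: Trial):
        show_pictures([trial.target] + trial.distractors)
        await_fixation()
        play_sign(trial.target)
        record_gaze()

    run_trial(Trial("dog", ["chair", "moon", "apple"], Condition.NO_COMPETITOR))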
Overall, signers initiate looks to the target picture approximately 500 ms after sign onset, i.e.,
before the end of the sign. Although gaze serves a dual function for our deaf participants, both
perceiving the input and indexing lexical recognition, we find that signers shift
gaze to the target item as soon as enough lexical information becomes available. As predicted,
signers are fastest to look to the target picture, and spend the most time looking at it overall,
when no competitors are present (F(3,16) = 10.04, p < .0001). By contrast, signers are significantly
slower to look to the target picture, and spend less time looking overall, in the presence of a
phonological competitor. When a semantic competitor is present, signers are slower to look to
the target picture yet spend comparable total time looking at it, indicating a lesser degree of
interference from semantic competitors than from phonological competitors. These results
suggest that when signs are acquired as a first language early in life, they are mentally organized
according to sub-lexical, phonological features. Thus, modality does not appear to affect the
architecture or time course of online lexical recognition. We are currently investigating lexical
recognition, using identical measures, in deaf individuals with varying ages of acquisition and in
hearing second-language learners of sign, in order to examine the effects of early language
experience on sign processing.
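The dependent measure underlying Figure 1 is the standard visual-world quantity: the proportion of gaze samples falling on the target within successive time bins after sign onset. A minimal sketch of that computation, assuming gaze samples have already been coded by interest region (the bin size and toy data below are illustrative, not the study's values):

    # Proportion of looks to target per time bin (Python); bin size is an assumption.
    def proportion_looks_to_target(samples, bin_ms=50):
        """samples: (time_ms_from_sign_onset, region) pairs, where region is
        'target', 'competitor', or 'distractor'."""
        bins = {}
        for t, region in samples:
            b = (t // bin_ms) * bin_ms
            total, hits = bins.get(b, (0, 0))
            bins[b] = (total + 1, hits + (region == "target"))
        return {b: hits / total for b, (total, hits) in sorted(bins.items())}

    # Toy data: gaze shifts to the target around 500 ms after sign onset.
    toy = [(t, "target" if t >= 500 else "distractor") for t in range(0, 1000, 20)]
    curve = proportion_looks_to_target(toy)
    print(curve[480], curve[500])  # 0.0 before the shift, 1.0 after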
References:
Allopenna, P., Magnuson, J., & Tanenhaus, M. (1998). Tracking the time course of spoken word
recognition using eye movements: Evidence for continuous mapping models. Journal of
Memory and Language, 38, 419-439.
Huettig, F., & Altmann, G. T. M. (2005). Word meaning and the control of eye fixation:
Semantic competitor effects and the visual world paradigm. Cognition, 96, B23-B32.
Morford, J. P., & Carlson, M. L. (2011). Sign perception and recognition in non-native signers of ASL.
Language Learning and Development, 7(2), 149-168.
Tanenhaus, M. K., Spivey-Knowlton, M. J., Eberhard, K. M., & Sedivy, J. C. (1995).
Integration of visual and linguistic information in spoken language comprehension.
Science, 268(5217), 1632-1634.
Thompson, R. L., Fox, N., Vinson, D. P., & Vigliocco, G. (2010). Seeing the world through a
visual language: Visual world paradigm in BSL. Poster presented at Theoretical Issues in
Sign Language Research (TISLR), Purdue University, IN.
Figure 1: Time course of looks to target by condition (proportion of looks to target as a function of time from sign onset).