Integration of multiple channels in language comprehension

David Vinson
UCL Institute for Multimodal Communication
Department of Experimental Psychology
Successful face-to-face communication involves integrating multiple types of
information, but studies of spoken languages overwhelmingly focus on a single
primary channel: speech. Even those studies concerned with the multimodal nature
of linguistic input deal mainly with the visual aspects of speech, in
particular mouth movements. However, studies of language as a multimodal
phenomenon highlight the way in which gesture provides additional information that
supports or supplements the linguistic signal. A number of
studies now show that gesture is closely integrated into language comprehension, with
incongruent gestures impeding comprehension, but such studies have so far used
ecologically invalid materials in which a speaker's hands, but not face, are visible (e.g. Kelly
et al., 2010).
This project includes behavioural studies investigating how gestural and linguistic
information is integrated in the comprehension of digitally manipulated video stimuli, using
methods recently developed in our lab. In these, naturally produced speech and
gestures are recombined into incompatible combinations (e.g. saying "swimming" while
gesturing PUNCHING). This provides a more ecologically valid type of stimulus in which
visual aspects of speech and of gesture are both fully visible. Initial work in our lab
indicates that gestures remain highly relevant and are obligatorily integrated even when an
additional visual cue (mouth movements) is present. Moreover, the relative weighting of
these cues appears to differ for native and non-native speakers, but the conditions under
which integration occurs have yet to be identified. Projects in this area employ behavioural
and/or eye-tracking methods to gain a better understanding of how comprehenders
integrate these multisensory cues to meaning:
1. Individual differences: what characteristics predict the extent to which different people
may use speech vs. gesture for comprehension? In addition to our preliminary evidence
suggesting a different balance between native and non-native speakers, other recent
studies suggest that this may differ with age (Cocks et al., 2011), and with participants'
memory span for bodily actions (Wu & Coulson, 2015). This project further investigates
the characteristics that affect individuals' different use of speech and gesture in
comprehension.
2. Content differences: Project #1 acknowledges that differences between individuals
may modulate the relative balance of speech and gesture, but the same is also likely to
be true of the content: the balance of speech and gesture in delivering a particular
message may vary from time to time. This project focuses on gestures that co-occur
with single words, and assesses the extent to which characteristics of the words and
gestures affect the relative contribution of speech and gesture. (For a project that
investigates speech and gesture in more naturalistic contexts, please look up the project
by Jeremy Skipper and David Vinson.)
In collaboration with Prof Gabriella Vigliocco (UCL) and Dr Pamela Perniss (University of
Brighton)
Contact:
d.vinson@ucl.ac.uk
Relevant articles
Cocks, N., Morgan, G., & Kita, S. (2011). Iconic gesture and speech integration in younger
and older adults. Gesture, 11(1), 24-39.
Kelly, S. D., Özyürek, A., & Maris, E. (2010). Two sides of the same coin: Speech and
gesture mutually interact to enhance comprehension. Psychological Science, 21(2), 260-267.
Wu, Y. C., & Coulson, S. (2015). Iconic gestures facilitate discourse comprehension in
individuals with superior immediate memory for body configurations. Psychological
Science. https://pss.sagepub.com/content/early/2015/09/15/0956797615597671.full