Mouthings and their Reductions: A Kinematic Approach

(To be presented in ASL)
The mouthing of English words during the production of American Sign Language (ASL)
was previously considered to be purely a code-mixing phenomenon.[6] However, it is now argued
that ASL, like other signed languages, incorporates mouthings into the production of native ASL
signs. Yet mouthings occur with inconsistent form, often appearing “reduced,” with
portions of the word omitted.[3] Repeated attempts to explain mouthing reductions in terms of the
phonological properties of the spoken word have failed to provide a complete account of the
data.[1,2,7] The present work explores two factors, independent of spoken language, that
may contribute to the form of a sign’s mouthing. First, deaf signers may represent co-sign
mouthings as part of the lexical specification of a sign, distinct from the representation of its
spoken-language translation. Second, neuromotor control of multimodal coordination may cause
the simultaneous movements of the hands to influence movements of the mouth during co-sign
mouthing. Thus, the movements of the mouth during the production of manual signs are the
product of articulating a unique phonetic representation while yielding to pressure to coordinate
with movements of an independent anatomical structure.
To test this hypothesis, a passive-marker optical motion capture system recorded
participants’ signing and speech, tracking ten markers affixed to each participant’s hands and
face and computing each marker’s position in space. In Experiment 1, a native deaf
signer produced signs and words in three conditions: signing while mouthing, silently uttering a
word without signing, and speaking a word aloud without signing. A hearing, sign-naïve
participant followed the same procedure after being taught the signed stimuli. Zero-lag
cross-correlations revealed that the deaf signer exhibited a stronger correlation of lip movements
between the signing and silent conditions than between signing and speaking-aloud conditions
(r = 0.72 vs. r = 0.51). In contrast, the hearing non-signer exhibited no such difference between
conditions (r = 0.63 and r = 0.61, respectively). These results suggest that native signers represent
mouthings separately from speech.
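As a concrete illustration of this analysis (a minimal sketch, not the authors’ actual pipeline), a zero-lag cross-correlation between two movement traces can be computed as a Pearson correlation at lag zero. The Python sketch below uses hypothetical lip-aperture values and assumes both traces have already been time-normalized to the same number of samples.

    import numpy as np

    def zero_lag_xcorr(trace_a, trace_b):
        # Pearson correlation of two equal-length movement traces at lag zero.
        a = np.asarray(trace_a, dtype=float)
        b = np.asarray(trace_b, dtype=float)
        a = (a - a.mean()) / a.std()  # z-score each trace
        b = (b - b.mean()) / b.std()
        return float(np.mean(a * b))

    # Hypothetical lip-aperture traces (mm), already resampled to equal length:
    lip_signing = np.array([12.0, 10.5, 8.2, 6.0, 7.4, 9.9, 12.1])  # signing while mouthing
    lip_silent  = np.array([11.8, 10.9, 8.5, 6.3, 7.0, 9.5, 11.9])  # silent mouthing, no sign
    print(zero_lag_xcorr(lip_signing, lip_silent))  # r near 1 for closely matched traces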
In Experiment 2, the native deaf signer fingerspelled and mouthed nonce words, designed
to elicit a variety of mouth-hand movement combinations. Overall, movements of the two
articulators were tightly coupled; however, the number of gestures and the movement direction
of each articulator appear to be irrelevant to their coordination. As shown in Figure 1, the
stimulus O-B-O (/obo/) requires the mouth to close and open while the hands open and close,
whereas O-B-S (/ɑbz/) requires one closing gesture of the mouth while the hands open and close.
Despite the differing number of gestures required of the hands and mouth for O-B-S, the cross-correlation between the two articulators was actually greater for O-B-S than for O-B-O (r = 0.88 vs.
r = −0.47). While coordination of single gestures produced across two different body parts is
well documented,[4,5] further research is necessary to understand how the neuromotor system
organizes simultaneous sequences of gestures between the mouth and hands. Once this
coordination is understood, sign linguists can build a model of co-sign mouthing, independent of
speech, that more accurately accounts for mouthing reductions.
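To make the measures in Figure 1 concrete, the sketch below (Python, with randomly generated stand-in marker trajectories rather than real capture data) computes lip and hand aperture as frame-by-frame Euclidean distances between the marker pairs named in the caption and resamples each trace onto a common normalized time base, after which the two traces can be cross-correlated at lag zero as above.

    import numpy as np

    def aperture(marker_a, marker_b):
        # Euclidean distance between two markers at each frame;
        # inputs are (n_frames, 3) arrays of x, y, z positions in mm.
        return np.linalg.norm(np.asarray(marker_a) - np.asarray(marker_b), axis=1)

    def time_normalize(trace, n_samples=101):
        # Resample a movement trace onto 0-100% of the gesture (fixed sample count).
        old_t = np.linspace(0.0, 1.0, len(trace))
        new_t = np.linspace(0.0, 1.0, n_samples)
        return np.interp(new_t, old_t, trace)

    # Stand-in marker trajectories (frames x 3, mm); real data would come from the capture system.
    rng = np.random.default_rng(0)
    upper_lip, lower_lip = rng.random((240, 3)) * 10, rng.random((240, 3)) * 10
    index_tip, ventral_wrist = rng.random((300, 3)) * 100, rng.random((300, 3)) * 100

    lip_ap = time_normalize(aperture(upper_lip, lower_lip))      # lip aperture trace
    hand_ap = time_normalize(aperture(index_tip, ventral_wrist)) # hand aperture trace
    # lip_ap and hand_ap now share a common time base and can be compared directly.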
Figure 1. Time-normalized movement traces of fingerspelling with mouthing. Lip aperture is the
distance between the upper and lower lip markers. Hand aperture is the distance between the
index finger and ventral wrist markers.
[Figure 1: two panels, O-B-O and O-B-S, each plotting lip and hand aperture (mm) against normalized time.]
References
1. Ajello, R., Mazzoni, L., and Nicolai, F. Linguistic gestures: Mouthing in Italian Sign
Language. In P. Boyes Braem and R. Sutton-Spence, eds., The Hands are the Head of the
Mouth: The Mouth as Articulator in Sign Languages. Signum-Verlag, Hamburg, 2001, 231–
246.
2. Bergman, B. and Wallin, L. A preliminary analysis of visual mouth segments in Swedish Sign
Language. In P. Boyes Braem and R. Sutton-Spence, eds., The Hands are the Head of the
Mouth: The Mouth as Articulator in Sign Languages. Signum-Verlag, Hamburg, 2001, 51–68.
3. Boyes Braem, P. and Sutton-Spence, R., eds. The Hands are the Head of the Mouth: The
Mouth as Articulator in Sign Languages. Signum-Verlag, Hamburg, 2001.
4. Gentilucci, M., Benuzzi, F., Gangitano, M., and Grimaldi, S. Grasp with hand and mouth: a
kinematic study on healthy subjects. Journal of Neurophysiology 86, 4 (2001), 1685–99.
5. Gentilucci, M. and Volta, R.D. The motor system and the relationships between speech and
gesture. Gesture 7, 2 (2007), 159–177.
6. Lucas, C. Language contact in the American deaf community. Academic Press, San Diego,
1992.
7. Zeshan, U. Mouthing in Indopakistani Sign Language: Regularities and variation. In P. Boyes
Braem and R. Sutton-Spence, eds., The Hands are the Head of the Mouth: The Mouth as
Articulator in Sign Languages. Signum-Verlag, Hamburg, 2001, 247–272.