TISLR 11    10 – 14 July 2013  University College London 

Wednesday 10th July
Lars Wallin
Plenary Keynote Presentation
University of Stockholm, Sweden
Is the interaction between mouth and hand movement synchronized?
In her study on BSL phonology, published in the anthology The Hands are the Head of the Mouth (Boyes Braem &
Sutton-Spence, 2001), Woll (2001) argues that "mouth gestures will share some features with manual components of signs". In particular, she argues that the mouth movement echoes the movement of the hand, such that opening and
closing movements of the hand are mirrored by the mouth. Similar observations have been made for Swedish Sign
Language (Wikström, 1979).
A question that naturally arises when observing signs with mouth gestures (cf. Boyes Braem & Sutton-Spence, 2001) is
to what extent movements of the mouth are synchronized with the manual articulation. Could it be that there is a slight
shift, so that the mouth movement is initiated before or after a hand internal movement is initiated? Is there any
connection between the beginning of the manual sign production and the opening of the mouth, and between the manual
sign production being finished and the mouth closing?
To try to answer these and similar questions, signs with a mouth gesture consisting of a combination of at least two mouth segments were collected. (For this analysis, oral adverbs were not included.) The mouth segments belong to two different classes, characterized by the features closed and open. The two closed segments are BILABIAL (the lips are in contact with each other) and LABIODENTAL (the lower lip is in contact with the upper teeth). The four open segments are OPEN, ROUND, FORWARD and STRETCHED (the lips are not in contact with each other) (Bergman & Wallin, 2001). The signs were taken from the Swedish Sign Language Corpus and the Swedish Sign Language dictionary. Approximately 200 occurrences of 22 signs were found. These were rather unevenly distributed: the highest number of instances for a single sign was 62, and the lowest was a single occurrence. The signs have varying types of manual articulation, such as hand-internal movement, orientation change, articulation that starts or ends with contact, and articulation in which the articulator passes or bounces on the body or the second hand, or is moved between two positions in the area in front of the signer.
All signs were transcribed and analyzed frame by frame in three rows: mouth (gesture), (hand) movement, and
handshape. The duration of mouth movement and manual articulation was noted in Excel with a sequence of color-coded cells corresponding to frames (see figure). In the figure, the first cell or cells (brown) indicate that the first mouth
segment and handshape have adopted their initial shape and the hand has taken its initial position. When the manual
activity starts, this is indicated with light brown. The onset of the second mouth segment, the resulting handshape and the
end of movement are all indicated in yellow. The colors thus help illustrate the temporal relationships in the interaction
between mouth and hand movement in the respective signs.
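To make the coding concrete, a minimal sketch follows (our illustration, not the study's tool); the segment labels and frame numbers are invented:

# A minimal sketch (not part of the original study) of the frame-based
# coding described above: each articulator is a list of
# (segment_label, start_frame, end_frame) spans, and the lag between the
# onset of a mouth segment and the onset of the manual movement is the
# difference of their start frames. All labels and numbers are invented.

mouth = [("BILABIAL", 0, 7), ("OPEN", 8, 15)]          # two mouth segments
hand = [("hold", 0, 5), ("internal-movement", 6, 15)]  # manual articulation

def onset(track, label):
    """Return the start frame of the first span with the given label."""
    for span_label, start, _ in track:
        if span_label == label:
            return start
    raise ValueError(f"no span labelled {label!r}")

# Positive lag: the second mouth segment begins after the hand movement.
lag = onset(mouth, "OPEN") - onset(hand, "internal-movement")
print(f"mouth segment onset lags hand movement onset by {lag} frames")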
A detailed analysis of the data will be presented and some possible patterns of interaction between mouth and hands
will be suggested.
Figure
Wednesday 10th July
Victoria Nyst
Presentation 1
Leiden University, Netherlands
Towards a new model of sign language affiliation
Around the world, sign languages (SLs) have arisen in communities with an unusually high incidence of hereditary deafness.
Because of the very specific circumstances under which they arose, such microcommunity SLs are taken to have emerged
as isolates without linguistic ancestors. Adamorobe Sign Language (AdaSL), used in a Ghanaian village with a high incidence
of deafness, has a number of cross-linguistically unusual features (Nyst, 2007). Thus, AdaSL patterns differently in its use of
iconicity, with a systematic a) preference for life size projections (including role shift) and b) avoidance of outline depiction
(cf. Mandel, 1977). In addition, a system of measure stick signs is used to express size and shape. Such unusual patterning is
not surprising for a linguistic isolate.
From 2009 to 2012, a project aiming at the documentation of endangered microcommunity SLs in the Dogon area of Mali took place, leading among other things to annotated corpora of two microcommunity SLs: Berbey SL and Koubewel Koundia SL
(Nyst, Magassouba & Sylla, 2012). Analysis of these new data revealed a striking similarity with AdaSL in iconic patterning.
In addition, the three SLs appear to share other similarities as well, including a set of shared lexical items and phonological
features. Preliminary observation suggests that the same features are found in the family SL of Bouakako, Ivory Coast
(Tano, forthcoming).
Rather than a scenario of spontaneous emergence out of the blue, these extensive similarities suggest some form of relation between these languages. Both common ancestry and contact are unlikely scenarios in view of distance and time-depth. An explanation must be sought in the raw material that emerging SLs start off with: the gesturing of hearing people. This substrate must have been quite homogeneous and similar in terms of articulation, emblems and iconic strategies used.
Studies on gestures in (West) Africa are scarce and address the gestures of individuals or individual languages. Yet, the
publications found are in line with a hypothesized extensive regional gesture repertoire (Sorin-Barreteau 1996), with a
preference for life size projections (Nyst, 1997) and a measure stick system (Claessen 1984; Creider 1977; Hochegger 1978).
In conclusion, I will outline a model of SL emergence in cultures with a substantial gesture repertoire, qualifying SL emergence as an independent, local process while at the same time accounting for structural similarities. This model implies that SLs can be related through a modality-specific type of linguistic affiliation, i.e. a gestural substrate. It also implies that the microcommunity SLs under consideration (and possibly those in other gesture-rich cultures as well) were kick-started by an extensive repertoire of conventional gestures and iconic devices.
Corresponding author:
v.a.s.nyst@hum.leidenuniv.nl
Wednesday 10th July
Elaine Maypilama, Dorothee Yunggirrnga & *Dany Adone
Presentation 2
Charles Darwin University, Australia; *University of Cologne, Germany
Corresponding author:
adoned@uni-koeln.de
Wednesday 10th July
Roland Pfau & *Markus Steinbach
Presentation 3
University of Amsterdam, Netherlands; *University of Göttingen, Germany
Corresponding authors:
r.pfau@uva.nl
markus.steinbach@phil.uni-goettingen.de
Wednesday 10th July
Naja Ferjan Ramirez, Amy Lieberman & Rachel Mayberry
Presentation 4
University of California, San Diego, USA
Corresponding authors:
naja@ling.ucsd.edu
amymlieberman@yahoo.com
rmayberry@ucsd.edu
Wednesday 10th July
Beyza Sumer, Inge Zwitserlood, *Pamela Perniss & Asli Ozyurek
Presentation 5
Radboud University Nijmegen, Netherlands; *University College London, UK
Corresponding authors:
beyza.sumer@mpi.nl
inge.zwitserlood@mpi.nl
p.perniss@ucl.ac.uk
asli.ozyurek@mpi.nl
Wednesday 10th July
Marlon Kuntze & Adam Stone
Presentation 6
Gallaudet University, USA
Looking Closely at Iconicity in Child ASL (Presented in ASL)
The question of the role iconicity plays in ASL acquisition is far from settled. Early sign language research suggested that children's acquisition of sign language is minimally influenced by iconicity (Newport, 1988; Orlansky & Bonvillian, 1984). However, growing interest in gesture in human communication, spurred by the work of researchers such as Kendon and McNeill, has led to a revival of interest in gesture and iconicity in ASL (e.g., Taub, 2000; Liddell, 2003).
Schick (2006) suggested that there is a strong iconic motivation behind the way location, movement, or handshape may be
represented in classifier constructions and in the ways some verbs are modulated. Thompson (2001), however, argued that
instead of considering iconic motivation as being all or none, a more nuanced approach to examining the role of iconicity in
language acquisition is needed. Iconicity in signs is more obvious to adults than it is to children. It is hard to determine “the
degree to which a young child understands the iconic features of any given sign” (p. 612). For example, children do not
know that milk comes from cows before they learn the ASL sign for milk. Thompson pointed to the work by Tolar,
Lederberg, Gokhale and Tomasello (2008) who suggest that the effect of iconicity may be greater for iconic signs that depict
actions compared to those that depict perceptual features.
One way to answer the question is to examine language use in naturalistic settings to determine the type of iconicity in
child ASL (action-based as opposed to perceptual qualities) and to see if it changes over time. The data from one of the
authors’ five-year longitudinal study of a cohort of ASL-using deaf children in a classroom since their preschool year provide
a dataset for this study. The simple vocabulary (citation forms) from the Year One dataset was sorted following the approach Cates & Corina (2012) used for their lexical database. Each item in their database was labeled on the basis of Taub's (2000) criteria. We found that 29% of the vocabulary is iconic and 32% is double-mapped metaphorical (DMM), with the rest (39%) being non-iconic. This is not very different from Orlansky and Bonvillian's (1984) findings on iconicity in toddlers' vocabulary.
Our working hypothesis is that if iconicity has a role in vocabulary development, we need to look more closely at the lexical
items that depict action and go beyond morphologically simple signs. 61% of the simple vocabulary from the first year is labeled as either iconic or DMM; of that set, 52% depict action. The kind of vocabulary usually not considered in
iconicity studies includes pronominals, morphologically complex signs, classifiers, and gestures. They represent 30% of the
total vocabulary under current study. We will discuss how the two types of iconicity (action-based or perceptual qualities)
distribute over different categories of vocabulary (simple signs, morphologically complex signs, classifiers, and gestures)
and how they vary over five years of the children’s language development.
Corresponding authors:
marlon.kuntze@gallaudet.edu
adam.stone@gallaudet.edu
Thursday 11th July
*Mary Rudner, +Eleni Orfanidou, ^Lena Kastner, ^Velia Cardin,
^Bencie Woll, >Cheryl Capek & *Jerker Ronnberg
Presentation 7
*Linnaeus Centre HEAD, Linköping University, Sweden; +Department of Psychology, University of Crete, Greece; ^Deafness,
Cognition and Language Research Centre, University College London, UK; >School of Psychological Sciences, University of
Manchester, UK
Corresponding authors:
mary.rudner@liu.se
lena.orf@gmail.com
lena.kaestner@gmail.com
velia.cardin@gmail.com
b.woll@ucl.ac.uk
cheryl.capek@manchester.ac.uk
jerker.ronnberg@liu.se
Thursday 11th July
Philippe Schlenker
Presentation 8
Institut Jean-Nicod (CNRS), France; New York University, USA
Corresponding author:
philippe.schlenker@gmail.com
Thursday 11th July
*Lorraine Leeson, *John Saeed, ^Barbara Shaffer & >Terry Janzen
Presentation 9
*Trinity College Dublin, Ireland; ^University of New Mexico, USA; >University of Manitoba, Canada
Corresponding authors:
leesonl@tcd.ie
jsaeed@tcd.ie
bshaffer@unm.edu
Terry.Janzen@ad.umanitoba.ca
Thursday 11th July
Jonathan Udoff & Ignatius Nip
Presentation 10
University of California, San Diego; San Diego State University, USA
Corresponding authors:
judoff@projects.sdsu.edu
inip@mail.sdsu.edu
Thursday 11th July
Onno Crasborn & Els van der Kooij
Presentation 11
Radboud University Nijmegen, Centre for Language Studies, Netherlands
Corresponding authors:
o.crasborn@let.ru.nl
e.van.der.kooij@let.ru.nl
Thursday 11th July
Jonathan Keane
Presentation 12
University of Chicago, USA
Corresponding author:
jonkeane@uchicago.edu
Thursday 11th July
Vadim Kimmelman
Presentation 13
University of Amsterdam, Netherlands
Corresponding author:
v.kimmelman@uva.nl
Thursday 11th July
Gabrielle Hodge & Trevor Johnston
Presentation 14
Macquarie University, Australia
Patterns from a signed language corpus: Clause-level units in Auslan (Australian Sign Language)
The Auslan Corpus (http://elar.soas.ac.uk/deposit/johnston2012auslan) is being enriched with
annotations to investigate Auslan constructions and hypotactic complexity using a corpus-driven
approach to identify and analyse clause-level composite utterances. Composite utterances are moves,
or turns, in face-to-face linguistic interaction in which fully conventional semiotic signs combine with
symbolic indexicals such as pointing gestures (Enfield 2009). In this way, linguistic texts develop as
shared and constantly negotiated symbolic artefacts that are co-created between two or more
interactants in the context of a communicative event (Enfield 2009; Givón 2005; 2009).
Approximately 1000 tokens of these possible clause-like units have been semantically identified in
narrative and conversation files using a corpus-driven approach (Biber 2010; Haspelmath 2007). Data
is extracted from these annotated files to identify and describe recurrent patterns of organisation at the
clause level. The manual and non-manual signs tagged as overt core argument (A) and predicate (V)
elements in the study corpus pattern in recurrent ways. The tendency for [V], [A V] and [V A] patterns is
very similar to observations of preferred argument structure in spoken language grammars, whereby
simple clauses in discourse are usually a predicate and an argument, and arguments are often inferred
rather than explicitly re-activated using morphology and lexis (Du Bois 1987; Thompson & Hopper
2001; Givón 2009).
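As a rough illustration of this kind of tallying (not the authors' actual pipeline; the clause data below are invented):

# A minimal sketch (invented data, not the authors' pipeline) of how
# overt core-element patterns such as [V], [A V] and [V A] can be
# tallied from clause-level annotations like those described above.
from collections import Counter

# Each clause is the sequence of overt core elements tagged in it.
clauses = [["V"], ["A", "V"], ["V"], ["V", "A"], ["A", "V"], ["V"]]

pattern_counts = Counter("[" + " ".join(clause) + "]" for clause in clauses)
for pattern, count in pattern_counts.most_common():
    print(pattern, count)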
Here we describe the frequency and distribution of these candidate constructions, with reference to
the patterning of overt core elements, macro-role, semantic role, sign type and enactment. Our goal is
to use empirical corpus-based data to contribute insights into the use of composite utterances in a
signed language and therefore on the way meaning is negotiated between interactants in face-to-face
discourse. Our data shows that clause-level constructions in Auslan (including clause complexes)
cannot be described or accounted for solely in terms of conventionalised morpho-syntax, i.e. without
appeal to gesture, enactment and the face-to-face context of utterances.
Corresponding authors:
gabrielle.hodge@students.mq.edu.au
trevor.johnston@mq.edu.au
Thursday 11th July
Paweł Rutkowski, Sylwia Łozińska, Joanna Łacheta & Małgorzata Czajkowska-Kisil
Presentation 15
University of Warsaw, Poland
Corresponding author:
p.rutkowski@uw.edu.pl
Thursday 11th July
Okan Kubus & Christian Rathmann
Presentation 16
Universität Hamburg, Germany
Corresponding authors:
okankubus@gmail.com
christian.rathmann@uni-hamburg.de
Thursday 11th July
Christopher Stone & David Vinson
Presentation 17
University College London, UK
Enhanced cognition from L2 BSL acquisition
Although sign language interpreters have been trained within university settings for many years, to date little is understood about the underlying cognitive and linguistic skills required for L2 sign language acquisition and sign language interpreting. This presentation will report on a longitudinal aptitude study following the learning trajectory of undergraduate students within Deaf studies and interpreting programs, identifying the factors that are relevant for sign language learning and for sign language interpreting.
Undergraduates (initially sign language naïve or of low fluency) were recruited (n = 24) for the longitudinal study from university programs that included Deaf studies and interpreting, with students selected for interpreting depending on their exam results at the end of semester four. A battery of tasks was administered to the undergraduates, which can broadly be split into five areas:
1. General language skills – Modern Language Aptitude Test (MLAT), administered semester one (five subtests)
2. General intelligence – digit span and matrix reasoning, administered semesters one and six
3. L1 language skills – English reading age, administered semesters one and six
4. L2 language skills – BSL grammaticality judgement task (BSLGJT), administered semesters one, three, five and six
5. Cognitive tasks – Connections A (psychomotor) and B (psychomotor and cognitive control), Patterns (perceptual processing), administered semesters three, five and six
Final collection of data from semesters five and six is ongoing and will be completed prior to TISLR 11, but
preliminary analysis of the current data set identifies a number of linguistic and cognitive factors related to sign
language learning.
Phonological working memory (pWM), MLAT subtest I, correlates with BSLGJT scores (semesters three and five),
suggesting a role for pWM in learning unwritten languages. Rote learning (MLAT subtest V) also correlates with overall BSL exam performance. MLAT subtests related to written language or grammatical skills did not correlate with BSL performance.
The MLAT total score and semester one BSLGJT scores correlate with students' sign language exam scores at the end of semester three (BSL3); within the semester one BSLGJT, performance on simple sentences and WH-sentences was particularly highly correlated with BSL3 scores. Of all sentence types in the BSLGJT, variation in performance on simple sentences was the most indicative of overall performance in the BSLGJT (semester one).
Preliminary analysis of interpreting-related exams finds that BSL3 exam scores also correlate with both translation and consecutive interpreting exam scores. These data point to BSL3 being the key indicator of existing skill developed within the university environment and the prerequisite underpinning interpreting skills development. Perceptual processing, psychomotor skills and cognitive control (as measured by Patterns, Connections A and B) all improve between semesters three, five and six, with a marginal tendency for worse performance by general Deaf studies students vs. interpreting students. Cognitive control does not correlate with interpreting exam scores, suggesting these skills are developed by training rather than required to underpin language learning and interpreting skills.
The final data collection should enable us to further characterise factors that predict high proficiency L2 BSL
acquisition and sign language interpreting skills development.
Corresponding authors:
christopher.stone@ucl.ac.uk
d.vinson@ucl.ac.uk
Thursday 11th July
Emily Kaufmann, Thomas Kaul & Reiner Griebel
Presentation 18
University of Cologne, Germany
Increasing metalinguistic awareness of non-manual features in adult learners of
German Sign Language as a second language: A transcription task (ENGLISH/ASL)
In second language teaching, it is increasingly popular to use new technologies in instruction with the
goal of improving learner outcome. This is especially the case with commonly-taught languages such as English,
but it is also true to a certain extent of sign language programs. This paper reports on a study which evaluates a
new transcription-based language training technique designed to improve sign language L2 learners’
metalinguistic awareness, focusing on the area of non-manual features (mouth gestures and eyebrow
activation), an area which typically poses great difficulty for hearing adult sign language learners. This study
focuses on idiomatic signs (e.g. f-hand-to-forehead, ‘no idea’) and questions because they contain these
features.
Participants (N=33) were hearing native speakers of German and students enrolled in a fifth-semester German
Sign Language (DGS) course during the study period. The study was designed as follows: There were five training
sessions. In each session, students were shown one video of a DGS sentence containing an idiomatic sign. The
video was replayed as desired by participants, and the sentence and idiomatic sign were discussed. In Group 1
(N=18), the control group, this standard approach for teaching idiomatic signs was used. In Group 2 (N=15), the
test group, the transcription task was used: the idiomatic sign was discussed as above, and participants were
additionally asked to transcribe the sentences, providing a gloss, mouth gestures, mouthings, and eyebrow
activation. Following the five training sessions was the testing session, which was the same for both groups.
Participants watched 12 videos; each presented one complete DGS sentence containing either a mouth gesture or eyebrow activation as a salient feature. Half of the videos contained an error in the respective non-manual feature and half were error-free. For each item, participants were instructed to decide whether or not the sentence contained an error and, if so, to identify or correct it. As in previous studies, the ability to detect and correct errors is used as a measure of metalinguistic awareness (e.g. Bialystok 1986).
Performance was compared between the two groups in two analyses. In the first analysis, we compared overall
accuracy scores on the error detection (grammaticality judgment) task between the two groups. Since our data
was not normally distributed, we performed a Kruskal-Wallis one-way ANOVA and found a significant difference
between groups (p<.001); on a median test, we also found a significant difference between groups (p<.001). In
both cases, the test group performed at a higher level. In a second analysis, we examined the items which
included an error in isolation in order to differentiate between different performance levels: error detection as
above, along with error identification, which is more difficult and may indicate a higher level of metalinguistic
awareness (Bialystok 1979). We performed a Kruskal-Wallis one-way ANOVA and found a significant difference
between groups (p<.001); on a median test, we again found a significant difference between groups (p<.001).
This result indicates that DGS learners’ metalinguistic awareness in the area of non-manuals can be increased by
using transcription tasks in DGS instruction.
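For illustration, the reported tests can be sketched as follows (the scores are invented; only the choice of tests follows the abstract):

# A minimal sketch of the reported group comparison using scipy's
# Kruskal-Wallis and median tests. The accuracy scores below are
# invented for illustration; only the choice of tests follows the text.
from scipy.stats import kruskal, median_test

control_scores = [5, 6, 4, 7, 5, 6, 4, 5, 6, 5, 4, 6, 5, 7, 4, 5, 6, 5]  # Group 1, N=18
test_scores = [9, 10, 8, 11, 9, 10, 8, 9, 11, 10, 9, 8, 10, 9, 11]       # Group 2, N=15

h_stat, p_kw = kruskal(control_scores, test_scores)
chi2, p_med, grand_median, contingency = median_test(control_scores, test_scores)

print(f"Kruskal-Wallis: H = {h_stat:.2f}, p = {p_kw:.4f}")
print(f"Median test: chi2 = {chi2:.2f}, p = {p_med:.4f}")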
Corresponding authors:
kaufmann.emily@gmail.com
thomas.kaul@uni-koeln.de
reiner.griebel@uni-koeln.de
Thursday 11th July
Marcel Giezen, Henrike Blumenfeld & Karen Emmorey
Presentation 19
San Diego State University, USA
Top-down cross-language activation in bimodal bilingual word recognition (ENGLISH)
Research with bilinguals in two spoken languages, i.e., unimodal bilinguals, has revealed that both languages
show some level of co-activation during listening, even in monolingual contexts. Such cross-language activation
is often taken to result from bottom-up activation of phonological representations when participants hear
words that sound similar in their two languages. This raises the question whether cross-language activation
would occur for languages with non-overlapping phonological systems, i.e., a spoken language and a sign
language. Morford et al. (2011) recently showed cross-language activation between written words and signs for
deaf ASL-English bilinguals, and findings by Shook and Marian (2012) suggest that hearing ASL-English bilinguals
experience cross-language activation of ASL during spoken word recognition in a visual world paradigm. This
paper reports two experiments that further investigated cross-language activation in hearing bimodal bilinguals.
Experiment 1 aimed to replicate the results of Shook and Marian (2012) and to relate the degree and time
course of cross-language competition to non-linguistic cognitive control abilities. Experiment 2 examined
whether phonological facilitation or semantic interference are observed when bilinguals name pictures in ASL
while English words are presented as auditory distractors.
For Experiment 1, participants listened to English words (e.g., “chair”) while looking at displays with four
pictures that included the target picture, a cross-language phonological competitor (e.g., train, which is
phonologically related to the ASL sign CHAIR), and two unrelated pictures. Fluent ASL-English bilinguals (N = 20)
fixated more on pictures of cross-language competitors than on unrelated pictures, suggesting parallel language
activation and replicating Shook and Marian (2012) with a larger sample and stimulus set. In a recent study with
unimodal bilinguals, Blumenfeld and Marian (under review) found significant correlations between cross-language competitor activation and inhibition performance on a non-linguistic cognitive control task, suggesting
a direct link between linguistic competition and non-linguistic competition resolution abilities. We are currently
investigating whether bimodal bilinguals exhibit a similar relationship, using the data from Experiment 1 and
several cognitive control tasks.
Experiment 2 was a picture-word interference task in which participants named pictures in ASL (e.g.,
signing CHAIR) while English distractor words were presented over headphones that were either 1)
phonologically related to the target sign through the ASL translation (e.g., “train”), 2) semantically related to the
target sign (e.g., “bed”), 3) translations of the target sign, or 4) unrelated to the target sign. The same 20 hearing
signers from Experiment 1 participated. Similar to studies with unimodal bilinguals, naming latencies showed
facilitation for translation distractors and interference for semantically related distractors. Most importantly,
facilitation was observed for phonologically related distractors, showing that ASL production was speeded by
hearing English words whose translations were phonologically related to the target signs.
Together, these results provide converging evidence for robust cross-language activation between a
spoken and a signed language. The findings are not predicted by theories of bilingual word recognition that
emphasize bottom-up activation through a shared phonology as the main source of cross-language activation
and instead suggest an important role for top-down activation from the conceptual and/or lexical levels.
Corresponding authors:
giezenmr@gmail.com
hblumenf@mail.sdsu.edu
kemmorey@projects.sdsu.edu
Thursday 11th July
Robin L. Thompson & Gabriella Vigliocco
Presentation 20
Deafness Cognition and Language Research Centre (DCAL), University College London, UK
The relationship between sign production and sign comprehension: what the hands reveal (ASL)
During spoken language processing the same perceptual feedback system can be used for speech
comprehension when listening to another person's output, and during speech production when monitoring
“self-output”. For sign language processing, however, visual input when comprehending another person's
signing is distinct from visual input from one’s own signing and, further, while there is proprioceptive feedback
during sign production there is no proprioceptive feedback during comprehension. To date, very little is known
about how signers monitor their own production, or the relationship between sign production and sign
comprehension with initial research suggesting that while sign comprehension relies on vision, sign monitoring
during production relies more on proprioceptive feedback (Emmorey, Gertsberg, Korpics, & Wright, 2009;
Emmorey, Korpics & Petronio, 2009).
Here we investigate the relationship between sign production and sign comprehension, making use of the fact
that signers have two primary articulators (the hands) and can differ in which hand is their dominant articulator.
Specifically, we ask if hand dominance (right or left) during sign production influences comprehension of signs
produced with the same (congruent) or different hand (incongruent). We explore two possibilities:
1. Perceptual input drives comprehension
More frequent exposure to right-handed signers (approximately 85% of all signers) results in a perceptual
frequency effect such that all signers (both left- and right-handed) are faster to comprehend right-handed
signers.
2. Sign production is tied to sign comprehension
There is a relationship between production and comprehension systems such that right-handed and left-handed
signers are faster to comprehend congruent (same-handed) signs.
Methods: British Sign Language (BSL) participants decided whether a picture followed by a sign video matched (120 match trials, 120 mismatch trials). Sign stimuli (balanced across one-handed signs, two-handed signs with the same handshapes, and two-handed signs with different handshapes) were created using two left-handed and two right-handed sign models. During the experiment, each participant (6 left-handed, 22 right-handed; 13 native signers; all deaf) saw each sign produced by only one of the four models, with congruent/incongruent signing hand balanced within each experiment and across participants.
Results: The data reveal a 3-way interaction: Sign Type x Model Handedness x Participant Handedness (p=.006).
Examining left- and right-handed participant groups separately indicates that right-handed participants are faster
overall when making decisions about right-handed (congruent) signs (main effect of Model Hand, p= .001), while
left-handed signers are only faster for left-handed (congruent) signs that are 2-handed (same or different
handshapes). For 1-handed signs they are faster for right-handed models: Model Hand x Sign Type interaction
(p=.03).
Overall the results indicate a tight relationship between production and comprehension systems for signers such
that the dominant hand used for production is privileged during comprehension. However, frequency of exposure during comprehension must also play a role, with both left- and right-handed signers faster to
recognize 1-handed signs from right-handed models. These preliminary results follow from a model of sign
monitoring in which action systems used during production are also active during sign comprehension, but also a
model in which perceptual input plays an important role.
Corresponding authors:
robin.thompson@ucl.ac.uk
R.Atkinson@sms.ed.ac.uk
g.vigliocco@ucl.ac.uk
Thursday 11th July
*Corrine Occhino-Kehoe, *Jill P. Morford, *Paul A. Twitchell, ^Pilar Piñar, >Judith F. Kroll & +Erin Wilkinson
Presentation 21
*University of New Mexico & VL2; ^Gallaudet University & VL2; >Penn State University & PIRE; +University of Manitoba & VL2, USA
Corresponding authors:
morford@unm.edu
cmocchino@gmail.com
paul.twitch@gmail.com
pilar.pinar@gallaudet.edu
jfk7@psu.edu
Erin.Wilkinson@ad.umanitoba.ca
Thursday 11th July
Zed Sevcikova
Presentation 22
University College London, UK
Corresponding author:
z.sevcikova@ucl.ac.uk
Thursday 11th July
Brigitte Garcia, Marie-Therese L'Huillier, Angelo Frémeaux & Dina Makouke
Presentation 23
UMR SFL (Paris 8 & CNRS), France
Corresponding authors:
brigitte.garcia@univ-paris8.fr
marie-therese.lhuillier@sfl.cnrs.fr
angelo.fremeaux@gmail.com
makoukedina@yahoo.fr
Thursday 11th July
*Pamela Perniss & ^Asli Özyürek
Presentation 24
*DCAL, University College London, UK
^Radboud University Nijmegen & MPI for Psycholinguistics, Netherlands
Modality effects in action and motion expressions in sign languages:
Taking typology into account
Signed language exhibits modality-specific effects in the domains of encoding location, motion, and action (e.g.
Emmorey 2002; Perniss et al. 2011). However, a proper characterization of these modality effects in previous
research, especially with respect to action and motion expression, has been limited by a narrow scope of
comparison between languages. Thus, comparison across unrelated sign languages, and, importantly,
comparison to variation across typologically different spoken languages (Talmy 1983) and the co-speech
gestures accompanying different linguistic structures (Kita & Özyürek 2003) has been lacking. A cross-linguistic
and cross-modal comparison is needed to characterize and generalize about the nature of modality-specific
effects in signed language.
Here, we investigate modality-specific effects by comparing two unrelated sign languages (Turkish and
German Sign Languages) and the speech and co-speech gestures of the surrounding spoken languages (Turkish
and German), which differ typologically in the linguistic encoding of caused motion (Furman 2012; Hickmann et
al. 2009). We focus on the expression of caused motion with manual action (e.g. she carried the suitcase down
the stairs), which involves encoding of the action (i.e. carry) and the motion of the entity acted upon (i.e.
downward path). Specifically, action and motion components in such caused motion events can be represented
in separate forms/units – in sign, gesture, and speech – or expressed in one.
Analysis of 6 caused motion events encoded by 12 German SL and 12 Turkish SL signers as well as by 12
Turkish and 12 German speakers revealed that German and Turkish signers' event encodings were very similar
to each other while Turkish and German speakers' gestural and spoken depictions of the events differed. The
signers encoded action and path components in separate predicates. Gestures by Turkish speakers also
expressed action and motion in separate forms, while German speakers expressed both components in a single
gesture - in line with expectations about how typological differences in linguistic structure would influence co-speech gestures. Signers and gesturers exhibited a general difference, however, in the expression of the action
component. Both German and Turkish signers used the body (i.e. face, shoulders, torso) in addition to the hands
to construct the action, while gesturers of both languages expressed the action on their hands only (e.g.
grasping handshape). The findings will be discussed in relation to how this unique cross-modal and cross-linguistic comparison contributes to our understanding of modality effects in sign language structure as well as
to the role of typology influencing encoding of events in human language, in general.
Corresponding authors:
p.perniss@ucl.ac.uk
asli.ozyurek@mpi.nl
Friday 12th July
Gemma Barberà & Josep Quer
Presentation 25
Universitat Pompeu Fabra; ICREA, Spain
Getting impersonal:
Impersonal reference in Catalan Sign Language (LSC)
BACKGROUND. Impersonal predications constitute a broad category in description and typology. Arbitrary pro, a
prototypical way to encode impersonality, has essentially received two sorts of analysis, either as a definite
(Alonso-Ovalle 2002; Malamud 2006) or as a special kind of non-anaphoric pronoun showing mixed behaviour
between indefinites and definite plurals (Cabredo-Hofherr 2003, 2006). However, the variation among the
means to encode impersonal reference is significant, both intra- and cross-linguistically. We focus on the kind of
predications where one of the arguments (typically the subject) is labelled as impersonal because of its low
referentiality and call this class R(eferential)-impersonals (Siewierska 2011). We offer its first characterization in
a sign language, namely in Catalan Sign Language (LSC).
EMPIRICAL OVERVIEW. Impersonal reference in LSC displays a rich array of overt markers expressed by functional
lexical elements directed towards signing space:
(i) A pronominal sign formed by the Wh-sign WHO concatenated with the 3rd person plural pronominal form
or with the determiner SOME.
(ii) 3rd person plural pronoun directed to the upper frontal plane, denoting non-specific reference (Fig. 1).
(iii) A pronominal index sign that consists of a form derived from the lexical noun PERSON, which can be used coreferentially for the three person distinctions and may have a singular or plural form.
(iv) The determiner glossed as ONEu, directed towards the upper frontal plane, functioning either as a
determiner or as a pronoun, and denoting non-specificity.
(v) The use of a spatial axis for verb inflection, articulated from a location established between the 1st and 3rd person locations to a location established between the 2nd and 3rd person locations (Fig. 2).
(vi) A bimanual form of the verb, denoting plurality of the subject.
ANALYSIS. The LSC data motivates a hybrid analysis according to which R-impersonal interpretation is
instantiated by proarb, as well as by generic and indefinite pronouns. According to the general view (Cinque
1988), proarb and its overt counterparts receive a quantified interpretation when inserted in generic sentences.
In a context like (1) the pronoun ONEu is inserted in a generic-like context, not linked to any particular moment
of any particular individual. In the corresponding semantic representation, the variable introduced by ONEu is
bound by the adverbial operator ALWAYS and thus receives a quantified interpretation (2). The hybrid analysis
that impersonal reference in LSC imposes is further motivated by the fact that arbitrary subjects do not always
yield a quantificational interpretation only, since an existential reading may also obtain under Existential
Closure. This is shown in the generic context in (3) where two interpretations are possible: (3a) represents a
reading of the subject in which the variable, appearing in the restrictor of the covert operator USUALLY, is
bound by it (4); the existential interpretation in (3b) features a generic context which leads to an existential
reading of the subject (5).
CONCLUSIONS. The first exploration of impersonal reference in a sign language has unveiled a very rich domain,
where the expression of non-specificity through contrasting spatial locations and through overt and covert pronominal forms interacts to convey arbitrary interpretations of arguments. Although some elements might look
modality-specific, the overall picture that emerges is that the resources put to work by LSC in this domain rely
on the same basic ingredients identified for a range of spoken languages in the encoding of R-impersonality.
Corresponding authors:
gemma.barbera@upf.edu
josep.quer@upf.edu
Friday 12th July
Helen Koulidobrova
Presentation 26
Central Connecticut State University, USA
Corresponding author:
elena.koulidobrova@ccsu.edu
Friday 12th July
Celia Alba
Presentation 27
Universitat Pompeu Fabra, Spain
Corresponding author:
celiaalba@hotmail.com
Friday 12th July
Annika Hübl & Markus Steinbach
Presentation 28
University of Göttingen, Germany
Corresponding authors:
annika.huebl@phil.uni-goettingen.de
Markus.Steinbach@phil.uni-goettingen.de
Friday 12th July
*Sarah Ebling, ^Kearsy Cormier, ^Jordan Fenlon,
>Penny Boyes Braem & +Trevor Johnston
Presentation 29
*Institute of Computational Linguistics, University of Zurich, Switzerland;
^Deafness Cognition and Language Research Centre (DCAL), University College London, UK; >Center for Sign Language
Research, Basel, Switzerland; +Macquarie University, Australia
Identifying and Comparing Semantic Relations
across Signed and Spoken Languages
Little research has been carried out in the field of semantic networks for sign languages to date. Here, we
present an approach that identifies semantic relations between three sign languages and their surrounding
spoken languages: British Sign Language (BSL), Australian Sign Language (Auslan) and English; and Swiss German
Sign Language (DSGS) and German. This analysis will also determine whether semantic relations observed
between sign language/spoken language pairs are similar cross-linguistically.
Our data consists of entries from a BSL lexical database of around 2500 signs (Cormier et al., 2012), an Auslan
database of approximately 5000 signs (Johnston, 2001), and a DSGS database of approximately 4500 signs
(Boyes Braem, 2001). Within each database, all signs are represented with an ID gloss (i.e. a unique identifying
label; Johnston, 2003) and a set of keywords (i.e. translation equivalents in English or German to express the
range of meanings that lexical item can have).
Using the keywords within each database, we searched for pairs of lexical items that constituted a semantic
relation in a wordnet of the corresponding spoken language (Princeton WordNet for English (Miller, 1993) and GermaNet for German (Hamp and Feldweg, 1997)). For every relation obtained that was valid for the corresponding
spoken language, we indicated whether this relation holds for the sign language under consideration. Once
completed, we compared semantic relations across pairs (e.g., BSL/English, Auslan/English, and DSGS/German)
to identify any similarities.
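As a sketch of one step of this lookup (our illustration, not the authors' code; assumes NLTK with the WordNet data installed):

# A minimal sketch (our illustration, not the authors' code) of one step
# of the relation search described above: testing whether one English
# keyword is a (transitive) hypernym of another in Princeton WordNet via
# NLTK. Requires the WordNet data: nltk.download('wordnet').
from nltk.corpus import wordnet as wn

def is_hypernym(keyword_a, keyword_b):
    """True if some sense of keyword_a dominates some sense of keyword_b."""
    senses_a = set(wn.synsets(keyword_a))
    for sense_b in wn.synsets(keyword_b):
        # Walk the full hypernym closure of this sense of keyword_b.
        if senses_a & set(sense_b.closure(lambda s: s.hypernyms())):
            return True
    return False

# The abstract's English example: 'have' is among the hypernyms of 'eat'.
print(is_hypernym("have", "eat"))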
Results show that semantic relations valid for the spoken language are often not valid for the corresponding
sign language. For example (as reported in Ebling et al., 2012), Künstler (‘artist’) is a hypernym of Musiker
(‘musician’) in German. In DSGS, however, KÜNSTLER (depicting painting) is restricted to the meaning of a visual
artist, e.g., a painter. Additionally, the semantic extension of many English words is different from that of BSL
and Auslan. For example, in English, eat has hypernyms have, damage, and worry, but BSL/Auslan EAT does not
have as hypernyms BSL/Auslan HAVE, DAMAGE, or WORRY.
Comparisons suggest that semantic relations between each sign language and its corresponding spoken
language are quite similar. This appears to be due in part to the prevalence of iconicity in sign languages. Some
differences may also be linked to the relative size of sign language lexica compared to spoken languages
(Johnston & Schembri, 1999).
Corresponding authors:
sarah.ebling@uzh.ch
k.cormier@ucl.ac.uk
j.fenlon@ucl.ac.uk
boyesbraem@fzgresearch.org
trevor.johnston@mq.edu.au
Friday 12th July
Anne-Marie Parisot, Karl Szymoniak & Darren Saunders
Presentation 30
UQAM, Canada
Body shift and head tilt in three sign languages: American Sign Language, French Sign Language and
Quebec Sign Language (Languages of presentation: BSL and English)
Short abstract
Regarding the question of typological variations for non-manual components, we propose a comparative
description of body shift and head tilt in ASL, LSF and LSQ. We compare the distribution of selected forms
(lateral, backward/forward and rotation) and functions (agreement, coordination, role shift, new information
and contrastive focus) and we discuss the interdependence between these two non-manual behaviours in sign
languages.
Long abstract
Non-manual behaviour has been described in sign languages (SL) as playing a role at all levels of grammar and as
being multifunctional, such that each marker can express several grammatical functions and the same
grammatical function can be realized by different non-manual markers (Herrmann and Steinbach, 2011). Among
the many issues of interest in linguistic description of the scope of non-manual components in SL, in this paper
we explore the question of typological variation. More precisely, we explore, through three SL (American: ASL,
French: LSF, and Quebec: LSQ), the notion of discrete units for body (BM) and head movement (HM) on the one
hand, and the possibility of different configurations of BM on the other hand.
The forms of BM described in the literature on the grammar of SL - backward/forward lean, lateral tilt and
rotation - are associated with several functions, namely contrastive focus (Wilbur and Patschke, 1998; van der
Kooij et al., 2006), subject agreement (Parisot, 2003) and role-shift (Engberg-Pedersen, 1995; Poulin and Miller,
1995; Quer, 2005). Even if specific marking functions have been properly attributed to different HM, it is not
always clear if these markers are produced independently from the related BM (e.g. body shift and head tilt for
subject agreement marking (Bahan, 1996), or topic marking (Sze, 2011)). Moreover, the different BM
(backward/forward, lateral or rotation) are not always distinguished and are sometimes also treated as non-specified BM (e.g. Bras et al. 2004, for role marking).
Our analysis of elicited productions (12 narratives from the same depiction task) from 3 deaf signers will lead us
to provide clues for the following theoretical questions: 1) Do the three SL make use of distinct forms (lean, tilt
and rotation) of BM and HM? 2) Do distinct forms (lean, tilt and rotation) of BM and/or HM have specific
functions? 3) Do BM and HM have differential effects on meaning or are they varieties of the same markers?
Our results are based on a qualitative analysis using 2D video. All the video data was transcribed for manual and
non-manual properties (including other features such as eye gaze) using the software ELAN and coded according
to form (lateral tilt, backward/forward lean and rotation) and functions (verb agreement marking, coordination,
role shift, new information and contrastive focus) of both HM and BM. The transcribed data also includes the
coding of associated or dissociated positions of the head and body. Although exploratory, our first results show
that even though both types of non-manual markers are found in the three languages, ASL, LSF and LSQ do not
make the exact same use of BM and HM, since i) they have different distributions, ii) they mark different
functions, and iii) there seem to be different degrees of interdependence between HM and BM. For example, in
a general distribution the head is used significantly more often as a marker in ASL than in the two other
languages.
Corresponding authors:
parisot.anne-marie@uqam.ca
karlszymoniak@gmail.com
m.dazze@gmail.com
Friday 12th July
*L. Viola Kozak, +Carina Rebello Cruz, +Aline Lemos Pizzio
& +Ronice de Quadros
Presentation 31
*Gallaudet University, USA; +Universidade Federal de Santa Catarina, Brazil
Pseudoword/sign repetition performance across bimodal bilinguals from the US and Brazil: a
comparative analysis
Presented in ASL; English
This study involves a cross-linguistic analysis of phonological repetition in signed and spoken language
by hearing bimodal bilingual children (kodas) and bimodal Deaf children with a cochlear implant (CIs). While
previous studies have demonstrated mediocre performance on pseudoword repetition by implanted children
(Dillon et al. 2004), these studies focused only on children from hearing non-signing families, resulting in delays
in their exposure to a first language. The participants in the current study are aged 4;0-7;0 and are acquiring
either American Sign Language (ASL) and English, or Brazilian Portuguese (BP) and Brazilian Sign Language
(Libras). All koda participants, plus the American CI participants, come from Deaf signing families and received
full exposure to a sign language from birth. Our comparison of bimodal bilingual Deaf children with CIs to their
koda counterparts is quite novel; previous studies traditionally only compare monolingual speech-only CI
children to their monolingual peers.
Participants performed pseudoword and pseudosign repetition tasks testing their phonological memory,
calling for reproduction of novel sound or sign sequences that display typical phonological parameters of the
children’s target languages. The BP pseudoword test consisted of stimuli from the Teste de Pseudopalavras
(Santos & Bueno, 2003) and the English stimuli came from Carter, Dillon & Pisoni (2002). The pseudosign stimuli
were loosely designed after Mann et al. (2010), consistent with phonotactic constraints (involving handshape,
location, orientation and movement) for each of the target sign languages, but varying in number of hands,
number of handshapes, symmetry, and movement type. Children’s pseudoword production was scored in a
binary fashion for overall accuracy, with additional qualitative analyses of syllable structure, correct primary
stress and phoneme choice. Pseudosigns were similarly scored for overall accuracy, as well as for accuracy of the
individual phonological parameters.
Across the two language pairs, koda children had overall high accuracy on both the pseudoword and
pseudosign tasks, reproducing target stimuli with roughly seventy-five percent accuracy. The CIs from signing
Deaf families performed largely on par with age-matched koda children on both spoken and signed tasks.
However, Brazilian CIs from hearing non-signing families, who had relatively late age of implantation (2-3 years
of age) and limited access to sign language, performed much worse on both the spoken and signed tests, as can
be seen in examples (3) and (4). Notably, these children’s pseudosign scores were still higher than their
pseudoword scores, despite their limited exposure to sign. As can be seen in example (3), on the BP
pseudoword test, CI child number five, who has Deaf, signing parents, performed much higher than his peers
from hearing, non-signing families.
The striking performance gap between our native signing participants and their late-signing
counterparts (as well as the non-signing children in the Dillon et al. study) points to important advantages of early and
unrestricted sign language exposure for phonological memory. Contrary to claims of signing as a hindrance to
speech development (e.g. Svirsky et al. 2000), our data suggest that early bilingualism in spoken and sign
language is a potential key to successful speech development for children with CIs.
Corresponding authors:
laura.kozak@gallaudet.edu
crcpesquisa@gmail.com
alinelemospizzio@gmail.com
ronicequadros@gmail.com
Friday 12th July
Jordan Fenlon, Kate Rowley & Kearsy Cormier
Presentation 32
DCAL Research Centre, UK
Lexicalisation in British Sign Language: Implications for phonological theory (BSL)
Several phonological models have been proposed (e.g., Brentari, 1998; Sandler,
1989; van der Kooij, 2002) that aspire to account for all the possible phonological
contrasts within a given (or multiple) sign language(s). These models have typically been
applied to the core, native lexicon but have been extended to show how signs from the
non-native lexicon (i.e., fingerspellings; Brentari & Padden, 2001; Cormier, Schembri, &
Tyrone, 2008) and non-core lexicon (i.e., classifier constructions; Eccarius & Brentari,
2007) largely fit the same phonological patterns as the core, native lexicon. However,
these phonological models have yet to be used in practice within a large dataset
representing the core lexicon of a sign language (i.e., a lexical database that has been
consistently lemmatised). Here, we outline an ongoing attempt to identify the minimal
features required for an adequate phonological description of a sign language lexicon
within a lemmatised lexical database of British Sign Language (BSL), and we describe
some issues that arise when attempting to apply phonological categories from existing
models to a sign language lexicon. In particular, we explore the phonology of signs which
are part of the core, native lexicon but have been lexicalised from classifier constructions
and/or fingerspelling.
The BSL lexical database, adapted from the Auslan lexical database (Johnston,
2001), consists of 2500 lexical signs (in their citation form) lemmatised from a BSL
lexical frequency study (Cormier et al., 2011) and an earlier dictionary of BSL (Brien,
1992). To provide a basic phonological description of all these entries in the BSL lexical
database, phonological categories from Johnston’s database (Johnston, 2003) have been
modified using modern phonological models proposed for other sign languages as a guide
(e.g., Brentari, 1998; van der Kooij, 2002). However, these phonological models do not
always lend themselves neatly to an overall description of the core lexicon within a
lexical database, particularly for core signs that have been lexicalised from classifier
constructions or nativised from fingerspelling. For example, the BSL sign GOAL is
difficult to describe using conventional phonological categories traditionally applied to
the core lexicon because categories used to describe the location of the dominant hand
with respect to the non-dominant hand cannot be applied (i.e., no phonological model
specifies the space between the extended index and pinky fingers on the non-dominant
hand as a place of articulation). Additionally these signs may or may not adhere to
phonological constraints proposed for lexical signs (e.g., Battison, 1978). Thus, the
‘horns’ handshape used on the non-dominant hand in BSL GOAL is a violation of
Battison’s dominance constraint. Nonetheless, this sign is conventionalised in form and
meaning and is regarded as a fully lexical sign. Similar issues arise when considering
signs that have been nativised from fingerspelling. We describe how these entries are
accounted for within a phonological description of BSL’s core lexicon. In doing so, we
show how theory (via phonological modelling) is applied to a lexicon (which relies on
phonological contrast) and how this practice can then help inform theory, in an iterative
process.
Corresponding authors:
j.fenlon@ucl.ac.uk
kate.rowley@ucl.ac.uk
k.cormier@ucl.ac.uk
Friday 12th July
Anna Safar & Onno Crasborn
Presentation 33
Centre for Language Studies, Radboud University Nijmegen, Netherlands
Handedness in deaf signers
Language of presentation: English
Signers use their hands as the main articulators, yet handedness remained a relatively
neglected area in the study of sign languages until recent years. Here we report a series of
studies investigating hand preferences of deaf signers, in signing and other activities, using
both questionnaire responses and observational methods.
First, we analyzed hand choice in signing based on the annotations of the Corpus NGT
(Crasborn, Zwitserlood & Ros, 2008) and found three distinct patterns of hand preference.
Most of the 92 signers in this corpus (88%) are right-handed, 5% are left-handed and 7% do not show a strong hand preference. This incidence of right-handedness is similar to that found in various hearing populations, and contradicts earlier reports of increased non-right-handedness among deaf people (e.g. Dane and Gümüstekin, 2002), at least as far as signers' handedness for signing is concerned. However, compared to previous research (in particular, Sharma, p.c.), we found more ambilateral signers but fewer left-handers.
A second study investigated differences between right-, left- and mixed-handed signers
by looking at hand dominance in spontaneous signing. Participants were chosen from the
Corpus NGT by including all left-handed (N=5) and mixed-handed (N=6) participants and
matching right-handed participants (N=9) on age and gender. Video recordings from each
participant were annotated sign-by-sign for hand dominance and sign type (that is whether a
sign was one-handed or two-handed, symmetrical or asymmetrical). Group level differences
were found between right-, left- and mixed-handed signers for all sign types. Signers differed
both in the direction and strength of lateralization, right-handers showing slightly stronger
preferences than left-handers, while mixed-handed signers’ hand preferences were weak.
Hand preference was not uniform among various sign types, as asymmetrical two-handed
signs were more often articulated by the preferred hand than one-handed signs. Further
analysis revealed differences between handedness groups in the number of dominance
reversals (switches of hand dominance during signing). Mixed-handed signers switched
dominance most often, followed by left-handers, while right-handers showed the fewest
reversals.
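As a rough illustration (not the authors' code; the annotations are invented), direction and strength of preference and dominance reversals could be computed as:

# A minimal sketch (invented annotations, not the authors' code) of two
# measures used above: a laterality index capturing direction and
# strength of hand preference, and a count of dominance reversals.
tokens = ["R", "R", "L", "R", "R", "R", "L", "R"]  # dominant hand per sign

right, left = tokens.count("R"), tokens.count("L")
# Index in [-1, 1]: +1 = fully right-handed, -1 = fully left-handed.
laterality = (right - left) / (right + left)

# A reversal is a switch of the dominant hand between consecutive signs.
reversals = sum(a != b for a, b in zip(tokens, tokens[1:]))

print(f"laterality index: {laterality:+.2f}, dominance reversals: {reversals}")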
In a third study, we invited Corpus NGT participants to fill in an online questionnaire about handedness in everyday life as well as signing. After taking into account a self-selection bias among the 55 respondents (left-handers were more likely to respond), handedness based on questionnaire responses regarding everyday activities showed a similar distribution as in the general population. However, 9% of respondents show a mismatch between the direction of handedness for signing and other activities (for example, writing with the left hand but signing with the right, or vice versa). This mismatch is not explained by a history of either forced change of handedness or injury. Further analysis will focus on whether any characteristics can be identified that set these signers apart.
We will discuss the main findings from these studies and argue that these results
suggest that a more nuanced picture of handedness for signing is necessary. The relevance of
this issue for a range of research areas across the fields of linguistics, psycholinguistics and
neuroscience will be addressed.
Corresponding authors:
annasafar@gmail.com
o.crasborn@let.ru.nl
Friday 12th July
TISLR 11, July 2013
*Ronice Quadros, ^Diane Lillo-Martin, >Helen Koulidobrova
& +Deborah Chen Pichler
Presentation 34
*Universidade Federal de Santa Catarina, Brazil; ^University of Connecticut; >Central Connecticut State University
+Gallaudet University, USA
Noun Phrases in Koda Bimodal Bilingual Acquisition
Language of presentation: English and ASL
For child and adult bilinguals, both languages are always activated (Kroll et al. 2006), and therefore, the
languages may interact with each other in numerous ways, including the phenomena known as cross-language
influence, code-switching, and code-blending. We have been building a model in which these phenomena are
essentially non-distinct, and simply fall out from the architecture of the language faculty as instances of what we
call ‘language synthesis’ (Figure 1).
In this presentation, we support this view of language synthesis by examining Noun Phrases produced by
koda bimodal bilinguals, hearing children with deaf parents, simultaneously acquiring a sign language (American
Sign Language (ASL) or Brazilian Sign Language (Libras)) and a spoken language (English or Brazilian Portuguese
(BP)). We examine the use of determiners in the spoken languages and the ordering of nouns and adjectives in
both the spoken and sign languages in two types of data: longitudinal spontaneous production from three
children ages 2;00-3;01, and elicited production from a larger group of 4- to 7-year-olds.
Libras and ASL are different from BP and English (respectively) in not requiring an overt determiner in
most NP contexts. Despite the prevalence of determiners in the spoken languages, monolingual two-year-old
children omit them in required contexts, but the rate of ungrammatical omission quickly declines (especially in
BP, as in other Romance languages; Lopes 2006; Demuth & McCullough 2009). If the bimodal bilingual children
in our study are using determinerless structures from their signed languages in their spoken languages, it might
be expected that they would persist in the non-target use of noun phrases with article omission past the age at
which such usage disappears for monolingual children, and/or to a larger extent during the typical omission
period. Indeed, this is what we found. Table 1 shows the proportion of non-adult-like (NAL) DPs for each child,
and of those, the proportion of errors due to missing or incorrect articles (*art); see examples in (1). We have
also observed determiner omission continuing into the fifth year, and in adult code-blending contexts (2).
A different finding emerged from our study of adjective placement (3). Libras and English are strict in
the placement of adjectives after or before nouns, respectively. BP occasionally permits prenominal adjectives,
and ASL more freely allows postnominal ones. Thus, if the children show synthesis in this domain, we might
expect to see prenominal adjectives in Libras, and more likely, postnominal ones in English. However, we did not
find this usage, even though the US children did occasionally use post-nominal adjectives (appropriately) in their
ASL.
The contrast between domains where influence is found (determiners) and those where it is not found
(adjective placement) allows us to refine the details of our synthesis model. In particular, the post-nominal
placement of adjectives in ASL must involve a syntactic structure which is incompatible with the English DP;
whereas the use of null determiners is fully compatible, as they are found in both English and BP for mass nouns
and generics.
Corresponding authors:
ronicequadros@gmail.com
lillo.martin@uconn.edu
elena.koulidobrova@ccsu.edu
deborah.chen.pichler@gallaudet.edu
Friday 12th July
TISLR 11, July 2013
*Chiara Branchini & ^Caterina Donati
Presentation 35
*Ca' Foscari University of Venice; ^Sapienza University of Rome, Italy
Corresponding authors:
chiara.branchini@unive.it
donatica@yahoo.it
Friday 12th July
TISLR 11, July 2013
Kathryn Davidson, Corina Goodwin & Diane Lillo-Martin
Presentation 36
University of Connecticut, USA
Corresponding authors:
kathryn.davidson@gmail.com
corinag@gmail.com
lillo.martin@uconn.edu
Friday 12th July
TISLR 11, July 2013
Amanda Dupuis & Iris Berent
Presentation 37
Northeastern University, USA
Lexical access to signs is automatic
To be presented in Spoken English
Automaticity is the hallmark of linguistic competence. Words consist of arbitrary form-meaning pairings (1-2).
Moreover, an encounter with a word’s form automatically activates its stored meaning irrespective of task
demands. Stroop-like interference offers the gold standard for automaticity, as it demonstrates the activation of
words’ meanings despite contrary task demands (3-4). However, most previous Stroop studies were exclusively
based on hearing participants. Only one previous Stroop study examined ASL signers, but its results were
inconclusive (5). Accordingly, it remains unknown whether the automaticity of lexical access is a property of
language processing, generally, or speech, specifically. Here, we examine the propensity of signs to induce
Stroop interference among Deaf native signers of American Sign Language (ASL).
Our study featured monochromatic videos paired with ASL signs—either congruent ASL color signs (the sign for
BLUE presented in the color blue), incongruent signs (e.g., the sign for GREEN presented in the color blue) or a
novel neutral sign XX—a sign that shares location, palm orientation, and movement with the color signs, but
contrasts on handshape. The proportion of the three conditions was balanced, and their order was
randomized. Participants (Deaf, native ASL signers, N=7) were asked to sign the color of the video as quickly and
accurately as possible. An analysis of variance showed that the congruency between the signs and the color
reliably modulated both response time and accuracy (response time: F(2, 12) = 24.53, p = 0.00005; accuracy:
F(2, 12) = 4.03, p = .05, see Figure 1). Specifically, incongruent signs produced slower (∆ = 54.9 ms, t(12)
=14.288; p<.003) and less accurate responses (∆ = -0.0262, t(12)=6.054; p<.04) compared to the neutral
condition, whereas congruent signs facilitated response time (∆ = 46.8 ms, t(12) =10.382; p<.008; in accuracy: ∆
= 0.00).
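To make the congruency comparison concrete, here is a minimal Python sketch computing mean response time per condition and the interference and facilitation effects against the neutral baseline. The trial data are invented, and this simple aggregation stands in for, rather than reproduces, the ANOVA reported above.

    from statistics import mean

    trials = [  # (condition, response time in ms) -- toy data
        ('congruent', 610), ('neutral', 655), ('incongruent', 710),
        ('congruent', 600), ('neutral', 650), ('incongruent', 705),
    ]

    rt = {c: mean(t for cond, t in trials if cond == c)
          for c in ('congruent', 'neutral', 'incongruent')}

    interference = rt['incongruent'] - rt['neutral']  # slowing vs. neutral
    facilitation = rt['neutral'] - rt['congruent']    # speed-up vs. neutral
    print(rt, interference, facilitation)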
To determine whether participants’ attention to the ASL signs was due to a response strategy, promoted by the
presence of congruent signs, Experiment 2 repeated Experiment 1 without the congruent condition. Preliminary
results (N=3) suggest that the Stroop interference remained intact (F(1, 2) = 40.577, p = .02378, in accuracy: see
Figure 2). In a third experiment, we sought to determine whether the Stroop interference might be due to
response competition or automatic lexical access. To this end, we asked participants to respond by pressing a
button rather than signing. Preliminary results (N=3) showed that incongruent signs nonetheless produced
slower responses compared to congruent ones (∆ = 72.29 ms, t(4) =8.11; p<.05).
Taken as a whole, these results suggest that Deaf native signers of ASL automatically retrieve signs’ meanings.
We conclude that the automaticity of lexical retrieval is an amodal property of natural language processing,
irrespective of modality—speech or sign.
Corresponding authors:
amandaleighdupuis@gmail.com
i.berent@neu.edu
Friday 12th July
TISLR 11, July 2013
Benjamin Anible, Corrine Occhino-Kehoe
& Jeannine Kammann-Cessac
Presentation 38
University of New Mexico, USA
Corresponding authors:
cocchino@unm.edu
banible@unm.edu
jkamman@unm.edu
Friday 12th July
TISLR 11, July 2013
*So-One Hwang, ^Clifton Langdon, ^Concetta Pucci, +William Idsardi
& ^Gaurav Mathur
Presentation 39
*UC San Diego; ^Gallaudet University; +University of Maryland, College Park, USA
Corresponding authors:
soone@ucsd.edu
clifton.langdon@gallaudet.edu
concetta.pucci@gallaudet.edu
idsardi@umd.edu
gaurav.mathur@gallaudet.edu
Friday 12th July
TISLR 11, July 2013
Amber Martin
Presentation 40
Barnard College of Columbia University, USA
Age of Language Acquisition Affects Spatial Cognitive Development – ASL
Corresponding author:
amartin@barnard.edu
Friday 12th July
TISLR 11, July 2013
Carla Morris
Presentation 41
Gallaudet University, USA
Lexical Variation and Change in American Sign Language: A Multi-generational Family
Case Study
To be presented in ASL.
Abstract:
This sociolinguistic case study of a multi-generational Deaf family of native signers analyzed lexical variation and
change in American Sign Language (ASL). The journal American Speech chronicles lexical changes in American
English, and from 1941 to 1991 an average of 600 words were added annually (Algeo 1991). Written dictionaries
have recorded much of ASL’s lexicon, but none have attempted to specifically document its lexical changes
(Long 1910; Stokoe 1965). Corresponding linguistic patterns are found in spoken and signed languages, and
since so much lexical change has occurred in American English in just a few decades, this study asked: What
variations or changes have occurred in ASL during a similar period of time?
Previous studies have revealed the influence of social networks on language variation and change
(Labov 1969; Milroy & Milroy 1985; Eckert 1994), more specifically, finding that family networks have a strong
influence on the occurrence of variation and change (Hurford 1967; Hamilton & Hazen 2009; Bugge 2009). Such
findings suggest that a family’s combined lexical use is a good representation of what occurs in a language’s
lexicon. Accordingly, I chose to recruit a family whose native language use would represent ASL’s lexicon, as it
was used from the 1930s to 2010s (approximately 80 years). I selected four consultants from the second, third,
and fourth generation of a single Deaf family, who are all native ASL signers. Consultants participated in four
filmed sessions (three individual and one group), which totaled 130 minutes of video. During filming, consultants
reported lexical items, used by their grandparents, parents, and younger family members, that differed from their
own. Data were coded for quantitative content using ELAN. Each reported lexical item was glossed and
coded for the reporting consultant’s generational group, the item’s attributed place of origin, and the item’s
attributed generational group.
Consultants reported a total of 84 lexical items. Within the data, two patterns were found: 1) a variation
of phonological forms, or 2) a full change of the item (replacement or obsolescence). Phonological variations
correlated with a difference in generational group. In other words, if a sign’s use was attributed to both the
grandfather and granddaughter, these uses varied in phonological form (e.g., HAMBURGER and HELP). Full item
changes correlated with a difference in region, specifically where the attributed consultant had grown up and
gone to school (e.g., MOVE-CAR and UPTOWN, attributed to New York City). One item (SWEATER) exhibited both
patterns. A majority of the 84 items was attributed to the family’s first generation of signers (c.1910), who are
now deceased. The fewest items were attributed to the fourth generational group.
This preliminary study shows that several instances of variation and change have occurred, as exhibited
by one family. It hints at the multiple variations and changes that may be present in the entire lexicon of ASL. By
evaluating lexical use within families of native signers we can better understand how ASL’s lexicon is evolving, as
well as how historical and societal changes have influenced the language.
Corresponding author:
carla.morris@gallaudet.edu
Friday 12th July
TISLR 11, July 2013
*Mieke Van Herreweghe & ^Myriam Vermeerbergen
Presentation 42
*Ghent University; ^KULeuven, Belgium
Sparkling or still?
Flemish native and non-native signers from 3 generations and the productive lexicon.
A number of sign language linguists discern two manifestations of sign language structure and/or use, i.e., a
form which makes maximal use of the possibilities offered by the visual-gestural modality (e.g. iconicity, use of
space, simultaneity) and a form more closely resembling the non-iconic, sequential organisation of oral language. Examples are
Cuxac’s (1996, 2000) “dire en montrant” versus “dire sans montrer” and Vermeerbergen’s (2006) “de l’eau
pétillante” (sparkling water) versus “de l’eau plate” (still water). Concomitant with these two manifestations is
the idea that “de l’eau pétillante” makes more use of what can be called productive lexicon. In the Flemish Deaf
community it is generally assumed that “better” signers use more resources that reflect “de l’eau pétillante”.
Hypotheses
1. Older generations of VGT signers use relatively more resources from the productive lexicon and younger
generations rely more on the established lexicon.
2. Native signers (since they are supposedly better signers) use relatively more resources from the productive
lexicon and non-native signers rely more on the established lexicon.
Methodology
In Flanders a new corpus project was started up in 2012. The aim is to collect data from 120 participants in
different age groups from all over Flanders. For this study, we have selected a subset of 12 participants
belonging to 3 age groups: 75+, 40-50, 17-25; each group consisting of 2 native and 2 non-native signers.
Because we wanted to make sure that participants would have ample opportunity to “dire en montrant”, we
made use of a picture story retelling of The Horse Story (Hickmann 2003).
For the analysis, we decided to work with the following categories: lexical signs, depicting signs containing entity
and SASS-handshapes, handling (depicting) signs, and gesture (based on Cormier et al., under review; Johnston &
Schembri 2007; Ferrara 2012). We then calculated their percentages in each of the stories. We also compared
the use of role-taking across participants.
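As an illustration of the percentage calculation, here is a minimal Python sketch, assuming each retelling has been reduced to one category label per token; the labels are hypothetical shorthand for the categories above, not the project's actual annotation values.

    from collections import Counter

    tokens = ['lexical', 'lexical', 'depicting-entity', 'handling',
              'lexical', 'gesture', 'depicting-SASS']  # one label per token

    counts = Counter(tokens)
    total = sum(counts.values())
    for category, n in counts.items():
        print(f'{category}: {100 * n / total:.1f}%')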
Results
Results show that there are many interpersonal differences. Nevertheless, generally speaking, older generations
use relatively more resources from the productive lexicon and younger generations rely more on the established
lexicon, but there were no overall differences between native and non-native signers in this respect.
However, there are indications that age and nativeness are interconnected. For example, results from the
middle group (i.e., signers aged 40-50) suggest that native signers rely more on the established lexicon than
non-native signers but this does not seem to be the case for older signers. Another interesting result relates to
role-taking: overall older signers use more role-taking than younger signers. And within the group of younger
signers, native signers use more role-taking than non-native signers. Clearly these results need to be
corroborated in follow-up research using a larger corpus.
An interesting spin-off of the research is that it has given the researchers ample opportunity to think about how
to deal with “de l’eau pétillante” in transcription and annotation of sign language corpus data (cf. Sallandre &
Garcia, 2013). Annotating the data used for this study was not always a straightforward activity. Some of the
difficulties we encountered include: (1) identifying a specific sign as established rather than as productive, (2)
applying the category of constructed action and (3) the identification of role taking (see Ferrara, 2012 for a
comparable discussion of “issues in annotation” of Auslan data).
Corresponding authors:
mieke.vanherreweghe@ugent.be
myriam.vermeerbergen@arts.kuleuven.be
Saturday 13th July
TISLR 11, July 2013
Amy M. Lieberman, Arielle Borovsky, Marla Hatrak
& Rachel I. Mayberry
Presentation 43
University of California, San Diego, USA
Corresponding authors:
alieberman@ucsd.edu
aborovsk@crl.ucsd.edu
mhatrak@gmail.com
rmayberry@ucsd.edu
Saturday 13th July
TISLR 11, July 2013
Connie de Vos
Presentation 44
Max Planck Institute for Psycholinguistics, Netherlands
The syntactic integration of pointing signs from a developmental perspective [ENGLISH]
Kata Kolok is a village sign language of North Bali that is unrelated to any other sign language (Marsaja 2008).
This language has been the primary means of communication of at least five subsequent generations of deaf
signers in social, liturgical, professional, and educational settings. It has a number of structural characteristics
that are unusual compared to other sign languages; most notably, Kata Kolok signers prefer an absolute Frame
of Reference in spatial description. Perhaps less surprisingly, one in six signs is an index finger point in Kata
Kolok, and these pointing signs come to serve a wide range of functions including, but not restricted to, colour
indications, time adverbial constructions, and place and person reference (de Vos 2012). These different
pointing types appear grammaticalised, i.e. syntactically integrated and/or morphemised to varying degrees. At
any rate, the ubiquity of pointing signs and their high degree of functional diversification suggest that pointing
signs are a key aspect in learning to use the language appropriately.
While all infants use communicative pointing gestures, pointing signs in sign languages come to serve
grammatical functions, particularly in the domain of place, time and person reference. Adopting a
developmental perspective, what language-specific evidence can be identified in support of the syntactic
integration of different pointing types? Petitto (1987) showed that children acquiring American Sign Language
start to differentiate first and second person pointing by 24-27 months, paralleling hearing children acquiring
English pronouns in terms of both timing and acquisition errors.
This study is based on monthly video recordings of a deaf preschooler between the ages of 24-36 months, as he
acquires Kata Kolok. Based on the analysis of 5.5 hours of video data, it explores the use of index finger pointing
in spontaneous conversations between the child and his caregivers, relatives, and older siblings. Transcription
activities have thus far resulted in the identification of 1,119 manual signs, of which 458 (41%) are pointing
signs. Following Petitto's (1987) study, I highlight some of the developments that indicate the transition from
pre-linguistic gesture to pointing sign.
For example, the child starts to produce multiple grammatical facial expressions alongside index finger points,
such as the perfective marker 'pah', around 25 months, suggesting that the syntactic integration of pointing
signs starts as early as 25 months. This finding matches previous observations on the syntactic acquisition of
pronominal pointing signs made by Petitto (1987). These initial results suggest that the ontogenetic development
of pointing may provide additional evidence for the grammatical status of pointing in adult signing, as
instantiated in language-specific ways, and for different pointing types.
Corresponding author:
connie.devos@mpi.nl
Saturday 13th July
TISLR 11, July 2013
*Rain Bosworth, ^So-One Hwang, >David Corina, Paul Hildago
& *Karen Dobkins
Presentation 45
*Department of Psychology, University of California, San Diego; ^Center for Research in Language, University of California,
San Diego; >Center for Mind and Brain; Department of Linguistics, University of California, Davis, USA
Biological attraction for natural language rhythm: Eye-tracking in infants and children using reversed
videos of signs and gestures
Purpose. Shortly after birth, humans demonstrate a language bias for spoken language over non-language stimuli, a finding that has recently been extended to signed input (Krentz & Corina, 2008; Petitto et al.,
2011). With spoken languages, hearing infants are sensitive to the conspecific sounds in the speech frequency
range with temporal rhythms such as syllable-sized amplitude-modulated envelopes (Ramus et al., 2000; Rosen,
1992). Here, we ask which features contribute to the early recognition of visual-manual input as
natural language. Signing has been contrasted with other biologically natural and communicative movements
such as pantomime because signing is produced with more defined movements, a constrained signing space,
and a greater repertoire of handshapes. Moreover, signing also has a temporal structure (Wilbur & Nolen, 1986;
Nespor & Sandler, 1999; Boyes-Braem, 1999) and triggers expectations of representational momentum (Wilson
et al., 2010). Our key aim is to determine whether sensitivity to linguistic features of signs increases with, and
decreases without, visual language exposure. Specifically, we investigate visual language processing using videos
of signs and gestures, played normally and reversed. Video reversals maintain spatial features of the original
actions, such as signing space and handshapes, but alter natural temporal characteristics, which we assume
disrupts the biological naturalness of body motion.
Methods. Using a Tobii eyetracker (sampling rate 120 Hertz), we measured total looking time in sign-exposed (SE) and non-sign-exposed (NSE, exposed to spoken English) infants (5 months and 1-12 months) and
children (4-6 years) while they watched videos of signs and gestures, played normally and reversed. In stimuli
design, gestures and signs were matched in handshape complexity, movement patterns, handedness, and
location. We presented a continuous string of 7 items for each of the 4 conditions twice. Items were
counterbalanced so that no subject saw the same item in both forward and reversed conditions. We analyzed
absolute looking preference, percent mean looking time for regions of interest (face, torso, and signer’s body),
heat maps showing concentrations of duration, and gaze trajectories per condition.
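A minimal Python sketch of the percent-looking-time measure follows, assuming gaze samples have already been mapped to region-of-interest labels at the 120 Hz sampling rate; the data format and the 'off' label are hypothetical, not the authors' pipeline.

    from collections import Counter

    samples = ['face', 'face', 'torso', 'face', 'off', 'face', 'torso']  # one ROI per sample
    looking = [s for s in samples if s != 'off']   # samples on the signer

    roi_time = Counter(looking)
    for roi, n in roi_time.items():
        secs = n / 120  # 120 samples per second
        print(f'{roi}: {secs:.3f} s, {100 * n / len(looking):.1f}% of looking time')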
Results. In SE children, natural signs engaged highly focused attention on the face while the reversed
signs revealed a more diffused gaze pattern. Meanwhile, NSE children did not differentiate between forward vs.
reversed signs in gaze patterns. We also examined whether the effect of reversal was greater on signs than
gestures, because reversal is a violation of signers’ phonological knowledge, whereas gestures presumably do
not have such phonology. We found a significant reversal effect for signs in SE (but not NSE) children; the effect
was much weaker for gestures among SE children.
Discussion. Infants show an early attentional bias for biological motion in visual language. Without sign
exposure, the sensitivity to sign stimuli with phonological structure declines. Future work will analyze which
structural properties of the sign signal drive children’s attention to natural languages. We are currently
investigating the impact of natural and unnatural biological motion for language processing in deaf and hearing
adult signers.
Corresponding authors:
rain@ucsd.edu
soone@ucsd.edu
corina@ucdavis.edu
kdobkins@ucsd.edu
Saturday 13th July
TISLR 11, July 2013
Silke Matthes, Gabriele Langer, Dolly Blanck, Thomas Hanke,
Reiner Konrad, Susanne König & Anja Regen
Presentation 46
IDGS, University of Hamburg, Germany
Corresponding authors:
silke.matthes@sign-lang.uni-hamburg.de
gabriele.langer@sign-lang.uni-hamburg.de
Dolly.Blanck@sign-lang.uni-hamburg.de
thomas.hanke@sign-lang.uni-hamburg.de
reiner.konrad@sign-lang.uni-hamburg.de
susanne.koenig@sign-lang.uni-hamburg.de
anja.regen@sign-lang.uni-hamburg.de
Saturday 13th July
TISLR 11, July 2013
*Diane Brentari, ^Virginia Volterra & *Jonathan Keane
Presentation 47
*University of Chicago/Linguistics Department, USA;
^National Research Council (CNR), Institute of Cognitive Sciences and Technologies, Italy
Corresponding authors:
dbrentari@uchicago.edu
vvolterra@teletu.it
jonkeane@uchicago.ecu
Saturday 13th July
TISLR 11, July 2013
Ceil Lucas, Gene Mirus, Jeff Palmer, Nicholas Roessler
& Adam Frost
Presentation 48
Gallaudet University, USA
The Effect of New Technologies on Sign Language Research
It is well known that studies of the structure and use of sign language usually require the recording of the
language, to create a visual record. It is not sufficient to simply observe language. This recording of language
has of course involved a number of logistical issues and issues pertaining to the protection of human subjects,
given that it is virtually impossible and not even desirable to preserve anonymity. Traditional ways of collecting
sign language data involve, by definition, intense face-to-face interaction. However, recent technologies are
having an impact both on how sign language data is collected and on what kinds of data can be studied. This
powerful effect of new technologies on sign language research has direct bearing on the TISLR theme of sign
language documentation. This presentation will first review the fairly established ways of collecting sign
language data and will then discuss the new technologies available and their impact on sign language research,
both in terms of how data is collected and what new kinds of data are emerging as a result of technology. New
data collection methods and new kinds of data will be illustrated with three specific case studies. In the first
study, videotaped data was recorded directly from the computer screen while dyadic deaf videophone callers
were signing to one another. In collecting evidence to show how digital technologies such as the videophones
influence aspects of communicative behavior through new contexts for social interaction, the researchers were
present as observers at only one end of the call while callers at each end carried on a conversation with each other.
Callers used a small webcam (a simplified video camera for web interfaces) with a desktop computer, linked
through the Internet, and communicated visually with one another over ordinary telephone lines. In the second
study, researchers investigated whether or not videophone communication with sign language interpreters
from various regions around the United States is contributing to the standardization of ASL. This new technology
has created virtual mobility in which interpreters and deaf consumers must negotiate regional variation. The
proliferation of videophones has allowed deaf people to communicate face-to-face from the comfort of their
home, and all participants, deaf people and interpreters alike, are now exposed to regional and social variation
in ASL in an unprecedented way. As of this writing, the Federal Communications Commission (FCC) forbids the
recording of actual interactions, so descriptions are limited to interpreter accounts of the variation that they are
seeing, but these descriptions are very informative. In the third study, the researchers conducted a survey
administered via the Internet in ASL with the use of video files instead of written text. The survey and the
responses to the survey were all done virtually. Although there were some unforeseen snags, this medium
proved to be very successful in gathering diverse data from diverse locations around the country in a very short
amount of time. All of these technologies will be illustrated and the implications for research involving human
subjects discussed.
Corresponding authors:
ceil.lucas2@gmail.com
gene.mirus@gallaudet.edu
jeff.palmer@gallaudet.edu
nicholas.roessler@gmail.com
Saturday 13th July
TISLR 11, July 2013
Peter Hauser
Plenary Keynote Presentation
Rochester Institute of Technology (RIT), National Technical Institute for the Deaf (NTID), USA
Sign Language Assessment:
A bridge between theory and practice
Over the past couple of decades, research on theoretical issues in sign language linguistics has grown
exponentially and significantly contributed to our understanding of language processing. These discoveries,
particularly the early studies that illustrated how signed languages are natural languages, have liberated the
deaf community in many ways (Hauser, 2004). Yet, there remains a great need for a bridge between theoretical
findings and Deaf Education and sign language instruction, particularly the development of sign language
proficiency measures (e.g., Paludnevičiene, Hauser, Daggett, & Kurz, 2012). While spoken language proficiency
tests have been used for government, occupational, clinical, research, and educational purposes around the
world for generations, many countries do not have any sign language proficiency tests. Sign language
proficiency tests enable researchers to conduct new psycholinguistic (e.g., Kubus, Rathmann, Morford, &
Wilkinson, 2010; Morford, Wilkinson, Villwock, Piñar, & Kroll, 2011) and neurolinguistic (e.g., Corina, Lawyer,
Hauser, & Hirshorn, 2013) research. Such tests enable clinicians and educators to document and track children’s
sign language development and identify those who require sign language intervention (e.g., Quinto-Pozos,
Singleton, Hauser, Levine, Garberoglio, & Hou, 2012).
The marriage between current psychometric theory and theoretical sign language linguistics yields
state-of-the-art assessment instruments. This talk will focus on two main theories of test development: Classical
Test Theory (CTT) and Item Response Theory (IRT) including the Rasch Model. Emphasis will be placed on how
current linguistics findings are used as the framework for test development. Three tests that were developed
based on different aspects of theoretical linguistics research will be discussed. The levels of linguistics analysis
that will be discussed and the relevant tests are: (a) phonology (ASL Discrimination Test, Bochner, Christie,
Hauser, & Searls, 2011); (b) depiction or depicting verbs (ASL Comprehension Test, Hauser, Paludnevičiene,
Dudis, Riddle, & Daggett, 2010); and, (c) a global test that involves all levels of grammar (ASL Sentence
Reproduction Test, Hauser, Paludnevičiene, Supalla, & Bavelier, 2008). This talk will also discuss the importance
of analyzing the Age of Acquisition and differences between native and non-native signers to establish construct
validity. The purpose of this talk is to illustrate how sign language linguistics can contribute to test development
and ultimately have an impact on both theory and practice.
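For readers unfamiliar with IRT, the following minimal Python sketch shows the dichotomous Rasch model mentioned above: the probability of a correct item response as a function of person ability (theta) and item difficulty (b), both on the same logit scale. The example values are illustrative only.

    import math

    def rasch_p(theta, b):
        # Rasch item response function: P(correct) = 1 / (1 + exp(-(theta - b)))
        return 1 / (1 + math.exp(-(theta - b)))

    print(rasch_p(0.0, 0.0))  # ability equals difficulty -> 0.5
    print(rasch_p(1.5, 0.0))  # more able test-taker -> about 0.82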
TISLR 11 ABSTRACTS FOR POSTERS SESSION 1
Wednesday 10th July, Poster Session 1
TISLR 11, July 2013
*Svetlana Dachkovsky, ^Christina Healy & *Wendy Sandler
Poster 1.01
*University of Haifa, Israel; ^Gallaudet University, USA
The Universal and the Particular in Sign Language
The prosodic systems of visual languages are characterized by features similar to those of spoken languages: rhythmic
constituents, prominence cues, and intonation (Wilbur 2000, Sandler 2012). Yet, while it is well known that intonational
tunes vary from spoken language to spoken language, no previous research has systematically compared this system across
sign languages. What properties are shared among prosodic systems of different sign languages, and what prosodic
features make them look different? The present study is the first to compare the prosodic systems of two sign languages:
American Sign Language (ASL) and Israeli Sign Language (ISL). Using the same corpora and methodology for each language,
we identify some prosodic markings that are candidates for prosodic universals, and others that are language-particular.
The data collection procedure, coding parameters, and analysis were the same for both languages, making this study
different from other comparative studies (e.g., Zeshan 2004). The data were elicited from 6 ASL signers and 5 ISL signers,
and both manual and non-manual aspects of prosody were coded in detail. In this way, the study yields parallel corpora and
consistent coding and analysis, resulting in a more rigorous comparison than has previously been available in sign language
research.
Although the basic organizing principles of the prosodic system are common to the two languages, they differ significantly
in the details which constitute the individual intonational ‘flavor’ of each language. Yes/no questions are almost identical in
ASL and ISL, but differences were found on most other structures. For example, a salient intonational difference between
ASL and ISL is found on topics. Brow raise typically marks topics in ASL, while that constituent in ISL is marked by squint.
Another finding of this study is that displays that look ‘the same’ and have the same function in ASL and ISL can be
produced in different ways in each language, i.e., using different muscles. All the articulatory details of non-manuals were
coded using the FACS system (Ekman & Friesen 1978, 2002), for which coders of both languages are certified. An example
is what looks like a ‘squint’ in both languages, but is produced by cheek raise in ASL and by tightened lower lids in ISL.
Finally, the present study for the first time claims that not only facial configurations, but head movements as well play an
important role in sign language intonation. Our analysis provides evidence for this claim by showing that head components
contribute post-lexical meanings and obey prosodic boundaries. Thus, along with the differences in facial components,
topics are signaled by a head up position in ASL but by a gradual forward head movement in ISL.
Our investigation goes beyond purely descriptive aspects of the prosodic systems in ASL and ISL. It identifies similarities
between the two languages as possible candidates for sign language universals in this domain, as well as language-particular signals, meanings, and privileges of occurrence which provide novel evidence for prosody as a grammatical
system.
Wednesday 10th July, Poster Session 1
TISLR 11, July 2013
Sibaji Panda & Hasan Dikyuva
Poster 1.02
UCLAN, UK
Subtractive numerals in two village sign languages
In this paper we present data on subtractive numerals in Mardin Sign Language (MarSL), Turkey and Alipur Sign Language
(APSL), India. Both sign languages originated in small-scale rural communities with high incidence of hereditary deafness.
The unusual features found in these data considerably extend the known range of typological variation across sign
languages in the domain of numerals. Subtractive systems are attested though infrequent across spoken languages, but
have never been documented in sign languages before.
Data include a number naming task for MarSL, a bargaining game activity for APSL, and free conversations.
In Mardin Sign Language, only the numbers one to four appear as subtracted numbers. Thus this construction is possible
from 26 to 29, from 36 to 39, etc., although it seems some signers use it only for 28/29, 38/39 and so forth:
(1a) TWENTY TWO-LESS ‘18’
(1b) TWENTY ONE-LESS ‘19’
Other numbers in MarSL cannot be expressed using subtraction. Instead, ‘24’ would be signed using an additive strategy
(TWENTY FOUR). Thus the subtractive numbers in MarSL only constitute a small, non-productive sub-system.
By contrast, subtractive numerals in Alipur Sign Language (APSL) are a productive construction. In APSL, subtractive
numbers are used when the number in question is near a higher “round” number. For instance, ‘195’ can be expressed as
‘200 minus 5’, using the sign LESS:
(2) FIVE LESS TWO ‘195’
(3) TWO LESS FIFTY/HALF ‘48’
(4) TWO LESS THREE ‘28’
In the first example, TWO is used to mean ‘200’, and this is a type of ambiguity regularly used in APSL. Similarly, THREE in
the last example stands for ‘30’. The intended magnitude, and hence the actual number that the signer is intending to
convey, must be recovered from the context in these cases.
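To make the arithmetic of the construction concrete, here is a small Python sketch interpreting the subtractive pattern, with the contextually recovered magnitude passed in explicitly as a scale factor. The function and its arguments are illustrative only, not a claim about APSL grammar.

    def apsl_subtractive(subtrahend, minuend, scale=1):
        # Gloss order: SUBTRAHEND LESS MINUEND; 'scale' supplies the
        # contextual magnitude of the minuend (e.g. TWO read as 200).
        return minuend * scale - subtrahend

    print(apsl_subtractive(5, 2, scale=100))  # FIVE LESS TWO   -> 195
    print(apsl_subtractive(2, 50))            # TWO LESS FIFTY  -> 48
    print(apsl_subtractive(2, 3, scale=10))   # TWO LESS THREE  -> 28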
With respect to the order of the signs in the subtractive sub-system, the data show that the usual order of signs is to start
with the number that is being subtracted, followed by the sign LESS, followed by the larger number that is being subtracted
from. Sometimes, signers topicalise the larger number, in which case it appears first but is marked as topic with raised
eyebrows and a following pause:
(5a) TWO LESS FIFTY ‘48’
(5b) _____top
     FIFTY, TWO LESS ‘48’
Subtractive numbers that can be constructed flexibly in this way are very rare cross-linguistically. A number of spoken
languages have subtractive numbers for single-digit numerals (e.g. Hurford 1975; Greenberg 1978). For instance, in
Athapascan languages, ‘9’ is regularly expressed as ‘(10) minus 1’ (Hymes 1955). However, the subtractive sub-system is
productive in Alipur Sign Language, and that makes it highly unusual.
Subtractive numerals are not obligatory in MarSL and APSL but constitute a commonly used option for communicating
numbers. Documentation of these unusual structures adds to our understanding of typological diversity in the domain of
cardinal numerals, which has been studied extensively in spoken languages but not in sign languages.
Wednesday 10th July, Poster Session 1
TISLR 11, July 2013
Agnes Villwock
Poster 1.03
University of Hamburg, Germany
Signs and their development in monastic environments:
From the rule of silence in early Christian congregations to the emergence of sign language communication in monasteries
today (Language: English)
Since the beginning of the 6th century, Christian monasteries have had a significant impact on the development of science and
education. They also prove to be a fascinating area of research with respect to the evolution of signed communication
systems and signed languages. Studies in this field lead to a better understanding of the monastic impact on the
development of deaf education (Villwock, 2011, 2012a, 2012b).
For centuries signs have played a major role in the culture of Christian monasteries. The monastic signs were developed as
an alternative communication system to the spoken modality of language, which was thought to be a root of sin and
to represent abandonment of the cloistral ideal (Bruce, 2007).
So far, there are no data establishing exactly when signs were first used as a way of communication
in monastic congregations. Nonverbal signs were already known in pre-Christian times, for example in the Egyptian, Roman
and Greek societies.
In the monastic context, signs were used as a substitute for spoken language. The monks or nuns were hearing, with a few
exceptions, but had taken a vow of silence, which was only to be broken on certain dates or times of the day. There is
evidence for numerous different sign systems in the diverging Christian orders. Some of them were documented in the so-called signa-lists. These lists contain descriptions of signs for religious purposes as well as those for daily life in specific
monasteries (Fischer, 1996). Although the rule of silence has lost its strictness today, in some orders these signs are still in
use.
The signs of the Cistercian order in particular are very well documented (Barakat, 1975) and offer a precious opportunity to
analyze the features of signed communication.
An extraordinarily interesting research question for the academic area of sign language studies is the possibility of a
connection between the signs of the congregations and the monasteries’ role as environments of deaf education: Were the deaf
children taught by using signs? This and related questions lead to further discussions in the field of historical linguistics.
Despite the fact that conclusive evidence for language borrowing between the monastic sign systems and signed
languages of the deaf has not yet been found, it would be highly worthwhile to investigate this possibility thoroughly.
The present study is the first not only to focus on the historical perspective, but also to consider the role of monasteries
today with respect to the usage of signed languages. Furthermore, it differentiates between various
orders and their characteristics – an important aspect which has been neglected in previous studies. In this context, the
study will also tackle the question whether the monastic impact on sign language development and usage can be estimated
to be as positive as that on deaf education (Villwock, 2011, 2012a).
Wednesday 10th July, Poster Session 1
TISLR 11, July 2013
Anke Muller
Poster 1.04
University of Hamburg, Germany
Cinematic devices in signed discourse? The case of eyeline match and point-of-view editing
(presentation in English)
This poster illustrates the observed structural similarity between film and certain iconic devices used in signed discourse.
The mental space theory approach (Liddell 2003, Dudis 2004) is used to identify the two signed phenomena to be
compared: constructed action (visible surrogates) and depicting verbs (classifier constructions). The study is based on a
small corpus of filmed signed discourse in DGS (German Sign Language). The paper focusses on the cinematic structure of
eyeline match and (optical) point-of-view editing.
Film has developed a widely used montage syntax describable as the sequence of a character looking at something
(presumably offscreen) and the object looked at. This sequence is called eyeline match and consists of two shots, the
glance shot and the object shot. A point-of-view (POV) shot is a subtype of the eyeline match (Branigan 1984).
In DGS, signers can use lexemes to express that person A sees either an object or person B. They can also use iconic devices
such as constructed action and depicting verbs. With these, a structure comparable to film syntax is produced. In both film and
sign, spatial orientation is at issue.
The DGS structure analogous to the eyeline match can be described as follows. The glance shot is expressed via constructed
action (which makes eye behaviour visible). The object shot is expressed via constructed action (e.g. in case of dialogue
editing), or via depicting verbs (in case of a long distance between looker and object).
In a dialogue sequence consisting of two instances of constructed action, the object shot can be either a reverse-angle shot or a
POV shot. The first character is presented as looking at a second, non-visible character, who is then shown as the ‘object
looked at’, either at a reverse angle or facing the discourse addressee (POV shot). The latter is a subjective shot, presenting
the view as the first character sees it.
Depicting spaces (consisting of classifier constructions) are used to represent objects at a distance. Combinations of
constructed action/glance shot and depicting verb/object shot can be expressed sequentially or simultaneously (as a
superimposition of shots). In these cases, the spatial relation between looker and object looked at tends to be preserved.
This results in the adoption of a “stage editing” strategy, with the camera seemingly tracking backwards (and sideways), but
maintaining its angle of view. The effect of this is that objects are presented to the addressee from their back side, and not
as a reverse-angle shot or a POV shot (as seen from the character). When depicted scenes are signed in front of the signer’s
eyes, a virtual POV shot may be triggered. Putting oneself in someone else’s position enables an addressee of signed
discourse to perceive depicting verbs signed in front of the signer’s eyes as a subjective shot.
Because constructed action and depicting verbs (or role shift and classifier constructions) exist in many sign languages,
cross-linguistic comparison with respect to types of eyeline match structures might reveal different strategies and
preferences. The adaptation of cinematic concepts could provide a terminological basis.
Wednesday 10th July, Poster Session 1
TISLR 11, July 2013
Anna Puupponen, Tommi Jantunen, Tuija Wainio & Birgitta Burger
Poster 1.05
University of Jyväskylä, Finland
Wednesday 10th July, Poster Session 1
TISLR 11, July 2013
Anne Baker, Patricia Spanjer & Marielle Fieret
Poster 1.06
University of Amsterdam, Netherlands
Language symptoms of dementia in an older signing population
Older people with dementia reveal problems with language as the dementia progresses. In spoken languages these
problems appear in various areas such as word-finding difficulties, empty speech, grammatical errors, and inadequate
responses (see reviews in Reilly and Hung (in press) and Reilly, Rodriguez, Lamy and Neils-Strunjas (2010)). In a signed
language such problems can also be expected to appear. However, since signers are usually bilingual, it may also be
expected that signers with dementia will make the wrong language choice for the interaction partner. Little is known to
date about the linguistic symptoms of dementia in signers. Instruments for linguistic analysis are still being developed (e.g.
Atkinson, Denmark, Woll et al. 2011).
In this exploratory study four older pre-lingually deaf signers with dementia (average age 87 years) living in The
Netherlands were compared to a matched control group of four older signers without dementia. The data were taken from
a spontaneous conversation with a deaf signing carer who was known to the informants (see example 1 from an older
woman with dementia (D2) and the interviewer (I)).
Inadequate responses, that is, answers that were not appropriate to the question or echolalic responses, were in both
categories significantly more frequent for the individuals with dementia than for the control group. Word-finding
difficulties and sign-finding difficulties were not significantly more pronounced in the group with dementia, although there
was a trend. The use of fillers was also not significantly different. Both the mean sign MLU and the mean word MLU
(mean length of utterance) were smaller in the group with dementia, but not significantly so. The percentage of
ungrammatical Dutch utterances was significantly higher in
the group with dementia. This grammatical analysis was not done for NGT since the variation in adult signing is too large to
apply strict criteria in this context. There were no significant differences between the two groups in their use of bimodal
utterances but the group with dementia made an inappropriate language choice more often than the control group. With a
deaf carer the message needs to be fully accessible through the signed modality but this was less often the case.
Methodologically it is a challenge to take into account the bimodal output of individuals with dementia and to compare
it across groups, but specific variables have indicated differences even in this small sample that need further exploration
in future research.
Wednesday 10th July, Poster Session 1
TISLR 11, July 2013
Anne Therese Frederiksen, Marla Hatrak & Rachel Mayberry
Poster 1.07
UCSD, USA
Anaphoric reference in ASL: The developmental trajectory of adolescent learners
In order to become proficient narrators, children must learn how to produce structured discourse, making
appropriate use of linguistic structure and obeying discourse pragmatic principles. One important step is the
mastery of anaphoric use of pronouns, i.e. 'Kate saw Peter. He waved at her', which is known to happen fairly
late in both Deaf [1] and hearing [2] children, despite the fact that they begin using pronouns relatively early. Young
ASL-acquiring children progress from the use of literal points (at objects or people), to signs (to name objects or
people), to abstract points (at empty locations where objects or people have been), and finally to anaphora,
pronominal reference to language structure [3]. Homesigners have been observed to use abstract points [4], but
whether they develop beyond this level, and whether and how second-language (L2) or late first-language (L1)
learners develop ASL pronouns, is largely unknown.
We answer this question by investigating development of anaphoric reference in ASL as a function of age of
acquisition. Adapting the original picture-description paradigm of Karmiloff-Smith [5] (the balloon stories), we
created 6 sets of pictures depicting causally related events, each with two characters and one object. The
pictures are shown sequentially on a computer screen; participants are instructed to describe the events in ASL
under three counterbalanced conditions: 1) large scale pictures of the characters are present throughout the
stimulus events and during the participant’s ASL description; 2) the large scale pictures are placed on chairs
during the stimulus events but removed immediately before the participant’s description; and 3) no props are
present at any time. We test three groups who vary in age of L1 vs L2 acquisition: (a) young adult proficient
signers, deaf and hearing; (b) hearing college students currently studying ASL; and (c) deaf immigrants learning
ASL as an L1. The ASL picture descriptions are videotaped, transcribed and managed with ELAN; each
character reference is coded as being named with a sign, a pronoun, or a point to either a prop, a location where a
prop was, or to another location. Because they have well-developed language, proficient signers, deaf or hearing,
will use ASL anaphoric reference independent of condition. Because they are learning ASL, the college students
are expected to use anaphoric reference and abstract points, but not signs or points to props. By contrast, the deaf
immigrants learning ASL are expected to use points to props and signs.
In extensive pilot work analyzing the story retelling of deaf adult signers who were native signers, late L1
learners, and adolescent homesigners learning ASL, we found, as predicted, that the native signers used ASL
anaphoric reference, in contrast to the late L1 learners, who used significantly fewer ASL pronouns, and the
adolescent learners, who used no pronouns at all (Table 1). Our on-going study experimentally replicates these results
and tests the hypothesis that late L1 learners follow the path of young child language learners in their acquisition
of anaphoric reference.
Wednesday 10th July, Poster Session 1
TISLR 11, July 2013
Annika Herrmann
Poster 1.08
University of Goettingen, Germany
Phonological compensation strategies of a one-handed signer –
A case study of German Sign Language
(English)
In daily signing, there are many situations, such as holding a glass at a party or
carrying a baby, where deaf people occasionally use only one hand for
communication. Furthermore, weak drop is described as a constraint-based
phenomenon in sign languages for specific two-handed signs (cf. Brentari 1998). But
what happens if these situations are the normal case?
This study presents data of a one-handed native signer of German Sign
Language (DGS). When the informant was 15 years old, his left arm was debilitated in
an accident. Thus, now aged 45, he has been signing with only one hand for 30 years.
The data set, annotated in ELAN, comprises a vocabulary task (sitting and standing),
storytelling using the ECHO fables, and natural signing in an interview conversation.
The signs were classified according to different categories: symmetric and
asymmetric, alternating and non-alternating, monosyllabic and reduplicated, having
contact or not, and whether or not the hands cross.
The results show that all symmetrical signs such as BOOK (see figure 1) were
performed in signing space without any modification. The signer did this without
exceptions, systematically expecting the addressee to mirror the signing hand, no
matter whether the hands usually contact each other or not. Both the contact feature
and the symmetric feature are said to block weak drop, so compensation strategies
might be expected. For the informant, however, the overall strategy of mirroring
shows that – to avoid a complicated substitution strategy – he is signing at the
midsagittal plane as if the second hand is present. In case of symmetric, but
alternating signs such as BICYCLE, the signer moves his upper body to indicate the
alternation. This strategy specifically differentiates symmetric non-alternating from
alternating signs and supports Brentari’s constraint that such forms
cannot undergo weak drop.
In complex asymmetric signs such as POLITICS and TO-SHOP (see figure 2a and
2b), the signer more frequently used specific compensation strategies. Sometimes,
the chest or the debilitated arm were used as alternative locations, and in other
cases, the table or chair were chosen as the substitute for the location feature,
depending on whether the informant was standing or sitting. If the non-dominant
hand usually involves a flat-handshape (e.g. MARRIAGE, BUTTER), a substitute
modification using the side of the body, for instance, was more likely than in cases
where the non-dominant hand would be a fist (e.g. BANANA, WORK) or a c-handshape
(e.g. BUS).
Furthermore, in complex signs that would be ambiguous with one hand only,
the signs were sequentially decomposed. Thus, one hand successively signed both
handshapes (see figure 3, MUSHROOM). Looking at special crossed signs, which
would be ineligible if they were only mirrored, the signer often signaled to the addressee
what was meant by reaching out to the addressee’s hand, using it as a support.
The results confirm the rule-based phonological system of sign languages and
illustrate systematic constraints on phonological features in a specific case study
where language has to find a different way.
Wednesday 10th July, Poster Session 1
TISLR 11, July 2013
Cecily Whitworth
Poster 1.09
McDaniel College, USA
Learner Errors and ASL Parameters
to be presented in ASL
This poster describes a study investigating the accuracy of sign parameters and features in
ASL signs produced by hearing adult second-language learners. Signs produced during
unscripted conversations between 34 beginning ASL students were coded for accuracy on five
dimensions. Non-target forms were also transcribed phonetically and compared to target forms.
The highest error rates are seen in the handshape and movement parameters, but patterns vary
widely between participants and between lexical items. In addition, a number of interesting
patterns emerged at the feature level: the relationships between the target forms and the
produced forms are not random substitutions of unrelated units, but typically involve
substitutions of a parameter unit that shares multiple lower-level features with the target.
Multiple models for the phonology of ASL have been proposed over the past fifty years.
Models based on Stokoe’s 1960 work, in which every sign is treated as having one element for
each of several simultaneously produced parameters, remain common. Most frequently, the
major parameters of signs are said to be handshape, location, movement, and palm orientation.
Also common is the theory that for any pair of linguistic structures, one is more highly marked
than the other. Some studies (Siedlecki & Bonvillian 1993, Conlin et al 2000) have examined
the relative markedness of different parameters, while others (McIntire 1977, Boyes-Braem
1990, Bonvillian & Siedlecki 1996) have focused on the relative markedness of units within a
parameter inventory. In child L1 acquisition, less-marked elements are acquired earlier,
produced more accurately, and substituted for more-marked elements (Siedlecki & Bonvillian
1997; Marentette & Mayberry 2000; Cheek et al 2001). Research focused on L2 learners has
been more limited, but has indicated that markedness, transfer, and motor skills may all affect
learner productions (Chen Pichler 2011, Mirus et al 2001, Rosen 2004).
In this study, 3203 produced signs corresponding to 246 lexical items were coded for
accuracy on five dimensions: the four traditional major parameters and a fifth addressing the
number of hands and the relationship between them. Each dimension was treated as a complex
variable comprising multiple lower-level features, and coded on a trivalent scale (correct/
partially correct/incorrect). In productions with non-target forms (one or more incorrect or
partially correct parameter values), phonetic features were also transcribed and compared to the
features of posited target forms. In this data set, most learner errors occur in the handshape and
movement parameters. These errors typically involve the substitution of phonetically similar
forms for targets, but there is no correlation between traditional markedness hierarchies and
error rates for the target forms.
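As a sketch of how such per-parameter error rates can be computed from the trivalent codes, assuming one record of parameter codes per produced sign; the data layout and labels are hypothetical, not the study's actual coding format.

    codes = [  # one dict of accuracy codes per produced sign
        {'handshape': 'correct', 'location': 'correct', 'movement': 'partial',
         'orientation': 'correct', 'hands': 'correct'},
        {'handshape': 'incorrect', 'location': 'correct', 'movement': 'correct',
         'orientation': 'partial', 'hands': 'correct'},
    ]

    for p in codes[0]:  # treat 'partial' and 'incorrect' as non-target
        errors = sum(1 for sign in codes if sign[p] != 'correct')
        print(f'{p}: {100 * errors / len(codes):.0f}% non-target')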
This study uses the phonetic details of ASL signs produced by L2 learners to examine the
nature and frequency of learner errors. Results indicate that learner errors occur in patterns that
are distinct from those displayed by children acquiring ASL as a first language: in learner
errors, the handshape and movement parameters involve similar error rates, and neither error
rates nor produced forms are correlated with markedness hierarchies for parameter inventories.
Wednesday 10th July, Poster Session 1
TISLR 11, July 2013
Carl Börstell
Poster 1.10
Stockholm University, Sweden
Revisiting Reduplication: Toward a description of reduplication of predicative signs in
Swedish Sign Language
Language of presentation: English
Previous studies on Swedish Sign Language (SSL) have shown that reduplication
expresses a variety of meanings (Bergman 1983; Bergman & Dahl 1994). However, no study
had looked specifically at reduplication, and stative predicates were always excluded from
investigation. Also, the interaction between the manual and oral component in reduplication
had previously been somewhat overlooked, as had the interaction between reduplication and
negation. Thus, the current study had as its objective to investigate the use of reduplication in
predicative signs in SSL, with special importance given to these previously overlooked areas
(i.e. stative predicates, oral reduplication, and negation).
This study investigated reduplication of predicative signs in SSL by using a small-scale
corpus, as well as using informant data from consulting sessions with a native signer. The
small-scale corpus used for the study was based on previously recorded material for the
Swedish Sign Language Corpus Project (Mesch 2011), and the ECHO project (Bergman &
Mesch 2004), as well as other pre-recorded material used for teaching purposes. This material
consisted of narrative texts, interviews, and spontaneous dialogue texts. These data were
discussed with a language consultant (a native signer of SSL), and some complementary data
also stem from the consultation sessions.
The study found that reduplication in SSL typically expresses plurality of events and/or
referents, but may also express intensification, an ongoing event, or a generic activity. There is a
distinction between external and internal pluractionality: external pluractionality is associated
with a frequentative/habitual reading, where one singular event is repeated over time (see
Example 1); internal pluractionality is associated with an iterative reading, where one event
consists of several individual repetitions (see Example 2). There is a strong tendency for
lexically monosyllabic signs to be associated with pluractional readings of reduplication,
whereas lexically repeated signs are strongly associated with ongoing events or generic
activity. For stative predicates, reduplication usually only works with temporary state statives,
expressing one state recurring over and over (see Example 3), or with one state—temporary
or inherent—being associated with several distinct individuals at the same time (see Example
4).
Investigating the use of the oral component in a reduplicated sign showed that there is a
significant difference in the use of oral reduplication with pluractional and non-pluractional
expressions of reduplication. Oral reduplication is more often associated with pluractional
meanings than with non-pluractional meanings (see Table 1). The study also found a
phenomenon not previously described: oral reduplication without manual reduplication. This
means the manual component is articulated with a single movement, but that several
reduplicated mouth movements accompany this movement. This process expresses the
ongoing function with telic predicates, and focuses on the telic predicate as a single event in
progress, thus replacing the function of manual reduplication.
Though previous studies found that SSL cannot negate reduplicated signs (cf. Bergman &
Dahl 1994), my study found a few instances of constructions with reduplication being
negated, though with other strategies than the regular negation strategies, e.g. by putting the
negator outside the intonational phrase containing the reduplication (see Example 1).
Wednesday 10th July, Poster Session 1
TISLR 11, July 2013
Christina Murmann, Peter Indefrey, Thomas Weskott
& Markus Steinbach
Poster 1.11
Heinrich-Heine University of Duesseldorf, Germany
Why FORGET needs help but HELP does not – an experimental study on the influence
of animacy and agreement on the distribution of the DGS agreement auxiliary PAM
(English)
Many sign languages distinguish agreeing verbs like HELP in (1a), which can be modulated such that
their beginning/end point coincides with loci in signing space, and plain verbs such as LIKE in (1b),
which cannot express agreement overtly (Mathur/Rathmann 2012). To bridge the agreement gap of
plain verbs, many sign languages have developed specific auxiliaries. In DGS, the auxiliary PAM
(Person Agreement Marker, 1c), which is signed with the baby‐C handshape facing the object, has
been grammaticalized from the noun PERSON (Rathmann 2000; Steinbach/Pfau 2007).
It has been observed that PAM is subject to two constraints: (i) PAM only combines with arguments
referring to human entities or at least to entities ranked high on the animacy scale; (ii) PAM only
combines with plain verbs. The combination with (un)inflected agreeing verbs is only possible in
specific uses such as emphasis. Consequently, signers are expected to judge sentences with
[+animate] objects (1a) much better than sentences with [–animate] objects (1b). Likewise,
sentences with agreeing verbs in an uninflected citation form are predicted to be better when PAM is
present (2a), whereas with inflected ones its absence should be preferable (2b). An empirical
assessment of the relevant data by means of a controlled experiment, however, was as yet missing.
Therefore, we conducted two acceptability judgment experiments using the online questionnaire
platform provided by the OnExp experimental software (Onea 2011). Experiment 1 investigates the
influence of animacy. 12 plain verbs were combined with an object of one of the following three
levels on the animacy hierarchy: (a) domestic animals (b) inanimate objects of material/personal
value, e.g. COMPUTER or CAR, and (c) inanimate objects of lower personal value, e.g. BOTTLE or BOX. A
level for human beings was not included since the use of PAM with humans is already confirmed
(Steinbach/Pfau 2007; Sapountzaki 2012). All sentences were signed both with and without PAM. In
experiment 2 we tested the combination of PAM with agreeing verbs. We used 16 agreeing verbs and
tested the following four conditions: (a) verb in agreeing form with PAM, (b) verb in agreeing form
without PAM, (c) verb in citation form with PAM, and (d) verb in citation form without PAM. All in all,
we videotaped 72 sentences for the first experiment and 64 for the second one. In the online
questionnaire study, we asked 32 participants to judge a randomized sequence of 36 sentences,
along with 8 benchmark items on a rating scale ranging from 1 (totally unacceptable) to 5 (perfectly
acceptable).
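For illustration, the predicted pattern in experiment 2 could be checked on the ratings with a sketch like the following; the abstract does not describe the authors' analysis code, and the file and column names are invented:

```python
# Hypothetical sketch: cell means for the 2x2 design of experiment 2
# (verb form x PAM), from a long-format ratings table.
import pandas as pd

ratings = pd.read_csv("exp2_ratings.csv")  # participant, verb_form, pam, rating
cells = ratings.groupby(["verb_form", "pam"])["rating"].mean().unstack()
print(cells)  # rows: 'agreeing'/'citation'; columns: 'with'/'without'

# Interaction as a difference of differences: the benefit of PAM for
# citation forms minus its benefit for inflected (agreeing) forms.
diff_citation = cells.loc["citation", "with"] - cells.loc["citation", "without"]
diff_agreeing = cells.loc["agreeing", "with"] - cells.loc["agreeing", "without"]
print(f"interaction estimate: {diff_citation - diff_agreeing:.2f}")
```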
The results show the predicted interaction with raters disfavouring PAM in sentences with
inanimate objects but not in sentences with animate objects (Figure 1). The interaction between
agreement and PAM is in line with the prediction that with inflected agreeing verbs, PAM is the
disfavoured option. By contrast, with uninflected agreeing verbs, PAM gets a better rating than zero
marking (Figure 2). We conclude that signers prefer sentences with overt agreement marking, but
agreement is preferably encoded on the verb and not on PAM. Accordingly, in DGS, PAM has not yet
become a general marker of agreement across all verb types. In addition, the distribution of PAM is
sensitive to the animacy hierarchy. Note finally that our results also show that acceptability
judgments are a viable methodological option in the investigation of sign languages.
Wednesday 10th July, Poster Session 1
TISLR 11, July 2013
*David Corina, *Shane Blau, ^Todd Lamarr, +Matt Leonard
& +Edward Chang
Poster 1.12
*U.C. Davis; ^Cal State Sacramento; +U.C. San Francisco, USA
Cortical Stimulation Mapping in a Deaf Signer
Cortical stimulation mapping (CSM) is a routine procedure used in neurosurgical treatments of brain disorders. Following
craniotomy, the surgeon applies a small electrical current to discrete cortical regions in an awake, behaving patient.
The electrical stimulation is designed to induce short-lasting focal functional disruption, which can be used to assess
neuroanatomical regions involved in motor, memory and language function. These data provide the neurosurgeon with a
detailed understanding of a particular patient's functional neuroanatomy, which is crucial for successful targeted brain
resection that will minimize long-lasting neurological effects of surgery. In this poster we present the rare case of a deaf
adult fluent signer who underwent CSM for neurosurgical treatment of a brain tumor. We report data from automatized
naming (counting) and picture naming, protocols used to identify areas involved in sign-motor function and lexical naming.
Two cortical sites resulted in significant disruption: a left frontal opercular site that disrupted automatized naming, and a
second posterior temporal site that led to sign-finding errors and sign misarticulation. We discuss these results in relation
to existing models of language representation in users of spoken languages and compare this case to two previously
described cases of deaf signers undergoing CSM.
Wednesday 10th July, Poster Session 1
TISLR 11, July 2013
David Quinto-Pozos & Lynn Hou
Poster 1.13
University of Texas at Austin, USA
Development of perspective-taking skills by deaf signing children
Language of presentation: ASL
Signed language users capitalize on the space in front of their bodies for describing the
locations, orientations, and movements of animate and inanimate objects with respect to each other;
this has been termed topographic or diagrammatic use of the signing space.1 Objects are often
represented with linguistic classifiers and the interpretation of the locations, orientations, and
movements of objects within a space may require that the interlocutor envision the objects from a
different visual perspective (e.g., signer’s).2 Thus, visual-spatial skills are needed for comprehending
linguistic content that involves spatial descriptions.
The implication for child language acquisition is that the development of perspective-taking
skills must accompany sign language development. Studies have shown that normal adult fluent
signers possess enhanced visual-spatial skills in mental rotation or detection of mirror reversals,3,4,5,6
but references to deaf children’s visual-spatial development are limited.7 Research on hearing
children’s visual perspective-taking skills generally focuses on young children, and older children’s
abilities are generally not considered.8
Visual-spatial skills can be assessed via non-linguistic means (e.g. block design and mental
rotation tests), but such tests do not provide insight about a child’s use of perspective taking while
simultaneously engaging in linguistic processing. Our team has developed such an instrument and
piloted it on deaf children.
In the American Sign Language Perspective Taking Comprehension Test (ASL-PTCT), the test-taker
is presented with 20 signed phrases via a computer, each of which describes two objects with
classifiers in specific arrangements and orientations. Here, arrangement refers to whether a specific
object is located on the left or the right (both objects are always facing in the same direction), and
orientation refers to whether one of the objects in the scene has fallen outward or inward (with respect
to the upright object) as indicated by the signer’s articulation of classifiers. The arrangements and
orientations of the classifiers are systematic across items, and objects are counter-balanced for equal
representation of both features. The task for the test-taker is to view each signed phrase and then
choose a picture, from four choices for each item, that corresponds with the arrangements and
orientations of the objects as indicated by the signer (Figure 1). Generally, the pictures force the test-taker
to consider viewpoints, or visual perspectives, that differ from the default perspective of the
stimulus video. Accuracy and response time are measured for each item.
ASL-PTCT data come from 94 deaf students (mean age 15;8, median age 16;6, range 7;7-20;4,
42 females; 12 native signers) tested at a bilingual (ASL-English) school. Native signers did not
significantly outperform non-native signers. However, a generalized linear mixed model revealed a
main effect for age, with older children outperforming younger children (F(1,1858) = 9.45, p < .01;
Figure 2). This suggests that deaf children’s visual perspective-taking skills are developing throughout
their childhood and adolescent years, which could influence their linguistic comprehension of
topographic space. Reaction time differences across children and trends in male/female performance
will also be discussed. These results have implications for theoretical approaches to the acquisition of
spatial devices in sign.
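The abstract does not specify the model formula; as a rough, hypothetical sketch of a mixed model testing an age effect on trial-level accuracy (a binomial GLMM would match the reported analysis more closely than the linear mixed model shown here, and all file and column names are invented):

```python
# Hypothetical sketch only: age effect on item-level accuracy with
# random intercepts per subject.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("asl_ptct_trials.csv")  # subject, item, age, correct (0/1)

model = smf.mixedlm("correct ~ age", data=df, groups=df["subject"])
print(model.fit().summary())  # look for a positive fixed effect of age
```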
Wednesday 10th July, Poster Session 1
TISLR 11, July 2013
Gemma Barbera
Poster 1.14
Universitat Pompeu Fabra, Spain
(In)definite determiners and referential anchoring in sign language
MOTIVATION. Apart from the small number of works describing definiteness marking on index signs in sign
language (Bahan et al. 1995, Tang & Sze 2002), the encoding of signed indefiniteness is still an understudied area.
Traditionally, it has been considered that noun phrases (NPs) are divided into definites and indefinites, and that the
latter are further categorised into specifics and non-specifics, as schematised in (1). Definiteness marks knowledge of the
entity the discourse is about by both the sender and the addressee. As for indefiniteness, while specificity exhibits a
sender-addressee asymmetry, since only the sender knows the entity, non-specificity is symmetric, since it marks that
neither the sender nor the addressee knows the entity. After a detailed and systematic analysis based on data from a
small-scale Catalan Sign Language (LSC) corpus, I propose that the traditional tripartite structure of noun phrases
must be reconsidered as a dual one, since definites and specific indefinites have the same semantics once
referential anchoring (von Heusinger 2002, Onea & Geist 2011) is considered. Interestingly, referential anchoring is
overtly expressed in the visual-gestural modality by locus establishment. The two-fold distinction in the
referentiality of NPs presented here is schematised in (2).
PROPOSAL. Both definite NPs and specific indefinites denote an entity which belongs to the common ground. That is,
they are part of the shared contextual information. As for their semantics, they are represented with a variable that
has wide scope over the other possible embedded variables and is interpreted outside the operator. The only
difference between the two types of NPs concerns the different conditions they are attached to: while definite NPs
denote an entity that is presupposed to exist in the model, indefinite NPs assert its existence. The evidence for this
claim is shown in the encoding of the LSC data. For both definite and specific indefinite NPs, the entity is established with
a locus on the lower part of the frontal plane, which extends parallel to the body of the signer. As indicated
in the subscripts of the glosses, the definite NP in (3) and the specific indefinite in (5) are directed to a lower spatial
direction (Fig. 1 and Fig. 2, respectively). In contrast, non-specific NPs denote an entity which does not belong to the
common ground. It is asserted into the discourse but it is not referentially anchored. The articulation for the
establishment of a non-specific NP (7) is directed to the upper frontal plane with a lax movement (Fig. 3).
From a semantic perspective, the main distinction is that definite and specific NPs are referentially anchored at the level of
discourse representation. They are anchored to another discourse entity, and their interpretation depends on a
previously established entity, which is overtly or covertly expressed. This dependency involves a pragmatic
enrichment, which introduces a functional dependency into the restrictor of the existential quantifier and thereby
allows for wide scope readings. In the semantic representation of the definite NP in (3), shown in (4), and the
representation of the NP interpreted as a specific indefinite (5), implemented in (6), the Ident condition anchors the
variable to a previous entity. This condition is not present in an intensional context (8). Moreover, the visual-gestural
modality allows overt marking of this anchoring through the establishment of the locus, which remains covert
in the spoken languages studied to date.
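As a schematic illustration in generic notation (a reconstruction, not the paper's own representations (4) and (6), which are not reproduced here): a specific indefinite NP would receive something like $\exists x\,[\mathrm{N}(x) \wedge \mathrm{Ident}(x,y)]$, where $y$ is a previously established discourse referent and the Ident condition supplies the referential anchor licensing the wide-scope reading; the non-specific NP in (7)-(8) would lack the Ident condition, so the bare $\exists x\,[\mathrm{N}(x)]$ remains inside the scope of the relevant operator.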
CONCLUSIONS. This paper presents a new perspective on the localisation of index signs in signing space by
introducing the concept of referential anchoring and by incorporating into the analysis the encoding of definiteness
and specificity marking in an understudied language, LSC. The modulations towards signing space are
analysed considering the epistemic status of the entity and its incorporation into the common ground.
Wednesday 10th July, Poster Session 1
TISLR 11, July 2013
Hope Morgan
Poster 1.15
University of California San Diego, USA
‘FIFTH’ but not ‘FIVE-DAYS-AGO’: Numeral Incorporation in Kenyan Sign Language
In the emerging field of sign language typology, a central issue is discovering how unrelated sign languages are similar
in structure, how they differ, and what might account for these differences. Modality-specific constraints on word length
appear to account for the prevalence of simultaneous rather than sequential morphology,1 though this remains to be
investigated in many sign languages. Numeral incorporation is one type of simultaneous morphology found in several
unrelated languages. It involves the fusion of two morphemes: a handshape quantifier plus a base lexical sign representing
the category to be quantified; e.g., THREE-WEEKS or FOUR-YEARS-OLD in ASL.2
Cross-linguistically, these constructions occur in the same types of lexemes (e.g., currency, time, etc.) and are
phonologically restricted in similar ways.3,4 However, numeral incorporation is not utilized equally in all sign languages. It
is very rare in Adamorobe Sign Language5, while ASL and languages in the Japanese Sign Language family have expansive
repertoires.3,4
The current study asks what form this strategy takes in a relatively young national sign language. Kenyan Sign Language
(KSL) emerged in schools for the deaf around 50 years ago and is used by a large, widely dispersed population. KSL has
indigenous origins,7 but also contains lexical borrowings from ASL, presumably via educational materials introduced in the
1990s.8
This study poses two questions. First, which lexical categories in KSL use numeral incorporation, and how do these
compare to other sign languages? Second, what are the phonological and morphological constraints on numeral
incorporation? Data is based on videotaped elicitation with five fluent adult deaf signers collected in western Kenya in
2012.
Findings show that KSL uses numeral incorporation productively in multiple quantification categories: ordinals, hours,
days (present, past, future), weeks, years, cents, shillings, tens, hundreds, thousands, and school classes. This is similar to
the number of categories reported in Mathur & Rathmann3 for DGS and Nihon Shuwa (JSL), but fewer than in ASL. However,
numeral incorporation is relatively limited in KSL within these categories. First, it is phonologically restricted (but not always
blocked) by two-handed numbers (those over 6). Second, there are usage-based constraints in some paradigms; i.e., while
THREE-DAYS-AGO is licit in all respects, FOUR-DAYS-AGO is phonologically permissible, but rare; and FIVE-DAYS-AGO is both
phonologically illicit and rare.
A comparison between KSL and ASL shows much less incorporation overall in KSL, but also little lexical relatedness due
to phonological differences in both the numeric handshapes and most base lexical signs (year, month, hour, etc.). Evidence
also suggests that numeral incorporation probably emerged prior to contact with ASL in the 1960s, via paradigms related to
currency. Thus, KSL numeral incorporation is unlikely to be a borrowed morphological process.
This study presents new evidence from a young understudied language that sign language lexicons can readily develop
simultaneous morphology, but that these processes remain constrained by phonological well-formedness.
Wednesday 10th July, Poster Session 1
TISLR 11, July 2013
Jessica Contreras, Erica Israel & Peter Hauser
Poster 1.16
Rochester Institute of Technology, USA
Deaf students’ performance on the Children Color Trails Test:
Effect of early sign language acquisition
LANGUAGE: American Sign Language
This project explored the relationship between age of language acquisition and
the development of executive function (EF). Past research has suggested that hearing children with Specific
Language Impairment (Marton, 2008; Windsor & Hwang, 1999) and individuals with aphasia (Fridriksson, Nettles,
& Davis, 2006) exhibit EF difficulties. However, these results are based on correlational studies. To determine
whether delayed language development has an impact on executive function development, we focused on deaf
children from hearing families who were not exposed to their first visual language until well after birth. We
hypothesized that deaf children with delayed sign language acquisition, compared to deaf native signers, would
perform worse on a specific test of executive functions, the Color Trails Test.
Both the Children's Color Trails Test (CCTT) and the Color Trails Test (CTT) were administered to deaf children and
adults. The test has two tasks: the first trail-making task requires the subject to navigate the circles in
sequential number order, whereas the second trail-making task requires the subject to adhere to the sequential
number order while alternating colors. The tasks thus tap shifting and inhibition: how accurately and how
quickly subjects can perform with minimal errors and prompts. Those with weak EF are known to
make more mistakes (in number sequencing and in color alternation), require more time to complete the task
(in seconds), and need more prompts (cues that indicate the next answer or flag mistakes) in
comparison with those who have well-developed EF.
Data collection is ongoing. Our preliminary analysis shows that children who acquired sign language earlier
made significantly fewer errors, required fewer prompts, and completed the task faster. We anticipate
completing our data collection in the spring of 2013 and will share the results at TISLR. These results are
important because they will help us better understand the effects of language acquisition on the development
of EF skills, and they will have a translational impact on early intervention programs for the deaf.
Wednesday 10th July, Poster Session 1
TISLR 11, July 2013
Joanna Atkinson & Tanya Denmark
Poster 1.17
UCL, UK
Challenges to early detection of cognitive decline in deaf signers: A new British Sign Language
Cognitive Screening Test
Background
Tests used to identify cognitive disorders in users of spoken languages are unsuitable for Deaf people who use signed
languages. Establishing normal healthy ageing in the Deaf Community, including cognitive and linguistic functioning, is a
necessary precursor to the development of assessment tools that can be used to detect unusual changes associated with
dementia and other neurodegenerative conditions.
Method
We established the parameters of normative cognitive ageing in Deaf people, using a newly devised test, the British Sign
Language Cognitive Screening Test. This is the first screening test specifically developed in a signed language, rather than
relying on translation of spoken language tests. This test is now being used to identify cognitive disorders in deaf patients at
the National Hospital for Neurology and Neurosurgery in London.
The tests, with instructions entirely in BSL in a standardized video format, were developed and piloted using a similar
format to the Addenbrooke's Cognitive Examination Revised (ACE-R) (Mioshi et al., 2005) and the Mini Mental State Examination
(Folstein et al., 1975), with test domains sampling memory, visuospatial, language and executive function abilities, as well
as orientation to time and space. Items were carefully selected to ensure linguistic and cultural suitability for deaf signers.
Naming items were developed using solely low-iconic targets and there is no English language requirement. Normative data
was collected from 226 participants aged 50-89 years during an annual holiday for older Deaf people.
Results
Details about test development will be presented with results showing changes in test performance across age cohorts and
correlation with non-verbal intellectual ability in healthy signers; and early indicators of test sensitivity and specificity in
identifying signers with a diagnosis of dementia.
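For readers unfamiliar with the screening metrics, sensitivity and specificity are computed as below; the counts are invented for illustration, not figures from this study:

```python
# Invented counts for illustration; not data from this study.
def sensitivity(tp: int, fn: int) -> float:
    # proportion of signers with dementia whom the screen correctly flags
    return tp / (tp + fn)

def specificity(tn: int, fp: int) -> float:
    # proportion of healthy signers whom the screen correctly clears
    return tn / (tn + fp)

print(f"sensitivity = {sensitivity(tp=18, fn=2):.2f}")    # 0.90
print(f"specificity = {specificity(tn=190, fp=16):.2f}")  # 0.92
```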
Conclusions
We conclude that for ethnologically valid assessment of cognitive disorder in deaf signers, it is vital to test function in deaf
signers using tests and norms specifically devised for signed rather than spoken languages.
Wednesday 10th July, Poster Session 1
TISLR 11, July 2013
*Joni Oyserman & ^Mathilde de Geus
Poster 1.18
*SignHands; ^De Geus Advies, Netherlands
Hearing L2 parents and the fluency in sign language communication (language presentation:
International Sign)
This study investigates the metalinguistic awareness and sign language skills of hearing parents of Deaf children
in communication and education at home. Knoors and Marschark (2012) stated that reaching 'fluency' in sign language
communication is very difficult for parents of Deaf children as L2 learners. This statement is very interesting and has
fed some discussion about language planning in Deaf education in the Netherlands. In this study, we focus on this
'fluency'. For parents it is essential to develop skills in sign language that will enable them to interact with their Deaf child.
They need to know how to adapt their signing to a 'children's register' and to use pedagogical strategies for Deaf children.
Sign language in parent-child communication is not only language itself but also serves as a pedagogical tool. In
order to obtain relevant information, in-depth sign language assessments, semi-structured interviews and empirically
based surveys were conducted with ten Dutch parents. These parents all have children with cochlear implants between
7 and 13 years old and use sign language in daily communication. They had finished the same regular sign language
courses and had the same level of communication. In assessments before starting the course, they showed a
communication level that was not adequate for age-appropriate communication with their children. Remarkable findings
were that they had a very small lexicon, did not know how to use this lexicon in pragmatic settings, did not know how to
produce sentences in inventive ways, and underestimated their language skills. They also reported continuous frustration
in daily communication with their Deaf children. During the examination period they attended new sign language courses
following a new curriculum, which has a strong focus on communicative language strategies and metalinguistic
acquisition. In these new courses the CEFRL (Common European Framework of Reference for Languages) is embedded.
The CEFRL regards language users as social agents who develop general and particular communicative competences
while trying to achieve their everyday goals. For parents it is essential to develop skills in sign language that will enable
them to interact with their child. During and after the new courses we measured remarkable development of
communication and interaction skills in these parents. Their inventiveness in producing signs and sentences also
increased. Sentence length grew from three-sign sentences to sentences with more signs, and variation in lexicon usage
expanded. Parents themselves reported that their frustration declined and that their knowledge of linguistic tools at home
expanded. They could deploy linguistic tools faster than before to produce signs and sentences, and they could anticipate
more and react faster in daily interaction with their child. As L2 learners they reported more self-esteem and experienced
more pleasant family interchanges. Our study has shown that it is important to raise sign language courses and curricula
for parents to a higher level so that the needed 'fluency' can be reached in a relatively easy way by L2 parents.
Wednesday 10th July, Poster Session 1
TISLR 11, July 2013
*Karen Emmorey, *Jennifer Petrich, *Lucinda O'Grady, *Allison Bassett
& ^Erin Spurgeon
Poster 1.19
*San Diego State University; ^Gallaudet University, USA
The relation between linguistic and spatial working memory
Language comprehension involves actively maintaining, processing and accessing information. The
predictive value of linguistic working memory capacity for spoken language processing has been well
documented. Previous research suggests that working memory (WM) systems involved in spoken language
comprehension are restricted to the verbal domain and do not operate on visuospatial representations. However,
sign languages exhibit visual and spatial contrasts at all linguistic levels, and thus visuospatial WM capacity may
be relevant to sign language comprehension. The present study investigated the relationship between linguistic
and spatial WM capacity and sign language processing. Linguistic WM was assessed by a signed adaptation of
Daneman and Carpenter’s (1980) classic listening span task (i.e., signers make plausibility judgments on signed
sentences while remembering the last sign of each sentence) and a letter span task (parallel to the digit span
task); spatial WM was assessed by a spatial span task based on Shah and Miyake (1996) and by the Corsi block
task. American Sign Language (ASL) comprehension skill was assessed by a narrative comprehension task
(participants answer questions after viewing ASL topographic narratives) and the ASL-Sentence Repetition Task
(ASL-SRT). As a comparison, a group of hearing English speakers (N = 27) performed the same WM tasks and a
parallel English spatial narrative comprehension task.
Results from 28-34 deaf signers (not all participants completed all tasks) revealed that performance on
the sign language WM span task correlated only weakly with ASL-SRT (r = .383, p = .06) and narrative
comprehension for the signers (r = .332, p = .08), whereas listening span correlated strongly with narrative
comprehension for the English speakers (r = .592, p = .001). Because the linguistic WM span task has a strong
serial component (evidenced by a strong correlation with letter span; r = .517, p = .002), we suggest that deaf
signers rely less on serial order information during language processing than English speakers. Performance on
the spatial span tasks did NOT correlate with either language task, providing support for a domain-specific view of
linguistic WM (e.g., Caplan & Waters, 1999). However, sign language span correlated with both spatial WM span
(r = .547, p = .003) and the Corsi block span (forward: r = .436, p = .020; backward: r = .560, p = .002). In
contrast, for English speakers, listening span did not correlate with either spatial span or Corsi span. These
results suggest that linguistic and spatial working memory systems involve shared resources for signers, but not
for speakers.
In sum, the results of this study indicate a) sign language comprehension draws only weakly on linguistic
WM capacity and b) signers, but not speakers, draw on linguistic WM resources to perform non-linguistic spatial
WM tasks.
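The correlation analyses reported above could be reproduced on a participant-level table with a sketch like this; the file and column names are hypothetical, not the authors' materials:

```python
# Hypothetical sketch: Pearson correlations between span and
# comprehension measures; missing pairs are dropped because not all
# participants completed all tasks.
import pandas as pd
from scipy.stats import pearsonr

df = pd.read_csv("wm_scores.csv")  # invented columns below
pairs = [("sign_span", "asl_srt"), ("sign_span", "narrative"),
         ("sign_span", "spatial_span"), ("sign_span", "corsi_forward")]
for x, y in pairs:
    sub = df[[x, y]].dropna()
    r, p = pearsonr(sub[x], sub[y])
    print(f"{x} ~ {y}: r = {r:.3f}, p = {p:.3f}")
```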
Wednesday 10th July, Poster Session 1
TISLR 11, July 2013
*Kazumi Matsuoka & ^Naotake Tsukidate
Poster 1.20
*Keio University, Japan; ^University of Connecticut, USA
Wednesday 10th July, Poster Session 1
TISLR 11, July 2013
*Kimberley Mouvet, *Miriam Taverniers, ^Liesbeth Matthijs, ^Gerrit Loots & *Mieke Van Herreweghe
Poster 1.21
*Ghent University; ^Vrije Universiteit Brussel, Belgium
The exchange of meaning: A systemic functional analysis of young deaf children’s language
development
Aim: Interaction, for a young child, is given form by language (Halliday 2003). It is in these early interactions that parents
socialize their children into the rules of language and of language use. Previous research of interaction between hearing
mothers and deaf children has shown that these dyads develop a controlling interactional dynamic (Loots et al. 2005).
Following Bernstein, these children are exposed to a “restricted code” and therefore have difficulties developing all
functions of language. This paper empirically assesses the language development of young deaf children today in an
interactional context in terms of a systemic functional model (Halliday 2003). This model offers the opportunity to chart
the development of communicative functions and the gradually widening meaning potential.
Method: Four severely deaf children, aided by traditional hearing aids, and four profoundly deaf children, aided by cochlear
implants, participated in the study. They were born between 2009 and 2010 into hearing families who employed different
communication strategies – ranging from bilingual Flemish Sign Language/Dutch, through Dutch supported by single signs, to
monolingual Dutch. At child ages 12, 18 and 24 months a video was made of mother and child engaging in free play
interaction. These videos were analyzed in ELAN. In this way, functional development was traced throughout the second year
of life. Moreover, the effect of the communication mode used by the mother could be measured. This made it possible to evaluate
whether the use of the visual-gestural modality was beneficial for the child’s language development.
Results: Despite the different communication strategies used by the mothers and despite differing hearing status between
the children, all children were delayed with respect to functional language development. Whereas we would expect them
to be in the final transitional phase towards adult language, we find that they are only starting to make that transition.
However, we found that dialogues combining the visual-gestural and auditory-vocal modality tend to have more
conversational turns than those employing the auditory-vocal modality alone. Implications will be discussed.
Wednesday 10th July, Poster Session 1
TISLR 11, July 2013
Kristen Secora
Poster 1.22
Laboratory for Language and Cognitive Neuroscience, USA
The Action-Sentence Compatibility Effect in ASL: The role of semantics vs. perception
(English)
Evidence from the embodied cognition literature suggests that humans use the sensorimotor system in processing
language and that one mechanism by which this occurs is mental simulation (Barsalou, 2008; Gallese & Lakoff, 2005).
Effects of this simulation on motor execution have been demonstrated in stimulus-response compatibility effects such as
the Action-sentence Compatibility Effect (ACE) (Glenberg & Kaschak, 2002). Response times (e.g., for sentence
plausibility judgments) are facilitated when the motor response (e.g., pressing a ‘yes’ button that requires a movement
toward the body) is congruent with the movement direction implied by the semantics of a written or spoken sentence
(e.g., “You open the drawer”). Such facilitation provides evidence for the involvement of sensorimotor systems in
language comprehension. In sign languages, however, there is a potential conflict between sensorimotor systems and
linguistic semantics because movement away from the signer’s body is perceived as motion toward the addressee’s body
(who is facing the signer). For example, the semantics of the verb PUSH involve movement away from the body, but the
addressee perceives the movement of the verb PUSH as toward, rather than away from, their own body. Kaschak (2005)
found that perceiving nonlinguistic visual motion toward or away from the perceiver impacts the comprehension of
motion sentences expressing motion toward or away from the body. We examined whether perceptual processing of
linguistic sign movement modulates the ACE or whether the ACE is driven purely by the semantics of the verb. If the
latter, then the direction of visual movement seen in the sign should have little effect on the motor response. If the
former, then conflicting perceptual and semantic motion should affect the ACE. Deaf ASL signers performed a semantic
judgment task while viewing signed sentences that expressed motion away (e.g., “you threw a ball”) or toward (e.g.,
“you grabbed a cup”), responding with button presses requiring movement away from or toward the body. We found
that there was a significant congruency effect only when responses were categorized in relation to the semantic motion
rather than the perceptual motion of the sentence. This result indicates that a) the motor system is involved in the
comprehension of a visual-manual language and b) motor simulations for sign language are modulated by verb
semantics rather than by the perceived visual motion of the hands.
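To make the two coding schemes concrete, here is a hypothetical sketch of the congruency computation; the abstract does not provide the authors' code, and all names are invented:

```python
# Hypothetical sketch: the ACE as mean RT difference (incongruent minus
# congruent), computed under two codings of 'congruent': by the verb's
# semantic motion and by the perceived visual motion of the hands.
import pandas as pd

trials = pd.read_csv("ace_trials.csv")  # invented columns:
# rt_ms, response_dir, semantic_dir, perceptual_dir ('toward'/'away')

for coding in ("semantic_dir", "perceptual_dir"):
    congruent = trials.loc[trials["response_dir"] == trials[coding], "rt_ms"]
    incongruent = trials.loc[trials["response_dir"] != trials[coding], "rt_ms"]
    print(f"{coding}: ACE = {incongruent.mean() - congruent.mean():.0f} ms")
```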
Wednesday 10th July, Poster Session 1
TISLR 11, July 2013
*Kurumi Saito & ^Naotake Tsukidate
Poster 1.23
*Japan College of Social Work; ^International Christian University, Japan
Eye Gaze and Eye Movement in Japanese Sign Language
(English)
Eye gaze and eye movement in Japanese Sign Language are linguistic elements, as
they are in American Sign Language (Thompson 2006) and German Sign Language
(Hosemann 2009). Ichida mentioned that eye gaze and eye movement in Japanese
Sign Language were semantically related to human action of “looking” (Ichida 1997).
We conducted eye tracking experiments to examine how 10 native signers and 10
non-native signers (5 deaf and 5 hearing) produce eye gaze and eye movement while
they are signing the same sentences. We tracked their eyes with head-mounted
eye-tracking technology (an eye-mark recorder), and we also developed a way to record
the positions and movements of the pupils in the image on the TV monitors (with the
auto-tracing device of PV Studio).
First, we chose the video of a native signer’s signing as the model (a highly
regarded, commercially available Japanese Sign Language lesson DVD). We asked
examinees who were native signers to look at the TV monitor, and then asked them to
express the same sentences (the same sentence structures with the same lexical items),
which we video-recorded and measured with the eye tracker. Second, non-native signers
signed the same sentences but made errors in eye gaze and eye movement, which we
measured. Third, the native examinees took on the role of teachers for the non-native
signers. The students (non-native signers) imitated the teachers repeatedly, measured
each time by the eye tracker, until the teacher accepted their eye gaze and eye
movement as correct. We asked the native signers to correct the eye gaze and eye
movement of the non-native examinees.
We identified the eye gazes and eye movements that were shared among the
native signers, and we examined which eye gazes and eye movements the teachers
considered errors. We determined the positions and movements of the students’
pupils that made the teachers (native signers) stop correcting. The teachers also
explained what kind of difference an error in eye gaze or eye movement caused. Thus
we determined the correct eye gazes and eye movements in Japanese Sign Language.
We found that one characteristic type of eye gaze in Japanese Sign Language
functions as, or co-occurs with, existential verbs. This kind of eye
gaze is related to the action of “looking”. Other findings are as follows:
1. eye gaze functioning as pronouns and adverbs on its own
2. eye gaze functioning as pronoun with pointing
3. eye gaze functioning as adverb (location/time) with pointing or other manual
sign
4. eye movement showing movement of human beings and things
5. eye movement, with classifiers, showing movement or shape
6. eye gaze functioning as case marking with predicate verbs (agreement)
7. eye movement which shows human action/movement of looking or gazing
Among the above, 1 is a free morpheme. There are also prosodic morphemes,
which means the sentence is unnatural without them, and there are bound morphemes,
which means the sentence is ungrammatical without them.
Wednesday 10th July, Poster Session 1
TISLR 11, July 2013
*Lara Mantovan & ^Carlo Geraci
Poster 1.24
*Ca' Foscari University of Venice, Italy; ^Institut Jean-Nicod, Paris, France
A ROUND TRIP FROM THEORY TO CORPUS: THE CASE OF UNIVERSAL 20 IN LIS
Language: English
Word order has been under the microscope of formal and typological linguists for many years ([1], [2], [3]).
Several generalizations have been proposed to account for order variation; some of them are considered
universals. However, the empirical ground of these generalizations is biased by the fact that almost all
languages under study are spoken languages (but see [4] for an exception). To what extent typological
universals are also valid for SLs is still an open issue.
We use corpus evidence to show that LIS conforms to Greenberg’s Universal 20 (U20), as reframed in
[3] (see [5] for a study on U20 in TSL and [8] for a qualitative study of the DP in LIS). We claim that
LIS conforms to the fine-grained structure in (1), adapted from [3].
Data come from the LIS Corpus [6]. We analyzed narrations of 162 signers. For each, the first 15 DPs
containing at least a noun and a modifier are annotated. The coding scheme is in (2-3). Sociolinguistic
predictors are in (4). The cleaned dataset of 1908 tokens is analyzed with a mixed-effects model [7].
The distribution of modifiers with respect to head-nouns indicates a preference for postnominal modifiers (63.7%).
The position of nouns with respect to the hierarchy in (1), shown in Table 1, suggests a partition of the DP into three
areas: lower DP (almost categorical Noun>modifier), middle DP (preference for modifier>Noun) and
higher DP (preference for Noun>modifier). Table 2 summarizes the fixed effects. Three
predictors are significant: Duration, Adjacency and Modifier types.
Duration: the shorter the modifier, the more likely it is to precede the noun. This reflects a lengthening of
postnominal modifiers due to intermediate phrasal boundaries.
Adjacency: nouns tend to appear before the modifiers when more than one modifier is present in the
DP. A movement constraint forces the noun to cross its modifier(s), and this is more likely to occur
with multiple modifiers because it makes the DP layers visible.
Modifier types: The fine-grained structure is collapsed into three macro areas each characterized by
specific syntactic properties: noun movement in the lower part, no movement in the middle and roll-up
movement in the higher part. We claim that movement is partially constrained by processing factors
[2]: the higher the noun climbs in the structure, the higher the processing cost (roll-up movement being
the most costly option).
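As a rough sketch of the kind of model described (the actual analysis used mixed-effects models [7]; the simplified version below omits the random effects per signer, and all file and column names are invented):

```python
# Hypothetical sketch: modifier position (0 = prenominal, 1 = postnominal)
# predicted from modifier duration, adjacency (number of co-occurring
# modifiers), and modifier type. Random intercepts per signer, present in
# the authors' model, are omitted here for simplicity.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("lis_dp_tokens.csv")  # invented token-level annotations
model = smf.logit("postnominal ~ duration + adjacency + C(modifier_type)",
                  data=df)
print(model.fit().summary())
```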
Corpus evidence shows that LIS conforms to the refined version of U20 in (1). Although rare, the
pattern of LIS is attested in spoken languages including a number of creoles. Interesting speculations
may follow, capitalizing on the fact that even when considering fine-grained structure it is possible to
find patterns of potential re-creolization.
The challenge posed by U20 has opened a vivid debate in theoretical linguistics; we have addressed this issue in
a new empirical domain (LIS) and from an unusual perspective (quantitative linguistics). Even in
languages with flexible order, typological universals are respected both at the macroscopic and
microscopic level.
Wednesday 10th July, Poster Session 1
TISLR 11, July 2013
Lynn Hou & Kate Mesh
Poster 1.25
The University of Texas at Austin, USA
Wednesday 10th July, Poster Session 1
TISLR 11, July 2013
*Lynn McQuarrie, *Marilyn Abbott, ^Charlotte Enns
& *Brent Novodvorski
*University of Alberta; ^University of Manitoba, Canada
Poster 1.26
Wednesday 10th July, Poster Session 1
TISLR 11, July 2013
*Mara Moita, *Maria Vania Silva Nunes & ^Ana Mineiro
Poster 1.27
*Instituto de Ciências da Saúde - Universidade Católica Portuguesa; ^Catholic University of Portugal
Phonological segments, syllables and lexical components: a parallel linguistic modality analysis in
a cognitive study.
Since the greatest difference between sign and spoken languages is their sensory modality, which results in distinct
phonological and lexical structures, it is fundamental to provide a parallel linguistic analysis when studying cognitive
performance in deaf and hearing individuals. Even though a great part of the research on semantic
processing in deaf and hearing individuals does not consider the influence of language on performance,
linguistic modality seems to affect semantic organization (Moita, 2012).
Focusing on the possible influence of linguistic modality on semantic organization in early deaf signers, late deaf
signers and hearing individuals, a category fluency task in three linguistic modalities was developed. It was therefore
essential to identify the phonological and lexical properties of the working languages and to create comparative variables to
classify the linguistic clusters produced by deaf and hearing individuals (Moita, 2012). This study compares the semantic
performance of three groups of subjects with different language acquisition backgrounds and distinct language
modalities, resolving those dissimilarities by adapting the task to each modality and comparing performance equally
across groups.
Phonological analysis was based on phonological elements and prosodic units within the signs or words, which can be
labelled as syllables, and lexical analysis was based on the initial and final lexical items in compound words.
Results demonstrate that linguistic modality affects the semantic performance of early deaf signers, late signers and hearing
individuals. In general, deaf signers demonstrated results comparable to those of hearing individuals when the task was performed
in their natural linguistic modality.
This type of analysis provides a more direct and feasible comparative evaluation when the sample comprises
signers/speakers of languages in different modalities.
Wednesday 10th July, Poster Session 1
TISLR 11, July 2013
*Maria Josep Jarque, ^Sara Palmer & +Esther Pascual
Poster 1.28
*University of Barcelona, Spain; ^CIFP Son Llebre, Spain; +University of Groningen, Netherlands
Gestures with modal functions in Catalan Sign Language (poster, English)
In this paper we present the results of two studies on the expression of modality in Catalan Sign Language (LSC). We focus
on the gestures that accomplish modal functions, that is, gestures that convey the signer's perspective on the certainty,
possibility, and truth of information in the discourse, as well as non-epistemic meanings. Previous studies on LSC have
identified modality grams and their gestural source, but none of them has considered the contribution of gestures expressing
modal stance.
This paper draws on data collected from two studies on modality in LSC, comprising five semi-structured interviews. In the
first study, a deaf signer individually interviews three adults. The second study is based on two interviews with teenagers
(two girls and two boys).
We describe the form, use and distribution of four 'palm-up gestures' that accomplish four functions: permission, uncertainty,
possibility and certainty. We compare our results with previous studies focusing on spoken languages (Kendon, 2004) as well
as signed languages (Conlin et al., 2003; Engberg-Pedersen, 2002; McKee & Wallingford, 2011).
Furthermore, the investigation leads us to consider three theoretical issues. Firstly, we define criteria for categorizing the
palm-up units as gestures versus discourse markers or modality grams in the grammaticalization process (Amunsden &
Harvolser, 2011; Kendon, 2004; McNeill, 2005). Secondly, treating them as gestures with modal functions challenges the
grammaticalization theory which claims that manual gestures enter the sign language as lexical morphemes and only later
develop a grammatical meaning (Wilcox et al. 2010). Finally, we characterize some uses of these gestures that are neither
performative nor descriptive (Nuyts, 2001), but show the signer's attitude in a fictive interaction (Pascual, 2006) or virtual
speech act (Langacker, 1999).
Wednesday 10th July, Poster Session 1
TISLR 11, July 2013
Marie Coppola & Deanna Gagne
Poster 1.29
University of Connecticut, USA
Wednesday 10th July, Poster Session 1
TISLR 11, July 2013
Marie Nadolske & Christine Chynoweth
Poster 1.30
Montclair State University, USA
Semantic “Classifier” Handshape Variation: VEHICLE Descriptions by Different Populations of
ASL Signers
[ASL]
This study investigates handshape selection in the American Sign Language (ASL) Semantic “Classifier”
(CL) sign VEHICLE. Previous treatments of this sign have emphasized the necessity for the selection of
the 3-handshape, which has been said to represent a linguistic class of ‘vehicles.’ Aronoff, Meir, Padden,
and Sandler (2003) describe this property of the sign and its applicability to an abstract category: “The ASL
VEHICLE classifier can be used for cars, trucks, bicycles, trains, boats, submarines, and sometimes
motorized wheelchairs” (p. 67). Possible alternative handshape selections within this sign have either gone
unacknowledged or been relegated to essentially footnote status (see Supalla (1990) for a brief discussion
of B-handshape use).
Elicited narratives were collected and analyzed from four populations of adult ASL signers (i.e. Deaf
native, Deaf early learners, hearing native, and hearing nonnative signers). Patterns of handshape
selection within each group, and between groups were assessed to identify relevant factors that could
influence any possible handshape variation in this Semantic CL sign.
Confirmation of the use of the 3-handshape to represent a variety of vehicle types was found; however, the
B-handshape was also used by all groups of signers. Contrary to expectations, the findings from the Deaf
native signers indicate that there is more allowable variation in handshape selection than expected, that the
level of variation differs based on the type of vehicle, and that physical properties of the referent can be
represented and manipulated within this type of sign. Additionally, between-group comparisons have
identified differences in use by native populations of signers (i.e. between Deaf native and hearing native
signers), and differences based on linguistic status (e.g. first language, early learners, and second
language).
Results show that handshape selection in ASL Semantic CLs is more complex than previous research
would indicate. Most previous discussions of CLs have assumed that the 3-handshape is nearly
synonymous with all members of the category ‘vehicle.’ These data from native Deaf signers indicate
that the patterns of use and the capacity for variation reflect a more complex system of handshape selection
within this type of sign. Data from the other groups of ASL signers allow for a comparison of additional
factors that ultimately shape language use; including, but not limited to, one’s status as a native or
nonnative user of a language. These findings expand our understanding of systematic variation in the
ASL lexicon that was previously unexplored, or even thought to be impossible.
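For illustration, the between-group comparison of handshape choices could be tabulated along these lines; the file and column names are hypothetical, and the abstract does not describe the authors' tooling:

```python
# Hypothetical sketch: proportion of 3- vs. B-handshape (and any others)
# per signer group, normalized within each group.
import pandas as pd

df = pd.read_csv("vehicle_tokens.csv")  # invented columns: group, handshape
print(pd.crosstab(df["group"], df["handshape"], normalize="index").round(2))
```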
Wednesday 10th July, Poster Session 1
TISLR 11, July 2013
Matthew Dye
Poster 1.31
University of Illinois at Urbana-Champaign, USA
Early Access to Sign Language Inoculates Deaf Children Against Visual Attention
Deficits
Presentation language: American Sign Language
Early profound deafness brings about a cortical reorganization that enhances specific aspects of
visual processing. Studies of adult Deaf native signers have demonstrated that they have
enhanced attention to visual motion across the visual field (Bosworth and Dobkins, 2002), and
enhanced selective attention to the visual periphery (Bavelier et al., 2000; Bottari et al., 2008).
These enhancements appear to be driven by uni-modal and cross-modal changes in cortical
function, with enhanced processing in posterior parietal cortex (Bavelier et al., 2000), the
posterior superior temporal sulcus (Bavelier et al., 2000), and primary (Karns et al., 2012) and
secondary (Finney et al., 2001) auditory areas along the superior temporal gyrus. Behavioral
studies (Dye et al., 2009; Codina et al., 2011) suggest that these enhancements emerge during
early adolescence, although pediatric imaging work has yet to be undertaken. However, these
findings of enhanced visual attention are at odds with the developmental literature, where deaf
children are characterized as displaying poor sustained attention, high impulsivity, and high
distractibility (Horn et al., 2005; Quittner et al., 1994; Smith et al., 1998; Yucel and Derim,
2008). These developmental studies have typically recruited deaf children who use cochlear
implants, and who primarily use spoken language or a combination of speech and sign. In the
study reported here, measures of sustained attention, impulsivity, and distractibility were
administered to 60 hearing children and 37 Deaf children who acquired American Sign
Language as an L1 from Deaf parents (aged 6-13 years). Subjects performed a continuous
performance test (Gordon and Mettleman, 1987) that required selective responses to sequences
of digits, either with or without distractors. The results indicated few differences in sustained
attention as a function of deafness (Figure 1). However, younger deaf children (6-8 years)
showed higher levels of impulsivity and distractibility than their hearing peers (Figure 2). The
data are interpreted in terms of a spatial redistribution of visual attention (Dye and Bavelier, 2010)
that requires executive functions to be harnessed in a goal-directed manner. It is argued that a
role for EF explains the interaction between age and deafness, and the "cognitive inoculation"
provided by early access to natural (sign) language in deaf children. Future studies are proposed
to examine the role of executive functions (inhibitory control, task switching, working memory)
as mediators of performance gains resulting from cross-modal plasticity in Deaf children.
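For reference, continuous performance tests are conventionally scored with omission errors (missed targets, indexing lapses of sustained attention) and commission errors (responses to non-targets, indexing impulsivity). A hypothetical sketch of that conventional scoring (not the Gordon and Mettleman scoring procedure, and all names invented):

```python
# Hypothetical sketch of conventional CPT scoring; file and columns invented.
import pandas as pd

trials = pd.read_csv("cpt_trials.csv")  # is_target (bool), responded (0/1)
targets = trials[trials["is_target"]]
nontargets = trials[~trials["is_target"]]

omission_rate = 1 - targets["responded"].mean()   # inattention index
commission_rate = nontargets["responded"].mean()  # impulsivity index
print(f"omissions: {omission_rate:.1%}, commissions: {commission_rate:.1%}")
```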
Wednesday 10th July, Poster Session 1
TISLR 11, July 2013
*Mirko Santoro, ^Carlo Cecchetto & +Alessandra Checchetto
*ENS, France; ^University of Milano-Bicocca, Italy; +Ca' Foscari University, Venice, Italy
Poster 1.32
Wednesday 10th July, Poster Session 1
TISLR 11, July 2013
*Molly Flaherty, *Susan Goldin Meadow, ^Ann Senghas
& +Marie Coppola
Poster 1.33
*The University of Chicago; ^Barnard College of Columbia University; +The University of Connecticut, USA
Growing a Spatial Grammar: The Emergence of Verb Agreement in Nicaraguan Sign
Language
Language of Presentation: English
The young sign language in Nicaragua has provided researchers with a unique
opportunity: the chance to study a language from its birth. With the founding of a new
school for special education in Managua approximately thirty years ago, Deaf
Nicaraguans came together in greater numbers than ever before. Though teaching in this
school was exclusively in written Spanish, students soon began to communicate
manually, giving birth to a new language: Nicaraguan Sign Language (NSL). Each year
children enter the school and learn the language naturally from their older peers,
eventually becoming Deaf adults who use NSL for daily communication. As succeeding
cohorts learn NSL, the language itself grows and changes, yielding the unique
opportunity to view the language’s development by comparing signers of succeeding
cohorts.
One area in Nicaraguan Sign that has shown striking development is the
emergence of verb agreement. Mature sign languages around the world use space to
indicate agreement between verbs and their subjects and objects; however, little work
examines how this type of agreement arises.
We investigate the emergence of spatial verb agreement in Nicaraguan Sign
Language across four groups: adult Nicaraguan homesigners (n=4, isolated individuals
who have not learned NSL but communicate manually though systems of their own
creation) and three cohorts of adult Nicaraguan signers (n=4 per cohort, average ages cohort 1: 39.5 years, cohort 2: 28.5 years, cohort 3: 20.25 years). All individuals viewed a
series of short clips of transitive and intransitive events and then were asked to describe
what they had seen. Responses were coded and analyzed for axis of verb articulation
(sagittal vs. horizontal, after Padden, Meir, Aronoff, and Sandler, 2010) and agreement
with nominals. We find that the emergence of spatial verb agreement is not monolithic
and that its developmental trajectory is gradual and discontinuous.
The seeds of spatial agreement are present even in adult homesigners, but even by
the third cohort of adult signers, a full system of spatial verb agreement is still not in
place. In fact, second cohort signers display higher levels of spatial verb agreement than
do their younger third cohort peers (Figure 1). Although the third cohort signers introduce
sweeping changes into the language’s spatial grammar system, this change seems to
come at a cost to the verb agreement system. The youngest signers are reorienting the
language to function not only on a sagittal axis (straight out from the signer’s body), as is
seen among older signers’ productions, but also on the horizontal axis (Figure 2). The
signers do not, however, always set up their nominals along the same axis as their verbs.
Time and continued study will reveal if younger cohorts of signers produce a language
that spatially resembles the other sign languages of the world and is also rich in spatial
verb agreement. By observing the continued development of verb agreement in
Nicaraguan Sign we can see just how the processes of time and iterated transmission give
rise to natural language structure.
Wednesday 10th July, Poster Session 1
TISLR 11, July 2013
Natasha Abner
Poster 1.34
University of Chicago, USA
Wednesday 10th July, Poster Session 1
TISLR 11, July 2013
*Rachel Sutton-Spence & ^Michiko Kaneko
Poster 1.35
*University of Bristol, UK; ^University of the Witwatersrand, Johannesburg, South Africa
Symmetry and the use of two hands in sign language poetry – some quantitative comparisons
Presentation in BSL
Our presentation uses a quantitative approach to consider use of two hands and symmetry in sign language poetry. We
demonstrate that it varies considerably depending on a poet’s style. Symmetry is a well-recognised feature of the
vocabularies of every studied sign language (eg Battison 1978, Napoli and Wu 2003) and has been recognised as a
prominent element of signed poetry since the 1970s (Klima and Bellugi 1979). It is valued for the sense of balance and
general aesthetic impression it presents and its contrasting use with deliberate asymmetry. Previous symmetry studies
have often used close reading of a few poems in a specific genre (Sutton-Spence and Kaneko 2007, Kaneko 2008) or the
works of a single poet (Russo et al 2001, Crasborn 2006). It is widely understood that symmetric signs occur much more in
poetic than non-poetic utterances (lectures, or dramatic narratives).
Using material from a large poetry anthology, we analysed four poems each by three sign language poets (ranging in total
from five to 12 minutes) and 10 minutes of their signing in a non-poetic register. As we were interested in the overall sense
of balance and aesthetics we categorised signing into four types: where only one hand was linguistically relevant; where
both hands were fully symmetrical; where both hands shared handshapes but differed in location or movement; and where
both hands were linguistically relevant but had different handshapes (including asymmetric signs and buoys). We calculated
the overall length of time these four types were used by all signers in the two registers. We chose length of time rather
than number of signs because we were interested in the impact of the different uses of one or two hands on the overall
visual impression of poetry; it was also sometimes extremely difficult to decide when a poetic sign began and ended. We
will discuss the advantages and disadvantages of this choice.
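As a rough sketch of this duration-based tally, assuming the annotations are available as labelled time intervals (the interval format and category labels below are illustrative, not the coding files used in the study):

    # Minimal sketch: proportion of total signing time per category.
    def time_proportions(annotations):
        totals = {}
        for start, end, category in annotations:  # times in seconds
            totals[category] = totals.get(category, 0.0) + (end - start)
        grand_total = sum(totals.values())
        return {c: d / grand_total for c, d in totals.items()}

    # Toy example using the four categories described above.
    print(time_proportions([(0.0, 10.0, "one hand only"),
                            (10.0, 40.0, "fully symmetrical"),
                            (40.0, 45.0, "same handshape, different location/movement"),
                            (45.0, 50.0, "different handshapes")]))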
The use of the four types was not evenly distributed. One-handed signing accounted for between 8% and 20% of all signing,
showing that using two hands is the norm in both registers. All poets used two hands more in poetry than in non-poetic
registers. All poets also used more asymmetrical signs in non-poetry than in poetry. In non-poetry, proportions for each
category were very similar for all signers. In poetry, proportions of asymmetry were approximately equal, but other
categories varied, demonstrating that they are driven by stylistic choices. Importantly, symmetry proportion varied greatly
among poets. Two poets used considerably more symmetry in poetry than non-poetry, but one used the same proportion
of symmetry in both registers, demonstrating that not all signed poetry privileges symmetry.
We conclude that decision-making for quantitative studies of signed poetry is not straightforward. Poetic language defies
categorisation even for apparently simple divisions like use of one or two hands. However, using larger amounts of signed
poetry and actively comparing across poets reveals new information about the reality of poetic symmetry.
Wednesday 10th July, Poster Session 1
TISLR 11, July 2013
Rezenet Moges
Poster 1.36
SRS Inc., USA
Sign Language Continuum: A Methodology to a Multi-Language Community (ASL)
In general, religion plays a crucial role in the history of Deaf communities across the world, especially in the education and
the diffusion of sign languages. Similarly to Ferguson’s (1982) analysis of the concurrent spread of religion and language,
some factors in the Deaf case in Africa indicate that missionaries are the source of the diffusion of sign languages.
This paper will present a sign-language contact issue of missionization in the Deaf community of Eritrea and propose an
innovative methodological approach on documenting language continuum of a multi-language community. Eritrean Sign
Language is the product of missionary sign languages from Finland and Sweden after the establishment of a Deaf institute in
1955. In the process of developing their first-ever sign language dictionary, a small group of language planners (mostly
Deaf) decided on a linguistic objective which was to excise any foreign element in their sign language. Consequently,
Eritrean Sign Language underwent a language change through “demissionization,” a process of reversing missionization and an attempt at indigenous purism.
The methodological approach is to provide a diagram that illustrates a language continuum incorporating each language used
in a particular community. The diagram developed for the Deaf Eritrean community shows the continuum synthesizing both
language variations and repertoires. The layout is designed to place the continuum between two polar points of the two types of
sign languages, Authenticity (natural sign language) and Artificiality (coded to a spoken language). In addition, in the
middle area of this diagram, it attempts to place each possible language repertoire from different social groups in the Deaf
Eritrean community. At the far right endpoint of the continuum, the language diverges from typical Deaf language use, as it codes exactly to the grammatical structure of the spoken languages Tigrinya (or English). This is Exact-Tigrinya Signs, the most contact-influenced variety on this continuum, as it incorporates spoken languages and borrows heavily from the missionary sign languages. At the opposite endpoint are the indigenous or village signs of Eritrea, considered the least contact-influenced varieties. Being the most free from any other modality or foreign influence, they exhibit authentic Eritrean signs; hence the continuum portrays that point as the “Natural sign language” of Eritrea. That direction represents the ideological variant of the “authentic” language: language planners aimed to demissionize their language by soliciting and reviving the indigenous/village signs, diverging away from the massive contact at the other end of the continuum.
This graph has two functions: paralleling and linking the ranges of influence from two points (foreign sign language and foreign modality, i.e. spoken language), and laying out the continuum from extremely missionized or artificial language to indigenous signs (and home signs), the language planners’ desired “authentic” language. Finally, this diagram offers a useful tool for language contact research, laying out the aims and highly complex language variations of a signing community.
Wednesday 10th July, Poster Session 1
TISLR 11, July 2013
Richard Bank
Poster 1.37
Radboud University Nijmegen, Netherlands
Mouthings in Sign Language of the Netherlands (NGT): Sociolinguistic differences in a
heterogeneous community. (English)
The second half of the previous century saw a gradual change in education of deaf children in the Netherlands. Following the
Milan congress in 1880, signing was forbidden in classrooms; deaf children were expected to learn to speak, and understand
spoken language through lip reading. With the rise of research into sign linguistics came the notion that sign languages are
real, full-fledged languages, and with that notion the educational system changed; at first Sign Supported Dutch entered the
classrooms, and later NGT became an instruction language (together with spoken language).
We wanted to know if and how these changes in the educational system have affected the use of mouthings in signed
languages. In contrast to language inherent mouth gestures, mouthings are derived from the spoken language(s) surrounding
the deaf community (see also Schermer 1990, Boyes Braem & Sutton-Spence 2001, Nadolske & Rosenstock 2007, Sutton-Spence 2007, Crasborn et al. 2008a, Vinson et al. 2010, Mohr 2012). In the light of the changes in the educational system, we
expect that a decrease in spoken language input will be reflected by reduced use of mouthings in the younger generation, or
smaller mouth movements. Indeed, cursory inspection of the videos in the corpus suggests that there are differences between
older and younger signers, an impression supported by deaf native signers.
However, previous research on a small subset of the corpus (12 signers in 16 clips; Van de Sande & Crasborn 2009) did not
reveal significant differences. A subsequent study on amount of mouth opening, using the same sample, did not reveal
significant differences either (Stadthaus 2010).
The present study aims to extend these first two studies by not only studying a larger group of signers, but also by taking into
account a more differentiated set of metadata classifying this heterogeneous community. We compared older and younger
signers from the Corpus NGT (Crasborn et al. 2008b). Currently, there are 219 clips in the corpus that are fully or partly
annotated for mouth actions; we selected 38 signers for whom at least 5 minutes of discussion sessions were annotated at
both the gloss and the mouth levels. We investigated frequency and proportion of different types of mouth actions in relation
to the manual activity, the number of mouth actions without accompanying manual signs, and the proportion of manual signs
that were not accompanied by any mouth action. Results were set against a background of various sociolinguistic variables:
apart from age, gender and region we took a detailed look at the individual educational backgrounds of the signers.
Moreover, we attempted to assess signers’ proficiency by having their signing reviewed by a deaf native signer. Altogether,
this more detailed approach to the impact of personal history on the use of the mouth in Sign Language of the Netherlands
will allow us to see how the complexity of individual circumstances relates to language use.
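To make the interval logic concrete, the proportion of manual signs accompanied by a mouth action might be computed from two annotation tiers as sketched below; the (start, end) tier format is an assumption for illustration, not the Corpus NGT annotation scheme.

    # Minimal sketch: proportion of manual signs overlapping any mouth action.
    def overlaps(a, b):
        return a[0] < b[1] and b[0] < a[1]

    def proportion_with_mouth_action(sign_tier, mouth_tier):
        # Each tier is a list of (start_ms, end_ms) intervals,
        # e.g. exported from an annotation tool such as ELAN.
        accompanied = sum(1 for sign in sign_tier
                          if any(overlaps(sign, mouth) for mouth in mouth_tier))
        return accompanied / len(sign_tier)

    signs = [(0, 400), (450, 900), (950, 1300)]
    mouths = [(100, 430), (960, 1200)]
    print(proportion_with_mouth_action(signs, mouths))  # 2 of 3 signs -> 0.67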
Wednesday 10th July, Poster Session 1
TISLR 11, July 2013
Richard Cokart & Trude Schermer
Poster 1.38
Dutch Sign Centre, Netherlands
What happened to standard signs?
The effects of the standardisation of NGT lexicon on the variation of signs used within deaf families
and their children.
In 2002 all schools for the Deaf and the parent guidance programmes in the Netherlands started to implement standard signs in their teaching and programmes. All NGT courses offered to parents and teachers contained standard signs. The introduction of standard signs has been controversial amongst the deaf community and researchers. In 2012 it was ten years since the introduction of standard signs in the Netherlands, a good moment to investigate the effect of the introduction of standard signs on the signing of deaf families who have been affected by the standardisation process.
This paper will present the results of a study we did amongst six deaf families from different regions in the Netherlands
with at least one deaf child in the age between 8 and 12 who attended a school for the deaf. In total 16 people took part in
this study. Our research question is: what is the effect of the introduction of standard signs on the variation of signs used by members of deaf families with deaf children who have been educated at school by teachers using standard signs?
In the standardisation project, an explicit choice between regional variants was made for only a few hundred of the 5000 standardised signs. The assumption has been that one of the reasons the NGT standard lexicon has been accepted by deaf teachers of NGT is that the actual number of signs affected by the standardisation process is quite low. Moreover, the standardisation process was conducted in such a way that regional variants were incorporated into the standard signs.
The research was carried out in the following manner:
- All parents were interviewed on video by a deaf NGT researcher about their experiences with regional signs versus standard signs.
- 50 signs with a standard variant and at least two regional variants were elicited from the participants.
- The lexicon of NGT has been expanded over the last 7 years with a great number of new signs that do not have a regional variant. Twenty of these new standard signs were elicited.
In the paper we will present the results of our analysis, which is currently being conducted. Our preliminary results indicate that
the majority of the deaf children in our study use more standard signs at home with their family than their deaf parents.
The mothers in our study use more standard signs than the fathers and the hearing children in the family. None of the deaf
families encountered problems with the fact that their children were taught standard signs at school.
In our paper we will discuss the results in relation to language planning policies in general and the way in which
standardisation of the basic lexicon was carried out.
Wednesday 10th July, Poster Session 1
TISLR 11, July 2013
Robert Adam
Poster 1.39
Deafness Cognition and Language Research Centre, University College London, UK
Cognate facilitation and switching costs in unimodal bilingualism: British Sign
Language and Irish Sign Language (BSL)
This presentation reports on a study of sign language unimodal bilingualism. It is not known whether it resembles
unimodal spoken language bilingualism, whether it is more like bimodal (spoken and signed) bilingualism, or
whether it has unique qualities.
Bilinguals are able to separate two languages during language production (Costa and Santesteban, 2004). In a
picture naming study comparing language switching performance in L2 learners and highly proficient bilinguals,
Costa and Santesteban (2004:507) found a switching cost, with switch responses taking longer than same-language responses. This cost was asymmetric for L2 learners: switching to L1 was harder than switching to L2,
and proficient bilinguals were faster at naming pictures in their L2 than in their L1. Similar findings had been
previously reported by Meuter and Allport (1999). In a recent study, Christoffels et al (2007:192) found a cognate
facilitation effect as well, with more rapid naming of pictures where the words in the two languages were related.
Because of iconicity in sign languages, signs may be similar in unrelated sign languages, and thus appear like
cognates (“pseudo-cognates”).
The study on unimodal (sign language) bilingualism reported here used a picture naming task to investigate
switching costs between British Sign Language and Irish Sign Language - historically unrelated and mutually
unintelligible sign languages. The signs for half of the pictures were totally different in the two sign languages; the
signs for the remaining stimuli were pseudo-cognates differing by one or two articulation parameters (eg
handshape, location or movement). A switching cost was found, with switch responses slower than non-switch trials. A clear L2-L1 asymmetry was not found, but there did appear to be a language-specific effect: producing ISL was faster than producing BSL. A cognate facilitation effect was found: response latency was shorter where the item had pseudo-cognates in both ISL and BSL. The implications of these findings
for models of bilingual lexical access will be discussed.
Wednesday 10th July, Poster Session 1
TISLR 11, July 2013
Ronnie Fagundes de Brito & Tarcisio Vanzin
Poster 1.40
UFSC/EGC, Brazil
Data Modelling for Sign Language Subtitles in Textual Mode
SignWriting is a written mode of communication for sign languages. This method has been
adopted in learning and communication settings, and its widespread adoption depends on
socio-cultural and technical factors that promote and enable its effective use.
From a technological perspective, the codification of SignWriting in digital environments is
supported by different formats and shows great potential for application to digital
audiovisual content, providing deaf viewers with a means of access to its auditory
information. However, to be applied in audiovisual accessibility scenarios, the display of
written signs requires temporal alignment of the symbols with the video content being
presented.
This paper presents an encoding format for SignWriting that enables the association of these
written symbols with the temporal information of videos.
This is an extension of the format proposed by Sutton and Slevinski (2011), with the creation of
elements that allow the combination of attributes to denote starting and ending time for
displaying written signs in the form of subtitles.
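Purely as an illustration of the idea, a timed entry of this kind can be modelled as follows; the field names and placeholder symbol strings are assumptions of this sketch, not the actual elements of the Sutton and Slevinski (2011) extension.

    # Minimal sketch: a written sign paired with display timing.
    from dataclasses import dataclass

    @dataclass
    class TimedSign:
        symbols: str   # encoded SignWriting symbols (placeholder strings here)
        begin: float   # display start, seconds from the start of the video
        end: float     # display end, seconds from the start of the video

    def active_at(track, t):
        # Return the written signs to render at video time t.
        return [s for s in track if s.begin <= t < s.end]

    track = [TimedSign("S10000-S20500", 0.0, 2.5),
             TimedSign("S15a20-S2e300", 2.5, 5.0)]
    print(active_at(track, 3.1))  # -> the second entry only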
The format is implemented in association with the H.761 specification (ITU-T, 2011) for IP
television, and gave rise to a subtitle player on this platform.
The results comprise the format for subtitling in SignWriting and its player; as final remarks,
some aspects are identified that must be refined in order for this format to meet the
accessibility requirements of deaf viewers of audiovisual content.
Wednesday 10th July, Poster Session 1
TISLR 11, July 2013
*Ros Herman, *Penny Roy & ^Fiona Kyle
Poster 1.41
*City University London; ^University of Bedfordshire, UK
Sign Language and Literacy Skills in Deaf Children
It is widely acknowledged that many deaf children have difficulties in their language and literacy development due
to challenges accessing the spoken language upon which reading is based. Since sign languages bypass the
impaired auditory channel, proficient deaf signers may have more extensive language systems than those
available to non-signing deaf individuals. However, this does not in itself help the process of reading, since the
written script is based on a completely different language system to sign, i.e. speech. Whereas hearing children
(and indeed oral deaf children) first learn English as a spoken language, then transfer skills and knowledge from
spoken English to its written form, BSL users must acquire a new orthographical code and also a different
language, since there is no written equivalent of BSL.
Although theories of bilingualism suggest that skills acquired in a first (spoken) language transfer to the
acquisition of a second (spoken) language (Cummins 1989, 1991), there is no reason why transfer should occur
when the first language is sign and in a completely different modality to the second language, i.e. written English
(Mayer & Wells, 1996). Yet, there is some evidence that deaf children with stronger sign language skills (Strong &
Prinz, 2000), broader vocabularies (Kyle & Harris, 2010, 2011) and who are native signers from birth achieve
better reading outcomes (Stuckless & Birch, 1966; Meadow, 1968; Kusche, Greenberg, & Garfield, 1983; Strong
& Prinz, 1997, 2000). Most studies to date have been based exclusively on children in deaf families and may not
therefore fully represent all deaf signing children. Nonetheless, such achievements suggest that, without the
direct letter-to-sound correspondences, deaf signers may access literacy in alternative ways (Hermans et al., 2008; Haptonstall-Nykaza & Schick, 2007).
In a recently completed UK study, we investigated reading in a large sample of oral deaf children age 10-11 years
based on scores obtained from standardised measures of language, reading and phonological skills. Initial
findings suggest the key role of vocabulary in acquiring literacy. We are now studying a sample of same-aged
deaf signing children using the same/comparable measures adapted for BSL users. This paper will report
preliminary findings for the deaf signing children in comparison with our oral deaf group. In this paper we focus
predominately on findings from the language and reading measures and address the following questions:
- Given that signing is more accessible to deaf children than spoken language, do deaf children who sign have larger vocabularies than those who speak?
- Do signing children with larger vocabularies and better language levels in BSL have better reading levels?
- How does vocabulary relate to phonological skills in signing and oral deaf children?
Wednesday 10th July, Poster Session 1
TISLR 11, July 2013
*Rose Stamp, *Kearsy Cormier, ^Bronwen Evans & +Adam Schembri
Poster 1.42
*DCAL, University College London, UK; ^University College London, UK; +La Trobe University, Melbourne, Australia
British Sign Language (BSL) dialects in contact: Investigating the patterns of
accommodation and language change
Presented in BSL
Short-term linguistic accommodation has been observed in a number of spoken language investigations (e.g.
Babel, 2010; Coupland, 1984). The first of its kind in sign language research, this study aims to investigate the effects
of dialects in contact and lexical accommodation in British Sign Language (BSL). Twenty-five participants were
recruited from Belfast, Glasgow, Manchester and Newcastle and paired with the same conversational partner (a deaf
native BSL signer “confederate” from Bristol). All participants completed four tasks: a) a lexical elicitation task, b) a
Diapix task, c) a dialect comprehension task, and d) an interview. The aim of the Diapix task (Van Engen et al., 2010)
was to engage participants in spontaneous conversation whilst eliciting a large amount of regionally-specific sign
data. In the dialect comprehension task, participants were asked to identify the correct meanings of regional colour
signs. BSL is known to show considerable regional lexical variation, and research has found conflicting evidence as to the degree of comprehension of these varieties (e.g. Kyle & Woll, 1982; Woll et al., 1991).
Initial findings reveal that younger signers accommodate more at the lexical level than do older signers, but
that the overall rate of accommodation was comparatively low. Observation of the conversational data in this study
showed that signers have no difficulties interacting with signers from different regions, with mouthing often
disambiguating the meaning of regional signs. Moreover, recent studies have shown that variation in BSL is
decreasing with younger signers using fewer regional variants and favouring variants associated with London in most
cases (Stamp et al., 2011). It is possible that both these factors affected the amount of accommodation needed for
successful communication. However, initial findings from the dialect comprehension task revealed that there are
differences in the comprehensibility of the different varieties. Out of the eight regions, participants performed best
with Birmingham varieties, followed by the London varieties. One possible explanation for this pattern of results is
that some Birmingham colour signs integrate the initial fingerspelled letter of the English word into the sign
production (e.g. fingerspelled ‘P’ for the sign PURPLE). A follow-up analysis, removing the signs incorporating
fingerspelling, revealed that the London colour signs were the most widely understood. This paper will explore the
full implications of this finding for an understanding of lexical variation in sign languages, as well as considering how
the findings relate to spoken language studies.
Wednesday 10th July, Poster Session 1
TISLR 11, July 2013
Ryan Lepic
Poster 1.43
University of California, San Diego, USA
Wednesday 10th July, Poster Session 1
TISLR 11, July 2013
Sally Gillespie
Poster 1.44
Queen's University Belfast, UK
Exploring Language Preference within the Deaf Community of Northern Ireland
Historically, understanding of the demographics of this community has been based largely
on assumptions, broad generalisations and widely accepted crude figures. These figures,
5,000 BSL users and 2-3,000 ISL users in Northern Ireland, were published by RNID
without explanation or clarification. It is not stated whether these refer only to deaf sign
language users, as one might assume, or whether these figures are intended to include all
residents of Northern Ireland that have BSL/ ISL as a functional language and yet, they
have remained the accepted statistics and remained unchallenged for many years. In
addition to this, RNID (now Action on Hearing Loss) has updated the fact sheet, which
now states, "At the moment there are no reliable current figures on how many people... in
Northern Ireland use Irish Sign Language (ISL)."
There is no explanation or rationale for these figures, which is also true of the second commonly held statistic, that 2/3 of sign language users in Northern Ireland use British Sign Language and 1/3 use ISL; this figure is unpublished yet remains a commonly accepted statistic within the Deaf community.
This ongoing research project aims to establish reliable information with regard to the
demographics of the deaf community, including language preference and the prevalence
of bilingualism within the community. Although initially using a wider definition of the deaf
community, focus is then drawn to the signing deaf community. If successful, this will be the largest project of its kind in the UK, aiming to document details of an entire country's deaf, signing community. Similar work has been completed in Sweden by Werngren-Elgström et al. (2003) to establish greater understanding of the Swedish deaf signing community. Similarly to the approach used by Werngren-Elgström et al., a number of
different measures are explored and compared in this research to create a more reliable
figure than could be reached by any individual approach. A combination of primary research and carefully selected secondary research is used to generate an
understanding of this largely undocumented community.
This limited understanding of the community in turn limits meaningful evaluation of
resources and services to which the minority is entitled. Beyond the basic understanding
of the number of sign language users in the country, there is no understanding of the
dispersion or location of these individuals, which further complicates the provision of
appropriate services. Establishing reliable statistics will also act as a framework for future
research within the field, contributing particularly to the overarching project I am currently
undertaking evaluating the resources, demographics and deficit of opportunity afforded the
Deaf community in Northern Ireland. Beyond this, the research will support justification of
investment in the community, highlight the rights and requirements to offer such services to
the minority and potentially create an exportable model of research for other national Deaf
communities.
Werngren-Elgström, M., Dehlin, O. and Iwarsson, S., 2003. A Swedish Prevalence Study of Deaf People Using Sign Language: A Prerequisite for Deaf Studies. Disability and Society, 18(3), pp. 311-323.
Wednesday 10th July, Poster Session 1
TISLR 11, July 2013
Sara Carvalho, Mara Moita & Ana Mineiro
Poster 1.45
Instituto de Ciências da Saúde - Universidade Católica Portuguesa, Portugal
AQUI_LGP Corpus: a longitudinal corpus of Portuguese Sign Language acquisition
Acquisition studies demonstrate that, in general, the path of language acquisition is comparable for sign and spoken languages (Lillo-Martin, 1999; Van der Bogaerde, 2000), even though the visual modality of the former differs from the auditory modality of the latter. Some research has claimed that babbling is a similar stage of language acquisition for children acquiring languages in different modalities (Petitto, 2000); however, several studies have reported modality effects on phonological and articulatory development, with a sign advantage for first words (Meier & Newport, 1990). What about children who are bilingual across modalities? Are the two languages affected by code-blending or code-switching? Will there be grammatical interference in both modalities?
Current research on bimodal bilingualism has attempted to answer these questions based on bimodal bilingual production
of coda adults (e.g. Emmorey et al., 2008) and children (e.g. Johnson et al., 1992; Lillo-Martin et al., 2010). Studies have
shown that performances of bimodal bilingual children exhibit cross-linguistic influence and code-blending/switching,
establishing that these phenomena result from two available phonological and syntactic systems (Lillo-Martin et al., 2010),
instead of the existence of a ‘third grammar’ or ‘language dominance’ (Hulk & Müller, 2000).
Portuguese Sign Language (LGP) has been studied as a complex structural natural linguistic system since the 1980s, and
although some important linguistic studies were carried out and a number of limited sign vocabularies have been created,
only recently has the acquisition of LGP begun to be studied (Carmo et al., in press).
In order to fill this gap, a longitudinal corpus of Portuguese Sign Language acquisition has been developed, the AQUI_LGP
Corpus, reference PTDC/LIN/111889/2009. The main goal of this project is to establish research concerning early
acquisition of LGP in both monolingual and bilingual children. The project allows not only the study of language acquisition
and development but also the creation of a corpus that enables the study of LGP. We also aim at comparing bimodal
bilingualism results with what is reported about unimodal and bimodal bilingualism.
The corpus comprises naturalistic recordings (and their transcriptions) of 13 male and female children acquiring language for
a period of two years. The sample comprises hearing bimodal children (LGP/Portuguese), deaf children with early sign
language acquisition, children with cochlear implants, and hearing children with early Portuguese acquisition. The children’s
age at onset varies from 10 months to 4 years. They were video-recorded individually every other week if monolingual
and every week if bilingual. The transcriptions are being made using ELAN (EUDICO Linguistic Annotator).
Currently there are several studies being carried out based on the corpus: an analysis and comparison of language acquisition
and development of monolingual and bilingual children both in LGP and Portuguese; a study of Mean Length of Utterance
(MLU) of children with cochlear implants in both LGP and Portuguese; an assessment of vocabulary of bilingual children in
both languages.
Wednesday 10th July, Poster Session 1
TISLR 11, July 2013
Scholastica Lam & Pippen Wong
Poster 1.46
The Chinese University of Hong Kong
Acquisition of Verb Agreement in Hong Kong Sign Language by Late Learners in the Jockey Club Sign Bilingualism and Co-enrolment in Deaf Education Programme
Child language acquisition is an amazing process, as all children undergo similar developmental stages and acquire their first language within a few years (cf. Guasti 2002, Lust 2006, among others). Lenneberg's (1967) Critical Period Hypothesis, however, notes that full linguistic competence can only be obtained when language acquisition takes place within the critical period. Deaf children of hearing parents, who do not receive fully accessible language input from birth, then become natural test cases of this hypothesis. Previous studies on deaf adults show that delayed ASL input has an impact on phonological processing as well as competence in ASL morphology (Emmorey, Bellugi, Friederici, and Horn 1995, Newport 1990, 1991, Mayberry 1994, among others). Recent studies on two deaf children who received ASL input after the age of 6 further show that delayed sign language input affects the development of verb agreement (Berk 2003, 2004).
This paper aims at exploring the impact of delayed sign language input on the acquisition of Hong Kong Sign Language (HKSL) verb agreement by 11 severely to profoundly deaf children who received HKSL input in a sign bilingual school setting. All of these children have hearing parents, and their ages at initial exposure to HKSL range between 3;11 and 6;8. Children's knowledge of verb agreement was elicited longitudinally with a story retelling task from the Hong Kong Sign Language Elicitation Tool.
This paper reports the results of the data collected at two time points, with a 2-year interval. The deaf children produced a total of 56 and 74 tokens of the agreement verb GIVE at the first and second time points respectively. The error rates are high at both time points (17.86% at the first time point and 14.86% at the second). Omission errors, commission errors and avoidance are the major error types. Deaf students tended to avoid the use of fully marked agreement forms by using location marking (e.g. a-GIVE-3i). They also produced the agreement markings sequentially by using two verb signs, each marked with one person value and one location (e.g. 3i-give+CL_body:hand-a a-GIVE-1o). While omission and commission errors are commonly reported in acquisition studies on verb agreement, the avoidance strategies are not only used by late learners. Earlier studies on the acquisition of HKSL verb agreement show that CC, a deaf child who received native HKSL input from age 1;9, produced agreement verbs marked for spatial locations (cf. Lam 2009). The sequential production of agreement markings also echoes the sequential production of classifier predicates in first language acquisition in ASL (Supalla 1982). Mastery of verb agreement is often measured by the production of agreement markings in obligatory contexts. The deaf children studied here did not initially perform better with agreement markings in obligatory contexts. However, their performance improved greatly at the second time point, as evidenced by a drop in error rate from 33.33% to 10.00%. Taken together, the deaf children of hearing parents go through similar developmental stages as native learners acquiring a sign language as their first language.
Wednesday 10th July, Poster Session 1
TISLR 11, July 2013
Sigrid Slettebakk Berge
Poster 1.47
NTNU, Department of Social Work and Health Science, Norway
Meaning making and construction of sign and gesture towards situated artifacts.
In interpreter-mediated learning situations, the presence of a third element significantly interferes with the production of sign and gesture and with the meaning-making process. The material presented is video data gathered from a PhD study of ten inclusive learning situations for deaf students in Norway. The problem to be addressed is: when constructing utterances with the same intended meaning, how will the presence of a third object influence the construction of sign and gesture? This paper presents the use of technical signs between two deaf boys who are studying to become machine operators, and their sign language interpreter. Here, there is a difference between the students' ways of constructing sign and gesture,
compared to the interpreter’s way of signing. In the presented learning situation, the students are working on their
computer, learning a new technical drawing program. The teacher is standing in front of the classroom working on his
computer, and at the same time describing the different effects of his operations. His actions are shown on a blackboard
and he is expecting the pupils to follow on their own computers. The teacher’s use of artifacts makes this communication
situation demanding. The two deaf students must focus on different modalities of information, shifting their gaze between the teacher, the interpreter, the blackboard and their computers. The participants' use of situated artifacts influences the pupils' development of technical signs and how they create their utterances. The analysis of the boys' talk shows that they use a lot
of gestures in their signed utterances (Liddell, 2003). These gestures are often constructed from the figures shown on their
computer. They also point to their computer screen. Here, the computer and what is in it become a modality for how they produce their talk and how they make meaning of the others' signs and gestures (Nilsson, 2004). In interviews they
describe this as “the deaf way” of talking, and they say that this way of producing meaningful utterances and signs is not
shared by the interpreter, who uses a more word-to-word based translation. The analysis of the interpreter’s talk shows
that she is balancing between translating the spoken words and coordinating the pupils’ attention toward the blackboard.
This is done by a moment-to-moment evaluation of the students’ learning situation and the best way for them to capture
information from the different modalities in use (Wadensjö, 1998).
Wednesday 10th July, Poster Session 1
TISLR 11, July 2013
Sin Yee Prudence Lau
Poster 1.48
The Chinese University of Hong Kong
The Realization of Shared Argument(s) in Serial Verb Constructions in Hong Kong Sign
Language (Presented in English)
Statement:
The syntactic dependency of shared argument(s) in Serial Verb Constructions (SVCs) in Hong
Kong Sign Language (HKSL) can be realized in two major ways: 1.) an empty category in the
form of either an NP trace or a free empty category (FEC); 2.) an overt pronominal in the form of
a classifier.
Abstract:
In linguistics, there is a wide range of disparate empirical phenomena in which the abstract
structural properties pose subtle interpretative constraints on how speakers of the language can
use a given form or description to identify the referent (or the object or entity) in a discourse
(Safir 2004). These constraints on acceptable interpretations for sentences thus have raised the
questions of the exact nature of these abstract structural properties as well as the possible rules
that govern these structural properties.
Among these linguistic phenomena, Serial Verb Constructions (SVCs) are of particular interest
because they invoke the important theoretical issues of how the shared argument(s), which is a
salient linguistic feature ascribed to the constructions in the literature, is distributed and
interpreted. In the SVC literature of spoken languages, the shared argument is often discussed as
mediated by a null argument which is realized as an empty category (Collins 1997; Baker &
Stewart 2002; Carstens 2002; among others). Following the previous studies, this paper explores
the possible syntactic interpretation(s) of the shared argument(s) in the analysis of SVCs in Hong
Kong Sign Language (HKSL) under the Minimalist framework (Chomsky 1995, 2000). The
main goal is to show how the syntactic dependency between the ‘missing’ argument and the
referent in the construction is represented, and whether the kind of interpretative strategy that
justifies the existence of the ‘missing’ argument is distinctive from that in spoken languages.
The analysis shows that the syntactic dependency of the shared argument(s) can be realized in
two major ways: 1.) an empty category; 2.) an overt pronominal. In terms of the former, the
empty category can be realized as either an NP trace, or a free empty category (FEC).
Specifically, the shared agent argument and/or the shared theme argument in some types of
SVCs (including Motion-directional SVCs, Take-SVCs (Instrument), Take-SVCs (Theme),
Manner-SVCs and Give-SVCs) can be realized by an NP trace via A-movement triggered by θ-role feature checking, as suggested by Hornstein (1999, 2001). On the other hand, the shared theme
argument is realized as a FEC which is base-generated in Transitive class-SVCs, and its
interpretation is pragmatically licensed by the antecedent or base-generated topic. In terms of the
latter, in the two types of Resultative-SVCs that the verbs are expressed as classifier predicates,
the shared agent argument or the theme argument can be observed through the use of the
classifier which is realized as an overt pronominal in the syntactic structure, and is licensed by its
antecedent in the discourse/pragmatic context. This paper's claim that HKSL has two ways to show the syntactic dependency of the shared argument(s) is significant to the current
research of SVCs, in that it contributes to the linguistic evidence for the concept of shared
argument(s) across languages, not only by the stipulation of explicitly constructed linguistic
principles as agreed in spoken language literature, i.e. by means of an empty category, but also
by the language specific property which is unique in sign languages, i.e. by means of a classifier.
Wednesday 10th July, Poster Session 1
TISLR 11, July 2013
*So-One Hwang, *Sharon Seegers, *Carol Padden & ^Susan Goldin-Meadow
Poster 1.49
*UC San Diego; ^University of Chicago, USA
The role of gesture in learning for signing children: Implications for sign language theory
Language for presentation: English
In spoken languages, gesture and speech are divided by modality, but in sign languages, the difference is less clear because
gesture and sign are in the same modality. Classifiers, in particular, are vexing: in various analyses, they are described as
gestural, linguistic, or both (Liddell, 2003; Supalla, 1982). We take an experimental approach to identifying properties of
classifiers by examining their use in learning tasks. In hearing children, gesture-language mismatches have been shown to
predict readiness to learn new concepts (Goldin-Meadow, 2000). Mismatches occur when speakers provide different
information in gesture than in speech in the same explanation. If deaf children can reliably be shown to produce gesture-sign
mismatches that also predict learning, the results could be useful for determining the properties of classifiers.
In a study of math equivalence learning, Goldin-Meadow et al. (2012) found gesture-sign mismatches in explanations of deaf
signing children. Deaf children ages 9-12 were asked to solve problems of the type 2+3+4=__+4. One coder transcribed their
signs, and a different coder transcribed their use of deictic points and sweeping gestures proximal to or touching the white
board as students were explaining their answers. They found that deaf children produced mismatches at the same rate as
hearing children on the same task, and that children who produced more mismatches on the pre-test were significantly more
likely to benefit from training and improve on a post-test. They conclude that gesture-language mismatches involve two
modes of thought. However, most of the gestures produced by children were deictic points and sweeps and do not address the
status of classifiers.
Using a task that involves use of classifiers, we asked young deaf children (n=33) ages 5-8 to answer questions about
conservation of liquid, mass, number and length. Conservation is achieved when children understand that material properties
remain the same even after transformation changes their perceptual qualities (Piaget, 1952). We compared the characteristics
of three groups of students: conservers (≥75% correct, n=5), partial conservers (<75% & >25% correct, n=8), and non-conservers (≤25% correct, n=20). We coded segments in the students’ responses as either lexical signs, descriptive
classifiers, or gestures. We then identified whether these segments were produced 1) in the signing space and not proximal to
the object, 2) proximal to the object without touching, or 3) actually touching the object, referring to gradient attributes of the
objects. Although the rate of gesture among the three groups was approximately the same (~50% of trials), conservers used more lexical signs than the other two groups. Conservers were also more likely to use classifiers in a categorical
manner than non- or partial conservers, who used them in a gradient manner. The ability to abstract away from overt physical
appearance is key to acquiring conservation knowledge. Our findings suggest that classifiers may be used flexibly, in either a categorical or a gradient manner. Skillfully deploying categorical or gradient modes may play a role in learning and problem-solving.
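For concreteness, the two-dimensional coding scheme (segment type by proximity level) can be pictured as in the sketch below; the category names follow the description above, while the encoding itself is only illustrative.

    # Minimal sketch: one coded segment = (segment type, proximity level).
    from enum import Enum

    class SegmentType(Enum):
        LEXICAL_SIGN = "lexical sign"
        CLASSIFIER = "descriptive classifier"
        GESTURE = "gesture"

    class Proximity(Enum):
        NEUTRAL_SPACE = 1  # in signing space, not proximal to the object
        PROXIMAL = 2       # near the object, without touching
        TOUCHING = 3       # touching the object (gradient use)

    segment = (SegmentType.CLASSIFIER, Proximity.NEUTRAL_SPACE)
    print(segment)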
Wednesday 10th July, Poster Session 1
TISLR 11, July 2013
Susan Mather
Poster 1.50
Gallaudet University, USA
Degree of Intensity: The use of mouthing and its face components in ASL
Mouth behaviors in sign languages have been divided into two categories: mouthing and
mouth gestures. Boyes-Braem and Sutton-Spence (2001) define mouthings as mouth patterns derived from spoken languages and produced simultaneously with manual signs. Mouth gestures are defined as obligatory mouth patterns that function as separate
morphemes and are not derived from spoken language.
Liddell (1980) proposes that mouth gestures act as adverbs in combination with manual
verbs in ASL. Liddell and Johnson (1989) state that some non-manual signals have
specific syntactic or morphological functions independent from their manual
counterparts.
This research reveals the need for the following: a full-face analysis to identify the type of adverb (e.g., adverb of degree), and identification of the specific degree used, to show whether the adverb is of low or high intensity.
In my research, I employed observer-based measurement of facial components using the
Facial Action Coding System (FACS) (Ekman, Friesen, and Hager, 2002). Using FACS
and viewing video-recorded facial behavior at both frame rate and in slow motion, coders
can manually code nearly all possible facial components including mouth actions, which
are decomposed into action units (AUs). An action unit is, with some qualifications, the smallest visually distinguishable facial movement; AUs can also be scored for degree of intensity. Ekman, Friesen, and Hager propose that one has to compare the face actions before and after to help determine the degree of intensity.
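By way of illustration, FACS grades each AU on an ordinal intensity scale from A (trace) to E (maximum), so a before/after comparison can be sketched as below; the specific AUs shown are invented examples, not codes from the clips in this study.

    # Minimal sketch: finding AUs whose FACS intensity grade increased.
    INTENSITY = {"A": 1, "B": 2, "C": 3, "D": 4, "E": 5}

    def intensified_aus(before, after):
        # Return AUs whose intensity grade rose between two coded frames.
        return {au: (before.get(au), grade)
                for au, grade in after.items()
                if INTENSITY[grade] > INTENSITY.get(before.get(au), 0)}

    # Invented example: brow lowerer (AU4) and lip pressor (AU24) intensify.
    print(intensified_aus({"AU4": "B", "AU24": "A"},
                          {"AU4": "D", "AU24": "C"}))  # {'AU4': ('B', 'D'), 'AU24': ('A', 'C')}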
Among the ASL video clips for this study are two video clips with a description of
bacteria. In both clips, the narrator mouths and signs the adjective “GOOD” twice. In the
first clip, the narrator discusses two types of bacteria, bad and good bacteria in the human
body. In translation, she describes, “Definitely, those bacteria are good.” In this instance,
the narrator uses these AUs to intensify both her face and neck components as an adverb
to modify the adverbial sentence. Her mouthing remains unchanged. See Figure 1.
In the second clip, the narrator again discusses the benefits of good bacteria. In closing
the description, she exclaims, “Unquestionably, there are extremely good bacteria.”
Unlike the first clip, she uses two different adverbs, “unquestionably” and “extremely.”
As Figure 2 shows, she intensifies the upper part of her face components more than she did for the first clip. The function of the adverb (“unquestionably”) is to display that there are “without any question” good benefits to these good bacteria. To produce the adverb,
“extremely,” she uses and intensifies her mouthing, which she did not do in the first clip.
Unlike adverbs in spoken English, more than one non-manual adverb in ASL can occur
simultaneously with the signed and mouthed words they modify as well as with one
another. This finding shows that mouthing does have specific syntactic/morphological functions as adverbs of degree.
In conclusion, the FACS methodology when applied to full-face analysis is a highly
useful tool to help determine the degree of low-high intensity, which in turn helps in
identifying the types of adverbs.
TISLR 11 ABSTRACTS FOR POSTER SESSION 2
Thursday 11th July, Poster Session 2
TISLR 11, July 2013
Brendan Costello
Poster 2.01
University of the Basque Country, UPV/EHU; ACLC, University of Amsterdam; BCBL (Basque Center on Cognition, Brain &
Language), Spain
Three important considerations for sign language agreement: location, location, location.
Language of presentation: English
In the on-going debate regarding the status of agreement in signed languages, much discussion has focused on how much a
part of the linguistic system agreement is, and what, if any, role is played by a gestural component. In this presentation, I
tackle a different (albeit related) topic concerning the domain of agreement. Specifically, should agreement be limited to the
class of agreement (or directional) verbs?
Most authors discuss agreement in the context of two-place verbs which move from a to b to mark their arguments. If the
linguistic phenomenon of agreement involves the sharing of features between a controller and a target (Corbett 2006), then in
sign language the verb’s arguments (the controller) share features (their location in signing space) with the verb (the target)
in order to attain verbal agreement. And yet, many other structures in signed languages involve a similar sharing of (spatial)
features but they are not considered to be agreement. Such structures include localized verbs which “agree” with a single
argument – as shown in (1) – or nouns and adjectives which are articulated at the location of an associated referent.
The exclusion of such structures can be traced back to Padden’s (1988) seminal work on agreement in ASL, and her
dismissal of single-argument agreement as ambiguous structures with no clear syntactic underpinnings (2). This uncertainty
was cleared up by Engberg-Pedersen’s (1993) identification of discursive contexts involving contrast (in which referential
ambiguity may arise but is resolved by the context) and Meir’s (1998) subsequent observation that discourse-neutral
localization invariably (and unambiguously) involves the verb’s internal argument: the subject of intransitive verbs and the
object of transitive verbs (see example 1).
By the same token, other localization structures, involving nouns, adjectives or pronouns, seem to be equally free of any
ambiguity that might send a syntactician running. Why, then, this obsession with directionality? Why the need for two-place
structures? One possible motivation may be the strength of traditional agreeing verbs to show linguistic and phrase-driven
behaviour: they show clear signs of interacting with syntax by affecting word order (Quadros 1999) and licensing null
arguments (Lillo-Martin 1986).
This presentation looks at what mileage is to be had from expanding the notion of agreement to any structure which involves
sharing spatial features, namely localization. Possible advantages include a consistent model of spatial agreement and a more
uniform account of verbal behaviour in sign languages. The finding that from a set of nearly 650 common verbs in Spanish
Sign Language, over a third are classified as localizable (whereas less than 20% are directional) draws attention to a category
which does not fit well into the standard tripartite classification of plain, spatial and agreeing verbs. Possible pitfalls may be an incommensurable
distancing of sign language agreement from spoken language agreement, especially when it comes to the thorny issue of
what features are being shared in the agreement process. Crucially, evidence of interaction between localization and the
grammar will support this broader conceptualization of agreement in sign language and strengthen the case for the linguistic
status of agreement.
Thursday 11th July, Poster Session 2
TISLR 11, July 2013
*Iva Korunek, *Marina Milkovic & ^Ronnie Wilbur
Poster 2.02
*University of Zagreb, Croatia; ^Purdue University, USA
Thursday 11th July, Poster Session 2
TISLR 11, July 2013
Cat H.-M. Fung & Gladys Tang
Poster 2.03
The Chinese University of Hong Kong
A Universal Explanation for Code-blending and Code-switching (Presentation in English)
Statement
Code-blending and code-switching data from native deaf signers and child language acquirers of HKSL and Cantonese show
that these processes share the same underlying linguistic constraints as exemplified in the code-switching of spoken
languages, namely functional heads determine the head-complement order. Based on the data, we argue for a universal
explanation for code-switching and code-blending based on the Null Theory originally adopted to account for code-switching
in spoken languages.
Abstract
Code-blending, the simultaneous articulation of sign and speech, is not restricted to hearing sign bilinguals such as CODAs.
Deaf bilinguals, including both adults (Fung 2010, 2012) and children (van den Bogaerde 2000; Fung 2012), also code-blend
and this is independent of whether or not they have achieved a native-like competence in a spoken language. In this project,
instead of using data generated from CODAs, as reported in previous studies (c.f. van den Bogaerde and Baker 2005;
Emmorey et al. 2008; Donati and Branchini 2009; Lillo-Martin et al. 2010), we examine code-blending between Hong Kong
Sign Language (HKSL) and Cantonese in deaf bilinguals. The data came from (i) a longitudinal study involving interactions
between a deaf child acquiring HKSL and native deaf adults; and (ii) spontaneous conversations between native deaf signers.
In spoken language research, proponents of the Null Theory argued that code-switching of bilinguals should observe the
same set of linguistic constraints as monolinguals (Mahootian 1993; MacSwan 1997, 2000; Chan 2003, 2008). They found
that the language of the functional head determines the head-complement order of a code-switched phrase. Assuming that
unimodal and bimodal bilinguals do not differ in their abstract architecture of language, we want to test whether the Null
Theory is applicable to code-blending between HKSL and Cantonese. To examine whether head-directionality interacts with
code-blending, we focus on the TP and NegP of HKSL and Cantonese because HKSL has head-final TP and NegP while the
Cantonese TP and NegP are head-initial. Similar to spoken languages, the head-complement order of a code-switched or
code-blended phrase in HKSL and Cantonese is consistently determined by the functional heads, as shown in (1) and (2)
below. When the head is not code-blended, the language of the functional head determines the head-complement order. In
(1), the non-blended head (i.e. auxiliary jau5 ‘have’) is from Cantonese while the blended element, the VP, follows the T.
The T-Comp order results because Cantonese TP is head-initial. In (2), the negative auxiliary NOT-HAVE ‘not have’ in
HKSL is head-final and hence its code-blended complement, the NP, is on its left. This Comp-Neg order follows the HKSL
grammar. These two patterns conform to our predictions that the head determines the complement order in code-switching as
well as code-blending, as part and parcel of the Null Theory.
The only difference between unimodal and cross-modal language mixing is that when the functional heads are code-blended,
the complement order may follow that of either language. In (3), the Cantonese modal ho2ji5 ‘can’ and the HKSL modal
CAN ‘can’ are blended. This TP adopts a T-Comp (i.e. Cantonese) order, presumably determined by Cantonese ho2ji5. On
the other hand, in (4), a Comp-Neg order results when the HKSL negator NOT ‘not’ is blended with Cantonese m4-hai6
‘NEG.be’. In this example, this Comp-Neg order follows HKSL grammar.
To examine whether the Null Theory can account for code-blending in child language development, we also analyze the
longitudinal data of a deaf child. Due to cross-linguistic influence during bilingual development, the child data reveal
violations of the code-blending constraints. Example (5) shows that the HKSL modal CAN ‘can’ precedes its blended
complement, violating the HKSL grammar, hence the code-blending constraints. These violations accord with the findings of
code-switching acquisition studies in spoken languages. During the stages of language acquisition, different factors might
arise and affect early mixing (cf. Cantone 2005, 2007). We argue that language dominance might be one such factor
affecting head-directionalities of code-blending in sign bilingual child development.
Thursday 11th July, Poster Session 2
TISLR 11, July 2013
David Vinson, Neil Fox, Pamela Perniss & Gabriella Vigliocco
Poster 2.04
University College London, UK
Involving the body in sentence comprehension: Action-sentence compatibility effects in British
Sign Language and written English
LANGUAGE: English
ABSTRACT: A wide variety of studies have highlighted a central role of bodily experience in language comprehension, in
contrast to traditional views of language as a system of abstract symbol manipulation separated from other aspects of
perception, action and cognition (e.g. Barsalou 2009; Meteyard et al. 2012, for review). In a notable study, Glenberg &
Kaschak (2002) asked English speakers to judge the sensibility of sentences like “Andy delivered the pizza to you”, “You
told Liz the story” and “Boil the air”. Sensible sentences implied motion either toward or away from the body. Importantly,
to indicate sensibility judgments, participants pressed a key located either near or far from their body. For both concrete and
abstract sentences, participants responded faster when the direction of motion implied in the sentence was congruent with
that of the required physical movement than when it was incongruent. This finding, the Action-Sentence Compatibility Effect
(ACE), implies that comprehension of written language involves simulation of the actions depicted in the sentences.
Given such findings for written language stimuli, one may wonder what would happen in sign language comprehension.
Specifically, many sign language verbs encoding transfer of the type studied by Glenberg & Kaschak (2002) explicitly realise
directionality of motion through a corresponding movement of the hands through space. In order to address this question, we
have investigated ACE effects in Deaf bilinguals, testing for effects in both their L1 (video-recorded BSL) and L2 (written
English).
Materials for the English study were taken from Glenberg & Kaschak (2002); BSL sentences indicated transfer between the
sign model and the participant (see Examples), including an equal number of plain (no directionality) and agreeing (with
directionality) verbs. Participants came for two sessions, which differed only in the direction of the yes response (toward vs. away from the body). In each session they carried out both the English and the BSL versions of the experiment.
In English, we replicated the ACE effect. Participants were faster when implied motion in the sentence was congruent with
their response direction (1289ms vs. 1378ms): Deaf bilinguals simulate the actions implied in written English sentences as
they comprehend them. We did not find any ACE effect in BSL (congruent vs. incongruent: 2411 vs. 2403ms), even for
agreeing verbs. This is surprising because of the greater involvement of the body in BSL and the visual iconicity of motion
events.
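As a hedged illustration of how such a congruency effect is commonly tested (a within-participant RT contrast; the numbers and variable names below are fabricated placeholders, not the study's analysis script):

# Sketch: testing an Action-sentence Compatibility Effect (ACE) as a
# within-participant RT contrast. All values are fabricated.
import numpy as np
from scipy.stats import ttest_rel

rng = np.random.default_rng(0)
n_participants = 20
congruent = rng.normal(1289, 120, n_participants)    # mean RTs (ms), congruent trials
incongruent = rng.normal(1378, 120, n_participants)  # mean RTs (ms), incongruent trials

t, p = ttest_rel(congruent, incongruent)
print(f"ACE = {np.mean(incongruent - congruent):.0f} ms, t = {t:.2f}, p = {p:.3f}")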
However, it is possible that the use of first/second person sentences in the BSL study may have created conflicting embodied
influences: if participants are taking the perspective of the on-screen signer as part of comprehension, along with simulating
the events themselves, this may interfere with ACE effects. In order to address this issue we are currently conducting follow-up studies in BSL, manipulating sentences to encode transfer between first/third and second/third person. The results of these
studies will shed light on how motion events are simulated during sign language comprehension, and how comprehending
sign language may modulate hand-specific action simulation.
Thursday 11th July, Poster Session 2
TISLR 11, July 2013
*Dominique Boutet, *Marion Blondel & ^Sébastien Delacroix
Poster 2.05
*CNRS/University Paris 8; ^LAM, France
Where does the sign end?
A biomechanical exploration of the distinction between gestures and signs (English)
Emmorey (1999) posed the question "do signers gesture?", and we know they do. Consequently, annotators divide ID-glosses (Johnston 2010) into two types: gestures shared by hearing and Deaf people, and the core lexicon of the SL. However, this distinction is frequently made on an intuitive basis rather than according to explicit criteria. In this study, we wish to show that the joint limits of the hand play a role in distinguishing between what we term gesture and sign, in the coverbal gestures accompanying French and in LSF respectively.
We hypothesize that the stabilization of sign forms conforms to physiological parameters prior to any other motivation (iconicity, among others). This general hypothesis leads to three propositions: 1) Signs used in SL (iconic or not, such as pointing) should show the same characteristics with respect to joint limits. 2) During the signifying stage of a sign (a stroke, for example), certain joint limits that figure in the physiology of coverbal gestures should rarely figure in the physiology of stabilized SL signs; this is expected particularly for Flexion/Extension and Abduction/Adduction. 3) As the transfer of movement affects the degrees of freedom of the hand (for Flexion/Extension, but more so for Abduction/Adduction, XXX 2010), we expect such transfer to appear primarily on the forearm in both gestures and signs.
To address these issues, we examine a demonstration corpus of LSF and French (including coverbal gesture) elicited through the same protocol (http://www.irit.fr/marqspat/presentation.html). This video corpus was also recorded with motion capture, and the movement coordinates have been transformed into articulatory physiological data. We thus obtain the amplitude of each degree of freedom and the proximity to joint limits at any given moment.
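A minimal sketch of this proximity-to-limit computation, assuming hypothetical joint names and range-of-motion values (real per-subject limits and the full mocap pipeline are not specified here):

# Sketch: from motion-capture joint angles to "proximity to joint limit".
# Joint names and limit values are illustrative assumptions only.
import numpy as np

# Anatomical range of motion (degrees) for two degrees of freedom;
# real limits vary by subject and by source.
LIMITS = {"wrist_flex_ext": (-80.0, 70.0), "wrist_abd_add": (-20.0, 30.0)}

def proximity_to_limit(angle_deg, dof):
    """Return 0.0 at mid-range and 1.0 at either joint limit."""
    lo, hi = LIMITS[dof]
    mid, half_range = (lo + hi) / 2.0, (hi - lo) / 2.0
    return min(abs(angle_deg - mid) / half_range, 1.0)

# A fabricated trajectory: one wrist angle per video frame.
trajectory = np.array([10.0, 35.0, 60.0, 68.0, 50.0])
print([round(proximity_to_limit(a, "wrist_flex_ext"), 2) for a in trajectory])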
The study encompasses both signs and gestures, which have been pre-classified into semiotic categories (indexical, iconic,
metaphoric, role-taking). Based on this classification, we measure and calculate the distance to the joint limits of the hands
(hypothesis 2 above) and the forearms (hypothesis 1 above) at any moment during gestures and signs. We also measure the
transfers between the hand and the forearm (hypothesis 3 above).
We will discuss the results and their significance, particularly with respect to phonological theories of SL, according to the
constraints they incorporate, visual, iconic and metric (Uyechi, 1996, Crasborn, 2001, Miller, 2000). We will also discuss the
reasons underlying our results with respect to pronation-supination. In fact, only the hands show a high enough level of
freedom in movement as well as in the initial and final positions of SL signs. We will show, for example, that this freedom
stems partly from what occurs in the forearm, thus relating segments that are considered independent.
Thursday 11th July, Poster Session 2
TISLR 11, July 2013
Donovan Grose
Poster 2.06
Hang Seng Management College, Hong Kong
The Iconicity of Scalar Verbs Across Semantic Fields
As research into sign language (SL) lexical semantics and iconicity develops, methodologies are required that allow claims to be tested across lexicons, both to avoid under-/over-generalizations and to identify new research questions. The strategy proposed here includes developing an inventory of semantic fields (S-fields) of distinct cohesive
here includes developing an inventory of semantic fields (S-fields) of distinct cohesive
meanings. Under certain constraints, some meanings are lexicalized as verbs (V), but as
S-fields are diverse, Vs are naturally distributed unevenly across them. By sampling Vs
from a range of S-fields, the accuracy of a lexical semantic claim can be systematically
tested to reveal problems or gaps in coverage. Using a sample of 353 lexical SL Vs from
36 S-fields (Table 1) we test the notion that scalar meanings in V may be iconic. Scales
(or event paths) are ‘used up’ as an event progresses, and are distinct from telicity.
Beginning with 221 V from 26 S-fields (Group A) closely associated with scales in the
literature, based on representations of scales in classifier predicate constructions, scales
are treated as iconic in a V if the V denotes a scale (VS), including a scalar process or
resultant state, and if its form includes a single hand shape or orientation change, or a
single movement to or from contact. Additionally, provided that VS are iconic, other
forms associated with non-scalar V (VN) are also treated as iconic. Vs from S-fields in
Group C, lacking appropriate dynamic scales, are used for comparison. Citation forms of
3 V from Group A violated the claim, as VS with non-iconic forms (EAT, BUILD,
DEVELOP). Other variants of these V roots conform to the prediction, raising questions
regarding apparent violations of lexicalization constraints. For V from the more abstract
Group B, 32 V with metaphorical or abstract apparent scalar meanings (FORGIVE, LEARN,
SCARE, MARRY) had forms associated with scales in Group A, despite differences in
associated argument roles (Experiencers vs. Patients), and the ‘unbound-able’ nature of
the denoted scales. In some cases (i.e. MARRY), the V form can be treated as a
metaphorical extension of scalar meanings from an S-field from Group A (COMBINE &
ATTACH), but this treatment is problematic for other S-fields in Group B. All VN from
Group B behaved as expected. Based on these data, despite important differences in
behavior, V denoting changes in psychological and social status are similar to, or are
lexicalized as similar to, concrete scales such as locomotion, transfer, creation and
consumption. We discuss how this raises questions for the treatments of scales, and
analyses that depend on a notion of scale, such as event structure, and propose a unified
analysis that resolves these issues. Across all S-fields, Vs denote both scalar and non-scalar events that elapse in time. Within an S-field, scales may be iconic, but similar
forms outside the S-field may be associated with non-scalar meanings. Thus, the
iconicity of scalar meanings, and other types of lexical iconicity, can only be understood
with reference to S-fields.
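As a hedged illustration of the tallying step this method implies (the records below are invented stand-ins for the 353-verb sample, not the study's data):

# Sketch: tallying conforming vs. violating verbs per S-field group.
from collections import Counter

sample = [
    {"verb": "EAT", "group": "A", "scalar": True, "iconic_form": False},
    {"verb": "GIVE", "group": "A", "scalar": True, "iconic_form": True},
    {"verb": "MARRY", "group": "B", "scalar": True, "iconic_form": True},
]

# True in the key below marks a violation: a scalar V with a non-iconic form.
counts = Counter((v["group"], v["scalar"] and not v["iconic_form"]) for v in sample)
for (group, violates), n in sorted(counts.items()):
    print(f"Group {group}: {'violations' if violates else 'conforming'} = {n}")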
Thursday 11th July, Poster Session 2
TISLR 11, July 2013
*Elaine Maypilama & ^Dany Adone
Poster 2.07
*Charles Darwin University, Australia; ^University of Cologne, Germany
Thursday 11th July, Poster Session 2
TISLR 11, July 2013
*Elena Benedicto & ^Gladys Tang
Poster 2.08
*Purdue U., USA; ^CUHK, Hong Kong
Parametric variation in H1/H2 Shift: Switch Reference in Serial Verb Constructions
[ENGL]
In this work we explore the role of H1/H2-shift as a morpho-syntactic device. H1/H2
shift has been observed in the past (cf. Frishberg, 1985) but only its phonological
properties have been formally studied (Brentari & Crossley, 2002). In this work, we
explore its syntactic properties, specifically within the structural domain of the event
argument (vP), comparing its behaviour in two sign languages with divergent properties:
HKSL (Hong Kong Sign Language) and ASL.
We propose that H1/H2-shift constitutes the morphological manifestation of a system of
Switch Reference (SR) that gets activated in Serial Verb Constructions (SVC). SVCs
exist in a good number of languages across the world, and in some of them they combine
with a system of Switch Reference (e.g., Mayangna in the Misumalpan family of
Nicaragua – Hale, 1991). In the present study, contrastive data indicate that while HKSL
presents a system of SR structurally associated with the vP, such a system is absent in ASL
at that same structural level. Examples of the contrast are in (1-2) (where H1 indicates the
dominant hand and H2 the non-dominant hand; square brackets indicate sub-clausal
division).
In this work, we hypothesize that HKSL has a [±anaphoric] feature (the implementation
of a SR system) incorporated into the functional head v0 of the sub-clausal predicates in
an SVC (following the structure proposed in Benedicto, Cvejanov & Quer, 2008), whose
[-anaphoric] value surfaces morpho-phonologically as an H1/H2 shift. In ASL, this
feature system is absent in v0 (though it can be present at higher structural levels).
The data. In the HKSL example (1), we observe the use of both H1 and H2 to sign five
sub-clauses belonging to a SVC. There we observe a hand-shift in the transition from
each sub-clause to the next, and crucially so, between 3-4-5. According to our hypothesis,
the presence of a [-anaphoric] feature in each predicate (in each v) predicts the
appearance of hand-shift, which is indeed the case. In ASL, however, where we
hypothesize the absence of the [±anaphoric] feature in v, hand-shift is predicted not to
take place, and hand-shift is indeed not allowed (2b. vs 2a), as expected.
Additionally, in HKSL example (3), in the transition from sub-clause2 to sub-clause-3,
where we can postulate a [+anaphoric] value to the SR system (because of the lack of
referential change), we predict an absence of hand-shift, and indeed we can observe that
the articulating hand remains at H1, as expected.
The example in (3) confirms that the H1/H2 shift is not associated with a simple clausal change, but with a referential change in the clausal subject (see the transition between subcl2 > subcl3). Furthermore, example (1), in particular sub-clause 5, also shows that the use of a particular hand is not associated with a particular referent, since the referent of the subject in sub-clause 5 is different from that in sub-clauses 1 and 3, all of them articulated with H1.
Data Collection. Data were obtained via picture prompts and interactive elicitation.
Thursday 11th July, Poster Session 2
TISLR 11, July 2013
Elena I. Liskova
Poster 2.09
The University of Texas at Austin, USA
Identifying motivations for distribution of nonmanual negation in ASL
To be presented in English
Negation in signed languages can be expressed manually (with a manual negative sign or a morpheme) or
nonmanually (with head movements and/or negative facial expressions).1,2,3 Sign languages differ with respect to whether
nonmanual negation can be the only marker of negation in a clause or whether manual negation is required. It has been
observed that a negative headshake alone is sufficient to negate a predicate in ASL. 4,5,6 However, this negation strategy (here
‘headshake-only negation’) may not be available for all predicates in ASL. Here, I investigate a small group of frequently
used predicates that undergo ‘negative incorporation’, 7 namely WANT, LIKE, and KNOW. Unlike other predicates in ASL,
these predicates are typically negated by reversing the orientation of the hand[s] in a twisting outward/downward movement.
Similar predicates have been proposed for unrelated sign languages. 3 This analysis examines whether headshake-only
negation is a grammatical alternative to the use of negative incorporation predicates WANT-neg, LIKE-neg and KNOW-neg
in ASL or if there are linguistic constraints on the distribution of these two strategies.
To elicit the data, I designed a structured data collection procedure in the form of a production task and the
elicitation of acceptability judgments. Five native deaf ASL signers, ranging in age from 23 to 44 years, participated in a one-on-one interview session with the interviewer, a deaf native signer of ASL. Each session consisted of a participant’s
responses to elicitation materials followed by a discussion of these responses and possible alternatives. Elicitation materials
consisted of video clips showing a deaf native ASL signer producing a sentence containing one of the investigated predicates
(1) or some other predicate that served as a distractor (2). There were six test sentences for each investigated predicate. The
participants were asked to change the sentence in each video clip to a negative one (the interviewer asked, for instance, one
or more of the questions in (3)). If the participant’s response did not use headshake-only negation, the interviewer would
herself offer a sentence with this negation type (4) and ask whether the participant thought that this was an acceptable ASL
sentence.
None of the five participants used headshake-only negation with WANT and KNOW and each rejected it as a
possible alternative. For LIKE, four participants rejected this strategy. One participant admitted LIKE with a headshake as an
option for three of the six sentences, but also noted that this option is not used often. The participants indicated that negation
with a headshake was confusing, and the combination of the head and hand movements felt wrong. In this paper, I consider
phonological, morphological, syntactic and semantic factors that could explain these results. In particular, I examine the
possibility of phonological constraints on making repeated contact with a turning head and on simultaneous production of a
side-to-side movement of the head and the movement of the hands towards the torso, the effect of word order, and the
meaning expressed by headshake-only negation. In addition, I discuss the role of frequency and saliency of these predicates
in human interactions.
Thursday 11th July, Poster Session 2
TISLR 11, July 2013
*Elena Tomasuolo, ^Daniele Capuano & +Maria Roccaforte
Poster 2.10
*Institute of cognitive sciences and technologies - CNR Rome; ^Pictorial Computing Laboratory - University of Rome
Sapienza; +Sapienza university, Italy
Sign language background, reading habits and eye movements in exploring a written text
The present research investigates the eye movements involved in reading a text, as performed by deaf people with different Sign Language (SL) backgrounds. A non-intrusive hardware tool, the TOBII® 1750 (TOBII, 2008), which records the eye movements of computer users, was used.
The research sample group consisted of 48 participants, 18 - 35 years old, IQ ≥ 85, affected by severe or profound deafness,
with no deafness-related deficits and no cochlear implant.
The 48 participants were chosen according to their SL background:
- 12 deaf native signers (with at least one deaf native signer parent) who learned Italian Sign Language (LIS) at home no later than 3 years of age;
- 12 deaf late signers (with either hearing or non-signing deaf parents) who did not learn LIS at home and who learned LIS between 6 and 18 years of age;
- 12 deaf non-signers who were never exposed to LIS before 19 years of age; some came into contact with LIS sporadically in adulthood, but none use LIS for their daily communication;
- 12 hearing non-signers (control group).
Each of these 4 groups was further divided into occasional readers (ORs, 6 participants for each group) and habitual readers
(HRs, 6 participants for each group). Each participant was given a knowledge questionnaire in order to obtain the “reading
habits” data.
Reading habits and time of exposure to written texts, rather than knowledge of LIS, are expected to be the discriminating variables driving the oculomotor mechanisms involved in reading. Hence, HR deaf participants are expected to be more similar to the hearing control group (reading regularity being equal) in their reading approach and timing.
Every participant was asked to read a text by Gianni Rodari (1993) and to answer questions aimed at checking their attention while reading.
The eye-tracking data analysis confirmed our hypothesis: the results show that the Total Visit Duration variable does not vary significantly among the groups (i.e. LIS background plays no role), while reading habits turned out to be the only significant variable. HRs take a shorter time to read the text (46.53 seconds) than ORs (61.61 seconds), a statistically significant difference of about 15 seconds (tab. 1).
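A minimal sketch of such a between-group comparison, with fabricated durations in place of the study's data:

# Sketch: comparing Total Visit Duration between habitual (HR) and
# occasional (OR) readers. All values below are fabricated.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(1)
hr = rng.normal(46.53, 8.0, 24)   # seconds, 24 habitual readers
or_ = rng.normal(61.61, 8.0, 24)  # seconds, 24 occasional readers

t, p = ttest_ind(hr, or_)
print(f"difference = {or_.mean() - hr.mean():.1f} s, t = {t:.2f}, p = {p:.4f}")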
In a subsequent phase, the text was divided into 4 areas of interest (TITLE, INITIAL PART, CENTRAL PART, FINAL PART) (fig. 1). When analyzing the Time to First Fixation variable, which describes the time each reader takes to enter an area of interest, the results show that each experimental group looks first at the INITIAL PART, then at the TITLE, then at the INITIAL PART (again), at the CENTRAL PART and at the FINAL PART. The only statistically significant difference was observed in the six OR late signers, who start looking at the screen indifferently from any of the four areas of interest (fig. 2).
Thursday 11th July, Poster Session 2
TISLR 11, July 2013
Elina Tapio
Poster 2.11
University of Jyväskylä, Finland
A multimodal view on signed interaction – transcribing data from Finnish Sign Language
context (English)
In my PhD study, with the title 'English language in the everyday life of Finnish Sign Language people – a multimodal
view on interaction,' I examined interactional situations in which the fingerspelling of English words occurred in a
Finnish Sign Language (FinSL) context. The aim was to explore, firstly, the multimodality of interaction when FinSL
signers fingerspell English words, and secondly, how fingerspelling is modified in such situations. In my paper, I will
discuss different ways of transcribing multimodal signed interaction, and how the process of transcribing is
interwoven into the methodological choices made in the research and the actual analysis.
The research methodology used in the PhD study, Mediated Discourse Analysis, is an ethnographically oriented multi-method approach that focuses on social action more broadly than previous text and discourse studies (Scollon 1998,
2001, Scollon & Scollon 2004). Analytical tools derived from multimodal interaction analysis, social semiotics, and
sign language linguistics were employed to suit the data and the purpose of the study (e.g. Norris 2004, Patrie &
Johnson 2011, Rainò, 2001, Van Leeuwen 2005).
In this presentation I present examples from the analysis of three video recordings: ‘The Aviator,’ ‘Guitar,’ and ‘Ultimatum.’ Two of the recordings, ‘The Aviator’ and ‘Ultimatum,’ were captured during a video conference that was
part of an English course. The third recording is of a ‘coffee table’ FinSL conversation between two participants. The
analysis of the first two situations focuses on the general multimodality of the situations and how the participants
select from different communicative modes in order to achieve their goals, as well as the modification of
fingerspelling. In ‘Ultimatum’ the focus of analysis is on ten instances of fingerspelling of the proper name ‘Jason,’ on the
modification of the fingerspelled sequences and the mouthing in relation to signing.
My objective is to demonstrate the overall multimodal and multilingual complexity of the analysed situations, discuss
the relationship between fingerspelling and other modes available to the actors in those situations and show how the
physical place and the technology in the situation rearrange the interaction. The paper also presents and discusses
transcription methods that were developed for describing and analysing the three video-recorded data.
Thursday 11th July, Poster Session 2
TISLR 11, July 2013
Elisa Gudrún Brynjúlfsdóttir & Jóhannes Gísli Jónsson
Poster 2.12
University of Iceland
Constituent questions in ÍTM
In this paper, we report the results of Brynjólfsdóttir (2012), the first study of wh-clauses in Icelandic Sign Language (ÍTM,
Íslenskt táknmál). ÍTM is the native language of nearly 300 deaf speakers in Iceland and its history goes back at least to the
middle of the 19th century. Until very recently, the grammar of ÍTM had not been subject to any serious study; hence, many
properties of wh-clauses in ÍTM remain unexplored and require further study.
The primary non-manual marking for wh-clauses in ÍTM is brow lowering. The non-manual marking accompanies the wh-phrase alone or spreads to the whole clause, and this seems to be independent of the position of the wh-phrase. Most commonly, a wh-phrase remains in-situ in wh-questions in ÍTM, the clearest examples of which involve wh-objects in SOV-clauses (1). (Note that ÍTM is an SVO language but some speakers accept SOV orders quite freely.) ÍTM also has wh-initial clauses (2a), and wh-doubling (2b). As in many other sign languages of the world, wh-doubling is used for emphatic focus in ÍTM and is restricted to wh-heads (see Petronio 1993 for ASL and Nunes and Quadros 2005 for Libras).
Wh-initial clauses like (2a) involve wh-movement to the left periphery of the clause, but the case for rightward wh-movement, which is attested in many sign languages (see e.g. Cecchetto, Geraci, and Zucchi 2009 on LIS), is rather unclear for ÍTM. This is so because all the examples of wh-final clauses in ÍTM that have been found could be analyzed as wh-in-situ; see e.g. (3). For instance, there are no examples of clause-final wh-subjects in the naturalistic data we have examined
although some speakers accept such subjects in judgment tasks.
From a cross-linguistic perspective, the most intriguing aspect of wh-constructions in ÍTM is verb second (V2) in matrix
constituent questions, as exemplified in (4a). As shown in (4b), V2 is not found in embedded questions and this is consistent
with what we find in the Germanic V2 languages. To the best of our knowledge, V2 has not been reported in any sign
language investigated so far.
In all likelihood, V2 in ÍTM wh-questions is a borrowed word order from Icelandic as Icelandic is one of the Germanic V2
languages. V2 appears to be a fairly recent development in ÍTM as it is widely used by young speakers of ÍTM but noticeably
absent in the speech of older speakers (approximately those over 40 years old). For speakers of ÍTM who have V2 in their
grammar it seems to be obligatory when leftward wh-movement applies, just as in Icelandic. Note that there is no reason to
assume that leftward wh-movement in ÍTM is a recently borrowed construction from Icelandic since it is found in the
speech of different age groups and is in fact quite common with certain wh-items (e.g. AF-HVERJU ‘why’).
Thursday 11th July, Poster Session 2
TISLR 11, July 2013
*Ellen Ormel, ^Anne de Meijer & ^Onno Crasborn
Poster 2.13
*Radboud University; ^CLS at Radboud University, Netherlands
Coarticulation of thumb position in Sign Language of the Netherlands (English)
This poster presents the investigation of the articulation of the thumb in flat handshapes (B
handshapes) in Sign Language of the Netherlands (NGT). In several phonological models of
handshape, the thumb is not a selected finger in these handshapes (Crasborn 2001; van der
Kooij 2002, see Figure 1). We hypothesized that the thumb state is variable and will undergo
coarticulatory influences of neighbouring signs, just as the pinkie finger has been shown to do
in American Sign Language (Cheek 2001). More recent studies on location have shown that
coarticulation is pervasive both in elicited data (Tyrone & Mauk 2010) and in corpus data
(Russell et al. 2012) of ASL. The present study provides evidence for thumb coarticulation in
NGT, based on corpus data of 73 signers.
Our hypothesis was tested by investigating thumb articulation in signs with B handshapes that
occur frequently in the Corpus NGT. Manual transcriptions were made of the thumb state in
two dimensions (adduction-abduction and flexion-extension, see Figure 2) and of the degree
of spreading of the fingers (which was assumed not to be a specified phonological feature in
these specific flat hands). A total of 2105 tokens of 18 sign types were transcribed, together
with the signs preceding and following these target signs.
The data showed that the preceding and following signs predicted the thumb state in target
signs. This was true for the degree of flexion of the thumb, which depended on the degree of flexion in the preceding or following signs, as well as for the degree of abduction of the thumb, which similarly depended on the degree of abduction in the preceding or following signs. Moreover, the degree of spreading of the fingers in the target signs correlated with the
position of the thumb in those signs. The data further suggest that there is interplay between
the degree of spreading and coarticulation of the thumb. More specifically, the degree of
coarticulation of the abduction of the thumb was mediated by the degree of spreading of the
fingers. When we zoom in on individual items, the different signs show a rather dissimilar
pattern in spreading and thumb state. Physiological constraints, articulatory discomfort and
certain combinations of phonological features can conceivably explain this variation. Signs
with the relative orientation specification ‘ulnar side of the hand’, for example, displayed a
more consistent pattern of extension of the thumb.
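The regression logic behind a claim of this kind can be sketched as follows (a hypothetical ordinal coding of thumb state and toy data, not the study's transcriptions or model):

# Sketch: does thumb flexion in a target sign track the flexion of its
# neighbours? The coding scheme and data are hypothetical illustrations.
import pandas as pd
import statsmodels.formula.api as smf

# Thumb state coded ordinally: 0 = extended ... 2 = fully flexed.
df = pd.DataFrame({
    "target_flex":    [0, 1, 2, 2, 1, 0, 2, 1],
    "preceding_flex": [0, 1, 2, 1, 1, 0, 2, 2],
    "following_flex": [0, 0, 2, 2, 1, 1, 1, 0],
})

model = smf.ols("target_flex ~ preceding_flex + following_flex", data=df).fit()
print(model.params)  # positive slopes would indicate coarticulation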
We conclude that the thumb is subject to coarticulatory influences in flat handshapes in NGT.
We interpret this as being in line with the claim in distinctive feature models that not all
fingers are relevant in all signs. This needs to be verified in further studies. Based on the
present findings and the earlier work on ASL handshapes, it is foreseeable that not only the
thumb in the B handshape is predictable from various phonetic-phonological factors, but also
other fingers in various other hand configurations. Moreover, we conclude that handshape
coarticulation can be fruitfully studied by using spontaneous interactive data from a sign
language corpus.
Thursday 11th July, Poster Session 2
TISLR 11, July 2013
Emily Carrigan
Poster 2.14
University of Connecticut, USA
Adult Homesigners in Nicaragua Independently Innovate Features of their Homesign Systems
(ASL/English)
Studying the communication systems that arise in spontaneously occurring cases of degraded linguistic input can help
clarify human predispositions for language. Some deaf individuals born into hearing families, who do not receive
conventional linguistic input, develop gestures, called “homesign,” to communicate (e.g. 1). We examined homesign
systems of four deaf Nicaraguan adults (ages 15-27), and evaluated whether homesigners’ hearing mothers are potential
sources for these systems. If homesign is a direct reflection of mothers’ contributions, we would expect mothers to
comprehend homesign utterances.
Each Mother (ages 45-60) watched videotaped descriptions of 83 events (e.g. “A man taps a woman”) produced in
homesign by her deaf child, and selected, from an array of four, a picture matching the description. We analyzed: a) the
overall proportion correct, to determine whether each Mother shares homesign with her deaf child, and b) the picture foils
chosen when Mothers erred to learn which aspects of descriptions they did not understand.
To ensure that Mothers could do the task, each Mother also comprehended spoken Spanish descriptions produced by one of her
hearing children. Furthermore, we asked 4 native users of American Sign Language (ASL-Signers; ages 22-66) who did
not know the homesigner or his/her system to comprehend the same 83 homesign descriptions. If mothers innovate
homesign, we would expect them to perform better than ASL-Signers.
Although Mothers comprehended homesign descriptions significantly better than chance (25%), the maximum correct was
76%, and three of four Mothers were not above 54%. Furthermore, Mothers comprehended spoken Spanish descriptions
better than homesign descriptions, confirming that Mothers could do the task, and suggesting each Mother shares spoken
Spanish with her hearing child more than she shares homesign with her deaf child (Fig.1). We also found, unexpectedly,
that ASL-Signers comprehended homesign descriptions better than Mothers (Fig.2). This result confirms that homesign
productions contain comprehensible information, and shows that Mothers are not fully sensitive to this information.
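A hedged sketch of the chance-level comparison implied by the four-picture design (requires scipy >= 1.7; the counts below are fabricated, not the study's results):

# Sketch: is a Mother's comprehension above the 25% chance level of a
# four-picture array? Counts are invented placeholders.
from scipy.stats import binomtest

n_items = 83
n_correct = 45  # hypothetical number of correct picture choices
result = binomtest(n_correct, n_items, p=0.25, alternative="greater")
print(f"{n_correct}/{n_items} correct, p = {result.pvalue:.2e}")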
On “reversible” items (like the example above), both ASL-Signers and Mothers (Receivers) often chose a foil depicting
participants in reversed semantic roles. Adult homesigners systematically express grammatical subject by consistently
placing the noun phrase before the verb (2); Receivers’ inability to effectively use this information indicates that neither
group understands the homesigner’s system for relating gestures. This is expected for ASL-Signers, but is not expected if
Mothers develop homesign structure.
Mothers also selected incorrect foils that indicate they did not understand homesigners’ gestures for participants in the events
(e.g. man/woman). Observational evidence suggests that Mothers do understand these individual gestures in other contexts;
Mothers’ errors in this task may reflect difficulties processing sequences of gestures in homesign utterances, which further
supports the notion that some aspects of homesign development are outside Mothers’ influence.
Mothers’ non-comprehension of structural information and difficulty processing homesign suggest that they are not wholly
responsible for the development of homesign. Instead, we propose that homesigners themselves contribute uniquely to the
development of their own systems. Further research will clarify the specific capacities homesigners possess that support the
innovation of their own systems.
Thursday 11th July, Poster Session 2
TISLR 11, July 2013
Emily Kaufmann & Thomas Kaul
Poster 2.15
University of Cologne, Germany
Bimodal language switching and single-task/dual-task switching: a bimodal advantage?
(ENGLISH/ASL as necessary)
Abstract: There is increasing interest in multimodality, including bimodal (signed-spoken) bilingualism. Language-switching
experiments have been used to study bilinguals, and these studies, testing sequential unimodal (spoken-spoken) language
switching, have found longer reaction times (RTs) and higher error rates in switch trials than repeat trials. These ‘switch
costs’ have been replicated often (Kiesel et al. 2010). Bimodal bilinguals often produce ‘code-blends’, in which elements of
the signed and spoken languages appear simultaneously, which is possible because the languages do not share a primary
articulator (e.g. Emmorey et al. 2008).
Experiment 1 examined modality effects in language switching. Participants (N=16) were hearing native speakers of German
who had completed 7-9 semesters of German Sign Language (DGS) classes and had learned English in school. There were
two conditions: unimodal (German-English switching) and bimodal (German-DGS switching). For vocal RTs (comparing
only the German vocal responses across conditions due to differences in timing across modalities), we conducted a
Modality×Shift ANOVA. There was a significant main effect of Modality, p<.001, and Shift, p<.01; the interaction between
Modality and Shift was also significant, p<.05. The results indicate that vocal responses are generally faster in bimodal
switching blocks than in unimodal blocks, indicating a bimodal switching advantage.
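As an illustration only, a Modality×Shift repeated-measures ANOVA of this kind might be run as follows (simulated data with invented effect sizes; not the study's analysis script):

# Sketch: Modality (unimodal/bimodal) x Shift (repeat/switch)
# repeated-measures ANOVA on vocal RTs. All data are simulated.
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

rng = np.random.default_rng(2)
rows = []
for subj in range(16):
    for modality in ("unimodal", "bimodal"):
        for shift in ("repeat", "switch"):
            base = 900 if modality == "bimodal" else 1000  # invented means
            cost = 80 if shift == "switch" else 0          # invented switch cost
            rows.append({"subject": subj, "modality": modality, "shift": shift,
                         "rt": base + cost + rng.normal(0, 40)})
df = pd.DataFrame(rows)

res = AnovaRM(df, depvar="rt", subject="subject", within=["modality", "shift"]).fit()
print(res.anova_table)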
Experiment 2 examined simultaneous bimodal language production with a dual-task design (e.g. Navon & Miller 1987),
comparing dual-task trials with single-task trials in order to determine whether there is a dual-task advantage for bimodal
production. Studies comparing dual-task and single-task responses across two non-language modalities often find dual-tasks
costs (e.g. Huestegge & Koch 2009). However, bimodal bilinguals often produce words and signs simultaneously rather than
switching, so we may expect a different pattern for these tasks. Participants (N=12) were hearing native speakers of German
who completed 7-9 semesters of DGS. We compared RTs across Task-Type (single-task or dual-task) with a Task-Type×Shift ANOVA. For German responses, we found a significant main effect of Task-Type, p<.001, and Shift, p<.01, but
the interaction between Task-Type and Shift did not reach significance, p>.05. For DGS manual responses, there was no
significant main effect of Task-Type, p>.05, but there was a main effect of Shift, p<.01. The interaction between Task-Type
and Shift was significant, p<.05, indicating a dual-task benefit for manual responses. For error rates, we compared the three
Trial-types (German single-task, DGS single-task, and German/DGS dual-task) with a Trial-Type×Shift ANOVA. There was
a significant main effect of Trial-type, p<.01, and Shift, p < .001, and the interaction between Trial-type and Shift was also
significant, p<.05, indicating that costs for a dual-task response are lower than those for a DGS response, which are again
lower than those for a German response.
Thursday 11th July, Poster Session 2
TISLR 11, July 2013
Erin Wilkinson & Jesse Stewart
Poster 2.16
University of Manitoba, Department of Linguistics, Canada
Pear Stories Narratives in American Sign Language: A distributional analysis of
disfluency types
Languages: ASL, English
Fluent language processing involves an interaction between linguistic and
cognitive organization that is chunked in a planning unit. Disfluency is defined as a disruption in language production. In speech, if language processing lags behind cognitive processing at the completion of a planning unit, then speakers will show disfluency in various forms, e.g., long pauses, verbal utterances like ‘hmmm’, and/or repetitions of utterances to allow more time to plan the following unit. Planning units themselves may also be affected internally by prolongations and restarts. Language disruptions provide evidence that people process language in units. However, nearly all studies on disfluency concern spoken languages. Only a handful of studies on signed languages, mostly on ‘slips of the hand’, show that disfluency occurs in signed languages
as well (Klima and Bellugi 1979, Newkirk et al. 1980, Dively 1998, Leuninger et al.
2002).
In this study we revisit Emmorey et al.’s (2000) finding that English speakers
have a significantly higher amount of disfluencies per minute compared to ASL signers.
We suggest that these different rates are caused by other disfluency types that are
modality/ language specific. By focusing on several types of signed language disfluencies
and subcategories within each group, we attempt to deliver a more in-depth analysis of
the cognitive and communicative processes involved in disfluent language. We gathered our data from five deaf signers from the Winnipeg area, who each produced two narrations of the Pear Film (Chafe 1980) spaced approximately one hour apart. We then isolated pauses, fillers, repetitions, sign-lengthening and lexical selection errors. By
isolating and subcategorizing disfluencies, we aim to (1) document the modality differences and processing similarities between spoken and signed language, (2) explore how disfluencies aid in coordinating communication by analyzing phrase-level environments, and (3) look at how facial expressions and body movement are used in disfluent events. The narrative format used in documenting disfluencies allows not only for comparative analyses of each individual participant, but also for cross-participant analyses. This permits us to (4) look at each signer’s idiolect and compare both common and independent strategies for dealing with cognitive processing and the coordination of communication. Finally, we (5) look at tendencies in sign-specific disfluencies, i.e., the sub-categorizations of sign-lengthening not found in spoken languages.
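A minimal sketch of the per-category rate computation such a coding scheme supports (the annotation labels and duration below are invented examples):

# Sketch: per-category disfluency rates per minute for one narration.
from collections import Counter

annotations = ["pause", "filler", "repetition", "sign-lengthening",
               "pause", "lexical-selection-error", "pause"]
duration_min = 3.5  # hypothetical narration length in minutes

rates = {cat: n / duration_min for cat, n in Counter(annotations).items()}
for cat, rate in sorted(rates.items()):
    print(f"{cat}: {rate:.2f} per minute")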
Our preliminary results suggest that disfluencies in signed languages are just as complex as those found in spoken languages. This implies that both speakers and signers implement similar strategies for dealing with cognitive planning and coordinative processes. However, due to the modality differences between spoken and signed languages, surface-level disfluencies can appear quite remote from those in spoken languages, making this under-investigated area of discourse of great importance. Finally, we also suggest that facial
expressions and body movements may be used to both predict and make up part of a
disfluent event.
Thursday 11th July, Poster Session 2
TISLR 11, July 2013
*Eva Gutierrez, *Heather Payne, ^Anna Safar & *Mairead MacSweeney
Poster 2.17
*UCL, UK; ^Radboud University Nijmegen, Netherlands
Using functional transcranial Doppler sonography
The overall aim of the current studies was to examine the hemispheric lateralisation of British Sign
Language (BSL) production. Numerous previous neuroimaging studies have used functional Magnetic
Resonance Imaging or Event-Related Potentials to examine hemispheric lateralisation of sign processing and
have predominantly reported left hemisphere lateralisation – just as for spoken languages. However, little is
currently known about the development of language processing in deaf children using a signed language. Here
we report data from two initial studies with deaf and hearing (non-signing) adults in which we pilot the use of a
relatively new neuroimaging technique that has the potential for portable, large scale data collection with young
deaf children.
Functional transcranial Doppler sonography (fTCD) is a simple and non-invasive technique which can be
used to measure event-related changes in blood flow velocity in the two main arteries which supply the left and
right cerebral hemispheres (Deppe et al., 2004). fTCD is a safe and reliable way to infer the extent to which
cognitive functions are lateralised in the brain. To date this method has not been applied to sign language
processing.
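The core fTCD quantity, a lateralisation index comparing left and right flow velocities, can be sketched as follows (simulated signals and a simplified index; the exact epoching and formula in standard fTCD toolchains such as Deppe et al.'s may differ):

# Sketch: a lateralisation index (LI) from left/right middle cerebral
# artery blood-flow velocities. Signals here are simulated.
import numpy as np

rng = np.random.default_rng(3)
t = np.arange(0, 20, 0.1)                                 # one 20 s epoch, 10 Hz
left = 60 + 3.0 * (t > 5) + rng.normal(0, 0.5, t.size)    # cm/s, task begins at 5 s
right = 60 + 1.0 * (t > 5) + rng.normal(0, 0.5, t.size)   # cm/s

# Percent change relative to the mean bilateral baseline (first 5 s).
baseline = (left[t < 5].mean() + right[t < 5].mean()) / 2.0
dl = 100 * (left - baseline) / baseline
dr = 100 * (right - baseline) / baseline

li = np.mean((dl - dr)[t > 5])  # positive = left-lateralised
print(f"LI = {li:.2f} (positive values indicate left lateralisation)")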
Due to concerns about artefacts induced by jaw movements during overt speech production, most
previous studies have required covert production, where words are produced subvocally and (some of them) are reported later. Since covert sign production is likely to be difficult for young deaf children, ‘overt’ production would be required in our future studies. Therefore in Experiment 1 we directly contrasted covert and overt speech
production in hearing non-signers during a phonological fluency task (e.g., produce words beginning with ‘b’).
The ‘phonological fluency’ paradigm has been widely used in fTCD studies of speech production. However,
based on studies with deaf adults we anticipate that a sign phonological fluency task would be especially difficult
for deaf children. Therefore in Experiment 1 we also contrasted phonological and semantic fluency: a paradigm
not previously used with fTCD.
In Experiment 1 we tested twelve hearing non-signers. There was no significant difference in hemispheric
lateralisation during overt and covert conditions. In line with previous studies, 92% of our sample were left
lateralised for speech production. Furthermore, the strength of lateralisation correlated with the number of words
produced in overt conditions. With regard to the fluency task we found that when the number of words produced
was controlled for, there was a main effect of task - indicating that phonological fluency was more strongly left
lateralised than semantic fluency.
In Experiment 2 we adapted the paradigms to test phonological and semantic fluency during both overt and
covert BSL production in deaf native signers. The semantic categories were the same as in Experiment 1.
Phonological fluency was tested using a handshape cue. The results from Experiment 2 will be discussed. We
are especially interested in the relationship between phonology and semantics in a signed language, and how this
may differ from spoken languages.
Thursday 11th July, Poster Session 2
TISLR 11, July 2013
*Evie Malaia & ^Ronnie Wilbur
Poster 2.18
*University of Texas at Arlington; ^Purdue University, USA
Functional connectivity for visual language processing
Presentation will be conducted in English
As researchers try to determine the impact of using primarily a sign language instead of a spoken language, various
parts of the brain have been explored for changes resulting from both hearing loss and visual language usage. A key concept
for understanding these studies is ‘functional connectivity’, essentially the correlations between spatially remote
neurophysiological events. Prior studies investigating functional connectivity in Deaf signers suggest that life-long
experience with sign language and/or auditory deprivation may alter connectivity of brain regions; specifically, functional
and imaging evidence points to changes in superior temporal cortex function for processing of non-auditory inputs (Li et al.,
2012; Malaia et al., 2011). The present study investigates functional connectivity among regions associated with temporal
cortex in Deaf signers and hearing non-signers for network-based adaptations to visual language processing.
Participants (13 Deaf native ASL signers, 12 hearing sign-naïve participants) were presented with video stimuli in a
block paradigm. During half of the blocks the participants carried out an active Task associated with watching short signed
video clips, while the other half required only passive viewing (no-Task; task-negative network, TNN).
fMRI demonstrated TNN activations in posterior cingulate (PCC), anterior cingulate (ACC), bilateral dorsomedial
prefrontal cortex (dMPFC), and posterior Inferior Parietal lobe (pIPL) in both groups (Table 1). However, bilateral posterior
superior temporal gyrus (STG) was only activated in hearing subjects. The absence of STG activation in the Deaf signers
likely reflects re-wiring of these temporal regions, possibly for participation in a sign language processing network. This
possibility is supported by the presence of STG activation in Deaf signers in the Task-related condition.
We continue to analyze functional connectivity between regions activated differently in the default network of Deaf
and hearing participants using partial correlation based on ICA after global signal regression. Seed ROIs (spheres with 5-mm
radius) are centered in peak task-negative activation coordinates for the Deaf (PCC [4 -44 16], ACC [-8 40 8], left dMPFC [26 22 46], right dMPFC [16 32 48], left pIPL [-28 -92 26], right pIPL [24 -78 48]) and hearing participants (PCC [12 -54 20],
ACC [-6 30 0], left dMPFC [-24 32 54], right dMPFC [26 24 44], left pIPL [-44 -78 34], right pIPL [50 -74 32], left STG [52 -8 -26], right STG [56 -6 -8]). This enables us to compare TNNs using node-strength metrics.
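A hedged sketch of the two ingredients named here, a 5-mm sphere ROI and partial correlation via the precision matrix (the grid, voxel size, coordinates and time series are placeholders, not the study's pipeline):

# Sketch: 5-mm sphere ROI masks and seed-to-seed partial correlations.
import numpy as np

rng = np.random.default_rng(4)

def sphere_mask(shape, mm_per_voxel, center_mm, radius_mm=5.0):
    """Boolean mask of voxels within radius_mm of center_mm (isotropic grid)."""
    grid = np.indices(shape).reshape(3, -1).T * mm_per_voxel
    keep = np.linalg.norm(grid - np.asarray(center_mm), axis=1) <= radius_mm
    return keep.reshape(shape)

def partial_correlations(ts):
    """ts: (time, n_seeds). Partial correlations from the precision matrix."""
    prec = np.linalg.inv(np.cov(ts, rowvar=False))
    d = np.sqrt(np.diag(prec))
    return -prec / np.outer(d, d) + 2 * np.eye(len(d))  # diagonal set to 1

mask = sphere_mask((20, 20, 20), mm_per_voxel=2.0, center_mm=(20, 20, 20))
print(int(mask.sum()), "voxels in the sphere ROI")

# Fabricated seed time series: 200 volumes x 4 regions.
seeds = rng.normal(size=(200, 4))
print(np.round(partial_correlations(seeds), 2))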
The significance of these differential activations in task-negative networks in Deaf signers and hearing non-signers
is that brain regions are not selectively specified for the type of input. Instead, experience-based neuroplasticity results in
self-organizing processing networks for dealing with systematic linguistic input.
Thursday 11th July, Poster Session 2
TISLR 11, July 2013
Eyasu Tamene
Poster 2.19
Addis Ababa University, Ethiopia
Language Use in Ethiopian Sign Language (EthSL)
Ethiopian Sign Language (EthSL) is one of the under-researched languages of Ethiopia, even though its Deaf community numbers over a million users. Not much is known about the language, its use or its current status. However, questions of equality, participation and rights have been emerging among members of the Deaf community. Until quite recently, many people in Ethiopia presumed that EthSL resembles the country’s dominant spoken language, Amharic. It is also poorly understood that the use of EthSL in all domains of public life promotes success, especially in Deaf education. At the same time, a large number of Deaf students are attending school even though the type of sign language used there is in question, i.e. EthSL mixed with Amharic. It has also been difficult to reduce the challenges Deaf students face in Ethiopian education, because basic knowledge and information about EthSL are lacking. As part of ongoing PhD research, this paper outlines the wh-questions that should be answered about language use in EthSL: How is EthSL used? Where is it used? Who uses it? When is it used? Which variety is used in which circumstances? In the meantime, various lexical items will be collected to complement the research, so as to provide a justifiable answer as to which variety is used in a given domain. To achieve these objectives, the researcher will visit selected schools for the Deaf in Ethiopia. Interviews will be held with Deaf students, teachers of the Deaf and their parents; questionnaires will be distributed, and observations will be carried out in the selected Deaf schools. Finally, the study aims to provide the information on language use in EthSL needed to address the academic and social questions raised above.
Thursday 11th July, Poster Session 2
TISLR 11, July 2013
Felix Yim-Binh Sze, Connie Chun-Yi Lo, Lisa Sui-Wah Lo
& Kenny Kwan-Ngai Chu
Poster 2.20
The Chinese University of Hong Kong
Lexical Variations and Diachronic Change in Hong Kong Sign Language
This paper reports the findings of a pilot study that investigates lexical variations in Hong Kong Sign Language (HKSL),
focusing on the phonological processes involved in the variants and their role in driving diachronic change.
Lexical variants in HKSL fall into two types - separate variants and phonologically-related variants. The former are distinct
signs that stand for the same concepts, whereas the latter bear some resemblance to each other in handshape, location,
movement and/or palm orientation (cf. Lucas, Bayley, and Valli 2001). Phonologically-related variants typically arise from
processes such as assimilation, deletion, etc., and they can reflect the overall directions for diachronic change. In American
Sign Language, it is observed that, over time, signs become more symmetrical (i.e. two hands sharing the same handshape
and movement), centralized (i.e. signs originally in a peripheral location moving towards the centre of the signing space) and
fluid (i.e. complex movements/compound signs being simplified), motivated by ease of articulation and
perception (Frishberg 1975, Woodward & Erting 1975). One purpose of this study is to find out whether these tendencies are
also attested in the lexical variants in HKSL.
As a pilot study of our ongoing effort of documenting lexical variations in HKSL, our current data came from two groups of
signers: (i) 4 older signers (aged 56-60) who graduated from the two earliest Deaf schools in HK which were set up in the
late 1940s; (ii) 6 younger native signers (aged 26-34) whose signing parents graduated from these two schools. Each of
the ten informants was asked to produce 145 signs which, in our informal observations, exhibit lexical variations in the Deaf
community. Our data show that lexical variations in HKSL stem from both separate and phonologically-related variants. Overall, signs in HKSL do undergo different phonological processes to become more symmetrical, fluid and centralized, as reported for other sign languages. This suggests that phonological processes play a significant role in diachronic change in the lexical items of HKSL. However, our data show that signs do not all change at the same speed. Phonological processes mainly occur in signs whose original forms are not centralized, fluid or symmetrical. These signs change rapidly, resulting in a wide range of variable forms within and across generations of signers. In contrast, signs that are originally simple in structure remain relatively stable over time. In addition, some phonological processes are more frequently observed than others. Among all the types of phonological processes found in our data, deletion of signs within a compound is far more frequent than handshape assimilation, centralization of POA and blending of signs inside compounds. This finding clearly suggests that the elimination of prosodically heavy forms is the strongest driving force underlying lexical change.
Thursday 11th July, Poster Session 2
TISLR 11, July 2013
*Felix Yim-Binh Sze, *James Woodward, ^Adhika Irlang Suwiryo,
^Laura Lesmana Wijaya, ^Iwan Satryawan & ^Silva Tenrisara Pertiwi Isma
Poster 2.21
*The Chinese University, Hong Kong; ^The Chinese University of Hong Kong, Indonesia
Sign language use and variations in Jakarta Sign Language
This paper discusses the situation of sign language use among deaf signers in Jakarta, the capital of Indonesia, focusing specifically on the variation observed in lexical items. The lexical data, which consist of around 230 lexical signs produced by 17 signers of different ages and schooling backgrounds, were originally collected for compiling a sample dictionary of Jakarta Sign Language under the research project Asia-Pacific Sign Linguistics Research and Training Programme. It is observed that variations may stem from phonological processes or pure lexical differences. In addition, the choice of lexical signs varies across signers, depending most crucially on their age and degree of exposure to oral education in deaf schools (i.e. deaf schools in which students are required to learn through lipreading and residual hearing instead of sign language). Signers over the age of 60 received relatively little education, and they typically use fewer mouthings (i.e. lip movements imitating spoken words in Bahasa Indonesia), more facial expressions and distinct lexical forms for hyponyms. For signers under 60, there is an increasing use of hypernymic forms whose precise lexical meanings are distinguished only through mouthings, simultaneous use of mouthings/spoken words with signs, and initialized signs and fingerspelling for filling lexical gaps. As in Thailand, the Philippines and Malaysia, our data also provide evidence that American Sign Language, which dominates deaf-related multi-media on the internet, is beginning to gain a foothold in the deaf community in Jakarta, particularly among younger signers. Our findings suggest that Jakarta Sign Language is being endangered by the prevalence of oralism in deaf education and the rapid expansion of American Sign Language in the Asia-Pacific region. Hence, efforts to preserve and promote this indigenous sign language are urgently called for.
Thursday 11th July, Poster Session 2
TISLR 11, July 2013
Nick Palfreyman
Poster 2.22
iSLanDS Institute, University of Central Lancashire, UK
A thousand kilometres away: sociolinguistic variation
in the urban sign language varieties of Indonesia (BSL)
The Indonesian cities of Solo and Makassar are situated 1,000 kilometres apart on the islands of Java and Sulawesi
respectively. This paper compares the sign language varieties used in these cities for the domains of completion, number
and negative constructions, using qualitative and multivariate analysis to assess the degree and nature of lexical and
morphosyntactic variation. The research uses a corpus of conversational data, from which 90 minutes of data have been
transcribed in each location; the sample comprises 40 signers (balanced for region, sex and age).
Instantiations of the completive aspect are highly frequent in the data corpus, perhaps because completion is an important
cultural concept in Indonesia (Purwo 2011). There are 299 tokens of the completive aspect, and four lexical variants have
been identified – e.g. (1), (2). All variants occur in both varieties, and behave in similar ways, for example all variants can be
encliticized (1), (3).
Multivariate analysis suggests that location is not significant for the completive variable, but that other predictor variables
such as the age and sex of the signer, the syntactic slot, and the previous realisation of the variable are significant. There is
also a high degree of intra-individual variation, with some signers using all four variants in the course of one conversation;
in some cases there is a convincing explanation for the choice of variant (such as accommodation, or phonological influence
from the previous segment), while in other cases the reason is not clear.
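As an illustration of this kind of multivariate analysis, a Varbrul-style logistic regression can relate variant choice to social and linguistic predictors (the predictors and simulated data below are ours, not the corpus):

# Sketch: logistic regression predicting a binary completive variant
# from social and linguistic predictors. Data are simulated.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(5)
n = 299  # one row per completive token, matching the reported token count
df = pd.DataFrame({
    "age_group": rng.choice(["young", "old"], n),
    "sex":       rng.choice(["f", "m"], n),
    "slot":      rng.choice(["clause-final", "enclitic", "pre-verbal"], n),
})
# Simulate a variant choice that depends on age (for illustration only).
p = np.where(df["age_group"] == "young", 0.7, 0.4)
df["variant_1"] = rng.binomial(1, p)

fit = smf.logit("variant_1 ~ C(age_group) + C(sex) + C(slot)", data=df).fit(disp=0)
print(fit.params)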
Grammatical similarities between the SL varieties of Solo and Makassar can be explained in terms of contact between deaf
people in different regions of Indonesia – the first deaf schools were established in Java in the 1930s, and since then the
amount of contact between SL users has grown through deaf organisations, social/sports events and, recently, communications technology. Negative constructions have been found to be grammatically similar across India (Zeshan 2006), and the situation in Indonesia appears to be comparable.
How do these findings relate to previous research on Indonesian SL varieties? A study by Isma (2012) compares the SL
varieties of Jakarta and Yogyakarta using the lexicostatistical methods described by Woodward (2011), and argues that
these varieties are different languages in the same ‘language family’. If applied to the Solo and Makassar varieties,
lexicostatistical methods would obtain similar findings. However, these methods have not been used for the current study
because they fail to capture sociolinguistic phenomena such as inter- and intra-individual variation. Given the degree of
lexical and grammatical variation, the elicitation of a single token for each item on a wordlist does not give the full picture.
While documentation is hugely valuable for sign communities, I suggest that the delimitation of SLs is essentially a sociopolitical matter. Rather than seeking to name new SLs on the basis of findings from any research method, the views of SL
communities should be prioritised, and researchers have a key role to play in raising the metalinguistic awareness of sign
communities (de Vos & Palfreyman 2012).
Thursday 11th July, Poster Session 2
TISLR 11, July 2013
*Gemma Barbera & ^Marta Mosella
Poster 2.23
*Universitat Pompeu Fabra; ^Universitat de Barcelona, Spain
(NON-)SPECIFICITY MARKING IN RESTRICTIVE RELATIVE CLAUSES
IN CATALAN SIGN LANGUAGE (LSC)
INTRODUCTION. Although there is some previous work on relative clauses in signed languages (Branchini 2006; Branchini et al. 2007; Pfau & Steinbach 2005), the encoding of the different types of specificity that relative clauses express has not yet been studied. Moreover, (non-)specificity marking, which has been studied in a broad variety of spoken languages (von Heusinger 2011; Matthewson 1998), remains an understudied field in signed languages. In this paper, we offer a descriptive analysis of the expression of specificity in restrictive relative clauses in Catalan Sign Language (LSC).
DATA. From a syntactic point of view, relative clauses in LSC are circumnominal (1a),
instead of adnominal (1b), which appears to be the strategy used in the most thoroughly
studied languages. This means that the pivot noun phrase appears inside the relative
clause. As shown in (2), LSC relative clauses cannot occur internal to the matrix clause;
instead they must be fronted or postposed. Also, note that the non-manual markers have
scope over the whole relative clause and do not spread beyond this constituent.
From a semantic point of view, LSC shows an overt contrastive marking of the
semantic-pragmatic notions of (non-)specificity by localising the noun phrase on different
regions of the signing space. The frontal plane, which extends parallel to the signer’s body,
is grammatically relevant for the encoding of specificity and the two areas of the frontal
plane, namely upper and lower, are associated with two different interpretations. When
the noun phrase is associated with the lower frontal plane, a specific interpretation arises
(Fig. 1). That is, it denotes a referent known by at least the sender. In contrast, when the
localisation is associated with the upper frontal plane a non-specific reading is available,
meaning that it denotes a referent not known by the sender or the addressee (Fig. 2).
PROPOSAL. After an analysis of a small scale LSC corpus, which includes naturalistic and
elicited data, we propose that relative clauses in LSC encode specificity marking. We show
that three main strategies are used to encode (non-)specificity, as summarised in Fig. 3:
(i) The localisation of the pivot noun phrase on the frontal plane differs according to the knowledge the signer has of the discourse referent. While in specific relative clauses the pivot is strongly associated with the lower frontal plane (that is, with a tensed realisation of the movement directed towards a concrete locus in signing space), non-specific pivots are localised on the upper frontal plane and are only weakly localised (that is, with a relaxed, non-tensed realisation towards a larger, spread-out locus).
(ii) Different non-manuals codify the degree of knowledge of the discourse referent.
Squinted eyes spread over relative clauses when the specific pivot denotes shared
information. When the pivot denotes a specific but non-shared discourse referent, the
signer uses eyes wide open. Non-specificity occurs with sucked cheeks and a shrug. The
weak localisation on the upper frontal plane co-occurs with a non-fixed eye gaze towards
the locus.
(iii) LSC relative clauses may require either an overt or a covert nominalizer, instantiated by a sign glossed as MATEIX (‘same one’). However, this sign only appears in contexts of specificity, when the information is shared among the participants. Two other signs, glossed as CONCRET (‘concrete’) and REQUERIMENT (‘requirement’), which denote an established subset, appear in both specific and non-specific contexts when the information is non-shared. The use of these lexical signs appears to be optional.
CONCLUSIONS. In this paper we shed new light on the syntax-semantics interface of sign languages, and more concretely on relative clauses in an understudied language, LSC. We investigate three main strategies that distinguish different degrees of (non-)specificity and (non-)shared information marking: spatial localisation, non-manual markers and the use of lexical signs.
Thursday 11th July, Poster Session 2
TISLR 11, July 2013
*Giulia Petitta & ^Alessio Di Renzo
Poster 2.24
*Università La Sapienza di Roma / CNR-ISTC; ^CNR-ISTC, Italy
Fingerspelling and metalinguistic reference: new data from Italian Sign Language (LIS)
discourse
Language: ENGLISH
It is well documented that fingerspelling is used in sign languages (SL) all over the world to refer to proper nouns, technical terms or spoken words that have no corresponding lexical item in SL. In almost all SLs it is also a resource for word formation: many signs are based on fingerspelling handshapes (Brennan, 2001). Furthermore, other uses of fingerspelled words have been discovered by looking at their role in signed discourse, e.g. emphasis (Schembri & Johnston, 2007).
In Italian Sign Language (LIS), the use of fingerspelling is very limited. For example, relatively few initialised signs are found in the LIS lexicon, and most fingerspelled units (almost exclusively proper nouns) occur in television news programs.
However, looking at signed discourse, fingerspelling appears to be used with several functions, representing a valuable resource for expressing particular communicative intents, such as metalinguistic reference. The representation of spoken-language words (Wilcox, 1992) can indeed serve a metalinguistic intent.
In this presentation we will report analyses conducted on LIS discourses, referring to different registers and contexts,
including conferences, narratives, lectures, etc.
The data have been transcribed using Sutton’s SignWriting, a system that makes it possible to observe the simultaneous articulation of manual and nonmanual features, including eye gaze direction and the temporal relations between head and hand movements.
Apart from uses already documented for other sign languages (available in all LIS discourse registers), in LIS discourse we found four different types of metalinguistic reference expressed by fingerspelling:
1) Reference to the meaning of a sign by using a signifier belonging to a spoken language to clarify the meaning of a sign
previously articulated (to make the addressee understand the meaning);
2) Reference to a given meaning to explain a concept without using signs (e.g. in a conference, when talking about the use of several signs to express a single meaning and introducing that meaning by fingerspelling);
3) Reference to a word (or part of a word) of a vocal language. Despite the presence of a signed equivalent, the signer needs to refer to the written form of the spoken signifier to talk about letters, morphological structures, word formation, etc. (this function occurs mostly in linguistics conferences and seminars);
4) Reference to fingerspelling itself: the signer is explaining something about handshapes, movement, practice or other
features concerning fingerspelling in sign language.
Though further research is needed to clarify the occurrence of these functions in other SLs, our data show that fingerspelling is an invaluable medium for referring both to signifiers and to meanings, and in specific cases it is preferred over, for example, a sign belonging to another SL or a signed periphrasis. For these reasons, enhancing our understanding of metalinguistic reference by fingerspelling could clarify the status of this device in the linguistic system of sign languages and in discourse organization.
Thursday 11th July, Poster Session 2
TISLR 11, July 2013
*Giulia Petitta, ^Marie-Anne Sallandre & *Paolo Rossini
Poster 2.25
*Institute of Cognitive Sciences and Technologies, CNR, Italy; ^Université Paris 8 & CNRS, France
Mouth Gestures and Mouthing in Two Sign Languages (LIS and LSF)
Languages: English, ASL
Several studies have been conducted on mouth patterns in sign languages, focusing on their function at different levels of analysis (Boyes Braem & Sutton-Spence, 2001; Fontana, 2008) and on their division into two main categories: mouthings, which are related to a spoken language, and mouth gestures, which seem to be independent and idiomatic.
It has been pointed out that mouthings are more often related to lexical items, with different forms and functions (see, among others, Sutton-Spence, 2007), while mouth gestures seem to have a morphological role (Ebbinghaus & Hessmann 2001).
Following the distinction between lexical items and productive signs (Highly Iconic Structures, hereafter HIS) proposed by Cuxac (2000) and Cuxac & Sallandre (2007), our research aims to investigate the systematic discourse patterns related to the use of mouthings and mouth gestures.
Cuxac’s semiological distinction is in fact based on nonmanual features (eye gaze, facial expressions, body postures and mouth patterns). It is therefore important to test how mouth patterns are distributed in signed discourse, and what this distribution depends on.
In order to compare the relation between lexical items/HIS and mouth patterns, we examined 24 signed narratives performed
by deaf adults in two sign languages, French Sign Language (LSF) and Italian Sign Language (LIS). We collected narratives
of four different signed stories:
1) The Horse Story (6 LSF signers);
2) The Frog Story (6 LIS signers)
3) “Dr. Jekyll and Mr. Mouse” - Tom and Jerry series (6 LSF signers)
4) “The egg and Jerry” - Tom and Jerry series (6 LIS signers).
Despite their different subjects, the stories are fully comparable, since they are narratives based respectively on picture stories (The Frog and The Horse) and on cartoons (the Tom and Jerry series). Furthermore, we investigated text sequences of comparable content and duration (approximately one minute of signed production).
Our analyses provide results on the relations between mouth patterns and HIS: while lexical items co-occur with both mouth gestures and mouthings, HIS never co-occur with mouthings, but only with mouth gestures. The difference between HIS and lexical items thus seems to be reinforced by mouth patterns, in addition to eye gaze and other non-manual features.
Our findings show that mouthings and mouth gestures, together with other non-manual patterns, are among the linguistically relevant features of sign languages, in particular because of their role in discourse organization. Furthermore, the distribution of mouthings and mouth gestures highlights the crucial role of a transcription system and an annotation tool that allow researchers to recognize and annotate mouth patterns. Further research will indeed require adequate tools to analyze all relevant features in signed discourse.
TISLR 11, July 2013
Thursday 11th July, Poster Session 2
Helen Koulidobrova
Poster 2.26
Central Connecticut State University, USA
Sign-speech bilingualism: A pathway to untangling bilingualism effects
Research has shown that both languages of a bilingual remain activated irrespective of which of them is being used (Kroll et al. 2008). This finding has two implications: (a) an increased cognitive load associated with the constant inhibition of one of the languages and with choosing between competing stimuli (Bialystok 2008); and (b) the possibility of language interaction. Effects related to one but not the other of these implications have proven difficult to untangle. We demonstrate that examining the linguistic patterns of sign-speech (bimodal) bilinguals allows this untangling to take place.
The question of how/why language interaction occurs is largely subsumed by the study of
cross-linguistic influence, which we argued in previous work to be deducible from a Minimalist
view of code-switching (1). Yet, not all possible cases of (1) may be observed. E.g., it has been suggested that in
the domain of argument omission, the two characteristics of bilingualism interact, one trumping the other:
irrespective of whether null arguments (NAs) could ‘transfer’ into the non-NA language, the increase in the
cognitive load due to inhibition forces overt elements to appear in anaphora resolution tasks (Sorace 2011).
Consequently, bilinguals acquiring a null- and a non-NA argument language simultaneously do not exhibit the
relevant language-synthesis effects. This view suggests that a language combination which allows
coarticulation— i.e. dispenses with/‘relaxes’ the otherwise necessary inhibition—may also allow for observing
language-synthesis in the domain of argument omission. ASL-English is such a combination (Emmorey et al. 2008,
Lillo-Martin et al. 2009).
In monolingual English, null subjects (NSs) of an embedded clause are ungrammatical
and non-existent in the corpus data (Valian 1991). In ASL, embedded NSs are allowed (cf. Lillo-Martin 1991) (2).
A corpus-based investigation has shown that ASL-English bilinguals also allow embedded NSs in their English
(Author 2011) up to age 5 (3). Here, we test this finding experimentally.
A grammaticality judgment test was administered to seven balanced bimodal bilingual
children (ages 6;02-7;4, mean=6;6) with normal or above verbal/nonverbal intelligence scores:
five typically hearing (KODAs) and two congenitally deaf with cochlear implants (CIs) (received
at young ages). Consistent two-language input was provided from an early age: ASL by Deaf parents and English by hearing family members and teachers.
In the study, two toys performed some action; the experimenter spoke for one of the toys.
A cat described the situation, omitting subjects in 50% of the test trials. Children were asked to
judge and correct the cat’s sentences (4). 50% of fillers were also ungrammatical in adult
English (agreement/tense errors). The children’s performance was compared with that of a monolingual English control (age 6;02) and a unimodal multilingual control (English/Croatian/Taiwanese, age 8;06).
Results indicate that, overall, both KODAs and CIs perform differently from controls: they (i) are less accurate in rejecting ungrammatical sentences in English (5); and (ii) accept significantly more embedded NSs than the controls (6) (z = 4.37, p < .0003).
The data support previous conclusions: ASL-English bilinguals exhibit language-interaction effects where unimodal bilinguals do not. Thus, they offer an opportunity for studying various bilingualism effects independently.
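For illustration only, the kind of two-proportion comparison that yields a z statistic like the one reported can be computed as below; the counts are invented placeholders, since the abstract does not give raw numbers, and this is not necessarily the test the author used.

from statsmodels.stats.proportion import proportions_ztest

# Hypothetical counts: embedded null subjects accepted out of relevant trials,
# for the bimodal bilingual group versus the controls. These numbers are
# placeholders, not the study's data.
accepted = [30, 2]
trials = [70, 40]
z, p = proportions_ztest(accepted, trials)
print(f"z = {z:.2f}, p = {p:.4f}")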
Thursday 11th July, Poster Session 2
TISLR 11, July 2013
*Hongyu Liu & ^Jilin Fu
Poster 2.27
*Fudan University; Yanshan University; ^Yanshan University, China
A Study on the Aspectual System of Chinese Sign Language (Shanghai Dialect)
Abstract:
Unlike spoken languages, sign languages, which operate in the visual modality, display a unique set of morphosyntactic strategies for expressing aspectual concepts. Data from the literature show that aspectual categories in sign languages are realized mainly through changes in path movement, modifications of prosodic features such as movement rhythm and speed, and repetition of signs. Non-manual features also carry distinctive aspectual information.
Based on a critical review of aspect studies in sign languages around the world, this study explores the aspectual system of Chinese Sign Language, and of its Shanghai dialect in particular, by offering a tentative description and interpretation of its aspectual categories and aspect-marking strategies. On the basis of fieldwork data and elicited experimental data, we identify several significant aspectual categories and aspectual markers in the Shanghai sign dialect. In our observation, the prominent aspectual categories in the Shanghai sign dialect are the perfect, the progressive and the iterative. The specific strategies include affixation of the grammaticalized signs WAN (‘to finish’) and YOU (‘to have’) in a relatively limited range of expressions. Repetition, hold, non-manual gestures such as mouthing, changes in the path movement of a particular sign, and the merging of two lexical signs (the main verb and the grammaticalized functional sign affix) also play a significant role in marking aspect in the Shanghai sign dialect. The specific aspectual values of the different aspect markers are discussed. Roughly, the strategies fall into two types: lexical and morphological. We further analyze the grammaticalization paths of these two types of aspectual marking, namely the inflection of the phonological properties of signs (movement, rhythm, non-manual gestures) and the aspectual modulation of manual signs with lexical markers such as WAN (‘to finish’) and YOU (‘to have’). The former type of aspect marking exhibits strong iconicity, reflecting a strong visual resemblance between the signs and their counterpart actions in the physical world. A close link between aspect and manner of action in Shanghai sign verbs is also revealed. We discuss aspect in Shanghai sign language from the angle of lexical aspect (Aktionsart) as well as grammatical aspect, in order to enrich the study of verbal aspect in Shanghai sign language.
Thursday 11th July, Poster Session 2
TISLR 11, July 2013
Hope Morgan
Poster 2.28
University of California San Diego, USA
Spatial Agreement and Word Order in Kenyan Sign Language
Spoken languages use word order, verb agreement, and/or case marking to indicate grammatical relationships in a clause
(Ann gave John the book). Sign languages additionally have the option of exploiting space for agreement (ASL: ANN IXa.
JOHN IXb. aGIVEb BOOK). However, as Meir (2010) shows, not all sign languages use space for grammatical marking. A
‘village’ sign language like Al-Sayyid Bedouin Sign Language uses word order only, without any spatial morphology for
verbal arguments. This prompts a question: what kind of typological distribution exists in sign languages around the world,
and what factors might account for the distribution?
The present study evaluates how (di)transitive events are morphosyntactically encoded in Kenyan Sign Language. A
communication task was given to 20 fluent KSL deaf signers (7 women, 13 men; 20-48 years old) in western Kenya. Signers
watched 21 short vignettes on a laptop and described them to a second KSL signer. Vignettes varied by number of actors and
objects in the scene and type of transitive event. Responses were coded for word order, indexes to spatial locations (points,
eyegaze), and direction of path movement in verbs of transfer.
Meir (2010) shows that grammaticalization of space can be measured by changes in the direction of movement in verbs of
transfer. A language like ABSL that consistently uses word order to indicate grammatical relationships also tends to use the
body as one of the arguments in an utterance, such that path movement starts on the body and moves straight outward on the
Z-axis. In contrast, spatial agreement in a language like Israeli Sign Language first establishes grammatical loci in the space
in front of the body, and the hands then move between these points horizontally on the X-axis. In the case of ISL, the transition from a predominant Z-axis state to its current X-axis state was mediated by an X+Z-axis state, in which the hands moved diagonally from the body to a grammatical locus. This demonstrated that the end points of the sign had been grammaticalized as agreement markers.
The current study finds that KSL signers use word order far more consistently than spatial agreement. Out of 236 responses with a direct object, signers used SOV word order in 71% of cases (MAN BALL KICK). In contrast, spatial agreement between established locations in space is an uncommon strategy in KSL. Out of 122 verbs of transfer (GIVE, THROW, etc.), KSL signers used the Z-axis 58% of the time, the X+Z-axis 37%, and the X-axis only 5%. X-axis movement was observed in only three signers.
No difference was found by age of signer, which is important because any evidence of an age effect, taken together with the use of the X+Z-axis, would suggest that, like ISL, KSL is in the process of grammaticalizing space for verbal agreement. Instead, it seems that KSL verbs, though they make use of grammatical space, have not left the body. These findings show that a young, national language like KSL does not automatically or rapidly evolve a system of spatial agreement.
Thursday 11th July, Poster Session 2
TISLR 11, July 2013
*Isabella Chiari, ^Alessio Di Renzo, *Giulia Petitta & ^Paolo Rossini
Poster 2.29
*Università La Sapienza di Roma / CNR-ISTC; ^CNR-ISTC, Italy
The signed discourse continuum: the issue of segmentation
Language: English, ASL
The focus of this contribution is to foster discussion of a critical theoretical issue in the study of signed communication, raised by the analysis of spontaneous signed discourse: the identification of boundaries between different signs, that is, the segmentation (and further identification) of the signed continuum. The issue is closely linked to the identification of specific semantic and pragmatic functions in the use of crystallized lexical signs and in the use of ‘productive’, ad hoc highly iconic structures. The issue of segmentation is relevant not only from a theoretical point of view but also from the point of view of annotation and transcription of sign language video recordings, as well as automatic sign recognition (Brentari and Wilbur 2008; Gonzalez and Collet 2011; Hanke et al. 2012).
The question has been raised over recent decades in vocal language research, prompting new ways of looking at linguistic units and at the principles of discreteness. In sign languages we not only observe similar issues at stake, but also deal with an inherently multilinear, multimodal and simultaneous communication system, which intensifies the question of how units are identified and imposes a strong focus on pragmatic functions in identifying them.
We present the results of three parallel experiments conducted on the basis of the same 45-second video recording of a narrative (The Egg and Jerry) in Italian Sign Language. The video prompt was analysed by different deaf signers in order to identify the boundaries of each sign. 15 annotators were divided into three groups performing different annotation operations: a) cutting video sequences, b) glossing, c) transcribing using SignWriting. No time limit was imposed on the annotators, who participated individually.
The choice of different metalinguistic operations, one linked directly to the video (segmentation) and the others portraying segmentation and identification indirectly through gloss annotation and transcription, is a way of observing from different angles the same underlying process involved in any basic analysis of signed discourse, a process unavoidable for anyone dealing with sign language data, especially in spontaneous discourse contexts (as opposed to experimental and lexically based elicitation procedures). Moreover, the difference between operations that involve the use of a different language (glossing) and those that do not require translation procedures (video cutting, transcription in SignWriting) reveals possibly language-specific segmentation strategies and different metalinguistic operations.
A comparison of specific problems in segmentation and identification shows that: a) segmentation is always a probabilistic rather than a categorical operation, as has been observed for speech in vocal language research; b) there are specific kinds of semantic/pragmatic units that pose particular challenges for agreement among annotators, above all in the presence of HIS and hypoarticulated signing, and that require a more complex view in order to define what counts as a signed lexico-semantic and pragmatic unit; c) confidence increases in the case of lexicalized units; and d) sequences bearing a strong pragmatic function are perceived as unsegmentable (behaving much like multiword expressions in vocal languages) when segmentation aims at preserving the identification of a specific minimal semantic unit.
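As an illustration of point (a), agreement on boundary placement between two annotators can be quantified with a tolerance-based score; the following Python sketch (the tolerance value and frame numbers are hypothetical) shows one simple way to do this, not the measure used in the study.

def boundary_f1(ref, hyp, tol=2):
    # A boundary counts as shared when the other annotator placed one
    # within `tol` video frames of it.
    hits_ref = sum(any(abs(r - h) <= tol for h in hyp) for r in ref)
    hits_hyp = sum(any(abs(h - r) <= tol for r in ref) for h in hyp)
    recall = hits_ref / len(ref)
    precision = hits_hyp / len(hyp)
    return 2 * precision * recall / (precision + recall)

# Hypothetical sign boundaries (frame indices) from two annotators:
print(boundary_f1([12, 40, 77, 103], [13, 42, 75, 99, 110]))  # ~0.67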
Thursday 11th July, Poster Session 2
TISLR 11, July 2013
*Itamar Kastner & ^Kathryn Davidson
Poster 2.30
*New York University; ^University of Connecticut, USA
Thursday 11th July, Poster Session 2
TISLR 11, July 2013
Jana Hosemann
Poster 2.31
Georg-August University of Goettingen, Germany
Thursday 11th July, Poster Session 2
TISLR 11, July 2013
Jane Tsay & Yijun Chen
Poster 2.32
National Chung Cheng University, Taiwan
Prosodic cues in Taiwan Sign Language
Presented in English
Prosody plays an important role in parsing the speech stream into smaller
constituents. The prosodic cues that mark constituent boundaries have been reported
to include hold, repetition, pause, change in facial expressions, changing head or body
position, and eye blinks (Nespor and Sandler 1999; Sandler and Lillo-Martin 2006;
Tang et al. 2010). This study aims to examine the manual boundary cues and report
the characteristics of those cues that occur in face-to-face conversations in Taiwan
Sign Language (TSL).
The analysis was based on one hour of spontaneous signing from two dyad
conversations. The data were transcribed using ELAN. Facial expressions were coded
based on the Facial Action Coding System (Ekman, Friesen, and Hager 2002).
It was found that TSL exhibits prosodic cues similar to those that have been reported in the literature. The cues that mark the end of phonological phrases are repetition of the last sign, hold, pause, and perseveration of the hand.
Regarding repetition, the last sign in a phonological phrase is often repeated once.
For example, AFFIRMATIVE is produced by an open hand with the thumb bending
inward and touching the chin. The touching movement is repeated once when this
sign is at the end of a phrase. As for hold, it is mostly realized in the signs which are
articulated in the neutral space.
If a single movement is treated as one weight unit and a hold is treated as adding one weight unit at the end of a phrase, it is plausible to propose that phrase boundaries in TSL require two weight units; a toy formalization of this condition is sketched below. There are some exceptions in which signs are repeated twice. These exceptions will also be discussed.
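Under the stated assumptions (one unit per movement, one extra unit for a phrase-final hold), the proposed weight condition can be written out as follows; the function and its names are purely illustrative.

def phrase_final_weight(movements, has_hold):
    # One weight unit per movement; a phrase-final hold adds one more.
    return movements + (1 if has_hold else 0)

# Both a repeated final sign (two movements) and a single movement plus
# a hold satisfy the proposed two-unit requirement at a phrase boundary:
assert phrase_final_weight(2, False) >= 2
assert phrase_final_weight(1, True) >= 2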
The perseveration of the hand is also used to mark phrase boundaries in TSL.
The non-dominant hand is usually held in a particular location, while the dominant
hand continues producing other signs. The non-dominant hand is perseverated at the
end of a phonological phrase. This phonological process is termed Non-dominant
Hand Spread by Sandler and Lillo-Martin (2006). Furthermore, signers sometimes
hold a one-handed sign at a phrase boundary while the other hand articulates the next
sign.
Pauses are defined as relaxing the hands. However, pauses seem to be infrequent and were rarely found in our data. One possible explanation for their rarity is that repetition and hold already serve as the major cues to phrase boundaries. When pauses do occur, it is usually when signers intend to yield the turn. When signers stop while their hands remain in the signing space, as they do in repetition or hold, this indicates that they are still talking. If, by contrast, they relax their hands, their interlocutor will take over the speaking turn.
Thursday 11th July, Poster Session 2
TISLR 11, July 2013
Joanna Atkinson
Poster 2.33
Deafness Cognition and Language Research Centre, UCL, UK
Voice hallucinations in speech and sign: What can they tell us about
language feedback loops?
Objectives: Data will be presented outlining the perceptual characteristics of voice-hallucinations in congenitally deaf people who use speech reading or sign language. Secondly, a comparison will be made with the phenomenology reported by hallucinators with unimpaired hearing.
Design: An adapted Q-sort methodology was used to sample the subjective experience of how voice-hallucinations are perceived.
Methods: Data from 48 deaf and hearing people with schizophrenia were factor analysed to reveal the latent phenomenological structure of both spoken and signed hallucinations.
Results: For deaf participants, the perceptual characteristics reported reflected individual experience with language and sensory input, and included auditory, speech-reading and sign language hallucinations. Strikingly, hearing people also hallucinated visuomotor representations of speech and manual gestures in fused form, suggesting shared subvocal mechanisms.
Conclusion: These data throw new light on the phenomenology of voice hallucinations, suggesting that the traditional focus on auditory aspects is too narrow. The finding that motor imagery underpins voice hallucinations in both the auditory-motor (speech) and visuomotor (sign) domains gives new insight into subvocal theories of hallucinations, articulatory representations of language, and sensory feedback loops. By studying deaf hallucinators we obtain new perspectives on the organisation of language and thought, and a shift in thinking about voice-hallucinations in general.
Thursday 11th July, Poster Session 2
TISLR 11, July 2013
Johanna Mesch & Lars Wallin
Poster 2.34
Stockholm University, Sweden
The non-dominant hand as delimitation between inner element and outer element
In previous studies, Liddell (2003), Liddell, Vogt-Svendsen & Bergman (2004), Vogt-Svendsen & Bergman (2007) and Nilsson (2007) described buoys in American, Norwegian and Swedish sign languages, such as the list buoy, THEME buoy, POINTER buoy and point buoy. Common to all of these is that they are realized with the non-dominant or weak hand, being “held in a stationary configuration as the strong hand continues producing signs” (Liddell, 2003:223).
In this paper, we present an additional sign (usually consisting of all fingers gathered, relaxed and slightly bent at both distal knuckles, with the thumb in opposition or lateral) which, with respect to its performance, matches the description of other buoys but differs in function/content from previously described buoys, with the partial exception of POINT-B (Vogt-Svendsen & Bergman, 2007). In the Swedish Sign Language Corpus, we have tentatively annotated this sign as DELIMIT (translated from the Swedish AVGRÄNS) because, in our initial analysis (of 84 preliminary tokens in 45 annotated dialogue texts from 26 informants of different ages and genders), the sign seems to represent a form of delimitation between an “inner” element, represented by the space in front of the hand’s palmar side, and an “outer” element, represented by the space in front of the hand’s dorsal side, as if someone is inside and someone else is outside, or as if there is an island surrounded by sea.
A typical example using DELIMIT is shown in the series of pictures below (see figure 1). The (left-handed) informant
is initially describing a comic strip about a lonely man on an island with a palm tree in the middle of the sea. The first
photograph shows the dominant hand performing the sign of the island (O-hand is moved up) with the non-dominant
hand initiating the execution of DELIMIT, which is completed in the second photograph, while the dominant index
hand is making a circular motion in the space in front of the palmar side of DELIMIT, which now represents the inner
elements, or the island. After the third photograph, in which the dominant hand is performing the sign of the sea, the
following three photographs show the informant describing the sea as an outer element by using the dominant hand
to make a sweeping motion forward past DELIMIT's dorsal side – further in front of DELIMIT – and ending on the
contralateral side of the space.
DELIMIT is typically carried out in the space in front of the body. However, one example in our data uses the neck as
the location for DELIMIT by representing the space beneath the non-dominant hand with the palmar side down for
the chest and downwards, and the dorsal side of the space above the hand for the head.
Together, the buoys described in this presentation show how the use of the non-dominant hand can be regarded as more important at the discourse level than the dominant hand is in individual signs, and thus that this hand is not particularly “weak” at all.
Thursday 11th July, Poster Session 2
TISLR 11, July 2013
Mara Moita, Patricia Carmo & Ana Mineiro
Poster 2.35
Instituto de Ciências da Saúde - Universidade Católica Portuguesa, Portugal
The Handshapes in Portuguese Sign Language: phonological study
Since Stokoe’s (1960) study on American Sign Language (ASL) phonology, sign languages have been studied as natural linguistic systems which, like spoken languages, exhibit minimal pairs and whose signs are constituted by three major phonological categories: handshape, location and movement.
Despite the large number of studies on sign languages in the last 50 years (Liddell & Johnson, 1989; Sandler, 1989; Liddell, 2003; Sandler & Lillo-Martin, 2006), Portuguese Sign Language (Língua Gestual Portuguesa, LGP) only began to be scientifically studied in 1980, with a small systematic analysis of its phonological and morphological aspects based on 250 signs (Prata & Martins, 1980) and research taking an exploratory approach to the grammatical features of LGP (Amaral et al., 1994). Although some important linguistic studies have been carried out, LGP has so far not been studied in a systematic and consistent manner in the major linguistic areas.
As part of the development of a signing avatar for LGP, the phonological system of LGP has been systematically described, for computational purposes, on the basis of the Hand Tier model proposed by Sandler (1989) and Sandler & Lillo-Martin (2006).
Accordingly, the phonological and morphological elements that compose sign articulation in LGP have been described and characterized. The current poster presents a study focused on determining and outlining the phonological handshapes that exist in LGP.
Research on sign languages describes handshape as probably the most visible segmental category in the articulation of a sign. Despite the potentially vast range of possible hand configurations, each sign language tends to use only a limited number of handshapes. Given the need to describe the phonological behavior of LGP, we developed a study focusing on the description of articulated handshapes in LGP, aiming to list the phonological handshapes used in this language.
This study was based on the phonological description of 500 signs by deaf signers who were university students enrolled in the degree programme in LGP. They described each sign according to the three major phonological categories and their features, together with facial expressions, all inventoried to serve the purpose of creating a signing avatar for LGP (Moita et al., 2011; 2012). They were given a large collection of LGP handshapes (83 handshapes) with which to describe the articulation of each sign for the dominant and the non-dominant hand.
The results show that the dominant hand uses a higher number of handshapes than the non-dominant hand, which seems to be restricted to a smaller set of articulable handshapes, revealing an unequal distribution across the two manual articulators. Frequency analysis resulted in a reduction of the handshape list and revealed a number of handshapes with higher occurrence in LGP signs. The results of this analysis are the main focus of this poster.
Thursday 11th July, Poster Session 2
TISLR 11, July 2013
Natasha Abner
Poster 2.36
University of Chicago, USA
Thursday 11th July, Poster Session 2
TISLR 11, July 2013
Nina-Kristin Pendzich
Poster 2.37
Georg-August-Universität Göttingen, Germany
Lexical layering of manual and nonmanual components within specific signs.
An empirical study of German Sign Language (DGS)
(poster presentation in English)
It is well known that languages in the visual-manual modality exhibit a phonological structure
comparable with that of languages in the oral-auditory modality (cf. Stokoe 1960). While
there are comprehensive studies on the manual parts of signs (cf. Sandler 1989, Brentari
1998) – handshape, hand orientation, place of articulation and movement – there is a lack of
empirically based research concerning the status of nonmanual-lexical markers. A promising
approach is to consider lexical facial expressions as a conglomerate of markers of the upper
and lower face and, thus, also include mouth gestures in their direct mimic surroundings.
In this presentation, I discuss elicited data from German Sign Language (DGS). I
annotated the videos with ELAN and investigated the consistent or inconsistent occurrence of
facial markers with individual signs within and across signers. This analysis of the
nonmanuals was always carried out in relation to the corresponding manual signs.
For a nonmanual feature to count as a phonological component, the following criteria are essential: (i) it must clearly be an obligatory, inherent part of the sign; (ii) it must not spread across other signs; (iii) variation in its execution may occur only within certain limits; and (iv) it must always be checked whether the use of the nonmanual component implies (gradual) differences in meaning, in which case it would not be an inherent lexical part of the sign but a modifying element.
I present new examples of the lexical manual-nonmanual layering within signs of
DGS and explain different characteristics of lexical facial expressions. An important result is
that these nonmanuals occur in two different constitutive shape patterns: single, specifically determined components of the upper and lower face (eyebrows, eyes, nose, cheeks, mouth, corners of the mouth, lips, tongue) (shape pattern I) and holistic facial expressions (shape pattern II). Furthermore, my study indicates that, on the lexical level, the relationship between the upper and lower face in DGS seems to be more balanced than previous research suggests. This contradicts the general assumption that syntactic and prosodic functions are realized in particular with the upper face, whereas the markers of the lower face are the most relevant for morphological and phonological functions (cf. e.g. Coerts 1992).
Additionally, I analyze aspects of lexical facial expressions such as arbitrariness and motivation, the adaptation of lexical mouth gestures in relation to handedness, facial-manual sign variants and the transfer of “echo phonology” (cf. Woll 2001) to the upper face. Finally, I discuss how the term ‘lexical’ can be specified. One essential approach is the analogy to pitches in tonal languages such as Thai (cf. Diamantidis et al. 2010, Herrmann & Köhler 2009, Liddell 2003). Pitches in sign languages can be defined as simultaneous nonmanual phonological intensifications which occur regularly with certain signs.
Based on an empirically founded theoretical classification of lexical nonmanuals –
facial expressions and posture of the body and head – it is especially interesting to examine
the interaction between these markers and other linguistic nonmanuals (e.g. syntactic facial
expressions and aspect marking).
Thursday 11th July, Poster Session 2
TISLR 11, July 2013
*Päivi Rainò, ^Marja Huovila & +Irja Seilola
Poster 2.38
*Humak University of Applied Sciences; ^Keskuspuisto Vocational College; +University of Jyväskylä, Finland
Concept Formation in Mathematics in Finnish Sign Language
(English)
Numerous studies have shown that deaf children and adults lag behind hearing individuals in arithmetical and mathematical performance. Deaf students’ performance remains consistently “below basic level” in their perception of measurements, numbers, fractions and arithmetic comparisons (see Bull 2008, Foisack 2003, among others). This bias should, however, be questioned, since there is no evidence to suggest that deaf subjects have had the opportunity to apply visual calculation strategies in sign language (cf. Korvorst, Nuerk & Willmes 2007).
Using the framework of ethnographic discourse analysis (e.g. Blommaert 2006) we present the findings from a corpus
containing signed mathematical discourse of eight native signers in Finnish Sign Language (FinSL). The data consists of
problem-solving monologues and tutorial dialogues where mental arithmetic processes including addition, subtraction,
division and multiplication are articulated in sign language in natural settings.
Conceptual mapping in signed mathematics is visual counting, carried out using the fingers, both hands and the three-dimensional neutral space in front of the signer as a visual abacus. Fingers, hands and spatial layers are used as buoys (cf. Liddell 2003) to retrieve, for example, subtotals in a regular, syntactically well-defined manner, proceeding, however, in the reverse order to that suggested, for example, in mathematics textbooks. The process is intelligible to all native signers monitoring the calculation, but, strangely enough, it has never been promoted in strategies for improving mathematics instruction in deaf schools, nor are its most specialized lexical units present in mathematics dictionaries produced in FinSL.
Thursday 11th July, Poster Session 2
TISLR 11, July 2013
Paweł Rutkowski, Małgorzata Czajkowska-Kisil, Joanna Łacheta
& Anna Kuder
Poster 2.39
University of Warsaw, Poland
Thursday 11th July, Poster Session 2
TISLR 11, July 2013
Pierre Schmitt
Poster 2.40
EHESS, France
Blurred languages
The Signing Stage as a Multimodal Semiotic Space
Exploring interdisciplinary bridges, this paper questions possible paths toward a single linguistic and semiotic model, understood as a unified theoretical and methodological frame, that would be available not only for the study of both vocal and sign languages, but also for the study of deaf-hearing interaction, where multiple languages and modalities are at stake. In search of such a model, my own work advocates a vantage point at the crossroads of linguistic anthropology (Duranti, 1997), performance studies (Schechner, 1977; Turner, 1986) and deaf studies (Padden & Humphries, 1988).
Among the new spaces that deaf and hearing people inhabit together, my research focuses on interactions occurring during artistic events. From the creation process to the signing stage, performing arts involving sign language are sites in the Deaf world where deaf-hearing interactions are common and where vocal and sign languages are constantly co-present. I would argue that in many cases, as in bilingual plays and productions, sign and vocal languages are not merely in contact. Indeed, while Deaf and hearing people are brought together by these creative processes, the co-presence of languages is constitutive of the interaction and shapes it (Schmitt, 2011). Therefore, we need models that do not study deaf and hearing people, languages or modalities separately, or as separate.
Relying on a corpus of plays, performances and rehearsals (recorded interactions and descriptions), this paper will show and analyze examples (bilingual plays, writing in interaction, interpreters' roles, the use of subtitles and video on stage...) where the blurred and simultaneous use of multiple languages and modalities in the midst of interaction makes it difficult or impossible to take sign language or vocal language in isolation as a starting point for a relevant linguistic analysis. Studying language-s in (such) context, we will subsequently discuss the stage as a multimodal semiotic space.
Contemporary linguistic anthropology provides insights for such a discussion, in some sense treating vocal language like sign language, and possibly treating sign and vocal languages alike, as both multilinear and compositional. Indeed, by taking into account bodies, eye gaze and facial expression in the study of vocal languages, current research in gesture studies and on multimodality faces issues of transcription similar to current debates in sign language studies, and often relies on video and sketches as tools for analysis (Goodwin, 2007; Goodwin, 2010).
To conclude, this paper relates these theoretical issues to larger debates, hopefully allowing a better understanding of how deaf and hearing people build their lives in a shared world of interaction. Indeed, while our scientific views on sign language are time- and culture-bound, historically conditioned and provisional, a call for models that foster equality among all speakers and all languages in presence is not only a matter of scientific analysis. It has to do with our own values about the world we study and inhabit.
Thursday 11th July, Poster Session 2
TISLR 11, July 2013
Rachel Benedict, Srishti Nayak & Catherine Caldwell-Harris
Poster 2.41
Boston University, USA
Group differences in the Acquisition of Bodypart Classifiers in Deaf children:
Deaf Parents versus Hearing Parents (language of presentation: ASL)
The acquisition of Verbs of Motion (VoM) and Verbs of Location (VoL) in American
Sign Language is interesting and crucial because these verbs are complex
morphologically. The complexity of VoM and VoL classifier structures may be one of
the reasons that these verbs are acquired relatively late in Deaf signers. We compared
differences in Deaf children of Hearing Parents (DCHP) and Deaf children of Deaf
parents (DCDP), to determine whether reduced natural language exposure in the home
delays acquisition of these morphologically complex structures. We focused specifically
on the acquisition of Bodypart Classifiers (BPCLs) - morphemes used to describe and
reference different body parts such as eyes, limbs, hands and feet.
We expected that older children would perform better than younger children on the
BPCL tasks (age effects). We also expected that DCDPs would perform better than
DCHPs due to natural language exposure in the home, and that this difference would
occur even in older children.
130 Deaf students at a school for the Deaf in the northeastern USA, aged 5-18 years, were assessed on comprehension of BPCLs using the American Sign Language Assessment Instrument (ASLAI). Of these, 31 children were DCDPs and 99 were DCHPs. The present study analyzed the ASLAI Real Objects - Receptive (RO-R) task (13 questions). The number of correct answers on these questions was used as the dependent measure in analyzing the data.
A linear multiple regression analysis was performed to test for effects of Age and Parentage, and for an interaction between Age and Parentage. These predictors collectively explained 14.7% of the variation in total correct answers (F = 6.94, df = 3, p < .05). The effects of Age (t = 3.278, p < .05) and Parentage (t = -3.310, p < .05) were both significant, but Age was the stronger predictor of total correct answers. The model predicted that at the average sample age (11.5 years), DCDPs score approximately 6.6 points higher than DCHPs, a significant difference (t = 4.819, p < .05). The Parentage-by-Age interaction was significant at the 90% level (t = -1.668, p < .10), predicting that as children get older, the gap between DCDPs and DCHPs in BPCL acquisition widens; an illustrative sketch of a model of this form is given below.
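The following Python sketch shows how a regression of this form might be set up; the file and column names are hypothetical, and the abstract does not specify the software actually used.

import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical per-child data; assumed columns: total_correct (0-13),
# age (in years), parentage ("DCDP" or "DCHP").
df = pd.read_csv("aslai_ror_scores.csv")

# Main effects of Age and Parentage plus their interaction, mirroring the
# analysis described above; the age:parentage term tests whether the
# DCDP-DCHP gap changes with age.
model = smf.ols("total_correct ~ age * C(parentage)", data=df).fit()
print(model.summary())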
In conclusion, we found that the BPCL structure is acquired gradually in both DCDPs and DCHPs; that DCDPs across all ages outperformed their DCHP peers in comprehension of all 13 BPCL structures; that DCHPs remained delayed on these tasks compared to DCDPs; and that the magnitude of the DCDP advantage increases with age. Our results imply that a lack of early exposure to ASL in the home puts Deaf children at a disadvantage with respect to learning the correct use of BPCLs, a complex VoM and VoL classifier structure that is essential for fluent ASL.
Thursday 11th July, Poster Session 2
TISLR 11, July 2013
*Raphael de Courville, ^Roman Miletitch, *Morgane Rebulard,
*Claire Danet, +Dominique Boutet & *Patrick Doan
Poster 2.42
*GestuelScript (ESAD Amiens); ^IRIDIA; +UMR 7023 SFL, Paris & Université EVE, France
Alteration of Signs in French Sign Language Speakers When Exposed
to a Bi-dimensional Still Representation of Their Own Gestures.
Language of the presentation: English
The aim of this paper is to explore the scriptural potential of French Sign Language (FSL) in order to build a graphematic system for sign languages. The purpose of this study is to reduce signs to a bi-dimensionally projected graphical representation of the gesture and to observe the impact of these visuals on the signers. We will present the results of this experiment.
Up to now, due to the complexity of sign languages, no satisfactory script has been created, despite the efficiency of existing systems for linguistic annotation. These are based either on schematisation, as in SignWriting (Sutton 2010; Bianchini 2012), or on encoding, as in HamNoSys (Prillwitz et al. 1989) and SignFont (Newkirk 1989), but none has yet taken advantage of the analogy that exists between the gestures that sign and the gestures that leave ink strokes on paper. Yet SL and writing share a common visual-gestural modality that could provide signers with a natural entry into literacy (LSScript report, Boutora 2005), as an estimated 80% of profoundly deaf people in France suffer from illiteracy (Gillot report, 1998).
In the experiment, a signer is given a sign in French Sign Language to execute. The signer is then confronted with a representation of their own gesture on a screen: the trajectory of the hands is synthesized in a single image as one stroke of calligraphic quality (Marey & Demeny, 1883; Lioret, 2004). The signer is given the opportunity to try again in order to improve the readability of their signs. The experiment was repeated with almost 100 signs and 8 subjects. We present a list of the modifications observed in the execution of the signs across the filming sessions; a sketch of this kind of stroke rendering is given below.
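The stroke-rendering step can be sketched as follows. This is a minimal illustration of the general idea (a trajectory drawn as a single stroke whose width varies with hand speed), with a synthetic path standing in for real tracking data; it is not the team's actual rendering pipeline.

import numpy as np
import matplotlib.pyplot as plt
from matplotlib.collections import LineCollection

# Synthetic 2-D hand trajectory; in the experiment these points would come
# from tracking the signer's hand in the video.
t = np.linspace(0, 1, 200)
pts = np.column_stack([np.sin(2 * np.pi * t) * (1 - 0.4 * t),
                       np.cos(3 * np.pi * t) * 0.6])

# One segment per pair of consecutive points; slower portions of the
# movement leave thicker "ink", giving a calligraphic, long-exposure look.
segments = np.stack([pts[:-1], pts[1:]], axis=1)
speed = np.linalg.norm(np.diff(pts, axis=0), axis=1)
widths = 1 + 6 * (1 - speed / speed.max())

fig, ax = plt.subplots(figsize=(4, 4))
ax.add_collection(LineCollection(segments, linewidths=widths, colors="black"))
ax.autoscale()
ax.set_aspect("equal")
ax.axis("off")
plt.savefig("stroke.png", dpi=200)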
Strategies are elicited by the constraints of the device, which neutralizes depth, blends overlapping configurations, blurs moving hands, etc. The signers in turn resort to tricks such as altering the rhythm of the sign in order to mark the direction of the gesture, or freezing on key configurations in order to get them to register in the image. Comparison between different signers suggests there could be a countable and generalisable set of such strategies. If consistent for given signs over a larger panel of signers, these kinds of graphical choices might be transferred to the written form as dynamics of the stroke.
The theoretical implications of this experiment are discussed, especially the issue
of the link between the gestural sign and the graphical sign. To describe the
scriptural quality of the sign, we introduce the concept of a “gestural abstraction”:
an ideal object that the gesture projects in space and of which any form of capture
(long exposure photography, motion capture) is merely an imprint.
Further description will be given of how these observations could support the project of a writing system for French Sign Language.
Thursday 11th July, Poster Session 2
TISLR 11, July 2013
Renata Lúcia Moreira
Poster 2.43
Universidade de São Paulo (USP), Brazil
A preliminary description of time deixis in Brazilian Sign Language
Language: English
The aim of this research is to describe time deixis in a narrative told in Brazilian Sign Language (henceforth Libras), within the frameworks of the Theory of Enunciation (Benveniste, 1976; Fiorin, 2002) and French Semiotic Theory. In sign languages, time deixis is expressed substantially by lexical markers, since time is not grammatically marked on the verb. There are narratives, however, whose time markers are not expressed by lexical signs. In these cases, it is necessary to investigate the other means employed to mark time deixis. This work is a pilot study, and its aim is to investigate the nature, linguistic or non-linguistic, of the time markers used in Libras; more precisely, to analyze the spatial, gestural and verbal means that establish, in the discourse, temporal relations of concomitance and non-concomitance between the action of the narrative and the moment of enunciation.
In order to describe the different time markers in Libras, I collected data by filming a deaf collaborator signing a comic story without speech bubbles. The narrative was transcribed with ELAN (EUDICO Linguistic Annotator), following the transcription system proposed by McCleary, Viotti & Leite (2009) and McCleary & Leite (2011).
The analysis of this first narrative allowed us to see (i) some of the devices used in Libras to construe, represent and characterize the characters and actions of the story; (ii) some time marks at three different moments of the story; (iii) some of their formal aspects and pragmatic characteristics; and (iv) the use of eye gaze and body movement, related not only to personal deixis but also to the expression of time relations in the narrative.
Thursday 11th July, Poster Session 2
TISLR 11, July 2013
Ritva Takkinen & Irja Seilola
Poster 2.44
University of Jyväskylä, Finland
Kinship terms in Finnish Sign Language
Every human language and culture has ways of expressing family relations; the terms by which this is done vary from language to language and from culture to culture. In spoken languages, kinship terminology has often been studied (e.g. Wallace and Atkins 1960, Shusky 1965, Greenberg 1990), but for sign languages few studies are found in the literature (Peng 1974, Woodward 1978, Massone & Johnson 1991, Wilkinson 2009, Geer 2011). Sign languages are minority languages that are in contact with the majority languages spoken in their countries. Some studies have shown similarities in the categorisation of kinship terms between sign languages and the majority languages with which they are in contact. In addition, some interesting structural features have been observed; for example, in Japanese Sign Language the fingers of the hand are used in a culturally typical order to express family relations.
This paper deals with kinship terms in Finnish Sign Language (FinSL) and is part of a typological research project. The research questions are: 1) How does FinSL categorise kinship terms? 2) Are the kinship terms semantically related to Finnish? 3) Are there formal or semantic similarities in kinship terms between Swedish Sign Language (SSL) and FinSL?
The FinSL data have been gathered by video-recording dialogues between native signers, using materials designed to elicit signs that refer to family relations. Comparative data have been gathered from dictionaries and by interviewing native users of Finnish and SSL.
Terms for both core family members and non-core family members are analysed in FinSL. There is a gender distinction in
some signs referring to core-family terms (MOTHER/FATHER, DAUGHTER/SON). In addition, there are also non-gendered
signs (FATHER^MOTHER for ’parents’, SIBLING, OFFSPRING, SPOUSE). Structurally the terms can vary: those already
mentioned are mostly individual signs, but there are also terms which are compounds (made up of two signs which
undergo phonological processes) e.g. FEMALE^OFFSPRING for ’daughter’, FEMALE^SIBLING for ’sister’ / MALE^SIBLING for
’brother’, FATHER^MOTHER, MALE^SPOUSE for ’husband’ and FEMALE^SPOUSE for ’wife’.
Non-core family members can be either individual signs (GRANDFATHER/GRANDMOTHER, AUNT, UNCLE, COUSIN, FIANCÉ)
or compounds (GRANDCHILD, FEMALE-FIANCÉ). The term GRANDPARENTS is composed of one single sign and a compound.
The sign for ‘grandchild’ has the repetitive form CHILD_CHILD, in which the locations of the repeated forms follow the same
time-line as the sign for ‘generation’.
In FinSL there are single signs for some of the members of blended families (formed through marriage), such as
APPI/ANOPPI for ’father-in-law’ and ’mother-in-law’. Such family relations as step-mother, step-father, step-daughter and
step-son are expressed as compounds, HALF-MOTHER, HALF-FATHER, HALF-DAUGHTER, HALF-SON.
The paper will discuss in more detail the structural features of kinship terms and their semantic relationship to Finnish, the
majority language with which FinSL is in contact. Some similarities are seen in the categorisation of kinship terms in these
languages, but these are in some respects different from, for example, English kinship
categorisation. FinSL is related to SSL, and therefore the formal and semantic connections between them will also be
discussed.
Thursday 11th July, Poster Session 2
TISLR 11, July 2013
Robert Adam
Poster 2.45
Deafness Cognition and Language Research Centre University College London, UK
Unimodal bilingualism in the Australian Deaf community: language contact between
Australian Sign Language and Australian Irish Sign Language.
The relationship between majority and minority language communities has often been examined, but it is not
known what happens in communities where there are majority and minority sign languages, and whether the
relationship between the two communities and their languages resembles that found in spoken languages. The
study reported here was a part of a larger examination of unimodal bilingualism in two sign languages.
In the Australian Deaf community there is a minority Deaf community of people bilingual in two unrelated
languages: Australian Sign Language (Auslan) and the Australian dialect of Irish Sign Language (AISL). Members
of this community, who have AISL as a first language and Auslan as a second language, were interviewed in
group discussions. Topics covered in these discussions included the relationship of AISL to Auslan and English,
attitudes to each of these languages and the power relationships between the two sign languages.
Because of educational policies which date back to the 1940s, AISL has not been used in schools since the
1950s. Also, the low incidence of deafness in general and the small size of the AISL community mean that no
Deaf people have learned AISL as a first language since then; those who have been born in the community since
have learned Auslan as a first language. Additionally, the members of the minority AISL community have been
pressured by the majority Auslan Deaf community not to use AISL in their community. As well as parallels with
the experiences of other minority language groups, parallels can be drawn with studies such as Zaurov’s work on
the ‘double discrimination’ confronted by Deaf Jewish people in Germany in the 1930s where the German Deaf
community actively oppressed the minority Jewish Deaf community within their midst.
This presentation also includes a brief history of AISL in Australia and the establishment of schools for Deaf
children from 1875 onwards; this historical perspective underpins an understanding of the centrality of schools in
the transmission of sign language, and how language attitudes, demographics, educational policies and overt
linguistic oppression can hasten language attrition in a Deaf community.
This research also contributed to the ‘Irish in Australia: Not Just Ned: A true history of the Irish in Australia’
exhibition at the National Museum of Australia where Deaf AISL signers were able to record their history in their
own language. This was the first time AISL was publicly acknowledged as a language group, and this in turn
promoted awareness of the AISL community, its language and history to other members of the Deaf community
and to the larger, hearing community. As well as having an impact on the Deaf community, the findings of this study are
of great relevance in relation to present-day migrants to multilingual and multicultural Australia who are Deaf and
use a different sign language.
Thursday 11th July, Poster Session 2
TISLR 11, July 2013
Stephen McCullough
Poster 2.46
San Diego State University, USA
Analysis of cortical thickness in deaf signers, hearing native signers, and hearing nonsigners
Both deafness and life-long experience with a sign language have the potential to alter
the neuroanatomical structure of the human brain. To investigate neural plasticity
associated with these effects, we conducted an investigation of cortical thickness
throughout the brain with a large cohort of deaf native ASL signers, hearing native ASL
signers (CODAs), and matched hearing nonsigners.
Method: The participants included 22 deaf native signers (mean age = 26.5, 14 female),
22 hearing native signers (mean age = 24.2, 15 female), and 22 hearing nonsigners
(mean age 25.0, 12 female). All participants were right-handed and had no history of
neurological disorder or disease. For each individual, two 3D structural MRI scans were
obtained using a 3-Tesla GE Signa Excite MR scanner at 1x1x1 mm resolution. These
structural volumes were then processed using FreeSurfer software to reconstruct white
matter and pial cortical surfaces. After surface reconstruction, cortical thickness was
computed algorithmically by measuring the distance between the boundary of white and
gray matter and the pial surface at all points between the two surfaces. Cortical
thickness measurements from each individual were registered to a standard surface-based
coordinate system to compare cortical thickness point by point across groups and
to visualize the statistical results (set at p < .05).
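To illustrate the point-by-point comparison, the following sketch (in Python) shows how a vertex-wise two-sample test over registered thickness maps could look. The group sizes and threshold mirror the description above, but the arrays are random placeholders; the actual analysis used FreeSurfer's own registration and statistics.

    import numpy as np
    from scipy import stats

    n_vertices = 10000                  # placeholder size of the common surface
    rng = np.random.default_rng(0)
    # subjects x vertices thickness maps, already registered to a standard
    # surface-based coordinate system (random stand-ins here)
    deaf_signers = rng.normal(2.5, 0.2, (22, n_vertices))
    hearing_nonsigners = rng.normal(2.4, 0.2, (22, n_vertices))

    # Two-sample t-test at every vertex, thresholded at p < .05
    t, p = stats.ttest_ind(deaf_signers, hearing_nonsigners, axis=0)
    thicker_in_deaf = (p < 0.05) & (t > 0)
    print(thicker_in_deaf.sum(), "vertices thicker in the deaf group")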
Preliminary results: The contrast between deaf signers and hearing nonsigners revealed
increased gray matter for the deaf group within the following left hemisphere structures:
insula, middle/inferior frontal gyrus, superior parietal lobule (SPL), lateral and superior
occipital gyri, and the paracentral gyrus. Within the right hemisphere, increased gray
matter for the deaf group was observed in the superior temporal sulcus (STS),
intraparietal sulcus, and paracentral gyrus.
The contrast between hearing signers and nonsigners showed more gray matter for
hearing signers in the right postcentral gyrus (near the arm/hand area). Surprisingly,
hearing signers had less gray matter than nonsigners in other right hemisphere regions:
MT (a motion-sensitive visual region), superior temporal gyrus (STG), and precentral
gyrus. Right MT and STG also showed less gray matter in comparison to deaf signers.
The contrasts between the deaf group and the two hearing groups suggest that deafness
plays a role in increased thickness for three left hemisphere regions – insula, left
occipital cortex, SPL – and two right hemisphere regions – STS and the fusiform gyrus.
The increased thickness in the middle/inferior frontal gyrus that was observed only for
the deaf signers suggests that this region may be sensitive to the combination of
deafness and life-long signing.
The reason for decreased gray matter for hearing signers in right MT and STG is unclear
but may be due to the more distributed nature of language processing in sign-speech
(bimodal) bilinguals. Unlike deaf signers, hearing monolinguals, or spoken language
bilinguals, bimodal bilinguals recruit both auditory and visual cortices for overlapping
language processing, which could lead to less dense gray matter in
these regions.
Thursday 11th July, Poster Session 2
TISLR 11, July 2013
Susan Mather & Sara Malkowski
Poster 2.47
Gallaudet University, USA
Examining Echo Phonology in American Sign Language (ASL)
Numerous research studies have been conducted across the globe, in countries such as Germany, the Netherlands,
Sweden, and England, regarding mouth components in sign languages (Boyes-Braem and Sutton-Spence, 2001). These
components have been divided into two categories: mouthing and mouth gestures. In their introduction, Boyes-Braem and
Sutton-Spence summarize the works of various authors to define mouthing and mouth gestures. Mouthing is the use of
spoken language produced simultaneously with manual signs; it is used for emphasis, in loan signs, and in contact
signing. Mouth gestures are defined as obligatory mouth patterns that function as separate morphemes and are not derived
from spoken language (Boyes-Braem, 2001). Bergman and Wallin (2001) propose using visual phonemes to represent the
sequential order of opened and closed segments of mouth components. They propose labeling open mouth components as (+)
and closed mouth components as (-).
Woll (2001) refers to three components that make up mouth gestures: adverbials, enaction and echo phonology,
with the bulk of her work focusing on echo phonology. Woll argues that echo phonology is a component of mouth gestures
because of its obligatory mouth patterns. Echo phonology defines a subset of mouth gestures that are driven by and
mirror the open/close movements of manual signs, together with changes in the mouth’s configuration known as echo
syllables. Woll emphasizes that movement in echo phonology bears no relation to the sign’s meaning; the mouth
movement is obligatory for the manual production, and the sign requires the associated mouth gesture. As shown
in Fig. 1, the sign WON is produced with the mouth and hand opened and completed with the mouth and hand closed.
Influenced by the work of Bergman and Wallin (2001), we propose that the directional pattern be labeled as open (-C) and
closed (+C); these labels can be applied to both the mouth and hand.
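To make the proposed labels concrete, the following sketch (in Python) checks whether a sign's mouth segments echo its hand aperture segments under the open (-C) / closed (+C) scheme; the glosses and codings other than WON are hypothetical illustrations, not data from this study.

    # Hypothetical codings: gloss -> (mouth segments, hand aperture segments)
    signs = {
        "WON": (["-C", "+C"], ["-C", "+C"]),           # open then closed, as in Fig. 1
        "EXAMPLE-SIGN": (["+C", "-C"], ["-C", "+C"]),  # mismatched directions
    }

    for gloss, (mouth, hand) in signs.items():
        status = "echo" if mouth == hand else "no echo"
        print(gloss, mouth, hand, "->", status)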
We examined whether echo phonology applies to mouth components in ASL by analyzing various videos of ASL
narratives. From what has been learned from Woll about echo phonology and the data collected for this study, we have
concluded that Woll’s theory can be applied to ASL mouthing; moreover, we have also found that the directional pattern
goes from closed to open but can be reversed, going from open to closed, as seen in Fig. 2. In addition, we identified an
emerging pattern, the body contact mechanism (BCM). We propose that the label (+) represents contact with the body
and (--) represents no contact with the body. In this study, BCM is defined as follows: when contact is made with the body,
the signer’s mouth is closed; when movement breaks the contact, the signer’s mouth opens, as seen in Fig. 3.
The data showcase a possible further subgroup of mouth components, the body contact
mechanism, within echo phonology.
Thursday 11th July, Poster Session 2
TISLR 11, July 2013
Szilárd Rácz-Engelhardt
Poster 2.48
University of Hamburg, Germany
Morphological properties of mouthing in Hungarian Sign Language
Contact phenomena between a sign language and a Finno-Ugric spoken language
Language of presentation: English
The paper presents the first investigation of nonmanuals in Hungarian Sign Language (MJNY) focusing on mouthing.
MJNY borrows mouth forms from Hungarian, which is a Finno-Ugric spoken language with very rich inflectional
morphology (Kiefer 2000). According to informal observations Hungarian native signers make use of inflectional categories
of spoken Hungarian in mouthing (Rácz 2010).
Previous research findings (Boyes Braem & Sutton-Spence 2001; Crasborn et al. 2008) indicate that mouthing can behave in
two ways: (1) Mouthing forms are reduced and adapted to syntactic-prosodic boundaries of the sign language; (2) Mouthing
forms are clearly articulated, enabling the addressee to match the visual forms to spoken words, including their grammatical
features. Several researchers share the claim that this second type is not part of the sign language but belongs to a
contact signing variety (Lucas & Valli 1992; Hohenberger & Happ 2001).
In this study I propose that (1) mouthing exhibits morphological information of Hungarian inflectional suffixes and (2) the
corresponding sign language structure is not a contact signing variety but well-formed MJNY.
An MJNY video collection was used that contains interviews with Hungarian Deaf native signers. I investigated six
participants (100 minutes video material) to find MJNY utterances with inflections in mouthing. The transcription was
carried out with the annotation tool iLex. Manual signs were annotated as glosses based on type-token matching (Hanke &
Stortz 2010). Mouthing forms were annotated by graphemes of spoken Hungarian. The transcripts were double-checked by
two native signers.
In the final dataset I found nearly 1000 inflectional morphemes in the mouthing, although their frequency varied widely
across participants. The most frequent categories were Person, Number, Possession and Case. The first three have
functional equivalents in MJNY and therefore could co-occur with the respective MJNY categories. Case instances,
characterising only the spoken language, co-occurred with other MJNY elements e.g. classifier constructions showing
different aspects of the semantic content.
Mouthing inflections came along with three types of MJNY structures: (1) Manual signs conveyed the same grammatical
information as the mouthing did; (2) Bits of inflectional information in manual signs and mouthing complemented each
other; (3) Signing followed a morpheme-to-morpheme translation representing the mouthed words with their inflections.
According to my native consultants, in the first two cases the grammatical structures of manual signs equated to those of
well-formed MJNY sentences and did not show any influence of Signed Hungarian. These occurrences can be seen as unique
contact phenomena between sign and spoken languages not investigated to date.
This bilingual language production is assumed to be the specific outcome of the close contact between the sign language and
the spoken language in Hungary influenced by sociolinguistic factors like the tradition of oral education. Hence, these data
seem to provide a unique window to the understanding of bilingualism of Deaf signers in Hungary.
Thursday 11th July, Poster Session 2
TISLR 11, July 2013
*Tommi Jantunen, ^Ville Viitaniemi, ^Matti Karppa
& ^Jorma Laaksonen
Poster 2.49
*University of Jyväskylä; ^Aalto University, Finland
The head as a place of articulation: From automated
detection to linguistic analysis (English)
Research into sign languages undoubtedly benefits from computer-vision-based tools
that can automatically extract information from large sets of video data. In our previous
work (Karppa et al. 2011, 2012), we have developed such a tool for the analysis of
movement, and here we will extend the system to the study of the place of articulation
(POA). Specifically, we will focus on the head, and test how accurately our system (i)
detects the signs with a head POA as occlusion events and (ii) distributes the
realisations of the POA into the five main head location classes used in the online
dictionary of FinSL, Suvi (Figure 1). We will also investigate the linguistic issue of (iii)
how much and where the hands move in the vicinity of the face during signing.
The operation of our POA analyser is based on the detection of local properties of the
sign language video that are identified with events when the hands occlude the head.
Primarily, the occlusions are detected by tracking local image neighbourhoods over the
frames of the video. An occlusion is reported when a neighbourhood originating from
outside the head region (prototypically the active hand) drifts over the head region. The
tracking is based on template matching of constellations of nearby points. However, as
the tracking in this stage is rather approximate, the information from tracking is
combined with a measure of local motion abruptness. This measure compounds the
residual template matching error between frames with the non-smoothness of the
estimated motion field. In addition to just identifying hand-head occlusions, the system
also encodes the locations of the occlusions relative to the parts of the face according to
Suvi’s facial coding scheme. All results are presented in ELAN (Figure 2).
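A highly simplified sketch of the occlusion-detection idea is given below (in Python with OpenCV; the file name, head box and patch coordinates are placeholders). The system described above tracks constellations of nearby points and combines residual matching error with motion-field non-smoothness; this sketch tracks a single patch and uses a mean frame difference as a crude stand-in for the abruptness measure.

    import cv2
    import numpy as np

    VIDEO = "signer_frontal.mp4"        # hypothetical input file
    HEAD_BOX = (300, 60, 200, 240)      # x, y, w, h of the head region (assumed fixed)

    cap = cv2.VideoCapture(VIDEO)
    ok, frame = cap.read()
    gray_prev = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Initialise the template from a patch outside the head region,
    # prototypically on the active hand (coordinates are placeholders).
    template = gray_prev[400:460, 500:560]

    frame_idx = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        frame_idx += 1
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        # Locate the tracked patch in the current frame.
        result = cv2.matchTemplate(gray, template, cv2.TM_CCOEFF_NORMED)
        _, score, _, (px, py) = cv2.minMaxLoc(result)
        cx = px + template.shape[1] // 2
        cy = py + template.shape[0] // 2
        x, y, w, h = HEAD_BOX
        inside_head = x <= cx <= x + w and y <= cy <= y + h
        # Crude motion-abruptness stand-in: mean absolute frame difference.
        abruptness = float(np.mean(cv2.absdiff(gray, gray_prev)))
        if inside_head and score > 0.5:
            print("frame", frame_idx, ": possible hand-head occlusion",
                  "(match = %.2f, abruptness = %.1f)" % (score, abruptness))
        gray_prev = gray
    cap.release()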
At the time of writing, our data comprises 7950 frames of continuous semi-formal
monologue at 25fps from 4 sitting signers recorded in frontal view and includes 101
head POA signs. Concerning the first question, the analysis indicated an average
accuracy of 91% (Table 1). As nearly all of the non-detected signs also showed partial
occlusion, we consider this result to be highly positive. In terms of the second question,
the average success rate was 73% (Table 2). In the task, misclassification was relatively
frequent but, as the incorrectly classified instances typically represented borderline
cases between two vertically adjacent facial areas, we consider this result too to be very
encouraging. Finally, in terms of the third question, we found that the hands were
located in the vicinity of the face 36% of the time of signing, the most frequent
locations on the head being the sides of the lower half of the face. As the overall share
of head POA signs was further found to be only 36% of all occlusions, we conclude that
signers raise their hands to the area of the head not just to produce head POA signs but,
perhaps, also to maximize the perceptibility of the signing to the addressee, whose point
of fixation is the signer’s face (Siple 1978).
Thursday 11th July, Poster Session 2
TISLR 11, July 2013
Ulrike Zeshan, Keiko Sagara & Anastasia Bradford
Poster 2.50
University of Central Lancashire, UK
Multilingual and multimodal aspects of “cross-signing” – A study of emerging communication in
the domain of numerals
The study reported here involves communication between deaf sign language users with highly divergent linguistic
backgrounds who have no language in common and minimal experience of meeting signers from other countries. Unlike the
semi-conventionalised contact language International Sign (e.g. McKee & Napier 2002), we look at the earliest, least
conventionalised stages of ad hoc improvised communication, called “cross-signing” here.
Free conversations were recorded from multiply matched dyads of signers, and this paper focuses on three male signers with
the following language backgrounds, who were in daily contact over a five-week period:
- MH: Japanese Sign Language, written Japanese
- MI: Jordanian Sign Language, written Arabic
- MS: Indonesian Sign Language, written Indonesian
This case study focuses on communication involving numbers, including quantification, time units, and calendar dates. We
analyse data from the initial conversations of the three signers, that is, their very first meetings (three conversations
comprising just under two hours of video). Our aim is to account for communicative patterns, use of linguistic resources, and
instances of miscommunication and/or conversational repairs in a little-studied type of signing environment, co-opting
approaches from Conversation Analysis (cf. Sacks, Schegloff & Jefferson 1974 for speech, Baker 1977, Coates & SuttonSpence 2001 for sign).
The pairs of signers can be said to operate in a multilingual and multimodal space, where, in the absence of a shared
conventional inventory, they make creative use of diverse communicative resources, including:
- Signer using number signs from his own sign language
- Signer using number signs from his addressee’s sign language
- Signer using improvised iconic number signs (ad hoc inventions)
- Signer writing numbers on the palm of the hand
- Signer writing numbers in the air
It is evident in the data that the signed and written languages in which MH, MI and MS have competence influence their
signed productions. Considering the multimodal affordances in terms of signing and writing, influences from writing systems
can be a source of miscommunication, as Japanese, Arabic, and Indonesian have different writing systems (see examples (1),
(2), and (3), involving dates and fractions). Considering the multilingual setting, interference from a signer’s own sign
language may also result in miscommunication. Thus complex multilingual and multimodal repair and repetition sequences,
self-initiated or prompted by interlocutor (cf. Schegloff, Jefferson, & Sacks 1977, Schegloff 2000) occur in the data.
Some of the conversational sequences are best viewed as a collaborative activity between two signers working together to
negotiate meaning. This is evident in their turn-taking behaviour including overlapping turns and mirroring each other’s
signed productions. The patterns of turn-taking show that signers make a conscious effort to accommodate each other,
repeating each other’s signs and/or involving overlap in turns. A Conversation Analysis (CA) approach (e.g. Jefferson 1986,
Schegloff 2000) can account for some of these patterns, but it is also clear that signed communication in this unusual setting
has properties that are not easily accommodated under conventional theoretical frameworks applied previously to spoken and
signed languages.
TISLR 11 ABSTRACTS FOR POSTERS SESSION 3
Friday 12th July, Poster Session 3
TISLR 11, July 2013
*Ryan Lepic, ^Gal Belsitzman, ^Carl Borstell & >Wendy Sandler
Poster 3.01
*University of California, San Diego, USA; ^Tel Aviv University, Israel; ^Stockholm University, Sweden;
>University of Haifa, Israel
Motivation in two-handed signs: A cross-linguistic investigation of word forms
Language: English
Is the division between one-handed and two-handed signs in a language completely
arbitrary, or does iconicity allow us to predict whether certain concepts are likely to
foster two-handed expressions? Klima and Bellugi (1979: 21), for example, use TREE in
AmericanSL, DanishSL, and ChineseSL to argue that iconicity "does not in any way
determine the actual details of the form." However, each of these signs evokes the visual
image of a prototypical tree, and though the signs are different, each is two-handed. We
propose that iconicity can determine details of form, namely, whether a sign will be two-handed.
In this study, we use typological data from three languages to investigate
whether certain meanings co-occur with two-handed signs. We argue that it is possible to
predict, on semantic grounds, which signs will tend toward two-handedness.
The data in this study come from two databases of signs from AmericanSL, IsraeliSL,
and SwedishSL. The first database is built around meaning, using the ECHO project's
Swadesh list for sign languages (Crasborn et al. 2007). The second database is built
around form, using lists of two-handed signs that share a phonological feature (ex.
movement or non-dominant handshape) in one language, to identify whether those signs
are two-handed in the other two sign languages, too.
We find significant overlap among the three unrelated languages. For example, in a
randomly-selected 100-concept sample from the Swadesh list data, we find 24 concepts
that are two-handed in all three languages (p = .001, Figure 1); these overlapping
concepts also represent the largest proportion of two-handed signs in each language.
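For readers who want to see how such an overlap could be tested, a simple permutation-style simulation is sketched below (in Python). The 40% marginal rate of two-handedness per language is an assumption for illustration; the p = .001 reported above comes from the authors' own analysis, not from this sketch.

    import numpy as np

    rng = np.random.default_rng(42)
    n_concepts, n_sims, observed = 100, 20000, 24
    p_two_handed = [0.4, 0.4, 0.4]   # assumed marginal rates per language

    count = 0
    for _ in range(n_sims):
        # Independently assign two-handedness in each language.
        draws = [rng.random(n_concepts) < p for p in p_two_handed]
        overlap = np.logical_and.reduce(draws).sum()
        if overlap >= observed:
            count += 1
    print("estimated p =", count / n_sims)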
Among signs that are two-handed in all three languages, we find signs that make use of
different iconic mappings, and signs that make use of the same iconic mapping, in
identical or different forms. These findings suggest that it is not simply concepts, but
salient sensory images or semantic features that are associated with concepts, that
promote two-handed signs.
For example, EMPTY looks different across the languages in our sample, but it is two-handed
in each language because of its semantics; one hand represents a surface or
container, and the other hand highlights that the surface or container is bare. Our
interpretation of the data is not that a given sign will always be two-handed in a given
language, though it may well be. Instead, we demonstrate that semantic features and
iconic mappings typical of figure-ground relationships, communicative processes, bodily
actions, and meteorological phenomena, to name a few, foster two-handed signs.
Our findings are consistent with a growing body of research suggesting that iconic forms
can serve as a tool for analyzing how aspects of human experience are codified in
language (Croft 1990, Taub 2001, Meir 2002, Meir et al. 2007, Wilbur 2009). Because
the kinds of generalizations we are making here are cross-linguistic tendencies rather than
strict universal rules, and especially because two-handed signs can change due to other
lexicalization processes (Frishberg 1975), we hope that our study will serve as the basis
for further typological comparison.
Friday 12th July, Poster Session 3
TISLR 11, July 2013
Akio Suemori
Poster 3.02
National Institute of Advanced Industrial Science and Technology (AIST), Japan
A phylogenetic approach to fingerspelling variation reveals the origin of hieroglyphic
letters
International Sign or American Sign Language
The introduction of fingerspellings corresponding to the 26 letters of the Latin alphabet into education for the deaf in Europe
ranks as one of the most important developments in deaf history [1], one that has interested many researchers in the field of
sign language studies and semiotics. However, the origins and variation processes of fingerspelling are controversial and
have long been debated. A number of studies have argued about the embryonic origins of fingerspelling, but their findings
are inferential, grounded in theory rather than in empirical testing.
I therefore developed a phylogenetic analysis as a way to deduce the variation process of fingerspelling. First, each of the
26 letters was classified according to phonological parameters from 43 current, geographically diverse fingerspellings, and
from 11 ancient fingerspellings, including Bede/Pacioli’s fingerspelling, a coding system used for reckoning in medieval
abbeys; Rosselli’s fingerspelling, a hieroglyphic system used for medieval mnemonics; and de Yebra’s fingerspelling,
known as the oldest fingerspelling system used in the education of deaf children in Spain. Subsequently, binary sequences for
identical or similar letter sets were arranged, and cladograms were built using the maximum likelihood method.
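The encoding-and-clustering step might look roughly like the following sketch (in Python). The binary codings are random placeholders, and average-linkage clustering over Hamming distances stands in for the maximum likelihood cladogram construction actually used in the study.

    import numpy as np
    from scipy.cluster.hierarchy import linkage, dendrogram
    from scipy.spatial.distance import pdist

    # Hypothetical binary codings over the 26 letters
    # (1 = shares the reference form of a letter, 0 = does not).
    systems = {
        "Bede/Pacioli": np.random.default_rng(1).integers(0, 2, 26),
        "Rosselli":     np.random.default_rng(2).integers(0, 2, 26),
        "de Yebra":     np.random.default_rng(3).integers(0, 2, 26),
        "ASL":          np.random.default_rng(4).integers(0, 2, 26),
    }
    names = list(systems)
    X = np.vstack([systems[n] for n in names])

    # Pairwise Hamming distances and an average-linkage tree.
    dists = pdist(X, metric="hamming")
    tree = linkage(dists, method="average")
    print(dendrogram(tree, labels=names, no_plot=True)["ivl"])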
As shown in Figure 1, a consensus cladogram shows some clear clades that imply the phylogenetic relationships of
fingerspelling. The cladogram reveals that Rosselli’s fingerspelling would have been first performed as a hieroglyphic
fingerspelling, and that de Yebra’s fingerspelling may have developed from crosses between Bede/Pacioli’s fingerspelling
and other medieval hieroglyphic fingerspellings, which spread into Europe and America through Sicard’s fingerspelling.
Cladistic analysis also reveals that a major switch of the palm orientation of most of the letters took place in the 19th century,
a landmark period for the diversification of fingerspelling.
In-depth analyses of wide patterns of hand configuration for each letter identify invariable letters such as ‘V’, ‘C’, or ‘O’.
Prosodic-model-associated trees of hand configuration are analyzed for the invariable and variable letters,
respectively [2]. As shown in Figure 2, for example, in the four variations of ‘O’ the common circle formed by the
selected fingers (the thumb and index finger) is invariable, whereas the non-selected fingers (the middle, third, and
little fingers) are variable. This observation suggests that the iconicity of ‘O’ is symbolically expressed by the
selected fingers.
This study provides important information on an effective phylogenetic approach for linguistic and semiotic research on
fingerspelling, illuminating the origin of fingerspelling variations while providing insight into the iconicity of signed
representations of Latin characters.
Friday 12th July, Poster Session 3
TISLR 11, July 2013
*Allison Hilger, *Torrey Loucks, ^Quinto-Pozos David & *Matthew Dye
Poster 3.03
*University of Illinois at Urbana-Champaign; ^University of Texas at Austin, USA
Characterizing American Sign Language Fluency Using The Spatiotemporal Index
Presentation Languages: American Sign Language and English
Relatively little is known about the incidence of communication disorders in sign language users (Quinto-Pozos et
al., 2011). Although there are anecdotal reports of production fluency disorders (Cosyns et al., 2009; Montgomery
and Finch, 1988), as yet no one has provided a kinematic characterization of what it means to be fluent in a sign
language (although, see Mirus et al.
(2001)). In contrast, fluency has been characterized with more precision for speech and particularly for the
disorder of stuttering (Starkweather, 1987). The central and defining symptom of stuttering is disfluency that
perturbs the forward flow of speech (Bloodstein and Bernstein-Ratner, 2008). Stuttering has a biological basis
(Chang et al., 2009; Choo et al., 2011) stemming from a genetic predisposition (Fedyna et al., 2011). As part of a
longer-term approach to characterizing fluency disorders in sign language, the reported study used techniques
from the study of speech movement variability in spoken language. The aim was to assess the kinematics of
American Sign Language (ASL) production in 4 fluent Deaf native signers and 12 hearing college students
studying ASL as a second language at three different proficiency levels of an American university curriculum. The
dependent measure, called the spatiotemporal index (STI), characterizes the kinematic variability of fluent
utterances (Smith et al., 1995). Higher STI values indicate higher variability and suggest that whole-utterance
kinematics are less stable (Kleinow and Smith, 2000). The fluent Deaf native signers provided a benchmark for
fluency, and an estimate of the degree of variability to be explained within fluent production. The hearing sign
language learners were chosen because they knew enough ASL to be able to produce the required sentences,
and they provided a measure of fluency in the absence of communication disorder stemming from a lack of
familiarity and practice with the language. Hearing learners ranged from those with a few weeks' experience of
ASL, to those who had received 30-40 weeks of instruction. Participants were required to produce target signs in
carrier phrases (using the stimuli reported by Emmorey et al., 2009) to elicit utterances with greater ecological
validity.
Three-dimensional limb kinematics were obtained by placing markers on a participant's
shoulder, elbow, wrist and hand, and capturing the movements using infrared cameras (Motion
Analysis Corporation, Santa Rosa, CA). The start and end points of individual signs were
operationally identified using kinematic data and inter-utterance variability was then determined
following the established method of Smith et al. (1995). Figure 1 shows kinematic traces from
repeated utterances of a single target sign for a Deaf native signer and a hearing learner, along
with corresponding STI values. This novel approach to characterizing fluency in the context of
ASL utterances has relevance for the identification of potential ASL production disorders as well
as for assessing L2 acquisition in the context of teaching sign languages.
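The STI computation itself is straightforward; the sketch below (in Python, with synthetic trajectories standing in for the motion-capture data) follows the normalization procedure of Smith et al. (1995): each repetition is amplitude-normalized (z-scored), resampled to a fixed number of time points, and the across-trial standard deviations are summed.

    import numpy as np

    def spatiotemporal_index(trials, n_points=50):
        """trials: list of 1-D arrays, one per repetition, of varying length."""
        normalized = []
        for trial in trials:
            trial = np.asarray(trial, dtype=float)
            # Amplitude normalization: z-score each record.
            trial = (trial - trial.mean()) / trial.std()
            # Time normalization: resample to a fixed number of points.
            t_old = np.linspace(0.0, 1.0, len(trial))
            t_new = np.linspace(0.0, 1.0, n_points)
            normalized.append(np.interp(t_new, t_old, trial))
        normalized = np.vstack(normalized)
        # STI = sum of across-trial standard deviations at each time point;
        # higher values indicate less stable whole-utterance kinematics.
        return float(np.sum(normalized.std(axis=0)))

    # Example: a stable signer vs. a more variable learner (synthetic data).
    rng = np.random.default_rng(0)
    t = np.linspace(0, 1, 120)
    stable = [np.sin(2 * np.pi * t) + rng.normal(0, 0.05, t.size) for _ in range(10)]
    variable = [np.sin(2 * np.pi * t) + rng.normal(0, 0.4, t.size) for _ in range(10)]
    print(spatiotemporal_index(stable), spatiotemporal_index(variable))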
Friday 12th July, Poster Session 3
TISLR 11, July 2013
Andre Xavier & Plinio Barbosa
Poster 3.04
Unicamp, Brazil
One or two? That is the question! The variation in the number of hands in the production of some Libras signs as a result of co-articulation
Language of the presentation: English
Klima and Bellugi (1979) reported that in American Sign Language (ASL) signs can be characterized
as one or two-handed. The number of hands, however, can vary in some signs. Woodward and DeSantis
(1977), for instance, showed that some ASL two-handed signs can be articulated by some signers with
one hand only. The same observations have been reported for Libras. Xavier (2006) observed that even
though Libras signs can be distinguished in terms of their number of hands, some signs can vary in
relation to this articulatory parameter (Xavier 2011). In his 2011 work, the author cites co-articulation,
either anticipatory (ACA) or perseveratory (PCA), as a source of this variation. ACA is the anticipation of articulatory properties of
a following segment onto the current one, whereas PCA is superficially the same phenomenon in the
opposite direction (Kühnert and Nolan 1999). Co-articulation related to variation in the number of
hands has also been reported to occur in ASL by Liddell and Johnson (1989) and in Australian Sign
Language (Auslan) by Johnston and Schembri (1999). To test Xavier's observations of the same
phenomenon in spontaneous signing, we designed an experiment in which five Libras signs (ACCEPT,
ALREADY, NEED, NOT and WANT), observed to vary in their number of hands, had their context of
occurrence controlled so that each of them appeared in between (a) one-handed signs, (b) two-handed
signs, (c) a one-handed sign and a two-handed sign, and (d) a two-handed sign and a one-handed sign. In
addition, to check whether signing rate can also play a role in the occurrence of number-of-hands-related
co-articulation, the experiment was run in two sessions: in the first, subjects signed at their default
signing rate and, in the second, they were requested to sign faster. Based on Tyrone (2010), we created
Libras sentences with the help of a deaf signer and, during the experiment, we displayed them on a
computer screen through glosses. The experiment consisted of having the subjects sign each sentence
(after reading them) by memory to another deaf person sitting by the camera. The sentences, mixed
with 30% fillers, were displayed five times in random order in each session. Before the experiment,
however, a native bilingual signer interviewed and conducted a game with all eight subjects selected.
The goal of the interview and game was to elicit the same target signs used in the experiment in a more
relaxed situation. The analysis of the data obtained (both from the interview/game and the experiment
itself) revealed great variability among subjects in terms of which of the signs underwent co-articulation
and to what extent this happened. This occurred even though the selected subjects are all
from the same city, shared a common educational background and mainly interacted among themselves
frequently. We concluded that inter-signer variability is still a main issue to tackle before conducting
any experiment in sign languages.
Friday 12th July, Poster Session 3
TISLR 11, July 2013
*Andrea Lackner & ^Christian Stalzer
Poster 3.05
*ZGH, 9020 University of Klagenfurt; ^Department of Translation Studies, 8010 University of Graz, Austria
The head takes it all!
Head and body movements in Austrian Sign Language
Language to present: English, Austrian Sign Language
We will demonstrate that the ‘head’ is one of the most systematically structured non-manual
articulators in Austrian Sign Language (ÖGS).
Following an empirical and inductive approach, various head and body movements/positions have
been identified and analyzed based on multi-signer ÖGS-texts. First, language phenomena have been
identified. Thus, each language-relevant instance of head position or movement(s) and body position
or movement(s) along a body axis has been annotated in separate tiers by Deaf annotators. In order to
assure interrater reliability, the several head and body tiers have been annotated by multiple viewers.
Second, these head and body movements/positions have been classified, the results have been
generalized and, to some extent, explanations of some generalizations have been offered.
The analysis of the ÖGS-texts together with the Deaf individuals’ annotations shows the following
results: First, based on common characteristics different groups of head and body movements have
been classified. The first classified group constitutes several head movements/positions which possess
a clear form-function-pair and co-occur with syntactic constituents. These head movements/positions
convey functions like expressing negation, assertion, interrogativity, irreality, or conditionality. The
second classified group includes those head and body elements which depend on the signing space and
which can vary the articulator. The third classified group comprises head and body movements which
are produced less regularly than those of the first group, but which are nevertheless clearly identified by the Deaf
annotators. They cover entire utterances. This group, which we report for the first time, is a set of
nonmanual markers used to code ‘epistemic modality’ and includes a convinced-assertive, a non-assertive,
a dubitative and a timitive marker.
Second, the common characteristics of the described head and body movements/positions will be
discussed. It will be shown that the composed features of head and body movements/positions go
beyond a description which only includes the movement executions along a body plane.
Finally, we will demonstrate that some of the linguistic structures marked with the same non-manual
marker have a functional common ground, that is, their functions have a semantic/pragmatic
contiguity. For example, this can be applied to the ÖGS-marker ‘head forward’, which occurs in
content questions, embedded two-option interrogatives, conditionals, and exclamatory utterances
functioning as interrogative, irrealis, conditional and exclamative marker.
These new insights show that in sign languages – like in ÖGS – a lot of information can be coded by
the articulator head and body. By demonstrating the different classified groups of head and body
movements/positions based on common characteristics, by taking a closer look at the features of head and
body markers and by comparing linguistic structures which are all indicated by the same head marker,
we aim to show the diversity and complexity of head and body movements/positions in ÖGS, which
Austrian Deaf annotators identified as language relevant distinctive elements.
Friday 12th July, Poster Session 3
TISLR 11, July 2013
Angoua Jean-Jacques Tano
Poster 3.06
Felix Houphouet-Boigny University, Abidjan/Leiden University, Netherlands
DOCUMENTATION AND ANALYSIS OF LANGUE DES SIGNES DE COTE D'IVOIRE
(LSCI)
Language of presentation: ASL (as used in Ivory Coast)
In Ivory Coast at least two sign languages are used. American Sign Language (ASL), adapted to French (Kamei, 2006), is
used in Deaf education and by educated Deaf adults. Deaf people with no formal schooling use various forms of Ivorian Sign
Language (LSCI). ASL is spreading in the Ivorian Deaf community at the cost of LSCI.
As part of a project aiming at the documentation and descriptive analysis of LSCI varieties in Ivory Coast, lexical and
discourse data were collected by a team of deaf and hearing signers with 3-12 deaf signers in 6 locations spread across Ivory
Coast (i.e. Bouakako, Abengourou, Daloa, Yamoussoukro, Bonoua and Abidjan). In all locations, except Bouakako, the
signers are bilingual in ASL and LSCI.
The data are being provided with French glosses in ELAN by the deaf members of the team, who are trilingual in (written)
French, ASL and local Ivorian sign language. This digital corpus of video data, annotations and metadata will be stored in the
archives of the Endangered Languages Documentation Programme (ELDP) and the national archives of Ivory Coast. The data
of each location, annotated for formal and iconic features, will be compared to assess the level of lexical variation found
across Ivory Coast. Through this, we hope to contribute to the development of methodologies for establishing language and
dialect borders in and between sign languages.
As far as grammatical analysis is concerned, one local signing variety will be studied in depth. This is the family sign
language used in Bouakako, a village near Hiré in the south-west of Ivory Coast. In this village of around 1300 inhabitants, we found
10 deaf people (0.77% of the population). As at least 8 of the deaf signers there are blood-related, the deafness seems to
be hereditary (despite the fact that so far, only one generation of deaf people has been identified). The deaf people of
Bouakako are not in contact with varieties of ASL. They are integrated in the social life of the village. As a result, a
considerable number of hearing people have at least some basic signing skills. The grammatical description of selected
features of Bouakako Sign Language is one of the few studies of a family sign language, and the first of its kind for
Africa. As such, it can contribute significant insights to our knowledge about the emergence of sign language.
Friday 12th July, Poster Session 3
TISLR 11, July 2013
Anna-Lena Nilsson
Poster 3.07
Stockholm University, Sweden
Use of signing space in simultaneous sign language interpretation:
marking discourse structure with the body.
(Presented in English.)
A fundamental difference between signed and spoken languages is that in signed languages
the signer uses the three dimensional space in front of him/her (signing space) and his/her
own body for reference and cohesion. According to recent studies of signed languages (e.g.
Liddell, 2003; Liddell, Vogt-Svendsen & Bergman, 2007; Nilsson, 2010; Dudis, 2011;
Ferrara, 2011; Thumann, 2011) such linguistic tools make use of the conceptual blending
process (Fauconnier & Turner, 2002).
Optimal use of signing space is dependent on the signer’s knowledge of what s/he is going to
talk about. In a simultaneous interpreting situation, both the content and the structure of the
discourse become known to the interpreter only gradually. Thus, it is difficult for an
interpreter working simultaneously into a signed language to know how to best structure the
discourse, as there is no way s/he can know exactly what the speaker will say next. To date,
there are only a few studies regarding use of signing space in simultaneously interpreted
signed language (see, however, e.g. Frasu, 2007; Nicodemus, 2009; Armstrong, 2011;
Goswell, 2011).
In the present study, Swedish Sign Language (SSL) interpreters have been filmed when
interpreting from spoken Swedish into SSL. Both interpreters whose first language is SSL (L1
interpreters) and those who are second language learners of SSL (L2 interpreters) have been
recorded. Their signed language production is studied using a model based in Conceptual
Blending Theory, and mainly analyzing use of Real Space Blending (Liddell, 2003), focusing
on how they use signing space and their body to mark the discourse structure. Does the
interpreting situation make interpreters use fewer of the linguistic tools available, or use them
differently than in spontaneously produced SSL (as described in e.g. Bergman, 2007; Nilsson,
2010; Sikström, 2011)?
The unexpected findings of a preliminary analysis indicate striking differences both in how
and how much the recorded L1 and L2 interpreters use their body, especially regarding the
use of movements of the upper body. In this presentation, I will show how the L1 interpreters
structure the discourse content using buoys and tokens (Liddell, 2003) in a highly visual
interplay with body movements. Buoys and tokens are combined with e.g. sideways
movements and rotations of the upper body, thereby marking the structure of the discourse.
The L1 interpreters move their upper body in a manner that gives a relaxed and natural
impression, frequently e.g. raising their shoulders as part of sign production. Despite finding
out the discourse content only gradually, and while already rendering their interpretation of
what has been said so far, they manage to produce signed discourse that is strikingly similar to
spontaneously produced SSL discourse. In comparison, as we will see, the L2 interpreters
generally move their upper body less, and they use fewer buoys and tokens. Their use of
directions in signing space to indicate e.g. contrast and/or comparisons is more stereotypical,
and their body movements do not reflect the structure of the discourse to the same extent.
Friday 12th July, Poster Session 3
TISLR 11, July 2013
*Annemarie Kocab, ^Jennie Pyers & +Ann Senghas
Poster 3.08
*Harvard University; ^Wellesley College; +Barnard College, USA
From gesture to language: The emergence of nonmanual-markers in Nicaraguan Sign
Language
Will present in ASL
The appearance of common gestures as lexical signs indicates that gesture can serve as
raw material for a sign language’s lexicon. Can gesture similarly be the source of
grammatical forms? Sign languages typically use facial movements (nonmanual-markers)
to convey grammatical information. For example, in American Sign Language,
wh-questions are marked with a brow furrow: a common facial expression indicating
puzzlement in American speakers. It has been proposed that nonmanual-markers in sign
languages derive from the facial gestures used by local speakers (Janzen & Schaffer,
2002; McClave, 2001). To examine this process empirically, we compared the facial
gestures used within wh-questions by 14 Spanish-speaking Nicaraguans and 25 Deaf
Nicaraguans, representing three sequential age cohorts of users of Nicaraguan Sign
Language (NSL), a new language created 30 years ago.
We observed five facial gestures across cohorts (Fig. 1), and found that their distribution
has changed over time and does not reflect hearing gestural “input.” The nose wrinkle is
evident early, but by the third cohort, the brow furrow dominates (Fig. 2). We consider
three explanations for this pattern. Frequency in hearing gesture: Disconfirmed; no
gesture appears to dominate; indeed, the brow furrow is the least frequent. Salience in
hearing gesture: A tendency to hold the brow furrow is intriguing but inconclusive; no
facial gesture was held significantly longer than others (Fig. 3). Associated spoken
question word: Also disconfirmed; Spanish-speakers produced the brow furrow
primarily with when, whereas the deaf signers produced it primarily with WHAT (Fig. 4).
We consider an alternative account. The brow furrow, while not dominant in Nicaraguan
hearing gesture, appears in many sign languages around the world. Additionally, the
brow furrow is a universally used facial expression with clear emotive salience, typically
indicating puzzlement (Ekman, 1979). The brow furrow may be a preferred candidate for
marking wh-questions due to the salience of the form, namely, the ease with which it can
be produced and perceived. Additionally, movements of the brows are easily
manipulated, allowing the signer to control the onset and offset of the nonmanual-marker,
and mark scope, transforming a facial gesture into a grammatical nonmanual-marker.
Research with hearing speakers suggests a similar preference for the use of the upper
facial area, including the brows, for judgments of prosody and paralinguistic information
(de Gelder et al., 1999; Lansing & McConkie, 1999; Swerts & Krahmer, 2008).
Our findings suggest that once the language has blossomed, its seed is no longer visible.
Perhaps once gestures have been repurposed into sign language, the factors that then lead
to the grammaticalization of one form over another are internal to the sign language,
leaving gesture behind. Instead, other elements come into play, shaping the language to
conform to the needs of the users.
Friday 12th July, Poster Session 3
TISLR 11, July 2013
*Asako Uchibori & ^Kazumi Matsuoka
Poster 3.09
*Nihon University; ^Keio University, Japan
Rightward movement of wh-elements in Japanese Sign Language: A preliminary study (English)
Under the generative framework (Chomsky 1995a/b, 2000, 2001, 2004 etc.), the derivation of a wh-sentence is considered to
involve overt or covert movement of a wh-operator to the sentence peripheral position, i.e., wh-movement. Theoretically, the
operation of movement itself can be either rightward or leftward. However, there is a puzzling dichotomy between sign and
spoken languages when wh-elements are overtly displaced from their original positions: the wh-final construction is common
in sign languages (Zeshan 2006, Cecchetto et al. 2009, etc.), contrary to spoken languages where overt wh-movement is
leftward. Hence, one of the major theoretical issues concerning wh-sentences in sign languages is whether the wh-final
constructions are derived by rightward wh-movement (Cecchetto et al. 2009, Neidle et al. 1998), or by an operation different
from wh-movement (Petronio and Lillo-Martin 1997 (henceforth P&L-M)). In the latter approach, it is claimed that in
American Sign Language (ASL) (P&L-M) and in Brazilian Sign Language (LSB) (Quadros 1999), wh-words are actually
raised to C0 which has the [+focus] feature in the sentence final position. This derives the ‘Focus Double’ construction,
where non-phrasal wh-expressions (e.g. WHO and WHAT) as well as non-wh-words (e.g. a verb or the negative head) appear
in the final position.
In previous research on Japanese Sign Language (Nihon Shuwa), which is an SOV language, it has been pointed out that wh-finals are common in matrix sentences (1), particularly when the non-manual marker (henceforth, NMM) for wh-questions
does not spread over the entire sentences (Morgan 2006, Fischer and Gong 2010, Kimura 2011, Akahori and Oka, 2011).
Few studies of embedded wh-constructions, however, have been conducted. The current paper presents a preliminary
analysis of JSL wh-sentences, in particular those containing indirect wh-questions selected by verbs such as WANT-TO-KNOW and complement clauses selected by so-called bridge verbs. In the wh-sentences examined here, the wh-NMM
(NMM-WH) co-occurs with the wh-expressions, whereas the NMM for interrogatives (NMM-Q) co-occurs with the sentence-final
finger pointing. The data are provided by a native signer whose parents and siblings are also deaf, originally from
Okinawa.
The current study reveals that the final wh-elements in JSL behave in the same way as those in ASL/LSB in that they cannot
be phrasal (2). Furthermore, JSL disallows the non-wh Focus Double construction (3). Wh-doubles, however, are possible in
JSL (4), where the ‘weak’ wh-word appears in-situ, while the final wh-word is pronounced with a larger and clearer
movement. This phonological difference suggests that the weak form in-situ is not a word, but a copy of the item overtly
moved rightward. The initial-wh and wh-in-situ (unaccompanied by any copy/double) are rejected by our informant (5-6).
As shown in (7-14), the distributions of wh-words in the embedded contexts are basically the same as in the matrix sentences.
The analysis of the data indicates that movement of wh-words must overtly take place toward the final position in JSL.
Hence, the rightward movement of wh-elements should be considered as an option available in SOV languages which do not
have the Focus Double construction.
Friday 12th July, Poster Session 3
TISLR 11, July 2013
Aurore Paligot & Laurence Meurant
Poster 3.10
University of Namur, Belgium
The influence of the metalinguistic function on register variation in LSFB
Depending on the contexts of enunciation, a speaker uses different varieties of language. This
sociolinguistic phenomenon called "register variation" is the subject of few studies in signed languages (Zimmer,
1995; Quinto-Pozos et al., 2010). A challenge in studying registers is that a lot of factors influence the choice of a
language variety used in a particular situation. The more natural the context of recording, the more difficult it is for
the researcher to isolate these factors of variation and study their respective influence. This poster presents a test
constructed to study a factor of variation, the metalinguistic function, and the way it affects phonological variation
in different registers.
As in Zimmer's research, this study of registers is mainly based on the system of Halliday (1978), who
defines the characteristics of a speech situation according to three axes: the field, the mode and the tenor of
discourse. One of the main factors of variation of the field, defined by Zimmer as "the degree of emphasis placed
on the language itself", is the subject of the present research. This factor draws a continuum along which the
language is subordinate or, on the contrary, dominates the interaction. Going one step further, we formulate the
hypothesis that attention to language is at its peak when a speaker uses the metalinguistic function as defined by
Jakobson (1960), which might result in more careful speech productions. Following this statement, we aim to test
the influence of the metalinguistic function at the phonological level and in contexts with different degrees of
formality.
To meet this goal, the production of two signers in four different situations will be analyzed. The data
include productions of the signers while using the metalinguistic function (situation A) or not (situation B) in two
formal contexts. These data will be compared with two less formal situations, taking the form of a dialogue
between speakers: the use of the metalinguistic function will appear in A’ but will not be present in B’. The aim of
comparing these four situations is to control as many factors of variation as possible and thus reduce the difficulty
mentioned above. At the conference, the following questions will be answered: does the use of the metalinguistic
function influence the phonological level of speech? If so, does the resulting variation occur in a patterned
fashion? The suitability of the designed study will also be discussed.
Friday 12th July, Poster Session 3
TISLR 11, July 2013
*Benjamin Anible, *Jill Morford, ^Pilar Pinar & +Giuli Dussias
Poster 3.11
*University of New Mexico; ^Gallaudet University; +Pennsylvania State University, USA
Sensitivity to Verb Bias in ASL-English bilinguals
Verb biases have been shown to help native English readers resolve temporary ambiguity in sentence
comprehension. When a post-verbal constituent may either be a direct object (DO) as in (1) or a sentential
complement (SC) as in (2), readers rely on distributional knowledge to anticipate the more frequent syntactic frame
for the associated verb (Garnsey et al., 1997). Some verbs (such as “appreciated”) have a bias toward DO
continuations; others toward SC continuations. Eye-tracking studies show that when this bias is violated in sentences
like (2), participants exhibit longer fixations in the disambiguating region and/or more regressions to the verb
(Wilson & Garnsey, 2009).
(1) Richard appreciated his vacation time in Paris.
(2) Richard appreciated that his parents helped him achieve his dream.
Are bilinguals able to access the same distributional information and employ it in the service of ambiguity
resolution while reading? There are indications that Spanish-English bilinguals reading English sentences show
sensitivity to English verb biases (Dussias et al., 2010). We found that deaf American Sign Language (ASL) - English
bilinguals exhibit similar sensitivity in two related studies. We evaluated ASL-English bilinguals’ sensitivity to English
verb biases in an elicitation task. Subjects were prompted with a two word subject/verb sentence fragment and asked
to complete the sentence (cf. Gahl, Jurafsky, & Roland, 2004). We found a high correlation between sentence
completions by native English speakers and by ASL-English bilinguals (.83 for DO; .87 for SC). These correlations are
higher, in fact, than those found by Dussias et al. (2010) for Spanish-English bilinguals in an immersion setting (.61 for
DO; .67 for SC).
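The comparison of completion norms amounts to a per-verb correlation, as in the brief sketch below (in Python; the verbs and DO-completion proportions are hypothetical placeholders, and the reported correlations of .83 and .87 come from the authors' actual data).

    from scipy.stats import pearsonr

    verbs = ["appreciated", "admitted", "believed", "confirmed", "demanded"]
    do_rate_english_natives = [0.80, 0.35, 0.20, 0.55, 0.70]   # hypothetical
    do_rate_asl_bilinguals = [0.75, 0.30, 0.25, 0.60, 0.65]    # hypothetical

    r, p = pearsonr(do_rate_english_natives, do_rate_asl_bilinguals)
    print("DO-completion correlation across verbs: r = %.2f (p = %.3f)" % (r, p))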
In a second study, we used eye-tracking to probe comprehension evaluating gaze duration for matching vs.
mismatching sentence continuations following SC-bias and DO-bias verbs to determine whether bias sensitivity can
be used online to resolve temporary ambiguity. Gaze duration was shorter, indicating anticipatory processing, only
for SC continuations with SC-bias verbs. Surprisingly, we also find that ASL-English bilinguals appear to detect bias
violations early, fixating the constituent immediately following the verb, rather than the traditional disambiguating
region two constituents distant. We interpret these results as suggesting Deaf ASL-English bilinguals have a wider
perceptual span than hearing readers (Bélanger et al., 2011), decoding words as much as a full fixation earlier than
hearing readers. An alternate perspective on these findings is that ASL-English bilinguals exhibit a less data-driven
reading style where longer fixations in earlier regions of the sentence are indications of more top-down control of
comprehension but our secondary measure showing verb sensitivity in production as well as perception discounts
this perspective.
The corroborating evidence in our two studies confirms our conclusion that ASL-English bilinguals are
sensitive to verb biases and suggests further that Deaf readers are able to decode text further into the periphery than
their hearing counterparts.
Friday 12th July, Poster Session 3
TISLR 11, July 2013
Brendan Costello & Manuel Carreiras
Poster 3.12
BCBL (Basque Center on Cognition, Brain & Language), Spain
Friday 12th July, Poster Session 3
TISLR 11, July 2013
*Caroline Bogliotti & ^Laetitia Puissant-Schontz
Poster 3.13
*Université Paris Ouest Nanterre la Défense & MODYCO CNRS UMR7114; ^Speech therapist, La Rochelle, France
Assessing morphosyntactic skills in French Sign Language (English)
Native signers and hearing speakers follow similar patterns of language acquisition. Past research
has shown that language development is optimal if deaf children are sufficiently exposed to a
signed language (Petitto & Marentette, 1991; Morgan, Herman, Barriere & Woll, 2008).
Difficulties in language development, whether delays or specific language impairment (SLI),
affect hearing speakers and native signers alike (Woll & Morgan, 2012; Mason, Rowley, Marshall,
Atkinson, Herman, Woll & Morgan, 2010).
Isidore is a 6-year-old native signer who grew up in a native FSL environment. He has undergone
speech therapy since the age of 4. While his sisters showed normal language development, he has
had difficulty communicating from an early age and suffers from a severe language deficit. To
evaluate and quantify Isidore’s deficit, testing morphosyntactic skills is required. To date, no FSL-specific
test allows caretakers (speech therapists, doctors, etc.) or researchers to assess those skills.
Two reasons explain the lack of such a test: (a) few works specialize in the linguistic description of FSL,
and more specifically in aspects such as language acquisition and developmental stages; (b) to our
knowledge, no assessment tool exists for FSL. In the wake of Courtin and Limousin’s work (2010)
on FSL assessment, and of other sign language assessment tools (Herman et al., 1999; 2004), our
general goal is to develop a battery of tests to assess linguistic abilities in FSL.
We focus on the morphosyntactic level. We evaluate morphosyntactic structures common to sign
languages and spoken languages (negation, verb inflection, linguistic tenses, causality links, etc.), as
well as structures that are restricted to signed languages (use of syntactic-semantic space,
grammaticalized facial expressions, etc.).
Our aim is twofold. We want to: (a) obtain information on the acquisition of FSL morphosyntax; (b)
confirm our intuitions about the symptoms of language delay affecting native FSL signers. Ultimately,
we want to determine whether language delays and SLI can be diagnosed in FSL. In general, morphosyntactic
skills provide a reliable indication of possible delays in language development for hearing speakers
(see Leonard, 1998 for a review). From our perspective, it is essential to create and implement the
same kind of tool for sign language.
Our experiment is in progress. It consists of a grammaticality judgement test (Boudreault &
Mayberry, 2006), covering both receptive and expressive skills. We evaluate deaf children with a
language delay and compare them to children without such deficits. We test
receptive skills with an image-choice task (subjects are asked to choose, among distractors, the
image matching the FSL morphosyntactic structure). We test expressive skills via the
description of an image, with several levels of syntactic complexity. Hopefully, our results will
contribute to a better understanding of acquisition and the use of FSL structures. More precisely,
our results will help determine whether the observed language disorders are modality-specific.
Friday 12th July, Poster Session 3
TISLR 11, July 2013
Cecily Whitworth
Poster 3.14
McDaniel College, USA
ASL Handshapes: Phonetic Features and Phonological Categories
to be presented in ASL
This paper describes the methods and results of a pilot study using a narrow phonetic
transcription of American Sign Language (ASL) hand configurations to examine inter- and
intra-signer variation in produced features. Video recordings of a three-year-old child and his
mother were coded for low-level hand configuration features. Occurrence and co-occurrence of
features were analyzed using ELAN and R, with relationships between and among features,
target handshapes, sign lemmas, and participants examined. Results show significant amounts
of variation across multiple productions of target forms, indicating that increased attention to
phonetic features is necessary for the accurate description of signed languages.
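The occurrence and co-occurrence analysis described above was carried out in ELAN and R; purely to illustrate the kind of bookkeeping involved, the Python sketch below counts feature occurrences and pairwise co-occurrences across coded tokens. The feature labels and tokens are invented, not the study's coding scheme.

```python
# Illustrative tabulation of occurrence and co-occurrence over coded
# hand-configuration features. Each token is the set of low-level
# features coded for one production; all labels here are invented.
from collections import Counter
from itertools import combinations

tokens = [
    {"index:extended", "middle:extended", "thumb:opposed"},
    {"index:extended", "middle:bent", "thumb:opposed"},
    {"index:extended", "middle:extended", "thumb:unopposed"},
]

occurrence = Counter(f for tok in tokens for f in tok)
co_occurrence = Counter(
    pair for tok in tokens for pair in combinations(sorted(tok), 2)
)

print(occurrence.most_common(3))
print(co_occurrence.most_common(3))
```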
Many analyses of signed languages (e.g. Liddell & Johnson 1989; Sandler 1996; Brentari &
Eccarius 2010) have treated 'handshape' as a class of phonemic-level contrastive unit,
comparable to ‘consonant’ or 'vowel' for spoken languages. Transcriptions of signed data
typically record the handshapes and/or features posited for citation forms, but not the lower-level
features (such as individual joint positions) that are actually produced. These features are
assumed to be predictable from the posited target handshape, and any variation is assumed to be
noncontrastive and is typically neither recorded nor examined. Researchers have tended to (1)
assume that they (or their research assistants/informants) know what the handshape categories
for a language are; (2) assume that they/their assistants know which produced forms belong to
different categories and which are variants of the same category; (3) assume they/their assistants
can tell which handshape was intended, based on the produced form; and (4) treat unexpected
configurations either as errors or as the result of postlexical phonological processes.
Results from this study indicate that many ASL signs are produced with different handshape
features than their target forms. For many of the produced forms in this data, it is impossible to
tell what the target handshape is solely by examining the produced configuration. Figure 1a
shows three target handshapes in ASL; 1b shows four configurations that appear in tokens for all
three targets. None of the four configurations shown in 1b is a canonical production of any
phonemic handshape in ASL, but they all appear regularly in signs with posited target
handshapes V, K, and 3. The amount of variation in feature values for individual joints and the
amount of overlap between produced forms with different targets indicate that although the
handshapes V, K, and 3 may be distinct categories in ASL, the category boundaries are not clear.
This study uses the phonetic details of produced ASL hand configurations to examine the
nature of handshape categories, including not only category-central citation forms but also the
description and quantification of variation in produced forms. Results indicate that traditional
transcription methods and assumptions about category identity may be obscuring important
facts about allophonic, dialectal, and crosslinguistic variation in signed languages, concealing
underlying categories and/or encouraging false distinctions between categories that do not
actually exist.
Friday 12th July, Poster Session 3
TISLR 11, July 2013
*Chiara Branchini, *Anna Cardinaletti, ^Carlo Cecchetto
& +Caterina Donati
*University of Venice Ca' Foscari; ^University of Milan Bicocca; +Sapienza University of Rome, Italy
Poster 3.15
Friday 12th July, Poster Session 3
TISLR 11, July 2013
Christina Healy
Poster 3.16
Gallaudet University, USA
Affect Verbs in American Sign Language
Presentation language: English
Affect verbs reference circumstances in which an experiencer undergoes an internal change
in response to a stimulus. While affective experiences are universal, the constructions available
to describe them differ in each language. As affect verbs in sign languages have not been
studied, this project is a first step in that direction, examining American Sign Language (ASL).
Talmy (2003) notes the majority of English affect verbs are stimulus-subject, though they
may appear in constructions which evoke different construals. For example, to describe a
situation in which Mary reads a book and experiences fascination, a speaker could say any of the
sentences in (1)-(3). These refer to the same situation but evoke different construals with respect
to the prominence given to each participant. The primary focal participant of the unmarked verb
fascinate is the stimulus, so book appears in subject position in (1). The secondary focal
participant, the experiencer, is lexicalized in the object position. In (2) the suffix -ing combines
with fascinate, creating an adjectival construction with a stimulus subject. The experiencer is
left unspecified, implying that anyone who reads the book will experience fascination. Finally,
the construction in (3) evokes a construal with the experiencer as the primary focal participant.
In English, the majority of affect verbs are stimulus-subject, like fascinate, while in other spoken
languages most affect verbs are experiencer-subject (Talmy, 2003). Previous research has not
studied affect verbs in the visual modality. This project is the first to investigate the
constructions and construals describing affective events in a sign language.
The data for this project were elicited through a translation task: ten English stimulus-subject
affect verbs appeared in each of the three constructions represented in (1)-(3). Five native
ASL signers translated each sentence, and the videotaped data were coded for: clause types,
primary and secondary focal participant roles, use of space, and non-manual markers
(movements of the head, torso, eyes, brows, cheeks, nose, and mouth).
The results indicate that ASL predominantly construes the experiencer as the primary focal
participant, in contrast to English. Even though the elicitation prompts were evenly distributed
across three construal types, the vast majority of elicited ASL utterances encoded construals in
which the experiencer had primary prominence, as in (4). Constructions encoding the stimulus
as the subject showed evidence of English influence, such as prepositions borrowed from Signed
English (5). Of the 120 responses, only 9 construals left the experiencer unspecified, and
participants commented that translating sentences with an unspecified experiencer was especially
challenging.
In these data, signers employed modality-specific features, such as depiction and surrogate
blends, to describe affective events. Use of space to modify verb agreement was used minimally
to evoke construals in which the experiencer was not the primary focal participant. Future
research on signed languages will determine whether the visual modality predetermines the
predominant construal or whether this preference for experiencer-subject affect verbs in ASL is a
language-specific phenomenon.
Friday 12th July, Poster Session 3
TISLR 11, July 2013
*Clark Denmark, ^Sabine Fries & +Jens Hessmann
*University of Central Lancashire, UK; ^Humboldt University Berlin;
+Magdeburg-Stendal University of Applied Sciences, Germany
One sign language is not enough:
Approaching second sign language acquisition of Deaf people
[language of presentation: BSL]
Foreign sign language learning by Deaf people appears to have been largely
neglected, not just in general second language acquisition studies but also in sign
language specific accounts, which tend to focus on hearing learners. Yet, while
‘International Sign‘ may be the communicative mode of choice in more fleeting
contacts, L2 sign language learning, or Second Sign Language Acquisition (SSLA),
as we may call it, does take place in many situations, induced for instance by
migration, intermarriage, or prolonged periods of study or work, in which Deaf sign language
users come into extended contact with Deaf users of another sign language.
Deaf folklore seems to point to a characteristic feature of SSLA processes:
Apparently, Deaf users of a first sign language have some kind of privileged access
to a second (third, fourth ...) sign language. Generally, the acquisition process is
considered ‘easy’ in comparison to other unimodal or bimodal language learning
processes, since Deaf learners are expected to be able to draw upon a basic set of
‘rules’ that can be transferred to the target language. Our presentation will take a
closer look at such predictions and highlight SSLA as a field of study in its own right.
Firstly, we draw on a number of sources, including informal surveys of Deaf users of
L2 sign languages, to describe the conventional wisdom concerning SSLA processes
to be found within Deaf communities. One recurrent theme in such received views is
that what is involved in SSLA processes can be expressed as ‘different signs, same
grammar’. In this view, it is mainly lexical knowledge that needs to be acquired if a
foreign sign language is to be mastered. On the other hand, informal accounts also
point to factors that may be conducive to foreign sign language learning as well as a
number of specific learning challenges.
Secondly, we look at a BSL online course specifically designed for Deaf users of four
different European sign languages (www.signs2go.eu). This course is built around
the assumption that ‘significant overlap’ between BSL and the learners’ native sign
languages can be systematically exploited for language learning purposes. We will
present an analysis of the linguistic issues and features drawn upon in order to
provide support to Deaf learners in this course. This analysis will serve to make the
idea of ‘different signs, same grammar’ more precise.
Finally, we consider the place that SSLA might occupy within the wider field of
Second Language Acquisition. While many of the more general learning processes
described in the literature apply across unimodal and bimodal second language
acquisition processes, none seems to account fully for what is involved in foreign sign
language learning. Thus, at this stage of our knowledge, we should focus on what
seems specific about SSLA in order to understand more fully in what way
competence in one sign language opens up comparatively smooth roads to many
sign languages.
Poster 3.17
Friday 12th July, Poster Session 3
TISLR 11, July 2013
Daisuke Sasaki
Poster 3.18
Seikei University, Japan
North Korean Sign Language: A Possible Influence from Korean and Japanese Sign
Languages
This is, to my knowledge, the first study of North Korean Sign Language (NKSL).
The status, or even the existence, of a signed language in North Korea is very little known,
particularly in the West. I was able to obtain a duplicate copy of an NKSL
dictionary, entitled “son mal sa jeon” (손말 사전), which literally means “hand language
dictionary,” as well as a duplicate copy of a NKSL textbook, entitled “son mal hak seup”
(손말 학습), which literally means “hand language learning.” In this study, I compare the
vocabulary items of NKSL with those of Korean Sign Language (KSL) and those of Japanese
Sign Language (JSL).
In my previous studies, I have argued for a close relationship among JSL, KSL, and Taiwan
Sign Language (TSL), and have concluded that this closeness was due, among other factors, to the Japanese
colonial occupation of Taiwan and Korea. Japan occupied Taiwan from 1895
and Korea (the whole peninsula) from 1910, until 1945 when it was defeated in World War
II. During the occupation, the then Japanese government established schools for the deaf in
Seoul in 1913, Tainan in 1915, and Taipei in 1917 (the Pyongyang School for the Deaf and
Blind was already established in 1909 by Rosetta Sherwood Hall before the occupation), and
sent Japanese teachers to those schools. Considering the current similarity in vocabulary
items among the three East Asian sign languages, it seems that those Japanese teachers used
JSL in the instruction and that the JSL vocabulary influenced the vocabulary of both TSL and
KSL. Since Korea was divided in 1948, it can be assumed that the vocabulary of NKSL
was also influenced by KSL and JSL.
In this study, I considered to what extent NKSL is lexically similar to KSL, with which it is
thought to share a source, and to JSL, which is thought to have influenced the
establishment of NKSL (and KSL). Following McKee and Kennedy (2000), I used four
phonological parameters (handshape, palm orientation, movement, and location), as well as
the number of hands involved in the production of a sign (i.e., one-handed versus two-handed,
or loss or addition of a hand), to analyze and distinguish signs. Signs were
classified into three categories: phonologically identical, phonologically distinct, and
phonologically similarly-articulated. I also followed Guerra Currie, Meier, and Walters’s
(2002) criteria to classify signs into these categories. For phonologically identical signs, all
the parameters mentioned above must be identical between the sign forms, and the signs must
share the same meaning; signs were identified as phonologically similarly-articulated if they
share the same meaning and only one difference between them is observed in any of the five
parameters; and all other signs were regarded as phonologically distinct. Preliminary
research using 100 basic signs (Woodward 2000) showed that NKSL and KSL are closely
related, with more than 70% shared vocabulary, a rate much higher than that of the JSL-KSL
lexical comparison, at approximately 50% shared vocabulary.
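The classification procedure lends itself to a compact sketch. The fragment below assumes the three-way scheme described above (all five parameters identical; exactly one mismatch; otherwise distinct) and, as a further assumption of mine, counts a pair as "shared" vocabulary when it is not distinct; the parameter values are hypothetical placeholders.

```python
# Sketch of the three-way classification over five parameters.
# Parameter values below are hypothetical placeholders.
PARAMS = ("handshape", "orientation", "movement", "location", "hands")

def classify(sign_a: dict, sign_b: dict) -> str:
    """Classify a same-meaning sign pair by parameter mismatches."""
    diffs = sum(sign_a[p] != sign_b[p] for p in PARAMS)
    if diffs == 0:
        return "identical"
    return "similarly-articulated" if diffs == 1 else "distinct"

nksl = {"handshape": "B", "orientation": "down", "movement": "arc",
        "location": "chest", "hands": 2}
ksl  = {"handshape": "B", "orientation": "down", "movement": "straight",
        "location": "chest", "hands": 2}

pairs = [(nksl, ksl)]  # in practice, one pair per compared lexical item
shared = sum(classify(a, b) != "distinct" for a, b in pairs) / len(pairs)
print(classify(nksl, ksl), f"| shared rate: {shared:.0%}")
```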
Friday 12th July, Poster Session 3
TISLR 11, July 2013
Danny De Weerdt
Poster 3.19
University of Jyväskylä, Finland
Existential Constructions in Flemish Sign Language and Finnish Sign Language
(other sign language)
Existential constructions (ECs) are used to express the existence or presence of an
object or a person (McNally 2011). Functionally, ECs introduce important, new
referents within discourse, and these referents are indefinite (Givón 2001).
Typologically, ECs tend to have the order of Location (‘place’) followed by Located
Element (‘the one that exists or is present’). However, it is known that ECs exhibit
syntactic variation across languages, and the only obligatory syntactic part within ECs
is perhaps the pivot (i.e. the Located Element) (Francez 2007).
Lyons (1967) argues that ECs are semantically related to possessive and locative
constructions because they all tell something about an object that has a place.
Following this locative approach, Clark’s (1978) and Freeze’s (2002) typological
studies show that these three construction types are related syntactically. Kristoffersen
(2003) found that the order of constituents in existential, possessive and locative
constructions in Danish Sign Language resembles the order found in typological studies
of spoken languages.
The focus of this paper is describing the different mechanisms of expressing the
function of existence in Flemish Sign Language and Finnish Sign Language. The
study includes three main research questions: a) what different ways are there to
express existence, b) what is the word order in ECs and c) what are the differences
and similarities concerning ECs in both sign languages. The data for this study are based
on stimulus material from Perniss and Zeshan (2008). Four (near-)native signers from
each sign language were given four pairs of pictures and their task was to find
differences between their pictures. Conversations were videotaped, transcribed in
ELAN and analyzed from the functional point of view.
On the first question, the analysis has shown that both languages express the function
of existence in four different ways, as shown in Example 1. However, the
analysis also shows variation within each mechanism. For example, in certain cases it
is possible to omit the existential verb ‘have’.
On word order, ECs in both sign languages consist of three parts: Location, ‘have’
and Located Element. The word order is mainly Location preceding Located Element.
However, Location can be left out, as it can be retrieved contextually. The same holds
for ‘have’; the Located Element is the only obligatory part in ECs. A preposition
sign, a verb construction or a localized lexical sign can mark the Locative Relation
between Location and Located Element within these constructions. The choice of
these markers affects the place of Locative Relation within the construction.
In my presentation, I will present my data and a detailed analysis of ECs in both sign
languages, also focusing on their differences and similarities. For example, Flemish
Sign Language makes use of preposition signs to mark the Locative Relation between
Location and Located Element. On the other hand, Finnish Sign Language requires
the use of ‘have’ when the Located Element is expressed by means of a localized
lexical sign and Location is mentioned explicitly.
Friday 12th July, Poster Session 3
TISLR 11, July 2013
*David Corina, ^Laurie Lawyer, +Peter Hauser, >Elizabeth Hirshorn
& ^Deborah Cates
Poster 3.20
*UC Davis, Center for Mind and Brain & Dept. of Linguistics; ^UC Davis, Dept of Linguistics
+Rochester Institute of Technology; >Learning Research and Development Center University of Pittsburgh, USA
Lexical processing in deaf readers: An fMRI investigation of reading proficiency
Individuals with significant hearing loss often fail to attain competency in reading orthographic scripts which encode the
sound properties of spoken language. Nevertheless, some profoundly deaf individuals do learn to read at age-appropriate
levels. The question of what differentiates proficient deaf readers from less-proficient readers is poorly understood but
topical, as efforts to develop appropriate and effective interventions are needed. This study uses functional magnetic
resonance imaging (fMRI) to examine brain activation in deaf readers, comparing proficient and less proficient readers'
performance in a widely used test of implicit reading. Proficient deaf readers activated left inferior frontal gyrus and left
superior temporal gyrus in a pattern that is consistent with regions reported in hearing readers. In contrast, the less-proficient
readers exhibited a pattern of response characterized by bilateral middle frontal lobe activation (right > left) which bears
some similarity to areas reported in studies of logographic reading, raising the possibility that these individuals are using a
qualitatively different mode of orthographic processing than is traditionally observed in hearing individuals reading sound-based scripts. The evaluation of proficient and less-proficient readers points to different modes of processing printed English
words. Importantly, these preliminary findings allow us to begin to establish the impact of linguistic and educational factors
on the neural systems that underlie reading achievement in profoundly deaf individuals.
Friday 12th July, Poster Session 3
TISLR 11, July 2013
*Dietmar Roehm, *Julia Krebs & ^Ronnie Wilbur
*University of Salzburg, Austria; ^Purdue University, USA
Poster 3.21
Friday 12th July, Poster Session 3
TISLR 11, July 2013
*Ellen Ormel, ^Marcel Giezen & *Els van der Kooij
*CLS at Radboud University Nijmegen, Netherlands; ^LLCN at SDSU, USA
Phonological priming in deaf signers: Developing a continuous measure of
phonological overlap in Sign Language of the Netherlands (English)
Despite the availability of elaborate phonological models of sign phonology (e.g.
Brentari, 1998; Sandler, 1989), psycholinguistic studies on sign processing continue
to use a rather crude measure for the establishment of minimal sign pairs, i.e., signs
that share most of the phonological features but differ on one or more features. In
these studies, distinctions are generally based on full overlap for the parameters
handshape, location, movement, and orientation. In other words, variation within each
of the parameters is largely neglected and sign processing is implicitly assumed to be
insensitive to such differences. At the same time, we know from phonological models
that, for instance, a handshape consists of selected fingers and finger configuration,
and that the thumb may have a distinct role compared to the fingers. Furthermore,
studies have shown that lower-level distinctions impact perceptual identification (e.g.
Lane, Boyes-Braem, & Bellugi, 1976). This means that handshapes with a high
degree of overlap at the feature level might have to be treated differently from
handshapes that have less overlap (e.g., selected fingers in Figures 1a and 1b versus
selected fingers in Figure 1c). Similar distinctions between large and small degrees of
overlap can be made for the other phonological parameters.
Given the rigid measure of phonological overlap in previous studies, it might
not come as a surprise that the role for the different phonological parameters in sign
recognition appears inconsistent across these studies (e.g. Carreiras, Gutiérrez-Sigut,
Baquero, & Corina, 2008; Corina & Hildebrandt, 2002; Dye & Shih, 2006). In the
present study, we instead adopt a continuous measure of degree of overlap based on
detailed distinctions at the phonological level. Signers’ sensitivity to different degrees
of phonological overlap for each parameter is tested in a phonological priming
experiment in Sign Language of the Netherlands (SLN). Specifically, signs in our
study are described and compared at a fine-grained phonological level using the
Dependency model (Van der Hulst, 1993), as further developed for SLN by Van der Kooij
(2002). Iconically motivated features are described separately and are used
to control for effects of iconicity even at the level of phonological features. For any
pair of signs, the number of shared features is divided by the total number of features
that make up the two signs, resulting in a proportional measure of overlap with a
value between 0 (no overlap) and 1 (maximal overlap, i.e., identical signs). This
measure was then used to select prime and target sign pairs that reflect different
degrees of overlap for each phonological parameter.
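One natural set-based reading of this measure treats each sign as a set of features and divides the shared features by the features that make up the two signs combined, yielding 0 for disjoint signs and 1 for identical ones; the sketch below assumes that reading, with invented feature labels.

```python
# Sketch of a proportional overlap measure over feature sets:
# shared features / combined features, ranging from 0 to 1.
# The feature labels are invented placeholders.
def overlap(features_a: set, features_b: set) -> float:
    shared = features_a & features_b
    combined = features_a | features_b
    return len(shared) / len(combined)

sign_a = {"sel:index", "config:curved", "loc:chin", "mov:contact"}
sign_b = {"sel:index", "config:curved", "loc:forehead", "mov:contact"}

print(f"{overlap(sign_a, sign_b):.2f}")  # 3 shared / 5 combined = 0.60
```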
This study will result in a more precise measure of phonological overlap in
sign languages that can be used in sign processing studies. Moreover, the results from
the priming experiment will provide further insight into the nature and size of the cohorts of
signs that are active during sign processing, and lead to a better understanding of the
differential role of phonological parameters in sign recognition.
Poster 3.22
Friday 12th July, Poster Session 3
TISLR 11, July 2013
Felix Yim-Binh Sze & Betty Cheung
Poster 3.23
The Chinese University of Hong Kong
Development of discourse referencing in Hong Kong Sign Language narratives by Deaf/hard-of-hearing children (English)
This paper investigates the development of discourse referencing in the signed narratives of a group of Deaf/hard-of-hearing
(D/hh) children who study in a sign-bilingual (i.e. sign and spoken language) co-enrolment (i.e. D/hh and hearing students)
programme entitled The Jockey Club Sign Bilingualism and Co-enrolment in Deaf Education Programme. In this programme,
Hong Kong Sign Language and Cantonese are two major teaching languages.
Discourse referencing refers to the means by which referents are introduced, maintained, and reintroduced in a discourse.
Developmental patterns of discourse referencing in narratives provide a good window to children’s acquisition of nominal
forms and the process through which children gradually master the pragmatic knowledge of using appropriate forms to
meet the communication needs of the listeners (Wong & Johnston 2004). For hearing children, complete mastery of
discourse referencing in spoken languages is attained after the age of ten (Hickmann 2003). It is well-known that space plays
a crucial role in reference tracking in sign languages. This raises the question of whether space poses an additional hurdle to
D/hh children’s acquisition of discourse referencing, and what referencing strategies they adopt when their ability to use
space still falls short of the adult targets.
This paper addresses these issues by probing into the HKSL narratives of 16 D/hh children (with 1-3 years of sign language
exposure in a formal classroom setting) who participated in a story-retelling task adapted from Hickmann’s (2003) study.
Following Morgan’s suggestion (Morgan 2002, 2005; Morgan & Woll 2003) that nominal expressions, verb agreement, role
shift/constructed actions and entity classifiers all perform referencing functions in BSL, we look at these aspects in the D/hh
children’s narratives and compare them with the adult baseline data. Our findings basically concur with Morgan’s.
From an early age onwards, D/hh children use bare nouns to refer to the animate entities in a narrative most of the time,
particularly among those with the shortest duration of sign language exposure. Use of null forms is observed, and in
inappropriate contexts this gives rise to ambiguity in the interpretation of the intended referents. Role shift and agreement
verbs with not-always-consistent spatial inflections can be found in D/hh children with the longest duration of sign language
exposure. The use of classifier predicates over an extended discourse and of spatial indexing such as pronominals or
demonstratives for referent-tracking purposes remains rare. This contrasts with the narratives of native signing adults, in
which discourse referencing heavily relies on spatial anchoring of the referents. Among these 16 D/hh children, 5
participated in the same elicitation experiment for three more consecutive years. Such longitudinal data show similar
developmental patterns. Our findings point to the hypothesis that while D/hh children gradually develop their ability to
make clear reference to characters in signing narratives as required by the discourse functions, namely, introduction,
maintenance and reintroduction, it takes a much longer time for them to acquire adult-like strategies due to the cognitive
load of manipulating three-dimensional space.
Friday 12th July, Poster Session 3
TISLR 11, July 2013
Jana Hosemann, Nicole Altvater-Mackensen, Annika Herrmann
& Nivedita Mani
Georg-August University of Goettingen, Germany
Poster 3.24
Friday 12th July, Poster Session 3
TISLR 11, July 2013
*Jennifer Petrich, ^Brenda Nicodemus & *Lucinda O'Grady
Poster 3.25
*San Diego State University; ^Gallaudet University, USA
Spatial language in dialogue: The role of modality in creating shape-based referring
expressions
Language: ENGLISH
We used a referential communication task to investigate a) how shape-based referring
expressions are created and evolve across a dialogue in signed vs. spoken language (ASL vs.
English), b) how different linguistic resources for encoding shape information affect
communication efficiency, and c) cognitive and linguistic strategies for referring to difficult-to-name
shapes (i.e., Attneave shapes; see Figure).
Deaf ASL signers and hearing English speakers (10 pairs for each language) were
seated next to each other at a table, with two sets of twelve cards (each with a different
Attneave shape) laid out in front of them. A low divider allowed the participants to view each
other, but blocked their view of the other person’s cards. The stimuli were laid out in front of the
Director in a pre-arranged sequence, and the same figures were placed in front of the Matcher,
but in a completely different sequence. The Director’s task was to get the Matcher to quickly and
accurately arrange his or her shapes to match the target order through shared dialogue. After
the arrangements had been matched, the Director’s and Matcher’s shapes were placed in two
new random orders, and the procedure was repeated with the Director’s new sequence as the
target. Participants carried out the matching task six times (each time was considered one trial).
The results revealed intriguing differences and similarities in how shape-based referring
expressions were created and evolved in ASL vs. English. For both languages, shapes were
initially described with many words/signs. Expressions were dramatically reduced in subsequent
trials as participants mutually accepted a referring expression for each shape. Expressions were
coded as being either a) shape-specific (e.g., “the two triangles”; SASS classifier constructions),
b) lexical labels (e.g., “the arrow”; BELL), c) mixed (a combination of expression types), or d)
other (e.g., “the first one as last time”). English speakers began with a preference for shape-specific
expressions over lexical labels (59.2% vs. 35.5%), but quickly transitioned (by trial 3) to
a preference for lexical labels (for the last trial, 39.2% vs. 57.5% for shape-specific vs. lexical
labels). In contrast, ASL signers began with an even larger preference for shape-specific
expressions (75.8% vs. 22.5%), and this preference was maintained throughout. However, by
the final trial, the preference for shape-specific expressions was reduced (59.2% vs. 40.8%).
For both groups, the use of lexical labels was advantageous, evidenced by a significant
correlation between use of lexical labels and time to complete the task (r = -.520, p < .001).
Finally, classifier constructions changed from segmented descriptions of parts of a shape to
reduced holistic depictions of the entire shape.
The iterative and conversational nature of this referential communication task provides
a novel way to experimentally investigate the creation, stabilization, and evolution of referring
expressions over a short time span. The patterns of lexicalization and simplification that we
observed offer a rich source of predictions for how lexical forms might be created and change
across a longer time span, as seen in emerging sign languages.
Friday 12th July, Poster Session 3
TISLR 11, July 2013
*Jenny Singleton & ^Melissa Herzig
Poster 3.26
*Georgia Institute of Technology; ^Gallaudet University, USA
Translating Theoretical Research into Application & Practice
This presentation will focus on the theoretical rationale and application of the concept of Translational Research in
studies involving Deaf Participants. As researchers focusing on theoretical issues in signed languages and Deaf
communities, many of us share the goal of one day bringing our scientific findings to practice, and in some way having
a broader impact upon the well-being of deaf individuals and improving their education. Implementing such
translations is not so easy. In our Center-based research environment, we rely upon several models of translation
from “theory to application” or evidence-based practice, including Community-Engaged Research (Ross et al., 2010),
Two-way Translation, and Implementation Science (Kelly, 2012). We shall present our theory of change, i.e., how our
own scientific findings, as well as findings from other researchers, provide a rationale for a proposed innovation. We
will discuss several examples of evidence-based “products” that are being developed in our Center, describing the
journey (the core components of a program of research translation), the ethics of Community-Engaged Research, and
how we apply rigorous standards to evaluate whether our innovations will “work” or lead to the kind of change we
hope for in promoting early ASL/English bilingualism among U.S. Deaf infants and children. The aim of the
presentation is to illustrate how sign language researchers, through a process of collaboration, innovation, a high
standard of ethical practice, and community engagement, can bring their research findings to bear meaningfully on
social and policy change.
Presenting in ASL
Friday 12th July, Poster Session 3
TISLR 11, July 2013
*Jerry Schnepp, ^Rosalee Wolfe & ^John McDonald
Poster 3.27
*Bowling Green State University; ^DePaul University, USA
Modeling synchrony and co-occurrence for nonmanual signals in American Sign Language
Any automatic system for translating spoken to sign language must be capable of portraying all aspects of the
target language, including facial nonmanual signals (NMS). This is difficult because facial nonmanuals can co-occur, each
potentially influencing facial features in common (Neidle, Nash, Michael, & Metaxas, 2009; Sandler & Lillo-Martin, 2006;
Wilbur, 2000, 2009).
The problem is compounded when affect is incorporated into sign synthesis. For example, both the WH-question
marker and angry affect lower the brows, and both the YES-NO question marker and happy affect raise the brows. If a
signer asks a WH-question in a happy manner, there are two competing influences on the brows: one is a downward
motion, and the other is upward (Reilly & Bellugi, 1996; Weast, 2008).
Previous computational models hampered avatar technologies because they did not fully support synchrony and
co-occurrence. Researchers were limited to choosing a single facial process instead of portraying all of the co-occurring
influences (Platt & Badler, 1981; VCom3D, 2012).
Our computational model overcomes this limitation by using linguistic notation as a metaphor. This notation can
indicate co-occurring facial nonmanual signals and maps naturally onto a software user interface. Because tiers in the
interface are linguistically, not anatomically, based, it is possible to specify nonmanual processes that influence facial
features in common. The interface facilitates quick synthesis and editing of ASL via a signing avatar. It is abstract enough
that animations can be drafted quickly, while still providing access to fine tuning details for a greater level of control.
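Purely as a speculative illustration of the idea (not the authors' implementation), one can imagine each active tier contributing a signed displacement to a shared facial channel, with the renderer summing co-occurring contributions per frame instead of discarding all but one; the tier names and values below are invented.

```python
# Speculative sketch: co-occurring processes each contribute to a
# shared facial feature (here, brow height) and are summed, so no
# single process wins outright. All names and values are invented.
tiers = {
    "wh-question": -0.6,   # grammatical brow lowering
    "happy-affect": +0.4,  # affective brow raising
}

def brow_height(active: dict, clamp: float = 1.0) -> float:
    """Combine all co-occurring influences, clamped to the feature's range."""
    total = sum(active.values())
    return max(-clamp, min(clamp, total))

print(f"{brow_height(tiers):+.2f}")  # -0.20: both influences remain visible
```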
We conducted tests to evaluate the clarity and acceptability of the synthesized ASL produced with this technique.
Deaf participants viewed animations displaying combinations of competing nonmanual signals and affect. Results indicate
that even when multiple processes simultaneously influence a facial feature, each individual process is perceived.
The computational model is potentially useful for language study. It provides researchers with a highly detailed
level of control and a consistent repeatability unattainable by humans. Avatars can perform to exact specifications of
timing, alignment and location, and never suffer fatigue. This allows researchers to synthesize utterances that conform
exactly to a theoretical model, which can then be evaluated.
As a feasibility study, we created animations that displayed nonmanual signals and classifiers involving size. We
portrayed the phrase “a {large, medium, small} cup of coffee”. For each classifier, participants viewed one version in which
the nonmanual signal was consistent with the classifier, one version where the nonmanual signal conflicted with the
classifier, and one version with no nonmanual signal.
Although animations without nonmanual signals were comprehensible, the data indicate that the nonmanual
lends context to the classifier. The presence of a nonmanual increases perceived clarity. Notably, this occurred even in most
cases when the nonmanual conflicted with the size classifier. This is consistent with a common participant remark, “You
always need to have a nonmanual”.
Future work includes automatic synthesis of other language processes.
Friday 12th July, Poster Session 3
TISLR 11, July 2013
Jia Li, Scholastica Lam & Cat H-M. Fung
Poster 3.28
Centre for Sign Linguistics and Deaf Studies, CUHK, Hong Kong
Acquisition of nonmanual adverbials in Hong Kong Sign Language
Nonmanual adverbials allow us to examine how children develop their linguistic facial behaviors (Anderson & Reilly
1998). Unfortunately, only a few studies on the acquisition of nonmanual adverbials, largely on ASL, have been conducted
(Anderson & Reilly 1998, Reilly & Anderson 2002, Reilly 2006). These studies show that deaf children of deaf parents
acquiring ASL as their native language master manual adverbs earlier than the nonmanual adverbials that share the same
meaning. Nonmanual adverbials emerge at around age 2;0, while manual adverbs sharing the same
meaning emerge five months earlier. By age 3;0, nonmanual adverbials modifying predicates become more productive. But
this may not be true for late learners. This paper aims at exploring how deaf children of hearing parents who initially receive
only oral input and later both sign and oral language input acquire nonmanual adverbials in Hong Kong Sign Language
(HKSL).
Two experiments were conducted to investigate the acquisition of nonmanual adverbials by severely and profoundly deaf late
learners who receive HKSL input in a sign bilingual programme. Experiment 1 consists of a comprehension task (i.e. a
signing selection task) and a production task (i.e. a video/animation description task) where three nonmanual adverbials were
examined. Experiment 2 is another comprehension task (i.e. an animation-signing matching task) that investigated six
nonmanual adverbials. All tested nonmanual adverbials have manual counterparts of the same meaning. 12 deaf children
aged from 5;9 to 10;2 participated in Experiment 1 while Experiment 2 involved 20 deaf children aged from 5;3 to 13;2.
Based on years of exposure to HKSL, they were divided into three and six groups in Experiments 1 and 2, respectively.
Deaf children’s performance in Experiment 1 suggests that most of them have not acquired nonmanual adverbials, although
performance is better in the groups with more years of exposure to HKSL. The accuracy of the oldest group, which has the most
years of exposure to HKSL, is 50% in the production task and 66.67% in the comprehension task of Experiment 1.
Most errors produced by the deaf children are omissions of nonmanual adverbials. However, these errors are overcome once
they learn that nonmanual adverbials must be co-articulated with the predicates. A notable increase in the correct use of
nonmanual adverbials can be observed in the second group, whose accuracy is 57.14%, whereas the youngest group’s is 0%.
Our deaf children also used manual adverbs to express the same meaning when nonmanual adverbials were absent. But the
use of manual adverbs declined as the use of nonmanual adverbials improved. Given the different methodology of
Experiment 2, the deaf children performed best in judging grammatical signing (accuracy rates all above 75%) but
worst in rejecting ungrammatical signing lacking nonmanual adverbials (accuracy is only 41.67% in the oldest group, with 5
years of exposure to HKSL). All these data suggest that deaf late learners are not sensitive to nonmanual adverbials initially,
but progress can be seen as years of exposure increase.
Friday 12th July, Poster Session 3
TISLR 11, July 2013
Jill Weisberg, Stephen McCullough, Jennifer Petrich & Karen Emmorey
Poster 3.29
San Diego State University Research Foundation, USA
The neural correlates of comprehending ASL-English code-blends
(ENGLISH)
Bimodal bilinguals fluent in English and American Sign Language (ASL) often “code-blend”, producing signs and
words simultaneously, when conversing with each other. Recent evidence indicates that code-blending facilitates
comprehension in both languages. We investigated the neural basis of this facilitation using fMRI to examine
which brain regions are recruited when bimodal bilinguals perceive code-blends. We expected code-blends to
recruit a combination of brain regions that are active when comprehending sign or speech alone, but those
regions may be differentially modulated by the perception of simultaneous sign and speech. In addition, code-blend comprehension may engage regions involved in lexical, semantic, or multimodal sensory integration.
During scanning, 14 hearing ASL-English bilinguals (CODAs) (mean age = 27.0 years, education = 15.5 years, 7
female) viewed audiovisual clips of a native hearing signer producing a) an ASL sign; b) a spoken English word;
or c) a sign and spoken word simultaneously (translation equivalents). Participants made a semantic decision (‘Is
it edible?’) for 60 unique items in each condition, and responded via button press. For the control task participants
viewed the model at rest and indicated when a dot superimposed on the chin changed color and/or if an
accompanying tone changed pitch. Each of three imaging runs (3T, TR = 2 s, 30 sagittal slices with voxel size =
3.75 x 3.75 x 4.5 mm) contained two 30 s blocks of each condition (10 trials/block), in pseudorandom order, plus two
20s fixation periods as a low-level baseline condition. Individuals’ BOLD responses were estimated using multiple
regression, and parameter estimates for each condition were entered into a group-level mixed effects ANOVA.
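For readers unfamiliar with this kind of analysis, the sketch below shows generic block-design multiple regression (boxcar regressors fit by least squares); the block timings and signal are invented, hemodynamic convolution is omitted for brevity, and none of this is the study's actual pipeline.

```python
# Generic block-design GLM sketch (HRF convolution omitted).
# Timings, amplitudes, and noise are invented placeholders.
import numpy as np

n_vols = 150                           # volumes at TR = 2 s
design = np.zeros((n_vols, 3))         # columns: sign, speech, code-blend
design[10:25, 0] = 1                   # one 30 s block per condition
design[40:55, 1] = 1
design[70:85, 2] = 1
X = np.column_stack([design, np.ones(n_vols)])  # add intercept

rng = np.random.default_rng(0)
true_betas = np.array([0.5, 1.2, 1.0, 100.0])
voxel = X @ true_betas + rng.normal(0, 0.3, n_vols)  # simulated voxel

betas, *_ = np.linalg.lstsq(X, voxel, rcond=None)
print(betas[:3])  # per-condition parameter estimates for this voxel
```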
Direct comparison of brain responses during speech versus sign comprehension revealed greater activation for
speech in bilateral superior temporal gyrus (focused near Heschl’s gyrus), and for sign in bilateral
occipitotemporal (MT/V5, fusiform gyrus), and parietal cortices, and in left prefrontal/inferior frontal gyri. This
pattern reflects, in part, differential sensory engagement required by the modality of each language. Simultaneous
perception of sign and speech (code-blends), relative to perception of each alone, led to increased activation in
the relevant modality-specific regions, indicating that code-blend perception recruits a combination of brain
regions active for each language alone. No regions responded more during speech alone than during code-blend
comprehension, indicating that code-blends did not influence neural activity related to speech comprehension. In
contrast, several regions showed reduced activation during code-blend comprehension compared to sign alone,
including bilateral occipitotemporal cortex (area MT/V5), left precentral gyrus, and right anterior insula. Reduced
activation in these regions during code-blend perception may be a neural reflection of the behavioral facilitation
previously observed for comprehension of code-blends, compared to ASL signs alone. In addition, given that
hearing ASL-English bilinguals tend to be English-dominant, greater neural activity when comprehending ASL
alone may reflect the increased effort required to process ASL without the redundant cues provided by spoken
English. Overall, the results indicate that code-blends reduce neural activity in sensory processing regions
associated with language perception, rather than in regions associated with lexico-semantic processing (i.e., left
frontotemporal regions).
Friday 12th July, Poster Session 3
TISLR 11, July 2013
Joanna Filipczak & Piotr Mostowski
Poster 3.30
Section for Sign Linguistics, University of Warsaw, Poland
Friday 12th July, Poster Session 3
TISLR 11, July 2013
Jonathan Keane, Diane Brentari & Jason Riggle
University of Chicago, USA
Poster 3.31
Friday 12th July, Poster Session 3
TISLR 11, July 2013
*Jordan Fenlon, ^Adam Schembri & +Rachel Sutton-Spence
Poster 3.32
*Deafness Cognition and Language (DCAL) Research Centre, UCL, UK; ^La Trobe University, Melbourne, Australia
+University of Bristol, UK
Turn-taking and backchannel behaviour in BSL
This paper reports findings from an ongoing study investigating turn length and backchannel responses in dyadic
conversations in British Sign Language (BSL) and their relationship to social factors, such as age and gender. Previous
research has revealed gender differences concerning turn length and frequency of backchannel responses. For example, it is
reported that male English speakers tend to hold the floor longer than women in mixed-sex dyads (Coates, 2004) and that
men and women’s backchannel behaviour (e.g., nodding, shaking the head, making small vocalisations such as ‘hmm’ or
‘yeah’, see Duncan (1974)) differ in same sex and mixed sex dyads (Bilous & Krauss, 1998). The relationship between
gender and backchannel use is, however, reported to vary over time and according to culture, language and discourse type
(Feke 2003, Dixon & Foster 1998).
Although few studies have examined turn length and backchannel responses using signed language data, some findings have
been reported. Nowell (1989) found no differences in time spent signing in six American Sign Language (ASL) mixed-sex
dyads during interviews as well as no differences in signed backchannels. In contrast, a small-scale pilot study into BSL
suggested that men produce fewer non-manual backchannels in conversations than women (Sutton-Spence, 2000).
Additionally, Mesch et al. (2011), investigating Swedish Sign Language found that younger signers produced fewer
backchannels than older signers.
This paper reports findings from a study in progress investigating these aspects of conversational behaviour in 30 informal
British Sign Language (BSL) conversations (taken from the BSL Corpus Project, see Schembri et al., 2011) in mixed-sex and
single-sex dyads involving older and younger deaf native and non-native signers (60 participants in total). Using ELAN, we
coded ten minutes of each conversation, noting when each signer had the floor, and marking any time the addressee made
any manual or non-manual backchannel response. All annotated data were then exported and analysed using Rbrul
multivariate statistical software.
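To illustrate the bookkeeping that precedes such an analysis (the statistics themselves were run in Rbrul, an R tool), the sketch below totals floor-holding time and counts backchannels per signer from exported ELAN-style rows; the tier names and rows are invented.

```python
# Illustrative summary over exported ELAN-style annotation rows:
# (signer_id, tier, start_ms, end_ms). All rows here are invented.
from collections import defaultdict

rows = [
    ("S01", "floor", 0, 12400),
    ("S02", "backchannel", 3100, 3600),
    ("S02", "floor", 12400, 20100),
    ("S01", "backchannel", 15000, 15400),
]

floor_ms = defaultdict(int)
backchannels = defaultdict(int)
for signer, tier, start, end in rows:
    if tier == "floor":
        floor_ms[signer] += end - start
    elif tier == "backchannel":
        backchannels[signer] += 1

for s in sorted(floor_ms):
    print(s, floor_ms[s] / 1000, "s on the floor,",
          backchannels[s], "backchannel(s)")
```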
An overall analysis involving all 60 signers suggested no significant differences in the amount of time spent signing by
signers of either gender in same-sex or mixed dyads. However, we did find that older signers (50 and above) prefer significantly
longer turns. Furthermore, a preliminary analysis based on 30 signers revealed no significant differences between gender and
age groups in the amount of time spent on manual or non-manual backchannels. It appears that female dyads spent longer
nodding than male or mixed dyads, although this difference was not found to be significant. This paper will present a
complete analysis of non-manual and manual backchannels (incorporating the remaining 50% of the dataset) outlining any
potential effects due to social factors. These findings appear to suggest that, unlike English, sociolinguistic variation in
conversational behaviour in BSL may not be conditioned by gender. We conclude by discussing the implications of this
study for the sociolinguistics of signed and spoken languages.
Friday 12th July, Poster Session 3
TISLR 11, July 2013
Joshua Williams
Poster 3.33
Indiana University, USA
Friday 12th July, Poster Session 3
TISLR 11, July 2013
Julie Rinfret, Anne-Marie Parisot, Karl Szymoniak
& Suzanne Villeneuve
Poster 3.34
Université du Québec à Montréal, Canada
Methodological issues in automatic recognition of pointing signs in Langue des signes
Québécoise using a 3D motion capture system
LANGUAGE: English
Sign languages use space to identify and track referents within a discourse. For example,
established loci can be pointed to in order to refer anaphorically to previously introduced
referents. Prior work on the use of space in sign languages is mainly based on qualitative
analysis. With respect to the articulatory possibilities in the production of sign discourse, there is
an increasing interest among sign language researchers in understanding and quantitatively
describing the use of space, especially for sign recognition and animation purposes. Jantunen et
al. (2012) point out that most motion capture studies on sign language have consisted of
relatively small sets of isolated expressions (single signs (Wilbur, 1990), phrases (Tyrone et al.,
2010)), and that the collection of biomechanical discourse-type data has been marginal (cf.
Duarte and Gibet, 2010). Even though recording, processing and analyzing biomechanical data is
time consuming, we believe that the understanding of the interaction of signs in space in
discourse has to be analyzed using discourse data. This requires a robust marker set in order to
automatically recognize and identify specific signs such as pointing signs.
The aim of this paper is to discuss the methodological issues we have encountered collecting 3D
discourse data in Langue des signes québécoise (LSQ) for morphosyntactic, semantic and
pragmatic analysis of the use of four strategies of spatial association (pointing sign, localization,
eye gaze, body shift). We will focus on one of the issues that we believe to be fundamental in
biomechanical data collection, i.e. marker placement on the hand to ensure the possibility of
automatic recognition of pointing signs. Due to the great variability of hand shapes and types of
contact in sign languages, the movements of the fingers involved are far less regular than those
in handling activities (e.g. prehension tasks). Thus, post-processing (manual identification of
markers) becomes a very complex task, especially if the number of markers is large. One of
the problems that we face as researchers is the lack in the literature of a described procedure for
collecting such data, i.e. the description of a minimal marker set that can be used without
affecting the reliability of the data.
We will discuss, using LSQ discourse data, the evolution of our marker set (limits of the former
marker set, strengths of the current one) and the biomechanical criteria used to automatically isolate
pointing signs in Visual-3D (a biomechanical analysis software package). Those criteria are: 1)
the variation of the distance between markers on the index and middle fingers, 2) the variation of
the angle based on markers on the index and the marker on the protrusion of the radius on the
internal side of the wrist, 3) the extended position of the index finger with respect to the position
of the other fingers. We will show that both the marker set and the biomechanical criteria allow
the automatic recognition of pointing signs in LSQ without false positives (signs incorrectly
identified as pointing signs). This paper will benefit researchers interested in automatic
recognition of signs.
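A minimal sketch of how the three criteria might be applied per frame of 3D marker data follows; the thresholds, marker names, and coordinates are hypothetical placeholders, not the values used in the study.

```python
# Sketch of per-frame pointing detection from 3D markers (units: m).
# Thresholds and coordinates are hypothetical placeholders.
import numpy as np

def is_pointing(index_tip, middle_tip, index_base, wrist_radius,
                gap_min=0.04, angle_max=30.0):
    """Criterion 1: index and middle fingertips far apart (index extended
    while the middle finger is folded). Criteria 2-3: the index roughly
    aligned with the wrist-to-hand direction, via the radius marker."""
    gap = np.linalg.norm(index_tip - middle_tip)
    v1 = index_tip - index_base        # direction of the index finger
    v2 = index_base - wrist_radius     # wrist-to-hand direction
    cos = v1 @ v2 / (np.linalg.norm(v1) * np.linalg.norm(v2))
    angle = np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))
    return gap > gap_min and angle < angle_max

frame = dict(index_tip=np.array([0.30, 0.10, 1.20]),
             middle_tip=np.array([0.24, 0.08, 1.13]),
             index_base=np.array([0.26, 0.09, 1.12]),
             wrist_radius=np.array([0.22, 0.08, 1.04]))
print(is_pointing(**frame))  # True for this illustrative frame
```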
Friday 12th July, Poster Session 3
TISLR 11, July 2013
Keiko Sagara & Ulrike Zeshan
Poster 3.35
University of Central Lancashire, UK
TISLR 11, July 2013
Friday 12th July, Poster Session 3
Keren Cumberbatch
Poster 3.36
The University of the West Indies, Jamaica
CHALLENGING THE UNIVERSAL ON DOUBLING STRUCTURES
(To be presented in English)
Many interesting syntactic constructions are found in complex clauses in Jamaican Sign Language (JSL). One such construction is doubling at the
clausal level.
The following examples are from a JSL data corpus.
(1)  mouthing:            have
     FIVE CHILD FIVE
     have five child five
     ‘She has five children.’
(2)  IX.1.SG AGAINST DIRTY KITCHEN IX.1.SG AGAINST
     I against dirty kitchen I against
     ‘I am against dirty kitchens.’
(3)  IX.1.SG ACCEPT WHAT TEACHER TELL IX.1.SG ACCEPT
     I accept what teacher tell I accept
     ‘I accepted what the teacher told me.’
Sandler and Lillo-Martin (2006) refer to the work of Petronio (1993) and Quadros (1999) in their discussion of copied
sentence elements. None of the theoretical frameworks provided can account for the doubling of sentence elements
larger than the noun phrase. Petronio (1993, 135) outlines five syntactic properties of double constructions in ASL.
JSL violates property (ii) which states that the doubled construction is X0 not XP meaning that “only a single sign, a
head, can occur in the doubling construction [and] full phrases … are not allowed” (Sandler and Lillo-Martin 2006,
418). JSL violates this in (2) and (3) where doubling is seen at the clausal level. From the data corpus, this repetition
of main clauses, like IX.1.SG ACCEPT, appears to be productive in JSL. This is at variance with what has been
previously reported for sign languages.
In (2) and (3), two signs were doubled, and based on the aforementioned frameworks, each of these signs is actually an
XP. IX.1.SG is an NP in each sentence. AGAINST and ACCEPT are VPs. Two XPs are doubled. This goes against Petronio
(1993) which says that not even one XP can be doubled. Quadros (1999) explores doubling in Brazilian Sign Language
(LSB). LSB allows a sentence with more than one clause, excluding relative clauses, to double an element from each
clause. This still does not account for JSL allowing two elements from one clause to be doubled. It should also be noted
that the rhythm of (3) made it clear that the second occurrence of IX.1.SG was as the subject of the clause IX.1.SG
ACCEPT and not as the object of TELL.
At an abstract level, JSL differs from other sign languages in the doubling constructions that it allows. It can be argued
that the clause or noun phrase is being repeated at the underlying level with ellipsis of elements of the clause or of the
noun phrase at the surface level. Only the elements that the signer wishes to mark pragmatically are realised at the
surface level. The signer is motivated to place focus on an element (or elements) in response to earlier discourse or to make
an assertion. Quantifier slots in (1) were doubled at the surface when a question was being answered. In (2), the
signer was asserting his stance on kitchen cleanliness. The NP and VP slots in the clause were doubled at the surface.
This JSL doubling phenomenon requires a deeper discussion.
Friday 12th July, Poster Session 3
TISLR 11, July 2013
*Martha Tyrone & ^Claude Mauk
Poster 3.37
*Haskins Laboratories; ^University of Pittsburgh, USA
ASL Locations as Dynamic
Location is one of the phonological parameters of sign language, and as such, it has been integrated into all
major theories of sign phonology. For the most part, phonological theories have described locations as if they were
stationary (cf. Brentari, 1998; Sandler, 1989). Recent kinematic data suggest that the forehead as a location does not
remain still during the production of forehead-located signs, but instead, it is an active articulator that moves to
facilitate contact between it and the hand (Mauk & Tyrone, 2012). The current study explores this point further, and
examines movement of the head and torso during sign production for native and non-native ASL signers.
For this study, participants were asked to produce scripted ASL phrases that were presented to them as
written English glosses with accompanying illustrations. Individual signing trials included 18 multi-phrase utterances
and had a duration of about a minute and a half. The participants directed their productions to a Deaf research
assistant. Signing data were collected with a Vicon motion capture system. Thirty reflective markers were attached to
participants’ articulators (7 on each arm, 7 on the head, and 9 on the torso) and tracked at a 100Hz sampling rate. The
data presented here are from four ASL users: two who were native signers from Deaf families and two who acquired
the language at school. In this presentation, we will analyze all three dimensions of movement (vertical, lateral and
horizontal).
Our preliminary results suggest that what have traditionally been viewed as passive articulators in ASL, the
head and torso, move substantially during signing. Figure 1 shows the locations of the right hand (blue), the clavicle
(green) and the left forehead (red) for one native signer. While the clavicle marker has the smallest movement range,
it is not a fixed point in space, shifting along one of the axes by 5 cm. By comparison, the forehead marker moves by
13.5 cm, and the dominant hand moves by 21.5 cm along the same axis. Thus, the forehead movement is roughly 60%
of the extent of the movement of the hand marker. Figure 2 shows the same marker data from a non-native signer.
For this signer, the movement extent of the forehead marker is much smaller relative to the movement of the hand
(38%). This indicates that while all signers move the forehead to some extent, native signers of ASL may have larger
movements of the head and body during signing.
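To make the reported measure concrete, the following is a minimal sketch of how movement extent along one axis can be computed from motion-capture output; the array shapes, marker names and random stand-in data are illustrative assumptions, not the authors' actual Vicon pipeline.

```python
import numpy as np

def movement_extent(positions, axis=0):
    """Range of motion (max minus min) of one marker along one axis.

    positions: (n_frames, 3) array of x/y/z coordinates in cm,
    e.g. 9000 frames for a 90 s trial sampled at 100 Hz."""
    return positions[:, axis].max() - positions[:, axis].min()

# Hypothetical stand-in data; real values would come from the Vicon export.
rng = np.random.default_rng(0)
hand = rng.normal(0.0, 5.0, size=(9000, 3))
forehead = rng.normal(0.0, 3.0, size=(9000, 3))

# Forehead movement as a proportion of hand movement, as in the abstract
# (e.g. 13.5 cm / 21.5 cm, reported there as roughly 60%).
ratio = movement_extent(forehead) / movement_extent(hand)
print(f"forehead/hand extent: {100 * ratio:.0f}%")
```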
These findings are consistent with articulatory phonology, which proposes that articulatory gestures are the
minimal units of language structure (Browman & Goldstein, 1992). These gestures are not defined as the movements
of individual articulators, but as the coordinated actions of multiple articulators, for example, the closure of the lips
for a bilabial stop. More broadly, the active role of the head and body in sign production is relevant to sign phonology
in general, and could affect how we conceptualize the phonological structure of signs.
Friday 12th July, Poster Session 3
TISLR 11, July 2013
Mirko Santoro & Carlo Geraci
Poster 3.38
Institut Jean-Nicod, France
Weak crossover in Italian Sign Language
Language: ASL and English
Introduction
Although the pronominal system of SLs is influenced by the use of signing space (a.o. [1], [2] and [3]), it
crucially interacts with other syntactic properties of human language, such as the principles of binding
theory [2]. In this talk we address the interaction between binding and syntactic movement by
investigating weak crossover (WCO) in LIS. While crossover phenomena have been partially investigated in
ASL ([5], [6]), no study offers an in-depth investigation of WCO in SLs.
The picture
The examples in (1) illustrate a case of WCO in English. The bound reading of the pronoun in the
subordinate clause is possible in (1a-b), but is crucially unavailable in (1c). (1c) is a case of WCO: the
wh-phrase crosses and binds a non-commanding pronoun. The resulting sentence is ungrammatical.
Two influential explanations for this fact have been proposed in the literature, the Leftness condition
(LC) (2) [7], and the Bijection Principle (BP) (3) [8].
LIS & WCO
We start by illustrating the three properties of the structure of LIS that are essential to understand how
WCO is generated in the language: a) LIS is a head-final language (4), b) wh-movement targets the
right periphery of the clause (5) [10], c) sentential complements are displaced either to clause-final (6a)
or clause-initial position (6b) [9]. WCO in LIS is shown in (6-7). (6a) shows that the sentential
complement follows the matrix verb THINK. The ungrammatical example in (7a) is a case of WCO:
the wh-pronoun moves from the subject position, targeting the right periphery of the clause. In doing so, it
crosses the sentential complement containing a possessive marker. Forcing a bound reading generates
ungrammaticality. Example (7b) shows that wh-movement per se does not generate ungrammaticality.
However, example (7c) shows that when linear crossing of the wh-phrase is avoided by topicalizing the
sentential complement in clause-initial position (an option available with the verb SAY but not with
THINK), WCO is not observed. The same pattern is observed with adjunct clauses (e.g. if-clauses vs.
because-clauses, cf. (8)).
Theoretical issues & solution
Neither the LC nor the BP is able to capture why (7c) is acceptable and (7a) is not. The LC predicts both
sentences to be equally grammatical, while the BP predicts both to be ungrammatical. Notice however,
that modulo small structural differences, (7a) is the syntactic mirror image of (1c), a case of WCO. The
issue is that while the LC is proposed in a framework allowing leftward movement only, the data from
LIS show that the same constraint is operative with rightward movement. To capture WCO in both
configurations we propose (9).
Conclusion
The data from LIS show that genuine cases of WCO exist in SLs and support the analysis of rightward
wh-movement in LIS. However, WCO in LIS poses a non-trivial problem for the main accounts
currently available. Our proposal captures WCO in both spoken and signed languages in a simple and
elegant way.
Friday 12th July, Poster Session 3
TISLR 11, July 2013
*Rachel Magid & ^Jennie Pyers
Poster 3.39
*Massachusetts Institute of Technology; ^Wellesley College, USA
Friday 12th July, Poster Session 3
TISLR 11, July 2013
*Rachel Sutton-Spence & ^Penny Boyes Braem
Poster 3.40
*University of Bristol, UK; ^Forschungszentrum für Gebärdensprache, Switzerland
A comparison of artistic mime and sign language poetry
We report on a study comparing artistic mime and the more mimetic elements of sign language poetry. To express complex
ideas and feelings artistically, most poets use spoken words, while mime artists use their body without words. For signers,
however, there are no spoken words and the medium for the poetry is the body. Previous sign language studies referring
to mime focused on the difference between signed vocabulary and mimed representation of referents, often with a social-political agenda to demonstrate that sign languages are ‘language’ (akin to spoken language) rather than ‘not language’ (akin to
mime). Recognising unhesitatingly that sign languages are languages, we ask what sign and mime have in common,
especially in signed poetry, an art form that often goes beyond the conventional lexicon.
Mime and signed poetry have in common a visual-corporal modality and production of basic iconic images, generated by an
underlying cognitive iconicity. We also expect differences in the depth of linguistic encoding embodied by sign language conventions and by pantomimic conventions.
In previous comparative studies (e.g. Klima and Bellugi 1979 or Eastman 1989), informants were usually people
knowledgeable in both sign language and mime techniques. In this study, in order to keep the two art forms as separate as
possible, four leading British Sign Language poets (not trained in mime) and four American professional mime artists (who
knew no sign language) participated in the research. We asked both groups to try out and discuss together ways to
anthropomorphize a range of non-humans (animals, objects and qualities) by ‘becoming’ them and attributing human
characteristics to them. Our specific aims were to find out how sign language poets and skilled mime artists compare in
what they portray in their anthropomorphic performances, and in the way that they build up these performances.
Each group was videotaped separately, documenting the ‘shared thinking’ about the processes they went through to build
up their representations and the finished improvisations. We analysed the discussions and improvisations, comparing
responses to the task and differences in techniques/form.
Responses to the task were often similar, but differences included the representation of multiple characters, linguistic and non-linguistic communication between characters, anthropomorphisation of abstract qualities, and representations of lying, negating or ambiguity.
In techniques and form, we noted differences in Transfer of Person and Form (Cuxac & Sallandre 2008), individual and
combined transfers, the frame of movement (the stage vs. the signing window), ways to show size and form, and the use of
cinematic techniques.
The similarities we see between the signers and non-signers clearly reveal a shared way in which humans can use their body to show concepts involving actions and descriptions. We suggest that some of the differences observed come from the differing needs and abilities of their audiences to understand the performances. Differences in shared cultural behaviours, topics and allusions, as well as in the performers' physical skills, may also account for our observations. The types of iconicity represented may be explained by Grounded Cognition theories, which predict a preference for handling gestures in mime.
Friday 12th July, Poster Session 3
TISLR 11, July 2013
*Russell Richie & ^Charles Yang
Poster 3.41
*University of Connecticut; ^University of Pennsylvania, USA
Elicitation and analytic methods for documenting the lexicons of emerging signed languages
Languages of presentation: English and ASL
Emerging sign languages provide unique opportunities to investigate the course of language emergence, as well as
the contributions of input and learner to language acquisition. While much work has examined emerging sign languages'
syntax and, to a lesser extent, phonology, less attention has been paid to their lexicons. This paucity seems at least partially
due to a lack of appropriate methods and analyses, which present challenges over and above those of documenting
established (signed or spoken) languages. The present work outlines these challenges, and proposes new means to address
them.
A central challenge is that, by definition, emerging sign languages are not fully shared among the languages’ users
or investigator(s). Thus, only minimal task instructions can be reliably communicated to participants. For example, we
cannot instruct participants to (A) give words for unimageable concepts, which abound on the Swadesh list and derived lists;
(B) give their word for an object/concept rather than describe it; or (C) respond to basic (e.g. dog) rather than superordinate
(e.g. animal) or subordinate (e.g. beagle) levels. We circumvent (A) by using only imageable objects for stimuli (e.g. boy,
cloud, bucket). We surmount (B) and (C) by structuring our stimuli so that they themselves constrain participants’ responses:
each item was represented on a computer screen by three tokens of that object (thus encouraging participants to extract the
relevant category level), and was presented in a 'contrast set' with 3 other objects that occur in similar real-world contexts
(e.g. cowboy hat, baseball hat, bandana, jacket). These three other objects then disappeared while the first object moved to
the center, increased in size, and then disappeared (to prevent participants from pointing at the screen, as they did with
previous static stimuli).
While such a dataset is amenable to many research questions and analyses, we focus on analyzing
conventionalization among Nicaraguan homesigners and their partners. Consistent with past research of emerging sign
systems, participants often produced multiple gestures in a single response. While these past studies either ignored such
responses, or used two uncombinable measures of consistency, we have devised a single, comprehensive measure of
consistency across multi-gesture responses. We code each gesture for the aspect of the object’s meaning it iconically
expressed, i.e., its Conceptual Component (CC). Treating each CC as a dimension, a response receives a 1 on a dimension if
it contains that CC, and a 0 if it does not. Shannon entropy in a CC-dimension is a measure of consistency in use of that CC,
and average Shannon entropy across CC-dimensions is consistency in responses.
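As a concrete illustration of this measure, the sketch below computes per-dimension Shannon entropy and averages it across CC dimensions, assuming responses have already been coded as binary CC vectors; the function names and toy data are ours, for illustration only, and lower average entropy corresponds to greater consistency.

```python
import math
from collections import Counter

def shannon_entropy(values):
    """Shannon entropy (in bits) of a sequence of 0/1 codes."""
    n = len(values)
    return -sum((c / n) * math.log2(c / n) for c in Counter(values).values())

def cc_consistency(responses):
    """Average entropy across CC dimensions; lower = more consistent.

    responses: list of equal-length 0/1 vectors, one per response,
    where position j is 1 if the response contains CC j."""
    n_dims = len(responses[0])
    per_dim = [shannon_entropy([r[j] for r in responses])
               for j in range(n_dims)]
    return sum(per_dim) / n_dims

# Hypothetical toy data: four responses to one object, three CC dimensions.
responses = [[1, 0, 1],
             [1, 0, 1],
             [1, 1, 0],
             [1, 0, 1]]
print(cc_consistency(responses))  # 0.0 would indicate perfect consistency
```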
Our materials and analyses represent a step towards surmounting unaddressed challenges inherent to investigating
emerging sign language lexicons. Specifically, our stimuli minimize the need for communicating task instructions to
participants who often do not share a communication system with others. Further, our approach provides a valuable tool for
assessing the extent of conventionalization among lexicons containing multi-gesture responses, which are common among
emerging sign language lexicons.
Friday 12th July, Poster Session 3
TISLR 11, July 2013
Tanya Denmark & John Swettenham
Poster 3.42
University College London, UK
Are deaf children with Autism Spectrum Disorder (ASD) impaired when using emotional and
linguistic facial expressions in British Sign Language (BSL)?
Autism, BSL, Facial expressions
Hearing children with ASD are often reported to show a lack of interest in others, particularly in looking at faces; as a result, they manifest difficulties understanding and using facial expressions. By contrast, typically developing deaf children show subtle advantages in face processing, both because they need to attend to the face more to communicate and because of their experience with linguistic facial expressions in BSL. We therefore asked whether face processing is impaired in deaf signing children who also have autism.
Methods: Deaf children with ASD (n=13) were compared with language, age and intellectually matched deaf controls (n=12)
on a range of emotion and sign language tasks designed to elicit face processing difficulties in both production and
comprehension. Measures included: narrative tasks; sentence repetition test; signed sentence to emotion word matching task;
tests of emotional facial expressions and tests of BSL questions, negation and adverbials.
Results:
Deaf children with ASD performed comparably to deaf controls at comprehending and producing facial expressions across many of the tasks; they did not show an overall impairment with faces. Rather, they showed specific difficulties with the comprehension and production of some emotional facial expressions and of adverbial linguistic structures. These findings suggest that facial expressions in BSL which have an emotional component, or which involve mentalising about the mental state of others, are more problematic for deaf individuals with ASD than those which are largely linguistic.
Friday 12th July, Poster Session 3
TISLR 11, July 2013
*Trevor Johnston, *Donovan Cresdee, ^Bencie Woll & +Adam Schembri
Poster 3.43
*Macquarie University, Australia; ^University College London, UK; +La Trobe University, Australia
Tracking grammaticalization through synchronic contextual variation: the frequency
and distribution of signs meaning “finish” in the Auslan (Australian sign language) and
BSL (British Sign Language) corpora
Language of presentation: English
Language change and variation (grammaticalization) can most clearly be tracked diachronically but
this, naturally, requires historical data, usually in the form of written records. SLs lack detailed
historical data. A solution can be found in the fact that grammaticalization can also be observed
synchronically through the comparison of data on variant word forms and multi-word constructions
in particular contexts and in different dialects and registers, e.g., the presence of similarly related
variants of particular constructions often indicates on-going grammaticalization, e.g., going to and
gonna in modern English (Pagliuca 1994, Chambers 1995). This is termed ‘synchronic contextual
variation’ (Heine 2002). In this paper we report an investigation of language change and variation
in two very closely related signed languages (SLs), Auslan (Johnston 2008) and British SL
(Schembri et al 2011). In excess of 400 tokens of signs that can be glossed as FINISH in Auslan
and BSL have been identified in each annotated corpus and the tokens tagged for function (full
verb, auxiliary, affix, conjunction, etc.), variation and/or erosion (reduction in phonetic form)
(Hopper & Traugott 2003, Heine & Kuteva 2007), and correlation of these with position in clauses
and clause complexes for evidence of grammaticalization (cf Meir 1999, Pfau & Steinbach in
press). We report on a multivariate analysis of the data using Rbrul which suggests that on-going
grammaticalization is part of the explanation of variation: variants correlate with different uses in
different linguistic contexts, and not just with the region or the age of the signer, as has previously
been shown in studies of lexical variation in the literature. Grammaticalization theory predicts that
there should be pairs of sign variants in the corpus in which one is clearly a lexical or gestural
source and the other is an emerging/emergent grammaticalised form, i.e., there should be some
evidence that particular variants or forms of the target lexical signs are preferentially used in
particular environments. Corpus data manipulated in this way enable the identification of patterns
that would otherwise be difficult or impossible to identify.
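To illustrate the kind of contextual distribution such an analysis inspects (prior to full multivariate modelling in Rbrul), here is a minimal sketch that cross-tabulates variant forms by tagged function; the variant labels and token list are invented for illustration and are not figures from the corpora.

```python
from collections import Counter

# Hypothetical annotated tokens: (phonetic variant, tagged function).
tokens = [
    ("FINISH-full", "verb"), ("FINISH-reduced", "auxiliary"),
    ("FINISH-reduced", "conjunction"), ("FINISH-full", "verb"),
    ("FINISH-reduced", "auxiliary"), ("FINISH-full", "conjunction"),
]

counts = Counter(tokens)
variants = sorted({v for v, _ in tokens})
functions = sorted({f for _, f in tokens})

# Proportion of each variant within each function; grammaticalization
# predicts phonetically reduced forms clustering in grammatical
# (auxiliary/conjunction) uses rather than full-verb uses.
for f in functions:
    total = sum(counts[(v, f)] for v in variants)
    shares = {v: round(counts[(v, f)] / total, 2) for v in variants}
    print(f, shares)
```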
Friday 12th July, Poster Session 3
TISLR 11, July 2013
*Vincent Homer & ^Carlo Geraci
Poster 3.44
*ENS/ Institut Jean-Nicod; ^CNRS, Institut Jean-Nicod, France
Can’t we make it possible? Maybe (not)
Language: English
The modal system in SLs is particularly rich, including both lexical and non-manual markers
[1]. One striking aspect of SLs is that they feature ‘negative modals’ (more so than spoken
languages, see [2] for Korean). Negative modals are synthetic forms of two kinds [1]: (i) derivational
forms morpho-phonologically related with affirmative modals or negation, and (ii) suppletive
forms (no morpho-phonological relation). (1c) is derivational: it combines the handshape of CAN
(1a) and the movement of negation (1b). (2b) and (3b) are suppletive, as they have no morpho-phonological
relation with the modal morpheme or negation (all examples are from LIS). While
synthetic forms have received attention as alternative ways to express negation (a.o. [3], [4], [5]), a close
inspection of modals in SLs from a semantic viewpoint is virtually absent. Several fundamental
questions await a proper answer, among which: (a) how do synthetic forms interact with scope-bearing
elements (e.g. adverbs like ALWAYS)?; (b) are those forms unbreakable items or can we
adduce evidence, in particular semantic evidence, that they are made up of distinct morphemes?
In this talk, we compare LIS and LSF. First we note that synthetic forms are preferred over
analytic ones. Both in (1c) and (2b), negation has scope over the existential modal; the reverse
obtains in (3b); so we extend to LIS (and LSF) the analysis of must as a ‘mobile Positive Polarity
Item’ proposed for spoken languages [6]. We address (a) using the frequency adverb ALWAYS,
ability CAN and deontic MUST. When linearized before a synthetic ‘negative modal’ (CAN.NEG
in (4a) and MUST.NEG in (5a)), ALWAYS can be interpreted with narrowest scope w.r.t. it ((4b)
and (5b)). This is expected since LIS is head-final. There is a more surprising ‘split scope’ interpretation
in (4c) and (5c), where the adverb is scopally sandwiched between a negation and a
modal ((4c) is semantically equivalent to (4d), with overtly separated morphemes). The sentences
are truly ambiguous, i.e. the ‘split scope’ interpretation counts as a separate reading, although it
entails the surface scope reading, and can thus be judged true in a subset of the situations in which
the latter is true. A falsity judgment is necessary to evidence the split scope reading, as in (6): B
judges (5a) false, whereas it is true under its surface scope interpretation. The discourse is coherent,
and we deduce from this that the split scope reading is real. The facts in (4)-(5) pave the way to
address the second question (b). The split scope readings require an ordering of the scope-bearing
elements which does not coincide with the surface order; furthermore those scope relations are
not illusory and require matching c-command relations and thus movement. This is all the more
important because it is suppletive forms that are involved in (4a) and (5a). So we have evidence
that ‘negative modals’ have parts and that the morphemes they comprise are available for syntactic
interactions with other material. The morphosyntactic structure of those signs is thus generated by
syntax itself, in line with Distributed Morphology.
Friday 12th July, Poster Session 3
TISLR 11, July 2013
Wanette Reynolds
Poster 3.45
Gallaudet University, USA
Young bimodal bilingual narrative acquisition: Overt lexical sequence marking in ASL
video-retellings – ASL
The literature shows that spoken (Hickmann, 1995; Peterson & McCabe, 1983) and signed
(Morgan, 2006; Reilly, 2006) language narrative development is protracted, emerging around
preschool age and continuing to develop until age 10. However, research on narrative development by
bimodal bilinguals is limited to one study by Morgan (2000) tracking referential devices in two
young children (ages 7;01 & 09;10) in British Sign Language (BSL) and British English. Child
narratives offer a unique context for studying changes in the function of specific forms throughout
acquisition, where old forms take on new functions or new forms take on old functions (Slobin,
1973). Bilingual narrative development studies of young bimodals may provide insight into the
robust interaction between two languages in two modalities.
This study contributes to the burgeoning bimodal bilingual acquisition field by examining two
lexical sequence markers (AND and THEN, signed equivalents of English and and then) in the
signed narratives of four young bimodal bilingual children acquiring ASL and English (ages
ranging from 5;11 to 6;09). The acquisition of the coordinating conjunction and in spoken child narratives
has been studied in Hebrew (Berman, 1996), French (Jisa, 1985), and English (Peterson &
McCabe, 1988). At about 5-7 years and sometimes beyond, and is overused as an overt marker
of sequentiality (as seen in example 1). Finally, the connector becomes adult-like and is used only
occasionally to link clauses, since events in adult narratives are inherently sequential.
This study shows the overuse of lexical markers to indicate sequentiality by all four bimodal
bilingual children in their ASL narratives in a video-retelling task. Although the literature on lexical
marking of sequence (NOW & NOW-THAT) in ASL is limited to a lecture setting (Roy, 2001), it
seems reasonable to assume that Deaf adults frame sequential events much like hearing adults do,
suggesting that such marking is not modality-specific. Moreover, preliminary analysis of an adult ASL narrative
elicited using the same video confirms that the signer did not use the overt lexical markers that
the children did. This is not surprising, as AND and THEN are typically considered English
borrowings. Instead, he marked sequential events through prosodic means and through the repetition
of events from different depictive perspectives. We may expect the observed adult sequential devices to
develop later, and indeed some of those aspects do appear in the child renditions.
Table 1 displays the most frequent lexical markers of sequentiality produced by the children. Each
child seems to prefer one lexical marker over another with GIA & BEN preferring THEN (see
figure 1), while VAL prefers AND (see figure 2). TOM prefers his own idiosyncratic form of
BUT-G followed by the indexical sign, IX(event) seen in figure 3. For an example of both AND
and THEN used to connect two sequences of events, see example 2. A casual observer might
attribute the prevalence of AND and THEN to English influence and the underdevelopment of ASL.
However, a more sophisticated analysis reveals a systematic use of these connectors functioning
as sequence markers. This follows the same description of the changing functions of specific
forms posited by Slobin.
Friday 12th July, Poster Session 3
TISLR 11, July 2013
Yijun Chen & James Tai
Poster 3.46
National Chung Cheng University, Taiwan
Polar and Content Questions in Taiwan Sign Language
Presented in English
Zeshan (2004) investigates interrogative construction across thirty-five sign
languages from theoretical and typological perspectives, focusing on polar questions
(yes/no questions) and content questions (wh-questions). In her investigation,
question particles in Taiwan Sign Language (TSL) are briefly discussed. Later, in an
edited volume (Zeshan 2006), a more detailed treatment of the interrogative constructions
of a dozen sign languages is provided, but TSL is not included. This study
attempts to provide the first detailed analysis of polar and content questions in TSL,
adding to the growing body of literature.
In addition to requesting information, interrogative constructions can be used to
accomplish a variety of different speech acts, such as complaining and inviting
(Heinemann 2008). In order to interpret an utterance as a question, the sequential
position of the utterance in the conversation and the organization of conversation
should be taken into account. Therefore, this study investigates TSL polar and content
questions in face-to-face dialogue settings with a conversation analytic framework
(Sacks, Schegloff and Jefferson 1974; Schegloff 1984).
The analyzed data contain approximately 3 hours of spontaneous signing, taken
from conversations between 4 dyads. The data are annotated using the ELAN annotation software.
Changes in facial expression are annotated based on the Facial Action
Coding System (Ekman, Friesen, and Hager 2002).
Similar to other sign languages (Zeshan 2004), in TSL, polar questions only
employ non-manual expressions while content questions are formed by using question
words and some non-manual expressions. TSL has an extensive question-word
paradigm. Question words tend to be in sentence final position.
Three non-manual mechanisms are repeatedly observed accompanying a
majority of questions: eye gaze at the end of the questions, eyebrow movements, and
head movements. However, non-manual behaviors for questions in this study differ
from what is suggested in previous literature (Zeshan 2004, 2006). Non-manual
expressions do not have a clear and specific onset and offset pattern in TSL questions.
The components and the scope of non-manual behaviors in TSL vary from one
question to another. Therefore, these non-manuals appear to be pragmatic rather than
grammatical: they are interactional expressions.
The non-manuals identified in the two types of questions are essentially the same.
Non-manuals are related to the functions of the questions rather than to different
question types. Gazing behavior functions as a conversation regulator. It is a
turn-yielding cue. Eye gaze not only signals to the interlocutors that it is their turn
to speak but also monitors feedback. With respect to eyebrow movements, signers tend
to furrow their eyebrows (AU4) when they have doubts or do not understand what has
been said in the previous utterances, whereas they tend to raise their eyebrows (AU1,
AU2, or AU1+2) when they want to ask for information or confirmation. This is
similar to what Ekman (1979) has observed for hearing non-signers in conversation.
As for head movements, manual signs sometimes influence the position or
movements of the head. Tilting co-occurs with questions relatively often, but
it is still absent from most questions.
Friday 12th July, Poster Session 3
TISLR 11, July 2013
Yu-Cheng Ho & Jung-Hsing Chang
Poster 3.47
National Chung Cheng University, Taiwan
Exploring the Form and Function of Body Classifiers in Taiwan Sign
Language
(language: English)
In sign language, not only the hands but also the body can deliver rich messages. Signers often view themselves as the
referents they are discussing and act as those referents would in the real-world situation. This use of the signer's body to
represent a referent has been given different names in the literature, for example, ‘body classifier’ (Supalla, 1986), ‘constructed
action’ (Metzger, 1995; Aarons and Morgan, 2003), or ‘full form pronoun’ (Kegl, 2003). As for the function of the body
classifier, Perniss (2007) states that when a signer uses the body to narrate events, the signer is in ‘character perspective’.
In this paper two classifiers are distinguished: body classifier and projected body classifier (cf. Kegl 2003). The former
refers to the use of the signer’s body as a pronoun, while the latter refers to the projection of an invisible body in the space.
This paper discusses body classifiers in Taiwan Sign Language (TSL) and aims to explore the form and function of these
classifiers. The first issue for discussion is why some TSL verbs such as WASH HAIR and BUTTON SOMEONE’S
BUTTONS are permitted to occur in both the body classifier construction and the projected body classifier construction, as
shown in (1), while some TSL verbs such as PERFORM ACUPUNCTURE and PAINT NAILS are not.
The second issue is why the projected body classifier construction is preferred when the expression involves manner
adverbials such as LANGUISHINGLY or EARNESTLY.
This paper will discuss different linguistic forms and functions associated with body classifiers in TSL, and will explain
in what situations the use of the projected body classifier construction will be preferred by the signer. In addition, we will
show how agentivity is related to the projected body classifier construction.
TISLR 11 AUTHOR CONTACT DETAILS
TISLR 11 July 2013
Index of corresponding authors
Alphabetical by last name
Name
Marilyn Abbott
Natasha Abner
Robert Adam
Dany Adone
Celia Alba
Benjamin Anible
Joanna Atkinson
Institution
University of Alberta
University of Chicago
DCAL, UCL
University of Cologne
Universitat Pompeu Fabra
University of New Mexico
DCAL, UCL
Country
Canada
USA
UK
Germany
Spain
USA
UK
Anne Baker
Richard Bank
Gemma Barberà
Plinio Barbosa
Rachel Benedict
Elena Benedicto
Iris Berent
Caroline Bogliotti
University of Amsterdam
Radboud University Nijmegen
Universitat Pompeu Fabra
Unicamp
Boston University
Purdue U.
Northeastern University
Université Paris Ouest Nanterre la Défense
& MODYCO CNRS UMR7114
Carl Börstell
Stockholm University
Rain Bosworth
University of California, San Diego
Dominique Boutet
CNRS/University Paris 8
Penny Boyes Braem
Forschungszentrum für Gebärdensprache
Anastasia Bradford
University of Central Lancashire
Diane Brentari
University of Chicago/Linguistics Dept
Elísa Guðrún Brynjólfsdóttir University of Iceland
Emily Carrigan
Sara Carvalho
Keren Cumberbatch
University of Connecticut
Instituto de Ciências da Saúde –
Universidade Católica Portuguesa
University of Milano-Bicocca
National Chung Cheng University
National Chung Cheng University
Gallaudet University
Università La Sapienza di Roma / CNR-ISTC
Dutch Sign Centre
University of Connecticut
UC Davis, Center for Mind and Brain
& Dept. of Linguistics
BCBL (Basque Center on Cognition,
Brain & Language)
Radboud University Nijmegen,
Centre for Language Studies
The University of the West Indies
Svetlana Dachkovsky
Kathryn Davidson
Raphaël de Courville
Mathilde de Geus
Ronice de Quadros
Connie de Vos
Danny De Weerdt
Clark Denmark
University of Haifa
University of Connecticut
GestuelScript (ESAD Amiens)
De Geus Advies
Universidade Federal de Santa Catarina
Max Planck Institute for Psycholinguistics
University of Jyväskylä
University of Central Lancashire
Carlo Cecchetto
Jung-Hsing Chang
Yijun Chen
Deborah Chen Pichler
Isabella Chiari
Richard Cokart
Marie Coppola
David Corina
Brendan Costello
Onno Crasborn
Email
marilyn.abbott@ualberta.ca
nabner@gmail.com
r.adam@ucl.ac.uk
danyadone@gmail.com
celiaalba@hotmail.com
banible@unm.edu
joanna.atkinson@ucl.ac.uk
Netherlands
a.e.baker@uva.nl
Netherlands
r.bank@let.ru.nl
Spain
gemma.barbera@upf.edu
Brazil
pabarbosa.unicampbr@gmail.com
USA
rmb625@bu.edu
USA
ebenedi@purdue.edu
USA
i.berent@neu.edu
France
caroline.bogliotti@u-paris10.fr
Sweden
calleborstell@gmail.com
USA
rain@ucsd.edu
France
dominique_jean.boutet@orange.fr
Switzerland
boyesbraem@fzgresearch.org
UK
tashi.bradford@gmail.com
USA
dbrentari@uchicago.edu
Iceland
egb1@hi.is
USA
emily.carrigan@uconn.edu
Portugal
sarafcarvalho@yahoo.com
Italy
carlo.cecchetto@unimib.it
Taiwan
lngjhc@ccu.edu.tw
Taiwan
yijun814@gmail.com
USA
deborah.chen.pichler@gallaudet.edu
Italy
isabella.chiari@uniroma1.it
Netherlands
r.cokart@gebarencentrum.nl
USA
marie.coppola@uconn.edu
USA
corina@ucdavis.edu
Spain
b.costello@bcbl.eu
Netherlands
Jamaica
o.crasborn@let.ru.nl
mehrarbeite@gmail.com
Israel
dachkov@yahoo.com
USA
kathryn.davidson@uconn.edu
France
raphael.de.courville@gmail.com
Netherlands
degeusadvies@gmail.com
Brazil
ronicequadros@gmail.com
Netherlands
connie.devos@mpi.nl
Finland
danny.deweerdt@jyu.fi
UK
ACDenmark@uclan.ac.uk
Tanya Denmark
Alessio Di Renzo
Caterina Donati
Amanda Dupuis
Matthew Dye
UCL
CNR-ISTC
Sapienza University of Rome
Northeastern University
University of Illinois at Urbana-Champaign
UK
Italy
Italy
USA
USA
Sarah Ebling
Institute of Computational Linguistics,
University of Zurich
San Diego State University
University of Manitoba
Switzerland
USA
Canada
Karen Emmorey
Charlotte Enns
Ronnie Fagundes de Brito
Jordan Fenlon
Naja Ferjan Ramirez
Joanna Filipczak
Brazil
UK
USA
t.denmark@ucl.ac.uk
alessio.direnzo@istc.cnr.it
donatica@yahoo.it
amandaleighdupuis@gmail.com
mdye@illinois.edu
sarah.ebling@uzh.ch
kemmorey@mail.sdsu.edu
ennscj@cc.umanitoba.ca
UFSC/EGC
DCAL, University College London
University of California, San Diego
Section for Sign Linguistics,
University of Warsaw
Molly Flaherty
The University of Chicago
Anne Therese Frederiksen UCSD
Angelo Frémeaux
UMR SFL (Paris 8 & CNRS)
Sabine Fries
Humboldt University Berlin
Adam Frost
Gallaudet University
Orit Fuks
Kaye College, Be'er Sheva Israel
Cat H.-M. Fung
The Chinese University of Hong Kong
Poland
USA
USA
France
Germany
USA
Israel
Hong Kong
asiafilipczak@yahoo.com
mflaherty@uchicago.edu
a.t.frederiksen@gmail.com
angelo.fremeaux@gmail.com
sabine.fries@staff.hu-berlin.de
adam@frostvillage.com
ofuks@zahav.net.il
fungcat@gmail.com
Deanna Gagne
Brigitte Garcia
Carlo Geraci
Marcel Giezen
Sally Gillespie
Corina Goodwin
Donovan Grose
Eva Gutierrez
University of Connecticut
UMR SFL (Paris 8 & CNRS)
Institut Jean-Nicod
San Diego State University
Queens University Belfast
University of Connecticut
Hang Seng Management College
UCL
USA
France
France
USA
UK
USA
Hong Kong
UK
deanna.gagne@uconn.edu
brigitte.garcia@univ-paris8.fr
carlo.geraci76@gmail.com
giezenmr@gmail.com
Sallyg.sli@googlemail.com
corinag@gmail.com
donovangrose@hsmc.edu.hk
eva.gutierrez@ucl.ac.uk
Peter Hauser
Christina Healy
Ros Herman
Annika Herrmann
Melissa Herzig
Jens Hessmann
Yu-Cheng Ho
Gabrielle Hodge
Vincent Homer
Jana Hosemann
Annika Hübl
So-One Hwang
Rochester Institute of Technology
USA
pchgss@rit.edu
Gallaudet University
USA
christina.healy@gallaudet.edu
City University London
UK
R.C.Herman@city.ac.uk
University of Goettingen Germany
annika.herrmann@phil.uni-goettingen.de
Gallaudet University
USA
melissa.herzig@gallaudet.edu
Magdeburg-Stendal University of Applied Sciences Germany
jens.hessmann@hs-magdeburg.de
National Chung Cheng University
Taiwan
yucheng738@gmail.com
Macquarie University
Australia gabrielle.hodge@students.mq.edu.au
ENS/ Institut Jean-Nicod
France
vincenthomer@gmail.com
Georg-August University of Goettingen Germany
jana.hosemann@phil.uni-goettingen.de
University of Göttingen
Germany annika.huebl@phil.uni-goettingen.de
UC San Diego
USA
soone@ucsd.edu
Tommi Jantunen
Terry Janzen
Maria Josep Jarque
Trevor Johnston
Jóhannes Gísli Jónsson
University of Jyväskylä
University of Manitoba
University of Barcelona
Macquarie University
University of Iceland
Finland
Canada
Spain
Australia
Iceland
ronniefbrito@gmail.com
j.fenlon@ucl.ac.uk
naja@ling.ucsd.edu
tommi.j.jantunen@jyu.fi
Terry.Janzen@ad.umanitoba.ca
mj_jarque@ub.edu
trevor.johnston@mq.edu.au
jj@hi.is
Michiko Kaneko
Itamar Kastner
Emily Kaufmann
Jonathan Keane
Vadim Kimmelman
Annemarie Kocab
Helen Koulidobrova
L. Viola Kozak
Marlon Kuntze
University of the Witwatersrand, Johannesburg South Africa
Michiko.Kaneko@wits.ac.za
New York University
USA
itamar@nyu.edu
University of Cologne
Germany
kaufmann.emily@gmail.com
University of Chicago
USA
jonkeane@uchicago.edu
University of Amsterdam
Netherlands
v.kimmelman@uva.nl
Harvard University
USA
kocab@fas.harvard.edu
Central Connecticut State University
USA
elena.koulidobrova@ccsu.edu
Gallaudet University
USA
laura.kozak@gallaudet.edu
Gallaudet University
USA
marlon.kuntze@gallaudet.edu
Andrea Lackner
Scholastica Lam
Gabriele Langer
Sin Yee Prudence Lau
Lorraine Leeson
Aline Lemos Pizzio
Ryan Lepic
Marie-Thérèse L'Huillier
Jia Li
Amy M. Lieberman
Diane Lillo-Martin
Elena Liskova
Hongyu Liu
Ceil Lucas
ZGH, 9020 University of Klagenfurt
Austria
andrea.lackner@aau.at
The Chinese University of Hong Kong
Hong Kong
schola_cslds@cuhk.edu.hk
IDGS, University of Hamburg Germany
gabriele.langer@sign-lang.uni-hamburg.de
The Chinese University of Hong Kong
Hong Kong
sansylau@hotmail.com
Trinity College Dublin
Ireland
leesonl@tcd.ie
Universidade Federal de Santa Catarina
Brazil
alinelemospizzio@gmail.com
University of California, San Diego
USA
rlepic@ucsd.edu
UMR SFL (Paris 8 & CNRS)
France
marie-therese.lhuillier@sfl.cnrs.fr
Centre for Sign Linguistics and Deaf Studies, CUHK China
lijia_cslds@cuhk.edu.hk
University of California, San Diego
USA
alieberman@ucsd.edu
University of Connecticut
USA
lillo.martin@uconn.edu
The University of Texas at Austin
USA
eliskova@utexas.edu
Fudan University; Yanshan University
China
11110110006@fudan.edu.cn
Gallaudet University
USA
ceil.lucas2@gmail.com
Fernanda Machado
Rachel Magid
Dina Makouke
Evie Malaia
Lara Mantovan
Amber Martin
Susan Mather
Kazum Matsuoka
Silke Matthes
Claude Mauk
Stephen McCullough
Lynn McQuarrie
Johanna Mesch
Kate Mesh
Laurence Meurant
Marina Milkovic
Ana Mineiro
Universidade Federal de Santa Catarina
Brazil fernandamachado.eba.ufrj@gmail.com
Massachusetts Institute of Technology
USA
rwmagid@gmail.com
UMR SFL (Paris 8 & CNRS)
France
makoukedina@yahoo.fr
University of Texas at Arlington
USA
malaia@uta.edu
Ca' Foscari University of Venice
Italy
laramantovan@unive.it
Barnard College of Columbia University
USA
amartin@barnard.edu
Gallaudet University
USA
susan.mather@gallaudet.edu
Keio University
Japan
matsuoka@z7.keio.jp
IDGS, University of Hamburg Germany
silke.matthes@sign-lang.uni-hamburg.de
University of Pittsburgh
USA
cemauk@pitt.edu
San Diego State University
USA
smccullough@projects.sdsu.edu
University of Alberta
Canada
lynn.mcquarrie@ualberta.ca
Stockholm University
Sweden
johanna.mesch@ling.su.se
The University of Texas at Austin
USA
kate.a.mesh@gmail.com
University of Namur
Belgium
laurence.meurant@unamur.be
University of Zagreb
Croatia
milmarzg@gmail.com
Instituto de Ciências da Saúde –
Universidade Católica Portuguesa
Portugal
amineiro@ics.lisboa.ucp.pt
SRS Inc.
USA
rezearcher@erinad.org
Instituto de Ciências da Saúde –
Universidade Católica Portuguesa
Portugal
marapmoita@gmail.com
Universidade de São Paulo (USP)
Brazil
reka@usp.br
University of New Mexico & VL2
USA
morford@unm.edu
University of California San Diego
USA
hopeemorgan@mac.com
Gallaudet University
USA
carla.morris@gallaudet.edu
Section for Sign Linguistics, University of Warsaw Poland
piotr.mostowski@uw.edu.pl
Ghent University
Belgium
kmouvet@hotmail.com
Rezenet Moges
Mara Moita
Renata Lúcia Moreira
Jill P. Morford
Hope Morgan
Carla Morris
Piotr Mostowski
Kimberley Mouvet
Anke Müller
University of Hamburg
Germany
Marie Nadolske
Anna-Lena Nilsson
Brent Novodvorski
Rama Novogrodsky
Victoria Nyst
Montclair State University
Stockholm University
University of Alberta
Boston University
Leiden University
USA
Sweden
Canada
USA
Netherlands
nadolske@me.com
annalena@ling.su.se
novodror@ualberta.ca
ramanovo@gmail.com
v.a.s.nyst@hum.leidenuniv.nl
Corrine Occhino-Kehoe
Ellen Ormel
Joni Oyserman
University of New Mexico
CLS at Radboud University Nijmegen
SignHands
USA
Netherlands
Netherlands
cocchino@unm.edu
E.Ormel@let.ru.nl
joni@oyserman.nl
Nick Palfreyman
Aurore Paligot
Sibaji Panda
Anne-Marie Parisot
Nina-Kristin Pendzich
Pamela Perniss
Giulia Petitta
Jennifer Petrich
Roland Pfau
Anna Puupponen
iSLanDS Institute,
University of Central Lancashire
UK
nicholaspalfreyman@yours.com
University of Namur
Belgium
aurore.paligot@unamur.be
UCLAN
UK
spanda@uclan.ac.uk
Université du Québec à Montréal
Canada
parisot.anne-marie@uqam.ca
Georg-August-Universität Göttingen Germany nina-kristin.pendzich@phil.uni-goettingen.de
DCAL, University College London
UK
p.perniss@ucl.ac.uk
Institute of Cognitive Sciences and Technologies, CNR Italy
giulia.petitta@tiscali.it
San Diego State University
USA
jpetrich@projects.sdsu.edu
University of Amsterdam
Netherlands
r.pfau@uva.nl
University of Jyväskylä
Finland
anna.puupponen@jyu.fi
Ronice Quadros
David Quinto-Pozos
Universidade Federal de Santa Catarina
University of Texas at Austin
Szilárd Rácz-Engelhardt
Päivi Rainò
Christian Rathmann
Carina Rebello Cruz
Wanette Reynolds
Russell Richie
Julie Rinfret
Maria Roccaforte
Dietmar Roehm
Nicholas Roessler
Jerker Rönnberg
Paolo Rossini
Mary Rudner
Paweł Rutkowski
University of Hamburg
Germany
szilard.racz@uni-hamburg.de
Humak University of Applied Sciences
Finland
paivi.raino@humak.fi
Universität Hamburg Germany
christian.rathmann@uni-hamburg.de
Universidade Federal de Santa Catarina
Brazil
crcpesquisa@gmail.com
Gallaudet University
USA
wanette.reynolds@gallaudet.edu
University of Connecticut
USA
russell.richie@uconn.edu
Université du Québec à Montréal
Canada
rinfret.julie@uqam.ca
Sapienza university
Italy
mariaroccaforte@yahoo.it
University of Salzburg
Austria
dietmar.roehm@sbg.ac.at
Gallaudet University
USA
nicholas.roessler@gmail.com
Linnaeus Centre HEAD, Linköping University Sweden
jerker.ronnberg@liu.se
Institute of Cognitive Sciences & Technologies, CNR Italy
paolo.rossini@istc.cnr.it
Linnaeus Centre HEAD, Linköping University Sweden
mary.rudner@liu.se
University of Warsaw
Poland
p.rutkowski@uw.edu.pl
John Saeed
Anna Safar
Trinity College Dublin
Centre for Language Studies,
Radboud University Nijmegen
University of Central Lancashire
Japan College of Social Work
Université Paris 8 & CNRS
Seikei University
UQAM
La Trobe University, Melbourne
Dutch Sign Centre
Keiko Sagara
Kurum Saito
Marie-Anne Sallandre
Daisuke Sasaki
Darren Saunders
Adam Schembri
Trude Schermer
Brazil
USA
Ireland
abemuenk@gmx.de
ronicequadros@gmail.com
davidqp@mail.utexas.edu
jsaeed@tcd.ie
Netherlands
annasafar@gmail.com
UK
ksagara@uclan.ac.uk
Japan
kurumi@jcsw.ac.jp
France marie-anne.sallandre@univ-paris8.fr
Japan
daisuke@econ.seikei.ac.jp
Canada
m.dazze@gmail.com
Australia
A.Schembri@latrobe.edu.au
Netherlands t.schermer@gebarencentrum.nl
Philippe Schlenker
Pierre Schmitt
Jerry Schnepp
Kristen Secora
Zed Sevcikova
Jenny Singleton
Sigrid Slettebakk Berge
Rose Stamp
Markus Steinbach
Adam Stone
Christopher Stone
Akio Suemori
Beyza Sumer
Rachel Sutton-Spence
Karl Szymoniak
Institut Jean-Nicod, CNRS; NYU
France
philippe.schlenker@gmail.com
EHESS
France
schmittpierre@alumni.purdue.edu
Bowling Green State University
USA
schnepp@bgsu.edu
Laboratory for Language & Cognitive Neuroscience USA
krsecora@gmail.com
UCL
UK
z.sevcikova@ucl.ac.uk
Georgia Institute of Technology
USA
jenny.singleton@psych.gatech.edu
NTNU. Dept of Social Work and Health Science Norway
sigrid.berge@svt.ntnu.no
DCAL, University College London
UK
r.stamp@ucl.ac.uk
University of Göttingen Germany
markus.steinbach@phil.uni-goettingen.de
Gallaudet University
USA
adam.stone@gallaudet.edu
UCL
UK
christopher.stone@ucl.ac.uk
National Institute of Advanced Industrial
Science and Technology (AIST)
Japan
suemori.akio@gmail.com
Radboud University Nijmegen
Netherlands
beyza.sumer@mpi.nl
University of Bristol
UK
rachel.spence@bristol.ac.uk
Université du Québec à Montréal
Canada
karlszymoniak@gmail.com
Ritva Takkinen
University of Jyväskylä
Finland
ritva.takkinen@jyu.fi
Eyasu Tamene
Addis Ababa University
Ethiopia
tusaye11@gmail.com
Angoua Jean-Jacques Tano Felix Houphouet- Boigny University, Abidjan/
Leiden University, Netherlands
Netherlands
tanojak@yahoo.fr
Elina Tapio
University of Jyväskylä
Finland
elina.tapio@jyu.fi
Robin L Thompson
DCAL, University College London
UK
robin.thompson@ucl.ac.uk
Elena Tomasuolo
Institute of cognitive sciences and technologies –
CNR Rome
Italy
elenatomasuolo@hotmail.com
Piotr Tomaszewski
University of Warsaw Faculty of Psychology Poland piotr.tomaszewski@psych.uw.edu.pl
Jane Tsay
National Chung Cheng University
Taiwan
Lngtsay@ccu.edu.tw
Martha Tyrone
Haskins Laboratories
USA
tyrone@haskins.yale.edu
Asako Uchibori
Jonathan Udoff
Nihon University
Japan
University of California; San Diego State University USA
Beppie van Den Bogaerde
Mieke Van Herreweghe
Tarcísio Vanzin
Myriam Vermeerbergen
Gabriella Vigliocco
Suzanne Villeneuve
Agnes Villwock
David Vinson
Utrecht University of Applied Sciences
Ghent University
UFSC/EGC
KULeuven
University College London
Université du Québec à Montréal
University of Hamburg
University College London
Lars Wallin
Jill Weisberg
Cecily Whitworth
Erin Wilkinson
Joshua Williams
Rosalee Wolfe
Stockholm University
Sweden
San Diego State University Research Foundation USA
McDaniel College
USA
University of Manitoba
Canada
Indiana University
USA
DePaul University
USA
Andre Xavier
Felix Yim-Binh Sze
Ulrike Zeshan
Unicamp
The Chinese University of Hong Kong
University of Central Lancashire
uchibori.asako@nihon-u.ac.jp
judoff@projects.sdsu.edu
Netherlands beppie.vandenbogaerde@hu.nl
Belgium
mieke.vanherreweghe@ugent.be
Brazil
tvanzin@yahoo.com.br
Belgium
mvermeer@mac.com
UK
g.vigliocco@ucl.ac.uk
Canada
villeneuve.suzanne@uqam.ca
Germany
agnes.villwock@uni-hamburg.de
UK
d.vinson@ucl.ac.uk
wallin@ling.su.se
jweisberg@projects.sdsu.edu
cecily.whitworth@gmail.com
Erin.Wilkinson@ad.umanitoba.ca
willjota@indiana.edu
wolfe@cs.depaul.edu
Brazil
andre.xavier.unicamp@gmail.com
Hong Kong
felix_cslds@cuhk.edu.hk
UK
Uzeshan@uclan.ac.uk