
A Comparison of Signed and Spoken Languages

Erin Butler
MLAS 599 Independent Study
Rochester Institute of Technology
National Technical Institute for the Deaf
Abstract
Studying language through multiple lenses allows for a deeper understanding of how
language develops and which factors have the greatest influence on it. This knowledge can be
used to guide how language is further studied and to advance the use of language studies for
understanding other aspects of culture and the world. One particular area of focus is the
comparison of signed and spoken languages: how each develops in a similar manner and how
they differ.
Introduction
When considering signed and spoken languages, some studies to date have presented a
comparison based mainly on the study of gesture. Although some conclusions can be drawn from
these studies, it is difficult to find consistencies in the data. The main reason is that each study
uses its own definition of gesture, depending largely on which aspects of language the
researchers are trying to study [1].
Because of the inconsistencies and difficulties that arise when studying spoken and
signed languages through gesture, this comparison aims to study the languages by other
means. In doing so, the hope is to analyze how the two language types have developed
differently and which aspects of language study reveal the biggest differences between them. To
ensure a broad understanding of how these language types can be compared, similarities will also
be noted. The two language types will be compared on the basis of modality, morphology,
phonology, and cultural development.
Methodology
Before diving fully into this investigation, preliminary research was done to narrow the
scope of this study. Through this, it was determined that highlighting similarities between
the two language types is important and necessary for fully understanding the comparison.
Even so, this study will focus on highlighting differences between spoken and signed languages.
This will allow the language types to be analyzed on a deeper level and ensure a more
focused analysis. Although many examples will refer to American Sign Language (ASL) and
English, the goal is to study signed and spoken languages on a global scale. Throughout this
study, examples used to highlight key points will draw from languages used in the same region.
For example, ASL is the predominant signed language and English the predominant spoken
language in the United States, so comparisons will be made between these two.
There are numerous characteristics of language on which similarities and differences can
be drawn. This study will focus on the phonological, morphological, and semantic
characteristics of spoken and signed languages. Phonology, in this case, concerns the physical
signal of each language. Morphology looks at the form of words used in each language; it will
be studied alongside the syntax, or arrangement of words, of each language. The semantics of a
language refer to the meaning of its words and phrases, often studied alongside the lexicon, or
vocabulary, of a language. These last two characteristics will be used to highlight the cultural
aspect of each language type.
Reviewing journal sources is the primary approach to this study of signed and spoken
languages. In addition, discussions with individuals immersed in one or both cultures and those
who study language on a deeper level were helpful in guiding certain decisions. New areas to
study and characteristics on which to compare these two language types were discovered through
these conversations and additional journal sources were found to further investigate these topics.
Literature Review
An initial comparison between spoken and signed languages reveals the most obvious
difference between these two language types: the mode in which each is expressed. The
physical signal of a signed language is visual. The body moves, and these motions carry
meaning that is interpreted by the eyes of another. In a spoken language, the physical signal is
auditory. One body produces sound, which is interpreted by the auditory system of another.
Many believe all movements in a signed language contain some form of meaning [2]. This is
not the case in spoken languages. Body movements (i.e., movements of the vocal cords, tongue,
mouth, lips, and jaw) do not themselves carry meaning. Rather, these movements only work to
produce the sounds a person is trying to make, which in turn become words with meaning.
The anatomical comparison between these two language types can be taken further. The
process of comprehending language, both signed and spoken, occurs mainly in Wernicke’s area
[3], [4]. In addition to this part of the brain, Broca’s area is crucial for the ability to produce
signs and speech [4]. Sandler et al. expand upon this concept of language comprehension.
Signed languages are processed simultaneously – all components of the language are seen at the
same time – with minimal sequential processing [5], [6]. On the other hand, spoken languages
are considered sequential – one cue follows another.
Although this phonological difference is clear, phonological similarities can also be
drawn. Both language types exhibit duality of patterning [5]. Duality of patterning is the
property of human language, both spoken and signed, whereby discrete units that are themselves
meaningless combine into meaningful units. These meaningful units, known as morphemes, can
be combined in various ways to convey different meanings [5]. In a signed language, different
morphemes can be created by changing certain aspects of the sign. For example, a change in the
location, movement, or handshape of a sign can produce an entirely new meaning [5].
Researchers have also identified correlations between handshape in signed languages and
segments in spoken languages [7]. For example, the similarities among the signs “nerve”,
“apple”, and “candy” – which all have the same movement and location but different handshapes
– are comparable to the similarities among the spoken words “fat”, “bat”, and “pat”. The spoken
words differ by one phoneme – the first consonant changes in each word. When two spoken
words differ by a single phoneme, they form a minimal pair. In each of these cases, the three
words in the group differ by one component that changes the meaning of the word or sign.
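The parallel between one-phoneme contrasts in speech and one-parameter contrasts in sign can be sketched computationally. The following is an illustrative example, not drawn from the cited studies; the sign parameter values are invented placeholders rather than real ASL notation.

```python
# Illustrative sketch: a word or sign is modeled as a fixed-length tuple of
# components -- phonemes for speech, parameters such as location, movement,
# and handshape for sign. Two forms are a "minimal pair" when they differ in
# exactly one component.

def differs_in_one(a, b):
    """Return True if a and b have equal length and differ in exactly one position."""
    if len(a) != len(b):
        return False
    return sum(x != y for x, y in zip(a, b)) == 1

# Spoken minimal pair: "fat" /f-ae-t/ vs. "bat" /b-ae-t/ differ only in the first phoneme.
print(differs_in_one(("f", "ae", "t"), ("b", "ae", "t")))  # True

# Sign analogue: same location and movement, different handshape
# (placeholder values, not real ASL notation).
nerve = ("location:chin", "movement:twist", "handshape:A")
apple = ("location:chin", "movement:twist", "handshape:X")
print(differs_in_one(nerve, apple))  # True

# Not a minimal pair: "fat" vs. "ban" differ in two positions.
print(differs_in_one(("f", "ae", "t"), ("b", "ae", "n")))  # False
```

The same comparison function applies to both modalities, which is the point of the analogy: the unit of contrast differs (phoneme versus sign parameter), but the structural relationship is identical.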
A second major difference between spoken and signed languages can be found in
morphology. Studying iconicity, the motivated connection between form and meaning, in both
language types exemplifies this. Iconicity, or depicting meaning through perceptual
resemblance, is much more common in signed languages [2], [8]. For example, the sign for
“house” resembles the shape of a house, and oftentimes a person’s sign name resembles a
defining characteristic, such as their hair or a hat they commonly wear. One factor contributing
to this is the relative age of signed languages: compared to spoken languages, signed languages
are considered fairly young [2]. Although both language types use the body to provide
information (whether by movement or sound), signed languages have a more
systematic way of doing so. Each movement is more likely to have some meaning associated
with it.
Another lens through which to analyze signed and spoken languages is how they have
developed. Spoken languages are thought to be descended from other languages that have had
ample time to develop [2]. Signed languages, on the other hand, often develop spontaneously.
For example, when a group of deaf signers comes together, a communication system develops
quickly, even if each person comes from a different background. Because of this, and because it
is easier to spot similarities between signed languages, many believe that signed languages are
more universal [2], [5]. It is important to keep in mind that signed languages are easier to study
because of their relative youth compared to spoken languages, which have had time to develop
and deviate further from their parent languages. Also related to the topic of youth, it is much
easier to study language use in children, for several reasons. First, the content and structure of
children’s language is simpler and easier to understand. In addition, some studies rely on
children being less influenced by experience, and therefore having a more innate communication
style, in order to draw conclusions [3].
Further investigation of sign language studies reveals a profound correlation between
sign language development and pidgin and creole languages. Pidgin languages are a form of
speech between people who do not speak the same language [8]. A prime setting for pidgin
development was colonial expansion: oftentimes, pidgin languages developed between slaves
and slave owners, or between slaves who did not come from the same area. Pidgin languages
have little structure and remain for only one generation of users [9]. These languages also
commonly form when groups of people from different backgrounds come together, but they
often disappear once the group is no longer in contact and the language is not
used. If a pidgin language is passed down to the next generation (i.e., to the children of the first
generation), it starts to become a creole [9], [10]. Creoles are a fuller form of communication
with a more developed structure. Signed languages are often compared to creoles because of how
each develops – both typically arise without a full external model present. In this case, a
model refers to the language input source, most commonly understood to be one’s parents. For
creole and sign language users, the input is often considered less than a full model because the
parents do not have full command of the language. These languages have not had ample time
for development, they change quickly, and they are heavily influenced by other languages
[11]. Depending on the user and their experience level, each of these language types is adaptable
to the situation in which it is used.
A key comparison between signed and spoken languages is found with a deeper analysis
of the input form. Typically, learners of a spoken language as a first language have a full
model of the language as input – for example, hearing children with hearing parents. The child
receives a full and accurate form of the language from their parents. This case can also apply
to deaf children with deaf parents, but less than 10% of the deaf community are native
signers – those who were exposed to sign language from birth because their parents were also
deaf [11]. This small part of the deaf population also has a full language model to learn from.
Most people in the deaf community, on the other hand, do not receive a full model of language
input. Parents sometimes do not want their children to learn sign language or will not take the
time to learn it themselves. As a result, this portion of the deaf population often has little, if
any, exposure to sign language before starting school [11]. If a hearing parent attempts to learn
and teach their deaf child sign language, a language form similar to a pidgin will develop. As
the child grows and learns more (either from school or other exposure to
sign language), the pidgin-like form will more closely resemble a creole. Therefore, creoles and
signed languages are often seen as similar.
Continuing the discussion of the input model leads to the critical and sensitive periods.
These periods refer to ages at which the brain has the highest ability to develop language
properly [12]. In his study, Ruben found many researchers define the critical period as the time
between six months of age and twelve months of infancy [12]. The sensitive period extends to
about four years of life for aspects of language like syntax, and to about fifteen years for aspects
like semantics [12]. A deaf child born to hearing parents will be most affected during the
critical period because of the lack of a proper model. During the critical period, if a child is not
exposed to a full model of language, their language development will be severely affected. As
the child grows out of the critical period, their physical ability to learn a language decreases
[13]. Once in the sensitive period, some gains can be made in language learning if a more
intense input form is used. This is typically when these children start school and may be
exposed to signed languages more regularly. When comparing these periods for deaf children to
those of a hearing child learning a spoken language, the effects of not having a full model are
far more severe and longer lasting for the deaf child.
Lastly, a discussion of facial expression reveals a set of differences between signed and
spoken languages. Both language types use body language and facial expression to help convey
meaning [5]. Facial expression in signed languages is often compared to intonation in spoken
languages [5]. Researchers have noted an instinctual component to facial expressions (e.g.,
surprise or wonder is often expressed through raised eyebrows) [3]. In instances like this, facial
expression is crucial to correctly understanding what someone is signing. In
contrast, spoken languages do not rely as heavily on facial expression for correct understanding,
but it certainly aids in conveying what is being expressed to its full extent.
Results and Discussion
As discussed previously, researchers struggle to study signed and spoken languages
using the same analytical methods. Many believe that sign and gesture can be compared to
speech and gesture, but difficulties arise in differentiating between sign and gesture [6]. Not
all methods used to analyze spoken languages can be applied to signed languages and vice versa.
Because of this, researchers struggle to find common ground when comparing these two
language types across all aspects.
There is a lot of crossover between the lexicon/syntax and phonology [5]. Signed languages
can be broken down into fundamental units, and these units can be combined in different ways
using the articulators to form syllables and, in turn, signs. From there the language starts to
develop. A similar process can be seen in spoken languages in that words can likewise be broken
down into their fundamental units. When analyzing how these units are put together in their
respective languages, differences in word order and grammar emerge. Through this it is
evident that the structure and organization of signed sentences is vastly different from that of
spoken languages. Even so, similarities can be seen in that both language types can be broken
down into a set of fundamental units (morphemes) as well as units of phonology and syntax.
It is a common understanding that signed languages develop relatively quickly, as seen in
cases where those from different backgrounds are able to come together and communicate with
each other. From this, it would make sense that when spoken languages were in their early years
of development, they too developed relatively quickly, as is the case with pidgin and creole
languages. As norms and standards became more commonplace, the pace of development in
both language types slowed. The creation of signs and words still occurs today as new
circumstances become relevant. For example, most ASL users did not have a sign for
“quarantine” until recently because the word was rarely used. Previously, words like this would
be fingerspelled, but as use increases, a sign develops.
Deaf children are a prime example of a group that most often does not receive a full
language model during the critical and sensitive periods. If a child is exposed to a full language
model early in life, they will be more likely to have success with language later on. Hearing
children who learn a pidgin (which becomes a creole) as their first language have a similar
experience. In both cases, the user will begin to develop some sort of grammar system and begin
to make language rules. When these children are later exposed to a full model (i.e., when they
begin school), they often struggle to develop the language properly, since the pathways in their
brains are already accustomed to their own language form. With enough practice, though, it is
possible to make improvements.
Recommendations and Limitations
Signers and speakers both use bodily actions to contribute to meaning. Body movements
and facial expression are more important for correct understanding of signed languages, whereas
in spoken languages other aspects play a role (e.g., tone of voice) [14]. Advancing
technology that can study gesture in both language types would improve the understanding of how
gesture plays a role in language and its development in language types [See Appendix]. It would
also help to standardize what is defined as gesture in studies like this.
Especially when studying language, it is often too difficult to control all aspects of a
study. Oftentimes, a study is simplified by setting aside certain characteristics of language and
looking only at how one variable affects certain outcomes. For example, there are speech
gestures (such as tone fluctuation) that would have to be considered in a comprehensive
comparison, and there is no equivalent in signed languages. In most instances this is an accepted
practice, but when studying language it makes full and accurate conclusions difficult, because
there is often an overlap in effect between multiple variables.
Conclusion
A comparison study of signed and spoken languages reveals an abundance of differences
between the two language types. Primary examples are the modality in which the two language
types are conveyed and the morphology and phonology of each language type. Even so, signed
languages have been analyzed using the same tools used to study spoken languages, without
always needing additional criteria to reach accurate conclusions. In this way, similarities
are also present between the two language types, for example the anatomical similarities in
where language is processed. Highlighting both similarities and differences between signed and
spoken languages is essential for a comprehensive comparison between the two language types.
Acknowledgements
The research for this study was done as an independent study to supply information
about the similarities and differences between signed and spoken languages for Joseph Bochner
(Department Chair, Department of Cultural and Creative Studies, National Technical Institute for
the Deaf). Information about the deaf experience was gathered through conversations with
interpreters and others immersed in deaf and/or hearing culture at Rochester Institute of
Technology, which also guided the research questions and literature review choices.
Appendix
As mentioned, one of the limiting factors in studying sign languages in comparison to
spoken languages is the inability to differentiate between sign and gesture. Because of this, the
two language types can only be compared to a certain extent. To obtain a more accurate
comparison, the languages need to be compared on all aspects. Incorporating gesture studies is
difficult because the definition of gesture is sometimes skewed to develop results that fit more
closely into the realm of what the researcher is attempting to study.
One way in which this issue can be addressed is by developing technology that can help
researchers make a distinction between different movements associated with communication.
One such technology is motion capture systems. Motion capture is typically a system of cameras
that record movement or sensors that detect signals from markers placed on the body performing
the motion. These systems are set around an environment in such a way as to capture as much
data as possible (i.e. every part of the environment will be captured by at least one sensor).
Motion capture systems have become increasingly accurate and complex over the past few years
and their use has become more common in many research areas.
Language researchers have already begun to use motion capture systems to aid in
increasing accessibility for the deaf population. Sign Smith Studio is a commercial scripting tool
and eSIGN is a developing database of ASL scripts for websites [15]. These resources are often
difficult to understand and are considered unnatural. Oftentimes, systems like this use recorded
versions of single signs and concatenate the videos to produce sentences [15]. Transition time,
lack of grammar, and the inability to use facial expression or set up spatial references became
the main concerns about the effectiveness of these systems.
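The transition-time concern can be made concrete with a minimal sketch, invented for illustration rather than taken from the cited systems: the joint-angle values and the four-frame transition length are assumptions, and a real system would track many joints per frame.

```python
# Illustrative sketch: why naive concatenation of pre-recorded sign clips looks
# unnatural, and how inserting interpolated transition frames helps. Each "clip"
# is a list of frames; each frame holds a single joint angle in degrees
# (placeholder values; a real system records many joints per frame).

def concatenate(clip_a, clip_b):
    """Naive splice: the last frame of A jumps straight to the first frame of B."""
    return clip_a + clip_b

def blend(clip_a, clip_b, transition_frames=4):
    """Insert linearly interpolated frames between the clips to smooth the jump."""
    start, end = clip_a[-1], clip_b[0]
    step = (end - start) / (transition_frames + 1)
    transition = [start + step * (i + 1) for i in range(transition_frames)]
    return clip_a + transition + clip_b

sign_one = [10.0, 12.0, 15.0]   # e.g., a wrist angle over three frames
sign_two = [40.0, 42.0, 45.0]

print(concatenate(sign_one, sign_two))  # abrupt 15 -> 40 jump between signs
print(blend(sign_one, sign_two))        # gradual 15 -> 20 -> 25 -> 30 -> 35 -> 40
```

Motion capture sidesteps this problem at the source: recording whole signed utterances captures the natural transitions between signs, rather than reconstructing them afterward as this sketch does.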
Lu and Huenerfauth looked to address many of the issues seen with these products. The
goal of their work was to develop a wider database of virtual ASL using motion capture systems
[15]. By doing so, they hoped to produce data-driven animations of ASL that could be used to
increase accessibility for those who are deaf, specifically on websites and other environments
where closed captioning or interpreters were not available, inaccurate, or inconvenient. The
motion capture data was then used to develop animation of short stories or articles that had
previously been developed using similar software [15]. The animations developed in this study
were then evaluated by native signers who were able to assess the accuracy and ease of
understanding of the animations produced when compared to these other versions.
The motion capture systems were able to address the issue of transition time and made
the animations more natural than previous attempts using similar methods even though
preliminary use of the system did not allow for accurate facial expression in the animations.
Another benefit of these animations was the ability to include spatial references. This is
important because throughout signing, space is often set up to refer back to later without having
to repeat an explanation of who or what is being discussed every time it is referred to. The
system used an eye gaze tracker, a body suit, and gloves in addition to the motion capture
camera system in order to develop as accurate a representation as possible [15]. The eye gaze
tracker assisted in making accurate spatial references, while the body suit and gloves worn by
the signer, with embedded sensors, allowed for accurate recording of joint angles. Using the
data gained through these devices, a cohesive and fluent animation could be developed.
Although the information presented in this study is not an end-all solution to the problem
of accessibility, the research shows that more complex ASL animation can be achieved through
the use of motion capture systems. The data presented, along with native signers’ preference for
the animation developed in this study versus state-of-the-art systems, shows the promise of
incorporating motion capture technology into language research. Developing this technology
further will allow a wider database of ASL animations to be available for future research. From
there, the hope is that common ground can be found to use this technology to study the
similarities and differences between signed and spoken languages, as well as between different
sign language forms.
References
[1] Müller, Cornelia. “Gesture and Sign: Cataclysmic Break or Dynamic Relations?.” Frontiers
in psychology vol. 9 1651. 10 Sep. 2018, doi:10.3389/fpsyg.2018.01651
[2] Sandler, Wendy. “The Body as Evidence for the Nature of Language.” Frontiers in
psychology vol. 9 1782. 29 Oct. 2018, doi:10.3389/fpsyg.2018.01782
[3] Elliott, Eeva A, and Arthur M Jacobs. “Facial expressions, emotions, and sign
languages.” Frontiers in psychology vol. 4 115. 11 Mar. 2013,
doi:10.3389/fpsyg.2013.00115
[4] Campbell, Ruth, et al. “Sign Language and the Brain: A Review.” OUP Academic, Oxford
University Press, 29 June 2007, academic.oup.com/jdsde/article/13/1/3/500594.
[5] Sandler, Wendy. “THE PHONOLOGICAL ORGANIZATION OF SIGN
LANGUAGES.” Language and linguistics compass vol. 6,3 (2012): 162-182.
doi:10.1002/lnc3.326
[6] Perniss, Pamela. “Why We Should Study Multimodal Language.” Frontiers, Frontiers, 11
June 2018, www.frontiersin.org/articles/10.3389/fpsyg.2018.01109/full.
[7] Goldin-Meadow, Susan, and Diane Brentari. “Gesture, Sign, and Language: The Coming of
Age of Sign Language and Gesture Studies.” Behavioral and Brain Sciences, vol. 40, 2017,
p. e46., doi:10.1017/S0140525X15001247.
[8] Ferrara, Lindsay, and Gabrielle Hodge. “Language as Description, Indication, and
Depiction.” Frontiers in psychology vol. 9 716. 23 May. 2018,
doi:10.3389/fpsyg.2018.00716
[9] Adone, Dany. The Acquisition of Creole Languages : How Children Surpass their Input,
Cambridge University Press, 2012. ProQuest Ebook Central, April 17, 2021.
[10] “Sign languages.” In A Bibliography of Sign Languages, 2008-2017. Leiden, The
Netherlands: Brill. Retrieved Apr 24, 2021.
[11] Singleton, Jenny L, and Elissa L Newport. “When learners surpass their models: the
acquisition of American Sign Language from inconsistent input.” Cognitive
psychology vol. 49,4 (2004): 370-407. doi:10.1016/j.cogpsych.2004.0
[12] Ruben, R J. “A time frame of critical/sensitive periods of language development.” Indian
journal of otolaryngology and head and neck surgery : official publication of the
Association of Otolaryngologists of India vol. 51,3 (1999): 85-9. doi:10.1007/BF02996542
[13] Fischer, Susan. (2015). Sign languages in their Historical Context. 10.13140/2.1.2085.5683.
[14] Jill P. Morford, Insights to language from the study of gesture: A review of research on the
gestural communication of non-signing deaf people, Language & Communication, Volume
16, Issue 2, 1996, Pages 165-178.
[15] Lu, Pengfei & Huenerfauth, Matt. (2010). Collecting a motion-capture corpus of American
sign language for data-driven generation research. 89-97.