Examining the Relationship between Language and Music
Curtis Prichard
MUS-E619 Spring 2014
Abstract
Though music and language are typically associated with one another, cognitive
research remains inconclusive on the nature of connections between the two. Despite this
uncertainty, a rich body of studies has been conducted: on the effects of language
learning on music (Chen-Hafteck et al, 1999), of music learning on language
(Chandrasekaran and Kraus, 2010), on cross-domain effects (Bidelman et al, 2011), and on
cognitive inquiries of various types, from broad comparisons between language and
music (Besson and Schön, 2001; Rebuschat, 2012), to neurological processing of music
and language (Fedorenko et al, 2009; Patel et al, 1998), to developmental comparisons
(McMullen and Saffran, 2004). From these research studies emerges the idea that music
and language are somehow linked, and the systems used to process both of them are
simultaneously alike and disparate. Researchers have also delved into the processes
involved in both music and language acquisition, including studies on enculturation
(Hannon and Trainor, 2007; Morrison et al, 2008), and music acquisition and its effects
on broader student experiences (Trainor and Corrigall, 2010). However, comparatively
little research exists in the realm of educational effects of utilizing language learning
techniques to teach music. The few studies that exist have favored showing music
learning occurring in constructivist environments, such as children teaching themselves
in social settings (Koops, 2010), using holistic approaches to teaching music (Whitaker,
1994), and making sense of student motivation to learn music through Gardner’s SocioEducational model (MacIntyre et al, 2012). From the impact that the Suzuki method has
had on musical training, it seems safe to assume that more research into cognitive
benefits of learning music without notation would be an excellent addition to the current
body of work. Additionally, a broadening of cognitive topics to include studies of jazz
musicians’ improvisation and poets’ ambiguities of language could shed further light on
the systems used by the human brain to interpret and create works of art in these
mediums.
Music – a language or not?
“The universal language” – could there be a better-known descriptive phrase about music
used by music advocates, educators, politicians, and even laymen? Yet despite its
ubiquity, we rarely take the time to examine what exactly we mean by the phrase. The
word language implies that meaning is exchanged, and “universal” would seem to
indicate that this language, unlike others, can have its meaning understood by everyone.
It also brings up an interesting point – why are paintings not the universal language? Why not all art, rather than simply music? These questions get at the point that the phrase is really trying to express: that there is something unique about music that makes it a universal language, where other mediums of art fall short.
What could this uniqueness be? To arrive at any sort of answer, the words
themselves must first be defined in a more rigorous manner. Language can be defined in
a number of ways from a philosophical standpoint, with the broadest being, “the interplay
of sound and meaning” (Jakobson, 1942). From that perspective, no argument is needed
to make music into a language, for by its nature music fits into this definition with no
difficulty. For the purposes of pursuing cognitive studies, however, more useful scientific
definitions are needed. It is not easy to find a single, concise definition of what language
is within the psycholinguistic community. The most basic definition is that language is
any human-based, complex system of communication. Of course, from this definition
arise a number of possible inquiries, the first being on the importance of semantics in any
language. Semantics concerns the ways in which languages attempt to use logic to overcome ambiguity in favor of precise communication. This is where music and language often diverge in function, for whatever music’s expressive qualities may be, they are far from precise in their expression. Ambiguity exists in all art forms, as Nietzsche suggested: “When you stare long into the abyss, the abyss stares back into you” – meaning here that interpretations of music depend heavily on the one doing the interpreting. Music and language do share this ambiguity at times, such as
when poets write in a manner that is far from clear. A cursory reading of anything by E.E.
Cummings, Virginia Woolf, or William Faulkner, to name a few, makes it clear that
despite its proclivity for precision, language can be manipulated in many ways.
Then we must define music. Again we can easily state a broad definition, perhaps
no more specific than the same as for language: “the interplay of sound and meaning”
(Jakobson, 1942). For clarification, we might turn to Jackendoff’s broad comparative
definitions of language and music: “… language conveys propositional thought, and
music enhances affect” (Jackendoff, 2009). While neither of these definitions is perfect,
they provide a good starting point for further study. As the aphorism of music’s
universality is examined, it is important to remember that musical syntax, content, and
structure depend upon cultural elements. The concept of music being the universal
language is decidedly false, then, as cultural forces affect understanding of music in
much the same way that they affect the understanding of languages. Music itself may be
universal in that it appears in all human cultures, but the same may be said of language as
a broad concept.
Confronted with these contradictions and misunderstandings about the nature of
language and music, psycholinguists and music cognition researchers have used many
methodologies to examine these two fields. In this article, research done on the cognitive
processes involved in perception, understanding, and performance of language and music
will be examined, followed by a survey of literature on educational studies involving
acquisition and brain plasticity. Each of these topics contains a diverse body of research,
with the cognitive aspects being the most thoroughly explored, while educational
implications are still being treated with caution. The complexity of the phenomena
involved in language and music alike makes the task of fully understanding their
relationship a difficult one.
COGNITION
Ray Jackendoff has written extensively on the subject of similarities and
dissimilarities between music and language (Jackendoff, 2009). For Jackendoff, the key
to understanding these two uniquely human activities is in identifying aspects of
cognition that belong to both music and language, and not to any other activities.
Jackendoff spends time debunking the idea that music and language can convey anything
close to similar meanings, mostly through qualitative and philosophical means. He argues
that the specificity of language is what defines it, while music’s defining quality is its
ability to enhance affect. From this and other points Jackendoff concludes that music and
language do not share enough unique cognitive processes to be considered linked with
anything other than extreme caution.
Jackendoff concludes his article by saying that he and Patel examine similar
evidence but arrive at different conclusions. One of Patel’s studies, a study of event-related potential, revealed similarities in syntactic prediction for both language and music
(Patel et al, 1998). While language syntax may be processed via obligatory dependencies
of certain parts of speech (i.e., verb follows noun), musical syntax is processed through
expectation of musical development. According to Patel, “… it seems quite plausible to
assume that music perception involves continuous prediction and integration of events
into a perceived relational structure in working memory.” Music and language are both
processes of syntactic integration, where incoming stimuli are integrated with working
memory to cognitively process the whole. Nevertheless, the syntactic relationships
between parts of speech are much more rigid than those in elements of music. For this
study, Patel used the P600, an event-related brain potential elicited by words that are difficult to integrate into sentence structure. Using the P600 as his measure, Patel played chord progressions that at one point contained one of three options: a chord within the key, a chord from a somewhat related key, or a chord from a distantly related key. To
control, he also ran similar tests with grammatical examples. Although the grammatical
examples gave the highest P600 response, the musical ones also generated responses of a
similar magnitude. Patel argues that this demonstrates the fact that music and language
draw on similar neurological resources for syntactical integration.
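To make the design concrete, the sketch below shows, in illustrative Python, the kind of comparison Patel’s paradigm implies: averaging voltage over a post-stimulus window for each chord condition. The sampling rate, epoch length, amplitudes, and simulated data are hypothetical stand-ins, not values from Patel’s study.

```python
import numpy as np

# Hypothetical epoched EEG: trials x samples per condition, time-locked to
# the target chord. Sampling rate and epoch length are assumed values.
rng = np.random.default_rng(0)
fs = 250                       # samples per second (assumed)
n_trials, n_samples = 40, 300  # 1.2-second epochs (assumed)

def simulate_condition(p600_uv):
    """Fake epochs: noise plus a positivity peaking ~600 ms post-stimulus."""
    t = np.arange(n_samples) / fs
    bump = p600_uv * np.exp(-((t - 0.6) ** 2) / (2 * 0.05 ** 2))
    return rng.normal(0.0, 2.0, (n_trials, n_samples)) + bump

def mean_p600(epochs, window=(0.5, 0.8)):
    """Mean voltage in a 500-800 ms window, averaged across trials."""
    lo, hi = (int(w * fs) for w in window)
    return epochs[:, lo:hi].mean()

conditions = {
    "in-key chord": simulate_condition(0.5),
    "nearby-key chord": simulate_condition(2.0),
    "distant-key chord": simulate_condition(4.0),
}
for name, epochs in conditions.items():
    print(f"{name:18s} mean P600-window amplitude: {mean_p600(epochs):.2f} uV")
```

On Patel’s account, the graded pattern this sketch fakes – larger positivities for harmonically distant chords – is what parallels the response to hard-to-integrate words.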
To bridge these two arguments, McMullen and Saffran focused on music and
language development, arguing that children encounter both fields aurally and, initially,
without any connection to meaning (McMullen and Saffran, 2004). Humans seem
programmed to respond to sound in all its various forms, and the brain is left to parse out
which sounds are more or less important, based on frequency of repetition. In addition to
children learning how to speak by this type of repetition, musical structures are learned
through it. From as young as six months, infants have shown a preference for consonant musical intervals over dissonant ones. McMullen and Saffran suggest that a more beneficial way
to study music and language is to examine them from a perspective of modularity. This
field is young and as such prone to disagreement – the authors state that although modularity accounts typically place music and language in separate cognitive domains, a number of issues remain unresolved. A perplexing one is whether humans
are born with separate cognitive domains for music and language, or if this separation
occurs through experience.
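The “frequency of repetition” mechanism McMullen and Saffran invoke is often formalized, in Saffran’s statistical-learning tradition, as the transitional probability between adjacent units. A toy sketch, using an invented syllable stream rather than data from any cited study:

```python
from collections import Counter

def transitional_probabilities(units):
    """P(next | current) for adjacent units in a sequence. High-probability
    transitions tend to be grouped into 'words' or motifs; low-probability
    transitions tend to mark boundaries."""
    pair_counts = Counter(zip(units, units[1:]))
    first_counts = Counter(units[:-1])
    return {pair: n / first_counts[pair[0]] for pair, n in pair_counts.items()}

# Invented stream built from two repeated "words", "bida" and "kupa":
# within-word transitions should score near 1.0, across-word ones lower.
stream = "bidakupabidakupakupabida"
syllables = [stream[i:i + 2] for i in range(0, len(stream), 2)]
for (a, b), p in sorted(transitional_probabilities(syllables).items()):
    print(f"P({b!r} | {a!r}) = {p:.2f}")
```

The same computation applies unchanged to sequences of notes or chords, which is part of the appeal of a modality-general account of early learning.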
A theme that recurs throughout these three studies is the recognition that music and language both contain rhythmic elements. Even Jackendoff somewhat reluctantly
acknowledges this point. McMullen and Saffran emphasize the similarities of
development of prosody in language and music, and of course Patel goes even further in
his argument for the similarities of underlying neural structures.
Another area that has enticed researchers concerns brain plasticity and language
and music’s impact on it. Studies have shown that speakers of tonal languages and musicians track pitch more accurately than non-musicians and speakers of non-tonal languages (Bidelman et al, 2011). Researchers found that both musicians and Mandarin speakers outperformed non-musicians in the ability to track pitches across whole melodic contours. Musicians fared better than all non-musicians in identifying
discrepancies in intervallic content, while Mandarin speakers outperformed musicians
and non-musicians in identifying lexical tone content. The authors infer that both
language development and music acquisition play an important role in promoting brain
plasticity in terms of pitch representation.
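As a concrete, purely illustrative example of what “tracking pitch across a melodic contour” can mean operationally, deviations between a tracked pitch contour and the stimulus contour are often expressed in cents. The sketch below uses made-up numbers and is not Bidelman et al’s actual analysis pipeline:

```python
import math

def cents(f_measured, f_target):
    """Signed deviation of one frequency from another, in cents
    (100 cents = one equal-tempered semitone)."""
    return 1200 * math.log2(f_measured / f_target)

def contour_rms_error(measured_hz, target_hz):
    """RMS pitch-tracking error across a melodic contour, in cents."""
    devs = [cents(m, t) for m, t in zip(measured_hz, target_hz)]
    return math.sqrt(sum(d * d for d in devs) / len(devs))

# Made-up rising-falling contour (Hz) and one listener's tracked pitches:
target = [220.0, 247.0, 262.0, 247.0, 220.0]
tracked = [221.5, 244.0, 263.0, 249.5, 218.0]
print(f"RMS tracking error: {contour_rms_error(tracked, target):.1f} cents")
```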
Cognitive studies have long shown that music acquisition increases brain
plasticity, and researchers offer a variety of uses for this phenomenon. Chandrasekaran
and Kraus argue that another effect of the brain plasticity caused by music learning is improved perception of speech in children with language-based learning disorders (Chandrasekaran and Kraus, 2010). Specifically, the researchers argue that musical training can aid in the recognition of speech-in-noise. Children with these learning
disabilities (such as dyslexia) suffer from what the authors refer to as noise-exclusion
deficits, where noise-based stimuli often overwhelm their senses. The authors cite a
number of sources indicating that musical learning enhances not only musical hearing,
but also auditory processing of speech and other language skills. They theorize that
because of the brain plasticity encouraged by music learning, patients with language-based disabilities would benefit from musical training. The exact training needed to
combat such neurological problems remains unclear at this time.
Kraus and Strait wrote an excellent review of the research that has been done on brain plasticity and musical training as it relates to enhanced speech processing (Kraus and Strait, 2011). In it, the authors argue that music training strengthens the underlying
cognitive processes involved in music and language perception, in addition to enhancing
auditory discrimination.
The effect of native language on musical compositions presents another potential
window into the brain’s efforts to encode both systems effectively. Studies have shown
that composers from nations with different languages tend to write rhythms that resemble the prosodic patterns of their native speech (Patel and Daniele, 2003; Huron and Ollen, 2003). Both studies sampled a wide array of French, British, and other music, with Patel and Daniele examining nearly two hundred melodies from each country, and Huron and Ollen using over two thousand. This realm
of subconscious effects that cognitive processes of different systems have on each other
remains largely unexplored, and could provide clues as to the uniqueness of music and
language. It would be illuminating to examine whether there are other faculties that
influence one another subconsciously in this manner.
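The rhythmic measure behind these studies is the normalized Pairwise Variability Index (nPVI), which scores the durational contrast between successive events; stress-timed languages like English yield higher values than syllable-timed languages like French, and the cited studies found the corresponding split in the two nations’ melodies. A minimal implementation, with invented duration sequences:

```python
def npvi(durations):
    """Normalized Pairwise Variability Index of a duration sequence:
    nPVI = 100/(m-1) * sum_k |d_k - d_(k+1)| / ((d_k + d_(k+1)) / 2)
    Higher values mean more contrast between successive durations."""
    terms = [abs(a - b) / ((a + b) / 2)
             for a, b in zip(durations, durations[1:])]
    return 100 * sum(terms) / len(terms)

# Invented duration sequences (in beats), for illustration only:
even_rhythm = [1, 1, 1, 1, 1, 1]              # very even -> nPVI near 0
contrastive = [1.5, 0.5, 1.5, 0.5, 2.0, 0.5]  # long-short -> high nPVI
print(f"even rhythm: nPVI = {npvi(even_rhythm):.1f}")
print(f"contrastive: nPVI = {npvi(contrastive):.1f}")
```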
EDUCATION
The research that has been done indicates a great deal of interest in the neurological,
cognitive aspects of language and music processing. While this field holds a great deal of
interest in its own right, the study of music acquisition and education from a linguistic
perspective could provide great insights for educators. Music educators who teach
beginning band and other early instrumental ensembles might be able to benefit the most.
Teaching music as a language could transform the way that school music programs
attempt to engage learners. Currently a trend exists in band pedagogy to teach reading
music and performing music simultaneously. Based on research of infant language
cognition, greater emphasis should likely be placed on listening and performing in the
initial stages, adding reading at a later time. This is of course recognizable to any string
player as resembling the Suzuki method at least superficially.
It is beneficial to make the distinction between knowledge or comprehension of
language and performance of it, a distinction that is seen in musical cognition as well.
Infants are able to recognize the meaning behind words much earlier than they are able to
perform the act of speaking on their own. Just as normal childhood development of language proceeds from mere comprehension to performance, so too do most people come to perform music, at least in the loosest sense of the word. Music performances may be altogether
private (singing in the shower, humming while working), and they are typically only
recreations of musical memories, unlike speech, which may be improvised more freely.
Claims such as the following would benefit from clarification along these comprehension-performance dimensions: “If opportunities for practicing music (and
speech) are provided to children before the learning window closes the child seems to
learn the necessary implicit rules for music and language” (Heller and Athanasulis,
2002). In this study, Heller and Athanasulis used tests to examine whether a leveling off
of learning growth appears in students aged six to ten. Though they began the research
with firm ideas about learning window theory’s application to music and its relations to
language acquisition, research indicated that music and language may have different
critical periods. Compared to language, music seems to have a larger critical period than
was previously thought, and Heller and Athanasulis suggest that a revision of the learning
window theory associated with both language and music may be in order.
Music and language learning seem to exist on a spectrum of influence, wherein
either can have positive transfer to the other. In all inquiries of language and music
acquisition, studies of children are most illuminating. Chen-Hafteck et al (1999) studied performances of children whose native language was either Sotho or English and found evidence that native language affects the approach to singing. The
findings support past research that has shown native speakers of tonal languages tend to
sing in a detached manner.
Fruitful research into diverse methods of teaching music can be found. Of
particular interest is a qualitative study by Koops (2010), conducted in the Gambia. The
title of the article, “They teach themselves,” is drawn from an interview conducted with a
master drummer in Baatiikunda. By that he means that the children acquire musical skills
through self-directed practice and by helping each other learn, rather than having a master
teacher oversee the process. Koops breaks down the learning she observed among the
children into three stages – listening, observing, and doing. The listening referred to aural
activities alone, while observing meant watching other associated behaviors, such as
clapping or dancing, which went along with the music. The final stage was doing the
activities – always approached in a holistic manner – and involved trial and error. Koops says that the process was immersive and rarely broken down into smaller, more manageable chunks.
An article that does an excellent job of highlighting the importance of student-driven learning in music education is Whole Language and Music Education by
Whitaker (1994). Whole Language is a type of language teaching that resembles
constructivism: Students help direct the learning, which takes place in a social
environment, and greater emphasis is placed on authenticity of experience. For example,
students in whole language classrooms do not use basal texts to facilitate learning reading
and writing – instead they utilize “real literature,” meaning writing outside the realm of
the strictly pedagogical.
Another take on musical learning and language learning comes from MacIntyre,
Potter, and Burns (2012), who discuss the application of Gardner’s socio-educational model of second-language acquisition to music learning motivation. MacIntyre et al
argue that similar motivations occur in the two disciplines, which Gardner divided into
four categories: situational context, the social milieu, individual differences, and
outcomes.
DISCUSSION AND FUTURE DIRECTION
The notion of a linkage between music and language is one of the most deeply ingrained in
society at large. Research done thus far indicates that while differences do exist, it is
likely that there are underlying neural structures facilitating the cognition of each (Fedorenko et al, 2009). From a cognitive standpoint, future research looking into the similarities of
rhythm in poetry and music, as well as rhythmic variation across cultures with different
languages, might be worth exploring. Poetry and music studies and articles typically
relate to composers choosing libretti (Salmon, 1919), to the aesthetic relationship between the two (Wellek, 1962), or to difficulties of translation (Raffel, 1964-5). Aside from being more
philosophical than scientific, another trait these articles share is that they are dated.
Modern studies could include elements of cognitive involvement in both poetry and
music in ways that could not have been imagined fifty years ago. The fact is that some
poetry shares many of the same qualities as music; twentieth-century poetry in particular displays a comparable ambiguity with regard to meaning. The poet T.S. Eliot comes to mind as a reference point: his writings contain allusions, imagery, and at times an almost stream-of-consciousness quality, all of which deliberately obfuscate meaning – a tactic that goes against Jackendoff’s explicit definition of language. Searching for commonalities in
practice could guide researchers to spotting more of the underlying structures that music
and language might have in common. Patel’s inquiry into syntactic processing holds
much promise, as he does an excellent job of showing that the dissimilarities between
language and music are often superficial (Patel, 1998).
More focus could be directed towards practical studies that examine the effects of
Suzuki method and other such enculturation methodologies on musical acquisition. It
would also be of benefit to study students who begin music studies later than preschool
and elementary school, as this area has not been fully explored. Early development and
the critical window are well-documented and studied, but the effects of motivation on a
student’s musical achievement are worth pursuing. Such studies would have to be long-term and would require a tremendous amount of resources to do properly, but they may
help further delineate the concepts of achievement and ability.
In addition to researching the connections between linguistic pedagogy and
musical pedagogy at the beginning stage, studying the nature of jazz improvisation would
be an excellent line of inquiry. Jazz musicians frequently refer to jazz as a language, and
most musicians, whether jazz performers or not, would agree that improvisation requires
a mastery of the hierarchical systems at play in a piece of music. Improvisation is
frequently regarded as one of the most difficult tasks in music, and yet as performers of
language, we improvise much of the time. Jazz players refer to the commonalities of a
particular performer’s improv as his or her “vocabulary,” and a great deal of time is spent
learning patterns. The uniqueness of an improvised solo then comes from pairing together
previously separated patterns. This process on the surface seems different from language
performance. However, within language we also see a gravitational pull towards using
colloquialisms and familiar phrases, suggesting that what we call speaking in language
and what we call improvisation in jazz and classical music may be more linked than we
realize.
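A toy sketch of the “vocabulary” idea: an improvised line assembled by recombining a small library of learned patterns, one chosen to fit each chord. The pattern library and chord labels below are invented for illustration, not a model of any particular player.

```python
import random

# Hypothetical "vocabulary": short learned licks, indexed by the chord
# quality they fit. Pitches are given as scale degrees for simplicity.
vocabulary = {
    "min7": [[1, "b3", 5, "b7"], ["b3", 5, "b7", 9]],
    "dom7": [[1, 3, 5, "b7"], [5, "b7", 9, 3], ["b7", 5, 3, 1]],
    "maj7": [[1, 3, 5, 7], [7, 5, 3, 1], [3, 5, 7, 9]],
}

def improvise(progression, rng=random):
    """Build a solo by recombining stored licks, one per chord. The novelty
    lies in the ordering and pairing of familiar patterns, not in inventing
    new material for every chord."""
    return [rng.choice(vocabulary[chord]) for chord in progression]

random.seed(3)
ii_v_i = ["min7", "dom7", "maj7"]  # a ii-V-I progression
for chord, lick in zip(ii_v_i, improvise(ii_v_i)):
    print(f"{chord}: {lick}")
```

On the analogy drawn above, the vocabulary dictionary plays the role of a speaker’s stock of colloquialisms and familiar phrases.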
Several other topics related to pedagogy may lead to greater understanding of the
interaction between language and musical expression. While mediocre music teachers
may overuse the simplistic urging, “Play with more expression,” most pedagogues utilize
a wide range of vocabulary to convey to students what the expression ought to be.
Conductors in particular offer many linguistic representations of musical ideas, usually
through symbolic imagery. It would be fascinating to study brain activity during performance, first without any sort of linguistic prompt, then with prompts that vary in
nature. Once again, this type of research might lead to greater insight into the use of
language as a tool to teach music.
REFERENCES
Besson, M., & Schön, D. (2001). Comparison between Language and Music. Annals of
the New York Academy of Sciences, 930(1), 232-258. Retrieved May 1, 2014,
from http://www.ncbi.nlm.nih.gov/pubmed/11458832
Bidelman, G. M., Gandour, J. T., & Krishnan, A. (2011). Cross-domain Effects of Music
and Language Experience on the Representation of Pitch in the Human Auditory
Brainstem. Journal of Cognitive Neuroscience, 23(2), 425-434. Retrieved May 2,
2014, from http://www.mitpressjournals.org/doi/abs/10.1162/jocn.2009.21362
Chandrasekaran, B., & Kraus, N. (2010). Music, Noise-Exclusion, and Learning. Music
Perception: An Interdisciplinary Journal, 27(4), 297-306. Retrieved May 6, 2014,
from http://www.jstor.org/stable/10.1525/mp.2010.27.4.297
Chen-Hafteck, L., Niekerk, C. v., Lebaka, E., & Masuelele, P. (1999). Effects of
Language Characteristics on Children's Singing Pitch: Some Observations on
Sotho- and English-Speaking Children's Singing. Bulletin of the Council for
Research in Music Education, 17(141), 26-31. Retrieved May 6, 2014, from
http://www.jstor.org/stable/40318979
Fedorenko, E., Patel, A., Casasanto, D., Winawer, J., & Gibson, E. (2009). Structural
Integration In Language And Music: Evidence For A Shared System. Memory &
Cognition, 37(1), 1-9. Retrieved May 1, 2014,
from http://link.springer.com/article/10.3758/MC.37.1.1#
Hannon, E. E., & Trainor, L. J. (2007). Music acquisition: effects of enculturation and
formal training on development. Trends in Cognitive Sciences, 11(11), 466-472.
Retrieved from http://linkinghub.elsevier.com/retrieve/pii/S1364661307002410
Heller, J., & Athanasulis, M. (2002). Music and Language: A Learning Window from
Birth to Age Ten. Bulletin of the Council for Research in Music Education,
19(153/154). Retrieved May 5, 2014, from http://www.jstor.org/stable/40319135
Huron, D., & Ollen, J. (2003). Agogic Contrast in French and English Themes: Further Support for Patel and Daniele (2003). Music Perception: An Interdisciplinary Journal, 21,
267-271. Retrieved May 7, 2014,
from http://www.jstor.org/stable/10.1525/mp.2003.21.2.267
Jackendoff, R. (2009). Parallels and Nonparallels between Language and Music. Music
Perception: An Interdisciplinary Journal, 26(3), 195-204. Retrieved May 6, 2014,
from http://www.jstor.org/stable/10.1525/mp.2009.26.3.195
Jakobson, R. (1942, December). Six Lectures on Sound and Meaning. Lectures delivered in Boston, MA. Retrieved May 6, 2014, from http://www.marxists.org/reference/subject/philosophy/works/ru/jakobson.htm
Koops, L. H. (2010). “Deñuy jàngal seen bopp” (they teach themselves): Children’s
music learning in the Gambia. Journal of Research in Music Education, 58(1), 20-36. Retrieved from http://jrm.sagepub.com/content/58/1/20.abstract
Kraus, N., & Strait, D. (2011). Playing Music for a Smarter Ear: Cognitive, Perceptual
and Neurobiological Evidence. Music Perception: An Interdisciplinary
Journal, 29, 133-146. Retrieved May 4, 2014,
from http://www.jstor.org/stable/10.1525/mp.2011.29.2.133
MacIntyre, P. D., Potter, G. K., & Burns, J. N. (2012). The socio-educational model of
music education. Journal of Research in Music Education, 60(2), 129-144.
Retrieved from http://www.jstor.org/stable/41653844
McMullen, E., & Saffran, J. (2004). Music and language: A developmental comparison.
Music Perception: An Interdisciplinary Journal, 21(3), 289-311. Retrieved from
http://www.jstor.org/stable/10.1525/mp.2004.21.3.289
Morrison, S. J., Demorest, S. M., & Stambaugh, L. A. (2008). Enculturation Effects In
Music Cognition: The Role Of Age And Music Complexity. Journal of Research
in Music Education, 56(2), 118-129. Retrieved May 5, 2014,
from http://jrm.sagepub.com/content/56/2/118.short
Patel, A. D., & Daniele, J. R. (2003). An Empirical Comparison of Rhythm in Language and Music.
Cognition, 87(1), B35-B45. Retrieved May 5, 2014,
from http://www.sciencedirect.com/science/article/pii/S0010027702001877
Patel, A. (1998). Syntactic Processing in Language and Music: Different Cognitive
Operations, Similar Neural Resources? Music Perception: An Interdisciplinary
Journal, 16(1), 27-42. Retrieved May 6, 2014, from
http://www.jstor.org/stable/40285775
Raffel, B. (1964-5). Music, Poetry, and Translation. The Antioch Review, 24(4), 453-461.
Retrieved May 6, 2014, from http://www.jstor.org/stable/4610629
Rebuschat, P. (2012). Language and music as cognitive systems. Oxford: Oxford University Press.
Salmon, A. (1919). The Relations of Music and Poetry. The Musical Times, 60, 530.
Retrieved May 1, 2014, from http://www.jstor.org/stable/3701758
Trainor, L. J., & Corrigall, K. A. (2010). Music acquisition and effects of musical
experience. Springer Handbook of Auditory Research, 36, 89-127. Retrieved from
http://link.springer.com/chapter/10.1007/978-1-4419-6114-3_4
Wellek, A. (1962). The Relationship between Music and Poetry. Journal of Aesthetics
and Art Criticism, 21, 149-156. Retrieved May 7, 2014,
from http://www.jstor.org/stable/427187
Whitaker, N. (1994). Whole Language and Music Education. Music Educators Journal,
81(1), 24. Retrieved May 2, 2014,
from http://www.jstor.org/stable/3398793