Project Summary: Processing Speaker Variability in Lexical Tone Perception

A research project is proposed to investigate how human listeners process speaker variability in speech perception. Speech perception is the process by which a listener interprets the sounds produced by a speaker in order to understand a spoken message. Acoustic-phonetic research over several decades has shown that the same spoken message can vary significantly across speakers, yet listeners readily understand sounds and words spoken by different speakers. How listeners achieve such perceptual constancy despite speaker variability is a foundational issue in speech perception. Research on speaker variability has traditionally focused on segmental features (consonants and vowels) in English; by contrast, research on suprasegmental features (e.g., lexical tones) in non-English languages is relatively scarce.

The proposed project investigates how speaker variability affects the processing of lexical tones in speech perception. In tone languages, lexical tones are functionally equivalent to consonants and vowels. The primary acoustic correlate of lexical tone is fundamental frequency (F0). Because F0 range varies across speakers, a phonologically high tone produced by one speaker can be acoustically equivalent to a phonologically low tone produced by another speaker; conversely, the same tone produced by two speakers can be acoustically distinct. Objective 1 of this project is to examine how listeners estimate relative F0 height without cues typically considered necessary for speaker normalization. Because pitch perception is also integral to music perception, the role of musical background in F0 height estimation will also be explored. Objective 2 is to examine the effect of speaker variability, relative to other sources of acoustic variability, on native and nonnative perception of lexical tones. Objective 3 is to evaluate the impact of speaker variability on access to the form and meaning of spoken words in tone and nontone languages. To these ends, three integrated studies will be conducted with established research paradigms in speech perception (tone identification), speech acoustics (acoustic analysis), and spoken word recognition (repetition and semantic/associative priming).

The intellectual merit of this project is that it addresses a foundational issue in speech perception by extending current knowledge to the suprasegmental aspect of speech. Findings from this crosslinguistic project are expected to significantly advance current knowledge in three primary ways. First, suprasegmental features are integral to speech, and they employ a set of acoustic properties distinct from those for segmental features. Any comprehensive theory of speech perception must account for how human listeners process speaker variability in suprasegmental features. Second, the acoustic properties of lexical tones are closely associated with speaker characteristics. Investigating speaker variability in lexical tone perception provides an opportunity to clarify the relative contributions of lexical and speaker-related pitch information to speech perception. Third, tone languages constitute the majority of the world's known languages. Understanding how speaker variability is processed in lexical tone perception is expected to elucidate the universal versus language-specific aspects of speech perception.

This project is expected to make broader impacts in four ways.
First, it will integrate research and education by providing extensive research training to graduate students who aim to pursue research and teaching careers. It will also offer a unique opportunity for undergraduate students in related disciplines to integrate research experiences into their education. Second, the crosslinguistic nature of the project will allow contributions from speakers of languages traditionally underrepresented in speech and language research. Because part of this project will be conducted in Taiwan, it is expected to foster international collaborations between the participating institutions. Third, support for this project will significantly enhance the PI's efforts to strengthen partnerships among speech and language researchers at and beyond Ohio University. Finally, the potential benefits to society can be illustrated by two examples. Understanding similarities and differences between native and nonnative tone perception will contribute to improving the instruction of lexical tones, one of the most challenging aspects of learning a tone language. Knowledge of the relationship between musical pitch perception and lexical tone perception is expected to inspire further research on the intriguing connection between music and language, two of the most significant human activities.