ROBERT SIEGEL, HOST:
Right now, as I'm speaking, your brain is transforming this stream of sounds into meaningful
words and sentences. It's a remarkable achievement, and scientists are just beginning to
understand how it happens.
NPR's Jon Hamilton reports on a study that let researchers watch brain cells performing one
of the earliest steps in processing speech.
JON HAMILTON, BYLINE: Understanding spoken language is something we're so good at
we don't think it's much of an accomplishment. But David Poeppel of New York University
says it is.
DAVID POEPPEL: Imagine for yourself how many different things have to happen for you
just to understand the sentence: I need a cup of coffee. First of all you have to identify all the
different sounds in the background that you don't want, or the competing speakers. And you
have to break it into units. You have to look up the words. You have to combine the words
and generate the correct meaning. And each of those parts has its own subroutines.
HAMILTON: Researchers at the University of California, San Francisco, set out to understand
just one part of the process. Edward Chang, a brain surgeon at UCSF, says they wanted to
know how the brain recognizes the individual sounds that we combine to form words.
EDWARD CHANG: These are what we would consider the building blocks for speech and
language.
HAMILTON: So Chang and a team of researchers studied the brains of six people who were in
the hospital being evaluated for epilepsy surgery. The team placed electrodes on the surface of
each patient's brain, which allowed monitoring of an area that processes speech. Then Chang
says they exposed the volunteers to lots and lots of words.
CHANG: What it involves is actually just listening to a long series of sounds, a lot of them
just sound like they are clips from movies.
HAMILTON: All kinds of voices saying phrases like...
UNIDENTIFIED MAN: And what eyes they were.
HAMILTON: Eventually, the scientists had a record of each volunteer's brain responding to
every sound used in the English language. Then, they studied the data using a sort of slow-motion replay.
UNIDENTIFIED MAN: And what eyes they were.
HAMILTON: This let them see precisely what different brain cells, or neurons, were doing as
each bit of sound passed by. And Chang says they realized that some were responding
specifically to plosives, like the initial puh-sounds in Peter Piper picked a peck of pickled
Peppers. Meanwhile, other neurons were responding to a particular type of vowel sound.
CHANG: We were shocked to see the kind of selectivity. Those sets of neurons were highly
responsive to particular speech sounds.
HAMILTON: Chang says these sounds are what linguists call phonetic features, the most basic
components of speech. There are about a dozen of these features. And they can be combined
to make phonemes, the sounds that allow us to tell the difference between words like dad and
words like bad.
Chang says the finding helps explain how we can process speech so quickly and accurately,
even in a noisy place, or when the speaker has an unfamiliar accent.
CHANG: It's the starting point of thinking about how to build up some better understanding
of how language occurs in the brain. And that's certainly been a long-term passion and interest
of mine.
HAMILTON: The result also could have practical applications.
Nima Mesgarani worked in Chang's lab before moving to Columbia University. He says an
impaired ability to process speech sounds seems to be a part of many disorders, including
dyslexia.
NIMA MESGARANI: In order to help people who are suffering from various speech and
communication disorders, we need to first understand how these processes become impaired.
And the first step toward that goal is to understand how they work normally.
HAMILTON: Mesgarani says knowing how the brain identifies speech sounds also could lead
to better conversations with machines, like ATMs and smartphones. He says artificial speech
processing systems, like Apple's Siri, were actually inspired by research on the brain.
MESGARANI: We've always dreamed of artificial systems that are able to communicate with
us, the same way that we communicate with other humans.
HAMILTON: By speaking and listening. He says knowing precisely how the brain does this
should eventually make Siri and her cousins better listeners.
The new research appears in the online version of the journal Science.
Jon Hamilton, NPR News.
(SOUNDBITE OF MUSIC)
AUDIE CORNISH, HOST:
This is NPR News.