What are children hearing?

Wendy McCracken
and Imran Mulla
University of Manchester
Objectives
• Neurological basis for listening
• Signal to noise and reverberation
• The perceptual effects of hearing loss
• To look at new approaches to understanding the listening environment of children
• To consider ways in which children can be provided with access to good quality sound
Communication through speech
• Primary mode of accessing mainstream
classroom information
• Complex chain of events that can be
affected by the acoustic properties of the
room, the maturity of the listener, the
language skills of the listener, motivation
and level of effort/fatigue
Hearing / Listening
Hearing
• A peripheral process
Listening
• The cognitive effort exerted by the listener to understand the speech signal
• The response is at the level of the cortex
Neurological basis for listening
• Following birth, the brain of a
newborn is flooded with
information from the baby’s sense
organs.
• At birth, each neuron in the
cerebral cortex has approximately
2,500 synapses.
• By the time an infant is two or three years old, the number has grown to approximately 15,000 synapses per neuron (Gopnik et al., 1999).
• This is about twice as many as in an adult brain
Synaptic pruning
• Synaptic pruning eliminates weaker synaptic contacts
while stronger connections are maintained and
strengthened.
• Experience determines which connections will be
strengthened and which will be pruned; connections that
have been activated most frequently are preserved.
• Ineffective or weak connections are "pruned". It is plasticity that enables this process of developing and pruning connections, allowing the brain to adapt itself to its environment.
Neurological development
• Stimulation of the auditory centres of the brain influences the actual growth and organisation of the auditory pathways [Sharma, Dorman and Spahr, 2002]
• The higher auditory brain centres are not fully developed in children, who cannot perform automatic auditory cognitive closure like adults [Bhatnagar, 2002; Chermak & Musiek, 1997]
Ongoing maturation
• Recent studies suggest that complex neurological changes, including integration of information from the senses, continue until the brain is mature, some time in the third decade of life [Giedd, 2004]
Children are not small adults!
• A child does not bring a mature
neurological system to the listening
situation [Bhatnagar, 2002]
• Children do not have the language skills or life experience of adults to allow them to fill in the gaps or infer meaning.
• All children require a quieter room and
louder signal than adults
Dual tasks
• There are many tasks that require infants and
children to process simultaneously presented
auditory and visual information
• It is likely that both overshadowing and response
competition affect the way children respond to
these stimuli.
• There is evidence that executive function is not fully developed until the age of 10 years
Real classrooms
Designed for learning?
Real classrooms as listening
environments
• Reverberation
• Noise
• Distance
Reverberation
• The persistence or prolongation of sound
• In highly reverberant rooms, reflected speech signals are delayed and overlap with the direct sound
• This may mask out the intended message of the speaker (see the sketch below)
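A rough sense of how room treatment changes reverberation can be had from Sabine's classic formula, RT60 = 0.161 V / A. The sketch below is illustrative only: the room dimensions and absorption coefficients are assumptions, not measurements from any real classroom.

```python
# Estimate RT60 with Sabine's formula: RT60 = 0.161 * V / A,
# where V is room volume (m^3) and A is total absorption (m^2 sabins).
def rt60_sabine(volume_m3: float, surfaces: list[tuple[float, float]]) -> float:
    """surfaces: list of (area_m2, absorption_coefficient) pairs."""
    total_absorption = sum(area * alpha for area, alpha in surfaces)
    return 0.161 * volume_m3 / total_absorption

# Hypothetical 9 m x 7 m x 3 m classroom.
volume = 9 * 7 * 3
hard_room = [
    (9 * 7, 0.02),            # bare concrete floor
    (9 * 7, 0.03),            # plaster ceiling
    (2 * (9 + 7) * 3, 0.05),  # painted walls
]
treated_room = [
    (9 * 7, 0.30),            # carpeted floor
    (9 * 7, 0.70),            # acoustic tile ceiling
    (2 * (9 + 7) * 3, 0.10),  # walls with some soft surfaces
]

print(f"Untreated room RT60 ~ {rt60_sabine(volume, hard_room):.2f} s")
print(f"Treated room   RT60 ~ {rt60_sabine(volume, treated_room):.2f} s")
```

With these assumed coefficients the untreated room comes out at several seconds of reverberation while the treated room falls below half a second, which is why carpet and acoustic ceiling tile are such common classroom interventions.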
Noise
• Noise can be defined as any sound other than the signal [in this case speech] that you want to hear
• Classrooms are dynamic learning spaces; noise is generated primarily by pupil activity
Children need the signal to be 15 dB louder than background noise [Ross, 1992]
Source: Institute for Enhanced Classroom Hearing
Acoustic Environment in an Unoccupied Classroom
• Background noise @ 35 dBA (per ANSI Standard 12.60)
• SNR = +15 dB
Acoustic Environment in an Occupied Classroom
• Background noise @ 45+ dBA
• SNR = +5 dB
Source: Institute for Enhanced Classroom Hearing
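The SNR figures above are simply the difference, in dB, between the speech level and the noise level at the child's ear. A minimal sketch, assuming the 50 dBA speech level implied by the slide (35 dBA noise giving +15 dB SNR):

```python
# SNR in dB is the speech level minus the noise level at the listener's ear.
def snr_db(speech_dba: float, noise_dba: float) -> float:
    return speech_dba - noise_dba

SPEECH_DBA = 50  # assumed: implied by 35 dBA noise -> +15 dB SNR above

for label, noise in [("unoccupied (ANSI 12.60)", 35), ("occupied", 45)]:
    snr = snr_db(SPEECH_DBA, noise)
    verdict = "meets" if snr >= 15 else "falls short of"
    print(f"{label}: noise {noise} dBA -> SNR {snr:+} dB, "
          f"{verdict} the +15 dB children need [Ross, 1992]")
```

The 10 dB rise in noise once pupils occupy the room eats two thirds of the margin children require, even before reverberation and distance are considered.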
[Audio demonstration: speech in quiet vs. speech in noise at S/N +6 dB, RT 0.8 s]
It is hard for adults to understand the difficulties children may face, because adults have mature listening skills:
1) the higher auditory brain centres are not fully developed until a child is about 15 years old;
2) children cannot perform automatic auditory cognitive closure like adults (intrinsic vs extrinsic redundancy).
Distance
• Didactic teaching is seldom the approach taken in classrooms
• As the teacher moves around, the level of their voice will vary considerably
• The further a child is from the speaker, the quieter the speech will be (see the sketch below)
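In free field the speech level falls by about 6 dB for each doubling of distance from the talker (20·log10 of the distance ratio). Real rooms are more complex because reflections prop the level up, but the trend holds near the talker. A minimal sketch, assuming free-field conditions and an illustrative 65 dBA at 1 m:

```python
import math

# Free-field inverse square law: level drops 20*log10(d2/d1) dB as
# distance grows from d1 to d2 (about 6 dB per doubling of distance).
def level_at_distance(level_at_ref_dba: float, ref_m: float, dist_m: float) -> float:
    return level_at_ref_dba - 20 * math.log10(dist_m / ref_m)

LEVEL_1M = 65.0  # assumed teacher speech level at 1 m, dBA

for d in (1, 2, 4, 8):  # typical front-to-back classroom distances
    print(f"{d} m: {level_at_distance(LEVEL_1M, 1.0, d):.1f} dBA")
# A child 8 m from the teacher may receive speech ~18 dB quieter
# than a child 1 m away, before any noise or reverberation.
```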
Listening
• “Listening” is the cornerstone of the
educational system.
• Children spend up to 70% of their
school day listening.
• Children are the biggest source of noise
in the classroom.
Listening effort
“When we want to remember something we
have heard, we must hear it clearly
because memory can be only as clear as
its original signal…” [Doidge, 2007]
For any listener, poor acoustics mean that increased effort must be exerted to make sense of speech
Listening effort
• Short-term memory [STM] plays a crucial role in processing speech [Choi, Lotto, Lewis et al., 2008]
• Where children are required to divide attention, they may experience limitations due to STM constraints [Nittrouer and Boothroyd, 1990]
Complex working memory
• Defined as the ability to manipulate and
store material simultaneously
• This shows a steep developmental slope
up to the age of 16 years
• All children attending schools require a
good acoustic environment if they are to
achieve their potential
Children who have a hearing loss
• Face very specific challenges relating to
the effects of deafness on the auditory
system
• Threshold elevation
• Dynamic range reduction
• Discrimination loss
• Increased susceptibility to noise
[simulation]
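To make threshold elevation and dynamic range reduction concrete, the sketch below compares the usable listening range of a typical listener with that of a hypothetical moderate loss; all the numbers are illustrative assumptions, not clinical data.

```python
# Hypothetical illustration of threshold elevation and dynamic range
# reduction. A listener hears only between their threshold and their
# loudness discomfort level (LDL); hearing loss raises the threshold
# far more than the LDL, squeezing speech into a narrower usable range.
def usable_range(threshold_db: float, ldl_db: float) -> float:
    return ldl_db - threshold_db

SPEECH_BAND = (35, 65)  # rough span of conversational speech levels, dB HL

for label, threshold, ldl in [("typical hearing", 0, 100),
                              ("moderate loss (hypothetical)", 50, 105)]:
    rng = usable_range(threshold, ldl)
    audible = max(0, SPEECH_BAND[1] - max(SPEECH_BAND[0], threshold))
    print(f"{label}: dynamic range {rng} dB, "
          f"audible speech span {audible} of {SPEECH_BAND[1] - SPEECH_BAND[0]} dB")
```

In this toy example the moderate loss halves the listener's dynamic range and cuts the audible portion of speech from 30 dB to 15 dB, before noise or reverberation remove any more.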
• AUDIBILITY means that the speech is "heard", but not heard clearly enough to distinguish specific speech sounds.
• AUDIBILITY is carried by vowels: high-energy, low-frequency speech sounds. The low frequencies of 250 Hz and 500 Hz carry 90% of the power of speech, but only 10% of the intelligibility.
• INTELLIGIBILITY means that the listener heard clearly enough to identify critical word/sound distinctions.
• INTELLIGIBILITY is carried by consonants: low-energy, high-frequency speech sounds. The frequencies of 2000 Hz and 4000 Hz carry 90% of the intelligibility of speech, but only 10% of the power. They are very weak speech sounds.
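This power/intelligibility imbalance can be expressed as a crude, articulation-index-style calculation. The sketch below uses only the round 90/10 figures from the slide and is purely illustrative:

```python
# Crude articulation-index-style illustration using the slide's round
# numbers: low frequencies carry 90% of speech power but 10% of
# intelligibility; high frequencies the reverse.
BANDS = {
    "low (250-500 Hz)":    {"power": 0.9, "importance": 0.1},
    "high (2000-4000 Hz)": {"power": 0.1, "importance": 0.9},
}

def predicted_intelligibility(audible: dict[str, bool]) -> float:
    """Sum the importance of the bands the listener can actually hear."""
    return sum(b["importance"] for name, b in BANDS.items() if audible[name])

# A high-frequency loss removes the weak consonant band: most of the
# power remains audible, but most of the intelligibility is gone.
full = predicted_intelligibility(
    {"low (250-500 Hz)": True, "high (2000-4000 Hz)": True})
hf_loss = predicted_intelligibility(
    {"low (250-500 Hz)": True, "high (2000-4000 Hz)": False})
print(f"all bands audible: {full:.0%}; high-frequency loss: {hf_loss:.0%}")
# -> all bands audible: 100%; high-frequency loss: 10%
```

This is why a child can appear to "hear" (the powerful vowels are audible) yet still fail to distinguish words.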
Children using amplification
• Amplification offers a major opportunity to access the speech signal, and 85% of deaf children are in mainstream schools, but:
• It is only as good as its management
• It does not overcome poor acoustics
• Hearing aid users are significantly more adversely affected by poor acoustics than their peers
Children who are vulnerable
A positive listening environment is even
more important for children:
• who are deaf
• who have English as a second language
• who have an auditory processing problem
• who have learning disabilities
• who have attention problems
• who have behaviour problems
• who have developmental disabilities
• who have visual disabilities
Summary
• Classrooms are generally poor acoustic
environments
• Children’s listening skills are not fully
developed until they are 15
• Children need a quieter, less reverberant
learning environment than adults
• Children's executive function is not mature before the age of 10
Children’s listening
environments at home
• Little is known about the listening
environment that young children
experience
• As part of a PhD study we have used
LENA [Language Environment Analysis]
• This uses advanced signal processing
strategies to monitor the natural language
environment of the child
• It provides automatic analysis of data
LENA
• The aim is to gain information on the overall quality of the language environment and the general developmental status of the child [Oller et al., 2010; Xu et al., 2008]
• It is being used by some key researchers in the field [Yoshinaga-Itano, 2010; Stremel-Thomas, 2010; Vohr, 2010]
LENA
• Device weighs 60 grams
• Single microphone
• Data is recorded over the child’s waking
hours, downloaded to USB
• Automated language software analysis
• The recording is analysed through an iterative modelling process which segments the data
The segment components include:
• Male and female adult
• Key child
• Other child
• Overlapping speech, noise, electronic noise, silence and segments with very low inter-unit variation [Ford et al., 2008]
LENA report
The analysis produces detailed reports including:
• Adult and child word counts
• Conversational turn counts
• Time-specific information
• Language-specific measures such as mean length of utterance
• Data can be displayed in month, day, hour and 5-minute segments
(LENA Research v3.1.2)
Acoustic category: Description
• Silence: quiet or vegetative sounds or silence
• Background noise: rattles, bumps and other non-human signals
• TV: audio from television, radio and other electronic noise
• Distant: audio typically coming from six or more feet away from the DLP*
• Meaningful: usable, distinguishable audio that is included in the reported information
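LENA's real ITS file format is proprietary and far richer, but a toy tally over invented, hand-labelled segments shows the kind of per-category summary the reports contain:

```python
from collections import Counter

# Hypothetical (invented) labelled segments: (category, duration_seconds).
# Real LENA ITS files are far more detailed; this only sketches the idea
# of summarising a day's recording by acoustic category.
segments = [
    ("Meaningful", 40), ("Silence", 120), ("TV", 300),
    ("Distant", 60), ("Meaningful", 25), ("Background noise", 15),
]

time_per_category = Counter()
for category, seconds in segments:
    time_per_category[category] += seconds

total = sum(time_per_category.values())
for category, seconds in time_per_category.most_common():
    print(f"{category:>16}: {seconds:4d} s ({seconds / total:.0%})")
```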
With special thanks to Imran Mulla
LENA
The potential of using LENA is considerable:
• to identify child/parent interaction
• as a counselling tool
• to identify noisy situations where FM would be useful
Caveat: there are challenges, as LENA is not designed to measure benefit from using FM
Typical classrooms
• Listening is the basis for the key skill of literacy
• Classrooms are acoustically hostile for children [+6 dB]
• Dehaene [2009]: 20,000 hours of listening as a basis for reading
• Pittman: children with hearing loss require three times the exposure to learn new words and concepts, due to the reduced acoustic bandwidth caused by the hearing loss (see the sketch below)
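A back-of-envelope check of what these figures imply; the 10 hours of daily language exposure is my assumption, not from the slide:

```python
# Back-of-envelope: how long does it take to accumulate Dehaene's
# 20,000 hours of listening? Assumes ~10 waking hours of language
# exposure per day (an assumption for illustration only).
HOURS_NEEDED = 20_000
HOURS_PER_DAY = 10

years = HOURS_NEEDED / (HOURS_PER_DAY * 365)
print(f"~{years:.1f} years of typical exposure")  # ~5.5 years

# Pittman: a child with hearing loss needs ~3x the exposure to learn
# new words, so at the same rate of exposure the same milestone takes
# roughly three times as long.
print(f"~{3 * years:.1f} years with hearing loss at the same exposure rate")
```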
Positive approaches to listening
• Recognise challenges faced by all children
• Improve levels of classroom control,
planned activity sessions and group work
• Use technology to improve situation for all
pupils
• Specifically for hearing aid users: proactive and sensitive management of amplification [mute button]
Dehaene, S. (2009). Reading in the brain: The science and evolution of a human invention. New York: Penguin Group.
Doidge, N. (2007). The brain that changes itself. London: Penguin Books Ltd.
Ford et al. (2008). LTR-03-2: The LENA Language Environment Analysis System: Audio specifications of the DLP-0121. http://www.lenafoundation.org/TechReport.aspx/Audio_Specifications/LTR-03-2
Oller et al. (2010). Automated vocal analysis of naturalistic recordings from children with autism, language delay and typical development. Proceedings of the National Academy of Sciences, 107(30), 13354-13359.
Xu et al. (2008). LTR-04-2: The LENA Language Environment Analysis System: The Interpreted Time Segments (ITS) file. http://www.lenafoundation.org/TechReport.aspx/Audio_Specifications/LTR-04-2
Yoshinaga-Itano, C. (2010). The missing link in the language assessment for children who are deaf or hard of hearing. http://www.lenafoundation.org/pdf/LENA-conf-2010/Presentations/LENA-Conf-2010-C-Yoshinaga-Itano.pdf
Stremel-Thomas, K. (2010). Determining pre-/post-cochlear implant outcomes for young children with deaf-blindness through LENA technology. http://www.lenafoundation.org/pdf/LENA-conf-2010/Presentations/LENA-Conf-2010-Katherine-Stremel.pdf
Vohr, B. (2010). Studies of early language development in high-risk populations. http://www.lenafoundation.org/pdf/LENA-conf-2010/Presentations/LENA-Conf-2010-Betty-Vohr.pdf