Contents
BSA Annual Conference Programme
Oral presentations
Symposium abstracts
From hair cells to hearing abstracts
PPC symposium abstract
Special Interest Group Session abstracts
Sponsor workshops
Oral Presentation Abstracts: Tuesday afternoon, 1400-1545
Oral Presentation Abstracts: Tuesday afternoon, 1615-1700
Oral Presentation Abstracts: Wednesday afternoon, 1430-1600 (parallel session 1)
Oral Presentation Abstracts: Wednesday afternoon, 1430-1600 (parallel session 2)
Poster Abstracts: General sensation and perception
Poster Abstracts: Hearing loss and hearing aids
Poster Abstracts: Cochlear implants
Poster Abstracts: Balance and vEMPs
BSA Annual Conference Programme
Keele University
1st – 3rd September 2014
Quick guide: Keynotes | Orals | Exhibition | Posters | Clinical workshops | Special events
MONDAY 1st September

Westminster Theatre
13.00–14.00  Introduction to conference; Keynote I: Prof Anne Schilder
14.00–15.45  Symposium: From hair cells to hearing
15.45–16.15  Exhibition, poster viewing and refreshments
16.15–17.15  Symposium continued
17.15–18.00  Twilight lecture: Prof Trevor Cox (author of Sonic Wonderland)
18.00        Social event: wine/nibbles and dinner

Meeting Room 1.098
10.00–12.00  BSA Council meeting (90 min)
14.00–15.45  Sponsor workshop: Oticon
16.15–17.15  Sponsor workshop: Interacoustics

Exhibition Suite and Posters
10.00–12.00  Exhibitor and poster set-up
12.00–13.00  Lunch
15.45–16.15  Exhibition, poster viewing and refreshments
TUESDAY 2nd September

Westminster Theatre
08.30–09.30  Keynote II: Dr Stefan Launer
09.30–10.30  BSA AGM (45 mins)
10.30–12.00  Professional Practice Committee symposium
12.00–13.00  Lunch
13.00–14.00  Keynote III: Prof T. Ricketts
14.00–15.45  Oral presentations
15.45–16.15  Exhibition, poster viewing and refreshments
16.15–17.00  Oral presentations
17.00–17.30  Action on Hearing Loss presentation; BSA Awards Ceremony; poster prizes
19.30–23.00  Conference dinner in Keele Hall

Meeting Room 1.098
11.00–12.00  Sponsor workshop: The Tinnitus Clinic
12.00–13.00  New members workshop; meeting with sponsors
14.00        Sponsor workshop: GN Resound

Meeting Room 1.099
14.00–15.45  Discussion forum (ARIG): "Who defines rehabilitation?"
16.15        Forum continued

Exhibition Suite and Posters
Exhibition, refreshments and poster viewing throughout the day
WEDNESDAY 3rd September

Westminster Theatre
08.30–09.30  UK Biobank update
09.30–10.30  Innovation forum
12.30        Discussion on future plans for the Annual Conference (open to all)
13.30–14.30  Ted Evans Lecture: Prof Corné Kros
14.30–16.00  Oral presentations (parallel session 1: basic science)
16.00        End of meeting

Lecture Room One (0.061)
10.30–11.15  APD SIG update
11.15–12.00  Balance Interest Group (BIG) update
14.30–16.00  Oral presentations (parallel session 2: clinical/translational)

Meeting Room 1.098
10.30–12.30  Journal club: Adult Hearing Screening
12.30–13.20  Electrophysiology SIG meeting

Exhibition Suite and Posters
10.30–11.15  Exhibition, refreshments and poster viewing (exhibition closes)
12.30–13.30  Lunch
BSA Annual Conference 2014
Oral presentations
Monday 1st September
Westminster Theatre
13.00. Opening Keynote I: Scientific evidence within the clinical context
Prof. Anne Schilder, University College London.
14.00. Symposium: From Hair Cells to Hearing
14.00. The sensory hair cell in normal hearing and disease
Prof. Dave Furness, Keele University
14.35. Coding of acoustic information in the auditory pathway
Prof. Alan Palmer, MRC Institute of Hearing Research, Nottingham.
15.10. Integrating acoustic and electric information following cochlear implantation
Dr. Padraig Kitterick, NIHR Nottingham Hearing Biomedical Research Unit.
15.45 Break
16.15. Listening difficulties and auditory processing disorder (APD) in children: proposed
mechanisms
Prof. David Moore, Cincinnati Children’s Hospital, Cincinnati, USA.
17.15. The Twilight Lecture: The Acoustics Behind Sonic Wonderland
Prof. Trevor Cox, Salford University, Manchester.
18.00 Social event – wine and nibbles
Room 1.098
14.00. Sponsor Workshop – Oticon (Alison Stone)
14:00–14:30. Evidence for a new compression strategy for children
14:45–15:15. New clinical tools for paediatric fittings
15:25–15:45. Connectivity for Kids
16.15. Sponsor Workshop – Interacoustics
Tuesday 2nd September, morning
Westminster Theatre
8.30. Keynote II: Hearing Systems in a connected world.
Dr Stefan Launer, Phonak, Zurich, Switzerland
9.30. BSA AGM
10.30. Professional Practice Committee Symposium – How you can make audiology better
10.30. Introduction
Graham Frost, PPC Vice Chair
10.35. What PPC is doing to make audiology better
Dr Daniel Rowan, ISVR, Southampton University.
11.00. The impact of NICE accreditation
Deborah Collis, Associate Director, Accreditation and Quality Assurance, NICE
11.30. PPC documents: What, How, When? It's your turn.
Dr Sebastian Hendricks, Barnet & Chase Farm Hospitals and the RNTNEH
11.55 Summary
Graham Frost.
12.00 – 13.00. Lunch
Room 1.098
11.00. Sponsor Workshop – The Tinnitus Clinic
Mark Williams: Tinnitus pathophysiology and evolving clinical options for therapy.
12.00 – 13.00. New/young members workshop
Tuesday 2nd September, afternoon
Westminster Theatre
13.00. Keynote III: Speech recognition and spatialisation in complex listening environments:
effects of hearing aids and processing
Prof. Todd Ricketts, Vanderbilt Bill Wilkerson Center for Otolaryngology, Nashville, USA
Oral Presentations
14.00 – 14.15. L.F. Halliday, O. Tuomainen, and S. Rosen. Auditory processing deficits in
children with mild to moderate sensorineural hearing loss (Listening problems with hearing
loss)
14.15 – 14.30. J.G. Barry, D. Tomlin, D.R. Moore and H. Dillon. Assessing the role of parental
report (ECLiPS) in supporting assessment of children referred for auditory processing
disorder
14.30 – 14.45. E. Holmes, P.T. Kitterick and A.Q. Summerfield. Do children with hearing loss
show atypical attention during ‘cocktail party’ listening?
14.45 – 15.00. S.A. Flanagan and U. Goswami. Temporal asymmetry and loudness: amplitude
envelope sensitivity in developmental dyslexia
15.00 – 15.15. L. Zhang, P. Jennings and F. Schlaghecken. Learning to ignore background noise
in VCV test
15.15 – 15.30. A. Sarampalis, D. Alfandari, J.H. Gmelin, M.F. Lohden, A.F. Pietrus-Rajman and D.
Başkent. Cognitive listening fatigue with degraded speech
15.30 – 15.45. D. Kunke, M.J. Henry, R. Hannemann, M. Serman and J. Obleser. A link between
auditory attention and self-perceived communication success in older listeners
Break
16.15 – 16.30. Brian C.J. Moore and Sara M.K. Madsen. Preliminary results of a survey of
experiences in listening to music via hearing aids
16.30 – 16.45. J.L. Verhey and J. Hots. Mid-bandwidth loudness depression in hearing-impaired
listeners
16.45 – 17.00. A. King, K. Hopkins and C. J. Plack. The effects of age and hearing loss on
neural synchrony and interaural phase difference discrimination
17.00 - 17.10. Action on Hearing Loss presentation.
BSA Awards Ceremony and Poster Prizes
Meeting Room 1.098
14.00. Sponsor Workshop – GN Resound.
Paul Leeming, Support Audiologist
Meeting Room 1.099
14.00. BSA ARIG Discussion forum on patient-centred care: “Who defines rehabilitation?”
(i) shared decision making
(ii) facilitating change through developing self-efficacy
(iii) outcome measures for clinicians and researchers
Led by Amanda Casey, Dr. Helen Pryce and Dr. Mel Ferguson
19.30-23.00. Conference dinner in Keele Hall
Wednesday 3rd September, morning
Westminster Theatre
8.30-9.30. Large scale hearing studies using the UK Biobank resource
Introduction to UK Biobank session
Prof. Kevin Munro, University of Manchester.
Hearing in middle age: a population snapshot
Prof. Kevin J Munro.
Cigarette smoking, alcohol consumption and hearing
Dr. Piers Dawes, University of Manchester.
Speech in noise hearing, pure tone threshold and cognition
Prof. David Moore, Cincinnati Children’s Hospital, Cincinnati, USA.
Hearing loss and cognitive decline: the role of hearing aids, social isolation and depression
Dr. Piers Dawes
Concluding comments
Dr. Piers Dawes
9.30 – 10.30. Innovation forum
12.30 Discussion on the future plans for the Annual Conference (open to all)
Lecture Room One (0.061)
10.30-11.15. BSA Auditory Processing Disorder SIG update: Onwards and Upwards
Dr Nicci Campbell, ISVR, University of Southampton, and Prof Dave Moore, Cincinnati
Children's Hospital, Cincinnati, USA.
11.15-12.00. BSA Balance Interest Group update
The Video Head Impulse Test and its relationship to caloric testing
Dr Steven Bell, Hearing and Balance Centre, University of Southampton.
Motivational approach to behaviour change in vestibular rehabilitation to improve clinic
attendance
Dr Nicola Topass, Audiology Department, Royal Surrey County Hospital.
Room 1.098
10.30-12.30 BSA Journal Club: Adult Hearing Screening
Led by Dr Cherilee Rutherford, Dr Lorraine Gailley, John Day
12.30 – 13.20. Electrophysiology SIG meeting
12.30 Lunch
Wednesday 3rd September, afternoon
Westminster Theatre
13.30. Keynote IV: Ted Evans Lecture. Adventures in mammalian mechanotransduction:
adaptation, aminoglycosides and anomalous currents. Prof. Corné Kros, University of Sussex.
Oral Presentations
Parallel session 1
14.30 – 14.45. B. Coomber, V.L. Kowalkowski, J.I. Berger, A.R. Palmer and M.N. Wallace.
Changes in nitric oxide synthase in the cochlear nucleus following unilateral acoustic trauma
14.45 – 15.00. M. Sayles and M. G. Heinz. Amplitude-modulation detection and discrimination
in the chinchilla ventral cochlear nucleus following noise-induced hearing loss
15.00 – 15.15. L.D. Orton and A. Rees. Commissural improvement of sound level sensitivity and
discriminability in the inferior colliculi
15.15 – 15.30. S.M. Town, K. C. Wood, H. Atilgan, G. P. Jones and J. K. Bizley. Probing the
physiology of perception: Invariant neural responses in ferret auditory cortex during vowel
discrimination
15.30 – 15.45. R.P. Carlyon, J.M. Deeks, A.J. Billig and J.A. Bierer. Across-electrode variation in
gap-detection and stimulus-detection thresholds for cochlear implant users (MRC Cognition and
Brain Sciences Unit, Cambridge, UK; University of Washington, Seattle, WA, USA)
15.45 – 16.00. N.P. Todd and C.S. Lee. Source analyses of vestibular evoked potentials (VsEPs)
activated during rhythm perception
Lecture theatre 1
Parallel session 2
14.30 – 14.45. E. Heffernan, N. Coulson, H. Henshaw and M. Ferguson. The psychosocial
experiences of individuals with hearing loss
14.45 – 15.00. S.E.I. Aboagye*, M.A. Ferguson§, N. Coulson‡, J. Birchall§‡ and J.G. Barry*.
"You're not going to make me wear hearing aids are you?" Factors affecting adolescents'
uptake of and adherence to aural habilitation
15.00 – 15.15. S. Dasgupta, S. Raghavan, M. O'Hare and L. Marl. X-linked gusher syndrome – a
rare cause of hearing loss in children
15.15 – 15.30. S. Wadsworth, H. Fortnum, A. McCormack and I. Mulla. What encourages or
discourages hearing-impaired children to take part in sport?
15.30 – 15.45. G. Al-Malky, M. De Jongh, M. Kikic, S.J. Dawson and R. Suri. Assessment of current
provision for auditory ototoxicity monitoring in the UK
15.45 – 16.00. D. Hewitt. Everything an Audiologist needs to know about speech in noise
16.00. Conference Close
Keynote speakers
Biographies and talk information
Professor Anne Schilder, ENT surgeon and NIHR Research Professor, leads the evidENT team at
UCL, dedicated to developing high-quality clinical research and promoting evidence-based
management of ENT, Hearing and Balance conditions. She is Professor of Paediatric ENT at UCL's
Ear Institute, the Royal National Throat, Nose and Ear Hospital and University Medical Centre
Utrecht, as well as Visiting Professor at Oxford University. She is an expert in clinical trials; her
RCTs in the field of upper airway infections in children have influenced how global health-care
systems think about the management of these conditions and have been translated into
evidence-based guidelines and health policies. Her evidENT team brings together clinicians and
researchers across the UK. evidENT works with patients, charities, industry and other stakeholders
to ensure patients benefit from new and better treatments and to learn how to improve ENT, Hearing
and Balance services for the future. Her talk is entitled: Scientific evidence within the clinical
context.
Trevor Cox is Professor of Acoustic Engineering at Salford University, where he is a researcher and
teacher in acoustics, signal processing and perception, and is renowned for his media work,
documentaries and science communication to the general public as well as to students. He has won
prestigious awards from the Institute of Acoustics, of which he is a former President. He has written
many books, including Sonic Wonderland: A Scientific Odyssey of Sound, and a substantial body of
scientific literature and research articles. His talk is entitled:
The Acoustics Behind Sonic Wonderland
Sonic Wonderland is a popular science book about the most remarkable sounds in the world. This
talk will look at some of the detailed acoustic science behind a few of the wonders, picking examples
that required first-hand research in preparing the book. It will include solving the mystery of Echo
Bridge, something that first appeared in the Journal of the Acoustical Society of America in the
1940s. The talk will include measurements on the badly tuned musical road in California. To finish,
the talk will outline a search for places with excessive reverberation time. This will include the vast
oil tank in Scotland where Trevor broke the Guinness World Record for the longest echo. The author
returned to the tank this spring to play the saxophone, and he will reveal what it is like to play in a
space with a reverberation time of 75 seconds.
Stefan Launer has been Vice President Science & Technology of Sonova since
April 2008 and joined the Management Board in April 2013. He started his
professional career at Phonak in 1995 in the Research and Development
department where he held various functions. Today he is in charge of managing
the basic science and technology programs in various fields of hearing health
care, the development of core technologies and intellectual property rights.
Stefan Launer studied Physics at the University of Würzburg and in 1995 was awarded a PhD from
the University of Oldenburg for work on modeling auditory perception in hearing-impaired subjects.
His talk is entitled:
Hearing Systems in a connected world.
Over the last two decades hearing instruments have turned into intelligent systems offering a range of
different algorithms for addressing listening needs in specific acoustic environments. More recently,
modern hearing systems have become wirelessly connected to each other to form body-area
networks. These ear-to-ear connections enable new features such as binaural signal processing
techniques to improve communication in complex listening conditions. A second class of
applications connects hearing systems to external audio sources such as phones, remote
microphones, TVs and audio players. Today, ear-to-ear wireless links make it possible to apply
binaural signal processing that mimics the way the auditory system processes sounds binaurally.
These algorithms can offer significant benefit in various difficult listening conditions beyond the
performance of classical noise reduction and directional microphone systems. The second class of
systems is particularly designed to improve communication over larger distances, e.g. in school
environments. A third set of applications of wireless connectivity offers new approaches to service
delivery and new fitting approaches for hearing-impaired people. In this talk I present the state of the
art of hearing instrument technology and discuss future perspectives of these new technology trends.
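As a concrete illustration of the directional processing mentioned above, the sketch below implements the simplest possible two-microphone delay-and-sum beamformer. It is a minimal sketch, not any Phonak/Sonova algorithm; the sampling rate, geometry and inter-microphone delay are illustrative assumptions.

```python
# Minimal delay-and-sum beamformer sketch (illustrative assumptions only).
import numpy as np

fs = 16000
delay_samples = 4            # assumed inter-microphone delay for the target direction
t = np.arange(0, 0.02, 1 / fs)
target = np.sin(2 * np.pi * 1000 * t)

mic_left = target
mic_right = np.roll(target, delay_samples)    # target arrives later at the right mic

# Re-align the right channel to the assumed target direction and sum;
# signals from the target direction add coherently, others do not.
aligned = mic_left + np.roll(mic_right, -delay_samples)
print("coherent gain:", np.max(np.abs(aligned)) / np.max(np.abs(target)))
```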
Todd A. Ricketts, Ph.D., CCC-A, FAAA, is an associate professor and the Director of Graduate
Studies in the Department of Hearing and Speech Sciences at the Vanderbilt Bill Wilkerson Center
for Otolaryngology and Communication Sciences, and Director of the Dan Maddox Hearing Aid
Research Laboratory. Todd has published more than 100 scholarly articles and book chapters. To
date he has presented over 300 scholarly papers, poster presentations, short courses, mini-seminars
and workshops both nationally and internationally. He continues to pursue a federally and industry
funded research program studying various aspects of hearing, hearing aids and cochlear implants.
He was named a Fellow of the American Speech-Language-Hearing Association in 2006, and his
article "Directional benefit in simulated classroom environments" received the Editor's Award from
the American Journal of Audiology at the 2008 ASHA convention. He is also a past editor-in-chief
of the quarterly journal Trends in Amplification, a current associate editor for the Journal of Speech,
Language, and Hearing Research, and the past chair of the Vanderbilt Institutional Review Board.
His talk is entitled: Speech recognition and spatialisation in complex listening environments: effects
of hearing aids and processing.
Corné Kros is Professor of Neuroscience at the University of Sussex. He is noted for his seminal
studies of cochlear hair cell physiology, in particular the properties of spontaneous activity in
pre-hearing inner hair cells and the process of mechanoelectrical transduction by which these cells
detect sound, and his research is at the forefront of basic science in hearing. His current interests lie
in the effects of aminoglycoside antibiotics on hearing, and the development of blocking agents that
might prevent aminoglycoside-induced damage to the hair cells. His talk is entitled:
Adventures in mammalian mechanotransduction: adaptation, aminoglycosides and anomalous
currents
C.J. Kros*§, *Sussex Neuroscience, School of Life Sciences, University of Sussex, Brighton, UK;
§ENT Department, University Medical Center Groningen, Groningen, The Netherlands
Hearing requires sound to be transduced into electrical signals in the brain. The key step in this
mechano-electrical transduction (MET) occurs in about a hundred ion channels atop each of the
auditory hair cells in the cochlea (Kros et al, 1992). Gated by tip links between adjacent stereocilia in
the hair bundles, these MET channels, when open, allow current to flow from the endolymph into the
hair cells, depolarizing them. Just two degrees of hair bundle deflection open 90% of the channels
(Géléoc et al, 1997) to encode the loudest sounds. An adaptation process dependent on influx of Ca2+
ions through the MET channels keeps the hair cells optimally sensitive to small changes in sound
intensity, and has been extensively studied in hair cells from lower vertebrates. Recent evidence
suggests that in mammalian auditory hair cells adaptation is similarly Ca2+-dependent and rapid, with
Ca2+ acting at or near the MET channel itself (Corns, Johnson, Kros & Marcotti, in preparation).
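The deflection-to-opening relationship quoted above (90% of channels open at two degrees of deflection) is often summarised with a Boltzmann activation function. The following is a minimal sketch, not taken from the abstract; the half-activation point and slope are illustrative assumptions chosen to roughly reproduce that figure.

```python
# Two-state Boltzmann sketch of MET channel open probability vs deflection.
# Parameters x0 and s are assumed, not measured values from the Kros lab.
import numpy as np

def p_open(deflection_deg, x0=0.9, s=0.5):
    """Open probability for a hair-bundle deflection in degrees.
    x0: deflection at half-activation; s: slope factor (both assumptions)."""
    return 1.0 / (1.0 + np.exp(-(deflection_deg - x0) / s))

for x in [0.0, 0.9, 2.0]:
    print(f"{x:.1f} deg -> p_open = {p_open(x):.2f}")   # ~0.90 at 2 degrees
```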
The unusual characteristics of ion permeation through the MET channels, favouring the entry of
Ca2+ ions via a vestibule at the extracellular side of the pore, also make the cells vulnerable to
ototoxic damage by aminoglycoside antibiotics such as gentamicin and dihydrostreptomycin. The
aminoglycosides enter hair cells through the MET channels and are effectively trapped once inside
(Marcotti et al, 2005). We are currently screening for compounds that compete with the
aminoglycosides for entry into the hair cells, as a potential means to reduce the ototoxicity of these
otherwise clinically useful drugs.
Heretically, MET currents with unusual properties can be found under a variety of conditions in
which tip links are absent (Marcotti et al, 2014). These large currents occur predominantly in
response to stimuli of the opposite polarity to those that generate normal MET currents and have
slower kinetics. The underlying ion channels appear to lack the vestibule of the normal MET
channels and may be MET channel precursors situated at the base of the stereocilia.
Acknowledgements
Work in the Kros lab is supported by the MRC.
References
Géléoc G.S.G., Lennan G.W., Richardson G.P. & Kros C.J. 1997. A quantitative comparison of
mechanoelectrical transduction in vestibular and auditory hair cells of neonatal mice. Proc Biol
Sci, 264, 611-621.
Kros C.J., Rüsch A. & Richardson G.P. 1992. Mechano-electrical transducer currents in hair cells
of the cultured neonatal mouse cochlea. Proc Biol Sci, 249, 185-193.
Marcotti W., van Netten S.M. & Kros C.J. 2005. The aminoglycoside antibiotic
dihydrostreptomycin rapidly enters hair cells through the mechano-electrical transducer
channels. J Physiol, 567, 505-521.
Marcotti W., Corns L.F., Desmonds T., Kirkwood N.K., Richardson G.P. & Kros C.J. 2014.
Transduction without tip links in cochlear hair cells is mediated by ion channels with permeation
properties distinct from those of the mechano-electrical transducer channel. J Neurosci, 34,
5505-5514.
Symposium abstracts
From hair cells to hearing abstracts
The sensory hair cell in normal hearing and disease
D.N. Furness. Institute for Science and Technology in Hearing, Keele University.
Hair cells are the sensory receptors of our hearing and balance systems. Their ability to detect
mechanical stimuli caused by sound and head movements is based on a structure at their tops called
the hair bundle, composed of a precisely organised array of minute hairs (called stereocilia). This
bundle, as well as being highly vulnerable to physical insults, for instance high-impact sounds or
certain drugs, is the target of a number of genetic abnormalities which underlie conditions such as
Usher syndrome and other hearing impairments. One of the principal targets in Usher syndrome is
the tip link, a structure found within the hair bundle that is composed of dimers of two molecules,
cadherin 23 and protocadherin 15. Another common hearing impairment is associated with one of
the candidate proteins for the hair-cell transducer channels, TMC1. Mutations of these proteins in
mice have revealed much about how defects in them cause hearing loss by targeting the hair bundle
and affecting both its structure and its ability to detect mechanical stimuli.
Coding of acoustic information in the auditory pathway
Professor Alan R. Palmer, Director, MRC Institute of Hearing Research, University Park,
Nottingham, NG7 2RD, UK.
As a result of the mechanical action of the basilar membrane and transduction in the cochlear hair
cells, responses of auditory nerve fibres are tuned like a series of overlapping band-pass filters,
allowing a good representation of the frequency content of any sound; this representation becomes
less clear at high levels as the filters broaden. Activity in the auditory nerve signals the frequency content, the
timing and the sound level of the sounds. Pathways from the first brainstem nucleus (the cochlear
nucleus) converge in the brainstem to allow combination of information from the two ears for
analysis of the location of the sound source, which is then sent to the auditory midbrain. Pathways
from the cochlear nucleus also send information about the sound spectrum and its pitch directly to
the auditory midbrain where it is integrated with inputs from all lower brainstem nuclei, before
sending the information on to the auditory cortex via the thalamus. The auditory cortex has several
frequency mapped areas which process sounds in parallel. There is some evidence for processing of
different aspects of sounds in different cortical areas, giving rise to suggestions of different
anatomical and functional processing streams for different aspects of sound perception. The deeper
layers of the cortex send projections back down the auditory pathway enabling the cortex to
modulate the ascending flow of auditory information.
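The description of auditory nerve fibres as a series of overlapping band-pass filters can be made concrete with a toy filterbank. The Python sketch below, assuming numpy and scipy, passes a two-tone sound through eight overlapping bands; the centre frequencies and bandwidths are illustrative assumptions, not a model of real cochlear tuning.

```python
# Toy overlapping band-pass filterbank sketch (illustrative, not cochlear tuning).
import numpy as np
from scipy.signal import butter, sosfiltfilt

fs = 16000
t = np.arange(0, 0.1, 1 / fs)
sound = np.sin(2 * np.pi * 500 * t) + np.sin(2 * np.pi * 2000 * t)

centres = np.geomspace(250, 4000, 8)            # 8 log-spaced channels
for cf in centres:
    lo, hi = cf / 1.3, cf * 1.3                 # generous overlap between bands
    sos = butter(2, [lo, hi], btype="bandpass", fs=fs, output="sos")
    out = sosfiltfilt(sos, sound)
    # Channels near 500 and 2000 Hz respond strongly; others respond little.
    print(f"{cf:6.0f} Hz channel, RMS output = {np.sqrt(np.mean(out**2)):.3f}")
```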
Integrating acoustic and electric information following cochlear implantation
P.T. Kitterick. NIHR Nottingham Hearing Biomedical Research Unit, Nottingham, UK
A cochlear implant evokes neural activity in the auditory nerve by delivering a train of electric
pulses into the cochlea via a micro-electrode array. The amplitude of the pulses at each electrode is
modulated by an externally-worn speech processor to encode the temporal envelope of sound
energy within a particular frequency band. A unilateral cochlear implant provides sufficient
temporal and spectral information to support speech understanding at favourable signal-to-noise
ratios, and bilateral cochlear implants provide access to inter-aural level cues which can support
sound source localisation.
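The envelope-coding scheme described above can be sketched in a few lines: band-pass filter the input, extract the temporal envelope of the band, and use it to modulate a fixed-rate pulse train for one electrode. This is a minimal illustration assuming numpy/scipy; the filter band and pulse rate are arbitrary choices, not any manufacturer's strategy.

```python
# Single-channel envelope-coding sketch (band and pulse rate are assumptions).
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

fs = 16000
t = np.arange(0, 0.05, 1 / fs)
sound = np.sin(2 * np.pi * 300 * t) * (1 + 0.5 * np.sin(2 * np.pi * 10 * t))

band = butter(2, [200, 400], btype="bandpass", fs=fs, output="sos")
envelope = np.abs(hilbert(sosfiltfilt(band, sound)))    # temporal envelope of the band

pulse_rate = 900                                        # pulses per second (assumed)
pulse_times = np.arange(0, t[-1], 1 / pulse_rate)
amplitudes = np.interp(pulse_times, t, envelope)        # envelope-modulated pulse amplitudes
print(amplitudes.round(2))
```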
Traditional candidates for cochlear implantation in the UK have a bilateral profound hearing
loss. Thus, their listening abilities following implantation primarily reflect access to the electric
signals from the implant. However, some UK recipients may have access to residual acoustic
hearing at low frequencies, and recipients in other healthcare systems are being implanted with
increasing levels of residual acoustic hearing in the non-implanted ear.
While patients with well-preserved access to residual acoustic hearing report subjective
differences in quality between the electric and acoustic signals, benefits arising from simultaneous
access to both signal modalities have been demonstrated using tests of speech perception and
localisation. Vocoder studies with normal-hearing listeners suggest that additional benefit may be
obtained if electric and acoustic information is delivered to neural populations with similar
characteristic frequencies in each ear.
Acknowledgements
Supported by infrastructure funding from the National Institute for Health Research.
Listening difficulties and auditory processing disorder (APD) in children: proposed
mechanisms
D.R. Moore*§, *Communication Sciences Research Center, Cincinnati Children's Hospital and
Department of Otolaryngology, University of Cincinnati, Ohio, USA; §School of Psychological
Sciences, University of Manchester, UK.
Developmental APD is sometimes diagnosed for children presenting with listening difficulties
(LiD) but normal audiograms, who score poorly on tests, mostly of speech perception. However,
most children with LiD, whether or not diagnosed with APD, have difficulties with cognitive
functions that are also associated with other learning disorders. Here, I discuss two possible
mechanisms underlying LiD/APD in children (see also Moore, 2014): first, that some children have
'suprathreshold' hearing loss, and second, that cognitive insufficiency may be modelled by dichotic
listening. A link with cochlear deficiencies is suggested by (i) impaired temporal perception (e.g.
frequency discrimination), which may indicate impaired phase locking in the brainstem, (ii)
peripheral processing abnormalities (e.g. acoustic reflexes, MOC inhibition) found in some children
with LiD, and (iii) the observation that children with auditory neuropathy, without hearing loss, can
also have LiD/APD. Dichotic listening, used frequently to diagnose APD, is also used in cognitive
neuroscience to study executive function, which is strongly and specifically implicated in LiD/APD.
By comparing the 'free recall' of dichotic CV syllables with 'forced attention' to the right or left
ear, Hugdahl et al (2009) identified a fronto-parietal network involved in top-down attention
regulation of bottom-up auditory processing in individuals with learning disorders. We are currently
examining this network in children with LiD/APD.
References
Hugdahl K., Westerhausen R., Alho K., Medvedev S., Laine M. & Hämäläinen H. 2009. Attention
and cognitive control: unfolding the dichotic listening story. Scand J Psychol, 50, 11-22.
Moore D.R. 2014. Sources of pathology underlying listening disorders in children. Int J
Psychophysiol, in press.
PPC symposium Abstract
HOW YOU CAN MAKE AUDIOLOGY BETTER.
A symposium by the BSA Professional Practice Committee.
You, the audiological professional, are fundamental to the development of best clinical practice
within the UK, and you have an obligation to ensure that others in both your own and allied
professions benefit from your knowledge, experience and ideas. Whether you are a student,
recently qualified or following an established career in audiology, you can make a significant
contribution to making audiology better.
The principal aim of the BSA is the advancement of Audiology. This not only includes the
scientific study of hearing and balance function and other related sciences, but also, as importantly,
the advancement of best clinical practice in the diagnosis, management and rehabilitation of hearing
and balance and allied disorders.
The development and promotion of good clinical Audiological practice at all stages of care by
the BSA is facilitated by its Professional Practice Committee, which provides and disseminates
guidance on good practice that is demonstrably high in relevance, quality and impact. This
guidance is achieved through the provision of recommended procedures, guidelines, minimum
training standards and training days.
As part of ongoing efforts to optimise the impact of the BSA, the PPC is in the process of
submitting an application to NICE for accreditation of its guidance. This symposium will include a
presentation by Deborah Collis, Associate Director of Accreditation and Quality Assurance for
NICE, who will be speaking about the accreditation process and its potential benefits.
This symposium will demonstrate the key role that you should be playing in the development
and promotion of good Audiological practice and discuss how this may best be achieved in order to
have greatest impact. This will include exploring ways by which you can effectively engage in and
contribute to the process of making audiology better.
UK Biobank symposium abstract
Large scale hearing studies using the UK Biobank resource
P. Dawes1, H. Fortnum2, D.R. Moore2,3, R. Emsley4, P. Norman5, K. Cruickshanks6, A.C. Davis7,
M. Edmondson-Jones2, A. McCormack2, R. Pierzicki2 and K.J. Munro1,9; 1School of Psychological
Sciences, University of Manchester, UK; 2NIHR Nottingham Hearing Biomedical Research Unit,
Nottingham, UK; 3Cincinnati Children's Hospital Medical Center, Cincinnati, USA; 4Centre for
Biostatistics, Institute of Population Health, University of Manchester, UK; 5School of Geography,
University of Leeds, UK; 6Population Health Sciences and Ophthalmology and Visual Sciences,
School of Medicine and Public Health, University of Wisconsin, USA; 9Central Manchester
Universities Hospitals NHS Foundation Trust, UK
The UK Biobank is a large data set established for investigations of the genetic, environmental and
lifestyle causes of diseases of middle and older age. Over the course of 2006-2010, 503,325 UK
adults aged 40 to 69 years were recruited. Participants responded to questionnaire
measures of lifestyle and demographic factors, performed a range of physical measures and donated
biological samples. A subset of 164,770 participants completed a hearing test (the Digit Triplet
Test, a measure of speech recognition in noise). During 2012 to 2013, 17,819 participants
completed a repeat assessment, including 4,425 participants who completed the hearing test at both
time points.
A multi-disciplinary team including researchers from Manchester, Nottingham, Leeds,
Cincinnati and Wisconsin has been collaborating to analyse hearing and tinnitus data from the UK
Biobank. In this symposium, we report the first analyses, including i) an overview of patterns of
hearing impairment and hearing aid use, ii) cigarette smoking and alcohol consumption as risks for
hearing loss, iii) relationships between speech recognition in noise, audiometric thresholds and
cognitive performance and iv) how associations between cognitive performance and hearing loss
may be mediated by hearing aid use and social isolation.
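For readers unfamiliar with the Digit Triplet Test mentioned above, the sketch below shows one common way such an adaptive speech-in-noise test can be scored: raise the signal-to-noise ratio after an incorrect triplet, lower it after a correct one, and take the speech reception threshold (SRT) as the mean SNR of the later trials. The step size, scoring rule and simulated listener are assumptions for illustration, not the UK Biobank protocol.

```python
# Adaptive SNR track and SRT estimate, sketched with assumed parameters.
import numpy as np

def run_track(respond, n_trials=20, start_snr=0.0, step=2.0):
    snr, track = start_snr, []
    for _ in range(n_trials):
        track.append(snr)
        snr += -step if respond(snr) else step   # down if correct, up if wrong
    return np.mean(track[-10:])                  # SRT: mean SNR of last 10 trials

# Hypothetical listener: usually correct whenever the SNR exceeds -8 dB.
rng = np.random.default_rng(0)
srt = run_track(lambda snr: snr > -8 + rng.normal(0, 1))
print(f"estimated SRT ~ {srt:.1f} dB SNR")
```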
Acknowledgements
This research was based on data provided by UK Biobank. Additional support was provided by the
National Institute for Health Research (Nottingham Hearing BRU and Manchester BRC), the
Medical Research Council, and Cincinnati Children’s Hospital. DRM was supported by the
Intramural Programme of the Medical Research Council [Grant U135097130]. KJC was supported
by R37AG11099, R01AG021917 and an Unrestricted Grant from Research to Prevent Blindness.
The Nottingham Hearing Biomedical Research Unit is funded by the National Institute for Health
Research. This paper presents independent research funded in part by the National Institute for
Health Research (NIHR). The views expressed are those of the author(s) and not necessarily those
of the NHS, the NIHR or the Department of Health. This research was facilitated by Manchester
Biomedical Research Centre. This research was conducted using the UK Biobank resource.
Special Interest Group Session Abstracts
APD
BSA Auditory Processing Disorder SIG update: Onwards and Upwards
N.G. Campbell* and D.R. Moore§, *Institute of Sound and Vibration Research, University of
Southampton, UK, §Communication Sciences Research Center, Cincinnati Children's Hospital,
Cincinnati, USA
Over the past 3 years the SIG has shaped international thinking. We published a peer-reviewed
Position Statement and Practice Guidance Document (2011), and an ‘APD White Paper’ (2013) in
the International Journal of Audiology. We initiated and collaborated with the American Academy
of Audiology (AAA) to present very successful global APD days as part of the AAA conferences in
Boston (2012) and in Orlando (2014). Since 2011, and our call for evidence-based practice, there
has been a surge in the number of randomised controlled studies. Finally, we hosted an APD Satellite
Day at the 2013 BSA Annual Conference. This presentation reviews those past achievements and
new and future developments in tests, resources and technology, and our new APD Position
Statement and Practice Guidance Document. In our revised approach, the parcellation into
Developmental, Acquired and Secondary APD, which has proven useful and popular, is retained.
For Developmental APD, we emphasize the goal of management of children, with a more
hierarchical approach to testing and diagnosis. Assessment starts with questionnaires, followed by
testing, after which individuals are guided towards various interventions that suit their specific
needs. Theoretical underpinnings become more biological and, therefore, testable, and the disorder
potentially preventable or treatable.
Acknowledgements
We acknowledge the anonymous reviewers, members of the BSA Professional Practice Committee
and all who took part in the public consultation for the Position Statement and Practice Guidance
Document. We further acknowledge the commentaries of the leading international researchers that
formed part of the BSA ‘APD White Paper’.
References
Moore D.R., Rosen S., Bamiou D., Campbell N.G. & Sirimanna T. (2013). Evolving Concepts of
Developmental Auditory Processing Disorder (APD): A British Society of Audiology APD
Special Interest Group ‘White Paper’. IJA, 1-11.
BSA APD SIG Steering Committee: Moore D.R., Alles R., Bamiou D., Batchelor L., Campbell
N.G., Canning D., Grant P., Luxon L., Murray P., Nairn S., Rosen S., Sirimanna T., Treharne D.
and Wakeham K. (2011). BSA Position Statement: APD.
http://www.thebsa.org.uk/resources/apd-position-statement/
BSA APD SIG Steering Committee: Campbell N.G., Alles R., Bamiou D., Batchelor L., Canning
D., Grant P., Luxon L., Moore D.R., Murray P., Nairn S., Rosen S., Sirimanna T., Treharne D.
and Wakeham K. (2011). BSA Practice Guidance: An Overview of Current Management of
APD. http://www.thebsa.org.uk/resources/overview-current-management-auditory-processingdisorder-apd/
BIG
The Video Head Impulse Test and its relationship to caloric testing
S.L. Bell*, F. Barker§, E. Mackenzie*, H. Hesleton*, C. Parker*, A. Sanderson*, *Hearing and
Balance Centre, University of Southampton, UK; §Windsor ENT, Berkshire
The subjective Head Impulse Test (HIT) (Halmagyi and Curthoys, 1988) was proposed to indicate
the status of the vestibulo-ocular reflex (VOR) at high frequencies. It relies on direct observation
of the eyes whilst rapid, short-duration impulses are applied to the subject's head. The presence of
overt saccades is an indirect indication of peripheral abnormality in the canal being stimulated.
The Head Impulse Test tests the VOR at higher frequencies than the caloric test (Jorns-Haderli et
al., 2007). However, it relies on the detection of overt saccades by the observer. The video Head
Impulse Test (vHIT; MacDougall et al., 2009) has been developed as an alternative system suitable
for routine clinical use, to make head impulse testing more objective. The system consists of
software and lightweight goggles containing both an eye-position camera that can track the subject's
pupils and a gyroscope that can track the angular movement of the head. The system records head
impulse velocity together with eye movement velocity in response to high-velocity impulses that
are applied to the head by the clinician. Impulses can be applied in the three planes of the
semicircular canals. During this test the head is moved with an acceleration that should be sufficient
to cause the afferents of the semicircular canal on one side to be completely inhibited. Hence a
semicircular canal on one side can be tested in effective isolation from the same canal on the other
side.
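The VOR gain the vHIT reports can be thought of as the ratio of compensatory eye velocity to head velocity over the impulse. A minimal sketch assuming numpy, with hypothetical velocity traces in place of calibrated recordings:

```python
# vHIT gain sketch: ratio of eye to head velocity (hypothetical data).
import numpy as np

head_velocity = np.array([10., 80., 160., 220., 150., 60., 10.])   # deg/s samples
eye_velocity = -0.95 * head_velocity                               # compensatory eye movement

# Gain from the ratio of areas under the (de-saccaded) velocity curves:
gain = np.trapz(-eye_velocity) / np.trapz(head_velocity)
print(f"vHIT gain = {gain:.2f}")   # near 1.0 for a healthy lateral canal
```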
This study has two aims: to explore normative ranges for vHIT gain, and to compare the results
of vHIT testing with calorics in a sample of patients attending a clinic for balance disorders. Two
normative studies were conducted to explore the normal ranges of vHIT gain in different
semicircular canals. A clinical sample of 51 patients (20 male, 31 female) was tested with both
lateral canal vHIT and air calorics.
Normative studies found that vHIT gain is near unity for lateral canals, but is significantly
raised in the vertical canals. Care must be taken to avoid artefacts when recording from the vertical
canals.
vHIT gain appears relatively insensitive to peripheral vestibular disorder as indicated by caloric
testing, with low sensitivity but fair specificity. vHIT gain is abnormal in canals with no measurable
function. The vHIT does not appear well suited to screening and identifying patients who require
caloric testing, although the test may give complementary information to caloric testing.
Declaration of interest
The authors report no conflicts of interest. The authors alone are responsible for the content and
writing of the presentation.
References
Halmagyi G.M. & Curthoys I.S. 1988. A clinical sign of canal paresis. Arch Neurol, 45(7), 737-739.
Jorns-Haderli, M., Straumann, D. and Palla, A. (2007) Accuracy of the bedside head impulse test in
detecting vestibular hypofunction, J Neurol Neurosurg Psychiatry, 78, 1113-1118.
MacDougall, H. G., Weber, K. P., McGarvie, L. A., Halmagyi, G. M. and Curthoys, I. S. (2009)
The video head impulse test- diagnostic accuracy in peripheral vestibulopathy, Neurology, 73,
1134-1141.
Motivational approach to behaviour change in vestibular rehabilitation to improve clinic
attendance
N. Topass, Audiology Department, Royal Surrey County Hospital, UK
This study examines whether a motivational approach to behaviour change improves patient clinic
attendance, and thus therapy adherence. Vestibular rehabilitation is the recommended primary
treatment for stable vestibular lesions (Shepard et al, 1995). The prognosis for uncompensated
peripheral vestibular lesions is generally very good, with around 90% of patients improving
dramatically or completely (Shepard et al, 1995). Adherence to vestibular
rehabilitation programs can however prove to be difficult as is the case in many chronic health
conditions.
An important part of learning a new behaviour is for the patient to identify and acknowledge
the value of the new behaviour (Konkle-Parker, 2001). Motivational interviewing is a counselling
technique which has been developed to help a patient to explore and resolve ambivalence related to
behaviour change (Emmons & Rollnick, 2001).
A review of our clinic attendance was conducted to determine the efficacy of the customised
vestibular rehabilitation program. The data were reviewed in terms of attendance, did-not-attends
(DNAs) and cancellation history for the six-month period from September 2012 to February 2013.
The percentage of DNAs was 12.6% and the late cancellation rate was 15.4% (a late cancellation is
one made within one week of the appointment). A review of the current literature suggested that
patient motivation may be a key element in the high DNA rate for this particular speciality, so a
motivational approach to the patient pathway was introduced. The patient would then 'opt in' or
'opt out' of the therapy. The therapy program also included elements referred to as the 'box', the
'line' and the 'circle'.
The results of the change in protocol were assessed for the period from September 2013 to
February 2014. The percentage of DNAs was then 4.7% and the late cancellation rate was 4.7%.
The conclusion is thus that a motivational approach to vestibular rehabilitation delivery can
improve clinic attendance and thus patient adherence to therapy.
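The audit percentages above are simple rate calculations. The sketch below reproduces the arithmetic with hypothetical appointment counts; the abstract reports percentages only, so the counts are invented for illustration.

```python
# Clinic-audit rate arithmetic with hypothetical counts.
appointments = 400
dna = 19           # did-not-attends in the audit period (hypothetical)
late_cancel = 19   # cancellations within one week of the appointment (hypothetical)

print(f"DNA rate: {100 * dna / appointments:.1f}%")                      # ~4.7%
print(f"Late cancellation rate: {100 * late_cancel / appointments:.1f}%")  # ~4.7%
```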
Acknowledgements
Supported by Community Health Psychology, Farnham Road Hospital, UK.
References
Emmons K. & Rollnick S. 2001. Motivational interviewing in health care settings: opportunities
and limitations. Am J Prevent Med, 20(1), 68-74.
Konkle-Parker D.J. 2001. A motivational intervention to improve adherence to treatment of
chronic disease. J Am Acad Nurse Pract, 13(2), 61-68.
Shepard N.T. & Telian S.A. 1995. Programmatic vestibular rehabilitation. Otolaryngol Head Neck
Surg, 112, 173-182.
Tønnesen H. 2012. Engage in the process of change: Facts and methods. WHO-CC Clinical
Health Promotion Centre, Bispebjerg University Hospital, Copenhagen, Denmark / Health
Sciences, Lund University, Sweden.
Sponsor workshops
Monday 1st September at 2 PM: Oticon (Alison Stone)
14:00–14:30. Evidence for a new compression strategy for children
14:45–15:15. New clinical tools for paediatric fittings
15:25–15:45. Connectivity for Kids
Monday 1st September at 4.15 PM: Interacoustics
Title TBA
Tuesday 2nd September at 11 AM: The Tinnitus Clinic
Mark Williams: Tinnitus Pathophysiology and Evolving Clinical Options for Therapy.
The investigation of the neural correlates of tinnitus in mammals has enabled the development of
novel therapeutic approaches that strive to reduce or reverse pathological central changes in humans
afflicted with bothersome auditory hallucinations. This talk will review the pathophysiology of
subjective tinnitus and introduce evolving therapeutic intervention options.
Tuesday 2nd September at 2 PM : GN Resound
Paul Leeming, Support Audiologist
Oral Presentation Abstracts: Tuesday afternoon, 1400-1545
Auditory processing deficits in children with mild to moderate sensorineural hearing loss
(Listening problems with hearing loss)
L. F. Halliday, O. Tuomainen, and S. Rosen, Division of Psychology and Language Sciences,
University College London, UK
Impaired auditory processing has been proposed as a primary deficit in developmental disorders of
language, including dyslexia and specific language impairment (Goswami, 2011; Tallal, 2004).
However, it remains uncertain whether deficits in auditory processing contribute to the language
difficulties experienced by many children with mild to moderate sensorineural hearing loss
(MMSNHL). We assessed the auditory processing and language skills of 49 children aged 8-16 years
with MMSNHL and 41 age-matched typically developing controls. Auditory processing abilities
were assessed using child-friendly psychophysical techniques. Discrimination thresholds were
obtained for stimuli spanning three different timescales (μs, ms, seconds), and across three different
levels of complexity, from simple nonspeech tones (frequency discrimination, frequency
modulation detection, rise-time discrimination), to complex nonspeech sounds (assessing
discrimination of modulations in formant frequency, fundamental frequency, and amplitude), and
speech sounds (/ba/-/da/ discrimination). Thresholds were obtained both when children with
MMSNHL were wearing hearing aids and when they were not. Language abilities were assessed
using a battery of standardised assessments of phonological processing, reading, vocabulary, and
grammar.
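The abstract does not specify the adaptive procedure, but child-friendly threshold estimation is commonly done with a staircase such as the 2-down 1-up rule sketched below (assuming numpy), which tracks roughly the 71%-correct point on the psychometric function. The simulated listener, step size and stimulus dimension are illustrative assumptions, not the authors' exact method.

```python
# 2-down 1-up adaptive staircase sketch with a simulated listener.
import numpy as np

rng = np.random.default_rng(1)
true_threshold = 20.0                      # hypothetical threshold (arbitrary units)
level, step = 80.0, 10.0                   # start well above threshold
n_correct, last_dir, reversals = 0, None, []

while len(reversals) < 8:
    p_correct = 1 / (1 + np.exp(-(level - true_threshold) / 5))  # simulated listener
    if rng.random() < p_correct:
        n_correct += 1
        if n_correct < 2:
            continue                       # need 2 correct in a row to step down
        n_correct, direction = 0, "down"
    else:
        n_correct, direction = 0, "up"
    if last_dir is not None and direction != last_dir:
        reversals.append(level)            # record the level at each reversal
    level += step if direction == "up" else -step
    last_dir = direction

print(f"threshold estimate ~ {np.mean(reversals[-6:]):.1f}")
```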
Principal components analysis identified three components underlying the auditory processing
test battery. The first, auditory processing (‘AP’) component appeared to reflect discrimination of
temporal fine structure and spectral shape, and was significantly impaired in children with
MMSNHL. Deficits in this component were almost always present in children with MMSNHL who
had difficulties in language (i.e. they were necessary to cause language difficulties) but many
children with deficits in AP had normal language (i.e. they were not sufficient). The second, ‘AMD’
component reflected discrimination of slow-rate amplitude modulation. Deficits in this component
were rare in children with MMSNHL, including those who had poor language (they were not
necessary) but when they were present they were almost always associated with poor language
abilities (they were sufficient). Finally, deficits in the third, speech processing (‘SP’) component
appeared to be neither necessary nor sufficient to cause language difficulties in children with
MMSNHL. Our findings challenge the proposal for a one-to-one causal relationship between
deficits in auditory processing and language difficulties in children with MMSNHL. Rather, they
suggest that deficits in the discrimination of temporal fine structure and spectral shape are likely to
comprise one of a number of risk factors for impaired language development in this group.
Acknowledgements
Supported by the ESRC.
References
Goswami, U. 2011. A temporal sampling framework for developmental dyslexia. Trends Cog Sci,
15, 3-10.
Tallal, P. 2004. Improving language and literacy is a matter of time. Nat Rev Neurosci, 5, 721-728.
Assessing the role of parental report (ECLiPS) in supporting assessment of children referred
for auditory processing disorder
J.G. Barry*, D. Tomlin§#, D.R. Moore*$ and H. Dillon‡#, *MRC Institute of Hearing Research,
Nottingham, UK; §University of Melbourne, Department of Audiology and Speech Pathology,
Melbourne, Australia; ‡National Acoustics Laboratory, Sydney, Australia; #The Hearing
Cooperative Research Centre, Melbourne, Australia; $Communication Sciences Research Center,
Cincinnati Children's Hospital and Department of Otolaryngology, University of Cincinnati, Ohio,
USA
Children referred for auditory processing disorder (APD) present with a complex array of
symptoms, in addition to listening difficulties. This has led to a debate about the nature of the APD,
and the assessment and management of children affected by it. To progress understanding of APD,
it is necessary to reliably capture the reason(s) why children present for assessment, and then relate
those reasons both to the disorder itself, as well as to the broader range of listening-based learning
difficulties. The Evaluation of Children’s Listening and Processing Skills (ECLiPS) (Barry &
Moore, 2014) was developed as a first step towards achieving these goals. The ECLiPS is a
report-based measure with a five-factor structure, designed to assess everyday listening in the
context of cognitive abilities commonly affected in APD.
Here, we performed a series of correlational and discriminant analyses to compare parental
responses on the ECLiPS for 50 children (35 referred for suspected APD) with their performance on
5 tests used to diagnose APD, 4 tests of cognitive ability (nonverbal IQ, sustained attention, and
auditory serial and working memory), and 2 measures of academic ability.
Few correlations were observed between the ECLiPS and the diagnostic tests of APD,
confirming previous conclusions of a mismatch between abilities probed by clinical tests of APD
and report-based measures of listening difficulty. All ECLiPS factors correlated with academic
abilities (rs = 0.40–0.52). Finally, auditory working memory correlated with all ECLiPS factors
(rs = 0.30–0.45), while attention was associated with language abilities (rs = 0.57). Discriminant
analysis suggested that the ECLiPS and a combination of auditory and cognitive tests were able to
discriminate between the groups included in the study with 83% accuracy. Overall, our findings
suggest that cognitive difficulties are associated with many of the symptoms of APD that parents
are sensitive to, and they further underline the importance of a broad-based assessment test battery
incorporating parental report and cognitive testing.
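To make the discriminant analysis concrete, here is a minimal scikit-learn sketch of classifying referred versus non-referred children from a combined set of scores. The data are random placeholders with a built-in group difference, not the study's data, and the analysis details (variable set, cross-validation scheme) are assumptions.

```python
# Discriminant-analysis sketch with placeholder data (not the study's data).
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 6))           # 6 auditory/cognitive measures, 50 children
y = np.r_[np.ones(35), np.zeros(15)]   # 35 referred for suspected APD, 15 not
X[y == 1] += 0.8                       # inject a group difference for illustration

acc = cross_val_score(LinearDiscriminantAnalysis(), X, y, cv=5).mean()
print(f"cross-validated classification accuracy ~ {acc:.0%}")
```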
Acknowledgements
Supported by the MRC, and MRC-T
References
Barry, J.G. & Moore, D.R. (2014). Evaluation of Children's Listening and Processing Skills
(ECLiPS) Manual, Edition 1.
Do children with hearing loss show atypical attention during ‘cocktail party’ listening?
E. Holmes*, P.T. Kitterick§ and A.Q. Summerfield*, *Department of Psychology, University of
York, York, UK; §NIHR Nottingham Hearing Biomedical Research Unit, Nottingham, UK
Individual differences in cocktail-party listening could arise from differences in either central
attention or peripheral transduction. We aimed to isolate differences in attentional processing
between normally-hearing and hearing-impaired children, based on a procedure devised by Hill and
Miller (2010). During the presentation of acoustical stimuli, we expected to see differences in brain
activity between normally-hearing and hearing-impaired children as a result of differences in
peripheral processing of speech. Differences before the onset of acoustical stimuli were expected to
reflect differences in the control of attention, without being confounded by differences in
transduction.
Participants were 24 normally-hearing children and 11 children with moderate sensorineural
hearing loss, all aged 7-16 years. We recorded brain activity using 64-channel
electroencephalography. On each trial, two sentences spoken by adult talkers (one male and one
female) were presented simultaneously from loudspeakers at two spatial locations (one left and one
right of fixation). A third ‘child’ talker was presented from straight ahead. Participants were cued, in
advance of the acoustic stimuli, to either the location (left/right) or the gender (male/female) of the
target talker. The task was to report key words spoken by that talker. A control condition, in which
the visual cues did not have implications for attention, allowed cortical activity evoked by the
configurational properties of the cues to be distinguished from activity evoked by attentional
processes.
Children with normal hearing displayed differences in event-related potentials (ERPs) between
trials that cued the location compared to the gender of the target talker. Evidence of cue-specific
preparatory attention, manifest as more positive ERPs evoked by the cue to location than gender in
posterior scalp locations, started approximately 950 ms after the onset of the visual cue. Sustained
cue-specific selective attention started approximately 350 ms after the onset of acoustical stimuli.
During both of these time windows, children with hearing loss showed smaller differences between
location and gender trials than normally-hearing children. We expected to find atypical activity
during the acoustical stimuli, which could be explained by peripheral transduction. Importantly, we
also found atypical brain activity during the preparatory phase, which cannot be attributed to
peripheral transduction. The results suggest that children with hearing loss do not utilise location or
gender information to prepare for an upcoming talker. This deficit may contribute to their difficulty
understanding speech in noisy environments.
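At its core, the ERP comparison described above is an average over time-locked epochs for each cue type. A minimal numpy sketch with hypothetical data follows; real EEG pipelines add filtering, artefact rejection and baseline correction.

```python
# Cue-locked ERP averaging sketch (hypothetical data and shapes).
import numpy as np

rng = np.random.default_rng(0)
n_trials, n_channels, n_samples = 100, 64, 500     # 64-channel EEG epochs
epochs = rng.normal(size=(n_trials, n_channels, n_samples))
is_location_cue = rng.random(n_trials) < 0.5       # trial labels (hypothetical)

erp_location = epochs[is_location_cue].mean(axis=0)   # average over location-cue trials
erp_gender = epochs[~is_location_cue].mean(axis=0)    # average over gender-cue trials
difference_wave = erp_location - erp_gender           # cue-specific attention effect

print("difference wave shape (channels x samples):", difference_wave.shape)
```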
Acknowledgements
EH is supported by the Goodricke Appeal Fund.
References
Hill K.T. & Miller L.M. 2010. Auditory attentional control and selection during cocktail party
listening. Cerebral Cortex, 20, 583-590.
Temporal asymmetry and loudness: amplitude envelope sensitivity in developmental dyslexia
S.A. Flanagan and U. Goswami, Centre for Neuroscience in Education, Department of Psychology,
University of Cambridge, UK.
One auditory difficulty found in children with developmental dyslexia is impaired processing of the
amplitude envelope of a sound (Goswami et al, 2002). Children with dyslexia also show impaired
‘phonological awareness’, the ability to reflect on the constituent sound elements in words. Deficits
in amplitude envelope processing are related to phonological deficits in dyslexia, across languages.
Sensitivity to envelope structure and dynamics is critical for speech perception as it signals speech
rate, stress, and tonal contrasts, and reflects prosodic and intonational information.
Here we replicated the experiment of Stecker and Hafter (2000) to investigate temporal
asymmetry of the envelope: the difference between sounds that increase in level slowly and have
fast offsets (i.e., S-F) and those that have fast onsets and slow, damped offsets (i.e., F-S), in a paired
comparison. These pairs of sounds have identical overall spectral content, duration and level.
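A convenient property of such stimulus pairs is that time-reversing a ramped (S-F) tone yields the damped (F-S) tone with an identical magnitude spectrum, duration and level. A minimal numpy sketch, with an illustrative envelope time constant that is not necessarily the one used by Stecker and Hafter:

```python
# Ramped (S-F) and damped (F-S) tone pair via time reversal.
import numpy as np

fs, dur, f0 = 44100, 0.25, 1000
t = np.arange(0, dur, 1 / fs)
envelope = np.exp((t - dur) / 0.05)        # slow exponential rise, abrupt offset

ramped = envelope * np.sin(2 * np.pi * f0 * t)   # S-F tone
damped = ramped[::-1]                            # F-S tone: exact time reversal

# Time reversal leaves the magnitude spectrum (and RMS level) unchanged:
print(np.allclose(np.abs(np.fft.rfft(ramped)), np.abs(np.fft.rfft(damped))))
```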
We report results from 14 subjects aged 8–12 years, 7 of whom were dyslexic and 7 of whom were
typically developing readers. The results for the typical readers were similar to those found by
Stecker and Hafter, with an overall loudness advantage for S-F test tones, the effect being attenuated
by the context of a damped (F-S) standard. However, for the dyslexic subjects the results were
consistently different from those of the typical readers, and from those of Stecker and Hafter. For
the dyslexic subjects the loudness advantage for S-F test tones was greatly reduced, and in some
instances reversed, for both conditions of the standard. This is opposite to the general findings for
the perception of ramped and damped sounds (Ries et al, 2008). Impaired neural entrainment to the
onset envelope in developmental dyslexia, as proposed in the temporal sampling framework
(Goswami, 2011), could explain the reduced or absent loudness advantage for S-F tones.
Acknowledgements
Supported by the Daphne Jackson Trust and the University of Cambridge.
References
Goswami U. et al, 2002. Amplitude envelope onsets and developmental dyslexia: a new hypothesis.
Proceedings of the National Academy of Sciences U.S.A., 99, 10911-10916.
Stecker G.C. & Hafter E.R. 2000. Temporal asymmetry and loudness. J Acoust Soc Am, 107,
3359-3362.
Ries D.T., Schlauch R.S. & DiGiovanni J.J. 2008. The role of temporal-masking patterns in the
determination of subjective duration and loudness for ramped and damped sounds. J Acoust
Soc Am, 124, 3772-3783.
Goswami U. 2011. A temporal sampling framework for developmental dyslexia. Trends in
Cognitive Sciences, 15(1), 3-10.
Learning to ignore background noise in VCV test
L. Zhang*, P. Jennings* and F. Schlaghecken§, *WMG, University of Warwick, UK; §Department
of Psychology, University of Warwick, UK
Experiments on perceptual learning in the visual domain show that people can improve their
detection performance by learning to ignore (visual) 'noise'. Once participants have learned to
ignore the constant visual noise and can successfully detect targets, this skill then transfers to new,
random visual noise (Schubo, et al., 2001). This study explores whether this also follows for
perceptual learning in the hearing domain. It aims to investigate if it is possible to improve our
brains’ ability to process auditory stimuli by training a listener to ignore background noise and
recognise the sounds of consonants across time. In addition, Felty et al (2009) demonstrated that
listeners achieved better word recognition performance with fixed babble noise than with random
babble noise. This research also investigates if the learning effect is generalised from training
normal hearing listeners under fixed babble noise to random background environments. Twenty
normal-hearing English native speakers (aged between18 to 40) participated in an experiment. They
were randomly assigned to a fixed or random babble noise training goup. Both groups were required
to do a pre and post test with vowel consonant vowel (VCV) tasks (including eight consonants /b/,
/d/, /f/, /g/, /k/, /m/, /n/, /p/ with male and female voices) in random babble noise. The background
noise for the three days training session was different for each group (the fixed group was trained
with constant babble noise and the other one was trained with random babble noise). The results
from this study show how people’s listening performance can be improved with training to ignore
fixed and random babble noise.
Acknowledgements
We would like to thank Dr. James Harte, Dr. Katherine Roberts and Professor Christopher James
for their useful suggestions for the experiment set up.
References
Felty R.A., Buchwald A. & Pisoni D.B. 2009. Adaptation to frozen babble in spoken word recognition.
J Acoust Soc Am, 125, 93-97.
Schubo A., Schlaghecken F. & Meinecke C. 2001. Learning to ignore the mask in texture
segmentation tasks. Journal of Experimental Psychology: Human Perception and Performance,
27, 919-931.
Cognitive listening fatigue with degraded speech
A. Sarampalis*, D. Alfandari*, J.H. Gmelin*, M.F. Lohden*, A.F. Pietrus-Rajman* and D.
Başkent§, *Department of Psychology, University of Groningen, Groningen, The Netherlands,
§Department of Otorhinolaryngology / Head and Neck Surgery, University Medical Center
Groningen, University of Groningen, Groningen, The Netherlands
Degradations in the speech signal, either because of external factors (such as background noise,
reverberation, or poor signal transmission) or internal factors (such as hearing loss or ageing) often
lead to reduced intelligibility and increases in listening effort. We use the term listening effort to
refer to the proportion of cognitive resources allocated to the task of understanding speech. There is
now ample evidence that even when intelligibility remains unchanged, reductions in the quality of
the speech signal result in increased listening effort in a variety of tasks and paradigms,
suggesting that basic intelligibility measures are insufficient for fully evaluating some important
aspects of speech communication.
One frequently-reported consequence of increased listening effort is mental fatigue. Elderly
individuals and those with hearing problems often complain of fatigue in difficult listening settings.
Nevertheless, there have been few attempts to investigate listening fatigue empirically. We use the
term cognitive listening fatigue (CLF) to describe performance decreases in an auditory task as a
result of sustained effort. The aim of the present study is two-fold: first to identify the most suitable
tasks and methods for measuring CLF and second to investigate the hypothesis that mildly degraded
speech leads to increased fatigue when compared to undegraded speech.
We used four cognitively-demanding auditory tasks, some adapted to the auditory domain for
the first time, each imposing different cognitive demands on the listener, such as selective attention,
inhibition, and working memory, in a procedure comprising 120 minutes of uninterrupted testing.
Specifically, an auditory Stroop task, a dichotic-listening task, a flanker task, and a syllogisms task
were each repeated three times in 10-minute blocks of trials in a counterbalanced order.
Participants’ reaction times and accuracy were recorded, as well as a subjective measure of their
willingness to continue after each block. Half of the participants listened to undegraded speech, the
others to spectrally degraded signals.
Reaction times decreased throughout the entire testing session, for all four tasks. Participants
became progressively faster in their responses, even when their accuracy did not markedly decrease.
The greatest decrease in accuracy was seen in the dichotic listening task where, after an initial
period of improvement (presumably due to practice), performance decreased in the latter part of
testing. The subjective measure showed marked decreases in the willingness to continue with the
experiment after approximately 70 minutes. There was little difference in these effects between the
degraded and undegraded condition, which may have been due to the mild nature of the
degradation. Overall, these results indicate that there was transfer of fatigue from one task to the
next, but a single-task procedure may be more effective in measuring fatigue. Nevertheless, fatigue
was evident in at least one of these tasks.
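The abstract states only that half of the listeners heard "spectrally degraded signals". A common laboratory method for producing such degradation is noise-band vocoding; the sketch below shows that technique purely for illustration (the band count, filter order and frequency range are invented, and this is not necessarily the method or parameter set the authors used):

    import numpy as np
    from scipy.signal import butter, sosfiltfilt, hilbert

    def noise_vocode(x, fs, n_bands=8, lo=100.0, hi=8000.0):
        """Spectrally degrade a speech waveform x by noise-band vocoding:
        split into log-spaced bands, extract each band's amplitude envelope,
        and use it to modulate band-limited noise."""
        edges = np.geomspace(lo, hi, n_bands + 1)
        rng = np.random.default_rng(0)
        out = np.zeros_like(x)
        for f1, f2 in zip(edges[:-1], edges[1:]):
            sos = butter(4, [f1, f2], btype="band", fs=fs, output="sos")
            band = sosfiltfilt(sos, x)
            env = np.abs(hilbert(band))                        # band envelope
            noise = sosfiltfilt(sos, rng.standard_normal(x.size))
            out += env * noise                                 # envelope-modulated noise band
        return out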
Declaration of interest:
The authors report no conflicts of interest. The authors alone are responsible for the content and
writing of the paper.
A link between auditory attention and self-perceived communication success in older listeners
D. Kunke*, M.J. Henry*, R. Hannemann§, M. Serman§ and J. Obleser*, *Max Planck Research
Group "Auditory Cognition", Max Planck Institute for Human Cognitive and Brain Sciences,
Leipzig, Germany, §Siemens Audiologische Technik GmbH, Erlangen, Germany.
Many studies show that current standard practice in audiology, including pure-tone audiometry and
speech audiometry, has limited application in gauging the everyday communication success of
individuals. The aim of this study was to investigate the relationship between auditory attention and
everyday listening skills in the older population. Auditory rhythmic sequences of low-frequency
tones (a same–different judgement of a standard and a temporally jittered target tone, separated by a
series of six intervening distracting tones; similar to the design of Jones et al., 2002) were used to test
the performance of 38 older listeners between the ages of 50 and 80 years who presented with
hearing thresholds of 20 dB HL or better in the task-critical frequency range (0.25 – 1 kHz). We
employed this judgement task in three different conditions (discriminating the acoustic dimensions
of pitch, loudness and interaural time difference, i.e. direction). Test-retest reliability was calculated,
and the results show that the pitch discrimination task was the most reliable test (r > 0.85). Performance
was assessed for each condition as perceptual sensitivity (d′), response bias (decision criterion, c),
and reaction time. Participants were also asked to complete a short version of the Speech,
Spatial and Qualities of Hearing Scale (SSQ; Gatehouse & Noble, 2004). Sensitivity in the pitch task
significantly predicted the self-assessed speech scale of the SSQ (r = 0.496, p = 0.002), while
auditory threshold and speech audiometry results (Oldenburger sentence test and the Freiburger
word test) did not (all p > 0.15). Not only do these findings imply a link between auditory attention
and everyday communication success, but this can also be an important step towards a valid and
reliable screening tool.
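For readers unfamiliar with the signal-detection quantities quoted above, d′ and c follow from hit and false-alarm rates via the standard formulas d′ = z(H) − z(F) and c = −(z(H) + z(F))/2. A minimal sketch (the trial counts are invented, and the small rate correction is one common convention, not necessarily the authors'):

    from scipy.stats import norm

    def dprime_and_c(hits, misses, fas, crs):
        """Sensitivity d' and criterion c from a same-different response table.
        A +0.5 correction keeps rates away from 0 and 1."""
        h = (hits + 0.5) / (hits + misses + 1.0)   # corrected hit rate
        f = (fas + 0.5) / (fas + crs + 1.0)        # corrected false-alarm rate
        zh, zf = norm.ppf(h), norm.ppf(f)
        return zh - zf, -0.5 * (zh + zf)

    d, c = dprime_and_c(hits=40, misses=10, fas=15, crs=35)   # invented counts
    print(f"d' = {d:.2f}, c = {c:.2f}")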
Acknowledgements
Supported by Siemens Audiologische Technik and the Max Planck Institute for Human Cognitive
and Brain Sciences
References
Gatehouse S. & Noble W. 2004. The speech, spatial and qualities of hearing scale (SSQ). Int J
Audiol, 43(2), 85-99.
Jones M.R., Moynihan H., MacKenzie N. & Puente J. 2002. Temporal aspects of stimulus-driven
attending in dynamic arrays. Psychological Science, 13(4), 313-319.
Oral Presentation Abstracts: Tuesday afternoon, 1615-1700
Preliminary results of a survey of experiences in listening to music via hearing aids
Brian C.J. Moore and Sara M.K. Madsen, Department of Experimental Psychology, University of
Cambridge, UK
An internet-based survey was conducted of experiences in listening to music via hearing aids. Some
key results are presented here. Valid responses were obtained from over 500 participants. Responses
were analysed according to the self-reported degree of hearing loss: Mild, Moderate, Severe, and
Profound. 36% of participants reported problems with acoustic feedback when listening to music,
and the prevalence of such problems did not vary significantly with degree of hearing loss. A
substantial number of participants reported problems such as “warbling effects” or “when (the
organ) music STOPS, aids howl”, that are probably associated with feedback cancellation. In
response to the question “When listening to music via radio, TV or stereo system, do you find your
hearing aids to be helpful”, 64% of participants responded “Yes, a lot” or “Yes, a little”, but 13%
responded “No, a bit worse” or “No, a lot worse”. In response to the question “When listening to
live music, do you find your hearing aids to be helpful”, 53% of participants responded “Yes, a lot”
or “Yes, a little”, but 17% responded “No, a bit worse” or “No, a lot worse”. The difference in
responses for reproduced and live music probably reflects the higher sound levels that are typical of
the latter. For live music, hearing aids tended to be more helpful for those with severe or profound
hearing loss. While 61% of participants agreed with the statement that hearing aids “Make the
music louder”, only 28% agreed with the statement that hearing aids “Help hear soft passages
without the louder parts being too loud”. This indicates that the automatic gain control systems in
the hearing aids were not performing as well as would be desired. 32% of participants reported that
hearing aids “sometimes make the music seem distorted”, perhaps reflecting clipping or overload of
the circuitry resulting from the high levels of live music. Only 21% of participants agreed with the
statement that hearing aids “improve the tone quality of music”, while 29% agreed with the
statement that hearing aids “worsen the tone quality of music”. Judgements of tone quality tended to
be more positive for participants with mild hearing loss.
Overall, the results point to a need to improve feedback cancellation systems and automatic
gain control systems in hearing aids, and to increase the dynamic range of hearing aids to avoid
distortion for high input levels.
Acknowledgements
Supported by Starkey (USA).
Mid-bandwidth loudness depression in hearing-impaired listeners
J.L. Verhey* and J. Hots*, *Department of Experimental Audiology, Otto von Guericke University
Magdeburg, Magdeburg, Germany
Beyond a certain critical bandwidth, loudness increases with bandwidth, an effect commonly
referred to as spectral loudness summation. For bandwidths smaller than the critical bandwidth (i.e.,
subcritical bandwidths), it is commonly assumed that the loudness does not depend on bandwidth.
This view was recently challenged by Hots et al. (2013, 2014). They observed in normal-hearing
listeners a mid-bandwidth loudness depression, i.e., a decrease in loudness when the bandwidth was
increased up to about the critical bandwidth. The level difference between a noise with a subcritical
bandwidth and an equally loud sinusoid could be as large as 8 dB. The effect was not restricted to a
certain frequency and increased as the level decreased. The present study investigated if such an
effect was also observed in hearing-impaired listeners with a hearing loss of cochlear origin. The
effect was studied at a centre frequency of 1500 Hz. In the frequency range around this frequency,
the listeners had a flat hearing loss. The individual hearing loss ranged from 40 to 75 dB HL.
Listeners were asked to adjust the level of the noise test signal to be perceived as equally loud as a
1500-Hz reference sinusoid. The test bandwidth ranged from 15 to 1620 Hz. The level of the
reference signal was chosen individually based on the threshold at this frequency. It was set to be
roughly equal to the loudness of a 30-dB sinusoid for normal-hearing listeners. In general, listeners
required a higher level for the noise than for the sinusoid at equal loudness. For some listeners, the
largest difference was found close to the normal critical bandwidth as in the normal-hearing
listeners. For most hearing-impaired listeners, this maximum was shifted towards larger
bandwidths, presumably reflecting the reduced frequency selectivity in these listeners. The effect
was smaller than in normal-hearing listeners when compared at about the same reference loudness
but was similar when compared at about the same reference level (70 dB SPL). The results of the
hearing-impaired listeners argue against peripheral suppression as the underlying mechanism. The
data are consistent with the hypothesis that the reduction in pitch strength with bandwidth affects
loudness.
Acknowledgements:
This work was supported by the Deutsche Forschungsgemeinschaft (SFB/TRR31).
References
Hots, J., Rennies, J., & Verhey, J. L. 2013. Loudness of sounds with a subcritical bandwidth: A
challenge to current loudness models? J Acoust Soc Am, 134, EL334–EL339.
Hots, J., Rennies, J., & Verhey, J.L. 2014. Loudness of subcritical sounds as a function of
bandwidth, center frequency, and level. J Acoust Soc Am, 135, 1313–1320.
The effects of age and hearing loss on neural synchrony and interaural phase difference
discrimination
A. King, K. Hopkins and C.J. Plack, Department of Computer Science, University of Manchester,
Manchester, UK
Hearing difficulties occurring in middle-age and later may be partly due to reduced neural synchrony
to the temporal characteristics of sounds (e.g. Ruggles et al, 2012). Poor neural synchrony may
affect sensitivity to interaural phase difference (IPD), which may be used to locate and separate
sounds from different angles around the listener. Neural synchrony can be recorded from the
auditory brainstem as the electrophysiological frequency-following response (FFR; Worden &
Marsh, 1968). This study aimed to determine whether changes in IPD sensitivity associated with
hearing loss and age (as published in King et al, 2014) are underpinned by poor phase locking as
defined by FFR strength.
Listeners (N=36) varied in age (19-83 yr) and absolute threshold (−1 to 59 dB SPL at 250 and
500 Hz). They discriminated—separately—IPDs in the temporal fine structure (TFS) and envelope of
20 Hz amplitude-modulated (AM) tones carried by either 250 or 500 Hz tones. With absolute
threshold partialled out, increasing age was correlated with poorer envelope-IPD discrimination,
and to a lesser extent, TFS-IPD discrimination. With age partialled out, increasing absolute threshold
was correlated with poorer TFS-IPD discrimination, but not envelope-IPD discrimination.
FFR to four different AM tones was measured simultaneously with electroencephalography
(tones presented dichotically, two to each ear). AM rates of 16, 27, 115 and 145 Hz were used, with
carrier tones of 307, 537, 357 and 578 Hz respectively. Neither FFR at the AM rates, nor FFR at the
component frequencies, was correlated with absolute threshold. As such, poorer TFS-IPD
discrimination with hearing loss could not be explained by neural desynchrony. FFR to the
components, and to the 145 Hz AM rate, deteriorated with age irrespective of absolute threshold.
Correlations between TFS-IPD thresholds and FFR at the component frequencies, and between
envelope-IPD thresholds and FFR at the AM rates, were almost entirely explained by age. On the
other hand, IPD thresholds increased with increasing age even when FFR strength was partialled out,
suggesting other age-related factors also affect IPD sensitivity.
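The partial correlations reported above can be reproduced with the standard residual method: regress both variables of interest on the nuisance variable and correlate the residuals. A sketch under that standard definition (the variable names are invented; this is not the authors' analysis code):

    import numpy as np

    def partial_corr(x, y, z):
        """Correlation between x and y with z partialled out, computed by
        correlating the residuals of the regressions x~z and y~z."""
        Z = np.column_stack([np.ones_like(z), z])          # design matrix with intercept
        rx = x - Z @ np.linalg.lstsq(Z, x, rcond=None)[0]  # residual of x given z
        ry = y - Z @ np.linalg.lstsq(Z, y, rcond=None)[0]  # residual of y given z
        return np.corrcoef(rx, ry)[0, 1]

    # e.g. age vs envelope-IPD threshold with absolute threshold partialled out
    # (hypothetical per-listener arrays):
    # r = partial_corr(ages, envelope_ipd_thresholds, abs_thresholds)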
Acknowledgements
Supported by the UK Medical Research Council (grant ref: G1001609) and Oticon A/S.
References
King, A., Hopkins, K. & Plack, C.J. 2014. The effects of age and hearing loss on interaural phase
difference discrimination. J Acoust Soc Am, 135, 342-351.
Ruggles, D., Bharadwaj, H. & Shinn-Cunningham, B. G. 2012. Why middle-aged listeners have
trouble hearing in everyday settings. Curr Biol, 22, 1417-1422.
Worden, F.G. & Marsh, J.T. 1968. Frequency-following (microphonic-like) neural responses
evoked by sound. Electroenceph Clin Neurophysiol, 25, 42-52.
Oral Presentation Abstracts: Wednesday afternoon 1430-1600 (parallel session 1)
Changes in nitric oxide synthase in the cochlear nucleus following unilateral acoustic trauma
B. Coomber, V.L. Kowalkowski, J.I. Berger, A.R. Palmer and M.N. Wallace, MRC Institute of
Hearing Research, Nottingham, UK
Nitric oxide (NO) is a potent neuromodulator in the mammalian brain. A free radical produced by
the enzyme nitric oxide synthase (NOS), NO regulates plasticity via presynaptic and postsynaptic
mechanisms. Neuronal NO production, and consequently neuronal NOS (nNOS) distribution, is
widespread throughout the brain, including the auditory system (Guix et al., 2005).
We have previously demonstrated that unilateral acoustic over-exposure (AOE) altered NOS
expression in the cochlear nucleus (CN) of guinea pigs, in a manner that correlated with behavioural
evidence of tinnitus (Coomber et al., 2014). Here, we further examined NOS expression in the
ventral CN (VCN): specifically, the types of neurons that express NOS, the distribution across the
rostral-caudal axis of the VCN, and the time course of AOE-induced NOS change, using
immunohistochemical methods.
NOS was present in a variety of principal cells – including spherical and globular bushy,
multipolar, and octopus cells – spanning the entire rostral-caudal extent of the VCN. Unilateral
AOE, and subsequent tinnitus development, resulted in strong asymmetrical differences, with both a
larger number of NOS-positive cells and a greater amount of NOS on the ipsilateral side, occurring
across the whole extent of the VCN. Notably, the mean soma size of NOS-positive neurons was
significantly smaller in the ipsilateral VCN, compared with the contralateral VCN.
Moreover, we found that changes in NOS expression – similar to that seen in tinnitus animals –
were apparent as early as 24 h post-AOE, although asymmetries were highly variable between
animals at all short-term intervals examined (up to 21 d). Asymmetrical NOS expression at these
shorter-term intervals was not correlated with hearing level threshold shifts.
Changes in excitability in both the VCN and dorsal CN (DCN) have been implicated in tinnitus
generation (Kaltenbach & Afman, 2000; Kraus et al., 2011; Vogler et al., 2011). NOS, and
consequently NO, may be involved in pathological changes in the VCN that subsequently lead to
tinnitus.
Acknowledgements
BC was supported by Action on Hearing Loss (International Project Grant G62).
References
Coomber B., Berger J.I., Kowalkowski V.L., Shackleton T.M., Palmer A.R. & Wallace M.N. 2014.
Neural changes accompanying tinnitus following unilateral acoustic trauma in the guinea pig.
Eur J Neurosci, in press.
Guix F.X., Uribesalgo I., Coma M. & Muñoz F.J. 2005. The physiology and pathophysiology of
nitric oxide in the brain. Prog Neurobiol, 76, 126-52.
Kaltenbach J.A. & Afman C.E. 2000. Hyperactivity in the dorsal cochlear nucleus after intense
sound exposure and its resemblance to tone-evoked activity: a physiological model for tinnitus.
Hear Res, 140, 165-72.
Kraus K.S., Ding D., Jiang H., Lobarinas E., Sun W. & Salvi R.J. 2011. Relationship between
noise-induced hearing-loss, persistent tinnitus and growth-associated protein-43 expression in
the rat cochlear nucleus: does synaptic plasticity in ventral cochlear nucleus suppress tinnitus?
Neuroscience, 194, 309-25.
Vogler D.P., Robertson D. & Mulders W.H. 2011. Hyperactivity in the ventral cochlear nucleus
after cochlear trauma. J Neurosci, 31, 6639-45.
Amplitude-modulation detection and discrimination in the chinchilla ventral cochlear nucleus
following noise-induced hearing loss
M. Sayles* and M.G. Heinz*§, *Department of Speech, Language and Hearing Sciences, Purdue
University, West Lafayette, IN, USA, §Weldon School of Biomedical Engineering, Purdue
University, West Lafayette, IN, USA
Amplitude modulation (AM) is a common feature of natural sounds. Modulation supports
perceptual segregation of "objects" in complex acoustic scenes, and provides information for speech
understanding and pitch perception. Previous work in our laboratory demonstrated increased
modulation gain, without change in modulation-transfer-function bandwidth for auditory-nerve
fibers (ANFs) in hearing-impaired (HI) chinchillas, compared to normal-hearing (NH) controls,
when evaluated in quiet conditions using high modulation-depth narrowband signals (Kale & Heinz,
2010; Kale & Heinz, 2012). However, it is unclear how (or if) this increased modulation gain relates
to psychophysically measured improved modulation detection thresholds in HI vs NH human
listeners (Moore & Glasberg, 2001; Füllgrabe et al, 2003).
The ventral cochlear nucleus (VCN) is the first brainstem processing station of the ascending
auditory pathway and an obligatory synapse for all ANFs. VCN neurons typically show enhanced
spike synchrony to the amplitude envelope relative to their ANF inputs, when measured with high
modulation-depth signals (e.g., Frisina et al, 1990). Using an approach based on signal-detection
theory, the present study characterized the AM-detection and discrimination sensitivity of all VCN
unit types in NH and HI chinchillas under quiet conditions and in background noise.
The HI animals were exposed to 116 dB SPL, 500-Hz-centered, octave-band Gaussian noise for 2
hours under anaesthesia, and then allowed to recover for at least two weeks. Auditory brainstem
response (ABR) thresholds were increased by ~25 dB across the audiogram following this
intervention. Distortion-product otoacoustic emissions (DPOAEs) were decreased by ~15 dB,
indicating some degree of outer hair-cell loss.
We recorded spike times in response to sinusoidal amplitude-modulated (SAM) tones with
modulation depths (defined as 20·log10(m), where m is the modulation index between 0 and 1)
between -30 and 0 dB in 1-dB steps from all major VCN unit types (primary-like, primary-with-notch,
transient chopper, sustained chopper, onset chopper, onset-L). Stimuli were presented in
quiet, and in three levels of background noise (10, 15, 20 dB r.m.s. relative to the SAM tone,
16.5kHz bandwidth). A statistical bootstrap method was used to express spike synchrony as d’ for
AM detection and discrimination. Under quiet conditions, primary-like units showed worse AM
detection thresholds in HI animals compared to NH controls. In contrast, chopper units showed
improved AM detection thresholds (by ~5 dB) in HI animals. This may be a neurophysiological
correlate of the improved AM detection observed psychophysically in humans. However, in
background noise AM detection thresholds increased in all unit types. Background noise had a
greater effect in HI animals than in NH controls. These results have implications for models of
temporal-information processing in the normal and impaired auditory system, especially in complex
listening environments.
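The modulation-depth convention stated above (depth in dB = 20·log10(m)) translates directly into stimulus code. A minimal sketch of SAM-tone generation under that convention; the carrier frequency, modulation rate, duration and sampling rate are illustrative defaults, not the study's values:

    import numpy as np

    def sam_tone(depth_db, fc=500.0, fm=100.0, dur=0.5, fs=48000):
        """Sinusoidally amplitude-modulated tone with modulation depth in dB,
        where depth_db = 20*log10(m) and m is the modulation index (0 < m <= 1)."""
        m = 10 ** (depth_db / 20.0)          # -30 dB -> m ~= 0.032; 0 dB -> m = 1
        t = np.arange(int(fs * dur)) / fs
        return (1 + m * np.sin(2 * np.pi * fm * t)) * np.sin(2 * np.pi * fc * t)

    # The study swept depths from -30 to 0 dB in 1-dB steps:
    depths = np.arange(-30, 1)               # dB re full modulation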
Acknowledgements
Supported by an Action on Hearing Loss Fulbright Commission scholarship (M.S.), and NIH-NIDCD grant R01-DC009838.
References
Kale S. & Heinz M.G. 2010. Envelope coding in auditory nerve fibers following noise-induced
hearing loss. JARO, 11, 657-673.
Kale S. & Heinz M.G. 2012. Temporal modulation transfer functions measured from auditory nerve
responses following sensorineural hearing loss. Hear Res, 286, 64-75.
Moore B.C.J. & Glasberg B. R. 2001. Temporal modulation transfer functions obtained using
sinusoidal carriers with normally hearing and hearing-impaired listeners. J Acoust Soc Am, 110,
1067-1073.
Füllgrabe C., Meyer B. & Lorenzi C. 2003. Effect of cochlear damage on the detection of complex
temporal envelopes. Hear Res, 178, 35-43.
Frisina R.D., Smith R.L. & Chamberlain S.C. 1990. Encoding of amplitude modulation in the gerbil
cochlear nucleus: I. A hierarchy of enhancement. Hear Res, 44, 99-122.
Commissural improvement of sound level sensitivity and discriminability in the inferior
colliculi
L.D. Orton and A. Rees, Institute of Neuroscience, Newcastle University, Newcastle upon Tyne, UK
The inferior colliculi (ICs) are the foremost nuclei of the auditory midbrain and the final subcortical
locus in the auditory pathway where bilateral interactions occur. Direct processing between the ICs
is mediated by the commissure of the inferior colliculi (CoIC): connections that are reciprocal and
tonotopic. We investigated if the CoIC has an influence on the representation of sound level in the
spike output of single neurons in the ICs. Experiments were conducted in urethane anaesthetised,
adult guinea pigs. Neural activity in the left IC was rapidly and reversibly deactivated by one of two
distinct, yet complementary techniques: cryoloop cooling (CC) or microdialysis of procaine (MDP),
a short acting sodium channel blocker. We have previously verified the use of CC for this purpose
(Orton et al, 2012). To confirm the validity of MDP in these experiments, we recorded multi-unit
responses adjacent to the microdialysis probe in the left IC in response to pure tone stimuli. Neural
activity was suppressed on commencement of MDP. Washout of procaine with artificial
cerebrospinal fluid (aCSF) produced recovery of driven responses to control levels within 30
minutes. Histological analysis of the lesion produced by the microdialysis probe found a loss of just
1.49% (±0.31) of the total volume of the left IC. We generated rate-intensity-functions (RIFs) from
the responses of 38 well discriminated single units in the right IC before, during and following
recovery from reversible deactivation of neural activity in the left IC in response to CC (n=22) or
MDP (n=16). The slopes of the RIFs were observed to shift to the right on deactivation, so that a higher
sound level was required to produce the same firing rate. The median level required to elicit half-maximal
firing rate in the population increased from 51 to 59 dB before recovering back to 51 dB.
We computed receiver operating characteristics for each RIF using each position in the RIF as a
target relative to all other responses. We then summed the absolute deviation from 0.5 in all
responses in a RIF to produce a discriminability index (DI), quantifying how well changes in sound
level could be discriminated on the basis of changes in firing rate along each RIF. The population of DIs
reduced during contralateral IC deactivation. Similar effects were observed in response to CC and
MDP. These findings demonstrate that: 1) MDP is an effective and reliable means of rapidly and
reversibly abolishing spiking in guinea pig IC; 2) the CoIC has a significant influence on RIFs in
IC; and 3) the ability of IC neurons to signal changes in sound level was degraded by removing
intercollicular processing, suggesting that commissurally mediated gain control enhances sound level
encoding in the ICs.
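The discriminability index described above has a compact form: for each point on a rate-intensity function, compute an ROC area against the pooled responses from all other levels, then sum the absolute deviations from chance (0.5). A sketch under the assumption that trial-by-trial spike counts are available per sound level (not the authors' code):

    import numpy as np

    def roc_area(target, others):
        """Probability that a random draw from `target` exceeds one from
        `others` (ties split) -- the area under the ROC curve."""
        t = np.asarray(target)[:, None]
        o = np.asarray(others)[None, :]
        return (t > o).mean() + 0.5 * (t == o).mean()

    def discriminability_index(counts_by_level):
        """Sum over levels of |AUC - 0.5|, comparing each level's spike
        counts against the pooled counts from all other levels."""
        di = 0.0
        for i, target in enumerate(counts_by_level):
            others = np.concatenate(
                [c for j, c in enumerate(counts_by_level) if j != i])
            di += abs(roc_area(target, others) - 0.5)
        return di

    # counts_by_level: list of arrays, one array of trial spike counts per level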
Acknowledgements
This research was supported by BBSRC grant BB/J008680/1
References
Orton L.D., Poon P.W.F. & Rees A. 2012. Deactivation of the inferior colliculus by cooling
demonstrates intercollicular modulation of neuronal activity. Front Neural Circuits, 6, 100.
Probing the physiology of perception: Invariant neural responses in ferret auditory cortex
during vowel discrimination
S.M. Town, K. C. Wood, H. Atilgan, G. P. Jones and J. K. Bizley, Ear Institute, UCL, London, UK
Perceptual invariance is the ability to recognize an object despite variation in sensory input. For
example, when listening to a violin solo the instrument has a constant identity, despite variations in
sound waveform as different notes are played. Likewise in speech, we can recognize phonetic
components, such as the vowel “u”, across talkers with different voice pitches. However, it is
unclear how the brain supports perceptual invariance and specifically whether neurons in auditory
cortex extract invariant representations of vowel identity across pitch.
Here we study an animal model of perceptual invariance in which ferrets (n=4) were trained in
a two-alternative forced choice task to discriminate synthetic vowel sounds. On each trial of the
task, the subject was presented with two vowel sounds (250 ms in duration) and required to respond
at a particular location depending on vowel identity. Across trials vowel pitch was varied and ferrets
were required to generalize discrimination across pitch variation. During task performance, multi-unit activity was recorded from microelectrodes positioned in auditory cortex.
We identified sound-responsive units, selected as those whose firing rate during vowel
presentation differed by ≥ 3 standard deviations from mean spontaneous activity. For each
responsive unit, we asked if it was possible to decode vowel identity from the spiking responses
observed across all pitches. Using a permutation test to assess significance, the majority of units
(60%) were found to encode information about vowel identity across pitches. We also decoded
vowel pitch across all vowel identities in a smaller proportion of units (20%). Of these units,
approximately half (11% of the entire neural population) provided information about both vowel
pitch and identity. Finally, neural responses recorded from approximately one third of units (38%)
were informative about the behavioural choice of the animal. By using a classification procedure in
which temporal parameters were free to vary, we were also able to estimate when in the neural
response information was encoded. We found that information about vowel timbre arose early in the
response (0-100 ms after stimulus onset) whereas information about pitch and choice arose in the
sustained and offset periods of the stimulus (200-400 ms).
Our results show that auditory cortical neurons may offer a physiological substrate for invariant
perceptual representations of sound. Furthermore, during behaviour information about both acoustic
and perceptual parameters is represented in auditory cortex, potentially during different time
windows of the neural response.
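The permutation test used above to assess decoding significance can be sketched generically: decode with the true vowel labels, then build a null distribution by shuffling labels across trials. The nearest-class-mean decoder below is a simple stand-in, not the authors' classifier, and all names are illustrative:

    import numpy as np

    rng = np.random.default_rng(0)

    def decode_accuracy(rates, labels):
        """Leave-one-out nearest-class-mean decoding of vowel identity from
        single-trial firing rates (NumPy arrays; assumes several trials per class)."""
        correct = 0
        classes = np.unique(labels)
        for i in range(len(rates)):
            mask = np.arange(len(rates)) != i          # hold out trial i
            means = np.array([rates[mask & (labels == c)].mean() for c in classes])
            correct += classes[np.argmin(np.abs(means - rates[i]))] == labels[i]
        return correct / len(rates)

    def permutation_p(rates, labels, n_perm=1000):
        """P-value: fraction of label shuffles decoding at least as well as the data."""
        observed = decode_accuracy(rates, labels)
        null = [decode_accuracy(rates, rng.permutation(labels)) for _ in range(n_perm)]
        return (np.sum(np.array(null) >= observed) + 1) / (n_perm + 1)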
Acknowledgements
Supported by the BBSRC, Royal Society and Wellcome Trust
Across-electrode variation in gap-detection and stimulus-detection thresholds for cochlear
implant users
R.P. Carlyon§, J.M. Deeks§, A.J. Billig§ and J.A. Bierer‡, §MRC Cognition and Brain Sciences
Unit, Cambridge, UK, ‡University of Washington, Seattle, WA, USA
Cochlear implant (CI) users usually exhibit marked across-electrode differences in detection
thresholds with "focussed" modes of stimulation, such as tripolar (TP) or partial-tripolar (pTP)
mode. This may reflect differences either in local neural survival (e.g. "dead regions") or in the
distance of the electrodes from the modiolus ("electrode distance"). There is considerable interest in
differentiating between these two possible sources of variability, because the former may have
greater implications for speech perception and for possible re-programming strategies. However,
detection thresholds typically vary much less across electrodes in the monopolar mode of
stimulation implemented in contemporary speech-processing strategies.
We reasoned that across-electrode variations in a supra-threshold task, when stimulation level
was adjusted to produce a comfortably loud sensation across electrodes, would be more dependent
on neural health than on electrode distance. We therefore measured stimulus-detection and gap-detection
thresholds for at least four electrodes in each of ten Advanced Bionics CI users, using
1000-pps pulse trains. The electrodes were selected for each user to have a wide range of stimulus-detection
thresholds. We also reasoned that listeners might exhibit substantial across-electrode
variations in the gap-detection task even in monopolar mode, as such variations have been observed
in other supra-threshold tasks. Therefore all measures were obtained both in pTP and monopolar
mode.
Univariate ANCOVAs, with subject entered as a fixed factor, assessed the correlations between
different measures with overall subject differences removed. Both stimulus-detection and gap-detection
thresholds correlated highly significantly across modes (r=0.65 and 0.66, p<0.001 in both
cases). However, there was no significant correlation between stimulus-detection and gap-detection
thresholds in either mode. Hence gap-detection thresholds likely tap a source of across-electrode
variation additional to, or different from, that revealed by stimulus-detection thresholds. Stimulus-detection
thresholds were significantly lower for apical than for basal electrodes in both modes; this
was only true for gap detection in pTP mode. Finally, although the across-electrode standard
deviation in stimulus-detection thresholds was greater in pTP than in monopolar mode, the
reliability of these differences – assessed by dividing the across-electrode standard deviation by the
standard deviation across adaptive runs for each electrode – was similar for the two modes; this
metric was also similar across modes for gap detection. Hence reliable across-electrode differences
can be revealed even using clinically available monopolar stimulation.
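The reliability metric described above has a direct arithmetic form. A sketch under one reading of that description (the array layout is an assumption: one row per electrode, one column per adaptive run, with the within-electrode SDs averaged across electrodes):

    import numpy as np

    def across_electrode_reliability(thresholds):
        """`thresholds`: 2-D array, shape (n_electrodes, n_runs).
        Returns the across-electrode SD of mean thresholds divided by the
        mean within-electrode (across-run) SD; larger values indicate
        electrode differences that exceed run-to-run measurement noise."""
        electrode_means = thresholds.mean(axis=1)
        across_sd = electrode_means.std(ddof=1)
        within_sd = thresholds.std(axis=1, ddof=1).mean()
        return across_sd / within_sd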
Declaration of interest: The authors report no conflicts of interest. The authors alone are
responsible for the content and writing of the paper.
Source analyses of vestibular evoked potentials (VsEPs) activated during rhythm perception
N.P. Todd* and C.S. Lee§, *Faculty of Life Sciences, University of Manchester, Manchester, UK,
§Department of Psychology, Goldsmiths College, London, UK.
Vestibular evoked potentials (VsEPs) of cortical, including frontal and temporal lobe, origin
produced by short tone pips have recently been described (Todd et al., 2014a,b). These results suggest
that vestibular inputs could potentially influence the perception of sound sequences, including the
perception of rhythm (Trainor et al 2009). In the light of these developments, and a recently
described new synthesis (Todd and Lee 2014), we report a reanalysis of prior data from an
experiment to look for evidence of movement-relatedness in auditory evoked potentials (AEPs)
(Todd 2003; Todd and Seiss 2004). In this experiment EEG was recorded while subjects listened to
irregular, regular and syncopated rhythms, and then actively synchronised with the regular rhythm.
The reanalysis makes use of a source model which allows a specification of the current activity over
time in temporal and frontal lobe areas during passive listening to a rhythm, or active
synchronization where it dissociates pre-motor and motor contributions. The model successfully
captures the main features of the rhythm in showing that: (1) the metrical structure is manifest in
greater current activity for strong compared to weak beats; (2) the introduction of syncopation
produces greater activity than for a regular rhythm. In addition, the outcomes of the modelling suggest
that: (1) activity in both temporal and frontal areas contributes to the metrical percept and that this
activity is distributed over time, i.e. not localized to beat onsets; (2) it is the confirmation of a beat
following syncopation that produces the greatest activity associated with syncopation; (3) two
distinct processes are involved in auditory cortex, corresponding to tangential and radial (possibly
vestibular dependent) components of the temporal lobe current sources.
Acknowledgements
Partially supported by the Wellcome Trust.
References
Todd NPM (2003) The N2 is movement related. Proceedings of the British Society of Audiology
Short Papers Meeting on Experimental Studies of Hearing and Deafness, September 2003.
Todd NPM and Lee CS (2014) The sensory-motor theory of rhythm and beat induction 20 years on:
A new synthesis and future directions. Front Neurosci, under review.
Todd NPM, Paillard AC, Kluk K, Whittle E and Colebatch JG (2014a) Vestibular receptors
contribute to cortical auditory evoked potentials. Hear Res, 309, 63-74.
Todd NPM, Paillard AC, Kluk K, Whittle E and Colebatch JG (2014b) Source analysis of short
and long latency vestibular-evoked potentials (VsEPs) produced by left versus right ear air-conducted
500 Hz tone pips. Hear Res, 312, 91-102.
Todd NPM and Seiss E (2004) Electrophysiological correlates of beat induction as internally- and
externally-guided action. In SD Lipscomb, R Ashley, RO Gjerdingen and P Webster (Eds) Proc
8th Int Conf Music Percept Cog, Evanston, IL. (Causal Productions: Adelaide) pp 212-218.
Trainor LJ, Gao X, Lei J, Lehtovaara K and Harris LR (2009) The primal role of the vestibular
system in determining rhythm. Cortex 45, 35-43.
Oral Presentation Abstracts: Wednesday afternoon, 1430-1600 (parallel session 2)
The psychosocial experiences of individuals with hearing loss
E. Heffernan*, N. Coulson§, H. Henshaw* and M. Ferguson*, *NIHR Nottingham Hearing
Biomedical Research Unit, University of Nottingham, UK, §Division of Rehabilitation and Ageing,
University of Nottingham, UK
Hearing loss is not simply a physical condition; it also substantially affects psychosocial functioning
(Espmark et al, 2002). Specifically, people with hearing loss (PHL) can experience difficulties with
participation in social roles and activities, which may place them at risk of isolation and poor
wellbeing (Arlinger, 2003). Theoretical models from health psychology have much potential to
improve our understanding of the psychosocial experiences of PHL. One such model, the Common
Sense Model (CSM) by Leventhal et al (1980), has been successfully applied to numerous chronic
health conditions but not to hearing loss. The primary tenet of the CSM is that individuals have
parallel cognitive and emotional responses to their health condition and that they select and evaluate
coping strategies to manage these responses (Hagger & Orbell, 2003). The aim of this qualitative
study was to explore the psychosocial impact of hearing loss using the CSM as a theoretical
framework.
Semi-structured interviews were conducted with 33 participants (eight hearing healthcare
professionals and 25 adults with mild-to-moderate hearing loss). Maximum variation sampling was
used to recruit diverse participants (Patton, 1990). Therefore, PHL who varied in terms of age,
gender, occupation, hearing loss duration and hearing aid use were recruited. Also, various hearing
healthcare professionals, including audiologists, hearing therapists and international opinion leaders
on hearing loss, were recruited. Data collection was concluded when theoretical saturation was
reached. The data are currently undergoing the thematic analysis process outlined by Braun and
Clarke (2006).
Preliminary results indicate that the CSM can provide valuable and unique insights into the
psychosocial impact of hearing loss. The results of this study will help to inform the development of
a measure that assesses the impact of hearing loss on social participation.
Acknowledgements
This work is part of an NIHR funded PhD studentship.
References
Arlinger S. 2003. Negative consequences of uncorrected hearing loss: A review. Int J Audiol, 42,
2S17-2S20.
Braun V. & Clarke V. 2006. Using thematic analysis in psychology. Qual Res Psych, 3, 77-101.
Espmark A-K.K., Rosenhall U., Erlandsson S. & Steen B. 2002. The two faces of presbyacusis:
Hearing impairment and psychosocial consequences. Int J Audiol, 41, 125-135.
Hagger M.S. & Orbell S. 2003. A meta-analytic review of the common-sense model of illness
representations. Psych Health, 18, 141-184.
Leventhal, H., Meyer, D. & Nerenz, D. 1980. The common sense model of illness danger. In: S.
Rachman (ed). Medical Psychology (Vol. 2). New York: Pergamon, pp. 7–30.
Patton, M. Q. 1990. Qualitative Evaluation and Research Methods. New York: Sage Publications
Inc.
“You’re not going to make me wear hearing aids are you?” Factors affecting adolescents’
uptake of and adherence to aural habilitation
S.E.I. Aboagye*, M.A. Ferguson§, N. Coulson‡, J. Birchall§‡ and J.G. Barry*, *MRC Institute of
Hearing Research, Nottingham, UK, §Nottingham Hearing Biomedical Research Unit, ‡Faculty of
Medicine & Health Sciences, University of Nottingham, UK
It has been suggested that uptake of and adherence to aural habilitation among adults with hearing loss
are determined by the perceived susceptibility and seriousness of the condition and the perceived
benefits and barriers to intervention (Laplante-Lévesque et al, 2010). However, factors affecting
uptake and adherence to habilitation among adolescents (11-18 years) with mild-to-moderate
sensorineural hearing loss (20-70 dB HL) are complex and not well understood. Reports from
teachers and clinicians suggest that this population, despite experiencing poorer speech perception
and more effortful listening than their normally hearing peers, frequently reject hearing aids and
other assistive technology. However, without appropriate support they are at risk of having
difficulties with language development, academic achievement and psychosocial functioning (Bess
et al, 1998).
In the current study, 11-18 year olds with mild to moderate sensorineural hearing loss (n=20)
participated in individual semi-structured interviews to identify situations and tasks at school or at
home which caused them most difficulty and which could be targeted by habilitation. Cognitive and
affective factors (Steinberg, 2005) impacting on uptake and adherence to previous aural habilitation
were identified and interpreted using a self-regulatory model of health behaviour (Leventhal et al,
1980). Findings from this study will help delineate a framework for how psychosocial factors
interact to affect behavioural outcomes among this population. The study represents a first step
towards developing new habilitation programmes for their support.
Acknowledgements
Supported by the Medical Research Council.
References
Bess F.H., Dodd-Murphy J. & Parker R.A. 1998. Children with minimal sensorineural hearing loss:
prevalence, educational performance, and functional status. Ear Hear, 19, 339-354.
Laplante-Lévesque A., Hickson L. & Worrall L. 2010. Factors influencing rehabilitation decisions
of adults with acquired hearing impairment. Int J Audiol, 49, 497-507.
Leventhal H., Meyer D. & Nerenz D. 1980. The common sense representation of illness danger.
Contr Med Psychol, 2, 7-30.
Steinberg, L. 2005. Cognitive and affective development in adolescence. Trends Cogn Sci, 9, 69-74.
Co-occurrence of cognition, communication and listening difficulties in children with APD,
SLI or ASD
M.A. Ferguson* and D.R. Moore§, *NIHR Nottingham Hearing Biomedical Research Unit, UK,
§Cincinnati Children’s Hospital Medical Center, USA
It has been suggested that children with developmental disorders, such as auditory processing
disorder (APD), specific language impairment (SLI) and autistic spectrum disorder (ASD), have
similar characteristics. Diagnosing children with APD can be problematic due to the lack of a ‘gold
standard’ diagnostic test and suggestions that diagnosis is based on referral route (Ferguson et al.,
2011).
A multicentre population study (IMAP) tested 1476 normally-hearing children, aged 6-11 years
(Moore et al., 2010). The children were categorised according to (i) the Children’s Communication
Checklist (CCC-2), and (ii) the Children’s Auditory Processing Performance Scale (CHAPPS). The
CCC-2 is designed to screen for different types of communication impairments, and categorised
children into groups with subprofiles of structural and pragmatic language difficulties, as well as
autistic behaviours. The children were identified as being aligned to diagnostic categories of either
language impairment (general, LI, or specific, SLI) or autistic spectrum disorder (ASD, or
Asperger’s syndrome). The CHAPPS was used to categorise children as having either listening
difficulties (overall score ≤ 5th percentile), or not.
Cognition (nonverbal IQ, memory), language and reading were significantly poorer (p<.05) for
the ‘clinical’ groups who had communication or listening difficulties compared to the typically
developing (TD) group. The exception was the Asperger’s group whose performance was similar to
the TD group. Listening abilities were significantly poorer for all the ‘clinical’ groups compared to
the TD children (p<.05). Furthermore, a significant proportion of children who had listening
difficulties were shown to have poorer structural (56%) and pragmatic (35%) language abilities, as
well as demonstrating autistic behaviours (35%).
The clinical implications are that APD co-occurs with LI and ASD, is likely to sit along the LI-ASD spectrum, and may not be a unique, discrete disorder. Children with listening difficulties who
attend audiology clinics should be screened for functional everyday communication difficulties (e.g.
CCC-2) to ensure appropriate onward referrals. A newly developed validated questionnaire, the
ECLIPS (Evaluation of Children’s Listening and Processing Skills) could be a useful clinical tool to
identify functional difficulties.
Acknowledgements
This research was funded by the National Institute for Health Research and Medical Research
Council.
References
Ferguson M.A., Hall R.L., Riley A. & Moore D.R. 2011. Communication, listening, speech and
cognition in children diagnosed with auditory processing disorder (APD) or specific
language impairment (SLI). J Sp Lang Hear Res, 54, 211-227.
Moore D.R., Ferguson M.A., Edmondson-Jones A.M., Ratib S. & Riley A. 2010. Nature of auditory
processing disorder in children. Pediatrics, 126, e382-e390.
X-linked gusher syndrome – a rare cause of hearing loss in children
S. Dasgupta, S. Raghavan, M. O’Hare and L. Marl, Department of Audiovestibular Medicine and
Audiology, Alder Hey Children’s NHS Foundation Trust, Liverpool, UK
A congenital mixed hearing loss with stapes fixation and a perilymph gusher at stapes surgery was
first described by Olson and Lehman (1968) and was later established to have an X-linked trait by
Nance et al (1973). This X-linked gusher syndrome is a very rare condition, with only circa 50 cases
reported in the English-language literature. The aim of this study is to review the condition through
our series of two children presenting with it.
Patient 1, born in 2004 of African Liberian parentage, was identified with a mixed hearing
loss at the age of 2 years. The right side showed an air conduction threshold of 80 dB HL across the
frequency range, with a bone conduction threshold averaging 55 dB HL, while the left side
indicated a profound hearing loss. He was aided early and then received a left cochlear implant. Patient
2, born in 2006, is patient 1’s half-brother, born of a Caucasian father. Identified by the newborn
hearing screening programme, he received early amplification. His air conduction thresholds were in the
80 dB HL range, with bone conduction thresholds at the same level as his brother’s.
However, his hearing losses were symmetrical. In both brothers, tympanometry was normal whilst
transient otoacoustic emissions were not recordable. MRI scans were normal in both. A high-resolution
CT scan (HRCT) with temporal bone windows demonstrated almost identical inner ear
dysmorphic features, including a loss of the basal partition of the cochlea communicating with the
internal auditory meatus (IAM), which were entirely consistent with X-linked gusher syndrome.
X-linked gusher syndrome is characterised by a communication between the subarachnoid
space and the perilymphatic space of the IAM and the cochlear basal turn, due to an absent lamina
cribrosa. Phelps et al (1991), in the largest series to date, observed that the HRCT features appeared
consistently, a finding later confirmed by Saylisoy et al (2014). The diagnosis of the condition rests
on radiological findings and the characteristic mixed hearing losses. The hearing loss reported is
usually bilaterally symmetrical and severe, can be progressive, and may present in late childhood or early
adult life. Our cases are unique in several respects reported for the first time here: their African
race; a proven early congenital onset and non-progression of the loss; and the presence of a
significant asymmetry. These different phenotypic presentations indicate the variable expressivity
of the syndrome and must be considered when evaluating congenital mixed losses in children.
Declaration of interest
The authors report no conflicts of interest. The authors alone are responsible for the content and
writing of the paper.
References
Olson N.R. & Lehman R.H. 1968. Cerebrospinal fluid otorrhea and the congenitally fixed stapes.
Laryngoscope, 78, 352–359.
Nance W.E., Setleff R., McLead A. et al. 1973. X-linked mixed deafness with congenital fixation of
the stapedial footplate and perilymphatic gusher. Birth Defects Orig Artic Ser, 4, 64–69.
Phelps P.D., Reardon W., Pembrey M. et al. 1991. X-linked deafness, stapes gushers and a
distinctive defect of the inner ear. Neuroradiology, 33, 326–330.
Saylisoy S., Incesulu A., Gurbuz M.K. & Adapinar B. 2014. Computed tomographic findings of
X-linked deafness: a spectrum from child to mother, from young to old, from boy to girl, from
mixed to sudden hearing loss. J Comput Assist Tomogr, 38, 20-4.
What encourages or discourages hearing-impaired children to take part in sport?
S. Wadsworth*§, H. Fortnum*§, A. McCormack*§‡ and I. Mulla#, *NIHR Nottingham Hearing
Biomedical Research Unit, Nottingham, UK, §Hearing and Otology Group, Division of Clinical
Neuroscience, School of Medicine, University of Nottingham, ‡MRC Institute of Hearing Research,
University Park, Nottingham, NG7 2RD, UK, #The Ear Foundation, Nottingham, UK
Limited physical activity including a lack of participation in sport has been linked to poorer
physical and mental health in adults and children (Bailey, 2006). Children are leading increasingly
sedentary lifestyles and are at greater risk of chronic diseases such as diabetes and obesity. Children
who participate in sport benefit from improved social skills, better communication and self-confidence,
and an improved quality of life (Kurkova et al, 2010). Hearing-impaired children should
have the same opportunities to participate in sport and receive these benefits. Therefore, the aim was
to understand what encourages and discourages these children to take part in sport.
This qualitative study used focus groups with hearing-impaired children, conducted in schools or
through social clubs. Information was collected on the children’s thoughts and feelings about
taking part in sport, including their involvement in and access to sport, what they do and don’t like
about sport, and how and with whom they are involved in sport.
Thematic analysis of the focus group transcripts identified several key themes including
friendships, family influence and communication. Children with a hearing impairment are more
likely to take part in sport if they have adequate support; for example, children were more likely to
try a sport if their friends and family played it and encouraged them to try it. Hearing-impaired
children with active siblings had tried more sports. Communication was a common theme
throughout regardless of the type of technology a child used. Children found communication easier
during sport if they were allowed to use their technology and had difficulty during sports such as
swimming. Children expressed a desire not to have to identify themselves at a club as having a
hearing loss. This affected their confidence when participating in sport.
Acknowledgements
This research is supported by the National Institute for Health Research.
References
Bailey R. 2006. Physical education and sport in schools: a review of benefits and outcomes. J
School Health, 76, 8, 397–401.
Kurkova P., Scheetz N & Stelzer J. 2010. Health and physical education as an important part of
school curricula: a comparison of schools for the deaf in the Czech Republic and United States.
Am Ann Deaf, 155, 1, 78–95.
Action on Hearing Loss. 2011. Taking action on hearing loss in the 21st Century.
Assessment of current provision for auditory ototoxicity monitoring in the UK
G. Al-Malky1, M. De Jongh1, M. Kikic1, S.J. Dawson1 and R. Suri2, 1UCL Ear Institute, London, UK,
2Department of Respiratory Medicine, CF Unit, Great Ormond Street Hospital, London, UK
Aminoglycoside antibiotics and chemotherapeutic agents such as cisplatin are two of the most
established drugs associated with permanent ototoxicity. This side effect and others, such as
nephrotoxicity, have restricted the use of these treatments to patient groups with no alternative.
These groups include oncology and cystic fibrosis (CF) patients. Repeated exposure to these
medications requires ongoing auditory monitoring for early identification of ototoxicity. This is
all the more relevant now that survival rates for these disorders have significantly improved,
making maintenance of a good quality of life essential. The American Speech-Language-Hearing
Association (ASHA) and the American Academy of Audiology (AAA) have produced the most
comprehensive guidelines for ototoxicity monitoring to date (ASHA, 1994; AAA, 2009); however,
similar guidelines are not available in the UK.
The aim of this study was to assess the current practice in UK services regarding ototoxicity
monitoring, using online surveys specifically designed for each of three targeted professions with
the UCL Opinio survey tool. These three professions comprised audiology, oncology and CF unit
clinicians. Hyperlinks to the surveys were mailed through respective professional bodies such as the
British Academy of Audiology, Children’s Cancer and Leukaemia Group and UK CF Trust.
Responses were obtained from 56 oncology, 133 audiology and 33 CF clinicians, representing
all regions of the UK. Encouragingly, 69.6%, 63.9% and 79.3% of the three respective professions
indicated that they do perform auditory monitoring. However there were substantial variations in the
reported referral criteria, audiological tests used for monitoring, criteria for identification of
ototoxicity, and indications for change in management. Oncologists confirmed that Cisplatin and
Carboplatin were considered first line treatments by 94.6% and 78.6% of the respondents, whereas
extended-interval Tobramycin was considered the first-line treatment by 97.0% of CF clinicians. More than
70% of the audiologists did not know whether patients were counselled about ototoxicity prior to
treatment, and 49.4% of them stated that baseline testing was not routinely performed.
Regarding the location where monitoring was performed, more than 90% of
testing was done in audiology departments, indicating limited access for unwell patients
unable to reach them.
unable to reach them. Respondents reported that most centers used patients’ auditory complaints as
the main reason for referral. This represents a drawback, as substantial cochlear damage would have
occurred already. There was strong evidence of limited awareness/agreement of responsibilities
between team members, so that audiologists’ roles were underutilised for counselling and
rehabilitation, and clinicians had limited understanding of the criteria for ototoxicity and the test battery
used to diagnose it. In conclusion, it was recommended to develop UK-wide clinical guidelines for
ototoxicity monitoring and to develop professional education programmes, as advocated by the
World Health Organisation (WHO, 1994), to increase the profile and standardisation of ototoxicity
monitoring in clinical practice. This can only be possible through active collaboration between the
associated professional bodies.
Acknowledgements:
Special thanks are due to all the clinicians that responded to the survey.
References
AAA. 2009. American Academy of Audiology Position Statement and Clinical Practice Guidelines:
Ototoxicity monitoring [Online].
http://www.audiology.org/resources/documentlibrary/Documents/OtoMonPositionGuideline.pdf
[Accessed 13/10/2013].
ASHA. 1994. American Speech-Language-Hearing Association Guidelines for the audiologic
management of individuals receiving cochleotoxic drug therapy [Online].
http://www.asha.org/policy/GL1994-00003.htm [Accessed 03/10/2013].
WHO. 1994. Report of an informal consultation on strategies for prevention of hearing impairment
from ototoxic drugs. World Health Organisation.
http://www.who.int/pbd/deafness/ototoxic_drugs.pdf
Everything an Audiologist needs to know about Speech in Noise
D. Hewitt, Audiology Department, Queen Alexandra Hospital, Portsmouth, UK
I am an NHS Audiologist and a member of the Portsmouth team based at Queen Alexandra
Hospital. Working in Adult Rehabilitation I am reminded every week that the problem for a
significant number of people is hearing Speech in Noise and not hearing Speech in Quiet. Not quite
so regularly but often enough I meet people who have normal hearing as measured by PTA but feel
strongly they are worse than others when it comes to hearing Speech in Noise. And experience has
taught me to ignore the marketing brochures and not promise patients that hearing aids will be of
any great assistance in noisy places.
All of this means that I do not feel I am helping my patients with their hearing problems as
much as I want to and that my theoretical knowledge has too many gaps. My theoretical training
was the Audiology MSc at the Southampton University ISVR and my dissertation required the
digestion of many Speech in Noise research papers. Ever since I have frequently searched through
the latest published papers on the topic. My reading to date has included several neuroscience
journals such as Brain and the Journal of Neuroscience, as well as more audiology-related journals such as JASA and JRRD.
Whilst, as always, the more questions you get answers to, the more additional questions seem to appear, I have recently begun to feel that some of my knowledge gaps have been filled. This presentation
is an attempt to capture what I think I now know about Speech in Noise and I frequently use its
content in the conversations I have with patients about their hearing difficulties. I hope you find it
similarly useful.
Declaration of interest
The author reports no conflicts of interest. The author alone is responsible for the content and writing of the presentation.
Poster Abstracts: General sensation and perception
1. Test-retest reliability of the Test of Attention in Listening (TAIL)
B.S. Stirzaker, H.J. Stewart and S. Amitay, MRC Institute of Hearing Research, Nottingham, UK
The Test of Attention in Listening (TAIL) is a computerized behavioural test which quantifies the attentional processes that mediate auditory processing performance (Zhang et al, 2012). From decisions regarding the frequency or location of two non-verbal tones, the TAIL extracts five independent outcome measures: information processing efficiency, involuntary orientation of attention to both frequency and location, and conflict resolution for frequency and location.
The TAIL has been successfully validated as a test of auditory attention; Stewart and Amitay
(2013) found that the TAIL’s measures successfully capture a range of attentional types, and that
TAIL’s outcome measures significantly correlate with existing tests of attention which use visual
and/or auditory stimuli. While Zhang et al (2012) showed that the attentional effects measured
through TAIL are resistant to rapid learning, its test-retest reliability is yet to be formally assessed.
The current study examined the test-retest reliability of the TAIL. On the basis of previous test-retest procedures for attention measures (e.g., Chan, Hoosain & Lee, 2002), the TAIL was administered at time 1 (i.e. baseline test), and was delivered again (time 2) after one of three inter-assessment intervals: immediately, after 90 minutes, or between seven and fourteen days following baseline assessment. Normal-hearing participants were recruited from a healthy population and randomly allocated to one of the three inter-assessment intervals. TAIL scores were compared between times 1 and 2 for each of the three groups; the scores were found to be consistent across all time frames.
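For readers wishing to reproduce this kind of comparison, the sketch below shows one standard way of quantifying test-retest agreement between time-1 and time-2 scores. It is not the study’s analysis code: the data are synthetic and the simple Pearson-correlation approach is an illustrative assumption.

import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)

# Hypothetical scores on one TAIL outcome measure for 20 participants.
time1 = rng.normal(loc=500, scale=50, size=20)        # baseline (e.g. RTs, ms)
time2 = time1 + rng.normal(loc=0, scale=15, size=20)  # retest: baseline plus noise

r, p = pearsonr(time1, time2)
print(f"test-retest r = {r:.2f}, p = {p:.3f}")

In practice an intraclass correlation is often preferred, since it is also sensitive to systematic shifts between sessions, not just their linear association.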
These preliminary findings confirm our initial prediction that the five outcome measures derived from TAIL have test-retest reliability across all proposed time intervals. These results
further support the use of the TAIL as an evidence-based clinical and research tool for assessing the
effect of auditory attention on auditory processing.
Acknowledgements
Supported by the Medical Research Council, UK.
References
Chan R.C., Hoosain R. & Lee T.M. 2002. Reliability and validity of the Cantonese version of the
test of everyday attention among normal Hong Kong Chinese: a preliminary report. Clin
Rehabil, 16, 900-909.
Stewart H.J. & Amitay S. 2013. How the Test of Attention in Listening (TAIL) relates to other tests
of attention. Presented at British Society of Audiology Annual Meeting, Keele, United Kingdom,
Abstract #84.
Zhang Y.-X., Barry J.G., Moore D.R. & Amitay S. 2012. A new test of attention in listening (TAIL)
predicts auditory performance. PLoS One, 7, e53502.
2. Effects of attention and intention on auditory streaming
A.J. Billig, M.H. Davis and R.P. Carlyon, MRC Cognition & Brain Sciences Unit, UK
We investigate the extent to which attention and intention affect the perception of repeated ABA- patterns, in which A and B represent pure tones of two different frequencies, and “-” indicates a
silent gap of the same duration. These sequences can either be perceived in an “integrated” form
(with a characteristic galloping rhythm) or as two “segregated” streams, each containing tones of a
single frequency. The integrated pattern is typically experienced initially, with phases of segregated
and integrated perception then alternating in a bistable fashion.
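For illustration, a minimal sketch of how such an ABA- sequence might be generated is given below; the frequencies, tone duration and ramp length are invented for the example and are not the parameters used in these experiments.

import numpy as np

def aba_sequence(f_a=500.0, f_b=707.0, tone_dur=0.125, n_triplets=10, fs=44100):
    n = int(tone_dur * fs)
    t = np.arange(n) / fs
    # 10-ms linear onset/offset ramps to avoid audible clicks
    ramp = np.minimum(1.0, np.minimum(t, t[::-1]) / 0.01)
    tone_a = np.sin(2 * np.pi * f_a * t) * ramp
    tone_b = np.sin(2 * np.pi * f_b * t) * ramp
    silence = np.zeros(n)  # the "-" gap, same duration as a tone
    triplet = np.concatenate([tone_a, tone_b, tone_a, silence])
    return np.tile(triplet, n_triplets)

seq = aba_sequence()
print(f"generated {seq.size / 44100:.2f} s of ABA- sequence")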
In the first experiment, the ABA- sequence was presented in one ear. In the “Attend” condition,
listeners indicated their percept throughout and/or detected occasionally delayed B tones. In the
“Switch” condition, listeners performed a demanding task in the contralateral ear before switching
their attention to the tones for the final few seconds of each sequence. Both measures indicated that
segregated perception was significantly less likely toward the end of the “Switch” condition,
compared to the same point in the “Attend” condition. This may indicate that streaming is reduced
in the absence of attention, or that the act of switching attention increases the probability of
resetting perception to integration. We partially replicate previous findings and show that subjective
and objective measures of streaming can be collected simultaneously. Performance on the objective
task was better when the pattern was reported as integrated than as segregated.
In a second experiment, we will objectively investigate whether the perception of ABA- patterns can be intentionally influenced by attending either to the triplet as a whole (to promote
integration), or selectively to tones of one frequency (to promote segregation). Deviants of two
types will be embedded in the sequences, the first of which will be easier to detect during
integration, and the other during segregation. Listeners will be asked to detect both types
simultaneously, while either allowing their perception to wander freely, or attending to the tones in
such a way as to promote integration or segregation.
In a third experiment, listeners will continuously report their percept while we manipulate
intention as in the previous experiment and record electro- and magnetoencephalographic activity.
We will examine whether electrophysiological markers of streaming are elicited in a manner that
supports intention affecting not only subjective reports, but perception per se. Initial data from these
second and third experiments will be presented and discussed.
Declaration of interest
The authors report no conflicts of interest. The authors alone are responsible for the content and
writing of the paper.
3. Listening effort and fatigue: Insights from pupillometry
R. McGarrigle, K. J. Munro, P. Dawes, and A. Stewart, School of Psychological Sciences,
University of Manchester, UK.
Individuals with hearing loss commonly report feelings of fatigue and stress as a result of the mental demands of listening in everyday situations. Previous attempts have been made to characterise this phenomenon by measuring listening effort objectively. One such objective measure of listening effort
is pupillometry. Pupil size is sensitive to changes in attention and memory load, and has been shown
to reflect the mental effort required to understand speech in challenging listening environments
(Zekveld et al, 2010). Pupil size has also been shown to reflect subjective fatigue (e.g. from sleep
deprivation) (Morad et al, 2000). However, it remains unclear whether or not pupil size can
characterise listening effort and/or listening-related fatigue in more naturalistic listening tasks.
We developed a novel speech-picture verification (SPV) task which involves speech passages
being presented with multi-talker babble noise in two different listening conditions (Easy/Hard).
Pupil size was recorded using an eyetracker, and the pupil response was recorded over the course of
speech processing. Four 3-second time-bins were created to analyse the ‘effort’ and ‘fatigue’
responses separately. The main findings were that pupil sizes were significantly smaller in the
‘Hard’ versus the ‘Easy’ condition in time-bin 4 (the final epoch of each trial). Post-hoc analysis revealed that baseline pupil sizes decreased as the experiment progressed, and the ‘fatigue’ effect was
largest in the final epoch of the experiment. No significant difference in pupil size was found
between listening conditions in the first time-bin.
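A schematic sketch of this kind of time-bin analysis is given below: each trial’s pupil trace is split into four 3-second bins and conditions are compared within a bin. The sampling rate, subject count and the synthetic traces are illustrative assumptions, not the study’s data.

import numpy as np
from scipy.stats import ttest_rel

fs = 60                # assumed eyetracker sampling rate (Hz)
bin_len = 3 * fs       # samples per 3-second bin
n_subj, n_bins = 16, 4

rng = np.random.default_rng(1)
# Fabricated per-subject pupil traces (subjects x samples), for illustration only.
easy = rng.normal(4.0, 0.2, (n_subj, n_bins * bin_len))
hard = rng.normal(4.0, 0.2, (n_subj, n_bins * bin_len))

def bin_means(traces):
    # Reshape to (subjects, bins, samples-per-bin) and average within each bin.
    return traces.reshape(n_subj, n_bins, bin_len).mean(axis=2)

easy_bins, hard_bins = bin_means(easy), bin_means(hard)
t, p = ttest_rel(easy_bins[:, 3], hard_bins[:, 3])  # compare the final time-bin
print(f"time-bin 4: t = {t:.2f}, p = {p:.3f}")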
These findings suggest that pupillometry can capture listening-related fatigue. An objective
measure of listening-related fatigue may serve as a supplementary tool used to assess hearing aid
benefit in the clinic.
Acknowledgements
Supported by the Castang Foundation.
References
Morad, Y., Lemberg, H., Yofe, N. & Dagan, Y. 2000. Pupillography as an objective indicator of
fatigue. Curr Eye Res, 21, 535-542.
Zekveld, A., Kramer S. & Festen, J. 2010. Pupil response as an indication of effortful listening: The
influence of sentence intelligibility. Ear Hear, 31, 480-490.
4. Using adaptation to investigate the neural mechanisms of attention in the human auditory
cortex
J. de Boer, S. Gibbs and K. Krumbholz, MRC Institute of Hearing Research, Nottingham, UK
Single neuron recordings in auditory cortex have suggested that attention causes a sharpening of
neural frequency selectivity (Fritz et al., 2003). In humans, neuroimaging studies have reported
similar effects when using notched-noise masking to estimate cortical frequency selectivity
(Ahveninen et al, 2011). There is a possibility, however, that these results were confounded by
differences in attentional load between different masking conditions.
Here, we tested the effect of selective attention on cortical frequency tuning directly, using an
adaptation paradigm previously used in the visual system (Murray & Wojciulik, 2004). In this
paradigm, the feature selectivity of cortical neurons is assessed by measuring the degree of
stimulus-specific adaptation of the gross evoked response as a function of the difference between
the adapting stimulus and the subsequently presented probe stimulus. If attention causes an increase
in neural selectivity, it would be expected that adaptation becomes more stimulus specific, that is,
more strongly dependent on the adapter-probe difference.
Auditory-evoked potentials (AEPs) were recorded from 18 participants performing a dichotic
listening task. Tone sequences comprising four equally probable frequencies were presented to one
ear, while, concurrently, a sequence of waxing or waning noises was presented to the other ear. The
participants were instructed to attend to either the tones or the noises (alternating every 2.5 min) and to
detect rare oddballs within the attended stimulus sequence. Only AEPs to the tones were recorded.
As expected, AEP amplitude was significantly larger when the tones were attended than when
they were ignored. Furthermore, the effect of attention on AEP amplitude was significantly greater
when the evoking tone was immediately preceded by a tone of a different frequency than when it
was preceded by a tone of the same frequency. This suggests that the adaptation caused by
preceding tones was more frequency-specific when the tones were attended than when they were
unattended, implying that selective attention caused an increase in cortical frequency selectivity.
Acknowledgements
Supported by the MRC.
References
Fritz J., Shamma S., Elhilali M. & Klein D. 2003. Nat Neurosci, 6(11), 1216-1223.
Ahveninen J., Hamalainen M., Jaaskelainen I.P., Ahlfors S.P., Huang S. & Lin F.H. 2011. Proc Natl Acad Sci USA, 108, 4182-4187.
Murray S.O. & Wojciulik E. 2004. Nat Neurosci, 7, 70-74.
5. Differential effects of training regimen on the learning of speech and non-speech
discrimination
S. Amitay* and K. Banai§, *MRC Institute of Hearing Research, Nottingham, UK, §Department of
Communication Disorders, University of Haifa, Israel
While perceptual learning of speech has been shown to generalise more widely when trained using a variable regimen (e.g., Lively et al, 1993), this is not the case for the perceptual learning of acoustic feature discrimination (e.g., Banai et al, 2010). However, the two types of stimuli have rarely been assessed using comparable training schedules.
Listeners trained concurrently on discrimination of either four synthetic syllable contrasts or four tone durations. During training, stimuli were presented in either a blocked (160 trials per stimulus, run consecutively) or a roving (all four stimuli randomly interleaved within each 160-trial block) training schedule. All listeners trained for a total of 640 trials per session for 9 sessions, interspersed over 1-2 months. All completed a pre- and post-test that included testing on all stimuli, presented either in blocks or roved for separate groups. Regardless of training schedule, speech-trained listeners showed significant improvement on speech stimuli compared to duration-trained listeners, who showed no learning effects whatsoever. Learning on the speech stimuli did not generalise to the acoustic stimuli. Although roving impeded naïve discrimination performance, training with either blocked or roved speech stimuli resulted in similar post-training thresholds for both blocked and roved speech stimuli. It is unclear why no learning was observed for the blocked duration training.
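The two schedules are easy to picture in code. The sketch below builds both trial orderings under the assumptions stated in the abstract (four stimuli, 160 trials per stimulus per block); the stimulus labels are placeholders.

import random

stimuli = ["stim_1", "stim_2", "stim_3", "stim_4"]  # four contrasts or durations
trials_per_stim = 160

# Blocked schedule: each stimulus presented in its own 160-trial run.
blocked = [s for s in stimuli for _ in range(trials_per_stim)]

# Roving schedule: all four stimuli randomly interleaved within each
# 160-trial block (40 trials of each stimulus per block).
roved = []
for _ in range(len(stimuli)):
    block = stimuli * (trials_per_stim // len(stimuli))
    random.shuffle(block)
    roved.extend(block)

print(len(blocked), len(roved))  # 640 trials per session in both schedules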
We conclude that roving and blocked training regimens were equally efficient (or inefficient)
for perceptual learning. Whereas speech learning appears robust, duration-discrimination learning
seems highly sensitive to the particulars of the training schedule.
Acknowledgements
Supported by The National Institute of Psychobiology in Israel and the Medical Research Council,
UK.
References
Banai, K., Ortiz, J. A., Oppenheimer, J. D., and Wright, B. A. 2010. Learning two things at once:
differential constraints on the acquisition and consolidation of perceptual learning.
Neuroscience, 165(2), 436-444.
Lively, S. E., Logan, J. S., Pisoni, D. B., 1993. Training Japanese listeners to identify English /r/
and /l/. II: The role of phonetic environment and talker variability in learning new perceptual
categories. J Acoust Soc Am, 94(3 Pt 1), 1242-1255.
6. A novel method for predicting speech intelligibility based on counting time-frequency bins
S. Man Kim, N. Moslemi Yaldaei and S. Bleeck, ISVR, University of Southampton, Southampton,
UK
Finding an objective intelligibility measure that predicts subjective speech intelligibility is an important ongoing research area, because existing measures do not perform well in all situations and subjective listening tests are costly and time-consuming (Loizou, 2013). Several objective measures have been proposed which rely, amongst others, on objective speech quality measures such as signal-to-noise ratio (SNR), signal-to-distortion ratio (SDR) or correlation coefficients (Loizou, 2013). However, none of these methods is fully satisfactory: for example, they do not explain how the intelligibility of noisy speech depends on the type of noise, and they do not correctly predict the intelligibility changes produced by de-noising algorithms.
We propose here an objective measure for predicting the intelligibility of noisy speech that is based on counting the number of perceived time-frequency (T-F) speech component bins. Intelligibility is estimated by counting, frame-wise, the number of T-F bins above and below threshold. This strategy is motivated by the assumption that humans have suprathreshold mechanisms that enhance speech intelligibility, so the number of actually perceived speech components is a more important factor for speech intelligibility than the SNR or SDR.
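A highly simplified sketch of the counting idea follows. It assumes separate access to the clean speech and the noise, and the STFT settings and 0-dB audibility criterion are illustrative choices rather than the authors’ actual parameters.

import numpy as np
from scipy.signal import stft

def count_audible_bins(speech, noise, fs, threshold_db=0.0):
    # Short-time Fourier transforms of speech and noise.
    _, _, S = stft(speech, fs=fs, nperseg=512)
    _, _, N = stft(noise, fs=fs, nperseg=512)
    speech_db = 20 * np.log10(np.abs(S) + 1e-12)
    noise_db = 20 * np.log10(np.abs(N) + 1e-12)
    # A T-F bin counts as "perceived" when speech exceeds noise by threshold_db.
    audible = speech_db > noise_db + threshold_db
    # Frame-wise count of audible bins; the summary score is their mean.
    per_frame = audible.sum(axis=0)
    return per_frame.mean()

fs = 16000
rng = np.random.default_rng(2)
speech = rng.normal(size=fs)        # stand-ins for real speech and noise signals
noise = 0.5 * rng.normal(size=fs)
print(f"mean audible T-F bins per frame: {count_audible_bins(speech, noise, fs):.1f}")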
For evaluation, the speech intelligibility predictions of the proposed method are compared with speech intelligibility scores from subjective listening trials and with the results of state-of-the-art conventional speech intelligibility prediction algorithms such as STOI (Taal et al, 2011) and CSII (Kates & Arehart, 2005). Preliminary results show that the proposed method is superior in various noise and noise-reduction situations. Further results from ongoing objective measures and subjective listening tests will be shown on the poster.
Acknowledgements
The work leading to this deliverable and the results described therein have received funding from the People Programme (Marie Curie Actions) of the European Union’s Seventh Framework Programme FP7/2007-2013/ under REA grant agreement n° PITN-GA-2012-317521.
References
Loizou P. C. 2013. Speech Enhancement: Theory and Practice, Second Edition. CRC Press.
Taal C. H., Hendriks R. C., Heusdens R. & Jensen J. 2011. An algorithm for intelligibility
prediction of time–frequency weighted noisy speech. IEEE Transactions on Audio, Speech, and
Language Processing, 19(7), 2125-2136.
Kates J. M. & Arehart K. H. 2005. Coherence and the speech intelligibility index. J Acoust Soc Am, 117(4), 2224-2237.
7. Informational masking of monaural speech by a single contralateral formant
B. Roberts* and R.J. Summers*, *Psychology, School of Life and Health Sciences, Aston University,
Birmingham, UK
The role of the time-varying properties of speech in separating a target voice from an interfering
voice remains unclear. Recent evidence suggests that when masking is primarily informational, the
impact of a single extraneous formant on intelligibility becomes greater as the depth of variation in
its frequency contour increases (Roberts et al, 2014). However, three design features constrained the
range of formant-frequency variation tested and restricted the generality of the conclusions. First,
the three target formants constituting the speech analogues were split between the two ears (left=F1;
right=F2+F3). Second, listeners were able to hear each stimulus up to six times before transcribing
it. Third, the extraneous formant was added in the same ear as the target F1, which restricted the
range of formant-frequency variation that could be tested without crossovers or close approaches
between adjacent formants. This study evaluated further the impact of formant-frequency variation
in an interferer on speech intelligibility using a method adapted to overcome these constraints.
Three-formant analogues of natural sentences were synthesized using second-order resonators
and a monotonous glottal source (F0=140 Hz). Target formants were always presented monaurally;
the ear receiving them was assigned randomly on each trial. A competitor for F2 (F2C) was added
in the contralateral ear (target ear = F1+F2+F3; other ear = F2C), which listeners must reject to
optimize recognition. Listeners heard each stimulus only once before entering their transcription.
Different competitors were created by inverting the frequency contour of F2 about its geometric
mean and varying its depth over a range extending from constant (0%) to twice that of the natural
utterances (200%). F2C always had the same F0, level, and amplitude contour as the target F2.
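The competitor construction can be expressed compactly: inverting about the geometric mean on a log-frequency axis with a scalable depth gives F2C(t) = gm·(gm/F2(t))^d, where d = 0 yields a constant (0%) contour, d = 1 a pure inversion (100%) and d = 2 double depth (200%). The sketch below implements this; the toy F2 contour is invented for illustration.

import numpy as np

def make_f2c(f2_contour, depth):
    gm = np.exp(np.mean(np.log(f2_contour)))  # geometric mean frequency
    # Inversion about gm on a log-frequency axis, with scalable depth.
    return gm * (gm / f2_contour) ** depth

f2 = 1500 + 300 * np.sin(np.linspace(0, 4 * np.pi, 200))  # toy F2 contour (Hz)
for depth in (0.0, 1.0, 2.0):
    f2c = make_f2c(f2, depth)
    print(f"depth {depth:.0%}: F2C range {f2c.min():.0f}-{f2c.max():.0f} Hz")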
Despite the differences from earlier studies, in which the target formants were split between
ears and repeat listening was permitted, the adapted method was effective at assessing the impact of
F2C on intelligibility. The results confirm and extend those of Roberts et al (2014). Adding F2C
typically reduced intelligibility, presumably as a result of informational masking; the impact was
least for 0%-depth (constant) F2Cs and increased with depth up to ~100% scaling, but not
thereafter. This outcome is consistent with the idea that the ability to reject an extraneous formant
does not depend on target-masker similarity in depth of formant-frequency variation.
Acknowledgements
Thanks to Peter Bailey. Work supported by ESRC.
References
Roberts B., Summers R.J. & Bailey P.J. 2014. Formant-frequency variation and informational
masking of speech by extraneous formants: Evidence against dynamic and speech-specific
acoustical constraints. J Exp Psychol Hum Percept Perform. Online First Publication, May 19,
2014.
8. Assessing the interplay of linguistic factors and background noise characteristics in successful speech-in-noise listening
A. Heinrich*, S. Knight*, M. Young*, R. Moorhouse§ and J. G. Barry* *MRC Institute of Hearing
Research, University of Nottingham, Nottingham, UK; § University of Nottingham, Nottingham,
UK
Contextual information supports speech-in-noise listening (Bilger et al. 1984) with the relative
importance of this information increasing for older adults (Pichora-Fuller et al, 1995). Studies
investigating contextual information are usually based on the SPIN-R sentence pair test (SPIN-R;
Bilger et al. 1984), where the same sentence-final word can either be predicted or not from the
preceding context. However, these sentence pairs do not differ purely in the relative usefulness of the sentential content: they also vary in duration, stress pattern and semantic richness (high-predictability sentences are semantically varied, while their low-predictability counterparts rely on repeated stock phrases). Additionally, the 12-talker babble traditionally used as masking noise has a steep frequency roll-off above 1.5 kHz, so it sounds rumbly and muffled. Stimuli must therefore be presented at very high, ecologically improbable SNRs for suitable levels of masking to occur. In this study, we investigated the role of sentence structure and noise effects in determining overall performance on SPIN-type tasks.
The study compared the performance of 12 young normally-hearing listeners on the original SPIN-R sentences and the recently developed SPIN-UK sentences. The SPIN-UK sentence pairs are matched for duration, stress pattern, and semantic complexity. All sentences were recorded using a male Standard British English speaker. They were presented either in the original 12-talker babble or in a new 12-talker babble with a natural 6 dB/oct roll-off, at SNRs of -2 dB (SPIN-UK) and -15 dB (SPIN-R). These levels were chosen to produce comparable intelligibility levels for high-predictability sentences.
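As an aside, a 6 dB/oct roll-off is exactly the slope of a first-order low-pass filter, so such a spectral shaping is simple to impose. The sketch below does this; white noise standing in for recorded babble and the 500 Hz corner frequency are illustrative assumptions, not the stimulus parameters of this study.

import numpy as np
from scipy.signal import butter, lfilter

fs = 44100
rng = np.random.default_rng(3)
babble = rng.normal(size=fs * 5)  # stand-in for a 12-talker babble recording

# First-order Butterworth low-pass: rolls off at 6 dB per octave above the corner.
b, a = butter(N=1, Wn=500 / (fs / 2), btype="low")
shaped = lfilter(b, a, babble)
print(f"output RMS: {np.sqrt(np.mean(shaped**2)):.3f}")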
There were main effects of sentence set and predictability, and two-way interactions of babble
type x predictability and sentence set x predictability. The main effect of sentence set showed that
accuracy was higher for SPIN-UK than SPIN-R sentences. The main effect of predictability showed
that accuracy was higher for high- than low-predictability sentences. The babble type x
predictability interaction revealed that traditional SPIN babble produced higher accuracy than the
new babble, but only within low predictability sentences. The sentence set x predictability
interaction revealed that the SPIN-UK sentences produced higher accuracy than the SPIN-R
sentences, but only within low predictability sentences.
These results suggest that in addition to semantic predictability, broader linguistic factors like
prosody and semantic richness contribute to speech-in-noise listening. Design of the masking noise
also makes an important contribution to performance and can influence conclusions about general
speech-in-noise listening. In future studies we plan to reassess the listening abilities of older adults
relative to younger adults using the more ecologically valid SPIN-UK sentences.
Acknowledgments
Supported by the MRC, Action on Hearing Loss, and the Biotechnology and Biological Sciences Research Council (BBSRC).
References
Bilger, R. C., Nuetzel, J. M., Rabinowitz, W. M. & Rzeczkowski, C. 1984. Standardization of a test
of speech perception in noise, J Speech Hear Res, 27(1), 32-48.
Pichora‐Fuller, M. K., Schneider, B. A. & Daneman, M. 1995. How young and old adults listen to
and remember speech in noise, J Acoust Soc Am, 97(1), 593-608.
9. Developing a speech-in-noise test to measure auditory fitness for duty in military personnel: using the Coordinate Response Measure
H.D. Semeraro*, D. Rowan*, R.M. van Besouw* and A. Allsopp§, *Institute of Sound and Vibration
Research, Southampton, UK, § Institute of Naval Medicine, Gosport, UK.
The ability to listen to commands in noisy environments and understand acoustic signals, whilst
maintaining situational awareness, is an important skill for military personnel, and can be critical for
mission success. Pure-tone audiometry (PTA) is currently used in the UK to assess whether
personnel have sufficient hearing for duty, though it is known that PTA is poor at predicting speech
intelligibility in noise (SIN). The aim of this project is to develop and validate a SIN test suitable for
assessing the auditory fitness for duty of military personnel.
Following a review of existing SIN tests, the Coordinate Response Measure (CRM) was
selected, partly due to the high face validity when compared to command structure (CRM sentence
format: “Ready [call-sign], go to [colour] [number] now”). The CRM was re-recorded using
NATO call-signs and a larger selection of call-signs (18) and colours (9) than previous versions.
Psychometric functions were estimated with 20 normal-hearing listeners for each call-sign, colour
and number in speech-spectrum noise; correction factors were applied to equalise the intelligibility
of the stimuli. Analysis of confusion matrices for the call-signs, colours and numbers showed that
no two words were being consistently confused.
Next, the CRM was implemented in a 2-down, 1-up adaptive procedure, again using speech-spectrum noise. Speech recognition thresholds (SRTs) were estimated four times using two scoring methods, first responding to the call-sign, colour and number, and second responding only to the colour and number, using the Triple Digit Test (TDT) for comparison, with 30 normal-hearing participants. Any learning effect across the repeats was less than 0.7 dB. The test-retest reliability, expressed as the 95% confidence interval on any one SRT estimate on the CRM, was 2.1 dB when responding to all three parts of the sentence and 1.9 dB when only responding to two parts, compared to 2.6 dB on the TDT.
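A 2-down, 1-up track converges on the 70.7%-correct point of the psychometric function. The minimal simulation below illustrates the rule; the simulated listener, step size and reversal-averaging rule are illustrative assumptions, not the authors’ implementation.

import random

def run_staircase(true_srt=-8.0, start_snr=0.0, step=2.0, max_reversals=8):
    snr, correct_in_row, direction = start_snr, 0, None
    reversals = []
    while len(reversals) < max_reversals:
        # Simulated listener: logistic psychometric function centred on true_srt.
        p_correct = 1 / (1 + 10 ** (-(snr - true_srt) / 4.0))
        if random.random() < p_correct:
            correct_in_row += 1
            if correct_in_row == 2:        # two correct in a row -> harder (SNR down)
                correct_in_row = 0
                if direction == "up":
                    reversals.append(snr)  # track turned: log a reversal
                direction = "down"
                snr -= step
        else:                               # one error -> easier (SNR up)
            correct_in_row = 0
            if direction == "down":
                reversals.append(snr)
            direction = "up"
            snr += step
    return sum(reversals[-6:]) / 6          # SRT = mean of the last reversals

print(f"estimated SRT: {run_staircase():.1f} dB SNR")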
A similar study of 30 hearing-impaired military personnel (bilateral hearing losses ranging from mild to severe) is ongoing; assuming a satisfactory outcome from that study, future work will focus on assessing the external validity of the CRM using simulated combat scenarios.
Acknowledgement
This work is funded by the Surgeon General of the Ministry of Defence.
10. Speech understanding in realistic conditions: effects of number and type of interferers,
and of head orientation
J. F. Culling and S. Priddy, School of Psychology, Cardiff University, Park Place, Cardiff, U.K.
In real life, interfering noise is frequently uninterrupted, multi-source, spatially distributed and reverberant. Three experiments measured speech reception thresholds (SRTs) against uninterrupted interferers based on continuous discourse in different virtual environments. Experiments 1 and 2
compared the effects of speech, reversed speech, speech-shaped noise and speech-modulated
speech-shaped noise as a function of the number of interfering sources at a constant overall noise
level (controlled at source). Experiment 1 manipulated reverberation using a simulated room, while
Experiment 2 used recordings from an acoustic manikin in a real, but unoccupied, restaurant.
Experiment 3 examined the effect of the manikin’s head orientation in the presence of eight
interferers (speech or speech-shaped noise) in the same restaurant.
Experiment 1 found that SRTs were elevated in reverberation and, except in the case of
reverberated speech-shaped noise, for larger numbers of interferers. The effect of reverberation
decreased with increasing numbers of interferers. Reversed speech produced the lowest and
continuous noise the highest SRTs in most cases. The results suggest beneficial effects of masker
periodicity and modulation for 1 or 2 interfering sources (SRTs lower for modulated than
continuous sources, and lower for single-voice speech and reversed speech sources than for
modulated noise). There were detrimental effects of masker intelligibility (lower SRTs with
reversed than with forward speech interferers) with 2 interferers, and there was evidence of
modulation masking with 8 interferers (lowest SRTs for continuous noise). Experiment 2 found a
similar pattern of results to the reverberant case in Experiment 1. Experiment 3 found consistently
beneficial effects of head orientation away from the target source. This benefit was driven by
improved target-speech level at the ear turned towards the source. As with 8 interferers in
Experiments 1 and 2, speech interferers produced 2 dB higher SRTs than noises.
Declaration of interest
The authors report no conflicts of interest. The authors alone are responsible for the content and
writing of the paper.
11. The effect of changes in the acoustic environment on speech intelligibility
S. Aspeslagh*§, F. Clark§, M.A. Akeroyd* and W.O. Brimijoin*, *MRC/CSO Institute of Hearing
Research - Scottish Section, Glasgow Royal Infirmary, Glasgow, UK, §Department of Computing,
University of the West of Scotland, Paisley, UK
As listeners walk around a room or move from one room into another they experience changes in
the characteristics of the reverberation around them. It is reasonable to hypothesize that listeners are
constantly engaged in analysing the acoustics of the space around them in order to optimize speech
intelligibility. Previous studies have shown that prior exposure to reverberation is associated with
rapid improvements in speech intelligibility (Brandewie & Zahorik, 2010). However, reverberation
is not the only acoustic element that can change over time: static background noise changes in level
at the ear as listeners move closer or further from the source, and can completely change as
background noises themselves start or stop. Previous work from our laboratory has shown that
listeners’ ability to identify simple target words briefly drops as they are introduced to a novel
simulated acoustic environment. It takes on average around three seconds to return to optimum
performance and this time-course is the same regardless of level of hearing impairment, age, or type
of background noise or amount of reverberation.
Considering that in the previous experiment each environment consisted of a unique
combination of noise and reverberation that was entirely novel to the listener, we wished to measure
any drop in intelligibility when more subtle changes were made to the acoustics. Therefore, in the current experiment only a single element or a subset of elements was changed, rather than everything about a given environment. The three elements included were background noise,
reverberation, and target/noise angles. Listeners were asked to identify an ongoing stream of simple
words, and the noise, reverberation, angle, or a combination of these three were changed every nine
seconds.
The results suggest that when all three of the above elements of an acoustic environment
change, speech intelligibility is more affected than when only one or two elements change. These
results agree with previous work in that listeners’ experience with novel acoustic environments
increases speech intelligibility, but also demonstrate that the extent to which speech intelligibility
drops after a change is dependent on how substantially the acoustics are altered. As before, we
observed no significant difference in adaptation time between normal and hearing impaired
listeners, suggesting that the adaptation to changes in the acoustic environment is a process
independent of hearing impairment.
Acknowledgments:
The first author is supported by an MRC PhD Studentship. Supported by the MRC (U135097131), the Chief Scientist Office (Edinburgh), and the University of the West of Scotland.
References
Brandewie, E., & Zahorik, P. 2010. Prior listening in rooms improves speech intelligibility. J
Acoust Soc Am, 128(1), 291-299.
12. Native and non-native sentence comprehension in the presence of a competing talker
H. Valdes-Laribi*, D. Wendt§, E. MacDonald§, M. Cooke‡ and S. Mattys*, *Department of
Psychology, The University of York, York, UK, §Department of Electrical Engineering, Technical
University of Denmark, Lyngby, Denmark, ‡Language and Speech Laboratory, University of the
Basque Country, Vitoria, Spain
In everyday environments, we often have to attend to one person’s speech (target speech) while
ignoring another (competing speech). Competing speech can mask the target through energetic
masking (acoustic degradation at the periphery) and informational masking (higher-order, cognitive
aspects of masking). In this study, we tested the hypothesis that informational masking depletes
processing resources that would otherwise be allocated to recognising and understanding the target
signal.
Using a speeded picture selection task, we investigated native English listeners’ understanding of sentences varying in syntactic complexity, namely subject relatives (easy) and object relatives (hard), played against a competing talker (energetic and informational masking) vs. speech-modulated noise (energetic masking only). Object relative sentences have been shown to require more processing resources than subject relative sentences. If informational masking depletes shared processing resources, the detrimental effect of the competing talker relative to speech-modulated noise should be greater in the object relative than the subject relative condition.
Although participants were slower at responding to object relative sentences, there was no effect of
mask type or interaction with sentence type. Thus, within the constraints of this experiment, there
was no evidence that a competing talker requires greater processing resources than energetic
masking alone.
We then hypothesised that processing our sentences as a non-native listener might exacerbate
the interaction between mask and syntactic complexity because of the greater cognitive effort
involved in processing an L2. However, when native Danish speakers were tested on our stimuli,
their results were broadly equivalent to those of the native speakers, only slower. Thus, contrary to
research by other groups, we found no evidence that a competing talker requires greater processing
resources than energetic masking alone. An ongoing eye-tracking version of this experiment will
establish the nature of the discrepancy and the psycholinguistic mechanisms behind informational
masking.
Acknowledgements
This project has received funding from the European Union’s Seventh Framework Programme for
research, technological development and demonstration under grant agreement no FP7-PEOPLE-2011-290000.
13. Specificity effects within and beyond spoken word processing
D. Strori*, O. Scharenborg§ and S. Mattys*, *Department of Psychology, University of York, UK, §Centre for Language Studies, Radboud University Nijmegen, The Netherlands
Previous research indicates that lexical representations might include both linguistic and indexical
(talker-related) specifications (Goldinger, 1998). Recent evidence suggests that non-linguistic
sounds co-occurring with spoken words are also incorporated in our lexical memory. Pufahl and
Samuel (2014) paired spoken words with environmental sounds and observed that perceptual
identification accuracy for previously heard words dropped when the paired sound changed from
exposure to test. The authors attributed this effect to the word-sound association and claimed that
the specificity (indexical) effects extend beyond the speech domain. We argue that this "sound-specificity effect" might not be due so much to a word-sound association as to the different acoustic
glimpses of the words that the associated sounds create.
Our first experiment replicated the “voice specificity effect”, such that recognition accuracy for
previously heard words (spoken by one of two talkers) dropped when the voice changed from
exposure to test. The next experiments investigated the “sound specificity effect”, from an energetic
masking angle. In an analogy to the two voices, we paired the spoken words (spoken by the same
talker) with one of two car honk sounds and varied the level of energetic masking from exposure to
test. The honk sounds had the same on/off pattern and differed only in the frequency domain.
During the exposure phase participants (university students) heard word-honk pairs played
binaurally and completed a semantic judgement task on the word, to ensure its lexical encoding in
memory. After a short delay, participants completed the test phase, in which half of the previously
played words were paired with the other honk sound. Their task was to decide whether they had
heard the word in the first phase or not.
We did not observe a drop in recognition accuracy for previously heard words when the paired
sound changed as long as energetic masking was controlled. This was achieved by preserving the
temporal overlap between words and honks in both phases. However, when we manipulated the
temporal overlap to create an energetic masking contrast, accuracy dropped. The finding suggests
that calling for an expansion of the mental lexicon to include non-speech auditory information
might be premature. It also indicates that specificity effects do not easily extend beyond the speech
domain.
Acknowledgements
This project has received funding from the European Union’s Seventh Framework Programme for
research, technological development and demonstration under grant agreement no FP7-PEOPLE-2011-290000.
References
Goldinger, S. D. 1998. Echoes of echoes: An episodic theory of lexical access. Psych Rev,105, 251–
279.
Pufahl, A. & Samuel, A.G. 2014. How lexical is the lexicon? Evidence for integrated auditory
memory representations. Cog Psych, 70, 1-30.
14. Artificial grammar learning in young and older listeners
C. Füllgrabe*, E. Clifford* and B. Gygi§‡, *MRC Institute of Hearing Research, Nottingham, UK, §Nottingham Hearing Biomedical Research Unit, Nottingham, UK, ‡US Department of Veterans Affairs, Martinez, California, USA
Speech identification benefits from the fact that words are arranged in formal grammars which
provide context and enhance expectations for certain words. Similarly, non-speech environmental
sounds tend to occur in certain settings and sequences, and listeners are aware of these regularities
(Gygi & Shafiro, 2011).
It is well established that speech intelligibility, especially in the presence of background noise,
declines with age. However, age-related peripheral hearing loss alone often fails to explain the
entire deficit. Hence, the aim of this study was to investigate whether or not (i) the ability to learn
grammars for short (5-item) sequences declines with age, and (ii) the knowledge acquired through
grammar learning enhances identification performance in noise.
Participants were 17 young (age range = 19-30 yrs; mean = 22 yrs) and 17 older (age range =
62-76 yrs; mean = 68 yrs) listeners with bilaterally normal hearing (defined as audiometric
threshold ≤ 20 dB HL between 0.125 and 4 kHz). All older listeners were screened for neurological
dysfunction using the Mini Mental State Examination; all scored 27/30 or higher. The Digits
Forward part of the Digit Span test (Wechsler Adult Intelligence Scale, 1999) was administered to
all participants to ascertain that all could maintain in short-term memory sequences of at least five
items.
Target (i.e., grammatical) sequences, containing five consecutive sounds (A-E), were generated according to an artificial grammar based on a finite-state machine. Non-target sequences were composed of the same A-E sounds but violated the grammar rules. The sounds were five different environmental chimerae, obtained by combining the spectrum of one environmental sound (e.g. whistle) with the temporal envelope of another (e.g. footsteps).
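To make the generation procedure concrete, the sketch below walks a toy finite-state machine whose every path emits exactly five sounds (A-E). The transition table is invented for illustration; the actual grammar used in the study is not specified in the abstract.

import random

# state -> list of (emitted sound, next state); None marks the end of a sequence.
GRAMMAR = {
    0: [("A", 1), ("B", 1)],
    1: [("C", 2), ("D", 2)],
    2: [("B", 3), ("E", 3)],
    3: [("C", 4), ("A", 4)],
    4: [("D", None), ("E", None)],
}

def generate_target():
    state, seq = 0, []
    while state is not None:
        sound, state = random.choice(GRAMMAR[state])
        seq.append(sound)
    return seq

print(generate_target())  # e.g. ['A', 'C', 'E', 'A', 'D']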
Participants were initially trained to recognize the five chimerae in isolation. Then, after
listening passively to target sequences, participants were asked to recognize these sequences
amongst non-target sequences. To assess if participants had indeed learned the underlying artificial
grammar, the ability to classify new grammatical and agrammatical sequences as target or non-target sequences, respectively, was tested. Finally, participants had to identify the 4th sound in target and non-target sequences in the presence of masking noise at different signal-to-noise ratios. Three
additional cognitive tests were conducted to explain inter-individual and -group differences: the
Digits Backward from the Digit Span test and the Letter-Number-Sequencing test which are
assumed to probe working memory capacity, and the Verbal-Paired-Associates test to assess
explicit episodic memory performance.
Results show that both young and older listeners learned the artificial grammar and transferred their knowledge to new sequences. There was also a comparable benefit of grammaticality on sound-in-noise identification in the two age groups. We conclude that age does not affect the use of top-down information (such as grammaticality) to aid listening in noisy environments.
Declaration of interest
The authors report no conflicts of interest. The authors alone are responsible for the content and writing of the paper.
References
Gygi, B. & Shafiro, V. 2011. The incongruency advantage for environmental sounds presented in
natural auditory scenes. J Exp Psychol Human, 37, 551-565.
15. Sensitivity of SPiN-UK sentences to differences in children’s listening abilities
J.G. Barry*, S. Knight*, A. Heinrich*, R. Moorhouse§ and H. Roebuck*, *MRC Institute of Hearing
Research, Nottingham, UK, § University of Nottingham, Nottingham, UK
The SPIN sentence-pair test (Bilger et al, 1984) was designed to separately assess contributions to
successful listening in noise from auditory-phonetic speech discrimination abilities and cognitive
and linguistic abilities. Sentence pairs share the same final single syllable word, but differ in the
ease with which this word can be predicted from the preceding context. We recently developed the
SPiN-UK sentence pair test (Heinrich et al, 2014) and here, we consider the suitability of these
sentence pairs for assessing individual differences in speech-in-noise listening in children aged 8 to
10 years.
The children were subdivided using the ECLiPS (Barry & Moore, 2014) according to whether
they were typically-developing (TD-Child) or had listening difficulties (AP-Child). SPiN sentences
(predictable vs unpredictable) were presented in a British English four-talker babble (Heinrich) at a
high (+2dB) and low (-2dB) SNR. Listeners (young adults, n = 18; TD-Child, n = 14; AP-Child, n =
13) repeated each sentence as accurately as possible and the number of correct sentence final words
for each group was compared.
No floor effects were apparent for any group, suggesting that the sentences and SNRs used were suitable for the age range and listening abilities of the participants. There was a significant main effect of group (p<.001). The AP-Child group made more errors on all sentence types than the TD-Child group, who in turn made more errors than the adults, particularly at the unfavourable SNR. There was an interaction of Group x Sentence type x SNR (p<.001). Both groups of children made more errors at the low SNR, and they also gained less benefit from the sentence context than the adults in this condition. The AP-Child group performed equally poorly regardless of the relative predictability of the final word. This suggests that listening support from language abilities (high-predictability sentences) cannot offset their greater speech perception difficulties (low-predictability sentences).
Overall, the UK-IHR SPiN sentences can be used successfully with different child populations
and can provide interesting insights into age and disorder-related difficulties.
Acknowledgements
Supported by the MRC, Action on Hearing Loss, and the Biotechnology and Biological Sciences Research Council (BBSRC).
References
Barry J.G. & Moore D.R. 2014. Evaluation of Children’s Listening and Processing (ECLiPS) Manual, Edition 1.
Heinrich A., Knight S., Young M. & Barry J. 2014. Assessing the interplay of linguistic factors and background noise characteristics in successful speech in noise listening. BSA Abstract.
Bilger R.C., Nuetzel J.M., Rabinowitz W.M. & Rzeczkowski C. 1984. Standardization of a test of speech perception in noise. J Speech Hear Res, 27(1), 32-48.
16. Generalization of sensory and cognitive learning in typically developing children
C.F.B. Murphy*, D.R. Moore§ and E. Schochat*, *Department of Physical Therapy, Speech-Language Pathology and Occupational Therapy, University of São Paulo, São Paulo, Brazil, §Communication Sciences Research Center, Cincinnati Children’s Hospital Medical Center, Cincinnati, Ohio, USA.
Both bottom-up (sensory) and top-down (cognitive) mechanisms are involved in learning following
auditory training, as demonstrated by the far-transfer of auditory sensory learning to higher level
language skills (Moore et al, 2005) and top-down neural modulation of auditory sensory learning
(Bajo et al, 2010). However, direct comparison between sensory and cognitive training on untrained
auditory sensory and cognitive skills in children has not previously been attempted. This study
aimed to assess the extent to which generalization of learning occurred following each type of
training in typically developing children.
Sixty 5- to 8-year-old normal-hearing children were quasi-randomly assigned to 5 training groups: attention (AG), memory (MG), auditory sensory (SG), placebo (PG; drawing, painting) and untrained control (CG). Compliance measures (last level achieved during the game and the number of game blocks played), as well as mid-transfer (same domain: auditory digit span, sustained auditory attention, time-compressed speech) and far-transfer (different domain: phonological processing, word reading) measures were applied before and after training. All trained groups received eleven 50-min training sessions; the CG did not receive any intervention.
All trained groups showed near-transfer: significant learning (higher levels) on the trained task. No significant correlation was found between the number of game blocks played and the last level achieved. On pre- to post-training measures, most groups showed (test-retest) improvements on most tasks. There was a group x training interaction for the digit span task (F(4,117) = 2.73, p = 0.038), with higher spans after MG training. The results show that both bottom-up and top-down training can lead to learning on the trained task, as well as mid-transfer learning on a task (auditory digit span) in the same domain as those trained.
Acknowledgements
Supported by the CAPES/Coordenação de Aperfeiçoamento de Pessoal de Nível Superior.
References
Bajo V.M., Nodal F.R., Moore D.R. & King A.J. 2010. The descending corticocollicular pathway
mediates learning-induced auditory plasticity. Nat Neurosci, 13(2):253-60.
Moore D.R., Rosenberg J.F. & Coleman J.S. 2005. Discrimination training of phonemic contrasts
enhances phonological processing in mainstream school children. Brain Lang; 94(1):72-85.
17. Imaging brain structures in children with auditory neuropathy spectrum disorder
H. Cooper*, L. Halliday§, D. Bamiou‡ and C. Clark*, *Institute of Child Health, University College London, UK, §Division of Psychology and Language Sciences, University College London, UK, ‡Ear Institute, University College London, UK
It is believed that the brain requires both normal intrinsic and extrinsic factors in order for typical
development to take place. Auditory neuropathy spectrum disorder (ANSD) often results in
fluctuating hearing loss and auditory temporal processing abnormalities leading to degraded and
inconsistent auditory input (Rance et al, 1999). ANSD is a collection of test results rather than a
specific disorder in its own right. The pattern of test results required for a diagnosis of ANSD is
present otoacoustic emissions and/or cochlear microphonic with absent or severely abnormal
auditory brainstem response (Starr et al, 1996). The abnormal subcortical transmission of sound
experienced by children with ANSD therefore may result in the disruption of normal cortical
development. Very little is known about cortical neuromaturation and neural plasticity in children
with ANSD.
Children from around the UK who are aged six and over and have a diagnosis of ANSD are currently being invited to take part in this study. A detailed test battery has been developed in order to comprehensively describe and analyze ANSD.
The study uses a variety of magnetic resonance imaging (MRI) techniques to examine both
structural and functional brain connectivity in children with ANSD. Diffusion tensor imaging is a
quantitative MRI technique that is capable of measuring the microstructure of the brain tissue. Tract
based spatial statistics (Smith et al, 2006) is being used to build a map of structural differences in
brain white matter networks in children with ANSD compared to normally hearing controls. Resting
state fMRI is used to examine differences in functional connectivity between children with ANSD
and controls. Cortical thickness is being measured using conventional T1-weighted MRI.
Up to 40% of cases of ANSD are thought to have a genetic origin (Manchaiah et al, 2011). Genetic markers for non-syndromic ANSD are being evaluated, including mutations in the otoferlin and pejvakin genes.
Tests of auditory performance are used to fully characterize ANSD, including conventional
behavioural assessment along with psychoacoustics testing. The psychoacoustics test battery
includes assessment of binaural masking level difference, frequency discrimination, temporal
integration and temporal modulation transfer function in order to quantify the processing difficulties
experienced by children with ANSD.
Correlations between tests of auditory function, and differences in structural and functional
brain connectivity between children with ANSD and normally hearing controls are being assessed.
Preliminary results show wide variations, particularly in behavioural testing, including speech
testing and psychophysics assessment.
Acknowledgements
Hannah Cooper is funded by a National Institute for Health Research CSO Healthcare Science
Doctoral Research Fellowship.
References
Manchaiah V. K. C., Zhao F., Danesh A. A. & Duprey R. 2011. The genetic basis of auditory
neuropathy spectrum disorder (ANSD). Int J Pediatr Otorhinolaryngol, 75(2), 151-158.
Rance G., Beer D. E., Cone-Wesson B., Shepherd R. K., Dowell R. C., King A. M., Rickards F. W.
& Clark G. M. 1999. Clinical findings for a group of infants and young children with auditory
neuropathy. Ear Hear, 20(3), 238-252.
Smith S. M., Jenkinson M., Johansen-Berg H., Rueckert D., Nichols T. E., Mackay C. E., Watkins
K. E., Ciccarelli O., Cader M. Z., Matthews P. M. & Behrens T. E. 2006. Tract-based spatial
statistics: voxelwise analysis of multi-subject diffusion data. Neuroimage, 31(4), 1487-1505.
Starr A., Picton T. W., Sininger Y., Hood L. J. & Berlin C. I. 1996. Auditory neuropathy. Brain,
119 (Pt 3), 741-753.
18. The development of audiovisual integration in children listening to noise-vocoded speech
D.W. Maidment, H-J. Kang, H.J. Stewart and S. Amitay, MRC Institute of Hearing Research,
Nottingham, UK
While visual information improves speech perception in adults when the fidelity of the auditory
speech signal is reduced by background noise, typically developing children benefit comparatively
less than adults from observing visual articulations (Ross et al. 2011). Degradation of the speech signal itself, as experienced by hearing-impaired listeners, represents another type of adverse listening condition in which perception can potentially be enhanced by visual cues. Noise-vocoding simulates, for normal-hearing listeners, speech as heard via a cochlear implant, and its study can arguably improve our understanding of the speech perception difficulties experienced by this population. Nevertheless, whether additional visual cues improve normal-hearing children’s perception when the speech signal is degraded (e.g. noise-vocoded) has yet to be explored.
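For readers unfamiliar with the technique, a minimal noise-vocoder sketch is given below: the speech spectrum is divided into bands, each band’s amplitude envelope is extracted, and the envelope modulates band-limited noise. The band count, filter orders and cutoffs are illustrative assumptions, not the parameters of this study.

import numpy as np
from scipy.signal import butter, sosfiltfilt

def noise_vocode(speech, fs, n_bands=8, f_lo=100, f_hi=7000):
    rng = np.random.default_rng(4)
    edges = np.geomspace(f_lo, f_hi, n_bands + 1)  # log-spaced band edges
    env_lp = butter(2, 30 / (fs / 2), btype="low", output="sos")  # 30-Hz envelope smoother
    out = np.zeros_like(speech)
    for lo, hi in zip(edges[:-1], edges[1:]):
        band_sos = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="band", output="sos")
        band = sosfiltfilt(band_sos, speech)
        env = sosfiltfilt(env_lp, np.abs(band))      # rectify + low-pass = envelope
        carrier = sosfiltfilt(band_sos, rng.normal(size=speech.size))
        out += np.clip(env, 0, None) * carrier       # envelope-modulated noise band
    return out

fs = 16000
speech = np.random.default_rng(5).normal(size=fs)    # stand-in for a recorded sentence
vocoded = noise_vocode(speech, fs)
print(vocoded.shape)

Fewer bands mean coarser spectral detail, which is why the number of bands can serve as the adaptive variable in the experiment described next.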
In the current study, normal-hearing children (n=82) and adults (n=15) were presented with noise-vocoded sentences from the Children’s Co-ordinate Response Measure (Rosen, 2011) in auditory-only (AO) or audiovisual (AV) conditions. The number of bands was adaptively varied to modulate
the degradation of the auditory signal, with the number of bands required for ~79% correct
identification calculated as the threshold. The data revealed two distinct findings: First, we
replicated previous work (Eisenberg, et al. 2000) showing that, regardless of visual information,
younger children between the ages of 4-5 years required a greater number of frequency bands in
order to identify speech in comparison to older children (6-11 years) and adults. Second, we found
that visual information improved vocoded speech perception in all but the youngest (4-5 year old)
children.
Taken together, the data suggest that children do not begin to benefit from accompanying visual speech cues when the auditory signal is degraded until 6 years of age. This evidence not only has
implications for understanding the development of speech perception skills in normal-hearing
children, but may also inform the development of new treatment and intervention strategies that aim
to remediate speech perception difficulties in paediatric cochlear implant users.
Acknowledgements
Supported by the Medical Research Council.
References
Eisenberg, L. S., Shannon, R. V., Martinez, A. S., Wygonski, J., & Boothroyd, A. 2000. Speech
recognition with reduced spectral cues as a function of age. J Acoust Soc Am, 107, 2704-2710.
Rosen, S. 2011. The complexities of understanding speech in background noise. Paper presented at
the First International Conference on Cognitive Hearing Science for Communication, Linköping,
Sweden.
Ross, L. A., Molholm, S., Blanco, D., Gomez Ramirez, M., Saint Amour, D., & Foxe, J. J. 2011.
The development of multisensory speech perception continues into the late childhood years. Eur
J Neurosci, 33, 2329-2337.
19. Exploring the role of synchrony in spatial and temporal auditory-visual integration
G.P. Jones, S.M. Town, K.C. Wood, H. Atilgan and J.K. Bizley, UCL Ear Institute, London, UK.
Animals and humans integrate sensory information over time, and combine this information across
modalities in order to make accurate and reliable decisions in complex and noisy sensory
environments. Little is known about the neural basis of this accumulation of information, nor the
cortical circuitry that links the combination of information between the senses to perceptual
decision making and behaviour. Most previous examples of multisensory enhancement have relied
on synchrony dependent mechanisms, but these mechanisms alone are unlikely to explain the entire
scope of multisensory integration, particularly between senses such as vision and hearing, which
process multidimensional stimuli and operate with vastly different latency constraints.
Presented here are two audio-visual behavioural tasks, one spatial, one temporal (adapted from Raposo et al, 2012), requiring subjects (ferrets and humans) to accumulate evidence from one
or both senses over time. In the temporal task subjects estimated the average rate of short auditory
and/or visual events embedded in a noisy background (20 ms white noise bursts or flashes) over a
defined time period (1000 ms). Instantaneous event rates throughout this time period varied,
meaning the accuracy with which the event rate could be estimated increased over time. Similarly,
in the spatial task, subjects were required to report whether a greater event rate was presented to the
left or right of space. Discrimination was assessed in both unisensory auditory and visual conditions
as well as in synchronous and asynchronous multisensory conditions. In the temporal task, accuracy
and reaction times for humans were improved in both synchronous and asynchronous multisensory
conditions (accuracy: 71% and 72%, reaction times: 275±9 and 300±10 ms, mean ±SE,
respectively), relative to the auditory and visual unisensory performance (accuracy: 64% and 62%,
reaction times: 345±14 and 338±10 ms, respectively). To investigate how the additional information
available in the multisensory conditions leads to optimised listening performance, these data are
being analysed in the context of previously published drift diffusion models. Lastly, to better
understand how such signals are represented in the brain, recordings are presented from the auditory
and visual cortex of anesthetised ferrets.
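As an illustration of the modelling approach referred to above, a minimal drift-diffusion sketch is given below (Python). Representing the multisensory condition simply as a higher drift rate is our assumption for illustration, not the fitted model from the study.

```python
import numpy as np

def simulate_ddm(drift, bound=1.0, noise=1.0, dt=0.001, n_trials=1000, seed=1):
    """Accumulate noisy evidence to a bound; return accuracy and mean RT (s)."""
    rng = np.random.default_rng(seed)
    n_correct, rts = 0, []
    for _ in range(n_trials):
        x, t = 0.0, 0.0
        while abs(x) < bound:
            x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
            t += dt
        n_correct += x >= bound
        rts.append(t)
    return n_correct / n_trials, float(np.mean(rts))

# Higher drift (more sensory evidence per unit time) gives faster, more
# accurate decisions -- the qualitative multisensory pattern reported above.
print(simulate_ddm(drift=0.8))   # "unisensory" (illustrative value)
print(simulate_ddm(drift=1.3))   # "multisensory" (illustrative value)
```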
Acknowledgements
Supported by the Wellcome Trust / Royal Society
References
Raposo D., Sheppard J.P., Schrater P.R. & Churchland A.K. 2012. Multisensory Decision-Making
in Rats and Humans. J Neurosci, 32(11), 3726-3735.
20. Visual-auditory interactions in ferret auditory cortex: The effect of temporal coherence
H. Atilgan, S.M. Town, G.P. Jones, K.C. Wood and J.K. Bizley, Ear Institute, University College
London, London, UK
We have previously observed that when the size of a visual stimulus was coherently modulated with
amplitude of a target auditory stream, human listeners were better able to report brief deviants in the
target stream than when the visual stream was coherent with a non-target auditory stream (Maddox
et al., submitted). Here, we used modified versions of the same stimuli to explore whether the
coherence between temporally modulated auditory and visual stimuli can influence neuronal
activity in auditory cortex. The auditory stimuli were continuous vowel sounds which were
amplitude modulated with a <7 Hz noisy envelope and had a duration of 20 seconds.
Embedded within the continuous vowel were brief (200 ms) timbre deviants generated by altering
the first and second formant values of the reference vowel. Two such vowels (/u/ and /a/) were
generated, with different fundamental frequencies, and modulated with independent envelopes.
These were then presented either separately or concurrently. In both cases the auditory stimuli were
accompanied by a luminance-modulated visual stimulus, whose envelope matched one of the
auditory streams such that in the single stream case the auditory and visual stimuli were either
temporally coherent or independent, and in the dual stream case the visual stimulus was coherent
with one of the two streams. Recordings were made in anesthetised ferret auditory and visual cortex
and in the auditory cortex of awake, passively listening ferrets.
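The sketch below illustrates the style of stimulus described: a carrier amplitude-modulated by a low-pass (<7 Hz) noisy envelope. It is a minimal Python sketch with a pure-tone carrier standing in for the vowel; the formant and timbre-deviant manipulations are not reproduced, and the 200-Hz fundamental is an assumption.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

fs, dur = 48000, 20.0                       # 20-s stimulus, as in the abstract
t = np.arange(int(fs * dur)) / fs
rng = np.random.default_rng(2)

# Low-pass noise (<7 Hz) mapped to [0, 1] serves as the amplitude envelope;
# a second, independent envelope would give the "independent" condition.
sos = butter(4, 7.0, btype="low", fs=fs, output="sos")
env = sosfiltfilt(sos, rng.standard_normal(len(t)))
env = (env - env.min()) / (env.max() - env.min())

carrier = np.sin(2 * np.pi * 200.0 * t)     # tone standing in for the vowel
stimulus = env * carrier
```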
We explored how the visual stimulus influenced both the representation of the timbre deviants
as well as the ability of neurons to represent the modulation envelope of the sound in auditory
cortex when single auditory and visual streams were presented. Both timbre sensitive units and
envelope sensitive units were encountered. Some units responded only to changes in timbre (16% of
recorded units) while others responded only to changes in the amplitude envelope (26%). Over one
third of the driven units (36%) showed sensitivity to both timbre and AM changes. Next, we
quantified whether the coherence between the auditory and visual streams influenced auditory
cortical responses. 32% of units showed a significant effect of visual envelope coherence; the main
effect of the visual stimulus coherence appeared to be on how well a unit represented the changes in
the amplitude modulation, with the responses to timbre changes being relatively uninfluenced.
When both streams were presented some neurons responded preferentially to one stream over
the other (presumably reflecting tuning for a particular pitch or timbre). In a small number of cases
preliminary analysis suggests that some neurons responded preferentially to the auditory stream
with which the visual stimulus was coherent.
Acknowledgements
Action on Hearing Loss, Wellcome Trust, Royal Society, BBSRC
21. Multisensory integration in the ferret
A. Hammond-Kenny, V.M. Bajo, A.J. King and F.R. Nodal, Department of Physiology, Anatomy and
Genetics, University of Oxford, Oxford, UK
Our unified perception of the world relies on the ability to integrate information from different
sensory modalities with dedicated neural circuits. The reported locations of multisensory integration
in the mammalian brain are numerous and include primary sensory cortices. In order to explore the
role of auditory cortex in multisensory integration, we aim to record neural activity from the ferret
auditory cortex while animals perform a multisensory localization task, thus enabling correlation of
behavioural performance, a measure of perceptual ability, with simultaneously recorded neural
activity. However, prior to implantation of bilateral multi-electrode arrays, it is necessary first to
obtain, for this species, baseline behavioural measurements of multisensory integration.
Three ferrets were trained by positive operant conditioning to trigger a stimulus, presented from
1 of 7 sites separated by 30° intervals around the frontal hemifield of a circular arena, by licking a
central spout. Stimuli were either unisensory (auditory or visual) or multisensory (audio-visual) of
varying durations (20-2000 ms) and, for auditory stimuli, of different intensities (56-84 dB SPL).
Animals received a water reward for approaching the correct target. A comparison of reaction times, estimated by the latency of head-orienting movements, and of perceptual accuracy, indicated by the percentage of correct approach-to-target responses, between unisensory and multisensory stimulus conditions was used as the metric of multisensory integration.
As expected, perceptual accuracy was greater for multisensory versus unisensory stimulus
conditions with the greatest gains in percentage correct scores observed at the shortest stimulus
durations. In addition, facilitation of reaction times exceeded those predicted by a probability
summation mechanism alone, as demonstrated by violation of the Race Model. Together, these
results show behavioural enhancement during multisensory testing conditions and validate the use
of this behavioural paradigm to identify the neural correlates of multisensory integration. Our initial
recordings of neural activity from the auditory cortex, during the behavioural task, have revealed
various types of neural responses to stimulus presentations although their multisensory properties
remain to be established.
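The race-model logic mentioned above can be tested by comparing the multisensory reaction-time distribution against the sum of the unisensory distributions (Miller-style bound). A minimal Python sketch, not the authors' analysis code:

```python
import numpy as np

def ecdf(rts, grid):
    """Empirical cumulative distribution of reaction times on a time grid."""
    return np.searchsorted(np.sort(rts), grid, side="right") / len(rts)

def race_model_violation(rt_av, rt_a, rt_v):
    """Positive values indicate the audio-visual CDF exceeds the race bound."""
    grid = np.quantile(np.concatenate([rt_av, rt_a, rt_v]),
                       np.linspace(0.05, 0.95, 19))
    bound = np.minimum(ecdf(rt_a, grid) + ecdf(rt_v, grid), 1.0)
    return grid, ecdf(rt_av, grid) - bound
```

Where the audio-visual CDF rises above the bound, probability summation alone cannot account for the speeded responses.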
Acknowledgements
Supported by the Wellcome Trust
22. Auditory-visual interactions between amplitude-modulated sounds and shapes
M.P. Laing, A. Dryden, H.H.I. Tung, A. Rees and Q.C. Vuong, Institute of Neuroscience, Newcastle
University, Newcastle upon Tyne, UK.
Experimental evidence demonstrates that visual speech information has a marked effect on what we
hear, with respect to content (e.g. McGurk effect) and perceived location (ventriloquist effect).
Previous studies of AV interactions using non-speech related stimuli have often used transient
sounds that lack key features of speech. Here we address the hypothesis that auditory visual (AV)
interactions observed with speech and faces generalise to simpler stimuli occurring on similar time
scales to speech in both brain and behaviour. We have developed a paradigm using sinusoidally
amplitude-modulated tones and a visual shape (a cuboid which expands and contracts at the
modulation rate) that enables us to manipulate the extent to which listeners fuse the auditory and
visual components of the stimuli.
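A minimal sketch of the auditory component (Python; parameters taken from the abstract, onset ramps and level calibration omitted):

```python
import numpy as np

def sam_tone(depth, fs=44100, dur=1.5, fc=250.0, fm=2.0):
    """Sinusoidally amplitude-modulated tone with modulation depth 0..1."""
    t = np.arange(int(fs * dur)) / fs
    return (1 + depth * np.sin(2 * np.pi * fm * t)) * np.sin(2 * np.pi * fc * t)

# e.g. a same/different pair whose depths differ by one 8% step
pair = sam_tone(0.20), sam_tone(0.28)
```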
Listeners heard pairs of modulated tones (1.5 s duration, 250 Hz carrier frequency, 2 Hz
modulation rate) and judged whether their modulation depth (range 20-52% in 8% steps) was the
same or different (Audio Only). On cross-modal trials, we paired the shape with the tone with the
greatest (AV >) or least (AV <) modulation depth. We found that the modulated shape influenced
how listeners perceived the modulation depth of the tones, but modulation of the shape was not
influenced by the modulated tone. The auditory-visual interaction was nearly eliminated when the
tone and shape differed in their modulation frequency (e.g., 1 Hz shape modulation paired with 2
Hz amplitude modulation of the tone). Furthermore listeners reported that when the modulation
frequency in both modalities was the same, they perceived the shape to be the source of the tone,
thus indicating crossmodal integration. This merged percept was weak or absent when the
modulation frequencies did not match. In a preliminary functional magnetic resonance imaging
study using data acquired by sparse imaging, we found that superior-temporal, intraparietal and
frontal areas responded more when the tone and shape had the same rather than different
modulation frequencies.
Our findings provide evidence (1) that auditory-visual interactions observed with speech and
faces generalise to simpler stimuli occurring over a similar time scale to speech; and (2) that
temporal information, such as modulation frequency, is important for fusing hearing and vision in
both brain and behaviour.
Acknowledgements
MPL is supported by a studentship from the Wellcome Trust.
23. It’s all in the mix: Randomly mixing different auditory amplitude-modulation tasks
during training disrupts learning
H. Swanborough, D.W. Maidment, and S. Amitay, MRC Institute of Hearing Research, Nottingham,
UK
There is a growing corpus of work in the auditory learning domain investigating the efficacy of
mixed-task training. One concern surrounds the conditions under which mixed training of different
tasks does or does not disrupt learning. A previous study from our laboratory examined training of
two amplitude-modulation (AM) tasks involving different cues – AM-detection (AMD) and AM-rate discrimination (AMR). We have shown that when the two tasks are interleaved in either 60-trial
or 360-trial blocks, significant learning is only observed for the AM task completed last in a training
session (Gill et al., 2013; Maidment et al., 2013). This suggests that the acquisition and
consolidation of the ultimate AM task within a training session disrupts learning of the preceding
task.
The objective of the current study was to investigate whether learning on two similar AM tasks
is disrupted when neither task has an ultimate block. Training consisted of interleaving AMD and
AMR tasks on either a trial-by-trial basis in each training block, or in separate, 60-trial blocks. We
expected no learning to occur in the former mixed regimen, because neither task will have an
ultimate block with a sufficient number of trials to allow acquisition and initiate consolidation.
Preliminary results replicate our original finding that when training is mixed in separate, 60-trial blocks, learning is only observed for the ultimate AM task. Furthermore, as predicted, learning is not observed on either AM task when neither task has an ultimate block to disrupt previous training. We conclude that two similar tasks should not be mixed randomly on a trial-by-trial basis
during training, as this prevents the acquisition and consolidation of both tasks. These findings have
implications in the construction of training programmes intended for treatment and intervention
strategies, particularly those aimed at the habilitation of cochlear implant users that rely on AM cues
to enhance speech intelligibility.
Acknowledgements
Supported by the Medical Research Council.
References
Gill, E.C., Maidment, D.W., Kang, H-J., & Amitay, S. (2013). Disruption to learning during the
acquisition phase. Poster presented at the British Society of Audiology Annual Conference,
Keele University, UK. Abstract #88.
Maidment, D.W., Kang, H-J., & Amitay, S. (2013). Auditory perceptual learning: Initiation of
consolidation prevents dual-task disruption. Poster presented at the British Society of
Audiology Annual Conference, Keele University, UK. Abstract #87.
24. Comodulated notched-noise masking
R. Grzeschik*, B. Lübken* and J.L. Verhey*, *Department of Experimental Audiology, Otto von
Guericke University Magdeburg, Magdeburg, Germany
Detection thresholds of a masked sinusoidal signal are lower when the masking noise is coherently
modulated over a wide frequency range compared to a masking condition with the same spectrum
but using an unmodulated masker. This phenomenon is referred to as comodulation masking release
(CMR; Hall et al., 1984). Here we tested how masker comodulation affects notched-noise data.
Thresholds for sinusoids were measured in the presence of a diotic notched-noise masker for
comodulated and unmodulated conditions. The signal frequency of the target was 500 Hz. The
masker was a broadband filtered noise with lower- and upper cut-off frequencies of 30 and 1000
Hz, respectively. The masker had a spectral notch at the signal frequency. Notch widths were 0, 50,
100, 200, and 400 Hz. The arithmetic centre frequency of the notch was equal to the signal
frequency.
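A minimal Python sketch of the masker construction: comodulation is produced here by multiplying the whole notched noise by a single slow envelope, so that all frequency regions share the same modulation. The modulator bandwidth and filter orders are illustrative assumptions, not the study's exact parameters.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

fs, dur = 44100, 0.5
n = int(fs * dur)
rng = np.random.default_rng(3)

def band_noise(lo, hi):
    sos = butter(4, [lo, hi], btype="band", fs=fs, output="sos")
    return sosfiltfilt(sos, rng.standard_normal(n))

notch = 100.0                            # notch width (Hz), centred on 500 Hz
masker = band_noise(30, 500 - notch / 2) + band_noise(500 + notch / 2, 1000)

env_sos = butter(2, 50.0, btype="low", fs=fs, output="sos")
env = np.abs(sosfiltfilt(env_sos, rng.standard_normal(n)))   # slow modulator
comodulated = masker * env / env.std()   # same envelope in all regions
unmodulated = masker
```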
The data of eleven participants (4 male, 7 female, aged 25-37 yrs, mean 30 yrs) entered the
analysis. As expected, thresholds decreased with increasing notch width for both masking
conditions. The CMR also decreased continuously with increasing notch width. CMR was almost 9 dB for
no notch. At 400 Hz, CMR was less than 2 dB.
The experimental data were compared with predictions of a modulation filterbank model that
was previously shown to simulate CMR in the band-widening type of CMR experiments
(Piechowiak et al., 2007).
Acknowledgements
Supported by the German Research Foundation (SFB/TRR31).
References
Hall J.W., Haggard M.P. & Fernandes M.A. 1984. Detection in noise by spectro-temporal pattern analysis. J Acoust Soc Am, 76, 50-56.
Piechowiak T., Ewert S.D. & Dau T. 2007. Modeling comodulation masking release using an equalization-cancellation mechanism. J Acoust Soc Am, 121, 2111-2126.
25. Individual differences in auditory performance of naive listeners
D.E. Shub, School of Psychology, University of Nottingham, Nottingham, UK
A psychophysical experiment was conducted to examine differences between so-called ‘good’ and
‘bad’ listeners. A total of 97 naive young healthy listeners were divided into three groups, with each group measured on two tasks: (1) frequency and level discrimination, (2)
interaural time difference (ITD) and interaural level difference (ILD) discrimination, and (3) N0S0
and N0Spi detection. Two estimates of threshold were made with both a low (0.5 kHz) and a high (4
kHz) frequency tonal signal in every task, except for ITD discrimination, where only the low
frequency signal was used.
There was substantial correlation between the first and second estimates of threshold (r equal to
0.73, 0.69, and 0.91), but no statistically reliable differences between the thresholds (p equal to
0.72, 0.63, and 0.69). The correlation between the estimates of threshold at the two frequencies (r
equal to 0.69, 0.69, and 0.64) was similar to the test-retest correlation. Relative to the test-retest
correlation, the correlation between the N0S0 and N0Spi tasks (r=0.92) was similar, but the correlations between the level and frequency tasks (r=0.37) and between the ITD and ILD tasks (r=0.4) were substantially reduced. The reduced correlation between the level and frequency tasks suggests that the
relative performance of listeners is task dependent. This, however, seems contradictory to both the
lack of a change in the N0S0-N0Spi correlation, where the two tasks are nominally measuring
different aspects of detection, and the presence of a change in the ITD-ILD correlation, where the
two tasks are nominally measuring spatial sensitivity. These discrepancies are potentially related to
the generally poor performance of the naive subjects. The level of performance in the ILD task
means that there were not only changes in the spatial position, but also changes in the loudness that
are absent in the ITD task. Conversely, the N0S0 and N0Spi tasks may have been measuring the
same thing since sensitivity to interaural differences is so poor in naive listeners and detection
thresholds were so elevated that listeners would not need to rely on interaural decorrelation in the
N0Spi task. This is supported by the lack of a binaural masking level difference (BMLD) with a low
frequency signal (p=0.46), but is inconsistent with the 3.8 dB BMLD (p<0.001) with a high
frequency signal. Overall, the results suggest that subjects are not universally good/bad performers,
but that the relative quality of the performance is task, but not frequency, specific.
Declaration of interest
The authors report no conflicts of interest. The authors alone are responsible for the content and
writing of the paper.
26. Frequency tuning of the efferent effect on cochlear gain
V. Drga*, I. Yasin* and C. J. Plack§, *Ear Institute, University College London, UK, §School of
Psychological Sciences, University of Manchester, UK
Behavioural and physiological evidence suggests that the amount of gain applied to the basilar
membrane may change during the course of acoustic stimulation due to efferent activation of the
cochlea (Liberman et al, 1996). The present study used the Fixed Duration Masking Curve (FDMC)
method (Yasin et al, 2013) to estimate human cochlear gain, in the presence of a precursor sound of
variable frequency. Masker level at threshold was obtained for a 10-dB SL, 4-kHz sinusoidal signal
(3-ms raised-cosine onset and offset ramps) in the presence of either an on-frequency (4 kHz) or
off-frequency (1.8 kHz) sinusoidal forward masker (2-ms raised cosine ramps). The masker-signal
silent interval was 0 ms. Thresholds were measured as a function of signal and masker steady-state
duration, while holding the combined masker-and-signal duration constant at 25 ms. A short total
duration ensured that accurate estimates of basilar membrane gain could be obtained in the absence
of efferent suppression of the signal by the masker. Presentation of a precursor sound prior to the
masker-signal stimulus was used to estimate the frequency dependency of the efferent effect. The
use of the FDMC technique ensured that the effect of the precursor on gain could be measured
independently of any masking effects by the precursor. This has been a confound in previous
behavioural studies. FDMCs were obtained from three listeners with and without a 160-ms
precursor noise with a bandwidth of 200 Hz (centred at frequencies of 2.5, 3.0, 3.25, 3.5, 4.0, 4.25,
4.5, 5.0 and 5.5 kHz) and a level of 60 dB SPL. The silent interval between precursor and masker
was fixed at 0 ms.
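A minimal Python sketch of the masker-signal timing (how the ramps count towards the fixed 25 ms is our assumption for illustration, as is the example steady-state split):

```python
import numpy as np

def ramped_tone(f, steady_ms, ramp_ms, fs=48000):
    """Pure tone with raised-cosine onset/offset ramps."""
    t = np.arange(int(fs * (steady_ms + 2 * ramp_ms) / 1000)) / fs
    tone = np.sin(2 * np.pi * f * t)
    nr = int(fs * ramp_ms / 1000)
    ramp = 0.5 * (1 - np.cos(np.pi * np.arange(nr) / nr))
    tone[:nr] *= ramp
    tone[-nr:] *= ramp[::-1]
    return tone

masker_steady = 10.0                                        # ms (one trading point)
signal_steady = 25.0 - masker_steady - 2 * 2.0 - 2 * 3.0    # total held at 25 ms
masker = ramped_tone(4000.0, masker_steady, 2.0)            # on-frequency masker
signal = ramped_tone(4000.0, signal_steady, 3.0)
stimulus = np.concatenate([masker, signal])                 # 0-ms silent interval
```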
Results suggest that the efferent effect is tuned in frequency, with the greatest gain reduction
for a precursor presented at a frequency at, or just below, the signal frequency of 4 kHz. The
amount of gain reduction with a 3.0, 4.25, 4.5 and 5.0-kHz precursor was significantly less than that
with a 4-kHz precursor. The 10-dB bandwidth of the efferent filter was approximately 590 Hz. The
results are consistent with physiological studies which show efferent fibre frequency-response
tuning (Guinan, 2010).
Acknowledgements
Supported by the EPSRC (grant EP/H022732/1)
References
Liberman M.C., Puria S. & Guinan J.J.Jr. 1996. The ipsilaterally evoked olivocochlear reflex causes
rapid adaptation of the 2f1-f2 distortion product otoacoustic emission. J Acoust Soc Am, 99,
3572-3584.
Yasin I., Drga V. & Plack C.J. 2013. Estimating peripheral gain and compression using fixed-duration masking curves. J Acoust Soc Am, 133, 4145-4155.
Guinan J.J.Jr. 2010. Cochlear efferent innervation and function. Curr Opin Otolaryngol Head Neck
Surg, 18, 447-453.
27. Temporal integration in the perception of low-frequency and infrasound
H. Patel and T. Marquardt, UCL Ear Institute, University College London, UK
Hearing threshold and equal-loudness contours (ELCs) were obtained as a function of tone duration
from 5 normal hearing subjects for 4 Hz, 16 Hz, 32 Hz and 125 Hz. The duration ELCs were also
obtained for 1 kHz. As at higher frequencies, thresholds decreased with increasing tone duration
until the “critical duration”, beyond which threshold remained constant. The trend of an increase in
critical duration with decreasing tone frequency, previously reported for the normal frequency range
down to 125 Hz, continues into the infrasound range, where mean data indicate a critical duration of
approximately 1 s at 16 Hz, and a further increase to at least 4 s at 4 Hz. Rates of threshold
decrease below the critical duration were previously reported for higher frequencies to be in the
order of 10 dB per ten-fold duration increase, indicating an integration of stimulus power. Our data
demonstrate that this rate holds true down to infrasound. Although the critical durations in the
perception of loudness are comparable to those for thresholds, the rates of sound pressure
adjustment to keep an equal loudness percept with increasing tone duration (below the critical
durations) are considerably lower, and fit well with a rule of 10 phon per ten-fold duration increase.
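The threshold rate quoted above corresponds to integration of stimulus power (intensity times duration approximately constant at threshold), as the short worked example below shows (Python; the durations are illustrative):

```python
import numpy as np

# Power integration: threshold intensity * duration = constant, so threshold
# level falls by 10*log10(T2/T1) dB as duration grows from T1 to T2.
durations = np.array([0.1, 0.4, 1.0, 4.0])          # s (below critical duration)
drop_db = 10 * np.log10(durations / durations[0])
print(drop_db)   # [ 0.,  6.02, 10., 16.02] -> ~10 dB per ten-fold increase
```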
Acknowledgements:
This study was supported by the EMRP project HLT01 (EARS). T.M. is supported by the related
EMRP Researcher Excellence Grant HLT01-Reg1. The EMRP is jointly funded by the EMRP
participating countries within EURAMET and the European Union.
28. Mind the gap: two dissociable mechanisms of temporal processing in the auditory system
L.A. Anderson*, J. Mattley* and J.F. Linden*, § *Ear Institute, University College London, London,
UK, §Dept. Neurosci., Physiol. & Pharmacol., University College London, London, UK
The high temporal acuity of auditory processing underlies perception of speech and other rapidly
varying sounds. A common measure of auditory temporal acuity is the threshold for detection of
brief gaps in noise. Gap-detection deficits, frequently observed in neurodevelopmental disorders,
are considered evidence for 'sluggish' auditory processing. Here we show that neural mechanisms
of auditory temporal processing are stimulus-specific, and gap-detection deficits do not imply a
general loss of auditory temporal acuity. Extracellular recordings from auditory thalamus of mice
with neocortical ectopia (a mouse model of thalamocortical developmental disorder) reveal reduced
neural sensitivity to brief gaps in noise, but normal sensitivity to other rapidly changing sounds. We
demonstrate, through experiments and mathematical modelling, that thalamic gap-detection deficits
arise from weakened neural activity following noise offsets, not onsets. We also present
preliminary results from further experiments aimed at identifying the anatomical origins of the
observed physiological deficit, examining neural responses at the level of the inferior colliculus and
the auditory brainstem. The results reveal dissociable mechanisms of central auditory temporal
processing involving sound-onset-sensitive and sound-offset-sensitive channels, and suggest that
auditory deficits in developmental disorders arise from abnormal neural sensitivity to sound offsets.
Acknowledgements
Wellcome Trust, Action on Hearing Loss
29. Build-up and resetting of auditory stream segregation: Effects of timbre, loudness, and
abrupt changes in these properties
S.L. Rajasingam, R.J. Summers* and B. Roberts*, *Psychology, School of Life and Health Sciences,
Aston University, Birmingham, UK
Better characterization of the effects of acoustic change on the dynamics of stream segregation is
important for understanding the role of streaming in everyday listening. The auditory system’s
ability to segregate sounds into separate streams is commonly investigated using rapid, unchanging
ABA– triplet sequences (alternating low- and high-frequency pure tones, followed by silence).
Presenting long, steady sequences results in ‘build-up’ – an increased tendency towards a
segregated percept – which tends to reset if an abrupt change occurs.
Two experiments explored the effects of timbre and loudness on build-up, and the
consequences of abrupt changes in these perceptual properties. 12 normal-hearing listeners heard
20-s-long ABA– sequences (tone duration = 100 ms; base frequency = 1 kHz; Δf = +4-8 semitones).
Participants continuously monitored each sequence and used key presses to report when they heard
the sequence as integrated or segregated (cf. Haywood & Roberts, 2013).
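A minimal Python sketch of an ABA– triplet sequence with the parameters given above (onset/offset ramps omitted):

```python
import numpy as np

def aba_sequence(df_semitones=6, n_triplets=50, fs=44100, tone_ms=100):
    """ABA- triplets: A = 1 kHz, B = A shifted by df semitones, '-' = silence."""
    fA = 1000.0
    fB = fA * 2 ** (df_semitones / 12)
    n = int(fs * tone_ms / 1000)
    t = np.arange(n) / fs
    tone = lambda f: np.sin(2 * np.pi * f * t)
    triplet = np.concatenate([tone(fA), tone(fB), tone(fA), np.zeros(n)])
    return np.tile(triplet, n_triplets)      # 50 triplets -> a 20-s sequence
```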
Experiment 1 used pure tones and narrowly spaced (±25 Hz) tone pairs, dyads, to investigate
the effects of timbre. Both stimuli evoke similar peripheral excitation patterns but dyads have a
“rougher” timbre. Sequences consisting only of dyads induced a highly segregated percept, with
limited scope for further build-up. Most notably, alternating abruptly every 13 triplets between pure
tones and dyads produced large changes in the extent of reported segregation. Dyad-to-tone
transitions produced substantial resetting, but tone-to-dyad transitions elicited greater segregation
than for the corresponding time interval in dyad-only sequences (i.e., overshoot).
Experiment 2 used pure-tone sequences and examined the effect of varying triplet loudness.
Louder and softer sequences with a maintained level difference (12 dB) showed similar patterns of
build-up, but a clear asymmetry occurred for alternating sequences. Specifically, soft-to-loud
transitions caused a significant reduction in segregation (resetting) whereas loud-to-soft transitions
had little or no effect (cf. Rogers & Bregman, 1998). Together, the results suggest that timbre can
strongly affect the tendency to hear streaming, even in the absence of peripheral-channelling cues,
but moderate differences in overall level do not. Furthermore, abrupt changes in sequence properties
have variable effects on the tendency to hear segregation – including resetting, overshoot, and
asymmetries in the effect of transition direction.
Acknowledgements
Supported by Aston University.
References
Haywood, N.R. & Roberts, B. 2013. Build-up of auditory stream segregation induced by tone
sequences of constant or alternating frequency and the resetting effects of single deviants. J Exp
Psychol Hum Percept Perform, 39, 1652-1666.
Rogers, W.L., & Bregman, A.S. 1998. Cumulation of the tendency to segregate auditory streams:
Resetting by changes in location and loudness. Percept Psychophys, 60, 1216-1227.
30. Influence of context on the relative pitch dominance of individual harmonics
H.E. Gockel, S. Alsindi, C. Hardy and R.P. Carlyon, MRC Cognition and Brain Sciences Unit,
Cambridge, UK
There is evidence that the contribution of a given harmonic in a complex tone to residue pitch is
influenced by the accuracy with which the frequency of that harmonic is encoded: Harmonics with
low frequency difference limens (FDLs) contribute more to residue pitch than harmonics with high
FDLs (Moore et al, 1985; Gockel et al, 2005). We investigated whether listeners adjust the weights
assigned to individual harmonics based on acquired knowledge of the reliability of the frequency
estimates of those harmonics. Importantly, the acquired knowledge was based on context trials in
which the reliability of the frequency estimates of individual harmonics was manipulated, rather
than on the reliability of estimates in the test trials themselves.
In a two-interval forced-choice task, seven listeners indicated which of two 12-harmonic
complex tones had the higher overall pitch. In context trials (60% of all trials), the fundamental
frequency (F0) was 200 Hz in one interval and 200+ΔF0 Hz in the other. In different blocks, either
the 3rd or the 4th harmonic, plus (always) the 7th, 9th, and 12th harmonics were replaced by
narrowband noises that were identical in the two intervals. Harmonics and narrowband noises had
rms levels of 50 dB SPL. Feedback was provided. In randomly interspersed test trials (40% of all
trials), the F0 was 200+ΔF0/2 Hz in both intervals and either the 3rd or the 4th harmonic was shifted
slightly up or down in frequency. There were no narrowband noises. Feedback was not provided.
Performance (d′) for judging the residue pitch was calculated separately for each mistuned harmonic
(3rd or the 4th) and each context condition.
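The performance measure used here is the standard signal-detection index; a minimal Python sketch of its computation from response counts (the correction for extreme proportions is a common convention, assumed rather than taken from the study):

```python
import numpy as np
from scipy.stats import norm

def d_prime(hits, misses, false_alarms, correct_rejections):
    """d' = z(hit rate) - z(false-alarm rate), with a log-linear correction."""
    h = (hits + 0.5) / (hits + misses + 1)
    f = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    return norm.ppf(h) - norm.ppf(f)

print(d_prime(42, 18, 12, 48))   # illustrative counts
```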
In the test trials, d′ for a given harmonic was slightly but significantly smaller when it was
replaced by a noiseband in the context trials, compared to when a different harmonic was replaced.
The results are consistent with the notion that listeners give smaller weights to a harmonic-frequency region when they have learned that this frequency region does not provide reliable
information for a given task. That is, the relative contribution of individual harmonics to pitch may
depend not only on the accuracy of encoding, but also on the listener’s experience of how reliable
the information is likely to be.
Acknowledgements
Supported by intramural MRC funds.
References
Gockel H., Carlyon R.P. & Plack C.J. 2005. Dominance region for pitch: Effects of duration and
dichotic presentation. J Acoust Soc Am, 117, 1326-1336.
Moore B.C.J, Glasberg B.R. & Peters R.W. 1985. Relative dominance of individual partials in
determining the pitch of complex tones. J Acoust Soc Am, 77, 1853-1860.
31. Effects of temporal context on frequency selectivity of cortical adaptation in primary
auditory cortex
O. Woolnough, J. de Boer, K. Krumbholz and C. Sumner, MRC Institute of Hearing Research,
Nottingham, UK
Adaptation, the reduction in neural responses to repeated stimuli, is a ubiquitous phenomenon
throughout human and animal sensory systems. It has been shown that adaptation in auditory cortex
is frequency specific (Brosch and Schreiner, 1997), with greater frequency separation between
adapter and probe tones reducing the degree of adaptation.
The aim of the current study was to compare frequency specificity of adaptation in human and
guinea pig (GP) auditory cortex, and to investigate how adaptation specificity depends on the
properties of the adapter and probe stimuli. A recent study has shown that using repeated
presentation of adapters can enhance frequency specificity of this adaptation (Briley and
Krumbholz, 2013). Here we have characterised the effects of both adapter repetition and duration on
the tuning of cortical adaptation.
Adaptation was measured in auditory cortex, recorded through EEG in humans and via EEG
and LFPs in GPs. Pure tone adapter-probe sequences were presented diotically with adapter
frequencies of 0, 0.5 and 1.5 octaves higher than the 1 kHz probe. We analysed the resultant
auditory evoked potential (AEP) amplitudes, measured as the N1-P2 amplitude, and quantified
adaptation as the reduction in response size in the adapted probe compared to the unadapted
response.
The human EEG results confirm the previous results (Briley and Krumbholz, 2013) that
repeated adapter presentation increases the frequency specificity of adaptation. Our results further
show how these effects depend on the number of repetitions and that no sharpening is observed when
adapters are prolonged rather than repeated. The GP EEG results show comparable trends.
However, frequency selectivity of adaptation was overall much greater in GPs than in humans (GPs
showed a 50-105% reduction in adaptation over 1.5 octave frequency separation vs 15-45% in
humans). In fact at the largest adapter-probe frequency separation, the GPs showed significant
facilitation, increasing the AEP amplitude beyond the unadapted amplitude, whereas no facilitation
was evident in humans.
Our results suggest that, in both humans and GPs, the frequency tuning of adaptation is refined with increasing numbers of adapting tones. However, there are apparent
qualitative differences in EEG responses across species. Further work will investigate how the
underlying representation in neural activity and local field potentials contribute to far field
potentials.
Declaration of interest
The authors report no conflicts of interest. The authors alone are responsible for the content and
writing of the paper.
References
Briley P.M. & Krumbholz K. 2013. The specificity of stimulus-specific adaptation in human
auditory cortex increases with repeated exposure to the adapting stimulus. J Neurophysiol, 110,
2679–2688.
Brosch M. & Schreiner C.E. 1997. Time course of forward masking tuning curves in cat primary
auditory cortex. J Neurophysiol, 77, 923–943.
32. Functional organisation of familiar melody recognition as revealed using octave-generalisation of melodic structure
R.E. Millman*, H. Biddles§, R. Garcia§, E. Giannakopoulou§, M. Kulkarni§ and M. Hymers*
*York Neuroimaging Centre, University of York, York, UK, §Department of Psychology, University
of York, York, UK
Deutsch (1972) demonstrated that the accuracy of familiar melody recognition decreased
significantly when successive notes of the melody were played in one of three octaves chosen at
random (i.e. octave-scrambled). Further, it was reported that on listening to the original melody
before the octave-scrambled version of the same melody, participants were better able to recognise
the melody in the octave-scrambled condition. Here we used the phenomenon of Deutsch’s
“mysterious melody” to study the neural processes underlying familiar melody recognition. Notably
Deutsch’s “mysterious melody” illusion permits control for low-level acoustical confounds between
“unfamiliar” and “familiar” conditions in a study of familiar melody recognition. First we tested the
effects of four categories of melody “scrambling” (octave-scrambled, randomised-note,
retrogradation and shuffled-note) on behavioural ratings of melody familiarity. Participants listened
to a given “scrambled” version (pre-reveal) of a melody followed by the original melody (reveal)
and finally the “scrambled” version again (post-reveal). Following each melody presentation,
participants were asked to give a 4-point scale familiarity rating. Post-reveal ratings of melodic
familiarity were significantly greater for the octave-scrambled condition than for the other melody
“scrambling” conditions. Secondly, we investigated the functional organisation of familiar melody
recognition in an Interleaved Silent Steady State (ISSS) functional magnetic resonance imaging
(fMRI) experiment (Schwarzbauer et al, 2006). The fMRI experiment closely followed the design
of the behavioural experiment and octave-scrambled stimuli were used to manipulate the percept of
familiar melodies. The responses to randomised-note stimuli were incorporated into an interaction
term that controlled for temporal order effects in the fMRI experiment. The fMRI analyses of the
original melody vs. “baseline” (silence) revealed activity in bilateral auditory cortices, bilateral
superior temporal lobes, bilateral frontal orbital cortices, right precentral gyrus and left inferior
frontal gyrus. The fMRI contrast of interest, based on the interaction term, revealed differential
activity in the right inferior parietal lobe, right angular gyrus and left intraparietal sulcus. These
findings support the recruitment of a bilateral network of supramodal brain regions in familiar
melody recognition.
Declaration of interest
The authors report no conflicts of interest. The authors alone are responsible for the content and
writing of the paper.
References
Deutsch, D. 1972. Octave Generalization and Tune Recognition. Percept Psychophys, 11, 411-412.
Schwarzbauer, C., Davis, M.H., Rodd, J.M. & Johnsrude, I. 2006. Interleaved silent steady state (ISSS) imaging: a new sparse imaging method applied to auditory fMRI. NeuroImage, 29, 774-782.
33. A steady-state evoked response to periodic changes in carrier IPD – the effect of IPD size
on the magnitude of the response
N.R. Haywood, J.A. Undurraga, T. Marquardt and D. McAlpine, UCL Ear Institute, University
College London, London, UK
A steady-state evoked response that is dependent on sensitivity to interaural time differences (ITDs) can be observed using electroencephalography (EEG). The stimuli comprised a five-minute
sinusoidally amplitude modulated tone, presented at 80 dB SPL (carrier frequency = 520 Hz,
modulation frequency = 41 Hz). The modulation envelope was diotic, and the carrier tone was
presented with one of five possible interaural phase differences (IPDs): 22.5º, 45º, 90º, 112.5º, or
135º. At the end of every six successive modulation cycles, the carrier IPD was ‘switched’
abruptly, so that the carrier phase led in the opposite ear to previously. This resulted in a stimulus
which alternated periodically between right- and left-leading, but for which the overall size of the
IPD was held constant. IPD switches were presented at a rate of 6.8 per second, and the timing of
each switch coincided with a minimum in the modulation cycle. This ensured that the salience of
monaural phase cues was minimised.
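A minimal Python sketch of this stimulus construction (an eight-segment excerpt rather than the full five minutes; level calibration omitted):

```python
import numpy as np

fs, fc, fm, ipd_deg = 48000, 520.0, 41.0, 90.0
n_segments = 8                                  # six modulation cycles each
t = np.arange(int(fs * n_segments * 6 / fm)) / fs

env = 0.5 * (1 - np.cos(2 * np.pi * fm * t))    # diotic envelope, minima at edges
segment = np.floor(t * fm / 6).astype(int)      # which 6-cycle block we are in
lead = np.where(segment % 2 == 0, 1.0, -1.0)    # alternate the leading ear
phi = np.deg2rad(ipd_deg) / 2                   # split the IPD across the ears

left = env * np.sin(2 * np.pi * fc * t + lead * phi)
right = env * np.sin(2 * np.pi * fc * t - lead * phi)
# Switch rate = 41/6 ~= 6.8 per second, each switch at an envelope minimum.
```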
Responses from ten normally-hearing subjects were recorded with a 16 channel EEG
system. Subjects did not attend the stimuli during the recording session. The periodic IPD switches
evoked a reliably detectable 6.8 Hz steady-state following response. All IPDs tested elicited a robust
following response, but the magnitude of this response was typically largest when the IPD was set
to 90º. Notably, following response magnitude decreased at the larger IPD values tested (112.5º,
135º). A possible explanation for this finding is that subjects applied greater weighting to the
envelope IPD cues once the carrier IPD extended beyond 90º. As the modulation envelope was
diotic, any increased weighting applied to the envelope IPD may have reduced the perceived
lateralisation of the stimulus, and/or reduced the salience of the abrupt lateralisation changes during
the stimulus. Previous psychophysical studies that have presented stimuli with envelope and carrier
ITDs in opposing directions have observed greater envelope ITD weighting once the carrier IPD
extended beyond ~ 45-90º (Buell et al., 1991; Dietz et al., 2009). Preliminary psychophysical
estimates of the lateralisation of modified versions of the current stimuli are consistent with these
previous findings.
Acknowledgements
The research leading to these results has received funding from the European Union’s Seventh
Framework Programme (FP7/2007-2013) under ABCIT grant agreement number 304912.
References
Buell, T. N., Trahiotis, C. & Bernstein, L. R. (1991). Lateralization of low‐frequency tones:
Relative potency of gating and ongoing interaural delays. J Acoust Soc Am, 90(6), 3077-3085.
Dietz, M., Ewert, S. D, & Hohmann, V. (2009). Lateralization of stimuli with independent finestructure and envelope-based temporal disparities. J Acoust Soc Am, 125(3), 1622-1635.
34. Detection of interaural time processing by means of frequency following responses
J.A. Undurraga, N. Haywood, T. Marquardt and D. McAlpine, University College London Ear
Institute, London, UK
Hearing loss in one or both ears can decrease the ability to use binaural cues, potentially making it
more difficult to understand speech and to localize sounds. Moreover, in the case of bilaterally
implanted cochlear implant (CI) users, a place mismatch between the channels in each ear may
reduce the binaural performance.
Thus, the use of a reliable and direct objective measure of the binaural processing may help to
understand the mechanisms of binaural processing and also be used as a clinical tool to aid
matching across-ear electrodes.
In this study we developed an objective measure to assess binaural processing in normal
hearing listeners; specifically, we measured frequency following responses to abrupt interaural phase changes (FFR-IPCs) imposed on amplitude-modulated signals.
phase of the carrier signal presented to each ear was manipulated to produce discrete Interaural
Phase Changes (IPCs) at minima in the modulation cycle. The phase of the carrier was
symmetrically opposed in each ear. For example, when the carrier phase was set to +5.5° in one ear, it was set to −5.5° in the other ear, resulting in an effective Interaural Phase Difference (IPD) of 11°. The IPDs tested in this study ranged between 11 and 135°. IPCs were presented at three different
rates (3.4, 6.8, and 13.6 switches/s) and the carrier was amplitude modulated using several rates (27
to 109 Hz). Measurements were obtained using an EEG system with two channels.
Recordings demonstrated that FFR-IPCs could be obtained from all participants in all conditions. However, the signal-to-noise ratio was largest when IPCs were presented at 6.8 switches/s. FFR-IPCs were generally largest for IPDs between 45 and 90°. Overall, FFR-IPC magnitude increased with
modulation rate. We concluded that FFR-IPC can be used as a reliable objective measurement of
binaural processing and may be a more suitable objective measure than the binaural interaction
component (BIC) to match across-ear electrodes.
Acknowledgements:
The research leading to these results has received funding from the European Union’s Seventh
Framework Programme (FP7/2007-2013) under ABCIT grant agreement number 304912.
35. What does gunfire sound like when you are the target? Investigating the acoustic
characteristics of small arms fire
Z.L. Bevis*, R.M. van Besouw*, M.C. Lower*, D. Rowan*, G.S. Paddan§, *Institute of Sound and
Vibration Research, Southampton, UK, §Institute of Naval Medicine, Gosport, UK
Locating the source of small arms fire is an important hearing task for infantry personnel. Bevis et
al. (2014) report that personnel claim to have varying confidence in their ability to localise small
arms fire and often rely on visual cues to confirm its origin. In order to measure localisation ability
amongst personnel, a greater understanding of the acoustic characteristics of small arms fire is
needed. Small arms fire (from SA80 rifles) was recorded using binaural (Knowles Electronics
Manikin for Acoustic Research) and omnidirectional (B&K2250 sound level meter) microphones
placed at 50 m intervals from the firer (up to 300 m) and within 30 cm of the bullet trajectory. The
sounds created by small arms are unlike other short duration stimuli such as a click. It is known that
a single shot creates two sounds: a high amplitude, high frequency ‘crack’ created by the supersonic
bullet parting the air as it travels, followed by a quieter, lower frequency ‘thump’ sound, created by
the muzzle blast as the bullet leaves the weapon (Beck et al, 2011). Acoustic analysis of the
recorded stimuli shows that the delay between crack and thump increases as distance from the firer
increases; a phenomenon described by personnel in the study of Bevis et al (2014). The thump
sound produces interaural level differences (ILDs) and interaural time differences (ITDs) as
expected for a subsonic sound; however, the supersonic crack results in decreased ILDs and ITDs as
the sound itself propagates from the bullet at a point close to the head. The recordings give an
insight into the cues available for localisation and the results of this study contribute to the
(currently sparse) literature surrounding small arms fire acoustics. These data will inform the
development of tests of localisation ability for personnel, to ensure that they can perform hearing
tasks critical to the success of military missions.
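The growth of the crack-thump delay with distance follows from simple kinematics. The back-of-envelope Python sketch below ignores bullet deceleration and the geometry of the shock cone, and the bullet speed is an illustrative assumption rather than a measured SA80 value:

```python
c = 343.0    # speed of sound (m/s)
v = 900.0    # assumed supersonic bullet speed (m/s), treated as constant
for d in range(50, 301, 50):                  # microphone distances (m)
    crack = d / v                             # ballistic shock arrives first
    thump = d / c                             # muzzle blast at speed of sound
    print(f"{d:3d} m: crack-thump delay ~ {1000 * (thump - crack):4.0f} ms")
```

Even under these simplifications, the delay grows roughly linearly with distance, consistent with the pattern in the recordings described above.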
Acknowledgement
Funding provided by the Surgeon General of the Ministry of Defence.
References
Bevis Z.L., Semeraro, H.S., van Besouw R.M., Rowan D., Lineton B. et al. 2014. Fit for the
frontline? A focus group exploration of auditory tasks carried out by infantry and combat
support personnel. Noise Health, 16(69), 127–135.
Beck S.D., Nakasone H. & Marr K.W. 2011. Variations in recorded acoustic gunshot waveforms
generated by small firearms. J Acoust Soc Am, 129, 1748–1759.
36. Decoding sound location from auditory cortex during a relative localisation task
K.C. Wood, S. Town, H. Atilgan, G.P. Jones and J. K. Bizley, The Ear Institute, University College
London, UK
Many studies have investigated the ability of human listeners to localise sounds in space to an
absolute location (e.g. Stevens & Newman, 1936). Other studies have compared the minimum
discriminable difference in spatial location that a listener can reliably discern; the minimum audible
angle (Mills, 1958). However, very few studies have investigated relative sound localisation, i.e.
reporting the relative position of two sequentially presented sources. Determining the relative
location of two sound sources or the direction of movement of a source are ethologically relevant
tasks. Here we report multi-unit activity from auditory cortex of ferrets performing a relative
localisation task.
Ferrets were trained in a positively conditioned 2AFC task to report whether a target sound
originated from the left or right of a preceding reference. In standard testing, the target and
reference stimuli were 150 ms noise bursts separated by a 10 ms gap. The reference was presented
from one of 6 locations in the frontal 180° and the target was presented from a speaker 30° to the
left or right. We recorded using 32 individually moveable tungsten electrodes, in a 4x4 array in
each side of the head.
We have a total of 1284 recordings which showed sound-evoked activity (p<0.05, t-test of
mean firing rate in 50 ms before and after stimulus onset) from 207 unique recording sites in two
ferrets. Of the 1284 unit recordings, 34% and 26% of recordings from ferret 1 and ferret 2 showed
significant tuning to the location of the reference sound (p<0.05 ANOVA on firing rate during
reference presentation and reference location, post-hoc analysis to look at tuned locations). Overall,
39% of unit recordings were tuned to contralateral space, 10% tuned to midline locations, and 51%
were ipsilaterally tuned. Preliminary ROC analysis on the target-evoked spike count showed that a
small fraction of the tuned unit recordings showed a significant choice probability (16%) and a
similar fraction (16%) reliably reported the direction of the stimulus.
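Choice probability here is the ROC area between spike-count distributions sorted by the animal's choice; a minimal Python sketch using the Mann-Whitney U relation (illustrative data, not the study's recordings):

```python
import numpy as np
from scipy.stats import mannwhitneyu

def choice_probability(counts_choice_a, counts_choice_b):
    """ROC area between two spike-count distributions; 0.5 = no information."""
    u = mannwhitneyu(counts_choice_b, counts_choice_a,
                     alternative="two-sided").statistic
    return u / (len(counts_choice_a) * len(counts_choice_b))

rng = np.random.default_rng(5)
print(choice_probability(rng.poisson(8, 40), rng.poisson(10, 40)))
```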
On-going analysis is exploring the contribution of ILD and ITD cues to spatial tuning
through the use of band-pass stimuli, and relating other response measures, such as cortical location
and frequency tuning, to spatial tuning and the likelihood of observing a significant choice
probability.
Declaration of interest
The authors report no conflicts of interest. The authors alone are responsible for the content and
writing of the paper.
References
Mills A.W. 1958. On the minimum audible angle. J Acoust Soc Am, 30, 237-246.
Stevens S.S. & Newman E.B. 1936. The localization of actual sources of sound. Am J Psychol, 48, 297-306.
37. The binaural masking level difference (BMLD) in single cells of the auditory cortex
H.J. Gilbert, T.M. Shackleton, K. Krumbholz and A.R. Palmer, MRC Institute of Hearing Research,
University Park, Nottingham, UK
The binaural masking level difference (BMLD) is an often-used model for the improvement in masked signal detectability when the signal and masker are separated in space. In the BMLD paradigm, a tone
signal that is identical at the two ears (S0), masked by a noise that is also identical at the two ears
(N0) can be made 12-15 dB more detectable by sign-inverting either signal or noise at one ear (Sπ
or Nπ).
Single-cell responses to BMLD stimuli were measured in the primary auditory cortex of
Urethane-anaesthetised guinea pigs. Firing rate was measured as a function of the presentation level
of 500 Hz S0 and Sπ pure tone signals in the presence of N0 and Nπ maskers. The maskers were
white noise, low-pass filtered at 5 kHz, with a spectrum level of 23 dB SPL. Tone and masker were
both 100-ms duration and switched on and off simultaneously.
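A minimal Python sketch of the stimulus configurations (the filter order and relative tone level are illustrative assumptions):

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

fs, dur, f_sig = 48000, 0.1, 500.0             # 100-ms tone and masker
t = np.arange(int(fs * dur)) / fs
rng = np.random.default_rng(6)

sos = butter(4, 5000.0, btype="low", fs=fs, output="sos")
noise = sosfiltfilt(sos, rng.standard_normal(len(t)))   # low-pass masker
tone = np.sin(2 * np.pi * f_sig * t)

def binaural(invert_signal=False, invert_noise=False, tone_amp=0.3):
    """Return (left, right); sign-inversion at one ear gives Spi or Npi."""
    s_r = -1.0 if invert_signal else 1.0
    n_r = -1.0 if invert_noise else 1.0
    left = noise + tone_amp * tone
    right = n_r * noise + s_r * tone_amp * tone
    return left, right

n0s0 = binaural()                     # diotic signal and masker
n0spi = binaural(invert_signal=True)  # signal sign-inverted at one ear
```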
Responses were similar to those previously reported in the inferior colliculus (IC: Jiang et al,
1997 a,b; McAlpine et al, 1996; Palmer et al, 2000). At the lowest tone signal levels the response
was dominated by the noise masker, at higher signal levels the firing rate either increased or
decreased. Signal detection theory was used to determine detection threshold. Very few neurones
yielded measurable detection thresholds for all four stimulus conditions, and there was a wide range
in thresholds. However, estimated from the lowest thresholds across the entire population, BMLDs
were 14-15 dB, consistent with human psychophysical BMLDs. As in the IC, the shape of the
firing-rate vs. signal-level functions depended on the neurons’ selectivity for interaural time
difference (ITD). In summary, as in the IC, the responses were consistent with a cross-correlation
model of BMLD with detection facilitated by either a decrease or increase in firing rate.
Declaration of interest:
The authors report no conflicts of interest. The authors alone are responsible for the content and
writing of the paper.
References
Jiang D., McAlpine D. & Palmer A.R. 1997a. Detectability index measures of binaural masking
level difference across populations of inferior colliculus neurons. J Neurosci 17, 9331-9339.
Jiang D., McAlpine D. & Palmer A.R. 1997b. Responses of neurons in the inferior colliculus to
binaural masking level difference stimuli measured by rate-versus-level functions. J
Neurophysiol 77, 3085-3106.
McAlpine D., Jiang D. & Palmer A.R. 1996. Binaural masking level differences in the inferior
colliculus. J Acoust Soc Am 100, 490-503.
Palmer A.R., Jiang D. & McAlpine D. 2000. Neural responses in the inferior colliculus to binaural
masking level differences created by inverting the noise in one ear. J Neurophysiol 84, 844-852.
38. Distribution and morphology of cells expressing calcium binding proteins in the rat
inferior colliculus and their excitatory and inhibitory inputs.
B.M.J. Olthof, L.D. Orton, C. Jackson, S.E. Gartside and A. Rees, Institute of Neuroscience,
Newcastle University, Newcastle upon Tyne, UK.
The calcium binding proteins (CBP), calbindin (CB), parvalbumin (PV) and calretinin (CR) are the
most abundant fast acting cytoplasmic calcium buffers in the central nervous system and are often
implicated in neurological disorders and age related decline. Expression of these proteins has been
described in all major subdivisions of the inferior colliculus, although the specific roles of the cells
expressing them are largely unknown. To gain a better insight into the different functions these
neurons play within the IC, we examined the glutamatergic and inhibitory inputs these cells receive.
Furthermore, we aimed to establish what proportion of the CBP expressing neurons are
GABAergic. It has often been suggested that most if not all PV expressing cells are GABAergic,
while this is uncertain for CB and CR (Ouda & Syka, 2012).
Adult Wistar rats were perfused with PBS followed by 4% paraformaldehyde. The tissue was
cryoprotected in sucrose and 40 μm frozen sections were cut on a cryostat. PV, CB, CR, the vesicular transporters for glutamate (VGLUTs 1 and 2), the vesicular transporter for the inhibitory neurotransmitters GABA and glycine (VGAT), and GABA itself were detected using monoclonal and polyclonal antibodies. Binding of all
antibodies was visualised by using appropriate secondary antibodies conjugated with fluorophores.
In line with previous reports, we found that most PV expressing cells were located in the
central nucleus and most CR and CB positive cells were in the dorsal and lateral cortices. Excitatory
synapses containing VGLUT1 and 2 were found to be abundant in the dorsal and lateral cortices. In
the central nucleus, the labelling of these synapses varied along the rostro-caudal axis with denser
labelling in the caudal areas of the IC. Most somata, dendrites, and axons of PV expressing cells
were found to be surrounded by VGLUT2 labelled terminals that are known to have their origins in
the IC and other brainstem nuclei. VGLUT1 labelling, characteristic of descending inputs from the
auditory cortex (Ito & Oliver, 2010), did not show this pattern. Terminals expressing the inhibitory
marker VGAT were found to make contact with the dendrites and axons, but rarely the somata, of
PV positive cells. As observed in other studies, most, but not all, PV cells were found to co-express
GABA.
Acknowledgements
Supported by Action on Hearing Loss. B.M.J.O is supported by a studentship from Newcastle
University.
References
Ito, T. and Oliver, D.L. 2010. Origins of glutamatergic terminals in the inferior colliculus identified
by retrograde transport and expression of VGLUT1 and VGLUT2 genes. Front Neuroanat. 4,
135
Ouda, L. and Syka, J. 2012. Immunocytochemical profiles of inferior colliculus neurons in the rat and their changes with aging. Front Neural Circuits, 6, 68.
Poster Abstracts: Hearing loss and hearing aids
39. A comprehensive survey of questionnaires on hearing
K. Foreman, J. Holman and M.A. Akeroyd, MRC/CSO Institute of Hearing Research - Scottish Section,
Glasgow, United Kingdom.
As part of wider work on outcome measures for hearing interventions, we collated copies of every
published hearing difficulty questionnaire that we could find. The search was part incremental, part
systematic, but still essentially comprehensive: we believe that we have included every major
questionnaire, most of the others, and a fair fraction of the obscure ones. We excluded those wholly
devoted to tinnitus, children, or cochlear implants.
In total we found 178 questionnaires. We classified 109 as “primary” (e.g., the SSQ), while the
remaining 69 are variants or derivatives (e.g. SSQ-B, SSQ-C, and SSQ-12). In total, there were
3614 items across all the primary questionnaires. The median number of items per questionnaire
was 20; the maximum was 158.
We report here various summaries of the questionnaires and of the items included (e.g. grouped
by whether they related to hearing per se, the effects of hearing aids, or the repercussions of a
hearing loss). Across all items, about one third were concerned with the person’s own hearing,
another third with the repercussions of it, and about a quarter with hearing aids. Within the
repercussions grouping, nearly half asked about social situations, a quarter about feelings, and a fifth about family or friends.
Many questionnaires had common themes; for instance, including some form of “How much
difficulty do you have with your hearing” or “how annoyed does your partner get at your hearing
loss”. These themes essentially represent what hearing scientists and clinicians have thought
important enough about hearing to be worth asking. But it is also clear that the choice of items is an
indirect result of the genealogy of the questionnaires, in that many questionnaires take some of their
items from an earlier questionnaire, which took them from an earlier one, and so on (e.g., parts of
the SSQ were based on the AIAD, the PHAP, and Noble et al.’s (1995) unnamed questionnaire on
spatial hearing). We conclude with some recommendations for future questionnaire designs.
Acknowledgements
Supported by intramural funding from the Medical Research Council (grant number U135097131)
and the Chief Scientist Office of the Scottish Government.
References
Noble, W., Ter-Horst, K. & Byrne, D. 1995. Disabilities and handicaps associated with impaired
auditory localization. J Am Acad Audiol, 6(2), 129–140.
40. Developing a perceptual benchmark for speech-intelligibility benefit
D. McShefferty, W. Whitmer and M. Akeroyd, MRC/CSO Institute of Hearing Research – Scottish
Section, Glasgow Royal Infirmary, UK
How large does a change in speech in noise need to be for it to be meaningful to someone?
We here attempt to answer this question using objective and subjective methods. First, we measured
the just-noticeable difference (JND) in signal-to-noise ratio (SNR) to find the lower limits of
perceptually relevant features (e.g., noise reduction). Second, we measured the minimum SNR
change necessary to spur someone to seek out an intervention using different subjective-comparison
paradigms.
To measure an SNR JND, we used a variation on the classic level discrimination paradigm
using equalised sentences in same-spectrum noise with various controls and roves to ensure that the
task could only be done by listening to SNR, not level per se. Averaged across participants, the SNR
JND was 3 dB. This value was corroborated using different participants in a fixed-level task. JNDs
were not correlated with hearing ability. To measure the subjective import of an increase in SNR,
we presented paired examples of speech and noise: one at a reference SNR and the other at a
variably higher SNR. In different experiments, participants were asked (1) to rate how many
successive conversations they would tolerate given each example, (2) to rate the ease of listening of
each example, (3) if they would be willing to go to the clinic for the given increase in SNR, and (4)
if they would swap the reference SNR for the better SNR example (e.g., their current device for
another). The results showed that the mean listening-ease ratings increased linearly with a change in
SNR (experiments 1-2), but an SNR change of at least 6 dB was necessary to motivate participants
to seek intervention (experiments 3-4). To probe individual variance, a questionnaire of general and
hearing health was also administered to participants in the latter experiments.
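As an illustration of the roving logic described above, here is a minimal Python sketch (not the authors' code; all parameters are illustrative) of how two intervals differing only in SNR might be generated, with an independent level rove applied to each interval so that overall level carries no information:

```python
import numpy as np

rng = np.random.default_rng(0)

def rms(x):
    return np.sqrt(np.mean(x ** 2))

def mix_at_snr(speech, noise, snr_db):
    """Scale the noise so the speech-to-noise ratio equals snr_db, then mix."""
    noise = noise * (rms(speech) / rms(noise)) * 10 ** (-snr_db / 20)
    return speech + noise

def trial(speech, noise, ref_snr_db, delta_db, rove_db=6.0):
    """Two intervals at ref and ref + delta SNR, each with its own level rove."""
    intervals = []
    for snr in (ref_snr_db, ref_snr_db + delta_db):
        mix = mix_at_snr(speech, noise, snr)
        rove = rng.uniform(-rove_db, rove_db)      # independent per interval
        intervals.append(mix * 10 ** (rove / 20))  # overall level is uninformative
    return intervals

# Toy example (random noise stands in for the equalised sentence material):
fs = 16000
speech = rng.standard_normal(fs)
noise = rng.standard_normal(fs)
a, b = trial(speech, noise, ref_snr_db=0.0, delta_db=3.0)  # 3 dB ~ the mean JND
```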
Overall, the results indicate not only the difference limen for SNR, but also how large a change
in SNR is needed for it to be meaningful to someone. While an SNR increase less than 3 dB may
have relevance to speech-recognition performance, it may not be enough of an SNR improvement to
be reliably recognized and, furthermore, may be too little increase to motivate potential users.
Acknowledgements
Supported by intramural funding from the Medical Research Council (grant number U135097131)
and the Chief Scientist Office of the Scottish Government.
41. Assessing the contributions of cognition and self-report of hearing difficulties to speech
perception tests
A. Heinrich*, H. Henshaw§,M. Ferguson§, *MRC Institute of Hearing Research, Nottingham, §NIHR
Nottingham Hearing Biomedical Research Unit, Nottingham, UK.
Perceiving speech, particularly in noise, becomes more difficult as people age. Listeners vary
widely in their speech perception abilities even when hearing sensitivity is similar.
Speech perception abilities can be measured behaviourally using speech tasks, or subjectively by
assessing everyday hearing difficulties using self-report questionnaires. Here, we examined how
behavioural measures of speech perception performance relate to cognition and self-report of
hearing difficulties, while controlling for any confounding effects of age and hearing loss.
A group of 44 listeners aged 55-74 years with mild sensorineural hearing loss were tested with
speech tests that differed in linguistic complexity (phoneme discrimination in quiet, and digit
triplets (DT) and sentences in noise) (Ferguson et al., 2014). Self-report questionnaires assessed
hearing abilities (e.g. Speech, Spatial and Qualities of Hearing) and Quality of Life (QoL; EuroQol).
Cognitive tests included attention (Test of Everyday Attention) and memory (Visual Letter
Monitoring, Digit Span). Exploratory regression models were used to assess the relationship
between speech perception performance with both cognition and self-reported hearing ability/QoL
for each speech perception task, while controlling for age and hearing loss. Test-retest effects were
measured in 21 participants.
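As a sketch of the kind of exploratory regression described above (not the authors' analysis; the data and effect sizes below are simulated), the variance attributable to cognition over and above age and hearing loss can be estimated by comparing R² between nested models:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(5)
n = 44
age = rng.uniform(55, 74, n)
hl = rng.normal(30, 8, n)       # pure-tone average (dB HL), simulated
cognition = rng.normal(0, 1, n)
# Simulated speech-perception score depending on all three predictors:
speech = 0.3 * age + 0.5 * hl - 4 * cognition + rng.normal(0, 5, n)

base = sm.add_constant(np.column_stack([age, hl]))
full = sm.add_constant(np.column_stack([age, hl, cognition]))
r2_base = sm.OLS(speech, base).fit().rsquared
r2_full = sm.OLS(speech, full).fit().rsquared
added = r2_full - r2_base  # variance explained by cognition beyond age + HL
```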
Cognition explained more variance in speech perception performance (28%) than hearing loss
and age (20%), although most speech perception variance was explained by self-report measure
scores (61%). The amount of explained variance differed across individual speech tests, and was
greater for self-report than for cognitive measures. On retesting, the explained variance attributed to
hearing loss and age decreased, whilst that attributed to cognition increased.
For the DT test, the relative contributions of cognition and self-report differed according to
which stimulus (speech or noise) was adjusted adaptively. DT performance with variable noise was
better predicted by self-report, whereas DT performance with variable speech was better predicted
by cognition.
Researchers and clinicians need to be aware that the relationships between cognition, self-report
of hearing and hearing loss, as well as methodological issues, are important when using speech tests
as outcome measures for people with mild hearing loss.
Acknowledgements
This research was funded by the National Institute for Health Research and the Medical Research
Council.
References
Ferguson M.A., Henshaw H., Clark D.P. & Moore D.R. 2014. Benefits of phoneme discrimination
training in a randomized controlled trial of 50- to 74-year-olds with mild hearing loss. Ear
Hear, 35, e110-121.
42. Auditory and working memory training in people with hearing loss
M. Ferguson, A.Tittle and H. Henshaw. NIHR Nottingham Hearing Biomedical Research Unit,
Nottingham, UK.
One in ten older adults (55-74 years) has a significant hearing impairment. Auditory training (AT) is
an intervention designed to help listeners compensate for degradation in the auditory signal.
Auditory training is hypothesised to aid listening in two ways: bottom-up refinement of sensory
perception and development of top-down cognitive control. For home-based training interventions
to benefit people with hearing loss (PHL), any task-specific learning needs to transfer to functional
benefits in real-world listening.
A randomised controlled trial (RCT) of 44 adults with mild sensorineural hearing loss (SNHL)
showed significant on-task learning for a phonetic-discrimination task (p<.001), and generalisation
to improvements in self-reported hearing (p<.05), with the greatest improvement shown for a
challenging listening condition (Ferguson et al., 2014). Significant improvements were shown for
complex measures of divided attention (p<.01) and working memory (WM) (p<.05), with the
greatest improvements in complex conditions. No improvements were shown for speech-in-noise.
These findings suggest that benefits of auditory training may be associated with the development of
cognitive abilities over and above the refinement of sensory processing.
A double-blind RCT aimed to assess whether directly training cognition can offer increased
benefit to HA users with mild-moderate SNHL. Participants (n=57) were randomly allocated to an
active-control group (ACG) or an adaptive-training group (ATG) and completed 30-40 minutes of
Cogmed® WM training, 5 days/week over 5 weeks. Participants in the ATG received adaptivedifficulty training based on individual performance. Participants in the ACG received a nonadaptive practice-level version of Cogmed®. All training was delivered online. Optimal measures of
speech perception, cognition and self-reported hearing were assessed pre- and post-intervention
(Henshaw & Ferguson, 2014). Significant on-task learning generalised to improvements in trained
WM (backwards digit span, p<.001) and untrained WM (visual letter monitoring, p<.001) tasks. No
improvements were shown for a competing-speech perception task. It is argued that training is most
effective when it targets core deficits.
Thus, training interventions that target cognitive skills embedded within auditory tasks are most
likely to offer generalised benefits to the real-world listening abilities of people with hearing loss.
Acknowledgements
This research was funded by the National Institute for Health Research.
References
Ferguson M.A., Henshaw H., Clark D.P. & Moore D.R. 2014. Benefits of phoneme discrimination
training in a randomized controlled trial of 50- to 74-year-olds with mild hearing loss. Ear Hear,
35, e110-121.
Henshaw H. & Ferguson M.A. 2014. Assessing the benefits of auditory training to real-world
listening: identifying appropriate and sensitive outcomes. In: T. Dau (ed.) Proc ISAAR 2013:
Auditory Plasticity - Listening with the Brain. Nyborg, Denmark, pp. 45-52.
43. A James Lind priority setting partnership for mild to moderate hearing loss – what
should research listen to?
H. Henshaw*, L. Sharkey§, V.Vas* and M. Ferguson* on behalf of the JLA Mild to Moderate
Hearing Loss PSP Steering Group. *NIHR Nottingham Hearing Biomedical Research Unit, UK.
§Hearing Link.
Adult-onset hearing loss is the 13th largest cause of disease burden worldwide (WHO, 2004). Yet
funding for research into hearing remains limited, with just £47 spent on hearing research per
disability-adjusted life year (DALY), compared with the markedly higher amounts spent on other
priority areas, such as vision (£99 per DALY). It is estimated that up to 85% of funds invested in
medical research are of no direct value in meeting patient needs (Chalmers and Glasziou,
2009). There is general consensus that in order for research to have an impact on those who may
benefit from it, the research process itself should engage clinicians, patients and members of the
public as much as possible.
The James Lind Alliance (JLA) Priority Setting Partnership (PSP) is an initiative that unites
patients, clinicians and professional bodies to be active members in research prioritisation. The JLA
PSP will offer patients with a given health condition and their clinicians the opportunity to identify
and prioritise “uncertainties” about interventions they think should be at the forefront of current
research.
A JLA PSP has been recently formed to identify and prioritise uncertainties for the treatment of
mild to moderate hearing loss. The project will be driven forward by a Steering Group representing
patients (patient representatives, Hearing Link, AoHL, GP), clinicians, and their representative
bodies (BSA, BAA, BSHAA and ENT UK). A survey will be developed and distributed through the
stakeholders to raise awareness among as many clinicians and patients as possible. This survey is
designed to determine uncertainties across a range of aspects along the patient-pathway for those
with mild to moderate hearing loss (e.g. prevention, screening, diagnosis, management). Responses
will be collated, grouped into themes, and indicative research questions generated for prioritisation.
Indicative questions will be identified as ‘true’ uncertainties by comparison with the existing research
literature. A final prioritisation workshop will be held in which patients and clinicians rank the top
10 uncertainties, to be disseminated to research commissioners and funding bodies. The
uncertainties will be formatted for inclusion in the UK Database of Uncertainties about the Effects
of Treatments (UK DUETs).
Acknowledgements
This research was funded by the Nottingham University Hospitals NHS Trust Charity and the
National Institute for Health Research.
References
World Health Organization (2004). The global burden of disease: 2004 update. Switzerland:
WHO press.
Chalmers I, Glasziou P (2009). Avoidable waste in the production and reporting of research
evidence. Lancet 374:86-89.
44. Challenges in testing binaural temporal fine structure cues in hearing impaired listeners
R.K.Hietkamp, L.Bramsløw, M.Vatti and N.H.Pontoppidan, Eriksholm Research Centre, DK
Recent studies (e.g. Lunner et al, 2012) have shown that temporal fine structure (TFS) information
plays a role in complex listening situations. Therefore it is important to investigate the degraded
nature of TFS perception in hearing-impaired (HI) listeners. TFS abilities can be tested either
monaurally, e.g. frequency discrimination, or binaurally, e.g. via spatial cues in the form of interaural
time differences (ITD) or interaural time envelope differences (ITED). The present study focuses on
the challenges of testing TFS binaurally versus monaurally.
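As an illustration of the binaural manipulation at issue, here is a minimal sketch (whole-sample delay; actual TFS tests manipulate the fine structure more carefully) of imposing an ITD on a tone, where roughly 700 µs is the order of the largest ITD produced by a human-sized head:

```python
import numpy as np

def apply_itd(x, fs, itd_us):
    """Return a stereo pair in which the right channel lags the left by
    itd_us microseconds (rounded to whole samples)."""
    n = int(round(fs * itd_us / 1e6))
    right = np.concatenate([np.zeros(n), x])[: len(x)]
    return np.stack([x, right])

fs = 48000
tone = np.sin(2 * np.pi * 500 * np.arange(fs) / fs)  # 500 Hz carrier
stereo = apply_itd(tone, fs, itd_us=700)  # near the head-size boundary
```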
We tested 8 normal hearing and 26 hearing impaired listeners with a 2 alternative-forced-choice
(2AFC) test. Prior to testing, listeners were subjected to a stepwise training (Hietkamp et al, 2010)
where the magnitude of the stimuli was considerably larger than in the test itself. NH listeners
showed stable monaural and binaural test results. Binaural TFS (ITD) and binaural envelope (ITED)
thresholds for this group were below the boundary of real-life sounds imposed by the human head
size. For HI listeners, the results of the binaural TFS (ITD) and binaural envelope (ITED) tests
showed increased variability compared to the monaural 2AFC test. Furthermore, ITD and ITED
thresholds often exceeded the boundary of real-life sounds; the consequence was that the presented
stimuli alternated between being below and above this boundary, evoking ambiguous percepts:
movement from the middle towards one side, from the middle towards both ears, or even a change of loudness.
Because of the nonlinear and multidimensional relation of the percepts to the magnitude of the
manipulations in the binaural ITD and ITED tests, an effective stepwise training was not possible.
The lack of significant results might be due to an inability to provide sufficient training.
Declaration of interest
The authors report no conflicts of interest. The authors alone are responsible for the content and
writing of the paper.
References
Hietkamp R.K., Andersen M.R. & Lunner T. 2010. Perceptual audio evaluation by hearing-impaired
listeners – some considerations on task training. In: Buchholz J., Dalsgaard J.C., Dau T. &
Poulsen T. (eds.) Binaural Processing and Spatial Hearing. Proc 2nd Int Symp Audiological
Auditory Res, ISAAR, pp. 487-496.
Lunner, T., Hietkamp, R.K., Andersen, M.R. & Hopkins, K. 2012. Effect of Speech Material on the
Benefit of Temporal Fine Structure Information in Speech for Young Normal-Hearing and
Older Hearing-Impaired Participants. Ear Hear, 33, 377-388.
45. Evaluation of individualized binaural earpiece for hearing-aid applications
F. Denk*#, M. Hiipakka*#, B. Kollmeier*# and S.M.A. Ernst*#, #Cluster of Excellence
"Hearing4all", *Medizinische Physik, University of Oldenburg, Oldenburg, Germany
Modern hearing aid signal processing is increasingly able to offer great benefit to listeners. Even a
target group with moderate hearing loss or near-to-normal hearing could profit from some
hearing aid features, such as binaural directional filters, noise suppression or speech enhancement.
However, one important challenge is still to achieve high perceptual plausibility and a natural sound
impression, i.e. acoustic transparency of the hearing devices, in order to gain acceptance,
especially among potential first-time users.
This work evaluated a new open-fitting hearing aid earpiece prototype that can be individually
calibrated in situ to provide acoustic transparency, i.e. it is individually equalized to produce a
similar frequency response as the open ear canal. Each prototype includes two balanced-armature
receivers and three microphones. The microphones are located inside the ear canal, at the ear canal
entrance, and in the concha. Both of the receivers and two of the microphones are located inside a
small tube that can easily be inserted into individually shaped earmolds.
Evaluation methods include technical measurements using a dummy head with exchangeable
ear canal simulator (DADEC) and listening tests with human test subjects. The degree of perceived
transparency was determined using a three-interval two-alternative (ABX) forced choice procedure,
comparing the new earpiece with a simulated open ear canal reference. Twenty-five repetitions of 20 short
audio samples divided in three categories (speech, music and environmental) were presented in
random order. Ten normal-hearing subjects (5 experts, 5 inexperienced) participated in the
experiment. Statistical analysis of the results showed that subjects could not reliably distinguish
the two presentation conditions for the music and speech categories.
However, for technical sounds (e.g. ringtones) with considerable energy at high frequencies, the
subjects were able to differentiate with higher accuracy.
In a second test, the earpiece was evaluated for subjective listening quality when driven with a
selection of hearing aid algorithms using a combined discrimination and classification (CoDiCl) test
routine. Overall it could be shown that the new earpiece offers a high degree of signal control,
which allows for individual high quality and transparent sound presentation. It is therefore
promising as a high-fidelity assistive listening device that can be scaled up to a complete
hearing aid when necessary, while also being acceptable to near-to-normal hearing patients. Thus, it
could pave the way to earlier and therefore more effective treatment of hearing loss.
Acknowledgements
Supported by BMBF 13EZ1127D and DFG FOR1732.
46. Obstacle avoidance using echolocation and sensory substitution information
A.J. Kolarik*§, A.C. Scarfe*, S. Cirstea*, B.C.J. Moore§* and S. Pardhan*, *Vision and Eye
Research Unit, Anglia Ruskin University, Cambridge, UK, §Department of Psychology, University
of Cambridge, Cambridge, UK
In the absence of vision, sound can provide important spatial information to aid navigation. Bats
and dolphins echolocate by emitting sound bursts and use the returning echoes to detect objects in
the vicinity. Some blind people also use self-generated clicking sounds to echolocate, and both blind
and sighted people can be trained to echolocate (Rowan et al, 2013; Kolarik et al, 2014). Very few
studies have investigated the effectiveness of echolocation for human navigation, which requires
precise motor responses to avoid collisions when walking around obstacles. Empirical evidence
showing that echolocation can guide human locomotion is lacking. Sensory substitution devices
(SSDs) that work on an echolocation principle have been developed as electronic travel aids for the
blind. Tactile SSDs use an ultrasound source and a receiver to detect signal reflections from nearby
obstacles, and the rate of vibration applied to the skin informs the participant about distance to the
obstacle. However, few studies have assessed the accuracy of navigation using SSD information.
The current study compared performance using echolocation, vision, and a tactile SSD for
blindfolded sighted participants navigating around a single flat 0.6 x 2 m obstacle. It was expected
that vision would provide much better locomotor control than echolocation or the SSD. The
obstacle was positioned at the midline, or 25 cm to the right or left of the participant, and the
position was varied randomly from trial to trial. Participants approached the obstacle from distances
of 1.5 or 2 m. Motion analyses were conducted using a Vicon 3D system, which allows the accurate
quantification of human movement. Buffer space (clearance between obstacle and shoulder),
movement times, and number of impacts were smallest with visual guidance, larger for the SSD,
and greatest for echolocation. The changes in movement times suggest increased caution with the
SSD and echolocation. The results show that echolocation can be used to guide human movement,
but movement is not as accurate as when an SSD or visual information is used.
Acknowledgements
The research was supported by the Vision and Eye Research Unit (VERU), Postgraduate Medical
Institute at Anglia Ruskin University.
References
Kolarik A.J., Cirstea S., Pardhan S. & Moore B.C.J. 2014. A summary of research investigating
echolocation abilities of blind and sighted humans. Hear Res, 310, 60-68.
Rowan D., Papadopoulos T., Edwards D., Holmes H., Hollingdale A., Evans L. & Allen, R. 2013.
Identification of the lateral position of a virtual object based on echoes by humans. Hear Res,
300, 56-65.
47. Growing cochlear fibrocyte cultures on transplantable matrices
D.N. Furness, M.Z. Israr, N.C. Matthews, A.H. Osborn, K. Riasat and S. Mahendasingam, Institute
for Science and Technology in Medicine, Keele University, Keele, UK
Lateral wall fibrocytes play an important role in maintaining cochlear fluids by recycling potassium
from endolymph to perilymph. It is thought that fibrocyte degeneration may lead to hearing loss,
so replacing fibrocytes may prevent this (Mahendrasingam et al, 2011). Fibrocyte cultures are a source
of potentially transplantable cells, but when grown in standard culture conditions they do not
resemble native fibrocytes and have proved difficult to transplant successfully in vitro. We have
developed methods of fibrocyte culture in injectable matrices of collagen I. Here we evaluate the
morphology and ultrastructure of these cells.
We prepared gels in two ways. (i) 3 mg/mL collagen I gels were made in standard 24 well
plates and solidified. Cells grown in 2D cultures in T25 flasks were harvested and seeded at a
density of 10 000 – 50 000 cells per cm² onto the surface of the gels and grown for 1 – 4 d. (ii) We
developed a simple way of culturing cells in the collagen gel solidified within a pipette tip.
Approximately 100 µL of collagen I gel solution (3 mg/mL final concentration) was drawn up into a
1 mL syringe through a cut-down yellow pipette tip. This was followed by 50 µL of cell suspension.
The gels became solidified overnight and were kept for 1 – 2 d. Gels were then removed from the
wells or extruded from the tips, fixed and prepared for either scanning or transmission electron
microscopy, or for immunocytochemistry using our standard methods.
Cells survived for at least 2 d on all the gels. Immunocytochemistry showed that almost all
expressed aquaporin, a marker characteristic primarily of native type III fibrocytes. However, some
cells also expressed S-100 or Na,K-ATPase, characteristic of other fibrocyte types, suggesting a
mixed phenotype. Scanning and transmission electron microscopy revealed cell morphologies that
were partially consistent with the suggestion that the cells were phenotypically type III, but they
also had other features that were less reminiscent of native cochlear fibrocytes.
These data suggest that it is possible to grow viable fibrocyte cultures in gels that could be used
for transplantation. The extrusible gels can be manipulated relatively easily from their pipette tip,
providing a potentially ideal way of introducing them into the lateral wall in a controlled manner,
without damage to the cultured cells themselves.
Acknowledgements
This research was funded by Action on Hearing Loss and the Freemasons’ Grand Charity.
References
Mahendrasingam S., Macdonald J.A. & Furness D.N. 2011. Relative time course of degeneration of
different cochlear structures in the CD/1 mouse model of accelerated aging. J Assoc Res
Otolaryngol 12, 437-453.
48. Ultrastructural localization of TMHS in mouse cochlear hair cells
C.M. Hackney*, S. Mahendrasingam§, D.N. Furness§, and R. Fettiplace‡, *Department of
Biomedical Sciences, University of Sheffield, UK, §Institute for Science and Technology in
Medicine, Keele University, UK, ‡Department of Physiology, University of Wisconsin-Madison,
USA.
Hair cells are the mechanosensors in the cochlea where they convert the mechanical vibrations
caused by sound into electrical signals to be carried along the auditory nerve to the
brain. Deflection of the stereocilia of hair cells is believed to place a strain on the
mechanotransduction channels, which then open, letting through K+ ions and resulting in
depolarization of the cell. The molecular
components of the mechanotransduction channels of the hair cells are still under
investigation. Recently, mechanotransduction has been shown to be impaired in mice lacking a
tetraspan membrane protein, TMHS. Immunofluorescence suggests this protein is found near the
stereociliary tips (Xiong et al, 2012). Here we have utilized post-embedding immunogold labelling
and stereological analysis to localize this protein more precisely in P5 mice. Our data
confirm the suggested stereociliary tip localization but they also show TMHS in the cytoplasm as
well as in the membranes, suggesting that during development this protein is trafficked to the
membranes from the sites of synthesis in the cell. It would be interesting to know if this
shipment continues into adult life as the turnover time of this protein may be relevant to
understanding some kinds of hearing loss. Because mutations of the gene for TMHS lead to deafness,
and because its distribution may indicate the precise anatomical position of the mechanotransduction
channels, we believe that ultrastructural localization could provide further clues to its function.
Post-embedding immunogold labelling provides a way of doing this, particularly when combined
with quantitative analysis of precise localization.
Acknowledgements
This work has been sponsored by the NIH USA.
References
Xiong W., Grillet N., Elledge H.M., Wagner T.F., Zhao B., Johnson K.R., Kazmierczak P., Müller
U. 2012. TMHS is an integral component of the mechanotransduction machinery of cochlear
hair cells. Cell, 151(6), 1283-1295.
49. A real-time hearing aid simulator on the open development platform
X. Yang, U. K. Ravuri, A. Gautam, S. Kim and S. Bleeck, Institute of Sound and Vibration Research,
University of Southampton, UK
With the development of digital technology, hearing aids have become more sophisticated. Advanced
signal processing strategies have become possible, such as noise reduction, feedback cancellation,
frequency compression/lowering and bilateral processing. Evaluation of such acoustic processing
schemes in realistic environments is essential.
To facilitate this, we are developing a real-time evaluation platform ― the "Open Development
Platform" (ODP) ― to implement real-time acoustic systems. The main goals are:
• To enable use by a wide range of users with a minimum of programming knowledge
• To use MATLAB® code in a simple Simulink® environment (via the Simulink Real-Time® Toolbox)
• To support a wide range of different hardware platforms (currently, ODP supports Speedgoat® target machines, Texas Instruments® DSP-based systems, and Raspberry Pi® boards)
• To provide a library of 'standard' acoustic signal processing algorithms that can be used out-of-the-box (filters, feedback cancellation, compression, amplification, and also more complex auditory models, etc.)
• To enable online parameter tuning at runtime
• To release the platform under a GNU licence to facilitate growth from the community
• To provide sufficient support, documentation and help
Our goal is to enable an average MSc student to build their own take-away real-time hearing aid
within a day. We demonstrate in the poster an example of a basic multi-band unilateral hearing aid
on a target machine.
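To give a flavour of the kind of out-of-the-box building block the library is intended to provide, here is a minimal offline sketch of a multi-band aid (filterbank, per-band gain, resynthesis). The ODP itself is MATLAB®/Simulink®-based and runs in real time; the Python below, with illustrative band edges and gains, is only a conceptual stand-in:

```python
import numpy as np
from scipy.signal import butter, sosfilt

def multiband_aid(x, fs, edges=(250, 1000, 4000), gains_db=(5, 15, 25)):
    """Split x into bands at `edges` (Hz), apply per-band gain, and sum.
    Band layout and gains are illustrative, not a prescription rule."""
    bands = [(20, edges[0])] + list(zip(edges[:-1], edges[1:])) + \
            [(edges[-1], 0.45 * fs)]
    gains = [1.0] + [10 ** (g / 20) for g in gains_db]  # 0 dB in the lowest band
    y = np.zeros_like(x)
    for (lo, hi), g in zip(bands, gains):
        sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        y += g * sosfilt(sos, x)
    return y

fs = 16000
x = np.random.default_rng(1).standard_normal(fs)  # stand-in input signal
y = multiband_aid(x, fs)
```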
Acknowledgements
The work leading to this deliverable and the results described therein have received funding from the
People Programme (Marie Curie Actions) of the European Union’s Seventh Framework Programme
FP7/2007-2013/ under REA grant agreement n° PITN-GA-2012-317521.
50. Do hearing impaired individuals tolerate different output delay lengths than normal
hearing individuals?
T. Goehring, J.L. Chapman, J.J.M. Monaghan and S. Bleeck, Institute of Sound and Vibration
Research, University of Southampton, UK
Current hearing aids issued to hearing impaired listeners are designed to provide less than 10
milliseconds of output delay; during this time, complex digital signal processing algorithms are
applied to enhance signal quality. Although increased output delay could allow for heightened
signal quality and so result in increased hearing aid use and quality of life, excessive delay can be
bothersome to the listener. Previous literature investigating delay tolerance (Stone & Moore,
1999-2008; Agnew & Thornton, 2000) has focused predominantly on normal hearing participants. The
present study aimed to highlight the importance of testing hearing impaired and normal hearing
listeners under the same conditions using real-time processing and measuring delay tolerance for
application to hearing aid use. It also aimed to investigate long-term acclimatisation to output delay.
Twenty hearing-impaired and twenty normal-hearing participants took part; their tolerance
of delays ranging from 10 to 50 milliseconds, for both their own voice and an external speaker,
was assessed using subjective ratings.
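The delay manipulation itself reduces to a sample shift; a minimal sketch (the study's real-time own-voice path is more involved, so treat this as illustrative only):

```python
import numpy as np

def apply_output_delay(x, fs, delay_ms):
    """Delay signal x (sampled at fs Hz) by delay_ms milliseconds,
    zero-padding the start and keeping the original length."""
    n = int(round(fs * delay_ms / 1e3))
    return np.concatenate([np.zeros(n), x])[: len(x)]

fs = 16000
x = np.random.default_rng(2).standard_normal(fs)
delayed = apply_output_delay(x, fs, delay_ms=30)  # within the 10-50 ms range tested
```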
Findings revealed a significant increase in delay tolerance for hearing impaired compared to
normal hearing listeners, demonstrating the importance of recruiting hearing impaired participants.
Tolerance also tended to increase with increasing degree of hearing loss, and experienced hearing aid
users tended to be more tolerant than new users, although neither effect reached significance. The
latter trend could indicate long-term acclimatisation to delays.
These findings support a possible increase in the output delay of current hearing aid technology,
permitting more sophisticated algorithms with better performance and/or lower energy
consumption.
Acknowledgements
The work leading to this deliverable and the results described therein have received funding from the
People Programme (Marie Curie Actions) of the European Union’s Seventh Framework Programme
FP7/2007-2013/ under REA grant agreement n° PITN-GA-2012-317521.
References
Stone M.A. & Moore B.C.J. 1999-2008. Tolerable hearing aid delays I-IV. Ear Hear, 20(3), 182.
Agnew J. & Thornton J.M. 2000. Just noticeable and objectionable group delays in digital
hearing aids. J Am Acad Audiol, 11(6), 330-336.
51. Android application of hearing aids on Google Glass
A. Gautam, U.K. Ravuri, X. Yang, T. Goehring, S. Kim and S.Bleeck, Institute of Sound and
Vibration Research, University of Southampton, UK.
Google Glass is a high-tech, hands-free augmented-reality computer developed by Google with the
aim of replacing existing computers and changing the future of assistive technology. It combines the
real and the virtual world within a single frame. A user can interact with the device through the
Google cloud server, which is accessible via Wi-Fi. The device uses Bluetooth to communicate with
other external devices.
For programming, Google provides the ‘Mirror Application Programming Interface (API)’ to
facilitate application development, which is implemented using Java. Sound is played back to the
user via a bone-conduction transducer on the temporal bone.
Our objective is to investigate the usability of the glasses as an Android-based hearing aid. The
envisaged application will interact with the sound processor to perform the functions of
amplification, compression and noise reduction as implemented in state-of-the-art hearing devices.
Bone-conduction audio technology greatly improves feedback performance. The voice-control
feature is particularly useful for setting hearing aid parameters such as noise reduction and volume.
We aim to produce a system that will be useful not only for users with mild or moderate hearing
impairment but also for normal-hearing people.
On our poster we will explore the potential of Google Glass as a hearing aid, providing an
overview of technical issues such as latency, processing capabilities, frequency range and audio
quality. We will also present a demo version of the Glass hearing aid.
Acknowledgements
The work leading to this deliverable and the results described therein have received funding from the
People Programme (Marie Curie Actions) of the European Union’s Seventh Framework Programme
FP7/2007-2013/ under REA grant agreement n° PITN-GA-2012-317521.
52. MicUp! A proposed head-controlled multichannel wireless system for hearing aids
W.O. Brimijoin* and A.W. Archer-Boyd§*, *MRC/CSO Institute of Hearing Research, Scottish
Section, Glasgow, UK, §Institute of Communication Acoustics, Ruhr-Universität Bochum, Bochum,
Germany
Hearing aids with directional microphones and noise reduction programs cannot always provide
sufficient speech intelligibility in reverberant and noisy environments. As an alternative means of
increasing SNR, several hearing aid manufacturers offer additional hand-held microphones that can
be linked wirelessly to the hearing aid. When the microphone is held close to the talker’s mouth, it
can transmit a low-noise signal directly to the hearing impaired listener. Providing a “clean” signal
directly to the ear has been shown to yield a significant intelligibility benefit (e.g., Whitmer et al,
2011). The primary issue with this solution, however, is that while it can be used to improve the
intelligibility for a single talker close to the microphone, many typical listening situations involve
group conversations, where the focus shifts rapidly from one person to another. These situations
demand that the microphone be passed from talker to talker during a conversation, a requirement
that is often impractical.
Here we propose an interactive multichannel wireless system to address the above issue. In this
system each talker is given a special badge (see Hart et al, 2009) comprising a microphone, an
infrared LED, a short-range FM transmitter, and a battery. Each badge’s LED flashes at a unique
rate and transmits the microphone signal continuously at a predefined FM frequency. The listener
uses hearing aids mounted with FM receivers for each of the badges, an infrared micro-camera, and
a low-power processor. The infrared camera identifies each badge by its flashing rate and uses its
position in space to dynamically adjust the overall level (highest for signals in front) and – using
anechoic binaural impulse responses – the apparent lateral location of each of the wireless signals.
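A sketch of the head-orientation gain rule described above, under assumed parameters (the cosine weighting and 12 dB depth are illustrative, not the authors' design; the per-badge binaural impulse response convolution is indicated but omitted):

```python
import numpy as np

def frontal_gain_db(azimuth_deg, max_atten_db=12.0):
    """Gain (dB) for a badge at `azimuth_deg` relative to the head:
    0 dB straight ahead, falling smoothly to -max_atten_db at the rear.
    The cosine shape is an assumption, not the authors' rule."""
    w = 0.5 * (1 + np.cos(np.radians(azimuth_deg)))  # 1 at front, 0 at rear
    return -max_atten_db * (1 - w)

def mix_badges(signals, azimuths_deg):
    """Weight each badge's microphone signal by head-relative azimuth and sum.
    Real spatialization would convolve each signal with the anechoic binaural
    impulse response for its azimuth before summing (omitted here)."""
    out = np.zeros_like(signals[0])
    for x, az in zip(signals, azimuths_deg):
        out += x * 10 ** (frontal_gain_db(az) / 20)
    return out

rng = np.random.default_rng(3)
talkers = [rng.standard_normal(16000) for _ in range(3)]
mixed = mix_badges(talkers, azimuths_deg=[0, 45, -120])
```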
This system allows the listener’s natural head orientation to function as a source selector and
level controller. The use of impulse responses to place the signals at the correct angles in space
provides auditory localization cues to the listener, potentially further improving speech
intelligibility and making it simpler to locate and orient towards the next talker in a multi-talker
situation. We have built two headphone prototypes of the system, one integrated using an Arduino
microcontroller and FM receiver, and one using real time spatialization on a laptop.
Declaration of interest
The authors report no conflicts of interest. The authors alone are responsible for the content and
writing of the paper.
References
Whitmer, W.M., Brennan-Jones C.G., & Akeroyd M.A. 2011. The speech intelligibility benefit of a
unilateral wireless system for hearing-impaired adults. Int J Audiol, 50.12: 905-911.
Hart, J., Onceanu, D., Sohn, C., Wightman, D., & Vertegaal, R. 2009. The attentive hearing aid:
Eye selection of auditory sources for hearing impaired users. In: Human-Computer Interaction,
Springer Berlin Heidelberg, pp. 19-35.
53. The use of the Analytical Hierarchy Process to evaluate an algorithm designed to reduce
the negative effects of reverberation
N.S. Hockley, C. Lesimple and M. Kuriger, Bernafon AG, Bern, Switzerland
When someone is speaking in a room, the sound field consists of direct sound originating from
the speaker and indirect sound in the form of reverberation. The intensity of the speaker’s sound
decreases with increasing distance, however, the reverberation is assumed to be uniform in intensity
throughout the room. When the speaker is close to the listener, the direct sound will be more intense
than the reverberation. As the speaker moves away from the listener, the intensity of the direct
sound will decrease, however, the intensity of the reverberation will remain constant. Reverberation
can have detrimental effects for the hearing-impaired individual (e.g., Nabelek & Robinette, 1978)
and also challenges the directional performance of a hearing aid. These negative effects could be
addressed by a dedicated algorithm (Walden et al, 2004). The challenge when designing such an
algorithm is to remove the potentially negative effects of reverberation while maintaining sound
quality and speech intelligibility.
Two questions were addressed in this investigation. The first is, does the algorithm provide a
perceptible attenuation of reverberation? The second is, does the algorithm have a noticeable effect
on the quality of speech within a reverberant environment?
Eleven normal hearing subjects participated in this investigation. The methodology used was
based on the Analytical Hierarchy Process (AHP) (Saaty, 2008). The AHP is a structured technique
for organizing and analyzing complex decision-making. It provides an analytic measure of
multidimensional scaling and is based on paired comparisons that are embedded within a reciprocal
matrix. Paired-comparison ratings were used to measure the amount of reverberation, speech
naturalness, and overall preference. These criteria were derived from recommendations from the
International Telecommunication Union (ITU, 1996; 2003). ITU (2003) specifies the methodology
to evaluate speech communication systems that include noise suppression algorithms. As a high
level of reverberation can be considered a useless signal, much like noise, a reverberation reduction
algorithm might be expected to behave somewhat like a noise suppression algorithm.
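As a sketch of the standard AHP computation (not necessarily the authors' specific implementation): priority weights are taken from the principal eigenvector of the reciprocal paired-comparison matrix, and Saaty's consistency ratio checks that the judgements are coherent:

```python
import numpy as np

def ahp_weights(M):
    """Priority weights and consistency ratio for a reciprocal
    paired-comparison matrix M (M[i, j] = 1 / M[j, i])."""
    vals, vecs = np.linalg.eig(M)
    k = np.argmax(vals.real)
    w = np.abs(vecs[:, k].real)
    w /= w.sum()                         # principal eigenvector, normalised
    n = M.shape[0]
    ci = (vals[k].real - n) / (n - 1)    # consistency index
    ri = {3: 0.58, 4: 0.90, 5: 1.12}[n]  # Saaty's random indices (subset)
    return w, ci / ri                    # weights, consistency ratio (< 0.1 is acceptable)

# Example: three criteria compared pairwise on Saaty's 1-9 scale
M = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])
w, cr = ahp_weights(M)
```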
The reverberation reduction algorithm was tested in a reverberant condition using four different
settings (strengths) of the algorithm and unprocessed (off). In all of the paired comparisons, the
processed conditions were preferred regardless of the settings.
The subjects responded consistently, and there was a high degree of consensus between
them. It was found that the algorithm provided a clear and constant reduction of the amount
of reverberation. With a stronger setting, this difference was even more noticeable. Speech
naturalness was found to be better with the algorithm than without, regardless of the tested setting.
The implication of these findings for further investigations will be discussed along with the
potential feasibility of using this algorithm within a hearing aid.
Acknowledgements
The authors would like to thank Matthias Bertram for his help in developing the prototype and
Bernhard Künzle for his useful comments and guidance. The authors are all employees of Bernafon
AG. The authors alone are responsible for the content and writing of the paper.
References
International Telecommunications Union (ITU-T) (1996). Recommendation P.800: Methods for
subjective determination of transmission quality. Geneva, Switzerland.
International Telecommunications Union (ITU-T) (2003). Recommendation P.835: Subjective test
methodology for evaluating speech communication systems that include noise suppression
algorithm. Geneva, Switzerland.
Nabelek A. K. & Robinette L. (1978). Influence of the precedence effect on word identification by
normally hearing and hearing-impaired subjects. J Acoust Soc Am, 63(1), 187-194.
Saaty T. L. (2008). Decision making with the analytic hierarchy process. International J Serv Sci,
1(1), 83 - 98.
Walden B. E., Surr R. K., Cord M. T., & Dyrlund O. (2004). Predicting hearing aid microphone
preference in everyday listening. J Am Acad Audiol, 15(5), 365-396.
54. Validation of the Spatial Fixed-SNR (SFS) test in anechoic and reverberant conditions
N.S. Jensen*, S. Laugesen*, F.M. Rønne*, C.S. Simonsen§ and J.C.B. Kijne§, *Eriksholm Research
Centre, Oticon A/S, Snekkersten, Denmark, §Oticon A/S, Smørum, Denmark
Adaptive speech-intelligibility tests are widely used in the assessment of hearing aids. Typically, the
administration of such tests is easy and the results are reliable. However, one problem associated
with adaptive tests is the fact that different listeners are tested at different signal-to-noise ratios
(SNRs), introducing a possible confounding effect of the SNR on the outcome of the test (Bernstein,
2011). Furthermore, the test SNRs are often substantially lower than the typical daily-life SNRs
(Smeds et al, 2012), for which the hearing aids under test were designed.
In order to address these problems, the SFS test was developed. It is based on a paradigm where
some of the parameters in a spatial speech-on-speech test are set individually in order to
‘manipulate’ all listeners in a given study towards the same speech reception threshold (SRT). This
will enable a comparison of hearing aids at the corresponding fixed SNR without encountering floor
or ceiling effects. Based on findings from a previous experiment (Rønne et al, 2013), the variable
test parameters were scoring method, masker gender, and the target-masker spatial separation. The
target speech material was HINT sentences, while the masker speech material was recordings of
running speech.
In two separate studies, the Danish SFS test was implemented and validated in an anechoic
chamber and in a sound-treated listening room, respectively. N=26 and N=19 hearing-impaired test
subjects took part in the two studies. Whereas the two test protocols differed in some respects, e.g.
by including two different experimental hearing-aid contrasts, both studies investigated the ability
of the SFS test to bring listeners to perform at a given SNR, and both studies assessed the test-retest
reliability.
The results indicated that the ‘SRT manipulations’ were reliable and similar in the anechoic and
reverberant test conditions. In both conditions, the test-retest standard deviation was estimated to be
around 8.5%. Better overall test performance was observed in the anechoic chamber, probably due
to the reverberation present in the listening room. In conclusion, the SFS test is a reliable method to
compare different hearing aids at a given fixed SNR.
Declaration of interest
The authors report no conflicts of interest. The authors alone are responsible for the content and
writing of the paper.
References
Bernstein J.G.W. 2011. Controlling signal-to-noise ratio effects in the measurement of speech
intelligibility in fluctuating maskers. Proc 3rd ISAAR, Nyborg, Denmark, 33-44.
Rønne F.M., Laugesen S., Jensen N.S., Hietkamp R.K. & Pedersen J.H. 2013. Magnitude of
speech-reception-threshold manipulators for a spatial speech-in-speech test that takes signal-to-noise
ratio confounds and ecological validity into account. Proc 21st ICA, Montreal, Canada, 19, 1-8.
Smeds K., Wolters F. & Rung M. 2012. Estimation of Realistic Signal-to-Noise Ratios. Poster
presented at the International Hearing Aid Conference (IHCON), Lake Tahoe, USA.
55. Identifying an ideal alternative television audio mix for hearing-impaired listeners
R.E. Davis*§, D.F. Clark*, W.M. Whitmer§ and W.O. Brimijoin§, *Department of Computing,
University of the West of Scotland, Paisley, UK, §MRC/CSO Institute of Hearing Research, Scottish
Section, Glasgow, UK
Currently, television (TV) viewers cannot adjust the ratio between the speech and background
sound of TV broadcasts, only the overall level, which contributes little to speech intelligibility.
However, digital broadcast services such as the British Broadcasting Corporation (BBC) Red
Button and iPlayer potentially provide a means to simultaneously broadcast different audio versions
of TV programmes. If “high intelligibility” audio were broadcast alongside standard audio, viewers
could use a button on their TV remotes to swap between them. It was the intent of this study to
determine the ideal speech-to-background ratio for such a transmission.
The experiment consisted of two main parts, both of which used BBC TV programmes. For
part one, subjects were shown short video clips with pre-set sound mixes, and asked to rate the
audio on a subjective scale. For part two, subjects were shown a single long clip and asked to adjust
the levels for the music and effects tracks separately until they were satisfied by the mix. Both parts
of the experiment incorporated speech-to-background ratios ranging from -6 to +30 decibels (dB),
with 0 dB corresponding to the broadcast reference mix. The subject group consisted of 10 normal-hearing
listeners (6 female) with a median age of 23, and 21 hearing-impaired listeners (12 female) with a
median age of 63.
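The remixing itself reduces to scaling the background stem relative to the speech stem before summing; a minimal sketch with placeholder stems:

```python
import numpy as np

def remix(speech_stem, background_stem, ratio_db):
    """Mix stems with the background attenuated by ratio_db relative to the
    0 dB broadcast reference mix (speech_stem + background_stem)."""
    return speech_stem + background_stem * 10 ** (-ratio_db / 20)

rng = np.random.default_rng(4)
speech = rng.standard_normal(48000)      # placeholder speech stem
background = rng.standard_normal(48000)  # placeholder music + effects stem
high_intelligibility = remix(speech, background, ratio_db=12.0)  # the +12 dB mix
```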
The results from part one indicated that normal-hearing listeners rated a mix of +12 dB most
highly. Hearing-impaired listeners tended to increase their ratings as a function of speech-to-background ratio, but their ratings for mixes above +12 dB did not increase further. The results from
part two indicated that both normal-hearing and hearing-impaired listeners set background levels
between 12 and 18 dB below the reference mix, which equates to speech-to-background ratios of
+12 and +18 dB. These findings together suggest that an optimal alternative mix would consist of a
12 dB increase in speech-to-background ratio. It should be noted, however, that the results from this
experiment may be dependent on the programme material used. Our current testing incorporates a
larger range of source material, more natural testing conditions, and trial broadcasts of
listener-switchable audio by BBC Scotland.
Acknowledgements
The authors thank the BSA Gatehouse Fund for funding this study, the MRC (U135097131), the
Chief Scientist Office (Edinburgh), the University of the West of Scotland, and Jim Hunter and
Andrew Britton at BBC Scotland.
56. Assessing the impact of hearing impairment on immediate speech intelligibility as talkers
change.
J. Holman, A. MacPherson, M.A. Akeroyd, MRC/CSO Institute of Hearing Research - Scottish
Section, Glasgow, United Kingdom.
It is well known that hearing-impaired people report difficulties in multi-talker conversations. For
instance, data collected in our lab over a decade shows that the relevant SSQ question (“You are
with a group and the conversation switches from one person to another. Can you easily follow the
conversation without missing the start of what each new speaker is saying?”) gives one of the lower
scores of the speech section. Few studies, however, have used dynamic listening situations to
explore talker changes in conversations experimentally.
We developed a variation of a new continuous speech test, the Glasgow Monitoring of
Uninterrupted Speech Test (GMUST; MacPherson and Akeroyd 2014), to measure an important
aspect of the problem: does speech identification drop immediately after a change in talker? The
GMUST uses a long (here, 6-min) audio passage of continuous prose together with a text transcript
in which some single-word mis-transcripts have been deliberately made. The task is to spot the
mismatch between the audio and the text. Here, we edited the audio to change from male to female
to male and so on, switching every few seconds, while keeping the flow of the prose. Thirty-six mis-transcripts were made, twelve being within 1 second of the change of talker (“early”), twelve
between 1-3 seconds later (“medium”), and twelve between 3-9 seconds later (“late”). The
hypothesis was that fewer mismatches would be noticed in the early condition than in the late
condition. In a control condition there was a single change at 3 mins, with the 36 mismatches
placed at arbitrary times unrelated to the change. All the audio was presented along with 4-talker
babble as a masker (the SNR was set individually on the basis of a short pre-test; here, SNRs ranged
from -2 to +10 dB).
At present, 25 hearing-impaired listeners have participated (better-ear average >= 25 dB HL; aged 45-74
years). Five listeners performed particularly poorly and were excluded. The remainder found, on
average, about 20 of the mis-transcripts. There were small differences in performance between the
early and late conditions, in the hypothesized direction. But we discount these effects, as there were differences of the
same magnitude in the control condition, so overall the experiment did not support the hypothesis.
Various potential explanations will be explored.
Acknowledgements
Supported by intramural funding from the Medical Research Council (grant number U135097131)
and the Chief Scientist Office of the Scottish Government.
References
MacPherson, A. & Akeroyd, M.A. 2014. A method for measuring the intelligibility of
uninterrupted, continuous speech. J Acoust Soc Am, 135, 1027-1030.
57. Comparison of different forms of frequency lowering in digital hearing aids for people
with dead regions in the cochlea
M. Salorio-Corbetto, T. Baer, and B.C.J. Moore, University of Cambridge, Department of
Psychology, Downing Street, CB2 3EB, Cambridge, UK.
When high-frequency hearing loss is associated with extensive dead regions (DRs) in the cochlea,
i.e. regions of the basilar membrane with non-functioning or very poorly functioning inner hair cells
and/or neurons, high-frequency amplification via a hearing aid often fails to provide benefit for
speech intelligibility. Frequency-lowering hearing aids, which transform high-frequency sounds into
lower-frequency sounds, may be used to provide information from high frequencies to people with
high-frequency DRs. However, there have been few studies of the effectiveness of frequency-lowering hearing aids for people with confirmed extensive DRs. Here, two frequency-lowering
prototypes based on a commercially available digital hearing aid (Phonak Exélia Art P) were
evaluated. One of the prototype hearing aids used frequency transposition, i.e. frequency
components in a high-frequency band were recoded by decreasing the frequency of each by a
constant amount in Hz. The other prototype hearing aid used frequency compression, i.e. the
downward frequency shift of components in a high-frequency band increased with increasing
frequency above the lower edge of the band. Ten subjects with steeply-sloping hearing losses and
extensive DRs took part in a single-blind, three-period, three-condition cross-over design trial
whose aim was to evaluate the effectiveness of frequency-lowering hearing aids for speech
perception. Conditions were: (1) Conventional amplification over the widest frequency range
possible with no frequency lowering; (2) Frequency transposition; (3) Frequency compression. The
edge frequency (fe) of the DRs ranged between 0.5 and 2 kHz, and was below 0.75 kHz for five of
the subjects, and below 1 kHz for seven of the subjects. The two frequency-lowering schemes were
fitted taking into account the characteristics of the DRs of the subjects, which led to frequency
lowering at frequencies below those used in previous research and in current clinical practice.
Outcome measures were: 1) Identification of consonants in nonsense vowel-consonant-vowel
(VCV) combinations; 2) Detection of word-final [s] and [z]; 3) Speech reception threshold in noise
(SRT); 4) Responses on the Speech, Spatial and Qualities Questionnaire (SSQ). Frequency lowering
did not provide a significant benefit for any of the outcome measures relative to the control
condition. However, the use of frequency lowering may provide some practical advantages, such as
reduced problems with acoustic feedback and lower output level requirements.
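The two lowering rules can be written as component-frequency mappings; a minimal sketch, in which the shift and compression ratio are illustrative values rather than the fitted parameters used in the study:

```python
def transpose(f_hz, band_lo, delta_hz):
    """Frequency transposition: components above band_lo are shifted
    down by a constant delta_hz."""
    return f_hz - delta_hz if f_hz >= band_lo else f_hz

def compress(f_hz, band_lo, ratio):
    """Frequency compression: the downward shift grows with distance
    above the band edge (shift = (f - band_lo) * (1 - 1/ratio))."""
    return band_lo + (f_hz - band_lo) / ratio if f_hz >= band_lo else f_hz

# Example: a 3 kHz component with a 1 kHz band edge (illustrative values)
f_t = transpose(3000, band_lo=1000, delta_hz=1500)  # -> 1500 Hz
f_c = compress(3000, band_lo=1000, ratio=2.0)       # -> 2000 Hz
```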
Acknowledgements
MSC was supported by Action on Hearing Loss and Phonak. TB and BCJM
were supported by the Medical Research Council (MRC).
58. Simulating speech understanding in noise for hearing impaired listeners
J.J.M. Monaghan, M.N. Alfakhri and S. Bleeck, ISVR, University of Southampton, Southampton, UK
Understanding speech in a noisy background is one of the greatest challenges for hearing-impaired
(HI) listeners and hearing aids can provide only limited benefit. As well as elevated thresholds for
hearing, supra-threshold deficits such as loudness recruitment and reduced frequency selectivity and
temporal resolution are thought to contribute to the difficulties experienced by HI-listeners in
complex listening environments. Recent experiments in our laboratory, however, suggest that the
low intelligibility of speech in noise observed in moderate-to-severely HI listeners is not evident in
simulations of the reduced frequency selectivity, threshold elevation and recruitment observed in
HI listeners. The aim of the current study, therefore, is to produce a better simulation of speech
understanding in noise for HI listeners. To this end, in addition to employing spectral smearing to
simulate reduced frequency selectivity (e.g., Baer and Moore, 1993), the effects of deafferentation
(Lopez-Poveda and Barrios, 2013) and of broadening the temporal integration window were also
simulated.
Appropriate parameters for the different simulation conditions were determined by measuring
the masking release obtained by adding either a temporal or spectral notch when detecting a tone in
a band of noise. Data for normal hearing (NH) participants listening through the simulation were
compared with previously measured data obtained in HI listeners (van Esch et al., 2013). Speech
recognition scores with BKB sentences were measured for HI listeners and for NH listeners
listening under simulations of hearing impairment. To determine whether simulation can be used to
predict any benefit provided by speech enhancement algorithms, scores were also measured for
noisy speech processed by an algorithm known to improve intelligibility of speech in noise for HI
listeners but not for NH listeners. Results will be presented at the meeting.
Acknowledgements
Work supported by EPSRC grant EP/K020501/1.
References
Baer, T., and Moore, B. C. J. (1993). "Effects of spectral smearing on the intelligibility of sentences
in noise," J. Acoust. Soc. Am. 94, 1229-1241.
Lopez-Poveda, E. A., and Barrios, P. (2013). "Perception of stochastically undersampled sound
waveforms: A model of auditory deafferentation," Frontiers in Neuroscience 7.
Van Esch T.E.M., Kollmeier B., Vormann M., Lyzenga J., Houtgast T., Hallgren M., Larsby B.,
Athalye S.P., Lutman M.E. & Dreschler W.A. 2013. Evaluation of the preliminary
auditory profile test battery in an international multi-centre study. Int J Audiol, 52, 305-321.
59. Development of a patient decision aid for adults with hearing loss
A.J.Hall*, H. Pryce§, *St Michael’s Hospital, Bristol, UK, §Sirona Health and Social Care, Bath,
UK.
Shared decision making (SDM) is a process in which patients and clinicians share information and
plan clinical care together based on evidence and patient preference (Edwards and Elwyn, 2009).
Decision aids are tools to support SDM (Stacey et al, 2011). They present information to patients
(e.g. in a pamphlet) on the range of options available giving answers to the most frequently asked
questions around options and their implications. The use of SDM and decision tools in clinical
practice is promoted within the NHS and is seen as a marker of clinical quality (NICE, 2012). This
study aims to develop a patient decision aid for adults with hearing loss in the UK.
The decision aid is being developed in the Option Grid format (www.optiongrid.org). Each
Option Grid provides key facts to frequently asked questions to support patients and clinicians in
deliberating about options. All Grids are evidence-based, aligned, where possible, with NICE
guidelines, and updated yearly. They do not attempt to be comprehensive; rather, they supplement
existing information and support. Patient and clinician input is central to the Option Grid development
process, and there is a standard procedure for developing Option Grids. We are following the
Option Grid Development Framework and iterative development process; the Editorial Team
includes six stakeholders, including clinical experts, researchers and one patient representative.
In the first phase of the project we ran three focus groups with patients with hearing
loss to determine their decisional needs, together with ethnographic observations of five audiology
appointments and of the interactions between audiologists and patients offered hearing aids, to examine
current practice and informational needs. This research was conducted with patients from the Sirona
Care and Health Audiology/Hearing Therapy department, Bath.
Using the qualitative data from the first phase, the Editorial Team has developed the first
version of the Option Grid ready for user testing.
Acknowledgements
Supported by the British Society of Audiology Research Fund.
References
NICE Quality Standard 15. 2012. Quality standard for patient experience in adult NHS services.
Edwards A. & Elwyn G. 2009. “Shared decision-making in health care: Achieving evidence-based
patient choice”. In. Edwards and Elwyn (eds) 2009. Shared decision-making in health care:
achieving evidence-based patient choice. 2nd ed. Oxford: Oxford University Press.
Stacey D., Bennett C.L., Barry M.J., Col N.F., Eden K.B., Holmes-Rovner M., Llewellyn-Thomas
H., Lyddiatt A., Légaré F. & Thomson R. 2011. Decision aids for people facing health
treatment or screening decisions. Cochrane Database Syst Rev 5;(10):CD001431.
60. Multimedia interactive reusable learning objects improve hearing aid knowledge,
practical handling skills and use in first-time hearing aid users
M. Ferguson*, M. Brandreth*, P. Leighton§, W. Brassington‡, H. Wharrad§
*NIHR Nottingham Hearing Biomedical Research Unit, §University of Nottingham, ‡Nottingham
Audiology Services, UK
There is currently a gap in audiology rehabilitation for an effective intervention to enhance
knowledge and educate hearing aid users. Interactive multimedia video tutorials (or re-usable
learning objects, RLOs) can enhance learning and motivation in educational and healthcare
contexts. To address this in first-time hearing aid users, eight short RLOs (5-10 mins; total duration
1 hour) were developed using a participatory design that included (i) a Delphi review of 33 UK
audiology/hearing experts, and (ii) 35 hearing aid users.
The RLOs were evaluated in an RCT of 203 first-time hearing aid users, where 100 received
the RLOs at the fitting appointment (RLO+) and 103 formed a waitlist control group (RLO-).
Outcome measures were obtained 6 weeks post-fitting (RLO+, n=79; RLO-, n=88). RLO uptake
and compliance were high (78% and 94% respectively). Half the users watched the RLOs at least
twice, and 20% watched the RLOs 3+ times. This suggests that users were using the RLOs to self-manage their hearing loss and hearing aids. The most prevalent delivery medium was DVD for TV
(50%), with a surprisingly high number using RLOs through the internet (30%).
The RLO+ group had significantly better knowledge of both practical hearing aid and
psychosocial issues (p<.001), and significantly better practical hearing aid skills (PHAST: p=.002),
than the RLO- group. There was no significant difference in overall hearing aid use or benefit on
the Glasgow Hearing Aid Benefit Profile. However, there was significantly greater use in the RLO+
group for difficult listening situations (p=.013), and for suboptimal users (use<70%; p=.018) with a
large effect size (d=.88).
The RLOs were rated as highly useful (9.0/10), and the majority (>80%) agreed the RLOs were
enjoyable, improved their confidence, and were preferable to written information; 78% would
recommend the RLOs to others. These results were supported by discussion groups of hearing aid
users and family members, who additionally reported that the RLOs were shared with other people
with hearing loss, and were watched by family members and friends. Thus, the RLOs were
beneficial to hearing aid users across a range of quantitative and qualitative measures.
The RLOs have since undergone further development based on feedback from the RCT and the
discussion group participants. The C2Hear RLOs will be commercially available for clinical
use and wider distribution from autumn 2014.
Acknowledgements
This research was funded by the NIHR Research for Patient Benefit scheme.
61. An investigation into the ability to detect and discriminate speech phonemes using non-linear frequency compression hearing aids in a paediatric population
S. Harman and S. Morgan, Audiology, Bradford Royal Infirmary, Bradford, UK
This study aims to examine the ability of hearing-impaired children to complete the Auditory Speech
Stimuli Evaluation (ASSE) test using non-linear frequency compression (NLFC) hearing aids.
Recent guidance (Feirn et al, 2014) recommends NLFC hearing aids for children with moderate as
well as severe or profound hearing losses, based on reported evidence of benefit. It is
recommended that NLFC fittings are verified using speech material. Previous research has used the
plurals test, consonant confusion testing and sentence testing, amongst others, for verification.
ASSE was used to assess NLFC fittings using detection of eight phonemes /a/, /u/, /i/, /m/, /v/,
/z/, /s/, /sh/ presented at 65 dB HL, and discrimination between seven pairs /i/-/a/, /u/-/a/, /u/-/i/,
/z/-/s/, /m/-/z/, /s/-/sh/, /v/-/z/ presented at 65 dB HL with a ±3 dB rove to remove intensity cues.
Twenty-nine children with stable sensorineural hearing losses, who were long-term bilateral users
of NLFC aids, were seen at their routine hearing aid review appointments. Prior to ASSE testing,
hearing aids were verified as optimally fitted and working within technical specification.
All subjects completed both detection and discrimination lists. Eighteen children scored 100%
on detection and discrimination. The remaining eleven children failed to discriminate at least one
phoneme pair; of these eleven, nine could not detect /s/ and, of these nine, four were unable to
detect multiple phonemes.
When the patients were grouped by hearing level, all eighteen children who completed ASSE
correctly had average hearing losses of mild or moderate degree in at least one ear, whereas the
eleven children who could not complete ASSE testing had average hearing losses of severe or
profound degree bilaterally. The significance of these results for a child's outcomes will require
further investigation.
Declaration of interest
The authors report no conflicts of interest. The authors alone are responsible for the content and
writing of the paper.
References
Feirn R., Wood S., Sutton G., Booth R., Meredith R., Brennan S. & Lightfoot G. 2014. Guidelines
for Fitting Hearing Aids to Young Infants. MCHAS.
62. Using music therapy and dichotic training to develop listening skills in some children with
auditory processing disorders
A. Osisanya and A. Adewunmi, Department of Special Education, University of Ibadan, Nigeria
Prevalent evidence of poor listening ability among the school-aged population due to Auditory
Processing Disorders (APD) has become a matter of concern, as the disorder has a negative impact
on the academic achievement and psycho-social development of those presenting with it. This has
warranted appropriate management strategies such as music therapy and dichotic training, both
aimed at improving an individual's listening abilities (BSA APD SIG, 2011).
Thirty-five school children aged between 8 and 11 years were screened down to 15 participants
with normal pure-tone thresholds (<25 dB HL at 1-4 kHz), no evidence of glue ear, and average
cognitive ability (TSS 90-109 on the SIT-R3). These children were diagnosed with APD using
scores 2 SD below the mean on the SCAN-C and TALC-R, coupled with abnormal scores on the
RGDT and supported by scores on the CCC-2 and CHAPPS. They were randomly assigned to three
groups (music therapy, dichotic listening, or both) and given eight weeks of training in 1-hour
sessions four times a week, using live and recorded music/nursery rhymes, phonetically balanced
words from the CID W-22 test list, and a words/sentences list developed by the researchers. The 15
participants were pre- and post-tested on the words/sentences list. Our results showed a significant
difference between pre- and post-test scores for the music group (t=7.060, p<0.05), the dichotic
group (t=9.791, p<0.05) and the combined group (t=11.326, p<0.05). Post-hoc analysis showed
that the combined therapy group had the highest adjusted post-test score (x̄=75.398), followed by
the dichotic group (x̄=61.519) and the music group (x̄=58.283). Further, the independent variables
jointly accounted for 61.8% of the variance in therapy outcome among the participants.
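For readers who wish to reproduce this style of analysis, a paired pre/post t-test of the kind reported above can be computed in a few lines of Python; the sketch below uses hypothetical placeholder scores, not the study's data.

    import numpy as np
    from scipy import stats

    # Hypothetical pre/post scores for one training group (placeholders only)
    pre = np.array([42.0, 51.0, 39.0, 47.0, 55.0])
    post = np.array([68.0, 74.0, 61.0, 70.0, 79.0])

    t, p = stats.ttest_rel(post, pre)  # paired-samples t-test
    print(f"t = {t:.3f}, p = {p:.4f}")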
We therefore conclude that APD does not warrant a single management protocol, but rather a
combination of management strategies involving the active participation of the individual
presenting with the disorder.
Declaration of interest
The authors report no conflicts of interest. The authors alone are responsible for the content and
writing of the paper.
References
BSA APD SIG. 2011. Practical Guidance: An overview of current management of auditory
processing disorder. 1-60.
63. Effects of different masker types on speech perception in children and adults
H. Holmes, D. Rowan and H.E. Cullington. Institute of Sound and Vibration Research, University of
Southampton, UK
Normal-hearing adults show improved speech perception with speech maskers compared to
stationary noise maskers, demonstrating a release from masking. Some studies with children have
shown a smaller masking release than in adults (Hall et al., 2002), with large inter-child differences,
although there is conflicting literature (Litovsky, 2005). Conflicts may be due to differences in the
speech masking materials used, and it is thought that children may be more susceptible than adults
to 'informational masking'. Adaptive methods used in measuring speech perception result in
different tested signal-to-noise ratios (SNRs). Bernstein and Brungart (2011) propose that the
amount of masking release depends on the SNRs used: high SNRs yield low masking release and
vice versa. Therefore, as children generally need higher SNRs than adults in stationary maskers,
they may also show a smaller masking release simply because of differences in the baseline
conditions, an SNR confound that may be misinterpreted.
This study aims to:
1) Determine differences in performance between adults and children with stationary noise and
speech maskers.
2) Investigate whether the expected smaller masking release in children compared to adults is
due to an SNR confound (Bernstein & Brungart, 2011).
Participants were required to listen to and identify target sentences (BKB and IHR sentences)
in two different masker types: speech shaped noise and a speech masker. The targets and maskers
were presented at five fixed signal-to-noise-ratios for each masker type to determine the percentage
of sentences scored correctly in order to estimate the shapes of the psychometric functions. Fifteen
sentences were presented for each condition and a repeat was carried out on a separate day. Data
collection from children (aged 7-8 years) and adults is currently underway and will be presented.
Acknowledgements
Supported by a scholarship from the University of Southampton.
References
Bernstein, J. G. W. & Brungart, D. S. 2011. Effects of spectral smearing and temporal fine
structure distortion on the fluctuating-masker benefit for speech at a fixed signal-to-noise
ratio. J Acoust Soc Am, 130, 473-488.
Hall III, J. W., Grose, J. H., Buss, E. & Dev, M. B. 2002. Spondee recognition in a two talker
masker and a speech-shaped noise masker in adults and children. Ear Hear, 23, 159-165.
Litovsky, R. Y. 2005. Speech intelligibility and spatial release from masking in young
children. J Acoust Soc Am, 117, 3091-3099.
64. Listening and reading in background noise: children versus adults
H. Holmes, D. Rowan and H.E. Cullington, ISVR, University of Southampton, UK
Every day we are faced with the task of parsing simultaneous incoming signal components to
decode meaning and suppress interfering sounds when listening and reading, invoking both
'bottom-up' and 'top-down' processes. Many listening studies have found a smaller masking release
in children compared to adults when a stationary-noise background is switched to a speech
background (e.g. Hall et al., 2002), and such results are understood to reflect maturation of
'top-down' processes in children. Bernstein and Brungart (2011), however, propose that the amount
of masking release may be subject to a signal-to-noise ratio (SNR) confound, so previous findings
could be misinterpreted.
The first aim of this study was thus to:
1) Determine speech perception performance and investigate whether the expected smaller
masking release in children compared to adults is due to an SNR confound (Bernstein &
Brungart, 2011).
Twenty normal-hearing children (7-8 years) and twenty adults were required to listen to and
identify BKB and IHR target sentences in stationary-noise and speech backgrounds. Percent correct
was measured for fifteen sentences presented at five fixed SNRs for each background to estimate
psychometric function shape; a repeat was carried out on a separate day. Masking release was
smaller in children compared to adults, but child/adult differences lessened when baseline
stationary-noise differences were taken into account. Although reduced, a child/adult difference
remained, suggesting fundamental differences in the effect speech maskers have on listening in
children compared to adults.
Such results suggest that children may have difficulties listening in noisy classrooms. Since
speech backgrounds may affect 'top-down' processes in children, it is of interest to determine how
such backgrounds affect these processes in isolation, particularly to tap into the cognitive demands
arising from noisy conditions. To do this, a reading paradigm was utilised to track eye movements
during reading in the presence of different background sounds.
The second aim of this study was thus to:
2) Determine which backgrounds are most disruptive to a reading task to explore ‘top-down’
processes and investigate child/adult differences.
Data collection from children (8-9 years) and adults is currently underway and results from the
eye-tracking experiment will be presented.
Acknowledgements
Supported by a Scholarship from the University of Southampton.
References
Bernstein, J.G.W. & Brungart, D.S. (2011). Effects of spectral smearing and temporal fine-structure
distortion on the fluctuating-masker benefit for speech at a fixed signal-to-noise ratio. J Acoust
Soc Am, 130, 473-488.
Hall, J.W., Grose, J.H., Buss, E. & Dev, M.B. (2002). Spondee recognition in a two-talker masker
and a speech-shaped noise masker in adults and children. Ear Hear, 23, 159-165.
65. Using the ECLiPS questionnaire to assess the everyday listening difficulties of children
with Otitis Media with Effusion
J.G. Barry*, M.L. Young*, M. Daniel§ and J. Birchall§, *MRC Institute of Hearing Research,
Nottingham, UK, §Nottingham Hearing Biomedical Research Unit, Nottingham, UK
Otitis media with effusion (OME) is known to reduce overall hearing level, but there has been little
focus on how it affects hearing and listening in everyday situations. At present, access to
interventions is determined by the NICE guidelines, which focus on pure tone audiograms but
suggest no standardised or quantifiable means of measuring the impact of OME on a child’s quality
of life.
The Evaluation of Children’s Listening and Processing Skills questionnaire (ECLiPS) has
recently been launched as a method of assessing children’s listening abilities and highlighting a
range of potential problems. The scale comprises five factors which examine symptoms of
listening, language, social, and cognitive difficulties. It was initially designed to assess children
referred for suspected auditory processing disorder.
This study investigates:
(a) The sensitivity of the ECLiPS to the particular listening difficulties associated with OME.
(b) The extent to which it is sensitive to differences in severity of OME.
The ECLiPS questionnaire was completed by parents of children aged 3-11 in three groups:
‘severe OME’ recommended for surgery (N=9); ‘mild OME’ with no surgery planned (N=7); and a
control group of age- and gender-matched healthy children (N=16).
Children with OME showed an opposite pattern of responses to controls on most items, with
higher ratings (indicating greater difficulty) for children with severe OME than those with mild
OME. ANOVAs revealed significant differences between all groups on the listening and language
factors and to a lesser extent, on the factor assessing social interaction skills. Differences were
found between OME children and controls on cognitive abilities (‘Memory and Attention’), but
there was no effect for severity between mild and severe OME.
The results indicate that the ECLiPS is sensitive to listening difficulties of children with OME.
Moreover, it also appears to be sensitive to the severity of the condition, particularly in the case of
the factors assessing listening and language abilities. The ECLiPS may be useful as a tool for
assessing the impact of this condition on a child’s global functioning and could potentially
supplement pure tone audiograms to aid clinical decision-making.
Acknowledgements
Supported by the MRC and Nottingham University Hospitals.
References
Barry, J.G. (2013). Measuring listening difficulties using parental report. Proceedings of British
Society of Audiology annual conference, Keele UK.
National Institute for Health and Care Excellence. 2008. Surgical management of OME. CG60.
London: National Institute for Health and Care Excellence.
66. Attending to narrative and detecting mispronunciations with Auditory Processing Disorder (APD)
H. Roebuck and J.G. Barry, MRC Institute of Hearing Research, University Park, Nottingham, UK.
Recent research has suggested that Auditory Processing Disorder (APD) may reflect an underlying
deficit in an ability to maintain attention (Moore et al, 2010). This suggestion was partly based on
evidence of fluctuating threshold levels during psychophysical assessment of auditory processing.
Tests of auditory processing are inherently boring and bear little relationship to everyday listening.
In this study, we assess the hypothesis that APD is effectively a disorder of sustained attention using
a story task designed to mimic everyday listening.
The current sample consists of 27 children (8-10 years) with and without a referral for APD or
listening difficulties identified using the ECLiPS questionnaire (Barry and Moore, 2014) (n=13/14).
The participants were asked to press a button when they heard a mispronounced word. To assess the
role of language, mispronounced words were presented in both predictable and unpredictable
contexts. Previous research has shown that children and adults are faster and more accurate at
detecting mispronounced words presented in predictable contexts (Cole & Perfetti, 1980). If
sustaining attention, rather than underlying language difficulty, is the primary difficulty for
participants with APD, we would predict a similar benefit for mispronunciations in predictable
contexts, but fewer words detected overall. If, however, underlying language deficits impact task
performance, we would expect the difference between word contexts to disappear.
Preliminary data suggest the listening difficulty group missed more mispronunciations (15.4 ±
1.9) than their typically developing counterparts (5 ± 1.8, p<0.001). Both groups detected more
words in the predictable context (p=0.003), and identified these words more quickly than in the
unpredictable context (p=0.001). Taken together, this suggests that the listening difficulty group
derived some benefit from sentence context when detecting mispronunciations, but could not
maintain attention throughout the duration of the task.
The results appear to support the idea that problems with sustained attention contribute to
listening difficulties during everyday listening tasks, and are not limited to the more artificial tasks
designed to assess auditory processing.
Acknowledgements
Supported by the Medical Research Council and Nottingham University Hospitals NHS Trust.
References
Moore, D.R., Ferguson, M.A., Edmondson-Jones, A.M., Ratib, S., & Riley, A. (2010). The nature
of auditory processing disorder in children. Pediatrics, 126, e382-390.
Cole, R.A., & Perfetti, C.A. (1980). Listening for mispronunciations in a children's story: The use of
context by children and adults. J Verb Learn Verb Beh, 19, 297-315.
Barry, J.G. & Moore, D.R. (2014). Evaluation of Children's Listening and Processing Skills
(ECLiPS). Manual edition 1.
67. Evaluation of school entry hearing screening
H. Fortnum*, A. Allardice §, C. Benton‡, S. Catterick*, L. Cocking#, C. Hyde+, J. Moody^, V.
Nikolaou+, M. Ozolins*, R. Taylor+, O. Ukoumunne+ and J. Watson$. *National Institute for Health
Research Nottingham Hearing Biomedical Research Unit, School of Medicine, University of
Nottingham, UK, §Nottinghamshire Healthcare NHS Trust, Nottingham, UK, ‡Nottingham
University Hospitals NHS Trust, Nottingham, UK, #Peninsula Clinical Trials Unit, Plymouth
University, UK, +University of Exeter Medical School, UK, ^Cambridge University Hospitals NHS
Foundation Trust, and Cambridgeshire Community Services NHS Trust, Cambridge, UK, $Parent
representative, Nottingham, UK
The School Entry Hearing Screen (SES) has been in use since the 1950s. With the introduction of
Universal Newborn Hearing Screening in 2006, the value of the SES has been questioned, as around
half the cases of permanent hearing loss are now identified at birth. In 2007 a report funded by the
Health Technology Assessment (HTA) Programme (Bamford et al., 2007) described a lack of good
evidence for the effectiveness of the SES. This project evaluates its diagnostic accuracy and
cost-effectiveness.
We evaluated the diagnostic accuracy (sensitivity and specificity) of two screening tests, a
pure-tone sweep test and the Siemens HearCheck™ device, by comparison with pure tone
audiometry (PTA) as the gold standard. Children with sensorineural or permanent conductive hearing
loss bilaterally (average 20-60 dB HL) or unilaterally (≥20 dB HL) were screened at home and the
results compared with their latest PTA from audiological records. Children without identified
hearing loss were screened at the research unit and their PTA measured in a soundbooth.
To compare yield, referral age and source, and clinical pathway for children aged 3-6 years we
collected data on referrals to an audiology service with a school screen (Nottingham) and one
without (Cambridge).
We assessed the practical implementation of the two screens by school nurses. All data will
ultimately contribute to an evaluation of cost-effectiveness in an economic model.
To date 60 hearing-impaired children and 134 hearing children have been tested. Preliminary
data indicate little difference between the two screens in sensitivity (>83%) or specificity
(>87%). School nurses have highlighted advantages and disadvantages of each screen, with
neither being overwhelmingly preferred.
Details of >3000 referrals for Nottingham and Cambridge have been collected. Analyses of the
difference in yield and age of referral between the two services are ongoing.
Acknowledgements
This project was funded by the National Institute for Health Research Health Technology
Assessment Programme (project number 10/63/03).
The views and opinions expressed therein are those of the authors and do not necessarily reflect
those of the HTA programme, NIHR, NHS or the Department of Health.
Reference
Bamford J. et al. 2007. Current practice, accuracy, effectiveness and cost-effectiveness of the school
entry hearing screen. Health Technol Assess, 11(32), 1-168.
68. Clinical feasibility and acceptability of recording cortical auditory evoked potentials:
results from the first 80 infants
K.J. Munro‡#, R. Nassar#, S.C. Purdy*, M. O'Driscoll‡#, R. Booth#, I. Bruce‡#, K. Uus‡, ‡School
of Psychological Sciences, University of Manchester, UK; #Central Manchester University
Hospitals NHS Foundation Trust, Manchester Academic Health Science Centre, Manchester, UK;
*School of Psychology, University of Auckland, New Zealand
There is growing interest in using supra-threshold obligatory cortical auditory evoked potentials
(CAEPs) to complement established paediatric clinical test procedures. As part of an ongoing study
investigating the clinical usefulness of aided CAEPs in infants, we have been obtaining data on the
feasibility and acceptability of the procedure within the clinical setting. Responses to short-duration
stimuli (/m/, /g/ and /t/) will ultimately be recorded in 100 normal-hearing and 10 hearing-impaired
infants (between 4 and 39 weeks of age) from a loudspeaker at zero degrees azimuth and a
presentation level of 65 dB SPL.
At the time of writing, we have the following data from the first 80 infants (all of whom passed
newborn screening and have no reported hearing difficulties): CAEP test duration, completion and
detection rates, and a parental acceptability questionnaire (9 questions with a 7-point scale, 1 being
best). The mean test duration was 26 minutes (range 17-89 min): 13 min preparation and 13 min
data acquisition. A response was obtained to at least one stimulus in 100% of infants. Responses to
all three, to two, and to one stimulus were detected in 72%, 23% and 5% of infants, respectively.
Responses to /g/, /t/ and /m/ were detected in 97%, 88% and 82% of infants, respectively.
So far, 29 parents have completed the acceptability questionnaire, and mean scores for
individual questions ranged from 1.2 to 2.5. The poorest score was obtained for the question about
the difficulty of keeping the baby awake and quiet for the duration of the test procedure.
In conclusion, the short test duration, high completion and detection rates, and good scores
on the acceptability questionnaire suggest that the sound-field CAEP procedure may be feasible and
acceptable for use with infants in the clinical setting. Our next study will investigate the
relationship between audibility and CAEP detection in hearing-impaired infants.
Acknowledgements
The study was funded by a strategic investment grant from Central Manchester University Hospitals
NHS Foundation Trust and facilitated by the Manchester Biomedical Research Centre and the
NIHR/Wellcome Clinical Research Facility.
69. The use of the Analytical Hierarchy Process to evaluate an algorithm designed to reduce
the negative effects of reverberation
N. S. Hockley, C. Lesimple and M. Kuriger, Bernafon AG, Bern, Switzerland
When someone is speaking in a room, the sound field consists of direct sound originating from the
speaker and indirect sound in the form of reverberation. The intensity of the speaker’s sound
decreases with increasing distance, however, the reverberation is assumed to be uniform in intensity
throughout the room. When the speaker is close to the listener, the direct sound will be more intense
than the reverberation. As the speaker moves away from the listener, the intensity of the direct
sound will decrease, however, the intensity of the reverberation will remain constant. Reverberation
can have detrimental effects for the hearing-impaired individual (e.g., Nabelek & Robinette, 1978)
and also challenges the directional performance of a hearing aid. These negative effects could be
addressed by a dedicated algorithm (Walden et al, 2004). The challenge when designing such an
algorithm is to remove the potentially negative effects of reverberation while maintaining sound
quality and speech intelligibility.
Two questions were addressed in this investigation. The first is, does the algorithm provide a
perceptible attenuation of reverberation? The second is, does the algorithm have a noticeable effect
on the quality of speech within a reverberant environment?
Eleven normal-hearing subjects participated in this investigation. The methodology was based
on the Analytical Hierarchy Process (AHP) (Saaty, 2008), a structured technique for organising and
analysing complex decisions. It provides an analytic measure of multidimensional scaling and is
based on paired comparisons embedded within a reciprocal matrix. Paired-comparison ratings were
used to measure the amount of reverberation, speech naturalness, and overall preference. These
criteria were derived from recommendations of the International Telecommunication Union (ITU,
1996; 2003); ITU (2003) specifies the methodology for evaluating speech communication systems
that include noise suppression algorithms. As a high level of reverberation can be considered a
useless signal, like noise, a reverberation reduction algorithm might behave in a similar way to a
noise suppression algorithm.
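The core AHP computation can be sketched briefly: priority weights are taken from the principal eigenvector of the reciprocal paired-comparison matrix (Saaty, 2008). The Python below is illustrative only, with a hypothetical 3x3 matrix rather than data from this study.

    import numpy as np

    # A[i, j]: strength of preference for option i over j; A[j, i] = 1 / A[i, j]
    A = np.array([[1.0, 3.0, 5.0],
                  [1/3, 1.0, 2.0],
                  [1/5, 1/2, 1.0]])

    eigvals, eigvecs = np.linalg.eig(A)
    k = np.argmax(eigvals.real)            # principal eigenvalue
    w = np.abs(eigvecs[:, k].real)
    w /= w.sum()                           # normalised priority weights
    print("weights:", np.round(w, 3))

    n = A.shape[0]
    ci = (eigvals[k].real - n) / (n - 1)   # consistency index of the judgements
    print(f"consistency index = {ci:.3f}")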
The reverberation reduction algorithm was tested in a reverberant condition using four different
settings (strengths) of the algorithm and unprocessed (off). In all of the paired comparisons, the
processed conditions were preferred regardless of the settings.
The subjects responded consistently, and there was a high degree of consensus between them.
The algorithm provided a clear and consistent reduction in the
amount of reverberation. With a stronger setting, this difference was even more noticeable. Speech
naturalness was found to be better with the algorithm than without, regardless of the tested setting.
The implication of these findings for further investigations will be discussed along with the
potential feasibility of using this algorithm within a hearing aid.
Declaration of interest:
The authors are all employees of Bernafon AG. The authors alone are responsible for the content
and writing of the paper.
Acknowledgements:
The authors would like to thank Matthias Bertram for his help in developing the prototype and
Bernhard Künzle for his useful comments and guidance.
References
International Telecommunications Union (ITU-T) (1996). Recommendation P.800: Methods for
subjective determination of transmission quality. Geneva, Switzerland.
International Telecommunications Union (ITU-T) (2003). Recommendation P.835: Subjective test
methodology for evaluating speech communication systems that include noise suppression
algorithm. Geneva, Switzerland.
Nabelek, A. K. & Robinette, L. (1978). Influence of the precedence effect on word identification by
normally hearing and hearing-impaired subjects. Journal of the Acoustical Society of America,
63(1), 187-194.
Saaty, T. L. (2008). Decision making with the analytic hierarchy process. International Journal of
Services Sciences, 1(1), 83 - 98.
Walden, B. E., Surr, R. K., Cord, M. T., & Dyrlund, O. (2004). Predicting hearing aid microphone
preference in everyday listening. Journal of the American Academy of Audiology, 15(5), 365-396.
70. The effect of primary tone ramping paradigms on the onset time-course of distortion
product otoacoustic emissions (DPOAEs).
B. Lineton and L. J. Macfarlane, Institute of Sound and Vibration Research, University of
Southampton, UK
In this study, low-side-band (2f1-f2) and high-side-band (2f2-f1) distortion product otoacoustic
emissions (DPOAEs) are studied both experimentally and using cochlear model simulations. Current
theory suggests that the generation of both DPOAE side-bands involves a number of interacting
mechanisms (Knight & Kemp, 2001; Shera & Guinan, 1999; Young et al., 2012). In this theory, the
DPOAEs originate from a directional nonlinear source on the basilar membrane in the
primary-travelling-wave overlap region that generates travelling waves preferentially in the forward
(anterograde) direction. A second component arises from reflection at the characteristic place of the
DPOAE component. The travelling waves from these components then combine, giving rise to a
DPOAE measured in the ear canal that can have complex properties. The time-course of the
DPOAE onset as the primary tones are ramped on is also determined by the build-up of the
primary travelling waves along the basilar membrane. The time-course of the DPOAE is difficult to
measure directly, because the ramping of the primary tones introduces spectral components (by
"spectral splatter") that swamp the DPOAE components. One way of overcoming this problem is to
use the primary-tone-phase-variation (PTPV) method (Duifhuis, 2006; Whitehead et al., 1996).
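The idea behind PTPV can be illustrated with a toy simulation: if the primaries' phases are stepped across presentations so that 2φ1-φ2 stays constant, averaging cancels the primaries while preserving the 2f1-f2 component. The sketch below uses a memoryless cubic nonlinearity as a stand-in for the cochlear source; it is not the authors' cochlear model, and the frequencies are hypothetical.

    import numpy as np

    fs, dur = 48000, 0.5
    t = np.arange(int(fs * dur)) / fs
    f1, f2 = 2000.0, 2400.0            # hypothetical primaries; 2f1-f2 = 1600 Hz

    avg, n_pres = np.zeros_like(t), 4
    for k in range(n_pres):
        p1, p2 = k * np.pi / 2, k * np.pi      # phase steps keep 2*p1 - p2 constant
        x = np.sin(2 * np.pi * f1 * t + p1) + np.sin(2 * np.pi * f2 * t + p2)
        y = x + 0.1 * x ** 3                   # cubic nonlinearity creates DPs
        avg += y / n_pres                      # primaries cancel in this average

    spec = np.abs(np.fft.rfft(avg * np.hanning(len(avg))))
    freqs = np.fft.rfftfreq(len(avg), 1 / fs)
    for f in (f1, f2, 2 * f1 - f2):
        i = np.argmin(np.abs(freqs - f))
        print(f"{f:6.0f} Hz: {20 * np.log10(spec[i] + 1e-12):7.1f} dB")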
In this study, the PTPV method has been used in simulations of the DPOAE onset time-course
as the two primary tones (labelled f1 and f2 as per convention) are ramped on and off. Three
primary-tone onset paradigms are studied: first, f1 and f2 are ramped simultaneously; second, f1 is
held on continuously while f2 is ramped on and off; and third, f2 is held on continuously while f1
is ramped on. The simulation results will be compared with PTPV measurements of the DPOAE
time-course and discussed in terms of their theoretical implications.
Acknowledgements
This work was funded by the University of Southampton.
References
Duifhuis, H. 2006. Distortion Product Otoacoustic Emissions: A Time Domain Analysis. ORL,
68, 340–346.
Knight, R.D. & Kemp, D.T. 2001. Wave and place fixed DPOAE maps of the human ear. J
Acoust Soc Am, 109, 1513-1525.
Shera, C.A. & Guinan, J.J. 1999. Evoked otoacoustic emissions arise by two fundamentally
different mechanisms: A taxonomy for mammalian OAEs. J Acoust Soc Am, 105, 782–798.
Whitehead, M.L., Stagner, B.B., Martin, G.K. & Lonsbury-Martin, B.L. (1996). Visualization
of the onset of distortion-product otoacoustic emissions, and measurement of their latency. J Acoust
Soc Am. 100, 1663–1679.
Young, J.A., Elliott, S.J. & Lineton, B. 2012. Investigating the wave-fixed and place-fixed
origins of the 2f(1)-f(2) distortion product otoacoustic emission within a micromechanical cochlear
model. J Acoust Soc Am, 131, 4699-4709.
Poster Abstracts: Cochlear implants
71. Auditory implant team support for speech and language therapists working with children
who use cochlear implants
A. Odell*§, H. Fortnum* and P. Leighton‡,*Nottingham Hearing Biomedical Research Unit, Otology
and Hearing group, Division of Clinical Neuroscience, University of Nottingham, Nottingham, UK,
§Nottingham Auditory Implant Programme, Nottingham University Hospitals, Nottingham, UK,
‡School of Medicine, University of Nottingham, Nottingham, UK
After 25 years of paediatric cochlear implantation in the UK, we have a large body of evidence
showing how children can benefit from this procedure. We can use some of that evidence to inform
community speech and language therapists (SLTs) about what speech and language progress to
expect following the surgery and to enable them to be more independent in their management of
children with this technology.
The initial phase of the study looks at community SLT confidence in working with this client
group and what support they need from the auditory implant programmes, specifically the implant
centre speech and language therapists (ICSLTs). We used qualitative methods, consisting of focus
groups and interviews, analysed using thematic analysis.
We found that SLTs value the support and advice of the ICSLTs to demystify the technology,
to advise in target setting and in differential diagnosis. Some issues were raised regarding
communication between professionals, particularly for liaison about target setting.
Those SLTs who specialise in hearing impairment reported feeling confident about working
with deaf children who have no additional needs, but valuing ICSLT support when there are other
issues to tease out. The more generalist SLTs feel they need more training and information about
what to aim for and in what timescale.
We hope that further work can propose a model of support that responds to this evidence and
is appropriate both to the information available in the literature and to the knowledge held by
SLTs.
Acknowledgements
The research was supported by a grant from the National Institute for Health Research.
72. Pitch ranking and speech recognition in children with cochlear implants
R. Miah* and D. Vickers§, *Audiology Department, Princess of Wales Hospital, South Wales, UK,
§Ear Institute, University College London, UK
Kopelovich et al (2010) and Dawson et al (2000) provided evidence suggesting that children with
cochlear implants (CIs) can pitch rank, that this ability improves with age, and that it is related to
speech perception.
The goal of this research was to determine whether pitch ranking ability of implanted children
was related to speech recognition when vocabulary age was accounted for.
Fourteen children aged between 4-17 years participated; nine used Med-El, four Cochlear and
one Advanced Bionics devices, nine were unilaterally and five bilaterally implanted.
Pitch ranking and speech perception tests were delivered using computer-generated stimuli
and automated response collection. Sounds were presented through a loudspeaker one metre directly
in front of the listener in a sound-attenuating booth. For pitch ranking, pairs of pure tones (1000 ms
duration, 1000 ms inter-stimulus interval) were delivered at 65 dBA. A two-alternative forced-choice
paradigm was used in which participants indicated which tone was higher in pitch; feedback was
provided. Stimuli were twelve tones from 262 Hz to 6272 Hz in half-octave steps, with ±3 dB
level jitter. Speech perception in quiet was assessed using the Chear Auditory Perception
Test (CAPT) presented at 50 dBA, and speech perception in noise using the Children's
Coordinate Response Measure (CCRM), with adaptive speech-shaped noise and fixed-level speech
(65 dBA) measuring the 50% speech reception threshold. Vocabulary age was assessed using the
Renfrew word finding test (Renfrew, 2009).
When vocabulary age was controlled for, there was a moderate correlation between pitch
ranking and CAPT scores (r = 0.54, p = 0.021) and between pitch ranking and CCRM (r = -0.62,
p = 0.006).
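A correlation controlling for a covariate, of the kind reported here, can be computed by correlating regression residuals; the Python sketch below uses hypothetical placeholder data, not the study's scores.

    import numpy as np

    def residuals(y, covariate):
        # Residuals of y after regressing out one covariate (with intercept)
        X = np.column_stack([np.ones_like(covariate), covariate])
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        return y - X @ beta

    rng = np.random.default_rng(0)
    vocab_age = rng.uniform(4, 12, 14)                      # hypothetical, n = 14
    pitch = 0.5 * vocab_age + rng.normal(0, 1, 14)          # hypothetical scores
    speech = 0.4 * vocab_age + 0.3 * pitch + rng.normal(0, 1, 14)

    r = np.corrcoef(residuals(pitch, vocab_age),
                    residuals(speech, vocab_age))[0, 1]
    print(f"partial r = {r:.2f}")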
The findings showed that pitch ranking can be assessed in children with CIs and that there is a
relationship with speech recognition even when developmental age is accounted for. Therefore pitch
ranking may potentially be useful for mapping and habilitation for children with CIs.
Acknowledgments
Thanks to the participants, their families and the CI programme at the Princess of Wales Hospital. The
authors report no conflicts of interest. The authors alone are responsible for the content and writing
of the paper.
References
Dawson P.W., McKay C.M., Busby P.A., Grayden D.B. & Clark G.M. 2000. Electrode
discrimination and speech perception in young children using cochlear implants. Ear Hear,
21(6), 597-607.
Kopelovich J.C., Eisen M.D. & Franck K.H. 2010. Frequency and electrode discrimination
in children with cochlear implants. Hear Res, 268, 105-113.
Renfrew C.E. 2009. The Renfrew Language Scales: Word finding vocabulary test (4th edition).
United Kingdom: Speechmark.
73. Cochlear implant listeners at a cocktail party: evaluating CI performance in multi-talker
listening situations with the CRM (Coordinate Response Measure).
H.R. Cooper, University Hospital Birmingham, Birmingham, UK
Cochlear implants (CI) can provide excellent speech discrimination in quiet, with large variance in
individual results. Standard tests of speech recognition do not replicate real-life listening conditions
where multiple, spatially separated maskers (background noise or competing talkers) create
difficulties for CI listeners. Cues available to NH listeners to solve the ‘cocktail party problem’ may
not be available to CI listeners (Cherry, 1953 ; Darwin, 2008).
The CRM (Coordinate Response Measure) measures the ability to selectively attend and
respond to a target sentence in the presence of distracter talkers that are either co-located or at
randomly spatially separated locations. Stimuli were presented from 9 head-height loudspeakers in
the sound field. The target sentence contained a pre-determined 'call sign' that listeners listened for.
All sentences had the form 'Ready (call sign) go to (colour) (number) now'. Listeners reported the
target colour and number (closed-set choice of 16 responses: four colours × four numbers). An
adaptive procedure varied the relative level of target and masker sentences to find the SNR for 50%
correct performance.
For NH listeners (n=12), median SRTs (50% correct) in 3 conditions (A: two talkers, one
female/one male, single speaker in front; B: 7 talkers, locations of target and maskers randomised
across 9 loudspeakers, with an 800 ms onset delay between target and masker sentences; and C: 7
talkers, locations of target and maskers randomised across 9 loudspeakers, target and masker
sentences presented in pairs with no onset delay) were (A) -19.5 dB; (B) -10.8 dB; and (C)
-1.7 dB.
Unilateral CI listeners (n=8) had median SRTs in the 3 conditions of (A) +1.72 dB, (B) +6.57 dB
and (C) +9.54 dB. Only 2 listeners achieved an SRT at a negative SNR, in the 2-talker condition.
Bilateral CI listeners (n=8), tested (a) with both implants and (b) with the better implant
alone, all had SRTs at positive SNRs in all 3 conditions. There was no significant bilateral advantage in
condition A (2 colocated talkers) or C (7 talkers without onset delay), but there was a significant
advantage in condition B (7 talkers with 800 ms onset delays).
Unlike NH listeners, CI users require positive SNRs in this multi-talker listening task. A
bilateral advantage was only found when onsets of multiple talkers were non-simultaneous.
Declaration of interest
The authors report no conflicts of interest. The authors alone are responsible for the content and
writing of the paper.
References
Cherry E.C. 1953. Some experiments on the recognition of speech, with one and with two ears. J
Acoust Soc Am, 25, 975-979.
Darwin C.J. 2008. Listening to speech in the presence of other sounds. Phil Trans R Soc B, 363,
1011-1021.
74. Comparing the benefit to sentence recognition in noise through a cochlear implant of a
contralateral modulated tone and noisy speech
T. Green, S. Rosen and A. Faulkner, Department of Speech, Hearing and Phonetic Sciences, UCL,
London, UK
Attempts to identify sources of bimodal benefit have shown that cochlear implant users with
residual low-frequency acoustic hearing can achieve improved speech recognition relative to
implant-alone from combining the implant signal with a contralaterally presented modulated tone
conveying voicing, fundamental frequency (F0) and amplitude envelope information (Brown &
Bacon, 2009). Earlier work examining F0-controlled sinusoids as an aid to lip-reading for
profoundly hearing impaired listeners established that a simple neural network could extract useful
voicing and F0 information from noisy speech in real-time (Faulkner et al, 1997). This suggests that
there may be circumstances in which bimodal benefit to speech perception in noise would be
maximised by presenting to the unimplanted ear a simplified signal conveying extracted F0 and
voicing information rather than amplified noisy speech. The present work attempts to establish the
range of signal-to-noise-ratios (SNRs) at which typical bimodal CI users are helped by a clean
simplified acoustic signal, and to compare the benefit provided to that from simple amplification of
noisy speech.
Recognition of target BKB sentences in noise was assessed with implant alone and in
combination with a contralateral acoustic signal consisting of either the speech and noise mixture,
low-pass filtered at 500 Hz, or a noise-free modulated tone. The tone follows the F0 contour of
voiced segments of the target speech, and is amplitude modulated with the envelope of the target
speech. No tone is presented during unvoiced target segments. Performance is assessed both in
steady speech-shaped noise and in noise modulated with an envelope extracted from connected
speech from a single-talker. The latter allows assessment of the extent to which bimodal hearing
promotes glimpsing of target speech in modulated maskers. Percentage of key words correct was
measured for each combination of acoustic input and noise type at each of four SNRs evenly spaced
between 0 and +12 dB. Identifying the circumstances in which the modulated tone provides greater
bimodal benefit than low-frequency speech in noise would establish the requirements for real-time
F0 extraction that would need to be met for approaches using a modulated tone to be viable in
realistic environments.
Acknowledgements
Supported by Action on Hearing Loss.
References
Brown, C. A. & Bacon, S. P. 2009. Achieving electric-acoustic benefit with a modulated tone. Ear
Hear, 30, 489-493.
Faulkner, A., van Son, N. & Beijk, C. M. 1997. The TIDE project OSCAR. Scandinavian
Audiology, 26, Suppl 47, 38-44.
75. Improved measures of spatial selectivity in cochlear implant users
S. Cosentino, J.M. Deeks and R.P. Carlyon, MRC Cognition and Brain Sciences Unit, Cambridge,
UK
Cochlear implant (CI) users rely on a finite number of channels to perceive acoustic information. A
number of factors, including the presence of dead regions in the cochlea or the positioning of an
electrode along the lateral wall, limit the selectivity with which electrodes stimulate the desired subset
of auditory nerve fibres. The present study aimed to develop a measure of CI spatial selectivity that
is sensitive to dead regions.
This was achieved by measuring psychophysical tuning curves for both single and dual-masker
electrodes using interleaved stimulation. This method was chosen because, we argue, the more
commonly used forward-masking method is susceptible to non-sensory effects that can lead to an
over-estimation of spatial selectivity and that may miss dead regions. Stimulation rate was chosen to
minimize channel interaction between probe and masker pulses. The level of the maskers was
adaptively changed to mask a fixed-level probe, to ensure the same amount of excitation at
threshold near the probe. Equal currents were applied to the masking electrodes in the dual-masker
configuration; this differs from equal loudness setting of the maskers, which, we argue, is adversely
influenced by neural survival at the masker, rather than the probe, position. The stimulus consisted
of a 48-ms 125-pps pulse train (probe) presented in the middle of a 368-ms 125-pps masker. In the
dual-masker condition, a second masker pulse was added 90 µs after the first, and the presentation
order with respect to the probe was alternated between the two masker pulses (e.g., M1 - M2 - P -
M2 - M1 - P - M1 ...). The pulses were 60-µs cathodic-first biphasic pulses. A 3I-2AFC procedure
was used with the probe fixed at 10 or 20% of its dynamic range, and the masker level of one or two
electrodes was varied for different probe-masker electrode separations. CI users implanted with
Med-El and Advanced Bionics devices took part.
With two masking electrodes, masker levels at probe masked thresholds generally increased
with increasing separation between the probe and the maskers. This reflects excitation of the neural
population responding to the probe that diminishes as the masker is physically moved away from
the probe location. Combined with single-masker thresholds, these data revealed several cases
where the probe was detected via neurons closer to adjacent electrodes than to the probe electrode.
The relative contributions of electrode position and neural survival to these cases were assessed by
combining the masking measures with detection thresholds across the electrode array.
Acknowledgements
SC is funded by Action on Hearing Loss.
76. Sound localisation by cochlear implant users and normally hearing adults: The role of
head movements
A.M. Goman*, P.T. Kitterick§ and A.Q. Summerfield*, *Department of Psychology, University of
York, UK, § NIHR Hearing Biomedical Research Unit, Nottingham, UK
Head movements can help listeners reduce front-back confusions and determine the location of a
sound source (Wightman & Kistler, 1999). It is less clear whether head movements improve
localisation accuracy for cochlear implant users. This study sought to investigate localisation
accuracy and listening effort in a group of cochlear implant users and normally-hearing adults in a
variety of listening environments.
Twenty-four normally hearing adults and eight cochlear implant users (four bilateral) participated. Participants
sat in the centre of an array of loudspeakers: Five were positioned in front of the listener (separated
by 15°) and five were positioned behind the listener (separated by 15°). In a blocked design, a short
(~0.8s), medium (~5.4s) or long (continuous repetition) stimulus was presented from one of 5
(front-only condition) or 10 (front-back condition) locations. Participants indicated where they
perceived the location of the sound to be and at the end of each block rated their listening effort on a
100
0-100 scale where zero indicated no effort and 100 indicated extreme effort. Head movement was
either permitted or not permitted. Motion tracking enabled changes in yaw to be measured.
Bilateral users were more accurate (59% correct) than unilateral users (18% correct) except in
the most challenging listening condition (short, front-back). Despite making use of additional time
when it was available, performance for unilateral users remained low in all conditions. However, for
bilateral users, accuracy improved when head movements were permitted for both the front-only
and front-back conditions. Conversely, whilst performance by normally-hearing adults was at
ceiling in the front-only condition, permitting head movements worsened performance slightly but
significantly in the front-back condition. Normally hearing adults did not benefit from additional
time and reported more listening effort was required when movement was not permitted.
This study found localisation to be better with two implants than one. Bilateral users benefit
from the varied interaural differences which arise as a result of head movements and can determine
the location of a sound source if sufficient time is available. In this study, accuracy by normally
hearing adults was higher when head movements were not permitted. Possibly the cognitive control
perceived as effort increases when head movement is not permitted; this greater control may result
in more concentration and, in turn, more accurate performance.
Acknowledgements
AMG is supported by an ESRC studentship.
References
Wightman, F. & Kistler, D.J. (1999). Resolution of front-back ambiguity in spatial hearing by
listener and source movement. J Acoust Soc Am, 105, 2841-2853.
77. Effects of frequency-to-place mismatch on sound localisation in electro-acoustic hearing
S. Morris* and P.T. Kitterick§. *MRC Institute of Hearing Research, Nottingham, UK. §NIHR
Nottingham Hearing Biomedical Research Unit, Nottingham, UK
Individuals with severe-to-profound deafness in one ear and normal hearing in the other have
minimal access to inter-aural cues which support accurate localisation. Cochlear implantation (CI)
of the impaired ear can aid localisation despite probable differences in the mapping between
frequency and cochlear place in the implanted and non-implanted ears. A frequency-to-place
mismatch between the ears can reduce sensitivity to inter-aural level cues (Goupell et al., 2013) and
may limit binaural benefits in these individuals. Listening with an implanted and a normal-hearing
ear was simulated to address two questions: 1) Does a mismatch between the implanted and non-implanted ears impair localisation? and 2) Can the mapping of the CI be adjusted to optimise
localisation?
Binaural recordings were made of monosyllabic words presented from one of seven
loudspeakers at 20-degree intervals from -60 to +60 degrees azimuth. Recordings were presented
over headphones and roved to disrupt monaural loudness cues. Participants reported the location of
each word from the set of possible locations. Signals presented to the right (CI) ear were processed
in one of three ways. Ideal processing represented the theoretical ability to deliver a wide range of
frequencies to sites in the cochlea with matching characteristic frequencies. Realistic processing
delivered the same wide range of frequencies but to an electrode array whose length and position
were based on published specifications and surgical outcomes, respectively. Best-Fit processing
allocated frequencies across the electrode array based on its position in the cochlea. Nine normal-hearing participants completed three sessions, one for each simulation. Before each session,
loudness balancing was performed and training was provided.
Localisation (mean absolute difference between actual and reported locations) and lateralisation
(percentage of trials where actual and reported locations were on the same side) exceeded chance
levels for all simulations. A mismatch between the ears impaired performance significantly
compared to matched presentation (Realistic vs Ideal, mean differences: localisation +6.0 degrees;
lateralisation -7.9%). Allocating frequencies based on electrode position produced numerical but
non-significant improvements compared to position-independent allocation (Best-Fit vs Realistic,
mean differences: localisation -1.9 degrees; lateralisation +4.6 %). The results suggest that a
mismatch in frequency-to-place mapping between an implanted ear and a non-implanted ear impairs
localisation. Allocating frequencies based on electrode positioning does not appear to improve
localisation after short-term exposure.
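The two outcome measures used here can be stated compactly in code; the sketch below computes mean absolute localisation error and percentage of same-side (lateralisation) responses from hypothetical response data, not the study's results.

    import numpy as np

    rng = np.random.default_rng(2)
    speakers = np.arange(-60, 61, 20)          # seven azimuths, -60 to +60 deg
    actual = rng.choice(speakers, 70)          # presented locations (placeholders)
    reported = np.clip(actual + rng.choice([-20, 0, 0, 0, 20], 70), -60, 60)

    localisation = np.mean(np.abs(actual - reported))    # mean absolute error, deg
    # Same-side responses; treating 0 deg as its own "side" is a simplification
    lateralisation = 100.0 * np.mean(np.sign(actual) == np.sign(reported))
    print(f"localisation = {localisation:.1f} deg, lateralisation = {lateralisation:.0f}%")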
Acknowledgements
Supported by the intramural program of the MRC and infrastructure support from the NIHR.
References
Goupell M.J., Stoelb C., Kan A., Litovsky R.Y. 2013. Effect of mismatched place-of-stimulation on
the salience of binaural cues in conditions that simulate bilateral cochlear-implant listening. J
Acoust Soc Am, 133(4), 2272-87.
78. A conceptual framework for studying cochlear implant users’ relationship with music:
music-related quality of life
G. Dritsakis and R.M. van Besouw, ISVR, University of Southampton, UK
The relationship that cochlear implant (CI) users have with music has been the focus of many recent
studies (Looi et al. 2012). However, from a review of studies on music and CIs published since
2000, it is evident that there is inconsistent use of terminology. For instance, music appreciation and
music enjoyment are sometimes used interchangeably. As a result, it is often difficult to interpret
findings and draw comparisons across studies. In addition, the concept of quality of life (QoL) has
been overlooked, despite evidence showing the impact of music on the QoL of normal-hearing
listeners and significant positive correlations between music enjoyment or sound quality ratings and
QoL scores for CI users. Consequently, the psychological and social dimensions of CI users’
relationship with music, such as attitudes towards music and participation in music-related social
activities, are under-researched. The aim of the present study is to develop a conceptual framework
that will describe CI users' relationship with music, taking QoL into account. The concept of 'music-related quality of life' (MRQoL) is introduced; it refers to aspects of the QoL of CI users that are
particularly related to music. Issues relating to CI users’ relationship with music identified by
previous studies, e.g. pitch perception, music appraisal or listening habits, are organised into the
physical, psychological and social QoL domains (Hinderink et al. 2000). The use of terms and the
rationale behind the development of the framework are explained. The proposed framework is
holistic and can be adapted in order to cover additional domains or items. It can be used to structure
research in the area of music and CIs and is currently being used as the basis for the development of
a music-specific QoL measure. Focus group data from adult CI users will be analysed according to
the framework and questionnaire items will be generated. The analysis of the focus group data will
also provide evidence concerning the usefulness of the framework in describing CI users’
relationship with music.
Acknowledgments
Supported by the Faculty of Engineering and the Environment (FEE), University of Southampton
References
Hinderink, J.B., Krabbe, P.F. & Van Den Broek, P., 2000. Development and application of a
health-related quality-of-life instrument for adults with cochlear implants: the Nijmegen cochlear
implant questionnaire. Otolaryngology-Head and Neck Surgery, 123(6), pp.756-65.
Looi, V., Gfeller, K. & Driscoll, V., 2012. Music appreciation and training for cochlear implant
recipients: A review. Seminars in Hearing, 33(4), pp.307–334.
79. The effect of stimulus polarity and pulse shape on spread of excitation in cochlear implant
users
R.P. Carlyon§, J.M. Deeks§, A. van Wieringen‡, J. Undurraga‡, O. Macherey§ and J. Wouters‡,
§MRC Cognition and Brain Sciences Unit, Cambridge, UK, ‡Lab. Exp. ORL, Univ. Leuven, Belgium
Physiological measurements obtained with animals have shown more restricted spread of neural
excitation with bipolar (BP) compared to monopolar (MP) stimulation. However, in BP mode the
current at one electrode is a polarity inverted version of that at the other, leading to a possible
bimodal spread of excitation, at least for the commonly-used symmetric (SYM) biphasic pulses.
Experiments using asymmetric pulses show that, at least at moderately high levels, CI users are
more sensitive to anodic than to cathodic current. This raises the possibility that asymmetric pulses
in bipolar mode might be used to selectively excite neurons near one electrode. We measured
forward-masked psychophysical tuning curves in which 200-ms 1031-pps BP maskers consisted
of either SYM or pseudomonophasic (PS) pulses; for PS pulses both polarities were tested.
The signal consisted of three 97 µs/phase SYM pulses at a rate of 344 pps, separated from the
masker by a 6.8-ms silent interval, at a +3 dB sensation level. The first phase of each masker pulse
had a duration of 97 µs, and, for PS pulses, was followed by an opposite-polarity phase of four
times this duration and one quarter of its amplitude. The hypothesis was that PS maskers should be
more effective when the short high-amplitude phase was anodic on the electrode closest to the
probe, compared to when it was cathodic. Hence we predicted an interaction between the effects of
masker position and polarity. Seven users of the Advanced Bionics CI took part.
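The pulse shapes compared here are easy to construct numerically; the sketch below builds a symmetric biphasic pulse and a charge-balanced pseudomonophasic pulse with the 4:1 duration and 1:4 amplitude ratio described above. The sampling rate and amplitude are arbitrary choices for illustration.

    import numpy as np

    fs = 1_000_000                      # 1 MHz grid so 97-us phases are resolvable
    n1 = int(round(97e-6 * fs))         # samples in the 97-us first phase
    amp = 1.0                           # arbitrary amplitude

    # Symmetric (SYM): two equal and opposite 97-us phases
    sym = np.concatenate([amp * np.ones(n1), -amp * np.ones(n1)])

    # Pseudomonophasic (PS): 97-us phase, then 4 x 97 us at one quarter amplitude
    ps = np.concatenate([amp * np.ones(n1), -(amp / 4) * np.ones(4 * n1)])

    # Both shapes are charge balanced (net area ~ 0)
    print(f"SYM net charge {sym.sum() / fs:+.2e}, PS net charge {ps.sum() / fs:+.2e}")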
A repeated-measures ANOVA (masker position × polarity) on just the PS masker data revealed
no significant main effects or interactions. However, ANOVAs on the results for individual subjects
revealed a significant interaction for six out of the seven subjects. For all subjects there was at least
one electrode position where the masker level at threshold differed significantly between the two PS
polarities. For five of the seven listeners all of these significant differences were in the predicted
direction. However, for two listeners there were significant differences in the opposite direction. We
discuss possible reasons for these different patterns of results, including the possibility that some
listeners may show greater sensitivity to cathodic stimulation at low levels; although all of our
maskers were well above threshold, the excitation that they produced at the probe place was
probably quite low.
Acknowledgement:
Supported by MRC and KU Leuven.
80. Comodulated notched-noise masking
Ramona Grzeschik*, Björn Lübken* and Jesko L. Verhey*, *Department of Experimental Audiology,
Otto von Guericke University Magdeburg, Magdeburg, Germany
Detection thresholds of a masked sinusoidal signal are lower when the masking noise is coherently
modulated over a wide frequency range compared to a masking condition with the same spectrum
but using an unmodulated masker. This phenomenon is referred to as comodulation masking release
(CMR; Hall et al., 1984). Here we tested how masker comodulation affects notched-noise data.
Thresholds for sinusoids were measured in the presence of a diotic notched-noise masker for
comodulated and unmodulated conditions. The signal frequency of the target was 500 Hz. The
masker was a broadband filtered noise with lower- and upper cut-off frequencies of 30 and 1000
Hz, respectively. The masker had a spectral notch at the signal frequency. Notch widths were 0, 50,
100, 200, and 400 Hz. The arithmetic centre frequency of the notch was equal to the signal
frequency.
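As a rough illustration of this stimulus construction (not the authors' code), the Python sketch below generates a 30-1000 Hz noise with an arithmetically centred spectral notch and, for the comodulated condition, imposes a single coherent envelope on all components. The 20-Hz sinusoidal modulator is an assumption for illustration; the abstract does not specify the modulator.

```python
import numpy as np

fs, dur = 44100, 0.5
n = int(fs * dur)
rng = np.random.default_rng(0)

def notched_noise(notch_width_hz, comodulated, mod_rate_hz=20.0):
    # Build band-limited noise in the frequency domain (30-1000 Hz passband).
    spec = rng.standard_normal(n) + 1j * rng.standard_normal(n)
    f = np.fft.fftfreq(n, 1 / fs)
    keep = (np.abs(f) >= 30) & (np.abs(f) <= 1000)
    if notch_width_hz > 0:
        # Spectral notch arithmetically centred on the 500 Hz signal frequency.
        keep &= np.abs(np.abs(f) - 500.0) > notch_width_hz / 2
    spec[~keep] = 0
    noise = np.real(np.fft.ifft(spec))
    if comodulated:
        # One envelope applied to the whole waveform -> coherent modulation
        # in all frequency regions (the defining property of comodulation).
        t = np.arange(n) / fs
        noise *= 1.0 + np.sin(2 * np.pi * mod_rate_hz * t)
    return noise / np.max(np.abs(noise))

masker = notched_noise(notch_width_hz=100, comodulated=True)
```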
The data of eleven participants (4 male, 7 female, aged 25-37 yrs, mean 30 yrs) entered the
analysis. As expected, thresholds decreased with increasing notch width for both masking
conditions. The CMR also decreased continuously with increasing notch width: it was almost 9 dB with no notch and less than 2 dB at the 400-Hz notch width.
The experimental data were compared with predictions of a modulation filterbank model that
was previously shown to simulate CMR in the band-widening type of CMR experiments
(Piechowiak et al., 2007).
Acknowledgements
Supported by the German Research Foundation (SFB/TRR31).
References
Hall JW, Haggard MP, Fernandes MA (1984) Detection in noise by spectrotemporal pattern analysis. J Acoust Soc Am 76: 50-56.
Piechowiak T, Ewert SD, Dau T (2007) Modeling comodulation masking release using an equalization-cancellation mechanism. J Acoust Soc Am 121(4): 2111-2126.
81. Cochlear implant listeners at a cocktail party: evaluating CI performance in multi-talker
listening situations with the CRM (Coordinate Response Measure).
Cooper, H.R., University Hospital Birmingham, Birmingham, UK
Cochlear implants (CIs) can provide excellent speech discrimination in quiet, albeit with large variance in
individual results. Standard tests of speech recognition do not replicate real-life listening conditions
where multiple, spatially separated maskers (background noise or competing talkers) create
difficulties for CI listeners. Cues available to NH listeners to solve the ‘cocktail party problem’ may
not be available to CI listeners (Cherry, 1953; Darwin, 2008).
The CRM (Coordinate Response Measure) measures the ability to selectively attend and respond to a target sentence in the presence of distracter talkers that are either co-located or at randomly assigned, spatially separated locations. Stimuli were presented from 9 head-height loudspeakers in the sound field. The target sentence contained a pre-determined 'call sign' that listeners listened for. All sentences had the form 'Ready (call sign) go to (colour) (number) now.' Listeners reported the target colour and number (a closed-set choice of 16 responses: four colours × four numbers). An adaptive
procedure varied the relative level of target and masker sentences to find the SNR for 50% correct
performance.
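The abstract does not name the adaptive rule; a 1-down/1-up track, sketched below, is the standard procedure that converges on the 50%-correct point and is therefore a plausible stand-in. The step size, reversal count and the run_trial callback are hypothetical.

```python
# Minimal 1-down/1-up adaptive track converging on the 50%-correct SNR.
# run_trial(snr_db) is a hypothetical callback that presents one CRM trial
# and returns True only if both colour and number were reported correctly.
def adaptive_srt(run_trial, start_snr_db=10.0, step_db=2.0, n_reversals=8):
    snr, direction, reversals = start_snr_db, 0, []
    while len(reversals) < n_reversals:
        correct = run_trial(snr)
        new_dir = -1 if correct else +1   # harder after a hit, easier after a miss
        if direction != 0 and new_dir != direction:
            reversals.append(snr)         # track direction changed: a reversal
        direction = new_dir
        snr += new_dir * step_db
    return sum(reversals[-6:]) / 6        # SRT = mean of the last reversals
```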
For NH listeners (n=12), median SRTs (50% correct) in 3 conditions (A: two talkers, one female/one male, single speaker in front; B: 7 talkers, locations of target and maskers randomised across 9 loudspeakers, with an 800 ms onset delay between target and masker sentences; and C: 7 talkers, locations of target and maskers randomised across 9 loudspeakers, target and masker sentences presented in pairs with no onset delay) were (A) -19.5 dB, (B) -10.8 dB and (C) -1.7 dB.
Unilateral CI listeners (n=8) had median SRTs in the 3 conditions of (A) +1.72 dB, (B) +6.57 dB and (C) +9.54 dB. Only 2 listeners achieved an SRT at a negative SNR, and only in the 2-talker condition.
Bilateral CI listeners (n=8), tested (a) with bilateral implants and (b) with the better implant alone, all had SRTs at positive SNRs in all 3 conditions. There was no significant bilateral advantage in condition A (2 co-located talkers) or C (7 talkers without onset delay), but there was a significant
advantage in condition B (7 talkers with 800 ms onset delays).
Unlike NH listeners, CI users require positive SNRs in this multi-talker listening task. A
bilateral advantage was only found when onsets of multiple talkers were non-simultaneous.
Declaration of interest
The authors report no conflicts of interest. The authors alone are responsible for the content and
writing of the paper.
References
Cherry, E.C. (1953). Some experiments on the recognition of speech, with one and with two ears. J Acoust Soc Am, 25, 975-979.
Darwin, C.J. (2008). Listening to speech in the presence of other sounds. Phil Trans R Soc B, 363, 1011-1021.
Poster Abstracts: Balance and vEMPs
82. Cervical and ocular VEMPs in secondary school age children
D. Deval*, C. Shaw§, A. Shetye§, K. Harrop-Griffiths* and W. Pagarkar§,*Nuffield Hearing and
Speech Centre, Royal National Throat Nose and Ear Hospital, London, UK, §Department of
Audiovestibular Medicine, Hackney Ark, London, UK.
VEMPs are myogenic potentials generated following a sound stimulus and most commonly
detected by electrodes placed on the sternocleidomastoid muscle (cervical VEMPs or cVEMPs)
(Colebatch et al, 1994). More recently, recording from the extraocular muscles has been described
(ocular VEMPs or oVEMPs) (Rosengren et al, 2005). The cVEMPs are saccular in origin whereas
the oVEMPs are thought to be utricular, although their waveforms show some similarity. Studies
describing normative data of oVEMPs in children are scarce (Wang et al 2013). Studies comparing
oVEMPs and cVEMPs in adults have been published (Nguyen et al 2010), but there are no
published studies of comparison between oVEMPs and cVEMPs in children with normal hearing.
This study aims to compare the results of oVEMPs and cVEMPs in secondary school age children
with normal hearing.
The study recruited 50 secondary school children of both sexes, 11-16 years of age. Children
were excluded if they had hearing loss, middle ear dysfunction, history of vestibular symptoms,
neurological problems, developmental delay, cervical spine or neck pathology, ocular muscle
palsies or migraine headaches. All participants had pure tone audiometry, tympanometry, cVEMP and oVEMP testing. The VEMP parameters recorded were: P and N wave latency (milliseconds),
amplitude of waveform (microvolts), asymmetry ratio between amplitudes of two ears and
threshold. The t test was used to compare the cVEMPs and oVEMPs data.
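The abstract reports an asymmetry ratio but does not define it; the conventional VEMP formulation, assumed here, expresses the interaural amplitude difference as a percentage of the summed amplitudes:

    AR = 100 × |A_R - A_L| / (A_R + A_L)

where A_R and A_L are the waveform amplitudes (in microvolts) recorded from the right and left ears.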
The mean cVEMP threshold was 75 dB nHL (±5.83) for the right ear and 76 dB nHL (±5.33) for the left ear. The mean oVEMP threshold was 83 dB nHL (±5.07) for the right ear and 83 dB nHL (±4.49) for the left ear. The mean waveform amplitude of cVEMPs was 140 microvolts (±99.19) for the right ear and 143.81 microvolts (±91.02) for the left ear. Corresponding values for oVEMPs were 6.47 microvolts (±4.37) (right ear) and 6.73 microvolts (±6.58) (left ear). The asymmetry ratio was 17.93 (±11.87) for cVEMPs and 25.64 (±16.22) for oVEMPs.
There was no significant difference between the mean values of the right and left ears for cVEMP or oVEMP threshold, wave latency or amplitude. The oVEMPs had significantly smaller amplitudes and higher thresholds than the cVEMPs. Both cVEMPs and oVEMPs show a
wide variation in amplitude. This study shows the differences in the waveforms of the cVEMPs and
oVEMPs in children with normal hearing. A larger study is required to compare the cVEMPs and
oVEMPs response variations with age and sex.
Acknowledgements
Supported by the BSA.
References
Colebatch J.G., Halmagyi G.M. & Skuse N.F. 1994. Myogenic potentials generated by a click-evoked vestibulocollic reflex. J Neurol Neurosurg Psychiatry, 57(2), 190-197.
Rosengren S.M., McAngus Todd N.P. & Colebatch J.G. 2005. Vestibular-evoked extraocular potentials produced by stimulation with bone-conducted sound. Clin Neurophysiol, 116(8), 1938-1948.
Wang S.J., Hsieh W.S. & Young Y.H. 2013. Development of ocular vestibular-evoked myogenic
potentials in small children. Laryngoscope, 123(2), 512-517.
Nguyen K.D., Welgampola M.S. & Carey J.P. 2010. Test-Retest Reliability and Age-Related
Characteristics of the Ocular and Cervical Vestibular Evoked Myogenic Potential Tests. Otol
Neurotol, 31(5), 793-802.
Poster Abstracts: Tinnitus
83. An internet-based approach to characterising the misophonia population
V.L. Kowalkowski*§ and K. Krumbholz*, *MRC Institute of Hearing Research, Nottingham, UK, §University of Nottingham, UK
The term ‘Misophonia’ was coined by otolaryngologists approximately fifteen years ago to identify
a population of individuals who experience negative emotional responses to certain everyday
sounds (Jastreboff & Jastreboff, 2001), but who do not fit the criteria of hyperacusis. Misophonia is
thought to be caused by dysfunctional interaction between the auditory and limbic systems
(Jastreboff & Jastreboff, 2001), and might therefore share similarities with tinnitus distress.
Whereas tinnitus has attracted intense research interest, little systematic research exists on
misophonia, probably because the condition is purportedly rare. Online support networks for
misophonia, however, suggest a much greater prevalence than seen in the clinic. This is supported
by a recent group study of misophonia, based on 42 patients, which reported that recruitment
through self-referral was easy (Schröder et al., 2013).
We developed an online questionnaire, targeted at online misophonia support groups, to
determine to what extent findings by previous case studies (Edelstein et al., 2013, Webber et al.,
2014) represent the wider misophonia population. The questionnaire combines psychological and
audiological approaches by containing an adapted version of the diagnostic-oriented A-Miso-S
questionnaire, as well as two new questionnaires: the Misophonia Handicap Questionnaire (MHQ)
and Misophonia Handicap Inventory (MHI), which were derived from the Tinnitus Handicap
Questionnaire and Tinnitus Handicap Inventory, respectively.
We present analyses of the data from 80 respondents, obtained within three weeks of the
questionnaire’s release. The data suggest that, of these 80 respondents, 40% are likely to have
hyperacusis instead of, or in addition to, misophonia. The data also support the suggestion from previous studies of a hereditary component to misophonia: 50% of the respondents without
suspected hyperacusis reported immediate family members with similar symptoms.
To evaluate the measurement properties of the new misophonia questionnaires, we used the
methodological approach described by Terwee et al (2007). We found that the MHI correlated well
with the A-Miso-S questionnaire (r=0.73, P(>F)<0.001) and did not suffer from ceiling or floor
effects, suggesting that it represents a good measure of the effect of misophonia on an individual’s
quality of life. The MHQ correlated less strongly with the A-Miso-S (r=0.58, P(>F)<0.001), but did
correlate strongly with the MHI (r=0.8, P(>F)<0.001).
Future work will be aimed at validating the new questionnaire measures, and assessing their
usefulness in clinical and research settings. Some of the questionnaire participants will be invited
for EEG recordings of spontaneous brain activity, to be compared with subjects suffering from
bothersome tinnitus.
Declaration of interest
The authors report no conflicts of interest. The authors alone are responsible for the content and
writing of the paper.
References
Jastreboff & Jastreboff 2001. Components of decreased sound tolerance: hyperacusis, misophonia,
phonophobia. ITHS News Lett. 2, 5–7
Edelstein M., Brang D. & Ramachandran V.S. 2013. Front Hum Neurosci, 7, 296.
Webber T.A., Johnson P.L. & Storch E.A. 2014. Gen Hosp Psych, 36(2), 231.
Schröder A., Vulink N. & Denys D. 2013. PLOS One, 8, 1-5.
Terwee C. B., Bot S. D. M., de Boer M. R., van der Windt D. A. W. M., Knol D. L., Dekker J.,
Bouter L. M., de Vet H. C. W., 2007. Quality criteria were proposed for measurement
properties of health status questionnaires. J Clin Epid 60,1,34-42.
84. Source space estimation of oscillatory power and brain connectivity in tinnitus
O. Zobay*, A. R. Palmer*, D. A. Hall§,‡, M. Sereda§,‡ and P. Adjamian*, *MRC Institute of Hearing
Research, University Park, Nottingham, UK, §National Institute for Health Research (NIHR)
Nottingham Hearing Biomedical Research Unit, Nottingham, UK, ‡Otology and Hearing group,
Division of Clinical Neuroscience, School of Medicine, University of Nottingham, UK
Tinnitus is the perception of an internally generated sound that is postulated to emerge as a result of
structural and functional changes in the brain. However, the precise pathophysiology of tinnitus
remains unknown. The thalamocortical dysrhythmia model (Llinas et al. 1999, 2005) suggests that
neural deafferentation due to hearing loss causes a dysregulation of coherent activity between
thalamus and auditory cortex. This leads to a pathological coupling of theta and gamma oscillatory activity in the resting state, localised to the auditory cortex, where alpha oscillations would normally
occur. Numerous studies also suggest that tinnitus perception relies on the interplay between
auditory and non-auditory brain areas. According to the Global Brain Model (Schlee et al, 2011), a
network of global fronto–parietal–cingulate areas is important in the generation and maintenance of
the conscious perception of tinnitus. Thus, the distress experienced by many individuals with
tinnitus is related to the top–down influence of this global network on auditory areas. In this
magnetoencephalography (MEG) study, we compare resting-state data of tinnitus subjects and
normal-hearing controls to examine effects on spectral power as well as functional and effective
connectivity. The analysis is based on beamformer source projection and an atlas-based region-of-interest approach. We find increased functional connectivity within the auditory cortices in the
alpha band. A significant increase is also found for the effective connectivity from a global brain
network to the auditory cortices in the alpha and beta bands. We do not find evidence of effects on
spectral power. Overall, our results provide only limited support for the thalamocortical
dysrhythmia and global brain models of tinnitus.
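As a hedged illustration of one common functional-connectivity measure (not the authors' full pipeline, which used beamformer source projection, atlas-based regions of interest and effective-connectivity estimation), the sketch below computes magnitude-squared coherence between two source time series and averages it over the alpha band; the sampling rate and toy signals are assumptions.

```python
import numpy as np
from scipy.signal import coherence

fs = 600                                    # assumed MEG sampling rate (Hz)
rng = np.random.default_rng(1)
shared = rng.standard_normal(60 * fs)       # toy common driver of both sources
x = shared + 0.5 * rng.standard_normal(60 * fs)
y = shared + 0.5 * rng.standard_normal(60 * fs)

# Magnitude-squared coherence, then average over the alpha band (8-12 Hz).
f, cxy = coherence(x, y, fs=fs, nperseg=2 * fs)
alpha = (f >= 8) & (f <= 12)
print(f"alpha-band coherence: {cxy[alpha].mean():.2f}")
```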
Acknowledgements
This research was funded by the Medical Research Council. MEG data were collected at the Sir
Peter Mansfield Magnetic Resonance Centre, University of Nottingham.
References
Llinas R.R., Ribary U., Jeanmonod D., Kronberg E. & Mitra P.P. 1999. Thalamocortical
dysrhythmia: A neurological and neuropsychiatric syndrome characterized by
magnetoencephalography. PNAS, 96, 15222-15227.
Llinas R., Urbano F.J., Leznik E., Ramirez R.R. & van Marle H.J.F. 2005. Rhythmic and
dysrhythmic thalamocortical dynamics: GABA systems and the edge effect. Trends Neurosci,
28, 325-333.
Schlee W., Lorenz I., Hartmann T., Müller N., Schulz H. & Weisz, N. 2011. A Global Brain Model
of Tinnitus. In: Møller A.R., Langguth B., De Ridder D. & Kleinjung T., (eds.) Textbook of
Tinnitus. New York: Springer. pp. 161-169.
85. Objectively characterising tinnitus using the postauricular muscle reflex
J.I. Berger, B. Coomber, A.P. Carver, M.N. Wallace and A.R. Palmer, MRC Institute of Hearing
Research, Nottingham, UK
Tinnitus – the perception of sound in the absence of external auditory input – affects 8-15% of
adults worldwide. Currently, there are no universally effective treatments, due to heterogeneity in
tinnitus aetiologies and a poor understanding of underlying neural pathologies. Assessment of
specific tinnitus characteristics relies solely on subjective self-report, which can often be unreliable.
Accordingly, an objective test would be desirable both as a diagnostic tool and as an outcome
measure for assessing treatment efficacy.
A commonly-used behavioural test for tinnitus in animals – gap-induced prepulse inhibition of
acoustic startle (GPIAS) – relies on measuring a reflex response to determine the detection of gaps
in continuous stimuli. Impaired gap detection following tinnitus induction is thought to be caused
by tinnitus ‘filling in’ the gap, thus reducing its salience. Preliminary work in humans measuring
GPIAS of eye blink responses showed gap detection deficits in tinnitus subjects, but the underlying
mechanisms of this effect were unclear (Fournier & Hébert, 2013). Conversely, gap recognition in a
perceptual task was not found to be affected when hearing-impaired tinnitus subjects were
compared with controls (Campolo et al., 2013).
We have developed a variation of the GPIAS method in which we measure gap-induced
prepulse inhibition (PPI) of the Preyer reflex in guinea pigs (Berger et al., 2013). The postauricular muscle reflex (PAMR), present in humans, is an analogue of the Preyer reflex and could form the basis of an objective clinical tinnitus test. The PAMR is subject to prepulse
inhibition (Cassella & Davis, 1986), and represents an alternative to the eye blink reflex. However,
gap-induced PPI of the PAMR has yet to be demonstrated.
In the present study, we measured PPI of the PAMR in a small group of normal-hearing
subjects. PAMR responses were recorded electromyographically with surface electrodes placed
over the insertion of the postauricular muscle to the pinna. Gap detection was evaluated with tonal
background sounds, presented at either 60 or 70 dB SPL, and startle stimuli comprising very brief
broadband noise bursts presented at either 85, 90, 95, or 100 dB SPL. Preliminary data suggest that
the PAMR is susceptible to gap-induced PPI, and that the size of this inhibition depends on the relative sound levels of background and startle stimuli.
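The abstract does not give the PPI formula; the conventional metric, assumed in the sketch below, is the percentage reduction in startle (EMG) amplitude on gap trials relative to no-gap trials.

```python
import numpy as np

def gap_ppi_percent(startle_no_gap, startle_gap):
    # Inputs: peak EMG amplitudes per trial (any units), no-gap vs gap trials.
    no_gap = np.mean(startle_no_gap)
    gap = np.mean(startle_gap)
    return 100.0 * (1.0 - gap / no_gap)   # larger value = stronger inhibition

# e.g. gap_ppi_percent([8.1, 7.9, 8.4], [5.2, 4.9, 5.5]) -> ~36% inhibition
```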
Future studies will examine further the parameters that affect gap-induced PPI of the PAMR,
and subsequently establish whether subjects with tinnitus exhibit deficits in gap detection.
Acknowledgements
BC was supported by Action on Hearing Loss (International Project Grant G62). AC was supported
by an Action on Hearing Loss summer studentship.
References
Berger JI, Coomber B, Shackleton TM, Palmer AR, Wallace MN (2013) A novel behavioural
approach to detecting tinnitus in the guinea pig. J Neurosci Methods 213: 188-95.
Campolo J, Lobarinas E, Salvi R (2013) Does tinnitus “fill in” the silent gaps? Noise Health 15:
398-405.
Cassella JV & Davis M (1986) Habituation, prepulse inhibition, fear conditioning, and drug
modulation of the acoustically elicited pinna reflex in rats. Behav Neurosci 100: 39-44.
Fournier P & Hébert S (2013) Gap detection deficits in humans with tinnitus as assessed with the
acoustic startle paradigm: does tinnitus fill in the gap? Hear Res 295: 16-23.
86. Investigating the central gain mechanism following a short period of unilateral earplug
use
H.E. Brotherton*, M. Maslin*, C.J. Plack*, R. Schaette§ and K.J. Munro*, *School of Psychological
Sciences, University of Manchester, Manchester, UK, §Ear Institute, University College London,
London, UK.
Previous research has shown a change in the threshold of the middle ear acoustic reflex after
continuous earplug use in one ear for seven days (Munro et al, 2014). The main aim of the present
study was to extend the findings of Munro and colleagues and determine (i) the time course of the
changes, (ii) if the greatest change occurred at frequencies affected by the earplug and (iii) if the
underlying mechanism operated in the ascending auditory pathway.
The present study investigated the acoustic reflex threshold in 25 normal-hearing listeners after six
days of continuous unilateral earplug use. Acoustic reflex thresholds for pure tones (0.5-4 kHz) and
broadband noise were measured using a blinded design. Measurements took place at baseline, two,
four, and six days after earplug use, as well as four and 24 hours after final removal of the earplug
on day six. Reflex thresholds were obtained at a lower sound pressure level when the high-frequency and broadband noise stimuli were presented to the previously plugged ear, and there was
no change when stimuli were presented to the control ear. The mean change of around 5 dB was
greatest after four days of unilateral earplug deprivation. The onset of change in the reflex threshold
was found to be slower than the recovery after the earplug was removed. Most of the recovery had
taken place within 24 hours of earplug removal. The findings reveal direct evidence for adaptive
plasticity after a period of unilateral auditory deprivation. The changes in the reflex threshold are consistent with a gain control mechanism that operates on a time scale of hours to days, acts within the ascending auditory system, and is specific to the frequencies most affected by the earplug. The findings may
have implications for patients with tinnitus or hyperacusis.
Declaration of interest
The authors report no conflicts of interest. The authors alone are responsible for the content and
writing of the paper.
References
Munro K.J., Turtle C. & Schaette R. 2014. Plasticity and modified loudness following short-term
unilateral deprivation: Evidence of multiple gain mechanisms within the auditory system. J
Acoust Soc Am, 135, 315-322.
87. Unilateral hearing loss in the ferret: functional characterisation of a novel model of
tinnitus
J.R. Gold*, F.R. Nodal*, D.N. Furness§, A.J. King* and V.M. Bajo*, *Department of Physiology,
Anatomy and Genetics, University of Oxford, UK, §Institute for Science and Technology in
Medicine, Keele University, UK
Tinnitus in its subjective form is characterised by the patient-reported presence of a sound percept
without any corresponding environmental acoustic source. Animal models have been sought that
may reproduce such a percept in the subject, verified using behavioural measures and understood by
physiological and anatomical means. However, published studies have largely obtained either behavioural or physiological measurements alone, rarely combining multiple measures within the same individuals over time. Therefore, we have aimed to address this lacuna in the field
by developing a novel model of maladaptive plasticity based upon selective sensorineural hearing
loss in adult ferrets. Individual animals (n = 7) were tracked longitudinally (>12 months), with
auditory function assessed using an operantly-conditioned gap-detection paradigm and
measurement of auditory brainstem responses (ABRs). Upon acquisition of stable behaviour,
selective mechanical lesions were made unilaterally in the basal coil level of the spiral ganglion
(SG), which was followed by further behavioural testing and ABR measurement at multiple time
points post-recovery. We subsequently determined that SG cell density in control cochleas was 1.6 times higher than in lesioned specimens. Behavioural performance outcomes were heterogeneous, with bootstrap inference-based analysis revealing a subset of ferrets that, at specific times post-lesion, displayed gap-detection psychometric function threshold elevations and slope changes.
These animals displayed correlated changes in late-wave ABR shape and amplitude, akin to those
observed in certain human tinnitus patient cohorts. In particular, these functional changes were
distinct from those observed in animals without behavioural signs of tinnitus. Extracellular
recordings performed in the auditory cortex bilaterally of lesioned and control ferrets revealed
changes in multiunit spectrotemporal response properties of tinnitus-like animals, including
tonotopic rearrangement and elevations of neural synchrony and spontaneous activity, in accordance
with published tinnitus-like physiology, and our own behavioural and ABR-defined criteria.
Histological processing of the same group of animals revealed that selective SG lesions also caused
changes in auditory brain neurochemistry and particularly in NADPH diaphorase expression, within
the primary auditory cortex and auditory brainstem nuclei. On the basis of these longitudinally-evaluated physiological changes, a second subset of lesioned ferrets received chronic bilateral fibre-optic implants in the auditory cortex, each of which had been injected with an AAV-based
optogenetic construct to enable neural expression of the light-driven outward proton pump, ArchT.
The aim of these experiments is to potentially reverse the tinnitus-like percept by silencing
abnormal cortical signalling in awake behaving animals, with behavioural testing and analysis
presently ongoing.
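The bootstrap analysis is not described in detail in the abstract; the sketch below illustrates the general idea of resampling trials with replacement to obtain a confidence interval on a gap-detection threshold. The crude threshold rule stands in for a full psychometric-function fit and is purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)

def fit_threshold(gaps_ms, correct):
    # Stand-in threshold estimate: shortest gap with >= 75% correct
    # (a real analysis would fit a psychometric function instead).
    for g in np.unique(gaps_ms):
        if correct[gaps_ms == g].mean() >= 0.75:
            return g
    return np.nan

def bootstrap_ci(gaps_ms, correct, n_boot=2000):
    # gaps_ms: gap duration per trial (array); correct: 0/1 response per trial.
    n = len(gaps_ms)
    stats = [fit_threshold(gaps_ms[idx], correct[idx])
             for idx in (rng.integers(0, n, n) for _ in range(n_boot))]
    return np.nanpercentile(stats, [2.5, 97.5])   # 95% CI on the threshold
```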
Acknowledgements:
This study is supported by the Wellcome Trust and Action on Hearing Loss.
88. Changes in resting-state oscillatory power in anaesthetised guinea pigs with
behaviourally-tested tinnitus following unilateral noise trauma.
V.L. Kowalkowski*§, B. Coomber*, M.N. Wallace*, K. Krumbholz*, *MRC Institute of Hearing Research, Nottingham, UK, §University of Nottingham, UK
Chronic tinnitus affects about one in ten people in the UK, and can be highly bothersome for some
individuals. It is often associated with noise trauma or hearing loss. There are several theories for
how the tinnitus percept is generated, such as the 'thalamo-cortical dysrhythmia' and 'cortical reorganisation' models. In the former, deafferentation as a result of noise exposure leads to a
slowing of spontaneous oscillatory activity between the auditory thalamus and cortex, as well as an
increase in spontaneous high-frequency activity within auditory cortex. The increased spontaneous
activity is thought to underlie the tinnitus percept. On the other hand, the latter model posits that
tinnitus arises as a result of inhomogeneous (e.g., sloping) hearing loss, which deprives some areas
of the hearing range of input, but leaves others intact. This is thought to create an imbalance of
inhibitory and excitatory inputs to the intact areas, leading to expansion of their cortical
representation, as well as increased spontaneous activity and tinnitus.
In this study, we measured spontaneous oscillatory activity from the auditory cortical surface of
12 anaesthetised guinea pigs. Six of the animals were unilaterally exposed to loud noise to induce
tinnitus and then allowed to recover for 8-10 weeks. The presence of tinnitus was tested with prepulse inhibition of the Preyer startle reflex, and ABR measurements were used to show that the exposed animals' hearing thresholds had recovered to within 20 dB of baseline. Six of the
animals were unexposed controls.
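The spectral analysis is not detailed in the abstract; a conventional approach, sketched below under assumed parameters (sampling rate, band edges), is to compute a Welch power spectral density per cortical-surface channel and average it within frequency bands before comparing groups and hemispheres.

```python
import numpy as np
from scipy.signal import welch

fs = 1000                                       # assumed sampling rate (Hz)
bands = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 12),
         "beta": (12, 30), "gamma": (30, 80)}   # conventional band edges

def band_powers(signal):
    # Mean PSD within each band for one channel's spontaneous recording.
    f, psd = welch(signal, fs=fs, nperseg=2 * fs)
    return {name: psd[(f >= lo) & (f < hi)].mean()
            for name, (lo, hi) in bands.items()}
```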
Consistent with the ‘thalamo-cortical dysrhythmia’ model, some differences in the frequency
composition of the spontaneous activity were observed between the exposed and control groups,
with a relative increase in mid-frequency activity in the exposed animals. More striking, however,
was a highly significant group-by-hemisphere interaction, which seems to accord with the ‘cortical
reorganisation’ model in that spontaneous activity in the contralateral hemisphere was reduced in
the exposed animals compared to the controls, whilst activity in the ipsilateral hemisphere was
increased. The contralateral hemisphere receives predominant input from the exposed ear, whereas
the ipsilateral hemisphere receives predominant input from the intact ear.
Future work aims to investigate changes in resting-state activity in humans with tinnitus and
hearing loss using electroencephalography.
Declaration of interest
The authors report no conflicts of interest. The authors alone are responsible for the content and
writing of the paper.
89. An investigation into the relationships between hearing loss, hearing handicap, tinnitus
handicap, and attention
R. Lakshman*, N. Mohamad§, D. Hoare§, D.A. Hall§, School of Medicine, University of
Nottingham*§, National Institute for Health Research (NIHR) Nottingham Hearing Biomedical
Research Unit§
One major complaint of tinnitus patients is the effect tinnitus has on their ability to concentrate
and perform attention-demanding tasks. However, whether and to what degree hearing loss or
handicap has a confounding effect on the relationship between tinnitus handicap and cognitive
function is unclear. To begin to interpret these relationships, this study sought first to explore the
correlation between pure-tone audiogram (PTA) average and self-reported hearing handicap.
Essentially we wished to identify whether hearing loss was a sufficient predictor of hearing
handicap to be included as a covariate when considering the relationship between tinnitus and
attention for example.
One hundred and twenty-two participants completed PTA, the Hearing Handicap Inventory (HHI), the Tinnitus Handicap Inventory (THI), and Subtest 7 of the Test of Everyday Attention (TEA), which provides a measure of sustained attention. PTA thresholds were measured at 0.5, 1, 2, and 4 kHz, with averages calculated across both ears and for each ear individually (to determine the better ear).
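As a concrete (hypothetical) rendering of the derived hearing measures, the sketch below computes the four-frequency pure-tone average per ear, the across-ear average and the better-ear average; the array layout and variable names are illustrative assumptions, not the authors' code.

```python
import numpy as np

def pta_measures(thresholds):
    # thresholds: array shaped (participants, 2 ears, 4 frequencies),
    # with frequencies 0.5, 1, 2 and 4 kHz in dB HL.
    per_ear = thresholds.mean(axis=2)   # PTA for each ear
    both = per_ear.mean(axis=1)         # average across both ears
    better = per_ear.min(axis=1)        # better ear = lower thresholds
    return both, better

# usage: r = np.corrcoef(pta_measures(thresholds)[0], hhi_scores)[0, 1]
```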
There was a strong correlation between HHI scores and the average PTA across both ears (r = 0.72), and for the better ear only (r = 0.70), suggesting that either the audiogram data or the self-report questionnaire would provide a reliable account of the effect of hearing in a model. In contrast, correlations between the THI and hearing measures were weak (r = 0.39 for HHI, 0.13 for PTA average), clearly demonstrating that the THI measures a construct different to that of the hearing measures. THI score correlated only weakly with TEA scores (r = -0.18), suggesting that tinnitus had little effect on sustained attention in this instance.
We have demonstrated the reliability of using either hearing loss or HHI score to represent the
construct of hearing. Further work will demonstrate whether tinnitus has effects on other forms of
attention or other cognitive function.
Acknowledgements
This is a summary of work supported by the University of Nottingham and the National Institute for
Health Research.
90. “100 years of amplified music”: a public science experiment on loud music exposure,
hearing loss, and tinnitus
M.A. Akeroyd*, W.M. Whitmer*, O. Zobay§, D.A. Hall‡, R. Mackinnon‡, H. Fortnum‡, and D.R.
Moore#. * MRC/CSO Institute of Hearing Research - Scottish Section, Glasgow, United Kingdom; §
MRC Institute of Hearing Research, University Park, Nottingham. ‡ NIHR Nottingham Hearing
Biomedical Research Unit & Otology and Hearing Group, School of Medicine, University of
Nottingham, Nottingham, United Kingdom; # Communication Sciences Research Center, Cincinnati
Childrens Hospital, Cincinnati, OH, United States.
It is well known that a lifetime of exposure to loud industrial noise can lead to substantial amounts
of hearing damage. The effect of a lifetime of loud music exposure is far less certain, however,
despite it being around 100 years since amplified music reproduction first became widely available.
We ran a public-science experiment to get data addressing this issue, using the 2013 Centenary of
the Medical Research Council to act as a further “pull” for the public to participate.
The experiment ran for 18 weeks on a dedicated website. It was designed to assess whether one's lifetime music-listening history was a predictor of speech reception thresholds for triple digits in noise, self-reported hearing ability, and self-reported tinnitus. The website
combined (a) a 12-item questionnaire on the amount of listening to loud music (e.g. “How often
would you say you went to gigs, concerts, and festivals in your twenties?”), (b) a set of standard
questions on various aspects of hearing (e.g., “Do you currently have any difficulty with your
hearing?” or “How often nowadays do you get noises such as ringing or buzzing in your head or
ears (tinnitus) that last for more than 5 minutes?”) and (c) a triple-digit speech-in-noise test
specifically crafted for high-frequency hearing losses (Vlaming et al., 2014).
In all, 5301 people participated. We found no effect of loud music exposure on speech reception
thresholds, but there were links between exposure, self-report hearing difficulty, and self-report
tinnitus: those people reporting high exposures were more likely to report moderate or greater
hearing difficulty (and tinnitus) than those with low exposures, and, vice versa, those people
reporting low exposures were more likely to report no difficulty than those with high exposures.
We noted that participation rates were strongly correlated with media coverage, clearly demonstrating its importance for public science experiments.
Acknowledgements
The experiment was supported by the Medical Research Foundation. The team are supported by the
Medical Research Council (grant number U135097131), the Chief Scientist Office of the Scottish
Government, and the National Institute for Health Research.
References
Vlaming M.C.M.G, MacKinnon R.C., Jansen M., & Moore D.R. 2014. Automated Screening for
High-Frequency Hearing Loss. Ear and Hearing, in press
91. The NIHR Clinical Research Network (CRN): what do we do for the hearing research
community?
D.A. Hall*, H. Blackshaw§, M. Simmonds§, A. Schilder§ *NIHR Nottingham Hearing Biomedical
Research Unit, §evidENT Team, Ear Institute, University College London.
The aim of this poster is to raise the profile of the NIHR Clinical Research Network (CRN),
particularly the work of the Ear, Nose and Throat Specialty Group, among the audiology profession
in the UK.
The NIHR CRN has two broad aims. First, it provides the infrastructure to support high-quality clinical research within the NHS, so that patients are given the opportunity to participate in research and ultimately benefit from new and better treatments. This is achieved by helping researchers to plan, set up and deliver clinical studies quickly and effectively; by providing health professionals with research training, dedicated systems for gaining approvals, and funding for experienced support staff and equipment; and by offering advice and practical help with opening sites and recruitment. The second aim of the CRN is to support the Life Sciences Industry to deliver
their research programmes. The CRN provides a committed industry team who can offer companies assistance with study design, site intelligence, site identification, study set-up and recruitment, including dedicated clinical time and industry managers.
The CRN structure comprises a lead national office and 15 'Local CRNs' (LCRNs) that
cover the length and breadth of England. Each LCRN delivers research across 30 clinical
specialties, including ENT. Each specialty has a national group of members representing each of the
LCRNs. Together these local leads are called the ‘ENT Specialty Group’. The group is chaired by
Prof Schilder and local lead members consist of research-active clinicians and practitioners across
Ear, Nose, Throat, Hearing and Balance fields. Our job is to ensure that the ENT clinical research studies open in the UK and supported by the CRN receive the correct specialty-specific advice, so that the studies are delivered successfully in the NHS. For example, our 'intelligence' can help industry identify the best sites at which to conduct research, where specific populations are located, and how to access the right equipment. We support a UK portfolio of clinical research into the
diagnosis, treatment, management and therapy of ENT disorders. Our remit includes the normal
development, function and diseases of the ear, nose and throat and related aspects of hearing,
balance, smell and speech.
The NIHR ENT Specialty Group strongly supports audiology research activities and welcomes
queries from interested researchers (www.crn.nihr.ac.uk/ent/).
Declaration of interest
The authors report no conflicts of interest. The authors alone are responsible for the content and
writing of the paper.
92. UK validation of the Tinnitus Functional Index (TFI) in a large research population
K. Fackrell*§, D. A. Hall*§, J. G. Barry‡ and D. J. Hoare*§, *NIHR Nottingham Hearing
Biomedical Research Unit. § University of Nottingham, ‡ MRC Institute of Hearing Research,
Nottingham, UK
Questionnaires are essential for measurement of tinnitus severity and treatment-related change but
there is no standard measure used across clinical and research settings. Current tinnitus questionnaires are limited to measuring either severity or change, but not both. The Tinnitus Functional Index (TFI; Meikle et al, 2012) was developed both as a diagnostic measure of the functional impact of tinnitus and as a sensitive measure of treatment-related change. In this
first study, we evaluate validity of the TFI as a diagnostic measure of tinnitus severity in a UK
research population.
283 participants completed a series of questionnaires: the TFI, the Tinnitus Handicap Inventory (THI), Tinnitus Handicap Questionnaire (THQ), tinnitus loudness VAS, tinnitus annoyance percentage scale, Beck's Depression Inventory (BDI), Beck's Anxiety Inventory (BAI), and the World Health Organisation Quality of Life Bref (WHOQOL-BREF). One hundred participants completed the TFI at a second visit.
Analyses included (1) Exploratory Factor Analysis (EFA) to identify interrelationships without
predefining a possible structure; (2) Confirmatory factor analysis (CFA) using the current eight
subscales proposed for the TFI development and the newly identified EFA structure; (3)
Discriminant and convergent validity; and (4) Test-retest reliability and agreement.
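The abstract does not state which ICC form was used for test-retest reliability; ICC(2,1) (two-way random effects, absolute agreement, single measurement) is a common choice for such designs, and the sketch below implements it from first principles as an illustration.

```python
import numpy as np

def icc_2_1(scores):
    # scores: matrix of shape (n subjects, k sessions), e.g. TFI at two visits.
    n, k = scores.shape
    grand = scores.mean()
    row_means = scores.mean(axis=1)     # per-subject means
    col_means = scores.mean(axis=0)     # per-session means
    ss_rows = k * ((row_means - grand) ** 2).sum()
    ss_cols = n * ((col_means - grand) ** 2).sum()
    ss_err = ((scores - grand) ** 2).sum() - ss_rows - ss_cols
    msr = ss_rows / (n - 1)             # between-subjects mean square
    msc = ss_cols / (k - 1)             # between-sessions mean square
    mse = ss_err / ((n - 1) * (k - 1))  # residual mean square
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)
```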
EFA indicated that two questions loaded onto factors other than the factor they are currently associated with (Meikle et al, 2012). For CFA, although the approximate fit indices were within an acceptable range (RMSEA = 0.6, CFI > 0.95), the chi-square statistic was still significant (p < 0.001), indicating that the current overall TFI structure is not optimal. Assessment of convergent and discriminant validity revealed high correlations of the TFI with the THI (r = 0.82) and THQ (r = 0.82), moderate correlations with the tinnitus loudness VAS (r = 0.46) and tinnitus annoyance scale (r = 0.58), and moderate correlations with the BDI (r = 0.57), BAI (r = 0.38) and WHOQOL (r = -0.48). Test-retest reliability was extremely high (ICC = 0.86), whilst test agreement was 93%. Overall, the TFI does appear to reliably measure tinnitus and multiple symptom domains. However, at this point we cannot confirm the proposed TFI structure. We are currently collecting large-scale data in a clinical
population to further this analysis and evaluate sensitivity of the TFI to treatment-related change.
Acknowledgements
This work is part of a NIHR funded PhD studentship.
References
Meikle, M. B. et al. 2012. The Tinnitus Functional Index: Development of a New Clinical Measure
for Chronic, Intrusive Tinnitus. Ear Hear, 33(2), 153-176.
93. Investing in the UK tinnitus support group network
D. Stockdale and C. Arthur, British Tinnitus Association
The aim of the project was to increase and improve the support available to people living with chronic tinnitus in the UK by strengthening the national network of tinnitus support groups. Specifically, the project aimed to reverse a trend of decline by increasing the number of groups from 34 to 46 within a two-year period, and to support these groups to run as effectively as possible.
Following initial scoping work investigating the strategies of similar organisations in this area,
the British Tinnitus Association (BTA) increased its organisational capacity and skills to better
support groups by recruiting a Groups Coordinator.
Research was undertaken to examine factors which might affect participation in support groups.
Different models of groups were also assessed. The BTA increased the promotion of the value of
organising a support group, and highlighted the increased support available from the BTA, which had been developed in line with identified needs. Events organised by the BTA included workshops for new group organisers, and a national skill-sharing and networking event for groups.
Following initial research, the BTA improved its understanding of the range and activities of
groups and their organisers:
• Identified 3 main types of group: purely voluntary (traditional 'self help' groups); groups run by healthcare professionals; and groups run by local charities.
• Identified that the two key drivers of groups folding were declining membership and difficulty replacing outgoing committee members.
• Identified a shared trend of decline across many types of local or community groups, and a general consensus amongst related networks that changing social patterns had a role: notably, that people were increasingly seeking out communities online and less often doing so in their locality, and that changing work patterns left less resource for voluntary projects.
• Identified that key barriers for potential organisers included low confidence, a perceived shortage of technical knowledge, and expectations about the time and paperwork involved.
• In contrast to the decline trend, initial work also identified a good number of potential group organisers.
Voluntary, or ‘self help’, and charity-led groups have seen the greatest increases, whilst NHS
healthcare professionals appear to face more hurdles getting a new group off the ground. Year one
of investment in group support led to a reverse in the decline trend. Group numbers were up 50% by
the end of September 2013 from 34 to 52, exceeding the 24 month target of 46 groups.
Developments in support offered to groups have been well received and now need further refining
and embedding in organisational processes. Success in growing and strengthening the groups
network demonstrates the value of national organisations investing in these types of activities.
Declaration of interest
The authors report no conflicts of interest. The authors alone are responsible for the content and
writing of the paper.
The End