2015 CONFERENCE ON IMPLANTABLE AUDITORY PROSTHESES

July 12-17, 2015
Granlibakken Conference Center
Lake Tahoe, CA
Scientific Program:
Conference Chair: Andrew Oxenham
Conference Co-chair: Johan Frijns

Conference Management:
Administrative Co-chair: Bob Shannon
Webmaster: Qian-Jie Fu
Sound and Vision: John J Galvin III
2015 CIAP Steering Committee
Monita Chatterjee
Andrej Kral
Stefan Launer
Ruth Litovsky
John Middlebrooks
Jane Opie
Edward Overstreet
Zach Smith
Astrid van Wieringen
CIAP 2015 SPONSORS
Table of Contents

Organizers
Sponsors
Site Map
Program Overview
Podium Schedule
Poster Schedule
Speaker Abstracts
Poster Abstracts
   M: Monday
   T: Tuesday
   W: Wednesday
   R: Thursday
Attendee List
PROGRAM OVERVIEW
Sunday July 12
3:00PM – 10:00PM   Registration
7:00PM – Midnight  Welcome Reception
Monday July 13
7:00AM             Breakfast opens
8:30AM – 12:00PM   Modeling the electrode-neural interface
12:00PM – 1:00PM   Lunch
2:00PM – 4:00PM    Young Investigator Mentoring Session
4:00PM – 6:00PM    Poster Viewing, free time
6:00PM – 7:00PM    Dinner
7:00PM – 9:00PM    Cortical processing of electrical stimulation
9:00PM – midnight  Poster Viewing and Social
Tuesday July 14
7:00AM             Breakfast opens
8:30AM – 12:00PM   Cognitive, linguistic, and socio-emotional factors in cochlear-implant use throughout the lifespan
12:00PM – 1:00PM   Lunch
4:00PM – 6:00PM    Poster Viewing, free time
6:00PM – 7:00PM    Dinner
7:00PM – 9:00PM    Bilateral Hearing
9:00PM – midnight  Poster Viewing and Social
Wednesday July 15
7:00AM             Breakfast opens
8:30AM – 12:10PM   Functional and structural imaging applications with cochlear implants
12:10PM – 1:00PM   Lunch
1:30PM – 4:00PM    FDA and Company Workshop: New Technology
4:00PM – 6:00PM    Poster Viewing, free time
6:00PM – 7:00PM    Dinner
7:00PM – midnight  Dance Party and Poster Viewing
Thursday July 16
7:00AM             Breakfast opens
8:30AM – 12:00PM   Single-sided deafness/EAS and Bimodal Hearing
12:00PM – 1:00PM   Lunch
4:00PM – 6:00PM    Poster Viewing, free time
6:00PM – 7:00PM    Dinner
7:00PM – 9:00PM    Experimental and computational approaches to improving speech understanding with implants
9:00PM – midnight  Poster Viewing and Social
Friday July 17
7:00AM             Breakfast opens
8:30AM – 12:00PM   Beyond the cochlear implant - biological approaches and emergent technologies
12:00PM – 1:00PM   Lunch and Conference End
CIAP 2015 Speaker Program
Monday July 13

8:30-10:00am Modeling the electrode-neural interface: Moderator Monita Chatterjee
8:30 Michael Heinz: Quantifying envelope coding metrics from auditory-nerve spike trains:
implications for predicting speech intelligibility with hearing impairment
9:00 Julie Bierer: Modeling the electrode-neural interface to select cochlear implant channels
for programming
9:30 Tania Hanekom: Design and application of user-specific models of cochlear implants
10-10:30am BREAK
10:30-noon Modeling the electrode-neural interface: Moderator Matt Goupell
10:30 Stoph Long: Investigating the Electro-Neural Interface: The Effects of Electrode
Position and Age on Neural Responses
11:00 Bob Carlyon: Spatial selectivity: How to measure and (maybe) improve it
11:30 Andrew Oxenham: Effects of spectral resolution on temporal-envelope processing of
speech in noise
12-1pm LUNCH
2-4pm Mentoring session - Ruth Litovsky
4-6pm Poster viewing
6-7pm DINNER
7:00-9:00pm Cortical processing of electrical stimulation: Moderator Andrej Kral
7:00 Xiaoqin Wang: What is missing in auditory cortex under cochlear implant stimulation?
7:30 Jan Wouters: Human cortical responses to CI stimulation
7:55 BREAK
8:10 James Fallon: Effects of deafness and cochlear implant use on cortical processing
8:35 Diane Lazard: fMRI studies of cortical reorganization in postlingual deafness:
Modification of left hemispheric dominance for speech.
9:00-midnight Poster and social session
Tuesday July 14

8:30-10:00am Cognitive, linguistic, and socio-emotional factors in cochlear-implant use throughout the lifespan: Moderator Ruth Litovsky
8:30 Michael Goldstein: Social influences on speech and language development in infancy
9:00 William Kronenberger: Executive functioning in prelingually deaf children with cochlear
implants
9:30 Carolien Rieffe: The importance of a CI for children's social and emotional intelligence
10-10:30am BREAK
10:30-noon Cognitive, linguistic, and socio-emotional factors in cochlear-implant use
throughout the lifespan: Moderator Jane Opie
10:30 Astrid van Wieringen: Spoken language, perception and memory in children with CIs
and children with SSD
10:50 Monita Chatterjee: Voice Emotion Recognition and Production By Listeners With
Cochlear Implants
11:15 Deniz Başkent: Context effects, time course of speech perception, and listening effort
in cochlear-implant simulations
11:40 Yael Henkin: Insights into Auditory-Cognitive Processing in Older Adults with Cochlear
Implants
12:10-1pm LUNCH
4-6pm Poster viewing
6-7pm DINNER
7:00-9:00pm Bilateral Hearing: Moderator Zach Smith
7:00 Ruth Litovsky: The impact of cochlear implantation on spatial hearing and listening effort
7:30 Matt Goupell: Better-ear glimpsing inefficiency in bilateral cochlear-implant listeners
7:50 BREAK
8:10 Hongmei Hu and David McAlpine: Differences in temporal weighting of interaural time
differences between acoustic and electric hearing
8:30 Yoojin Chung: Sensitivity to Interaural Time Differences in the Inferior Colliculus of an
Awake Rabbit Model of Bilateral Cochlear Implants
8:45 Sridhar Srinivasan: Effects of Introducing Short Inter-Pulse Intervals on Behavioral ITD
Sensitivity with Bilateral Cochlear Implants
9:00-midnight Poster and social session
Wednesday July 15

8:30-10:00am Functional and structural imaging applications with cochlear implants: Moderator Julie Bierer
8:30 Andrej Kral: Towards individualized cochlear implants: variations of the cochlear
microanatomy
9:00 Annerie vd Jagt: Visualization of Human Inner Ear Anatomy with High Resolution 7
Tesla Magnetic Resonance Imaging; First Clinical Application
9:20 Jack Noble: Comparison of cochlear implant outcomes with clinical, random, and image-guided selection of the active electrode set
9:40 David Corina: Cross Modal Plasticity in Deaf Children with Cochlear Implants
10:00-10:30am BREAK
10:30am-12:10pm Functional and structural imaging applications with cochlear implants:
Moderator Stefan Launer
10:30 Shuman He: Acoustically-Evoked Auditory Change Complex in Children with Auditory
Neuropathy Spectrum Disorder: A Potential Objective Tool for Identifying Cochlear
Implant Candidates
11:00 Doug Hartley: Measuring cortical reorganisation in cochlear implant users with functional near-infrared spectroscopy: a predictor of variable speech outcomes?
11:30 Colette McKay: Brain Plasticity Due to Deafness as Revealed by FNIRS in Cochlear
Implant Users
12:10-1pm LUNCH
1:30-4:00pm Company Presentations and FDA updates
4-6pm Posters
6-7pm DINNER
7-8pm Posters
Wednesday 7:00pm to Midnight: Dance Night
Thursday July 16

8:30-10:00am Single-sided deafness: Moderator Astrid van Wieringen
8:30 Larry Roberts: Ringing Ears: The Neuroscience of Tinnitus
9:00 Thomas Wesarg: Comparison of cochlear implant with other treatment options for single-sided deafness
9:30 Josh Bernstein: Binaural Unmasking for Cochlear Implantees with Single-Sided Deafness
10-10:30am BREAK
10:30-noon EAS and Bimodal Hearing: Moderator Ed Overstreet
10:30 Lina Reiss: Binaural Pitch Integration with Cochlear Implants
11:00 Mario Svirsky: Speech perception and bimodal benefit in quiet for CI users with contralateral residual hearing
11:20 Paul Abbas: Using Neural Response Telemetry (NRT) to monitor responses to acoustic
stimulation in Hybrid CI users
11:40 Clifford Hume: Direct Intracochlear Acoustic Stimulation using a PZT Microactuator
12-1pm LUNCH
4-6pm Poster viewing
6-7pm DINNER
7-9pm Experimental and computational approaches to improving speech understanding
with implants: Moderator Andrew Oxenham
7:00 DeLiang Wang: From Auditory Masking to Supervised Separation: A Tale of Improving
Intelligibility of Noisy Speech for Hearing-Impaired Listeners
7:30 Olivier Macherey: Polarity effects in cochlear implant stimulation: Insights from human
and animal studies
7:55 BREAK
8:10 Johan Frijns: Modeled Neural Response Patterns from Various Speech Coding
Strategies
8:40 Enrique Lopez-Poveda: Mimicking the unmasking benefits of the contralateral medial
olivocochlear reflex with cochlear implants
9:00-midnight Poster and social session
Friday July 17

8:30-10:00am Beyond the cochlear implant - biological approaches: Moderator John Middlebrooks
8:30 Sharon Kujawa: Aging vs Aging after Noise: Exaggeration of Cochlear Synaptic and
Neural Loss in Noise-Exposed Ears
9:00 Jeff Holt: TMC Gene Therapy Restores Auditory Function in Deaf Mice
9:30 Rob Shepherd: Combining Drug Delivery with Cochlear Prostheses: Developing and
Evaluating New Approaches
10-10:30am BREAK
10:30am-Noon Beyond the cochlear implant - emerging technology: Moderator Johan Frijns
10:30 Tobias Moser: Optogenetic stimulation of the auditory pathway for research and future
prosthetics
11:00 Ed Hight: Optogenetic Technology Provides Spatiotemporal Resolution Sufficient for an
Optically-based Auditory Neuroprosthesis
11:15 Hubert Lim, Thomas Lenarz: The Auditory Midbrain Implant: Research and
Development Towards a Second Clinical Trial
11:35 Bob Shannon: Auditory Brainstem Implants in Children: Implications for Neuroscience
12-1pm LUNCH
1pm End of Conference
POSTER SESSION M:
MONDAY – 8AM TO MIDNIGHT
M1: DEVELOPMENTAL PROTECTION OF AURAL PREFERENCE IN CHILDREN WITH ASYMMETRIC
HEARING LOSS THROUGH BIMODAL HEARING. Melissa J Polonenko, Blake C Papsin, Karen
A Gordon
M2: DEVELOPMENT OF CORTICAL SPECIALIZATION TO PURE TONE LISTENING IN CHILDREN
AND ADOLESCENTS WITH NORMAL HEARING. Hiroshi Yamazaki, Salima Jiwani, Daniel D.E.
Wong, Melissa J. Polonenko, Blake C. Papsin, Karen A. Gordon
M3: THE DEVELOPMENT OF INTERNALIZING AND EXTERNALIZING SYMPTOMS IN EARLY
IDENTIFIED TODDLERS WITH COCHLEAR IMPLANTS COMPARED TO HEARING
CONTROLS. Anouk Paulien Netten, Carolien Rieffe, Lizet Ketelaar, Wim Soede, Evelien Dirks,
Jeroen Briaire, Johan Frijns
M4: HOW ADOLESCENTS WITH COCHLEAR IMPLANTS PERCEIVE LEARNING A SECOND
LANGUAGE. Dorit Enja Jung, Anastasios Sarampalis, Deniz Başkent
M5: NONWORD REPETITION BY CHILDREN WITH COCHLEAR IMPLANTS: EFFECTS OF EARLY
ACOUSTIC HEARING. Caroline N Lartz, Lisa S Davidson, Rosalie M Uchanski
M6: CONSONANT RECOGNITION IN TEENAGE CI USERS. Eun Kyung Jeon, Marcin Wroblewski,
Christopher W. Turner
M7: BINAURAL PITCH FUSION IN CHILDREN WITH NORMAL-HEARING, HEARING AIDS, AND
COCHLEAR IMPLANTS. Curtis L. Hartling, Jennifer R. Fowler, Gemaine N. Stark, Anna-Marie E.
Wood, Ashley Sobchuk, Yonghee Oh, Lina A.J. Reiss
M8: SPATIAL ATTENTION IN CHILDREN WITH BILATERAL COCHLEAR IMPLANTS AND IN
NORMAL HEARING CHILDREN. Sara Misurelli, Alan Kan, Rachael Jocewicz, Shelly Godar,
Matthew J Goupell, Ruth Litovsky
M9: AUDIOVISUAL INTEGRATION IN CHILDREN WITH COCHLEAR IMPLANTS. Iliza M. Butera, Ryan
A. Stevenson, Rene H. Gifford, Mark T. Wallace
M10: AAV-MEDIATED NEUROTROPHIN EXPRESSION IN THE DEAFENED COCHLEA. Patricia A.
Leake, Stephen J Rebscher, Chantale Dore, Lawrence R. Lustig, Omar Akil
M11: OPTIMAL VOLUME SETTINGS OF COCHLEAR IMPLANTS AND HEARING AIDS IN BIMODAL
USERS. Dimitar Spirrov, Bas van Dijk, Jan Wouters, Tom Francart
M12: DECISION MAKING IN THE TREATMENT OF SINGLE-SIDED DEAFNESS AND ASYMMETRIC
HEARING LOSS. Nicolas Vannson, Mathieu Marx, Christopher James, Olivier Deguine, Bernard
Fraysse
M13: COCHLEAR RESPONSE TELEMETRY: REAL-TIME MONITORING OF INTRAOPERATIVE
ELECTROCOCHLEOGRAPHY. Luke Campbell, Arielle Kacier, Robert Briggs, Stephen O'Leary
M14: WHAT IS THE EFFECT OF RESIDUAL HEARING ON TOP-DOWN REPAIR OF INTERRUPTED
SPEECH IN COCHLEAR-IMPLANT USERS? Jeanne Nora Clarke, Etienne Gaudrain, Deniz
Baskent
M15: RESIDUAL HEARING PRESERVATION, SIMULATED AND OBJECTIVE BENEFITS FROM
ELECTROACOUSTIC STIMULATION WITH THE EVO®-ZEBRA® SYSTEM. Emilie Daanouni,
Manuel Segovia-Martinez, Attila Frater, Dan Gnansia, Michel Hoen
M16: ELECTRO-ACOUSTIC COCHLEAR IMPLANT SIMULATION MODEL. Attila Frater, Patrick Maas,
Jaime Undurraga, Soren Kamaric Riis
M17: PATTERNS OF ELECTROPHONIC AND ELECTRONEURAL EXCITATION. Mika Sato, Peter
Baumhoff, Andrej Kral
M18: COMPARISONS BETWEEN ELECTRICAL STIMULATION OF A COCHLEAR-IMPLANT
ELECTRODE AND ACOUSTIC SOUNDS PRESENTED TO A NORMAL-HEARING EAR IN
UNILATERALLY DEAFENED SUBJECTS. John Deeks, Olivier Macherey, Johan Frijns, Patrick
Axon, Randy Kalkman, Patrick Boyle, David Baguley, Jeroen Briaire, Robert Carlyon
M19: SINGLE-SIDED DEAFNESS COCHLEAR-IMPLANT PERCEPTION AND SIMULATION:
LOCALIZATION AND SPATIAL-MASKING RELEASE. Coral Dirks, Peggy Nelson, Andrew
Oxenham
M20: PITCH MATCHING PSYCHOMETRICS IN SINGLE-SIDED DEAFNESS WITH PLACE
DEPENDENT STIMULATION RATE. Tobias Rader, Julia Doege, Youssef Adel, Tobias
Weissgerber, Uwe Baumann
M21: MUSIC ENJOYMENT IN SSD PATIENTS: THE SYNERGISTIC EFFECT OF ELECTRIC AND
ACOUSTIC STIMULATION. David M Landsberger, Katrien Vermeire, Natalia Stupak, Annette M.
Zeman, Paul Van de Heyning, Mario A. Svirsky
M22: RECORDING LOW-FREQUENCY ACOUSTICALLY EVOKED POTENTIALS USING COCHLEAR
IMPLANTS. Youssef Adel, Tobias Rader, Andreas Bahmer, Uwe Baumann
M23: SINGLE-SIDED DEAFNESS WITH INCAPACITATING TINNITUS USING COCHLEAR
IMPLANTATION: PRELIMINARY RESULTS. Dan Gnansia, Christine Poncet-Wallet, Christophe
Vincent, Isabelle Mosnier, Benoit Godey, Emmanuel Lescanne, Eric Truy, Nicolas Guevara,
Bruno Frachet
M24: COCHLEAR IMPLANT OUTCOMES IN ADULTS WITH LARGE HEARING ASYMMETRY. Jill B.
Firszt, Ruth M. Reeder, Laura K. Holden, Noel Dwyer, Timothy E. Hullar
M25: THE EFFECT OF INTERAURAL MISMATCHES ON BINAURAL UNMASKING IN VOCODER
SIMULATIONS OF COCHLEAR IMPLANTS FOR SINGLE-SIDED DEAFNESS. Jessica M.
Wess, Douglas S. Brungart, Joshua G.W. Bernstein
M26: THE INFLUENCE OF RESIDUAL ACOUSTIC HEARING ON AUDITORY STREAM
SEGREGATION IN A COMPETING-SPEECH TASK. Ashley Zaleski-King, Allison Heuber,
Joshua G.W. Bernstein
M27: ELECTRO-ACOUSTIC INTERACTIONS IN COCHLEAR IMPLANT RECIPIENTS DERIVED
USING PHYSIOLOGICAL AND PSYCHOMETRIC TECHNIQUES. Kanthaiah Koka, Leonid M
Litvak
M28: A MODULAR SIGNAL PROCESSING PROGRAM TO SIMULATE AND TEST THE SOUND
QUALITY OF A COCHLEAR IMPLANT. Austin M Butts, Sarah J Cook, Visar Berisha, Michael F
Dorman
M29: THE DEVELOPMENT OF MUSICALITY IN CHILDREN WITH COCHLEAR IMPLANTS. Meng
Wang, Xueqing Chen, Tianqiu Xu, Yan Zhong, Qianqian Guo, Jinye Luo
M30: THE DEVELOPMENT OF A 'MUSIC-RELATED QUALITY OF LIFE' QUESTIONNAIRE FOR
COCHLEAR IMPLANT USERS. Giorgos Dritsakis, Rachel Marijke van Besouw, Carl A Verschuur
M31: TAKE-HOME EVALUATION OF MUSIC PRE-PROCESSING SCHEME WITH COCHLEAR
IMPLANT USERS. Wim Buyens, Bas van Dijk, Marc Moonen, Jan Wouters
M32: OPTIMIZING COMPRESSION STIMULATION STRATEGIES IN COCHLEAR IMPLANTS FOR
MUSIC PERCEPTION. Petra Maretic, Attila Frater, Soren Kamaric Riis, Alfred Widell, Manuel
Segovia-Martinez
M33: ENVIRONMENTAL SOUND COGNITION WITH COCHLEAR IMPLANTS: FROM SINGLE
SOUNDS TO AUDITORY SCENES. Valeriy Shafiro, Stanley Sheft, Molly Norris, Katherine
Radasevich, Brian Gygi
M34: THE PERCEPTION OF STEREO MUSIC BY COCHLEAR IMPLANT USERS. Stefan Fredelake,
Patrick J. Boyle, Benjamin Krueger, Andreas Buechner, Phillipp Hehrmann, Volkmar Hamacher
M35: SUNG SPEECH: A DATABASE TO EXAMINE SPEECH AND MUSIC PERCEPTION IN
COCHLEAR IMPLANTS. Joseph D Crew, John J Galvin III, Qian-Jie Fu
M36: APPLYING DYNAMIC PEAK ALGORITHM IN NUROTRON’S ADVANCED PEAK SELECTION
SOUND PROCESSING STRATEGY. Lichuan Ping, Guofang Tang, Qianjie Fu
M37: A REAL-TIME IMPLEMENTATION OF THE EXCITABILITY CONTROLLED CODING STRATEGY.
Wai Kong Lai, Matthijs Killian, Norbert Dillier
M38: AUTOMATIC POST SPECTRAL COMPRESSION FOR OTICON MEDICAL’S SOUND
PROCESSOR. Manuel Segovia Martinez, Emilie Daanouni, Dan Gnansia, Michel Hoen
M39: A PHASE-LOCK LOOP APPROACH TO EXTRACT TEMPORAL-FINE STRUCTURE
INFORMATION FOR COCHLEAR IMPLANT PATIENTS. Marianna Vatti, Attila Frater, Teng
Huang, Manuel Segovia-Martinez, Niels Henrik Pontoppidan
M40: EVALUATION OF CLASSIFIER-DRIVEN AND BINAURAL BEAMFORMER ALGORITHMS.
Gunnar Geissler, Silke Klawitter, Wiebke Heeren, Andreas Buechner
M41: withdrawn
M42: SPEECH ENHANCEMENT BASED ON GLIMPSE DETECTION TO IMPROVE THE SPEECH
INTELLIGIBILITY FOR COCHLEAR IMPLANT RECIPIENTS. Dongmei Wang, John H. L. Hansen,
Emily Tobey
M43: DEVELOPMENT OF A REAL-TIME HARMONIC ENCODING STRATEGY FOR COCHLEAR
IMPLANTS. Tyler Ganter, Kaibao Nie, Xingbo Peng, Jay Rubinstein, Les Atlas
M44: FINDING THE CONSISTENT CONTRIBUTOR LEADING TO BIMODAL BENEFIT. Christopher B
Hall, Yang-Soo Yoon
M45: SPECTRAL CONTRAST ENHANCEMENT IMPROVES SPEECH INTELLIGIBILITY IN NOISE IN
NOFM STRATEGIES FOR COCHLEAR IMPLANTS. Thilo Rode, Andreas Buechner, Waldo
Nogueira
M46: ENSEMBLE CODING PROVIDES PITCH PERCEPTION THROUGH RELATIVE STIMULUS
TIMING. Stefan J Mauger
M47: PSYCHOACOUSTIC OPTIMIZATION OF PULSE SPREADING HARMONIC COMPLEXES FOR
VOCODER SIMULATIONS OF COCHLEAR IMPLANTS. Olivier Macherey, Gaston Hilkhuysen,
Quentin Mesnildrey, Remi Marchand
M48: FIELD-BASED PREFERENCE OF MULTI-MICROPHONE NOISE REDUCTION FOR COCHLEAR
IMPLANTS. Adam Hersbach, Ruth English, David Grayden, James Fallon, Hugh McDermott
M49: A NOVEL DEMODULATION TECHNIQUE IN REVERSE TELEMETRY FOR COCHLEAR DEVICE.
Sui Huang, Bin Xia, Song Wang, Xiaoan Sun
M50: OWN VOICE CLASSIFICATION FOR LINGUISTIC DATA LOGGING IN COCHLEAR IMPLANTS.
Obaid ur Rehman Qazi, Filiep Vanpoucke
M51: ULTRAFAST OPTOGENETIC COCHLEA STIMULATION AND DEVELOPMENT OF MICROLED
IMPLANTS FOR RESEARCH AND CLINICAL APPLICATION. Daniel Keppeler, Christian Wrobel,
Marcus Jeschke, Victor H Hernandez, Anna Gehrt, Gerhard Hoch, Christian Gossler, Ulrich T
Schwarz, Patrik Ruther, Michael Schwaerzle, Roland Hessler, Tim Salditt, Nicola Strenzke,
Sebastian Kuegler, Tobias Moser
M52: OPTIMAL WIRELESS POWER TRANSMISSION FOR CLOSED-LOOP COCHLEAR IMPLANT
SYSTEM. Xiaoan Sun, Sui Huang
M53: A POLYMER-BASED INTRACOCHLEAR ELECTRODE FOR ATRAUMATIC INSERTION. Tae
Mok Gwon, Seung Ha Oh, Min-Hyun Park, Ho Sun Lee, Chaebin Kim, Gwang Jin Choi, Sung
June Kim
M54: DEVELOPMENT OF A FREE-MOVING CHRONIC ELECTRICAL STIMULATION SYSTEM FOR
THE GUINEA PIG. Gemaine N Stark, Michael E Reiss, Anh Nguyen-Huynh, Lina A. J. Reiss
M55: INTRA-COCHLEAR ELECTRO-STIMULATION EXPERIMENTS FOR COMPLEX WAVEFORMS
USING OTICON MEDICAL ANIMAL STIMULATION PLATFORM IN-VIVO. Matthieu Recugnat,
Jonathan Laudanski, Lucy Anderson, David Greenberg, Torsten Marquardt, David McAlpine
M56: OPTIMISATION OF NUCLEUS® AUTONRT® ALGORITHM. Saji Maruthurkkara, Ryan Melman
M57: ANDROID-BASED RESEARCH PLATFORM FOR COCHLEAR IMPLANTS. Feng Hong, Hussnain
Ali, John H.L. Hansen, Emily A. Tobey
M58: Poster withdrawn
POSTER SESSION T:
TUESDAY – 8AM TO MIDNIGHT
T1: HIGH FREQUENCY ULTRASOUND IMAGING: A NEW TOOL FOR COCHLEAR IMPLANT
RESEARCH, TRAINING, AND INTRAOPERATIVE ASSISTANCE: Thomas G Landry,
Manohar Bance, Jeremy A Brown
T2: ELECTRICALLY-EVOKED AUDITORY STEADY-STATE RESPONSES AS AN OBJECTIVE
MEASURE OF LOUDNESS GROWTH: Maaike Van Eeckhoutte, Hanne Deprez, Robin
Gransier, Michael Hofmann, Jan Wouters, Tom Francart
T3: CORTICAL EVOKED POTENTIALS OF SPEECH IN COCHLEAR IMPLANT LISTENERS:
Emma Brint, Paul Iverson
T4: CORTICAL VOICE PROCESSING IN COCHLEAR-IMPLANTED CHILDREN: AN
ELECTROPHYSIOLOGICAL STUDY: David Bakhos, Emmanuel Lescanne, Sylvie Roux,
Frederique Bonnet-Brilhault, Nicole Bruneau
T5: DEFINITION OF SURGICAL LANDMARKS AND INSERTION VECTORS AS ORIENTATION
GUIDES FOR COCHLEAR IMPLANTATION BY MEANS OF THREE- AND TWO-DIMENSIONAL
COMPUTED TOMOGRAPHY RECONSTRUCTIONS OF TEMPORAL BONES: Hayo A. Breinbauer, Mark Praetorius
T6: PREDICTING COCHLEAR IMPLANT PERFORMANCES FROM A NOVEL EABR-BASED
ESTIMATION OF ELECTRICAL FIELD INTERACTIONS: Nicolas Guevara, Eric Truy,
Stephane Gallego, Dan Gnansia, Michel Hoen
T7: ASSESSING TEMPORAL MODULATION SENSITIVITY USING ELECTRICALLY EVOKED
AUDITORY STEADY STATE RESPONSES: Robert Luke, Lot Van Deun, Michael Hofmann,
Astrid van Wieringen, Jan Wouters
T8: SPEECH PERCEPTION AND ELECTRICALLY EVOKED AUDITORY STEADY STATE
RESPONSES: Robert Luke, Robin Gransier, Astrid van Wieringen, Jan Wouters
T9: withdrawn
T10: RESTING STATE CONNECTIVITY IN LANGUAGE AREAS: AN FNIRS STUDY OF
NORMALLY-HEARING LISTENERS AND COCHLEAR IMPLANT USERS: Adnan Shah,
Abd-Krim Seghouane, Colette M. McKay
T11: COCHLEAR IMPLANT ARTIFACT REMOVAL METHODS TO MEASURE ELECTRICALLY
EVOKED AUDITORY STEADY-STATE RESPONSES: Hanne Deprez, Robin Gransier,
Michael Hofmann, Tom Francart, Astrid van Wieringen, Marc Moonen, Jan Wouters
T12: ON THE RELATIONSHIP OF SPEECH INTELLIGIBILITY AND VERBAL INTELLIGENCE IN
COCHLEAR IMPLANT USERS - INSIGHTS FROM OBJECTIVE MEASURES: Mareike
Finke, Andreas Buechner, Esther Ruigendijk, Martin Meyer, Pascale Sandmann
T13: THE MODULATION FREQUENCY TRANSFER FUNCTION OF ELECTRICALLY EVOKED
AUDITORY STEADY-STATE RESPONSES: Robin Gransier, Hanne Deprez, Michael
Hofmann, Tom Francart, Marc Moonen, Astrid van Wieringen, Jan Wouters
T14: withdrawn
T15: ACOUSTIC CUE WEIGHTING BY ADULTS WITH COCHLEAR IMPLANTS: A MISMATCH
NEGATIVITY STUDY: Aaron C Moberly, Jyoti Bhat, Antoine J Shahin
T16: P300 IN BIMODAL CI-USERS: Lindsey Van Yper, Andy Beynon, Katrien Vermeire, Eddy De
Vel, Ingeborg Dhooge
T17: ACOUSTIC CHANGE COMPLEX RECORDED IN HYBRID COCHLEAR IMPLANT USERS:
Eun Kyung Jeon, Brittany E James, Bruna Mussoi, Carolyn J. Brown, Paul J. Abbas
T18: COCHLEAR IMPLANT ELECTRODE VARIABLES PREDICT CLINICAL OUTCOME
MEASURES: Timothy J Davis, Rene H Gifford, Benoit M Dawant, Robert F Labadie, Jack H
Noble
T19: MYOGENIC RESPONSES FROM THE VESTIBULAR SYSTEM CAN BE EVOKED USING
ELECTRICAL STIMULATION FROM A COCHLEAR IMPLANT: Joshua J. Gnanasegaram,
William J. Parkes, Sharon L. Cushing, Carmen L. McKnight, Blake C. Papsin, Karen A.
Gordon
T20: INTRACOCHLEAR ACOUSTIC RECEIVER FOR TOTALLY IMPLANTABLE COCHLEAR
IMPLANTS: CONCEPT AND PRELIMINARY TEMPORAL BONE RESULTS: Flurin Pfiffner,
Lukas Prochazka, Dominik Peus, Konrad Thoele, Francesca Paris, Joris Walraevens, Rahel
Gerig, Jae Hoon Sim, Ivo Dobrev, Dominik Obrist, Christof Roosli, Alexander Huber
T21: ENHANCEMENT OF PITCH-RELATED PHASE-LOCKING RESPONSES OF THE AUDITORY
MIDBRAIN IN COCHLEAR IMPLANTS: Tianhao Li, Fengyi Guo
T22: UNRAVELING THE OBJECTIVES OF COMPUTATIONS IN THE PERIPHERAL AUDITORY
PATHWAY: Bonny Banerjee, Shamima Najnin, Jayanta Kumar Dutta
T23: INFERRING HEARING LOSS CHARACTERISTICS FROM STATISTICALLY LEARNED
SPEECH FEATURES: Shamima Najnin, Bonny Banerjee, Lisa Lucks Mendel
T24: THE RELATIONSHIP BETWEEN INSERTION ANGLES, DEFAULT FREQUENCY
ALLOCATIONS, AND SPIRAL GANGLION PLACE PITCH WITH COCHLEAR IMPLANTS:
David M. Landsberger, Maja Svrakic, J Thomas Roland Jr, Mario A Svirsky
T25: PERIPHERAL CONTRIBUTIONS TO LOUDNESS: SPREAD OF EXCITATION: Rachel Anna
Scheperle, Michelle Lynne Hughes
T26: CLINICALLY-USEFUL MEASURES FOR PROCESSOR-FITTING STRATEGIES: Kara C
Schvartz-Leyzac, Deborah J Colesa, Ning Zhou, Stefan B Strahl, Yehoash Raphael, Bryan E
Pfingst
T27: A MODELING FRAMEWORK FOR OPTICAL STIMULATION IN THE INNER EAR: Robin
Sebastian Weiss, Michael Schutte, Werner Hemmert
T28: FAILURE OF INFRARED STIMULATION TO EVOKE NEURAL ACTIVITY IN THE DEAF
GUINEA PIG COCHLEA: Alexander C Thompson, James B Fallon, Andrew K Wise, Scott A
Wade, Robert K Shepherd, Paul R Stoddart
T29: A MULTI-SCALE MODEL OF COCHLEAR IMPLANT STIMULATION: Phillip Tran, Paul Wong,
Andrian Sue, Qing Li, Paul Carter
T30: INVESTIGATIONS OF IRREVERSIBLE CHARGE TRANSFER FROM COCHLEAR IMPLANT
ELECTRODES: IN-VITRO AND IN-SILICO APPROACHES: Andrian Sue, Phillip Tran, Paul
Wong, Qing Li, Paul Carter
T31: A TRANSMODIOLAR COCHLEAR IMPLANT ELECTRODE: HIGH DENSITY ELECTRODE
DESIGN AND MANUFACTURING PROCESS: Guillaume Tourrel, Dang Kai, Dan Gnansia,
Nicolas Veau, Alexis Borzorg-Grayeli
T32: INSERTION FORCE TEST BENCH AND MODEL OF COCHLEA: Guillaume Tourrel, Andrea
Lovera
T33: FORWARD MASKING IN COCHLEAR IMPLANT USERS: ELECTROPHYSIOLOGICAL AND
PSYCHOPHYSICAL DATA USING SINGLE-PULSE AND PULSE-TRAIN MASKERS:
Youssef Adel, Gaston Hilkhuysen, Arnaud Norena, Yves Cazals, Stephane Roman, Olivier
Macherey
T34: A NEW ANALYSIS METHOD FOR SPREAD OF EXCITATION CURVES BASED ON
DECONVOLUTION: Jan Dirk Biesheuvel, Jeroen J. Briaire, Johan de Vos, Johan H.M. Frijns
T35: BENEATH THE TIP OF THE ICEBERG IN AUDITORY NERVE FIBERS: SUBTHRESHOLD
DYNAMICS FOR COCHLEAR IMPLANTS: Jason Boulet, Sonia Tabibi, Norbert Dillier, Mark
White, Ian C. Bruce
T36: A MODEL OF AUDITORY NERVE RESPONSES TO ELECTRICAL STIMULATION: Suyash N
Joshi, Torsten Dau, Bastian Epp
T37: MULTICHANNEL OPTRODE FOR COCHLEAR STIMULATION: Xia Nan, Xiaodong Tan,
Hunter Young, Matthew Dummer, Mary Hibbs-Brenner, Claus-Peter Richter
T38: CURRENT SPREAD IN THE COCHLEA: INSIGHTS FROM CT AND ELECTRICAL FIELD
IMAGING: Steven M Bierer, Eric Shea-Brown, Julie A Bierer
T39: EMPLOYING AUTOMATIC SPEECH RECOGNITION TOWARDS IMPROVING SPEECH
INTELLIGIBILITY FOR COCHLEAR IMPLANT USERS: Oldooz Hazrati, Shabnam
Ghaffarzadegan, John Hansen
T40: IMPORTANCE OF TONAL ENVELOPE IN CHINESE AUTOMATIC SPEECH RECOGNITION:
Payton Lin, Fei Chen, Syu-Siang Wang, Yu Tsao
T41: EVALUATION OF A NEW LOW-POWER SOUND PROCESSING STRATEGY: Andreas
Buechner, Leonid Litvak, Martina Brendel, Silke Klawitter, Volkmar Hamacher, Thomas
Lenarz
T42: NEURAL NETWORK BASED SPEECH ENHANCEMENT APPLIED TO COCHLEAR IMPLANT
CODING STRATEGIES Tobias Goehring, Federico Bolner, Jessica J.M. Monaghan, Bas van
Dijk, Jan Wouters, Marc Moonen, and Stefan Bleeck
T43: A NEW WIRELESS RESEARCH PLATFORM FOR NUROTRON COCHLEAR IMPLANTS:
Hongbin Chen, Yajie Lee, Shouxian Chen, Guofang Tang
T44: REDUCE ELECTRICAL INTERACTION DURING PARALLEL STIMULATION WITH
NEGATIVE FLANK CURRENT: Michael S Marzalek
T45: CONSONANT PERCEPTION ENHANCEMENT USING SIGNAL PROCESSING IN BIMODAL
HEARING: Allison Coltisor, Yang-Soo Yoon, Christopher Hall
T46: A PIEZOELECTRIC ARTIFICIAL BASILAR MEMBRANE BASED ON MEMS CANTILEVER
ARRAY AS A FRONT END OF A COCHLEAR IMPLANT SYSTEM: Jongmoon Jang,
JangWoo Lee, Seongyong Woo, David James Sly, Luke Campbell, Sungmin Han, Jin-Ho
Cho, Stephen John OLeary, Ji-Wong Choi, Jeong Hun Jang, Hongsoo Choi
T47: IMPROVING ITD BASED SOURCE LOCALIZATION FOR BILATERAL CI USERS IN
REVERBERANT CONDITIONS USING A NOVEL ONSET ENHANCEMENT ALGORITHM:
Aswin Wijetillake, Bernhard U Seeber
T48: EVALUATION OF A DEREVERBERATION ALGORITHM USING A VIRTUAL ACOUSTICS
ENVIRONMENT: Norbert Dillier, Patricia Bleiker, Andrea Kegel, Wai Kong Lai
T49: EFFECTS OF MULTI-BAND TRANSIENT NOISE REDUCTION FOR CI USERS: Phillipp
Hehrmann, Karl-Heinz Dyballa, Volkmar Hamacher, Thomas Lenarz, Andreas Buechner
T50: INDIVIDUALIZATION OF TEMPORAL MASKING PARAMETER IN A COCHLEAR IMPLANT
SPEECH PROCESSING STRATEGY: TPACE: Eugen Kludt, Waldo Nogueira, Andreas
Buechner
T51: ANIMAL-BASED CODING STRATEGY FOR COCHLEAR IMPLANTS: Claus-Peter Richter,
Petrina LaFaire, Xiaodong Tan, Yingyue Xu, Maxin Chen, Nan Xia, Pamela Fiebig, Alan
Micco
T52: INVESTIGATING THE USE OF A GAMMATONE FILTERBANK FOR A COCHLEAR IMPLANT
CODING STRATEGY: Sonia Tabibi, Wai Kong Lai, Norbert Dillier
T53: SIMULATING PINNA EFFECT BY USE OF THE REAL EAR SOUND ALGORITHM IN
ADVANCED BIONICS CI RECIPIENTS: Amy Stein, Chen Chen, Matthias Milczynski, Leonid
Litvak, Alexander Reich
T54: IMAGE-GUIDED FREQUENCY-PLACE MAPPING IN COCHLEAR IMPLANTS: Hussnain Ali,
Jack H. Noble, Rene H. Gifford, Labadie F. Robert, Benoit M. Dawant, John H. L. Hansen,
Emily A. Tobey
T55: INTEGRATION OF PLACE AND TEMPORAL CODING IN COCHLEAR IMPLANT
PROCESSING: Xin Luo
POSTER SESSION W:
WEDNESDAY – 8AM TO MIDNIGHT
W1: EFFECT OF CHANNEL ENVELOPE SYNCHRONY ON INTERAURAL TIME DIFFERENCE
SENSITIVITY IN BILATERAL COCHLEAR IMPLANT LISTENERS: Tom Francart, Anneke
Lenssen, Jan Wouters
W2: IMPROVING SENSITIVITY TO INTERAURAL TIMING DIFFERENCES FOR BILATERAL
COCHLEAR IMPLANT USERS WITH NARROWBAND FREQUENCY MODULATIONS IN HIGH
RATE ELECTRICAL PULSE TRAINS: Alan Kan, Ruth Y Litovsky
W3: IMPROVEMENT IN SPEECH INTELLIGIBILITY AND SIGNAL DETECTION BY CODING OF
INTERAURAL PHASE DIFFERENCES IN BICI USERS: Stefan Zirn, Susan Arndt, Thomas
Wesarg
W4: FRONT-END DYNAMIC-RANGE COMPRESSION PROCESSING EFFECTS ON MASKED
SPEECH INTELLIGIBILITY IN SIMULATED COCHLEAR IMPLANT LISTENING: Nathaniel J
Spencer, Lauren E Dubyne, Katrina J Killian, Christopher A Brown
W5: COMPARING DIFFERENT MODELS FOR SOUND LOCALIZATION WITHIN NORMAL HEARING
AND COCHLEAR IMPLANT LISTENERS: Christian Wirtz, Joerg Encke, Peter Schleich, Peter
Nopp, Werner Hemmert
W6: CORTICAL DETECTION OF INTERAURAL TIMING AND LEVEL CUES IN CHILDREN WITH
BILATERAL COCHLEAR IMPLANTS: Vijayalakshmi Easwar, Michael Deighton, Parvaneh
Abbasalipour, Blake C Papsin, Karen A Gordon
W7: WHEN PERCEPTUALLY ALIGNING THE TWO EARS, IS IT BETTER TO ONLY USE THE
PORTIONS THAT CAN BE ALIGNED OR TO USE THE WHOLE ARRAY?: Justin M. Aronoff,
Allison Laubenstein, Amulya Gampa, Daniel H. Lee, Julia Stelmach, Melanie J. Samuels, Abbigail
C. Buente
W8: withdrawn
W9: LAYER-SPECIFIC BINAURAL ACTIVATION IN THE CORTEX OF HEARING CONTROLS AND
CONGENITALLY DEAF CATS: Jochen Tillein, Peter Hubka, Andrej Kral
W10: THE EFFECTS OF A BROADER MASKER ON CONTRALATERAL MASKING FUNCTIONS:
Daniel H Lee, Justin Aronoff
W11: THE EFFECT OF INTERAURAL MISMATCH AND INTERAURALLY INTERLEAVED CHANNELS
ON SPECTRAL RESOLUTION IN SIMULATED COCHLEAR IMPLANT LISTENING: Ann E.
Todd, Justin M. Aronoff, Hannah Staisloff, David M. Landsberger
W12: NEURAL PROCESSING OF INTERAURAL TIME DIFFERENCES: DIRECT COMPARISONS
BETWEEN BILATERAL ELECTRIC AND ACOUSTIC STIMULATION: Maike Vollmer, Armin
Wiegner
W13: WITHIN-SUBJECTS COMPARISON OF BIMODAL VERSUS BILATERAL CI LISTENING AND
FINE-STRUCTURE VERSUS ENVELOPE-ONLY STRATEGIES IN SOUND LOCALIZATION,
SOUND QUALITY, AND SPEECH IN NOISE PERFORMANCE: Ewan A. Macpherson, Ioan A.
Curca, Vijay Parsa, Susan Scollie, Katherine Vansevenant, Kim Zimmerman, Jamie Lewis-Teeter, Prudence Allen, Lorne Parnes, Sumit Agrawal
W14: THE EFFECT OF INTERAURAL TEMPORAL DIFFERENCES IN INTERAURAL PHASE
MODULATION FOLLOWING RESPONSES: Jaime A. Undurraga, Nicholas R. Haywood, Torsten
Marquardt, David McAlpine
W15: SHORT INTER-PULSE INTERVALS IMPROVE NEURAL ITD CODING WITH BILATERAL
COCHLEAR IMPLANTS: Brian D. Buechel, Kenneth E. Hancock, Bertrand Delgutte
W16: SENSITIVITY TO INTERAURAL TIMING DIFFERENCES IN CHILDREN WITH BILATERAL
COCHLEAR IMPLANTS: Erica Ehlers, Alan Kan, Shelly Godar, Ruth Litovsky
W17: A SETUP FOR SIMULTANEOUS MEASUREMENT OF (E)ASSR AND PSYCHOMETRIC
TEMPORAL ENCODING IN THE AUDITORY SYSTEM: Andreas Bahmer, Uwe Baumann
W18: MODELING INDIVIDUAL DIFFERENCES IN MODULATION DETECTION: Gabrielle O'Brien, Jay
Rubinstein, Nikita Imennov
W19: A SPIKING NEURON NETWORK MODEL OF ITD DETECTION IN COCHLEAR IMPLANT
PATIENTS: Joerg Encke, Werner Hemmert
W20: REDUCING CURRENT SPREAD AND CHANNEL INTERACTION USING FOCUSED
MULTIPOLAR STIMULATION IN COCHLEAR IMPLANTS: EXPERIMENTAL DATA: Shefin S
George, Robert K Shepherd, Andrew K Wise, James B Fallon
W21: HEARING PRESERVATION IN COCHLEAR IMPLANTATION - IMPACT OF ELECTRODE
DESIGN, INDIVIDUAL COCHLEAR ANATOMY AND PREOPERATIVE RESIDUAL HEARING:
Thomas Lenarz, Andreas Buechner, Anke Lesinski-Schiedat, Omid Majdani, Waldemar Wuerfel,
Marie-Charlot Suhling
W22: ECAP RECOVERY FUNCTIONS: N1 LATENCY AS INDICATOR FOR THE BIAS INTRODUCED
BY FORWARD MASKING AND ALTERNATING POLARITY ARTIFACT REJECTION SCHEMES:
Konrad Eugen Schwarz, Angelika Dierker, Stefan Bernd Strahl, Philipp Spitzer
W23: COMPLEMENTARY RESULTS ON THE FEEDBACK PATH CHARACTERIZATION FOR THE
COCHLEAR CODACS DIRECT ACOUSTIC COCHLEAR IMPLANT: Giuliano Bernardi, Toon van
Waterschoot, Marc Moonen, Jan Wouters, Jean-Marc Gerard, Joris Walraevens, Martin Hillbratt,
Nicolas Verhaert
W24: RELATIONSHIP BETWEEN PERIPHERAL AND PSYCHOPHYSICAL MEASURES OF
AMPLITUDE MODULATION DETECTION IN CI USERS: Viral Tejani, Paul Abbas, Carolyn
Brown
W25: COCHLEAR MICROANATOMY AFFECTS COCHLEAR IMPLANT INSERTION FORCES: Ersin
Avci, Tim Nauwelaers, Thomas Lenarz, Volkmar Hamacher, Andrej Kral
W26: DEVELOPMENT OF A VOLTAGE DEPENDENT CURRENT NOISE ALGORITHM FOR
CONDUCTANCE BASED STOCHASTIC MODELLING OF AUDITORY NERVE FIBRE
POPULATIONS IN COMPOUND MODELS: Werner Badenhorst, Tiaan K Malherbe, Tania
Hanekom, Johan J Hanekom
W27: OPTIMIZATION OF ENERGY REQUIREMENTS FOR A GAPLESS CI INTERFACE BY
VARIATION OF STIMULUS DESIGN: Stefan Hahnewald, Heval Benav, Anne Tscherter,
Emanuele Marconi, Juerg Streit , Hans Rudolf Widmer, Carolyn Garnham, Marta Roccio, Pascal
Senn
W28: MODEL-BASED INTERVENTIONS IN COCHLEAR IMPLANTS: Tania Hanekom, Tiaan K
Malherbe, Liezl Gross, Johan J Hanekom
W29: TOWARDS INDIVIDUALIZED COCHLEAR IMPLANTS: VARIATIONS OF THE COCHLEAR
MICROANATOMY: Markus Pietsch, Lukas Aguierra Davila, Peter Erfurt, Ersin Avci, Annika
Karch, Thomas Lenarz, Andrej Kral
W30: WHAT CAN ECAP POLARITY SENSITIVITY TELL US ABOUT AUDITORY NERVE SURVIVAL?:
Michelle L. Hughes, Rachel A. Scheperle, Jenny L. Goehring
W31: VERTIGO AND COCHLEAR IMPLANTATION: Ioana Herisanu, Peter K. Plinkert, Mark Praetorius
W32: TIME-DOMAIN SIMULATION OF VOLUME CONDUCTION IN THE GUINEA PIG COCHLEA: Paul
Wong, Andrian Sue, Phillip Tran, Chidrupi Inguva, Qing Li, Paul Carter
W33: FUNCTIONAL MODELLING OF NEURAL INTERAURAL TIME DIFFERENCE CODING FOR
BIMODAL AND BILATERAL COCHLEAR IMPLANT STIMULATION: Andreas N Prokopiou, Jan
Wouters, Tom Francart
W34: THE ELECTRICALLY EVOKED COMPOUND ACTION POTENTIAL, COMPUTERIZED
TOMOGRAPHY, AND BEHAVIORAL MEASURES TO ASSESS THE ELECTRODE-NEURON
INTERFACE: Lindsay A DeVries, Rachel A Scheperle, Julie A Bierer
W35: CORTICAL REPRESENTATIONS OF STIMULUS INTENSITY OF COCHLEAR IMPLANT
STIMULATION IN AWAKE MARMOSETS: Kai Yuen Lim, Luke A Johnson, Charles Della
Santina, Xiaoqin Wang
W36: IMPEDANCE MEASURES FOR SUBJECT-SPECIFIC OPTIMIZATION OF SPATIAL
SELECTIVITY: Quentin Mesnildrey, Olivier Macherey, Frederic Venail, Philippe Herzog
W37: DEVELOPMENT OF A MODEL FOR PATIENT-SPECIFIC SIMULATION OF COCHLEAR
IMPLANT STIMULATION: Ahmet Cakir, Jared A. Shenson, Robert F. Labadie, Benoit M. Dawant,
Rene H. Gifford, Jack H. Noble
W38: RELATIONSHIP BETWEEN COCHLEAR IMPLANT ELECTRODE POSITION,
ELECTROPHYSIOLOGICAL, AND PSYCHOPHYSICAL MEASURES: Jack Noble, Andrea
Hedley-Williams, Linsey Sunderhaus, Robert Labadie, Benoit Dawant, Rene Gifford
W39: COMPARISONS OF SPECTRAL RIPPLE NOISE RESOLUTION OBTAINED FROM MORE
RECENTLY IMPLANTED CI USERS AND PREVIOUSLY PUBLISHED DATA: Eun Kyung Jeon,
Christopher W. Turner, Sue A. Karsten, Bruce J. Gantz, Belinda A. Henry
W40: ENVELOPE INTERACTIONS IN MULTI-CHANNEL AMPLITUDE MODULATION FREQUENCY
DISCRIMINATION BY COCHLEAR IMPLANT USERS: John Galvin, Sandy Oba, Deniz Başkent,
Qian-jie Fu
W41: SPECTRAL RESOLUTION AND AUDITORY ENHANCEMENT IN COCHLEAR-IMPLANT USERS:
Lei Feng, Andrew Oxenham
W42: MONOPOLAR PSYCHOPHYSICAL DETECTION THRESHOLDS PREDICT SPATIAL
SPECIFICITY OF NEURAL EXCITATION IN COCHLEAR IMPLANT USERS: Ning Zhou
W43: DYNAMIC BINAURAL SYNTHESIS IN COCHLEAR-IMPLANT RESEARCH: A VOCODER-BASED
PILOT STUDY: Florian Voelk
W44: PERCEPTUAL SPACE OF MONOPOLAR AND ALL-POLAR STIMULI: Jeremy Marozeau, Colette
McKay
W45: STIMULATING ON MULTIPLE ELECTRODES CAN IMPROVE TEMPORAL PITCH
PERCEPTION: Richard Penninger, Waldo Nogueira, Andreas Buechner
W46: SOUND QUALITY OF ELECTRIC PULSE-TRAINS AS FUNCTION OF PLACE AND RATE OF
STIMULATION WITH LONG MED-EL ELECTRODES: Katrien Vermeire, Annes Claes, Paul Van
de Heyning, David M Landsberger
W47: LOUDNESS RECALIBRATION IN NORMAL-HEARING LISTENERS AND COCHLEAR-IMPLANT
USERS: Ningyuan Wang, Heather Kreft, Andrew J. Oxenham
W48: COCHLEAR IMPLANT INSERTION TRAUMA AND RECOVERY: Bryan E. Pfingst, Deborah J.
Colesa, Aaron P. Hughes, Stefan B. Strahl, Yehoash Raphael
W49: PERCEIVED PITCH SHIFTS ELICIT VOCAL CORRECTIONS IN COCHLEAR IMPLANT
PATIENTS: Torrey M. Loucks, Deepa Suneel, Justin Aronoff
W50: TEMPORAL GAP DETECTION IN SPEECH-LIKE STIMULI BY USERS OF COCHLEAR
IMPLANTS: FREE-FIELD AND DIRECT STIMULATION: Pranesh Bhargava, Etienne Gaudrain,
Stephen D. Holmes, Robert P. Morse, Deniz Baskent
W51: CRITICAL FACTORS THAT AFFECT MEASURES OF SPATIAL SELECTIVITY IN COCHLEAR
IMPLANT USERS: Stefano Cosentino, John Deeks, Robert Carlyon
W52: USING FNIRS TO STUDY SPEECH PROCESSING AND NEUROPLASTICITY IN COCHLEAR
IMPLANT USERS: Zhou Xin, William Cross, Adnan Shah, Abr-Krim Seghouane, Ruth Litovsky,
Colette McKay
W53: NEUROFEEDBACK: A NOVEL APPROACH TO PITCH TRAINING IN COCHLEAR IMPLANT
USERS: Annika Luckmann, Jacob Jolij, Deniz Başkent
W54: DIFFERENTIAL PATTERNS OF THALAMOCORTICAL AND CORTICOCORTICAL
PROJECTIONS TO AUDITORY FIELDS OF EARLY- AND LATE-DEAF CATS: Blake E. Butler,
Nicole Chabot, Stephen G. Lomber
W55: ASSESSING PERCEPTUAL ADAPTATION TO FREQUENCY MISMATCH IN POSTLINGUALLY
DEAFENED COCHLEAR IMPLANT USERS WITH CUSTOMIZED SELECTION OF
FREQUENCY-TO-ELECTRODE TABLES: Elad Sagi, Matthew B. Fitzgerald, Katelyn Glassman,
Keena Seward, Margaret Miller, Annette Zeman, Mario A. Svirsky
W56: abstract withdrawn
W57: PERCEPTUAL TRAINING IN POST-LINGUALLY DEAFENED USERS OF COCHLEAR
IMPLANTS AND ADULTS WITH NORMAL HEARING: DOES ONE PARADIGM FIT ALL?:
Matthew Fitzgerald, Susan Waltzman, Mario Svirsky, Beverly Wright
W58: PERCEPTUAL LEARNING OF COCHLEAR IMPLANT RATE DISCRIMINATION: Raymond Lee
Goldsworthy
W59: IMPROVED BUT UNFAMILIAR CODES: INFLUENCE OF LEARNING ON ACUTE SPEECH
PERCEPTION WITH CURRENT FOCUSING: Zachary M. Smith, Naomi B.H. Croghan,
Christopher J. Long
POSTER SESSION R:
THURSDAY – 8AM TO MIDNIGHT
R1: EFFECT OF FREQUENCY OF SINE-WAVES USED IN TONE VOCODER SIMULATIONS OF
COCHLEAR IMPLANTS ON SPEECH INTELLIGIBILITY: Anwesha Chatterjee, Kuldip Paliwal
R2: THE PERCEPTUAL DISCRIMINATION OF SPEAKING STYLES UNDER COCHLEAR IMPLANT
SIMULATION: Terrin N. Tamati, Esther Janse, Deniz Başkent
R3: RELATIVE CONTRIBUTIONS OF TEMPORAL FINE STRUCTURE AND ENVELOPE CUES FOR
LEXICAL TONE PERCEPTION IN NOISE: Beier Qi, Yitao Mao, Lauren Muscari, Li Xu
R4: ENHANCING CHINESE TONE RECOGNITION BY MANIPULATING AMPLITUDE ENVELOPE:
TONE AND SPEECH RECOGNITION EXPERIMENTS WITH COCHLEAR IMPLANT
RECIPIENTS: Lichuan Ping, Guofang Tang, Qianjie Fu
R5: INTERACTIONS BETWEEN SPECTRAL RESOLUTION AND INHERENT TEMPORAL-ENVELOPE
NOISE FLUCTUATIONS IN SPEECH UNDERSTANDING IN NOISE FOR COCHLEAR
IMPLANT USERS: Evelyn EMO Davies-Venn, Heather A Kreft, Andrew J Oxenham
R6: SPECTRAL DEGRADATION AFFECTS THE EFFICIENCY OF SENTENCE PROCESSING:
EVIDENCE FROM MEASURES OF PUPIL DILATION: Matthew Winn, Ruth Litovsky
R7: A GAMMATONE FILTER BANK AND ZERO-CROSSING DETECTION APPROACH TO EXTRACT
TFS INFORMATION FOR COCHLEAR IMPLANT PATIENTS: Teng HUANG, Attila Frater,
Manuel Segovia Martinez
R8: PITCH DISCRIMINATION TRAINING IMPROVES SPEECH INTELLIGIBILITY IN ADULT CI
USERS: Katelyn A Berg, Jeremy L Loebach
R9: EFFECT OF CONTEXTUAL CUES ON THE PERCEPTION OF INTERRUPTED SPEECH UNDER
VARIABLE SPECTRAL CONDITIONS: Chhayakant Patro, Lisa Lucks Mendel
R10: EFFECT OF FREQUENCY ALLOCATION ON VOCAL TRACT LENGTH PERCEPTION IN
COCHLEAR IMPLANT USERS: Nawal El Boghdady, Deniz Başkent, Etienne Gaudrain
R11: BOUNDED MAGNITUDE OF VISUAL BENEFIT IN AUDIOVISUAL SENTENCE RECOGNITION
BY COCHLEAR IMPLANT USERS: EVIDENCE FROM BEHAVIORAL OBSERVATIONS AND
MASSIVE DATA MINING: Shuai Wang, Michael F. Dorman, Visar Berisha, Sarah J Cook, Julie
Liss
R12: SPEECH PERCEPTION AND AUDITORY LOCALIZATION ACCURACY IN SENIORS WITH
COCHLEAR IMPLANTS OR HEARING AIDS: Tobias Weissgerber, Tobias Rader, Uwe Baumann
R13: NEW APPROACHES TO FEATURE INFORMATION TRANSMISSION ANALYSIS (FITA): Dirk JJ
Oosthuizen, Johan J Hanekom
R14: EVALUATION OF LEXICAL TONE RECOGNITION BY ADULT COCHLEAR IMPLANT USERS: Bo
Liu, Xin Gu, Ziye Liu, Beier Qi, Ruijuan Dong, Shuo Wang
R15: EFFECT OF DIRECTIONAL MICROPHONE TECHNOLOGY ON SPEECH UNDERSTANDING
AND LISTENING EFFORT AMONG ADULT COCHLEAR IMPLANT USERS: Douglas Sladen,
Jingjiu Nie, Katie Berg, Smita Agrawal
R16: COCHLEAR PHYSIOLOGY AND SPEECH PERCEPTION OUTCOMES: Christopher K Giardina,
Zachary J Bastian, Margaret T Dillon, Meredith L Anderson, Holly Teagle, Harold C Pillsbury,
Oliver F Adunka, Craig A Buchman, Douglas C Fitzpatrick
R17: STIMULUS-BRAIN ACTIVITY ALIGNMENT BETWEEN SPEECH AND EEG SIGNALS IN
COCHLEAR IMPLANT USERS, MORE THAN AN ARTIFACT?: Anita Wagner, Natasha Maurits,
Deniz Başkent
R18: POLARITY EFFECTS OF QUADRAPHASIC PULSES ON THE INTELLIGIBILITY OF SPEECH IN
NOISE: Gaston Hilkhuysen, Stephane Roman, Olivier Macherey
R19: GAP DETECTION IN COCHLEAR-IMPLANT USERS REVEALS AGE-RELATED CENTRAL
TEMPORAL PROCESSING DEFICITS: Matthew J. Goupell, Casey Gaskins, Maureen J. Shader,
Alessandro Presacco, Samira Anderson, Sandra Gordon-Salant
R20: THE VALUE FOR COCHLEAR IMPLANT PATIENTS OF A BEAM FORMER MICROPHONE
ARRAY "THE AB ULTRAZOOM" ON SPEECH UNDERSTANDING IN A COCKTAIL PARTY
LISTENING ENVIRONMENT: Sarah J. Cook Natale, Erin Castioni, Anthony Spahr, Michael F.
Dorman
R21: DEACTIVATING COCHLEAR IMPLANT ELECTRODES BASED ON PITCH INFORMATION:
DOES IT MATTER IF THE ELECTRODES ARE INDISCRIMINABLE?: Deborah A Vickers,
Aneeka Degun, Angela Canas, Filiep A Vanpoucke, Thomas A Stainsby
R22: VOWEL AND CONSONANT RECOGNITION AND ERROR PATTERNS WITH FOCUSED
STIMULATION AND REDUCED CHANNEL PROGRAMS : Mishaela DiNino, Julie Arenberg
Bierer
R23: VOICE EMOTION RECOGNITION BY MANDARIN-SPEAKING LISTENERS WITH COCHLEAR
IMPLANTS AND THEIR NORMALLY-HEARING PEERS: Hui-Ping Lu, Yung-Song Lin, Shu-Chen Peng, Aditya M Kulkarni, Monita Chatterjee
R24: SPEAKING RATE EFFECTS ON PHONEME PERCEPTION IN ADULT CI USERS WITH EARLY- AND LATE-ONSET DEAFNESS: Brittany N. Jaekel, Rochelle Newman, Matthew Goupell
R25: INFLUENCE OF SIMULATED CURRENT SPREAD ON SPEECH-IN-NOISE PERCEPTION AND
SPECTRO-TEMPORAL RESOLUTION: Naomi B.H. Croghan, Zachary M. Smith
R26: IMPACT ANALYSIS OF NATURALISTIC ENVIRONMENTAL NOISE TYPE ON SPEECH
PRODUCTION FOR COCHLEAR IMPLANT USERS VERSUS NORMAL HEARING LISTENERS:
Jaewook Lee, Hussnain Ali, Ali Ziaei, John H.L. Hansen, Emily A. Tobey
R27: A MODEL OF INDIVIDUAL COCHLEAR IMPLANT USER’S SPEECH-IN-NOISE PERFORMANCE:
Tim Juergens, Volker Hohmann, Andreas Buechner, Waldo Nogueira
R28: SOUND LOCALIZATION IN BIMODAL COCHLEAR IMPLANT USERS AFTER LOUDNESS
BALANCING AND AGC MATCHING: Lidwien Veugen, Maartje Hendrikse, Martijn Agterberg,
Marc van Wanrooij, Josef Chalupper, Lucas Mens, Ad Snik, A. John van Opstal
R29: PERFORMANCE OF MICROPHONE CONFIGURATIONS IN PEDIATRIC COCHLEAR IMPLANT
USERS: Patti M Johnstone, Kristen ET Mills, Elizabeth L Humphrey, Kelly R Yeager, Amy Pierce,
Kelly J McElligott, Emily E Jones, Smita Agrawal
R30: FULLY SYNCHRONIZED BILATERAL STREAMING FOR THE NUCLEUS SYSTEM: Jan
Poppeliers, Joerg Pesch, Bas Van Dijk
R31: NEUROPHYSIOLOGICAL RESPONSES AND THEIR RELATION TO BINAURAL
PSYCHOPHYSICS IN BILATERAL COCHLEAR IMPLANT USERS: Heath Jones, Ruth Litovsky
R32: EFFECTS OF THE CHANNEL INTERACTION AND CURRENT LEVEL ON ACROSS-ELECTRODE INTEGRATION OF INTERAURAL TIME DIFFERENCES IN BILATERAL
COCHLEAR-IMPLANT LISTENERS: Katharina Egger, Bernhard Laback, Piotr Majdak
R33: FACTORS CONTRIBUTING TO VARIABLE SOUND LOCALIZATION PERFORMANCE IN
BILATERAL COCHLEAR IMPLANT USERS: Rachael Maerie Jocewicz, Alan Kan, Heath G
Jones, Ruth Y Litovsky
R34: UNDERSTANDING BINAURAL SENSITIVITY THROUGH FACTORS SUCH AS PITCH
MATCHING BETWEEN THE EARS AND PATIENTS’ HEARING HISTORY IN BILATERAL
COCHLEAR IMPLANT LISTENERS: Tanvi Thakkar, Alan Kan, Matthew Winn, Matthew J
Goupell, Ruth Y Litovsky
R35: AGING AFFECTS BINAURAL TEMPORAL PROCESSING IN COCHLEAR-IMPLANT AND
NORMAL-HEARING LISTENERS: Sean Robert Anderson, Matthew J Goupell
R36: EFFECTS OF ASYMMETRY IN ELECTRODE POSITION IN BILATERAL COCHLEAR IMPLANT
RECIPIENTS: Jill Firszt, Rosalie Uchanski, Laura Holden, Ruth Reeder, Tim Holden, Christopher
Long
R37: OPTIMIZING THE ENVELOPE ENHANCEMENT ALGORITHM TO IMPROVE LOCALIZATION
WITH BILATERAL COCHLEAR IMPLANTS: Bernhard U. Seeber, Claudia Freigang, James W.
Browne
R38: AUDITORY MOTION PERCEPTION IN NORMAL HEARING LISTENERS AND BILATERAL
COCHLEAR IMPLANT USERS: Keng Moua, Heath G. Jones, Alan Kan, Ruth Y. Litovsky
R39: EXTENT OF LATERALIZATION FOR PULSE TRAINS WITH LARGE INTERAURAL TIME
DIFFERENCES IN NORMAL-HEARING LISTENERS AND BILATERAL COCHLEAR IMPLANT
USERS: Regina Maria Baumgaertel, Mathias Dietz
R40: THE EFFECTS OF SYLLABIC ENVELOPE CHARACTERISTICS AND SYNCHRONIZED
BILATERAL STIMULATION ON PRECEDENCE-BASED SPEECH SEGREGATION: Shaikat
Hossain, Vahid Montazeri, Alan Kan, Matt Winn, Peter Assmann, Ruth Litovsky
R41: MANIPULATING THE LOCALIZATION CUES FOR BILATERAL COCHLEAR IMPLANT USERS:
Christopher A. Brown, Kate Helms Tillery
R42: AN OTICON MEDICAL BINAURAL CI PROTOTYPE CONCEPT DEMONSTRATED IN
HARDWARE: Bradford C. Backus, Guillaume Tourrel, Jean-Claude Repetto, Kamil Adiloglu,
Tobias Herzke, Matthias Dietz
R43: A BINAURAL CI RESEARCH PLATFORM FOR OTICON MEDICAL SP IMPLANT USERS
ENABLING ITD/ILD AND VARIABLE RATE PROCESSING: Bradford C. Backus, Kamil Adiloglu,
Tobias Herzke
R44: A SIMPLE INVERSE VARIANCE ESTIMATOR MODEL CAN PREDICT PERFORMANCE OF
BILATERAL CI USERS AND PROVIDE AN EXPLANATION FOR THE “SQUELCH EFFECT”:
Bradford C. Backus
R45: COMPARISON OF INTERAURAL ELECTRODE PAIRING METHODS: PITCH MATCHING,
INTERAURAL TIME DIFFERENCE SENSITIVITY, AND BINAURAL INTERACTION
COMPONENT: Mathias Dietz, Hongmei Hu
R46: A BINAURAL COCHLEAR IMPLANT ALGORITHM FOR ROBUST LOCATION CODING OF THE
MOST DOMINANT SOUND SOURCE: Ben Williges, Mathias Dietz
R47: ADVANCING BINAURAL COCHLEAR IMPLANT TECHNOLOGY - ABCIT: David McAlpine,
Jonathan Laudanski, Mathias Dietz, Torsten Marquardt, Rainer Huber, Volker Hohmann
R48: SUMMATION AND INHIBITION FROM PRECEDING SUB-THRESHOLD PULSES IN A
PSYCHOPHYSICAL THRESHOLD TASK: EFFECTS OF POLARITY, PULSE SHAPE, AND
TIMING: Nicholas R Haywood, Jaime A Undurraga, Torsten Marquardt, David McAlpine
R49: WHY IS CURRENT LEVEL DISCRIMINATION WORSE AT HIGHER STIMULATION RATES?:
Mahan Azadpour, Mario A. Svirsky, Colette M. McKay
R50: RELATIONSHIP BETWEEN SPECTRAL MODULATION DEPTH AND SPECTRAL RIPPLE
DISCRIMINATION IN NORMAL HEARING AND COCHLEAR IMPLANTED LISTENERS: Kavita
Dedhia, Kaibao Nie, Ward R Drennan, Jay T Rubinstein, David L Horn
R51: INFLUENCE OF THE RECORDING ELECTRODE ON THE ECAP THRESHOLD USING A NOVEL
FINE-GRAIN RECORDING PARADIGM: Lutz Gaertner, Andreas Buechner, Thomas Lenarz,
Stefan Strahl, Konrad Schwarz, Philipp Spitzer
R52: SOUND QUALITY OF MONOPOLAR AND PARTIAL TRIPOLAR STIMULATION AS FUNCTION
OF PLACE AND STIMULATION RATE: Natalia Stupak, David M. Landsberger
R53: PITCH RANKING WITH FOCUSED AND UNFOCUSED VIRTUAL CHANNEL CONFIGURATIONS:
Monica Padilla, Natalia Stupak, David M Landsberger
R54: AUDITORY NEUROPATHY SPECTRUM DISORDER (ANSD): ELECTRIC AND ACOUSTIC
AUDITORY FUNCTION: Rene Headrick Gifford, Sterling W Sheffield, Alexandra Key, George B
Wanna, Robert F Labadie, Linda J Hood
R55: REDUCING LOUDNESS CUES IN MODULATION DETECTION EXPERIMENTS: Sara I. Duran,
Zachary M. Smith
R56: THE EFFECT OF RECOMMENDED MAPPING APPROACHES ON SPEECH PERCEPTION AND
PSYCHOPHYSICAL CAPABILITIES IN COCHLEAR IMPLANT RECIPIENTS: Ward R Drennan,
Nancy E McIntosh, Wendy S Parkinson
R57: LOUDNESS AND PITCH PERCEPTION USING DYNAMICALLY COMPENSATED VIRTUAL
CHANNELS: Waldo Nogueira, Leonid Litvak, Amy Stein, Chen Chen, David M. Landsberger,
Andreas Buechner
R58: RATE PITCH WITH MULTI-ELECTRODE STIMULATION PATTERNS: CONFOUNDING CUES:
Pieter J Venter, Johan J Hanekom
R59: PITCH VARIATIONS ACROSS DYNAMIC RANGE USING DIFFERENT PULSE SHAPES IN
COCHLEAR IMPLANT USERS: Jaime A. Undurraga, Jan Wouters, Astrid van Wieringen
SPEAKER ABSTRACTS
S1: QUANTIFYING ENVELOPE CODING METRICS FROM AUDITORY-NERVE
SPIKE TRAINS: IMPLICATIONS FOR PREDICTING SPEECH INTELLIGIBILITY
WITH HEARING IMPAIRMENT
Michael G. Heinz
Purdue University, West Lafayette, IN, USA
Recent psychophysical and modeling studies have highlighted the perceptual importance
of slowly varying fluctuations in speech when listening in background noise, i.e., conditions in
which people with cochlear hearing loss have the most difficulty. Psychophysiological
correlations between predicted neural coding (envelope and fine-structure) and the perception
of vocoded speech suggested that neural envelope coding was a primary contributor to speech
perception in noise (Swaminathan and Heinz, 2012). Furthermore, recent psychophysically
based modelling has demonstrated that the signal-to-noise ratio at the output of a modulation
filter bank provides a robust measure of speech intelligibility (Jorgensen and Dau, 2011). The
signal-to-noise envelope power ratio (SNR_ENV) was shown to predict speech intelligibility in a
wider range of degraded conditions (e.g., noisy speech, reverberation, and spectral subtraction)
than many long-standing speech-intelligibility models. The success of a multi-resolution version
of the speech-based envelope power spectrum model in fluctuating noises (Jorgensen et al.,
2013) provides support that the SNR_ENV metric is an important objective measure for speech
intelligibility. While the promise of the SNR_ENV metric has been demonstrated for normal-hearing listeners, it has yet to be tested for hearing-impaired listeners because of limitations in
our physiological knowledge of how sensorineural hearing loss (SNHL) affects the envelope
coding of speech in noise. This talk will review our lab’s efforts to develop neural metrics for
envelope coding that can be computed from neural spike trains, thus allowing quantitative
analyses of the effects of SNHL on envelope coding from either well-established animal models
or computational neural models of SNHL. Envelope coding to non-periodic stimuli (e.g., speech
in noise) is quantified from model or recorded neural spike trains using shuffled-correlogram
analyses, which are analyzed in the modulation frequency domain to compute modulation-band
based estimates of signal and noise envelope coding. We also used correlogram analyses to
compute cross-channel envelope correlations, which have also been hypothesized to influence
speech intelligibility, particularly in adverse conditions. The development of quantitative spiketrain based metrics of SNR_ENV and cross-channel envelope correlations may ultimately
provide an important link between experimental recordings of neural responses to auditory
prostheses and predictions of speech intelligibility.
Funded by Action on Hearing Loss and NIH-NIDCD.
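To make the SNR_ENV idea concrete, the sketch below computes a crude signal-level analogue: envelope power of a noisy signal in octave-wide modulation bands, compared against the noise alone. It is an illustration only; the function names, band choices, and summation rule are assumptions of this sketch, and both the published sEPSM and the spike-train correlogram metrics described above differ in their filterbanks, normalisation, and band combination.

```python
# Minimal signal-level sketch of an SNR_ENV-style metric (illustrative only;
# not the authors' shuffled-correlogram pipeline or the published sEPSM).
import numpy as np
from scipy.signal import butter, hilbert, resample_poly, sosfiltfilt

ENV_FS = 200  # Hz; envelopes are low-frequency, so analyze them at a low rate

def envelope(x, fs):
    """Hilbert envelope of x, resampled to ENV_FS for modulation analysis."""
    env = np.abs(hilbert(x))
    return resample_poly(env, 1, fs // ENV_FS)

def band_env_power(env, fc):
    """AC envelope power in a one-octave modulation band centered at fc (Hz)."""
    sos = butter(2, [fc / np.sqrt(2), fc * np.sqrt(2)],
                 btype="bandpass", fs=ENV_FS, output="sos")
    band = sosfiltfilt(sos, env - env.mean())
    return np.mean(band ** 2)

def snr_env_db(mix, noise, fs, mod_fcs=(1, 2, 4, 8, 16, 32)):
    """Excess envelope power of speech-plus-noise over noise alone, summed
    across modulation bands and expressed in dB (a crude stand-in for the
    model's band weighting and combination rules)."""
    env_mix, env_noise = envelope(mix, fs), envelope(noise, fs)
    ratios = []
    for fc in mod_fcs:
        p_mix, p_noise = band_env_power(env_mix, fc), band_env_power(env_noise, fc)
        ratios.append(max(p_mix - p_noise, 1e-12) / max(p_noise, 1e-12))
    return 10 * np.log10(sum(ratios))

# Toy check: a 4-Hz-modulated tone in noise should yield a higher SNR_ENV
# than the noise floor alone would predict.
fs = 16000
t = np.arange(0, 2.0, 1 / fs)
noise = np.random.default_rng(0).normal(size=t.size)
speechy = (1 + 0.8 * np.sin(2 * np.pi * 4 * t)) * np.sin(2 * np.pi * 1000 * t)
print(snr_env_db(speechy + noise, noise, fs))
```

In the spike-train case described above, the same modulation-domain comparison would be applied to spectra derived from shuffled correlograms rather than Hilbert envelopes.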
S2: MODELING THE ELECTRODE-NEURON INTERFACE TO SELECT
COCHLEAR IMPLANT CHANNELS FOR PROGRAMMING
Julie Arenberg Bierer, Eric T Shea-Brown, Steven M Bierer
University of Washington, Seattle, WA, USA
Much of the variability in cochlear implant outcomes is likely a result of channels having
varying qualities of electrode-neuron interfaces, defined as the effectiveness with which an
electrode activates nearby auditory neurons. Previous studies have suggested that focused
stimulation measures, such as threshold, loudness and psychophysical tuning, are sensitive to
the status of the electrode-neuron interface. In this study, two important components of the
interface were measured: 1) the distance between each electrode and the inner wall of the
cochlea where the neurons are housed, and 2) local tissue impedance. The electrode-neuron
distances were estimated by three-dimensional computed tomography, and impedances from
electrical field imaging. We incorporated these elements into a computational model of cochlear
activation to predict perceptual thresholds. The model consists of a fluid-filled cylinder
representing the cochlear duct, an array of current point sources representing the electrodes,
and a population of voltage-activated neurons outside of the cylinder. The density of responsive
neurons is varied to fit the input data (electrode position, impedances, and behavioral thresholds
to broad and focused stimulation) using a nonlinear least-squares approach. The output of the
model was used to select subsets of channels based on the impact deactivating those channels
would have on the transmission of spectral cues. Those channels were compared to
deactivations determined only on the basis of focused behavioral thresholds, without modeling.
The results will be discussed in the context of optimizing clinical programming of cochlear
implants to improve speech perception.
This work was supported by NIDCD R01 DC 012142 (JAB).
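As a deliberately simplified illustration of the fitting idea in this abstract, the sketch below places a monopolar current point source a given distance from a line of neurons, uses the homogeneous-medium potential V = I/(4*pi*sigma*r), and fits a per-channel responsive-neuron density to behavioral thresholds with nonlinear least squares. All constants and data here (SIGMA, V_FIRE, N_CRIT, dists, meas) are hypothetical placeholders; the study's cylinder geometry, impedance inputs, and focused-stimulation data are not reproduced.

```python
# Toy electrode-neuron interface fit (illustrative; not the authors' model).
import numpy as np
from scipy.optimize import least_squares

SIGMA = 1.43   # S/m, assumed homogeneous conductivity (perilymph-like)
V_FIRE = 1e-3  # V, assumed potential a neuron needs in order to respond
N_CRIT = 50.0  # assumed neuron count at perceptual threshold

def n_activated(current, dist_mm, density_per_mm, span_mm=16.0):
    """Neurons activated along the modiolar wall by a monopolar point source:
    density times the length of wall where V = I/(4*pi*sigma*r) >= V_FIRE."""
    x = np.linspace(-span_mm / 2, span_mm / 2, 801)   # wall positions (mm)
    dx = x[1] - x[0]
    r = np.sqrt(x ** 2 + dist_mm ** 2) * 1e-3         # source-to-neuron (m)
    v = current / (4 * np.pi * SIGMA * r)
    return density_per_mm * np.count_nonzero(v >= V_FIRE) * dx

def threshold_current(dist_mm, density_per_mm):
    """Bisect (geometrically) for the smallest current activating N_CRIT neurons."""
    lo, hi = 1e-6, 1e-2
    for _ in range(60):
        mid = np.sqrt(lo * hi)
        if n_activated(mid, dist_mm, density_per_mm) < N_CRIT:
            lo = mid
        else:
            hi = mid
    return hi

# Hypothetical inputs: CT-estimated electrode-to-wall distances (mm) and
# measured behavioral thresholds (A) for three channels.
dists = np.array([0.4, 0.7, 1.1])
meas = np.array([5e-5, 8e-5, 1.2e-4])

def residuals(log10_density):
    pred = [threshold_current(d, 10 ** ld) for d, ld in zip(dists, log10_density)]
    return np.log10(pred) - np.log10(meas)

# diff_step is kept coarse so finite differences step over the spatial
# discretization of n_activated.
fit = least_squares(residuals, x0=np.ones(len(dists)), diff_step=0.01)
print("fitted responsive-neuron densities (per mm):", 10 ** fit.x)
```

The fitted densities play the role of the model's "density of responsive neurons"; channels whose fitted density is low would be candidates for deactivation in the channel-selection scheme described above.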
S3: DESIGN AND APPLICATION OF USER-SPECIFIC MODELS OF COCHLEAR
IMPLANTS
Tania Hanekom, Tiaan K Malherbe, Liezl Gross, Rene Baron, Riaze Asvat, Werner
Badenhorst, Johan J Hanekom
Bioengineering, University of Pretoria, Pretoria, ZAF
Much variation in hearing performance is observed among users of cochlear implants. To
gain an understanding of the underlying factors that cause inter-user performance differences,
insight into the functioning of individual implant users’ auditory systems is required. An
investigation into factors that are responsible for user-specificity starts at the periphery of the
implanted auditory system at the physical interface between technology and biophysiology. This
includes the unique geometric parameters that describe the cochlear morphometry of an
individual, variations in electrode geometry and location among users, individual neural survival
patterns and variations in electrical characteristics of tissues and fluids through which stimulus
currents propagate. While it is not possible to study the response of the peripheral auditory
system to electrical stimulation in humans invasively, computational physiology provides a
unique simulated invasive view into the workings of the implanted cochlea. Any measuring point
or quantity may be simulated or predicted provided that an accurate model of the electrically
stimulated auditory periphery is available.
The work presented expands on the development of advanced computational models to
describe individual CI users' cochleae with the intent to open a unique window onto the
electrophysiological functioning of a specific implant. The purpose is to apply these models to (i)
study the auditory system at a physiological level that is not physically accessible in human
users, and (ii) provide a tool to clinicians to access information about an individual user's
hearing system that may contribute to optimal mapping of the implant and/or support diagnostic
procedures in the event of deteriorating performance. Model-predicted mapping (MPM) relies on
the absolute accuracy with which spatial-temporal neural excitation characteristics may be
predicted, while model-based diagnostics (MBD) relies more on observation of larger-scale
phenomena, such as the effect of changes in model structure, e.g. new bone formation or
electrode encapsulation, on current paths.
Issues related to the design and construction of user-specific models are also discussed
and a number of outcomes that have been predicted by person-specific models are presented.
A summary of this presentation is available from www.up.ac.za/bioengineering.
S4: INVESTIGATING THE ELECTRO-NEURAL INTERFACE: THE EFFECTS OF
ELECTRODE POSITION AND AGE ON NEURAL RESPONSES
Christopher J. Long1, Ryan O. Melman2, Timothy A. Holden3, Wendy B. Potts1, Zachary M. Smith1
1 Cochlear Limited, Centennial, CO, USA
2 Cochlear Limited, Sydney, AUS
3 Department of Otolaryngology, Washington University School of Medicine, St. Louis, MO, USA
This study investigates the potential impacts of electrode position and age on Electrically Evoked
Compound Action Potential (ECAP) thresholds in deaf subjects treated with a cochlear implant (CI).
Previously, Long et al. (2014) found significant correlations between electrode-to-modiolus distances and
psychophysical thresholds for both focused multipolar (11 dB/mm; p = 0.0013; n = 10) and monopolar (2
dB/mm; p = 0.0048; n = 10) stimulation. They also showed a significant relationship between speech
understanding and the variance of threshold-controlling-for-distance (r = -0.79; p = 0.0065; n = 10).
These previous results are consistent with a model of electric cochlear stimulation where threshold
depends on two main factors: (1) the distance of an electrode to the modiolus, and (2) the pattern of
neural loss, since a reduced neural population could increase the effective distance of an electrode to
the nearest neurons. Here, we extend this work in three ways. In a first group of subjects, with both high-resolution CT scans and ECAP data, we examine the relationship between electrode-to-modiolus
distance and ECAP threshold. Next, in a larger group of subjects with Contour Advance electrodes
(n=339), we compare ECAP thresholds along the array to the average electrode distances previously
obtained from nine Contour Advance subjects. Finally, we analyze ECAP thresholds as a function of age
in a subset of the larger group.
In the first group of subjects, we observe that ECAP threshold is correlated with electrode-to-modiolus distance (3.1 dB/mm; p = 0.0004; n=7). In the larger group, we see that ECAP threshold
similarly varies with average electrode-to-modiolus distance (3.3 dB/mm; r = 0.88; p < 0.0001; 339
Contour Advance subjects; 22 electrodes). In addition, psychophysical thresholds from subjects’ clinical
maps vary with distance at 1.7 dB/mm (r = 0.85; p < 0.0001; 243 Contour Advance subjects; 22
electrodes), similar to the 2 dB/mm previously reported. Finally, we find a significant effect of age, with a
0.24 dB (1.55 CL) increase in ECAP threshold per decade of life (r = 0.24; p = 0.0002; 236 Contour
Advance subjects).
These results lead to some interesting hypotheses. The mechanical properties of the electrode
array combined with the tapering of the cochlea appear to be the primary determinants of the electrode-to-modiolus distance for Contour Advance electrodes. Intriguingly, the mean ECAP threshold-distance
slope is about twice that of the mean psychophysical threshold-distance slope, perhaps because ECAP
measurement involves electric attenuation both from the electrode to the neurons (stimulation) and along
the reverse path (recording). In addition, combining the measured increase in ECAP threshold per
decade of life with the predicted 1003 spiral ganglion cells (SGCs) lost per decade (Makary et al., 2011)
suggests an approximate threshold increase of 0.24 dB per 1000 SGC lost.
These results further demonstrate the significant effect of electrode position and neural survival
on psychophysical and ECAP thresholds. They also support the idea that threshold measures may
provide a useful metric of the state of the local electrode-neural interface to guide clinical optimizations,
such as channel removal, for improved outcomes with a CI.
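The per-decade arithmetic in the penultimate paragraph reduces to a one-line calculation, reproduced here with the abstract's own numbers:

    ecap_db_per_decade = 0.24     # measured ECAP threshold increase per decade of life
    sgc_lost_per_decade = 1003    # predicted SGC loss per decade (Makary et al., 2011)
    print(ecap_db_per_decade / (sgc_lost_per_decade / 1000.0))  # ~0.24 dB per 1000 SGCs lost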
S5: SPATIAL SELECTIVITY: HOW TO MEASURE AND (MAYBE) IMPROVE IT
Robert Paul Carlyon1, Olivier Macherey2, Stefano Cosentino1, John M Deeks1
1 MRC Cognition & Brain Sciences Unit, Cambridge, GBR
2 LMA-CNRS, Marseille, FRA
Measures of spatial selectivity potentially provide two practical ways of improving speech
perception by cochlear implant (CI) users. The first is to identify those electrode(s) that, for a
given listener, do or do not selectively excite appropriate portions of the auditory nerve array.
This information could then be used to de-activate or re-program the implant on a bespoke
(patient-by-patient) basis. The second is to evaluate methods of electrical stimulation that might
result in better spatial selectivity than conventional methods. Unfortunately, measurement of
spatial selectivity in CIs is plagued by numerous complicating factors that do not occur in
acoustic hearing. This in turn presents the researcher with substantial challenges when applying
psycho-acoustical techniques to the psychophysical measurement of spatial selectivity in CI
users. We describe some of those challenges, and illustrate the approach we have employed to
overcome them in a study investigating the effects of stimulus polarity, waveform shape, and
mode of stimulation on the spread of excitation.
All contemporary CIs use symmetric (SYM) biphasic pulses presented in monopolar (MP)
mode. Greater spatial selectivity (mediated by reduced current spread) might be achieved by
more focussed stimulation methods, including tripolar (TP) stimulation. However, TP stimulation
theoretically produces three “lobes” of excitation, corresponding to the central and flanking
electrodes, and excitation arising from the side lobes may limit spatial selectivity. One potential
way of manipulating the amount of side-lobe excitation is to use pseudomonophasic (PS)
pulses, consisting of a short phase of one polarity followed by an opposite-polarity phase of (in
our case) four times the duration and a quarter of the amplitude. Because CI listeners are
preferentially sensitive to anodic current, we predicted that a TP “PSA” stimulus, where the high-amplitude phase presented to the central electrode is anodic, and that to the flanking electrodes is cathodic, would produce a narrower spread of excitation than a TP_PSC stimulus, which is a polarity-flipped version of TP_PSA. We measured forward-masked excitation patterns for 1031-pps, 200-ms maskers in which the pulse shape was either TP_PSA, TP_PSC, or the MP_SYM pulse shape widely used clinically. The TP_SYM probe signal had a duration of 20 ms and a different (200 pps) pulse rate from the maskers, in order to reduce confusion effects. Each
masker was set to an “equally effective” level, so that all produced the same masked threshold
for a probe on the same electrode as the masker (∆x=0). Masked thresholds were then
measured for ∆x = -2, -1, 0, 1, and 2. Results varied across the five Advanced Bionics subjects
tested, but four showed narrower excitation patterns for TP_PSA than for one of the other two
masker types. Furthermore, within subjects, the loudness of each masker correlated positively
with the excitation pattern width, and we propose a novel measure that combines loudness
comparisons and masking at ∆x=0, as an efficient method for measuring spatial selectivity in
CIs.
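As an aside, the pseudomonophasic pulse described above is straightforward to construct; the sketch below is a generic illustration in which the sampling rate and phase duration are arbitrary choices, not the study's settings.

    import numpy as np

    def pseudomonophasic_pulse(amp, short_phase_us, fs_hz, anodic_first=True):
        # Short phase of one polarity, then an opposite-polarity phase of four
        # times the duration at one quarter of the amplitude: net charge is zero.
        n_short = int(round(short_phase_us * 1e-6 * fs_hz))
        sign = 1.0 if anodic_first else -1.0
        return np.concatenate([sign * amp * np.ones(n_short),
                               -sign * (amp / 4.0) * np.ones(4 * n_short)])

    pulse = pseudomonophasic_pulse(amp=1.0, short_phase_us=43.0, fs_hz=1e6)
    print(abs(pulse.sum()) < 1e-9)   # True: the pulse is charge-balanced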
S6: EFFECTS OF SPECTRAL RESOLUTION ON TEMPORAL-ENVELOPE
PROCESSING OF SPEECH IN NOISE
Andrew J. Oxenham, Heather A. Kreft
University of Minnesota, Minneapolis, MN, USA
Recent work on predicting speech understanding in noise has focused on the role of
inherent fluctuations in noise and on the speech-to-masker ratio in the modulation spectrum
domain. Empirical work using tone vocoders in normal-hearing listeners has provided support
for this approach by showing that speech masking in "steady" noise is dominated by the
inherent noise fluctuations, and that truly steady maskers (with a flat temporal envelope)
produce much less masking. Because cochlear-implant (CI) users rely on temporal-envelope
cues to understand speech, our expectation was that CI users would also show much less
masking in the presence of steady (pure-tone) maskers than in noise. Pure-tone maskers were
placed at the center frequencies of each frequency channel of the CI, thereby producing the
same masker energy as a noise masker in each frequency channel, but without the inherent
fluctuations. In contrast to the results from normal-hearing subjects, the CI users gained no
benefit from eliminating the inherent fluctuations from the maskers. Further experiments
suggested that the poor spectral resolution of cochlear implants resulted in a smoothing of the
temporal envelope of the noise maskers. The results indicate an important, and potentially
overlooked, effect of spectral resolution on the temporal representations of speech and noise in
cochlear implants. Similar results were observed in listeners with hearing impairment, whose spectral resolution is also poorer than normal. The results suggest a new interpretation for why CI users and hearing-impaired listeners generally show reduced masking release when additional
temporal modulations are imposed on noise maskers.
This work is supported by NIH grant R01 DC012262.
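To make the masker construction concrete, here is a minimal sketch of replacing a noise masker with channel-centered pure tones of matched energy. The band edges, duration, and levels are invented placeholders; the actual analysis channels would be those of each subject's CI.

    import numpy as np
    from scipy.signal import butter, sosfilt

    fs = 16000
    t = np.arange(0, 0.5, 1 / fs)
    noise = np.random.default_rng(0).standard_normal(t.size)

    band_edges = [300, 500, 800, 1300, 2100, 3400, 5500]   # hypothetical channel bands (Hz)
    tone_masker = np.zeros_like(t)
    for lo, hi in zip(band_edges[:-1], band_edges[1:]):
        sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        band_rms = np.sqrt(np.mean(sosfilt(sos, noise) ** 2))  # noise energy in this channel
        cf = np.sqrt(lo * hi)                                  # channel center frequency
        tone_masker += np.sqrt(2) * band_rms * np.sin(2 * np.pi * cf * t)  # equal-RMS tone

The tone masker then carries (approximately) the same energy per channel as the noise, but with a flat temporal envelope within each channel.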
S7: WHAT IS MISSING IN AUDITORY CORTEX UNDER COCHLEAR IMPLANT
STIMULATION?
Xiaoqin Wang
Johns Hopkins University, Baltimore, MD, USA
Despite the success of cochlear implants (CIs), most users hear poorly in noisy
environments and report distorted perception of music and tonal languages. The further
improvement of CI devices depends crucially on our understanding of the central auditory
system’s ability to process, adapt and interpret electric stimuli delivered to the cochlea. Our
knowledge of how the central auditory system processes CI stimulation remains limited. Much
of this knowledge has come from electrophysiological studies in anesthetized animals. We have
developed a new non-human primate model for CI research using the common marmoset
(Callithrix jacchus), a highly vocal New World monkey that has emerged in recent years as a
promising model for studying the neural basis of hearing and vocal communication. By implanting a
CI electrode array in one cochlea and leaving the other cochlea acoustically intact, we were able
to compare each neuron’s responses to acoustic and CI stimulation separately or in combination
in the primary auditory cortex (A1) of awake marmosets. The majority of neurons in both
hemispheres responded to acoustic stimulation, but CI stimulation was surprisingly ineffective at
activating most A1 neurons, particularly in the hemisphere ipsilateral to the implant. We further
discovered that CI-nonresponsive neurons exhibited greater acoustic stimulus selectivity in
frequency and sound level than CI-responsive neurons. Such cortical neurons may play an
important role in perceptual behaviors requiring fine frequency and level discrimination. Our
findings suggest that a selective population of auditory cortex neurons is not effectively
activated by CI stimulation, and provide insights into factors responsible for poor CI user
performance in a wide range of perceptual tasks.
S8: HUMAN CORTICAL RESPONSES TO CI STIMULATION
Jan Wouters
KU Leuven – University of Leuven, Dept. Neurosciences, ExpORL, Belgium
Many research studies have focused on objective measures in cochlear implants (CI) related to the electrode-neuron interface and brainstem activity. In principle, cortical responses may provide better objective estimates and correlates for the performance of CI recipients in complex listening environments, lead to better insights into the variability of outcomes across subjects, allow methods for individualized speech processing strategies, and define objective measures for follow-up (monitoring of status, auditory development and maturation).
Related to these aspects, an overview of cortical potentials in CI will be given and new
research about auditory steady-state responses (ASSR) obtained from multi-channel EEG recordings in CI users will be reported in this contribution. ASSR stimuli are a good model for speech, and the responses are closely related to basic brain oscillations. Depending on the modulation frequency of the ASSR stimuli, from about 100 Hz down to a few hertz, the ASSR neural responses are generated at levels ranging from the brainstem up to the auditory cortex, respectively.
In CIs the evoked responses can be overshadowed by the electrical artifacts of the RF
communication link and electrical stimulation pulses. Techniques have been developed by
different research groups to extract the responses to short and transient stimuli. However, this is
particularly a challenge for continuous CI-like stimuli and ASSR.
In this contribution examples will be given of stimulation artifact rejection
techniques, of cortical and sub-cortical evoked responses that provide estimates for threshold
and supra-threshold electrical stimulation, of the relation between the modulation transfer function of the generated brain activity (for stimuli modulated at 4-40 Hz) and speech perception, of across-subject variation, and of speech processing aspects.
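For readers unfamiliar with ASSR analysis, the basic read-out is simple: the response is the EEG spectral component at the stimulus modulation frequency, judged against neighboring frequency bins. The sketch below uses synthetic data and a plain SNR criterion; actual studies use multi-channel recordings, artifact rejection, and formal statistics (e.g., an F-test).

    import numpy as np

    fs, f_mod, dur = 1000.0, 40.0, 60.0        # 60 s of EEG, 40-Hz modulation (assumed values)
    t = np.arange(0, dur, 1 / fs)
    rng = np.random.default_rng(1)
    eeg = 0.2 * np.sin(2 * np.pi * f_mod * t) + rng.standard_normal(t.size)  # synthetic EEG

    power = np.abs(np.fft.rfft(eeg)) ** 2
    freqs = np.fft.rfftfreq(t.size, 1 / fs)
    k = int(np.argmin(np.abs(freqs - f_mod)))          # bin at the modulation frequency
    noise_bins = np.r_[k - 10:k - 1, k + 2:k + 11]     # neighboring bins as a noise estimate
    print("ASSR SNR at 40 Hz:", power[k] / power[noise_bins].mean())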
S9: EFFECTS OF DEAFNESS AND COCHLEAR IMPLANT USE ON CORTICAL
PROCESSING
James B Fallon
Bionics Institute, East Melbourne, AUS
Prolonged periods of deafness are known to alter processing in the auditory system. For
example, partial deafness, from a young age, can result in over-representation of lesion edge
frequencies in primary auditory cortex and profound deafness results in a complete scrambling
of the normal cochleotopic organization. In contrast to these effects on cochleotopic (or spectral)
processing, temporal processing appears to be relatively robust. A reduction in the ability to
respond to every stimulus in a train (maximum following rate) was the only effect of neonatal
deafness of 7 - 13 months duration, although deafness of longer duration (> 36 months) has
been reported to result in significant increases in latency and jitter, and a decrease in best
repetition and cut-off rates.
For the past decade we have been examining the effects of reactivation of the deafened
auditory pathway, via a cochlear implant, and of the timing of the re-introduction of activity (i.e.
the preceding duration of deafness). We have utilized a range of techniques (anatomical,
electrophysiological, behavioral) to attempt to address these issues. In short, reactivation of the
deafened auditory pathway, with chronic intracochlear electrical stimulation, appears to maintain
most, but not all, of the cochleotopic processing. In contrast, chronic intracochlear stimulation
from a clinical cochlear implant system results in significant changes to temporal processing.
Specifically, chronic stimulation results in decreased suppression duration, longer latency,
greater jitter, and increased best repetition and cut-off rates relative to acutely deafened
controls.
Overall, temporal processing appears to be far more robust than cochleotopic / spectral
processing with respect to both a moderate period of deafness and clinical cochlear implant use.
This work was supported by the NIDCD (HHS-N-263-2007-00053-C), the National Health and
Medical Research Council of Australia and the Victorian Government. We acknowledge our
colleagues and students who contributed to aspects of this work.
S10: FMRI STUDIES OF CORTICAL REORGANIZATION IN POSTLINGUAL
DEAFNESS: MODIFICATION OF THE LEFT HEMISPHERIC DOMINANCE FOR
SPEECH
Diane S Lazard1, Anne-Lise Giraud2
1 Institut Arthur Vernes, ENT surgery, Paris, FRA
2 Department of Neuroscience, University of Geneva, Campus Biotech, Geneva, CHE
Similarly to certain brain injuries, post-lingual deafness imposes central reorganization to
adapt to the loss of easy and instinctive oral communication. Occurring in mature brains shaped from childhood, this plasticity must cope with brain resources that are less readily available than those of developing brains. Using fMRI in post-lingually deaf adults and matched normal-hearing
controls, we address the question of late deafness-induced reorganization by exploring
phonological processing from written material, and relate the different strategies adopted to
speech recognition scores obtained after cochlear implantation (CI).
During easy to more difficult written rhyming tasks, the involvement of right temporal
areas during deafness prior to CI is a consistent marker of poor speech outcome after auditory
rehabilitation. The recruitment of these right areas, usually involved in paralinguistic rather than phonological processing, is a marker of the central reorganization some
subjects develop to palliate the difficulties of losing oral communication. Depending on the skills
of audio-visual fusion acquired in early childhood, two neurobiological profiles emerge: i) some
subjects are able to maintain a left hemispheric dominance anchored to oral interactions through
left audio-visual loops maintained by lip-reading; they will later become proficient CI users. ii) Other subjects, less balanced, will develop a greater reliance on visual inputs, based on a
functional interaction between early visual cortex and the right superior temporal areas. This
greater right involvement favors and speeds up written processing, easing social interactions
when lip-reading is impossible. This shift in hemispheric dominance, leaving the visual cortex
less available for left audio-visual interactions, seems a reliable predictor of poorer CI outcome.
S11: SOCIAL INFLUENCES ON SPEECH AND LANGUAGE DEVELOPMENT IN
INFANCY
Michael H. Goldstein
Cornell University, Ithaca, NY, USA
The social environment plays an important role in vocal development and language
learning. Social interactions that promote vocal learning are characterized by contingent
responses of adults to early, prelinguistic vocalizations (babbling). These interactions are often
rapid and comprised of small, seemingly mundane behaviors, but they are crucial for vocal
development. Recent work in our laboratory has shown that social responses to babbling
facilitate learning of syllable form and early phonology. Contingent social feedback to infant
vocalizations produces rapid changes in babbling toward more developmentally-advanced vocal
forms. In contrast, yoked control infants, who receive an identical amount of social responses (that
are not synchronized with their babbling), do not learn.
Such social effects are more than simple processes of imitation or shaping. We recently
studied the role of socially guided statistical learning in vocal development by presenting infants
with feedback containing sound patterns they were capable of pronouncing but rarely produce.
We used VCV-patterned words from the Nigerian language Yoruba (e.g., ada). We found that
infants extracted and produced a novel phonological pattern only when their caregivers’ speech
was both contingent on their babbling and contained variable exemplars of the underlying VCV
pattern. Repeating the same exemplar did not facilitate learning even when the repetitions were
contingent on babbling. Thus the structure of social interactions organized by babbling, as well
as the statistical structure of contingent speech, afford infants opportunities for phonological
learning.
Additional studies indicate that infants are particularly primed to learn new associations
between novel labels and objects when the information is presented just after babbling. We
found that mothers’ responses to infants’ object-directed vocalizations (babbles produced while
looking at and/or holding an object) predicted infants’ word learning in real-time as well as later
vocabulary development. Infants who received novel label-object pairings after babbling at the objects learned at a significantly higher rate than those who received such pairings after looking at, but not babbling at, the objects.
Our most recent studies indicate that specific characteristics of social coordination are
rewarding to infants, and reward pathways may drive learning in social contexts. We are now
engaged in a parallel program of research on socially guided vocal learning in songbirds to
directly investigate the developing connections between reward and learning circuitry.
Taken together, our studies indicate that vocal learning and communicative development
is an active process, typically driven by social interactions that are organized by prelinguistic
vocalizations. By creating feedback that is both inherently informative and socially relevant,
structured social interaction boosts the salience of patterns in the input and facilitates vocal
learning.
S12: EXECUTIVE FUNCTIONING IN PRELINGUALLY DEAF CHILDREN WITH
COCHLEAR IMPLANTS
William G Kronenberger, David B Pisoni
Indiana University School of Medicine, Indianapolis, IN, USA
Indiana University, Bloomington, IN, USA
Cochlear implantation restores some attributes of hearing and spoken language skills to prelingually
deaf children, but a period of early deafness combined with underspecified sensory input from the cochlear
implant (CI) puts prelingually deaf, early implanted children at risk for delays in some areas of spoken
language skills. Although a major focus of CI efficacy and outcomes research has been on speech and
language skills, auditory experience and verbal skills are a part of a larger, functionally integrated system of
neurocognitive processes, some of which may also be affected by early deafness and language delays.
Executive functioning (EF) encompasses a range of neurocognitive processes concerned with the regulation,
allocation, and management of thoughts, behaviors, and emotions in the service of planning, goal-direction,
and organization. Reduced auditory and spoken language experiences may put CI users at greater risk than
normal-hearing (NH) peers for delayed development of critical building blocks of EF abilities, including
sequential processing skills and language skills for use in self-regulation. In this presentation, we will review
background theory and research in support of a rationale for risk of EF delays in prelingually deaf, early
implanted CI users as a result of early auditory and spoken language deprivation. We will then present
findings from a recent set of studies investigating four primary research questions: (1) Are prelingually deaf,
early-implanted CI users at risk for EF delays compared to NH peers? (2) What areas of EF are specifically
vulnerable to delay in children with CIs? (3) What is the timeline for EF development and change from
preschool to school ages in children with CIs? (4) What are the relations between EF delays and spoken
language outcomes in CI users, and how do these relations differ from those for NH peers?
In a first set of studies with 53 to 70 long-term (7 or more years), early implanted (at age 7 or younger)
CI users age 7-22 years, we demonstrated delays in three primary areas of EF relative to a matched NH
control sample: verbal working memory, controlled cognitive fluency, and inhibition-concentration; no delays
were found in spatial working memory. Unlike prior studies, this study used a 1:1 matching (age and
nonverbal IQ) procedure for the CI and NH samples, focused only on prelingually-deaf long-term CI users,
and measured EF with a broad battery of neurocognitive tests that placed minimal demands on audibility
using almost exclusively visual stimuli. An additional set of analyses showed that the EF delays in this sample
were not only present on neurocognitive tests but were also reported by parents on questionnaires of
everyday behaviors. In another study using this long-term outcome sample, we found that two areas of EF
(verbal working memory and controlled cognitive fluency) were related more strongly to language outcomes in
CI users than in NH peers; this finding suggests that reciprocal influences between EF and language
development may be different in CI users than in NH peers.
In a second set of studies, we investigated EF longitudinally in 37 CI users who averaged 4.1 years of
age (range=3-6 years) at study entry. Consistent with the long-term outcome study, delays were found in the
CI sample compared to NH peers using both neurocognitive and behavioral measures of EF, but the breadth
and magnitude of these delays were generally less than those seen in the long-term users. Longitudinally, EF
measures showed moderate stability over a 1-year period and were predicted by earlier language skills.
Results suggest that EF delays are present in CI users as early as preschool ages and that language skills
predict later EF skills in this age range.
Based on the results of these studies, we conclude that a period of early auditory deprivation followed
by later spoken language delays places some children with CIs at elevated risk for deficits in some areas of
EF (verbal working memory, controlled cognitive fluency, inhibition-concentration), although a majority of
children with CIs do not show delays in these areas. Delays in EF are present as early as preschool ages and
are predicted by early language skills. These findings suggest that early identification and intervention for
delays in executive functioning in prelingually-deaf, early implanted children with CIs are warranted.
S13: THE IMPORTANCE OF A CI FOR CHILDREN'S SOCIAL AND EMOTIONAL
INTELLIGENCE
Carolien Rieffe1,2 and Johan H.M. Frijns3,4
1 Developmental Psychology, Leiden University, The Netherlands
2 Dutch Foundation for the Deaf and Hard of Hearing Child, Amsterdam, The Netherlands
3 Dept. of Otorhinolaryngology, Head & Neck Surgery, Leiden University Medical Center, The Netherlands
4 Leiden Institute for Brain and Cognition, The Netherlands
Like adults, children and adolescents want to belong, to be part of a family and a peer group,
and have close friends with whom they can share their important life events. Whereas family
members most often show an unconditional love and care for each other, peer relationships rely
much more on good social skills. Especially during the early teens, peers become much more
important, and young teenagers usually shift their focus from the family to their peers. This puts
an extra demand on children's social skills. Starting a friendship is usually not a problem; maintaining a friendship, or maintaining one's position in a peer group, is much more of a challenge. The development of these social skills, in turn, largely depends on children's emotional intelligence.
Emotional intelligence consists of many aspects, e.g. reading emotions in others’ facial
expressions, understanding the causes of one’s own and others’ emotions, being able to
regulate one’s own emotions, and to express emotions in a socially accepted and constructive
way. These aspects are prone to incidental learning. In other words, being able to observe
others, overhear conversations, and having implicit role models all contribute to the
development of these aspects. This could be more of a challenge for children with a CI
compared to their hearing peers. The question is to what extent limited access to incidental
learning hampers the development of emotional intelligence, and in turn, the social intelligence
in children with a CI.
In our research group (Developmental Psychology, Leiden University and the
Otorhinolaryngology Department of the Leiden University Medical Center, NL) we have studied
the extent to which the emotional and social intelligence of children with a CI is on par with their
hearing peers, and how this is related to the degree of hearing loss, the age of implantation, and
other related factors. In this presentation we will give an overview of the outcomes of our studies
over the last seven years and discuss the new research questions that we are currently working
on. These outcomes will show where the children with a CI are at risk, but also what the
protective factors are to enhance an optimal social and emotional development for these
children.
S14: SPOKEN LANGUAGE, MEMORY AND PERCEPTUAL ABILITIES IN
CHILDREN WITH COCHLEAR IMPLANT(S) AND CHILDREN WITH SINGLE
SIDED DEAFNESS
Astrid van Wieringen, Anouk Sangen, Jan Wouters
KU Leuven, Leuven, BEL
In many countries, congenital hearing impairment in children is detected soon after birth
through neonatal hearing screening provided by Child Health and Welfare Services.
Subsequently, in Belgium, children with profound bilateral hearing loss receive cochlear
implant(s) in their first years of life, while children with single sided deafness (SSD) do not
receive treatment. An increasing body of research suggests that SSD is a risk factor for speech-language delay, and that behavioral and academic problems persist throughout the years.
In this study we compare spoken language performance, working memory, speech
perception in noise, and parents'/teachers' evaluations for children (< 15 yrs) who received
cochlear implants at a young age (n= 47), children with SSD without intervention (n=20) and
normal hearing peers (n= 67).
Our data show that children with SSD lag behind on complex language tasks (in addition
to spatial hearing), although they perform better than bilaterally deaf children with cochlear
implants. Detailed analyses of the expressive language tasks show similar patterns of errors for
early implanted children and children with SSD. Understanding the strengths and weaknesses
of different skills in children with different degrees of deafness will allow us to develop or
improve targeted interventions. In all children with hearing impairment these issues should be
addressed at a young age in order to obtain age-adequate performance.
S15: VOICE EMOTION RECOGNITION AND PRODUCTION BY LISTENERS
WITH COCHLEAR IMPLANTS
Monita Chatterjee
Boys Town National Research Hospital, 555 N 30th St, Omaha, NE
Our recently published results show that children and adults with cochlear implants (CIs)
achieve significantly lower scores than normally-hearing (NH) children and adults in voice
emotion recognition with child-directed materials. Underscoring the contribution of voice pitch
perception to emotion recognition, preliminary new analyses suggest that CI children’s emotion
recognition is correlated with their fundamental-frequency-discrimination thresholds obtained
using broadband harmonic complexes.
Although CI children’s emotion recognition scores with full-spectrum speech are similar to
NH adults’ scores with 8-channel noise-vocoded (NV) speech, NH children in our study show
significant deficits with 8-channel and 16-channel NV speech. In recent work, we found that both
nonverbal intelligence and age were significant predictors of NH children’s performance with
NV-speech. Taken together, these findings confirm the strong benefit obtained by CI children
from experience with their device, as well as the contribution of top-down cognitive processing
to the perception of degraded speech. These results also suggest that younger children might
face greater difficulties with prosodic cues than post-lingually deaf adults immediately after
implantation. Given that our study used child-directed speech (i.e., exaggerated prosody), our
results likely under-estimate the difficulties faced by CI patients in the real world.
In preliminary experiments on voice emotion production, we are finding that early-deaf CI
children produce a narrower range of specific acoustic contrasts to communicate happy/sad
distinctions in simple sentences (e.g., This is it) than NH children and adults. In particular,
preliminary analyses suggest smaller variations in intensity and mean spectral centroid in CI
children than in the NH group. Of particular interest to us in all of these studies are
auditory/linguistic-experience/plasticity-related differences between post-lingually deaf CI adults,
and pre/peri-lingually deaf CI children.
[Work supported by NIH R21DC011905 and NIH R01DC014233]
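The "mean spectral centroid" mentioned above is a standard acoustic measure: the amplitude-weighted mean frequency of a sound's spectrum. The sketch below uses a common magnitude-weighted definition, which may differ in detail from the authors' analysis; the test signal is synthetic.

    import numpy as np

    def spectral_centroid(x, fs):
        mag = np.abs(np.fft.rfft(x))                 # magnitude spectrum
        freqs = np.fft.rfftfreq(x.size, 1 / fs)
        return np.sum(freqs * mag) / np.sum(mag)     # weighted mean frequency (Hz)

    fs = 16000
    t = np.arange(0, 0.3, 1 / fs)
    vowel_like = np.sin(2 * np.pi * 220 * t) + 0.5 * np.sin(2 * np.pi * 880 * t)
    print(f"{spectral_centroid(vowel_like, fs):.0f} Hz")   # ~440 Hz, between the two partials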
S16: CONTEXT EFFECTS, TIME COURSE OF SPEECH PERCEPTION, AND
LISTENING EFFORT IN COCHLEAR-IMPLANT SIMULATIONS
Deniz Başkent1, Carina Pals1, Charlotte de Blecourt2, Anastasios Sarampalis3,
Anita Wagner1
1 University of Groningen, University Medical Center Groningen, Dept Otorhinolaryngology, Groningen, NLD
2 University of Groningen, Research School of Behavioral and Cognitive Neurosciences, Groningen, NLD
3 University of Groningen, Department of Psychology, Groningen, NLD
Speech perception is formed based on both the acoustic signal and listeners’ knowledge
of the world and semantic context. Under adverse listening conditions, where the speech signal is
degraded, access to semantic information can facilitate interpretation of the degraded speech.
Speech perception through a cochlear implant (CI) is a good example of such facilitation, as the speech signal transmitted by the CI is inherently degraded, lacking most spectro-temporal
details. Yet, interpretation of degraded speech in CI users likely comes at the cost of increased
cognitive processing, leading to increased listening effort.
In this study, we have investigated the time course of understanding words, and how
sentential context reduces listeners’ dependency on the acoustic signal for natural and
degraded speech via an acoustic CI simulation. In an eye-tracking experiment we combined
recordings of listeners’ gaze fixations with pupillometry, to capture effects of semantic
information on both the time course and effort of speech processing, respectively. Normal-hearing listeners were presented with sentences with or without a semantically constraining verb
(e.g., crawl) preceding the target word (baby), and their ocular responses were recorded to four
pictures presented on the screen, including that of the target, a phonological competitor (bay), a
semantic distractor (worm), and an unrelated distractor.
The results show that in natural speech, listeners’ gazes reflect their uptake of acoustic
information, and integration of preceding semantic context. Degradation of the signal via an
acoustic CI simulation leads to delays in lexical processing, i.e., a later disambiguation of
phonologically similar words, and longer duration for integration of semantic information.
Complementary to this, the pupil dilation data show that early semantic integration reduces the
effort in disambiguating phonologically similar words. Hence, processing degraded speech
comes with increased effort due to the impoverished nature of the signal. Delayed integration of
semantic information further constrains listeners’ ability to compensate for inaudible signals.
These findings support the idea that, while indeed CI users may be able to make good use of
semantic context, this may come at the cost of increased effort, and further, the benefits for
enhancing speech perception may be limited due to increased processing time.
S17: INSIGHTS INTO AUDITORY-COGNITIVE PROCESSING IN OLDER
ADULTS WITH COCHLEAR IMPLANTS
Yael Henkin
Hearing, Speech, and Language Center, Sheba Medical Center, Tel Hashomer
Department of Communication Disorders, Sackler Faculty of Medicine and Sagol School of Neuroscience, Tel Aviv
University, Tel Aviv, Israel
An increasing number of older adults are habilitated by means of cochlear implants (CI).
For the older CI recipient, communication challenges are presumably heightened due to the
degraded input provided by the CI device and to age-related declines in auditory-cognitive
function. In the current series of studies, we investigated age-related and CI-related effects on
behavioral and neural manifestations of auditory-cognitive processing. Auditory event-related
potentials (AERPs) were recorded from multiple-site electrodes in older and young post-lingually
deafened adults with CI and in age-matched normal hearing (NH) listeners while performing a
high-load cognitive Stroop task. Participants were required to classify the speaker's gender
(male/female) that produced the words 'mother' and 'father' ('ima' and 'aba' in Hebrew) while
ignoring the irrelevant, congruent or incongruent, word meaning. A significant Stroop effect was
evident regardless of age and CI and manifested in prolonged reaction time to incongruent vs.
congruent stimuli.
Age-related effects were studied by comparing AERPs and behavioral measures of
young vs. older NH listeners and revealed similar performance accuracy and sensory-perceptual processing (N1 potential). In contrast, older NH listeners exhibited prolonged
reaction time and perceptual processing (P3 potential), reduced prevalence of neural events
reflecting inhibitory post-perceptual processing (N4 potential), and enhanced activation of
auditory areas as demonstrated by source localization analysis. Additionally, converging P3
latency data and reaction time data indicated that while young NH listeners employed a post-perceptual conflict processing strategy, older NH listeners employed a combined perceptual and
post-perceptual strategy.
CI-related effects were studied by comparing data of young NH listeners to that of young
CI recipients and revealed similar performance accuracy, reaction time, and sensory-perceptual
processing (N1). Conversely, CI recipients exhibited prolonged perceptual (P3) and inhibitory
post-perceptual (N4) processing. Moreover, differently from young NH listeners, young CI
recipients employed a combined perceptual and post-perceptual conflict processing strategy.
Comparisons between perceptual processing of young CI and older CI recipients and between
older NH and older CI recipients provided evidence for negative synergy where the combined
effect of age and CI was greater than their sum.
Taken together, in older CI recipients age- and CI-related effects manifested in effortful,
prolonged auditory-cognitive processing and in a differential conflict processing strategy
characterized by enhanced allocation of perceptual resources. From a clinical perspective, such
data may have implications regarding evaluation and rehabilitation procedures that should be
tailored specifically for this unique group of patients.
S18: THE IMPACT OF COCHLEAR IMPLANTATION ON SPATIAL HEARING
AND LISTENING EFFORT
Ruth Y Litovsky, Matthew Winn, Heath Jones, Alan Kan, Melanie Buhr-Lawler, Shelly
Godar, Samuel Gubbels
University of Wisconsin-Madison, Madison, WI, USA
As the criteria for providing cochlear implants (CIs) evolve, unique challenges and
opportunities arise. A growing number of adults with single-sided deafness (SSD) are electing to
receive a CI in the deaf ear, despite having access to excellent acoustic hearing in the other ear.
The potential benefits of implantation in SSD patients can arise through a number of auditory
and non-auditory mechanisms, the contributions of which remain to be understood. First,
improvement in spatial hearing abilities is a hallmark test of the benefits received from
integration of auditory inputs arriving at the two ears. In the case of SSD, the unique nature of
bilateral electric+acoustic (E+A) hearing might explain why the emergence of localization benefit
can take months or years. Performance is measured when patients are listening with either ear
alone, or in the bilateral E+A condition. While data suggest that the addition of the CI to the
normal acoustic hearing ear promotes improved localization, error patterns are indicative of
problems with fusion of the E+A signals. Results will be discussed in the context of parallel
measures in bilateral CI users.
Second, the ability to hear speech in noise is particularly interesting in the context of
spatial segregation of speech from background noise. While measures such as improvement
due to head shadow, binaural summation and squelch, could have relevance for SSD patients,
our data suggest that spatial release from masking (SRM) is a potentially more robust measure
for capturing speech unmasking effects due to E+A hearing. We quantify SRM by comparing
speech intelligibility under conditions in which target speech and background noise are either
co-located or symmetrically spatially separated. SRM is the measured benefit that can be
attributed to the difference in locations between the target and noise. SRM in SSD patients has
interesting parallels to the outcomes that are observed in children who are implanted with
bilateral CIs.
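SRM itself is a simple difference measure; in compact Python form (the example SRT values are placeholders, not study data):

    def spatial_release_from_masking(srt_colocated_db, srt_separated_db):
        # Positive SRM: the speech reception threshold (SRT) improves when target
        # and maskers are spatially separated rather than co-located.
        return srt_colocated_db - srt_separated_db

    print(spatial_release_from_masking(srt_colocated_db=2.0, srt_separated_db=-4.0))  # 6.0 dB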
Third, an integrated approach towards assessing the impact of implantation in SSD
patients lies in an experimental protocol in which auditory perception is measured alongside
objective measures of listening effort, or cognitive load exerted during the task. We use
pupillometry to measure listening effort while the patient is performing a speech perception task,
in order to track benefits or interference brought on by the addition of a CI to a unilateral listener
with normal-hearing in the other ear. Results will be discussed in the context of binaural sensory
integration and top-down processing.
S19: BETTER-EAR GLIMPSING INEFFICIENCY IN BILATERAL COCHLEAR-IMPLANT LISTENERS
Matthew J. Goupell1, Joshua Bernstein2, Douglas Brungart2
1 University of Maryland, College Park, MD, USA
2 Walter Reed National Military Medical Center, Bethesda, MD, USA
Bilateral cochlear-implant (BICI) users generally localize sounds better than unilateral CI
users, understand speech better in quiet, and obtain better-ear listening benefits in noise when
the target and maskers are spatially separated, but do not demonstrate the same significant
improvements in speech perception that normal-hearing (NH) listeners do from binaural
interactions. One often overlooked benefit of having two ears is the opportunity for “better-ear
glimpsing” in cases where a target signal is masked by interfering sounds with different
fluctuations in the two ears, such as the case where a target in front is masked by interferers
located symmetrically to the left and right. The purpose of this study was to investigate how
efficiently BICI listeners can perform better-ear glimpsing - i.e., rapidly analyzing small spectro-temporal bins in the acoustical signals and synthesizing across the ears the bins with the best
signal-to-noise ratio (SNR) (Brungart and Iyer, 2012).
Seven BICI and 10 NH listeners identified three sequential digits spoken by a target
talker at 0° in the presence of ±60° symmetrically placed interfering talkers. Stimuli were rendered
using generic head-related transfer functions (HRTFs) and presented via direct-line inputs (BICI)
or headphones (NH). NH listeners were tested with both unprocessed and eight-channel noise-vocoded signals. To measure glimpsing efficiency, three conditions were tested. In the unilateral
condition, only one ear received an HRTF-rendered stimulus. In the bilateral condition, both ears
received the HRTF-rendered stimuli, thereby providing better-ear glimpses that alternated
rapidly across the ears. In the better-ear condition, the stimuli were processed by identifying the
ear containing the best SNR for each time-frequency bin in the spectrogram. All of the bins
containing the better SNR were then presented to one ear, thereby automatically performing the
better-ear glimpsing and across-ear synthesis for the listener.
When presented with non-vocoded signals, performance for the NH listeners was about
30 percentage points better for the bilateral than for the unilateral condition, but performance did not
improve further with the better-ear processing. This suggests that the NH listeners efficiently
performed better-ear glimpsing, such that the automated processing did not confer additional
benefit. For the BICI listeners and NH listeners presented with vocoded signals, performance
was about 10 percentage points better for the bilateral than for the unilateral condition. But
unlike the NH listeners, the BICI and NH vocoder listeners received additional benefit from the
automated glimpsing, improving by an additional 20 percentage points in the better-ear
condition relative to the bilateral condition. These results indicate that BICI listeners are able to
perform better-ear glimpsing to a certain extent, but less efficiently than the NH listeners.
[Supported by NIH K99/R00-DC010206 (Goupell) and NIH P30-DC004664 (C-CEBH)].
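The automated better-ear processing lends itself to a compact sketch: for every time-frequency bin, take the mixture from whichever ear has the better SNR. The random arrays below merely stand in for STFT magnitude spectrograms, and the bin sizes and SNR definition are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(2)
    shape = (128, 200)                                      # (frequency bins, time frames)
    tgt_l, tgt_r = rng.random(shape), rng.random(shape)     # target magnitudes per ear
    msk_l, msk_r = rng.random(shape), rng.random(shape)     # masker magnitudes per ear

    better_is_left = (tgt_l / msk_l) >= (tgt_r / msk_r)     # per-bin SNR comparison
    # Assemble a single-ear signal from the better-SNR bins of the two mixtures.
    better_ear = np.where(better_is_left, tgt_l + msk_l, tgt_r + msk_r)
    print("fraction of bins taken from the left ear:", better_is_left.mean())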
S20: DIFFERENCES IN TEMPORAL WEIGHTING OF INTERAURAL TIME
DIFFERENCES BETWEEN ACOUSTIC AND ELECTRIC HEARING
Hongmei Hu1, David McAlpine2, Stephan D Ewert1, Mathias Dietz1
1 Medizinische Physik and Cluster of Excellence Hearing4all, Universität Oldenburg, Oldenburg, DEU
2 The Ear Institute, University College London, London, GBR
Azimuthal sound localization in reverberant environments is complicated by the fact that
sound reflections carry interaural differences that can be very different from those of the direct sound. A mixture of direct sound and reflections thus typically elicits quickly fluctuating, non-stationary interaural differences between the listener’s ears. This transforms localization into the
challenge of identifying and/or focusing on the interaural differences of the direct sound source.
A highly beneficial auditory processing strategy is to give a high weight to the interaural
differences at the onset of the stimulus or at the onset of each modulation cycle. During these
onsets, the direct-to-reverberant ratio is typically optimal, and the interaural differences in this
short moment are informative on the source location.
In amplitude-modulated sounds, such as speech, normal-hearing (NH) listeners indeed show a brief, strongly enhanced sensitivity to interaural time differences (ITDs) during the early rising portion of the modulation cycle, reflecting a higher weight on the temporal read-out for that
signal portion. Bilateral cochlear implant (BiCI) listeners can in principle use a similar auditory
processing strategy, if carrier phase information is preserved in the pulse timing and if pulse
rates are low enough that subjects can perceive ITDs.
Here we compare the temporal ITD read-out weighting of sinusoidally amplitude
modulated stimuli between NH and BiCI subjects with direct stimulation of a single electrode
pair. Experiments were performed with NH and BiCI subjects at stimulation rates where subjects
are sensitive to carrier ITDs (500 Hz and 200 pps respectively) and at modulation rates of 4 - 20
Hz. The results indicate that while NH listeners are more sensitive to ITDs applied to the
beginning of a modulation cycle, BiCI subjects are most sensitive to ITDs applied to the
modulation maximum.
The results have implications for future binaural CI processors: even if subjects are provided with perceptually exploitable ITD information, this does not necessarily allow them to localize in reverberant and other complex environments.
This work was funded by the European Union under the Advancing Binaural Cochlear Implant
Technology (ABCIT) grant agreement (No. 304912).
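The stimulus manipulation at issue can be illustrated as follows: a low-rate pulse train with a sinusoidal amplitude envelope, where the ITD is carried only by pulses in a chosen portion of the modulation cycle. All parameter values below are arbitrary examples, not the study's settings.

    import numpy as np

    rate_pps, f_mod, dur_s, itd_s = 200, 10.0, 0.5, 400e-6
    left = np.arange(0.0, dur_s, 1.0 / rate_pps)        # pulse times, left ear
    phase = (left * f_mod) % 1.0                        # position within the modulation cycle
    env = 0.5 * (1.0 - np.cos(2.0 * np.pi * phase))     # sinusoidal amplitude envelope

    in_rising = (phase > 0.0) & (phase < 0.25)          # early rising portion of the cycle
    right = left + np.where(in_rising, itd_s, 0.0)      # ITD applied only to those pulses
    print(f"{in_rising.sum()} of {left.size} pulses carry the {itd_s * 1e6:.0f}-us ITD")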
S21: SENSITIVITY TO INTERAURAL TIME DIFFERENCES IN THE INFERIOR
COLLICULUS OF AN AWAKE RABBIT MODEL OF BILATERAL COCHLEAR
IMPLANTS
Yoojin Chung, Kenneth E. Hancock, Bertrand Delgutte
Eaton-Peabody Laboratories, Massachusetts Eye and Ear Infirmary, Boston, MA, USA
Although bilateral cochlear implants (CIs) provide improvements in sound localization and
speech perception in noise over unilateral CIs, bilateral CI users’ sensitivity to interaural time
differences (ITD) is still poorer than normal. Single-unit studies in animal models of bilateral CIs
using anesthetized preparations show a severe degradation in neural sensitivity to ITD at
stimulation rates above 100 pps. However, temporal coding is degraded in anesthetized
preparations compared to the awake state (Chung et al., J Neurosci. 34:218). Here, we
characterize ITD sensitivity of single neurons in the inferior colliculus (IC) for an awake rabbit
model of bilateral CIs.
Four adult Dutch-belted rabbits were deafened and bilaterally implanted. Single unit
recordings were made from the IC over periods from 1 to 15 months after implantation. Stimuli
were periodic trains of biphasic electric pulses with varying pulse rates (20 - 640 pps) and ITDs
(-2000 to +2000 µs).
About 65% of IC neurons in our sample showed significant ITD sensitivity in their overall
firing rates based on an analysis of variance metric. Across the neuronal sample, ITD sensitivity
was best for pulse rates near 80-160 pps and degraded for both lower and higher pulse rates.
The degradation in ITD sensitivity at low pulse rates was caused by strong background activity
that masked stimulus-driven responses in many neurons. Selecting short-latency pulse-locked
responses by temporal windowing revealed ITD sensitivity in these neurons. With temporal
windowing, both the fraction of ITD-sensitive neurons and the degree of ITD sensitivity
decreased monotonically with increasing pulse rate.
We also computed neural just-noticeable differences (JND) in ITD using signal detection
theory. ITD JNDs based on overall firing rates were lowest (~200 µs on average) for pulse rates
near 160 pps. ITD JNDs could be improved by selecting short-latency pulse-locked spikes. With
temporal windowing, neural ITD JNDs were 100-200 µs on average, which is comparable to
perceptual JNDs in the better-performing human bilateral CI users. Unlike in anesthetized
preparations where ITD sensitivity at higher pulse rates was almost entirely based on the onset
response (< 20 ms), later responses contributed significantly to ITD sensitivity for all pulse rates
in awake rabbits.
In summary, using temporal windowing to isolate pulse-locked activity, the dependence
of ITD sensitivity on pulse rate in awake rabbit was in better agreement with perceptual data
from human CI users than earlier results from anesthetized preparations. Such windowing might
be implemented more centrally by coincidence detection across multiple IC neurons with similar
response latencies.
Funding: NIH grants R01 DC005775 and P30 DC005209, and Hearing Health Foundation.
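The signal-detection-theory JND computation mentioned above can be sketched compactly: compare trial-by-trial spike counts at a reference ITD with counts at probe ITDs, compute d', and take the smallest ITD reaching d' = 1. The simulated counts and the d' = 1 criterion here are illustrative; the study's actual analysis may differ in detail.

    import numpy as np

    def dprime(a, b):
        # d' between two spike-count distributions (pooled-variance form).
        return (b.mean() - a.mean()) / np.sqrt(0.5 * (a.var(ddof=1) + b.var(ddof=1)))

    rng = np.random.default_rng(3)
    itds_us = [0, 50, 100, 200, 400]
    # Simulated spike counts (50 trials per ITD) whose mean grows with ITD.
    counts = {itd: rng.poisson(10 + 0.02 * itd, size=50) for itd in itds_us}

    jnd = next((itd for itd in itds_us[1:]
                if abs(dprime(counts[0], counts[itd])) >= 1.0), None)
    print("neural ITD JND (us):", jnd)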
S22: EFFECTS OF INTRODUCING SHORT INTER-PULSE INTERVALS ON
BEHAVIORAL ITD SENSITIVITY WITH BILATERAL COCHLEAR IMPLANTS
Sridhar Srinivasan, Bernhard Laback, Piotr Majdak
Acoustics Research Institute, Austrian Academy of Sciences, Vienna, AUT
In normal hearing, interaural time differences (ITDs) at low frequencies are considered
important for sound localization and spatial speech unmasking in the lateral dimension. These
so-called fine-structure or carrier ITD cues are not encoded in commonly used envelope-based
stimulation strategies for cochlear implants (CI). In addition, even under laboratory control of the
timing of electric stimulation pulses, the ITD sensitivity is poor at the commonly used high pulse
rates. The present study aims to determine if and how electrical stimulation can be modified in
specific ways to better transmit ITD fine structure cues that result in better ITD sensitivity.
In this study, we measured the sensitivity of bilateral cochlear-implant (CI) listeners to
ITD cues when they were presented with unmodulated high-rate (1000 pulses/s) periodic pulse
trains overlaid with an additional pulse train at lower rates (50 to 200 pulses/s). The resulting
pulse train consisted of an occasional pulse doublet with a short interpulse interval (short IPI),
presented periodically. Six hundred millisecond pulse-train stimuli with on- and off-ramps were
presented binaurally. Participants performed a left/right discrimination task with a reference
stimulus and a target stimulus. While the reference stimulus was encoded with zero delay
between the two ears, resulting in a centralized binaural image, the target stimulus was
presented with an ITD leading to the right or left with respect to the reference stimulus. A high-rate periodic pulse train alone was presented during the control condition. The experimental
conditions comprised the presentation of the high rate pulse train and the occasional pulse
doublets with short IPI at periodic intervals of 5 ms, 10 ms and 20 ms. The extra pulses were
presented with a short-IPI ratio, i.e., the percentage of the interval between the high-rate pulses,
ranging from 6% to 50%. For comparison, a condition with a binaurally-coherent jitter of pulse
timing (Laback & Majdak, 2008, PNAS 105:814-817), which has been shown to yield large improvements in ITD sensitivity at high pulse rates, was also included.
Preliminary results show that the behavioral ITD sensitivity improved with the introduction
of the short IPI pulses, with the amount of improvement depending on the combination of the
low pulse rates and the short IPI ratios. These findings are in line with neurophysiological
reports (Buechel et al., 2015, CIAP Abstract W15) where increased firing rates and improved
ITD sensitivity were observed in the neurons of the inferior colliculus with the introduction of
pulses with short IPI. Our results indicate that a short-IPI-based stimulation strategy may
supplement or even replace the binaurally-coherent jitter stimulation previously proposed.
Supported by NIH Grant R01 DC 005775
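The pulse-timing construction described above is easy to state precisely; the sketch below follows the abstract's description, although the function and parameter names are ours.

    import numpy as np

    def short_ipi_train(dur_s, base_rate_pps=1000, doublet_period_s=10e-3, ipi_ratio=0.25):
        # Base high-rate train plus an extra pulse after every doublet-period
        # pulse, at a short inter-pulse interval expressed as a fraction of the
        # base interval (6% to 50% in the study).
        base_ipi = 1.0 / base_rate_pps
        base = np.arange(0.0, dur_s, base_ipi)
        extras = np.arange(0.0, dur_s, doublet_period_s) + ipi_ratio * base_ipi
        return np.sort(np.concatenate([base, extras]))

    pulses = short_ipi_train(dur_s=0.6)      # 600-ms stimulus, as in the experiment
    print(pulses.size, "pulses")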
S23: TOWARDS INDIVIDUALIZED COCHLEAR IMPLANTS: VARIATIONS OF
THE COCHLEAR MICROANATOMY
Andrej Kral
Institute of AudioNeuroTechnology, Medical University Hannover, Hannover, DEU
For minimizing cochlear trauma with cochlear implants, particularly for preservation of
residual hearing, the individual shape and size of the cochlea in a given patient need to be determined. Clinically, however, this is only possible using imaging techniques whose resolution is insufficient for the purpose. Therefore, only basic parameters of the cochlear form can be assessed in a given subject.
The present study analyzed the cochlear form in 108 human temporal bones post
mortem. For this purpose, frozen human temporal bones were used. After filling the cochlea with
epoxy and exposing it to vacuum for 5 min, the bones were stored for 8 hrs at room temperature to harden the epoxy. Subsequently, the bone was removed by storing the
specimen in alkali solution for 3 weeks. The resulting corrosion casts were mechanically
cleaned and photographed in 3 orthogonal directions using a custom-made micromechanical
holder with laser-controlled positioning and a Keyence VHX 600 digital microscope. The
resulting resolution was 12 µm per pixel. The images were analyzed using VHX-600 software
and ImageJ. More than 60 different parameters were manually measured in each cochlea. The
data were compared to data obtained with 30 temporal bones that were imaged in µCT with
similar resolution. The data obtained from the corrosion casts were used to fit a mathematical
3D spiral model.
The µCTs and corrosion casts corresponded very well and demonstrated that the
corrosion cast data were reliable. As in previous studies, the present study demonstrated a
high variance in many parameters including absolute metric and angular length, as well as in
wrapping factor. Notably, the B ratio, a parameter characterizing where the modiolar axis cuts
the base width axis of the cochlea, appeared to be partly related to the course of the basalmost
vertical profile: if the ratio was small, the vertical profiles had a more pronounced rollercoaster
course (vertical minimum in the first 180°); if it was large and close to 0.5, this vertical minimum
was small or absent. Furthermore, factor analysis revealed a relation between cochlear base
width and length with absolute metric length, but not with height of the cochlea. Finally, the
analytical model allowed us to fit the cochlear 3D shape with residuals < 1 mm using the
cochlear length and width and their intersection with the modiolar axis. The model was validated
using the leave-one-out cross-validation technique and demonstrated an excellent fit using only a
few parameters of a real cochlea. This demonstrates that the analytical model can be used to
predict, with high precision, the length (angular and metric) and the shape of the human cochlea
from imaging data.
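
The abstract does not specify the analytical model's form, so as a hedged illustration of spiral fitting in general, here is a planar stand-in: a logarithmic spiral r = a*exp(b*theta) fitted to (angle, radius) measurements by linear least squares on log r. All values below are synthetic:

    import numpy as np

    def fit_log_spiral(theta, r):
        """Least-squares fit of log r = log a + b * theta; returns (a, b)."""
        A = np.column_stack([np.ones_like(theta), theta])
        coef, *_ = np.linalg.lstsq(A, np.log(r), rcond=None)
        return np.exp(coef[0]), coef[1]

    # Synthetic "measurements": 2.5 turns, radius decaying toward the apex
    theta = np.linspace(0.1, 2.5 * 2 * np.pi, 50)
    r = 4.0 * np.exp(-0.11 * theta) * (1 + 0.02 * np.random.randn(50))
    a, b = fit_log_spiral(theta, r)
    print(f"a = {a:.2f} mm, b = {b:.3f} per radian")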
Supported by Deutsche Forschungsgemeinschaft (Cluster of Excellence Hearing4all).
S24: VISUALIZATION OF HUMAN INNER EAR ANATOMY WITH HIGH RESOLUTION
7 TESLA MAGNETIC RESONANCE IMAGING; FIRST CLINICAL APPLICATION
Annerie MA van der Jagt, Wyger M Brink, Jeroen J Briaire, Andrew Webb, Johan HM
Frijns, Berit M Verbist
Leiden University Medical Center, Leiden, NLD
Cochlear implantation requires detailed information about the microscopic anatomy for surgical
planning, morphological calculations, and prediction of functional success. In many centers MRI of the inner
ear and auditory pathway - performed on 1.5 or 3 Tesla systems - is part of the preoperative work-up of
CI-candidates. A higher magnetic field strength results in a higher signal-to-noise ratio that can be used
for more detailed imaging than previously possible. The visualization of delicate and small-sized inner
ear structures might benefit from such higher resolution imaging. However, imaging at such high field is
complicated and imaging quality is potentially hampered due to strong inhomogeneities in both the static
(B0) and the radiofrequency (RF; B1) magnetic fields. Due to this increased technical complexity specific,
anatomy-tailored protocol development is needed for high field scanners. In a previous study at our
institution, a scan protocol for inner ear scanning at 7 Tesla was developed in which the use of ear pads
containing dielectric material played a crucial role in improving the scan quality and visualization of the
inner ear [1]. The aim of this study, which is part of ongoing research at our institution, was to apply this
new scan protocol in a clinical setting and compare the visibility of inner ear anatomy with images
acquired at 3 Tesla.
A high-resolution T2-weighted spin-echo sequence for the inner ear was developed in healthy
volunteers on a 7T MRI system (Philips Healthcare, The Netherlands), with an isotropic resolution of
0.3mm, resulting in an acquisition duration of 10 minutes. Two high permittivity pads, which consisted of
a deuterated suspension of barium titanate, were positioned next to the ears to enhance the signal at the
location of the inner ear. The optimized protocol was applied to 13 patients with sensorineural hearing
loss, who also underwent 3T imaging. To compare 7T with 3T results two observers assessed 24
anatomical structures using a 4-point-grading scale for degree of visibility and the overall image quality.
Fine intracochlear anatomical structures, such as the osseous spiral lamina and the interscalar
septum, were identified in greater detail at 7 Tesla. Overall, the visibility of 11 out of the 24 anatomical
structures was rated higher at 7T in comparison with 3T. In some patients even the scala media and a
delicate branch of the superior vestibular nerve, the superior ampullary nerve, could be distinguished.
There was no significant difference in the overall quality rating or in the visibility of the remaining 13
anatomical structures, mainly due to a higher incidence of susceptibility-related image artifacts in the 7T images.
This study was the first to show how the high resolution achievable with 7 Tesla MRI enables and
even substantially improves the representation of the inner ear anatomy, and can contribute to improved
preoperative planning of cochlear implantation. Currently, we are investigating the potential advantages
of the increased visualization of the inner ear with 7 Tesla for evaluating localization of cochlear implant
electrode arrays, when combined with postoperative CT images. In such a way, the complementary
information of CT and 7 Tesla MRI can be used optimally.
This study was financially supported by Advanced Bionics.
1. Brink WM, van der Jagt AMA, Versluis MJ, Verbist BM, Webb AG. High Permittivity Dielectric Pads Improve High Spatial
Resolution Magnetic Resonance Imaging of the Inner Ear at 7 T. Invest Radiol. 2014;00(00):1-7.
S25: COMPARISON OF COCHLEAR IMPLANT OUTCOMES WITH CLINICAL,
RANDOM, AND IMAGE-GUIDED SELECTION OF THE ACTIVE ELECTRODE
SET
Jack Noble, Andrea Hedley-Williams, Linsey Sunderhaus, Rene Gifford, Benoit Dawant,
Robert Labadie
Vanderbilt University, Nashville, TN, USA
Cochlear implants (CIs) are arguably the most successful neural prosthesis to date. However, a
significant number of CI recipients experience marginal hearing restoration, and, even among the best
performers, restoration to normal fidelity is rare. We have developed a patient-customized CI processor
programming strategy we call Image-Guided CI Programming (IGCIP) and have shown that our IGCIP strategy
can significantly improve hearing outcomes. IGCIP relies on CT image processing techniques we have
developed that can be used to detect the intra-cochlear positions of implanted electrodes for individual CI
users. With IGCIP, outcomes are improved by customizing the active electrode set, i.e., deactivating a
specific subset of electrodes, to reduce channel interaction artifacts that can result from sub-optimal CI
positioning. A limitation of our prior studies, however, was that IGCIP maps were not tested blindly against
control maps. In this study, acute hearing performance with maps created using IGCIP was tested under double-blinded conditions against control maps in 4 long-term CI users.
For each of 4 experienced adult CI users, an experimental map was created by choosing the active
electrode configuration using our IGCIP methods. Three control maps were created by randomly
selecting the number and pattern of active electrodes, with at least 8 active. A map with the subject’s normal electrode
configuration was also created. Prior to testing with each map, the identity of the map was masked to both the
participant and the testing audiologist. Equal experience with each map was provided with presentation of a
pre-recorded 2-minute passage in a sound booth. Word and sentence recognition was assessed for each map
using CNC words/phonemes and AzBio sentences in Quiet and +10 dB signal-to-noise ratio (SNR). Subjects
were asked to rate each map on a 1-10 scale in terms of listening difficulty, vocal quality, clarity, and
naturalness. After testing, subjects were asked to rank order all maps from best to worst on the basis of these
four qualities.
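
A minimal sketch of how such a random control map could be drawn; the 22-electrode count and the seeding are illustrative assumptions, not the study's documented procedure:

    import random

    def random_active_set(n_electrodes=22, min_active=8, seed=None):
        """Randomly choose how many electrodes are active (>= min_active),
        then randomly choose which ones."""
        rng = random.Random(seed)
        n_active = rng.randint(min_active, n_electrodes)
        return sorted(rng.sample(range(1, n_electrodes + 1), n_active))

    print(random_active_set(seed=1))   # one random control configuration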
Mean respective speech recognition scores for the IGCIP, random, and normal configurations were as
follows: 1) 57.5%, 52.3%, and 51.5% for CNC words, 2) 71.2%, 68.8%, and 69% for CNC phonemes, 3)
64.5%, 60.8%, and 57.5% for AzBio in quiet, and 4) 34.2%, 36.9%, and 30.5% for AzBio at +10 dB SNR.
While speech recognition scores may be less reliable when measured in the acute condition with no long-term
experience, it is notable that the average scores with IGCIP maps were superior to the random and normal
configuration controls for three of four measures and superior to the normal configuration condition for all four
measures. Random configuration maps received lowest ranking and were rated most poorly, on average, for
each quality metric. In contrast, IGCIP maps were ranked and rated highest in terms of listening difficulty and
clarity. Normal configuration maps were ranked and rated highest in terms of vocal quality and naturalness,
which is not surprising given their long-term use.
These results hold both empirical and clinical significance, providing further validation that clinical
translation of our IGCIP technology can offer benefit even for experienced implant users. Even when tested
acutely, our double-blind tests show that maps created using the IGCIP strategy we have developed improve
outcomes with CIs over both normal and randomly selected electrode configurations. Further testing with
more subjects is necessary to definitively confirm these findings.
This work was supported in part by grant R01DC014037 from the NIDCD.
S26: CROSS MODAL PLASTICITY IN DEAF CHILDREN WITH COCHLEAR
IMPLANTS
David P. Corina, Shane Blau, Laurie Lawyer, Sharon Coffey-Corina, Lee Miller
Center for Mind and Brain, University of California, Davis, Davis, CA, USA
The goal of this study was to use event related potential (ERP) techniques to assess the
presence of cross-modal plasticity in deaf children with cochlear implants. There is concern that
under conditions of deafness, cortical regions that normally support auditory processing become
reorganized for visual function. The conditions under which these changes occur are not
understood. We collected ERP data from 22 deaf children (ages 1-8 years) with cochlear
implants.
Method. ERPs were collected using a BioSemi ActiveTwo system. Recordings were taken
at 22 electrode sites using the standard 10/20 system. Three additional external electrodes were
used: two recorded data from the left and right mastoids, and the third was placed below the left eye to
monitor eye movements. An experimenter sat beside all children during the testing session. Deaf
participants had their CIs turned off for the visual paradigm presentation. We used an auditory oddball paradigm (85% /ba/ syllables vs. 15% FM tone sweeps) to elicit a P1-N1 complex to assess
auditory function. We assessed visual evoked potentials in these same subjects using an
intermittent peripheral radial checkerboard while children watched a silent cartoon. This
condition was designed to elicit a P1-N1-P2 visual evoked potential (VEP) response. Using
published norms of auditory P1 latencies (Sharma & Dorman, 2006), we categorized deaf
children as showing normal (n = 14) or abnormal (n = 8) auditory development.
Results. Deaf children with abnormal auditory responses were more likely to have
abnormal visual evoked potentials (8/8) compared to deaf children with normal auditory
latencies (3/14). The aberrant responders showed a VEP offset response that was larger than
the VEP onset response. The VEP data also showed an unusual topographic distribution, with extension
to midline site Cz.
Conclusion. These data suggest evidence of cross-modal plasticity in deaf children with
cochlear implants. We discuss the contributions of signed and spoken language experience in
the expression of these results.
S27: ACOUSTICALLY-EVOKED AUDITORY CHANGE COMPLEX IN
CHILDREN WITH AUDITORY NEUROPATHY SPECTRUM DISORDER: A
POTENTIAL OBJECTIVE TOOL FOR IDENTIFYING COCHLEAR IMPLANT
CANDIDATES
Shuman He, John H Grose, Holly FB Teagle, Jennifer Woodard, Lisa R Park, Debora R
Hatch, Patricia Roush, Craig A Buchman
Department of Otolaryngology - Head & Neck Surgery, The University of North Carolina at Chapel Hill, Chapel Hill,
NC, USA
Background: Children with Auditory Neuropathy Spectrum Disorder (ANSD) present a
challenge for early intervention and habilitation due to the lack of robust indicators to guide
management of this population. This project evaluates the feasibility of using the
electrophysiological auditory change complex (ACC) to identify candidates for cochlear
implantation in children with ANSD. It tests the hypotheses that: 1) the ACC evoked by temporal
gaps can be recorded from children with ANSD; and 2) temporal resolution capabilities inferred
from ACC measures are associated with aided speech perception performance.
Methods: Nineteen children with ANSD, ranging in age from 1.9 to 14.9 years, participated in
this study. Electrophysiological recordings of the auditory event-related potential (ERP),
including the onset ERP response and the ACC, were completed in all subjects. Aided open-set
speech perception was evaluated for a subgroup of sixteen subjects. For the ERP measures,
the stimulus was an 800-ms Gaussian noise presented through ER-3A insert earphones. Two
stimulation conditions were tested: (1) In the standard condition, an 800-ms Gaussian noise was
presented to the test ear without any interruption; (2) In the gap condition, a silent period (i.e.
temporal gap) was inserted after 400 ms of stimulation. The gap duration was fixed at 5, 10, 20,
50, or 100 ms. The shortest gap that could reliably evoke the ACC response was defined as the
gap detection threshold. The aided open-set speech perception ability was assessed using the
Phonetically Balanced Kindergarten (PBK) word lists presented at 60 dB SPL using recorded
testing material in a sound booth.
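
The threshold rule can be stated compactly. The sketch below shows the intended logic; the presence judgments are invented for illustration, and in practice the presence of the ACC at each gap duration would come from the electrophysiological recordings:

    GAPS_MS = [5, 10, 20, 50, 100]

    def gap_detection_threshold(acc_present):
        """acc_present maps gap duration (ms) -> True if a reliable ACC
        was recorded; returns the shortest such gap, or None."""
        detected = [g for g in GAPS_MS if acc_present.get(g, False)]
        return min(detected) if detected else None

    print(gap_detection_threshold({5: False, 10: True, 20: True,
                                   50: True, 100: True}))   # -> 10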
Results: Robust onset cortical auditory evoked responses were recorded from all ANSD
subjects regardless of their aided speech perception performance. ACC responses elicited by
gap stimuli were also recorded from all ANSD subjects. However, subjects who exhibited limited
benefit from their hearing aids had longer gap detection thresholds than subjects who received
substantial benefit from their listening devices.
Conclusions: The ACC recordings can be used to objectively evaluate temporal
resolution abilities in children with ANSD. The ACC can potentially be used as an objective tool
to identify, among children with ANSD using properly fit amplification, poor performers who are
likely to be cochlear implant candidates.
Acknowledgments: This work was supported by grants from the NIH/NIDCD (1R21DC011383).
S28: MEASURING CORTICAL REORGANISATION IN COCHLEAR IMPLANT USERS
WITH FUNCTIONAL NEAR-INFRARED SPECTROSCOPY: A PREDICTOR OF
VARIABLE SPEECH OUTCOMES?
Douglas Edward Hugh Hartley, Carly Ann Lawler, Ian Michael Wiggins, Rebecca Susan
Dewey
Nottingham University, Nottingham, GBR
Whilst many individuals benefit from a cochlear implant (CI), some people receive less
benefit from their implant than others. Emerging evidence suggests that cortical reorganization
could be an important factor in understanding and predicting how much benefit an individual will
receive from their CI. Specifically, following deafness, cortical areas that would usually process
auditory information can reorganize and become more sensitive to the intact senses, such as
vision. Indeed, it has been shown that individuals with a CI rely on a heightened synergy
between audition and vision. Such findings highlight the importance of exploring and
understanding how the brain responds to auditory and visual information before and after an
individual receives their CI. Unfortunately, measuring cortical responses in CI recipients has to
date been challenging. Many established methods for non-invasive brain imaging in humans
can be plagued by electric and magnetic artefacts generated by the operation of the CI.
Functional near-infrared spectroscopy (fNIRS) is a flexible and non-invasive imaging technique
which, owing to its optical nature, is fully compatible with a CI. Furthermore, it is essentially
silent, which is advantageous for auditory research. Together, these advantages indicate that
fNIRS may provide a powerful tool to explore cortical reorganization during deafness and
following cochlear implantation.
At the NIHR Nottingham Hearing Biomedical Research Unit, we are using fNIRS to
examine cortical reorganization associated with deafness and cochlear implantation from
multiple perspectives. One strand focuses on the development of fNIRS as a tool to measure
'low-level' cross-modal reorganization, specifically how auditory brain regions can become more
responsive to visual and touch stimulation in deaf people compared with hearing controls.
Another strand uses fNIRS to examine how the brain responds to auditory and visual
components of speech before and after an individual receives their CI. The aim of this
longitudinal study is to understand how perceptual improvements in an individual’s ability to
understand speech with their CI relate to changes in cortical responsiveness. We are also using
fNIRS to examine the mechanisms through which the brains of normal hearing listeners
combine information across the senses, and to understand the potential impact of auditory
deprivation and cochlear implantation on these mechanisms. By developing fNIRS as a tool to
study how the brain responds to multisensory stimulation before and after cochlear implantation,
we aim to provide valuable insights into the reasons for variable CI outcomes, and ultimately to
develop clinically useful prognostic and rehabilitative tools.
This work is supported by the University of Nottingham, the Medical Research Council and the
National Institute for Health Research.
S29: BRAIN PLASTICITY DUE TO DEAFNESS AS REVEALED BY fNIRS IN
COCHLEAR IMPLANT USERS
Colette M McKay1, Adnan Shah1,2, Abd-Krim Seghouane2, Xin Zhou1, William Cross1,3,
Ruth Litovsky4
1 The Bionics Institute of Australia, Melbourne, AUS
2 The University of Melbourne, Department of Electrical and Electronic Engineering, Melbourne, AUS
3 The University of Melbourne, Department of Medicine, Melbourne, AUS
4 The University of Wisconsin-Madison, Waisman Center, Madison, WI, USA
Many studies, using a variety of imaging techniques, have shown that deafness induces
functional plasticity in the brains of adults with late-onset deafness. Cross-modal plasticity refers
to evidence that stimuli of one modality (e.g. vision) activate neural regions devoted to a
different modality (e.g. hearing) that are not normally activated by those stimuli. Other studies
have shown that multimodal brain networks (such as those involved in language
comprehension, and the default mode network) are altered by deafness, as evidenced by
changes in patterns of activation or connectivity within the networks. In this presentation, we
summarize what is already known about brain plasticity due to deafness and propose that
functional near-infra-red spectroscopy (fNIRS) is an imaging method that has potential to
provide prognostic and diagnostic information for cochlear implant users. As a non-invasive,
inexpensive and user-friendly imaging method, fNIRS provides an opportunity to study both pre- and post-implantation brain function.
In our lab, we are currently using fNIRS to compare the resting state functional
connectivity in age-matched groups (N = 15) of normal-hearing listeners and cochlear implant
users. Preliminary data show reduced hemispheric connectivity in CI users compared to normal-hearing listeners. In the same subjects, we are comparing task-related activation patterns in the
two groups in language-associated areas of the cortex while subjects are listening to or
watching speech stimuli. We are exploring the data to find group differences in activation or
connectivity patterns across the cortical language pathways. In this way, we aim to find potential
objective markers of plasticity due to deafness and implant use and to determine which are
correlated with speech understanding ability in implant users.
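
As a generic illustration (not this lab's actual pipeline), resting-state hemispheric connectivity of the kind compared here is often quantified as the correlation between homologous left- and right-hemisphere channel time courses; all data below are synthetic:

    import numpy as np

    def hemispheric_connectivity(left, right):
        """left, right: (channels, samples) arrays of fNIRS time courses.
        Returns the mean correlation across homologous channel pairs."""
        rs = [np.corrcoef(l, r)[0, 1] for l, r in zip(left, right)]
        return float(np.mean(rs))

    rng = np.random.default_rng(1)
    shared = rng.standard_normal((4, 300))            # common network signal
    left = shared + 0.5 * rng.standard_normal((4, 300))
    right = shared + 0.5 * rng.standard_normal((4, 300))
    print(round(hemispheric_connectivity(left, right), 2))   # ~0.8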
S30: RINGING EARS: THE NEUROSCIENCE OF TINNITUS
Larry E. Roberts1, Brandon Paul1, Daniel Bosnyak1, and Ian Bruce2
1 Department of Psychology, Neuroscience and Behaviour
2 Department of Electrical and Computer Engineering
McMaster University, Hamilton, Ontario, Canada
Tinnitus (chronic ringing of the ears) affects quality of life for millions around the world.
Most cases are associated with hearing impairment detected by the clinical audiogram or by
more sensitive measures. Deafferentation by cochlear damage leads to increased neural gain
in auditory pathways and to increased oscillatory activity in corticothalamic networks that may
underlie tinnitus percepts. Nonauditory brain regions involved in attention and memory are also
disinhibited in tinnitus. While the mechanism of this effect is not well understood, disinhibition
may occur when spectrotemporal information conveyed to the auditory cortex from the damaged
ear does not match that predicted by central auditory processing. We will describe a qualitative
neural model for tinnitus based on this idea that appears able to explain current findings as well
as new results obtained from electromagnetic imaging of human subjects when tinnitus was
suppressed by forward masking. Novel off-frequency masking effects discovered in this
research also appeared to be explicable within the same framework. Neuromodulatory systems
activated by failure of prediction may play a role in fostering distributed changes that occur in
the brain in tinnitus, including changes documented by animal models and human functional
imaging in afferent and efferent auditory pathways. At present almost nothing is known about
the role of neuromodulatory systems in tinnitus, although several laboratories have expressed
interest in the question.
A challenge for deafferentation models going forward is to explain why some individuals
with clinically normal hearing experience tinnitus, while other individuals with substantial
threshold shifts do not have tinnitus. One point of view has interpreted these cases to imply a
role for nonauditory brain regions in generating tinnitus when clinical hearing loss is absent, or
suppressing tinnitus when hearing loss is present. Alternatively, accumulating evidence
supports the view that these cases may reflect hidden cochlear pathology that is not expressed
in the clinical audiogram. In our research we have observed reduced cortical and midbrain
responses to AM sounds in individuals with tinnitus compared to age- and hearing-level-matched
controls. Modeling of the auditory nerve response suggests that damage to low spontaneous
rate, high threshold auditory nerve fibers combined with ~30% loss of high spontaneous rate,
low threshold fibers can account for the different brain responses observed in tinnitus, with little
or no effect expected on hearing thresholds. Forward masking briefly increases cortical and
subcortical brain responses during tinnitus suppression but not in control subjects, through
mechanisms that are at present unknown.
Supported by NSERC of Canada
S31: COMPARISON OF COCHLEAR IMPLANT WITH OTHER TREATMENT
OPTIONS FOR SINGLE-SIDED DEAFNESS
Thomas Wesarg, Antje Aschendorff, Roland Laszig, Frederike Hassepass, Rainer Beck,
Susan Arndt
Department of Otorhinolaryngology, University Medical Center Freiburg, Germany
There are various treatment options for subjects with single-sided deafness (SSD): no
treatment, treatment with a conventional contralateral routing of signal (CROS) hearing aid,
bone-anchored hearing system (BAHS), or cochlear implant (CI).
The aim of this investigation was to assess and compare the benefit of these treatment options
in adult subjects with acquired and congenital SSD.
One hundred and one adult subjects with SSD were included in this investigation. The pre-treatment
investigation covered the assessment of speech recognition in noise, sound localization ability and
self-reported auditory disability in the unaided situation as well as after testing a CROS aid and
a BAHS coupled to a softband. A CI was recommended if the patients met our inclusion criteria
for CI: duration of deafness < 10 years and an intact auditory nerve. These assessments were
also administered after 12 months of device use. Speech recognition in noise was measured in
three spatial presentation conditions using the HSM sentence test (Hochmair-Desoyer et al.,
1997). In the localization tests, sentences of the Oldenburg sentence test (Wagener et al., 1999)
were used as stimuli. For the assessment of self-reported auditory disability the Speech, Spatial
and Qualities of Hearing Scale (SSQ) (Gatehouse and Noble, 2004) was administered.
Twenty-five subjects were satisfied with their unaided condition and did not want to wear
external devices. The majority of subjects (76) decided in favor of a device treatment option and
received a CI (45), a BAHS (19), or a CROS aid (12). After 12 months of device
experience, the CI users showed significantly better aided speech recognition in noise and
localization ability compared to the BAHS and CROS aid users. The BAHS subjects showed
better localization ability and a slight tendency to better speech recognition in noise compared to
the CROS aid subjects. For all device treatment groups, SSQ results revealed an improvement
of self-reported auditory disability after 12 month of device use compared to unaided situation.
In adult subjects with single-sided deafness, a cochlear implant offers significantly better
speech recognition and localization ability after 12 months of device use compared to a CROS
hearing aid or bone-anchored hearing system. A BAHS is an alternative option if patients do not
meet the inclusion criteria for a CI, or if they do not want cochlear implantation. Another
alternative option is a CROS aid if patients do not meet CI inclusion criteria or do not wish to
undergo surgery at all.
S32: BINAURAL UNMASKING FOR COCHLEAR IMPLANTEES WITH SINGLE-SIDED DEAFNESS
Joshua G.W. Bernstein1, Matthew J. Goupell2, Gerald I. Schuchman1, Arnaldo L. Rivera1,
Douglas S. Brungart1
1 Walter Reed National Military Medical Center, Bethesda, MD, USA
2 University of Maryland, College Park, MD, USA
Having two ears allows normal-hearing listeners to take advantage of head-shadow
effects by selectively attending to the ear providing the best signal-to-noise ratio (the “better-ear”
advantage) and provides access to binaural-difference cues for sound localization and the
perceptual segregation of spatially separated sources. Cochlear implants (CIs) have been
shown to improve speech perception in noise for individuals with single-sided deafness (SSD;
i.e., one deaf ear and one normal-hearing ear). However, most of the reported benefits appear to be
attributable to better-ear advantages. It is not known whether SSD-CI listeners can make use of
spatial differences between target and masker signals to organize the auditory scene.
We present evidence suggesting that SSD-CI listeners might, in fact, be able to take
advantage of binaural-difference cues that facilitate concurrent speech-stream segregation in
certain situations. SSD-CI listeners completed a task requiring them to segregate a target talker
from one or two masking talkers in the acoustic-hearing ear. The CI ear was presented with
silence or with a mixture containing only the maskers, thereby testing whether listeners could
combine the masker signals across the two ears to unmask the target, as occurs for normal-hearing listeners. Presenting the maskers to the CI improved performance in conditions
involving one or two interfering talkers of the same gender as the target, but did not improve performance in
conditions involving opposite-gender interferers. This result suggests that the CI can produce
masking release when limited monaural cues are available for target-masker segregation.
As is typical of CI outcomes, the observed amount of binaural unmasking was variable
across individuals. One possible factor that might limit binaural unmasking for individual SSD-CI
listeners is that the shorter length of the CI array relative to the cochlear duct tends to produce a
mismatch between the places of stimulation in the acoustic and CI ears. The presentation will
conclude by discussing clinical attempts to provide SSD-CI listeners with frequency allocation
maps that reduce interaural mismatch, and will report on efforts to develop a measure of
interaural time-difference sensitivity to estimate the place of stimulation for a given CI electrode.
In summary, bilateral hearing via a CI for SSD listeners can partially restore the ability to
make use of differences in the signals arriving at the two ears to more effectively organize an
auditory scene. This benefit might be enhanced by adjusting the CI frequency-allocation table to
reduce interaural spectral mismatch.
S33: BINAURAL PITCH INTEGRATION WITH COCHLEAR IMPLANTS
Lina AJ Reiss
Oregon Health & Science University, Portland, OR, USA
In normal-hearing (NH) listeners, the two ears provide essentially matched spectral
information and thus sensory reinforcement to each other to average out noise. In contrast,
cochlear implant (CI) users often have interaural pitch mismatches due to the CI programming,
where real-world frequencies allocated to the electrodes differ from the electrically stimulated
cochlear frequencies. Thus, bimodal CI users, who use a CI together with a hearing aid in the
contralateral, non-implanted ear, often have a pitch mismatch between the electrically
stimulated pitches and the acoustic hearing. Similarly, bilateral CI users often have a pitch
mismatch between electric hearing in the two ears, as electrode arrays in the two ears typically
differ in insertion depth but not in the frequencies allocated to the electrodes.
Previous studies have shown that pitch perception adapts over time to reduce these
discrepancies in some, but not all CI users (Reiss et al., 2007, 2011). Here we present findings
in both bimodal and bilateral CI adults showing broad binaural pitch fusion, i.e., fusion of
inharmonic, dichotic sounds that differ in pitch between ears by as much as 2-3 octaves. These
fusion ranges are broader than the ranges of 0.1-0.3 octaves typically seen in normal-hearing
listeners, and prevent the perception of interaural pitch mismatch.
Instead, the different pitches are fused and averaged into a new single pitch, consistent
with studies of multi-input integration in other sensory modalities (Reiss et al., 2014). This
finding suggests that broad fusion leads to integration of mismatched rather than matched
spectral information between ears. Broad binaural pitch fusion and averaging may thus explain
speech perception interference sometimes observed with binaural compared to monaural
hearing device use, as well as limit the benefits of bimodal or bilateral CIs for sound localization
and spatial release from masking.
Supported by NIH-NIDCD grants P30DC010755 and R01 DC013307. Research equipment was
provided by Cochlear and MED-EL.
S34: SPEECH PERCEPTION AND BIMODAL BENEFIT IN QUIET FOR CI
USERS WITH CONTRALATERAL RESIDUAL HEARING
Mario A Svirsky, Susan B Waltzman, Kinneri Mehta, Ksenia Aaron, Yixin Fang, Arlene C
Neuman
New York University School of Medicine, New York, NY, USA
A retrospective review was undertaken of longitudinal speech perception data from 57
bimodal patients (cochlear implant in one ear and hearing aid in the contralateral ear). Speech
recognition scores (CNC words in quiet) were analyzed for the implanted ear only (CI), acoustic
hearing ear only (HA), and bimodal (CI + HA) to determine changes in performance over time
and to characterize bimodal benefit (difference between bimodal score and best unimodal
score). These measures were analyzed as a function of eight preoperative variables: age at
implantation, reported age at onset of hearing loss, length of hearing aid use in each ear, length
of hearing loss before implantation, as well as aided thresholds, unaided thresholds and aided
word identification score in the HA ear.
Performance in the implanted ear increased over time. Performance in the HA ear
remained stable for the majority of patients, although performance with the HA dropped for
about 30% of the patients. These drops were not due to HA malfunction, and were not
associated with decreases in audiometric thresholds or with any of the preoperative variables
listed above. There was a wide range of bimodal benefit from +38% to -14% (the negative sign
indicates bimodal interference, i.e., the bimodal score was lower than the best unimodal score).
The average bimodal benefit (approximately 6%) was statistically significant. Normalized
bimodal benefit (the ratio between actually achieved bimodal benefit and maximum possible
bimodal benefit) was likely to be greater when the CI and HA scores were similar, or at least not
too different. In particular, in cases where the CI score exceeded the HA score by 50 percentage
points or more, bimodal interference was just as likely as bimodal benefit. We also examined
normalized bimodal benefit as a function of CI scores and as a function of HA scores. The
higher the HA score, the higher was the likelihood of observing bimodal benefit. There was no
significant correlation between normalized bimodal benefit and CI score, but we observed an
inverted-U pattern in the data, where the average level of bimodal benefit was 29% when CI
scores were between 25% and 70%, and about half that amount when CI scores were either
lower than 25% or higher than 70%.
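
A worked sketch of the normalized measure, assuming (our reading of the abstract, not a stated formula) that "maximum possible bimodal benefit" is the headroom between the best unimodal score and the 100% ceiling:

    def normalized_bimodal_benefit(ci_score, ha_score, bimodal_score):
        """Scores in percent correct; a negative result = bimodal interference."""
        best_unimodal = max(ci_score, ha_score)
        benefit = bimodal_score - best_unimodal
        headroom = 100.0 - best_unimodal
        return benefit / headroom if headroom > 0 else 0.0

    # 10 points gained out of a possible 40 -> 0.25
    print(normalized_bimodal_benefit(ci_score=60, ha_score=40, bimodal_score=70))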
Taken together, these results suggest that most patients are able to combine acoustic
and electrical information to improve their speech perception, but that the amount of bimodal
benefit can vary substantially.
Supported by NIH-NIDCD grant R01- DC011329.
S35: USING NEURAL RESPONSE TELEMETRY (NRT) TO MONITOR
RESPONSES TO ACOUSTIC STIMULATION IN HYBRID CI USERS
Paul J Abbas, Viral Tejani, Carolyn J Brown
University of Iowa, Iowa City, IA, USA
Today, many individuals who present for cochlear implant surgery have significant
amounts of low-frequency hearing. Novel electrode arrays have been developed with the goal of
increasing the chance that acoustic hearing can be preserved. This study focuses on using
intracochlear electrodes that are part of the implanted array to record acoustically evoked
responses from residual hair cells and/or auditory neurons. Potential uses for such measures
include monitoring hearing status over time and assessing acoustic-electric interactions at the
auditory periphery.
In general, users of the Nucleus Hybrid cochlear implant show improved performance
when they are able to combine electric and acoustic input compared to when they only use
acoustic input. However, cochlear implantation carries a significant risk of loss of acoustic
hearing. That hearing loss can occur immediately after surgery but often only occurs after
several months of implant use. A method of monitoring auditory function at the level of the
cochlea (hair cell, synaptic or neural function) could be helpful in diagnosing the specific cause
of this delayed hearing loss.
We use the NRT software to record electrical potentials from an intracochlear electrode.
We synchronize the neural recordings to acoustic stimuli (clicks and/or tone bursts) presented
through an insert earphone. The averaged responses are then combined offline. Here we report
data from 30 individuals who use the Hybrid S8, S12, L24 or standard CI422 implants.
By adding the responses to opposite polarity stimuli we minimize the cochlear
microphonic (CM) and emphasize the neural component of the response. Subtracting the
responses to opposite polarity stimuli emphasizes the CM. Neural responses are recorded at
stimulus onset (CAP) in response to both clicks and tone bursts but are also evident as a phase-locked response to tone bursts. Our data show that CM and CAP thresholds are strongly
correlated with audiometric thresholds. Additionally, we have evaluated the extent to which the
summed and difference responses separate hair-cell and neural responses by evaluating the
effects of adaptation at low vs. high rates of stimulation. The CM component shows relatively
little adaptation compared to the CAP.
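
The polarity arithmetic is easy to demonstrate with synthetic waveforms: the CM flips with stimulus polarity while the neural response does not, so the average of condensation and rarefaction responses isolates the neural component and their half-difference isolates the CM. A toy sketch, not our recording parameters:

    import numpy as np

    t = np.arange(0, 0.01, 1e-5)                   # 10 ms at 100 kHz
    cm = 0.5 * np.sin(2 * np.pi * 500 * t)         # CM follows stimulus polarity
    neural = np.exp(-((t - 0.002) / 0.0004) ** 2)  # polarity-invariant CAP-like wave

    resp_cond = neural + cm                        # condensation response
    resp_rare = neural - cm                        # rarefaction: CM inverted

    neural_est = 0.5 * (resp_cond + resp_rare)     # CM cancels
    cm_est = 0.5 * (resp_cond - resp_rare)         # neural part cancels
    print(np.allclose(neural_est, neural), np.allclose(cm_est, cm))  # True True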
We conclude that NRT can be used to record responses to acoustic stimulation from an
intracochlear electrode. The responses tend to be quite repeatable over time. We will discuss
potential uses in terms of monitoring hearing loss, providing a better understanding of the
underlying causes of hearing loss and also in providing a method of directly assessing acoustic
and electric interactions at the level of the auditory periphery.
Work supported by NIH-NIDCD: P50-DC000242.
S36: DIRECT INTRACOCHLEAR ACOUSTIC STIMULATION USING A PZT
MICROACTUATOR
IY Steve Shen3, Clifford R Hume1,2, Luo Chuan3, Irina Omelchenko1, Carol Robbins1,
Elizabeth C Oesterle1, Robert Manson3, Guozhong Cao4
1 Virginia Merrill Bloedel Hearing Research Center, Department of Otolaryngology Head and Neck Surgery
2 VA Puget Sound
3 Department of Mechanical Engineering
4 Department of Materials Science
University of Washington, Seattle, WA, USA
Combined electric and acoustic stimulation has proven to be an effective strategy to improve
hearing in some cochlear implant users. We describe our continued work to develop an intracochlear acoustic actuator that could be used as a component of a single integrated acoustic-electric electrode array. The acoustic actuator takes the form of a silicon membrane driven by a
piezoelectric thin film (e.g., lead-zirconium-titanium oxide or PZT) that is 800 microns by 800
microns wide, with a diaphragm thickness of 1 µm in silicon and 1 µm in PZT. In the current study,
we established an acute guinea pig model to test the actuator for its ability to deliver auditory
stimulation to the cochlea in vivo.
A series of PZT microactuator probes was fabricated and coated with parylene. The probes
were tested in vitro using laser Doppler vibrometry to measure the velocity of the diaphragm when
driven by a swept-sine voltage from 0-100 kHz. A spectrum analyzer was used to obtain a frequency
response function (FRF) and determine the natural frequency and gain of the FRF. The impedance
and phase angle of each probe were also measured to assess current leakage.
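
For readers unfamiliar with FRF estimation, here is a generic sketch using the standard H1 = Sxy/Sxx estimator on synthetic data; the sample rate, sweep range, and the toy "measurement" are our assumptions, not the analyzer settings used in the study:

    import numpy as np
    from scipy.signal import chirp, csd, welch

    fs = 200_000                                   # assumed sample rate (Hz)
    t = np.arange(0, 1.0, 1 / fs)
    drive = chirp(t, f0=100, f1=90_000, t1=1.0)    # swept-sine drive voltage
    velocity = 0.8 * drive + 0.01 * np.random.randn(t.size)  # toy measurement

    f, Sxy = csd(drive, velocity, fs=fs, nperseg=4096)
    _, Sxx = welch(drive, fs=fs, nperseg=4096)
    frf = Sxy / Sxx                                # H1 = cross / input spectrum
    band = (f > 1_000) & (f < 50_000)
    print(f"mean |FRF| in band: {np.abs(frf[band]).mean():.2f}")  # ~0.8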
Nine guinea pigs were used for a series of in vivo tests of the microactuators. ABR
measurements with 4, 8, 16 and 24 kHz pure-tone bursts were obtained prior to surgery and after each
subsequent manipulation. A baseline ABR was measured in response to stimuli delivered via an ear
level microphone. Using a dorsal approach, the PZT probes were placed through the round window
into the basal turn of the cochlea. An oscilloscope was used to determine calibrated voltage values
and acoustic stimuli across the same frequencies were delivered via the actuator. A mechanically
non-functional probe was used to assess current leakage. In some animals, an ear level ABR was
also obtained after removal of the probe to assess loss of hearing related to the procedure. Wave I
peak latencies, interpeak latencies from wave I to III and wave I amplitude growth were calculated
for both auditory and microactuator stimulation to compare stimulation via an ear canal microphone
vs the intracochlear PZT probe. In some animals, the temporal bone was harvested for histologic
analysis of cochlear damage.
We show that the device has the desired response characteristics in vitro and is capable of
stimulating auditory brainstem responses in vivo in an acute guinea pig model with latencies and
growth functions comparable to stimulation in the ear canal. Our results suggest that this approach
may prove effective and minimally traumatic. Further experiments will be necessary to
evaluate the efficiency and safety of this modality in long-term auditory stimulation and its ability to
be integrated with conventional cochlear implant arrays.
Support provided by the National Science Foundation, NIH/NIDCD, and the Veterans Administration.
S37: FROM AUDITORY MASKING TO SUPERVISED SEPARATION: A TALE
OF IMPROVING INTELLIGIBILITY OF NOISY SPEECH FOR HEARING-IMPAIRED LISTENERS
DeLiang Wang
Ohio State University, Columbus, OH, USA
Speech separation, or the cocktail party problem, is a widely acknowledged challenge. A
solution to this problem is especially important for hearing-impaired listeners, including cochlear
implant users, given their particular difficulty in speech understanding in noisy backgrounds.
Part of the challenge stems from confusion about what the computational goal should be.
Motivated by the auditory masking phenomenon, we have suggested time-frequency (T-F)
masking as a main means for speech separation. This leads to a new formulation of the
separation problem as a supervised learning problem. With this new formulation, we estimate
ideal T-F masks by employing deep neural networks (DNNs) for binary classification or function
approximation. DNNs are capable of extracting discriminative features through training on a
variety of noisy conditions. In recent intelligibility evaluations, our DNN-based monaural
separation system produces the first demonstration of substantial speech intelligibility
improvements for hearing-impaired (as well as normal-hearing) listeners in background noise.
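
The ideal binary mask that such systems are trained to estimate keeps a time-frequency unit when its local SNR exceeds a local criterion (LC). A minimal NumPy sketch; the -5 dB criterion and the toy power values are illustrative:

    import numpy as np

    def ideal_binary_mask(speech_power, noise_power, lc_db=-5.0):
        """0/1 mask over T-F units: 1 where local SNR exceeds the criterion."""
        snr_db = 10 * np.log10(speech_power / np.maximum(noise_power, 1e-12))
        return (snr_db > lc_db).astype(float)

    s = np.array([[1.0, 0.02], [0.5, 0.01]])       # toy speech power
    n = np.array([[0.2, 0.2], [0.2, 0.2]])         # toy noise power
    print(ideal_binary_mask(s, n))                 # [[1. 0.] [1. 0.]]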
[Work supported by NIH.]
S38: POLARITY EFFECTS IN COCHLEAR IMPLANT STIMULATION:
INSIGHTS FROM HUMAN AND ANIMAL STUDIES
Olivier Macherey1, Gaston Hilkhuysen1, Robert P Carlyon2, Roman Stephane3, Yves
Cazals4
1 Laboratoire de Mécanique et d'Acoustique, Marseille, FRA
2 MRC Cognition and Brain Sciences Unit, Cambridge, GBR
3 University Hospital La Timone, Aix-Marseille Univ., Marseille, FRA
4 Laboratoire de Neurosciences Intégratives et Adaptatives, Marseille, FRA
Previous studies have shown that modifying the shape of electrical pulses delivered by
cochlear implant (CI) electrodes can increase the efficiency of stimulation and improve the
perception of place and temporal pitch cues. Those studies were based on the observation that
human auditory nerve fibers respond differently to each stimulus polarity, showing higher sensitivity
to anodic stimulation. However, this observation contrasts with the results of most animal studies
showing higher sensitivity to cathodic stimulation. Here, we provide new observations on the
mechanisms underlying the effect of polarity in CI subjects and discuss implications for speech
coding strategies.
In Experiment 1, detection thresholds, most comfortable levels and loudness growth were
measured in users of the Cochlear device. Stimuli were trains of quadraphasic pulses presented on
a single monopolar channel. These pulses consisted of two biphasic pulses presented in short
succession in such a way that the two central phases were either anodic (QPA) or cathodic (QPC).
At threshold, the results were variable with some subjects being more sensitive to QPA and others
to QPC. At comfortable loudness, all subjects showed higher sensitivity to QPA, consistent with
previous data. Loudness ranking revealed that, for QPC, loudness sometimes grew non-monotonically with current level. By contrast, these non-monotonicities never occurred for QPA. This
non-monotonic behavior may reflect the conduction block of action potentials along the nerve fibers.
This conduction block in turn provides a possible explanation for the higher supra-threshold
sensitivity to anodic stimulation reported here and in previous studies.
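
For concreteness, here is a sketch of the two quadraphasic shapes: two back-to-back biphasic pulses whose central phases share a polarity, anodic for QPA and cathodic for QPC. Phase duration, amplitude, and sample rate are illustrative, not the study's values:

    import numpy as np

    def quadraphasic(polarity="QPA", phase_us=25, amp=1.0, fs=1_000_000):
        """Four equal phases: c-a-a-c for QPA, a-c-c-a for QPC."""
        n = int(phase_us * 1e-6 * fs)              # samples per phase
        central = amp if polarity == "QPA" else -amp
        phases = [-central, central, central, -central]
        return np.concatenate([np.full(n, p) for p in phases])

    qpa, qpc = quadraphasic("QPA"), quadraphasic("QPC")
    print(qpa.sum(), qpc.sum())                    # both 0.0: charge-balanced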
Additional measures performed on other electrodes showed that the effect of polarity at
threshold could vary across electrodes within the same subject. This observation suggests that
polarity sensitivity at threshold may depend on local properties of the electro-neural interface.
Modeling studies have proposed that neural degeneration could have an impact on the effect of
polarity and that degenerated fibers may be more sensitive to anodic stimulation. This hypothesis
was tested in Experiment 2 using ten guinea pigs that were chemically deafened and implanted with
an electrode in the first turn of the cochlea. The inferior colliculus evoked potential was measured in
response to a wide range of pulses differing in shape, polarity and current level one week after
deafening and at regular intervals up to one year after deafening. In a large majority of cases, the
response was larger for cathodic than for anodic pulses. However, the effect of polarity did not
change over time, suggesting that neural degeneration cannot entirely account for the higher
efficiency of anodic stimulation. Interestingly, in a few cases, neural growth functions were non-monotonic. These non-monotonicities were more frequent for cathodic stimulation but could also
occur for anodic stimulation.
Contemporary CI coding strategies use symmetric biphasic pulses for which both phases
may elicit action potentials. It is therefore possible that non-monotonic neural growth also affects
some nerve fibers stimulated in clinical CI strategies. Such non-monotonicities would be expected to
distort the modulations in speech and to impair speech perception. This hypothesis was explored in
Experiment 3 and results of speech intelligibility in noise will be presented for different pulse shapes.
S39: MODELED NEURAL RESPONSE PATTERNS FROM VARIOUS SPEECH
CODING STRATEGIES
Johan H.M. Frijns, Margriet van Gendt, Randy K. Kalkman, Jeroen J. Briaire
ENT-department, Leiden University Medical Center, Leiden, NLD
Cochlear implants are able to provide reasonably good speech understanding in quiet
situations. In noisy (real-life) situations, however, performance is much reduced. Other
limitations are found in the encoding of music and of pitch accents in tonal languages. In spite of
the many different speech coding strategies introduced in the last decade, no major advances in
performance have been made since the introduction of the CIS strategy.
New strategies are commonly evaluated by means of psychophysical experiments and
clinical trials. Alternatively, strategies can be investigated using computational models. There
are two different approaches in this field: models that focus on temporal aspects (e.g.,
stochastic single node population models), and models that mainly study the effects of the
cochlear geometry and electrode-to-neural interface. The goal of the current study is to combine
both temporal and structural aspects in one model which allows the investigation of the interplay
of both aspects in full speech coding strategies.
The most recent version of our 3D human geometric model of the implanted cochlea with
spatially distributed spiral ganglion cell bodies and realistic fiber trajectories is capable of
predicting the effects of various electrode configurations (including monopolar, tripolar, phased
array, current steering and phantom stimulation). It includes 3000 auditory nerve fibers,
represented by a deterministic active cable model. This neural model turned out to be unsuitable
for predicting the effects of high-rate, long-duration pulse trains, as it does not include stochastic
behavior and does not exhibit adequate adaptation effects. Now, the stochastic threshold
variations, refractory behavior and adaptation effects have been incorporated as a threshold
modulation by simulating 10 independent instances of each of the active fibers, thereby
effectively modeling the 30,000 nerve fibers in the human auditory nerve.
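
A highly simplified sketch of the instance idea: each deterministic fiber is replicated as several independent stochastic instances whose effective thresholds vary pulse to pulse (relative spread) and that honor an absolute refractory period. Parameter values here are invented, not the model's fitted ones:

    import numpy as np

    rng = np.random.default_rng(0)

    def instance_spikes(pulse_times_ms, level, threshold=1.0, rs=0.06,
                        abs_refractory_ms=0.7):
        """Spike times for one stochastic instance of one fiber."""
        spikes, last = [], -np.inf
        for t in pulse_times_ms:
            if t - last < abs_refractory_ms:
                continue                            # absolutely refractory
            noisy_thr = threshold * (1 + rs * rng.standard_normal())
            if level >= noisy_thr:
                spikes.append(t)
                last = t
        return spikes

    pulses = np.arange(0, 10, 1.0)                  # 1000-pps train, 10 ms
    counts = [len(instance_spikes(pulses, level=1.0)) for _ in range(10)]
    print(counts)                                   # spike counts vary per instance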
The relative spread of the thresholds, the absolute and relative refractory properties and
the adaptation parameters in the extended model are derived from published single fiber
experiments from electrical stimulation of the auditory nerve, thereby keeping all model
parameters within known physiological limits. With this set of parameters the model was able to
replicate experimental PSTHs and inter-spike intervals for a range of stimulus levels and rates.
The extended model is used to simulate nerve responses to different CI stimulation
strategies. The effects of different stimulation parameters, such as stimulation rate, number of
channels, current steering and number of activated electrodes are investigated for a variety of
coding strategies such as classic CIS, parallel stimulation and n-over-m based strategies. A
basic discriminative model was developed for the interpretation of the neural responses. The
parameters of this model were derived from the comparison of simulation data with
psychophysical 3AFC data from the same experiments.
The extended computational model of the implanted human cochlea, combined with the
interpretation model is able to give insight into the effects underlying high-rate stimulation and the
influence of changing the stimulation order of channels. The interpretation model requires
further extension to be applicable with arbitrary speech coding strategies and to investigate, for
instance, the encoding of fine structure.
Acknowledgment: This study is financially supported by Advanced Bionics.
S40: MIMICKING THE UNMASKING BENEFITS OF THE CONTRALATERAL
MEDIAL OLIVOCOCHLEAR REFLEX WITH COCHLEAR IMPLANTS
Enrique A. Lopez-Poveda1, Almudena Eustaquio-Martin1, Joshua S. Stohl2, Robert D.
Wolford2, Reinhold Schatzer3, Blake S. Wilson4
1 University of Salamanca, Salamanca, ESP
2 MED-EL GmbH, Durham, NC, USA
3 MED-EL GmbH, Innsbruck, AUT
4 Duke University, Durham, NC, USA
We present a bilateral sound coding strategy that mimics the effects of the contralateral
medial olivocochlear efferent reflex (MOCR) with cochlear implants (CIs) and assess its benefits
for understanding speech in competition with noise. Pairs of bilateral sound processors were
constructed to mimic or not mimic the effects of the MOCR. For the non-mimicking condition
(STD strategy), the two processors in a pair functioned independently of each other. When
configured to mimic the effects of the MOCR (MOC strategy), the two processors communicated
with each other and the amount of compression in a given frequency channel of each processor
in the pair decreased with increases in the output energy from a corresponding frequency
channel in the contralateral processor.
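
Conceptually, the coupling can be written as back-end compression whose exponent in each channel relaxes toward linear as the corresponding contralateral channel's output energy grows. The specific mapping below is invented for illustration only, not the MOC strategy's actual control law:

    import numpy as np

    def moc_compress(envelope, contra_energy, c_min=0.2, c_max=0.8, k=1.0):
        """Power-law compression y = x**c; c rises (less compression) with
        increasing contralateral output energy."""
        c = c_min + (c_max - c_min) * (1 - np.exp(-k * contra_energy))
        return envelope ** c, c

    env = np.array([0.1, 0.4, 0.9])
    _, c_quiet = moc_compress(env, contra_energy=0.0)   # full compression
    _, c_loud = moc_compress(env, contra_energy=3.0)    # inhibited compression
    print(round(c_quiet, 2), round(c_loud, 2))          # 0.2 vs ~0.77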
We asked three bilateral CI users and two single-sided deaf CI users to recognize
sentences in simulated free-field conditions in the presence of a steady-state noise with a
speech-like spectrum. Performance was compared for the STD and MOC strategies using the
speech reception threshold (SRT), in unilateral and bilateral listening conditions, and for various
spatial configurations of the speech and noise sources.
Mean SRTs were at least 2 dB lower with the MOC than with the STD strategy when the
speech and the noise sources were at different spatial locations. SRTs improved with increasing
spatial separation between the speech and noise sources and the improvement was
significantly greater with the MOC than with the STD strategy.
The mutual inhibition of compression provided by the mimicking of the MOCR
significantly improved the intelligibility of speech in noisy environments and enhanced the spatial
release from masking. The MOC strategy as implemented here, or a modified version of it, may
be usefully applied in CIs and in hearing aids.
S41: AGING VS AGING AFTER NOISE: EXAGGERATION OF COCHLEAR
SYNAPTIC AND NEURAL LOSS IN NOISE-EXPOSED EARS
Sharon G. Kujawa
Massachusetts Eye and Ear Infirmary, Boston, MA, USA
Noise exposure and aging are two common causes of hearing loss in humans. Noise-exposed ears age, and the traditional view has been that noise produces no delayed
consequences as individuals age after exposure. However, much of the evidence cited in
support of this view is based on audiometric thresholds, which are generally good at reflecting
loss of hair cells, but not of the sensory neurons innervating them, particularly when the loss is
subtotal or diffuse. In studies to be summarized here, we provide clear evidence that noise
exposure changes the ways ears and hearing age, long after the noise has stopped.
In our models of noise- and age-related hearing loss, we have followed the post-exposure fate of hair cells, cochlear neurons and the synapses that connect them and assessed
functional integrity throughout the lifespan. We have compared changes to those observed in
ears that age without intentional noise exposure. This work shows that loss of inner hair cell
synapses with cochlear neurons occurs as a primary event: Well before threshold elevations
and hair cell loss compromise function by reducing the audibility of sound signals, synapse loss
compromises function by interrupting sensory-neural communication for subsets of neurons. In
unexposed ears, this loss is diffuse and gradually progressive, reaching ~50% by the end of the
lifespan. Such declines can be accelerated dramatically after noise, with up to ~50% of
synapses lost within minutes of exposure, including after exposures producing only temporary changes in
thresholds and no hair cell loss. In both models, cochlear ganglion cell loss parallels the
synaptic loss in magnitude and cochlear location. For ears that age after synaptopathic
exposure, ongoing loss of synapses and ganglion cells is accelerated relative to age-only
controls. Although thresholds are quite insensitive to these losses, they are reflected
proportionately and permanently in reduced neural, but not pre-neural, response amplitudes.
In humans, there is a steady age-related decline in spiral ganglion cell counts, even in
ears with a full complement of hair cells. Further study will be required to determine whether this
primary neural loss is preceded by synaptic loss and accelerated by noise exposure as it is in
our animal models, and whether differences in synaptic and neural losses contribute to the
variability of performance outcomes that have been documented for individuals with normal
thresholds as well as those hearing with the aid of a cochlear implant.
Work supported by the NIDCD.
S42: TMC GENE THERAPY RESTORES AUDITORY FUNCTION IN DEAF MICE
Jeffrey R Holt
Harvard Medical School / Boston Children's Hospital, Boston, MA, USA
Genetic hearing loss accounts for up to 50% of prelingual deafness worldwide, yet there
are no biologic treatments currently available. To investigate gene therapy as a potential
biologic strategy for restoration of auditory function in patients with genetic hearing loss, we
tested a gene augmentation approach in mouse models of genetic deafness. We focused on
DFNB7/11 and DFNA36 which are autosomal recessive and dominant deafnesses, respectively,
caused by mutations in Transmembrane channel-like 1 (TMC1). Thirty-five recessive mutations
and five dominant mutations have been identified in human TMC1. Mice that carry targeted
deletion of Tmc1, or a dominant point mutation, known as Beethoven, are good models for
human DFNB7/11 and DFNA36, respectively. We screened several adeno-associated viral
(AAV) serotypes and promoters and identified AAV2/1 and the chicken beta-actin promoter as
an efficient combination for driving expression of exogenous Tmc1 in inner hair cells in vivo. We
find that exogenous Tmc1, or the closely related ortholog Tmc2, is capable of restoring sensory
transduction, auditory brainstem responses and acoustic startle reflexes in otherwise deaf mice,
suggesting that gene augmentation with Tmc1 or Tmc2 is well-suited for further development as
a strategy for restoration of auditory function in deaf patients who carry TMC1 mutations. Lastly,
we suggest that AAV-mediated gene augmentation in the inner ear may be a model that could
be expanded to address some of the over 70 forms of genetic deafness.
S43: COMBINING DRUG DELIVERY WITH COCHLEAR PROSTHESES:
DEVELOPING AND EVALUATING NEW APPROACHES
Robert K. Shepherd, Rachel Richardson, Lisa N Gillespie, James B. Fallon, Frank Caruso,
Andrew K. Wise
Bionics Institute and University of Melbourne, Melbourne, AUS
The inner ear has been a target for drug-based therapies for over 60 years via systemic or middle-ear routes, although the efficacy of these routes is low. Direct application to the cochlea has only recently been used clinically and is typically restricted to patients with a moderate-to-severe hearing loss, in whom the risk of further damage to their hearing is reduced.
These delivery techniques are usually designed to be performed in association with cochlear
implantation because the scala tympani is already surgically accessed for the implantation of the
electrode array.
Our work has focussed on the delivery of exogenous neurotrophins to promote the
rescue of spiral ganglion neurons following deafness. Techniques evaluated in preclinical
studies include drug elution from an electrode array carrier; drug release from a reservoir and
cannula within the electrode array; viral-mediated gene therapy; cell-based therapies; and
nanotechnology-inspired release, either as a polymer coating on the electrode array or via slow
release particles that are inserted into the cochlea just before or after the insertion of the
electrode array. All these techniques can be implemented in association with cochlear implant
surgery and a number of the techniques are suitable for long-term (months) drug delivery.
Importantly, functional studies have demonstrated that chronic neurotrophin delivery results in
reduced electrical thresholds; i.e. drug delivery in concert with a neural prosthesis has great
potential for improving the electrode-neural interface. Finally, a number of the drug delivery
technologies described above have been shown to be safe in preclinical trials and are able to
deliver a wide variety of therapeutic drugs in a controlled manner.
Recently, clinical trials have included the delivery of anti-inflammatory drugs in
association with cochlear implantation (e.g. NCT01588925) and viral-mediated gene delivery
techniques for hearing protection (NCT02132130). This field is expected to expand rapidly,
improving clinical outcomes with cochlear prostheses and the quality of hearing across a range
of hearing disorders.
This work was supported by the NIDCD (HHS-N-263-2007-00053-C), the Garnett Passe and
Rodney Williams Memorial Foundation, the National Health and Medical Research Council of
Australia and the Victorian Government’s OIS funding. We acknowledge our colleagues and
students who contributed to aspects of this work.
S44: OPTOGENETIC STIMULATION OF THE AUDITORY PATHWAY FOR
RESEARCH AND FUTURE PROSTHETICS
Tobias Moser
Cochlear Optogenetics Program Goettingen
Institute for Auditory Neuroscience, University of Göttingen Medical Center, Göttingen, Germany
When hearing fails, speech comprehension can be restored by auditory prostheses.
However, sound coding with current prostheses, based on electrical stimulation of auditory
neurons, has limited frequency resolution due to broad current spread. Optical stimulation can
be spatially confined and may therefore improve frequency and intensity resolution. We have
established optogenetic stimulation of the auditory pathway in rodents using virus-mediated
expression of channelrhodopsins to render spiral ganglion neurons light-sensitive. Optogenetic
stimulation of spiral ganglion neurons activated the auditory pathway, as demonstrated by
recordings of single neuron and neuronal population responses at various stages of the auditory
system. We approximated the spatial spread of cochlear excitation by recording local field
potentials in the inferior colliculus in response to supra-threshold optical and electrical stimuli,
which suggested a better frequency resolution for optogenetic than for electrical stimulation.
Moreover, we found activation of neurons in primary auditory cortex and were able to restore
auditory activity in deaf mice. In a collaborative effort we develop and characterize flexible
µLED-based multichannel intracochlear stimulators. My presentation will review recent progress
in optogenetic stimulation of the auditory system and its potential for future application in
research and hearing restoration.
S45: OPTOGENETIC TECHNOLOGY PROVIDES SPATIOTEMPORAL RESOLUTION
SUFFICIENT FOR AN OPTICALLY-BASED AUDITORY NEUROPROSTHESIS
Ariel Edward Hight1, Elliott Kozin2, Constantin Hirschbiegel3, Shreya Narasimhan4, Alyson Kaplan2, Keith Darrow5, Xiankai Meng2, Ed Boyden6, M Christian Brown2, Daniel Lee2
1 Harvard University Medical School, Boston, MA, USA; 2 Massachusetts Eye and Ear Infirmary, Boston, MA, USA; 3 Technische Universität München, Munich, DEU; 4 École Polytechnique Fédérale de Lausanne, Lausanne, CHE; 5 Worcester Polytechnic Institute, Worcester, MA, USA; 6 Massachusetts Institute of Technology, Cambridge, MA, USA
Introduction: The auditory brainstem implant (ABI) is a neuroprosthesis that provides sound
sensations to deaf individuals who are not candidates for a cochlear implant (CI) due to anatomic
constraints. Most ABI users obtain sound awareness but overall outcomes are modest compared to
the average CI user. One explanation for limited performance among the majority of ABI users is the
non-specific spread of electrical current, leading to stimulation of nonauditory brainstem neurons
and channel cross-talk. We hypothesize that optogenetic stimulation may address limitations of ABIs
by using light instead of electric current to stimulate neurons. Optogenetics relies on visible light to
stimulate neurons that are photosensitized via genetic expression of light-sensitive ion channels
called opsins. Herein we aim to characterize spectral and temporal cues of an optogenetic-based
murine ABI model. These studies have implications for the development of a clinical ABI based on
light-evoked stimulation.
Methods: Genes for expressing opsins (ChR2 or Chronos) are transferred to the cochlear nucleus (CN) either by virus-mediated gene transfer with adeno-associated viruses (AAV) (Hight, Kozin, Darrow et al. 2014) or by transgene expression of ChR2 controlled by the Bhlhb5 transcription factor (Meng, Hight, Kozin, et al. 2014). The CN is surgically accessed to expose and visualize its surface. Pulsed (1 ms) blue light is delivered to the CN via an optical fiber. Neural activity evoked by light is acquired via a single-shank, 16-channel recording probe placed across the frequency axis of the inferior colliculus (IC). Spectral cues: Pulsed blue light (28 pulses/s) is delivered to
seven discrete locations along the medio-lateral axis of the CN each separated by 100 µm (beam
diameter 37 µm). The center of evoked activity along the recording probe is computed by the first
moment (akin to center of gravity). Temporal cues: Pulse rates (28-448 pulses/s) are delivered to
the CN. Magnitude of neural activity is computed by subtracting the spontaneous rate from the spike
rate acquired during the stimulus period. Neural synchrony is calculated as the vector strength of
neural spiking with respect to the stimulus rate.
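For concreteness, here is a minimal Python sketch of the two measures named above, the first moment (center of gravity) of activity across the recording channels and the vector strength of spiking relative to the pulse rate; the function and variable names are ours and purely illustrative, not the authors' analysis code:

    import numpy as np

    def center_of_activity(channel_rates):
        # First moment (akin to a center of gravity) of evoked activity
        # across the recording channels: sum(i * r_i) / sum(r_i).
        channels = np.arange(len(channel_rates))
        return np.sum(channels * channel_rates) / np.sum(channel_rates)

    def vector_strength(spike_times_s, pulse_rate_hz):
        # Vector strength of spike times relative to the pulse train:
        # 1 = perfectly phase-locked, 0 = unrelated to the stimulus rate.
        phases = 2 * np.pi * pulse_rate_hz * np.asarray(spike_times_s)
        return np.abs(np.mean(np.exp(1j * phases)))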
Results: Optical stimulation evoked strong neural activity at laser intensities as low as 0.2 mW. Spectral cues: optical stimulation at each beam location evoked strong neural activity that was
restricted along the frequency axis of the IC. The center of neural activity with respect to the
frequency axis in the IC shifted monotonically as a function of the beam location. Temporal cues:
Steady-state neural activity is attained for all tested stimulation rates, including 448 pulses/s.
Synchrony of neural activity peaked at ~1 (28 pulses/s) and declined as a function of stimulus rate,
finally reaching 0 at ~ 300 pulses/s. Furthermore, the high-speed opsin Chronos (Klapoetke et al.
2014) compared to ChR2 enabled greater neural synchrony at all tested stimulus rates.
Conclusions: Optogenetic control of auditory brainstem neurons is associated with frequency-specific stimulation with a resolution of <100 µm, more than a 7-fold improvement in the spatial resolution of stimulation compared to the size of a single clinical ABI
electrode (700x700µm). Additionally, optogenetics can drive neural activity that is temporally
synchronous with stimulation rates up to ~300 pulses/s. These results have implications for the
development of an auditory neuroprosthesis based on optogenetic technology.
S46: THE AUDITORY MIDBRAIN IMPLANT: RESEARCH AND DEVELOPMENT
TOWARDS A SECOND CLINICAL TRIAL
Hubert H Lim1, Thomas Lenarz2, James Patrick3
1 Departments of Biomedical Engineering and Otolaryngology, University of Minnesota, Minneapolis, MN, USA
2 Department of Otolaryngology, Hannover Medical School, Hannover, DEU
3 Cochlear Limited, Macquarie University, AUS
The cochlear implant is one of the most successful neural prostheses to date. However,
for deaf individuals without a functional auditory nerve or an implantable cochlea, direct
stimulation within the brain is required to restore hearing. There have been encouraging results
with the auditory brainstem implant (ABI). However, many ABI users, especially those with
neurofibromatosis type 2 (NF2), are still unable to achieve open set speech perception. One
alternative approach is to stimulate within the auditory midbrain (central nucleus of inferior
colliculus, ICC), which is located away from the NF2-related tumor damage at the brainstem
level. Towards this goal, we have worked with a team of scientists, engineers, clinicians, and
regulatory personnel across several institutions to develop the auditory midbrain implant (AMI)
and translate it from research studies to patient implementation. This talk will present a brief
overview of the successes and challenges of the first clinical trial with the AMI from 2006 to
2009, followed by the research and development in animal, human, and cadaver studies that
have led to our second clinical trial currently funded by the National Institutes of Health. This talk
will then present the rationale for moving from a single-shank multi-site array to a two-shank
multi-site array for the second clinical trial and a new surgical technique that will place the array
more consistently into the target region. The new two-shank AMI device will be implanted and
evaluated in five deaf NF2 patients from 2015 to 2019. Other future directions in the field of
central auditory prostheses will also be discussed.
S47: AUDITORY BRAINSTEM IMPLANTS IN CHILDREN: IMPLICATIONS
FOR NEUROSCIENCE
Robert V Shannon
Department of Otolaryngology, Keck School of Medicine of USC, Los Angeles, CA, USA
Auditory Brainstem Implants (ABIs) are intended for patients with no auditory nerve who
cannot use a cochlear implant. The ABI is similar to a cochlear implant except that the
electrode is placed adjacent to the cochlear nucleus in the brainstem. Originally the ABI was
designed for adults with NF2 – a genetic disorder that produces bilateral schwannomas on the
VIII nerve. In recent years the ABI has been applied to children with no VIII nerve, mostly from
genetic causes. More than 200 children worldwide have now received the ABI and the best
outcomes are comparable to those of a cochlear implant. This talk will review the latest outcomes
of ABI in children and will discuss the implications for brain plasticity and neuroscience.
MONDAY POSTER ABSTRACTS
M1: DEVELOPMENTAL PROTECTION OF AURAL PREFERENCE IN CHILDREN
WITH ASYMMETRIC HEARING LOSS THROUGH BIMODAL HEARING
Melissa J Polonenko1,2, Blake C Papsin1,3, Karen A Gordon1,2,3
1 Archie’s Cochlear Implant Laboratory, Department of Otolaryngology, Hospital for Sick Children, Toronto, ON, Canada
2 Institute of Medical Sciences, Faculty of Medicine, University of Toronto, Toronto, ON, Canada
3 Department of Otolaryngology – Head and Neck Surgery, Faculty of Medicine, University of Toronto, ON, Canada
Access to bilateral sound early in life is essential for normal auditory cortical
development. In the normal immature auditory system, neural activity is stronger in the auditory
cortex contralateral to the stimulated ear and each auditory cortex is preferentially stimulated by
input from the contralateral ear. Unfortunately, unilateral deprivation disrupts this developmental
pattern and creates abnormal aural preference for the stimulated ear. We are asking whether
we can protect symmetric cortical development in children with asymmetric hearing loss by
providing them with bilateral hearing through one cochlear implant (CI) in the deprived ear and
one hearing aid in the other ear with residual acoustic hearing (bimodal hearing). Hearing
histories of these children are heterogeneous, and several factors likely contribute to how
successful bimodal hearing may be at protecting symmetric auditory cortical development. We
hypothesize that bimodal hearing may normalize cortical development in the children who have
greater residual hearing in the non-implanted ear, and therefore, more similar access to sound
with both the hearing aid and CI.
To test this hypothesis, 17 children with asymmetric hearing loss aged 3.1 to 11.1 years
were recruited for this study. They received bimodal input for (mean ± SD) 1.9 ± 1.1 years. CIs
were implanted on the right side in 6 children and the left side in 11 children. Independent
variables being explored to impact cortical development included residual hearing (70.3 ± 17.7
dB HL, range 37.5 - 87.5 dB HL), duration of pre-implantation acoustic hearing experience (3.2
± 2.6 years, range 0.2 - 9.7 years), and duration of electrical experience (2.1 ± 1.1 years, range
1.0 - 4.9 years). Electroencephalographic measures of cortical activity were recorded across 64 cephalic electrodes. Responses were evoked using 250 Hz acoustic clicks and biphasic electric pulses in 36-ms trains presented at 1 Hz. A beamforming method developed in our lab has been
shown to successfully suppress CI artifact, separate neural activity from the noise floor, and
locate underlying sources (dipoles) of the cortical waveforms.
Preliminary results indicated that CI stimulation promotes greater activation of the
contralateral auditory cortex, regardless of implanted side (left CI: 19.4 ± 8.6%; right CI: 35.6 ±
4.4%, with positive numbers indicating contralateral activity). Normal contralateral aural
preference was measured for both auditory cortices in 7 children. Analyses regarding the
contributions of residual hearing, age and duration of acoustic experience with the hearing aid
and electric experience with the CI will be described.
In conclusion, results indicate that 1) there is no specialized hemispheric bias for CI
stimulation in the immature/developing cortex; and 2) bimodal stimulation can protect normal
cortical activation and aural preference in some, but not all, children with asymmetric hearing
loss. Analyses of several demographic factors may highlight how and in whom this protection is
possible, which may inform how we provide amplification to children with asymmetric hearing
loss.
This work was funded by a doctoral award from the Ontario government to the first author and a
Canadian Institutes of Health Research (CIHR) operating grant to the last author.
M2: DEVELOPMENT OF CORTICAL SPECIALIZATION TO PURE TONE
LISTENING IN CHILDREN AND ADOLESCENTS WITH NORMAL HEARING
Hiroshi Yamazaki, Salima Jiwani, Daniel D.E. Wong, Melissa J. Polonenko, Blake C.
Papsin, Karen A. Gordon
Archie’s Cochlear Implant Laboratory, Department of Otolaryngology, The Hospital for Sick Children, Toronto, CAN
Background: Non-speech and speech sounds elicit greater activation in the right and left
auditory cortices of normal hearing adults, respectively. While this functional cortical
hemispherical asymmetry has been reported in adults, we have shown more symmetric
responses in young children. This suggests that the auditory brain becomes increasingly
specialized during development. Recently, our group developed the time restricted, artifact and
coherence source suppression (TRACS) beamformer (Wong and Gordon 2009) which is a class
of adaptive spatial filters that enables us to assess sound-evoked cortical activity in each
hemisphere from electroencephalography data. The purpose of this study is to investigate a
chronological change of functional hemispherical asymmetry in the auditory cortices of children
with normal hearing using this TRACS beamforming technique.
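As a rough illustration of this class of filters (a generic LCMV-style beamformer, not the TRACS algorithm itself, whose artifact- and coherence-suppression steps are described in Wong and Gordon 2009): for a source with lead field l and data covariance R, the weights are w = R^-1 l / (l' R^-1 l), which pass the modeled source with unit gain while minimizing output variance from everything else.

    import numpy as np

    def lcmv_weights(cov, leadfield, reg=1e-6):
        # cov: (n_ch, n_ch) EEG covariance; leadfield: (n_ch,) forward
        # projection of one source orientation onto the electrodes.
        n = cov.shape[0]
        cov_inv = np.linalg.inv(cov + reg * np.eye(n))  # regularized inverse
        return cov_inv @ leadfield / (leadfield @ cov_inv @ leadfield)

    # Source (dipole) moment time course: project the (n_ch, n_samples)
    # EEG data through the weights, e.g. moment = w @ eeg_data.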
Participants and methods: Twelve children with normal hearing ranging from 4 to 16
years old were included in this study. These subjects were divided into 2 groups according to
their ages (Younger Group: 4-11 years old, n = 5; Older Group: >14 years old, n = 7).
Cortical responses evoked by 500 Hz tone burst stimuli in each ear were recorded using 64 cephalic surface electrodes. Sources and their dipole moments underlying cortical auditory
evoked potentials were evaluated using the TRACS beamformer. The time window for the beamforming analysis was selected around the early peak (P1) identified across the 64 channels (mean global field power).
Results: In the Younger Group, activity was evoked bilaterally regardless of the ear stimulated. On the other hand, activity lateralized to the right auditory cortex for both left and
right ear stimulation in the Older Group.
Discussion: 500 Hz tone burst stimuli delivered to each ear elicited bilateral auditory
cortical activation in the younger group of < 11 year-old children, but the same stimulus evoked
right-lateralized auditory cortical activation in the older group of > 14 year-old subjects. These
results confirm our hypothesis that functional hemispherical asymmetry of auditory cortices,
which is observed in adults, develops during late childhood. Previous studies demonstrated
changes of the inter-cortical fiber bundles and the maturation of inhibitory circuits during
adolescence. The inter- and intra-cortical maturation of neural circuits might contribute to the
increasing specialization of the auditory brain with development.
M3: THE DEVELOPMENT OF INTERNALIZING AND EXTERNALIZING
SYMPTOMS IN EARLY IDENTIFIED TODDLERS WITH COCHLEAR IMPLANTS
COMPARED TO HEARING CONTROLS
Anouk Paulien Netten1, Carolien Rieffe2, Lizet Ketelaar2, Wim Soede1, Evelien Dirks3,
Jeroen Briaire1, Johan Frijns1
1 Leiden University Medical Center, Leiden, NLD
2 Leiden University, Leiden, NLD
3 Dutch Foundation for the Deaf and Hard of Hearing Child, Amsterdam, NLD
Introduction: Compared to their peers with conventional hearing aids, adolescents with
cochlear implants (CI) function increasingly well. However, they still show higher levels of
psychopathology than normal hearing (NH) peers.
In order to prevent children from developing these problems, it is essential to identify
symptoms and possible risk factors as early on in childhood as possible. This study examines
the effect of early intervention and other factors on the development of internalizing and
externalizing symptoms in toddlers with CI.
Objective: The first aim of this study was to define the incidence of internalizing and
externalizing behavioral problems in a cohort of early identified toddlers with CI compared to
hearing peers. The second aim was to examine the development of these symptoms over time
and to identify factors that influence this development in a longitudinal design.
Methods: This study was conducted as part of a larger research project concerning the
socio-emotional development of toddlers with CIs. The 58 participants were between 1 and 5
years old (mean age 38 months) and their hearing loss was detected using early identification
programs in the Netherlands and Belgium. A NH control group (n = 120) was assembled
matched on gender, age and socio-economic status. Parents completed the Early Childhood
Inventory to screen for emotional and behavioral disorders as listed in the DSM-IV, the
Strengths and Difficulties Questionnaire to indicate behavioral problems and social functioning,
and a list of background variables. During three consecutive years, they were requested to
annually complete these questionnaires. Information regarding the child’s hearing loss and
speech- and language abilities was derived from their medical notes.
Preliminary results: At baseline, toddlers with CI scored higher on the Autism Spectrum
Disorder Scale and lower on the social functioning scales than their NH peers. Regardless of
hearing status, both groups showed increasing levels of Oppositional Defiant Disorder, Major
Depressive Disorder and anxiety over time. Comparing the two groups over time revealed that
toddlers with CI scored higher on the Conduct Disorder Scale than the NH group. Data
collection will be completed in May 2015; full results will be presented at the conference.
Conclusions: Starting early on in childhood, toddlers with CI already show higher levels of
behavioral and social difficulties compared to hearing peers. Because of their hearing
impairment, they have less access to the social world and diminished opportunities for social
learning. Additionally, their speech and language skills are not yet on par with those of NH toddlers, which makes it harder for them to express themselves. Besides speech and language
performance, future rehabilitation programs should also focus on the socio-emotional
development of toddlers with CI.
Acknowledgement: This research was financially supported by the Care for the Young: Innovation
and Development program by ZonMw (grant number 80-82430-98-8025) and the Heinsius-Houbolt Fund.
M4: HOW ADOLESCENTS WITH COCHLEAR IMPLANTS PERCEIVE
LEARNING A SECOND LANGUAGE
Dorit Enja Jung1,2, Anastasios Sarampalis2,3, Deniz Başkent1,2
1 University of Groningen, University Medical Center Groningen, Dept Otorhinolaryngology, Groningen, NLD
2 University of Groningen, Research School of Behavioral and Cognitive Neurosciences, Groningen, NLD
3 University of Groningen, Faculty of Behavioural and Social Sciences, Dept Psychology, Groningen, NLD
Mastering a second spoken language (L2), most importantly English, has direct
advantages for adolescents from non-English-speaking countries, such as the Netherlands. To
date, cochlear implant (CI)-related factors and challenges for L2 learning after completion of
native language acquisition have not been identified. We postulate that two CI-related factors,
sensory and cognitive, may limit the ability to learn a L2 (English) successively to the native
language (Dutch). It would be of great interest to support L2 learning in implanted adolescents,
as they could develop closer to their full academic potential. Also, they might be able to benefit
from secondary effects regularly observed in speakers of multiple languages, for example,
greater cognitive control.
This project includes two parts: a questionnaire study and an experimental study. We
expect to present first results from the questionnaire study. This study aims at investigating the
CI and non-CI adolescents’ self-perception regarding their English learning experience. Also,
proxy-reports from parents and English teachers will be presented. The questionnaire battery
will be administered to high school students (age 12 - 17 years) attending English classes.
Three participant groups will be included: adolescents with pediatric cochlear implantation,
hearing-impaired adolescents without implants, and normal-hearing adolescents. The
questionnaire battery will cover relevant domains for L2 learning, for example: language
exposure, motivation to learn a second language, language background, attention and fatigue,
and the role of hearing- and lip-reading abilities.
We expect that adolescents with implants report few difficulties regarding reading, as
well as grammar- and vocabulary acquisition. They are expected to report greater difficulties
regarding understanding spoken communication and pronunciation of the second language.
Implanted adolescents and hearing-impaired adolescents are likely to report greater importance
of lip-reading and of the amount of background noise for successful L2 learning than normal-hearing peers.
Acknowledgements
The authors are supported by a Rosalind Franklin Fellowship from the University Medical Center
Groningen, University of Groningen, and the VIDI Grant No. 016.096.397 from the Netherlands
Organization for Scientific Research (NWO) and the Netherlands Organization for Health
Research and Development (ZonMw).
M5: NONWORD REPETITION BY CHILDREN WITH COCHLEAR IMPLANTS:
EFFECTS OF EARLY ACOUSTIC HEARING
Caroline N Lartz1, Lisa S Davidson2, Rosalie M Uchanski2
1 Washington University School of Medicine Program in Audiology and Communication Sciences, St. Louis, MO, USA
2 Washington University School of Medicine Dept of Otolaryngology, St. Louis, MO, USA
The effects of ‘early acoustic hearing’ on performance on a nonword repetition test were examined for 29 children with cochlear implants (CIs). It has been proposed that repeating nonwords requires many of the same processes (auditory, linguistic, cognitive and speech-motor) that are involved in learning new words (Cleary, Dillon & Pisoni, 2002; Carter, Dillon &
Pisoni, 2002). Consequently, a child’s performance on a nonword repetition task should reflect
that child’s ability to learn new words.
We define ‘early acoustic hearing’ as a combination of audiometric thresholds of non-implanted ears (unaided and aided; pre- and/or post-surgery) and duration of hearing-aid (HA)
use. We analyzed children’s responses from the Children’s Nonword Repetition test (CNRep,
Gathercole et al., 1994). The children all wear two hearing devices (2 CIs, or a CI and a HA),
received their first (or only) CI before the age of 3.5 years, were 5 - 9 years old at the time of the
tests, and represent multiple geographic locations. A control group, 14 children with normal
hearing (NH) and a similar range of ages, was also tested.
This study used nonword repetitions recorded as part of a NIH-funded project [R01
DC012778]. For each of 20 nonwords, ranging from 2-5 syllables and spoken by a female talker,
the child was instructed to repeat back the “funny sound” played from a loudspeaker. The
imitations were transcribed using the International Phonetic Alphabet (IPA) and then analyzed
using Computer Aided Speech and Language Assessment (CASALA) software. The
transcriptions were scored for phonetic accuracy, namely the proportion-correct consonants,
vowels, and consonant-clusters. Additionally, two measures of suprasegmental accuracy were
made: for each repetition, the transcriber noted whether the correct number of syllables was
produced and whether the stress pattern was imitated correctly.
The children with NH outperformed the children with CIs in all outcome measures, both
phonetic and suprasegmental. Overall, there was much more variability in the data from children
with CIs than in the data of children with NH. Several CI children achieved scores comparable to
those of children with NH, suggesting that, for some, early cochlear implantation can lead to
performance equal to their NH peers. Relationships between performance on the nonword
repetition task and ‘early acoustic hearing’ will be explored. Additional relationships, if any, will
be examined between the accuracy in stress-pattern production for the nonword repetition test
and performance on a stress-discrimination perception test. These analyses will allow us to
examine the effects of degree and duration of acoustic hearing on pediatric CI recipients’ ability
to process the phonological structure of their language.
M6: CONSONANT RECOGNITION IN TEENAGE CI USERS
Eun Kyung Jeon, Marcin Wroblewski, Christopher W. Turner
University of Iowa, Iowa City, IA, USA
Cochlear implantation was devised to help people with sensorineural hearing loss to
access all frequencies of speech, including high frequencies, so that they could recognize
consonants and the place of articulation better than they could with hearing aids. This study
investigates how well teenage CI users perform with consonant recognition when compared to
adult CI users, i.e., post-lingually deafened and raised with acoustic hearing, and how acoustic
feature recognition and error patterns differ based on performance among CI teenagers and CI
adults.
A total of 66 CI users have participated so far: 18 teenagers and 48 adults. A closed-set
consonant recognition test was given to all subjects using 16 consonants. Responses were
further analyzed using acoustic features and confusion matrices.
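One common way to score feature reception from such a confusion matrix is to collapse the 16 consonants into classes for one feature and count a response as correct whenever it shares the presented consonant's feature value; a minimal sketch (the class labels are illustrative placeholders, not necessarily the grouping used in this study):

    import numpy as np

    def feature_percent_correct(confusions, feature_labels):
        # confusions: (16, 16) count matrix, rows = presented consonant,
        # columns = response; feature_labels: length-16 vector of class
        # IDs for one feature (e.g., 0 = voiceless, 1 = voiced).
        labels = np.asarray(feature_labels)
        correct = 0
        for i in range(confusions.shape[0]):
            same_class = labels == labels[i]  # responses with the same feature value
            correct += confusions[i, same_class].sum()
        return 100.0 * correct / confusions.sum()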
Preliminary results show that CI teenagers are not significantly different from CI adults in
consonant recognition scores (teenagers 68% correct, adults 67% correct) and in recognition of
acoustic features: voicing, manner, and place of articulation. For further analysis, both teenage
and adult CI groups were divided into two subgroups based on their performance: low and
higher performing groups. For place of articulation recognition, the low and higher performing CI
teenagers obtained 75% and 79% correct, respectively, resulting in no significant difference. For
voicing recognition, however, the low performing CI teenagers obtained significantly lower
scores (36 % correct) than the higher performing CI teenagers (72% correct). The low
performing CI teenagers’ voicing recognition was also significantly lower than the low performing
CI adults who obtained 54% correct. Thus, the teenagers’ data were much more variable across subjects than the adults’. Confusion matrices reflected the low performing CI teenagers’ difficulty
with voicing recognition; they often confused the voiced consonants with the voiceless ones or
vice versa.
In summary, results show that CI teenagers can reach scores as high as adult CI users.
They achieve high scores on place of articulation regardless of their scores elsewhere. This may
be an indication of the benefits of cochlear implantation. Voicing recognition, however, turns out
to be the primary problem for low performing CI teenagers. The differences in performance are
likely related to experience of acoustic hearing during early childhood prior to cochlear
implantation.
Work supported by NIH-NIDCD grants, R01 DC000377 and P50 DC000242
M7: BINAURAL PITCH FUSION IN CHILDREN WITH NORMAL-HEARING,
HEARING AIDS, AND COCHLEAR IMPLANTS
Curtis L. Hartling, Jennifer R. Fowler, Gemaine N. Stark, Anna-Marie E. Wood, Ashley
Sobchuk, Yonghee Oh, Lina A.J. Reiss
Oregon Health & Science University, Portland, OR, USA
Previously, we reported broader binaural pitch fusion ranges in hearing-impaired, adult
cochlear implant (CI) and hearing aid (HA) users compared to normal-hearing (NH) adults.
Wider fusion ranges can result in the averaging of mismatched rather than matched spectral
information between the ears, and may help explain speech perception interference observed
with binaural versus monaural hearing device use (Reiss et al., JARO 2014). The goal of this
study was to measure binaural pitch fusion in NH, HA, and CI children in order to understand
how fusion ranges differ from adults as well as across groups. A long-term objective is to follow
children over time and understand how fusion range and binaural spectral integration changes
with developmental age, and the clinical implications of these differences for rehabilitation.
Twelve NH, three bilateral HA, and five bimodal CI (CI worn with HA in contralateral ear)
children were recruited for the study (age range: 6.3 to 9.0 years). All subjects completed a
dichotic fusion range measurement task, in which a reference stimulus (tone or electrode) was
presented simultaneously in one ear with a comparison stimulus in the contralateral ear, and the
comparison stimulus varied to find the frequency range that fused with the reference stimulus.
NH and bilateral HA subjects were tested under headphones. For bimodal CI subjects, stimuli
were presented via an experimental cochlear implant processor on the implanted ear, and a
headphone on the other.
Preliminary findings suggest that bilateral HA children exhibit wider fusion ranges (3.53 ±
0.41 octaves) compared to NH children (0.74 ± 0.84 oct.) and bimodal CI children (0.67 ± 0.75
oct.). Interestingly, no appreciable fusion range differences were found between NH and
bimodal CI children. Bimodal CI children had slightly narrower fusion ranges compared to
bimodal CI adults (1.91 ± 1.46 oct.). In contrast, fusion ranges were slightly broader for children
in the NH and bilateral HA groups compared to these same adult groups (0.19 ± 0.17 and 1.27 ±
1.80 oct., respectively).
Our preliminary findings suggest that children who wear bilateral HAs exhibit broader
fusion ranges than their NH or bimodal CI counterparts, raising the possibility that hearing
device type may influence development of binaural pitch fusion. Throughout the remainder of
this five-year longitudinal study, we expect to draw clear conclusions regarding how fusion
range and binaural spectral integration changes with developmental age in all groups of
children, including those with bilateral CIs.
This research was funded by a NIH-NIDCD grant R01 DC013307.
M8: SPATIAL ATTENTION IN CHILDREN WITH BILATERAL COCHLEAR
IMPLANTS AND IN NORMAL HEARING CHILDREN
Sara Misurelli1, Alan Kan, Rachael Jocewicz1, Shelly Godar1, Matthew J Goupell2, Ruth Litovsky1
1 University of Wisconsin, Madison, WI, USA
2 University of Maryland, College Park, MD, USA
Auditory scene analysis is the process by which the auditory system segregates acoustic
information into various streams, and is related to our ability to selectively attend to a target
while ignoring distracting information. The ability to perform these ‘spatial attention’ tasks is
complex, but essential for successful communication in noisy environments. Studies on spatial
attention have shown that normal-hearing (NH) adult listeners are able to perceive target and
interfering stimuli at separate locations, which is fundamental for spatial unmasking. Adults who
are deaf and received cochlear implants (CIs) during adulthood exhibit poor performance on
these tasks. One possible reason is that their auditory system has not fully adapted to
performing spatial attention tasks with electric hearing. We are interested in whether children
who are deaf and fitted with bilateral CIs (BiCIs) are able to perform the spatial attention tasks,
given that their overall listening experience is achieved with electric hearing. We are also
interested in investigating the developmental trajectory of spatial attention in NH children.
The current study used methods similar to those used with adults with NH and BiCIs in a
previous study from our lab (Goupell, Kan, & Litovsky, CIAP 2013). Target stimuli (spoken by a
female talker) were delivered in the presence of interfering speech (spoken by a male talker) at
various SNRs. Children with NH were presented with either clean or vocoded speech via
headphones, and children with BiCIs were presented with stimuli via direct audio input to their
speech processors. All children were instructed to attend to either the right or left ear during
testing with 4 different conditions: (1) target only in one ear (quiet); (2) target in one ear,
interferer in contralateral ear; (3) target+interferer in the same ear (co-located); (4)
target+interferer in one ear, interferer-only in contralateral ear (spatially separated).
Preliminary results showed that in the presence of interferers in the contralateral ear
(condition 2), children with BiCIs were able to selectively attend to the target ear. This suggests
that children were able to direct attention to the ear in which the target was presented. In
conditions with target and interferers presented to the same ear there was not a significant
difference when the stimuli were perceived as co-located (condition 3) or spatially separated
(condition 4), suggesting that children with BiCIs did not perceive any spatial unmasking. Thus,
having exposure to CIs early in life might not facilitate emergence of spatial unmasking, but we
believe this is not because the listeners could not spatially attend. Rather it may be that the
target and interfering speech were not perceived at different spatial locations, suggesting a
problem with encoding the binaural cues. Additional findings will be discussed in the context of
comparing results of children with NH and with BiCIs, as well as comparing all child groups to
the previous adult study. These findings may help to elucidate limitations of spatial hearing for
children with BiCIs due to central processing of spatial attention or peripheral coding of the
incoming signal.
Work supported by NIH-NIDCD R01-DC8365 (Litovsky) and NIH-NIDCD P30HD03352.
M9: AUDIOVISUAL INTEGRATION IN CHILDREN WITH COCHLEAR
IMPLANTS
Iliza M. Butera1, Ryan A. Stevenson2, Rene H. Gifford1, Mark T. Wallace1
1 Vanderbilt University, Nashville, TN, USA
2 University of Toronto, Toronto, Ontario, CAN
Cochlear implants (CIs) are highly successful neuroprosthetic devices that can enable
remarkable proficiency in speech understanding in quiet; however, most CI users still struggle to
communicate in noisy environments. Audiovisual integration is key to filtering out auditory
distractors like neighboring talkers, and for typical listeners, the visual component of speech can
boost intelligibility by as much as 15 dB. Even so, clinical evaluation of a user’s proficiency with a
CI is typically restricted to auditory-only measures. The purpose of this study is to assess
audiovisual integration in children with cochlear implants and relate this ability to other auditory
and visual measures of perception and speech recognition. Participants included 15 cochlear
implant users and 15 age-matched controls ranging from 6 to 18 years old. Measures included
an audiovisual speech-in-noise task, the McGurk Illusion, and psychophysical tests of perceived
simultaneity with either a complex audiovisual stimulus (i.e. the recorded utterance of the
syllable “ba”) or a simple audiovisual stimulus (i.e. a circle and 1kHz tone pip). By varying the
stimulus onset asynchrony (SOA) between the respective auditory and visual components of
these pairings, we are able to define a subjective “temporal binding window” of audiovisual
integration. We hypothesized that these temporal windows and the fusion of auditory and visual
speech tokens would both differ between CI users and controls and generalize to measures of
speech in noise intelligibility. This is predicated on the fact that speech is typically a
multisensory mode of communication and is known to rely on the temporal coincidence between
auditory and visual cues for effective communication. Our data reveal four preliminary findings:
(1) significant differences in the SOA at which perceived simultaneity peaked in the simple
stimulus presentation of a circle and a tone (avg 62 v. 22ms; p=0.02), (2) no significant group
differences in the width of the temporal binding window with either simple or complex
audiovisual stimuli, (3) increased reports of the visual token in the McGurk illusion with CI users
(60% v. 4%; p<0.001), and (4) a strong correlation between the report of fused, illusory tokens
in the McGurk task with the percentage of correctly identified phonemes in an audiovisual word
recognition task presented at a 0dB signal-to-noise ratio (r2=0.64, p<0.01). Taken together,
these results suggest a greater visual bias in perception at multiple levels of sensory processing
in CI users and highlight practical benefits for enhancing audiovisual integration in realistic
listening environments.
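For readers unfamiliar with the temporal-binding-window measure, a sketch of how such a window can be quantified: the proportion of 'simultaneous' reports is collected at each SOA and a peaked function is fitted, whose center and width index the point of subjective simultaneity and the window size. The data and the Gaussian model below are illustrative placeholders, not the study's results or fitting procedure.

    import numpy as np
    from scipy.optimize import curve_fit

    def simultaneity(soa, peak, center, width):
        # Gaussian model of 'simultaneous' report rate vs. SOA (ms);
        # negative SOA = auditory leading, positive = visual leading.
        return peak * np.exp(-((soa - center) ** 2) / (2.0 * width ** 2))

    soas = np.array([-400, -300, -200, -100, 0, 100, 200, 300, 400.0])
    p_sim = np.array([0.05, 0.15, 0.45, 0.80, 0.95, 0.90, 0.60, 0.25, 0.10])

    (peak, center, width), _ = curve_fit(simultaneity, soas, p_sim,
                                         p0=[1.0, 0.0, 150.0])
    # 'center' estimates the SOA of peak simultaneity; 'width' indexes
    # the breadth of the temporal binding window.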
M10: AAV-MEDIATED NEUROTROPHIN EXPRESSION IN THE DEAFENED
COCHLEA
Patricia A Leake1, Stephen J Rebscher1, Chantale Dore1, Lawrence R. Lustig2, Omar Akil1
1 Dept. Otolaryngology-HNS, University of California San Francisco, San Francisco, CA, USA
2 Dept. Otolaryngology-HNS, Columbia University, New York, NY, USA
After deafness, the long-term survival of cochlear spiral ganglion (SG) neurons requires
both neurotrophic support and neural activity. In early-deafened cats, electrical stimulation (ES)
from a cochlear implant (CI) can partly prevent SG degeneration. Intracochlear infusion of Brain
Derived Neurotrophic Factor (BDNF) further improves survival of both radial nerve fibers and
SG neurons, which is beneficial for CI performance. However, BDNF also elicits disorganized,
ectopic sprouting of radial nerve fibers that is potentially detrimental to optimum CI function due
to loss of tonotopicity. Further, in many studies to date, BDNF has been delivered by osmotic
pump, which is problematic for clinical application. The current study explores the potential for
using adeno-associated viral vectors (AAV) to elicit neurotrophic factor expression by cells in the
cochlea. Our hypothesis is that this approach will provide targeted delivery and more
physiological levels of neurotrophins, and avoid ectopic sprouting.
AAV2-GFP (green fluorescent protein), AAV5-GFP, AAV2-GDNF (glial-derived
neurotrophic factor), AAV5-GDNF or AAV2-BDNF was injected (1 µl) into the scala tympani of FVB mice (at 1-2 days postnatal) through the round window. Mice were studied 7-21 days later.
Additional studies assessed AAV in the normal and deafened cat cochlea. Kittens were
deafened prior to hearing onset by systemic injections of neomycin sulfate. ABRs showed
profound hearing loss by 16-18 days postnatal. AAV (10 µl) was injected at 3-4 weeks of age.
Animals were studied 4-12 weeks later for immunohistochemistry (GFP) or 3 months later to
assess SG survival.
Following AAV2-GFP injections in normal mice and cats, immunohistochemistry revealed
strong expression of the GFP reporter gene in inner (IHCs) and outer hair cells (OHCs), inner
pillar cells, and also in some SG neurons. With AAV5-GFP, robust transduction of IHCs and
many SG neurons was seen, but few OHCs and supporting cells expressed GFP. After
injections of AAV2-GDNF, q-PCR demonstrated that human GDNF was expressed at levels
about 1500 times higher than endogenous GDNF in mice and AAV5-GDNF elicited extremely
high levels of GDNF expression. However, initial long-term data in cats after deafening showed
no neurotrophic effects of AAV2-GDNF on SG survival. In contrast, preliminary data suggest
that AAV2-BDNF promotes improved survival of both SG neurons and radial nerve fibers in the
deafened cochlea, thus optimizing CI performance. Further, no ectopic fiber sprouting was
observed. We hypothesize that AAV5-BDNF may elicit even greater SG expression, as
suggested by qPCR in mice, and thus even more strongly promote neural survival. Moreover, in
ears with hearing deficits primarily caused by SG degeneration, BDNF expression in supporting
cells or remaining hair cells could attract resprouting radial nerve fibers, thereby potentially
restoring hearing.
The authors thank K. Bankiewicz and uniQure biopharma B.V. for donating AAV vectors for
these studies. Research supported by NIDCD Grant R01DC013067, the Epstein Fund, and
uniQure biopharma B.V. and Hearing Research, Inc.
M11: OPTIMAL VOLUME SETTINGS OF COCHLEAR IMPLANTS AND
HEARING AIDS IN BIMODAL USERS
Dimitar Spirrov1, Bas van Dijk2, Jan Wouters1, Tom Francart1
1 ExpORL, Dept. Neurosciences, KU Leuven, Leuven, BEL
2 Cochlear Technology Centre Belgium, Mechelen, BEL
The purpose of our study was to find the optimal relation between the volume settings of a
cochlear implant (CI) and a hearing aid (HA) so that the user's loudness perception is balanced.
Bimodal users have a CI in one ear and a HA in the other, non-implanted ear. This
combination improves speech understanding in realistic multi-talker environments compared to the CI-only condition. However, there are inherent differences between the two devices, mainly the
mode of stimulation (acoustic versus electric) and the processing algorithms. The devices are not
developed with combined use in mind and even similar parts (e.g. the compressors) are not
coordinated. This can lead to an unbalanced sound perception which decreases user comfort and
potentially limits the speech understanding benefit. When the bimodal user changes the volume (or
sensitivity) of the CI or the volume of the HA, the loudness changes for that ear. However, the
opposite device does not change its volume accordingly. Additionally, a one-step volume change in the CI will not have the same perceptual effect as a one-step volume change in the HA. This makes it
inconvenient for the users to correctly set their volume controls on both sides. Therefore, an
automatic setting procedure is needed.
In the present study we investigated the possibility of using loudness models to compute a function that relates the volume controls of the two devices. In the HA, the overall gain is used as a volume control parameter. In the CI, either the microphone sensitivity or the so-called volume, which controls the amplitude of the electrical pulses, is used as the volume control parameter. Using
loudness models parametrized for individual subjects, a function to relate the HA overall gain with
loudness was calculated. Similarly, for the CI a second function was calculated to relate the
sensitivity or volume with the loudness caused by the CI. In order to have a balanced sound the
loudness caused by the HA and the CI have to be equal. Using this constraint, the first two functions
were combined in a new function to relate the HA overall gain with CI sensitivity or volume. As an
input signal to the models, steady-state noise filtered according to the international long-term average speech spectrum was used.
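A schematic sketch of the balancing constraint described above: given a function L_HA(gain) and a function L_CI(volume), the balanced CI volume for a given HA gain solves L_CI(volume) = L_HA(gain), e.g., by numerically inverting the CI curve. The loudness curves below are toy monotonic placeholders, not the individually parametrized loudness models used in the study.

    import numpy as np

    ha_gains = np.linspace(0, 30, 61)      # HA overall gain (dB)
    loud_ha = 2.0 * 1.06 ** ha_gains       # toy loudness vs. HA gain
    ci_volumes = np.linspace(0, 10, 51)    # CI volume steps
    loud_ci = 1.5 * 1.35 ** ci_volumes     # toy loudness vs. CI volume

    def balanced_ci_volume(gain_db):
        # Find the CI volume giving the same model loudness as the HA:
        # evaluate L_HA at the gain, then invert the increasing L_CI curve.
        target = np.interp(gain_db, ha_gains, loud_ha)
        return np.interp(target, loud_ci, ci_volumes)

    print(balanced_ci_volume(15.0))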
The obtained function was validated using loudness balancing experiments with bimodal
users. The stimulus level at the CI side was fixed and the level at the HA side was varied by
interleaved 1up/2down and 2up/1down adaptive procedures. Balancing results were achieved for
different stimulus levels. The balancing result at the louder level was used to determine the model parameters (the fitting). The mean difference between the model prediction for the softer level and the softer balancing results was 3 dB. Without the model prediction, the mean difference at the softer level was 6.2 dB. From previous studies it is known that just-noticeable level differences for bimodal users are at least 1.5 dB. Therefore, the average bimodal user will perceive electrical and acoustical stimuli as equally loud.
The achieved function can be used to link the volume control of the two devices, and/or to
relate their step sizes. Therefore, the automated volume control settings will lead to a major usability
improvement, which can further increase the speech understanding and the convenience of the
bimodal users.
We would like to acknowledge the support from the Agency for Innovation by Science and
Technology in Flanders (IWT R&D 110722 and IWT Baekeland 140748).
M12: QUALITY OF LIFE AND AUDITORY PERFORMANCE IN ADULTS WITH
ASYMMETRIC HEARING LOSS
Nicolas Vannson1,2,4, Christopher James3,4, Kuzma Strelnikov2, Pascal Barone2, Bernard Fraysse3, Olivier Deguine1,2,3, Mathieu Marx1,2
1 Université de Toulouse; UPS; Centre de Recherche Cerveau et Cognition; Toulouse, FRA
2 CNRS; CerCo; France
3 Service Oto-Rhino-Laryngologie et Oto-Neurologie, Hôpital Purpan, Toulouse, FRA
4 Cochlear France SAS, Toulouse, FRA
Objectives: Asymmetric hearing loss (AHL) impairs the ability to discriminate speech in noise and to localize sounds in the horizontal plane. Despite these disabilities, AHL patients have been left untreated due to unsatisfactory solutions. Nevertheless, AHL may have a knock-on effect in terms of quality of life. Therefore, the goal of this study was to measure the
relationship between binaural hearing deficits and quality of life. We hypothesized that a
decrease in the ability to recognize speech in different spatial configurations of background
noise may impact quality of life and that the impact may increase with the degree of hearing
loss.
Methods: 49 AHL adults underwent the Matrix test in sound field in three listening conditions: I) dichotic, where the signal was sent to the poorer ear and the noise to the contralateral one; II) diotic, where the noise and the signal were mixed and presented from the frontal loudspeaker; III) reverse dichotic, where the signal was presented to the better ear and the noise to the opposite one. In each listening condition, the SNR50 was measured. Furthermore, quality of life was evaluated with a generic questionnaire, the SF-36, and a hearing-specific one, the Speech, Spatial and Qualities of Hearing scale (SSQ). In addition, 11 normal-hearing adult listeners (NHL) served as controls.
Results: Speech recognition in noise was significantly poorer for AHL subjects (-0.12 dB
SNR50 in dichotic, -1.72 dB in diotic and -6.84 dB in reverse-dichotic condition) compared to
NHL (-4.98 dB in diotic and -9.58 dB in both dichotic conditions). Scores for quality of life
questionnaires were significantly below norms. Significant correlations were found between the
SNR50 for the dichotic condition and SSQ total score (rspear = -0.38, p = 0.01).
Conclusion: AHL subjects have binaural hearing deficits that present a handicap to their
everyday quality of life. The dichotic SNR50 appeared to be a reliable criterion to screen the
impact of their AHL. In addition, this study supports the need for therapeutic solutions for AHL
subjects such as a cochlear implant device.
M13: COCHLEAR RESPONSE TELEMETRY: REAL-TIME MONITORING OF
INTRAOPERATIVE ELECTROCOCHLEOGRAPHY
Luke Campbell1, Arielle Kacier2, Robert Briggs2, Stephen O’Leary1
1 Dept Otolaryngology, University of Melbourne, Melbourne, AUS
2 Royal Victorian Eye and Ear Hospital, Melbourne, AUS
Objective: To explore the utility of real-time monitoring of acoustic intracochlear
electrocochleography during cochlear implantation to understand and avoid insertion-related
trauma.
Methods: We have recently developed a method of making high quality
electrocochleographic recordings in response to acoustic stimulation in awake, post-operative
cochlear implant recipients. Recordings are made wirelessly directly from Cochlear’s implant
(Campbell et al, Otol Neurotol 2015). We will present our first recordings made using an
intraoperative version of this system which gives the surgeon real-time feedback of the
electrocochleogram measured from the apical intracochlear electrode during array insertion.
Both hearing preservation (straight electrodes via the round window) and non-hearing
preservation (perimodiolar electrodes via a cochleostomy) cases are examined.
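As background for the results below, one standard way to track the cochlear microphonic during insertion is to take the Fourier magnitude of each intracochlear recording at the acoustic stimulus frequency; the sketch below is our illustration of that generic approach, not the processing chain of the cochlear response telemetry system itself.

    import numpy as np

    def cm_amplitude(recording, fs, stim_freq_hz):
        # Cochlear-microphonic amplitude estimate: windowed FFT magnitude
        # of the intracochlear recording at the stimulus frequency.
        n = len(recording)
        spectrum = np.fft.rfft(recording * np.hanning(n))
        freqs = np.fft.rfftfreq(n, 1.0 / fs)
        idx = np.argmin(np.abs(freqs - stim_freq_hz))
        return 2.0 * np.abs(spectrum[idx]) / n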
Results: Intracochlear recordings could be measured from most patients. The usual
pattern seen during the insertion of a straight array is a gradual increase in the amplitude of the
cochlear microphonic. In approximately half of cases the gradual increase is either temporarily or permanently interrupted and the cochlear microphonic decreases. We hypothesize that this may
represent intracochlear trauma. In most cases the drop in amplitude could be correlated to a
specific event noted by the surgeon or on the recording from the operative microscope.
Conclusion: Real-time cochlear response telemetry is a powerful tool for identifying the
exact moment when intracochlear trauma may occur. Such feedback may prove valuable in
maximizing hearing preservation and improving a surgeon’s insertion technique.
M14: WHAT IS THE EFFECT OF RESIDUAL HEARING ON TOP-DOWN
REPAIR OF INTERRUPTED SPEECH IN COCHLEAR-IMPLANT USERS?
Jeanne Nora Clarke1,2, Etienne Gaudrain1,2,3, Deniz Başkent1,2
1 University of Groningen, University Medical Center Groningen, Dept Otorhinolaryngology, Groningen, NLD
2 University of Groningen, Research School of Behavioral and Cognitive Neurosciences, Groningen, NLD
3 Lyon Neuroscience Research Center, CNRS UMR 5292, INSERM U1028, University Lyon 1, Lyon, FRA
Cochlear Implant (CI) users do not perform as well as normal-hearing listeners in
understanding speech in noisy environments, even when performing well in silence. This may
be due to the fact that pitch, useful for perceptual organization of speech, is not well conveyed in
CIs. A previous study from our lab indeed showed that CI users show different patterns than
normal-hearing listeners for top-down restoration of degraded speech. We also showed that the
addition of pitch information to speech degraded via an acoustic CI simulation enhances both
the overall intelligibility and the top-down repair of interrupted speech. A similar pitch
enhancement can occur in bimodal CI users, where low frequency information, rich in voice
pitch cues, can be transmitted in the contralateral ear (with the hearing aid - HA). Provided the
complementary information from the two ears can be properly fused by the brain, the additional
use of the HA in combination with the CI should improve interrupted speech perception and
repair. One way to quantify the effect of top-down repair of speech is the phonemic restoration
effect, namely, the increased continuity percept and the improvement of intelligibility of
periodically interrupted speech when the interruptions are filled with noise. In a first experiment,
using such a paradigm, bimodal CI users were tested with interrupted sentences (with silence or
with noise), with three different duty cycles (proportion of ON/OFF speech segments) in two
modalities (CI only and bimodal CI + HA). In a second experiment, in the same conditions, the
same participants were asked to judge how continuous they perceived each interrupted
sentence. In general, we expect that access to pitch cues can help bimodal users to perform
better for speech perception in adverse listening situations. We expect restoration benefit to
happen at lower duty cycles with the addition of the HA, because access to pitch cues may
compensate for the shorter speech segments. We expect perceived continuity to increase with
the addition of the HA because pitch gives information about intonation and speaker
characteristics that make it easier to follow the voice stream. Preliminary data from 3
participants showed that, as expected, intelligibility improved with the duty cycle. Top-down
restoration benefit and bimodal benefit showed no clear trend yet. The interrupted sentences were perceived as more continuous with increasing duty cycle, and when the gaps were filled with noise rather than left silent, but no benefit from the HA was observed. The lack of benefit from the
addition of the HA may be due to the limited data, but might also suggest an improper fusion of
the different information from the two ears. Intelligibility and restoration scores combined with
perceived continuity data will allow us to conclude on what top-down mechanisms are favored
by access to the pitch cues via the HA.
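As an implementation aid, a minimal Python/numpy sketch of the interruption paradigm described above is given below; the gating rate, duty cycle, and noise scaling are illustrative placeholders rather than the study's actual parameter values.

    import numpy as np

    def interrupt_speech(speech, fs, rate_hz=1.5, duty_cycle=0.5, fill="silence", seed=0):
        """Periodically interrupt a speech waveform; gaps are silent or noise-filled."""
        t = np.arange(len(speech)) / fs
        gate = (t * rate_hz) % 1.0 < duty_cycle          # ON for duty_cycle of each cycle
        if fill == "noise":
            rng = np.random.default_rng(seed)
            filler = rng.standard_normal(len(speech))
            filler *= np.std(speech) / np.std(filler)    # match filler RMS to the speech
        else:
            filler = np.zeros(len(speech))               # silent interruptions
        return np.where(gate, speech, filler)

Varying duty_cycle and fill reproduces the silence-filled versus noise-filled conditions compared in the two experiments.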
M15: RESIDUAL HEARING PRESERVATION, SIMULATED AND OBJECTIVE BENEFITS FROM ELECTROACOUSTIC STIMULATION WITH THE EVO™/ZEBRA™ SYSTEM
Emilie Daanouni, Manuel Segovia-Martinez, Attila Frater, Dan Gnansia, Michel Hoen
Oticon Medical, Vallauris, FRA
Patients with severe-to-profound hearing loss who cannot benefit from conventional hearing aid (HA) amplification are candidates for cochlear implantation (CI). After implantation, however, some patients retain residual hearing, with thresholds better than 80 dB HL at low frequencies, usually up to 1 kHz. Recent developments in 'soft' CI surgical methods and improved electrode-array designs have substantially increased the likelihood of residual hearing preservation. This has led to an increase in the number of patients who could benefit from bimodal stimulation, i.e., the combination of acoustic stimulation at low frequencies and electrical stimulation over the whole frequency range. A growing body of experimental evidence suggests that bimodal stimulation can improve speech perception, particularly in complex listening environments, as well as music listening.
In the general context of developing bimodal stimulation devices, several challenges must be addressed: improving the preservation of residual hearing after CI surgery, optimizing the signal processing strategies implemented in speech processors to handle bimodal signals, and quantifying benefits, particularly for speech perception. The EVO®/Zebra® CI system (Oticon Medical, Vallauris, France) was developed to provide bimodal stimulation to CI patients with residual hearing.
We will report data concerning hearing preservation using the EVO™ electrode array combined with soft surgery techniques. EVO™ is a lateral wall electrode array whose mechanical properties aim at reducing insertion trauma and promoting optimal placement in the scala tympani. We will then present the digital signal processing pipeline implemented in the Zebra™ speech processor dedicated to bimodal stimulation. This system simultaneously performs the electric and acoustic signal processing based on a common frequency analysis. The electric path is similar to classical CI speech processors, whereas the acoustic path synthesizes a low-frequency signal restoring the temporal fine structure (TFS), thereby emphasizing temporal pitch cues. This signal processing scheme was evaluated in a simulation study in which signal quality was estimated with the Neural Similarity Index Measure (NSIM), using an auditory nerve computational model to generate spike patterns. We will show how this modelling study predicted better outcomes for bimodal stimulation than for electric-only or acoustic-only stimulation in the same simulator.
Finally, we will report results from clinical studies in which CI patients with low-frequency residual hearing were fitted with the Zebra™ speech processor. Objective measures consisted of speech perception tests run in quiet and in noise at different presentation levels, and a pitch discrimination task using harmonic and disharmonic intonation patterns. Speech results in noise showed significant improvement with bimodal stimulation when compared to electrical stimulation alone. Scores in the disharmonic intonation test were also improved in the bimodal condition, suggesting better coding of pitch cues that require phase locking.
Results will be discussed in the general context of developing and optimizing bimodal
stimulation devices.
M16: ELECTRO-ACOUSTIC COCHLEAR IMPLANT SIMULATION MODEL
Attila Frater1, Patrick Maas2, Jaime Undurraga3, Soren Kamaric Riis2
1 Oticon Medical, Vallauris, FRA
2 Oticon Medical, Kongebakken, DNK
3 University College London, London, GBR
Background. In recent years, the development of atraumatic cochlear implant (CI) electrode arrays and improved surgical techniques has facilitated the preservation of low-frequency (LF) hearing in patients with residual hearing. The benefits of residual LF hearing for CI users are well known, although knowledge of the underlying phenomena, such as the interaction of electric and acoustic stimuli, is limited. This work aims to establish a simulation model capable of representing the neural response at the auditory nerve level for combined electro-acoustic stimulation (EAS). With such a model it is possible to simulate and study, e.g., effects of timing between electrical and acoustic stimulation, aspects of acoustic versus electric compression, as well as effects of temporal or frequency masking between the two stimulation modes.
Methods. A computational model simulating EAS in CI users is built to mimic the response of the auditory nerve. For the acoustic path, the Matlab Auditory Periphery model is applied, with modifications to account for impaired hearing, and is furthermore extended to simulate a simple hearing aid. The electric path is represented by a model of an Oticon Medical CI processor augmented by an electric-to-auditory-nerve interface model. Auditory nerve spike patterns are generated with the phenomenological point process model of Joshua H. Goldwyn. This model is also responsible for connecting acoustic and electric stimulation at the auditory nerve level by means of an inverse distribution function technique applied to the superimposed firing intensities.
Results. For evaluating the developed EAS model, spike patterns are converted into
neurograms in which the typical biophysical responses of the auditory system for acoustic and
electric stimulation can be observed. Neurograms also serve as an input for a Neural Similarity
Index Measure (NSIM), an objective measure by which neural activities evoked by different
stimuli and model configurations can be compared. First results demonstrate the importance of timing and of the cross-over frequency range between electric and acoustic stimulation.
Conclusion. A model for simulating the auditory nerve spike pattern of joint acoustic and
electric stimulation has successfully been implemented. The model output has been verified to
produce representative auditory nerve response patterns for different stimulation modes
(acoustic, electric, EAS) on both artificial and VCV word stimuli. Moreover, the applicability of
NSIM as a measure between EAS and (normal hearing) acoustic model output to guide
optimization of timing and cross-over range between acoustic and electric “N-of-M” CIS type
stimulation has been demonstrated.
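As an illustration of the evaluation step, the following Python sketch computes an SSIM-style similarity between two neurograms in the spirit of the NSIM; the window size and stabilizing constants are illustrative assumptions, not the published NSIM settings.

    import numpy as np
    from scipy.ndimage import uniform_filter

    def nsim_like(ref, deg, c1=0.01, c3=0.005, win=3):
        """Intensity x structure similarity between two neurograms
        (time-frequency matrices of firing activity), averaged over
        local windows, in the style of the NSIM."""
        mu_r, mu_d = uniform_filter(ref, win), uniform_filter(deg, win)
        var_r = np.clip(uniform_filter(ref**2, win) - mu_r**2, 0, None)
        var_d = np.clip(uniform_filter(deg**2, win) - mu_d**2, 0, None)
        cov = uniform_filter(ref*deg, win) - mu_r*mu_d
        intensity = (2*mu_r*mu_d + c1) / (mu_r**2 + mu_d**2 + c1)
        structure = (cov + c3) / (np.sqrt(var_r*var_d) + c3)
        return float(np.mean(intensity * structure))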
M17: PATTERNS OF ELECTROPHONIC AND ELECTRONEURAL
EXCITATION
Mika Sato, Peter Baumhoff, Andrej Kral
Institute of AudioNeuroTechnology, Hannover Medical School, Hannover, DEU
The present study investigated the patterns of excitation in the inferior colliculus (IC) with
electrical stimulation of hearing and deafened cochleae to identify the locations where
electrophonic and electroneural responses are generated. Cochlear implantation was performed
through a cochleostomy in normal hearing guinea pigs under general anesthesia. A Neuronexus
double-shank 32-channel electrode array was stereotactically placed in the contralateral side of
the inferior colliculus parallel to the tonotopic axis. The electric stimuli were charge-balanced
biphasic electric pulses, 100 µs/phase. Thresholds, firing rates and dynamic ranges were
determined from unit activity recorded in the midbrain and were related to the acoustic
characteristic frequency (CF) of the unit. The cochlea was subsequently deafened with the
implant left in place and the stimulation was repeated in the deaf condition. The response
patterns to electrical stimuli before and after deafening were compared.
Acoustic stimulation revealed an ordered frequency representation along the shanks of
the electrode arrays, covering CFs in the range of 1 - 32 kHz. In hearing cochleae, two spots of
activity were observed: one at low CFs (~ 5 kHz) and one at high CFs (> 9 kHz). After
deafening, the thresholds of electrical stimulation increased and the electrical dynamic range
decreased significantly. The most extensive changes were observed in the low CF region. Moreover, with sinusoidal electrical stimuli, the apical excitation shifted with changing frequency of the electrical stimulus, whereas the basal one corresponded to the place of the stimulating electrode in the
cochlea.
The low threshold, the large dynamic range and the change with deafening suggest that
the low CF response was predominantly hair-cell mediated (electrophonic). This electrophonic
response appeared at the dominant frequency of the electrical stimulus. A direct neural
response with higher thresholds, small dynamic range and less change after deafening was
observed in the CF region >9kHz. Consequently, electrical stimulation of a hearing cochlea
results in two spatially separate regions of electrophonic and electroneural activation. Bipolar
stimulation revealed that the electrophonic response is more effectively generated if the
stimulating electrodes are more apical. In monopolar stimulation, differences in the properties of the
two stimulation sites were less pronounced than in bipolar stimulation.
Supported by Deutsche Forschungsgemeinschaft (Cluster of Excellence Hearing4all)
M18: COMPARISONS BETWEEN ELECTRICAL STIMULATION OF A
COCHLEAR-IMPLANT ELECTRODE AND ACOUSTIC SOUNDS PRESENTED
TO A NORMAL-HEARING EAR IN UNILATERALLY DEAFENED SUBJECTS
John M. Deeks1, Olivier Macherey2, Johan H. Frijns3, Patrick R. Axon4, Randy K.
Kalkman3, Patrick Boyle5, David M. Baguley4, Jeroen J. Briaire3 and Robert P. Carlyon1
1 MRC Cognition and Brain Sciences Unit, Cambridge, UK
2 Laboratoire de Mécanique et d'Acoustique, CNRS, Marseille, FRA
3 Leiden University Medical Centre, Leiden, Netherlands
4 Addenbrookes NHS Trust, Cambridge University Hospitals, Cambridge, UK
5 Advanced Bionics, Great Shelford, UK
An established method of estimating the place-of-excitation elicited by stimulation of a
cochlear-implant (CI) electrode involves pitch matches between CI stimulation and acoustic
sounds presented to the contralateral ear. Some controversy exists over the extent to which the
results match Greenwood’s frequency-to-place function, and on whether the perceived pitch of
electrical stimuli is affected by prolonged exposure to CI stimulation using the patient’s clinical
map. Carlyon et al (2010) described a method that minimised the influence of temporal cues to
pitch and introduced a number of checks for the influence of non-sensory biases. They reported
matches that were stable over time and that were roughly consistent with Greenwood’s
predictions, but presented data from only three unilaterally deafened subjects. We report data
from three further subjects with normal hearing in the unimplanted ear, and where 12-pps
electric pulse trains were compared to 12-pps acoustic pulse trains bandpass filtered at a range
of center frequencies. For each electrode, matches were obtained with two different acoustic
ranges in order to check for non-sensory biases. The pitch matches, obtained using the method
of constant stimuli, were not overly influenced by the acoustic range, and were typically within
half an octave of Greenwood’s predictions. One subject from Carlyon et al’s 2010 study was re-tested after more than two years of clinical use, using a slightly different method; the
matches were very close to those obtained originally, and markedly different from those
corresponding to her clinical map. Finally, three subjects compared 1031-pps single-electrode
pulse trains to different acoustic stimuli whose spectra were centered on the pitch matches
obtained with 12-pps pulse trains. These comparisons were obtained for two electrodes, each
stimulated in either monopolar or tripolar mode. Four of the acoustic stimuli were passed
through a 1/3rd-octave bandpass filter; these were 100- and 200-Hz harmonic complexes, a log-spaced inharmonic complex, and a noise. The other two were a pure tone and a 1-octave-wide
noise. In each trial the electrical stimulus was presented twice, followed each time by a different
acoustic sound. The task was to report which pair sounded most similar. Preferences differed
across subjects and electrodes but were consistent across sessions separated by several
months, and, for a given subject and electrode, were similar for monopolar and tripolar
stimulation. Two subjects generally reported the 1/3rd octave-filtered sounds as most similar to
the electrical stimulation, whereas the third judged the pure tone as most similar.
Carlyon et al., 2010, J. Assoc. Res. Oto., 11, 625-640
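The acoustic comparison stimuli can be sketched as follows; the 1/3-octave bandwidth matches the filtering described above, while the duration, sampling rate, and filter order are illustrative assumptions.

    import numpy as np
    from scipy.signal import butter, sosfiltfilt

    def filtered_pulse_train(rate_pps=12.0, center_hz=1000.0, dur_s=1.0, fs=44100):
        """Acoustic pulse train bandpass-filtered around a centre frequency,
        for pitch matching against electric pulse trains."""
        x = np.zeros(int(dur_s * fs))
        idx = (np.arange(0, dur_s * rate_pps) / rate_pps * fs).astype(int)
        x[idx] = 1.0                                          # unit impulses at rate_pps
        lo, hi = center_hz * 2**(-1/6), center_hz * 2**(1/6)  # 1/3-octave band edges
        sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        return sosfiltfilt(sos, x)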
M19: SINGLE-SIDED DEAFNESS COCHLEAR-IMPLANT PERCEPTION AND
SIMULATION: LOCALIZATION AND SPATIAL-MASKING RELEASE
Coral Dirks1, Peggy Nelson1, Andrew Oxenham1,2,3,4
1 University of Minnesota, Twin Cities, Minneapolis, MN, USA
2 Department of Psychology, University of Minnesota
Patients with single-sided deafness (SSD) report significant difficulty understanding
speech in noise and localizing sound, especially when the sounds originate on the deaf side.
Recently, some SSD patients have received a cochlear implant (CI) in their deaf ear. Initial
reports suggest that patients’ localization and speech perception may improve with CIs but few
formal studies exist. In this study, we examine whether and how a CI improves SSD patients’
ability to localize sound and understand speech in natural, spatial settings. Listeners with SSD
and at least 6 months of CI listening experience completed each task under three listening
conditions in the sound field: with their normal-hearing (NH) ear only, with a CROS hearing aid,
and with the combination of their NH ear and CI using everyday clinical map settings. Age-matched NH listeners completed each task in the sound field with and without one ear blocked and
masked (to simulate SSD) and under headphones with or without vocoded stimuli delivered to
the “poorer” ear (to simulate SSD and listening through a CI).
Localization tests were run using stimuli designed to provide different monaural and
binaural localization cues. Stimuli included: (1) Broadband, lowpass-filtered, and highpass-filtered speech sounds (NU-6 words) to provide baseline performance; (2) Unmodulated and
amplitude-modulated low-frequency pure tones to test the perception of low-frequency interaural
time differences (ITDs) in the temporal fine structure and temporal envelope; (3) Unmodulated
and amplitude-modulated high-frequency complex tones to test the perception of interaural level
differences (ILDs) and temporal-envelope ITDs, respectively.
Speech recognition in noise was measured for three spatial configurations (S0N0,
S0N+60, S0N-60) using two masker types: speech-shaped noise and two-talker babble. In
addition, masking release via the precedence effect was measured with two-talker babble by
adding a copy of the masker to the S0N0 condition, presented at +60° or -60°, with a lead time
of 4 ms. The speech conditions were repeated under lowpass- and highpass-filtering to test the
contributions of different acoustic cues.
Preliminary results from one SSD+CI patient wearing a MED-EL device with the FSP strategy show that the CI partially restored the patient’s ability to localize sound and that the patient primarily relied on ILD cues, rather than ITDs in the temporal envelope or fine structure. Pilot data from four NH listeners in vocoded simulations of the first SPIN experiment suggest that, when a CI is added, it does not interfere with performance in any condition; however, a substantial improvement in speech-recognition threshold (~5 dB) was observed only when speech-shaped noise, not babble, was presented on the better-ear side. Overall, the results should provide
valuable information on the mechanisms of sound localization and spatial speech understanding
in this relatively new population of subjects, and may help guide whether and how CIs are
considered as a more general future treatment for SSD.
M20: PITCH MATCHING PSYCHOMETRICS IN SINGLE-SIDED DEAFNESS
WITH PLACE DEPENDENT STIMULATION RATE
Tobias Rader, Julia Doege, Youssef Adel, Tobias Weissgerber, Uwe Baumann
University Hospital Frankfurt, Frankfurt am Main, DEU
Introduction: Pitch perception in cochlear implant (CI) users is mostly dependent on the
place of stimulation. Individual variances can be observed as a function of cochlear size,
electrode carrier length and insertion depths, and intrascalar location of the stimulating
electrodes. The fitting of CI audio processors generally does not consider the place of
stimulation, which results in a place-rate mismatch and potentially worse speech understanding.
In this study, the impact of using place dependent stimulation rates on pitch perception was
examined.
Material and Methods: A group of 11 unilaterally implanted CI users (CI experience 7-40
months; age 28-70, median 45 years) with acquired single-sided deafness and normal hearing
in the contralateral ear (PTA of 125-4000Hz < 35 dB HL) were recruited. Implant devices were
MED-EL (Innsbruck, Austria) SONATA and CONCERTO with either FLEXSOFT or FLEX28
electrodes, and with insertion depths up to 649 degrees insertion angle (median 552 degrees).
The task of the subjects was to adjust the frequency of a sinusoid presented in the non-implanted ear by turning a knob until they perceived the same pitch as the one
produced by a reference stimulus in the implanted ear. Acoustic and electric stimuli were
presented in alternating order. This was done for each of the six most apical electrodes, with six
pitch matching trials per electrode. Electrical stimulation rate was set for each electrode place by
means of individual insertion angle measurements from postoperative imaging of the cochlea.
The formula of Greenwood (1990) was combined with the findings of Stakhovskaya et al. (2007)
to calculate stimulation rates for each electrode. The RIB2 interface (Research Interface Box II,
Institute for Ion and Applied Physics, Innsbruck, Austria) was used to directly control the implant
devices.
Results:
1) Electrode place-pitch function: In contrast to previous findings with fixed stimulation rate
(Baumann and Rader 2011), the median of matched acoustic frequency using place dependent
stimulation rate was in good accordance with the exponential predictions of the Greenwood
function.
2) Correlation between pitch and rate: The correlation between the median matched acoustic frequency and the place-dependent stimulation rate, calculated on a double-logarithmic scale, was highly significant (Pearson r = 0.937; p < 0.001).
3) Pitch matching accuracy: At the most apical electrode, the best performing subjects showed
pitch matching accuracy comparable to normal-hearing subjects. The ratio between variances of
single pitch matches (interquartile range) and the median of all trials showed an accuracy of up
to 3%.
Conclusion: Our findings showed that place-dependent rates of electrical stimulation can achieve accurate pitch matching in single-sided deafness cases. The use of optimized stimulation rates may support better music perception and better speech perception in noise.
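For reference, the Greenwood (1990) place-frequency map underlying the place-dependent rates can be written as a one-line function; the example assumes the insertion-angle-to-place conversion of Stakhovskaya et al. (2007) has already been applied, since its details are not reproduced here.

    def greenwood_frequency(x, A=165.4, a=2.1, k=0.88):
        """Greenwood (1990) place-frequency map for the human cochlea.
        x: relative distance from the apex (0 = apex, 1 = base).
        Returns the characteristic frequency in Hz."""
        return A * (10 ** (a * x) - k)

    # Example: a place at 75% of the cochlear length from the apex maps to ~6.1 kHz
    print(greenwood_frequency(0.75))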
M21: MUSIC ENJOYMENT IN SSD PATIENTS: THE SYNERGISTIC EFFECT
OF ELECTRIC AND ACOUSTIC STIMULATION
David M Landsberger1, Katrien Vermeire2, Natalia Stupak1, Annette M. Zeman3, Paul Van
de Heyning3, Mario A. Svirsky1
1 New York University School of Medicine, New York, NY, USA
2 Long Island Jewish Medical Center, New Hyde Park, NY, USA
3 University Hospital Antwerp, Antwerp, BEL
Although there have been many studies of music perception with a cochlear implant (CI),
musical sound quality has been difficult to quantify. For example, ratings of music enjoyment on
a scale of 1-100 by deaf CI users are difficult to interpret. The emergence of CI users with
normal contralateral hearing presents a unique opportunity to assess music enjoyment
quantitatively in CI users by referencing it to that obtained in a normal ear, which provides a
known and readily interpretable baseline.
In the present study, we investigated sound quality of music in Single-Sided Deafened
(SSD) subjects with a CI using a modified version of the MUSHRA (MUltiple Stimuli with Hidden
Reference and Anchor) method. Listeners rated their enjoyment of brief musical segments on a
scale of 0-200 relative to a reference stimulus, defined as 100. The reference was the
unmodified musical segment presented to the normal hearing ear only. An “anchor” stimulus
(defined as 0) was also provided only to the normal hearing ear. The anchor was the same
musical segment processed with a 6-channel noise vocoder simulating a 6.5 mm shift. Stimuli
consisted of acoustic only, electric only, acoustic and electric, as well as a number of conditions
with low pass filtered acoustic stimuli to simulate varying degrees of hearing loss and bimodal
stimulation. Acoustic stimulation was provided by headphone to the normal ear and electric
stimulation was provided by a direct connect cable to the subject’s clinical speech processor.
Ten out of 11 subjects rated combined electric and acoustic stimulation the best, with trimmed mean ratings of 133 for “Ring of Fire” and 120 for “Rhapsody in Blue”. The combination of acoustic
and electric stimulation was significantly better than unilateral acoustic processing alone. The
sound quality of electric stimulation alone was much worse than acoustic stimulation alone. In all
tested conditions, adding electric hearing to acoustic hearing provided improvement in sound
quality.
In summary, music enjoyment from electric stimulation was extremely poor relative to an
interpretable normal-hearing baseline. Interestingly, adding the electric stimulation actually
enhanced the sound quality of unilateral acoustic stimulation. This effect was also observed with lowpass-filtered, acoustically presented musical segments, suggesting that similar results may be
found for bimodal CI users.
Support provided by the NIH/NIDCD (R01 DC012152, PI: Landsberger, R01 DC03937 and
DC011329, PI: Svirsky), a TOPOF grant (PI: Van de Heyning), a MED-EL Hearing Solutions
grant (PI: Landsberger), and Cochlear Corporation (PI: Roland).
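As a small illustration of the rating analysis, trimmed means of the kind reported above can be computed with scipy; the ratings below are hypothetical values on the 0-200 MUSHRA-style scale, and the 10% trimming proportion is an assumption.

    import numpy as np
    from scipy.stats import trim_mean

    # Hypothetical ratings (reference = 100, anchor = 0) from 11 listeners
    # for one musical segment in the combined electric + acoustic condition:
    ratings = np.array([150, 140, 128, 135, 120, 145, 160, 110, 133, 95, 142])

    # Trim 10% from each tail before averaging to reduce the influence of outliers
    print(trim_mean(ratings, proportiontocut=0.1))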
M22: RECORDING LOW-FREQUENCY ACOUSTICALLY EVOKED
POTENTIALS USING COCHLEAR IMPLANTS
Youssef Adel1, Tobias Rader1, Andreas Bahmer2, Uwe Baumann1
1 University Hospital Frankfurt, Frankfurt am Main, DEU
2 University Hospital Würzburg, Würzburg, DEU
Introduction: Patients with severely impaired high-frequency hearing and residual low-frequency hearing cannot be sufficiently accommodated with conventional hearing aids. Using
hearing preservation electrode designs and surgical techniques, these cases can be provided
with cochlear implants (CIs), thereby facilitating ipsilateral electric and acoustic stimulation
(EAS; review in von Ilberg et al. 2011, Audiol Neurotol 16(Supp2):1-30). Still, hearing
preservation is usually partial, and long-term degradation has been observed. Surgical monitoring
and clinical evaluation of low-frequency residual hearing are therefore needed.
Electrocochleography (ECochG) comprises neural and receptor potentials, which have been shown to
be suitable for this purpose (Adunka et al. 2010, Otol Neurotol, 31(8):1233-1241).
Methods: The MED-EL (Innsbruck, Austria) CI telemetry system incorporated in recent
implant hardware (Pulsar, Sonata, Concerto, Synchrony) is capable of measuring electrically
evoked compound action potentials (ECAP) and features a recording window length of 1.7 ms.
However, low-frequency acoustically evoked ECochG requires longer measurement windows,
for example to obtain cochlear microphonic responses. To expand the recording window, an
algorithm was developed using the RIB2 interface (Research Interface Box II, Institute for Ion
and Applied Physics, Innsbruck, Austria). The algorithm uses repeated measurements and
variable concatenation of implant buffer recordings. This was integrated into a system featuring
synchronized recording of auditory potentials evoked by a triggered acoustic stimulus. The
recording system characteristics and boundary conditions were determined by in-vitro
measurements using a MED-EL Pulsar implant.
Results: Recordings with a total window length of 15 ms were achieved using a variable
recording offset. The algorithm allows longer recording windows, but recording time increases
proportionally. In-vitro testing using the recording system successfully reproduced sinusoidal
waveforms in the frequency range 100 Hz to 2 kHz, as well as a cardiac signal simulation. Noise
characteristics and recording time as a function of total window length were determined.
Preliminary measurements in one EAS user were obtained.
Conclusion: Recording of low-frequency auditory evoked potentials using the MED-EL CI
telemetry system is feasible. Our recording method allows intracochlear measurements of
ECochG signals.
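The window-expansion algorithm can be sketched as follows; trigger_and_record is a hypothetical callback standing in for the RIB2-controlled stimulus presentation and buffer readout, and averaging of repeated measurements is omitted for brevity.

    import numpy as np

    def record_long_window(trigger_and_record, total_ms=15.0, buffer_ms=1.7):
        """Stitch a long ECochG recording from repeated short telemetry buffers.
        trigger_and_record(offset_ms) plays the triggered acoustic stimulus and
        returns the implant buffer recorded starting offset_ms after stimulus
        onset (the real RIB2 interface is not modeled here)."""
        segments, offset = [], 0.0
        while offset < total_ms:
            segments.append(trigger_and_record(offset))  # one buffer per repetition
            offset += buffer_ms                           # advance the recording offset
        return np.concatenate(segments)

As the abstract notes, recording time grows in proportion to the total window length, since each additional buffer requires another stimulus repetition.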
M23: SINGLE-SIDED DEAFNESS WITH INCAPACITATING TINNITUS TREATED WITH COCHLEAR IMPLANTATION: PRELIMINARY RESULTS
Dan Gnansia1, Christine Poncet-Wallet2, Christophe Vincent3, Isabelle Mosnier4, Benoit
Godey5, Emmanuel Lescanne6, Eric Truy7, Nicolas Guevara8, Bruno Frachet2
1 Oticon Medical, Vallauris, FRA
2 Rothschild Hospital, Paris, FRA
3 Roger Salengro University Hospital, Lille, FRA
4 Pitié-Salpêtrière University Hospital, Paris, FRA
5 Pontchaillou University Hospital, Rennes, FRA
6 Tours University Hospital, Tours, FRA
7 Edouard Herriot University Hospital, Lyon, FRA
8 Pasteur University Hospital, Nice, FRA
Tinnitus treatment is a real challenge when tinnitus occurs together with deafness in the same ear. It has been shown that treating deafness with cochlear implantation can reduce tinnitus, in both bilateral and unilateral deafness. The primary objective of the present study is to evaluate whether this tinnitus reduction is mainly due to electrical stimulation of the primary auditory pathways, or to activation of higher levels of the auditory pathways.
Twenty subjects have been included so far: all were adults with single-sided deafness, incapacitating tinnitus on the same side, and normal or sub-normal contralateral hearing. The inclusion criteria for incapacitating tinnitus were a visual analog scale rating of at least 80% annoyance and a Tinnitus Handicap Inventory (THI) score of at least 58. Candidates with severe depressive syndrome were excluded.
After inclusion, all subjects were implanted with an Oticon Medical Digisonic SP cochlear
implant system. During the first month following activation, all subjects were stimulated with a constant white noise delivered through the auxiliary input of the processor (the microphones were deactivated). For the next 12 months, they used conventional stimulation from the cochlear implant.
Tinnitus severity and annoyance were evaluated over time using questionnaires and visual scales. Speech performance, and especially binaural integration, was evaluated 6 and 12 months after the start of conventional stimulation.
Preliminary results show that tinnitus is only modestly reduced during constant white-noise stimulation, whereas major tinnitus reduction occurs after the introduction of conventional stimulation. Six-month speech test results from a limited number of subjects showed some benefit for binaural hearing in noise, even though almost no speech could be identified with the cochlear implant alone.
These preliminary results suggest that 1) tinnitus in single-sided-deafened patients can be reduced by cochlear implantation, but this reduction may be related to activation of higher levels of the auditory pathways by a meaningful signal rather than to electrical stimulation alone, and 2) cochlear implantation for single-sided deafness can improve binaural speech perception in noisy backgrounds.
M24: COCHLEAR IMPLANT OUTCOMES IN ADULTS WITH LARGE HEARING
ASYMMETRY
Jill B. Firszt1, Ruth M. Reeder1, Laura K. Holden1, Noel Dwyer1, Timothy E. Hullar2
1 Washington University School of Medicine, St. Louis, MO, USA
2 Oregon Health and Science University, Portland, OR, USA
Our studies of cochlear implant (CI) recipients who presented with asymmetric hearing loss, that is, severe-to-profound hearing loss in the implanted poor ear and better hearing in the contralateral ear, which continued to use a hearing aid, have shown promising results. In particular, use of the CI and hearing aid together improved understanding of soft speech, understanding of speech in noise, and sound localization in postlingually deafened adults. The current prospective, longitudinal study examines CI outcomes in adults with greater hearing asymmetry, i.e., moderate to profound hearing loss in the implanted ear and normal or
mild hearing loss in the better ear. Results were analyzed from 18 adults with pure-tone
averages (PTA at .5, 1 and 2 kHz) less than 45 dB HL in the better ear and PTAs greater than
70 dB HL in the poor ear. Due to the large hearing asymmetry and inclusion of participants with
normal hearing in the better ear, amplification used by participants varied. A number of
participants were tested with and without BiCROS/CROS amplification prior to implantation.
Pre-implant, participants had no word understanding in the poor ear and reported extensive
deficits for all three domains of the Speech, Spatial and Qualities of Hearing Scale (SSQ).
Post-implant, those with mild hearing loss in the better ear wore a hearing aid in that ear, while
others had normal hearing and no amplification. At 6 months post-implant, speech recognition in
quiet (CNC, TIMIT at 50 dB SPL, AzBio at 60 dB SPL) and in noise (TIMIT at 60 dB SPL +8
SNR) for the CI ear alone was significantly improved compared to pre-implant. Comparison of
the pre-implant listening condition to the post-implant condition indicated significant benefits for
understanding speech in noise when the noise originated from the better-ear side (BKB SIN) or
surrounded the listener (R-SPACE™) and speech was from the front. No deficit in performance
was seen for sentence recognition when noise originated from the front (TIMIT in noise) or from
the CI side (BKB SIN) and speech was from the front. Post-implant sound localization (15
loudspeaker array) improved significantly compared to pre-implant as did perceived
communication function (SSQ). Similar to traditional CI recipients, individual variability was
present on all measures. Unlike traditional CI recipients, neither hearing thresholds in the better
ear or length of deafness in the poor ear correlated with post-implant performance. Notably, for
the current study population, speech processor programming issues related to loudness
balancing and unique between ear (acoustic and electric) percepts were observed. Overall,
results suggest postlingual adults with a moderate to severe hearing loss in one ear may benefit
from cochlear implantation even when the contralateral ear has mild hearing loss or normal
hearing. However, this audiometric profile and consideration for cochlear implantation generate
novel aspects concerning evaluation of the effects of asymmetric hearing loss and subsequent
treatment, counselling, and programming that require specific attention and will be discussed.
Supported by NIH/NIDCD R01DC009010.
M25: THE EFFECT OF INTERAURAL MISMATCHES ON BINAURAL
UNMASKING IN VOCODER SIMULATIONS OF COCHLEAR IMPLANTS FOR
SINGLE-SIDED DEAFNESS
Jessica M. Wess1, Douglas S. Brungart2, Joshua G.W. Bernstein2
1 University of Maryland, College Park, MD, USA
2 Walter Reed National Military Medical Center, Bethesda, MD, USA
Communicating in complex environments requires perceptually separating the target talker from the interfering background. Whereas normal-hearing (NH) listeners are able to
capitalize on binaural cues to use spatial separation between competing sound sources to better
hear the target (“binaural unmasking”), individuals with single-sided deafness (SSD) lack this
ability. Recent results from our laboratory suggest that a cochlear implant (CI) can facilitate
speech-stream segregation for individuals with SSD in situations with substantial informational
masking. However, the amount of binaural unmasking is variable across individuals and is much
less than for NH listeners.
Vocoder simulations presented to NH listeners were used to examine how spectral resolution, spectral mismatches, and latency differences between the ears might limit binaural unmasking for CI listeners with SSD. Listeners identified target speech masked by two same-gender interferers and presented unprocessed to the left ear. The right ear was presented with
either silence or a noise-vocoded mixture containing a copy of the masker signals, thereby
testing whether listeners could integrate the masker signals across the ears to better hear the
monaural target. Experiment 1 introduced interaural spectral mismatch by frequency-shifting the
vocoder synthesis filters (±1, 2, 4 or 7 auditory-filter equivalent rectangular bandwidths, ERBs).
Additional experiments examined the interaction between spectral mismatch and the number of
vocoder channels (3-10) or the presence of interaural temporal delays (±6-100 ms).
Binaural unmasking was found to decrease substantially for spectral mismatch of 4 ERBs
(3.6 mm) or more. Although the magnitude of binaural unmasking was largest for conditions with
greater spectral resolution, binaural unmasking was more immune to spectral shifts in conditions
with fewer, broader channels, such that better resolution tended to harm performance when
there was an interaural spectral mismatch. Temporal mismatches reduced binaural unmasking,
but mainly for large, non-physiologically plausible delays (>24 ms) where the vocoded maskers
caused interference rather than unmasking. Both the interference and the unmasking effects
were reduced by spectral mismatch, suggesting that they arise from the same grouping process.
Taken together, these results suggest that CI listeners with SSD might receive more
binaural unmasking with a spectrally aligned frequency map. Current CI technology (on the
order of eight effective channels) is likely adequate for maximal masking release. However, in
the absence of a spectrally aligned map, reducing the spectral resolution of the CI could yield
almost as much unmasking while offering greater immunity to spectral mismatch.
[Supported by a grant from the Defense Medical Research and Development Program]
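The interaural spectral shifts can be illustrated with the Glasberg and Moore (1990) ERB-number scale; the sketch below is a plausible way to shift vocoder synthesis-filter centre frequencies by a fixed number of ERBs, not the authors' actual code.

    import numpy as np

    def hz_to_erb_number(f):
        """Glasberg & Moore (1990) ERB-number (Cam) scale."""
        return 21.4 * np.log10(4.37 * f / 1000.0 + 1.0)

    def erb_number_to_hz(e):
        return (10.0 ** (e / 21.4) - 1.0) * 1000.0 / 4.37

    def shift_filters(center_freqs_hz, shift_erbs):
        """Shift synthesis-filter centre frequencies by a fixed number of ERBs
        to introduce an interaural spectral mismatch."""
        return erb_number_to_hz(hz_to_erb_number(np.asarray(center_freqs_hz)) + shift_erbs)

    # Example: shifting a 1-kHz channel by +4 ERBs moves it to roughly 1.66 kHz
    print(shift_filters([1000.0], 4.0))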
M26: THE INFLUENCE OF RESIDUAL ACOUSTIC HEARING ON AUDITORY
STREAM SEGREGATION IN A COMPETING-SPEECH TASK
Ashley Zaleski-King1, Allison Heuber1, Joshua G.W. Bernstein2
1 Gallaudet University, Washington, DC, USA
2 Walter Reed National Military Medical Center, Bethesda, MD, USA
With relaxed candidacy criteria and the advent of hybrid devices, many of today’s
cochlear implantees have access to residual acoustic hearing that complements the speech
information provided by the cochlear implant (CI). It has been well established that this residual
acoustic hearing can benefit CI listening by providing additional low-frequency voicing and
prosodic cues that are typically poorly relayed through the CI. Low-frequency acoustic
information might also help the listener to better organize the auditory scene by providing pitch cues for perceptually separating concurrent speech streams. The benefit of residual
acoustic hearing (the “bimodal benefit”) is generally evaluated by comparing performance with
both the CI and acoustic hearing to performance with the CI alone. With this approach, it is
difficult to distinguish between the contributions of low-frequency speech cues and enhanced
speech segregation to the bimodal benefit, because the acoustic cues required for both are
contained within the residual acoustic speech signal.
The purpose of this study was to investigate whether contralateral residual acoustic
hearing could help CI listeners to perceptually segregate concurrent voices. We employed a
paradigm based on the Coordinate Response Measure where target and interfering phrases
containing competing call signs, colors and numbers were delivered via direct audio input to the
CI. Listeners were asked to identify the color and number keywords spoken by the target talker.
Either silence or a copy of the interfering speech signals was delivered to the acoustic ear via a
circumaural headphone. Because no target speech was presented to the acoustic ear, this
paradigm eliminated the possibility of a bimodal benefit deriving from additional target-speech
information. Therefore, any bimodal benefit observed could be attributed to source-separation
cues provided by the acoustic masker signal.
Preliminary results from two (of eight planned) bimodal listeners and four normally
hearing listeners presented with simulations of bimodal listening (vocoder plus low-pass filtered
speech) show an effect of presenting the masker signals to the acoustic-hearing ear that
depended on the target-to-masker ratio (TMR). At negative TMRs, listeners benefitted from the
addition of contralateral acoustic information. At positive TMRs, the addition of the masker
signals to the contralateral ear impaired performance.
Overall, these preliminary results suggest that in a competing-talker situation, the
availability of residual acoustic hearing can facilitate auditory-stream segregation, but can also
be a liability by producing interference. If the aspects of the acoustic signal that facilitate
streaming can be identified and are different from those causing the interference, it might be
possible to process the acoustic signal to favor the beneficial effect.
[Supported by Defense Health Affairs in support of the Army Hearing Program (AZ) and by a
grant from the Defense Medical Research and Development Program (JB)].
M27: ELECTRO-ACOUSTIC INTERACTIONS IN COCHLEAR IMPLANT
RECIPIENTS DERIVED USING PHYSIOLOGICAL AND PSYCHOMETRIC
TECHNIQUES
Kanthaiah Koka, Leonid M Litvak
Advanced Bionics, Valencia, CA, USA
Objectives: (1) To develop an objective technique for measuring interactions between
electric and acoustic stimuli in a combined electro-acoustic stimulation (EAS). (2) To compare
the objective interaction measures with behavioral responses.
Background: The increased incidence of acoustic hearing in CI candidates has created a need for combined electric and acoustic stimulation of the cochlea. Insights
into interactions between the two stimulation modes may allow one to optimize the combined
system. Electrocochleographic (ECoG) potentials through CIs may be a practical technique for
evaluating such interactions.
Methods: We developed a technique to post-operatively measure ECoG potentials using
Advanced Bionics CI intra-cochlear electrodes. Specifically, acoustic stimulus presentation was
synchronized with intra-cochlear recording, and acoustic ECoG potentials were measured and
compared between acoustic-alone stimulation and acoustic stimulation in the presence of electrical stimuli. In addition, psychometric acoustic thresholds were measured in either the presence or absence of an electrical masker using a 3IFC adaptive tracking procedure.
Results: The current method successfully measured acoustic ECoG responses in the presence of an electrical stimulus. Response amplitudes to acoustic stimuli were significantly altered by the presence of electrical stimulation. The degree of interaction varied with acoustic frequency and with the electrode location of the electrical stimulation. Both ECoG potentials and psychophysical thresholds
were affected most when the electrical stimulation was presented on the most apical electrode.
Conclusions: Post-operative cochlear potentials with AB’s Advantage CI implant can
provide an objective method for understanding the electro-acoustic interactions in an EAS
device. These objective EAS interactions may provide a practical technique to optimize the EAS
device for individual patients.
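A 3IFC adaptive track of the kind mentioned in the Methods can be sketched as follows; the 2-down/1-up rule (converging near 70.7% correct), step size, and reversal count are illustrative assumptions, and run_trial is a hypothetical callback that presents one trial.

    import numpy as np

    def staircase_3ifc(run_trial, start_db=60.0, step_db=4.0, n_reversals=8):
        """2-down/1-up adaptive track for a 3-interval forced-choice task.
        run_trial(level_db) presents one 3IFC trial at the probe level and
        returns True if the listener chose the correct interval."""
        level, down_count, direction, reversals = start_db, 0, 0, []
        while len(reversals) < n_reversals:
            if run_trial(level):
                down_count += 1
                if down_count == 2:              # two correct in a row: step down
                    down_count = 0
                    if direction == +1:
                        reversals.append(level)  # direction change: log a reversal
                    direction = -1
                    level -= step_db
            else:                                # one incorrect: step up
                down_count = 0
                if direction == -1:
                    reversals.append(level)
                direction = +1
                level += step_db
        return np.mean(reversals[-6:])           # threshold: mean of late reversals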
M28: A MODULAR SIGNAL PROCESSING PROGRAM TO SIMULATE AND
TEST THE SOUND QUALITY OF A COCHLEAR IMPLANT
Austin M Butts, Sarah J Cook, Visar Berisha, Michael F Dorman
Arizona State University, Tempe, AZ, USA
As part of an ongoing effort to create and refine models of the sound quality of a cochlear
implant, we have created an interactive program to process and play speech samples in
experiments with single-sided deaf cochlear implant patients. Traditional perceptual models
such as sine and noise vocoders have been shown to yield comparable performance levels in
speech-related tasks. However, they do not reproduce with high fidelity the perceptual qualities of CI speech as reported by single-sided deaf CI patients. The goal of this
project was to design a program to process speech signals using function representations of
various perceptual models and signal processing procedures. Each function's header conforms
to the same standard, enabling us to test several models in a single session. The program
processes short (<10s) audio files on-demand (<1s) in the MATLAB language and environment.
The structure allows for easy addition of new processing functions, and it can handle a variety of
parameter inputs, including those needed for batch processing. Presently, it implements the
following: audio playback, first-order filters, channel filters, sine vocoder, noise vocoder,
'frequency squeeze' (remap channels to different, often narrower frequency range), and 'spectral
smearing' (broaden STFT magnitude information while preserving phase). The GUI provides
intuitive forms of data entry for corresponding processing functions. It is also robust to nonsense
inputs and issues pre- and post-processing warning messages when necessary. Several
perceptual matching experiments have been completed using the features of this program with
single-sided deaf patients. The results of those experiments will be described.
Funding source(s): NIDCD R01 DC 010821
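The program itself is written in MATLAB; as an illustration of one of the listed modules, the following Python sketch implements a minimal sine vocoder (the channel count, band edges, and envelope cutoff are illustrative, not the program's defaults).

    import numpy as np
    from scipy.signal import butter, sosfiltfilt, hilbert

    def sine_vocode(x, fs, n_channels=8, lo=100.0, hi=8000.0, env_cut=50.0):
        """Minimal sine vocoder: analysis filterbank -> envelope extraction ->
        envelope-modulated sine carriers at the channel centre frequencies."""
        edges = np.geomspace(lo, hi, n_channels + 1)       # log-spaced channel edges
        env_sos = butter(2, env_cut, btype="low", fs=fs, output="sos")
        t = np.arange(len(x)) / fs
        out = np.zeros(len(x))
        for f1, f2 in zip(edges[:-1], edges[1:]):
            sos = butter(4, [f1, f2], btype="bandpass", fs=fs, output="sos")
            band = sosfiltfilt(sos, x)
            env = sosfiltfilt(env_sos, np.abs(hilbert(band)))  # smoothed envelope
            out += np.clip(env, 0, None) * np.sin(2*np.pi*np.sqrt(f1*f2)*t)
        return out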
M29: THE DEVELOPMENT OF MUSICALITY IN CHILDREN WITH COCHLEAR
IMPLANTS
Meng Wang, Xueqing Chen, Tianqiu Xu, Yan Zhong, Qianqian Guo, Jinye Luo
Beijing Tongren Hospital, Capital Medical University, Beijing Institute of Otolaryngology, Beijing, CHN
Objectives. The aims of this study were to evaluate the musicality of children with cochlear implants and children with normal hearing using the Musical Ears Evaluation Form for Professionals questionnaire, to compare musicality between the two groups, to establish normative comparison data on musicality development for children with hearing loss, and to provide a basis for an appropriate hearing and speech rehabilitation program.
Methods. All children who participated in this study were divided into two groups: a cochlear implant group (CI group) and a normal-hearing control group (NH group). The 109 children in the CI group were diagnosed with prelingual, severe to profound hearing loss; all were implanted unilaterally before 3 years of age and evaluated within 4 years after cochlear implantation. The age at implantation ranged from 8 to 36 months with a mean of 18 months. The hearing age at evaluation ranged from 1 to 46 months with a mean of 15 months. The 180 children with normal hearing in the NH group were all under 4 years old. The age at evaluation ranged from 1 to 47
months with a mean of 15 months. The Musical Ears Evaluation Form for Professionals questionnaire was used to evaluate musicality for all children in this study, including the abilities of singing; recognizing songs, tunes, and timbre; and responding to music and rhythm. All
statistical analyses were executed using the IBM SPSS 20 statistical software with a criterion of
statistical significance set at p<0.05.
Results. The scores for overall musicality in children both with cochlear implants and with
normal hearing showed significant improvement over time (P<0.05). The scores for the three
subcategories of musicality in children both with cochlear implants and with normal hearing also
showed significant improvement over time (P<0.05). The regression functions for predicting scores (y) from age (x) were: overall musicality, y = 1.8x − 0.008x² + 3.6 (R² = 0.874, P = 0.000); singing, y = 0.5x − 0.001x² + 3.0 (R² = 0.831, P = 0.000); recognizing songs, tunes and timbre, y = 0.6x − 0.002x² + 0.9 (R² = 0.808, P = 0.000); and responding to music and rhythm, y = 0.7x − 0.005x² + 1.5 (R² = 0.848, P = 0.000). The score for overall musicality was not significantly different between the CI and NH groups at the same hearing age (P>0.05). There were significant differences between the two groups at the same chronological age (P<0.05).
Conclusions. The
musicality in children both with cochlear implants and with normal hearing improved significantly
over time. The musicality in children with cochlear implants was significantly lower than that of
children with normal hearing in the same chronological age.
[Key words] Children, Cochlear Implant, Musicality, Questionnaire
Acknowledgements The work was supported by the capital health research and development of special from the
Beijing Municipal Health Bureau (No.2011-1017-01), the research special fund for public welfare industry of health
from the Ministry of Health of the People's Republic of China (No.201202001), the promotion grant for high-level
scientific and technological elites in medical science from the Beijing Municipal Health Bureau (No.2009-3-29), key
projects in basic and clinical research cooperation funds from Capital Medical University (No.12JL12), the capital
citizen health program to foster from Beijing Municipal Science & Technology Commission (No.
Z141100002114033).
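The reported regression functions can be evaluated directly; for example, the overall-musicality model predicts a score of about 24.0 at an age of 12 months (1.8*12 - 0.008*144 + 3.6 = 24.05). A minimal sketch:

    # Reported coefficients for y = a*x + b*x**2 + c, with x = age in months
    models = {
        "overall":     (1.8, -0.008, 3.6),
        "singing":     (0.5, -0.001, 3.0),
        "recognition": (0.6, -0.002, 0.9),
        "responding":  (0.7, -0.005, 1.5),
    }

    def predict(model, age_months):
        a, b, c = models[model]
        return a * age_months + b * age_months**2 + c

    print(predict("overall", 12))  # 1.8*12 - 0.008*144 + 3.6 = 24.05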
M30: THE DEVELOPMENT OF A 'MUSIC-RELATED QUALITY OF LIFE'
QUESTIONNAIRE FOR COCHLEAR IMPLANT USERS
Giorgos Dritsakis1, Rachel Marijke van Besouw1, Carl A Verschuur2
1 Hearing and Balance Centre, Institute of Sound and Vibration Research, University of Southampton, Southampton, GBR
2 University of Southampton Auditory Implant Service (USAIS), Southampton, GBR
Music perception tests and questionnaires have been used to assess the accuracy of cochlear
implant (CI) users in perceiving the fundamental elements of music and CI users’ enjoyment of music,
appraisal of music sound quality and music listening habits (Looi et al. 2012). Based on relatively narrow
concepts, such as music perception and appreciation, these measures do not capture the relationship of
CI users with music in terms of their feelings about music or social interaction that is related to music. In
order to fully evaluate the impact of rehabilitation programs, processing strategies and novel tuning
protocols on CI users’ experiences with music, a reliable measure is needed to assess music
experiences in everyday listening situations. The aim of the present study is to investigate aspects of CI
users’ relationship with music that are relevant to the physical, psychological and social health-related quality of life (HRQoL) domains (WHOQOL 1993), with a view to generating items for a new
psychometric instrument.
Thirty adult CI users (12 male, 18 female, mean age: 49.5) with pre-lingual or post-lingual
deafness and with various levels of music training participated in one of six focus groups about music in
everyday life (4-6 participants per focus group). The focus groups were conducted in two halves: a group discussion followed by a written evaluation of items from existing questionnaires. Open-ended questions were asked to facilitate the discussion and to ensure that issues relevant to all three HRQoL domains were covered. The data were analysed using ‘template analysis’ (King 2012). The themes
identified in the discussion were organised into a coding template. The HRQoL domains and subdomains
of the Nijmegen Cochlear Implant Questionnaire (Hinderink et al. 2000) were used as broad a priori
categories to help organise associated themes. Participants’ ratings and comments informed the content
and wording of the new questionnaire items.
The ‘Music-related Quality of Life’ (MRQoL) of CI users is a function of their ‘music listening
ability’, ‘attitude towards music’ and ‘musical activity’; dimensions which correspond to the physical,
psychological and social HRQoL domains respectively. Each MRQoL domain is further subdivided into
subdomains that contain themes. The presentation of the themes is organised around the three domains
with example quotes from the participants. Examples of themes in each domain include recognition of
lyrics (ability), perseverance with music (attitude) and participation in music fitness classes (activity).
Questionnaire items have been subsequently developed using a combination of the themes identified in
the data and the ratings of existing items. The prototype questionnaire comprises 53 items measuring
MRQoL of CI users on a 5-point Likert scale.
To the authors’ knowledge no study to date has mapped music experiences onto a QoL
framework. The findings of the study improve understanding of CI users’ everyday challenges with music
by highlighting aspects of music experience, especially in the ‘attitude’ and ‘activity’ MRQoL domains,
that previous studies have poorly addressed (e.g. avoidance of music) or have not addressed at all (e.g.
confidence to sing). Assessment of novel abilities, attitudes and activities may enable a more complete
evaluation of music-related CI outcomes. The next stages of this research are to: 1) refine the MRQoL
questionnaire and evaluate its content validity with professionals and 2) evaluate the test-retest reliability,
internal consistency and construct validity of the questionnaire with CI users and adults with normal
hearing before making it available for use. After validation, the questionnaire may be used to measure
the real-world effects of interventions intended to improve CI users’ music experiences.
Supported by the Faculty of Engineering and the Environment (FEE), University of Southampton.
M31: TAKE-HOME EVALUATION OF MUSIC PRE-PROCESSING SCHEME
WITH COCHLEAR IMPLANT USERS
Wim Buyens1, Bas van Dijk1, Marc Moonen2, Jan Wouters2
1 Cochlear Technology Centre Belgium, Mechelen, BEL
2 KU Leuven - ESAT (STADIUS), Leuven, BEL
3 KU Leuven - ExpORL, Leuven, BEL
Objective: Although cochlear implant (CI) users reach good speech understanding in
quiet surroundings, music perception and appraisal generally remain poor. Music mixing
preferences have been investigated in CI users with multi-track recordings and a mixing
console, showing a preference for clear vocals while preserving bass and drums. Since multi-track recordings are not widely available, a music pre-processing scheme has been designed
which allows adjusting the balance between vocals/bass/drums and other instruments for
mono/stereo recordings. In the current study, the music pre-processing scheme is evaluated in
a comfortable listening environment and with different genres of music in a take-home
experiment. Preferred settings are correlated with speech and pitch detection performance.
Design: During the initial visit preceding the take-home experiment, speech-in-noise perception and pitch detection abilities are measured, and a questionnaire about music listening habits is completed. The take-home device (an iPhone) is provided, including a custom-made app with the music pre-processing scheme and seven playlists of six songs each. The subject is asked to adjust the balance with a turning wheel to make the music sound most enjoyable for each song, and to repeat this three times.
Study Sample: Twelve post-lingually deafened CI users have participated in the study.
Results: All subjects prefer a balance significantly different from the original. Differences
across subjects are observed which cannot be explained by perceptual abilities. The within-subject variability is explained in most subjects by an effect of training, genre, or familiarity with
the songs.
Conclusions: The music pre-processing scheme shows potential for improving the appraisal of complex music and might be a good tool for music training or rehabilitation.
Individual settings for music can be adjusted according to personal preferences.
Acknowledgment: This work was supported by a PhD grant of the Institute for the Promotion of
Innovation through Science and Technology in Flanders (IWT090274) and Cochlear Technology
Centre Belgium.
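Conceptually, the balance control can be illustrated as a weighted remix; the sketch below assumes separated stems (equal-length numpy arrays) are available, whereas the actual scheme derives the adjustment from mono/stereo recordings by means not detailed in the abstract.

    def remix(vocals, bass, drums, others, balance):
        """Hypothetical illustration of the vocals/bass/drums-versus-others
        balance wheel. balance in [0, 1]: 0 keeps only the other instruments,
        1 keeps only vocals/bass/drums, 0.5 gives an equal mix."""
        return balance * (vocals + bass + drums) + (1.0 - balance) * others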
M32: OPTIMIZING COMPRESSION STIMULATION STRATEGIES IN
COCHLEAR IMPLANTS FOR MUSIC PERCEPTION
Petra Maretic1, Attila Frater2, Soren Kamaric Riis3, Alfred Widell3, Manuel Segovia-Martinez2
1 Department of Mathematical Statistics, Faculty of Engineering, Lund University, Lund, SWE
2 Oticon Medical, Vallauris, FRA
3 Oticon Medical, Kongebakken, DNK
Background. Speech perception in favorable environmental conditions has gradually
improved for cochlear implant (CI) users over the years. Given this positive development, there is a need to broaden research efforts to include improved perception of non-speech sounds, especially music. Present encoding compression strategies in the signal processor are optimized for speech characteristics. This report presents alternative compression parameters to capture the characteristics of genre-classified music, utilizing a complete CI model and a Neural Similarity Index Measure (NSIM) for evaluation (Harte & Hines, Elsevier 2011).
Methods. The signal processing strategy used in the latest Oticon Medical CI BTE
Saphyr Neo is called XDP and allows the compressor to operate in four different frequency
bands. It maps 95% of the signal input level range in each band below a kneepoint representing
75% of the stimulation range. In the Saphyr Neo, the kneepoints are determined from a
statistical analysis based on a speech database. In this work, statistical analysis is repeated for
different music genres and performed on each electrode's acoustic level distribution to find a
new set of compression parameters. Hierarchical clustering is then applied to reduce the
complexity of fitting and group several frequency ranges together (Segovia-Martinez et al., CIAP
2013).
An important step in the encoding process preceding the compression is the frequency-bin-to-electrode mapping. For music it is intuitive to use an octave-band distribution mapping to
capture the fundamental frequencies and harmonics (Kasturi & Loizou, J. Acoust. Soc. Am.,
2007).
To run simulations with the newly suggested parameters, two separate simulation chains are set up. The Matlab Auditory Periphery model by Ray Meddis is used to model normal hearing up to the auditory nerve response. For electric hearing, a modified version of the CI model representing the Oticon Medical sound processor is used. Each of the two models is extended with a point process model by Joshua H. Goldwyn to generate auditory nerve firing patterns.
Evaluation. For evaluating the implemented strategies, the objective distance measure NSIM, indicating degradation in similarity, is calculated between neurograms. The alternative
method Neurogram Matching Similarity Index (NMSI) is investigated in parallel. It compensates
for possible frequency shifts in the neurograms and is obtained as the total cost of transforming
one neurogram into another (Drews et al., IEEE 2013).
M33: ENVIRONMENTAL SOUND COGNITION WITH COCHLEAR IMPLANTS:
FROM SINGLE SOUNDS TO AUDITORY SCENES
Valeriy Shafiro1, Stanley Sheft1, Molly Norris1, Katherine Radasevich1, Brian Gygi2
1 Rush University Medical Center, Chicago, IL, USA
2 Veterans Affairs Health Systems of Northern California, Martinez, CA, USA
Environmental sound perception is frequently cited as a major benefit of cochlear
implantation. Previous studies of environmental sound perception have demonstrated reduced
ability to identify common environmental sounds even in experienced cochlear implant (CI)
users with good speech recognition. However, previous tests were based exclusively on
identification of isolated sounds. In everyday environments multiple sounds, comprising an
auditory scene, often follow each other as events unfold in time. Integrating information across
sounds in a scene requires the knowledge of the causal relationships or probabilistic
contingencies of underlying sound sources, involving cognitive as well as perceptual processing.
Previous work has shown that performance of normal-hearing young and older listeners is
influenced by contextual relationships among individual environmental sounds in an auditory
scene. This study extended this work to investigate whether CI listeners are able to benefit from
contextual relationship among individual environmental sounds in an auditory scene.
Participants were adult postlingual CI users or CI-simulated normal-hearing adults, who heard
ten environmental sound test sequences. Each sequence was composed of five environmental
sounds that were either contextually coherent (i.e. likely to be heard in the same place and time,
for example: elevator bell, footsteps, door knocks, key turning, door open) or contextually
incoherent (crickets, car alarm, waves, tea kettle boiling, tearing paper). Coherent and
incoherent sequences were composed of the same environmental sounds presented in different
order. Participants were instructed to select the name of the sounds they heard from 20
response labels and arrange them on the screen in the order the sounds were presented.
Similar to other populations, both actual and simulated CI listeners demonstrated a consistent
advantage of naming and ordering sounds from contextually coherent sequences. Despite
degradation in the sensory qualities of individual sounds, CI users are able to integrate
information across several sounds based on their co-occurrence in real world environments.
These results also demonstrate the feasibility of a brief clinical test of environmental sound
cognition to assess the ability to integrate information across individual environmental sounds
in real-world environments that can be used with other CI populations including children and
prelingual CI users.
[Support provided by CAPITA foundation].
M34: THE PERCEPTION OF STEREO MUSIC BY COCHLEAR IMPLANT
USERS
Stefan Fredelake1, Patrick J. Boyle, Benjamin Krueger2, Andreas Buechner2, Phillipp
Hehrmann1, Volkmar Hamacher2
1 Advanced Bionics European Research Center, Hannover, DEU
2 Medical University Hannover, Hannover, DEU
Objectives. Musical enjoyment for adult Cochlear Implant (CI) users is often reported as
limited. Although simple musical structures such as rhythm are accessible, more complex
elements such as melody, timbre and pitch are challenging. Bilateral CI users may be able to
combine musical cues from both ears, possibly obtaining improved perception of music. This
study investigated how musical enjoyment was influenced by stereo presentation of music for CI
users.
Methods. To date, 10 normal hearing (NH) listeners and 7 bilaterally implanted CI users
have been tested. Stimuli were delivered in free-field and using headphones for NH subjects
and the processor’s auxiliary input for CI subjects. The first task was to localize a virtual sound
source, created by two loudspeakers using different gains and time delays. Next, 25 different musical pieces, each some 10 seconds in duration, were presented in mono and in stereo. In a three-interval forced-choice paradigm, participants had to identify the stereo piece. Musical
enjoyment ratings were also obtained for all mono and stereo tokens.
Results. Results showed that CI subjects could localize based on gain but were not
sensitive to the time delay applied to the sound sources. In free-field, CI subjects identified the
stereo token in 67% of the cases. For auxiliary input presentation this increased to 94%. Both
NH and CI subjects rated the enjoyment of stereo as significantly better than of mono pieces.
NH listeners showed no difference in music enjoyment between hearing in free-field or via
headphones. For CI subjects a significant difference between stereo and mono was found with
the auxiliary input (p<0.05), but not for free-field presentation.
Conclusions. These results demonstrate an ability of CI users to identify the stereo effect in music and thereby obtain higher enjoyment when listening to music. Auxiliary input supports significantly better musical enjoyment than free-field delivery.
M35: SUNG SPEECH: A DATABASE TO EXAMINE SPEECH AND MUSIC
PERCEPTION IN COCHLEAR IMPLANTS
Joseph D Crew1, John J Galvin III2, Qian-Jie Fu2
1 Department of Biomedical Engineering, University of Southern California, Los Angeles, CA, USA
2 Department of Head and Neck Surgery, University of California - Los Angeles, Los Angeles, CA, USA
Previous work with electro-acoustic stimulation (EAS) patients has revealed the different
contributions of pitch and timbre to speech and music perception. Acoustic hearing via hearing
aid (HA) provides low-frequency fine-structure cues that contribute to pitch perception; electric
hearing via cochlear implant (CI) provides coarse temporal and spectral envelope cues that
contribute to speech and timbre perception. While everyday speech contains pitch and timbre
cues (e.g., prosody, vocal emotion), pitch and timbre perception are typically measured
independently using very different stimuli and methods that may influence the contribution of
pitch and timbre cues to the given perceptual task. To address these issues, we created the
Sung Speech Corpus (SSC), which is a database of acoustic stimuli that contains varying timbre
and word information as well as varying pitch and melody information.
The SSC consists of 50 sung monosyllabic words. Each word was sung at all 13 fundamental frequencies (F0) from A2 (110 Hz) to A3 (220 Hz) in discrete semitone steps.
Natural speech utterances were also produced for each word. After recording, all productions
were normalized to have the same duration (500 ms) and long-term RMS amplitude (65 dB);
minor pitch adjustments were applied to obtain the exact target F0. The words were chosen to fit
within a Matrix Sentence Test with the following syntax: “name” “verb” “number” “color”
“clothing” (e.g., “Bob sells three blue ties.”); each category contains ten words. As such, the
constructed five-word sentence also contains a five-note melody, allowing word-in-sentence recognition to be measured alongside Melodic Contour Identification (MCI) using the
same stimuli. There are nine possible contours in the MCI task and spacing between successive
notes in the contour can be varied from 1 to 3 semitones. Given five word categories with ten
words each, the SSC contains a total of 100,000 potential unique sentences with 27 possible
melodies. The large number of potential unique stimuli allows for extensive testing and retesting, which is necessary for evaluating novel signal processing strategies and other device
manipulations. Using the SSC, pitch perception can be measured with fixed or variable words
(i.e., timbres), and word recognition can be measured with fixed or variable pitches.
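For concreteness, the F0 grid and the stimulus count implied above can be reproduced in a couple of lines (a Python sketch, not part of the corpus tooling):

```python
import numpy as np

# 13 F0s from A2 (110 Hz) to A3 (220 Hz) in discrete semitone steps,
# and the number of unique sentences implied by the Matrix syntax.
f0s = 110.0 * 2.0 ** (np.arange(13) / 12.0)
n_sentences = 10 ** 5  # 5 word categories x 10 words each
print(np.round(f0s, 1), n_sentences)
```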
We are currently validating the SSC with normal hearing (NH) musicians and non-musicians. NH musicians scored nearly 100% in word-in-sentence recognition and MCI; for
these subjects, changes in pitch did not affect speech perception and changes in timbre did not
affect pitch perception. Preliminary data with NH non-musicians was more variable, with
changes in timbre sometimes affecting MCI performance. For all NH subjects, word-in-sentence
recognition was unchanged with spoken speech, sung speech with a fixed pitch, or sung speech
with variable pitch. Preliminary data with CI subjects showed that variable timbre (words)
negatively affected MCI performance and that variable pitch affected word recognition,
suggesting that pitch and timbre cues may be confounded.
M36: APPLYING DYNAMIC PEAK ALGORITHM IN NUROTRON’S ADVANCED
PEAK SELECTION SOUND PROCESSING STRATEGY
Lichuan Ping1, Guofang Tang2, Qianjie Fu3
1 Nurotron Biotechnology, Inc., 184 Technology, Irvine, CA, USA
2 Zhejiang Nurotron Biotechnology Co., Ltd., Zhejiang, 310011, Hangzhou, CHN
3 Signal Processing and Auditory Research Laboratory, Head and Neck Surgery, University of California, Los Angeles, CA, USA
Background: Nurotron’s default sound processing strategy is Advanced Peak Selection
(APS). In each frame (4-8 ms), the 8 channels with the largest envelope amplitudes are selected from a total of 24 channels. Envelopes from the selected channels are compressed and used to determine the current level of the stimulation. If the amplitude of a selected channel is less than the lower limit of the input dynamic range (30 dB), the corresponding electrode generates stimulation at threshold current (T-level). Dynamic Peak (DP) is an algorithm designed to work with the APS strategy to save power by stimulating fewer channels when the input sound level is lower than a certain threshold, without affecting the performance of the APS strategy. The threshold of the DP algorithm is programmable. The aim of this study was to investigate the feasibility of the DP + APS strategy.
Methods: In this study, the threshold condition of the DP algorithm was that, within one frame, the amplitudes of the 8 selected channels are all below 30 dB. Under that condition, which is also the default setting of DP, APS would generate T-level stimulation on 8 channels, while APS+DP would generate stimulation on only 4 channels at 5 current units. Twelve experienced
Nurotron cochlear implant (CI) recipients using the APS strategy participated in the study. They were followed longitudinally for one month to compare the difference in sound quality and battery life (three 675P zinc-air batteries) between APS and APS+DP. They were required to record the battery life during this month and to complete a preference questionnaire by the end of the study.
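A minimal sketch of the frame-wise selection logic as we read it from the description above (Python; the function name, return convention, and test frame are ours, not Nurotron's implementation):

```python
import numpy as np

def aps_dp_select(envelopes_db, idr_floor_db=30.0, n_select=8, n_dp=4, dp_cu=5):
    # APS: pick the 8 largest of 24 channel envelopes for this frame.
    order = np.argsort(envelopes_db)[::-1]
    selected = order[:n_select]
    # DP condition: all selected envelopes below the IDR floor ->
    # stimulate only 4 channels at a fixed 5 current units.
    if np.all(envelopes_db[selected] < idr_floor_db):
        return selected[:n_dp], dp_cu
    return selected, None  # normal APS compression/mapping applies

quiet_frame = np.full(24, 20.0)  # everything below the 30 dB floor
print(aps_dp_select(quiet_frame))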
Results: No difference in sound quality between APS and APS+DP was reported by any of the subjects. The average battery life of three 675P zinc-air batteries increased from 13.92 hours to 19.48 hours. One user reported a 17-hour gain in battery life, increasing from 16 to 33 hours.
Conclusions: Applying the DP algorithm to APS strategy can significantly increase the
battery life while not affecting sound quality.
Key words: Dynamic Peak Algorithm, Battery Life, Advanced Peak Selection Strategy
M37: A REAL-TIME IMPLEMENTATION OF THE EXCITABILITY CONTROLLED
CODING STRATEGY
Wai Kong Lai1, Matthijs Killian2, Norbert Dillier1
1 ENT Clinic, University Hospital, Zürich, CHE
2 Cochlear Technology Centre, Mechelen, BEL
Most present day coding strategies are based on signal transmission concepts originally
developed for communication systems, often emphasizing the appropriate processing and
conditioning of the input signal. In contrast, the capacity of the stimulated neural population to
convey the encoded information, determined by their neurophysiological properties, is seldom
taken into account.
The Excitability Controlled Coding (ECC) strategy attempts to address this shortcoming.
Using a combined model of the electric field spread and the refractory properties of the
stimulated neural population to determine the excitability of a neural population at any given
point in time, ECC selects the next stimulus based on the momentary state of this neural
excitability as well as the input signal. Additionally, ECC encodes the signal intensity on a given
channel into the stimulation rate instead of the stimulation level. More precisely, ECC encodes
loudness by varying the stimulation rate, while keeping the stimulation level constant. One of the
aims of ECC is to minimize spatially-related channel interaction arising from the electric field
spread, since the stimulation level, which is assumed to be the primary parameter related to
electric field spread, will not increase with increasing signal intensity.
The ECC output signal is controlled by several parameters, whose optimization requires
subjective feedback from Cochlear Implant (CI) listeners. Although ECC has already been
successfully implemented in Matlab, testing output signals this way involves extensive and time-consuming individual pre-compilation of test signals with different parameter combinations for
each CI listener. In contrast, a real-time implementation would yield significant time savings in
adjusting and optimizing these parameters. Consequently, ECC was implemented with Simulink
in conjunction with an xPC real-time target (Speedgoat) system. ECC signal output was
encoded for Nucleus CIC4 implants using a proprietary StimGen module from Cochlear Ltd.
The real-time system was then tested on the bench in order to ensure that the output signals
behave as expected, and that all stimulus parameters are within safety limits before they are
presented to real CI listeners for assessment. Details of the implementation and bench-testing
will be presented and discussed here.
This research was partly funded by a research grant from Cochlear AG, Basel.
M38: AUTOMATIC POST SPECTRAL COMPRESSION FOR OTICON
MEDICAL’S SOUND PROCESSOR
Manuel Segovia Martinez, Emilie Daanouni, Dan Gnansia, Michel Hoen
Oticon Medical, Vallauris, FRA
In order to account for the large dynamic range differences existing between the acoustical world and the electrical encoding of sounds at the cochlear implant (CI) electrode level, CI coding strategies classically integrate automatic gain control (AGC) on the input audio stream, combined with a front-end or output compression function. This classic signal processing scheme aims to narrow the acoustical dynamic range, leaving more loudness quantization steps available to the system.
The goal of the present study is to describe the evolution of the signal processing
strategy of forthcoming Oticon Medical (Vallauris, France) speech processors, which integrates
automatic post-spectral analysis compression (named auto XDP Strategy). This new signal
processing strategy will be compared to its previous implementation as presented at CIAP 2013
(Segovia-Martinez et al., CIAP 2013: Design and Effects of Post-Spectral Output Compression
in Cochlear Implant Coding Strategy).
In this new adaptive version, a novel multichannel frequency-selective compression
transfer function was implemented, where compression settings are automatically adjusted
according to a continuous analysis of intensity fluctuations in the acoustic environment.
In order to restrict the degrees of freedom of the end-user system, thereby allowing the fitting process to remain easily manageable and transparent, a number of pre-sets were designed to maximize the speech information sent to the patient while ensuring listening comfort in noisy and loud environments. These pre-sets were statistically determined to preserve 95% of the speech information from soft to very loud environments, using a cluster-based analysis run on a large database of speech recordings.
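A toy version of such environment-driven pre-set switching might look as follows (Python; the pre-set names, design levels, and the 90th-percentile statistic are all assumptions for illustration, not the auto XDP logic):

```python
import numpy as np

def pick_preset(recent_levels_db,
                presets=(("soft", 45.0), ("moderate", 60.0),
                         ("loud", 75.0), ("very loud", 90.0))):
    # Robust running loudness estimate over the recent acoustic environment.
    stat = np.percentile(recent_levels_db, 90)
    names, levels = zip(*presets)
    return names[int(np.argmin(np.abs(np.array(levels) - stat)))]

rng = np.random.default_rng(1)
print(pick_preset(rng.normal(72.0, 5.0, size=500)))  # -> 'loud'
```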
We will present results from a simulation study using speech samples that dynamically change level, processed through a model of the complete CI signal processing pipeline that represents the activity generated by the implant at the electrode array as neural spike patterns. Neural similarity (NSIM) scores between the different signal processing options will be presented that clearly highlight the improvements obtained with the auto-XDP strategy.
Finally, in a clinical evaluation trial, several unilateral CI patients were fitted with this output compression function and tested on pure-tone thresholds and speech tests from 40 dB SPL to 85 dB SPL (in quiet and in noise). Results showed general improvements in speech intelligibility
in most of the listening conditions tested, with observed loudness growth functions similar to the
normal range.
M39: A PHASE-LOCK LOOP APPROACH TO EXTRACT TEMPORAL-FINE
STRUCTURE INFORMATION FOR COCHLEAR IMPLANT PATIENTS
Marianna Vatti1, Attila Frater2, Teng Huang, Manuel Segovia-Martinez2, Niels Henrik
Pontoppidan1
1 Eriksholm Research Centre, Snekkersten, DNK
2 Oticon Medical, Vallauris, FRA
Cochlear implants (CIs) have been successful in restoring speech perception in quiet. However, the performance of CI users in tasks involving pitch discrimination and speech recognition in noise remains much poorer than that of normal-hearing listeners. One of the
reasons suggested is the inability of current CI stimulation strategies to properly encode sound
pitch variations. This work proposes a novel method to extract the fine frequency oscillations of
sound, the so-called temporal fine structure (TFS), by means of Phase-Lock Loop (PLL) filters. A
simulation study is performed on a full CI model and results are evaluated with objective
measures.
The PLL is an adaptive filter that tracks the frequency of an input signal as it varies in time. Here,
PLLs are combined in the form of a filterbank and estimate the phase and amplitude of the most
prominent components in the signal. The extracted information is then used to control the
electrode stimulation.
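The core of a single PLL channel can be sketched in a few lines (Python; the loop gains, the one-pole smoothing, and the single-tone test are illustrative choices, not the PLL filterbank used in the study):

```python
import numpy as np

def pll_track(x, fs, f0_guess, kp=50.0, ki=0.3, lpf=0.05):
    # Minimal software PLL: tracks the frequency of the dominant
    # sinusoid in x via a mixer phase detector and a type-2 loop.
    phase, freq, err_f = 0.0, f0_guess, 0.0
    freqs = np.empty(len(x))
    for n, s in enumerate(x):
        err = s * -np.sin(phase)          # phase detector (mixer)
        err_f += lpf * (err - err_f)      # one-pole LPF removes 2f ripple
        freq += ki * err_f                # integral path updates frequency (Hz)
        phase += 2 * np.pi * (freq + kp * err_f) / fs
        freqs[n] = freq
    return freqs

fs = 16000
t = np.arange(fs) / fs
x = np.cos(2 * np.pi * 220.0 * t)         # 220 Hz test tone
print(round(pll_track(x, fs, f0_guess=200.0)[-4000:].mean(), 1))  # ~220.0
```

In a filterbank configuration, one such loop per band would supply the phase and amplitude estimates that control the electrode stimulation.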
A complete simulation model is built to evaluate this novel TFS processing strategy. The
model simulates all the signal processing stages from the speech processor to the electrode
array. The output of the CI model is connected to an auditory nerve model that produces spike
patterns in response to electric stimulus. These spike patterns serve as a basis for comparing
the current approach to already existing ones. The evaluation is made by means of the Neural
Similarity Index Measure (NSIM), an objective distance measure that calculates the degradation
in speech intelligibility based on neural responses.
Experimental results with speech show that the present method reliably estimates the amplitude and phase of the signal's components. NSIM is computed for both the current and
PLL-based stimulation strategies and differences in the obtained quality are discussed.
A computational model for extracting TFS by means of a PLL filter bank is implemented and
evaluated with objective measures. The proposed method is shown to effectively extract and
encode the TFS features and has the potential to enhance electric pitch perception.
M40: EVALUATION OF CLASSIFIER-DRIVEN AND BINAURAL
BEAMFORMER ALGORITHMS
Gunnar Geissler1, Silke Klawitter2, Wiebke Heeren1, Andreas Buechner2
1 Advanced Bionics, European Research Center, Hannover, DEU
2 Hannover Medical School, Dept. of Otolaryngology, Hannover, DEU
State-of-the-art speech processors for cochlear implants offer directional microphones. Several studies have shown that these can have a large impact on speech intelligibility in noisy environments. Nevertheless, many patients do not make use of this technology in daily life: some are not aware of the possibility, and some do not want to switch programs manually depending on the current listening situation.
Two algorithms are currently being tested in an ongoing study using a research prototype of
Advanced Bionics’ Naida CI speech processor: the first algorithm (autoUltraZoom) detects the
presence or absence of speech in noise and automatically activates the directional microphone
accordingly. The second algorithm (StereoZoom) makes use of the wireless connectivity
between the left and right processors of bilateral patients. With the use of the contralateral microphone signals, the directivity of the beamformer can be further increased compared to the
monaural beamformer UltraZoom.
10 bilaterally implanted subjects will be included in total. They will visit the center for two
appointments. In the first appointment, speech intelligibility with autoUltraZoom in quiet and in
noise will be compared to a Tmic and an UltraZoom setting. In the second appointment,
StereoZoom will be compared to Tmic and UltraZoom. The subjects will test autoUltraZoom in their everyday life and answer a questionnaire in between the two appointments.
4 subjects have completed the first appointment to date. For these subjects,
autoUltraZoom performs equally well as the Tmic program in quiet, and equally well as the
UltraZoom program in noise (both algorithms showing improvements in the speech reception
threshold of 4 to 6.7 dB compared to the Tmic program). 3 subjects have also finished the
second appointment, where two of them experienced an additional benefit by StereoZoom
against UltraZoom of up to 1.8 dB. Results of all 10 subjects will be presented with
corresponding statistical comparisons between the processing conditions.
M41: Poster withdrawn
M42: SPEECH ENHANCEMENT BASED ON GLIMPSE DETECTION TO
IMPROVE THE SPEECH INTELLIGIBILITY FOR COCHLEAR IMPLANT
RECIPIENTS
Dongmei Wang1, John H. L. Hansen1, Emily Tobey2
1 Dept. of Electrical Engineering, University of Texas at Dallas, Richardson, TX, USA
2 School of Behavior and Brain Science, University of Texas at Dallas, Richardson, TX, USA
Speech perception in noise is a challenging task for cochlear implant recipients. An effective speech enhancement algorithm could make the listening experience more pleasurable for cochlear implant listeners. In this work, we propose a speech enhancement algorithm based on glimpse detection to improve speech intelligibility for cochlear implant recipients. Our algorithm is inspired by the "glimpsing model," according to which listeners process noisy speech by taking advantage of those spectrotemporal regions in which the target signal is least affected by the background. Speech is highly modulated in time and frequency, and its regions of high energy are sparsely distributed. We propose to detect these glimpsing segments in the time-frequency (TF) plane and remove the remaining parts, which are considered noise-dominated.
The details of our algorithm are as follows. First, the noisy speech signal is transformed into the frequency domain based on a long-short frame harmonic model; the advantage of this model is that it obtains high frequency resolution while still preserving the short-time stationary character of the speech signal. We then divide the noisy spectrum into overlapping short-term and long-term TF segments. Second, we estimate two acoustic features: the logarithmic-frequency-scale correlation coefficient (LogFcc) and the harmonic-to-noise ratio (HNR). LogFcc measures the similarity of spectrum amplitudes between two neighboring frames; since speech usually changes more slowly than most everyday noise, a TF segment with a higher LogFcc value is more likely to be a glimpse. HNR indicates the energy ratio between the target speech harmonic partials and the noise interference spectrum; here, HNR estimation relies on "virtual pitch" estimation, exploiting the fact that the human auditory system can perceive a complex tone without its fundamental frequency component. Third, we classify all TF segments into glimpse and non-glimpse based on these two parameters. Finally, the detected TF glimpse segments are resynthesized into a time waveform with the inverse Fourier transform.
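As an illustration of the LogFcc cue only (Python; the correlation threshold is an assumed stand-in for the study's actual classifier, and the HNR feature is omitted):

```python
import numpy as np

def logfcc_glimpses(spec_db, threshold=0.8):
    # Correlate log-spectra of neighbouring frames: slowly varying
    # (speech-like) regions score high and are kept as glimpse candidates.
    n_frames = spec_db.shape[1]
    corr = np.ones(n_frames)
    for m in range(1, n_frames):
        corr[m] = np.corrcoef(spec_db[:, m - 1], spec_db[:, m])[0, 1]
    return corr >= threshold  # True = glimpse candidate frame

rng = np.random.default_rng(2)
steady = np.tile(rng.normal(0, 1, (64, 1)), (1, 10))  # stable, speech-like
noise = rng.normal(0, 1, (64, 10))                    # decorrelated, noise-like
print(logfcc_glimpses(np.hstack([steady, noise])).astype(int))
```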
Pilot listening experiments show that our algorithm improved speech intelligibility for normal-hearing subjects presented with acoustic simulations of electric hearing using noisy speech sentences at SNR levels of 0 dB and 5 dB.
Research supported by Grant No. R01 DC010494 from NIDCD
M43: DEVELOPMENT OF A REAL-TIME HARMONIC ENCODING STRATEGY
FOR COCHLEAR IMPLANTS
Tyler Ganter1, Kaibao Nie2, Xingbo Peng1, Jay Rubinstein2, Les Atlas1
1 Dept. of Electrical Engineering, University of Washington, Seattle, WA, USA
2 VM Bloedel Hearing Research Center, Dept. of Otolaryngology-HNS, University of Washington, Seattle, WA, USA
Current cochlear implant speech processing strategies perform well on single-talker comprehension in quiet; however, these strategies leave much to be desired for more challenging problems such as comprehension with a competing talker and music perception. The lack of both spectral and temporal fine structure information in these primarily envelope-based approaches is a large contributing factor. Previous acute tests have shown
promising results for a harmonic-single-sideband-encoder (HSSE) strategy, specifically with
timbre recognition of musical instruments [Li et al., IEEE TNSRE, 2013]. Encouraged by these
results, we have developed a real-time version of the HSSE strategy.
HSSE is a relatively complex algorithm, requiring pitch tracking and frequency shifting.
The purpose of this study is to investigate the feasibility of implementing a real-time version of
these components. The proposed pitch tracker was based on calculating autocorrelation with an FFT and inverse FFT, which can be readily implemented on most DSP platforms for cochlear
implants. The number of harmonics tracked can be varied to maximize the performance of
harmonic coding. Electrical stimuli were created in Matlab using block processing in order to
validate the real-time HSSE in cochlear implant subjects.
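The FFT-based autocorrelation pitch tracker can be sketched compactly (Python; the frame length and F0 search range are assumptions):

```python
import numpy as np

def autocorr_pitch(frame, fs, fmin=80.0, fmax=400.0):
    # Autocorrelation via FFT and inverse FFT, as the abstract describes;
    # zero-padding avoids circular wrap-around.
    n = len(frame)
    spectrum = np.fft.rfft(frame, 2 * n)
    acf = np.fft.irfft(np.abs(spectrum) ** 2)[:n]
    lo, hi = int(fs / fmax), int(fs / fmin)   # lag range for the F0 search
    lag = lo + np.argmax(acf[lo:hi])
    return fs / lag

fs = 16000
t = np.arange(1024) / fs
frame = sum(np.sin(2 * np.pi * k * 185.0 * t) for k in range(1, 6))  # F0 = 185 Hz
print(round(autocorr_pitch(frame, fs), 1))  # close to 185.0
```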
Two cochlear implant patients were initially tested on musical timbre recognition using the CAMP (Clinical Assessment of Music Perception) test battery. One subject was able to
achieve better performance (88% with HSSE vs. 54% with his clinical ACE) when the lowest 8
harmonics were processed and presented. Another subject showed comparable performance
with her clinical ACE in the first acute experiment.
Currently this real-time HSSE strategy is being programmed on Nucleus Freedom
speech processors, which can potentially provide take-home usage of HSSE. This would allow
us to evaluate the long-term efficacy of HSSE.
This work is supported by the Wallace H Coulter Foundation and research hardware and
software tools were provided by Cochlear, Ltd.
M44: FINDING THE CONSISTENT CONTRIBUTOR LEADING TO BIMODAL
BENEFIT
Christopher B Hall, Yang-Soo Yoon
Texas Tech University Health Sciences Center, Lubbock, TX, USA
Introduction: Bimodal hearing provides a considerable synergistic benefit on speech-in-noise tests. The theory behind bimodal hearing is that a hearing aid (HA) can code lower
spectral components that a cochlear implant (CI) codes poorly, while a CI codes higher spectral
components that a HA codes poorly. However, we still know little about what defines success in
bimodal hearing as evidenced by the wide ranges of variability. Current testing measures, the
audiogram and preoperative sentence perception, are poorly correlated with the optimal use of
bimodal hearing, particularly on speech-in-noise. The purpose of this study is to investigate how
bimodal benefit in speech perception is related to the HA’s ability to detect spectral and
temporal cues in current bimodal users who have high levels of success. Based on the
information obtained through these experiments, we will then have a more refined
understanding of which specific cues must be present in the ear with the HA for maximal
bimodal benefit. This information can then in turn be used to make more accurate clinical
decisions determining which ear is to be implanted and which ear is to be left with the acoustic
hearing aid.
Methods: We conducted five tests on post-lingually deafened adult bimodal subjects. Our first experiment measured each subject's temporal processing ability through the amplitude modulation detection threshold. The second test measured broadband spectral processing ability using a spectral ripple test. Our third test assessed narrowband spectral detection thresholds; these psychoacoustic tests were all administered using a three-alternative forced-choice paradigm. Two speech perception tests (vowel and sentence) in noise and quiet were
conducted in order to analyze similarities between psychoacoustic measures and speech
perception tests.
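For readers unfamiliar with the paradigm, forced-choice thresholds are typically tracked adaptively; the sketch below pairs a common 2-down/1-up rule with a toy listener model (Python; purely illustrative background, not the study's actual procedure or step sizes):

```python
import random

def run_3afc_staircase(true_threshold, start=20.0, step=2.0, n_trials=60):
    # 2-down/1-up track for a 3-alternative forced-choice task
    # (converges near 70.7% correct). The step-function listener
    # below is a toy stand-in for a real subject.
    level, correct_streak = start, 0
    for _ in range(n_trials):
        p_correct = 1 / 3 + (2 / 3) * (level >= true_threshold)
        if random.random() < p_correct:
            correct_streak += 1
            if correct_streak == 2:        # two correct -> make it harder
                level -= step
                correct_streak = 0
        else:                              # one wrong -> make it easier
            level += step
            correct_streak = 0
    return level

random.seed(3)
print(run_3afc_staircase(true_threshold=10.0))  # hovers near 10
```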
Results & Conclusions: Our results suggest that greater bimodal benefit is possible if the
patient’s HA is able to better process low frequency temporal cues. Our data for frequency
difference limens suggests that at low frequencies, greater bimodal benefit is generated when
the difference limen detection ability of the HA is better. At high frequencies, we find that the
detection ability of the CI primarily governs bimodal benefit. Our results suggest that a better
detection ability in lower spectral regions by the HA may facilitate better fusion between the HA
and CI leading to enhanced speech recognition abilities. The data also suggests that vowel
perception is associated with the degree of temporal processing ability in the HA-alone ear.
Our results in sentence and word perception show similar patterns of benefit in subjects who
performed well on temporal and spectral measures for all conditions (HA alone, CI alone,
bimodal). These data suggest that better HA performance is crucial in better fusion between the
HA and CI, which generates greater bimodal benefit in speech perception tests. A more complete data analysis will be presented.
Funded by the National Organization for Hearing Research Foundation
M45: SPECTRAL CONTRAST ENHANCEMENT IMPROVES SPEECH
INTELLIGIBILITY IN NOISE IN NOFM STRATEGIES FOR COCHLEAR
IMPLANTS
Thilo Rode1, Andreas Buechner2, Waldo Nogueira2
1 HörSys GmbH, Hannover, DEU
2 Department of Otolaryngology, Medical University Hannover, Cluster of Excellence “Hearing 4all”, Hannover, DEU
The ability to identify spectral cues such as the location and amplitude of formants is a key feature for speech intelligibility, especially in difficult listening situations. Cochlear implant (CI) listeners do not have access to the spectral sharpening mechanisms provided by the inner ear, thus speech perception in noise is usually a challenging task. Additionally, the limited number of stimulation channels and the overlapping electric fields caused by the electrode-nerve interface introduce spectral smearing of sound for CI listeners. Compensating for these effects by spectral contrast enhancement (SCE), which increases the ratio between spectral peaks and valleys and is applied as front-end processing, has been shown to improve the identification of vowels and consonants in quiet and noise [1][2].
A real-time SCE algorithm using Matlab Simulink and an xPC target was implemented
within a NofM CI coding strategy. The algorithm keeps the 3 most prominent spectral peaks
constant and attenuates the spectral valleys. Including it in the strategy, instead of using it as a front-end process, keeps all SCE parameters under control when combined with the adaptive gain stages found in modern coding strategies.
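The peak-keeping/valley-attenuating step might be sketched as follows (Python; the 6 dB valley attenuation is an assumed value, and the real algorithm operates inside the NofM strategy's filterbank rather than on a bare spectrum):

```python
import numpy as np
from scipy.signal import find_peaks

def enhance_contrast(env_db, n_peaks=3, valley_atten_db=6.0):
    # Keep the 3 most prominent spectral peaks unchanged and
    # attenuate everything else (the spectral valleys).
    peaks, _ = find_peaks(env_db)
    keep = peaks[np.argsort(env_db[peaks])[::-1][:n_peaks]]
    out = env_db - valley_atten_db
    out[keep] = env_db[keep]
    return out

band_env = np.array([40, 55, 42, 60, 45, 58, 41, 50, 39], dtype=float)
print(enhance_contrast(band_env))  # peaks at 55/60/58 dB survive untouched
```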
In Experiment 1, 12 CI users participated in a study to measure the speech reception
threshold (SRT) using the standard NofM CI coding strategy with and without SCE. No
significant differences in SRT were observed between the two conditions. However, an objective analysis of the stimulation patterns shows a 24% reduction in electrical stimulation current with SCE. In Experiment 2, 12 additional CI users tested a second configuration of the SCE strategy in which the amount of current between the NofM strategies with and without SCE was balanced. 11 out of 12 participants obtained better scores with SCE, leading to a significant improvement over the standard NofM strategy of 0.57 dB on average (p < 0.0005).
From these results we conclude that spectral contrast enhancement improves speech
intelligibility in noise for CI users.
[1] A. Bhattacharya and F.-G. Zeng, “Companding to improve cochlear-implant speech
recognition in speech-shaped noise.,” J. Acoust. Soc. Am., vol. 122, no. 2, pp. 1079-89,
Aug. 2007.
[2] J. M. Alexander, R. L. Jenison, and K. R. Kluender, “Real-time contrast enhancement to
improve speech recognition.” PLoS One, vol. 6, no. 9, p. e24630, Jan. 2011.
M46: ENSEMBLE CODING PROVIDES PITCH PERCEPTION THROUGH
RELATIVE STIMULUS TIMING
Stefan J Mauger
Research & Applications, Cochlear Limited, Melbourne, AUS
It is generally assumed that the rate and temporal neural codes are used by primary
sensory neurons to transmit information. These neural codes require multiple spikes for reliable
estimates. However, rapid behavioural responses to sensory input necessitate a fast and
powerful neural code. The “ensemble” neural code hypothesizes that information is conveyed
through the relative timing of first spikes across a population of neurons. Mathematical
modelling has shown that ensemble codes can carry more information at a faster rate than
either rate or temporal codes. Electrophysiological experiments have also supported an
ensemble code with relative timings of spikes across a neural population containing reliable
information related to the sensory input.
Cochlear implants are able to activate the ascending auditory pathway through electrical
stimulation of the cochlea. Such stimuli elicit localized and temporally locked neural responses
in spiral ganglion neurons. Stimulation strategies are largely successful in stimulating tonotopically organised electrodes within the cochlea with fixed-rate pulse trains whose stimulus level corresponds to the loudness of a frequency-specific acoustic signal component. This
method targets the rate neural code. Some stimulation strategy variants additionally manipulate
the stimulus rate on some electrodes, but with limited clinical benefit. Such strategies target the
temporal neural code, which is weak and limited by the maximum frequency of ~300 Hz at
which pitch changes are perceived. Although a fast and powerful ensemble code is
hypothesized, psychoacoustic experiments have not yet investigated its perception. Could the precise relative timing of stimuli across an electrode array be perceived, and would it convey information for cochlear implant recipients?
Cochlear implant users (n=6) were tested in their ability to perceive “synthetic
ensembles”. These were stimuli that maintained stimulus place, amplitude and rate, but with different relative timing of stimuli across the electrode array (i.e., base-to-apex and apex-to-base stimulus orders vary only in their relative stimulus timing across the array). Discrimination between simple synthetic ensembles was achieved by two participants in a forced-choice task (p<0.05). Using enhanced synthetic ensembles which mimicked expected auditory processing, all participants achieved high levels of discrimination (p<0.01). Pitch ranking was then performed with a range of enhanced ensembles and was found to be related to the synthetic
ensemble structure. Results will be discussed and a new stimulation strategy will be presented
which conveys temporal fine structure to cochlear implant recipients by dynamically varying
relative stimulus timings.
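To illustrate what a "synthetic ensemble" manipulates, the sketch below generates per-electrode first-stimulus offsets whose order is simply reversed between the two conditions (Python; the sweep duration, electrode count, and indexing convention are assumptions):

```python
import numpy as np

def synthetic_ensemble(n_electrodes=22, sweep_ms=1.0, apex_to_base=False):
    # Same place, level, and rate on every electrode; only the relative
    # timing of stimuli across the array differs between conditions.
    offsets = np.linspace(0.0, sweep_ms, n_electrodes)  # ms offset per electrode
    return offsets[::-1] if apex_to_base else offsets

print(np.round(synthetic_ensemble(n_electrodes=5), 2))                     # base-to-apex
print(np.round(synthetic_ensemble(n_electrodes=5, apex_to_base=True), 2))  # apex-to-base
```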
M47: PSYCHOACOUSTIC OPTIMIZATION OF PULSE SPREADING
HARMONIC COMPLEXES FOR VOCODER SIMULATIONS OF COCHLEAR
IMPLANTS
Olivier Macherey, Gaston Hilkhuysen, Quentin Mesnildrey, Remi Marchand
Laboratoire de Mécanique et d'Acoustique, CNRS, Marseille, FRA
Noise-bands are often used as carriers in vocoder simulations of cochlear implants (CIs).
However, in contrast with the electrical pulses used in real CIs, noise-bands contain intrinsic
modulations that can interfere with the modulations of the speech signal that is transmitted.
Hilkhuysen & Macherey (2014) recently introduced an acoustic signal termed the pulse-spreading harmonic complex (PSHC) that aims to reduce these intrinsic modulations. The particularity of PSHCs lies in the phase relationship between their harmonics, which leads to a pulsatile waveform whose envelope repetition rate can be manipulated independently of the fundamental frequency. Based
on gammatone filtering, this previous study suggested that the repetition rate of PSHCs can be
adjusted to minimize the amount of intrinsic modulations at the output of auditory filters.
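The notion of intrinsic modulation after auditory filtering is easy to demonstrate numerically: below, a modulation index (envelope std/mean) is computed for a noise carrier versus a sine carrier after a bandpass filter standing in for a gammatone channel (Python; the band edges and filter order are assumptions, and the PSHC phase recipe itself follows the cited paper and is not reproduced here):

```python
import numpy as np
from scipy.signal import butter, sosfilt, hilbert

def intrinsic_modulation(carrier, fs, band=(900.0, 1100.0)):
    # Bandpass 'auditory' filter, then Hilbert envelope; return the
    # modulation index on the steady-state portion of the output.
    sos = butter(4, band, btype="bandpass", fs=fs, output="sos")
    env = np.abs(hilbert(sosfilt(sos, carrier)))[fs // 10:]  # skip transient
    return env.std() / env.mean()

fs = 16000
t = np.arange(fs) / fs
rng = np.random.default_rng(4)
print(round(intrinsic_modulation(rng.normal(size=fs), fs), 2),             # noise: deep AM
      round(intrinsic_modulation(np.sin(2 * np.pi * 1000.0 * t), fs), 2))  # sine: ~flat
```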
The present study had two aims. First, we wanted to examine the effect of using PSHCs in a
vocoder and to compare speech intelligibility results to those obtained with usual noise-band
and sine-wave vocoders. Sentence-in-noise recognition was measured in eight normal-hearing
subjects for noise-, sine-, and PSHC-vocoders using the French matrix test. Second, the tuning
of the rate of PSHCs has so far only been based on a simple linear auditory model. This model,
however, does not take into account the fact that auditory filters are level-dependent and does
not consider their phase curvature. Here, the optimal rate of PSHCs was investigated using
psychoacoustic methods. The amount of intrinsic modulations as a function of the PSHC pulse
rate was assessed using modulation detection and forward masking tasks.
Results from the speech intelligibility measure show mean speech reception thresholds of
1.6, 5.2 and 4.1 dB for sine, noise and PSHC vocoders, respectively. These results confirm the
hypothesis that intrinsic modulations influence the intelligibility of noise-vocoders and underline
the importance of considering alternative carriers when acoustically simulating CIs. Data on the
psychoacoustic optimization of the PSHC rate are currently being collected in our laboratory.
This study is funded by grants from the ANR (Project DAIMA ANR-11-PDOC-0022) and from
Institut Carnot IC-STAR (Project SIM-SONIC).
Hilkhuysen & Macherey (2014) “Optimizing pulse-spreading harmonic complexes to minimize
intrinsic modulations after auditory filtering.” J. Acoust. Soc. Am. 136:1281-1294.
M48: FIELD-BASED PREFERENCE OF MULTI-MICROPHONE NOISE
REDUCTION FOR COCHLEAR IMPLANTS
Adam Hersbach1, Ruth English2, David Grayden2, James Fallon3, Hugh McDermott3
1 Cochlear Ltd, Melbourne, AUS
2 University of Melbourne, Melbourne, AUS
3 Bionics Institute, Melbourne, AUS
Cochlear implant (CI) users find noisy conditions difficult for listening and communicating.
Directional microphones can improve the situation by reducing the level of background noise.
For example, the Nucleus CP900 series sound processor provides three possible directionality
settings - standard, zoom and Beam - that have been demonstrated to improve speech
intelligibility. To further enhance existing directionality performance, a spatial post-filter algorithm
(SpatialNR) can improve conditions by additionally attenuating the background noise at the rear
and sides of the listener. For the SpatialNR algorithm, the degree of attenuation is adjustable so
that the strength of noise reduction can be controlled.
While evaluations of noise reduction algorithms are traditionally performed in specific
acoustic situations created inside a sound booth, a user’s perception of the benefit outside the
sound booth is of critical importance to the acceptance, use, and ultimate success of the
algorithm in a commercial product.
The present study was designed to allow listeners to log their opinion on program
preference, and on their preferred strength of noise reduction processing. Subjects provided
input via their sound processor’s remote control during their daily lives. The device recorded the
automatic scene classification and was used to study patterns of preference in different acoustic
situations.
Groups of 15 and 20 CI recipients took part in two separate evaluations, respectively. In
the first, users were provided with two listening programs (one with noise reduction and one
without) or three listening programs (standard, zoom and Beam) and asked to vote for their
favourite setting in as many situations as possible. Users cast their vote by changing programs
and pressing a vote button to indicate their preferred program. The group of 15 subjects also
completed laboratory-based speech intelligibility, sound quality, and acceptable noise level
tasks and completed the SSQ questionnaire. In the second group of subjects, users adjusted
the strength of noise reduction directly and indicated their preferred setting by pressing the vote
button.
The results showed the following general patterns: a) most users provided a clear
indication of their listening preference, successfully tuning their own settings, b) most users
chose strong noise reduction settings away from the laboratory, c) user preference was not
always consistent with expectations from laboratory-based speech intelligibility tests, and d) in
most cases, there was minimal difference in voting patterns amongst sound classes, but there
were differences in voting patterns amongst individuals.
The voting system provided a way for users to cast a direct and immediate vote on
listening preference. This, combined with traditional laboratory-based evaluations, provides
important data for the selection and tuning of algorithms for integration within a commercial
sound processor.
M49: A NOVEL DEMODULATION TECHNIQUE IN REVERSE TELEMETRY
FOR COCHLEAR DEVICES
Sui Huang, Bin Xia, Song Wang, Xiaoan Sun
Nurotron Biotechnology Company, Irvine, CA, USA
Objective: In a cochlear implant system, the implant often transmits data back to the speech processor after measuring the neural impedance, testing the neural response, or changing the operation status. However, due to variation in patients' skin depth and misalignment of the coils, this reverse telemetry, which suffers from variable signal amplitude, a shifting DC level, and an undetermined digitizing threshold, is not very robust. In this work, a demodulation architecture based on a compact CMOS integrated circuit solution is proposed to greatly enhance transmission robustness, increase the maximum bit rate, reduce the bit error rate, and save board area.
Methods: The entire architecture is composed of four blocks: sensing, rectification, amplification, and
digitization. Initially, by deliberately changing the load of the implant, the signal amplitude on the coil of the
headpiece is also modulated. Instead of directly using this signal, an additional small coil is introduced to pick up
the modulated signal, separating the forward and reverse telemetry and reducing their mutual interaction.
Then a half-wave rectification is implemented to filter high frequency components and remove the negative part of
the signal. In this block, the RC filter should be carefully designed to minimize the load of the additional coil to
alleviate its effect on the main coil and save power consumption. Since the sensing coil is very small, the signal
after rectification needs to be amplified before it goes into the final digitization step. The amplifier has two modes: a
normal mode and a sleep mode. Because the cochlear device would not do forward and reverse telemetry
simultaneously, the amplifier is in the sleep mode while the processor is transmitting the voice signal to the implant.
During this mode, a switch that is connected between the input of the amplifier and a constant DC voltage source is
on, and the DC level of the input is determined. Once the reverse telemetry is enabled and the amplifier goes into
the normal mode, the switch turns off, and the common-mode voltage of the input remains well defined. In order
to achieve an adaptive digitizing threshold, a high pass feedback network is employed in the amplifier so that the
voltage on the capacitor in the network is the integral of the output signal. This voltage is used as a reference for
the digitization block to differentiate digital ‘1’ and ‘0’. As the main circuit of the last block, a Schmitt trigger is
implemented to convert the amplified analog signal to a clean digital signal. To further suppress random and
deterministic noise, a hysteresis characteristic is utilized to avoid undesirable data transition caused by a small
disturbance near the threshold voltage.
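The hysteresis behaviour of the final block can be modelled in a few lines (Python; the hysteresis width and the constant reference here are illustrative, whereas in the circuit the reference comes from the high-pass feedback integrator):

```python
import numpy as np

def schmitt_digitize(signal, ref, hysteresis=0.05):
    # Compare the amplified signal against the adaptive reference with
    # hysteresis, so small disturbances near threshold cannot toggle
    # the output bit.
    bits, state = np.empty(len(signal), dtype=int), 0
    for n, s in enumerate(signal):
        if state == 0 and s > ref[n] + hysteresis:
            state = 1
        elif state == 1 and s < ref[n] - hysteresis:
            state = 0
        bits[n] = state
    return bits

sig = np.array([0.0, 0.4, 0.52, 0.49, 0.51, 0.8, 0.2, 0.0])
ref = np.full(len(sig), 0.5)            # stand-in for the integrator output
print(schmitt_digitize(sig, ref))       # [0 0 0 0 0 1 0 0]: no chattering
```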
Results: The maximum gain of the amplifier is 32 dB, which guarantees that any electrical signal whose amplitude is larger than 10 mV can be detected by this circuit. As a result, the working distance between the headpiece and the implant is significantly extended from the original 7 mm (3 - 10 mm) to 13 mm (2 - 15 mm). This feature allows the cochlear device to cover most patients, including children and adults with a large variation in skin depth. Moreover, because most of the circuit is integrated into a single
chip (3 mm*3 mm), the entire board area of the proposed architecture without the sensing coil is only 5
mm*5 mm, which is 70% less than the previous circuit design based on the discrete components (10
mm*8 mm). Furthermore, the non-return-to-zero (NRZ) data coding can be used for the reverse
telemetry, since the digitizing threshold is adaptive with the density of the input signal. Therefore,
compared with return-to-zero (RZ) data coding, the maximum bit rate is doubled. In addition, the bit error
rate is reduced from 10^-2 to 10^-3 under the same circumstances, due to the low-pass filter in the
rectification block, the high pass filter in the amplification block, and the hysteresis characteristic in the
digitization block.
Conclusions: A novel demodulation technique in reverse telemetry for cochlear devices is
presented. The robustness of data transmission greatly enhances the accuracy of the neural impedance
test and neural response telemetry. The flexibility to adapt to the signal’s amplitude overcomes the variation across patients and use cases. The CMOS solution can easily be combined with DSP processors and other components into a single chip to significantly reduce the board area, so that a next generation of small, even invisible, speech processors and headpieces becomes possible.
M50: OWN VOICE CLASSIFICATION FOR LINGUISTIC DATA LOGGING IN
COCHLEAR IMPLANTS
Obaid ur Rehman Qazi, Filiep Vanpoucke
Cochlear Technology Center Belgium, Mechelen, BEL
It has been shown that children’s linguistic and cognitive development is strongly dependent upon
their language environment (e.g. Hart & Risley, 1995). Quantitative features of caregiver language, like
the number of words addressed to children and the amount of conversational turns they participate in
have proven to be good predictors of the children’s developmental trajectory (Hoff & Naigles, 2002;
Huttenlocher et al., 1991; Weisleder & Fernald, 2013). Similarly, for elderly citizens, the amount of produced speech can be a relevant metric to evaluate their speech communication capabilities (Li et al., 2014).
Quantifying such features of the natural language environment used to be time consuming and
costly, thus limiting the observations to a few hours. Only recently, the advent of automatic linguistic
logging has opened new possibilities to monitor language input over extended periods. This has
spawned a great interest in the topic by researchers and clinicians all around the world. Studies are
using these methods to investigate the language environment of different populations, including children
with autism (Dykstra et al., 2013), hearing loss (Caskey & Vohr, 2013; VanDam, Ambrose & Moeller,
2012), and Down syndrome (Thiemann-Bourque et al, 2014). Audiologists, speech language therapists,
and public home intervention programs (e.g. “Providence Talks”, “Project Aspire”), are using feedback
from linguistic logging in parent counselling.
To increase the scientific and clinical value of automated auditory scene classification within the
cochlear implant sound processor, the detection of the wearer’s own speech will be crucial. It is the basic
requirement for analysing several important features of the linguistic experience. The amount of own speech is a good indicator of language development. The amount of caregiver's speech and the
number of conversational turns reflect the quality and adequacy of the language input. Both measures
can provide insight about the recipient’s social integration and participation. They can provide clinicians
with important information to guide their therapy and help researchers to increase knowledge about the
everyday experience of people with cochlear implants. We present a novel online own voice classifier
which segments the incoming speech into ‘own’ or ‘external’ speech and logs the duration of ‘own’ and
‘external’ speech. Furthermore, the classifier counts the number of conversational turns taken during everyday conversations, from which an estimate of utterance duration can be inferred. The
classification results on the training and test data sets in different listening environments will be
presented.
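A plausible turn-counting rule over the classifier's output stream could look like this (Python; the counting convention and labels are our sketch, not the product logic):

```python
def count_turns(segments):
    # A turn is counted whenever the talker switches between 'own' and
    # 'external' speech; 'other' (noise/music) segments are ignored.
    turns, last_talker = 0, None
    for label in segments:
        if label in ("own", "external"):
            if last_talker is not None and label != last_talker:
                turns += 1
            last_talker = label
    return turns

stream = ["own", "own", "other", "external", "own", "external", "external"]
print(count_turns(stream))  # 3 talker switches -> 3 turns
```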
This work is supported by the EU SHiEC and Cochlear Technology Centre Belgium.
References:
Caskey, M., & Vohr, B. (2013). Assessing language and language environment of high-risk infants and
children: a new approach. Acta Paediatrica, 102(5), 451-61.
Dykstra, J. R., Sabatos-Devito, M. G., Irvin, D. W., Boyd, B. a, Hume, K. a, & Odom, S. L. (2013). Using the
Language Environment Analysis (LENA) system in preschool classrooms with children with autism
spectrum disorders. Autism : The International Journal of Research and Practice, 17(5), 582-94.
Hart, B., & Risley, T. R. (1995). Meaningful differences in the everyday experience of young American
children. Baltimore, MD: Paul H. Brookes Publishing
Hoff, E., & Naigles, L. (2002). How children use input to acquire a lexicon. Child Development, 73(2), 418-433.
Huttenlocher, J., Haight, W., Bryk, A., Seltzer, M., & Lyons, T. (1991). Early vocabulary growth: Relation to
language input and gender. Developmental Psychology, 27(1).
M51: ULTRAFAST OPTOGENETIC COCHLEA STIMULATION AND
DEVELOPMENT OF MICROLED IMPLANTS FOR RESEARCH AND CLINICAL
APPLICATION
Daniel Keppeler1, Christian Wrobel1, Marcus Jeschke1, Victor H Hernandez2, Anna Gehrt1,
Gerhard Hoch1, Christian Gossler3, Ulrich T Schwarz3, Patrik Ruther1, Michael
Schwaerzle3, Roland Hessler4, Tim Salditt5, Nicola Strenzke6, Sebastian Kuegler7, Tobias
Moser1
1 Institute for Auditory Neuroscience, University of Göttingen Medical Center, Göttingen, DEU
2 Department of Chemistry, Electronics and Biomedical Engineering, University of Guanajuato, Guanajuato, MEX
3 Department of Microsystems Engineering (IMTEK), University of Freiburg, Freiburg, DEU
4 MED-EL, Innsbruck, Austria and MED-EL Germany, Starnberg, DEU
5 Department of Physics, University of Göttingen, Göttingen, DEU
6 Auditory Systems Physiology Group, Department of Otolaryngology, University Medical Center, Göttingen, DEU
7 Department of Neurology, University Medical Center Göttingen, Göttingen, DEU
Cochlear implant patients suffer from low frequency resolution due to wide current spread
from stimulation contacts, which limits the number of independently usable channels and
compromises speech comprehension in noise, music appreciation or prosody understanding.
Our goal is to overcome these drawbacks by pursuing an optogenetic approach: Optical
stimulation can be spatially confined and thus promises lower spread of excitation in the
cochlea. Accordingly, an increased number of independent stimulation channels is expected to
enhance frequency resolution and intensity coding.
Recently we demonstrated the feasibility of optogenetic stimulation of the auditory
pathway in rodents with neuronal expression of Channelrhodopsin-2 (ChR2), a blue light-gated ion channel (Hernandez et al. 2014). Immunohistological data showed expression of ChR2 in
the somas and neurites of spiral ganglion neurons. Electrophysiological measurements
including compound action potentials, recordings in the inferior colliculus and preliminary data
from the auditory cortex revealed specific activation along the auditory pathway upon blue light
stimulation with a laser-coupled fiber in the cochlea.
In 2014, Klapoetke et al. published the ultrafast light-gated ion channel Chronos, which has faster on/off kinetics than ChR2, allowing stimulus-driven firing rates of over 200 Hz and reaching the physiological firing rates of the auditory nerve (200-400 Hz). Additionally, Chronos has a 10-fold higher light sensitivity than ChR2. We specifically targeted spiral ganglion neurons using adeno-associated virus-mediated gene transfer of Chronos in embryonic (trans-uterine) and postnatal virus injections. Ongoing experiments suggest shorter first-peak latencies and responses to higher stimulation rates in comparison to ChR2. Additional experiments in the IC and auditory cortex are under way to further characterize responses of the auditory pathway to blue-light stimulation of Chronos-expressing SGNs.
In collaboration with semiconductor experts, we developed multichannel optical implants
which accommodate approximately 100 microLEDs per 10 mm on a flexible substrate. Silicone-encapsulated test devices have been successfully implanted in rodent cochleae and assessed in 3D models derived from X-ray tomography.
Taken together, our experiments demonstrate the feasibility of optogenetic cochlea
stimulation to activate the auditory pathway and lay the groundwork for future applications in
auditory research and prosthetics.
M52: OPTIMAL WIRELESS POWER TRANSMISSION FOR A CLOSED-LOOP COCHLEAR IMPLANT SYSTEM
Xiaoan Sun, Sui Huang
Nurotron Biotechnology Company, Irvine, CA, USA
Objective: To provide just enough power for normal operation of the implant circuit and for stimulation current to the auditory nerve, across a wide range of implantee flap thicknesses (1 mm - 12 mm) and under dynamically changing acoustic levels, in order to optimize the battery life of the external processor.
Methods: Modern cochlear implant systems use transcutaneous signal and power transmission to avoid the skin infections commonly associated with early percutaneous devices. Typically, a pancake-shaped transmitter coil and a pancake-shaped receiver coil are inductively coupled across the skin to provide wireless signal and power transmission to the implant system. One challenge of wireless power transmission is that the power transmission efficiency depends on flap thickness, which varies significantly among implantees; the coupling factor of two inductively coupled coils is inversely proportional to the square of the distance between them. According to clinical statistics, the flap thickness of cochlear implantees falls in the range of 1 mm to 12 mm, a 12-fold variation. For ease of manufacture and market supply, the number of transmitter/receiver circuit and hardware designs is kept minimal, usually only one type, which makes it difficult to achieve optimal power efficiency at all flap thicknesses. Without dynamic transmission power adjustment, the power transmission circuit needs to be designed for the worst case of the maximal flap thickness of 12 mm. At this maximal coil separation, the transmission power level has to be greatly increased, since the coupling coefficient is the lowest; this means more battery power is needed. For most subjects, with normal flap thickness, more than enough power is then transmitted and wasted. To maximize battery life for implantees of different flap thicknesses, the transmission power should be adjusted to accommodate the variation in power transmission efficiency.
In a cochlear implant system, the digital signal processing (DSP) output of the signal processing strategy is first synchronized with and modulated onto a radio frequency (RF) carrier signal. The modulated RF signal is then sent to an RF amplifier circuit to gain enough power for data and power transmission into the implant circuit. The amplified RF signal drives the transmission coil, which is inductively coupled with the receiver coil of the implant. The transmission coefficient is defined as the ratio of received power at the receiver side to transmitted power at the transmitter side. According to this coefficient, the signal and power at the transmission coil are transmitted into the implant circuit through the receiver coil. The transmission coefficient of the RF coupling system depends on the circuit design of the transmitter and receiver systems. Usually, a Class-E RF power amplifier is employed in the transmitter for its high efficiency, and a resonant tank tuned to the RF carrier frequency is used in the receiver. The transmission coefficient depends on many factors, including the efficiency of the RF amplifier, the quality factor Q of the receiver's tuning tank, and the distance between the transmitter and receiver coils. When the circuit designs of the transmitter and receiver are fixed, the transmission coefficient is determined by the coil distance. For most cochlear implants' RF transmission systems, once the coil distance exceeds a certain value, e.g. 8 mm, the transmission coefficient drops as the coil distance increases, and the received power drops with it. To maintain the same power level at the receiver, the transmitter signal power must increase. For a Class-E RF amplifier, an effective way to increase the signal power is to increase its power supply voltage. In this work, a two-bit power level control signal is provided by the DSP to a DC-DC converter with adjustable output voltage, so that four different voltage levels can be generated to power the Class-E RF power amplifier. According to the flap thickness of each implantee, one of four power levels can be chosen to provide appropriate power to the implant system.
In order to select the appropriate supply voltage for the RF power amplifier, the DSP needs to know the power status of the implant system. Two functions are needed to achieve this: first, the power supply voltage of the implant system must be measurable; second, the measured voltage must be transferable to the external processor. For the first function, an implant power supply voltage measurement command is sent to the implant system; after receiving this command, the sampling circuit of the implant samples the power supply voltage, and the implant's Analog-to-Digital Converter (ADC) converts it into a digital value. For the second function, a load-modulated back telemetry system transfers the digitized supply voltage to the DSP of the external processor. With this closed-loop power monitoring and adjustment system, the DSP can select an appropriate power level and transmit just enough power into the implant system.
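The selection logic can be pictured with a short sketch. The following Python fragment is a minimal illustration of the closed-loop power-level adjustment described above; the voltage thresholds, the four-level mapping, and all names are assumptions for illustration, not Nurotron's actual implementation.

# Illustrative sketch of the closed-loop power-level selection described
# above. Voltage thresholds and the four-level mapping are assumptions.
TARGET_V = 3.3      # assumed minimum implant supply voltage for normal operation
MARGIN_V = 0.2      # assumed margin above the +/-0.1 V measurement accuracy
MIN_LEVEL, MAX_LEVEL = 1, 4   # two-bit control -> four DC-DC output voltages

def select_power_level(current_level, measured_v):
    """Step the RF amplifier supply up or down so the implant receives
    just enough voltage, mirroring the back-telemetry loop above."""
    if measured_v < TARGET_V:                  # implant under-powered: step up
        return min(current_level + 1, MAX_LEVEL)
    if measured_v > TARGET_V + 2 * MARGIN_V:   # wasteful headroom: step down
        return max(current_level - 1, MIN_LEVEL)
    return current_level                       # within band: hold

level = MAX_LEVEL                              # start at the worst-case level
for v in [4.1, 3.9, 3.8, 3.5, 3.4]:            # simulated telemetry readings
    level = select_power_level(level, v)       # settles at a lower, power-saving level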
Results: In our implementation, the maximum power level is set for the largest flap thickness. When the flap is thinner, the power level is reduced to save power, thus extending the battery life of the external system. The power supply voltage of the implant system can be measured to an accuracy of +/-0.1 V, and the measurement can be transferred to the DSP of the external processor through the back telemetry system. For the most common flap thicknesses, between 4 mm and 8 mm, power level 1 is sufficient for power transmission, which saves 30% of battery power compared with power level 4, the level required for the maximal flap thickness of 12 mm.
Conclusions: Combining the measurement of the implant's power supply voltage with its transfer to the DSP, a closed-loop power level adjustment system was developed that provides just enough power for normal operation of the implant system across a wide range of implantee flap thicknesses (1 mm-12 mm). Up to 30% of battery power can be saved for the most common flap thicknesses, between 4 mm and 8 mm, compared with the power level required for the maximal flap thickness of 12 mm.
M53: A POLYMER-BASED INTRACOCHLEAR ELECTRODE FOR
ATRAUMATIC INSERTION
Tae Mok Gwon1, Seung Ha Oh2, Min-Hyun Park3, Ho Sun Lee3, Chaebin Kim1, Gwang Jin
Choi1, Sung June Kim1
1 School of Electrical and Computer Engineering and Inter-University Semiconductor Research Center, Seoul National University, Seoul, Korea
2 Department of Otolaryngology-Head and Neck Surgery, Seoul National University, Seoul, Korea
3 Department of Otorhinolaryngology, Boramae Medical Center, SMG-SNU, Seoul, South Korea
Polymers have been adopted in implantable neural recording and stimulation electrodes due to their flexibility and manufacturability [1]. Among biocompatible polymers, liquid crystal polymer (LCP) is one of the most reliable because its water absorption rate is lower than that of other polymers. Moreover, the thermoplasticity and monolithic encapsulation of LCP films allow the fabrication of biomedical devices fitted to their applications [2]. In this study, we fabricate and evaluate an LCP-based intracochlear electrode array designed for the atraumatic insertion that a high-performance cochlear implant requires.
LCP films of 25 µm thickness are used as flexible electrode substrates. Using thin-film processes and thermal press bonding technology, we fabricate an LCP-based cochlear electrode array with a multi-layered structure and blind vias. This method can reduce the width of the electrode array, so that it occupies less space when inserted in the scala tympani, and allows the number of LCP film layers to be varied to achieve both sufficient basal support and a flexible tip.
Insertion and extraction forces are measured using a motorized linear actuator and a force sensor while the LCP-based cochlear electrode array is inserted into a transparent plastic model of the human scala tympani. Insertion depth and safety are evaluated in human temporal bone studies: the insertion depth is measured using micro-CT scans, and dyed cross-sections of the cochleae are used to check for insertion trauma. Additionally, electrically evoked auditory brainstem responses (EABRs) of a guinea pig are recorded.
As a prototype, we fabricate and evaluate a 16-channel polymer-based intracochlear electrode for atraumatic insertion using LCP [3]. The length of the final electrode was 28 mm, and its width varied from 0.3 mm (tip) to 0.7 mm (base). The insertion force at a displacement of 8 mm from the round window and the maximum extraction force are 2.4 mN and 34.0 mN, respectively, which are lower than those of LCP-based cochlear electrodes with the same thickness from base to tip. The measured insertion depths in the human temporal bone are 630°, 470°, 450°, and 360° in the round window approach and 500° in the cochleostomy approach. No trauma is observed at the basal turn in any insertion trial, but dislocation into the scala vestibuli at the second turn is observed in the 630° insertion trial. EABR recordings corroborate the electrode's efficacy.
Acknowledgements: This work is supported by the Public Welfare & Safety research program (NRF-2010-0020851), GFP (CISS-2012M3A6A6054204), and the BK21 Plus Project, Department of Electrical and Computer Engineering, Seoul National University.
References:
[1] C.B. Hassler, T. Boretius, T. Stieglitz, Journal of Polymer Science Part B: Polymer Physics, 2011
[2] S.W. Lee, et al., IEEE Transactions on Biomedical Engineering, 2011
[3] T.M. Gwon, et al., Biomedical Microdevices, 2015
M54: DEVELOPMENT OF A FREE-MOVING CHRONIC ELECTRICAL
STIMULATION SYSTEM FOR THE GUINEA PIG
Gemaine N Stark, Michael E Reiss, Anh Nguyen-Huynh, Lina A. J. Reiss
Oregon Health and Science University, Portland, OR, USA
The goal of Hybrid or electro-acoustic stimulation (EAS) cochlear implants (CIs) is to
provide high-frequency electric hearing while preserving residual low-frequency acoustic hearing
for combined electric and acoustic stimulation in the same ear. However, a third of EAS CI
patients lose 30 dB or more of low-frequency hearing months after implantation (Gantz et al.,
2010; Gstoettner et al., 2009). We recently showed that EAS may increase hearing loss beyond
that induced by surgery in normal-hearing guinea pigs (Tanaka et al., 2014), but the shifts were
small and variable. One limitation of the previous study was that animals were restrained during chronic electrical stimulation, limiting stimulation to 3 hours per day, less than the typical daily use of a human cochlear implant patient. This illustrates the need for a durable, reliable chronic stimulation system for guinea pigs that allows stimulation for longer durations of up to 10-12 hours per day, i.e. a system that does not require restraint.
Here we describe the development and evaluation of a new “free-moving” chronic
electrical stimulation system for the guinea pig, i.e. a system that allows animals to move freely
in the cage while being stimulated. Stimulation is delivered via a cable connected to the guinea pig cochlear implant and tethered to a stand over the cage. Additional components include a screw-on connector plug and jack interface mounted on the skull, a specially designed lid that allows movement throughout the cage, and a commutator fastened to a counter-weighted tether stand on top of the cage. The electro-acoustic signal is sent to the guinea pig through cables attached to the commutator and directly into the cochlear implant via the head-mount connector.
By increasing the stimulation duration, we will be able to simulate the amount of daily
stimulation a human cochlear implant patient would normally experience. Successful
development of a durable, reliable chronic free-moving system for chronic electrical stimulation
will allow future studies of the effects of electrical stimulation on residual hearing, as well as
other studies requiring chronic stimulation, such as neurophysiological studies of brain plasticity.
This research was funded by a research contract with Cochlear.
M55: INTRA-COCHLEAR ELECTRO-STIMULATION EXPERIMENTS FOR
COMPLEX WAVEFORMS USING OTICON MEDICAL ANIMAL STIMULATION
PLATFORM IN VIVO
Matthieu Recugnat1, Jonathan Laudanski2, Lucy Anderson1, David Greenberg1, Torsten
Marquardt1, David McAlpine1
1 University College London, London, GBR
2 Oticon Medical, Vallauris, FRA
In recent years, the principal advances in cochlear implant (CI) performance have been
led by improvements in signal processing technologies. Nevertheless, it is possible that further
advances in CI performance could be achieved through enhancement of the electrode-neuron
interface. To achieve this, we need to better understand how the CI stimulation strategy relates
to neural recruitment, and in turn how this influences sound perception.
Oticon Medical has developed the Animal Stimulation Platform (ASP) on the basis of its new-generation stimulation chip. The ASP allows the generation of complex electrical stimuli in various combinations of temporal and spatial configurations for monaural and binaural implantations. Defining stimulation strategies using such complex waveforms will show how these temporal and spatial configurations impact auditory nerve fibre recruitment, as measured with intra- and extra-cochlear objective measures.
Here, we present the ASP and compare its abilities with those of other commercially available devices. We also present preliminary data from a guinea pig model of cochlear implantation and subsequent stimulation via the ASP. Our preliminary data demonstrate that stimulation strategies can be taken beyond the standard comparison with biphasic bipolar or tripolar stimulation by investigating how intra-cochlear electrical stimulation impacts the peripheral auditory pathway, namely the auditory nerve. The data include the impact on spread of excitation for both classic and complex waveforms, preceded or not by an unbalanced sub-threshold pre-pulse presented at different pre-determined time intervals. We believe that a stimulation strategy designed in response to the physiological behavior of the auditory nerve could increase the efficacy of auditory nerve fibre recruitment, with reduced power consumption, a broader dynamic range, and reduced channel interaction.
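As a concrete illustration of the stimuli involved, the Python sketch below constructs a charge-balanced biphasic pulse preceded by an unbalanced sub-threshold pre-pulse at a configurable interval. The sampling rate, amplitudes, and phase durations are illustrative assumptions, not the ASP's actual parameters or API.

import numpy as np

FS = 1_000_000   # assumed 1 MHz waveform clock

def biphasic(amp_ua, phase_us):
    """Charge-balanced, cathodic-first biphasic pulse."""
    n = int(phase_us * FS / 1e6)
    return np.concatenate([np.full(n, -amp_ua), np.full(n, +amp_ua)])

def with_prepulse(amp_ua=500.0, phase_us=40.0,
                  pre_amp_ua=50.0, pre_us=100.0, gap_us=200.0):
    """Unbalanced sub-threshold pre-pulse, configurable gap, then the main pulse."""
    pre = np.full(int(pre_us * FS / 1e6), -pre_amp_ua)   # unbalanced pre-pulse
    gap = np.zeros(int(gap_us * FS / 1e6))               # pre-determined interval
    return np.concatenate([pre, gap, biphasic(amp_ua, phase_us)])

wave = with_prepulse(gap_us=200.0)   # vary gap_us to probe different intervals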
M56: OPTIMISATION OF NUCLEUS® AUTONRT® ALGORITHM
Saji Maruthurkkara, Ryan Melman
Cochlear Ltd, Sydney, AUS
Objective: The AutoNRT® algorithm has greatly increased the efficiency of ECAP
measurements with Nucleus® cochlear implants, as it automates and therefore speeds up the
manual steps performed by the clinician. The AutoNRT test has been in regular use intraoperatively to confirm the auditory nerve’s responsiveness to electrical stimulation, as well as
post-operatively as an aid in the fitting process. This project aimed to further optimise the current AutoNRT algorithm by finding parameters that result in 1) a reduced number of measurements above the tNRT, so that the sound percept is not too loud for the recipient; 2) measurements at evenly spaced electrodes when fewer than 22 electrodes are measured; and 3) a reduction in the overall duration of testing. An enhancement to the AutoNRT algorithm has been
developed that helps to further increase the speed of post-operative NRT measurements.
In the current clinical software, the intra-operative test takes less time to complete than the post-operative test because a faster rate of 250 Hz is used for the intra-operative measurement, compared to a rate of 80 Hz for post-operative measurements. The lower rate is used post-operatively to ensure comfort for the recipient. The study also evaluated the feasibility of using the faster 250 Hz rate for post-operative measurements in combination with the enhanced AutoNRT algorithm.
Methods: NRT measurements from anonymised databases collected from large clinics worldwide were analysed for the development of alternative methods. Four methods each for electrode selection and starting current level were developed based on this analysis. These methods were tested against the global databases to assess their effects on the above-mentioned objectives, and the version of the algorithm that gave the best outcome was implemented. In a clinical study, NRT measurements were made with both the enhanced NRT algorithm and the current algorithm. Adult cochlear implant recipients implanted with Nucleus CI24RE series or CI500 series devices were recruited into the study. NRT measurements were made with the enhanced AutoNRT algorithm at 250 Hz and 80 Hz, and the speed of testing and the NRT thresholds were compared to measurements made with the current AutoNRT algorithm at the same rates.
Results: The results showed that the enhanced AutoNRT algorithm yielded significantly faster NRT measurements with no differences in the NRT thresholds obtained. Scenarios in which testing at faster rates may not be indicated will be presented.
Conclusions: The enhanced NRT algorithm can optimise NRT measurements further and also reduce the overall time they require.
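For readers unfamiliar with automated ECAP threshold searches, the sketch below shows the generic rising-then-refining search that tools of this kind automate. It is emphatically not Cochlear's proprietary AutoNRT algorithm: the step sizes and the response detector ecap_present are illustrative assumptions.

from typing import Callable, Optional

# Generic sketch of an automated ECAP threshold (tNRT) search; NOT the
# proprietary AutoNRT algorithm. Step sizes and the detector are assumed.
def find_tnrt(start_cl: int, ecap_present: Callable[[int], bool],
              coarse: int = 6, fine: int = 2, max_cl: int = 255) -> Optional[int]:
    """Ascend in coarse current-level (CL) steps until a response is found,
    then refine downward in fine steps; return the lowest CL with a
    detectable ECAP, or None if max_cl is exceeded first."""
    cl = start_cl
    while not ecap_present(cl):        # ascending search keeps loudness low
        cl += coarse
        if cl > max_cl:
            return None                # no measurable response on this electrode
    while cl - fine >= 0 and ecap_present(cl - fine):
        cl -= fine                     # bracket down to threshold
    return cl

An ascending search of this kind also bears on objective 1) above, since it limits the number of presentations above tNRT.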
M57: ANDROID-BASED RESEARCH PLATFORM FOR COCHLEAR IMPLANTS
Feng Hong1, Hussnain Ali1, John H.L. Hansen1, Emily A. Tobey2
1 Department of Electrical Engineering, The University of Texas at Dallas, Richardson, TX, USA
2 Department of Behavioral and Brain Sciences, The University of Texas at Dallas, Richardson, TX, USA
The current work presents the design and features of an Android-based research
platform for cochlear implants. The platform is based on existing and emerging hand-held
Android smartphones/tablets and is an extension of the PDA-based research platform
developed by the Loizou Cochlear Implant Laboratory at The University of Texas at Dallas. This
highly versatile and portable research platform allows researchers to design and perform
complex experiments with cochlear implants manufactured by Cochlear Corporation with great
ease and flexibility. The platform consists of a smartphone/tablet for implementing and
evaluating novel sound processing algorithms and a custom-developed interface board to
stimulate Cochlear Corporation’s CI24 implants. The interface board houses a high quality
stereo audio codec, an FPGA, a Wi-Fi module, and a set of input/output ports for connection
with clinical Behind-The-Ear (BTE) microphone units and Freedom headpiece coils.
The acoustic signal is acquired from the BTEs, sampled digitally by the on-board stereo codec, and sent to the smartphone wirelessly over Wi-Fi for subsequent processing. The smartphone receives packets of the stereo acoustic signal every 8 ms and processes them through a sound coding strategy. As a proof of concept, we have implemented the Advanced Combination Encoder (ACE) strategy using a combination of Java and C. The processing
generates a set of stimulation data which consists of electrode, mode, amplitude (EMA), and
timing of each biphasic pulse. The stimulation data is sent back to the interface board where it is
encoded in the Embedded Protocol by the FPGA and finally streamed to the Freedom coils for
stimulation.
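The per-packet processing can be pictured with the sketch below: an ACE-style n-of-m selection over one 8 ms frame that yields (electrode, amplitude) pairs analogous to the EMA data described above. The FFT filterbank, 22 channels, and 8-of-22 selection follow common ACE conventions, but the mapping constants are illustrative and this is not the platform's actual Java/C implementation.

import numpy as np

FS, FRAME = 16_000, 128          # assumed 16 kHz input; 128 samples = 8 ms
N_CH, N_SEL = 22, 8              # m = 22 channels, n = 8 maxima per frame

def process_frame(frame, t_level=100, c_level=200):
    """One 8 ms packet -> list of (electrode, current level) pulses."""
    spec = np.abs(np.fft.rfft(frame * np.hanning(FRAME)))
    bands = np.array_split(spec[1:], N_CH)               # crude FFT filterbank
    env = np.array([b.mean() for b in bands])            # per-channel envelope
    picks = np.argsort(env)[-N_SEL:]                     # n-of-m maxima selection
    loud = np.clip(env[picks] / (env.max() + 1e-12), 0, 1)
    amps = (t_level + loud * (c_level - t_level)).astype(int)   # map into T-C range
    return [(N_CH - ch, int(a)) for ch, a in zip(picks, amps)]  # electrode 22 = apical

pulses = process_frame(np.random.randn(FRAME))           # stand-in for one codec packet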
The platform can be used for unilateral or time-synchronized bilateral stimulation. It can
be used in both real-time and bench-top modes. In the bench-top mode, the processing can be
carried out in MATLAB in offline mode and the stimulation data can optionally be sent to the
interface board via a USB cable. The bench-top mode can also be used to design and conduct
psychophysical experiments. In addition, the platform can be configured to provide both electric
and acoustic stimulation (EAS). The graphical controls on the smartphone provide an interactive
user-interface for modifying processing parameters on the go. The platform will need FDA
approval before it can be used for experimentation with human participants and made available
to the research community.
M58: POSTER WITHDRAWN
TUESDAY POSTER ABSTRACTS
T1: HIGH FREQUENCY ULTRASOUND IMAGING: A NEW TOOL FOR
COCHLEAR IMPLANT RESEARCH, TRAINING, AND INTRAOPERATIVE
ASSISTANCE
Thomas G Landry1, Manohar Bance2, Jeremy A Brown3
1 Capital District Health Authority, Div Otolaryngology, Halifax, CAN
2 Dalhousie University, Dept Surgery, Halifax, CAN
3 Dalhousie University, School of Biomedical Engineering, Halifax, CAN
Current methods of cochlear implant design testing, surgical training, and testing of new surgical techniques typically involve implantation into either an artificial cochlear model or a cadaver cochlea. While artificial cochleae constructed of transparent material can provide direct visualization of an implant, they do not contain the fine anatomical details of real cochleae, such as a basilar membrane. Implantation into cadaver cochleae is usually followed by post-implantation analysis using, for example, histology or x-ray imaging. This can provide information about the final position of the implant, but no visual feedback during implantation with which to observe implant position dynamics or notable events such as tissue contact. Indeed, any tissue contact must be inferred afterwards from tissue damage. During implantation in patients, unwanted tissue contact can only be inferred from subtle forces experienced by the surgeon's hand, possibly after significant tissue damage has already occurred.
High frequency ultrasound (HFUS) is here presented as a new technology for
intracochlear imaging from several perspectives. First, a HFUS imaging system was used to
view the internal details of decalcified unfixed cadaver cochleae with high resolution in real-time.
Cochlear implants could be visualized during insertion and tip contact with the basilar
membrane could be observed. This technique could be useful for testing new cochlear implant
designs and surgical techniques, as well as during surgical training to provide real-time
visualization of the internal cochlea.
Using HFUS in a different approach, initial work on the construction of a miniature HFUS
transducer array which could be embedded within a cochlear implant tip is discussed. Such an
array would not provide two-dimensional intracochlear images, but rather each element would
provide "echolocation" data, with several elements arranged to provide an indication of how
close the tip is to the surrounding tissues of the scala tympani. This echolocating tip array could provide surgeons with an early warning of impending contact with the basilar membrane, for example, or indicate how close the implant is to the modiolar wall.
Acknowledgments: This work was funded by the Capital Health Research Fund and by the
Atlantic Innovation Fund.
T2: ELECTRICALLY-EVOKED AUDITORY STEADY-STATE RESPONSES AS
AN OBJECTIVE MEASURE OF LOUDNESS GROWTH
Maaike Van Eeckhoutte1, Hanne Deprez1,2, Robin Gransier1, Michael Hofmann1, Jan
Wouters1, Tom Francart1
1 ExpORL, Dept. Neurosciences, KU Leuven, Leuven, BEL
2 STADIUS, Dept. of Electrical Engineering (ESAT), KU Leuven, BEL
In current clinical practice, mainly threshold and comfort levels are measured for the fitting of cochlear implants (CIs). While loudness growth is highly patient- and electrode-dependent, measuring it is time-consuming and requires active cooperation from the patient. We recently found that the auditory steady-state response amplitude can be used as an objective measure of loudness growth in acoustic hearing, for both normal-hearing and hearing-impaired listeners. This study aims to demonstrate the same for electric hearing using the electrically-evoked auditory steady-state response (EASSR).
Seven CI patients with a Nucleus® implant participated in this study. Direct computer-controlled bipolar stimulation (BP+2) consisted of 40-Hz sinusoidally amplitude-modulated 900-pps biphasic pulse trains. Active electrodes 15 and 6 were chosen in order to stimulate a more apical and a more basal region of the cochlea. Stimuli were presented at different current levels encompassing the patients' dynamic ranges. Behaviourally, loudness growth was measured using both absolute magnitude estimation and a graphical rating scale: in the first measure, patients chose a number corresponding to the loudness of the stimulus; in the second, loudness was rated on a scale with loudness categories. EASSRs to the same stimuli and levels used for the behavioural measures were recorded with a Biosemi 64-channel ActiveTwo EEG recording system. After blanking to eliminate CI stimulation artefacts, the amplitude of the EASSRs was used as the outcome measure. The data were transformed and normalised in order to compare the different measures: the geometric mean of each behavioural response was taken, the responses were logarithmically transformed, and the mean was subtracted to obtain zero-mean curves.
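A minimal sketch of this normalisation, assuming the geometric mean is taken across repeated ratings per level (variable names are illustrative):

import numpy as np

def normalise(growth):
    """growth: (n_repeats, n_levels) ratings or EASSR amplitudes (positive).
    Returns a zero-mean, log-transformed growth curve."""
    gmean = np.exp(np.mean(np.log(growth), axis=0))  # geometric mean across repeats
    log_curve = np.log(gmean)                        # logarithmic transform
    return log_curve - log_curve.mean()              # subtraction -> zero-mean curve

Curves from magnitude estimation, categorical rating, and EASSR amplitude can then be overlaid directly for comparison.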
After normalisation, only small differences were found between behavioural and EASSR
growth functions. The correspondence seemed to be even better than in acoustic hearing. The
(E)ASSR can consequently be used as an objective measure of loudness growth in both
acoustic and electric hearing. This is potentially useful for fitting auditory prostheses, especially
in asymmetric cases such as in bimodal hearing.
Acknowledgements: Maaike Van Eeckhoutte and Robin Gransier are supported by a PhD-grant
for Strategic Basic Research by the Agency for Innovation by Science and Technology in
Flanders (IWT, 131106 and 141243). This work was also supported by the Research
Foundation Flanders (FWO, project number G.066213).
T3: CORTICAL EVOKED POTENTIALS OF SPEECH IN COCHLEAR IMPLANT
LISTENERS
Emma Brint, Paul Iverson
University College London, London, GBR
Cochlear implant (CI) users have a wide range of speech recognition outcomes, and
currently there is no objective test that can predict how a user will perform. This study evaluates
cortical evoked potentials for this purpose, more specifically, the acoustic change complex
(ACC), which is an event related potential (ERP) that reflects cortical auditory processing. The
aim is to see whether speech perception performance in CI listeners is reflected in the ACC. Nine post-lingually deafened CI listeners were tested on their speech perception abilities and their ACC responses to random continuous sequences of four vowels, four fricatives, and silence, each lasting 300-400 ms. The ACC stimulus sequences create trials in which there is either a spectral change (i.e. vowel to vowel, vowel to fricative, etc.) or a change from silence to sound (and vice versa). This design means that the electrical artefact created by the CI is continuous across spectrally changing trials, and so for most participants its effect is minimal. Results show that the
two participants who performed worst in the speech perception tasks had the largest CI
artefacts and therefore an ACC could not be measured. Of the remaining seven, four showed a
slightly delayed, but clear, N1 response, whereas the other two showed very small or no
responses. Generally speaking, those who had large CI artefacts scored lower on tasks of
speech perception than those with clearer ACC responses. It therefore appears that an ACC
response can be measured in most CI users, and that it may be predictive of speech perceptual
performance.
T4: CORTICAL VOICE PROCESSING IN COCHLEAR-IMPLANTED CHILDREN:
AN ELECTROPHYSIOLOGICAL STUDY
David Bakhos, Emmanuel Lescanne, Sylvie Roux, Frederique Bonnet-Brilhault, Nicole
Bruneau
Université François-Rabelais de Tours, CHRU de Tours, UMR-S930, Tours, FRA
Introduction: In those with prelingual deafness, the use of cochlear implants can restore
both auditory input to the auditory cortex and the ability to acquire spoken language. Language
development is strongly intertwined with voice perception. The aim of this electrophysiological
study was to investigate human voice processing with cortical auditory evoked potentials (AEPs)
in cochlear-implanted (CI) children.
Patients and method: Eight CI children, with good auditory and language performance,
were investigated with cortical AEPs and compared with 8 normal-hearing age-matched
controls. The auditory stimuli were vocal and non-vocal sounds. Independent component
analysis was used to minimize the cochlear implant artifact in cortical AEPs.
Results: Fronto-temporal positivity to voice was found in normal-hearing children with a
significant effect in the 140-240 ms latency range. In the CI children group, we found a positive
response to voice in the 170-250 ms latency range with a more diffuse and anterior distribution
than in the normal-hearing children.
Conclusion: A response to voice was recorded in CI children. The topography and latency of the response to voice differed from those recorded in normal-hearing children. This finding argues for cortical voice processing reorganization in congenitally deaf children fitted with a cochlear implant.
T5: DEFINITION OF SURGICAL LANDMARKS AND INSERTION VECTORS AS
ORIENTATION GUIDES FOR COCHLEAR IMPLANTATION BY MEANS OF
THREE- AND TWO-DIMENSIONAL COMPUTED TOMOGRAPHY
RECONSTRUCTIONS OF TEMPORAL BONES
Hayo A. Breinbauer1, Mark Praetorius2
1 Pontificia Universidad Católica de Chile, Santiago de Chile, CHL
2 Div. of Otology and Neurotology, Dept. of Otolaryngology, University of Heidelberg Medical Center, Heidelberg, DEU
AIMS: To describe the orientation and particular characteristics of the initial segment of
the cochlea, in the context of an insertion vector based on a cochleostomy and a round window
implantation approach.
MATERIAL AND METHODS: CT-scans of temporal bones of 51 cochlear implant
candidates (for a total of 100 included ears) were collected. Three dimensional reconstructions
of those temporal bones were analyzed focusing on multiple anatomical features influencing
insertion vectors: 1) Ideal insertion following the centerline of the initial segment of the cochlea,
2) Ideal insertion vector following a round window approach and parallel to the outer wall of the
cochlea on its initial segment, 3) Architecture of the hook region, 4) Indirect estimation of the
orientation of the basilar membrane by means of assessing the line connecting opposite points
of the first turn of the cochlea.
RESULTS: After correcting the radiological data with true anatomical planes (true mid-sagittal plane and Frankfort plane), the average centerline of the initial segment of the cochlea in the sample can be described as having a 63° angle on the axial plane (with the mid-sagittal line as reference) and a 7° angle on the coronal plane (with the horizontal left-right line as reference). The ideal insertion vector for a round window approach was on average 7° apart from this centerline (significant difference, p<0.001), with an average 62° angle on the axial plane and a 5° angle on the coronal plane. A large dispersion was found in insertion vectors across the sample, with as much as 60° of difference between subjects. This dispersion was larger in the usually less assessed sagittal component, distributed as a "narrow band" along a long axis parallel and slightly lateral to the vertical segment of the facial nerve. The latter finding highlights the need for an individually applicable estimation protocol.
T6: PREDICTING COCHLEAR IMPLANT PERFORMANCES FROM A NOVEL
EABR-BASED ESTIMATION OF ELECTRICAL FIELD INTERACTIONS
Nicolas Guevara1, Eric Truy2, Stephane Gallego3, Dan Gnansia4, Michel Hoen4
1 University Head and Neck Institute, CHU de Nice, Nice, FRA
2 Department of Audiology and Otorhinolaryngology, Edouard Herriot Hospital, Lyons, FRA
3 Institute for Readaptation Sciences and Techniques, Lyons, FRA
4 Oticon Medical, Vallauris, FRA
Cochlear implants (CIs) are neural prostheses that have been used routinely in the clinic over the past 25 years. They allow children who were born profoundly deaf, as well as adults affected by hearing loss for whom conventional hearing aids are insufficient, to attain a functional level of hearing. Increasingly frequent and systematic use for less severe cases of deafness, and bilateralization without strictly defined criteria, together with a lack of reliable prognostic factors, have limited individual and societal optimization of deafness treatment.
Our aim was to develop a prognostic model for patients with unilateral cochlear implants.
A novel method of objectively measuring electrical and neuronal interactions using electrical
auditory brainstem responses (eABR) was used.
Speech recognition performance without lip reading was measured for each patient using a logatome test (64 "vowel-consonant-vowel" items; VCV; forced choice of 1 out of 16). eABRs were measured in 16 CI patients (CIs with 20 electrodes, Digisonic SP; Oticon Medical®, Vallauris, France). Two measurements were obtained: 1) eABRs elicited by stimulation of a single electrode at 70% of the dynamic range (four electrodes distributed within the cochlea were tested), which were then summed; and 2) a single eABR elicited by stimulation from all four electrodes together at 70% of the dynamic range.
A comparison of the eABRs obtained by these two methods indicated electrical and neural interactions between the stimulation channels. Significant correlations were found between speech recognition performance and the ratio of the wave V amplitudes of the eABRs obtained with the two methods (Pearson's linear regression model, parametric correlation: r2=0.33, p<0.05; n=16; non-linear regression model: r2=0.47, p=0.005).
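Schematically, the interaction measure and the linear part of the prognostic model can be sketched as follows; the arrays are placeholders, not study data, and the variable names are assumptions.

import numpy as np
from scipy import stats

v_single = np.random.rand(16, 4)         # placeholder wave V amplitudes, one per electrode
v_simult = np.random.rand(16)            # placeholder wave V, four electrodes together
vcv_score = np.random.rand(16)           # placeholder logatome scores

ratio = v_simult / v_single.sum(axis=1)  # values below 1 suggest channel interaction
res = stats.linregress(ratio, vcv_score) # the linear (Pearson) part of the model
print(f"r^2 = {res.rvalue**2:.2f}, p = {res.pvalue:.3f}")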
This prognostic model allowed nearly half of the interindividual variance in speech
recognition scores to be explained. The present study used measurements of electrical and
neuronal interactions by eABR to assess patients’ bio-electric capacity to use multiple
information channels supplied by the implant. This type of prognostic information is valuable in
several ways. On the patient level, it allows for customization of individual treatments. More
generally, it may also improve the distribution of health resources by allowing individual needs to
be addressed more effectively.
Acknowledgements: The authors would like to thank the Nice University Hospital for financial
support to this study.
T7: ASSESSING TEMPORAL MODULATION SENSITIVITY USING
ELECTRICALLY EVOKED AUDITORY STEADY STATE RESPONSES
Robert Luke, Lot Van Deun, Michael Hofmann, Astrid van Wieringen, Jan Wouters
KU Leuven, Department of Neurosciences, ExpORL, Leuven, BEL
Temporal cues are important for cochlear implant (CI) users when listening to speech.
Users with greater sensitivity to temporal modulations show better speech recognition and
modifications to stimulation parameters based on modulation sensitivity have resulted in
improved speech understanding. Unfortunately behavioural measures of temporal sensitivity
require cooperative participants and a large amount of time.
EASSRs are neural responses to periodic electrical stimulation that have been used to
predict threshold (T) levels. In this study we evaluate the use of EASSRs as an objective tool for
assessing temporal modulation sensitivity.
Modulation sensitivity was assessed behaviourally using modulation detection thresholds (MDTs) at a 20 Hz rate. At the same stimulation sites, EASSRs were measured using sinusoidally amplitude-modulated pulse trains at 4 and 40 Hz.
Measurements were taken using a bipolar configuration on 12 electrode pairs over 5
participants. Results showed that EASSR amplitudes and signal-to-noise ratios (SNRs) were
significantly related to the MDTs. Larger EASSRs corresponded with sites of improved
modulation sensitivity. This result indicates that EASSRs may be used as an objective measure
of site-specific temporal sensitivity for CI users.
The work leading to this deliverable and the results described therein have received funding from the People Programme (Marie Curie Actions) of the European Union's Seventh Framework Programme FP7/2007-2013 under REA grant agreement no. PITN-GA-2012-317521. This research was supported by IWT-Vlaanderen project 110722.
T8: SPEECH PERCEPTION AND ELECTRICALLY EVOKED AUDITORY
STEADY STATE RESPONSES
Robert Luke, Robin Gransier, Astrid van Wieringen, Jan Wouters
KU Leuven, Department of Neurosciences, ExpORL, Leuven, BEL
Cochlear implant (CI) users show good speech understanding in quiet conditions. But with increasing background noise, speech understanding decreases and variability across users increases. One proposed cause of this poor performance is variability in the electrode-neuron interface (ENI). It is hypothesised that variation in the quality of the ENI along the CI array creates unwanted variation in how CI users perceive input signals.
Increased variation in perceptual measures along the array has been related to poor speech understanding. Using tripolar stimulation, Bierer [2007, J. Acoust. Soc. Am. 121, 1642-1653] found that greater channel-to-channel variability in thresholds was related to poorer speech performance. Zhou & Pfingst [2014, Ear Hear. 35, 30-40] improved the modulation detection thresholds (MDTs) of the 5 worst channels along the CI array by increasing the T-levels on these channels. This resulted in a decrease in MDT variation and an increase in speech-in-noise performance.
Electrically evoked auditory steady-state responses (EASSRs) are neural responses to periodic electrical stimulation. EASSRs recorded at supra-threshold levels are related to MDTs: sites with larger EASSRs correspond to sites with better MDTs [Luke et al (2015), Hear Res, 324, 37-45]. In this study EASSRs are used as an objective measure to study modulation sensitivity along the CI array.
Three questions are addressed in this study. First, how do EASSRs vary along the CI array? Second, is there a relation between the variability in EASSRs across the array and speech perception in noise? Finally, can speech perception be improved by adjusting the stimulation parameters of poorly performing EASSR channels, in a similar fashion to Zhou & Pfingst (2014)?
Results will be presented for EASSRs measured in monopolar mode at 500 pulses per
second on all electrodes. Both sentence and phoneme perception in noise will be reported.
Speech perception scores will be reported for both an unaltered MAP and with the thresholds
increased on poor performing EASSR channels. Initial results show significant variation in
responses both across the CI array, and between adjacent electrodes.
The work leading to this deliverable and the results described therein have received funding from the People Programme (Marie Curie Actions) of the European Union's Seventh Framework Programme FP7/2007-2013 under REA grant agreement no. PITN-GA-2012-317521. This research was supported by IWT-Vlaanderen project 110722. Support was also provided by a Ph.D. grant to the second author by the Agency for Innovation by Science and Technology (IWT, 141243).
T9: POSTER WITHDRAWN
T10: RESTING STATE CONNECTIVITY IN LANGUAGE
AREAS: AN fNIRS STUDY OF NORMALLY-HEARING
LISTENERS AND COCHLEAR IMPLANT USERS
Adnan Shah1,2, A. K. Seghouane2, Colette M. McKay1,3
1 The Bionics Institute of Australia
2 The University of Melbourne, Department of Electrical and Electronic Engineering, Australia
3 The University of Melbourne, Department of Medical Bionics, Australia
Functional changes to the brain induced by deafness have been demonstrated by neuroimaging studies (e.g. PET, fMRI and EEG). However, little is known about the functional reorganization of the brain, especially in areas responsible for processing language and speech, after activation with cochlear implants (CIs). Functional near-infrared spectroscopy (fNIRS) is a neuroimaging tool that is likely to be suitable for CI users. The aim of this study is to investigate the functional changes induced in the neural activation patterns of CI users and to determine how well these changes correlate with speech understanding in the same individuals. The purpose of the study is to investigate neuronal coupling, or functional connectivity, within and between hemispheres. It is hypothesized that differences between normal-hearing listeners and CI users, as well as differences within the CI population, reflect changes in the default mode network.
Functional data for hemodynamic neuroactivation were acquired during rest using a multichannel NIRScout fNIRS imaging system. The protocol was 5-8 minutes of rest with eyes closed in an awake state. For the experimental montage, we used a 4 x 4 optode array on each hemisphere covering the language areas of the brain. Data analysis schemes were developed in-house for pre-processing the data and evaluating inference of the functional connectivity maps.
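Since the in-house pipeline is not described in detail, the following generic sketch shows only the standard correlation-based connectivity map that such pipelines produce; band-pass filtering and motion correction are assumed to have been applied beforehand, and all names are illustrative.

import numpy as np

def connectivity_map(hbo):
    """hbo: (n_channels, n_samples) oxyhaemoglobin time series from the two
    4 x 4 optode arrays. Returns the channel-by-channel Pearson correlation
    matrix; the off-diagonal block between hemispheres carries the
    inter-hemispheric links discussed in the results."""
    return np.corrcoef(hbo)

conn = connectivity_map(np.random.randn(32, 3000))   # placeholder data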
To date we have collected and analysed data from 5 implant users and 5 normally-hearing listeners during the resting state. Using healthy normal-hearing subjects as controls, for whom strong inter-hemispheric connectivity links were detected, we found differences in the individual connectivity patterns among CI users and between the two groups. Further recruitment of 10 more participants per group is under way to better understand the group differences and their relation to speech understanding ability.
This research was supported by the Lions Foundation, the Melbourne Neuroscience Institute, an Australian Research Council grant (FT130101394) to AKS, and a veski fellowship to CMM. The Bionics Institute acknowledges the support it receives from the Victorian Government through its Operational Infrastructure Support Program.
T11: COCHLEAR IMPLANT ARTIFACT REMOVAL METHODS TO MEASURE
ELECTRICALLY EVOKED AUDITORY STEADY-STATE RESPONSES
Hanne Deprez1,2, Robin Gransier1, Michael Hofmann1, Tom Francart1, Astrid van
Wieringen1, Marc Moonen1,2, Jan Wouters1
1 ExpORL, Dept. Neurosciences, KU Leuven, Leuven, BEL
2 STADIUS, Dept. of Electrical Engineering (ESAT), Leuven, BEL
In cochlear implant (CI) subjects, electrically evoked auditory steady-state responses (EASSRs) can be detected in the EEG in response to stimulation with periodic or modulated pulse trains. EASSRs are currently being investigated for objective CI fitting. However, the EEG is obscured by electrical CI artifacts, which are also present at the response frequency. The CI artifact is caused by 1) the radio-frequency link between the external speech processor and the internal implant (RF-art), and 2) the CI stimulation pulses themselves (STIM-art). CI artifacts can be removed by blanking, which applies a linear interpolation over the duration of the CI artifact. This method only works if the CI artifact duration is shorter than the interpulse interval, which is the case for low-rate (<<500 pulses per second, pps) pulse trains or stimulation in bipolar mode. In monopolar mode, the CI artifacts are larger in amplitude and longer in duration than in bipolar mode, so that blanking cannot always prevent CI artifacts from being mistaken for neural responses, which leads to unreliable threshold level estimations.
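A minimal sketch of blanking as defined above, assuming the pulse onsets are known in EEG samples; real pipelines operate on pulse-locked epochs, and all names here are illustrative.

import numpy as np

def blank(eeg, pulse_samples, blank_len):
    """Replace blank_len samples after each pulse onset with a linear
    interpolation between the surrounding clean samples."""
    out = eeg.copy()
    for p in pulse_samples:
        a, b = p - 1, p + blank_len        # last clean sample before / first after
        if a < 0 or b >= len(out):
            continue
        out[p:b] = np.linspace(out[a], out[b], blank_len + 2)[1:-1]
    return out

# Example: 500 pps recorded at an assumed 8 kHz EEG rate gives a pulse every
# 16 samples; the 0.8 ms mean artifact duration reported below spans 6-7 samples.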
In order to characterize the CI artifact, EASSRs were recorded in eight subjects implanted with a Nucleus® device using 500 pps pulse trains presented in monopolar MP1+2 mode. CI artifact amplitude growth functions (AGFs) were used to differentiate between the two sources of CI artifact. The RF-art does not scale with increasing stimulus intensity; therefore, the intercept of the AGF with the Y-axis quantified the RF-art, while the slope of the AGF indicated the STIM-art contribution. Furthermore, the CI artifact duration was characterized by applying increasing blanking lengths. With increasing blanking length, the CI artifact contribution at the response frequency is reduced, while the underlying EASSR remains unchanged; the blanking length at which the amplitude at the response frequency saturates is defined as the CI artifact duration. For all subjects, the RF-art was localized in the EEG signals recorded around the CI. The STIM-art spread across the entire scalp, being most severe close to the CI. In contralateral electrode signals, the mean CI artifact duration was 0.8 ms, with an interquartile range of 0.4 ms. Consequently, in all subjects and in all contralateral electrode signals, blanking could remove the CI artifacts when the interpulse interval was 2 ms (i.e. for 500 pps pulse trains). At higher pulse rates, more advanced CI artifact rejection methods were necessary.
An alternative, fully automated CI artifact rejection method, based on Independent Component Analysis (ICA), was developed for the cases in which blanking could not remove the entire CI artifact. EASSR growth functions were measured in five subjects with 900 pps pulse trains presented in monopolar MP1+2 mode. Electrophysiological threshold levels were determined from these growth functions, based on one- and two-sample Hotelling's T-squared tests. At 900 pps, after ICA-based CI artifact rejection, the CI artifact was removed from the EEG signals recorded in both the contralateral and the ipsilateral hemisphere. Furthermore, electrophysiological threshold levels could be successfully estimated within 20% of the subjects' dynamic range.
In summary, blanking (for low-rate stimulation [0-500 pps]) and ICA-based CI artifact rejection (for high-rate stimulation [500-900 pps]) can be used for CI artifact removal from the EEG, opening the way to using EASSRs for objective CI fitting with clinical stimuli.
Acknowledgements: Research was funded by the Research Foundation Flanders (G.0662.13) and a
Ph.D. grant to the second author by the Agency for innovation by Science and Technology (IWT,
141243).
T12: ON THE RELATIONSHIP OF SPEECH INTELLIGIBILITY AND VERBAL
INTELLIGENCE IN COCHLEAR IMPLANT USERS - INSIGHTS FROM
OBJECTIVE MEASURES
Mareike Finke1, Andreas Buechner1, Esther Ruigendijk2, Martin Meyer3, Pascale
Sandmann1
1 Hannover Medical School, Hannover, DEU
2 Department of Dutch, University of Oldenburg, Oldenburg, DEU
3 Psychological Institute, University of Zurich, Zurich, CHE
The (re-)acquisition of speech intelligibility is considered a desirable result after cochlear implantation. However, the successful adaptation of auditory cognition to the cochlear implant (CI) input seems to depend to a substantial degree on individual factors. The aim of the present study was to investigate how verbal intelligence relates to the processing and understanding of speech in CI users. We recorded the CI users' electroencephalogram (EEG) and analyzed the event-related potentials (ERPs) to gain insights into the neuronal processing of spoken words in these individuals. In contrast to standard speech tests, the high temporal resolution of the EEG allows us to study the different processing stages of speech understanding, from initial sensory processing to higher-level cognitive classification of words.
Prior to the EEG recording we assessed the verbal intelligence of 15 CI users and 15
age-matched normal hearing (NH) controls. In particular, we tested the lexical fluency, the
verbal working memory capacity as well as the word recognition ability. Importantly, none of the
three tests included auditory material.
Subsequently, we recorded EEG while the participants completed an auditory oddball
task. Participants were asked to press a button every time they heard a target word (p = 0.2)
intermixed with frequent standard words (p = 0.8). Specifically, participants indicated whether
the word they heard described a living or a non-living entity. Hereby, we could include not only
two words, but several target and standard words. The words were presented either in quiet or
in stationary or modulated background noise (10 dB signal-to-noise ratio) which allowed us to
study the effect of task difficulty on behavioral and ERP responses. The two groups did not differ in age or years of education. However, CI users showed poorer performance in the word recognition test and, by trend, smaller working memory capacity when compared with NH controls. Regarding the oddball paradigm, CI users responded overall more slowly and less accurately to the target words than NH listeners. These behavioral findings were confirmed by the ERP
results, showing longer ERP latencies not only at initial cortical (N1) but also at higher-level
processing stages (N2, P3 component). Moreover, we observed an effect of background (quiet,
stationary noise, modulated noise) on ERP responses for both groups at N1 and N2 latency,
suggesting that the processing time is prolonged with increasing task demand in both the CI
users and the NH listeners. Finally, we found significant correlations between the speech
intelligibility as measured by clinical speech tests, the behavioral responses in the oddball task
and the verbal intelligence tests. In sum, our results suggest a complex - but so far not
investigated - relationship between word recognition ability, cognitive/linguistic competence and
neural (post)perceptual processing of spoken words.
T13: THE MODULATION FREQUENCY TRANSFER FUNCTION OF
ELECTRICALLY EVOKED AUDITORY STEADY-STATE RESPONSES
Robin Gransier1, Hanne Deprez1,2, Michael Hofmann1, Tom Francart1, Marc Moonen2,
Astrid van Wieringen1, Jan Wouters1
1 ExpORL, Dept. of Neurosciences, KU Leuven, Leuven, BEL
2 STADIUS, Dept. of Electrical Engineering, KU Leuven, Leuven, BEL
Minimum (T-level) and maximum (C-level) electrical stimulation levels vary across stimulation electrodes and cochlear implant (CI) recipients. The determination of these subject- and electrode-dependent stimulation levels can be challenging and time-consuming, especially in recipients who cannot give reliable behavioral feedback. An objective method to determine the electrical stimulation levels would be of great value. Because T-levels are pulse-rate dependent, it is a prerequisite for an objective method to use the same pulse rates as are used in clinical devices [500-1200 pulses per second (pps)]. Electrically evoked auditory steady-state responses (EASSRs) could potentially be applied to objectively determine the high-pulse-rate T-levels of CI recipients. EASSRs are periodic neural responses that can be elicited by modulated high-rate pulse trains. In order to make EASSRs clinically applicable, insight is needed into which modulation frequencies evoke the largest responses.
We measured the modulation frequency transfer function (MFTF) in five awake adult CI recipients. All subjects had a Nucleus® device. Monopolar stimulation was used, with both the external ball electrode and the casing of the stimulator as ground electrodes; intracochlear electrode 11 was the active electrode. A 500 pps pulse train was modulated with 43 different modulation frequencies to measure the MFTF from 1 to 100 Hz. This range was chosen to allow comparison with the ASSR MFTF of normal-hearing adults, which has a clear peak in the 40-50 Hz range. EASSRs were measured with a 64-channel EEG setup. The blanking method [Hofmann and Wouters, 2012, JARO, 13(4), 573-589] was used to remove the stimulation artifacts, resulting from the electrical stimulation pulses, from the EEG recordings. Hotelling's T-squared test was used to test whether the response at the modulation frequency differed significantly from the background activity. For Nucleus® devices, EASSRs can be measured free from stimulation artifacts in bipolar mode (stimulation between two intracochlear electrodes). In monopolar mode, however, stimulation artifacts are larger in amplitude and longer in duration, and are difficult to remove from the EEG recording. The phase delay was used to assess the presence of stimulation artifacts after blanking: the absolute phase delay of the EASSR increases with increasing modulation frequency, whereas the phase delay of the stimulation artifact remains constant.
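A minimal sketch of a one-sample Hotelling's T-squared detection of this kind, treating the complex spectral amplitude of each EEG epoch at the modulation frequency as a 2-D (real, imaginary) observation tested against zero mean; the epoch layout and all names are illustrative assumptions.

import numpy as np
from scipy.stats import f as f_dist

def eassr_detected(epochs, fs, f_mod, alpha=0.05):
    """epochs: (n_epochs, n_samples) EEG after artifact removal."""
    n, n_samp = epochs.shape
    k = int(round(f_mod * n_samp / fs))           # FFT bin of the modulation frequency
    z = np.fft.rfft(epochs, axis=1)[:, k]         # complex amplitude per epoch
    x = np.column_stack([z.real, z.imag])         # (n, 2) observations
    m = x.mean(axis=0)
    t2 = n * m @ np.linalg.solve(np.cov(x, rowvar=False), m)  # Hotelling's T-squared
    f_stat = (n - 2) / (2 * (n - 1)) * t2         # convert to F with p = 2 dimensions
    return 1 - f_dist.cdf(f_stat, 2, n - 2) < alpha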
Results show that EASSRs can be measured free from stimulation artifacts at recording electrodes located in the hemisphere contralateral to the implant. For these contralateral electrodes, the absolute phase delay of the significant EASSRs increased with increasing modulation frequency. Recording electrodes positioned in the ipsilateral hemisphere showed significant responses at all modulation frequencies, but with a constant phase delay, indicating an artifact-dominated response. The modulation frequencies that evoked the largest responses were within the 30-50 Hz range, in line with the MFTF obtained from normal-hearing adults reported in the literature.
Acknowledgements: Research was funded by the Research Foundation Flanders (G.0662.13) and a
Ph.D. grant to the first author by the Agency for Innovation by Science and Technology (IWT, 141243).
T14: POSTER WITHDRAWN
T15: ACOUSTIC CUE WEIGHTING BY ADULTS WITH COCHLEAR IMPLANTS:
A MISMATCH NEGATIVITY STUDY
Aaron C Moberly1, Jyoti Bhat1, Antoine J Shahin2
1 The Ohio State University Department of Otolaryngology, Columbus, OH, USA
2 University of California Davis Center for Mind and Brain, Davis, CA, USA
Normal-hearing (NH) native English speakers weight formant rise time (FRT) more than amplitude rise time (ART) during perceptual labeling of the ba-wa contrast. This weighting strategy is reflected neurophysiologically in the magnitude of the mismatch negativity (MMN): the MMN is larger during the FRT than the ART distinction. The present study examined the neurophysiological basis of acoustic cue weighting in adult cochlear implant (CI) listeners using the MMN design. It was hypothesized that individuals with CIs who weight ART more in behavioral labeling (ART-users) would show larger MMNs during the ART than the FRT contrast, and the opposite would be seen for FRT-users.
Electroencephalography (EEG) was recorded while twenty adults with CIs listened
passively to combinations of three synthetic speech stimuli: a /ba/ with /ba/-like FRT and ART; a
/wa/ with /wa/-like FRT and ART; and a /ba/wa stimulus with /ba/-like FRT and /wa/-like ART.
The MMN response was elicited during the FRT contrast by having participants passively listen
to a train of /wa/ stimuli interrupted occasionally by /ba/wa stimuli, and vice versa. For the ART
contrast, the same procedure was implemented using the /ba/ and /ba/wa stimuli.
Results showed that both ART- and FRT-users with CIs exhibited MMNs of equal magnitude during FRT and ART contrasts, with the exception that FRT-users exhibited MMNs for ART and FRT contrasts that were temporally segregated. That is, their MMN occurred earlier during the ART contrast (~100 ms following sound onset) than during the FRT contrast (~200 ms). In contrast, for ART-users the MMN for both contrasts occurred later and at about the same time (~300 ms). Interestingly, the temporal segregation observed in FRT-users is consistent with the MMN behavior of NH listeners.
It can be concluded that listeners with CIs who learn to classify phonemes based on formant dynamics develop a strategy similar to that of NH listeners, in which the amplitude and spectral representations of phonemes in auditory memory are temporally segregated.
T16: P300 IN BIMODAL CI-USERS
Lindsey Van Yper1, Andy Beynon2, Katrien Vermeire4, Eddy De Vel1, Ingeborg Dhooge1
1 Dept. of Otorhinolaryngology, Ghent University, Ghent, BEL
2 Radboud University Medical Centre, Nijmegen, NLD
3 Apelian Cochlear Implant Center, New York, USA
4 Long Island Jewish Medical Center, New Hyde Park, NY
Introduction. Bimodal listeners combine a cochlear implant (CI) with a contralateral
hearing aid (HA). Psychoacoustic research shows that bimodal listening can improve speech
perception in noise, sound localization, and music appreciation. Nevertheless, a substantial
proportion of bimodal listeners cease to wear the HA shortly after implantation. To date, there is no consensus on how to determine whether bimodal listening is preferable for a given individual or population. This study examines whether endogenous auditory evoked cortical responses can
be used to assess bimodal benefit.
Methods. Six experienced CI-users were included in the study. Three used a HA in daily
life, whereas the others did not wear a HA. All subjects were implanted with either the Nucleus
CI24RE(CA) or CI422 and had low-frequency residual hearing in the non-implanted ear. The
cognitive P300 response was elicited using an oddball paradigm with a 500 Hz tone-burst as the
standard and a 250 Hz tone-burst as the deviant stimulus. P300s were recorded in the CI-only
(i.e. with the contralateral ear plugged) and the bimodal condition (i.e. CI and HA or CI and
residual hearing, depending on the subject’s daily use).
Results. Overall, P300 morphology was clearer in the bimodal compared to the CI-only
condition. Amplitudes in the bimodal and CI-only conditions were 17.39 µV and 11.63 µV, respectively. Latencies were 262 ms (SD 30.0 ms) in the bimodal and 281 ms (SD 49.2 ms) in the
CI-only condition. Interestingly, the trend of shorter latencies for the bimodal compared to the
CI-only condition was only observed in the subjects who wear a HA in daily life.
Conclusion. Preliminary data reveal that the bimodal condition elicited better P300 responses
than the CI-only condition, especially in subjects who wear a hearing aid in daily life.
T17: ACOUSTIC CHANGE COMPLEX RECORDED IN HYBRID COCHLEAR
IMPLANT USERS
Eun Kyung Jeon, Brittany E James, Bruna Mussoi, Carolyn J. Brown, Paul J. Abbas
University of Iowa, Iowa City, IA, USA
Until recently, people with significant low-frequency residual hearing were not considered
to be CI candidates. The excellent performance of conventional and Hybrid CI users has earned the Hybrid CI system attention in recent years and FDA approval in 2014. It is likely that the Hybrid CI system will be used in a pediatric population in the near future.
This study compares obligatory cortical auditory evoked potentials, particularly the
acoustic change complex (ACC), recorded using different listening modes: Acoustic-alone vs.
Acoustic plus Electric (A+E) from Hybrid CI users. Our question is whether the ACC can be
used to document the benefit provided by electrical stimulation for listeners with low-frequency
acoustic hearing.
A total of 8 Nucleus Hybrid CI users have participated so far. Various 800-msec duration
complex stimuli were created by concatenating two 400-msec duration segments differing in the type of acoustic change. Both the P1-N1-P2 and the ACC were recorded at the onset and the change of the stimulus, respectively. Stimulus pairs tested differed in pitch (C4-D#4), timbre (oboe-clarinet), or formant frequency (/u/-/i/). Two spectral ripple noise stimuli were also tested: one with a change in phase using 1 ripple/octave and another with a change in modulation depth
(40 dB) between segments. These stimuli were presented to subjects once using the acoustic
component only and once using the acoustic and electric components. Six channels were used
to record neural responses, and one channel was used to monitor eye blinks.
Results show that both the onset P1-N1-P2 and the ACC were obtained from all Hybrid
CI listeners; however, their presence and amplitude varied across stimuli and listening modes
(A+E vs. A-alone condition). Amplitudes of both the onset response and the ACC increased in the A+E listening mode compared to the A-alone mode. When a stimulus changed from low to high frequencies (e.g., C4-D#4, /u/-/i/), the difference in ACC amplitude between the A+E and A-alone listening modes was larger. Individual variation in CAEP recordings and the relationship with speech perception in different listening modes will also be discussed.
T18: COCHLEAR IMPLANT ELECTRODE VARIABLES PREDICT CLINICAL
OUTCOME MEASURES
Timothy J Davis, Rene H Gifford, Benoit M Dawant, Robert F Labadie, Jack H Noble
Vanderbilt University, Nashville, TN, USA
Considerable attention has been directed recently at the role of electrode array variables that
may predict individual clinical outcomes. Electrode-to-modiolus distance, scalar crossing, and
insertion angle are a few examples of electrode variables that have been evaluated and reported in
the literature. The current study investigated these three variables in a large sample of adults
implanted with all three FDA-approved manufacturers' devices, including both standard and pre-curved electrode array designs. Electrode variables were used to predict a range of objective clinical
outcome measures including Consonant-Nucleus-Consonant (CNC) words, AzBio sentences in
quiet and background noise [+5 dB and +10 dB signal-to-noise ratios (SNR)], spectral modulation
detection (SMD) at two modulation rates, as well as a subjective outcome measure of
communication [Abbreviated Profile of Hearing Aid Benefit (APHAB)]. Data collection is ongoing and
work is currently being done to include additional demographic variables for multivariate analysis
such as age at implantation, duration of deafness, and cognition. At present, a total of 157 implanted
adult ears have been analyzed from all three manufacturers (Cochlear = 91, Advanced Bionics = 27,
MED-EL = 39) and both electrode array types (pre-curved = 87).
A one-way analysis of variance showed a significant effect of electrode arrays fully positioned
within scala tympani (ST) for AzBio sentences in noise at +5 dB SNR (F(1,89) = 4.46, p = 0.037) and
+10 dB SNR (F(1,126) = 4.31, p = 0.04). Electrode array type was only weakly predictive of APHAB
(r = -.179, p = 0.029), with straight arrays resulting in poorer mean APHAB scores. There was a
significant yet weak correlation between insertion angle and CNC words (r = 0.182, p = 0.022) as
well as AzBio at +10 dB SNR (r = 0.221, p = 0.012). Mean electrode-to-modiolus distance was
negatively correlated with SMD at 1.0 cycles/octave (r = -0.166, p = 0.044) and AzBio at +5 dB SNR
(r = -0.231, p = 0.028), and positively correlated with poorer APHAB scores (r = .175, p = 0.033).
Because we did not detect significantly different speech recognition scores based on electrode type,
these findings do not seem to suggest that pre-curved arrays lead to better speech recognition
outcomes but rather that poorer outcomes occur when arrays of either type are placed further from
the modiolus.
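To make the style of analysis above concrete, here is a minimal sketch of a one-way ANOVA and a Pearson correlation computed with SciPy; all variable names, group assignments, and values are hypothetical stand-ins, not the study's data.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    # Hypothetical per-ear data standing in for the measures above.
    azbio_5db = rng.normal(60, 15, size=91)         # AzBio at +5 dB SNR (% correct)
    fully_in_st = rng.random(91) < 0.6              # array fully within scala tympani?
    insertion_angle = rng.normal(400, 60, size=91)  # degrees
    cnc = rng.normal(55, 18, size=91)               # CNC word score (% correct)

    # One-way ANOVA: AzBio scores for fully-in-ST vs. not-fully-in-ST ears.
    f_stat, p_anova = stats.f_oneway(azbio_5db[fully_in_st], azbio_5db[~fully_in_st])

    # Pearson correlation: insertion angle vs. CNC words.
    r, p_corr = stats.pearsonr(insertion_angle, cnc)
    print(f"ANOVA: F = {f_stat:.2f}, p = {p_anova:.3f}; r = {r:.3f}, p = {p_corr:.3f}")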
Not surprisingly, several electrode variables were found to be correlated with each other.
Electrode array type was highly correlated with mean electrode distance (r = -.778, p < 0.001) and
electrode arrays fully within ST (r = 0.418, p < 0.001). This finding confirms previous reports that
straight electrode arrays are more likely to remain fully within the scala tympani than pre-curved
arrays. Past research has shown that a greater proportion of electrodes within ST are associated
with higher CNC scores (e.g., Finley et al., 2008). In the present study, however, this relationship
was not confirmed. In contrast, we found a significant relationship between arrays fully within ST and
AzBio sentence recognition in noise.
Overall, electrode type and the metrics of electrode position within the cochlea that were
tested here were not found to strongly predict individual performance. However, our findings are
significant as they confirm that there is indeed a link between clinical outcomes and electrode type
and position. This suggests that further study and analysis of yet-to-be-tested electrode placement
descriptors may reveal the true factors driving placement-related variance in CI performance and
could have implications for future hardware design and surgical techniques.
This work was supported in part by grant R01DC014037 from the NIDCD.
T19: MYOGENIC RESPONSES FROM THE VESTIBULAR SYSTEM CAN BE
EVOKED USING ELECTRICAL STIMULATION FROM A COCHLEAR IMPLANT
Joshua J. Gnanasegaram1,2, William J. Parkes1,3, Sharon L. Cushing1,3, Carmen L. McKnight1, Blake C. Papsin1,2,3, Karen A. Gordon1,2
1 Archie’s Cochlear Implant Laboratory, The Hospital for Sick Children, Toronto, Canada
2 The Institute of Medical Science, University of Toronto, Ontario, Canada
3 Dept of Otolaryngology Head and Neck Surgery, The Hospital for Sick Children, University of Toronto, Ontario, Canada
Cochlear implantation is a standard treatment option for patients with severe to profound
hearing loss. An occasional complication, however, is the spread of electrical current from the
implant to the facial nerve, which lies in close proximity to the cochlea and thus the intracochlear
electrode array. As the vestibular end organs and nerves are also situated nearby, current from
the implant likely impacts this system as well. Cervical (c) and ocular (o) vestibular evoked
myogenic potentials (VEMPs) are used clinically to assess the functionality of the otoliths and
the nerves that innervate them. The aim of the present study was to determine if these
vestibular potentials could be evoked by electrical stimulation via a cochlear implant. The
presence of VEMPs in response to electrical stimulation would provide evidence of current
spread from the implant to the vestibular system.
Twenty-six participants (mean age 14 years) who were unilaterally (n=3) or bilaterally
(n=23) implanted were recruited for testing. Electromyographic responses were evoked using an
acoustic tone burst stimulus (4 millisecond, 500Hz tone, presented at 5.1 Hz via insert
earphones to individual ears). Testing was repeated with comparable cochlear implant stimuli
(4ms, 900Hz pulse trains delivered at 5.1 Hz from electrodes at the apical and basal ends of the
implant array, at a maximally tolerable intensity). Using surface electrodes over the ipsilateral
sternocleidomastoid muscle and the contralateral inferior oblique muscle, electromyograms
were amplified and recorded with a 1-3000 Hz bandpass filter. VEMP latency and amplitude
were measured offline.
Of the 26 participants, 18 (69%) showed at least one vestibular potential in response to
acoustic stimulation; 18 (69%) had an electrically evoked vestibular response. A cVEMP was
present in 28 of the 49 tested ears (57%) in response to acoustic stimulation, and in 19 ears
(39%) in response to electrical stimulation. An oVEMP was elicited acoustically in 16 ears
(33%), and elicited electrically in 16 (33%) ears. Electrically evoked vestibular potentials
demonstrated shorter latencies than acoustically evoked potentials. The first peak of the cVEMP
biphasic response (P13) had a latency of 12.0±1.1ms when evoked electrically, and 15.2±1.6ms
when acoustically elicited. Similarly, the second peak of the response (N23) was seen at
19.1±2.0ms for electric stimulation, and 22.5±2.1ms for acoustic stimulation. The N10 peak of
the oVEMP also showed a decreased latency when stimulated electrically (6.9±2.2ms) in
comparison to acoustically (9.4±1.5ms).
These findings demonstrate that while VEMPs can still be evoked by the standard
acoustic method after cochlear implantation, the spread of current from the device to the
vestibular system allows for the elicitation of these responses with electrical stimulation as well.
Whereas acoustically evoked responses are dependent upon the mechanical stimulation of
vestibular end organs via a travelling fluid wave, the shorter latencies of the electrically evoked
responses suggest a more direct path of neural stimulation.
T20: INTRACOCHLEAR ACOUSTIC RECEIVER FOR TOTALLY
IMPLANTABLE COCHLEAR IMPLANTS: CONCEPT AND PRELIMINARY
TEMPORAL BONE RESULTS
Flurin Pfiffner1, Lukas Prochazka2, Dominik Peus1, Konrad Thoele1, Francesca Paris3, Joris Walraevens3, Rahel Gerig1, Jae Hoon Sim, Ivo Dobrev1, Dominik Obrist4, Christof Roosli1, Alexander Huber1
1 Dept. Otorhinolaryngology, Head and Neck Surgery, University Hospital, Zurich, CHE
2 Institute of Fluid Dynamics, Swiss Federal Institute of Technology (ETHZ), Zurich, CHE
3 Cochlear Technology Centre, Mechelen, BEL
4 ARTORG Center, University of Bern, Bern, CHE
Introduction: A cochlear implant (CI) provides electrical stimulation directly to the nerves
within the cochlea and is used to treat patients with severe to profound hearing loss. There are
substantial unsatisfied needs that cannot be addressed with the currently available partially
implantable CI systems. A totally implantable CI system could deliver significant safety benefits and improved quality of life to recipients. An implanted acoustic receiver (IAR) in the inner ear
that replaces the external microphone is a crucial step towards a totally implantable CI.
Goals: The overall goal of this project is to develop and validate an IAR that could be
included in future totally implantable CI systems.
Methods: 1) In a first step, different candidate sensor technologies for an IAR were evaluated on the basis of theoretical modeling as well as experimental testing. Requirements include anatomical cochlea size restrictions, biocompatibility, sensitivity, power consumption, and the operating environment (fluid).
2) A prototype IAR has been assembled and used to measure the inner ear pressure in
prepared human and sheep temporal bones. Acoustic stimulation was applied to a sealed ear
canal and recorded as a reference signal near the tympanic membrane. The inner ear pressure
measurement position was controlled by a micromanipulator system and verified with a
subsequent CT-scan and 3D reconstruction of the temporal bone.
Results: 1) The sensor technology evaluation showed that an IAR based on a MEMS condenser microphone is a promising technology for dynamic pressure measurements in the inner ear. A prototype sensor with a single diaphragm has been assembled.
2) Results in human and sheep temporal bones confirm that the inner ear pressure can
be measured with this prototype IAR. Results show that the inner ear pressure varied
significantly depending on sensor insertion depth and access location to the cochlea.
Measurement repeatability has been confirmed, and the pressure results are similar to reference values described in the literature.
Conclusions: Preliminary results confirm that a MEMS condenser microphone is a promising technology for measuring inner ear pressure. To achieve our overall goal, optimization of the insertion site and improvements to the sensor design will be needed to improve the signal-to-noise ratio of the pressure measurements.
T21: ENHANCEMENT OF PITCH-RELATED PHASE-LOCKING RESPONSES
OF THE AUDITORY MIDBRAIN IN COCHLEAR IMPLANTS
Tianhao Li, Fengyi Guo
School of Electrical and Control Engineering, Liaoning Technical University, Huludao, CHN
Cochlear implants (CIs) have successfully restored partial hearing to some deaf persons, but CI users still have trouble communicating with other people in noisy backgrounds and/or enjoying music. Some recent neurophysiology studies showed that pitch-related phase-locking responses in the auditory midbrain are correlated with speech perception performance in noise and with pitch perception. In addition, the lack of fine spectro-temporal structure in acoustic CI simulations degraded phase-locking responses of the inferior colliculus in behaving rabbits. Therefore, enhancing pitch-related phase-locking responses of the auditory midbrain is one potential method to improve CI users' performance in noisy backgrounds. The
present study investigated how to quantify and boost pitch-related phase-locking responses of
the auditory midbrain with auditory peripheral models and two assumptions in the context of CI.
The auditory peripheral model simulated auditory nerves’ responses to a group of harmonic
complexes and acoustic CI simulations of 10 American vowels. The two assumptions were that
the maximum pitch-related phase-locking responses are generated by minimizing the distances
between harmonic complexes with consistent phase relationship and acoustic CI simulations in
the physical space and the neural space, respectively. The results provide implications for optimizing pitch-related phase-locking responses of the auditory midbrain within the current configuration of CI signal processors. Further studies are needed to combine this approach with computational models of electrical stimulation in CIs and with real CI processors.
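The abstract does not say how the authors quantify phase locking; a standard metric for this purpose is vector strength (Goldberg and Brown, 1969), sketched below on synthetic spike times purely for illustration.

    import numpy as np

    def vector_strength(spike_times_s, f0_hz):
        # Mean resultant length of spike phases relative to the F0 period;
        # 1 = perfect phase locking, 0 = no locking.
        phases = 2.0 * np.pi * f0_hz * np.asarray(spike_times_s)
        return np.hypot(np.mean(np.cos(phases)), np.mean(np.sin(phases)))

    # Synthetic spikes near one phase of a 150 Hz fundamental, 0.3 ms jitter.
    rng = np.random.default_rng(1)
    spikes = np.arange(200) / 150.0 + rng.normal(0.0, 0.3e-3, 200)
    print(vector_strength(spikes, 150.0))   # close to 1.0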
T22: UNRAVELING THE OBJECTIVES OF COMPUTATIONS IN THE
PERIPHERAL AUDITORY PATHWAY
Bonny Banerjee, Shamima Najnin, Jayanta Kumar Dutta
University of Memphis, Memphis, TN, USA
Even if the brain circuitry is completely untangled, how the brain works will not be
understood unless an objective involving real-world stimuli can be imparted to the circuit. While
circuit models based on input-output characteristics for peripheral auditory areas exist, the
objectives of computations carried out in the same areas are unknown. For example, despite
detailed models of hair cells and auditory neurons in the inner ear (Meddis, 1986; Hewitt and
Meddis, 1991; Bruce et al., 2003), questions such as, why are the hair cells tonotopically
organized, do they help in discriminating between different stimuli, do they help in explaining
different stimuli, etc. are yet to be answered. The goal of our ongoing project is to unravel the
objectives of computations in the peripheral auditory pathway. Knowledge of the objectives will
be useful in investigating the input to higher auditory areas, determining the functional
consequences of individual hearing impairments, and designing and tuning hearing instruments,
such as cochlear implants, more effectively.
We formulate and experiment with different objective functions involving external real
world stimuli, where each function embodies a hypothesis regarding the goal of computations by
the cochlear hair cells and auditory nerve fibers. A two-layered neural network architecture is
utilized for investigating and comparing multiple encoding and feature learning strategies. The
winner-take-all and sparse coding strategies are considered for encoding, which when
combined with different network topologies (involving bottom-up, lateral and top-down
interactions) lead to different objectives. Three of these objectives are implemented, namely,
clustering, clustering by ignoring outliers and sparse coding, resulting in different kinds of
features. Learning from different representations of audio, such as time-amplitude and time-frequency, is also investigated.
The properties of audio features learned from the three objectives are compared to those
of gammatone filters and also to a well-established circuit model of peripheral auditory
physiology (Zilany et al., 2014). Three classes of audio data are used in the comparison,
namely, speech (male and female), music and natural sounds. Each of the three objectives
manifests strengths and weaknesses with different degrees of conformity with
neurophysiological findings. The equal loudness contour of the learned features peaks at around 1 kHz and dips at around 4 kHz. Also, in our model a higher threshold is required for a neuron with a lower characteristic frequency to fire. These properties conform well with neurophysiological findings. Features learned by clustering are more evenly distributed in the entire frequency range (0.1-10 kHz), with higher density towards the low-frequency end. In contrast, features learned from the other two objectives are very sparsely distributed around 1 kHz, with higher densities at the two ends, which does not conform with neurophysiological findings. This evaluation is discussed in detail, and a more effective objective based on
predictive coding that leads to distributed representation across the processing hierarchy is
suggested.
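As a concrete instance of the winner-take-all encoding paired with the clustering objective, the sketch below runs online competitive learning over random stand-ins for time-frequency frames; the dimensions, learning rate, and data are invented, and this is not the authors' implementation.

    import numpy as np

    rng = np.random.default_rng(0)
    frames = rng.random((5000, 64))    # hypothetical time-frequency frames
    W = rng.random((16, 64))           # 16 learnable features ("units")
    W /= np.linalg.norm(W, axis=1, keepdims=True)

    eta = 0.05
    for x in frames:
        winner = np.argmax(W @ x)              # winner-take-all encoding
        W[winner] += eta * (x - W[winner])     # pull winning feature toward input
        W[winner] /= np.linalg.norm(W[winner])
    # Each row of W now approximates a cluster centroid of the input frames,
    # i.e., the "clustering" objective named above.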
T23: INFERRING HEARING LOSS CHARACTERISTICS FROM
STATISTICALLY LEARNED SPEECH FEATURES
Shamima Najnin, Bonny Banerjee, Lisa Lucks Mendel
University of Memphis, Memphis, TN, USA
In the current state-of-the-art, personalized tuning of a cochlear implant (CI) to optimize
the hearing sensations received is a challenging and time-consuming task, even for highly
trained and experienced audiologists, largely due to four reasons: large number of tunable CI
parameters leading to the curse of dimensionality, paucity of data to reliably estimate a patient’s
hearing characteristics, substantial noise and variability in each patient’s data due to subjective
responses, and wide variation in hearing loss characteristics among patients. A number of semi-automated tuning procedures exist; however, they fail to accurately capture the hearing
characteristics of an individual due to lack of adequate data and the analysis of a patient’s
stimulus-response errors in terms of handcrafted features, such as Jakobson et al.’s (1961)
distinctive features.
We propose an approach to personalize the tuning of a CI, consisting of three steps:
learn features from the speech of the CI user around the clock in an online and unsupervised
manner, compare these features to those learned from the speech of a normal-hearing
population using a set of neuro-physiological metrics to identify hearing deficiencies, and exploit
this information to modify the signal processing in the user’s CI to enhance his audibility of
speech. These steps are to be executed adaptively, allowing enough time for the user to adapt
to the new parameter setting. Our working hypothesis is that the deficiencies in hearing for
people with severe-to-profound hearing loss are reflected in their speech (Ryalls et al., 2003). It
is assumed that our algorithms can reside in the CI device and tune it internally, as proposed by
Krause et al. (2010), thereby having continuous access to the user’s speech. Learning features
from the speech output data around the clock overcomes the paucity of data and noise due to
subjective response.
Each feature learned by our algorithm may be conceived as representing the auditory
pattern encoded in the receptive field of a unique hair cell in the cochlea. In order to facilitate
comparison, we algorithmically define five metrics: distribution of characteristic frequencies
(CFs), equal loudness contour (ELC), tuning curve (TC), and the skewness and Q10 value of a TC, for the statistically learned audio features. Our experiments with 21 subjects with varying degrees
of hearing loss, amplification history, and speech therapy reveal the saliencies in the metrics for
a severely-to-profoundly hearing-impaired subject with respect to the normal hearing population.
Deficiencies in hearing are manifested in these saliencies. Lack of CFs in a particular frequency
range indicates a dead region in the cochlea, as verified by the audiograms. Wide or W-shaped
TCs in a frequency range are indicative of poor discrimination in that range. Also, the ELCs of
these subjects are steeper on the high frequency end than normal, in agreement with
neurophysiological findings. Exploiting this information to modify the signal processing in the
user’s CI to enhance his audibility of speech is left as future research.
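Among the five metrics, the Q10 value has a simple operational definition: the characteristic frequency divided by the tuning curve bandwidth 10 dB above the tip threshold. A minimal sketch on a synthetic V-shaped tuning curve follows; it assumes a single contiguous 10-dB band around the tip and is not the authors' code.

    import numpy as np

    def q10(freqs_hz, thresholds_db):
        tip = np.argmin(thresholds_db)                  # most sensitive point
        cf = freqs_hz[tip]
        within = np.where(thresholds_db <= thresholds_db[tip] + 10.0)[0]
        bandwidth = freqs_hz[within[-1]] - freqs_hz[within[0]]
        return cf / bandwidth

    freqs = np.linspace(500, 4000, 200)
    thr = 20 + 30 * np.abs(np.log2(freqs / 2000.0))     # V-shaped curve, CF = 2 kHz
    print(q10(freqs, thr))                              # sharpness of tuning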
T24: THE RELATIONSHIP BETWEEN INSERTION ANGLES, DEFAULT
FREQUENCY ALLOCATIONS, AND SPIRAL GANGLION PLACE PITCH WITH
COCHLEAR IMPLANTS
David M. Landsberger, Maja Svrakic, J Thomas Roland Jr, Mario A Svirsky
New York University School of Medicine, New York, NY, USA
Introduction: Commercially available cochlear implant systems attempt to deliver
frequency information down to a few hundred Hz, but electrode arrays are not designed to reach
the most apical regions of the cochlea which correspond to these low frequencies. This may
cause a mismatch between the frequencies presented by a cochlear implant electrode array and
the frequencies represented at the corresponding location in a normal hearing cochlea. In the
following study, the mismatch between the frequency presented at a given cochlear angle and
the frequency expected by an acoustic hearing ear at the corresponding angle is examined for
the cochlear implant systems that are most commonly used in the United States.
Methods: The angular location of each of the electrodes of four different types of
electrode arrays (MED-EL Standard, MED-EL Flex28, Advanced Bionics HiFocus 1J, and
Cochlear Contour Advance) was examined in 92 ears. The spiral ganglion frequency was
estimated for the angular location of each electrode on each electrode array. The predicted
spiral ganglion frequency was compared with the center frequency provided by the
corresponding electrode using the manufacturer’s default frequency-to-electrode allocation.
Results: Differences across devices were observed for angular locations corresponding
to frequencies below 650 Hz. Longer electrode arrays (i.e. the MED-EL Standard and Flex28)
demonstrated smaller deviations from the spiral ganglion frequency map than the other
electrode arrays. For insertion angles up to approximately 270°, the frequencies presented at a given location were typically an octave below the spiral ganglion frequency map, while the deviations were larger for angles deeper than 270°. For frequencies above 650 Hz, the
presented-frequency to angle relationship was very similar across all four types of electrode
arrays.
Conclusions: A mismatch was observed between the spiral ganglion frequency map and
default frequency provided by every electrode on all electrode arrays. Differences in this
mismatch between electrode arrays were observed only at low frequencies. The mismatch can
be reduced by changing the default frequency allocations, inserting electrodes deeper into the
cochlea, or allowing cochlear implant users to adapt to the mismatch.
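For readers who want to reproduce the flavor of this comparison, the sketch below contrasts an electrode's allocated center frequency with the cochlear place frequency at its angular position. It substitutes Greenwood's (1990) organ-of-Corti map and a crude linear angle-to-place conversion for the spiral ganglion map used in the study; the angles, allocations, and total-angle constant are hypothetical.

    import numpy as np

    def greenwood_hz(frac_from_apex):
        # Greenwood (1990) place-frequency map for the human cochlea.
        return 165.4 * (10.0 ** (2.1 * frac_from_apex) - 0.88)

    def place_freq_from_angle(angle_deg, total_angle_deg=900.0):
        # Crude assumption: insertion angle proportional to length from the base.
        return greenwood_hz(1.0 - angle_deg / total_angle_deg)

    electrode_angles = np.array([120.0, 240.0, 360.0, 480.0])   # hypothetical
    allocated_hz = np.array([4000.0, 2000.0, 1000.0, 500.0])    # hypothetical defaults
    mismatch_oct = np.log2(place_freq_from_angle(electrode_angles) / allocated_hz)
    print(mismatch_oct)   # positive = allocated frequency below the place frequency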
T25: PERIPHERAL CONTRIBUTIONS TO LOUDNESS: SPREAD OF
EXCITATION
Rachel Anna Scheperle, Michelle Lynne Hughes
Boys Town National Research Hospital, Omaha, NE, USA
Loudness is represented in both temporal and spatial patterns of neural excitation. The
electrically evoked compound action potential (eCAP) is a measure of peripheral neural activity.
Although eCAP amplitude is not a direct assessment of the spatial or temporal distribution of
single-fiber responses, as a population response the amplitude is sensitive to the number and
synchrony of neurons firing. Nevertheless, eCAP amplitude is not a good predictor of loudness,
even for low-rate stimuli for which neural synchrony is high, and factors such as refractoriness
and adaptation are minimized. The purpose of this study was to explore whether a measure of
the spatial extent of the peripheral excitation pattern explains the variable relationship between
eCAP amplitude and loudness perception for low-rate stimuli. We hypothesize that the spatial
excitation pattern will better reflect total neuronal activity than eCAP amplitude, particularly when
contributing neurons are far from the recording electrode. We also tested whether faster rates of
eCAP amplitude and loudness growth are observed for broader excitation patterns, which has
been implied but never directly assessed.
Six recipients of Cochlear devices have participated to date. Two electrode sites and
three electrode configurations were tested for the purpose of inducing controlled within-subject
variability primarily in the spatial domain. Electrodes with the highest (EH) and lowest (EL)
behavioral threshold for focused stimulation were identified. The EH site was tested in
monopolar mode (MP). The EL site was tested using MP and bipolar (BP+3; BP+1 or 2) modes.
Outcome measures included (1) categorical loudness scaling, (2) eCAP amplitude growth
functions and (3) eCAP channel-interaction functions (for probe levels corresponding to “soft”,
“soft-medium”, “medium”, and “medium-loud” ratings).
Preliminary results do not support the hypothesis that the area of the eCAP channel-interaction function is a better predictor of loudness than eCAP amplitude; rather, the two eCAP
measures are correlated. Width of the channel-interaction function was statistically independent
from eCAP amplitude, but did not improve predictions of loudness when combined with amplitude; nor was it related to the steepness of eCAP amplitude growth or loudness growth.
Although the equal activity/equal loudness hypothesis is generally accepted, it is not fully
supported empirically [Relkin & Doucet, 1997; this study]. Electrical field potentials are being
explored to describe the stimulation patterns and provide additional context for interpreting the
eCAP channel-interaction functions. The results will be presented to demonstrate feasibility of
performing such measures with the Custom Sound EP system. The implications of comparing
stimulus patterns to neural excitation patterns are being explored.
Funded by the National Institutes of Health: T32 DC000033, R01 DC009595 and P30
DC04662.
T26: CLINICALLY-USEFUL MEASURES FOR PROCESSOR-FITTING
STRATEGIES
Kara C Schvartz-Leyzac1, Deborah J Colesa1, Ning Zhou2, Stefan B Strahl3, Yehoash Raphael1, Bryan E Pfingst1
1 University of Michigan, Ann Arbor, MI, USA
2 East Carolina University, Greenville, NC, USA
3 MED-EL GmbH, Research and Development, Innsbruck, AUT
In previous studies in human subjects with cochlear implants we found that functional
responses assessed at individual stimulation sites along a multisite electrode array varied from
one stimulation site to another and that the pattern of variation across stimulation sites was
typically stable over time, but subject dependent. These observations are consistent with the
hypothesis that the functional responses to cochlear-implant stimulation depend on conditions in the implanted cochlea near the individual stimulation sites. Since the pattern of neural
pathology varies along the length of the cochlea in a deaf, implanted ear, we assume that neural
health is a key variable underlying these functional responses.
We have found that turning off electrodes at a few poorly-performing stimulation sites
throughout the electrode array results in improved speech recognition performance. However, to
translate this site-selection strategy to clinical practice, we need more efficient measures of
implant performance than those used in the previous studies. Reasonable candidate measures
for clinical use would be objective electrophysiological measures that require no training of the
patients and can be used to assess the entire electrode array, one electrode at a time, in a
period compatible with the time constraints of clinical practice. We hypothesize that measures
that are related to neural health will be useful for site-selection procedures.
For the current study we are examining properties of electrically-evoked compound action
potential (ECAP) amplitude-growth functions (ECAP amplitude as a function of electrical
stimulus level). In guinea pigs we have found that density of spiral ganglion neurons (SGNs)
near the stimulating electrodes can account for up to 53% of the variance in ECAP growth-function slope, depending on the conditions tested. In human subjects we found that slopes of
ECAP growth functions recorded using similar stimulation and recording paradigms showed
appreciable variation across stimulation sites along the multichannel electrode array and
showed subject-dependent across-site patterns. Across-site patterns of performance were also
relatively stable over time. Other metrics based on ECAP growth functions are also being
studied in humans and guinea pigs. The data collected to date show promise for developing
clinically useful procedures to improve fitting of implants based on estimated conditions in the
individual subjects’ implanted ears.
This work is supported by NIH-NIDCD grants R01 DC010786, R01 DC010412, and P30
DC05188 and a contract from MED-EL.
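The candidate metric itself is easy to state: the slope of the ECAP amplitude-growth function. A minimal sketch follows, with invented current levels and amplitudes and a plain linear regression over the supra-threshold points; the actual studies may fit growth functions differently.

    import numpy as np

    current_cu = np.array([150, 160, 170, 180, 190, 200])  # stimulus level (clinical units)
    ecap_uv = np.array([0, 18, 55, 96, 141, 180])          # ECAP amplitude (µV)

    growing = ecap_uv > 0                                  # supra-threshold points only
    slope, intercept = np.polyfit(current_cu[growing], ecap_uv[growing], 1)
    print(f"growth-function slope: {slope:.2f} µV per clinical unit")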
T27: A MODELING FRAMEWORK FOR OPTICAL STIMULATION IN THE
INNER EAR
Robin Sebastian Weiss, Michael Schutte, Werner Hemmert
Bio-Inspired Information Processing, IMETUM, Technische Universität München, Munich, DEU
Optogenetic stimulation of neurons has become one of the more important methods for studying neuronal function and inter- and intraconnectivity [1]. The stimulation is made feasible by, e.g., viral transfection of neurons that alters the cell’s genome such that it expresses Channelrhodopsin-2 (ChR2) [2]. This light-activated channel makes it possible to overcome some of the limitations of electrical stimulation in cochlear implants. We therefore want to create coding strategies suitable for optical stimulation.
To predict the outcome of different optical stimuli at different wavelengths, frequencies
and energies, we have set up a framework in Python and Brian [3]. The implementation of ChR2
is based on Foutz et al. [4] and allows a wide range of optical inputs and takes into account the
attenuation, scattering and absorption in the inner ear as the optical ray travels from the
optrodes to the receiving tissue.
Our model allows tissue properties to be set, and the ion channel configuration of the receiving neurons can be adjusted according to experimental measurements. Electrical and
optogenetic stimulation can be compared or combined, as models of different cochlear implant
electrode and optrode setups are implemented. The comparison of different stimulation patterns
shows good agreement with data from Hernandez et al. [5].
This model framework is a valuable tool for conducting quantitative evaluations of optically induced auditory nerve excitation, e.g., the evaluation of firing patterns with an automatic speech recognition framework or with a neurogram similarity index measure (NSIM) [6].
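For orientation only, the sketch below integrates a toy two-state (closed/open) light-gated conductance with forward Euler; the framework described here implements the far more detailed four-state Foutz et al. [4] ChR2 model, and every constant below is invented rather than fitted.

    import numpy as np

    dt = 1e-5                                 # time step (s)
    t = np.arange(0.0, 0.02, dt)
    irradiance = np.where((t > 0.005) & (t < 0.010), 1.0, 0.0)   # 5-ms light pulse

    o = 0.0                                   # fraction of open channels
    g_max, e_rev, v_rest = 1.0, 0.0, -70.0    # conductance and potentials (arbitrary/mV)
    k_on, tau_off = 500.0, 3e-3               # opening rate per unit irradiance; closing tau

    photocurrent = np.empty_like(t)
    for i, phi in enumerate(irradiance):
        o += dt * (k_on * phi * (1.0 - o) - o / tau_off)
        photocurrent[i] = g_max * o * (v_rest - e_rev)    # inward (negative) current

    print(f"peak photocurrent: {photocurrent.min():.1f} (arbitrary units)")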
References
[1] K. Deisseroth, “Optogenetics,” Nat Meth, vol. 8, no. 1, pp. 26-29,
http://dx.doi.org/10.1038/nmeth.f.324, 2011.
[2] A. M. Aravanis, L.-P. Wang, F. Zhang, L. A. Meltzer, M. Z. Mogri, M. B. Schneider, and K.
Deisseroth, “An optical neural interface: in vivo control of rodent motor cortex with
integrated fiberoptic and optogenetic technology,” J. Neural Eng, vol. 4, no. 3, pp. S143,
2007.
[3] D. F. M. Goodman and R. Brette, “The Brian simulator,” Frontiers in Neuroscience, vol. 3, no. 2, pp. 192-197, 2009.
[4] T. J. Foutz, R. L. Arlow, and C. C. McIntyre, “Theoretical principles underlying optical
stimulation of a channelrhodopsin-2 positive pyramidal neuron,” Journal of
Neurophysiology, vol. 107, no. 12, pp. 3235-3245, 2012.
[5] V. H. Hernandez, A. Gehrt, Z. Jing, G. Hoch, M. C. Jeschke, N. Strenzke, and T. Moser,
“Optogenetic stimulation of the auditory nerve,” Journal of Visualized Experiments, no.
92, 2014.
[6] M. Drews, M. Nicoletti, W. Hemmert, and S. Rini, “The neurogram matching similarity index (NMSI) for the assessment of similarities among neurograms,” in Acoustics, Speech and Signal Processing (ICASSP), 2013 IEEE International Conference on, 2013, pp. 1162-1166.
T28: FAILURE OF INFRARED STIMULATION TO EVOKE NEURAL ACTIVITY
IN THE DEAF GUINEA PIG COCHLEA
Alexander C Thompson1, James B Fallon2, Andrew K Wise2, Scott A Wade1, Robert K
Shepherd2, Paul R Stoddart1
1 Faculty of Science, Engineering and Technology, Swinburne University of Technology, Melbourne, AUS
2 Bionics Institute, Melbourne, AUS
Introduction: Infrared neural stimulation (INS) is a proposed alternative to electrical
stimulation to activate auditory neurons (1) that may improve the spatial selectivity of cochlear
implants (2). At present there is some debate as to the mechanism by which INS activates
auditory neurons, as the lasers used for INS can potentially generate a range of secondary
stimuli; e.g., an acoustic stimulus is produced when the light is absorbed by water. To clarify
whether INS in the cochlea requires functioning hair cells, and to explore the potential relevance
to cochlear implants, experiments using INS were performed in the cochleae of normal hearing,
acutely deafened and chronically deafened guinea pigs.
Methods: Four adult pigmented guinea pigs were used with experiments performed
bilaterally in two animals and unilaterally in two animals, giving a total of six experimental ears.
Two cochleae were used for acute-deafening experiments; deafening was achieved by local perfusion of neomycin (10 mg/ml) throughout the cochlea. The remaining two animals were
chronically deafened (kanamycin 420mg/kg s.c. and frusemide 130mg/kg i.v.) four weeks before
the acute INS experiment. In each acute INS experiment animals were anaesthetised (1-2%
isoflurane in O2 1L/min) and the cochlea surgically accessed. A cochleostomy was made in the
basal turn and an optical fibre, along with a platinum ball electrode, was positioned inside the
cochlea. The tip of the fibre was positioned to be close to the modiolar wall. Auditory brainstem
responses (ABRs) were measured to acoustic (prior to acute deafening), electrical and optical
stimulation. At the completion of the experiment, cochlear tissue was collected and processed
for histology.
Results: A response to INS was readily evoked in normal hearing cochleae. However, no
response was evoked in any deafened cochleae, for either acute or chronic deafening, contrary
to previous work where a response was observed after acute deafening with ototoxic drugs. A
neural response to electrical stimulation was readily evoked in all cochleae after deafening, indicating that the auditory neurons were functional.
Conclusion: The absence of a response to optical stimuli in acutely and chronically deafened cochleae suggests that the response to INS in the cochlea is hair-cell mediated.
References
1. Richter C-P, Matic AI, Wells JD, Jansen ED, Walsh JT. Neural stimulation with optical radiation. Laser
& photonics reviews. 2011;5(1):68-80.
2. Thompson AC, Wade SA, Pawsey NC, Stoddart PR. Infrared Neural Stimulation: Influence of
Stimulation Site Spacing and Repetition Rates on Heating. Biomedical Engineering, IEEE Transactions
on. 2013;60(12):3534-41.
Funding source
Funding for this research was provided by Cochlear Ltd. and the Australian Research Council under
grant LP120100264. The Bionics Institute acknowledges the support it receives from the Victorian
Government through its Operational Infrastructure Support Program.
T29: A MULTI-SCALE MODEL OF COCHLEAR IMPLANT STIMULATION
Phillip Tran1, Paul Wong1, Andrian Sue1, Qing Li1, Paul Carter2
1 School of Aerospace, Mechanical and Mechatronic Engineering, The University of Sydney, Sydney, AUS
2 Cochlear Ltd., Sydney, AUS
Volume conduction models of the implanted cochlea have been used to predict the
outcomes resulting from intracochlear stimulation. However, existing models make assumptions about the boundary conditions that are applied, limiting their accuracy. The
definition of correct boundary conditions is critical for the investigation of monopolar stimulation
because the grounding electrodes are located remotely. In addition, the anatomy and properties
of the tissues in the head can influence the current and voltage distributions within the cochlea,
leading to large differences in results.
In our previous work, a finite element model of the whole human head was developed to
allow for a realistic representation of the far-field boundary conditions and resulting distributions.
However, the resolution of the image data only allowed for a coarse reconstruction of the
cochlea. The present study aims to incorporate a localised high-resolution geometry of the
cochlea within the global anatomy of the head in a multi-scale finite element model.
The multi-scale model serves as a useful tool for conducting investigations of monopolar
stimulation. With the increased resolution of the cochlea, the prediction and visualisation of
current pathways, voltage distributions, and impedance within the cochlea are improved.
Intracochlear results using existing boundary conditions are compared with those obtained from
this whole head simulation to determine the extent of modelling error and the validity of
modelling assumptions.
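The sensitivity to far-field boundary conditions can be caricatured with a deliberately crude finite-difference model (not the finite element head model described above): solving Laplace's equation around a fixed-potential "electrode" node with a grounded versus an insulating outer boundary yields clearly different potential fields.

    import numpy as np

    def solve(grounded, n=61, iters=4000):
        v = np.zeros((n, n))
        src = (n // 2, n // 2)
        for _ in range(iters):
            # Jacobi relaxation of Laplace's equation on the interior.
            v[1:-1, 1:-1] = 0.25 * (v[:-2, 1:-1] + v[2:, 1:-1]
                                    + v[1:-1, :-2] + v[1:-1, 2:])
            v[src] = 1.0                         # fixed "electrode" potential
            if grounded:                         # Dirichlet: remote ground at the edge
                v[0, :] = v[-1, :] = 0.0
                v[:, 0] = v[:, -1] = 0.0
            else:                                # Neumann: insulating outer boundary
                v[0, :], v[-1, :] = v[1, :], v[-2, :]
                v[:, 0], v[:, -1] = v[:, 1], v[:, -2]
        return v

    diff = np.abs(solve(True) - solve(False)).max()
    print(f"max potential difference between boundary choices: {diff:.2f}")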
T30: INVESTIGATIONS OF IRREVERSIBLE CHARGE TRANSFER FROM
COCHLEAR IMPLANT ELECTRODES: IN-VITRO AND IN-SILICO
APPROACHES
Andrian Sue1, Phillip Tran1, Paul Wong1, Qing Li1, Paul Carter2
1 School of Aerospace, Mechanical and Mechatronic Engineering, The University of Sydney, Sydney, AUS
2 Cochlear Ltd., Sydney, AUS
The safety of platinum electrodes used in cochlear implants is commonly evaluated by
examining electrochemical responses in-vitro. It is difficult, however, for conventional in-vitro
methods to accurately predict the stimulation limits of implanted electrodes because of the
differences between in-vitro and in-vivo environments. Consequently, in-vivo evaluations of
electrode safety in animal models are necessary, but these too offer limited ability to develop an understanding of the factors and mechanisms that contribute to stimulation-induced tissue
injury. This makes the systematic evaluation of irreversible charge transfer on electrodes
difficult, especially with the inter-subject and inter-application variability associated with these
experimental techniques. For these reasons, the utilization of an in-silico approach that
complements existing experimental approaches is important, because it enables the systematic
study of the various electrode design factors affecting the safe injection of charge through
intracochlear electrodes.
This study aims to develop an in-silico model to investigate the effect of electric field
distributions (and hence the electrode parameters that influence field distributions) on the safety
of cochlear implant electrodes. The finite element method is used to model cochlear implant
electrodes, their electrochemical responses, and the surrounding electric field. The responses of
the model to variations in electrode geometry, electrode position, pulse width, stimulation mode,
and the surrounding environment are explored. A 3D-printed in-vitro setup is also developed in
order to verify the accuracy of the in-silico model. It is hypothesized that the in-vitro setups used
in this study are more representative of in-vivo cochlear implantation conditions, when
compared to conventional in-vitro experiments.
The results of the finite element modeling suggest that in extreme cases, the shape of the
electric field surrounding the electrode impacts the extent of irreversible reactions occurring at the electrode-electrolyte interface. The distribution of the electric field is a function of the
physiological conditions and application of stimulation. Therefore, using a cochlea-like chamber
in-vitro to control the electric field distribution yields results that more closely resemble the
electrochemical characteristics of an implanted electrode.
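One widely used first-pass safety screen in this design space, although the abstract itself does not cite it, is Shannon's (1992) empirical charge-injection criterion combining charge per phase and charge density; the electrode values below are hypothetical.

    import math

    def shannon_k(current_a, pulse_width_s, area_cm2):
        # k = log10(Q) + log10(Q/A), with Q in µC per phase and A in cm^2;
        # k below roughly 1.85 is commonly quoted as the safe region.
        q_uc = current_a * pulse_width_s * 1e6
        return math.log10(q_uc) + math.log10(q_uc / area_cm2)

    # Hypothetical CI electrode: 1 mA, 50 µs per phase, 0.002 cm^2 surface.
    print(f"k = {shannon_k(1e-3, 50e-6, 0.002):.2f}")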
T31: A TRANSMODIOLAR COCHLEAR IMPLANT ELECTRODE: HIGH
DENSITY ELECTRODE DESIGN AND MANUFACTURING PROCESS.
Guillaume Tourrel1, Dang Kai1, Dan Gnansia1, Nicolas Veau1, Alexis Borzorg-Grayeli2
1 Oticon Medical, Vallauris, FRA
2 University Hospital of Dijon, Dijon, FRA
The purpose of this study is to propose a new cochlear implant electrode design and manufacturing technique adapted to transmodiolar electrical stimulation. This way of stimulating the auditory nerve fibers is theoretically the most efficient in terms of power consumption, and could be of particular interest for focusing the stimulation on the nerve fibers, especially in the apical turns of the cochlea.
We first designed a specific electrode considering the particular surgery and geometry of the transmodiolar implantation approach. For that purpose, we proposed the principle of a rigid electrode with dimensions of about 5.5 mm in length and 0.5 mm in diameter. In order to conform to these very small dimensions, we chose a substrate made of highly resistant zirconia. The electrical tracks and surface electrodes were realized by metallization with gold and platinum. Finally, the last layer of insulation was made of a parylene coating. Surface electrodes were revealed by locally removing the parylene coating with a laser ablation operation. In the same operation, a pattern can be applied on the top surface of the electrode in order to improve the electrical characteristics by increasing the geometric surface area. Using this design, we will be able to provide 20 or more electrodes on an electrode array with a helical shape, directly facing the auditory nerve fibers.
T32: INSERTION FORCE TEST BENCH AND MODEL OF COCHLEA
Guillaume Tourrel1, Andrea Lovera2
1 Oticon Medical, Vallauris, FRA
2 FEMTOPrint, Ticino, CHE
Insertion force is one of the criteria for designing a new cochlear electrode array. We
know that this force is directly linked to the trauma caused by electrode insertion to surrounding tissues inside the cochlea, especially the basilar membrane. Different studies have already
established a direct link between insertion forces and trauma.
Based on this fact, we developed a test bench able to measure very small forces using a
force sensor with a resolution of 0.5 mN. We know that successful electrode insertions generate a reaction force of about 0.1 N at most, whereas traumatic insertions (fold-over, kinking) are associated with insertion force increases reaching 0.5 N or more.
The position of the cochlea model can be fine-tuned using a high-precision x/y table and a theta-angle adjustment on the base plate. The piston that pushes the electrode into the cochlea model also has an alpha-angle setting. This ensures high positioning precision and good reproducibility for repeated tests.
Different models of cochlea have been developed and different technologies have been
tested. The simplest was a 2D cochlea made with 2D machining. The most advanced was a 3D
model engraved with a laser into a piece of glass. This manufacturing technique from FEMTOprint is new and highly innovative.
First trials show that the system itself generates a very low insertion-force artefact (less than 0.005 N).
This equipment will allow us to compare different electrode arrays in different cochlear models and shapes, simulating different angles of insertion into the cochlea. The next phase will be to provide 3D models of the cochlea based on direct MRI of normalized cochleae or of singular cases such as partial ossification.
T33: FORWARD MASKING IN COCHLEAR IMPLANT USERS:
ELECTROPHYSIOLOGICAL AND PSYCHOPHYSICAL DATA USING SINGLE-PULSE AND PULSE-TRAIN MASKERS
Youssef Adel1, Gaston Hilkhuysen1, Arnaud Norena2, Yves Cazals2, Stéphane Roman3,
Olivier Macherey1
1 Laboratoire de Mécanique et d’Acoustique, CNRS, Marseille, France
2 Laboratoire Neurosciences Intégratives et Adaptatives, AMU, Marseille, France
3 Chirurgie cervico-faciale pédiatrique, ORL, Hôpital de la Timone, Marseille, France
Electrophysiological forward masking (PhyFM) measured at the level of the auditory
nerve in cochlear implant users (CIUs) typically shows short recovery time constants of a few
milliseconds. By contrast, psychophysical forward masking (PsyFM) shows longer recovery of
up to 100 ms. These discrepancies have led several authors to suggest two different
contributions to psychophysical forward masking: A short time-constant process due to the
refractory properties of the auditory nerve, and a longer time-constant process arising from more
central structures. However, most PsyFM studies used pulse-train maskers, while all PhyFM
studies performed in human CIUs used single-pulse maskers. Yet PhyFM studies using pulse-train maskers in guinea pigs reported recovery times longer than 100 ms (Killian et al. 1994,
Hear Res 81:66-82; Miller et al. 2011, JARO 12:219-232). The current study further examines
this problem by measuring electrophysiological and psychophysical forward masking in CIUs
using identical stimuli.
PhyFM was determined using ECAP responses to a single-pulse probe presented at a
level evoking a response of approximately 50 µV without masking. The probe was preceded by
a single-pulse masker (SPM), a low-rate pulse-train masker (LTM), or a high-rate pulse train
masker (HTM). LTM and HTM had 300-ms duration, and rates of 250 and 5000 pulses per
second, respectively. HTM and LTM were compared either at equal current level or at equal
sensation level corresponding to most comfortable loudness. The time interval between masker
and probe was initially set to 1 ms and doubled until no masking occurred. PsyFM was
determined by measuring detection thresholds using a single-pulse probe presented 16 ms after
the masker.
PhyFM had recovery times of up to 256 ms for HTM, up to 128 ms for LTM, and up to 4
ms for SPM. Pulse trains generally showed longer recovery times than single pulses. Compared
at equal current level, HTM produced more and longer-lasting masking than LTM. Compared at
equal sensation level, results differed across subjects. Similar to PhyFM, PsyFM showed
increased probe detection thresholds for HTM when compared with LTM at equal current level.
However, when both maskers were compared at equal sensation level, LTM usually produced
more masking than HTM, which was not consistent with the PhyFM results.
Electrical pulse trains can produce long forward masking at the level of the auditory
nerve. This suggests that long recovery time constants measured psychophysically may have a
peripheral origin. Considering that ECAPs capture simultaneous firing of groups of neurons,
stronger masking by LTM and HTM compared to SPM could be due to neural adaptation, or
desynchronization of neural activity. Although both mechanisms should reduce the amplitude of
the ECAP probe response, they could also give rise to different cues for LTM and HTM in
PsyFM tests. This could account for the differences found between PsyFM and PhyFM as well
as differences in CIU performance as a function of stimulation rate.
This study was funded by the ANR (ANR-11-PDOC-0022) and the CNRS (DEFI-SENS).
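Recovery times like those above are often summarized by fitting an exponential decay of masking against the masker-probe interval; the abstract does not describe the authors' fitting procedure, so the sketch below uses SciPy's curve_fit on invented values.

    import numpy as np
    from scipy.optimize import curve_fit

    intervals_ms = np.array([1, 2, 4, 8, 16, 32, 64, 128, 256])   # doubling intervals
    masking = np.array([0.95, 0.85, 0.70, 0.52, 0.35, 0.20, 0.10, 0.04, 0.01])

    def decay(t, a, tau):
        return a * np.exp(-t / tau)   # single-exponential recovery

    (a, tau), _ = curve_fit(decay, intervals_ms, masking, p0=(1.0, 30.0))
    print(f"recovery time constant: {tau:.1f} ms")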
T34: A NEW ANALYSIS METHOD FOR SPREAD OF EXCITATION CURVES
BASED ON DECONVOLUTION
Jan Dirk Biesheuvel, Jeroen J. Briaire, Johan J. de Vos, and Johan H.M. Frijns
Leiden University Medical Centre
INTRODUCTION: It has frequently been investigated whether there is a relationship between
the width of the objective eCAP-based spread of excitation (SOE) curves and psychophysical
parameters such as pitch discrimination and speech perception. However, no correlations have
been found yet. So far, the width of the SOE curve has been interpreted as a direct measure of
SOE. However, an SOE curve recorded with forward masking is not a direct measure of the SOE,
but it represents the area overlap of the excitation areas of the fixed probe and variable masker
along the electrode array. From a mathematical point of view the movement of the masker with
respect to the fixed probe is equal to a convolution operation. If we hypothesize that the SOE curve
actually is a convolution of the masker and probe, deconvolution of the SOE curve would give the
excitation areas of the masker and the probe and thus the actual neural SOE. This study aimed to
develop a new analysis method for SOE curves by using this principle of deconvolution.
METHODS: Intra-operative SOE curve measurements from 16 patients implanted with an Advanced Bionics implant (electrodes numbered from 1 to 16 from apex to base) were used. ECAP-based SOE curves were recorded on electrodes 3 to 16 using the forward-masker paradigm with a variable masker. Theoretical SOE curves were calculated by the convolution of possible excitation density profiles (EDPs) for masker and probe. The fit of these theoretical curves to the measured SOE curves was optimized by iteratively adjusting the EDPs of the masker and the probe (using a steepest descent algorithm). Several possible EDPs, all symmetrical in their decay towards the apex and the base of the cochlea, were evaluated: a rectangular profile (Rectangular), a profile with an exponential decay on both sides (Exponential), and a profile with a Gaussian decay (Gaussian). The parameters varied were the width of the plateau of the EDP and the steepness of the decay (Exponential and Gaussian).
RESULTS: The calculated SOE curves fit the measured SOE curves well, including the
asymmetrical shape of the measured SOE curves. For Rectangular EDPs, the fit between the modelled and the measured SOE curves is poorer than for Exponential or Gaussian EDPs, and the
SOE curves modelled using the Rectangular EDP are less smooth than the SOE curves modelled
using the Exponential and the Gaussian EDP. In most patients the EDP width (i.e., the size of the
excitation area) gradually changes from wide at the apex to narrow at the base. The corresponding
SOE curves reveal that the steeper this trend is, the more asymmetric the SOE curves are.
Exploring the widths of the Gaussian EDP in more detail reveals that in 15 of the 16 patients (94%)
the optimized widths also highly correlate with the amplitudes of the SOE curves (p < 0.05). The
comparison of the EDP widths to the SOE curve widths as calculated in literature revealed that the
EDPs provide a completely different measure of spread of excitation than the conventional methods.
CONCLUSION: This study shows that an eCAP-based SOE curve measured with forward
masking can be seen as a convolution of EDPs of masker and probe. The fact that the Rectangular
EDP provides poorer fits than the Exponential and Gaussian ones indicates that a sloped side of the
excitation area is required to explain actual SOE recordings. The deconvolution method can explain
the frequently observed asymmetry of SOE-curves along the electrode array as a consequence of a
wider excitation area in the apical part of the cochlea, without any asymmetry in the actual EDPs. In
addition, the broader EDPs in the apex can explain the higher eCAP amplitudes found for apical
stimulation.
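The core operation, an SOE curve modelled as the overlap of masker and probe EDPs and fitted by adjusting EDP parameters, can be sketched with Gaussian EDPs and synthetic data as below; the authors' steepest descent implementation and plateau parameterization will differ, and note that masker and probe widths trade off, as in any deconvolution.

    import numpy as np
    from scipy.optimize import least_squares

    electrodes = np.arange(1.0, 17.0)   # 16 contacts, apex (1) to base (16)

    def gaussian_edp(center, width):
        return np.exp(-0.5 * ((electrodes - center) / width) ** 2)

    def modelled_soe(params, probe_pos=8.0):
        masker_w, probe_w, amp = params
        probe = gaussian_edp(probe_pos, probe_w)
        # Overlap of the probe EDP with a masker EDP swept along the array:
        # mathematically a convolution of the two profiles.
        return np.array([amp * np.sum(gaussian_edp(m, masker_w) * probe)
                         for m in electrodes])

    measured = modelled_soe([2.0, 1.5, 1.0])           # synthetic "recording"
    fit = least_squares(lambda p: modelled_soe(p) - measured, x0=[1.0, 1.0, 0.5],
                        bounds=([0.1, 0.1, 0.0], [8.0, 8.0, 10.0]))
    print(fit.x)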
T35: BENEATH THE TIP OF THE ICEBERG IN AUDITORY NERVE FIBERS:
SUBTHRESHOLD DYNAMICS FOR COCHLEAR IMPLANTS
Jason Boulet1, Sonia Tabibi, Norbert Dillier2, Mark White3, Ian C. Bruce1
1 McMaster University, Hamilton, CAN
2 ETH Zurich, Zurich, CHE
3 Unaffiliated, Cary, NC, USA
Refractoriness is a common neural phenomenon that has been shown to characterize
some aspects of the auditory nerve fiber (ANF) spike-dependent response to cochlear implant
(CI) stimulation. However, for high-rate pulse trains, a greater drop-off in spike rate over time is
often observed than can be explained by refractoriness alone. This is typically assumed to be
caused by ongoing spike-dependent neural adaptation, but mounting evidence suggests that
subthreshold stimulus-response behaviors may also play a crucial role in ANF stimulus-response electrophysiology. In this study, we explore two such mechanisms: facilitation, in which a subthreshold stimulus increases subsequent excitability, and accommodation, in which excitability is decreased.
Progress has been made in the area of developing phenomenological models to predict
the effects of several of the aforementioned behaviors. However, up until now, no model has
combined all four of these stimulus-response behaviors: refractoriness, spike-rate adaptation,
facilitation and accommodation. In this study, we present a stochastic integrate-and-fire model
that simultaneously considers all four phenomena using parameters from fits to data from
paired-pulse experiments to model facilitation, accommodation (Dynes 1996, PhD Thesis) and
refractoriness (Miller et al 2001, JARO), as well as spike-rate adaptation (Nourski et al
2006, NIH QPR). We observed that spike-rate adaptation behaved as expected by showing a
slow decay in excitability measured by post-stimulus time histograms (PSTHs). However, under
various stimulus regimes, including (1) current levels that elicit a low-probability of spiking and
(2) time-scales that are relevant for accommodation, the model also predicts long-term drops in
the ANF spike-rate due to accommodation without explicitly modeling spike-rate adaptation.
Thus, care should be taken when interpreting experimental PSTHs, since using only spike-rate
adaptation may be insufficient to explain the drop in excitability over the duration of the stimulus.
The proposed model with all four mechanisms permits a systematic investigation of their
contribution to ANF response properties under various stimulus conditions.
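As a rough illustration of how a single adaptive-threshold variable can produce all four behaviours in a stochastic integrate-and-fire model, consider the following sketch (all constants are invented for illustration and are not the parameters fitted to the Dynes, Miller, or Nourski datasets):

```python
import numpy as np

rng = np.random.default_rng(0)
dt, tau_m, tau_thr = 1e-5, 1e-4, 1e-3    # s: time step, membrane and threshold time constants
v, thr, thr0 = 0.0, 1.0, 1.0             # membrane state, adaptive threshold, resting threshold
spike_times = []

for n in range(2000):                    # 20 ms of a 500 pps pulse train
    t = n * dt
    i_stim = 3.0 if (n % 200) < 5 else 0.0   # 50 us pulses every 2 ms
    # Stochastic leaky integration (membrane noise makes firing probabilistic).
    v += dt / tau_m * (-v + i_stim) + 0.05 * rng.standard_normal() * np.sqrt(dt / tau_m)
    # Subthreshold dynamics: depolarization transiently lowers the threshold
    # (facilitation), while sustained stimulus drive slowly raises it
    # (accommodation); both act without any spike having occurred.
    thr += dt / tau_thr * (thr0 - thr) - 0.05 * max(v, 0.0) * dt / tau_thr \
           + 0.02 * i_stim * dt / tau_thr
    if v >= thr:                         # spike: reset and jump the threshold,
        spike_times.append(t)            # giving refractoriness and, cumulatively,
        v, thr = 0.0, thr + 2.0          # spike-rate adaptation
print(len(spike_times), "spikes at", [round(s * 1e3, 2) for s in spike_times[:5]], "ms ...")
```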
[Funded by NSERC Discovery Grant #261736 (IB) and the ICanHear EU Training Network
(ND).]
T36: A MODEL OF AUDITORY NERVE RESPONSES TO ELECTRICAL
STIMULATION
Suyash N Joshi, Torsten Dau, Bastian Epp
Technical University of Denmark, Kgs. Lyngby, DNK
Cochlear implants (CI) stimulate the auditory nerve (AN) with a train of symmetric
biphasic current pulses comprising a cathodic and an anodic phase. The cathodic phase is
intended to depolarize the membrane of the neuron and to initiate an action potential (AP) and
the anodic phase to neutralize the charge induced during the cathodic phase. Single-neuron
recordings in cat auditory nerve using monophasic electrical stimulation show, however, that
both phases in isolation can generate an AP. The site of AP generation differs for both phases,
being more central for the anodic phase and more peripheral for the cathodic phase. This
results in an average difference of 200 µs in spike latency for APs generated by anodic vs
cathodic pulses. Previous models of electrical stimulation have been developed based on AN
responses to symmetric biphasic stimulation and therefore fail to predict important aspects of
the observed responses to stimulation with various pulse shapes, such as the latency differences between anodic-first and cathodic-first biphasic stimuli. This failure to account for these important aspects disqualifies these models for investigating temporal and binaural processing in
CI listeners. Based on these premises, Joshi et al. (2015) proposed a model of the AN
responses to electrical stimulation. Their model consisted of two exponential integrate-and-fire
type neurons, representative of the peripheral and central sites of excitation. This model,
parametrized with data for monophasic stimulation, was able to correctly predict the responses
to a number of pulse shapes. Their study was only concerned with the responses to single pulse
stimulation. This report extends the model proposed by Joshi et al. (2015) for the case of
stimulation with pulse trains. The model is modified to include changes in excitability following
either sub-threshold or supra-threshold stimulation by including a variable representing an
adapting threshold. With an adaptive threshold, the model is tested for its ability to predict
facilitation (increased excitability following subthreshold pre-pulse), accommodation (decreased
excitability following subthreshold pre-pulse and facilitation), and adaptation (decreased
excitability following a spike produced by supra-threshold pre-pulse). The model will be further
tested for its ability to predict the observed responses for pulse trains by analyzing effects of
stimulation rate and level on the model responses. With the ability to account for the
responsiveness to electrical stimulation with pulses of various shapes, a successful model can
be generalized as a framework to test various stimulation strategies and to quantify their effect
on the performance of CI listeners in psychophysical tasks.
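In equation form, one way to write such an exponential integrate-and-fire unit with an adapting threshold is the following generic formulation (illustrative; the symbols and coupling terms are not the authors' fitted model):

```latex
\tau_m \frac{dV}{dt} = -(V - V_{\mathrm{rest}})
  + \Delta_T \exp\!\left(\frac{V - V_T}{\Delta_T}\right) + R\,I_{\mathrm{stim}}(t),
\qquad
\tau_\theta \frac{d\theta}{dt} = -(\theta - \theta_0)
  + \alpha\,(V - V_{\mathrm{rest}}) + \beta \sum_k \delta(t - t_k)
```

Here spikes occur when V crosses the threshold θ at times t_k. Two subthreshold coupling terms of this form with opposite signs and different time constants can capture facilitation (θ transiently lowered after a subthreshold pulse) and accommodation (θ slowly raised), while the spike-triggered increments β give refractoriness and adaptation.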
This work has been supported by grant from the People Programme (Marie Curie Actions) of the
European Union’s 7th Framework Programme FP7/2007-2013/ under REA grant agreement
number PITN-GA-2012-317521.
References:
Joshi, S. N., Dau, T., Epp, B. (2015) “A model of auditory nerve responses to electrical
stimulation” 38th Mid-winter meeting of the Association for Research in Otolaryngology,
Baltimore, MD
T37: MULTICHANNEL OPTRODE FOR COCHLEAR STIMULATION
Xia Nan1, Xiaodong Tan2, Hunter Young2, Matthew Dummer3, Mary Hibbs-Brenner3, Claus-Peter Richter2
1 Northwestern University, Chicago, IL, USA
2 Department of Otolaryngology, Northwestern University, Chicago, IL, USA
3 Vixar Inc., Plymouth, MN, USA
Introduction: Optical stimulation has been proposed for cochlear implants as an
alternative method to electrical stimulation. Two different methods were explored for this
purpose, namely infrared neural stimulation (INS) and optogenetics. To deliver the optical
radiation, optical fibers and single optical chips, such as Vertical-Cavity Surface-Emitting Lasers
(VCSELs) and micro Light Emitting Diodes (µLEDs), were tested. Stiff optical fibers are not
suitable for intra-cochlear stimulation because of the potential inner ear damage during the
insertion into scala tympani. More flexible and multichannel optrodes are needed for safe
implantation.
Method: Blue µLEDs (wavelength: 470 nm, dimensions: 1000x600x200 µm³, for the optogenetic approach) and red and infrared VCSELs (wavelength: 680 nm, dimensions: 250x250x200 µm³, for functional testing, and wavelength: 1850 nm, dimensions: 450x250x200 µm³, for INS, respectively) were used as light sources for the multi-channel optrodes. Conductive silver epoxy was used to connect all cathodes of the VCSELs or µLEDs to the uninsulated part of a 125 µm diameter Teflon-coated silver wire, which also serves as the heat sink. The anode of each VCSEL or µLED was connected with epoxy to a 25 or 75 µm diameter Teflon-coated platinum/silver wire. The optrode was then moved to a semi-cylindrical mold with a diameter of 0.8 mm and embedded into silicone. For functional testing, the optrode was placed into a physiological saline bath on a shaking table and agitated continuously (24 hours per day, 7 days per week).
The functional integrity of each channel was tested and the physical appearance of the optrodes
was visually inspected daily for 4 weeks. After the in vitro test, the 4-channel red or an infrared
optrode was implanted into a cat cochlea. ABRs thresholds were tested before and every other
week after the implantation.
Results: In the saline bath, the longest test series lasted more than 3 months without any
changes in optrode appearance or function. For the red VCSELs, ABR responses above 10 kHz were absent one month after implantation, and ABR thresholds at frequencies below 10 kHz were elevated by 32 dB on average. The threshold changes were stable after implantation. Moreover, stimulation with the infrared optrode evoked an ABR response.
Conclusion: We successfully fabricated implantable multichannel cochlear optrodes for INS and optogenetics. The current results show that it is feasible to insert a multichannel optrode into the cat cochlea and that cochlear damage does not progress beyond the damage caused by the insertion of the optrode. In the next step, the animals will be deafened by injection of kanamycin (i.p.) and furosemide (i.v.) to study the changes that occur from a hearing to a deaf animal.
Funded with federal funds from the NIDCD, R01 DC011855
T38: CURRENT SPREAD IN THE COCHLEA: INSIGHTS FROM CT AND
ELECTRICAL FIELD IMAGING
Steven M Bierer1, Eric Shea-Brown2, Julie A Bierer1
1 University of Washington, Dept of Speech and Hearing Sciences, Seattle, WA, USA
2 University of Washington, Dept of Applied Mathematics, Seattle, WA, USA
Previous research in our laboratory suggests that channels with high thresholds, when
measured with a focused electrode configuration, have a reduced ability to transmit spatial,
temporal, and intensity cues. Such channels are likely affected by a degraded electrode-neuron
interface, but the underlying factors are not well understood. In this study, we have constructed
impedance models of the cochlea from electrical field imaging (EFI) profiles, for direct
comparison to CT images of the implanted array. These data could allow a better understanding of
how inter-electrode and inter-subject variability in perception relate to current flow within the
cochlea.
Perceptual thresholds to tripolar or quadrupolar stimulation were obtained on all channels
for 14 listeners wearing Advanced Bionics HiRes90k devices. EFI data were analyzed to create
a lumped-parameter impedance network model of the cochlear tissues, similar to Vanpoucke et
al, 2004. In 10 of the subjects, CT images were obtained to estimate the location of individual
electrodes within the cochlea.
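The flavor of such a lumped-parameter ladder model can be sketched as follows (a toy network in the spirit of Vanpoucke et al. 2004, with invented resistance values; the study's model is fitted per subject from the EFI profiles):

```python
import numpy as np

N = 16                        # intracochlear nodes (one per electrode)
r_long = np.full(N - 1, 1.0)  # kOhm, longitudinal resistances (along the scala)
r_trans = np.full(N, 10.0)    # kOhm, transversal resistances (out of the cochlea)

# Build the nodal conductance matrix G of the resistive ladder network.
G = np.zeros((N, N))
for i, r in enumerate(r_long):
    g = 1.0 / r
    G[i, i] += g
    G[i + 1, i + 1] += g
    G[i, i + 1] -= g
    G[i + 1, i] -= g
G += np.diag(1.0 / r_trans)   # each node leaks to the (grounded) return path

i_inj = np.zeros(N)
i_inj[7] = 1.0                # 1 mA monopolar injection on electrode 8
v = np.linalg.solve(G, i_inj) # EFI-like intracochlear voltage profile (V)
print(np.round(v, 3))
```

Fitting the longitudinal and transversal resistances of such a ladder to the measured EFI profile is what yields the per-subject impedance components discussed below.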
We observed that tripolar thresholds, when evaluated across subjects, were highly
correlated with electrode distances from the cochlear inner wall. Conversely, transversal
impedances (representing flow of current out of the cochlea) exhibited significant positive
correlations with distance within, but not across, subjects. Interestingly, tripolar thresholds and
transversal impedances were significantly lower for electrodes in the scala tympani, which were
generally more basally positioned.
Although tripolar thresholds could not be consistently predicted by the transversal or
longitudinal impedance components of the ladder network, the analysis did reveal a strong
correspondence between monopolar threshold and the composite impedance representing all
current pathways away from each stimulating electrode. Specifically, the higher the total
impedance, the lower the monopolar threshold. This finding may reflect the relative independence of monopolar threshold from localized impedance and other factors such as neural
health. Together, these results suggest that EFI potentials can help improve our understanding
of the electrode-neuron interface and subject-to-subject variability in perception.
This work was supported by NIDCD DC012142.
T39: EMPLOYING AUTOMATIC SPEECH RECOGNITION TOWARDS
IMPROVING SPEECH INTELLIGIBILITY FOR COCHLEAR IMPLANT USERS
Oldooz Hazrati, Shabnam Ghaffarzadegan, John Hansen
The University of Texas at Dallas, Richardson, TX, USA
Despite recent advancements in digital signal processing technology for cochlear implant
(CI) devices, there still remains a significant gap between speech identification performance of
CI users in reverberation compared to that in anechoic quiet conditions. Alternatively, automatic
speech recognition (ASR) systems have seen significant improvements in recent years resulting
in robust speech recognition in a variety of adverse environments, including reverberation.
In this study, we exploit advancements in ASR technology to formulate alternative solutions that benefit CI users. Specifically, an ASR system is developed using multi-condition
training on speech data with different reverberation characteristics (e.g., T60 values), resulting
in low word error rates (WER) in reverberant conditions. In the training stage, a gender
independent speech recognizer is trained using anechoic, as well as a subset of the reverberant
training speech. In the test stage, the ASR system output text transcription is submitted through
a text to speech (TTS) synthesizer in order to generate speech waveforms. Finally, the
synthesized waveform is presented to the CI listener to evaluate speech intelligibility.
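Since the system is tuned for low word error rate, the standard WER computation by edit distance is the relevant training metric; a minimal reference implementation (the generic definition, not tied to the authors' toolkit):

```python
def wer(ref, hyp):
    """Word error rate: (substitutions + insertions + deletions) / len(ref)."""
    r, h = ref.split(), hyp.split()
    d = [[0] * (len(h) + 1) for _ in range(len(r) + 1)]
    for i in range(len(r) + 1):
        d[i][0] = i                      # deleting all reference words
    for j in range(len(h) + 1):
        d[0][j] = j                      # inserting all hypothesis words
    for i in range(1, len(r) + 1):
        for j in range(1, len(h) + 1):
            sub = d[i - 1][j - 1] + (r[i - 1] != h[j - 1])
            d[i][j] = min(sub, d[i - 1][j] + 1, d[i][j - 1] + 1)
    return d[len(r)][len(h)] / len(r)

print(wer("she had your dark suit", "she had dark suit"))  # one deletion -> 0.2
```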
The effectiveness of this hybrid recognition-synthesis CI strategy is evaluated under moderate
to highly reverberant conditions (i.e., T60 = 0.3, 0.6, 0.8, and 1.0s) using speech material
extracted from the TIMIT corpus. Experimental results confirm the effectiveness of multi-condition training on the performance of the ASR system in reverberation, which consequently results in substantial speech intelligibility gains for CI users in reverberant environments.
Research supported by Cochlear Ltd.
T40: IMPORTANCE OF TONAL ENVELOPE IN CHINESE AUTOMATIC
SPEECH RECOGNITION
Payton Lin1, Fei Chen2, Syu-Siang Wang1, Yu Tsao1
1 Research Center for Information Technology Innovation, Academia Sinica, Taipei, TWN
2 South University of Science and Technology of China, Shenzhen, CHN
The aim of this study is to devise a computational method to predict Chinese cochlear
implant (CI) speech recognition. Here, we describe a high-throughput screening system for
optimizing Mandarin CI speech processing strategies using hidden Markov model (HMM)-based
automatic speech recognition (ASR). Word accuracy was computed on vocoded CI speech
synthesized primarily from multi-channel temporal envelope information. The ASR performance increased with the number of channels in a manner similar to that displayed in human recognition scores. Results showed the computational method of HMM-based ASR offers better process
control for comparing signal carrier type. Training-test mismatch reduction provided a novel
platform for re-evaluating the relative contributions of spectral and temporal cues to Chinese
speech recognition.
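The vocoded CI speech referred to above can be approximated with a generic channel vocoder; a minimal sketch (band edges, filter order, and the noise carrier are illustrative choices, not the study's exact processing):

```python
import numpy as np
from scipy.signal import butter, sosfilt, hilbert

def vocode(x, fs, n_channels=8, f_lo=200.0, f_hi=7000.0):
    """Noise-vocode x: per band, extract the temporal envelope and use it to
    modulate band-limited noise, discarding temporal fine structure."""
    edges = np.geomspace(f_lo, f_hi, n_channels + 1)   # log-spaced band edges
    out = np.zeros_like(x)
    noise = np.random.default_rng(0).standard_normal(len(x))
    for lo, hi in zip(edges[:-1], edges[1:]):
        sos = butter(4, [lo, hi], btype="band", fs=fs, output="sos")
        env = np.abs(hilbert(sosfilt(sos, x)))         # temporal envelope
        out += sosfilt(sos, noise) * env               # envelope-modulated carrier
    return out

fs = 16000
t = np.arange(fs) / fs
print(vocode(np.sin(2 * np.pi * 440 * t), fs).shape)   # (16000,)
```

Feeding such vocoded material to an HMM recognizer, and varying n_channels, is the screening loop the abstract describes.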
T41: EVALUATION OF A NEW LOW-POWER SOUND PROCESSING
STRATEGY
Andreas Buechner1, Leonid Litvak2, Martina Brendel3, Silke Klawitter1, Volkmar Hamacher1, Thomas Lenarz1
1 Medical University of Hannover, Hannover, DEU
2 Advanced Bionics LLC, Valencia, CA, USA
3 Advanced Bionics GmbH, European Research Center, Hannover, DEU
Objective: The goal of this study is to evaluate two newly developed low power sound
coding strategies on the Advanced Bionics HiRes90K platform with respect to power
consumption, speech understanding and subjective sound quality perception and compare the
outcomes with the performance of an established clinical strategy.
Methods: After some pilot trials of numerous signal coding approaches, two strategies
were identified as possible candidates for a clinically applicable low power strategy. One of the
two strategies (referred to as Ultra Low Power Strategy 1 - ULPS1) trades off more precise
spectral resolution for improved temporal representation. This is done by simultaneously
stimulating up to four adjacent electrodes, two inner ones being stimulated according to the
current steering (i.e. virtual channel) approach and two outer, flanking electrodes that are
additionally being stimulated when high output levels are required. In particular, the two flanking
electrodes are dynamically added when the compliance limit of the two inner electrodes has been reached and even more loudness needs to be conveyed to the subject, as required by the
acoustic scenario. The second candidate (Ultra Low Power Strategy 2 - ULPS2) is based on a
“n-of-m” approach that selects the most important channels on a frame-by-frame basis.
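One way to picture the ULPS1 behaviour is the following toy current-allocation sketch (the compliance limit, flank weighting, and electrode indices are invented placeholders, not Advanced Bionics' implementation):

```python
def ulps1_currents(alpha, level, i_max=1.0, n_elec=16, e_low=7):
    """Distribute 'level' across a steered electrode pair (e_low, e_low + 1);
    recruit the two outer flanking electrodes only once the inner pair
    would exceed the compliance limit i_max."""
    currents = [0.0] * n_elec
    inner = [min(level * (1 - alpha), i_max), min(level * alpha, i_max)]
    currents[e_low], currents[e_low + 1] = inner
    overflow = level - sum(inner)        # loudness the inner pair cannot carry
    if overflow > 0:                     # dynamically add the flanking electrodes
        currents[e_low - 1] = overflow * (1 - alpha)
        currents[e_low + 2] = overflow * alpha
    return currents

print(ulps1_currents(alpha=0.4, level=0.8))   # inner pair only
print(ulps1_currents(alpha=0.4, level=2.4))   # flanks recruited at high levels
```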
16 subjects have been recruited for the trial. A within-subject crossover comparison design is being used for the major part of the study. Each of the 16 subjects visits the clinic five times over
the course of the study, and takes part in four take-home trials. Subjects are randomly divided
into two groups of 8 subjects each, with the two groups receiving the study programs in reverse
order. Speech intelligibility in noise is measured using the HSM sentence test in free-field (in
quiet and in noise) and the adaptive Oldenburg sentence test (OlSa). Additionally, an
accompanying questionnaire (SQQ and APHAB) is handed out to the subjects during
appointments to obtain subjective feedback.
Results: To date, the new strategies yield results comparable to the baseline strategy HiRes Optima, with a slight tendency towards better hearing performance.
Subjective feedback also indicated preference towards the new coding strategies, in particular
for the ULPS 1 candidate. With the new strategies, a significant reduction of the power
consumption could be achieved.
Conclusions: The new speech coding strategies seem to be able to manage the difficult
balance between power consumption and speech understanding. Both candidates achieve a
significant reduction in power consumption, allowing for smaller externals in the future, while maintaining or even improving speech understanding compared to performance levels of established, clinically available coding strategies.
T42: NEURAL NETWORK BASED SPEECH ENHANCEMENT APPLIED TO
COCHLEAR IMPLANT CODING STRATEGIES
Tobias Goehring1*, Federico Bolner2,3,4*, Jessica J.M. Monaghan1, Bas van Dijk2, Jan Wouters3, Marc Moonen4, and Stefan Bleeck1
*these authors contributed equally to this work
1 ISVR - University of Southampton, Southampton, UK
2 Cochlear Technology Centre Belgium, Mechelen, Belgium
3 ExpORL - KU Leuven, Leuven, Belgium
4 ESAT - KU Leuven, Leuven, Belgium
Traditionally, algorithms that attempt to significantly improve speech intelligibility in noise
for cochlear implant (CI) users have met with limited success, in particular in the presence of a
fluctuating masker.
Motivated by previous intelligibility studies of speech synthesized using the ideal binary
mask [1] and its estimation by means of machine learning [2], we propose a framework that
integrates a multi-layer feed-forward artificial neural network (ANN) into CI coding strategies.
The algorithm decomposes the noisy input signal into time-frequency units, extracts a set
of auditory-inspired features and feeds them to the ANN to produce an estimation of which CI
channels contain more perceptually important information (i.e., a higher signal-to-noise ratio, SNR).
This estimate is then used accordingly to suppress the noise and retain the appropriate subset
of channels for electrical stimulation, as in traditional N-of-M coding strategies.
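A minimal sketch of the inference step might look as follows (random weights stand in for the trained network, the feature front end is omitted, and the dimensions are invented):

```python
import numpy as np

rng = np.random.default_rng(0)
n_feat, n_hidden, n_chan, n_select = 40, 32, 22, 8

# Random weights standing in for a trained two-layer feed-forward ANN.
W1, b1 = rng.standard_normal((n_hidden, n_feat)) * 0.1, np.zeros(n_hidden)
W2, b2 = rng.standard_normal((n_chan, n_hidden)) * 0.1, np.zeros(n_chan)

def select_channels(features):
    """Estimate per-channel SNR from auditory-inspired features, then keep
    the N channels judged most speech-dominated (N-of-M selection)."""
    h = np.tanh(W1 @ features + b1)
    snr_est = W2 @ h + b2                        # one estimate per CI channel
    keep = np.argsort(snr_est)[-n_select:]       # N best of M channels
    mask = np.zeros(n_chan, dtype=bool)
    mask[keep] = True
    return mask

print(select_channels(rng.standard_normal(n_feat)).sum())  # 8 channels retained
```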
Speech corrupted by speech-shaped and babble noise at different SNRs is processed
by the algorithm and re-synthesized with a vocoder. Evaluation has been performed in
comparison with the Advanced Combination Encoder (ACE) in terms of classification
performance and objective intelligibility measures. Results indicated significant improvement in
Hit – False Alarm rates and intelligibility prediction scores, especially in low SNR conditions.
These findings suggested that the use of ANNs could potentially improve speech
intelligibility in noise for CI users and motivated the collection of pilot data from CI users and
simulations with normal-hearing listeners. The results of this ongoing study will be presented
together with the objective evaluation.
Acknowledgments: The work leading to these results has received funding from the People Programme (Marie Curie Actions) of the European Union’s Seventh Framework Programme FP7/2007-2013/ under REA grant agreement n° PITN-GA-2012-317521.
[1] Y. Hu and P. C. Loizou, “A new sound coding strategy for suppressing noise in cochlear implants.,” J.
Acoust. Soc. Am., vol. 124, no. 1, pp. 498–509, Jul. 2008.
[2] Y. Hu and P. C. Loizou, “Environment-specific noise suppression for improved speech intelligibility by
cochlear implant users.,” J. Acoust. Soc. Am., vol. 127, no. 6, pp. 3689–95, Jun. 2010.
T43: A NEW WIRELESS RESEARCH PLATFORM FOR NUROTRON
COCHLEAR IMPLANTS
Hongbin Chen1, Yajie Lee1, Shouxian Chen2, Guofang Tang2
1 Nurotron Biotechnology Inc., Irvine, CA, USA
2 Zhejiang Nurotron Biotechnology LTD, Hangzhou, CHN
Nurotron cochlear implants (CI) have been successfully implanted in over 1800 patients.
There is common interest among researchers in China in performing basic psychoacoustic experiments and developing new signal processing strategies with these cochlear implant users.
We propose a new wireless research platform for developing and testing new signal processing
strategies on Nurotron’s cochlear implant users. The platform will be available only for
Nurotron’s next-generation speech processor with 2.4 GHz RF wireless connectivity. A wireless hub is connected to a PC through USB and transfers stimuli, control commands, and all other information to the speech processor bi-directionally. The DSP program file that resides in the speech processor needs to be updated to allow connection to the wireless hub and delivery of the given stimuli to the implant. Stimuli, either single-channel or multi-channel, are provided by a flexible pulse table, in which electrodes, amplitudes, and durations are specified. Testing material is processed
offline by researchers and built into pulse tables. User interface development can be done
either in C or in Matlab.
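A pulse table of the kind described might be represented as follows (field names and values are illustrative; the platform's actual table format is not specified in the abstract):

```python
from dataclasses import dataclass

@dataclass
class Pulse:
    electrode: int       # stimulating electrode index
    amplitude_ua: float  # current amplitude in microamps
    duration_us: float   # phase duration in microseconds

# Offline-processed test material compiled into a flexible pulse table:
pulse_table = [
    Pulse(electrode=4, amplitude_ua=300.0, duration_us=50.0),
    Pulse(electrode=9, amplitude_ua=250.0, duration_us=50.0),
    Pulse(electrode=4, amplitude_ua=320.0, duration_us=50.0),
]
for p in pulse_table:    # the wireless hub would stream these to the processor
    print(p)
```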
In this presentation, we report impedance measurement results through the platform on
an implant-in-box as well as on real users. The impedance data through the research interface are in the same range as those measured through Nurotron’s clinical user interface. Single-channel stimulation was performed in a lab test environment, and the pulse width, amplitude, stimulation rate, and other stimulation parameters were checked and verified with an
oscilloscope. We further report a rate discrimination task performed on three Nurotron cochlear
implant users and the findings conform to classic cochlear implant data. The results imply that
the wireless research platform is functioning properly and can be used in the future for other
psycho-physical investigations. More tasks such as rate modulation detection, pitch ranking and
continuous interleaved sampling (CIS) strategy will be carried out to further verify the
functionality of the research platform.
T44: REDUCE ELECTRICAL INTERACTION DURING PARALLEL
STIMULATION WITH NEGATIVE FLANK CURRENT
Michael S Marzalek
Arizona State University, Tempe, AZ, USA
Paired Stimulation outputs two channels simultaneously. This is attractive since, for an electrode refresh rate identical to Sequential Stimulation, the pulse width is doubled, power is halved, and voltage compliance is improved by a factor of two. However, outputting two channels simultaneously causes large channel interaction. Frijns explored solutions to the interaction problem in a CIAP
2013 talk[1]. Negative flank current can be used to reduce the current that flows between the
channels. The flank is placed far from the driving electrodes to minimize altering the monopolar
field. For example, when driving electrodes three and ten simultaneously, a negative flank of
about 25% of electrode three’s current is placed on electrode six to keep electrode three’s
current from altering electrode ten’s field. And a negative flank of about 25% of electrode ten’s
current is placed on electrode seven to keep electrode ten’s current from altering electrode
three’s field.
Homogeneous electric field plots will be used to show how much an electrode’s field is
altered as its pair goes from zero to 100% current for various configurations of electrode
spacings and flank strengths.
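A simplified one-dimensional version of such a homogeneous-field computation, using point-source superposition and the 25% flank weighting from the example above (the geometry and distances are invented):

```python
import numpy as np

x = np.linspace(0, 15, 301)          # position along the array, in electrode spacings

def monopolar_field(e_pos, i_amp, z=0.5):
    """Potential of a point source in a homogeneous medium, ~ I / distance."""
    return i_amp / np.sqrt((x - e_pos) ** 2 + z ** 2)

# Electrodes three and ten driven simultaneously, with the negative flanks
# from the example above placed between the driven sites:
field = (monopolar_field(3, 1.0)         # driving electrode three
         + monopolar_field(10, 1.0)      # driving electrode ten
         - monopolar_field(6, 0.25)      # flank: shields ten from three's current
         - monopolar_field(7, 0.25))     # flank: shields three from ten's current
print(round(float(field[60]), 2), round(float(field[200]), 2))  # field near x=3 and x=10
```

Sweeping one electrode's current from zero to 100% while plotting the other's local field, for various spacings and flank strengths, reproduces the kind of interaction plots the abstract describes.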
[1] Frijns JH, Bruijn S, Kalkman RK, Vellinga D, Briaire JJ. Reducing electrical interaction during
parallel stimulation using various compensation techniques. CIAP 2013
T45: CONSONANT PERCEPTION ENHANCEMENT USING SIGNAL
PROCESSING IN BIMODAL HEARING
Allison Coltisor, Yang-Soo Yoon, Christopher Hall
Texas Tech University Health Sciences Center, Lubbock, TX, USA
Background and Purpose: Bimodal configurations generally demonstrate considerable benefit for speech perception in noise. Even with significant improvements in technology, there is still a limit to the amount of benefit bimodal users are able to receive in noisy conditions with their devices. Through the use of a new signal processing tool, the three-dimensional deep search (3DDS), researchers have shown that normal hearing populations are able to
benefit from enhanced perceptual cues for consonant recognition in noisy environments (Li et
al., 2009). However, the effects of amplifying these critical spectro-temporal cues in bimodal
populations have not previously been examined to see if they receive the same benefit as
normal hearing listeners. The purpose of this study is to determine if bimodal users are able to
process and receive benefit from spectro-temporal cues for consonant recognition as presented
by 3DDS.
Methods: The current data set includes four bimodal adult patients who are all native
speakers of Standard American English with at least one year of bimodal experience. Patients
were seated one meter away from a front facing speaker. The stimuli were presented using
3DDS which works by evaluating the properties of acoustic cues when a consonant-vowel
syllable is 1) truncated in time, 2) high-/low-pass filtered in frequency, and 3) masked with white
noise, and assessing the importance of the removed component by analyzing the change in the
recognition score. Stimuli included fourteen of the most commonly confused consonants in the
English language presented in a consonant-/a/ context by one female talker. The consonants
presented were: /b/, /d/, /g/, /p/, /t/, /k/, /m/, /n/, /f/, /s/, /ʃ/, /v/, /ʤ/, & /z/. Each consonant was
presented 10 times in the hearing aid alone, cochlear implant alone, and combined condition at
+5 dB SNR, +10 dB SNR, and quiet with each token randomized at each respective SNR. The
researchers then used confusion matrix analyses to determine the error distribution from
confusion matrices before and after 3DDS processing to define the nature of bimodal benefit.
Results & Conclusions: Preliminary data show that patients who demonstrated bimodal benefit had a better ability to detect the combined spectral and temporal cues that are enhanced by 3DDS processing, while rejecting confusions. In contrast, patients with poor bimodal benefit demonstrated poor bimodal fusion and were unable to resolve confusions in all conditions. For
one subject who demonstrated bimodal interference, 3DDS appeared to help reduce bimodal
interference (in the CI+HA condition). The preliminary findings also suggest that more optimized perceptual cues could be transmitted, and bimodal benefit enhanced, if a HA is programmed to detect these cues. A more complete data analysis will be presented.
Funding provided by National Organization for Hearing Research Foundation
T46: A PIEZOELECTRIC ARTIFICIAL BASILAR MEMBRANE BASED ON
MEMS CANTILEVER ARRAY AS A FRONT END OF A COCHLEAR IMPLANT
SYSTEM
Jongmoon Jang1, JangWoo Lee2, Seongyong Woo1, David James Sly3, Luke Campbell3, Sungmin Han4, Jin-Ho Cho2, Stephen John O'Leary3, Ji-Wong Choi4, Jeong Hun Jang5, Hongsoo Choi1
1 Department of Robotics Engineering, Daegu Gyeongbuk Institute of Science and Technology (DGIST), Daegu, KOR
2 Graduate School of Electrical Engineering and Computer Science, Kyungpook National University, Daegu, KOR
3 Department of Otolaryngology, The University of Melbourne, Melbourne, AUS
4 Department of Information & Communication Engineering, DGIST, Daegu, KOR
5 Department of Otorhinolaryngology-Head and Neck Surgery, Kyungpook National University, Daegu, KOR
Cochlear implants (CIs) are designed to restore hearing for the profoundly deaf and are among the most successful neural prostheses. Although CIs are remarkably beneficial, there
are still limitations such as high power consumption, external exposure of the microphone and
processor, and inconvenience of activities while wearing the device. To overcome the
disadvantages of conventional CIs, we developed a piezoelectric artificial basilar membrane
(ABM) as a front end of a cochlear implant system.
The ABM is an acoustic sensor which mimics two main functions of the cochlea: frequency selectivity and acoustic-to-electric energy conversion. The frequency selectivity was achieved by an array of eight cantilever beams with lengths varying between 600 and 1350 µm, each 400 µm wide. Each cantilever beam acts as a mechanical filter with a specific resonance frequency. The acoustic-to-electric energy conversion was implemented with piezoelectric aluminum nitride (AlN). The ABM demonstrated frequency selectivity in the range of
2.6-3 kHz. To verify the utility of the ABM to elicit hearing responses, we conducted tests in
deafened guinea pigs to measure the electrically evoked auditory brainstem response (EABR)
to acoustic stimulation of the ABM. The piezoelectric output from the ABM was transformed to
an electrical signal by a signal processor. Then, the electrical signal was used to stimulate
auditory neurons via an electrode array inserted into the cochlea of the guinea pig. We
successfully measured EABRs by using the ABM while applying an acoustic stimulus of 95 dB
sound pressure level at the resonance frequency. The proposed ABM has the potential to be used as an acoustic sensor, without the need for an external battery, for a fully implantable artificial cochlea. Although additional future work is required for the ABM to be used in a fully implantable
CI application, here we have demonstrated the appropriate acoustic fidelity of the ABM and its
ability to elicit in vivo neural responses from the cochlea.
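For reference, the first-mode resonance of each beam follows the standard Euler-Bernoulli cantilever expression (a textbook formula, not taken from the abstract):

```latex
f_1 = \frac{(1.875)^2}{2\pi\,L^2}\,\sqrt{\frac{E\,I}{\rho\,A}},
\qquad I = \frac{w\,t^3}{12}, \quad A = w\,t
```

Here E is Young's modulus, ρ the density, L the beam length, w the width, and t the thickness. Since f1 scales as 1/L², the 600-1350 µm spread of beam lengths is what staggers the per-channel resonances.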
Acknowledgement
This research was supported by a National Research Foundation of Korea grant funded by the
Korean Government (2011-0013638 and 2014R1A2A2A01006223) and by DGIST MIREBraiN
Project.
T47: IMPROVING ITD BASED SOURCE LOCALIZATION FOR BILATERAL CI USERS IN
REVERBERANT CONDITIONS USING A NOVEL ONSET ENHANCEMENT ALGORITHM
Aswin Wijetillake, Bernhard U Seeber
Associated Institute Audio Information Processing, Technical University of Munich, Munich, DEU
When listening in reverberant and noisy conditions, interaural time difference (ITD) cues can often help unimpaired listeners perceive, locate, and comprehend a signal of interest. However, similar benefits typically do not extend to bilateral CI (BiCI) users, who are less sensitive to ITD cues, especially when those cues are encoded in the temporal fine structure of
a signal. BiCI users can exhibit some sensitivity to ITDs encoded in the onsets and the slowly
fluctuating temporal envelope of a signal, particularly if onsets are sharp and envelope
fluctuations are deep. These ITD cues were previously shown to aid localization in reverberant
space (Kerber and Seeber, 2013). The presence of reverberation or background noise can
reduce envelope sharpness and depth and hence the effectiveness of ITDs. The current study
evaluates a novel onset enhancement (OE) algorithm that selectively sharpens and deepens
onsets of peaks in the signal envelopes in a CIS coding strategy, with the aim of improving ITD
based source localization in reverberant conditions for BiCI users (Monaghan and Seeber,
2011). The algorithm uses knowledge of the short-term direct-to-reverberant ratio (DRR) to
select peaks that are dominated by the direct sound rather than reflections.
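A crude sketch of the selective sharpening idea (the gains, gating threshold, and envelope handling are invented; this is not the Monaghan and Seeber algorithm itself):

```python
import numpy as np

def enhance_onsets(env, drr_db, gain_db=6.0, drr_min_db=0.0):
    """Boost rising portions of a channel envelope, but only in frames whose
    short-term direct-to-reverberant ratio suggests direct-sound dominance."""
    out = env.copy()
    rising = np.diff(env, prepend=env[0]) > 0          # envelope onsets
    direct = drr_db > drr_min_db                       # direct-sound frames
    out[rising & direct] *= 10 ** (gain_db / 20)       # sharpen/deepen onsets
    return out

rng = np.random.default_rng(1)
env = np.abs(rng.standard_normal(100)).cumsum() / 50   # toy channel envelope
drr = rng.uniform(-5, 10, 100)                         # toy short-term DRR (dB)
print(enhance_onsets(env, drr)[:5])
```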
The efficacy of the algorithm to improve sensitivity to ITDs of the direct sound in
(simulated) reverberation and the algorithm’s impact on speech understanding were evaluated
with BiCI users with stimuli presented via direct stimulation using the MED-EL RIBII system. The
potential benefit of the OE-modified CIS algorithm was assessed relative to a standard CIS
strategy at a range of DRRs using an intracranial lateralization test and an Oldenburg sentence
test. Both tests employed speech stimuli that were convolved with binaural room impulse
responses (BRIR), to simulate conditions in a reverberant 4.7m x 6.8m x 2.5m (W x L x H)
concrete-walled room. BRIRs with varied source-receiver distances were used to generate
DRRs of 7, 2 and -3dB. The ITD of the direct signal was adjusted, and the interaural level
difference (ILD) set to 0dB, without altering the reflected signals. This ensured that outcomes
are not confounded by ILD cues in the direct signal or by shifts in the perceived position of
reflections. The OE and the standard CIS strategies both employed piece-wise linear, log-compressed acoustic-to-electric level maps on each electrode that incorporate electrical T and C levels, as well as an electrical level at around 80% of the dynamic range that was both matched in loudness with all other electrodes and produced a centralized percept when stimulated bilaterally. All electrical levels were measured prior to formal testing. Participants were pre-screened for envelope ITD sensitivity.
Previous evaluations using vocoders with unimpaired listeners indicated that the
algorithm can significantly improve ITD sensitivity for DRRs as low as -3.6dB without degrading
speech comprehension. Data collection with BiCI users is currently ongoing, the outcomes of
which will be discussed in this presentation.
This study is supported by BMBF 01 GQ 1004B (Bernstein Center for Computational Neuroscience Munich).
Literature:
Kerber, S., and Seeber, B. U. (2013). "Localization in reverberation with cochlear implants: predicting performance
from basic psychophysical measures," J Assoc Res Otolaryngol 14, 379-392.
Monaghan, J. J. M., and Seeber, B. U. (2011). "Exploring the benefit from enhancing envelope ITDs for listening in
reverberant environments," in Int. Conf. on Implantable Auditory Prostheses (Asilomar, CA), p. 246.
T48: EVALUATION OF A DEREVERBERATION ALGORITHM USING A
VIRTUAL ACOUSTICS ENVIRONMENT
Norbert Dillier1, Patricia Bleiker2, Andrea Kegel1, Wai Kong Lai1
1 ENT Department, University of Zurich, Zurich, CHE
2 Department of Information Technology and Electrical Engineering, ETH, Zurich, CHE
Reverberation and noise reduce speech intelligibility significantly and especially affect hearing-impaired persons. Several denoising and dereverberation techniques have been
developed in the past. The availability of wireless audio streaming options for hearing
instruments and CI sound processors provides new options for binaural signal processing
schemes in the future.
The processing algorithm evaluated in this study consists of three steps: the denoising
step, the removal of late reverberation parts and finally a general dereverberation stage based
on computed coherence between the input signals at both ears. For the denoising part, a
speech distortion weighted multi-channel Wiener filter (SDW-MWF) with an adaptable voice
activity detector (VAD) is used in order to achieve an optimal trade-off between noise reduction
and speech signal distortion.
In the second step a spectral subtraction filter is used in order to reduce late
reverberation. Finally, a coherence filter is applied based on the assumption that the
reverberated parts of a signal show low coherence between the left and the right ear. In addition to the basic multi-channel Wiener filter approach, which attenuates signal parts with low coherence, an adaptation with a non-linear sigmoidal coherence-to-gain mapping is used.
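The coherence stage can be sketched per time-frequency bin as follows (the sigmoid slope and midpoint are placeholder values):

```python
import numpy as np

def coherence_gain(L, R, slope=10.0, midpoint=0.6, eps=1e-12):
    """Interaural magnitude-squared coherence per frequency bin (averaged over
    time frames, axis 0), mapped to a gain through a non-linear sigmoid: low
    coherence (reverberant energy) is attenuated, high coherence (direct
    sound) is passed."""
    num = np.abs(np.mean(L * np.conj(R), axis=0)) ** 2
    den = np.mean(np.abs(L) ** 2, axis=0) * np.mean(np.abs(R) ** 2, axis=0) + eps
    msc = num / den
    return 1.0 / (1.0 + np.exp(-slope * (msc - midpoint)))

rng = np.random.default_rng(0)
L = rng.standard_normal((20, 257)) + 1j * rng.standard_normal((20, 257))
print(coherence_gain(L, L).min())   # identical ears -> coherence 1 -> gain near 1
```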
The performance of this denoising and dereverberation scheme was evaluated with
common objective measures such as signal-to-noise ratio (SNR) and signal-to-reverberation
ratio (SRR) as well as with the perceptual evaluation of speech quality (PESQ). In addition,
speech in noise and localization tests in noise with three groups of subjects (normal hearing,
NH; hearing instrument, HI; cochlear implant, CI) were performed. The virtual acoustics
environment test setup used real life multimicrophone sound recordings which were reproduced
through a 12 loudspeaker system using ambisonics processing. Reverberant speech was
generated from dry recordings using a database of binaural room impulse responses. The
dereverberation algorithm was implemented on a Speedgoat xPC Target realtime system which
processed two input signals and generated two output signals. The input signals were obtained from hearing instrument microphones placed at the two ears of the subject, who was seated in the center of the loudspeaker setup. The processed signals were presented to the two ears of
the subjects either via headphones (for NH and HI subjects) or via direct input into the CI sound
processors (for CI recipients). Data collection is ongoing and results will be presented at the
conference.
T49: EFFECTS OF MULTI-BAND TRANSIENT NOISE REDUCTION FOR CI
USERS
Phillipp Hehrmann1, Karl-Heinz Dyballa2, Volkmar Hamacher2, Thomas Lenarz2, Andreas Buechner2
1 Advanced Bionics GmbH, European Research Center, Hannover, DEU
2 Medical University of Hannover, Department of Otolaryngology, Hannover, DEU
Speech understanding in noise remains a challenge for CI users. Single-channel noise
reduction algorithms have been shown to improve CI speech performance in situations where
the noise is more stationary than the speech signal. Many real-life noises like clinking cups or
typing on a keyboard, however, fluctuate rapidly in level. In this study, a novel multi-band
transient noise reduction (TNR) algorithm designed for such noises was evaluated regarding
speech performance and subjective sound quality.
15 experienced users of Advanced Bionics’ CII or HiRes90k implant participated in this
study. Each user’s clinical program (TNRoff) was compared acutely to two conditions in which
the audio signal was pre-processed with different TNR algorithms. The first acted on the broadband signal in a frequency-independent fashion (TNRsingle). The second acted independently in four parallel frequency bands (TNRmult), applying frequency-specific gains to
the signal. All signals were delivered from a laptop to the audio input jack of a speech
processor.
We measured speech reception thresholds (SRTs) in realistic transient noise using the
Oldenburg sentence test as well as subjective ratings of speech clarity and comfort. Two noises
were used for speech testing and both subjective ratings, one mimicking a cafeteria scene
(dishes and multi-talker babble) and one resembling an office situation (typing, ringing phone
and door slam). For the comfort ratings, several other noises were presented in addition
(hammer, door, rustling paper, crackling foil). Friedman’s non-parametric ANOVA was used for family-wise comparisons between processing conditions, followed by Conover’s post-hoc test.
Significant SRT improvements over the baseline condition TNRoff were achieved with the
multi-band algorithm TNRmult in both types of noise, amounting to differences in median SRT of
2.4dB for the cafeteria noise and 1.5dB for the office noise. TNRsingle did not improve
intelligibility, and for the cafeteria situation resulted in a minor deterioration of 0.5dB. Both clarity
and comfort were improved with TNRmult over TNRoff in the cafeteria situation. Comfort with
TNRmult was also improved for the noise of crackling foil.
Our results show that a multi-band TNR algorithm can improve listening performance and
experience in different types of realistic transient background noise. Future work will address the
interaction between TNR and other processing steps of the speech coding strategy including
gain control and stationary noise reduction.
T50: INDIVIDUALIZATION OF TEMPORAL MASKING PARAMETER IN A
COCHLEAR IMPLANT SPEECH PROCESSING STRATEGY: TPACE
Eugen Kludt, Waldo Nogueira, Andreas Buechner
Department of Otolaryngology, Medical University Hannover, Cluster of Excellence “Hearing4all”, Hannover, DEU
The PACE (Psychoacoustic Advanced Combination Encoder) or MP3000 is the first
cochlear implant (CI) strategy that has been implemented in a commercial sound processor
using a psychoacoustic masking model to select the channels to be stimulated. So far only
simultaneous masking effects have been implemented, as this effect, from experience in normal
hearing subjects, holds the largest potential for saving bandwidth. The novel TPACE extends
the original PACE strategy with a temporal masking model. We hypothesize that a sound coding
strategy that transmits only meaningful or unmasked information to the auditory nerve can
improve speech intelligibility for CI users. We also hypothesize that speech intelligibility with
TPACE may depend on the amount of temporal masking.
The temporal masking model used within TPACE attenuates the simultaneous masking
thresholds over time. The attenuation is designed to fall exponentially with a strength
determined by a single parameter, the temporal masking half-life time constant T½. This
parameter gives the time interval at which the simultaneous masking threshold is halved.
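Equivalently, the masking threshold contributed by a frame Δt in the past is scaled by 2^(-Δt/T½); in dB terms, a minimal sketch following only the half-life definition above:

```python
def temporal_masking_db(simultaneous_threshold_db, dt_ms, t_half_ms=0.5):
    """Attenuate a past frame's simultaneous masking threshold so that it is
    halved in linear amplitude (about -6.02 dB) every t_half_ms."""
    return simultaneous_threshold_db - 6.0206 * (dt_ms / t_half_ms)

print(temporal_masking_db(40.0, dt_ms=1.0))   # 40 dB masker, 1 ms later
```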
The NIC research environment was used to test speech intelligibility in noise (HSM sentences) in 24 CI users using TPACE with T½ values from 0 (equivalent to PACE) to 1.1 ms.
Recordings of the electrically evoked compound action potential (ECAP) recovery functions
were performed in 12 of these subjects, to assess the strength of temporal masking.
TPACE with a T½ of 0.5 ms obtained a statistically significant speech performance increase of 11% (p < 0.05) with respect to PACE (T½ = 0 ms). The improved speech test scores
correlated with the clinical performance of the subjects: CI users with above-average outcome in
their routine speech tests showed higher benefit with TPACE. No correlation was observed
between the T½ giving the best speech intelligibility for the individual subject and his ECAP
recovery function.
It seems that the consideration of short-acting temporal masking can improve speech
intelligibility in CI users. The time constant with the highest average speech perception benefit
(0.5 ms) corresponds to time scales that are typical for neuronal refractory behaviour. However,
a correlation between the achieved benefit and the individual ECAP recovery functions of the
subjects could not be found.
The good results in speech intelligibility obtained with the TPACE strategy motivate the
implementation of this strategy in a real-time speech processor (Mathworks Simulink-xPC
platform; Goorevich & Batty, 2005) as the next step on the way towards a take-home clinical
evaluation.
Support provided by Cochlear and by the DFG Cluster of Excellence “Hearing4all”.
T51: ANIMAL-BASED CODING STRATEGY FOR COCHLEAR IMPLANTS
Claus-Peter Richter, Petrina LaFaire, Xiaodong Tan, Yingyue Xu, Maxin Chen, Nan Xia,
Pamela Fiebig, Alan Micco
Northwestern University, Chicago, IL, USA
Introduction: According to the literature on single nerve fiber recordings in different animal
species, the activity patterns evoked by acoustic stimuli compare well in guinea pigs, gerbils,
mice, chinchillas, and cats. A major difference among the response properties is related to the
frequency ranges of the stimuli to which the neurons respond. Assuming that the input to the
central auditory system is similar across species, one could argue that the cochlea acts as a
complicated encoder for sound induced vibrations of cochlear soft tissue structures and the
activity of the auditory nerve could be used across species to convey the acoustic information to
the brain. Taking this thought one step further, the sequence of action potentials that can be
recorded from the auditory nerve of a cat could be used to generate a sequence of electrical
pulses, which is then used to stimulate the human auditory nerve through a cochlear implant
(CI) at the corresponding site along the cochlea.
Method: The approach has three steps: (1) The central nucleus of the inferior colliculus
was surgically approached and a single tungsten or multichannel recording electrode was
placed. (2) Neural activity was recorded from well-identified single units while spoken words
were played to the animal. (3) Recorded trains of action potentials were converted into a
sequence of electrical pulses, which were played directly to the CI user with the Bionic Ear Data
Collection System (BEDCS). Before the sequence was played, each word was adjusted for
stimulus comfort level. Nine test subjects, age 18 and older, were enrolled in the study.
Results: Initial patient trials have shown that patients are able to discern the length of words and
rhythm. When patients completed a forced choice test between four words, they were able to
identify the correct word 30-40% of the time. All of the trials occurred with no training of the
patients. Further trials are underway.
Conclusions: Initial results show promise that lexical information can be transmitted from
the ICC of an animal to a human auditory system. The approach would allow parallel stimulation
at 16 electrode contacts, coding the entire 120 dB of acoustical dynamic range, and reducing
the power by reducing the average repetition rate at each channel to well below 100 Hz.
Moreover, loudness is no longer encoded by current levels, which can be held constant close to
stimulation thresholds.
This project has been funded with federal funds from the NIDCD, R01 DC011855, by
Lockheed Martin Aculight and by the Department of Biomedical Engineering at Northwestern
University.
T52: INVESTIGATING THE USE OF A GAMMATONE FILTERBANK FOR A
COCHLEAR IMPLANT CODING STRATEGY
Sonia Tabibi1, Wai Kong Lai2, Norbert Dillier2
1 Department of Information Technology and Electrical Engineering, ETH Zurich, Zurich, CHE
2 Laboratory of Experimental Audiology, ENT Department, University Hospital and University of Zurich, Zurich, CHE
Contemporary speech processing strategies in cochlear implants such as the Advanced Combination Encoder (ACE) use an FFT (Fast Fourier Transform) filterbank to extract
envelopes. In this case, the linearly-spaced FFT bins are combined in a way to imitate the
frequency resolution of the basilar membrane in a normal cochlea. However, this assignment of
the FFT bins to different channels is only approximately physiologically based, especially since
the bins are distributed linearly below 1000 Hz and logarithmically above 1000 Hz.
In recent years, the Gammatone filterbank has been introduced to describe the shape of
the impulse response function of the auditory system as estimated by reverse correlation
functions of neural firing times. Typically, the center frequencies of a Gammatone filterbank are
equally spaced on the ERB (Equivalent Rectangular Bandwidth) scale which gives an
approximation to the bandwidths of filters in the human auditory system. In this study, we
replace the FFT filterbank with an all-pole infinite impulse response (IIR) Gammatone filterbank
(Hohmann 2002) in the coding strategy; by ignoring the zeros in the original filter we cut the
computation effort nearly in half (Slaney 1993). The length of the Gammatone impulse response, especially for low-frequency channels, is very long (64 ms at a 16000 Hz sampling frequency), which is challenging for real-time implementations. Signal processing methods such as overlap-add and overlap-save were suggested as practical solutions to this problem.
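For reference, ERB-rate spacing of the center frequencies follows Glasberg and Moore's standard formulas (independent of the Hohmann implementation details):

```python
import numpy as np

def hz_to_erb_rate(f):
    """ERB-rate (Cam) scale, Glasberg & Moore (1990)."""
    return 21.4 * np.log10(4.37 * f / 1000.0 + 1.0)

def erb_rate_to_hz(e):
    return (10 ** (e / 21.4) - 1.0) * 1000.0 / 4.37

def gammatone_cfs(f_lo=80.0, f_hi=8000.0, n=22):
    """Center frequencies equally spaced on the ERB-rate scale."""
    return erb_rate_to_hz(np.linspace(hz_to_erb_rate(f_lo),
                                      hz_to_erb_rate(f_hi), n))

print(np.round(gammatone_cfs(n=8)))   # denser spacing at low frequencies
```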
The Gammatone filterbank has greater resolution at low frequencies compared to the FFT filterbank. In order to test the resolution capability of the Gammatone filterbank, a discrimination test
with two-component complex tones was carried out using an adaptive 3-interval forced-choice
task. In two of these intervals, the frequencies of the two components are kept at nominal
values. In the third interval, the second component’s frequency is gradually adjusted adaptively
until the listener is just able to identify this interval from the other two intervals correctly. This
was carried out for both FFT and Gammatone filterbanks separately and the results compared.
Pilot test results with CI listeners will be presented and discussed.
References:
[1] V. Hohmann, “Frequency analysis and synthesis using a gammatone filterbank”, Acta
Acustica United with Acustica, 2002.
[2] Malcolm Slaney, “An efficient implementation of the Patterson-Holdsworth auditory filterbank”,
Apple Computer Technical Report, 1993.
[This work has received funding from the People Programme (Marie Curie Actions) of the
European Union’s Seventh Framework Programme FP7/2007-2013/ under REA grant agreement n° PITN-GA-2012-317521].
T53: SIMULATING PINNA EFFECT BY USE OF THE REAL EAR SOUND
ALGORITHM IN ADVANCED BIONICS CI RECIPIENTS
Amy Stein, Chen Chen, Matthias Milczynski, Leonid Litvak, Alexander Reich
Advanced Bionics, LLC, Valencia, CA, USA
In acoustic hearing, listeners use pinna cues to aid in sound localization ability and to
spatially separate speech signals from competing noise sources. Hearing impaired listeners lose
these pinna cues due to the traditional placement of BTE hearing instrument microphones (i.e.,
atop the pinna). Phonak’s Real Ear Sound (RES) algorithm has proven effective in reducing
front-back localization confusions in BTE hearing aid listeners. In this study, we applied the Real
Ear Sound algorithm to the Advanced Bionics Naida CI sound processor and compared
performance of this algorithm on localization and speech in noise tasks to performance with an
omnidirectional microphone as well as the T-Mic™ 2 microphone.
Subjects were 17 adult Advanced Bionics cochlear implant users; 10 bilaterally implanted
and 7 unilaterally implanted. Localization ability was assessed utilizing six speakers, three in the
front hemifield (-45°, 0° and 45°) and three in the back hemifield (135°, 180° and 225°). A
stimulus (a train of 4 pink noise bursts, each of 170 ms duration, 10 ms rise/fall time, 50 ms inter-burst interval) was presented at 50 dB SPL (+/- 5 dB roving to remove level cues), with subjects
indicating the speaker location from which they perceived the sound had originated.
Performance with the omnidirectional microphone setting was significantly worse than
performance with the T-Mic2 and RES.
Speech perception in noise was measured at 60 dB SPL. The speech signal was
presented at 0° azimuth; R-SPACE restaurant noise was presented from speakers surrounding
the subject (90°, 135°, 180°, 225° and 270°), but never from the same speaker from which the
speech signal was presented, and the signal-to-noise ratio (SNR) was adjusted to achieve ~
50% score relative to the score obtained with the patient’s clinical program in a quiet listening
condition. Two TIMIT sentence lists were presented for each test condition. Performance with
the omnidirectional microphone setting was significantly worse than performance with the T-Mic 2 and RES.
Results suggest that use of the T-Mic 2, placing the microphone at the entrance to the ear canal, allows listeners to utilize natural pinna cues for localization and speech perception in noise. The RES algorithm may lack the usability benefits of the T-Mic 2; however, it may provide
similar pinna cues and allow CI recipients to achieve improvement in performance relative to the
omnidirectional microphone condition.
T54: IMAGE-GUIDED FREQUENCY-PLACE MAPPING IN COCHLEAR IMPLANTS
Hussnain Ali1, Jack H. Noble2, Rene H. Gifford3, Robert F. Labadie4, Benoit M. Dawant2, John H. L. Hansen1, Emily A. Tobey5
1 Department of Electrical Engineering, The University of Texas at Dallas, Richardson, TX, USA
2 Department of Electrical Engineering and Computer Science, Vanderbilt University, Nashville, TN, USA
3 Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, TN, USA
4 Department of Otolaryngology, Vanderbilt University Medical Center, Nashville, TN, USA
5 School of Brain and Behavioral Sciences, The University of Texas at Dallas, Richardson, TX, USA
Contemporary cochlear implant (CI) sound processors filter acoustic signals into different
frequency bands and provide electrical stimulation to tonotopically distributed spiral ganglion nerve fibers
via an electrode array which is blindly threaded into the cochlea during surgery. The final positions of the
electrodes in relation to the nerve fibers are generally unknown, resulting in a unique electrode
positioning for each patient. This is in part due to the variable length of the cochlea with respect to the
physical insertion depth of the electrode array. Despite this, default frequency assignments are a
common practice in clinical fitting procedures. Suboptimal electrode array placement, variations in
insertion depth, and exact positioning and proximity of electrodes to nerve fibers can all result in a
mismatch between the intended and actual pitch perception. This frequency mismatch holds potential for reducing the efficacy of the information coded to the auditory cortex and, consequently, limiting speech recognition. The present study leverages image-guided procedures to determine the true location of
individual electrodes with respect to the nerve fibers and proposes a patient-specific frequency
assignment strategy which helps to minimize sub-optimal frequency-place mapping distortions in CIs.
Prior research in this domain suggests that peak performance is achieved when the full acoustic range is mapped such that the analysis bands exactly match the tonotopic map of the cochlea.
While patients adapt to minor mismatch over time, severe mismatch, as seen with shallow insertion, can
result in significant spectral distortion (Başkent & Shannon, 2005) and hence limit the level of asymptotic
performance as well as increase adaptation time (Fu et al., 2002).
The proposed strategy utilizes pre and post implantation CT scans of recipients’ cochleae to
determine precise spatial location of electrode contacts and the corresponding neural stimulation sites
and thus generate an optimal user-customized frequency-place function which is used to derive
frequency characteristics of the filterbanks. This is achieved by maximizing the frequency match at lower
frequencies (frequency range of first three formants), and introducing mild compression as needed to
avoid truncation (e.g., due to shallow insertion). Mid and high frequency bands are assigned
conventional logarithmic filter spacing. The performance of the proposed strategy was evaluated with 42
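A common closed form for the tonotopic frequency-place relation used in such customization is Greenwood's function; a minimal sketch with the standard human parameters (the electrode positions below are invented, and the study's actual mapping adds the formant-range matching and mild compression described above):

```python
import numpy as np

def greenwood_hz(x):
    """Characteristic frequency at relative cochlear position x (0 = apex,
    1 = base), with Greenwood's (1990) human parameters."""
    return 165.4 * (10 ** (2.1 * x) - 0.88)

# Map estimated electrode positions (e.g., from CT) to tonotopic frequencies:
positions = np.linspace(0.35, 0.85, 12)   # illustrative insertion range
print(np.round(greenwood_hz(positions)))
```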
normal hearing (NH) listeners using vocoder simulations. The simulation data indicate significantly better speech recognition scores than the default clinical mapping scheme on all measures. Preliminary
investigation with one CI user indicates statistically significant improvement in speech recognition and
perception scores relative to the clinical map in acute experiments.
Lack of knowledge on the spatial relationship between electrodes and the stimulation sites has
resulted in a generic one-size-fits-all frequency mapping paradigm with the hope that CI users will learn
to adapt to the incorrect frequency locations of stimulation. The proposed solution optimizes the
frequency-to-place mapping based on individual’s cochlear physiology and true location of electrodes.
The data from the present study suggest that user customized frequency maps can potentially aid in
achieving higher asymptotic performance and possibly faster adaptation to electric hearing.
Başkent, D., and Shannon, R. V. (2005). "Interactions between cochlear implant electrode insertion depth and
frequency-place mapping," The Journal of the Acoustical Society of America, 117 (3), 1405-1416.
Fu, Q. J., Shannon, R. V., and Galvin III, J. J. (2002). "Perceptual learning following changes in the frequency-to-electrode assignment with the Nucleus-22 cochlear implant," The Journal of the Acoustical Society of America, 112 (4), 1664-1674.
T55: INTEGRATION OF PLACE AND TEMPORAL CODING IN COCHLEAR
IMPLANT PROCESSING
Xin Luo
Purdue University, West Lafayette, IN, USA
In multichannel cochlear implant (CI) systems, the place and temporal rate of electric
stimulation can be independently manipulated to present pitch cues, although both with limited
success. Our previous study showed that CI users achieved better pitch contour identification
when the place and temporal cues on a single electrode pair changed in the same direction (i.e.,
both rising or both falling) rather than in the opposite directions. Such psychophysical results
motivated us to study the integration of place and temporal cues in speech and music
perception with real-time multichannel CI processing.
Adult postlingually deafened Advanced Bionics HiRes90K CI users were tested with an
experimental Harmony sound processor. In the BEPS+ fitting software, a CI program template
was selected based on the HiRes-120 strategy with MP-mode current steering. Place coding
within each channel (or on each electrode pair) was either enabled using a steering range from
0 to 1 or disabled using a steering range from 0.5 to 0.5 (i.e., using a fixed middle virtual
channel). On the other hand, temporal coding within each channel was either enabled or
disabled in the form of pulse amplitude modulation (AM) following the frequency of the spectral
peak within the channel. Together, there were four experimental strategies (i.e., no-steering-no-AM, steering-only, AM-only, and steering-AM), which were matched in the number of
channels, pulse rate in each channel, and overall loudness level. The fitting procedure for each
strategy was similar to that used in clinic. After 20 minutes of listening experience, each strategy
was tested in random order for a series of psychophysical (i.e., pitch ranking and spectral ripple
discrimination), music (i.e., melodic contour identification), and speech tests (i.e., AzBio
sentence recognition in speech babble noise and vocal emotion recognition).
Preliminary results showed that some subjects received benefits from place coding with
current steering, while others received benefits from temporal coding with AM frequency
changes. Combining place and temporal coding did not necessarily provide the best
performance. Also, different tests were not equally sensitive to the availability of place and
temporal coding.
Research was supported by NIH/NIDCD grant R21-DC011844.
WEDNESDAY POSTER ABSTRACTS
W1: EFFECT OF CHANNEL ENVELOPE SYNCHRONY ON INTERAURAL TIME
DIFFERENCE SENSITIVITY IN BILATERAL COCHLEAR IMPLANT LISTENERS
Tom Francart, Anneke Lenssen, Jan Wouters
ExpORL, Dept. Neurosciences, KU Leuven, Leuven, BEL
For a periodic acoustic input signal, the channel envelopes coded by current bilateral
cochlear implant (CI) sound processors can be asynchronous. The effect of this asynchrony on
sensitivity to interaural time differences (ITD) was assessed.
ITD sensitivity was measured in six bilateral CI listeners for single and three-electrode
stimuli. The three-electrode stimuli contained envelope modulations, either synchronous or
asynchronous across electrodes, with delays of 1.25 up to 5.00 ms. Each individual electrode
carried the same ITD. Either neighboring electrodes were chosen, or a separation of four
electrodes, to investigate the effect of electrode distance.
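For concreteness, a minimal sketch of how such stimuli could be parameterized (our reconstruction, not the authors' code; the envelope rate and sampling grid are arbitrary): identical modulators on each electrode, a per-electrode onset delay setting the across-channel asynchrony, and the same ITD applied to every channel.

```python
import numpy as np

FS = 10000.0  # envelope sampling rate in Hz (arbitrary)

def channel_envelopes(delays_ms, fm=100.0, dur=0.3):
    """Identical raised-cosine envelopes on several electrodes, each
    delayed by its entry in delays_ms (across-channel asynchrony)."""
    t = np.arange(int(dur * FS)) / FS
    env = []
    for d in delays_ms:
        td = t - d / 1e3
        env.append(np.where(td >= 0, 0.5 * (1 - np.cos(2 * np.pi * fm * td)), 0.0))
    return np.array(env)

def binaural_stimulus(delays_ms=(0.0, 2.5, 5.0), itd_us=400.0):
    """Left/right envelopes: every channel carries the same ITD, while
    delays_ms sets the envelope asynchrony across electrodes."""
    left = channel_envelopes(delays_ms)
    right = channel_envelopes([d + itd_us / 1e3 for d in delays_ms])
    return left, right

left, right = binaural_stimulus(delays_ms=(0.0, 1.25, 2.5))
print(left.shape)  # (3 channels, 3000 samples)
```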
With synchronous envelopes, no difference in ITD sensitivity was found between single
electrode, adjacent 3-electrode and spaced 3-electrode stimuli. A decrease in ITD sensitivity
was found with increasing across-channel envelope asynchrony, which was consistent with the
use of the across-electrode aggregate stimulation pattern rather than individual information
channels for ITDs. No consistent effect of electrode separation was found.
While the binaural system was resilient to small delays between envelopes, larger delays
significantly decreased ITD sensitivity, both for adjacent and further spaced electrodes. For the
development of stimulation strategies that enable the use of ITDs, this means that envelopes
presented to electrodes close together should, as far as possible, be modulated synchronously,
without compromising other aspects such as speech intelligibility.
W2: IMPROVING SENSITIVITY TO INTERAURAL TIMING DIFFERENCES FOR
BILATERAL COCHLEAR IMPLANT USERS WITH NARROWBAND
FREQUENCY MODULATIONS IN HIGH RATE ELECTRICAL PULSE TRAINS
Alan Kan, Ruth Y Litovsky
Waisman Center, University of Wisconsin-Madison, Madison, WI, USA
Bilateral cochlear implant (CI) users typically show little reliance on interaural timing
differences (ITDs) when locating sounds in the free-field, even though ITDs are an important sound
localization cue for normal hearing listeners. However, ITD sensitivity has been shown from
measurements with synchronized research processors, which suggests that current speech
processors are not encoding ITDs in the acoustic signals in a way that can be used by CI
patients. Speech processors typically use high stimulation rates to encode the speech envelope
but ITD sensitivity has been shown to be inversely related to the rate of stimulation. For a
constant amplitude stimulus, low stimulation rates typically lead to good ITD sensitivity, but
sensitivity declines as the rate of stimulation increases above ~300 pulses per second (pps).
Slow amplitude modulations (AM) imposed on a high rate pulse train can improve ITD sensitivity
to levels comparable to low stimulation rates, but a large modulation depth is usually required.
Random jittering of electrical pulses at high stimulation rates has also been shown to promote
ITD sensitivity, though it is not as good as that at low rates.
The effect of rate and modulation depth poses a conundrum for developing new
strategies to improve access to ITDs in bilateral CI users. Low stimulation rates are needed for
good ITD sensitivity but high stimulation rates which are used in CI processors today are
needed to encode acoustic envelopes with fidelity to ensure good speech understanding. In this
study, we examined whether narrowband frequency modulation (FM) of a high rate electrical
pulse train can be used to improve detection of changes in ITDs. The use of narrowband FM
has an advantage over random jittering in that there should be less disturbance to the encoding
of the acoustic envelope. ITD just noticeable differences (JNDs) were measured with a 4000-Hz
electrical pulse train. The pulse train either had: (1) constant amplitude and rate; (2) a 100-Hz
FM imposed on the timing of the pulses; (3) a 100-Hz AM imposed on the amplitude of the
pulses; or (4) a 100-Hz FM + 100-Hz AM imposed in-phase on the timing and amplitude of the
pulses, respectively. For FM, the maximum deviation of the instantaneous frequency from the
carrier frequency was 100 Hz, and for AM, the modulation depth was at 100%.
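A minimal sketch of how these four conditions could be generated (an illustration under the parameters stated above, not the study's stimulation code): the instantaneous pulse rate is sinusoidally frequency modulated and pulse amplitudes are modulated in phase.

```python
import numpy as np

def fm_am_pulse_train(carrier_pps=4000.0, mod_hz=100.0, fm_dev_hz=100.0,
                      am_depth=1.0, dur=0.3):
    """Pulse times and amplitudes for a high-rate train with sinusoidal FM
    of the instantaneous rate and in-phase AM; set fm_dev_hz or am_depth
    to zero to obtain the FM-only, AM-only, or unmodulated conditions."""
    dt = 1e-6                                   # 1-us time grid
    t = np.arange(0.0, dur, dt)
    # Integrate f(t) = carrier + dev*sin(2*pi*fm*t) to get the phase, then
    # emit a pulse each time the phase advances by one full cycle.
    phase = carrier_pps * t - fm_dev_hz / (2 * np.pi * mod_hz) * (
        np.cos(2 * np.pi * mod_hz * t) - 1.0)
    pulse_t = t[np.searchsorted(phase, np.arange(1, int(phase[-1]) + 1))]
    amp = (1 - am_depth) + am_depth * 0.5 * (1 + np.sin(2 * np.pi * mod_hz * pulse_t))
    return pulse_t, amp

pt, amp = fm_am_pulse_train()
print(len(pt), round(float(amp.min()), 3))      # ~1200 pulses, AM reaches 0
```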
Results show that narrowband FM can improve ITD JNDs over those for a constant
amplitude pulse train in some listeners, though not as much as AM alone.
When both FM and AM were applied, there could also be an improvement in ITD JNDs over the AM-only condition. These results suggest that the application of narrowband FM to high rate
electrical pulse trains may be useful for increasing ITD sensitivity in CI signal processing
strategies.
Work supported by NIH-NIDCD (R01DC003083 to RYL) and in part by NIH-NICHD
(P30HD03352 to the Waisman Center).
W3: IMPROVEMENT IN SPEECH INTELLIGIBILITY AND SIGNAL DETECTION
BY CODING OF INTERAURAL PHASE DIFFERENCES IN BICI USERS
Stefan Zirn, Susan Arndt, Thomas Wesarg
Department of Oto-Rhino-Laryngology of the Medical Center, University of Freiburg, Germany, Freiburg, DEU
The binaural intelligibility level difference (BILD) and binaural masking level difference
(BMLD) are manifestations of binaural unmasking. We measured BMLD and BILD in BiCI users
provided with two MED-EL CI systems and in normal-hearing (NH) listeners. The scope of the study was to
compare the constant rate stimulation strategy HDCIS with the fine structure coding strategy
FS4 in conditions with and without interaural phase differences (IPD) in a fixed set of good
performing CI users with consideration of adaptation times. Furthermore, we implemented a
control experiment (cExp) where BMLD was measured on pitch matched electrode pairs with
high temporal precision using direct stimulation in the same set of CI users.
The results showed a large BILD in NH subjects (n=8) of 7.5 ± 1.3 dB**. In contrast, BiCI
users (n=7) revealed no significant BILD with HDCIS (0.4 ± 0.6 dB) and only a barely significant
BILD with FS4 (0.6 ± 0.9 dB*). The available cues for the BILD arising from IPD were interaural differences in
the envelope (IED) and the fine structure (IFD). To investigate if IFD is at all effective in these CI
users, we measured BMLD using tones as signal and masker. IEDs were not provided in this
kind of BMLD experiment. The available cues for signal detection were IFD and (interaural) level
differences. Tests with the clinical processors revealed no BMLD for HDCIS and a small but not
significant BMLD for FS4. The perceptible cues were predominantly level differences that could
be used by BiCI users to detect a difference in a 3-AFC experiment both monaurally and binaurally. In
contrast, the cExp revealed a considerable BMLD based on IFD in the same set of CI users.
The implication of these experiments is a possible improvement of binaural unmasking in BiCI
users by adjusting tonotopy and increasing the precision of interaural stimulation timing at the
apical electrodes intended to transmit fine structure information.
This work was supported by MED-EL Elektromedizinische Geraete Gesellschaft m.b.H.
W4: FRONT-END DYNAMIC-RANGE COMPRESSION PROCESSING EFFECTS
ON MASKED SPEECH INTELLIGIBILITY IN SIMULATED COCHLEAR
IMPLANT LISTENING
Nathaniel J Spencer, Lauren E Dubyne, Katrina J Killian, Christopher A Brown
University of Pittsburgh, Pittsburgh, PA, USA
The front-end automatic gain control (AGC) of cochlear implant (CI) processing schemes
currently operates independently at each ear in a bilateral configuration. It also often employs fast
processing and is applied to the broadband waveform by each device. Such processing
might hinder everyday listening performance in bilateral CI users. In the
current study, we tested the hypothesis that benefits might be found in alternatives to single-band,
channel-unlinked AGC processing, using a speech intelligibility task in which performance is
measured for simulated bilateral CI users. Speech materials were binaurally
presented at a -2 dB input signal-to-noise ratio (SNR), with target presented to the left of midline
and masker presented to the right of midline, at spatial angles of +/- 15 and +/-30 degrees.
Effects of two single-band, channel-unlinked AGC alternatives were tested: “channel-linked”
AGC, in which the same gain control signal was applied to each of the two CIs at all time points,
and “multi-band AGC”, in which AGC acted independently on each of a number of narrow
frequency regions, per CI channel. Effects were tested as a function of compression threshold
level, with less gain control applied with increasing threshold level. Significant main effects were
found for both main manipulations, suggesting benefits for channel-linked AGC over channel-unlinked AGC, and for multi-band AGC over single-band AGC. Effects were also observed for
high threshold over low threshold, and also suggested that performance was better on average
for the +/-30 degree condition than for the +/-15 degree condition, as expected. Visual
inspection of the preliminary data suggests benefits of up to, and sometimes exceeding, 20%, for
channel-linked AGC or multi-band AGC (in channel-unlinked processing), depending on the
condition. These data suggest promise for channel-linked AGC and multi-band AGC as
alternatives to single-band, channel-unlinked AGC processing.
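To make the channel-linked idea concrete, here is a minimal broadband sketch (assumed compression threshold, ratio, and time constant; not the study's implementation): a single smoothed level estimate, taken as the max across ears, drives an identical gain at both ears so the interaural level difference is preserved.

```python
import numpy as np

def compressive_gain_db(level_db, threshold_db=60.0, ratio=3.0):
    """Static rule: above threshold, output grows at 1/ratio dB per dB."""
    return -np.maximum(level_db - threshold_db, 0.0) * (1.0 - 1.0 / ratio)

def linked_agc(left, right, fs, tau=0.005, **kw):
    """Channel-linked broadband AGC sketch: one smoothed level estimate
    (max across ears) drives the same gain at both ears, leaving the
    interaural level difference intact."""
    alpha = np.exp(-1.0 / (tau * fs))           # one-pole smoother
    drive = np.maximum(np.abs(left), np.abs(right))
    env, lvl = 0.0, np.empty(len(drive))
    for i, x in enumerate(drive):
        env = alpha * env + (1 - alpha) * x
        lvl[i] = env
    lvl_db = 20 * np.log10(np.maximum(lvl, 1e-12)) + 100.0  # crude dB reference
    g = 10 ** (compressive_gain_db(lvl_db, **kw) / 20.0)
    return left * g, right * g

fs = 16000
t = np.arange(fs) / fs
sig = np.sin(2 * np.pi * 500 * t)
out_l, out_r = linked_agc(0.5 * sig, 0.1 * sig, fs)  # ~14-dB ILD preserved
```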
W5: COMPARING DIFFERENT MODELS FOR SOUND LOCALIZATION
WITHIN NORMAL-HEARING AND COCHLEAR IMPLANT LISTENERS
Christian Wirtz1, Joerg Encke2, Peter Schleich3, Peter Nopp3, Werner Hemmert2
1 MED-EL Deutschland GmbH, Starnberg, DEU
2 Technische Universität München, München, DEU
3 MED-EL, Innsbruck, AUT
The human brain has the remarkable ability to localize and separate sound sources in
complex acoustic scenarios. This is done by exploiting interaural time and level differences of
the sound waves arriving at the left and right ears. In normal hearing (NH) listeners, the inner hair
cells, together with the auditory nerve, transcode those physical sounds into neuronal firing patterns. For
this process, several models exist [1, 2]. In the case of Cochlear Implant (CI) listeners, the
speech processor’s coding strategy replaces the function of the inner ear and the auditory nerve is excited
with electric currents. In this case, we modeled auditory spike responses with a model
developed by Nicoletti et al. [3].
We have evaluated binaural information using two models: we have adapted the binaural
model proposed by Lindemann [4] such that it works with spike inputs [5]. This model is based
on neuronal delay lines and coincidence detectors. We have also used a model implemented by
Encke et al. [7], who followed a more physiologically motivated approach based on ITD processing
in the medial superior olives (MSO, compare [6]). With ANF spike trains as a common physical
quantity it is possible to directly compare NH and CI listeners with the binaural models.
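The delay-line-and-coincidence-detector idea can be sketched very simply on spike trains (a toy stand-in for the adapted Lindemann model, with arbitrary window and delay grid):

```python
import numpy as np

def coincidence_counts(left_us, right_us, delays_us, win_us=100.0):
    """Count left/right spike pairs coinciding within win_us after an
    internal delay is applied to the left input: a toy delay-line and
    coincidence-detector binaural stage operating on spike times."""
    counts = []
    for d in delays_us:
        shifted = left_us + d                   # internal delay line
        idx = np.searchsorted(shifted, right_us)
        lo = np.clip(idx - 1, 0, len(shifted) - 1)
        hi = np.clip(idx, 0, len(shifted) - 1)
        nearest = np.minimum(np.abs(shifted[lo] - right_us),
                             np.abs(shifted[hi] - right_us))
        counts.append(int(np.sum(nearest <= win_us)))
    return np.array(counts)

rng = np.random.default_rng(0)
left = np.sort(rng.uniform(0, 1e6, 200))        # spike times in us
right = np.sort(left + 300 + rng.normal(0, 30, 200))  # right ear lags 300 us
delays = np.arange(-1000, 1001, 100)
print(delays[np.argmax(coincidence_counts(left, right, delays))])  # ~300
```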
Properties such as the spatial accuracy of the binaural models and the localization
threshold in background noise were compared for NH and CI listeners. Even though the outputs
of the two models differ, it was shown that both models can track moving sound sources and
they predict similar results for the localization threshold.
In summary, the physiologically motivated models are valuable tools for the evaluation of
binaural cues and they provide quantitative insight into how sound source localization works in
humans. In addition, in combination with models of the electrically excited auditory nerve,
these models provide a platform to test hypotheses about where sound localization is most
severely limited in long-term deafened animals and in CI users.
[1] Zilany, M. S. A., Bruce, I. C., & Carney, L. H. (2014). Updated parameters and expanded simulation options for a
model of the auditory periphery. The Journal of the Acoustical Society of America, 135(1), 283-6.
doi:10.1121/1.4837815
[2] Wang, H. (2009). Speech coding and information processing in the peripheral human auditory system. PhD
Thesis, Technische Universität München.
[3] Nicoletti, M., Wirtz, C., and Hemmert, W. (2013). Modelling Sound Localization with Cochlear Implants. In
Blauert, J., editor, Technol. Binaural List., pages 309-331. Springer.
[4] Lindemann, W. (1986). Extension of a binaural cross-correlation model by contralateral inhibition. I. Simulation
of lateralization for stationary signals. J Acoust Soc Am, 80(6):1608-1622.
[5] Wirtz, C. (2013). Modelling Sound Source Localization in Cochlear Implants. CIAP 2013.
[6] Grothe, B. and Pecka, M. (2014). The natural history of sound localization in mammals: A story of neuronal
inhibition. Front. Neural Circuits, 8(October):1-19.
[7] Encke, J. and Hemmert, W. (2015). A spiking neuron network model of ITD detection in cochlear implant
patients. CIAP 2015.
This work was funded by a grant from MED-EL Innsbruck, within the Munich Bernstein Center for Computational
Neuroscience by the German Federal Ministry of Education and Research (reference number 01GQ1004B and
01GQ1004D) and the DFG "Ultrafast and temporally precise information processing: normal and dysfunctional
hearing" SPP 1608 (HE6713/1-1)
W6: CORTICAL DETECTION OF INTERAURAL TIMING AND LEVEL CUES IN
CHILDREN WITH BILATERAL COCHLEAR IMPLANTS
Vijayalakshmi Easwar1, Michael Deighton1, Parvaneh Abbasalipour1, Blake C Papsin1,2,3
and Karen A Gordon1,2
1 Archie’s Cochlear Implant Laboratory, The Hospital for Sick Children, Toronto, Ontario, Canada
2 Otolaryngology, University of Toronto, Toronto, Ontario, Canada
3 Otolaryngology, The Hospital for Sick Children, Toronto, Ontario, Canada
Although children receiving bilateral cochlear implants (CI) develop binaural hearing, their
perception of interaural timing cues (created when sounds occur away from midline) remains
abnormal. Electrophysiological evidence from our lab indicates that binaural timing cues are
detected in the brainstem as normally expected in children with bilateral CIs. With that in mind,
the breakdown of detectable binaural timing cues may be due to abnormal representation or
processing at the level of the cortex. The aims of the present study were thus to evaluate
cortical identification of binaural cues in children with bilateral CIs. We hypothesized that
interaural level cues would be represented more similarly to peers with normal hearing than
interaural timing cues.
Twenty children who received bilateral CIs and 10 children with normal hearing were
recruited for this study. Cortical auditory evoked potentials were recorded across 64 cephalic
electrodes to bilaterally presented stimuli in the three following conditions: 1) no binaural cues
(no interaural level and onset time differences between the ears), 2) binaural level cues (10
dB/CU increase in the left or right ear), and 3) binaural timing cues (leading by 400 and 1000 µs
in the left or right ear). Biphasic pulses were delivered to electrode 20 (apical electrode) in
children with CIs and tones were delivered through insert earphones in children with normal
hearing. The strength and location of stimulus-related activity were evaluated using the time-restricted artifact and coherent suppression (TRACS) beamformer method for the P1/Pci and
N2/N2ci peaks.
Preliminary analyses indicate that in children who wear bilateral CIs, bilateral stimulation
with and without level cues elicits similar activity in left and right hemispheres for the Pci peak
unlike normal hearing children who show similar bilateral cortical activity only in conditions with
level cues. In the N2/N2ci peak, children with CIs show dominant activity in the left hemisphere
with or without level cues whereas children with normal hearing show stronger activity in the
hemisphere contralateral to the ear with higher level stimuli. With timing cues, children with CIs
show increased activity in the left hemisphere for Pci and N2ci peaks whereas children with
normal hearing show increased activity in the hemisphere contralateral to the ear leading by
1000 µs for the N2 peak. These results suggest that bilateral implantation promotes a unique
pattern of cortical activity in response to bilateral stimulation in children. Interesting differences
from pure tone listening are consistent with earlier developmental work from our lab and will be
described further.
This work was funded by Canadian Institutes of Health Research.
W7: WHEN PERCEPTUALLY ALIGNING THE TWO EARS, IS IT BETTER TO
ONLY USE THE PORTIONS THAT CAN BE ALIGNED OR TO USE THE
WHOLE ARRAY?
Justin M. Aronoff1, Allison Laubenstein1, Amulya Gampa2, Daniel H. Lee1, Julia
Stelmach1, Melanie J. Samuels1, Abbigail C. Buente1
1 University of Illinois at Urbana-Champaign, Champaign, IL, USA
2 University of Illinois at Chicago, Chicago, IL, USA
With bilateral cochlear implant patients, stimulation in the left and right ears rarely goes to
matched cochlear locations because of differences in insertion depth and cell survival between
the ears. Leaving these perceptual misalignments uncorrected can dramatically reduce patients’ binaural
benefits such as their ability to localize sounds. Although pitch matching has been used to
perceptually align the two arrays, an issue that arises is that the apical end of one array is
typically lower sounding than the apical end of the other array. Similarly, the basal end of one
array is typically higher than the basal end of the other array. This raises the question of
whether or not those unalignable portions of the array should be stimulated. If they are
stimulated, there will be a perceptual misalignment for part of the array, potentially reducing
binaural benefits. However, by not stimulating those unalignable portions, the entire acoustic
frequency range must be spectrally compressed into the pitch-matched portions of the array,
potentially degrading spectral resolution. The effects of stimulating or deactivating the
unalignable portions of the array were investigated with normal hearing listeners using vocoder
simulations and with cochlear implant patients.
Normal hearing listeners were tested with vocoder simulations. In one condition the
signal was spectrally compressed to fit in a smaller frequency region (spectrally compressed),
mimicking the effects of only stimulating the matched portions of the two arrays. In the other
condition, the signal was not spectrally compressed, but the most apical filter was sent to one
ear and the most basal filter was sent to the other, with the center filters presented to both ears
(mismatched edges). Participants were tested on a localization test and the Spectral-temporally
Modulated Ripple Test (SMRT), a measure of spectral resolution. The results indicated that the
best localization performance occurred with the spectrally compressed condition, suggesting the
importance of perceptually aligning the two ears for binaural abilities. In contrast, SMRT
performance was better with the mismatched edges conditions, suggesting that perceptually
aligning the arrays may involve a trade-off between spectral resolution and localization abilities.
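The two allocations can be made explicit with a small sketch (illustrative channel counts and corner frequencies, not the experimental parameters):

```python
import numpy as np

def allocations(n_ch=8, f_lo=200.0, f_hi=8000.0, matched=(300.0, 5000.0)):
    """Two toy channel allocations for a bilateral vocoder.  'compressed':
    the full input range squeezed into the pitch-matched region, all
    channels to both ears.  'mismatched_edges': full range kept; the most
    apical filter goes to one ear only and the most basal to the other."""
    compressed = {"edges": np.geomspace(matched[0], matched[1], n_ch + 1),
                  "left": list(range(n_ch)), "right": list(range(n_ch))}
    mismatched = {"edges": np.geomspace(f_lo, f_hi, n_ch + 1),
                  "left": list(range(0, n_ch - 1)),   # drop most basal
                  "right": list(range(1, n_ch))}      # drop most apical
    return compressed, mismatched

comp, mism = allocations()
print(np.round(comp["edges"]))
print(mism["left"], mism["right"])
```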
To further examine this trade-off, four bilateral cochlear implant patients were tested. The
perceptual alignment of the two arrays was determined using a pitch matching task. Based on
these pitch matches, two twelve channel maps were created, one where only pitch-matched
stimulation sites were used and one where the full length of each array was used, with the
central portions pitch matched. As with the normal hearing listeners, the preliminary results
suggest that the mismatched edges condition yields better spectral resolution. In contrast, there
was no difference between the spectrally compressed and mismatched edges conditions.
Work supported by NIH/NIDCD R03-DC013380; equipment provided by Advanced Bionics.
W8: poster withdrawn
W9: LAYER-SPECIFIC BINAURAL ACTIVATION IN THE CORTEX OF
HEARING CONTROLS AND CONGENITALLY DEAF CATS
Jochen Tillein1, Peter Hubka2, Andrej Kral2
1 ENT Department, J.W. Goethe University Frankfurt, Frankfurt, DEU
2 Experimental Otology, ENT Clinics, Medical University Hannover, Hannover, DEU
Congenital deafness leads to a wide range of deficits or alterations in the auditory cortex.
Acute intracochlear electrical stimulation results in lower firing rates, lower numbers of
responding neurons, smaller dynamic range, lower cortical thresholds, and decreased sensitivity
to binaural cues in congenitally deaf cats (CDCs; Kral & Sharma 2012, Tillein et al., 2010).
Previously, synaptic activity was reported to be significantly decreased in infragranular layers in
congenitally deaf cats compared to hearing cats (HCs) (Kral et al. 2005). Here we investigated
unit activity evoked by monaural as well as binaural stimulation in supra- and infragranular
cortical layers separately.
Eight animals (4 CDCs and 4 HCs, aged >6 months) were acutely stimulated with charge-balanced biphasic pulse trains (3 pulses, 200µs/phase, 500pps) in wide bipolar configuration
through a custom made cochlear implant inserted into the scala tympani of the cochlea on either
side. Control animals were acutely deafened by intracochlear application of neomycin. 16
channel electrode arrays (Neuronexus probes) inserted perpendicular to the cortical surface
were used to simultaneously record cortical activity throughout all cortical layers within the
region of AI where local field potential on the cortical surface had highest amplitudes. In each
animal 1-3 tracks were stained with a fluorescent dye (DiI) for later histological reconstruction of
the recording tracks. Animals were stimulated mono- and binaurally. In the binaural mode, time
delays between the stimulation of the left and right cochlea were introduced to measure
sensitivity to interaural time delays (ITDs). Template ITD functions (Tillein et al., 2010) were fitted
to the data, and the distributions of classified ITD functions were compared along cortical depth and
between groups. In total, 288 and 272 cortical sites were analyzed for HCs and CDCs,
respectively.
Results revealed a significant reduction of spiking activity within the infragranular layers
of CDCs compared to HCs but no differences in the maximum firing rate in the supragranular
layers. Within both groups maximal firing rates between supra- and infragranular layers were
significantly different but showed reversed relations: higher rates in infragranular layers in HCs
vs. higher rates in supragranular layers in CDCs. The number of classified ITD functions was
significantly lower in infragranular layers in CDCs while there was no difference between groups
in the supragranular layers. In HCs, all parameters of the ITD responses (ITD center, ITD half width,
ITD modulation depth) were significantly different between supra- and infragranular layers, but
no difference was found for any of the ITD parameters in CDCs.
The reduction of spiking activity within the infragranular layers of CDCs confirms previous
findings about decreased synaptic activity within these layers and might also cause the loss of
layer specific ITD processing in CDCs. This demonstrates that congenital auditory deprivation
leads to an altered columnar microcircuitry in the primary auditory cortex.
Kral A, Sharma A. Developmental neuroplasticity after cochlear implantation. Trends Neurosci. 2012; 35: 111-22.
Kral A, Tillein J, Heid S, Hartmann R, Klinke R. Postnatal cortical development in congenital auditory deprivation.
Cereb. Cortex 2005; 15: 552-62.
Tillein, J. et al. (2010) Cortical representation of interaural time difference in congenital deafness. Cereb. Cortex 20,
492-506
Supported by DFG (Cluster of Excellence Hearing4All)
W10: THE EFFECTS OF A BROADER MASKER ON CONTRALATERAL
MASKING FUNCTIONS
Daniel H Lee, Justin Aronoff
University of Illinois Urbana-Champaign, Champaign, IL, USA
Contralateral masking functions can provide important insight into how signals presented
to the left and right ear interact within the binaural auditory system. Studies have shown that
contralateral masking functions are sharper than ipsilateral masking functions in cochlear
implant users. This may indicate that the masking function is sharpened due to an interaction
between the contralateral masker and probe signal in the central auditory system. Alternatively,
the relative sharpness of the contralateral masking function may be a result of only the peak of
the signal ascending along the central auditory system.
To test these alternatives, contralateral masking functions were compared when using a
narrow and a broad masker with similar peaks. If no difference is seen between both
contralateral masking functions, this would suggest that only the peak of the masker ascends
along the central auditory system, since the broadness of the masker would not be reflected in the
respective masking functions. In contrast, if the broadness of the masking function correlates to
that of the input masker, this would indicate there is sharpening of the contralateral masking
function occurring somewhere along the central auditory system.
Masking functions were measured using a simultaneous masking paradigm. A
continuous contralateral masker was presented and the level of the probe was adjusted using a
modified Bekesy protocol. The broadness of the masker was manipulated by stimulating one
masking electrode (masker alone) or the same masking electrode with simultaneous lower
amplitude stimulation of two adjacent electrodes (masker-plus-flankers). The magnitude of the
current presented on the center masking electrode was held constant across conditions to
provide similar masker peaks.
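The details of the modified Bekesy protocol are not given here, so the following is only a generic sketch of Bekesy-style tracking (assumed step size and reversal rule): the probe level descends while audible and ascends while inaudible, and threshold is estimated from the reversal levels.

```python
import numpy as np

def bekesy_track(audible, start_db=50.0, step_db=2.0, n_reversals=8):
    """Generic Bekesy-style tracker: the probe level descends while
    audible(level) is True and ascends while False; threshold is the
    mean of the levels at which the track reverses direction."""
    level, direction, reversals = start_db, 0, []
    while len(reversals) < n_reversals:
        new_dir = -1 if audible(level) else +1
        if direction and new_dir != direction:
            reversals.append(level)
        direction = new_dir
        level += direction * step_db
    return float(np.mean(reversals))

# Demo: simulated listener whose true masked threshold is 42 dB
rng = np.random.default_rng(1)
print(round(bekesy_track(lambda L: L + rng.normal(0, 1.0) > 42.0), 1))
```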
Preliminary results from two bilateral cochlear implant users showed similar masking
functions for both the masker alone and masker-plus-flankers conditions. Since broadening the
masker current spread did not increase the width of the contralateral masking function, the
results suggest that the peak of the masker alone ascends along the central auditory pathway.
This suggests that the central auditory system, where the masker and the probe interact, is not
necessarily responsible for the sharpness of the contralateral masking function reported in previous
studies.
Work supported by NIH/NIDCD R03-DC013380; equipment provided by Advanced Bionics.
W11: THE EFFECT OF INTERAURAL MISMATCH AND INTERAURALLY
INTERLEAVED CHANNELS ON SPECTRAL RESOLUTION IN SIMULATED
COCHLEAR IMPLANT LISTENING
Ann E. Todd1, Justin M. Aronoff2, Hannah Staisloff2, David M. Landsberger1
1 New York University, New York, NY, USA
2 University of Illinois at Urbana-Champaign, Champaign, IL, USA
Poor spectral resolution limits performance with cochlear implants (CIs) presumably
because of channel interaction caused by the broad spread of excitation from electrical
stimulation. With bilateral CIs, channels can be interleaved across ears (e.g., odd channels
presented to the left ear and even to the right ear) such that the physical distance between
adjacent stimulation sites increases, thus reducing channel interaction. Even though each ear is
only presented with half of the channels, complete spectral information is available when the
signals from the two ears are combined by the central auditory system. While interleaved
processing can improve performance in hearing aid users, improvements with bilateral CIs are
less consistent. This limitation may come from the inherent mismatches across the ears from
misalignments in electrode insertions, as well as variability in cochlear geometry and neural
survival. We hypothesize that interaural mismatch in place of stimulation leads to poorer
spectral resolution because information in a given channel is misaligned across ears promoting
contralateral masking. In addition, we expect interaurally interleaved channels to improve
spectral resolution and the improvement to decrease with interaural mismatch in place of
stimulation. We tested these hypotheses using an acoustic model of CI stimulation.
Spectral resolution was measured using a dynamic spectral ripple test [SMRT: Aronoff &
Landsberger, 2013]. The stimuli of the spectral ripple test were processed using a 16-channel
noise vocoder. Electrodes were simulated to be 1.1 mm apart based on the Greenwood
function. Current spread was simulated using an attenuation rate of 4 dB/mm for each channel.
Carrier filter cutoff frequencies were either matched between the ears or shifted in frequency to
simulate interaural mismatches of 0.25, 0.5, 0.75, 1, 1.5, and 2 electrode spaces. Preliminary
results suggest that listeners show better spectral ripple discrimination in interleaved conditions
compared to non-interleaved conditions. Further data will be collected to evaluate the effect of
interaural mismatch on both the interleaved and non-interleaved conditions.
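A sketch of how the interaural mismatch could be parameterized under the stated assumptions (Greenwood map, 1.1-mm electrode spacing; the band edges below are illustrative): carrier bands for one ear are shifted along the cochlea by a fraction of an electrode space.

```python
import numpy as np

A, a, k = 165.4, 0.06, 0.88   # Greenwood constants, x in mm (35-mm cochlea)

def place_to_freq(x_mm):
    return A * (10 ** (a * x_mm) - k)

def freq_to_place(f_hz):
    return np.log10(np.asarray(f_hz) / A + k) / a

def shifted_carriers(analysis_edges, mismatch_spaces, spacing_mm=1.1):
    """Carrier band edges for one ear, shifted basally by a fraction of an
    electrode space relative to the analysis bands, to simulate interaural
    place mismatch."""
    return place_to_freq(freq_to_place(analysis_edges) + mismatch_spaces * spacing_mm)

edges = np.geomspace(200.0, 7000.0, 17)   # 16 analysis channels (illustrative)
for m in (0.25, 0.5, 1.0, 2.0):
    print(m, np.round(shifted_carriers(edges, m)[:3]))
```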
Support provided by NIH grants R01 DC012152 (Landsberger) and R03 DC013380 (Aronoff).
W12: NEURAL PROCESSING OF INTERAURAL TIME DIFFERENCES:
DIRECT COMPARISONS BETWEEN BILATERAL ELECTRIC AND ACOUSTIC
STIMULATION
Maike Vollmer, Armin Wiegner
Comprehensive Hearing Center, University Hospital Würzburg, Würzburg, DEU
Bilateral cochlear implants (CIs) provide important cues for directional hearing and
speech understanding in noise. However, binaural performance of CI users on tasks that involve the
discrimination of interaural time differences (ITDs) is typically below that of normal hearing
listeners. Possible explanations for this limitation are deafness-induced degradations in neural
ITD sensitivity or differences in the binaural brain circuits activated by either acoustic or electric
stimulation. To identify the limitations of electric ITD coding, the present study compares ITD
processing for electric pulse trains in normal hearing animals with that for acoustic stimuli that
vary in their spectral content and temporal properties in the same population of neurons.
Adult gerbils were bilaterally implanted with round window electrodes, and earphones
were sealed to the auditory meati for acoustic stimulation. Electric stimuli were low-rate periodic
pulse trains. Acoustic stimuli were either pure tones at the neuron’s characteristic frequency or
broadband noise bursts. To mimic the transient nature and broadband spectrum of electric
pulses, we also used low-rate trains of acoustic clicks and short upward frequency-modulated
sweeps (‘chirps’). Responses to varying ITDs were recorded from single neurons in auditory
brainstem (dorsal nucleus of the lateral lemniscus) and midbrain (inferior colliculus).
The proportion of neurons exhibiting significant ITD sensitivity was high for both electric
and acoustic stimulation (>82%). However, the degree of ITD sensitivity to electric stimulation
was lower than that for acoustic signals with similar spectral and temporal properties. The
majority of rate-ITD functions for both electric and acoustic stimulation were peak-shaped
suggesting similar excitatory coincidence mechanisms in the auditory brainstem for the two
modes of stimulation in normal hearing animals. The results were in contrast to the high
incidence of sigmoidal and biphasic electric rate-ITD functions previously reported in juvenile
and adult deafened animals (Vollmer et al., ARO 37: 444, 2014). Moreover, ITD tuning
characteristics and neural ITD discrimination thresholds were not affected by either the
stimulation mode or by the spectral and temporal properties of the acoustic stimuli. Specifically,
electric stimulation maintained the asymmetric ITD tuning (i.e., contralateral bias of best ITDs)
that is typically observed with acoustic stimulation. These findings contradict the proposal that
cochlear delays are the main source of internal delays in ITD processing (Shamma et al., JASA
86: 989-1006, 1989).
In summary, neural ITD coding in the normal hearing system was similar for electric and
acoustic stimulation. The results suggest that discrepancies in ITD discrimination between
bilateral CI users and normal hearing listeners are based on deprivation-induced changes in
neural ITD coding rather than intrinsic differences in the binaural brain circuits involved in the
processing of electric and acoustic ITDs.
Supported by DFG VO 640/2-1.
W13: WITHIN-SUBJECTS COMPARISON OF BIMODAL VERSUS BILATERAL
CI LISTENING AND FINE-STRUCTURE VERSUS ENVELOPE-ONLY
STRATEGIES IN SOUND LOCALIZATION, SOUND QUALITY, AND SPEECH IN
NOISE PERFORMANCE
Ewan A. Macpherson1, Ioan A. Curca1, Vijay Parsa1, Susan Scollie1, Katherine
Vansevenant2, Kim Zimmerman, Jamie Lewis-Teeter2, Prudence Allen1, Lorne Parnes3,
Sumit Agrawal3
1 National Centre for Audiology, Western University, London, CAN
2 Cochlear Implant Program, London Health Sciences Centre, London, CAN
3 Department of Otolaryngology, Schulich School of Medicine, London, CAN
This cross-over study examined the effects of bimodal (CI with contralateral hearing aid, HA)
versus bilateral (two CIs) listening and of fine-structure versus envelope-only CI signal encoding
(Med_El FS4 versus HDCIS strategies) on sound localization, speech-in-noise perception, and
sound quality in 16 adult CI candidates with bilateral severe-to-profound hearing loss but some
aidable low-frequency hearing. Here we report results from testing at 12 months following receipt of
the first CI (bimodal, “12-month” time point) and a further 12 months following receipt of the second
CI (bilateral, “24-month” time point). All participants used MED-EL OPUS 2 processors programmed
with both FS4 and HDCIS strategies, and all chose to use FS4 in daily life.
For sound localization, participants oriented toward the perceived azimuth of 200- or 7500-ms wideband noise bursts. In the latter case, participants were required to orient using head
movements during stimulus presentation. For sound quality, the participants used a visual analog
scale to rate short recordings of speech or music as the ear of presentation (left, right, diotic) and CI
strategy were varied using custom open headphones and a computer controlled implant remote
control that allowed trial-to-trial blind CI program selection. For speech in noise, 20-sentence HINT
SRTs were obtained in co-located (speech and noise in front) and separated (speech in front and
noise on the HA or second-CI side) conditions and with both devices together or with the first CI
only. Spatial unmasking of speech (SUS) scores were derived by subtracting co-located from
separated SRTs.
For sound localization, in bimodal listening there was marked response bias toward the CI
side, and response scatter was 2-3 times larger than in pre-operative testing with two HAs. Errors
due to bias and scatter were greatly reduced in the bilateral CI configuration. Results were similar
for FS4 and HDCIS except for a trend with bilateral FS4 listening toward reduced bias and scatter
for the long-duration stimuli. For sound quality, at both time points the ratings were significantly
higher for FS4 than for HDCIS and higher for diotic presentation than for first CI alone. At 24
months, ratings were similar for the first and second CIs alone.
For speech in noise, the benefit of spatial separation of
target and masker (SUS) was actually significantly reduced at 24 months. This, however, was due to a combination
of substantial improvement of SRTs with the first CI alone from 12 to 24 months and an increased ability to disattend
to the noise-side ear. There was a trend toward lower SRTs with FS4 at both 12 and 24 months.
Overall, these results suggest (for this patient population) that compared to bimodal listening,
bilateral CIs provide significantly improved sound localization and improved opportunity for better-ear listening in noise. The fine-structure strategy (FS4) yielded significant sound quality benefits in
both bimodal and bilateral listening configurations, and there were also trends favoring FS4 in the
localization and speech in noise tasks.
[This study was supported by funding from the HEARRING Network, the London Health Sciences
Foundation, and the Canada Foundation for Innovation.]
W14: THE EFFECT OF INTERAURAL TEMPORAL DIFFERENCES IN
INTERAURAL PHASE MODULATION FOLLOWING RESPONSES
Jaime A. Undurraga, Nicholas R. Haywood, Torsten Marquardt, David McAlpine
University College London Ear Institute, London, GBR
Introduction: Binaural hearing, which is particularly sensitive to interaural temporal
differences conveyed in the low-frequency temporal fine structure (TFS) of sounds, is
fundamental to sound-source localization and important for speech perception in the presence
of background noise. Hearing loss can impair binaural hearing, impacting on localization abilities
and the understanding of speech in noise. In cochlear implantation, restoration of binaural
hearing in the profoundly deaf is limited by a range of factors, including failure to stimulate
appropriately the binaural pathways sub-serving low-frequency binaural hearing, as well as
factors such as mismatch of intra-cochlear placement of electrodes in each ear, which may
reduce the integration of binaural cues. Here, we have developed a reliable objective measure
of binaural processing in normal-hearing (NH) listeners that could potentially be applied to
assess both of these factors in users of bilateral cochlear implants (CIs).
Methods: We first measured, in NH listeners, interaural phase modulation following
responses (IPM-FRs) to sinusoidally amplitude modulated (SAM) tones. The phase of a 520-Hz
carrier signal was manipulated to produce discrete periodic 'switches' in interaural phase
difference (IPD) at certain minima of the SAM cycle. A total of seven IPDs between ±11° and
±135°, and three IPM rates (3.4, 6.8, and 13.6 Hz), were tested. In a second experimental
manipulation, the amplitude modulation rate was varied between 27 and 109 Hz. In a third
experiment, amplitude-modulated ‘transposed’ tones (carriers in the range 2-5 kHz) were used
to measure IPM-FRs (at 6.8 Hz). The carrier frequencies presented to each ear were either
matched or mismatched (3, 4, or 5 kHz - simulating an electrode mismatch). Transposed tones
allow the control of the envelope shape independently from the carrier rate. In one condition the
envelope IPD was modulated between 0 and 180° (0/180° condition), whilst in a second
condition the IPD was modulated between ±90°. Finally, an ongoing fourth pilot experiment
with bilaterally implanted CI subjects is being conducted using SAM
pulse trains with an IPM presented at ~7 Hz. As in the third experiment, the IPD applied to the
carrier will be modulated either from 0 to 180° or between ± 90°. Data collection is currently
under way and will be presented at this meeting.
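A minimal sketch of the IPM stimulus (our reconstruction; the modulation rate shown is just one value within the tested range): the carrier IPD is switched between +ipd_deg and -ipd_deg at the IPM rate, with switches held to envelope minima so the transitions are silent.

```python
import numpy as np

def ipm_stimulus(fc=520.0, fm=41.0, ipm_hz=6.8, ipd_deg=90.0, dur=1.0, fs=44100):
    """SAM tone pair whose carrier IPD alternates between +ipd_deg and
    -ipd_deg at rate ipm_hz; the alternation is sampled-and-held at
    envelope minima so the switch itself is inaudible."""
    t = np.arange(int(dur * fs)) / fs
    env = 0.5 * (1 - np.cos(2 * np.pi * fm * t))        # minima at t = n/fm
    t_min = np.floor(t * fm) / fm                       # time of last minimum
    state = np.where(np.sin(2 * np.pi * ipm_hz * t_min) >= 0, 1.0, -1.0)
    ipd = np.deg2rad(ipd_deg) * state
    left = env * np.sin(2 * np.pi * fc * t + ipd / 2)
    right = env * np.sin(2 * np.pi * fc * t - ipd / 2)
    return left, right

left, right = ipm_stimulus()
```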
Results: Responses to the low-frequency tones could be obtained from all participants.
However, the Signal-to-Noise Ratio was largest when the IPM rate was 6.8 Hz and IPDs were
between ±45° and ±90°. Increasing the modulation rate resulted in increased IPM-FR
amplitudes.
Responses obtained for the transposed stimuli demonstrated that the magnitude of the
IPM-FR was larger for matched than for frequency-mismatched carriers. Moreover, responses
were larger for the 0/180° IPD condition.
Conclusions: We concluded that IPM-FR can be used as a reliable objective
measurement of binaural processing and may be a suitable objective measure to match across-ear electrodes.
The research leading to these results has received funding from the European Union’s Seventh
Framework Programme (FP7/2007-2013) under ABCIT grant agreement number 304912.
W15: SHORT INTER-PULSE INTERVALS IMPROVE NEURAL ITD CODING
WITH BILATERAL COCHLEAR IMPLANTS
Brian D. Buechel, Kenneth E. Hancock, Bertrand Delgutte
Eaton-Peabody Laboratories, Massachusetts Eye and Ear Infirmary, Boston, MA, USA
Bilateral cochlear implant (CI) users have poor perceptual sensitivity to interaural time
differences (ITD), limiting their ability to localize sounds and perceive speech in noisy
environments. This is especially true for high-rate (>300 pps) periodic pulse trains that are used
as carriers in present CI processors. Here we test the principle of a novel stimulation strategy in
which extra pulses are added to high-rate periodic pulse trains to introduce short inter-pulse
intervals (IPIs). This strategy is based on the finding that short IPIs in high-rate pulse trains
improve neural ITD sensitivity and produce robust spiking in inferior colliculus (IC) neurons
(Hancock et al., J. Neurophysiol. 108: 714). The stimuli in that study used randomly jittered IPIs;
we hypothesized that inserting short IPIs into periodic pulse trains could have a similar effect. To
test this hypothesis, we investigated ITD tuning of IC neurons in an awake rabbit model of
bilateral CIs.
We recorded from single units in the IC of two unanesthetized rabbits with bilateral
cochlear implants. The stimuli were trains of biphasic pulses that varied along three
dimensions independently: (1) mean pulse rate from 320-1280 pps, (2) period of extra pulse
insertion from 5 to 80 ms, and (3) length of short IPI from 10 to 50% of the mean inter-pulse
period. We also measured neural responses over a wide range of ITDs (-2000 to 2000 µs) to
compare ITD tuning in conditions with and without short IPIs.
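The pulse-timing manipulation is easy to sketch (illustrative parameter values within the ranges above):

```python
import numpy as np

def train_with_extra_pulses(rate_pps=640.0, dur=0.3, insert_period_ms=10.0,
                            short_ipi_frac=0.10):
    """Periodic pulse train plus an extra pulse every insert_period_ms,
    placed short_ipi_frac of the mean inter-pulse period after the nearest
    preceding carrier pulse, creating intermittent short IPIs."""
    period = 1.0 / rate_pps
    base = np.arange(0.0, dur, period)
    anchors = np.arange(insert_period_ms / 1e3, dur, insert_period_ms / 1e3)
    idx = np.searchsorted(base, anchors) - 1    # preceding carrier pulse
    extra = base[idx] + short_ipi_frac * period
    return np.sort(np.concatenate((base, extra)))

pulses = train_with_extra_pulses()
print(round(float(np.diff(pulses).min() * 1e3), 3), "ms shortest IPI")
```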
Inserting extra pulses at short IPIs increased firing rates for >50% of IC neurons. Spikes
tended to occur with short latencies after the extra pulses, and this effect was most pronounced
for high carrier rates (>448 pps), the shortest IPIs (10% of the mean inter-pulse period), and
intermediate periods of extra pulse insertion (10 - 20 ms). Adding extra pulses also improved
ITD sensitivity to a point comparable to sensitivity with low-rate periodic pulse trains. In some
neurons, high spontaneous firing rates masked the ITD sensitivity introduced by the short IPIs.
This limitation could be overcome by selecting the spikes occurring at short latencies after the
extra pulses. Such temporal windowing could be implemented more centrally through
coincidence detection.
The introduction of extra pulses with short IPIs increases firing rates and ITD sensitivity in
many IC neurons. These results are consistent with the effects produced by pulse trains with
jittered IPIs, with the added benefit of retaining control over the timing of the short IPIs. Our
findings lay the groundwork for a novel CI processing strategy in which short IPIs are added
intermittently to pulse train carriers. Such a strategy could improve perceptual ITD sensitivity,
thereby improving binaural hearing for bilateral cochlear implant users.
W16: SENSITIVITY TO INTERAURAL TIMING DIFFERENCES IN CHILDREN
WITH BILATERAL COCHLEAR IMPLANTS
Erica Ehlers, Alan Kan, Shelly Godar, Ruth Litovsky
UW-Madison, Madison, WI, USA
Bilateral implantation in children is partly motivated by the attempt to activate binaural
circuits in the auditory system, so that better spatial hearing abilities can be achieved. The
important auditory cues ideally provided would consist of interaural time and level differences
(ITDs and ILDs). We conduct studies in which children are tested using synchronized research
processors with pulsatile stimulation on pitch-matched electrode pairs. Our studies to date with
low rates of 100 pulses per second suggest that children generally have sensitivity to ILD cues,
whereas sensitivity to ITD cues is weak or absent. This lack of sensitivity to ITD cues may arise
from two possible factors. First, current clinical processing provides this population with high-rate (900+ pps) amplitude-modulated stimulation. Therefore, it is predicted that through use of
stimulation that is more similar to the children’s everyday listening experience, the ability to use
some information in ITD cues will be improved. Second, in earlier studies, pitch matching was
used to identify electrode pairs for testing, in order to account for possible differences in
electrode insertion depths across the ears. Although pitch matching has been used successfully
in adults, it may not be appropriate for children if they have learned pitch through their clinical
maps (cf. Reiss et al., 2008). It may be the case that pitch matching tasks are not reliable for
identifying anatomical mismatch in place of stimulation in the two ears with congenitally deaf
children.
In order to examine these two factors in greater detail, children (ages 9-15) with bilateral
Cochlear Nucleus CIs participated in two experiments. Experiment I measured ITD just-noticeable-difference (JND) thresholds using a pitch-matched electrode pair with low rate,
unmodulated stimuli (100 pps), high rate unmodulated stimuli (1000 pps), and high rate
modulated stimuli (1000 pps modulated at differing rates). Experiment II included a direct pitch
comparison (DPC) task, where subjects directly compared the perceived pitch of different
interaural pairs of electrodes. ITD sensitivity was also measured at the same pairs.
To investigate whether rate of stimulation has an effect on ITD sensitivity, ITD JNDs measured
with the low rate unmodulated, high rate unmodulated, and high rate modulated stimuli were
compared. To investigate the efficacy of pitch-matching tasks in this population, direct pitch
comparison data were compared with ITD JNDs for all interaural electrode combinations. If
children still do not demonstrate sensitivity to ITDs even when tested with a variety of
interaural electrode combinations and/or high rate amplitude modulated stimuli, this may suggest
that early deprivation of experience with ITDs could result in degradation of the neural circuitry
required for ITD processing.
Work funded by NIH-NIDCD (R01DC8365, Litovsky) and NIH-NICHD (P30HD03352).
W17: A SETUP FOR SIMULTANEOUS MEASUREMENT OF (E)ASSR AND
PSYCHOMETRIC TEMPORAL ENCODING IN THE AUDITORY SYSTEM
Andreas Bahmer1, Uwe Baumann2
1 University Clinic Würzburg, Comprehensive Hearing Center, Würzburg, DEU
2 University Clinic Frankfurt, Audiological Acoustics, Frankfurt, DEU
Simultaneous assessment of psychometric tasks and electrophysiological recordings (like
EASSR) is challenging because each requires specific technical and physiological
preconditions. As the measurement of EASSR is a sophisticated task, we first developed and
evaluated a modified auditory steady state response paradigm which combines
electrophysiological recording and psychophysical discrimination tasks. Electrophysiological
recordings require a comparatively long test time duration to gain sufficient signal-to-noise
ratios, whereas test duration of psychometric measurements should be limited to prevent
challenges to the attention of the subject. In order to investigate the immediate correlation between
the two measurements, a method is described that combines electrophysiological and
psychometric measurements in a single test procedure. Auditory steady state responses
(ASSR) and a pitch discrimination task were combined in a single procedure. ASSR usually
requires continuous stimulus presentation, whereas in discrimination tasks short stimuli are
typically presented. The setup employed two short-time ASSR sub-stimuli with different fixed
modulation frequencies but the same carrier frequency (signals 1 and 2). A setup capable of
performing both measurements was developed, consisting of an external sound card,
headphones, an EEG amplifier, and a standard PC. Software was developed that enables
stimulation and recording with high phase precision and analysis of the recorded EEG data.
The setup was successfully tested by means of an artificial EEG signal (sinusoidal amplitude
modulation of a carrier sine) and in one human subject. ASSR signal strength can be calculated
without knowledge of the absolute stimulus onset. In another test, software was evaluated that
enables the generation of continuous sinusoidally amplitude-modulated stimuli with statistically altered
modulation frequency. The modulation jitter is a stimulus parameter that potentially influences
both measures. Our (E)ASSR recordings show that it is possible to extract neural responses
despite large artifacts, jitter, and a high stimulation rate (200 pps). Therefore, the next step will be
an integration of (E)ASSR in the described new test paradigm.
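As an illustration of the onset-insensitive analysis mentioned above (a sketch with assumed frequencies, not the authors' software): the ASSR magnitude is read from the spectrum at the modulation frequency, where phase, and hence absolute stimulus onset, plays no role, and an SNR is formed against neighbouring bins.

```python
import numpy as np

def assr_strength(eeg, fs, fm, n_noise=10):
    """Onset-insensitive ASSR estimate: spectral magnitude at the
    modulation frequency (phase, and hence absolute onset, is ignored)
    and its ratio to the mean of neighbouring noise bins."""
    spec = np.abs(np.fft.rfft(eeg * np.hanning(len(eeg))))
    freqs = np.fft.rfftfreq(len(eeg), 1.0 / fs)
    k = int(np.argmin(np.abs(freqs - fm)))
    # exclude the signal bin and immediate neighbours (window leakage)
    noise = np.r_[spec[k - 2 - n_noise:k - 2], spec[k + 3:k + 3 + n_noise]]
    return spec[k], spec[k] / noise.mean()

# Artificial EEG check, as in the abstract: 40-Hz response buried in noise
fs, fm = 1000, 40.0
t = np.arange(10 * fs) / fs
rng = np.random.default_rng(2)
eeg = 0.2 * np.sin(2 * np.pi * fm * t + 1.3) + rng.normal(0, 1, t.size)
amp, snr = assr_strength(eeg, fs, fm)
print(round(float(snr), 1))                     # clearly above 1
```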
W18: MODELING INDIVIDUAL DIFFERENCES IN MODULATION DETECTION
Gabrielle O Brien, Jay Rubinstein, Nikita Imennov
University of Washington, Seattle, WA, USA
Across cochlear implant listeners, the ability to detect temporal changes in a stimulus
envelope, a task crucial for speech perception, varies greatly. It is unclear what individual
differences account for the range of performance and at what level of auditory processing they
occur. Understanding these factors is an essential step towards developing customized settings
that optimize the transfer of sound information.
Unlike in psychophysical studies of a heterogeneous population, computational modeling
studies have exquisite control over all parameters. Recently, we utilized a cable model of the
auditory nerve featuring heterogenous fiber diameters and stochastic ion channels to compute
the neural population response to sinusoidally amplitude modulated pulse trains. Using methods
from signal detection theory, the modulation detection threshold (MDTs) were computed from
the neural responses across carrier rates and modulation frequencies. The simulated MDTs
predicted three qualitative trends from literature: that sensitivity to modulation increases at low
carrier rates, high stimulus intensity and low modulation frequencies.
These results reflect the average trends across listeners. We now show how
systematically varying physiological parameters of the nerve model, the placement of the
electrode and parameters of the discrimination procedure affect MDTs in order to illuminate
sources of individual differences. We vary the percent of fibers involved in discrimination to
model die-off, the fiber conduction velocity to simulate demyelination, the distance of the
electrode from the fibers, the temporal resolution of the discrimination procedure, and the
degree of internal noise. These results suggest mechanisms that explain the variability in the
shape and magnitude of MDTs and the specific hallmarks that may identify them in practice.
W19: A SPIKING NEURON NETWORK MODEL OF ITD DETECTION IN
COCHLEAR IMPLANT PATIENTS.
Joerg Encke, Werner Hemmert
Bio-Inspired Information Processing, IMETUM, Technische Universität München, Munich, DEU
Cochlear implant (CI) listeners show a remarkable ability to understand speech in quiet
environments. Nevertheless, there is room for improvement: in aspects such as speech
understanding in noise and sound localization, they still lag behind normal hearing listeners.
Normal hearing listeners are able to separate sound sources from noise by sound
localisation. Two mechanisms are used to locate sound sources in the horizontal plane. Low
frequency sounds are located by using the difference in arrival time between the two ears
(interaural time differences, ITDs), while for high frequency sounds interaural level differences
(ILDs) are also available.
Here, we present a detailed spiking neuronal network model of the ITD detection circuit in
CI users and normal hearing subjects. The network consists of a Hodgkin Huxley type model of
the Medial Superior Olive (MSO), which is thought to be the first stage at which ITDs are decoded. The
cochlear nucleus is represented by a model of the globular bushy cells; its input is either
calculated using a model of the auditory periphery [Carney2014] or, for CI users, by using spike
responses from a model of the electrically stimulated cochlea [Nicoletti2013].
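The study uses Hodgkin-Huxley MSO neurons; purely for illustration, a drastically simplified leaky-integrator coincidence detector shows the principle (all constants are placeholders): the unit reaches threshold only when excitation from the two ears arrives nearly simultaneously, so its spike count falls off with ITD.

```python
import numpy as np

def lif_coincidence(left_s, right_s, dur=0.2, dt=1e-5, tau=0.3e-3, w=0.6):
    """Greatly simplified MSO stand-in: a leaky integrator (threshold 1)
    that fires only when excitation from both ears arrives within a
    fraction of the membrane time constant tau."""
    n = int(dur / dt)
    drive = np.zeros(n)
    for s in np.concatenate((left_s, right_s)):
        i = int(s / dt)
        if i < n:
            drive[i] += w                       # one EPSP per input spike
    v, decay, spikes = 0.0, np.exp(-dt / tau), 0
    for i in range(n):
        v = v * decay + drive[i]
        if v >= 1.0:
            spikes += 1
            v = 0.0
    return spikes

rng = np.random.default_rng(4)
left = np.sort(rng.uniform(0, 0.2, 80))
for itd_us in (0, 100, 300):                    # output falls off with ITD
    print(itd_us, lif_coincidence(left, left + itd_us * 1e-6))
```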
The presented model is able to produce realistic rate-ITD curves when compared to
measurements in acutely implanted gerbils. It will help to better understand the cause of
deteriorated sound localisation in long-term deafened animals and support the improvement of
existing CI coding strategies with respect to ITD detection (see also [Wirtz2015]).
Acknowledgements:
This work was funded by the German Research Foundation within the Priority Program
"Ultrafast and temporally precise information processing: normal and dysfunctional hearing"
SPP 1608 (HE6713/1-1) and the German Federal Ministry of Education and Research within the
Munich Bernstein Center for Computational Neuroscience (reference number 01GQ1004B).
References:
[Nicoletti2013] Nicoletti, M., Wirtz, C., and Hemmert, W. (2013). Modelling Sound Localization with Cochlear Implants. In Blauert, J., editor, Technol. Binaural List., pages 309-331. Springer.
[Carney2014] Carney, L. H., Zilany, M. S. A., Huang, N. J., Abrams, K. S., and Idrobo, F. (2014). Suboptimal use of neural information in a mammalian auditory system. The Journal of Neuroscience, 34(4), 1306-1313.
[Wirtz2015] Wirtz, C., Encke, J., Schleich, P., Nopp, P., and Hemmert, W. (2015). Comparing different Models for Sound Localization within Normal Hearing and Cochlear Implant Listeners. Poster, CIAP 2015.
W20: REDUCING CURRENT SPREAD AND CHANNEL INTERACTION USING FOCUSED
MULTIPOLAR STIMULATION IN COCHLEAR IMPLANTS: EXPERIMENTAL DATA
Shefin S George, Robert K Shepherd, Andrew K Wise, James B Fallon
Bionics Institute, East Melbourne, Victoria, Australia
Department of Medical Bionics, University of Melbourne, Parkville, Victoria, Australia.
Introduction. The conductive nature of the fluids and tissues of the cochlea can lead to
broad activation of spiral ganglion neurons using contemporary cochlear implant stimulation
configurations such as monopolar (MP) stimulation. Focusing of the stimulation is expected to
result in improved implant performance. We evaluated the efficacy of focused multipolar (FMP)
stimulation, a current focusing technique in the cochlea, to achieve spatial selectivity and
reduced channel interaction by measuring neural activation in the auditory midbrain and
compared its efficacy to both MP stimulation and tripolar (TP) stimulation. We also explored the
efficacy of a stimulation mode that is referred to here as partial-FMP (pFMP) stimulation to
achieve lower stimulation thresholds compared to the standard FMP stimulation.
Methods. Following implantation of Cochlear™ Hybrid-L14 arrays into the acutely (n=8)
and long-term deafened (n=8) cochleae of cats, the inferior colliculus (IC) contralateral to the
implanted cochlea was exposed. Multiunit responses were recorded across the cochleotopic
gradient of the central nucleus of the IC in response to electric (MP, TP, FMP and pFMP)
stimulation over a range of intensities using a 32-channel silicon array (NeuroNexus). pFMP
stimulation involved varying the degree of current focusing by changing the level of
compensation current. The spread of neural activity across the IC, measured by determining the
spatial tuning curve (STC), was used as a measure of spatial selectivity. The width of each STC
was measured at cumulative d'=1 above minimum threshold. Channel interactions were
quantified by threshold shifts following simultaneous activation of two stimulation channels.
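One plausible way to derive an STC and its width from recorded responses is sketched below; the cumulative-d' matrix is assumed to be precomputed per level and depth, and the criterion, the width-measurement span, and all names are our assumptions rather than the authors' actual analysis code.

```python
import numpy as np

def stc_width(dprime_cum, levels_db, depths_mm, criterion=1.0, above_db=1.0):
    """Spatial tuning curve from a (level x depth) matrix of cumulative d' values.
    Per recording depth, threshold = lowest level where cumulative d' >= criterion;
    the STC width is the extent of depths whose threshold lies within `above_db`
    of the tip (minimum) threshold."""
    thresholds = np.full(len(depths_mm), np.nan)
    for j in range(len(depths_mm)):
        idx = np.flatnonzero(dprime_cum[:, j] >= criterion)
        if idx.size:
            thresholds[j] = levels_db[idx[0]]
    tip = np.nanmin(thresholds)
    inside = np.asarray(depths_mm)[thresholds <= tip + above_db]  # NaNs compare False
    return thresholds, inside.max() - inside.min()
```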
Results. MP stimulation resulted in significantly wider STCs compared to FMP and TP
stimulation in animals with normal and severe auditory neuron degeneration (one-way RM
ANOVAs, p’s<0.001). However, thresholds were significantly higher (one-way RM ANOVAs,
p’s<0.001) for FMP and TP stimulation compared to MP stimulation. Using pFMP stimulation,
the high threshold levels for FMP stimulation were significantly reduced without compromising
spatial selectivity (one-way RM ANOVA, p<0.001). In both experimental groups, channel
interactions were significantly larger for MP than both FMP and TP stimulation configurations
(one-way ANOVAs, p’s<0.001), while channel interactions for FMP and TP stimulation
configurations were not found to differ.
Conclusion. The data indicated that FMP and TP stimulation resulted in more restricted
neural activation and reduced channel interaction compared to MP stimulation, and this
advantage of FMP and TP was maintained in cochleae with significant neural degeneration,
more reflective of the clinical situation. pFMP stimulation would be expected to minimize
threshold increases compared to FMP stimulation while still maintaining the selectivity
advantages. Importantly, there was no benefit in terms of restricted neural activation and
reduced channel interaction for FMP compared to TP stimulation.
Funding. This work is funded by the Garnett Passe and Rodney Williams Memorial Foundation,
Australian Postgraduate Award (APA) and Bart Reardon Scholarship. The Bionics Institute
acknowledges the support it receives from the Victorian Government through its Operational
Infrastructure Support Program.
W21: HEARING PRESERVATION IN COCHLEAR IMPLANTATION - IMPACT
OF ELECTRODE DESIGN, INDIVIDUAL COCHLEAR ANATOMY AND
PREOPERATIVE RESIDUAL HEARING
Thomas Lenarz, Andreas Buechner, Anke Lesinski-Schiedat, Omid Majdani, Waldemar
Wuerfel, Marie-Charlot Suhling
Medical University Hannover, Hannover, DEU
Background: The percentage of patients with residual hearing undergoing cochlear
implantation has increased over recent years. The preservation of residual hearing allows
electroacoustic stimulation and the use of hybrid systems. The hearing preservation rate (PTA,
speech perception) varies substantially. Parameters potentially relevant for short- and long-term
hearing preservation were analyzed retrospectively in a large cohort of patients implanted at the
Medical University Hannover, including electrode design (length, lateral vs. perimodiolar),
individual cochlear anatomy (cochlear length range 34-45 mm), preoperative residual hearing (type
of audiogram, frequency range), and parameters of electrical stimulation.
Materials and Methods: Overall, 560 patients were included in this retrospective study. The
patients had postlingual onset of hearing loss, and all were implanted using the round-window
approach through the posterior tympanotomy. Systemic corticosteroids were used
intraoperatively. The mean PTA pre- and postoperatively and the unaided and aided speech perception
were measured using monosyllabic word testing, the HSM sentence test in quiet and noise, and the OLSA
sentence test in noise. The amount of hearing loss was classified into less than 15 dB PTA, 15-30
dB PTA, and more than 30 dB PTA (total loss). Patients were followed for up to several
years. The following electrode types were used: Nucleus Hybrid-L, Nucleus SRA, MED-EL Flex
20, Flex 24 and Flex 28, Advanced Bionics HiFocus 5 (Mid-Scala). Using pre- and postoperative
cone beam CT scans, the cochlear length was measured and the cochlear coverage (ratio between the
part of the cochlea covered by the electrode and the total length) was calculated in our cases.
Results: Hearing preservation is possible with all types of electrodes. However, there are
significant differences in hearing preservation rates. The most important factor is the electrode
length, with significantly higher preservation rates for electrodes shorter than 20 mm, which cause a
smaller increase in hearing loss over time compared to longer electrodes. Larger cochlear coverage
resulted in poorer hearing preservation scores. There is a positive correlation between cochlear
length and hearing preservation rate. Electro-acoustic hearing could be used in patients with PTA
thresholds in the low frequencies better than 75 dB HL. The stimulation rate is important for the
long-term preservation of residual hearing: higher rates and short-pulse stimulation show higher
rates of hearing loss. Patients with sudden hearing loss developed cumulatively higher losses of
postoperative residual hearing (> 30 dB) in contrast to patients with progressive hearing loss.
Conclusion: Hearing preservation in cochlear implantation can be achieved with different
types of electrodes and surgical concepts. The relevant parameters for hearing preservation are
electrode length, cochlear coverage, type of electrical stimulation, and history of hearing loss. A
decision-making matrix for the proper selection of electrode type and type of speech processing
has been developed. Patients with longer electrodes achieved significantly better speech perception
results with electrical stimulation only compared to those with short electrodes. On the other hand,
electro-acoustic stimulation is superior to any kind of electrical stimulation alone. The decision-making
matrix is presented.
W22: ECAP RECOVERY FUNCTIONS: N1 LATENCY AS INDICATOR FOR THE
BIAS INTRODUCED BY FORWARD MASKING AND ALTERNATING
POLARITY ARTIFACT REJECTION SCHEMES
Konrad Eugen Schwarz, Angelika Dierker, Stefan Bernd Strahl, Philipp Spitzer
MED-EL HQ, Innsbruck, AUT
In a multi-centre study [1] the electrically evoked compound action potentials (ECAP) of
141 subjects implanted with MED-EL standard and FLEXsoft electrode arrays were
investigated. ECAP amplitudes, latencies as well as double peak presences were manually
determined by experts.
ECAP signals were measured for three different stimulation electrodes (in the regions
apical / middle / basal), evoked by single biphasic pulses (Amplitude Growth Functions, AGF)
and two consecutive biphasic pulses (Masker-Probe stimuli within Recovery Functions, RF). A
discussion on the latency shift in ECAP-signals for AGF is given in [3].
Typically used artifact reduction methods are alternating polarity, forward masking [5, 7],
or an improved forward masking by C. Miller et al. [6]. These techniques achieve good artifact
reduction but introduce some bias into the ECAP recordings [8]. This bias is visible, e.g., in the
latency of the first negative peak (N1) for single biphasic pulses (AGF) and especially for
two consecutive biphasic pulses (RF).
With alternating polarity, ECAP signals following two consecutive biphasic pulses show a
marked latency prolongation of up to 0.1 ms at short inter-pulse intervals (IPIs) (see, e.g., [4]).
If forward-masking artifact reduction is used, the IPI-dependent latency shift becomes
considerably smaller. The attenuation of the latency shift was found to be independent of the
polarity of the masker and probe pulses for forward-masking paradigms.
The IPI-dependent latency shift cannot be explained by the response to the masker pulse
alone and appears physiologically reasonable. The bias introduced by the forward-masking
artifact rejection scheme is analyzed and compared to the bias introduced by alternating
polarity.
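The two standard schemes being compared are easy to state in code; the sketch below shows their commonly described textbook forms (frame naming and averaging details are our assumptions, and the improved forward masking of Miller et al. differs from this).

```python
import numpy as np

def ecap_alternating_polarity(resp_anodic_first, resp_cathodic_first):
    """Alternating polarity: averaging the recordings to anodic-first and
    cathodic-first pulses cancels the polarity-inverting stimulus artifact,
    while the (largely polarity-invariant) neural response survives."""
    return 0.5 * (np.asarray(resp_anodic_first) + np.asarray(resp_cathodic_first))

def ecap_forward_masking(probe, masker_probe, masker, baseline):
    """Forward masking: with frames A = probe alone, B = masker + probe,
    C = masker alone, D = no stimulus, the probe-evoked neural response is
    estimated as A - B + C - D, since the masker leaves the nerve refractory
    in frame B so the probe there contributes (ideally) only its artifact."""
    A, B, C, D = map(np.asarray, (probe, masker_probe, masker, baseline))
    return A - B + C - D
```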
Acknowledgments:
We want to thank the members of HEARRING, the Network of Comprehensive Hearing Implant
Centers, for the recording of ECAPs. Representatives of the clinics are: Santiago L. Arauz, Marcus
Atlas, Wolf-Dieter Baumgartner, Marco Caversaccio, Han De Min, Javier Gavilán, Benoit
Godey, Joachim Müller, Lorne Parnes, Christopher H. Raine, Gunesh Rajan, José Antonio
Rivas, Henryk Skarzynski, Yuri Yanov, Patrick Zorowka, Paul van de Heyning
W23: COMPLEMENTARY RESULTS ON THE FEEDBACK PATH CHARACTERIZATION
FOR THE COCHLEAR CODACS DIRECT ACOUSTIC COCHLEAR IMPLANT
Giuliano Bernardi1, Toon van Waterschoot2, Marc Moonen1, Jan Wouters1, Jean-Marc
Gerard4, Joris Walraevens5, Martin Hillbratt6, Nicolas Verhaert7
1 KU Leuven, Dept. of Electrical Engineering (ESAT-STADIUS), Leuven, BEL
2 KU Leuven, Dept. of Electrical Engineering (ESAT-ETC), AdvISe Lab, Geel, BEL
3 KU Leuven, Dept. of Neurosciences (ExpORL), Leuven, BEL
4 ENT Department, Saint-Luc University Hospital, Brussels, BEL
5 Cochlear Technology Centre Belgium, Mechelen, BEL
6 Cochlear Bone Anchored Solutions AB, Mölnlycke, SWE
7 Dept. of Otolaryngology, Head and Neck Surgery, UZ Leuven, Leuven, BEL
In this study, we describe the latest measurements performed on the Cochlear Codacs
implant, an electro-mechanical direct acoustic cochlear implant (DACI) [Lenarz et al., Audiol
Neurotol 2014;19:164-174], in order to better understand the nature of the acoustic
feedback problems observed in some acoustic hearing implant recipients [Verhaert et al., 2013,
Otology & Neurotology 34(7), 1201-1209]. The measurements were performed on fresh-frozen
cadaver heads in four measurement sessions, at different stimulus levels, using the exponential
sine sweep (ESS) technique.
The reason for carrying out such characterization measurements is threefold: first, to
estimate the feedback path frequency response of the DACI, both in the case of a correct and
an erroneous actuator position. Second, to investigate the presence of nonlinearities, since
these could have a profound impact on the performance of standard feedback cancellation
algorithms. Finally, to measure the difference between the impulse response (IR) recorded
through a microphone on a standard behind-the-ear (BTE) position and the IR measured with an
external, and mechanically decoupled, microphone. This is done to verify whether the
mechanical coupling, through the bone and tissue layers, between the actuator and the BTE
microphone gives rise to a significant mechanical feedback component.
The measured data complement a preliminary dataset [Bernardi et al., IHCON 2014], by
also providing a dB SPL normalization, and confirm the previous findings showing that the DACI
is characterized by a mechanical feedback component which is stronger than the airborne
feedback component and also has a different frequency spectrum. Additionally, the amount of
feedback seems to be dependent on the specific head morphology of the implantee. A strong
increase of the feedback signal was also recorded in the case of erroneous implantation. Finally,
the DACI did show some limited, level-dependent nonlinear behavior that was quantified by
means of a distortion measure.
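The ESS technique referred to above follows Farina's formulation; a minimal sketch of sweep and inverse-filter generation is given below, with the sample rate and sweep limits as placeholders rather than the values used in the sessions.

```python
import numpy as np

def exp_sine_sweep(f1=20.0, f2=20000.0, dur=5.0, fs=48000):
    """Exponential sine sweep (Farina) and its amplitude-compensated,
    time-reversed inverse filter; convolving a recording with the inverse
    yields the impulse response, with harmonic-distortion products appearing
    ahead of the linear IR so they can be windowed out."""
    t = np.arange(int(dur * fs)) / fs
    L = dur / np.log(f2 / f1)
    sweep = np.sin(2.0 * np.pi * f1 * L * (np.exp(t / L) - 1.0))
    inverse = sweep[::-1] * np.exp(-t / L)   # -6 dB/oct envelope on the reversed sweep
    return sweep, inverse

# e.g.: ir = np.convolve(recorded_response, inverse)  # linear IR near the main peak
```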
[This research work was carried out in the frame of the IWT O&O Project nr. 110722 ‘Signal
processing and automatic fitting for next generation cochlear implants’.]
W24: RELATIONSHIP BETWEEN PERIPHERAL AND PSYCHOPHYSICAL
MEASURES OF AMPLITUDE MODULATION DETECTION IN CI USERS
Viral Tejani, Paul Abbas, Carolyn Brown
University of Iowa, Iowa City, IA, USA
Auditory nerve responses to amplitude-modulated (AM) electrical stimuli have previously
been recorded in animals and in Ineraid cochlear implant (CI) users (Jeng et al., 2009; Abbas
et al., 2003; Wilson et al., 1995). In the present study, the
electrically evoked compound action potential (ECAP) was recorded in Nucleus CI users in
response to sinusoidal AM biphasic pulse trains presented to a basal, medial, and apical
electrode. Psychophysical temporal modulation transfer functions (TMTFs) were also obtained
for the same stimuli and same electrodes. The carrier rate was 4000 Hz, and modulation rates
were 125, 250, 500, and 1000 Hz. Preliminary results indicate that ECAP amplitudes followed the
modulation of the stimuli and that the maximal ECAP amplitude increased as modulation rate increased.
This increase is likely due to adaptation effects at the level of the auditory nerve; at low
modulation frequencies there are more current steps between minimal and maximal current
levels, resulting in greater adaptation. Preliminary psychophysical TMTFs resembled low-pass
functions (Shannon, 1992; Busby et al., 1993), meaning modulation detection thresholds (MDTs)
worsened at higher modulation frequencies. The differences in trends (increasing ECAP
amplitudes but worsening MDTs at higher modulation frequencies) are likely related to
limitations in CNS processing of modulated signals. Additionally, slopes were obtained from the
ECAP amplitude vs. modulation rate function and the MDT vs. modulation rate function. Preliminary
comparisons of the two slopes indicate that greater neural adaptation was correlated with greater
declines in MDTs. Psychophysical data collection is ongoing, and future analysis will also
compare the peripheral and psychophysical results to speech perception data.
Funding: NIH grant: P50 DC 000242.
W25: COCHLEAR MICROANATOMY AFFECTS COCHLEAR IMPLANT
INSERTION FORCES
Ersin Avci1, Tim Nauwelaers2, Thomas Lenarz1, Volkmar Hamacher2, Andrej Kral1
1 Hannover Medical University, Hannover, DEU
2 Advanced Bionics GmbH, Hannover, DEU
Background: Recently, patients with significant residual hearing have been shown to benefit from
a combination of electrical and acoustic stimulation. To make this possible, any damage to the fragile
intracochlear structures has to be avoided to preserve residual hearing. Unfortunately, the human
cochlea shows large interindividual variability, and the impact of this variability on the
insertion procedure remains unclear.
Material and Methods: The study was conducted on 11 fresh-frozen human temporal bones. The
inner ear was removed from the temporal bone and dissected. To obtain a clear view of the osseous
spiral lamina and the basilar membrane, we removed the bony capsule covering the scala vestibuli. The
temporal bone was then mounted on a 3-dimensional force measurement system (Agilent
Technologies, Nano UTM, Santa Clara, USA), and a lateral-wall electrode array was inserted at a
fixed speed by an automated arm. Forces were recorded in 3 dimensions with a sensitivity of 2 µN. The
corresponding angular planes are as follows: z-forces are recorded in the direction of insertion
(horizontal plane), y-forces in the cochlear vertical plane, and x-forces in the
direction orthogonal to z and y. Afterwards, the inserted bones were scanned using a SkyScan 1173
machine (40-130 kV source, 5 Mp, 12-bit CCD sensor), resulting in images with a voxel size of 10 µm.
Materialise MIMICS software was used to segment out the scala tympani (ST). A 3-D model of the
segmented area was generated. Cross-sectional images were taken perpendicular to the ST. The 2-D
images were exported to Matlab and analyzed with custom-made software. The obtained 3-dimensional
force profiles were correlated with the microscopic recordings and the micro-CT images.
Results: Preliminary data showed significant correlations between insertion forces and cochlear
geometry. We drew a line from the midpoint of the round window through the central axis to a distant
point of the first turn, and a line orthogonal to it through the central axis, dividing the cochlea into four
quadrants (A1, A2, B1, and B2). The slopes of the force profiles and the maximum forces for each
dimension (x, y, and z) depended significantly on the lengths of A1, A2, B1, and B2.
The length of B2 was negatively correlated with the maximum y-force at 270 degrees (B2). The
length of A1 was positively correlated with the average change in z-force between 90 and 180 degrees. The
average z-force at 180 degrees was 25 ± 7 mN, ranging from 15 to 36 mN, whereas at 270 degrees the
average z-force increased to 119 ± 33 mN (range: 65-168 mN). In comparison, the maximum x-force
was 19 ± 8 mN, whereas the maximum y-force was 25 ± 12 mN at the end of the insertion.
The z-force was the dominant force during insertion. Translocation of the electrode
array caused a decrease of the z-force and a change of the direction of the y-force (~10 mN).
The insertion angle through the round window showed significant correlations with the maximum
obtained forces. A less favorable angle leads to contact with the modiolar wall, which can be seen in
the force profiles as an average force of 15 mN. Buckling of the proximal part of the electrode array
was identified as a rapid rise in forces, mainly in the x-plane.
Identification of cochlear trauma was only possible with the help of all three force dimensions.
Penetration of the electrode array through the basilar membrane was identified as an increase of the
y-force. A sudden change of the direction vector implies possible intracochlear trauma.
Conclusion: 3D insertion forces convey detailed information about the impact of the cochlear
anatomy on the insertion process. Using this highly sensitive 3D force measurement system, we were
able to identify the main dynamic effects of an electrode array during insertion and correlate them
with possible trauma to the inner ear.
Supported by Deutsche Forschungsgemeinschaft (Cluster of Excellence Hearing4All) and Advanced Bionics.
W26: DEVELOPMENT OF A VOLTAGE DEPENDENT CURRENT NOISE
ALGORITHM FOR CONDUCTANCE BASED STOCHASTIC MODELLING OF
AUDITORY NERVE FIBRE POPULATIONS IN COMPOUND MODELS
Werner Badenhorst, Tiaan K Malherbe, Tania Hanekom, Johan J Hanekom
Bioengineering, University of Pretoria, Pretoria, ZAF
It is ever the objective of a model to represent the actual system as closely as possible
while taking into account the required or acceptable accuracy and computational cost. Because
of the high-frequency pulsatile stimulation delivered by cochlear implant (CI) speech processors,
auditory nerve fibre (ANF) models need to include temporal characteristics such as latency,
which is affected by variance in the threshold stimulus. This variance has been shown to be
caused primarily by the stochastic behaviour, or microscopic fluctuations, of the node of Ranvier's
voltage-dependent sodium channels (membrane noise), whose intensity has been shown
to be a function of membrane voltage by Verveen and Derksen (IEEE Proc. 56(6):906-916).
Though numerous approaches have been followed to account for threshold variability in
phenomenological stochastic models, the present study focusses on conductance-based
stochastic models such as the Hodgkin-Huxley (HH) model, since only these models provide
biophysically meaningful results (Izhikevich, IEEE Trans. Neural Netw. 15(5):1063-1070) as
required in the study and modelling of CIs (Rattay et al., Hear. Res. 2001; 153(1-2):43-63).
Goldwyn and Shea-Brown (PLoS Comput. Biol. 7(11):e1002247) identified three types of
conductance-based stochastic models. Subunit and conductance noise models are used almost
exclusively in single-node-of-Ranvier models because their high computational cost makes
them impractical for the compound ANF and volume conduction (VC) models
used in the study of CIs. The third and simplest method, current noise, has the advantages of
simple implementation and relatively low computational cost, but it has two significant
deficiencies: it is independent of membrane voltage, and it cannot inherently determine the
noise intensity required to reproduce in vivo measured discharge probability functions.
This study first addresses both deficiencies of current noise by developing an alternative
noise current term and a novel current noise algorithm for application in conductance-based,
compound ANF models used in CI modelling. Secondly, the current noise algorithm is applied
to Rattay et al.'s compartmental ANF model and validated via comparison of the threshold
probability and latency distributions to measured cat ANF data. Thirdly, the algorithm is applied
to Rattay et al.'s human ANF model within a human VC model to compare the threshold
probability of a single fibre to the joint threshold probability of a population of fibres. Finally,
the significance of stochastic single and population ANF models is evaluated in comparison
with existing deterministic models.
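As a rough illustration of the general idea (not the algorithm developed in this study), the sketch below injects a Gaussian current-noise term whose intensity grows with depolarization, echoing the voltage dependence reported by Verveen and Derksen, into a deliberately simplified passive node; all parameter values and the firing criterion are invented.

```python
import numpy as np

rng = np.random.default_rng(3)

def simulate_node(i_stim, dur_ms=10.0, dt_ms=0.01,
                  c_m=1.0, g_l=0.3, e_l=0.0, k_noise=0.08):
    """Euler-Maruyama integration of a (deliberately passive) membrane patch
    with a voltage-dependent Gaussian current-noise term; k_noise is a free
    parameter that would be tuned to measured discharge statistics."""
    n = int(dur_ms / dt_ms)
    v = np.empty(n)
    v[0] = e_l
    for i in range(1, n):
        sigma = k_noise * (1.0 + abs(v[i - 1] - e_l))   # noise grows with depolarization
        noise = sigma * rng.normal() / np.sqrt(dt_ms)   # white current noise
        dv = (-g_l * (v[i - 1] - e_l) + i_stim + noise) / c_m
        v[i] = v[i - 1] + dt_ms * dv
    return v

# Discharge probability across repeated trials with a fixed firing criterion:
p_fire = np.mean([simulate_node(4.0).max() > 15.0 for _ in range(200)])
print("discharge probability:", p_fire)
```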
W27: OPTIMIZATION OF ENERGY REQUIREMENTS FOR A GAPLESS CI
INTERFACE BY VARIATION OF STIMULUS DESIGN
Stefan Hahnewald1, Heval Benav2, Anne Tscherter3, Emanuele Marconi5, Juerg Streit3,
Hans Rudolf Widmer1, Carolyn Garnham2, Marta Roccio5, Pascal Senn5
1 Inselspital, University of Bern, Bern, CHE
2 MED-EL Medical Electronics, Innsbruck, AUT
3 University Department of Physiology, University of Bern, Bern, CHE
4 University Department of Neurosurgery, Inselspital, University of Bern, Bern, CHE
5 Inner Ear Research Laboratory, Inselspital, University of Bern, Bern, CHE
Introduction: The European NANOCI project aims at creating a gapless electrode-neuron
interface for cochlear implants (CIs) by inducing growth of peripheral processes (PPs) towards
electrode contacts. In this context we investigated the response behavior of regrown PPs of mouse
spiral ganglion neuron (SGN) explants on multi-electrode arrays (MEAs); the experimental procedures
are described in Hahnewald et al. (MEA Meeting 2014, Reutlingen). In a series of experiments
aimed at optimizing stimulation parameters, we selected regrown PPs that contacted at least
two MEA electrodes and performed simultaneous stimulation and recording of neuronal
responses. The research leading to these results has received funding from the European
Union's Seventh Framework Programme under grant agreement No. 281056 (Project NANOCI).
Material and Methods: We tested a standard anodic-first biphasic stimulus A with 2 x
40 µs phase duration (PD) and amplitudes from 20 µA to 100 µA against a quasi-biphasic
stimulus B with an initial long and weak phase (160 µs, 5 µA to 25 µA) followed by a short and
stronger phase (40 µs, -20 µA to -100 µA). Furthermore, we tested standard stimulus A against
variations with interphase gaps (IPGs) of 20 µs (C), 40 µs (C') and 60 µs (C'') duration at
amplitudes from 20 µA to 100 µA.
Results: We analyzed the neuronal response rates elicited by stimuli B, C, C' and C'' at
amplitude levels that did not elicit a response when standard stimulus A was applied. All
non-standard stimuli caused reproducible action potentials in SGN PPs at current levels that were
ineffective for the standard stimulus. We also estimated the energy requirements for the
different standard and non-standard stimuli applied in this study. The standard pulse form had
higher energy requirements than the other stimuli tested here. The results, with standard error
of the mean (SEM), were 0.8 ± 0.16 nJ (A), 0.43 ± 0.07 nJ (B), 0.64 ± 0.13 nJ (C),
0.62 ± 0.11 nJ (C') and 0.62 ± 0.11 nJ (C'').
Conclusions: The data suggest that both quasi-biphasic stimuli (B) and stimuli with IPGs
elicit neuronal responses more efficiently than a standard stimulus (A) in a gapless electrode-neuron
interface with regrown SGN PPs. Specifically, quasi-biphasic stimuli (B) required 45%
less energy to elicit a response compared to the standard stimulus.
Furthermore, the introduction of an interphase gap can lower the energy requirements by
approximately 20%. We showed that alteration of the stimulus design through the introduction of
non-symmetric phases or IPGs could potentially be used for optimized stimulation of SGNs in a
gapless CI interface with regrown PPs from SGNs.
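The reported energies are consistent with a simple i²R accounting over the pulse phases. The sketch below reproduces the right order of magnitude under an assumed 1 kΩ ohmic load; the load value is our assumption, not a reported measurement.

```python
def pulse_energy_nj(phases, r_ohm=1e3):
    """Electrical energy of a current pulse into an (assumed) ohmic load:
    E = sum(i^2 * R * dt), with phases given as (amplitude_uA, duration_us) pairs."""
    return sum((a * 1e-6) ** 2 * r_ohm * (d * 1e-6) for a, d in phases) * 1e9

# Stimulus A: standard biphasic, 2 x 40 us at 100 uA  -> 0.8 nJ
print(pulse_energy_nj([(100, 40), (-100, 40)]))
# Stimulus B: quasi-biphasic, long weak phase then short strong phase -> ~0.5 nJ
print(pulse_energy_nj([(25, 160), (-100, 40)]))
```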
W28: MODEL-BASED INTERVENTIONS IN COCHLEAR IMPLANTS
Tania Hanekom, Tiaan K Malherbe, Liezl Gross, Johan J Hanekom
Bioengineering, University of Pretoria, Pretoria, ZAF
Computational models provide a unique view into the electrophysiological functioning of
the electrically stimulated auditory system. The exploitation of this vantage point to gain insight
into the functioning of a specific user's electrically stimulated auditory periphery is investigated,
with the aim of developing a clinically applicable toolset that could provide model-predicted
mapping (MPM) and/or model-based diagnostic (MBD) capabilities to surgeons and
audiologists. The progress made towards the development of these tools is
reported.
Developments related to MPM include user-specific prediction of frequency mismatch,
dynamic range and electric field imaging (EFI) data to estimate current decay and thus electrode
interaction. The predictive ability of the models with regard to electrophysiological and
psychoacoustic quantities such as electrically evoked compound action potentials (ECAPs) and
perceptual thresholds is discussed.
Developments related to MBD include user-specific high-resolution visualization of the
intracochlear trajectory of the electrode array, visualization of current paths and associated
neural excitation patterns, and the effect of pathologies that affect bone impedance on
performance. A case study employing MBD is presented to demonstrate the potential clinical
applicability of user-specific computational models.
This approach has prospective implications for future clinical support and maintenance of
CI users.
This poster is available from www.up.ac.za/bioengineering.
W29: TOWARDS INDIVIDUALIZED COCHLEAR IMPLANTS: VARIATIONS OF
THE COCHLEAR MICROANATOMY
Markus Pietsch1, Lukas Aguierra Davila2, Peter Erfurt1, Ersin Avci1, Annika Karch2,
Thomas Lenarz1, Andrej Kral1
1 Institute of AudioNeuroTechnology, Medical University Hannover, Germany
2 Institute of Biostatistics, Medical University Hannover, Germany
To minimize cochlear trauma with cochlear implants, particularly for preservation of
residual hearing, the individual shape and size of the cochlea of the given patient need to be
determined. This, however, is only possible using clinically available imaging techniques, whose
resolution is insufficient. Therefore, only basic parameters of the cochlear form can be assessed
in a given subject.
The present study analyzed the cochlear form in 108 human temporal bones post
mortem. For this purpose, frozen human temporal bones were used. After filling the cochlea with
epoxy and exposing it to vacuum for 5 min, the bones were stored for 8 hrs at room
temperature to harden the epoxy. Subsequently, the bone was removed by storing the
specimen in alkali solution for 3 weeks. The resulting corrosion casts were mechanically
cleaned and photographed in 3 orthogonal directions using a custom-made micromechanical
holder with laser-controlled positioning and a Keyence VHX-600 digital microscope. The
resulting resolution was 12 µm per pixel. The images were analyzed using VHX-600 software
and ImageJ. More than 60 different parameters were manually measured in each cochlea. The
data were compared to data obtained from 30 temporal bones that were imaged in µCT with
similar resolution. The data obtained from the corrosion casts were used to fit a mathematical
3D spiral model.
The µCTs and corrosion casts corresponded very well, demonstrating that the
corrosion cast data were reliable. As in previous studies, the present study demonstrated
high variance in many parameters, including absolute metric and angular length as well as
wrapping factor. Notably, the B ratio, a parameter characterizing where the modiolar axis cuts
the base width axis of the cochlea, appeared to be partly related to the course of the basalmost
vertical profile: if the ratio was small, the vertical profiles had a more pronounced rollercoaster
course (vertical minimum in the first 180°); if it was large and close to 0.5, this vertical minimum
was small or absent. Furthermore, factor analysis revealed a relation between cochlear base
width and length and absolute metric length, but not the height of the cochlea. Finally, the
analytical model allowed us to fit the cochlear 3D shape with residuals < 1 mm using the
cochlear length and width and their intersection with the modiolar axis. The model was validated
using the leave-one-out cross-validation technique and demonstrated an excellent fit using only
a few parameters of a real cochlea. This demonstrates that the analytical model can be used to
predict the length (angular and metric) and the shape of the human cochlea with high precision
from imaging data.
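The spiral-fitting step might look roughly like the following least-squares sketch. The specific spiral form (logarithmic radius, linear height) is one simple candidate and not necessarily the model used in the study; the starting values are arbitrary.

```python
import numpy as np
from scipy.optimize import least_squares

def spiral_xyz(params, theta):
    """Candidate 3-D cochlear spiral: radius r = a*exp(b*theta), height z = c*theta."""
    a, b, c = params
    r = a * np.exp(b * theta)
    return np.stack([r * np.cos(theta), r * np.sin(theta), c * theta], axis=1)

def fit_spiral(points_xyz, theta):
    """Least-squares fit of (a, b, c) to measured midline points at angles theta."""
    res = least_squares(lambda p: (spiral_xyz(p, theta) - points_xyz).ravel(),
                        x0=[4.0, -0.1, 0.1])
    return res.x
```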
Supported by Deutsche Forschungsgemeinschaft (Cluster of Excellence Hearing4all).
W30: WHAT CAN ECAP POLARITY SENSITIVITY TELL US ABOUT
AUDITORY NERVE SURVIVAL?
Michelle L. Hughes, Rachel A. Scheperle, Jenny L. Goehring
Boys Town National Research Hospital, Omaha, NE, USA
Recent physiological and perceptual studies with human cochlear implant (CI) recipients
suggest that anodic pulses are more effective than cathodic pulses for stimulating the deafened
auditory system. Modeling studies suggest that both polarities are effective at eliciting action
potentials when peripheral processes are intact, whereas the anodic phase is more effective
when peripheral processes are absent because the central axon is directly activated.
Differences in electrically evoked compound action potential (ECAP) responses between
polarities may therefore provide information about neural survival patterns on an individual
basis. Existing studies of polarity effects in humans have used non-standard pulse shapes to
mimic monophasic stimulation; therefore, little is known about stimulus polarity effects in CI
recipients using clinically standard biphasic pulses. Although behavioral and physiological
thresholds have also been used to estimate neural survival patterns, thresholds likely reflect
electrode-nerve distance more so than localized neural survival patterns. Taken together,
thresholds and polarity sensitivity might provide a more complete picture of neural survival
patterns and electrode-nerve proximity. The goal of this study is to examine the relation between
physiological measures of polarity sensitivity and threshold variation across the electrode array
within subjects. Standard biphasic pulses will be used to quantify polarity sensitivity as the
difference in maximum amplitude and slope of the ECAP amplitude growth function (AGF)
obtained with anodic- versus cathodic-leading biphasic pulses. ECAP thresholds will also be
obtained for both polarities. We hypothesize that larger maximum amplitude and slope
differences between polarities (greater polarity sensitivity) and larger threshold differences
between polarities should reflect poorer auditory nerve survival.
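One way to operationalize the planned slope and maximum-amplitude comparison is sketched below; the growing-portion cut-off, the returned measures, and all names are our assumptions rather than the study's analysis code.

```python
import numpy as np

def agf_metrics(levels_cl, amplitudes_uv):
    """Slope (linear fit over the growing portion) and maximum amplitude of an
    ECAP amplitude growth function; inputs are per-level mean ECAP amplitudes."""
    levels = np.asarray(levels_cl, float)
    amps = np.asarray(amplitudes_uv, float)
    grow = amps > 0.1 * amps.max()          # crude cut: ignore the flat floor
    slope = np.polyfit(levels[grow], amps[grow], 1)[0]
    return slope, amps.max()

def polarity_sensitivity(anodic, cathodic, levels):
    """Difference measures between anodic- and cathodic-leading AGFs, as one
    plausible operationalization of the polarity effect described above."""
    slope_a, max_a = agf_metrics(levels, anodic)
    slope_c, max_c = agf_metrics(levels, cathodic)
    return {"slope_diff": slope_a - slope_c, "max_ratio": max_c / max_a}
```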
To date, AGFs have been collected with both polarities for all electrodes in six subjects
with Cochlear devices. Across all electrodes and subjects, the maximum ECAP amplitude for
cathodic-leading pulses was 72% of the maximum amplitude for anodic-leading pulses, and
cathodic thresholds were on average 2.3 CL higher than anodic thresholds. Preliminary data
show that the slope and threshold patterns across electrodes are similar for the better
performers, but slope and threshold patterns are dissimilar for poorer performers. Results for a
larger group of subjects will be discussed.
Supported by grants from the NIH, NIDCD (R01 DC009595 and P30 DC 04662).
W31: VERTIGO AND COCHLEAR IMPLANTATION
Ioana Herisanu, Peter K. Plinkert, Mark Praetorius
Univ. ENT Clinic of Heidelberg, Heidelberg, DEU
Introduction: Vertigo is associated with reduced quality of life and increased anxiety.
Cochlear implantation carries an inherent risk of causing vertigo, as opening the inner ear potentially
endangers the organ of equilibrium. We report on cochlear implant patients of the Heidelberg
Implant Center with regard to this symptom pre- and postoperatively.
Material and Method: We performed a retrospective study on 239 patients implanted between
2009 and 2013 in our cochlear implant program. Forty-two patients already had a non-responsive
vestibular organ before implantation; two of them were not compensated before surgery
because of acute meningitis. In 219 patients the round-window approach was used; in 20
patients we performed a cochleostomy. In all but 2 patients full insertion was achieved. The data
were collected from our ENTstatistics database (Innoforce Est., Liechtenstein).
Results: Twenty-three of 239 patients had vertigo in the first 48 hours postoperatively;
none of them were confined to bed. A further 39 patients had some degree of vertigo four weeks
or later after the operation, only 15 of them related to surgery, 12 of them with vestibular organ
function loss still compensating. The late-onset vertigo and dizziness had other causes not
related to surgery, e.g., Menière's disease of the contralateral ear, vestibular neuropathy, cervical spine
degeneration, drug abuse and medication side effects, acoustic neuroma of the
contralateral ear, benign paroxysmal positional vertigo, neck muscle hardening, hypotension,
and barotrauma.
Thirty-three of the 239 patients did not receive protective perioperative cortisone; five of them
had vertigo in the first 48 h postoperatively. One patient, a revision surgery with cochleostomy
approach, had vertigo when stimulated with high current levels, probably because of the scarring
process.
Conclusions: Vertigo related to cochlear implantation in the soft-surgery era is still
present, yet not common. Cortisone seems to have a protective effect, and we recommend
perioperative administration. In this group, cochleostomy was not a risk factor for vertigo
symptoms. Vertigo occurring directly postoperatively is likely related to the operation; later-onset
vertigo mostly is not. Vertigo that occurred in the first 48 h postoperatively resolved in this
period or was compensated within weeks; late-onset vertigo persisted for a longer time and was not
related to the CI.
W32: TIME-DOMAIN SIMULATION OF VOLUME CONDUCTION IN THE
GUINEA PIG COCHLEA
Paul Wong1, Andrian Sue1, Phillip Tran1, Chidrupi Inguva2, Qing Li1, Paul Carter3
1 School of Aerospace, Mechanical and Mechatronic Engineering, The University of Sydney, Sydney, AUS
2 School of Electrical Engineering, The University of Sydney, Sydney, AUS
3 Cochlear Ltd., Sydney, AUS
All volume conduction models of the cochlea used to date have been formulated with purely
resistive tissue properties (i.e., the response to current injection does not change over time) in
order to invoke the quasi-static condition, which simplifies the analysis. However, experimental work in
both this and other fields has shown that biological tissues generally exhibit capacitive effects.
These capacitive effects are particularly strong for nerve and bone tissue, which (in conjunction
with the cochlear fluid spaces) make up the bulk of the volume in the cochlear region. Given that
the neural response is a function of the induced electric field, changes in the field over time may
play an important role that has hitherto not been investigated.
A refined model of the guinea pig cochlea (first shown at CIAP2013) was used in this new
study. The three key features of the study are: (1) frequency-dependent resistivity and
permittivity values that were sourced from the IT’IS Foundation database; (2) a biphasic
constant current pulse that was modelled as a Fourier sum of 5000 harmonics and injected at
electrode E4 of the modelled Hybrid-L8 array; and (3) solution of the tissue response in the
frequency domain using COMSOL Multiphysics, which was then reconstructed in MATLAB to
give the corresponding time-domain response.
The simulations revealed considerable differences between the quasi-static and time-dependent
formulations. Voltage measurements at the intracochlear electrodes and within the
scala tympani exhibited only weak time dependence, in line with the findings of Spelman.
However, current flow measurements through the nerve showed a substantial reduction in
magnitude over the duration of each phase, indicating that the current flow there is sensitive to
reactive as well as resistive components of the tissue impedance. Since, at the pulse widths
typically used in cochlear implants, nerve activation is well approximated by the electric charge
delivered to the tissues (the time integral of current), the existing assumption of nerve activation
being a simple product of current and time is not valid. Incorporating these time-dependent effects into
neural response models may help to improve the accuracy of in silico predictions.
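The solve-in-frequency, reconstruct-in-time workflow can be miniaturized as follows. The single-pole "tissue" transfer function merely stands in for the per-harmonic field solution computed with database resistivity/permittivity values; every number here is illustrative.

```python
import numpy as np

fs = 5_000_000                                   # 5 MHz stimulus sampling (assumed)
t = np.arange(4096) / fs
pulse = np.zeros_like(t)
pulse[(t >= 5e-6) & (t < 30e-6)] = 1.0           # 25 us anodic phase (illustrative)
pulse[(t >= 30e-6) & (t < 55e-6)] = -1.0         # 25 us cathodic phase

# Hypothetical dispersive tissue: a one-pole response standing in for the
# frequency-domain field solution at each harmonic of the pulse.
freqs = np.fft.rfftfreq(t.size, 1.0 / fs)
transfer = 1.0 / (1.0 + 1j * freqs / 50e3)       # 50 kHz pole, illustrative only

# Multiply in the frequency domain, then reconstruct the time-domain response.
response = np.fft.irfft(np.fft.rfft(pulse) * transfer, n=t.size)
```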
W33: FUNCTIONAL MODELLING OF NEURAL INTERAURAL TIME
DIFFERENCE CODING FOR BIMODAL AND BILATERAL COCHLEAR
IMPLANT STIMULATION
Andreas N Prokopiou, Jan Wouters, Tom Francart
ExpORL, Dept. of Neurosciences, KU Leuven, Leuven, BEL
Recent studies have shown that it is possible for bimodal and bilateral CI users to
perceive ITDs in modulated pulse-train stimuli such as those generated by clinical processors, and
that the shape of the temporal envelope determines ITD sensitivity. However, there is a wide
parameter space which has not been fully explored. To aid in the study of this phenomenon, a
computational model was developed. It amalgamates existing work on three different topics:
acoustic stimulation, electrical stimulation of the auditory nerve, and mammalian binaural
models.
The acoustical and electrical stimulation of the auditory nerve constitute the
peripheral processing part of the model. In both cases the output is a train of action potentials
generated by the auditory nerve. The model used for electrical stimulation is an existing
model developed by Bruce et al. [IEEE Trans. Biomed. Eng. 1999]. Temporal aspects of the
generated spike train, such as neuron refractory period, adaptation, and response latency to the
electric pulse, are well predicted by this model. The model used for acoustic stimulation is also
an existing model, developed by Zilany et al. [J. Acoust. Soc. Am. 2014]. It is a cascade
of phenomenological descriptions of the major functional components of the auditory periphery,
from the middle ear to the auditory nerve.
The binaural model developed here takes as inputs the action potentials generated by the left
and right auditory nerve stimulation models, either electrically or acoustically, thus permitting
modelling of bimodal listeners' physiology. The binaural processing in the model
is based on existing work describing the functional operation of the binaural system, such as
correlation measures and binaural cue identification. Furthermore, the dynamics of various central
nuclei, such as the MSO, LSO, and IC, are modelled to correspond to electrophysiological
experimental work. The output of the binaural model is the predicted ITD cue
encoded in the system and the ITD-firing rate curves of central nuclei. These values are
validated against experimental investigations of the mammalian binaural system, specifically
work done with cats by Smith et al. using a bilateral CI system [J. Neurosci., 2007]. Further
validation is done by comparing model predictions with experimental work done with both
acoustic and electric stimulation in human users, where Laback investigated envelope
modulation and its effect on binaural cues, specifically ITD cues [J. Acoust. Soc. Am. 2011].
This computational tool serves as a test-bench for developing temporal
enhancement stimulation strategies for bimodal and bilateral CIs. Future work is to optimise a
bimodal stimulation strategy by considering both objective metrics from the computational
model and subjective measures from behavioural experiments. Results will be presented at the
conference.
Acknowledgments: Part of ICanHear network, funding source FP7 Marie Curie ITN scheme.
Grant agreement no.:317521
W34: THE ELECTRICALLY EVOKED COMPOUND ACTION POTENTIAL,
COMPUTERIZED TOMOGRAPHY, AND BEHAVIORAL MEASURES TO
ASSESS THE ELECTRODE-NEURON INTERFACE
Lindsay A DeVries1, Rachel A Scheperle2, Julie A Bierer1
1 University of Washington, Seattle, WA, USA
2 Boys Town National Research Hospital, Omaha, NE, USA
Speech perception scores are widely variable among cochlear implant listeners. Part of
this variability is likely due to the status of the electrode-neuron interface (ENI): electrode
position and the integrity of auditory neurons. The aim of this study is to evaluate the possibility
of using the electrically evoked compound action potential (ECAP) as a tool to assess the ENI in
a clinical setting. To that end, ECAP measures, behavioral thresholds using focused
(quadrupolar) stimulation, computerized tomography (CT) to estimate electrode-to-modiolus
distance, and scalar location are evaluated for all available electrodes. Medial vowel and
consonant discrimination scores were collected using /hVd/ and /aCa/ contexts with each
subject's clinical processor settings. Ten unilaterally implanted adult subjects with Advanced
Bionics HiRes90k devices were tested. Results demonstrate a positive correlation between
focused thresholds and electrode-to-modiolus distance, and a negative correlation between
focused thresholds and ECAP peak amplitudes. Equivalent rectangular bandwidth (ERB), used
to quantify the width of the ECAP channel interaction functions, was positively correlated with
electrode-to-modiolus distance. Additionally, both electrode-to-modiolus distance and ERB were
significantly related to scalar location, indicating that electrodes located in the scala tympani are
typically closer to the modiolus and have narrower channel interaction functions. Finally,
subjects who had larger average ECAP peak amplitudes tended to have higher average speech
perception scores. In summary, different aspects of the ECAP are correlated with important
features of the electrode-neuron interface. The long-term goal of this research is to provide data
that will improve ECAP protocols for audiologists for use in individualized programming to
optimize neural stimulation via assessment of the electrode-neuron interface.
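The ERB used above to quantify channel-interaction width is, in its usual definition, the width of a rectangle with the function's peak height and the same total area; a minimal sketch, with array names assumed:

```python
import numpy as np

def erb_mm(electrode_pos_mm, interaction):
    """Equivalent rectangular bandwidth of an ECAP channel-interaction function:
    area under the (non-negative) function divided by its peak height."""
    x = np.asarray(electrode_pos_mm, float)
    y = np.clip(np.asarray(interaction, float), 0.0, None)
    return np.trapz(y, x) / y.max()
```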
W35: CORTICAL REPRESENTATIONS OF STIMULUS INTENSITY OF
COCHLEAR IMPLANT STIMULATION IN AWAKE MARMOSETS
Kai Yuen Lim1, Luke A Johnson1, Charles Della Santina2, Xiaoqin Wang1
1 Department of Biomedical Engineering, Johns Hopkins University School of Medicine, Baltimore, MD, USA
2 Departments of Biomedical Engineering and Otolaryngology, Johns Hopkins School of Medicine, Baltimore, MD, USA
Electrical stimulation of the cochlear nerve via a cochlear implant (CI) is successful in
restoring auditory sensation to individuals with profound hearing loss. However, many questions
remain unanswered regarding how the central auditory system processes the electrical
stimulation. Understanding how the brain processes CI stimulation should help guide
improvement of CI technology, but techniques for studying human cortical processing have low
spatial resolution, and the extent to which non-primate or anesthetized animal models represent
the human case is unclear. We therefore developed an alert non-human primate CI model in the
common marmoset (Callithrix jacchus) by implanting a multi-channel electrode array in one
cochlea while leaving the other cochlea acoustically intact. This preparation allows us to directly
compare a cortical neuron’s responses to acoustic and CI stimulation in the awake condition.
We found that acute, episodic CI stimulation was less effective in activating primary
auditory cortex (A1) neurons compared to acoustic stimulation. This may be explained by
broader cochlear excitation areas caused by electric stimulation compared to acoustic stimuli,
because many cortical neurons exhibit narrow frequency tuning and sideband inhibition. For
neurons driven by both CI and acoustic stimuli, we characterized responses as a function of
current level and sound intensity. A majority of these neurons showed monotonic responses to
CI stimuli; less than 20% had non-monotonic responses. This compares to acoustic responses,
of which 40% were non-monotonic. Non-monotonic CI-driven neurons showed non-monotonic
responses to acoustic stimuli, while ~25% of monotonic CI-driven neurons showed
non-monotonic responses to acoustic stimuli. The change from monotonic to non-monotonic
response in the same neuron suggested that CI and acoustic stimuli evoked different neural
circuits. Moreover, thresholds and saturation levels for non-monotonic CI-driven neurons were
significantly lower than those for monotonic CI-driven neurons; however, there was no
significant difference between the dynamic ranges of the two groups.
Consistent with clinical psychophysical data from CI users, dynamic ranges of A1 cortical
neuron responses to CI stimuli were much smaller than those to acoustic stimuli (3.4 dB vs 32
dB in our experiments). For a given A1 neuron, thresholds of CI and acoustic stimuli were
positively correlated, but their dynamic ranges were not significantly correlated. Response
latencies were also positively correlated between the two stimulation types. In addition, acoustic
stimuli usually evoked greater firing rates than CI stimuli.
These findings suggest that the coding mechanism for stimulus intensity differs between
the stimulation modes, and that CI stimulation is less efficient in driving A1 neurons.
W36: IMPEDANCE MEASURES FOR SUBJECT-SPECIFIC OPTIMIZATION OF
SPATIAL SELECTIVITY
Quentin Mesnildrey1, Olivier Macherey1, Frederic Venail2, Philippe Herzog1
1 Laboratoire de Mécanique et d'Acoustique, CNRS, Marseille, FRA
2 University Hospital Gui de Chauliac, Montpellier, FRA
Among the several attempts to achieve focused stimulation in cochlear implants (CIs), the phased
array strategy is the only one that relies on subject-specific measures of the intracochlear electrical field
[1]. Despite its theoretical appeal, recent psychophysical measures have not shown clear improvements
in terms of spatial selectivity compared to monopolar stimulation. There are three potential factors that
may limit the benefit of this strategy. First, the current levels assigned to each electrode are based on
electrical field measurements. However, the potential measured on the electrode that is being stimulated
mainly reflects the interface impedance and not the potential value of interest. This potential is, therefore,
extrapolated from neighboring measures. Errors brought by this extrapolation may impair the focusing.
Second, this strategy relies on the hypothesis that the intracochlear medium is purely resistive and that
phase information can be discarded. Although this has been shown for frequencies up to about 12 kHz,
electrical pulses used clinically have sharp onsets making their bandwidth extend beyond 12 kHz. Third,
this technique aims to cancel the electrical field at the electrode site and not at the level of the targeted
neural elements.
The present study examines these three aspects using both in vitro measurements and
recordings in CI users. The in vitro measurements were carried out using a HiFocus 1j electrode array
immersed in artificial perilymph. Measures were performed with the clinical Advanced Bionics HiRes90k
implant. Using multiple recordings with different onset times relative to the stimulation yielded an
aggregate sampling rate of up to 1.1 MHz which allowed us to sample fast transients for pulsatile
stimulation and to perform sine-wave measurements for frequencies up to 46 kHz. Impedance
measurements were made by stimulating one electrode and recording from the same or from other
electrodes of the array at different current levels and using different imposed resistive loads. Similar
measurements were also made in CI listeners using the same electrode array. Finally, in vitro electrical
field measurements were made at different distances from the array in order to evaluate the degradation
of spatial selectivity induced by errors in the estimation of the impedances.
The in vitro measures highlighted the failure of common RC circuits to describe the impedance of
stimulating electrodes. Here, we used a modified Randles circuit with a constant phase element, which
led to a much better fit of our data and to the possibility of estimating the imposed resistive load.
Preliminary measurements performed in CI users show that this model may also be valid in vivo. These
data, therefore, suggest that a modeling approach to estimate the potential at the stimulating electrode
site may be feasible.
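A modified Randles circuit with a constant phase element, Z_CPE = 1/(Q(jω)^α), can be evaluated as in the sketch below; the component values are placeholders, not the fitted ones.

```python
import numpy as np

def z_randles_cpe(f_hz, r_s=1e3, r_ct=50e3, q=1e-7, alpha=0.8):
    """Impedance of a modified Randles circuit: series access resistance r_s
    plus the parallel combination of a charge-transfer resistance r_ct and a
    constant phase element Z_CPE = 1/(q*(j*w)^alpha). Values are placeholders."""
    w = 2.0 * np.pi * np.asarray(f_hz, float)
    z_cpe = 1.0 / (q * (1j * w) ** alpha)
    return r_s + (r_ct * z_cpe) / (r_ct + z_cpe)

freqs = np.logspace(2, 4.66, 20)   # ~100 Hz to ~46 kHz, the measured range
print(np.abs(z_randles_cpe(freqs))[:3])
```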
This study is funded by a grant from the ANR (Project DAIMA ANR-11-PDOC-0022)
[1] van den Honert, C., & Kelsall, D. C. (2007). Focused intracochlear electric stimulation with phased array channels. J. Acoust. Soc. Am., 121(6), 3703-3716.
W37: DEVELOPMENT OF A MODEL FOR PATIENT-SPECIFIC SIMULATION OF
COCHLEAR IMPLANT STIMULATION
Ahmet Cakir, Jared A. Shenson, Robert F. Labadie, Benoit M. Dawant, Rene H. Gifford,
Jack H. Noble
Vanderbilt University, Nashville, TN, USA
Cochlear implants (CIs) are accepted as the standard-of-care for patients who experience
severe-to-profound sensory-based hearing loss. Although CIs are remarkably successful at restoring
hearing, it is rare to achieve natural fidelity due to the wide variability in outcomes, and many patients
experience below-average to poor benefit. Studies by our group have shown that the distance
relationship between the CI electrodes and the spiral ganglion (SG), as detected by our CT analysis
techniques, can be used to customize CI settings and improve outcomes. We call this Image-Guided CI
Programming (IGCIP). We now aim to develop comprehensive patient-specific electro-anatomical
models (EAMs). Compared to our current basic distance-based analysis of intra-cochlear electrode
position, EAMs have the potential to more accurately estimate patient-specific neural activation patterns
resulting from the patient’s unique electrode positioning within the cochlea. These benefits could lead to
more effective IGCIP strategies.
For IGCIP, we have developed an approach to accurately localize patient intra-cochlear anatomy
by fitting a high resolution microCT atlas constructed from multiple specimens to new patient CT images.
We plan to extend this approach to create EAMs using the microCTs that can similarly be fitted to patient
CT images. Our goal in this study is to optimize the design of the EAM. Our EAM approach is to use a
nodal analysis method in order to simulate the spread of current within the cochlea. A regularly-spaced
grid of model nodes is defined over the field of view of the microCT, and each node is labeled with a
different tissue resistivity class, identified using the microCT image. A current source is simulated at the
nodes where each implanted electrode is located. After solving the model, the density of electrical
current that results at sites in the SG along Rosenthal’s Canal (RC) is measured to estimate neural
activation. In this study we implemented and executed this proof-of-concept model. Estimated profiles of
neural activation along the RC were found and will be presented at the conference. Additionally, the
resolution of the model (i.e., the spacing between nodes) is an important variable to optimize in model
design; we therefore measured how sensitive our model results are to reduction in model resolution. The
resolution must be fine enough to ensure accurate results, but greater resolution leads to higher
computational cost.
Using a specialized computational server with 48GB memory, our EAM was used to estimate
profiles of neural activation along RC in our microCT image, and these will be presented at the
conference. We computed our EAM over five resolutions with node volume starting at 1.6×10^-5 mm^3 (the
resolution of the µCT image) and multiplied the node volume by a factor of eight at each resolution level. The
running times for solving the models were 276 minutes, 32 minutes, 2.75 minutes, 10 seconds, and 0.59
seconds, respectively. The memory usage for each model was 27.2 GB, 3.4 GB, 0.4 GB, 0.1 GB and
0.01 GB, respectively. The mean percent difference in the current density measured at 49 sites along RC
between the finest resolution and the other resolution levels was calculated to be 5.98%, 15.28%,
46.88% and 45.46%, going from finest to coarsest resolution, respectively. These results show that a
model with a node size of 1.2×10^-4 mm^3 has fine enough resolution to give accurate results (<6%
difference from the finest resolution level) and is coarse enough to be solvable with standard
computational hardware in under an hour.
This work represents a first step toward creating patient-specific CI stimulation models, an effort that, to our knowledge, has not been attempted previously. These patient-specific EAMs will not only permit the
design and implementation of different IGCIP strategies, but also may provide further insight into factors
that affect patient outcomes, with potential implications for future electrode design and surgical
techniques.
This work was supported in part by grant R01DC014037 from the NIDCD.
W38: RELATIONSHIP BETWEEN COCHLEAR IMPLANT ELECTRODE
POSITION, ELECTROPHYSIOLOGICAL, AND PSYCHOPHYSICAL
MEASURES
Jack Noble, Andrea Hedley-Williams, Linsey Sunderhaus, Robert Labadie, Benoit
Dawant, Rene Gifford
Vanderbilt University, Nashville, TN, USA
Cochlear implants (CIs) are arguably the most successful neural prosthesis to date. However, a
significant number of CI recipients experience marginal hearing restoration, and, even among the best
performers, restoration to normal fidelity is rare. Work from several groups has demonstrated that there
is a strong link between outcomes with CIs and the position of the electrodes within the cochlea. We
have developed CT image processing techniques that can be used to directly detect the intra-cochlear
positions of implanted electrodes for individual CI users. This permits improving outcomes by
customizing CI processor settings to account for sub-optimal CI positioning. We call this Image-Guided
CI Programming (IGCIP). A limitation of this approach is the need for a post-implantation CT image. In
this study, we investigate whether a combination of electrophysiological and psychophysical measures
could predict electrode position and alleviate the need for post-operative CT to implement IGCIP.
For each of 5 experienced adult CI users for whom postoperative CT images were available, we
collected the following electrophysiological and psychophysical measures for each electrode: impedance,
behavioral threshold (T) with monopolar stimulation, behavioral T level with partial tripolar stimulation,
most comfortable (M) level with monopolar stimulation, M level with partial tripolar stimulation, and
electrode pitch discrimination. Using our CT analysis techniques, we measured the position of each
electrode within the cochlea in terms of three metrics: distance to modiolus, angular insertion depth, and
scalar location relative to the basilar membrane. Linear regression models were constructed using the
electrophysiological and psychoacoustic measures to predict each of the three electrode position
measures. To predict each position measure, one model (“limited model”) was created using only impedance and the T and M levels as predictors, since these metrics are commonly available clinically; another model (“full model”) was created using the full set of predictor metrics defined above, plus the ratios of the partial tripolar M and T levels to the monopolar M and T levels, as it has been suggested that these ratios may be predictive of electrode position (Bierer et al. 2010). A cross-validation procedure was used to verify model prediction results.
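A workflow of this kind can be sketched as follows (an assumed scikit-learn implementation, not the authors' code; X holds the per-electrode clinical predictors and y one CT-derived position measure):

    import numpy as np
    from scipy.stats import pearsonr
    from sklearn.linear_model import LinearRegression
    from sklearn.model_selection import LeaveOneOut, cross_val_predict

    def cross_validated_r(X, y):
        """Leave-one-out predictions from a linear model, correlated with the
        CT-measured values; r near zero (or negative) means 'not predictive'."""
        preds = cross_val_predict(LinearRegression(), X, y, cv=LeaveOneOut())
        return pearsonr(preds, y)   # (r, p)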
Cross-validation showed that predictions made with the limited model had a weak, non-significant (p=0.06) correlation with electrode distance and a strong inverse correlation with scalar position, suggesting that the limited-model inputs are not predictive of these two electrode position measures. A strong (r=0.69), significant correlation between the limited-model predictions and angular depth was observed. With the full model, a weak (r=0.29) correlation with distance and a strong (r=0.58) correlation with angular depth were detected at p<0.05, and no significant correlation with scalar position
was observed.
Our results indicate that a relationship exists between electrode position in the cochlea and
clinically measurable electrophysiological and psychophysical metrics. Interestingly, this relationship
may be strong enough to reliably estimate angular depth of insertion. This metric may by itself be useful
for customizing CI frequency map settings. The clinical measures do not appear to be predictive enough
to reliably estimate modiolar distance or scalar location without CT images. However, potential exists to
use these clinical and CT-based metrics in a complementary fashion to predict neural survival or tissue
growth, which could significantly improve IGCIP strategies and lead to improved outcomes with CIs.
Future studies will also examine the relationship between electrode position and electrically
evoked compound action potentials.
This work was supported in part by grant R01DC014037 from the NIDCD.
W39: COMPARISONS OF SPECTRAL RIPPLE NOISE RESOLUTION
OBTAINED FROM MORE RECENTLY IMPLANTED CI USERS AND
PREVIOUSLY PUBLISHED DATA
Eun Kyung Jeon1, Christopher W. Turner1, Sue A. Karsten1, Bruce J. Gantz1, Belinda A.
Henry2
1 University of Iowa, Iowa City, IA, USA
2 University of Queensland, Brisbane St Lucia, AUS
This study revisits the issue of the spectral ripple noise resolution abilities of CI users.
The spectral ripple noise resolution thresholds of CI recipients implanted in the last 10 years
were compared to those of CI recipients implanted 15-20 years ago, as well as those of normal-hearing (NH) and hearing-impaired (HI) listeners (from Henry et al., 2005).
In the Henry et al. (2005) study, 12 NH young adults, 32 HI adults, and 23 CI subjects
participated. The CI subjects were implanted between 1996 and 2002. In the current study, 28
CI subjects, a total of 32 ears, participated. They were implanted between 2003 and 2013. The
spectral ripple noise thresholds were collected in the same lab using identical stimuli and
procedures as the Henry et al. (2005) study. Additionally, two speech perception tests were
administered to all CI users: a consonant recognition test in quiet and a speech recognition test
in background noise.
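For reference, spectrally rippled noise of the kind used in such studies can be generated roughly as below (an illustrative sketch; the component count, ripple depth, and frequency range are assumptions, not the exact Henry et al. parameters):

    import numpy as np

    def ripple_noise(ripples_per_octave, phase, fs=44100, dur=0.5,
                     f_lo=100.0, f_hi=5000.0, n_comp=800, depth_db=30.0):
        """Sum of log-spaced tones whose levels follow a sinusoidal spectral
        envelope on a log-frequency axis; phase flips ripple peaks/valleys."""
        t = np.arange(int(fs * dur)) / fs
        freqs = np.logspace(np.log10(f_lo), np.log10(f_hi), n_comp)
        octaves = np.log2(freqs / f_lo)
        env_db = 0.5 * depth_db * np.sin(
            2 * np.pi * ripples_per_octave * octaves + phase)
        amps = 10.0 ** (env_db / 20.0)
        rng = np.random.default_rng(0)
        sig = np.zeros_like(t)
        for a, f, ph in zip(amps, freqs, rng.uniform(0, 2 * np.pi, n_comp)):
            sig += a * np.sin(2 * np.pi * f * t + ph)
        return sig / np.max(np.abs(sig))

In studies of this type, a trial typically contrasts a standard (phase = 0) and a phase-reversed (phase = pi) ripple; the highest ripple density at which the reversal remains detectable is taken as the resolution threshold.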
The more recently implanted CI recipients showed significantly higher spectral ripple noise resolution scores than those in the Henry et al. (2005) study. There was no
significant difference in spectral ripple resolution for these recently implanted subjects compared
to hearing-impaired listeners using hearing aids. The recently implanted CI recipients obtained
significantly higher scores in both consonant recognition and speech recognition thresholds in
noise than those in the Henry et al. study. The strong positive correlation between spectral
ripple resolution and speech recognition evident in earlier data is also seen in data collected as
part of this study. Possible reasons for the improved spectral ripple resolution among recently
implanted CI users will be discussed.
Work supported by NIH-NIDCD grants R01 DC000377 and P50 DC000242.
Reference:
Henry, B. A., Turner, C. W., & Behrens, A. (2005). Spectral peak resolution and speech
recognition in quiet: Normal hearing, hearing impaired, and cochlear implant listeners. J Acoust
Soc Am, 118(2), 1111-1121.
W40: ENVELOPE INTERACTIONS IN MULTI-CHANNEL AMPLITUDE
MODULATION FREQUENCY DISCRIMINATION BY COCHLEAR IMPLANT
USERS
John J. Galvin III1,2,3, Sandy Oba1, Deniz Başkent2,3, and Qian-Jie Fu1
1 Department of Head and Neck Surgery, David Geffen School of Medicine, UCLA, Los Angeles, CA, USA
2 Department of Otorhinolaryngology, Head and Neck Surgery, University Medical Center Groningen, University of Groningen, Groningen, The Netherlands
3 Research School of Behavioral and Cognitive Neurosciences, Graduate School of Medical Sciences, University of Groningen, Groningen, The Netherlands
Previous cochlear implant (CI) studies have shown that single-channel amplitude
modulation frequency discrimination (AMFD) can be improved when coherent modulation is
delivered to additional channels. It is unclear whether this multi-channel advantage is due to
multiple representations of the temporal envelope or to component channels that provide better
temporal processing. In this study, multi-channel AMFD was measured in CI subjects using a 3-alternative forced-choice (3AFC) procedure (“which interval is different?”). For the reference stimulus, the reference AM (100 Hz) was delivered to all three channels. For the probe stimulus, the target AM (101, 102, 104, 108, 116, 132, 164, 228, or 256 Hz) was delivered to one of three channels, and the reference AM (100 Hz) was delivered to the other two channels. The spacing
between electrodes was varied to be wide or narrow to test different degrees of channel
interaction. For the wide spacing, performance was significantly better when the target AM was
delivered to the apical or middle channel, rather than the basal channel. For the narrow spacing,
there was no significant effect of target AM channel. Results showed that for small reference-probe AM frequency differences, there was often a greater improvement when the target AM was delivered to one of three channels, rather than to all three channels, especially when channels were narrowly spaced. Given the 3AFC procedure, subjects may have attended to easily perceptible envelope interactions when the target AM was delivered to only one of three channels. These interactions suggest that in CI signal processing, where similar (but not identical) temporal envelopes are typically delivered to adjacent channels, qualities other than AM rate pitch may contribute to perception. The results suggest that envelope
interactions among multiple channels may be quite complex, depending on the type of
information presented to each channel and the relative independence of the stimulated
channels.
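The reference/probe construction described above amounts to envelopes like the following (an illustrative sketch; the carrier pulse rate, duration, and modulation depth are assumed values):

    import numpy as np

    def am_envelope(fm, rate_pps=2000, dur=0.3, depth=1.0):
        """Sinusoidal AM envelope sampled at the carrier pulse times."""
        t = np.arange(0.0, dur, 1.0 / rate_pps)
        return t, 0.5 * (1.0 + depth * np.sin(2 * np.pi * fm * t))

    # Probe: target AM (e.g., 108 Hz) on one channel, reference AM (100 Hz) on
    # the other two; the reference stimulus uses 100 Hz on all three channels.
    t, target = am_envelope(108.0)
    _, reference = am_envelope(100.0)
    probe_channels = [target, reference, reference]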
W41: SPECTRAL RESOLUTION AND AUDITORY ENHANCEMENT IN
COCHLEAR-IMPLANT USERS
Lei Feng, Andrew Oxenham
University of Minnesota, Minneapolis, MN, USA
A target tone embedded in a simultaneous multi-tone masker can be enhanced by
presenting the masker itself first (called the precursor). This effect, referred to as auditory
enhancement, has been observed in normal auditory perception, and may reflect the ability of
our auditory system to normalize or “whiten” the sound representation in order to efficiently
detect new acoustic events and to produce perceptual invariance in noisy environments.
Auditory enhancement has not been studied extensively in cochlear-implant (CI) users, but
existing reports suggest reduced enhancement, possibly due to reduced frequency selectivity.
As enhancement is one aspect of perception that could in principle be recreated through CI
processing, a better understanding of enhancement in CI users may help in developing
techniques to compensate for perceptual differences between CI users and normal-hearing
(NH) listeners.
The current study measured enhancement, using a (spectral or place) pitch comparison
paradigm under simultaneous masking, as a function of precursor duration and the gap between
the precursor and masker in CI users using direct stimulation. For comparison, NH listeners
were tested in the same task, with stimuli that were processed with a 16-channel noise-excited
envelope vocoder, with each channel center frequency corresponding to an electrode along a
standard CI map. A target was presented at one electrode (or vocoder channel), randomly
selected from electrodes 6 through 11 in each trial, and the two maskers were located 4
electrodes apically and basally away from the target. After a 200-ms gap, a probe (either the
target electrode or one of its immediately adjacent neighbors) was presented. Listeners were asked
whether the probe had been present in the target-plus-masker complex. For the CI users, the
current level of each masker component was set to 20% of the dynamic range (DR); for the NH listeners, the masker level was 45 dB SPL per noise band. The levels of the target and probe were varied adaptively to track the 70.7%-correct point on the psychometric function. In the enhanced condition, a precursor (a copy of the masker) was presented before the target-plus-masker complex. Enhancement was defined as the difference in threshold between precursor-absent and precursor-present conditions. Three precursor durations (62.5, 250, and 1000 ms) and three precursor-masker gaps (10, 100, and 1000 ms) were measured, for a total of nine conditions. In addition, for NH listeners, two different filter slopes (24 dB/oct and 48 dB/oct) were
used to simulate different current spreads so we could examine the effect of spectral resolution
on enhancement.
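The adaptive track used to target the 70.7%-correct point is conventionally a 2-down/1-up staircase (Levitt, 1971); a generic sketch, with illustrative step and stopping rules and with lower level assumed to mean a harder trial, is given below:

    def two_down_one_up(run_trial, start_level, step, n_reversals=10):
        """Generic 2-down/1-up track: run_trial(level) returns True when the
        response is correct; converges on ~70.7% correct."""
        level, n_correct, direction = start_level, 0, None
        reversals = []
        while len(reversals) < n_reversals:
            if run_trial(level):
                n_correct += 1
                if n_correct == 2:              # two correct in a row -> harder
                    n_correct = 0
                    if direction == 'up':
                        reversals.append(level) # turnaround: record a reversal
                    direction = 'down'
                    level -= step
            else:                               # one incorrect -> easier
                n_correct = 0
                if direction == 'down':
                    reversals.append(level)
                direction = 'up'
                level += step
        return reversals  # threshold ~ mean of the later reversal levels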
Preliminary data show that enhancement effects may be observed in some CI users, and that the amount of enhancement is larger for longer precursors and a shorter gap between the precursor and the target-plus-masker complex. A similar time course of enhancement was observed
in NH listeners with the vocoded stimuli and the amount of enhancement decreased with
increased simulated current spread. Our results suggest that the reduced enhancement
measured in CI users is at least partly due to poor spectral resolution.
W42: MONOPOLAR PSYCHOPHYSICAL DETECTION THRESHOLDS
PREDICT SPATIAL SPECIFICITY OF NEURAL EXCITATION IN COCHLEAR
IMPLANT USERS
Ning Zhou
East Carolina University, Greenville, NC, USA
It is widely accepted that psychophysical detection thresholds are dependent on at least
two peripheral factors: the density of local nerve fibers and the distance of the electrode from
the neural target. In cases of low neural density and long electrode-neuron distance, neural
excitation tends to spread to the neighboring stimulation sites. For detecting a stimulus that is of
relatively high rate and long duration, threshold depends on a third variable: the neurons' temporal responsiveness to the large number of narrowly spaced pulses. Auditory neurons' temporal responsiveness depends on fiber health, reflected in properties such as refractory period and adaptation, which do not necessarily determine spatial tuning. The first aim of the study was to
determine whether detection thresholds predict spatial specificity of excitation when the factor of
temporal responsiveness of the neurons is removed. The second aim of the study was to
determine whether speech recognition improves with the removal of electrodes that show poor
spatial specificity of excitation.
Multiple electrodes were tested in nine ears implanted with the Cochlear Nucleus device.
Detection thresholds were measured in monopolar mode (MP 1+2) for low-rate (40/80 pps, 250
ms), short-duration (640 pps, 15.625/31.25 ms), high-rate and long-duration (640 pps, 250 ms),
and the clinically commonly used (900 pps, 500 ms) pulse trains. The low-rate thresholds were
free of the temporal factors, whereas the short-duration thresholds were relatively less prone to adaptation (fewer pulses) but were still subject to refractoriness (high rate). Spatial
specificity of excitation was measured for the same electrodes using a forward-masking
paradigm, where a forward masker (300 ms) was presented at locations at and surrounding the
probe (20 ms). Averaged slope of the basal and apical forward-masking curves (normalized to
peak masking) quantified spatial specificity of excitation at the probe. There was no correlation
between detection thresholds for either of the two high-rate and long-duration stimuli and spatial
tuning, as expected. Correlation with spatial tuning improved for the short-duration threshold (R2 = 0.53 for 31.25 ms) and became highly significant for the low-rate thresholds that were free of both temporal factors (R2 = 0.74 for 80 pps). Note that the 31.25-ms and 80-pps stimuli had the same number of pulses. Speech reception thresholds for CUNY sentences in amplitude-modulated noise and recognition of TIMIT sentences both significantly improved using an
experimental map that removed the 5 electrodes with the highest 80-pps thresholds, relative to
the subjects’ clinical map (p < 0.05). The experimental map avoided turning off two adjacent
electrodes.
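The spatial-specificity metric described above, the averaged slope of the peak-normalized forward-masking curve, can be sketched as follows (the linear fit and interior-peak assumption are simplifications, not the author's exact analysis):

    import numpy as np

    def spatial_specificity(positions, masking_db):
        """Average of apical/basal slope magnitudes of a forward-masking curve,
        normalized to peak masking (steeper slopes = sharper spatial tuning)."""
        m = np.asarray(masking_db, float) / np.max(masking_db)
        k = int(np.argmax(m))            # probe-aligned peak (assumed interior)
        apical = np.polyfit(positions[:k + 1], m[:k + 1], 1)[0]
        basal = np.polyfit(positions[k:], m[k:], 1)[0]
        return 0.5 * (abs(apical) + abs(basal))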
The results of the study suggest that monopolar detection thresholds can be used to
estimate spatial tuning when the thresholds are not complicated by variations in the neurons’
temporal characteristics. Site selection based on low-rate thresholds may provide cochlear
implant users with stimulation maps that are superior in spatial tuning.
Work supported by the Emerging Research Grants from Hearing Health Foundation.
W43: DYNAMIC BINAURAL SYNTHESIS IN COCHLEAR-IMPLANT
RESEARCH: A VOCODER-BASED PILOT STUDY
Florian Voelk
Bio-Inspired Information Processing, IMETUM, Technische Universität München, and WindAcoustics UG, Munich,
DEU
Binaural synthesis (BS) is a sound-reproduction method based on the convolution of sound signals with the
impulse responses of the sound-pressure propagation paths from the sources to the eardrums. The convolution
products are typically presented by headphones, and the simulation is valid if the scenario can be regarded as a
linear system. If BS is implemented adaptively with real-time capability, the impulse responses can be adjusted
based on head position and orientation, allowing the listeners to move while the ear signals remain correct
(dynamic binaural synthesis, DBS). For research and fitting purposes in the field of hearing aids, BS can save effort
and time by providing different acoustical conditions in the lab. As BS aims at reproducing ear signals, it is not
directly applicable to hearing aids or cochlear implants (CIs). We extended the BS theory accordingly, showing that
BS can be used with hearing aids, especially when no ear signals are involved, as for example with CIs.
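A minimal sketch of the convolution at the core of BS, and of its dynamic extension, is given below (illustrative only; get_hrirs and read_head_orientation are placeholder names for an impulse-response lookup and a head tracker, and a real system would crossfade or overlap-add across blocks):

    import numpy as np

    def binaural_synthesis(block, hrir_l, hrir_r):
        """Static BS: convolve one source block with the left/right-ear
        impulse responses of the propagation paths."""
        return np.stack([np.convolve(block, hrir_l),
                         np.convolve(block, hrir_r)])

    def dynamic_bs(blocks, get_hrirs, read_head_orientation):
        """Per-block DBS: re-select the impulse-response pair from the
        tracked head orientation before each block is rendered."""
        out = []
        for block in blocks:
            hrir_l, hrir_r = get_hrirs(read_head_orientation())
            out.append(binaural_synthesis(block, hrir_l, hrir_r))
        return out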
Approximating in normal-hearing listeners the hearing sensations assumed for CI and hearing-aid users, for example with vocoders, has proven helpful in research and development. Although omitted by some authors, the specification of the synthesis system is crucial in that case, as the order of the original transmission chain must not be changed. This chain consists of the following paths:
a) Sources to hearing-aid inputs (dependent on head position and orientation)
b) Hearing-aid inputs to speech-processor outputs (typically non-linear)
c) Speech-processor outputs to final receivers.
Given this chain, typical playback procedures show the following properties and possible errors:
- Conventional loudspeaker presentation of the simulated hearing-aid or CI outputs results in erroneous ear signals,
as paths c) instead of a) are erroneously modified on listener movements. It is incorrect to argue that ear-signal
variations on listener movements are taken into account by paths c) instead of a), as the intermediate paths b),
involving the speech processing, typically show non-linear characteristics.
- Conventional headphone presentation provides constant paths a) and c). However, the simulation inputs remain
constant, incorrectly reflecting paths a). As a consequence, little to no externalization typically occurs, and the hearing sensations are located inside the head.
- When simulating a static situation with static BS, paths a) are represented correctly. However, in a dynamic
situation, especially if the listener moves, the speech-processor input signals erroneously remain static, incorrectly
simulating paths a).
- DBS presentation allows all paths to be included correctly.
The ability of normal-hearing subjects to differentiate directional information extracted from vocoded stimuli
may be considered an indication of the discriminative directional information potentially present in cochlear-implant listening. In a pilot study with DBS, we addressed the angle formed with respect to the listener's head by two sound sources that are just perceptually differentiable in position when sounded in succession (minimum-audible angle, MAA). MAAs were measured with two different sets of analysis-channel configurations, with 6 and 8 normal-hearing subjects (22 to 31 years), respectively. The DBS-output signals were passed through symmetric but interaurally uncorrelated vocoder systems before being presented to the listeners.
Analysis of variance (ANOVA) indicated a significant main effect of channel configuration for both sets [F(5,45)=2.49; p=0.0447] and [F(7,63)=2.51; p=0.0243]. Informal reports and empirical listening point toward a tendency for up to three hearing sensations to arise simultaneously during the DBS experiment: one at the intended position and two at the listeners' ears, presumably due to unrealistically low interaural correlation. The MAAs in general appear plausible compared to the situation without vocoder and to results reported for CI listening (between 3° and 8°). However, in none of the vocoded conditions were MAAs comparable to those of the normal-hearing situation. The results indicate no clear dependency of the MAA on the number of analysis channels, although a tendency toward slightly increasing angles for more than six channels is visible.
In summary, this pilot study shows the applicability and potential benefit of DBS in audiological research: by providing more realistic stimuli than most conventional procedures, it allows otherwise hidden phenomena to be revealed and studied under controlled but realistic acoustic conditions.
W44: PERCEPTUAL SPACE OF MONOPOLAR AND ALL-POLAR STIMULI
Jeremy Marozeau1, Colette McKay2
1 Technical University of Denmark, København, DNK
2 Bionics Institute, East Melbourne, AUS
In today’s cochlear implant systems, the monopolar electrode configuration is the most
commonly used stimulation mode, requiring only a single current source. However, it has been
argued that it should be possible to produce narrower current fields with an implant that allows
the simultaneous activation of three (tripolar mode) or more independent current sources
(multipolar mode). In this study the sound quality induced by the simultaneous activation of all
22 electrodes, the all-polar mode, was investigated. Five patients, implanted with a research
device connected via a percutaneous connector, took part in this experiment. They were asked
to judge the sound dissimilarity between pairs of stimuli presented in monopolar and all-polar
mode. The stimuli were designed to produce two different regions of neural excitation, either by
stimulating two individual electrodes sequentially in monopolar mode, or, in all-polar mode, by
selecting the current levels on each electrode appropriately to confine the regions activated to
two narrow bands. Both the separation between the two target regions and their overall positions were varied. The dissimilarity ratings were analysed with a multidimensional scaling technique, and a two-dimensional space was produced. The first dimension was highly
correlated with the overall position of the target regions along the electrode array for both
modes. The second dimension was moderately correlated with the distances between the two
regions for both modes. Although the all-polar and monopolar perceptual spaces largely
overlap, a shift upward along the first dimension can be observed for the all-polar stimuli. This
suggested that the all-polar stimuli are perceived with a higher pitch than the monopolar stimuli.
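The scaling step can be sketched as follows (an assumed workflow using scikit-learn's MDS, not the authors' analysis; d_matrix is a symmetric matrix of averaged dissimilarity ratings):

    from sklearn.manifold import MDS

    def embed_dissimilarities(d_matrix, n_dim=2, seed=0):
        """Embed a precomputed dissimilarity matrix in n_dim dimensions;
        returns one n_dim-coordinate point per stimulus."""
        mds = MDS(n_components=n_dim, dissimilarity='precomputed',
                  random_state=seed)
        return mds.fit_transform(d_matrix)

Each resulting dimension can then be correlated with stimulus properties (here, overall place of stimulation and region separation).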
This project is funded by The Garnett Passe and Rodney Williams Memorial Foundation and
supported by Cochlear Ltd. The Bionics Institute acknowledges the support it receives from the
Victorian Government through its Operational Infrastructure Support Program.
W45: STIMULATING ON MULTIPLE ELECTRODES CAN IMPROVE TEMPORAL PITCH
PERCEPTION
Richard Penninger, Waldo Nogueira, Andreas Buechner
Medical University Hannover, Hannover, DEU
Music perception and appraisal are generally very poor in CI subjects, mainly because pitch is inadequately transmitted by the current clinically used sound processors, thereby limiting CI user performance (Limb and Roy 2013). Compared to place pitch, rate pitch has the advantage of being able to provide a continuum of pitches on a single electrode up to approximately 300 Hz (Zeng 2002). Recent studies have shown that stimulating on multiple electrodes can improve temporal pitch perception (Penninger et al. 2015; Venter and Hanekom 2014).
In the present study, it was hypothesized that CI subjects would perform better with an increasing number of electrodes stimulated per cycle, and that stimulating on multiple electrodes in very apical locations benefits temporal pitch perception.
Med-El CI subjects were asked to pitch-rank stimuli presented with direct electrical stimulation. The pulses were applied on one or three electrodes in the basal, middle, and apical regions of the cochlea. Stimulation rates ranged from 100 to 1200 pps.
Listeners showed the previously reported performance pattern in most conditions with
very good performance at the lowest standard rates and deteriorating performance to near
chance level at the highest rate tested.
Preliminary results show that stimulating on apical locations on multiple electrodes can
improve temporal pitch perception performance.
This work was supported by the DFG Cluster of Excellence EXC 1077/1 "Hearing4all".
W46: SOUND QUALITY OF ELECTRIC PULSE-TRAINS AS FUNCTION OF
PLACE AND RATE OF STIMULATION WITH LONG MED-EL ELECTRODES
Katrien Vermeire1, Annes Claes2, Paul Van de Heyning2, David M Landsberger3
1 Long Island Jewish Medical Center, New Hyde Park, NY, USA
2 University Hospital Antwerp, Antwerpen, BEL
3 New York University School of Medicine, New York, NY, USA
Objectives: Although it has been previously shown that changes in temporal coding
produce changes in pitch in all cochlear regions, research has suggested that temporal coding
might be best encoded in relatively apical locations. We hypothesized that although temporal
coding may provide useable information at any cochlear location, low rates of stimulation might
provide better sound quality in apical regions which are more likely to encode temporal
information in the normal ear. In the present study, sound qualities of single electrode pulse
trains were scaled to provide insight into the combined effects of cochlear location and
stimulation rate on sound quality.
Design: Ten long-term users of MED-EL cochlear implants with 31 mm electrode arrays
(Standard or FLEXSOFT) were asked to scale the sound quality of single electrode pulse trains
in terms of how “Clean”, “Noisy”, “High”, and “Annoying” they sounded. Pulse trains were
presented on most electrodes between 1 and 12 representing the entire range of the long
electrode array at stimulation rates of 100, 150, 200, 400, or 1500 pulses per second.
Results: While high rates of stimulation were scaled as having a “Clean” sound quality across the entire array, only the most apical electrodes (typically 1 through 3) were considered “Clean” at low rates. Low rates on electrodes 6 through 12 were not rated as “Clean”, while the low-rate quality of electrodes 4 and 5 was typically in between. Scaling of “Noisy” responses showed an approximately inverse pattern to that of “Clean” responses. “High” responses show the trade-off
between rate and place of stimulation on pitch. Because “High” responses did not correlate with
“Clean” responses, subjects were not rating sound quality based on pitch.
Conclusions: If explicit temporal coding is to be provided in a cochlear implant, it is likely
to sound better when provided apically. Additionally, the finding that low rates sound clean only
at apical places of stimulation is consistent with previous findings that a change in rate of
stimulation corresponds to an equivalent change in perceived pitch at apical locations.
Collectively, the data strongly suggest that temporal coding with a cochlear implant is optimally
provided by electrodes placed well into the second cochlear turn.
W47: LOUDNESS RECALIBRATION IN NORMAL-HEARING LISTENERS AND
COCHLEAR-IMPLANT USERS
Ningyuan Wang1, Heather Kreft2, Andrew J. Oxenham1
1 Department of Psychology, University of Minnesota, Minneapolis, MN, USA
2 Department of Otolaryngology, University of Minnesota, Minneapolis, MN, USA
Auditory context effects, such as loudness recalibration and auditory enhancement, have
been observed in normal auditory perception, and may reflect a general gain control of the
auditory perceptual system. However, little is known about whether cochlear-implant (CI) users
experience these effects. Discovering whether and how CI users experience loudness
recalibration should provide us with a better understanding of the underlying mechanisms.
We examined the effects of a long-duration (1 s) intense precursor on the perception of
loudness of shorter-duration (200 ms) target and comparison stimuli. The precursor and target
were separated by a silent gap of 50 ms, and the target and comparison were separated by a
silent gap of 2 s. For CI users, all the stimuli were delivered as pulse trains directly to the
implant. The target and comparison stimuli were always presented to a middle electrode
(electrode 8), and the position of the precursor was parametrically varied from electrode 2
through 14. For normal-hearing listeners, bandpass noise was used as a stimulus to simulate
the spread of current produced by CIs. The center frequencies of stimuli were determined by a
standard frequency map for16-channel CIs, corresponding to selected electrodes in the CI
users.
Significant loudness recalibration effects were observed in both normal-hearing subjects
and CI users. As in previous studies, the effect size in normal-hearing listeners increased with
increasing level difference between precursor and target. However, this trend was not observed
in results from CI users.
The results confirm the effects associated with loudness recalibration in normal-hearing
listeners. The differences between the results from CI users and normal-hearing listeners may
be explained in terms of a “dual-process” hypothesis, which has been used to explain earlier
data from normal-hearing listeners.
W48: COCHLEAR IMPLANT INSERTION TRAUMA AND RECOVERY
Bryan E. Pfingst1, Deborah J. Colesa1, Aaron P. Hughes1, Stefan B. Strahl2, Yehoash
Raphael1
1 University of Michigan, Ann Arbor, MI, USA
2 MED-EL GmbH, Innsbruck, AUT
There is a growing realization that preservation of acoustic hearing and cochlear
physiology during cochlear implantation has meaningful benefits for cochlear implant function
with electrical plus acoustic hearing (EAS) and also for electrical hearing alone. Improvements
in implant design and surgical technique have been made to minimize insertion trauma. In
guinea pigs, we have assessed the effects of insertion trauma with psychophysical and
electrophysiological responses to both acoustic and electrical stimulation. In animals implanted
in a hearing ear, elevations in acoustic psychophysical detection thresholds of 15 dB or more relative to pre-implant baselines have been observed in approximately 90% of cases studied.
Threshold elevations typically peaked within two weeks after implantation and then tended to
recover spontaneously toward the pre-implant baseline. The degree of recovery, assessed in
terms of stable post-recovery thresholds relative to pre-implant thresholds, was inversely
correlated with the degree of post-implant loss; the greater the loss, the poorer the recovery (r2
= 0.67; p = 0.0001 in the region of the implant: 8 to 24 kHz).
Post-implant threshold impairment and subsequent recovery were also seen in
electrically-evoked compound action potential (ECAP) growth function thresholds and slopes in
animals implanted in a hearing ear, suggesting a direct effect of the insertion trauma on the
auditory nerve. Furthermore, similar patterns were seen in animals deafened by cochlear
infusion of neomycin and subsequently treated with neurotrophin (AAV.Ntf3). These treatments
resulted in complete absence of inner hair cells (IHCs) and supporting cells but moderate to
good spiral-ganglion neuron (SGN) survival. Animals treated with neomycin but lacking effective
neurotrophin treatment had no IHCs and poor SGN survival, and these cases tended to show
little or no recovery from the insertion trauma.
These results suggest (1) that cochlear-implant insertion trauma has direct effects on the
auditory nerve; (2) that the cochlea possesses mechanisms for self-recovery; and (3) that
recovery can be aided by postsurgical neurotrophin treatment.
Supported by NIH-NIDCD grants R01 DC010412, R01DC011294, and P30 DC05188 and a
contract from MED-EL.
W49: PERCEIVED PITCH SHIFTS ELICIT VOCAL CORRECTIONS IN
COCHLEAR IMPLANT PATIENTS
Torrey M. Loucks, Deepa Suneel, Justin Aronoff
University of Illinois at Urbana-Champaign, Champaign, IL, USA
In normal hearing listeners, an abrupt and rapid change in perceived vocal pitch elicits a
compensatory shift in vocal pitch to correct the perceived error. This phenomenon, called the ‘pitch-shift response’ or PSR, is a short-latency feedback loop between audition and vocalization,
which is largely non-volitional. For CI patients, the relatively shallow insertion depth of the
electrode arrays likely results in the perception of their own vocal pitch as substantially higher
than their comfortable range of voice fundamental frequencies and distorts pitch control.
Moreover, long-term auditory deprivation in CI patients may have degraded typical pitch control
mechanisms. The goal of this study was to conduct systematic PSR testing to determine
whether this auditory-vocal response is preserved in CI patients.
Similar to previous studies of normal hearing listeners, three post-lingually deaf adult CI
patients produced /a/ vowel vocalizations while listening to their own vocalization. All
participants had Advanced Bionics implants and were using the HiRes 120 processing strategy.
Their vocalization was routed through a system that unexpectedly shifted the fundamental
frequency of their voice up or down by 200 or 500 cents for 200 ms. Each shift magnitude was repeated approximately 20 times in each direction.
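For reference, the cents values above translate to frequency ratios via 2^(cents/1200), as in this small sketch:

    def shifted_f0(f0_hz, cents):
        """Scale a fundamental frequency by a shift expressed in cents."""
        return f0_hz * 2.0 ** (cents / 1200.0)

    # e.g., a +200-cent shift of a 120 Hz voice: shifted_f0(120, 200) ~ 134.7 Hz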
In each CI patient, the pitch-shift stimuli elicited changes in vocal fundamental frequency
that resembled responses in normal hearing listeners. The direction of the responses indicated
the majority of responses were in the opposite direction of the pitch-shift stimulus and could thus
be classified as compensatory responses. Approximately 40% of the responses followed the
stimulus direction and could be classified as following responses. The latency and amplitude of
the responses were similar to normal hearing listeners.
Adult CI patients show a pitch-shift response that is mostly similar to responses observed
in normal hearing listeners. This finding confirms that pitch sensitivity and auditory-vocal
responses can be preserved in adult CI patients. Our preliminary results raise the possibility of
using CI patients’ own vocal production to adjust their maps.
Work supported by NIH/NIDCD R03-DC013380; equipment provided by Advanced Bionics.
W50: TEMPORAL GAP DETECTION IN SPEECH-LIKE STIMULI BY USERS OF
COCHLEAR IMPLANTS: FREE-FIELD AND DIRECT STIMULATION
Pranesh Bhargava1,2, Etienne Gaudrain1,2,3, Stephen D. Holmes4, Robert P. Morse4,5, Deniz
Başkent1,2
1 University of Groningen, University Medical Center Groningen, Department of Otorhinolaryngology, Groningen, NL
2 University of Groningen, Research School of Behavioral and Cognitive Neurosciences, Groningen, NL
3 Lyon Neuroscience Research Center, CNRS UMR 5292, Inserm U1028, Université Lyon 1, Lyon, FR
4 School of Life and Health Sciences, Aston University, Birmingham, UK
5 School of Engineering, University of Warwick, Coventry, UK
Gap detection (GD) is a measure of the temporal resolution of the auditory system in
general, but also relates to speech intelligibility because the ability to detect short silences is
crucial for the perception of certain speech features. Previous measures of gap detection
thresholds (GDTs) in cochlear implant (CI) users have been measured either with simple stimuli
and direct stimulation using a research interface, or with more complex speech-like stimuli
presented in free field and processed by a CI. The individual effects of stimulus complexity and
CI processing on GD are therefore unclear.
The present study explored the effect of stimulus complexity and device-related factors
by measuring CI users’ GDTs using synthetic vowels and recorded words for both free field and
direct stimulation. These GDTs were compared to those for normal hearing (NH) listeners. GD
in complex, broadband stimuli requires monitoring and comparing outputs across multiple
channels. Because of low spectral resolution (requiring fewer channels to monitor) and spread
of excitation (providing more cross-channel information), CI users might be expected to have
comparable, if not better, GDTs than NH listeners for speech stimuli. However, front-end CI
processing smooths the temporal envelope of the CI signals, which might be hypothesized to
increase the free-field GDT.
For all stimuli, the GDTs of CI users tested in free field were significantly higher than
GDTs of NH listeners. However, when tested with direct stimulation, the CI users showed
smaller thresholds than in free field, comparable to those of the NH listeners. On average, the
GDTs for words were higher than those for synthetic vowels. No correlation was found between
GDTs and speech intelligibility. Results indicate that although the intrinsic temporal resolution of
CI users was similar to that of NH listeners, conventional front-end CI processing appears to worsen it, as hypothesized. For all groups, envelope fluctuations in words may have caused
higher thresholds than for synthetic vowels, which had a flat envelope. These results help better
understand limitations to CI listeners’ speech perception, especially in noisy environments, and
could contribute to evaluating new envelope-processing strategies in implants.
W51: CRITICAL FACTORS THAT AFFECT MEASURES OF SPATIAL SELECTIVITY IN
COCHLEAR IMPLANT USERS
Stefano Cosentino, John Deeks, Robert Carlyon
MRC - Cognition and Brain Sciences Unit, Cambridge, GBR
Accurate measures of spatial selectivity are important both for the identification of
channels that may be partly responsible for speech perception, and for the assessment of novel
stimulation methods. Here, five experiments studied the feasibility of a psychophysical measure
designed to limit confounding factors that might affect measures of spatial selectivity.
The proposed approach relies on the measurement of psychophysical tuning curves (PTCs)
with the following features: interleaved masking paradigm; low stimulation rate; currents on a
single- or dual-electrode masker that is adaptively changed while the probe level is fixed just
above detection; maskers equated in current, rather than in loudness.
Exp. 1 tested the proposed measure across five Med-El and four Advanced Bionics cochlear implant users. Exp. 2 compared PTCs for dual- and single-electrode maskers to evaluate the effect of off-site listening. Exp. 3 considered the implications of the masking procedure for detection of sites with a poor electrode-to-neuron interface. Exp. 4 investigated the effect of facilitation and refractory mechanisms on the amount of masking produced for different probe-pulse/masker-pulse gaps. Finally, Exp. 5 attempted to quantify the confusion effect experienced in forward-masking paradigms.
We found no evidence for off-site listening. The choice of the masking paradigm was
shown to alter the measured spatial selectivity. For short probe-pulse/masker-pulse gaps, facilitation and refractory mechanisms had an effect on masking; hence, the choice of stimulation rate should take this phenomenon into account. No evidence for a confusion effect in forward masking was found. The use of the proposed approach is recommended as a measure of spatial selectivity
in CI users.
W52: USING FNIRS TO STUDY SPEECH PROCESSING AND
NEUROPLASTICITY IN COCHLEAR IMPLANT USERS
Zhou Xin1, William Cross1, Adnan Shah2, Abd-Krim Seghouane2, Ruth Litovsky3, Colette
McKay3
1 The University of Melbourne, Department of Medical Bionics; The Bionics Institute of Australia, Melbourne, AUS
2 The University of Melbourne, Department of Electrical and Electronic Engineering, Melbourne, AUS
3 The University of Wisconsin-Madison, Waisman Center, Madison, WI, USA
It is well known that the ability to understand speech varies among cochlear implant (CI)
users. Peripheral factors including etiology, duration of deafness, onset of deafness and CI
experience can only explain about 22% of the variance. Neuroimaging studies have shown that
neuroplasticity in postlingually deafened patients might be correlated with successful use of the
CI. Deaf individuals might adopt different communication strategies, which is likely to affect the
type and degree of neuroplasticity that they each experience. The aim of this experiment was to
conduct neuroimaging on brain areas known to represent language and speech, and to
investigate correlations between indicators of neuroplasticity and behavioral measures of CI
outcomes. We hypothesized that CI users who recruit the auditory cortex for lip-reading would
have poor speech understanding, and that CI users whose visual cortex is more
activated for lip-reading and whose auditory cortex is more active during auditory speech
processing would have better outcomes on measures of speech understanding.
Using functional near-infrared spectroscopy (fNIRS), we measured the hemodynamic
response from participants while they were performing lip-reading or auditory speech listening
tasks. Speech understanding in quiet and in background noise, and lip-reading abilities were
also measured. The study aims to include 28 CI users with varying speech understanding
abilities and a group of age-matched normal-hearing controls. Task-related fNIRS activation patterns are obtained using a block design with 4 conditions: visual-only spondees, auditory-only spondees, auditory sentences, and audio-visual sentences. Preliminary results with normal-hearing listeners show fNIRS activation patterns consistent with previously reported fMRI and
PET data in response to visual and auditory speech. Preliminary fNIRS data comparing CI users
and normal-hearing listeners will be presented, with a focus on the relation between activation
patterns, neural connectivity measures, and speech understanding.
Supported by a Melbourne University PhD scholarship to ZX, a veski fellowship to CMM, an
Australian Research Council Grant (FT130101394) to AKS, the Australian Fulbright Commission
for a fellowship to RL, the Lions Foundation, and the Melbourne Neuroscience Institute. The
Bionics Institute acknowledges the support it receives from the Victorian Government through its
Operational Infrastructure Support Program.
W53: NEUROFEEDBACK: A NOVEL APPROACH TO PITCH TRAINING IN
COCHLEAR IMPLANT USERS
Annika Luckmann1,2, Jacob Jolij2,3, Deniz Başkent1,2
1 University Medical Center Groningen, Dept Otorhinolaryngology
2 Research School of Behavioral and Cognitive Neurosciences
3 Faculty of Behavioral and Social Sciences, Experimental Psychology
University of Groningen, Groningen, The Netherlands
Cochlear implantees experience difficulties in pitch discrimination, which can lead to
problems during daily speech communication and sound perception. Neurofeedback is an online
feedback method, in which neuronal oscillations, measured with EEG, are used to give real-time
cues to individuals. These cues correspond to specific brain-wave patterns. Through the real-time feedback, individuals can be trained to regulate brain activity to match specific cognitive states. Neurofeedback has been shown to be quicker than behavioural training, while still resulting in comparable learning outcomes with long-lasting effects (Chang et al. 2015).
In a pilot study, we aimed to build a classifier to be used in a pitch discrimination-training
paradigm using neurofeedback. 28 normal hearing participants were presented with two stimuli
per trail, which differed in pitch, with values around discrimination threshold. The participants'
task was to rate how much the second stimulus differed from the first. During this pilot, next to
the behavioural data, EEG was also measured. We investigated both P300 and mismatch
negativity (MMN), in order to understand the neural mechanisms of near threshold pitch
perception. The P300 is a positive posterior brain evoked potential around 250-500 ms after
stimulus onset, known to be an electrophysiological correlate of decision making, evaluation and
categorisation, commonly used in oddball paradigm studies. The mismatch negativity is also
connected to changes in stimuli (such as pitch), but is thought of as a more unconscious
process that appears 150-250 ms post stimulus onset. Results showed that the paradigm is well suited to classifying whether the participant did or did not perceive a difference in pitch in a given trial. Moreover, the behavioural response to a stimulus can even be predicted from the EEG data.
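A single-trial classifier of the kind described might look like the sketch below (an assumed pipeline, not the authors' code; the window boundaries and the choice of linear discriminant analysis are illustrative):

    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.model_selection import cross_val_score

    def classify_trials(epochs, labels, fs=500, t0=0.0):
        """epochs: (n_trials, n_channels, n_samples) baseline-corrected EEG;
        labels: 1 if a pitch difference was perceived, else 0."""
        def window_mean(lo, hi):
            a, b = int((lo - t0) * fs), int((hi - t0) * fs)
            return epochs[:, :, a:b].mean(axis=2)
        X = np.hstack([window_mean(0.15, 0.25),   # MMN window features
                       window_mean(0.25, 0.50)])  # P300 window features
        return cross_val_score(LinearDiscriminantAnalysis(),
                               X, labels, cv=5).mean()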
In the next step, we simulated cochlear implant (CI) EEG data, including CI artifacts as
described in Viola Campos et al. (2012). Using the same analyses as for our normal-hearing participants, we see that we can successfully filter the artifact and build a classifier on as few as
30 trials per stimulus condition.
Next, we will train normal-hearing individuals, as well as CI users, with neurofeedback-based training using the classifier that was successfully tested in the pilot and simulation
studies. Experiments conducted by Chang and colleagues (2014) suggest that pure tone
discrimination thresholds decrease significantly post neurofeedback training. We aim to replicate
their findings and also add valuable new insights to CI pitch discrimination training and
mechanisms. Thus, we hypothesise that both normal-hearing people and CI users will have decreased pitch discrimination thresholds post-training.
As neurofeedback can be personalised to a patient's needs, thresholds, and abilities, and
is more time efficient but still as promising as behavioural training, we see great potential in this
method to improve pitch discrimination and thereby help many implant users in daily life.
W54: DIFFERENTIAL PATTERNS OF THALAMOCORTICAL AND CORTICOCORTICAL
PROJECTIONS TO AUDITORY FIELDS OF EARLY- AND LATE-DEAF CATS
Blake E. Butler, Nicole Chabot, Stephen G. Lomber
University of Western Ontario, London, CAN
In the deaf brain, cortical fields that would normally process auditory information are
subject to reorganization that has both anatomical and functional consequences. For example,
our group has recently demonstrated that the areal cartography of auditory cortex is altered
following hearing loss, and that the nature of these changes is to some extent dependent on
the age at which sensory input is removed (Wong et al. 2013). Moreover, a variety of studies
across animal models and in human participants have indicated that following hearing loss,
compensatory advantages are observed in remaining sensory modalities, and that these
changes result from recruitment of auditory cortical fields. However, the patterns of
thalamocortical and corticocortical projections that underlie this neural reorganization are not yet
understood. Thus, we have undertaken a series of studies aimed at understanding how
projections to different fields of the auditory cortex are altered following hearing loss.
In each case, a retrograde neuronal tracer (BDA) was deposited in the field of interest in
hearing cats, and in cats ototoxically deafened shortly after birth (early-deaf) or in adulthood (late-deaf). Coronal sections were taken at regular intervals and observed under a light microscope. Neurons showing positive retrograde labeling were counted. Labelled neurons were assigned
to functional cortical and thalamic areas according to published criteria. The proportion of
labelled neurons in each area was determined in relation to the total number of labeled neurons
in the brain, and ANOVAs were used to compare between areas and between groups.
Across a variety of auditory cortical fields, the patterns of projections were altered
following deafness. While the majority of projections originated in cortical and thalamic fields
commonly associated with audition, labelled neurons were also observed in visual,
somatomotor, and associative cortical structures. The nature of these changes was shown to
depend upon where a given field was located within the hierarchy of processing, with limited
reorganization observed in primary fields while higher-level structures showed more potential for
plasticity. Moreover, the changes within a given field were dependent on the age at the onset of
deafness, with greater change observed in response to early-onset than late-onset deafness.
Deafness induces widespread change in the auditory cortex that is evident in patterns of
intra- and intermodal connectivity. These age-related changes likely play a key role in the
success of implantable prostheses, and gaining a better understanding of how and where
plasticity occurs will influence optimal design of these devices and may contribute to direct
manipulation of periods of optimal plasticity to improve functional outcomes.
W55: ASSESSING PERCEPTUAL ADAPTATION TO FREQUENCY MISMATCH
IN POSTLINGUALLY DEAFENED COCHLEAR IMPLANT USERS WITH
CUSTOMIZED SELECTION OF FREQUENCY-TO-ELECTRODE TABLES
Elad Sagi1, Matthew B. Fitzgerald2, Katelyn Glassman1, Keena Seward1, Margaret Miller1,
Annette Zeman, Mario A. Svirsky1
1 New York University School of Medicine, New York, NY, USA
2 Stanford University, Palo Alto, CA, USA
After receiving a cochlear implant (CI), many individuals undergo a process of perceptual
adaptation as they accommodate to the distorted input provided by the device. Particularly in
postlingually deafened individuals, one type of distortion they may need to overcome is a
mismatch between their long term memory for speech sounds and the stimuli they actually
receive from their device. That is, there may be a mismatch between the acoustic frequencies
assigned to electrodes, i.e. the frequency-to-electrode allocation table or 'frequency table', and
the characteristic frequencies of the neural populations stimulated by those electrodes. There is
substantial evidence that CI users adapt to this frequency mismatch, but it remains to be seen
whether this adaptation occurs completely in all cases. One way to assess the degree of CI
users' adaptation to frequency mismatch is to allow users to select the frequency table they
deem most intelligible in terms of speech perception and compare their selection with the
frequency table they were assigned clinically. If a CI user's adaptation to frequency mismatch is
complete, then they should select the same frequency table that they were assigned clinically.
Conversely, if their adaptation to frequency mismatch is incomplete, then they should select a
frequency table that is different than what they were assigned clinically. For these latter
individuals, given enough exposure to the user-selected frequency table, one would expect that
their speech perception performance with the user-selected frequency table would be better
than with their clinically assigned frequency table.
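The mismatch at issue can be illustrated with Greenwood's (1990) place-to-frequency map for the human cochlea: comparing the function's output at an electrode's estimated place with that electrode's assigned analysis band gives a rough estimate of the shift. The sketch below assumes place is expressed as a fraction of cochlear length from the apex; the conversion from insertion depth to place is a simplification.

    def greenwood_cf(x):
        """Characteristic frequency (Hz) at relative cochlear place x in
        [0, 1], measured from apex (x=0) to base (x=1), for the human map."""
        return 165.4 * (10.0 ** (2.1 * x) - 0.88)

    # e.g., greenwood_cf(0.35) is roughly 750 Hz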
We present data from more than 50 postlingually deafened adult CI users. User-selected
frequency tables were obtained from all subjects and compared to their clinically assigned
frequency tables. Speech perception performance was assessed with both frequency tables.
More than half of the subjects selected a frequency table that differed from their clinically
assigned frequency table, suggesting incomplete adaptation to frequency mismatch. Even
though the user-selected frequency tables were reported as preferable to, and “more intelligible”
than, the clinical frequency tables by many listeners, only some CI users had better speech
perception scores with their user-selected frequency table. To some extent this may be due to
differences in total experience with each frequency table, as the user-selected frequency table
was only tested acutely. More exposure with user-selected frequency tables may be necessary
to observe systematic improvements in speech perception performance.
These results suggest that user-selected frequency tables may be beneficial for some
postlingually deaf CI users because these tables may provide enhanced subjective perception
of speech intelligibility and, in some cases, better speech perception.
Supported by NIH-NIDCD grant DC03937 and by equipment loans from Cochlear Americas.
W56: Poster Withdrawn
W57: PERCEPTUAL TRAINING IN POST-LINGUALLY DEAFENED USERS OF
COCHLEAR IMPLANTS AND ADULTS WITH NORMAL HEARING: DOES ONE
PARADIGM FIT ALL?
Matthew Fitzgerald1, Susan Waltzman2, Mario Svirsky2, Beverly Wright3
1 Stanford University, Palo Alto, CA, USA
2 New York University School of Medicine, New York, NY, USA
3 Northwestern University, Evanston, IL, USA
Post-lingually deafened recipients of cochlear implants (CIs) undergo a learning process
that can last for months, even with active practice on listening tasks. In young adults with normal
hearing (NH), combining periods of active practice with periods of stimulus exposure alone can
result in learning that is more rapid and of greater magnitude than learning resulting solely from
active practice. Here we asked whether this combined paradigm also promotes learning in
recipients of CIs. If so, it would provide a means to enhance patient performance with a reduced
amount of practice. If not, it could indicate that there are differences in the mechanisms that
underlie perceptual learning between individuals with NH and those with CIs. Such differences
would be relevant for developing optimal training regimes for the hearing impaired.
In a pilot investigation, we trained five monolingual users of CIs ranging in age from 21 to
33 years on a phonetic-categorization task using a combination of practice and stimulus exposure alone. Participants were asked to categorize the initial consonant of consonant-vowel
syllables that varied in voice-onset-time (VOT) along a 14-step continuum as belonging to one
of three phonetic categories: < ~-25 ms (negative VOT, termed ‘mba’ for ease of interpretation
by participants) vs. ~ -25 ms to ~ +25 ms (near-zero VOT, termed ‘ba’) vs. > ~+25 ms (positive
VOT, termed ‘pa’). This three-way phonetic contrast in VOT is present in many languages, such
as Thai and Hindi, but English has only a two-way contrast between near-zero VOT and positive
VOT. However, a three-way contrast (by inclusion of the negative-VOT category into the
system) can be acquired by native English speakers with practice. In each of two 60-75 minute
sessions, participants completed a pre-test, a training phase, and a post-test. The pre- and post-tests consisted of the phonetic-categorization task without feedback. The training phase
consisted of alternating periods of task performance with feedback and periods of exposure to
the same stimuli during performance of a written task.
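For illustration, the three-way labelling described above can be expressed as a small Python function; the +/-25 ms boundaries are those given in the text, while the 14-step continuum spacing shown is an assumption.

# Illustrative sketch (not the authors' code) of the category structure.
def vot_category(vot_ms):
    if vot_ms < -25.0:
        return "mba"   # negative VOT (prevoiced)
    elif vot_ms <= 25.0:
        return "ba"    # near-zero VOT
    else:
        return "pa"    # positive (long-lag) VOT

continuum = [-70 + 10 * i for i in range(14)]   # -70 ... +60 ms (spacing assumed)
print([(v, vot_category(v)) for v in continuum])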
Using this training paradigm, young adults with NH show a considerable steepening of
the category boundary between ‘mba’ and ‘ba’. In contrast, none of the five users of CIs showed
any improvement. Analysis of the electrical stimulation patterns elicited by the stimuli suggested
that the voicing information needed to make the ‘mba’ vs. ‘ba’ contrast was confined to a single
apical electrode. While preliminary, these results suggest that the same perceptual training paradigm
can have different effects on individuals with CIs compared to those with NH. These differences
may stem from a reduction in the available cues to categorize this contrast due to the signal
processing of the CI speech processor. However, the likelihood of this explanation is reduced
given that the starting performance was similar between the two groups. An alternative
possibility is that deprivation from hearing loss alters perceptual-learning mechanisms.
This work was supported by NIH / NIDCD.
W58: PERCEPTUAL LEARNING OF COCHLEAR IMPLANT RATE
DISCRIMINATION
Raymond Lee Goldsworthy
University of Southern California, Los Angeles, CA, USA
The fundamental auditory percepts associated with electrical stimulation have been
explored since the advent of cochlear implant technology. It has been generally concluded that
stimulation rate can be used to elicit a pitch percept but with discrimination limens on the order
of 10% for rates less than 300 pulses per second (pps) and with rapidly reduced resolution for
increasingly higher rates. However, Goldsworthy and Shannon (2014) demonstrated that
psychophysical training on rate discrimination tasks substantially improves performance over
time. Therefore, as stimulation rate is a fundamental cue to auditory perception, the present
study examines the time course and probability distributions associated with rate discrimination
within the context of perceptual learning. Rate discrimination was measured and subsequently
trained using a 2-interval, 2-alternative, forced-choice procedure with feedback, where the two
stimuli were 800-ms pulse trains and the target interval had a rate that was adaptively higher
than the standard interval. Discrimination limens were measured at standard rates of 110, 220,
440, and 880 pps using apically as well as basally situated electrodes. Results indicate that
perceptual learning of pitch perception associated with stimulation rate occurs at least for the
first 40 hours of psychophysical training and can improve discrimination limens by more than a
factor of 2. Preliminary results regarding how training rate discrimination on one electrode can
transfer to another electrode will be reported.
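The abstract does not state the adaptation rule; for illustration, the sketch below assumes a 2-down/1-up transformed staircase (converging near 70.7% correct) and a simulated listener standing in for the participant, to show how such an adaptive 2I-2AFC track yields a rate-difference limen.

import random

def simulated_listener(diff_pct, jnd_pct=10.0):
    # 75% correct when the rate difference equals the assumed JND
    p_correct = 0.5 + 0.5 * (1.0 - 2.0 ** (-(diff_pct / jnd_pct) ** 2))
    return random.random() < p_correct

def two_down_one_up(start_diff=50.0, factor=1.5, max_reversals=10):
    diff, n_correct, last_dir, reversals = start_diff, 0, 0, []
    while len(reversals) < max_reversals:
        if simulated_listener(diff):
            n_correct += 1
            if n_correct < 2:
                continue            # need two correct before stepping down
            n_correct, direction = 0, -1
            diff /= factor          # harder: shrink the rate difference
        else:
            n_correct, direction = 0, +1
            diff *= factor          # easier: grow the rate difference
        if last_dir and direction != last_dir:
            reversals.append(diff)  # track change = one reversal
        last_dir = direction
    return sum(reversals[-6:]) / 6.0    # mean of the last six reversals

print("Estimated rate-difference limen: %.1f%%" % two_down_one_up())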
W59: IMPROVED BUT UNFAMILIAR CODES: INFLUENCE OF LEARNING ON
ACUTE SPEECH PERCEPTION WITH CURRENT FOCUSING
Zachary M. Smith, Naomi B.H. Croghan, Christopher J. Long
Research & Technology Labs, Cochlear Ltd., Centennial, CO, USA
New recipients of cochlear implants (CIs) learn to make better use of the auditory cues
provided by their devices over several months, but new coding strategies are usually evaluated
in experienced CI listeners with crystallized internal models of speech sounds. Novel CI coding
strategies change the stimulation patterns and thus the neural representations of sounds in the
brain. One challenge of evaluating the impact of these changes on speech outcomes is
overcoming the effects of prior listening experience. This study investigates the role of speech
learning with a CI on the success of a novel coding strategy. Here we altered stimulation by
switching from a monopolar to a focused-multipolar stimulation mode in 16 subjects. While
psychophysical measures indicate a large and significant increase in sensitivity to spectral
features with this manipulation, acute changes in speech understanding were mixed across
subjects and were not predicted by spectral resolution. Analysis of previous speech learning
over the first 12 months with their clinical devices, however, reveals that subjects whose clinical
speech scores rapidly reached asymptote show significantly more acute benefit with focusing
than those with clinical speech scores that slowly increased over several months. These results
suggest an important role of cognition in the ability to acutely switch to new neural
representations of sound.
THURSDAY POSTER ABSTRACTS
R1: EFFECT OF FREQUENCY OF SINE-WAVES USED IN TONE VOCODER
SIMULATIONS OF COCHLEAR IMPLANTS ON SPEECH INTELLIGIBILITY
Anwesha Chatterjee, Kuldip Paliwal
Griffith University, Brisbane, AUS
For decades, Cochlear Implants (CIs) have been used to partially restore hearing in
profoundly deaf individuals through direct electrical stimulation of the auditory nerve. Changes in
pitch due to electrode selection have been shown to conform to the tonotopic organisation of the
cochlea, i.e., each electrode corresponds to a localised band of the human hearing spectrum.
Recent studies have shown that it may be possible to produce intermediate pitch percepts in
some patients by stimulating pairs of adjacent electrodes simultaneously. In this study, we
evaluate the effect on speech perception of producing place-pitch cues that correspond to the
spectral subband centroid of each analysis band. Vowels and consonants are processed
through software emulations of CI signal processors with 2-16 output channels. Tone vocoders
have often been used to simulate CI processed speech for normal hearing listeners. Signals
were generated as a sum of sine waves positioned at either the centre of the channel, or the
spectral subband centroid of the frequency band relevant to the channel.
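One plausible frame-based computation of the spectral subband centroid used to position the carriers is sketched below in Python; the window length, FFT analysis, and band edges are illustrative assumptions rather than the authors' implementation.

import numpy as np

def subband_centroid(frame, fs, f_lo, f_hi):
    # Power-weighted mean frequency within one analysis band
    power = np.abs(np.fft.rfft(frame)) ** 2
    freqs = np.fft.rfftfreq(len(frame), 1.0 / fs)
    band = (freqs >= f_lo) & (freqs < f_hi)
    if power[band].sum() == 0.0:
        return 0.5 * (f_lo + f_hi)      # silent band: fall back to the centre
    return float((freqs[band] * power[band]).sum() / power[band].sum())

# A 1 kHz tone analysed in a 500-2000 Hz band gives a centroid near 1 kHz,
# whereas a fixed carrier would sit at the 1250 Hz band centre.
fs = 16000
frame = np.sin(2 * np.pi * 1000 * np.arange(512) / fs)
print(subband_centroid(frame, fs, 500.0, 2000.0))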
R2: THE PERCEPTUAL DISCRIMINATION OF SPEAKING STYLES UNDER
COCHLEAR IMPLANT SIMULATION
Terrin N. Tamati1, Esther Janse2, Deniz Başkent1
1 Department of Otorhinolaryngology / Head and Neck Surgery, University Medical Center Groningen, Groningen, NLD
2 Centre for Language Studies, Radboud University Nijmegen, Nijmegen, NLD
While robust speech perception in cochlear implant (CI) users is mostly achieved for ideal
speech, i.e., carefully controlled speech with clear pronunciations, our knowledge of CI
perception of real-life speech is still limited. Further, CI users’ perception of speech produced in
well-controlled laboratory speech conditions may not reflect their actual real-life performance. In
order to begin to characterize CI perception of real-life speech forms, and to provide guidelines
for best clinical practice, two experiments were carried out to investigate the perceptual
discrimination of different speaking styles common in real-life speaking environments in normal
and CI-simulated conditions.
In Experiment 1, NH listeners classified multiple sentence-length utterances as either
formal speech or informal speech in a two-alternative forced-choice categorization task. The
utterances were produced as read text, a retold story, or a conversation, and thus represented
three speaking styles ranging from formal to informal. Further, they were presented in three CI
noise-vocoder simulation conditions, including none, 12- or 4-channel simulation. The ability to
perceive differences among the speaking styles was reduced as the spectral resolution
decreased, with NH listeners being unable to reliably make the categorization judgments under
the 4-channel simulation.
In Experiment 2, we focused on the cues listeners may use to make judgements about
speaking styles. Given that the fine-grained acoustic-phonetic information was limited in the CI-simulated conditions, participants in Experiment 1 may have relied on speaking rate, or other
durational cues, which are generally preserved in CIs and their simulations, to make their
judgements. Thus, in Experiment 2 the duration of the sentences was modified so that all
utterances had the same average speaking rate. NH listeners completed the same two-alternative forced-choice categorization task with the modified utterances, and otherwise the
same conditions as Experiment 1. Similar to Experiment 1, the categorization performance
dropped as spectral resolution decreased, but here, even with the 12-channel simulation, listeners
could not perceive the differences in speaking styles.
Taken together, the findings from the CI simulations suggest that perceptual adjustments
to real-life speaking styles may be difficult for CI users, given that some important cues to
speaking style, such as fine acoustic-phonetic detail, are not available to them. Despite this, the
results of Experiment 1 suggest that some CI listeners may still be able to use additional cues,
such as durational cues related to speaking rate, and draw upon prior linguistic experience to
make judgements about and/or adapt to real-life speaking styles. When these additional cues
are also removed, as in Experiment 2, reliable perceptual discrimination of these real-life speech
forms may be limited. By characterizing how CI users perceive and encode speech information
related to speaking style, we may be able to develop new clinical tools for the assessment and
training of real-life speech perception performance for these listeners.
R3: RELATIVE CONTRIBUTIONS OF TEMPORAL FINE STRUCTURE AND ENVELOPE CUES
FOR LEXICAL TONE PERCEPTION IN NOISE
Beier Qi1, Yitao Mao1, Lauren Muscari2, Li Xu2
1 Beijing Tongren Hospital, Capital Medical University, Beijing, CHN
2 Communication Sciences and Disorders, Ohio University, Athens, OH, USA
In tone languages, such as Mandarin Chinese, the tonality of syllables conveys lexical
meaning. While tone perception might not be critical in Mandarin sentence recognition in quiet
(Feng et al., 2012), it has been shown to be very important for Mandarin sentence recognition in noise
(Wang et al., 2013; Chen et al., 2014). Previous studies on the relative contributions of temporal
fine structure (TFS) and envelope (E) cues have shown that E is most important for English
speech perception (Smith et al. 2002) whereas TFS is crucial for Mandarin tone perception (Xu
& Pfingst, 2003). However, the relative contributions of TFS and E in speech perception in noise
are under debate. For instance, Moore and colleagues proposed that the TFS is useful for
speech perception in noise (Moore et al., 2008). More recently, Apoux et al. (2013) showed that
TFS contributes little to English sentence recognition in noise.
The primary concern of the present study is the contributions of TFS and E cues in tone
perception in noise. We hypothesized that the relative importance of the E and TFS for tone
perception is different from that for English sentence perception in noise, but similar to that for
tonal perception in quiet. The testing materials consisted of 2,000 tokens, which were
processed using the same signal-processing procedure as in the study by Apoux et al. (2013). The
original 80 monosyllabic Chinese words (the targets) were recorded by a male and a
female native-Mandarin-speaking adult talker. The masker was speech-shaped noise (SSN)
or multi-talker speech babble. The targets and the masker were mixed at five different
SNRs: -18, -12, -6, 0, and +6 dB. A total of 2,000 chimeric tone tokens with various
combinations of SNRs in E and TFS were created (i.e., 80 words × 5 SNRs for TFS × 5 SNRs
for E). The tone perception test used a word-based, four-alternative forced-choice (4-AFC)
paradigm. Participants were asked to identify the Mandarin word by clicking the corresponding
button on a computer screen. The order of the stimulus presentation was randomized.
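For readers unfamiliar with chimeric stimuli, the core single-band envelope/TFS exchange (after Smith et al., 2002) can be sketched as follows; the study itself used the multi-band procedure of Apoux et al. (2013), so this Python fragment is only illustrative and its signal names are hypothetical.

import numpy as np
from scipy.signal import hilbert

def make_chimera(sig_for_env, sig_for_tfs):
    env = np.abs(hilbert(sig_for_env))            # envelope (E) of signal A
    tfs = np.cos(np.angle(hilbert(sig_for_tfs)))  # fine structure (TFS) of signal B
    return env * tfs                              # chimera: A's E on B's TFS

fs = 16000
t = np.arange(fs) / fs
speech_like = np.sin(2 * np.pi * 200 * t) * (1 + 0.5 * np.sin(2 * np.pi * 4 * t))
noise = np.random.randn(fs)
chimera = make_chimera(speech_like, noise)        # speech-like E, noise TFS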
The results showed that both TFS and E cues contributed to lexical tone perception in
noise. The weights that subjects used for TFS and E for tone perception in noise seemed to be
equivalent. These results were in contrast to those observed in English sentence recognition.
For English sentence recognition, the TFS cues had a limited effect on speech recognition in
noise conditions (Apoux et al. 2013). Our results were also inconsistent with our hypothesis in that
the E cues appeared to have a significant role in tone perception in noise. The implications for
signal processing of cochlear implants for tonal languages will be discussed.
R4: ENHANCING CHINESE TONE RECOGNITION BY MANIPULATING
AMPLITUDE ENVELOPE: TONE AND SPEECH RECOGNITION
EXPERIMENTS WITH COCHLEAR IMPLANT RECIPIENTS
Lichuan Ping1, Guofang Tang2, Qianjie Fu3
1 Nurotron Biotechnology, Inc., 184 Technology, Irvine, CA 92618, USA
2 Zhejiang Nurotron Biotechnology Co., Ltd., 310011 Hangzhou, Zhejiang, CHN
3 Signal Processing and Auditory Research Laboratory, Head and Neck Surgery, University of California, Los Angeles, CA, USA
Objective: In 2004, Luo and Fu proposed the Envelope Enhanced-Tone (E-Tone)
algorithm to improve tone perception by manipulating the amplitude envelope according to the
F0 contour of speech. Recently, this algorithm has been implemented in Nurotron’s sound
processor. The purpose of this study was to compare the tone and speech recognition performance
of Chinese-speaking adult cochlear implant (CI) users between the standard Advanced Peak
Selection (APS) strategy and the APS + E-Tone algorithm.
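As a conceptual illustration only (not Nurotron's implementation), the envelope-enhancement idea can be sketched as a gain that tracks the normalized F0 contour, so rising and falling tones are mirrored in the amplitude envelope; the F0 range and gain limits below are assumptions.

import numpy as np

def etone_gain(f0_hz, f0_min=80.0, f0_max=400.0, g_lo=0.5, g_hi=1.0):
    # Map F0 (on a log scale) to a gain between g_lo and g_hi (assumed range)
    f0 = np.clip(np.asarray(f0_hz, dtype=float), f0_min, f0_max)
    norm = (np.log(f0) - np.log(f0_min)) / (np.log(f0_max) - np.log(f0_min))
    return g_lo + norm * (g_hi - g_lo)

def enhance_envelope(envelope, f0_contour):
    # envelope and f0_contour: same-length, frame-rate arrays
    return np.asarray(envelope) * etone_gain(f0_contour)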
Methods: Nine native Chinese-speaking, postlingually deafened adult CI users (six male and three
female) participated in this comparison. They were followed longitudinally for 4-5 weeks to
compare tone identification and speech perception results between APS and APS + E-Tone.
Tone perception was examined using 240 four-alternative forced-choice (4AFC) trials
spoken by 10 talkers (five male and five female). Speech recognition was tested using
monosyllabic and disyllabic words from the Mandarin Speech Perception system and the Mandarin
Speech Test Materials system (MSTMs), spoken by both male and female talkers.
Baseline performance with APS was measured for at least 2 weeks, or until performance
asymptoted, to reduce procedural learning effects. After baseline measures were complete, the
subjects’ strategy was upgraded from APS to APS + E-Tone, and they completed the
same tests after 2 weeks of using the new strategy. All subjects also completed a
strategy preference questionnaire (translated from the Iowa Sound Quality). They were asked to
rate their preference, overall sound quality, naturalness, clarity for voices (male, female, own),
environmental sounds, as well as speech understanding in noise of both strategies.
Results: Tone perception improved by 4.76% after two weeks of APS + E-Tone use
(t=-3.34, p=0.01), while monosyllabic and disyllabic word perception improved by 5.82% (t=-2.84,
p=0.011) and 5.85% (t=-2.61, p=0.018), respectively; these effects were significant in paired
t-tests of the individual results. There was no significant subjective difference
between APS and APS + E-Tone, although, averaged across all preference items, 31.4% of
subjects rated the APS + E-Tone strategy as better.
Conclusions: The APS + E-Tone strategy can offer improved tone and speech perception
to CI users compared with standard APS.
Key words: E-tone, Tone perception, Sound processing strategy
R5: INTERACTIONS BETWEEN SPECTRAL RESOLUTION AND INHERENT
TEMPORAL-ENVELOPE NOISE FLUCTUATIONS IN SPEECH
UNDERSTANDING IN NOISE FOR COCHLEAR IMPLANT USERS
Evelyn EMO Davies-Venn1, Heather A Kreft2, Andrew J Oxenham1,2
1 Department of Psychology, University of Minnesota, Minneapolis, MN, USA
2 Department of Otolaryngology, University of Minnesota, Minneapolis, MN, USA
Introduction: Recent work has shown that cochlear-implant (CI) users’ speech perception
in noise seems unaffected by the inherent temporal-envelope fluctuations in noise, in contrast to
results from normal-hearing (NH) listeners, where inherent noise fluctuations dominate
performance. Channel interactions, produced by current spread, have been hypothesized to
underlie this difference between CI users and NH listeners. The current study tested this
hypothesis in CI users by varying the amount of current spread through different stimulation
strategies and varying the physical spacing between adjacent active electrodes. Spatial tuning
curves and spectral ripple perception were used as objective measures of spectral resolution.
Methods: Speech recognition in noise was measured monaurally in CI listeners for AzBio
sentences. The maskers were Gaussian noise (GN) and pure tones (PT). The noise was
spectrally shaped to match the long-term spectrum of the AzBio speech corpus. The pure tones
were selected to match the center frequencies from the clinical maps of the individual CI users.
Focused stimulation (partial tripolar vs. monopolar) and increased spacing between active
electrodes (every electrode vs. every third electrode) were used to improve spectral resolution
and reduce current spread. The speech stimuli were presented via direct audio input at 65 dB
SPL. Subjective ratings were obtained for all four stimulation strategies.
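The shaping method is not detailed in the abstract; one common approach, sketched below in Python purely for illustration, imposes the long-term average magnitude spectrum of the speech material on white Gaussian noise.

import numpy as np

def speech_shaped_noise(speech, n_samples):
    # Impose the long-term average magnitude spectrum of the speech material
    # on white Gaussian noise, then normalize to unit RMS.
    target_mag = np.abs(np.fft.rfft(speech, n=n_samples))
    noise_spec = np.fft.rfft(np.random.randn(n_samples))
    shaped = np.fft.irfft(noise_spec / np.abs(noise_spec) * target_mag,
                          n=n_samples)
    return shaped / np.sqrt(np.mean(shaped ** 2))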
Results: Overall performance improved with increasing signal-to-noise ratio (SNR).
Across all SNRs, CI users had higher scores with monopolar compared to partial tripolar
stimulation. Decreasing the number of channels to 6 slightly reduced overall scores, but also
restored a small benefit from the lack of inherent fluctuations in the PT compared to the GN
maskers. The PT masker benefit correlated with objective measures of spectral resolution.
Subjective ratings showed a slight preference for 16 channels compared to 6 channels.
Individual listeners’ preference for monopolar compared to partial tripolar stimulation was closely
related to their objective scores.
Conclusions: The results suggest that decreasing current spread through partial tripolar
stimulation is not sufficient to reintroduce the effect of inherent noise fluctuations on speech
intelligibility. Introducing spectral distance between adjacent electrodes (i.e. deleting channels)
reintroduces small effects of the inherent noise fluctuations on speech intelligibility, suggesting
that much greater spectral resolution than is currently provided by CIs would be needed to
recreate the effect of inherent noise fluctuations observed in NH listeners.
[Supported by NIH grant R01 DC012262.]
R6: SPECTRAL DEGRADATION AFFECTS THE EFFICIENCY OF SENTENCE
PROCESSING: EVIDENCE FROM MEASURES OF PUPIL DILATION
Matthew Winn, Ruth Litovsky
University of Wisconsin-Madison, Madison, WI, USA
Knowing the topic of conversation, or semantic context, is helpful for hearing words that
are spoken under sub-optimal conditions. People can exploit semantic context to predict and
cognitively restore words that are masked by noise or degraded by hearing loss. Cochlear
implant (CI) users show heavy reliance on such contextual cues, and it is therefore important to
understand how various features of CI processing impact the ability of a listener to take
advantage of context during speech perception. Spectral resolution, long known to be
impoverished in CIs, is known to affect the perception of language at various levels, including
full sentences, isolated words, and low-level phonetic cues. This study was designed to track
the ways in which spectral resolution affects the speed with which semantic context is utilized
during sentence perception, in addition to basic measures of intelligibility.
Listeners with cochlear implants (CIs) and listeners with normal hearing (NH) were tested
using the R-SPiN corpus, which contains sentences with context (e.g. “Stir your coffee with a
spoon”) or without context (e.g. “The woman thought about a spoon”). Verbal responses were
scored for accuracy, and pupil dilation was measured during each sentence as an index of
listening effort. All sentences were presented in quiet, with two seconds between stimulus and
verbal response. NH listeners heard both unprocessed and spectrally degraded versions of the
sentences, while CI listeners heard only unprocessed speech.
Higher intelligibility was obtained for sentences with semantic context, consistent with
previous studies. Listening effort was reduced by the presence of semantic context, as indicated
by smaller pupil dilation. The timing and magnitude of that reduction was mediated by spectral
resolution; for NH listeners, the context-driven effort reduction for normal speech stimuli was
observed during stimulus presentation, whereas for degraded stimuli, it was not observed until
after the stimulus was over. In other words, clear signal quality permitted listeners to take
advantage of context in real time, while poor signal quality delayed the benefit. For CI listeners,
such context-supported effort reduction was variable, but generally not as early or as large as
that for NH listeners. These effects all persisted even in cases of perfect intelligibility.
These results suggest that poor spectral resolution resulting from CIs or CI-style
processing can delay real-time language comprehension, so that when words are heard, they
are not processed quickly enough to facilitate prediction of subsequent words. In view of the
rapid pace and the lack of substantial silent pauses in conversational speech, these results help
to explain the difficulties of CI patients in everyday communication, even while intelligibility
measures remain high for words and sentences in isolation.
Support provided by NIH-NIDCD 5R01 DC03083 (R. Litovsky), Waisman Center core grant
(P30 HD03352) and the UW-Madison Dept. of Surgery.
R7: A GAMMATONE FILTER BANK AND ZERO-CROSSING DETECTION
APPROACH TO EXTRACT TFS INFORMATION FOR COCHLEAR IMPLANT
PATIENTS
Teng Huang, Attila Frater, Manuel Segovia Martinez
Oticon Medical, Vallauris, France, Vallauris, FRA
The semantics of tonal languages (Chinese, Thai, etc.) depend on different patterns of
pitch variation, which are considered Temporal Fine Structure (TFS) information. Despite
its importance, today’s traditional cochlear implant (CI) stimulation strategies usually cannot
transfer this information, thereby limiting the users’ ability to distinguish different tonal
patterns.
This report presents an innovative method for TFS feature extraction and an evaluation
method using a full CI model and Neural Similarity Index Measure (NSIM). To extract the pitch
variations, a 4th order Gammatone Filter bank with Equivalent Rectangular Bandwidth (ERB)
distribution is implemented, and then the zero-crossing positions with corresponding amplitudes
are extracted. Since the variation of distance between zero-crossings can indicate the different
pitch patterns, it is used to generate the corresponding pulse trains for the stimulation.
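The zero-crossing step can be sketched as follows; the gammatone filtering itself is omitted, 'channel' stands for one band-filtered waveform, and taking the peak magnitude per inter-crossing interval as the "corresponding amplitude" is only one plausible reading of the description above.

import numpy as np

def zero_crossings(channel, fs):
    # Rising-edge zero-crossings of one gammatone channel output
    idx = np.flatnonzero((channel[:-1] < 0) & (channel[1:] >= 0))
    times = idx / fs                              # crossing instants (s)
    # One amplitude per inter-crossing interval: the peak magnitude
    amps = np.array([np.max(np.abs(channel[a:b]))
                     for a, b in zip(idx[:-1], idx[1:])])
    inst_freq = fs / np.diff(idx)                 # Hz, from crossing spacing
    return times, amps, inst_freq

fs = 16000
tone = np.sin(2 * np.pi * 250 * np.arange(fs) / fs)
times, amps, f = zero_crossings(tone, fs)
print(f[:3])   # ~250 Hz: crossings are ~4 ms apart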
A complete simulation model is built to evaluate this novel TFS processing strategy by
simulating the signal processing of the speech processor and representing the activity of the
electrode array generated by the implant. Utilizing an intra-cochlear electrical spread simulation,
the CI model is connected to the auditory nerve model of J.H. Goldwyn that generates neural
spike patterns as a response of electrical input stimuli. These spike patterns serve as a basis for
comparing the current approach to already existing strategies. The evaluation of the TFS
strategy is made by the NSIM, an objective distance measure capable of indicating the
degradation in speech intelligibility based on neural responses. Similarity is calculated between
normal hearing and CI aided conditions with various strategies.
NSIM results are collected for conventional CI stimulation and CI stimulation controlled by
TFS feature extraction. Differences between processing techniques are explored by comparing
the collected NSIM results. This work proposes an innovative CI sound processing strategy
capable of restoring TFS information for patients. A computer simulation of the approach is
implemented and an objective evaluation is conducted.
R8: PITCH DISCRIMINATION TRAINING IMPROVES SPEECH
INTELLIGIBILITY IN ADULT CI USERS
Katelyn A Berg, Jeremy L Loebach
St. Olaf College, Northfield, MN, USA
CI users often struggle with speech and musical tasks that rely on fine spectral information.
Higher-performing users rely on both temporal and spectral cues, though to a lesser extent than
normal hearing individuals, whereas lower-performing CI users rely almost entirely on temporal-envelope cues for environmental sound recognition (Reed & Delhorne, 2005). Targeted training can
elicit improvement in pitch related tasks (Driscoll, 2012), melody recognition (11%) and song
identification (33%; Gfeller, 2000), as well as instrument identification (68%; Zeng, 2004). While
vocal pedagogy techniques focus on production, the effects of training can spill over to perception
as well. CI users found the production and perception of rising intonation contours to be difficult
(Peng et al, 2008). Musicians have shorter latency for pitch processing than nonmusicians (Schon,
et al., 2004) including tonal relationship differences in excited/subdued speech and major/minor
music (Bowling et al., 2010). Music theory training provides a systematic way of teaching tonal
relationships (Laitz, 2003). Pitch training has been shown to improve speech perception and
production by increasing pitch range and variation quotient (PVQ), which have been associated with
increased intelligibility and greater perceived liveliness of speech (Bradlow, Torretta, & Pisoni, 1996,
Hincks, 2005; Vermillion, 2006). CI users also reported higher self-esteem and confidence levels as
well as reduced stress levels after training (Holt & Dowell, 2010). By using techniques developed in
music theory in pitch training, CI users can practice using spectral cues to enhance their perception
and production of speech.
This study examines the effectiveness of an aural skills training paradigm using pitch
discrimination tasks to improve speech intelligibility in adult cochlear implant users. Each CI user
completed three training sessions per week over a four-week time period. Pitch discrimination tasks
included pitch matching, musical intervals, vocal slides, and singing melodies. Speech intelligibility
was measured using Harvard meaningful and anomalous sentences, and PB single words. A
questionnaire given pre/post also measured the CI user’s satisfaction with their implant and their
perception of abilities and struggles. Pretest data were also collected from 37 normal-hearing
listeners with no musical training in the clear condition and 7 normal-hearing listeners with
extensive musical training in the sine-wave vocoded condition.
Results showed that training improved both pitch discrimination and speech perception tasks
pre to posttest. For pitch discrimination, CI users increased their pitch range from 106.3 Hz to 143.8
Hz, which is comparable to 167.3 Hz for musicians and 145.3 Hz for normal hearing listeners. CI
users also reduced the number and size of interval errors on the singing task, and pitch production
and vocal inflections (rising, falling, double) improved with training. For speech intelligibility, CI users
increased their score from 14.7% to 23.8% on meaningful sentences (NH: 87.8%; NHV: 73.7%),
4.2% to 18.9% on anomalous sentences (NH: 75%; NHV: 67.5%), and 5% to 10% on single words
(NH: 89.5%; NHV: 50%). Self-perception and self-esteem of abilities improved as a result of training.
These results may have clinical implications for aural rehabilitation available for CI users.
R9: EFFECT OF CONTEXTUAL CUES ON THE PERCEPTION OF
INTERRUPTED SPEECH UNDER VARIABLE SPECTRAL CONDITIONS
Chhayakant Patro, Lisa Lucks Mendel
University Of Memphis, Memphis, TN, USA
Understanding speech within an auditory scene is constantly challenged by interfering noise
in realistic listening environments. In such environments, noise elimination typically becomes
problematic for Cochlear Implant (CI) systems due to limitations imposed by front-end processing
systems and inefficiency in speech processing schemes that may degrade speech further. As a
result, individuals with CIs find it difficult and effortful to understand speech in noisy environments
compared to their normal hearing counterparts. In contrast, listeners with normal hearing
understand speech more easily in noisy environments by utilizing their linguistic experience,
situational and/or semantic context, expectations, and lexical abilities. This study examined the role
that contextual cues play in facilitating top-down processing to improve speech understanding in
noise.
Researchers have investigated the efficiency of complex top-down processes by studying
listeners’ ability to understand interrupted and degraded speech because this requires listeners to
integrate the residual speech stream and impose their lexical expertise. The literature suggests that
individuals with CIs find it difficult to understand interrupted speech. In particular, evidence from
studies using CI simulations with noise-band vocoders suggests that listeners perform poorly when
speech is interrupted unless spectral resolution is increased. Since linguistic context contributes to
the perception of degraded speech by facilitating top-down processing, the purpose of the present
study was to evaluate the extent to which contextual cues enhance the perception of interrupted
speech as well as uninterrupted speech when presented in variable spectral reduction conditions.
Listeners with normal hearing were studied to gain baseline information with the goal of applying the
methodology to listeners with CIs.
Twenty native speakers of American English who had normal hearing participated in
this study. Each subject listened to contextually rich and contextually poor stimuli from the Revised
Speech Perception in Noise test in four conditions: Baseline: Speech perception in noise at +8 dB
SNR, Phonemically Interrupted speech (PI), Spectrally Reduced speech (SR 4, 8 and 16 channels)
and Phonemically Interrupted and Spectrally Reduced speech (PI+SR, 4, 8 and 16 channels).
Spectral reduction was applied using Tiger CIS software, and periodic interruptions were applied
using MATLAB (50% duty cycle, 5 Hz gating, and 10 ms cosine ramps).
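The 5 Hz, 50% duty-cycle, 10 ms cosine-ramp gate described above can be sketched as follows, in Python rather than the MATLAB used in the study; ramp placement and rounding details are illustrative choices.

import numpy as np

def interruption_gate(n_samples, fs, rate_hz=5.0, duty=0.5, ramp_s=0.010):
    # Assumes duty * period is longer than two ramps, as with these parameters
    period = int(round(fs / rate_hz))           # e.g. 3200 samples at 16 kHz
    on_len = int(round(period * duty))
    n_ramp = int(round(fs * ramp_s))
    cycle = np.zeros(period)
    cycle[:on_len] = 1.0
    ramp = 0.5 * (1.0 - np.cos(np.pi * np.arange(n_ramp) / n_ramp))
    cycle[:n_ramp] *= ramp                      # cosine onset ramp
    cycle[on_len - n_ramp:on_len] *= ramp[::-1] # cosine offset ramp
    reps = int(np.ceil(n_samples / period))
    return np.tile(cycle, reps)[:n_samples]

# Apply as: interrupted = speech * interruption_gate(len(speech), fs)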
Findings indicated clear contextual benefit for speech perception in noise (baseline condition)
and phonemically interrupted speech (PI condition) as expected. However, contextual benefit
decreased as the spectral resolution increased in the SR condition, suggesting that listeners do not
rely as heavily on contextual cues when more spectral bands of information are available (i.e., the
signal is less degraded). Interestingly, for the PI+SR condition, the contextual benefit was
reversed: when the speech was severely degraded, with only 4 spectral bands
available, contextual information and top-down processing were not helpful at all, and listeners failed
to understand speech. However, as the spectral resolution increased (phonemically interrupted with
8 and 16 channels of spectral information), the contextual cues became helpful and facilitated
speech perception. Our results show that top-down processing facilitates speech perception up to a
point but fails to compensate when speech is significantly degraded. These findings clarify the
extent of degradation that is permissible before listeners’ use of context is no longer effective,
and they can inform advancements in device technology and optimization.
R10: EFFECT OF FREQUENCY ALLOCATION ON VOCAL TRACT LENGTH
PERCEPTION IN COCHLEAR IMPLANT USERS
Nawal El Boghdady1, Deniz Başkent1, Etienne Gaudrain1,2
1 University of Groningen, University Medical Center Groningen, Dept Otorhinolaryngology, Groningen, NLD
2 Lyon Neuroscience Research Center, CNRS UMR 5292, INSERM U1028, University Lyon 1, Lyon, FRA
Cochlear implant (CI) patients experience tremendous difficulty understanding
speech in cocktail-party settings. Speaker tracking in these scenarios usually
involves the perception of speaker-specific features, which can be characterized along two
largely independent dimensions, namely the Glottal Pulse Rate (GPR), or F0, and the Vocal
Tract Length (VTL). While F0 is responsible for the perceived pitch of a given voice, VTL relates
to the speaker size. Previous studies have shown that while normal hearing (NH) listeners can
utilize both F0 and VTL cues to identify the gender of a speaker, CI users rely solely on F0 to
perform the same task (Fuller et al., 2014). One possible hypothesis for this is that, in the
implant, VTL cues are lost in spectral distortions that may be induced by sub-optimal frequency-to-electrode mapping.
In the present study, the effect of varying frequency-to-electrode allocation on VTL
perception was investigated using vocoder simulations of CI processing. Twenty-four normal
hearing (NH) subjects were tested with vocoded stimuli consisting of naturally spoken Dutch
consonant-vowels (CVs) in a 3-alternative-forced-choice task (3-AFC). Triplets consisting of
such CVs were manipulated only along the VTL dimension, with F0 held constant. The VTL just
noticeable differences (JNDs) were tracked using a 2-down/1-up adaptive procedure, both for
positive and negative VTL differences.
All stimuli were processed using a 16-channel noise-band vocoder with four different
frequency allocation maps: a map based on the Greenwood formula, a purely linear map, a 16-channel version of the Cochlear CI24 clinical map, and an Advanced Bionics (AB) HiRes 90K
frequency map. Vocoder synthesis filters were always distributed according to the Greenwood
function, and were shifted to simulate a deep electrode array insertion depth of 21.5 mm and a
shallow insertion depth of 18.5 mm, according to the specifications of the AB HiFocus Helix
electrode. VTL JNDs were measured for each subject under each experimental condition.
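For reference, the human Greenwood function underlying the first map is commonly parameterized as F = 165.4 * (10^(2.1x) - 0.88) Hz, with x the relative distance from the apex (0 = apex, 1 = base). The sketch below evaluates it for a 16-channel map; the channel-edge placement is an illustrative assumption, not the study's exact configuration.

import numpy as np

def greenwood_freq(x):
    # x: relative distance from the apex (0 = apex, 1 = base) -> Hz
    return 165.4 * (10.0 ** (2.1 * x) - 0.88)

# 17 band edges give 16 analysis channels; the 0.1-0.9 span is for display
# only and does not model a particular insertion depth.
edges = np.linspace(0.1, 0.9, 17)
print(np.round(greenwood_freq(edges)))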
Results from this experiment show that there is a significant interaction between the
assigned frequency map and the voice of the target speaker: JNDs for each of the two target
speaker directions depend on the frequency map used. Specifically, the Greenwood map shows
more consistency compared to the other maps in terms of JNDs for the two voice directions
tested. No significant effect was observed for the insertion depths simulated in this study.
These results indicate that the frequency-to-electrode mapping may indeed be sub-optimal in CI
users, especially for extracting VTL cues from different voices. Hence, the presence of this
effect will be further investigated in CI participants. However, since this effect is small, different
directions will be explored, such as the effect of stimulation patterns on VTL JNDs.
R11: BOUNDED MAGNITUDE OF VISUAL BENEFIT IN AUDIOVISUAL
SENTENCE RECOGNITION BY COCHLEAR IMPLANT USERS: EVIDENCE
FROM BEHAVIORAL OBSERVATIONS AND MASSIVE DATA MINING
Shuai Wang, Michael F. Dorman, Visar Berisha, Sarah J Cook, Julie Liss
Arizona State University, Tempe, AZ, USA
A survey of CI patients’ everyday listening experience has indicated that most speech
perception occurs in environments in which both auditory and visual information is
available. Previous studies have indicated that the magnitude of visual benefit in sentence
recognition is related to the lip-reading difficulty of the material. To investigate this, we first
revisited the Kopra Lip-reading Sentences (Kopra et al., 1985). These sentence lists vary in
lip-reading difficulty from easy to difficult. The lists were re-recorded in AV format and used in a
lip-reading experiment with 10 CI listeners. The results showed that sentences that are easy to
lip-read can provide nearly 20% more benefit to the A-only score than sentences that are difficult
to lip-read. To investigate the driving factor behind lip-reading difficulty, multi-level text mining was
performed on the Kopra Sentences. Tri-gram occurrence in the Google N-Gram Project
database (the texts from over 5 million books) was used to estimate the familiarity of word
sequences in the easy- and difficult-to-lip-read sentences. Results indicate that the familiarity of
word sequences is the major factor driving lip-reading difficulty. The number of visible
phonemes did not play a role in distinguishing easy- and difficult-to-lip-read sentences. These
results help us distinguish the roles of visual information in early vs. late stages of sentence
recognition by cochlear implant listeners.
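The familiarity measure can be illustrated with a toy Python sketch: each sentence is scored by the mean log-frequency of its word trigrams in a counts table. The real analysis queried the Google N-Gram corpus; TRIGRAM_COUNTS below is a hypothetical stand-in with invented counts.

import math

TRIGRAM_COUNTS = {
    ("the", "boy", "ran"): 250000,    # invented counts for illustration
    ("boy", "ran", "home"): 60000,
}

def familiarity(sentence):
    words = sentence.lower().split()
    tris = [tuple(words[i:i + 3]) for i in range(len(words) - 2)]
    logs = [math.log10(TRIGRAM_COUNTS.get(t, 1)) for t in tris]  # 1 = unseen
    return sum(logs) / len(logs) if logs else 0.0

print(familiarity("The boy ran home"))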
R12: SPEECH PERCEPTION AND AUDITORY LOCALIZATION ACCURACY IN
SENIORS WITH COCHLEAR IMPLANTS OR HEARING AIDS
Tobias Weissgerber, Tobias Rader, Uwe Baumann
University Hospital Frankfurt, Frankfurt am Main, DEU
Sufficient hearing is important for the elderly to ensure adequate participation in social
activities. Hearing impairment caused by presbycusis can show its onset as early as in the age
of 50, whereby speech perception in quiet is usually not as degraded as in noisy environments.
Recent studies demonstrated poor benefit provided by hearing aids (Has) in the elderly
population. The aim of the present study was to compare the results obtained with the fitting of
HAs and cochlear implants (CIs) in subjects aged between 60 and 90 years.
Data were collected from three different groups: 40 subjects with normal hearing (NH, mean
age: 69.3±7.1 years), 40 HA users (mean age: 76.3±4.7 years), and a group of 57 CI users
(mean age: 72.1±6.5 years). The speech score in quiet (Freiburg Monosyllables, FMS) for the
aided groups was tested in a free-field condition at a 65 dB presentation level. Speech reception
thresholds (SRTs) in noise were measured adaptively with the German matrix test in two spatial
conditions (S0N0 and Multi-Source Noise Field, MSNF) with either continuous or amplitude-modulated noise. Auditory localization ability was assessed for 14 different angles in the
horizontal plane. Five bursts of white noise were presented from one loudspeaker, and the
patient’s task was to indicate the perceived direction of the sound with an LED pointer.
Mean FMS score in quiet was 85% in the HA group and 90% in the CI group. No
statistical difference in speech perception was found. In MSNF, a significant decrease of
performance was found in both aided groups compared with the control group data in
continuous and modulated noise. In continuous noise, both hearing-impaired groups performed
comparably poorly (p=0.058). The largest differences were found in modulated noise (HA group 7.2
dB worse than NH, p<0.001; CI group 11.4 dB worse than NH, p<0.001). A pronounced shift of
the average speech reception threshold was found in the CI group data compared with the HA
group (4.2 dB, p<0.001). Mean relative localization error was 7.1° in the control group, 15.5° in
the HA group, and 18.0° in the CI group. There was a significant difference between NH group
and both hearing impaired groups (p<0.001). In the NH group front/back confusions occurred
significantly less frequently than in the HA and CI groups (p<0.001). The rate of front/back
confusions in the hearing-impaired groups was about 50%, i.e., performance at
chance level.
Marked progress in implant technology was reflected in the speech perception results
obtained in quiet, where the CI group average was at the same high level as that of the HA group.
Accuracy of auditory localization in terms of mean relative error was also comparable between
HA and CI groups. Likewise, both aided groups showed nearly equal mean SRTs in continuous
noise. However, in more realistic noise conditions as reflected by temporal modulations of the
masker, average CI group results show more pronounced difficulties.
Work supported by Cochlear R&D Ltd.
R13: NEW APPROACHES TO FEATURE INFORMATION TRANSMISSION
ANALYSIS (FITA)
Dirk JJ Oosthuizen, Johan J Hanekom
Bioengineering, University of Pretoria, Pretoria, ZAF
The acoustic features that underlie speech perception have often been studied to
advance understanding of human speech perception and to aid in the development of hearing
prostheses. Closed set phoneme identification experiments have frequently been used to
investigate acoustic features (e.g. Miller and Nicely, 1955, JASA vol. 27 pp. 338-352; Van
Wieringen and Wouters, 1999, Ear Hear, Vol. 20, pp. 89-103). Results from such experiments
are processed by a technique known as feature information transmission analysis (FITA) to
produce quantitative estimates of the amounts of information transmitted by different features.
FITA provides a means to isolate an individual speech feature and quantitatively estimate
information transmitted via this feature in a closed set listening experiment.
FITA estimates information transmitted by an acoustic feature by assigning tokens to
categories according to the feature under investigation and comparing within-category to
between-category confusions. FITA was initially developed for categorical features (e.g.
voicing), for which the category assignments arise from the feature definition. When used with
continuous features (e.g. formants), it may happen that pairs of tokens in different categories
are more similar than pairs of tokens in the same category. The estimated transmitted
information may be sensitive to category boundary location and the selected number of
categories.
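For reference, the core FITA computation (after Miller & Nicely, 1955) is the mutual information between presented and perceived feature categories, estimated from the grouped confusion matrix; the sketch below shows this standard calculation, which the fuzzy variant introduced next generalizes. The example counts are invented.

import numpy as np

def transmitted_information(confusions):
    # confusions[i, j]: count of category i presented and category j perceived
    p = confusions / confusions.sum()
    pi = p.sum(axis=1, keepdims=True)            # marginal of presented
    pj = p.sum(axis=0, keepdims=True)            # marginal of perceived
    with np.errstate(divide="ignore", invalid="ignore"):
        terms = np.where(p > 0, p * np.log2(p / (pi * pj)), 0.0)
    return terms.sum()                            # bits transmitted

# Toy example: a two-category voicing feature, mostly correct responses
m = np.array([[45.0, 5.0], [8.0, 42.0]])
print(round(transmitted_information(m), 3))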
The work presented introduces two new approaches to FITA. The first is an approach
based on fuzzy sets, which provides a smoother transition between categories. Sensitivity of the
fuzzy FITA to grouping parameters is compared with that of the traditional approach. The fuzzy
FITA was found to be sufficiently robust to boundary location to allow automation of category
boundary selection. Traditional and fuzzy FITA were both found to be sensitive to the number of
categories, however. This is inherent to the mechanism of isolating a feature by dividing tokens
into categories, so that transmitted information values calculated using different numbers of
categories should not be compared.
The second new approach presented addresses a number of problems that arise when
applying FITA to continuous features, including that (i) absolute information measures are
bounded from above by values substantially lower than their theoretical maxima and (ii) a low
resolution representation of features is used. While the traditional FITA produces acceptable
estimates of relative information measures used to study a communication channel, it is
unsuitable for the estimation of absolute information measures that are required to study the
characteristics of continuous features. The approach presented provides an alternative that
addresses the above problems effectively by representing continuous features in a more natural
way. This approach also estimates redundancy in multiple features and information transmitted
by combinations of features.
This poster is available from www.up.ac.za/bioengineering.
R14: EVALUATION OF LEXICAL TONE RECOGNITION BY ADULT
COCHLEAR IMPLANT USERS
Bo Liu, Xin Gu, Ziye Liu, Beier Qi, Ruijuan Dong, Shuo Wang
Beijing Tongren Hospital, Capital Medical University, Beijing Institute of Otorhinolaryngology, Beijing, CHN
Objective: To evaluate the lexical tone recognition ability of postlingually-deafened adult
cochlear implant users, and to analyse the correlation between tone recognition and music
perception.
Methods: The experimental group included 32 postlingually-deafened adult cochlear implant
users who had used their implants for over 6 months; a control group of 32 normal-hearing
subjects participated in this study as well. We adopted a Mandarin tone identification test to
assess tone recognition ability in both normal-hearing and cochlear implant subjects. Pitch
discrimination threshold, melody discrimination, and timbre discrimination measurements in
the Musical Sounds in Cochlear Implants (Mu.S.I.C) test battery were used to assess music
perception ability. A correlation analysis was then performed between the results of music perception and
tone identification.
Results: (1) The average score on the lexical tone identification task for cochlear implant
subjects was 69.38%±17.06%, which was significantly lower than that of the normal-hearing
subjects. The recognition accuracy was 83.13%, 71.72%, 67.50%, and
55.16% for tone 4, tone 3, tone 1, and tone 2, respectively. Subjects tended to misperceive tone 1 as
tone 2, or tone 2 as tone 3. (2) There was a negative correlation between tone perception
performance and C4 pitch discrimination threshold (262 Hz) for CI subjects (r=-0.431, p=0.014),
but no significant correlation between tone perception and A4 pitch discrimination threshold
(440 Hz). There was a positive correlation between tone identification and melody discrimination
performance (r=0.617, p<0.01) as well as instrument identification performance (r=0.591,
p<0.01).
Conclusion: Postlingually-deafened adult cochlear implant users perform significantly
more poorly in tone recognition than normal-hearing subjects, and their tone recognition
ability is positively correlated with their music perception.
R15: EFFECT OF DIRECTIONAL MICROPHONE TECHNOLOGY ON SPEECH
UNDERSTANDING AND LISTENING EFFORT AMONG ADULT COCHLEAR IMPLANT
USERS
Douglas Sladen1, Jingjiu Nie2, Katie Berg3, Smita Agrawal4
1 Mayo Clinic, Rochester, MN, USA
2 James Madison University, Harrisonburg, VA, USA
3 St. Olaf College, Northfield, MN, USA
4 Advanced Bionics Corporation, Valencia, CA, USA
Introduction: Adults with cochlear implants are able to achieve high levels of speech
understanding in quiet, though they have considerable difficulty understanding speech in noise
(Sladen & Zappler, 2015; Ricketts et al., 2006). Adults with cochlear implants have improved
speech understanding in noise when their sound processors employ directional versus
omnidirectional microphone technology (Hersbach et al., 2012). Directional
microphones also decrease listening effort in adults who use hearing aids (Desjardins &
Doherty, 2014) when listening to speech in more difficult environments.
Objective: The aim of this study was to determine if directional adaptive dual-microphone
technology (UltraZoom) would reduce listening effort when compared to a T-Mic in a situation
where the target speech is in front of the listener. A second aim was to determine the effect of
adding a contralateral hearing aid (HA) or cochlear implant (CI) on listening effort.
Methods: Subjects were users of at least one Advanced Bionics Naida CI sound
processor. Listening effort was measured using a dual-task paradigm in which CNC word
recognition was the primary task, and reaction time to a light display was the secondary task.
Performance was measured first in quiet, then at a +5 dB signal-to-noise ratio (SNR) in an R-SPACE
eight-speaker array. Participants were tested in three conditions: unilateral CI with T-Mic,
unilateral CI with UltraZoom, and unilateral CI with UltraZoom + contralateral HA or CI.
Participants also completed a self-perceived rating of difficulty for each listening condition.
Results: Preliminary results show that mean CNC scores in the unilateral T-Mic condition
were 64% in quiet and 25% in noise. In the unilateral UltraZoom condition, monosyllabic word
recognition in noise increased to 41%. Group performance was 45% when a contralateral HA or
CI was added when listening in noise. Listening effort, measured by reaction time to the light
display, was reduced in the UltraZoom condition compared to the T-Mic condition. Adding a
contralateral HA or CI had minimal impact on listening effort, but the results were quite variable
in this condition.
Conclusion: UltraZoom improved word recognition and reduced listening effort in noise
for users of a Naida CI processor. Additional data are being collected to evaluate the effect of
adding a contralateral HA or CI on speech understanding and listening effort in noise.
This study was funded through a research grant provided by Advanced Bionics.
R16: COCHLEAR PHYSIOLOGY AND SPEECH PERCEPTION OUTCOMES
Christopher K Giardina1, Zachary J Bastian1, Margaret T Dillon1, Meredith L Anderson1,
Holly Teagle1, Harold C Pillsbury1, Oliver F Adunka2, Craig A Buchman1, Douglas C
Fitzpatrick1
1 University of North Carolina, Chapel Hill, NC, USA
2 Ohio State University, Columbus, OH, USA
A high degree of variability in speech perception outcomes is a long-standing problem in
cochlear implant research. Because the variability is unexplained, a large number of subjects is
required to show significance for a given change in treatment. However, the availability of
subjects is often limited so many studies are under-powered. This major problem could be
reduced if an independent factor could account for much of the variability in speech perception
outcomes, but factors identified to date, such as age and duration of deafness, account for no
more than about 25% of the variance. Here, we used intraoperative, round-window
electrocochleography (ECochG) from adults and children to test the degree to which residual
cochlear physiology correlates with speech perception outcomes. All patients receiving implants
were eligible for the study, except for revision surgeries and non-English speakers. The overall
magnitudes of the responses delivered across frequencies (250-4000 Hz at 90 dB nHL) were
summed to provide a measure of the “total response” of each cochlea (ECochG-TR). The test
takes less than 10 minutes of intraoperative time. More than 90% of the subjects (n>75 for both
adults and children) showed a significant response to one or more stimulus frequencies. The
range of the ECochG-TR varied by more than 60 dB (from the noise level of ~0.03 uV to > 30
uV). The correlation between the ECochG and six-month, consonant-nucleus-consonant (CNC)
word scores was highly significant for most adults, ages 18-80 (p<0.001, n=34) and accounted
for about 50% of the variance. Adults over 80 tended to have lower CNC word scores, and when
included as a separate group (n=8) the adjusted r2 indicated 57% of variance accounted for.
Three subjects, not included in the correlation, had lower CNC word scores than the ECochG-TR would predict, and also showed strong evidence of nerve activity in the physiological
response. These are candidates for additional remediation steps because surgical or mapping
factors, rather than physiology, are likely to be limiting performance. In a specific group of
children (n=32) who were old enough when implanted to have reached the age where
Phonetically Balanced Kindergarten (PBK) scores could be obtained, the correlation accounted
for 32% of the variation. These results show that the state of cochlear health prior to
implantation is a strong determinant of speech perception outcomes. This simple measurement
therefore shows high promise for impacting a wide range of cochlear implant studies where
subject numbers are limited.
R17: STIMULUS-BRAIN ACTIVITY ALIGNMENT BETWEEN SPEECH AND
EEG SIGNALS IN COCHLEAR IMPLANT USERS, MORE THAN AN
ARTIFACT?
Anita Wagner1, Natasha Maurits2, Deniz Başkent1
1 University of Groningen, University Medical Center Groningen, Dept Otorhinolaryngology, Groningen, NLD
2 University of Groningen, University Medical Center Groningen, Dept Neurology, Groningen, NLD
A number of studies (e.g. Luo & Poeppel, 2007) suggest that synchronization between
neural oscillations, as they are observed between different bands of electroencephalographic
(EEG) signals, and the temporal amplitude modulations of speech is foundational to speech
processing (Giraud & Poeppel, 2012). Speech intelligibility appears to vary with the strength of
such alignment between the brain and acoustic signals. These studies are based on vocoded
speech materials, and hence stimuli that are similar to the signal transmitted via cochlear
implants (CI). EEG studies with CI users suffer from the presence of electrical artifacts induced
by the device. This study investigates the phase alignment between EEG signals recorded with
CI users and envelopes of natural sentence stimuli, and queries how much such alignment
reflects brain signals engaged in speech processing or a CI-induced artifact.
EEG signals within the theta range (3-8 Hz) of eight CI users with their own device, and
eight normal hearing (NH) participants, recorded while they were listening to naturally spoken
sentences, were compared in terms of their alignment with the signal envelopes. The analysis
involved a cross-correlation between the envelopes and the EEG channels to extract the lag
between the signals. Coherence between aligned signals was then measured in terms of
correlation and phase coherence. Preliminary results show for CI users correlations between
0.29 and 0.69, with higher values observed for channels close to the CI but also for contralateral temporal and central channels. For NH listeners, correlations were found on left temporal
and central channels (0.16-0.33). The EEG signal lagged behind the speech signal by 120 ms
to 380 ms for CI users, and by 200 ms to 450 ms for NH listeners. The correlation between speech
envelopes and signals recorded in their respective trials was generally greater than the
correlation found between EEG signals correlated with randomly chosen speech envelopes.
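The lag-extraction step can be sketched as follows: one EEG channel is cross-correlated with the sentence envelope, and the lag of the peak correlation is taken, with a positive lag meaning the EEG trails the speech. Band-limiting to 3-8 Hz is assumed to have been done beforehand, and the function and its parameters are illustrative rather than the authors' analysis code.

import numpy as np

def envelope_eeg_lag(envelope, eeg, fs, max_lag_s=0.6):
    # envelope and eeg: same-length arrays, much longer than the maximum lag;
    # returns (lag in seconds, correlation at the peak)
    env = (envelope - envelope.mean()) / envelope.std()
    sig = (eeg - eeg.mean()) / eeg.std()
    lags = np.arange(int(max_lag_s * fs))        # EEG assumed to trail speech
    r = np.array([np.corrcoef(env[:len(env) - k], sig[k:])[0, 1]
                  for k in lags])
    best = int(np.argmax(np.abs(r)))
    return lags[best] / fs, r[best]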
Greater coherence between the speech signal and the channels in the vicinity of the CI,
together with the absence of coherence for these channels in NH listeners, suggests that the signal
transmitted via the device is the source of this alignment. The question of whether greater
coherence reported for NH listeners in studies with vocoded stimuli reflects the less natural
amplitude modulations in these signals is currently being tested. The coherence found for the
central channels for NH and CI listeners, however, suggests that this alignment may indeed
reflect a step in speech processing.
References
Luo, H., & Poeppel, D. (2007). Phase patterns of neuronal responses reliably
discriminate speech in human auditory cortex. Neuron, 54, 1001-1010.
Giraud, AL & Poeppel, D. (2012). Cortical oscillations and speech processing: emerging
computational principles and operations. Nature Neuroscience, 15(4), 511-517.
R18: POLARITY EFFECTS OF QUADRAPHASIC PULSES ON THE
INTELLIGIBILITY OF SPEECH IN NOISE
Gaston Hilkhuysen1, Stephane Roman2, Olivier Macherey1
1 Laboratoire de Mécanique et d’Acoustique, CNRS, Marseille, FRA
2 Chirurgie cervico-faciale pédiatrique, ORL, Hôpital de la Timone, Marseille, FRA
In human cochlear implant (CI) listeners, the effect of pulse polarity can be assessed
using asymmetric pulses for which one polarity supposedly dominates over the other. One
example of such asymmetric pulses is the quadraphasic pulse shape. Recent data from our
laboratory have shown that the polarity of quadraphasic pulses can have a major influence on
the shape of the loudness growth function. With cathodic quadraphasic pulses, loudness was found to grow non-monotonically as a function of level, whereas it always grew monotonically with anodic pulses. In other words, for some subjects and for some particular
electrodes, cathodic pulses sometimes showed a decrease in loudness with increases in current
level. We would expect such non-monotonic growth functions to impair the transmission of the
modulations contained in the speech signal and, therefore, to degrade speech perception.
This hypothesis was tested in nine postlingually deaf listeners implanted with Cochlear
Contour Advance electrode arrays. The stimuli consisted of biphasic cathodic leading pulses
(CA) or quadraphasic pulses with two cathodic or anodic phases at the center of each pulse
(ACCA and CAAC, respectively). Loudness-growth functions (LGFs) were first measured by
increasing the current level of 300-ms electrical pulse trains. Each pulse train had a fixed pulse
shape and was presented in monopolar mode to a preset electrode pair throughout the
measurement of an LGF. Listeners provided one LGF per pulse shape for each of their active
electrodes.
Intelligibility measures were based on a French translation of the BKB sentence lists
presented in steady-state long-term-average speech-shaped noise. During the presentation of
16 successive sentences within a list, the signal-to-noise ratio was adapted to a level resulting in
the correct perception of approximately one out of two sentences, an intelligibility measure
known as the speech reception threshold (SRT). Intelligibility was measured in free field with the
listeners’ clinical speech processors and with direct stimulation for CA, ACCA or CAAC pulse
shapes. CA matched the listeners’ clinical strategies except for preprocessing, a lower
stimulation rate, and the electrodes’ stimulation order. CA, ACCA and CAAC conditions only
differed in pulse shapes.
Among the 192 electrode pairs tested, 29% of the LGFs obtained with ACCA pulse trains
showed non-monotonic behaviour. With CA and CAAC, these percentages were only 3% and
1%, respectively. Electrodes with non-monotonic-LGFs for ACCA appeared clustered within the
electrode array. In general, CA needed more current than CAAC to generate equal loudness.
ACCA needed more current than CAAC to obtain comfortable loudness levels. Preliminary
results surprisingly show little relation between non-monotonic LGFs and SRT scores.
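As a rough illustration of how non-monotonic LGFs can be flagged, a minimal Python sketch (the loudness values and tolerance are made up; the study's actual criterion is not specified here):

    import numpy as np

    def is_non_monotonic(loudness, tol=0.0):
        """Flag an LGF whose loudness drops anywhere as current level increases."""
        return bool(np.any(np.diff(np.asarray(loudness, dtype=float)) < -tol))

    acca_lgf = [0.5, 1.0, 2.5, 1.8, 3.0]    # loudness ratings vs. increasing level
    caac_lgf = [0.5, 1.2, 2.0, 2.9, 4.0]
    print(is_non_monotonic(acca_lgf))        # True: loudness dips at the fourth level
    print(is_non_monotonic(caac_lgf))        # False: monotonic growth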
This study is funded by a grant from the Agence Nationale de la Recherche (Project ANR-11-PDOC-0022).
R19: GAP DETECTION IN COCHLEAR-IMPLANT USERS REVEALS AGE-RELATED CENTRAL TEMPORAL PROCESSING DEFICITS
Matthew J. Goupell, Casey Gaskins, Maureen J. Shader, Alessandro Presacco, Samira
Anderson, Sandra Gordon-Salant
University of Maryland, College Park, MD, USA
As one ages, biological changes occur in the auditory periphery and central nervous
system. Even in the absence of hearing loss, it is thought that age-related central temporal
processing deficits cause poorer speech understanding in older adults. Effects of chronological
age and age-related temporal processing deficits have not been investigated in depth with adult
cochlear-implant (CI) users. However, CI users are an ideal population to study age-related
temporal processing deficits because a CI bypasses the encoding of sound at the level of the
cochlea, and therefore it can be assumed that age-related changes will be mostly central in
origin.
Four younger CI (YCI; 26.8±6.7 yrs), eleven middle-aged CI (MCI; 57.0±4.5 yrs), and seven
older CI (OCI; 73.4±5.3 yrs) listeners were asked to categorize words as either “Dish” or “Ditch.”
The closure duration of the silent interval between the fricative and the vowel of each word was
varied on a seven-step continuum between 0 and 60 ms. Stimuli were presented via direct
connect for CI listeners. The level of the word was varied from threshold to about 85 dB SPL in
10-dB steps. Young, middle-aged, and older normal-hearing (NH) listeners performed the same
experiment and were presented vocoded speech tokens over headphones.
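As a rough illustration of how the category boundary and slope of such an identification function can be estimated, a minimal Python sketch with made-up data (the study's fitting procedure is not specified here):

    import numpy as np
    from scipy.optimize import curve_fit

    def logistic(x, x0, k):
        # x0 = category boundary (ms); k = slope of the identification function
        return 1.0 / (1.0 + np.exp(-k * (x - x0)))

    closure_ms = np.array([0, 10, 20, 30, 40, 50, 60])              # 7-step continuum
    p_ditch = np.array([0.02, 0.05, 0.20, 0.55, 0.85, 0.95, 0.98])  # proportion "ditch"

    (x0, k), _ = curve_fit(logistic, closure_ms, p_ditch, p0=[30.0, 0.2])
    print(f"boundary = {x0:.1f} ms, slope = {k:.2f}")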
The results showed that NH listeners could always distinguish the endpoints of the
continuum, i.e., they could discriminate dish from ditch. As the level increased, the point at
which they perceived the word ditch shifted to smaller closure durations and the slope of the
discrimination function became sharper; however, these effects were less pronounced in the older NH (ONH) listeners. The CI listeners showed a different pattern: at low levels, they could not
discriminate dish from ditch. As the level increased, their ability to discriminate the two words
increased. However, there was an age-dependent interaction where YCI listeners showed
increasingly sharper slopes with increasing level and OCI listeners showed a decrease in slope
after 65 dB SPL (i.e., there was a functional roll-over or negative level effect). The age-related
temporal encoding changes were also supported by decreased temporal precision in subcortical
(NH) and cortical (CI and NH) acoustically evoked EEG responses.
Therefore, CI listeners demonstrate age-related temporal processing deficits both in
perception and EEG responses for a temporally based speech continuum. These deficits appear
to be mostly central in nature. The results are important not only for understanding the biological
effects of aging, but also for predicting speech understanding abilities for an increasingly large
proportion of the CI population, those over 65 years.
R20: THE VALUE FOR COCHLEAR IMPLANT PATIENTS OF A BEAMFORMER MICROPHONE ARRAY (THE AB ULTRAZOOM) FOR SPEECH UNDERSTANDING IN A COCKTAIL PARTY LISTENING ENVIRONMENT
Sarah J. Cook Natale1, Erin Castioni2, Anthony Spahr2, Michael F. Dorman1
1Arizona State University, Tempe, AZ, USA
2Advanced Bionics, Valencia, CA, USA
In this study we explored, for cochlear implant patients, the value of a beamformer microphone array (the UltraZoom, as implemented on the Naida signal processor by Advanced Bionics) for speech understanding in a cocktail party listening environment. At issue were (i) the gain in speech understanding when the UltraZoom feature was engaged and (ii) the value of UltraZoom on a single ear vs. the value of bilateral cochlear implants or of hearing preservation (i.e., CI patients with preserved acoustic hearing) for patients tested in the same
cocktail party environment. We found a 31 percentage point improvement in performance with
the UltraZoom relative to an omnidirectional microphone. We found that bilateral fitting of CIs
improved performance, relative to a single CI, by 20 percentage points. We found that hearing
preservation patients also gained about 20 percentage points in performance when evaluated
against a CI plus contralateral hearing condition. Thus, the UltraZoom on a single CI produces
as large an improvement in speech understanding in a complex listening environment as
bilateral or hearing preservation CIs. Finally, the UltraZoom, when implemented in a noise
environment, restored speech understanding to a level just below that found for patients tested
in quiet.
R21: DEACTIVATING COCHLEAR IMPLANT ELECTRODES BASED ON
PITCH INFORMATION: DOES IT MATTER IF THE ELECTRODES ARE
INDISCRIMINABLE?
Deborah A Vickers1, Aneeka Degun1, Angela Canas1, Filiep A Vanpoucke2, Thomas A
Stainsby2
1University College London, Ear Institute, London, WC1X 8EE, UK
2Cochlear Technology Centre, Schalienhoevedreef 20, building I, Mechelen 2800, Belgium
There is a wide range in performance for cochlear implant users and there is some
evidence to suggest that implant fitting can be modified to improve performance if electrodes
that do not provide distinct pitch information are de-activated. However, improvements in
performance may not be the same for users of all cochlear implant devices; in particular, for Cochlear devices using n-of-m strategies such as ACE, there is very little research demonstrating that de-activation of electrodes leads to improved performance.
The goal of this research was to determine, for users of Cochlear devices, whether speech perception improved when indiscriminable electrodes (determined by pitch ranking via the Nucleus Implant Communicator, NIC) were de-activated, compared to when the same number of discriminable electrodes were de-activated. A carefully controlled cross-over study was conducted in which each programme was used for a minimum of two months and was compared to an optimised clinical programme.
Sixteen post-lingually deafened adults were assessed; all participants used either a
Nucleus system 5 or system 6 sound processor. Participants were excluded from the study if
they had ossification or were not competent speakers of English. Thirteen participants
completed the cross-over study to compare maps with indiscriminable electrodes de-activated to
those with discriminable electrodes de-activated. Order effects were carefully controlled for and
there were no stimulation-rate differences between the maps. Attempts were made to retain a
similar frequency distribution for all maps.
Group findings were analysed using pairwise Wilcoxon analyses to compare conditions.
The results showed that, for users of the ACE sound-processing strategy, there were no
significant benefits of electrode de-activation on speech understanding, music quality
judgements or spectro-temporal ripple perception compared to the standard clinical map, and
that there were no significant differences between de-activation of discriminable or
indiscriminable electrodes. However, the individual data suggest that there may be different patterns of behaviour for different CI users that are not apparent in the group analysis; this will be explored in greater depth.
The conclusion from this research is that, for users of the ACE strategy and other n-of-m strategies, electrode de-activation may not lead to speech perception improvements. However, it should be noted that there was a range of performance across the participants in this research, so further aspects of fitting should be explored to determine the optimal fitting paradigm for an individual user based on rate, number of channels, and number of spectra. Further work should also be conducted to look at the frequency distribution following de-activation, to ensure that optimal spectral allocation is provided.
This work was funded by Cochlear Corporation.
R22: VOWEL AND CONSONANT RECOGNITION AND ERROR PATTERNS WITH
FOCUSED STIMULATION AND REDUCED CHANNEL PROGRAMS
Mishaela DiNino1, Julie Arenberg Bierer2
1University of Washington, Graduate Program in Neuroscience, Seattle, WA, USA
2University of Washington, Dept. of Speech and Hearing Sciences, Seattle, WA, USA
Speech performance in cochlear implant (CI) users is negatively impacted by the spectral
smearing that likely occurs from unwanted interaction of current between electrode channels.
Focused CI electrode stimulation, such as the tripolar (TP) electrode configuration, produces
narrower electrical fields than the monopolar (MP) configuration used clinically. The tripolar
mode may reduce current spread and channel interaction and thus improve speech perception
abilities for those CI users with a high degree of channel interaction. Focused stimulation can
also be used to identify channels with poor electrode-neuron interfaces. Channels with high TP
thresholds suggest poor neuronal survival and/or electrodes distant from spiral ganglion
neurons, since higher levels of stimulation are required to reach auditory perception for such
electrodes. This study explores the effects of TP electrode configurations and deactivation of
channels with poor electrode-neuron interfaces, as predicted by high TP thresholds, on vowel
and consonant recognition performance and the patterns of errors.
Experimental 14-channel partial TP programs (“TP All”; focusing parameter ∆ƒ > 0.675)
were created for seven CI users implanted with the Advanced Bionics HiRes90k device. For
each subject, detection thresholds were obtained to identify 1 to 6 electrodes with the poorest
electrode-neuron interfaces. High-threshold channels were selected by first filtering the subject’s
threshold profile to enhance the peaks and troughs, and setting an exclusion/inclusion criterion
based on the mean and standard deviation. Using BEPS+ software (Advanced Bionics),
experimental programs were created with the high-threshold channels deactivated (“TP High
Off”) and, as a control, low-threshold channels matched by apical, middle or basal regions (“TP
Low Off”). Frequencies from the deactivated channels were reallocated to active electrodes.
Three MP programs were also created for each subject: “MP All,” “MP High Off,” and “MP Low
Off.” Speech testing was performed using medial vowels in /hVd/ context and consonants in
/aCa/ context, presented at 60 dB SPL in a sound booth.
Data averaged across subjects showed no difference between performance with MP and
TP All programs for any speech test. However, the four poorer-performing subjects (those who received scores less than 70% correct with their everyday program) showed improved scores
on at least two of the three speech tests when using the TP All program. These results indicate
that focused stimulation can improve speech perception for CI subjects who may have a large
amount of channel interaction, as predicted by poor speech recognition performance.
Deactivating high-threshold channels in either TP or MP configurations resulted in
improved performance for female talker vowel identification scores for 6 out of 7 subjects, and
higher consonant recognition scores for 3 (TP) and 4 (MP) of 7 subjects. Despite some
performance differences, patterns of phoneme errors were similar across conditions for each
subject. Correlation analyses of the off-diagonal confusion matrices showed positive correlations
between TP and MP All conditions and those with reduced channel programs. These results
suggest that although the frequencies are reallocated and the vowel formant spacing is altered,
listeners tend to make the same errors with the different experimental programs. Given the
limited testing times in the laboratory with these strategies, it is likely that longer experience with new programs would lead to further improvements in performance.
R23: VOICE EMOTION RECOGNITION BY MANDARIN-SPEAKING LISTENERS WITH
COCHLEAR IMPLANTS AND THEIR NORMALLY-HEARING PEERS
Hui-Ping Lu1, Yung-Song Lin2, Shu-Chen Peng3, Aditya M Kulkarni, Monita Chatterjee4
1Chi-Mei Medical Center, Tainan, TWN
2Taipei Medical University, Taipei, TWN
3US Food and Drug Administration, Silver Spring, MD, USA
4Boys Town National Research Hospital, Omaha, NE, USA
Relatively little is known about voice emotion recognition by cochlear implant (CI)
listeners who are native speakers of Mandarin. The purpose of this study is to quantify CI
recipients’ voice emotion identification in speech, in comparison with listeners with normal
hearing (NH). A database of 10 Mandarin sentences, each uttered with five emotions (angry, happy, sad, neutral, and scared) in a child-directed manner by talkers of both genders, has been created. Two additional sentences expressed with the five emotions by the same talkers are
used for task familiarization prior to testing. Participants are native speakers of Mandarin living
in Taiwan. In initial results, five NH listeners (one adult and four children) and three CI listeners
(19-22 years old, all implanted before age 5) achieved mean scores of 88% correct and 56%
correct, respectively (chance level is 20% correct). The NH listeners’ performance declined when
the stimuli were noise-vocoded (NV) with 16, 8 and 4 channels, with a mean score of 38%
correct in the 4-channel NV condition. CI listeners’ mean score with the full-spectrum speech
was comparable to NH listeners’ average score with 8-channel NV speech. These preliminary
results underscore the potential difficulty experienced by CI patients in processing voice
emotion. This is particularly the case given the child-directed nature of the stimuli, which are
likely to carry more exaggerated acoustic cues for emotion than adult-directed speech. We will
present results of continuing data collection, with additional analyses of reaction time, effects of
age (NH and CI listeners), age of implantation and experience with the device (CI listeners).
[Work supported by NIH R01 DC014233]
R24: SPEAKING RATE EFFECTS ON PHONEME PERCEPTION IN ADULT CI
USERS WITH EARLY- AND LATE-ONSET DEAFNESS
Brittany N. Jaekel, Rochelle Newman, Matthew Goupell
University of Maryland-College Park, College Park, MD, USA
Speaking rate varies across and within talkers, dependent on factors like emotion,
dialect, and age. Speaking rate can affect the durations of certain phonemes (Crystal & House,
1982), and listeners must apply “rate normalization” to accurately identify those phonemes
(Miller, 1981). For example, the acoustic stimulus that a listener identifies as a /g/ phoneme
when produced by a talker with a slow speaking rate could be identified as a /k/ phoneme when
produced by a fast-speaking talker. The extent to which rate normalization occurs in cochlear
implant (CI) users is unknown; deviations could contribute to the variability in speech perception
outcomes in this population. We hypothesized that adult CI listeners with late-onset deafness
would be able to rate normalize effectively because of experience with acoustic speech before
hearing loss; in other words, they learned to rate normalize because they developed language
with acoustic hearing. In contrast, we hypothesized that adult CI listeners with early-onset
deafness would demonstrate abnormal rate normalization because of their lack of experience
with language with acoustic hearing.
Seven CI users with early-onset deafness, 15 CI users with late-onset deafness, and 15
NH listeners heard naturalistically produced sentences spoken with fast, medium, and slow
rates, ending in a word from a stop-consonant series that varied in voice-onset time (VOT) of
the initial phoneme; the briefest VOT was /g/ and the longest VOT was /k/ (Newman &
Sawusch, 2009). NH listeners additionally heard these same speech stimuli processed by a sine vocoder with 4, 8, or 16 channels. Listeners were asked to select whether the final word of each sentence began with a /g/ or /k/ phoneme.
Compared to NH listeners, CI users showed less precise phonemic boundaries, with
phonemic boundaries occurring at shorter VOT durations. NH listeners presented vocoded
stimuli had more precise phonemic boundaries as the number of spectral channels increased,
indicating that more spectral channels may be helpful to the listener for matching speech input
to phonemic categories.
Despite hearing status differences across groups, generally all listeners showed rate
normalization with the stop-consonant series. These results indicate that CI users can rate
normalize, adjusting how they apply phonemic categories onto degraded input on the basis of
speech rate. Furthermore, CI users with early-onset deafness seem to be able to “learn” to
normalize to speaking rate, even though they had limited to no exposure to acoustic hearing.
These findings are important for understanding not only how CI users with a wide range of
acoustic hearing experiences perceive and interpret degraded sentences, but also how CI users
might be affected in their daily lives by talkers speaking at different rates in real-world
conversations.
This work was supported by NIH K99/R00-DC010206 (Goupell), NIH P30-DC004664 (C-CEBH),
and the University of Maryland.
R25: INFLUENCE OF SIMULATED CURRENT SPREAD ON SPEECH-IN-NOISE
PERCEPTION AND SPECTRO-TEMPORAL RESOLUTION
Naomi B.H. Croghan, Zachary M. Smith
Research & Technology Labs, Cochlear Ltd., Centennial, CO, USA
Recent work has suggested a tradeoff in cochlear implant recipients between poor
spectral resolution and the detrimental impact of temporal fluctuations in noise (Oxenham and
Kreft, 2014). Reduced spectral resolution due to intracochlear current spread may lead to an
effective smoothing of the temporal envelope, potentially offsetting any adverse effects of
masker fluctuations for speech understanding. Such an interaction may partially explain the
differences in susceptibility to different masker types observed between electric and acoustic
hearing. In the current study, we investigate speech perception in four masker types: speechshaped noise, four-talker babble, a single competing talker, and a multitone masker. The
masker conditions were combined with varying degrees of simulated current spread in normalhearing listeners using a tone vocoder. The use of this paradigm allows for experimental control
of spectral resolution within each listener. In addition to speech recognition, we assessed
spectro-temporal resolution with a dynamic ripple detection task across the same current spread
settings. Thus, we explore the relationship between psychophysical spectro-temporal resolution
and speech perception while considering the effects of temporal fluctuations and spectral
smearing. The results will be discussed in the context of other experimental findings from
cochlear-implant recipients with and without multipolar current focusing.
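As a rough illustration of simulating current spread in a tone vocoder, a minimal Python sketch (the band edges, filter order, and spread constant are illustrative assumptions, not the study's settings):

    import numpy as np
    from scipy.signal import butter, sosfilt, hilbert

    fs = 16000
    t = np.arange(fs) / fs
    speech = np.random.randn(fs)                             # stand-in for a speech signal
    edges = np.logspace(np.log10(200), np.log10(7000), 9)    # 8 analysis bands

    # Extract the temporal envelope in each band and build a tone carrier per band.
    envs, carriers = [], []
    for lo, hi in zip(edges[:-1], edges[1:]):
        sos = butter(4, [lo, hi], btype="band", fs=fs, output="sos")
        envs.append(np.abs(hilbert(sosfilt(sos, speech))))
        carriers.append(np.sin(2 * np.pi * np.sqrt(lo * hi) * t))  # band-centre tone
    envs = np.array(envs)

    # Simulated current spread: each channel receives exponentially decaying
    # contributions from its neighbours (smaller `decay` = broader spread).
    decay = 0.5
    spread = np.exp(-decay * np.abs(np.subtract.outer(np.arange(8), np.arange(8))))
    smeared = spread @ envs

    vocoded = sum(e * c for e, c in zip(smeared, carriers))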
R26: IMPACT ANALYSIS OF NATURALISTIC ENVIRONMENTAL NOISE TYPE
ON SPEECH PRODUCTION FOR COCHLEAR IMPLANT USERS VERSUS
NORMAL HEARING LISTENERS
Jaewook Lee1, Hussnain Ali1, Ali Ziaei1, John H.L. Hansen1, Emily A. Tobey2
1Center for Robust Speech Systems, Department of Electrical Engineering, The University of Texas at Dallas, Richardson, TX, USA
2School of Behavioral and Brain Sciences, The University of Texas at Dallas, Richardson, TX, USA
The Lombard effect is the involuntary response speakers exhibit in the presence of environmental noise. In normal-hearing (NH) talkers, this phenomenon is known to change vocal effort, including increased voice intensity and changes in pitch period structure, formant characteristics, glottal spectral slope, speech rate, etc. However, little is known about the Lombard effect on speech production in cochlear implant (CI) users, or about whether speech production differs between CI and NH individuals during two-way conversations. The objective of this study
has been to analyze the speech production of CI users with respect to environmental noise
structure. In addition, the study aims to investigate the degree to which CI users' speech production is affected as compared to that of NH talkers. A total of 12 speakers (6 CI and 6 NH) participated by producing conversational speech in various everyday environments. Continuous single-session audio streams were collected with mobile personal audio recording devices and analyzed. Prior advancements in this domain include the "Prof-Life-Log" longitudinal study
at UT-Dallas. A number of parameters that are sensitive to Lombard speech were measured
from the speech. Preliminary analysis suggests that the Lombard effect is present in speech from CI users who are postlingually deaf adults. Speakers significantly increased their vocal effort, in measures such as vowel intensity, fundamental frequency, and glottal spectral slope, in challenging noisy environments to ensure intelligible communication. Results across several
speech production parameters will be presented and compared for CI and NH subjects.
This work was supported by Grant R01 DC010494-01A awarded by the NIH/NIDCD.
R27: A MODEL OF INDIVIDUAL COCHLEAR IMPLANT USERS’ SPEECH-IN-NOISE PERFORMANCE
Tim Juergens1, Volker Hohmann1, Andreas Buechner2, Waldo Nogueira2
1Medizinische Physik und Exzellenzcluster “Hearing4all”, Carl-von-Ossietzky Universität, Oldenburg, DEU
2Department of Otolaryngology, Medical University Hannover, Cluster of Excellence “Hearing4all”, Hannover, DEU
Users of cochlear implants (CIs) experience much more difficulty understanding speech in ambient noise than, e.g., normal-hearing listeners. Beyond this general problem, there is also large variability in speech-in-noise performance across CI users, most of which is so far unexplained (cf. Lazard et al., 2012, PLoS ONE). This unexplained variability across listeners is a large obstacle to designing individual computer models of CI users' speech-in-noise performance. Such computer models could be beneficial for finding individually fitted signal processing that offers CI users the best possible speech-in-noise outcome. This study aims at an individual prediction of speech
intelligibility in noise using a “microscopic” model of CI users’ speech intelligibility (Fredelake and
Hohmann, 2012, Hearing Res.). The individual adjustment of the model is based on audiological and
non-audiological data that has been collected postoperatively from the individual CI user. Two
important factors of the model for speech intelligibility in noise were identified by Fredelake and
Hohmann (2012), which are individually adjusted in this study: (a) the spatial spread of the electrical field in the perilymph and (b) the "cognitive noise", i.e., the individual cognitive ability of the participant to understand speech.
Measures were obtained from 10 adult post-lingually deaf CI users. The voltage distribution
in the scala tympani was measured using the backward telemetry of the Nucleus implant.
Exponential decay functions were fitted to the voltage measurements. These exponential decay
functions were used to estimate the spatial spread of the electrical field in the cochlea. The
“cognitive noise” was adjusted by incorporating a combination of anamnesis data (using the
conceptual model of Lazard et al., 2012, PLoS one) and a cognitive test (the Text Reception
Threshold Test, Zekveld et al., 2007, J. Speech Lang. Hear. Res.). Speech-in-noise-performance
was measured using an adaptive procedure to obtain the speech reception threshold (SRT) with the
Oldenburg sentence test (for example “Peter gives three green spoons”) in noise. The total number
of auditory nerve cells was not directly estimated, but was used as a parameter, fixed across all CI
participants.
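As a rough illustration of the spatial-spread estimate, a minimal Python sketch that fits an exponential decay to a voltage profile (the distances and voltages are made up, not the telemetry data from this study):

    import numpy as np
    from scipy.optimize import curve_fit

    def exp_decay(d, v0, lam):
        # v0 = voltage at the stimulating electrode; lam = decay constant (1/mm)
        return v0 * np.exp(-lam * d)

    distance_mm = np.array([0.0, 0.75, 1.5, 2.25, 3.0, 3.75])
    voltage = np.array([1.00, 0.62, 0.40, 0.26, 0.17, 0.11])

    (v0, lam), _ = curve_fit(exp_decay, distance_mm, voltage, p0=[1.0, 0.5])
    print(f"length constant = {1 / lam:.2f} mm")   # broader spread = larger value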
Four different levels of individualization were implemented in the model: spatial spread of the electrical field (a) and cognitive noise (b) were individualized in combination, either one alone, or not at all. The results show that the highest correlation between measured and predicted SRTs (r=0.81, p<0.01) was obtained when (a) and (b) were individualized in combination. Individualization of (a) or (b) alone did not yield a significant correlation. However, all model
versions presented here predicted SRTs with a small bias in comparison to the measured SRTs. In
conclusion, the outcomes of the model show that by individualizing the voltage distribution and the
“cognitive noise”, it is possible to predict the speech-in-noise performance of individual CI users.
Supported by DFG Cluster of Excellence “Hearing4all”.
R28: SOUND LOCALIZATION IN BIMODAL COCHLEAR IMPLANT USERS
AFTER LOUDNESS BALANCING AND AGC MATCHING
Lidwien Veugen1, Maartje Hendrikse2, Martijn Agterberg1, Marc van Wanrooy1, Josef Chalupper3, Lucas Mens4, Ad Snik4, A. John van Opstal1
1Radboud University Nijmegen, Nijmegen, NLD
2Department of Biomedical Engineering, University of Twente, Enschede, NLD
3Advanced Bionics European Research Centre, Hannover, DEU
4Radboud University Medical Centre, Nijmegen, NLD
We assessed horizontal sound localization performance in fourteen bimodal listeners, all
using the same HA (Phonak Naida S IX UP) and an Advanced Bionics CI processor. Devices
were balanced in loudness and used the same automatic gain control (AGC) attack and release
times.
Localization performance ranged from complete absence to an RMS error of 28° for the
best performer. Three subjects were able to localize short broadband, high-pass and low-pass
filtered noise bursts, and two others were able to localize the low-pass noise only, which agreed
with the amount of residual hearing. Five subjects localized all sounds at the CI side. In three
subjects we could optimize localization performance by adjusting the bandwidth of the stimulus.
Localization performance was significantly correlated with speech understanding in noise.
Surprisingly, the majority of subjects did not rely on the head-shadow effect (HSE) in the
monaural CI and/or HA condition. The five best performers had relatively good residual hearing,
but another three with comparable thresholds localized much worse. Our results indicate that
bimodal stimulation can provide ILD cues, but little or no ITD cues.
R29: PERFORMANCE OF MICROPHONE CONFIGURATIONS IN PEDIATRIC COCHLEAR
IMPLANT USERS
Patti M Johnstone1, Kristen ET Mills1, Elizabeth L Humphrey1, Kelly R Yeager1, Amy
Pierce, Kelly J McElligott1, Emily E Jones1, Smita Agrawal2
1University of Tennessee Health Science Center, Knoxville, TN, USA
2Advanced Bionics, LLC, Valencia, CA, USA
Background: Children with hearing loss are affected more than their peers with normal hearing by the negative consequences of background noise and distance on speech understanding. Technology previously used in hearing aids to improve speech understanding in unfavorable listening environments, such as adaptive directional microphones (DM) and wireless streaming to small remote microphones (RM), is now available in cochlear implant (CI) devices. Little is known about how use of these microphone technologies affects speech understanding in children with CI devices.
Purpose: This study sought to answer the following research question: In children with CI, do DM
and RM technologies offer improved speech understanding as compared to their standard microphone
configuration when background babble noise is present or when speech is presented at a distance?
Design: Repeated Measures, Mixed Model
Study Sample: A preliminary sample of 6 children aged 5-19 years with at least one Advanced Bionics (AB) CI compatible with a Naida CI Q70 processor, and an additional 6 children with normal hearing age-matched to the hearing-impaired group.
Data Collection: The same set of standard-issue Naida CI Q70 processors was used to test all
children with CI. Using the child’s own everyday use program from their personal sound processor, 5
different microphone configuration programs were created and tested via the research Naida CI Q70
processor(s): i) 100% T-mic; ii) 100% DM - UltraZoom (AB and Phonak’s adaptive DM); iii) 50%-50%
mixing ratio RM/ComPilot with T-mic; iv) 75%-25% mixing ratio RM/ComPilot with T-mic; v) 100%
RM/ComPilot. Each participant completed two experiments: 1) listening in background noise (LBN); and
2) listening to speech at a distance (LD). In the LBN experiment, a speech recognition threshold (SRT)
was measured using the Children’s Realistic Index of Speech Perception (CRISP). The child faced a
loudspeaker at 0 degrees azimuth. An SRT was measured in quiet with 2 microphone configurations (100% T-mic; 50% RM). SRT with 20-talker babble presented via a loudspeaker placed at 180 degrees azimuth was measured for 5 microphone configurations (100% T-mic; DM UltraZoom; 50% RM with T-mic; 75%-25% RM with T-mic; and 100% RM). Children with normal hearing were tested once in quiet
and once with background noise. In the LD experiment, a percent correct score was obtained using the
Baby AZ Bio Sentences presented in quiet in a large classroom at a distance of 5, 10, and 15 meters
with the 5 microphone configurations for each distance. Children with normal hearing were tested once at
each distance.
Results: In the LBN experiment, children with CI showed improved SRTs, both in quiet and in noise, with the 50% RM configuration relative to the T-mic alone in quiet. Benefit was seen with all three RM configurations in noise, and the best performance was seen in the 100% RM condition. The use of the DM in noise improved SRTs relative to the T-mic in noise; however, the improvement was smaller than that obtained with the RM.
In the LD experiment, children with CI showed a systematic drop in word recognition scores over
distance when using T-mic alone. There was no difference in word recognition scores at 5 meters with
any microphone configuration. However at distances of 10 and 15 meters, improved word recognition
scores were obtained with the RM configurations; the highest scores being with the 100% RM condition.
Surprisingly, use of the DM UltraZoom also improved performance over T-mic alone at 10 meters.
When children with CI were compared with age-matched peers with normal hearing, their SRT and word
recognition scores were significantly poorer.
Conclusions: Children who use CI can benefit from use of DM and/or RM over use of their T-mic.
However, performance in background noise or at a distance in quiet will rarely match that of children with normal hearing.
R30: FULLY SYNCHRONIZED BILATERAL STREAMING FOR THE NUCLEUS
SYSTEM
Jan Poppeliers, Joerg Pesch, Bas Van Dijk
Cochlear CTC, Mechelen, BEL
Studying bilateral-specific phenomena like ILDs and ITDs requires the left and right channel stimulation streams to the recipient's implants to be in perfect lock-step. For this purpose Cochlear developed the RF Generator XS (RFGenXS), a dedicated hardware bilateral streaming platform, and NIC3, a research software tool that supports bilateral streaming.
The RF Generator XS device shares one clock signal between the left and right channel streamers, guaranteeing that the RF outputs of both channels are perfectly synchronized at all times. This significantly improves the timing accuracy compared to the previous setup (NIC2.x), where two separate L34 research processors were used for bilateral streaming; each device ran on its own separate clock, causing the left and right streams to drift apart over time.
NIC3 is the latest version of the Nucleus Implant Communicator (NIC). It is a set of software
libraries that allow users to define and control the stimulation pattern of a recipient’s Nucleus
cochlear implant(s) from research applications on their PC/laptop. Currently MATLAB and
Python can be used to access the NIC3 libraries. For bilateral streaming, the researcher first
defines the left and right channel stimulation sequence in a MATLAB or Python script. A
stimulation sequence can be defined on a pulse-by-pulse basis for psychophysical experiments,
or it can be generated by, for instance, the Nucleus MATLAB Toolbox (NMT).
The NMT is a library of MATLAB functions, including implementations of Cochlear's CIS and
ACE strategies. Its purpose is to serve as a research tool for psychophysics and sound coding.
Researchers can modify existing coding strategies or design new ones, process sound samples
with their new strategy and present it to recipients through NIC.
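As a purely illustrative sketch of defining a pulse-by-pulse bilateral sequence with an imposed ITD, in Python (the Pulse class and the final streaming call are hypothetical stand-ins; the actual NIC3 API names are not shown in this abstract and are available only to registered users):

    from dataclasses import dataclass

    @dataclass
    class Pulse:                 # hypothetical pulse description
        electrode: int           # intracochlear electrode number
        level: int               # current level (clinical units)
        time_us: float           # onset time within the sequence (microseconds)

    rate_pps = 100               # pulses per second
    itd_us = 400                 # right ear lags the left by 400 microseconds

    left = [Pulse(11, 180, n * 1e6 / rate_pps) for n in range(100)]
    right = [Pulse(11, 180, n * 1e6 / rate_pps + itd_us) for n in range(100)]

    # With the real toolkit, both sequences would be handed to the synchronized
    # RFGenXS streamer, e.g. something like stream_bilateral(left, right).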
Another key functionality of the NIC3 streamer is Low Latency Continuous Streaming. Depending on the required stimulation rate and the speed of the PC, the latency can be set as low as 20 ms, so that stimulation sequences can be continuously appended with the given latency. This makes near real-time streaming possible without the need for expensive hardware with a real-time operating system, and can be a cheap way to, for instance, assess a new sound coding strategy. It also lets recipients adapt to new stimulation strategies by, e.g., listening to audio books via the NIC system.
The NIC3 streamer now also offers the option to automatically manage the implant power
supply with Power-Up Frames (PUFs). PUFs are RF frames that do not cause stimulation but
are supplied for the sole purpose of keeping the implant powered. The researcher is now
relieved of tediously inserting PUFs into the NIC scripts by hand.
The NIC3 streamer and the Nucleus MATLAB Toolbox are available to registered NIC users; the RF Generator XS is available to NIC users involved in bilateral streaming projects.
R31: NEUROPHYSIOLOGICAL RESPONSES AND THEIR RELATION TO
BINAURAL PSYCHOPHYSICS IN BILATERAL COCHLEAR IMPLANT USERS
Heath Jones, Ruth Litovsky
Waisman Center - University of Wisconsin, Madison, WI, USA
Bilateral cochlear implant (BiCI) users receive substantial benefit from a second implant; however, performance remains poor compared to normal-hearing (NH) listeners for tasks that require binaural hearing, such as sound localization. One prevailing hypothesis has been that
the poor encoding of interaural time differences (ITDs) is most likely responsible for degraded
binaural performance. While numerous studies have demonstrated that many BiCI users are
sensitive to ITDs delivered directly to interaural electrode pairs, all have found that there is no
particular cochlear place with the best ITD sensitivity across subjects. In addition to this, a wide
range of performance within subjects and across groups is consistently observed. The ITD
threshold variability in BiCI users is larger than typically measured in acoustic hearing, whether
using high frequency carriers modulated at a low frequency, or low frequency carriers. Whether
peripheral factors (e.g., degraded auditory nerve connectivity) or central factors (i.e., degraded
binaural circuitry) are responsible for the degradation in ITD sensitivity has yet to be determined.
Furthermore, the electrical current delivered by an electrode spreads across distant auditory
nerve fibers, and whether asymmetries across the two ears in the neural spread of excitation
(SOE) affect binaural processing has not been studied.
In this study, we investigated whether clinically viable objective measures could provide
physiological insight into the variability observed in behavioral ITD sensitivity for different
interaural electrode pairs. Employing similar methods, previous work demonstrated that the
degree of channel interaction due to neural SOE was significantly correlated to psychophysical
measures of binaural unmasking (Lu et al. 2011). Using the Neural Response Telemetry (NRT)
system available for the Nucleus® family of implants from Cochlear Ltd, measurements of
electrically evoked compound action potentials (eCAPs) were used to estimate the neural SOE
profile for pitch-matched interaural electrode pairs at different places along the electrode array
that were tested on an ITD discrimination task. For each electrode, the SOE profile was
compared with the corresponding electrode in the pair, and the degree of asymmetry in the
profiles across the ears was quantified. These differences in SOE profiles were then tested for
correlations with ITD discrimination thresholds. Results show a correlation between the
quantified differences in neural SOE across the ears and ITD just-noticeable-difference
thresholds (JNDs), such that larger differences in SOE profiles typically lead to larger ITD JNDs.
These findings suggest that ITD sensitivity may be optimal for interaural electrode pairs that are both pitch-matched and stimulated at binaurally matched current levels that produce similar amounts of current spread along the cochlear array.
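As a rough illustration of this kind of comparison, a minimal Python sketch that quantifies the mismatch between left- and right-ear SOE profiles and correlates it with ITD thresholds (all values are made up, not data from this study):

    import numpy as np
    from scipy.stats import pearsonr

    def soe_asymmetry(left_profile, right_profile):
        """Root-mean-square difference between peak-normalized SOE profiles."""
        l = np.asarray(left_profile) / np.max(left_profile)
        r = np.asarray(right_profile) / np.max(right_profile)
        return float(np.sqrt(np.mean((l - r) ** 2)))

    # One asymmetry value and one ITD threshold per interaural electrode pair.
    asymmetries = np.array([0.05, 0.12, 0.20, 0.08, 0.30, 0.16])
    itd_jnds_us = np.array([150, 300, 600, 220, 900, 400])

    r, p = pearsonr(asymmetries, itd_jnds_us)
    print(f"r = {r:.2f}, p = {p:.3f}")   # larger SOE mismatch -> larger ITD JND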
Work supported by NIH-NIDCD (R01-DC003083 and R01 DC010494) and NIH-NICHD (P30HD03352).
R32: EFFECTS OF THE CHANNEL INTERACTION AND CURRENT LEVEL ON
ACROSS-ELECTRODE INTEGRATION OF INTERAURAL TIME DIFFERENCES
IN BILATERAL COCHLEAR-IMPLANT LISTENERS
Katharina Egger, Bernhard Laback, Piotr Majdak
Acoustics Research Institute, Vienna, AUT
Sensitivity to interaural time differences (ITDs) is essential for our ability to localize
and segregate sound sources. When consistent ITD cues are presented over a range of
frequencies, normal-hearing listeners benefit from the so-called across-frequency integration:
They combine ITD information across frequency and, thus, show improved ITD sensitivity
compared to when the ITD is only presented in a single frequency channel. The present study
aimed to elucidate whether cochlear-implant (CI) listeners can make use of similar processing
when consistent ITD cues are presented at multiple interaural electrode pairs. For that purpose,
the sensitivity of seven bilateral CI listeners to ITD encoded via two interaural electrode pairs
(i.e., a double pair) was systematically studied in order to clarify the performance changes
relative to the stimulation with a single electrode pair.
In a constant-stimuli paradigm, ITD thresholds for unmodulated, 100-pulse-per-second
pulse trains were measured using interaurally coordinated direct stimulation. Consistent ITDs
were presented at either one or two pitch-matched, interaural electrode pairs. Different tonotopic
separations between the pairs were tested. Double-pair JNDs were compared to single-pair
JNDs either at constant level or at constant loudness. For the constant level conditions,
electrode current levels were kept constant for both single and double pairs, yielding increased overall loudness for the double pair compared to the respective single pairs. For the constant-loudness conditions, the double pair had lowered current levels compared to the respective single pairs, yielding equal loudness for single and double pairs.
For large tonotopic separation and constant levels, ITD thresholds were significantly
lower for double pairs compared to the respective single pairs. For small tonotopic separation
and constant levels, similar thresholds for single and double pairs were found. When compared
at constant loudness, single- and double-pair thresholds were not significantly different for either large or small tonotopic separation. Double-pair thresholds generally increased with
decreasing tonotopic distance between the stimulating electrode pairs, showing a significant
effect of tonotopic distance. Irrespective of electrode-pair configuration, thresholds significantly
decreased with increasing current level, demonstrating a substantial effect of stimulus level on
ITD sensitivity.
Our results show that CI listeners have only limited abilities to combine ITD information
presented across multiple electrodes. The improved sensitivity found for double pairs with a
large tonotopic separation and constant levels is probably caused by the increased number of
activated neurons, as indicated by the associated increase in overall loudness. This suggests
that overall loudness, controlled by either the number of stimulating electrodes or the electrode
current levels, may play an important role in the optimization of CI signal processing algorithms
that aim to enhance the ITD sensitivity of bilateral CI listeners.
Work supported by MED-EL Corp.
R33: FACTORS CONTRIBUTING TO VARIABLE SOUND LOCALIZATION
PERFORMANCE IN BILATERAL COCHLEAR IMPLANT USERS
Rachael Maerie Jocewicz, Alan Kan, Heath G Jones, Ruth Y Litovsky
University of Wisconsin, Madison, WI, USA
Many patients with bilateral profound hearing loss receive bilateral cochlear implants
(CIs), and they generally exhibit marked improvements in their ability to localize sounds
compared with unilateral CI users. However, within this growing population, localization ability
varies dramatically. Some of this variability may be due to factors involving the hardware and
software of the CIs, which do not present binaural cues with fidelity. The effects of patient-specific factors, such as hearing history and the duration of auditory deprivation before implantation, are poorly understood in terms of how they contribute to the inter-patient variability.
We investigated the relationships between patients’ hearing history, years of bilateral CI
use, and ability to identify the location of a sound source positioned in the horizontal plane. The
sound localization task consisted of a signal being presented from one of 19 loudspeakers
positioned at 10° intervals on a semicircular array (-90° to +90°) in azimuth. Stimuli
consisted of a train of pink noise bursts (spectrum and level roved, averaged at 50 dB SPL).
Participants listened through their clinical speech processors and indicated the perceived location of signals on a touchscreen computer that displayed a continuous arc representing the semicircular loudspeaker array. Sound localization performance was quantified as the root-mean-square (RMS) error over 285 trials per individual. The age of onset of deafness and the years of bilateral CI experience were self-reported by the patient.
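As a rough illustration of the RMS-error measure, a minimal Python sketch (the angles are made up):

    import numpy as np

    def rms_error(targets_deg, responses_deg):
        """RMS localization error in degrees across all trials."""
        t = np.asarray(targets_deg, dtype=float)
        r = np.asarray(responses_deg, dtype=float)
        return float(np.sqrt(np.mean((r - t) ** 2)))

    targets = [-90, -50, -10, 0, 30, 70, 90]
    responses = [-60, -40, -20, 10, 10, 80, 60]
    print(f"RMS error = {rms_error(targets, responses):.1f} deg")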
Data were collected for 31 adult participants (all implanted as adults), between the ages
of 20 and 85 years. Of these participants, 15 experienced the onset of hearing loss in childhood
(under the age of fourteen) and had a range of 0.5 to 9 years of bilateral CI experience.
The remaining 16 participants experienced the onset of hearing loss in adulthood (over the age
of eighteen) and had a range of 0.5 to 12.5 years of bilateral CI experience.
Performance in the sound localization task ranged from an RMS error of 18.6 degrees to 68.3
degrees. In the childhood and adulthood onset of hearing loss groups, the average RMS error
was 38.5 degrees and 29.3 degrees, respectively. Fitting a linear model to the data (R2 = 0.68) revealed that the age at onset of deafness, years of bilateral CI experience, and the proportion of life using CIs were significant factors affecting RMS error. Predictions from the
linear model show that RMS localization errors are typically higher with earlier age of onset of
deafness. However, localization errors appear to fall with increasing years of bilateral CI
experience, but only when the onset of deafness occurred in childhood. In the adulthood onset
of deafness group, the years of bilateral CI use had no significant effect on RMS error. These
results suggest that the age of onset of deafness and the years of bilateral CI experience may
account for some of the high variability in sound localization performance among bilateral CI
users.
Work supported by NIH-NIDCD (R01DC003083 and R01 DC010494) and NIH-NICHD
(P30HD03352)
R34: INVESTIGATING HOW FACTORS SUCH AS PATIENTS’ HEARING
HISTORY AND PITCH MATCHING BETWEEN THE EARS MAY AFFECT
BINAURAL SENSITIVITY IN BILATERAL COCHLEAR IMPLANT LISTENERS
Tanvi Thakkar1, Alan Kan1, Matthew Winn1, Matthew J. Goupell2, and Ruth Y. Litovsky1
1University of Wisconsin, Madison, WI, USA
2University of Maryland, College Park, MD, USA
Bilateral cochlear implant (BiCI) listeners demonstrate highly variable interaural timing
difference (ITD) sensitivity. This variability cannot be described simply by stimulus parameters
and poor reliability of the physical binaural cue. The aim of this study is to investigate how other factors, such as a patient's hearing history and the pitch-matching techniques used in laboratory settings, may affect ITD sensitivity in BiCI users.
ITD sensitivity is generally better in BiCI listeners who experienced acoustic hearing
into adulthood and poorer in BiCI listeners with early onset of deafness. However, other
factors such as age, years of deafness, and years of BiCI experience may also explain the
variability observed among the patient population. In addition, ITD sensitivity is generally
measured for electrode pairs that are specifically chosen to match place of stimulation across
the ears, through subjective measures such as pitch matching. This is done because in
normal hearing systems binaural nuclei in the brainstem receive parallel inputs from the
periphery that are matched by place of stimulation. However, in BiCI users an interaural place-of-stimulation mismatch is likely to arise for electrodes of the same number. The aim is then to
stimulate the same place on the cochleae in the two ears, thus attempting to recruit peripheral
neurons whose excitation should result in similar pitch percepts in the two ears. However,
because pitch perception is complex and can be influenced by factors other than place of
stimulation, the variability in ITD sensitivity observed among the BiCI population may also be
influenced by using pitch assessments as a proxy for matching place of stimulation.
Data from 34 BiCI patients were used to understand ITD-sensitivity variability through:
(a) individual subject characteristics (years of bilateral CI experience, age at testing, age at
onset of hearing loss, the proportion of life with hearing loss); (b) ITD sensitivity, measured at
basal, middle, and apical locations along the length of the electrode array; and (c)
performance on pitch matching tasks, namely pitch magnitude estimation of stimulation to
single electrodes, and pitch comparison across the ears for pairs of electrodes stimulated
sequentially. All BiCI listeners were adult implant users with 0.5-9 years of bilateral
experience and at least 2-10 years of CI experience.
A linear model was fit to the data, with results suggesting that pitch-assessment measures may be related to ITD sensitivity when controlling for covariates such as years of experience with cochlear implants and proportion of life with hearing loss. Overall years of
cochlear implant experience seemed to be a better predictor of ITD sensitivity than years of
BiCI experience. This indicates that pitch-assessment measures may be a helpful tool in
optimizing ITD sensitivity in BiCI patients, but may also lead to increased variability beyond
the above-mentioned etiological factors. This knowledge may give insight into ways in which
we can optimize performance with bilateral CIs.
Work supported by NIH-NIDCD (R01DC003083 to RYL) and in part by NIH-NICHD
(P30HD03352 to the Waisman Center).
R35: AGING AFFECTS BINAURAL TEMPORAL PROCESSING IN COCHLEAR-IMPLANT AND NORMAL-HEARING LISTENERS
Sean Robert Anderson1, Matthew J Goupell2
1Department of Communication Sciences and Disorders, University of Wisconsin, Madison, WI, USA
2Department of Hearing and Speech Sciences, University of Maryland, College Park, MD, USA
Age-related temporal processing deficits occur in normal-hearing (NH) listeners;
however, the extent of these deficits in cochlear-implant (CI) users is unclear. As the number of
older (e.g., >65 yrs) CI users increases, it is important to understand the effects of aging on
temporal processing. In addition, almost all studies that present NH listeners with vocoded
stimuli have an age confound that could affect the validity of the comparison. In this study, we
hypothesized that older bilateral CI (OCI) users would demonstrate similar binaural temporal
processing compared to older NH (ONH) listeners, and both would show worse performance
than younger NH (YNH) listeners.
Twelve YNH, twelve ONH, and eight OCI listeners (average ages = 25.8, 66.3, and 62.9 years,
respectively) performed a lateralization task where they indicated the perceived intracranial
position of a sound. NH listeners were presented bandlimited acoustic pulse trains via
headphones and CI users were presented electric pulse trains via direct stimulation. The rate of
the pulse trains was either 10, 30, 100, or 300 pulses/s. The pulse trains had either a non-zero
interaural time difference (ITD) or interaural level difference (ILD) applied that was intended to
change the position of the auditory image.
Listeners perceived changes in intracranial position with changes in ITD or ILD. The
range of responses changed as a function of rate for the ITD but not ILD tasks. For ITD only,
there was a decrease in lateralization range with increasing rate, demonstrating a binaural
temporal rate limitation. For ITD and ILD, there was a significant effect of age for lateralization
range where, at low rates, the lateralization range of the YNH listeners was larger than that of the ONH and OCI listeners, and there was no difference between the ONH and OCI listeners. For ITD only,
there was a group by rate interaction where the OCI listeners demonstrated a sharper decrease
in lateralization range with increasing rate than either the YNH or ONH listeners.
The results show that age-related declines in temporal processing occur in CI listeners
and that previous studies utilizing vocoders likely have an age confound. In addition, the more
rapid decline in binaural temporal processing with increasing rate in the CI listeners suggests
different mechanisms affecting the temporal rate limitation in NH compared to CI listeners.
This work was supported by NIH K99/R00-DC010206 (Goupell), NIH P30-DC004664 (C-CEBH),
and the University of Maryland College of Behavioral and Social Sciences.
R36: EFFECTS OF ASYMMETRY IN ELECTRODE POSITION IN BILATERAL
COCHLEAR IMPLANT RECIPIENTS
Jill Firszt1, Rosalie Uchanski1, Laura Holden1, Ruth Reeder1, Tim Holden1, Christopher Long2
1Washington University School of Medicine, St. Louis, MO, USA
2Cochlear Limited, Centennial, CO, USA
Bilateral cochlear implant (CI) recipients do not receive the same benefits from listening with two
ears as normal hearing listeners. Possible contributors include restricted frequency resolution due to the
limited number of electrodes and differences in the anatomical position of the arrays in each cochlea
which may result in sound stimulating different regions of the cochlea for each ear. Presumably, bilateral
CI users will have the most effective bilateral input if sound stimulates similar frequency regions of each
ear. This study’s overall objective is to determine whether differences in electrode position (obtained from
3D reconstructions of pre- and post-implant CT scans) correlate with interaural pitch percepts,
asymmetries in speech recognition between ears, and reduced bilateral performance.
Twelve adult, bilateral CI recipients have completed loudness balancing, pitch comparisons, and
speech recognition testing. Loudness balancing was conducted for adjacent electrodes within each ear
and for the same electrode numbers between ears, i.e., electrode 22 in left and right ears. Pitch
comparisons were made within each ear (monaural comparisons) and between ears (interaural
comparisons). Monaural pitch comparisons estimate tonotopic order for electrodes in each array.
Interaural pitch comparisons generate points of subjective equality between arrays, creating a pitch correspondence across arrays. For interaural pitch comparisons, each electrode of the designated
reference array was paired with five contralateral electrodes. A psychometric function estimated the point
of subjective equality for pitch (a contralateral electrode number) for each electrode of the reference
array. The procedure yielded a pitch-based correspondence across arrays for each participant
(interaural-pitch match, e.g., right CI El 7 most-closely matched in pitch to left CI El 9). Word and
sentence scores in quiet and noise were obtained for each ear separately and bilaterally. From CT
scans, the left and right cochleae of each participant were registered for anatomical correspondence to
create a unified array insertion depth reference. A cochlear-position match of electrodes was calculated
from the insertion depths in each array (e.g., right CI El 13 most closely matched in cochlear position to
left CI El 15). Researchers were blind to electrode position while conducting pitch comparisons.
Results showed that all participants could perform the monaural and interaural pitch matching tasks.
Generally, monaural pitch comparisons of adjacent electrodes showed good tonotopic ordering except
for the most basal electrodes. By contrast, interaural pitch comparisons yielded varied results. For an
individual participant, interaural pitch matches usually varied across the arrays in terms of the distance
based on electrode number. For example, right El 15 matched in pitch to left El 15 while right El 22
matched in pitch to left El 19. Additionally, interaural pitch matches differed across participants. For
example, right El 8 may have been matched in pitch to left El 8 for one participant and to left El 14 for
another. Interaural pitch matches also varied in discreteness. An electrode on one side might match in
pitch to a single contralateral electrode or to a range of electrodes. Speech recognition testing showed
that six participants had asymmetries in word scores between ears (>15 percentage points difference). CT scans
revealed asymmetries in electrode position (e.g., insertion depth, scala location, or wrapping factor)
between ears for most participants. Characterizing interaural asymmetries in speech recognition,
electrode position, and pitch percepts and the relationships amongst these asymmetries may be
beneficial in improving bilateral CI benefit.
Supported by NIH/NIDCD RO1DC009010.
R37: OPTIMIZING THE ENVELOPE ENHANCEMENT ALGORITHM TO
IMPROVE LOCALIZATION WITH BILATERAL COCHLEAR IMPLANTS
Bernhard U. Seeber, Claudia Freigang, James W. Browne
Audio Information Processing, TU München, Munich, DEU
Cochlear implant (CI) users show difficulties in localizing sounds, which has been attributed, in
part, to reduced availability of interaural time differences (ITD). Even though temporal information is
encoded in the envelope and - in some CIs - in the pulse timing, it becomes degraded due to temporal
quantization, current spread and the small number of stimulation channels used. In reverberant spaces,
interaural timing of sound onsets plays a particularly strong role for sound localization. Kerber and
Seeber (2013) showed that the localization ability of those CI users who are particularly sensitive to ITDs is more robust against reverberation. Based on this consideration, Monaghan and Seeber (2011)
proposed a new algorithm which enhances selected sound onsets to increase the salience of the ITD
cues transmitted. The approach is an extension to the continuous interleaved sampling (CIS) strategy
which codes envelope amplitudes into the amplitudes of stimulation pulses. The onset enhancement
(OE) algorithm sets the envelope amplitude to zero immediately prior to those envelope peaks where the
direct-to-reverberant ratio (DRR) exceeds a threshold, thus creating a temporal gap followed by a steep
sound onset. This leads to an improved transmission of ITDs and was shown to improve localization in
reverberant spaces in simulated listening with cochlear implants.
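The following minimal Python sketch illustrates the gist of the onset-enhancement rule described above (zero the envelope just before peaks with a favorable DRR); the gap length, threshold value, and function names are illustrative assumptions, not the authors' implementation.

import numpy as np
from scipy.signal import find_peaks

def enhance_onsets(envelope, drr_db, fs, drr_thresh_db=0.0, gap_ms=10.0):
    # envelope: 1-D channel envelope (linear amplitude)
    # drr_db:   direct-to-reverberant ratio estimate per sample, in dB
    # fs:       envelope sampling rate in Hz
    out = envelope.copy()
    gap = int(round(gap_ms * 1e-3 * fs))
    peaks, _ = find_peaks(envelope)
    for p in peaks:
        if drr_db[p] > drr_thresh_db:        # peak dominated by direct sound
            out[max(0, p - gap):p] = 0.0     # insert gap, then steep onset at p
    return out

# Toy usage: two envelope peaks, only the first has a favorable DRR.
fs = 1000
t = np.arange(0, 0.5, 1 / fs)
env = np.exp(-((t - 0.1) ** 2) / 1e-4) + 0.8 * np.exp(-((t - 0.3) ** 2) / 1e-4)
drr = np.where(t < 0.2, 6.0, -6.0)           # dB, illustrative values
enhanced = enhance_onsets(env, drr, fs)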
In the present work we investigated the time point for setting the envelope to zero. If the
enhancement is done too early, the enhancement effect is likely minor due to forward masking from the previous peak; if it is as late as the envelope peak, half of the peak's energy is lost, thereby potentially
also reducing the enhancement effect. We first analyzed the location of the DRR maximum, indicating
the point with relatively largest direct sound energy and hence most reliable binaural cues, relative to the
peak in the envelope. Speech sounds were convolved with room impulse responses of different rooms
and source-receiver configurations. The envelope was extracted from the reverberated speech in
frequency-specific channels analogous to the CIS strategy. The DRR was generally largest at or prior to a
peak, indicating that envelope peaks coincide statistically with the moments where direct sound energy
dominates over reverberant energy.
Next, we investigated the effect of envelope enhancement psychophysically in a lateralization
task for various enhancement time points: (i) at the envelope peak, (ii) prior to the envelope peak, (iii) at
the DRR peak, (iv) no enhancement. Evaluation was conducted for different source-receiver distances
and various ITDs in conditions with (a) direct sound only [anechoic] or (b) direct sound in reverberation
[reverberant] with simulated CI listening. Onset enhancement improved lateralization performance across
all tested source-receiver distances when compared to the condition without enhancement. In general,
enhancing at the envelope peak yielded the steepest lateralization slopes, indicating the largest sensitivity to
ITDs carried in the direct sound.
Subsequently, speech comprehension was examined using OLSA test sentences to ensure that
the onset enhancement does not compromise speech intelligibility. Speech comprehension was not
affected by onset enhancement. In conclusion, the onset enhancement approach has been shown to
improve localization performance in reverberant spaces using simulated listening with CIs. Since it did
not impair speech intelligibility it can be suggested as a method for improving localization with bilateral
CIs.
Kerber, S., and Seeber, B. U. (2013). Localization in reverberation with cochlear implants: predicting
performance from basic psychophysical measures. J Assoc Res Otolaryngol, 14(3): 379-392.
Monaghan, J. J. M., and Seeber, B. U. (2011). Exploring the benefit from enhancing envelope ITDs for
listening in reverberant environments, in Int. Conf. on Implantable Auditory Prostheses (Asilomar, CA), p.
246.
R38: AUDITORY MOTION PERCEPTION IN NORMAL HEARING LISTENERS
AND BILATERAL COCHLEAR IMPLANT USERS
Keng Moua, Heath G. Jones, Alan Kan, Ruth Y. Litovsky
University of Wisconsin-Madison, Madison, WI, USA
Bilateral cochlear implantation has led to improved sound localization ability compared to
unilateral cochlear implant (CI) use. However, localization performance is still much poorer than
that of normal hearing (NH) listeners. While localization ability of static sound sources has been
extensively studied in bilateral cochlear implant (BiCI) listeners, their ability to track and locate
the source of a moving sound is largely unknown. Moving sound sources are common in daily
life and understanding a BiCI listener’s ability to track moving sound sources may reveal
important information about how well they do in hearing moving sounds in everyday
environments. In this work, we assess BiCI listeners' ability to track the motion of a simulated
moving sound source, and whether they can distinguish a moving sound from a static sound.
We are interested in how BiCI listeners perceive moving sounds because there may, or may
not, be a difference in this ability compared to NH listeners. While good sensitivity to binaural
cues may be important for precise localization of static sounds, localization of a moving sound
may not be as heavily reliant on good binaural sensitivity.
The ability to locate a static sound, and track the motion of a moving sound source was
assessed in NH and BiCI listeners. Stimuli consisted of white noise tokens (150-6000 Hz) which
were played from 37 loudspeakers (5° intervals, -90° to +90° azimuth) and binaurally recorded
on a KEMAR manikin. Static sound sources were recorded from one of 19 loudspeakers (10°
intervals, -90° to +90° azimuth), while moving sounds were simulated by panning between all 37
loudspeakers. Moving sound sources ended at the same 19 locations as the static sources and
both were 500 ms, 1000 ms, or 2000 ms in duration. Moving sound sources consisted of
a combination of angular ranges from 10° to 40° in 10° intervals, and velocities of 5°/s, 10°/s,
20°/s, 40°/s, and 80°/s. Stimuli were presented in blocks according to their duration, where in
each block static and moving sounds of different angular ranges and velocities were interleaved.
The recorded stimuli were presented to NH participants via Sennheiser HD600 circumaural
headphones and direct audio input connection for BiCI listeners.
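For concreteness, the short Python sketch below computes the loudspeaker trajectory implied by a constant-velocity pan across the 5-degree speaker grid; all names and values are illustrative assumptions, not the authors' stimulus code.

import numpy as np

def trajectory(end_deg, velocity_dps, duration_s, rate_hz=100):
    # Angles over time for a source moving at velocity_dps, ending at end_deg.
    t = np.arange(0, duration_s, 1.0 / rate_hz)
    start_deg = end_deg - velocity_dps * duration_s   # angular range covered
    angles = start_deg + velocity_dps * t
    speakers = np.round((angles + 90.0) / 5.0).astype(int)  # speaker index 0..36
    return angles, np.clip(speakers, 0, 36)

# Example: 20 deg/s for 1 s covers a 20-degree range ending at +10 deg azimuth.
angles, idx = trajectory(end_deg=10.0, velocity_dps=20.0, duration_s=1.0)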
Results so far show that performance in sound localization of the end point of a moving
sound was comparable to a static sound for both NH and BiCI listeners. However, consistent
with previous work, localization performance was much poorer for BiCI listeners compared to NH
listeners. Localization of the starting point of the sound was also more accurate for NH listeners
compared to BiCI listeners. For BiCI listeners, there was noticeably higher confusion between
static and moving sounds, as well as poorer tracking of moving sounds such that the perceived
angular range of the moving sound did not correspond with the range of simulated motion.
These findings suggest that the poor localization performance of static sounds also appears to
affect the perception of moving sounds for BiCI listeners, which can affect their ability to
accurately track and locate moving sound sources in daily life.
Work supported by NIH-NIDCD (R01 DC003083 & R01 DC008365 to RYL) and NIH-NICHD
(P30HD03352 to the Waisman Center).
R39: EXTENT OF LATERALIZATION FOR PULSE TRAINS WITH LARGE
INTERAURAL TIME DIFFERENCES IN NORMAL-HEARING LISTENERS AND
BILATERAL COCHLEAR IMPLANT USERS
Regina Maria Baumgaertel, Mathias Dietz
Medizinische Physik and Cluster of Excellence Hearing4all, Universität Oldenburg, Oldenburg, DEU
For normal hearing listeners, sound localization in the frontal azimuthal half-plane is
primarily achieved by neural processing of interaural differences in level and arrival time. While
interaural level differences are to some extent also available to and used by bilateral cochlear
implant (CI) subjects, encoding of perceptually exploitable interaural time differences (ITDs) by
pulse timing is still a topic of ongoing research. This research is motivated by several studies
showing that CI subjects are able to exploit ITDs when presented with fully synchronized low
rate pulse trains.
In the first part of the study, the extent of lateralization for fixed ITDs of up to 3 ms was
measured in normal-hearing subjects. Stimuli were either unfiltered or 3-5 kHz bandpass filtered
click trains, the latter mimicking the perception of CI users, i.e., the absence of any low-frequency temporal fine-structure information. Results indicate that while unfiltered click trains
with 600 µs ITD were lateralized at the ear, filtered click-trains required approximately 1.4 ms
ITD for equally strong lateralization.
Under the assumption that the filtered click trains correctly mimic an average CI subject’s
percept, these results imply that even if ITD information is encoded through the pulse timing,
subjects will not perceive full lateralization with naturally occurring ITDs (< 650 µs). This
hypothesis was tested in the second part of this study using single electrode stimulation in
bilateral CI subjects. For the subjects tested so far, a change in lateralization percept with
changing ITD could only be measured at pulse rates lower than 200 pulses per second. On
average an ITD of 1.0 ms was required to lateralize the pulse train at the ear.
The results indicate that, if the speech coding allows for sufficiently low pulse rates, ITD
enhancement may be beneficial in future binaural cochlear implants to provide improved
localization performance and better spatial separation of sound sources, which would subsequently
result in better speech intelligibility in noise.
R40: THE EFFECTS OF SYLLABIC ENVELOPE CHARACTERISTICS AND
SYNCHRONIZED BILATERAL STIMULATION ON PRECEDENCE-BASED
SPEECH SEGREGATION
Shaikat Hossain1, Vahid Montazeri1, Alan Kan2, Matt Winn2, Peter Assmann1, Ruth Litovsky2
1 University of Texas at Dallas, Richardson, TX, USA
2 University of Wisconsin-Madison, Madison, WI, USA
The precedence effect (PE) is an auditory phenomenon that enables the perceptual dominance of
leading direct sound wave-fronts over lagging indirect reflections. In normal-hearing (NH) listeners, the
PE can aid in segregating sounds in reverberant environments by keeping target and masker spatially
distinct. Freyman et al. (1999) demonstrated that NH listeners were able to utilize the PE in the spatial
unmasking of speech from a competing talker. In preliminary work involving free-field testing, we found
that bilateral cochlear implant (BiCI) users using their clinical processors were unable to derive such a
benefit when presented with sentence-based stimuli. Part of this inability may be related to current
limitations in bilateral stimulation, namely the lack of temporal synchronization between processors, which may distort critical ITD cues. It is possible that temporal characteristics of speech
envelopes (particularly onsets) mediate the PE since echo thresholds of individual syllables can vary as
a function of their stimulus onset characteristics (specifically the envelope rise times). A syllabic level of
analysis may provide an effective means to model the detrimental effects of asynchronous stimulation on
the PE benefit. The present study sought 1) to investigate the contributions of the syllabic envelope to
PE-based speech segregation in NH and BiCI listeners and 2) to assess differences in performance
between synchronized and unsynchronized stimulation strategies.
Participants will include NH and post-lingually deafened BiCI adults. The stimuli chosen for this
experiment were monosyllabic words spoken by male and female talkers. Based on the speed of rise
time of the phoneme classes, 4 masker stimulus groups were designated: (1) fast envelope in both onset and offset consonants (e.g., "pick"), (2) fast onset only, (3) fast offset only, and (4) slow onset and offset
envelopes (e.g. “mall”). Head-related transfer functions were used to create a virtual auditory
environment. Stimuli will be presented over headphones for NH listeners and through a pair of
synchronized research processors (L-34) for BiCI users. The task will be open-set word recognition with two conditions: 1) the target (female talker) located at a virtual front loudspeaker
and the masker (male talker) co-located at the front (F_F) or 2) the target in front with the masker first
being presented at a virtual right loudspeaker (located at 60 degrees in the azimuthal plane) before being
added back to the front loudspeaker (F_RF) after a delay of either 8, 32, or 64 ms. The second condition
was designed to test whether precedence-induced suppression of the trailing sound could support
perceived spatial separation and hence spatial release from masking. SNR values of -8 and -4 dB were
chosen for the NH listeners and 0 and 4 dB for BiCI users based on pilot testing. BiCI users will be tested
using two n-of-m strategies derived from the Nucleus Matlab Toolbox, which include a simulation of the
Advanced Combination Encoder (ACE) strategy used in clinical processors (unsynchronized ACE) and a
novel bilateral version of ACE which uses electrode pairs chosen to match place of stimulation across
the ears and bilateral peak-picking to preserve interaural level differences (synchronized ACE). In order
to have the same number of electrodes to choose from in each strategy, the number of active electrodes
will be reduced to 16. In unsynchronized ACE, a delay of 2 ms will be introduced to simulate the worst
case processing delay between the ears. Preliminary results indicate that NH listeners showed benefits
in the F_RF configuration at delays of 8 and 32 ms as compared to the F_F condition at -8 dB SNR.
Masking release differed between words with the initial consonant /b/ (fast onset) and /w/ (slow onset),
with the former set being more effectively suppressed and subsequently leading to higher word
recognition scores for the target. Findings from this study will have important implications for the
development of effective bilateral stimulation strategies.
R41: MANIPULATING THE LOCALIZATION CUES FOR BILATERAL
COCHLEAR IMPLANT USERS
Christopher A. Brown1, Kate Helms Tillery2
1 University of Pittsburgh, Pittsburgh, PA, USA
2 Arizona State University, Tempe, AZ, USA
Bilateral cochlear implant (BCI) users have shown relatively poor performance on
localization tasks (Grantham, 2008), a result that is likely due in part to the fact that these users
receive intact interaural level difference (ILD) cues, but poorly represented interaural time
differences (ITDs). Many BCI users show a sigmoidal pattern of localization results, in which
accuracy declines as azimuth increases away from the midsagittal plane. The current study
examined the efficacy of a novel strategy designed to increase localization accuracy by
estimating the azimuthal location of a sound source in space using instantaneously computed
ITDs, and applying corresponding ILDs in real-time. Conversion functions were utilized in which
the applied ILDs were relatively small near midline (where accuracy is often already adequate)
and increased exponentially as the ITD increased. Preliminary localization results by BCI users
show that the exponential functions produced lower RMS error localization performance, and
localization functions that appear more linear.
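The Python sketch below illustrates one possible exponential ITD-to-ILD conversion function of the kind described; the constants (maximum ILD, maximum ITD, growth rate) and the exact functional form are illustrative assumptions rather than the study's parameters.

import numpy as np

def itd_to_ild(itd_us, max_ild_db=15.0, max_itd_us=700.0, growth=3.0):
    # Map an instantaneous ITD (microseconds) to an applied ILD (dB):
    # small near midline, growing exponentially toward the maximum ITD.
    x = np.clip(np.abs(itd_us) / max_itd_us, 0.0, 1.0)
    ild = max_ild_db * (np.expm1(growth * x) / np.expm1(growth))
    return np.sign(itd_us) * ild   # positive ILD favors the leading ear

for itd in (0, 100, 300, 700):     # microseconds
    print(itd, float(itd_to_ild(itd)))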
R42: AN OTICON MEDICAL BINAURAL CI PROTOTYPE CONCEPT
DEMONSTRATED IN HARDWARE
Bradford C. Backus1, Guillaume Tourell1, Jean-Claude Repetto1, Kamil Adiloglu2, Tobias Herzke2, Matthias Dietz3
1 Oticon Medical, Vallauris, FRA
2 HörTech gGmbH, Oldenburg, DEU
3 Medizinische Physik, Universität Oldenburg, Oldenburg, DEU
We present a combined hardware and software implementation of a binaural CI ‘concept’
system designed to exploit binaural processing for the purpose of enhancing a user’s
performance. The system is based on the idea of binaural channels and therefore always
addresses a pair of pitch-matched electrodes as a binaural channel. Physically, the system
consists of a binaural array with 20 electrodes on each side and stimulation hardware capable of
driving a subset of 15 ‘active pairs’ out of these 20 physical pairs via 8 independent current
sources. Electrode pairing is managed as part of a clinical ‘fitting’ and is achieved in the
hardware via special setup parameters. Any electrode can be paired with any other electrode
from the opposite side.
Processing is carried out by a software system connected to the hardware (see ‘A
Binaural CI Research Platform for Oticon Medical SP Implants Enabling ITD/ILD and Variable
Rate Processing’) and contains algorithms designed for improving speech intelligibility by using
novel binaural coding strategies (see ‘A binaural cochlear implant stimulation algorithm with
deliberate non-preservation of interaural cues’). An efficient transcutaneous data-transfer codec
capable of delivering the fine-time resolution needed for ITD processing while remaining within a
reasonable bandwidth budget (320 kbps) was developed.
Work supported by funding from the European Union’s Seventh Framework Programme
(FP7/2007-2013) under ABCIT grant agreement no. 304912.
R43: A BINAURAL CI RESEARCH PLATFORM FOR OTICON MEDICAL SP
IMPLANT USERS ENABLING ITD/ILD AND VARIABLE RATE PROCESSING
Bradford C. Backus1, Kamil Adiloglu2, Tobias Herzke2
1 Oticon Medical, Vallauris, FRA
2 HörTech gGmbH, Oldenburg, DEU
We present a portable, binaural, real-time research platform for Oticon Medical SP
generation cochlear implant users. The platform is capable of processing signals from 4
microphones simultaneously and producing synchronized binaural outputs capable of driving
two (bilaterally implanted) SP implants. The platform consists of hardware and software parts.
The hardware is responsible for: (1) digitizing the 4-channel input audio signals coming from two
ear-worn microphone systems and (2) generating the final electric outputs needed to drive the
two antenna coils. The software is responsible for processing the four audio signals and then
generating two synchronized electrodograms from these signals. The software includes a
flexible environment for the development of sound pre-processing (“speech processing”) and
stimulation strategies. The interface between hardware and software is fully bi-directional via
standard USB.
When the whole research platform is combined with Oticon Medical SP implants,
interaural electrode timing can be controlled to better than 10 µs accuracy. Hence, this new
platform is particularly well-suited to performing experiments related to ITD in real-time. The
platform also supports instantaneously variable stimulation rates and thereby enables
investigations such as the effect of changing the stimulation rate on pitch perception. In addition,
because software processing can be changed on the fly, researchers can use this platform to
study perceptual changes resulting from two different processing strategies acutely.
Work supported by funding from the European Union’s Seventh Framework Programme
(FP7/2007-2013) under ABCIT grant agreement no. 304912.
R44: A SIMPLE INVERSE VARIANCE ESTIMATOR MODEL CAN PREDICT
PERFORMANCE OF BILATERAL CI USERS AND PROVIDE AN
EXPLANATION FOR THE ‘SQUELCH EFFECT’
Bradford C. Backus
Oticon Medical, Vallauris, FRA
The increase in performance of bilateral CI users over monaural users is traditionally described in terms of head shadow, binaural summation, and the binaural squelch effect
(spatial release from masking). Measuring the speech reception thresholds in bilateral CI
patients can be used to quantify the benefits of each of these effects over monaural CIs.
Typically the head shadow effect is largest (advantage of 4-6 dB) followed by binaural
summation (1-3 dB) and the controversial squelch effect (0-2 dB).
Although these effects are well understood and often measured, a mathematical
framework for understanding them is not often offered. Here we introduce a theoretical simple
inverse variance weighted estimator processing model to help explain the effects, in SNR terms,
and test it using published data to see if the model can quantitatively account for measured
data.
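For readers who want the arithmetic, the following minimal Python sketch shows the core of an inverse-variance weighted combination and the binaural gains it predicts; the specific numbers are illustrative, not the published data the abstract refers to.

import numpy as np

def combined_variance(var_left, var_right):
    # Inverse-variance weighting: 1/var_comb = 1/var_L + 1/var_R
    return 1.0 / (1.0 / var_left + 1.0 / var_right)

def binaural_gain_db(var_left, var_right):
    # Predicted SNR benefit over the better ear alone, in dB.
    best_monaural = min(var_left, var_right)
    return 10.0 * np.log10(best_monaural / combined_variance(var_left, var_right))

# Two equally good ears give ~3 dB (summation-like); adding an ear with 4x the
# internal-noise variance (6 dB poorer) still gives ~1 dB (squelch-like).
print(binaural_gain_db(1.0, 1.0))   # ~3.01 dB
print(binaural_gain_db(1.0, 4.0))   # ~0.97 dB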
The model was able to use the head shadow information to make quantitatively accurate
(within +/- 1 dB) predictions of the magnitudes of the binaural and squelch effects observed in
patients. The inverse variance model proves to be a simple and powerful mathematical framework for understanding bilateral CI performance, and further suggests a model-based taxonomy that could unite and replace the long-standing head-shadow, binaural-summation, and squelch terminology with a single, more insightful framework.
Supported by funding from the European Union's Seventh Framework Programme (FP7/2007-2013) under ABCIT grant agreement no. 304912.
R45: COMPARISON OF INTERAURAL ELECTRODE PAIRING METHODS:
PITCH MATCHING, INTERAURAL TIME DIFFERENCE SENSITIVITY, AND
BINAURAL INTERACTION COMPONENT
Mathias Dietz, Hongmei Hu
Medizinische Physik and Cluster of Excellence Hearing4all, Universität Oldenburg, Oldenburg, DEU
Pairing left and right cochlear implant (CI) electrodes and stimulating them with the same
frequency band is expected to facilitate binaural functions such as binaural fusion, localization,
or spatial release from masking. Such pairing is expected to gain importance with increasing
bilateral implantation, and even more so if future CI processors provide more binaural cues than current systems. To date, however, the vast majority of bilateral subjects are simply provided with a generic frequency mapping; differences in implantation depth or local neural survival are not compensated for. In some cases pitch comparison methods are used, but the pitch percept has also been shown to change over the first months after implantation, questioning its general validity as a pairing method.
Here, testing three different methods in the same subjects, we were able to identify and
compare matched electrode pairs. Using direct stimulation in 7 bilaterally implanted MED-EL subjects, we reliably identified the pair with the largest binaural interaction component (BIC) of
auditory brainstem responses (ABR). Results from interaural pulse time difference (IPTD)
sensitivity and BIC indicate interaurally matched pairs separated by up to two electrodes, which
is quite remarkable considering the large spacing (approx. 2.2 mm) of a typical 12-electrode MED-EL array. These two methods also yielded similar matched pairs, suggesting that ABR-based electrode pairing, which requires no subjective responses, is also feasible. The pitch comparison method, by contrast, typically indicated that identical electrode numbers on both sides elicited the same pitch, and thus did not correlate with the other methods.
From these results we conclude that pitch is not an ideal pairing method for binaural
functioning, at least not for subjects that have already adapted to a generic frequency mapping.
This work was funded by the European Union under the Advancing Binaural Cochlear Implant
Technology (ABCIT) grant agreement (No. 304912).
R46: A BINAURAL COCHLEAR IMPLANT ALGORITHM FOR ROBUST
LOCATION CODING OF THE MOST DOMINANT SOUND SOURCE
Ben Williges, Mathias Dietz
Medizinische Physik and Cluster of Excellence Hearing4all, Universität Oldenburg, Oldenburg, DEU
For azimuthal sound source localization normal hearing listeners exploit interaural time
differences (ITD) in the temporal fine-structure (TFS) and in the temporal envelope as well as
interaural level differences (ILD). The most informative of these cues is the TFS ITD in the
dominance region (500-900 Hz). However, cochlear implant (CI) processing often discards TFS
information and even if TFS information is provided, CI listeners apparently cannot exploit it for
localization at pulse rates in the dominance region. The absence of this information likely results
in a less focused and less pronounced spatial representation of sound sources and further
impedes the ability to spatially segregate competing sources.
Because of this reduced perceptual availability of binaural cues, we hypothesize that the
often stated goal of binaural cue preservation may not be the optimal strategy for future binaural
CI pre-processing. The aim of providing all spatial information may be too ambitious in many
real-world situations. Instead, CI listeners may profit more from a robust coding of the dominant
sound source. Quasi-stationary and potentially enhanced interaural cues together with a high
interaural coherence are expected to be most beneficial, especially in complex listening
situations.
A new binaural processing algorithm specific for cochlear implants is presented which
aims at providing such binaural cues, together with an improved signal-to-noise ratio to also
provide optimal speech intelligibility. The 2+2 microphone based acoustic pre-processing
includes a direction-of-arrival estimator and a steering binaural beamformer. For the subsequent speech coding, any strategy can be used; however, sparse or low-rate strategies, such as envelope peak picking, are ideal. A subsequent azimuth-dependent mapping of ITD and ILD
provides a flexible framework for further improvements.
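A sketch of what such a cue-mapping stage might look like follows; the sine-law forms, the enhancement gain, and the maximum cue values are our illustrative assumptions, not the algorithm's actual parameters.

import math

def azimuth_to_cues(azimuth_deg, max_itd_us=700.0, max_ild_db=15.0, gain=1.5):
    # Re-encode a direction-of-arrival estimate as one quasi-stationary,
    # possibly enhanced, ITD/ILD pair applied coherently to both electrodograms.
    az = max(-90.0, min(90.0, azimuth_deg))
    itd = gain * max_itd_us * math.sin(math.radians(az))   # enhanced ITD (us)
    ild = gain * max_ild_db * math.sin(math.radians(az))   # enhanced ILD (dB)
    return itd, ild

print(azimuth_to_cues(30.0))   # roughly (525 us, 11.25 dB) toward the right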
Different settings of the algorithm are tested in speech intelligibility and localization tasks.
Results using a vocoder-like auralization of the electrodogram with normal hearing subjects
show improved localization accuracy at least at some reference angles. The increased speech
intelligibility can be attributed to the binaural beamformer.
This work was funded by the European Union under the Advancing Binaural Cochlear Implant
Technology (ABCIT) grant agreement (No. 304912).
R47: ADVANCING BINAURAL COCHLEAR IMPLANT TECHNOLOGY - ABCIT
David McAlpine1, Jonathan Laudanski2, Mathias Dietz3, Torsten Marquardt1, Rainer Huber4, Volker Hohmann3
1 UCL Ear Institute, London, GBR
2 Oticon Medical, Nice, FRA
3 University of Oldenburg, Oldenburg, DEU
4 Hörtech, Oldenburg, DEU
The ability to process interaural time differences (ITDs) conveyed in low-frequency
sounds is critical to sound-source localization and contributes to effective communication in
noisy backgrounds - so-called ‘cocktail-party’ listening. The advent of bilateral cochlear
implantation (CI) has fuelled research endeavours into how sensitivity to ITDs might be restored
in electrical hearing - although bilateral benefits are evident with bilateral implants, true binaural
hearing remains elusive. In 2012, a consortium of researchers and commercial partners in the
UK, France and Germany secured €4M from the European Union’s ‘Framework 7’ research
fund to develop a programme of research aimed at progressing bilateral cochlear implantation
towards true binaural performance. At its outset, this project - Advancing Binaural Cochlear
Implant Technology (ABCIT) - had 5 main objectives designed to move the outcome of the
project beyond the state-of-the-art. These were i) to develop the ability to exploit the full range of binaural hearing cues in CIs, and to gear stimulation strategies towards enhancing binaural information, ii) to develop a research platform, including a speech pre-processing module, to enhance the development of bilateral CI processors, iii) to adapt currently successful hearing-aid (pre-processing) algorithms to meet the special demands of implants, iv) to develop the means of measuring, from the auditory brain, signals that will provide an objective means of assessing binaural performance in CI users, and v) to develop a low-power, wireless audio link
between the devices at both ears so as to enhance users’ access to binaural information. To
date, this project has been an outstanding success, and with the project due to end in August
2015, the partners are now starting to disseminate more broadly the outcomes of ABCIT in
patents, conference presentations, and research papers. Why has the project been so
successful? First, the scope of the funding programme - requiring commercial and academic
partners - provided a means of cross-fertilizing scientific disciplines and technical expertise
within a single research project. Second, the work-packages, milestones and deliverables within
the project provided a core around which research and technical developments were
progressed, whilst remaining sufficiently flexible to be modified as necessary (or in response to
opportunities). Third, the regular pattern of partners meeting and working together bred an
atmosphere of trust and openness, tempered by the clear view that individual partners also had
their own drivers and requirements. The outcomes of the project, many reported at this meeting,
have been outstanding and, to us, provide a lesson for how implant companies and academic
partners might work together in the future (and avoid the fate of the pharmaceutical industry) to
advance our research ideas rapidly into new technologies and interventions, whilst maintaining
academic independence.
R48: SUMMATION AND INHIBITION FROM PRECEDING SUB-THRESHOLD
PULSES IN A PSYCHOPHYSICAL THRESHOLD TASK: EFFECTS OF
POLARITY, PULSE SHAPE, AND TIMING
Nicholas R Haywood, Jaime A Undurraga, Torsten Marquardt, David McAlpine
UCL Ear Institute, London, GBR
INTRODUCTION. Using biphasic stimuli, Eddington et al. (1994) measured thresholds for
detecting a 'reference pulse' (RP) presented with a preceding sub-threshold 'pulse' (PP). The
leading phase of the RP was set as cathodic. When the final phase of the PP was cathodic, PPs
occurring up to 400 ms before the RP reduced detection threshold (compared to threshold in the
absence of a PP). This was consistent with an auditory nerve (AN) model that predicted
depolarization by the final cathodic phase of the PP (summation). A PP with an anodic final
phase was predicted to increase RP threshold (for PP-RP time delays up to ~200 ms), as the
anodic phase of the PP was expected to pre-hyperpolarize the AN. However, this effect was not
observed. The current experiment aimed to extend these findings using a sequence of
interleaved pulses (PP-RP-PP-RP etc.) where all PP and RP polarity arrangements were
tested using biphasic and triphasic stimuli. The study aimed to further develop understanding of
how pulse polarity interactions influence the electrical stimulation of the AN.
METHOD. Adult unilateral Med-El users were tested using the RIB II research platform. A
620 ms pulse train was presented to the most apical electrode available. The RP rate was fixed
at 100 pulses-per-second (pps) (i.e., 200 pps including the interleaved PPs). A 2I-2AFC
staircase procedure was used to estimate RP detection threshold. The target sequence
comprised the pulse train, and the reference sequence comprised silence only. Runs began at a
supra-threshold RP amplitude, and RP amplitude was reduced adaptively. Initial runs measured
RP threshold in the absence of PPs, and PP amplitude was set from this level. PPs occurred
between 0.25 and 4 ms before each RP. Both PP and RP polarities were varied in four
combinations. Where possible, thresholds for triphasic and biphasic arrangements were
obtained from the same subjects.
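As a generic illustration of this kind of adaptive procedure, the Python sketch below implements a simple staircase; the 2-down/1-up rule, step size, and observer model are our assumptions for illustration, not necessarily the authors' exact tracking rule.

import random

def staircase(simulate_trial, start_amp, step_db=1.0, n_reversals=8):
    # Track a threshold adaptively; simulate_trial(amp) returns True when correct.
    amp, direction, n_correct = start_amp, -1, 0
    reversals = []
    while len(reversals) < n_reversals:
        if simulate_trial(amp):
            n_correct += 1
            if n_correct < 2:            # 2-down rule: two correct to step down
                continue
            n_correct, new_dir = 0, -1
        else:
            n_correct, new_dir = 0, +1   # 1-up rule: any error steps up
        if new_dir != direction:
            reversals.append(amp)        # record a reversal of track direction
        direction = new_dir
        amp *= 10 ** (new_dir * step_db / 20.0)   # step amplitude in dB
    return sum(reversals) / len(reversals)        # threshold estimate

# Toy observer whose proportion correct grows with amplitude (illustrative only).
estimate = staircase(lambda a: random.random() < min(0.99, 0.5 + 0.5 * min(a, 1.0)),
                     start_amp=2.0)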
RESULTS. Preliminary data suggest that, for an arrangement in which PPs and RPs
were cathodic leading (i.e., middle main phase anodic), PPs reduced thresholds in a manner
consistent with the time-course of summation observed previously. Data concerning alternative
polarity configurations have yet to be collected. Results will be compared to previous outcomes,
and the effects of PPs will be considered in terms of the activation of potassium (i.e., Kv1)
channels of AN fibers. Activation of such low-voltage channels should suppress AN firing. Any
such effects could be adapted and applied to novel stimulation strategies to reduce the spread
of electrical stimulation.
REFERENCE. Eddington, Donald K., V. Noel, W. Rabinowitz, M. Svirsky, J. Tierney, and
M. Zissman. Speech Processors for Auditory Prostheses, Eighth Quarterly Progress Report,
1994.
ACKNOWLEDGMENT. The research leading to these results has received funding from
the European Union's Seventh Framework Programme (FP7/2007-2013) under ABCIT grant
agreement number 304912.
R49: WHY IS CURRENT LEVEL DISCRIMINATION WORSE AT HIGHER
STIMULATION RATES?
Mahan Azadpour1, Mario A. Svirsky1, Colette M. McKay2
1 Department of Otolaryngology, NYU School of Medicine, New York, NY, USA
2 Bionics Institute, and Department of Medical Bionics, The University of Melbourne, Melbourne, AUS
Cochlear implant users’ discrimination of changes in the current level of a pulse train
becomes poorer as the rate of the pulse train is increased. The level discrimination limen (LDL) of a pulse train, according to signal detection theory, is proportional to the ratio of two factors: the loudness variability, or internal noise, and the change in loudness produced by a given change in current. The latter (the denominator) depends on the slope of the current-to-loudness function, which may become shallower as the rate of the pulse train is increased. This might at least partially explain the larger (worse) LDLs at higher
pulse rates. However, it is not clear whether loudness variability changes with pulse rate, which
would contribute to the effects of rate on LDLs. We hypothesized that loudness perception is
less variable at lower stimulation rates than at higher rates. This hypothesis was inspired by the
fact that at lower rates, neural responses phase lock more precisely to the individual pulses and
are less stochastic.
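In signal-detection terms, the limen scales with internal noise divided by the loudness-function slope; a short worked sketch in Python (illustrative values only, not measured data) makes the arithmetic explicit.

def level_discrimination_limen(loudness_slope, internal_noise_sd, d_prime=1.0):
    # LDL (in current units) = d' * sigma / (dLoudness/dCurrent) at threshold.
    return d_prime * internal_noise_sd / loudness_slope

# A shallower loudness function and/or more loudness variability both enlarge
# the limen, the two effects the abstract aims to separate.
print(level_discrimination_limen(loudness_slope=2.0, internal_noise_sd=1.0))  # 0.5
print(level_discrimination_limen(loudness_slope=1.0, internal_noise_sd=1.5))  # 1.5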
Loudness variability was compared between high- and low-rate pulse trains using a
psychophysical approach based on signal detection theory. Trains of 500 and 3000 pulses-per-second presented on a test electrode were loudness balanced to obtain reference stimuli. LDLs
were obtained in 2-interval, 3-down/1-up adaptive procedures in two conditions: 1) the same rate for the variable and reference stimuli; 2) for each reference rate, the variable stimulus at the other
rate. The subjects’ task was to indicate the louder interval. The difference between LDLs
obtained in conditions 1 and 2 using the same variable rate depended only on the relative magnitudes of loudness variability at the two stimulation rates. The results were consistent with
greater loudness variability for the higher-rate pulse train. The experiments were performed at
two reference levels. The current differences between louder and softer references suggested
that current-to-loudness slopes were shallower for the higher rate pulse train.
The current results suggest that both current-to-loudness function slopes and loudness
variability (to a lesser degree) could contribute to the larger LDLs for higher pulse rates.
Shallower loudness functions can cause poorer LDLs, but should not affect discrimination of
speech envelope variations by implant users: the acoustic-to-electric mapping in the speech
processor compensates for differences in loudness functions. However, greater loudness
variability is expected to impair discrimination of changes in speech envelope. Loudness
variability or internal noise is inversely related to the number of discriminable steps of both
electric and acoustic levels across the dynamic range.
This study was supported by a grant from the National Organization for Hearing Research
(NOHR). The Bionics Institute acknowledges the support it receives from the Victorian
Government through its Operational Infrastructure Support Program.
R50: RELATIONSHIP BETWEEN SPECTRAL MODULATION DEPTH AND
SPECTRAL RIPPLE DISCRIMINATION IN NORMAL HEARING AND
COCHLEAR IMPLANTED LISTENERS
Kavita Dedhia1,2,3, Kaibao Nie1,2, Ward R Drennan1,2, Jay T Rubinstein1,2,3, David L Horn1,2,3
1 University of Washington Medical Center, Seattle, WA, USA
2 Virginia Merrill Bloedel Hearing Research Center, Seattle, WA, USA
3 Seattle Children's Hospital, Seattle, WA, USA
Background: Clinically, spectral ripple discrimination (SRD) is strongly correlated with
speech perception in adult CI users as well as with measures of spectral resolution (Henry and
Turner 2005; Won et al., 2011; Jones et al., 2013; Drennan et al., 2014). In this task, listeners
discriminate between two spectral ripples with the phases of their spectral envelope inverted.
SRD is affected both by spectral resolution as well as sensitivity to intensity (Anderson et al.,
2011, 2012). One way to separate these factors is to compute a spectral modulation transfer
function (SMTF). SMTFs can be determined in two ways: 1) by varying ripple depth for a range of fixed ripple densities, and 2) by varying ripple density for a range of fixed ripple depths. Using
method 1, Supin et al. (1999) showed that the SMTF, showing ripple depth as a function of
ripple density, was an exponential function. However, the SMTF for method 2, showing ripple
density as a function of ripple depth, has not been explored. In this study, we compared how
SRD thresholds varied as a function of ripple depth in NH and CI adult listeners and whether the
mean and individual data could be fit to a logarithmic function. It was hypothesized that the x-intercept would be similar between groups, whereas the SRD threshold at 2 dB above the x-intercept (the 2 dB "up" point) would be much higher for NH than CI listeners.
Methods: Participants included NH listeners and unilateral, long-term CI listeners.
Listeners were presented with trials of 3 spectrally rippled noises in which the starting phase of
the ripple envelope for one noise was shifted 90 degrees from the other two. An adaptive 3AFC
task was used to estimate SRD threshold in ripples per octave at 5 ripple depths: 5, 10, 13, 20,
and 30 dB.
Results: Mean and individual NH data (N=10) fit reasonably well to a logarithmic function.
Preliminary CI results (N=3) suggest that the logarithmic SMTF is significantly flatter and slightly
x-axis right-shifted compared to NH data.
Conclusions: These results suggest that SRD thresholds improve with ripple depth following a logarithmic SMTF. This function produces two parameters: A and b. A is the
x-intercept and probably reflects both spectral and non-spectral factors whereas b is related to
the 2 dB up point and reflects mainly spectral resolution. These findings have implications for
future use of SRD testing to determine spectral resolution in a variety of clinical populations.
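A minimal Python sketch of fitting such a logarithmic SMTF follows, with made-up NH-like data and an assumed parameterization T(d) = b*ln(d/A), so that A is the x-intercept; none of the numbers are the study's results.

import numpy as np
from scipy.optimize import curve_fit

def smtf(depth_db, A, b):
    # SRD threshold (ripples/octave) as a logarithmic function of ripple depth.
    return b * np.log(depth_db / A)

depths = np.array([5.0, 10.0, 13.0, 20.0, 30.0])   # tested ripple depths (dB)
thresholds = np.array([0.6, 1.9, 2.4, 3.2, 3.9])   # illustrative thresholds

(A, b), _ = curve_fit(smtf, depths, thresholds, p0=[3.0, 1.5],
                      bounds=([0.1, 0.0], [20.0, 10.0]))
print(f"x-intercept A = {A:.2f} dB; 2 dB up point = {smtf(A + 2.0, A, b):.2f} ripples/octave")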
Funding: K23DC013055, P30DC04661, R01DC010148, Gifts from Verne J. Wilkins and Motors
Northwest
R51: INFLUENCE OF THE RECORDING ELECTRODE ON THE ECAP
THRESHOLD USING A NOVEL FINE-GRAIN RECORDING PARADIGM
Lutz Gaertner1, Andreas Buechner1, Thomas Lenarz1, Stefan Strahl2, Konrad Schwarz2, Philipp Spitzer2
1 Hannover Medical School, Hannover, DEU
2 MED-EL HQ, Innsbruck, AUT
Measuring the electrically evoked compound action potential (ECAP) has become an
important tool for verifying the electrode-nerve interface as well as establishing a basis for a
map to program the speech processor. In a standard clinical setup, ECAPs are recorded via an
electrode being proximate to the stimulation electrode and the threshold of the ECAP response
is determined.
We investigated the influence of the distance between the stimulation and recording
electrode on the ECAP threshold in 35 cochlear implant (CI) users (39 implants, all MED-EL
devices). For each of the 12 contacts of the electrode array, amplitude growth functions (AGF)
were recorded using all 11 unstimulated electrode contacts until the loudest acceptable
presentation level (LAPL), resulting in 132 AGFs per CI. The so-called fine-grain recording paradigm [1] was used, in which the stimulation intensity is increased in quasi-continuous steps and each current level is measured as an individual single trace. These single traces are combined with a moving-average filter, improving the signal-to-noise ratio without the need to repeat and later average recordings at identical stimulation levels. However, as AGFs usually
show the shape of a sigmoidal curve, we decided to use this a-priori knowledge and alternatively fitted sigmoidal functions to all individual data points obtained from the subjects, instead of using the moving-average approach as a noise-reduction method. The intention was to avoid the smearing introduced by the moving-average filter while obtaining a close fit from the more realistic sigmoidal fitting function, yielding a highly reliable AGF estimate. The high resolution of the measured AGF, particularly in the neighborhood of the ECAP threshold, allowed the ECAP threshold to be derived directly from the fitted sigmoid parameters.
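A minimal Python sketch of the sigmoidal-fit idea follows, using synthetic data; the 10%-of-saturation threshold criterion and all parameter values are our assumptions for illustration, not necessarily the authors' definition.

import numpy as np
from scipy.optimize import curve_fit

def sigmoid(level, amp_max, midpoint, slope):
    # Sigmoidal amplitude growth function (AGF) model.
    return amp_max / (1.0 + np.exp(-(level - midpoint) / slope))

# Illustrative fine-grain data: quasi-continuous levels, one trace per level.
levels = np.linspace(100, 800, 60)                 # stimulation level (cu)
rng = np.random.default_rng(0)
amps = sigmoid(levels, 600.0, 450.0, 60.0) + rng.normal(0, 20, levels.size)

(amp_max, mid, slope), _ = curve_fit(sigmoid, levels, amps,
                                     p0=[500.0, 400.0, 50.0])
thr = mid + slope * np.log(0.1 / 0.9)              # level at 10% of amp_max
print(f"ECAP threshold estimate: {thr:.0f} cu")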
We observed a decrease of the maximal ECAP amplitude with increasing distance
between the stimulation and recording electrode, with a detectable ECAP being normally
present even at a distance of 11 electrode contacts. Analyzing AGFs where the LAPL was
above the ECAP threshold, we could not find any significant effect of the distance between the
stimulation and recording electrode on the ECAP threshold determined from the sigmoidal
model.
Our findings indicate that the determination of the ECAP threshold is invariant to the
choice of the recording electrode in cases where the AGF can be recorded well above the
ECAP threshold.
[1] Gärtner L., Lenarz T., Büchner A. (2014): A novel ECAP recording paradigm to acquire fine-grain growth functions. Proceedings of the 13th International Conference on Cochlear Implants
and Other Implantable Auditory Technologies, Munich, p. 1076.
R52: SOUND QUALITY OF MONOPOLAR AND PARTIAL TRIPOLAR
STIMULATION AS FUNCTION OF PLACE AND STIMULATION RATE
Natalia Stupak, David M. Landsberger
New York University School of Medicine, New York, NY, USA
Multiple previous studies have investigated partial tripolar (pTP) stimulation as a tool to
reduce the spread of neural excitation from a single electrode in a cochlear implant (CI). One
publication with only 6 relevant data points (Landsberger et al., 2012) as well as a number of
anecdotal reports have suggested that current focused single electrode pulse trains can
produce higher pitched percepts than monopolar (MP) stimulation on the same electrode.
In the present experiment, the effect of focusing on sound quality was systematically
investigated as a function of electrode location and stimulation rate. Subjects with Advanced
Bionics CII or HiRes90K devices and a HiFocus 1J electrode were asked to scale the sound
quality of single electrode pulse trains in terms of how “Clean”, “Noisy”, “High”, “Musical”, and
“Annoying” they sounded. Pulse trains were presented in either MP or pTP modes on most
electrodes between 2 and 14 at stimulation rates of 100, 150, 200, 400, or 1500 pulses per
second.
Preliminary results suggest that the perceptual differences between MP and pTP stimuli
are highly variable. While the sound quality is often described similarly for both stimulation
modes, there seems to be a bias towards pTP stimulation sounding more “Clean”, “High”, and
“Musical” while there is a bias towards MP stimulation sounding more “Noisy” and “Annoying”.
No systematic effect of rate or place has been observed.
Support provided by the NIH/NIDCD (R01 DC012152, PI: Landsberger) and Advanced Bionics.
R53: PITCH RANKING WITH FOCUSED AND UNFOCUSED VIRTUAL
CHANNEL CONFIGURATIONS
Monica Padilla, Natalia Stupak, David M Landsberger
New York University School of Medicine, New York, NY, USA
Virtual channels created by simultaneous stimulation on multiple electrodes can increase
the number of effective sites of stimulation beyond the number of implanted electrodes. The
most common virtual channel configuration (the Monopolar Virtual Channel or MPVC) is
generated by simultaneous monopolar (MP) stimulation of two adjacent electrodes. Current
focused virtual channels such as the Quadrupolar Virtual Channel (QPVC; Landsberger and
Srinivasan, 2009) and the Virtual Tripole (VTP; Padilla and Landsberger, CIAP 2013) can
increase the number of sites of stimulation similarly to a MPVC while also reducing spread of
excitation and channel interactions.
For proper place pitch coding, it is important that virtual channels steered across the
electrode array provide tonotopically ordered stimulation. Because MPVCs and VTPs are
designed to be spectrally symmetric, tonotopic order should be preserved across the array.
However, QPVCs were not designed to be spectrally symmetric, and therefore there may be
violations of tonotopic order (and as a result, pitch reversals) across the array. Specifically,
because the grounding electrodes used to reduce current spread in QPVCs are not steered,
there may be pitch reversals for stimulation sites close to physical electrodes.
The present experiment explored pitch discrimination and potential reversals across
cochlear locations with Advanced Bionics HiFocus 1J users using MPVC, QPVC, and VTP
stimuli. Loudness-balanced fixed-rate pulse trains presented at electrode sites between 4.5 and
7.5 were pitch ranked relative to stimulation on electrode 6. By pitch matching sites of
stimulation near a physical electrode, we can detect any potential pitch shifts across the
physical sites using focused virtual channels.
Preliminary results with 6 subjects suggest that pitches with VTP stimulation are similar to
pitches with MPVC stimulation. However, with QPVC stimulation, the corresponding pitch is
highly variable. For some subjects, the pitch of QPVC stimulation is very similar to MPVC and
VTP stimulation while for other subjects, there are large pitch shifts.
In conclusion, although a QPVC signal processing strategy may be as beneficial as a VTP strategy for some subjects, it may cause large pitch shifts for other subjects across the
physical electrode boundaries. VTP would be a better stimulation mode to implement in a
current focused virtual channel strategy.
Support provided by the NIH/NIDCD (R01 DC012152, PI: Landsberger) and Advanced Bionics.
R54: AUDITORY NEUROPATHY SPECTRUM DISORDER (ANSD): ELECTRIC
AND ACOUSTIC AUDITORY FUNCTION
Rene Headrick Gifford, Sterling W Sheffield, Alexandra Key, George B Wanna, Robert F
Labadie, Linda J Hood
Vanderbilt University, Nashville, TN, USA
Background: Auditory neuropathy spectrum disorder (ANSD) has been characterized historically
by abnormal temporal resolution with frequency resolution being affected to a lesser degree (e.g., Zeng
et al., 1999; Rance et al, 2004; but see Narne, 2013). In the absence of neural synchrony, cochlear
implantation is an effective intervention for individuals with ANSD with many recipients achieving
outcomes comparable to those with SNHL. Despite its effectiveness, many individuals with ANSD are
unilaterally implanted, as audiometric thresholds tend to be better than those of conventional candidates.
Bimodal hearing, however, could prove problematic for individuals with ANSD as asynchronous acoustic
hearing could negatively impact the more synchronous electric signal.
Purpose: To understand how ANSD-based acoustic hearing may either complement or negatively
impact electric hearing, we assessed temporal and frequency resolution as well as speech
understanding in quiet and noise for adult cochlear implant (CI) recipients with ANSD (n = 3) and SNHL
(n = 4) allowing for both a within-subjects and between-subjects comparison of electric and acoustic
hearing. Our primary hypotheses were 1) temporal resolution in the non-CI ear will be inversely related to
bimodal benefit for individuals with ANSD, and 2) spectral resolution in the non-CI ear will be inversely
related to bimodal benefit for individuals with SNHL.
Experiment: All participants were unilaterally implanted and had aidable acoustic hearing in the
non-CI ear. Estimates of temporal and frequency resolution were obtained with standard psychophysical
threshold measures as well as a language-based task. Temporal resolution was assessed with gap
detection as well as syllable identification varying in voice onset time (VOT). Frequency resolution was
assessed via spectral modulation detection (SMD) as well as consonant recognition. Speech
understanding was assessed using the adult CI minimum speech test battery (MSTB, 2011).
Psychophysical thresholds were obtained individually in the acoustic and electric conditions and speech
understanding was assessed in the acoustic, electric, and bimodal conditions.
Results: There was a significant difference between the ANSD and SNHL groups on all measures
in the acoustic, electric, and bimodal hearing conditions. This was not entirely unexpected given that
previous research has demonstrated ANSD-related deficits in both temporal and frequency resolution;
however, frequency resolution has not been considered a primary deficit as temporal processing shows
greater and more universal deficits. For the ANSD group, the CI ear performed significantly better on all
measures, even for frequency resolution despite the fact that the acoustic-hearing ears had relatively low
(i.e. good) audiometric thresholds. For the SNHL group, there was no difference across acoustic and
electric hearing for gap detection and SMD, though speech understanding was significantly better with
the CI both in quiet and noise. Despite having significantly poorer temporal and frequency resolution in
the non-CI ear, the ANSD group did not exhibit a negative bimodal interaction for speech understanding
in quiet or in noise. Conclusion: Individuals with ANSD exhibited significantly poorer temporal and
spectral envelope processing in both their acoustic and electric hearing ears as compared to SNHL CI
recipients. Though the acoustic-hearing ear exhibited significantly poorer resolution both in the time and
frequency domains, the addition of acoustic hearing in the bimodal configuration neither impaired nor enhanced speech understanding in quiet or noise. No correlation was noted between psychophysical
measures and bimodal benefit for the ANSD group given the small sample size, universally poor
temporal and frequency resolution in the non-CI ear, and the fact that none showed bimodal benefit. On
the other hand, even with just 4 SNHL subjects, there was a consistent relationship between spectral
resolution and bimodal benefit. We will continue recruitment of additional subjects and experimentation in
both the acoustic- and electric-hearing ears to further investigate the relationship between temporal and
frequency resolution for adults with ANSD.
R55: REDUCING LOUDNESS CUES IN MODULATION DETECTION
EXPERIMENTS
Sara I. Duran, Zachary M. Smith
Research & Technology Labs, Cochlear Ltd., Centennial, CO, USA
Most cochlear implants (CIs) encode the temporal envelope information of sounds by
amplitude-modulating a series of biphasic pulses presented to each channel. Increased
sensitivity to these fluctuations may allow listeners to extract more information from the electrical
signal. For this reason, listeners’ modulation detection sensitivity is often used as a
psychophysical predictor of speech recognition in CI listeners. This sensitivity is typically
quantified using the minimum modulation depth a listener is able to reliably detect.
In traditional modulation detection threshold (MDT) measurement paradigms, listeners are
asked to discriminate a modulated target stimulus, across a range of modulation depths, from
unmodulated reference stimuli. These stimuli are often constructed using methods that
introduce unintended loudness cues. For the target stimuli, amplitude modulation is typically
applied with respect to keeping either the mean or peak charge constant. Our results show that
when the target is modulated with mean charge constancy, loudness increases as modulation
depth increases whereas when the target is modulated with peak charge constancy, loudness
decreases as modulation depth increases. As the loudness of the target varies from trial to trial
across modulation depth, the loudness of the unmodulated reference remains fixed. If subjects
use the loudness differences between target and reference to perform the MDT task, then their
thresholds may be more dependent on loudness sensitivity than modulation sensitivity per se.
This work proposes a new method for measuring MDT that aims to reduce unintended
loudness cues by using an alternative reference stimulus. In the new method, reference stimuli
have random modulations rather than no modulations as in traditional MDT paradigms. Here,
MDTs were measured using an adaptive task that simultaneously changes the modulation
depths of both target and reference stimuli from trial to trial. All stimuli were modulated with
respect to maintaining constant peak amplitude, thereby limiting the maximum loudness across
different modulation depths. For each modulation depth, the distribution of amplitudes was
matched for the sinusoidally amplitude-modulated target and the randomly amplitude-modulated
reference stimuli.
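One simple way to realize the matched-distribution constraint is sketched below in Python; the permutation approach is our illustration of one way to match amplitude distributions, not necessarily the authors' method, and all parameter values are assumptions.

import numpy as np

def sam_pulse_amplitudes(n_pulses, pulse_rate_hz, fm_hz, depth, peak_amp):
    # Per-pulse amplitudes of a sinusoidally modulated train, scaled so the
    # peak amplitude is fixed regardless of modulation depth.
    t = np.arange(n_pulses) / pulse_rate_hz
    env = 1.0 + depth * np.sin(2 * np.pi * fm_hz * t)
    return peak_amp * env / env.max()

rng = np.random.default_rng(1)
target = sam_pulse_amplitudes(500, 1000, 10, depth=0.2, peak_amp=200.0)
# Reference reuses the same per-pulse amplitudes in shuffled order, so target
# and reference share an amplitude distribution but differ in regularity.
reference = rng.permutation(target)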
Preliminary results suggest that this approach minimizes undesirable loudness cues
within each trial and may therefore provide a more accurate measure of listeners’ modulation
sensitivity.
R56: THE EFFECT OF RECOMMENDED MAPPING APPROACHES ON
SPEECH PERCEPTION AND PSYCHOPHYSICAL CAPABILITIES IN
COCHLEAR IMPLANT RECIPIENTS
Ward R Drennan, Nancy E McIntosh, Wendy S Parkinson
University of Washington, Department of Otolaryngology, Seattle, WA, USA
The effects of different manufacturer-recommended methods of setting electric threshold
(T) levels were investigated in cochlear implant (CI) recipients of Advanced Bionics, Cochlear
Ltd., and MED-EL devices. Previous work has shown that psychophysical measures are more
sensitive to processing changes than speech tests (Drennan et al., 2010). This study evaluates
the extent to which speech and non-speech tests show the effects of three commonly used
methods for setting T levels. Three conditions were tested acutely: 1) T levels set exactly (TSet);
2) T levels set at 10% of maximum comfort levels (T10); 3) T levels set at zero (T0). Approaches
2 and 3 are recommended by some CI manufacturers to improve mapping effectiveness,
efficiency, or both. Listeners were tested with AzBio sentences in quiet, adaptive spondee
perception in noise (SRTs) (Turner et al., 2004), spectral ripple discrimination using adaptive
and constant stimuli methods (Drennan et al., 2014), spectral ripple detection (Saoji et al. 2009),
and modulation detection thresholds (MDTs) for 10- and 100-Hz modulation (Won et al., 2011).
All stimuli were presented at 65 dBA, a comfortable listening level at or slightly above that of conversational speech. AzBio performance in quiet in the TSet condition was, on average, 4%
better than in the T0 condition (N = 11, paired t-test p = 0.025). Ceiling effects might have
minimized this difference. Neither the spectral ripple test scores nor the SRT scores differed
significantly among conditions. MDTs were about 4 dB worse in the TSet condition compared to
the T10 condition (p = 0.021 for 10 Hz, p = 0.003 for 100 Hz). This result was most likely
produced by an increase in the electrical modulation depth with lower T settings. MDTs only trended worse in the TSet condition compared to the T0 condition (p = 0.051 for 10 Hz; p = 0.235 for 100 Hz). If T levels are set too low, important speech information might
become inaudible. Stimuli presented at lower levels might yield different levels of performance
among conditions, and the differences among conditions might be larger. Further study is being
undertaken to determine the effect of level on performance with these recommended T settings
using stimuli at 50 dBA in addition to 65 dBA.
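The proposed mechanism, a larger delivered electrical modulation depth when T is set lower, can be illustrated with a toy linear acoustic-to-electric map (Python; the function name and all numbers are hypothetical):

```python
import numpy as np

def to_electric(a, T, M):
    """Map a normalized acoustic envelope value a in [0, 1] linearly onto
    the electrical dynamic range between T and M (arbitrary current units)."""
    return T + (M - T) * a

M = 200                      # hypothetical comfort level
a_peak, a_trough = 0.6, 0.4  # a fixed acoustic modulation around mid-range
for T in (100, 20, 0):       # TSet-like, T10 (10% of M), T0
    e_hi = to_electric(a_peak, T, M)
    e_lo = to_electric(a_trough, T, M)
    print(f"T={T:3d}: electrical modulation depth = "
          f"{20 * np.log10(e_hi / e_lo):.2f} dB")
```

The same acoustic modulation spans a larger electrical range, and so a greater modulation depth, as T decreases.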
Supported by NIH R01-DC010148 and P30-DC04661.
R57: LOUDNESS AND PITCH PERCEPTION USING DYNAMICALLY
COMPENSATED VIRTUAL CHANNELS
Waldo Nogueira1, Leonid Litvak2, Amy Stein2, Chen Chen2, David M. Landsberger3, Andreas Büchner1
1 Medical University Hannover, Cluster of Excellence “Hearing4all”, Hannover, Germany
2 Advanced Bionics LLC, Valencia, CA, USA
3 New York University School of Medicine, New York, NY, USA
Power consumption is important for the development of smaller cochlear implant (CI) speech processors. Simultaneous electrode stimulation may improve power efficiency by minimizing the required current applied to a given electrode. To create a more power-efficient virtual channel, we developed the Dynamically Compensated Virtual Channel (DCVC) using four adjacent electrodes. The two central electrodes are current steered using the coefficient α (0 ≤ α ≤ 1), whereas the two flanking electrodes are used to focus/unfocus the stimulation with the coefficient σ (−1 ≤ σ ≤ 1). Specifically, for a stimulation current I, the electrodes (ordered from apical to basal) provide the following currents: σαI, αI, (1−α)I, σ(1−α)I. With increasing values of σ, power can be saved at the potential expense of generating broader electric fields. Additionally, by reshaping the electric fields, it might also alter place pitch coding.
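A minimal sketch of this current distribution (Python; the function name is ours, and the signs and apical-to-basal ordering follow the formula as reconstructed above, so it should be read as illustrative rather than as the exact implementation):

```python
def dcvc_currents(alpha, sigma, I=1.0):
    """Per-electrode currents for a DCVC on four adjacent electrodes,
    ordered apical to basal. alpha steers between the two central
    electrodes; sigma scales the flanking electrodes (negative sigma
    gives opposite-polarity flankers, i.e., a focused field)."""
    return [sigma * alpha * I,          # apical flanking electrode
            alpha * I,                  # apical central electrode
            (1 - alpha) * I,            # basal central electrode
            sigma * (1 - alpha) * I]    # basal flanking electrode

# A focused (sigma < 0), mid-steered (alpha = 0.5) configuration:
print(dcvc_currents(alpha=0.5, sigma=-0.5))  # [-0.25, 0.5, 0.5, -0.25]
```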
The goal of the present experiment is to investigate the tradeoff between place pitch encoding and power savings using simultaneous electrode stimulation in the DCVC configuration. We developed three experiments to investigate:
1. Equal loudness contours at most comfortable levels (M) for different values of σ.
2. Discrimination of virtual channels (jnd of α) for different values of σ.
3. The ranges of α, defined as the difference between the maximum and minimum values of α producing the same pitch height, for different values of σ.
Preliminary results from 7 adult Advanced Bionics CI users have been collected. Results from experiment 1 show that the required current to produce M levels is significantly reduced with increasing σ, as predicted by the model of Litvak et al. (2007). This suggests that increasing σ improves power efficiency. Experiment 2 demonstrates that the jnd of α becomes poorer when focusing the field (i.e., decreasing σ). Experiment 3 shows that focusing the field increases the ranges of α. Dividing the range of α by the jnd of the corresponding α gives an estimate of the number of discriminable steps in a fixed range. No significant differences in the estimated number of discriminable steps were detected for the different values of σ.
DCVCs can reduce power consumption without decreasing the number of discriminable steps.
Support provided by the DFG Cluster of Excellence “Hearing4all”, Advanced Bionics and the
NIH/NIDCD (R01 DC012152, PI: Landsberger).
R58: RATE PITCH WITH MULTI-ELECTRODE STIMULATION PATTERNS:
CONFOUNDING CUES
Pieter J Venter, Johan J Hanekom
Bioengineering, University of Pretoria, Pretoria, ZAF
A number of studies have considered the use of multi-electrode stimulation patterns to
improve rate pitch perception. For stimulation on single electrodes, CI rate pitch often saturates
at around 300 pps, so that above this rate, increases in rate need to be large to be detected. To
overcome this, multi-electrode stimulation patterns have been tested. In these, a set of
electrodes are stimulated at the same rate, the premise being that rate pitch would then be
extracted from multiple cochlear locations. Venter and Hanekom (JARO, vol. 15, 2014, pp. 849-866) sequentially stimulated varying numbers of electrodes in two phase delay conditions,
measuring rate discrimination thresholds and pitch ranking resulting from increases in
stimulation rate. Penninger et al. (IJA, Early Online, 2015) applied a random order of stimulation
in a set of electrodes stimulated at the same rate and measured pitch ranking when rate varied.
A specific consideration comes into play when using multi-electrode stimuli to encode
pitch. Increased rate is expected to lead to increases in loudness, so that listeners may attend to
this cue rather than to rate pitch changes. It may be necessary to adjust loudness when rate is
changed, but this presents a problem, as the levels of multiple electrodes will be adjusted. To
maintain the same relative loudness on every electrode, we may adjust each electrode’s
stimulus amplitude by the same fraction of dynamic range (DR). As electrodes have different
DRs, however, this would lead to different current adjustments on each electrode, which could
lead to a shift in the stimulus centroid. This is expected to result in a change in place pitch. A
similar problem exists when adjusting stimuli by the same amount of current.
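A toy calculation (Python; the thresholds, comfort levels, and electrode places are hypothetical) illustrates how an equal fraction-of-DR adjustment changes each electrode's current by a different amount and thereby shifts the current-weighted centroid:

```python
import numpy as np

# Hypothetical per-electrode thresholds (T) and comfort levels (C), in
# arbitrary current units, and electrode places along the array.
T = np.array([100.0, 80.0, 120.0])
C = np.array([200.0, 220.0, 180.0])
place = np.array([1.0, 2.0, 3.0])
level = T + 0.5 * (C - T)          # start every electrode at 50% of its DR

def centroid(currents):
    """Current-weighted place of stimulation."""
    return np.sum(place * currents) / np.sum(currents)

# Lower loudness by the same fraction (10%) of each electrode's own DR:
adjusted = level - 0.10 * (C - T)
print("current change per electrode:", level - adjusted)   # [10. 14.  6.]
print(f"centroid before: {centroid(level):.3f}, "
      f"after: {centroid(adjusted):.3f}")                   # 2.000 -> ~2.010
```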
Therefore, it may be undesirable to adjust loudness of multi-electrode stimulus patterns,
especially when the intention is to effect changes in rate pitch. If loudness is adjusted, place pitch changes would co-vary with rate, while level roving to mitigate this may introduce randomly varying place pitch cues.
So, when varying rate in a multi-electrode stimulus, either unintended place pitch or loudness
cues may be available, which may interfere with rate pitch perception. These can probably not
be controlled simultaneously.
The work presented probed this in experiments with CI listeners to determine whether (i)
variation in stimulation rate of multi-electrode stimuli would result in significant loudness
increases, (ii) compensatory loudness adjustments would cause place pitch changes, and (iii)
which of these would be dominant. Rate pitch was assessed, and the conditions tested included no loudness balancing, loudness balancing, and level roving. Initial results show that (a) loudness
changes with rate change are small, and some listeners ignore these; (b) loudness adjustments
cause confounding pitch cues. This has implications for the design of multi-electrode stimulation
patterns.
R59: PITCH VARIATIONS ACROSS DYNAMIC RANGE USING DIFFERENT
PULSE SHAPES IN COCHLEAR IMPLANT USERS
Jaime A. Undurraga1, Jan Wouters2, Astrid van Wieringen2
1 University College London Ear Institute, London, GBR
2 ExpORL, KU Leuven, Leuven, BEL
Introduction: Cochlear Implant (CI) users' ability to discriminate pitch relies on both temporal
and place cues. Previous studies have shown that pitch judgements can be affected by changes in
stimulation level. These perceptual changes in pitch with stimulation level may be due to several factors: subjects may judge on the basis of loudness rather than pitch cues, the place of excitation may change as the spread of excitation varies with level, and/or polarity sensitivity may vary with level. Previous studies have shown that CI subjects are more sensitive to the
anodic than the cathodic polarity of a pulse. However, this sensitivity difference seems to decrease
at lower stimulation levels.
The aim of this study was to evaluate how temporal pitch is affected by stimulation level and
polarity in human CI users.
Methods: Three users of the CII or HiRes90k CI (Advanced Bionics) took part in this study.
Four different pulse shapes were tested: symmetric anodic-first (SYMA), pseudomonophasic anodic
(PSA), pseudomonophasic cathodic (PSC), and alternating pseudomonophasic pulse (ALTPS).
SYMA consisted of an initial anodic (positive) phase immediately followed by a negative phase of the same duration and magnitude as the leading phase. PSA consisted of a leading positive phase followed by a secondary negative phase 8 times longer and 8 times lower in amplitude than the leading phase. PSC pulses were similar to PSA but of opposite polarity. ALTPS consisted of PSA and PSC pulses presented alternately. The shortest phase of all stimuli lasted 97 µs. SYMA, PSA, and PSC pulse trains were presented at either 150 or 75 pps. ALTPS pulse trains were presented at 150 pps.
These unmodulated pulse trains were presented for 500 ms. Stimuli were balanced in loudness at 3
different points of the dynamic range (DR): 100%, 50%, and 25%. After balancing all stimuli,
subjects had to rank which of two pulse trains presented at the same loudness was higher in pitch. A
total of 7 conditions (several pulse shape / pulse rate combinations) were presented in a random
order.
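For concreteness, a minimal sketch of one way to construct such pulses (Python; the 1-µs sampling grid is an assumption, and the 1/8 amplitude of the long phase follows from charge balance):

```python
import numpy as np

def pseudomonophasic(amp, phase_us, ratio=8, anodic_first=True):
    """One pseudomonophasic pulse on a 1-us grid: a short leading phase
    followed by an opposite-polarity phase `ratio` times longer and
    `ratio` times lower in amplitude, so the pulse is charge balanced."""
    sign = 1.0 if anodic_first else -1.0
    lead = np.full(phase_us, sign * amp)
    tail = np.full(phase_us * ratio, -sign * amp / ratio)
    return np.concatenate([lead, tail])

psa = pseudomonophasic(1.0, 97)                        # PSA
psc = pseudomonophasic(1.0, 97, anodic_first=False)    # PSC
altps = np.concatenate([psa, psc])                     # one ALTPS cycle
print(abs(psa.sum()) < 1e-9, abs(altps.sum()) < 1e-9)  # both charge balanced
```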
Results: A two-way repeated measures ANOVA with factors condition and DR indicated that
the factor condition was significant [F (5, 10) = 48.7, p < 0.001]. The interaction between condition
and DR was also significant [F (10, 20) = 5.4, p < 0.001]. The pitch elicited by the ALTPS at 150 pps
was always similar to that produced by 75 pps pulse trains at comfortable levels and lower than that
produced by 150 pps pulse trains. PSA or PSC pulse trains presented at 75 pps were perceived with
a lower pitch than SYMA presented at 150 pps. PSA pulse trains presented at 150 pps elicited a
higher pitch than that elicited by SYMA pulse trains presented at 75 pps. PSA and SYMA pulse
trains presented at the same rate elicited a similar pitch whereas PSC pulse trains tended to elicit a
lower pitch than that produced by SYMA pulse trains presented at the same rate.
Conclusions: These preliminary data suggest that ALTPS 150 pps pulse trains elicit the same temporal pitch as PSA and SYMA pulse trains presented at 75 pps. ALTPS pulses presented at 100% of the DR elicited a higher pitch than PSA, PSC, and SYMA pulses presented at 75 pps; however, the pitch was the same or even lower when the level decreased to 50% or 25% of the DR.
Overall, PSC pulses elicited a lower pitch than the other pulse shapes. More data are currently being collected and will be presented at this meeting.
CIAP 2015 Attendee List
Name (last/first)
Abbas/Paul
Adel/Youssef
Agrawal/Smita
Anderson/Sean
Aronoff/Justin
Avci/Ersin
Azadpour/Mahan
Backus/Bradford
Badenhorst/Werner
Bahmer/Prof. Andreas
Bakhos/David
Baskent/Deniz
Batsoulis/Cornelia
Bauer/Manuela
Baumgartel/Regina
Benav/Heval
Berg/Katie
Bernardi/Giuliano
Bernstein/Josh
Bhatti/Pamela
Bierer/Julie
Bierer/Steven
Biesheuvel/Jan Dirk
Boermans/Peter-Paul
Bolner/Federico
Boyle/Patrick
Briaire/Jeroen
Brill/Stefan
Brint/Emma
Brown/Christopher
Brownen/Mike
Bruce/Ian
Buechel/Brian
Buechner/Andreas
Butera/Iliza
Butler/Blake
Butts/Austin
Cakir/Ahmet
Campbell/Mr Luke
Carlyon/Bob
Carney/Arlene
Chao/Xiuhua
Chatterjee/Anwesha
Chatterjee/Monita
Chen/Chen
Chen/Hongbin
Christensen/Julie
Chung/Yoojin
Clarke/Jeanne
Colesa/Debbie
Coltisor/Allison
Email City/State Country
paul-abbas@uiowa.edu
youssef.adel@kgu.de
smita.agrawal@advancedbionics.com
sanderso11@mail.bw.edu
jaronoff@illinois.edu
avci.ersin@mh-hannover.de
mahan.azadpour@nyumc.org
lbou@oticonmedical.com
Werner.Badenhorst@up.ac.za
Bahmer_A@ukw.de
lbluze@cochlear.com
d.baskent@umcg.nl
Iowa City IA
Frankfurt
Stevenson Ranch CA
Madison WI
Champaign IL
Hannover
Forest Hills NY
Vallauris
Pretoria
Wuerzburg
Clarksville TN
Groningen
USA
GERMANY
USA
USA
USA
GERMANY
USA
FRANCE
SOUTH AFRICA
GERMANY
USA
NETHERLANDS
regina.baumgaertel@uni-oldenburg.de
hevel.benav@medel.com
bergka@stolaf.edu
giuliano.bernardi@esat.kuleuven.be
joshua.g.bernstein.civ@mail.mil
pamela.bhatti@ece.gatech.edu
jbierer@u.washington.edu
jbierer@u.washington.edu
j.d.biesheuvel@lumc.nl
ppboermans@ziggo.nl
fbolner@cochlear.com
patrick.boyle@phonak.com
kerstin.toenjes@advancedbionics.com
brill_s@ukw.de
emma.brint.11@ucl.ac.uk
cbrown1@pitt.edu
mbrownen@cochlear.com
ibruce@ieee.org
buechelbrian@gmail.com
buechner@hoerzentrum-hannover.de
iliza.butera@Vanderbilt.Edu
bbutler9@uwo.ca
austin.m.butts@gmail.com
ahmet.cakir@vanderbilt.edu
lukejcampbell@yahoo.com.au
bob.carlyon@mrc-cbu.cam.ac
carne005@umn.edu
xiuhuachao@163.com
anchat1990@gmail.com
rebecca.cash@boystown.org
chen.chen@advancedbionics.com
hchen@nurotron.com
julie.christensen@boystown.org
yoojin.chung@gmail.com
j.n.clarke@umcg.nl
djmorris@umich.edu
allison.coltisor@ttuhsc.edu
Oldenburg
Innsbruck
Northfield MN
Leuven
Bethesda MD
Atlanta GA
Seattle WA
Seattle WA
Veenendaal
Amstelveen
Mechelen
Beckenham
Leiden
Wuerzburg
London
Pittsburgh PA
Flower Mound TX
Hamilton ON
Boston MA
Hannover
Nashville TN
London ON
Tempe AZ
Nashville TN
East Melbourne
Cambridge
Saint Paul MN
Chapel Hill NC
Brisbane
Omaha, NE
Valencia CA
Irvine CA
Omaha NE
Cambridge MA
Groningen
Grass Lake MI
Lubbock TX
GERMANY
AUSTRIA
USA
BELGIUM
USA
USA
USA
USA
NETHERLANDS
NETHERLANDS
BELGIUM
UK
NETHERLANDS
GERMANY
UK
USA
USA
CANADA
USA
GERMANY
USA
CANADA
USA
USA
AUSTRALIA
UK
USA
USA
AUSTRALIA
USA
USA
USA
USA
USA
NETHERLANDS
USA
USA
Corina/David
Cosentino/Stefano
Crew/Joseph
Croghan/Naomi
Curca/Ioan
Curtis/Hartling
Daanouni/Emilie
Dasika/Vasant
David/Marion
Davidson/Lisa
Davis/Tim
Dawson-Smith/Pam
De Jong/Monique
De Vos/Johan J.
Deeks/John
Delgutte/Bertrand
Deprez/Hanne
Deshpande/Alok
DeVries/Lindsey
Dieter/Alexander
Dietz/Mathias
Dillier/Norbert
DiNino/Mishaela
Dirks/Coral
Donahue/Amy
Dorman/Michael
Drennan/Ward R
Dritsakis/Giorgos
Dubyne/Lauren
Duran/Sara
Easwar/Vijayalakshmi
Ehlers/Erica
El Boghdady/Nawal
Fallon/James
Feng/Lei
Finke/Mareike
Finley/Charles
Firszt/Jill
Fitzgerald/Matthew
Fitzpatrick/Douglas
Frampton/Niki Erina
Francart/Tom
Franz/Darla
Fráter/Attila
Frijns/Johan
Fu/Qianjie
Fung/Stephen
Gaertner/Lutz
Galvin/John Joseph
Ganter/Tyler
Gaudrain/Etienne
Geissler/Gunnar
Geleoc/Gwenaelle
George/Sageev
George/Shefin
Giardina/Chris
Gifford/Rene
dpcorina@ucdavis.edu
stefano.consentino@mrc-cbu.cam.ac.uk
jcrew@usc.edu
ncroghan@cochlear.com
curcaioan_aurelian@yahoo.com
hartlinc@ohsu.edu
lbou@oticonmedical.com
comptreebee@gmail.com
david602@umn.edu
uchanskir@ent.wustl.edu
timothy.j.davis@vanderbilt.edu
bsheridan@cochlear.com
m.a.m.de_jong@lumc.nl
j.j.de_vos@lumc.nl
john.deeks@mrc-cbu.cam.ac.uk
bertrand_delgutte@meei.harvard.edu
tom.francart@med.kuleuven.be
alok.deshpande@advancedbionics.com
lindsdev@uw.edu
alexander.dieter@stud.uni-goettingen.de
mathias.dietz@uni-oldenburg.de
norbert.dillier@usz.ch
mdinino@uw.edu
hans3675@umn.edu
donahuea@nidcd.nih.gov
mdorman@asu.edu
wrdrennan@hotmail.com
gd1y12@soton.ac.uk
led44@pitt.edu
sduran@cochlear.com
vijayalakshmi.easwar@sickkids.ca
godar@waisman.wisc.edu
n.el.boghdady@umcg.nl
bhale@bionicsinstitute.org
fenglei.thu@gmail.com
finke.mareike@mh-hannover.de
charles.finley@advancedbionics.com
firsztj@ent.wustl.edu
sitzmb@stanford.edu
dcf@med.unc.edu
nframpton@cochlear.com
tom.francart@med.kuleuven.be
darla.franz@medel.com
attila.frater.14@ucl.ac.uk
J.H.M.Frijns@lumc.nl
qfu@mednet.ucla.edu
stfung@cochlear.com
gaertner@hoerzentrum-hannover.de
rufus90@gmail.com
tylergan@uw.edu
egaudrain@gmail.com
gunnar.geissler@advancedbionics.com
emailjrholt@gmail.com
sageev.george@fda.hhs.gov
sgeorge@bionicsinstitute.org
christopher_giardina@med.unc.edu
rene.gifford@vanderbilt.edu
Davis, CA
Cambridge
Los Angeles CA
Englewood CO
London
Portland OR
Vallauris
Silver Spring MD
Minneapolis MN
Chesterfield MO
Nashville TN
East Melbourne
Leiden
The Hague
Cambridge
Watertown MA
Leuven
Valencia CA
Seattle WA
Gottingen
Oldenburg
Zollikon
Seattle WA
Minneapolis MN
Bethesda MD
Scottsdale AZ
Seattle WA
Highfield
Russell PA
Englewood CO
Toronto
Madison WI
Groningen
Melbourne
Saint Paul MN
Hannover
Valencia CA
Saint Louis MO
San Francisco CA
Chapel Hill NC
Leuven
Durham NC
London
Leiden
Los Angeles CA
Macquarie Univer
Hannover
Los Angeles CA
Seattle WA
Groningen
Lower Saxony
Newton Center MA
Silver Spring MD
Melbourne
Chapel Hill NC
Franklin TN
USA
UK
USA
USA
UK
USA
FRANCE
USA
USA
USA
USA
AUSTRALIA
NETHERLANDS
NETHERLANDS
UK
USA
BELGIUM
USA
USA
GERMANY
GERMANY
SWITZERLAND
USA
USA
USA
USA
USA
UK
USA
USA
CANADA
USA
NETHERLANDS
AUSTRALIA
USA
GERMANY
USA
USA
USA
USA
AUSTRALIA
BELGIUM
USA
UK
NETHERLANDS
USA
AUSTRALIA
GERMANY
USA
USA
NETHERLANDS
GERMANY
USA
USA
AUSTRALIA
USA
USA
Glassman/Katelyn
Gnanasegaram/Joshua
Gnansia/Dan
Goehring/Tobias
Goldstein/Michael
Goldsworthy/Ray
Goorevich/Michael
Gordon/Karen
Goupell/Matthew
Gransier/Robin
Greisiger/Ralf
Hall/Christopher
Hamacher/Volkmar
Hanekom/Johan
Hanekom/Tania
Hansen/John
Hartley/Doug
Hartman/Jasenia
Haywood/Nicholas
Hazrati/Oldooz
He/Shuman
Heddon/Chris
Hehrmann/Phillipp
Heinz/Michael
Henkin/Yael
Herisanu/Tereza Ioana
Herssbach/Adam
Hight/Ariel Edward
Hilkhuysen/Gaston
Hoen/Michel
Hofmann/Michael
Holden/Laura K.
Holt/Jeff
Hong/Feng
Horn/David L.
Hossain/Shaikat
Hu/Hongmei
Huang/Juan
Huber/Alex
Hughes/Aaron
Hughes/Michelle
Hume/Cliff
Jang/Jeong Hun
Hussnain/Ali
Jaekel/Brittany
Jang/Jongmoon
Jeon/Eun Kyung
Jocewicz/Rachael
Johnstone/Patti
Jones/Heath
Joshi/Suyash N.
Juergens/Tim
Jung/D.E.
Kabot/Ernst
Kalkman/Randy
Kals/Mathias
Kan/Alan
Katelyn.Glassman@medel.com
karen.gordon@utoronto.ca
lbou@oticonmedical.com
t.goehring@soton.ac.uk
michael.goldstein@cornell.edu
ray.goldsworthy@med.usc.edu
mgoorevich@cochlear.com
karen.gordon@utoronto.ca
triab@umd.edu
tom.francart@med.kuleuven.be
ralf.greisiger@medisin.uio.no
1570.hall@ttuhsc.edu
volkmar.hamacher@advancedbionics.com
johan.hanekom@up.ac.za
tania.hanekom@up.ac.za
jxh052100@utdallas.edu
carolinedoug.hartley@btinternet.com
hjasenia@gmail.com
nhaywood@gmail.com
oldooz.hazrati@gmail.com
shuman_he@med.unc.edu
chris@resonance-med.com
phillipp.hehrmann@advancedbionics.com
mheinz@purdue.edu
henkin@post.tau.ac.il
ioana_h@yahoo.com
bsheridan@cochlear.com
ahight@fas.harvard.edu
hilkuysen@lma.cnrs-mrs.fr
lbou@oticonmedical.com
tom.francart@med.kuleuven.be
holdenl@wustl.edu
emailjrholt@gmail.com
feng.hong@utdallas.edu
david.horn@seattlechildrens.org
shossa3@gmail.com
hongmei.hu@uni-oldenburg.de
jhuang@jhu.edu
alex.huber@usz.ch
hughesaa@umich.edu
rebecca.cash@boystown.org
hume@u.washington.edu
jangjh@knu.ac.kr
hussnain.ali@utdallas.edu
triab@umd.edu
jmjang@dgist.ac.kr
eunkyung-jeon@uiowa.edu
godar@waisman.wisc.edu
pjohnst1@utk.edu
godar@waisman.wisc.edu
suyash@suyash-joshi.com
tim.jurgens@uni-oldenburg.de
d.e.jung@umcg.nl
New York NY
Toronto
Vallauris
Southampton
Ithaca, NY
Altadena CA
Sydney
Toronto
College Park MD
Leuven
Fjellhamar
Lubbock TX
Hannover
Pretoria
Pretoria
Richardson TX
Nottingham
Madison WI
London
Richardson TX
Chapel Hill NC
Evanston IL
Hannover
W. Lafayette, IN
Tel Aviv
Heidelberg
East Melbourne
Boston, MA
Marseille
Vallauris
Leuven
Saint Louis MO
Newton Center MA
Richardson TX
Seattle WA
Richardson TX
Oldenburg
Owings Mills MD
Zurich
Ann Arbor MI
Omaha NE
Seattle WA
Daegu
Richardson TX
College Park MD
Daegu
Iowa City IA
Madison WI
Knoxville TN
Madison WI
Copenhagen
Mineral Wells WV
Groningen
USA
CANADA
FRANCE
UK
USA
USA
AUSTRALIA
CANADA
USA
BELGIUM
NORWAY
USA
GERMANY
SOUTH AFRICA
SOUTH AFRICA
USA
UK
USA
UK
USA
USA
USA
GERMANY
USA
ISRAEL
GERMANY
AUSTRIA
USA
FRANCE
FRANCE
BELGIUM
USA
USA
USA
USA
USA
GERMANY
USA
SWITZERLAND
USA
USA
USA
KOREA
USA
USA
KOREA
USA
USA
USA
USA
DENMARK
USA
NETHERLANDS
r.k.kalkman@lumc.nl
Leiden
NETHERLANDS
godar@waisman.wisc.edu
Madison WI
USA
Karunasiri/Tissa
Keppeler/Daniel
Kludt/Eugen
Koka/Kanthaiah
Kokx-Ryan/Melissa
Kral/Andrej
Kreft/Heather
Krishnamoorthi/Harish
Kronenberger/William
Kronen-Gluck/Tish
Kujawa/Sharon
Kulkarni/Abhi
Kulkarni/Aditya
Laback/Bernhard
Lai/Wai Kong
Landry/Thomas
Landsberger/David
Lartz/Caroline
Lazard/Diane
Leake/Patricia A
Lee/Daniel
Lee/Daniel
Lenarz/Thomas
Leyzac/Kara
Li/Chu
Li/Jianan
Lim/Hubert
Lim/Kai Yuen
Lin/Harrison
Lin/Payton
Lin/Yung-Song
Lineaweaver/Sean
Litovsky/Ruth
Litvak/Leonid
Long/Christopher
Lopez-Poveda/Enrique A
Lu/Hui-Ping
Luckmann/Annika
Luke/Robert
Luo/Chuan
Macherey/Olivier
Macpherson/Ewan
Maretic/Petra
Marozeau/Jeremy
Marzalek/Michael
Mathew/Rajeev
Mauger/Stefan
McAlpine/David
McFayden/Tyler
McKay/Collette
McNight/Carmen
Mehta/Harshit
Mens/Lucas
Mesnildrey/Quentin
Middlebrooks/John C
Miller/Roger
Min-Hyun/Park
rankiri.karunasiri@advancedbionics.com
daniel.keppeler@med.uni-goettingen.de
kludt.eugen@mh-hannover.de
kanthaiah.koka@advancedbionics.com
melissa.j.kokx-ryan.civ@mail.mil
a.kral@uke.de
plumx002@umn.edu
hkrishnamoorthi@cochlear.com
wkronenb@iupui.edu
tishkg@aol.com
sharon_kujawa@meei.harvard.edu
abhijit.kulkarni@advancedbionics.com
aditya.kulkarni@boystown.org
bernhard.laback@oeaw.ac.at
waikong.lai@usz.ch
tglandry@dal.ca
david.landsberger@nyumc.org
lartzc@wusm.wustl.edu
dianelazard@yahoo.fr
pleake@ohns.ucsf.edu
daniel_lee@meei.harvard.edu
dlee390@gmail.com
Lenarz.Thomas@mh-hannover.de
kleyzac@umich.edu
chuli@nurotron.com
301 PLA Hospital
hlim@umn.edu
klim19@jhmi.edu
harrison.lin@uci.edu
paytonlin@yahoo.com
kingear@gmail.com
slineaweaver@cochlear.com
litovsky@waisman.wisc.edu
leonid.litvak@advancedbionics.com
clong@cochlear.com
ealopezpoveda@usal.es
rebecca.cash@boystown.org
a.luckmann@umcg.nl
tom.francart@med.kuleuven.be
luochuan@mail.tsinghua.edu.cn
macherey@lma.cnrs-mrs.fr
ewan.macpherson@nca.uwo.ca
mareticp@gmail.com
jemaroz@elektro.dtu.dk
mike@mikemarz.com
rgmathew@hotmail.com
bsheridan@cochlear.com
d.mcalpine@ucl.ac.uk
tyler_mcfayden@med.unc.edu
cmckay@bionicsinstitute.org
karen.gordon@utoronto.ca
harshit.mehta@advancedbionics.com
lucas.mens@radboudumc.nl
mesnildrey@lma.cnrs-mrs.fr
zonsan@uci.edu
millerr@nidcd.nih.gov
drpark@snu.ac.kr
Valencia CA
Goettingen
Hannover
Valencia CA
Lorton VA
Hannover
Hinckley MN
Englewood CO
Indianapolis, IN
San Francisco CA
Boston, MA
Valencia CA
Omaha NE
Wien
Zurich
Halifax NS
New York NY
St Louis MO
Paris
San Francisco CA
Boston, MA
Champaign IL
Hannover
Ann Arbor MI
Hangzhou
Beijing
Saint Paul MN
Baltimore MD
Orange CA
Irvine CA
Taipei
Gig Harbor WA
Madison WI
Valencia CA
Denver, CO
Salamanca
Kaohsiung City
Groningen
Leuven
USA
GERMANY
GERMANY
USA
USA
GERMANY
USA
USA
USA
USA
USA
USA
USA
AUSTRIA
SWITZERLAND
CANADA
USA
USA
FRANCE
USA
USA
USA
GERMANY
USA
China
China
USA
USA
USA
USA
TAIWAN
USA
USA
USA
USA
SPAIN
CHINA
NETHERLANDS
BELGIUM
Marseilles
London ON
Skane
Lyngby
Santa Rosa CA
Sutton
East Melbourne
London
Chapel Hill NC
Melbourne
Toronto
Valencia CA
Berg en Dal
Marseille
Irvine CA
Rockville MD
Seoul
FRANCE
CANADA
SWEDEN
DENMARK
USA
UK
AUSTRALIA
UK
USA
AUSTRALIA
CANADA
USA
NETHERLANDS
FRANCE
USA
USA
KOREA
Misurelli/Sara
Mok Gwon/Tae
Monaghan/Jessica
Morris/David
Morse/Robert
Moser/Tobias
Moua/Keng
Nagarajaiah/Yadunandan
Najnin/Shamima
Narasimhan/Shreya
Natale/Sarah
Nelson/Peggy
Netten/Anouk
Neuman/Arlene
Nie/Kaibao
Noble/Jack
Nogueira/Waldo
Nopp/Peter
Norton/Susan
Novak/Michael
Oberhauser/Werner
O'Brien/Gabriele
Oh/Yonghee
O'Neil/Erin
Oosthuizen/Dirk
Opie/Jane
Overstreet/Edward
Oxenham/Andrew
Padilla/Monica
Pandya/Pritesh
Parkinson/Aaron
Patrick/Jim
Patro/Chhayakant
Pesch/Jörg
Pfingst/Bryan E.
Ping/Lichuan
Plasmans/Anke
Polonenko/Melissa
Poppeliers/Jan
Potts/Wendy
Praetorius/Mark
Prokopiou/Andreas
Qazi/Obaid ur Rehman
Racey/Allison
Rader/Tobias
Ramanna/Lakshmish
Recugnat/Matthieu
Reeder/Ruth M.
Reiss/Lina
Richter/Claus Peter
Rieffe/Carolien
Riis/Soren
Robert/Mark
Roberts/Larry
Robinson/Jennifer
Rode/Thilo
Rubinstein/Jay
godar@waisman.wisc.edu
tm.gwon0925@gmail.com
jessicamonaghan@gmail.com
dmorris@hum.ku.dk
r.morse@warwick.ac.uk
tmoser@gwdg.de
godar@waisman.wisc.edu
laura.benesh@advancedbionics.com
snajnin@memphis.edu
shreya_narasimhan@meei.harvard.edu
Madison WI
Seoul
London
Copenhagen
Coventry
Gottingen
Madison WI
Valencia CA
Memphis TN
Lausanne
Scottsdale AZ
peggynelson@umn.edu
Minneapolis MN
a.p.netten@lumc.nl
Leiden
arlene.neuman@gmail.com
Teaneck NJ
niek@uw.edu
Bothell WA
jack.noble@vanderbilt.edu
Nashville TN
nogueiravazquez.waldo@mh-hannover.de Hannover
peter.nopp@medel.com
Innsbruck
susan.norton@seattlechildrens.org
Seattle WA
noropro@hotmail.com
White Heath IL
werner.oberhauser@medel.com
Innsbruck
eobrien3@uw.edu
Seattle WA
oyo@ohsu.edu
Portland OR
oneil554@umn.edu
Saint Paul MN
dirk.jj.oosthuizen@gmail.com
Pretoria
jane.opie@medel.com
Innsbruck
edov@oticonmedical.com
Nice
oxenham@umn.edu
Minneapolis MN
monica.padillavelez@nyumc.org
Los Angeles CA
Pritesh.Pandya@medel.com
Chapel Hill NC
aaron.j.parkinson@outlook.com
Seattle WA
jpatrick@cochlear.com
MACQUARIE PARK
cpatro@memphis.edu
Memphis TN
jpesch@cochlear.com
Mechelen
bpfingst@umich.edu
Ann Arbor MI
Nurotron
Hangzhou
aplasmans@cochlear.com
Edegem
melissa.polonenko@sickkids.ca
Toronto
jpoppeliers@cochlear.com
Mechelen
wpotts@cochlear.com
Englewood CO
mark.praetorius@med.uni-heidelberg
Mainz
tom.francart@med.kuleuven.be
Leuven
oqazi@cochlear.com
Mechelen
Allison.Racey@medel.com
Chapel Hill NC
tobias.rader@kgu.de
Frankfurt
lakshmish@gmail.com
Parker CO
matthieu.recugnat@gmail.com
London
reederr@ent.wustl.edu
Belleville IL
reiss@ohsu.edu
Portland OR
cri529@northwestern.edu
Chicago IL
crieffe@fsw.leidenuniv.nl
Leiden
srka@oticonmedical.com
Kongebakken
insilicoetfibro@yahoo.com
Pasadena CA
roberts@mcmaster.ca
Hamilton ON
USA
KOREA
UK
DENMARK
UK
GERMANY
USA
USA
USA
SWITZERLAND
USA
USA
NETHERLANDS
USA
USA
USA
GERMANY
AUSTRIA
USA
USA
AUSTRIA
USA
USA
USA
SOUTH AFRICA
AUSTRIA
France
USA
USA
USA
USA
AUSTRALIA
USA
BELGIUM
USA
China
BELGIUM
CANADA
BELGIUM
USA
GERMANY
BELGIUM
BELGIUM
USA
GERMANY
USA
UK
USA
USA
USA
NETHERLANDS
DENMARK
USA
CANADA
rode.thilo@hzh-gmbh.de
rubinj@uw.edu
GERMANY
USA
Hannover
Seattle WA
S.Martinez/Manuel
Sagi/Elad
Sam-George/Shefin
Saoji/Aniket
Sato/Mika
Scheperle/Rachel
Schleich/Peter
Schwarz/Konrad
Seeber/Bernhard
Shah/A.
Shah/Darshan
Shahin/Antoine
Shannon/Bob
Shayman/Corey
Sheffield/Benjamin
Shen/Steve I. Y
Shepherd/Rob
Shinobu/Leslie
Sinha/Sumi
Sladen/Douglas
Smith/Leah
Smith/Zachary
Soleymani/Roozbeh
Spahr/Tony
Spencer/Nathan
Spirrov/Dimitar
Srinivasan/Sridhar
Stakhovskaya/Olga
Stark/Germaine
Stickney/Ginger
Stohl/Josh
Stupak/Natalia
Sue/Andrian
Sun/Xiaoan
Svirsky/Mario
Swanson/Brett
Tabibi/Sonia
Tamati/Terrin
Tan/Xiaodong
Tejani/Viral
Thakker/Tanvi
Tian/Chen
Tillein/Jochen
Tillery/Kate
Todd/Ann
Tourrel/Guillaume
Tran/Phillip
Treaba/Claudiu
Uchanski/Rosalie
Undurraga/Jaime
Van der Jagt/Annerie
van Dijk/Bas
van Eeckhoutte/Maaike
Van Gendt/Margriet
van Wieringen/Astrid
Van Yper/Lindsey
Vannson/Nicolas
lbou@oticonmedical.com
elad.sagi@nyumc.org
sgeorge@bionicsinstitute.org
aniket.saoji2@phonak.com
sato.mika@mh-hannover.de
rebecca.cash@boystown.org
peter.schleich@medel.com
konrad.schwarz@medel.com
b_seabear@gmx.de
AShah@bionicsinstitute.org
darshan.shah@medel.com
ajshahin@ucdavis.edu
rshannon@usc.edu
corey.shayman@gmail.com
Vallauris
Brooklyn NY
East Melbourne
Valencia CA
Hannover
Omaha NE
Innsbruck
Innsbruck
Munich
Melbourne
Innsbruck
Davis CA
Los Angeles CA
Deerfield IL
FRANCE
USA
AUSTRALIA
USA
GERMANY
USA
AUSTRIA
AUSTRIA
GERMANY
AUSTRALIA
AUSTRIA
USA
USA
USA
ishen@u.washington.edu
bhale@bionicsinstitute.org
leslie@thirdrockventures.com
sumi_sinha@hms.harvard.edu
sladen.douglas@mayo.edu
leah.smith@sunnybrook.ca
zsmith@cochlear.com
rs4462@nyu.edu
tony.spahr@advancedbionics.com
njs64@pitt.edu
tom.francart@med.kuleuven.be
ssri.oeaw@gmail.com
olga.stakhovskaya.ctr@mail.mil
starkg@ohsu.edu
stickney@uci.edu
Seattle WA
Melbourne
Boston MA
Boston MA
Northfield MN
Etobicoke ON
Centennial CO
New York NY
Valencia CA
Pittsburgh PA
Leuven
Vienna
Albuquerque NM
Vancouver WA
Orange CA
Chapel Hill NC
Montvale NJ
Sydney
Hangzhou
New York NY
Macquarie NSW
Dietikon
Groningen
Chicago IL
Iowa City IA
Madison WI
Hangzhou
Frankfurt
Phoenix AZ
New York NY
Vallauris
Sydney
Englewood CO
Chesterfield MO
London
USA
AUSTRALIA
USA
USA
USA
CANADA
USA
USA
USA
USA
BELGIUM
AUSTRIA
USA
USA
USA
USA
USA
AUSTRALIA
China
USA
AUSTRALIA
SWITZERLAND
NETHERLANDS
USA
USA
USA
China
GERMANY
USA
USA
FRANCE
AUSTRALIA
USA
USA
UK
GERMANY
BELGIUM
BELGIUM
NETHERLANDS
BELGIUM
BELGIUM
FRANCE
n.stupak84@gmail.com
andrian.sue@sydney.edu.au
Xiaoansun@nurotron.com
mario.svirsky@nyumc.org
bswanson@cochlear.com
sonia.tabibi@usz.ch
t.n.tamati@umcg.nl
xiaodong.tan@northwestern.edu
viral-tejani@uiowa.edu
godar@waisman.wisc.edu
chentian@nurotron.com
tillein@em.uni-frankfurt.de
ahelms@asu.edu
ann.todd@nyumc.org
lbou@oticonmedical.com
phillip.tran@sydney.edu.au
ctreaba@cochlear.com
uchanskir@ent.wustl.edu
jaime.undurraga@gmail.com
kerstin.toenjes@advancedbionics.com
bvandijk@cochlear.com
tom.francart@med.kuleuven.be
mjvangendt@gmail.com
tom.francart@med.kuleuven.be
lindsey.vanyper@uzgent.be
nicolas.vannson@cerco.ups-tlse.fr
Mechelen
Leuven
Den Haag
Leuven
Ghent
Toulouse
Vatti/Marianna
Veau/Nicolas
Verhoeven/Kristien
Vickers/Deborah
Volk/Dr. Florian
Vollmer/Maike
Wagner/Anita
Waldmann/Bernd
Wang/DeLiang
Wang/Dongmei
Wang/Jing
Wang/Ningyuan
Wang/Shuai
Wang/Song
Wang/Xiaoqin
Wang/Xiaosong
Weiss/Robin
Weissgerber/Tobias
Wesarg/Thomas
Wess/Jessica
White/Mark
Wijetillake/Aswin
Williges/Ben
Wilson/Elyse
Winn/Matt
Wirtz/Christian
Wise/Andrew
Wohlbauer/Dietmar
Wolford/Robert
Won/Jong Ho
Wong/Paul
Lee/Jae Wook
Worman/Tina
Wouters/Jan
Xu/Li
Yamazaki/Hiroshi
Yan/Jie
Yoon/Yang-Soo
Yu/Yang
Zaleski/Ashley
Zeman/Annette
Zhao/Xin
Zhou/Ning
Zirn/Stefan
lbou@oticonmedical.com
lbou@oticonmedical.com
kverhoeven@cochlear.com
d.vickers@ucl.ac.uk
florian.voelk@mytum.de
vollmer_m@ukw.de
a.wagner@umcg.nl
bwaldmann@cochlear.com
dwang@cse.ohio-state.edu
dxw100020@utdallas.edu
wangjj@nurotron.com
wang2087@umn.edu
swang102@asu.edu
wangsong@nurotron.com
xiaoqin.wang@jhu.edu
qfu@mednet.ucla.edu
robin.weiss@tum.de
mhiltmann@hotmail.com
thomas.wesarg@uniklinik-freiburg.de
jwess@umd.edu
mark@markwhite.name
aswin.wijetillake@tum.de
ben.williges@uni-oldenburg.de
elysej@uw.edu
godar@waisman.wisc.edu
christian.wirtz@medel.de
bhale@bionicsinstitute.org
dietmar.wohlbauer@usz.ch
bob.wolford@medel.com
jhwon15@gmail.com
paul.wong@sydney.edu.au
jxl096020@utdallas.edu
wormant@gmail.com
tom.francart@med.kuleuven.be
xul@ohio.edu
hiroshi.yamazaki@sickkids.ca
jie.yan@advancedbionics.com
yyangsoo@hotmail.com
yayu@cochlear.com
ashley.c.zaleski.civ@mail.mil
annettezeman@gmail.com
xzhou@bionicsinstitute.org
zhoun@ecu.edu
Vallauris
Vallauris
Cambridgeshire
Garching
Wuerzburg
Groningen
Hannover
Columbus OH
Richardson TX
Hangzhou
Minneapolis MN
Tempe AZ
Hangzhou
Baltimore MD
La Canada Flintridge CA
Vaterstetten
Frankfurt/Main
Freiburg
Rockville MD
Cary NC
Munich
Oldenburg
Seattle WA
Madison WI
Starnberg
Melbourne
Zürich
Rougemont NC
Washington DC
Sydney
Richardson TX
Seattle WA
Leuven
Athens OH
Toronto
Valencia CA
Lubbock TX
Sydney
Washington DC
Brooklyn NY
Melbourne
Greenville NC
Freiburg
FRANCE
FRANCE
BELGIUM
UK
GERMANY
GERMANY
NETHERLANDS
GERMANY
USA
USA
China
USA
USA
China
USA
USA
GERMANY
GERMANY
GERMANY
USA
USA
GERMANY
GERMANY
USA
USA
GERMANY
AUSTRALIA
SWITZERLAND
USA
USA
AUSTRALIA
USA
USA
BELGIUM
USA
CANADA
USA
USA
AUSTRALIA
USA
USA
AUSTRALIA
USA
GERMANY