
Sunday, February 9, 2020

Ear and Hearing

The Merits of Bilateral Application of Bone-Conduction Devices in Children With Bilateral Conductive Hearing Loss
Objectives: This study aims to characterize lateralization and localization of sounds in children with bilateral conductive hearing loss (BCHL) when listening with either one or two percutaneous bone conduction devices (BCDs). Design: Sound lateralization was measured with the minimum audible angle test, in which children were asked to indicate from which of two visible speakers the sound originated. Sound localization was measured with a test in which stimuli were presented from speakers that were not visible to the children. In the sound localization test, 150 ms broadband noise bursts were presented, and sound level was roved over a 20-dB range. Because the speakers were not visible, the localization response was not affected by any visual cue. The sound localization test therefore provides a clear distinction between lateralization and localization of sounds. Ten children with congenital BCHL and one child with acquired BCHL participated. Results: Both lateralization and sound localization were better with bilateral BCDs than in the unilaterally aided conditions. In the bilateral BCD condition, lateralization was close to normal in nearly all the children. The localization test demonstrated lateralization rather than sound localization behavior when listening with bilateral BCDs. Furthermore, in the unilateral aided condition, stimuli presented at different sound levels were mainly perceived at the same location. Conclusions: This study demonstrates that, in contrast to listening with two BCDs, children had difficulty lateralizing and localizing sounds when listening with just one BCD (i.e., one BCD turned off). Because both lateralization and sound localization behavior were tested, it could be shown that these children are better able to lateralize than localize sounds when listening with bilateral BCDs. The present study provides insight into the (sub-optimal) sound localization capabilities of children with congenital BCHL in the unilateral-aided and bilateral-aided conditions. Despite the sub-optimal results on sound localization, this study underlines the merits of bilateral application of BCDs in such children.

In patients with congenital bilateral conductive hearing loss (BCHL), bilateral application of bone conduction devices (BCDs) is not the standard treatment (Liu et al. 2013). Based on systematic reviews of the existing literature, several authors have concluded that more studies are needed to provide convincing evidence of the advantage of bilateral BCD application over listening with a unilateral BCD (Colquitt et al. 2011; Janssen et al. 2012; Johnson et al. 2017). Patients with BCHL cannot access binaural cues when listening with only one BCD. With two BCDs, these patients may gain increased access to binaural cues, allowing for improved localization abilities (Zeitooni et al. 2016). Binaural processing of our auditory world provides important benefits, such as improved directional hearing, increased safety (Stelling-Kończak et al. 2016), feelings of comfort, and understanding of speech in noisy listening conditions (Avan et al. 2015). Hence, it is important to provide patients with BCHL and their caretakers with evidence on the merits of hearing rehabilitation with two BCDs.

Supplemental digital content is available for this article. Direct URL citations appear in the printed text and are provided in the HTML and text of this article on the journal’s Web site (www.ear-hearing.com). ACKNOWLEDGMENTS: The research was supported by the William Demants og Hustru Ida Emilies Fond (M.J.H.A.), the FP7-PEOPLE-2013-ITN Marie Curie Initial Training Network iCare (K.V. and M.J.H.A.), and by European Union Horizon 2020 ERC Advanced Grant 2016 (ORIENT, Grant No. 693400) (A.F.M.S.). All authors contributed significantly to the study, with C.A.d.B., A.J.B., and M.J.H.A. mainly collecting the data; C.A.d.B. and K.V. analyzing the data; and C.A.d.B., A.J.B., A.F.M.S., M.K.S.H., and M.J.H.A. writing the article. C.A.d.B. and M.K.S.H. report financial support to the authors’ institution for conducting two clinical studies from Oticon Medical AB (Askim, Sweden) and from Cochlear Bone Anchored Solutions AB (Mölnlycke, Sweden), outside the submitted work. Received March 20, 2019; accepted January 3, 2020. Address for correspondence: Martijn J.H. Agterberg, Department of Otorhinolaryngology, Donders Institute for Brain, Cognition and Behaviour, Radboudumc, Postbus 9101, 6500 HB Nijmegen, Netherlands. E-mail: martijn.agterberg@radboudumc.nl Copyright © 2020 Wolters Kluwer Health, Inc. All rights reserved.
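The level roving described in the Design section is what forces listeners to rely on genuine spatial cues rather than loudness differences between trials. A minimal sketch of how such stimuli could be generated follows; the sampling rate, base presentation level, and uniform rove distribution are assumptions, since the abstract specifies only the 150 ms duration, broadband content, and 20-dB rove range.

```python
import numpy as np

def make_roved_burst(fs=44100, dur=0.150, base_level_db=60.0,
                     rove_range_db=20.0, rng=None):
    """One 150 ms broadband noise burst with its presentation level roved
    over a 20 dB range. Levels are dB re an arbitrary unit-RMS reference;
    fs and base_level_db are assumptions of this sketch."""
    rng = np.random.default_rng() if rng is None else rng
    burst = rng.standard_normal(int(fs * dur))      # broadband Gaussian noise
    burst /= np.sqrt(np.mean(burst ** 2))           # normalize to unit RMS
    level_db = base_level_db + rng.uniform(-rove_range_db / 2,
                                           rove_range_db / 2)
    return burst * 10 ** (level_db / 20), level_db

# Example: a block of ten trials, each with an independently roved level
stimuli = [make_roved_burst() for _ in range(10)]
```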
Spectral-Temporal Trade-Off in Vocoded Sentence Recognition: Effects of Age, Hearing Thresholds, and Working Memory
Objectives: Cochlear implant (CI) signal processing degrades the spectral components of speech. This requires CI users to rely primarily on temporal cues, specifically, amplitude modulations within the temporal envelope, to recognize speech. Auditory temporal processing ability for envelope modulations worsens with advancing age, which may put older CI users at a disadvantage compared with younger users. To evaluate how potential age-related limitations for processing temporal envelope modulations impact spectrally degraded sentence recognition, noise-vocoded sentences were presented to younger and older normal-hearing listeners in quiet. Envelope modulation rates were varied from 10 to 500 Hz by adjusting the low-pass filter cutoff frequency (LPF). The goal of this study was to evaluate whether age impacts recognition of noise-vocoded speech and whether any age-related limitation exists for a specific range of envelope modulation rates. Design: Noise-vocoded sentence recognition in quiet was measured as a function of the number of spectral channels (4, 6, 8, and 12 channels) and LPF (10, 20, 50, 75, 150, 375, and 500 Hz) in 15 younger normal-hearing listeners and 15 older near-normal-hearing listeners. Hearing thresholds and working memory were assessed to determine the extent to which these factors were related to recognition of noise-vocoded sentences. Results: Younger listeners achieved significantly higher sentence recognition scores than older listeners overall. Performance improved in both groups as the number of spectral channels and the LPF increased. As the number of spectral channels increased, the differences in sentence recognition scores between groups decreased. A spectral-temporal trade-off was observed in both groups, in which performance in the 8- and 12-channel conditions plateaued with lower-frequency amplitude modulations compared with the 4- and 6-channel conditions. There was no interaction between age group and LPF, suggesting that both groups obtained similar improvements in performance with increasing LPF. The lack of an interaction between age and LPF may be due to the nature of the task of recognizing sentences in quiet. Audiometric thresholds were the only significant predictor of vocoded sentence recognition. Although performance on the working memory task declined with advancing age, working memory scores did not predict sentence recognition. Conclusions: Younger listeners outperformed older listeners for recognizing noise-vocoded sentences in quiet. The negative impact of age was reduced when ample spectral information was available. Age-related limitations for recognizing vocoded sentences were not affected by the temporal envelope modulation rate of the signal but instead appear to be related to a generalized task limitation or to reduced audibility of the signal.

Supplemental digital content is available for this article. Direct URL citations appear in the printed text and are provided in the HTML and text of this article on the journal’s Web site (www.ear-hearing.com). ACKNOWLEDGMENTS: We thank the University of Maryland’s College of Behavioral & Social Sciences (BSOS) Dean’s Office for their support. We thank Sasha Pletnikova, Allison Heuber, Adelia Witt, Hannah Johnson, and Shelby Creelman for help in collecting and analyzing these data. This study was supported by NIH Grant R01 AG051603 (M.J.G.), NIH Grant R37 AG09191 (S.G.S.), NIH Grant F32 DC016478 (M.J.S.), NIH Institutional Research Grant T32 DC000046E (S.G.S.: Co-principal investigator with Catherine Carr), and a seed grant from the University of Maryland-College Park College of Behavioral and Social Sciences. This study met all requirements for ethical research put forth by the IRB of the University of Maryland. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health. The authors have no conflicts of interest to disclose. Received December 5, 2018; accepted December 1, 2019. Address for correspondence: Maureen J. Shader, Department of Hearing and Speech Sciences, University of Maryland, College Park, MD, USA. E-mail: mshader@bionicsinstitute.org Copyright © 2020 Wolters Kluwer Health, Inc. All rights reserved.
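The vocoder manipulation in this study has two independent knobs: the number of spectral channels and the envelope low-pass cutoff (LPF). Below is a minimal noise-vocoder sketch along those lines; the log-spaced analysis band edges (100 to 8000 Hz), filter orders, and half-wave rectification for envelope extraction are assumptions of the sketch, not details taken from the study.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def noise_vocode(x, fs, n_channels=8, env_lpf_hz=150, f_lo=100, f_hi=8000):
    """Minimal noise vocoder. n_channels (4-12) and env_lpf_hz (10-500 Hz)
    mirror the study's manipulations; band edges, filter orders, and the
    rectification method are assumptions. Requires fs > 2 * f_hi."""
    edges = np.geomspace(f_lo, f_hi, n_channels + 1)   # log-spaced band edges
    carrier = np.random.default_rng(0).standard_normal(len(x))
    env_sos = butter(2, env_lpf_hz, btype="lowpass", fs=fs, output="sos")
    out = np.zeros(len(x))
    for lo, hi in zip(edges[:-1], edges[1:]):
        band_sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        band = sosfiltfilt(band_sos, x)                    # analysis band
        env = sosfiltfilt(env_sos, np.maximum(band, 0.0))  # rectify + LPF
        env = np.maximum(env, 0.0)                         # clip filter ripple
        out += sosfiltfilt(band_sos, carrier) * env        # modulated noise band
    return out
```

Lowering env_lpf_hz toward 10 Hz removes the faster envelope fluctuations, which is how the study restricted the available temporal modulation rates.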
Aided Hearing Moderates the Academic Outcomes of Children With Mild to Severe Hearing Loss
Objectives: There are very limited data regarding the spoken language and academic outcomes of children with mild to severe hearing loss (HL) during the elementary school years, and the findings of the existing studies are inconsistent. None of these studies has examined the possible role of aided hearing in these outcomes. This study used a large cohort of children to examine these outcomes and, in particular, to examine whether aided hearing moderates the effect of HL with regard to these outcomes. Design: Spoken language, reading, writing, and calculation abilities were measured after the completion of second and fourth grades in children with mild to severe HL (children who are hard of hearing; CHH, n = 183) and a group of children with normal hearing (CNH, n = 91). Also, among the CHH who wore hearing aids, aided better-ear speech intelligibility index values at the age of school entry were obtained. Results: Oral language and reading abilities of the CHH with mild and moderate HL were similar to those of the CNH at each grade. Children with moderately-severe HL (better-ear pure-tone threshold >59 but <76 dB HL) had significantly poorer oral language and reading skills than the CNH at each grade. No differences were found between the CHH, regardless of severity, and the CNH with regard to spelling, passage writing, or calculation. The degree to which hearing aids provided audible speech information played a moderating role in the oral language outcomes of the CHH, and this moderation of language mediated the relationship between the unaided hearing ability of the CHH and their academic outcomes. Conclusions: As a group, children with mild and moderate HL have good outcomes with regard to language and academic performance. Children with moderately-severe losses were less skilled in language and reading than the CNH and the CHH with mild and moderate losses. Audibility provided by hearing aids was found to moderate the effects of HL with respect to these outcomes. These findings emphasize the importance of including the effects of clinical interventions such as aided hearing when examining outcomes of CHH.

ACKNOWLEDGMENTS: The authors thank the families and children for their willingness to participate in the study. We note the important contributions of Marlea O’Brien in project management. The authors have no conflicts of interest to declare. Dr. Tomblin had full access to the data and takes responsibility for the integrity of the data and the accuracy of the analysis. Statistical analysis and interpretation of the data: Tomblin and Oleson. Study concept and design: Tomblin, Oleson, Ambrose, Walker, McCreery, Moeller (i.e., all authors, led by Tomblin). Drafting and revision of manuscript: all authors. Obtained funding and administration: Tomblin and Moeller. This research was supported by the following grants from the NIH-NIDCD: R01DC009560 and R01DC013591. The content of this project is solely the responsibility of the authors and does not necessarily represent the official views of the National Institute on Deafness and Other Communication Disorders of the National Institutes of Health. The authors had full editorial control of this work and manuscript. Received February 21, 2019; accepted September 11, 2019. Address for correspondence: J. Bruce Tomblin, Department of Communication Sciences and Disorders, 250 Hawkins Drive, University of Iowa, Iowa City, IA 52252. E-mail: j-tomblin@uiowa.edu Copyright © 2020 Wolters Kluwer Health, Inc. All rights reserved.
Listening Difficulties of Children With Cochlear Implants in Mainstream Secondary Education
Objectives: Previous research has shown that children with cochlear implants (CIs) encounter more communication difficulties than their normal-hearing (NH) peers in kindergarten and elementary schools. Yet, little is known about the potential listening difficulties that children with CIs may experience during secondary education. The aim of this study was to investigate the listening difficulties of children with a CI in mainstream secondary education and to compare these results with the difficulties of their NH peers and the difficulties observed by their teachers. Design: The Dutch version of the Listening Inventory for Education Revised (LIFE-R) was administered to 19 children (mean age = 13 years 9 months; SD = 9 months) who received a CI early in life, to their NH classmates (n = 239), and to their teachers (n = 18). All participants were enrolled in mainstream secondary education in Flanders (first to fourth grades). The Listening Inventory for Secondary Education consists of 15 typical listening situations as experienced by students (LIFEstudent) during class activities (LIFEclass) and during social activities at school (LIFEsocial). The teachers completed a separate version of the Listening Inventory for Secondary Education (LIFEteacher) and the Screening Instrument for Targeting Educational Risk. Results: Participants with CIs reported significantly more listening difficulties than their NH peers. A regression model estimated that 75% of the participants with CIs were at risk of experiencing listening difficulties. The chances of experiencing listening difficulties were significantly higher for participants with CIs in 7 out of 15 listening situations. The 3 listening situations with the highest chance of resulting in listening difficulties were (1) listening during group work, (2) listening to multimedia, and (3) listening in large-sized classrooms. Results of the teachers’ questionnaires (LIFEteacher and Screening Instrument for Targeting Educational Risk) did not show a similar significant difference in listening difficulties between participants with a CI and their NH peers. According to teachers, NH participants even obtained significantly lower scores for staying on task and for participation in class than participants with a CI. Conclusions: Although children with a CI seemingly fit in well in mainstream schools, they still experience significantly more listening difficulties than their NH peers. Low signal to noise ratios (SNRs), distortions of the speech signal (multimedia, reverberation), distance, lack of visual support, and directivity effects of the microphones were identified as difficulties for children with a CI in the classroom. As teachers may not always notice these listening difficulties, a list of practical recommendations is provided in this study to raise awareness among teachers and to minimize the difficulties.

ACKNOWLEDGMENTS: We would like to thank four students of the Master’s program in Speech Language and Hearing Sciences of Ghent University who contributed to this work within the context of their Master’s theses. We are grateful for the assistance of Lise Danau and Heleen Pieters during the interviews with teachers and students (2015–2016). Special thanks are extended to Cloë De Moor and Ine Verhelst, who helped out with the data collection (school visits) and data input (2017–2018). Stefanie Krijger (first author) is currently receiving a grant from the Research Foundation Flanders (FWO), Belgium (PhD Fellowship, 11Y4116N). Supplemental digital content is available for this article. Direct URL citations appear in the printed text and are provided in the HTML and text of this article on the journal’s Web site (www.ear-hearing.com). The authors have no conflicts of interest to disclose. Received July 23, 2018; accepted November 17, 2019. Address for correspondence: Stefanie Krijger, Department of Otorhinolaryngology, Ghent University, C. Heymanslaan 10, 9000 Gent, Belgium. E-mail: stefanie.krijger@ugent.be Copyright © 2020 Wolters Kluwer Health, Inc. All rights reserved.
Effect of Cochlear Implantation on Vestibular Evoked Myogenic Potentials and Wideband Acoustic Immittance
Objectives: The objective of this study was to determine whether absent air conduction stimuli vestibular evoked myogenic potential (VEMP) responses found in ears after cochlear implantation can be the result of alterations in peripheral auditory mechanics rather than vestibular loss. Peripheral mechanical changes were investigated by comparing the response rates of air and bone conduction VEMPs, as well as by measuring and evaluating wideband acoustic immittance (WAI) responses, in ears with cochlear implants and normal-hearing control ears. The hypothesis was that the presence of a cochlear implant can lead to an air-bone gap, causing absent air conduction stimuli VEMP responses but present bone conduction vibration VEMP responses (indicating normal vestibular function), with changes in WAI as compared with ears with normal hearing. Further hypotheses were that subsets of ears with cochlear implants would (a) have present VEMP responses to both stimuli, indicating normal vestibular function, and either normal or near-normal WAI, or (b) have absent VEMP responses to both stimuli, regardless of WAI, due to true vestibular loss. Design: Twenty-seven ears with cochlear implants (age range 7 to 31) and 10 ears with normal hearing (age range 7 to 31) were included in the study. All ears completed otoscopy, audiometric testing, 226 Hz tympanometry, WAI measures (absorbance), air conduction stimuli cervical and ocular VEMP testing through insert earphones, and bone conduction vibration cervical and ocular VEMP testing with a mini-shaker. Comparisons of VEMP responses to air and bone conduction stimuli, as well as of absorbance responses between ears with normal hearing and ears with cochlear implants, were completed. Results: All ears with normal hearing demonstrated 100% present VEMP response rates for both stimuli. Ears with cochlear implants had higher response rates to bone conduction vibration than to air conduction stimuli for both cervical and ocular VEMPs; however, this difference was only significant for ocular VEMPs. Ears with cochlear implants demonstrated reduced low-frequency absorbance (500 to 1200 Hz) as compared with ears with normal hearing. To further analyze absorbance, ears with cochlear implants were placed into subgroups based on their cervical and ocular VEMP response patterns. These groups were (1) present air conduction stimuli response, present bone conduction vibration response; (2) absent air conduction stimuli response, present bone conduction vibration response; and (3) absent air conduction stimuli response, absent bone conduction vibration response. For both cervical and ocular VEMPs, the group with absent air conduction stimuli responses and present bone conduction vibration responses demonstrated the largest decrease in low-frequency absorbance as compared with the ears with normal hearing. Conclusions: Bone conduction VEMP response rates were increased compared with air conduction VEMP response rates in ears with cochlear implants. Ears with cochlear implants also demonstrated changes in low-frequency absorbance consistent with a stiffer system. This effect was largest for ears that had absent air conduction but present bone conduction VEMPs. These findings suggest that this group, in particular, has a mechanical change that could lead to an air-bone gap, thus abolishing the air conduction VEMP response due to an alteration in mechanics and not a true vestibular loss. Clinical considerations include using bone conduction vibration VEMPs and WAI for preoperative and postoperative testing in patients undergoing cochlear implantation.

ACKNOWLEDGMENTS: The research reported in this publication was supported by the National Institute of General Medical Sciences of the National Institutes of Health under award number P20GM109023 and by the National Institute on Deafness and Other Communication Disorders under award numbers T35 DC008757 and R03DC015318. G.R.M., K.M.S., J.N.P., and K.L.J. performed experiments, analyzed data, wrote the article, and provided revision. G.R.M., J.N.P., and K.L.J. provided statistical analysis. D.F. worked out technical details and performed numeric calculations for the suggested experiments; K.M.S. recruited participants; K.L.J. conceived the original idea; G.R.M. and K.L.J. designed and supervised the project. K.L.J. provides consulting for Natus. Received September 24, 2019; accepted November 8, 2019. Address for correspondence: Gabrielle R. Merchant, Boys Town National Research Hospital, 555 N 30th Street, Omaha, NE 68131, USA. E-mail: gabrielle.merchant@boystown.org Copyright © 2020 Wolters Kluwer Health, Inc. All rights reserved.
Audiovisual Enhancement of Speech Perception in Noise by School-Age Children Who Are Hard of Hearing
Objectives: The purpose of this study was to examine age- and hearing-related differences in school-age children’s benefit from visual speech cues. The study addressed three questions: (1) Do age and hearing loss affect the degree of audiovisual (AV) speech enhancement in school-age children? (2) Are there age- and hearing-related differences in the mechanisms underlying AV speech enhancement in school-age children? (3) What cognitive and linguistic variables predict individual differences in AV benefit among school-age children? Design: Forty-eight children between 6 and 13 years of age (19 with mild to severe sensorineural hearing loss; 29 with normal hearing) and 14 adults with normal hearing completed measures of auditory and AV syllable detection and/or sentence recognition in a two-talker masker and a spectrally matched noise. Children also completed standardized behavioral measures of receptive vocabulary, visuospatial working memory, and executive attention. Mixed linear modeling was used to examine effects of modality, listener group, and masker on sentence recognition accuracy and syllable detection thresholds. Pearson correlations were used to examine the relationship between individual differences in children’s AV enhancement (AV − auditory-only) and age, vocabulary, working memory, executive attention, and degree of hearing loss. Results: Significant AV enhancement was observed across all tasks, masker types, and listener groups. AV enhancement of sentence recognition was similar across maskers, but children with normal hearing exhibited less AV enhancement of sentence recognition than adults with normal hearing and children with hearing loss. AV enhancement of syllable detection was greater in the two-talker masker than in the noise masker but did not vary significantly across listener groups. Degree of hearing loss positively correlated with individual differences in AV benefit on the sentence recognition task in noise, but not on the detection task. None of the cognitive and linguistic variables correlated with individual differences in AV enhancement of syllable detection or sentence recognition. Conclusions: Whereas AV benefit to syllable detection results from the use of visual speech to increase temporal expectancy, AV benefit to sentence recognition requires that an observer extract phonetic information from the visual speech signal. The findings from this study suggest that all listener groups were equally good at using temporal cues in visual speech to detect auditory speech, but that adults with normal hearing and children with hearing loss were better than children with normal hearing at extracting phonetic information from the visual signal and/or using visual speech information to access phonetic/lexical representations in long-term memory. These results suggest that standard, auditory-only clinical speech recognition measures likely underestimate the real-world speech recognition skills of children with mild to severe hearing loss.

Supplemental digital content is available for this article. Direct URL citations appear in the printed text and are provided in the HTML and text of this article on the journal’s Web site (www.ear-hearing.com). ACKNOWLEDGMENTS: This study was funded by NIH-NIDCD R01 DC013591 (McCreery). We relied on the BTNRH Clinical Measurement Program for recruitment and clinical measurement. This program is supported by NIH-NIGMS P20 GM109023. Audiovisual Speech Processing Lab members (Nancy He, Jamie Petersen, and Jessica Tran) completed data collection. Meredith Spratford, Au.D., provided guidance with the experiment setup. Kaylah Lalonde currently receives funding from NIH-NIGMS P20 GM109023. Ryan McCreery is a paid consultant for the British Columbia Early Hearing Program. The authors have no conflicts of interest to disclose. Received August 14, 2019; accepted October 30, 2019. Address for correspondence: Kaylah Lalonde, Center for Hearing Research, Boys Town National Research Hospital, 555 N 30th Street, Omaha, NE 68104, USA. E-mail: kaylah.lalonde@boystown.org Copyright © 2020 Wolters Kluwer Health, Inc. All rights reserved.
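The AV enhancement measure in this abstract is a simple difference score (AV minus auditory-only) that is then correlated with listener variables. The sketch below illustrates that analysis pipeline; all arrays are synthetic and the variable names are hypothetical, used only to show the computation, not to reproduce the study’s data.

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(1)
n = 48                                   # number of children in the study
ao = rng.uniform(0.3, 0.8, n)            # synthetic auditory-only accuracy
av = np.clip(ao + rng.uniform(0.0, 0.2, n), 0.0, 1.0)  # synthetic AV accuracy
hl_db = rng.uniform(0.0, 70.0, n)        # synthetic degree of hearing loss

av_enhancement = av - ao                 # AV enhancement = AV - auditory-only
r, p = pearsonr(hl_db, av_enhancement)   # e.g., benefit vs. degree of loss
print(f"r = {r:.2f}, p = {p:.3f}")
```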
Evaluation of a Cognitive Behavioral Model of Tinnitus Distress: A Cross-Sectional Study Using Structural Equation Modeling
Objectives: There is a great deal of variation in the extent to which people with tinnitus find it distressing, which cannot be explained solely by differences in perceived loudness. The Cognitive Behavioral Model of Tinnitus Distress proposes that tinnitus becomes, and is maintained as, a distressing problem through a process of interaction between negative thoughts, negative emotions, attention and monitoring, safety behavior, and beliefs. This study used path analysis to assess how well different configurations of this model fit questionnaire data obtained from people with tinnitus. Design: This was a cross-sectional study. Three hundred forty-two members of the public with tinnitus volunteered to complete a survey comprising a series of questionnaires and subscales of questionnaires measuring each of the constructs contained within the Cognitive Behavioral Model of Tinnitus Distress. The optimum factor structure of each measure for the study population was established, and the resulting factors were used to construct a series of path models based on the theoretical model. Path analysis was conducted for each of these, and the goodness of fit of the models was assessed using established fit criteria. Results: Five of the six path models tested reached the threshold for adequate fit, and further modifications improved the fit of the three most parsimonious of these. The two best-fitting models had comparable fit indices that approached the criteria for good fit (Root Mean Square Error of Approximation = 0.061, Comparative Fit Index = 0.984, Tucker-Lewis Index = 0.970; and Root Mean Square Error of Approximation = 0.055, Comparative Fit Index = 0.993, Tucker-Lewis Index = 0.982). They differed principally in the placement of tinnitus magnitude and the inclusion/noninclusion of control beliefs. Conclusions: There are theoretical arguments to support both a beliefs-driven and a loudness-driven model, and it may be that different configurations of the Cognitive Behavioral Model of Tinnitus Distress are more appropriate for different groups of people with tinnitus. Further investigation of this is needed. This notwithstanding, the present study provides empirical support for a model of tinnitus distress that offers a clinical framework for the development of more effective psychological therapy.

Supplemental digital content is available for this article. Direct URL citations appear in the printed text and are provided in the HTML and text of this article on the journal’s Web site (www.ear-hearing.com). ACKNOWLEDGMENTS: The authors would like to thank the British Tinnitus Association for funding this study. Deborah Hall is a National Institute for Health Research (NIHR) senior investigator. The authors have no conflicts of interest to disclose. Received January 25, 2019; accepted October 18, 2019. Address for correspondence: Lucy Handscomb, UCL Ear Institute, 332 Gray’s Inn Road, London WC1X 8EE, United Kingdom. E-mail: l.handscomb@ucl.ac.uk Copyright © 2020 Wolters Kluwer Health, Inc. All rights reserved.
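The fit indices quoted above (RMSEA, CFI, TLI) are conventionally judged against published cutoffs. The sketch below checks the two best-fitting models’ reported values against one common convention (roughly RMSEA ≤ 0.06, CFI ≥ 0.95, TLI ≥ 0.95, after Hu & Bentler 1999); the abstract does not state which thresholds the authors applied, so these cutoffs are an assumption.

```python
# Commonly cited cutoffs for "good" fit; an assumption of this sketch,
# not necessarily the criteria used in the study.
GOOD_FIT = {"rmsea": 0.06, "cfi": 0.95, "tli": 0.95}

def check_fit(rmsea, cfi, tli):
    return {
        "rmsea_ok": rmsea <= GOOD_FIT["rmsea"],  # lower is better
        "cfi_ok": cfi >= GOOD_FIT["cfi"],        # higher is better
        "tli_ok": tli >= GOOD_FIT["tli"],        # higher is better
    }

# The two best-fitting models reported in the abstract:
print(check_fit(rmsea=0.061, cfi=0.984, tli=0.970))
print(check_fit(rmsea=0.055, cfi=0.993, tli=0.982))
```

Run on the reported values, this shows why the abstract says the models "approached" rather than met the good-fit criteria: both CFI and TLI clear their cutoffs, while the RMSEA of the first model sits just above 0.06.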
Electrophysiological Estimates of the Electrode–Neuron Interface Differ Between Younger and Older Listeners With Cochlear Implants
Objectives: The primary objective of this study was to quantify differences in evoked potential correlates of spiral ganglion neuron (SGN) density between younger and older individuals with cochlear implants (CIs) using the electrically evoked compound action potential (ECAP). In human temporal bone studies and in animal models, SGN density is lowest in older subjects and in those who experienced long durations of deafness during life. SGN density also varies as a function of age at implantation and hearing loss etiology. Taken together, it is likely that younger listeners who were deafened and implanted during childhood have denser populations of SGNs than older individuals who were deafened and implanted later in life. In animals, ECAP amplitudes, amplitude growth function (AGF) slopes, and their sensitivity to stimulus interphase gap (IPG) are predictive of SGN density. The authors hypothesized that younger listeners who were deafened and implanted as children would demonstrate larger ECAP amplitudes, steeper AGF slopes, and greater IPG sensitivity than older, adult-deafened and implanted listeners. Design: Data were obtained from 22 implanted ears (18 individuals). Thirteen ears (9 individuals) were deafened and implanted as children (child-implanted group), and nine ears (9 individuals) were deafened and implanted as adults (adult-implanted group). The groups differed significantly on a number of demographic variables that are implicitly related to SGN density: (1) chronological age; (2) age at implantation; and (3) duration of preimplantation hearing loss. ECAP amplitudes, AGF linear slopes, and thresholds were assessed on a subset of electrodes in each ear in response to two IPGs (7 and 30 µsec). Speech recognition was assessed using a medial vowel identification task. Results: Compared with the adult-implanted listeners, individuals in the child-implanted group demonstrated larger changes in ECAP amplitude when the IPG of the stimulus was increased from 7 to 30 µsec (i.e., greater IPG sensitivity). On average, child-implanted participants also had larger ECAP amplitudes and steeper AGF linear slopes than the adult-implanted participants, irrespective of IPG. IPG sensitivity for AGF linear slope and ECAP threshold did not differ between age groups. Vowel recognition performance was not correlated with any of the ECAP measures assessed in this study. Conclusions: The results of this study support the theory that young CI listeners who were deafened and implanted during childhood may have denser neural populations than older listeners who were deafened and implanted as adults. Potential between-group differences in SGN integrity emphasize a need to investigate optimized CI programming parameters for younger and older listeners.

Supplemental digital content is available for this article. Direct URL citations appear in the printed text and are provided in the HTML and text of this article on the journal’s Web site (www.ear-hearing.com). ACKNOWLEDGMENTS: We thank our devoted participants for their time, Molly Bergan for assistance with data collection, and Kerri Corkrum and Wendy Parkinson for assistance with participant recruitment. This work was supported by the National Institutes of Health National Institute on Deafness and Other Communication Disorders Grant Nos. R01 DC012142 (to J.G.A.) and T32 DC005361 (to K.N.J.; principal investigator: Perkel). The authors have no conflicts of interest to disclose. Received April 18, 2019; accepted October 24, 2019. Address for correspondence: Kelly N. Jahn, Eaton-Peabody Laboratories, Massachusetts Eye and Ear Infirmary, 243 Charles Street, Boston, MA 02114, USA. E-mail: kelly_jahn@meei.harvard.edu. Copyright © 2020 Wolters Kluwer Health, Inc. All rights reserved.
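Two of the ECAP metrics in this study, the AGF linear slope and its IPG sensitivity, reduce to simple computations once amplitudes are measured: a least-squares line fit of amplitude against stimulus level, and a difference between the two IPG conditions. A sketch on hypothetical values follows; the current levels and amplitudes are illustrative only, and computing IPG sensitivity as a plain slope difference is an assumption of the sketch, not a confirmed detail of the study’s analysis.

```python
import numpy as np

def agf_linear_slope(levels, amplitudes):
    """Slope of the ECAP amplitude growth function from a least-squares
    linear fit of amplitude against stimulus level."""
    slope, _intercept = np.polyfit(levels, amplitudes, 1)
    return slope

# Hypothetical amplitudes (µV) at increasing current levels for the two
# IPGs used in the study (7 and 30 µsec); values are illustrative only.
levels = np.array([180, 190, 200, 210, 220])   # arbitrary current units
amp_ipg7 = np.array([50.0, 120.0, 210.0, 300.0, 390.0])
amp_ipg30 = np.array([70.0, 160.0, 270.0, 380.0, 490.0])

ipg_sensitivity = (agf_linear_slope(levels, amp_ipg30)
                   - agf_linear_slope(levels, amp_ipg7))
print(f"IPG sensitivity (slope difference): {ipg_sensitivity:.2f} µV/unit")
```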
Speech Understanding With Bimodal Stimulation Is Determined by Monaural Signal to Noise Ratios: No Binaural Cue Processing Involved
Objectives: To investigate the mechanisms behind binaural and spatial effects in speech understanding for bimodal cochlear implant listeners; in particular, to test our hypothesis that their speech understanding can be characterized by means of monaural signal to noise ratios, rather than complex binaural cue processing such as binaural unmasking. Design: We applied a semantic framework to characterize binaural and spatial effects in speech understanding on an extensive selection of the literature on bimodal listeners. In addition, we performed two experiments in which we measured speech understanding in different masker types: (1) using head-related transfer functions, and (2) while adapting the broadband signal to noise ratios in both ears independently. We simulated bimodal hearing with a vocoder in one ear (the cochlear implant side) and a low-pass filter in the other ear (the hearing aid side). By design, the cochlear implant side was the main contributor to speech understanding in our simulation. Results: We found that spatial release from masking can be explained as a simple trade-off between a monaural change in signal to noise ratio at the cochlear implant side (quantified as the head shadow effect) and an opposite change in signal to noise ratio at the hearing aid side (quantified as a change in bimodal benefit). In simulated bimodal listeners, we found that for every 1 dB increase in signal to noise ratio at the hearing aid side, the bimodal benefit improved by approximately 0.4 dB in signal to noise ratio. Conclusions: Although complex binaural cue processing is often implicated when discussing speech intelligibility in adverse listening conditions, performance in bimodal listeners can be explained simply on the basis of monaural signal to noise ratios.

ACKNOWLEDGMENTS: We thank Hanne Meulemans and Dorien Vandevenne for their help in both experiments. This research is funded by the Research Foundation – Flanders (SB PhD fellowship at Fonds Wetenschappelijk Onderzoek (FWO)), project 1S45817N; it is jointly funded by Cochlear Ltd. and Flanders Innovation & Entrepreneurship (formerly Agentschap voor Innovatie door Wetenschap en Technologie (IWT)), project 150432; and it has also received funding from the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement no. 637424, ERC Starting Grant to T.F.). B.D. and T.F. designed the experiments, analyzed the data, and wrote the article. The authors have no conflicts of interest to disclose. Received December 30, 2018; accepted November 11, 2019. Address for correspondence: Tom Francart, Experimental Oto-rhino-laryngology, Department of Neurosciences, KU Leuven – University of Leuven, Herestraat 49 bus 721, 3000 Leuven, Belgium. E-mail: tom.francart@med.kuleuven.be Copyright © 2020 Wolters Kluwer Health, Inc. All rights reserved.
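The trade-off reported in the Results gives a simple linear rule of thumb: each 1 dB of SNR gained at the hearing aid side improved the bimodal benefit by roughly 0.4 dB in these simulated listeners. A tiny sketch of that relation follows; treating the slope as linear over arbitrary SNR changes is an assumption of the sketch, not a claim of the study.

```python
def predicted_bimodal_benefit_change(delta_snr_ha_db, slope_db_per_db=0.4):
    """Change in bimodal benefit (dB) predicted from a monaural SNR change
    at the hearing aid side, using the ~0.4 dB/dB slope reported for the
    simulated bimodal listeners. Linear extrapolation is an assumption."""
    return slope_db_per_db * delta_snr_ha_db

# Example: moving a masker so the hearing aid side gains 5 dB of SNR
print(predicted_bimodal_benefit_change(5.0))  # -> 2.0 dB larger bimodal benefit
```

This is also why the paper frames spatial release from masking as bookkeeping of monaural SNRs: what the head shadow gives the implant side, minus what the hearing aid side loses, predicts the net spatial effect without invoking binaural unmasking.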
Recognition of Accented Speech by Cochlear-Implant Listeners: Benefit of Audiovisual Cues
Objectives: When auditory and visual speech information are presented together, listeners obtain an audiovisual (AV) benefit, a speech understanding improvement compared with auditory-only (AO) or visual-only (VO) presentations. Cochlear-implant (CI) listeners, who receive degraded speech input and therefore understand speech using primarily temporal information, seem to readily use visual cues and can achieve a larger AV benefit than normal-hearing (NH) listeners. It is unclear, however, whether the AV benefit remains relatively large for CI listeners when trying to understand foreign-accented speech as compared with unaccented speech. Accented speech can introduce changes to temporal auditory cues and visual cues, which could decrease the usefulness of AV information. Furthermore, we sought to determine whether the AV benefit was relatively larger in CI than in NH listeners for both unaccented and accented speech. Design: AV benefit was investigated for unaccented and Spanish-accented speech by presenting English sentences in AO, VO, and AV conditions to 15 CI and 15 age- and performance-matched NH listeners. Performance matching between NH and CI listeners was achieved by varying the number of channels of a noise vocoder for the NH listeners. Because of the differences in age and hearing history of the CI listeners, the effects of listener-related variables on speech understanding performance and AV benefit were also examined. Results: AV benefit was observed for both unaccented and accented conditions and for both CI and NH listeners. The two groups showed similar performance for the AO and AV conditions, and the normalized AV benefit was relatively smaller for the accented than for the unaccented conditions. In the CI listeners, older age was associated with significantly poorer performance with the accented speaker compared with the unaccented speaker. The negative impact of age was somewhat reduced by a significant improvement in performance with access to AV information. Conclusions: When auditory speech information is degraded by CI sound processing, visual cues can be used to improve speech understanding, even in the presence of a Spanish accent. The AV benefit of the CI listeners closely matched that of the NH listeners presented with vocoded speech, which was unexpected given that CI listeners appear to rely more on visual information to communicate. This result is perhaps due to the one-to-one age and performance matching of the listeners. While aging decreased CI listener performance with the accented speaker, access to visual cues boosted performance and could partially overcome the age-related speech understanding deficits for the older CI listeners.

ACKNOWLEDGMENTS: We would like to thank Jan Edwards for helpful comments on this work, Arifi Waked for her assistance in data collection, and Julie Cohen for helping set up, calibrate, and troubleshoot equipment. We would also like to thank Eric Tarr and Paul Mayo for providing access to their real-time vocoder application, and Paul Mayo for troubleshooting assistance and help with calibration measurements. Research reported in this publication was supported by the National Institute on Aging of the National Institutes of Health under Award Numbers R01AG051603 (M.J.G.) and R01AG09191 (S.G.S.). The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health. E.W. was also funded by the Maryland Summer Scholars Program. Portions of this work were presented at the 175th Meeting of the Acoustical Society of America, Minneapolis, MN, May 2018; the Association for Research in Otolaryngology 42nd MidWinter Meeting, Baltimore, MD, June 2019; and the 8th Aging and Speech Communication Conference, Clearwater, FL, November 2019. The authors have no conflicts of interest to disclose. Received April 8, 2019; accepted December 3, 2019. Address for correspondence: Matthew J. Goupell, Department of Hearing and Speech Sciences, University of Maryland, College Park, MD 20742, USA. E-mail: goupell@umd.edu Copyright © 2020 Wolters Kluwer Health, Inc. All rights reserved.
