
Thursday, 22 August 2019

Binaural Optimization of Cochlear Implants: Discarding Frequency Content Without Sacrificing Head-Shadow Benefit
Objectives: Single-sided deafness cochlear-implant (SSD-CI) listeners and bilateral cochlear-implant (BI-CI) listeners gain near-normal levels of head-shadow benefit but limited binaural benefits. One possible reason for these limited binaural benefits is that cochlear places of stimulation tend to be mismatched between the ears. SSD-CI and BI-CI patients might benefit from a binaural fitting that reallocates frequencies to reduce interaural place mismatch. However, this approach could reduce monaural speech recognition and head-shadow benefit by excluding low- or high-frequency information from one ear. This study examined how much frequency information can be excluded from a CI signal in the poorer-hearing ear without reducing head-shadow benefits and how these outcomes are influenced by interaural asymmetry in monaural speech recognition. Design: Speech-recognition thresholds for sentences in speech-shaped noise were measured for 6 adult SSD-CI listeners, 12 BI-CI listeners, and 9 normal-hearing listeners presented with vocoder simulations. Stimuli were presented using nonindividualized in-the-ear or behind-the-ear head-related impulse-response simulations with speech presented from a 70° azimuth (poorer-hearing side) and noise from −70° (better-hearing side), thereby yielding a better signal-to-noise ratio (SNR) at the poorer-hearing ear. Head-shadow benefit was computed as the improvement in bilateral speech-recognition thresholds gained from enabling the CI in the poorer-hearing, better-SNR ear. High- or low-pass filtering was systematically applied to the head-related impulse-response–filtered stimuli presented to the poorer-hearing ear. For the SSD-CI listeners and SSD-vocoder simulations, only high-pass filtering was applied, because the CI frequency allocation would never need to be adjusted downward to frequency-match the ears. For the BI-CI listeners and BI-vocoder simulations, both low- and high-pass filtering were applied.
The normal-hearing listeners were tested with two levels of performance to examine the effect of interaural asymmetry in monaural speech recognition (vocoder synthesis-filter slopes: 5 or 20 dB/octave). Results: Mean head-shadow benefit was smaller for the SSD-CI listeners (~7 dB) than for the BI-CI listeners (~14 dB). For SSD-CI listeners, frequencies <1236 Hz could be excluded; for BI-CI listeners, frequencies <886 or >3814 Hz could be excluded from the poorer-hearing ear without reducing head-shadow benefit. Bilateral performance showed greater immunity to filtering than monaural performance, with gradual changes in performance as a function of filter cutoff. Real and vocoder-simulated CI users with larger interaural asymmetry in monaural performance had less head-shadow benefit. Conclusions: The “exclusion frequency” ranges that could be removed without diminishing head-shadow benefit are interpreted in terms of low importance in the speech intelligibility index and a small head-shadow magnitude at low frequencies. Although groups and individuals with greater performance asymmetry gained less head-shadow benefit, the magnitudes of these factors did not predict the exclusion frequency range. Overall, these data suggest that for many SSD-CI and BI-CI listeners, the frequency allocation for the poorer-ear CI can be shifted substantially without sacrificing head-shadow benefit, at least for energetic maskers. Considering the two ears together as a single system may allow greater flexibility in discarding redundant frequency content from a CI in one ear when considering bilateral programming solutions aimed at reducing interaural frequency mismatch. ACKNOWLEDGMENTS: The authors thank Cochlear Ltd. and Med-El for providing equipment and technical support. The authors thank John Culling for providing head shadow modeling software. 
The authors thank Ginny Alexander for her assistance with subject recruitment, coordination, and payment of subjects at the University of Maryland-College Park, as well as Brian Simpson and Matt Ankrom for the recruitment, coordination, and payment of the subject panel at the Air Force Research Laboratory. The research reported was supported by the National Institute on Deafness and Other Communication Disorders of the National Institutes of Health under Award Number R01DC015798 (J.G.W.B. and M.J.G.). The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health. The identification of specific products or scientific instrumentation does not constitute endorsement or implied endorsement on the part of the author, DoD, or any component agency. The views expressed in this article are those of the authors and do not reflect the official policy of the Department of Army/Navy/Air Force, Department of Defense, or U.S. Government. Portions of these data were presented at the 2017 Midwinter Meeting of the Association for Research in Otolaryngology, Baltimore, MD, and the 2017 Conference on Implantable Auditory Prostheses, Tahoe City, CA. The authors have no conflicts of interest to disclose. Received April 2, 2018; accepted June 25, 2019. Address for correspondence: Sterling W. Sheffield, Department of Speech, Language and Hearing Sciences, University of Florida, 1225 Center Drive, Room 2130, Gainesville, FL 32610, USA. E-mail: s.sheffield@phhp.ufl.edu Copyright © 2019 Wolters Kluwer Health, Inc. All rights reserved.
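The head-shadow benefit described in the abstract above is simply the difference between the speech-recognition threshold (SRT) with the better-SNR ear disabled and the SRT with both ears enabled. A minimal sketch of that computation, using invented SRT values that loosely echo the reported group means (not data from the study):

```python
# Hypothetical sketch: head-shadow benefit is the improvement (in dB SNR)
# in speech-recognition threshold gained by enabling the CI in the
# poorer-hearing ear, which has the better SNR. All numbers are invented.

def head_shadow_benefit(srt_one_ear, srt_bilateral):
    """Benefit in dB: a lower (better) bilateral SRT yields a positive benefit."""
    return srt_one_ear - srt_bilateral

# Invented example values, roughly matching the ~7 dB (SSD-CI)
# and ~14 dB (BI-CI) mean benefits reported above:
ssd_ci = head_shadow_benefit(srt_one_ear=2.0, srt_bilateral=-5.0)    # -> 7.0 dB
bi_ci = head_shadow_benefit(srt_one_ear=4.0, srt_bilateral=-10.0)    # -> 14.0 dB
```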
Vocal Turn-Taking Between Mothers and Their Children With Cochlear Implants
Objectives: The primary objective of the study was to examine the occurrence and temporal structure of vocal turn-taking during spontaneous interactions between mothers and their children with cochlear implants (CI) over the first year after cochlear implantation as compared with interactions between mothers and children with normal hearing (NH). Design: Mothers’ unstructured play sessions with children with CI (n = 12) were recorded at 2 time points, 3 months (mean age 18.3 months) and 9 months (mean age 27.5 months) post-CI. A separate control group of mothers with age-matched hearing children (n = 12) was recorded at the same 2 time points. Five types of events were coded: mother and child vocalizations, vocalizations including speech overlap, and between- and within-speaker pauses. We analyzed the proportion of child and mother vocalizations involved in turn-taking, the temporal structure of turn-taking, and the temporal reciprocity of turn-taking using proportions of simultaneous speech and the duration of between- and within-speaker pauses. Results: The CI group produced a significantly smaller proportion of vocalizations in turn-taking than the NH group at the first session; however, CI children’s proportion of vocalizations in turn-taking increased over time. There was a significantly larger proportion of simultaneous speech in the CI compared with the NH group at the first session. The CI group produced longer between-speaker pauses as compared with those in the NH group at the first session with mothers decreasing the duration of between-speaker pauses over time. NH infants and mothers in both groups produced longer within- than between-speaker pauses but CI infants demonstrated the opposite pattern. In addition, the duration of mothers’ between-speaker pauses (CI and NH) was predicted by the duration of the infants’ between-speaker pauses. 
Conclusions: Vocal turn-taking and timing in both members of the dyad, the mother and infant, were sensitive to the experiential effects of child hearing loss and remediation with CI. Child hearing status affected dyad-specific coordination in the timing of responses between mothers and their children. Supplemental digital content is available for this article. Direct URL citations appear in the printed text and are provided in the HTML and text of this article on the journal’s Web site (www.ear-hearing.com). ACKNOWLEDGMENTS: The authors thank all the families who participated in this study, the audiologists at the different sites for their help with recruiting the families, and all the research assistants and staff members for their help with gathering and analyzing the data. This research was supported by the National Institutes of Health, National Institute on Deafness and Other Communication Disorders (NIH-NIDCD) Research Grant 5R01DC008581-08 to D.M. Houston and L. Dilley and NIH-NIDCD grant R01DC008581 to T. Bergeson. The authors have no conflicts of interest to disclose. Received November 27, 2018; accepted May 27, 2019. Address for correspondence: Maria V. Kondaurova, Department of Psychological & Brain Sciences, University of Louisville, 301 Life Sciences Building, Louisville, KY 40292, USA. E-mail: maria.kondaurova@louisville.edu Copyright © 2019 Wolters Kluwer Health, Inc. All rights reserved.
Improving Cochlear Implant Performance in the Wind Through Spectral Masking Release: A Multi-microphone and Multichannel Strategy
Objectives: Adopting the omnidirectional microphone (OMNI) mode and reducing low-frequency gain are the two most commonly used wind noise reduction strategies in hearing devices. The objective of this study was to compare the effectiveness of these two strategies on cochlear implant users’ speech-understanding abilities and perceived sound quality in wind noise. We also examined the effectiveness of a new strategy that adopts the microphone mode with the lower wind noise level in each frequency channel. Design: A behind-the-ear digital hearing aid with multiple microphone modes was used to record testing materials for cochlear implant participants. It was adjusted to have linear amplification and a flat frequency response when worn on a Knowles Electronics Manikin for Acoustic Research, to remove the head-related transfer function of the manikin and to mimic typical microphone characteristics of hearing devices. Recordings of wind noise samples and hearing-in-noise test sentences were made when the hearing aid was programmed to four microphone modes, namely (1) OMNI; (2) adaptive directional microphone (ADM); (3) ADM with low-frequency roll-off; and (4) a combination of omnidirectional and directional microphone (COMBO). Wind noise samples were recorded in an acoustically treated wind tunnel from 0° to 360° in 10° increments at wind velocities of 4.5, 9.0, and 13.5 m/s when the hearing aid was worn on the manikin. Two wind noise samples recorded at 90° and 300° head angles at the wind velocity of 9.0 m/s were chosen to take advantage of the spectral masking release effects of COMBO. The samples were then mixed with the sentences recorded using identical settings. Cochlear implant participants listened to the speech-in-wind testing materials, repeated the sentences, and compared overall sound quality preferences of different microphone modes using a paired-comparison categorical rating paradigm. The participants also rated their preferences of wind-only samples.
Results: COMBO yielded the highest speech recognition scores among the four microphone modes, and it was also preferred the most often, likely due to the reduction of spectral masking. The speech recognition scores generated using ADM with low-frequency roll-off were either equal to or lower than those obtained using ADM because gain reduction decreased not only the level of wind noise but also the low-frequency energy of speech. OMNI consistently yielded speech recognition scores lower than COMBO, and it was often rated as less preferable than other microphone modes, suggesting the conventional strategy of switching to the omnidirectional mode in the wind was undesirable. Conclusions: Neither adopting an OMNI nor reducing low-frequency gain generated higher speech recognition scores or higher sound quality ratings than COMBO. Adopting the microphone with the lower wind noise level in different frequency channels can provide spectral masking release, and it is a more effective wind noise reduction strategy. The natural 6 dB/octave low-frequency roll-off of first-order directional microphones should be compensated when speech is present. Signal detection and decision rules for wind noise reduction applications are discussed in hearing devices with and without binaural transmission capability. ACKNOWLEDGMENTS: The author thanks Lance Nelson and Melissa Teske Dunn for data collection and Jens Balslev, Peter Nopp, Nick McKibben, Drs. Ernst Aschbacher and Kaibao Nie, and the staff at the Herrick Laboratories for technical support. This study was funded by the Oticon Foundation and Med-El Corporation. The author was solely responsible for the design of the study procedures and the contents presented in this article. The author has a United States Patent (8942815) - Enhancing cochlear implants with hearing aid signal processing technologies.
The author has no conflicts of interest to declare. Received February 2, 2017; accepted May 27, 2019. Address for correspondence: King Chung, Department of Allied Health and Communicative Disorders, Northern Illinois University, DeKalb, IL 60115, USA. E-mail: kchung@niu.edu Copyright © 2019 Wolters Kluwer Health, Inc. All rights reserved.
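The core idea behind the COMBO strategy described above is a per-channel decision: in each frequency channel, use whichever microphone mode exhibits the lower wind-noise level. A minimal sketch of that selection rule, with invented channel levels (directional microphones typically pick up more low-frequency wind noise, so the low channels fall to OMNI and the high channels to the directional mode):

```python
# Hypothetical sketch of the per-channel selection behind COMBO:
# in each frequency channel, pick the microphone mode whose estimated
# wind-noise level (dB) is lower. Channel levels below are invented.

def combo_select(omni_wind_db, dir_wind_db):
    """Return, per channel, which mode ('omni' or 'dir') has less wind noise."""
    return ["omni" if o < d else "dir" for o, d in zip(omni_wind_db, dir_wind_db)]

# Four invented channels, low to high frequency: wind noise dominates the
# directional mic at low frequencies, so the choice flips across channels.
choice = combo_select([55, 50, 40, 35], [70, 60, 38, 30])
# -> ['omni', 'omni', 'dir', 'dir']
```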
Improving Sensitivity of the Digits-In-Noise Test Using Antiphasic Stimuli
Objectives: The digits-in-noise test (DIN) has become increasingly popular as a consumer-based method to screen for hearing loss. Current versions of all DINs either test ears monaurally or present identical stimuli binaurally (i.e., diotic noise and speech, NoSo). Unfortunately, presentation of identical stimuli to each ear inhibits detection of unilateral sensorineural hearing loss (SNHL), and neither diotic nor monaural presentation sensitively detects conductive hearing loss (CHL). After an earlier finding of enhanced sensitivity in normally hearing listeners, this study tested the hypothesis that interaural antiphasic digit presentation (NoSπ) would improve sensitivity to hearing loss caused by unilateral or asymmetric SNHL, symmetric SNHL, or CHL. Design: This cross-sectional study recruited adults (18 to 84 years) with various levels of hearing based on a 4-frequency pure-tone average (PTA) at 0.5, 1, 2, and 4 kHz. The study sample comprised listeners with normal hearing (n = 41; PTA ≤ 25 dB HL in both ears), symmetric SNHL (n = 57; PTA > 25 dB HL), unilateral or asymmetric SNHL (n = 24; PTA > 25 dB HL in the poorer ear), and CHL (n = 23; PTA > 25 dB HL and PTA air-bone gap ≥ 20 dB HL in the poorer ear). Antiphasic and diotic speech reception thresholds (SRTs) were compared using a repeated-measures design. Results: Antiphasic DIN was significantly more sensitive to all three forms of hearing loss than the diotic DIN. SRT test–retest reliability was high for all tests (intraclass correlation coefficient r > 0.89). Area under the receiver operating characteristics curve for detection of hearing loss (>25 dB HL) was higher for antiphasic DIN (0.94) than for diotic DIN (0.77) presentation. After correcting for age, PTA of listeners with normal hearing or symmetric SNHL was more strongly correlated with antiphasic (rpartial[96] = 0.69) than diotic (rpartial = 0.54) SRTs.
Slope of fitted regression lines predicting SRT from PTA was significantly steeper for antiphasic than diotic DIN. For listeners with normal hearing or CHL, antiphasic SRTs were more strongly correlated with PTA (rpartial[62] = 0.92) than diotic SRTs (rpartial[62] = 0.64). Slope of the regression line with PTA was also significantly steeper for antiphasic than diotic DIN. The severity of asymmetric hearing loss (poorer ear PTA) was unrelated to SRT. No effect of self-reported English competence on either antiphasic or diotic DIN among the mixed first-language participants was observed. Conclusions: Antiphasic digit presentation markedly improved the sensitivity of the DIN test to detect SNHL, either symmetric or asymmetric, while keeping test duration to a minimum by testing binaurally. In addition, the antiphasic DIN was able to detect CHL, a shortcoming of previous monaural or binaurally diotic DIN versions. The antiphasic DIN is thus a powerful tool for population-based screening. This enhanced functionality combined with smartphone delivery could make the antiphasic DIN suitable as a primary screen that is accessible to a large global audience. Supplemental digital content is available for this article. Direct URL citations appear in the printed text and are provided in the HTML and text of this article on the journal’s Web site (www.ear-hearing.com). ACKNOWLEDGMENTS: The authors thank all the participants of this study, Steve Biko Academic Hospital, and all participating private practices for their assistance with data collection. The authors thank Li Lin for assistance with data analysis. This research was funded by the National Institute on Deafness and Other Communication Disorders of the National Institutes of Health under Award Number 5R21DC016241-02. Additional funding support was obtained from the National Research Foundation (Grant PR_CSRP190208414782).
DWS, DRM, and HCM have relationships with the hearX Group and hearZA that include equity, consulting, and potential royalties. DRM is supported by Cincinnati Children’s Research Foundation and by the National Institute for Health Research Manchester Biomedical Research Centre. The remaining authors have no conflicts of interest to disclose. Received January 4, 2019; accepted June 4, 2019. Address for correspondence: Cas Smits, Amsterdam UMC, Vrije Universiteit, Department of Otolaryngology-Head and Neck Surgery, Ear and Hearing, Amsterdam Public Health Research Institute, De Boelelaan 1117, Amsterdam, The Netherlands. E-mail: c.smits@vumc.nl This is an open access article distributed under the Creative Commons Attribution License 4.0 (CCBY), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. Copyright © 2019 Wolters Kluwer Health, Inc. All rights reserved.
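The antiphasic (NoSπ) condition described above keeps the noise identical in both ears while polarity-inverting the digit waveform in one ear; the diotic (NoSo) condition presents identical mixtures to both ears. A minimal sketch of how the two presentations differ, using toy sample lists rather than real digit recordings:

```python
# Hypothetical sketch of diotic (NoSo) vs. antiphasic (NoSpi) presentation:
# the noise is identical (diotic) in both ears; in the antiphasic condition
# the speech waveform is polarity-inverted in one ear. Toy samples only.

def mix(speech, noise, antiphasic=False):
    """Return (left, right) ear signals: speech + diotic noise."""
    left = [s + n for s, n in zip(speech, noise)]
    sign = -1.0 if antiphasic else 1.0
    right = [sign * s + n for s, n in zip(speech, noise)]
    return left, right

speech = [0.1, -0.2, 0.3]
noise = [0.05, 0.05, 0.05]
noso = mix(speech, noise)                   # identical signal in both ears
nospi = mix(speech, noise, antiphasic=True) # speech inverted in the right ear
```

The interaural phase difference introduced for the speech (but not the noise) is what produces the binaural unmasking that makes the antiphasic test more sensitive.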
Perspective on the Development of a Large-Scale Clinical Data Repository for Pediatric Hearing Research
The use of “big data” for pediatric hearing research requires new approaches to both data collection and research methods. The widespread deployment of electronic health record systems creates new opportunities and corresponding challenges in the secondary use of large volumes of audiological and medical data. Opportunities include cost-effective hypothesis generation, rapid cohort expansion for rare conditions, and observational studies based on sample sizes in the thousands to tens of thousands. Challenges include finding and forming appropriately skilled teams, access to data, data quality assessment, and engagement with a research community new to big data. The authors share their experience and perspective on the work required to build and validate a pediatric hearing research database that integrates clinical data for over 185,000 patients from the electronic health record systems of three major academic medical centers. ACKNOWLEDGMENTS: The authors are grateful to the late Judith Gravel, Ph.D., for her efforts in the early conception and design of this project. This study was funded by the National Institute on Deafness and Other Communication Disorders Grant Number 1R24DC012207-01A1. J.W.P. oversaw informatics and technical aspects of the project and drafted the article; B.R. architected the database, implemented all software, and extracted data for CHOP; J.M.M. extracted all imaging data and implemented imaging software components; J.P. performed analysis of CHOP audiology clinic workflows and data and served as a subject matter expert on the interpretation of audiology data; B.X. performed statistical summarization and literature review for data validation; I.K. served as a subject matter expert on the interpretation of clinical and genetic data; J.M. defined data requirements, performed data quality assessment, and extracted BCH data; T.G.
performed analysis of BCH audiology clinic workflows and data and served as a subject matter expert on the interpretation of audiology data; D.S. performed analysis of BCH audiology clinic workflows and data and served as a subject matter expert on the interpretation of audiology data; M.K. served as a subject matter expert on clinical care and hearing loss research, provided scientific direction, and oversaw the extraction of BCH data; L.J.H. served as a subject matter expert on clinical care and hearing loss research, provided scientific direction, made major revisions to the article, and oversaw the extraction of VU data; J.G. served as a subject matter expert on hearing loss research, led compliance efforts, and provided scientific direction; E.B.C. oversaw the project, performed data quality assessment, provided scientific direction, and made major revisions to the article. The authors have no conflicts of interest to disclose. Received October 4, 2016; accepted June 11, 2019. Address for correspondence: E. Bryan Crenshaw III, Children’s Hospital of Philadelphia, 34th and Civic Center Blvd, Philadelphia, PA 19104, USA. E-mail: crenshaw@email.chop.edu Copyright © 2019 Wolters Kluwer Health, Inc. All rights reserved.
Quantifying the Range of Signal Modification in Clinically Fit Hearing Aids
Objectives: Hearing aids provide various signal processing techniques with a range of parameters to improve the listening experience for a hearing-impaired individual. In previous studies, we reported significant differences in signal modification for mild versus strong signal processing in commercially available hearing aids. In this study, the authors extend this work to clinically prescribed hearing aid fittings based on best-practice guidelines. The goals of this project are to determine the range of cumulative signal modification in clinically fit hearing aids across manufacturers and technology levels and the effects of listening conditions, including signal-to-noise ratio (SNR) and presentation level, on these signal modifications. Design: We identified a subset of hearing aids that were representative of a typical clinical setting. Deidentified hearing aid fitting data were obtained from three audiology clinics for adult hearing aid users with sensorineural hearing loss for a range of hearing sensitivities. Matching laboratory hearing aids were programmed with the deidentified fitting data. Output from these hearing aids was recorded at four SNRs and three presentation levels. The resulting signal modification was quantified using the cepstral correlation component of the Hearing Aid Speech Quality Index, which measures the speech envelope changes in the context of a model of the listener’s hearing loss. These metric values represent the hearing aid processed signal as it is heard by the hearing aid user. Audiometric information was used to determine the nature of any possible association with the distribution of signal modification in these clinically fit hearing aids. Results: In general, signal modification increased as SNR decreased and presentation level increased. Differences across manufacturers were significant: the effect of presentation level varied with SNR differently for each manufacturer.
This result suggests that there may be variations across manufacturers in processing various listening conditions. There was no significant effect of technology level. There was a small effect of pure-tone average on signal modification for one manufacturer, but no effect of audiogram slope. Finally, there was a broad range of measured signal modification for a given hearing loss, for the same manufacturer and listening condition. Conclusions: The signal modification values in this study are representative of commonly fit hearing aids in clinics today. The results of this study provide insights into how the range of signal modifications obtained in real clinical fittings compares with a previous study. Future studies will focus on the behavioral implications of signal modifications in clinically fit hearing aids. Supplemental digital content is available for this article. Direct URL citations appear in the printed text and are provided in the HTML and text of this article on the journal’s Web site (www.ear-hearing.com). ACKNOWLEDGMENTS: The authors thank Kailey Durkin and Sarah Mullervy for conducting the hearing aid recordings, and Diane Novak and Pauline Norton for retrieving the hearing aid fitting data from Northwestern University Center for Speech, Language, & Learning. This study was supported by the National Institutes of Health Grant R01 DC012289 (to P.S. and K.A.). Portions of these data were presented at the 2018 International Hearing Aid Conference, Lake Tahoe, California, August 17, 2018. All authors contributed equally to this study. V.R., M.A., J.K., L.S., K.A., and P.S. contributed to the experimental design. V.R. and M.A. managed the experiment and supervised the hearing aid recordings. V.R. performed statistical analysis and wrote the main article. L.B. assisted with the design and interpretation of the statistical analysis and description of the results. M.A. and J.K. also contributed portions of the article. M.A. 
managed data retrieval at the University of Colorado Hospital, and L.S. managed data retrieval at I Love Hearing. P.S., M.A., K.A., J.K., L.S., and L.B. provided critical review of the article. All authors discussed the results and implications and contributed to the final article. The authors have no conflicts of interest to disclose. Received January 7, 2019; accepted June 4, 2019. Address for correspondence: Varsha Rallapalli, Department of Communication Sciences and Disorders, Northwestern University, 2240 Campus Drive, Evanston, IL 60208, USA. E-mail: varsha.rallapalli@northwestern.edu Copyright © 2019 Wolters Kluwer Health, Inc. All rights reserved.
Health-Related Quality of Life With Cochlear Implants: The Children’s Perspective
Objectives: The objective of this study was to assess self-reported health-related quality of life (HR-QOL) in a group of children with cochlear implants (CIs) and to compare their scores to age- and gender-matched controls. The authors also assessed the agreement between proxy- and self-reported HR-QOL in the CI group and examined individual and environmental variables that could be associated with higher or lower self-reported HR-QOL in the CI group. Design: The sample consisted of 168 children between the ages of 5;6 and 13;1 (years;months), where 84 children had CIs (CI group) and 84 were age- and gender-matched controls with normal hearing (NH group). HR-QOL was assessed with the generic questionnaire Pediatric Quality of Life Inventory. Parents of the children in the CI group completed the same questionnaire as the children. In addition, the children in the CI group completed tests of language, hearing, and nonverbal I.Q., and background variables such as age at implantation and socioeconomic status were assessed. Results: On average, children with CIs rated their HR-QOL lower than peers with normal hearing on school functioning, social functioning, and overall HR-QOL. A higher percentage of children with CIs reported low levels of HR-QOL than did those in the NH group, 27% and 12%, respectively. The differences between groups were small, and fewer children than parents reported concerningly low HR-QOLs. Better spoken-language skills and older age at the time of testing were associated with better HR-QOL. Conclusions: Most children with CIs in this study reported HR-QOLs that were close to those of their age- and gender-matched normal-hearing peers. The children, however, reported concerns about social and school functioning, indicating that these areas require more attention to ensure children with CIs have good HR-QOL. Improving spoken-language skills in children with CIs may contribute to improved HR-QOL.
ACKNOWLEDGMENTS: The authors thank the children and parents who participated in the study and the schools that allowed them to recruit and test children in their facilities. The authors also thank the cochlear implant team at Oslo University Hospital for help with data collection and discussions during the writing process. The authors thank Stefan Schauber at the University of Oslo for statistical help and Janet Olds at the Children’s Hospital of Eastern Ontario for valuable input, as well as the people who helped collect data for the study, Marit Enny Gismarvik and Åsrun Valberg. The project was funded by the Norwegian Directorate of Health and was executed in collaboration with Oslo University Hospital and the University of Oslo. The Regional Committees for Medical and Health Research Ethics in Norway and the Data Protection Official at Oslo University Hospital approved the study. The authors have no conflicts of interest to disclose. Received January 3, 2019; accepted May 20, 2019. Address for correspondence: Christiane Lingås Haukedal, Department of Special Needs Education, University of Oslo, P.O. Box 1140, Blindern, 0318 Oslo, Norway. E-mail: christiane.haukedal@isp.uio.no Copyright © 2019 Wolters Kluwer Health, Inc. All rights reserved.
Links of Prosodic Stress Perception and Musical Activities to Language Skills of Children With Cochlear Implants and Normal Hearing
Objectives: A major issue in the rehabilitation of children with cochlear implants (CIs) is unexplained variance in their language skills, where many of them lag behind children with normal hearing (NH). Here, we assess links between generative language skills and the perception of prosodic stress, as well as musical and parental activities, in children with CIs and NH. Understanding these links is expected to guide future research toward supporting language development in children with a CI. Design: Twenty-one unilaterally and early-implanted children and 31 children with NH, aged 5 to 13, were classified as musically active or nonactive by a questionnaire recording the regularity of musical activities, in particular singing, and of reading and other activities shared with parents. Perception of word and sentence stress, performance in word finding, verbal intelligence (Wechsler Intelligence Scale for Children [WISC] vocabulary), and phonological awareness (production of rhymes) were measured in all children. Comparisons between children with a CI and NH were made against a subset of 21 of the children with NH who were matched to children with CIs by age, gender, socioeconomic background, and musical activity. Regression analyses, run separately for children with CIs and NH, assessed how much variance in each language task was shared with each of prosodic perception, the child’s own music activity, and activities with parents, including singing and reading. All statistical analyses were conducted both with and without control for age and maternal education. Results: Musically active children with CIs performed similarly to NH controls in all language tasks, while those who were not musically active performed more poorly. Only musically nonactive children with CIs made more phonological and semantic errors in word finding than NH controls, and word finding correlated with other language skills.
Regression analysis results for word finding and VIQ were similar for children with CIs and NH. These language skills shared considerable variance with the perception of prosodic stress and musical activities. When age and maternal education were controlled for, strong links remained between perception of prosodic stress and VIQ (shared variance: CI, 32%/NH, 16%) and between musical activities and word finding (shared variance: CI, 53%/NH, 20%). Links were always stronger for children with CIs, for whom better phonological awareness was also linked to improved stress perception and more musical activity, and parental activities altogether shared significant variance with word finding and VIQ. Conclusions: For children with CIs and NH, better perception of prosodic stress and musical activities with singing are associated with improved generative language skills. In addition, for children with CIs, parental singing has a stronger positive association to word finding and VIQ than parental reading. These results cannot address causality, but they suggest that good perception of prosodic stress, musical activities involving singing, and parental singing and reading may all be beneficial for word finding and other generative language skills in implanted children. ACKNOWLEDGMENTS: The authors thank the personnel, especially speech therapists (Nonna Virokannas, Sari Vikman, Satu Rimmanen, Teija Tsupari), of university hospital CI clinics in Helsinki, Tampere, Turku, and Kuopio, and the students who collected data. The authors also thank Professor Martti Vainio for his help with the prosodic experiments, and Professors Minna Huotilainen and Mari Tervaniemi for their help and advice. Above all, the authors thank the parents and children for their participation.
This research was funded by the Signe and Ane Gyllenberg Foundation, the Finnish Concordia Fund, the Ella and Georg Ehrnrooth Foundation, the national doctoral program Langnet, the Finnish Audiological Society, the Finnish doctoral program in language studies (funded by the Ministry of Education and Culture), and the Emil Aaltonen Foundation. R. T. was responsible for the experimental design and statistical analyses, performed most of the experiments at the University of Helsinki, and wrote the first version of the manuscript; A. F. designed the experiments on perception of word and sentence stress, participated in study design and statistical analyses, provided critical revision, and checked the English language; M. L. was responsible for supervising the students who carried out the psychological assessments and I.Q. tests; J. L. was responsible for statistical analyses and provided critical revision; D. S. commented on statistical analyses and provided critical revision. All authors discussed the results and implications and commented on the article at all stages. The authors have no conflicts of interest to declare. Received August 16, 2018; accepted May 29, 2019. Address for correspondence: Ritva Torppa, Department of Psychology and Logopedics, Faculty of Medicine, University of Helsinki, PL 21 (Haartmaninkatu 3), 00014 Helsinki, Finland. E-mail: ritva.torppa@helsinki.fi This is an open access article distributed under the Creative Commons Attribution License 4.0 (CC BY), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. Copyright © 2019 Wolters Kluwer Health, Inc. All rights reserved.
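The "shared variance" figures in the abstract above (e.g., CI, 32%; NH, 16%) are squared correlations between a predictor and a language score, computed with and without covariate control. As a minimal sketch of that idea (not the authors' actual analysis; the data, coefficients, and variable names below are invented for illustration):

```python
import numpy as np

def shared_variance(x, y):
    """Percent of variance in y shared with x (squared Pearson correlation)."""
    r = np.corrcoef(x, y)[0, 1]
    return 100 * r ** 2

def residualize(v, covariates):
    """Remove the linear effect of covariates (e.g., age) from v."""
    X = np.column_stack([np.ones(len(v))] + list(covariates))
    beta, *_ = np.linalg.lstsq(X, v, rcond=None)
    return v - X @ beta

# Synthetic data: stress perception and vocabulary both improve with age
rng = np.random.default_rng(0)
age = rng.uniform(5, 13, 200)
stress_perception = 0.5 * age + rng.normal(size=200)
vocabulary = 0.4 * stress_perception + 0.3 * age + rng.normal(size=200)

raw = shared_variance(stress_perception, vocabulary)
controlled = shared_variance(residualize(stress_perception, [age]),
                             residualize(vocabulary, [age]))
print(f"raw: {raw:.0f}%  age-controlled: {controlled:.0f}%")
```

Because age drives both variables in this toy example, the age-controlled shared variance comes out smaller than the raw figure, which is why the abstract reports both analyses.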
A Comparison of Intracochlear Pressures During Ipsilateral and Contralateral Stimulation With a Bone Conduction Implant
Objectives: To compare contralateral with ipsilateral stimulation using percutaneous and transcutaneous bone conduction implants. Background: Bone conduction implants (BCIs) effectively treat conductive and mixed hearing losses. In some cases, such as single-sided deafness, the BCI is implanted contralateral to the remaining healthy ear in an attempt to restore some of the benefits provided by binaural hearing. While the benefit of contralateral stimulation has been shown in at least some patients, it is not clear what cues or mechanisms contribute to this function. Previous studies have investigated the motion of the ossicular chain, skull, and round window in response to bone vibration. Here, we extend those reports with simultaneous measurements of cochlear promontory velocity and intracochlear pressures during bone conduction stimulation with two common BCI attachments, and directly compare ipsilateral with contralateral stimulation. Methods: Fresh-frozen whole human heads were prepared bilaterally with mastoidectomies. Intracochlear pressure (PIC) in the scala vestibuli (PSV) and scala tympani (PST) was measured with fiber-optic pressure probes concurrently with cochlear promontory velocity (VProm), via laser Doppler vibrometry, during stimulation with a closed-field loudspeaker or a BCI. Stimuli were pure tones between 120 and 10,240 Hz, and response magnitudes and phases for PIC and VProm were measured for air- and bone-conducted sound presentation. Results: Contralateral stimulation produced lower response magnitudes and longer delays than ipsilateral stimulation in all measures, particularly for high-frequency stimulation. Contralateral response magnitudes were lower than ipsilateral response magnitudes by up to 10 to 15 dB above ~2 kHz for a skin-penetrating abutment; this difference increased to 25 to 30 dB and extended to lower frequencies with a transcutaneous (skin-drive) attachment. 
Conclusions: Transcranial attenuation and delay suggest that ipsilateral stimulation will be dominant for frequencies over ~1 kHz, and that complex phase interactions will occur during bilateral or bimodal stimulation. These effects indicate a mechanism by which bilateral users could gain some bilateral advantage. Supplemental digital content is available for this article. Direct URL citations appear in the printed text and are provided in the HTML and text of this article on the journal’s Web site (www.ear-hearing.com). ACKNOWLEDGMENTS: J.K.M., H.A.J., D.J.T., S.P.C., and N.T.G. designed and performed the experiments; H.A.J., D.J.T., and S.P.C. reviewed data and provided interpretive analysis; J.K.M., R.M.B.H., S.P.C., and N.T.G. analyzed data and wrote the paper. All authors discussed the results and implications and commented on the manuscript at all stages. This work was supported by AAO-HNSF Resident Research Grant from The Oticon Foundation (to J. K. M.) and NIH/NIDCD 1T32-DC012280 (to R. M. B. H. and N. T. G.). We appreciate the assistance of Dr. Michael Hall in constructing some of the custom experimental equipment (supported by National Institutes of Health grant P30 NS041854). S.P.C. is a consultant for Cochlear Corporation. The other authors have no conflicts of interest to disclose. Received January 8, 2018; accepted May 15, 2019. Address for correspondence: Nathaniel T. Greene, Department of Otolaryngology, University of Colorado School of Medicine, 12631 E. 17th Avenue, B205, Aurora, CO 80045, USA. E-mail: nathaniel.greene@ucdenver.edu Copyright © 2019 Wolters Kluwer Health, Inc. All rights reserved.
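The transcranial attenuation figures in the abstract above are level differences between ipsilateral and contralateral response magnitudes, expressed in dB. A minimal sketch of that computation (the magnitudes below are invented for illustration, not measured values from the study):

```python
import numpy as np

def attenuation_db(ipsi, contra):
    """Level difference of ipsilateral re: contralateral magnitude, in dB."""
    return 20 * np.log10(np.asarray(ipsi) / np.asarray(contra))

# Hypothetical normalized promontory-velocity magnitudes at three frequencies,
# with the contralateral response dropping off toward high frequencies
freqs  = [500, 2000, 8000]   # Hz
ipsi   = [1.0, 1.0, 1.0]     # ipsilateral magnitude (normalized)
contra = [0.9, 0.3, 0.05]    # contralateral magnitude

for f, a in zip(freqs, attenuation_db(ipsi, contra)):
    print(f"{f} Hz: {a:.1f} dB attenuation")
```

A magnitude ratio of 10:1 corresponds to 20 dB, so the reported 25 to 30 dB at high frequencies implies the contralateral cochlea receives well under a tenth of the ipsilateral drive.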
10-Year Follow-Up Results of The Netherlands Longitudinal Study on Hearing: Trends of Longitudinal Change in Speech Recognition in Noise
Objectives: Previous findings of longitudinal cohort studies indicate that acceleration in age-related hearing decline may occur. Five-year follow-up data of the Netherlands Longitudinal Study on Hearing (NL-SH) showed that around the age of 50 years, the decline in speech recognition in noise accelerates compared with the change in hearing in younger participants. Other longitudinal studies confirm an accelerated loss of speech recognition in noise but mostly use older age groups as a reference. In the present study, we determined the change in speech recognition in noise over a period of 10 years in participants aged 18 to 70 years at baseline. We additionally investigated the effects of age, sex, educational level, history of tobacco smoking, and alcohol use on the decline of speech recognition in noise. Design: Baseline (T0), 5-year (T1), and 10-year (T2) follow-up data of the NL-SH collected until May 2017 were included. The NL-SH is a web-based prospective cohort study that started in 2006. Central to the NL-SH is the National Hearing Test (NHT), which was administered to the participants in all three measurement rounds. The NHT uses three-digit sequences presented in a background of stationary noise; the listener is asked to enter the digits using the computer keyboard. The outcome of the NHT is the speech reception threshold in noise (SRT), i.e., the signal-to-noise ratio at which a listener recognizes 50% of the digit triplets correctly. In addition to the NHT, participants completed online questionnaires on demographic, lifestyle, and health-related characteristics at T0, T1, and T2. A linear mixed model was used for the analysis of longitudinal changes in SRT. Results: Data of 1349 participants were included. At the start of the study, the mean age of the participants was 45 years (SD 13 years), and 61% of the participants were categorized as having good hearing ability in noise. SRTs significantly increased (worsened) over 10 years (p < 0.001). 
After adjustment for age, sex, and a history of tobacco smoking, the mean decline over 10 years was 0.89 dB in signal-to-noise ratio. The decline in speech recognition in noise was significantly larger in the groups aged 51 to 60 and 61 to 70 years compared with the younger age groups (18 to 30, 31 to 40, and 41 to 50 years) (p < 0.001). Speech recognition in noise in participants with a history of smoking declined significantly faster during the 10-year follow-up interval (p = 0.003). Sex, educational level, and alcohol use did not appear to influence the decline of speech recognition in noise. Conclusions: This study indicated that speech recognition in noise declines significantly over a 10-year follow-up period in adults aged 18 to 70 years at baseline. It is the first longitudinal study with a 10-year follow-up to reveal that the increased rate of decline in speech recognition in noise already starts at the age of 50 years. A history of tobacco smoking accelerates the decline of speech recognition in noise. Hearing health care professionals should be aware of an accelerated decline of speech recognition in noise in adults aged 50 years and over. ACKNOWLEDGMENTS: The authors thank the participants of the Netherlands Longitudinal Study on Hearing (NL-SH). The authors also thank Celina Henke for her assistance in managing the database. The first measurement round of the NL-SH (2006–2010) was financially supported by the Heinsius Houbolt Foundation, The Netherlands. Sonova AG, Switzerland, supported the data collection of the second measurement round (since 2011). Funding for data collection of the third measurement round (since 2016) came from the EMGO Institute for Health and Care Research, The Netherlands, and Sonova AG, Switzerland. T.P.M.G., M.S., P.M., U.L., C.S., and S.E.K. were involved in formulating the research questions and in designing the study. T.P.M.G. performed the analysis and M.S. and B.I.L.-W. 
verified the analytical methods. T.P.M.G. took the lead in writing the article. All authors provided critical feedback and helped shape the research, analysis, and article. Received June 28, 2018; accepted June 19, 2019. The authors have no conflicts of interest to disclose. Address for correspondence: Thadé P. M. Goderie, Department of Otolaryngology/Head and Neck Surgery, Section Ear and Hearing, Amsterdam University Medical Center, P.O. Box 7057, 1007 MB, Amsterdam, The Netherlands. E-mail: t.goderie@vumc.nl This is an open-access article distributed under the terms of the Creative Commons Attribution-Non Commercial-No Derivatives License 4.0 (CCBY-NC-ND), where it is permissible to download and share the work provided it is properly cited. The work cannot be changed in any way or used commercially without permission from the journal. Copyright © 2019 Wolters Kluwer Health, Inc. All rights reserved.
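The SRT described in the abstract above is the SNR at which 50% of digit triplets are recognized, the point an adaptive up-down track converges on. A minimal simulation sketch (the logistic psychometric function, true SRT, step size, and averaging rule below are illustrative assumptions, not the NHT's actual procedure):

```python
import math
import random

def listener_correct(snr_db, srt_true=-8.0, slope=1.0):
    """Simulated listener: logistic psychometric function, 50% correct at srt_true."""
    p = 1 / (1 + math.exp(-slope * (snr_db - srt_true)))
    return random.random() < p

def measure_srt(n_triplets=40, start_snr=0.0, step=2.0):
    """1-up/1-down adaptive track over digit triplets; converges on the 50% point."""
    snr, track = start_snr, []
    for _ in range(n_triplets):
        track.append(snr)
        # Harder (lower SNR) after a correct response, easier after an error
        snr += -step if listener_correct(snr) else step
    return sum(track[-20:]) / 20   # average the later presentations

random.seed(1)
print(f"estimated SRT: {measure_srt():.1f} dB SNR")
```

Because each correct response makes the next triplet harder and each error makes it easier, the track oscillates around the SNR where correct and incorrect responses are equally likely, which is exactly the 50% definition of the SRT given in the abstract.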
