Thursday, September 19, 2019

Recommendations for Measuring the Electrically Evoked Compound Action Potential in Children With Cochlear Nerve Deficiency
Objectives: This study reports a method for measuring the electrically evoked compound action potential (eCAP) in children with cochlear nerve deficiency (CND). Design: This method was developed based on experience with 50 children with CND who were Cochlear Nucleus cochlear implant users. Results: This method includes three recommended steps conducted with recommended stimulating and recording parameters: initial screen, pulse phase duration optimization, and eCAP threshold determination (i.e., identifying the lowest stimulation level that can evoke an eCAP). Compared with the manufacturer-default parameters, the recommended parameters used in this method yielded a higher success rate for measuring the eCAP in children with CND. Conclusions: The eCAP can be measured successfully in children with CND using recommended parameters. This specific method is suitable for measuring the eCAP in children with CND in clinical settings. However, it is not suitable for intraoperative eCAP recordings due to the extensive testing time required. ACKNOWLEDGMENTS: The authors thank all subjects and their parents for participating in this study. Supported by the R03 (1R03DC013153) and the R01 (R01DC017846) grants from NIH/NIDCD. S.H. developed the method used in this study, participated in data collection and patient testing at the centers in China, Ohio, and North Carolina, and drafted and approved the final version of this article. X.C., R.W., J.L., and L.X. participated in data collection and patient testing at the center in China, provided critical comments, and approved the final version of this article. M.S. and C.W. participated in patient testing at the center in Ohio, provided critical comments, and approved the final version of this article. H.F.B.T. and L.R.P. participated in patient testing at the center in North Carolina, provided critical comments, and approved the final version of this article. K.D.B., A.P., and W.J.R. provided critical comments and approved the final version of this article. H.F.B.T. is a member of a Cochlear Corp. Audiology Advisory Board. Received October 30, 2018; accepted June 21, 2019. Address for correspondence: Shuman He, Eye and Ear Institute, Department of Otolaryngology – Head and Neck Surgery, The Ohio State University, 915 Olentangy River Road, Suite 4000, Columbus, OH, USA. E-mail: shuman.he@osumc.edu Copyright © 2019 Wolters Kluwer Health, Inc. All rights reserved.
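The abstract defines the eCAP threshold as the lowest stimulation level that evokes a measurable response. A minimal sketch of one way to automate that decision from an amplitude growth function; the noise-floor criterion and all values are illustrative assumptions, not the authors' recommended parameters:

```python
import numpy as np

def ecap_threshold(levels, amplitudes, noise_floor_uv=20.0):
    """Return the lowest stimulation level whose eCAP amplitude exceeds
    the recording noise floor, or None if no level evokes a response.

    levels         : stimulation levels, ascending order (clinical units)
    amplitudes     : N1-P2 eCAP amplitudes (microvolts) at each level
    noise_floor_uv : amplitude criterion; illustrative default only
    """
    levels = np.asarray(levels, dtype=float)
    amplitudes = np.asarray(amplitudes, dtype=float)
    above = amplitudes > noise_floor_uv
    if not above.any():
        return None
    return levels[above.argmax()]  # first (lowest) level above criterion

# Hypothetical amplitude growth function from one electrode
levels = [170, 180, 190, 200, 210]
amps = [5.0, 8.0, 24.0, 61.0, 97.0]
print(ecap_threshold(levels, amps))  # -> 190.0
```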
Oscillopsia in Bilateral Vestibular Hypofunction: Not Only Gain But Saccades Too
Objectives: Oscillopsia is a disabling condition for patients with bilateral vestibular hypofunction (BVH). When the vestibulo-ocular reflex is bilaterally impaired, its ability to compensate for rapid head movements must be supported by refixation saccades. The objective of this study is to assess the relationship between saccadic strategies and perceived oscillopsia. Design: To avoid the possibility of bias due to remaining vestibular function, we classified patients into two groups according to their gain values in the video head impulse test. One group comprised patients with extremely low gain (0.2 or below) on both sides, and a control group contained BVH patients with gain between 0.2 and 0.6 bilaterally. Binary logistic regression (BLR) was used to determine the variables predicting oscillopsia. Results: Twenty-nine patients were assigned to the extremely low gain group and 23 to the control group. The BLR model revealed the PR score (saccades synchrony measurement) to be the best predictor of oscillopsia. Receiver operating characteristic analysis determined that the most efficient cutoff point for the probabilities saved with the BLR was 0.518, yielding a sensitivity of 86.6% and specificity of 84.2%. Conclusions: BVH patients with higher PR values (nonsynchronized saccades) were more prone to oscillopsia independent of their gain values. We suggest that the PR score can be considered a useful measurement of compensation. ACKNOWLEDGMENTS: All authors contributed to this work, discussed the results and implications, and commented on the article at all stages. A. B. designed the study; A. B. and G. T. performed explorations, designed and performed measurements, analyzed data, and wrote the article; J. R. described the PR protocol, performed explorations, and reviewed the article; E. M. performed explorations and reviewed the article; N. P. participated in the study design, performed explorations, and reviewed the article. The authors have no conflicts of interest to disclose. Received January 16, 2019; accepted May 19, 2019. Address for correspondence: Angel Batuecas-Caletrio, Otoneurology Unit, ENT Department, University Hospital of Salamanca, Paseo San Vicente 58-182, 37007 Salamanca, Spain. E-mail: abatuc@yahoo.es Copyright © 2019 Wolters Kluwer Health, Inc. All rights reserved.
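A sketch of the analysis pipeline described above: fit a binary logistic regression predicting oscillopsia, then choose the most efficient cutoff on the saved probabilities from the ROC curve. The data here are hypothetical (the 0.518 cutoff reported above came from the authors' sample), and Youden's J is assumed as the efficiency criterion:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(0)

# Hypothetical predictors: PR score (saccade synchrony) and vHIT gain
n = 52
pr_score = rng.uniform(0, 100, n)
gain = rng.uniform(0.05, 0.6, n)
# Hypothetical outcome loosely tied to PR score, as in the reported model
oscillopsia = (pr_score + rng.normal(0, 15, n) > 50).astype(int)

X = np.column_stack([pr_score, gain])
model = LogisticRegression().fit(X, oscillopsia)
probs = model.predict_proba(X)[:, 1]  # the "saved probabilities"

# Most efficient cutoff via Youden's J (sensitivity + specificity - 1)
fpr, tpr, thresholds = roc_curve(oscillopsia, probs)
best = thresholds[np.argmax(tpr - fpr)]
print(f"AUC={roc_auc_score(oscillopsia, probs):.2f}, cutoff={best:.3f}")
```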
Chronic Conductive Hearing Loss Is Associated With Speech Intelligibility Deficits in Patients With Normal Bone Conduction Thresholds
Objectives: The main objective of this study is to determine whether chronic sound deprivation leads to poorer speech discrimination in humans. Design: We reviewed the audiologic profile of 240 patients presenting with normal and symmetrical bone conduction thresholds bilaterally, associated with either an acute or chronic unilateral conductive hearing loss of different etiologies. Results: Patients with chronic conductive impairment and a moderate to moderately severe hearing loss had lower speech recognition scores on the side of the pathology when compared with the healthy side. The degree of impairment was significantly correlated with the speech recognition performance, particularly in patients with a congenital malformation. Speech recognition scores were not significantly altered when the conductive impairment was acute or mild. Conclusions: This retrospective study shows that chronic conductive hearing loss was associated with speech intelligibility deficits in patients with normal bone conduction thresholds. These results are as predicted by a recent animal study showing that prolonged, adult-onset conductive hearing loss causes cochlear synaptopathy. ACKNOWLEDGMENTS: The authors are grateful to William Goedicke and Dr. Barbara Herrmann for their technical help and logistic support. This research was funded by the National Institutes of Health–National Institute on Deafness and Other Communication Disorders P50 DC015857 (Project Principal Investigator: S. F. M.). The authors have no conflicts of interest to disclose. Received July 6, 2018; accepted June 28, 2019. Address for correspondence: Stéphane F. Maison, Eaton-Peabody Laboratories, Massachusetts Eye & Ear, 243 Charles Street, Boston, MA 02114, USA. E-mail: stephane_maison@meei.harvard.edu Copyright © 2019 Wolters Kluwer Health, Inc. All rights reserved.
Subclinical Auditory Neural Deficits in Patients With Type 1 Diabetes Mellitus
Objectives: Diabetes mellitus (DM) is associated with a variety of sensory complications. Very little attention has been given to auditory neuropathic complications in DM. The aim of this study was to determine whether type 1 DM (T1DM) affects neural coding of the rapid temporal fluctuations of sounds, and how any deficits may impact on behavioral performance. Design: Participants were 30 young normal-hearing T1DM patients, and 30 age-, sex-, and audiogram-matched healthy controls. Measurements included electrophysiological measures of auditory nerve and brainstem function using the click-evoked auditory brainstem response, and of brainstem neural temporal coding using the sustained frequency-following response (FFR); behavioral tests of temporal coding (interaural phase difference discrimination and the frequency difference limen); tests of speech perception in noise; and self-report measures of auditory disability using the Speech, Spatial and Qualities of Hearing Scale. Results: There were no significant differences between T1DM patients and controls in the auditory brainstem response. However, the T1DM group showed significantly reduced FFRs to both temporal envelope and temporal fine structure. The T1DM group also showed significantly higher interaural phase difference and frequency difference limen thresholds, worse speech-in-noise performance, as well as lower overall Speech, Spatial and Qualities scores than the control group. Conclusions: These findings suggest that T1DM is associated with degraded neural temporal coding in the brainstem in the absence of an elevation in audiometric threshold, and that the FFR may provide an early indicator of neural damage in T1DM, before any abnormalities can be identified using standard clinical tests. However, the relation between the neural deficits and the behavioral deficits is uncertain. Supplemental digital content is available for this article. Direct URL citations appear in the printed text and are provided in the HTML and text of this article on the journal’s Web site (www.ear-hearing.com). ACKNOWLEDGMENTS: The authors thank the collaborators at the Manchester Diabetes Centre and the Help DiaBEATes campaign in the Salford NHS Foundation Trust and all of the participants in this research. This work was supported by the Deanship of Scientific Research, College of Applied Medical Sciences Research Center at King Saud University, Riyadh, Saudi Arabia, by the Medical Research Council UK (MR/L003589/1), and by the NIHR Manchester Biomedical Research Centre. Portions of this work were presented as posters at the 38th MidWinter Meeting of the Association for Research in Otolaryngology, Baltimore, MD, February 21–25, 2015, and at the 5th Joint Meeting of the Acoustical Society of America and Acoustical Society of Japan, Honolulu, Hawaii, November 28–December 2, 2016. The authors have no conflicts of interest to disclose. Received November 20, 2018; accepted June 19, 2019. Address for correspondence: Arwa AlJasser, Department of Rehabilitation Sciences, College of Applied Medical Sciences, King Saud University, P.O. Box 10219, Riyadh, 11433, Saudi Arabia. E-mail: aljasser@ksu.edu.sa This is an open access article distributed under the Creative Commons Attribution License 4.0 (CCBY), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. Copyright © 2019 Wolters Kluwer Health, Inc. All rights reserved.
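The abstract above does not state how the envelope and fine-structure components of the FFR were separated, but a common convention is to record responses to opposite-polarity stimuli and combine them: adding emphasizes the envelope-following response, subtracting emphasizes the temporal fine structure. A minimal sketch under that assumption:

```python
import numpy as np

def ffr_components(resp_pos, resp_neg):
    """Separate an FFR into envelope- and fine-structure-following parts
    from averaged responses to opposite-polarity stimuli.

    Adding the two polarities cancels the polarity-inverting
    fine-structure response and leaves the envelope response;
    subtracting does the reverse. This is a common convention, not
    necessarily the exact analysis used in the study above.
    """
    resp_pos = np.asarray(resp_pos, dtype=float)
    resp_neg = np.asarray(resp_neg, dtype=float)
    ffr_env = (resp_pos + resp_neg) / 2.0
    ffr_tfs = (resp_pos - resp_neg) / 2.0
    return ffr_env, ffr_tfs
```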
Music Is More Enjoyable With Two Ears, Even If One of Them Receives a Degraded Signal Provided By a Cochlear Implant
Objectives: Cochlear implants (CIs) restore speech perception in quiet but they also eliminate or distort many acoustic cues that are important for music enjoyment. Unfortunately, quantifying music enjoyment by CI users has been difficult because comparisons must rely on their recollection of music before they lost their hearing. Here, we aimed to assess music enjoyment in CI users using a readily interpretable reference based on acoustic hearing. The comparison was done by testing “single-sided deafness” (SSD) patients who have normal hearing (NH) in one ear and a CI in the other ear. The study also aimed to assess binaural musical enjoyment, with the reference being the experience of hearing with a single NH ear. Three experiments assessed the effect of adding different kinds of input to the second ear: electrical, vocoded, or unmodified. Design: In experiment 1, music enjoyment in SSD-CI users was investigated using a modified version of the MUSHRA (MUltiple Stimuli with Hidden Reference and Anchor) method. Listeners rated their enjoyment of song segments on a scale of 0 to 200, where 100 represented the enjoyment obtained from a song segment presented to the NH ear, 0 represented a highly degraded version of the same song segment presented to the same ear, and 200 represented enjoyment subjectively rated as twice as good as the 100 reference. Stimuli consisted of acoustic only, electric only, acoustic and electric, as well as other conditions with low-pass filtered acoustic stimuli. Acoustic stimulation was provided by headphone to the NH ear and electric stimulation was provided by direct audio input to the subject’s speech processor. In experiment 2, the task was repeated using NH listeners who received vocoded stimuli instead of electric stimuli. Experiment 3 tested the effect of adding the same unmodified song segment to the second ear, also in NH listeners. Results: Music presented through the CI only was very unpleasant, with an average rating of 20. Surprisingly, the combination of the unpleasant CI signal in one ear with acoustic stimulation in the other ear was rated more enjoyable (mean = 123) than acoustic presentation alone. Presentation of the same monaural musical signal to both ears in NH listeners resulted in even greater enhancement of the experience compared with presentation to a single ear (mean = 159). Repeating the experiment using a vocoder in one ear of NH listeners resulted in interference rather than enhancement. Conclusions: Music enjoyment from electric stimulation is extremely poor relative to a readily interpretable NH baseline for SSD-CI listeners. However, the combination of this unenjoyable signal presented through a CI and an unmodified acoustic signal presented to a NH (or near-NH) contralateral ear results in enhanced music enjoyment with respect to the acoustic signal alone. Remarkably, this two-ear enhancement experienced by SSD-CI listeners represents a substantial fraction of the two-ear enhancement seen in NH listeners. This unexpected benefit of electroacoustic auditory stimulation will have to be considered in theoretical accounts of music enjoyment and may facilitate the quest to enhance music enjoyment in CI users. Supplemental digital content is available for this article. Direct URL citations appear in the printed text and are provided in the HTML and text of this article on the journal’s Web site (www.ear-hearing.com).
ACKNOWLEDGMENTS: The authors thank Johanna Boyer for multiple conversations that inspired the research questions investigated in this article. The authors are grateful for the time and effort from each of the participants. Griet Mertens provided demographic information for the participants from Antwerp. This research was funded by the National Institute on Deafness and Other Communication Disorders (R01 DC012152 principal investigator: Landsberger; R01 DC03937 principal investigator: Svirsky; R01 DC011329 principal investigators: Svirsky and Neuman), a MED-EL Hearing Solutions grant (principal investigator: Landsberger), a contract from Cochlear Americas (principal investigator: J. Thomas Roland), and a TOPBOF grant (principal investigator: Van de Heyning) from the University of Antwerp. D.M.L., M.A.S., and K.V. designed the experimental protocol. D.M.L. created the stimuli and modified software for use in this experiment. P.v.d.H. provided testing facilities, patient access, and organized IRB approval for data collection in Belgium. K.V., N.S., A.L., and J.N. collected the data. Figures and statistical analysis were generated by D.M.L. The article was primarily written by D.M.L. and M.A.S. All other authors contributed to drafting the article or revising it critically for important intellectual content. All authors provided final approval of the version to be published. The authors have no conflicts of interest to disclose. Received September 26, 2018; accepted June 5, 2019. Address for correspondence: David M. Landsberger, Department of Otolaryngology, New York University School of Medicine, 550 1st Avenue, STE NBV 5E5, New York, NY 10016, USA. E-mail: david.landsberger@nyumc.org Copyright © 2019 Wolters Kluwer Health, Inc. All rights reserved.
Age Differences in the Effects of Speaking Rate on Auditory, Visual, and Auditory-Visual Speech Perception
Objectives: This study was designed to examine how speaking rate affects auditory-only, visual-only, and auditory-visual speech perception across the adult lifespan. In addition, the study examined the extent to which unimodal (auditory-only and visual-only) performance predicts auditory-visual performance across a range of speaking rates. The authors hypothesized significant Age × Rate interactions in all three modalities and that unimodal performance would account for a majority of the variance in auditory-visual speech perception for speaking rates that are both slower and faster than normal. Design: Participants (N = 145), ranging in age from 22 to 92, were tested in conditions with auditory-only, visual-only, and auditory-visual presentations using a closed-set speech perception test. Five different speaking rates were presented in each modality: an unmodified (normal) rate, two rates that were slower than normal, and two rates that were faster than normal. Signal-to-noise ratios were set individually to produce approximately 30% correct identification in the auditory-only condition, and this signal-to-noise ratio was used in the auditory-only and auditory-visual conditions. Results: Age × Rate interactions were observed for the fastest speaking rates in both the visual-only and auditory-visual conditions. Unimodal performance accounted for at least 60% of the variance in auditory-visual performance for all five speaking rates. Conclusions: The findings demonstrate that the disproportionate difficulty that older adults have with rapid speech for auditory-only presentations can also be observed with visual-only and auditory-visual presentations. Taken together, the present analyses of age and individual differences indicate a generalized age-related decline in the ability to understand speech produced at fast speaking rates. The finding that auditory-visual speech performance was almost entirely predicted by unimodal performance across all five speaking rates has important clinical implications for auditory-visual speech perception and the ability of older adults to use visual speech information to compensate for age-related hearing loss. ACKNOWLEDGMENTS: This research was supported by a grant from the National Institute on Aging. The authors have no conflicts of interest to disclose. Received January 29, 2019; accepted May 24, 2019. Address for correspondence: Mitchell S. Sommers, Department of Psychological and Brain Sciences, Washington University in St. Louis, St. Louis, MO 63130, USA. E-mail: msommers@wustl.edu Copyright © 2019 Wolters Kluwer Health, Inc. All rights reserved.
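The individual SNR adjustment described above (targeting roughly 30% correct auditory-only identification) is typically done with an adaptive track. The sketch below uses a weighted up-down rule whose step ratio targets 30% correct; the step sizes, trial counts, and simulated listener are purely illustrative, not the study's actual procedure:

```python
import numpy as np

def track_snr(respond, start_snr_db=0.0, trials=60,
              step_up=1.0, step_down=7.0 / 3.0):
    """Weighted up-down track converging on ~30% correct.

    respond(snr_db) -> True if the listener answered correctly.
    At equilibrium p*step_down = (1-p)*step_up, so a step_down/step_up
    ratio of 7/3 targets p = 0.30. All values are illustrative.
    """
    snr = start_snr_db
    history = []
    for _ in range(trials):
        correct = respond(snr)
        snr += -step_down if correct else step_up  # harder after correct
        history.append(snr)
    return np.mean(history[-20:])  # average of late trials as estimate

# Simulated listener: logistic psychometric function whose 30% point
# sits near -12 dB SNR (hypothetical shape)
rng = np.random.default_rng(1)
def sim_listener(snr):
    p = 1.0 / (1.0 + np.exp(-(snr + 9.5) / 3.0))
    return rng.random() < p

print(track_snr(sim_listener))  # -> roughly -12 dB
```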
Binaural Optimization of Cochlear Implants: Discarding Frequency Content Without Sacrificing Head-Shadow Benefit
Objectives: Single-sided deafness cochlear-implant (SSD-CI) listeners and bilateral cochlear-implant (BI-CI) listeners gain near-normal levels of head-shadow benefit but limited binaural benefits. One possible reason for these limited binaural benefits is that cochlear places of stimulation tend to be mismatched between the ears. SSD-CI and BI-CI patients might benefit from a binaural fitting that reallocates frequencies to reduce interaural place mismatch. However, this approach could reduce monaural speech recognition and head-shadow benefit by excluding low- or high-frequency information from one ear. This study examined how much frequency information can be excluded from a CI signal in the poorer-hearing ear without reducing head-shadow benefits and how these outcomes are influenced by interaural asymmetry in monaural speech recognition. Design: Speech-recognition thresholds for sentences in speech-shaped noise were measured for 6 adult SSD-CI listeners, 12 BI-CI listeners, and 9 normal-hearing listeners presented with vocoder simulations. Stimuli were presented using nonindividualized in-the-ear or behind-the-ear head-related impulse-response simulations, with speech presented from a 70° azimuth on the poorer-hearing side and noise from a 70° azimuth on the better-hearing side, thereby yielding a better signal-to-noise ratio (SNR) at the poorer-hearing ear. Head-shadow benefit was computed as the improvement in bilateral speech-recognition thresholds gained from enabling the CI in the poorer-hearing, better-SNR ear. High- or low-pass filtering was systematically applied to the head-related impulse-response–filtered stimuli presented to the poorer-hearing ear. For the SSD-CI listeners and SSD-vocoder simulations, only high-pass filtering was applied, because the CI frequency allocation would never need to be adjusted downward to frequency-match the ears. For the BI-CI listeners and BI-vocoder simulations, both low- and high-pass filtering were applied. The normal-hearing listeners were tested with two levels of performance to examine the effect of interaural asymmetry in monaural speech recognition (vocoder synthesis-filter slopes: 5 or 20 dB/octave). Results: Mean head-shadow benefit was smaller for the SSD-CI listeners (~7 dB) than for the BI-CI listeners (~14 dB). For SSD-CI listeners, frequencies <1236 Hz could be excluded; for BI-CI listeners, frequencies <886 or >3814 Hz could be excluded from the poorer-hearing ear without reducing head-shadow benefit. Bilateral performance showed greater immunity to filtering than monaural performance, with gradual changes in performance as a function of filter cutoff. Real and vocoder-simulated CI users with larger interaural asymmetry in monaural performance had less head-shadow benefit. Conclusions: The “exclusion frequency” ranges that could be removed without diminishing head-shadow benefit are interpreted in terms of low importance in the speech intelligibility index and a small head-shadow magnitude at low frequencies. Although groups and individuals with greater performance asymmetry gained less head-shadow benefit, the magnitudes of these factors did not predict the exclusion frequency range. Overall, these data suggest that for many SSD-CI and BI-CI listeners, the frequency allocation for the poorer-ear CI can be shifted substantially without sacrificing head-shadow benefit, at least for energetic maskers.
Considering the two ears together as a single system may allow greater flexibility in discarding redundant frequency content from a CI in one ear when considering bilateral programming solutions aimed at reducing interaural frequency mismatch. ACKNOWLEDGMENTS: The authors thank Cochlear Ltd. and Med-El for providing equipment and technical support. The authors thank John Culling for providing head shadow modeling software. The authors thank Ginny Alexander for her assistance with subject recruitment, coordination, and payment of subjects at the University of Maryland-College Park, as well as Brian Simpson and Matt Ankrom for the recruitment, coordination, and payment of the subject panel at the Air Force Research Laboratory. Research reported in this publication was supported by the National Institute on Deafness and Other Communication Disorders of the National Institutes of Health under Award Number R01DC015798 (J.G.W.B. and M.J.G.). The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health. The identification of specific products or scientific instrumentation does not constitute endorsement or implied endorsement on the part of the author, DoD, or any component agency. The views expressed in this article are those of the authors and do not reflect the official policy of the Department of the Army/Navy/Air Force, Department of Defense, or U.S. Government. Portions of these data were presented at the 2017 Midwinter Meeting of the Association for Research in Otolaryngology, Baltimore, MD, and the 2017 Conference on Implantable Auditory Prostheses, Tahoe City, CA. The authors have no conflicts of interest to disclose. Received April 2, 2018; accepted June 25, 2019. Address for correspondence: Sterling W. Sheffield, Department of Speech, Language and Hearing Sciences, University of Florida, 1225 Center Drive, Room 2130, Gainesville, FL 32610, USA. E-mail: s.sheffield@phhp.ufl.edu Copyright © 2019 Wolters Kluwer Health, Inc. All rights reserved.
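As defined in the Design above, head-shadow benefit is simply the SRT improvement from enabling the CI in the poorer-hearing, better-SNR ear. A small sketch of that arithmetic, applied to hypothetical SRTs measured at several high-pass cutoffs to locate an "exclusion frequency"; all numbers are invented for illustration:

```python
import numpy as np

def head_shadow_benefit(srt_one_ear_db, srt_both_ears_db):
    """Benefit (dB) = SRT with the poorer ear's CI off minus SRT with
    it on; positive values mean the added ear helped."""
    return srt_one_ear_db - srt_both_ears_db

# Hypothetical SRTs as the poorer ear's input is high-pass filtered
cutoffs_hz = np.array([0, 500, 1000, 2000, 4000])
srt_both = np.array([-12.0, -12.0, -11.5, -8.0, -3.0])
srt_mono = -1.0  # CI in poorer ear disabled

benefit = head_shadow_benefit(srt_mono, srt_both)
full = benefit[0]  # benefit with no filtering
# Highest cutoff whose benefit stays within 1 dB of the unfiltered case
ok = benefit >= full - 1.0
print("exclusion frequency ~", cutoffs_hz[ok][-1], "Hz")  # -> 1000 Hz
```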
Vocal Turn-Taking Between Mothers and Their Children With Cochlear Implants
Objectives: The primary objective of the study was to examine the occurrence and temporal structure of vocal turn-taking during spontaneous interactions between mothers and their children with cochlear implants (CI) over the first year after cochlear implantation as compared with interactions between mothers and children with normal hearing (NH). Design: Mothers’ unstructured play sessions with children with CI (n = 12) were recorded at 2 time points, 3 months (mean age 18.3 months) and 9 months (mean age 27.5 months) post-CI. A separate control group of mothers with age-matched hearing children (n = 12) was recorded at the same 2 time points. Five types of events were coded: mother and child vocalizations, vocalizations including speech overlap, and between- and within-speaker pauses. We analyzed the proportion of child and mother vocalizations involved in turn-taking, the temporal structure of turn-taking, and the temporal reciprocity of turn-taking using proportions of simultaneous speech and the duration of between- and within-speaker pauses. Results: The CI group produced a significantly smaller proportion of vocalizations in turn-taking than the NH group at the first session; however, CI children’s proportion of vocalizations in turn-taking increased over time. There was a significantly larger proportion of simultaneous speech in the CI group than in the NH group at the first session. The CI group produced longer between-speaker pauses than the NH group at the first session, with mothers decreasing the duration of between-speaker pauses over time. NH infants and mothers in both groups produced longer within- than between-speaker pauses, but CI infants demonstrated the opposite pattern. In addition, the duration of mothers’ between-speaker pauses (CI and NH) was predicted by the duration of the infants’ between-speaker pauses. Conclusions: Vocal turn-taking and timing in both members of the dyad, the mother and infant, were sensitive to the experiential effects of child hearing loss and remediation with CI. Child hearing status affected dyad-specific coordination in the timing of responses between mothers and their children. Supplemental digital content is available for this article. Direct URL citations appear in the printed text and are provided in the HTML and text of this article on the journal’s Web site (www.ear-hearing.com). ACKNOWLEDGMENTS: The authors thank all the families who participated in this study, the audiologists at the different sites for their help with recruiting the families, and all the research assistants and staff members for their help with gathering and analyzing the data. This research was supported by the National Institutes of Health, National Institute on Deafness and Other Communication Disorders (NIH-NIDCD) Research Grant 5R01DC008581-08 to D.M. Houston and L. Dilley and NIH-NIDCD grant R01DC008581 to T. Bergeson. The authors have no conflicts of interest to disclose. Received November 27, 2018; accepted May 27, 2019. Address for correspondence: Maria V. Kondaurova, Department of Psychological & Brain Sciences, University of Louisville, 301 Life Sciences Building, Louisville, KY 40292, USA. E-mail: maria.kondaurova@louisville.edu Copyright © 2019 Wolters Kluwer Health, Inc. All rights reserved.
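The between-speaker pause, within-speaker pause, and overlap measures described above can be computed directly from time-stamped vocalization events. A sketch, where the event format and example values are illustrative assumptions rather than the authors' coding scheme:

```python
def pause_measures(events):
    """events: list of (speaker, onset_s, offset_s), sorted by onset.

    Returns between-speaker pauses (gap preceding a change of speaker),
    within-speaker pauses (gap between one speaker's consecutive
    vocalizations), and the count of overlaps (next vocalization
    starts before the previous one ends).
    """
    between, within, overlaps = [], [], 0
    for (spk_a, _, off_a), (spk_b, on_b, _) in zip(events, events[1:]):
        gap = on_b - off_a
        if gap < 0:
            overlaps += 1
        elif spk_a == spk_b:
            within.append(gap)
        else:
            between.append(gap)
    return between, within, overlaps

# Hypothetical mother (M) / child (C) exchange
events = [("M", 0.0, 1.2), ("C", 1.8, 2.5), ("C", 3.0, 3.6),
          ("M", 3.5, 4.4)]
print(pause_measures(events))  # -> roughly ([0.6], [0.5], 1)
```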
Improving Cochlear Implant Performance in the Wind Through Spectral Masking Release: A Multi-microphone and Multichannel Strategy
Objectives: Adopting the omnidirectional microphone (OMNI) mode and reducing low-frequency gain are the two most commonly used wind noise reduction strategies in hearing devices. The objective of this study was to compare the effectiveness of these two strategies on cochlear implant users’ speech-understanding abilities and perceived sound quality in wind noise. We also examined the effectiveness of a new strategy that adopts the microphone mode with the lower wind noise level in each frequency channel. Design: A behind-the-ear digital hearing aid with multiple microphone modes was used to record testing materials for cochlear implant participants. It was adjusted to have linear amplification and a flat frequency response when worn on a Knowles Electronics Manikin for Acoustic Research to remove the head-related transfer function of the manikin and to mimic typical microphone characteristics of hearing devices. Recordings of wind noise samples and hearing-in-noise test sentences were made with the hearing aid programmed to four microphone modes, namely (1) OMNI; (2) adaptive directional microphone (ADM); (3) ADM with low-frequency roll-off; and (4) a combination of omnidirectional and directional microphones (COMBO). Wind noise samples were recorded in an acoustically treated wind tunnel from 0° to 360° in 10° increments at wind velocities of 4.5, 9.0, and 13.5 m/s while the hearing aid was worn on the manikin. Two wind noise samples recorded at 90° and 300° head angles at the wind velocity of 9.0 m/s were chosen to take advantage of the spectral masking release effects of COMBO. The samples were then mixed with the sentences recorded using identical settings. Cochlear implant participants listened to the speech-in-wind testing materials, repeated the sentences, and compared overall sound quality preferences of the different microphone modes using a paired-comparison categorical rating paradigm. The participants also rated their preferences for wind-only samples. Results: COMBO yielded the highest speech recognition scores among the four microphone modes, and it was also preferred the most often, likely due to the reduction of spectral masking. The speech recognition scores generated using ADM with low-frequency roll-off were either equal to or lower than those obtained using ADM because gain reduction decreased not only the level of wind noise but also the low-frequency energy of speech. OMNI consistently yielded speech recognition scores lower than COMBO, and it was often rated as less preferable than the other microphone modes, suggesting that the conventional strategy of switching to the omnidirectional mode in the wind is undesirable. Conclusions: Neither adopting an OMNI nor reducing low-frequency gain generated higher speech recognition scores or higher sound quality ratings than COMBO. Adopting the microphone with the lower wind noise level in different frequency channels can provide spectral masking release, and it is a more effective wind noise reduction strategy. The natural 6 dB/octave low-frequency roll-off of first-order directional microphones should be compensated when speech is present. Signal detection and decision rules for wind noise reduction applications in hearing devices with and without binaural transmission capability are discussed. ACKNOWLEDGMENTS: The author thanks Lance Nelson and Melissa Teske Dunn for data collection and Jens Balslev, Peter Nopp, Nick McKibben, Drs. Ernst Aschbacher and Kaibao Nie, and the staff at the Herrick Laboratories for technical support.
This study was funded by the Oticon Foundation and Med-El Corporation. The author was solely responsible for the design of the study procedures and the contents presented in this article. The author holds a United States patent (8,942,815), “Enhancing cochlear implants with hearing aid signal processing technologies.” The author has no conflicts of interest to declare. Received February 2, 2017; accepted May 27, 2019. Address for correspondence: King Chung, Department of Allied Health and Communicative Disorders, Northern Illinois University, DeKalb, IL 60115, USA. E-mail: kchung@niu.edu Copyright © 2019 Wolters Kluwer Health, Inc. All rights reserved.
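The COMBO strategy described above keeps, in each frequency channel, the microphone mode with the lower wind noise level. Below is a simplified sketch of that idea using a short-time Fourier transform, with long-term band energy assumed as the selection statistic; this is an illustration under stated assumptions, not the strategy's actual implementation:

```python
import numpy as np
from scipy.signal import stft, istft

def combo(omni, direc, fs, nperseg=256):
    """Per-channel selection between omnidirectional and directional
    microphone signals: in each frequency band, keep the mode whose
    long-term energy is lower. Wind noise dominates the low bands of
    the directional mode, so those bands tend to come from OMNI, while
    higher bands keep the directional mode's SNR advantage.
    """
    _, _, O = stft(omni, fs, nperseg=nperseg)
    _, _, D = stft(direc, fs, nperseg=nperseg)
    # Long-term energy per frequency band for each microphone mode
    pick_omni = (np.abs(O) ** 2).mean(axis=1) < (np.abs(D) ** 2).mean(axis=1)
    out = np.where(pick_omni[:, None], O, D)  # band-wise selection
    _, y = istft(out, fs, nperseg=nperseg)
    return y
```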
Improving Sensitivity of the Digits-In-Noise Test Using Antiphasic Stimuli
Objectives: The digits-in-noise test (DIN) has become increasingly popular as a consumer-based method to screen for hearing loss. Current versions of all DINs either test ears monaurally or present identical stimuli binaurally (i.e., diotic noise and speech, NoSo). Unfortunately, presentation of identical stimuli to each ear inhibits detection of unilateral sensorineural hearing loss (SNHL), and neither diotic nor monaural presentation sensitively detects conductive hearing loss (CHL). After an earlier finding of enhanced sensitivity in normally hearing listeners, this study tested the hypothesis that interaural antiphasic digit presentation (NoSπ) would improve sensitivity to hearing loss caused by unilateral or asymmetric SNHL, symmetric SNHL, or CHL. Design: This cross-sectional study recruited adults (18 to 84 years) with various levels of hearing based on a 4-frequency pure-tone average (PTA) at 0.5, 1, 2, and 4 kHz. The study sample comprised listeners with normal hearing (n = 41; PTA ≤ 25 dB HL in both ears), symmetric SNHL (n = 57; PTA > 25 dB HL), unilateral or asymmetric SNHL (n = 24; PTA > 25 dB HL in the poorer ear), and CHL (n = 23; PTA > 25 dB HL and PTA air-bone gap ≥ 20 dB HL in the poorer ear). Antiphasic and diotic speech reception thresholds (SRTs) were compared using a repeated-measures design. Results: Antiphasic DIN was significantly more sensitive to all three forms of hearing loss than the diotic DIN. SRT test–retest reliability was high for all tests (intraclass correlation coefficient r > 0.89). Area under the receiver operating characteristic curve for detection of hearing loss (>25 dB HL) was higher for antiphasic DIN (0.94) than for diotic DIN (0.77) presentation. After correcting for age, PTA of listeners with normal hearing or symmetric SNHL was more strongly correlated with antiphasic (rpartial[96] = 0.69) than diotic (rpartial = 0.54) SRTs. Slope of fitted regression lines predicting SRT from PTA was significantly steeper for antiphasic than diotic DIN. For listeners with normal hearing or CHL, antiphasic SRTs were more strongly correlated with PTA (rpartial[62] = 0.92) than diotic SRTs (rpartial[62] = 0.64). Slope of the regression line with PTA was also significantly steeper for antiphasic than diotic DIN. The severity of asymmetric hearing loss (poorer ear PTA) was unrelated to SRT. No effect of self-reported English competence on either antiphasic or diotic DIN was observed among the mixed first-language participants. Conclusions: Antiphasic digit presentation markedly improved the sensitivity of the DIN test to detect SNHL, either symmetric or asymmetric, while keeping test duration to a minimum by testing binaurally. In addition, the antiphasic DIN was able to detect CHL, a shortcoming of previous monaural or binaurally diotic DIN versions. The antiphasic DIN is thus a powerful tool for population-based screening. This enhanced functionality combined with smartphone delivery could make the antiphasic DIN suitable as a primary screen that is accessible to a large global audience. Supplemental digital content is available for this article. Direct URL citations appear in the printed text and are provided in the HTML and text of this article on the journal’s Web site (www.ear-hearing.com). ACKNOWLEDGMENTS: The authors thank all the participants of this study, Steve Biko Academic Hospital, and all participating private practices for their assistance with data collection. The authors thank Li Lin for assistance with data analysis.
This research was funded by the National Institute on Deafness and Other Communication Disorders of the National Institutes of Health under Award Number 5R21DC016241-02. Additional funding support was obtained from the National Research Foundation (Grant PR_CSRP190208414782). DWS, DRM, and HCM have relationships with the hearX Group and hearZA that include equity, consulting, and potential royalties. DRM is supported by Cincinnati Children’s Research Foundation and by the National Institute for Health Research Manchester Biomedical Research Centre. The authors have no conflicts of interest to disclose. Received January 4, 2019; accepted June 4, 2019. Address for correspondence: Cas Smits, Amsterdam UMC, Vrije Universiteit, Department of Otolaryngology-Head and Neck Surgery, Ear and Hearing, Amsterdam Public Health Research Institute, De Boelelaan 1117, Amsterdam, The Netherlands. E-mail: c.smits@vumc.nl This is an open access article distributed under the Creative Commons Attribution License 4.0 (CCBY), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. Copyright © 2019 Wolters Kluwer Health, Inc. All rights reserved.
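At the waveform level the two presentation modes above differ only in the sign of the speech in one ear: NoSo presents identical digits-plus-noise to both ears, while NoSπ inverts the polarity of the digits (but not the noise) in one ear. A minimal sketch, with the SNR handling and the placeholder signals purely illustrative:

```python
import numpy as np

def din_stimulus(digits, noise, snr_db, antiphasic=True):
    """Build a stereo digits-in-noise trial.

    digits, noise : mono waveforms of equal length, roughly unit RMS
    snr_db        : digit level relative to the noise
    antiphasic    : True -> NoSpi (digit waveform polarity-inverted in
                    the left ear); False -> NoSo (identical ears)
    """
    gain = 10.0 ** (snr_db / 20.0)
    right = noise + gain * digits
    left = noise + (-gain if antiphasic else gain) * digits
    return np.column_stack([left, right])  # shape (samples, 2)

# Hypothetical signals standing in for recorded digit triplets
fs = 16000
t = np.arange(fs) / fs
digits = np.sin(2 * np.pi * 440 * t)  # placeholder for "speech"
noise = np.random.default_rng(2).standard_normal(fs)
stereo = din_stimulus(digits, noise, snr_db=-10.0)
```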