The factors responsible for interindividual differences in speech-understanding ability among hearing-impaired listeners are not well understood. Although audibility has been found to account for some of this variability, other factors may play a role. This study sought to examine whether part of the large interindividual variability of speech-recognition performance in individuals with severe-to-profound high-frequency hearing loss could be accounted for by differences in hearing-loss onset type (early, progressive, or sudden), age at hearing-loss onset, or hearing-loss duration. Other potential factors, including age, hearing thresholds, speech-presentation levels, and speech audibility, were controlled. Percent-correct (PC) scores for syllables in dissyllabic words, which were either unprocessed or lowpass filtered at cutoff frequencies ranging from 250 to 2,000 Hz, were measured in 20 subjects (40 ears) with severe-to-profound hearing losses above 1 kHz. For comparison purposes, 20 normal-hearing subjects (20 ears) were also tested using the same filtering conditions and a range of speech levels (10-80 dB SPL). Significantly higher asymptotic PCs were observed in the early (≤4 years) hearing-loss onset group than in both the progressive- and sudden-onset groups, even though the three groups did not differ significantly with respect to age, hearing thresholds, or speech audibility. In addition, significant negative correlations between PC and hearing-loss onset age, and positive correlations between PC and hearing-loss duration, were observed. These variables accounted for a greater proportion of the variance in speech-intelligibility scores than did speech audibility, as quantified using a variant of the articulation index, and were not significantly correlated with it.
Although the lack of statistical independence between hearing-loss onset type, hearing-loss onset age, hearing-loss duration, and age complicates and limits the interpretation of the results, these findings indicate that variables other than audibility can influence speech intelligibility in listeners with severe-to-profound high-frequency hearing loss.
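The lowpass-filtering manipulation described above can be illustrated with a short sketch. This is not the authors' code: the filter type (Butterworth), order, and sampling rate are assumptions; only the cutoff frequencies (250-2,000 Hz) come from the abstract.

```python
# Illustrative sketch of the lowpass-filtered speech conditions.
# Assumptions (not stated in the abstract): zero-phase Butterworth
# filtering, 6th-order design, 16-kHz sampling rate.
import numpy as np
from scipy.signal import butter, sosfiltfilt

def lowpass_speech(signal, fs, cutoff_hz, order=6):
    """Zero-phase lowpass filter a speech waveform at cutoff_hz (Hz)."""
    sos = butter(order, cutoff_hz, btype="low", fs=fs, output="sos")
    return sosfiltfilt(sos, signal)

fs = 16000                          # assumed sampling rate (Hz)
rng = np.random.default_rng(0)
speech = rng.standard_normal(fs)    # stand-in for a recorded dissyllabic word
conditions = {c: lowpass_speech(speech, fs, c) for c in (250, 500, 1000, 2000)}
```

In a study of this kind, each filtered condition would then be presented at controlled levels and scored for the percentage of syllables correctly reported.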
Improvements in speech-recognition performance resulting from the addition of low-frequency information to electric (or vocoded) signals have attracted considerable interest in recent years. An important question is whether these improvements reflect a form of constructive perceptual interaction—whereby acoustic cues enhance the perception of electric or vocoded signals—or whether they can be explained without assuming any interaction. To address this question, speech-recognition performance was measured in 24 normal-hearing listeners using lowpass-filtered, vocoded, and “combined” (lowpass + vocoded) words presented either in quiet or in a realistic background (cafeteria noise), for different signal-to-noise ratios, different lowpass-filter cutoff frequencies, and different numbers of vocoder bands. The results of these measures were then compared to the predictions of three cue-combination models: a “probability summation” model and two Gaussian signal-detection-theory (SDT) models—one (the “independent noises” model) involving pre-combination noises, and the other (the “late noise” model) involving post-combination noise. Consistent with previous findings, speech-recognition performance with combined stimulation was significantly higher than performance with vocoded or lowpass stimuli alone, and it was also higher than predicted by the probability-summation model. The two Gaussian-SDT models could account quantitatively for the data. Moreover, a Bayesian model-comparison procedure demonstrated that, given the data, these two models were far more likely than the probability-summation model. Since these models do not involve any constructive-interaction mechanism, this indicates that constructive interactions are not needed to explain the combined-stimulation benefits measured in this study.
It will be important for future studies to investigate whether this conclusion generalizes to other test conditions, including real electric-acoustic stimulation (EAS), and to further test the assumptions of these different models of the combined-stimulation advantage.
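The three model predictions compared above can be sketched in a few lines. The abstract does not give the authors' exact formulations; the versions below are standard textbook forms, and the conversion between percent correct and sensitivity (d') via a Gaussian probit link is an assumption.

```python
# Hedged sketch of three cue-combination predictions (standard forms,
# not necessarily the authors' exact equations). p_a and p_b are the
# proportions correct for each cue (e.g., lowpass and vocoded) alone.
from math import sqrt
from statistics import NormalDist

phi = NormalDist().cdf          # Gaussian CDF
phi_inv = NormalDist().inv_cdf  # probit link (assumed p -> d' mapping)

def prob_summation(p_a, p_b):
    """Probability summation: correct if either cue alone succeeds."""
    return 1.0 - (1.0 - p_a) * (1.0 - p_b)

def independent_noises(p_a, p_b):
    """Gaussian SDT with independent pre-combination noises:
    sensitivities combine in quadrature, d'_comb = sqrt(d'_a^2 + d'_b^2)."""
    da, db = phi_inv(p_a), phi_inv(p_b)
    return phi(sqrt(da ** 2 + db ** 2))

def late_noise(p_a, p_b):
    """Gaussian SDT with a single post-combination noise:
    cue strengths add before the noise, d'_comb = d'_a + d'_b."""
    da, db = phi_inv(p_a), phi_inv(p_b)
    return phi(da + db)
```

Under these forms the late-noise model predicts the largest combined-stimulation benefit, followed by the independent-noises model, which is one way such models can be distinguished against measured data.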
Broader intra-cochlear current spread (ICCS) implies greater cochlear implant (CI) channel interaction. This study aimed to investigate the relationship between ICCS and speech intelligibility in experienced CI users. Using voltage matrices collected for impedance measurements, an individual exponential spread coefficient (ESC) was computed. Speech audiometry was performed to determine the intelligibility at 40 dB Sound Pressure Level (SPL) and the 50% speech reception threshold (I40 and SRT50, respectively). Correlations between ESC and either I40 or SRT50 were assessed. A total of 36 adults (mean age: 50 years) with more than 11 months (mean: 34 months) of CI experience were included. In the 21 subjects for whom all electrodes were active, ESC was moderately correlated with both I40 (r = −0.557, p = 0.009) and SRT50 (r = 0.569, p = 0.007). The results indicate that speech perception performance is negatively affected by ICCS. Estimates of current spread in the immediate vicinity of the CI electrodes, prior to any activation of auditory neurons, are indispensable to better characterize the relationship between CI stimulation and auditory perception in cochlear implantees.
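An exponential spread coefficient of the kind described above can be estimated from one row of a voltage matrix: the voltages recorded on each electrode while a single electrode is stimulated. The sketch below is illustrative only; the model V(d) = V0·exp(−k·d), with d the electrode distance and the fitted decay constant k standing in for the ESC, is an assumption, not the authors' published pipeline.

```python
# Hedged sketch: fitting an exponential decay to one row of a voltage
# matrix. Assumed model V(d) = V0 * exp(-k * d); k plays the role of
# the exponential spread coefficient (ESC). Smaller k means broader
# current spread and hence more channel interaction.
import numpy as np

def exponential_spread_coefficient(voltages):
    """Log-linear fit of |voltage| vs. distance (in electrode steps)
    from the stimulated electrode; returns the decay constant k."""
    v = np.asarray(voltages, dtype=float)
    peak = int(np.argmax(v))                 # assume peak = stimulated site
    d = np.abs(np.arange(len(v)) - peak)     # distance in electrode steps
    mask = d > 0                             # exclude the stimulated site
    slope, _ = np.polyfit(d[mask], np.log(v[mask]), 1)
    return -slope

# Synthetic example: spread around electrode 5 with true k = 0.4
true_k = 0.4
dist = np.abs(np.arange(12) - 5)
v = 1.0 * np.exp(-true_k * dist)
```

One coefficient per subject could then be correlated with behavioral scores such as I40 and SRT50, as in the study.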