Cochlear implant users report difficulty understanding speech in both noisy and reverberant environments. Electric-acoustic stimulation (EAS) is known to improve speech intelligibility in noise. However, little is known about the potential benefits of EAS in reverberation, or about how such benefits relate to those observed in noise. The present study used EAS simulations to examine these questions. Sentences were convolved with impulse responses from a model of a room whose estimated reverberation times were varied from 0 to 1 s. These reverberated stimuli were then vocoded to simulate electric stimulation, or presented as a combination of the vocoder output plus low-pass filtered speech to simulate EAS. Monaural sentence recognition scores were measured in two conditions: reverberated speech and speech in a reverberated noise. The long-term spectrum and amplitude modulations of the noise were equated to the reverberant energy, allowing a comparison of the effects of the interferer (speech vs noise). Results indicate that, at least in simulation, (1) EAS provides significant benefit in reverberation; (2) the benefits of EAS in reverberation may be underestimated by those in a comparable noise; and (3) the EAS benefit in reverberation likely arises from cues that are partially preserved in this background and remain accessible via the low-frequency acoustic component.
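A minimal sketch of such a simulation chain, assuming numpy/scipy, a noise-excited envelope vocoder, and illustrative filter settings (the channel count, band edges, and function names are assumptions for illustration, not the study's actual implementation):

```python
# Sketch of the EAS simulation pipeline described above: reverberate a sentence,
# vocode it to simulate electric stimulation, and optionally add low-pass filtered
# speech to simulate the residual-acoustic component. Parameters are illustrative.
import numpy as np
from scipy.signal import butter, sosfiltfilt, fftconvolve, hilbert

def vocode(x, fs, n_channels=8, lo=100.0, hi=7000.0):
    """Noise-excited envelope vocoder with log-spaced analysis bands (assumes fs >= 16 kHz)."""
    edges = np.geomspace(lo, hi, n_channels + 1)
    out = np.zeros_like(x)
    for f1, f2 in zip(edges[:-1], edges[1:]):
        sos = butter(4, [f1, f2], btype="bandpass", fs=fs, output="sos")
        band = sosfiltfilt(sos, x)
        env = np.abs(hilbert(band))                                   # temporal envelope
        env = sosfiltfilt(butter(2, 50.0, fs=fs, output="sos"), env)  # smooth envelope
        carrier = np.random.randn(len(x))                             # noise carrier
        out += sosfiltfilt(sos, env * carrier)                        # re-band modulated carrier
    return out

def simulate_eas(speech, rir, fs, lp_cutoff=500.0, acoustic=True):
    """Reverberate, vocode, and (optionally) add a low-pass acoustic portion."""
    reverbed = fftconvolve(speech, rir)[: len(speech)]
    electric = vocode(reverbed, fs)
    if not acoustic:
        return electric           # vocoder-only condition (electric stimulation)
    sos_lp = butter(6, lp_cutoff, btype="lowpass", fs=fs, output="sos")
    return electric + sosfiltfilt(sos_lp, reverbed)   # EAS condition
```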
Objectives: "Channel-linked" and "multi-band" front-end automatic gain control (AGC) were examined as alternatives to single-band, channel-unlinked AGC in simulated bilateral cochlear implant (CI) processing. In channel-linked AGC, the same gain control signal was applied to the input signals to both of the two CIs ("channels"). In multi-band AGC, gain control acted independently on each of a number of narrow frequency regions per channel.Design: Speech intelligibility performance was measured with a single target (to the left, at −15 or −30º) and a single, symmetrically-opposed masker (to the right) at a signal-to-noise ratio (SNR) of −2 decibels. Binaural sentence intelligibility was measured as a function of whether channel linking was present and of the number of AGC bands. Analysis of variance was performed to assess condition effects on percent correct across the two spatial arrangements, both at a high and a low AGC threshold. Acoustic analysis was conducted to compare post-compressed better-ear SNR, interaural differences, and monaural within-band envelope levels across processing conditions.Results: Analyses of variance indicated significant main effects of both channel linking and number of bands at low threshold, and of channel linking at high threshold. These improvements were accompanied by several acoustic changes. Linked AGC produced a more favorable better-ear SNR and better preserved broadband ILD statistics, but did not reduce dynamic range as much as unlinked AGC. Multi-band AGC sometimes improved better-ear SNR statistics and always improved broadband ILD statistics whenever the AGC channels were unlinked. Multi-band AGC produced output envelope levels that were higher than single-band AGC.Conclusions: These results favor strategies that incorporate channel-linked AGC and multi-band AGC for bilateral CIs. Linked AGC aids speech intelligibility in spatially separated speech, but reduces the degree to which dynamic range is compressed. Combining multi-band and channellinked AGC offsets the potential impact of diminished dynamic range with linked AGC without sacrificing the intelligibility gains observed with linked AGC.
Objective: Shifting the mean fundamental frequency (F0) of target speech down in frequency may be a way to provide the benefits of electric-acoustic stimulation (EAS) to cochlear implant (CI) users whose limited residual hearing typically precludes such a benefit, even with amplification. However, previous work showed a decline in the amount of benefit at the greatest downward frequency shifts, and we hypothesized that this decline might be related to F0 variation. In the current study we therefore sought to determine the relationship between mean F0, F0 variation, and the benefit of combining electric stimulation from a cochlear implant with low-frequency residual acoustic hearing.
Design: We measured speech intelligibility in normal-hearing listeners using an EAS simulation consisting of a sine vocoder combined either with speech low-pass filtered at 500 Hz or with a pure tone representing target F0. We modulated the tone with extracted target voice pitch information and manipulated both the frequency of the carrier (mean F0) and the standard deviation of the voice pitch information (F0 variation).
Results: A decline in EAS benefit was observed at the lowest mean F0 tested, but this decline disappeared when F0 variation was reduced in proportion to the frequency shift (i.e., when F0 was shifted logarithmically instead of linearly).
Conclusion: Lowering mean F0 by shifting the frequency of a pure tone carrying target voice pitch information can provide as much EAS benefit as an unshifted tone, at least in the current simulation of EAS. These results may have implications for cochlear implant users with extremely limited residual acoustic hearing.
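A minimal sketch of the two ways an F0 contour can be lowered, and of re-synthesizing the shifted contour as a pure-tone carrier (numpy is assumed; the function names and frame-rate handling are illustrative, not the authors' stimulus code):

```python
# A linear shift subtracts a constant in Hz, so the contour's standard deviation in Hz
# is preserved; a logarithmic shift multiplies by a constant ratio, so the variation
# scales in proportion to the shift. The tone is synthesized by integrating the
# instantaneous frequency into phase. Unvoiced gaps are ignored in this sketch.
import numpy as np

def shift_f0(f0_hz, target_mean_hz, mode="log"):
    """f0_hz: F0 contour in Hz (voiced frames only). Returns the shifted contour."""
    mean_f0 = np.mean(f0_hz)
    if mode == "linear":
        return f0_hz - (mean_f0 - target_mean_hz)    # SD in Hz unchanged
    return f0_hz * (target_mean_hz / mean_f0)         # SD scales with the shift

def f0_to_tone(f0_hz, frame_rate, fs):
    """Pure tone whose instantaneous frequency follows the (frame-rate) F0 contour."""
    inst_f = np.repeat(f0_hz, int(fs / frame_rate))   # sample-and-hold upsampling
    phase = 2 * np.pi * np.cumsum(inst_f) / fs
    return np.sin(phase)
```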
In reverberant environments, reflections occurring within 30–50 ms of the direct speech assist intelligibility by perceptually fusing with the source, effectively increasing its level. Fusion occurs at similar delays in monaurally deaf and normal-hearing listeners, suggesting an independence from binaural processes [Litovsky et al., J. Acoust. Soc. Am. 106, 1633–1654 (1999)]. Its effects can thus be examined in unilateral cochlear implant (CI) users. Simulated CI listening shows little benefit from early reflections, and a detriment when delays exceed 20 ms [Whitmal and Poissant, Conference on Implantable Auditory Prostheses (2009)]. Data from our laboratory show that CI users with residual acoustic hearing benefit from electric-acoustic stimulation (EAS) in reverberation. The role of early reflections in this EAS benefit remains unclear. The present study examined the effect of a single unattenuated reflection at ten delay times on intelligibility in simulated EAS. Target stimuli consisted of sentences combined with unattenuated copies delayed by 0–66 ms. Four-talker babble was added to increase difficulty. A four-channel vocoder simulated electric stimulation; low-pass filtered speech represented residual acoustic hearing. Monaural intelligibility scores from normal-hearing listeners under anechoic and reflected conditions with electric and electric-acoustic processing suggest that EAS may facilitate a benefit from early reflections. [Work supported by the NIDCD.]
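A minimal sketch of constructing such a stimulus, assuming numpy; the helper name and the example delay list are illustrative rather than the study's exact values:

```python
# The "reflected" target is the sentence plus an unattenuated copy of itself
# delayed by d milliseconds, modeling a single early reflection.
import numpy as np

def add_reflection(x, fs, delay_ms):
    """Return x plus an unattenuated copy delayed by delay_ms milliseconds."""
    d = int(round(delay_ms * fs / 1000.0))
    y = np.zeros(len(x) + d)
    y[: len(x)] += x     # direct sound
    y[d:] += x           # single full-level reflection
    return y

# e.g., ten delay conditions spanning 0-66 ms (illustrative spacing):
# reflected = [add_reflection(sentence, fs, d) for d in np.linspace(0, 66, 10)]
```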
In English, most word-initial syllables are stressed. Listeners use these syllable strength cues to identify word boundaries in degraded acoustic conditions. This is evident in the types of lexical boundary errors they make when tasked with parsing a continuous stream of degraded speech: word boundaries are more often inserted before strong syllables than before weak syllables. Listeners also use visual cues from the talker's face to glean both phonemic and prosodic speech information in degraded listening conditions. While the benefits of lipreading have received much attention, the role of visual cues in lexical segmentation remains largely unexplored. The present study examined the effect of auditory-visual cues on lexical boundary decisions. Normal-hearing listeners identified target phrases degraded by multi-talker babble. Responses in auditory-only and auditory-visual conditions were analyzed for percent words correct and lexical boundary error type. Results indicate large inter-individual variability, but overall an increase in word identification accuracy and a decrease in lexical boundary errors in the auditory-visual condition. Further, some listeners made a greater proportion of lexical boundary insertions before strong syllables, suggesting that the addition of visual cues increased their use of syllable strength to identify word boundaries. Implications for clinical populations will be discussed.
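One possible way to tally such lexical boundary errors, assuming each utterance is hand-coded as a string of syllable strength marks with word boundaries; the coding format and function are hypothetical, not the study's scoring procedure:

```python
# Each utterance is coded as 'S'/'W' syllables with '|' marking word boundaries.
# An erroneous boundary in the response is classified by whether it was inserted
# or deleted and by the strength of the syllable that follows it.
from collections import Counter

def lbe_counts(target, response):
    """target/response: e.g. 'S|WS|W' — same syllables, possibly different boundaries."""
    t_syls, r_syls = target.replace("|", ""), response.replace("|", "")
    assert t_syls == r_syls, "this coding assumes identical syllable strings"

    def bounds(coded):
        """Set of syllable positions that have a word boundary immediately before them."""
        b, i = set(), 0
        for ch in coded:
            if ch == "|":
                b.add(i)
            else:
                i += 1
        return b

    tb, rb = bounds(target), bounds(response)
    counts = Counter()
    for pos in rb - tb:                         # boundary insertions
        counts["insert_" + t_syls[pos]] += 1    # before an 'S' or 'W' syllable
    for pos in tb - rb:                         # boundary deletions
        counts["delete_" + t_syls[pos]] += 1
    return counts
```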