The addition of low-frequency acoustic information to real or simulated electric stimulation (so-called electric-acoustic stimulation, or EAS) often results in large improvements in intelligibility, particularly in competing backgrounds. This may reflect the availability of fundamental frequency (F0) information in the acoustic region. The contributions of the F0 and the amplitude envelope (as well as the voicing) of speech to simulated EAS were examined by replacing the low-frequency speech with a tone that was modulated in frequency to track the F0 of the speech, in amplitude with the envelope of the low-frequency speech, or both. A four-channel vocoder simulated electric hearing. Significant benefit over the vocoder alone was observed with the addition of a tone carrying F0 or envelope cues, and both cues combined typically provided significantly more benefit than either alone. The improvement in intelligibility over the vocoder alone ranged from 24 to 57 percentage points and was unaffected by the presence of a tone carrying these cues from a background talker. These results confirm the importance of the F0 of target speech for EAS (in simulation). They indicate that significant benefit can be provided by a tone carrying F0 and amplitude envelope cues. The results support a glimpsing account of EAS and argue against one based on segregation.
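The tone manipulation described above can be illustrated with a minimal Python sketch: a carrier whose instantaneous frequency follows an F0 track and whose amplitude follows the low-frequency speech envelope. The linear F0 glide and the 4-Hz sinusoidal envelope below are stand-ins for values that would be extracted from real speech, not the study's stimuli.

```python
import numpy as np

def modulated_tone(f0_track, envelope, fs=16000):
    """Synthesize a tone whose instantaneous frequency follows an F0 track
    (Hz, one value per sample) and whose amplitude follows a speech
    envelope (one value per sample)."""
    # Integrate instantaneous frequency to obtain instantaneous phase
    phase = 2 * np.pi * np.cumsum(f0_track) / fs
    return envelope * np.sin(phase)

# Example: a 1-s tone gliding from 100 to 150 Hz with a slow AM envelope
fs = 16000
t = np.arange(fs) / fs
f0 = np.linspace(100.0, 150.0, fs)            # stand-in for a measured F0 track
env = 0.5 * (1 + np.sin(2 * np.pi * 4 * t))   # stand-in for the speech envelope
tone = modulated_tone(f0, env, fs)
```

In an F0-only condition the envelope argument would be held constant, and in an envelope-only condition the F0 track would be fixed at a single frequency.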
Objective: When either real or simulated electric stimulation from a cochlear implant (CI) is combined with low-frequency acoustic stimulation (electric-acoustic stimulation [EAS]), speech intelligibility in noise can improve dramatically. We recently showed that a similar benefit to intelligibility can be observed in simulation when the low-frequency acoustic stimulation (low-pass target speech) is replaced with a tone that is modulated both in frequency with the fundamental frequency (F0) of the target talker and in amplitude with the amplitude envelope of the low-pass target speech (Brown & Bacon 2009). The goal of the current experiment was to examine the benefit of the modulated tone to intelligibility in CI patients.
Design: Eight CI users who had some residual acoustic hearing in the implanted ear, the unimplanted ear, or both ears participated in this study. Target speech was combined with either multitalker babble or a single competing talker and presented to the implant. Stimulation to the acoustic region consisted of no signal, target speech, or a tone that was modulated in frequency to track the changes in the target talker's F0 and in amplitude to track the amplitude envelope of target speech low-pass filtered at 500 Hz.
Results: All patients showed improvements in intelligibility over electric-only stimulation when either the tone or target speech was presented acoustically. The average improvement in intelligibility was 46 percentage points due to the tone and 55 percentage points due to target speech.
Conclusions: The results demonstrate that a tone carrying F0 and amplitude envelope cues of target speech can provide significant benefit to CI users, and may lead to new technologies that could offer EAS benefit to many patients who would not benefit from current EAS approaches.
Objective: The aims of this study were (i) to determine the magnitude of the interaural level differences (ILDs) that remain after cochlear implant (CI) signal processing and (ii) to relate those ILDs to the pattern of errors for sound source localization on the horizontal plane.
Design: The listeners were 16 bilateral CI patients fitted with MED-EL cochlear implants and 34 normal-hearing listeners. The stimuli were wideband, high-pass, and low-pass noise signals. ILDs were calculated by passing signals, filtered by head-related transfer functions (HRTFs), through a MATLAB simulation of MED-EL signal processing.
Results: For the wideband and high-pass signals, maximum ILDs of 15–17 dB in the input signal were reduced to 3–4 dB after CI signal processing. For the low-pass signal, ILDs were reduced to 1–2 dB. For the wideband and high-pass signals, the largest ILDs were between 0.4 and 0.7 dB for the ±15° speaker locations; between 0.9 and 1.3 dB for the ±30° locations; between 2.4 and 2.9 dB for the ±45° locations; between 3.2 and 4.1 dB for the ±60° locations; and between 2.7 and 3.4 dB for the ±75° locations. All of the CI patients in all stimulus conditions showed poorer localization than the normal-hearing listeners. Localization accuracy for CI patients was best for the wideband and high-pass signals and poorest for the low-pass signal.
Conclusions: Localization accuracy was related to the magnitude of the ILD cues available to the normal-hearing listeners and CI patients. The pattern of localization errors for the CI patients was related to the magnitude of the ILD differences among loudspeaker locations. The error patterns for the wideband and high-pass signals suggest that, for the conditions of this experiment, patients on average sorted signals on the horizontal plane into four sectors: on each side of the midline, one sector including 0°, 15°, and possibly 30°, and a second sector from 45° to 75°. Resolution within a sector was relatively poor.
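The ILD measurements above can be illustrated with a minimal sketch. The broadband RMS-level definition used here is a simplifying assumption for illustration; the study itself computed ILDs from HRTF-filtered signals passed through a simulation of MED-EL processing.

```python
import numpy as np

def ild_db(left, right):
    """Broadband interaural level difference in dB (left re right),
    computed from the RMS levels of the two ear signals."""
    rms = lambda x: np.sqrt(np.mean(np.square(x)))
    return 20 * np.log10(rms(left) / rms(right))

# Example: the right-ear signal attenuated by a factor of 2 yields an
# ILD of 20*log10(2), about +6 dB favoring the left ear.
rng = np.random.default_rng(0)
noise = rng.standard_normal(48000)
print(round(ild_db(noise, 0.5 * noise), 2))  # → 6.02
```

In practice the same computation would be applied per frequency band, since ILDs are much larger for high-pass than low-pass signals, as the results above show.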
Speech reception in noise is an especially difficult problem for listeners with hearing impairment as well as for users of cochlear implants (CIs). One likely cause of this is an inability to ‘glimpse’ a target talker in a fluctuating background, which has been linked to deficits in temporal fine-structure processing. A fine-structure cue that has the potential to be beneficial for speech reception in noise is fundamental frequency (F0). A challenging problem, however, is delivering the cue to these individuals. The benefits to speech intelligibility of F0 for both listeners with hearing impairment and users of CIs are reviewed, as well as various methods of delivering F0 to these listeners.
We have investigated the psychophysical properties of low-frequency hearing, both before and after implantation, to see if we can account for the benefit to speech understanding and melody recognition of adding acoustic stimulation to electric stimulation. In this paper, we review our work and the work of others and describe preliminary results not previously published. We show (a) that it is possible to preserve normal or near-normal nonlinear cochlear processing in the implanted ear following electric and acoustic stimulation surgery, though this is not the typical outcome; (b) that although frequency selectivity at low frequencies is generally disrupted following implantation, some degree of frequency selectivity can be preserved; and (c) that neither nonlinear cochlear processing nor frequency selectivity in the acoustic hearing ear is correlated with the gain in speech understanding afforded by combined electric and acoustic stimulation. In another set of experiments, we show that the value of preserving hearing in the implanted ear is best seen in complex listening environments in which binaural cues can play a role in perception.
Several measures of sound source localization performance were obtained for 45 listeners with normal hearing when loudspeakers were in the front hemifield. Localization performance was not statistically affected by filtering the 200-ms noise bursts into 2-octave or wider passbands (125–500, 1500–6000, and 125–6000 Hz). This implies that sound source localization performance for noise stimuli is not differentially affected by which interaural cue (interaural time or level difference) a listener with normal hearing uses for sound source localization, at least for relatively broadband signals. This sound source localization task suggests that listeners with normal hearing perform with high reliability/repeatability, little response bias, and with performance measures that are normally distributed, with a mean root-mean-square error of 6.2° and a standard deviation of 1.79°.
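The root-mean-square error statistic used above can be computed as follows; the response and target azimuths in the example are hypothetical values chosen for illustration, not data from the study.

```python
import numpy as np

def rms_error_deg(responses, targets):
    """Root-mean-square localization error in degrees, given listeners'
    response azimuths and the true loudspeaker azimuths."""
    d = np.asarray(responses, dtype=float) - np.asarray(targets, dtype=float)
    return float(np.sqrt(np.mean(d ** 2)))

# Hypothetical response/target azimuths (degrees) for one listener
targets = [-30, -15, 0, 15, 30]
responses = [-24, -18, 2, 10, 38]
print(round(rms_error_deg(responses, targets), 2))  # → 5.25
```

Averaging this per-listener statistic across listeners gives the group mean (6.2° here), and its spread across listeners gives the reported standard deviation.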
Previous experiments have shown significant improvement in speech intelligibility under both simulated [Brown, C. A., and Bacon, S. P. (2009a). J. Acoust. Soc. Am. 125, 1658-1665; Brown, C. A., and Bacon, S. P. (2010). Hear. Res. 266, 52-59] and real [Brown, C. A., and Bacon, S. P. (2009b). Ear Hear. 30, 489-493] electric-acoustic stimulation when the target speech in the low-frequency region was replaced with a tone modulated in frequency to track the changes in the target talker's fundamental frequency (F0), and in amplitude with the amplitude envelope of the target speech. The present study examined the effects in simulation of applying these cues to a tone lower in frequency than the mean F0 of the target talker. Results showed that shifting the frequency of the tonal carrier downward by as much as 75 Hz had no negative impact on the benefit to intelligibility due to the tone, and that even a shift of 100 Hz resulted in a significant benefit over simulated electric-only stimulation when the sensation level of the tone was comparable to that of the tones shifted by lesser amounts.
Listeners localized the free-field sources of either one or two simultaneous and independently generated noise bursts. Listeners' localization performance was better when localizing one rather than two sound sources. With two sound sources, localization performance was better when the listener was provided prior information about the location of one of them. Listeners also localized two simultaneous noise bursts that had sinusoidal amplitude modulation (AM) applied, in which the modulation envelope was in-phase across the two source locations or was 180° out-of-phase. The AM was employed to investigate a hypothesis as to what process listeners might use to localize multiple sound sources. The results supported the hypothesis that localization of two sound sources might be based on temporal-spectral regions of the combined waveform in which the sound from one source was more intense than that from the other source. The interaural information extracted from such temporal-spectral regions might provide reliable estimates of the sound source location that produced the more intense sound in that temporal-spectral region.
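The amplitude-modulation manipulation described above can be sketched as follows. The 10-Hz modulation rate and 200-ms burst duration are assumptions for illustration; only the in-phase versus 180° out-of-phase relationship between the two sources is taken from the description above.

```python
import numpy as np

fs = 48000
t = np.arange(int(0.2 * fs)) / fs               # 200-ms noise bursts (assumed)
rng = np.random.default_rng(1)
src_a = rng.standard_normal(t.size)             # two independently generated noises
src_b = rng.standard_normal(t.size)

fm = 10.0                                       # assumed modulation rate (Hz)
env_a = 0.5 * (1 + np.sin(2 * np.pi * fm * t))              # envelope at source A
env_in = env_a.copy()                                       # in-phase at source B
env_out = 0.5 * (1 + np.sin(2 * np.pi * fm * t + np.pi))    # 180° out-of-phase

in_phase_pair = (env_a * src_a, env_in * src_b)     # both sources co-modulated
out_phase_pair = (env_a * src_a, env_out * src_b)   # envelope dips alternate
```

In the out-of-phase condition the envelope minima alternate between sources, creating temporal-spectral regions where one source dominates the combined waveform, which is the condition relevant to the glimpsing hypothesis tested.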