Objectives: This study aimed to characterize horizontal-plane sound localization in interfering noise at different signal-to-noise ratios (SNRs) and to compare performance across normal-hearing listeners and users of unilateral and bilateral cochlear implants (CIs). CI users report difficulties with listening in noisy environments. While their difficulties with speech understanding have been investigated in several studies, their ability to localize sounds in background noise has not been extensively examined, despite the benefits of binaural hearing being greatest in noisy situations. Sound localization is a measure of binaural processing and is thus well suited to assessing the benefit of bilateral implantation. The results will inform clinicians and implant manufacturers about where to focus their efforts to improve localization with CIs in noisy situations.

Design: Six normal-hearing listeners, four unilateral CI users, and ten bilateral CI users indicated the perceived location of sound sources using a light-pointer method. Target sounds were noise pulses played from one of 11 loudspeakers placed between −80° and +80° in the frontal horizontal plane in the free field. Localization was assessed in quiet and in diffuse background noise at SNRs between +10 and −7 dB. Speech reception thresholds were also measured and their relation to the localization results examined.

Results: Localization performance declined with decreasing SNR: target sounds were perceived closer to the median plane, and the standard deviation of responses increased. Performance across groups was compared using a measure of "Spatial Resolvability" (SR), defined as the angular separation between two sound sources that would enable an ideal observer to distinguish them correctly 69.1% of the time. For all participants, SR increased with decreasing SNR; that is, at low SNRs sound sources remained distinguishable only when their spatial separation was larger. Normal-hearing participants performed best, with SR between 1.4° and 5.1° in quiet. Bilateral CI users showed SR between 8.3° and 43.6° in quiet, corresponding approximately to the spatial resolution of normal-hearing listeners at an SNR of −5 dB. Most bilateral CI users lost the ability to correctly determine which side a sound came from at an SNR of −3 dB. Overall, the SNR had to be at least +7 dB for all bilateral CI users to achieve localization performance near that in quiet. No significant correlation was found between spatial resolution and speech reception thresholds, but the speech-processor sensitivity setting did significantly affect performance. Unilateral CI users showed the most severe localization problems, with only two of four participants able to correctly determine which side sounds came from even in quiet.

Conclusions: This study is the first to examine sound localization with CIs at various SNRs and to compare it to normal hearing. The results confirm that localization with CIs is strongly disrupted in noisy situations. Bilateral CIs were shown to be clearly...
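The 69.1%-correct criterion behind "Spatial Resolvability" has a convenient statistical interpretation: if responses to each source are Gaussian with equal standard deviation σ and an ideal observer places the decision criterion midway between the two response means, percent correct is Φ(Δμ/2σ), and Φ(0.5) ≈ 0.6915, so 69.1% correct corresponds to a response separation of exactly one standard deviation. A minimal sketch under these assumptions (the function name and the linear response model are illustrative, not the study's code):

```python
from statistics import NormalDist

def spatial_resolvability(sigma_deg, gain):
    """Angular separation of two sources that an ideal observer
    distinguishes 69.1% of the time (illustrative reconstruction).

    Assumed response model: response azimuth = gain * target azimuth,
    with Gaussian scatter of standard deviation sigma_deg. Two sources
    separated by SR then produce response means gain * SR apart, and a
    midpoint criterion yields percent correct Phi(gain * SR / (2 * sigma)).
    Setting this equal to Phi(0.5) = 0.6915 gives SR = sigma / gain.
    """
    return sigma_deg / gain

# Sanity check of the criterion: Phi(0.5) is indeed ~69.1% correct.
pc = NormalDist().cdf(0.5)
print(round(pc, 4))  # 0.6915
```

Note how the model reflects the reported results: at low SNRs, responses compress toward the median plane (gain < 1) and scatter grows (larger σ), and both effects inflate SR.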
Background sounds, such as narration, music with prominent staccato passages, and office noise, impair verbal short-term memory even when these sounds are irrelevant. This irrelevant sound effect (ISE) is evoked by so-called changing-state sounds, which are characterized by a distinct temporal structure of varying successive auditory-perceptive tokens. However, in the absence of an appropriate psychoacoustically based instrumental measure, the disturbing impact of a given speech or nonspeech sound could not be predicted until now but had to be determined through behavioral testing. Our database for parametric modeling of the ISE comprised approximately 40 background sounds (e.g., speech, music, tone sequences, office noise, traffic noise) and corresponding performance data collected from 70 behavioral measurements of verbal short-term memory. The hearing sensation fluctuation strength, which describes the percept of fluctuations when listening to slowly modulated sounds (f_mod < 20 Hz), was chosen to model the ISE. On the basis of the fluctuation strength of the background sounds, the algorithm estimated the behavioral performance data in 63 of 70 cases within the interquartile ranges. In particular, all real-world sounds were modeled adequately, whereas the algorithm overestimated the (non-)disturbance impact of synthetic steady-state sounds composed of a repeated vowel or tone. Implications of the algorithm's strengths and prediction errors are discussed.
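The core acoustic quantity, the strength of slow (< 20 Hz) envelope modulations, can be illustrated with a toy metric. This is not the fluctuation-strength model used in the study, which operates on a critical-band, loudness-based representation; the function name, the crude rectified envelope, and the normalization are illustrative assumptions only:

```python
import numpy as np

def low_freq_modulation_index(signal, fs, fmax=20.0):
    """Crude stand-in for fluctuation strength: spectral energy of
    envelope modulations below fmax Hz, normalized by the mean
    envelope level (illustrative sketch only)."""
    envelope = np.abs(signal)                 # cheap rectified envelope
    env = envelope - envelope.mean()          # remove DC before the FFT
    spec = np.abs(np.fft.rfft(env))
    freqs = np.fft.rfftfreq(len(env), 1.0 / fs)
    band = (freqs > 0) & (freqs < fmax)
    return spec[band].sum() / (envelope.mean() * len(env) + 1e-12)

fs = 8000
t = np.arange(fs) / fs
steady = np.sin(2 * np.pi * 1000 * t)                   # steady tone
fluct = (1 + 0.9 * np.sin(2 * np.pi * 4 * t)) * steady  # 4-Hz modulation
print(low_freq_modulation_index(fluct, fs) >
      low_freq_modulation_index(steady, fs))            # True
```

The toy index captures the qualitative distinction the abstract draws: a 4-Hz amplitude-modulated ("changing-state") sound scores much higher than a steady tone.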
The article reports the experience gained from two implementations of the "Simulated Open-Field Environment" (SOFE), a setup that allows sounds to be played at calibrated levels over a wide frequency range from multiple loudspeakers in an anechoic chamber. Playing sounds from loudspeakers in the free field has the advantage that each participant listens with their own ears, so the individual characteristics of the ears are captured in the sound they hear. This makes easy and accurate comparisons possible between listeners with and without hearing devices. The SOFE uses custom calibration software to ensure individual equalization of each loudspeaker. Room-simulation software creates the spatio-temporal reflection pattern of sound sources in rooms, which is played via the SOFE loudspeakers. The sound playback system is complemented by a video projection facility that can be used to collect or give feedback, or to study auditory-visual interaction. The article discusses the acoustical and technical requirements for accurate sound playback against the specific needs of hearing research. An introduction is given to the software concepts that allow easy, high-level control of the setup and thus fast experiment development, turning the SOFE into a "Swiss army knife" for auditory, spatial-hearing, and audio-visual research.
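Per-loudspeaker equalization of the kind described can be sketched as a regularized inverse filter computed from a measured impulse response. The function name, FFT length, and regularization constant below are assumptions for illustration, not the SOFE's actual calibration code:

```python
import numpy as np

def inverse_eq_filter(measured_ir, n_fft=1024, reg=1e-3):
    """Linear-phase magnitude-equalization filter for one loudspeaker
    (illustrative sketch). The inverse magnitude is regularized so deep
    notches in the response are not boosted without bound; phase is left
    linear, i.e. the filter only adds a fixed delay of n_fft/2 samples."""
    H = np.fft.rfft(measured_ir, n_fft)
    mag = np.abs(H)
    inv_mag = mag / (mag ** 2 + reg)       # regularized 1/|H|
    h = np.fft.irfft(inv_mag, n_fft)       # zero-phase prototype
    return np.roll(h, n_fft // 2)          # shift to make it causal
```

For an already-flat loudspeaker (impulse response = delta), the equalizer collapses to a delayed, near-unit impulse, as expected.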
Users of bilateral cochlear implants (CIs) experience difficulties localizing sounds in reverberant rooms, even in rooms where normal-hearing listeners would hardly notice the reverberation. We measured the localization ability of seven bilateral CI users listening with their own devices in anechoic space and in a simulated reverberant room. To determine the factors affecting performance in reverberant space, we measured sensitivity to interaural time differences (ITDs), interaural level differences (ILDs), and forward masking in the same participants using direct computer control of the electric stimulation in their CIs. Localization performance, quantified by the coefficient of determination r² and the root-mean-squared error, was significantly worse in the reverberant room than in anechoic conditions. Localization performance in the anechoic room, expressed as r², was best predicted by the participants' sensitivity to ILDs. However, the decrease in localization performance caused by reverberation was better predicted by sensitivity to envelope ITDs measured on single electrode pairs, with a correlation coefficient of 0.92. The CI users who were highly sensitive to envelope ITDs also better maintained their localization ability in reverberant space. Results in the forward-masking task added only marginally to the predictions of localization performance in either environment. The results indicate that the envelope ITDs provided by CI processors support localization in reverberant space. Thus, methods that improve perceptual access to envelope ITDs could help improve localization with bilateral CIs in everyday listening situations.
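The two localization metrics named above, r² between target and response azimuths and the RMS error, can be computed as in this sketch (function and variable names are assumptions, not the study's code). The two capture different failure modes: responses compressed toward the median plane but still linear in the target angle yield r² = 1 while the RMS error grows, which is one reason to report both.

```python
import numpy as np

def localization_metrics(targets_deg, responses_deg):
    """Coefficient of determination r^2 and root-mean-squared error
    between target and response azimuths (illustrative sketch)."""
    t = np.asarray(targets_deg, dtype=float)
    r = np.asarray(responses_deg, dtype=float)
    rmse = float(np.sqrt(np.mean((r - t) ** 2)))
    r2 = float(np.corrcoef(t, r)[0, 1] ** 2)
    return r2, rmse

# Compressed but perfectly linear responses: r^2 = 1, RMS error ~7.1 deg.
r2, rmse = localization_metrics([-60, -30, 0, 30, 60],
                                [-50, -25, 0, 25, 50])
```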