Selected subjects with bilateral cochlear implants (CIs) showed excellent horizontal localization of wide-band sounds in previous studies. The current study investigated the localization cues used by two bilateral CI subjects with outstanding localization ability. The first experiment studied localization for sounds of different spectral and temporal composition in the free field. Localization of wide-band noise was unaffected by envelope pulsation, suggesting that envelope interaural time difference (ITD) cues contributed little. Low-pass noise was not localizable for one subject, and localization depended on the cutoff frequency for the other, which suggests that ITDs played only a limited role. High-pass noise with slow envelope changes could be localized, in line with a contribution of interaural level differences (ILDs). In experiment 2, the processors of one subject were raised above the head to eliminate the head shadow. When they were spaced at ear distance, ITDs allowed discrimination of left from right for a pulsed wide-band noise. Good localization was observed with a head-sized cardboard panel inserted between the processors, showing the reliance on ILDs. Experiment 3 investigated localization in virtual space with manipulated ILDs and ITDs. Localization shifted predominantly for offsets in ILDs, even for pulsed high-pass noise. This confirms that envelope ITDs contributed little and that localization with bilateral CIs was dominated by ILDs.
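The ITD and ILD cues discussed above can be made concrete with a standard textbook approximation that is not part of the study itself: the Woodworth spherical-head model, ITD = r/c * (theta + sin theta). The head radius and speed of sound below are assumed illustrative values, not parameters from the experiments.

```python
import math

def woodworth_itd(azimuth_deg, head_radius_m=0.0875, speed_of_sound=343.0):
    """Approximate interaural time difference (in seconds) for a source at
    the given azimuth, using the Woodworth spherical-head model:
    ITD = r/c * (theta + sin(theta)).
    Assumes an average head radius of 8.75 cm and c = 343 m/s."""
    theta = math.radians(azimuth_deg)
    return head_radius_m / speed_of_sound * (theta + math.sin(theta))

# A source 45 degrees off the midline yields an ITD of roughly 0.38 ms;
# a source straight ahead (0 degrees) yields no ITD at all.
itd_45 = woodworth_itd(45)
itd_0 = woodworth_itd(0)
```

The model predicts fine-structure ITDs of a few hundred microseconds for lateral sources; the study's finding is that bilateral CI listeners rely instead on ILDs, since current CI processors do not convey fine-structure timing reliably.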
After successful cochlear implantation in one ear, some patients continue to use a hearing aid at the contralateral ear. They report improved reception of speech, especially in noise, as well as better perception of music when the hearing aid and cochlear implant are used in this bimodal combination. Some individuals in this bimodal patient group also report the impression of an improved localization ability. Similar experiences are reported by the group of bilateral cochlear implantees. In this study, a survey of 11 bimodally and 4 bilaterally equipped cochlear implant users was carried out to assess localization ability. Individuals in the bimodal group were all provided with the same type of hearing aid in the opposite ear, and subjects in the bilateral group used cochlear implants of the same manufacturer on each ear. Subjects adjusted the spot of a computer-controlled laser pointer to the perceived direction of sound incidence in the frontal horizontal plane by rotating a trackball. Two subjects of the bimodal group who had substantial residual hearing showed localization ability in the bimodal configuration, whereas with each single device alone only the subject with better residual hearing was able to discriminate the side of sound origin. Five other subjects with more pronounced hearing loss displayed an ability for side discrimination through the use of bimodal aids, while four of them were already able to discriminate the side with a single device. Of the bilateral cochlear implant group, one subject showed localization accuracy close to that of normal-hearing subjects. This subject was also able to discriminate the side of sound origin using the first implanted device alone. The other three bilaterally equipped subjects showed limited localization ability using both devices. Among them, one subject demonstrated a side-discrimination ability using only the first implanted device.
Background sounds, such as narration, music with prominent staccato passages, and office noise, impair verbal short-term memory even when these sounds are irrelevant. This irrelevant sound effect (ISE) is evoked by so-called changing-state sounds, which are characterized by a distinct temporal structure with varying successive auditory-perceptive tokens. However, because of the absence of an appropriate psychoacoustically based instrumental measure, the disturbing impact of a given speech or nonspeech sound could not be predicted until now but necessitated behavioral testing. Our database for parametric modeling of the ISE included approximately 40 background sounds (e.g., speech, music, tone sequences, office noise, traffic noise) and corresponding performance data that were collected from 70 behavioral measurements of verbal short-term memory. The hearing sensation fluctuation strength was chosen to model the ISE; it describes the percept of fluctuations when listening to slowly modulated sounds (f_mod < 20 Hz). On the basis of the fluctuation strength of background sounds, the algorithm estimated behavioral performance data in 63 of 70 cases within the interquartile ranges. In particular, all real-world sounds were modeled adequately, whereas the algorithm overestimated the (non-)disturbance impact of synthetic steady-state sounds that were constituted by a repeated vowel or tone. Implications of the algorithm's strengths and prediction errors are discussed.
Combined electric-acoustic stimulation (EAS) in one ear, supported by a hearing aid on the contralateral ear, provided significantly improved speech perception compared with bilateral cochlear implantation. Although the scores for monosyllabic words in quiet were higher in the bilateral CI group, the EAS group performed better in different noise and sound-field conditions. Furthermore, the results indicated that binaural interaction between EAS in one ear and residual acoustic hearing in the opposite ear enhances speech perception in complex noise situations. Neither bilateral CI nor bimodal EAS users benefited from short temporal masker gaps; the better performance of the EAS group in modulated noise conditions could therefore be explained by the improved transmission of fundamental-frequency cues in the lower-frequency region of acoustic hearing, which might foster the grouping of auditory objects.
To assess temporal integration in normal hearing, cochlear impairment, and impairment simulated by masking, absolute thresholds for tones were measured as a function of duration. Durations ranged from 500 ms down to 15 ms at 0.25 kHz, 8 ms at 1 kHz, and 2 ms at 4 and 14 kHz. An adaptive two-interval, two-alternative forced-choice (2I-2AFC) procedure with feedback was used. On each trial, two 500-ms observation intervals, marked by lights, were presented with an interstimulus interval of 250 ms. The monaural signal was presented in the temporal center of one observation interval. The results for five normal and six impaired listeners show that: (1) normal listeners' thresholds decrease by about 8 to 10 dB per decade of duration, as expected; (2) listeners with cochlear impairments generally show less temporal integration than normal listeners; and (3) listeners with impairments simulated using masking noise generally show the same amount of temporal integration as normal listeners tested in quiet. The difference between real and simulated impairments indicates that the reduced temporal integration observed in impaired listeners is probably not due to splatter of energy into frequency regions where thresholds are low, but reflects reduced temporal integration per se.
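The reported integration slope of 8 to 10 dB per decade of duration can be written as a simple relation: the threshold shift relative to a long reference tone is slope * log10(d_ref / d). A minimal sketch, assuming the 500-ms reference from the study and an illustrative slope of 9 dB/decade (the midpoint of the reported range):

```python
import math

def threshold_shift_db(duration_ms, ref_duration_ms=500.0, slope_db_per_decade=9.0):
    """Predicted elevation of absolute threshold (dB) for a tone of the
    given duration, relative to a 500-ms reference tone, assuming a
    constant integration slope in dB per decade of duration."""
    return slope_db_per_decade * math.log10(ref_duration_ms / duration_ms)

# Shortening the tone from 500 ms to 50 ms spans exactly one decade,
# so the predicted threshold rises by the slope value itself (9 dB here).
shift = threshold_shift_db(50)
```

Under this relation, a listener with reduced temporal integration (finding 2) would show a shallower slope, i.e., a smaller `slope_db_per_decade` value.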
Summary. In this chapter, psycho-physical methods that are useful for both psycho-acoustics and sound-quality engineering are discussed, namely the method of random access, the semantic differential, category scaling, and magnitude estimation. Models of basic psycho-acoustic quantities such as loudness, sharpness, and roughness, as well as composite metrics such as psycho-acoustic annoyance, are introduced, and their application to sound-quality design is explained. For some studies on sound quality, the results of auditory evaluations are compared to predictions from algorithmic models. Further, influences of brand-name image as well as of the meaning of sound on sound-quality evaluation are reported. Finally, the effects of visual cues on sound-quality ratings are mentioned.