The perception of spatially distributed sound sources was investigated in two listening experiments conducted in anechoic conditions, with 13 loudspeakers evenly distributed in the frontal horizontal plane emitting incoherent noise signals. In the first experiment, widely distributed sound sources with gaps in their distribution emitted pink noise. The results indicated that the exact loudspeaker distribution could not be perceived accurately and that the width of the distribution was perceived as narrower than it actually was. Up to three simultaneously emitting, spatially distributed loudspeakers could be perceived individually. In addition, the number of loudspeakers indicated as emitting sound was smaller than the actual number. In the second experiment, a reference with 13 loudspeakers and test cases with fewer loudspeakers were presented, and their perceived spatial difference was rated. The effect of the noise bandwidth was of particular interest; noise with different bandwidths centered around 500 and 4000 Hz was used. The results indicated that when the number of loudspeakers was increased from four to seven, the perceived auditory event was very similar to that perceived with 13 loudspeakers at all bandwidths. The perceived differences were larger with wideband noise than with narrow-band noise.
Tinnitus is associated with changes in neural activity. How such alterations impact the localization ability of subjects with tinnitus remains largely unexplored. In this study, subjects with self-reported unilateral tinnitus were compared to subjects with matching hearing loss at high frequencies and to normal-hearing subjects in horizontal and vertical plane localization tasks. Subjects were asked to localize a pink noise source either alone or over background noise. Results showed some degree of difference between subjects with tinnitus and subjects with normal hearing in horizontal plane localization, which was exacerbated by background noise. However, this difference could be explained by different hearing sensitivities between groups. In vertical plane localization there was no difference between groups in the binaural listening condition, but in monaural listening the tinnitus group localized significantly worse with the tinnitus ear. This effect remained when accounting for differences in hearing sensitivity. It is concluded that tinnitus may degrade auditory localization ability, but this effect is for the most part due to the associated levels of hearing loss. More detailed studies are needed to fully disentangle the effects of hearing loss and tinnitus.
Spatial perception of concurrently active sound sources was investigated in an exploratory listening experiment. Incoherent noise source distributions of varying spatial characteristics were presented from loudspeaker arrays in anechoic conditions. The arrays coincided with the ±45° angular sectors in the frontal median and horizontal planes. The task of the immobile subjects was to report the directions of the loudspeakers they perceived to be emitting sound. The results from the median-plane distributions suggest that two concurrent sources located along the vertical midline can be perceived individually, without resorting to head movements, when they are separated in elevation by 60° or more. With source pairs separated by less than 60°, and with more complex physical distributions, the distributions were perceived inaccurately, biased, and spatially compressed, but nevertheless not as point-like auditory images.
Previous studies on fusion in speech perception have demonstrated the ability of the human auditory system to group separate components of speech-like sounds together and consequently to enable the identification of speech despite the spatial separation between the components. Typically, the spatial separation has been implemented using headphone reproduction where the different components evoke auditory images at different lateral positions. In the present study, a multichannel loudspeaker system was used to investigate whether the correct vowel is identified and whether two auditory events are perceived when a noise-excited vowel is divided into two components that are spatially separated. The two components consisted of the even and odd formants. Both the amount of spatial separation between the components and the directions of the components were varied. Neither the spatial separation nor the directions of the components affected the vowel identification. Interestingly, an additional auditory event not associated with any vowel was perceived at the same time when the components were presented symmetrically in front of the listener. In such scenarios, the vowel was perceived from the direction of the odd formant components.
Synthesis of volumetric virtual sources is a useful technique for auditory displays and virtual worlds. This task can be simplified into synthesis of perceived spatial extent. Previous research in virtual-world Directional Audio Coding has shown that spatial extent can be synthesized with monophonic sources by applying a time-frequency-space decomposition, i.e., randomly distributing time-frequency bins of the source signal. However, although this technique often achieved perception of spatial extent, it was not guaranteed and the timbre could degrade. In this article this technique is revisited in detail and the effect of different parameters is examined to ultimately achieve optimal quality and perception in all situations. The results of a series of informal and formal experiments are presented here, and they suggest that the revised method is viable in many cases. There is some dependency on the signal content that requires proper tuning of parameters. Furthermore, it is shown that different distribution widths can be produced with the method as well. From a psychoacoustical perspective, it is interesting that distributed narrow frequency bands form a spatially extended auditory event with no apparent directional focus.
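The time-frequency-space decomposition described above can be sketched in code. The following is a minimal illustration, not the authors' implementation: the function name, STFT parameters, and the strategy of routing each time-frequency bin to one randomly chosen loudspeaker channel are assumptions chosen for clarity. Because the random assignment partitions the bins, the channel signals sum back to the original source signal.

```python
import numpy as np
from scipy.signal import stft, istft

def spread_source(x, n_ch, fs=48000, nperseg=1024, seed=0):
    """Distribute the time-frequency bins of a mono signal over n_ch channels.

    Sketch of a time-frequency-space decomposition: each STFT bin is routed
    to one randomly chosen channel (a hypothetical, simplified scheme), so
    that the channel signals are mutually disjoint in time-frequency and
    sum to the original signal.
    """
    rng = np.random.default_rng(seed)
    _, _, X = stft(x, fs=fs, nperseg=nperseg)
    # One random channel index per time-frequency bin.
    assign = rng.integers(0, n_ch, size=X.shape)
    channels = []
    for ch in range(n_ch):
        # Keep only the bins assigned to this channel; zero the rest.
        Xc = np.where(assign == ch, X, 0.0)
        _, xc = istft(Xc, fs=fs, nperseg=nperseg)
        channels.append(xc)
    return np.stack(channels)
```

In a playback context each row would feed one loudspeaker of the array; the perceived extent then depends on how widely the chosen loudspeakers are distributed, which is one of the parameters the article examines.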
In performing arts venues, the spectra of direct and reflected sound at a receiving location differ due to the seat dip effect, diffusive and absorptive surfaces, and source directivity. This paper examines the influence of differing lead and lag spectral contents on the echo suppression threshold. The results indicate that, for a high-pass-filtered direct sound and a broadband reflection, attenuating low frequencies initially raises the echo suppression threshold, while at higher cutoff frequencies the threshold decreases drastically. For broadband direct sound and filtered reflections, the echo suppression threshold is inversely related to the high-frequency content.