This functional magnetic resonance imaging study focused on the neural substrates underlying human auditory space perception. To present natural-like sound locations to the subjects, acoustic stimuli convolved with individual head-related transfer functions were used. Activation foci, as revealed by analyses of contrasts and interactions between sound locations, formed a complex network including anterior and posterior regions of the temporal lobe, the posterior parietal cortex, the dorsolateral prefrontal cortex, and the inferior frontal cortex. The distinct topography of this network resulted from different patterns of activation and deactivation, depending on sound location, in the respective voxels. These patterns suggested different levels of complexity in the processing of auditory spatial information, starting with simple left/right discrimination in the regions surrounding the primary auditory cortex, whereas the integration of information on the hemispace and eccentricity of a sound may take place at later stages. Activations were identified in regions assigned to both the dorsal and the ventral auditory cortical streams, which are assumed to be preferentially concerned with the analysis of spatial and non-spatial sound features, respectively. The finding of activations in the ventral stream as well could, on the one hand, reflect the well-known functional duality of auditory spectral analysis, that is, the concurrent extraction of information on the location (from the spectrotemporal distortions caused by the head and pinnae) and the spectral characteristics of a sound source. On the other hand, this result may suggest the existence of shared neural networks performing analyses of 'higher-order' auditory cues for both the localization and the identification of sound sources.
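Conceptually, the stimulus preparation described here reduces to convolving a dry source signal with the listener's individually measured left/right head-related impulse responses. The sketch below illustrates this in Python; the `subject01_hrirs.npz` file and its keys are hypothetical placeholders for an individual HRTF measurement set, as the study does not specify an implementation.

```python
# Minimal sketch of binaural stimulus generation via HRTF convolution,
# as used to present "natural-like" sound locations over headphones.
import numpy as np
from scipy.signal import fftconvolve

def binauralize(mono, hrir_left, hrir_right):
    """Convolve a mono signal with a left/right head-related
    impulse-response pair to place it at the measured location."""
    left = fftconvolve(mono, hrir_left)
    right = fftconvolve(mono, hrir_right)
    return np.stack([left, right], axis=0)

# Example: place one second of white noise at one measured position.
fs = 44100
noise = np.random.randn(fs)
hrirs = np.load("subject01_hrirs.npz")   # hypothetical measurement file
stimulus = binauralize(noise, hrirs["left_30deg"], hrirs["right_30deg"])
```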
A real-time audio rendering system is introduced that combines a full room-specific simulation, dynamic crosstalk cancellation, and multitrack binaural synthesis for virtual acoustical imaging. The system is applicable to any room shape (normal, long, flat, coupled), independent of the a priori assumption of a diffuse sound field. This makes it possible to simulate indoor or outdoor spatially distributed, freely movable sources and a moving listener in virtual environments. In addition, near-to-head sources can be simulated using measured near-field HRTFs. The reproduction component provides headphone-free playback via dynamic crosstalk cancellation. The focus of the project is mainly on the integration and interaction of all involved subsystems. It is demonstrated that the system is capable of real-time room simulation and reproduction and can thus serve as a reliable platform for further research on VR applications.
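As a rough illustration of the crosstalk-cancellation building block, the following sketch computes static CTC filters by regularized inversion of the 2x2 loudspeaker-to-ear transfer matrix in each frequency bin. All names and the regularization constant are illustrative assumptions; the system described here recomputes such filters dynamically from tracked listener positions rather than using a fixed set.

```python
# Minimal sketch of crosstalk-cancellation filter design by
# Tikhonov-regularized inversion of the loudspeaker-to-ear plant.
import numpy as np

def ctc_filters(hrtf, beta=0.005):
    """hrtf: complex array of shape (n_bins, 2, 2), indexed as
    H[f, ear, speaker]. Returns filters C per bin with H @ C ~= I."""
    eye = np.eye(2)
    filters = np.empty_like(hrtf)
    for f in range(hrtf.shape[0]):
        H = hrtf[f]
        # Regularized normal equations: C = (H^H H + beta*I)^-1 H^H.
        # The beta term keeps loudspeaker gains bounded at frequencies
        # where the plant matrix is nearly singular.
        filters[f] = np.linalg.solve(H.conj().T @ H + beta * eye,
                                     H.conj().T)
    return filters
```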
The use of nonintrusive virtual environments is gaining importance, but work so far has focused mainly on the visual sense. Human perception, however, does not consist of visual input alone, so it is worthwhile to create multimodal, interactive virtual environments. This thesis describes the techniques required to add the acoustic component to virtual environments and the implementation of a software system that creates complex artificial acoustical scenes in real time. The system is based on binaural technology. It features spatially distributed sound sources, which are used to create an environment that is as authentic as possible. This comprises a description of the source, including its relevant angle-, distance-, and time-dependent radiation; the sound distribution in the virtual scene; the perception-related consideration of all sound field components; and the exact reproduction at the ears of the user. In this context, an approach for dynamic crosstalk cancellation is presented, which enables loudspeaker-based reproduction. The required filters are computed in real time on the basis of head-position data and measured transfer functions of the outer ear. Furthermore, the integration of this spatial audio system into a five-sided Virtual Reality display system is described and evaluated.
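One practical detail implied by real-time filter processing is exchanging filters without audible clicks when the tracked head position changes. A common approach, sketched below under assumed names (not taken from the thesis), is to convolve each audio block with both the outgoing and the incoming impulse response and crossfade between the two results.

```python
# Minimal sketch of block-wise filter exchange for a dynamic
# binaural/CTC renderer. For brevity, the convolution tails are
# truncated; a production system would use overlap-add or
# partitioned convolution to preserve them.
import numpy as np
from scipy.signal import fftconvolve

def render_block(block, old_ir, new_ir):
    """Crossfade one audio block from the outgoing to the
    incoming impulse response over the block length."""
    n = len(block)
    fade_in = np.linspace(0.0, 1.0, n)
    out_old = fftconvolve(block, old_ir)[:n]
    out_new = fftconvolve(block, new_ir)[:n]
    return (1.0 - fade_in) * out_old + fade_in * out_new
```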