Phased array measurements of the sound pressure in a room make it possible to reconstruct the sound field, i.e., to estimate pressure, velocity, and sound intensity at positions that have not been measured. Typically, analytical wave functions are used to expand the measured data and interpolate the wave field. However, these bases are often redundant and lead to non-sparse solutions, as multiple basis functions are required to represent the measured data. In this study, we examine the use of dictionary learning to obtain a sparse representation of the sound field in a room, using atoms learned from experimental data. The aim is to obtain a model of reduced dimensionality that optimally represents the spatial properties of the sound field in a room. We analyse the properties of the extracted dictionaries, their ability to reconstruct the sound field, and their generality. A broader question is whether a dictionary extracted from one particular room is suitable for representing the sound field in another room.
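The abstract does not specify the learning algorithm, so the following is only an illustrative sketch of the general idea: snapshots of the pressure field are collected as columns of a data matrix, and a dictionary of atoms is learned by alternating sparse coding with a dictionary update (here, a MOD-style least-squares update with hard thresholding; all function names and the synthetic data are hypothetical stand-ins for measured pressure data).

```python
import numpy as np

def hard_threshold(X, k):
    """Keep only the k largest-magnitude coefficients in each column."""
    Z = np.zeros_like(X)
    idx = np.argsort(-np.abs(X), axis=0)[:k, :]
    cols = np.arange(X.shape[1])
    Z[idx, cols] = X[idx, cols]
    return Z

def learn_dictionary(Y, n_atoms, sparsity, n_iter=30, seed=0):
    """MOD-style dictionary learning sketch.

    Y holds one (synthetic) pressure snapshot per column; the learned
    dictionary D has unit-norm atoms as columns.
    """
    rng = np.random.default_rng(seed)
    D = rng.standard_normal((Y.shape[0], n_atoms))
    D /= np.linalg.norm(D, axis=0)
    for _ in range(n_iter):
        # crude sparse coding: least-squares fit, then hard thresholding
        X = hard_threshold(np.linalg.lstsq(D, Y, rcond=None)[0], sparsity)
        # Method of Optimal Directions update: D = Y X^+
        D = Y @ np.linalg.pinv(X)
        # re-initialise atoms that were never used, then renormalise
        norms = np.linalg.norm(D, axis=0)
        dead = norms < 1e-8
        D[:, dead] = rng.standard_normal((Y.shape[0], int(dead.sum())))
        D /= np.linalg.norm(D, axis=0)
    return D

# synthetic demo: snapshots generated from a sparse ground-truth model
rng = np.random.default_rng(1)
D_true = rng.standard_normal((16, 32))
D_true /= np.linalg.norm(D_true, axis=0)
codes = hard_threshold(rng.standard_normal((32, 200)), 3)
Y = D_true @ codes
D = learn_dictionary(Y, n_atoms=32, sparsity=3)
```

The sparsity constraint is what distinguishes this from a plane-wave expansion: each snapshot is explained by a few learned atoms rather than many redundant analytical basis functions.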
Sound source localization is crucial for communication and sound scene analysis. This study uses direction-of-arrival estimates from multiple ad hoc distributed microphone arrays to localize sound sources in a room. An affine mapping between the independent array estimates and the source coordinates is derived from a set of calibration points. Experiments show that the affine model is sufficient to locate a source and can be calibrated to physical dimensions. A projection of the local array estimates increases localization accuracy, particularly further away from the calibrated region. Localization tests in three dimensions compare the affine approach to a nonlinear neural network.
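The calibration step described above amounts to a linear least-squares fit. A minimal sketch, assuming each array contributes a DOA feature vector (e.g., a unit direction vector) and that features from all arrays are stacked into one row per calibration point; the function names and synthetic data are hypothetical:

```python
import numpy as np

def fit_affine(U, P):
    """Fit an affine map P ≈ U @ A + b by least squares.

    U: (n_cal, n_feat) stacked DOA features from all arrays, one row
       per calibration point.
    P: (n_cal, 3) known source coordinates at the calibration points.
    """
    Ua = np.hstack([U, np.ones((U.shape[0], 1))])   # append bias column
    W, *_ = np.linalg.lstsq(Ua, P, rcond=None)
    return W[:-1], W[-1]                            # A: (n_feat, 3), b: (3,)

def locate(U, A, b):
    """Map DOA features of new observations to source coordinates."""
    return U @ A + b

# synthetic check: features that are exactly affine in the source position
rng = np.random.default_rng(0)
A_true = rng.standard_normal((6, 3))   # e.g. two arrays, 3 DOA components each
b_true = rng.standard_normal(3)
U_cal = rng.standard_normal((20, 6))
P_cal = U_cal @ A_true + b_true        # noiseless calibration set
A, b = fit_affine(U_cal, P_cal)
```

With noiseless, exactly affine data the fit recovers the map exactly; with real DOA estimates the residual of this fit would quantify how far the true mapping departs from affine.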
The acquisition of the spatio-temporal characteristics of a sound field over a large volume of space is experimentally challenging, as a large number of transducers is required to sample the sound field. Sound field reconstruction methods offer a resource-efficient alternative, as they enable the interpolation and extrapolation of the sound field from a limited number of observations. In this study, we examine the spatio-temporal and spatio-spectral reconstruction of the sound field in a room from distributed measurements of the sound pressure. Specifically, a variational Gaussian process regression model is formulated, using time-domain anisotropic kernels to reconstruct the direct sound and early reflections, and frequency-domain isotropic kernels to reconstruct the late reverberant field. The proposed methodology is compared experimentally to classical regression models based on plane wave decompositions, which are widely used for sound field reconstruction in enclosures due to their simplicity and accuracy.
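As a much-simplified sketch of the isotropic frequency-domain ingredient (not the paper's variational model), the following interpolates a single-frequency pressure field with standard GP regression using the diffuse-field covariance sinc(kr), where k is the wavenumber and r the distance between positions; all names and the plane-wave test data are illustrative assumptions:

```python
import numpy as np

def sinc_kernel(X1, X2, k):
    """Isotropic diffuse-field covariance sinc(k * r) between positions."""
    r = np.linalg.norm(X1[:, None, :] - X2[None, :, :], axis=-1)
    return np.sinc(k * r / np.pi)   # np.sinc(x) = sin(pi x)/(pi x)

def gp_reconstruct(X_obs, p_obs, X_new, k, noise=1e-4):
    """GP posterior-mean pressure at unobserved positions X_new."""
    K = sinc_kernel(X_obs, X_obs, k) + noise * np.eye(len(X_obs))
    alpha = np.linalg.solve(K, p_obs)
    return sinc_kernel(X_new, X_obs, k) @ alpha

# demo: reconstruct a single plane wave at 500 Hz from 40 scattered points
c, f = 343.0, 500.0
k = 2 * np.pi * f / c
rng = np.random.default_rng(0)
X_obs = rng.uniform(0.0, 1.0, (40, 3))   # microphone positions in a 1 m cube
d = np.array([1.0, 0.0, 0.0])            # propagation direction
p_obs = np.cos(k * X_obs @ d)            # real part of the plane wave
X_new = rng.uniform(0.0, 1.0, (10, 3))
p_hat = gp_reconstruct(X_obs, p_obs, X_new, k)
```

The choice of kernel encodes the physics: the sinc covariance corresponds to an isotropic superposition of plane waves from all directions, which is why it suits the late reverberant field, whereas the direct sound and early reflections have preferred directions and call for anisotropic kernels.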