“…[6] presented a framework for performing decomposition using spherical convolution under the assumption of a continuous pressure-sensitive microphone array surface. For discrete microphones positioned on the sphere surface this assumption is invalid, and a quadrature formula that preserves the orthonormality of spherical harmonics should be used, as in [2]. Quadrature based on Fliege points [8] was presented and evaluated, and two plane-wave decomposition algorithms were developed, in [7]. The current work analyzes the performance of those algorithms under realistic operating conditions (finite number of microphones, environmental noise, and aliasing effects) using both synthetic and experimental data.…”
Spherical microphone arrays offer a number of attractive properties such as direction-independent acoustic behavior and the ability to reconstruct the sound field in the vicinity of the array. Such ability is necessary in applications such as ambisonics and the recreation of auditory environments over headphones. We compare the performance of two scene reconstruction algorithms: one based on least-squares fitting of the observed potentials and another based on computing the far-field signature function directly from the microphone measurements. A number of features important for the design and operation of spherical microphone arrays in real applications are revealed. Results indicate that it is possible to reconstruct the sound scene up to order p with p² microphones.

Index Terms: Acoustic fields, spherical microphone arrays, array signal processing, acoustic position measurement.

1. INTRODUCTION

Spherical microphone arrays offer a number of properties attractive for the development of acoustic and audio systems with 3-D listening capability. Due to the 3-D symmetry of the array, the array beamforming pattern is independent of the steering direction, and the spatial structure of the acoustic field can be captured without distortion. [6] presented a framework for performing decomposition using spherical convolution under the assumption of a continuous pressure-sensitive microphone array surface. For discrete microphones positioned on the sphere surface this assumption is invalid, and a quadrature formula that preserves the orthonormality of spherical harmonics should be used, as in [2]. Quadrature based on Fliege points [8] was presented and evaluated, and two plane-wave decomposition algorithms were developed, in [7]. The current work analyzes the performance of those algorithms under realistic operating conditions (finite number of microphones, environmental noise, and aliasing effects) using both synthetic and experimental data.
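The quadrature requirement can be checked numerically. The sketch below uses a Gauss-Legendre (in cos θ) × uniform (in φ) product rule as an illustrative stand-in for the Fliege points of [8]; any rule exact to the required polynomial degree preserves the discrete orthonormality of the spherical harmonics.

```python
import numpy as np
from scipy.special import sph_harm

# Product quadrature on the sphere: Gauss-Legendre nodes in cos(theta),
# uniform nodes in phi.  (Illustrative stand-in for Fliege points; any rule
# exact to the required polynomial degree preserves orthonormality.)
L = 4                                          # maximum harmonic order checked
x_nodes, gl_w = np.polynomial.legendre.leggauss(L + 1)
theta = np.arccos(x_nodes)                     # polar angles
n_phi = 2 * L + 2
phi = 2 * np.pi * np.arange(n_phi) / n_phi     # azimuthal angles
TH, PH = np.meshgrid(theta, phi, indexing="ij")
W = np.outer(gl_w, np.full(n_phi, 2 * np.pi / n_phi))  # quadrature weights

def inner(n1, m1, n2, m2):
    """Discrete inner product <Y_n1^m1, Y_n2^m2> under the quadrature rule."""
    Y1 = sph_harm(m1, n1, PH, TH)              # scipy order: m, n, azimuth, polar
    Y2 = sph_harm(m2, n2, PH, TH)
    return np.sum(W * Y1 * np.conj(Y2))
```

With this rule, `inner(n, m, n, m)` evaluates to 1 and the inner product of distinct harmonics up to order L vanishes to machine precision; sampling the sphere without matched weights would not.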
2. BACKGROUND

In a space with no acoustic sources, acoustic wave propagation at wavenumber k is governed by the Helmholtz equation [7]

∇²ψ(k, r) + k²ψ(k, r) = 0,

where ψ(k, r) is the Fourier transform of the pressure. (Thanks to the U.S. Department of Veterans Affairs for funding this work.) Solutions of the Helmholtz equation can be expanded in a series of regular R_n^m(k, r) and singular S_n^m(k, r) spherical basis functions (see [7])

R_n^m(k, r) = j_n(kr) Y_n^m(θ, φ),  S_n^m(k, r) = h_n(kr) Y_n^m(θ, φ),

where (r, θ, φ) are the spherical coordinates of the radius vector r, j_n(kr) and h_n(kr) are the spherical Bessel and Hankel functions, and Y_n^m(θ, φ) are the orthonormal spherical harmonics. Any regular acoustic field ψ(k, r) in a region that does not contain sources can be represented as a sum of regular functions with some complex coefficients C_n^m(k) as

ψ(k, r) = Σ_{n=0}^{p} Σ_{m=-n}^{n} C_n^m(k) R_n^m(k, r).

To achieve a negligible truncation error it is sufficient to set the truncation order p as prescribed in [9].

3. SOLVING THE ACOUSTIC SCENE

The potential ψ(s₀, s) created at a point s₀ on the surface of a sound-hard sphere of radius a by a plane wave e^{ik s·r} propagating in the direction s is given by

ψ(s₀, s) = (i/(ka)²) Σ_{n=0}^{∞} i^n (2n+1) P_n(s₀·s) / h_n′(ka),

where P_n(s₀·s) is the Legendre polynomial of degree n…
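One common form for the surface potential on a sound-hard sphere due to a unit plane wave e^{iks·r} is ψ(s₀, s) = (i/(ka)²) Σ_n i^n (2n+1) P_n(s₀·s)/h_n′(ka), with h_n the spherical Hankel function of the first kind; the overall phase depends on the time convention. A SciPy sketch under these conventions:

```python
import numpy as np
from scipy.special import spherical_jn, spherical_yn, eval_legendre

def surface_potential(ka, cos_gamma, order):
    """Series for the potential on a sound-hard sphere due to a unit plane wave.

    ka        -- wavenumber times sphere radius
    cos_gamma -- cosine of the angle between s0 and the arrival direction s
    order     -- truncation order of the series
    """
    total = 0.0 + 0.0j
    for n in range(order + 1):
        # h_n^(1)'(ka) from derivatives of the spherical Bessel functions
        hn_prime = (spherical_jn(n, ka, derivative=True)
                    + 1j * spherical_yn(n, ka, derivative=True))
        total += (1j ** n) * (2 * n + 1) * eval_legendre(n, cos_gamma) / hn_prime
    return 1j * total / ka ** 2
```

The series converges very rapidly once n exceeds ka, because |h_n′(ka)| grows factorially; a modest truncation order already gives machine-precision results for small ka.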
“…The use of an acoustically rigid spherical baffle on which the microphone array is mounted is particularly convenient because it adds stability during pseudo-inversion, as opposed to the so-called open microphone arrays that do not use a rigid baffle [78,79].…”
Section: Combination Matrices For Spherical Arrays
mentioning
confidence: 99%
“…The capture of sound with uniform resolution across directions is possible by using a spherical array of microphones for recording [78,79] and a spherical array of sources for characterizing the HRTF dataset [27][28][29]. The use of an acoustically rigid spherical baffle on which the microphone array is mounted is particularly convenient because it adds stability during pseudo-inversion, as opposed to the so-called open microphone arrays that do not use a rigid baffle [78,79].…”
Section: Combination Matrices For Spherical Arrays
mentioning
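The stability benefit of the rigid baffle can be illustrated numerically: for an open sphere the radial mode term is j_n(ka), which has zeros, while a rigid sphere adds a scattered component that keeps the mode strength bounded away from zero. A sketch using the standard mode-strength expressions (up to sign and phase conventions, which do not affect the magnitudes compared here):

```python
import numpy as np
from scipy.special import spherical_jn, spherical_yn

def mode_strength(n, ka, rigid=True):
    """Radial mode amplitude b_n(ka): open sphere vs. rigid (sound-hard) sphere."""
    jn = spherical_jn(n, ka)
    if not rigid:
        return jn                      # open array: plain spherical Bessel term
    jn_p = spherical_jn(n, ka, derivative=True)
    hn = jn + 1j * spherical_yn(n, ka)
    hn_p = jn_p + 1j * spherical_yn(n, ka, derivative=True)
    return jn - (jn_p / hn_p) * hn     # rigid baffle adds the scattered part
```

At ka = π (the first zero of j_0) the open-sphere b_0 vanishes, so pseudo-inversion must divide by a value near zero, while the rigid-sphere b_0 remains of order one.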
Signal processing methods that accurately synthesize sound pressure at the ears are important in the development of spatial audio devices for personal use. This paper reviews the current methods and focuses on a promising class of these methods that rely on combining the spatial information available in microphone array recordings and datasets of head-related transfer functions (HRTFs). These two kinds of spatial information enable the consideration of dynamic and individual auditory localization cues during binaural synthesis. A general formulation for such a class of methods is presented in terms of a linear system of equations, whose associated matrix is composed of acoustic transfer functions that relate the positions of microphones and HRTFs. Based on this formulation, it is shown that most of the existing methods under consideration can be classified into two prominent approaches: 1) the HRTF modeling approach and 2) the microphone signal modeling approach. An important relation between these two approaches is evidenced in the general formulation: when one approach arises from the solution to an overdetermined system, the other corresponds to an underdetermined system, and vice versa. Illustrative examples of binaural synthesis from spherical arrays are provided by means of simulations. Underdetermined systems generally achieve better performance than overdetermined ones.
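The over/underdetermined distinction can be sketched with a random stand-in for the acoustic transfer matrix (the dimensions and variable names below are illustrative, not taken from the paper). With more HRTF directions than microphones the system H w = x is underdetermined; the pseudoinverse returns the minimum-norm solution, and adding any nullspace component leaves the observations unchanged while increasing the solution norm:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: Q microphone signals, S HRTF directions.
# S > Q makes the system H w = x underdetermined.
Q, S = 16, 36
H = rng.standard_normal((Q, S)) + 1j * rng.standard_normal((Q, S))
x = rng.standard_normal(Q) + 1j * rng.standard_normal(Q)

H_pinv = np.linalg.pinv(H)
w = H_pinv @ x                          # minimum-norm solution of H w = x

# Project a random vector onto the nullspace of H; adding it to w changes
# nothing observable but strictly increases the norm.
nullspace_proj = np.eye(S) - H_pinv @ H
v = nullspace_proj @ (rng.standard_normal(S) + 1j * rng.standard_normal(S))
w_alt = w + v
```

Both `w` and `w_alt` reproduce the microphone observations exactly, which is why underdetermined formulations can match the data perfectly; the pseudoinverse picks the least-energy representative among them.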
“…This paper instead proposes the use of beamforming optimized with respect to a linear model of sound propagation and scattering. During the last decade, a number of works that use the sound scattering properties of rigid bodies for designing microphone arrays have appeared, such as those of Meyer and Elko [5] and Teutsch and Kellermann [6]. These works leverage the effective increase in microphone array aperture size, resulting from the scattering of rigid spheres and cylinders, to improve the signal-to-noise ratio (SNR) performance at low frequencies and increase the aliasing frequency.…”
Super-directional loudspeaker arrays can be used to achieve high directivity in a limited low-frequency range. As opposed to microphone arrays, the distance between the loudspeakers has to be relatively large, resulting in aliasing starting at relatively low frequencies. On the other hand, mounting a loudspeaker on a rigid baffle (e.g., a rigid cylinder or sphere) increases its directivity with frequency. Using super-directional array techniques at low frequencies and leveraging the loudspeakers' increased directivity at high frequencies enables high directivity both at low and high frequencies. The design of baffled circular loudspeaker arrays and an improved beamforming procedure for achieving high directivity in a broad frequency range are described.