This paper describes a technique for localizing acoustic sources in all directions in the far field. Classical beamforming techniques based on planar arrays provide an acoustic map restricted to a limited solid angle, whereas a spherical array has no such limitation, since it has no preferential direction. In the processing, called Spherical Harmonics Beamforming (SHB), the sound field on the sphere is decomposed into spherical harmonic functions, and a corrected summation then gives the acoustic contribution from a given direction. We have used a rigid spherical array, which has the advantage that the microphone cabling and integrated cameras can be hidden inside the sphere; a rigid surface also provides better numerical stability in connection with SHB. In this study, SHB is evaluated with respect to resolution and dynamic range, and simulated and experimental results are presented.
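As a rough illustration of the two SHB steps, the sketch below simulates a spherical array in a unit plane wave, expands the measured pressure into spherical harmonics up to order 1, divides out the mode strength, and steers over a grid of directions. This is a simplified sketch, not the authors' implementation: it assumes an open (transparent) sphere, whose mode strength is b_n(ka) = 4π i^n j_n(ka) (a rigid sphere replaces j_n with j_n − (j_n′/h_n′)h_n), an idealized quadrature layout in place of a real microphone geometry, and only four harmonics.

```python
import numpy as np

def sph_harms(theta, phi):
    """Complex spherical harmonics up to order n = 1 (four (n, m) terms)."""
    Y00  = np.full_like(theta, 1 / np.sqrt(4 * np.pi)) * (1 + 0j)
    Y1m1 =  np.sqrt(3 / (8 * np.pi)) * np.sin(theta) * np.exp(-1j * phi)
    Y10  =  np.sqrt(3 / (4 * np.pi)) * np.cos(theta) * (1 + 0j)
    Y1p1 = -np.sqrt(3 / (8 * np.pi)) * np.sin(theta) * np.exp(1j * phi)
    return np.stack([Y00, Y1m1, Y10, Y1p1])

def j_n(n, x):
    """Spherical Bessel functions j_0 and j_1."""
    return np.sin(x) / x if n == 0 else np.sin(x) / x**2 - np.cos(x) / x

# Gauss-Legendre (polar) x uniform (azimuth) sampling: a near-exact
# quadrature standing in for a real microphone layout in this sketch.
x, wx = np.polynomial.legendre.leggauss(20)
theta_m, phi_m = np.meshgrid(np.arccos(x),
                             np.linspace(0, 2 * np.pi, 40, endpoint=False),
                             indexing="ij")
w = np.outer(wx, np.full(40, 2 * np.pi / 40)).ravel()
theta_m, phi_m = theta_m.ravel(), phi_m.ravel()

# Simulate the array in a unit plane wave arriving from (theta_s, phi_s).
ka = 1.5                                 # sphere radius in wavenumbers
theta_s, phi_s = 1.0, 2.0
cosG = (np.cos(theta_m) * np.cos(theta_s)
        + np.sin(theta_m) * np.sin(theta_s) * np.cos(phi_m - phi_s))
p = np.exp(1j * ka * cosG)               # pressure at the "microphones"

# SHB step 1: spherical-harmonic decomposition of the measured field.
Y_m = sph_harms(theta_m, phi_m)
p_nm = (Y_m.conj() * w) @ p

# SHB step 2: divide out the mode strength b_n and steer over a grid.
n_of = np.array([0, 1, 1, 1])            # order n of each (n, m) pair
b_n = np.array([4 * np.pi * 1j**n * j_n(n, ka) for n in n_of])
th_g, ph_g = np.meshgrid(np.linspace(0.05, np.pi - 0.05, 60),
                         np.linspace(0, 2 * np.pi, 120), indexing="ij")
B = np.abs((p_nm / b_n) @ sph_harms(th_g.ravel(), ph_g.ravel()))
i = int(np.argmax(B))
print(th_g.ravel()[i], ph_g.ravel()[i])  # peak lands near (1.0, 2.0)
```

With only four harmonics the beam pattern is broad, (1 + 3 cos Γ)/4π in the angle Γ from the source, but its maximum still identifies the arrival direction; resolution improves as the truncation order grows, which is what the abstract's evaluation quantifies.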
Acoustic impedance is typically measured using an impedance tube, which requires a material sample physically fitted to the tube. However, the impedance of a sample mounted in the tube can differ greatly from that of the same material in a real environment, where the mounting conditions are likely to be different; moreover, oblique incidence cannot be measured in an impedance tube. In this paper, we investigate the use of a double-layer microphone array for in-situ measurement of surface impedance and absorption coefficient. With the array positioned near the material surface, a source emits broadband sound towards the array and the material. From the measured array pressures, the sound pressure and the surface-normal particle velocity at the material surface are reconstructed using Statistically Optimized Near-field Acoustical Holography (SONAH). The impedance across a selected area is then calculated from the surface pressure and velocity, and finally the absorption coefficient is obtained from the impedance. A set of tests has been performed on porous material samples in an anechoic chamber as well as in a fitted room, considering different sample sizes and different sound incidence angles. The results show consistency between the measurements in the anechoic room and the ordinary room, as well as good agreement with Miki's model up to large oblique incidence angles.
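The last two steps, from surface pressure and velocity to impedance and then to absorption coefficient, follow the standard plane-wave reflection relations for a locally reacting surface. The sketch below is a generic illustration of those relations only; the SONAH reconstruction of p and u_n, which is the substance of the method, is omitted, and the function names are ours.

```python
import numpy as np

RHO_C = 1.2 * 343.0          # characteristic impedance of air, approx. (Pa*s/m)

def surface_impedance(p, u_n):
    """Specific surface impedance Z_s = p / u_n from the reconstructed
    complex surface pressure p and normal particle velocity u_n."""
    return p / u_n

def absorption(Z_s, theta=0.0):
    """Absorption coefficient of a locally reacting surface at incidence
    angle theta, via the plane-wave reflection factor R."""
    R = (Z_s * np.cos(theta) - RHO_C) / (Z_s * np.cos(theta) + RHO_C)
    return 1.0 - np.abs(R) ** 2

# A perfectly matched surface (Z_s = rho*c) absorbs everything at
# normal incidence:
print(absorption(np.array([RHO_C + 0j])))   # -> [1.]
```

In the paper's setting Z_s is a complex, frequency-dependent array over the selected surface area, and theta is the known incidence angle of the source; the same two lines then yield the absorption spectrum compared against Miki's model.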
We propose a format and a set of tools for the rendering of computer music notation. The proposed format is not intended as a universal standard; nevertheless, it appears to be an efficient approach to music notation and may be thought of as a contribution to future or existing implementations of music editors. We try to provide general guidelines and schemes that are compatible with any programming language. The protocol has been implemented in the OpenMusic environment using CLOS (the Common Lisp Object System); this paper is an a posteriori generalization of that implementation.
In the last four years, we have developed a partnership between dance and neuroscience to study the relationships between body space in dance and the surrounding space, and the link between movement and audition as experienced by the dancer. The opportunity to work with a dancer/choreographer, an expert in movement, gives neuroscientists better access to the significance of the auditory-motor loop and its role in perception of the surrounding space. Given that a dancer has a very strong sense of body ownership, probably through a very accurate dynamic body schema (Walsh et al. 2011), she is an ideal subject for investigating the feeling of controlling one's own body movements and, through them, events in the external environment (Moore et al. 2009; Jola et al., in press).

We conducted several work sessions that brought together a choreographer/dancer, a neuroscientist, a composer, and two researchers in acoustics and audio signal processing. These sessions were held at IRCAM (Institute for Research and Coordination Acoustic/Music, Paris) in a variable-acoustics concert hall equipped with a Wave Field Synthesis (WFS) sound reproduction system and infrared cameras for motion capture. During these sessions, we concentrated on two specific questions: 1) is it possible to extend the body space of the dancer through auditory feedback (Maravita and Iriki 2004)? and 2) can we alter the dancer's perception of space by altering perceptions associated with her movements?

We used an interactive setup in which a collection of pre-composed sound events (individual sounds or musical sentences) could be transformed and rendered in real time according to the movements and position of the dancer, which were sensed by markers on her body and detected by a motion-tracking system. The transformations applied to the different sound components through the dancer's movement and position concerned not only musical parameters such as intensity and timbre, but also the spatial parameters of the sounds: the technology allowed us to control their trajectories in space, their apparent distance, and the reverberant ambiance. We elaborated a catalogue of interaction modes with auditory settings that changed according to the dancer's movements. An interaction mode is defined by a mapping from the position, posture, or gestures of the dancer to musical and spatial parameters. For instance, a sound event may be triggered when the dancer enters a certain region or performs a predefined gesture; more elaborate modes involved the modulation of musical parameters by continuous movements of the dancer.

The perceptual and cognitive pertinence of this catalogue of interactions was tested throughout the sessions. We observed that the detachable markers could be used to create a perception of extended body space, and that the performer perceived the stage space differently according to the auditory feedback of her actions. The dancer reported that each experience with the technology shed light on her need ...
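The simplest interaction mode described above, a sound event triggered when the dancer enters a region, can be sketched as follows. Everything here is a hypothetical illustration: the class, the 2-D tracked position, and the sound identifier are our inventions, not the setup actually used at IRCAM.

```python
from dataclasses import dataclass

@dataclass
class RegionTrigger:
    """One interaction mode: fire a sound once when the tracked position
    enters a circular stage region, re-arming after the dancer leaves.
    (Hypothetical sketch; not the IRCAM implementation.)"""
    cx: float          # region centre x (m), from motion capture
    cy: float          # region centre y (m)
    radius: float      # region radius (m)
    sound_id: str      # pre-composed sound event to trigger
    armed: bool = True

    def update(self, x: float, y: float):
        """Feed one tracked position; return a sound id to play, or None."""
        inside = (x - self.cx) ** 2 + (y - self.cy) ** 2 <= self.radius ** 2
        if inside and self.armed:
            self.armed = False          # fire once per entry
            return self.sound_id        # hand off to the audio engine
        if not inside:
            self.armed = True           # re-arm after leaving the region
        return None

mode = RegionTrigger(cx=0.0, cy=0.0, radius=1.0, sound_id="phrase_1")
print(mode.update(3.0, 0.0), mode.update(0.2, 0.1))   # None phrase_1
```

Continuous modes would replace the boolean test with a smooth mapping from position or gesture features to synthesis and spatialization parameters.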