Conference: Acoustics'08 Paris (June 30 to July 4) - Musical Acoustics: Virtual Musical Instruments II (poster session 4pMUf). The aim of this article is mainly to offer a link between the Digital Waveguide and the CORDIS-ANIMA physical modeling formalisms. The first offers accurate and efficient discrete-time distributed models, synthesized typically by delay lines and scattering junctions in combination with digital filters, while the second is a highly modular lumped physical modeling and simulation system based on the mass-interaction paradigm. Both are widely developed and used by scientists and artists in the field of computer music. Although Digital Waveguide models have already been combined with Wave Digital Filters, they have never been exploited and integrated with CORDIS-ANIMA networks. Wave Digital Filters are lumped models based on a scattering-theoretic formulation, which simplifies their interfacing to Digital Waveguide models, in contrast with CORDIS-ANIMA models. This research investigates the similarities between the two formalisms, examines the advantages of each modeling technique, and proposes a real-time computable interface between them. It also results in a common, convenient structural representation of their computational algorithms using signal-processing block diagrams. These hybrid models were designed directly from their block diagrams, then simulated and run in deferred time using the Simulink software package of the MATLAB technical computing environment.
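To make the Digital Waveguide side of the comparison concrete, here is a minimal sketch of its simplest special case: a plucked string modeled as a circulating delay line whose termination is a lossy one-zero lowpass filter (the Karplus-Strong structure). The function name and parameter values are illustrative, not taken from the paper.

```python
import random

def karplus_strong(n_samples, period=100, loss=0.996, seed=0):
    """Plucked-string sketch: a delay line of `period` samples whose output
    is fed back through an attenuating two-point average -- the simplest
    special case of a digital waveguide string."""
    random.seed(seed)
    # noise burst models the broadband initial condition of a pluck
    buf = [random.uniform(-1.0, 1.0) for _ in range(period)]
    out = []
    for n in range(n_samples):
        out.append(buf[n % period])
        # termination filter: attenuate and average adjacent samples,
        # so high frequencies decay faster than low ones
        buf[n % period] = loss * 0.5 * (buf[n % period] + buf[(n + 1) % period])
    return out
```

The delay line plays the role of the bidirectional travelling-wave pair mentioned above, with the loss filter standing in for the reflective, slightly absorbent terminations.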
We are concerned here especially with the instrument/instrumentalist relation during playing, and more generally in musical instrumental experiments. At least three channels support this relation: the gestural, the acoustical, and the visual. We shall first propose a comparative analysis of them as "intentional" or control channels. Schematically, we can say that the gestural and visual channels are devoted to control purposes, whereas the acoustic channel bears expressive intentionality. In traditional practice all these functions are implicitly assimilated and coordinated during early learning. In the context of real-time digital synthesis of sounds, we have to consider these different aspects explicitly. We shall propose a preliminary study of the specificity and complementarity of these channels. Then we shall present the problems posed by the specific transducers to be built and connected to real-time synthesizers. Concerning gestural control, we shall introduce a typology of instrumental gesture taking into account its bilateral aspect (as transmitter and as receiver), and describe corresponding "gestural transducers with mechanical feedback." Concerning visual control, we shall present a real-time model of musical instruments capturing their pertinent visual elements during playing. All these studies are correlated with the CORDIS system developed by the ACROE research group in Grenoble (C. Cadoz et al.).
Musical activity is multifarious. From instrument making to instrument playing and compositional conception, the machines and man/machine relations involved are quite different. It is nevertheless exclusively in instrument playing, and more generally in instrumental experiments, that the man/machine relation corresponds to and requires a true real-time situation as defined in the computing context. Instrumental experience is therefore a fundamental reference for conceiving the basic functions of real-time digital synthesis systems. Accordingly, algorithmic models appear as intermediaries between the instrumentalist's gesture and the musical sound. Their functions are then: (1) to permit and guide the gestural action, (2) to pick up the pertinent information from it, and (3) to create an acoustical signal combining this temporal information with an atemporal or structural one. The latter may be considered the definition or representation of the instrument. By its mechanical nature, the gesture implies a mechanical model, at least as a first step. We shall present a digital sound synthesis system, the CORDIS system, founded entirely on a mechanical modeling of musical instruments, which are analyzed and reconstructed from elementary mechanical components. One of our goals is to provide a way of understanding certain elementary components of musical languages by correlating them with concrete elementary experiments on acoustical source objects.
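The "elementary mechanical components" mentioned above can be sketched as a small mass-interaction network in the CORDIS spirit: point masses updated by an explicit leapfrog scheme, coupled by visco-elastic links. The discretization (time step normalized to 1) and all parameter values here are illustrative assumptions, not the system's actual component set.

```python
def simulate_chain(n_masses=3, steps=200, m=1.0, k=0.05, z=0.01):
    """Mass-interaction sketch: a chain of point masses between two fixed
    walls, coupled by visco-elastic links, integrated with an explicit
    leapfrog update x(n+1) = 2x(n) - x(n-1) + F(n)/m (time step = 1).
    The middle mass starts displaced; returns its displacement history."""
    x = [0.0] * n_masses
    x[n_masses // 2] = 1.0
    x_prev = list(x)                      # zero initial velocity
    trace = []
    for _ in range(steps):
        # pad with the fixed walls at both ends
        ext_x = [0.0] + x + [0.0]
        ext_p = [0.0] + x_prev + [0.0]
        f = [0.0] * n_masses
        for i in range(n_masses):         # mass i sits at ext index i + 1
            for j in (i, i + 2):          # left and right neighbours
                dx = ext_x[i + 1] - ext_x[j]
                # velocity estimated from successive positions, as in
                # mass-interaction schemes
                dv = (ext_x[i + 1] - ext_p[i + 1]) - (ext_x[j] - ext_p[j])
                f[i] += -k * dx - z * dv  # spring + damper link
        x_next = [2 * x[i] - x_prev[i] + f[i] / m for i in range(n_masses)]
        x_prev, x = x, x_next
        trace.append(x[n_masses // 2])
    return trace
```

Building instruments then amounts to wiring more masses and links into such a network, which is what makes the formalism modular.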
Man-machine communication is a decisive point in the digital synthesis of sounds used as a musical creation tool. We are concerned here with the instrumental-gesture aspect of this problem, where gesture is regarded as the most basic form of the relationship. Two complementary axes of study are necessary: (1) Analysis of gesture in the instrument/instrumentalist relationship. An all-important class of gestures is based on the energetic exchange between the instrumentalist and his instrument. In such a case, the instrumentalist receives information (through touch and dynamic perception) which determines his behavior and his listening. We have therefore built a special device for sound control, with mechanical feedback, taking into account the fact that the gesture is simultaneously a transmitting and a receiving channel. (2) Sound synthesis by means of concrete source simulation. The simulation system relies on a first analysis of the instrument as an exciter structure connected to a vibrating structure; these are then decomposed into elementary mechanical components. Algorithmic models of the mechanical components then allow the computer programs we have elaborated to represent the main types of phenomena encountered in traditional instruments.
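The exciter/vibrating-structure decomposition described above typically hinges on a conditional link: the exciter exerts force only while in contact with the resonator. A minimal sketch of such a one-sided "buffer" interaction follows; the function and parameter names are hypothetical, chosen only to illustrate the decomposition.

```python
def contact_force(x_exciter, x_string, k_c=0.3):
    """Conditional contact link: a one-sided spring that pushes back on
    the exciter only while it interpenetrates the vibrating structure
    (overlap > 0); otherwise the two structures are uncoupled."""
    overlap = x_exciter - x_string
    return -k_c * overlap if overlap > 0.0 else 0.0
```

Hammers, plectra, and bows differ mainly in the law applied inside such a conditional link, which is why the same network formalism covers the main excitation types.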