A set of prominent designers embarked on a research journey to explore aesthetics in movement-based design. Here we unpack one of the design sensitivities unique to our practice: a strong first-person perspective, where the movements, somatics, and aesthetic sensibilities of the designer, design researcher, and user are at the forefront. We present an annotated portfolio of design exemplars and a brief introduction to some of the design methods and theory we use, together substantiating and explaining the first-person perspective. At the same time, we show how this felt dimension, despite its subjective nature, is what provides rigor and structure to our design research. Our aim is to assist researchers in soma-based design and designers wanting to consider the multiple facets of designing for the aesthetics of movement. The applications span a large field of designs, including slow, introspective, and contemplative interactions, arts, dance, health applications, games, work applications, and many others.
This article describes physical modelling techniques that can be used for simulating musical instruments. The methods are closely related to digital signal processing. They discretize the system with respect to time, because the aim is to run the simulation using a computer. The physics-based modelling methods can be classified as mass-spring, modal, wave digital, finite difference, digital waveguide and source-filter models. We present the basic theory and a discussion on possible extensions for each modelling technique. For some methods, a simple model example is chosen from the existing literature demonstrating a typical use of the method. For instance, in the case of the digital waveguide modelling technique a vibrating string model is discussed, and in the case of the wave digital filter technique we present a classical piano hammer model. We tackle some nonlinear and time-varying models and include new results on the digital waveguide modelling of a nonlinear string. Current trends and future directions in physical modelling of musical instruments are discussed.
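To make the digital waveguide idea mentioned above concrete, the sketch below implements the simplest such vibrating string model, a Karplus-Strong loop: a delay line whose length sets the pitch, closed by a two-point averaging loop filter that models frequency-dependent losses. The function name and all constants are illustrative choices, not values from the article.

```python
import numpy as np

def waveguide_string(f0=220.0, fs=44100, dur=1.0, loss=0.996):
    """Minimal digital waveguide (Karplus-Strong) plucked string:
    a circulating delay line plus a low-pass loop filter."""
    N = int(round(fs / f0))           # delay-line length ~ one period
    rng = np.random.default_rng(0)
    line = rng.uniform(-1.0, 1.0, N)  # noise burst models the pluck
    out = np.empty(int(fs * dur))
    for n in range(out.size):
        out[n] = line[0]
        # loop filter: two-point average (low-pass) scaled by a loss factor
        new = loss * 0.5 * (line[0] + line[1])
        line = np.roll(line, -1)      # advance the wave one sample
        line[-1] = new
    return out

y = waveguide_string()                # one second of a decaying 220 Hz tone
```

The averaging filter damps high partials faster than low ones, which is what gives the plucked tone its natural decay; a real instrument model would replace it with a calibrated higher-order loop filter.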
Digital waveguides and finite difference time domain schemes have been used in physical modeling of spatially distributed systems. Both of them are known to provide exact modeling of ideal one-dimensional (1D) band-limited wave propagation, and both of them can be composed to approximate two-dimensional (2D) and three-dimensional (3D) mesh structures. Their equal capabilities in physical modeling have been shown for special cases and have been assumed to cover generalized cases as well. The ability to form mixed models by joining substructures of both classes through converter elements has been proposed recently. In this paper, we formulate a general digital signal processing (DSP)-oriented framework where the functional equivalence of these two approaches is systematically elaborated and the conditions of building mixed models are studied. An example of mixed modeling of a 2D waveguide is presented.
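The 1D equivalence discussed above rests on the fact that, at Courant number lambda = c*dt/dx = 1, the leapfrog FDTD scheme for the ideal wave equation propagates band-limited waves exactly, just as a pair of digital waveguide delay lines does. Below is a minimal sketch of that scheme under assumed, arbitrary choices of grid size and initial pulse; a rightward-moving unit pulse travels and reflects (with inversion) off the fixed ends without any numerical dispersion.

```python
import numpy as np

def fdtd_string(M=100, steps=400, p0=10):
    """Leapfrog FDTD update for the ideal 1-D wave equation with fixed
    ends, at Courant number lambda = 1 (the exact-propagation case).
    Launches a unit pulse rightward from grid point p0."""
    u_prev = np.zeros(M)
    u = np.zeros(M)
    u[p0] = 1.0                       # pulse at p0 now ...
    u_prev[p0 - 1] = 1.0              # ... was at p0-1: right-moving
    for _ in range(steps):
        u_next = np.zeros(M)          # ends stay clamped at zero
        # u[i,n+1] = u[i+1,n] + u[i-1,n] - u[i,n-1]  (lambda = 1)
        u_next[1:-1] = u[2:] + u[:-2] - u_prev[1:-1]
        u_prev, u = u, u_next
    return u

state = fdtd_string()   # after 400 steps the pulse sits at grid index 14
```

Because the scheme is exact at this Courant number, the pulse position after any number of steps can be predicted by unfolding the reflections, which is the hallmark of the functional equivalence with a waveguide delay line.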
The rapid development and availability of low-cost technologies have created a wide interest in virtual reality. In the field of computer music, the term “virtual musical instruments” has been used for a long time to describe software simulations, extensions of existing musical instruments, and ways to control them with new interfaces for musical expression. Virtual reality musical instruments (VRMIs) that include a simulated visual component delivered via a head-mounted display or other forms of immersive visualization have not yet received much attention. In this article, we present a field overview of VRMIs from the viewpoint of the performer. We propose nine design guidelines, describe evaluation methods, analyze case studies, and consider future challenges.
A high-fidelity but efficient sound simulation is an essential element of any VR experience. Many of the techniques used in virtual acoustics are graphical rendering techniques suitably modified to account for sound generation and propagation. In recent years, several advances in hardware and software technologies have been facilitating the development of immersive interactive sound-rendering experiences. In this article, we present a review of the state of the art of such simulations, with a focus on the different elements that, combined, provide a complete interactive sonic experience. This includes physics-based simulation of sound effects and their propagation in space together with binaural rendering to simulate the position of sound sources. We present how these different elements of the sound design pipeline have been addressed in the literature, trying to find the trade-off between accuracy and plausibility. Recent applications and current challenges are also presented.
Sound synthesis based on physical modeling of stringed instruments has been an active research field for the last decade. The most efficient synthesis models have been obtained using the theory of digital waveguides (Smith 1992). Commuted waveguide synthesis (Smith 1993; Karjalainen et al. 1993) is based on the linearity and time-invariance of the synthesis model and is an important method for developing a generic string instrument model. Recently, such a model has been presented including consolidated pluck and body wavetables, a pluck-shaping filter, a pluck-position comb filter, string models with loop filters and continuously variable delays, and sympathetic couplings between the strings (Karjalainen et al. 1998). Our model is realized in a real-time software synthesizer called PWSynth. PWSynth is a user library for PatchWork (Laurson 1996) that attempts to effectively integrate computer-assisted composition and sound synthesis. PWSynth is a part of our project that investigates different control strategies for physical models of musical instruments. PatchWork is also used to generate control data from an extended score representation, the Expressive Notation Package (ENP) (Laurson et al. 1999; Kuuskankare and Laurson 2000; Laurson 2000). Calibration of the synthesis model is based on the analysis of recorded guitar tones (Välimäki et al. 1996; Tolonen 1998). A recent article (Erkut et al. 2000) addressed the revision of the calibration process to improve efficiency and robustness. It also proposed extended methods to capture information about performance characteristics such as different pluck styles, vibrato, and dynamic variations of a professional player. In addition, the article presented basic techniques for simulation of the transients.
Instead of using a detailed finger-string interaction model like that proposed by Cuzzucoli and Lombardo (1999), the simulation consolidates all the transient effects into the excitation signal and the update trajectories of the model parameters. The current article summarizes our achievements in model-based sound synthesis of the acoustic guitar with improved realism. First, a simplified physical model of a string instrument realized in our work is described. The next section discusses the calibration of the synthesis model. Then, we address controlling the synthesizer using ENP. After this, we provide an overview of the real-time synthesizer PWSynth. The final section discusses how we simulate various playing styles used in the classical guitar repertoire. Musical excerpts related to this article will be included on the forthcoming Computer Music Journal 25:4 compact disc.

Structure of the Synthesizer

We have implemented a string instrument model that is based on the principle of commuted waveguide synthesis. We now present both the basic string model and a guitar string model that contains two basic models.

Basic String Model

A model for a vibrating string is the only part of the system that explicitly models a physical phenomenon. Our string model implementation is illustrat...
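To make the commuted-synthesis structure concrete, here is a minimal sketch, not the article's calibrated PWSynth model: since the system is linear and time-invariant, the body response can be commuted into the excitation, so a stored excitation table (standing in for the consolidated pluck-and-body wavetable) drives a single delay-line loop closed by a one-pole low-pass loop filter. All names and constants here are illustrative assumptions.

```python
import numpy as np

def commuted_pluck(excitation, f0=330.0, fs=44100, dur=1.0, g=0.995, a=0.5):
    """Commuted waveguide synthesis sketch: excitation table feeding a
    string loop (delay line + one-pole low-pass loop filter)."""
    N = int(round(fs / f0))           # loop length sets the pitch
    delay = np.zeros(N)
    out = np.empty(int(fs * dur))
    lp = 0.0                          # one-pole low-pass filter state
    for n in range(out.size):
        x = excitation[n] if n < excitation.size else 0.0
        y = x + delay[0]              # excitation enters the string loop
        out[n] = y
        lp = (1 - a) * y + a * lp     # loop filter damps high partials
        delay = np.roll(delay, -1)
        delay[-1] = g * lp            # loop gain g < 1 keeps it stable
    return out

# a short windowed noise burst stands in for a recorded pluck+body wavetable
exc = np.random.default_rng(1).standard_normal(256) * np.hanning(256)
tone = commuted_pluck(exc)
```

In the actual guitar synthesizer the excitation, loop filter, and loop gain would come from the calibration procedure described in the next section, and the delay line would use a continuously variable fractional delay for accurate tuning.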
The five-string Finnish kantele is a traditional folk music instrument that has unique structural features, resulting in a sound of bright and reverberant timbre. This article presents an analysis of the sound generation principles in the kantele, based on measurements and analytical formulation. The most characteristic features of the unique timbre are caused by the bridgeless string termination around a tuning pin at one end and the knotted termination around a supporting bar at the other end. These result in prominent second-order nonlinearity and strong beating of harmonics, respectively. A computational model of the instrument is also formulated and the algorithm is made efficient for real-time synthesis to simulate these features of the instrument timbre.
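The "strong beating of harmonics" noted above arises in synthesis when each partial is effectively produced by two slightly mistuned components; the toy sketch below illustrates only that beating mechanism with two detuned sinusoids (the frequencies are arbitrary assumptions, not measured kantele values).

```python
import numpy as np

def beating_partial(f=440.0, df=1.5, fs=44100, dur=2.0):
    """Sum of two slightly mistuned components: the amplitude envelope
    beats at df Hz, mimicking the beating of one kantele harmonic."""
    t = np.arange(int(fs * dur)) / fs
    # sin(a) + sin(b) = 2 sin((a+b)/2) cos((a-b)/2): envelope |cos(pi*df*t)|
    return 0.5 * (np.sin(2*np.pi*f*t) + np.sin(2*np.pi*(f + df)*t))

y = beating_partial()   # envelope swells and fades 1.5 times per second
```

A waveguide kantele model achieves the same effect structurally, e.g. with paired, slightly detuned string loops per string, rather than by additive synthesis as in this illustration.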