Cinematic virtual reality (VR) opens up new possibilities for the treatment of sound in space. Distinct from screen-based practices of filmmaking, diegetic sound-image relations in immersive environments present unique, potent affordances, in which content is at once imaginary and real. However, a reductive modelling of environmental realism, in the name of 'presence', predominates. Yet cross-modal perception is a noisy, flickering representation of worlds. Treating our perceptual apparatus as stable, objective transducers ignores the inter-subjective potential at the heart of immersive work and situates users as passive spectators. This condescends to audiences and discounts the historic symbiosis of sound-image signification, which comes to constitute notions of verisimilitude. We understand the tropes; we willingly suspend disbelief. This article examines spatial sound rendering in virtual environments, probing at diegetic realism. It calls for an experimental, aesthetic approach, suggesting several speculative strategies drawn from theories of embodied cognition and acousmatic practice (amongst others), which necessarily deal with space and time as contingencies of the immersive. VR affords a development of the dialectic between sound and image which distinctively involves our spatial attention. The lines between referent and signified blur; the mediation between the representations invoked by practitioners and those experienced by audiences suggests new opportunities for co-authorship.
This paper examines the current ecosystem of tools for implementing dynamic 3D audio through the browser, from the perspective of spatial sound practitioners. It presents a survey of some existing tools to assess their usefulness and ease of use, taking the form of case studies, interviews with other practitioners, and initial testing comparisons between the authors. The survey classifies and summarizes the tools' relative advantages, disadvantages and potential use cases, and charts the specialist knowledge needed to employ them or to enable others to do so. The recent and necessary move to online exhibition of works has seen many creative practitioners grapple with a disparate ecosystem of software. Such technologies are diverse in both their motivations and applications. From formats which overcome WebGL's lack of support for Ambisonics, to the creative deployment of the Web Audio API (WAA), to third-party tools built on WAA, the field can seem prohibitively daunting for practitioners, and the current range of possible acoustic results may be too unclear to justify the learning curve. Through this evaluation of the currently available tools, we hope to demystify these novel technologies and make them accessible to composers, musicians, artists and other learners who might otherwise be dissuaded from engaging with this rich territory. This paper is based on a special session at Soundstack 2021.
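As a hedged illustration of the kind of in-browser spatialization these tools build on (a minimal sketch, not an example drawn from the paper itself), the snippet below positions a test tone with the Web Audio API's PannerNode; the `moveSource` helper and its parameter values are assumptions for demonstration only.

```typescript
// Minimal Web Audio API spatialization sketch (assumed example, not from the paper).
// Note: most browsers require a user gesture before an AudioContext will start.
const ctx = new AudioContext();

// A simple test source; in practice this would be a decoded buffer or media stream.
const source = new OscillatorNode(ctx, { frequency: 220 });

// HRTF panning gives binaural rendering over headphones;
// 'inverse' is the default distance model (gain falls off roughly with 1/distance).
const panner = new PannerNode(ctx, {
  panningModel: "HRTF",
  distanceModel: "inverse",
  refDistance: 1,
});

source.connect(panner).connect(ctx.destination);
source.start();

// Illustrative helper (hypothetical name): schedule a new source position.
function moveSource(x: number, y: number, z: number, when: number): void {
  panner.positionX.setValueAtTime(x, when);
  panner.positionY.setValueAtTime(y, when);
  panner.positionZ.setValueAtTime(z, when);
}

// Place the source to the listener's front-right one second from now.
moveSource(2, 0, -1, ctx.currentTime + 1);
```

Third-party libraries surveyed in work of this kind typically wrap or extend this node graph, for example to add Ambisonic decoding or room modelling on top of the WAA primitives shown here.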
Spatial audio is enjoying a surge in attention in both scene-based and object-based paradigms, owing to the trend for, and accessibility of, immersive experience. This has been enabled by converging advances in computing power, component miniaturization, and associated price reductions. For the first time, applications such as virtual reality (VR) are consumer technologies. Audio for VR is captured to provide a counterpart to the video or animated image, and can be rendered to combine elements of physical and psychoacoustic modelling as well as artistic design. Given that distance is an inherent property of spatial audio, that it can augment sound's efficacy in cueing user attention (a problem which practitioners are seeking to solve), and that conventional film sound practices have intentionally exploited its use, the absence of research on its implementation and effects in immersive environments is notable. This paper sets out the case for its importance from a perspective of research and practice. It focuses on cinematic VR, whose challenges for spatialized audio are clear, and at times reaches beyond the restrictions specific to distance in audio for VR into more general audio constraints.
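To make the role of distance rendering concrete, here is a minimal, hedged sketch (not taken from the paper) of the inverse-distance gain law used by common real-time renderers, including OpenAL's clamped inverse model and the Web Audio API's 'inverse' distance model; the parameter names and defaults follow the Web Audio specification and stand in for, rather than reproduce, the paper's own treatment of distance.

```typescript
// Inverse-distance-clamped attenuation (assumed illustrative model, not the paper's method).
// gain = refDistance / (refDistance + rolloffFactor * (max(d, refDistance) - refDistance))
function inverseDistanceGain(
  distance: number,
  refDistance = 1,   // distance (in metres) at which gain is 1
  rolloffFactor = 1, // how quickly gain falls off beyond refDistance
): number {
  const d = Math.max(distance, refDistance); // clamp inside the reference radius
  return refDistance / (refDistance + rolloffFactor * (d - refDistance));
}

// Example: a source 4 m away with default parameters is attenuated to 0.25 (about -12 dB).
console.log(inverseDistanceGain(4)); // 0.25
```

Psychoacoustic distance cues (direct-to-reverberant ratio, spectral roll-off) sit on top of such gain laws and are typically where artistic design departs from purely physical modelling.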