The Collaborative-Research Augmented Immersive Virtual Environment Laboratory (CRAIVE-Lab) at Rensselaer is a state-of-the-art space that offers users multimodal, immersive presentation. Realistic and abstract data sets can be explored in a variety of ways, even in large group settings. This paper discusses the motivations for the immersive experience and its advantages over smaller-scale, single-modality presentations of data. One experiment focuses on the influence of immersion on perceptions of architectural renderings; its findings suggest disparities in participants’ judgments when viewing two-dimensional printouts versus the immersive CRAIVE-Lab screen. The advantages of multimodality are examined in an experiment on abstract data exploration, in which various auditory cues for aiding visual data extraction were tested for their effects on the speed and accuracy of participants’ information extraction. Finally, artificially generated auralizations are paired with recreations of realistic spaces to analyze the influence of immersive visuals on the perception of sound fields. One method used to create these sound fields is a geometric ray-tracing model, which calculates the audio stream for each individual loudspeaker in the lab so that together they form a cohesive sound field representation of the visual space.
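To make the ray-tracing idea concrete, here is a minimal sketch assuming a shoebox room, a circular loudspeaker array, and first-order reflections only; the room dimensions, speaker count, and nearest-speaker routing rule are illustrative assumptions, not the lab's implementation. Each image source contributes a delayed, attenuated copy of the signal routed to the loudspeaker nearest its direction of arrival.

```python
import numpy as np

C = 343.0                            # speed of sound, m/s
ROOM = np.array([10.0, 12.0, 4.0])   # shoebox room dimensions in meters (assumed)
N_SPK = 128                          # loudspeaker ring size (assumed)
SPK_AZ = np.linspace(0.0, 2.0 * np.pi, N_SPK, endpoint=False)

def image_sources(src):
    """Return the direct source plus its six first-order wall reflections."""
    images = [src.copy()]
    for axis in range(3):
        for wall in (0.0, ROOM[axis]):
            img = src.copy()
            img[axis] = 2.0 * wall - src[axis]   # mirror across the wall plane
            images.append(img)
    return images

def speaker_feeds(src, listener):
    """(channel, delay in s, gain) per ray, routed to the nearest ring speaker."""
    feeds = []
    for img in image_sources(src):
        r = np.linalg.norm(img - listener)
        az = np.arctan2(img[1] - listener[1], img[0] - listener[0])
        # wrapped angular distance to every speaker; pick the closest
        chan = int(np.argmin(np.abs(np.angle(np.exp(1j * (SPK_AZ - az))))))
        feeds.append((chan, r / C, 1.0 / max(r, 1e-3)))   # 1/r attenuation
    return feeds

print(speaker_feeds(np.array([2.0, 3.0, 1.5]), np.array([5.0, 6.0, 1.5])))
```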
Recently, multi-modal presentation systems have gained much interest as a means for interactive user groups to study big data. One challenge for these systems is providing a venue for both personalized and shared information. In particular, sound fields containing parallel audio streams can distract users from extracting the information they need. The way spatial information is processed in the brain allows humans to take in complicated visuals and focus on details or on the whole. Temporal information, however, which is better presented through audio, is processed differently, making dense sound environments difficult to segregate. In Rensselaer’s CRAIVE-Lab, sounds are presented spatially using an array of 134 loudspeakers to address individual participants who are analyzing data together. In this talk, we will present and discuss different methods to improve participants’ ability to focus on their designated audio streams using co-modulated visual cues. In this scheme, the virtual-reality space is combined with see-through augmented-reality glasses to optimize the boundary between personalized and global information. [Work supported by NSF #1229391 and the Cognitive and Immersive Systems Laboratory (CISL).]
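The abstract leaves the generation of the co-modulated cues open; as one plausible illustration, the sketch below (hypothetical function names and an assumed smoothing constant) derives a frame-rate brightness signal for a visual cue from the amplitude envelope of the audio stream a participant should attend to, so the cue pulses in sync with that stream.

```python
import numpy as np

def envelope(audio, sr, cutoff_hz=8.0):
    """One-pole low-pass of the rectified signal: a slow amplitude envelope."""
    alpha = 1.0 - np.exp(-2.0 * np.pi * cutoff_hz / sr)
    env, acc = np.zeros_like(audio), 0.0
    for i, x in enumerate(np.abs(audio)):
        acc += alpha * (x - acc)
        env[i] = acc
    return env

def cue_brightness(audio, sr, frame_rate=60):
    """Resample the envelope to display frames, normalized to [0, 1]."""
    frames = envelope(audio, sr)[:: sr // frame_rate]
    return frames / (frames.max() + 1e-9)

sr = 48_000
t = np.arange(sr) / sr
# A toy "speech-like" stream: a 220-Hz tone with a 3-Hz amplitude modulation.
stream = np.sin(2 * np.pi * 220 * t) * (0.5 + 0.5 * np.sin(2 * np.pi * 3 * t))
print(cue_brightness(stream, sr)[:10])
```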
Telematic performances connect musicians and artists at remote locations to form a single cohesive piece. As very high-speed Internet connections reach more people and these performances become more common, a variety of new technologies will enable artists and musicians to create brand-new styles of work. The development of immersive virtual environments, including Rensselaer Polytechnic Institute's own Collaborative-Research Augmented Immersive Virtual Environment Laboratory, sets the stage for these original pieces. The ability to properly spatialize sound within these environments is an important part of a complete set of tools. This project uses a local installation to demonstrate the techniques and protocols that make this possible. Using the visual coding environment Max/MSP as a receiving client, patches are created to parse incoming commands and coordinate information for engaging sound sources. Spatialization is performed in conjunction with the Virtual Microphone Control system, whose output is mapped to loudspeakers through a patch portable to various immersive-environment setups.
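As an illustration of the kind of command traffic such a receiving patch might parse, the sketch below hand-packs an OSC 1.0 message carrying a source index and x/y/z coordinates and sends it over UDP to Max/MSP; the address pattern and port are assumptions, not the project's actual protocol.

```python
import socket
import struct

def osc_pad(b: bytes) -> bytes:
    """OSC strings are NUL-terminated and padded to a 4-byte boundary."""
    b += b"\x00"
    return b + b"\x00" * (-len(b) % 4)

def osc_message(address: str, *floats: float) -> bytes:
    """Pack an address pattern, a type-tag string, and 32-bit big-endian floats."""
    msg = osc_pad(address.encode())
    msg += osc_pad(("," + "f" * len(floats)).encode())
    for f in floats:
        msg += struct.pack(">f", f)
    return msg

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# Hypothetical endpoint for the Max/MSP receiving patch.
sock.sendto(osc_message("/source/1/xyz", 2.5, -1.0, 1.6), ("127.0.0.1", 7400))
```

On the Max/MSP side, a [udpreceive] object feeding a [route] object would unpack such a message into the coordinates that drive the spatialization.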
Immersive rooms, a type of virtual reality system consisting of human-scale panoramic visual and acoustic display systems and distributed sensing apparatus for occupant motion, have been increasingly adopted for dynamic and interactive applications. While these applications enable multi-user audiovisual immersion and navigation from a single physical location, they have yet to span multiple homogeneous system infrastructures in a networked manner. In this work, we intend to co-locate two physically remote immersive rooms (at EMPAC and the CRAIVE-Lab, respectively) in a single system of shared environments developed in Unity and embedded with virtual soundscapes. This system actively monitors the spatial properties of both immersive rooms’ dynamic virtual footprints and their corresponding occupants. It generates virtual sound sources both procedurally and through spatially-aware user inputs. The sound sources are rendered in real time via an algorithm synthesizing a ray-traced early reflection window and a parameterized late reverberation estimate from in-scene geometries. The co-located virtual soundscapes, displayed in the individual immersive rooms through their respective multi-channel wave field synthesis loudspeaker systems, are shared such that user interaction in one physical location has holistic effects on the experience of the virtual environments across all associated physical locations. [Work supported by NSF IIS-1909229 & CNS-1229391.]
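A compact sketch of this hybrid structure follows (not the published algorithm; all numeric values are illustrative): a sparse early-reflection window from the ray-traced pass is spliced onto a noise tail whose exponential decay follows a Sabine RT60 estimate derived from the in-scene geometry.

```python
import numpy as np

SR = 48_000  # sample rate (assumed)

def sabine_rt60(volume_m3, surface_m2, avg_absorption):
    """Sabine's formula: RT60 = 0.161 * V / (S * alpha)."""
    return 0.161 * volume_m3 / (surface_m2 * avg_absorption)

def hybrid_ir(early, rt60, length_s=1.5, crossover_s=0.08, tail_mix=0.05):
    """early: (delay_s, gain) pairs from the ray-traced early-reflection pass."""
    n = int(length_s * SR)
    ir = np.zeros(n)
    for delay, gain in early:                 # sparse early window
        ir[int(delay * SR)] += gain
    t = np.arange(n) / SR
    tail = np.random.randn(n) * 10 ** (-3.0 * t / rt60)  # -60 dB at t = RT60
    tail[: int(crossover_s * SR)] = 0.0       # tail starts after the early window
    return ir + tail_mix * tail               # tail level is an assumed mix

rt60 = sabine_rt60(volume_m3=480.0, surface_m2=430.0, avg_absorption=0.3)
ir = hybrid_ir([(0.012, 1.0), (0.021, 0.55), (0.034, 0.4)], rt60)
```

Convolving a dry source with such an impulse response per output channel yields the real-time room response; the system described above additionally shares the scene state across both rooms.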
The use of spatialization techniques in data sonification provides system designers with an additional tool for conveying information to users. Often, spatialized data sets are meant to be experienced by a single user or a few users at a time. Projects at Rensselaer's Collaborative-Research Augmented Immersive Virtual Environment Laboratory allow even large groups of collaborators to work within a shared virtual environment system. The lab places equal emphasis on its visual and audio systems, with a nearly 360-degree panoramic display and a 128-loudspeaker array housed behind the acoustically transparent screen. The space allows dynamic switching between immersion in recreations of physical scenes and presentations of abstract or symbolic data. Content creation for the space is not a complex process: the entire display is essentially a single desktop, and straightforward tools such as the Virtual Microphone Control allow for dynamic real-time spatialization. The ability to target individual channels in the array makes audio-visual congruency achievable. The loudspeaker array creates a sound field of high spatial density within which users can explore freely, thanks to the virtual elimination of the so-called “sweet spot.”
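To illustrate the channel-targeting idea (a hypothetical sketch, not the Virtual Microphone Control implementation), the code below applies constant-power panning between the two ring loudspeakers that bracket a source azimuth, so only the channels nearest the intended direction are driven.

```python
import numpy as np

N = 128                          # channel count from the abstract
SPACING = 2.0 * np.pi / N
SPEAKER_AZ = np.arange(N) * SPACING

def pan_gains(source_az):
    """Per-channel gains; only the two speakers bracketing the azimuth are active."""
    source_az %= 2.0 * np.pi
    gains = np.zeros(N)
    i = int(source_az // SPACING) % N          # speaker at or below the azimuth
    j = (i + 1) % N                            # next speaker around the ring
    frac = (source_az - SPEAKER_AZ[i]) / SPACING
    gains[i] = np.cos(frac * np.pi / 2.0)      # constant-power crossfade
    gains[j] = np.sin(frac * np.pi / 2.0)
    return gains

g = pan_gains(np.deg2rad(100.0))
print(np.nonzero(g)[0], np.sum(g ** 2))        # two active channels, unit power
```

With 128 channels the adjacent-speaker spacing is under 3 degrees, consistent with the abstract's point that a dense array removes the dependence on a single listening position.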
State-of-the-art schemata of immersive audiovisual system design mostly rely on in-situ stand-up construction with footings and rigid structural supports, an approach limited by low mobility and long set-up time. In this work, a new concept of audiovisual system design for a collaborative Immersive Virtual Environment, featuring flexible and deployable projection elements and modular assemblies, is proposed. Drawing on the stand-up configuration of Rensselaer’s Collaborative-Research Augmented Immersive Virtual Environment Laboratory (CRAIVE-Lab), a foundationless rectangular panoramic display with rounded corners is used, incorporating a motorized roll-up framework with mountable fillets. This set-up is accompanied by a unitized 60-channel Wave Field Synthesis (WFS) linear loudspeaker array. The proposed audiovisual system calibrates spatial audiovisual rendering through the integrated use of game-engine-based 3-D virtual environments (made in Unity and Unreal) and Max/MSP-based sonification utilities. In particular, an equirectangular transform is applied to virtual cameras and render textures to remove distortion effects arising from the screen geometry. This transform is shared with the WFS array for a congruent presentation of audiovisual content.
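The sketch below illustrates how such a shared mapping might look (assumed conventions and array geometry, not the lab's pipeline): an equirectangular projection sends a world direction to panorama UV coordinates, and the same azimuth places a virtual source behind a 60-channel WFS linear array.

```python
import numpy as np

def dir_to_equirect_uv(direction):
    """Unit world direction -> (u, v) in [0,1]^2 on an equirectangular texture."""
    x, y, z = direction / np.linalg.norm(direction)
    azimuth = np.arctan2(x, z)        # 0 at screen center (assumed convention)
    elevation = np.arcsin(y)          # y is up (assumed convention)
    return azimuth / (2.0 * np.pi) + 0.5, elevation / np.pi + 0.5, azimuth

def wfs_delays(azimuth, n_channels=60, array_len_m=12.0, depth_m=2.0, c=343.0):
    """Per-channel delays (s) for a virtual point source behind the array."""
    xs = np.linspace(-array_len_m / 2.0, array_len_m / 2.0, n_channels)
    src_x = depth_m * np.tan(azimuth)        # source position behind the plane
    dists = np.hypot(xs - src_x, depth_m)    # speaker-to-source distances
    return (dists - dists.min()) / c         # relative delays drive the array

u, v, az = dir_to_equirect_uv(np.array([0.3, 0.1, 1.0]))
print(u, v, wfs_delays(az)[:5])
```

Because the texture lookup and the delay computation consume the same azimuth, a source drawn at a given screen position and its rendered wavefront remain congruent, which is the point of sharing the transform.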