The Collaborative-Research Augmented Immersive Virtual Environment Laboratory (CRAIVE-Lab) at Rensselaer is a state-of-the-art space that offers users the capabilities of multimodality and immersion. Realistic and abstract sets of data can be explored in a variety of ways, even in large group settings. This paper discusses the motivations for the immersive experience and its advantages over smaller-scale and single-modality presentations of data. One experiment focuses on the influence of immersion on perceptions of architectural renderings. Its findings suggest disparities in participants’ judgments when viewing two-dimensional printouts versus the immersive CRAIVE-Lab screen. The advantages of multimodality are discussed in an experiment concerning abstract data exploration. Various auditory cues for aiding visual data extraction were tested for their effects on the speed and accuracy of participants’ information extraction. Finally, artificially generated auralizations are paired with recreations of realistic spaces to analyze the influence of immersive visuals on the perception of sound fields. One method used for creating these sound fields is a geometric ray-tracing model, which calculates the auditory stream of each individual loudspeaker in the lab to create a cohesive sound-field representation of the visual space.
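The abstract does not detail its geometric ray-tracing model, but the core idea behind such auralization can be sketched with a first-order image-source calculation: each wall reflection is modeled by mirroring the source across that wall, and each resulting path contributes a delay and a distance-based attenuation. The following is a minimal illustrative sketch of that idea, not the lab's actual renderer; the function names, the box-room geometry, and the single broadband absorption coefficient are all simplifying assumptions.

```python
import math

SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 °C


def direct_path(source, listener):
    """Delay (s) and 1/r amplitude attenuation for one straight ray."""
    d = math.dist(source, listener)
    return d / SPEED_OF_SOUND, 1.0 / max(d, 1e-6)


def first_order_images(source, room):
    """Image sources for each wall of an axis-aligned box room (Lx, Ly, Lz).

    Mirroring the source across a wall models one specular reflection,
    which is the geometric building block of ray-tracing auralization.
    """
    images = []
    for axis, length in enumerate(room):
        for wall in (0.0, length):
            img = list(source)
            img[axis] = 2.0 * wall - img[axis]
            images.append(tuple(img))
    return images


def impulse_taps(source, listener, room, absorption=0.3):
    """Sparse impulse response: (delay, gain) taps for the direct sound
    plus the six first-order wall reflections, sorted by arrival time."""
    taps = [direct_path(source, listener)]
    for img in first_order_images(source, room):
        delay, gain = direct_path(img, listener)
        taps.append((delay, gain * (1.0 - absorption)))
    return sorted(taps)
```

In a loudspeaker-array rendering, taps like these would be distributed to the loudspeakers whose directions match the incoming rays, so that the array reproduces the modeled sound field of the visualized space.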
Recently, multi-modal presentation systems have gained much interest for studying big data with interactive user groups. One of the problems of these systems is providing a venue for both personalized and shared information. In particular, sound fields containing parallel audio streams can distract users from extracting necessary information. The way spatial information is processed in the brain allows humans to take in complicated visuals and focus on details or the whole. However, temporal information, which can be better presented through audio, is processed differently, making dense sound environments difficult to segregate. In Rensselaer’s CRAIVE-Lab, sounds are presented spatially using an array of 134 loudspeakers to address individual participants who are analyzing data together. In this talk, we will present and discuss different methods to improve the ability of participants to focus on their designated audio streams using co-modulated visual cues. In this scheme, the virtual reality space is combined with see-through, augmented-reality glasses to optimize the boundaries between personalized and global information. [Work supported by NSF #1229391 and the Cognitive and Immersive Systems Laboratory (CISL).]
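The abstract does not specify how streams are steered to individual participants over the array, but a common building block for such spatial presentation is pairwise constant-power amplitude panning between adjacent loudspeakers. The sketch below assumes an idealized, evenly spaced horizontal ring of loudspeakers; the function names and ring layout are illustrative assumptions, not the CRAIVE-Lab's actual rendering pipeline.

```python
import math


def ring_positions(n):
    """Angles (rad) of n loudspeakers spaced evenly on a horizontal ring."""
    return [2.0 * math.pi * i / n for i in range(n)]


def pan_gains(target_angle, speaker_angles):
    """Constant-power panning between the two loudspeakers that bracket
    the target direction. Returns one gain per loudspeaker; all but the
    bracketing pair are zero, and the squared gains always sum to 1, so
    perceived loudness stays constant as a stream moves around the ring."""
    n = len(speaker_angles)
    spacing = 2.0 * math.pi / n
    t = (target_angle % (2.0 * math.pi)) / spacing
    lo = int(t) % n
    hi = (lo + 1) % n
    frac = t - int(t)
    gains = [0.0] * n
    gains[lo] = math.cos(frac * math.pi / 2.0)
    gains[hi] = math.sin(frac * math.pi / 2.0)
    return gains
```

With one such gain vector per audio stream, each participant's designated stream can be localized near that participant's position, which is the spatial-separation premise the co-modulated visual cues build on.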