Abstract. The ability of haptic stimuli to augment visually and auditorily induced self-motion illusions has been investigated to some extent. However, haptically induced illusory self-motion in environments deprived of explicit motion cues remains unexplored. In this paper we present an experiment intended to investigate how different virtual environments, that is, contexts of motion, influence self-motion illusions induced through haptic stimulation of the feet. A concurrent goal was to determine whether horizontal self-motion illusions can be induced through stimulation of the supporting areas of the feet. The experiment was based on a within-subjects design and included four conditions, each representing one context of motion: an elevator, a train compartment, a bathroom, and a completely dark environment. The audiohaptic stimuli were identical across all conditions. The participants' sensation of movement was assessed by means of existing measures of illusory self-motion, namely, reported self-motion illusion per stimulus type, illusion compellingness, intensity, and onset time. Finally, the participants were also asked to estimate the experienced direction of movement. While the data obtained from these measures did not yield significant differences, the experiment did provide interesting indications: when motion is simulated through implicit motion cues, the perceived context appears to influence the magnitude of displacement and the direction of movement of self-motion illusions, as well as whether the illusion is experienced in the first place. Finally, the experiment confirmed that haptically induced illusory self-motion in the horizontal plane is indeed possible.
Achieving a full 3D auditory experience with head-related transfer functions (HRTFs) is still one of the main challenges of spatial audio rendering. HRTFs capture the listener's individual acoustic effects and personal perception, allowing immersion in virtual reality (VR) applications. This paper investigates the connection between listener sensitivity to vertical localization cues and experienced presence, spatial audio quality, and attention. Two VR experiments with a head-mounted display (HMD) and an animated visual avatar are proposed: (i) a screening test aiming to evaluate the participants' localization performance with HRTFs for a non-visible spatialized audio source, and (ii) a 2-minute free exploration of a VR scene with five audiovisual sources in both non-spatialized (2D stereo panning) and spatialized (free-field HRTF rendering) listening conditions. The screening test allows a distinction between good and bad localizers. The second experiment shows that no biases are introduced in the quality of the experience (QoE) by the different audio rendering methods; more interestingly, good localizers perceived lower audio latency and were less involved in the visual aspects of the scene.