Self-motion perception, which partly determines the realism of dynamic driving simulators, is based on multisensory integration. For curved trajectories, adding linear translations to visual stimulation seems to improve the perception of motion (Bertin et al., 2004). However, cornering involves not only translational motion but also rotational motion in yaw. Wilkie and Wann (2005) found no effect of yaw motion on steering performance, but higher levels of physical stimulation could have a more visible effect. Moreover, the influence of yaw acceleration on cornering perception is still a matter of debate, especially when associated with optic flow. Therefore, the present study aims to analyze the respective roles of vestibular (yaw acceleration) and visual stimulation in the perception of curvilinear trajectories. We designed two experiments in which the subjects had either (1) to orally estimate their angular displacements as passive drivers, or (2) to generate angular displacements by controlling the steering wheel as active drivers. In both experiments, subjects were exposed to three conditions: (1) visual motion, (2) physical yaw acceleration, and (3) combined visual and physical motion. Preliminary results of the first experiment show that visual stimulation produces greater overestimations of angular displacements than physical yaw motion, with the visuo-vestibular condition yielding intermediate overestimations. They also suggest that the weights of visual and vestibular cues in cornering perception depend on the amplitude of the angular displacements. The second experiment should allow us to observe how visuo-vestibular interaction changes when subjects are active rather than passive drivers.
Self-motion perception, which partly determines the realism of dynamic driving simulators, is based on multisensory integration. However, it remains unclear how the brain integrates these cues to create adequate motion perception, especially for curvilinear displacements. In the present study, the effect of visual, inertial, and visuo-inertial cues (concordant or discordant bimodal cues) on self-motion perception was analyzed. Subjects were asked to evaluate (externally produced) or produce (self-controlled) curvilinear displacements as accurately as possible. The results show systematic overestimation of displacement, with better performance for active subjects than for passive ones. Furthermore, participants used unimodal or bimodal cues differently in performing their task. When passive, subjects systematically integrated visual and inertial cues even when discordant, but with weightings that depended on the dynamics. On the contrary, active subjects were able to reject the inertial cue when the discordance became too high, building self-motion perception on the basis of the more reliable information. Multisensory integration thus seems to follow a non-linear integration model, i.e., one in which the cues' weights change with cue reliability and/or stimulus intensity, as reported in previous studies. These results provide a basis for adapting the motion cueing algorithms developed for dynamic driving simulators, by taking into account the dynamics of the simulated motion as well as the status of the participant (driver or passenger).
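The reliability-weighted cue combination described above can be sketched in a few lines. This is an illustrative toy model, not the authors' own: it follows the standard maximum-likelihood account in which each cue is weighted by its inverse variance, and adds a hypothetical discordance threshold to mimic the reported behavior of active drivers, who reject the inertial cue when the two cues disagree too strongly. All parameter names and values here are assumptions chosen for the example.

```python
def fuse_cues(visual_est, visual_var, inertial_est, inertial_var,
              discordance_limit=None):
    """Combine visual and inertial estimates of angular displacement (deg).

    Each cue is weighted by its inverse variance, so the less noisy cue
    dominates the fused estimate (standard maximum-likelihood integration).
    If discordance_limit is set and the cues disagree by more than it, the
    inertial cue is rejected outright, loosely mimicking the active-driver
    behavior reported in the abstract. Returns (estimate, variance).
    """
    if (discordance_limit is not None
            and abs(visual_est - inertial_est) > discordance_limit):
        return visual_est, visual_var  # fall back on the visual cue alone
    w_vis = 1.0 / visual_var
    w_ine = 1.0 / inertial_var
    fused = (w_vis * visual_est + w_ine * inertial_est) / (w_vis + w_ine)
    return fused, 1.0 / (w_vis + w_ine)

# Example with made-up numbers: a precise visual cue (variance 4)
# pulls the fused estimate toward itself; the fused variance is lower
# than either unimodal variance.
est, var = fuse_cues(visual_est=95.0, visual_var=4.0,
                     inertial_est=80.0, inertial_var=16.0)
# est = 92.0, var = 3.2
```

A non-linear extension, as the abstract suggests, would make the variances themselves functions of stimulus intensity rather than constants.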