This study investigated the role of vection (i.e., a visually induced sense of self-motion), optokinetic nystagmus (OKN), and inadvertent head movements in visually induced motion sickness (VIMS) evoked by yaw rotation of the visual surround. These three elements have all been proposed as contributing factors in VIMS, as they can be linked to different motion sickness theories. However, a full understanding of the role of each factor is still lacking, because independent manipulation has proven difficult in the past. We adopted an integrative approach to the problem by obtaining measures of potentially relevant parameters in four experimental conditions and subsequently combining them in a linear mixed regression model. To that end, participants were exposed to visual yaw rotation in four separate sessions. In a full factorial design, OKN was manipulated via a fixation target (present/absent), and vection strength via a conflict between the motion directions of the central and peripheral fields of view (present/absent). In all conditions, head movements were minimized as much as possible. Measured parameters included vection strength, vection variability, OKN slow-phase velocity, OKN frequency, the number of inadvertent head movements, and inadvertent head tilt. Results show that VIMS increases with vection strength, but that this relation varies among participants (R² = 0.48). Regression coefficients for vection variability and for the head- and eye-movement measures were not significant. These results may seem to be in line with the Sensory Conflict theory of motion sickness, but we argue that a more detailed definition of the exact nature of the conflict is required to fully appreciate the relationship between vection and VIMS.
Visual speed is believed to be underestimated at low contrast, which has been proposed as an explanation of excessive driving speed in fog. Combining psychophysical measurements and driving simulation, we confirm that speed is underestimated when contrast is reduced uniformly for all objects of the visual scene, independently of their distance from the viewer. However, we show that when contrast is reduced more for distant objects, as is the case in real fog, visual speed is actually overestimated, prompting drivers to decelerate. Using an artificial anti-fog (that is, fog characterized by better visibility for distant than for close objects), we demonstrate for the first time that perceived speed depends on the spatial distribution of contrast over the visual scene rather than on the global level of contrast per se. Our results cast new light on how reduced-visibility conditions affect perceived speed, providing important insight into the human visual system. DOI: http://dx.doi.org/10.7554/eLife.00031.001
Advanced driving simulators aim to render the motion of a vehicle with maximum fidelity, which requires increased mechanical travel, size, and cost of the system. Motion cueing algorithms reduce the motion envelope by taking advantage of limitations in human motion perception; the most commonly employed method is simply to scale down the physical motion. However, little is known about the effects of motion scaling on motion perception and on actual driving performance. This paper presents the results of a European collaborative project, which explored different motion scale factors in a slalom driving task. Three state-of-the-art simulator systems were used, each capable of generating displacements of several meters. The results of four comparable driving experiments, obtained with a total of 65 participants, indicate a preference for motion scale factors below 1, within a wide range of acceptable values (0.4–0.75). Very reduced or absent motion cues significantly degrade driving performance. Applications of this research are discussed for the design of motion systems and cueing algorithms for driving simulation.
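The simplest cueing step discussed above, uniformly scaling the vehicle's motion before it is rendered on the platform, can be sketched as follows. This is an illustrative fragment, not any specific simulator's implementation: real cueing algorithms combine scaling with washout filters and actuator limits, all omitted here. The acceptable range 0.4–0.75 is the one reported in the abstract.

```python
def scale_motion(acceleration, scale=0.6):
    """Uniformly scale an acceleration demand (m/s^2) for the simulator platform.

    A scale factor in the reported acceptable range (0.4-0.75) attenuates
    the physical motion while preserving its temporal profile.
    """
    if not 0.0 < scale <= 1.0:
        raise ValueError("scale factor should lie in (0, 1]")
    return scale * acceleration


# A 2 m/s^2 vehicle acceleration rendered at scale 0.6 -> 1.2 m/s^2 on the platform
print(scale_motion(2.0))
```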
While moving through the environment, humans use vision to discriminate different self-motion intensities and to control their actions (e.g. maintaining balance or controlling a vehicle). How the intensity of visual stimuli affects self-motion perception is an open, yet important, question. In this study, we investigate the human ability to discriminate perceived velocities of visually induced illusory self-motion (vection) around the vertical (yaw) axis. Stimuli, generated using a projection screen (70 × 90 deg field of view), consist of a natural virtual environment (360 deg panoramic colour picture of a forest) rotating at constant velocity. Participants control stimulus duration to allow for a complete vection illusion to occur in every single trial. In a two-interval forced-choice task, participants discriminate a reference motion from a comparison motion, adjusted after every presentation, by indicating which rotation feels stronger. Motion sensitivity is measured as the smallest perceivable change in stimulus intensity (differential threshold) for eight participants at five rotation velocities (5, 15, 30, 45 and 60 deg/s). Differential thresholds for circular vection increase with stimulus velocity, following a trend well described by a power law with an exponent of 0.64. The time necessary for complete vection to arise is slightly but significantly longer for the first stimulus presentation (average 11.56 s) than for the second (9.13 s) and does not depend on stimulus velocity. Results suggest that lower differential thresholds (higher sensitivity) are associated with slower rotations, possibly because these occur more frequently in everyday experience. Moreover, results also suggest that vection is facilitated by a recent exposure, possibly related to visual motion after-effect.
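The power-law trend reported above can be sketched numerically. In the snippet below, only the tested velocities come from the abstract; the threshold values are hypothetical numbers constructed to follow roughly the reported exponent of 0.64, purely to illustrate how such a fit works:

```python
import numpy as np

# Hypothetical differential thresholds at the five tested rotation velocities.
# These are NOT the study's data; they merely follow delta ~ k * v**0.64.
velocities = np.array([5.0, 15.0, 30.0, 45.0, 60.0])   # deg/s (from the abstract)
thresholds = np.array([1.8, 3.6, 5.6, 7.3, 8.7])       # deg/s (illustrative)

# A power law delta = k * v**p is linear in log-log coordinates:
# log(delta) = log(k) + p * log(v), so an ordinary linear fit recovers p.
p, log_k = np.polyfit(np.log(velocities), np.log(thresholds), 1)
k = np.exp(log_k)

print(f"fitted exponent p = {p:.2f}, scale k = {k:.2f}")
```

The fitted exponent lands near 0.64, matching the trend described in the abstract; an exponent below 1 means thresholds grow more slowly than the stimulus itself, i.e. relative sensitivity improves at higher velocities.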
Full-field visual rotation around the vertical axis induces a sense of self-motion (vection), optokinetic nystagmus (OKN), and, eventually, also motion sickness (MS). If the lights are then suddenly switched off, optokinetic afternystagmus (OKAN) occurs. This is due to the discharge of the velocity storage mechanism (VSM), a central integrative network that has been suggested to be involved in motion sickness. We previously showed that visually induced motion sickness (VIMS) following optokinetic stimulation is dependent on vection intensity. To shed light on this relationship, the current study investigated whether vection intensity is related to VSM activity, and thus, to the OKAN. In repetitive trials (eight per condition), 15 stationary participants were exposed to 120 s of visual yaw rotation (60°/s), followed by 90 s in darkness. The visual stimulus either induced strong vection (i.e., scene rotating normally) or weak vection (central and peripheral part moving in opposite directions). Eye movements and subjective vection intensity were continuously measured. Results showed that OKAN occurred less frequently and with lower initial magnitude in the weak-vection condition compared to the strong-vection condition. OKAN decay time constants were not significantly different. The results suggest that the stimuli that produced strong vection also enhanced the charging of the VSM. As VSM activity presumably is a factor in motion sickness, the enhanced VSM activity in our strong-vection condition hints at an involvement of the VSM in VIMS, and could explain why visual stimuli producing a strong sense of vection also elicit high levels of VIMS.
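The OKAN decay time constant analysed above reflects the discharge of the velocity storage mechanism, which is commonly modelled as an exponential decay of slow-phase velocity, v(t) = v0 · exp(−t/τ). A minimal sketch of how τ can be recovered from such a trace; the time constant and initial velocity below are hypothetical, as the abstract does not report the fitted values:

```python
import numpy as np

# Hypothetical OKAN slow-phase velocity trace during the 90 s in darkness,
# modelled as an exponential discharge of the velocity storage mechanism.
t = np.linspace(0.0, 90.0, 91)      # seconds after lights off
tau_true, v0_true = 15.0, 20.0      # assumed time constant (s) and initial SPV (deg/s)
v = v0_true * np.exp(-t / tau_true)

# In semi-log coordinates the decay is linear: log(v) = log(v0) - t/tau,
# so a linear fit recovers both parameters.
slope, intercept = np.polyfit(t, np.log(v), 1)
tau_est, v0_est = -1.0 / slope, np.exp(intercept)

print(f"estimated tau = {tau_est:.1f} s, v0 = {v0_est:.1f} deg/s")
```

On real eye-movement data the trace is noisy and the fit is restricted to the clearly decaying portion, but the estimation principle is the same.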
A growing number of studies have investigated anisotropies in representations of horizontal and vertical spaces. In humans, compelling evidence for such anisotropies exists for representations of multi-floor buildings. In contrast, evidence regarding open spaces is indecisive. Our study aimed to further the understanding of horizontal and vertical spatial representations in open spaces, using a simple traveled-distance estimation paradigm. Blindfolded participants were moved along various directions in the sagittal plane. Subsequently, participants passively reproduced the traveled distance from memory. Participants performed this task in an upright and in a 30° backward-pitch orientation. The accuracy of distance estimates in the upright orientation showed a horizontal–vertical anisotropy, with higher accuracy along the horizontal axis compared with the vertical axis. The backward-pitch orientation enabled us to investigate whether this anisotropy was body- or earth-centered. The accuracy patterns of the upright condition were positively correlated with the body-relative (not the earth-relative) coordinate mapping of the backward-pitch condition, suggesting a body-centered anisotropy. Overall, this is consistent with findings on motion perception. It suggests that the distance estimation sub-process of path integration is subject to horizontal–vertical anisotropy. Based on previous studies that showed isotropy in open spaces, we speculate that real physical self-movements, or categorical versus isometric encoding, are crucial factors for (an)isotropies in spatial representations.
To successfully perform daily activities such as maintaining posture or running, humans need to be sensitive to self-motion over a large range of motion intensities. Recent studies have shown that the human ability to discriminate self-motion in the presence of either inertial-only motion cues or visual-only motion cues is not constant but rather decreases with motion intensity. However, these results do not yet allow for a quantitative description of how self-motion is discriminated in the presence of combined visual and inertial cues, since little is known about visual–inertial perceptual integration and the resulting self-motion perception over a wide range of motion intensities. Here we investigate these two questions for head-centred yaw rotations (0.5 Hz) presented either in darkness or combined with visual cues (optical flow with limited-lifetime dots). Participants discriminated a reference motion, repeated unchanged for every trial, from a comparison motion, iteratively adjusted in peak velocity so as to measure the participants’ differential threshold, i.e. the smallest perceivable change in stimulus intensity. A total of six participants were tested at four reference velocities (15, 30, 45 and 60 °/s). Results are combined for further analysis with previously published differential thresholds measured for visual-only yaw rotation cues using the same participants and procedure. Overall, differential thresholds increase with stimulus intensity following a trend described well by three power functions with exponents of 0.36, 0.62 and 0.49 for inertial, visual and visual–inertial stimuli, respectively. Despite the different exponents, differential thresholds do not depend significantly on the type of sensory input, suggesting that combining visual and inertial stimuli does not lead to improved discrimination performance over the investigated range of yaw rotations.
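Because all three reported exponents are below 1, the Weber fraction (threshold divided by stimulus velocity) decreases with intensity for each cue type. A minimal sketch of this consequence; the scale constants k below are invented for illustration, as only the exponents come from the abstract:

```python
import numpy as np

# Power-law exponents reported in the abstract for each cue condition.
exponents = {"inertial": 0.36, "visual": 0.62, "visual-inertial": 0.49}
# Hypothetical scale constants (NOT from the study) just to evaluate the curves.
k = {"inertial": 2.0, "visual": 0.9, "visual-inertial": 1.3}

velocities = np.array([15.0, 30.0, 45.0, 60.0])  # deg/s, the tested references
thresholds = {name: k[name] * velocities**e for name, e in exponents.items()}

for name, th in thresholds.items():
    # With an exponent e < 1, threshold/velocity ~ k * v**(e - 1) falls with v:
    # relative sensitivity improves as rotations get faster.
    weber = th / velocities
    print(name, np.round(weber, 3))
```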