In the future, humans are likely to be exposed to environments with altered gravity conditions, be it only visually (Virtual and Augmented Reality) or both visually and bodily (space travel). As visually and bodily perceived gravity, as well as an internalized representation of earth gravity, are involved in a series of tasks such as catching, grasping, body orientation estimation and spatial inferences, humans will need to adapt to these new gravity conditions. Performance under earth-discrepant gravity conditions has been shown to be relatively poor, and the few studies conducted on gravity adaptation are rather discouraging. Especially in VR on earth, conflicts between bodily and visual gravity cues seem to make full adaptation to visually perceived earth-discrepant gravities nearly impossible, and even in space, where visual and bodily cues are congruent, adaptation is extremely slow. We invoke a Bayesian framework for gravity-related perceptual processes, in which earth gravity holds the status of a so-called “strong prior”. Like other strong priors, the gravity prior has developed through years and years of experience in an earth gravity environment. For this reason, the reliability of this representation is extremely high, and it overrules any sensory information to the contrary. While other factors, such as the multisensory nature of gravity perception, also need to be taken into account, we present the strong prior account as a unifying explanation for empirical results in gravity perception and adaptation to earth-discrepant gravities.
There is evidence that humans rely on an earth gravity (9.81 m/s²) prior for a series of tasks involving perception and action, the reason being that gravity helps predict future positions of moving objects. Eye movements, in turn, are partially guided by predictions about observed motion. Thus, the question arises whether knowledge about gravity is also used to guide eye movements: if humans rely on a representation of earth gravity for the control of eye movements, earth-gravity-congruent motion should elicit improved visual pursuit. In a pre-registered experiment, we presented participants (n = 10) with parabolic motion governed by six different gravities (−1/0.7/0.85/1/1.15/1.3 g), two initial vertical velocities and two initial horizontal velocities in a 3D environment. Participants were instructed to follow the target with their eyes. We tracked their gaze and computed the pursuit gain (velocity of the eyes divided by velocity of the target) as a proxy for the quality of pursuit. An LMM analysis with gravity condition as fixed effect and by-subject random intercepts showed that the gain was lower for −1 g than for 1 g (by −0.13, SE = 0.005). This model was significantly better than a null model without gravity as fixed effect (p < 0.001), supporting our hypothesis. A comparison of 1 g and the remaining gravity conditions revealed that 1.15 g (by 0.043, SE = 0.005) and 1.3 g (by 0.065, SE = 0.005) were associated with lower gains, while 0.7 g (by 0.054, SE = 0.005) and 0.85 g (by 0.029, SE = 0.005) were associated with higher gains. This model was again significantly better than a null model (p < 0.001), contradicting our hypothesis. Post-hoc analyses revealed that confounds in the 0.7/0.85/1/1.15/1.3 g conditions may be responsible for these contradictory results. Despite these discrepancies, our data provide some support for the hypothesis that internalized knowledge about earth gravity guides eye movements.
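The pursuit gain described above (eye velocity divided by target velocity) can be sketched as follows; the function and variable names are hypothetical, and the simulated data merely illustrate that perfect tracking of a constant-velocity target yields a gain of 1:

```python
import numpy as np

def pursuit_gain(eye_pos, target_pos, dt):
    """Pursuit gain: mean ratio of eye velocity to target velocity.

    eye_pos, target_pos: gaze and target positions (e.g. in deg), sampled at dt (s).
    A gain of 1 means the eyes match target velocity exactly.
    """
    eye_vel = np.gradient(eye_pos, dt)
    target_vel = np.gradient(target_pos, dt)
    # Exclude samples where the target is (nearly) stationary to avoid division by zero
    valid = np.abs(target_vel) > 1e-6
    return np.mean(eye_vel[valid] / target_vel[valid])

# Perfect pursuit of a target moving at a constant 10 deg/s
t = np.arange(0.0, 1.0, 0.01)
target = 10.0 * t
eye = 10.0 * t
print(round(pursuit_gain(eye, target, 0.01), 3))  # → 1.0
```

In practice, saccades would have to be detected and removed before computing the gain; this sketch assumes a clean, saccade-free pursuit trace.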
Judging object speed during observer self-motion requires disambiguating two sources of retinal stimulation: self-motion and object motion. According to the Flow Parsing hypothesis, observers estimate their own motion, subtract the corresponding retinal motion from the total retinal stimulation, and interpret the remaining stimulation as pertaining to object motion. Subtracting noisier self-motion information from the retinal input should lead to a decrease in precision. Furthermore, when self-motion is only simulated visually, it is likely to be underestimated, yielding an overestimation of target speed when target and observer move in opposite directions and an underestimation when they move in the same direction. We tested this hypothesis with a two-alternative forced-choice task in which participants judged which of two motions, presented in an immersive 3D environment, was faster. One motion interval contained a ball cloud whose speed was selected dynamically according to a PEST staircase, while the other contained one big target travelling laterally at a fixed speed. While viewing the big target, participants were either static or experienced visually simulated lateral self-motion in the same or opposite direction as the target. Participants were not significantly biased in either motion profile, and precision was only significantly lower when participants moved visually in the direction opposite to the target. We conclude that, when immersed in an ecologically valid 3D environment with rich self-motion cues, participants perceive an object’s speed accurately at a small precision cost, even when self-motion is simulated only visually.
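The flow-parsing subtraction and the predicted biases can be illustrated with a minimal sketch, assuming a simple additive model in which visually simulated self-motion is underestimated by a constant gain (all names and values are hypothetical):

```python
def perceived_speed(world_speed, self_motion, subtraction_gain=0.8):
    """Flow-parsing sketch for lateral motion along one axis.

    world_speed: target speed in the world (positive = rightward).
    self_motion: observer speed along the same axis.
    subtraction_gain < 1 models underestimation of visually simulated self-motion.
    """
    # Retinal motion of the target relative to the moving observer
    retinal = world_speed - self_motion
    # Observer adds back only the *estimated* self-motion component
    return retinal + subtraction_gain * self_motion

# Target at 5 units/s; observer moves at 3 units/s in the same direction:
print(round(perceived_speed(5.0, 3.0), 2))   # → 4.4 (underestimation)
# Observer moves at 3 units/s in the opposite direction:
print(round(perceived_speed(5.0, -3.0), 2))  # → 5.6 (overestimation)
```

With a subtraction gain of 1 (veridical self-motion estimate), both cases would return exactly 5.0, which matches the unbiased judgments reported above.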
Evidence suggests that humans rely on an earth gravity prior for sensory-motor tasks like catching or reaching. Even under earth-discrepant conditions, this prior biases perception and action towards assuming a gravitational downwards acceleration of 9.81 m/s². This can be particularly detrimental in interactions with virtual environments that employ earth-discrepant gravity conditions for their visual presentation. The present study thus investigates how well humans discriminate visually presented gravities and which cues they use to extract gravity from the visual scene. To this end, we employed a two-interval forced-choice design. In Experiment 1, participants had to judge which of two presented parabolas had the higher underlying gravity. We used two initial vertical velocities, two horizontal velocities and a constant target size. Experiment 2 added a manipulation of the reliability of the target size. Experiment 1 shows that participants have generally high discrimination thresholds for visually presented gravities, with Weber fractions of 13% to beyond 30%. We identified the rate of change of the elevation angle (ẏ) and the visual angle (θ) as major cues. Experiment 2 furthermore suggests that size variability has a small influence on discrimination thresholds, while larger size variability increases reliance on ẏ and decreases reliance on θ. All in all, even though humans use all available information, they display low precision when extracting the governing gravity from a visual scene, which might further limit our capacity to adapt to earth-discrepant gravity conditions with visual information alone.
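The discrimination thresholds above are reported as Weber fractions, i.e. the just-noticeable difference (JND) expressed relative to the standard stimulus; a minimal illustration with hypothetical values:

```python
def weber_fraction(jnd, standard):
    """Weber fraction: just-noticeable difference relative to the standard."""
    return jnd / standard

# A hypothetical JND of 1.3 m/s² around earth gravity (9.81 m/s²)
print(round(weber_fraction(1.3, 9.81), 3))  # → 0.133, i.e. roughly the 13% lower bound
```

In a two-interval forced-choice design, the JND is typically derived from the slope of a psychometric function fitted to the "which interval was higher?" responses; that fitting step is omitted here.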
Humans expect downwards-moving objects to accelerate and upwards-moving objects to decelerate. These results have been interpreted as evidence that humans maintain an internal model of gravity. We have previously suggested an interpretation of these results within a Bayesian framework of perception: earth gravity could be represented as a Strong Prior that overrules noisy sensory information (Likelihood) and therefore attracts the final percept (Posterior) very strongly. Based on this framework, we use published data from a timing task involving gravitational motion to determine the mean and the standard deviation of the Strong Earth Gravity Prior. To obtain its mean, we refine a model of mean timing errors we proposed in a previous paper (Jörges & López-Moliner, 2019), while expanding the range of conditions under which it yields adequate predictions of performance. This underscores our previous conclusion that the gravity prior is likely to be very close to 9.81 m/s². To obtain the standard deviation, we identify different sources of sensory and motor variability reflected in timing errors. We then model timing responses based on quantitative assumptions about these sensory and motor errors for a range of standard deviations of the earth gravity prior, and find that a standard deviation of around 2 m/s² makes for the best fit. This value is likely to represent an upper bound, as there are strong theoretical reasons, along with supporting empirical evidence, for the standard deviation of the earth gravity prior being lower than this value.
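Under Gaussian assumptions, the posterior in this framework is a precision-weighted combination of prior and likelihood. The following sketch uses the prior parameters estimated above (mean 9.81 m/s², SD 2 m/s²) together with a hypothetical noisy 0 g likelihood to show how a strong prior dominates the percept:

```python
def combine_gaussians(prior_mean, prior_sd, like_mean, like_sd):
    """Posterior of two Gaussians: precision-weighted mean and pooled SD."""
    w_prior = prior_sd ** -2   # precision = 1 / variance
    w_like = like_sd ** -2
    post_mean = (w_prior * prior_mean + w_like * like_mean) / (w_prior + w_like)
    post_sd = (w_prior + w_like) ** -0.5
    return post_mean, post_sd

# Strong earth-gravity prior vs. noisy visual evidence for 0 g (likelihood SD hypothetical)
post_mean, post_sd = combine_gaussians(9.81, 2.0, 0.0, 6.0)
print(round(post_mean, 2))  # → 8.83: the percept stays close to earth gravity
```

Note how even sensory evidence centered on 0 g shifts the percept by only about 1 m/s² when the likelihood is much noisier than the prior; the narrower the prior SD, the stronger this attraction.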