Moving and interacting with the environment require a reference for orientation and a scale for calibration in space and time. A wide variety of environmental cues and calibrated frames are available at different locales, but the reference of gravity is ubiquitous on Earth. The pull of gravity on static objects provides a plummet which, together with the horizontal plane, defines a three-dimensional Cartesian frame for visual images. On the other hand, the gravitational acceleration of falling objects can provide a time-stamp on events, because the motion duration of an object accelerated by gravity over a given path is fixed. Indeed, since ancient times humans have used plumb bobs for spatial surveying, and water clocks or pendulum clocks for timekeeping. Here we review behavioral evidence in favor of the hypothesis that the brain is endowed with mechanisms that exploit the presence of gravity to estimate spatial orientation and the passage of time. Several visual and non-visual (vestibular, haptic, visceral) cues are merged to estimate the orientation of the visual vertical. However, the relative weight of each cue is not fixed, but depends on the specific task. Next, we show that an internal model of the effects of gravity is combined with multisensory signals to time the interception of falling objects, to time the passage through spatial landmarks during virtual navigation, to assess the duration of a gravitational motion, and to judge the naturalness of periodic motion under gravity.
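The claim that gravity provides a temporal reference rests on elementary kinematics: because g is locally constant, the duration of free fall over a given vertical path is fixed. A minimal sketch of this relation, t = sqrt(2h/g):

```python
import math

G = 9.81  # gravitational acceleration at Earth's surface, m/s^2


def fall_time(height_m, g=G):
    """Duration of free fall from rest over a given vertical path.

    Because g is (locally) constant, this duration depends only on
    the height -- the physical basis for using falling motion as a
    temporal reference, as the review argues the brain does.
    """
    return math.sqrt(2.0 * height_m / g)


# A ball dropped from 1 m always takes ~0.45 s to reach the ground.
print(round(fall_time(1.0), 3))  # → 0.452
```

Any object dropped from the same height thus stamps the same interval, regardless of its mass.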
Gravity is crucial for spatial perception, postural equilibrium, and movement generation. The vestibular apparatus is the main sensory system involved in monitoring gravity. Hair cells in the vestibular maculae respond to gravitoinertial forces, but they cannot distinguish between linear accelerations and changes of head orientation relative to gravity. The brain deals with this sensory ambiguity (which has caused fatal airplane accidents) by combining several cues with the otolith signals: angular velocity signals provided by the semicircular canals, proprioceptive signals from muscles and tendons, visceral signals related to gravity, and visual signals. In particular, vision provides both static and dynamic signals about body orientation relative to the vertical, but it poorly discriminates arbitrary accelerations of moving objects. However, we are able to visually detect the specific acceleration of gravity from early infancy. This ability depends on the fact that gravity effects are stored in brain regions that integrate visual, vestibular, and neck proprioceptive signals and combine this information with an internal model of gravity effects.
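The tilt/translation ambiguity described above can be made concrete: the otoliths sense the gravitoinertial force f = g − a, so from the otolith signal alone a static head tilt of θ is indistinguishable from a horizontal translation with acceleration g·tan(θ). A sketch of this equivalence:

```python
import math

G = 9.81  # m/s^2


def equivalent_acceleration(tilt_deg):
    """Horizontal linear acceleration that produces the same otolith
    shear signal as a static head tilt of `tilt_deg`.

    The otoliths measure the gravitoinertial force f = g - a, so a
    tilt of theta and a translation with a = g * tan(theta) yield
    identical macular shear -- the ambiguity the brain must resolve
    with canal, proprioceptive, visceral, and visual cues.
    """
    return G * math.tan(math.radians(tilt_deg))


# A 10 deg backward pitch mimics a forward acceleration of ~1.73 m/s^2.
print(round(equivalent_acceleration(10.0), 2))  # → 1.73
```

In flight, a sustained forward acceleration can therefore be misread as a nose-up pitch, which is why the ambiguity is aviation-relevant.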
A remarkable challenge our brain must constantly face when interacting with the environment is ambiguous and, at times, even missing sensory information. This is particularly compelling for vision, the main sensory system we rely upon to gather cues about the external world. It is not uncommon, for example, for objects catching our attention to disappear temporarily from view, occluded by obstacles in the foreground. Nevertheless, we are often able to keep our gaze on them throughout the occlusion, or even catch them on the fly, despite the transient lack of visual motion information. This implies that the brain can fill the gaps in sensory information by extrapolating the object's motion through the occlusion. In recent years, much experimental evidence has accumulated that both perceptual and motor processes exploit visual motion extrapolation mechanisms. Moreover, neurophysiological and neuroimaging studies have identified brain regions potentially involved in the predictive representation of occluded target motion. Within this framework, ocular pursuit and manual interceptive behavior have proven to be useful experimental models for investigating visual extrapolation mechanisms. Studies in these fields have pointed out that visual motion extrapolation processes depend on manifold information related to short-term memory representations of the target motion before the occlusion, as well as on longer-term representations derived from previous experience with the environment. We will review recent oculomotor and manual interception literature to provide up-to-date views on the neurophysiological underpinnings of visual motion extrapolation.
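The extrapolation idea can be illustrated with a toy computation: for a target falling under gravity and occluded for a duration t, the last visible state (position, velocity) plus an internal model of gravity suffices to predict where the target reappears. A minimal sketch under these assumptions:

```python
G = 9.81  # internal-model estimate of gravitational acceleration, m/s^2


def extrapolate_fall(p0, v0, t, g=G):
    """Predict the state of a vertically falling target after an
    occlusion of duration t (positions/velocities measured downward).

    Combines the short-term memory of the pre-occlusion state
    (p0, v0) with a long-term internal model of gravity (g), in the
    spirit of the extrapolation mechanisms discussed above.
    """
    p = p0 + v0 * t + 0.5 * g * t * t  # predicted position
    v = v0 + g * t                     # predicted velocity
    return p, v


# Target occluded for 0.5 s, last seen at 2 m with 3 m/s downward speed.
p, v = extrapolate_fall(2.0, 3.0, 0.5)
```

A purely first-order (constant-velocity) extrapolation would omit the g-dependent term and systematically undershoot, which is one behavioral signature used to probe the internal model.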
During gradual speed changes, humans exhibit a sudden discontinuous switch from walking to running at a specific speed, and it has been suggested that different gaits may be associated with different functioning of neuronal networks. In this study we recorded the EMG activity of leg muscles during slow increments and decrements in treadmill belt speed and at different levels of body weight unloading. In contrast to normal walking at 1 g, at lower levels of simulated gravity (<0.4 g) the transition between walking and running was generally gradual, without systematic abrupt changes in either intensity or timing of EMG patterns. This phenomenon depended to a limited extent on the gravity simulation technique, although the gravity level at which smooth transitions appeared (0.4-0.6 g) tended to be lower for the vertical than for the tilted body weight support system. Furthermore, simulations performed with a half-center oscillator neuromechanical model showed that the abruptness of motor patterns at gait transitions at 1 g could be predicted from distinct parameters already established within the normal range of walking and running speeds, whereas at low gravity levels the parameters of the model were similar for the two human gaits. A lack of discontinuous changes in the pattern of speed-dependent locomotor characteristics in a hypogravity environment is consistent with the idea of a continuous shift in the state of a given set of central pattern generators, rather than the activation of a separate set of central pattern generators for each distinct gait.
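The half-center architecture invoked above can be sketched as two units with self-adaptation and mutual inhibition, whose rectified outputs burst in antiphase like flexor/extensor activity. This is a generic Matsuoka-type toy model, not the study's actual model; all parameter values are illustrative assumptions:

```python
def simulate_half_center(t_end=20.0, dt=0.005,
                         tau=0.25, tau_a=0.5, beta=2.5, w=2.5, c=1.0):
    """Minimal half-center oscillator (Matsuoka-type), integrated by
    forward Euler. Two units inhibit each other (weight w) and adapt
    (gain beta, time constant tau_a) under tonic drive c. Returns the
    rectified outputs y1, y2, which alternate in antiphase.
    """
    u1, u2 = 0.1, 0.0   # membrane-like states (asymmetric start breaks symmetry)
    v1, v2 = 0.0, 0.0   # adaptation states
    y1s, y2s = [], []
    for _ in range(int(t_end / dt)):
        y1, y2 = max(u1, 0.0), max(u2, 0.0)
        du1 = (-u1 - beta * v1 - w * y2 + c) / tau
        du2 = (-u2 - beta * v2 - w * y1 + c) / tau
        dv1 = (-v1 + y1) / tau_a
        dv2 = (-v2 + y2) / tau_a
        u1 += du1 * dt
        u2 += du2 * dt
        v1 += dv1 * dt
        v2 += dv2 * dt
        y1s.append(max(u1, 0.0))
        y2s.append(max(u2, 0.0))
    return y1s, y2s


y1, y2 = simulate_half_center()
```

In the study's framing, speed- and gravity-dependent changes in such parameters determine whether the walk-run transition looks abrupt (distinct parameter sets per gait) or smooth (a continuous parameter shift).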
Input from the foot plays an essential part in perceiving support surfaces and determining kinematic events in human walking. To simulate adequate tactile pressure inputs under body weight support (BWS) conditions, which represent an effective form of locomotion training, we here developed a new method of phasic mechanical foot stimulation using light-weight pneumatic insoles placed inside the shoes (under the heel and metatarsus). To test the system, we asked healthy participants to walk on a treadmill with different levels of BWS. The pressure under the stimulated areas of the feet and the subjective sensations were higher at high levels of BWS and when stimulation was applied to the ball and toes rather than the heels. Foot stimulation did not significantly disturb the normal motor pattern, and in all participants we evoked a reliable step-synchronized triggering of stimuli for each leg separately. This approach was developed within a general framework seeking "afferent templates" of human locomotion that could be used for functional sensory stimulation. The proposed technique can be used to imitate or partially restore surrogate contact forces under body weight support conditions.
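Step-synchronized triggering of the kind described above amounts to detecting, per leg, the moment a gait-phase signal rises through a threshold and firing the stimulus once per footfall. A hypothetical sketch of that edge-detection logic (the threshold and signal are illustrative, not the study's actual values):

```python
def step_triggers(signal, threshold):
    """Indices at which `signal` (e.g. insole pressure or a foot-contact
    proxy sampled once per frame) rises through `threshold`.

    Each rising crossing fires exactly one trigger, so stimulation is
    delivered once per step and stays synchronized with the gait cycle.
    """
    triggers = []
    above = False  # were we at/above threshold on the previous sample?
    for i, s in enumerate(signal):
        if s >= threshold and not above:
            triggers.append(i)  # rising edge: fire the stimulus here
        above = s >= threshold
    return triggers


# Two simulated footfalls -> two triggers, one at each contact onset.
contacts = [0, 1, 6, 7, 1, 0, 8, 2, 0]
print(step_triggers(contacts, 5))  # → [2, 6]
```

Running one such detector per leg yields the separate left/right triggering the abstract reports.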
People easily intercept a ball rolling down an incline, even though its acceleration varies with the slope in a complex manner. Apparently, however, they are poor at detecting anomalies when asked to judge artificial animations of descending motion. Since these perceptual deficiencies have been reported in studies involving a limited visual context, here we tested the hypothesis that judgments of the naturalness of rolling motion are consistent with physics when the visual scene incorporates sufficient cues about environmental reference and metric scale, roughly comparable to those present when intercepting a ball. Participants viewed a sphere rolling down an incline located in the median sagittal plane, presented in 3D wide-field virtual reality. In different experiments, either the slope of the plane or the sphere's acceleration was changed in arbitrary combinations, resulting in kinematics that were either consistent or inconsistent with physics. In Experiment 1 (slope adjustment), participants were asked to modify the slope angle until the resulting motion looked natural for a given ball acceleration. In Experiment 2 (acceleration adjustment), instead, they were asked to modify the acceleration until the motion on a given slope looked natural. No feedback about performance was provided. For both experiments, we found that participants were rather accurate at finding the match between slope angle and ball acceleration congruent with physics, but there was a systematic effect of the initial conditions: accuracy was higher when participants started the exploration from the combination of slope and acceleration corresponding to the congruent conditions than when they started far away from the congruent conditions. In Experiment 3, participants modified the slope angle based on an adaptive staircase, but the target never coincided with the starting condition. Here we found generally accurate performance, irrespective of the target slope.
We suggest that, provided the visual scene includes sufficient cues about environmental reference and metric scale, joint processing of slope and acceleration may facilitate the detection of natural motion. Perception of rolling motion may rely on the kind of approximate, probabilistic simulations of Newtonian mechanics that have previously been called into play to explain complex inferences in rich visual scenes.
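The physically congruent slope-acceleration pairing at stake in these experiments follows from rolling dynamics: for an ideal uniform sphere rolling without slipping (an assumption here, standard in mechanics), the moment of inertia I = (2/5)mr² gives a = g·sin(θ)/(1 + I/(mr²)) = (5/7)g·sin(θ). A minimal sketch:

```python
import math

G = 9.81  # m/s^2


def rolling_acceleration(slope_deg):
    """Acceleration of a uniform solid sphere rolling without slipping
    down an incline of the given slope angle.

    With I = (2/5) m r^2, the no-slip condition yields
    a = g*sin(theta) / (1 + I/(m r^2)) = (5/7) * g * sin(theta):
    the slope-acceleration combinations congruent with physics.
    """
    return (5.0 / 7.0) * G * math.sin(math.radians(slope_deg))


# e.g. a 30 deg incline pairs naturally with a ~3.50 m/s^2 acceleration
print(round(rolling_acceleration(30.0), 2))  # → 3.5
```

Note that the congruent acceleration is only 5/7 of the sliding-body value g·sin(θ): rotation absorbs part of the gravitational energy, which is one reason the slope-acceleration mapping is non-obvious to observers.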