SUMMARY

Human locomotion through natural environments requires precise coordination between the biomechanics of the bipedal gait cycle and the eye movements that gather the information needed to guide foot placement. However, little is known about how the visual and locomotor systems work together to support movement through the world. We developed a system to simultaneously record gaze and full-body kinematics during locomotion over different outdoor terrains. We found that walkers not only tune their gaze behavior to the specific information needed to traverse paths of varying complexity but do so while maintaining a constant temporal look-ahead window across all terrains. This strategy allows walkers to use gaze to tailor their energetically optimal preferred gait cycle to the upcoming path, balancing the drive to move efficiently against the need to place the feet in stable locations. Eye movements and locomotion are intimately linked in a way that reflects the integration of energetic costs, environmental uncertainty, and the momentary informational demands of the locomotor task. Thus, the relationship between gaze and gait reveals the structure of the sensorimotor decisions that support successful performance in the face of the varying demands of the natural world.
The aim of this study was to investigate the role of visual information in the control of walking over complex terrain with irregularly spaced obstacles. We developed an experimental paradigm to measure how far along the future path people need to see in order to maintain forward progress and avoid stepping on obstacles. Participants walked over an array of randomly distributed virtual obstacles that were projected onto the floor by an LCD projector while their movements were tracked by a full-body motion capture system. Walking behavior in a full-vision control condition was compared with behavior in a number of other visibility conditions in which obstacles did not appear until they fell within a window of visibility centered on the moving observer. Collisions with obstacles were more frequent and, for some participants, walking speed was slower when the visibility window constrained vision to less than two step lengths ahead. When window sizes were greater than two step lengths, the frequency of collisions and walking speed were weakly affected or unaffected. We conclude that visual information from at least two step lengths ahead is needed to guide foot placement when walking over complex terrain. When placed in the context of recent research on the biomechanics of walking, the findings suggest that two step lengths of visual information may be needed because it allows walkers to exploit the passive mechanical forces inherent to bipedal locomotion, thereby avoiding obstacles while maximizing energetic efficiency.
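As an illustration of the visibility-window manipulation described above, the following is a minimal sketch in Python of a gating rule that renders an obstacle only when it falls within a window centered on the moving observer. The function names, 2D geometry, and the assumed 0.7 m step length are illustrative assumptions, not details of the actual experimental software.

```python
import numpy as np

def visible_obstacles(obstacle_xy, observer_xy, window_radius):
    """Return a boolean mask of obstacles inside a circular visibility
    window centered on the observer (hypothetical gating rule; the actual
    window geometry used in the experiment may differ)."""
    deltas = obstacle_xy - observer_xy       # vectors from observer to each obstacle
    dists = np.linalg.norm(deltas, axis=1)   # Euclidean distances
    return dists <= window_radius

# Example: window radius expressed as two step lengths (assumed ~0.7 m per step)
obstacles = np.array([[1.0, 0.2], [2.5, -0.1], [4.0, 0.3]])
observer = np.array([0.0, 0.0])
print(visible_obstacles(obstacles, observer, window_radius=2 * 0.7))
# -> [ True False False]: only the nearest obstacle is drawn
```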
To walk efficiently over complex terrain, humans must use vision to tailor their gait to the upcoming ground surface without interfering with the exploitation of passive mechanical forces. We propose that walkers use visual information to initialize the mechanical state of the body before the beginning of each step so the resulting ballistic trajectory of the walker's center of mass will facilitate stepping on target footholds. Using a precision stepping task and synchronizing target visibility to the gait cycle, we empirically validated two predictions derived from this strategy: (1) walkers must have information about upcoming footholds during the second half of the preceding step, and (2) foot placement is guided by information about the position of the target foothold relative to the preceding base of support. We conclude that active and passive modes of control work synergistically to allow walkers to negotiate complex terrain with efficiency, stability, and precision.

Humans and other animals are remarkable in their ability to take advantage of what is freely available in the environment to the benefit of efficiency, stability, and coordination in movement. This opportunism can take on at least two forms, both of which are evident in human locomotion over complex terrain: (i) harnessing external forces to minimize the need for self-generated (i.e., muscular) forces (1), and (ii) taking advantage of passive stability to simplify the control of a complex movement (e.g., ref. 2). In the ensuing section, we explain how walkers exploit external forces and passive stability while walking over flat, obstacle-free terrain.* We then generalize this account to walking over irregular surfaces by explaining how walkers can adapt gait to terrain variations while still reaping the benefits of the available mechanical forces and inherent stability. This account leads to hypotheses about how and when walkers use visual information about the upcoming terrain and where that information is found. We derive several predictions from these hypotheses and then put them to the test in three experiments.

Passive Control in Human Walking

The basic movement pattern of the human gait cycle arises primarily from the phasic activation of flexor and extensor muscle groups by spinal-level central pattern generators, regulated by sensory signals from lower limb proprioceptors and cutaneous feedback from the plantar surface of the foot. This low-level neuromuscular circuitry serves to maintain the rhythmic physical oscillations that define locomotor behavior (see ref. 3 for review). This section will provide an overview of the basic biomechanics of the bipedal gait cycle to show how these inherent physical dynamics contribute to the passive stability and energetic efficiency of human locomotion.

During the single support phase of the bipedal gait cycle, when only one foot is in contact with the ground, a walker shares the physical dynamics of an inverted pendulum. The body's center of mass (COM) acts as the bob of the pendulum and is support...
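As a rough illustration of the inverted-pendulum account of single support, the following minimal simulation integrates the ballistic motion of the COM pivoting over a rigid stance leg. The parameter values and the simple Euler integration are assumptions made for illustration; they are not the authors' model or data.

```python
import numpy as np

# Minimal inverted-pendulum sketch of single support: the COM acts as the bob
# of a pendulum pivoting about the stance foot. Parameter values are assumed.
g, L = 9.81, 1.0              # gravity (m/s^2), effective leg length (m)
dt = 0.001                    # integration time step (s)
theta = np.deg2rad(-15.0)     # COM angle from vertical at the start of single support (rad)
omega = 1.2                   # initial angular velocity (rad/s)

steps = 0
while theta < np.deg2rad(15.0):        # integrate until the COM has vaulted past vertical
    alpha = (g / L) * np.sin(theta)    # gravitational torque: decelerates, then accelerates the COM
    omega += alpha * dt
    theta += omega * dt
    steps += 1

print(f"single-support duration ≈ {steps * dt:.2f} s")
```

No muscular forces appear inside the loop: once the initial angle and angular velocity are set, the trajectory is determined entirely by gravity acting on the pendulum, which is the sense in which this phase of gait is "passive."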
How do humans achieve such remarkable energetic efficiency when walking over complex terrain such as a rocky trail? Recent research in biomechanics suggests that the efficiency of human walking over flat, obstacle-free terrain derives from the ability to exploit the physical dynamics of our bodies. In this study, we investigated whether this principle also applies to visually guided walking over complex terrain. We found that when humans can see the immediate foreground as little as two step lengths ahead, they are able to choose footholds that allow them to exploit their biomechanical structure as efficiently as they can with unlimited visual information. We conclude that when humans walk over complex terrain, they use visual information from two step lengths ahead to choose footholds that allow them to approximate the energetic efficiency of walking in flat, obstacle-free environments.
The aim of this study was to examine how visual information is used to control stepping during locomotion over terrain that demands precision in the placement of the feet. More specifically, we sought to determine the point in the gait cycle at which visual information about a target is no longer needed to guide accurate foot placement. Subjects walked along a path while stepping as accurately as possible on a series of small, irregularly spaced target footholds. In various conditions, each of the targets became invisible either during the step to the target or during the step to the previous target. We found that making targets invisible after toe-off of the step to the target had little to no effect on stepping accuracy. However, when targets disappeared during the step to the previous target, foot placement became less accurate and more variable. The findings suggest that visual information about a target is used prior to initiation of the step to that target but is not needed to continuously guide the foot throughout the swing phase. We propose that this style of control is rooted in the biomechanics of walking, which facilitates an energetically efficient strategy in which visual information is primarily used to initialize the mechanical state of the body leading into a ballistic movement toward the target foothold. Taken together with previous studies, the findings suggest that the availability of visual information about the terrain near a particular step is most essential during the latter half of the preceding step, which constitutes a critical control phase in the bipedal gait cycle.
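The gait-contingent visibility manipulation described above can be sketched as a simple gating rule tied to gait events. The condition names, trigger times, and assumed step duration below are illustrative assumptions, not the authors' implementation.

```python
# Hedged sketch of gait-synchronized target visibility: a target is switched
# off at an assumed gait event (e.g., toe-off of the step toward that target).
def target_visible(current_time, toe_off_time, condition, step_duration=0.5):
    """Return whether a target should be drawn at current_time (times in s).

    "off_at_toe_off_to_target": hide once the step to the target has begun.
    "off_during_previous_step": hide one step earlier (assumed fixed offset).
    Any other condition string acts as the full-vision control."""
    if condition == "off_at_toe_off_to_target":
        return current_time < toe_off_time
    if condition == "off_during_previous_step":
        return current_time < toe_off_time - step_duration
    return True

print(target_visible(1.0, 1.2, "off_at_toe_off_to_target"))   # True: step to target not yet begun
print(target_visible(1.0, 1.2, "off_during_previous_step"))   # False: already hidden one step earlier
```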
The aim of this study was to investigate the perception of possibilities for action (i.e., affordances) that depend on one’s movement capabilities, and more specifically, the passability of a shrinking gap between converging obstacles. We introduce a new optical invariant that specifies in intrinsic units the minimum locomotor speed needed to safely pass through a shrinking gap. Detecting this information during self-motion requires recovering a component of the obstacles’ local optical expansion due to obstacle motion, independent of self-motion. In principle, recovering the obstacle motion component could involve either visual or non-visual self-motion information. We investigated the visual and non-visual contributions in two experiments in which subjects walked through a virtual environment and made judgments about whether it was possible to pass through a shrinking gap. On a small percentage of trials, visual and non-visual self-motion information were independently manipulated by varying the speed with which subjects moved through the virtual environment. Comparisons of judgments on such catch trials with judgments on normal trials revealed both visual and non-visual contributions to the detection of information about minimum walking speed.
Here we examine the ways that the optic flow patterns experienced during natural locomotion are shaped by the movement of the observer through the environment. By recording body motion during locomotion in natural terrain, we demonstrate that head-centered optic flow is highly unstable regardless of whether the walker's head (and eye) is directed towards a distant target or at the nearby ground to monitor foothold selection. In contrast, VOR-mediated retinal optic flow has stable, reliable features that may be valuable for the control of locomotion. In particular, we found that a walker can determine whether they will pass to the left or right of their fixation point by observing the sign and magnitude of the curl of the flow field at the fovea. In addition, the divergence map of the retinal flow field provides a cue for the walker's overground velocity/momentum vector in retinotopic coordinates, which may be an essential part of the visual identification of footholds during locomotion over complex terrain. These findings cast doubt on the long-standing assumption that accurate perception of heading direction requires correction for the effects of eccentric gaze. The present analysis of retinal flow patterns during the gait cycle suggests an alternative interpretation of the way flow is used both for the perception of heading and for the control of locomotion in the natural world.
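The curl and divergence measures referred to above can be illustrated with a finite-difference sketch on a synthetic sampled flow field. The field, grid, and "fovea" location below are illustrative assumptions, not the authors' retinal-flow data or analysis pipeline.

```python
import numpy as np

# Hedged sketch: estimate curl and divergence of a sampled 2D flow field
# with finite differences (np.gradient). The synthetic field is a stand-in
# for retinal flow: radial expansion plus a small rotational component.
n = 65
x = np.linspace(-1.0, 1.0, n)
y = np.linspace(-1.0, 1.0, n)
X, Y = np.meshgrid(x, y)

U = 0.8 * X - 0.3 * Y          # horizontal flow component
V = 0.8 * Y + 0.3 * X          # vertical flow component

dU_dy, dU_dx = np.gradient(U, y, x)   # axis 0 varies with y, axis 1 with x
dV_dy, dV_dx = np.gradient(V, y, x)

curl = dV_dx - dU_dy           # z-component of the curl of the flow field
div = dU_dx + dV_dy            # divergence of the flow field

c = n // 2                     # grid center, standing in for the fovea
print(f"curl at fovea: {curl[c, c]:+.2f}   divergence at fovea: {div[c, c]:+.2f}")
# In the account above, the sign of the curl at the fovea distinguishes passing
# left versus right of the fixation point; here the rotational term simply
# produces a nonzero curl to illustrate the computation.
```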
Many locomotor tasks involve interactions with moving objects. When observer (i.e., self-)motion is accompanied by object motion, the optic flow field includes a component due to self-motion and a component due to object motion. For moving observers to perceive the movement of other objects relative to the stationary environment, the visual system could recover the object-motion component – that is, it could factor out the influence of self-motion. In principle, this could be achieved using visual self-motion information, non-visual self-motion information, or a combination of both. In this study, we report evidence that visual information about the speed (Experiment 1) and direction (Experiment 2) of self-motion plays a role in recovering the object-motion component even when non-visual self-motion information is also available. However, the magnitude of the effect was less than one would expect if subjects relied entirely on visual self-motion information. Taken together with previous studies, we conclude that when self-motion is real and actively generated, both visual and non-visual self-motion information contribute to the perception of object motion. We also consider the possible role of this process in visually guided interception and avoidance of moving objects.
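The idea of factoring out the self-motion component of optic flow can be sketched as a gain-scaled subtraction at the object's retinal location. The gain term stands in for partial reliance on visual self-motion information; the function name and all values are illustrative assumptions, not the authors' model.

```python
import numpy as np

def recover_object_motion(total_flow, self_motion_flow, gain=0.7):
    """Subtract a gain-scaled estimate of the self-motion flow from the
    total measured flow to recover the object-motion component.
    gain < 1 models incomplete compensation for self-motion."""
    return total_flow - gain * self_motion_flow

total = np.array([2.0, -0.5])        # measured flow at the object (deg/s), assumed values
self_motion = np.array([1.5, -0.5])  # flow attributable to self-motion (deg/s), assumed values

print(recover_object_motion(total, self_motion))            # partial compensation
print(recover_object_motion(total, self_motion, gain=1.0))  # complete compensation -> [0.5 0.]
```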