The authors investigated the dynamics of steering and obstacle avoidance, with the aim of predicting routes through complex scenes. Participants walked in a virtual environment toward a goal (Experiment 1) and around an obstacle (Experiment 2) whose initial angle and distance varied. Goals and obstacles behave as attractors and repellers of heading, respectively, whose strengths depend on distance. The observed behavior was modeled as a dynamical system in which angular acceleration is a function of goal and obstacle angle and distance. By linearly combining terms for goals and obstacles, one could predict whether participants adopt a route to the left or right of an obstacle to reach a goal (Experiment 3). Route selection may emerge from on-line steering dynamics, making explicit path planning unnecessary.
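To make the form of such a model concrete, here is a minimal sketch of a goal-attractor/obstacle-repeller steering dynamics in Python. The exponential distance weighting and all parameter values are illustrative placeholders consistent with the description above, not the paper's fitted model.

```python
import numpy as np

def heading_acceleration(phi, phi_dot, goal, obstacles,
                         b=3.25, k_g=7.5, c1=0.4, c2=0.4,
                         k_o=200.0, c3=6.5, c4=0.8):
    """Angular acceleration of heading phi (radians) as a linear combination
    of a goal attractor and obstacle repellers, each weighted by distance.
    All parameter values are illustrative, not fitted values from the paper."""
    psi_g, d_g = goal
    acc = -b * phi_dot                                    # damping on turning rate
    # Goal attracts heading toward its direction; pull decays with distance d_g
    acc -= k_g * (phi - psi_g) * (np.exp(-c1 * d_g) + c2)
    # Each obstacle repels heading; push decays with both angle and distance
    for psi_o, d_o in obstacles:
        acc += (k_o * (phi - psi_o)
                * np.exp(-c3 * abs(phi - psi_o))
                * np.exp(-c4 * d_o))
    return acc
```

Integrating this acceleration over time (e.g., with a simple Euler scheme) yields a heading trajectory; with a goal term and an obstacle term combined, the sign of the net angular acceleration early in the trajectory determines whether the simulated walker passes the obstacle on the left or the right, which is the route-selection idea tested in Experiment 3.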
Tasks such as steering, braking, and intercepting moving objects constitute a class of behaviors, known as visually guided actions, that are typically carried out under continuous control on the basis of visual information. Several decades of research on visually guided action have resulted in an inventory of control laws that describe, for each task, how information about the sufficiency of one's current state is used to make ongoing adjustments. Although a considerable amount of important research has been generated within this framework, it fails to capture several aspects of these tasks that are essential for successful performance. The purpose of this paper is to provide an overview of the existing framework, discuss its limitations, and introduce a new framework that emphasizes the necessity of calibration and perceptual learning. Within the proposed framework, successful human performance on these tasks is a matter of learning to detect and calibrate optical information about the boundaries that separate possible from impossible actions. This resolves a long-standing incompatibility between theories of visually guided action and the concept of an affordance. The implications of adopting this framework for the design of experiments and models of visually guided action are discussed.
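As a toy illustration of such an action boundary (our example, not the paper's): stopping in front of an object is possible only while the deceleration required to stop remains below the maximum the brake or body can produce.

```python
def required_deceleration(speed, distance):
    # Constant deceleration that brings speed to zero exactly at the object
    return speed ** 2 / (2.0 * distance)

def stopping_is_possible(speed, distance, max_decel):
    """Affordance check: the boundary between possible and impossible
    stopping lies where required deceleration equals max_decel."""
    return required_deceleration(speed, distance) <= max_decel
```

On this view, calibration amounts to learning where that boundary lies in units of one's own action capabilities.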
This study explored visual control strategies for braking to avoid collision by manipulating information about the speed of self-motion. Participants watched computer-generated displays and used a brake to stop at an object in the path of motion. Global optic flow rate and edge rate were manipulated by adjusting eye-height and ground-texture size. Stopping distance, initiation of braking, and the magnitude of brake adjustments were influenced by both optical variables, but global optic flow rate had a stronger effect. A new model is introduced according to which braking is controlled by keeping the perceived ideal deceleration, based in part on global optic flow rate, within a "safe" region between 0 and the maximum deceleration of the brake.
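A rough sketch of how the eye-height manipulation could bias braking under such a model (all names and values here are ours): if perceived speed is recovered from global optic flow rate under an assumed eye height, then raising the simulated eye height inflates perceived speed and, with it, the perceived ideal deceleration.

```python
def perceived_speed(gofr, assumed_eye_height=1.6):
    # Global optic flow rate is speed in eye heights per second, so an
    # observer assuming a fixed eye height recovers speed as gofr * height
    return gofr * assumed_eye_height

def brake_adjustment(speed_estimate, distance, max_decel, gain=1.0):
    """Safe-region sketch: perceived ideal deceleration should stay within
    [0, max_decel]; press harder as it nears the maximum, ease off as it
    falls toward zero. The midpoint target and gain are illustrative."""
    ideal = speed_estimate ** 2 / (2.0 * distance)
    return gain * (ideal / max_decel - 0.5)   # positive -> brake harder
```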
How do people walk to a moving target, and what visual information do they use to do so? Under a pursuit strategy, one would head toward the target's current position, whereas under an interception strategy, one would lead the target, ideally by maintaining a constant target-heading angle (or constant bearing angle). Either strategy may be guided by the egocentric direction of the target, local optic flow from the target, or global optic flow from the background. In four experiments, participants walked through a virtual environment to reach a target moving at a constant velocity. Regardless of the initial conditions, they walked ahead of the target for most of a trial at a fairly constant speed, consistent with an interception strategy (experiment 1). This behavior can be explained by trying to maintain a constant target-heading angle while trying to walk a straight path, with transient steering dynamics. In contrast to previous results for stationary targets, manipulation of the local optic flow from the target (experiment 2) and the global optic flow of the background (experiments 3 and 4) failed to influence interception behavior. Relative motion between the target and the background did affect the path slightly, presumably owing to its effect on perceived target motion. We conclude that humans use an interception strategy based on the egocentric direction of a moving target.
From matters of survival like chasing prey to games like football, the problem of intercepting a target that moves in the horizontal plane is ubiquitous in human and animal locomotion. Recent data show that walking humans turn onto a straight path that leads a moving target by a constant angle, with some transients in the target-heading angle. We test four control strategies against the human data: (1) pursuit, or nulling the target-heading angle β; (2) computing the required interception angle β̂; (3) constant target-heading angle, or nulling change in the target-heading angle (β̇ = 0); and (4) constant bearing, or nulling change in the bearing direction of the target (ψ̇ = 0), which is equivalent to nulling change in the target-heading angle while factoring out the turning rate (β̇ = −φ̇). We show that human interception behavior is best accounted for by the constant bearing model, and that it is robust to noise in its input and parameters. The models are also evaluated for their performance with stationary targets, and implications for the informational basis and neural substrate of steering control are considered. The results extend a dynamical systems model of human locomotor behavior from static to changing environments.
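For intuition, here is a first-order sketch of the constant bearing strategy, essentially proportional navigation with turn rate proportional to ψ̇. The paper's actual model is a second-order dynamical system with fitted parameters, so the gain, time step, and stop radius below are illustrative.

```python
import numpy as np

def simulate_constant_bearing(agent_pos, heading, speed,
                              target_pos, target_vel,
                              gain=4.0, dt=0.02, t_max=20.0):
    """Turn rate proportional to the rate of change of the target's bearing
    direction psi, driving psi_dot toward zero (constant bearing).
    Gain, time step, and stop radius are illustrative placeholders."""
    pos = np.array(agent_pos, dtype=float)
    tgt = np.array(target_pos, dtype=float)
    vel = np.array(target_vel, dtype=float)
    psi_prev = np.arctan2(tgt[1] - pos[1], tgt[0] - pos[0])
    for _ in range(int(t_max / dt)):
        tgt += vel * dt
        psi = np.arctan2(tgt[1] - pos[1], tgt[0] - pos[0])
        # wrapped angular difference keeps psi_dot well-behaved near +/- pi
        psi_dot = np.arctan2(np.sin(psi - psi_prev), np.cos(psi - psi_prev)) / dt
        heading += gain * psi_dot * dt        # null change in bearing direction
        pos += speed * dt * np.array([np.cos(heading), np.sin(heading)])
        psi_prev = psi
        if np.linalg.norm(tgt - pos) < 0.2:   # close enough: interception
            return True, pos
    return False, pos
```

Because turning stops when ψ̇ = 0, the agent settles onto a straight path that leads the target by a constant angle, matching the observed human behavior.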
The aim of this study was to investigate the role of visual information in the control of walking over complex terrain with irregularly spaced obstacles. We developed an experimental paradigm to measure how far along the future path people need to see in order to maintain forward progress and avoid stepping on obstacles. Participants walked over an array of randomly distributed virtual obstacles that were projected onto the floor by an LCD projector while their movements were tracked by a full-body motion capture system. Walking behavior in a full-vision control condition was compared with behavior in a number of other visibility conditions in which obstacles did not appear until they fell within a window of visibility centered on the moving observer. Collisions with obstacles were more frequent and, for some participants, walking speed was slower when the visibility window constrained vision to less than two step lengths ahead. When window sizes were greater than two step lengths, the frequency of collisions and walking speed were weakly affected or unaffected. We conclude that visual information from at least two step lengths ahead is needed to guide foot placement when walking over complex terrain. When placed in the context of recent research on the biomechanics of walking, the findings suggest that two step lengths of visual information may be needed because it allows walkers to exploit the passive mechanical forces inherent to bipedal locomotion, thereby avoiding obstacles while maximizing energetic efficiency.
To walk efficiently over complex terrain, humans must use vision to tailor their gait to the upcoming ground surface without interfering with the exploitation of passive mechanical forces. We propose that walkers use visual information to initialize the mechanical state of the body before the beginning of each step so the resulting ballistic trajectory of the walker's center-of-mass will facilitate stepping on target footholds. Using a precision stepping task and synchronizing target visibility to the gait cycle, we empirically validated two predictions derived from this strategy: (1) Walkers must have information about upcoming footholds during the second half of the preceding step, and (2) foot placement is guided by information about the position of the target foothold relative to the preceding base of support. We conclude that active and passive modes of control work synergistically to allow walkers to negotiate complex terrain with efficiency, stability, and precision.

Humans and other animals are remarkable in their ability to take advantage of what is freely available in the environment to the benefit of efficiency, stability, and coordination in movement. This opportunism can take on at least two forms, both of which are evident in human locomotion over complex terrain: (i) harnessing external forces to minimize the need for self-generated (i.e., muscular) forces (1), and (ii) taking advantage of passive stability to simplify the control of a complex movement (e.g., ref. 2). In the ensuing section, we explain how walkers exploit external forces and passive stability while walking over flat, obstacle-free terrain.* We then generalize this account to walking over irregular surfaces by explaining how walkers can adapt gait to terrain variations while still reaping the benefits of the available mechanical forces and inherent stability. This account leads to hypotheses about how and when walkers use visual information about the upcoming terrain and where that information is found. We derive several predictions from these hypotheses and then put them to the test in three experiments.

Passive Control in Human Walking

The basic movement pattern of the human gait cycle arises primarily from the phasic activation of flexor and extensor muscle groups by spinal-level central pattern generators, regulated by sensory signals from lower limb proprioceptors and cutaneous feedback from the plantar surface of the foot. This low-level neuromuscular circuitry serves to maintain the rhythmic physical oscillations that define locomotor behavior (see ref. 3 for review). This section will provide an overview of the basic biomechanics of the bipedal gait cycle to show how these inherent physical dynamics contribute to the passive stability and energetic efficiency of human locomotion.

During the single support phase of the bipedal gait cycle, when only one foot is in contact with the ground, a walker shares the physical dynamics of an inverted pendulum. The body's center of mass (COM) acts as the bob of the pendulum and is support...
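The inverted-pendulum analogy introduced above can be sketched in a few lines: during single support, the COM trajectory is ballistic, fully determined by its state at the start of the step, which is the crux of the proposed visual control strategy. Leg length, time step, duration, and initial conditions below are illustrative.

```python
import numpy as np

def single_support_com(theta0, theta_dot0, leg_length=0.9,
                       g=9.81, dt=0.001, duration=0.5):
    """Inverted-pendulum sketch of single support: the COM pivots over the
    stance foot with angle theta from vertical; gravity accelerates the
    fall, and no control acts until the next foot contact."""
    theta, theta_dot = theta0, theta_dot0
    angles = []
    for _ in range(int(duration / dt)):
        theta_dot += (g / leg_length) * np.sin(theta) * dt  # gravitational torque
        theta += theta_dot * dt
        angles.append(theta)
    return np.array(angles)
```

On the paper's account, vision serves to set the initial state (here, theta0 and theta_dot0) at the end of the preceding step, after which the step unfolds passively.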