How is human locomotion visually controlled? Fifty years ago, it was proposed that we steer to a goal using optic flow, the pattern of motion at the eye that specifies the direction of locomotion. However, we might also simply walk in the perceived direction of a goal. These two hypotheses normally predict the same behavior, but we tested them in an immersive virtual environment by displacing the optic flow from the direction of walking, violating the laws of optics. We found that people walked in the visual direction of a lone target, but increasingly relied on optic flow as it was added to the display. The visual control law for steering toward a goal is a linear combination of these two variables weighted by the magnitude of flow, thereby allowing humans to have robust locomotor control under varying environmental conditions.
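The weighted linear combination described above can be sketched in code. This is a minimal illustration under stated assumptions, not the authors' published model: the weighting function, the parameter `k`, and the function name are all hypothetical, chosen only to show how a flow-magnitude-dependent weight would blend the two error signals.

```python
def steering_error(target_direction, flow_heading, flow_magnitude, k=1.0):
    """Sketch of a steering law combining two strategies (all names hypothetical).

    target_direction: egocentric direction of the goal (radians, 0 = straight ahead)
    flow_heading: heading specified by optic flow (radians, 0 = straight ahead)
    flow_magnitude: scalar strength of available flow (0 = no flow, large = rich scene)
    k: assumed constant setting how quickly the flow weight saturates

    Returns the angular error the walker would turn to null.
    """
    # Weight on optic flow grows from 0 toward 1 as flow magnitude increases.
    w = flow_magnitude / (flow_magnitude + k)
    # Egocentric-direction strategy: walk in the perceived direction of the goal.
    egocentric_error = target_direction
    # Optic-flow strategy: align the flow-specified heading with the goal.
    flow_error = target_direction - flow_heading
    return (1 - w) * egocentric_error + w * flow_error
```

With no flow (`flow_magnitude = 0`) the walker follows the visual direction of the target alone; as flow is added, the flow-specified heading increasingly dominates, matching the behavior reported in the abstract.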
Two strategies can guide walking to a stationary goal: (1) the optic-flow strategy, in which one aligns the direction of locomotion or "heading" specified by optic flow with the visual goal; and (2) the egocentric-direction strategy, in which one aligns the locomotor axis with the perceived egocentric direction of the goal and in which error results in optical target drift. Optic flow appears to dominate steering control in richly structured visual environments, whereas the egocentric-direction strategy prevails in visually sparse environments. Here we determine whether optic flow also drives visuo-locomotor adaptation in visually structured environments. Participants adapted to walking with the virtual-heading direction displaced 10 degrees to the right of the actual walking direction and were then tested with a normally aligned heading. Two environments, one visually structured and one visually sparse, were crossed in adaptation and test phases. Adaptation of the walking path was more rapid and complete in the structured environment; the negative aftereffect on path deviation was twice that in the sparse environment, indicating that optic flow contributes over and above target drift alone. Optic flow thus plays a central role in both online control of walking and adaptation of the visuo-locomotor mapping.
When observers face directly toward the incline of a hill, they greatly overestimate its slant in conscious awareness, yet motoric estimates are much more accurate. The present study examined whether similar results would be found when observers were allowed to view the side of a hill. Observers viewed the cross-sections of hills in real (Experiment 1) and virtual (Experiment 2) environments and estimated the inclines verbally, by adjusting the cross-section of a disk, and by adjusting a board with their unseen hand to match the inclines. We found that the results for cross-section viewing replicated those found when observers directly face the incline. Even though the angles of hills are directly evident when viewed from the side, perceived slant is still grossly overestimated.
Research in spatial cognition and object recognition has indicated that an “active” (i.e., moving) observer shows an advantage in the ability to recognize an object from a different viewpoint relative to a “passive” (i.e., stationary) observer who is presented with the same image geometry. Some researchers have attributed this advantage to the contributions made by the body senses (e.g., vestibular and proprioceptive) to an observer's ability to spatially update their location in the environment. However, a potential source of information that may be exploited by the visual system is the differential effect that the interaction between illumination and viewpoint has for observer and object movement. The retinal projections will differ for the two types of motion because of this interaction with illumination sources. If an object rotates relative to a fixed light source, approximately the same area of the visual field will be illuminated, whereas when an observer moves about an object (relative to a fixed light source), the area illuminated in the visual field will change with orientation. The overall pattern of shading and shadows will thus differ between the two conditions despite equivalent physical geometries across rotations. To address this, we investigated whether the interaction between illumination and viewpoint change provides sufficient visual information to confer on a stationary observer the same advantage seen for an active observer. Preliminary findings suggest that the local feature information contained in images is sufficient to show an active-observer advantage in object recognition.