We have studied the responses of MT neurons to moving gratings, spatially modulated in luminance and chromaticity. Most MT neurons responded briskly and with high contrast sensitivity to targets whose luminance was modulated, with or without added chromatic contrast. When luminance modulation was removed and only chromatic stimulation was used, the responses of all MT neurons were attenuated. Most were completely unresponsive to stimulation with targets whose modulation fell within a "null" plane in color space; these null planes varied from neuron to neuron, but all lay close to the plane of constant photometric luminance. For about a third of the neurons, there was no color direction in which responses were completely abolished; almost all of these neurons had a definite minimum response for chromatic modulation near the isoluminant plane. MT neurons that responded to isoluminant targets did so inconsistently and with poor contrast sensitivity, so that only intensely modulated targets were effective. Whereas the best thresholds of MT neurons for luminance targets are close to behavioral contrast threshold, the thresholds for isoluminant targets lie considerably above behavioral contrast threshold. Therefore, although some MT neurons do give responses to isoluminant targets, they are unlikely to be the source of the chromatic motion signals revealed behaviorally.
Experienced drivers performed simple steering maneuvers in the absence of continuous visual input. Experiments conducted in a driving simulator assessed drivers' performance of lane corrections during brief visual occlusion and examined the visual cues that guide steering. The dependence of steering behavior on heading, speed, and lateral position at the start of the maneuver was measured. Drivers adjusted steering amplitude with heading and performed the maneuver more rapidly at higher speeds. These dependencies were unaffected by a 1.5-s visual occlusion at the start of the maneuver. Longer occlusions resulted in severe performance degradation. Two steering control models were developed to account for these findings. In the first, steering actions were coupled to perceptual variables such as lateral position and heading. In the second, drivers pursued a virtual target in the scene. Both models yielded behavior that closely matched that of human drivers.
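The first model class can be illustrated with a minimal point-mass sketch, assuming a control law in which yaw rate is a weighted sum of lateral-position and heading error; the gains, speed, and function name here are hypothetical and not taken from the paper:

```python
import math

def simulate_lane_correction(y0, psi0, speed, k_y=0.5, k_psi=2.0,
                             dt=0.01, duration=10.0):
    """Point-mass vehicle whose yaw rate is a weighted sum of
    lateral-position error y and heading error psi.
    k_y and k_psi are illustrative gains, not fitted values."""
    y, psi = y0, psi0
    for _ in range(int(duration / dt)):
        yaw_rate = -(k_y * y + k_psi * psi)   # couple steering to percepts
        psi += yaw_rate * dt
        y += speed * math.sin(psi) * dt       # lateral drift at current heading
    return y, psi

# A 1-m lane correction at 20 m/s settles back toward the lane center.
y_end, psi_end = simulate_lane_correction(y0=1.0, psi0=0.0, speed=20.0)
```

In this toy version the effective position-feedback gain scales with speed, so corrections complete more quickly at higher speeds, qualitatively consistent with the speed dependence reported above.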
A theory is developed in which the optic flow of an observer translating over the ground plane determines the metric of egocentric visual space. Optic flow is used to operationalize the equality of spatial intervals, much as physicists use time to compare spatial intervals. The theory predicts empirical matching ratios for collinear, sagittal intervals to within 2% of the mean (eight subjects; standard error also 2%). The theory predicts that frontoparallel intervals on the ground plane will match sagittal intervals if their relative image motions match, which was found empirically. It is suggested that the optic flow metric serves to calibrate static depth cues such as angular elevation and binocular parallax.
We explore a method of representing solid shape that is useful for visual recognition. We assume that complex shapes are constructed from convex, compact shapes and that construction involves three operations: solid union (to form humps), solid subtraction (to leave dents), and smoothing (to remove discontinuities). The boundaries between shapes joined through these operations are contours of extrema of a principal curvature. Complex objects can be decomposed along these boundaries into convex shapes, the so-called parts. We suggest that this decomposition into parts forms the basis for a shape memory. We show that the part boundaries of an object can be inferred from its occluding contours, at least up to a number of ambiguities.
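The decomposition rule has a simple planar analogue on occluding contours: part boundaries fall at negative minima of contour curvature. A short numpy sketch on a hypothetical "peanut" contour (the shape, sampling density, and variable names are illustrative, not the paper's):

```python
import numpy as np

theta = np.linspace(0, 2 * np.pi, 2000, endpoint=False)
r = 1 + 0.3 * np.cos(2 * theta)     # a "peanut": two lobes and a waist
rp = -0.6 * np.sin(2 * theta)       # dr/dtheta
rpp = -1.2 * np.cos(2 * theta)      # d^2r/dtheta^2

# Signed curvature of a polar curve r(theta).
kappa = (r**2 + 2 * rp**2 - r * rpp) / (r**2 + rp**2) ** 1.5

# Part boundaries: local negative minima of curvature.
is_min = (kappa < np.roll(kappa, 1)) & (kappa < np.roll(kappa, -1)) & (kappa < 0)
boundaries = theta[is_min]
```

The two minima land at the concave waist (theta = pi/2 and 3*pi/2), cutting the contour into its two convex lobes, i.e. the "parts" of the abstract's decomposition.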
Previous research has demonstrated the importance of attention in the development of survey (or configural) knowledge of the environment. However, it is unclear whether attention is also necessary for the development of route knowledge. Our aim in this paper is to evaluate the specific role of attention in the acquisition of both route and survey knowledge during simulated navigation. In four experiments, subjects in a condition of full or divided attention were presented a series of routes through a simulated environment. Spatial learning was assessed by having subjects discriminate between old and novel route segments in a subsequent recognition test. Novel route segments consisted of old landmarks from the same route but in the wrong order or with wrong turns, or consisted of old landmarks from two separate routes, or contained old landmarks in new spatial relations to one another. Divided attention disrupted memory for sequences of landmarks (experiment 1), landmark-turn associations (experiment 2), landmark-route associations (experiment 3), and spatial relations between landmarks (experiment 4). Together, these results show that even relatively simple components of spatial learning during navigation require attention. Furthermore, divided attention disrupts the acquisition of spatial knowledge at both the route level and the survey level.
The 'direct-perception' model of heading perception posits that heading is computed directly from optic flow without an intervening structural representation of environmental layout. Here, I give an example in which such a representation is seen to play a role in the interpretation of optic flow. Manipulating the outline of concave objects to give an erroneous percept of convexity caused the perceived direction of heading during a simulated approach to change as well. Thus, the representation of environmental structure provides the context for using and interpreting image motion.
Observers moving through a three-dimensional environment can use optic flow to determine their direction of heading. Existing heading algorithms use Cartesian flow fields in which image flow is the displacement of image features over time. I explore a heading algorithm that uses affine flow instead. The affine flow at an image feature is its displacement modulo an affine transformation defined by its neighborhood. Modeling the observer's instantaneous motion by a translation and a rotation about an axis through its eye, affine flow is tangent to the translational field lines on the observer's viewing sphere. These field lines form a radial flow field whose center is the direction of heading. The affine flow heading algorithm has characteristics that can be used to determine whether the human visual system relies on it. The algorithm is immune to observer rotation and arbitrary affine transformations of its input images; its accuracy improves with increasing variation in environmental depth; and it cannot recover heading in an environment consisting of a single plane because affine flow vanishes in this case. Translational field lines can also be approximated through differential Cartesian motion. I compare the performance of heading algorithms based on affine flow, differential Cartesian flow, and least-squares search.
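The geometric premise the abstract builds on — that translational flow on the viewing sphere forms a radial field centered on the heading — can be checked with a short numpy sketch. This demonstrates only that field-line geometry under pure translation, not the affine-flow algorithm itself; the scene, heading recovery via a least-squares null space, and all names are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# True translation direction (heading), unit vector.
T = np.array([0.2, 0.1, 1.0])
T /= np.linalg.norm(T)

# Random environmental points ahead of the eye.
P = rng.uniform(-1, 1, size=(200, 3))
P[:, 2] += 3.0

# Project onto the viewing sphere; depth scales the flow.
depth = np.linalg.norm(P, axis=1, keepdims=True)
p = P / depth

# Spherical motion field under pure translation:
# v_i is the component of -T perpendicular to p_i, divided by depth.
v = (-T + (p @ T)[:, None] * p) / depth

# Each v_i lies in the plane spanned by p_i and T, so n_i = p_i x v_i
# is perpendicular to T: heading is the null direction of sum n_i n_i^T.
n = np.cross(p, v)
M = n.T @ n
_, V = np.linalg.eigh(M)
heading = V[:, 0]                    # eigenvector of smallest eigenvalue
heading *= np.sign(heading @ T)      # resolve the sign ambiguity
```

Because every flow vector is radial with respect to the heading, the null-space fit recovers T exactly (up to numerical precision) regardless of the depths, which is the radial-field property exploited above.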