An optimal linear system for integrating visual cues to 3D surface geometry weights cues in inverse proportion to their uncertainty. The problem of integrating texture and stereo information for judgments of planar surface slant provides a strong test of optimality in human perception. Since the accuracy of slant from texture judgments changes by an order of magnitude from low to high slants, optimality predicts corresponding changes in cue weights as a function of surface slant. Furthermore, since humans show significant individual differences in their abilities to use both texture and stereo information for judgments of 3D surface geometry, the problem admits the stronger test that individual differences in subjects' thresholds for discriminating slant from the individual cues should predict individual differences in cue weights. We tested both predictions by measuring slant discrimination thresholds and stereo/texture cue weights as a function of surface slant for multiple subjects. The results bear out both predictions of optimality, with the exception of an apparent slight under-weighting of texture information. This may be accounted for by factors specific to the stimuli used to isolate stereo information in the experiments. Taken together, the results are consistent with the hypothesis that humans optimally combine the two cues to surface slant, with cue weights proportional to the subjective reliability of the cues.
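To make the weighting scheme concrete, here is a minimal sketch of reliability-weighted linear cue combination, in which each cue's weight is inversely proportional to its variance. The slant values, variances, and function name below are illustrative assumptions, not data from the study.

```python
import numpy as np

def combine_cues(estimates, variances):
    """Reliability-weighted linear cue combination.

    Each cue's weight is inversely proportional to its variance,
    so more reliable cues contribute more to the combined estimate.
    """
    estimates = np.asarray(estimates, dtype=float)
    variances = np.asarray(variances, dtype=float)
    weights = (1.0 / variances) / np.sum(1.0 / variances)
    combined = np.sum(weights * estimates)
    combined_variance = 1.0 / np.sum(1.0 / variances)  # no larger than the best single cue
    return combined, combined_variance, weights

# Hypothetical example: texture is less reliable than stereo at this slant.
slant_texture, var_texture = 28.0, 16.0   # degrees, degrees^2
slant_stereo,  var_stereo  = 33.0, 4.0
slant_hat, var_hat, w = combine_cues([slant_texture, slant_stereo],
                                     [var_texture, var_stereo])
print(f"combined slant = {slant_hat:.1f} deg, variance = {var_hat:.1f}, weights = {w}")
```

Because the weights are set by the cue variances, the combined estimate has lower variance than either cue alone, and measured discrimination thresholds for each cue should predict the weights a subject gives them.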
Amblyopia is a neuro-developmental disorder of the visual cortex that arises from abnormal visual experience early in life. Amblyopia is clinically important because it is a major cause of vision loss in infants and young children. Amblyopia is also of basic interest because it reflects the neural impairment that occurs when normal visual development is disrupted, and it provides an ideal model for understanding when and how brain plasticity may be harnessed for recovery of function. Over the past two decades there has been a rekindling of interest in developing more effective methods for treating amblyopia and in extending treatment beyond the critical period, as exemplified by new clinical trials and new basic research studies. The focus of this review is on stereopsis and its potential for recovery. Impaired stereoscopic depth perception is the most common deficit associated with amblyopia under ordinary (binocular) viewing conditions (Webber & Wood, 2005). Our review of the extant literature suggests that this impairment may have a substantial impact on visuomotor tasks, making it difficult for children to play sports and for older adults to locomote safely. Furthermore, impaired stereopsis may also limit career options for amblyopes. Finally, stereopsis is more severely impaired in strabismic than in anisometropic amblyopia. Our review of the various approaches to treating amblyopia (patching, perceptual learning, videogames) suggests that there are several promising new approaches to recovering stereopsis in both anisometropic and strabismic amblyopes. However, recovery of stereoacuity may require more active treatment in strabismic than in anisometropic amblyopia: individuals with strabismic amblyopia have a very low probability of improvement with monocular training, fare better with dichoptic training than with monocular training, and fare better still with direct stereo training.
How visual feedback contributes to the on-line control of fast reaching movements is still a matter of considerable debate. Whether feedback is used continuously throughout movements or only in the "slow" end-phases of movements remains an open question. In order to resolve this question, we applied a perturbation technique to measure the influence of visual feedback from the hand at different times during reaching movements. Subjects reached to touch targets in a virtual 3D space, with visual feedback provided by a small virtual sphere that moved with a subject's fingertip. Small random perturbations were applied to the position of the virtual fingertip at two different points in the movement, either at 25% or 50% of the total movement extent. Despite the fact that subjects were unaware of the perturbations, their hand trajectories showed smooth and accurate corrections. Detectable responses were observed within an average of 160 ms after perturbations, and as early as 60% of the distance to the target. Response latencies were constant across different perturbation times and movement speed conditions, suggesting that a fixed sensori-motor delay is the limiting factor. The results provide direct evidence that the human brain uses visual feedback from the hand in a continuous fashion to guide fast reaching movements throughout their extent.
We investigated what visual information contributes to on-line control of hand movements. It has been suggested that motion information predominates early in movements but that position information predominates for endpoint control. We used a perturbation method to determine the relative contributions of motion and position information to feedback control. Subjects reached to touch targets in a dynamic virtual environment in which subjects viewed a moving virtual fingertip in place of their own finger. On some trials, we perturbed the virtual fingertip while it moved behind an occluder. Subjects responded to perturbations that selectively altered either motion or position information, indicating that both contribute to feedback control. Responses to perturbations that changed both motion and position information were consistent with superimposed motion-based and position-based control. Results were well fit by a control model that optimally integrates noisy, delayed sensory feedback about both motion and position to estimate hand state.
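As a rough sketch of the kind of estimator described above (under assumed dynamics, noise levels, and a simplified treatment of the feedback delay, not the authors' fitted model), a Kalman filter can combine an internal prediction of hand state with noisy, delayed visual measurements of both position and velocity:

```python
import numpy as np

dt = 0.01            # simulation time step (s)
delay_steps = 12     # ~120 ms visual feedback delay (assumed value)

# Constant-velocity state model: state = [position, velocity]
A = np.array([[1.0, dt],
              [0.0, 1.0]])
Q = np.diag([1e-5, 1e-3])     # process noise (assumed)
H = np.eye(2)                 # visual feedback "measures" both position and velocity
R = np.diag([2e-4, 2e-2])     # measurement noise (assumed)

def kalman_step(x, P, z):
    """One predict/update cycle of a linear Kalman filter."""
    x_pred = A @ x
    P_pred = A @ P @ A.T + Q
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(2) - K @ H) @ P_pred
    return x_new, P_new

# Toy reach: the hand speeds up and then slows down toward a target.
T = 100
true_pos = np.cumsum(np.sin(np.linspace(0, np.pi, T)) * dt)
true_vel = np.gradient(true_pos, dt)

x, P = np.zeros(2), np.eye(2)
estimates = []
for t in range(T):
    t_obs = max(0, t - delay_steps)   # naive delay: feedback reflects an older hand state
    z = np.array([true_pos[t_obs] + np.random.randn() * 0.014,
                  true_vel[t_obs] + np.random.randn() * 0.14])
    x, P = kalman_step(x, P, z)
    estimates.append(x.copy())
```

A full treatment of the delay would re-predict the current state forward from the delayed measurement (or augment the state vector); the naive lag here is only meant to show where delayed position and motion feedback enter the estimate.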
Limits in visual working memory (VWM) strongly constrain human performance across many tasks. However, the nature of these limits is not well understood. In this paper we develop an ideal observer analysis of human visual working memory, by deriving the expected behavior of an optimally performing, but limited-capacity memory system. This analysis is framed around rate–distortion theory, a branch of information theory that provides optimal bounds on the accuracy of information transmission subject to a fixed information capacity. The result of the ideal observer analysis is a theoretical framework that provides a task-independent and quantitative definition of visual memory capacity and yields novel predictions regarding human performance. These predictions are subsequently evaluated and confirmed in two empirical studies. Further, the framework is general enough to allow the specification and testing of alternative models of visual memory (for example, how capacity is distributed across multiple items). We demonstrate that a simple model developed on the basis of the ideal observer analysis—one which allows variability in the number of stored memory representations, but does not assume the presence of a fixed item limit—provides an excellent account of the empirical data, and further offers a principled re-interpretation of existing models of visual working memory.
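For intuition about what a fixed information capacity implies, the snippet below evaluates the textbook rate-distortion bound for a Gaussian source under squared-error distortion, D(R) = sigma^2 * 2^(-2R), and applies it under an assumed equal split of a bit budget across memorized items. This is an illustrative calculation, not the specific model fitted in the paper.

```python
import numpy as np

def gaussian_distortion_bound(sigma2, rate_bits):
    """Minimum achievable mean squared error for a Gaussian source with
    variance sigma2 when encoded with rate_bits bits per sample."""
    return sigma2 * 2.0 ** (-2.0 * rate_bits)

# Illustrative allocation: a fixed bit budget shared equally across items,
# so the best achievable per-item fidelity falls as set size grows.
total_bits = 6.0
stimulus_variance = 1.0
for set_size in (1, 2, 4, 8):
    per_item_bits = total_bits / set_size
    d = gaussian_distortion_bound(stimulus_variance, per_item_bits)
    print(f"set size {set_size}: {per_item_bits:.2f} bits/item -> minimum MSE {d:.3f}")
```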
Despite growing evidence for perceptual interactions between motion and position, no unifying framework exists to account for these two key features of our visual experience. We show that percepts of both object position and motion derive from a common object-tracking system: a system that optimally integrates sensory signals with a realistic model of motion dynamics, effectively inferring their generative causes. The object-tracking model provides an excellent fit to both position and motion judgments in simple stimuli. With no changes in model parameters, the same model also accounts for subjects' novel illusory percepts in more complex moving stimuli. The resulting framework is characterized by a strong bidirectional coupling between position and motion estimates and provides a rational, unifying account of a number of motion and position phenomena that are currently thought to arise from independent mechanisms. This includes motion-induced shifts in perceived position, perceptual slow-speed biases, slowing of motions shown in the visual periphery, and the well-known curveball illusion. These results reveal that motion perception cannot be isolated from position signals. Even in the simplest displays with no changes in object position, our perception is driven by the output of an object-tracking system that rationally infers different generative causes of motion signals. Taken together, we show that object tracking plays a fundamental role in perception of visual motion and position.

Keywords: visual motion perception | Kalman filter | object tracking | causal inference | motion-induced position shift

Research into the basic mechanisms of visual motion processing has largely focused on simple cases in which motion signals are fixed in space and constant over time (e.g., moving patterns presented in static windows) (1). Although this approach has resulted in considerable advances in our understanding of low-level motion mechanisms, it leaves open the question of how the brain integrates changing motion and position signals; when objects move in the world, motion generally co-occurs with changes in object position. The process of generating coherent estimates of object motion and position is known in the engineering and computer vision literature as "tracking" (e.g., as used by the Global Positioning System) (2). Conceptualizing motion and position perception in the broader context of object tracking suggests an alternative conceptual framework, one that we show provides a unifying account for a number of perceptual phenomena. An optimal tracking system would integrate incoming position and motion signals with predictive information from the recent past to continuously update perceptual estimates of both an object's position and its motion. Were such a system to underlie perception, position and motion should be perceptually coupled in predictable ways. Signatures of such a coupling appear in a number of known phenomena. On one hand, local motion signals can predictively bias position percepts (3-8). On the other hand, we can pe...
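As a toy illustration of the bidirectional coupling described above (a sketch in the spirit of a tracking model, not the authors' implementation), a constant-velocity Kalman tracker fed position samples that stay at zero together with a consistent rightward motion signal settles on a position estimate shifted in the direction of motion. All parameter values below are arbitrary assumptions.

```python
import numpy as np

dt = 0.02
A = np.array([[1.0, dt],
              [0.0, 1.0]])        # constant-velocity dynamics
H = np.eye(2)                     # observe position and velocity (motion signal)
Q = np.diag([1e-4, 1e-3])         # process noise (assumed)
R = np.diag([0.05, 0.01])         # sensory noise: position noisier than motion (assumed)

x = np.zeros(2)                   # estimated [position, velocity]
P = np.eye(2)
rng = np.random.default_rng(0)

for _ in range(400):
    # Stimulus: the position signal stays at 0, but the motion signal says "moving right".
    z = np.array([0.0 + rng.normal(0, 0.05),
                  1.0 + rng.normal(0, 0.01)])
    # Kalman predict/update
    x_pred = A @ x
    P_pred = A @ P @ A.T + Q
    K = P_pred @ H.T @ np.linalg.inv(H @ P_pred @ H.T + R)
    x = x_pred + K @ (z - H @ x_pred)
    P = (np.eye(2) - K @ H) @ P_pred

print(f"estimated position offset: {x[0]:.3f} (shifted in the motion direction)")
print(f"estimated velocity: {x[1]:.3f} (pulled below the signalled 1.0 by the "
      f"conflicting position evidence)")
```

The same update equations couple the two estimates in both directions: the motion signal drags the position estimate forward, and the static position evidence pulls the velocity estimate down, which is the qualitative pattern behind motion-induced position shifts such as the curveball illusion.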
The human visual system has the remarkable capacity to perceive accurately the lightness, or relative reflectance, of surfaces, even though much of the variation in image luminance may be caused by other scene attributes, such as shape and illumination. Most physiological and computational models of lightness perception invoke early sensory mechanisms that act independently of, or before, the estimation of other scene attributes. In contrast to the modularity of lightness perception assumed in these models, experiments have shown that supposedly 'higher-order' percepts of planar surface attributes, such as orientation, depth and transparency, can influence perceived lightness. Here we show that perceived surface curvature can also affect perceived lightness. Whereas the earlier experiments indicate that interpreting luminance edges as changes in surface attributes other than reflectance can influence lightness, our results suggest that the interpretation of smooth variations in luminance can also affect lightness percepts.