To interpret complex and ambiguous input, the human visual system uses prior knowledge or assumptions about the world. We show that the 'light-from-above' prior, used to extract shape information from shading, is modified in response to active experience with the scene. The resulting adaptation is not specific to the learned scene but generalizes to a different task, demonstrating that priors are continually adapted by interactive experience with the environment.
Recovering 3D scenes from 2D images is an under-constrained task; optimal estimation depends upon knowledge of the underlying scene statistics. Here we introduce the Southampton-York Natural Scenes dataset (SYNS: https://syns.soton.ac.uk), which provides comprehensive scene statistics useful for understanding biological vision and for improving machine vision systems. In order to capture the diversity of environments that humans encounter, scenes were surveyed at random locations within 25 indoor and outdoor categories. Each survey includes (i) spherical LiDAR range data, (ii) high-dynamic-range spherical imagery, and (iii) a panorama of stereo image pairs. We envisage many uses for the dataset and present one example: an analysis of surface attitude statistics, conditioned on scene category and viewing elevation. Surface normals were estimated using a novel adaptive scale selection algorithm. Across categories, surface attitude below the horizon is dominated by the ground plane (0° tilt). Near the horizon, probability density is elevated at 90°/270° tilt due to vertical surfaces (trees, walls). Above the horizon, probability density is elevated near 0° slant due to overhead structure such as ceilings and leaf canopies. These structural regularities represent potentially useful prior assumptions for human and machine observers, and may predict human biases in perceived surface attitude.
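The slant/tilt parameterization of surface attitude used in this abstract can be illustrated with a minimal sketch. The function below is not the paper's method; it simply converts a unit surface normal to slant (angle between the normal and the direction toward the viewer) and tilt (orientation of the normal's projection in the image plane), assuming a camera looking down the negative z-axis with x rightward and y upward, and measuring tilt clockwise from vertical so that an upward-pointing ground-plane normal has 0° tilt, matching the convention in the abstract.

```python
import numpy as np

def slant_tilt(normal, view_dir=(0.0, 0.0, -1.0)):
    """Convert a unit surface normal to (slant, tilt) in degrees.

    Slant: angle between the normal and the direction toward the viewer,
    so a fronto-parallel surface has 0 deg slant.
    Tilt: orientation of the normal's image-plane projection, measured
    from the upward (y) axis, so a ground plane (normal straight up)
    has 0 deg tilt and left/right-facing walls have 90/270 deg.
    """
    n = np.asarray(normal, dtype=float)
    n = n / np.linalg.norm(n)
    toward_viewer = -np.asarray(view_dir, dtype=float)
    toward_viewer = toward_viewer / np.linalg.norm(toward_viewer)
    slant = np.degrees(np.arccos(np.clip(n @ toward_viewer, -1.0, 1.0)))
    # Tilt from the (x, y) image-plane components of the normal.
    tilt = np.degrees(np.arctan2(n[0], n[1])) % 360.0
    return slant, tilt

# Ground plane: normal points straight up -> 90 deg slant, 0 deg tilt.
print(slant_tilt((0.0, 1.0, 0.0)))
# Left-facing wall: normal points along -x -> 90 deg slant, 270 deg tilt.
print(slant_tilt((-1.0, 0.0, 0.0)))
```

Conventions for the tilt zero direction and sign vary across studies; this sketch fixes them only so the abstract's 0° and 90°/270° examples come out as stated.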
When a visual stimulus is suppressed from awareness, processing of the suppressed image is necessarily reduced. Although adaptation to simple image properties such as orientation still occurs, adaptation to high-level properties such as face identity is eliminated. Here we show that emotional facial expression continues to be processed even under complete suppression, as indexed by substantial facial expression aftereffects.
Background: Conduct Disorder (CD) is associated with impairments in facial emotion recognition. However, it is unclear whether such deficits are explained by a failure to attend to emotionally informative face regions, such as the eyes, or by problems in the appraisal of emotional cues. Method: Male and female adolescents with CD and varying levels of callous-unemotional (CU) traits, and age- and sex-matched typically developing (TD) controls (aged 13-18), categorised the emotion of dynamic and morphed static faces. Concurrent eye tracking was used to relate categorisation performance to participants' allocation of overt attention. Results: Adolescents with CD were worse at emotion recognition than TD controls, with deficits observed across static and dynamic expressions. In addition, the CD group fixated less on the eyes when viewing fearful and sad expressions. Across all participants, higher levels of CU traits were associated with fear recognition deficits and reduced attention to the eyes of surprised faces. Within the CD group, however, higher CU traits were associated with better fear recognition. Overall, males were worse at recognising emotions than females and displayed a reduced tendency to fixate the eyes. Discussion: Adolescents with CD, and particularly males, showed deficits in emotion recognition and fixated less on the eyes when viewing emotional faces. Individual differences in fixation behaviour predicted modest variations in emotion categorisation. However, group differences in fixation were small and did not explain the much larger group differences in categorisation performance, suggesting that CD-related deficits in emotion recognition were not mediated by abnormal fixation patterns.
Motion-induced blindness is a striking phenomenon in which salient static visual stimuli "disappear" for seconds at a time in the presence of specific moving patterns. Here we investigate whether the phenomenon is due to surface completion of the moving patterns. Stereo-depth information was added to the motion stimulus to create a depth ordering between the static and moving components of the display. Depth ordering consistent with perceptual occlusion of the static elements increased motion-induced blindness, whereas placing the moving components behind the static elements decreased the static dot disappearance. In a second experiment, we used an induced-surface stimulus configuration to drive motion-induced blindness, providing further evidence of the importance of surface completion and surface interactions during visual processing.