Perceptual learning changes the way the human visual system processes stimulus information. Previous studies have shown that the human brain's weightings of visual information (the perceptual template) become better matched to the optimal weightings. However, the dynamics of the template changes are not well understood. We used the classification image method to investigate whether visual field or stimulus properties govern the dynamics of the changes in the perceptual template. A line orientation discrimination task where highly informative parts were placed in the peripheral visual field was used to test three hypotheses: (1) The template changes are determined by the visual field structure, initially covering stimulus parts closer to the fovea and expanding toward the periphery with learning; (2) the template changes are object centered, starting from the center and expanding toward edges; and (3) the template changes are determined by stimulus information, starting from the most informative parts and expanding to less informative parts. Results show that, initially, the perceptual template contained only the more peripheral, highly informative parts. Learning expanded the template to include less informative parts, resulting in an increase in sampling efficiency. A second experiment interleaved parts with high and low signal-to-noise ratios and showed that template reweighting through learning was restricted to stimulus elements that are spatially contiguous to parts with initial high template weights. The results suggest that the informativeness of features determines how the perceptual template changes with learning. Further, the template expansion is constrained by spatial proximity.
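The classification image logic described above can be illustrated with a short simulation: trial-to-trial pixel noise is correlated with the observer's binary responses, and the resulting difference image estimates the perceptual template. The sketch below is hypothetical (a simulated linear observer with made-up parameters), not the study's analysis code.

```python
import numpy as np

rng = np.random.default_rng(0)

n_trials, n_pixels = 20000, 16
template = np.zeros(n_pixels)
template[10:] = 1.0               # observer weights only the "informative" part
template /= np.linalg.norm(template)

signal = 0.5 * np.ones(n_pixels)                    # constant signal each trial
noise = rng.normal(0.0, 1.0, (n_trials, n_pixels))  # external pixel noise

# Simulated observer: template match plus internal noise -> yes/no response
decision = (signal + noise) @ template + rng.normal(0.0, 0.5, n_trials)
resp = decision > np.median(decision)

# Classification image: mean noise on "yes" trials minus "no" trials.
# It recovers (a scaled copy of) the template the observer actually used.
ci = noise[resp].mean(axis=0) - noise[~resp].mean(axis=0)
```

With enough trials, the classification image is proportional to the observer's template, which is why changes in its spatial extent across learning sessions can reveal template expansion.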
The extraction of a global orientation structure presumably relies on a different neural mechanism from the analysis of its local features. We investigated spatial integration within these two mechanisms using stimulus patterns composed of dot pairs (dipoles). The stimuli targeting local feature detection contained no global configuration, only randomly oriented dipoles of a fixed length (the distance between the dots in a pair). For the detection of a global orientation structure, local dipole orientations were arranged in a concentric Glass pattern. Thresholds as a function of stimulus area were determined by measuring the minimum proportion of dipoles among random-dot noise (signal-to-noise ratio) required for the detection of dipoles (features), as well as for the detection of an orientation structure. Thresholds for feature detection were significantly higher than those for the detection of the global structure, regardless of stimulus size. Spatial integration, however, did not differ between the two tasks: the exponents of the power functions fitted to data for six observers were -0.48 +/- 0.07 for random dipole orientations and -0.62 +/- 0.1 for Glass patterns.
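The power-function exponents reported above (threshold T as a function of area A, T = a * A^b) can be estimated by linear regression in log-log coordinates, since log T = log a + b * log A. A minimal sketch with synthetic, noiseless data (the values are illustrative, not the study's measurements):

```python
import numpy as np

area = np.array([1.0, 2.0, 4.0, 8.0, 16.0])   # stimulus area (arbitrary units)
thresh = 0.8 * area ** -0.5                    # synthetic thresholds, T = a * A**b

# Fit a straight line to (log A, log T); the slope is the exponent b
b, log_a = np.polyfit(np.log(area), np.log(thresh), 1)
a = np.exp(log_a)
# b is the quantity compared between tasks (e.g., -0.48 vs. -0.62 above)
```

A more negative exponent indicates stronger spatial integration: thresholds fall faster as stimulus area grows.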
Radial frequency (RF) patterns are circular contours whose radius is modulated sinusoidally. These stimuli can represent a wide range of common shapes and have been popular for investigating human shape perception. Theories postulate a multistage model in which a global contour integration mechanism integrates the outputs of local curvature-sensitive mechanisms. However, studies of how the local contour features are processed have mostly relied on indirect experimental manipulations. Here, we use a novel way to explore contour integration, using the classification image (psychophysical reverse-correlation) method. RF contours were composed of local elements, and random "radial position noise" offsets were added to element radial positions. We analyzed the relationship between trial-to-trial variations in radial noise and the corresponding behavioral responses, resulting in a "shape template": an estimate of the contour parts and features that the visual system uses in the shape discrimination task. Integration of contour features in a global template-like model explains our data well, and we show that observer performance for different shapes can be predicted from the classification images. Classification images show that observers used most of the contour parts. Further analysis suggests linear rather than probability summation of contour parts. Convex forms were detected better than concave forms, and the corresponding templates had better sampling efficiency. With sufficient presentation time, we found no systematic preferences for a certain class of contour features (such as corners or sides). However, when the presentation time was very short, the visual system might prefer corner features over side features.
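An RF contour as described above is a base circle whose radius is modulated sinusoidally as a function of polar angle: r(theta) = r0 * (1 + A * sin(f * theta + phi)). A minimal sketch of how the element positions of such a contour could be generated (the parameter values are hypothetical):

```python
import numpy as np

r0, amp, freq, phase = 1.0, 0.1, 4, 0.0   # base radius, modulation depth, RF number
n_elements = 64                            # local contour elements

theta = np.linspace(0.0, 2.0 * np.pi, n_elements, endpoint=False)
radius = r0 * (1.0 + amp * np.sin(freq * theta + phase))

# Element positions on the contour; "radial position noise" would be
# implemented as small random offsets added to `radius` on each trial.
x = radius * np.cos(theta)
y = radius * np.sin(theta)
```

With freq = 4 and amp = 0.1 this produces a rounded-square shape; higher RF numbers and amplitudes give shapes with more, and sharper, corners.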