There is considerable interest in how humans estimate the number of objects in a scene in the context of an extensive literature on how we estimate the density (i.e., spacing) of objects. Here, we show that our sense of number and our sense of density are intertwined. Presented with two patches, observers found it more difficult to spot differences in either density or numerosity when those patches were mismatched in overall size, and their errors were consistent with larger patches appearing both denser and more numerous. We propose that density is estimated using the relative response of mechanisms tuned to low and high spatial frequencies (SFs), because energy at high SFs is largely determined by the number of objects, whereas low SF energy depends more on the area occupied by elements. This measure is biased by overall stimulus size in the same way as human observers, and by estimating number using the same measure scaled by relative stimulus size, we can explain all of our results. This model is a simple, biologically plausible common metric for perceptual number and density.

psychophysics | vision | texture | numerical cognition
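A minimal sketch of this kind of metric can be written as a ratio of Fourier energy in high- versus low-spatial-frequency bands. The stimulus parameters, band limits, and dot rendering below are illustrative choices, not the authors' implementation:

```python
import numpy as np

def make_dot_patch(n_dots, size=128, seed=0):
    """Random binary dot pattern (toy stimulus; 2x2-pixel 'dots')."""
    rng = np.random.default_rng(seed)
    img = np.zeros((size, size))
    xs = rng.integers(2, size - 2, n_dots)
    ys = rng.integers(2, size - 2, n_dots)
    for x, y in zip(xs, ys):
        img[y - 1:y + 1, x - 1:x + 1] = 1.0
    return img

def band_energy(img, f_lo, f_hi):
    """Fourier energy in an annular spatial-frequency band (cycles/image)."""
    spec = np.fft.fftshift(np.fft.fft2(img - img.mean()))
    half = img.shape[0] // 2
    fy, fx = np.indices(spec.shape)
    radius = np.hypot(fx - half, fy - half)
    band = (radius >= f_lo) & (radius < f_hi)
    return float((np.abs(spec[band]) ** 2).sum())

def density_metric(img):
    """Relative high- vs low-SF energy: a crude proxy for perceived density.
    The band limits (16-48 vs 1-8 cycles/image) are hypothetical."""
    return band_energy(img, 16, 48) / band_energy(img, 1, 8)
```

Energy in the high band grows with the number of elements, which is the property the abstract's proposal relies on; scaling such a density measure by relative stimulus area then yields the corresponding number estimate.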
In the social sciences it is common practice to test specific, theoretically motivated research hypotheses using formal statistical procedures. Typically, students in these disciplines are trained in such methods from an early stage in their academic careers. In psychophysical research, by contrast, where parameter estimates are generally obtained using a maximum-likelihood (ML) criterion and the data do not lend themselves well to the least-squares methods taught in introductory courses, formal model comparisons are relatively uncommon. Rather, it is common practice to estimate the parameters of interest (e.g., detection thresholds) and their standard errors individually across the different experimental conditions and to 'eyeball' whether the observed pattern of parameter estimates supports or contradicts some proposed hypothesis. We believe this is due, at least in part, to a lack of training in the proper methodology as well as a lack of available software for performing such model comparisons when ML estimators are used. Here we introduce Palamedes, a relatively new toolbox of Matlab routines that allows users to perform sophisticated model comparisons. In Palamedes, we implement the model-comparison approach to hypothesis testing. This approach gives researchers considerable flexibility in targeting specific research hypotheses. We discuss in a non-technical manner how this method can be used to perform statistical model comparisons when ML estimators are used. With Palamedes we hope to make sophisticated statistical model comparisons available to researchers who may not have the statistical background or the programming skills to perform such comparisons from scratch. Note that while Palamedes is specifically geared toward psychophysical data, the core ideas behind the model-comparison approach discussed here generalize to any field in which statistical hypotheses are tested.
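The model-comparison logic described here can be illustrated independently of the Palamedes API. The sketch below fits a "fuller" model (a separate detection threshold per condition) and a "lesser" model (one shared threshold) by maximum likelihood and compares them with a likelihood-ratio statistic. The logistic psychometric function, fixed slope, and grid-search fitting are simplifying assumptions made to keep the example self-contained:

```python
import numpy as np

def loglik(a, x, k, n, slope=1.0):
    """Binomial log-likelihood of a 2AFC logistic psychometric function
    with threshold a and a fixed slope (an illustrative simplification)."""
    p = 0.5 + 0.5 / (1.0 + np.exp(-(x - a) / slope))
    p = np.clip(p, 1e-9, 1 - 1e-9)
    return np.sum(k * np.log(p) + (n - k) * np.log(1 - p))

def lr_test(x, k1, n1, k2, n2, grid=np.linspace(-5.0, 5.0, 1001)):
    """Likelihood-ratio statistic: fuller model (threshold free per
    condition) vs lesser model (one shared threshold)."""
    ll1 = np.array([loglik(a, x, k1, n1) for a in grid])
    ll2 = np.array([loglik(a, x, k2, n2) for a in grid])
    ll_fuller = ll1.max() + ll2.max()   # thresholds fitted independently
    ll_lesser = (ll1 + ll2).max()       # one threshold for both conditions
    return 2.0 * (ll_fuller - ll_lesser)  # ~ chi-square(1) under the lesser model
```

In a full analysis all parameters would be fitted freely and the statistic referred to a chi-square or bootstrap distribution; the grid search here merely keeps the illustration short.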
We present an algorithm for separating the shading and reflectance images of photographed natural scenes. The algorithm exploits the constraint that in natural scenes chromatic and luminance variations that are co-aligned mainly arise from changes in surface reflectance, whereas near-pure luminance variations mainly arise from shading and shadows. The novel aspect of the algorithm is the initial separation of the image into luminance and chromatic image planes that correspond to the luminance, red-green, and blue-yellow channels of the primate visual system. The red-green and blue-yellow image planes are analysed to provide a map of the changes in surface reflectance, which is then used to separate the reflectance from shading changes in both the luminance and chromatic image planes. The final reflectance image is obtained by reconstructing the chromatic and luminance-reflectance-change maps, while the shading image is obtained by subtracting the reconstructed luminance-reflectance image from the original luminance image. A number of image examples are included to illustrate the successes and limitations of the algorithm.
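The core constraint — co-aligned chromatic and luminance variation signals a reflectance change, near-pure luminance variation signals shading — can be sketched as a toy edge classifier. The opponent planes are crude approximations and the thresholds are ad hoc; this is not the published algorithm:

```python
import numpy as np

def classify_variations(rgb, lum_thresh=0.05, chroma_thresh=0.05):
    """Label horizontal luminance edges as 'reflectance' (co-aligned chromatic
    + luminance change) or 'shading' (near-pure luminance change).
    Toy sketch: crude opponent planes, hypothetical thresholds."""
    lum = rgb.mean(axis=2)                               # crude luminance plane
    rg = rgb[..., 0] - rgb[..., 1]                       # crude red-green plane
    by = rgb[..., 2] - 0.5 * (rgb[..., 0] + rgb[..., 1]) # crude blue-yellow plane
    dlum = np.abs(np.diff(lum, axis=1))
    dchr = np.abs(np.diff(rg, axis=1)) + np.abs(np.diff(by, axis=1))
    labels = np.full(dlum.shape, 'none', dtype=object)
    edge = dlum > lum_thresh
    labels[edge & (dchr > chroma_thresh)] = 'reflectance'
    labels[edge & (dchr <= chroma_thresh)] = 'shading'
    return labels
```

The actual algorithm works on maps of reflectance change and reconstructs full shading and reflectance images from them; this sketch only demonstrates the classification constraint itself.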
The past quarter century has witnessed considerable advances in our understanding of Lightness (perceived reflectance), Brightness (perceived luminance) and perceived Transparency (LBT). This review poses eight major conceptual questions that have engaged researchers during this period, and considers to what extent they have been answered. The questions concern 1. the relationship between lightness, brightness and perceived non-uniform illumination, 2. the brain site for lightness and brightness perception, 3. the effects of context on lightness and brightness, 4. the relationship between brightness and contrast for simple patch-background stimuli, 5. brightness "filling-in", 6. lightness anchoring, 7. the conditions for perceptual transparency, and 8. the perceptual representation of transparency. The discussion of progress on major conceptual questions inevitably requires an evaluation of which approaches to LBT are likely and which are unlikely to bear fruit in the long term, and which issues remain unresolved. It is concluded that the most promising developments in LBT are (a) models of brightness coding based on multi-scale filtering combined with contrast normalization, (b) the idea that the visual system decomposes the image into "layers" of reflectance, illumination and transparency, (c) that an understanding of image statistics is important to an understanding of lightness errors, (d) Whittle's logW metric for contrast-brightness, (e) the idea that "filling-in" is mediated by low spatial frequencies rather than neural spreading, and (f) that there exist multiple cues for identifying non-uniform illumination and transparency. Unresolved issues include how relative lightness values are anchored to produce absolute lightness values, and the perceptual representation of transparency. Bridging the gap between multi-scale filtering and layer decomposition approaches to LBT is a major task for future research.
The appearance of an object or surface depends strongly on the light from other objects and surfaces in view. This review focuses on color in complex scenes, which have regions of different colors in view simultaneously and/or successively, as in natural viewing. Two fundamental properties distinguish the chromatic representation evoked by a complex scene from the representation for an isolated patch of light. First, in complex scenes, the color of an object is not fully determined by the light from that object reaching the eye. Second, the chromatic representation of a complex scene contributes not only to hue, saturation, and brightness, but also to other percepts such as shape, texture, and object segmentation. These two properties are cornerstones of this review, which examines color perception with context that varies over space or time, including color constancy, and chromatic contributions to such percepts as orientation, contour, depth, and motion.
The color vision of Old World primates and humans uses two cone-opponent systems; one differences the outputs of L and M cones forming a red-green (RG) system, and the other differences S cones with a combination of L and M cones forming a blue-yellow (BY) system. In this paper, we show that in human vision these two systems have a differential distribution across the visual field. Cone contrast sensitivities for sine-wave grating stimuli (smoothly enveloped in space and time) were measured for the two color systems (RG & BY) and the achromatic (Ach) system at a range of eccentricities in the nasal field (0-25 deg). We spatially scaled our stimuli independently for each system (RG, BY, & Ach) in order to activate that system optimally at each eccentricity. This controlled for any differential variations in spatial scale with eccentricity and provided a comparison between the three systems under equivalent conditions. We find that while red-green cone opponency has a steep decline away from the fovea, the loss in blue-yellow cone opponency is more gradual, showing a similar loss to that found for achromatic vision. Thus only red-green opponency, and not blue-yellow opponency, can be considered a foveal specialization of primate vision with an overrepresentation at the fovea. In addition, statistical calculations of the level of chance cone opponency in the two systems indicate that selective S cone connections to postreceptoral neurons are essential to maintain peripheral blue-yellow sensitivity in human vision. In the red-green system, an assumption of cone selectivity is not required to account for losses in peripheral sensitivity. Overall, these results provide behavioral evidence for functionally distinct neuro-architectural origins of the two color systems in human vision, supporting recent physiological results in primates.
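As a concrete sketch, the two cone-opponent systems can be written as simple weighted differences of cone contrasts. The unit weights below are illustrative simplifications; in practice, mechanism weights are estimated empirically:

```python
import numpy as np

def cone_contrasts(stim_lms, bg_lms):
    """Cone contrasts: (stimulus - background) / background, per cone class."""
    stim = np.asarray(stim_lms, dtype=float)
    bg = np.asarray(bg_lms, dtype=float)
    return (stim - bg) / bg

def opponent_responses(lc, mc, sc):
    """Toy opponent-mechanism responses from L, M, S cone contrasts.
    RG differences L and M; BY differences S against L+M; Ach pools all
    three. Unit weights are hypothetical."""
    rg = lc - mc
    by = sc - 0.5 * (lc + mc)
    ach = (lc + mc + sc) / 3.0
    return rg, by, ach
```

Note that a purely achromatic modulation (equal contrast in all three cone classes) silences both opponent mechanisms, which is why cone-contrast space is convenient for isolating the RG, BY, and Ach systems at each eccentricity.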
On the basis of the early primate neurophysiological recordings, it was thought that the different cone types of the primate retina project selectively into the centre and surround of the receptive fields of cone opponent neurons, and more recently this view has been reasserted on the basis of physiological results. An alternative idea is that these projections are in fact unselective for cone type, and, therefore, cone opponency arises from chance variations in the proportions of different cone types in centre and surround. The issue is presently controversial with anatomical or physiological support for both hypotheses. Our results show that there is a selective loss of red-green colour sensitivity across the human visual field. Furthermore, this selective loss occurs under low temporal frequency conditions (0.5 Hz) which were selected to favour the mediation of both colour and luminance detection by a common P cell pathway and to exclude an M cell contribution to detection threshold. We show that "hit and miss" post-receptoral cone projections will produce a decline in cone opponency that is sufficient to account for this selective loss, thus providing psychophysical evidence consistent with this hypothesis.
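The "hit and miss" idea — chance opponency arising from unselective cone wiring — can be simulated directly: when centre and surround each pool cones drawn at random from the L/M mosaic, the expected L-M imbalance shrinks as receptive fields pool more cones, as they do with increasing eccentricity. A minimal Monte Carlo sketch, with all parameter values hypothetical:

```python
import numpy as np

def chance_opponency(n_cones, n_cells=5000, p_l=0.5, seed=0):
    """Mean |L-M imbalance| between centre and surround when each pools
    n_cones cones unselectively (each cone is L with probability p_l)."""
    rng = np.random.default_rng(seed)
    centre = rng.binomial(n_cones, p_l, n_cells) / n_cones
    surround = rng.binomial(n_cones, p_l, n_cells) / n_cones
    return float(np.abs(centre - surround).mean())
```

Because the imbalance scales roughly as 1/sqrt(n_cones), random wiring alone predicts a decline in red-green opponency as receptive fields enlarge toward the periphery, which is the behaviour the abstract argues can account for the selective peripheral loss.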