Characterizing humans' ability to discriminate changes in illumination provides information about the visual system's representation of the distal stimulus. We have previously shown that humans are able to discriminate illumination changes and that sensitivity to such changes depends on their chromatic direction. Probing illumination discrimination further would be facilitated by the use of computer-graphics simulations, which would, in practice, enable a wider range of stimulus manipulations. There is no a priori guarantee, however, that results obtained with simulated scenes generalize to real illuminated scenes. To investigate this question, we measured illumination discrimination in real and simulated scenes that were well-matched in mean chromaticity and scene geometry. Illumination discrimination thresholds were essentially identical for the two stimulus types. As in our previous work, these thresholds varied with illumination change direction. We exploited the flexibility offered by the use of graphics simulations to investigate whether the differences across direction are preserved when the surfaces in the scene are varied. We show that varying the scene's surface ensemble in a manner that also changes mean scene chromaticity modulates the relative sensitivity to illumination changes along different chromatic directions. Thus, any characterization of sensitivity to changes in illumination must be defined relative to the set of surfaces in the scene.
We measured discrimination thresholds for illumination changes along different chromatic directions, starting from chromatically biased reference illuminations. Participants viewed a Mondrian-papered scene illuminated by LED lamps. The scene was first illuminated by a reference illumination, followed by two comparisons. One comparison matched the reference (the target); the other (the test) differed from the reference, nominally either bluer, yellower, redder, or greener. The participant's task was to correctly select the target. A staircase procedure was used to estimate the discrimination threshold for an illumination change along each chromatic axis. Nine participants completed the task for five reference illumination conditions (neutral, blue, yellow, red, and green). We find that relative discrimination thresholds for the different chromatic directions of illumination change vary with the reference illumination. For the neutral reference, there is a trend for thresholds to be highest in the bluer illumination-change direction, replicating our previous reports of a “blue bias” for neutral reference illuminations. For the four chromatic references (blue, yellow, red, and green), a change in illumination toward the neutral reference is less well discriminated than changes in the other directions: a “neutral bias.” The results have implications for color constancy: in considering the stability of surface appearance under changes in illumination, both the starting chromaticity of the illumination and the direction of change must be considered, as well as the chromatic characteristics of the surface reflectance ensemble. They also suggest it will be worthwhile to explore whether and how the human visual system has internalized the statistics of natural illumination changes.
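The staircase logic described above can be sketched in code. This is an illustrative toy, not the authors' actual procedure: the 1-up-2-down rule, step size, stopping criterion, and the simulated observer's psychometric function are all assumptions made for the sketch, with a simulated observer standing in for a participant.

```python
import random
import statistics

def simulate_trial(intensity, true_threshold):
    # Hypothetical observer for the 2AFC target-selection task:
    # guess rate 0.5, probability correct rises with change intensity
    # (a simple Weibull-like rule chosen for illustration only).
    p_correct = 0.5 + 0.5 * (1 - 2 ** (-(intensity / true_threshold) ** 2))
    return random.random() < p_correct

def staircase(true_threshold, start=10.0, step=1.0, n_reversals=12):
    """1-up-2-down staircase: two consecutive correct responses decrease
    the stimulus change, one error increases it; this rule converges
    near the 70.7%-correct point of the psychometric function."""
    intensity = start
    correct_run = 0
    last_direction = 0
    reversals = []
    while len(reversals) < n_reversals:
        if simulate_trial(intensity, true_threshold):
            correct_run += 1
            if correct_run < 2:
                continue  # need two in a row before stepping down
            correct_run = 0
            direction = -1
            intensity = max(step, intensity - step)
        else:
            correct_run = 0
            direction = +1
            intensity += step
        if last_direction and direction != last_direction:
            reversals.append(intensity)  # direction flip = reversal
        last_direction = direction
    # Threshold estimate: mean intensity at the last reversals
    return statistics.mean(reversals[-8:])
```

In practice the estimate hovers slightly below the simulated observer's nominal threshold, since a 1-up-2-down rule tracks the 70.7%-correct point rather than the 75% point of this toy psychometric function.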
We rely on color to select objects as the targets of our actions (e.g., the freshest fish, the ripest fruit). To be useful for selection, color must provide accurate guidance about object identity across changes in illumination. Although the visual system partially stabilizes object color appearance across illumination changes, how such color constancy supports object selection is not understood. To study how constancy operates in real-life tasks, we developed a novel paradigm in which subjects selected which of two test objects presented under a test illumination appeared closer in color to a target object presented under a standard illumination. From subjects' choices, we inferred a selection-based match for the target via a variant of maximum likelihood difference scaling, and used it to quantify constancy. Selection-based constancy was good when measured using naturalistic stimuli, but was dramatically reduced when the stimuli were simplified, indicating that a naturalistic stimulus context is critical for good constancy. Overall, our results suggest that color supports accurate object selection across illumination changes when both stimuli and task match how color is used in real life. We compared our selection-based constancy results with data obtained using a classic asymmetric matching task and found that the adjustment-based matches predicted selection well for our stimuli and instructions, indicating that the appearance literature provides useful guidance for the emerging study of constancy in natural tasks.
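The selection-based inference can be illustrated with a deliberately simplified toy model; this is not the paper's maximum likelihood difference scaling variant, and every name and parameter value below is an assumption for illustration. The idea it shares with the paper: simulate an observer who, on each trial, picks whichever of two tests is closer (in noisy perceptual distance) to an internal match point, then recover that match point from the choices by maximum likelihood.

```python
import math
import random

def phi(z):
    """Standard normal CDF."""
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

def simulate_choice(x1, x2, m_true=0.6, sigma=0.15):
    # Toy observer: picks the test whose noisy distance to the
    # internal match point m_true is smaller; returns 1 if x1 chosen.
    d1 = abs(x1 - m_true) + random.gauss(0, sigma)
    d2 = abs(x2 - m_true) + random.gauss(0, sigma)
    return 1 if d1 < d2 else 0

def fit_match(trials):
    """Grid-search maximum-likelihood estimate of the match point m.
    Choice model: P(choose x1) = Phi((|x2 - m| - |x1 - m|) / (sigma * sqrt(2)))."""
    best_m, best_ll = None, -float("inf")
    for m in (i / 100 for i in range(101)):
        for sigma in (0.05, 0.1, 0.15, 0.2, 0.3):
            ll = 0.0
            for x1, x2, chose1 in trials:
                p = phi((abs(x2 - m) - abs(x1 - m)) / (sigma * math.sqrt(2)))
                p = min(max(p, 1e-6), 1 - 1e-6)  # guard the log
                ll += math.log(p if chose1 else 1 - p)
            if ll > best_ll:
                best_m, best_ll = m, ll
    return best_m

random.seed(1)
pairs = [(random.random(), random.random()) for _ in range(400)]
trials = [(a, b, simulate_choice(a, b)) for a, b in pairs]
m_hat = fit_match(trials)  # should land near the simulated m_true of 0.6
```

The real task works in a multidimensional color space and uses a difference-scaling model of choice, but the same logic applies: the inferred match is the point that best explains the full pattern of binary selections.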
Natural viewing challenges the visual system with images that have a dynamic range of light intensity (luminance) that can approach 1,000,000:1 and that often exceeds 10,000:1 [1, 2]. The range of perceived surface reflectance (lightness), however, can be well approximated by the Munsell matte neutral scale (N 2.0/ to N 9.5/), consisting of surfaces whose reflectance varies by about 30:1. Thus, the visual system must map a large range of surface luminance onto a much smaller range of surface lightness. We measured this mapping in images with a dynamic range close to that of natural images. We studied simple images that lacked the segmentation cues that would indicate multiple regions of illumination. We found a remarkable degree of compression: at a single image location, a stimulus luminance range of 5905:1 can be mapped onto an extended lightness scale whose reflectance range is 100:1. We characterized how the luminance-to-lightness mapping changes with stimulus context. Our data rule out theories that predict perceived lightness from luminance ratios or Weber contrast. A mechanistic model connects our data to theories of adaptation and provides insight about how the underlying visual response varies with context.
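The degree of compression reported above can be summarized as a log-log gain. The power law below is purely illustrative arithmetic, not the paper's mechanistic model of adaptation:

```python
import math

# If a single power law mapped the full 5905:1 luminance range onto the
# 100:1 extended reflectance (lightness) range, its exponent -- the
# log-log gain -- would be:
gamma = math.log(100) / math.log(5905)  # about 0.53

def lightness_range(luminance_range, gamma=gamma):
    """Reflectance-range ratio spanned by a given luminance-range ratio
    under this hypothetical fixed power-law compression."""
    return luminance_range ** gamma
```

Because the measured mapping shifts with stimulus context, a single fixed exponent is only a summary of the overall compression, not a model of the data.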
Implicit features of paintings are properties that are imposed by the observer (e.g., how pleasant, interesting, or tense a painting appears), whereas explicit features refer to properties that can be directly perceived (form, color, depth, etc.). The aim of Experiments 1 and 2 was to investigate the underlying structure of the implicit and explicit features of paintings using factor analysis of elementary judgments. In preliminary studies, representative sets of paintings and of elementary implicit and explicit dimensions (in the form of bipolar scales) were selected. Four implicit factors were extracted: Regularity, Relaxation, Hedonic Tone, and Arousal. Four explicit factors were extracted: Form, Color, Space, and Complexity. Significant correlations between implicit and explicit factors were obtained for Regularity-Form, Regularity-Space, Hedonic Tone-Form, and Arousal-Complexity. Experiment 3 examined the role of implicit and explicit factors in similarity-dissimilarity ratings. Significant correlations were obtained between the positions of paintings in MDS space and mean judgments of the explicit factors Color, Space, and Complexity and the implicit factor Relaxation, suggesting that similarity ratings of paintings are based primarily on explicit features. The causal relation between explicit and implicit features is discussed.