The effects of design decisions in the development of systems that generate images for human consumption, such as cameras and displays, are often evaluated using real-world images. However, human observers can react differently to complex pictorial stimuli over the course of a lengthy experiment. This study was conducted to develop an understanding of the optimal design of pictorial stimuli for effective and efficient perceptual experiments. The goals were to understand the impact of image content on visual attention and on the consistency of experimental results, and to apply this understanding to develop guidelines for pictorial target design in perceptual image comparison experiments. The efficacy of the proposed guidelines was evaluated. While the fixation consistency results were generally as expected, fixation consistency did not always translate into consistent experimental responses. Along with scene complexity, the image modifications and the difficulty of the image equivalency decisions played a role in the experimental response.
Two uniform patches presented on two displays under identical viewing conditions can appear as the same color to one observer but as mismatched colors to another observer. This phenomenon, called observer metamerism (OM), occurs due to individual differences in color matching functions. To avoid its potentially adverse impacts on display calibration and characterization, it is desirable to have a predictive model of OM. In this work, we report computational results from applying existing metrics to quantify the potential OM between pairs of commercial displays, and we propose an OM metric that is verified through a psychophysical experiment.
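The mechanism behind observer metamerism can be illustrated numerically: two stimuli whose spectra integrate to identical tristimulus values under one observer's color matching functions (CMFs) can integrate to different values under another observer's CMFs. The following is a minimal toy sketch of that effect; the 4-band "spectra" and CMFs are illustrative stand-ins, not real CIE data or display measurements, and the abstract's actual metrics are not reproduced here.

```python
import numpy as np

# Toy CMFs for observer 1 (rows: three sensor channels, columns: 4 spectral bands).
cmf_obs1 = np.array([[1.0, 0.0, 0.0, 0.0],
                     [0.0, 1.0, 0.0, 0.0],
                     [0.0, 0.0, 1.0, 1.0]])
# Observer 2's CMFs differ slightly in long-wavelength sensitivity
# (a stand-in for individual variation in color matching functions).
cmf_obs2 = np.array([[1.0, 0.0, 0.0, 0.0],
                     [0.0, 1.0, 0.0, 0.0],
                     [0.0, 0.0, 1.1, 0.9]])

spd_a = np.array([0.5, 0.5, 0.5, 0.5])            # patch spectrum on display 1
# spd_b differs from spd_a by a "metameric black" for observer 1:
# a spectral difference that observer 1's CMFs integrate to zero.
spd_b = spd_a + 0.3 * np.array([0.0, 0.0, 1.0, -1.0])

def tristimulus(spd, cmf):
    # Tristimulus values as the CMF-weighted sum over spectral bands.
    return cmf @ spd

# Observer 1 sees a perfect match; observer 2 sees a mismatch (OM).
mismatch1 = np.linalg.norm(tristimulus(spd_a, cmf_obs1) - tristimulus(spd_b, cmf_obs1))
mismatch2 = np.linalg.norm(tristimulus(spd_a, cmf_obs2) - tristimulus(spd_b, cmf_obs2))
print(mismatch1, mismatch2)  # 0.0 for observer 1, ~0.06 for observer 2
```

A predictive OM metric, as described in the abstract, would summarize such cross-observer mismatches over realistic display primaries and a population of observer CMFs rather than a single perturbed pair.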
The color rendition ad hoc team of INCITS W1.1 is working to address issues related to color and tone reproduction for printed output and its perceptual impact on color image quality. The scope of the work includes accuracy of specified colors with an emphasis on memory colors, color gamut, and the effective use of tone levels, including issues related to contouring. The team has identified three sub-attributes of color rendition: 1) color quantization, defined as the ability to merge colors where needed; 2) color scale, defined as the ability to distinguish color where needed; and 3) color fidelity, defined as a balance of colorimetric accuracy, in cases where a reference exists, and pleasing overall color appearance. Visual definitions and descriptions of how these sub-attributes are perceived have been developed. The team is presently working to define measurement methods for the sub-attributes, with the focus in 2004 being on color fidelity. This presentation will briefly review the definitions and appearance of the proposed sub-attributes and the progress to date of developing test targets and associated measurement methods to quantify the color quantization sub-attribute. The remainder of the discussion will focus on the recent progress made in developing measurement methods for the color fidelity sub-attribute.
Two practical methods for implementing spectral imaging within the framework of museum studio photography were investigated. Imaging was carried out using a consumer RGB digital camera paired with either 1) colored glass filters and a broadband source or 2) optimized multichannel LED illumination, yielding five or six spectral image bands, respectively. Color targets were used to develop and verify profiles for transforming between the multiband camera signals and final color-managed images. The filter-based and LED-based profiles were assessed quantitatively for color accuracy using color difference statistics, and several paintings were imaged and rendered using the profiles as a visual demonstration of the differences. While both were superior to conventional RGB imaging, the LED-based method outperformed the filter-based method for accurate reproduction of independent data. These results supplement the practicality and cost considerations that are informing the development of accessible spectral imaging strategies for highly color-accurate museum studio photography.
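Color difference statistics of the kind used to assess such profiles are typically summaries (mean, maximum) of per-patch CIELAB differences between measured target values and the values reproduced through a profile. A minimal sketch, using the simple CIE76 ΔE*ab formula and hypothetical patch values (the abstract does not state which ΔE formula or targets were used):

```python
import numpy as np

# Hypothetical CIELAB values (L*, a*, b*) for three color-target patches:
# spectrophotometer measurements vs. values reproduced through a camera profile.
measured = np.array([[52.0,  38.0,  20.0],
                     [61.5, -20.0,  55.0],
                     [30.0,  15.0, -45.0]])
reproduced = np.array([[51.0,  39.5,  19.0],
                       [62.0, -18.0,  54.0],
                       [31.5,  15.5, -44.0]])

# CIE76 color difference: Euclidean distance in CIELAB space, per patch.
dE = np.linalg.norm(measured - reproduced, axis=1)

# Summary statistics of the kind reported when comparing profiles.
stats = {"mean": dE.mean(), "max": dE.max()}
print(stats)
```

In practice a perceptually more uniform formula such as CIEDE2000 is often preferred over CIE76, and the profile with the lower summary statistics on independent (non-training) patches would be judged more color accurate.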