The nature of the inputs to achromatic luminance flicker perception was explored psychophysically by measuring middle- (M-) and long-wavelength-sensitive (L-) cone modulation sensitivities, M- and L-cone phase delays, and spectral sensitivities as a function of temporal frequency. Under intense long-wavelength adaptation, the existence of multiple luminance inputs was revealed by substantial frequency-dependent changes in all three types of measure. Fast (f) and slow (s) M-cone input signals of the same polarity (+sM and +fM) sum at low frequencies but destructively interfere near 16 Hz because of the delay between them. In contrast, fast and slow L-cone input signals of opposite polarity (−sL and +fL) cancel at low frequencies but constructively interfere near 16 Hz. Although these slow, spectrally opponent luminance inputs (+sM and −sL) would usually be characterized as chromatic, and the fast, non-opponent inputs (+fM and +fL) as achromatic, both contribute to flicker-photometric nulls without producing visible colour variation. Although its output produces an achromatic percept, the luminance channel has slow, spectrally opponent inputs in addition to the expected non-opponent ones. Consequently, it is not possible in general to silence this channel with pairs of 'equiluminant' alternating stimuli, since stimuli equated for the non-opponent luminance mechanism (+fM and +fL) may still generate spectrally opponent signals (+sM and −sL).
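The frequency-dependent summation described above can be sketched as the vector sum of two unit-amplitude sinusoids separated by a fixed delay. The ~31 ms delay below is an assumption chosen so that a same-polarity null falls near 16 Hz (half a cycle at that frequency); it is not a value reported in the abstract.

```python
import cmath
import math

def summed_amplitude(freq_hz, delay_s, same_polarity=True):
    """Amplitude of the sum of two unit sinusoids, the second delayed by delay_s.

    With same polarity the pair cancels when the delay equals half a cycle;
    with opposite polarity the pair cancels at low frequency instead.
    """
    phase = 2.0 * math.pi * freq_hz * delay_s
    sign = 1.0 if same_polarity else -1.0
    return abs(1.0 + sign * cmath.exp(-1j * phase))

# assumed delay: half a cycle at 16 Hz, so the null lands near 16 Hz
delay = 1.0 / (2.0 * 16.0)  # ~31 ms

# M cones: same-polarity inputs (+sM, +fM) sum at low f, cancel near 16 Hz.
# L cones: opposite-polarity inputs (-sL, +fL) cancel at low f, peak near 16 Hz.
for f in (1.0, 16.0):
    m = summed_amplitude(f, delay, same_polarity=True)
    l = summed_amplitude(f, delay, same_polarity=False)
    print(f"{f:5.1f} Hz  M-like sum {m:.2f}  L-like sum {l:.2f}")
```

With same-polarity inputs the summed amplitude is near 2 at low frequencies and near 0 at 16 Hz; with opposite-polarity inputs the pattern inverts, mirroring the M- and L-cone behaviour described above.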
A psychophysical experiment was performed to determine the effects of lightness dependency on suprathreshold lightness tolerances. Using a pass/fail method of constant stimuli, lightness tolerance thresholds were measured with achromatic stimuli centered at CIELAB L* = 10, 20, 40, 60, 80, and 90 using 44 observers. In addition to measuring tolerance thresholds for uniform samples, lightness tolerances were measured using stimuli with a simulated texture of thread wound on a card. A texture intermediate between the wound thread and the uniform stimuli was also used. A computer-controlled CRT was used to perform the experiments. Lightness tolerances were found to increase with increasing lightness of the test stimuli. For the uniform stimuli this effect was only evident at the higher lightnesses; for the textured stimuli, the trend was evident throughout the whole lightness range. Texture increased the tolerance thresholds by a factor of almost 2 compared with the uniform stimuli, and the intermediate texture yielded thresholds between those of the uniform and fully textured stimuli. Transforming the results into a plot of threshold vs. intensity produced results that were more uniform across the three conditions, which may indicate that CIELAB is not the best space in which to model these effects.
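Replotting the tolerances as threshold vs. intensity requires inverting the CIELAB lightness function to relative luminance. A minimal sketch using the standard CIE inversion (illustrative only, not the authors' code):

```python
def lstar_to_Y(lstar, Yn=100.0):
    """Invert CIELAB L* to relative luminance Y (standard CIE 1976 inverse)."""
    if lstar > 8.0:
        return Yn * ((lstar + 16.0) / 116.0) ** 3
    return Yn * lstar / 903.3  # linear segment for dark colors (kappa ~ 24389/27)

# the experiment's six achromatic centers
for L in (10, 20, 40, 60, 80, 90):
    print(f"L* = {L:2d}  ->  Y = {lstar_to_Y(L):6.2f}")
```

Because L* compresses luminance roughly as a cube root, equal L* steps at high lightness correspond to much larger luminance steps, which is why the threshold-vs-intensity replot can look more uniform across conditions.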
This research extends the previous RIT‐DuPont research on suprathreshold color‐difference tolerances, in which CIELAB was sampled in a balanced factorial design to quantify global lack of visual uniformity. The current experiments sampled hue specifically. Three complete hue circles at two lightnesses (L* = 40 and 60) and two chroma levels ($C^*_{ab}$ = 20 and 40), plus three of the five CIE recommended colors (red, green, blue), were scaled visually for hue discrimination, resulting in 39 color centers. Forty‐five observers participated in a forced‐choice perceptibility experiment in which the total color differences of 393 sample pairs were compared with a near‐neutral anchor‐pair stimulus of 1.03 $\Delta E^*_{ab}$. A supplemental experiment was performed by 30 additional observers in order to validate four of the 39 color centers. A total of 34,626 visual observations were made under the recently established CIE recommended reference conditions defined for the CIE94 color‐difference equation. The statistical method of logit analysis with a three‐dimensional normit function was used to determine the hue discrimination for each color center. A three‐dimensional analysis was required because of precision limitations of the digital printer used to produce the majority of the colored samples: there was unwanted variance in lightness and chroma in addition to the required variance in hue, and this statistical technique enabled estimates of hue discrimination alone. The three‐dimensional analysis was validated in the supplemental experiment, where automotive coatings produced with a minimum of unwanted variance yielded the same visual tolerances when analyzed using one‐dimensional probit analysis. The results indicated that the hue discrimination suprathresholds of the pooled observers varied with CIELAB hue-angle position. The suprathreshold also increased with the chroma position of a given color center, consistent with previous visual results.
The results were compared with current color‐difference formulas: CMC, BFD, and CIE94. All three formulas had statistically equivalent performance when used to predict the visual data. Given that CIE94 lacks an embedded hue‐angle-dependent function yet performed equivalently, it is clear from these results that the hue‐angle corrections in CMC and BFD do not adequately predict the visual data. Thus, these and other hue‐suprathreshold data can be used to develop a new color‐difference formula with superior performance to current equations. © 1998 John Wiley & Sons, Inc. Col Res Appl, 23, 302–313, 1998
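As a rough illustration of the one-dimensional probit analysis used in the supplemental validation, the sketch below fits a cumulative-normal psychometric function to hypothetical pass/fail proportions by grid search. The data values are invented for illustration; real logit/normit analyses use maximum-likelihood estimation, and the main experiment required a three-dimensional function.

```python
from math import erf, sqrt

def cum_normal(x, mu, sigma):
    """Cumulative normal (normit) psychometric function."""
    return 0.5 * (1.0 + erf((x - mu) / (sigma * sqrt(2.0))))

# hypothetical data: (color difference dE*ab, proportion judged different)
data = [(0.5, 0.10), (1.0, 0.28), (1.5, 0.56), (2.0, 0.80), (2.5, 0.94)]

# crude least-squares grid search over (mu, sigma); mu is the 50% threshold
best = min(
    ((mu / 100.0, s / 100.0) for mu in range(50, 300) for s in range(20, 200)),
    key=lambda p: sum((cum_normal(x, *p) - y) ** 2 for x, y in data),
)
print(f"estimated threshold mu = {best[0]:.2f}, slope sigma = {best[1]:.2f}")
```

The fitted `mu` plays the role of the visual tolerance for that color center; pooling many observers' pass/fail judgments per sample pair is what makes the proportions above estimable.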
LCD televisions have LC response times and hold-type data cycles that contribute to the appearance of blur when objects are in motion on the screen. New algorithms based on studies of the human visual system's sensitivity to motion are being developed to compensate for these artifacts. This paper describes a series of experiments that incorporate eye tracking in the psychophysical determination of spatio-velocity contrast sensitivity, in order to build on the 2D spatio-velocity contrast sensitivity function (CSF) model first described by Kelly and later refined by Daly. We explore whether the velocity of the eye has an additional effect on sensitivity and whether the model can be used to predict sensitivity to more complex stimuli. A total of five experiments were performed in this research. The first four utilized Gabor patterns with three different spatial and temporal frequencies and were used to investigate and/or populate the 2D spatio-velocity CSF. The fifth utilized a disembodied edge and was used to validate the model. All experiments used a two-interval forced-choice (2IFC) method of constant stimuli guided by a QUEST routine to determine thresholds. The results showed that sensitivity to motion was determined by the retinal velocity produced by the Gabor patterns, regardless of the type of motion of the eye. Based on these results, the parameters of the spatio-velocity CSF model were optimized to our experimental conditions.
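Kelly's spatio-velocity CSF, which Daly's model refines with re-tuned coefficients, is commonly quoted in the form below. The constants are those of the widely cited 1979 fit, reproduced for illustration; treat them as assumptions rather than the parameter set optimized in this paper.

```python
import math

def kelly_csf(sf_cpd, v_dps):
    """Kelly-style spatio-velocity contrast sensitivity.

    sf_cpd: spatial frequency in cycles/degree.
    v_dps:  retinal velocity in degrees/second (must be > 0).
    Constants follow the commonly quoted Kelly (1979) fit; Daly's
    refinement replaces them with tunable coefficients.
    """
    k = 6.1 + 7.3 * abs(math.log10(v_dps / 3.0)) ** 3
    return k * v_dps * sf_cpd ** 2 * math.exp(-2.0 * sf_cpd * (v_dps + 2.0) / 45.9)
```

The key property the experiments above exploit is that sensitivity depends on retinal velocity: smooth pursuit that tracks a moving Gabor reduces `v_dps` toward zero and shifts the band-pass peak, regardless of how the stimulus moves on the screen.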
Using a paired-comparison paradigm, various gamut mapping algorithms were evaluated using simple rendered images and artificial gamut boundaries. The test images consisted of simple rendered spheres floating in front of a gray background. Using CIELAB as our device-independent color space, cut-off values for lightness and chroma, based on the statistics of the images, were chosen to reduce the gamuts for the test images. The gamut mapping algorithms consisted of combinations of clipping and mapping the original gamut in linear piecewise segments. Complete color-space compression in RGB and CIELAB was also tested. Each of the colored originals (R, G, B, C, M, Y, and skin) was mapped separately in lightness and chroma. In addition, each algorithm was implemented with saturation (C*/L*) either allowed to vary or held at the same values as in the original image. Pairs of test images with reduced color gamuts were presented to twenty subjects along with the original image. For each pair the subjects chose the test image that better reproduced the original. Rank orders and interval scales of algorithm performance with confidence limits were then derived. Clipping all out-of-gamut colors was the best method for mapping chroma. For lightness mapping at low and high lightness levels, particular gamut mapping algorithms consistently produced images chosen as most like the original. The choice of device-independent color space may also influence which gamut mapping algorithms are best.
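A minimal sketch of the chroma-clipping step that performed best, assuming a hypothetical per-color `c_max` gamut limit (the paper's algorithms also include piecewise-linear mappings not shown here):

```python
import math

def clip_chroma(L, a, b, c_max):
    """Clip an out-of-gamut CIELAB color toward the neutral axis.

    Lightness L* and hue angle are preserved; only chroma C*ab is reduced
    to the (hypothetical) gamut limit c_max for this L* and hue.
    """
    c = math.hypot(a, b)          # C*ab = sqrt(a*^2 + b*^2)
    if c <= c_max:
        return L, a, b            # already in gamut: leave untouched
    scale = c_max / c             # shrink radially, keeping hue constant
    return L, a * scale, b * scale
```

Because only colors outside the boundary are touched, clipping preserves all in-gamut colors exactly, which is one plausible reason it outperformed global compression for chroma in the paired comparisons.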