Abstract: This paper is concerned with the derivation of a progression of shadow-free image representations. First we show that adopting certain assumptions about lights and cameras leads to a 1-d, grey-scale image representation which is illuminant invariant at each image pixel. We show that as a consequence, images represented in this form are shadow-free. We then extend this 1-d representation to an equivalent 2-d, chromaticity representation. We show that in this 2-d representation, it is possible to re-light all the image pixels in the same way, effectively deriving a 2-d image representation which is additionally shadow-free. Finally, we show how to recover a 3-d, full colour shadow-free image representation by first (with the help of the 2-d representation) identifying shadow edges. We then remove shadow edges from the edge-map of the original image by edge in-painting, and we propose a method to re-integrate this thresholded edge map, thus deriving the sought-after 3-d shadow-free image.
We attempt to recover a 2D chromaticity intrinsic variation of a single RGB image which is independent of lighting, without requiring any prior knowledge about the camera. The proposed algorithm learns an invariant direction for projecting from the 2D log-chromaticity color space to a 1D greyscale intrinsic image; along this direction, the entropy of the derived invariant image is minimized. Experiments on a variety of inputs indicate that the method achieves intrinsic 2D chromaticity images which are free of shadows. In addition, we examine the idea of using projection pursuit, instead of entropy minimization, to find the desired direction.
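The entropy-minimization search described above can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation: the paper's details (geometric-mean chromaticities, principled bin-width selection, outlier rejection) are simplified, and the function name and defaults are illustrative.

```python
import numpy as np

def invariant_direction(rgb, n_angles=180, n_bins=64):
    """Search for the projection angle that minimizes the entropy of the
    1D greyscale image obtained from 2D log-chromaticities.

    rgb : (n_pixels, 3) array of linear RGB values."""
    eps = 1e-6
    r, g, b = rgb[:, 0] + eps, rgb[:, 1] + eps, rgb[:, 2] + eps
    # 2D log-chromaticity coordinates (the paper normalizes by the
    # geometric mean; a band-ratio form is used here for brevity)
    chi = np.stack([np.log(r / g), np.log(b / g)], axis=1)
    # Fixed histogram range so that entropies are comparable across angles
    span = np.linalg.norm(chi, axis=1).max() + eps
    best_angle, best_entropy = 0.0, np.inf
    for theta in np.linspace(0.0, np.pi, n_angles, endpoint=False):
        grey = chi @ np.array([np.cos(theta), np.sin(theta)])
        hist, _ = np.histogram(grey, bins=n_bins, range=(-span, span))
        p = hist / hist.sum()
        p = p[p > 0]
        entropy = -(p * np.log2(p)).sum()
        if entropy < best_entropy:
            best_angle, best_entropy = theta, entropy
    return best_angle, best_entropy
```

Projecting the log-chromaticities onto the returned direction yields the 1D invariant greyscale image: pixels of the same surface, whether in shadow or not, project to nearly the same value, which is why this projection has low entropy.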
This paper considers the problem of illuminant estimation: how, given an image of a scene, recorded under an unknown light, we can recover an estimate of that light. Obtaining such an estimate is a central part of solving the color constancy problem, that is, of recovering an illuminant-independent representation of the reflectances in a scene. Thus, the work presented here will have applications in fields such as color-based object recognition and digital photography, where solving the color constancy problem is important. The work in this paper differs from much previous work in that, rather than attempting to recover a single estimate of the illuminant as many previous authors have done, we instead set out to recover a measure of the likelihood that each of a set of possible illuminants was the scene illuminant. We begin by determining which image colors can occur (and how these colors are distributed) under each of a set of possible lights. We discuss in the paper how, for a given camera, we can obtain this knowledge. We then correlate this information with the colors in a particular image to obtain a measure of the likelihood that each of the possible lights was the scene illuminant. Finally, we use this likelihood information to choose a single light as an estimate of the scene illuminant. Computation is expressed and performed in a generic correlation framework which we develop in this paper. We propose a new probabilistic instantiation of this correlation framework and we show that it delivers very good color constancy on both synthetic and real images. We further show that the proposed framework is rich enough to allow many existing algorithms to be expressed within it: the gray-world and gamut-mapping algorithms are presented in this framework and we also explore the relationship of these algorithms to other probabilistic and neural network approaches to color constancy.
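The correlation step can be sketched as follows, assuming the per-illuminant chromaticity distributions have already been measured for the camera. This is one schematic instantiation (summing log-likelihoods over the chromaticity bins that occur in the image, which is a dot product with a binary occurrence vector); the function name and data layout are illustrative, not the paper's API.

```python
import numpy as np

def color_by_correlation(image_chroma_bins, log_likelihood):
    """Pick the most likely illuminant by correlating the set of
    chromaticities present in an image with per-illuminant priors.

    image_chroma_bins : 1D int array of occupied chromaticity-bin indices
    log_likelihood    : (n_illuminants, n_bins) array of log P(bin | light)
    Returns (index of best illuminant, per-illuminant scores)."""
    # Binary occurrence vector: which chromaticity bins appear in the image
    occurs = np.zeros(log_likelihood.shape[1])
    occurs[image_chroma_bins] = 1.0
    # Correlating each illuminant's log-likelihoods with the occurrence
    # vector gives the log-likelihood that each light lit the scene
    scores = log_likelihood @ occurs
    return int(np.argmax(scores)), scores
```

Other algorithms slot into the same framework by changing what is stored in the rows of the correlation matrix (e.g. 0/1 gamut membership for gamut mapping).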
We develop sensor transformations, collectively called spectral sharpening, that convert a given set of sensor sensitivity functions into a new set that will improve the performance of any color-constancy algorithm that is based on an independent adjustment of the sensor response channels. Independent adjustment of multiplicative coefficients corresponds to the application of a diagonal-matrix transform (DMT) to the sensor response vector and is a common feature of many theories of color constancy, Land's retinex and von Kries adaptation in particular. We set forth three techniques for spectral sharpening. Sensor-based sharpening focuses on the production of new sensors as linear combinations of the given ones such that each new sensor has its spectral sensitivity concentrated as much as possible within a narrow band of wavelengths. Data-based sharpening, on the other hand, extracts new sensors by optimizing the ability of a DMT to account for a given illumination change, by examining the sensor response vectors obtained from a set of surfaces under two different illuminants. Finally, in perfect sharpening, we demonstrate that, if illumination and surface reflectance are described by two- and three-parameter finite-dimensional models, there exists a unique optimal sharpening transform. All three sharpening methods yield similar results. When sharpened cone sensitivities are used as sensors, a DMT models illumination change extremely well. We present simulation results suggesting that in general nondiagonal transforms can do only marginally better. Our sharpening results correlate well with the psychophysical evidence of spectral sharpening in the human visual system.
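Data-based sharpening can be sketched as follows: fit the linear map taking responses under one illuminant to responses under the other, then diagonalize it, so that in the transformed (sharpened) sensor basis the illumination change becomes a diagonal von Kries-style scaling. This is a minimal NumPy sketch under the assumption that the eigenvalues of the fitted map are real; the function name and data layout are illustrative.

```python
import numpy as np

def data_based_sharpening(w1, w2):
    """Find a sharpening transform T such that a diagonal matrix maps
    sharpened responses under illuminant 2 to those under illuminant 1.

    w1, w2 : (3, n) sensor responses of n surfaces under two illuminants.
    Returns T, shape (3, 3)."""
    # Least-squares linear map taking responses under light 2 to light 1
    m = w1 @ np.linalg.pinv(w2)
    # Diagonalize: m = v @ diag(e) @ inv(v). Taking T = inv(v) makes
    # T m inv(T) diagonal, i.e. illumination change is a DMT in the
    # sharpened basis. Assumes real eigenvalues (typical for camera data).
    e, v = np.linalg.eig(m)
    return np.linalg.inv(v).real
```

With T in hand, an illumination change is modeled in the sharpened space by the diagonal matrix T M T^(-1): a channel-wise gain, which is exactly the form that retinex-style and von Kries-style algorithms assume.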