Abstract. This paper is concerned with the derivation of a progression of shadow-free image representations. First we show that adopting certain assumptions about lights and cameras leads to a 1-d, grey-scale image representation which is illuminant invariant at each image pixel. We show that as a consequence, images represented in this form are shadow-free. We then extend this 1-d representation to an equivalent 2-d, chromaticity representation. We show that in this 2-d representation, it is possible to re-light all the image pixels in the same way, effectively deriving a 2-d image representation which is additionally shadow-free. Finally, we show how to recover a 3-d, full colour shadow-free image representation by first (with the help of the 2-d representation) identifying shadow edges. We then remove shadow edges from the edge-map of the original image by edge in-painting, and we propose a method to re-integrate this thresholded edge map, thus deriving the sought-after 3-d shadow-free image.
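The 1-d illuminant-invariant representation described above can be sketched as a projection of log-chromaticities, assuming Lambertian surfaces, approximately Planckian lights, and narrowband sensors. The function name and the band-ratio chromaticity choice below are illustrative, and the invariant angle `theta` is assumed to be known (e.g. from a calibration):

```python
import numpy as np

def invariant_greyscale(rgb, theta):
    """Project log-chromaticities along an invariant direction.

    rgb   : (H, W, 3) array of linear RGB values (> 0)
    theta : invariant angle in radians (camera-dependent; assumed known)
    """
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    # Band-ratio log-chromaticities; ratios cancel overall intensity
    # (geometric-mean chromaticities are a common alternative)
    x1 = np.log(r / g)
    x2 = np.log(b / g)
    # Under Planckian lights and narrowband sensors, a lighting change
    # moves every pixel along a line of fixed orientation in (x1, x2);
    # projecting along the invariant direction removes that variation.
    return x1 * np.cos(theta) + x2 * np.sin(theta)
```

Because the band ratios already cancel intensity, the projected greyscale image is unchanged when the whole image is uniformly brightened or darkened.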
We attempt to recover a 2D chromaticity intrinsic image from a single RGB image, independent of lighting and without requiring any prior knowledge of the camera. The proposed algorithm learns an invariant direction for projecting from 2D color space to the greyscale intrinsic image. We observe that along this direction, the entropy of the derived invariant image is minimized. Experiments conducted on various inputs indicate that this method achieves intrinsic 2D chromaticity images which are free of shadows. In addition, we examine the idea of utilizing projection pursuit, instead of entropy minimization, to find the desired direction.
We develop sensor transformations, collectively called spectral sharpening, that convert a given set of sensor sensitivity functions into a new set that will improve the performance of any color-constancy algorithm based on an independent adjustment of the sensor response channels. Independent adjustment of multiplicative coefficients corresponds to the application of a diagonal-matrix transform (DMT) to the sensor response vector and is a common feature of many theories of color constancy, Land's retinex and von Kries adaptation in particular. We set forth three techniques for spectral sharpening. Sensor-based sharpening focuses on the production of new sensors as linear combinations of the given ones such that each new sensor has its spectral sensitivity concentrated as much as possible within a narrow band of wavelengths. Data-based sharpening, on the other hand, extracts new sensors by optimizing the ability of a DMT to account for a given illumination change, by examining the sensor response vectors obtained from a set of surfaces under two different illuminants. Finally, in perfect sharpening, we demonstrate that if illumination and surface reflectance are described by two- and three-parameter finite-dimensional models, there exists a unique optimal sharpening transform. All three sharpening methods yield similar results. When sharpened cone sensitivities are used as sensors, a DMT models illumination change extremely well. We present simulation results suggesting that in general nondiagonal transforms can do only marginally better. Our sharpening results correlate well with the psychophysical evidence of spectral sharpening in the human visual system.
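Data-based sharpening admits a compact sketch: fit the best 3×3 linear map between sensor responses under two illuminants, then move to that map's eigenvector basis, in which the fitted illumination change is exactly diagonal. The helper below is an illustrative reading of that idea, not the authors' code:

```python
import numpy as np

def data_based_sharpening(resp_a, resp_b):
    """Data-based sharpening sketch.

    resp_a, resp_b : (3, N) sensor responses for the same N surfaces
    under two illuminants. Returns (T, D): in the sharpened basis
    T @ responses, the fitted illuminant change is the diagonal D.
    """
    # Least-squares 3x3 map taking responses under light A to light B
    M = resp_b @ resp_a.T @ np.linalg.inv(resp_a @ resp_a.T)
    # Eigendecomposition M = V diag(d) V^{-1}; real for realistic data
    d, V = np.linalg.eig(M)
    T = np.linalg.inv(V)   # sharpening transform (rows = new sensors)
    return np.real(T), np.real(np.diag(d))
```

In the sharpened basis the relation `T @ resp_b == D @ (T @ resp_a)` holds exactly whenever the illumination change is itself a linear map of the responses, which is the sense in which a DMT "accounts for" the change.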
Abstract. A method was recently devised for the recovery of an invariant image from a 3-band colour image. The invariant image, originally 1D greyscale but here derived as a 2D chromaticity, is independent of lighting, and also has shading removed: it forms an intrinsic image that may be used as a guide in recovering colour images that are independent of illumination conditions. Invariance to illuminant colour and intensity means that such images are free of shadows, as well, to a good degree. The method devised finds an intrinsic reflectivity image based on assumptions of Lambertian reflectance, approximately Planckian lighting, and fairly narrowband camera sensors. Nevertheless, the method works well when these assumptions do not hold. A crucial piece of information is the angle for an "invariant direction" in a log-chromaticity space. To date, we have gleaned this information via a preliminary calibration routine, using the camera involved to capture images of a colour target under different lights. In this paper, we show that we can in fact dispense with the calibration step, by recognizing a simple but important fact: the correct projection is that which minimizes entropy in the resulting invariant image. To show that this must be the case we first consider synthetic images, and then apply the method to real images. We show that not only does a correct shadow-free image emerge, but also that the angle found agrees with that recovered from a calibration. As a result, we can find shadow-free images for images with unknown camera, and the method is applied successfully to remove shadows from unsourced imagery.
Abstract. Recently, a method for removing shadows from colour images was developed [Finlayson, Hordley, Lu, and Drew, PAMI2006] that relies upon finding a special direction in a 2D chromaticity feature space. This "invariant direction" is that for which particular colour features, when projected into 1D, produce a greyscale image which is approximately invariant to intensity and colour of scene illumination. Thus shadows, which are in essence a particular type of lighting, are greatly attenuated. The main approach to finding this special angle is a camera calibration: a colour target is imaged under many different lights, and the direction that best makes colour patch images equal across illuminants is the invariant direction. Here, we take a different approach. In this work, instead of a camera calibration we aim at finding the invariant direction from evidence in the colour image itself. Specifically, we recognize that producing a 1D projection in the correct invariant direction will result in a 1D distribution of pixel values that has smaller entropy than projecting in the wrong direction. The reason is that the correct projection results in a probability distribution spike, formed by pixels that are identical except for the lighting that produced their observed RGB values, and that therefore lie along a line with orientation equal to the invariant direction. Hence we seek the projection that produces a type of intrinsic image, independent of lighting and carrying reflectance information only, by minimizing entropy, and from there go on to remove shadows as previously. To be able to develop an effective description of the entropy-minimization task, we go over to the quadratic entropy, rather than Shannon's definition. Replacing the observed pixels with a kernel density probability distribution, the quadratic entropy can be written as a very simple formulation, and can be evaluated using the efficient Fast Gauss Transform.
The entropy, written in this embodiment, has the advantage that it is less sensitive to quantization than the usual definition. The resulting algorithm is quite reliable, and the shadow removal step produces good shadow-free colour image results whenever strong shadow edges are present in the image. In most cases studied, entropy has a strong minimum for the invariant direction, revealing a new property of image formation.
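The quadratic-entropy formulation reduces to a double sum over Gaussian kernels: with a kernel density estimate, the integral of the squared density is a sum of Gaussians evaluated at pairwise sample differences. The sketch below evaluates that sum directly in O(N²); the paper's Fast Gauss Transform computes the same quantity efficiently. The kernel width `sigma` is an illustrative parameter:

```python
import numpy as np

def quadratic_entropy(x, sigma=0.05):
    """Renyi quadratic entropy, H2 = -log(integral of p(x)^2 dx),
    of a 1D sample, with p estimated by a Gaussian kernel density.

    Direct O(N^2) evaluation of the pairwise double sum; two Gaussians
    of variance sigma^2 convolve to one of variance 2*sigma^2.
    """
    x = np.asarray(x, dtype=float)
    n = x.size
    diff = x[:, None] - x[None, :]
    s2 = 2.0 * sigma ** 2
    info = np.exp(-diff ** 2 / (2.0 * s2)).sum() / (n ** 2 * np.sqrt(2.0 * np.pi * s2))
    return -np.log(info)
```

Consistent with the abstract's argument, a tightly clustered sample (a distribution spike) yields a markedly lower quadratic entropy than a spread-out one.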
This study's main result is to show that under the conditions imposed by the Maloney-Wandell color constancy algorithm, whereby illuminants are three dimensional and reflectances two dimensional (the 3-2 world), color constancy can be expressed in terms of a simple independent adjustment of the sensor responses (in other words, as a von Kries adaptation type of coefficient rule algorithm) as long as the sensor space is first transformed to a new basis. A consequence of this result is that any color constancy algorithm that makes 3-2 assumptions, such as the Maloney-Wandell subspace algorithm, Forsyth's MWEXT, and the Funt-Drew lightness algorithm, must effectively calculate a simple von Kries-type scaling of sensor responses, i.e., a diagonal matrix. Our results are strong in the sense that no constraint is placed on the initial spectral sensitivities of the sensors. In addition to purely theoretical arguments, we present results from simulations of von Kries-type color constancy in which the spectra of real illuminants and reflectances along with the human cone-sensitivity functions are used. The simulations demonstrate that when the cone sensor space is transformed to its new basis in the appropriate manner a diagonal matrix supports nearly optimal color constancy.
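The central claim, that color constancy in the 3-2 world reduces to a diagonal scaling once the sensor basis is changed, can be written as a one-line map. The function name is hypothetical, and the basis-change matrix `T` is assumed given; the net map on raw responses is then the generally non-diagonal matrix T⁻¹ D T:

```python
import numpy as np

def von_kries_in_basis(responses, scalings, T):
    """Von Kries-type adaptation applied in a transformed sensor basis.

    responses : (3, N) raw sensor responses
    scalings  : length-3 per-channel gains in the new basis
    T         : 3x3 basis-change matrix (e.g. from the 3-2 analysis)
    """
    D = np.diag(scalings)
    # Transform to the new basis, scale each channel independently,
    # then transform back: net map is inv(T) @ D @ T.
    return np.linalg.inv(T) @ D @ (T @ responses)
```

With `T` equal to the identity this is ordinary von Kries scaling; the point of the result is that an appropriate fixed `T` makes this simple rule nearly optimal under 3-2 assumptions.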
Abstract. Existing shape-from-shading algorithms assume constant reflectance across the shaded surface. Multi-colored surfaces are excluded because both shading and reflectance affect the measured image intensity. Given a standard RGB color image, we describe a method of eliminating the reflectance effects in order to calculate a shading field that depends only on the relative positions of the illuminant and surface. Of course, shading recovery is closely tied to lightness recovery and our method follows from the work of Land [10,9], Horn [7] and Blake [1]. In the luminance image, R+G+B, shading and reflectance are confounded. Reflectance changes are located and removed from the luminance image by thresholding the gradient of its logarithm at locations of abrupt chromaticity change. Thresholding can lead to gradient fields which are not conservative (do not have zero curl everywhere and are not integrable) and therefore do not represent realizable shading fields. By applying a new curl-correction technique at the thresholded locations, the thresholding is improved and the gradient fields are forced to be conservative. The resulting Poisson equation is solved directly by the Fourier transform method. Experiments with real images are presented.
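The final reintegration step, solving a Poisson equation for the shading field from a (thresholded, curl-corrected) gradient field by the Fourier transform method, can be sketched as follows. This sketch assumes periodic boundaries for simplicity; practical implementations typically mirror-pad the image or use cosine transforms instead:

```python
import numpy as np

def poisson_reintegrate(gx, gy):
    """Reintegrate a gradient field by solving the Poisson equation
    with the FFT, assuming periodic boundary conditions.

    gx, gy : (H, W) x- and y-derivatives (forward differences)
    Returns u whose discrete Laplacian matches div(gx, gy),
    determined up to an additive constant (fixed to zero mean here).
    """
    H, W = gx.shape
    # Divergence via backward differences (adjoint of forward differences),
    # so that div(grad u) is the standard 5-point discrete Laplacian
    div = (gx - np.roll(gx, 1, axis=1)) + (gy - np.roll(gy, 1, axis=0))
    fx = np.fft.fftfreq(W)
    fy = np.fft.fftfreq(H)
    # Eigenvalues of the periodic discrete Laplacian
    denom = (2.0 * np.cos(2.0 * np.pi * fx)[None, :] - 2.0) \
          + (2.0 * np.cos(2.0 * np.pi * fy)[:, None] - 2.0)
    denom[0, 0] = 1.0              # avoid division by zero at DC
    U = np.fft.fft2(div) / denom
    U[0, 0] = 0.0                  # the unknown constant: set mean to 0
    return np.real(np.fft.ifft2(U))
```

Reintegrating the forward-difference gradients of a periodic surface recovers that surface exactly, up to the additive constant the Poisson equation cannot determine.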