In this study, a novel illuminant color estimation framework is proposed for color constancy, combining the high representational capacity of deep-learning-based models with the strong interpretability of assumption-based models. A carefully designed building block, the feature map reweight unit (ReWU), helps achieve accuracy comparable to prior state-of-the-art models on benchmark datasets while requiring only 1%-5% of their model size and 8%-20% of their computational cost. In addition to local color estimation, a confidence estimation branch is included so that the model produces a point estimate and its uncertainty estimate simultaneously, which provides useful clues for aggregating local estimates and for multiple-illuminant estimation. The source code and the dataset are available at https://github.com/QiuJueqin/Reweight-CC.

Keywords: color constancy, illuminant estimation, convolutional neural network, computer vision
Introduction

Color constancy of the human visual system is an essential prerequisite for many vision tasks: it compensates for the effect of the illumination on the perceived colors of objects. Many computer vision applications are designed to extract information from the intrinsic colors of objects and therefore require the input images to be color-unbiased. Unfortunately, the photosensors in modern digital cameras cannot automatically compensate for illuminant colors. To address this issue, a variety of computational color constancy algorithms have been proposed to mimic the dynamic adjustment of the cones in the human visual system [1,2,3].

Computational color constancy generally works by first estimating the illuminant color and then compensating for it by multiplying the color-biased image by the reciprocal of the illuminant color. Existing algorithms can be classified into a priori assumption-based ones and learning-based ones, according to whether a training process is needed. Typical assumption-based algorithms include Gray-World [4], White-Patch [1], variants of Gray-Edge [5,6], and methods that exploit statistical information in the images [7]. Although assumption-based methods are lightweight and comprehensible, their performance degrades dramatically when these restrictive assumptions are violated. Learning-based algorithms can be further grouped into low-level and high-level ones. Typical low-level methods include Color-by-Correlation [8], Gamut Mapping [9], and Bayesian color constancy [10]. Since spatial and textural information is lost when generating low-level color descriptors, these methods are prone to ambiguous estimates if they have not "seen" the colorimetric patterns of the test images during training.
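The estimate-then-compensate pipeline described above can be sketched with Gray-World as the estimator. This is a minimal illustration under the assumptions of a linear RGB image and a diagonal (von Kries) correction model, not the proposed framework; the function names and the toy image are invented for the example:

```python
import numpy as np

def gray_world_illuminant(image):
    """Estimate the illuminant color under the Gray-World assumption:
    the average reflectance in a scene is achromatic, so the mean RGB
    of the image approximates the illuminant color."""
    illuminant = image.reshape(-1, 3).mean(axis=0)
    return illuminant / np.linalg.norm(illuminant)  # unit-norm chromaticity

def correct_image(image, illuminant):
    """Compensate the illuminant by multiplying each channel by the
    reciprocal of the estimated illuminant color (von Kries model)."""
    gains = illuminant.mean() / illuminant  # per-channel reciprocal gains
    return np.clip(image * gains, 0.0, 1.0)

# Toy example: an achromatic scene observed under a reddish illuminant.
scene = np.full((4, 4, 3), 0.5)            # neutral gray reflectances
illum = np.array([1.0, 0.8, 0.6])          # reddish light
observed = scene * illum                   # color-biased image

est = gray_world_illuminant(observed)
corrected = correct_image(observed, est)   # channels become equal again
```

On this toy scene the correction exactly removes the color cast because the Gray-World assumption holds by construction; on real scenes with dominant colored surfaces the assumption is violated and the estimate becomes biased, which is precisely the failure mode of assumption-based methods noted above.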
In recent years, following the massive success of deep learning in the computer vision community, high-level color constancy algorithms based on convolutional neural networks (CNNs) have achieved state-of-the-art performance on benchmark datasets ...