2017
DOI: 10.1364/josaa.34.001448

Improving color constancy by discounting the variation of camera spectral sensitivity

Abstract: It is an ill-posed problem to recover the true scene colors from a color-biased image by discounting the effects of the scene illuminant and the camera spectral sensitivity (CSS) at the same time. Most color constancy (CC) models have been designed to first estimate the illuminant color, which is then removed from the color-biased image to obtain an image taken under white light, without explicit consideration of the CSS effect on CC. This paper first studies the CSS effect on illuminant estimation arising in the inte…
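For context on the pipeline the abstract describes (estimate the illuminant, then remove it from the image), a minimal sketch of the standard von Kries style diagonal correction is given below; it illustrates the conventional CC step, not this paper's CSS-aware extension, and the gray-world estimate in the comment is only a common baseline assumption:

```python
import numpy as np

def discount_illuminant(image, illuminant):
    """Apply a von Kries style diagonal correction: divide each channel by the
    estimated illuminant so the scene appears as if lit by white light.

    image      : (H, W, 3) linear RGB image with values in [0, 1]
    illuminant : (3,)      estimated illuminant color
    """
    illuminant = np.asarray(illuminant, dtype=float)
    corrected = image / (illuminant / illuminant.mean())  # preserve overall brightness
    return np.clip(corrected, 0.0, 1.0)

# Example with a gray-world estimate (a common baseline, not this paper's method):
# illuminant = image.reshape(-1, 3).mean(axis=0)
```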


Cited by 42 publications (26 citation statements)
References 68 publications
“…In the literature, the validation of color constancy methods is customarily performed using k-fold cross-validation on the same dataset. As a result, this validation process favors learning-based methods and fails to assess their performance for color correction in images from an unknown camera (Gao et al, 2017).…”
Section: Camera-agnostic Color Constancy
confidence: 99%
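The protocol gap described in this citation statement can be made concrete with a small sketch. Nothing below comes from the cited paper; `evaluate_fn` is a hypothetical callable that trains a method on one set of images and returns its error on another, and the per-camera grouping of images is an assumption:

```python
import numpy as np
from sklearn.model_selection import KFold

def kfold_same_camera(images, evaluate_fn, k=3):
    """Customary protocol: k-fold cross-validation within one camera's dataset."""
    images = np.asarray(images, dtype=object)
    errors = []
    for train_idx, test_idx in KFold(n_splits=k, shuffle=True).split(images):
        errors.append(evaluate_fn(images[train_idx], images[test_idx]))
    return float(np.mean(errors))

def leave_one_camera_out(images_by_camera, evaluate_fn):
    """Camera-agnostic protocol: hold out one camera entirely and test on it."""
    errors = {}
    for held_out, test_imgs in images_by_camera.items():
        train_imgs = [im for cam, imgs in images_by_camera.items()
                      if cam != held_out for im in imgs]
        errors[held_out] = evaluate_fn(train_imgs, test_imgs)
    return errors
```

The first protocol mixes images from the same camera across the train/test split, so a learning-based method never has to cope with an unseen sensor; the second does.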
“…We argue that learning-based methods depend on the assumption that the statistical distribution of the illumination in both the training and testing images is similar. In other words, learning-based methods assume that imaging and illumination conditions of a given image can be inferred from previous training examples, thus becoming heavily dependent on the training data (Gao et al, 2017).…”
Section: Introduction
confidence: 99%
“…the next step is to extend it so that it can train on images taken with one sensor and be used on images taken with another sensor. A solution to this problem has already been proposed [43], but it requires calibrated images and reflectance spectra to learn a 3 × 3 sensor transformation matrix. Here, however, the goal is to use neither calibrated images nor any reflectance spectra in order for the method to be fully unsupervised.…”
Section: Algorithm 2 Color Tiger Training
confidence: 99%
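The 3 × 3 sensor transformation mentioned in this statement can be illustrated with a least-squares sketch: synthesize paired responses of the two cameras from reflectance spectra and their spectral sensitivities, then fit the matrix that maps one to the other. The variable names, the single-illuminant setup, and the plain least-squares fit are assumptions for illustration, not the authors' exact procedure:

```python
import numpy as np

def fit_sensor_transform(css_a, css_b, reflectances, illuminant):
    """Fit a 3x3 matrix T such that T @ rgb_a ≈ rgb_b in a least-squares sense.

    css_a, css_b : (3, N) spectral sensitivities of cameras A and B
    reflectances : (M, N) surface reflectance spectra
    illuminant   : (N,)   spectral power distribution of the light source
    """
    radiance = reflectances * illuminant   # (M, N) light reflected by each surface
    rgb_a = radiance @ css_a.T             # (M, 3) responses of camera A
    rgb_b = radiance @ css_b.T             # (M, 3) responses of camera B
    # Solve rgb_a @ X ≈ rgb_b for X, then return T = X.T so that rgb_b ≈ T @ rgb_a.
    X, *_ = np.linalg.lstsq(rgb_a, rgb_b, rcond=None)
    return X.T                             # (3, 3) maps camera-A RGB to camera-B RGB
```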
“…Nevertheless, since all well-known learning-based methods are supervised, a major obstacle for their application is that for a given sensor, despite proposed workarounds [43], supervised learning-based methods have to be trained on calibrated images taken by preferably the same sensor [44]. To calibrate the images, a calibration object has to be placed in the scenes of these images and later segmented to extract the ground-truth illumination.…”
Section: Introduction
confidence: 99%
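As an aside on the calibration step this statement describes, the ground-truth illuminant is typically taken as the normalized mean RGB of the segmented achromatic patch. A minimal sketch under that assumption, with hypothetical names and a simple saturation filter (real pipelines work on linear raw values):

```python
import numpy as np

def ground_truth_illuminant(image, patch_mask, saturation_level=0.98):
    """Estimate the scene illuminant from a segmented gray calibration patch.

    image      : (H, W, 3) linear RGB image with values in [0, 1]
    patch_mask : (H, W)    boolean mask of the achromatic patch
    """
    pixels = image[patch_mask]                      # (P, 3) patch pixels
    # Drop near-saturated pixels, which no longer scale with the illuminant.
    valid = pixels.max(axis=1) < saturation_level
    illuminant = pixels[valid].mean(axis=0)
    return illuminant / np.linalg.norm(illuminant)  # unit-norm RGB illuminant
```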
“…As different camera models are used in the INTEL-TUT2 dataset, the variation of camera spectral sensitivity needs to be discounted. For this purpose, we utilize Color Conversion Matrix (CCM) based preprocessing [28] to learn a 3 × 3 CCM for each camera pair.…”
Section: Evaluation Procedures
confidence: 99%
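The CCM-based preprocessing mentioned here can be sketched as two small routines: fit a 3 × 3 matrix from paired measurements of the two cameras (e.g., color-chart patches) and apply it per pixel before pooling images for training. This is an illustrative least-squares version, not the exact procedure of [28]:

```python
import numpy as np

def fit_ccm(rgb_src, rgb_dst):
    """Fit a 3x3 CCM from paired (M, 3) measurements captured by the
    source and destination cameras of the same scene content."""
    ccm, *_ = np.linalg.lstsq(rgb_src, rgb_dst, rcond=None)
    return ccm.T

def apply_ccm(image, ccm):
    """Map an (H, W, 3) image from the source camera into the destination
    camera's color space with a per-pixel 3x3 multiplication."""
    h, w, _ = image.shape
    mapped = image.reshape(-1, 3) @ ccm.T
    return np.clip(mapped, 0, None).reshape(h, w, 3)
```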