2020
DOI: 10.1007/978-3-030-67070-2_22

A Benchmark for Burst Color Constancy

Abstract: Temporal Color Constancy (CC) is a recently proposed approach that challenges conventional single-frame color constancy. The conventional approach uses a single frame, the shot frame, to estimate the scene illumination color. In temporal CC, multiple frames from the viewfinder sequence are used to estimate the color. However, there are no realistic large-scale temporal color constancy datasets for method evaluation. In this work, a new temporal CC benchmark is introduced. The benchmark comprises (1) 6…
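
As a rough illustration of the multi-frame idea described in the abstract (and not the benchmark's own learned method), a per-frame gray-world estimate pooled over the burst can be sketched in a few lines; the function names and the pooling-by-averaging choice are assumptions made only for this example.

```python
import numpy as np

def grayworld_illuminant(frame):
    # Per-frame gray-world estimate: mean RGB of the frame,
    # normalized to a unit-length illuminant color vector.
    rgb = frame.reshape(-1, 3).mean(axis=0)
    return rgb / np.linalg.norm(rgb)

def burst_illuminant(frames):
    # Pool per-frame estimates over the viewfinder burst by simple
    # averaging, then renormalize; `frames` is a list of HxWx3 arrays.
    estimates = np.stack([grayworld_illuminant(f) for f in frames])
    pooled = estimates.mean(axis=0)
    return pooled / np.linalg.norm(pooled)

# Synthetic 3-frame burst under a reddish illuminant, for demonstration only.
rng = np.random.default_rng(0)
burst = [rng.random((32, 32, 3)) * np.array([1.0, 0.8, 0.6]) for _ in range(3)]
print(burst_illuminant(burst))  # approximate RGB direction of the illuminant
```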


Cited by 11 publications (46 citation statements)
References 31 publications
“…As part of this analysis, we investigate the number of frames involved in the illuminant estimation as a possible way to reduce the inference time of the TCC models. Contrary to preliminary results in [19], we show that it is possible to train models that work with shorter sequences, with substantial gain in inference efficiency and no major loss in accuracy, if the retained frames are selected to capture global temporal information.…”
Section: Introduction (contrasting)
confidence: 91%
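
As a minimal sketch of the shorter-sequence idea in the statement above, one could keep a small, uniformly spaced subset of frames so that the shortened input still spans the whole sequence; the uniform-spacing heuristic and the function name are assumptions made for illustration, not the selection strategy used in the cited work.

```python
import numpy as np

def select_frames(sequence, k):
    # Keep k frames spread uniformly across the full sequence so the
    # shortened input still covers the global temporal context rather
    # than only the most recent frames.
    n = len(sequence)
    if k >= n:
        return list(sequence)
    idx = np.linspace(0, n - 1, num=k).round().astype(int)
    return [sequence[i] for i in idx]

# Example: shorten an 8-frame viewfinder sequence to 3 frames.
frames = list(range(8))          # integer stand-ins for real frames
print(select_frames(frames, 3))  # -> frames at indices [0, 4, 7]
```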
“…Most of the research so far has focused on CCC for single frames (see [11] for a survey), with some attempts at leveraging the temporal information intrinsic in sequences of correlated frames for illuminant estimation [18,19]. In both cases, state-of-the-art methods in terms of accuracy rely on Deep Learning (DL).…”
Section: Introduction (mentioning)
confidence: 99%
“…Barron et al [9] proposed a Fourier transform-based model (model O in the paper), learning the model weights from camera metadata such as aperture. Qian et al [33,35] claimed that the preceding image sequence benefits illumination estimation for the shot image. Yoo et al [45] explored the AC light source in a high-speed setting.…”
Section: Methods Relying on Image(s) and Auxiliary Information (mentioning)
confidence: 99%
“…Bianco et al [10] initially showed the potential of applying CNNs to illumination estimation. More effective and efficient CNN variants were then designed, such as AlexNet+SVM [11], DS-Net [12], FC4-Net [13], RCC [14], and BCC [15].…”
Section: Introduction (mentioning)
confidence: 99%