2020
DOI: 10.1109/access.2020.3045066

The Cube++ Illumination Estimation Dataset

Cited by 19 publications (12 citation statements)
References 61 publications
“…The network was trained on a proprietary dataset of annotated images, reaching 98.33% accuracy on an independent test set. Prior to being classified, the images in the AWB datasets were sRGB-rendered and white-balanced in order to match the appearance expected by the network. For reference, Figure 7 shows example images from the SimpleCube++ dataset [20], identified in terms of lighting conditions and illumination measure.…”
Section: Shooting Parameters and Illumination Levels
confidence: 99%
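The excerpt above describes white-balancing and sRGB-rendering linear images before classification. The following is a minimal Python sketch of that kind of preprocessing, assuming a diagonal (von Kries) correction from a per-image illuminant estimate and a plain sRGB gamma encoding; real camera pipelines also apply a color matrix and tone curve, and the function and variable names here are hypothetical.

import numpy as np

def white_balance_and_render(linear_img, illum_rgb):
    """linear_img: HxWx3 float array in [0, 1]; illum_rgb: (r, g, b) illuminant estimate."""
    illum = np.asarray(illum_rgb, dtype=np.float64)
    # Diagonal correction, normalized so the green channel is left unchanged.
    gains = illum[1] / illum
    balanced = np.clip(linear_img * gains, 0.0, 1.0)
    # Standard sRGB encoding (linear -> gamma-compressed).
    return np.where(balanced <= 0.0031308,
                    12.92 * balanced,
                    1.055 * balanced ** (1.0 / 2.4) - 0.055)

# A gray surface observed under a warm illuminant becomes neutral after balancing.
img = np.full((2, 2, 3), [0.40, 0.30, 0.20])
print(white_balance_and_render(img, illum_rgb=(0.40, 0.30, 0.20))[0, 0])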
“…ILSVRC2012 [75] is a low-resolution, JPG-encoded subset of ImageNet [16] and is a popular and commonly acknowledged dataset for visual object recognition. Cube++ is a high-resolution dataset and comes in two flavors: as 16-bit encoded PNGs and as JPGs [19].…”
Section: CV
confidence: 99%
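Since the excerpt contrasts the two Cube++ encodings, here is a minimal sketch of loading either flavor with OpenCV while preserving the 16-bit depth of the PNGs; the file paths are hypothetical placeholders.

import cv2
import numpy as np

def load_cubepp_image(path):
    # IMREAD_UNCHANGED keeps the PNGs at 16 bits; JPGs decode as 8-bit.
    img = cv2.imread(path, cv2.IMREAD_UNCHANGED)
    if img is None:
        raise FileNotFoundError(path)
    img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
    # Normalize to [0, 1] float according to the source bit depth.
    max_val = 65535.0 if img.dtype == np.uint16 else 255.0
    return img.astype(np.float32) / max_val

# png = load_cubepp_image("cube++/PNG/00_0001.png")  # hypothetical path
# jpg = load_cubepp_image("cube++/JPG/00_0001.jpg")  # hypothetical path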
“…We divide the NR2R dataset into two parts: 120 images are used for training and the remaining 30 images are used for testing. Stage 1 was first pretrained on the Cube++ dataset [12], which was captured with the same camera as NR2R. We then use the Adam optimizer [23] with a 5e-5 learning rate. Stage 2 was then trained for 300 epochs with the same learning rate while the parameters of stage 1 were frozen.…”
Section: Training Details
confidence: 99%
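The training schedule quoted above (pretrain stage 1, then train stage 2 with stage 1 frozen, Adam at 5e-5) maps onto a standard freeze-and-finetune pattern. Below is a minimal PyTorch sketch of that pattern; the Sequential stand-ins for the two stages, the MSE loss, and the toy dataset are assumptions, not the cited paper's architecture.

import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

stage1 = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU())  # stand-in for the pretrained stage 1
stage2 = nn.Sequential(nn.Conv2d(16, 3, 3, padding=1))             # stand-in for stage 2

# Freeze stage 1 so only stage 2's parameters are updated.
for p in stage1.parameters():
    p.requires_grad = False
stage1.eval()

optimizer = torch.optim.Adam(stage2.parameters(), lr=5e-5)
criterion = nn.MSELoss()  # assumed loss; the excerpt does not name one

# Toy stand-in for the NR2R training split (120 images in the cited paper).
dataset = TensorDataset(torch.rand(8, 3, 32, 32), torch.rand(8, 3, 32, 32))
dataloader = DataLoader(dataset, batch_size=4)

for epoch in range(300):  # 300 epochs, as in the excerpt
    for inputs, targets in dataloader:
        with torch.no_grad():          # no gradients through the frozen stage
            feats = stage1(inputs)
        pred = stage2(feats)
        loss = criterion(pred, targets)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()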