2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW) 2019
DOI: 10.1109/cvprw.2019.00225
Conditional GANs for Multi-Illuminant Color Constancy: Revolution or yet Another Approach?

Cited by 41 publications (37 citation statements) · References 58 publications
“…Wang et al (Wang et al 2018a) proposed a stacked conditional generative adversarial network (ST-CGAN) for image shadow removal. Sidorov (Sidorov 2019) proposed an end-to-end architecture named AngularGAN, oriented specifically to the color constancy task, which requires neither an estimate of the illumination color nor an illumination color map. Wei et al (Wei et al 2019) proposed a two-stage generative adversarial network for shadow inpainting and removal with slice convolutions.…”
Section: Related Work
confidence: 99%
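AngularGAN takes its name from the angular error, the standard evaluation metric in color constancy: the angle between the estimated and ground-truth illuminant vectors in RGB space. The following is a minimal sketch of that metric; the function name is illustrative and not taken from the cited paper.

```python
import numpy as np

def angular_error_deg(est, gt):
    """Angular error (in degrees) between an estimated and a ground-truth
    RGB illuminant vector -- the common color-constancy metric that
    AngularGAN's name refers to. Illustrative helper, not the paper's code."""
    est = np.asarray(est, dtype=float)
    gt = np.asarray(gt, dtype=float)
    cos = np.dot(est, gt) / (np.linalg.norm(est) * np.linalg.norm(gt))
    # Clamp to [-1, 1] to guard against floating-point drift before arccos.
    return float(np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))))

print(angular_error_deg([1.0, 1.0, 1.0], [2.0, 2.0, 2.0]))  # → 0.0 (same direction)
```

Because the metric depends only on the direction of the illuminant vector, it is invariant to overall brightness, which is why two proportional illuminants score an error of zero.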
“…We compare our RIS-GAN with the state-of-the-art methods, including two traditional methods, i.e., Guo (Guo, Dai, and Hoiem 2011) and Zhang (Zhang, Zhang, and Xiao 2015), and the recent learning-based methods, i.e., DeshadowNet (Qu et al 2017), DSC (Hu et al 2018), ST-CGAN (Wang et al 2018a), and AngularGAN (Sidorov 2019). Note that shadow removal operates at the pixel level and recovers per-pixel values; we therefore add two further frameworks, i.e., Global/Local-GAN (Iizuka, Simo-Serra, and Ishikawa 2017) for image inpainting and Pix2Pix-HD (Wang et al 2018b) for image translation, as two additional shadow-removal baselines for solid validation.…”
Section: Comparison With State-of-the-Arts
confidence: 99%
“…Instead of learning the shadow image directly, our network first learns the shadow matte and then produces a high-quality shadow image from the original shadow-free image and the matte. Although these images are synthesized by a neural network, they still have a distribution more similar to real natural scenes than images taken from computer games (Sidorov 2019). In our shadow synthesis, we assume the scenes contain only cast shadows, with the occluding objects outside the scene.…”
Section: Introduction
confidence: 99%
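The matting idea described above composites a shadow image from a shadow-free image and a per-pixel matte. A minimal sketch, assuming a simple multiplicative model with a matte in [0, 1] (the function name and this exact formulation are assumptions, not the cited paper's method):

```python
import numpy as np

def composite_shadow(shadow_free, matte):
    """Illustrative shadow-matting composition: a per-pixel matte in [0, 1]
    darkens the shadow-free image to synthesize a shadowed image.
    A simple multiplicative sketch, not the cited paper's exact model."""
    shadow_free = np.asarray(shadow_free, dtype=float)
    matte = np.clip(np.asarray(matte, dtype=float), 0.0, 1.0)
    # matte == 1 leaves a pixel unchanged; matte < 1 darkens it (shadow).
    return shadow_free * matte
```

Under this model, a matte of 1 everywhere reproduces the shadow-free image, and learning the matte rather than the shadow image directly factors out scene content from the shadow shape.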