2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
DOI: 10.1109/cvpr52688.2022.01917
PIE-Net: Photometric Invariant Edge Guided Network for Intrinsic Image Decomposition

Cited by 19 publications (26 citation statements) | References 44 publications
“…Most prior data-driven models utilize architectures that estimate shading and albedo separately [Baslamisli et al. 2018b; Cheng et al. 2018; Das et al. 2022; Li and Snavely 2018a,b; Luo et al. 2020; Shi et al. 2017; Takuya Narihira and Yu 2015; Zhou et al. 2019]. These methods enforce constraints on each intrinsic component and incorporate a reconstruction loss that favors outputs that reproduce the input image when multiplied.…”
Section: Related Work (mentioning)
confidence: 99%
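The reconstruction loss described in the excerpt rests on the intrinsic image model I = A ⊙ S: the predicted albedo and shading, multiplied element-wise, should reproduce the input image. A minimal sketch of such a loss, with illustrative names not taken from any of the cited papers:

```python
import numpy as np

def reconstruction_loss(albedo, shading, image):
    """L1 reconstruction loss for intrinsic decomposition:
    penalize the difference between the input image and the
    element-wise product of the predicted albedo and shading
    (I = A * S). A hypothetical sketch, not any paper's exact loss."""
    recon = albedo * shading
    return np.abs(recon - image).mean()

# toy example: a perfect decomposition gives zero loss
albedo = np.full((4, 4, 3), 0.5)   # flat reflectance
shading = np.full((4, 4, 3), 0.8)  # flat illumination
image = albedo * shading           # image consistent with A * S
print(reconstruction_loss(albedo, shading, image))  # 0.0
```

In practice this term is combined with per-component constraints (e.g. smoothness on shading, piecewise constancy on albedo), since the product alone does not uniquely determine the two factors.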
“…When the input image is resized to the training resolution for inference, as many data-driven setups do by default [Das et al 2022;Liu et al 2020;Luo et al 2020], we can generate a consistent shading structure for the entire scene. In this scenario, since the entire image fits in the receptive field size of the network, we see a consistent shading structure in the estimation.…”
Section: Multi-resolution Behavior (mentioning)
confidence: 99%
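The resize-to-training-resolution inference described above can be sketched as follows. This is a hypothetical helper (the function name, `train_size`, and the nearest-neighbor resize via integer indexing are assumptions, chosen to keep the sketch dependency-free): the input is downsampled so the whole scene fits in the network's receptive field, the model is run once, and the estimate is upsampled back.

```python
import numpy as np

def infer_at_training_resolution(image, model, train_size=256):
    """Downsample the input to the resolution the network was
    trained at, run the model so the entire scene fits in its
    receptive field, then upsample the estimate back to the
    original size (nearest-neighbor resampling by indexing)."""
    h, w = image.shape[:2]
    ys = np.arange(train_size) * h // train_size
    xs = np.arange(train_size) * w // train_size
    small = image[ys][:, xs]          # (train_size, train_size, C)
    out = model(small)                # e.g. predicted shading
    ys_up = np.arange(h) * train_size // h
    xs_up = np.arange(w) * train_size // w
    return out[ys_up][:, xs_up]       # back to (h, w, C)

# toy model: identity network, just to check shapes round-trip
result = infer_at_training_resolution(np.ones((512, 640, 3)), lambda x: x)
print(result.shape)  # (512, 640, 3)
```

The trade-off the excerpt points at: globally consistent shading, at the cost of losing high-frequency detail from the downsampling.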
“…Following previous works [18], we also introduce an extra guidance image G. We believe the content of the guidance image can benefit the learning of interpolation parameters (described in Sec 3.3) and make the DPF outputs better aligned with the high-resolution guidance image. We directly use the input image of different resolutions as the guidance image instead of introducing a task-specific guidance map (e.g., the edge guidance in [15,18]) that requires domain-specific pre-processing.…”
Section: Network Architecture (mentioning)
confidence: 99%
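The guidance strategy in the excerpt — using the input image itself at each resolution rather than a task-specific edge map — can be sketched as a simple image pyramid. The function name and 2× decimation scheme are illustrative assumptions, not the cited work's implementation:

```python
import numpy as np

def guidance_pyramid(image, levels=3):
    """Build guidance images at multiple resolutions directly from
    the input image (2x nearest-neighbor decimation per level),
    avoiding any domain-specific pre-processing such as edge
    detection. Hypothetical sketch."""
    pyramid = [image]
    for _ in range(levels - 1):
        pyramid.append(pyramid[-1][::2, ::2])  # halve H and W
    return pyramid

pyr = guidance_pyramid(np.zeros((64, 64, 3)))
print([g.shape for g in pyr])  # [(64, 64, 3), (32, 32, 3), (16, 16, 3)]
```

The appeal of this choice is that the guidance stays aligned with the high-resolution input by construction, with no separate edge-extraction step to tune.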