2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition
DOI: 10.1109/cvpr.2018.00942
Learning Intrinsic Image Decomposition from Watching the World

Abstract: Single-view intrinsic image decomposition is a highly ill-posed problem, and so a promising approach is to learn from large amounts of data. However, it is difficult to collect ground truth training data at scale for intrinsic images. In this paper, we explore a different approach to learning intrinsic images: observing image sequences over time depicting the same scene under changing illumination, and learning single-view decompositions that are consistent with these changes. This approach allows us to learn …
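The key idea in the abstract — that the scene's reflectance stays fixed while illumination changes across frames — can be illustrated with a minimal sketch of a reflectance (albedo) consistency penalty. This is an illustrative toy, not the paper's actual loss formulation; the function name and the L1 form are assumptions for exposition.

```python
import numpy as np

def reflectance_consistency_loss(albedo_a, albedo_b):
    """Mean L1 difference between albedo maps predicted for two frames
    of the same scene under different illumination. Because reflectance
    is a property of the scene, not the lighting, the two predictions
    should agree; only the shading components may differ."""
    return float(np.mean(np.abs(albedo_a - albedo_b)))

# Toy example: two H x W x 3 albedo predictions for the same scene.
rng = np.random.default_rng(0)
albedo_t0 = rng.random((4, 4, 3))
albedo_t1 = albedo_t0 + 0.01 * rng.standard_normal((4, 4, 3))
loss = reflectance_consistency_loss(albedo_t0, albedo_t1)
```

In training, a term like this would be combined with an image-reconstruction loss (albedo times shading should reproduce each input frame), which is the combination the citing works below refer to.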

Cited by 146 publications (154 citation statements)
References 38 publications
“…Third, our network could benefit from losses used in training intrinsic image decomposition networks. For example, if we added the timelapse dataset of [33] to our training, we could incorporate their reflectance consistency loss to improve our albedo map estimates. Our code, trained model and inverse rendering benchmark data is available at <URL removed for review>.…”
Section: Discussion
confidence: 99%
“…Recent work either uses synthetic training data and supervised learning [7,12,20,30,37] or self-supervision/unsupervised learning. Very recently, Li et al [33] used uncontrolled time-lapse images allowing them to combine an image reconstruction loss with reflectance consistency between frames. This work was further extended using photorealistic, synthetic training data [32].…”
Section: Related Work
confidence: 99%
“…It is also possible to extract illumination, material and geometry information from time-lapse videos as shown in previous methods [23,29,17]. Most recently, Li and Snavely [20] proposed to learn singleview intrinsic image decomposition from time-lapse videos in the wild without ground truth data. We draw inspirations from this line of research and propose to learn a generative model for the time-lapse video synthesis.…”
Section: Related Work
confidence: 99%