2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
DOI: 10.1109/cvpr42600.2020.00149
Seeing the World in a Bag of Chips

Cited by 32 publications (17 citation statements)
References 63 publications
“…Most methods that recover factorized full 3D models for relighting and view synthesis rely on additional observations instead of strong priors. A common strategy is to use 3D geometry obtained from active scanning [Guo et al. 2019; Lensch et al. 2003; Park et al. 2020; Schmitt et al. 2020; Zhang et al. 2020], proxy models [Dong et al. 2014; Gao et al. 2020; Georgoulis et al. 2015; Sato et al. 2003], silhouette masks [Godard et al. 2015; Oxholm and Nishino 2014; Xia et al. 2016], or multi-view stereo (followed by surface reconstruction and meshing) [Goel et al. 2020; Laffont et al. 2012; Nam et al. 2018] as a starting point before recovering reflectance and refined geometry. In this work, we show that starting with geometry estimated using a state-of-the-art neural volumetric representation enables us to recover a fully-factorized 3D model using only images captured under one illumination, without requiring any additional observations.…”
Section: Related Work
confidence: 99%
“…Geometry adjustment is modeled from an initial depth fusion through a linear combination of surface normals, which can inflate or deflate the surface. Park et al. [31] model interreflection and Fresnel reflectance in their learning-based recovery of scene properties from RGBD imagery, via surface light field and specular reflectance map reconstructions. Both approaches assume accurate geometry initialization, while we include results on reconstructions from coarser initializations.…”
Section: Geometry and Materials Reconstruction
confidence: 99%
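The inflate/deflate adjustment mentioned in this statement can be illustrated by displacing vertices along their surface normals. This is a hypothetical sketch of that general idea only; the function name and the per-vertex offset parameterization are assumptions, not the cited method's exact formulation:

```python
import numpy as np

def adjust_surface(vertices, normals, offsets):
    """Displace each vertex along its unit surface normal.

    Positive offsets inflate the surface outward; negative offsets
    deflate it inward. A hypothetical sketch, not the exact linear
    combination used by the cited method.
    """
    return vertices + offsets[:, None] * normals

# Two vertices on a flat patch, both with normal +z.
verts = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
norms = np.array([[0.0, 0.0, 1.0], [0.0, 0.0, 1.0]])
offs = np.array([0.1, -0.1])  # inflate the first vertex, deflate the second
print(adjust_surface(verts, norms, offs))
```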
“…Specular highlights occur when the surface normal lies halfway between the light source direction and the viewing direction. In most cases, specular highlights are nuisances that create undesired artifacts in the image; however, they can also be useful for determining the direction of the light source within the environment, as in photometric stereo or surface light field reconstruction [1].…”
Section: Introduction
confidence: 99%
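The half-vector condition in this statement can be sketched with a minimal Blinn-Phong specular term: the highlight peaks exactly when the normal aligns with the normalized sum of the light and view directions. This is a standard textbook illustration, not code from the cited paper; the function name and shininess exponent are assumptions:

```python
import numpy as np

def normalize(v):
    return v / np.linalg.norm(v)

def blinn_phong_specular(n, l, v, shininess=64.0):
    """Blinn-Phong specular intensity.

    The highlight is maximal when the half vector h = normalize(l + v)
    coincides with the surface normal n, i.e. when n lies halfway
    between the light and view directions.
    """
    h = normalize(l + v)
    return max(np.dot(n, h), 0.0) ** shininess

n = np.array([0.0, 0.0, 1.0])              # surface normal
l = normalize(np.array([1.0, 0.0, 1.0]))   # light direction
v = normalize(np.array([-1.0, 0.0, 1.0]))  # view direction
# Here n is exactly halfway between l and v, so the term is maximal.
print(blinn_phong_specular(n, l, v))  # 1.0
```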
“…as SunCG [5] lack sufficient model quality to recover specular highlights. Therefore, we propose the LIGHTS Dataset, constructed from high-quality architectural 3D models with variation in lighting design and rendering parameters to create near photo-realistic scenes (see Fig. 1).…”
Section: Introduction
confidence: 99%