2020 International Conference on 3D Vision (3DV) 2020
DOI: 10.1109/3dv50981.2020.00129
Shape from Tracing: Towards Reconstructing 3D Object Geometry and SVBRDF Material from Images via Differentiable Path Tracing

Cited by 9 publications (3 citation statements) · References 27 publications
“…Our approach could speed up these optimization methods by providing an initialization that is much closer to the end result than the random material maps that are typically used. Methods that also optimize for geometry have so far been limited to convex, isolated objects, often with specific lighting/capture constraints [15,16,17,18,19,20]. In contrast, we take multiple unconstrained sparse viewpoints of the scene, resulting in a variable number of observations for different scene regions and complex visibility issues due to inexact geometry.…”
Section: Optimization-based Materials Capture
confidence: 99%
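The statement above refers to optimization-based material capture: starting from some material estimate, render the scene, compare against observations, and descend the gradient of the photometric error. The following is a minimal toy sketch of that loop (not the cited paper's method), assuming a single Lambertian albedo, a fixed directional light, and known surface normals:

```python
import numpy as np

# Toy illustration of optimization-based material capture:
# recover a scalar Lambertian albedo from shaded observations
# by gradient descent on a mean-squared photometric error.

rng = np.random.default_rng(0)
normals = rng.normal(size=(100, 3))
normals /= np.linalg.norm(normals, axis=1, keepdims=True)

light = np.array([0.0, 0.0, 1.0])                 # fixed directional light
shading = np.clip(normals @ light, 0.0, None)     # per-sample cosine term

true_albedo = 0.7
observed = true_albedo * shading                  # synthetic "captured" images

albedo = 0.1   # poor initialization, standing in for the random material maps
lr = 0.5
for _ in range(200):
    residual = albedo * shading - observed
    grad = 2.0 * np.mean(residual * shading)      # d(MSE)/d(albedo)
    albedo -= lr * grad                            # gradient-descent update

print(round(albedo, 3))  # converges to ~0.7
```

A better initialization (e.g. one predicted by a network, as the citing work proposes) simply starts `albedo` closer to the optimum, so the same loop needs far fewer iterations; real systems optimize full SVBRDF texture maps through a differentiable renderer instead of a single scalar.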
“…Most methods that recover factorized full 3D models for relighting and view synthesis rely on additional observations instead of strong priors. A common strategy is to use 3D geometry obtained from active scanning [Guo et al. 2019; Lensch et al. 2003; Park et al. 2020; Schmitt et al. 2020; Zhang et al. 2020], proxy models [Dong et al. 2014; Gao et al. 2020; Georgoulis et al. 2015; Sato et al. 2003], silhouette masks [Godard et al. 2015; Oxholm and Nishino 2014; Xia et al. 2016], or multi-view stereo (followed by surface reconstruction and meshing) [Goel et al. 2020; Laffont et al. 2012; Nam et al. 2018] as a starting point before recovering reflectance and refined geometry. In this work, we show that starting with geometry estimated using a state-of-the-art neural volumetric representation enables us to recover a fully factorized 3D model using only images captured under one illumination, without requiring any additional observations.…”
Section: Related Work
confidence: 99%
“…These works perform inverse rendering from real image collections without supervision, but may fail to capture complex material and lighting effects; in contrast, our method models these directly. Several techniques also try to handle more photorealistic effects but typically require complex capture settings, such as controllable lighting [28,29], a co-located camera-flashlight setup [5,6,8,13,34,38,41,48], or densely captured multi-view images [7,14,55,60] with additional known lighting [19] or hand-crafted inductive labels [43]. In our work, we propose a hybrid differentiable renderer and learn to disentangle complex specular effects given a single image.…”
Section: Related Work
confidence: 99%