2021
DOI: 10.48550/arxiv.2106.12052
Preprint
Volume Rendering of Neural Implicit Surfaces

Abstract: Neural volume rendering has recently become increasingly popular due to its success in synthesizing novel views of a scene from a sparse set of input images. So far, the geometry learned by neural volume rendering techniques has been modeled using a generic density function. Furthermore, the geometry itself has been extracted at an arbitrary level set of the density function, leading to a noisy, often low-fidelity reconstruction. The goal of this paper is to improve geometry representation and reconstruction in neural vol…
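The core idea the abstract alludes to is replacing the generic density field with a density derived from a signed distance function (SDF), so the surface is a well-defined zero level set rather than an arbitrary density threshold. A minimal sketch of that mapping, assuming the Laplace-CDF parameterization proposed in the paper (the parameter values here are illustrative; in the paper α and β are learnable):

```python
import math

def sdf_to_density(s, alpha=100.0, beta=0.01):
    """Map a signed distance value s to volume density via
    sigma(x) = alpha * Psi_beta(-s), where Psi_beta is the CDF of a
    zero-mean Laplace distribution with scale beta.
    Convention: s < 0 inside the surface, s > 0 outside.
    Inside, density saturates toward alpha; outside, it decays to zero;
    exactly on the surface (s = 0) it equals alpha / 2."""
    if s <= 0:
        # Psi_beta(-s) for -s >= 0
        psi = 1.0 - 0.5 * math.exp(s / beta)
    else:
        # Psi_beta(-s) for -s < 0
        psi = 0.5 * math.exp(-s / beta)
    return alpha * psi

# Density falls off smoothly across the surface; beta controls sharpness.
for s in (-0.05, 0.0, 0.05):
    print(s, sdf_to_density(s))
```

As β shrinks, the density approaches a hard inside/outside indicator scaled by α, which is what ties the volume-rendering weights to a crisp, extractable surface.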

Cited by 16 publications (26 citation statements)
References 35 publications
“…More recently, by leveraging an implicit representation in the form of a Neural Radiance Field, showed the ability to model complex geometries and illumination from images. There has since been a flurry of impressive work to further push the boundaries of these representations and allow modeling deformation (Pumarola et al, 2020;Park et al, 2021), lighting variation (Martin-Brualla et al, 2021), and similar to ours, leveraging insights from surface rendering to model radiance (Yariv et al, 2020;Boss et al, 2021;Oechsle et al, 2021;Srinivasan et al, 2021;Zhang et al, 2021b;Wang et al, 2021a;Yariv et al, 2021). However, unlike our approach which can efficiently learn from a sparse set of images with coarse cameras, these approaches rely on a dense set of multi-view images with precise camera localization to recover a coherent 3D structure of the scene.…”
Section: Related Work
confidence: 84%
“…Our solution to the first problem is to add a parsing branch. Since the portrait parts of the same semantic category share similar motion patterns and texture information, it will be beneficial for the appearance and geometry learning in NeRF, which is also proven in recent implicit representation studies [73][74][75]81]. As shown in Fig.…”
Section: Semantic-aware Dynamic Ray Sampling
confidence: 91%
“…DS-NeRF [11] shows that depth supervision can help NeRF train faster with fewer input views. Moreover, numerous works [24,38,52,58] show that despite the highquality color rendering, NeRF has difficulty reconstructing 3D geometry and surface normals. Accordingly, for training samples coming from datasets with ground truth depths, we also output the predicted depth d for each ray and supervise it if the ground truth depth of that pixel is available:…”
Section: Loss Functions
confidence: 99%
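The equation the last excerpt introduces is cut off in the snippet, but depth supervision of this kind is typically a masked penalty between each ray's rendered depth and the ground-truth depth, applied only where ground truth exists. A minimal sketch under that assumption (function and variable names are my own, not from the cited paper):

```python
def depth_loss(pred_depth, gt_depth, valid_mask):
    """Masked mean-squared depth supervision: penalize the rendered
    depth of each ray only at pixels where a ground-truth depth is
    available (hypothetical formulation; the exact equation is
    truncated in the excerpt above)."""
    total_err = 0.0
    count = 0
    for d_hat, d, ok in zip(pred_depth, gt_depth, valid_mask):
        if ok:
            total_err += (d_hat - d) ** 2
            count += 1
    # Average over supervised rays; zero if no pixel has ground truth.
    return total_err / max(count, 1)

# Third ray has no ground-truth depth, so only the first two contribute.
print(depth_loss([1.0, 2.0, 3.0], [1.5, 2.0, 0.0],
                 [True, True, False]))  # → 0.125
```

The predicted depth per ray would itself come from the volume renderer, e.g. the expectation of sample depths under the rendering weights, so this term directly constrains the learned geometry rather than only the rendered color.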