2020
DOI: 10.48550/arxiv.2010.13938
Preprint

Neural Unsigned Distance Fields for Implicit Function Learning

Abstract: Figure 1: Our method can represent and reconstruct complex open surfaces. Given a sparse test point cloud of a captured room (left), it generates a detailed, completed scene (right).


Cited by 10 publications (22 citation statements)
References 48 publications
“…We present a simple approach to PC extraction from DDFs, though it cannot guarantee uniform sampling over the shape. Analogous to prior work [5], we recall that q(p, v)…”
Section: Single-Image 3D Reconstruction
confidence: 99%
“…Implicit Shape Representations Our work is most similar to distance field representations of shape, which have a long history in computer vision [60], most recently culminating in signed and unsigned distance fields (S/UDFs) [5,51,74]. In comparison to explicit representations, implicit shapes can capture arbitrary topologies with high fidelity.…”
Section: Related Work
confidence: 99%
“…Deep neural networks for computational science. While deep neural networks were originally designed for tackling problems of regression or classification, their use has now been extended to deal with different sorts of problems in computational sciences and applications. Deep neural networks can be trained for computing the SDF to a given point-cloud or triangle mesh [1,2,3,7,11,9,22,21,24]. All these methods differ in how they model the loss function or the network architecture.…”
Section: Related Work
confidence: 99%
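The statement above describes training a network to regress the distance field of a given point cloud. As a hedged sketch (not the cited papers' actual pipelines), the regression targets for such training are typically the nearest-neighbour distances from sampled query points to the point cloud; the function name `udf_targets` and the toy half-circle data are illustrative, not from the source:

```python
import numpy as np
from scipy.spatial import cKDTree

def udf_targets(point_cloud, queries):
    """Unsigned distance from each query point to the point-cloud samples.
    This nearest-neighbour distance is a common regression target when
    training a network f(x) to approximate the (unsigned) distance field."""
    tree = cKDTree(point_cloud)      # spatial index over surface samples
    dist, _ = tree.query(queries)    # nearest-neighbour distance per query
    return dist

# Toy example: samples on a half circle, a 2D analogue of an open surface,
# which an *unsigned* field can represent but a signed inside/outside cannot.
theta = np.linspace(0.0, np.pi, 200)
pc = np.stack([np.cos(theta), np.sin(theta)], axis=1)

q = np.array([[0.0, 0.0],    # centre: distance ~1 to the curve
              [0.0, 1.0]])   # on the curve: distance ~0
d = udf_targets(pc, q)
```

These target distances would then be regressed with an L1 or L2 loss; the loss and architecture choices are exactly where the methods cited in the statement differ.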
“…It is possible to add other constraints to the loss (7) or the loss (8). For example, if necessary, one could prevent eventual extra zero level-sets away from the surface by adding a term such as…”
Section: Loss Functions
confidence: 99%
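The exact penalty term is elided in the quote above. One common choice for such a term (used, for example, in Eikonal-style reconstruction losses) is an exponential bump that is large only where the predicted field is near zero at off-surface samples; this is an illustrative stand-in, not the quoted paper's formula, and the function name and α value are assumptions:

```python
import numpy as np

def spurious_zero_penalty(f_vals, alpha=100.0):
    """Penalise near-zero field predictions at off-surface sample points.
    exp(-alpha * |f|) ~ 1 where f ~ 0 and decays rapidly elsewhere, so
    minimising its mean discourages extra zero level-sets away from the
    surface without constraining the field's values near the surface."""
    return np.mean(np.exp(-alpha * np.abs(f_vals)))

# Predicted distances at points sampled away from the surface:
off_surface = np.array([0.0, 0.5, 1.0])
pen = spurious_zero_penalty(off_surface)
# Only the spurious zero (f = 0.0) contributes noticeably to the penalty.
```

Added with a small weight to the main loss, this term pushes the network away from placing zero crossings at off-surface sample locations.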