2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition
DOI: 10.1109/cvpr.2018.00308
Deep Marching Cubes: Learning Explicit Surface Representations

Cited by 242 publications (182 citation statements)
References 30 publications
“…There have been a variety of 3D shape representations for deep learning of shapes, such as voxel grids [10,15,30,48,49], octrees [19,38,43,46,47], multi-view images [4,31,42], point clouds [1,13,14,35,36,50,51], geometry images [40,41], deformable mesh/patches [17,41,45,50], and part-based structural graphs [29,52]. To the best of our knowledge, our work is the first to introduce a deep network for learning implicit fields for generative shape modeling 1 .…”
Section: Related Work (mentioning)
confidence: 99%
“…A natural limitation of these approaches is the lack of surface connectivity in the representation. To address this limitation, [12,21,29,40] proposed to directly learn 3D meshes.…”
Section: 3D Reconstruction (mentioning)
confidence: 99%
“…In the last decade, major breakthroughs in shape extraction were due to deep neural networks coupled with the abundance of visual data. Recent works focus on learning 3D reconstruction using 2.5D [14,16,24,43], volumetric [7,11,13,18,30,42], mesh [12,21] and point cloud [10,27] representations. However, none of the above are sufficiently parsimonious or interpretable to allow for higher-level 3D scene understanding as required by intelligent systems.…”
Section: Introduction (mentioning)
confidence: 99%
“…As such, the whole pipeline cannot be trained end-to-end. To overcome this limitation, Liao et al [46] introduced the Deep Marching Cubes, an end-to-end trainable network, which predicts explicit surface representations of arbitrary topology. They use a modified differentiable representation, which separates the mesh topology from the geometry.…”
Section: Deep Marching Cubes (mentioning)
confidence: 99%
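The statement above describes the core idea of Deep Marching Cubes: a single trainable network emits two separate, differentiable outputs, a topology signal (per-voxel occupancy probabilities) and a geometry signal (vertex positions along grid edges), so both can be supervised end-to-end. The sketch below is a minimal illustration of that separation, not the authors' actual architecture; the module name `TopologyGeometryHead`, the layer sizes, and the use of a plain PyTorch 3D convolutional trunk are all assumptions made for the example.

```python
# Minimal sketch (assumed, not the authors' code): one shared trunk feeding
# two branches, so topology and geometry are predicted separately but
# remain jointly differentiable for end-to-end training.
import torch
import torch.nn as nn

class TopologyGeometryHead(nn.Module):
    def __init__(self, in_channels=32):
        super().__init__()
        # shared 3D feature trunk (hypothetical size)
        self.trunk = nn.Sequential(
            nn.Conv3d(in_channels, 32, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
        )
        # topology branch: occupancy probability per voxel
        self.occupancy = nn.Conv3d(32, 1, kernel_size=1)
        # geometry branch: one vertex offset per voxel along each axis
        self.offsets = nn.Conv3d(32, 3, kernel_size=1)

    def forward(self, feat):  # feat: (B, C, D, H, W)
        h = self.trunk(feat)
        occ = torch.sigmoid(self.occupancy(h))  # topology, in [0, 1]
        off = torch.sigmoid(self.offsets(h))    # edge-relative vertex positions, in [0, 1]
        return occ, off

# Usage: both outputs are differentiable w.r.t. the input features, so
# topology and geometry losses can be backpropagated through one network.
occ, off = TopologyGeometryHead()(torch.randn(1, 32, 16, 16, 16))
```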