2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
DOI: 10.1109/cvpr42600.2020.00765
DualSDF: Semantic Shape Manipulation Using a Two-Level Representation

Cited by 93 publications (66 citation statements)
References 33 publications
“…Some methods learn to decompose the shape into local parts automatically, and represent the parts with CSG primitives [32], superquadrics [28] or 3D Gaussians [12,13]. DualSDF [18] expresses shapes at two levels of granularity, one capturing fine details and the other representing an abstracted proxy shape. Another set of work seeks to divide the 3D space into local patches.…”
Section: Related Work (mentioning, confidence: 99%)
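The two-level idea quoted above can be made concrete with a small sketch. The code below is a minimal, hypothetical PyTorch illustration (class, method, and parameter names are my own, not taken from the DualSDF code): a coarse decoder emits sphere primitives as the abstracted proxy shape, and a fine decoder is an MLP mapping a shared latent code and a query point to a signed distance.

import torch
import torch.nn as nn

# Minimal sketch of a two-level SDF representation in the spirit of the
# quoted description; all names and sizes here are illustrative assumptions.
class TwoLevelSDF(nn.Module):
    def __init__(self, latent_dim=128, num_spheres=256, hidden=512):
        super().__init__()
        # Coarse level: latent code -> sphere parameters (center xyz + radius),
        # giving an abstracted proxy shape.
        self.coarse = nn.Linear(latent_dim, num_spheres * 4)
        # Fine level: MLP mapping (latent code, query point) -> signed distance,
        # capturing fine surface detail.
        self.fine = nn.Sequential(
            nn.Linear(latent_dim + 3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def coarse_sdf(self, z, x):
        # z: (B, latent_dim) latent codes, x: (B, N, 3) query points.
        params = self.coarse(z).view(z.shape[0], -1, 4)
        centers, radii = params[..., :3], params[..., 3]
        # SDF of a union of spheres: distance to the nearest sphere surface.
        d = torch.cdist(x, centers) - radii.unsqueeze(1)
        return d.min(dim=-1).values

    def fine_sdf(self, z, x):
        # Condition every query point on the same shared latent code.
        z_exp = z.unsqueeze(1).expand(-1, x.shape[1], -1)
        return self.fine(torch.cat([z_exp, x], dim=-1)).squeeze(-1)

Because both decoders read the same latent code, an edit applied to a coarse primitive can be propagated to the detailed surface by optimizing that code, which is the manipulation mechanism the paper's title refers to.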
“…We need to constrain the generator such that the location of the isosurface is within the spherical projection layer, and ideally approximates the desired shape. Methods using geometric initializations [3], meta-learning [47], and variational methods [20] have been proposed to stabilize training and accelerate convergence. We choose to regularize our generator with a variational autodecoder (VAD) [20] loss alongside our adversarial criterion. This VAD-GAN setup preserves the generator+discriminator structure of a generative adversarial network (GAN) during training, only requiring the addition of a light-weight embedding layer to the model, while significantly outperforming the original VAD-only objective.…”
Section: Overall Model (mentioning, confidence: 99%)
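The VAD regularizer described in this excerpt can also be sketched. Below is a minimal, hypothetical PyTorch version (names and hyperparameters are assumptions, not the citing paper's code): the "light-weight embedding layer" stores a per-shape Gaussian posterior, latent codes are sampled with the reparameterization trick, and a KL term against a unit Gaussian prior is added alongside the adversarial objective.

import torch
import torch.nn as nn

# Minimal sketch of a variational auto-decoder (VAD) code bank; the class
# name and settings are illustrative assumptions.
class VADCodes(nn.Module):
    def __init__(self, num_shapes, latent_dim=128):
        super().__init__()
        # Per-shape posterior parameters: the only parameters added on top
        # of the usual generator + discriminator pair.
        self.mu = nn.Embedding(num_shapes, latent_dim)
        self.logvar = nn.Embedding(num_shapes, latent_dim)
        nn.init.zeros_(self.mu.weight)
        nn.init.zeros_(self.logvar.weight)

    def forward(self, idx):
        mu, logvar = self.mu(idx), self.logvar(idx)
        # Reparameterization trick: z = mu + sigma * eps.
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)
        # KL divergence of each posterior from the unit Gaussian prior.
        kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
        return z, kl

A hypothetical generator step would then combine the terms, e.g. loss = recon(decoder(z, pts), sdf_gt) + beta * kl + adversarial_term, so the sampled codes stay close to the prior for the GAN while anchoring the generator to the training shapes.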
“…There has been considerable interest in learning-based methods for signed distance field modelling, and lately this is especially true in the area of deep learning [2,9,3,4,10,16,17,8,11].…”
Section: Experiments on Learned SDBs (mentioning, confidence: 99%)
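As a usage-side illustration of such learned signed distance fields, the following is a minimal, hypothetical sphere-tracing routine (the function name, signature, and step budget are my own assumptions): because a valid SDF reports the distance to the nearest surface, each ray can safely advance by the predicted value until it converges to the zero level set.

import torch

# Minimal sketch of sphere tracing against a learned SDF; sdf_fn is any
# callable mapping (N, 3) points to (N,) signed distances.
def sphere_trace(sdf_fn, origins, dirs, num_steps=64, eps=1e-4):
    # origins, dirs: (N, 3) ray origins and unit directions.
    t = torch.zeros(origins.shape[0], 1, device=origins.device)
    for _ in range(num_steps):
        pts = origins + t * dirs
        d = sdf_fn(pts).unsqueeze(-1)  # predicted distance to the surface
        t = t + d                      # safe step for an exact SDF; learned SDFs
                                       # may violate this, so eps matters in practice
        if (d.abs() < eps).all():      # all rays have reached the level set
            break
    return origins + t * dirs          # approximate surface intersection points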