2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
DOI: 10.1109/cvpr42600.2020.00937
Unsupervised Deep Shape Descriptor With Point Distribution Learning

Cited by 15 publications (29 citation statements) | References 31 publications
“…Unsupervised methods for learning shape descriptors follow two major lines of research, with the first line leveraging generative models such as autoencoders [Girdhar et al., 2016, Sharma et al., 2016, Yang et al., 2018] or generative adversarial networks (GANs) [Wu et al., 2016, Achlioptas et al., 2018, Han et al., 2019], and the second line focusing on probabilistic models [Xie et al., 2018, Shi et al., 2020]. Autoencoder-based approaches focus either on adding additional supervision to the latent space via 2D predictability [Girdhar et al., 2016], adding de-noising [Sharma et al., 2016], or improving the decoder using a folding-inspired architecture [Yang et al., 2018].…”
Section: Related Work
confidence: 99%
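A folding-inspired decoder of the kind mentioned in the excerpt above can be sketched briefly. The block below is a minimal PyTorch illustration, assuming a two-stage folding of a fixed 2D grid into 3D points; the grid resolution, layer widths, and class name are illustrative choices, not the configuration used by Yang et al. [2018].

```python
import torch
import torch.nn as nn

class FoldingDecoder(nn.Module):
    """Illustrative folding-style decoder: deform a fixed 2D grid into a 3D point cloud."""
    def __init__(self, code_dim=512, grid_size=45):
        super().__init__()
        # Fixed 2D grid in [-1, 1]^2; one grid seed per output point.
        lin = torch.linspace(-1.0, 1.0, grid_size)
        u, v = torch.meshgrid(lin, lin, indexing="ij")
        self.register_buffer("grid", torch.stack([u, v], dim=-1).reshape(-1, 2))
        # Two shared MLPs applied point-wise ("foldings").
        self.fold1 = nn.Sequential(
            nn.Linear(code_dim + 2, 512), nn.ReLU(),
            nn.Linear(512, 512), nn.ReLU(),
            nn.Linear(512, 3),
        )
        self.fold2 = nn.Sequential(
            nn.Linear(code_dim + 3, 512), nn.ReLU(),
            nn.Linear(512, 512), nn.ReLU(),
            nn.Linear(512, 3),
        )

    def forward(self, code):
        # code: (B, code_dim) -> points: (B, grid_size**2, 3)
        b, n = code.shape[0], self.grid.shape[0]
        code_rep = code.unsqueeze(1).expand(b, n, code.shape[1])
        grid = self.grid.unsqueeze(0).expand(b, n, 2)
        mid = self.fold1(torch.cat([code_rep, grid], dim=-1))   # first folding: grid -> 3D
        return self.fold2(torch.cat([code_rep, mid], dim=-1))   # second folding: refine

# Usage: decode a batch of two shape codewords into 45*45 = 2025 points each.
points = FoldingDecoder()(torch.randn(2, 512))
print(points.shape)  # torch.Size([2, 2025, 3])
```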
“…GAN-based approaches leverage either an additional VAE structure [Wu et al., 2016], pre-training via earth mover's or Chamfer distance [Achlioptas et al., 2018], or inter-view prediction as a pretext task [Han et al., 2019]. For probabilistic methods, Xie et al. [2018] proposes an energy-based convolutional network trained with Markov chain Monte Carlo methods such as Langevin dynamics, and Shi et al. [2020] proposes to model point clouds using a Gaussian distribution for each point. Of these approaches, only Shi et al. [2020] focuses on producing robust representations.…”
Section: Related Work
confidence: 99%
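The per-point Gaussian model attributed to Shi et al. [2020] in the excerpt above can be illustrated with a short sketch, in the same language as the previous example. It assumes a fixed isotropic variance, a one-to-one pairing between observed points and their means, and illustrative function names; it is not the paper's exact formulation or training objective.

```python
import math
import torch

def per_point_gaussian_loglik(points, means, sigma=0.05):
    """Log-likelihood of observed points under one isotropic Gaussian per point.

    points, means: (B, N, 3) tensors with a one-to-one point/mean correspondence.
    Returns a (B,) tensor with the summed log-likelihood per cloud.
    """
    d = points.shape[-1]
    sq_dist = ((points - means) ** 2).sum(dim=-1)                 # (B, N)
    log_norm = 0.5 * d * math.log(2.0 * math.pi * sigma ** 2)
    return (-0.5 * sq_dist / sigma ** 2 - log_norm).sum(dim=-1)

def sample_noisy_cloud(means, sigma=0.05):
    """Draw one sample per Gaussian, i.e. a noise-perturbed version of the clean cloud."""
    return means + sigma * torch.randn_like(means)

# Usage: the log-likelihood of a decoded cloud acts as a probabilistic reconstruction
# score that tolerates small per-point perturbations.
clean = torch.rand(2, 1024, 3)
noisy = sample_noisy_cloud(clean)
print(per_point_gaussian_loglik(noisy, clean).shape)  # torch.Size([2])
```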