2019
DOI: 10.48550/arxiv.1901.09394
Preprint

NeuralSampler: Euclidean Point Cloud Auto-Encoder and Sampler

Edoardo Remelli,
Pierre Baque,
Pascal Fua

Abstract: Most algorithms that rely on deep learning-based approaches to generate 3D point sets can only produce clouds containing a fixed number of points. Furthermore, they typically require large networks parameterized by many weights, which makes them hard to train. In this paper, we propose an auto-encoder architecture that can both encode and decode clouds of arbitrary size and demonstrate its effectiveness at upsampling sparse point clouds. Interestingly, we can do so using less than half as many parameters as state…
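The key idea in the abstract, that a single model can encode and decode clouds of arbitrary size, can be illustrated with a minimal sketch: a shared per-point MLP followed by a symmetric max-pool gives a fixed-length code for any number of input points, and a seed-conditioned decoder can emit as many output points as requested. This is only an illustrative toy with random (hypothetical) weights, not the paper's actual NeuralSampler architecture; the function names and layer sizes are assumptions for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy weights; a real model would learn these by training.
W1 = rng.standard_normal((3, 16)) * 0.1      # per-point MLP, layer 1
W2 = rng.standard_normal((16, 8)) * 0.1      # per-point MLP, layer 2
W3 = rng.standard_normal((8 + 1, 3)) * 0.1   # decoder: [code, seed] -> xyz

def encode(points):
    """Map an (N, 3) cloud of ANY size N to a fixed-length code."""
    h = np.maximum(points @ W1, 0.0)   # shared weights applied per point
    f = np.maximum(h @ W2, 0.0)
    return f.max(axis=0)               # symmetric max-pool: size-invariant

def decode(code, n_points):
    """Map a code to an (n_points, 3) cloud; n_points is chosen freely."""
    seeds = rng.uniform(-1.0, 1.0, size=(n_points, 1))
    inp = np.concatenate([np.tile(code, (n_points, 1)), seeds], axis=1)
    return inp @ W3

def chamfer(a, b):
    """Symmetric Chamfer distance, a common point-set reconstruction loss."""
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=2)
    return d.min(axis=1).mean() + d.min(axis=0).mean()

# A sparse input cloud can be decoded at higher resolution (upsampling).
sparse = rng.standard_normal((64, 3))
dense = decode(encode(sparse), 256)
```

Because the encoder pools over the point dimension and the decoder is driven by sampled seeds, neither side hard-codes a cloud size, which is the property the abstract highlights.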

Cited by 1 publication
(2 citation statements)
References 17 publications
“…Other unsupervised and self-supervised tasks, such as point cloud representation, also benefited from recent deep learning advances. The representation learning aims to learn a data representation that facilitates other downstream tasks (Bengio et al, 2013), such as point cloud classification (Achlioptas et al, 2018;Hassani & Haley, 2019;Jiang et al, 2021;Rao et al, 2020;Remelli et al, 2019), segmentation (Hassani & Haley, 2019), semantic segmentation (Bachmann et al, 2021;Jiang et al, 2021), clustering (Remelli et al, 2019;Zamorski et al, 2020), up-sampling (Remelli et al, 2019) and reconstruction (Achlioptas et al, 2018;Bachmann et al, 2021;Zamorski et al, 2020). To address the point cloud representation learning task, these works usually combine an encoder with a decoder that aims to reconstruct the input, and, as a consequence, the encoder learns meaningful features to compactly represent the input point cloud.…”
Section: Deep Learning-based Approaches
confidence: 99%
“…With the recent advances in the deep learning field of research, especially geometric deep learning, several geometric tasks obtained significant improvement, such as shape registration (Hanocka et al, 2018), and point cloud processing. In the context of point cloud processing, several tasks benefited from these recent advances, such as point cloud classification and segmentation (Qi et al, 2017a;Qi et al, 2017b), primitive classification and fitting (Li et al, 2019), point cloud registration (Aoki et al, 2019), surface mesh reconstruction (Hanocka et al, 2020), and point cloud representation and clustering (Hassani & Haley, 2019;Rao et al, 2020;Remelli et al, 2019;Zamorski et al, 2020). In spite of these advances, when it comes to the CAD domain, some limitations are found in these existing techniques, such as being specialized to work with primitive geometries or considering only uniform scale transformations, both of which limit the applicability to general 3D CAD models.…”
Section: Introduction
confidence: 99%