2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
DOI: 10.1109/cvpr.2019.00110

3D Point Capsule Networks

Abstract: In this paper, we propose 3D point-capsule networks, an auto-encoder designed to process sparse 3D point clouds while preserving spatial arrangements of the input data. 3D capsule networks arise as a direct consequence of our novel unified 3D auto-encoder formulation. Their dynamic routing scheme [30] and the peculiar 2D latent space deployed by our approach bring in improvements for several common point cloud-related tasks, such as object classification, object reconstruction and part segmentation as substant…
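The dynamic routing scheme the abstract refers to [30] is routing-by-agreement as introduced for capsule networks by Sabour et al. A minimal NumPy sketch of that procedure follows; the capsule counts, dimensions, and iteration count are illustrative assumptions, not values taken from this paper.

```python
import numpy as np

def squash(s, axis=-1, eps=1e-8):
    # Squash nonlinearity: short vectors shrink toward 0,
    # long vectors approach (but never reach) unit length.
    norm_sq = np.sum(s * s, axis=axis, keepdims=True)
    norm = np.sqrt(norm_sq + eps)
    return (norm_sq / (1.0 + norm_sq)) * (s / norm)

def dynamic_routing(u_hat, num_iters=3):
    # u_hat: prediction vectors from lower-level capsules,
    # shape (num_in, num_out, dim_out).
    num_in, num_out, _ = u_hat.shape
    b = np.zeros((num_in, num_out))  # routing logits
    for _ in range(num_iters):
        # Coupling coefficients: softmax over output capsules.
        e = np.exp(b - b.max(axis=1, keepdims=True))
        c = e / e.sum(axis=1, keepdims=True)
        # Weighted sum of predictions per output capsule.
        s = np.einsum('ij,ijk->jk', c, u_hat)
        v = squash(s)  # output capsule vectors
        # Increase logits where predictions agree with outputs.
        b = b + np.einsum('ijk,jk->ij', u_hat, v)
    return v

rng = np.random.default_rng(0)
u_hat = rng.normal(size=(8, 4, 16))  # 8 input capsules, 4 output capsules
v = dynamic_routing(u_hat)           # shape (4, 16), all norms < 1
```

Because of the squash nonlinearity, each output capsule's vector norm stays strictly below 1 and can be read as an activation probability.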

Cited by 330 publications (223 citation statements)
References 35 publications
“…AtlasNet [26] extends the FoldingNet to multiple grid patches whereas SO-Net [39] aggregates the point features into SOM node features to encode the spatial distributions. PointCapsNet [96] introduces an autoencoder based on dynamic routing to extract latent capsules and a few MLPs that generate multiple point patches from the latent capsules with distinct grids.…”
Section: Deep Learning on Point Clouds (mentioning)
confidence: 99%
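The decoder described in the quotation above, where per-capsule MLPs fold distinct 2D grids into 3D point patches, can be pictured with the following hypothetical sketch. All names, dimensions, and the randomly initialized weights are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp(x, w1, b1, w2, b2):
    # A tiny two-layer MLP standing in for the paper's patch decoders.
    h = np.tanh(x @ w1 + b1)
    return h @ w2 + b2

num_caps, cap_dim, grid_pts = 4, 16, 32
capsules = rng.normal(size=(num_caps, cap_dim))  # latent capsules

patches = []
for k in range(num_caps):
    # A distinct random 2D grid per capsule.
    grid = rng.uniform(size=(grid_pts, 2))
    # Concatenate each grid point with its capsule's latent vector.
    inp = np.concatenate([grid, np.tile(capsules[k], (grid_pts, 1))], axis=1)
    # Per-capsule weights; the MLP folds the 2D grid into a 3D patch.
    w1 = rng.normal(size=(cap_dim + 2, 64)); b1 = np.zeros(64)
    w2 = rng.normal(size=(64, 3)); b2 = np.zeros(3)
    patches.append(mlp(inp, w1, b1, w2, b2))

# The union of the per-capsule patches is the reconstructed cloud.
cloud = np.concatenate(patches, axis=0)  # shape (num_caps * grid_pts, 3)
```

The design point the citation highlights is that each latent capsule owns its own grid and decoder, so different capsules specialize in different local patches of the shape.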
“…We also report part classification accuracy. Following [96], we randomly sample 1% and 5% of the ShapeNetPart training set to evaluate the point features in a semi-supervised setting. We use the same pre-trained model to extract the point features of the sampled training data, along with validation and test samples, without any fine-tuning.…”
Section: Part Segmentation (mentioning)
confidence: 99%
“…Our evaluation is mainly against supervised methods, since there have been hardly any semi-supervised segmentation methods that take only a few exemplars and segment shapes in a whole set. An exception is the very recent work by Zhao et al [69], which used only 5% of the training data. In comparison, their IoU is 70% averaged over the shapes, while our 1-exemplar result is 73.5% even when setting all IoUs of cars to zero.…”
Section: One-shot Training vs. Supervised Methods (mentioning)
confidence: 99%
“…Recent trends in data-driven approaches have encouraged researchers to harness deep learning to surmount these nuisances. Representative works include 3DMatch [56], PPFNet [20], CGF [33], 3D-FeatNet [29], PPF-FoldNet [19] and 3D point-capsule networks [57], all outperforming the handcrafted alternatives by a large margin. While descriptors in 2D are typically complemented by the useful information of local orientation, derived from the local image appearance [37], the nature of 3D data renders the task of finding a unique and consistent local coordinate frame far more challenging [24,42].…”
Section: Related Work (mentioning)
confidence: 99%