2019 International Conference on 3D Vision (3DV)
DOI: 10.1109/3dv.2019.00017
Learning Point Embeddings from Shape Repositories for Few-Shot Segmentation

Abstract: User-generated 3D shapes in online repositories contain rich information about surfaces, primitives, and their geometric relations, often arranged in a hierarchy. We present a framework for learning representations of 3D shapes that reflect the information present in this metadata and show that it leads to improved generalization for semantic segmentation tasks. Our approach is a point embedding network that generates a vectorial representation of the 3D points such that it reflects the grouping hierarchy and…
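The abstract describes a point embedding network whose per-point vectors support few-shot segmentation downstream. As a hedged illustration of that downstream use only (not the paper's learned architecture), the sketch below uses a hypothetical fixed embedding function and labels each query point by its nearest labeled support point in embedding space, the simplest few-shot classifier:

```python
import numpy as np

def embed_points(points, W):
    # Hypothetical fixed linear-plus-tanh embedding, standing in for the
    # paper's learned point embedding network (which is trained on
    # shape-repository metadata such as part hierarchies and tags).
    return np.tanh(points @ W)

def few_shot_segment(support_pts, support_labels, query_pts, W):
    # Few-shot segmentation as 1-nearest-neighbor in embedding space:
    # each query point inherits the label of its closest support point.
    s = embed_points(support_pts, W)          # (S, D) support embeddings
    q = embed_points(query_pts, W)            # (Q, D) query embeddings
    d = ((q[:, None, :] - s[None, :, :]) ** 2).sum(-1)  # (Q, S) distances
    return support_labels[d.argmin(axis=1)]

# Toy usage with a random projection as the embedding parameters.
rng = np.random.default_rng(0)
W = rng.standard_normal((3, 8))
support = np.array([[0.0, 0.0, 0.0], [5.0, 5.0, 5.0]])
labels = np.array([0, 1])
query = np.array([[0.1, 0.0, 0.0], [4.9, 5.0, 5.0]])
pred = few_shot_segment(support, labels, query, W)
```

With a good embedding, points from the same part cluster together, so even a couple of labeled support points per part suffice, which is the premise behind evaluating such embeddings in the few-shot regime.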

Citations: cited by 14 publications (9 citation statements)
References: 24 publications (31 reference statements)
“…Several works have proposed learning point representations using noisy labels and semantic tags available from various shape repositories. Sharma et al. [42] learn point representations using noisy part hierarchies and designer-labeled semantic tags for few-shot semantic segmentation. Muralikrishnan et al. [34] design a U-Net that learns point representations which predict user-prescribed shape-level tags by first predicting an intermediate semantic segmentation.…”
Section: Related Work
confidence: 99%
“…Label-efficient representation learning on point clouds. Several recent approaches [8,20,35,46,74] have been proposed to alleviate the expensive labeling of shapes. Muralikrishnan et al. [35] learn per-point representations by training the network to predict shape-level tags.…”
Section: Approximate Convex Decompositions
confidence: 99%
“…If we further add a reconstruction term, our method achieves state-of-the-art performance in unsupervised shape classification. Finally, Sharma et al. [46] proposed learning point embeddings by utilizing noisy part labels and semantic tags freely available in a 3D warehouse dataset. The model learnt in this way is used for a few-shot semantic segmentation task.…”
Section: Approximate Convex Decompositions
confidence: 99%
“…Generating synthetic 3D point cloud data is an open area of research, with the intention of facilitating the learning of non-Euclidean point representations. In three dimensions, synthetic data may take the form of meshes, voxels, or raw point clouds, used to learn a representation that aids computer vision tasks such as classification [34,45,29,59,9], segmentation [34,45,35,55,19,56,39,51], and reconstruction [50,46,48,38,57,42]. Currently, researchers make use of point clouds sampled from the meshes of manually designed objects as synthetic training data for deep learning models [34,45,40,7].…”
Section: Introduction
confidence: 99%