2022
DOI: 10.1007/978-3-031-20086-1_38
Masked Discrimination for Self-supervised Learning on Point Clouds

Cited by 51 publications (35 citation statements)
References 43 publications
“…PointM2AE (Zhang et al., 2022b) uses a hierarchical Transformer and designs a corresponding masking strategy. MaskPoint (Liu et al., 2022) proposes to add noise points and classify whether query points come from the masked cloud or the noise. Recently, ACT (Dong et al., 2023) uses a cross-modal autoencoder as the reconstruction target to acquire dark knowledge from other modalities.…”
Section: Related Work
confidence: 99%
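The discrimination task attributed to MaskPoint above — sample query points and decide whether each is a real masked point or injected noise — can be sketched as a data-preparation step. This is a minimal illustrative helper, not the paper's implementation; the function name, mask ratio, and query count are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def discrimination_samples(points, mask_ratio=0.9, n_queries=64):
    """Split a cloud into visible/masked parts and build a binary
    point-discrimination task: label 1 for queries drawn from the
    masked (real) points, 0 for uniform noise queries.
    Hypothetical helper; names and defaults are illustrative."""
    n = points.shape[0]
    n_masked = int(n * mask_ratio)
    perm = rng.permutation(n)
    masked, visible = points[perm[:n_masked]], points[perm[n_masked:]]
    # real queries: sampled from the masked region of the cloud
    real = masked[rng.choice(n_masked, size=n_queries, replace=False)]
    # fake queries: uniform noise inside the cloud's bounding box
    lo, hi = points.min(axis=0), points.max(axis=0)
    fake = rng.uniform(lo, hi, size=(n_queries, points.shape[1]))
    queries = np.concatenate([real, fake])
    labels = np.concatenate([np.ones(n_queries), np.zeros(n_queries)])
    return visible, queries, labels
```

A discriminator head would then see only the encoding of `visible` plus the `queries`, and be trained against `labels` — avoiding any explicit coordinate reconstruction.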
“…The recent rise to prominence of ViT architectures in computer vision has inspired a series of works for indoor point cloud segmentation [29,54,60,64,65].…”
Section: ViTs for Point Cloud Segmentation
confidence: 99%
“…Annotating point clouds demands significant effort, motivating self-supervised pre-training methods. Prior approaches primarily focus on object CAD models [21,26,29,39,42,44] and indoor scenes [17,35,46]. Point-BERT [42] applies a BERT-like paradigm to point cloud recognition, while Point-MAE [26] reconstructs point patches directly, without a tokenizer.…”
Section: Related Work
confidence: 99%
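Patch reconstruction of the kind attributed to Point-MAE above is commonly supervised with a Chamfer distance between predicted and ground-truth point sets. A minimal numpy sketch of the symmetric L2 variant (illustrative, not the cited papers' code):

```python
import numpy as np

def chamfer_l2(pred, target):
    """Symmetric L2 Chamfer distance between point sets of shape
    (N, 3) and (M, 3): match each point to its nearest neighbour
    in the other set and average the squared gaps both ways."""
    # pairwise squared distances, shape (N, M)
    d2 = ((pred[:, None, :] - target[None, :, :]) ** 2).sum(axis=-1)
    return d2.min(axis=1).mean() + d2.min(axis=0).mean()
```

Because the loss is permutation-invariant over points, the decoder is free to emit patch points in any order, which is what makes it a natural target for unordered point clouds.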