2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
DOI: 10.1109/cvpr42600.2020.01207
PointGMM: A Neural GMM Network for Point Clouds

Cited by 59 publications (36 citation statements)
References 25 publications
“…Yan et al [27] proposed PointASNL to deal with noise in point cloud processing, using a self-attention mechanism to update features for local groups of points. Hertz et al [28] proposed PointGMM for shape interpolation with both multilayer perceptron (MLP) splits and attentional splits.…”
Section: Point-based Deep Learning (mentioning)
confidence: 99%
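As a rough illustration of the hierarchical-split idea referenced above (a coarse Gaussian is recursively split into finer child Gaussians), the sketch below splits one Gaussian along its principal axis. This is a minimal NumPy illustration, not the authors' network; the function name split_gaussian, the offset_scale heuristic, and the placeholder point set are assumptions.

```python
import numpy as np

def split_gaussian(mu, cov, offset_scale=0.5):
    # Hypothetical helper: split one Gaussian into two children along its
    # principal axis. Illustrates the split idea only, not PointGMM's
    # learned MLP/attentional splits.
    eigvals, eigvecs = np.linalg.eigh(cov)          # ascending eigenvalues
    axis = eigvecs[:, -1] * np.sqrt(eigvals[-1])    # scaled principal direction
    child_cov = 0.5 * cov                           # shrink children (heuristic)
    return (mu + offset_scale * axis, child_cov), (mu - offset_scale * axis, child_cov)

# Example: one coarse Gaussian fitted to a placeholder point set
points = np.random.randn(1024, 3)
mu0, cov0 = points.mean(axis=0), np.cov(points.T)
(mu_a, cov_a), (mu_b, cov_b) = split_gaussian(mu0, cov0)
print(mu_a, mu_b)
```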
“…Recently, self-attention mechanisms and Transformers have been employed in point cloud processing, including PointASNL [Yan et al 2020], PointGMM [Hertz et al 2020], PCT [Guo et al 2021], and Point Transformer [Zhao et al 2020]. We borrow from both 3D point feature learning and Transformers.…”
Section: Point Set Learning (mentioning)
confidence: 99%
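For context on what "self-attention over point features" means in these works, here is a minimal single-head self-attention sketch over per-point features. It is a generic PyTorch illustration under the assumption of one head and no learned projections; it is not the PointASNL, PCT, or Point Transformer code.

```python
import torch

def point_self_attention(feats):
    # Generic single-head self-attention over per-point features of shape (N, C).
    # Learned query/key/value projections are omitted for brevity (assumption).
    q = k = v = feats
    attn = torch.softmax(q @ k.t() / feats.shape[1] ** 0.5, dim=-1)  # (N, N) weights
    return attn @ v                                                  # (N, C) updated features

feats = torch.randn(256, 64)          # 256 points, 64-dim features
updated = point_self_attention(feats)
print(updated.shape)                  # torch.Size([256, 64])
```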
“…The pre-processed data are then used to train three different generative networks: PointGrow (Sun, Wang, Liu, Siegel, & Sarma, 2020), PointFlow (Yang et al, 2019), and PointGMM (Hertz, Hanocka, Giryes, & Cohen-Or, 2020).…”
Section: Generative Approaches For Point Cloud Generation (mentioning)
confidence: 99%
“…In this context, the aim of this paper is to propose a framework based on DL that synthetically generates additional architectural elements to increase segmentation accuracy. To generate novel scenes, we use three different generative networks: PointGrow (Sun, Wang, Liu, Siegel, & Sarma, 2020), PointFlow (Yang et al, 2019), and PointGMM (Hertz, Hanocka, Giryes, & Cohen-Or, 2020). Moreover, to compare the best performances, we train a novel Deep Neural Network, namely DGCNN-Mod, that classifies the synthetically generated scenes (Pierdicca et al, 2020).…”
Section: Introduction (mentioning)
confidence: 99%
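To make the augmentation workflow described above concrete, the sketch below fits a plain Gaussian mixture to an existing cloud and samples a synthetic one. scikit-learn's GaussianMixture is used as a stand-in for the learned generators (PointGrow, PointFlow, PointGMM); the placeholder data and component count are assumptions.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Stand-in sketch: fit a GMM to an existing point cloud and sample a synthetic
# cloud for augmentation. Not the PointGrow/PointFlow/PointGMM networks.
real_points = np.random.randn(2048, 3)                                  # placeholder scan
gmm = GaussianMixture(n_components=16, covariance_type="full").fit(real_points)
synthetic_points, _ = gmm.sample(2048)                                  # new synthetic cloud
print(synthetic_points.shape)                                           # (2048, 3)
```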