2018
DOI: 10.48550/arxiv.1801.07829
Preprint
Dynamic Graph CNN for Learning on Point Clouds

Cited by 240 publications (507 citation statements)
References 55 publications
“…Backbone architectures. We incorporate the FA framework with two existing popular point cloud network layers: i) PointNet (Qi et al, 2017a); and ii) DGCNN (Wang et al, 2018). We denote both architectures by…”
Section: Point Clouds Euclidean Motions
Citation type: mentioning (confidence: 99%)
“…We instantiate the FA framework by considering different choices of symmetry groups G, their actions on data spaces V, W (manifested by choices of group representations), and the backbone architectures (or part thereof) φ, Φ we want to make invariant/equivariant to G. We consider: (i) Multi-Layer Perceptrons (MLP), and Graph Neural Networks (GNNs) with node identification (Murphy et al, 2019; Loukas, 2020) adapted to permutation-invariant GNNs; (ii) Message-Passing GNN (Gilmer et al, 2017) adapted to be invariant/equivariant to Euclidean motions, E(d); (iii) Set networks, DeepSets and PointNet (Zaheer et al, 2017; Qi et al, 2017a), adapted to be equivariant or locally equivariant to E(d); (iv) Point cloud network, DGCNN (Wang et al, 2018), adapted to be equivariant to E(d).…”
Section: Introduction
Citation type: mentioning (confidence: 99%)
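The statement above adapts backbones like DGCNN via frame averaging (FA): a function is symmetrized by averaging it over a small, input-dependent "frame" of group elements. For the translation subgroup of E(d) the frame reduces to a single element, the centroid, so averaging amounts to centering the cloud. A minimal sketch under that assumption (the function `phi` and the helper name are hypothetical, not from the cited work):

```python
import numpy as np

def translation_frame_average(phi, points):
    """Hypothetical FA sketch for the translation subgroup of E(d):
    the frame contains one element (the centroid translation), so the
    symmetrized function evaluates phi on the centered point cloud."""
    centroid = points.mean(axis=0, keepdims=True)
    return phi(points - centroid)

# phi is any backbone stand-in; here, a simple coordinate sum.
phi = lambda p: p.sum(axis=0)
pts = np.random.default_rng(0).standard_normal((32, 3))
shifted = pts + np.array([5.0, -2.0, 1.5])  # arbitrary translation
# The averaged function gives identical outputs on translated inputs.
```

Making the full E(d) construction work additionally requires a rotation frame (e.g., from PCA of the centered coordinates), which the cited paper handles per backbone.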
“…TextureNet [36] parameterizes the room surface into local planar patches in the 4-RoSy field such that standard CNNs [7] can be applied to extract high-resolution texture information from mesh facets. Schult et al [15] applied the spatial graph convolutions of dynamic filters [31], [65], [68], [79] to the union of neighborhoods in both geodesic and Euclidean domains for vertex-wise feature learning. VMNet [80] combines the SparseConvNet [56] with graph convolutional networks to learn merged features from point clouds and meshes.…”
Section: Convolution On 3D Meshes
Citation type: mentioning (confidence: 99%)
“…[49], 3D deep learning on point clouds has stimulated the interest of researchers. Existing methods can be mainly divided into: point-based [51,40,63,9], volumetric-based [50,44], graph-based [70,36,35,66,8,27], and view-based [61,62,77] methods. However, volumetric-based and view-based methods lose fine-grained geometric information due to voxelization and projection, while graph-based methods are not suitable for sparse point clouds, since few points cannot provide sufficient local geometric information for constructing a graph.…”
Section: Related Work
Citation type: mentioning (confidence: 99%)
“…On the one hand, we exploit the max pooling combined with fully connected layers to capture global shape information. On the other hand, we adopt EdgeConv [70] to capture local geometric information of the target. After that, we augment the local feature of each point with the global shape information, yielding a new feature map of size 2048 × 2C.…”
Section: Shape-Aware Feature Learning
Citation type: mentioning (confidence: 99%)
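The EdgeConv operation referenced above (the core layer of DGCNN) can be sketched compactly: for each point, gather its k nearest neighbors, form edge features from the center point and the neighbor offsets, pass them through a shared MLP, and max-pool over neighbors. The following is a minimal NumPy sketch, not the authors' implementation; the single linear-plus-ReLU layer and the output width of 8 are illustrative assumptions:

```python
import numpy as np

def edge_conv(points, k=4, out_dim=8, seed=0):
    """Sketch of one EdgeConv layer (Wang et al., 2018).
    points: (n, d) array of point coordinates or features."""
    n, d = points.shape
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((2 * d, out_dim))      # stand-in for the shared MLP

    # Pairwise squared distances; exclude self from the neighborhood.
    diff = points[:, None, :] - points[None, :, :]
    dist = (diff ** 2).sum(-1)
    np.fill_diagonal(dist, np.inf)
    idx = np.argsort(dist, axis=1)[:, :k]          # k nearest neighbors, (n, k)

    neighbors = points[idx]                        # (n, k, d)
    center = np.repeat(points[:, None, :], k, 1)   # (n, k, d)
    # Edge feature h(x_i, x_j) = [x_i, x_j - x_i], as in the DGCNN paper.
    edge_feat = np.concatenate([center, neighbors - center], axis=-1)  # (n, k, 2d)

    h = np.maximum(edge_feat @ W, 0.0)             # shared layer + ReLU
    return h.max(axis=1)                           # max over neighbors, (n, out_dim)
```

In DGCNN proper, the neighbor graph is recomputed in feature space after each layer (hence "dynamic"), and in the cited tracker the per-point outputs are concatenated with a max-pooled global feature, yielding the 2048 × 2C map described above.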