2019
DOI: 10.48550/arxiv.1912.03264
Preprint

PU-GCN: Point Cloud Upsampling using Graph Convolutional Networks

Abstract: Upsampling sparse, noisy, and non-uniform point clouds is a challenging task. In this paper, we propose 3 novel point upsampling modules: Multi-branch GCN, Clone GCN, and NodeShuffle. Our modules use Graph Convolutional Networks (GCNs) to better encode local point information. Our upsampling modules are versatile and can be incorporated into any point cloud upsampling pipeline. We show how our 3 modules consistently improve state-of-the-art methods in all point upsampling metrics. We also propose a new multi-s…
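The abstract describes NodeShuffle only at a high level: a graph convolution expands each point's features by the upsampling ratio r, and a shuffle operation rearranges the expanded features into r new points. The sketch below illustrates that idea; the EdgeConv-style k-NN aggregation and all layer sizes are assumptions standing in for PU-GCN's actual GCN layers, not the authors' implementation.

```python
# Minimal sketch of a NodeShuffle-style upsampling layer: a graph convolution
# expands per-point features by the ratio r, then a shuffle (reshape) turns
# N points into r*N points. Illustrative only; the k-NN EdgeConv-style
# aggregation below is an assumption, not PU-GCN's exact GCN layer.
import torch
import torch.nn as nn


class NodeShuffleSketch(nn.Module):
    def __init__(self, in_channels: int, out_channels: int, r: int, k: int = 16):
        super().__init__()
        self.r, self.k, self.out_channels = r, k, out_channels
        # Shared MLP applied to [x_i, x_j - x_i] edge features.
        self.edge_mlp = nn.Sequential(
            nn.Linear(2 * in_channels, r * out_channels),
            nn.ReLU(inplace=True),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (N, C) per-point features of a single point cloud.
        n, _ = x.shape
        # Build a k-NN graph in feature space.
        dists = torch.cdist(x, x)                           # (N, N)
        idx = dists.topk(self.k, largest=False).indices     # (N, k)
        neighbors = x[idx]                                  # (N, k, C)
        center = x.unsqueeze(1).expand(-1, self.k, -1)      # (N, k, C)
        edge_feat = torch.cat([center, neighbors - center], dim=-1)
        # Graph convolution: per-edge MLP followed by max aggregation.
        expanded = self.edge_mlp(edge_feat).max(dim=1).values  # (N, r*C_out)
        # "Shuffle": fold the expansion ratio into the point dimension.
        return expanded.reshape(n * self.r, self.out_channels)  # (r*N, C_out)


# Example: upsample 256 points with 64-dim features by a factor of 4.
feats = torch.randn(256, 64)
up = NodeShuffleSketch(64, 32, r=4)(feats)   # -> (1024, 32)
```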

Cited by 7 publications (14 citation statements)
References 40 publications
“…To explore the efficacy of the proposed EVA upsampling unit, we conduct an ablation study to compare with different upsampling units. Specifically, direct point feature duplication upsampling, and the best upsampling unit in PU-GCN [24], named NodeShuffle, are adopted for comparison. The direct point feature duplication follows the network architecture of the Feature Expansion module in PU-Net [42], while keeping the rest of the network the same as PU-EVA.…”
Section: Ablation Study
confidence: 99%
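For context on the baseline named in the statement above, here is a minimal sketch of what "direct point feature duplication" upsampling (in the spirit of PU-Net's Feature Expansion module) can look like: the feature map is duplicated r times, each copy passes through its own per-point MLP, and the copies are stacked. Layer sizes and the exact branch design are illustrative assumptions, not taken from PU-Net, PU-GCN, or PU-EVA.

```python
# Sketch of duplication-based feature expansion: the N x C feature map is
# copied r times, each copy is transformed by its own shared MLP, and the
# copies are stacked into r*N expanded features. Sizes are assumptions.
import torch
import torch.nn as nn


class DuplicationExpansion(nn.Module):
    def __init__(self, in_channels: int, out_channels: int, r: int):
        super().__init__()
        # One independent per-point MLP per duplicated branch.
        self.branches = nn.ModuleList([
            nn.Sequential(nn.Linear(in_channels, out_channels), nn.ReLU(inplace=True))
            for _ in range(r)
        ])

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (N, C) -> (r*N, C_out); every branch sees the same duplicated input.
        return torch.cat([branch(x) for branch in self.branches], dim=0)


expanded = DuplicationExpansion(64, 32, r=4)(torch.randn(256, 64))  # (1024, 32)
```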
“…Specifically, unsupervised machine perception tasks extract informative features from point cloud samples without human supervision [9]. Some typical tasks include reconstruction [10], [11], completion [12], [13], and upsampling [14]. Recently, deep neural networks are common tools to achieve those tasks, and an objective point cloud distortion quantification is usually needed as the supervision to train deep neural networks.…”
Section: Machine Perception Tasks
confidence: 99%
“…Specifically, unsupervised tasks of machine perception generally aim to extract informative features from point clouds without using human supervision [9]. Some typical tasks include reconstruction [10], [11], completion [12], [13], and upsampling [14]. Recently, deep neural networks are emerging techniques to realize those tasks.…”
Section: Introduction
confidence: 99%
“…To enhance the mesh resolution, we used a cascade of Multi-branch GCN [25] modules, where GCNConv layers [17] were used for feature upsampling. For the dome dataset we used 5 Multi-branch GCN modules at the first cascade level and 8 modules at the second cascade level, and for the FreiHAND dataset we used 3 Multi-branch GCN modules.…”
Section: Mesh Enhancer
confidence: 99%
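A rough sketch of a Multi-branch GCN-style expansion consistent with the description above: r parallel graph-convolution branches transform the same node features, and their outputs are stacked so every input node yields r output nodes. The simple normalized-adjacency propagation inside each branch is an assumption; it is not the GCNConv layer used by the citing work.

```python
# Hedged sketch of a Multi-branch GCN-style expansion: r parallel branches,
# each applying one graph propagation plus a linear transform, with outputs
# stacked along the point dimension. All internals are assumptions.
import torch
import torch.nn as nn


class MultiBranchGCNSketch(nn.Module):
    def __init__(self, in_channels: int, out_channels: int, r: int):
        super().__init__()
        self.branches = nn.ModuleList(
            [nn.Linear(in_channels, out_channels) for _ in range(r)]
        )

    def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # x: (N, C) node features; adj: (N, N) adjacency (e.g. a k-NN graph).
        # Row-normalized propagation: each node averages its neighbors' features.
        norm_adj = adj / adj.sum(dim=1, keepdim=True).clamp(min=1)
        aggregated = norm_adj @ x
        # Each branch produces one of the r copies of every node.
        return torch.cat([torch.relu(b(aggregated)) for b in self.branches], dim=0)


# Example with a self-loop-only graph so the sketch runs end to end.
feats, adj = torch.randn(100, 64), torch.eye(100)
up = MultiBranchGCNSketch(64, 32, r=4)(feats, adj)  # (400, 32)
```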
“…For the dome dataset we used 5 Multi-branch GCN modules at the first cascade level and 8 modules at the second cascade level, and for the FreiHAND dataset we used 3 Multi-branch GCN modules. The resultant node features, which construct the mesh at full resolution, were then passed through a set of Convolution-BatchNorm-ReLU layers that plays a role analogous to that of the "Coordinator Reconstructor" in the initial work on point upsampling [25]. This contains Convolution-BatchNorm-ReLU layers with 64, 64, 64, 64, and 1 kernels, each kernel having 1 × 3 filters.…”
Section: Mesh Enhancer
confidence: 99%
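The statement above describes the reconstruction head only by its layer pattern. Below is a hedged sketch of a coordinate-reconstructor-style head: a stack of Convolution-BatchNorm-ReLU blocks mapping upsampled per-point features back to xyz coordinates. The channel widths and the use of 1×1 convolutions are simplifying assumptions and do not reproduce the citing paper's exact 64/64/64/64/1 configuration with 1 × 3 filters.

```python
# Sketch of a coordinate-reconstructor-style head: Conv-BN-ReLU blocks over
# per-point features, ending in a layer that outputs x, y, z. Sizes assumed.
import torch
import torch.nn as nn


def conv_bn_relu(in_ch: int, out_ch: int) -> nn.Sequential:
    # Shared per-point transform implemented as a 1x1 1-D convolution.
    return nn.Sequential(nn.Conv1d(in_ch, out_ch, kernel_size=1),
                         nn.BatchNorm1d(out_ch),
                         nn.ReLU(inplace=True))


class CoordinateReconstructorSketch(nn.Module):
    def __init__(self, in_channels: int):
        super().__init__()
        self.layers = nn.Sequential(
            conv_bn_relu(in_channels, 64),
            conv_bn_relu(64, 64),
            nn.Conv1d(64, 3, kernel_size=1),  # final layer outputs x, y, z
        )

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # feats: (batch, channels, num_points) -> (batch, 3, num_points)
        return self.layers(feats)


xyz = CoordinateReconstructorSketch(32)(torch.randn(1, 32, 1024))  # (1, 3, 1024)
```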