2019 IEEE/CVF International Conference on Computer Vision (ICCV)
DOI: 10.1109/iccv.2019.00396

3D Point Cloud Generative Adversarial Network Based on Tree Structured Graph Convolutions

Figure 1. Unsupervised 3D point clouds generated by our tree-GAN for multiple classes (e.g., Motorbike, Laptop, Table, Guitar, Skateboard, Knife, Table, Pistol, and Car from top-left to bottom-right). Our tree-GAN can generate more accurate point clouds than the baseline (i.e., r-GAN [1]), and can also produce point clouds for semantic parts of objects, which are denoted by different colors.

Abstract: In this paper, we propose a novel generative adversarial network (GAN) for 3D point cloud generation, which is cal…
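The abstract and figure describe growing a point cloud from a single latent root feature with tree-structured graph convolutions. Below is a minimal PyTorch sketch of that general idea; the class name TreeBranchLayer, the layer widths, the branching degrees, and the simplified "parent feature as ancestor" term are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of a tree-structured graph-convolution generator
# (inspired by the tree-GAN idea of growing points from a root feature).
# Names, widths, and branching degrees are illustrative assumptions.
import torch
import torch.nn as nn

class TreeBranchLayer(nn.Module):
    """Expands each node feature into `degree` children and mixes in
    information propagated from its parent (a simplified ancestor term)."""
    def __init__(self, in_dim, out_dim, degree):
        super().__init__()
        self.degree = degree
        self.branch = nn.Linear(in_dim, in_dim * degree)  # node -> children
        self.loop = nn.Linear(in_dim, out_dim)            # ancestor mixing
        self.act = nn.LeakyReLU(0.2)

    def forward(self, x):
        # x: (batch, num_nodes, in_dim)
        b, n, d = x.shape
        children = self.branch(x).reshape(b, n * self.degree, d)
        # Each child also receives its parent's feature as the ancestor term.
        ancestors = x.repeat_interleave(self.degree, dim=1)
        return self.act(self.loop(children + ancestors))

# Toy generator: grow 1 root node -> 4 -> 16 -> 64 points in 3D.
generator = nn.Sequential(
    TreeBranchLayer(96, 96, degree=4),
    TreeBranchLayer(96, 96, degree=4),
    TreeBranchLayer(96, 3, degree=4),
)
z = torch.randn(8, 1, 96)   # latent root feature per shape
points = generator(z)       # (8, 64, 3) generated point cloud
print(points.shape)
```

In a full tree-GAN-style generator the ancestor term aggregates features from every level of the tree rather than just the parent, and the final coordinate layer would normally omit the nonlinearity; both are simplified here for brevity.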

Cited by 233 publications (242 citation statements)
References 35 publications (35 reference statements)
“…To overcome the redundancy and structural irregularity of point samples, Ramasinghe et al [2019] propose Spectral-GANs to synthesize shapes using a spherical-harmonics-based representation. Shu et al [2019] propose tree-GAN to perform graph convolutions in a tree and recently extend it into a multi-rooted version. Hui et al [2020] design a progressive deconvolution network to generate 3D point clouds, while Arshad et al…”
Section: Related Work
Mentioning confidence: 99%
“…Table 1. Quantitative comparison of the generated shapes produced by SP-GAN and five state-of-the-art methods, i.e., r-GAN [Achlioptas et al 2018], tree-GAN [Shu et al 2019], PointFlow [Yang et al 2019], PDGN [Hui et al 2020], and ShapeGF [Cai et al 2020]. We follow the same settings as these state-of-the-art methods when conducting this experiment.…”
Section: Part Co-segmentation
Mentioning confidence: 99%
“…These examples are generated using deep learning or simple geometric manipulation to create synthetic datasets. GANs proposed for generating synthetic 3D object point clouds are relevant to object detection and recognition (Shu et al, 2019). Existing frameworks, however, lack the ability to generate synthetic data for large-scale 3D point clouds.…”
Section: Deep Learning Network
Mentioning confidence: 99%
“…Point clouds can capture a much higher resolution than voxels, and can be processed using simpler manipulations than meshes. By leveraging the flexibility of deep learning, a deep generative model of point clouds enables a variety of synthesis tasks such as generation, reconstruction, and super-resolution [1,2,13,18,26,36,39,43,47]. Because it is difficult to measure the quality of a generated point cloud numerically, most studies employ flow-based generative models [6,10,20] or generative adversarial networks (GANs) [9].…”
Mentioning confidence: 99%
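The quoted passage notes that the quality of a generated point cloud is hard to measure numerically. One distance that underlies several metrics reported in this line of work (e.g., MMD-CD and coverage in the r-GAN and tree-GAN evaluations) is the Chamfer distance between two point sets. The NumPy sketch below is a plain, illustrative implementation for clarity, not the evaluation code used by any of the cited papers.

```python
# Minimal NumPy sketch of the (squared) Chamfer distance between two point
# sets, a common building block of point-cloud generation metrics such as
# MMD-CD and coverage. Written for clarity, not speed; names are illustrative.
import numpy as np

def chamfer_distance(a, b):
    """a: (N, 3) array of points, b: (M, 3) array of points."""
    # Pairwise squared distances, shape (N, M).
    d2 = np.sum((a[:, None, :] - b[None, :, :]) ** 2, axis=-1)
    # Average squared distance to the nearest neighbour in the other set,
    # summed over both directions.
    return d2.min(axis=1).mean() + d2.min(axis=0).mean()

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    cloud_a = rng.normal(size=(2048, 3))
    cloud_b = rng.normal(size=(2048, 3))
    print(chamfer_distance(cloud_a, cloud_b))
```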