Proceedings of the 15th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications (VISIGRAPP 2020)
DOI: 10.5220/0009102604210428

Latent-space Laplacian Pyramids for Adversarial Representation Learning with 3D Point Clouds

Abstract: Constructing high-quality generative models for 3D shapes is a fundamental task in computer vision with diverse applications in geometry processing, engineering, and design. Despite the recent progress in deep generative modelling, synthesis of finely detailed 3D surfaces, such as high-resolution point clouds, from scratch has not been achieved with existing approaches. In this work, we propose to employ the latent-space Laplacian pyramid representation within a hierarchical generative model for 3D point cloud…
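The "latent-space Laplacian pyramid" named in the abstract can be illustrated with a toy numpy sketch. This is an assumption-laden illustration, not the authors' architecture: here a 1-D latent code is decomposed into a coarse code plus per-level residual ("detail") bands, and reconstruction proceeds coarse-to-fine by upsampling and adding detail, mirroring how a hierarchical generator would refine a coarse latent at increasing levels of detail.

```python
import numpy as np

# Illustrative sketch (not the paper's implementation): a Laplacian
# pyramid over latent codes. Each level stores only the residual with
# respect to the upsampled coarser level; summing coarse-to-fine
# recovers the original code exactly.

def build_latent_pyramid(z, levels=3):
    """Decompose a 1-D latent code into a coarse code plus residuals."""
    residuals = []
    current = z
    for _ in range(levels):
        coarse = current.reshape(-1, 2).mean(axis=1)  # downsample x2
        upsampled = np.repeat(coarse, 2)              # upsample x2
        residuals.append(current - upsampled)         # detail band
        current = coarse
    return current, residuals[::-1]                   # coarsest first

def reconstruct_latent(coarse, residuals):
    """Coarse-to-fine reconstruction: upsample, then add the detail band."""
    z = coarse
    for r in residuals:
        z = np.repeat(z, 2) + r
    return z

z = np.random.default_rng(0).normal(size=64)
coarse, residuals = build_latent_pyramid(z)
z_rec = reconstruct_latent(coarse, residuals)
assert np.allclose(z, z_rec)  # the pyramid is an exact decomposition
```

In a generative model, each residual band would be produced by a learned generator conditioned on the coarser level rather than stored; the sketch only shows why the pyramid decomposition is lossless.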


Cited by 3 publications (3 citation statements)
References 22 publications
“…In the context of cultural heritage interesting is the work of Egiazarian et al (2019) where they propose a model that combines the latent-space GAN and Laplacian GAN architectures to form a multi-scale model capable of generating 3D point clouds at augmenting levels of detail.…”
Section: Related Work
confidence: 99%
“…By replicating the training instances, autoencoders learn features that minimize reconstruction error; for novel instances similar to those in the training set, reconstruction error is low compared to that of strong outliers. We train six autoencoders [42,43] for point clouds using vertices of undeformed meshes separately for the top six classes present in Scan2CAD annotation: table, chair, display, trashbin, cabinet, and bookshelf. Passing vertices V def of a deformed shape to the respective autoencoder, one can assess how accurately deformed meshes can be approximated using features of undeformed meshes.…”
Section: Evaluation Setup
confidence: 99%
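The reconstruction-error idea in the quote above can be sketched minimally. As an illustration only, the learned point cloud autoencoders [42,43] are stood in for by a linear autoencoder (a PCA basis); the data and dimensions are made up. Instances similar to the training set reconstruct with low error, while strong outliers do not.

```python
import numpy as np

# Minimal sketch of reconstruction error as a novelty measure, using a
# linear autoencoder (PCA) as a stand-in for the learned autoencoders.
# Training data, dimensions, and the subspace are illustrative.

def fit_linear_autoencoder(X, k):
    """Fit a k-dimensional linear autoencoder (PCA basis) to rows of X."""
    mu = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
    return mu, Vt[:k]                  # mean and top-k components

def reconstruction_error(x, mu, W):
    """Project onto the learned subspace and measure the residual norm."""
    z = (x - mu) @ W.T                 # encode
    x_rec = z @ W + mu                 # decode
    return float(np.linalg.norm(x - x_rec))

rng = np.random.default_rng(0)
# "Undeformed" training shapes lie near a low-dimensional subspace.
basis = rng.normal(size=(4, 30))
train = rng.normal(size=(200, 4)) @ basis
mu, W = fit_linear_autoencoder(train, k=4)

inlier = rng.normal(size=4) @ basis    # similar to the training set
outlier = rng.normal(size=30)          # a strongly deformed instance
assert reconstruction_error(inlier, mu, W) < reconstruction_error(outlier, mu, W)
```

The same comparison underlies the quoted setup: passing deformed-shape vertices through an autoencoder trained on undeformed shapes and reading the reconstruction error as a quality signal.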
“…Having obtained a collection of deformed meshes, we aim to assess their visual quality in comparison to two baseline deformation methods: as-rigid-as-possible (ARAP) [27] and Harmonic deformation [34,35], using a set of perceptual quality measures. To bring second-order information about mesh surface in energy formulation of ARAP/Harmonic, we add the Laplacian smoothness term. [Table] 3: Quantitative evaluation of visual quality of deformations obtained using ARAP [27], Harmonic deformation [34,35], and our CAD-Deform, using a variety of local surface-based (DAME [41]), neural (EMD [42,43]), and human measures.…”
Section: Fitting Accuracy: How Well Do CAD Deformations Fit?
confidence: 99%
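A Laplacian smoothness term of the kind the quote adds to the ARAP/Harmonic energies can be sketched with a uniform graph Laplacian. This is a generic illustration under stated assumptions, not the cited paper's formulation; the toy mesh connectivity is invented. The term penalizes vertices that deviate from the average of their neighbors, so a kinked surface scores higher energy than a smooth one.

```python
import numpy as np

# Illustrative Laplacian smoothness energy E = ||L V||^2 with a uniform
# graph Laplacian L = I - D^{-1} A; the 5-vertex chain "mesh" is a toy.

def uniform_laplacian(n_verts, edges):
    """Uniform graph Laplacian for an undirected edge list."""
    A = np.zeros((n_verts, n_verts))
    for i, j in edges:
        A[i, j] = A[j, i] = 1.0
    D = A.sum(axis=1)
    return np.eye(n_verts) - A / D[:, None]

def smoothness_energy(L, V):
    """Squared norm of the Laplacian coordinates of vertex positions V."""
    return float(np.sum((L @ V) ** 2))

# A tiny strip of 5 chained vertices.
edges = [(0, 1), (1, 2), (2, 3), (3, 4)]
L = uniform_laplacian(5, edges)

V_smooth = np.linspace(0.0, 1.0, 5)[:, None] * np.ones((1, 3))  # straight line
V_bent = V_smooth.copy()
V_bent[2, 1] += 1.0                                             # kink one vertex
assert smoothness_energy(L, V_smooth) < smoothness_energy(L, V_bent)
```

In a deformation energy this term is added with a weight to the ARAP or harmonic objective, trading fit against surface smoothness.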