2018
DOI: 10.1007/978-3-030-01216-8_15
3D-CODED: 3D Correspondences by Deep Deformation

Abstract: We present a new deep learning approach for matching deformable shapes by introducing Shape Deformation Networks, which jointly encode 3D shapes and correspondences. This is achieved by factoring the surface representation into (i) a template, which parameterizes the surface, and (ii) a learnt global feature vector, which parameterizes the transformation of the template into the input surface. By predicting this feature for a new shape, we implicitly predict correspondences between this shape and the template. We …
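The factored representation described in the abstract can be sketched roughly as follows. Everything here is an illustrative stand-in, not the authors' implementation: the network sizes, the random weights (which would in practice come from training), and the brute-force nearest-neighbour matching are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes (not taken from the paper).
N_TEMPLATE = 6890   # template vertex count
FEAT_DIM = 1024     # learnt global shape-feature size
HIDDEN = 256

# Random weights stand in for a trained Shape Deformation Network.
W1 = rng.standard_normal((3 + FEAT_DIM, HIDDEN)) * 0.01
b1 = np.zeros(HIDDEN)
W2 = rng.standard_normal((HIDDEN, 3)) * 0.01
b2 = np.zeros(3)

def deform_template(template_pts, global_feat):
    """MLP maps (template point, global feature) -> deformed 3D point."""
    feat = np.broadcast_to(global_feat, (template_pts.shape[0], FEAT_DIM))
    x = np.concatenate([template_pts, feat], axis=1)
    h = np.tanh(x @ W1 + b1)
    return h @ W2 + b2

def match_to_template(input_pts, deformed_template):
    """Correspondence: nearest deformed-template vertex per input point."""
    d = np.linalg.norm(
        input_pts[:, None, :] - deformed_template[None, :, :], axis=2
    )
    return d.argmin(axis=1)

template = rng.standard_normal((N_TEMPLATE, 3))
global_feat = rng.standard_normal(FEAT_DIM)  # would come from a shape encoder
deformed = deform_template(template, global_feat)
corr = match_to_template(rng.standard_normal((100, 3)), deformed)
```

The key point the sketch illustrates is that correspondences are never predicted directly: deforming the fixed template toward the input surface induces them, since every deformed point carries its template index.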

Cited by 250 publications (406 citation statements).
References 45 publications (52 reference statements).
“…3). These numbers are comparable, but higher than state-of-the-art methods like [15] or [49]. However, we note that the two methods outperforming BPS in the FAUST intra challenge are orders of magnitude slower than our system.…”
Section: Methods
confidence: 60%
“…However, they are typically computationally expensive and require the use of a deformable model at application time. Machine learning based methods [15] remove this dependency by replacing them with a sufficiently large training corpus. However, current solutions like [15] rely on multistage models with complex internal representations, which makes them slow to train and test.…”
Section: Single-Pass Mesh Registration from 3D Scans
confidence: 99%
“…We generate 3200 meshes for each of the five categories by sampling from each cluster distribution. Following the generation procedure in other works [21], we then sample the pose (here, the joint angles) via a Gaussian distribution with a standard deviation of 0.2. We then split the resulting dataset into 15000 training and 1000 testing meshes (each comprised of equal numbers of meshes per species).…”
Section: A3 SMAL-Derived Dataset
confidence: 99%
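The generation protocol quoted above (pose sampled from a Gaussian with standard deviation 0.2, then a 15000/1000 train/test split) can be sketched as follows. The joint count and the flat pose layout are hypothetical; only the standard deviation, the per-category mesh count, and the split sizes come from the quoted text.

```python
import numpy as np

rng = np.random.default_rng(42)

# Assumed sizes for illustration (not from the quoted work).
N_JOINTS = 33              # hypothetical articulated-model joint count
MESHES_PER_CATEGORY = 3200 # per the quoted protocol
N_CATEGORIES = 5

# Sample joint angles from a zero-mean Gaussian with std 0.2,
# as in the quoted generation procedure.
poses = rng.normal(
    loc=0.0, scale=0.2,
    size=(N_CATEGORIES * MESHES_PER_CATEGORY, N_JOINTS * 3),
)

# Shuffle, then split into 15000 training / 1000 testing poses.
idx = rng.permutation(poses.shape[0])
train, test = poses[idx[:15000]], poses[idx[15000:]]
```

Each sampled pose vector would then drive the articulated model to produce one mesh; the 5 x 3200 = 16000 meshes split exactly into the 15000/1000 partition the excerpt describes.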
“…the protocol from 3D-CODED [21]. Briefly, we sampled 20500 meshes from each of the male and female models using random samples from the SURREAL dataset [60].…”
Section: A4 SMPL-Derived Dataset
confidence: 99%