2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
DOI: 10.1109/cvpr42600.2020.00053

Articulation-Aware Canonical Surface Mapping

Cited by 80 publications
(99 citation statements)
References 23 publications
“…More recently, Canonical Surface Mapping (CSM) [5] predicts a UV mapping from a single image onto a canonical model, trained entirely using self-supervision, by introducing a geometric cycle-consistency term. For Kulkarni et al [10], the same mapping is applied but the canonical surface mesh can deform given an articulation parameter, which allows shape alignment to an input image. Our work is inspired by these, but differs in two key ways.…”
Section: Related Work (mentioning)
confidence: 99%
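The geometric cycle-consistency idea described in this excerpt can be sketched as follows. This is a hypothetical, simplified illustration (names, the weak-perspective camera model, and the loss form are our assumptions, not the paper's exact formulation): a pixel is mapped to a point on the canonical surface, reprojected with a predicted camera, and penalized by its distance back to the original pixel.

```python
import numpy as np

def reproject(points_3d, camera):
    """Weak-perspective projection: scale * (R @ X)[:2] + t (assumed camera model)."""
    scale, R, t = camera
    projected = (R @ points_3d.T).T[:, :2]
    return scale * projected + t

def cycle_consistency_loss(pixels, predicted_surface_points, camera):
    """Mean squared reprojection error between the original pixels and the
    reprojections of their predicted canonical-surface points."""
    reprojected = reproject(predicted_surface_points, camera)
    return np.mean(np.sum((pixels - reprojected) ** 2, axis=1))

# Toy example: identity camera and surface points that already project back exactly,
# so the cycle closes and the loss is zero.
pixels = np.array([[0.1, 0.2], [0.3, 0.4]])
points = np.hstack([pixels, np.zeros((2, 1))])  # lift pixels to z = 0
camera = (1.0, np.eye(3), np.zeros(2))
loss = cycle_consistency_loss(pixels, points, camera)  # → 0.0
```

In the self-supervised setting the excerpt describes, a term of this shape lets the mapping be trained without ground-truth surface correspondences.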
“…A growing number of studies have tackled the reconstruction of category-specific, natural articulated objects with a particular kinematic structure, such as the human body and animals. Representative works rely on the use of category-specific template models as the shape and pose prior (Loper et al, 2015; Zuffi et al, 2017; Bogo et al, 2016; Zuffi et al, 2019; Kulkarni et al, 2020). Another body…”
Section: Related Work (mentioning)
confidence: 99%
“…For the "revolute" part, we set B_i = T(q_i) R(s_i, u_i), where R(·) denotes a homogeneous rotation matrix given the rotation representation, and s_i and u_i represent the axis-angle rotation around the axis u_i by angle s_i. In human shape reconstruction methods using a template shape, the pose is initialized to be close to the real distribution to avoid local minima (Kanazawa et al, 2018; Kulkarni et al, 2020). Inspired by these approaches, we parametrize the joint direction as…”
Section: Part Shape Representation (mentioning)
confidence: 99%
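The revolute-part transform quoted above, B_i = T(q_i) R(s_i, u_i), can be sketched concretely. This is a minimal illustration under our own assumptions (4x4 homogeneous matrices, Rodrigues' formula for the axis-angle rotation; the function names are ours, not the paper's):

```python
import numpy as np

def T(q):
    """4x4 homogeneous translation by offset vector q."""
    M = np.eye(4)
    M[:3, 3] = q
    return M

def R(s, u):
    """4x4 homogeneous rotation by angle s about axis u (Rodrigues' formula)."""
    u = u / np.linalg.norm(u)
    K = np.array([[0.0, -u[2], u[1]],
                  [u[2], 0.0, -u[0]],
                  [-u[1], u[0], 0.0]])  # cross-product (skew-symmetric) matrix
    rot = np.eye(3) + np.sin(s) * K + (1.0 - np.cos(s)) * (K @ K)
    M = np.eye(4)
    M[:3, :3] = rot
    return M

# B_i for a joint offset q_i, rotating by angle s_i about axis u_i:
q_i = np.array([0.0, 1.0, 0.0])
s_i, u_i = np.pi / 2, np.array([0.0, 0.0, 1.0])
B_i = T(q_i) @ R(s_i, u_i)

# A point is first rotated about the axis, then translated:
p = np.array([1.0, 0.0, 0.0, 1.0])      # homogeneous point
p_out = B_i @ p                          # → [0, 2, 0, 1]
```

The composition order matters: applying R before T means the part rotates about the joint axis in its local frame and is then placed at the joint offset, which matches the B_i = T(q_i) R(s_i, u_i) form in the excerpt.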
“…In addition to numerous semantic-specific details, recognition in novel viewpoints via direct appearance synthesis is suboptimal: one may be sure of the presence of a rug behind a couch, but unsure of its particular color. Similarly, there have been advances in learning to infer 3D properties of scenes from image cues [20, 46, 63], or with differentiable rendering [10, 29, 38, 50] and other methods for bypassing the need for direct 3D supervision [27, 33, 34, 68]. However, these approaches do not connect to complex scene semantics; they primarily focus on single objects or small, less diverse 3D annotated datasets.…”
Section: Introduction (mentioning)
confidence: 99%