2021 International Conference on 3D Vision (3DV)
DOI: 10.1109/3dv53792.2021.00097
Human Performance Capture from Monocular Video in the Wild

Abstract: Capturing the dynamically deforming 3D shape of clothed humans is essential for numerous applications, including VR/AR, autonomous driving, and human-computer interaction. Existing methods either require a highly specialized capturing setup, such as expensive multi-view imaging systems, or they lack robustness to challenging body poses. In this work, we propose a method capable of capturing the dynamic 3D human shape from a monocular video featuring challenging body poses, without any additional input. We first…

Cited by 15 publications (4 citation statements); References: 53 publications
“…Need foreground mask to enable the mesh optimization, akin to shape-from-silhouette. One future direction might be equipping our method with the ability to separate foreground and background automatically [31,24]. It is also promising to model the background simultaneously during foreground subject optimization [31,24], which eliminates the requirement of foreground mask processing.…”
Section: Limitations and Further Discussion
confidence: 99%
“…These works typically leverage parametric models for minimally clothed human bodies [9,17,18,29,46,55] (e.g. SMPL [33]) and use a displacement layer on top of the minimally clothed body to model clothing [3,4,23,35,39,61]. Recently, DSFN [11] proposes to embed MLPs into the canonical space of SMPL to model pose-dependent deformations.…”
Section: Related Work
confidence: 99%
“…These works typically leverage parametric models for minimally clothed human bodies [7,15,16,27,44,52] (e.g. SMPL [31]) and use a displacement layer on top of the minimally clothed body to model clothing [3,4,21,33,37,57]. Recently, DSFN [9] proposes to embed MLPs into the canonical space of SMPL to model pose-dependent deformations.…”
Section: Related Work
confidence: 99%
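The related-work statements above describe a recurring modeling pattern: a parametric, minimally clothed body model (e.g. SMPL) extended with a per-vertex displacement layer for clothing, where pose-dependent deformations are predicted by MLPs in the body's canonical space. The sketch below illustrates that general pattern only; it is not the implementation of the cited paper or of DSFN. The class name, vertex count, and 72-dimensional pose vector are illustrative assumptions, and a random tensor stands in for the SMPL template.

```python
# Minimal sketch (assumptions noted above): pose-conditioned per-vertex
# displacements added on top of a canonical body template, in the spirit of
# SMPL+D-style clothing offsets and canonical-space deformation MLPs.
import torch
import torch.nn as nn


class DisplacementMLP(nn.Module):
    """Predicts a 3D offset per canonical vertex, conditioned on body pose."""

    def __init__(self, pose_dim: int = 72, hidden: int = 256):
        super().__init__()
        # Input: canonical vertex position (3) concatenated with the pose vector.
        self.net = nn.Sequential(
            nn.Linear(3 + pose_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3),  # per-vertex displacement (dx, dy, dz)
        )

    def forward(self, canonical_verts: torch.Tensor, pose: torch.Tensor) -> torch.Tensor:
        # canonical_verts: (V, 3), pose: (pose_dim,)
        pose_tiled = pose.unsqueeze(0).expand(canonical_verts.shape[0], -1)
        offsets = self.net(torch.cat([canonical_verts, pose_tiled], dim=-1))
        # Clothed shape in canonical space = body template + predicted offsets;
        # a skinning step (not shown) would then pose the deformed mesh.
        return canonical_verts + offsets


# Toy usage with a random stand-in for the SMPL template (6890 vertices).
verts = torch.randn(6890, 3)
pose = torch.zeros(72)
clothed = DisplacementMLP()(verts, pose)
print(clothed.shape)  # torch.Size([6890, 3])
```

In practice the displacement network would be trained jointly with the pose and shape parameters against image evidence (silhouettes, keypoints, or photometric terms), which is where the methods cited above differ from one another.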