2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)
DOI: 10.1109/cvpr.2017.584
Human Shape from Silhouettes Using Generative HKS Descriptors and Cross-Modal Neural Networks

Abstract: In this work, we present a novel method for capturing human body shape from a single scaled silhouette. We combine deep correlated features capturing different 2D views, and embedding spaces based on 3D cues in a novel convolutional neural network (CNN) based architecture. We first train a CNN to find a richer body shape representation space from pose invariant 3D human shape descriptors. Then, we learn a mapping from silhouettes to this representation space, with the help of a novel architecture that exploits…
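The two-stage training the abstract describes — first learning an embedding space from pose-invariant 3D shape descriptors, then regressing silhouettes into that same space — can be sketched with simple linear stand-ins for the two CNNs. Everything below (dimensions, random data, PCA and least squares in place of the networks) is illustrative and not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy data: 100 bodies, 64-dim 3D shape descriptors (e.g. HKS),
# 256-dim flattened silhouette features correlated with the shape.
n, d_desc, d_sil, d_emb = 100, 64, 256, 16
desc = rng.normal(size=(n, d_desc))
sil = desc @ rng.normal(size=(d_desc, d_sil))

# Stage 1 (stand-in for the shape CNN): project descriptors into a
# low-dimensional embedding -- the "body shape representation space".
u, s, _ = np.linalg.svd(desc, full_matrices=False)
emb = u[:, :d_emb] * s[:d_emb]

# Stage 2 (stand-in for the silhouette CNN): learn a least-squares map
# from silhouettes to the embedding learned in stage 1.
w, *_ = np.linalg.lstsq(sil, emb, rcond=None)
pred = sil @ w

# Relative error of the cross-modal mapping on the training set.
err = np.linalg.norm(pred - emb) / np.linalg.norm(emb)
```

Because the toy silhouettes are an exact linear function of the descriptors, the learned map recovers the embedding almost perfectly; in the paper both stages are deep CNNs and the correlation is learned from rendered views.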

Cited by 87 publications (89 citation statements)
References 57 publications
“…The SiCloPe [31] system tolerates greater clothing variation than our system, but its geometric detail is limited by the use of intermediate silhouettes. To the credit of these works, all but [10] were designed for capturing bodies "in the wild" with tolerance for pose variation, whereas our goal is to capture a detailed avatar from a restricted pose.…”
Section: Related Work
confidence: 99%
“…Dibra et al [106] used an encoder followed by three fully connected layers which regress the SCAPE parameters from one or multiple silhouette images. Later, Dibra et al [107] first learn a common embedding of 2D silhouettes and 3D human body shapes (see Section 7.3.1). The latter are represented using their Heat Kernel Signatures [108].…”
Section: 3D Human Body Reconstruction
confidence: 99%
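The Heat Kernel Signatures mentioned in the statement above are the standard intrinsic descriptors of Sun et al.: for a point $x$ on the surface and a diffusion time $t$, the signature is built from the spectrum of the Laplace–Beltrami operator,

```latex
\mathrm{HKS}(x, t) \;=\; \sum_{i \ge 0} e^{-\lambda_i t}\, \phi_i(x)^2
```

where $\lambda_i$ and $\phi_i$ are the eigenvalues and eigenfunctions of the Laplace–Beltrami operator. Since the spectrum is intrinsic to the surface, the signature is invariant under isometric deformations, which is what makes it pose-invariant for human bodies.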
“…The most practical but also challenging setting is capturing from a single monocular RGB camera. Some methods attempt to infer the shape parameters of a body model from a single image [41,53,10,24,8,32,86,39,55], but reconstructed detail is constrained to the model shape space, and thus does not capture personalized shape detail and clothing geometry. Recent work [6,5] estimates more detailed shape, including clothing, from a video sequence of a person rotating in front of a camera while holding a rough A-pose.…”
Section: Introduction
confidence: 99%