2014 IEEE Conference on Computer Vision and Pattern Recognition
DOI: 10.1109/cvpr.2014.92

Quality Dynamic Human Body Modeling Using a Single Low-Cost Depth Camera

Abstract: In this paper we present a novel autonomous pipeline to build a personalized parametric model (pose-driven avatar) using a single depth sensor. Our method first captures a few high-quality scans of the user rotating herself at multiple poses from different views. We fit each incomplete scan using template fitting techniques with a generic human template, and register all scans to every pose using global consistency constraints. After registration, these watertight models with different poses are used to train …
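The registration stage described above alternates between establishing correspondences and solving for an alignment. As a minimal sketch of the closed-form rigid-alignment building block such registration loops commonly rest on (the paper's own fitting is non-rigid and template-driven; the function name `rigid_align` and the demo below are illustrative assumptions, not the authors' code):

```python
# A minimal sketch, assuming point correspondences are already known.
# The paper's pipeline fits a generic human template NON-rigidly; this
# shows only the classic closed-form rigid step (Kabsch/Procrustes)
# that scan-registration loops alternate with correspondence search.
import numpy as np

def rigid_align(src, dst):
    """Least-squares R, t such that R @ src[i] + t ~ dst[i].

    src, dst: (N, 3) arrays of corresponding 3D points.
    """
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)        # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst_c - R @ src_c
    return R, t

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    pts = rng.normal(size=(100, 3))
    theta = np.pi / 6                          # ground-truth pose
    R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                       [np.sin(theta),  np.cos(theta), 0.0],
                       [0.0,            0.0,           1.0]])
    t_true = np.array([0.5, -0.2, 1.0])
    R, t = rigid_align(pts, pts @ R_true.T + t_true)
    print(np.allclose(R, R_true), np.allclose(t, t_true))   # True True
```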

Cited by 92 publications (58 citation statements) · References 20 publications
“…In particular, some techniques tackle the case of open surfaces, where the surface can be entirely observed in at least some frames of the sequence [37,16]. While some work has tackled the more challenging case of volumetric surfaces, most existing methods rely on a pre-processing step, where a 3D model of the object of interest is acquired under rigid, or quasi-rigid, motion [17,7,18,35,39,40,42]. While methods relying on quasi-rigidity during this model-building step do account for some degree of deformations, they are very far from attempting to directly estimate the 4D motion of, e.g., a human dancing in front of the sensor.…”
Section: Related Work
confidence: 99%
“…The overprocessing potentially degrades reconstruction accuracy and tends to oversmooth the reconstructed surface. To avoid excessive data processing, Li et al., Wang et al., and Zhang et al. adopted a semi-nonrigid pose assumption, in which four to eight (or more) static poses are captured at different angles to cover the full body, and partial scan meshes are then generated. The surface is reconstructed by nonrigidly stitching all the partial scan meshes.…”
Section: Related Work
confidence: 99%
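The excerpt above describes capturing a handful of static poses and stitching the resulting partial scan meshes. A toy sketch of that stitching idea, simplified to rigid iterated-closest-point alignment (the cited systems stitch non-rigidly to absorb pose changes between captures; `icp`, `stitch`, and the fixed iteration count are our illustrative assumptions):

```python
# A toy sketch of stitching partial scans, simplified to RIGID ICP.
# Requires numpy and scipy.
import numpy as np
from scipy.spatial import cKDTree

def icp(src, dst, iters=30):
    """Rigidly align point cloud src onto dst; return the moved copy."""
    cur = src.copy()
    for _ in range(iters):
        # 1. Correspondences: nearest neighbor in dst for each point.
        _, idx = cKDTree(dst).query(cur)
        matched = dst[idx]
        # 2. Closed-form rigid update (Kabsch, as in the sketch above).
        sc, dc = cur.mean(axis=0), matched.mean(axis=0)
        U, _, Vt = np.linalg.svd((cur - sc).T @ (matched - dc))
        d = np.sign(np.linalg.det(Vt.T @ U.T))
        R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
        cur = (cur - sc) @ R.T + dc
    return cur

def stitch(partial_scans):
    """Fold each (N_i, 3) partial scan into one growing merged cloud."""
    merged = partial_scans[0]
    for scan in partial_scans[1:]:
        merged = np.vstack([merged, icp(scan, merged)])
    return merged
```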
“…3D human body modeling has become a hot topic in the past few years due to the availability of consumer-level, low-cost Red Green Blue & Depth (RGB-D) sensors such as the Microsoft Kinect 360®; before these, body scanners were affordable only to a few organizations such as select health clinics, research institutions, and the fashion and film industries. Along with the availability of hardware, a variety of body scanning systems have been proposed that use single or multiple sensors. However, most of these low-cost body reconstruction systems are geared toward applications in 3D printing, rigging and animation, games, virtual reality, and fashion design, rather than clinical or health-related applications with rigorous requirements in reconstruction accuracy and reliability.…”
Section: Introduction
confidence: 99%
“…In the literature, some researchers used prior templates to fit each scan frame for alignment [30,31]. This approach requires a relatively accurate template and markers for model fitting, and is more common in dynamic human modeling.…”
Section: Related Work
confidence: 99%
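The template-fitting strategy quoted above deforms a prior template toward each scan frame. A minimal sketch of one linearized fitting step under a common formulation, Laplacian-regularized least squares (the weights, the function `fit_step`, and the toy mesh are illustrative assumptions, not the cited papers' method):

```python
# A minimal sketch of one linearized template-fitting step: pull the
# template vertices toward scan correspondences while a graph Laplacian
# keeps the deformation smooth. Real systems add robust weighting and
# iterate; both weights below are illustrative assumptions.
import numpy as np

def fit_step(verts, edges, targets, w_data=1.0, w_smooth=5.0):
    """Minimize w_data*||x - targets||^2 + w_smooth*||L (x - verts)||^2.

    verts:   (N, 3) template vertices
    edges:   (E, 2) vertex-index pairs of the template mesh
    targets: (N, 3) per-vertex scan correspondences
    """
    n = len(verts)
    L = np.zeros((n, n))                 # graph Laplacian of the template
    for i, j in edges:
        L[i, i] += 1.0; L[j, j] += 1.0
        L[i, j] -= 1.0; L[j, i] -= 1.0
    LtL = L.T @ L
    A = w_data * np.eye(n) + w_smooth * LtL
    b = w_data * targets + w_smooth * LtL @ verts
    return np.linalg.solve(A, b)         # solves all 3 coordinates at once

if __name__ == "__main__":
    # Toy "mesh": 4 vertices in a chain, pulled toward offset targets.
    verts = np.array([[0.0, 0, 0], [1, 0, 0], [2, 0, 0], [3, 0, 0]])
    edges = [(0, 1), (1, 2), (2, 3)]
    print(fit_step(verts, edges, verts + [0.0, 0.3, 0.0]))
```

Because a constant displacement lies in the Laplacian's null space, the toy run above recovers the targets exactly; a non-uniform pull would instead be smoothed out by the regularizer.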