2014 2nd International Conference on 3D Vision 2014
DOI: 10.1109/3dv.2014.93
Real-Time Face Reconstruction from a Single Depth Image

Abstract: [Figure 1 panels: Part Classification; Initial Correspondences; Final Correspondences; Reconstructed Model Rendered with Unwrapped Texture; Retextured; Reconstructed model over input depth map; Input Depth Map] Figure 1: Our method starts with estimating dense correspondences on an input depth image, using a discriminative model. A generative model parametrize…

Cited by 25 publications (20 citation statements) | References 43 publications
“…For a typical result mesh of 15K vertices, the running time was 92.16s, with 1.2s for preprocessing (finding fiducial points, rigid alignment), 83.4s for non-rigid registration, 7.16s for retrieval (calculating features for the input, warping all the faces, finding the best matching parts), and 0.4s for merging. The non-rigid registration part (90% of the running time) could be replaced with a real-time registration method [33, 22]. …”
Section: Methods (mentioning)
confidence: 99%
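As a quick sanity check of the reported breakdown, the four components sum to the stated total of 92.16s, and non-rigid registration accounts for roughly 90% of it. A minimal Python sketch (the dictionary keys are illustrative labels, not names from the paper):

```python
# Timing breakdown reported for a typical 15K-vertex result mesh.
timings = {
    "preprocessing": 1.2,            # fiducial points, rigid alignment
    "non_rigid_registration": 83.4,  # dominant cost
    "retrieval": 7.16,               # features, warping, part matching
    "merging": 0.4,
}

total = sum(timings.values())
registration_share = timings["non_rigid_registration"] / total

print(round(total, 2))               # 92.16
print(round(registration_share, 2))  # 0.9, i.e. ~90% of the running time
```

This confirms the statement that replacing only the non-rigid registration step with a real-time method [33, 22] would remove the bulk of the cost.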
“…Face reconstruction There exists a vast body of research dedicated to facial reconstruction. One appealing method uses depth cameras for real-time reconstruction, computing a dense correspondence field between the input image and a generic face model [13]. More recent methods use deep-learning-based approaches from single images [22, 26].…”
Section: Previous Work (mentioning)
confidence: 99%
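The cited method [13] learns the correspondence field discriminatively; purely as an illustration of what a dense correspondence field is, the sketch below assigns each depth-derived 3D point to its nearest vertex on a template model via brute-force nearest neighbor. The function name `dense_correspondences` is a hypothetical helper, not the paper's algorithm:

```python
import numpy as np

def dense_correspondences(depth_points, template_vertices):
    """For each 3D point from the depth map, return the index of the
    closest template vertex (brute-force nearest neighbor)."""
    # Pairwise squared distances: (n_points, n_vertices)
    d2 = ((depth_points[:, None, :] - template_vertices[None, :, :]) ** 2).sum(axis=-1)
    return d2.argmin(axis=1)

# Toy example: two depth points, two template vertices.
pts = np.array([[0.0, 0.0, 0.0], [1.0, 1.0, 1.0]])
tpl = np.array([[0.0, 0.0, 0.1], [1.0, 1.0, 0.9]])
print(dense_correspondences(pts, tpl))  # [0 1]
```

A learned model replaces the brute-force search with a per-pixel prediction, which is what makes real-time performance feasible on full depth images.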
“…However, these methods may fail when such features cannot be detected under conditions of highly noisy depth data, extreme poses or large occlusions. A different family of approaches considers discriminative methods based on random forests [21], [22], deep Hough network [23], or finding a dense correspondence field between the input depth image and a predefined canonical face model [24], [33]. Although promising and often accurate, these methods require sophisticated supervised training with large-scale, tediously labeled datasets.…”
Section: Related Work (mentioning)
confidence: 99%
“…; (2) sustaining an always-on face tracker that can dynamically adapt to any user without manual recalibration; and (3) providing stability over time to variations in user expressions. Unlike previous depth-based discriminative or data-driven methods [18], [19], [20], [21], [22], [23], [24] that require complex training or manual calibration, in this paper we propose a framework that unifies pose tracking and face model adaptation on-the-fly, offering highly accurate, occlusion-aware and uninterrupted 3D facial pose tracking, as shown in Fig. 1.…”
Section: Introduction (mentioning)
confidence: 99%