2021
DOI: 10.1145/3450626.3459936
Learning an animatable detailed 3D face model from in-the-wild images

Abstract: While current monocular 3D face reconstruction methods can recover fine geometric details, they suffer several limitations. Some methods produce faces that cannot be realistically animated because they do not model how wrinkles vary with expression. Other methods are trained on high-quality face scans and do not generalize well to in-the-wild images. We present the first approach that regresses 3D face shape and animatable details that are specific to an individual but change with expression. Our model, DECA (…
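The abstract's key idea is that a person-specific detail code is decoded together with the expression parameters, so the same identity produces different wrinkle displacements under different expressions. A minimal sketch of that conditioning, with randomly initialized stand-in weights and hypothetical dimensions (none of these names or sizes are from the paper), could look like:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions, not taken from the paper
N_EXP, N_DETAIL, UV = 10, 16, 8

# Fixed random weights standing in for a trained detail decoder
W = 0.1 * rng.standard_normal((N_DETAIL + N_EXP, UV * UV))

def detail_displacement(detail_code, expression):
    """Expression-conditioned displacement map: the decoder sees both the
    person-specific detail code and the expression, so the same identity
    yields different wrinkles under different expressions."""
    z = np.concatenate([detail_code, expression])
    return np.tanh(z @ W).reshape(UV, UV)

identity_code = rng.standard_normal(N_DETAIL)   # person-specific, held fixed
neutral = np.zeros(N_EXP)
smile = np.zeros(N_EXP)
smile[0] = 2.0                                  # activate one expression unit

d_neutral = detail_displacement(identity_code, neutral)
d_smile = detail_displacement(identity_code, smile)
print(np.allclose(d_neutral, d_smile))  # False: details change with expression
```

This is only a schematic of the conditioning structure; the actual model regresses FLAME parameters and decodes a UV displacement map with learned convolutional networks.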

Cited by 336 publications (311 citation statements)
References 98 publications
“…There are also works that successfully handle extreme skin deformation [Chen et al. 2019; Wu et al. 2016]. DECA [Feng et al. 2021] further improves FLAME by additionally predicting individual-specific animatable details. High-fidelity facial details from real performers can be added to scanned facial models.…”
Section: Related Work
confidence: 99%
“…To the best of our knowledge, our approach is the first to provide video-driven facial geometries with dynamic physically-based texture assets. To demonstrate the video-driven performance of our approach, we compare against various video-driven facial reconstruction and animation methods, including the parametric facial model DECA proposed by Feng et al. [2021], a performer-specific method for production proposed by Laine et al. [2017], and the Deep Appearance Model proposed by Lombardi et al. [2018]. We adopt the official pre-trained PyTorch model for the method of Feng et al. [2021], and faithfully re-implement the methods of Laine et al. [2017] and Lombardi et al. [2018], training their models on the same training dataset as ours for a fair comparison.…”
Section: Comparisons
confidence: 99%