2021
DOI: 10.48550/arxiv.2112.07471
Preprint
I M Avatar: Implicit Morphable Head Avatars from Videos

Abstract: Traditional morphable face models provide fine-grained control over expression but cannot easily capture geometric and appearance details. Neural volumetric representations approach photo-realism but are hard to animate and do not generalize well to unseen expressions. To tackle this problem, we propose IMavatar (Implicit Morphable avatar), a novel method for learning implicit head avatars from monocular videos. Inspired by the fine-grained control mechanisms afforded by conventional 3DMMs, we represent the ex…

Cited by 3 publications (5 citation statements)
References 47 publications (101 reference statements)
“…In recent years, neural fields have emerged as the leading approach for scene representation [52], yielding remarkable results in novel view synthesis [26,28,33,39,46] and 3D reconstruction [30,41,53,56]. These techniques have found practical applications in modeling full head avatars [11,33,35,59]. By leveraging surface priors [31] and surface rendering [53], neural fields enable highly accurate 3D reconstructions of the full head, encompassing intricate details such as hair and shoulders [41,59].…”
Section: Related Work
confidence: 99%
“…These techniques have found practical applications in modeling full head avatars [11,33,35,59]. By leveraging surface priors [31] and surface rendering [53], neural fields enable highly accurate 3D reconstructions of the full head, encompassing intricate details such as hair and shoulders [41,59]. [23] employs a similar approach to construct implicit morphable faces with consistent texture parameterization and introduces single-shot inversion to obtain reconstructions from input images.…”
Section: Related Work
confidence: 99%
“…To model pose-dependent clothing deformations, SCANimate [53] proposes to transform scans to canonical space in a weakly supervised manner and to learn the implicit shape model conditioned on joint-angle rotations. Follow-up works further improve the generalization ability to unseen poses and accelerate the training process via a displacement network [56], deform the shape via a forward warping field [13,65] or leverage prior information from large-scale human datasets [58]. However, all of these methods require complete and registered 3D human scans for training, even if they sometimes can be fine-tuned on RGB-D data.…”
Section: Related Work
confidence: 99%
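The canonical-space pipeline quoted above hinges on a warping step that carries points from the canonical pose to the observed pose. A minimal sketch of such a forward warp, using plain linear blend skinning with hypothetical per-point weights and bone transforms (the cited methods learn these quantities rather than assume them):

```python
import numpy as np

def forward_lbs(x_canonical, weights, bone_transforms):
    """Warp canonical points into posed space with linear blend skinning.

    x_canonical:     (N, 3) points in canonical space
    weights:         (N, B) per-point skinning weights (rows sum to 1)
    bone_transforms: (B, 4, 4) rigid bone transforms for the current pose
    """
    n = x_canonical.shape[0]
    # Homogeneous coordinates so rotation and translation apply in one matmul.
    x_h = np.concatenate([x_canonical, np.ones((n, 1))], axis=1)  # (N, 4)
    # Blend the bone transforms per point: (N, 4, 4).
    blended = np.einsum('nb,bij->nij', weights, bone_transforms)
    # Apply each point's blended transform.
    return np.einsum('nij,nj->ni', blended, x_h)[:, :3]
```

Because the warp runs canonical-to-posed (forward), the same canonical geometry generalizes to unseen poses, which is the generalization advantage the quoted passage attributes to forward warping fields.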
“…IDR [53] combines an implicit signed distance field and differentiable neural rendering to generate high-quality rigid reconstructions from multi-view images. The concurrent IMavatar [54] extends IDR to learn implicit head avatars from monocular videos.…”
Section: Related Work
confidence: 99%
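IDR-style pipelines couple a signed distance field with differentiable rendering, which requires locating the ray-surface intersection of an implicit surface. A minimal, self-contained illustration of that intersection search, here via sphere tracing on an analytic sphere SDF rather than a learned network:

```python
import numpy as np

def sphere_sdf(p, radius=1.0):
    """Signed distance to a unit-ish sphere at the origin (toy SDF)."""
    return np.linalg.norm(p) - radius

def sphere_trace(origin, direction, sdf, max_steps=64, eps=1e-5):
    """March along the ray, stepping by the SDF value each iteration.

    The SDF value is a safe step size: no surface can be closer than it.
    Returns the intersection point, or None if the ray misses.
    """
    direction = direction / np.linalg.norm(direction)
    t = 0.0
    for _ in range(max_steps):
        p = origin + t * direction
        d = sdf(p)
        if d < eps:
            return p  # converged onto the zero level set
        t += d
    return None  # ray never reached the surface
```

In the actual methods the SDF is a neural network and the intersection is differentiated through (e.g. via implicit differentiation) so that image losses can supervise the geometry; this sketch only shows the geometric root-finding step.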
“…Specif-ically, SelfRecon utilizes a learnable signed distance field (SDF) rather than a template with fixed topology to represent the canonical shape. To improve the generalization of the deformation and reduce the optimization difficulty, we adopt the forward deformation to map canonical points to the current frame space [11,54]. During optimization, we periodically extract the explicit canonical mesh and warp it to each frame with the deformation fields.…”
Section: Introduction
confidence: 99%
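The periodic extract-then-warp step described above can be illustrated with a toy stand-in: sample a canonical SDF on a grid, keep near-surface samples (in place of a real marching-cubes mesh), and push them through a deformation field (here just a rigid translation; the cited methods learn an expression- and pose-conditioned field instead):

```python
import numpy as np

def sphere_sdf(p, radius=1.0):
    """Toy canonical shape: signed distance to a sphere. p is (..., 3)."""
    return np.linalg.norm(p, axis=-1) - radius

def extract_near_surface_points(sdf, lo=-1.5, hi=1.5, res=32, tol=0.05):
    """Crude stand-in for marching cubes: keep grid samples near the zero level set."""
    axis = np.linspace(lo, hi, res)
    grid = np.stack(np.meshgrid(axis, axis, axis, indexing='ij'), axis=-1)
    pts = grid.reshape(-1, 3)
    return pts[np.abs(sdf(pts)) < tol]

def warp_to_frame(points, translation):
    """Toy forward deformation field: a single per-frame rigid translation."""
    return points + translation
```

Extracting the mesh in canonical space and then warping it forward keeps the topology fixed across frames, which is what makes the per-frame supervision in the quoted pipeline tractable.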