2020
DOI: 10.1145/3414685.3417817

Dynamic facial asset and rig generation from a single scan

Abstract: The creation of high-fidelity computer-generated (CG) characters for films and games is tied with intensive manual labor, which involves the creation of comprehensive facial assets that are often captured using complex hardware. To simplify and accelerate this digitization process, we propose a framework for the automatic generation of high-quality dynamic facial models, including rigs which can be readily deployed for artists to polish. Our framework takes a single scan as input to generate a set of personali…

Cited by 28 publications (9 citation statements)
References 74 publications

“…Lattas et al [2021] manage to infer renderable photorealistic 3D faces from a single image. Since facial animation corresponds to a series of 3D scans, we can predict blendshapes from a single 3D scan of the neutral expression of the performer and then infer dynamic texture maps in a generative manner based on expression offsets [Li et al 2020b]. In contrast, we build dynamic textures using wrinkle maps, which allows us to maintain the authenticity of the high-resolution textures while avoiding the artifacts generated by super-resolution networks.…”
Section: Related Work
confidence: 99%
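
The wrinkle-map compositing described in the citation above, where a dynamic texture is obtained by blending per-expression wrinkle detail onto the neutral high-resolution map, can be illustrated with a minimal sketch. The function name, array shapes, and the linear blending rule below are illustrative assumptions for this sketch only; they are not code from the citing paper or from Li et al [2020b].

# Minimal sketch of expression-driven wrinkle-map compositing (illustrative only).
import numpy as np

def composite_dynamic_texture(neutral_map: np.ndarray,
                              wrinkle_maps: np.ndarray,
                              expression_weights: np.ndarray) -> np.ndarray:
    """Blend K wrinkle delta maps onto a neutral texture.

    neutral_map:        (H, W, C) neutral albedo or displacement map.
    wrinkle_maps:       (K, H, W, C) per-expression delta maps holding wrinkle detail only.
    expression_weights: (K,) activation of each expression in [0, 1],
                        e.g. derived from blendshape coefficients (assumed here).
    """
    weights = np.clip(expression_weights, 0.0, 1.0)
    # Weighted sum of wrinkle detail, added on top of the neutral map.
    dynamic_detail = np.tensordot(weights, wrinkle_maps, axes=1)  # (H, W, C)
    return neutral_map + dynamic_detail

# Toy usage with random arrays standing in for captured texture maps.
if __name__ == "__main__":
    H, W, C, K = 256, 256, 3, 4
    rng = np.random.default_rng(0)
    neutral = rng.random((H, W, C)).astype(np.float32)
    wrinkles = 0.1 * rng.standard_normal((K, H, W, C)).astype(np.float32)
    weights = np.array([0.8, 0.0, 0.2, 0.0], dtype=np.float32)
    frame_texture = composite_dynamic_texture(neutral, wrinkles, weights)
    print(frame_texture.shape)  # (256, 256, 3)
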
“…Recent deep learning techniques [Lattas et al 2021; Li et al 2020b,a] have shown promising results in enabling automated facial asset generation directly from captured data with little human intervention, bringing huge potential to revolutionize production-level workflows. In particular, the data-driven strategy employed by learning-based techniques can handle datasets of different scales, as big as thousands of subjects for producing a generic model or as small as a single subject with different expressions and lighting conditions for portrait animation.…”
Section: Introduction
confidence: 99%
“…Hao Li and colleagues [24] pioneer example-based rigging from a handful of training poses of the new identity. The most impressive advance in the field is the work of Jiaman Li and colleagues [25], who generate personalised blendshapes with the corresponding dynamic textures from a single neutral scan. Their approach is based on training two cascaded neural networks, specifically one for the personalised blendshape generation followed by another for the generation of dynamic texture maps (albedo, specular intensity, displacement), tailored to the expression geometries.…”
Section: Related Work
confidence: 99%
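
The two-stage cascade this citation describes, one network producing personalised blendshapes from a neutral scan and a second producing dynamic texture maps from the resulting expression geometry, can be sketched as follows. The module names, MLP architectures, vertex count, texture resolution, and channel layout are all assumptions made for this sketch; they do not reproduce the networks of Li et al [2020b].

# Minimal sketch of a two-stage cascade: neutral scan -> personalised blendshapes,
# then expression geometry -> dynamic texture maps (albedo, specular intensity,
# displacement). Illustrative only; not the architecture of the cited paper.
import torch
import torch.nn as nn

N_VERTS = 1000        # assumed vertex count of the registered face mesh
N_BLENDSHAPES = 25    # assumed number of personalised expression blendshapes
TEX_RES = 64          # assumed (tiny) texture resolution for the sketch

class BlendshapeNet(nn.Module):
    """Stage 1: neutral geometry -> per-blendshape vertex offsets."""
    def __init__(self):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(N_VERTS * 3, 256), nn.ReLU(),
            nn.Linear(256, N_BLENDSHAPES * N_VERTS * 3),
        )

    def forward(self, neutral_verts):                     # (B, N_VERTS, 3)
        offsets = self.mlp(neutral_verts.flatten(1))
        return offsets.view(-1, N_BLENDSHAPES, N_VERTS, 3)

class DynamicTextureNet(nn.Module):
    """Stage 2: expression geometry -> 5-channel texture stack
    (3 albedo + 1 specular intensity + 1 displacement)."""
    def __init__(self):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(N_VERTS * 3, 256), nn.ReLU(),
            nn.Linear(256, 5 * TEX_RES * TEX_RES),
        )

    def forward(self, expression_verts):                  # (B, N_VERTS, 3)
        maps = self.mlp(expression_verts.flatten(1))
        return maps.view(-1, 5, TEX_RES, TEX_RES)

if __name__ == "__main__":
    neutral = torch.randn(1, N_VERTS, 3)                  # stand-in for a neutral scan
    coeffs = torch.rand(1, N_BLENDSHAPES, 1, 1)           # stand-in expression weights
    blendshapes = BlendshapeNet()(neutral)                # (1, K, V, 3)
    expression = neutral + (coeffs * blendshapes).sum(1)  # linear blendshape model
    textures = DynamicTextureNet()(expression)            # (1, 5, 64, 64)
    print(blendshapes.shape, textures.shape)

The toy MLPs only illustrate how the second stage is conditioned on the expression geometry produced by the first; they stand in for, and do not resemble, the generative texture networks described in the citation.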