2022
DOI: 10.1109/tpami.2021.3125598

AvatarMe++: Facial Shape and BRDF Inference With Photorealistic Rendering-Aware GANs

Cited by 31 publications (19 citation statements) | References 63 publications
“…On the NoW challenge [64], DECA [25] and the method of Deng et al. [20] show on-par state-of-the-art results. Similar to DECA's offset prediction, there are GAN-based methods that predict detailed color maps [32,33] or skin properties [48,49,63,85] (e.g., albedo, reflectance, normals) in UV-space of a 3DMM-based face reconstruction. In contrast to these methods, we are interested in reconstructing a metrical 3D representation of a human face and not fine-scale details.…”
Section: Regression-based Reconstruction of Human Faces
confidence: 99%
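To make the excerpt's notion of predicting skin properties "in UV-space of a 3DMM" concrete, here is a minimal sketch of sampling a UV-space property map onto mesh vertices. The array names, shapes, and the nearest-neighbor lookup are assumptions for illustration, not any cited paper's implementation.

```python
import numpy as np

# Hypothetical inputs: a UV-space albedo map (as a network might predict it)
# and per-vertex UV coordinates from a 3DMM template mesh.
albedo_uv = np.random.rand(512, 512, 3)   # H x W x 3 property map in UV-space
vertex_uv = np.random.rand(1000, 2)       # per-vertex (u, v) in [0, 1]

def sample_uv(prop_map, uv):
    """Nearest-neighbor lookup of a UV-space property map at given UVs."""
    h, w = prop_map.shape[:2]
    # Convert normalized UVs to pixel indices (v flipped: UV origin is bottom-left).
    cols = np.clip((uv[:, 0] * (w - 1)).round().astype(int), 0, w - 1)
    rows = np.clip(((1.0 - uv[:, 1]) * (h - 1)).round().astype(int), 0, h - 1)
    return prop_map[rows, cols]

vertex_albedo = sample_uv(albedo_uv, vertex_uv)  # (1000, 3) per-vertex albedo
```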
“…AvatarMe can generate 4K-resolution diffuse albedo, diffuse normal, specular albedo, and specular normal texture maps. AvatarMe++ [LMP*21] improves upon AvatarMe [LMG*20]. In contrast to the other methods, Bao et al. [BLC*21] used an RGB-D selfie video input acquired by a consumer smartphone.…”
Section: Related Work
confidence: 99%
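As a rough illustration of how the four texture maps named in the excerpt can enter a shading computation, the sketch below uses a generic Blinn-Phong-style split into diffuse and specular terms. This is an assumed stand-in, not AvatarMe++'s actual BRDF; all names and the shininess exponent are invented for the example.

```python
import numpy as np

def shade(diff_albedo, diff_normal, spec_albedo, spec_normal,
          light_dir, view_dir, shininess=32.0):
    """Blinn-Phong-style shading from per-pixel texture-map values.

    All inputs except `shininess` are (..., 3) arrays; normals and
    light/view directions are assumed unit-length.
    """
    # Diffuse term: Lambertian falloff against the diffuse normal.
    n_dot_l = np.clip(np.sum(diff_normal * light_dir, axis=-1, keepdims=True), 0.0, None)
    diffuse = diff_albedo * n_dot_l

    # Specular term: half-vector lobe against the specular normal.
    half = light_dir + view_dir
    half = half / np.linalg.norm(half, axis=-1, keepdims=True)
    n_dot_h = np.clip(np.sum(spec_normal * half, axis=-1, keepdims=True), 0.0, None)
    specular = spec_albedo * n_dot_h ** shininess

    return diffuse + specular
```

Splitting albedo and normals into separate diffuse and specular maps is what lets such reconstructions be relit plausibly under new illumination.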
“…They further apply super-resolution networks to produce textures that contain pore-level geometry. Lattas et al. [2021] manage to infer renderable photorealistic 3D faces from a single image. Since facial animation corresponds to a series of 3D scans, we can predict blendshapes from a single 3D scan of the neutral expression of the performer and then infer dynamic texture maps in a generative manner based on expression offsets [Li et al. 2020b].…”
Section: Related Work
confidence: 99%
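The blendshape prediction mentioned in the excerpt rests on the standard linear blendshape model, deformed = neutral + Σᵢ wᵢ · offsetᵢ. A minimal sketch follows, with all shapes and names chosen purely for illustration.

```python
import numpy as np

# Illustrative sizes: N = 5000 vertices, K = 50 blendshapes.
neutral = np.zeros((5000, 3))                   # neutral-expression vertices (N, 3)
offsets = np.random.randn(50, 5000, 3) * 0.01   # per-blendshape vertex offsets (K, N, 3)
weights = np.random.rand(50)                    # expression coefficients (K,)

def apply_blendshapes(neutral, offsets, weights):
    """Combine a neutral mesh with weighted blendshape offsets."""
    # Contract the blendshape axis: (K,) x (K, N, 3) -> (N, 3).
    return neutral + np.tensordot(weights, offsets, axes=1)

deformed = apply_blendshapes(neutral, offsets, weights)
```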
“…GAN-based methods [Gecer et al. 2019; Lattas et al. 2021; Li et al. 2020a] separate shape and expression or use photorealistic differentiable rendering-based training to enhance details in model generation. Other identity-agnostic methods [Abrevaya et al. 2020; Burkov et al. 2020; Nirkin et al. 2019] aim to enhance identity independence by imposing constraints in the latent space.…”
Section: Related Work
confidence: 99%
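The "photorealistic differentiable rendering-based training" in the excerpt generally amounts to a photometric loss computed through a differentiable renderer, so that image-space error backpropagates into the generator's shape and texture predictions. The sketch below assumes a generic renderer callable named `render`; it is not the training objective of any specific cited method.

```python
import torch

def rendering_aware_loss(render, pred_shape, pred_textures, target_image):
    """Photometric L1 loss through a differentiable renderer.

    `render` is assumed to be any differentiable function mapping
    (shape, textures) to an image; gradients flow back through it
    into the network's predictions.
    """
    rendered = render(pred_shape, pred_textures)
    return torch.nn.functional.l1_loss(rendered, target_image)
```

In practice such a photometric term is typically combined with adversarial losses on the rendered or predicted textures, which is what makes the GAN training "rendering-aware".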