2023
DOI: 10.1007/978-3-031-25066-8_39
AvatarGen: A 3D Generative Model for Animatable Human Avatars

Abstract: [Figure 1: (a) varying camera parameters; (b) varying body poses.] GETAvatar generates controllable human avatars with diverse textures and detailed geometries under full control over camera poses and body poses. Please refer to the Appendix for more multi-view and animation results.

Cited by 32 publications (16 citation statements)
References 62 publications
“…Specifically, EG3D [5], which we build our work upon, introduces a tri-plane representation that leverages a 2D GAN backbone to generate an efficient 3D representation and is shown to outperform other 3D representations [38]. Parallel to these works, another thread of studies [30,41,46,50] has been working on controllable 3D GANs that can manipulate the generated 3D faces or bodies.…”
Section: Related Work
confidence: 99%
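The tri-plane representation mentioned above stores 3D scene features on three axis-aligned 2D feature planes; a 3D point is featurized by projecting it onto each plane, sampling, and aggregating. A minimal sketch of that lookup is below, assuming nearest-neighbour sampling and summation for brevity (EG3D uses bilinear interpolation and feeds the aggregated feature to a small decoder MLP; the function and shapes here are illustrative, not the paper's API):

```python
import numpy as np

def sample_triplane(planes, points):
    """Sample features for 3D points from three axis-aligned feature planes.

    planes: array of shape (3, C, H, W) -- the xy, xz, and yz feature planes
    points: array of shape (N, 3), coordinates normalized to [-1, 1]
    Returns an (N, C) array: the sum of the three per-plane samples.
    """
    _, C, H, W = planes.shape
    # Orthographically project each point onto the three canonical planes.
    projections = [points[:, [0, 1]], points[:, [0, 2]], points[:, [1, 2]]]
    feats = np.zeros((points.shape[0], C))
    for plane, uv in zip(planes, projections):
        # Map normalized coords [-1, 1] to pixel indices (nearest neighbour
        # here; the actual model interpolates bilinearly).
        cols = np.clip(((uv[:, 0] + 1) / 2 * (W - 1)).round().astype(int), 0, W - 1)
        rows = np.clip(((uv[:, 1] + 1) / 2 * (H - 1)).round().astype(int), 0, H - 1)
        feats += plane[:, rows, cols].T  # gather (C, N), transpose to (N, C)
    return feats
```

The appeal noted in the statement is that the three planes are plain 2D feature maps, so a standard 2D GAN generator can produce them, while queries remain cheap compared with a dense 3D voxel grid.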
“…Following EG3D, there are also some methods aiming at full-body human generation [17,59]. Since their adversarial losses are applied only to the rendered images and there is no explicit supervision on the geometry, the produced shapes are far from satisfactory.…”
Section: Related Work
confidence: 99%
“…However, the training relies on annotated multi-view datasets, whereas our approach learns disentangled 3D head synthesis from only single-view images. There has been concurrent work [4,49,51,58,64] to ours on 3D-aware controllable face or full-body GANs. Unlike these approaches, we use a full-head parametric model, FLAME [29], and fully exploit spatial geometric prior knowledge beyond surface deformation and skinning.…”
Section: Related Work
confidence: 99%