2019
DOI: 10.48550/arxiv.1903.10384
Preprint

MeshGAN: Non-linear 3D Morphable Models of Faces

Abstract: Generative Adversarial Networks (GANs) are currently the method of choice for generating visual data. Certain GAN architectures and training methods have demonstrated exceptional performance in generating realistic synthetic images (in particular, of human faces). However, for 3D objects, GANs still fall short of the success they have had with images. One reason is that, so far, GANs have been applied as 3D convolutional architectures to discrete volumetric representations of 3D objects. …
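
The abstract contrasts mesh-based generation with earlier GANs that apply 3D convolutions to discrete voxel grids. As a rough illustration of that volumetric style (not the paper's method), here is a minimal sketch, assuming PyTorch, a 32^3 occupancy grid, and an arbitrary latent size of 128; none of these values come from the paper.

import torch
import torch.nn as nn

class VoxelGenerator(nn.Module):
    def __init__(self, z_dim=128):
        super().__init__()
        self.net = nn.Sequential(
            # project the latent code to a 4x4x4 feature volume, then upsample to 32^3
            nn.ConvTranspose3d(z_dim, 256, kernel_size=4, stride=1, padding=0),
            nn.BatchNorm3d(256), nn.ReLU(),
            nn.ConvTranspose3d(256, 128, kernel_size=4, stride=2, padding=1),  # 8^3
            nn.BatchNorm3d(128), nn.ReLU(),
            nn.ConvTranspose3d(128, 64, kernel_size=4, stride=2, padding=1),   # 16^3
            nn.BatchNorm3d(64), nn.ReLU(),
            nn.ConvTranspose3d(64, 1, kernel_size=4, stride=2, padding=1),     # 32^3
            nn.Sigmoid(),  # per-voxel occupancy probability
        )

    def forward(self, z):
        return self.net(z.view(z.size(0), -1, 1, 1, 1))

voxels = VoxelGenerator()(torch.randn(4, 128))  # -> (4, 1, 32, 32, 32) occupancy grids

Memory and compute for such grids grow cubically with resolution, which is one practical reason to operate directly on mesh vertices instead.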

Cited by 18 publications (21 citation statements)
References 57 publications (72 reference statements)
“…The parameters of the 3DMM can be further disentangled into multiple dimensions such as identity, expression, appearance, and pose. In recent years, several works have tried to enhance the representation power of the 3DMM by using a non-linear mapping [2,7,42,44-46], which is more powerful at representing detailed shape and appearance than the traditional linear mapping. However, they still suffer from the mesh representation, which makes it hard to model the fine geometry of pupils, eyelashes and hair.…”
Section: Related Work (mentioning)
confidence: 99%
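
To make the linear-versus-non-linear contrast above concrete, the following is a minimal NumPy sketch; the basis names and sizes (mean_shape, B_id, B_exp, 80 identity and 29 expression coefficients) are illustrative assumptions rather than values from the cited works.

import numpy as np

n_vertices = 5023                              # hypothetical face mesh size
mean_shape = np.zeros(3 * n_vertices)          # flattened (x, y, z) per vertex
B_id  = np.random.randn(3 * n_vertices, 80)    # identity basis (e.g. PCA)
B_exp = np.random.randn(3 * n_vertices, 29)    # expression basis

def linear_3dmm(alpha_id, alpha_exp):
    # classical 3DMM: a linear combination of identity and expression coefficients
    return mean_shape + B_id @ alpha_id + B_exp @ alpha_exp

def nonlinear_decoder(alpha, W1, b1, W2, b2):
    # minimal non-linear mapping: one hidden layer with a ReLU
    h = np.maximum(0.0, W1 @ alpha + b1)
    return mean_shape + W2 @ h + b2

shape_lin = linear_3dmm(np.random.randn(80), np.random.randn(29)).reshape(n_vertices, 3)
W1, b1 = np.random.randn(256, 80) * 0.01, np.zeros(256)
W2, b2 = np.random.randn(3 * n_vertices, 256) * 0.01, np.zeros(3 * n_vertices)
shape_nl = nonlinear_decoder(np.random.randn(80), W1, b1, W2, b2).reshape(n_vertices, 3)
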
“…An alternative line of work considers generative adversarial networks (GANs) instead of autoencoders. The first GAN operating on 3D meshes was proposed in [7], and it allowed identity to be disentangled from expression as generative factors. Other methods usually map 3D shapes to the image domain and then train adversarial networks with traditional 2D convolutions [1,15,24].…”
Section: Related Work (mentioning)
confidence: 99%
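
The identity/expression disentanglement described above can be illustrated with a hypothetical generator that takes two separate latent codes. The cited work uses graph convolutions on meshes; this sketch uses plain linear layers purely for brevity, and all dimensions are made up.

import torch
import torch.nn as nn

class MeshGenerator(nn.Module):
    def __init__(self, id_dim=64, exp_dim=32, n_vertices=5023):
        super().__init__()
        self.n_vertices = n_vertices
        self.decode = nn.Sequential(
            nn.Linear(id_dim + exp_dim, 512), nn.ReLU(),
            nn.Linear(512, 3 * n_vertices),
        )

    def forward(self, z_id, z_exp):
        z = torch.cat([z_id, z_exp], dim=-1)   # separate, disentangled latent codes
        return self.decode(z).view(-1, self.n_vertices, 3)

g = MeshGenerator()
z_id = torch.randn(1, 64)
mesh_a = g(z_id, torch.randn(1, 32))           # same identity...
mesh_b = g(z_id, torch.randn(1, 32))           # ...two different expressions

Feeding the same z_id with different z_exp yields meshes of the same subject under different expressions, which is the disentanglement property the statement refers to.
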
“…Even though these tools greatly simplify the design process, they are usually limited in flexibility because of the intrinsic constraints of the underlying generative models [17]. Blendshapes [28,31,38], 3D morphable models [5,25,33], autoencoders [3,8,16,35], and generative adversarial networks [1,7,15,24] are currently the most widely used generative models, but they all share one particular issue: the creation of local features is difficult or even impossible. In fact, not only do generative coefficients (or latent variables) lack any semantic meaning, but they also create global changes in the output shape.…”
Section: Introduction (mentioning)
confidence: 99%
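
The "global changes" issue raised above can be demonstrated with a toy linear model: because each basis vector has dense support over the mesh, editing a single generative coefficient displaces essentially every vertex. The random basis below is purely illustrative.

import numpy as np

n_vertices = 5023
mean_shape = np.random.randn(3 * n_vertices)
basis = np.random.randn(3 * n_vertices, 80) * 0.01   # dense linear basis (toy)

coeffs = np.zeros(80)
base = (mean_shape + basis @ coeffs).reshape(n_vertices, 3)

coeffs[0] += 1.0                                     # edit ONE coefficient
edited = (mean_shape + basis @ coeffs).reshape(n_vertices, 3)

moved = np.linalg.norm(edited - base, axis=1) > 1e-6
print(f"{moved.mean():.1%} of vertices moved")       # ~100%: the edit is global, not local
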
“…Such models usually assume a linear combination of the different attributes, which limits their disentanglement abilities given the inherently non-linear nature of face shape variations. Aware of such limitations, recent works have proposed non-linear models of 3D face shapes [13,8,5,20]. [20] proposed a Graph Convolutional Autoencoder (GCA) that makes use of spectral graph convolutions [6] to encode 3D face shapes into non-linear latent representations.…”
Section: Introduction (mentioning)
confidence: 99%
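
A spectral graph convolution of the kind referenced here, i.e. Chebyshev polynomial filtering of a rescaled graph Laplacian, can be sketched in a few lines. The dense Laplacian stand-in, filter order K, and weight initialization are illustrative assumptions, not the cited architecture.

import torch
import torch.nn as nn

class ChebConv(nn.Module):
    def __init__(self, in_ch, out_ch, K=6):
        super().__init__()
        # one weight matrix per Chebyshev polynomial order
        self.weights = nn.Parameter(torch.randn(K, in_ch, out_ch) * 0.01)

    def forward(self, x, L):
        # x: (n_vertices, in_ch) features, L: rescaled Laplacian with spectrum in [-1, 1]
        Tx = [x, L @ x]                               # T_0(L)x and T_1(L)x
        for _ in range(2, self.weights.size(0)):
            Tx.append(2 * (L @ Tx[-1]) - Tx[-2])      # Chebyshev recurrence
        return sum(t @ w for t, w in zip(Tx, self.weights))

n = 100
L = torch.randn(n, n); L = (L + L.T) / n              # stand-in for a rescaled Laplacian
features = ChebConv(3, 16)(torch.randn(n, 3), L)      # vertex positions in, 16 channels out
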
“…[20] proposed a Graph Convolutional Autoencoder (GCA) making use of spectral graph convolutions [6] to encode 3D face shapes into non-linear latent representations. [8] presented an intrinsic Generative Adversarial Network (GAN) architecture, named MeshGAN, that operates directly on 3D face meshes using a strategy similar to [20]. Their proposed method allows the generation of new identities and expressions.…”
Section: Introduction (mentioning)
confidence: 99%
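
For orientation, one generic adversarial training step in which both real and generated samples are flattened vertex tensors rather than images might look as follows; the loss, optimizers, and layer sizes are standard GAN defaults chosen for this sketch, not necessarily those of MeshGAN.

import torch
import torch.nn as nn

n_vertices, z_dim, batch = 5023, 96, 8
G = nn.Sequential(nn.Linear(z_dim, 512), nn.ReLU(), nn.Linear(512, 3 * n_vertices))
D = nn.Sequential(nn.Linear(3 * n_vertices, 512), nn.ReLU(), nn.Linear(512, 1))
opt_g = torch.optim.Adam(G.parameters(), lr=1e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-4)
bce = nn.BCEWithLogitsLoss()

real = torch.randn(batch, 3 * n_vertices)      # stand-in for registered face meshes
z = torch.randn(batch, z_dim)

# discriminator step: real meshes vs. generated meshes
d_loss = bce(D(real), torch.ones(batch, 1)) + bce(D(G(z).detach()), torch.zeros(batch, 1))
opt_d.zero_grad(); d_loss.backward(); opt_d.step()

# generator step: try to fool the discriminator
g_loss = bce(D(G(z)), torch.ones(batch, 1))
opt_g.zero_grad(); g_loss.backward(); opt_g.step()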