2019
DOI: 10.1007/s11263-019-01210-3

GANimation: One-Shot Anatomically Consistent Facial Animation

Abstract: Recent advances in Generative Adversarial Networks (GANs) have shown impressive results for the task of facial expression synthesis. The most successful architecture is StarGAN [5], which conditions the GAN's generation process on images of a specific domain, namely a set of images of people sharing the same expression. While effective, this approach can only generate a discrete number of expressions, determined by the content and granularity of the dataset. To address this limitation, in this paper, we introduce…
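
The discrete-vs-continuous distinction in the abstract comes down to what the generator is conditioned on. A common way to condition an image-to-image generator (used both by StarGAN-style discrete domain labels and by continuous expression parameterizations) is to tile the condition vector over the spatial grid and concatenate it to the image channels. The snippet below is a minimal NumPy sketch of that input preparation under assumed channels-first shapes, not the authors' implementation:

```python
import numpy as np

def condition_input(image, cond):
    """Tile a condition vector over the spatial grid and concatenate it
    to the image channels (channels-first layout: C x H x W).

    image: float array of shape (C, H, W)
    cond:  1-D condition vector, e.g. a one-hot expression label
           (discrete, StarGAN-style) or a vector of continuous
           activation strengths in [0, 1].
    """
    c, h, w = image.shape
    cond_maps = np.tile(cond[:, None, None], (1, h, w))  # (len(cond), H, W)
    return np.concatenate([image, cond_maps], axis=0)    # (C+len(cond), H, W)

img = np.zeros((3, 128, 128), dtype=np.float32)

# Discrete conditioning: one-hot label over three predefined expressions.
one_hot = np.array([0.0, 1.0, 0.0])
print(condition_input(img, one_hot).shape)  # (6, 128, 128)

# Continuous conditioning: arbitrary intermediate activation strengths,
# which a discrete label set cannot express.
au_vec = np.array([0.2, 0.0, 0.9, 0.5])
print(condition_input(img, au_vec).shape)   # (7, 128, 128)
```

With discrete labels the generator only ever sees a fixed set of condition vectors; with a continuous vector, any intermediate expression intensity becomes a valid input.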

Cited by 88 publications (113 citation statements)
References 36 publications
“…In contrast, our proposed method preserved the cat's original features (see third row of Figure 1). Figure 21 compares the proposed method with the expression transfer results of GANimation (Pumarola et al 2019). Fig.…”
Section: Comparison With Generative Adversarial Network
confidence: 99%
“…This becomes more challenging if the trained models are to be deployed in resource-constrained environments such as mobile devices and embedded systems with limited memory, computational power, and stored energy. A comparison of the proposed algorithm with four state-of-the-art GAN models, including Pix2Pix (Isola et al 2017), CycleGAN (Zhu et al 2017), StarGAN (Choi et al 2018) and GANimation (Pumarola et al 2019), is shown in Table 1. The proposed algorithm has more than two orders of magnitude fewer parameters than each of these GANs.…”
Section: Introduction
confidence: 99%
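
The parameter-count comparison in the statement above can be made concrete with a small helper. This is a generic sketch for counting 2-D convolution parameters; the layer shapes below are illustrative and do not correspond to any of the cited models:

```python
def conv2d_params(in_ch, out_ch, k, bias=True):
    """Parameters in a 2-D convolution: one k x k kernel per
    (input channel, output channel) pair, plus an optional bias
    per output channel."""
    return in_ch * out_ch * k * k + (out_ch if bias else 0)

# Illustrative encoder stack (channel widths chosen arbitrarily):
layers = [(3, 64, 7), (64, 128, 4), (128, 256, 4)]
total = sum(conv2d_params(i, o, k) for i, o, k in layers)
print(total)  # 665216
```

Because the count grows with the product of channel widths, halving every width cuts the total roughly fourfold, which is how lightweight models reach orders-of-magnitude reductions.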
“…Thanks to advances in deep learning techniques and the availability of large datasets, quite realistic images can be generated by these approaches. The majority of these approaches are encoder-decoder networks that receive an input image and are conditioned on the modification parameters, such as facial expressions [13]. An example of generative face reenactment is the work of [5], where an avatar human agent dynamically interacts with users.…”
Section: Data Driven Approaches
confidence: 99%
“…Often the contribution is a new architecture that is better suited to a particular vision task (e.g. Law and Deng 2019, Veit and Belongie 2019, Esteves et al 2019) or the definition of a new visual task that can be tackled with this powerful tool (Harwath et al 2019, Pumarola et al 2019). The flow of ideas between Computer Vision and Deep Learning continues to be bidirectional, and one of the selected papers defines a new learning procedure that may prove to be useful beyond computer vision (Wu and He 2019).…”
confidence: 99%