AnimeGAN: A Novel Lightweight GAN for Photo Animation (2020)
DOI: 10.1007/978-981-15-5577-0_18

Cited by 62 publications (45 citation statements). References 17 publications.
“…Kim et al. [6] proposed the AdaLIN process, in which a trainable parameter adaptively selects the ratio between instance normalization and layer normalization, making it possible to construct robust translation models that can handle tasks requiring large shape changes, such as selfie-to-anime translation. New architectures have also been developed [7,18] that use a pretrained VGG19 [10] to compute the content loss.…”
Section: One-to-one Domain Translation (mentioning)
confidence: 99%
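As a concrete illustration of the AdaLIN mechanism described in the statement above, here is a minimal PyTorch-style sketch, assuming the formulation from Kim et al.'s U-GAT-IT: a learnable ratio rho mixes instance-normalized and layer-normalized activations before style parameters gamma and beta are applied. The class name, shapes, and initialization are illustrative assumptions, not the authors' reference code.

```python
# Sketch of AdaLIN (adaptive layer-instance normalization), assuming the
# U-GAT-IT formulation; not the cited authors' reference implementation.
import torch
import torch.nn as nn

class AdaLIN(nn.Module):
    def __init__(self, num_features, eps=1e-5):
        super().__init__()
        self.eps = eps
        # rho is the trainable ratio between instance and layer normalization.
        self.rho = nn.Parameter(torch.full((1, num_features, 1, 1), 0.9))

    def forward(self, x, gamma, beta):
        # Instance-norm statistics: per sample, per channel, over H and W.
        in_mean = x.mean(dim=(2, 3), keepdim=True)
        in_var = x.var(dim=(2, 3), keepdim=True)
        x_in = (x - in_mean) / torch.sqrt(in_var + self.eps)
        # Layer-norm statistics: per sample, over C, H, and W.
        ln_mean = x.mean(dim=(1, 2, 3), keepdim=True)
        ln_var = x.var(dim=(1, 2, 3), keepdim=True)
        x_ln = (x - ln_mean) / torch.sqrt(ln_var + self.eps)
        # Adaptively mix the two normalizations, then apply the style
        # parameters gamma/beta predicted elsewhere in the network.
        rho = self.rho.clamp(0, 1)
        x_hat = rho * x_in + (1 - rho) * x_ln
        b = x.size(0)
        return x_hat * gamma.view(b, -1, 1, 1) + beta.view(b, -1, 1, 1)
```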
“…GANs are a specialized class of deep generative models that learn to generate novel data samples from random noise while matching a given data distribution [12]. For instance, a GAN trained on a dataset of anime character images can generate novel anime characters that look highly authentic, at least superficially [4]. Their flexibility and wide range of applications make GANs a popular choice for generation.…”
Section: Deep Learning (mentioning)
confidence: 99%
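The adversarial training the statement summarizes can be condensed into one alternating update step. This is a minimal sketch assuming the standard non-saturating GAN objective; G, D, the noise dimension, and the optimizers are placeholder assumptions, not any cited paper's exact setup.

```python
# One alternating GAN update: train D to separate real from generated
# samples, then train G to fool D (non-saturating loss).
import torch
import torch.nn.functional as F

def gan_step(G, D, real, z_dim, opt_g, opt_d):
    b = real.size(0)
    z = torch.randn(b, z_dim)          # random noise input to the generator
    fake = G(z)
    ones, zeros = torch.ones(b, 1), torch.zeros(b, 1)
    # Discriminator: classify real images as 1, generated images as 0.
    # fake.detach() keeps this update from flowing back into G.
    d_loss = (F.binary_cross_entropy_with_logits(D(real), ones)
              + F.binary_cross_entropy_with_logits(D(fake.detach()), zeros))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()
    # Generator: maximize the discriminator's belief that fakes are real.
    g_loss = F.binary_cross_entropy_with_logits(D(fake), ones)
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()
```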
“…We use Precision, Recall, and F1-Score to evaluate the performance of the different architectures described above. Table 2 compiles the results of our hate speech experiments. It includes information about the input modalities involved, the experiment setting (binary/multi), and the type of fusion module used.…”
Section: Hate Speech Detection (mentioning)
confidence: 99%
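For reference, the three metrics cited here reduce to counts of true positives, false positives, and false negatives. A dependency-free sketch for the binary setting (inputs assumed to be 0/1 sequences):

```python
# Precision = TP / (TP + FP), Recall = TP / (TP + FN),
# F1 = harmonic mean of precision and recall.
def precision_recall_f1(y_true, y_pred):
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1
```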
“…For example, animation colorization has been widely studied [15,23,29] for practical applications; there, the authors reduce the human labor and time required for colorization. Furthermore, generative adversarial networks [16] promote the development of deep learning models for generative tasks in the animation field, e.g., style transfer [8,9], image generation [20,40], and video interpolation [34]. We believe that AnimeCeleb can boost research progress on various tasks in the animation domain.…”
Section: Related Work (mentioning)
confidence: 99%
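The style-transfer line of work cited above, like the VGG19 content loss mentioned in the first statement, typically anchors the generated image to the input photo in a pretrained feature space. A minimal sketch of such a perceptual content loss; the layer choice (conv4_4) and L1 distance follow AnimeGAN-style setups and are assumptions, not the cited papers' exact configuration.

```python
# Content loss on frozen VGG19 features: high-level activations of the
# generated image should stay close to those of the source photo.
import torch.nn as nn
from torchvision.models import vgg19, VGG19_Weights

class VGGContentLoss(nn.Module):
    def __init__(self, layer_index=26):  # slice up to conv4_4 (index 25)
        super().__init__()
        features = vgg19(weights=VGG19_Weights.IMAGENET1K_V1).features[:layer_index]
        for p in features.parameters():
            p.requires_grad_(False)  # the feature extractor stays frozen
        self.features = features.eval()

    def forward(self, generated, content):
        # L1 distance between deep feature maps preserves semantic content
        # while leaving low-level texture free to change style.
        return nn.functional.l1_loss(self.features(generated),
                                     self.features(content))
```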