2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) 2020
DOI: 10.1109/cvpr42600.2020.00935
MineGAN: Effective Knowledge Transfer From GANs to Target Domains With Few Images

Cited by 145 publications (131 citation statements). References 13 publications.
“…Fine-tuning provides training-quality visual feedback more quickly and thus offers a more efficient way to learn model behavior and hyperparameters. It has been reported in GANs-related studies [48,49] that transfer learning is beneficial in saving a large amount of training time and computation resources. In this study, we adopted the transfer learning in all the experiments.…”
Section: Improved Strategy
confidence: 99%
“…GANs are capable of image generation in two categories: low-resolution [4,8,12,13,15,16,23] and high-resolution [14,17,31–41]. A summary of these approaches is presented in Table 1.…”
Section: Related Work
confidence: 99%
“…Transfer learning on generative models for limited data has been studied for the last three years [33,34,38–41], focusing on evaluating the impact of freezing the lower generator layers [33,34], the lower discriminator layers [39], and both the lower generator and discriminator layers [40], mainly using general-purpose datasets of indoor scenes (e.g., LSUN Bedrooms) and faces (e.g., CelebA-HQ, FFHQ, CelebA). The results show that the transferred knowledge reduces both overfitting and training time.…”
Section: Related Work
confidence: 99%
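The layer-freezing strategies surveyed in the excerpt above can be sketched in a few lines of PyTorch. The toy generator below is a hypothetical stand-in, not the architecture of [33,34,39,40]; it only illustrates how the lower layers are excluded from the optimizer during fine-tuning on a small target dataset.

```python
# Hedged sketch: freeze the lower layers of a pretrained generator and
# fine-tune only the upper layer. Layer sizes are illustrative.
import torch
import torch.nn as nn

generator = nn.Sequential(
    nn.Linear(128, 256),   # lower layer: coarse structure (to be frozen)
    nn.ReLU(),
    nn.Linear(256, 512),   # lower layer: frozen as well
    nn.ReLU(),
    nn.Linear(512, 784),   # upper layer: fine-tuned on the target domain
)

# Freeze the first four modules; only the last linear layer gets gradients.
for layer in list(generator)[:4]:
    for p in layer.parameters():
        p.requires_grad = False

trainable = [p for p in generator.parameters() if p.requires_grad]
optimizer = torch.optim.Adam(trainable, lr=1e-4)
```

The same pattern applies to the discriminator variants in [39,40]: the choice of which module slice to freeze is the only thing that changes.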
“…In the case of mode collapse, the GAN generator produces the same image regardless of the input noise and is unable to recover from this. Several approaches have been developed in recent years to improve GAN training when only a few data samples are available [14–19]. In [19], so-called stochastic discriminator augmentation is used to artificially increase the amount of training data and prevent the generator from learning these augmentations.…”
Section: Introduction
confidence: 99%
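A minimal sketch of the stochastic discriminator augmentation idea referenced above: augment both real and generated images before they reach the discriminator, and adapt the augmentation probability to a signal of discriminator overfitting. The flip-only pipeline and the `update_p` heuristic below are simplified assumptions, not the full augmentation set or controller of [19].

```python
import torch

def augment(images, p):
    """Flip each image horizontally with probability p; a stand-in for
    the much richer augmentation pipeline used in [19]."""
    flip = torch.rand(images.shape[0]) < p
    out = images.clone()
    out[flip] = torch.flip(out[flip], dims=[-1])
    return out

def update_p(p, d_real_sign_mean, target=0.6, step=0.01):
    """Raise augmentation strength when the discriminator is too confident
    on real images (an overfitting signal), lower it otherwise."""
    p = p + step if d_real_sign_mean > target else p - step
    return min(max(p, 0.0), 1.0)
```

In training, `augment` would wrap every batch fed to the discriminator (real and fake alike), and `update_p` would be called periodically with a running statistic of the discriminator's outputs on real images.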
“…However, thousands of images are still required to train a high-resolution GAN such as StyleGAN2. In addition to data augmentation, current research focuses on transfer-learning methods for GANs in cases with few data samples [14–17]. During transfer learning in [16,17], the first layers of the discriminator, or of both the discriminator and generator, are kept fixed.…”
Section: Introduction
confidence: 99%