Differentiable Augmentation for Data-Efficient GAN Training
Preprint, 2020. DOI: 10.48550/arxiv.2006.10738

Cited by 63 publications (123 citation statements). References: 0 publications.
“…where Ĩ is a random perturbation of the input image I, drawn from a distribution π(·|I) of candidate data augmentations. In our work, we adopt the various data augmentation techniques considered in DiffAugment [38], including random colorization, random translation, random resize, and random cutout.…”
Section: AugCLIP: Avoiding Adversarial Generation
confidence: 99%
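The policies quoted above are simple image-space operators whose outputs stay differentiable in the input, which is the property DiffAugment relies on. Below is a minimal PyTorch sketch of three such operators (brightness as a stand-in for colorization, translation, and cutout); the function names and default ratios are illustrative assumptions, not the official DiffAugment API.

```python
import torch

def rand_brightness(x):
    # Random per-sample brightness shift in [-0.5, 0.5); differentiable in x.
    return x + (torch.rand(x.size(0), 1, 1, 1, device=x.device) - 0.5)

def rand_translation(x, ratio=0.125):
    # Shift each image by up to `ratio` of its height/width.
    # torch.roll wraps around for brevity; DiffAugment itself pads with zeros.
    max_y, max_x = int(x.size(2) * ratio), int(x.size(3) * ratio)
    shifted = [
        torch.roll(
            x[i],
            shifts=(int(torch.randint(-max_y, max_y + 1, (1,))),
                    int(torch.randint(-max_x, max_x + 1, (1,)))),
            dims=(1, 2),
        )
        for i in range(x.size(0))
    ]
    return torch.stack(shifted)

def rand_cutout(x, ratio=0.5):
    # Zero out one random square patch per image; the multiplicative mask
    # keeps the operator differentiable in x.
    ch, cw = int(x.size(2) * ratio), int(x.size(3) * ratio)
    mask = torch.ones_like(x)
    for i in range(x.size(0)):
        cy = int(torch.randint(0, x.size(2) - ch + 1, (1,)))
        cx = int(torch.randint(0, x.size(3) - cw + 1, (1,)))
        mask[i, :, cy:cy + ch, cx:cx + cw] = 0
    return x * mask

def diff_augment(x, policies=('color', 'translation', 'cutout')):
    # Apply each policy in turn; since every operator is differentiable in x,
    # generator gradients can pass through the whole augmented pipeline.
    fns = {'color': rand_brightness, 'translation': rand_translation, 'cutout': rand_cutout}
    for p in policies:
        x = fns[p](x)
    return x
```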
“…Few-shot learning is appealing, but very challenging in training GANs, since data augmentation methods developed for discriminative learning tasks are not directly applicable. To address this challenge, differentiable data augmentation methods and variants [63,23,52,64] have been proposed for training GANs, with very exciting results obtained. Very recently, equipped with the differentiable data augmentation method [63], a FastGAN approach [35] was proposed to realize light-weight yet sufficiently powerful GANs with several novel designs, including the SLE module.…”
Section: Related Work and Our Contributions
confidence: 99%
“…To address this challenge, differentiable data augmentation methods and variants [63,23,52,64] have been proposed for training GANs, with very exciting results obtained. Very recently, equipped with the differentiable data augmentation method [63], a FastGAN approach [35] was proposed to realize light-weight yet sufficiently powerful GANs with several novel designs, including the SLE module. The proposed SLIM is built on the SLE in FastGANs by exploiting Google's well-known Inception [50,51] building-block design and the atrous convolution [7].…”
Section: Related Work and Our Contributions
confidence: 99%
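The reason the augmentation must be differentiable, as these citing papers stress, is that in the DiffAugment formulation the same transform T is applied to both real and generated images inside both losses, so the generator update has to backpropagate through T. A schematic PyTorch sketch, assuming a hinge-loss GAN; `G`, `D`, `T`, and the optimizers here are placeholders, not any library's API:

```python
import torch
import torch.nn.functional as F

def discriminator_step(D, G, reals, z, T, opt_d):
    # D only ever sees augmented images, real and fake alike, which curbs
    # overfitting to the (small) un-augmented training set.
    fakes = G(z).detach()
    loss_d = (F.relu(1.0 - D(T(reals))).mean()
              + F.relu(1.0 + D(T(fakes))).mean())  # hinge loss
    opt_d.zero_grad()
    loss_d.backward()
    opt_d.step()
    return loss_d.item()

def generator_step(D, G, z, T, opt_g):
    # Because T is differentiable, the gradient of D's score propagates
    # through the augmentation back into G's parameters.
    loss_g = -D(T(G(z))).mean()
    opt_g.zero_grad()
    loss_g.backward()
    opt_g.step()
    return loss_g.item()
```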
“…Although the missing node generator is able to generate missing nodes for each local graph, it is well known that GANs may perform poorly on small data [34]. Similarly, training an effective GCN model for node classification also requires a large number of samples.…”
Section: Federated Learning
confidence: 99%