2019
DOI: 10.48550/arxiv.1905.05469
Preprint
An Improved Self-supervised GAN via Adversarial Training

Abstract: We propose to improve unconditional Generative Adversarial Networks (GAN) by training the self-supervised learning task with the adversarial process. In particular, we apply self-supervised learning via geometric transformations on input images and assign pseudo-labels to these transformed images. (i) In addition to the GAN task, which distinguishes data (real) versus generated (fake) samples, we train the discriminator to predict the correct pseudo-labels of real transformed samples (classification task). Im…
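The pseudo-labeling step described in the abstract — rotate each real image by a fixed set of angles and label it with the index of the applied rotation — can be sketched as follows. This is a minimal illustration, not the authors' code; the function name and array shapes are assumptions.

```python
import numpy as np

def make_rotation_task(images):
    """For each square image (an H x H array), emit its four 90-degree
    rotations along with pseudo-labels 0..3 identifying which rotation
    was applied. The discriminator's auxiliary classification head would
    then be trained to predict these labels from the transformed images."""
    transformed, labels = [], []
    for img in images:
        for k in range(4):                 # 0, 90, 180, 270 degrees
            transformed.append(np.rot90(img, k))
            labels.append(k)               # pseudo-label = rotation index
    return np.stack(transformed), np.array(labels)
```

In the GAN setup the abstract outlines, these (transformed image, pseudo-label) pairs supply the classification task that is trained alongside the usual real-versus-fake objective.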

Cited by 4 publications (4 citation statements)
References 22 publications
“…[5] introduced an auxiliary rotation detection task onto the discriminator to alleviate the forgetting issue. [45, 46] analyzed the drawback of the self-supervised task in [5] and proposed a multi-class minimax self-supervised task. [26] combined self- and semi-supervised learning to outperform supervised GANs.…”
Section: Self-supervised Learning in GANs
confidence: 99%
“…A number of approaches based on self-supervision (SS) have proven successful for GAN training. Tran et al. [24] aim to improve GANs by applying SS learning via geometric transformations of input images and assigning pseudo-labels to the transformed images. Chen et al. propose the self-supervised GAN [1], adding an auxiliary rotation loss to the discriminator as a self-supervised loss.…”
Section: B. Self-supervised Learning
confidence: 99%
“…Finally, GANs face the perennial problem of mode collapse, where p g collapses to only cover a few modes of p r , resulting in generated samples of limited diversity. Consequently, recent years have seen efforts [4]- [10] to mitigate these GAN problems, including using gradient matching [5] and a two time-scale update rule [7].…”
Section: Introduction
confidence: 99%