2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
DOI: 10.1109/cvpr.2019.01149
Rob-GAN: Generator, Discriminator, and Adversarial Attacker

Abstract: We study two important concepts in adversarial deep learning: adversarial training and the generative adversarial network (GAN). Adversarial training is a technique used to improve the robustness of the discriminator by combining an adversarial attacker and the discriminator in the training phase. GANs are commonly used for image generation by jointly optimizing the discriminator and generator. We show these two concepts are indeed closely related and can be used to strengthen each other: adding a generator to the adversarial trai…
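The adversarial training the abstract refers to is an inner maximisation (an attacker perturbs the input to raise the loss) wrapped in an outer minimisation (the model is trained on the perturbed input). A minimal sketch of that loop, assuming a linear logistic classifier and a one-step FGSM-style attack in NumPy; the function names are illustrative, not the paper's actual architecture:

```python
import numpy as np

def fgsm_perturb(x, y, w, b, eps):
    """One-step L-infinity attack on a logistic-loss linear model.

    The gradient of the logistic loss w.r.t. the input x is
    (sigmoid(w.x + b) - y) * w; the attack moves x by eps in the
    sign of that gradient, maximising the loss.
    """
    p = 1.0 / (1.0 + np.exp(-(x @ w + b)))   # predicted probability
    grad_x = (p - y) * w                     # dL/dx for logistic loss
    return x + eps * np.sign(grad_x)

def adversarial_training_step(x, y, w, b, eps=0.1, lr=0.5):
    """Inner maximisation (attack) followed by the outer minimisation:
    one gradient step on the adversarially perturbed example."""
    x_adv = fgsm_perturb(x, y, w, b, eps)
    p = 1.0 / (1.0 + np.exp(-(x_adv @ w + b)))
    grad_w = (p - y) * x_adv                 # dL/dw at the adversarial point
    grad_b = p - y
    return w - lr * grad_w, b - lr * grad_b
```

Training on `fgsm_perturb`'s output rather than the clean sample is what makes the resulting classifier robust to that attack budget `eps`.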

Cited by 95 publications (56 citation statements)
References 28 publications
“…While strong discriminator regularisation stabilises training, it means the generator can make only small changes and still trick the discriminator, making convergence very slow. Rob-GAN [141] includes an adversarial attack step [149] that perturbs real images to trick the discriminator without altering their content inordinately, adapting the GAN objective into a min-max-min problem. This provides a weaker regularisation, enforcing small Lipschitz values locally rather than globally.…”
Section: Training Speed
confidence: 99%
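The min-max-min structure this statement describes can be sketched: an inner attack perturbs real samples to minimise the discriminator's confidence on them, and the discriminator is then updated to maximise its objective on those attacked samples (the generator's outer minimisation is omitted here). A hedged NumPy sketch assuming a linear logistic discriminator; `attack_real` and `discriminator_step` are illustrative names, not Rob-GAN's API:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def attack_real(x_real, w, b, eps):
    """Inner 'min': perturb a real sample (label 1) so the linear
    logistic discriminator scores it as fake, by ascending the loss
    w.r.t. the input."""
    grad_x = (sigmoid(x_real @ w + b) - 1.0) * w
    return x_real + eps * np.sign(grad_x)

def discriminator_step(x_real, x_fake, w, b, eps=0.05, lr=0.1):
    """Outer 'max': train the discriminator on the attacked real
    sample (label 1) and a generator sample (label 0)."""
    x_adv = attack_real(x_real, w, b, eps)
    for x, y in ((x_adv, 1.0), (x_fake, 0.0)):
        p = sigmoid(x @ w + b)
        w = w - lr * (p - y) * x   # gradient step on logistic loss
        b = b - lr * (p - y)
    return w, b
```

Because the perturbation is bounded by `eps`, the discriminator is only forced to be smooth in a neighbourhood of the real data — the "local Lipschitz" regularisation the quote refers to.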
“…Some of the current methods focused solely on discriminator learning, while others concentrated on the generator, or on both [9]. Beyond that, many complex models have been proposed by stacking multiple architectures [16,17]. Within this area, another trend is specific to the signal conditions, i.e., conditional training with labels [10] and unsupervised GANs trained without prior labels [9].…”
Section: Generative Adversarial Network
confidence: 99%
“…Adversarial training techniques consist of including adversarial examples (AEs) at the training stage to build a robust classifier. The authors in [9,34,35] used benign samples together with adversarial samples as data augmentation in the training process. In practice, different attacks can be used to generate the AEs.…”
Section: Adversarial Training
confidence: 99%
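The augmentation scheme this statement describes — mixing benign and adversarial samples in each training batch, with the attack left pluggable — can be sketched as follows. A minimal NumPy sketch under the assumption that the attack is supplied as a callable; `augment_batch` is a hypothetical helper, not taken from the cited works:

```python
import numpy as np

def augment_batch(x_batch, y_batch, attack_fn, ratio=0.5, rng=None):
    """Adversarial-training data augmentation: replace a fraction
    `ratio` of a clean batch with adversarial examples crafted by
    `attack_fn` (any attack, e.g. FGSM or PGD, can be plugged in).
    Labels are kept unchanged."""
    rng = np.random.default_rng(0) if rng is None else rng
    n = len(x_batch)
    k = int(ratio * n)
    idx = rng.choice(n, size=k, replace=False)  # rows to perturb
    x_aug = x_batch.copy()
    x_aug[idx] = attack_fn(x_batch[idx], y_batch[idx])
    return x_aug, y_batch
```

The classifier is then trained on `x_aug` exactly as on clean data, which is what makes this a pure data-augmentation view of adversarial training.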