2019 IEEE/CVF International Conference on Computer Vision Workshop (ICCVW)
DOI: 10.1109/iccvw.2019.00257

AdvGAN++: Harnessing Latent Layers for Adversary Generation

Abstract: Adversarial examples are fabricated examples, indistinguishable from the original images, that mislead neural networks and drastically lower their performance. The recently proposed AdvGAN, a GAN-based approach, takes the input image as a prior for generating adversaries to target a model. In this work, we show how latent features can serve as better priors than input images for adversary generation by proposing AdvGAN++, a version of AdvGAN that achieves higher attack rates than AdvGAN and at the same time generates pe…
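The following is a minimal, illustrative PyTorch sketch of the idea stated in the abstract: extract latent features from an intermediate layer of the target model and use them, rather than the raw image, as the generator's prior. The architectures, loss weights, and the feature_extractor / discriminator modules are assumptions made for illustration, not the authors' implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F

class Generator(nn.Module):
    """Maps latent features of the target model to an adversarial image."""
    def __init__(self, feat_dim=256, img_channels=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(feat_dim, 128, 4, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.ConvTranspose2d(64, img_channels, 4, stride=2, padding=1),
            nn.Tanh(),  # output image in [-1, 1]
        )

    def forward(self, latent):
        return self.net(latent)

def generator_loss(target_model, feature_extractor, generator, discriminator,
                   x, y, alpha=10.0, beta=1.0):
    """One hypothetical generator objective: fool the target model while the
    discriminator keeps the output close to the natural image distribution."""
    with torch.no_grad():
        latent = feature_extractor(x)      # intermediate features of the target model
    x_adv = generator(latent)              # adversary generated from the latent prior

    # Untargeted adversarial loss: push the target model away from the true label y
    loss_adv = -F.cross_entropy(target_model(x_adv), y)

    # GAN loss: the discriminator should judge x_adv as a real image
    real_labels = torch.ones(x.size(0), 1, device=x.device)
    loss_gan = F.binary_cross_entropy_with_logits(discriminator(x_adv), real_labels)

    return alpha * loss_adv + beta * loss_gan

The alternating discriminator update and the exact loss formulation in the paper differ; the sketch only shows how a latent-feature prior replaces the input-image prior of AdvGAN.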

Cited by 56 publications (23 citation statements)
References 7 publications

“…GAN-based Adversarial Attack Methods. Researchers have investigated GAN-based structures to generate adversarial examples, such as [4], [56], [26], [59], [7]. For example, the authors in [59] train a GAN and an additional Inverter network to generate full-size, fake images that are able to flip the predicted label or mount an untargeted attack.…”
Section: Related Work and Countermeasures, A. Related Work (mentioning)
confidence: 99%
“…Researchers have investigated GAN-based structures to generate adversarial examples, such as [4], [56], [26], [59], [7]. For example, the authors in [59] train a GAN and an additional Inverter network to generate full-size, fake images that are able to flip the predicted label or mount an untargeted attack. Notably, these studies share a common adversarial-example objective: input-dependent or noisy, perturbation-based distortions added to an input, covering the whole image, to mount an untargeted attack.…”
Section: Related Work and Countermeasures, A. Related Work (mentioning)
confidence: 99%
“…The trained generator can produce targeted and untargeted adversarial examples in batches and achieves a very high attack success rate. Puneet Mangla et al. [16] improved AdvGAN (a GAN that generates adversarial examples) and proposed AdvGAN++. The authors argue that, when generating an adversarial example, the latent features of the original example should be fully exploited, and that the generated adversarial examples should also remain close to the input distribution.…”
Section: Related Work (mentioning)
confidence: 99%
“…They also utilize GANs to learn the latent space, but without any reference point, and therefore face an exhaustive search space. AdvGAN [37], AdvGAN++ [9], AT-GAN [36], Defense-GAN [25], and [33] are a few works in which the latent space is learned with GANs so as to model the distribution of the training set and generate adversarial examples accordingly. It is important to note that these attacks differ from our work, as we use autoencoders to ensure that the adversarial examples remain in a modified distribution in which the input image's latent space is combined with the target class (the one with the second-highest predicted probability).…”
Section: Related Work (mentioning)
confidence: 99%
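Purely as an illustration of the autoencoder-based idea described in the excerpt above (mixing an input's latent code with an embedding of the class that received the second-highest predicted probability before decoding), here is a minimal PyTorch sketch; the encoder, decoder, and dimensions are hypothetical placeholders rather than the cited authors' model.

import torch
import torch.nn as nn

class LatentClassMixer(nn.Module):
    def __init__(self, encoder, decoder, num_classes, latent_dim=128):
        super().__init__()
        self.encoder = encoder                      # image -> (B, latent_dim)
        self.decoder = decoder                      # (B, 2 * latent_dim) -> image
        self.class_embed = nn.Embedding(num_classes, latent_dim)

    def forward(self, x, logits):
        # Target the class with the second-highest predicted probability
        target = logits.topk(2, dim=1).indices[:, 1]
        z = self.encoder(x)                         # latent code of the input
        z_mixed = torch.cat([z, self.class_embed(target)], dim=1)
        return self.decoder(z_mixed)                # candidate adversarial example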