2019 IEEE/CVF International Conference on Computer Vision (ICCV)
DOI: 10.1109/iccv.2019.00487
Semantic Adversarial Attacks: Parametric Transformations That Fool Deep Classifiers

Abstract: Deep neural networks have been shown to exhibit an intriguing vulnerability to adversarial input images corrupted with imperceptible perturbations. However, the majority of adversarial attacks assume global, fine-grained control over the image pixel space. In this paper, we consider a different setting: what happens if the adversary could only alter specific attributes of the input image? These would generate inputs that might be perceptibly different, but still natural-looking and enough to fool a classifier.…
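The paper's attack operates on semantic attribute parameters rather than raw pixels. As a rough illustration of that general idea only, not the authors' method, the sketch below sweeps a single hand-picked attribute (global brightness) and looks for a value that flips a classifier's prediction; the `model` interface, attribute range, and step count are assumptions.

```python
import torch

def brightness_attack(model, image, label, steps=50, max_delta=0.5):
    """Illustrative attribute-space attack: sweep one semantic attribute
    (global brightness) and return the first setting that changes the
    classifier's predicted label."""
    model.eval()
    with torch.no_grad():
        for delta in torch.linspace(-max_delta, max_delta, steps):
            candidate = torch.clamp(image + delta, 0.0, 1.0)  # shift brightness
            pred = model(candidate.unsqueeze(0)).argmax(dim=1).item()
            if pred != label:
                return delta.item(), candidate  # natural-looking change that fools the model
    return None  # no brightness value in the range fooled the classifier
```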

Cited by 76 publications (49 citation statements) · References 29 publications
“…More work is needed to understand approaches such as SEMA, which do not involve trading off image quality and attack strength. Alternatively, approaches that make adversarial images effective yet non-suspicious, such as [22,48], can also be studied.…”
Section: Discussion
confidence: 99%
“…In [2], the authors introduce texture and colorization to induce semantic perturbations with a large ℓp-norm perturbation to the raw pixel space while remaining visually imperceptible. In [9], an adversarial network composed of an encoder and a generator conditioned on attributes is trained to find semantic adversarial examples. In [6,5], the authors show that simple operations such as image rotation or object translation can result in a notable misclassification rate.…”
Section: Background and Related Work
confidence: 99%
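The statement above notes that simple geometric operations such as rotation or translation can already cause misclassification. A minimal sketch of that style of attack, assuming a PyTorch classifier `model` and a CxHxW image tensor in [0, 1] (the angle grid is an illustrative assumption):

```python
import torch
from torchvision.transforms.functional import rotate

def rotation_attack(model, image, label, angles=range(-30, 31, 2)):
    """Illustrative geometric attack: try a coarse grid of rotation angles
    and report the first one whose rotated image is misclassified."""
    model.eval()
    with torch.no_grad():
        for angle in angles:
            rotated = rotate(image, float(angle))              # rotate the CxHxW tensor
            pred = model(rotated.unsqueeze(0)).argmax(dim=1).item()
            if pred != label:
                return angle, pred                             # fooling rotation found
    return None  # model kept its prediction for every angle in the grid
```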
“…Beyond the ℓp-norm bounded threat model, recent works have shown the possibility of generating semantic adversarial examples based on semantic perturbation techniques such as color shifting, lighting adjustment, and rotation [8,13,2,9,6,5]. We refer the readers to Figure 1 for an illustration of some semantic perturbations for images.…”
Section: Introduction
confidence: 99%
“…This is especially true when deep neural networks (DNNs) are used as key components (e.g., to represent policies) of RL agents. Recently, a wealth of results in the ML literature demonstrated that DNNs can be fooled into misclassifying images by perturbing the input by an imperceptible amount (Goodfellow, Shlens, and Szegedy 2015) or by introducing specific natural-looking attributes (Joshi et al. 2019). Such adversarial perturbations have also been shown to affect an RL agent's state space, as demonstrated by Huang et al. (2017).…”
Section: Introduction
confidence: 99%
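The imperceptible-perturbation attack cited in the statement above (Goodfellow, Shlens, and Szegedy 2015) is the fast gradient sign method (FGSM). A minimal PyTorch sketch, with the ε budget and cross-entropy loss chosen as illustrative assumptions:

```python
import torch
import torch.nn.functional as F

def fgsm(model, image, label, eps=8 / 255):
    """One-step FGSM: move the input in the direction of the sign of the
    loss gradient, bounded by eps in the l-infinity norm."""
    x = image.clone().detach().unsqueeze(0).requires_grad_(True)
    loss = F.cross_entropy(model(x), torch.tensor([label]))
    loss.backward()
    adv = x + eps * x.grad.sign()                      # signed gradient step
    return torch.clamp(adv, 0.0, 1.0).squeeze(0).detach()
```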