2019 IEEE/CVF International Conference on Computer Vision (ICCV)
DOI: 10.1109/iccv.2019.00482

Sparse and Imperceivable Adversarial Attacks

Abstract: Neural networks have been proven to be vulnerable to a variety of adversarial attacks. From a safety perspective, highly sparse adversarial attacks are particularly dangerous. On the other hand, the pixelwise perturbations of sparse attacks are typically large and thus can potentially be detected. We propose a new black-box technique to craft adversarial examples aiming at minimizing the l_0 distance to the original image. Extensive experiments show that our attack is better than or competitive with the state of the art. M…
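As a rough illustration of the l_0 objective mentioned in the abstract, the sketch below counts how many pixel locations differ between an original image and a candidate adversarial example, which is the quantity a sparse attack tries to keep small. This is a minimal sketch, not the paper's CornerSearch procedure; the `classify` callable and the `budget` parameter are hypothetical placeholders.

```python
import numpy as np

def l0_distance(original, adversarial):
    """Count the pixel positions at which two images differ.

    Both inputs are H x W x C arrays; a pixel counts as changed if any of
    its channels changed, which matches the usual l_0 definition for images.
    """
    changed = np.any(original != adversarial, axis=-1)
    return int(changed.sum())

def is_sparse_success(original, adversarial, classify, true_label, budget):
    """Return True if `adversarial` fools the (placeholder) classifier while
    modifying at most `budget` pixels."""
    return (classify(adversarial) != true_label
            and l0_distance(original, adversarial) <= budget)
```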

Citations: Cited by 200 publications (253 citation statements)
References: 19 publications
“…CornerSearch [36] proposes a black-box attack based on finding an adversarial example with respect to the l_0 norm. Abandoning norm-based constraints completely, Patch Attack [37] replaces a certain area of the image with an adversarial patch.…”
Section: B. Black-box Attack Categorization
confidence: 99%
“…We cover three non-traditional norm attacks in this section. The first attack we summarize is the sparse and imperceivable attack [36] which focuses on black-box attacks with respect to the l_0 norm. The second non-traditional norm attack we survey is Patch Attack [37].…”
Section: Non-traditional Norm Attacks
confidence: 99%
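To make the contrast with norm-bounded perturbations concrete, here is a minimal sketch of the patch idea referenced in the excerpt above: a fixed region of the image is simply overwritten, so no small l_p bound on the perturbation applies. The patch contents and placement here are arbitrary placeholders, not the optimization procedure of Patch Attack [37].

```python
import numpy as np

def apply_patch(image, patch, top, left):
    """Overwrite a rectangular region of `image` with `patch`.

    `image` is H x W x C, `patch` is h x w x C, and (top, left) is the
    upper-left corner of the region to replace. Returns a copy; the
    original image is left untouched.
    """
    h, w = patch.shape[:2]
    patched = image.copy()
    patched[top:top + h, left:left + w] = patch
    return patched

# Toy usage: drop a random 16x16 patch into a blank 224x224 RGB image.
image = np.zeros((224, 224, 3), dtype=np.float32)
patch = np.random.rand(16, 16, 3).astype(np.float32)
adversarial = apply_patch(image, patch, top=100, left=100)
```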
“…They were initially shown to be effective in causing classification errors throughout different machine learning models [5,6,7]. Following this, a lot of effort has been put into generating increasingly complex attack models that can utilize a small amount of semantic-preserving modifications while still being able to fool a classifier [8,9,10]. Typically, this is done by constraining the perturbations with an l_p-norm, where the most common settings use either l_∞ [11,12,9,8,13,14,15], l_2 [16,9,17,18,19], or l_1 [20,21].…”
Section: Introduction
confidence: 99%
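For readers less familiar with the norm choices listed in the excerpt above, the following sketch (plain NumPy, tied to no specific attack) computes the four common quantities for a perturbation delta = adversarial - original; it shows why an l_0 budget limits how many entries change, while an l_∞ budget limits how large any single change can be.

```python
import numpy as np

def perturbation_norms(delta):
    """Compute the l_0, l_1, l_2 and l_inf norms of a perturbation array."""
    d = delta.ravel()
    return {
        "l0": int(np.count_nonzero(d)),   # number of modified entries
        "l1": float(np.abs(d).sum()),     # total absolute change
        "l2": float(np.linalg.norm(d)),   # Euclidean length of the change
        "linf": float(np.abs(d).max()),   # largest change to any single entry
    }
```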