2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
DOI: 10.1109/cvpr.2019.00444

Evading Defenses to Transferable Adversarial Examples by Translation-Invariant Attacks

Abstract: Deep neural networks are vulnerable to adversarial examples, which can mislead classifiers by adding imperceptible perturbations. An intriguing property of adversarial examples is their good transferability, making black-box attacks feasible in real-world applications. Due to the threat of adversarial attacks, many methods have been proposed to improve the robustness. Several state-of-the-art defenses are shown to be robust against transferable adversarial examples. In this paper, we propose a translation-inva…
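The abstract is cut off above, but the title names the core technique. As a rough, non-authoritative illustration of the translation-invariant idea (smoothing the input gradient with a fixed convolution kernel before the sign step, so the attack approximates one averaged over translated copies of the image), here is a minimal PyTorch sketch. The model, image, and label names and the Gaussian kernel parameters are hypothetical stand-ins, not the authors' released code.

import torch
import torch.nn.functional as F

def gaussian_kernel(size=15, sigma=3.0):
    # Fixed 2D Gaussian kernel, one copy per colour channel for depthwise conv.
    ax = torch.arange(size, dtype=torch.float32) - (size - 1) / 2
    g = torch.exp(-(ax ** 2) / (2 * sigma ** 2))
    k2d = torch.outer(g, g)
    k2d = (k2d / k2d.sum()).unsqueeze(0).unsqueeze(0)  # (1, 1, size, size)
    return k2d.repeat(3, 1, 1, 1)                      # (3, 1, size, size)

def ti_fgsm(model, image, label, eps=8 / 255, kernel=None):
    # One-step TI-FGSM sketch: smooth the gradient, then take a sign step.
    kernel = gaussian_kernel() if kernel is None else kernel
    x = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), label)
    grad = torch.autograd.grad(loss, x)[0]
    # Depthwise convolution spreads the gradient across spatial translations.
    grad = F.conv2d(grad, kernel, padding=kernel.shape[-1] // 2, groups=3)
    return (image + eps * grad.sign()).clamp(0, 1).detach()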

Cited by 543 publications (499 citation statements: 3 supporting, 495 mentioning, 1 contrasting)
References 19 publications (56 reference statements)
“…As mentioned above, we have used three state-of-the-art ReID methods as victim models, attacked them with the proposed DR attack, and evaluated the performance drop on four different datasets. Moreover, we attacked the same victim models with two other state-of-the-art attack approaches, namely TI-FGSM and TI-DIM [56], [71]. We compared the effectiveness of our DR attack with these other attack methods as well.…”
Section: Experiments, Results and Discussion (mentioning)
confidence: 99%
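The quote above describes an evaluation protocol: attack victim models with competing attacks (DR, TI-FGSM, TI-DIM) and measure the resulting performance drop. The following is a minimal sketch of one common form of this loop, crafting examples against a surrogate and scoring unseen victims; surrogate, victims, loader, and attack are hypothetical stand-ins under assumed PyTorch conventions, not the cited paper's code.

import torch

@torch.no_grad()
def accuracy(model, images, labels):
    return (model(images).argmax(dim=1) == labels).float().mean().item()

def evaluate_transfer(surrogate, victims, loader, attack):
    # victims: dict mapping a name to a model; attack crafts examples on the surrogate.
    for name, victim in victims.items():
        clean, adv_acc, n = 0.0, 0.0, 0
        for images, labels in loader:
            adv = attack(surrogate, images, labels)  # e.g. the ti_fgsm sketch above
            clean += accuracy(victim, images, labels)
            adv_acc += accuracy(victim, adv, labels)
            n += 1
        print(f"{name}: clean acc {clean / n:.3f} -> adversarial acc {adv_acc / n:.3f}")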
“…In one of the earlier works, Goodfellow et al. [53] proposed the fast gradient sign method (FGSM), which generates AEs in one step. Several works extended this by iteratively updating the AEs with multistep attacks, including the basic iterative method (BIM) [10], DeepFool [54], the momentum iterative method [11], the Diverse Inputs Method (DIM) [55], and Translation-Invariant (TI) attacks [56]. Compared with FGSM, the iterative methods generate a smaller perturbation, which makes the adversarial examples even more imperceptible to the human eye.…”
Section: B. Adversarial Attack Methods (mentioning)
confidence: 99%
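Since this quote contrasts one-step FGSM with its multistep extensions, a minimal PyTorch sketch of both may help; fgsm and bim below are hypothetical re-implementations of the cited ideas, not code from the cited papers.

import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps):
    # One-step attack: move along the sign of the input gradient.
    x = x.clone().detach().requires_grad_(True)
    grad = torch.autograd.grad(F.cross_entropy(model(x), y), x)[0]
    return (x + eps * grad.sign()).clamp(0, 1).detach()

def bim(model, x, y, eps=8 / 255, alpha=2 / 255, steps=10):
    # Basic iterative method: repeated small FGSM steps, with the total
    # perturbation projected back into the eps-ball after every step.
    x0, adv = x.clone().detach(), x.clone().detach()
    for _ in range(steps):
        adv = fgsm(model, adv, y, alpha)
        adv = (x0 + (adv - x0).clamp(-eps, eps)).clamp(0, 1)
    return adv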