2021 IEEE Winter Conference on Applications of Computer Vision (WACV)
DOI: 10.1109/wacv48630.2021.00060

Defense-friendly Images in Adversarial Attacks: Dataset and Metrics for Perturbation Difficulty

Abstract: Dataset bias is a problem in adversarial machine learning, especially in the evaluation of defenses. An adversarial attack or defense algorithm may show better results on the reported dataset than can be replicated on other datasets. Even when two algorithms are compared, their relative performance can vary depending on the dataset. Deep learning offers state-of-the-art solutions for image recognition, but deep models are vulnerable even to small perturbations. Research in this area focuses primarily on advers…

Cited by 5 publications (8 citation statements)
References 30 publications
“…However, as a side-effect of making these images more 'recognizable', they also become more resilient against adversarial attacks such as FGSM and PGD. Whereas this phenomenon was first studied by Pestana et al. [33] for the ImageNet dataset, our proposed method offers a simple but efficient way of creating 'robust' datasets and can be applied to any other dataset.…”
Section: Assistive Signals in the 2D Space
confidence: 94%
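The attacks named in the statement above (FGSM and PGD) both perturb an image along the sign of the loss gradient. As a minimal illustrative sketch only (a toy linear model with hand-picked values, not the cited authors' implementation), FGSM's single step x_adv = clip(x + ε·sign(∇x L)) can be written as:

```python
import numpy as np

def fgsm_perturb(x, grad, eps=0.03):
    """Fast Gradient Sign Method: one step of size eps along sign of the loss gradient."""
    x_adv = x + eps * np.sign(grad)
    return np.clip(x_adv, 0.0, 1.0)  # keep "pixels" in the valid [0, 1] range

# Toy stand-in for a model: prediction w.x with squared loss against target y.
w = np.array([0.5, -0.25, 0.75])
x = np.array([0.2, 0.6, 0.4])
y = 0.0
pred = w @ x
grad = 2 * (pred - y) * w  # gradient of (w.x - y)^2 w.r.t. x
x_adv = fgsm_perturb(x, grad, eps=0.1)  # -> array([0.3, 0.5, 0.5])
```

PGD is essentially this step iterated, with the result projected back into an ε-ball around the original image after each iteration; a "defense-friendly" image in the paper's sense is one for which such perturbations are less effective.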
“…There is an emerging area related to 'robust', 'easier-to-classify' and 'harder-to-perturb' datasets. As a concrete example, recently Pestana et al. [33] proposed a dataset of robust natural images and showed that datasets with robust images are also harder to perturb and can be used to create better adversarial attacks and to perform unbiased benchmarking of defenses in the area of adversarial learning. We believe that our work nicely complements [33].…”
Section: Assistive Signals in the 2D Space
confidence: 99%
“…To evaluate the robustness of Sim-DNN to imperceptible adversarial attacks, the recently introduced ImageNet-R dataset [38] was considered. The ImageNet-R dataset is a modified version of the ImageNet dataset for the specific task of adversarial attack detection.…”
Section: Methods
confidence: 99%
“…Nevertheless, research in the field of adversarial attacks on such systems [16,17] proves that even the use of biometric metrics such as voice is not a guarantee of protection from adversarial attacks. Adversarial attacks can negatively affect the operation of almost any system [17][18][19][20][21], but we assume that certain characteristics of datasets may have some influence on the effectiveness of an attack [22,23].…”
Section: Introduction
confidence: 99%