2020
DOI: 10.48550/arxiv.2010.09670
Preprint
RobustBench: a standardized adversarial robustness benchmark

Abstract: Evaluation of adversarial robustness is often error-prone, leading to overestimation of the true robustness of models. While adaptive attacks designed for a particular defense are a way out of this, there are only approximate guidelines on how to perform them. Moreover, adaptive evaluations are highly customized for particular models, which makes it difficult to compare different defenses. Our goal is to establish a standardized benchmark of adversarial robustness, which as accurately as possible reflects the r…
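The core idea of the abstract — evaluating every defense with the same fixed attack under the same threat model, instead of bespoke per-model adaptive evaluations — can be sketched in a minimal, illustrative way. The names below (`evaluate_robust_accuracy`, the toy threshold model, the sign-shift "attack") are hypothetical and deliberately simplified; they are not the RobustBench API.

```python
# Minimal sketch of a standardized robustness evaluation: one shared attack,
# one shared threat model (L_inf ball of radius eps), applied identically to
# every model under comparison. All names here are illustrative, not RobustBench's.

def evaluate_robust_accuracy(model, attack, dataset):
    """Fraction of points the model still classifies correctly after the attack."""
    correct = 0
    for x, y in dataset:
        x_adv = attack(model, x, y)  # same fixed attack for all defenses
        if model(x_adv) == y:
            correct += 1
    return correct / len(dataset)

# Toy 1-D classifier: predicts class 1 iff x > 0.
eps = 0.3
model = lambda x: int(x > 0)
# Worst-case perturbation within the eps-ball: shift toward the decision boundary.
attack = lambda m, x, y: x - eps if y == 1 else x + eps

dataset = [(1.0, 1), (0.2, 1), (-1.0, 0), (-0.1, 0)]
clean_acc = sum(model(x) == y for x, y in dataset) / len(dataset)   # 1.0
robust_acc = evaluate_robust_accuracy(model, attack, dataset)       # 0.5
```

Because the attack and threat model are fixed, the resulting robust-accuracy numbers are directly comparable across defenses — the standardization the benchmark argues for.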


Cited by 31 publications (57 citation statements)
References 27 publications
“…Therefore, we follow the white-box setting demonstrated in Sec. V-A, and use AEs generated by AutoAttack on each ATC from [18] for evaluation. As shown in Tab.…”
Section: ContraNet Against AutoAttack
confidence: 99%
“…Dataset. All experiments are conducted on CIFAR-10, which serves as a standard task by several public robustness testbenches [18], [48].…”
Section: A. Experimental Settings
confidence: 99%