2019
DOI: 10.1007/s11263-019-01213-0

Scaling up the Randomized Gradient-Free Adversarial Attack Reveals Overestimation of Robustness Using Established Attacks

Abstract: Modern neural networks are highly non-robust against adversarial manipulation. A significant amount of work has been invested in techniques to compute lower bounds on robustness through formal guarantees and to build provably robust models. However, it is still difficult to get guarantees for larger networks or robustness against larger perturbations. Thus attack strategies are needed to provide tight upper bounds on the actual robustness. We significantly improve the randomized gradient-free attack for ReLU n…

Cited by 51 publications (88 citation statements)
References 20 publications
“…The most popular method to test adversarial robustness is the PGD (Projected Gradient Descent) attack (Madry et al., 2018), as it is computationally cheap and performs well in many cases. However, it has been shown that even PGD can fail (Mosbach et al., 2018; Croce et al., 2019b), leading to significant overestimation of robustness: we identify i) the fixed step size and ii) the widely used cross-entropy loss as two reasons for potential failure. As remedies we propose i) a new gradient-based scheme, Auto-PGD, which does not require a step size to be chosen (Sec.…”
Section: Introduction (mentioning)
confidence: 92%
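The failure mode described in this excerpt is easiest to see in the PGD update rule itself, where the fixed step size and the cross-entropy loss both appear explicitly. Below is a minimal ℓ∞ PGD sketch in PyTorch; it is not the cited authors' implementation, and `model`, `x`, `y` and the hyperparameter defaults are illustrative assumptions:

```python
import torch
import torch.nn.functional as F

def pgd_linf(model, x, y, eps=8 / 255, alpha=2 / 255, steps=40):
    """Minimal l_inf PGD sketch: fixed step size `alpha`, cross-entropy loss."""
    # Random start inside the eps-ball, clipped to the valid image range.
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1).detach()

    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        # Signed-gradient ascent with a fixed step size ...
        x_adv = x_adv.detach() + alpha * grad.sign()
        # ... followed by projection back onto the eps-ball around x.
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)
    return x_adv.detach()
```

Robust accuracy is then measured on `pgd_linf(model, x, y)`; if the fixed `alpha` or the cross-entropy objective is poorly matched to the model, the loop can stall and the resulting robustness estimate is too optimistic, which is the overestimation the excerpt refers to.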
“…Even adversarial training has its problems despite being widely considered a reliable defense strategy. For instance, adversarially trained models with ℓ∞-norm bounded perturbations are still found vulnerable to ℓp-norm perturbations, where p ≠ ∞ [327], [406]. Certified defenses attempt to provide a guarantee that the target model cannot be fooled within an ℓp-ball of the clean image.…”
Section: Certified Defenses (mentioning)
confidence: 99%
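To make the threat models in that excerpt concrete, here is a small PyTorch sketch, a toy illustration rather than anything from the cited works, that tests membership in an ℓp-ball and projects a perturbation onto the ℓ∞- and ℓ2-balls. A perturbation that respects an ℓ∞ budget can still violate an ℓ2 budget (and vice versa), which is why ℓ∞-only adversarial training leaves other ℓp threat models uncovered:

```python
import torch

def in_lp_ball(delta: torch.Tensor, eps: float, p: float) -> bool:
    """Check whether a perturbation delta lies in the l_p ball of radius eps."""
    return torch.linalg.vector_norm(delta.flatten(), ord=p).item() <= eps

def project_linf(delta: torch.Tensor, eps: float) -> torch.Tensor:
    """Project delta onto the l_inf ball of radius eps (elementwise clamp)."""
    return delta.clamp(-eps, eps)

def project_l2(delta: torch.Tensor, eps: float) -> torch.Tensor:
    """Project delta onto the l_2 ball of radius eps (rescale if too long)."""
    norm = torch.linalg.vector_norm(delta.flatten()).item()
    return delta * min(1.0, eps / (norm + 1e-12))

# Toy example: perturb every pixel of a 3x32x32 image by 0.03.
delta = 0.03 * torch.ones(3, 32, 32)
print(in_lp_ball(delta, eps=8 / 255, p=float("inf")))  # True: within the l_inf budget
print(in_lp_ball(delta, eps=0.5, p=2))                 # False: l_2 norm is about 1.66
```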
“…To address such a recognition problem, researchers have traditionally had to pretrain their human activity identification algorithms by extracting certain features from different types of descriptors, such as extended SURF [2] and STIPs [24], before they are incorporated into a particular prediction model such as HMM, SVM, etc. Because of their poor performance and memory usage needs, previous approaches are not very robust [7]. The Support Vector Machine (SVM) offers high classification accuracy as well as good fault tolerance, and the Rough Set Theory (RST) technique provides the benefit of dealing with vast amounts of data and removing unnecessary material.…”
Section: Support Vector Machine (SVM) (mentioning)
confidence: 99%
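For context on the pipeline sketched in that excerpt (hand-crafted descriptors fed into a classifier), below is a minimal scikit-learn sketch of the SVM stage only. The feature matrix `X` and activity labels `y` are synthetic stand-ins for pooled STIP/SURF descriptors that the cited works would extract upstream; nothing here comes from those papers:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Stand-in for pre-extracted activity descriptors:
# 500 clips, 128-dimensional feature vectors, 4 activity classes.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 128))
y = rng.integers(0, 4, size=500)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

# SVM stage of the recognition pipeline: scale features, fit an RBF-kernel SVC.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
clf.fit(X_train, y_train)
print("test accuracy:", clf.score(X_test, y_test))
```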