2018
DOI: 10.48550/arXiv.1801.10578
Preprint

Evaluating the Robustness of Neural Networks: An Extreme Value Theory Approach

Abstract: The robustness of neural networks to adversarial examples has received great attention due to security implications. Despite various attack approaches to crafting visually imperceptible adversarial examples, little has been developed towards a comprehensive measure of robustness. In this paper, we provide a theoretical justification for converting robustness analysis into a local Lipschitz constant estimation problem, and propose to use the Extreme Value Theory for efficient evaluation. Our analysis yields a novel robustness metric called CLEVER, short for Cross-Lipschitz Extreme Value for nEtwork Robustness. …

Cited by 61 publications (57 citation statements)
References 25 publications
“…However, several papers consider the question of what assessment methodology and evaluation metrics should be used [38], [41]-[43]. Bastani et al. [41] used the distortion of AEs as the robustness metric, while Weng et al. [44] proposed a new metric for robustness called CLEVER, based on extreme value theory. Peng et al. [45] proposed the EDLIC framework for a quantitative analysis of different threat models and defense techniques.…”
Section: Evaluation Metrics (mentioning)
confidence: 99%
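As context for the distortion-based metric the excerpt attributes to Bastani et al. [41], here is a minimal Python sketch that reduces robustness to the smallest perturbation budget at which an attack succeeds; the `attack` oracle, the function names, and the bisection tolerance are hypothetical placeholders for illustration, not APIs from the cited papers.

```python
import numpy as np

def min_adversarial_distortion(attack, x, lo=0.0, hi=1.0, tol=1e-3):
    """Bisect for the smallest budget at which `attack` succeeds.
    `attack(x, eps) -> bool` is a hypothetical oracle returning True
    iff it finds an adversarial example within an L2 ball of radius
    eps around x."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if attack(x, mid):
            hi = mid  # success: a smaller budget may still suffice
        else:
            lo = mid  # failure: a larger budget is needed
    return hi

def distortion_robustness(attack, test_points):
    """Average minimal distortion over a test set, in the spirit of
    the distortion-of-AEs metric described in the excerpt."""
    return float(np.mean([min_adversarial_distortion(attack, x)
                          for x in test_points]))
```

A larger average minimal distortion indicates a more robust classifier under this reading; note the result is attack-dependent, which is the gap CLEVER aims to close.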
“…Many studies addressing the formal verification of ANN classifiers have examined the extent to which test instances can be perturbed without yielding a change in the assigned class (Weng, 2018). This seam of research was a response to the observation that images which have been correctly assigned to a particular class by an ANN classifier are sometimes assigned to an alternative, incorrect class when subjected to minor modifications.…”
Section: Related Work (mentioning)
confidence: 99%
“…Effectively evaluating the adversarial robustness of DNNs is an important issue for both academic research and practical applications. Most of the previous evaluations can be divided into two groups: attack-based methods [4], [6], [10], [11] and bound-based methods [7], [33], [8], [34], [9]. Attack-based evaluations, such as FGSM [4] and PGD [10], directly generate adversarial examples to attack deep models to assess the worst-case risk, i.e., adversarial risk, as a robustness indicator.…”
Section: Robustness Evaluation (mentioning)
confidence: 99%
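The attack-based evaluations named in this excerpt reduce robustness to accuracy under a concrete attack. As an illustration, here is a minimal one-step FGSM sketch in PyTorch; the model/batch interface and the `robust_accuracy` helper are assumptions made for the example, not taken from the cited papers.

```python
import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps):
    """One-step FGSM: perturb x by eps in the direction of the sign
    of the loss gradient. Assumes `model` returns logits and inputs
    live in [0, 1]."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    x_adv = x + eps * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()

def robust_accuracy(model, x, y, eps):
    """Attack-based robustness indicator: accuracy on FGSM examples.
    This estimates (an upper bound on) worst-case adversarial risk."""
    preds = model(fgsm(model, x, y, eps)).argmax(dim=1)
    return (preds == y).float().mean().item()
```

PGD follows the same pattern with several projected gradient steps instead of one, which is why it gives a tighter (lower) robust-accuracy estimate than FGSM.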
“…Bound-based methods try to provide a fundamental bound, or an estimate of it, for analyzing the robustness of classifiers to adversarial examples. Weng et al. [8] proposed a metric named the Cross-Lipschitz Extreme Value for nEtwork Robustness (CLEVER) to measure robustness without running any attack. Weng et al. [34] provided efficient algorithms, i.e., Fast-Lin and Fast-Lip, for computing a certified lower bound by exploiting the ReLU property.…”
Section: Robustness Evaluation (mentioning)
confidence: 99%
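For flavor, here is a hedged sketch of the EVT idea behind CLEVER as the excerpt describes it: sample points in an L2 ball around the input, record per-batch maxima of the gradient norm of the class margin, fit a reverse Weibull, and use its location parameter as a local Lipschitz estimate. The function name, sampling scheme, and hyperparameters below are illustrative; the actual algorithm in Weng et al. [8] differs in its details.

```python
import numpy as np
import torch
from scipy.stats import weibull_max  # SciPy's reverse Weibull

def clever_score(model, x0, true_class, target_class,
                 R=0.5, n_batches=50, batch_size=128):
    """Sketch of a CLEVER-style estimate: the right endpoint (location
    parameter) of a reverse Weibull fit to batch maxima of the margin's
    gradient norm estimates the local Lipschitz constant around x0."""
    maxima = []
    ones = [1] * x0.dim()
    for _ in range(n_batches):
        # Sample points approximately uniformly in the L2 ball of radius R.
        d = torch.randn(batch_size, *x0.shape)
        d = d / d.flatten(1).norm(dim=1).view(-1, *ones)
        r = torch.rand(batch_size).pow(1.0 / x0.numel()).view(-1, *ones)
        xs = (x0.unsqueeze(0) + R * r * d).detach().requires_grad_(True)
        out = model(xs)
        margin = out[:, true_class] - out[:, target_class]
        grads, = torch.autograd.grad(margin.sum(), xs)
        maxima.append(grads.flatten(1).norm(dim=1).max().item())
    # Location of the fitted reverse Weibull = local Lipschitz estimate.
    _, loc, _ = weibull_max.fit(np.asarray(maxima))
    with torch.no_grad():
        out0 = model(x0.unsqueeze(0))[0]
    g0 = (out0[true_class] - out0[target_class]).item()
    # Margin over Lipschitz estimate bounds the distortion needed to
    # flip the class; cap at the sampling radius R.
    return min(g0 / loc, R)
```

Because no attack is run, the score is attack-agnostic, which is exactly the property the excerpt contrasts with the attack-based FGSM/PGD evaluations above.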