2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
DOI: 10.1109/cvpr.2019.00714

Disentangling Adversarial Robustness and Generalization

Abstract: Obtaining deep networks that are robust against adversarial examples and generalize well is an open problem. A recent hypothesis [102, 95] even states that both robust and accurate models are impossible, i.e., adversarial robustness and generalization are conflicting goals. In an effort to clarify the relationship between robustness and generalization, we assume an underlying, low-dimensional data manifold and show that: 1. regular adversarial examples leave the manifold; 2. adversarial examples constrained to …
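For orientation, the "regular" adversarial examples the abstract refers to are typically crafted by perturbing directly in input space. Below is a minimal sketch of one standard such attack (the fast gradient sign method); it is illustrative only, not the paper's own method, and `model` and `loss_fn` are hypothetical placeholders:

```python
# Minimal FGSM sketch: a one-step, input-space perturbation of the kind
# the abstract calls a "regular" adversarial example. Such perturbations
# are unconstrained by the data manifold and tend to leave it.
# `model` and `loss_fn` are hypothetical placeholders.
import tensorflow as tf

def fgsm_example(model, loss_fn, x, y, epsilon=0.03):
    """Perturb x one epsilon-sized step along the sign of the loss gradient."""
    x = tf.convert_to_tensor(x)
    with tf.GradientTape() as tape:
        tape.watch(x)
        loss = loss_fn(y, model(x))
    grad = tape.gradient(loss, x)
    x_adv = x + epsilon * tf.sign(grad)
    return tf.clip_by_value(x_adv, 0.0, 1.0)  # keep pixels in a valid range
```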

Cited by 185 publications (176 citation statements)
References 42 publications
“…Additionally, increasing the number of prototypes leads to a better ability to generalize. This observation provides empirical evidence supporting the results of [4], which stated that generalization and robustness are not necessarily conflicting goals, a topic recently under discussion.…”
Section: Hypothesis Margin Maximization in a Space Different to the Input Space (supporting)
confidence: 77%
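For readers unfamiliar with the quoted section's terminology: in LVQ models, the hypothesis margin of a sample is commonly defined as half the gap between its distance to the nearest wrong-class prototype and its distance to the nearest correct-class prototype. A minimal sketch under that textbook definition; the `prototypes` and `proto_labels` arrays are hypothetical placeholders:

```python
# Minimal sketch of the LVQ hypothesis margin, using the common
# definition (d_wrong - d_correct) / 2; the margin is positive exactly
# when the nearest prototype has the correct class.
# `prototypes` and `proto_labels` are hypothetical placeholders.
import numpy as np

def hypothesis_margin(x, y, prototypes, proto_labels):
    dists = np.linalg.norm(prototypes - x, axis=1)  # distance to each prototype
    d_correct = dists[proto_labels == y].min()      # nearest same-class prototype
    d_wrong = dists[proto_labels != y].min()        # nearest other-class prototype
    return 0.5 * (d_wrong - d_correct)
```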
“…All experiments and models were implemented using the Keras framework in Python on top of TensorFlow. All evaluated LVQ models are made available as pretrained TensorFlow graphs and as part of the Foolbox zoo at https://github.com/LarsHoldijk/robust_LVQ_models.…”
Section: Methods (mentioning)
confidence: 99%
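The quoted passage points to pretrained models published through the Foolbox zoo. A minimal sketch of how such a zoo model is typically fetched and attacked, assuming a Foolbox 2.x-style API (exact calls may differ by version); the URL is the repository named in the quote:

```python
# Minimal sketch: fetching a pretrained model from the Foolbox zoo and
# running a standard attack against it. Assumes a Foolbox 2.x-style API.
import foolbox
from foolbox import zoo

# Fetch the pretrained LVQ model published by the cited authors.
model = zoo.get_model(url="https://github.com/LarsHoldijk/robust_LVQ_models")

# Any Foolbox attack can then be applied to the wrapped model, e.g.:
# attack = foolbox.attacks.FGSM(model)
# adversarial = attack(image, label)
```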
“…It is important to note that x may not necessarily follow the distribution D. Thus, studies on adversarial examples differ from those on model generalization. Moreover, a number of studies have reported on the relation between these two properties (Su et al., 2018; Stutz et al., 2019; Zhang et al., 2019b). From this clarification, we hope our audience grasps the difference and relation between risk and adversarial risk, and the importance of studying adversarial countermeasures.…”
Section: Adversarial Risk vs. Risk (mentioning)
confidence: 99%
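The distinction the quote draws can be made precise with the standard textbook definitions (general definitions, not formulas taken from the cited paper): the ordinary risk evaluates the loss at samples drawn from D, while the adversarial risk evaluates it at the worst-case perturbation within an epsilon-ball, so the perturbed point x + δ need not follow D.

```latex
% Standard (population) risk of a classifier f under data distribution D:
R(f) = \mathbb{E}_{(x,y)\sim D}\bigl[\ell(f(x), y)\bigr]

% Adversarial risk: loss at the worst-case perturbation within an
% epsilon-ball around x; the point x + \delta need not follow D.
R_{\mathrm{adv}}(f) = \mathbb{E}_{(x,y)\sim D}\Bigl[\max_{\|\delta\|\le\epsilon}
    \ell\bigl(f(x+\delta), y\bigr)\Bigr]
```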
“…Recent work by Tsipras et al. [11] argued that this might be due to the natural tendency of high-accuracy classifiers to exploit small differences as a means of greedily leveraging available information. Interestingly, Stutz et al. [17] studied the creation of "on-manifold" adversarial examples, which conform to the original input distribution as defined by a Variational Autoencoder (VAE)-Generative Adversarial Network (GAN) hybrid. Unlike with off-manifold, or traditional, adversarial examples, they found that generalization accuracy would actually be increased by training with on-manifold adversarial examples [17].…”
Section: Related Work (mentioning)
confidence: 99%
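To make the quoted idea concrete: an on-manifold adversarial example is found by searching in the latent space of a generative model and decoding, so the result stays on the learned manifold. A minimal sketch under that reading; `encoder`, `decoder`, `classifier`, and `loss_fn` are hypothetical placeholders, and this is a simplified stand-in for the VAE-GAN construction in [17], not the authors' released code:

```python
# Minimal sketch of an on-manifold adversarial search: perturb the latent
# code z rather than the input pixels, so the decoded sample remains on
# the generative model's manifold. `encoder`, `decoder`, `classifier`,
# and `loss_fn` are hypothetical placeholders.
import tensorflow as tf

def on_manifold_attack(encoder, decoder, classifier, loss_fn, x, y,
                       eta=0.05, steps=10, lr=0.01):
    z = tf.Variable(encoder(x))   # latent code of the clean input
    z0 = tf.identity(z)
    for _ in range(steps):
        with tf.GradientTape() as tape:
            loss = loss_fn(y, classifier(decoder(z)))
        z.assign_add(lr * tf.sign(tape.gradient(loss, z)))
        z.assign(z0 + tf.clip_by_value(z - z0, -eta, eta))  # small latent step
    return decoder(z)  # decoded example lies on the learned manifold
```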