2018
DOI: 10.48550/arxiv.1804.11285
Preprint

Adversarially Robust Generalization Requires More Data

Ludwig Schmidt, Shibani Santurkar, Dimitris Tsipras, et al.

Abstract: Machine learning models are often susceptible to adversarial perturbations of their inputs. Even small perturbations can cause state-of-the-art classifiers with high "standard" accuracy to produce an incorrect prediction with high confidence. To better understand this phenomenon, we study adversarially robust learning from the viewpoint of generalization. We show that already in a simple natural data model, the sample complexity of robust learning can be significantly larger than that of "standard" learning. T…
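To make the abstract's claim concrete, here is a minimal Monte-Carlo sketch, assuming a two-class spherical-Gaussian model in the spirit of the abstract's "simple natural data model". The dimension d, noise scale sigma, perturbation budget eps, and the class-mean estimator are all illustrative assumptions, not the paper's exact construction or constants:

import numpy as np

# Illustrative sketch (not the paper's experiment): labels y in {-1,+1},
# inputs x ~ N(y * theta, sigma^2 I), and a linear classifier obtained by
# averaging label-weighted samples. Parameter values are assumed.
rng = np.random.default_rng(0)
d, sigma, eps = 1000, 5.0, 0.5

theta = np.ones(d)  # class mean direction, ||theta||_2 = sqrt(d)

def sample(n):
    y = rng.choice([-1.0, 1.0], size=n)
    x = y[:, None] * theta + sigma * rng.standard_normal((n, d))
    return x, y

def evaluate(w, n_test=20000):
    x, y = sample(n_test)
    margin = y * (x @ w)
    std_acc = (margin > 0).mean()
    # A worst-case l_inf perturbation of size eps lowers the margin
    # of a linear classifier by exactly eps * ||w||_1.
    rob_acc = (margin - eps * np.abs(w).sum() > 0).mean()
    return std_acc, rob_acc

for n in (1, 10, 100, 1000):
    x, y = sample(n)
    w = (y[:, None] * x).mean(axis=0)  # simple class-mean estimator
    std_acc, rob_acc = evaluate(w)
    print(f"n={n:4d}  standard acc={std_acc:.3f}  robust acc={rob_acc:.3f}")

In this toy run, standard accuracy is already high from a single training example, while worst-case (robust) accuracy catches up only after orders of magnitude more samples, qualitatively mirroring the sample-complexity gap the abstract describes.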

Cited by 39 publications (70 citation statements)
References 22 publications
“…Adversarial examples: Several schools of thought suggest adversarial examples are an unavoidable consequence of various aspects of the ML pipeline: (a) high dimensionality of input data [22,18,30,43]; (b) model misspecification [37]; (c) label noise [17,19,3]; (d) larger sample complexity (upper bounds) of adversarial robustness vs. regular generalization [42]. This motivates a commonly conjectured fundamental tradeoff between regular accuracy and robustness.…”
Section: Related Work
confidence: 68%
“…The estimation-centric explanation is in contrast to the majority of existing explanations for adversarial examples, which suggest that these are an unavoidable consequence of various aspects of the ML pipeline: (a) high dimensionality of input data [22,18,30,43]; (b) model misspecification [37]; (c) label noise [17,19,3]; (d) larger sample complexity (upper bounds) of adversarial robustness vs. regular generalization [42,49]. To rule out the above as possible causes for adversarial examples in our model, we intentionally study a setup with low-dimensional input data and without misspecification or label noise.…”
Section: Introduction
confidence: 99%
“…Semi-supervised and self-supervised learning for adversarial training. Schmidt et al. [196] show that the sample complexity of adversarial learning can be significantly larger than that of "standard" learning, and hence more samples are required to achieve a non-trivial adversarially robust classifier. However, conventional adversarial training needs class labels and cannot be easily applied to unlabeled data.…”
Section: Adversarial Training
confidence: 99%
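For context on why the citation above notes that adversarial training needs class labels: the inner maximization of adversarial training searches for the perturbation that maximizes the loss at the true label. A minimal sketch for a linear scorer, where the worst-case l_inf perturbation has a closed form (the function name and values are illustrative, not from the cited works):

import numpy as np

# Label dependence of the inner maximization for a linear scorer
# f(x) = w @ x over an l_inf ball of radius eps: the margin-minimizing
# perturbation is delta = -eps * y * sign(w). Without the label y, the
# attack direction cannot even be chosen.
def worst_case_perturbation(w, y, eps):
    return -eps * y * np.sign(w)

w = np.array([0.5, -1.0, 2.0])
x = np.array([1.0, 0.0, 1.0])
y = 1.0  # true label, required to form the attack
delta = worst_case_perturbation(w, y, eps=0.1)
print("clean margin   :", y * (w @ x))            # 2.50
print("attacked margin:", y * (w @ (x + delta)))  # 2.15 = 2.50 - eps * ||w||_1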
“…Parallel to this line of work, we provide theory to understand how NFM can improve robustness. Also related to this line of work is the study of the trade-offs between robustness and accuracy [47,54,58,63,65,73,78]. There are also attempts to study generalization in terms of robustness [16,33,72].…”
Section: Related Work
confidence: 99%