2017
DOI: 10.48550/arxiv.1712.02779
Preprint

Exploring the Landscape of Spatial Robustness

Logan Engstrom,
Brandon Tran,
Dimitris Tsipras
et al.

Abstract: The study of adversarial robustness has so far largely focused on perturbations bound in ℓp-norms. However, state-of-the-art models turn out to be also vulnerable to other, more natural classes of perturbations such as translations and rotations. In this work, we thoroughly investigate the vulnerability of neural network-based classifiers to rotations and translations. While data augmentation offers relatively small robustness, we use ideas from robust optimization and test-time input aggregation to significantly improve robustness. […]
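One of the ideas named in the abstract, test-time input aggregation, is easy to sketch: classify several randomly rotated and translated copies of an input and take a majority vote over the predictions. The snippet below is a minimal illustration in PyTorch, not the paper's exact procedure; the model interface, transform ranges, and sample count are assumptions.

```python
import torch
import torchvision.transforms.functional as TF

def aggregated_predict(model, image, num_samples=16,
                       max_rotation=30.0, max_translation=3):
    """Majority-vote prediction over random rotations/translations.

    image: float tensor of shape (C, H, W); model maps (N, C, H, W) to logits.
    The transform ranges are illustrative, not the paper's settings.
    """
    votes = []
    for _ in range(num_samples):
        # Sample a random spatial transform within the assumed ranges.
        angle = float(torch.empty(1).uniform_(-max_rotation, max_rotation))
        dx = int(torch.randint(-max_translation, max_translation + 1, (1,)))
        dy = int(torch.randint(-max_translation, max_translation + 1, (1,)))
        transformed = TF.affine(image, angle=angle, translate=[dx, dy],
                                scale=1.0, shear=[0.0])
        with torch.no_grad():
            votes.append(int(model(transformed.unsqueeze(0)).argmax(dim=1)))
    # Majority vote across the transformed copies.
    return max(set(votes), key=votes.count)
```

The intuition is that a spatial adversarial example often depends on one specific pose, so averaging over random poses tends to wash out the attack; the cost is num_samples forward passes per prediction.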

Cited by 10 publications (14 citation statements)
References 15 publications
“…Improving the generalization of deep learning models has become a major research topic, with many different threads of research including Bayesian deep learning (Neal, 1996; Gal, 2016), adversarial (Engstrom et al., 2019; Jacobsen et al., 2018) and non-adversarial (Hendrycks & Dietterich, 2019; Yin et al., 2019) robustness, causality (Arjovsky et al., 2019), and other works aimed at distinguishing statistical features from semantic features (Gowal et al., 2019; Geirhos et al., 2018). While neural networks often exhibit superhuman generalization performance on the training distribution, they can be extremely sensitive to minute changes in distribution (Su et al., 2019; Engstrom et al., 2017). In this work, we consider out-of-distribution (OoD) generalization, where a model must generalize to new distributions at test time without seeing any training data from them. We assume a fixed underlying task, and access to labeled data from multiple training environments.…”
Section: Introduction (mentioning)
confidence: 99%
“…• (Adversarial examples) We find, using human intuition or an automated method, a rotated version of a training image that the model classifies incorrectly, despite being correct on the unrotated image. ⇒ The model is not rotation invariant, and will likely misclassify rotated versions of other images [27].…”
Section: Examples Of Interactive Behavior Certificates (mentioning)
confidence: 99%
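The “automated method” mentioned in this citation can be as simple as a grid search over rotation angles, since the spatial attack space is low-dimensional (one angle, two translation offsets) and can be enumerated exhaustively. Below is a minimal PyTorch sketch under that assumption; the angle range, grid resolution, and function name are illustrative, not taken from the cited work.

```python
import torch
import torchvision.transforms.functional as TF

def find_adversarial_rotation(model, image, label,
                              max_angle=30.0, num_angles=31):
    """Return an angle (degrees) whose rotation the model misclassifies,
    or None if every angle on the grid is classified correctly.

    image: float tensor of shape (C, H, W); label: true class index.
    """
    for angle in torch.linspace(-max_angle, max_angle, num_angles).tolist():
        rotated = TF.rotate(image, angle)
        with torch.no_grad():
            pred = int(model(rotated.unsqueeze(0)).argmax(dim=1))
        if pred != label:
            return angle  # evidence the model is not rotation invariant
    return None
```

Exhaustive search over the pose grid avoids relying on gradients of the loss with respect to the transform parameters, which can be uninformative for spatial perturbations.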
“…Figure 1.6) (Beery, van Horn, and Perona, 2018), object pose (Alcorn, Li, Gong et al., 2019) or texture (Geirhos, Rubisch, Michaelis et al., 2018). Their performance also worsens when small rotations and translations are applied to the image (Engstrom, Tran, Tsipras et al., 2019), as well as under corruptions and distortions (Dodge and Karam, 2017; Geirhos, Medina Temme, Rauber et al., 2018; Hendrycks and Dietterich, 2019).…”
Section: Why Do Deep Neural Networks Generalize So Poorly? (mentioning)
confidence: 99%