2020
DOI: 10.48550/arxiv.2007.08450
Preprint
Learning perturbation sets for robust machine learning

Eric Wong,
J. Zico Kolter

Abstract: Although much progress has been made towards robust deep learning, a significant gap in robustness remains between real-world perturbations and the more narrowly defined sets typically studied in adversarial defenses. In this paper, we aim to bridge this gap by learning perturbation sets from data, in order to characterize real-world effects for robust training and evaluation. Specifically, we use a conditional generator that defines the perturbation set over a constrained region of the latent space. We formulate d…
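The abstract describes a perturbation set defined as the image of a norm-bounded latent region under a conditional generator, i.e. S(x) = {g(x, z) : ||z|| ≤ ε}. The sketch below illustrates only that set construction, not the paper's actual architecture: the generator here is a random linear map standing in for a trained network, and all names (`generator`, `sample_perturbation_set`, `W`) are illustrative, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a trained conditional generator: a fixed linear map from
# a 2-d latent code to a 4-d input perturbation. In the paper this would be
# a learned neural network conditioned on x.
W = rng.standard_normal((4, 2)) * 0.1

def generator(x, z):
    """Perturbed example g(x, z) = x + W z (illustrative stand-in)."""
    return x + W @ z

def sample_perturbation_set(x, eps, n=5):
    """Draw n points from S(x) = {g(x, z) : ||z||_2 <= eps}."""
    samples = []
    for _ in range(n):
        z = rng.standard_normal(2)
        # Project z into the eps-ball: unchanged if ||z|| <= eps,
        # rescaled to the boundary otherwise.
        z = eps * z / max(np.linalg.norm(z), eps)
        samples.append(generator(x, z))
    return samples

for x_pert in sample_perturbation_set(np.ones(4), eps=0.5):
    print(x_pert)
```

Robust training over a learned set of this form would then minimize the worst-case (or expected) loss over z in the ε-ball rather than over raw input perturbations.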

Cited by 11 publications (10 citation statements)
References 37 publications
“…However, in this paper we make no particular assumption on the specific form of ∆. In particular, our results apply to arbitrary perturbation sets, such as those used in [62][63][64][65][66].…”
Section: Problem Formulation
confidence: 99%
“…Due to space constraints, we considered only perturbations of the form G(x, δ) = x + δ as in (P-RO). Yet, by once again fixing the perturbation distribution λ, we can obtain a myriad of data augmentation techniques, including the group-theoretic data-augmentation scheme discussed in [93], where G denotes the group action, and the model-based robust training methods discussed in [62][63][64][65]. Indeed, exploring the efficacy of DALE toward improving robustness beyond norm-bounded perturbations is an exciting direction for future work.…”
Section: Acknowledgements and Disclosure Of Funding A Connections To ...
confidence: 99%
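The passage above contrasts the additive special case G(x, δ) = x + δ with more general perturbation models, and notes that fixing the perturbation distribution λ recovers ordinary data augmentation. A minimal sketch of that special case, with λ taken (as an assumption, purely for illustration) to be uniform on an ℓ∞ ball:

```python
import numpy as np

rng = np.random.default_rng(1)

def G_additive(x, delta):
    """The additive special case G(x, delta) = x + delta."""
    return x + delta

def augment(x, eps, n=4):
    """Data augmentation: sample delta from a fixed distribution lambda
    (here uniform on the l-infinity ball of radius eps) and apply G."""
    return [G_additive(x, rng.uniform(-eps, eps, size=x.shape))
            for _ in range(n)]

x = np.array([0.2, -0.4, 0.9])
for x_aug in augment(x, eps=0.1):
    print(x_aug)
```

Swapping `G_additive` for a group action or a learned generator, as the quote notes, yields the corresponding augmentation or model-based robust training scheme without changing the sampling loop.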
“…Gulshad et al [10] trained on images with adversarial as well as natural perturbations like occlusions or elastic deformations, while achieving good generalization for many other unseen perturbations. Robey et al [18] and Wong et al [26] argued that it is impossible to capture all possible natural perturbations mathematically. Therefore, they used generative models to generate images with perturbations to train the network.…”
Section: Related Work
confidence: 99%
“…[16] trained on images with adversarial as well as natural perturbations like occlusions or elastic deformations, while achieving good generalization for many other unseen perturbations. [31] and [43] argued that it is impossible to capture all possible natural perturbations mathematically. Therefore, they used generative models to generate images with perturbations to train the network.…”
Section: Related Work
confidence: 99%