2019 IEEE/CVF International Conference on Computer Vision (ICCV)
DOI: 10.1109/iccv.2019.00807

Addressing Model Vulnerability to Distributional Shifts Over Image Transformation Sets

Abstract: We are concerned with the vulnerability of computer vision models to distributional shifts. We formulate a combinatorial optimization problem that allows evaluating the regions in the image space where a given model is more vulnerable, in terms of image transformations applied to the input, and face it with standard search algorithms. We further embed this idea in a training procedure, where we define new data augmentation rules according to the image transformations that the current model is most vulnerable to…
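To make the search-and-augment loop described in the abstract concrete, below is a minimal, illustrative sketch; the transformation pool, toy model, and search depth are assumptions for illustration, not the authors' implementation.

```python
# Illustrative sketch: exhaustive search over compositions of image
# transformations to find the composition a model is most vulnerable
# to, then reuse it as a new data-augmentation rule. All names here
# (TRANSFORMS, toy_model_loss, ...) are hypothetical.
import itertools
import numpy as np

# A small pool of concrete image transformations (numpy only).
TRANSFORMS = {
    "identity": lambda x: x,
    "flip_lr":  lambda x: x[:, ::-1],
    "flip_ud":  lambda x: x[::-1, :],
    "rot90":    lambda x: np.rot90(x),
    "invert":   lambda x: 1.0 - x,
    "darken":   lambda x: 0.5 * x,
}

def apply_composition(img, names):
    out = img
    for name in names:
        out = TRANSFORMS[name](out)
    return out

def toy_model_loss(images, labels):
    """Stand-in for a real model's average loss on a batch."""
    scores = np.array([img[:2, :2].mean() for img in images])
    return float(np.mean((scores - labels) ** 2))

def most_vulnerable_composition(images, labels, depth=2):
    """Search all compositions of `depth` transforms; return the one
    maximizing the model's loss (the worst-case region)."""
    worst, worst_loss = None, -np.inf
    for combo in itertools.product(TRANSFORMS, repeat=depth):
        loss = toy_model_loss(
            [apply_composition(img, combo) for img in images], labels)
        if loss > worst_loss:
            worst, worst_loss = combo, loss
    return worst, worst_loss

rng = np.random.default_rng(0)
images = [rng.random((8, 8)) for _ in range(16)]
labels = rng.random(16)
combo, loss = most_vulnerable_composition(images, labels)
print("worst-case composition:", combo, "loss:", round(loss, 4))
# Training would then augment with `combo` and iterate:
# retrain, re-search for the new worst case, augment again.
```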

Cited by 66 publications (61 citation statements: 1 supporting, 60 mentioning, 0 contrasting). References 25 publications.

Citation statements, ordered by relevance:
“…flipping and rotation. However, conventional data augmentation methods only deal with simple geometric changes within the same dataset (Volpi and Murino 2019). When the domain gap is large, as in the image style variations illustrated in Figure 4, learning-based augmentation strategies are required…”
Section: Related Work (citation type: mentioning)
confidence: 99%
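For contrast with the learned strategies the excerpt calls for, the conventional "simple geometric" pipeline it refers to typically looks like the following torchvision sketch (illustrative; assumes a PIL image input):

```python
# Conventional geometric augmentation: flips and small rotations.
# These perturb pose within the same dataset but introduce no style
# or domain change, which is why they cannot bridge large domain gaps.
from torchvision import transforms

geometric_aug = transforms.Compose([
    transforms.RandomHorizontalFlip(p=0.5),  # left-right flip
    transforms.RandomRotation(degrees=15),   # rotate within +/-15 degrees
    transforms.ToTensor(),
])
# augmented = geometric_aug(pil_image)
```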
“…5 in order to maximize validation performance [139]. Since augmentation operations are often non-differentiable, this requires reinforcement learning [139], discrete gradient-estimators [140], or evolutionary [141] methods. Recent attempts use meta-gradient to learn mixing proportions in mixup-based augmentation [142].…”
Section: Embedding Functions (Metric Learning) (citation type: mentioning)
confidence: 99%
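As a concrete anchor for the mixup-based approach the excerpt mentions, here is a minimal numpy sketch; in the meta-gradient variant cited there, the mixing proportions would themselves be learned, whereas `alpha` is fixed here for illustration.

```python
# Minimal mixup: convexly combine random pairs of inputs and labels.
import numpy as np

def mixup_batch(x, y, alpha=0.2, rng=None):
    """x: (N, ...) inputs; y: (N, C) one-hot labels."""
    rng = rng or np.random.default_rng()
    lam = rng.beta(alpha, alpha)        # mixing proportion in [0, 1]
    perm = rng.permutation(len(x))      # random pairing of examples
    x_mix = lam * x + (1 - lam) * x[perm]
    y_mix = lam * y + (1 - lam) * y[perm]
    return x_mix, y_mix, lam
```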
“…EAs are relatively more commonly applied in RL applications [24], [168] (where models are typically smaller, and inner optimizations are long and non-differentiable). However they have also been applied to learn learning rules [194], optimizers [195], architectures [26], [126] and data augmentation strategies [141] in supervised learning. They are also particularly important in learning human interpretable symbolic meta-representations [119].…”
Section: Evolution (citation type: mentioning)
confidence: 99%
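The non-differentiable outer loop the excerpt describes can be as simple as a (1+1) evolutionary search over a discrete augmentation policy; the sketch below uses a placeholder fitness function, since real fitness would be validation accuracy after training with the candidate policy.

```python
# Toy (1+1) evolutionary search over a discrete augmentation policy.
import random

OPS = ["flip", "rotate", "crop", "color_jitter", "cutout"]

def fitness(policy):
    # Hypothetical stand-in: in practice, train a model using `policy`
    # for augmentation and return its validation accuracy.
    random.seed(hash(tuple(policy)) % (2 ** 32))
    return random.random()

def evolve(generations=50, policy_len=3):
    parent = [random.choice(OPS) for _ in range(policy_len)]
    best = fitness(parent)
    for _ in range(generations):
        child = list(parent)
        child[random.randrange(policy_len)] = random.choice(OPS)  # mutate
        f = fitness(child)
        if f >= best:                     # keep the better policy
            parent, best = child, f
    return parent, best

print(evolve())
```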
“…Existing works on single domain generalization [20,42,52,53,60] try to improve the generalization capability through adversarial domain augmentation (ADA), which synthesizes new training images in an adversarial way to mimic virtual challenging domains. The model therefore learns the domain-invariant features to improve its generalization performance.…”
Section: Introduction (citation type: mentioning)
confidence: 99%
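A minimal PyTorch sketch of the adversarial domain augmentation (ADA) idea the excerpt summarizes: perturb inputs by gradient ascent on the task loss, with a proximity penalty so the synthesized images stay semantically close to the source. The model, step count, and penalty weight below are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def adversarial_augment(model, x, y, steps=5, lr=1.0, gamma=0.1):
    """Synthesize 'virtual domain' images by ascending the task loss."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        loss = loss - gamma * ((x_adv - x) ** 2).mean()  # stay near source
        grad, = torch.autograd.grad(loss, x_adv)
        x_adv = (x_adv + lr * grad).detach()             # ascend the loss
    return x_adv

# Usage with a toy classifier:
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 8 * 8, 10))
x = torch.rand(4, 3, 8, 8)
y = torch.randint(0, 10, (4,))
x_hard = adversarial_augment(model, x, y)   # harder 'virtual domain' batch
```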
“…To this end, we propose a novel adaptive form of normalization, named adaptive standardization and rescaling normalization (ASR-Norm), in which the standardization and rescaling statistics are both learned to be adaptive to each individual input sample. When used with ADA [20,52,53], ASR-Norm can learn the normalization statistics by approximately optimizing a robust objective, making the statistics adaptive to data coming from different domains and hence helping the model generalize better across domains than traditional normalization approaches. We also show that ASR-Norm can be viewed as a generic form of traditional normalization approaches including BN, IN, layer normalization (LN) [1], group normalization (GN) [55], and switchable normalization (SN) [32].…”
Section: Introduction (citation type: mentioning)
confidence: 99%
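The excerpt does not spell out the ASR-Norm architecture, but the core idea, standardization and rescaling statistics predicted per sample rather than fixed, can be sketched as follows; layer sizes and the use of instance statistics as input are assumptions for illustration.

```python
# Rough sketch of input-adaptive normalization: a small network
# predicts the rescaling parameters from each sample's own statistics.
# NOT the exact ASR-Norm design; shapes and wiring are illustrative.
import torch
import torch.nn as nn

class AdaptiveNorm(nn.Module):
    def __init__(self, channels, hidden=16):
        super().__init__()
        self.stat_net = nn.Sequential(
            nn.Linear(2 * channels, hidden), nn.ReLU(),
            nn.Linear(hidden, 2 * channels),
        )

    def forward(self, x):                    # x: (N, C, H, W)
        mu = x.mean(dim=(2, 3))              # per-sample channel means
        sigma = x.std(dim=(2, 3)) + 1e-5     # per-sample channel stds
        stats = self.stat_net(torch.cat([mu, sigma], dim=1))
        gamma, beta = stats.chunk(2, dim=1)  # adaptive rescaling params
        x_hat = (x - mu[..., None, None]) / sigma[..., None, None]
        return (1 + gamma)[..., None, None] * x_hat + beta[..., None, None]

x = torch.randn(4, 8, 16, 16)
print(AdaptiveNorm(8)(x).shape)              # torch.Size([4, 8, 16, 16])
```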