2021
DOI: 10.48550/arxiv.2104.01086
Preprint
Defending Against Image Corruptions Through Adversarial Augmentations

Abstract: Modern neural networks excel at image classification, yet they remain vulnerable to common image corruptions such as blur, speckle noise or fog. Recent methods that focus on this problem, such as AugMix and DeepAugment, introduce defenses that operate in expectation over a distribution of image corruptions. In contrast, the literature on p-norm bounded perturbations focuses on defenses against worst-case corruptions. In this work, we reconcile both approaches by proposing AdversarialAugment, a technique which…
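The abstract contrasts defenses that average over a distribution of corruptions (AugMix, DeepAugment) with defenses against the worst case. A minimal sketch of the worst-case-over-augmentations idea follows; it is not the paper's actual algorithm (which optimizes adversarially over the parameters of image-to-image models), and all corruption ops and model names here are illustrative toys:

```python
import numpy as np

# Toy, deterministic stand-ins for common corruptions (blur, noise, fog).
def corruptions():
    return [
        lambda x: np.clip(x + 0.1, 0.0, 1.0),  # brightness shift
        lambda x: np.clip(x * 1.5, 0.0, 1.0),  # contrast boost
        lambda x: 0.7 * x + 0.3,               # fog-like haze
    ]

def loss(w, x, y):
    """Mean squared error of a toy linear model."""
    return float(np.mean((x @ w - y) ** 2))

def worst_case_augment(w, x, y, ops):
    """Return the corrupted view that maximizes the current loss.

    Training on this worst-case view is the adversarial counterpart of
    averaging the loss over all views, as expectation-based defenses do.
    """
    losses = [loss(w, op(x), y) for op in ops]
    k = int(np.argmax(losses))
    return ops[k](x), losses[k]

rng = np.random.default_rng(0)
x = rng.random((4, 3))   # four tiny "images" of 3 pixels each
y = rng.random(4)
w = rng.random(3)
x_adv, l_adv = worst_case_augment(w, x, y, corruptions())
```

In a real training loop, the model weights would then be updated on `x_adv` rather than on the clean batch, so each step minimizes the maximum loss over the corruption set.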

Cited by 4 publications (8 citation statements)
References 39 publications
“…Koh et al. (2021) presented WILDS, a curated benchmark of 10 datasets reflecting a diverse range of distribution shifts that naturally arise in real-world applications. Cubuk et al. (2018); Calian et al. (2021) proposed augmentation methods to improve corruption robustness in 2D vision tasks. On the adversarial robustness benchmarking front, Carlini et al. (2019) discussed the methodological foundations, reviewed commonly accepted best practices, and suggested new methods for evaluating defenses against adversarial examples.…”
Section: Related Work
Confidence: 99%
“…Besides, there are cases where offline augmentation is not feasible, as it relies on pre-trained or generative models that are unavailable in certain scenarios; e.g., DeepAugment [20] or AdA [6] cannot be applied to C-100. On the other hand, offline augmentation may be necessary to avoid the computational cost of generating augmentations during training.…”
Section: Sample Complexity
Confidence: 99%
“…Although AugMix attains significant gains on CIFAR-10-C, it does not perform well against sophisticated benchmarks like ImageNet-C. DeepAugment (DA) [20] addresses this issue and diversifies the space of augmentations by introducing distorted images computed by perturbing the weights of image-to-image networks. DA, combined with AugMix, achieves the current state-of-the-art on ImageNet-C. Other schemes include: (i) worst-case noise training [37], (ii) inducing shape bias through stylized images [17], (iii) adversarial counterparts of DeepAugment [6] and AugMix [43], (iv) pre-training and/or adversarial training [24,45], (v) constraining the total variation of convolutional layers [38] and (vi) learning the image information in the phase rather than amplitude [7]. Besides, Vision Transformers [15] have been shown to be more robust to common corruptions than standard CNNs [4,31].…”
Section: Related Work
Confidence: 99%