2021 International Joint Conference on Neural Networks (IJCNN)
DOI: 10.1109/ijcnn52387.2021.9534307
Impact of Spatial Frequency Based Constraints on Adversarial Robustness

Abstract: Adversarial examples mainly exploit changes to input pixels to which humans are not sensitive, and arise from the fact that models make decisions based on uninterpretable features. Interestingly, cognitive science reports that human classification decisions rely predominantly on low spatial frequency components. In this paper, we investigate the robustness to adversarial perturbations of models enforced during training to leverage information corresponding to different s…
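
To make the frequency-constraint idea concrete, the following is a minimal, hedged sketch (not the authors' code) of how an input can be restricted to a low or high spatial frequency band with an FFT mask before it reaches the model; the function name and cutoff radius are illustrative assumptions.

# Hedged sketch: constraining an input to a spatial frequency band with an
# FFT-based mask before the forward pass. Cutoff and names are assumptions.
import numpy as np

def frequency_filter(image, cutoff, keep="low"):
    """Keep only the low (or high) spatial frequencies of a 2-D image.

    image  : 2-D numpy array (one channel)
    cutoff : radius in the centred Fourier plane, in frequency bins
    keep   : "low" keeps frequencies inside the radius, "high" outside
    """
    spectrum = np.fft.fftshift(np.fft.fft2(image))    # centred spectrum
    h, w = image.shape
    yy, xx = np.ogrid[:h, :w]
    dist = np.sqrt((yy - h / 2) ** 2 + (xx - w / 2) ** 2)
    mask = dist <= cutoff if keep == "low" else dist > cutoff
    filtered = np.fft.ifft2(np.fft.ifftshift(spectrum * mask))
    return np.real(filtered)

# During training, each input would be filtered this way, so the model can
# only rely on the retained frequency band.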

Cited by 9 publications (3 citation statements) | References: 20 publications
“…adversarial training, Gaussian noise augmentation) exhibit different sensitivities to perturbations along Fourier frequency components. [Ber+21] investigated the extent to which constraining models to use only the lowest (or highest) Fourier frequency components of input data provided perturbation robustness, also finding significant variability across datasets. [AHW21] tested the extent to which CNNs relied on various frequency bands by measuring model error on inputs where certain frequencies were removed, again finding a striking amount of variability across datasets.…”
Section: Related Work (mentioning)
confidence: 99%
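
The band-ablation protocol attributed to [AHW21] in the statement above can be sketched as follows; this is an illustrative reconstruction, not the cited paper's code, and predict, the data arrays, and the band limits are assumed names.

# Hedged sketch of a band-ablation probe: zero out a Fourier band and
# measure the model's error rate on the filtered inputs.
import numpy as np

def remove_band(image, lo, hi):
    """Zero out Fourier frequencies whose radius lies in [lo, hi)."""
    spectrum = np.fft.fftshift(np.fft.fft2(image))
    h, w = image.shape
    yy, xx = np.ogrid[:h, :w]
    dist = np.sqrt((yy - h / 2) ** 2 + (xx - w / 2) ** 2)
    spectrum[(dist >= lo) & (dist < hi)] = 0.0
    return np.real(np.fft.ifft2(np.fft.ifftshift(spectrum)))

def band_error(predict, images, labels, lo, hi):
    """Error rate when the [lo, hi) band is removed from every input."""
    wrong = sum(predict(remove_band(x, lo, hi)) != y
                for x, y in zip(images, labels))
    return wrong / len(labels)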
“…On the other hand, more recent research has revealed a number of less desirable and potentially data-dependent biases of CNNs, such as a tendency to make predictions on the basis of texture features [Gei+19]. Moreover, it has been repeatedly observed that CNNs are sensitive to perturbations in targeted ranges of the Fourier frequency spectrum [GFW19; SDB19] and further investigation has shown that these frequency ranges are dependent on training data [AHW21; Ber+21; Mai+22; Yin+19]. In this work, we provide a mathematical explanation for these frequency-space phenomena, showing with theory and experiments that neural network training causes CNNs to be most sensitive to frequencies that are prevalent in the training data distribution.…”
Section: Introduction (mentioning)
confidence: 99%
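
The frequency-sensitivity measurements referenced above (e.g., in the spirit of [Yin+19]) can be probed roughly as below: perturb inputs along a single Fourier basis vector and record the error rate per frequency. The epsilon scale and function names are assumptions for illustration, not the cited papers' exact protocol.

# Hedged sketch of a Fourier sensitivity probe (single-frequency perturbation).
import numpy as np

def fourier_basis(h, w, i, j):
    """Real-valued image whose spectrum has energy only at (i, j)."""
    spectrum = np.zeros((h, w), dtype=complex)
    spectrum[i, j] = 1.0
    spectrum[-i % h, -j % w] = 1.0                    # keep the image real
    basis = np.real(np.fft.ifft2(spectrum))
    return basis / (np.linalg.norm(basis) + 1e-12)    # unit l2 norm

def fourier_error(predict, images, labels, i, j, eps=4.0):
    """Error rate under an eps-scaled perturbation at frequency (i, j)."""
    h, w = images[0].shape
    noise = eps * fourier_basis(h, w, i, j)
    wrong = sum(predict(x + noise) != y for x, y in zip(images, labels))
    return wrong / len(labels)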
“…Thus, adversarial examples can attack such models by slightly altering the image in these frequency bands. While this may vary by dataset [34][35][36], at least some high-frequency component is always present, as, e.g., adversarial attacks can be detected in the frequency spectrum [37].…”
Section: A Frequency Perspective on Adversarial Training and Out-of-D... (mentioning)
confidence: 99%
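
As a rough illustration of the frequency-spectrum detection mentioned in the statement above, a simple heuristic compares the high-frequency energy fraction of an input against clean statistics; the cutoff and threshold values here are placeholder assumptions, not the method of the cited reference [37].

# Hedged sketch of a frequency-domain detection heuristic: flag inputs whose
# high-frequency energy fraction deviates from typical clean values.
import numpy as np

def high_freq_energy(image, cutoff):
    """Fraction of spectral energy outside a centred low-frequency disc."""
    power = np.abs(np.fft.fftshift(np.fft.fft2(image))) ** 2
    h, w = image.shape
    yy, xx = np.ogrid[:h, :w]
    dist = np.sqrt((yy - h / 2) ** 2 + (xx - w / 2) ** 2)
    return power[dist > cutoff].sum() / power.sum()

def looks_adversarial(image, cutoff=8, threshold=0.35):
    """Placeholder decision rule; threshold would be fit on clean data."""
    return high_freq_energy(image, cutoff) > threshold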