2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
DOI: 10.1109/cvpr42600.2020.00871

High-Frequency Component Helps Explain the Generalization of Convolutional Neural Networks

Cited by 388 publications (250 citation statements) · References 17 publications
Citation types: 11 supporting · 208 mentioning · 0 contrasting
Citing publications span 2020–2024

“…For example, many machine-vision systems are intolerant to image distortions: If CNNs are trained on clean images but tested on noisy images, they perform far below humans at test (73). But here too, if machines were burdened with humanlike visual acuity and so could barely represent the high-frequency features in the training set (i.e., the features most distorted by this sort of noise), they may be less sensitive to the patterns that later mislead them (74). Indeed, recent work finds that giving CNNs a humanlike fovea (75) or a hidden layer simulating V1 (76)…”
Section: Limit Machines Like Humans (mentioning)
confidence: 99%
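The "humanlike visual acuity" idea in this excerpt amounts to low-pass filtering the training data so that high-frequency features are barely representable. A minimal sketch of that preprocessing step, assuming NumPy/SciPy (the helper `simulate_reduced_acuity` is hypothetical, and an isotropic Gaussian blur is only a crude stand-in for the fovea and V1 models cited as (75) and (76)):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def simulate_reduced_acuity(images: np.ndarray, sigma: float = 1.5) -> np.ndarray:
    """Low-pass filter a batch of grayscale images (N, H, W) so the
    high-frequency components -- the ones most corrupted by pixel noise --
    are largely removed before training, loosely mimicking limited acuity."""
    return np.stack([gaussian_filter(img, sigma=sigma) for img in images])
```

Training on such blurred inputs would, by the excerpt's hypothesis, leave the model less sensitive to the high-frequency patterns that noise later distorts.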
“…97)-which, while literally true, may not explain why such different representations arise in the first place. Indeed, it is telling that much progress in understanding such errors has come from "behavioral" studies (in humans and machines) (69,70,74,80). A mixed strategy is likely best, with behavioral comparisons as essential components.…”
Section: What Species-fair Comparisons Show (mentioning)
confidence: 99%
“…Ilyas et al. [6], on the other hand, claimed that among the feature patterns that can be used to train generalizable models, some are susceptible to attacks and others are not; therefore, training robust models would require encoding human priors during training, which may not be known. Meanwhile, Wang et al. [7] found that high-frequency components of the inputs (images in this particular study) were targeted by adversarial perturbations and suggested smoothing the convolutional filters to reduce the influence of high-frequency features on the model's output. While there are some differences, a common thread in all of the aforementioned studies is that reducing the model's reliance on superfluous features can make it more robust to adversarial perturbations.…”
Section: Introduction (mentioning)
confidence: 85%
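The filter smoothing that this excerpt attributes to Wang et al. [7] can be pictured concretely. Below is a minimal sketch, assuming PyTorch; the helper `smooth_conv_filters` and the Gaussian parameterization are illustrative, not the authors' code. It convolves each spatial kernel of a `Conv2d` layer with a small normalized Gaussian, which attenuates the layer's response to high-frequency input components:

```python
import torch
import torch.nn.functional as F

def smooth_conv_filters(conv: torch.nn.Conv2d, sigma: float = 1.0) -> None:
    """Blur each spatial kernel of a Conv2d layer with a small Gaussian,
    damping the layer's sensitivity to high-frequency image content."""
    w = conv.weight.data                          # (out_ch, in_ch, kh, kw)
    kh, kw = w.shape[-2:]
    # Build a normalized 2D Gaussian the same size as the filters.
    ys = torch.arange(kh, dtype=w.dtype) - (kh - 1) / 2
    xs = torch.arange(kw, dtype=w.dtype) - (kw - 1) / 2
    g = torch.exp(-(ys[:, None] ** 2 + xs[None, :] ** 2) / (2 * sigma ** 2))
    g = (g / g.sum()).view(1, 1, kh, kw)
    # Convolve every kernel with the Gaussian ("same" padding, odd sizes).
    flat = w.reshape(-1, 1, kh, kw)
    conv.weight.data = F.conv2d(flat, g, padding=(kh // 2, kw // 2)).reshape_as(w)

layer = torch.nn.Conv2d(3, 16, kernel_size=3)
smooth_conv_filters(layer, sigma=0.8)
```

Note the "same" padding arithmetic assumes odd kernel sizes; typical 3×3 or 5×5 filters are fine.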
“…Recent studies have found that deep learning models are made vulnerable to adversarial attacks because their decision function relies on spurious features, which the adversary can perturb to induce misclassifications [5,6,7]. Building upon this line of work, in this paper we develop an approach to identify and remove spurious features from the network.…”
Section: Adversarial Attacks and Defenses (mentioning)
confidence: 99%
“…As (3) and (4) can be used to estimate how much of the high-frequency image content is partially suppressed but remains preserved by an image filtering method, it would be interesting to establish a link with the ability of the method to protect deep learning schemes against adversarial attacks, which often target high-frequency image components invisible to the human eye [20,21].…”
Section: Conclusion and Directions For Future Work (mentioning)
confidence: 99%
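The citing paper's equations (3) and (4) are not reproduced in the excerpt, so no attempt is made to reconstruct them here. The general kind of estimate it describes can, however, be sketched with a simple spectral-energy ratio; the helper `high_freq_energy_ratio` and the radial cutoff below are hypothetical stand-ins, not the cited method:

```python
import numpy as np

def high_freq_energy_ratio(original: np.ndarray,
                           filtered: np.ndarray,
                           cutoff: float = 0.25) -> float:
    """Fraction of the original image's high-frequency spectral energy
    that survives a filtering method. 'High frequency' means all FFT
    coefficients whose normalized radial frequency exceeds `cutoff`
    (0.5 is the Nyquist limit)."""
    fy = np.fft.fftfreq(original.shape[0])[:, None]
    fx = np.fft.fftfreq(original.shape[1])[None, :]
    high = np.hypot(fy, fx) > cutoff            # boolean high-pass mask
    e_orig = np.sum(np.abs(np.fft.fft2(original))[high] ** 2)
    e_filt = np.sum(np.abs(np.fft.fft2(filtered))[high] ** 2)
    return float(e_filt / e_orig)
```

A ratio near 0 would indicate the filter removes most high-frequency content (the components adversarial perturbations often target), while a ratio near 1 would indicate it is preserved.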