2022
DOI: 10.48550/arxiv.2203.09739
Preprint

Do Deep Networks Transfer Invariances Across Classes?

Abstract: To generalize well, classifiers must learn to be invariant to nuisance transformations that do not alter an input's class. Many problems have "class-agnostic" nuisance transformations that apply similarly to all classes, such as lighting and background changes for image classification. Neural networks can learn these invariances given sufficient data, but many real-world datasets are heavily class imbalanced and contain only a few examples for most of the classes. We therefore pose the question: how well do ne…
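
To make the abstract's notion of a class-agnostic nuisance transformation concrete, here is a minimal sketch (our illustration, not code from the paper; the jitter range and image shape are assumptions) of a lighting-style perturbation applied identically to every image, with no dependence on the class label:

```python
# Minimal sketch of a class-agnostic nuisance transformation:
# the same lighting perturbation is applied to every image,
# independent of its class label. (Illustrative only; the paper's
# actual transformations may differ.)
import numpy as np

def lighting_jitter(image: np.ndarray, rng: np.random.Generator) -> np.ndarray:
    """Randomly rescale brightness; the image's class never enters."""
    factor = rng.uniform(0.7, 1.3)  # assumed jitter range
    return np.clip(image * factor, 0.0, 1.0)

rng = np.random.default_rng(0)
image = rng.random((32, 32, 3))          # stand-in for a CIFAR-style image
augmented = lighting_jitter(image, rng)  # label plays no role
```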

Cited by 2 publications (2 citation statements)
References 37 publications

“…We now consider the setting where we are confronted with training for five additional epochs under an imbalanced dataset shift. In particular, we construct a strongly imbalanced dataset shift, with an exponential long-tailed frequency distribution of the classes with an imbalance ratio of 100 as in [68]. Our result is summarized in Figure 7.…”
Section: Methods
confidence: 99%
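
The exponential long-tailed construction referenced in the statement above can be sketched as follows (our illustration; the exact subsampling in [68] may differ): with C classes and imbalance ratio 100, class c keeps roughly n_max · 100^(−c/(C−1)) examples, so the rarest class has 100× fewer examples than the most frequent one.

```python
# Sketch of an exponential long-tailed class-frequency profile with
# imbalance ratio 100, in the spirit of the construction cited above.
# (Illustrative only; the exact subsampling in [68] may differ.)
import numpy as np

def long_tailed_counts(n_max: int, num_classes: int, imbalance_ratio: float) -> np.ndarray:
    """Per-class sample counts decaying exponentially from n_max down to
    n_max / imbalance_ratio for the rarest class."""
    c = np.arange(num_classes)
    counts = n_max * imbalance_ratio ** (-c / (num_classes - 1))
    return counts.astype(int)

counts = long_tailed_counts(n_max=5000, num_classes=10, imbalance_ratio=100.0)
print(counts)                  # [5000 2997 1796 ... 50]
print(counts[0] / counts[-1])  # 100.0, the imbalance ratio
```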
“…Data Augmentation and Mixup. Various data augmentation strategies have been proposed to improve the generalization of deep neural networks, including directly augmenting images with manually designed strategies (e.g., whitening, cropping) [37], generating more examples with generative models [3,7,88,85], and automatically finding augmentation strategies [10,11,43]. Mixup [82] and its variants [9, 12, 20, 24, 26, 32, 33, 44, 45, 52, 66-68, 81, 87] propose to improve generalization by linearly interpolating the input features of a pair of examples and their corresponding labels.…”
Section: Related Work
confidence: 99%
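
The Mixup interpolation described in this statement admits a short sketch (our illustration of the standard formulation in [82], not code from any of the cited works; alpha and the shapes are assumptions): draw λ ~ Beta(α, α) and form (λx₁ + (1−λ)x₂, λy₁ + (1−λ)y₂).

```python
# Sketch of standard Mixup [82]: linearly interpolate a pair of inputs
# and their one-hot labels with a Beta-distributed coefficient.
# (Illustrative; alpha and tensor shapes are assumptions.)
import numpy as np

def mixup(x1, y1, x2, y2, alpha: float = 0.2, rng=None):
    """Return a convex combination of two (input, one-hot label) pairs."""
    rng = rng or np.random.default_rng()
    lam = rng.beta(alpha, alpha)        # mixing coefficient in [0, 1]
    x = lam * x1 + (1.0 - lam) * x2
    y = lam * y1 + (1.0 - lam) * y2
    return x, y

rng = np.random.default_rng(0)
x1, x2 = rng.random((3, 32, 32)), rng.random((3, 32, 32))
y1, y2 = np.eye(10)[3], np.eye(10)[7]   # one-hot labels for classes 3 and 7
x_mix, y_mix = mixup(x1, y1, x2, y2, rng=rng)
```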