2021 | Preprint
DOI: 10.48550/arxiv.2110.08220

Combining Diverse Feature Priors

Abstract: To improve model generalization, model designers often restrict the features that their models use, either implicitly or explicitly. In this work, we explore the design space of leveraging such feature priors by viewing them as distinct perspectives on the data. Specifically, we find that models trained with diverse sets of feature priors have less overlapping failure modes, and can thus be combined more effectively. Moreover, we demonstrate that jointly training such models on additional (unlabeled) data allo…
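The abstract's central claim is that models trained under different feature priors fail on different examples. A minimal Python sketch of how such failure-mode overlap could be quantified is given below; the function name and the Jaccard-style measure are illustrative assumptions, not the paper's own metric.

# Illustrative sketch (not from the paper): measure how much the failure
# modes of two models overlap on a held-out set. Lower overlap suggests
# the models can be combined more effectively.
import numpy as np

def error_overlap(preds_a, preds_b, labels):
    """Fraction of examples misclassified by both models, relative to
    those misclassified by at least one (Jaccard overlap of error sets)."""
    err_a = preds_a != labels
    err_b = preds_b != labels
    both = np.logical_and(err_a, err_b).sum()
    either = np.logical_or(err_a, err_b).sum()
    return both / max(either, 1)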

Cited by 1 publication (3 citation statements)
References 21 publications
“…Ensemble approaches: To obtain better generalization across domains, Mancini et al (2018) built an ensemble of multiple domain-specific classifiers, each attributing to a different cue. Jain et al (2021) trained multiple networks separately, each with a different kind of bias. The ensemble of these biased networks was used to produce pseudo labels for unlabeled data.…”
Section: Supervised Loss
confidence: 99%
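The citation statement above describes ensembling separately trained, differently biased networks to pseudo-label unlabeled data. A minimal Python sketch of that scheme follows; the predict_proba interface, the confidence threshold, and the model names in the usage note are assumptions for illustration, not the cited papers' actual code.

# Minimal sketch: ensemble of differently biased models producing
# pseudo labels for confidently predicted unlabeled points.
import numpy as np

def ensemble_pseudo_labels(models, unlabeled_x, threshold=0.9):
    """Average per-model class probabilities over a numpy batch and keep
    only the unlabeled points on which the ensemble is confident."""
    # Each model is assumed to expose predict_proba(x) -> (n_samples, n_classes).
    probs = np.mean([m.predict_proba(unlabeled_x) for m in models], axis=0)
    confidence = probs.max(axis=1)
    keep = confidence >= threshold
    return unlabeled_x[keep], probs[keep].argmax(axis=1)

# Usage (hypothetical): shape_model and texture_model are trained
# separately, each under a different feature prior / bias.
# x_pl, y_pl = ensemble_pseudo_labels([shape_model, texture_model], x_unlabeled)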
“…We investigate this effect on three different datasets. We create a synthetic dataset, Tinted-STL-10, by adding a class-specific tint to the original STL-10 data following Jain et al (2021). This tint is only added to the training set (see samples in Figure 9) and not to the test set.…”
Section: Spurious Correlation Analysis
confidence: 99%
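The Tinted-STL-10 construction cited above adds a class-specific tint to the training images only, so the tint acts as a spurious training-only cue. A minimal Python sketch of such a tinting step is given below; the blending strength and random palette are assumptions, not the exact recipe from Jain et al (2021).

# Minimal sketch: add a class-specific tint to training images only,
# leaving the test set clean.
import numpy as np

def add_class_tint(images, labels, num_classes=10, strength=0.3, seed=0):
    """images: uint8 array (N, H, W, 3); labels: int array (N,).
    Blends each image toward a fixed per-class RGB color."""
    rng = np.random.default_rng(seed)
    # One fixed tint color per class (hypothetical palette choice).
    palette = rng.integers(0, 256, size=(num_classes, 3))
    tints = palette[labels][:, None, None, :]          # (N, 1, 1, 3)
    tinted = (1 - strength) * images + strength * tints
    return tinted.astype(np.uint8)

# Usage (hypothetical): tint the STL-10 training split only.
# x_train_tinted = add_class_tint(x_train, y_train)
# x_test stays untouched.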