2022
DOI: 10.48550/arxiv.2207.09239
Preprint
Assaying Out-Of-Distribution Generalization in Transfer Learning

Abstract: Since out-of-distribution generalization is a generally ill-posed problem, various proxy targets (e.g., calibration, adversarial robustness, algorithmic corruptions, invariance across shifts) have been studied across different research programs, resulting in different recommendations. While sharing the same aspirational goal, these approaches have never been tested under the same experimental conditions on real data. In this paper, we take a unified view of previous work, highlighting message discrepancies that we a…

Cited by 2 publications (2 citation statements)
References 39 publications
“…These results confirm that diminishing returns, in particular, are not necessarily linear even before probit transformation. This is consistent with the recent point by Wenzel et al (2022) and Teney et al (2022) that IID and OOD accuracy differ depending on the dataset. Furthermore, as recent work by Baek et al (2022) states, whether the linear return actually occurs depends on the problem set.…”
Section: Correlation Behaviors (supporting)
confidence: 93%
“…So far, there is no consensus on how to address this issue. Nevertheless, promising work in causal representation learning (59) is encouraging and raises hope that robustness will soon be achieved by AI models.…”
Section: Trends and Future Directions (mentioning)
confidence: 99%