A Fine-Grained Analysis on Distribution Shift
2021 | Preprint | DOI: 10.48550/arxiv.2110.11328

Abstract: Robustness to distribution shifts is critical for deploying machine learning models in the real world. Despite this necessity, there has been little work in defining the underlying mechanisms that cause these shifts and evaluating the robustness of algorithms across multiple, different distribution shifts. To this end, we introduce a framework that enables fine-grained analysis of various distribution shifts. We provide a holistic analysis of current state-of-the-art methods by evaluating 19 distinct methods g…
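As a concrete illustration of what evaluating robustness under a distribution shift looks like in the simplest case, here is a toy Python sketch. It is not the paper's framework, and all names and numbers in it are illustrative: a classifier is trained on one data distribution and scored on a shifted one, and the accuracy gap quantifies the lack of robustness.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def sample(n, shift=0.0):
    # Binary labels; the class-conditional feature means are translated by `shift`.
    y = rng.integers(0, 2, size=n)
    x = rng.normal(loc=y + shift, scale=1.0, size=n).reshape(-1, 1)
    return x, y

x_tr, y_tr = sample(2000, shift=0.0)           # training distribution
clf = LogisticRegression().fit(x_tr, y_tr)

print("in-distribution accuracy:", clf.score(*sample(2000, shift=0.0)))
print("shifted accuracy:        ", clf.score(*sample(2000, shift=1.5)))  # covariate shift

The shifted accuracy collapses toward chance because the decision boundary learned on the training distribution no longer separates the translated classes; benchmarking many methods across many such controlled shifts is the kind of study the abstract describes.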

Cited by 12 publications (17 citation statements) | References 28 publications
“…Comparing Datasets and Subsets Thereof. Such comparisons have various applications, e.g., to assessing domain suitability in transfer learning and analyzing distribution shift [57]. We presented two simple examples of comparing specific features between two image datasets (Figure 1) or between two splits of CityScapes (Figure 6).…”
Section: Visually Linking Model Results With Dataset Properties
Mentioning, confidence: 99%
“…Besides, there are two popular approaches, domain-invariant representation learning and invariant risk minimization, which we will discuss in detail below. In addition to algorithms, there are works that propose theoretical frameworks for DG (Zhang et al., 2021; Ye et al., 2021), or empirically examine DG algorithms over various benchmarks (Gulrajani & Lopez-Paz, 2021; Koh et al., 2021; Wiles et al., 2021). Domain-Invariant Representation Learning.…”
Section: Related Work
Mentioning, confidence: 99%
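Of the two approaches this excerpt names, invariant risk minimization is the easier to show compactly. Below is a minimal sketch of the widely used IRMv1 penalty (Arjovsky et al., 2019), not code from the cited paper: the per-environment risk is computed through a frozen dummy scale w = 1, and the squared gradient of the risk with respect to that scale penalizes classifiers that are not simultaneously optimal across environments.

import torch
import torch.nn.functional as F

def irmv1_penalty(logits: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
    # Gradient of the environment risk w.r.t. a dummy multiplier w = 1.0;
    # it vanishes when the classifier is already optimal for this environment.
    scale = torch.ones(1, requires_grad=True)
    risk = F.cross_entropy(logits * scale, labels)
    grad = torch.autograd.grad(risk, [scale], create_graph=True)[0]
    return (grad ** 2).sum()

# Training objective over environments e (lam trades accuracy for invariance):
#   loss = sum_e [ risk_e + lam * irmv1_penalty(logits_e, labels_e) ]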
“…History-restricted algorithms are a natural approach to online learning over non-stationary time series, which is a common problem in practical industry settings [Huyen, 2022] and which has been studied for many years [Sugiyama and Kawanabe, 2012, Wiles et al., 2021, Wu, 2021, Rabanser et al., 2019]. In particular, one can view history-restricted online learners as adapting to non-stationary structure in the data; thus, one may not need to go through the whole process of detecting a distribution shift and then deciding to re-train. Ideally, the learning algorithm is adaptive and automatically takes such eventualities into account.…”
Section: A.3 Shifting Data Distributions
Mentioning, confidence: 99%
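The "history-restricted" idea in this excerpt can be made concrete in a few lines. The sketch below is a hypothetical example, not code from any of the cited works: a predictor whose state depends only on the most recent `window` observations, so stale pre-shift samples age out automatically instead of requiring explicit shift detection and retraining.

from collections import deque

class WindowedMeanPredictor:
    # History-restricted online learner: state depends only on the last
    # `window` targets, so the model tracks non-stationary data by design.
    def __init__(self, window: int = 100):
        self.history = deque(maxlen=window)  # old samples fall out automatically

    def predict(self) -> float:
        return sum(self.history) / len(self.history) if self.history else 0.0

    def update(self, y: float) -> None:
        self.history.append(y)

model = WindowedMeanPredictor(window=50)
for t in range(1000):
    y = 0.0 if t < 500 else 5.0      # abrupt distribution shift at t = 500
    prediction = model.predict()     # uses only the last 50 observations
    model.update(y)                  # within 50 steps of the shift, the
                                     # window holds only post-shift data

Restricting history trades some statistical efficiency on stationary data for automatic adaptation after a shift, which is exactly the trade-off the excerpt alludes to.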