2021
DOI: 10.48550/arxiv.2106.03721
Preprint

OoD-Bench: Quantifying and Understanding Two Dimensions of Out-of-Distribution Generalization

Abstract: Deep learning has achieved tremendous success with independent and identically distributed (i.i.d.) data. However, the performance of neural networks often degenerates drastically when encountering out-of-distribution (OoD) data, i.e., training and test data are sampled from different distributions. While a plethora of algorithms has been proposed to deal with OoD generalization, our understanding of the data used to train and evaluate these algorithms remains stagnant. In this work, we position existing data…

Cited by 1 publication (2 citation statements) · References 42 publications
“…One issue for deep-learning models in general is that they are susceptible to noise in the dataset [9, 10], which leads to decreased model accuracy and poor prediction results. Likewise, concept drift is an ongoing challenge in deep learning [7, 8] as deep learning models typically do not perform online learning and must be frequently adjusted to maintain performance on evolving data. Although anomalies typically do not imply a shift in the underlying data, an anomaly detection model that neglects concept drift will eventually begin detecting false anomalies.…”
Section: Related Work (citation type: mentioning, confidence: 99%)
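
The last sentence of this citation statement is easy to demonstrate concretely. The sketch below is an illustration of the general point, not code from the cited works or from OoD-Bench; the data, thresholds, and window sizes are all made up. It compares a z-score anomaly detector with frozen statistics against one that re-estimates its statistics over a sliding window, on a synthetic stream whose normal regime slowly drifts upward:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stream: the mean of the "normal" regime drifts upward over
# time (concept drift), and a handful of point anomalies are injected.
n = 2000
drift = np.linspace(0.0, 4.0, n)
stream = rng.normal(loc=drift, scale=1.0)
anomaly_idx = rng.choice(n, size=10, replace=False)
stream[anomaly_idx] += 8.0  # true anomalies

def static_zscore_flags(x, train_frac=0.1, z=4.0):
    """Fit mean/std once on an initial window and never update them."""
    train = x[: int(len(x) * train_frac)]
    mu, sigma = train.mean(), train.std()
    return np.abs(x - mu) / sigma > z

def rolling_zscore_flags(x, window=200, z=4.0):
    """Re-estimate mean/std from a sliding window so the detector tracks drift."""
    flags = np.zeros(len(x), dtype=bool)
    for i in range(window, len(x)):
        ref = x[i - window : i]
        flags[i] = abs(x[i] - ref.mean()) / ref.std() > z
    return flags

# The static detector's flag count is inflated by drift-induced false
# alarms; the rolling detector stays close to the 10 injected anomalies.
print("static detector flags: ", static_zscore_flags(stream).sum())
print("rolling detector flags:", rolling_zscore_flags(stream).sum())
```

The static detector is calibrated on the early, pre-drift portion of the stream, so once the drift carries ordinary points past its frozen threshold it starts flagging them as anomalies, which is exactly the failure mode the statement describes.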
“…Yet, despite major progress within the field of deep learning, there are still many tasks where humans outperform models, especially in anomaly detection where the anomalies are often undefined. In addition, deep learning approaches have challenges when dealing with online learning, noise and concept drift [7, 8, 9, 10].…”
Section: Introduction (citation type: mentioning, confidence: 99%)