2021 20th IEEE International Conference on Machine Learning and Applications (ICMLA)
DOI: 10.1109/icmla52953.2021.00050
An Effective Baseline for Robustness to Distributional Shift

Cited by 17 publications (17 citation statements)
References 20 publications
“…Following the benchmarks given in [16], we compare our method against three recent notable SOTA approaches: Ensemble of self-supervised Leave-Out Classifiers (ELOC) [30], Generalized ODIN (GODIN) [12], and Deep Abstaining Classifier (DAC) [28]. It should be noted that both DAC and ELOC involve data treated as OoD during training.…”
Section: Benchmark Results
confidence: 99%
“…Out-of-distribution (OoD) detection is the binary classification task of detecting inputs sampled from a distribution different from the training data [11]. Many existing methods rely on training or tuning with data labelled as OoD from other categories [28,32], adversaries [15,19], or a leave-out subset of the training samples [30]. However, it is intractable to cover the full space of OoD, particularly for high-dimensional data (e.g., an image) [20], so a supervised method that captures only limited facets of the OoD distribution hardly generalizes without a data selection bias [24].…”
Section: Introduction
confidence: 99%
“…Both these methods use only the inlier samples for training. From the OOD literature, we include both methods which train only on the inlier samples, namely, MaxLogit (Hendrickx et al, 2021), energy-based scorer (Hendrickx et al, 2021), ODIN (Hendrickx et al, 2021), and SIRC (Xia & Bouganis, 2022), and methods which additionally use unlabeled samples, such as the coupled CE loss (CCE) of Thulasidasan et al (2021), the de-coupled CE loss (DCE) of Bitterwolf et al (2022), and the outlier exposure (OE) loss of Hendrycks et al (2019). The closest among these to our approach is SIRC, which also seeks to treat inlier abstentions differently from OOD abstentions, but uses a heuristic post-hoc rule.…”
Section: Mixing Proportions
confidence: 99%
“…ImageNet-O [9] and providing analysis for the same. There is a plethora of OOD approaches [ [5], [15], [8], [13], [12], [28], [20], [11], [7]] and many more. We have included the predictive-score-based approach of Maximum Softmax Probability (MSP) [5], which is widely used across the OOD detection literature, as the baseline.…”
Section: Introduction
confidence: 99%
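The excerpts above repeatedly reference post-hoc confidence scores computed from a classifier's logits: MSP, MaxLogit, and the energy-based scorer. As a minimal illustration of how these scores are typically computed (a NumPy sketch; the function names are my own, not taken from the cited papers, and the convention "higher score = more in-distribution" is an assumption):

```python
import numpy as np

def softmax(logits):
    # Numerically stable softmax over the class axis.
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def msp_score(logits):
    # Maximum Softmax Probability: the largest predicted class probability.
    return softmax(logits).max(axis=-1)

def maxlogit_score(logits):
    # MaxLogit: the raw maximum logit, skipping the softmax normalization.
    return logits.max(axis=-1)

def energy_score(logits):
    # Energy-based score: logsumexp of the logits (negated energy), so that
    # higher values again indicate more in-distribution inputs.
    m = logits.max(axis=-1, keepdims=True)
    return (m + np.log(np.exp(logits - m).sum(axis=-1, keepdims=True))).squeeze(-1)

# A confident prediction should score higher than a flat (uncertain) one
# under all three scorers.
confident = np.array([[10.0, 0.0, 0.0]])
flat = np.array([[1.0, 1.0, 1.0]])
```

Inputs whose score falls below a validation-chosen threshold are flagged as OOD; the methods differ only in which scalar they threshold, which is why they require no OOD data at training time.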