2022
DOI: 10.1609/aaai.v36i7.20670

iDECODe: In-Distribution Equivariance for Conformal Out-of-Distribution Detection

Abstract: Machine learning methods such as deep neural networks (DNNs), despite their success across different domains, are known to often generate incorrect predictions with high confidence on inputs outside their training distribution. The deployment of DNNs in safety-critical domains requires detection of out-of-distribution (OOD) data so that DNNs can abstain from making predictions on such inputs. A number of methods have been developed recently for OOD detection, but there is still room for improvement. We propose the n…
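The abstract is cut off before the method details, but the framework it names, conformal OOD detection, can be illustrated generically via inductive conformal anomaly detection (ICAD). The sketch below is not the paper's iDECODe method: the `score_fn`, the `epsilon` threshold, and the toy data are assumptions made purely for illustration, and iDECODe's actual contribution (an equivariance-based non-conformity measure and an aggregation scheme) is not reproduced here. What the sketch does show is the generic ICAD recipe: score a held-out in-distribution calibration set, convert a test score into a conformal p-value, and flag OOD when the p-value is small, which for exchangeable in-distribution data bounds the false detection rate by epsilon.

```python
import numpy as np

def icad_p_value(cal_scores: np.ndarray, test_score: float) -> float:
    """Inductive conformal anomaly detection p-value.

    cal_scores: non-conformity scores of a held-out calibration set
    test_score: non-conformity score of the test input
    Returns the (smoothed) fraction of calibration scores at least
    as large as the test score.
    """
    n = len(cal_scores)
    return (np.sum(cal_scores >= test_score) + 1) / (n + 1)

def is_ood(cal_scores: np.ndarray, test_score: float, epsilon: float = 0.05) -> bool:
    """Flag OOD when the conformal p-value falls below epsilon.
    For exchangeable in-distribution data the false detection
    rate is bounded by epsilon."""
    return icad_p_value(cal_scores, test_score) < epsilon

# Usage with a toy (hypothetical) score: distance to the calibration mean.
rng = np.random.default_rng(0)
cal = rng.normal(0.0, 1.0, size=(500, 8))       # in-distribution calibration data
mu = cal.mean(axis=0)
score_fn = lambda x: np.linalg.norm(x - mu)     # stand-in non-conformity measure
cal_scores = np.array([score_fn(x) for x in cal])

x_in = rng.normal(0.0, 1.0, size=8)             # in-distribution test point
x_out = rng.normal(5.0, 1.0, size=8)            # shifted (OOD) test point
print(is_ood(cal_scores, score_fn(x_in)))       # typically False
print(is_ood(cal_scores, score_fn(x_out)))      # typically True
```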

Cited by 11 publications (5 citation statements). References 32 publications.

“…We show that SDEs alleviate the saturation effect faced by attribution methods and empirically demonstrate this. In future efforts, one can explore how such neural SDEs can lead to more robust confidence metrics (Jha et al. 2019) and enhance out-of-distribution detection algorithms (Kaur et al. 2022).…”
Section: Discussion
confidence: 99%
“…Split conformal prediction has been extended to provide conditional probabilistic guarantees (Vovk 2012), to handle distribution shifts (Tibshirani et al. 2019; Fannjiang et al. 2022), and to allow for quantile regression (Romano, Patterson, and Candes 2019). Further, split conformal prediction has been used to construct probably approximately correct prediction sets for machine learning models (Park et al. 2020; Angelopoulos et al. 2022), to perform out-of-distribution detection (Kaur et al. 2022, 2023), to guarantee safety in autonomous systems (Luo et al. 2022), and to quantify uncertainty for F1/10 car motion predictions (Tumu et al. 2023). Additionally, in (Stutz et al. 2022) the authors encode the width of the generated prediction sets directly into the loss function of a neural network during training.…”
Section: Related Work
confidence: 99%
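Since the snippet above surveys split conformal prediction, a minimal sketch of the basic split conformal recipe for regression may help fix ideas. Everything here is generic and assumed rather than taken from any of the cited papers: the absolute-residual score, the `model` interface, and the toy linear data are illustrative choices.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

def split_conformal_interval(model, X_cal, y_cal, x_test, alpha=0.1):
    """Split conformal prediction interval for regression.

    model: any fitted predictor with .predict()
    (X_cal, y_cal): held-out calibration data not used for fitting
    Returns an interval covering y_test with probability >= 1 - alpha
    under exchangeability of calibration and test points.
    """
    residuals = np.abs(y_cal - model.predict(X_cal))   # non-conformity scores
    n = len(residuals)
    # Finite-sample-corrected quantile level: ceil((n+1)(1-alpha)) / n.
    level = min(1.0, np.ceil((n + 1) * (1 - alpha)) / n)
    q = np.quantile(residuals, level, method="higher")
    pred = model.predict(np.atleast_2d(x_test))[0]
    return pred - q, pred + q

# Usage on toy linear data: fit on one half, calibrate on the other.
rng = np.random.default_rng(1)
X = rng.normal(size=(400, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + rng.normal(0.0, 0.3, size=400)
model = LinearRegression().fit(X[:200], y[:200])
lo, hi = split_conformal_interval(model, X[200:], y[200:], X[0], alpha=0.1)
```

The design point this illustrates is why the method is called "split": the data used to fit the model must be disjoint from the calibration data, otherwise the residuals are not exchangeable with the test score and the coverage guarantee breaks.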
“…Although originally based on the premise of exchangeable (e.g., independently and identically distributed) training and test data, the framework has since been generalized to handle various forms of distribution shift, including covariate shift (4, 7), label shift (8), arbitrary distribution shifts in an online setting (6), and test distributions that are nearby the training distribution (5). Conformal approaches have also been used to detect distribution shift (17–23).…”
Section: Uncertainty Quantification Under Feedback Loops
confidence: 99%
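The covariate-shift extension mentioned above (refs 4, 7) replaces the uniform calibration quantile of split conformal with a weighted quantile. A hedged sketch of that quantile step follows, assuming the likelihood ratio w(x) = dP_test/dP_train(x) is known or estimated; the function and variable names are hypothetical, not taken from the cited works.

```python
import numpy as np

def weighted_conformal_quantile(cal_scores, cal_weights, test_weight, alpha=0.1):
    """Quantile step of weighted conformal prediction under covariate shift.

    cal_scores:  non-conformity scores on calibration points
    cal_weights: likelihood ratios w(x_i) = dP_test/dP_train(x_i)
    test_weight: w(x_test)
    Returns the level-(1-alpha) quantile of the weighted empirical
    score distribution, with a point mass at +inf for the test point.
    """
    order = np.argsort(cal_scores)
    scores = np.asarray(cal_scores, dtype=float)[order]
    w = np.asarray(cal_weights, dtype=float)[order]
    total = w.sum() + test_weight
    probs = np.cumsum(w) / total              # cumulative weighted mass per score
    idx = np.searchsorted(probs, 1 - alpha)   # first score reaching mass 1 - alpha
    if idx >= len(scores):
        return np.inf                         # test-point mass needed: set is unbounded
    return scores[idx]

# Tiny usage example (hypothetical numbers): returns 2.0 here.
cal_scores = np.array([0.3, 0.8, 1.1, 2.0])
cal_weights = np.array([1.0, 0.5, 2.0, 1.0])
q = weighted_conformal_quantile(cal_scores, cal_weights, test_weight=1.0, alpha=0.25)
```

The resulting prediction set is {y : score(x_test, y) <= q}; with an absolute-residual score this is the interval prediction ± q, and it degrades gracefully to an unbounded set when the test point itself carries too much of the weighted mass.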