Proceedings of the 27th ACM SIGKDD Conference on Knowledge Discovery & Data Mining, 2021
DOI: 10.1145/3447548.3467382

Uncertainty-Aware Reliable Text Classification

Abstract: Deep neural networks have significantly contributed to the success in predictive accuracy for classification tasks. However, they tend to make over-confident predictions in real-world settings, where domain shifting and out-of-distribution (OOD) examples exist. Most research on uncertainty estimation focuses on computer vision because it provides visual validation of uncertainty quality; few approaches have been presented in the natural language processing domain. Unlike Bayesian methods that indirectly infer uncer…
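The abstract is truncated, but the contrast it sets up (Bayesian methods infer uncertainty indirectly, while evidential approaches predict it directly from per-class evidence) can be illustrated with a short sketch. The snippet below is a minimal, hypothetical example of Dirichlet-based evidential uncertainty in the spirit of evidential deep learning; the softplus activation and the `evidential_uncertainty` helper are illustrative assumptions, not the paper's actual method.

```python
import torch
import torch.nn.functional as F

def evidential_uncertainty(evidence_logits: torch.Tensor):
    """Dirichlet-based uncertainty from per-class evidence (illustrative sketch).

    evidence_logits: (batch, num_classes) raw outputs of a classifier head.
    Returns expected class probabilities and a scalar uncertainty in (0, 1].
    """
    # Map logits to non-negative evidence; softplus is one common (assumed) choice.
    evidence = F.softplus(evidence_logits)
    alpha = evidence + 1.0                       # Dirichlet parameters
    strength = alpha.sum(dim=-1, keepdim=True)   # total evidence S
    probs = alpha / strength                     # expected class probabilities
    num_classes = evidence_logits.shape[-1]
    uncertainty = num_classes / strength.squeeze(-1)  # u = K / S
    return probs, uncertainty

logits = torch.tensor([[5.0, 0.1, 0.2], [0.3, 0.2, 0.4]])
probs, u = evidential_uncertainty(logits)
print(probs, u)  # the near-uniform second example gets higher uncertainty
```

Unlike Monte Carlo sampling, this produces the uncertainty estimate in a single forward pass, which is what makes the direct, evidence-based formulation attractive.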

Cited by 13 publications (15 citation statements, all classified as "mentioning"); references 47 publications.

Citation statements, ordered by relevance:
“…Due to limited time and computational resources, we did not conduct more experiments to explore various hyperparameters that could affect fine-tuning results, such as vocabulary size and pretraining epochs, to name a few. Future work should analyze how to optimize ConfliBERT, expand ConfliBERT to multi-lingual settings, and apply ConfliBERT to more challenging tasks such as understanding, inference, question answering, uncertainty quantification (Hu and Khan, 2021), and few-/zero-shot tasks to speed up the study of NLP applications for the political science community.…”
Section: Discussion (mentioning; confidence: 99%)
“…Benefiting from self-training, previous methods have achieved remarkable success on a series of instance-level classification tasks, such as image classification (Zhou et al. 2021; Wang et al. 2022a; Liu et al. 2021) and text classification (Meng et al. 2020; Mukherjee and Awadallah 2020; Yu et al. 2021; Hu and Khan 2021; Tsai, Lin, and Fu 2022; Kim, Son, and Han 2022). In contrast to instance-level classification, we observe that there are two challenges in applying standard self-training to NSL.…”
Section: Introduction (mentioning; confidence: 94%)
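As background for the self-training family this statement groups together, the sketch below shows a minimal pseudo-labeling step of the kind these methods build on; the `model`/`unlabeled_loader` interfaces and the 0.9 confidence threshold are illustrative assumptions, not any specific cited method.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def pseudo_label(model, unlabeled_loader, threshold=0.9, device="cpu"):
    """Collect confidently predicted unlabeled examples for self-training.

    Assumes `model(x)` returns class logits and the loader yields input
    tensors; keeps examples whose max softmax probability exceeds
    `threshold` and returns (inputs, pseudo_labels) for the next round.
    """
    model.eval()
    kept_x, kept_y = [], []
    for x in unlabeled_loader:
        x = x.to(device)
        probs = F.softmax(model(x), dim=-1)
        conf, preds = probs.max(dim=-1)
        mask = conf > threshold          # only trust confident predictions
        if mask.any():
            kept_x.append(x[mask])
            kept_y.append(preds[mask])
    if not kept_x:
        return None
    return torch.cat(kept_x), torch.cat(kept_y)
```

The pseudo-labeled batch is then mixed into the labeled training set and the model is retrained, which is exactly the instance-level loop the quoted passage says does not transfer directly to NSL.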
“…Confidence [88, 112, 114, 115]; Entropy [76, 113]; TS [113]; KL [103]; Mahalanobis [88, 89]; Evidence [117]; MCD [116]; Ensemble [116]; Gaussian [113].…”
Section: OOD Detection Class (mentioning; confidence: 99%)
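To make the grouping concrete, the sketch below shows how two of the simplest scores in this list, maximum softmax confidence and predictive entropy, are typically computed from a classifier's probabilities; it is an illustrative assumption, not code from the cited survey.

```python
import numpy as np

def max_softmax_confidence(probs: np.ndarray) -> np.ndarray:
    """Higher confidence -> more likely in-distribution."""
    return probs.max(axis=-1)

def predictive_entropy(probs: np.ndarray) -> np.ndarray:
    """Higher entropy -> more likely out-of-distribution."""
    eps = 1e-12  # guard against log(0)
    return -(probs * np.log(probs + eps)).sum(axis=-1)

probs = np.array([[0.97, 0.02, 0.01],   # confident, likely in-distribution
                  [0.34, 0.33, 0.33]])  # uncertain, possibly OOD
print(max_softmax_confidence(probs))  # [0.97 0.34]
print(predictive_entropy(probs))      # low, then near log(3) ~ 1.10
```

The other entries in the list (Mahalanobis distance, evidence, MC dropout, ensembles) replace these softmax-derived scores with feature-space or distributional estimates, but they are thresholded and evaluated the same way.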
“…Accuracy [112, 114, 116]; [76, 103, 113]; F1 [89, 115, 116]; ECE [114]; AUROC [88, 113, 117]; AUPR [89, 113, 117]; FAR95 [88, 112, 113]; AUCRCC [89]; FPR90 [117].…”
Section: OOD Detection Class (mentioning; confidence: 99%)
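For reference, the threshold-free detection metrics in this list (AUROC, AUPR, and FPR at a fixed TPR, as in FAR95/FPR90) can all be computed from a single scalar OOD score, as in the sketch below; the convention that higher scores mean "more likely OOD" is an assumption for illustration.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, average_precision_score, roc_curve

def ood_detection_metrics(scores: np.ndarray, is_ood: np.ndarray):
    """AUROC, AUPR, and FPR at 95% TPR, treating OOD as the positive class.

    scores: higher means "more likely OOD" (e.g., predictive entropy).
    is_ood: 1 for out-of-distribution examples, 0 for in-distribution.
    """
    auroc = roc_auc_score(is_ood, scores)
    aupr = average_precision_score(is_ood, scores)
    fpr, tpr, _ = roc_curve(is_ood, scores)
    fpr95 = fpr[np.searchsorted(tpr, 0.95)]  # FPR at the first TPR >= 0.95
    return {"AUROC": auroc, "AUPR": aupr, "FPR95": fpr95}

scores = np.array([0.1, 0.2, 0.8, 0.9, 0.3, 0.7])
labels = np.array([0,   0,   1,   1,   0,   1  ])
print(ood_detection_metrics(scores, labels))
```

ECE, accuracy, and F1 from the same list instead evaluate calibration and classification quality on in-distribution data, so surveys typically report both families side by side.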