Findings of the Association for Computational Linguistics: EMNLP 2022
DOI: 10.18653/v1/2022.findings-emnlp.198

Exploring Predictive Uncertainty and Calibration in NLP: A Study on the Impact of Method & Data Scarcity

Abstract: We investigate the problem of determining the predictive confidence (or, conversely, uncertainty) of a neural classifier through the lens of low-resource languages. By training models on sub-sampled datasets in three different languages, we assess the quality of estimates from a wide array of approaches and their dependence on the amount of available data. We find that while approaches based on pre-trained models and ensembles achieve the best results overall, the quality of uncertainty estimates can surprisingly…
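The abstract compares uncertainty estimation approaches such as ensembles and reports on the quality (calibration) of their confidence estimates. As a minimal sketch of how such quantities are commonly computed, not the paper's own evaluation code, the snippet below averages the softmax outputs of hypothetical ensemble members, derives predictive entropy as an uncertainty score, and computes the standard Expected Calibration Error; the array shapes and the 10-bin setting are assumptions.

```python
# Minimal sketch (not the paper's code): ensemble predictive uncertainty
# and Expected Calibration Error (ECE) for a K-way classifier.
import numpy as np

def ensemble_predictive_distribution(member_probs: np.ndarray) -> np.ndarray:
    """Average the softmax outputs of M ensemble members.
    member_probs: array of shape (M, N, K) with per-member class probabilities."""
    return member_probs.mean(axis=0)  # shape (N, K)

def predictive_entropy(probs: np.ndarray) -> np.ndarray:
    """Total predictive uncertainty per example, in nats."""
    return -np.sum(probs * np.log(probs + 1e-12), axis=-1)

def expected_calibration_error(probs: np.ndarray, labels: np.ndarray, n_bins: int = 10) -> float:
    """Standard ECE: bin predictions by confidence, then average |accuracy - confidence|
    over bins, weighted by the fraction of examples falling in each bin."""
    confidences = probs.max(axis=-1)
    predictions = probs.argmax(axis=-1)
    accuracies = (predictions == labels).astype(float)
    bin_edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(bin_edges[:-1], bin_edges[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            ece += mask.mean() * abs(accuracies[mask].mean() - confidences[mask].mean())
    return ece
```

In this setup, lower predictive entropy and lower ECE would indicate more confident and better-calibrated predictions, respectively; the study's comparison across sub-sampled datasets would then amount to tracking such metrics as the training set shrinks.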

Cited by 1 publication (1 citation statement)
References: 55 publications
“…The methods can be used specifically for NLP problems and provide computationally cheap and reliable UE as in [Ulmer et al., 2022] for transformer neural networks, which are widely used to work with texts, for example, in [Xiao and Wang, 2019]. These methods measure uncertainty through the interpretability of the model or selective predictions.…”
Section: Attention Layer Features
Mentioning confidence: 99%
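The citation statement above mentions selective prediction as one way to act on uncertainty estimates. The snippet below is a minimal sketch of that general idea, not code from either cited paper: the classifier abstains whenever its softmax confidence falls below a hypothetical threshold tau, and we report coverage and the error rate on the answered examples.

```python
# Minimal sketch of selective prediction (general idea only, assumed interface):
# abstain when confidence < tau, then report coverage and selective risk.
import numpy as np

def selective_prediction(probs: np.ndarray, labels: np.ndarray, tau: float = 0.9):
    """probs: (N, K) class probabilities; labels: (N,) gold labels."""
    confidences = probs.max(axis=-1)
    predictions = probs.argmax(axis=-1)
    accepted = confidences >= tau                  # examples the model answers
    coverage = accepted.mean()                     # fraction of examples answered
    if accepted.any():
        selective_risk = (predictions[accepted] != labels[accepted]).mean()
    else:
        selective_risk = 0.0                       # nothing answered, so no errors
    return coverage, selective_risk
```

Sweeping tau trades coverage against selective risk; well-calibrated uncertainty estimates are what make such a threshold meaningful in practice.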