Findings of the Association for Computational Linguistics: ACL 2022
DOI: 10.18653/v1/2022.findings-acl.121
Efficient, Uncertainty-based Moderation of Neural Networks Text Classifiers

Abstract: To maximize the accuracy and increase the overall acceptance of text classifiers, we propose a framework for the efficient, in-operation moderation of classifiers' output. Our framework focuses on use cases in which the F1-scores of modern Neural Network classifiers (ca. 90%) are still inapplicable in practice. We suggest a semi-automated approach that uses prediction uncertainties to pass unconfident, probably incorrect classifications to human moderators. To minimize the workload, we limit the human moderated da…
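The core idea in the abstract — route low-confidence predictions to human moderators and auto-accept the rest — can be sketched with a simple confidence threshold over the classifier's output probabilities. This is a minimal illustration, not the paper's actual method: the function name, the max-softmax confidence measure, and the 0.9 threshold are all assumptions for demonstration.

```python
def route_predictions(probs, threshold=0.9):
    """Split predictions into auto-accepted and human-review queues.

    probs: list of per-class probability lists (one list per sample).
    threshold: illustrative confidence cutoff (an assumption, not a
    value taken from the paper).
    Returns (auto_idx, review_idx): index lists into probs.
    """
    auto_idx, review_idx = [], []
    for i, p in enumerate(probs):
        # Use the maximum class probability as a simple confidence score;
        # uncertain samples go to the human moderation queue.
        (auto_idx if max(p) >= threshold else review_idx).append(i)
    return auto_idx, review_idx

# Example: three predictions; the second one is uncertain.
probs = [
    [0.97, 0.03],
    [0.55, 0.45],
    [0.05, 0.95],
]
auto_idx, review_idx = route_predictions(probs, threshold=0.9)
# auto_idx -> [0, 2], review_idx -> [1]
```

Raising the threshold sends more items to moderators (higher accuracy of auto-accepted output, more human workload); lowering it does the opposite — the trade-off the framework is designed to manage.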


Cited by 7 publications (2 citation statements)
References 21 publications (25 reference statements)
“…Therefore, solutions based on DistilBERT are inspired. Andersen and Maalej [26] proposed a framework for the efficient, in-operation moderation of classifier output to maximize the accuracy and increase the overall acceptance of text classifiers. Chang et al. [27] developed a universal financial fraud awareness model to avoid these cases escalating to the level of crime.…”
Section: Related Work
confidence: 99%
“…On the other hand, the teacher model inevitably generates some noisy labels that cause error accumulation (Wang et al. 2021a). Some sample selection strategies (e.g., model confidence, uncertainty estimation) and consistency regularization mitigate the effect of noisy labels and alleviate the problem of confirmation bias (Do, Tran, and Venkatesh 2021; Cao et al. 2021; Rizve et al. 2021; Wang et al. 2021b; Andersen and Maalej 2022). However, it is unclear how these methods can be applied to token-level classification.…”
Section: Introduction
confidence: 99%