Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics 2020
DOI: 10.18653/v1/2020.acl-main.390

Empowering Active Learning to Jointly Optimize System and User Demands

Abstract: Existing approaches to active learning maximize the system performance by sampling unlabeled instances for annotation that yield the most efficient training. However, when active learning is integrated with an end-user application, this can lead to frustration for participating users, as they spend time labeling instances that they would not otherwise be interested in reading. In this paper, we propose a new active learning approach that jointly optimizes the seemingly counteracting objectives of the active le…
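The abstract's central idea, an acquisition function that balances model informativeness against user benefit, can be made concrete with a small sketch. The code below is a hypothetical illustration, not the authors' actual method: the entropy-based uncertainty, the `user_utility` scores, and the trade-off weight `alpha` are all assumptions.

```python
import numpy as np

def entropy(probs: np.ndarray) -> np.ndarray:
    """Predictive entropy per instance; a standard uncertainty measure."""
    return -np.sum(probs * np.log(probs + 1e-12), axis=1)

def joint_acquisition(probs: np.ndarray,
                      user_utility: np.ndarray,
                      alpha: float = 0.5) -> int:
    """Pick the unlabeled instance that balances two objectives:
    - system demand: model uncertainty (informative for training)
    - user demand: estimated benefit/interest for the annotator
    Both scores are min-max normalized so neither dominates by scale alone.
    """
    def norm(x: np.ndarray) -> np.ndarray:
        rng = x.max() - x.min()
        return (x - x.min()) / rng if rng > 0 else np.zeros_like(x)

    score = alpha * norm(entropy(probs)) + (1 - alpha) * norm(user_utility)
    return int(np.argmax(score))

# Toy usage: 4 unlabeled instances, 3 classes.
probs = np.array([[0.90, 0.05, 0.05],   # confident -> low training value
                  [0.40, 0.30, 0.30],   # uncertain -> high training value
                  [0.50, 0.25, 0.25],
                  [0.34, 0.33, 0.33]])
user_utility = np.array([0.9, 0.1, 0.8, 0.2])  # e.g., predicted reading interest
print(joint_acquisition(probs, user_utility, alpha=0.5))
```

With `alpha = 1.0` this reduces to classical uncertainty sampling, and with `alpha = 0.0` it ignores the model entirely; the weight sets where a deployment sits between the two demands.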

Cited by 6 publications (7 citation statements). References: 24 publications.
“…Adaptive estimators. While simple heuristics or annotator-unaware models allow us to pre-compute annotation curricula, they do not consider any user-specific aspect that may influence the difficulty estimation (Lee, Meyer, and Gurevych 2020). Consequently, the resulting curriculum may not provide the optimal ordering for a specific annotator.…”
Section: Masked Language Modeling Loss (MLM)
confidence: 99%
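The contrast this statement draws between precomputed and annotator-adaptive difficulty estimators can be sketched as follows. This is a hypothetical illustration only, not code from the cited papers: the linear difficulty model, the labeling-time signal, and the learning rate `lr` are assumptions.

```python
import numpy as np

class AdaptiveDifficultyEstimator:
    """Annotator-aware difficulty model, in contrast to a precomputed one.

    Difficulty is a linear function of instance features (e.g. sentence
    length, word rarity). The weights start from an annotator-unaware
    prior and are nudged toward each observed labeling time, so the
    resulting curriculum adapts to a specific annotator.
    """

    def __init__(self, prior_weights: np.ndarray, lr: float = 0.05):
        self.w = prior_weights.astype(float)  # annotator-unaware prior
        self.lr = lr                          # how fast user evidence wins

    def difficulty(self, features: np.ndarray) -> float:
        return float(features @ self.w)

    def observe(self, features: np.ndarray, normalized_time: float) -> None:
        # One SGD step on the squared error between the predicted difficulty
        # and the difficulty implied by this annotator's labeling time.
        error = self.difficulty(features) - normalized_time
        self.w -= self.lr * error * features

    def next_easiest(self, pool: list) -> int:
        # Curriculum step: index of the easiest remaining instance
        # for this particular annotator.
        return int(min(range(len(pool)), key=lambda i: self.difficulty(pool[i])))
```

Because the weights drift toward each annotator's observed times, two annotators starting from the same prior end up with different orderings, which is exactly the user-specific adaptation the quoted passage calls for.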
“…Instead of switching between different strategies, adaptive estimators may provide another way to consider different objectives when selecting instances for annotation. As shown by Lee, Meyer, and Gurevych (2020) for language learning exercises, it may be possible to sample instances that jointly suffice seemingly counteracting objectives such as reducing the overall annotation time while being preferable for model training. We will investigate such strategies in future work.…”
Section: Limitations and Future Work
confidence: 99%
“…However, the interaction with a not yet fully trained model can get monotonous or can lead to frustration on the part of the users if they do not benefit from the interaction themselves (Lee et al, 2020). Wang et al (2016) present an interactive language learning setting, called SHRDLURN, in which a model learns a language by interacting with a player in a game environment, hence making the interactive learning setting more attractive and fun for users.…”
Section: Introduction
confidence: 99%