Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence 2017
DOI: 10.24963/ijcai.2017/261
Cost-Effective Active Learning from Diverse Labelers

Abstract: In traditional active learning, there is only one labeler that always returns the ground truth of queried labels. However, in many applications, multiple labelers are available to offer diverse qualities of labeling with different costs. In this paper, we perform active selection on both instances and labelers, aiming to improve the classification model most with the lowest cost. While the cost of a labeler is proportional to its overall labeling quality, we also observe that different labelers usually have di…

Cited by 56 publications (58 citation statements).
References 17 publications (8 reference statements).
“…Geva, Saar-Tsechansky, and Lustiger (2019) consider acquiring labels from labelers with varying accuracies and costs based on the estimated effect on generalization error, but do not consider selecting instances for labeling. The most closely related work is by Huang et al. (2017): the proposed method estimates workers' labeling accuracies on a small set of ground-truth data, and then estimates the value of acquiring the label for a given instance from a given worker as the weighted labeling accuracy divided by the worker's cost.…”
Section: Related Work
confidence: 99%
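The value estimate described in the excerpt above lends itself to a short illustration. The sketch below is not the authors' implementation: the function names (estimate_accuracy, acquisition_value), the use of a single per-worker accuracy, and the toy numbers are assumptions made purely for illustration.

import numpy as np

def estimate_accuracy(worker_labels, ground_truth):
    # Estimate a worker's labeling accuracy on a small gold-standard set.
    worker_labels = np.asarray(worker_labels)
    ground_truth = np.asarray(ground_truth)
    return float(np.mean(worker_labels == ground_truth))

def acquisition_value(instance_weight, worker_accuracy, worker_cost):
    # Value of asking this worker to label this instance: the instance's
    # importance weighted by the worker's estimated accuracy, divided by
    # the worker's cost (higher is better).
    return instance_weight * worker_accuracy / worker_cost

# Toy usage: two workers scored on a five-item gold set, one candidate instance.
gold = [1, 0, 1, 1, 0]
workers = {
    "cheap_noisy":   {"labels": [1, 1, 1, 0, 0], "cost": 1.0},
    "pricey_expert": {"labels": [1, 0, 1, 1, 0], "cost": 3.0},
}
instance_weight = 0.8  # e.g. the model's uncertainty on the candidate instance
for name, w in workers.items():
    acc = estimate_accuracy(w["labels"], gold)
    print(name, round(acquisition_value(instance_weight, acc, w["cost"]), 3))

With these made-up numbers the cheaper, noisier worker wins the trade-off (0.48 versus roughly 0.267), which is the kind of cost-quality balance such a criterion is meant to capture.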
“…Below, we first discuss briefly how we quantify instance usefulness. Because the closest work to the contributions we present here is by Huang et al. (2017), we then outline the main elements of the CEAL algorithm (Huang et al. 2017) and describe how our algorithm builds on these contributions.…”
Section: Algorithm
confidence: 99%
“…In traditional single-label imbalanced data processing, related studies fall into three categories: algorithm-level methods, cost-sensitive learning methods [6][7][8], and data-level methods. Algorithm-level methods improve the classification algorithm itself to adapt to imbalanced data sets.…”
Section: Related Work
confidence: 99%
“…Lin, Mausam, and Weld (2016) tackled the problem of re-active learning, a generalization of active learning that explores the tradeoff between decreasing the noise of the training set via relabeling and increasing the size of the noisy training set by labeling new instances, and introduced two re-active learning algorithms: an extension of uncertainty sampling and a class of impact sampling algorithms. Huang et al. (2017) observed that labelers with a low overall quality can still provide accurate labels on some specific instances. Based on this observation, they proposed an active selection criterion that evaluates the cost-effectiveness of instance-labeler pairs, ensuring that the selected instance is helpful for improving the classification model while the selected labeler can provide an accurate label for that instance at a relatively low cost.…”
Section: Active Learning
confidence: 99%
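As a rough illustration of a cost-effectiveness criterion over instance-labeler pairs, the sketch below scores each pair by combining the model's uncertainty on the instance with an instance-specific estimate of the labeler's accuracy, normalized by the labeler's cost. The scoring function, the dictionary layout, and the toy values are assumptions for illustration, not the exact criterion of Huang et al. (2017).

from itertools import product

def pair_score(uncertainty, est_accuracy, cost):
    # Prefer instances the model is unsure about and labelers expected
    # to be accurate on that particular instance, per unit cost.
    return uncertainty * est_accuracy / cost

def select_pair(uncertainties, accuracies, costs):
    # uncertainties: instance -> model uncertainty in [0, 1]
    # accuracies:    (instance, labeler) -> estimated label accuracy
    # costs:         labeler -> cost per query
    # Returns the (instance, labeler) pair with the highest score.
    return max(
        product(uncertainties, costs),
        key=lambda pair: pair_score(uncertainties[pair[0]], accuracies[pair], costs[pair[1]]),
    )

# Toy usage: labeler "b" is cheaper and happens to be reliable on x2.
uncertainties = {"x1": 0.4, "x2": 0.9}
costs = {"a": 3.0, "b": 1.0}
accuracies = {("x1", "a"): 0.95, ("x1", "b"): 0.60,
              ("x2", "a"): 0.95, ("x2", "b"): 0.85}
print(select_pair(uncertainties, accuracies, costs))  # -> ('x2', 'b')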