Crowdsourcing is an economical and efficient strategy for collecting data annotations through an online platform. Crowd workers with different expertise are paid for their service, and the task requester usually has a limited budget. How to collect reliable annotations for multi-label data and how to compute the consensus within budget is an interesting and challenging, but rarely studied, problem. In this paper, we propose a novel approach to accomplish Active Multi-label Crowd Consensus (AMCC). AMCC accounts for the commonality and individuality of workers, and assumes that workers can be organized into different groups, each of which includes a set of workers who share similar annotation behavior and label correlations. To achieve an effective multi-label consensus, AMCC models workers' annotations via a linear combination of commonality and individuality, and reduces the impact of unreliable workers by assigning smaller weights to their groups. To collect reliable annotations at reduced cost, AMCC introduces an active crowdsourcing learning strategy that selects sample-label-worker triplets, in which the selected sample and label are the most informative for the consensus model, and the selected worker can reliably annotate the sample at low cost. Our experimental results on multi-label datasets demonstrate the advantages of AMCC over state-of-the-art solutions in computing crowd consensus and in reducing the budget by choosing cost-effective triplets.
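A minimal sketch of this idea in Python with NumPy: worker annotations are scored as a reliability-weighted linear mix of a group-level commonality term and a worker-level individuality term, and a triplet is picked greedily by consensus uncertainty and a reliability-to-cost ratio. All names, dimensions, and the specific selection criterion below are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions (all values here are illustrative assumptions).
n_samples, n_labels, n_workers, n_groups = 30, 4, 6, 2

group_of = rng.integers(n_groups, size=n_workers)        # worker -> group
group_w = rng.uniform(0.4, 1.0, size=n_groups)           # group reliability weight
commonality = rng.normal(size=(n_groups, n_labels))      # shared per-group label model
individuality = rng.normal(scale=0.2, size=(n_workers, n_labels))
cost = rng.uniform(0.5, 2.0, size=n_workers)             # per-annotation cost
alpha = 0.8                                              # commonality/individuality mix

# Current consensus estimate: probability of each label for each sample.
consensus = rng.uniform(size=(n_samples, n_labels))

def worker_score(w):
    """Worker w's label scores: a reliability-weighted linear combination of
    the group's commonality and the worker's own individuality."""
    g = group_of[w]
    return group_w[g] * (alpha * commonality[g] + (1 - alpha) * individuality[w])

def pick_triplet():
    """Greedy active selection (assumed criterion): the (sample, label) pair
    whose consensus is most uncertain (closest to 0.5), annotated by the
    worker with the best reliability-to-cost ratio."""
    uncertainty = -np.abs(consensus - 0.5)               # higher = more informative
    s, l = np.unravel_index(np.argmax(uncertainty), consensus.shape)
    w = max(range(n_workers), key=lambda w: group_w[group_of[w]] / cost[w])
    return s, l, w

print("worker 0 label scores:", worker_score(0))
print("selected (sample, label, worker):", pick_triplet())
```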
Crowdsourcing is a relatively inexpensive and efficient mechanism for collecting data annotations from the open Internet. Crowd workers are paid for the annotations they provide, but the task requester usually has a limited budget, so it is desirable to assign each task to the right workers such that overall annotation quality is maximized while cost is reduced. In this paper, we propose a novel task assignment strategy (CrowdWT) that captures the complex interactions between tasks and workers and assigns tasks to workers accordingly. CrowdWT first develops a Worker Bias Model (WBM) to jointly model workers' biases, the ground truths of tasks, and the task features. WBM constructs a mapping between task features and worker annotations to dynamically assign each task to a group of workers who are more likely to give correct annotations for it. CrowdWT further introduces a Task Difficulty Model (TDM), which builds a kernel ridge regressor on task features to quantify the intrinsic difficulty of tasks and thus assign difficult tasks to more reliable workers. Finally, CrowdWT combines WBM and TDM into a unified model that dynamically assigns tasks to groups of workers and recalls more reliable, even expert, workers to annotate the difficult tasks. Our experimental results on two real-world datasets and two semi-synthetic datasets show that CrowdWT achieves high-quality answers within a limited budget and outperforms competitive methods.
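As a rough illustration of the TDM component, the following NumPy sketch fits a kernel ridge regressor from task features to a proxy difficulty signal (here, a made-up worker-disagreement rate) and ranks tasks by predicted difficulty. The feature dimensions, RBF kernel, regularization strength, and difficulty target are all assumptions for illustration, not the paper's specification.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative setup: task features X and a proxy difficulty signal y
# (assumed here to be the observed disagreement rate among workers).
n_tasks, n_feats = 40, 8
X = rng.normal(size=(n_tasks, n_feats))
y = rng.uniform(size=n_tasks)

def rbf_kernel(A, B, gamma=0.5):
    """Gaussian (RBF) kernel matrix between the rows of A and B."""
    sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

# Kernel ridge regression: solve (K + lam*I) beta = y, predict k(x)^T beta.
lam = 1e-2
K = rbf_kernel(X, X)
beta = np.linalg.solve(K + lam * np.eye(n_tasks), y)

def difficulty(x_new):
    """Predicted intrinsic difficulty of tasks from their features."""
    return rbf_kernel(np.atleast_2d(x_new), X) @ beta

# Route the hardest tasks to the most reliable workers.
hardest = np.argsort(-difficulty(X).ravel())[:5]
print("tasks routed to the most reliable workers:", hardest)
```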