2018 IEEE International Conference on Data Mining (ICDM)
DOI: 10.1109/icdm.2018.00067

Multi-label Answer Aggregation Based on Joint Matrix Factorization

Cited by 28 publications (25 citation statements: 0 supporting, 25 mentioning, 0 contrasting) | References 31 publications

“…To the best of our knowledge, none of the existing active crowdsourcing solutions [16], [17], [21]-[24] can jointly account for the impact of samples, labels, and workers in crowdsourcing. 4) Extensive results validate the advantages of our proposed AMCC approach over state-of-the-art solutions [19], [26]-[29] in effectively computing the multi-label crowd consensus and saving costs. The remainder of this paper is organized as follows.…”
Section: Introduction (mentioning)
confidence: 53%
“…To avoid this limitation, researchers have resorted to developing semi-supervised multi-label classifiers [23], [24], [25], [26], [27], in which limited labeled samples and abundant unlabeled samples are jointly used for training. Besides, since labeled data is tagged by human effort and may contain missing or noisy labels [28], [29], [30], [31], [32], [33], several approaches have been proposed to design multi-label classifiers under the weak-label setting [28], [29], [34], [35] or with noisy labels [36], [32], [37], [38].…”
Section: A. Multi-label Learning (mentioning)
confidence: 99%
“…Improving data quality. The most intuitive strategy to deal with the low quality of data annotation is to improve the quality of the data itself [12,5,13,14,16,17]. The Dawid-Skene (DS) model [12] is a standard probabilistic model for label inference from multiple annotations using Expectation-Maximization (EM).…”
Section: Related Work (mentioning)
confidence: 99%
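The Dawid-Skene model mentioned in the excerpt above admits a compact sketch. The following is a minimal, illustrative EM loop for the classic DS setup, not code from the cited papers: it assumes a dense integer matrix `answers[worker, task]` with -1 marking unanswered tasks, and the function and variable names are hypothetical.

```python
import numpy as np

def dawid_skene(answers, n_classes=2, n_iter=50):
    """Infer true labels from noisy worker answers via EM.

    answers: (n_workers, n_tasks) int array; -1 means "no answer".
    Returns a (n_tasks, n_classes) posterior over the true labels.
    """
    n_workers, n_tasks = answers.shape
    # Initialize posteriors with a soft majority vote per task.
    post = np.full((n_tasks, n_classes), 1.0 / n_classes)
    for j in range(n_tasks):
        votes = answers[:, j]
        votes = votes[votes >= 0]
        if votes.size:
            counts = np.bincount(votes, minlength=n_classes)
            post[j] = counts / counts.sum()
    for _ in range(n_iter):
        # M-step: class prior and per-worker confusion matrices,
        # conf[i, k, a] = P(worker i answers a | true class is k).
        prior = post.mean(axis=0)
        conf = np.full((n_workers, n_classes, n_classes), 1e-6)
        for i in range(n_workers):
            for j in range(n_tasks):
                a = answers[i, j]
                if a >= 0:
                    conf[i, :, a] += post[j]
        conf /= conf.sum(axis=2, keepdims=True)
        # E-step: posterior over true labels under current parameters.
        log_post = np.tile(np.log(prior), (n_tasks, 1))
        for i in range(n_workers):
            for j in range(n_tasks):
                a = answers[i, j]
                if a >= 0:
                    log_post[j] += np.log(conf[i, :, a])
        log_post -= log_post.max(axis=1, keepdims=True)
        post = np.exp(log_post)
        post /= post.sum(axis=1, keepdims=True)
    return post
```

Calling `dawid_skene(answers).argmax(axis=1)` then yields the hard consensus labels; the learned per-worker confusion matrices are what let DS outvote a biased majority, which plain majority voting cannot do.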
“…AWMV uses the frequency of positive labels in the multiple noisy label sets of each task to estimate a bias rate, and then assigns weights, derived from the bias rate, to negative and positive labels. These methods tackle the problem of identifying low-quality answers in different ways [12,5,14,16,13,17]. However, they all ignore the intrinsic features of tasks, which can improve the quality of aggregated labels.…”
Section: Related Work (mentioning)
confidence: 99%
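The AWMV description in this excerpt is concrete enough to sketch. The function below is an illustrative reconstruction of the bias-weighted vote for binary labels, assuming one 0/1 label set per task; the exact weight formula in the AWMV paper may differ, and the names used here are hypothetical.

```python
import numpy as np

def awmv(label_sets):
    """Aggregate binary labels with bias-cancelling vote weights.

    label_sets: one 0/1 numpy array of noisy labels per task.
    """
    # Bias rate: global frequency of positive labels across all tasks.
    p_pos = np.concatenate(label_sets).mean()
    # The over-represented class gets the smaller weight, so a skewed
    # annotator pool does not automatically win every vote.
    w_pos, w_neg = 1.0 - p_pos, p_pos
    aggregated = []
    for labels in label_sets:
        pos_score = w_pos * labels.sum()
        neg_score = w_neg * (labels == 0).sum()
        aggregated.append(int(pos_score > neg_score))
    return aggregated
```

For example, if 70% of all collected labels are positive, a task needs a clearly positive-leaning label set, not just a bare 2-of-3 majority, to be aggregated as positive.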