2015
DOI: 10.1016/j.neucom.2014.10.082

Modeling annotator behaviors for crowd labeling

Cited by 28 publications (24 citation statements)
References 21 publications (25 reference statements)
“…Peng, Liu, Ihler, and Berger (2013) propose a domain-specific approach to the protein folding annotation problem by maximizing the log-likelihood of an exponential family mixture model of annotation similarities. Kara, Genc, Aran, and Akarun (2015) deal with the effects of diverse annotator behaviors on consensus estimation for continuous crowd-labeling problems. They also propose a scoring mechanism to determine annotator competence.…”
Section: Active Crowd-Labeling for Categorical Annotation Problems (mentioning)
confidence: 99%
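
The statement above refers to maximizing the log-likelihood of an exponential family mixture model over annotation similarities. As a rough illustration only, and not the actual model of Peng et al. (2013), the sketch below fits a two-component Gaussian mixture (one kind of exponential-family mixture) to pairwise annotation-similarity scores with EM; the function name, component count, and data layout are all assumptions.

```python
# Illustrative sketch only: the exact model in Peng et al. (2013) is not given in
# this excerpt. This fits a two-component 1-D Gaussian mixture to pairwise
# annotation-similarity scores by maximizing the log-likelihood with EM.
import numpy as np

def fit_similarity_mixture(sims, n_iter=100):
    """EM for a 2-component Gaussian mixture over similarity scores `sims`."""
    sims = np.asarray(sims, dtype=float)
    # Crude initialization: place the two means at the quartiles.
    mu = np.array([np.percentile(sims, 25), np.percentile(sims, 75)])
    var = np.full(2, sims.var() + 1e-6)
    pi = np.array([0.5, 0.5])
    for _ in range(n_iter):
        # E-step: responsibility of each component for each similarity score.
        dens = (pi / np.sqrt(2 * np.pi * var)) * \
               np.exp(-(sims[:, None] - mu) ** 2 / (2 * var))
        resp = dens / dens.sum(axis=1, keepdims=True)
        # M-step: update mixture weights, means, and variances.
        nk = resp.sum(axis=0)
        pi = nk / len(sims)
        mu = (resp * sims[:, None]).sum(axis=0) / nk
        var = (resp * (sims[:, None] - mu) ** 2).sum(axis=0) / nk + 1e-6
    log_lik = np.log(dens.sum(axis=1)).sum()
    return mu, var, pi, log_lik

# Example: similarity scores between pairs of crowd annotations (synthetic data).
rng = np.random.default_rng(0)
sims = np.concatenate([rng.normal(0.3, 0.10, 200), rng.normal(0.8, 0.05, 300)])
print(fit_similarity_mixture(sims))
```
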
“…For crowd consensus estimation, we employ the Consensus Bias Sensitive Model (M-CBS) of Kara et al. (2015). The model assumes that a sample i has a single true rate (x_i) and an annotator produces an annotation (y_k) as a function of x_i and their internal decision parameters.…”
Section: Crowd Consensus Estimation (mentioning)
confidence: 99%
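
The quoted description gives only the high-level form of M-CBS: one true rate x_i per sample, with each annotation y_k produced as a function of x_i and per-annotator decision parameters. The sketch below is not the M-CBS formulation from Kara et al. (2015); it assumes a simple linear scale-and-bias annotator model, y_ik ≈ a_k · x_i + b_k + noise, fit by alternating least squares, only to make the "true rate plus annotator parameters" idea concrete. The function name, variable names, and estimation procedure are illustrative assumptions.

```python
# A minimal sketch, NOT the actual M-CBS equations (those are in Kara et al., 2015).
# Assumed annotator model: y[i, k] = a_k * x_i + b_k + noise. We alternate between
# estimating the consensus x and each annotator's scale/bias (a_k, b_k).
import numpy as np

def estimate_consensus(Y, n_iter=50):
    """Y: (n_samples, n_annotators) matrix of continuous annotations (NaN = missing)."""
    n, m = Y.shape
    mask = ~np.isnan(Y)
    x = np.nanmean(Y, axis=1)          # initialize consensus with the plain mean
    a = np.ones(m)
    b = np.zeros(m)
    for _ in range(n_iter):
        # Update each annotator's scale and bias by least squares on their annotations.
        for k in range(m):
            obs = mask[:, k]
            A = np.column_stack([x[obs], np.ones(obs.sum())])
            a[k], b[k] = np.linalg.lstsq(A, Y[obs, k], rcond=None)[0]
            if abs(a[k]) < 1e-3:       # guard against degenerate scales
                a[k] = 1e-3
        # Update the consensus: invert each annotator's model and average per sample.
        for i in range(n):
            ks = np.where(mask[i])[0]
            x[i] = np.mean((Y[i, ks] - b[ks]) / a[ks])
    return x, a, b

# Example: 5 annotators rating 100 samples, one with a consistent positive bias.
rng = np.random.default_rng(1)
true_x = rng.uniform(0, 10, 100)
Y = true_x[:, None] + rng.normal(0, 0.5, (100, 5))
Y[:, 3] += 2.0
x_hat, a_hat, b_hat = estimate_consensus(Y)
```
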
“…There are many reasons for the aforementioned behavior, including the annotators' level of expertise, low attention or low concentration while performing the task, and, in some cases, outright bad intent. Annotators with bad intent may be spammers, may be dishonest, or may try to manipulate the system by answering in an unrelated or nonsensical way [12]. In a study of crowdsourcing annotators' consistency, Theodosiou et.…”
Section: Related Work (mentioning)
confidence: 99%