2014 IEEE Fifth International Conference on Communications and Electronics (ICCE)
DOI: 10.1109/cce.2014.6916756
Predicting result quality in Crowdsourcing using application layer monitoring

Cited by 24 publications (10 citation statements)
References 6 publications
“…Approaches that predict worker quality from their behavior have been proposed [5], [6], [7], [15]. They do not require the creation of a gold task and do not assume redundancy; hence, they make it possible to reduce costs.…”
Section: Related Work
confidence: 99%
“…They gathered feature data such as the number of clicks, keyboard operations, and processing time from the workers. Hirth et al [6], on the other hand, used the following features: the duration for which the worker reads the target text, and the answer time, inferred from page scrolling or the interval between radio-button clicks. Moreover, they predicted the quality of the workers by using a machine learning algorithm that took the workers' behaviors as input.…”
Section: Related Work
confidence: 99%
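The feature-based approach described in the excerpt above can be sketched in a few lines. This is a minimal illustration, not the cited authors' implementation: the feature names are hypothetical stand-ins for the behavioral signals mentioned (clicks, keyboard operations, reading time, answer time), and a simple nearest-centroid rule stands in for whatever machine-learning algorithm the papers actually use.

```python
# Hedged sketch: predicting worker quality from behavioral features.
# Feature names are hypothetical; the cited works gather signals such as
# click counts, keyboard events, reading time, and answer time.

from dataclasses import dataclass
from statistics import mean


@dataclass
class WorkerLog:
    clicks: int             # number of clicks during the task
    key_events: int         # keyboard operations
    reading_time_s: float   # time spent reading the target text
    answer_time_s: float    # time until the answer is fixed


def features(log: WorkerLog) -> list[float]:
    """Flatten a behavior log into a feature vector."""
    return [log.clicks, log.key_events, log.reading_time_s, log.answer_time_s]


def train_centroids(logs, labels):
    """Nearest-centroid stand-in for the ML classifier used in the papers.

    labels: 1 = reliable worker, 0 = unreliable worker.
    Returns the mean feature vector of each class.
    """
    good = [features(l) for l, y in zip(logs, labels) if y == 1]
    bad = [features(l) for l, y in zip(logs, labels) if y == 0]
    centroid = lambda rows: [mean(col) for col in zip(*rows)]
    return centroid(good), centroid(bad)


def predict(log: WorkerLog, good_c, bad_c) -> int:
    """Classify a new worker by squared distance to the nearer centroid."""
    dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b))
    f = features(log)
    return 1 if dist(f, good_c) <= dist(f, bad_c) else 0
```

In practice the cited approaches replace the centroid rule with a trained classifier, but the pipeline shape — behavior log, feature extraction, supervised prediction of result quality — is the same.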
“…Nevertheless, the heterogeneous environment and unsupervised nature of such QoE studies can introduce pitfalls, which have to be avoided by proper study design and the filtering of unreliable results [77]. Therefore, in the field of crowdsourced QoE studies, research deals with motivation and incentives of participants (e.g., [59,152]), methods for screening the reliability of participants (e.g., [54]), mechanisms for asserting a high quality of results (e.g., [32,36,153]), and the development of crowdsourcing frameworks and platforms (e.g., [154]). A comprehensive report of best practices and lessons learned for crowdsourced QoE studies can be found in [77].…”
Section: Impact of Adaptation on the QoE of HAS
confidence: 99%
“…A defensive task design, i.e., making it easier to complete the task in a meaningful manner than to find a means to cheat, can be applied to measurements in which no user interaction is required. If user interaction is required, e.g., the worker must access certain web pages or videos, such interactions can be monitored [52][53][54] or additional validation questions [48] about the content of the visited pages or videos can be added to verify correct task completion by the worker. …”
Section: Best Practices
confidence: 99%