Proceedings of the 2017 ACM Conference on Computer Supported Cooperative Work and Social Computing
DOI: 10.1145/2998181.2998234

Crowd Guilds

Abstract: Crowd workers are distributed and decentralized. While decentralization is designed to utilize independent judgment to promote high-quality results, it paradoxically undercuts behaviors and institutions that are critical to high-quality work. Reputation is one central example: crowdsourcing systems depend on reputation scores from decentralized workers and requesters, but these scores are notoriously inflated and uninformative. In this paper, we draw inspiration from historical worker guilds (e.g., in the silk…

Cited by 38 publications (7 citation statements) · References 60 publications · Citing publications: 2017–2024

Citation Statements
“…These tools were empirically tested and validated, more so than many other conceptual frameworks. The first tool consists of the criteria of [41].…”
Section: MOOC Evaluation Criteria and Tools (mentioning)
confidence: 99%
“…Interactivity and collaboration in a MOOC are identified as students' success factors in many research works [13,41]. In order to evaluate MOOC platform designs, we are required to select a number of MOOC platforms and an evaluation criterion.…”
Section: Evaluation of MOOC Platform Designs (mentioning)
confidence: 99%
“…Competitive self-organization, on which the SOPs approach is based, is in line with the main idea of the aforementioned paper, in that it also incentivizes workers to select others based on actual performance and compatibility, as this would help them win. As a second built-in quality mechanism, the SOPs framework uses peer assessment to identify winning stories; a powerful mechanism to enable collective feedback and emerging selectivity (Whiting et al., 2017). Competitive self-organization and peer assessment complement the importance of agency within the SOPs framework.…”
Section: Results (mentioning)
confidence: 99%
“…From a task point of view, the story peer assessment at this stage allows a collective decision to emerge regarding the outcome of the task; that is, users collectively have full control over the task result. Peer review is also a proven way of incorporating quality assurance during the task (Whiting et al., 2017). Alternative ways of evaluating the team result after each collaboration round can be envisioned, and they are relatively straightforward to incorporate without affecting the core of the proposed system.…”
Section: Methods (mentioning)
confidence: 99%
“…In our case, we can expose the controversy or disagreement about a causal relation at the time of the experiment by looking at the networks already created by others. Another potential solution is enabling peer review [77], allowing crowdworkers to provide feedback to each other. Other potential solutions include automatically extracting digestible scientific documents related to the relevant causal attributes.…”
Section: Reflecting on the User Studies (mentioning)
confidence: 99%