2013
DOI: 10.4018/ijswis.2013070102

Crowdsourced Knowledge Acquisition

Abstract: Novel social media collaboration platforms, such as games with a purpose and mechanised labour marketplaces, are increasingly used for enlisting large populations of non-experts in crowdsourced knowledge acquisition processes. Climate Quiz uses this paradigm for acquiring environmental domain knowledge from non-experts. The game's usage statistics and the quality of the produced data show that Climate Quiz has managed to attract a large number of players but noisy input data and task complexity led to low play…

Cited by 15 publications (3 citation statements)
References 44 publications
“…A similar debate has been going on in the context of GWAPs, as designers are very restricted in assigning questions to difficulty levels without preprocessing them [43]. One option would be to try out a multi-step workflow (such as the hybrid workflow proposed by [39]) in which entity types that are empirically straightforward to annotate are solved by 'regular' workers, while miscellaneous and other problematic cases are only flagged and treated differently - be that by more experienced annotators, via a higher number of judgements [45], or otherwise.…”
Section: Discussion (mentioning)
confidence: 99%
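The routing idea quoted above can be illustrated with a minimal sketch. This is not the implementation from [39] or [45]; the entity types, pool names, and judgement counts below are assumptions chosen purely for illustration.

```python
from dataclasses import dataclass

# Hypothetical entity types assumed to be empirically easy to annotate;
# anything else is treated as a miscellaneous / problematic case.
EASY_TYPES = {"person", "location", "organisation"}

@dataclass
class Task:
    text: str
    entity_type: str

def route(task: Task) -> str:
    """Send straightforward cases to regular workers, flag the rest."""
    return "regular_workers" if task.entity_type in EASY_TYPES else "flagged"

def required_judgements(task: Task, base: int = 3, escalated: int = 7) -> int:
    """Flagged tasks are escalated, e.g. via a higher number of judgements."""
    return base if route(task) == "regular_workers" else escalated

# Example: one straightforward case, one problematic case
print(route(Task("The Amazon rainforest ...", "location")))       # regular_workers
print(required_judgements(Task("...", "miscellaneous")))           # 7
```

In practice the escalated branch could instead dispatch the task to a pool of more experienced annotators; the point of the sketch is only that routing happens after, not before, the easy cases are identified.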
“…In the area of using Human Computation for Semantic Web research (HC4SW), there are a few trending topics both in the overall paper corpus we collected and in the special issue papers. For example, research on workflow design has considered workflows that combine different HC genres [1,39] as well as hybrid human-machine workflows [9,14]. The latter type of workflow dovetails with recent efforts to construct Human-in-the-Loop systems and still raises several open research issues as discussed in [9].…”
Section: Open Challenges and Future Work (mentioning)
confidence: 99%
“…A way to handle this and collect cleaner data is to break the tasks down into smaller components (Sabou et al. 2013). To make each task easier and faster, we will test a pipeline approach that will break the all-in-one task shown in Figs 2, 3, 4 into separate tasks performed by different Workers.…”
Section: Training (mentioning)
confidence: 99%
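As a rough illustration of this pipeline idea, the sketch below splits one all-in-one annotation task into three smaller steps, each meant to be completed by a different worker pool. The function names and the placeholder heuristics standing in for human judgements are assumptions for illustration, not the tasks used in the cited work.

```python
def find_candidate_terms(sentence: str) -> list[str]:
    """Step 1 (worker pool A): mark candidate domain terms in the sentence."""
    return [w.strip(".,") for w in sentence.split() if w[0].isupper()]

def verify_terms(candidates: list[str]) -> list[str]:
    """Step 2 (worker pool B): keep only terms confirmed as relevant."""
    return [c for c in candidates if len(c) > 3]   # placeholder for the human check

def relate_terms(terms: list[str]) -> list[tuple[str, str]]:
    """Step 3 (worker pool C): link consecutive verified terms into candidate relations."""
    return list(zip(terms, terms[1:]))

def pipeline(sentence: str) -> list[tuple[str, str]]:
    """Chain the simpler tasks instead of asking one worker to do everything at once."""
    return relate_terms(verify_terms(find_candidate_terms(sentence)))

print(pipeline("Deforestation increases Carbon Dioxide emissions in the Amazon Basin."))
```

In a real deployment each function would correspond to a separate micro-task shown to a different Worker, with the previous step's output embedded in the next task's description.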