2012
DOI: 10.1007/s13222-012-0092-8
Information Extraction Meets Crowdsourcing: A Promising Couple

Cited by 24 publications (12 citation statements)
References 17 publications
“…As our scenario is concerned only with factual data that can be looked up on the Web without requiring expert knowledge (e.g., product specifications, telephone numbers, addresses, etc.), effective and simple quality-control techniques such as majority voting or gold sampling, where questions with known answers are injected into the HIT [13], can be applied. Previous studies on crowdsourcing have shown that, within certain bounds, missing values in database tuples can be elicited with reliable efficiency and quality as long as the information is generally available.…”
Section: Crowdsourcing
confidence: 99%
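The quality-control techniques described above, majority voting over redundant worker answers combined with filtering workers by their accuracy on injected gold questions, can be sketched as follows. This is a minimal illustration, not an implementation from the cited paper; the data layout and the 0.7 accuracy threshold are assumptions chosen for the example.

```python
from collections import Counter

def majority_vote(answers):
    """Return the most frequent answer among redundant worker responses."""
    return Counter(answers).most_common(1)[0][0]

def filter_by_gold(worker_answers, gold, min_accuracy=0.7):
    """Keep only workers whose accuracy on the injected gold questions
    (questions with known answers) meets the threshold."""
    trusted = {}
    for worker, answers in worker_answers.items():
        graded = [(q, a) for q, a in answers.items() if q in gold]
        if not graded:
            continue  # worker saw no gold questions; cannot assess
        acc = sum(1 for q, a in graded if a == gold[q]) / len(graded)
        if acc >= min_accuracy:
            trusted[worker] = answers
    return trusted

def aggregate(worker_answers, gold, min_accuracy=0.7):
    """Majority vote per non-gold question, over trusted workers only."""
    trusted = filter_by_gold(worker_answers, gold, min_accuracy)
    questions = {q for ans in trusted.values() for q in ans if q not in gold}
    return {q: majority_vote([ans[q] for ans in trusted.values() if q in ans])
            for q in questions}

# Hypothetical HIT results: "g1" is a gold question with known answer "Drama".
workers = {
    "w1": {"g1": "Drama", "q1": "Comedy"},
    "w2": {"g1": "Drama", "q1": "Comedy"},
    "w3": {"g1": "Action", "q1": "Horror"},  # fails the gold check
}
gold = {"g1": "Drama"}
print(aggregate(workers, gold))  # → {'q1': 'Comedy'}
```

Worker w3 answers the gold question incorrectly and is dropped before aggregation, so the majority vote on q1 is taken over w1 and w2 only.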
“…Previous studies on crowdsourcing have shown that within certain bounds, missing values in database tuples can be elicited with reliable efficiency and quality as long as the information is generally available. For example, [13] reports that crowdsourced manual look-ups of movie genres in IMDB.com are correct in ∼ 95% of all cases with costs of $0.03 per tuple (including quality assurance). Therefore, while quality issues are a severe concern for crowdsourcing in general, in this paper we simply assume that established quality control techniques are sufficient.…”
Section: Crowdsourcing
confidence: 99%
“…Previous studies on crowdsourcing have shown, under certain constraints, that missing values in database tuples can be elicited with surprising efficiency and quality as long as the information is generally available [5]. This holds especially for factual data that can be looked up on the Web without requiring expert knowledge (e.g., product specifications, telephone numbers, addresses, etc.…”
Section: Crowd-Enabled DBs and Missing Data
confidence: 99%
“…), the expected data quality is quite high with only a moderate amount of quality assurance (e.g., majority votes). For example, [5] reports that crowd-sourced manual look-ups of movie genres in IMDB.com are correct in ~95% of all cases with costs of $0.03 per tuple (including quality assurance).…”
Section: Crowd-Enabled DBs and Missing Data
confidence: 99%
“…Crowdsourcing can often be found in knowledge-processing tasks such as data or media classification [8], in data-acquisition tasks such as data completion [6] or information extraction [16], and in providing training data for machine-learning-based approaches [20]. Furthermore, crowdsourcing has proven useful to the research community for performing large-scale user studies that evaluate new prototype implementations [11], or for conducting surveys with a large and diverse number of participants to investigate general human behavior or preferences [1].…”
Section: Introduction
confidence: 99%